What is the role of artificial intelligence in healthcare decision-making?


Artificial intelligence is already deciding crucial healthcare matters.

Doctors are already using unregulated artificial intelligence tools, such as virtual assistants that take notes and predictive software that helps diagnose and treat ailments.

 

The government has been slow to regulate this rapidly evolving technology. Agencies such as the Food and Drug Administration face staffing and funding challenges so vast that they are unlikely to catch up in the near future. As a result, AI deployment in healthcare is turning into a hazardous experiment in whether the private sector can change how medicine is practiced without government oversight.

 

“The cart is so far ahead of the horse, it’s like, how do we rein it back in without careening over the ravine?” said John Ayers, an associate professor at the University of California San Diego.

 

Unlike medical devices or pharmaceuticals, AI software keeps changing. Instead of issuing a one-time approval, the FDA wants to monitor artificial intelligence software over an extended period, and to do so proactively.
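To make that kind of monitoring concrete, here is a minimal sketch in Python of one way ongoing performance tracking could look: recomputing a model's discrimination (AUROC) month by month and flagging months where it falls below an agreed floor. The Prediction record, the monthly grouping, and the 0.75 floor are illustrative assumptions, not anything the FDA has specified.

```python
# Hypothetical sketch: post-deployment monitoring of a clinical AI model.
# Assumes a log of (timestamp, risk score, observed outcome) records.
from dataclasses import dataclass
from datetime import datetime

from sklearn.metrics import roc_auc_score


@dataclass
class Prediction:
    timestamp: datetime
    risk_score: float  # model output in [0, 1]
    outcome: int       # 1 if the predicted condition occurred, else 0


def monthly_auroc(predictions: list[Prediction]) -> dict[str, float]:
    """Group predictions by calendar month and compute AUROC per month."""
    by_month: dict[str, list[Prediction]] = {}
    for p in predictions:
        by_month.setdefault(p.timestamp.strftime("%Y-%m"), []).append(p)

    scores: dict[str, float] = {}
    for month, preds in sorted(by_month.items()):
        labels = [p.outcome for p in preds]
        if len(set(labels)) < 2:
            continue  # AUROC is undefined without both classes present
        scores[month] = roc_auc_score(labels, [p.risk_score for p in preds])
    return scores


def months_below_floor(scores: dict[str, float], floor: float = 0.75) -> list[str]:
    """Return the months where performance fell below the agreed floor."""
    return [month for month, auc in scores.items() if auc < floor]
```

A regulator, or an assurance lab, could in principle require reports like this on a fixed schedule rather than relying on a single pre-market snapshot.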

 

President Joe Biden in October directed the agencies he oversees to take swift and coordinated action to guarantee AI safety and effectiveness. However, regulators such as the FDA need more resources to manage technology that is, in essence, constantly changing.

“We’d need another doubling of size, and last I looked, the taxpayer is not very interested in doing that,” FDA Commissioner Robert Califf said at an event in January, a point he repeated at a recent agency meeting.

 

Califf has spoken openly about the challenges facing the FDA. Assessing AI that is constantly learning, and that can perform differently from one location to another, is a daunting task outside his agency’s current model. When the FDA approves medicines and medical devices, it does not have to monitor how they evolve afterward.

 


 

The issue for the FDA goes beyond adjusting its regulatory strategy or hiring more personnel. In a new report, the Government Accountability Office, the watchdog arm of Congress, said the agency needs more power to demand AI performance data, and to define guidelines for AI more precisely than its conventional approach to risk assessment for medicines and medical devices allows.

Given that Congress has only begun to examine the issue, let alone reach agreement on AI regulation, that could take quite a long time.

Congress is typically reluctant to expand the FDA’s authority, and so far the agency has yet to formally ask for it.

 

The FDA has issued guidance to medical device manufacturers on safely incorporating artificial intelligence. That guidance has drawn pushback from tech companies claiming the agency has gone too far, even though the advice is not legally binding.

At the same time, some AI experts in industry and academia say the FDA does not do enough with the powers it already has.

Authority’s scope

Innovations in AI have opened considerable gaps in what the FDA regulates. The agency cannot examine programs like chatbots, for instance, and has no control over systems that summarize doctors’ notes or perform other crucial administrative tasks.

 

The FDA regulates first-generation AI tools the same way it regulates medical devices. Fourteen months ago, Congress gave the agency the power to let device makers, a few of whose products include earlier forms of AI, roll out pre-planned updates without applying for new approval each time.

 

The scope of the FDA’s power over AI needs to be clarified.

A group of companies has petitioned the FDA, claiming the agency overstepped its authority by issuing 2022 guidance stating that manufacturers of artificial intelligence devices providing time-sensitive recommendations and diagnoses need FDA approval. While the guidance is not legally binding, businesses generally believe they must follow it.

The Healthcare Information and Management Systems Society, a trade organization representing health technology companies, has also pointed to a lack of clarity about the extent of FDA authority and about how responsibility for AI regulation is divided between the FDA and other parts of the Department of Health and Human Services, including the Office of the National Coordinator for Health Information Technology. That office issued rules in December requiring greater transparency around AI technology.

 

“From the industry perspective, without having some sort of clarity from HHS, it gets into this area where folks don’t know directly who to go to,” said Colin Rom, a former senior adviser to then-FDA Commissioner Stephen Hahn who now leads health policy at the venture capital firm Andreessen Horowitz.

The FDA has informed the GAO that, in order to monitor the effectiveness of algorithms over time, it needs new authority from Congress to collect performance data.

 

The agency also said it is seeking new powers to develop safeguards specific to each algorithm, instead of relying on existing medical device classifications to establish those controls.

The FDA intends to bring those requests to Congress.

 

Oversight outsourcing

Doing so, however, depends on a gridlocked Capitol Hill.

That is why Califf and others in the industry have floated a new concept: public-private assurance laboratories, likely housed at large universities or academic health centers, that could test and validate the use of artificial intelligence in healthcare.

 

“We’ve got to have a community of entities that do the assessments in a way that gives the certification of the algorithms actually doing good and not harm,” Califf said at the Consumer Electronics Show earlier this month.

The idea has also received some backing in Congress. Sen. John Hickenlooper (D-Colo.) has proposed having independent third parties review sophisticated artificial intelligence. He is focused on generative AI, the kind behind ChatGPT that mimics human intelligence, but the oversight framework he envisions closely resembles the one Califf has proposed.

Some AI experts, however, have pointed out a flaw in the approach: an AI tool tested at a huge university medical center may not work the same way in a small rural hospital.

 

“You know as a practicing physician that different environments are different,” Mark Sendak, a population health and data science lead at the Duke Institute for Health Innovation, told senators at a Finance Committee hearing on artificial intelligence in healthcare this month. “Every health care organization needs to be able to govern AI locally.”
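As a rough sketch of the concern Sendak describes, the snippet below evaluates the same alert model separately at each deployment site, since a sensitivity or precision figure earned at one hospital says little about another. The record fields, the 0.5 alert threshold, and the flat list of records are illustrative assumptions.

```python
# Hypothetical sketch: site-stratified evaluation of one alert model.
from collections import defaultdict


def alert_stats_by_site(records: list[dict], threshold: float = 0.5) -> dict:
    """records look like {"site": str, "risk_score": float, "outcome": 0 or 1}.
    Returns per-site sensitivity and positive predictive value (PPV)."""
    by_site: dict[str, list[dict]] = defaultdict(list)
    for r in records:
        by_site[r["site"]].append(r)

    results = {}
    for site, rows in by_site.items():
        alerts = [r for r in rows if r["risk_score"] >= threshold]
        tp = sum(r["outcome"] for r in alerts)        # true alerts
        fp = len(alerts) - tp                         # false alarms
        fn = sum(r["outcome"] for r in rows) - tp     # missed cases
        results[site] = {
            "sensitivity": tp / (tp + fn) if tp + fn else None,
            "ppv": tp / (tp + fp) if tp + fp else None,
        }
    return results
```

If two sites produce very different numbers for the same model, that is precisely the local-governance problem raised at the hearing.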

 

In January, Micky Tripathi, the national coordinator for health information technology, and Troy Tazbaz, the FDA’s director of digital health, wrote in the Journal of the American Medical Association that assurance labs will need to take this issue into account.

 

The article, co-authored by researchers from Stanford Medicine, Johns Hopkins University and the Mayo Clinic, calls for creating a few pilot labs to set standards for the design of validation systems.

 

However, the collaboration between regulators, major universities, and health providers has yet to reassure smaller players, who worry about conflicts of interest if the pilot laboratories are run by organizations that are also building their own AI systems or working with tech companies.

 

Ayers believes that AI validation should stay within the FDA, and that whoever performs the oversight, companies that develop AI systems should, at a minimum, demonstrate that their tools improve patient outcomes.
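A minimal sketch of what “demonstrate improved outcomes” could mean in practice is below: a two-proportion comparison of adverse-outcome rates with and without the tool. The counts are invented for illustration, and a real demonstration would require a prospective, ideally randomized, study rather than this toy calculation.

```python
# Hypothetical sketch: did patients fare better when the AI tool was on?
from statsmodels.stats.proportion import proportions_ztest

# Invented counts for illustration only: adverse outcomes and total
# patients during tool-on versus tool-off periods.
adverse_outcomes = [42, 61]    # [tool on, tool off]
total_patients = [1000, 1000]

# One-sided test: is the adverse-outcome rate lower with the tool on?
z_stat, p_value = proportions_ztest(
    count=adverse_outcomes, nobs=total_patients, alternative="smaller"
)
print(f"z = {z_stat:.2f}, one-sided p = {p_value:.4f}")
```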

He cited an AI system created by the electronic health records company Epic that failed to recognize sepsis, a deadly reaction to infection, a failure that was not reported to regulators. Epic has since revised its algorithm, and an FDA official said the agency does not divulge its communications with specific companies.

 

The incident has left many people in the health and technology fields convinced that the agency’s oversight needs fixing.

“They should be out there policing this stuff,” Ayers said.

 
