NIH Findings Shed Light on Risks and Benefits of Integrating AI into Medical Decision-Making

Researchers at the National Institutes of Health (NIH) found that an artificial intelligence (AI) model answered medical quiz questions, designed to test health professionals' ability to diagnose patients based on clinical images and a brief text summary, with high accuracy. However, physician graders found that the AI model made mistakes when describing images and when explaining how its decision-making led to the correct answer. The findings, which shed light on AI's potential in the clinical setting, were published in npj Digital Medicine. The study was led by researchers from NIH's National Library of Medicine (NLM) and Weill Cornell Medicine, New York City.

"Integration of AI into health care holds great promise as a tool to help medical professionals diagnose patients faster, allowing them to start treatment sooner," said NLM Acting Director, Stephen Sherry, Ph.D. "However, as this study shows, AI is not advanced enough yet to replace human experience, which is crucial for accurate diagnosis."

The AI model and human physicians answered questions from the New England Journal of Medicine (NEJM) Image Challenge. The challenge is an online quiz that provides real clinical images and a short text description that includes details about the patient's symptoms and presentation, then asks users to choose the correct diagnosis from multiple-choice answers.

The researchers tasked the AI model with answering 207 image challenge questions and providing a written rationale to justify each answer. The prompt specified that the rationale should include a description of the image, a summary of relevant medical knowledge, and step-by-step reasoning for how the model chose the answer.
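
The article does not include the study's code or exact prompt wording. As a rough illustration only, the sketch below shows how a multiple-choice image question with the rationale instructions described above might be submitted to a vision-capable GPT-4 model, assuming the OpenAI Python SDK. The model identifier, prompt text, image URL, and vignette are all hypothetical stand-ins, not details from the study.

```python
# Minimal sketch (not the study's actual code) of sending an image-based
# multiple-choice question plus rationale instructions to a vision-capable
# GPT-4 model via the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Hypothetical question text; the study's 207 NEJM Image Challenge items are not reproduced here.
question_text = (
    "Which of the following is the most likely diagnosis? "
    "A) ... B) ... C) ... D) ... E) ..."
)

response = client.chat.completions.create(
    model="gpt-4o",  # stand-in identifier; the study used GPT-4V
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "Answer the multiple-choice question. In your rationale, include "
                        "(1) a description of the image, (2) a summary of relevant medical "
                        "knowledge, and (3) step-by-step reasoning for the chosen answer.\n\n"
                        + question_text
                    ),
                },
                # Clinical image supplied alongside the text, as in the Image Challenge format.
                {"type": "image_url", "image_url": {"url": "https://example.org/case-image.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```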

The researchers recruited nine physicians from various institutions, each with a different medical specialty. The physicians first answered their assigned questions in a "closed-book" setting (without referring to any external materials, such as online resources) and then in an "open-book" setting (using external resources). The researchers then provided the physicians with the correct answer, along with the AI model's answer and corresponding rationale. Finally, the physicians were asked to score the AI model's ability to describe the image, summarize relevant medical knowledge, and provide its step-by-step reasoning.

The researchers found that the AI model and physicians scored highly in selecting the correct diagnosis. Interestingly, the AI model selected the correct diagnosis more often than physicians in closed-book settings, while physicians with open-book tools performed better than the AI model, especially when answering the questions ranked most difficult.

Importantly, based on physician evaluations, the AI model often made mistakes when describing the medical image and explaining its reasoning behind the diagnosis, even in cases where it made the correct final choice. In one example, the AI model was provided with a photo of a patient's arm with two lesions. A physician would easily recognize that both lesions were caused by the same condition. However, because the lesions were photographed at different angles, causing the illusion of different colors and shapes, the AI model failed to recognize that both lesions could be related to the same diagnosis.

The researchers argue that these findings underscore the importance of evaluating multimodal AI technology further before introducing it into the clinical setting.

"This technology has the potential to help clinicians augment their capabilities with data-driven insights that may lead to improved clinical decision-making," said NLM Senior Investigator and corresponding author of the study, Zhiyong Lu, Ph.D. "Understanding the risks and limitations of this technology is essential to harnessing its potential in medicine."

The study used an AI model known as GPT-4V (Generative Pre-trained Transformer 4 with Vision), a "multimodal" AI model that can process combinations of multiple types of data, including text and images. The researchers note that while this is a small study, it sheds light on multimodal AI's potential to aid physicians' medical decision-making. More research is needed to understand how such models compare to physicians' ability to diagnose patients.

Jin Q, Chen F, Zhou Y, Xu Z, Cheung JM, Chen R, Summers RM, Rousseau JF, Ni P, Landsman MJ, Baxter SL, Al'Aref SJ, Li Y, Chen A, Brejt JA, Chiang MF, Peng Y, Lu Z.
Hidden flaws behind expert-level accuracy of multimodal GPT-4 vision in medicine.
NPJ Digit Med. 2024 Jul 23;7(1):190. doi: 10.1038/s41746-024-01185-7
