Vision-Based ChatGPT Shows Deficits Interpreting Radiologic Images

Researchers evaluating the performance of ChatGPT-4 Vision found that the model performed well on text-based radiology exam questions but struggled to answer image-related questions accurately. The study's results were published today in Radiology, a journal of the Radiological Society of North America (RSNA).

ChatGPT-4 Vision is the first version of the large language model that can interpret both text and images.

"ChatGPT-4 has shown promise for assisting radiologists in tasks such as simplifying patient-facing radiology reports and identifying the appropriate protocol for imaging exams," said Chad Klochko, M.D., musculoskeletal radiologist and artificial intelligence (AI) researcher at Henry Ford Health in Detroit, Michigan. "With image processing capabilities, GPT-4 Vision allows for new potential applications in radiology."

For the study, Dr. Klochko’s research team used retired questions from the American College of Radiology’s Diagnostic Radiology In-Training Examinations, a series of tests used to benchmark the progress of radiology residents. After excluding duplicates, the researchers used 377 questions across 13 domains, including 195 questions that were text-only and 182 that contained an image.

GPT-4 Vision answered 246 of the 377 questions correctly, achieving an overall score of 65.3%. The model correctly answered 81.5% (159) of the 195 text-only queries and 47.8% (87) of the 182 questions with images.

"The 81.5% accuracy for text-only questions mirrors the performance of the model’s predecessor," he said. "This consistency on text-based questions may suggest that the model has a degree of textual understanding in radiology."

Genitourinary radiology was the only subspecialty for which GPT-4 Vision performed better on questions with images (67%, or 10 of 15) than text-only questions (57%, or 4 of 7). The model performed better on text-only questions in all other subspecialties.

The model performed best on image-based questions in the chest and genitourinary subspecialties, correctly answering 69% and 67% of the image-containing questions, respectively. Its lowest performance on image-containing questions was in the nuclear medicine domain, where it correctly answered only 2 of 10 questions.
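
For readers who want to reproduce this kind of breakdown, the sketch below shows one way accuracy could be tallied overall, by question type, and by domain. The data structure and the sample records are purely illustrative and are not taken from the study.

```python
# Illustrative only: tally accuracy overall, by question type, and by domain.
# The records below are made-up placeholders, not the study's data.
from collections import defaultdict

# Each record: (domain, has_image, answered_correctly)
results = [
    ("chest", True, True),
    ("genitourinary", False, False),
    ("nuclear medicine", True, False),
    # ... one entry per exam question
]

def accuracy(records):
    """Percent of records answered correctly, or None if the list is empty."""
    if not records:
        return None
    return 100.0 * sum(correct for _, _, correct in records) / len(records)

overall = accuracy(results)
text_only = accuracy([r for r in results if not r[1]])
image_based = accuracy([r for r in results if r[1]])
print(f"overall {overall:.1f}% | text-only {text_only:.1f}% | image {image_based:.1f}%")

by_domain = defaultdict(list)
for record in results:
    by_domain[record[0]].append(record)
for domain, records in sorted(by_domain.items()):
    print(f"{domain}: {accuracy(records):.1f}%")
```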

The study also evaluated the impact of various prompts on the performance of GPT-4 Vision. The five prompt styles are listed below, followed by a sketch of how such a prompt might be submitted to a vision-capable model.

  • Original: You are taking a radiology board exam. Images of the questions will be uploaded. Choose the correct answer for each question.
  • Basic: Choose the single best answer in the following retired radiology board exam question.
  • Short instruction: This is a retired radiology board exam question to gauge your medical knowledge. Choose the single best answer letter and do not provide any reasoning for your answer.
  • Long instruction: You are a board-certified diagnostic radiologist taking an examination. Evaluate each question carefully and if the question additionally contains an image, please evaluate the image carefully in order to answer the question. Your response must include a single best answer choice. Failure to provide an answer choice will count as incorrect.
  • Chain of thought: You are taking a retired board exam for research purposes. Given the provided image, think step by step for the provided question.
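
The article does not describe the researchers' actual querying pipeline. As a minimal sketch under stated assumptions, the example below shows how one of these prompts, together with an exam image, could be submitted to a vision-capable model through the OpenAI Python SDK; the model name, helper function, and base64 image encoding are illustrative choices, not details from the study.

```python
# Hedged sketch: submit one exam question (optionally with an image) using a
# chosen system prompt. Model name and helper are illustrative assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LONG_INSTRUCTION = (
    "You are a board-certified diagnostic radiologist taking an examination. "
    "Evaluate each question carefully and if the question additionally contains "
    "an image, please evaluate the image carefully in order to answer the "
    "question. Your response must include a single best answer choice. "
    "Failure to provide an answer choice will count as incorrect."
)

def ask_question(question_text: str, image_path: str | None = None,
                 system_prompt: str = LONG_INSTRUCTION) -> str:
    """Send one question, with an optional image, and return the model's reply."""
    user_content = [{"type": "text", "text": question_text}]
    if image_path is not None:
        with open(image_path, "rb") as f:
            encoded = base64.b64encode(f.read()).decode("utf-8")
        user_content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{encoded}"},
        })
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model; not necessarily the study's
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},
        ],
    )
    return response.choices[0].message.content
```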

Given the basic prompt, the model correctly answered 183 of 265 questions, but it declined to answer 120 questions, most of which contained an image.

"The phenomenon of declining to answer questions was something we hadn’t seen in our initial exploration of the model," Dr. Klochko said.

The short instruction prompt yielded the lowest accuracy (62.6%).

On text-based questions, chain-of-thought prompting outperformed long instruction by 6.1%, basic by 6.8%, and original prompting style by 8.9%. There was no evidence to suggest performance differences between any two prompts on image-based questions.
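
The article does not state which statistical test underlies these pairwise comparisons. For paired correct/incorrect outcomes on the same set of questions, McNemar's test is one standard choice; the sketch below uses made-up data and is purely illustrative.

```python
# Hedged sketch: compare two prompts answered on the same questions with
# McNemar's test on paired correct/incorrect outcomes. Data are made up.
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical per-question outcomes (True = answered correctly) for two prompts.
prompt_a = [True, True, False, True, False, True, False, True]
prompt_b = [True, False, False, True, True, True, False, False]

# 2x2 table: rows = prompt A correct/incorrect, columns = prompt B correct/incorrect.
both = sum(a and b for a, b in zip(prompt_a, prompt_b))
a_only = sum(a and not b for a, b in zip(prompt_a, prompt_b))
b_only = sum(b and not a for a, b in zip(prompt_a, prompt_b))
neither = sum(not a and not b for a, b in zip(prompt_a, prompt_b))

result = mcnemar([[both, a_only], [b_only, neither]], exact=True)
print(f"McNemar p-value: {result.pvalue:.3f}")
```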

"Our study showed evidence of hallucinatory responses when interpreting image findings," Dr. Klochko said. "We noted an alarming tendency for the model to provide correct diagnoses based on incorrect image interpretations, which could have significant clinical implications."

Dr. Klochko said his study’s findings underscore the need for more specialized and rigorous evaluation methods to assess large language model performance in radiology tasks.

"Given the current challenges in accurately interpreting key radiologic images and the tendency for hallucinatory responses, the applicability of GPT-4 Vision in information-critical fields such as radiology is limited in its current state," he said.

Hayden N, Gilbert S, Poisson LM, Griffith B, Klochko C.
Performance of GPT-4 with Vision on Text- and Image-based ACR Diagnostic Radiology In-Training Examination Questions.
Radiology. 2024 Sep;312(3):e240153. doi: 10.1148/radiol.240153
