Vision-Based ChatGPT Shows Deficits Interpreting Radiologic Images

Researchers evaluating the performance of ChatGPT-4 Vision found that the model performed well on text-based radiology exam questions but struggled to answer image-related questions accurately. The study's results were published today in Radiology, a journal of the Radiological Society of North America (RSNA).

ChatGPT-4 Vision is the first version of the large language model that can interpret both text and images.

"ChatGPT-4 has shown promise for assisting radiologists in tasks such as simplifying patient-facing radiology reports and identifying the appropriate protocol for imaging exams," said Chad Klochko, M.D., musculoskeletal radiologist and artificial intelligence (AI) researcher at Henry Ford Health in Detroit, Michigan. "With image processing capabilities, GPT-4 Vision allows for new potential applications in radiology."

For the study, Dr. Klochko’s research team used retired questions from the American College of Radiology’s Diagnostic Radiology In-Training Examinations, a series of tests used to benchmark the progress of radiology residents. After excluding duplicates, the researchers used 377 questions across 13 domains: 195 text-only questions and 182 that contained an image.

GPT-4 Vision answered 246 of the 377 questions correctly, achieving an overall score of 65.3%. The model correctly answered 81.5% (159) of the 195 text-only queries and 47.8% (87) of the 182 questions with images.

"The 81.5% accuracy for text-only questions mirrors the performance of the model’s predecessor," he said. "This consistency on text-based questions may suggest that the model has a degree of textual understanding in radiology."

Genitourinary radiology was the only subspecialty for which GPT-4 Vision performed better on questions with images (67%, or 10 of 15) than text-only questions (57%, or 4 of 7). The model performed better on text-only questions in all other subspecialties.

The model performed best on image-based questions in the chest and genitourinary subspecialties, correctly answering 69% and 67% of the image-containing questions, respectively. It performed worst on image-containing questions in the nuclear medicine domain, correctly answering only 2 of 10 questions.

The study also evaluated the impact of various prompts on the performance of GPT-4 Vision; a sketch of how such a prompt comparison might be run programmatically appears after the list.

  • Original: You are taking a radiology board exam. Images of the questions will be uploaded. Choose the correct answer for each question.
  • Basic: Choose the single best answer in the following retired radiology board exam question.
  • Short instruction: This is a retired radiology board exam question to gauge your medical knowledge. Choose the single best answer letter and do not provide any reasoning for your answer.
  • Long instruction: You are a board-certified diagnostic radiologist taking an examination. Evaluate each question carefully and if the question additionally contains an image, please evaluate the image carefully in order to answer the question. Your response must include a single best answer choice. Failure to provide an answer choice will count as incorrect.
  • Chain of thought: You are taking a retired board exam for research purposes. Given the provided image, think step by step for the provided question.
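For readers curious how such a comparison could be run in practice, the following is a minimal sketch, assuming the OpenAI Python client and a vision-capable model as a stand-in for GPT-4 Vision; the prompt dictionary, the ask() helper, and the model name are illustrative assumptions, not the study's actual evaluation harness.

    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Prompt styles from the list above (remaining styles elided for brevity).
    PROMPTS = {
        "original": (
            "You are taking a radiology board exam. Images of the questions will "
            "be uploaded. Choose the correct answer for each question."
        ),
        "basic": (
            "Choose the single best answer in the following retired radiology "
            "board exam question."
        ),
    }

    def ask(prompt_style: str, question: str, image_path: str | None = None) -> str:
        """Submit one exam question under a given prompt style; return the reply."""
        user_content = [{"type": "text", "text": question}]
        if image_path is not None:
            with open(image_path, "rb") as f:
                b64 = base64.b64encode(f.read()).decode("ascii")
            user_content.append({
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{b64}"},
            })
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed stand-in; the paper evaluated GPT-4 with Vision
            messages=[
                {"role": "system", "content": PROMPTS[prompt_style]},
                {"role": "user", "content": user_content},
            ],
        )
        return response.choices[0].message.content

Looping ask() over the question set for each prompt style and comparing the returned answer letter against the answer key would reproduce the kind of per-prompt accuracy comparison the researchers describe.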

Although the model correctly answered 183 of 265 questions with a basic prompt, it declined to answer 120 questions, most of which contained an image.

"The phenomenon of declining to answer questions was something we hadn’t seen in our initial exploration of the model," Dr. Klochko said.

The short instruction prompt yielded the lowest accuracy (62.6%).

On text-based questions, chain-of-thought prompting outperformed the long instruction prompt by 6.1%, the basic prompt by 6.8%, and the original prompt by 8.9%. On image-based questions, there was no evidence of a performance difference between any two prompts.
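The article does not name the statistical test behind these pairwise comparisons, but paired accuracies on the same question set are commonly compared with McNemar's test, which considers only the questions on which the two prompts disagree. Below is a minimal sketch of that approach, assuming statsmodels is available; the correctness vectors are randomly generated placeholders, not data from the study.

    import numpy as np
    from statsmodels.stats.contingency_tables import mcnemar

    # Hypothetical per-question grading (1 = correct, 0 = incorrect) for two
    # prompt styles applied to the same 195 text-only questions (paired data).
    rng = np.random.default_rng(0)
    chain_of_thought = rng.integers(0, 2, size=195)
    long_instruction = rng.integers(0, 2, size=195)

    # 2x2 table: rows index chain-of-thought correct/incorrect, columns index
    # long-instruction correct/incorrect.
    table = np.array([
        [np.sum((chain_of_thought == 1) & (long_instruction == 1)),
         np.sum((chain_of_thought == 1) & (long_instruction == 0))],
        [np.sum((chain_of_thought == 0) & (long_instruction == 1)),
         np.sum((chain_of_thought == 0) & (long_instruction == 0))],
    ])

    # McNemar's exact test uses only the off-diagonal (discordant) counts.
    result = mcnemar(table, exact=True)
    print(f"p-value: {result.pvalue:.3f}")

A small p-value would indicate that one prompt's accuracy advantage is unlikely to be due to chance; on the image-based questions, the researchers found no such evidence for any prompt pair.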

"Our study showed evidence of hallucinatory responses when interpreting image findings," Dr. Klochko said. "We noted an alarming tendency for the model to provide correct diagnoses based on incorrect image interpretations, which could have significant clinical implications."

Dr. Klochko said his study’s findings underscore the need for more specialized and rigorous evaluation methods to assess large language model performance in radiology tasks.

"Given the current challenges in accurately interpreting key radiologic images and the tendency for hallucinatory responses, the applicability of GPT-4 Vision in information-critical fields such as radiology is limited in its current state," he said.

Hayden N, Gilbert S, Poisson LM, Griffith B, Klochko C.
Performance of GPT-4 with Vision on Text- and Image-based ACR Diagnostic Radiology In-Training Examination Questions.
Radiology. 2024 Sep;312(3):e240153. doi: 10.1148/radiol.240153
