Does AI Improve Doctors' Diagnoses?

With hospitals already deploying artificial intelligence to improve patient care, a new study has found that using ChatGPT Plus does not significantly improve the accuracy of doctors' diagnoses when compared with the use of usual resources.

The study, from UVA Health's Andrew S. Parsons, MD, MPH, and colleagues, enlisted 50 physicians in family medicine, internal medicine and emergency medicine to put ChatGPT Plus to the test. Half were randomly assigned to use ChatGPT Plus to diagnose complex cases, while the other half relied on conventional methods such as medical reference sites (for example, UpToDate) and Google. The researchers then compared the resulting diagnoses, finding that accuracy across the two groups was similar.

That said, ChatGPT Plus alone outperformed both groups, suggesting that it still holds promise for improving patient care. Physicians, however, will need more training and experience with the emerging technology to capitalize on its potential, the researchers conclude.

For now, ChatGPT remains best used to augment, rather than replace, human physicians, the researchers say.

"Our study shows that AI alone can be an effective and powerful tool for diagnosis," said Parsons, who oversees the teaching of clinical skills to medical students at the University of Virginia School of Medicine and co-leads the Clinical Reasoning Research Collaborative. "We were surprised to find that adding a human physician to the mix actually reduced diagnostic accuracy though improved efficiency. These results likely mean that we need formal training in how best to use AI."

Chatbots built on "large language models" that produce human-like responses are growing in popularity, and they have shown an impressive ability to take patient histories, communicate empathetically and even solve complex medical cases. But, for now, they still require the involvement of a human doctor.

Parsons and his colleagues were eager to determine how the high-tech tool can be used most effectively, so they launched a randomized, controlled trial at three leading-edge hospitals: UVA Health, Stanford and Harvard's Beth Israel Deaconess Medical Center.

The participating physicians made diagnoses for "clinical vignettes" based on real-life patient-care cases. These case studies included details about patients' histories, physical exams and lab test results. The researchers then scored the results and examined how quickly the two groups made their diagnoses.

The median diagnostic accuracy for the physicians using ChatGPT Plus was 76.3%, while the results for the physicians using conventional approaches were 73.7%. The ChatGPT group reached its diagnoses slightly more quickly overall: 519 seconds compared with 565 seconds.

The researchers were surprised at how well ChatGPT Plus alone performed, with a median diagnostic accuracy of more than 92%. They say this may reflect the prompts used in the study, suggesting that physicians likely will benefit from training on how to use prompts effectively. Alternatively, they say, healthcare organizations could purchase predefined prompts to implement in clinical workflows and documentation.

The researchers also caution that ChatGPT Plus likely would fare less well in real life, where many other aspects of clinical reasoning come into play, especially in determining the downstream effects of diagnoses and treatment decisions. They are urging additional studies to assess large language models' abilities in those areas and are conducting a similar study on management decision-making.

"As AI becomes more embedded in healthcare, it's essential to understand how we can leverage these tools to improve patient care and the physician experience," Parsons said. "This study suggests there is much work to be done in terms of optimizing our partnership with AI in the clinical environment."

Following up on this work, the four study sites have also launched a bi-coastal AI evaluation network called ARiSE (AI Research and Science Evaluation) to further evaluate generative AI outputs in healthcare. More information is available at the ARiSE website.

Goh E, Gallo R, Hom J, Strong E, Weng Y, Kerman H, Cool JA, Kanjee Z, Parsons AS, Ahuja N, Horvitz E, Yang D, Milstein A, Olson APJ, Rodman A, Chen JH. Large Language Model Influence on Diagnostic Reasoning: A Randomized Clinical Trial. JAMA Netw Open. 2024;7(10):e2440969. doi:10.1001/jamanetworkopen.2024.40969
