When It Comes to Emergency Care, ChatGPT Overprescribes

If ChatGPT were cut loose in the Emergency Department, it might suggest unneeded x-rays and antibiotics for some patients and admit others who didn't require hospital treatment, a new study from UC San Francisco has found.

The researchers said that, while the model could be prompted in ways that make its responses more accurate, it's still no match for the clinical judgment of a human doctor.

"This is a valuable message to clinicians not to blindly trust these models," said postdoctoral scholar Chris Williams, MB BChir, lead author of the study, which appears Oct. 8 in Nature Communications. "ChatGPT can answer medical exam questions and help draft clinical notes, but it’s not currently designed for situations that call for multiple considerations, like the situations in an emergency department."

Recently, Williams showed that ChatGPT, a large language model (LLM) being explored for clinical applications of AI, was slightly better than humans at determining which of two emergency patients was more acutely unwell, a straightforward head-to-head choice between patient A and patient B.

With the current study, Williams challenged the AI model to perform a more complex task: providing the recommendations a physician makes after initially examining a patient in the ED. This includes deciding whether to admit the patient, get x-rays or other scans, or prescribe antibiotics.

For each of the three decisions, the team compiled a set of 1,000 ED visits to analyze, drawn from an archive of more than 251,000 visits. Each set preserved the same ratio of “yes” to “no” decisions on admission, radiology and antibiotics that is seen across UCSF Health’s Emergency Department.
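A minimal sketch of how such a prevalence-matched evaluation set could be drawn is shown below. The DataFrame, column names (`admitted`, `radiology`, `antibiotics`) and helper function are hypothetical illustrations, not the study's actual pipeline.

```python
import pandas as pd

def sample_matched_set(visits: pd.DataFrame, decision: str,
                       n: int = 1000, seed: int = 0) -> pd.DataFrame:
    """Draw n visits whose yes/no ratio for `decision` matches the archive.

    `visits` is a hypothetical DataFrame of ED visits with one boolean
    column per decision (e.g. 'admitted', 'radiology', 'antibiotics').
    """
    prevalence = visits[decision].mean()        # archive-wide "yes" rate
    n_yes = round(n * prevalence)
    yes = visits[visits[decision]].sample(n_yes, random_state=seed)
    no = visits[~visits[decision]].sample(n - n_yes, random_state=seed)
    return pd.concat([yes, no]).sample(frac=1, random_state=seed)  # shuffle

# Usage (assuming `archive` is the full DataFrame of ~251,000 visits):
# sets = {d: sample_matched_set(archive, d)
#         for d in ("admitted", "radiology", "antibiotics")}
```

Matching the archive's base rates this way keeps a model's raw accuracy comparable to what clinicians face in practice, rather than on an artificially balanced sample.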

Using UCSF’s secure generative AI platform, which has broad privacy protections, the researchers entered doctors’ notes on each patient’s symptoms and examination findings into ChatGPT-3.5 and ChatGPT-4. Then, they tested the accuracy of each model’s recommendations on each set using a series of increasingly detailed prompts.
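For illustration only, here is a sketch of the kind of prompting loop the paragraph describes, written against the public OpenAI Python SDK. The study itself ran inside UCSF's secure platform, and the prompt wording and function names below are invented, not the researchers' actual prompts.

```python
from openai import OpenAI  # public SDK; the study used UCSF's secure platform

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Increasingly detailed prompt templates (wording invented for illustration)
PROMPTS = [
    "Should this ED patient be admitted? Answer yes or no.\n\n{note}",
    "You are an emergency physician. Based only on the note below, should "
    "this patient be admitted? Answer with a single word, 'yes' or 'no'."
    "\n\n{note}",
]

def ask(note: str, template: str, model: str = "gpt-4") -> bool:
    """Return True if the model answers 'yes' for the given clinical note."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": template.format(note=note)}],
        temperature=0,  # deterministic-ish answers for evaluation
    )
    return resp.choices[0].message.content.strip().lower().startswith("yes")

def accuracy(notes, labels, template, model: str = "gpt-4") -> float:
    """Fraction of notes where the model's yes/no matches the chart label."""
    hits = sum(ask(n, template, model) == y for n, y in zip(notes, labels))
    return hits / len(labels)
```

Looping `accuracy` over `PROMPTS` rescores the same notes under each level of prompt detail, mirroring the comparison the researchers describe.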

Overall, the AI models tended to recommend services more often than was needed. ChatGPT-4 was 8% less accurate than resident physicians, and ChatGPT-3.5 was 24% less accurate.

Williams said the AI’s tendency to overprescribe could be because the models are trained on the internet, where legitimate medical advice sites aren’t designed to answer emergency medical questions but rather to send readers to a doctor who can.

"These models are almost fine-tuned to say, 'seek medical advice,' which is quite right from a general public safety perspective," he said. "But erring on the side of caution isn’t always appropriate in the ED setting, where unnecessary interventions could cause patients harm, strain resources and lead to higher costs for patients."

He said models like ChatGPT will need better frameworks for evaluating clinical information before they are ready for the ED. The people who design those frameworks will need to strike a balance between making sure the AI doesn't miss something serious and keeping it from triggering unneeded exams and expenses.

This means researchers developing medical applications of AI, along with the wider clinical community and the public, need to consider where to draw those lines and how much to err on the side of caution.

"There's no perfect solution," he said, "But knowing that models like ChatGPT have these tendencies, we’re charged with thinking through how we want them to perform in clinical practice."

Williams CYK, Miao BY, Kornblith AE, Butte AJ.
Evaluating the use of large language models to provide clinical recommendations in the Emergency Department.
Nat Commun. 2024 Oct 8;15(1):8236. doi: 10.1038/s41467-024-52415-1
