Are AI-Chatbots Suitable for Hospitals?

Large language models may pass medical exams with flying colors, but using them for diagnoses would currently be grossly negligent. Medical chatbots make hasty diagnoses, do not adhere to treatment guidelines, and would put patients' lives at risk. This is the conclusion reached by a team at the Technical University of Munich (TUM), which has for the first time systematically investigated whether this form of artificial intelligence (AI) is suitable for everyday clinical practice. Despite the current shortcomings, the researchers see potential in the technology. They have published a method that can be used to test the reliability of future medical chatbots.

Large language models are computer programs trained with massive amounts of text. Specially trained variants of the technology behind ChatGPT now even solve final exams from medical studies almost flawlessly. But would such an AI be able to take over the tasks of doctors in an emergency room? Could it order the appropriate tests, make the right diagnosis, and create a treatment plan based on the patient's symptoms?

An interdisciplinary team led by Daniel Rückert, Professor of Artificial Intelligence in Healthcare and Medicine at TUM, addressed this question in the journal Nature Medicine. For the first time, doctors and AI experts systematically investigated how successful different variants of the open-source large language model Llama 2 are at making diagnoses.

Reenacting the path from emergency room to treatment

To test the capabilities of these complex algorithms, the researchers used anonymized patient data from a clinic in the USA. They selected 2,400 cases from a larger data set. All patients had come to the emergency room with abdominal pain. Each case description ended with one of four diagnoses and a treatment plan. All the data recorded during diagnosis was available for each case - from the medical history and blood values to the imaging data.

"We prepared the data in such a way that the algorithms were able to simulate the real procedures and decision-making processes in the hospital," explains Friederike Jungmann, assistant physician in the radiology department at TUM's Klinikum rechts der Isar and lead author of the study together with computer scientist Paul Hager. "The program only had the information that the real doctors had. For example, it had to decide for itself whether to order a blood count and then use this information to make the next decision - until it finally created a diagnosis and a treatment plan."
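The stepwise procedure described above can be sketched as a simple information-gating loop: the model begins with only the patient history and must explicitly order each test before seeing its result. This is a hypothetical illustration, not the study's actual code; all names (`run_case`, `CASE`, the scripted replies) are invented, and the scripted replies stand in for calls to a real model such as Llama 2.

```python
# Minimal sketch of a stepwise clinical evaluation loop: information is
# revealed only when the model explicitly requests it, mirroring how a
# real emergency-room workup unfolds. All names here are illustrative.

CASE = {
    "history": "45-year-old with acute abdominal pain",
    "blood_count": "WBC 14.2 x10^9/L (elevated)",
    "imaging": "Ultrasound: thickened gallbladder wall",
}

# Scripted model replies, so the loop runs without a real LLM backend.
SCRIPTED_REPLIES = [
    "ORDER blood_count",
    "ORDER imaging",
    "DIAGNOSIS acute cholecystitis",
]


def run_case(case, replies):
    """Run one case; the model starts with the history only."""
    known = {"history": case["history"]}
    for reply in replies:
        if reply.startswith("ORDER "):
            test = reply.split(" ", 1)[1]
            known[test] = case[test]  # reveal the result only when ordered
        elif reply.startswith("DIAGNOSIS "):
            return reply.split(" ", 1)[1], known
    return None, known  # model never committed to a diagnosis


diagnosis, seen = run_case(CASE, SCRIPTED_REPLIES)
print(diagnosis)     # the final diagnosis the model committed to
print(sorted(seen))  # which information the model actually requested
```

A loop like this also makes it easy to log whether a model requested all necessary examinations before diagnosing, which is one of the failure modes the study reports.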

The team found that none of the large language models consistently requested all the necessary examinations. In fact, the programs' diagnoses became less accurate the more information they had about the case. They often did not follow treatment guidelines, sometimes ordering examinations that would have had serious health consequences for real patients.

Direct comparison with doctors

In the second part of the study, the researchers compared AI diagnoses for a subset of the data with diagnoses from four doctors. While the latter were correct in 89 percent of the diagnoses, the best large language model achieved just 73 percent. Each model recognized some diseases better than others. In one extreme case, a model correctly diagnosed gallbladder inflammation in only 13 percent of cases.

Another problem that disqualifies the programs for everyday use is a lack of robustness: the diagnosis made by a large language model depended, among other things, on the order in which it received the information. Linguistic subtleties also influenced the result - for example, whether the program was asked for a 'Main Diagnosis,' a 'Primary Diagnosis,' or a 'Final Diagnosis.' In everyday clinical practice, these terms are usually interchangeable.
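The robustness problems described above can be probed with a simple consistency check: ask for the same diagnosis under synonymous phrasings and shuffled information orderings, and see whether the answers agree. The sketch below is hypothetical; `query_model` is a deliberately phrasing-sensitive stand-in used to show what an inconsistency looks like, not a real model interface.

```python
# Hypothetical robustness probe: a robust model should return the same
# diagnosis regardless of prompt phrasing or the order of findings.
import itertools

PHRASINGS = ["Main Diagnosis", "Primary Diagnosis", "Final Diagnosis"]
FINDINGS = ["elevated WBC", "gallbladder wall thickening", "RUQ pain"]


def query_model(prompt):
    # Toy stand-in that is deliberately sensitive to phrasing,
    # mimicking the kind of inconsistency the study observed.
    return "cholecystitis" if "Final" in prompt else "appendicitis"


def consistency(phrasings, findings):
    """Collect every answer across phrasings and finding orders."""
    answers = set()
    for phrasing in phrasings:
        for order in itertools.permutations(findings):
            prompt = f"{'; '.join(order)}. {phrasing}:"
            answers.add(query_model(prompt))
    return answers  # a robust model yields exactly one answer


print(consistency(PHRASINGS, FINDINGS))
```

If the returned set contains more than one diagnosis, the model's output depends on clinically irrelevant surface features of the prompt.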

ChatGPT not tested

The team explicitly did not test the commercial large language models from OpenAI (ChatGPT) and Google for two main reasons. Firstly, the provider of the hospital data has prohibited the data from being processed with these models for data protection reasons. Secondly, experts strongly advise that only open-source software should be used for applications in the healthcare sector. "Only with open-source models do hospitals have sufficient control and knowledge to ensure patient safety. When we test models, it is essential to know what data was used to train them. Otherwise, we might test them with the exact same questions and answers they were trained on. Companies of course keep their training data very secret, making fair evaluations hard," says Paul Hager. "Furthermore, basing key medical infrastructure on external services which update and change models as they wish is dangerous. In the worst-case scenario, a service on which hundreds of clinics depend could be shut down because it is not profitable."

Rapid progress

Developments in this technology are advancing rapidly. "It is quite possible that in the foreseeable future a large language model will be better suited to arriving at a diagnosis from medical history and test results," says Prof. Daniel Rückert. "We have therefore released our test environment for all research groups that want to test large language models in a clinical context." Rückert sees potential in the technology: "In the future, large language models could become important tools for doctors, for example for discussing a case. However, we must always be aware of the limitations and peculiarities of this technology and consider these when creating applications," says the medical AI expert.

Hager P, Jungmann F, Holland R, Bhagat K, Hubrecht I, Knauer M, Vielhauer J, Makowski M, Braren R, Kaissis G, Rueckert D.
Evaluation and mitigation of the limitations of large language models in clinical decision-making.
Nat Med. 2024 Jul 4. doi: 10.1038/s41591-024-03097-1
