Study Shows Human Medical Professionals are More Reliable than AI Tools

When looking for medical information, people can use web search engines or large language models (LLMs) like ChatGPT-4 or Google Bard. However, these artificial intelligence (AI) tools have their limitations and can sometimes generate incorrect advice or instructions. A new study in the American Journal of Preventive Medicine, published by Elsevier, assesses the accuracy and reliability of AI-generated advice against established medical standards and finds that LLMs are not trustworthy enough to replace human medical professionals just yet.

Andrei Brateanu, MD, Department of Internal Medicine, Cleveland Clinic Foundation, says, "Web search engines can provide access to reputable sources of information, offering accurate details on a variety of topics such as preventive measures and general medical questions. Similarly, LLMs can offer medical information that may look very accurate and convincing, when in fact it may be occasionally inaccurate. Therefore, we thought it would be important to compare the answers from LLMs with data obtained from recognized medical organizations. This comparison helps validate the reliability of the medical information by cross-referencing it with trusted healthcare data."

In the study, 56 questions were posed to ChatGPT-4 and Bard, and their responses were evaluated by two physicians for accuracy, with a third resolving any disagreements. Final assessments found 28.6% of ChatGPT-4's answers accurate, 28.6% inaccurate, and 42.8% partially accurate but incomplete. Bard performed better, with 53.6% of answers accurate, 17.8% inaccurate, and 28.6% partially accurate.

Dr. Brateanu explains, "All LLMs, including ChatGPT-4 and Bard, operate using complex mathematical algorithms. The fact that both models produced responses with inaccuracies or omitted crucial information highlights the ongoing challenge of developing AI tools that can provide dependable medical advice. This might come as a surprise, considering the advanced technology behind these models and their anticipated role in healthcare environments."

This research underscores the importance of being cautious and critical of medical information obtained from AI sources, reinforcing the need to consult healthcare professionals for accurate medical advice. For healthcare professionals, it points to both the potential and the limitations of using AI as a supplementary tool in patient care and emphasizes the ongoing need for oversight and verification of AI-generated information.

Dr. Brateanu concludes, "AI tools should not be seen as substitutes for medical professionals. Instead, they can be considered as additional resources that, when combined with human expertise, can enhance the overall quality of information provided. As we incorporate AI technology into healthcare, it's crucial to ensure that the essence of healthcare continues to be fundamentally human."

Kassab J, Hadi El Hajjar A, Wardrop RM 3rd, Brateanu A. Accuracy of Online Artificial Intelligence Models in Primary Care Settings. Am J Prev Med. 2024 Feb 12:S0749-3797(24)00060-6. doi: 10.1016/j.amepre.2024.02.006
