Clinicians Could Be Fooled by Biased AI, Despite Explanations

AI models in health care are a double-edged sword: they can improve diagnostic decisions for some demographics but worsen decisions for others when they have absorbed biased medical data.

Given the very real life-and-death risks of clinical decision-making, researchers and policymakers are taking steps to ensure that AI models are safe, secure, and trustworthy, and that their use will lead to improved outcomes.

The U.S. Food and Drug Administration has oversight of software powered by AI and machine learning used in health care and has issued guidance for developers. This includes a call to ensure the logic used by AI models is transparent or explainable so that clinicians can review the underlying reasoning.

However, a new study in JAMA finds that clinicians can be fooled by biased AI models even when AI explanations are provided.

"The problem is that the clinician has to understand what the explanation is communicating and the explanation itself," said first author Sarah Jabbour, a Ph.D. candidate in computer science and engineering at the College of Engineering at the University of Michigan.

The U-M team studied AI models and AI explanations in patients with acute respiratory failure.

"Determining why a patient has respiratory failure can be difficult. In our study, we found clinicians baseline diagnostic accuracy to be around 73%," said Michael Sjoding, M.D., associate professor of internal medicine at the U-M Medical School, a co-senior author on the study.

"During the normal diagnostic process, we think about a patient’s history, lab tests and imaging results, and try to synthesize this information and come up with a diagnosis. It makes sense that a model could help improve accuracy."

Jabbour, Sjoding, co-senior author Jenna Wiens, Ph.D., associate professor of computer science and engineering, and their multidisciplinary team designed a study to evaluate the diagnostic accuracy of 457 hospitalist physicians, nurse practitioners, and physician assistants with and without assistance from an AI model.

Clinicians were given real clinical vignettes of patients with respiratory failure, as well as a rating from the AI model on whether each patient had pneumonia, heart failure, or COPD, and each clinician was asked to make treatment recommendations based on their diagnoses.

Half were randomized to receive an AI explanation alongside the AI model's decision, while the other half received the AI decision alone, with no explanation.

In the half of participants who were randomized to see explanations, the clinician was provided a heatmap, or visual representation, showing where in the chest radiograph the AI model was looking when it made its diagnosis.
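The article does not specify how these heatmaps were produced; one common technique is gradient-based saliency, which highlights the pixels that most influence the model's output. A minimal sketch in PyTorch, where the toy network and random input are stand-ins for a real chest-radiograph classifier and image:

```python
# Minimal sketch of a gradient-based saliency heatmap. The toy CNN and
# random input below are stand-ins; the study's actual explanation
# method is not detailed in the article.
import torch
import torch.nn as nn

# Toy classifier standing in for a chest-radiograph model with three
# outputs: pneumonia, heart failure, COPD.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 3),
)
model.eval()

image = torch.rand(1, 1, 224, 224, requires_grad=True)  # fake radiograph

# Gradient of the predicted class's score with respect to input pixels.
logits = model(image)
class_idx = int(logits.argmax(dim=1))
logits[0, class_idx].backward()

# Saliency = |d(score)/d(pixel)|, normalized to [0, 1] so it can be
# overlaid on the radiograph as a heatmap.
saliency = image.grad.abs().squeeze()
heatmap = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
print(heatmap.shape)  # torch.Size([224, 224])
```

An overlay like this only helps if the clinician can judge whether the highlighted region is clinically meaningful, which is exactly the assumption the study put to the test.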

The team found that clinicians who were presented with an AI model trained to make reasonably accurate predictions, but without explanations, had their own accuracy increase by 2.9 percentage points. When provided an explanation, their accuracy increased by 4.4 percentage points.

However, to test whether an explanation could enable clinicians to recognize when an AI model is clearly biased or incorrect, the team also presented clinicians with models intentionally trained to be biased: for example, a model that predicted a high likelihood of pneumonia whenever the patient was 80 years old or older.

"AI models are susceptible to shortcuts, or spurious correlations in the training data. Given a dataset in which women are underdiagnosed with heart failure, the model could pick up on an association between being female and being at lower risk for heart failure," explained Wiens.

"If clinicians then rely on such a model, it could amplify existing bias. If explanations could help clinicians identify incorrect model reasoning this could help mitigate the risks."

When clinicians were shown the biased AI model, however, their accuracy decreased by 11.3 percentage points, and explanations that explicitly highlighted the AI's attention to non-relevant information (such as low bone density in patients over 80 years old) did not help them recover from this serious decline in performance.

The observed decline in performance aligns with previous studies that found users may be deceived by models, the team noted.

"There's still a lot to be done to develop better explanation tools so that we can better communicate to clinicians why a model is making specific decisions in a way that they can understand. It’s going to take a lot of discussion with experts across disciplines," Jabbour said.

The team hopes this study will spur more research into the safe implementation of AI-based models in health care across all populations, and into medical education around AI and bias.

Jabbour S, Fouhey D, Shepard S, et al. Measuring the Impact of AI in the Diagnosis of Hospitalized Patients: A Randomized Clinical Vignette Survey Study. JAMA. 2023;330(23):2275-2284. doi:10.1001/jama.2023.22295
