Clinicians could be Fooled by Biased AI, Despite Explanations

AI models in health care are a double-edged sword: they can improve diagnostic decisions for some demographics but worsen decisions for others when they have absorbed biased medical data.

Given the very real life-and-death risks of clinical decision-making, researchers and policymakers are taking steps to ensure AI models are safe, secure and trustworthy, and that their use will lead to improved outcomes.

The U.S. Food and Drug Administration has oversight of software powered by AI and machine learning used in health care and has issued guidance for developers. This includes a call to ensure the logic used by AI models is transparent or explainable so that clinicians can review the underlying reasoning.

However, a new study in JAMA finds that even when provided with AI explanations, clinicians can be fooled by biased AI models.

"The problem is that the clinician has to understand what the explanation is communicating and the explanation itself," said first author Sarah Jabbour, a Ph.D. candidate in computer science and engineering at the College of Engineering at the University of Michigan.

The U-M team studied AI models and AI explanations in patients with acute respiratory failure.

"Determining why a patient has respiratory failure can be difficult. In our study, we found clinicians baseline diagnostic accuracy to be around 73%," said Michael Sjoding, M.D., associate professor of internal medicine at the U-M Medical School, a co-senior author on the study.

"During the normal diagnostic process, we think about a patient’s history, lab tests and imaging results, and try to synthesize this information and come up with a diagnosis. It makes sense that a model could help improve accuracy."

Jabbour, Sjoding, co-senior author Jenna Wiens, Ph.D., associate professor of computer science and engineering, and their multidisciplinary team designed a study to evaluate the diagnostic accuracy of 457 hospitalist physicians, nurse practitioners and physician assistants with and without assistance from an AI model.

Clinicians were given real clinical vignettes of patients with respiratory failure, along with the AI model's assessment of whether the patient had pneumonia, heart failure or COPD, and each clinician was asked to make a diagnosis and treatment recommendations.

Half of the participants were randomized to receive an AI explanation alongside the AI model's decision, while the other half received only the AI decision with no explanation.

In the half of participants who were randomized to see explanations, the clinician was provided a heatmap, or visual representation, showing the regions of the chest radiograph the AI model was looking at, which served as the basis for its assessment.
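For readers curious how such a heatmap can be produced, below is a minimal, hypothetical sketch of a gradient-based saliency map in PyTorch. The tiny placeholder network and random "radiograph" are assumptions for illustration only; they are not the study's model or its explanation method.

```python
# Hypothetical sketch of a gradient-based saliency heatmap, the general idea
# behind "where the model was looking" explanations. The tiny network and the
# random input below are placeholders, not the study's model or data.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 3),   # three classes, e.g. pneumonia / heart failure / COPD
)
model.eval()

xray = torch.rand(1, 1, 224, 224, requires_grad=True)  # placeholder radiograph

logits = model(xray)
top_class = logits.argmax(dim=1).item()
logits[0, top_class].backward()   # gradient of the top score w.r.t. input pixels

# Pixels with large gradient magnitude most influenced the prediction;
# rendered as an overlay on the radiograph, this is the kind of heatmap
# an explanation tool can show a clinician.
heatmap = xray.grad.abs().squeeze()
print(heatmap.shape)  # torch.Size([224, 224])
```

This vanilla-gradient approach is only one common way such maps are computed; other saliency methods follow the same basic idea of highlighting the image regions that drove the prediction.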

The team found that clinicians who were presented with an AI model trained to make reasonably accurate predictions, but without explanations, had their own accuracy increase by 2.9 percentage points. When provided an explanation, their accuracy increased by 4.4 percentage points.

However, to test whether an explanation could enable clinicians to recognize when an AI model is clearly biased or incorrect, the team also presented clinicians with models intentionally trained to be biased, for example, a model predicting a high likelihood of pneumonia if the patient was 80 years old or older.

"AI models are susceptible to shortcuts, or spurious correlations in the training data. Given a dataset in which women are underdiagnosed with heart failure, the model could pick up on an association between being female and being at lower risk for heart failure," explained Wiens.

"If clinicians then rely on such a model, it could amplify existing bias. If explanations could help clinicians identify incorrect model reasoning this could help mitigate the risks."

When clinicians were shown the biased AI model, however, their accuracy fell by 11.3 percentage points, and explanations that explicitly highlighted the AI's reliance on non-relevant information (such as low bone density in patients over 80 years) did not help them recover from this serious decline in performance.

The observed decline in performance aligns with previous studies that find users may be deceived by models, noted the team.

"There's still a lot to be done to develop better explanation tools so that we can better communicate to clinicians why a model is making specific decisions in a way that they can understand. It’s going to take a lot of discussion with experts across disciplines," Jabbour said.

The team hopes this study will spur more research into the safe implementation of AI-based models in health care across all populations, as well as into medical education around AI and bias.

Jabbour S, Fouhey D, Shepard S, et al.
Measuring the Impact of AI in the Diagnosis of Hospitalized Patients: A Randomized Clinical Vignette Survey Study.
JAMA. 2023;330(23):2275-2284. doi: 10.1001/jama.2023.22295
