Who's to Blame When AI Makes a Medical Error?

Assistive artificial intelligence technologies hold significant promise for transforming health care by aiding physicians in diagnosing, managing, and treating patients. However, the way assistive AI is currently being implemented could actually worsen challenges around error prevention and physician burnout, according to a new brief published in JAMA Health Forum.

The brief, written by researchers from the Johns Hopkins Carey Business School, Johns Hopkins Medicine, and The University of Texas at Austin McCombs School of Business, explains that physicians are increasingly expected to rely on AI to minimize medical errors. Yet the laws and regulations needed to support physicians as they make AI-guided decisions are not yet in place, despite the rapid adoption of these technologies across health care organizations.

The researchers predict that medical liability will hinge on whom society considers at fault when the technology fails or makes a mistake, subjecting physicians to the unrealistic expectation of always knowing when to trust AI and when to override it. The authors warn that this expectation could increase the risk of burnout, and even of errors, among physicians.

"AI was meant to ease the burden, but instead, it’s shifting liability onto physicians - forcing them to flawlessly interpret technology even its creators can’t fully explain," said Shefali Patil, visiting associate professor at the Carey Business School and associate professor at the University of Texas McCombs School of Business. "This unrealistic expectation creates hesitation and poses a direct threat to patient care."

The brief outlines strategies health care organizations can use to shift the focus from individual physician performance to organizational support and learning, an approach that may ease the pressure on physicians and foster more collaborative AI integration.

"Expecting physicians to perfectly understand and apply AI alone when making clinical decisions is like expecting pilots to also design their own aircraft - while they’re flying it," said Christopher Myers, associate professor and faculty director of the Center for Innovative Leadership at the Carey Business School. "To ensure AI empowers rather than exhausts physicians, health care organizations must develop support systems that help physicians calibrate when and how to use AI so they don’t need to second-guess the tools they’re using to make key decisions."

Patil SV, Myers CG, Lu-Myers Y. Calibrating AI Reliance - A Physician's Superhuman Dilemma. JAMA Health Forum. 2025 Mar 7;6(3):e250106. doi: 10.1001/jamahealthforum.2025.0106
