Who's to Blame When AI Makes a Medical Error?

Assistive artificial intelligence technologies hold significant promise for transforming health care by aiding physicians in diagnosing, managing, and treating patients. However, the current trend of assistive AI implementation could actually worsen challenges related to error prevention and physician burnout, according to a new brief published in JAMA Health Forum.

The brief, written by researchers from the Johns Hopkins Carey Business School, Johns Hopkins Medicine, and The University of Texas at Austin McCombs School of Business, explains that physicians are increasingly expected to rely on AI to minimize medical errors. However, despite the rapid adoption of these technologies by health care organizations, laws and regulations are not yet in place to support physicians as they make AI-guided decisions.

The researchers predict that medical liability will depend on whom society considers at fault when the technology fails or makes a mistake, subjecting physicians to an unrealistic expectation of knowing when to override or trust AI. The authors warn that such an expectation could increase the risk of burnout and even errors among physicians.

"AI was meant to ease the burden, but instead, it's shifting liability onto physicians, forcing them to flawlessly interpret technology that even its creators can't fully explain," said Shefali Patil, visiting associate professor at the Carey Business School and associate professor at the University of Texas McCombs School of Business. "This unrealistic expectation creates hesitation and poses a direct threat to patient care."

The new brief suggests strategies for health care organizations to support physicians by shifting the focus from individual performance to organizational support and learning, which may alleviate pressure on physicians and foster a more collaborative approach to AI integration.

"Expecting physicians to perfectly understand and apply AI alone when making clinical decisions is like expecting pilots to also design their own aircraft while they're flying it," said Christopher Myers, associate professor and faculty director of the Center for Innovative Leadership at the Carey Business School. "To ensure AI empowers rather than exhausts physicians, health care organizations must develop support systems that help physicians calibrate when and how to use AI so they don't need to second-guess the tools they're using to make key decisions."

Patil SV, Myers CG, Lu-Myers Y. Calibrating AI Reliance - A Physician's Superhuman Dilemma. JAMA Health Forum. 2025 Mar 7;6(3):e250106. doi: 10.1001/jamahealthforum.2025.0106
