Is AI in Medicine Playing Fair?

As artificial intelligence (AI) rapidly integrates into health care, a new study by researchers at the Icahn School of Medicine at Mount Sinai finds that generative AI models may recommend different treatments for the same medical condition based solely on a patient's socioeconomic and demographic background.

Their findings, detailed in the April 7, 2025, online issue of Nature Medicine, highlight the importance of early detection and intervention to ensure that AI-driven care is safe, effective, and appropriate for all.

As part of their investigation, the researchers stress-tested nine large language models (LLMs) on 1,000 emergency department cases, each replicated with 32 different patient backgrounds, producing more than 1.7 million medical recommendations. Despite identical clinical details, the AI models occasionally altered their decisions based on a patient's socioeconomic and demographic profile, affecting key areas such as triage priority, diagnostic testing, treatment approach, and mental health evaluation.
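The scale reported above follows from the study's cross-product design: every case is paired with every patient background and run against every model. A minimal sketch of that arithmetic, assuming roughly six recommendation areas per query (the paper's exact per-case question count is not stated here):

```python
cases = 1_000     # emergency department vignettes
profiles = 32     # sociodemographic backgrounds per case
models = 9        # large language models tested

prompts_per_model = cases * profiles        # 32,000 prompt variants per model
total_queries = prompts_per_model * models  # 288,000 model queries overall

# Each query elicits several decisions (triage, testing, treatment,
# mental health evaluation, etc.); ~6 per query is an assumed figure
# that would account for the reported >1.7 million recommendations.
recommendations = total_queries * 6
print(recommendations)  # 1,728,000
```

Under this assumed factor, 1,000 × 32 × 9 × 6 = 1,728,000, consistent with the "more than 1.7 million" figure in the article.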

"Our research provides a framework for AI assurance, helping developers and health care institutions design fair and reliable AI tools," says co-senior author Eyal Klang, MD, Chief of Generative-AI in the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai. "By identifying when AI shifts its recommendations based on background rather than medical need, we inform better model training, prompt design, and oversight. Our rigorous validation process tests AI outputs against clinical standards, incorporating expert feedback to refine performance. This proactive approach not only enhances trust in AI-driven care but also helps shape policies for better health care for all."

One of the study's most striking findings was the tendency of some AI models to escalate care recommendations, particularly for mental health evaluations, based on patient demographics rather than medical necessity. In addition, high-income patients were more often recommended advanced diagnostic tests such as CT scans or MRI, while low-income patients were more frequently advised to undergo no further testing. The scale of these inconsistencies underscores the need for stronger oversight, say the researchers.

While the study provides critical insights, the researchers caution that it represents only a snapshot of AI behavior. Future research will extend this assurance testing to evaluate how AI models perform in real-world clinical settings and whether different prompting techniques can reduce bias. The team also aims to work with other health care institutions to refine AI tools, ensuring they uphold the highest ethical standards and treat all patients fairly.

"I am delighted to partner with Mount Sinai on this critical research to ensure AI-driven medicine benefits patients across the globe," says physician-scientist and first author of the study, Mahmud Omar, MD, who consults with the research team. "As AI becomes more integrated into clinical care, it’s essential to thoroughly evaluate its safety, reliability, and fairness. By identifying where these models may introduce bias, we can work to refine their design, strengthen oversight, and build systems that ensure patients remain at the heart of safe, effective care. This collaboration is an important step toward establishing global best practices for AI assurance in health care."

"AI has the power to revolutionize health care, but only if it’s developed and used responsibly," says co-senior author Girish N. Nadkarni, MD, MPH, Chair of the Windreich Department of Artificial Intelligence and Human Health, Director of the Hasso Plattner Institute for Digital Health, and the Irene and Dr. Arthur M. Fishberg Professor of Medicine at the Icahn School of Medicine at Mount Sinai. "Through collaboration and rigorous validation, we are refining AI tools to uphold the highest ethical standards and ensure appropriate, patient-centered care. By implementing robust assurance protocols, we not only advance technology but also build the trust essential for transformative health care. With proper testing and safeguards, we can ensure these technologies improve care for everyone, not just certain groups."

Next, the investigators plan to expand their work by simulating multistep clinical conversations and piloting AI models in hospital settings to measure their real-world impact. They hope their findings will guide the development of policies and best practices for AI assurance in health care, fostering trust in these powerful new tools.

Omar M, Soffer S, Agbareia R, Bragazzi NL, Apakama DU, Horowitz CR, Charney AW, Freeman R, Kummer B, Glicksberg BS, Nadkarni GN, Klang E.
Sociodemographic biases in medical decision making by large language models.
Nat Med. 2025 Apr 7. doi: 10.1038/s41591-025-03626-6
