Scientists Argue for More FDA Oversight of Healthcare AI Tools

An agile, transparent, and ethics-driven oversight system is needed for the U.S. Food and Drug Administration (FDA) to balance innovation with patient safety when it comes to artificial intelligence-driven medical technologies. That is the takeaway from a new report issued to the FDA, published in the open-access journal PLOS Digital Health by Leo Celi of the Massachusetts Institute of Technology and colleagues.

Artificial intelligence (AI) is becoming a powerful force in healthcare, helping doctors diagnose diseases, monitor patients, and even recommend treatments. Unlike traditional medical devices, many AI tools continue to learn and change after they’ve been approved, meaning their behavior can shift in unpredictable ways once they’re in use.

In the new paper, Celi and his colleagues argue that the FDA's current system is not set up to monitor these post-approval changes. Their analysis calls for stronger rules around transparency and bias, especially to protect vulnerable populations: if an algorithm is trained mostly on data from one group of people, it may make mistakes when used with others. The authors recommend that developers be required to share information about how their AI models were trained and tested, and that the FDA involve patients and community advocates more directly in decision-making. They also suggest practical fixes, including creating public data repositories to track how AI performs in the real world, offering tax incentives for companies that follow ethical practices, and training medical students to critically evaluate AI tools.

"This work has the potential to drive real-world impact by prompting the FDA to rethink existing oversight mechanisms for AI-enabled medical technologies. We advocate for a patient-centered, risk-aware, and continuously adaptive regulatory approach - one that ensures AI remains an asset to clinical practice without compromising safety or exacerbating healthcare disparities," the authors say.

Abulibdeh R, Celi LA, Sejdić E.
The illusion of safety: A report to the FDA on AI healthcare product approvals.
PLOS Digit Health. 2025 Jun 5;4(6):e0000866. doi: 10.1371/journal.pdig.0000866
