Artificial intelligence (AI) is becoming a powerful force in healthcare, helping doctors diagnose diseases, monitor patients, and even recommend treatments. Unlike traditional medical devices, many AI tools continue to learn and change after they’ve been approved, meaning their behavior can shift in unpredictable ways once they’re in use.
In the new paper, Celi and his colleagues argue that the FDA's current system is not set up to keep tabs on these post-approval changes. Their analysis calls for stronger requirements for transparency and bias mitigation, especially to protect vulnerable populations: if an algorithm is trained mostly on data from one group of people, it may make mistakes when used with others. The authors recommend that developers be required to share information about how their AI models were trained and tested, and that the FDA involve patients and community advocates more directly in decision-making. They also suggest practical fixes, including creating public data repositories to track how AI performs in the real world, offering tax incentives for companies that follow ethical practices, and training medical students to critically evaluate AI tools.
"This work has the potential to drive real-world impact by prompting the FDA to rethink existing oversight mechanisms for AI-enabled medical technologies. We advocate for a patient-centered, risk-aware, and continuously adaptive regulatory approach - one that ensures AI remains an asset to clinical practice without compromising safety or exacerbating healthcare disparities," the authors say.
Abulibdeh R, Celi LA, Sejdić E.
The illusion of safety: A report to the FDA on AI healthcare product approvals.
PLOS Digit Health. 2025 Jun 5;4(6):e0000866. doi: 10.1371/journal.pdig.0000866.