Experts Propose Specific and Suited Guidelines for the Use and Regulation of AI

Current Artificial Intelligence (AI) models for cancer treatment are trained and approved only for specific intended purposes. Generalist medical AI (GMAI) models, in contrast, can handle a wide range of medical data, including different types of images and text. For example, for a patient with colorectal cancer, a single GMAI model could interpret endoscopy videos, pathology slides and electronic health record (EHR) data. Such multi-purpose, or generalist, models therefore represent a paradigm shift away from narrow AI models.

Regulatory bodies face a dilemma in adapting to these new models, because current regulations are designed for applications with a defined and fixed purpose, a specific set of clinical indications and a defined target population. Adapting or extending a model after approval is not possible without repeating the quality-management and regulatory processes. GMAI models, with their adaptability and their ability to make predictions even without specific training examples (so-called zero-shot reasoning), therefore pose challenges for validation and reliability assessment. At present, they fall outside all international regulatory frameworks.

The authors point out that existing regulatory frameworks are ill suited to GMAI models because of these characteristics. "If these regulations remain unchanged, a possible solution could be hybrid approaches: GMAIs could be approved as medical devices, and the range of allowed clinical prompts could then be restricted," says Prof. Stephen Gilbert, Professor of Medical Device Regulatory Science at TU Dresden. "But this approach forces models with the potential to intelligently address new questions and multimodal data onto narrow tracks, through rules written before these technologies were anticipated. Specific decisions should be made on how to proceed with these technologies, rather than excluding their ability to address questions they were not specifically designed for. New technologies sometimes call for new regulatory paradigms," says Prof. Gilbert.

The researchers argue that it will be impossible to prevent patients and medical professionals from using generic models or unapproved medical decision-support systems. It is therefore crucial to maintain the central role of physicians and to empower them as informed interpreters of this information.

In conclusion, the researchers propose a flexible regulatory approach that accommodates the unique characteristics of GMAI models while ensuring patient safety and supporting physician decision-making. They point out that a rigid regulatory framework could hinder progress in AI-driven healthcare, and call for a nuanced approach that balances innovation with patient welfare.

Gilbert S, Kather JN.
Guardrails for the use of generalist AI in cancer care.
Nat Rev Cancer. 2024 Apr 16. doi: 10.1038/s41568-024-00685-8
