New Research Finds Specific Learning Strategies can Enhance AI Model Effectiveness in Hospitals

If the data used to train artificial intelligence models for medical settings, such as hospitals across the Greater Toronto Area, differ from the data those models encounter in the real world, patients could be harmed. A new study out today from York University found that proactive monitoring, combined with transfer and continual learning strategies, is key to mitigating data shifts and the harms that can follow.

To determine the effect of data shifts, the team built and evaluated an early warning system to predict the risk of in-hospital patient mortality and enhance the triaging of patients at seven large hospitals in the Greater Toronto Area.

The study used GEMINI, Canada’s largest hospital data-sharing network, to assess data shifts and biases across clinical diagnoses, demographics such as sex and age, hospital type, admission source (for example, an acute care institution or nursing home) and time of admission. It covered 143,049 patient encounters, with data including lab results, transfusions, imaging reports and administrative features.

"As the use of AI in hospitals increases to predict anything from mortality and length of stay to sepsis and the occurrence of disease diagnoses, there is a greater need to ensure they work as predicted and don't cause harm," says senior author York University Assistant Professor Elham Dolatabadi of York’s School of Health Policy and Management, Faculty of Health. "Building reliable and robust machine learning models, however, has proven difficult as data changes over time creating system unreliability."

The data used to train clinical AI models for hospitals and other health-care settings need to accurately reflect the variability of patients, diseases and medical practices, she adds. Without that, a model could produce irrelevant or harmful predictions and even inaccurate diagnoses. These data shifts can arise from differences in patient subpopulations, staffing and resources, from differing health-care practices between hospitals, and from unforeseen changes in policy or behaviour such as an unexpected pandemic.

"We found significant shifts in data between model training and real-life applications, including changes in demographics, hospital types, admission sources, and critical laboratory assays," says first author Vallijah Subasri, AI scientist at University Health Network. "We also found harmful data shifts when models trained on community hospital patient visits were transferred to academic hospitals, but not the reverse."

To mitigate these potentially harmful data shifts, the researchers used two strategies: transfer learning, which allows a model to store knowledge gained from learning one domain and apply it to a different but related domain, and continual learning, in which the AI model is updated sequentially on a continuing stream of data in response to drift-triggered alarms.
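As a rough, hypothetical illustration of those two ideas (not the study's actual models, features or data), the Python sketch below warm-starts a classifier trained on one synthetic "hospital type" for a related one, then keeps updating it as new encounter windows arrive; the synthetic features, sample sizes and the choice of scikit-learn's SGDClassifier are all assumptions made for the example.

```python
# Illustrative sketch only: not the study's pipeline. Shows transfer learning
# (warm-starting a model from a related source domain) and continual learning
# (incremental updates as new data streams in).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def synthetic_domain(shift, n=4000, d=12):
    """Toy stand-in for one hospital type's patient encounters (assumed)."""
    X = rng.normal(shift, 1.0, size=(n, d))
    y = (X[:, 0] + 0.5 * X[:, 1] > 1.5 * shift).astype(int)
    return X, y

# Source domain (e.g., community-hospital visits): train a baseline classifier.
X_src, y_src = synthetic_domain(shift=0.0)
model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_src, y_src, classes=np.array([0, 1]))

# Transfer learning (here, warm-started fine-tuning): reuse the weights learned
# on the source domain as the starting point for a related target domain
# (e.g., academic hospitals) instead of retraining from scratch.
X_tgt, y_tgt = synthetic_domain(shift=0.7)
model.partial_fit(X_tgt, y_tgt)

# Continual learning: keep updating the same model as new windows of
# encounters arrive over time.
X_new, y_new = synthetic_domain(shift=0.9, n=500)
model.partial_fit(X_new, y_new)

print("target-domain accuracy:", round(model.score(X_tgt, y_tgt), 3))
```

Warm-starting from the source-domain weights is only one simple form of transfer learning, but it captures the core idea: knowledge from one domain seeds the model for another rather than training from scratch.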

Although machine learning models usually remain locked once approved for use, the researchers found that hospital-type-specific models leveraging transfer learning performed better than models trained on data from all available hospitals.

Drift-triggered continual learning helped prevent the data shifts brought on by the COVID-19 pandemic from becoming harmful, and improved model performance over time.

Depending on the data it was trained on, an AI model can also acquire biases that lead to unfair or discriminatory outcomes for some patient groups.

"We demonstrate how to detect these data shifts, assess whether they negatively impact AI model performance, and propose strategies to mitigate their effects. We show there is a practical pathway from promise to practice, bridging the gap between the potential of AI in health and the realities of deploying and sustaining it in real-world clinical environments," says Dolatabadi.

The study is a crucial step towards the deployment of clinical AI models, as it provides strategies and workflows to ensure their safety and efficacy in real-world settings.

"These findings indicate that a proactive, label-agnostic monitoring pipeline incorporating transfer and continual learning can detect and mitigate harmful data shifts in Toronto's general internal medicine population, ensuring robust and equitable clinical AI deployment," says Subasri.

Subasri V, Krishnan A, Kore A, Dhalla A, Pandya D, Wang B, Malkin D, Razak F, Verma AA, Goldenberg A, Dolatabadi E.
Detecting and Remediating Harmful Data Shifts for the Responsible Deployment of Clinical AI Models.
JAMA Netw Open. 2025 Jun 2;8(6):e2513685. doi: 10.1001/jamanetworkopen.2025.13685
