New Research Finds Specific Learning Strategies Can Enhance AI Model Effectiveness in Hospitals

If the data used to train artificial intelligence models for medical settings, such as hospitals across the Greater Toronto Area, differ from the data those models encounter in the real world, patients could be harmed. A new study out today from York University found that proactive, continual and transfer learning strategies for AI models are key to mitigating data shifts and the harms that follow.

To determine the effect of data shifts, the team built and evaluated an early warning system to predict the risk of in-hospital patient mortality and enhance the triaging of patients at seven large hospitals in the Greater Toronto Area.

The study used GEMINI, Canada’s largest hospital data-sharing network, to assess the impact of data shifts and biases across clinical diagnoses, demographics such as sex and age, hospital type, admission source, such as an acute care institution or nursing home, and time of admission. It covered 143,049 patient encounters, including lab results, transfusions, imaging reports and administrative features.

"As the use of AI in hospitals increases to predict anything from mortality and length of stay to sepsis and the occurrence of disease diagnoses, there is a greater need to ensure these models work as intended and don't cause harm," says senior author York University Assistant Professor Elham Dolatabadi of York’s School of Health Policy and Management, Faculty of Health. "Building reliable and robust machine learning models, however, has proven difficult, as data changes over time, creating system unreliability."

The data used to train clinical AI models for hospitals and other health-care settings need to accurately reflect the variability of patients, diseases and medical practices, she adds. Without that, a model could produce irrelevant or harmful predictions, and even inaccurate diagnoses. Differences in patient subpopulations, staffing and resources, as well as unforeseen changes in policy or behaviour, differing health-care practices between hospitals or an unexpected pandemic, can all cause such data shifts.

"We found significant shifts in data between model training and real-life applications, including changes in demographics, hospital types, admission sources, and critical laboratory assays," says first author Vallijah Subasri, AI scientist at University Health Network. "We also found harmful data shifts when models trained on community hospital patient visits were transferred to academic hospitals, but not the reverse."
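Shifts like these can be flagged without waiting for outcome labels, by comparing the distribution of each input feature between the training era and live deployment data. A minimal sketch of that idea, using a two-sample Kolmogorov-Smirnov statistic on one simulated lab feature (the data, threshold, and variable names here are illustrative, not the study's actual monitoring pipeline):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of samples a and b."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(0)
train_lab = rng.normal(7.0, 1.0, 2000)  # a lab assay at training time
live_lab = rng.normal(7.8, 1.3, 2000)   # the same assay after a shift

stat = ks_statistic(train_lab, live_lab)
drift_alarm = stat > 0.1  # illustrative threshold
```

Because the statistic only compares feature distributions, the check is label-agnostic: it can run long before mortality outcomes are known.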

To mitigate these potentially harmful data shifts, the researchers used transfer learning strategies, which allow a model to store knowledge gained from learning one domain and apply it to a different but related domain, and continual learning strategies, in which the model is updated on a continual stream of data in a sequential manner in response to drift-triggered alarms.
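Drift-triggered continual learning can be sketched as a model that stays locked until an alarm fires, and only then takes an update step on the newest batch. Everything below (the toy logistic model, the mean-shift alarm, the thresholds and simulated data) is an illustrative stand-in, not the study's actual system:

```python
import numpy as np

class RiskModel:
    """Toy logistic-regression risk model (illustrative only)."""
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict_proba(self, X):
        return 1.0 / (1.0 + np.exp(-np.clip(X @ self.w, -30, 30)))

    def partial_fit(self, X, y, steps=100):
        # Continual-learning update: train only on the new batch
        for _ in range(steps):
            grad = X.T @ (self.predict_proba(X) - y) / len(y)
            self.w -= self.lr * grad

def drift_alarm(reference, batch, threshold=0.5):
    # Label-agnostic alarm: compare per-feature means across eras
    return np.max(np.abs(reference.mean(0) - batch.mean(0))) > threshold

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, size=(1000, 3))  # training-era features
model = RiskModel(n_features=3)

for month in range(6):
    shift = 1.5 if month >= 3 else 0.0            # a shift arrives mid-stream
    X = rng.normal(shift, 1.0, size=(200, 3))
    y = (X[:, 0] > shift).astype(float)
    if drift_alarm(reference, X):
        model.partial_fit(X, y)                   # update only on alarm
        reference = X                             # adopt the new era as reference
```

The model remains locked during stable months and retrains only when the alarm fires, which is the behaviour the drift-triggered approach is designed to produce.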

Although machine learning models usually remain locked once approved for use, the researchers found that hospital-type-specific models leveraging transfer learning performed better than models trained on data from all available hospitals.

Using drift-triggered continual learning helped prevent harmful data shifts due to the COVID-19 pandemic and improved model performance over time.

Depending on the data it was trained on, an AI model could also develop certain biases, leading to unfair or discriminatory outcomes for some patient groups.

"We demonstrate how to detect these data shifts, assess whether they negatively impact AI model performance, and propose strategies to mitigate their effects. We show there is a practical pathway from promise to practice, bridging the gap between the potential of AI in health and the realities of deploying and sustaining it in real-world clinical environments," says Dolatabadi.

The study is a crucial step towards the deployment of clinical AI models as it provides strategies and workflows to ensure the safety and efficacy of these models in real-world settings.

"These findings indicate that a proactive, label-agnostic monitoring pipeline incorporating transfer and continual learning can detect and mitigate harmful data shifts in Toronto's general internal medicine population, ensuring robust and equitable clinical AI deployment," says Subasri.

Subasri V, Krishnan A, Kore A, Dhalla A, Pandya D, Wang B, Malkin D, Razak F, Verma AA, Goldenberg A, Dolatabadi E.
Detecting and Remediating Harmful Data Shifts for the Responsible Deployment of Clinical AI Models.
JAMA Netw Open. 2025 Jun 2;8(6):e2513685. doi: 10.1001/jamanetworkopen.2025.13685
