New Recommendations to Increase Transparency and Tackle Potential Bias in Medical AI Technologies

Patients will be better able to benefit from innovations in medical artificial intelligence (AI) if a new set of internationally agreed recommendations is followed.

A new set of recommendations, published in The Lancet Digital Health and NEJM AI, aims to improve the way datasets are used to build AI health technologies and to reduce the risk of AI bias.

Innovative medical AI technologies may improve diagnosis and treatment for patients. However, some studies have shown that medical AI can be biased, meaning that it works well for some people and not for others. As a result, some individuals and communities may be 'left behind,' or may even be harmed, when these technologies are used.

An international initiative called ‘STANDING Together (STANdards for data Diversity, INclusivity and Generalisability)’ has published recommendations as part of a research study involving more than 350 experts from 58 countries. These recommendations aim to ensure that medical AI can be safe and effective for everyone. They cover many factors which can contribute to AI bias, including:

  • Encouraging medical AI to be developed using appropriate healthcare datasets that properly represent everyone in society, including minoritised and underserved groups;
  • Helping anyone who publishes healthcare datasets to identify any biases or limitations in the data;
  • Enabling those developing medical AI technologies to assess whether a dataset is suitable for their purposes;
  • Defining how AI technologies should be tested to identify whether they are biased, and so may work less well for certain people.

Dr Xiao Liu, Associate Professor of AI and Digital Health Technologies at the University of Birmingham and Chief Investigator of the study, said:

"Data is like a mirror, providing a reflection of reality. And when distorted, data can magnify societal biases. But trying to fix the data to fix the problem is like wiping the mirror to remove a stain on your shirt.

"To create lasting change in health equity, we must focus on fixing the source, not just the reflection."

The STANDING Together recommendations aim to ensure that the datasets used to train and test medical AI systems represent the full diversity of the people the technology will be used for. AI systems often work less well for people who are not properly represented in datasets. People in minority groups are particularly likely to be under-represented, so they may be disproportionately affected by AI bias. The recommendations also give guidance on how to identify those who may be harmed when medical AI systems are used, allowing this risk to be reduced.

STANDING Together is led by researchers at University Hospitals Birmingham NHS Foundation Trust, and the University of Birmingham, UK. The research has been conducted with collaborators from over 30 institutions worldwide, including universities, regulators (UK, US, Canada and Australia), patient groups and charities, and small and large health technology companies. The work has been funded by The Health Foundation and the NHS AI Lab, and supported by the National Institute for Health and Care Research (NIHR), the research partner of the NHS, public health and social care.

In addition to the recommendations themselves, a commentary published in Nature Medicine written by the STANDING Together patient representatives highlights the importance of public participation in shaping medical AI research.

Sir Jeremy Farrar, Chief Scientist of the World Health Organization, said: "Ensuring we have diverse, accessible and representative datasets to support the responsible development and testing of AI is a global priority. The STANDING Together recommendations are a major step forward in ensuring equity for AI in health."

Dominic Cushnan, Deputy Director for AI at NHS England, said: "It is crucial that we have transparent and representative datasets to support the responsible and fair development and use of AI. The STANDING Together recommendations are highly timely as we leverage the exciting potential of AI tools, and NHS AI Lab fully supports the adoption of their practice to mitigate AI bias."

The recommendations are available open access via The Lancet Digital Health.

These recommendations may be particularly helpful for regulatory agencies, health and care policy organisations, funding bodies, ethical review committees, universities, and government departments.

Alderman JE, Palmer J, Laws E, McCradden MD, Ordish J, Ghassemi M, Pfohl SR, Rostamzadeh N, Cole-Lewis H, Glocker B, Calvert M, Pollard TJ, Gill J, Gath J, Adebajo A, Beng J, Leung CH, Kuku S, Farmer LA, Matin RN, Mateen BA, McKay F, Heller K, Karthikesalingam A, Treanor D, Mackintosh M, Oakden-Rayner L, Pearson R, Manrai AK, Myles P, Kumuthini J, Kapacee Z, Sebire NJ, Nazer LH, Seah J, Akbari A, Berman L, Gichoya JW, Righetto L, Samuel D, Wasswa W, Charalambides M, Arora A, Pujari S, Summers C, Sapey E, Wilkinson S, Thakker V, Denniston A, Liu X.
Tackling algorithmic bias and promoting transparency in health datasets: the STANDING Together consensus recommendations.
Lancet Digit Health. 2024 Dec 12:S2589-7500(24)00224-3. doi: 10.1016/S2589-7500(24)00224-3.
