Concerns over 'Exaggerated' Study Claims of AI Outperforming Doctors

Many studies claiming that artificial intelligence is as good as (or better than) human experts at interpreting medical images are of poor quality and are arguably exaggerated, posing a risk to the safety of 'millions of patients', warn researchers in The BMJ.

Their findings raise concerns about the quality of evidence underpinning many of these studies, and highlight the need to improve their design and reporting standards.

Artificial intelligence (AI) is an innovative and fast-moving field with the potential to improve patient care and relieve overburdened health services. Deep learning is a branch of AI that has shown particular promise in medical imaging.

The volume of published research on deep learning is growing, and some media headlines that claim superior performance to doctors have fuelled hype for rapid implementation. But the methods and risk of bias of studies behind these headlines have not been examined in detail.

To address this, a team of researchers reviewed published studies from the past 10 years that compared the performance of a deep learning algorithm in medical imaging with that of expert clinicians.

They found just two eligible randomised clinical trials and 81 non-randomised studies.

Of the non-randomised studies, only nine were prospective (tracking and collecting information about individuals over time) and just six were tested in a 'real world' clinical setting.

The average number of human experts in the comparator group was just four, while access to raw data and code (to allow independent scrutiny of results) was severely limited.

More than two thirds (58 of 81) of the studies were judged to be at high risk of bias (problems in study design that can influence results), and adherence to recognised reporting standards was often poor.

Three quarters (61 studies) stated that the performance of AI was at least comparable to (or better than) that of clinicians, yet only 31 (38%) stated that further prospective studies or trials were needed.

The researchers point to some limitations, such as the possibility of missed studies and the focus on deep learning studies in medical imaging, so the results may not apply to other types of AI.

Nevertheless, they say that at present, "many arguably exaggerated claims exist about equivalence with (or superiority over) clinicians, which presents a potential risk for patient safety and population health at the societal level."

Overpromising language "leaves studies susceptible to being misinterpreted by the media and the public, and as a result the possible provision of inappropriate care that does not necessarily align with patients' best interests," they warn.

"Maximising patient safety will be best served by ensuring that we develop a high quality and transparently reported evidence base moving forward," they conclude.

Myura Nagendran, Yang Chen, Christopher A Lovejoy, Anthony C Gordon, Matthieu Komorowski, Hugh Harvey, Eric J Topol, John P A Ioannidis, Gary S Collins, Mahiben Maruthappu.
Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies.
BMJ 2020. doi: 10.1136/bmj.m689.
