New Tool Overcomes Major Hurdle in Clinical AI Design

Harvard Medical School scientists and colleagues at Stanford University have developed an artificial intelligence (AI) diagnostic tool that can detect diseases on chest X-rays directly from natural-language descriptions contained in accompanying clinical reports.

The step is considered a major advance in clinical AI design because most current AI models must be trained on vast amounts of data that humans have laboriously labeled by hand.

A report on the work, published Sept. 15 in Nature Biomedical Engineering, shows that the model, called CheXzero, performed on par with human radiologists in its ability to detect pathologies on chest X-rays.

The team has made the code for the model publicly available for other researchers.

Most AI models require labeled datasets during their "training" so they can learn to correctly identify pathologies. This process is especially burdensome for medical image-interpretation tasks since it involves large-scale annotation by human clinicians, which is often expensive and time-consuming. For instance, to label a chest X-ray dataset, expert radiologists would have to look at hundreds of thousands of X-ray images one by one and explicitly annotate each one with the conditions detected. While more recent AI models have tried to address this labeling bottleneck by learning from unlabeled data in a "pre-training" stage, they eventually require fine-tuning on labeled data to achieve high performance.

By contrast, the new model is self-supervised: it learns on its own, without the need for hand-labeled data before or after training. The model relies solely on chest X-rays and the English-language notes found in accompanying X-ray reports.

"We’re living the early days of the next-generation medical AI models that are able to perform flexible tasks by directly learning from text," said study lead investigator Pranav Rajpurkar, assistant professor of biomedical informatics in the Blavatnik Institute at HMS. "Up until now, most AI models have relied on manual annotation of huge amounts of data - to the tune of 100,000 images - to achieve a high performance. Our method needs no such disease-specific annotations.

"With CheXzero, one can simply feed the model a chest X-ray and corresponding radiology report, and it will learn that the image and the text in the report should be considered as similar - in other words, it learns to match chest X-rays with their accompanying report," Rajpurkar added. "The model is able to eventually learn how concepts in the unstructured text correspond to visual patterns in the image."

The model was "trained" on a publicly available dataset containing more than 377,000 chest X-rays and more than 227,000 corresponding clinical notes. Its performance was then tested on two separate datasets of chest X-rays and corresponding notes collected from two different institutions, one of which was in a different country. This diversity of datasets was meant to ensure that the model performed equally well when exposed to clinical notes that may use different terminology to describe the same finding.

Upon testing, CheXzero successfully identified pathologies that were not explicitly annotated by human clinicians. It outperformed other self-supervised AI tools and performed with accuracy similar to that of human radiologists.
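Zero-shot detection of this kind is typically done by scoring an image against short positive and negative text prompts and comparing the two similarities. The following is a hypothetical sketch under that assumption; the `image_encoder`, `text_encoder`, and `tokenizer` callables and the prompt wording are placeholders for illustration, not the released interface.

```python
import torch
import torch.nn.functional as F

def zero_shot_probability(image, pathology, image_encoder, text_encoder, tokenizer):
    """Estimate P(pathology present) from text prompts alone, with no labels."""
    # Compare the X-ray to a positive and a negative description.
    prompts = [pathology, f"no {pathology}"]  # e.g. "pneumonia" vs. "no pneumonia"

    with torch.no_grad():
        img = F.normalize(image_encoder(image), dim=-1)              # shape (1, d)
        txt = F.normalize(text_encoder(tokenizer(prompts)), dim=-1)  # shape (2, d)

    # Cosine similarity to each prompt, turned into a two-way probability.
    sims = (img @ txt.t()).squeeze(0)
    return torch.softmax(sims, dim=-1)[0].item()
```

In use, a finding would be flagged when the returned probability crosses a chosen threshold, with no pathology-specific labels needed at any point.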

The approach, the researchers said, could eventually be applied to imaging modalities well beyond X-rays, including CT scans, MRIs, and echocardiograms.

"CheXzero shows that accuracy of complex medical image interpretation no longer needs to remain at the mercy of large labeled datasets," said study co-first author Ekin Tiu, an undergraduate student at Stanford and a visiting researcher at HMS. "We use chest X-rays as a driving example, but in reality CheXzero's capability is generalizable to a vast array of medical settings where unstructured data is the norm, and precisely embodies the promise of bypassing the large-scale labeling bottleneck that has plagued the field of medical machine learning."

Tiu E, Talius E, Patel P, et al. Expert-level detection of pathologies from unannotated chest X-ray images via self-supervised learning. Nat Biomed Eng (2022). doi: 10.1038/s41551-022-00936-9
