Towards an AI Diagnosis Like the Doctor's

Artificial intelligence (AI) is an important innovation in diagnostics, because it can quickly learn to recognize abnormalities that a doctor would also label as a disease. But the way these systems work is often opaque, and doctors still have a better "overall picture" when they make a diagnosis. In a new publication, researchers from Radboudumc show how they can make an AI system reveal how it works, and let it diagnose more like a doctor, thus making AI systems more relevant to clinical practice.

Doctor vs AI

In recent years, artificial intelligence has been on the rise in diagnosis based on medical imaging. A doctor can look at an X-ray or biopsy to identify abnormalities, but increasingly this can also be done by an AI system by means of "deep learning" (see 'Background: what is deep learning?' below). Such a system learns to arrive at a diagnosis on its own, and in some cases it does this just as well as or better than experienced doctors.

There are two major differences compared to a human doctor: first, AI is often not transparent about how it analyzes the images, and second, these systems are quite "lazy". AI looks at what is needed for a particular diagnosis, and then stops. This means that not all abnormalities in a scan are identified, even if the diagnosis is correct. A doctor, especially when considering the treatment plan, looks at the big picture: what do I see? Which anomalies should be removed or treated during surgery?

AI more like the doctor

To make AI systems more attractive for clinical practice, Cristina González Gonzalo, PhD candidate at the A-eye Research and Diagnostic Image Analysis Group of Radboudumc, developed a two-sided innovation for diagnostic AI. She did this based on eye scans containing abnormalities of the retina - specifically diabetic retinopathy and age-related macular degeneration. These abnormalities can be easily recognized by both a doctor and AI, but they also tend to occur in groups. A classic AI would diagnose one or a few spots and stop the analysis. In the process developed by González Gonzalo, however, the AI goes through the picture over and over again, learning to ignore the places it has already passed and thus discovering new ones. Moreover, the AI also shows which areas of the eye scan it deemed suspicious, making the diagnostic process transparent.

An iterative process

A basic AI comes up with a diagnosis based on a single assessment of the eye scan, and thanks to the first contribution by González Gonzalo, it can show how it arrived at that diagnosis. This visual explanation shows that the system is indeed lazy, stopping the analysis after it has obtained just enough information to make a diagnosis. That is why she also made the process iterative in an innovative way, forcing the AI to look harder and build more of the 'complete picture' that radiologists would have.

How did the system learn to look at the same eye scan with 'fresh eyes'? The system ignored the familiar parts by digitally filling in the abnormalities already found, using healthy tissue from around the abnormality. The results of all the assessment rounds are then added together, and that produces the final diagnosis. In the study, this approach improved the sensitivity of the detection of diabetic retinopathy and age-related macular degeneration by 11.2 ± 2.0% per image. What this project shows is that it is possible to have an AI system assess images more like a doctor, and to make transparent how it does so. This might make these systems easier to trust, and thus easier for radiologists to adopt.
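The iterative idea described above can be sketched in code. This is only an illustrative toy, not the authors' published method: `detect_lesion` and `inpaint` are hypothetical stand-ins (a real system would use a trained deep network and a learned inpainting model), and the image is reduced to a small grid of suspicion scores. The sketch shows the loop structure: detect a finding, fill it in with an estimate of the healthy background, and rescan until nothing suspicious remains.

```python
import numpy as np

def detect_lesion(image, threshold=0.5):
    """Hypothetical single-pass detector: return the coordinates of the
    single most suspicious pixel, or None if nothing exceeds the
    threshold. This mimics a 'lazy' classifier that stops after one
    finding per pass."""
    idx = np.unravel_index(np.argmax(image), image.shape)
    if image[idx] <= threshold:
        return None
    return idx

def inpaint(image, pos, radius=1, threshold=0.5):
    """Digitally fill in a found abnormality with an estimate of the
    surrounding healthy tissue, so the next pass ignores it."""
    r, c = pos
    out = image.copy()
    r0, r1 = max(0, r - radius), min(image.shape[0], r + radius + 1)
    c0, c1 = max(0, c - radius), min(image.shape[1], c + radius + 1)
    # Crude 'healthy background' estimate: median of non-suspicious pixels.
    background = np.median(image[image <= threshold])
    out[r0:r1, c0:c1] = background
    return out

def iterative_assessment(image, max_rounds=10):
    """Run the detector repeatedly, inpainting each finding, and
    accumulate all detections into one 'complete picture'."""
    findings = []
    current = image.copy()
    for _ in range(max_rounds):
        pos = detect_lesion(current)
        if pos is None:
            break
        findings.append(pos)
        current = inpaint(current, pos)
    return findings
```

On a toy 5x5 "scan" with two bright spots, a single pass would report only the brightest one, while `iterative_assessment` returns both, which is the behaviour the study exploits to raise sensitivity.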

Background: what is 'deep learning'?

Deep learning is a term used for systems that learn in a way similar to how our brain works. Such a system consists of networks of electronic 'neurons', each of which learns to recognize one aspect of the desired image. It then follows the principles of 'learning by doing' and 'practice makes perfect'. The system is fed more and more images that include relevant information stating - in this case - whether there is an anomaly in the retina, and if so, which disease it is. The system then learns to recognize which characteristics belong to those diseases, and the more pictures it sees, the better it can recognize those characteristics in undiagnosed images. We do something similar with small children: we repeatedly hold up an object, say an apple, in front of them and say that it is an apple. After some time, we don't have to say it anymore - even though each apple is slightly different. Another major advantage of these systems is that they complete their training much faster than humans and can work 24 hours a day.
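The 'practice makes perfect' idea can be illustrated with a minimal sketch. This is not a deep network but its single-neuron building block (logistic regression), trained on made-up toy data: each 'image' is reduced to two numeric features, and the label says whether an anomaly is present. All names and data here are illustrative assumptions; the point is only the loop that repeatedly shows labelled examples and nudges the weights towards fewer mistakes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'images': 2-feature vectors. Label 1 = 'anomaly present'.
# The anomaly class has systematically higher feature values.
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # 50 healthy examples
               rng.normal(1.5, 0.3, (50, 2))])  # 50 anomalous examples
y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2)  # learned weights, one per feature
b = 0.0          # learned bias
lr = 0.5         # learning rate

# 'Learning by doing': show the labelled examples over and over and
# adjust the weights a little each time (gradient descent).
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

# After training, the neuron recognizes the pattern in examples
# it was shown, analogous to the child learning what an apple is.
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = np.mean(preds == y)
```

A deep network stacks many such neurons in layers so it can learn far richer characteristics than a single threshold, but the training principle is the same.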

C. González-Gonzalo, B. Liefers, B. van Ginneken, C.I. Sánchez.
Iterative augmentation of visual evidence for weakly-supervised lesion localization in deep interpretability frameworks: application to color fundus images.
IEEE Transactions on Medical Imaging, 2020. doi: 10.1109/TMI.2020.2994463.
