Towards an AI Diagnosis Like the Doctor's

Artificial intelligence (AI) is an important innovation in diagnostics, because it can quickly learn to recognize abnormalities that a doctor would also label as a disease. But the way these systems work is often opaque, and doctors still have a better "overall picture" when they make a diagnosis. In a new publication, researchers from Radboudumc show how they can make an AI system reveal how it reaches its conclusions, and let it diagnose more like a doctor, thus making AI systems more relevant to clinical practice.

Doctor vs AI

In recent years, artificial intelligence has been on the rise in medical imaging diagnostics. A doctor can look at an X-ray or biopsy to identify abnormalities, but increasingly an AI system can do this as well by means of "deep learning" (see 'Background: what is deep learning?' below). Such a system learns to arrive at a diagnosis on its own, and in some cases it does this just as well as or better than experienced doctors.

The two major differences compared to a human doctor are, first, that AI is often not transparent about how it analyzes the images, and, second, that these systems are quite "lazy". The AI looks at what is needed for a particular diagnosis and then stops. As a result, it does not always identify all abnormalities in a scan, even if the diagnosis is correct. A doctor, especially when considering the treatment plan, looks at the big picture: what do I see? Which anomalies should be removed or treated during surgery?

AI more like the doctor

To make AI systems more attractive for clinical practice, Cristina González-Gonzalo, PhD candidate at the A-eye Research and Diagnostic Image Analysis Group of Radboudumc, developed a two-sided innovation for diagnostic AI. She did this using eye scans showing abnormalities of the retina, specifically diabetic retinopathy and age-related macular degeneration. These abnormalities can be recognized easily by both a doctor and an AI system, but they also tend to occur in groups. A classic AI system would detect one or a few spots and stop the analysis. In the process developed by González-Gonzalo, however, the AI goes through the picture over and over again, learning to ignore the places it has already covered and thus discovering new ones. Moreover, the AI also shows which areas of the eye scan it deemed suspicious, making the diagnostic process transparent.

An iterative process

A basic AI system arrives at a diagnosis based on a single assessment of the eye scan, and thanks to the first contribution by González-Gonzalo, it can now show how it arrived at that diagnosis. This visual explanation shows that the system is indeed lazy, stopping the analysis once it has obtained just enough information to make a diagnosis. That is why she also made the process iterative in an innovative way, forcing the AI to look harder and build more of the 'complete picture' that a doctor would have.
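Such a visual explanation can be produced in several ways; the sketch below shows one generic approach (an input-gradient saliency map in PyTorch), not the method used in the publication. The model, image tensor, and class index are illustrative placeholders.

```python
import torch

def saliency_map(model, image, target_class):
    """Return an (H, W) heatmap of how strongly each pixel influenced the score."""
    model.eval()
    image = image.clone().detach().unsqueeze(0).requires_grad_(True)  # add batch dimension
    score = model(image)[0, target_class]                             # score for the suspected disease
    score.backward()                                                  # gradients w.r.t. the input pixels
    return image.grad.abs().max(dim=1).values.squeeze(0)              # collapse color channels to one map
```

Bright regions in such a heatmap correspond to the areas the network relied on, which is what the article means by showing which parts of the scan the AI deemed suspicious.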

How did the system learn to look at the same eye scan with 'fresh eyes'? It ignored the familiar parts by digitally filling in the abnormalities it had already found with healthy tissue from around them. The results of all the assessment rounds are then added together to produce the final diagnosis. In the study, this approach improved the sensitivity of detecting diabetic retinopathy and age-related macular degeneration by 11.2 ± 2.0% per image. The project shows that it is possible to have an AI system assess images more like a doctor, while making transparent how it does so. This might make these systems easier to trust and thus easier to adopt in clinical practice.
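As a rough illustration of this iterative idea (a sketch, not the published implementation), the loop below scores a scan, accumulates the evidence map, hides the most suspicious region by filling it with surrounding tissue, and repeats. The helpers classify, locate, and inpaint are hypothetical placeholders.

```python
import numpy as np

def iterative_assessment(scan, classify, locate, inpaint, max_rounds=5, threshold=0.5):
    """Assess a scan in several rounds instead of stopping at the first finding."""
    evidence = np.zeros(scan.shape[:2])           # accumulated suspicion heatmap
    current = scan.copy()
    for _ in range(max_rounds):
        heatmap, confidence = classify(current)   # per-pixel evidence + disease confidence
        if confidence < threshold:                # nothing suspicious left to find
            break
        evidence = np.maximum(evidence, heatmap)  # combine evidence from all rounds
        region = locate(heatmap)                  # most suspicious area in this round
        current = inpaint(current, region)        # fill it in with surrounding healthy tissue
    return evidence                               # a more complete picture of the scan
```

The key point, as described above, is that the final diagnosis is based on the combined evidence from all rounds rather than on a single pass.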

Background: what is 'deep learning'?

Deep learning is a term used for systems that learn in a way that is loosely similar to how our brain works. It consists of networks of electronic 'neurons', each of which learns to recognize one aspect of the relevant images. It then follows the principles of 'learning by doing' and 'practice makes perfect'. The system is fed more and more images together with the relevant information - in this case, whether there is an abnormality in the retina and, if so, which disease it is. The system then learns which characteristics belong to those diseases, and the more pictures it sees, the better it can recognize those characteristics in undiagnosed images. We do something similar with small children: we repeatedly hold up an object, say an apple, and tell them it is an apple. After some time, you no longer have to say it, even though each apple is slightly different. Another major advantage of these systems is that they complete their training much faster than humans and can work 24 hours a day.
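As a minimal sketch of this 'learning by doing' loop, the toy example below trains a tiny network in PyTorch. The random stand-in data replaces real labeled eye scans, and the three classes (healthy, diabetic retinopathy, macular degeneration) are assumed for illustration only.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data: 64 fake 3-channel "scans", each with one of three labels.
images = torch.randn(64, 3, 64, 64)
labels = torch.randint(0, 3, (64,))
loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)

# A tiny network of artificial "neurons"; real diagnostic systems are much deeper.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 3),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# "Practice makes perfect": repeatedly show labeled examples and adjust the network.
for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)   # how far the predictions are from the labels
        loss.backward()
        optimizer.step()              # nudge the parameters toward better predictions
```

Each pass over the labeled examples nudges the network's parameters so that its predictions match the labels better; with enough examples, it can then recognize the same characteristics in images it has never seen.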

C. González-Gonzalo, B. Liefers, B. van Ginneken, C.I. Sánchez.
Iterative augmentation of visual evidence for weakly-supervised lesion localization in deep interpretability frameworks: application to color fundus images.
IEEE Transactions on Medical Imaging, 2020. doi: 10.1109/TMI.2020.2994463.
