Artificial Intelligence: Unexpected Results

Artificial intelligence (AI) is on the rise. Until now, AI applications have generally had a "black box" character: how the AI arrives at its results remains hidden. Prof. Dr. Jürgen Bajorath, a cheminformatics scientist at the University of Bonn, and his team have developed a method that reveals how certain AI applications used in pharmaceutical research work. The results are unexpected: when predicting drug potency, the AI programs largely remembered known data and hardly learned specific chemical interactions. The results have now been published in Nature Machine Intelligence.

Which drug molecule is most effective? Researchers are feverishly searching for efficient active substances to combat diseases. These compounds often dock onto proteins, which are usually enzymes or receptors that trigger a specific chain of physiological actions. In some cases, certain molecules are also intended to block undesirable reactions in the body - such as an excessive inflammatory response. Given the abundance of available chemical compounds, at first glance this research is like searching for a needle in a haystack. Drug discovery therefore attempts to use scientific models to predict which molecules will best dock to the respective target protein and bind strongly. These potential drug candidates are then investigated in more detail in experimental studies.

Since the advance of AI, drug discovery research has also increasingly been using machine learning applications. Graph neural networks (GNNs) provide one of several opportunities for such applications. They are adapted to predict, for example, how strongly a certain molecule binds to a target protein. To this end, GNN models are trained with graphs that represent complexes formed between proteins and chemical compounds (ligands). Graphs generally consist of nodes representing objects and edges representing relationships between nodes. In graph representations of protein-ligand complexes, edges either connect only protein nodes or only ligand nodes, representing their respective structures, or they connect protein and ligand nodes, representing specific protein-ligand interactions.
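
To make this graph structure concrete, here is a minimal Python sketch (an illustration only, not the data pipeline used in the study) that builds a toy protein-ligand complex graph with the three edge types described above, using the networkx library; all node names, bonds and contacts are made up.

```python
# Illustrative only: a toy protein-ligand complex graph with typed edges.
import networkx as nx

def build_complex_graph(protein_bonds, ligand_bonds, interactions):
    """Each argument is a list of (u, v) node pairs; 'P*' nodes are assumed
    to belong to the protein and 'L*' nodes to the ligand."""
    g = nx.Graph()
    for u, v in protein_bonds:
        g.add_edge(u, v, kind="protein-protein")   # protein structure
    for u, v in ligand_bonds:
        g.add_edge(u, v, kind="ligand-ligand")     # ligand structure
    for u, v in interactions:
        g.add_edge(u, v, kind="protein-ligand")    # specific interactions
    return g

g = build_complex_graph(
    protein_bonds=[("P1", "P2"), ("P2", "P3")],
    ligand_bonds=[("L1", "L2")],
    interactions=[("P2", "L1")],
)
print(nx.get_edge_attributes(g, "kind"))
```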

"How GNNs arrive at their predictions is like a black box we can't glimpse into," says Prof. Dr. Jürgen Bajorath. The chemoinformatics researcher from the LIMES Institute at the University of Bonn, the Bonn-Aachen International Center for Information Technology (B-IT) and the Lamarr Institute for Machine Learning and Artificial Intelligence in Bonn, together with colleagues from Sapienza University in Rome, has analyzed in detail whether graph neural networks actually learn protein-ligand interactions to predict how strongly an active substance binds to a target protein.

How do the AI applications work?

The researchers analyzed a total of six different GNN architectures using their specially developed "EdgeSHAPer" method and a conceptually different methodology for comparison. These computer programs "screen" whether the GNNs learn the most important interactions between a compound and a protein and thereby predict the potency of the ligand, as intended and anticipated by researchers - or whether the AI arrives at its predictions in other ways. "The GNNs are very dependent on the data they are trained with," says the first author of the study, PhD candidate Andrea Mastropietro from Sapienza University in Rome, who conducted part of his doctoral research in Prof. Bajorath's group in Bonn.
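
The general idea behind such edge-attribution methods is to estimate how much each individual edge contributes to the model's prediction. The sketch below shows a generic Monte Carlo (permutation-sampling) Shapley-value estimate of edge importance; it illustrates the principle only and is not the published EdgeSHAPer implementation, and the `predict` callable standing in for a trained GNN is a hypothetical placeholder.

```python
# Generic Monte Carlo Shapley-value estimate of edge importance.
# Conceptual sketch only - not the published EdgeSHAPer code.
# `predict` is a hypothetical callable: it takes the set of retained edges
# and returns the model output for the graph restricted to those edges.
import random

def shapley_edge_importance(edges, predict, n_samples=200, seed=0):
    rng = random.Random(seed)
    importance = {e: 0.0 for e in edges}
    for _ in range(n_samples):
        order = list(edges)
        rng.shuffle(order)
        included = set()
        prev = predict(frozenset(included))
        for e in order:
            included.add(e)
            curr = predict(frozenset(included))
            importance[e] += curr - prev   # marginal contribution of edge e
            prev = curr
    return {e: score / n_samples for e, score in importance.items()}

# Toy usage: with predict = len, every edge gets importance 1.0.
print(shapley_edge_importance([("P2", "L1"), ("L1", "L2")], predict=len))
```

Averaging each edge's marginal contribution over many random edge orderings yields a per-edge importance score that can then be inspected.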

The scientists trained the six GNNs with graphs extracted from structures of protein-ligand complexes, for which the mode of action and binding strength of the compounds to their target proteins were already known from experiments. The trained GNNs were then tested on other complexes. The subsequent EdgeSHAPer analysis made it possible to understand how the GNNs generated their apparently promising predictions.
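
In machine learning terms, this is a supervised graph regression task: each complex graph carries an experimentally measured binding affinity as its label, and the network is trained to reproduce it. The following is a minimal sketch of such a setup, assuming PyTorch Geometric and toy random graphs; the two-layer GCN is a generic stand-in, not one of the six architectures examined in the study.

```python
# Minimal, assumed setup: GNN regression of binding affinity from
# protein-ligand complex graphs (toy data, generic architecture).
import torch
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GCNConv, global_mean_pool

class AffinityGNN(torch.nn.Module):
    def __init__(self, in_dim=16, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.out = torch.nn.Linear(hidden, 1)   # scalar affinity prediction

    def forward(self, data):
        h = self.conv1(data.x, data.edge_index).relu()
        h = self.conv2(h, data.edge_index).relu()
        h = global_mean_pool(h, data.batch)      # one vector per complex
        return self.out(h).squeeze(-1)

def toy_complex(n_nodes=8):
    # Placeholder graph: random node features, random edges, random "affinity"
    edge_index = torch.randint(0, n_nodes, (2, 2 * n_nodes))
    return Data(x=torch.randn(n_nodes, 16), edge_index=edge_index,
                y=torch.rand(1))

train_loader = DataLoader([toy_complex() for _ in range(32)], batch_size=8)
model = AffinityGNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):                           # train on "known" complexes
    for batch in train_loader:
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(batch), batch.y)
        loss.backward()
        opt.step()
```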

"If the GNNs do what they are expected to, they need to learn the interactions between the compound and target protein and the predictions should be determined by prioritizing specific interactions," explains Prof. Bajorath. According to the research team's analyses, however, the six GNNs essentially failed to do so. Most GNNs only learned a few protein-drug interactions and mainly focused on the ligands. Bajorath: "To predict the binding strength of a molecule to a target protein, the models mainly 'remembered' chemically similar molecules that they encountered during training and their binding data, regardless of the target protein. These learned chemical similarities then essentially determined the predictions."

According to the scientists, this is largely reminiscent of the "Clever Hans effect". This effect refers to a horse that could apparently count. How often Hans tapped his hoof was supposed to indicate the result of a calculation. As it turned out later, however, the horse was not able to calculate at all, but deduced expected results from nuances in the facial expressions and gestures of his companion.

What do these findings mean for drug discovery research? "It is generally not tenable that GNNs learn chemical interactions between active substances and proteins," says the cheminformatics scientist. Their predictions are largely overrated, because forecasts of equivalent quality can be made using chemical knowledge and simpler methods. However, the research also offers opportunities for AI. Two of the GNN models examined displayed a clear tendency to learn more interactions when the potency of test compounds increased. "It's worth taking a closer look here," says Bajorath. Perhaps these GNNs could be further improved in the desired direction through modified representations and training techniques. However, the assumption that physical quantities can be learned on the basis of molecular graphs should generally be treated with caution. "AI is not black magic," says Bajorath.

Shedding even more light into the darkness of AI

In fact, he sees the previously published open-access EdgeSHAPer method and other specially developed analysis tools as promising approaches for shedding light on the black box of AI models. His team's approach currently focuses on GNNs and new "chemical language models". "The development of methods for explaining predictions of complex models is an important area of AI research. There are also approaches for other network architectures, such as language models, that help to better understand how machine learning arrives at its results," says Bajorath. He expects that exciting things will soon also be happening in the field of "Explainable AI" at the Lamarr Institute, where he is a PI and Chair of AI in the Life Sciences.

Mastropietro A, Pasculli G, Bajorath J.
Learning characteristics of graph neural networks predicting protein-ligand affinities.
Nat Mach Intell, 2023. doi: 10.1038/s42256-023-00756-9
