ChatGPT Shows Limited Ability to Recommend Guidelines-Based Cancer Treatments

For many patients, the internet serves as a powerful tool for self-education on medical topics. With ChatGPT now at patients’ fingertips, researchers from Brigham and Women’s Hospital, a founding member of the Mass General Brigham healthcare system, assessed how consistently the artificial intelligence chatbot provides recommendations for cancer treatment that align with National Comprehensive Cancer Network (NCCN) guidelines. Their findings, published in JAMA Oncology, show that in approximately one-third of cases, ChatGPT 3.5 provided an inappropriate (“non-concordant”) recommendation, highlighting the need for awareness of the technology’s limitations.

"Patients should feel empowered to educate themselves about their medical conditions, but they should always discuss with a clinician, and resources on the Internet should not be consulted in isolation," said corresponding author Danielle Bitterman, MD, of the Department of Radiation Oncology and the Artificial Intelligence in Medicine (AIM) Program of Mass General Brigham. "ChatGPT responses can sound a lot like a human and can be quite convincing. But, when it comes to clinical decision-making, there are so many subtleties for every patient’s unique situation. A right answer can be very nuanced, and not necessarily something ChatGPT or another large language model can provide."

The emergence of artificial intelligence tools in health has been groundbreaking and has the potential to positively reshape the continuum of care. Mass General Brigham, as one of the nation’s top integrated academic health systems and largest innovation enterprises, is leading the way in conducting rigorous research on new and emerging technologies to inform the responsible incorporation of AI into care delivery, workforce support, and administrative processes.

Although medical decision-making can be influenced by many factors, Bitterman and colleagues chose to evaluate the extent to which ChatGPT's recommendations aligned with the NCCN guidelines, which are used by physicians at institutions across the country. They focused on the three most common cancers (breast, prostate and lung cancer) and prompted ChatGPT to provide a treatment approach for each based on the severity of the disease. In total, the researchers included 26 unique diagnosis descriptions and used four slightly different prompts to ask ChatGPT for a treatment approach, generating a total of 104 prompts.
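
As a rough illustration of this setup, the sketch below (not the authors' actual code) shows how a small set of diagnosis descriptions and prompt templates could be combined into queries and sent to the model named in the study via the OpenAI Python client. The diagnosis strings and template wording here are placeholder assumptions, and the gpt-3.5-turbo-0301 snapshot may no longer be served.

# Minimal sketch, assuming the openai Python package and an OPENAI_API_KEY in the environment.
# Diagnosis descriptions and prompt templates below are illustrative, not those used in the study.
from itertools import product
from openai import OpenAI

diagnoses = [
    "stage II invasive ductal carcinoma of the breast",
    "intermediate-risk localized prostate adenocarcinoma",
    # ... further diagnosis descriptions covering breast, prostate and lung cancer (26 in the study)
]

prompt_templates = [
    "What is a recommended treatment for {dx}?",
    "How should {dx} be treated?",
    "What treatment approach is recommended for a patient with {dx}?",
    "Provide a treatment plan for {dx}.",
]

client = OpenAI()

results = []
for template, dx in product(prompt_templates, diagnoses):
    prompt = template.format(dx=dx)
    # gpt-3.5-turbo-0301 is the snapshot named in the article; if it has been retired,
    # substitute a currently available gpt-3.5-turbo snapshot.
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo-0301",
        messages=[{"role": "user", "content": prompt}],
    )
    results.append((prompt, reply.choices[0].message.content))

# With all 26 diagnoses listed, 4 templates x 26 diagnoses = 104 prompts.
print(f"Collected {len(results)} responses")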

Nearly all responses (98 percent) included at least one treatment approach that agreed with NCCN guidelines. However, the researchers found that 34 percent of these responses also included one or more non-concordant recommendations, which were sometimes difficult to detect amidst otherwise sound guidance. A non-concordant treatment recommendation was defined as one that was only partially correct: for a locally advanced breast cancer, for example, recommending surgery alone without mentioning another therapy modality. Notably, complete agreement in scoring occurred in only 62 percent of cases, underscoring both the complexity of the NCCN guidelines themselves and the extent to which ChatGPT's output could be vague or difficult to interpret.

In 12.5 percent of cases, ChatGPT produced “hallucinations,” or a treatment recommendation entirely absent from NCCN guidelines. These included recommendations of novel therapies, or curative therapies for non-curative cancers. The authors emphasized that this form of misinformation can incorrectly set patients’ expectations about treatment and potentially impact the clinician-patient relationship.

Going forward, the researchers are exploring how well both patients and clinicians can distinguish between medical advice written by a clinician versus a large language model (LLM) like ChatGPT. They are also prompting ChatGPT with more detailed clinical cases to further evaluate its clinical knowledge.

The authors used GPT-3.5-turbo-0301, one of the largest models available at the time they conducted the study and the model class that is currently used in the open-access version of ChatGPT (a newer version, GPT-4, is only available with the paid subscription). They also used the 2021 NCCN guidelines, because GPT-3.5-turbo-0301 was developed using data up to September 2021. While results may vary if other LLMs and/or clinical guidelines are used, the researchers emphasize that many LLMs are similar in the way they are built and the limitations they possess.

"It is an open research question as to the extent LLMs provide consistent logical responses as oftentimes 'hallucinations' are observed," said first author Shan Chen, MS, of the AIM Program. "Users are likely to seek answers from the LLMs to educate themselves on health-related topics - similarly to how Google searches have been used. At the same time, we need to raise awareness that LLMs are not the equivalent of trained medical professionals."

Chen S, Kann BH, Foote MB, Aerts HJWL, Savova GK, Mak RH, Bitterman DS.
Use of Artificial Intelligence Chatbots for Cancer Treatment Information.
JAMA Oncol. 2023 Aug 24:e232954. doi: 10.1001/jamaoncol.2023.2954
