Almost All Leading AI Chatbots Show Signs of Cognitive Decline

Almost all leading large language models or "chatbots" show signs of mild cognitive impairment in tests widely used to spot early signs of dementia, finds a study in the Christmas issue of The BMJ.

The results also show that "older" versions of chatbots, like older patients, tend to perform worse on the tests. The authors say these findings "challenge the assumption that artificial intelligence will soon replace human doctors."

Huge advances in the field of artificial intelligence have led to a flurry of excited and fearful speculation as to whether chatbots can surpass human physicians.

Several studies have shown large language models (LLMs) to be remarkably adept at a range of medical diagnostic tasks, but their susceptibility to human impairments such as cognitive decline has not yet been examined.

To fill this knowledge gap, researchers assessed the cognitive abilities of the leading publicly available LLMs - ChatGPT versions 4 and 4o (developed by OpenAI), Claude 3.5 "Sonnet" (developed by Anthropic), and Gemini versions 1 and 1.5 (developed by Alphabet) - using the Montreal Cognitive Assessment (MoCA) test.

The MoCA test is widely used to detect cognitive impairment and early signs of dementia, usually in older adults. Through a number of short tasks and questions, it assesses abilities including attention, memory, language, visuospatial skills, and executive functions. The maximum score is 30 points, with a score of 26 or above generally considered normal.
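
To make that cutoff concrete, the minimal Python sketch below encodes the screening convention described above. The 26-point threshold is the commonly cited one; real clinical interpretation also adjusts for factors such as education.

    # Minimal sketch of the conventional MoCA screening cutoff.
    # Real clinical use also adjusts for education and context.
    def interpret_moca(score: int) -> str:
        """Classify a MoCA total (0-30) against the usual >=26 cutoff."""
        if not 0 <= score <= 30:
            raise ValueError("MoCA totals range from 0 to 30")
        return "normal" if score >= 26 else "suggests cognitive impairment"

    print(interpret_moca(26))  # normal
    print(interpret_moca(16))  # suggests cognitive impairment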

The instructions given to the LLMs for each task were the same as those given to human patients. Scoring followed the official guidelines, with results evaluated by a practising neurologist.
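
As an illustration of what administering a MoCA item to a chatbot might look like, here is a hypothetical sketch assuming the OpenAI Python SDK; the prompt paraphrases a published MoCA memory-registration instruction and is not the study's actual script or code.

    # Hypothetical sketch: sending a MoCA-style instruction to a chatbot,
    # worded as it would be read to a human patient. Assumes the OpenAI
    # Python SDK and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    instruction = (
        "I am going to read you a list of words that you will have to "
        "remember. Repeat them back to me: face, velvet, church, daisy, red."
    )

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": instruction}],
    )
    print(response.choices[0].message.content)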

ChatGPT 4o achieved the highest score on the MoCA test (26 out of 30), followed by ChatGPT 4 and Claude (25 out of 30), with Gemini 1.0 scoring lowest (16 out of 30).

All chatbots showed poor performance in visuospatial skills and executive tasks, such as the trail making task (connecting encircled numbers and letters in ascending, alternating order) and the clock drawing test (drawing a clock face showing a specific time). The Gemini models also failed the delayed recall task (remembering a five-word sequence).
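
For readers unfamiliar with the trail making task, its rule is simple to state in code: targets are visited in ascending order, alternating between numbers and letters (1, A, 2, B, ...). The helper below is a hypothetical illustration, not part of the study.

    # Hypothetical helper spelling out the trail making rule: numbers and
    # letters are visited in ascending, alternating order.
    from string import ascii_uppercase

    def expected_trail(n: int = 5) -> list[str]:
        """Return the correct visiting order for an n-number trail."""
        order = []
        for i in range(n):
            order.append(str(i + 1))
            order.append(ascii_uppercase[i])
        return order

    print(expected_trail())  # ['1', 'A', '2', 'B', '3', 'C', '4', 'D', '5', 'E']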

All chatbots performed well on most other tasks, including naming, attention, language, and abstraction.

In further tests, however, the chatbots were unable to show empathy or to interpret complex visual scenes accurately. Only ChatGPT 4o succeeded in the incongruent stage of the Stroop test, which uses combinations of colour names and font colours to measure how interference affects reaction time.
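
The construction of Stroop stimuli is easy to illustrate: a congruent item prints a colour name in its own colour, while an incongruent item pairs the name with a different font colour, and the respondent must name the font colour rather than read the word. The sketch below is illustrative only, not the study's materials.

    # Illustrative Stroop stimulus generator (not the study's materials).
    # The correct answer is always the font colour, not the printed word.
    import random

    COLOURS = ["red", "green", "blue", "yellow"]

    def stroop_item(incongruent: bool) -> tuple[str, str]:
        """Return (word, font_colour) for one Stroop trial."""
        word = random.choice(COLOURS)
        if not incongruent:
            return word, word
        font = random.choice([c for c in COLOURS if c != word])
        return word, font

    print(stroop_item(incongruent=True))  # e.g. ('red', 'blue')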

These are observational findings, and the authors acknowledge the fundamental differences between the human brain and large language models.

However, they point out that the uniform failure of all large language models in tasks requiring visual abstraction and executive function highlights a significant area of weakness that could impede their use in clinical settings.

As such, they conclude: "Not only are neurologists unlikely to be replaced by large language models any time soon, but our findings suggest that they may soon find themselves treating new, virtual patients - artificial intelligence models presenting with cognitive impairment."

Dayan R, Uliel B, Koplewitz G. Age against the machine - susceptibility of large language models to cognitive impairment: cross sectional analysis. BMJ. 2024 Dec 19;387:e081948. doi: 10.1136/bmj-2024-081948
