One in Five UK Doctors Use AI Chatbots

A survey led by researchers at Uppsala University in Sweden reveals that a significant proportion of UK general practitioners (GPs) are integrating generative AI tools, such as ChatGPT, into their clinical workflows. The results highlight the rapidly growing role of artificial intelligence in healthcare - a development that has the potential to revolutionise patient care but also raises significant ethical and safety concerns.

"While there is much talk about the hype of AI, our study suggests that the use of AI in healthcare is not just on the horizon - it's happening now. Doctors are deriving value from these tools. The medical community must act swiftly to address the ethical and practical challenges for patients that generative AI brings," says lead researcher Dr Charlotte Blease, Associate Professor at Uppsala University.

The study reveals that 20 per cent of GPs reported using generative AI tools in their practice, with ChatGPT being the most frequently used tool. Conducted with collaborators at Harvard Medical School in Boston, USA, and the University of Basel, Switzerland, it is the most comprehensive examination of generative AI in clinical practice since the launch of ChatGPT in November 2022.

The study was conducted in February 2024 as part of a monthly omnibus survey and was designed to include GPs from across different regions of the UK. Researchers surveyed 1,006 GPs registered with Doctors.net.uk, the largest professional network for UK doctors.

The aim of the study was to measure the adoption of AI-powered chatbots by GPs across the UK and to understand how these tools are being used in clinical settings. With the advent of large language models (LLMs), there has been substantial interest in their potential to support medical professionals in tasks ranging from documentation to differential diagnosis.

Beyond the overall adoption figure, the study shows how these tools are being used: among GPs who reported using generative AI, 29 per cent used it to generate documentation after patient appointments, while 28 per cent used it to assist with differential diagnosis.

These findings suggest that AI chatbots are becoming valuable assets in medical practice, particularly in reducing administrative burdens and supporting clinical decision-making. However, the use of these tools is not without risks. The potential for AI to introduce errors ("hallucinations"), exacerbate biases, and compromise patient privacy is significant. As these tools continue to evolve, there is an urgent need for the healthcare industry to establish robust guidelines and training programmes to ensure their safe and effective use.

"This study underscores the growing reliance on AI tools by UK GPs, despite the lack of formal training and guidance and the potential risks involved. As the healthcare sector and regulatory authorities continue to grapple with these challenges, the need to train doctors to be 21st century physicians is more pressing than ever," Blease concludes.

Blease CR, Locher C, Gaab J, Hägglund M, Mandl KD.
Generative artificial intelligence in primary care: an online survey of UK general practitioners.
BMJ Health Care Inform. 2024 Sep 17;31(1):e101102. doi: 10.1136/bmjhci-2024-101102
