Patients' Affinity for AI Messages Drops if They Know the Technology Was Used

In a Duke Health-led survey, patients who were shown messages written either by artificial intelligence (AI) or by human clinicians indicated a preference for responses drafted by AI over those written by a human. That preference was diminished, though not erased, when patients were told AI was involved.

The study, published March 11 in JAMA Network Open, showed high overall satisfaction with communications written by both AI and human clinicians, despite patients' preference for AI. This suggests that letting patients know AI was used does not greatly reduce confidence in the message.

"Every health system is grappling with this issue of whether we disclose the use of AI and how," said senior author Anand Chowdhury, M.D., assistant professor in the Department of Medicine at Duke University School of Medicine. "There is a desire to be transparent, and a desire to have satisfied patients. If we disclose AI, what do we lose? That is what our study intended to measure."

Chowdhury and colleagues sent a series of surveys to members of the Duke University Health System patient advisory committee. This is a group of Duke Health patients and community members who help inform how Duke Health communicates with and cares for patients. More than 1,400 people responded to at least one of the surveys.

The surveys focused on three clinical topics: a routine medication refill request (low seriousness), a question about medication side effects (moderate seriousness), and potential cancer seen on imaging (high seriousness).

Human responses were provided by a multidisciplinary team of physicians who were asked to write a realistic response to each survey scenario based on how they typically draft responses to patients. The generative AI responses were written using ChatGPT and reviewed for accuracy by the study physicians, who made only minimal changes.

For each survey, participants were asked to review a vignette presenting one of the clinical topics. Each vignette included a response drafted by either AI or a human clinician, accompanied by a disclosure identifying the author or by no disclosure at all. Participants were then asked to rate their overall satisfaction with the response, the usefulness of the information, and how cared for they felt during the interaction.

Comparing authors, patients preferred AI-drafted messages by an average difference of 0.30 points on a 5-point satisfaction scale. The AI communications tended to be longer, included more detail, and likely seemed more empathetic than human-drafted messages.

"Our study shows us that patients have a slight preference for messages written by AI, even though they are slightly less satisfied when the disclosure informs them that AI was involved," said first author Joanna S. Cavalier, M.D., assistant professor in the Department of Medicine at Duke University School of Medicine.

When the researchers examined the effect of disclosure, telling participants that AI was involved led to lower satisfaction, though not by much: 0.1 points on the 5-point scale. Regardless of the actual author, patients were more satisfied overall with messages when they were not told AI was involved in drafting the response.

"These findings are particularly important in the context of research showing that patients have higher satisfaction when they can connect electronically with their clinicians," Chowdhury said.

"At the same time, clinicians express burnout when their in-basket is full, making the use of automated tools highly attractive to ease that burden," Chowdhury said. "Ultimately these findings give us confidence to use technologies like this to potentially help our clinicians reduce burnout, while still doing the right thing and telling our patients when we use AI."

Cavalier JS, Goldstein BA, Ravitsky V, Bélisle-Pipon JC, Bedoya A, Maddocks J, Klotman S, Roman M, Sperling J, Xu C, Poon EG, Chowdhury A.
Ethics in Patient Preferences for Artificial Intelligence-Drafted Responses to Electronic Messages.
JAMA Netw Open. 2025 Mar 3;8(3):e250449. doi: 10.1001/jamanetworkopen.2025.0449
