ChatGPT's Responses to People's Healthcare-Related Queries are Nearly Indistinguishable from those Provided by Humans

ChatGPT's responses to people's healthcare-related queries are nearly indistinguishable from those provided by humans, a new study from NYU Tandon School of Engineering and Grossman School of Medicine reveals, suggesting that chatbots could be effective allies in healthcare providers' communication with patients.

An NYU research team presented 392 people aged 18 and above with ten patient questions and corresponding responses; half of the responses were generated by a human healthcare provider and half by ChatGPT.

Participants were asked to identify the source of each response and to rate their trust in the ChatGPT responses on a 5-point scale ranging from "completely untrustworthy" to "completely trustworthy."

The study found that people have a limited ability to distinguish between chatbot-generated and human-generated responses. On average, participants correctly identified chatbot responses 65.5% of the time and provider responses 65.1% of the time, with accuracy ranging from 49.0% to 85.7% across questions. Results remained consistent across participants' demographic categories.

The study found that participants mildly trusted chatbots' responses overall (average score 3.4), with trust declining as the health-related complexity of the question increased. Logistical questions (e.g., scheduling appointments, insurance queries) had the highest trust rating (average 3.94), followed by preventative care (e.g., vaccines, cancer screenings; average 3.52). Diagnostic and treatment advice received the lowest trust ratings (2.90 and 2.89, respectively).

According to the researchers, the study highlights the possibility that chatbots can assist in patient-provider communication, particularly for administrative tasks and common chronic disease management. Further research is needed, however, before chatbots take on more clinical roles. Providers should remain cautious and exercise critical judgment when curating chatbot-generated advice, given the limitations and potential biases of AI models.

Nov O, Singh N, Mann D.
Putting ChatGPT's Medical Advice to the (Turing) Test: Survey Study.
JMIR Med Educ. 2023 Jul 10;9:e46939. doi: 10.2196/46939.
