Chatbots Tell People What They Want to Hear

Chatbots share limited information, reinforce ideologies, and, as a result, can lead to more polarized thinking when it comes to controversial issues, according to new Johns Hopkins University-led research.

The study challenges perceptions that chatbots are impartial and provides insight into how using conversational search systems could widen the public divide on hot-button issues and leave people vulnerable to manipulation.

"Because people are reading a summary paragraph generated by AI, they think they’re getting unbiased, fact-based answers," said lead author Ziang Xiao, an assistant professor of computer science at Johns Hopkins who studies human-AI interactions. "Even if a chatbot isn’t designed to be biased, its answers reflect the biases or leanings of the person asking the questions. So really, people are getting the answers they want to hear."

Xiao and his team shared their findings at the Association for Computing Machinery's CHI Conference on Human Factors in Computing Systems on Monday, May 13.

To see how chatbots influence online searches, the team compared how people interacted with different search systems and how they felt about controversial issues before and after using them.

The researchers asked 272 participants to write out their thoughts about a topic such as health care, student loans, or sanctuary cities, and then look up more information online about that topic using either a chatbot or a traditional search engine built for the study. After considering the search results, participants wrote a second essay and answered questions about the topic. Researchers also had participants read two opposing articles and questioned them about how much they trusted the information and whether they found the viewpoints extreme.

Because chatbots offered a narrower range of information than traditional web searches and provided answers that reflected the participants’ preexisting attitudes, the participants who used them became more invested in their original ideas and had stronger reactions to information that challenged their views, the researchers found.

"People tend to seek information that aligns with their viewpoints, a behavior that often traps them in an echo chamber of like-minded opinions," Xiao said. "We found that this echo chamber effect is stronger with the chatbots than traditional web searches."

The echo chamber stems, in part, from the way participants interacted with chatbots, Xiao said. Rather than typing in keywords, as people do for traditional search engines, chatbot users tended to type in full questions, such as, "What are the benefits of universal health care?" or "What are the costs of universal health care?" A chatbot would answer with a summary that included only benefits or costs.

"With chatbots, people tend to be more expressive and formulate questions in a more conversational way. It's a function of how we speak," Xiao said. "But our language can be used against us."

AI developers can train chatbots to extract clues from questions and identify people's biases, Xiao said. Once a chatbot knows what a person likes or doesn’t like, it can tailor its responses to match.

In fact, when the researchers created a chatbot with a hidden agenda, designed to agree with people, the echo chamber effect was even stronger.

To try to counteract the echo chamber effect, researchers trained a chatbot to provide answers that disagreed with participants. People’s opinions didn’t change, Xiao said. The researchers also programmed a chatbot to link to source information to encourage people to fact-check, but only a few participants did.

"Given AI-based systems are becoming easier to build, there are going to be opportunities for malicious actors to leverage AIs to make a more polarized society," Xiao said. "Creating agents that always present opinions from the other side is the most obvious intervention, but we found they don't work."
