AI echo chambers: Chatbots feed our own bias back to us, study finds

AI specialists have long warned about the risk of chatbots reflecting our own bias. New research lays bare that so-called large language models, while appearing to give us facts, are often just giving us the answers we want to read. Jonas Walzberg/dpa

Artificial intelligence chatbots are increasingly inclined to echo the views of people who use them, according to US researchers who found that platforms limit what information they share depending on who is asking.

"Because people are reading a summary paragraph generated by AI, they think they’re getting unbiased, fact-based answers," said Ziang Xiao of Johns Hopkins University in Baltimore.

But such assumptions are largely wrong, Xiao and colleagues argue, after looking at the results of tests involving 272 participants asked to use standard internet searches or AI to help them write about news topics in the US, such as health care and student loans.

The "echo chamber" effect was louder when people sought information from a chatbot using large language models (LLMs) than via conventional searches, the team found.

"Even if a chatbot isn’t designed to be biased, its answers reflect the biases or leanings of the person asking the questions. So really, people are getting the answers they want to hear," Xiao said, ahead of presenting the team’s findings at the Association for Computing Machinery’s CHI Conference on Human Factors in Computing Systems.

Research published on May 10 by the science publisher Cell Press meanwhile showed AI to be sometimes able to bluff even card sharks while playing poker and to come out on top in diplomacy simulations.

The researchers found the bots to be capable of a form of sycophancy, as they were "observed to systematically agree with their conversation partners, regardless of the accuracy of their statements" and "to mirror the user’s stance, even if it means forgoing the presentation of an impartial or balanced viewpoint."

© Deutsche Presse-Agentur GmbH