Groundbreaking psychology research sheds light on the trust dynamics of human-machine collectives

A series of psychological experiments has shown that people’s treatment of machines differs from their treatment of humans, which influences the establishment of trust within human-machine collectives. The new findings provide insights into the dynamics of human-bot interactions and have implications for understanding human behavior in the emerging context of AI systems. The research was published in Nature Communications.

The researchers were interested in studying how humans and bots interact in online communities and how their behavior is influenced by social norms. They wanted to understand the challenges that arise in mixed human-bot collectives, challenges similar to those faced by human societies, such as sustaining cooperation, preventing exploitation, and stabilizing norms.

“Ever since I was a child, I have always been fascinated by human-bot collectives and their societal implications, inspired by classic movies such as Terminator 2 and Blade Runner,” said study co-author Talal Rahwan, an associate professor of computer science at New York University Abu Dhabi.

“My fascination with these issues grew bigger after I received my PhD in Artificial Intelligence, but I felt that studying human-bot interactions should include the perspective of social scientists and psychologists. With this in mind, I sought out two collaborators, Kinga Makovi (a sociologist) and Jean-François Bonnefon (a psychologist), and together we developed the current study of cooperation and punishment in human-bot collectives.”

To conduct their study, the researchers performed a series of online experiments with a total of 7,917 participants. They created a stylized society (a simplified and artificial representation of a social system) in which participants could take on different roles: Beneficiaries, Helpers, Punishers, and Trustors. The participants played economic games with real financial consequences, which served as proxies for real-life interactions.

In all the experiments, participants were told whether their counterparts were human or bot through textual labels (humans were described as “MTurk worker” and bots as “Bot”) and stylized images of people or robots. This information was presented on the screen where participants made their choices.
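To make the setup concrete, here is a minimal sketch, in Python, of what one such stylized helping-and-trust round could look like. The role names and the “Bot” versus “MTurk worker” labels come from the article; everything else, including the `Player` class, the `simulate_round` function, the payoff amounts, and the trust-gain numbers, is an illustrative assumption rather than the authors' actual experimental design.

```python
# Hypothetical sketch of one stylized helping-then-trust round.
# Role names follow the article; all payoff and trust-gain numbers are
# illustrative assumptions, not the study's experimental parameters.

from dataclasses import dataclass
import random


@dataclass
class Player:
    role: str          # "Helper", "Beneficiary", "Punisher", or "Trustor"
    is_bot: bool       # shown to participants via a "Bot" / "MTurk worker" label
    endowment: float = 10.0


def trust_gain(shared: bool, partner_is_bot: bool) -> float:
    """Assumed pattern mirroring the findings: sharing earns trust,
    but the gain is attenuated when the partner is a bot."""
    if not shared:
        return 0.0
    return 1.0 if not partner_is_bot else 0.6  # illustrative numbers only


def simulate_round(helper: Player, beneficiary: Player,
                   share_prob: float = 0.7) -> float:
    """One stylized round: the Helper may share with the Beneficiary,
    and an observing Trustor updates how much trust the Helper earns."""
    shared = random.random() < share_prob
    if shared:
        transfer = helper.endowment * 0.5
        helper.endowment -= transfer
        beneficiary.endowment += transfer
    # The Trustor sees both the Helper's choice and the Beneficiary's label.
    return trust_gain(shared, beneficiary.is_bot)


if __name__ == "__main__":
    random.seed(0)
    human_beneficiary = Player("Beneficiary", is_bot=False)
    bot_beneficiary = Player("Beneficiary", is_bot=True)
    print("Trust gained (human partner):",
          simulate_round(Player("Helper", is_bot=False), human_beneficiary))
    print("Trust gained (bot partner):",
          simulate_round(Player("Helper", is_bot=False), bot_beneficiary))
```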

Humans typically earn trust by sharing and by punishing those who don’t share, but the researchers observed that these trust gains were smaller when bots were involved: sharing with a bot, or punishing a bot that failed to share, did not earn as much trust as the same behavior directed at another human.

In other words, sharing resources with bots resulted in a smaller increase in trust compared to sharing with humans. Similarly, people who did not share with bot beneficiaries were less likely to be punished compared to those who did not share with humans. Bots also did not receive the same level of trust gain as humans when they shared resources. As a result, trust was not easily established in these mixed human-bot communities, which led to worse collective outcomes.

But the trust gains were not completely eliminated in interactions with bots. This suggests that people carried assumptions about social norms from human societies into these mixed communities, the researchers said.

“It is known that people can signal their trustworthiness to others by acting cooperatively, or by punishing those who do not cooperate with others. We show that the same holds in human-bot collectives, albeit to a lesser extent,” explained co-author Kinga Makovi, an assistant professor at New York University Abu Dhabi.

“More specifically, in our experiments, the trust gained by sharing resources with a bot was less than the trust gained when sharing with a fellow human. We saw something similar when it comes to punishing a bot as opposed to a human. Importantly though, the trust-gains for ‘doing the right thing’ (sharing or punishing those who do not share) were only attenuated, rather than eliminated, suggesting that people carry into human-bot societies similar assumptions about the social norms that they have long relied on within human societies.”

Additionally, the researchers found that when participants were informed about the high consensus regarding the norm of sharing, trust gains generally increased. This suggests that people may alter their behavior once they are made aware of social norms.

“Previous attempts to increase trust and cooperation between humans and bots often tried to make bots look more like humans, but this approach led to disappointing results,” noted co-author Jean-François Bonnefon, a research director at the Toulouse School of Economics.

“We show that there is a better approach: instead of trying to pull bots into the circle of trust by giving them a human appearance, you can nudge people to expand the circle of trust so that it reaches out to nonhuman bots. This is done by making them aware that social norms are shifting, and that many people are starting to think it is a good thing to cooperate with bots, even if they don’t yet realize it is a common opinion.”

The researchers noted that stylized societies with incentivized interactions are useful for studying human cooperation in the lab. However, this approach may be less suitable for studying human-bot cooperation, since bots have no use for money. Participants presumably recognized that bots do not desire money, yet they behaved as though the money mattered to the bots, possibly because prosocial behavior towards bots is seen as a signal to other humans.

“It is totally understandable that people can signal their trustworthiness by acting cooperatively with fellow humans, but it is surprising that they can also do so by acting cooperatively with bots, or by punishing bots who do not act cooperatively,” Rahwan told PsyPost. “After all, machines do not have emotions or needs, and so one could argue that it is perfectly fine not to share with a bot, or that it is pointless to punish a bot. Yet, our study shows otherwise.”

Like any study, the new research includes some caveats. The sample consisted of online participants, primarily from Amazon Mechanical Turk, who tend to be younger, more educated, and more technologically savvy. The results may not generalize to older or less technologically savvy populations. In addition, the use of a stylized society allowed for experimental control but may yield different results in other contexts.

“Our conclusions are based on an experimental setup that is meant to capture a ‘stylized society’ of humans and bots,” Makovi said. “Will people act differently when facing bots in the field? This remains to be seen.”

“The paper would not have come to fruition without Wendi Li who was an undergraduate student at NYU Abu Dhabi when we started the study, and Anahit Sargsyan who supported the data collection and analysis of the multiple iterations,” Rahwan added.

The study, “Trust within human-machine collectives depends on the perceived consensus about cooperative norms”, was authored by Kinga Makovi, Anahit Sargsyan, Wendi Li, Jean-François Bonnefon, and Talal Rahwan.