People are more likely to conform to artificial intelligence in objective tasks, study reveals

New research sheds light on how much people conform to information given by a human compared to information given by an artificial intelligence (AI) agent. Results showed that participants conformed more to information given by an AI in counting tasks with a single correct answer (objective tasks). They conformed more to information given by a human in tasks based on attributing meaning to images (subjective tasks). The study was published in Acta Psychologica.

Social influence refers to processes by which individuals or groups affect the attitudes, beliefs, and decisions of others. Forms of social influence include conformity (adjusting attitudes, beliefs, and behaviors to align with the norms of a group or another person), compliance (acceding to a direct request or demand from another person or group), obedience, and persuasion.

Through most of history, the “others” who were able to affect people’s thoughts, emotions, and behaviors were other humans. However, with the advent of artificial intelligence and non-human agents such as chatbots, virtual assistants, and robots, the sources of possible social influence have expanded beyond humans.

Study author Paolo Riva and his colleagues wanted to compare how much people would be influenced by information provided by another human versus information provided by an artificial intelligence agent. They expected that this might depend on the task at hand. If a task was objective, i.e., if a participant was asked to count something, they expected the AI to be more influential.

However, if a task involved attributing meaning, i.e., if it was subjective, the researchers expected a human to be more influential. They conducted two experiments, one with an objective and one with a subjective task.

Participants were recruited via Qualtrics. One hundred seventy-seven participants took part in the first study, and 102 completed the second.

In the first study, participants were shown a set of 8 black images with white dots on them. Each image was displayed for 7 seconds, and participants’ task was to estimate the number of dots in the image.

Each image contained between 138 and 288 dots. Seven seconds was far too little time to count them, but it was enough for participants to form a rough estimate of how many there might be. When each image disappeared, participants were asked to provide their estimate of the number of dots.

After this, they were presented with two estimates of the number of dots and told that one was provided by an AI and the other by a human. Participants were randomly divided into two groups. In the first group, the AI systematically overestimated the number of dots by about 15%, while the “human” systematically underestimated it by the same amount.

In the other group, the roles were reversed – the AI underestimated, while the “human” overestimated. After viewing these estimates, participants were asked to provide their own estimates of the number of dots again.

In study 2, participants were presented with images taken from the card game Dixit, with no time limit for viewing. Each image was paired with two concepts that previous evaluations had shown to be equally well associated with it. Participants were told that one concept was proposed by an AI and the other by a human.

For each participant, a program randomly decided which concept would be presented as proposed by an AI and which by a human. Participants were then asked to rate how representative each of the two concepts was of the image they had been shown.

Results of study 1 showed that participants conformed more to the influence of the AI. When asked to estimate the number of dots again, their estimates shifted from their initial values toward the number presented as the AI’s more often than toward the value presented as the human’s. This difference was found both when the AI overestimated and when it underestimated the number of dots. Participants also explicitly reported believing that the AI’s estimates were more accurate.

Results of study 2 showed that the “human” had a greater influence on participants than the “AI.” However, when participants were explicitly asked which source they considered more informative, the number who chose the human was practically the same as the number who chose the AI.

“The results showed that people can conform more to non-human agents (than human ones) in a digital context under specific circumstances. For objective tasks eliciting uncertainty, people might be more prone to conform to AI agents than another human being, whereas for subjective tasks, other humans may continue to be the most credible source of influence compared with AI agents,” the study authors concluded.

The study sheds light on an important and novel aspect of human social behavior. However, it should be noted that the study did not examine the mental states participants attributed to the agents of influence. It also remains unknown whether the influence persists once the source of influence is no longer present.

The study, “Social influences in the digital era: When do people conform more to a human being or an artificial intelligence?”, was authored by Paolo Riva, Nicolas Aureli, and Federica Silvestrini.

© PsyPost