Trust in human vs AI teammates depends on team size, study finds


Recent research from the Netherlands reveals that individuals working in two-member teams tend to report greater cognitive interpersonal trust in human teammates than in AI agents. In three-member teams, however, the perceived trustworthiness of human and AI members was similar. This study, published in the Journal of Occupational and Organizational Psychology, provides new insights into team dynamics involving AI.

Artificial intelligence (AI) systems are computer systems designed to perform tasks traditionally requiring human intelligence, such as understanding natural language, recognizing patterns, and making decisions. These systems employ various techniques, including machine learning, where algorithms improve their performance by learning from data, and deep learning, which uses neural networks with multiple layers. These technologies allow AI to process vast amounts of information quickly and accurately, making it useful in numerous applications.

Recent years have seen great advances in the development of AI. Large language models, AIs that can use natural human language, became practical and publicly available, initiating a profound transformation of how humans work and how many activities are conducted. As AI develops further and society transforms along with it, even more effective AI agents will likely be developed and applied in fields of activity that used to be exclusively human.

AI agents are software entities that autonomously perform tasks or make decisions based on predefined rules, learned patterns, or real-time data. These agents can operate independently or collaborate with humans and other agents to achieve specific goals. Examples include virtual assistants like Siri and Alexa, autonomous vehicles, and recommendation systems used by streaming services and online retailers. Future advancements may see AI agents becoming more capable, human-like, and better adapted for seamless interactions with people.

Study authors Eleni Georganta and Anna-Sophie Ulfert wanted to explore how trustworthy humans would find AI agents employed as members of their team in an organizational setting. They noted that trust is essential for the work of any team, and that humans might find it harder to evaluate the trustworthiness of AI team members because typical resources for evaluating trust, such as shared experiences and familiarity, are limited for AI agents.

The researchers conducted two experiments. In both, team members collaborated on an online task. The studies examined how having an AI versus a human teammate impacted perceived trustworthiness, similarity, and interpersonal trust within the team.

The first study involved 494 participants from a German university; their average age was 24 years, and 48% were female. Participants were asked to imagine working for a smartphone company to convince the CEO to finance a new fitness app. One team member acted as a designer, and the other as a software developer. In one group, participants were told their teammate was a human, while in the other, the teammate was presented as an AI. Participants in the role of the software developer received training to communicate like an AI.

The team meetings were divided into three sessions. After each session, participants assessed their teammate’s perceived trustworthiness (e.g., “My team member shows a great deal of integrity”), perceived similarity (e.g., “My team member and I are similar in terms of our outlook, perspective, and values”), and cognitive and affective interpersonal trust (e.g., “I can freely share my ideas, feelings, and hopes with my team member”).

The second study followed a similar design but included a third team member assigned the role of a marketing expert. This study involved 318 participants from a Dutch university and the same German university, excluding those who participated in the first study.

Results from the first study indicated that designers identified slightly less with their team when their teammate was an AI. However, there were no differences in objective team performance. Participants in the role of the software developer reported higher perceived trustworthiness, similarity, and interpersonal trust when their teammate was human.

In the second study, there were no differences in performance or trust indicators between teams with an AI member and those with all human members.

The study sheds light on differences in how AI agents and humans are perceived in work settings. However, it should be noted that the scale and duration of participants' interactions in these experiments were limited to the task at hand and severely constrained. This is quite different from how team members in most real-world situations interact and develop trust.

The paper, “Would you trust an AI team member? Team trust in human–AI teams,” was authored by Eleni Georganta and Anna-Sophie Ulfert.