Held responsible, yet mere tools: Study reveals paradoxical views on AI assistants

A recent study published in the journal iScience aimed to uncover how people perceive the responsibility of AI assistants in scenarios involving driving. Surprisingly, the findings suggest that while people tend to attribute responsibility to AI assistants in their assessments, they still view these systems primarily as tools, not as agents deserving of moral accountability.

Artificial intelligence has become an integral part of our lives, assisting us with everything from recommending movies to complex activities like driving. However, as AI becomes more intertwined with human activities, questions about responsibility and accountability arise. How do we assess who is responsible when things go right or wrong in situations involving both humans and AI?

The researchers embarked on this study to unravel the intricate dynamics of responsibility attribution in human-AI interactions. While previous research has explored the topic, this study sought to dig deeper and examine whether people view AI as mere tools or as agents capable of sharing moral responsibility.

“Artificial Intelligence (AI) may be driving cars and serving foods in canteens in the future, but at the moment, real-life AI assistants are far removed from this kind of autonomy,” said study author Louis Longin, member of the Cognition, Values and Behavior research lab at Ludwig-Maximilians-University in Munich. “So, who is responsible in these real-life cases when something goes right or wrong? The human user? Or the AI assistant? To find out, we set up an online study where participants allocated responsibility for driving scenarios to a human driver and varying kinds of AI assistants.”

The researchers conducted two online studies, each with its own set of participants: the first included 746 participants and the second 194.

The studies employed hypothetical scenarios, or vignettes, that depicted various driving situations involving a human driver and an AI assistant. The AI assistant could provide advice through either sensory cues (like steering wheel vibrations) or verbal instructions.

In the first study, participants were presented with scenarios in which the AI assistant’s status (active or inactive due to an electrical wiring problem) and the outcome of the driving scenario (positive or negative) were manipulated. They were asked to rate the responsibility, blame/praise, causality, and counterfactual capacity of both the human driver and the AI assistant.

The second study, a follow-up to the first, involved scenarios with a non-AI-powered tool (state-of-the-art fog lights) instead of an AI assistant. Again, the tool’s status was manipulated, and participants rated responsibility and related factors.

The researchers found that the way AI advice was presented did not significantly influence participants’ judgments of responsibility. This suggests that people assigned responsibility to the AI assistant irrespective of how it communicated.

The presence or absence of the AI assistant had a substantial impact on participants’ assessments. When the AI assistant was active and a crash occurred, participants rated the human driver as less responsible, and the AI assistant as more responsible, than when the assistant was inactive. This pattern held true even when there was no crash. In essence, the AI’s status strongly affected how people assigned responsibility.

The outcomes of the scenarios played a significant role in participants’ judgments. When the AI assistant was inactive, it was seen as equally responsible in both negative and positive outcomes. However, when the AI assistant was active, it was perceived as significantly more responsible for positive outcomes, such as avoiding an accident, than for negative ones. This contrasted with the human driver, who did not show a similar outcome effect.

“We were surprised to find that the AI assistants were considered more responsible for positive rather than negative outcomes,” Longin told PsyPost. “We speculate that people might apply different moral standards for praise and blame: when a crash is averted and no harm ensues, standards are relaxed, making it easier for people to assign credit than blame to non-human systems.”

Despite attributing responsibility to the AI assistant in their assessments, participants consistently viewed it as a tool rather than an agent with moral responsibility. This finding underscores the tension between how people rate AI assistants and their underlying belief that such systems are mere tools.

“AI assistants – irrespective of their mode of interaction (tactile or verbal communication) – are perceived as something between tools and human agents,” Longin explained. “In fact, we found that participants strongly asserted that AI assistants were just tools, yet they saw them as partly responsible for the success or failures of the human drivers who consulted them – a trait traditionally only reserved for human agents.”

Interestingly, participants did not attribute responsibility in the same way when the non-AI-powered tool was involved. Responsibility was shared with a non-human agent only when artificial intelligence was actively involved in providing the driving assistance.

While this study provides valuable insights into human-AI interactions and perceptions of responsibility, it is not without limitations. The findings are based on hypothetical driving scenarios, and further research is needed to replicate them in other domains and cultures, as cultural norms and expectations can significantly influence how AI is perceived and held responsible.

The study, “Intelligence brings responsibility – Even smart AI assistants are held responsible,” was authored by Louis Longin, Bahador Bahrami, and Ophelia Deroy.

© PsyPost