The “trust paradox” of AI: New study sheds light on why we embrace technologies we’re unsure of

People often express greater support for the use of AI-enabled technologies than their trust in those same technologies would suggest, particularly in domains like police surveillance, according to new research published in PLOS One. The findings indicate that factors such as perceptions of AI’s effectiveness and the “fear of missing out” can motivate individuals to support and adopt these technologies despite their lack of trust.

The study’s authors were motivated by the rapidly advancing field of artificial intelligence and its increasing integration into various aspects of society. Despite this rapid progress, discussions of human agency and public attitudes have been somewhat overlooked. They sought to address this oversight by examining why individuals might support the use of AI-enabled technologies even if they don’t fully trust them.

“Artificial intelligence (AI) is rapidly developing across civilian and military applications,” said study author Julie George, a predoctoral fellow at Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) and Center for International Security and Cooperation (CISAC).

“It is important to study public opinion concerning support and trust levels of various AI-enabled technologies, especially in the United States context. By understanding the microfoundations of individuals’ beliefs on AI, we can better assess how AI is developed and used in society.”

To investigate public attitudes towards AI technologies, the researchers used conjoint survey analysis to examine preferences for different attributes of AI technologies across various domains, including armed drones, general surgery, police surveillance, self-driving cars, and social media content moderation. The survey was conducted using the Lucid platform from October 7 to 21, 2022, with a final sample of 1,008 representative U.S. citizens.

The researchers identified a “trust paradox” where individuals supported the use of AI-enabled technologies despite distrusting them. The trust paradox was most pronounced for police surveillance, followed by drones, cars, general surgery, and social media content moderation.

Perceptions of autonomy played a key role in shaping public attitudes towards AI-enabled technologies. Participants were more willing to support technologies that offered mixed-initiative autonomy, where both humans and machines can make decisions, compared to technologies with either full autonomy (complete AI control) or full human control.

Demographic factors were also related to trust in and support for AI technologies. Older participants expressed less support and trust, while men showed higher levels of both than women. Education was positively associated with support and trust, while conservatism was negatively associated with both.

The researchers also found evidence that even if people expressed relatively low trust in AI technologies, their beliefs in safeguards, perceptions of AI’s effectiveness, evaluations of risks and benefits, and “fear of missing out” could lead them to support the use of these technologies.

The findings highlight that people’s decisions to support and adopt AI technologies are influenced by a combination of factors beyond their level of trust. The interplay of these factors can lead to a situation where individuals express support for AI-enabled technologies despite having reservations about their trustworthiness.

“Our article shows that several underlying beliefs help account for public attitudes of support for artificial intelligence-enabled technologies,” George told PsyPost, “including the fear of missing out (FOMO), optimism that future versions of the technology will be more trustworthy, a belief that the benefits of AI-enabled technologies outweigh the risks, and calculation that AI-enabled technologies yield efficiency gains. Additional research could consider public opinion of AI beyond the United States.”

The study, “Exploring the artificial intelligence ‘Trust paradox’: Evidence from a survey experiment in the United States”, was authored by Sarah Kreps, Julie George, Paul Lushenko, and Adi Rao.

© PsyPost