People with social anxiety more likely to become overdependent on conversational artificial intelligence agents

A recent study examined how social anxiety, loneliness, and rumination contribute to the problematic use of conversational artificial intelligence agents. Results showed that people high in social anxiety are more likely to feel lonely and to engage in ruminative thought patterns, and in turn tend to turn to conversational artificial intelligence agents for relief. The study was published in Computers in Human Behavior.

Conversational artificial intelligence (AI) agents are software programs designed to engage in human-like conversations with users. With recent advances in natural language processing and machine learning, conversational AI agents are becoming increasingly prevalent. They include agents such as Apple's Siri, Amazon's Alexa, ChatGPT, and others. In a 2018 survey, 65% of U.S. participants stated that virtual assistants had changed their behaviors and daily routines.

The flourishing of conversational AI has also opened the door to problematic use of this technology, i.e., to individuals becoming overdependent on it. The personalized responses of conversational AIs and the human-like nature of their conversations can keep users glued to their devices for longer and foster strong attachment to the technology.

Earlier studies have shown that individuals high in social anxiety are more likely to develop problematic use of technology, and it is quite possible that the same holds for conversational AI. Problematic use of conversational AI refers to excessive, addiction-like use of the technology that frequently causes undesirable consequences in daily life.

Study author Bo Hu and his colleagues reasoned that socially anxious individuals may use conversational AI to compensate for the deficits they experience in social interactions and, in this way, develop problematic use more easily. This effect was expected to be stronger in participants who attributed a human-like mind to the software agent (mind perception), as this would facilitate emotional attachment to it. The researchers also expected that rumination and loneliness would play a role in this link.

Participants were 516 users of conversational AI recruited through the crowdsourcing platform Wenjuanxing. They were between 18 and 59 years of age; 76% held a bachelor's degree, while 14.5% held a master's degree or above.

Participants completed assessments of loneliness (a short version of the UCLA Loneliness Scale), rumination (the Ruminative Response Scale, short version), mind perception (5 items, e.g., "I feel that conversational AI is able to think by itself"), and problematic conversational AI use (the Bergen Social Media Addiction Scale, adapted to refer to conversational AI instead of social media, e.g., "I spend a lot of time thinking about conversational AI").

Rumination refers to a pattern of repetitive and intrusive thoughts about negative experiences, emotions, or problems. It involves dwelling on past events, analyzing them extensively, and repeatedly going over the same concerns without reaching a resolution or finding a way to move forward.

Results showed that individuals with higher social anxiety scores tended to be lonelier, more prone to rumination, and more likely to use conversational AI in a problematic way. Both loneliness and rumination were also positively associated with problematic conversational AI use.

The researchers tested a model in which social anxiety leads to loneliness, loneliness leads to rumination, and rumination in turn makes a person prone to problematic use of conversational AI. The results were consistent with this pattern of relations between the studied factors. They also indicated that the effect of social anxiety on problematic use of conversational AI was stronger when mind perception was high, i.e., when participants attributed a human-like mind to the AI agent.

“As predicted, the results revealed a positive association between social anxiety and problematic use of conversational AI. Socially anxious individuals may have difficulty engaging in interpersonal interactions and developing relationships with other people in face-to-face contexts. Compared with human interaction, conversational AI can provide these individuals with a more comfortable and relaxed pseudo-interpersonal experience, which may eventually make them feel that they are in their comfort zone and lead to them becoming dependent on the technology,” the study authors wrote.

The study makes an important contribution to the scientific understanding of the psychological aspects of AI use. However, it also has limitations that need to be taken into account. Notably, the study design does not allow cause-and-effect conclusions to be drawn. Additionally, all study participants were Chinese, and results might differ in other cultural groups.

The study, “How social anxiety leads to problematic use of conversational AI: The roles of loneliness, rumination, and mind perception”, was authored by Bo Hu, Yuanyi Mao, and Ki Joon Kim.

© PsyPost