People worse at detecting AI faces are more confident in their ability to spot them, study finds

In new research published in Psychological Science, a team of scientists has shed light on a perplexing phenomenon in artificial intelligence (AI): AI-generated faces can appear more “human” than actual human faces. This discovery, termed “hyperrealism,” raises important questions about the potential consequences of AI technology across many areas of society.

The AI revolution has transformed our daily lives, and one of its most prominent features is the creation of incredibly realistic AI faces. This progress, however, has sparked concerns about the distortion of truth and the blurring of the line between reality and AI-generated content.

AI-generated faces have become increasingly accessible and are used both for beneficial purposes, such as helping to find missing children, and for malicious ones, such as spreading political misinformation through fake social media accounts. These faces have become so convincing that people often fail to distinguish them from real human faces.

“AI technologies are rapidly changing the way we live, work, and socialize. As a clinical psychologist, I think it’s essential we understand what these technologies are doing and how they are shaping our experience of the world,” explained study author Amy Dawel, a senior lecturer and director of the Emotions & Faces Lab at The Australian National University.

“Young and middle-aged adults will need to pivot how they work, and even what work they do, with new jobs like prompt engineering already on the table. Our children will grow up in a world that looks very different to the one we experienced. We need to do everything we can to make sure that it’s a positive experience, that leaves our next generation better off, not worse.”

To understand and explain the hyperrealism phenomenon, the researchers drew upon existing psychological theories, such as face-space theory, which posits that faces are coded in a multidimensional space based on how different they are from an average face. Human faces are believed to be distributed within this space, with average features being overrepresented. The researchers hypothesized that AI-generated faces embody these average attributes to a greater extent than real human faces.
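To make the face-space idea concrete, here is a minimal sketch in Python. It assumes each face is summarized as a numeric feature vector and measures distinctiveness as distance from the average face; the simulated data and dimensions are hypothetical stand-ins, not the study’s stimuli or methods.

```python
import numpy as np

# Hypothetical illustration of face-space theory: each face is a point in a
# multidimensional feature space, and "distinctiveness" is its distance from
# the average face. Simulated data; none of this is the study's actual code.
rng = np.random.default_rng(0)

human_faces = rng.normal(loc=0.0, scale=1.0, size=(100, 16))  # 100 faces, 16 dims
ai_faces = rng.normal(loc=0.0, scale=0.5, size=(100, 16))     # clustered nearer the mean

average_face = human_faces.mean(axis=0)

def distinctiveness(faces, average):
    """Euclidean distance of each face from the average face."""
    return np.linalg.norm(faces - average, axis=1)

# If AI faces embody average attributes more strongly, they sit closer to
# the center of face space, i.e., lower distinctiveness.
print("mean human distinctiveness:", distinctiveness(human_faces, average_face).mean())
print("mean AI distinctiveness:   ", distinctiveness(ai_faces, average_face).mean())
```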

Previous studies had shown conflicting results regarding people’s ability to distinguish AI from human faces. Some suggested that people couldn’t tell the difference, while others hinted that people might overidentify AI faces as human. These inconsistencies were partly attributed to the racial bias in the training data of AI algorithms. For instance, the StyleGAN2 algorithm, widely used for generating AI faces, was predominantly trained on White faces, potentially leading to AI faces that appear exceptionally average.

The new study began with a reanalysis of a previous experiment, which found evidence of AI hyperrealism for White faces but not for non-White faces: White AI faces were consistently perceived as more human than White human faces.

“Our study highlights the biases that AI is perpetuating. We found that White AI faces are perceived as more human than real people’s faces, and than other races of AI faces,” Dawel explained. “This means that White AI faces are particularly convincing, which may mean they are more influential when it comes to catfishing and spreading misinformation.”

In a subsequent experiment, the researchers recruited 124 White U.S. residents aged 18 to 50 years. Participants were tasked with differentiating between AI-generated and real human faces, specifically focusing on AI-generated White faces. They also rated their confidence in their judgments. The results replicated the hyperrealism effect, with AI-generated White faces consistently being perceived as more human than real human faces.

Surprisingly, participants who were less accurate at detecting AI-generated faces tended to be more confident in their judgments. This overconfidence further accentuated the tendency for AI hyperrealism.

“We expected people would realize they weren’t very good at detecting AI, given how realistic the faces have become. We were very surprised to find people were overconfident,” Dawel told PsyPost. “People aren’t very good at spotting AI imposters — and if you think you are, chances are you’re making more errors than most. Our study showed that the people who were most confident made the most errors in detecting AI-generated faces.”

In a second experiment, 610 participants rated a variety of AI and human faces on 14 attributes, including distinctiveness/averageness, memorability, familiarity, and attractiveness. Unlike in Experiment 1, participants were not told that AI faces were present, and those who guessed that the study included AI faces were excluded.

The results showed that several attributes influenced whether faces were perceived as human. Faces were more likely to be judged as human if they appeared more proportional, alive in the eyes, and familiar. On the other hand, they were less likely to be judged as human if they were memorable, symmetrical, attractive, and smooth-skinned.

The researchers also used a lens model to investigate how each of the 14 attributes contributed to the misjudgment of AI faces as human. AI faces were more average (less distinctive), more familiar, more attractive, and less memorable than human faces. AI hyperrealism was primarily explained by cues that participants used in the wrong direction: average facial proportions, for example, actually signal an AI face, yet participants read them as a sign of humanness. Cues used in the correct direction, such as attractiveness, symmetry, and congruent lighting/shadows, had a smaller effect.
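A lens model compares how a cue relates to the truth (its validity) with how raters actually use it (its utilization). The toy sketch below, built entirely on simulated data, illustrates how a cue can validly signal an AI face yet be utilized in the opposite direction; it is not the study’s analysis code, and all variable names are placeholders.

```python
import numpy as np

# Hypothetical lens-model sketch: contrast cue validity (cue -> actual face
# type) with cue utilization (cue -> human/AI judgment). Simulated data only.
rng = np.random.default_rng(1)
n = 500

is_ai = rng.integers(0, 2, size=n)                       # 1 = AI face, 0 = human face
proportions = 0.8 * is_ai + rng.normal(0, 1, n)          # AI faces have more average proportions
judged_human = 0.6 * proportions + rng.normal(0, 1, n)   # raters read the cue as "human"

validity = np.corrcoef(proportions, is_ai)[0, 1]             # cue predicts AI
utilization = np.corrcoef(proportions, judged_human)[0, 1]   # cue drives "human" judgments

print(f"cue validity (proportions vs. AI):       {validity:+.2f}")
print(f"cue utilization (proportions vs. human): {utilization:+.2f}")
# Both correlations positive: a cue that signals "AI" is being read as
# "human" -- utilization in the wrong direction, producing hyperrealism.
```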

Furthermore, the researchers conducted a machine learning experiment to determine whether human-perceived attributes could be used to accurately classify AI and human faces. A random forest classification model trained on the 14 attributes from Experiment 2 distinguished face types (AI vs. human) with 94% accuracy, suggesting that AI faces, particularly those generated by StyleGAN2, can be reliably told apart from human faces using attributes that people perceive.
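For readers curious what such a classifier looks like in practice, here is a minimal sketch using scikit-learn’s RandomForestClassifier on simulated attribute ratings. The feature effects and labels are hypothetical stand-ins; the sketch does not reproduce the study’s data or its 94% result.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical sketch of the classification step: predict face type (AI vs.
# human) from 14 human-rated attributes with a random forest. Simulated data.
rng = np.random.default_rng(2)
n_faces = 200

labels = rng.integers(0, 2, size=n_faces)       # 1 = AI, 0 = human
ratings = rng.normal(size=(n_faces, 14))        # 14 attribute ratings per face
ratings[:, 0] += 0.9 * labels                   # e.g., "averageness" higher for AI faces
ratings[:, 1] -= 0.7 * labels                   # e.g., "memorability" lower for AI faces

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, ratings, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```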

“The main problem right now is that a lot of the AI technology is not transparent,” Dawel said. “We don’t know how it is being trained, so we don’t have much insight into the biases it is producing. There is an urgent need for research funding to independent bodies, like universities, who can investigate what’s happening and provide ethical guidance.”

“Government needs to step in and require companies to disclose what their AI is trained on and put in place systems for protecting against bias. If you are a parent, now is the time to lobby your local minister for action on regulating AI, to ensure it benefits rather than harms our children. Companies that are creating AI should be required to have independent oversight.”

The study, “AI Hyperrealism: Why AI Faces Are Perceived as More Real Than Human Ones,” was authored by Elizabeth J. Miller, Ben A. Steward, Zak Witkower, Clare A. M. Sutherland, Eva G. Krumhuber, and Amy Dawel.