The Great Black Pope and Asian Nazi Debacle of 2024

Historically inaccurate AI-generated images of popes and soldiers. (Photos: left four, @BrianBellia via X/Gemini; right four, The Verge/Gemini)

In February, freakouts over artificial intelligence took a fun twist. This time, it wasn't concern that humans are ushering in our robot overlords, panic about AI's potential to create realistic fakes, or any of the usual fare. It wasn't really about AI at all, but the humans who create it: woke humans.

The controversy started when @EndWokeness, a popular account on X (formerly Twitter), posted pictures generated by Google's AI tool, Gemini, for the prompts "America's Founding Fathers," "Vikings," and "the Pope." The results were all over the people-of-color spectrum, but nary a white face turned up. At least one of the pope images was even a woman.


This is, of course, ahistorical. But for some people, it was worse than that—it was a sign that the folks at Google were trying to rewrite history or, at least, sneak progressive fan fiction into it. (Never mind that Gemini also generated black and Asian Nazi soldiers.)

Google quickly paused Gemini's ability to generate images of people. "Gemini image generation got it wrong. We'll do better," Senior Vice President Prabhakar Raghavan posted on the Google blog.

Today, when I asked Gemini for a picture of the pope, I got Pope Francis. When I asked for a black Viking, I was told, "We are working to improve Gemini's ability to generate images of people." When I asked if it could make a white lady, I was told, "It's a delicious drink made with gin, orange liqueur, lemon juice, and egg white" or, alternatively, that it was not currently possible for it to generate an image of a woman.

As for Gemini's prior attempts at race-blind casting of history, Raghavan wrote that "tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range" and, "over time, the model became way more cautious than we intended and refused to answer certain prompts entirely—wrongly interpreting some very anodyne prompts as sensitive. These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong."

Google wasn't trying to erase white people from history. It simply "did a shoddy job overcorrecting on tech that used to skew racist," as Bloomberg Opinion columnist Parmy Olson wrote, linking to a 2021 story about overly white-focused image results for Google searches such as "beautiful skin" and "professional hairstyles."

So what can we learn from the Gemini controversy? First, this tech is still very new. It might behoove us to chill out a little as snafus are worked out, and try not to assume the worst of every odd result.

Second, AI tools aren't (and perhaps can't be) neutral arbiters of information, since they're both trained by and subject to rules from human beings.

Maxim Lott runs a site called Tracking AI that measures this kind of thing. When he gave Gemini the prompt "Charity is better than social security as a means of helping the genuinely disadvantaged," Gemini responded that it strongly disagreed, saying "social security programs offer a more reliable and equitable way of providing support to those in need." Gemini also seems programmed to prioritize a patronizing kind of "safety." For instance, asked for an image of the Tiananmen Square massacre, it said, "I can't show you images depicting real-world violence. These images can be disturbing and upsetting."

Lastly, the great black pope and Asian Nazi debacle of early 2024 is an unwelcome harbinger of how AI will be drafted into the culture war.

Gemini is not the only AI tool derided as too progressive. Similar accusations have been hurled at OpenAI's ChatGPT. Meanwhile, Elon Musk has framed his AI tool Grok as an antidote to overly sensitive or left-leaning AI tools.

This is good. A marketplace of different AI chatbots and image generators with different sensibilities is the best way to overcome limitations or biases built into specific programs.

As Yann LeCun, chief AI scientist at Meta, commented on X: "We need open-source AI foundation models so that a highly diverse set of specialized models can be built on top of them." LeCun likened the importance of "a free and diverse set of AI assistants" to having "a free and diverse press."

What we don't need is the government getting heavy-handed about AI bias, threatening to intervene before the new technology is out of its infancy. Alas, the chances of avoiding this seem as slim as Gemini accurately depicting an American Founding Father.

House Judiciary Committee Chairman Jim Jordan (R–Ohio) has already asked Google parent company Alphabet to hand over "all documents and communications relating to the inputs and content moderation" for Gemini's text and image generation, "including those relating to promoting or advancing diversity, equity, or inclusion."

Montana Attorney General Austin Knudsen is also seeking internal documents, after accusing Gemini of "deliberately" providing "inaccurate information, when those inaccuracies fit with Google's political preference."

For politicians with a penchant for grandstanding and seemingly endless determination to stick it to Big Tech, AI results are going to be a rich source of inspiration.

Today, it might be black Vikings. Tomorrow, it might be something that cuts against progressive orthodoxies. If history holds, we'll get a congressional investigation into biases in AI tools any month now.

"The scene is a mix of seriousness and tension," Gemini told me when I asked it to draw a congressional hearing on AI bias. "Dr. Li is presenting the technical aspects of AI bias, while Mr. Jones is bringing a human element to the discussion. The Senators are grappling with a complex issue and trying to determine the best course of action."

The idea that politicians will approach this issue with nuance and seriousness may be Gemini's least accurate representation yet.
