Add glue to your pizza? Google's new AI search prompts ridicule

Google is once again facing ridicule online over nonsense being spewed by its AI after launching new "AI overviews" in the US. Andrej Sokolow/dpa

Since Google added AI "overviews" to its search engine in the US, social media has slowly begun to fill with hilarious - and sometimes disturbing - examples of mistakes that the software has been making.

Created with the help of AI, these "AI overviews" are currently only being displayed in the US and are intended to give users a more immediate answer to their questions, rather than giving them links to websites where they might find an answer.

Google has been increasingly moving towards using AI in response to several AI start-ups that have begun to challenge its dominance in web searches.

Short answers to factual questions have long been available in the search engine in snippet previews of text in the links. The new AI overviews, however, are sometimes several lines long and generated only in response to the user's search query.

Google says the feature is to be introduced in other countries by the end of the year. Many website operators and the media are worried that Google will direct fewer people to them as a result of the AI summaries and that their business will suffer.

Google counters that sources of information which end up in the summaries actually receive more traffic. However, it remains unclear how the rest will fare.

However, the widespread introduction of the function has now revealed a completely different problem: in many cases, the AI software does not seem to be able to distinguish serious information from jokes or satire.

For example, the sources for some particularly silly claims - such as the claim that geologists recommend eating one small stone a day - turned out to be joke posts on online platforms or articles from the satirical website The Onion.

A Google spokeswoman told the technology blog The Verge on Thursday that the errors involved "generally very uncommon queries, and aren't representative of most people's experiences." She said these "isolated examples" would be used to improve the product.

Google already had to endure much ridicule online in February when its Gemini AI software was spotted generating images depicting Nazi soldiers and American colonial settlers as people of colour.

Google explained that it had failed to programme exceptions for cases in which diversity was clearly out of place. After that, Gemini stopped generating images of people for the time being.