AI systems ‘having feelings and even human-level consciousness’ no longer ‘in realm of sci-fi’

Artificially intelligent systems having human-type consciousness is no longer the stuff of science fiction, dozens of global academics have said in an open letter.

The Association for Mathematical Consciousness Science (AMCS) compiled the message to AI developers – titled ‘The responsible development of AI agenda needs to include consciousness research’ – as applications such as ChatGPT continue to boom, but added it did not have a view on whether AI development in general should be paused.

It says: “It is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness.”

Urging a greater scientific understanding of consciousness, how it could apply to AI and how society might live alongside it, the academics added: “The rapid development of AI is exposing the urgent need to accelerate research in the field of consciousness science.”

Its signatories include Dr Susan Schneider, who previously held Nasa's chair in astrobiology, as well as academics from universities in the UK, US and Europe.

Even though most experts agree AI is nowhere near displaying human-type consciousness, it is evolving so rapidly that some of the world's leading technology figures have called for development to be paused until it can be better understood.

Tesla billionaire Elon Musk co-signed a separate letter calling for further AI developments to be put on hold until effective safety measures can be designed and implemented.

And his ex-wife, Talulah Riley, has tweeted that artificial general intelligence (AGI), which is AI capable of human-level intellectual tasks, needs a figure such as climate activist Greta Thunberg to raise awareness and encourage public debate about its possible uses.

The Generative Pre-trained Transformer 4 (GPT-4), developed by OpenAI, the creator of the ChatGPT chatbot, can now pass the bar exam, the professional qualification for lawyers, although it still makes mistakes and can share misinformation.

AI products are also being deployed in various sectors, including health research, marketing, and finance.

Last year, a Google engineer was fired after claiming an AI system was sentient.

The engineer had worked on Lamda, Google's large language model that underpins its ChatGPT rival, Bard.

Google maintains Lamda was doing exactly what it had been programmed to do, but experts said the incident highlights the need for more research in the field of consciousness science.

Microsoft, which has invested heavily in OpenAI, says AI can take the “drudgery” out of mundane jobs such as office administration, while a recent report by Goldman Sachs suggests AI could replace the equivalent of 300 million full-time jobs.

© BANG Media International