Obsessing about the existential risk of god-like AI misses the point

By Anna Thomas

At a roundtable discussion at the Conservative Party conference last month, one participant commented: “you have to say it’s about life and death to get people round the table!” In focusing on the existential risk (or ‘x-risk’, as it is commonly known) of “god-like” frontier AI, the organisers of the Global AI Safety Summit appear to have taken this approach.

In response to this talk of death and species extinction, there has been a groundswell of action to highlight the very real challenges that people in workplaces around the world are already facing due to AI.

We’ve seen AI “fringe” events spring up all over the country, from the AI and Society Forum and People’s Panels on AI to our day-long conference on the Future of Work. Meanwhile, individual unions have proposed motions on the use of ChatGPT, senior ministers have approached research organisations directly, and a new Work and AI working group has recently been set up by X/Twitter. All of this demonstrates the UK’s strengths in civil society and academia, as well as in AI innovation and law.

In response, the Department for Science, Innovation and Technology has made strides in publicly acknowledging that AI will affect not just technology but people too, and that a wider range of social, economic and cultural impacts and harms – which specifically include work and labour market disruption – needs our urgent attention alongside preventing the elimination of the human race. But in his announcements this week, while recognising that human capabilities should be centred alongside technical capabilities (a major step forward), Rishi Sunak nonetheless overlooked work and workplaces as perhaps the major site of AI’s impact.

Jobs are the thread that ties together our daily lives, our communities and the economy. Because the workplace is a high-stakes environment, and the place where most people will interact with AI systems, a sharper focus on the risks, impacts and opportunities to create and sustain good work could be the best means of shaping AI for social good.

However, we can’t assume this will happen automatically. Indeed, new research points to the need to be proactive: we must carefully shape the environment and conditions for “good” automation that creates better work. Research we published just last month shows that the deployment of AI and automation can lead to better quality work and a net increase in jobs, but only with the infrastructure to support this and the engaged HR approach that firms need to bring employees with them.

The concern is that the government’s priorities will miss this important dimension, and that the Summit will be dominated by the digital giants, risking a form of regulatory capture that ignores the voices from civil society calling for a wider view, one that shapes AI for the good of the many.

History shows that in every technological revolution, the people most likely to benefit from a technology are those most likely to support its development, and those most likely to lose out are also those most likely to be left out of shaping it.

A discourse that focuses on the frontier risk of “god-like” AI, combined with an anti-regulation techno-optimism that assumes technology is unarguably good for all, is too narrow a foundation for regulation.

The real existential crisis would be if the UK wasted its chance to achieve global agreement on human-centred AI that affirms the dignity of all, and of their labour.

The proposed AI Safety Institute is a welcome initiative, but it cannot be a substitute for good governance and regulation. It must be able to fund pilots like our Workplace AI Sandbox, build capacity for civil society and union engagement, and support independent, multidisciplinary research. This kind of convening might not shout ‘life and death’, but, grounded in a commitment to human flourishing, with good work as a key element of that, it matters more.