Time moves fast in AI years, and Britain is nowhere near ready

By Tom Westgarth

OpenAI, whose online chatbot ChatGPT made waves when it debuted in December, has since launched GPT-4. (Photo by Leon Neal/Getty Images)

The latest version of ChatGPT makes the AI chatbot that went viral last year look like its dumb younger sibling, but the UK still can’t figure out what to do with – or how to use – artificial intelligence, writes Tom Westgarth

It is easy to look back on the pandemic with clarity. At this distance it is clear the signs were all there: lockdowns in China, heaving hospital wards in Lombardy. Even the exponential growth data in January and February 2020 should have pointed to the need to act, and to act fast.

The rampant growth in AI is a similar story. GPT-3, a landmark large language model (LLM) that predicts the next words in a sequence of text, was released by OpenAI in June 2020. Since then, the open-source community has improved on such models faster than many imagined possible. It is not just poems and art that the new era of “generative AI” has produced: new tools and businesses have helped to design antibodies, produce new music, and even act as a Linux terminal. The release of GPT-4 in the last few weeks makes ChatGPT, which went viral in late 2022, look like its dumb younger sibling. Time moves fast in AI years.

During the pandemic, there were respectable institutions in place, and yet the response was still botched. The World Health Organisation (WHO), the various national health security agencies, and the US Centers for Disease Control and Prevention all got it wrong on the initial risk and on basics such as masks and scaling up testing. Even with those institutions in place, governments failed to deal with the crisis.

And yet we barely have a Health Security Agency equivalent for AI, let alone a WHO. Offices for AI are not equipped to deal with the next generation of emerging challenges.

Take the UK’s approach. The government’s AI strategy, while highly regarded by experts for its aims, was never funded. The Office for AI, while producing policy and research, has no statutory regulatory mandate and has arguably not been given the political priority it needs to coordinate responses to AI.

Technologists and entrepreneurs eagerly await this year’s release of the UK’s “pro-innovation” regulatory framework for AI. But with only a couple of dozen employees, the Office for AI has its work cut out if it is to anticipate and respond to emerging challenges, as well as create an environment for markets to mature.

There are big questions that already need answering. We want people to be able to use AI tools for creative purposes. But how do we enable people to create AI music while also ensuring that artists’ IP is protected? If someone creates original music with AI tools trained on data harvested from the internet, have they violated copyright?

UK-based Stability AI, the potential AI leviathan behind Stable Diffusion, is now facing a lawsuit for allegedly scraping artists’ work without their consent. The outcome of this case will be significant for the future of the British AI market. Will the relevant UK government departments be proactive and legislate, or will a judge decide the course of AI copyright in the UK? Technological maturity will not be brought about without institutional maturity.

We need better institutional capabilities to arm ministers and officials with information of this kind. As models, and the dilemmas they pose, grow in complexity, our capacity to respond must grow with them.

One possibility is for the Office for AI to lead a “whole of government” foresight approach to understanding how to benefit from AI’s disruption. The Office for AI should go to every government department and ask it to consider all the ways public services could benefit from new generative AI tools – and the foreseeable problems they could cause.

For the highest-impact scenarios, we should put in place an adaptation framework covering the different stages of AI development and use. Stanford’s Cyber Policy Center and OpenAI have already provided a blueprint: a similar framework for dealing with emerging disinformation threats.

This tactic should be part of a suite of new responsibilities for an expanded government AI body. Talk of being “AI ready” is cheap; without the talent, foresight, and ambition to respond in an agile way to emerging risks and opportunities, you may as well follow the pandemic-era advice and stay at home.

As things stand, little of this is a political priority. So-called horizon scanning to understand emerging AI does take place in some departments, but in a fragmented manner. We need dedicated teams to take on this role, and that will only happen if the Department for Science, Innovation and Technology gives it adequate backing.

AI currently features nowhere on lists of the “most important issues to the public”. But tomorrow, a national scandal (a huge cyber hack of a public database assisted by GPT-4, for example) could catapult the field into being a key voter concern.

This is part of a series of essays published today by The Entrepreneurs’ Network.
