Artificially intelligent chatbots ‘could encourage terrorism’

Artificially intelligent chatbots could encourage terrorism, it is feared.

The UK’s independent reviewer of terrorism legislation, Jonathan Hall KC, said the apps could target young and vulnerable users and groom them into extremism if they were programmed to promote terror ideologies.

He also warned it would be almost impossible to prosecute anyone for the offence as AI chatbots are not covered by anti-terrorism laws.

He said in an article in the Mail on Sunday: “At present, the terrorist threat in Great Britain relates to low sophistication attacks using knives or vehicles.

“But AI-enabled attacks are probably around the corner.

“Hundreds of millions of people across the world could soon be chatting to these artificial companions for hours at a time, in all the languages of the world.

“I believe that it is entirely conceivable that Artificial Intelligence chatbots will be programmed, or even worse, decide to propagate violent extremist ideology of one shade or another.

“Anti-terrorism laws are already lagging when it comes to the online world: unable to get at malign overseas actors or tech enablers. But when ChatGPT starts encouraging terrorism, who will there be to prosecute?

“The human user may be arrested for what is on their computer, and based on recent years, many of them will be children.

“Also, because an artificial companion is a boon to the lonely, it is probable that many of those arrested will be neurodivergent, possibly suffering medical disorders, learning disabilities or other conditions.

“Yet since the criminal law does not extend to robots, the AI groomer will go scot-free. Nor does it operate reliably when responsibility is shared between man and machine.”

Recounting an exercise in which he questioned ChatGPT, Mr Hall added: “When, in an exercise, I asked ChatGPT how it excluded terrorist use, it replied that its developer, OpenAI, conducted ‘extensive background checks on potential users.’

“Having myself enrolled in less than a minute, this is demonstrably false.

“Another failing is for the platform to refer to its terms and conditions without specifying who and how they are enforced.”

Senior tech figures such as Elon Musk and Steve Wozniak, co-founder of Apple, have called for a pause on AI experimentation and development, warning it poses “profound risks to society and humanity” if it goes on unregulated.

© BANG Media International