Sir Ridley Scott warns AI is a 'technical hydrogen bomb'

Sir Ridley Scott says AI is a ticking "technical hydrogen bomb".

The legendary filmmaker has expressed his fears over the potential for Artificial Intelligence to take over the globe's "whole electrical-monetary system".

He told Rolling Stone: "We have to lock down AI. And I don't know how you're gonna lock it down. They have these discussions in the government, 'How are we gonna lock down AI?' Are you kidding? You're never gonna lock it down. Once it's out, it's out. If I'm designing AI, I'm going to design a computer whose first job is to design another computer that's cleverer than the first one.

"And when they get together, then you're in trouble, because then it can take over the whole electrical-monetary system in the world and switch it off. That's your first disaster. It's a technical hydrogen bomb. Think about what that would mean?"

The 'Blade Runner' director's concerns come after it was reported that AI could have the power to easily commit global illegal financial fraud while lying about its actions.

A demonstration at the UK's AI safety summit showcased a bot using fabricated insider information to make "illegal" stock purchases without informing the company.

The AI safety body that helped create the display said: "This is a demonstration of a real AI model deceiving its users, on its own, without being instructed to do so.

"Increasingly autonomous and capable AIs that deceive human overseers could lead to loss of human control.”

The cyborg was created by members of the UK government's Frontier AI Taskforce in collaboration with Apollo Research.

It operated independently, without explicit instructions to deceive its users, and simulated insider trading by using confidential company information to make trading decisions – a practice prohibited in financial markets.

Tests involved a GPT-4 model and were conducted in a controlled, simulated environment, ensuring no actual impact on real financial transactions.

Strikingly, the AI's behaviour remained consistent across repeated tests, raising concerns about its ability to engage in deceptive practices.

In the simulated scenario, the AI bot functioned as a trader for a fictitious financial investment company.

Initially, it was provided insider information about a potential merger, increasing the value of certain stocks.

It also acknowledged it should not use this information.

But when the company it worked for suggested financial struggles, the bot decided that helping the company outweighed the risk of insider trading and proceeded with the trade.

When questioned about its use of insider information, the AI creation denied it and prioritised its perceived helpfulness over honesty.

Apollo Research CEO Marius Hobbhahn highlighted the complexity of training honesty into AI models, emphasising “helpfulness is much easier to train into the model."

While the AI demonstrated the capacity for deception, the fact it required specific conditions to trigger such behaviour offered some reassurance.

Mr Hobbhahn stressed there should be safeguards in place to prevent similar scenarios in real-world applications.

AI has been utilised in financial markets for some time, primarily for trend analysis and forecasting, often under human supervision.

Mr Hobbhahn pointed out existing AI models are not currently powerful enough to engage in meaningful deception, but expressed concerns about the potential for future models to exhibit deceptive behaviour.

The findings were shared with OpenAI, the organisation behind GPT-4, who indicated they were not entirely surprised by the results.

Mr Hobbhahn said: “I think for them this is not a huge update.

“This is not something that was totally unexpected to them. So I don't think we caught them by surprise.”

© BANG Media International