If scammers exploit ChatGPT for fraud, our response to digital crime must also use AI

By Elena Siniscalco


Online criminals are using artificial intelligence tools like ChatGPT to steal our money. If they exploit AI, so must our response and defence, writes Martin Rehak

Our expanding relationship with the digital world has transformed the opportunities for financial crime. As criminals become more sophisticated, exploiting the newest technologies, so too must our response.

We are getting used to living our lives without needing to physically go anywhere. Shopping is delivered to our doors and banking can be done from a smartphone. The pandemic accelerated this shift to the digital economy, and while the potential benefits are endless, so too are the opportunities for innovative criminal schemes.

In the same vein, banks have also become more digital. Going to a bank branch is now rare. New customers can be onboarded virtually via a smartphone. This has given criminals new channels to exploit – increasing digitalisation has been accompanied by a rise in sophisticated fraud and money laundering. Digital crime pays better and the odds of getting caught are close to nil.

One major threat is push payment fraud. A typical scam starts with a call from someone claiming to be your bank: your account has been compromised and you are in danger of losing all your money. Don’t panic, they say, we can help you. They talk you through the software you need to install and the accounts to move your money into to keep it safe. You do this to prevent an awful thing from happening, only to realise later that the awful thing has happened anyway: you have just sent all your money to scam artists.

Imagine this process, a few years down the line, but without the need for humans. Using AI, such as ChatGPT, combined with video generation, the attackers will be able to create convincing impersonations of bank employees, and other people we implicitly trust. A machine that has learned how to have a natural conversation can convince enough of us to send our money into the hands of criminals.

The traditional approach to financial crime prevention has relied on manually maintained rules-based systems, or on supervised machine learning. Both depend on an explicit definition of the financial crime we want to catch, and both already fail today. The coming generation of attacks will be impossible to address manually. Applying modern machine learning to this problem is long overdue, especially in a fully digital environment.
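To make that limitation concrete, here is a minimal sketch (not from the article) of what a rules-based check looks like. Every threshold and field name is hypothetical; each rule encodes one known fraud pattern, and anything the rules do not anticipate passes silently.

```python
# A minimal, hypothetical sketch of the rules-based approach described above.
# Each rule encodes one known fraud pattern; thresholds and field names are
# invented for illustration, and novel schemes pass silently.

def flag_transaction(tx: dict) -> bool:
    """Return True if any hand-written rule matches this transaction."""
    if tx["amount"] > 10_000:                       # known pattern: large transfer
        return True
    if tx["country"] in {"XX", "YY"}:               # known pattern: risky corridor
        return True
    if tx["new_payee"] and tx["amount"] > 1_000:    # known pattern: first-time payee
        return True
    return False                                    # anything unanticipated slips through

# A scam structured to stay under every threshold is never flagged.
print(flag_transaction({"amount": 950, "country": "GB", "new_payee": True}))  # False
```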

A better way of combatting financial crime is with techniques based on anomaly detection, combined with recent AI innovations. Rather than trying to second-guess what a financial crime might look like, the machine learns what ordinary behaviour looks like and flags any behaviour or transaction out of the ordinary. This more flexible approach adapts as criminality evolves.
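As a rough sketch of how this could work in practice (the feature set and library choice are illustrative, not the author's system): an anomaly detector is fitted only on ordinary transactions and flags anything that deviates, with no definition of fraud supplied up front. This example uses scikit-learn's IsolationForest.

```python
# A minimal sketch of the anomaly-detection approach: fit on ordinary
# transactions, then flag anything that deviates. The feature set
# (amount, hour of day, payee age in days) is hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for historical "normal" behaviour.
normal = np.column_stack([
    rng.lognormal(4, 1, 5000),        # typical amounts
    rng.integers(8, 22, 5000),        # daytime activity
    rng.integers(30, 2000, 5000),     # long-established payees
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new activity: a 3am transfer of an unusual amount to a brand-new payee.
suspect = np.array([[45_000, 3, 0]])
print(model.predict(suspect))   # -1 means "out of the ordinary" -> send for review
```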

The ‘black box’ problem is another shortcoming of AI when combatting crime. Many AI techniques cannot explain which factors were key to reaching their outputs. When financial institutions use AI to combat financial crime, they need transparency to understand how each decision was reached.
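One illustration of that transparency requirement (again a hypothetical sketch, not a production system): with a simple linear model, each factor's contribution to an alert can be read off directly and shown to an analyst or a regulator. The feature names here are invented.

```python
# A minimal sketch of an explainable scoring model: with a linear model,
# each factor's contribution to an alert is coefficient * feature value,
# so the drivers of a decision can be listed. Feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["amount_zscore", "night_time", "new_payee"]
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = (X @ np.array([2.0, 1.0, 1.5]) + rng.normal(size=1000) > 2).astype(int)

model = LogisticRegression().fit(X, y)

# Per-factor contribution to one decision: coefficient * feature value.
tx = np.array([3.2, 1.0, 1.0])                 # unusual amount, at night, new payee
for name, contrib in zip(features, model.coef_[0] * tx):
    print(f"{name:>14}: {contrib:+.2f}")       # which factors drove the alert
```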

Regulators also want to understand how AI is used – to ensure its decisions are ethical and explainable. AI should not be making arbitrary decisions, but rather operating under the close supervision of professionals who can make the final judgement. Applying such an approach is our best hope of combatting financial crime as it becomes more sophisticated – our best hope of safeguarding everything we have online, and in real life.
