AI and risk management: Are financial services future-proof?

By Emre Kazim, Ashyana-Jasmine Kachra and Ayesha Gulley of Holistic AI

Across industries, from healthcare to agriculture to financial services, the rapid adoption of artificial intelligence (AI) has increased efficiency and accuracy. In the financial services sector, AI is being used to recommend investments through data-driven approaches and to identify fraudulent transactions based on spending habits.

However, high-profile incidents, such as the glitch in Knight Capital's trading algorithm that led to a loss of $440m in just 30 minutes, or a recent lawsuit alleging that State Farm's automated claims processing resulted in algorithmic bias against black homeowners, have fuelled growing industry, public and regulatory concern.

Nonetheless, AI adoption need not be risky business.

A pro-innovation approach

In the UK, a pro-innovation approach to AI regulation is being taken, with the focus on the specific context in which AI is used. Even the central bank is on board: in an effort to build, maintain and reinforce consumer trust, the Bank of England has opened a consultation on its model risk management framework, one of the priorities of which is to identify and manage the risks associated with the use of AI and machine learning in the financial sector.

In the absence of AI-specific regulations, UK financial services firms must remain vigilant to ensure their AI is developed and deployed in line with current laws and regulations.

Pertinent existing regulations

For example, firms should be aware that their AI must align with the rules and guidance set out by the FCA. One such rule is that customers must be treated fairly and communicated with openly. This has implications for firms that use AI to determine creditworthiness, which can result in negative outcomes for certain customers.

From 31 July 2023, firms will also have to comply with the new Consumer Duty rules set out under Principle 12. In accordance with this principle, a firm must act to deliver good outcomes for retail customers, which includes avoiding foreseeable harm to them.

As such, financial firms that use AI to recommend investments or determine creditworthiness should be diligent in disclosing this to customers, so as not to breach their duty to avoid foreseeable harm and to act in good faith towards retail customers.

Outside of the FCA, existing legislation such as the UK Equality Act 2010 applies to any financial services firm that serves the public. Financial providers must ensure they are not discriminating against customers, directly or indirectly, on the basis of protected characteristics. This includes procedures used to determine the provision or pricing of services, such as the use of AI in assessing creditworthiness.
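Indirect discrimination can be hard to spot in an automated system, so in practice firms often test whether a model's outcomes differ systematically across groups. The snippet below is a minimal, illustrative sketch of one such screening check: it compares approval rates for a hypothetical credit model across two invented groups and flags large gaps using the widely used "four-fifths" heuristic. The data, group labels and threshold are assumptions for illustration, not a legal test under the Equality Act.

```python
# Illustrative adverse-impact screening for an automated credit decision process.
# All figures below are invented for the example.

def approval_rate(decisions):
    """Share of applications approved (1 = approved, 0 = declined)."""
    return sum(decisions) / len(decisions)

# Hypothetical decision logs grouped by a protected characteristic.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

rates = {group: approval_rate(d) for group, d in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    # The "four-fifths" rule (ratio below 0.8) is a common screening heuristic,
    # not a statutory threshold; a low ratio prompts further investigation.
    status = "review" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.2f}, ratio vs highest {ratio:.2f} ({status})")
```

A low ratio here would not prove discrimination, but it would tell a firm where to look more closely before the regulator, or a claimant, does.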

Preparing for the future

From FCA transparency guidelines to monitoring and due diligence in algorithmic trading, firms in financial services need to be aware of their AI systems. This means exploring some important questions: What processes are in place? Is your system treating customers fairly? Are customers being communicated with openly? While ensuring compliance comes at some expense, the reputational cost, potential loss of customer trust and risk of fines are even higher.

For now, financial services companies with AI-based products and services must carefully monitor and account for the shifting regulatory landscape. Optimising AI for maximum benefit will require a new approach. One way to take control is by adopting an AI risk management solution to triage, verify and mitigate AI risks. In doing so, organisations maximise their ability to innovate with confidence, using cross-functional efforts to identify and mitigate risks at various points in the AI lifecycle.
