We’ve had dangerous AI with us for decades, yet we’ve never had a summit on racist algorithms

By Susannah Copson

AI is already deeply entrenched in public service models – which means its harms are too, impacting everything from policing to benefits, writes Susannah Copson

Rishi Sunak’s AI summit has come and gone; the Bletchley Park conference put the UK firmly back on the map, at least in Silicon Valley boardrooms, for a week anyway. However, while Sunak cites the importance of ensuring “appropriate protection” against AI risks, the reality is that the government is failing to safeguard against real and pressing harms in the here and now, while legislating in a way that will leave us more exposed to the threats AI could pose.

In signposting some of the potential risks of AI, Sunak has demonstrated yet again the government’s long-standing fascination with AI Armageddon, citing existential threats and security concerns. These are of course serious issues, but in the clamour for proactive measures to avert future AI catastrophes, the government has conveniently sidestepped the havoc that faulty AI systems are already wreaking in the present.

Law enforcement agencies and private companies regularly use facial recognition surveillance systems despite their alarming inaccuracy rates, and often without public knowledge. The algorithms underpinning facial recognition technology have disproportionately discriminated against women and people of colour. In such a high-risk context, these inaccuracies can be incredibly damaging: the use of flawed and unregulated technologies increases the likelihood of marginalised groups encountering unwarranted police attention. This takes place in an environment where tensions and distrust between police and the public run high, as evidenced by the aftermath of the Black Lives Matter protests and the response to the Sarah Everard vigil – as well as the numerous scandals the Met currently faces.

AI is already deeply entrenched in public service models – which means its harms are too. Hidden government algorithms make decisions about key areas of public life, yet little is known about how those decisions are made. The A-level grading fiasco stands as a glaring testament to the havoc wreaked when flawed algorithms call the shots, leaving countless students reeling from arbitrary outcomes – but there are other examples too. According to Big Brother Watch, the Department for Work and Pensions uses secretive, invasive and discriminatory algorithms that affect people’s ability to access housing, benefits, or council tax support. These algorithms have drawn criticism for automating bias, likely disadvantaging and discriminating against Britain’s poor.

Talking about regulating AI in the future without addressing its current harms is shutting the stable door after the horse has bolted. That said, there is no doubt that safeguarding future AI use remains a key challenge for policymakers. The reality is that the very best protections we have against the threats of AI remain our human rights and data protection laws, yet this government has sought to undermine both.

While Sunak’s summit saw high-level dignitaries speak to the importance of ensuring appropriate future protections, the government is pushing a Data Protection Bill through Parliament that rips up many of our data protection laws and will leave us exposed to the threats AI presents. By going against the grain of European AI regulation and slashing legal safeguards, the Bill will supercharge the disastrous and discriminatory impact that AI can have on the public, particularly on vulnerable and marginalised individuals and communities. This legislation will normalise mass automated decision-making despite the privacy and equality risks, so that more decisions about the public will be made on the basis of binary predictions, without human involvement, empathy, or dignity.

The government faces a real choice over whether they back any of their AI summit rhetoric with action. This must start with addressing the AI threats of the here and now. Simultaneously ripping up the most basic laws that protect us from excessive data-harvesting and the threats of automated decision-making will mean that their high-level summit pledges do not hold water. We don’t need performative gestures. We need to protect the legal safeguards already in place – not haemorrhage them. In the pursuit of global AI leadership, let’s not forget the immediate battles we face to guard against automated threats.