The Guardian - UK
Daniel Boffey

Surveillance technology is advancing at pace – with what consequences?

A Metropolitan police van used as part of their facial recognition operation for the king’s coronation in May. Photograph: Will Edwards/AFP/Getty Images

In the summer of 2019, Nikolay Glukhin travelled on the Moscow underground with a lifesize cardboard cutout of a young political protester, Konstantin Kotov. On a banner he had scrawled Kotov’s fate: “I’m facing up to five years … for peaceful protests.” A few days later, Glukhin himself was arrested.

Glukhin’s peaceful initiative is believed to have been picked up on social media and CCTV cameras. His image is thought to have been matched through facial recognition technology to a database of photos, the source of which has yet to be confirmed.

The European court of human rights ruled that Glukhin’s article 8 right to respect for private life had been infringed, not because of the intrusive surveillance but because there had not been a pressing social need for him to have been taken into custody.

The UK is not Russia. For all that civil liberty campaigners will complain, as is their role, the independence of the judiciary remains strong. The laws relating to freedom of association, expression and the right to privacy are well defended in parliament and outside it.

But the technology, the means by which the state might insert itself into our lives, is developing apace. The checks and balances are not. The Guardian has revealed that the government is legislating, without fanfare, to allow the police and the National Crime Agency to run facial recognition searches across the UK’s driving licence records. When the police have an image, they will be able to identify the person, it is hoped, through the photographic images the state holds for the purposes of ensuring that the roads are safe.

Searching those digital images would have taken more man-hours than could have been justified in the old analogue world. It is now a matter of pushing a button, thanks to artificial intelligence systems that can match biometric measurements in a flash.

There are those who say that none of this is anything to worry about for those who have done nothing wrong. This summer, the government abolished the office of the biometrics and surveillance camera commissioner, an independent watchdog. The last holder of that role, Fraser Sampson, told the Guardian that this was the argument that frustrated him more than any other. The computer will say you were there, at the perimeter of a crime or perhaps at the heart of it, and at times there will be scant opportunity or even motive to argue back. Think of speed cameras and the notices that few contest.

In relation to the use of the driving records, did those passing their tests understand, or agree to in any meaningful way, the addition of the images they provided for their licences to what is in effect a permanent police lineup, to be consulted whenever the law is seeking suspects or witnesses to a crime? If not, what are the consequences for the age-old concept of British policing by consent?

Then we come to the frailty of these systems. An estimated global TV audience of about 300 million watched the coronation of King Charles III. The use of live facial recognition cameras by the police made the event in London the largest public deployment of AI-driven policing in British history. Yet, just a few weeks later, a House of Commons committee heard that the system used could show racial bias at certain thresholds.

Speaking to the science, innovation and technology committee, Dr Tony Mansfield, a principal research scientist at the National Physical Laboratory, said the system used by the Metropolitan police, the UK’s largest force, was prone to bias against black individuals on a set of test data created for his investigations. It is believed that the Met deployed higher thresholds than those at which the bias emerged when seeking matches between their watchlists and their live camera feeds. But at a certain threshold there was an arguably unacceptably high risk of false positive identifications.

Who was regulating this? Everyone wants the bad guys to be caught – but at what price?
