The Guardian - UK
Technology
Johana Bhuiyan

TechScape: ‘Are you kidding, carjacking?’ – The problem with facial recognition in policing

‘The only policy that will prevent false facial recognition arrests is a complete ban.’ Photograph: John Lund/Getty Images/Blend Images

Porcha Woodruff was eight months pregnant when police in Detroit, Michigan, came to arrest her on charges of carjacking and robbery. She was getting her two children ready for school when six police officers knocked on her door and presented her with an arrest warrant. She thought it was a prank.

“Are you kidding, carjacking? Do you see that I am eight months pregnant?” the lawsuit Woodruff filed against Detroit police reads. She sent her children upstairs to tell her fiance that “Mommy’s going to jail”.

She was detained and questioned for 11 hours and released on a $100,000 bond. She immediately went to the hospital, where she was treated for dehydration.

Woodruff later found out that she was the latest victim of false identification by facial recognition. After her image was incorrectly matched to video footage of a woman at the gas station where the carjacking took place, her picture was shown to the victim in a photo lineup. According to the lawsuit, the victim picked Woodruff’s picture as the woman who had been with the perpetrator of the robbery. Nowhere in the investigator’s report did it say the woman in the video footage was pregnant.

A month later the charges were dismissed due to insufficient evidence.

Porcha Woodruff, who was falsely arrested, in early August. Photograph: Carlos Osorio/AP

Woodruff’s is the third known case of an arrest made due to false facial recognition by the Detroit police department – and the sixth in the US. All six people who were falsely arrested are Black. For years, privacy experts and advocates have raised the alarm about the technology’s inability to properly identify people of colour and have warned of the privacy violations and dangers of a system that purports to identify anyone by their image or face. Still, law enforcement and government agencies across the US and around the world continue to contract with facial recognition firms, from Amazon’s Rekognition to Clearview AI.

Countries including France, Germany, China and Italy have used similar technology. In December, it was revealed that Chinese police had used mobile data and faces to track protestors. Earlier this year, French legislators passed a bill giving police the power to use AI in public spaces ahead of the Paris 2024 Olympics, making France the first country in the EU to approve the use of AI surveillance (though the bill forbade the use of real-time facial recognition). And last year, Wired reported on controversial proposals to let police forces in the EU share photo databases that include images of people’s faces – described by one civil rights policy adviser as “the most extensive biometric surveillance infrastructure that I think we will ever have seen in the world”.

Back in Detroit, Woodruff’s lawsuit has sparked renewed calls in the US for total bans on police and law enforcement use of facial recognition. The Detroit police have rolled out new limits on the use of facial recognition in the days since the lawsuit was filed, including prohibiting the use of facial recognition images in a lineup and requiring that a detective not involved in the case be the one to show the images to the witness asked to make an identification. But activists say that’s not enough.

“The only policy that will prevent false facial recognition arrests is a complete ban,” said Albert Fox Cahn of the nonprofit Surveillance Technology Oversight Project. “Sadly, for every facial recognition mistake we know about, there are probably dozens of Americans who remain wrongly accused and never get justice. These racist, error-prone systems simply have no place in a just society.”

Police have used facial recognition technology to monitor and identify Black Lives Matter protestors. Photograph: Kathy Willens/AP

As governments around the world grapple with generative AI, the long-documented harms of existing AI uses, such as those in surveillance systems, are often glossed over or left out of the conversation entirely. Even in the case of the EU AI Act, which was introduced with several clauses proposing limits on high-risk uses of AI such as facial recognition, some experts say the hype around generative AI has partly distracted from those discussions. “We were quite lucky that we put a lot of these things on the agenda before this AI hype and generative AI, ChatGPT boom happened,” Sarah Chander, a senior policy adviser at the international advocacy organisation European Digital Rights, told me in June. “I think ChatGPT muddies the water very much in terms of the types of harms we’re actually talking about here.”

Much like other AI-based systems, facial recognition is only as good as the data fed into it, and as such it often reflects and perpetuates the biases of those who build it – a problem, as Amnesty International has noted, because the images used to train such systems are predominantly of white faces. Facial recognition systems have the poorest accuracy rates when it comes to identifying people who are Black, female and between the ages of 18 and 30, while false positives “exist broadly”, according to a study by the National Institute of Standards and Technology (NIST). In 2017, NIST examined 140 face recognition algorithms and found that “false positive rates are highest in west and east African and east Asian people, and lowest in eastern European individuals. This effect is generally large, with a factor of 100 more false positives between countries.”
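
To make that metric concrete, here is a minimal sketch of how a per-group false positive rate can be computed. The data, group labels and threshold below are invented for illustration and are not drawn from the NIST study; a false positive is a comparison of two different people that the system nonetheless scores above its match threshold.

```python
# Hypothetical sketch: per-group false positive rates from labelled match attempts.
# All scores, groups and the threshold are made up for illustration.
from collections import defaultdict

THRESHOLD = 0.8  # assumed similarity threshold; real systems tune this value

# Each record: (demographic_group, similarity_score, is_same_person)
attempts = [
    ("group_a", 0.91, False),  # two different people scored as a match -> false positive
    ("group_a", 0.42, False),
    ("group_b", 0.35, False),
    ("group_b", 0.88, True),   # genuine match; not counted as a false positive
]

non_mated = defaultdict(int)        # comparisons of two different people, per group
false_positives = defaultdict(int)  # of those, how many cleared the threshold

for group, score, same_person in attempts:
    if not same_person:
        non_mated[group] += 1
        if score >= THRESHOLD:
            false_positives[group] += 1

for group in sorted(non_mated):
    rate = false_positives[group] / non_mated[group]
    print(f"{group}: false positive rate = {rate:.2%}")
```

A large gap between the printed rates for different groups is exactly the kind of demographic differential the NIST findings describe.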

But even if facial recognition technology were perfectly accurate, it wouldn’t be safe, critics argue. Civil liberties groups say the technology can create a vast and boundless surveillance network that breaks down any semblance of privacy in public spaces. People can be identified wherever they go, even in places where they are engaged in constitutionally protected activity, such as at protests or religious centres. In the aftermath of the US supreme court’s reversal of federal abortion protections, that capability is newly dangerous for people seeking reproductive care. Some facial recognition systems, such as Clearview AI, are also built on images scraped from the internet without consent, so social media images, professional headshots and any other photos that live in public digital spaces can be used to train facial recognition systems that are in turn used to criminalise people. Clearview has been banned in several European countries, including Italy and Germany, and is barred from selling facial recognition data to private companies in the US.

As for Woodruff, she is seeking financial damages. Detroit police chief James E White said the department was reviewing the lawsuit and that it was “very concerning”.

“I don’t feel like anyone should have to go through something like this, being falsely accused,” Woodruff told the Washington Post. “We all look like someone.”

The week in AI

Blast Theory’s Cat Royale, a videoed experiment measuring cats’ reactions to being looked after by a robot. Photograph: David JW Bailey/Stephen Daly

The wider TechScape

Star Wars characters on the Galactic Starcruiser at Disney World, Florida. Photograph: Lisa Richwine/Reuters