Facial recognition technology: how it's being used in Ukraine and why it's still so controversial

Felipe Romero Moreno, Senior Lecturer and Research Tutor, School of Law, University of Hertfordshire

Facial recognition technology is controversial in many countries. Shutterstock

Facial recognition technology is being used in warfare for the first time. It could be a game changer in Ukraine, where it is being used to identify the dead and reunite families. But if we fail to grapple with the ethics of this technology now, we could find ourselves in a human rights minefield.

Ukraine’s Ministry of Defence has been using Clearview AI facial recognition software since March 2022 to build a case for war crimes and identify the dead – both Russian and Ukrainian. The Ministry of Digital Transformation in Ukraine said it is using Clearview AI technology to give Russians the chance to experience the “true cost of the war”, and to let families know that if they want to find their loved ones’ bodies, they are “welcome to come to Ukraine”.

Ukraine is being given free access to the software. It’s also being used at checkpoints and could help reunite refugees with their families.

The privacy backlash

Last month, however, the UK Information Commissioner’s Office (ICO) fined Clearview AI more than £7.5 million for collecting images of people in the UK and elsewhere from the web and social media. It was ordered to delete the images and stop obtaining and using the personal data of UK residents publicly available on the internet. Originally the ICO said it intended to fine Clearview AI £17 million.

According to the ICO, given the huge number of UK social media users, Clearview AI’s face database is likely to contain a significant number of images collected without consent.

A lawyer for Clearview AI, Lee Wolosky, said: “While we appreciate the ICO’s desire to reduce their monetary penalty on Clearview AI, we nevertheless stand by our position that the decision to impose any fine is incorrect as a matter of law. Clearview AI is not subject to the ICO’s jurisdiction, and Clearview AI does no business in the UK at this time.”

Clearview AI has said it wants 100 billion face images in its database by early 2023 – equivalent to 14 for every person on Earth. Multiple photos of the same person improve the system’s accuracy.

According to Clearview AI’s website, its facial recognition technology helps law enforcement tackle crime, and enables transportation businesses, banks and other commercial companies to detect theft, prevent fraud and verify identities.

BuzzFeed reported in February 2020 that several British police forces had previously used Clearview AI. A spokeswoman for Clearview AI said police in the UK do not have access to its technology, while spokespeople for both the National Crime Agency and Metropolitan police would neither confirm nor deny use of specific tools or techniques. However, in March 2022 the College of Policing published new guidance for UK police forces on the use of live facial recognition.

The UK government plans to replace key human rights laws with a new Modern Bill of Rights which could make it difficult, if not impossible, for people to challenge decisions based on AI evidence in court, including facial recognition.

According to advocacy group Liberty, the bill is likely to have a disproportionate impact on over-policed communities, as it would create different classes of claimants based on their past behaviour.

A tool for warfare

Clearview AI’s chief executive Hoan Ton-That said its facial recognition software has allowed Ukrainian law enforcement and government officials to store more than 2 billion images from VKontakte, a Russian social networking service. Ton-That said the software can help Ukrainian officials identify dead soldiers more efficiently than fingerprints, and works even if a soldier’s face is damaged.

But there is conflicting evidence about facial recognition software’s effectiveness. According to the US Department of Energy, decomposition of a person’s face can reduce the software’s accuracy. On the other hand, recent scientific research suggests facial recognition can identify dead people with accuracy similar to or better than that of human assessment.

Research suggests fingerprints, dental records and DNA are still the most reliable identification techniques. But they are tools for trained professionals, while facial recognition can be used by non-experts.

Another issue flagged by research is that facial recognition can mistakenly pair two images, or fail to match photos of the same person. In Ukraine, the consequences of any potential error with AI could be disastrous. An innocent civilian could be killed if they are misidentified as a Russian soldier.

A controversial history

In 2016 Ton-That began recruiting computer science engineers to create Clearview AI’s algorithm. But it was not until 2019 that the American facial recognition company started discreetly providing its software to US police and law enforcement agencies.

In January 2020, The New York Times published its story, “The Secretive Company That Might End Privacy as We Know It”. This article prompted more than 40 civil rights and tech organisations to send a letter to the Privacy and Civil Liberties Oversight Board and four US congressional committees, demanding the suspension of Clearview AI’s facial recognition software.

In February 2020, following a data leak of Clearview AI’s client list, BuzzFeed revealed that Clearview AI’s facial recognition software was being used by individuals in more than 2,200 law enforcement departments, government agencies and companies across 27 different countries.

Facial recognition technology is also used to detect theft, prevent fraud and verify identities. Shutterstock

On May 9 2022, Clearview AI agreed to stop selling access to its face database to individuals and businesses in the US, after the American Civil Liberties Union launched a lawsuit accusing Clearview AI of breaching an Illinois privacy law.

Over the last two years, data protection authorities in Canada, France, Italy, Austria and Greece have all fined, investigated or banned Clearview AI from collecting images of people.

The future of Clearview AI in the UK is uncertain. The worst-case scenario for ordinary people and businesses would be if the UK government fails to take on board the concerns raised in response to its consultation on the Modern Bill of Rights. Liberty has warned of a potential human rights “power grab”.

The best outcome, in my opinion, would be for the UK government to scrap its plans for a Modern Bill of Rights. This would also mean that UK courts would continue to take account of judgments from the European Court of Human Rights as case law.

Unless laws governing the use of facial recognition are adopted, police use of this technology risks breaching privacy rights, data protection and equality laws.


Felipe Romero Moreno is affiliated with the British and Irish Law Education and Technology Association (BILETA): https://www.bileta.org.uk/

This article was originally published on The Conversation. Read the original article.
