Reason
Ronald Bailey

Is Facial Recognition a Useful Public Safety Tool or Something Sinister?

Your Face Belongs to Us: A Secretive Startup's Quest To End Privacy as We Know It, by Kashmir Hill, Random House, 352 pages, $28.99

"Do I want to live in a society where people can be identified secretly and at a distance by the government?" asks Alvaro Bedoya. "I do not, and I think I am not alone in that."

Bedoya, a member of the Federal Trade Commission, says those words in New York Times technology reporter Kashmir Hill's compelling new book, Your Face Belongs to Us. As Hill makes clear, we are headed toward the very world that Bedoya fears.

This book traces the long history of attempts to deploy accurate and pervasive facial recognition technology, but it chiefly focuses on the quixotic rise of Clearview AI. Hill first learned of the company's existence in November 2019, when someone leaked her a legal memo in which the mysterious company claimed it could identify nearly anyone on the planet based only on a snapshot of their face.

Hill spent several months trying to talk with Clearview AI's founders and investors, and they in turn tried to dodge her inquiries. Ultimately, she tracked down an early investor, David Scalzo, who reluctantly began to tell her about the company. After she suggested the app would bring an end to anonymity, Scalzo replied: "I've come to the conclusion that because information constantly increases, there's never going to be privacy. You can't ban technology. Sure, it might lead to a dystopian future or something, but you can't ban it." He pointed out that law enforcement loves Clearview AI's facial recognition app.

As Hill documents, the company was founded by a trio of rather sketchy characters. The chief technological brain is an Australian entrepreneur named Hoan Ton-That. His initial partners included the New York political fixer Richard Schwartz and the notorious right-wing edgelord and troll Charles Johnson.

Mesmerized by the tech ferment of Silicon Valley, the precocious coder Ton-That moved there at age 19 and quickly occupied himself creating various apps, including, in 2009, the hated video-sharing ViddyHo "worm," which hijacked users' Google Talk contact lists and spewed out streams of instant messages urging recipients to click on its video offerings. After the uproar over ViddyHo, Ton-That kept a lower profile while working on other apps, eventually decamping to New York City in 2016.

In the meantime, Ton-That began toying online with alt-right and MAGA conceits, an interest that led him to Johnson. The two attended the 2016 Republican National Convention in Cleveland, where Donald Trump was nominated. At that convention, Johnson briefly introduced Ton-That to internet financier Peter Thiel, who would later be an angel investor in what became Clearview AI. (For what it's worth, Ton-That now says he regrets his earlier alt-right dalliances.)

Ton-That and Schwartz eventually cut Johnson out of the company. As revenge for his ouster, Johnson gave Hill access to tons of internal emails and other materials that illuminate the company's evolution into the biggest threat to our privacy yet developed.

"We have developed a revolutionary, web-based intelligence platform for law enforcement to use as a tool to help generate high-quality investigative leads," explains the company's website. "Our platform, powered by facial recognition technology, includes the largest known database of 40+ billion facial images sourced from public-only web sources, including news media, mugshot websites, public social media, and other open sources."

***

As Hill documents, the billions of photos in Clearview AI's ever-growing database were scraped without permission from Facebook, TikTok, Instagram, and other social media sites. The company argues that what it is doing is no different from the way Google catalogs links and data for its search engine; Clearview simply catalogs photographs instead. The legal memo leaked to Hill was part of the company's defense against numerous lawsuits filed by social media companies and privacy advocates who objected to the scraping.

Scalzo is right that law enforcement loves the app. In March 2023, Ton-That told the BBC that U.S. police agencies had run nearly a million searches using Clearview AI. Agencies used it, for example, to identify suspects in the January 6 Capitol riot. Of course, it does not always finger the right people: Police in New Orleans misused a faulty face ID from Clearview AI's app to arrest and detain an innocent black man.

Some privacy activists argue that facial recognition technologies are racially biased and do not work as well on some groups. But as developers continued to train their algorithms, they mostly fixed that problem; the software's error rates now differ so little by race, gender, and age that the disparities are statistically insignificant. In testing by the National Institute of Standards and Technology, Hill reports, Clearview AI ranked among the world's most accurate facial recognition companies.

***

To see the possible future of pervasive facial recognition, Hill looks at how China and Russia are already using the technology. As part of a "safe city" initiative, authorities in Moscow have installed more than 200,000 surveillance cameras. Hill recounts an experiment by the Russian civil liberties activist Anna Kuznetsova, who submitted her photos to a black-market data seller with access to the camera network. Two weeks later, she received a 35-page report detailing each time the system had identified her face on a surveillance camera: 300 sightings in all. The system also accurately predicted where she lived and where she worked. The data seller was punished, but the system remains in place; it is now being used to identify anti-government protesters.

"Every society needs to decide for itself what takes priority," said Kuznetsova, "whether it's security or human rights."

The Chinese government has deployed over 700 million surveillance cameras; in many cases, artificial intelligence analyzes their output in real time. Authorities have used the technology to police minor infractions like jaywalking and to monitor ethnic and religious minorities and political dissidents. Hill reports that there is a "red list" of VIPs who are invisible to facial recognition systems. "In China, being unseen is a privilege," she writes.

In August 2023, the Chinese government issued draft regulations that aim to limit the private use of facial recognition technologies; the rules impose no such restrictions on its use for "national security" concerns.

Meanwhile, Iranian authorities are using facial recognition tech to monitor women protesting for civil rights by refusing to wear hijabs in public.

Co-appearance analysis using artificial intelligence allows users to review live or recorded video and identify all of the other people with whom a person of interest has come into contact. Basically, real-time facial recognition will not only keep track of you; it will identify your friends, family, coreligionists, political allies, business associates, and sexual partners, and it will log when and where and for how long you hung out with them. The San Jose, California–based company Vintra asserts that its co-appearance technology "will rank these interactions, allowing the user to identify the most recurrent relationships and understand potential threats." Perhaps your boss will think your participation in an anti-vaccine rally or a visit to a gay bar qualifies as a "potential threat."

What about private use of facial recognition technology? It certainly sounds like a product with potential: Personally, I am terrible at matching names to faces, so incorporating facial recognition into my glasses would be a huge social boon. Hill tests a prototype of Clearview AI's augmented-reality glasses that can do just that. Alas, the glasses also give the wearer access to all the other photos of a person in Clearview AI's database, including snapshots at drunken college parties, at protest rallies, and with former lovers.

"Facial recognition is the perfect tool for oppression," argued Woodrow Hartzog, then a professor of law and computer science at Northeastern University, and Evan Selinger, a philosopher at the Rochester Institute of Technology, back in 2018. They also called it "the most uniquely dangerous surveillance mechanism ever invented." Unlike other biometric identifiers, such as fingerprints and DNA, your face is immediately visible wherever you roam. The upshot of the cheery slogan "your face is your passport" is that authorities don't even have to bother with demanding "your papers, please" to identify and track you.

In September 2023, Human Rights Watch and 119 other civil rights groups from around the world issued a statement calling "on police, other state authorities and private companies to immediately stop using facial recognition for the surveillance of publicly-accessible spaces and for the surveillance of people in migration or asylum contexts." Human Rights Watch added separately that the technology "is simply too dangerous and powerful to be used without negative consequences for human rights…. As well as undermining privacy rights, the technology threatens our rights to equality and nondiscrimination, freedom of expression, and freedom of assembly."

The United States is cobbling together what Hill calls a "rickety surveillance state" built on this and other surveillance technologies. "We have only the rickety scaffolding of the Panopticon; it is not yet fully constructed," she observes. "We still have time to decide whether or not we actually want to build it."
