Los Angeles Times
Libor Jany

LAPD doesn't fully track its use of facial recognition, report finds

LOS ANGELES — Two years after Los Angeles police leaders set tougher limits on the use of facial recognition technology, a follow-up report found the department lacks a way to track its outcomes or effectiveness.

The report, by the LAPD inspector general's office, found that LAPD personnel used facial recognition software in an effort to identify criminal suspects nearly 2,000 times last year. Of those searches, about 55% resulted in a positive match — meaning that an image of an unidentified suspect was matched through artificial intelligence to a mugshot or other photo of a known person, the report found.

On Tuesday, Inspector General Mark Smith told the department's civilian oversight commissioners that the LAPD was largely in compliance with a 2021 policy that set out rules for when and how specially trained officers can use a facial recognition program maintained by the county Sheriff's Department. The county program runs images against a database of roughly 9 million mugshots of people who have been booked into the county's detention facilities — a far less expansive pool than some third-party search platforms.

But the absence of a clear process for documenting either an investigation's results or the corroborating evidence that confirms a photo match meant there was "no way to verify or analyze the search results," the report said. This, it said, also made it impossible to track the number of times the technology may have misidentified someone — a common criticism of facial recognition.

"The risk of a "false positive" — an instance when ... a match ultimately results in the identification and arrest of the wrong person due to the fact that insufficient corroborating evidence was obtained prior to law enforcement action being taken — is always of paramount concern," the report said.

The report also found that the department had no way of determining whether officers follow a state law and an internal policy that prohibit using photos taken from an officer's phone or cameras worn by an officer or mounted in a police vehicle in facial recognition programs.

For every facial recognition query, officers are expected to complete a form that includes the investigator's name and identification number; the reason for the search; whether the subject of the search is a suspect, victim or witness; the search's results; and the search date.

What the form fails to capture, however, are details on the outcomes of these inquiries, including instances in which a facial recognition hit leads to the misidentification of a suspect, Smith said. While officers are not supposed to arrest someone based solely on a facial recognition match without additional evidence, there is no way to track whether that is occurring, Smith said.

"We believe there needs to be an accounting of when that happens because it has an impact on the person detained," he told commissioners on Tuesday.

Later in the presentation, Commission President William Briggs asked Smith if he could offer a breakdown of the demographics of people falsely identified through the facial recognition program.

"Right now, I'm not aware of a way to enumerate those instances," Smith responded.

In January 2021, the Commission approved its current facial recognition policy after The Times reported that LAPD officers had used the technology more than 30,000 times since 2009 despite department claims to the contrary.

Among other things, the policy established new measures for tracking the Police Department's use of the county system. It limited use of the sheriff's system to cases where there is an imminent threat to life, or in which investigators are trying to solve a crime or to identify someone who is incapacitated or at risk. The policy also explicitly states that any match determined by the software can only be used as a lead by officers but does not constitute probable cause for an arrest and cannot serve as the sole basis for criminal charges.

The move came over the protests of privacy watchdogs and civil rights advocates, who had sought an outright ban. They pointed to research suggesting that facial recognition technology is unreliable, particularly when matching the faces of Black people, exacerbating historical racial disparities in the criminal justice system.

Tuesday's report did not specify how many times the department used facial recognition before the new rules were adopted, only that photo-comparison software had been used "extensively" since 2009.

The report also did not address LAPD officers' use of facial recognition software from Clearview AI, a technology company. The department's use of the controversial program was first revealed in a 2020 story by BuzzFeed, which reported that 25 LAPD employees had performed nearly 475 searches of Clearview's database of more than 3 billion photos compiled from Facebook, Google and other sites.

After the BuzzFeed article, the LAPD conducted its own internal probe, which found that only eight employees had used a trial version of the Clearview program, conducting 18 "Department-related searches," according to the inspector general's report.

____

(Times staff writer Kevin Rector contributed to this report.)
