Immigration enforcement agents across the US are increasingly relying on a new smartphone app with facial recognition technology.
The app, called Mobile Fortify, lets agents point a phone’s camera at their intended target and scan the person’s face, pulling data on that individual from multiple federal and state databases, some of which federal courts have deemed too inaccurate to support arrest warrants.
The US Department of Homeland Security has used Mobile Fortify to scan faces and fingerprints in the field more than 100,000 times, according to a lawsuit brought earlier this month by Illinois and Chicago against the federal agency. That’s a drastic shift from immigration enforcement’s earlier use of facial recognition technology, which was previously limited largely to investigations and ports of entry and exit, legal experts say.
The app’s existence was first uncovered last summer by 404 Media through leaked emails. In October, 404 Media also reported on internal DHS documents stating that people cannot refuse to be scanned by Mobile Fortify.
“Here we have ICE using this technology in exactly the confluence of conditions that lead to the highest false match rates,” says Nathan Freed Wessler, deputy director of the ACLU’s speech, privacy and technology project. “A false result from this technology can turn somebody’s life totally upside down.” The larger implications for democracy are chilling, too, he notes: “ICE is effectively trying to create a biometric checkpoint society.”
Use of the app has inspired backlash on the streets, in courts, and on Capitol Hill.
Protesters are fighting back with a variety of tactics, including recording masked agents and using burner phones and donated dashboard cameras, according to the Washington Post.
Underpinning resistance to ICE’s use of facial recognition are doubts about the technology’s accuracy. Research has found higher error rates when identifying women and people of color than when scanning white faces. And ICE often uses the technology in intense, fast-moving situations, which makes misidentification more likely: the people being scanned may be people of color, may be turning away from officers because they don’t want to be identified, or may be standing in poor lighting.
The Illinois lawsuit against the DHS takes particular issue with the federal agency’s use of Mobile Fortify, arguing that the app goes far beyond what Congress allows with regard to the collection of biometric data. The complaint cites several examples in which federal agents appeared to photograph or scan US citizens across Illinois without their consent.
Democratic lawmakers in Congress introduced a bill on 15 January that would ban the homeland security department outright from using Mobile Fortify or similar apps, except for identification at ports of entry. It follows a September letter from senators to ICE asking for more information about the app and declaring that “even when accurate, this type of on-demand surveillance threatens the privacy and free speech rights of everyone in the United States”.
The DHS said in a statement that Mobile Fortify does not violate constitutional rights or compromise privacy. “It operates with a deliberately high matching threshold and queries only limited CBP immigration datasets. The application does not access open-source material, scrape social media, or rely on publicly available data,” a spokesperson said. “Mobile Fortify has not been blocked, restricted, or curtailed by the courts or by legal guidance. It is lawfully used nationwide in accordance with all applicable legal authorities.”
According to 404 Media, the app’s database consists of some 200m images.
Observers, experts, and at least one congressman have said federal immigration agents frequently do not ask for consent to scan a person’s face – and may dismiss other documentation that contradicts the app’s results. ICE has been documented using biometrics as a definitive determination of someone’s citizenship in the absence of other identification.
This means ICE is not required to conduct additional vetting of facial scans, or other checks, to avoid a misidentification. 404 Media reported earlier this month that Mobile Fortify misidentified a detained woman during an immigration raid; the app returned two different, incorrect names.
“Facial recognition – to the extent it should be used at all – is really supposed to be a starting point,” says Jake Laperruque, deputy director of the security and surveillance project at the Center for Democracy & Technology (CDT). “If you treat this as an endpoint – as a definitive ID – you’re going to have errors, and you’re going to end up arresting and jailing people that are not actually who the machine says it is.”
Laperruque says that even police departments across the country have pushed back against over-reliance on facial recognition, treating it as a lead at most. At least 15 states are cautious about using it at all, and have laws limiting police use of the technology. In 2019, San Francisco became the first major US city to ban the use of facial recognition technology by police and all other local government agencies.
The DHS issued a directive in September 2023 requiring the department to test the technology for unintended bias and to offer US citizens the choice to opt out of scans not conducted by law enforcement. That directive appears to have been rescinded in February last year.
ICE’s stops – whether they involve face-scanning or not – have been subject to litigation as well. They have often been referred to as “Kavanaugh stops” after the supreme court justice wrote in a concurring opinion that Hispanic residents’ “apparent ethnicity” can be a “relevant factor” for ICE to stop them and demand proof of citizenship. The ACLU sued the Trump administration earlier this month, accusing federal immigration authorities of racial profiling and unlawful arrests.