FBI Agents Are Using Face Recognition Without Proper Training

Sep 25, 2023 5:07 PM

The FBI makes heavy use of face recognition services like that of controversial startup Clearview AI, but 95 percent of the agents using them haven’t completed training on the technology.

[Image: Three blue crystal heads facing each other. Photograph: Lourdes Balduque/Getty Images]

The US Federal Bureau of Investigation (FBI) has done tens of thousands of face recognition searches using software from outside providers in recent years. Yet only 5 percent of the 200 agents with access to the technology have taken the bureau’s three-day training course on how to use it, a report from the Government Accountability Office (GAO) this month reveals. The bureau has no policy for face recognition use in place to protect privacy, civil rights, or civil liberties.

Lawmakers and others concerned about face recognition have said that adequate training on the technology, and on how to interpret its output, is needed to reduce misuse and errors. Some experts counter that training can lull law enforcement and the public into thinking face recognition is low risk.

Since the false arrest of Robert Williams near Detroit in 2020, multiple instances have surfaced in the US of arrests after a face recognition model wrongly identified a person. Alonzo Sawyer, whose ordeal became known this spring, spent nine days in prison for a crime he didn’t commit.

The lack of face recognition training at the FBI came to light in a GAO report examining the protections in place when federal law enforcement uses the technology. The report was compiled at the request of seven Democratic members of Congress.

Report author and GAO Homeland Security and Justice director Gretta Goodwin says, via email, that she found no evidence of false arrests due to use of face recognition by a federal law enforcement agency. An FBI spokesperson declined to respond to questions about the GAO report for this story.

The GAO report focuses on face recognition tools made by commercial and nonprofit entities. That means it does not cover the FBI's in-house face recognition platform, which the GAO previously criticized for poor privacy protections. The US Department of Justice was ordered by the White House last year to develop best practices for using face recognition and report any policy changes that result.

The outside face recognition tools used by the FBI and other federal law enforcement agencies covered by the report come from companies including Clearview AI, which scraped billions of photos of faces from the internet to train its face recognition system, and Thorn, a nonprofit that combats sex trafficking by applying face recognition to identify victims and traffickers in online commercial sex market imagery.

The FBI ranks first among federal law enforcement agencies examined by the GAO for the scale of its use of face recognition. More than 60,000 searches were carried out by seven agencies between October 2019 and March 2022. Over half were made by FBI agents, about 15,000 using Clearview AI and 20,000 using Thorn.

No existing law requires federal law enforcement personnel to take training before using face recognition or to follow particular standards when using face recognition in a criminal investigation.

The DOJ plans to issue a department-wide civil rights and civil liberties policy for face recognition but has yet to set an implementation date, according to the report. The report also says that, at one point in 2022, DOJ officials considered updating the department's policy to allow a face recognition match alone to justify applying for a search warrant.

The commercial face recognition tools used by the FBI and other federal agencies attempt to match a photo of a suspect or victim to images of faces in databases that can contain millions of images. After the software offers up a list of possible matches, humans decide whom to subject to further investigation.
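The matching process described above can be sketched in code. This is an illustrative simplification, not the FBI's or any vendor's actual system: it assumes faces have already been converted to embedding vectors, ranks gallery faces by cosine similarity to a probe, and returns a shortlist for human review. All function and variable names here are hypothetical.

```python
import numpy as np

def top_candidates(probe, gallery, k=3):
    """Rank gallery embeddings by cosine similarity to a probe embedding.

    Returns the indices and similarity scores of the k best candidates.
    A human investigator, not the software, decides whom to pursue.
    """
    # Normalize so the dot product equals cosine similarity.
    probe = probe / np.linalg.norm(probe)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ probe                # one similarity score per gallery face
    order = np.argsort(-sims)[:k]   # highest-scoring candidates first
    return order, sims[order]

# Toy example: four gallery "faces" as 3-D embeddings (real systems use
# hundreds of dimensions and databases with millions of images).
gallery = np.array([[1.0, 0.0, 0.0],
                    [0.9, 0.1, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
probe = np.array([1.0, 0.05, 0.0])
idx, scores = top_candidates(probe, gallery, k=2)
```

The key design point mirrors the article: the software only produces a ranked list of possible matches with confidence scores; treating the top hit as a positive identification is where investigative errors creep in.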

Face recognition models made by commercial vendors in the US have misidentified Asian Americans and women with dark skin at higher rates than the rest of the population, according to a National Institute of Standards and Technology report. Though government assessments of face recognition continue to show improvements in identifying people across different demographic groups, false arrests continue to take place. They are often the result of a combination of faulty technology and poor investigative work by police officers, and they almost exclusively involve Black men. Last month, news broke about Porcha Woodruff, who was falsely arrested despite being visibly pregnant, unlike the suspect seen in security camera footage.

Misidentification by a witness or law enforcement officer can also result in a false arrest. Research shows that people are generally bad at recognizing other people they don’t know and especially bad at recognizing people of a different race. Photo quality and the time difference between a probe photo and a photo in a database can also influence outcomes.

The FBI has been under pressure from government and lawmakers to better protect the rights of US residents against the power of face recognition for years. The GAO began calling for the FBI to assess the accuracy and privacy implications of its in-house face recognition software in 2016. It renewed those calls in 2019, when bipartisan lawmakers also pressured the FBI to add safeguards.

In 2022, a congressional committee instructed the Department of Justice to create an ethical-use-of-face-recognition policy, but the agency has yet to put such a policy into practice. That same year, President Joe Biden signed an executive order directing the department to commission a National Academy of Sciences study of face recognition's impact on privacy, civil rights, and civil liberties. It also required the Department of Justice to engage in an interagency process to develop best practices for face recognition use by law enforcement agencies.

Sneha Revanur, founder of Encode Justice, a youth nonprofit that wants a moratorium on face recognition use by law enforcement, says the technology can be overlooked amid recent excitement and fear about generative AI such as ChatGPT. “It’s critical that we don’t leave behind unfinished business around issues like face recognition,” she says. Revanur says a federal moratorium on the technology would provide time to study what kinds of training and use policies could possibly lessen the power of a surveillance tool that can violate civil rights.

Khari Johnson is a senior writer for WIRED covering artificial intelligence and the positive and negative ways AI shapes human lives. He was previously a senior writer at VentureBeat, where he wrote stories about power, policy, and novel or noteworthy uses of AI by businesses and governments.


Credit: www.wired.com
