Even the U.S. Government Admits Facial Recognition Is Racially Biased

Credit to Author: Edward Ongweso Jr | Date: Fri, 20 Dec 2019 17:36:13 +0000

A new federal study from the National Institute of Standards and Technology (NIST) confirms, again, that facial recognition technology is riddled with fundamental racial bias.

NIST ran nearly 200 facial recognition algorithms, developed by 99 corporations, against 18 million images drawn from federal databases and found that the algorithms’ accuracy varied wildly across racial, ethnic, gender, and age groups. Native Americans, Blacks, and Asians had some of the highest false match rates; in mugshot searches, Black and Asian faces were falsely matched at rates ten to 100 times higher than Caucasian faces. Women generally had higher false match rates than men, and Native American women were misidentified as much as 68 times more often than white men.
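For readers wondering what a “false match rate” actually measures, the sketch below is a simplified, hypothetical illustration (not NIST’s code, metrics, or data; the group names, scores, and threshold are invented): it counts how often a matcher wrongly declares two different people to be the same person, broken down by demographic group, and compares each group’s rate to a baseline group.

```python
# Hypothetical illustration of a per-group false match rate comparison.
# All names, scores, and the threshold are invented for clarity; this is not NIST's methodology.
from collections import defaultdict

def false_match_rates(comparisons, threshold=0.8):
    """comparisons: list of (group, similarity_score, same_person) tuples
    for image pairs scored by a face matcher."""
    false_matches = defaultdict(int)   # impostor pairs wrongly accepted as a match
    impostor_pairs = defaultdict(int)  # all pairs of genuinely different people
    for group, score, same_person in comparisons:
        if not same_person:            # only impostor pairs can produce false matches
            impostor_pairs[group] += 1
            if score >= threshold:     # matcher wrongly says "same person"
                false_matches[group] += 1
    return {g: false_matches[g] / impostor_pairs[g] for g in impostor_pairs}

# Toy data: (demographic group, matcher score, whether the pair is truly the same person)
pairs = [
    ("group_a", 0.91, False), ("group_a", 0.40, False),
    ("group_b", 0.85, False), ("group_b", 0.30, False),
    ("group_b", 0.35, False), ("group_b", 0.20, False),
]

rates = false_match_rates(pairs)
baseline = rates["group_b"]
for group, rate in rates.items():
    print(f"{group}: FMR={rate:.2f}, {rate / baseline:.1f}x the baseline group")
```

In the toy data, group_a is falsely matched at twice the rate of the baseline group; the NIST study reported disparities far larger than that, on the order of ten to 100 times for some groups.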

"This study makes it clear: the government needs to stop using facial recognition surveillance right now. This technology has serious flaws that pose an immediate threat to civil liberties, public safety, and basic human rights,” said Fight for the Future, a privacy rights group. “Even if the algorithms improve in the future, biometric surveillance like face recognition is dangerous and invasive. Lawmakers everywhere should take action to ban the use of this nuclear-grade surveillance tech."

For years, facial recognition has been part of the drive to integrate artificial intelligence systems into everything from public housing to healthcare, despite constant warnings about the technology’s inherent bias against Black and brown people and its subsequent abuse by corporations, police departments, federal agencies, and everything in between, all in the name of “improving” the technology instead of simply banning it.

According to facial recognition researchers, the U.S. government, along with researchers and corporations, regularly and non-consensually uses the images of immigrants, abused children, and dead people to test facial recognition programs. In October, contractors working for Google were caught training its facial recognition systems using "dubious tactics" that targeted "darker skin people," including deceiving homeless people into letting their faces be scanned and then lying to them about it.

"Even government scientists are now confirming that this surveillance technology is flawed and biased. One false match can lead to missed flights, lengthy interrogations, watchlist placements, tense police encounters, false arrests, or worse. But the technology’s flaws are only one concern,” ACLU Senior Policy Analyst Jay Stanley told Motherboard. “Face recognition technology—accurate or not—can enable undetectable, persistent, and suspicionless surveillance on an unprecedented scale. Government agencies, including the FBI, CBP and local law enforcement, must immediately halt the deployment of this dystopian technology.”

This article originally appeared on VICE US.
