
Communications of the ACM

ACM TechNews

Are Face Recognition Systems Accurate? Depends on Your Race



Credit: Sophia Foster-Dimino

In June, the U.S. Government Accountability Office issued a report finding that the Federal Bureau of Investigation (FBI) has not properly tested the accuracy of its face-matching systems or of the massive network of state-level face-matching databases it accesses.

Studies show the facial-recognition systems used by the FBI and other police agencies have a built-in racial bias that is a result of how the systems are designed and the data on which they are trained.

Although photos taken under controlled conditions with generally cooperative subjects can be matched with nearly 95% accuracy, images captured under less-than-ideal conditions produce far more errors. The algorithms can also be biased by the way they are trained, according to Michigan State University professor Anil Jain.

Face-matching software works by learning to recognize faces from training data. If a gender, age group, or race is underrepresented in that data, the imbalance will be reflected in the algorithm's performance. "If your training set is strongly biased toward a particular race, your algorithm will do better recognizing that race," says University of Texas at Dallas professor Alice O'Toole.
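The effect O'Toole describes can be seen in a toy simulation. The sketch below is not from the article and does not use real face data; it assumes NumPy and scikit-learn are available and uses purely synthetic features. A single classifier is trained on a 90/10 mix of two groups whose feature statistics differ, then scored separately on each group.

```python
# Toy sketch (not from the article): how demographic imbalance in
# training data can show up as unequal accuracy. All data is synthetic;
# NumPy and scikit-learn are assumed to be available.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic "match" / "no-match" examples for one demographic group.
    # The shift stands in for group-dependent feature statistics.
    no_match = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    match = rng.normal(loc=1.5 + shift, scale=1.0, size=(n, 2))
    X = np.vstack([no_match, match])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

# Training set: group A supplies 90% of the examples, group B only 10%.
Xa, ya = make_group(900, shift=0.0)
Xb, yb = make_group(100, shift=3.0)
clf = LogisticRegression().fit(np.vstack([Xa, Xb]),
                               np.concatenate([ya, yb]))

# Balanced test sets reveal the gap: the underrepresented group is
# scored against a decision boundary fit mostly to the majority group.
for name, shift in [("group A", 0.0), ("group B", 3.0)]:
    X_test, y_test = make_group(500, shift)
    print(name, "accuracy:", round(clf.score(X_test, y_test), 3))
```

In this toy setup, accuracy on the overrepresented group typically comes out markedly higher than on the underrepresented one, which is the effect the article attributes to biased training sets.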

In 2011, O'Toole conducted a study showing an algorithm developed in Western countries was better at recognizing Caucasian faces than it was at recognizing East Asian faces, while East Asian algorithms performed better on East Asian faces than on Caucasian faces.

From Technology Review
View Full Article

 

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


 
