
Communications of the ACM

ACM TechNews

Artificial Intelligence's White Guy Problem



Artificial intelligence (AI) may be worsening inequality because of biases embedded in the underlying machine-learning algorithms, writes Kate Crawford, a principal researcher at Microsoft and co-chairwoman of a White House symposium on society and AI. She cites the case in which Google's photo application classified images of black people as gorillas as an example of a system with prejudice built in. A more pernicious example comes from a recent ProPublica investigation, which found popular software used to predict criminal recidivism was nearly twice as likely to erroneously flag black defendants as high risk, while white defendants were more often mislabeled as low risk.

Crawford says AI reflects the values of those who create it, and inclusivity must be a design priority to avoid machine intelligences that mirror a narrow, elite view of society. "We need to be vigilant about how we design and train these machine-learning systems, or we will see ingrained forms of bias built into the artificial intelligence of the future," Crawford warns.

Analyzing biases in AI systems now can help designers build fairness into them, Crawford says, but the technology community must also become more accountable. "We must address the current implications for communities that have less power, for those who aren't dominant in elite Silicon Valley circles," she says.

From The New York Times

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


 
