
Communications of the ACM

ACM TechNews

When Computers Learn Human Languages, They Also Learn Human Prejudices


[Image: Part of a machine-learning algorithm recently open-sourced by Google. Credit: Google]

New research from Princeton University suggests that computers that learn human languages also absorb human prejudices in the form of biased word associations.

Researchers used a machine-learning algorithm to infer associations between English words by feeding the algorithm nearly 1 trillion words of text extracted from the Internet. The algorithm derived meaning from the words based on their proximity to one another and the strength of their associations.
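
In word-embedding models of this kind (such as word2vec or GloVe), each word is mapped to a vector, and the strength of association between two words is typically measured as the cosine similarity of their vectors. The following is a minimal sketch of that measurement; the toy vectors are illustrative assumptions, not data from the study:

```python
import numpy as np

# Toy vectors standing in for embeddings learned from web text.
# These numbers are made up for illustration, not the study's data.
vectors = {
    "engineer": np.array([0.9, 0.1, 0.3]),
    "salary":   np.array([0.8, 0.2, 0.4]),
    "home":     np.array([0.1, 0.9, 0.2]),
}

def cosine_similarity(a, b):
    """Association strength between two words: the cosine of the angle
    between their vectors (closer to 1.0 = more strongly associated)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors["engineer"], vectors["salary"]))  # high (~0.98)
print(cosine_similarity(vectors["engineer"], vectors["home"]))    # lower (~0.27)
```

Because the vectors are learned purely from which words appear near which others in the training text, any biased usage patterns in that text end up encoded in these similarity scores.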

The system showed gender bias, strongly associating male names with words such as "management" and "salary," and female names with "home" and "family." The researchers found that the strength of association between an occupation and words describing women accurately predicted the number of women working in that profession.
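
The occupation result rests on a differential-association score: an occupation word's average similarity to a set of female attribute words minus its average similarity to a set of male ones. Here is a hedged sketch of that idea; the word lists and random stand-in vectors are illustrative assumptions, and gender_lean is a hypothetical helper, not code from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-in vectors; a real test would use embeddings trained on web text.
words = ["he", "him", "man", "she", "her", "woman", "nurse", "engineer"]
emb = {w: rng.normal(size=50) for w in words}

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_lean(word, male=("he", "him", "man"), female=("she", "her", "woman")):
    """Differential association: mean similarity to the female attribute
    words minus mean similarity to the male ones. Positive leans female."""
    f = np.mean([cos(emb[word], emb[w]) for w in female])
    m = np.mean([cos(emb[word], emb[w]) for w in male])
    return f - m

for occupation in ("nurse", "engineer"):
    print(occupation, round(gender_lean(occupation), 3))
```

Computed over real embeddings, scores like these can then be correlated against labor statistics, which is how an association strength can "predict" the share of women in a profession.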

A similar study by ProPublica identified racial bias in algorithms that score the risk posed by criminal defendants.

The Princeton researchers say it is possible to train algorithms with bias-free language samples, but these systems would have an incomplete understanding of human language.

To counter these biases, they suggest pairing the output of machine-learning algorithms with an algorithmic accountability approach: layered oversight and independent evaluation of outcomes.

From Quartz

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


 
