
Communications of the ACM

ACM TechNews

Lazy Coders Are Training Artificial Intelligences to Be Sexist


Vintage sexism can show up in modern algorithms.

Even the most unbiased algorithm will reflect the biases of a slanted culture; if you don't take steps to remove them, you should assume prejudice is well represented in all your software.

Credit: Martyn Goddard/REX/Shutterstock

Artificial intelligence (AI) systems can form flawed and stereotypical word associations by learning from biased data samples, according to a report from Microsoft Research.

To help computers form associations between words and derive meaning from them, coders often use a technique called word embedding, in which words are mapped to vectors whose positions encode their relationships to other words.
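As a rough illustration, an embedding can be pictured as a table mapping words to vectors, with relationships measured by cosine similarity and simple vector arithmetic. The sketch below uses tiny hand-invented vectors; the vocabulary and values are made up for illustration, whereas real embeddings are learned from large text corpora:

```python
import numpy as np

# Toy word embedding: each word maps to a small vector.
# Real systems learn hundreds of dimensions from large corpora;
# these 4-D values are invented purely for illustration.
embedding = {
    "king":  np.array([0.8, 0.6, 0.1, 0.0]),
    "queen": np.array([0.8, 0.6, 0.9, 0.0]),
    "man":   np.array([0.1, 0.2, 0.1, 0.0]),
    "woman": np.array([0.1, 0.2, 0.9, 0.0]),
}

def cosine(u, v):
    """Cosine similarity: close to 1.0 means the vectors point the same way."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The classic analogy test: king - man + woman should land near queen.
target = embedding["king"] - embedding["man"] + embedding["woman"]
for word, vec in embedding.items():
    print(f"{word:>5}: {cosine(target, vec):.3f}")
```

In this contrived example the arithmetic king - man + woman lands exactly on queen; in a trained embedding it would only land nearby, which is precisely how the associations described below are surfaced.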

Led by Adam Kalai, the Microsoft team fed Google News articles to a data-mining algorithm and examined the word associations it formed. The algorithm linked occupations to gender stereotypes, labeling jobs such as philosopher, captain, and boss as "male," while homemaker, nurse, and receptionist were labeled "female."
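One way such labels can arise is by projecting each occupation's vector onto a gender direction, such as the difference between "he" and "she." The following sketch shows the idea with invented two-dimensional vectors; the words and values are assumptions for illustration, not data from the study:

```python
import numpy as np

# Invented toy vectors; a real embedding would be learned from news text.
emb = {
    "he":           np.array([ 0.9, 0.1]),
    "she":          np.array([-0.9, 0.1]),
    "boss":         np.array([ 0.7, 0.5]),
    "philosopher":  np.array([ 0.5, 0.7]),
    "nurse":        np.array([-0.6, 0.6]),
    "receptionist": np.array([-0.8, 0.4]),
}

# A gender direction: the difference between "he" and "she", normalized.
gender_axis = emb["he"] - emb["she"]
gender_axis /= np.linalg.norm(gender_axis)

# Positive projection reads as "male", negative as "female" in this toy setup.
for job in ("boss", "philosopher", "nurse", "receptionist"):
    print(f"{job:>12}: {float(emb[job] @ gender_axis):+.2f}")
```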

A similar study at Princeton University found female names were more likely to be associated with home and the arts, and male names with careers and mathematics. European-American names of either gender also were associated with more pleasant words than African-American names.
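Association strengths of this kind can be measured by comparing a word's mean similarity to two sets of attribute words, in the spirit of the Princeton team's word-embedding association test. This is a minimal sketch with invented vectors and stand-in word lists, not the study's actual test:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, attrs_a, attrs_b):
    """Mean similarity to attribute set A minus mean similarity to set B.
    Positive means the word sits closer to A, negative closer to B."""
    sim_a = np.mean([cosine(word_vec, v) for v in attrs_a])
    sim_b = np.mean([cosine(word_vec, v) for v in attrs_b])
    return sim_a - sim_b

# Invented 2-D stand-ins for attribute words like "career"/"salary"
# versus "home"/"family"; a real test would use trained vectors.
career = [np.array([0.9, 0.1]), np.array([0.8, 0.3])]
family = [np.array([0.1, 0.9]), np.array([0.2, 0.8])]

name_vec = np.array([0.3, 0.8])  # a toy vector standing in for a first name
print(f"career-vs-family association: {association(name_vec, career, family):+.3f}")
```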

Web developer Maciej Ceglowski says biased algorithms are the result of lazy or rushed programmers who have not considered that flawed search engines and algorithms could amplify existing prejudices.

Kalai's group has developed tools that can adjust the word maps, making certain words gender-neutral without losing their original meaning.
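A plausible sketch of one such adjustment is to "neutralize" a word by removing the component of its vector that lies along the learned gender direction, leaving the rest of its meaning intact. The vectors below are invented for illustration:

```python
import numpy as np

def neutralize(vec, direction):
    """Remove the component of `vec` along `direction`, leaving the
    rest of the vector (the word's other meaning) untouched."""
    direction = direction / np.linalg.norm(direction)
    return vec - (vec @ direction) * direction

# Toy setup: axis 0 plays the role of the learned gender direction.
gender_axis = np.array([1.0, 0.0, 0.0])
nurse = np.array([-0.6, 0.5, 0.3])  # invented vector leaning "female"

nurse_fixed = neutralize(nurse, gender_axis)
print(nurse_fixed)                # [0.  0.5 0.3] -- gender component gone
print(nurse_fixed @ gender_axis)  # 0.0: the word is now gender-neutral
```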

From New Scientist

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


 
