
Communications of the ACM

ACM TechNews

Removing Gender Bias From Algorithms



A Stanford University research group has developed a debiasing system to eliminate gender bias and stereotypes from machine-learning systems.

Credit: Shutterstock

Machine-learning systems begin as blank slates, which is an advantage for learning interesting patterns from the data and documents fed into them, but it also carries the danger of absorbing the gender stereotypes present in those sources, according to Stanford University professor James Zou.

He details how his group used a common type of machine-learning algorithm to produce "word embeddings."

"Each English word is embedded, or assigned, to a point in space," Zou says. "Words that are semantically related are assigned to points that are close together in space. This type of embedding makes it easy for computer programs to quickly and efficiently identify word relationships."

Zou says the algorithm bases its decisions on which words frequently appear in close proximity to one another, so if the source data reflects gender bias, the algorithm learns those biases as well.
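As a rough illustration of what such a learned bias looks like in the vector space, the toy vectors below are assumptions constructed so that "nurse" lands closer to "she" than to "he", mimicking an embedding trained on text where those words co-occur unevenly.

    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Toy 2-d vectors (assumptions, not trained embeddings); the first
    # coordinate plays the role of a learned gender axis.
    he    = np.array([ 1.0, 0.1])
    she   = np.array([-1.0, 0.1])
    nurse = np.array([-0.8, 0.6])  # co-occurred mostly with female pronouns

    print(cosine(nurse, she))  # ~0.86: strong association absorbed from text
    print(cosine(nurse, he))   # ~-0.74: weak or negative association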

However, Zou's team has developed a debiasing system that relies on people to identify examples of appropriate and inappropriate connections, and uses those judgments to measure how strongly gender factors into particular word associations. The algorithm is then instructed to remove the gender component from those connections in the embedding, so it no longer exhibits blatant gender stereotypes.
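The abstract does not spell out the mechanics, but a standard way to implement the "remove the gender component" step, in the spirit of the group's published work, is to estimate a gender direction from definitional word pairs and project it out of each gender-neutral word vector. The sketch below uses assumed toy vectors, not real embeddings.

    import numpy as np

    def neutralize(word_vec, gender_direction):
        """Subtract a vector's projection onto the gender direction,
        leaving a vector orthogonal to it (zero gender component)."""
        g = gender_direction / np.linalg.norm(gender_direction)
        return word_vec - (word_vec @ g) * g

    # The gender direction is typically estimated from definitional pairs
    # such as ("he", "she"); these toy vectors are assumptions.
    he, she = np.array([1.0, 0.1]), np.array([-1.0, 0.1])
    g = he - she                       # points along the learned gender axis

    programmer = np.array([0.6, 0.7])  # stereotyped occupation word (toy vector)
    debiased = neutralize(programmer, g)

    print(debiased)                            # gender component removed
    print(debiased @ (g / np.linalg.norm(g)))  # ~0.0 after debiasing

In this line of work, words that are gendered by definition (such as "mother" or "father") are deliberately left on the gender axis; only gender-neutral words like occupation names are projected off it.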

From The Conversation

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


 
