
Communications of the ACM

ACM TechNews

AI Programs Are Learning to Exclude Some African-American Voices


Amazon's Alexa digital assistant.


Credit: Crosa/Flickr

Researchers at the University of Massachusetts Amherst (UMass) warn that some artificial intelligence (AI) programs are inheriting biases against certain dialects, which could lead to automatic discrimination against minorities as language-based AI systems proliferate.

The researchers compiled 59.2 million Twitter messages with a high likelihood of containing African-American slang or vernacular, and then ran them through several natural-language processing tools. They found one popular language-identification tool classified many of the posts as Danish, and several common machine learning-based application programming interfaces (APIs) that analyze text for meaning and sentiment also struggled to interpret the posts correctly.
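The kind of audit described here can be illustrated with a minimal, hypothetical sketch: a toy lexicon-based sentiment scorer (not any real product or the researchers' actual tools) whose word list was built from standard English, scored against paired sentences that express the same intent in different dialects. The word lists and example sentences below are invented for illustration.

```python
# Toy sentiment scorer: a word-list lexicon built from "standard" English.
# Vernacular positives such as "dope" are deliberately absent, mimicking a
# tool trained without dialect coverage.
POSITIVE = {"good", "great", "happy", "excellent"}
NEGATIVE = {"bad", "sad", "terrible", "awful"}

def sentiment_score(text):
    """Return (# positive tokens) - (# negative tokens) for `text`."""
    tokens = text.lower().split()
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

# Paired inputs: same sentiment, standard vs. vernacular phrasing.
pairs = [
    ("that movie was great", "that movie was dope"),
]

for standard, vernacular in pairs:
    gap = sentiment_score(standard) - sentiment_score(vernacular)
    print(f"standard={sentiment_score(standard)} "
          f"vernacular={sentiment_score(vernacular)} gap={gap}")
```

A positive gap on sentiment-equivalent pairs is one simple signal that the scorer systematically undervalues vernacular text; a real audit would use many pairs and a statistical test rather than a single example.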

"If you purchase a sentiment analyzer from some company, you don't even know what biases it has in it," says UMass professor Brendan O'Connor. "We don't have a lot of auditing or knowledge about these things."

Some experts are concerned the problem of AI prejudice may be more widespread than many people realize, as such systems increasingly influence decisions in finance, healthcare, and education.

From Technology Review
Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA


 
