
Communications of the ACM

ACM TechNews

A Crucial Step for Averting AI Disasters


The risk of building blind spots or biases into tech products multiplies exponentially with artificial intelligence, damaging customer trust and cutting into profits.

The expanding use of AI is attracting new attention to the importance of workforce diversity.

Credit: Daria Kirpach

Data from the U.S. Bureau of Labor Statistics shows that, although technology companies have increased efforts to recruit women and minorities, computer and software professionals who write artificial intelligence (AI) programs remain largely white and male.

A byproduct of this lack of diversity is that datasets often lack adequate representation of women or minority groups.

For example, one widely used dataset is more than 74% male and 83% white, meaning algorithms trained on this data could have blind spots or biases built in.

Biases in algorithms can skew decision-making, and many companies have realized that eliminating bias upfront among those who write code is essential.

Said Affectiva co-founder Rana el Kaliouby, "You need diversity in the data, and more important, in the team that's designing the algorithm."

From The Wall Street Journal

 

Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA


 

