
Communications of the ACM

ACM News

What If We Could Just Ask AI to be Less Biased?


DALL-E 2 generates images of white men 97% of the time when given prompts like CEO or director.

Researchers don’t know why text- and image-generating AI models self-correct for some biases after simply being asked to do so.

Credit: Stephanie Arnett/MITTR, Getty, Stable Diffusion

Think of a teacher. Close your eyes. What does that person look like? If you ask Stable Diffusion or DALL-E 2, two of the most popular AI image generators, it's a white man with glasses. 

Last week, I published a story about new tools developed by researchers at AI startup Hugging Face and the University of Leipzig that let people see for themselves what kinds of inherent biases AI models have about different genders and ethnicities. 

Although I've written a lot about how our biases are reflected in AI models, it still felt jarring to see exactly how pale, male, and stale the humans of AI are. That was particularly true for DALL-E 2, which generates white men 97% of the time when given prompts like "CEO" or "director."

And the bias problem runs even deeper than you might think, extending into the broader world these models depict. Because they are built by American companies and trained on North American data, even when asked to generate mundane everyday items, from doors to houses, they create objects that look American, Federico Bianchi, a researcher at Stanford University, tells me.

From MIT Technology Review
View Full Article
