In 2018, Liz O'Sullivan and her colleagues at a prominent artificial intelligence start-up began work on a system that could automatically remove nudity and other explicit images from the Internet.
They sent millions of online photos to workers in India, who spent weeks adding tags to explicit material. Those tags, paired with the photos, would be used to teach A.I. software how to recognize indecent images. But once the photos were tagged, Ms. O'Sullivan and her team noticed a problem: The Indian workers had classified all images of same-sex couples as indecent.
For Ms. O'Sullivan, the moment showed how easily — and often — bias could creep into artificial intelligence. It was a "cruel game of Whac-a-Mole," she said.
From The New York Times