

Big Tech Builds AI with Bad Data, So Scientists Sought Better Data


Artificial intelligence researcher Yacine Jernite at his apartment in Brooklyn, NY.

Credit: Amir Hamja/The Washington Post

Yacine Jernite's fears about bias in artificial intelligence were vividly affirmed in 2017, when a Facebook translation error led Israeli police to arrest a Palestinian construction worker. The man had posted a picture of himself leaning against a bulldozer with the caption, in Arabic, "good morning." Facebook mistakenly translated it, in Hebrew, as "attack them."

The error was quickly discovered and the man released, according to a report in Haaretz, but the incident cemented Jernite's personal concerns about AI; he joined Facebook's AI division soon after. As the child of Moroccan parents in post-9/11 America, Jernite said he has "spent hours upon hours in immigration secondary interviews — in a way that I could not at the time trace to the technology that was being applied."

Now Jernite, 33, is trying to push AI in a better direction. After leaving Facebook, he joined BigScience, a global effort by 1,000 researchers in 60 countries to build a more transparent, accountable AI, with less of the bias that infects so many Big Tech initiatives. The largely volunteer effort trained a computer system on good data that was curated by humans from different cultures, rather than on readily available data scraped from the internet, written mostly in English, and riddled with harmful speech on race, gender, and religion. The resulting AI was released on July 12, 2022, for researchers to download and study.

From The Washington Post

 


 

