
This New Way to Train AI Could Curb Online Harassment


[Image: Artist's metaphor for removing toxic misogyny. Credit: Elena Lacey/Getty Images]

For about six months last year, Nina Nørgaard met weekly for an hour with seven people to talk about sexism and violent language used to target women in social media. Nørgaard, a Ph.D. candidate at IT University of Copenhagen, and her discussion group were taking part in an unusual effort to better identify misogyny online. Researchers paid the seven to examine thousands of Facebook, Reddit, and Twitter posts and decide whether they evidenced sexism, stereotypes, or harassment. Once a week, the researchers brought the group together, with Nørgaard as a mediator, to discuss the tough calls where they disagreed.

Misogyny is a scourge that shapes how women are represented online. A 2020 Plan International study, one of the largest ever conducted, found that more than half of women in 22 countries said they had been harassed or abused online. One in five women who encountered abuse said they changed their behavior—cut back or stopped use of the internet—as a result.

Social media companies use artificial intelligence to identify and remove posts that demean, harass, or threaten violence against women, but it's a tough problem. Among researchers, there's no standard for identifying sexist or misogynist posts; one recent paper proposed four categories of troublesome content, while another identified 23 categories. Most research is in English, leaving people working in other languages and cultures with even less of a guide for difficult and often subjective decisions.
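To make the classification task concrete, the following is a minimal sketch of the kind of supervised text classifier such moderation systems are typically built on, written with scikit-learn. The handful of example posts and labels are invented for illustration; this is not the model used by the Copenhagen researchers or by any platform, and a real system would train on a large annotated corpus like the one Nørgaard's group helped produce.

```python
# Minimal sketch of a supervised classifier for flagging abusive posts.
# The posts and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled data: 1 = misogynistic/abusive, 0 = benign.
posts = [
    "women don't belong in tech",
    "great talk at the conference today",
    "she only got the job because she's a woman",
    "looking forward to the weekend hike",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score a new post; a platform would apply a moderation threshold to this probability.
print(model.predict_proba(["women should stay out of politics"])[0][1])
```

In practice the hard part is not the model but the labels: the borderline cases that Nørgaard's group debated each week are exactly what determines whether a post like the third example above counts as abuse, and that judgment is what the classifier ends up learning.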

From Wired