
Communications of the ACM

ACM TechNews

Facebook Failed to Stop Ads Threatening Election Workers


The tests underscored the challenges that social networks face in moderating increasingly partisan and violent content around elections.

Ads threatening to “lynch,” “murder” and “execute” election workers in recent months were approved by Facebook’s automated moderation system.


Researchers at the watchdog group Global Witness and New York University's Cybersecurity for Democracy tested Facebook's automated moderation system around Election Day by submitting ads that contained direct threats against election workers.

Although Facebook says it prohibits content that threatens serious violence, its automated system approved 15 of the 20 ads containing violent threats; the researchers deleted the approved ads before they could be published.

The researchers noted that all of the ads were rejected by both TikTok and YouTube, which also suspended the accounts that submitted them.

The researchers said, "The fact that YouTube and TikTok managed to detect the death threats and suspend our account, whereas Facebook permitted the majority of the ads to be published, shows that what we are asking is technically possible."

From The New York Times
View Full Article - May Require Paid Subscription

 

Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA


 
