Researchers at the watchdog group Global Witness and New York University's Cybersecurity for Democracy tested Facebook's automated moderation system by submitting ads around Election Day that contained direct threats against election workers.
Although Facebook contends it prohibits content that threatens serious violence, the test showed that 15 of the 20 ads containing violent content were approved; the researchers deleted the approved ads before they could be published.
The researchers noted that TikTok and YouTube rejected all of the ads and suspended the accounts associated with them.
The researchers said, "The fact that YouTube and TikTok managed to detect the death threats and suspend our account, whereas Facebook permitted the majority of the ads to be published, shows that what we are asking is technically possible."
From The New York Times
Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA