
Communications of the ACM

ACM News

How Content Creators Cope with Discriminatory Algorithms



Shadow-banning is a form of algorithmic bias that disproportionately affects specific demographics.

Credit: Getty Images

The threat of bias in the latest wave of generative artificial intelligence may be in the spotlight, but social media algorithms have long had discrimination problems of their own. Creators from marginalized communities have expressed frustration that these algorithms appear biased against them, robbing them of critical engagement.

How do social media algorithms discriminate against some creators?

While content that doesn't violate any explicit terms can't be outright banned, social media companies still have ways of suppressing the work of some creators. Shadow-bans are "a form of online censorship where you're still allowed to speak, but hardly anyone gets to hear you," The Washington Post explained. A shadow-banned creator's content is not removed, but engagement with their posts plummets outside their immediate circle of friends. "Even more maddening, no one tells you it's happening," the Post added.
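The mechanism described above can be illustrated with a minimal sketch: a feed-ranking function that applies a hidden per-account visibility multiplier, so a flagged creator's posts stay up but quietly sink in the ranking. Every name and number here is an illustrative assumption, not any platform's actual code.

```python
# Hypothetical sketch of a shadow-ban inside a feed-ranking pipeline.
# A flagged account keeps posting, but a hidden visibility multiplier
# drives its effective ranking score toward zero. All identifiers and
# values are assumptions for illustration only.

from dataclasses import dataclass


@dataclass
class Post:
    author: str
    engagement_score: float  # baseline score from likes, shares, etc.


# Hidden per-account multipliers; accounts not listed default to 1.0.
# The shadow-banned author's posts are never deleted, only buried.
VISIBILITY = {"flagged_creator": 0.01}


def rank_feed(posts):
    """Order posts by engagement scaled by the invisible visibility factor."""
    def effective_score(p):
        return p.engagement_score * VISIBILITY.get(p.author, 1.0)
    return sorted(posts, key=effective_score, reverse=True)


feed = rank_feed([
    Post("flagged_creator", 95.0),  # high engagement, but suppressed
    Post("ordinary_user", 20.0),
])
# The flagged creator's post still appears in the feed -- it is simply
# ranked below far less popular content, with no notice to the author.
```

Because the post technically remains visible, the author sees nothing wrong from their own account, which matches the opacity creators complain about: only the collapse in engagement hints that anything changed.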

Content creators have long decried the lack of transparency with shadow-bans. Late last year, the practice made headlines when Twitter owner Elon Musk released the Twitter Files, internal company documents intended to show how "shadow-banning was being used to suppress conservative views," the Post said.

From The Week

 


 
