An interdisciplinary research team at Cornell University found that an individual's trust in online content moderation systems and decisions depends on whether the moderator is a human or artificial intelligence (AI) and on the type of harassing content involved.
The study used a custom social media site and a simulation engine that relied on preprogrammed bots to mimic the behavior of other users.
Almost 400 participants were asked to beta test a new social media platform and were randomly assigned to one of six experimental conditions, which differed in the type of content moderation system and the type of harassing content.
With inherently ambiguous content, the researchers found that AI moderators were more likely to be questioned by users. However, trust in all types of moderation was about the same when clearly harassing comments were involved.
From Cornell Chronicle
Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA