Researchers at Michigan State University and Pennsylvania State University found that social media users trust artificial intelligence (AI) as much as human content moderators to flag harmful content.
The study involved 676 participants who interacted with a content classification system and were randomly assigned to one of 18 experimental conditions based on the source of moderation (AI, human, or both) and level of transparency (regular, interactive, or none).
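The two factors named here, moderation source (three levels) and transparency (three levels), cross to only nine cells, so the reported 18 conditions imply an additional two-level factor that this summary does not name. As a minimal sketch of random assignment over the two named factors (all function and variable names below are illustrative, not from the study):

```python
import itertools
import random

# The two factors named in the abstract; crossing them yields 9 cells.
# The study's 18 conditions imply a further two-level factor not named here.
SOURCES = ["AI", "human", "both"]
TRANSPARENCY = ["regular", "interactive", "none"]

CONDITIONS = list(itertools.product(SOURCES, TRANSPARENCY))

def assign_condition(participant_id: int) -> tuple[str, str]:
    """Randomly assign a participant to one (source, transparency) cell."""
    return random.choice(CONDITIONS)

# Example: assign the study's 676 participants to conditions.
assignments = {pid: assign_condition(pid) for pid in range(676)}
```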
The researchers found that participants trusted AI more when primed to consider machines' accuracy and objectivity, but trusted human moderators more when reminded that machines cannot make subjective decisions.
However, trust in AI increased with "interactive transparency," which enables users to make suggestions to the AI.
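The study does not publish its interface code; as a hypothetical illustration of that feedback loop, a moderation pipeline with interactive transparency might disclose the classifier's verdict and fold user suggestions back in (all names below are invented for the sketch):

```python
from dataclasses import dataclass, field

@dataclass
class ModerationDecision:
    post_id: str
    flagged: bool          # the classifier's verdict
    confidence: float      # disclosed to the user as part of transparency
    user_suggestions: list = field(default_factory=list)

def show_decision(decision: ModerationDecision) -> None:
    """Regular transparency: disclose the verdict and its confidence."""
    verdict = "flagged" if decision.flagged else "allowed"
    print(f"Post {decision.post_id}: {verdict} "
          f"(confidence {decision.confidence:.0%})")

def suggest_correction(decision: ModerationDecision, should_flag: bool) -> None:
    """Interactive transparency: record a user's suggested correction,
    which the system can later use to revisit the decision."""
    decision.user_suggestions.append(should_flag)

# Example: a user disagrees with an AI flag and suggests reversing it.
d = ModerationDecision(post_id="p-101", flagged=True, confidence=0.62)
show_decision(d)
suggest_correction(d, should_flag=False)
```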
Said Michigan State's Maria D. Molina, "We want to know how we can build AI content moderators that people can trust in a way that doesn't impinge on that freedom of expression."
From Pennsylvania State University
Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA