
Communications of the ACM

ACM TechNews

Users Trust AI as Much as Humans for Flagging Problematic Content



While artificial intelligence editors can analyze content swiftly, people often do not trust these algorithms to make accurate recommendations and fear that the information could be censored.

Credit: John Schnobrich/Unsplash

Researchers at Michigan State University and Pennsylvania State University found that social media users trust artificial intelligence (AI) as much as human content moderators when it comes to flagging harmful content.

The study involved 676 participants who interacted with a content classification system and were randomly assigned to one of 18 experimental conditions based on the source of moderation (AI, human, or both) and the level of transparency (regular, interactive, or none).

The researchers found that participants put more trust in AI when considering its accuracy and objectivity, but put more trust in humans when reminded that machines cannot make subjective decisions.

However, trust in AI increased with "interactive transparency," which enables users to make suggestions to the AI.

Said Michigan State's Maria D. Molina, "We want to know how we can build AI content moderators that people can trust in a way that doesn't impinge on that freedom of expression."

From Pennsylvania State University

 

Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA


 
