Facebook has placed its army of human content moderators on paid leave, leaving the company to police disinformation without them.
During the ongoing coronavirus pandemic, Facebook is relying more heavily on artificial intelligence algorithms to make subjective decisions about which content violates its terms of service and should be removed from the platform.
Users should expect more mistakes while the company expedites the process, and there could be a rise in "false positives," meaning removal of content that should not be taken down, according to Facebook CEO Mark Zuckerberg.
YouTube and Twitter have also announced temporary plans to rely more heavily on automated systems.
The decision to send moderators home and rely more on technology to police the sites concerned some researchers.
Said Microsoft Research's Mary Gray, "They haven't made enough leaps and bounds in artificial intelligence to take away the best tool we have: human intelligence to do the discernment."
From The Washington Post