
Communications of the ACM

ACM TechNews

Emojis Make It Harder for Tech Giants to Track Down Online Abuse


Selecting an emoji.

Harmful posts containing emojis can be missed altogether, while acceptable posts containing emojis may be mislabeled as offensive, according to the Oxford Internet Institute.

Credit: Sky News

A study by researchers at the U.K.'s Oxford Internet Institute found that some algorithms used to identify abusive online posts are less effective if the content includes emojis.

These algorithms are trained on large databases of text that generally lack emojis.

In response, the researchers developed a nearly 4,000-sentence database that included offensive uses of emojis, then used the database to train an artificial intelligence model to distinguish between abusive and non-abusive messages.
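The failure mode described above can be illustrated with a minimal sketch (this is not the researchers' code, and the vocabulary and helper names are hypothetical): a model whose vocabulary was built from emoji-free text collapses any emoji into an "unknown" token, so the emoji contributes no signal to the classifier.

```python
# Minimal sketch (hypothetical, not the study's implementation) of why
# classifiers trained on emoji-free text miss emoji-based abuse: emojis
# fall outside the training vocabulary and become an "unknown" token.
import unicodedata

def is_emoji(ch: str) -> bool:
    # Rough heuristic: most emoji sit in the supplementary planes or
    # carry the Unicode "Symbol, other" (So) category.
    return ord(ch) >= 0x1F000 or unicodedata.category(ch) == "So"

def tokenize(text: str, vocab: set) -> list:
    # Any word not seen during training is reduced to <unk>,
    # erasing whatever meaning the emoji carried.
    return [word if word in vocab else "<unk>" for word in text.split()]

# Hypothetical vocabulary derived from emoji-free training text.
vocab = {"you", "are", "a"}
print(tokenize("you are a 🐍", vocab))  # the emoji collapses to <unk>
```

Extending the training data with offensive and benign emoji usage, as the Oxford researchers did, gives the model real vocabulary entries (and context) for those symbols instead of a single uninformative `<unk>`.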

The model was tested on written examples of abuse related to race, gender, gender identity, sexuality, religion, and disability, and achieved a 30% improvement in correctly distinguishing between hateful and non-hateful content, compared to existing tools.

It also demonstrated an 80% improvement in identifying some types of emoji-based abuse.

From Sky News
View Full Article


Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA



