A study by researchers at the U.K.'s Oxford Internet Institute found that some algorithms used to identify abusive online posts are less effective if the content includes emojis.
These algorithms are trained on large databases of text that generally lack emojis.
In response, the researchers compiled a database of nearly 4,000 sentences that included offensive uses of emojis, then used it to train an artificial intelligence model to distinguish abusive from non-abusive messages.
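The article does not describe the researchers' exact pipeline, but the general approach it outlines, fine-tuning a pretrained text classifier on labelled examples that contain emojis, can be sketched as follows. The model name (distilbert-base-uncased), the toy sentences, and their labels are illustrative assumptions, not material from the study.

```python
# A minimal sketch of fine-tuning a pretrained classifier on emoji-inclusive
# examples. Requires: pip install torch transformers
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "distilbert-base-uncased"  # assumption: any pretrained encoder would do

# Toy training data (illustrative only): 1 = abusive, 0 = non-abusive.
texts = [
    "I hope you have a great day 😊",
    "People like you are vermin 🐀",
    "That pizza was criminally good 🍕",
    "Crawl back into your hole 🤮",
]
labels = [0, 1, 0, 1]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Tokenize the batch; subword tokenizers break emojis into byte-level pieces,
# so the emoji signal reaches the model even if emojis were rare in pretraining.
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
batch["labels"] = torch.tensor(labels)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):  # a real run would use thousands of examples, not four
    optimizer.zero_grad()
    out = model(**batch)   # passing labels makes the model return a loss
    out.loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={out.loss.item():.4f}")

# Inference: classify a new message containing an emoji.
model.eval()
with torch.no_grad():
    enc = tokenizer("You absolute legend 🙌", return_tensors="pt")
    pred = model(**enc).logits.argmax(dim=-1).item()
print("abusive" if pred == 1 else "non-abusive")
```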
The model was tested on written examples of abuse related to race, gender, gender identity, sexuality, religion, and disability, and was about 30% better than existing tools at correctly distinguishing hateful from non-hateful content.
It also demonstrated an 80% improvement in identifying some types of emoji-based abuse.
From Sky News
Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA