Tessa Verhoef and Eduard Fosch-Villaronga at Leiden University argue that affective computing systems trained on currently available datasets are likely to exhibit racial, age, and mental health-related biases, because those datasets draw on limited samples that do not fully represent societal diversity.
In a paper presented at the Affective Computing + Intelligent Interaction conference, the authors report that the datasets lack diversity, equity, and inclusion elements, which can directly affect the accuracy and fairness of emotion recognition algorithms across different groups. They emphasize the need for more inclusive sampling strategies and standardized documentation of demographic factors in datasets, offer recommendations, and call for greater attention to inclusivity and to the societal consequences of affective computing research, in order to promote ethical and accurate outcomes in this emerging field.
From Mirage.News