

AI 'Fairness' Research Held Back by Lack of Diversity



Biases in AI tools such as those used to detect signs of disease could exacerbate inequalities in health care.

Credit: Jiraroj Praditcharoenkul/Alamy

A lack of racial and gender diversity could be hindering the efforts of researchers working to improve the fairness of artificial intelligence (AI) tools in health care, such as those designed to detect disease from blood samples or imaging data.

Scientists analysed 375 research and review articles on the fairness of artificial intelligence in health care, published in 296 journals between 1991 and 2022. Of 1,984 authors, 64% were white, whereas 27% were Asian, 5% were Black and 4% were Hispanic (see 'Gaps in representation').

The analysis, published as a preprint on medRxiv, also found that 60% of authors were male and 40% female, a gender gap that widened among last authors, who often hold the senior role of leading the research.

"These findings are a reflection of what's happening in the research community at large," says study co-author Leo Anthony Celi, a health informatician and clinical researcher at the Massachusetts Institute of Technology in Boston. A lack of diversity is problematic, because it leads to biased data sets and algorithms that work best for white people from rich countries, he says.

From Nature