Europeans Can't Talk About Racist AI Systems. They Lack the Words.


Several European artificial intelligence projects rely on race without explicitly saying so. (Credit: AdobeStock)

In February, El Confidencial revealed that Renfe, the Spanish railway operator, had published a public tender for a system of cameras that could automatically analyze the behavior of passengers on train platforms. One characteristic the system should be able to assess was "ethnic origin".

Ethnic origin can mean many things. But in the context of an automated system that assigns people to categories based on their appearance as captured by camera, the term is misleading. "It seems to me that 'ethnic origin' is code for a crude essentialist (biological) notion of 'race'," Norma Möllers, an assistant professor of sociology at Queen's University who focuses on the intersections of science, technology, and politics, told AlgorithmWatch. "Take the following example: my mother is Batak, an indigenous ethnic minority in Indonesia. Renfe would likely not recognize that she's Batak; it would recognize that she's a brown woman. Hence, 'ethnic origin' appears to be a colorblind racist term to build race into the system without talking about race," she said.

Renfe's plan is not an exception. Several other projects rely on race without saying so. In the Dutch town of Roermond, the police use automated systems to track "mobile banditry", a crime category they created that applies only to Roma people, a report by Amnesty International revealed last year. And European hospitals routinely modify Black patients' scores on some tests, making them appear healthier than they are and possibly denying them treatment, based on flawed research, as AlgorithmWatch Switzerland showed.

From AlgorithmWatch
View Full Article
