
Communications of the ACM

ACM TechNews

Algorithms and Bias: Q and A With Cynthia Dwork


Cynthia Dwork of Microsoft Research

"Historical biases . . . will be learned by the algorithm, and past discrimination will lead to future discrimination," says Cynthia Dwork of Microsoft Research.

Credit: Thor Swift / The New York Times

In an interview, Microsoft Research scientist Cynthia Dwork describes how algorithms can learn to discriminate: they are programmed by coders who can build their own biases into them, and because they are patterned on human behavior, they come to reflect human biases as well, she says.

Dwork defines her research as "finding a mathematically rigorous definition of fairness and developing computational methods — algorithms — that guarantee fairness." She notes that a study she co-authored found that "sometimes, in order to be fair, it is important to make use of sensitive information while carrying out the classification task. This may be a little counterintuitive: the instinct might be to hide information that could be the basis of discrimination."
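A toy sketch, not drawn from the article, of the point above: hiding a sensitive attribute does not prevent discrimination when another feature acts as a proxy for it, and even measuring the disparity requires the attribute. The names, probabilities, and decision rule below are hypothetical placeholders, not anything from Dwork's study.

import random

random.seed(0)

def make_applicant():
    # 'group' is the sensitive attribute; 'neighborhood' is a proxy
    # strongly correlated with it (hypothetical numbers).
    group = "A" if random.random() < 0.5 else "B"
    p_neighborhood_1 = 0.9 if group == "B" else 0.1
    neighborhood = 1 if random.random() < p_neighborhood_1 else 0
    return {"group": group, "neighborhood": neighborhood}

applicants = [make_applicant() for _ in range(10_000)]

def blind_rule(applicant):
    # A "group-blind" rule that never reads the sensitive attribute but keys
    # on the proxy, e.g. because it was learned from biased historical data.
    return applicant["neighborhood"] == 0

# Auditing the outcome for group-level disparity requires the sensitive attribute.
for g in ("A", "B"):
    members = [a for a in applicants if a["group"] == g]
    rate = sum(blind_rule(a) for a in members) / len(members)
    print(f"group {g}: acceptance rate {rate:.2f}")

# Typical output: roughly 0.90 for group A and 0.10 for group B -- the
# attribute was hidden, yet outcomes track group membership via the proxy.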

Dwork says fairness involves treating similar people in a similar manner. "A true understanding of who should be considered similar for a particular classification task requires knowledge of sensitive attributes, and removing those attributes from consideration can introduce unfairness and harm utility," she says. Developing a fairer algorithm would therefore require serious thought about who should be treated similarly to whom, according to Dwork. She says the push to train algorithms to protect certain groups from discrimination is still relatively young, but she points to the Fairness, Accountability, and Transparency in Machine Learning workshop as evidence of a promising research area.
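A minimal sketch of the "treat similar people similarly" idea Dwork describes, in the spirit of her individual-fairness work: a scoring function is flagged whenever two people receive outcomes that differ by more than a task-specific similarity metric says they differ. The function names, the metric, and the scores are hypothetical placeholders, not the article's or Dwork's actual formulation, and the hard part Dwork points to (constructing the metric, which itself needs sensitive attributes) is assumed as given here.

from itertools import combinations

def similarity_violations(people, score, distance):
    # 'score' maps a person to the probability of the favorable outcome, in [0, 1];
    # 'distance' is a task-specific metric in [0, 1], where 0 means "identical
    # for the purposes of this decision". Flag pairs treated more differently
    # than they actually are: |score(x) - score(y)| > distance(x, y).
    violations = []
    for x, y in combinations(people, 2):
        if abs(score(x) - score(y)) > distance(x, y):
            violations.append((x, y))
    return violations

# Hypothetical usage: two applicants the metric deems nearly identical
# (distance 0.05) whose scores differ by 0.40 are flagged, pointing the
# modeler back to the question of who should be treated like whom.
people = [{"name": "x", "score": 0.9}, {"name": "y", "score": 0.5}]
print(similarity_violations(
    people,
    score=lambda p: p["score"],
    distance=lambda p, q: 0.05,
))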

From The New York Times
View Full Article – May Require Free Registration


Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA


