
Communications of the ACM

ACM TechNews

Injecting Fairness into ML Models



Massachusetts Institute of Technology researchers have developed a technique that induces fairness directly in a machine learning model, no matter how unbalanced its training dataset is, which can boost the model's performance on downstream tasks.

Credit: Jose-Luis Olivares, MIT

Researchers at the Massachusetts Institute of Technology (MIT), Canada's University of Toronto, and Germany's University of Tübingen have developed a method of incorporating fairness into machine learning (ML) models, even if they are trained on unfair data.

The team adapted deep metric learning, a technique in which a model maps inputs into an embedding space so that the distance between two points serves as the similarity metric between them; the researchers found that any bias learned in this embedding space is very difficult to remove later.
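
To illustrate the deep-metric-learning setup described above, here is a minimal PyTorch sketch: an embedding network whose output distances act as the similarity metric, trained with a standard triplet loss. The architecture, dimensions, and choice of loss are illustrative assumptions, not the researchers' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Maps inputs (e.g., image features) into an embedding space in
    which Euclidean distance serves as the similarity metric."""
    def __init__(self, in_dim: int = 512, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256),
            nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so distances are comparable across samples
        return F.normalize(self.net(x), dim=-1)

def triplet_loss(anchor, positive, negative, margin: float = 0.2):
    """Classic deep-metric-learning objective: pull same-class pairs
    together, push different-class pairs at least `margin` farther apart."""
    d_pos = (anchor - positive).pow(2).sum(dim=-1)
    d_neg = (anchor - negative).pow(2).sum(dim=-1)
    return F.relu(d_pos - d_neg + margin).mean()
```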

The researchers' solution, Partial Attribute Decorrelation (PARADE), trains the model to learn a separate similarity metric for a sensitive attribute, then to decorrelate this similarity metric from the targeted similarity metric.

The sensitive attribute's similarity metric is learned in a separate embedding space and is jettisoned after training, so only the targeted similarity metric remains.
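
PARADE's exact objective is specified in the researchers' paper; the sketch below only illustrates the decorrelation idea, under the assumption that it penalizes statistical correlation between the target head's and the sensitive-attribute head's pairwise similarities within a batch. The function names and the Pearson-style penalty are hypothetical.

```python
import torch
import torch.nn.functional as F

def pairwise_similarities(z: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between every pair of embeddings in a batch."""
    z = F.normalize(z, dim=-1)
    return z @ z.t()

def decorrelation_penalty(z_target: torch.Tensor,
                          z_sensitive: torch.Tensor) -> torch.Tensor:
    """Hypothetical decorrelation term: drive the Pearson correlation
    between the two similarity metrics toward zero, computed over the
    off-diagonal entries of the batch similarity matrices."""
    s_t = pairwise_similarities(z_target)
    s_a = pairwise_similarities(z_sensitive)
    off_diag = ~torch.eye(s_t.size(0), dtype=torch.bool, device=s_t.device)
    t = s_t[off_diag] - s_t[off_diag].mean()
    a = s_a[off_diag] - s_a[off_diag].mean()
    corr = (t * a).sum() / (t.norm() * a.norm() + 1e-8)
    return corr.pow(2)

# Hypothetical training step: a metric-learning loss on the target
# embedding plus a weighted decorrelation penalty; after training, the
# sensitive-attribute head is discarded and only the target metric kept.
# loss = triplet_loss(a, p, n) + lam * decorrelation_penalty(z_t, z_s)
```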

The researchers found PARADE reduced bias-induced performance gaps in facial recognition and bird species classification.

From MIT News
View Full Article

 

Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA


 
