Finding the Fairness in AI




As artificial intelligence (AI) becomes more widely used to make decisions that affect our lives, making certain it is fair is a growing concern.

Algorithms can incorporate bias from several sources, from the people involved in different stages of their development to modelling choices that introduce or amplify unfairness. A machine learning system used by Amazon to pre-screen job applicants was found to display bias against women, for example, while an AI system used to analyze brain scans failed to perform equally well across people of different races.

"Fairness in AI is about ensuring that AI models don't discriminate when they're making decisions, particularly with respect to protected attributes like race, gender, or country of origin," says Nikola Konstantinov, a post-doctoral fellow at the ETH AI Center of ETH Zürich, in Switzerland.  

Researchers typically use mathematical tools to measure the fairness of machine learning systems based on a specific definition of fairness. Individual fairness, for example, holds that a machine learning model should treat two similar individuals in a similar way. Two job applicants with comparable qualifications, but who may be of different genders, should therefore get similar results from a job screening tool; unfairness could be measured by counting how many times equally experienced candidates receive different outcomes. However, fairness also can be measured at the population level, where group fairness looks at whether a system performs similarly across different groups, such as white people versus black people.

"If you were to now measure unfairness, one way is to check what proportion of all white applicants get offers, and also do this with black applicants then just compare them," says Konstantinov.

One of the problems with measuring fairness, however, is that the result is tied to the specific definition of fairness used. According to mathematical proofs, it is impossible for a model to satisfy several notions of fairness simultaneously: adjusting a model to make it fairer under one definition will generally introduce disparities that make it less fair under another. It is therefore important to carefully consider which definition of fairness is best suited to a particular AI system, says Boris Ruf, a research data scientist at insurance giant AXA in Paris, France. "The goal of AI fairness is not to satisfy some sort of fairness, but rather to achieve the most appropriate and expected fairness objective."

Interpreting fairness measurements can also be a challenge. Such scores typically measure the disparity between two groups or individuals, which would ideally be zero, so they fall on a sliding scale rather than giving a concrete answer as to whether a system is fair or not. Fairness scores are often useful for machine learning researchers who are optimizing AI systems, since they can improve the score and make the systems as fair as possible. However, if a model is to be put to practical use, a more discrete measure of whether it is fair or not, either A or B, is often desirable. The challenge is therefore to determine what fairness threshold is acceptable and would correspond to a fair system. "The question of whether a model is unfair or not, as an A or B question, is essentially juridical," says Konstantinov. "Our measures may serve as a tool for defining certain requirements of AI models that are established by law or by agreed-upon good practices."

AI fairness researchers often adopt the 80% rule, which originates from U.S. labor law, to check for the existence of disparate impact, says Ruf. The guideline states that companies should hire members of protected groups, such as ethnic minorities and women, at a rate that is at least 80% of the rate for white men. Applied to AI systems, this means a ratio of selection rates between 0.8 and 1.25 would be considered fair.
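A sketch of how such a check might look in code is shown below; the selection rates are invented numbers used only to illustrate the 0.8 to 1.25 band, not figures from any real hiring system.

```python
# Hypothetical sketch of the 80% ("four-fifths") rule as a disparate-impact check.

def disparate_impact_ratio(rate_protected: float, rate_reference: float) -> float:
    """Ratio of the protected group's selection rate to the reference group's."""
    return rate_protected / rate_reference

# Invented rates: 30% of protected-group applicants selected vs. 40% of the reference group.
ratio = disparate_impact_ratio(rate_protected=0.30, rate_reference=0.40)

# Under the 80% rule, a ratio between 0.8 and 1.25 would be considered acceptable.
passes_rule = 0.8 <= ratio <= 1.25
print(f"Disparate impact ratio: {ratio:.2f} -> within 80% rule: {passes_rule}")
```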

Considering the context in which an AI system will be adopted, however, can help determine an appropriate fairness threshold. Stricter disparity measures could be advantageous when a machine learning model will be used to make high-risk decisions, such as in healthcare settings, whereas laxer standards could be acceptable for an AI system developed to recommend movies or songs. "The best choice always depends on the context of the application," says Ruf.

Measuring the fairness of an AI system using current methods, however, may not be enough to mitigate bias. When algorithms are being developed, unfairness can creep in at many different stages along the way and underlying issues can be obscured, since metrics are typically looking at an algorithm's performance. "It could be dangerous to just use algorithmic fairness (metrics) on (their) own," says Rumi Chunara, an associate professor at New York University.

Biased data, for example, can unknowingly contribute to unfairness. Machine learning systems are trained on datasets that may not be fair due to several factors, such as the way the data was collected (for example, if certain populations are underrepresented in the data). In recent work, Chunara and her colleagues found that mortality risk prediction models used in clinical care that were developed in one hospital or region did not generalize to other populations. In particular, they found disparities in performance across racial groups, which they think were caused by the training datasets used. "We should really solve the reason why that data is different for those different populations," says Chunara. "Without figuring out what's happening, we can't just put a fairness metric on an algorithm and require the outcomes to be the same."

Although some pre-processing methods can be applied to account for biases in data, they can only help overcome issues to a certain extent. If data is corrupted and prone to noise and unfairness, little can be done to make the AI system trained on it fair, says Konstantinov. "Essentially, this is an example of what machine learning people call 'garbage in, garbage out'," he adds. "Unfortunately, modern AI algorithms often not only preserve, but even amplify unfairness present in the data."
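As one example of what such pre-processing can look like, the sketch below follows the general idea of reweighing (in the spirit of Kamiran and Calders), in which training examples are weighted so that group membership and outcome labels are decoupled before a model is trained. The dataset and group names are entirely hypothetical, and this is only an illustration of the technique, not a method described by the researchers quoted here.

```python
# Sketch of reweighing as a pre-processing step: weight each example so that
# group membership and the label become statistically independent in the weighted data.
import pandas as pd

train = pd.DataFrame({
    "group": ["a", "a", "a", "a", "b", "b", "b", "b"],  # hypothetical groups
    "label": [1, 1, 1, 0, 1, 0, 0, 0],                  # hypothetical outcomes
})

n = len(train)
train["weight"] = 0.0
for g in train["group"].unique():
    for y in train["label"].unique():
        mask = (train["group"] == g) & (train["label"] == y)
        observed = mask.sum()
        if observed:
            # Expected count if group and label were independent, divided by observed count.
            expected = (train["group"] == g).sum() * (train["label"] == y).sum() / n
            train.loc[mask, "weight"] = expected / observed

# The resulting weights can be passed as sample weights when fitting a model.
print(train)
```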

Several avenues are being explored to improve the fairness of AI systems. Chunara thinks focusing on data is important, for example by better examining its provenance and whether it is reproducible. Increasing public understanding of why it is important to collect data could also help persuade more people to contribute theirs, which could lead to more diverse sources and larger datasets.

Some researchers are also taking causal approaches to fairness to tackle discrimination in AI. Machine learning models learn patterns from data to make decisions, and it is not always clear whether the correlations they detect are relevant and fair. By focusing on making models explainable and transparent, for example, the variables involved in a decision become clearer and can then be judged as fair or not. "I follow with particular interest publications that focus on the intersectionality between algorithmic fairness and causality," says Ruf.

However, whether AI systems considered fair today will still be seen as fair in the long term is another issue being addressed. Characteristics of society change with time, so ensuring a model remains fair in an adaptive manner is an active line of research being pursued. "It may be possible to ensure that a model is completely fair by one definition on a fixed dataset and for particular situations," says Konstantinov. "However, providing long-term guarantees on fairness is an open problem."

 

Sandrine Ceurstemont is a freelance science writer based in London, U.K.


 
