
Communications of the ACM

Viewpoint

Algorithms, Platforms, and Ethnic Bias


platform bias, illustration

Credit: The Wall Street Journal

Ethnic and other biases are increasingly recognized as a problem that plagues software algorithms and datasets.9,12 This is important because algorithms and digital platforms organize ever-greater areas of social, political, and economic life. Algorithms already sift through expanding datasets to provide credit ratings, serve personalized advertisements, match individuals on dating sites, flag unusual credit-card transactions, recommend news articles, determine mortgage qualification, predict the locations and perpetrators of future crimes, parse résumés, rank job candidates, assist in bail or probation proceedings, and perform a wide variety of other tasks. Digital platforms are composed of algorithms executed in software. In performing these functions, as Lawrence Lessig observed, "code" functions like law in structuring human activity. Algorithms and online platforms are not neutral; they are built to frame and drive actions.8


Without proper mitigation, preexisting societal bias will be embedded in the algorithms that make or structure real-world decisions.


Algorithmic "machines" are built with specific hypotheses about the relationship between persons and things. As techniques such as machine learning are more generally deployed, concerns are becoming more acute. For engineers and policymakers alike, understanding how and where bias can occur in algorithmic processes can help address it. Our contribution is the introduction of a visual model (see the accompanying figure) that extends previous research to locate where bias may occur in an algorithmic process.6


Interrogating Bias in Algorithmic Decision-Making

Of course, social bias has long been recognized. Some attribute the introduction of bias into algorithms to the fact that software developers are not well versed in issues such as civil rights and fairness.3 Others suggest it is far more deeply embedded in society and its expressions.4 Our model, inspired by value-chain research, cannot resolve bias, but it provides a template for identifying and addressing the sources of bias, conscious or unconscious, that might infect algorithms. What is certain is that without proper mitigation, preexisting societal bias will be embedded in the algorithms that make or structure real-world decisions.

We model algorithm development, implementation, and use as having five distinct nodes—input, algorithmic operations, output, users, and feedback. Importantly, we incorporate users because their actions affect outcomes. As shown in the accompanying figure, we identify nine potential biases. They are not mutually exclusive, as it is possible for multiple, interacting biases to exist in a single algorithmic process.


Types of Bias

Training Data Bias. Predictive algorithms are trained on datasets, so any biases in the training data will be reflected in the algorithm. In principle, this bias should be easy to detect, but its sources may be difficult to trace. Presumed gold-standard datasets, such as government statistics or even judicial conviction rates, frequently contain bias. For example, if the criminal justice system is biased, then, absent corrections, the algorithm will mirror that bias. Thus, training sets can be subtle contributors to bias.
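
To make the mechanism concrete, the following sketch (not from the article; the groups, rates, and labeling rule are all hypothetical) simulates two groups with identical underlying behavior whose historical labels were recorded by a biased process. Even the simplest possible "model," a per-group base rate, inherits the skew in the records.

```python
# Hypothetical illustration: identical true behavior in groups A and B,
# but the historical labeling process over-flagged group B.
import random

random.seed(0)

def biased_label(group):
    true_positive = random.random() < 0.20                    # same 20% base rate for both groups
    over_flag = (group == "B") and (random.random() < 0.20)   # extra flags recorded for B only
    return 1 if (true_positive or over_flag) else 0

training_data = [(g, biased_label(g)) for g in ("A", "B") for _ in range(5000)]

# "Training" here is just estimating the per-group base rate -- the simplest
# predictor imaginable -- yet it already reproduces the recording bias.
learned_risk = {
    g: sum(y for grp, y in training_data if grp == g) / 5000
    for g in ("A", "B")
}
print(learned_risk)  # group B's estimated risk is inflated purely by the labels
```

Any more sophisticated learner fit to the same labels would absorb the same distortion, which is why auditing the provenance of "gold-standard" data matters as much as auditing the model itself.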

Algorithmic Focus Bias. Algorithmic focus bias arises from both the inclusion and the exclusion of particular variables. For instance, the exclusion of gender or race in a health diagnostic algorithm can lead to inaccurate or even harmful conclusions. However, the inclusion of gender, race, or even ZIP codes in a sentencing algorithm can lead to discrimination. This is the conundrum: in certain cases, such variables must intentionally be used to produce less-biased outcomes.5

Figure. Potential biases and where they may be introduced in the algorithmic value chain.

Algorithmic Processing Bias. Bias can be embedded in the algorithm itself. One source of such bias is the inclusion and weighting of particular variables. Consider the case of a firm's chief scientist who found that "one solid predictor of strong coding is an affinity for a particular Japanese manga site."10 If this finding is embodied in job-candidate-sorting software, the seemingly innocuous choice might exclude qualified candidates from particular groups. Effectively, a desired proxy trait inadvertently excludes certain groups that could perform the job.
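
A hedged sketch of the manga-site example follows; the skill distributions, exposure rates, weight, and cutoff are all assumptions made for illustration. Coding skill is drawn identically for both groups, but exposure to the proxy trait differs, so a screener that rewards the proxy passes the two groups at very different rates.

```python
# Hypothetical proxy-trait screener: equal skill distributions, unequal
# exposure to the proxy trait, unequal pass rates.
import random

random.seed(1)

def candidate(group):
    skill = random.gauss(0.0, 1.0)                                   # identical for both groups
    visits_site = random.random() < (0.6 if group == "A" else 0.1)   # assumed exposure gap
    return group, skill, visits_site

pool = [candidate(g) for g in ("A", "B") for _ in range(5000)]

def screener_score(skill, visits_site):
    return skill + (1.0 if visits_site else 0.0)  # assumed weight: proxy worth a full point of skill

cutoff = 1.0
pass_rates = {
    g: sum(screener_score(s, v) > cutoff for grp, s, v in pool if grp == g) / 5000
    for g in ("A", "B")
}
print(pass_rates)  # group B clears the bar far less often despite equal skill
```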

Transfer Context Bias. Transfer context bias occurs when algorithmic output is applied to an inappropriate or unintended context. One example is using credit scores to make hiring decisions. Bad credit is equated with inferior future job performance, despite little evidence that credit scores are related to work performance. If this undesirable but irrelevant trait is correlated with ethnicity, it can lead to biased outcomes.

Interpretation Bias. Interpretation bias arises when users interpret algorithmic outputs according to their internalized biases. For example, a judge can receive an algorithmically generated recidivism prediction score and decide on the punishment or bail amount for the defendant. Because individual judges may be unconsciously biased, they may use the score as a "scientific" justification for a biased decision.

Outcome Non-Transparency Bias. Algorithms, particularly artificial intelligence and machine learning, often generate opaque results. The reasons for the results may even be inexplicable to the algorithm's creators or the software's owner. For example, when a machine-learning program recommends denial of a loan application, the bank official conveying the decision may not know the exact reasons for denial. The absence of transparency makes it difficult for the subjects of these decisions to identify discriminatory outcomes or even the reasons for the outcome.

Automation Bias. Automation bias results from the belief that an algorithm's output is fact, rather than a prediction with a confidence level. For instance, many credit decisions are now fully automated and use group aggregates and personal credit history.13 The algorithm gives certain people lower scores and limits their access to credit; denial of credit, in turn, means their scores cannot improve. Often, the subjects and decision-makers alike are unaware of the algorithm's assumptions and uncritically accept its decisions. Article 22 of the European Union's General Data Protection Regulation (GDPR) attempts to provide some protection by limiting fully automated algorithmic decisions that have legal or similarly significant effects.11
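
The distinction drawn here, a probabilistic prediction versus a fact, can be shown in a few lines. The score, threshold, and decision rule below are invented for illustration: the automated pipeline collapses an uncertain score into a hard verdict, and the uncertainty never reaches the person affected.

```python
# Hypothetical automated credit decision: the model's output is a
# probability, but the pipeline reports only the thresholded verdict.
def automated_decision(default_probability, threshold=0.5):
    return "deny" if default_probability >= threshold else "approve"

score = 0.53  # barely over an assumed threshold, with real uncertainty around it
print(automated_decision(score))               # "deny" -- reads like a fact
print(f"predicted default risk: {score:.0%}")  # the prediction it actually is
```

Surfacing the score and its uncertainty, rather than only the verdict, is one small design choice that works against automation bias.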

Consumer Bias. The biases that human beings act upon in everyday life are also expressed in their online activities, and digital platforms can exacerbate or give expression to latent bias in online behavior. Users may consciously or unconsciously discriminate on the basis of a user profile that contains ethnically identifiable characteristics, and such bias can come from either side, or party, in a digital interaction. Even more deliberately, anonymous online users "taught" Microsoft's Tay chatbot, which was open to the public for only a short time in 2016, to respond with racially objectionable statements. Effectively, the algorithm or platform provides users with a new venue within which to express their biases.

Feedback Loop Bias. Algorithmic systems create a data trail. The Google Search algorithm, for example, responds to and records a query that becomes customized input for subsequent searches; the algorithm learns from user behavior. In predictive policing, the algorithm relies almost entirely on historical crime data. Suppose the algorithm sends police officers into a neighborhood to prevent crime. Not surprisingly, the increased police presence leads to higher crime detection, raising the recorded crime rate, which can motivate the dispatch of still more police, who make more arrests, thereby initiating a feedback loop. Similarly, Google Search can learn that ethnically biased websites are often selected and therefore recommend them more often, thereby propagating them. As smart as algorithms can be, human monitoring continues to be necessary.
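
A toy simulation, with invented numbers, makes the loop visible: two neighborhoods have identical true crime, the recorded counts start marginally apart, patrols follow the records, detection follows the patrols, and the recorded gap compounds with each iteration.

```python
# Toy predictive-policing feedback loop (all quantities hypothetical).
true_crime = {"north": 100, "south": 100}   # identical underlying crime
recorded = {"north": 12, "south": 10}       # small initial recording gap

for year in range(5):
    total_recorded = sum(recorded.values())
    patrol_share = {n: recorded[n] / total_recorded for n in recorded}       # dispatch follows the data
    detected = {n: true_crime[n] * 0.5 * patrol_share[n] for n in recorded}  # more patrols, more detection
    recorded = {n: recorded[n] + detected[n] for n in recorded}
    print(year, {n: round(v, 1) for n, v in recorded.items()})
# The recorded gap widens every year even though true crime never changes.
```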


Benefits of Platforms and Algorithms

The potential benefits of algorithmic decision-making are less often noticed, but algorithms can also be used to decrease social bias. It is well known that members of the law enforcement community make decisions that are affected by a defendant's "demeanor," dress, and other characteristics that may correlate with ethnicity; an algorithmic process does not "see" these characteristics, which offers the potential for mitigating such bias. For example, Kleinberg et al. created a machine-learning algorithm that outperformed judges in making bail decisions.7 The algorithm was optimized to reduce ethnic disparities among those incarcerated while also reducing the rate of reoffending; such optimization was possible precisely because a disproportionately high number of people in certain racial groups are incarcerated. The point is that it is possible to design algorithms with different social goals. Critics ignore the fact that these data and tools can also be used to decrease inequity and improve efficiency and effectiveness.

Because algorithms are machines, they can be redesigned to improve outcomes. A sales website, for example, could be reengineered to provide greater anonymity and thus reduce opportunities for consumer bias. Because all digital activities leave records, it is also easier to detect biased behavior and thus to reduce it: a government agency could study online behavioral patterns to identify biased behavior, and if it can be identified, then it can be prevented. It would be straightforward, for instance, to assess whether consumers are biased in their evaluations of online vendors and to impose a standardization algorithm to mitigate such bias. Thus, while platforms and algorithms can be used in a discriminatory manner, they also can be studied to expose and address bias. Of course, the will to do so is necessary.
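
As a rough illustration of the audit-and-standardize step mentioned above (the ratings and the adjustment rule are both hypothetical), a platform could measure the rating gap between vendor groups and re-center each group to the overall mean. Whether such an adjustment is appropriate is, of course, a policy question rather than a technical one.

```python
# Hypothetical audit of vendor ratings by group, followed by a crude
# re-centering "standardization" step, for illustration only.
from statistics import mean

ratings = {
    "group_A_vendors": [4.6, 4.4, 4.7, 4.5],
    "group_B_vendors": [4.1, 3.9, 4.2, 4.0],
}

overall = mean(r for rs in ratings.values() for r in rs)
gap = {g: mean(rs) - overall for g, rs in ratings.items()}
print("observed gaps:", {g: round(v, 2) for g, v in gap.items()})  # evidence worth auditing

adjusted = {g: [r - gap[g] for r in rs] for g, rs in ratings.items()}
print("re-centered means:", {g: round(mean(rs), 2) for g, rs in adjusted.items()})
```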


Conclusion

Computer scientists have a unique challenge and opportunity to use their skills to address the serious social problem of bias. We contribute to increased awareness by developing a readily understandable visual model for identifying where bias might emerge in the complex interaction between algorithms and humans. While we focus on ethnic bias, the model can be extended to other types of bias. It can be particularly useful in policy discussions, helping explain to policymakers and laypersons where a particular initiative could have an impact and what it would not address.


Interest in mitigating algorithmic bias has increased, but "correcting" the data to increase fairness can be hampered by difficulty in determining what is "fair." Some have suggested that transparency would provide protection against bias and other socially undesirable outcomes.2 Leading computing professional organizations such as ACM are aware of the problems and have established principles to guide their members in addressing them. For example, in 2017 the ACM Public Policy Council issued a statement of general principles regarding algorithmic transparency and accountability that identified potential bias as a serious issue.1 Unsurprisingly, firms resist transparency, maintaining that revealing their data and algorithms could allow other actors to game their systems. In many cases this response is valid, yet it is also self-serving, as it prevents scrutiny. Moreover, software developers often cannot provide definitive explanations of complex algorithmic outcomes, meaning transparency alone may be unable to provide accountability. Further, a single algorithmic model may contain multiple, interacting sources of bias, making any one source more difficult to trace. Even in such cases, however, outcomes can be tested to discover evidence of potential bias.
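
One simple form such outcome testing can take is comparing favorable-outcome rates across groups. The records below are invented, and the 80% figure is the familiar "four-fifths" screening heuristic rather than a legal standard.

```python
# Hypothetical outcome test: compare approval rates across groups and
# apply the four-fifths screening heuristic for potential disparate impact.
decisions = [            # (group, approved) -- invented records
    ("A", 1), ("A", 1), ("A", 0), ("A", 1), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 1),
]

def approval_rate(group):
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

ratio = approval_rate("B") / approval_rate("A")
print(f"selection-rate ratio: {ratio:.2f}")  # 0.50 here, well below 0.80 -- flag for review
```

Tests of this kind do not explain why a disparity exists, but they do make disparities visible even when the model itself remains opaque.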

Platforms, algorithms, software, data-driven decision-making, and machine learning are shaping choices, alternatives, and outcomes. It is vital to understand where and how social ills such as bias can be expressed and reinforced by digital technologies. Algorithmic bias can be addressed and, for this reason, critics who suggest these technologies will necessarily exacerbate bias are too pessimistic. Digital processes create a record that can be examined and analyzed with software tools. In the analog world, ethnic and other kinds of discrimination were difficult and expensive to study and identify; in the digital world, the data captured is often permanent and can be analyzed with existing techniques. Although digital technologies have the potential to reinforce old biases with new tools, they can also help identify bias and monitor progress in addressing it.


References

1. ACM Public Policy Council. Statement on Algorithmic Transparency and Accountability (2017), 1–2; http://bit.ly/2n4RBjV

2. Ananny, M. and Crawford, K. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society 20, 3 (Mar. 2018), 973–989.

3. Barocas, S. et al. Big Data, Data Science, and Civil Rights. arXiv preprint arXiv:1706.03102 (2017).

4. Caliskan, A., Bryson, J.J., and Narayanan, A. Semantics derived automatically from language corpora contain human-like biases. Science 356, 6334 (2017), 183–186; https://doi.org/10.1126/science.aal4230

5. d'Alessandro, B., O'Neil, C., and LaGatta, T. Conscientious classification: A data scientist's guide to discrimination-aware classification. Big Data 5, 2 (Feb. 2017), 120–134.

6. Danks, D. and London, A.J. Algorithmic bias in autonomous systems. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (Aug. 2017), 4691–4697.

7. Kleinberg, J. et al. Human decisions and machine predictions. Quarterly Journal of Economics 133, 1 (Jan. 2017), 237–293.

8. Lessig, L. Code: And Other Laws of Cyberspace. ReadHowYouWant.com, 2009.

9. O'Neil, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Broadway Books, New York, 2016.

10. Peck, D. They're watching you at work. The Atlantic (Dec. 2013); https://bit.ly/2jhKIt4

11. EU GDPR Portal. Key Changes with the General Data Protection Regulation (2017).

12. Silva, S. and Kenney, M. Algorithms, platforms, and ethnic bias: An integrative essay. Phylon: The Clark Atlanta University Review of Race and Culture 55, 1–2 (2018).

13. Zarsky, T. The trouble with algorithmic decisions: An analytic road map to examine efficiency and fairness in automated and opaque decision making. Science, Technology, and Human Values 41, 1 (Jan. 2016), 118–132.


Authors

Selena Silva ([email protected]) is a research assistant at the University of California, Davis, USA.

Martin Kenney ([email protected]) is a Distinguished Professor in the Department of Human Ecology at the University of California, Davis, CA, USA, and is Research Director for the Berkeley Roundtable on the International Economy, Berkeley, CA, USA.


Footnotes

This research was funded in part by the Ewing Marion Kauffman Foundation and Clark Atlanta University. The contents of this Viewpoint are solely the responsibility of the authors.


Copyright held by authors.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2019 ACM, Inc.


 
