
Communications of the ACM

ACM Careers

Hiring Data Creates Risk of Workplace Bias


[Illustration: mosaic of people]

American employers increasingly rely on large datasets and computer algorithms to decide who gets interviewed, hired, or promoted.

While these data algorithms can help avoid biased human decision-making, they also risk introducing new forms of bias or reinforcing existing ones.

Pauline Kim, Daniel Noyes Kirby Professor of Law at Washington University in St. Louis, explains that when algorithms rely on inaccurate, biased, or unrepresentative data, they may systematically undermine racial and ethnic minorities, women, and other historically disadvantaged groups.

"When this happens, the result is classification bias — a term that highlights the risk that data algorithms may sort or score workers in ways that worsen inequality or disadvantage along the lines or race, sex, or other protected characteristics," Kim says.

According to Kim, an expert on employment law, "we must fundamentally rethink how anti-discrimination laws apply in the workplace in order to address classification bias and avoid further entrenching workplace inequality."

Kim explains how existing employment discrimination laws must adapt to meet the challenges posed by algorithmic decision-making in "Data-Driven Discrimination at Work," published in the William & Mary Law Review.

"Rote application of our existing laws will fail to address the real sources of bias when discrimination is data-driven," Kim says.

"Because data algorithms differ significantly from traditional discriminatory practices, they require a different legal response adapted to the particular risks they raise. Focusing on classification bias suggests that anti-discrimination law should be adjusted in a number of ways when it is applied to data algorithms."


 
