
Communications of the ACM

ACM News

Algorithms Should’ve Made Courts More Fair. What Went Wrong?



A study published earlier this year found that requiring Kentucky judges to consult an algorithm when deciding whether to hold a defendant in jail before trial led them to offer no-bail release to white defendants far more often than to black defendants.

Credit: Comstock/Getty Images

Kentucky lawmakers thought that requiring judges to consult an algorithm when deciding whether to hold a defendant in jail before trial would make the state's justice system cheaper and fairer by setting more people free. That's not how it turned out.

Before the 2011 law took effect, there was little difference between the proportions of black and white defendants granted release to await trial at home without cash bail. After judges were mandated to consider a score predicting the risk that a person would reoffend or skip court, they began offering no-bail release to white defendants much more often than to black defendants. The proportion of black defendants granted release without bail increased only slightly, to a little over 25 percent, while the rate for whites jumped to more than 35 percent. Kentucky has changed its algorithm twice since 2011, but available data shows the gap remained roughly constant through early 2016.

The Kentucky experience, detailed in a study published earlier this year, is timely. Many states and counties now calculate "risk scores" for criminal defendants that estimate the chance a person will reoffend before trial or skip court; some use similar tools in sentencing. They are supposed to help judges make fairer decisions and cut the number of people in jail or prison, sometimes as part of eliminating cash bail. Since 2017, Kentucky has released some defendants scored as low-risk based purely on an algorithm's say-so, without a judge being involved.

How these algorithms change the way justice is administered is largely unknown. Journalists and academics have shown that risk-scoring algorithms can be unfair or racially biased. The more crucial question of whether they help judges make better decisions and achieve the tools' stated goals is largely unanswered.

The Kentucky study is one of the first in-depth, independent assessments of what happens when algorithms are injected into a justice system. It found that the project missed its goals and even created new inequities. "The impacts are different than what policymakers may have hoped for," says Megan Stevenson, a law professor at George Mason University and the study's author.

 

From Wired
View Full Article

 


 

