Inspecting Algorithms For Bias


Is there justice in the automated decision-making systems used by courts?

ProPublica compared COMPAS's risk assessments for more than 10,000 people arrested in one Florida county with how often those people actually went on to reoffend.

Credit: Pablo Delcan

It was a striking story. "Machine Bias," the headline read, and the teaser proclaimed: "There's software used across the country to predict future criminals. And it's biased against blacks."

ProPublica, a Pulitzer Prize–winning nonprofit news organization, had analyzed risk assessment software known as COMPAS. It is being used to forecast which criminals are most likely to reoffend. Guided by such forecasts, judges in courtrooms throughout the United States make decisions about the future of defendants and convicts, determining everything from bail amounts to sentences. When ProPublica compared COMPAS's risk assessments for more than 10,000 people arrested in one Florida county with how often those people actually went on to reoffend, it discovered that the algorithm "correctly predicted recidivism for black and white defendants at roughly the same rate." But when the algorithm was wrong, it was wrong in different ways for blacks and whites. Specifically, "blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend." And COMPAS tended to make the opposite mistake with whites: "They are much more likely than blacks to be labeled lower risk but go on to commit other crimes."
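The distinction ProPublica drew between overall accuracy and unequal error rates can be made concrete. The sketch below is illustrative only, not ProPublica's actual analysis code; the column names ("race", "high_risk", "reoffended") are hypothetical placeholders rather than fields from the COMPAS data. It shows how one might compute accuracy, false positive rate, and false negative rate separately for each group in a table of defendants.

```python
# Minimal sketch: per-group error rates for a binary risk label versus the
# observed outcome. Column names are assumptions for illustration.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, group_col: str = "race") -> pd.DataFrame:
    """Return accuracy, false positive rate, and false negative rate per group."""
    rows = []
    for group, g in df.groupby(group_col):
        predicted = g["high_risk"].astype(bool)   # algorithm's label: high risk
        actual = g["reoffended"].astype(bool)     # what actually happened
        accuracy = (predicted == actual).mean()
        # False positive: labeled high risk, but did not reoffend.
        fpr = (predicted & ~actual).sum() / (~actual).sum()
        # False negative: labeled low risk, but did reoffend.
        fnr = (~predicted & actual).sum() / actual.sum()
        rows.append({group_col: group, "accuracy": accuracy,
                     "false_positive_rate": fpr, "false_negative_rate": fnr})
    return pd.DataFrame(rows)
```

A system can show similar accuracy across groups while its false positive and false negative rates diverge sharply between them, which is exactly the pattern ProPublica reported.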

Whether it's appropriate to use systems like COMPAS is a question that goes beyond racial bias. The U.S. Supreme Court might soon take up the case of a Wisconsin convict who says his right to due process was violated when the judge who sentenced him consulted COMPAS, because the workings of the system were opaque to the defendant. Potential problems with other automated decision-making (ADM) systems exist outside the justice system, too. On the basis of online personality tests, ADMs are helping to determine whether someone is the right person for a job. Credit-scoring algorithms play an enormous role in whether you get a mortgage, a credit card, or even the most cost-effective cell-phone deals.

It's not necessarily a bad idea to use risk assessment systems like COMPAS. In many cases, ADM systems can increase fairness. Human decision making is at times so incoherent that it needs oversight to bring it in line with our standards of justice. As one particularly unsettling study showed, parole boards were more likely to free convicts if the judges had just had a meal break. The judges themselves were probably unaware of the pattern. An ADM system could discover such inconsistencies and improve the process.

But often we don't know enough about how ADM systems work to know whether they are fairer than humans would be on their own. In part because the systems make choices on the basis of underlying assumptions that are not clear even to the systems' designers, it's not necessarily possible to determine which algorithms are biased and which ones are not. And even when the answer seems clear, as in ProPublica's findings on COMPAS, the truth is sometimes more complicated.

 

From MIT Technology Review