
Communications of the ACM

ACM News

We Use Big Data to Sentence Criminals. But Can the Algorithms Really Tell Us What We Need to Know?


Use of data-driven risk assessments in sentencing may be considered by the U.S. Supreme Court.

The U.S. Supreme Court must consider whether to hear a case that could determine whether it is appropriate for any court to use automated risk assessment tools when sentencing criminals.

Credit: Karen Neoh/flickr

In 2013, a man named Eric L. Loomis was sentenced for eluding police and driving a car without the owner's consent.

When the judge weighed Loomis' sentence, he considered an array of evidence, including the results of an automated risk assessment tool called COMPAS. Loomis' COMPAS score indicated he was at a "high risk" of committing new crimes. Considering this prediction, the judge sentenced him to seven years.

Loomis challenged his sentence, arguing it was unfair to use the data-driven score against him. The U.S. Supreme Court must now consider whether to hear his case – and perhaps settle a nationwide debate over whether it is appropriate for any court to use these tools when sentencing criminals.

Today, judges across the U.S. use risk assessment tools like COMPAS in sentencing decisions. In at least 10 states, these tools are a formal part of the sentencing process. Elsewhere, judges informally refer to them for guidance.

I have studied the legal and scientific bases for risk assessments. The more I investigate the tools, the more my caution about them grows.

The scientific reality is that these risk assessment tools cannot do what advocates claim. The algorithms cannot actually make predictions about future risk for the individual defendants being sentenced.

 

From The Conversation