Law Enforcement

AI is sentencing people based on their ‘risk’ assessment


AI-powered tools that assess the 'risk' posed by an individual are being used to make incarceration and sentencing decisions.

During the Data for Black Lives conference last weekend, several experts shared how AI is evolving America’s controversial prison system.

America imprisons more people than any other nation. This is not simply a function of population size: its incarceration rate is the highest in the world, at ~716 per 100,000 of the national population. Russia, with the second-highest rate, incarcerates ~455 per 100,000.
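The "per 100,000" figures above are simple rate calculations. The sketch below shows how such a rate is derived; the prisoner and population counts are rough illustrative values, not official statistics.

```python
# Derive an incarceration rate "per 100,000" from raw counts.
# The figures below are rough illustrative values, not official data.
def rate_per_100k(prisoners: int, population: int) -> float:
    """Incarcerated people per 100,000 of the national population."""
    return prisoners / population * 100_000

# Roughly 2.29 million prisoners in a population of about 320 million
# produces a rate around the ~716 per 100,000 cited above.
print(round(rate_per_100k(2_290_000, 320_000_000)))
```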

Black males are, by far, America's most incarcerated demographic.

AI has repeatedly been shown to have bias problems. Last year, the American Civil Liberties Union found that Amazon's facial recognition technology disproportionately flagged people with darker skin tones as criminals.

The bias is not intentional but stems from a wider diversity problem in STEM careers. In the West, these fields are dominated by white males.

A 2010 study by researchers at NIST and the University of Texas at Dallas found (PDF) that algorithms designed and tested in East Asia are better at recognising East Asian faces, while those developed in Western countries are more accurate at detecting Caucasian faces.

Deploying such inherently biased AIs is bound to exacerbate societal problems. Most concerning of all, US courtrooms are using AI tools for 'risk' assessments when making sentencing decisions.

Using a defendant's profile, the AI generates a recidivism score – a number which aims to estimate whether an individual will reoffend. A judge then uses that score to make decisions such as the severity of the sentence, which services the individual should be provided, and whether a person should be held in jail before trial.
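To make the mechanism concrete, the sketch below shows how a risk-assessment tool might map a defendant's profile to a recidivism score and a low/medium/high band. The feature names, weights, and thresholds are illustrative assumptions for a logistic-regression-style model, not the actual logic of any real tool such as COMPAS.

```python
# Hypothetical sketch of a recidivism "risk score" pipeline.
# All weights, features, and thresholds are invented for illustration.
import math

# Illustrative weights for a logistic-regression-style model.
WEIGHTS = {
    "prior_arrests": 0.35,
    "age_at_first_offence": -0.04,
    "employed": -0.50,  # 1 = employed, 0 = unemployed
}
BIAS = -1.0

def risk_score(profile: dict) -> float:
    """Map a defendant profile to a probability-like score in [0, 1]."""
    z = BIAS + sum(WEIGHTS[k] * profile.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic function

def risk_band(score: float) -> str:
    """Bucket the score into the bands a judge might be shown."""
    if score < 0.33:
        return "low"
    if score < 0.66:
        return "medium"
    return "high"

profile = {"prior_arrests": 4, "age_at_first_offence": 19, "employed": 0}
print(risk_band(risk_score(profile)))  # prints "medium"
```

The bias concern described above enters through exactly such choices: if the training data or hand-picked features correlate with race, the score inherits that correlation even though race never appears as an input.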

Last July, a statement (PDF) signed by over 100 civil rights organisations – including the ACLU – called for AI to be kept out of risk assessments.

If the bias problem with AIs is solved, their use in the justice system could improve trust in decisions, reducing current questions over whether a judge was prejudiced in their sentencing. However, we are nowhere near that point yet.

