Stop using AI tools for criminal justice apps, warn experts


ITEM: Artificial intelligence experts have strongly advised members of the criminal justice system to stop using AI-powered risk assessment tools to decide who should and shouldn’t be put in jail because the technology isn’t good enough yet.

Last week, the Partnership on AI (PAI) released a report that documents “serious shortcomings” of algorithmic risk assessment tools currently being used within the US criminal justice system to decide whether to detain or release defendants.

Risk assessment tools are very basic forms of AI that predict the probability of a particular future outcome – in this case, whether a defendant is likely to fail to appear in court or to commit further crimes once released. Such tools are already widely used in the US, and some jurisdictions aim to mandate their use. The rationale – at least in the US, which has the highest per-capita incarceration rate in the world – is to reduce “unnecessary detention and provide fairer and less punitive decisions than existing processes”.
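To make the mechanics concrete, here is a minimal sketch of what such a prediction looks like under the hood: a few defendant attributes fed into a fitted statistical model that returns a probability. Every feature, coefficient, and threshold below is invented for illustration; real instruments use their own inputs and are fitted to historical data – which is precisely where the problems described below creep in.

```python
# Illustrative only: a toy risk score in the spirit of these tools.
# The features, weights, and threshold below are invented for this sketch;
# real instruments use different inputs and scoring rules.
import math

def failure_to_appear_probability(prior_fta_count: int, age: int, pending_charges: int) -> float:
    """Map a few defendant attributes to a probability via a logistic model."""
    # Hypothetical coefficients -- in practice these would be fit to historical data,
    # which is exactly where the report says bias and validity problems arise.
    score = -1.5 + 0.8 * prior_fta_count + 0.4 * pending_charges - 0.02 * (age - 18)
    return 1.0 / (1.0 + math.exp(-score))  # logistic link: score -> probability

if __name__ == "__main__":
    p = failure_to_appear_probability(prior_fta_count=2, age=24, pending_charges=1)
    print(f"Predicted probability of failure to appear: {p:.2f}")
    # A jurisdiction might release if p < 0.3 and flag for review otherwise --
    # a policy choice layered on top of the model, not a property of the model itself.
```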

The problem, says the PAI report, is that achieving those goals requires far more than just the tools themselves. The report lists numerous issues that fall into three basic categories:

1. Concerns about the accuracy, bias, and validity in the tools themselves

Although the use of these tools is in part motivated by the desire to mitigate existing human fallibility in the criminal justice system, this report suggests that it is a serious misunderstanding to view such tools as objective or neutral simply because they are based on data.

2. Issues with the interface between the tools and the humans who interact with them

In addition to technical concerns, these tools must be held to high standards of interpretability and explainability to ensure that users (including judges, lawyers, and clerks, among others) can understand how the tools’ predictions are reached and make reasonable decisions based on these predictions. 

3. Questions of governance, transparency, and accountability

To the extent that such systems are adopted to make life-changing decisions, both the tools themselves and the decision-makers who specify, mandate, and deploy them must meet high standards of transparency and accountability.

There’s a lot at stake in getting this right. The PAI report emphasizes that criminal justice is “one domain where it is imperative to exercise maximal caution and humility in the deployment of statistical tools,” yet current deployment appears to exercise neither caution nor humility.

To that end, the paper lists ten minimum requirements that jurisdictions should implement prior to using these tools:

1. Training datasets must measure the intended variables

2. Bias in statistical models must be measured and mitigated (one illustrative check is sketched after this list)

3. Tools must not conflate multiple distinct predictions

4. Predictions and how they are made must be easily interpretable

5. Tools should produce confidence estimates for their predictions

6. Users of risk assessment tools must attend trainings on the nature and limitations of the tools

7. Policymakers must ensure that public policy goals are appropriately reflected in these tools

8. Tool designs, architectures, and training data must be open to research, review, and criticism

9. Tools must support data retention and reproducibility to enable meaningful contestation and challenges

10. Jurisdictions must take responsibility for the post-deployment evaluation, monitoring, and auditing of these tools
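Requirement 2 is easier to picture with a concrete check in hand. The sketch below – a toy, with made-up records and group labels – computes one commonly used disparity measure: the false positive rate per group, i.e. how often people who did not go on to reoffend were nonetheless flagged as high risk. It is one metric among several and nothing like a full audit, but it shows the kind of measurement the requirement asks jurisdictions to do before deployment.

```python
# Illustrative only: one way a jurisdiction might begin to audit requirement 2,
# by comparing false positive rates across groups on held-out historical cases.
# The records and group labels here are made up for the sketch.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_high_risk, actually_reoffended) tuples."""
    fp = defaultdict(int)         # flagged high risk but did not reoffend
    negatives = defaultdict(int)  # all who did not reoffend
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:
            negatives[group] += 1
            if predicted_high_risk:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives if negatives[g]}

if __name__ == "__main__":
    sample = [
        ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
        ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
    ]
    print(false_positive_rates(sample))
    # A large gap between groups on this (or any other) error metric is the kind of
    # warning sign that measurement and mitigation are meant to surface.
```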

According to Logan Koepke, senior policy analyst at Upturn, the report and its ten requirements show that we are a long way from being ready to deploy these tools responsibly.

“To our knowledge, no single jurisdiction in the US is close to meeting the ten minimum requirements for responsible deployment of risk assessment tools detailed here,” he said.

Consequently, the report strongly recommends that policymakers “either avoid using risk assessments altogether for decisions to incarcerate, or find ways to resolve the requirements outlined in this report via future standard-setting processes.”

The odds of policymakers following that advice may not be so good.

After all, for several years now we’ve seen report after report showing that AI tools used for law enforcement often reflect human biases, especially institutional biases such as racism. Yet governments have been deploying them anyway, perhaps under the mistaken impression that algorithmic decision-making is neutral and unbiased simply because it is data-driven – and that, the logic goes, applying AI to the criminal justice system will therefore make that system neutral and unbiased.

Which – if not entirely true, or even a real objective on the part of the government in question – at least sounds good. Which is often close enough for some policymakers.

Anyway, here’s hoping at least some of them listen to the experts (for once) and make an effort to align existing programs with PAI’s list of requirements, because a future where computers are put in charge of deciding who should or shouldn’t be in jail doesn’t sound much like progress to me.
