AI tools “too biased” for sentencing decisions


Report: algorithms no substitute for trials

Bias and inaccuracy render artificial intelligence (AI) algorithmic criminal justice tools unsuitable for assessing risk when making decisions on whether to imprison people or release them, according to a report by experts in the field.

They warned that, while prosecuting authorities in some US jurisdictions increasingly relied on this technology, it was all the more urgent that software engineers and designers exercised caution and humility.

Other jurisdictions could come under pressure to adopt the technology as part of efforts to reduce cost, they argued: “Lessons drawn from the US context have widespread applicability in other jurisdictions, too, as the international policymaking community considers the deployment of similar tools.”

Many contributors to the report by the independent US-based Partnership on AI (PAI) – which has over 80 members, including specialists in AI, machine learning research, and ethics – accepted that only individualised judicial hearings should inform decisions on detention to achieve just outcomes.

An “overwhelming majority” agreed that “current risk assessment tools are not ready for decisions to incarcerate human beings”.

The report adds weight to recent findings by a legal academic, who argued for transparency in AI, and a large-scale European study, which argued that a human-centred approach to AI development was vital to maintain public trust in the technology.

PAI recommended that developers of criminal justice risk assessment tools should adopt 10 minimum requirements that addressed issues such as technical accuracy, bias and validity, and the transparency and accountability of the systems in which life-changing decisions were made.

One expert, Andi Peng, AI resident at Microsoft Research, said: “As research continues to push forward the boundaries of what algorithmic decision systems are capable of, it is increasingly important that we develop guidelines for their safe, responsible, and fair use.”

Another, Logan Koepke, senior policy analyst at Upturn, an organisation which “promotes equity and justice in the design, governance, and use of digital technology”, said: “This report… highlights, at a statistical and technical level, just how far we are from being ready to deploy these tools responsibly.

“To our knowledge, no single jurisdiction in the US is close to meeting the 10 minimum requirements for responsible deployment of risk assessment tools detailed here.”

The PAI report was prompted by a California Senate bill, which would mandate the use of statistical and machine learning risk assessment tools for pre-trial detention decisions.

It concluded: “PAI believes standard setting in this space is essential work for policymakers because of the enormous momentum that state and federal legislation have placed behind risk assessment procurement and deployment…

“For AI researchers, the task of foreseeing and mitigating unintended consequences and malicious uses has become one of the central problems of our field.

“Doing so requires a very cautious approach to the design and engineering of systems, as well as careful consideration of the ways that they will potentially fail and the harms that may occur as a result.

“Criminal justice is a domain where it is imperative to exercise maximal caution and humility in the deployment of statistical tools.”



