Bias and inaccuracy render artificial intelligence (AI) algorithmic criminal justice tools unsuitable for assessing risk when making decisions on whether to imprison people or release them, according to a report by experts in the field.
They warned that while prosecuting authorities in some US jurisdictions increasingly relied on this technology, it was all the more urgent that software engineers and designers exercise caution and humility.
Other jurisdictions could come under pressure to adopt the technology as part of efforts to reduce cost, they argued: “Lessons drawn from the US context have widespread applicability in other jurisdictions, too, as the international policymaking community considers the deployment of similar tools.”
Many contributors to the report by the independent US-based Partnership on AI (PAI) – which has over 80 members, including specialists in AI, machine learning research, and ethics – accepted that only individualised judicial hearings should inform decisions on detention to achieve just outcomes.
An “overwhelming majority” agreed that “current risk assessment tools are not ready for decisions to incarcerate human beings”.
The report adds weight to recent findings by a legal academic, who argued for transparency in AI, and a large-scale European study, which argued that a human-centred approach to AI development was vital to maintain public trust in the technology.
PAI recommended that developers of criminal justice risk assessment tools adopt 10 minimum requirements addressing issues such as technical accuracy, bias and validity, and the transparency and accountability of the systems in which life-changing decisions are made.
One expert, Andi Peng, AI resident at Microsoft Research, said: “As research continues to push forward the boundaries of what algorithmic decision systems are capable of, it is increasingly important that we develop guidelines for their safe, responsible, and fair use.”
Another, Logan Koepke, senior policy analyst at Upturn, an organisation which “promotes equity and justice in the design, governance, and use of digital technology”, said: “This report… highlights, at a statistical and technical level, just how far we are from being ready to deploy these tools responsibly.
“To our knowledge, no single jurisdiction in the US is close to meeting the 10 minimum requirements for responsible deployment of risk assessment tools detailed here.”
The PAI report was prompted by a California Senate bill, which would mandate the use of statistical and machine learning risk assessment tools for pre-trial detention decisions.
It concluded: “PAI believes standard setting in this space is essential work for policymakers because of the enormous momentum that state and federal legislation have placed behind risk assessment procurement and deployment…
“For AI researchers, the task of foreseeing and mitigating unintended consequences and malicious uses has become one of the central problems of our field.
“Doing so requires a very cautious approach to the design and engineering of systems, as well as careful consideration of the ways that they will potentially fail and the harms that may occur as a result.
“Criminal justice is a domain where it is imperative to exercise maximal caution and humility in the deployment of statistical tools.”