AI users in justice system should be under duty to act responsibly


Adams Bhatti: AI has the potential to drive hugely positive outcomes

Those who use artificial intelligence (AI) in the justice system should be under a duty to “act responsibly”, campaign group JUSTICE has argued in a report.

The duty should include “an obligation to pause, rethink, redesign or even stop development or deployment if significant risks to the rule of law or human rights are identified”.

In what it described as “the first rights-based framework to guide AI use across the UK justice system”, JUSTICE set out two “clear requirements”.

The first was that AI development should be “goal-led” to “ensure the tool is clearly aimed at improving one or more of the justice system’s core goals of access to justice, fair and lawful decision making, and transparency”.

The second was that users should be under “a duty to act responsibly” to ensure “all those involved in creating and using the tool take responsibility for ensuring the rule of law and human rights are embedded at each stage of its design, development, and deployment”.

Researchers said AI “had the potential, if deployed well, to be of great service to the strengthening of our justice system”, but it was “far from a silver bullet”.

It could lack transparency, embed or exacerbate societal biases, and produce “inaccurate outputs which are nevertheless convincing to the people around it”.

Researchers said AI could be used positively in the justice system for legal research and drafting, or to help the police investigate crime, such as identifying child sexual abuse in online images.

AI-based tools could also be used to identify biases. For example, in the US, natural language processing was used to “measure the different gender attitudes of judges, by analysing how frequently judges linked men with careers and women with families in their written opinions.”

AI could also be used to help “overburdened” courts write and publish their judgments, analyse data across the justice system and other state departments, and improve translation services and access to court documents.

There were also examples where AI had been used to improve public engagement in law-making and policy development, particularly in Taiwan, and to improve advocacy for under-represented groups.

Researchers also set out the risks, headed by “poor and incomplete data” in a justice system which “has more data gaps than any other public service”.

When training data was “incomplete, poorly curated” or reflected social inequalities, the results could “unintentionally replicate or even amplify these biases”.

The report went on: “Many AI models – including those used for risk assessments, sentencing recommendations, or fraud detection – rely on probabilistic methods.

“Instead of offering guaranteed correctness, they provide predictions with varying degrees of confidence, which means there is always a margin of error.”

This could result in “adverse outcomes” if decision-makers treated results as “fully accurate or certain”.

Further risks from AI were the production of “fabricated and misleading content” (hallucinations), and the fact that many AI models, “particularly those based on deep learning”, operated as ‘black boxes’ which lacked transparency.

Risks faced by individuals included threats to “fundamental rights”, such as unlawful or arbitrary arrests or detentions; inequality of arms, where only one party has access to advanced AI systems; and a lack of accountability in AI decisions, for example where, as in the Post Office scandal, sub-postmasters were unable to rebut the “presumed reliability” of computer evidence.

Sophia Adams Bhatti, report co-author and chair of JUSTICE’s AI programme, commented: “Given the desperate need to improve the lives of ordinary people and strengthen public services, AI has the potential to drive hugely positive outcomes.

“Equally, human rights and the rule of law drive prosperity, enhance social cohesion, and strengthen democracy. We have set out a framework which will allow for the positive potential of both to be aligned.”

Stephanie Needleman, legal director of JUSTICE, added: “AI isn’t a cure-all, and its use carries big risks – to individuals and to our democracy if people lose trust in the law through more scandals like Horizon.

“But it also offers potential solutions. We must therefore find ways to harness it safely in service of a well-functioning justice system. Our rights-based framework can help us navigate this challenge.”



