- Legal Futures - https://www.legalfutures.co.uk -

Governing the automation of public decision-making

Guest post by Tatiana Kazim, research fellow in public law and technology at Public Law Project


Scrutinising the ever-increasing use of algorithms and big data by government is a key part of our work at Public Law Project (PLP).

As the A-level results scandal in 2020 proved, such scrutiny is essential to ensuring that algorithms operate fairly, lawfully and without bias.

This work is far from finished and the issues in this field are complex in a number of ways – not least because of the legal uncertainty generated by the current patchwork of laws.

That’s why PLP strongly supports the Law Commission’s proposed project on a new legal framework to govern the automation of public decision-making, as part of its upcoming 14th programme of law reform. (Our full response to the Law Commission’s consultation is here [1].)

Currently, the UK government uses algorithms to make decisions in a range of areas, including tax, welfare, criminal justice, immigration and social care. Through our research, we are finding more and more instances of automated decision-making in government.

Automated government decision-making may offer various benefits: saving time, reducing cost and improving the quality and consistency of decisions. But, crucially, there are also significant challenges in achieving fair and lawful decisions through the use of algorithms.

The issues include:

One example we are particularly concerned about is the automated triage system used by the Home Office to determine whether a marriage should be investigated as a sham.

As we currently understand it, the issues identified above may well arise in relation to the sham marriages algorithm. You can read more from our work on the sham marriages algorithm here [2].

And this is just one example of a wider pattern. Automated public decision-making takes place in many contexts and the negative impacts of flawed systems are likely to be far-reaching and keenly felt, particularly among the most vulnerable groups in society.

Compounding these problems, the current legal framework governing automation of public decision-making is a patchwork of public law, equality/discrimination law, human rights law and data protection law.

This patchwork generates considerable uncertainty for all involved (including government). It also creates a considerable risk that individuals who experience unfairness will go without an effective remedy.

Due to the potentially detrimental outcomes for individuals, we would welcome a review of the existing legal framework and options for reform.

A new legal framework has the potential to mitigate the risks of automated decision-making while preserving its benefits. The European Commission’s proposed artificial intelligence (AI) regulation [3], adopted on 21 April 2021, might provide a useful blueprint for the UK’s own approach.

The proposed regulation includes:

Measures like these could significantly help to ensure greater transparency and accountability in automated public decision-making, reduce the risk both of discrimination and of unlawful decisions that violate administrative law principles, and increase legal certainty.

The Law Commission, as an independent, expert-led, consultative body, is ideally placed to address the pressing need for law reform work in this area, and to help ensure that government algorithms deliver fair outcomes for all.

We hope to see its promising proposal develop into meaningful action.

Find out more about PLP’s work on public law and technology here [4].