Governing the automation of public decision-making


Guest post by Tatiana Kazim, research fellow in public law and technology at Public Law Project

Kazim: Negative impacts of flawed systems are far-reaching

Scrutinising the ever-increasing use of algorithms and big data by government is a key part of our work at Public Law Project (PLP).

As the A-level results scandal in 2020 showed, such scrutiny is essential to ensuring that algorithms operate fairly, lawfully and without bias.

This work is far from finished and the issues in this field are complex in a number of ways – not least because of the legal uncertainty generated by the current patchwork of laws.

That’s why PLP strongly supports the Law Commission’s proposed project on a new legal framework to govern the automation of public decision-making, as part of its upcoming 14th programme of law reform. (Our full response to the Law Commission’s consultation is here.)

Currently, the UK government uses algorithms to make decisions in a range of areas, including tax, welfare, criminal justice, immigration and social care. Through our research, we are finding more and more instances of automated decision-making in government.

Automated government decision-making may offer various benefits: saving time, reducing cost and improving the quality and consistency of decisions. But, crucially, there are also significant challenges in achieving fair and lawful decisions through the use of algorithms.

The issues include:

  • Bias, including as a result of problems in the design or training data. For example, if the training data is unrepresentative, it may systematically produce worse outcomes when applied to a particular group. This could amount to discrimination, in violation of the Equality Act 2010.
  • Opacity, whether intentional or due to the complexity of the system. A machine learning algorithm may be a ‘black box’ to experts and laypeople alike. Even relatively simple algorithms can be hard to understand and are often kept secret. This lack of transparency could undermine procedural fairness.
  • Automation bias, a well-established psychological phenomenon whereby people put too much trust in computers. This may mean that officials over-rely on automated decision support systems and fail to exercise meaningful review of an algorithm’s outputs.

One example we are particularly concerned about is the automated triage system used by the Home Office to determine whether a marriage should be investigated as a sham.

As we currently understand it, all three of the issues identified above may well arise in relation to the sham marriages algorithm. You can read more from our work on the sham marriages algorithm here.

And this is just one example of a wider pattern. Automated public decision-making takes place in many contexts and the negative impacts of flawed systems are likely to be far-reaching and keenly felt, particularly among the most vulnerable groups in society.

Compounding these problems, the current legal framework governing automation of public decision-making is a patchwork of public law, equality/discrimination law, human rights law and data protection law.

It is generating considerable uncertainty for all involved (including government). It also creates a significant risk that individuals who experience unfairness will be left without an effective remedy.

Due to the potentially detrimental outcomes for individuals, we would welcome a review of the existing legal framework and options for reform.

A new legal framework has the potential to mitigate the risks of automated decision-making and preserve its benefits. The European Commission’s proposed artificial intelligence (AI) regulation, published on 21 April 2021, might provide a useful blueprint for the UK’s own approach.

The proposed regulation includes:

  • A register of high-risk AI systems;
  • Certification indicating conformity to regulatory standards;
  • A requirement that the design of high-risk AI systems allows for effective human oversight; and
  • A blanket ban on certain AI systems, including subliminal manipulative systems; systems that exploit vulnerabilities related to age or to physical or mental disability in order to distort behaviour; public sector ‘social credit’ systems; and real-time remote biometric identification systems in public spaces.

Measures like these could significantly help to ensure greater transparency and accountability in automated public decision-making, reduce the risk of discrimination and of unlawful decisions that violate administrative law principles, and increase legal certainty.

The Law Commission, as an independent, expert-led, consultative body, is ideally placed to address the pressing need for law reform work in this area, and to help ensure that government algorithms deliver fair outcomes for all.

We hope to see its promising proposal develop into meaningful action.

Find out more about PLP’s work on public law and technology here.



