
AI: Legal questions would remain even if granted legal personality
The “perhaps radical” option of granting legal personality to artificial intelligence (AI) systems is likely to be on the agenda in time, the Law Commission has said.
The commission said although it was “not clear presently that any AI systems are sufficiently advanced to warrant being granted legal personality”, if the technology progressed quickly enough, the option of granting some AI systems legal personality was “likely increasingly to be considered”.
A new discussion paper, AI and the Law, noted that it was possible for “different categories of legal persons to have different bundles of rights and obligations”, for example corporations.
“Something similar would likely be required should AI systems be granted separate legal personality with underlying owners or people with control.
“If they were entirely separate legal persons they would still require means for identification, just as there are forms of identification for natural persons, such as names, birth dates and government identification numbers (for example, national insurance numbers).
“Further, some mechanism would need to be put in place such that an AI system could be subject to sanction were it to commit a criminal offence.”
The paper did not contain proposals for law reform and instead aimed to raise awareness and prompt discussion.
It said legal issues arose both from some features of AI – such as autonomy, which raises questions including causation and liability – and from its development, such as training and the use of data.
While the “perhaps radical” option of granting AI systems some form of legal personality “may seem futuristic”, it had “already been considered in academic discourse, and as AI systems advance, it may become an increasingly salient option”.
The commission said reasons in favour of granting AI legal personality included filling gaps regarding liability and responsibility, potentially encouraging AI innovation and research (by granting AI developers separation in terms of liability), and incentivising the systems themselves to avoid liability.
Reasons against included the use of AI systems as ‘liability shields’, protecting developers from reasonable accountability, and the complexity of “granting AI the ability to hold funds and assets such that they can be held meaningfully accountable, for example by way of claims being brought against them”.
The commission said legal personality did not “seem appropriate for all AI systems” and theorists had “posited various features” as a threshold for granting AI legal personality, including their degree of autonomy, awareness and “intentionality”.
Whatever criteria were used, “the difficult question is where to draw that line, and how that point should be defined”.
The next question was the type of legal personality that should be granted to AI systems.
“Should AI systems be granted a form of legal personality where they are required to be owned by natural or legal persons, similar to corporations having shareholders? If so, should there be a form of limited liability for the owners of those systems?”
In English law, in return for limited liability, companies had to be registered with the state.
The Law Commission warned that “even if AI systems were afforded legal personality, legal questions would remain”.
For example, if an AI system had a duty to take reasonable care, by what standard would this be assessed?
“In the context of professional negligence, for example, would it be compared to the behaviour of a reasonable professional in the same circumstances? Is that the correct comparison for an AI system that may have superior skills to a human in some respects, but inferior skills in others?”
The Law Commission said it had completed work relating to or involving AI, including on automated vehicles and AI deepfakes in its project on intimate image abuse. There was an ongoing AI-related project on aviation autonomy, as well as a pending project on product liability, which would consider AI.
“We anticipate that AI will increasingly impact the substance of our law reform work, including, potentially, as the focus of future projects.”
Sir Peter Fraser, chair of the Law Commission, said AI was developing rapidly and being used in an increasingly wide variety of applications, and this was likely to continue.
“However, with AI’s potential benefits comes potential harm. It is important that the laws of England and Wales evolve so that they are up to the task of the many changes being wrought by AI.”