- Legal Futures - https://www.legalfutures.co.uk -

Probe to consider how best to regulate use of AI in legal services

AI: Should regulators be setting standards for AI?

Greater use of innovative technology such as artificial intelligence (AI) by unregulated providers of legal services is raising questions about whether there is a widening consumer protection gap, the Legal Services Board (LSB) has warned.

The oversight regulator is to continue its project on technology and innovation, addressing concerns such as autonomous automated decision-making and the transparency of such systems.

The first phase focused on establishing an evidence base, with the LSB commissioning papers and podcasts from a variety of experts, and seeking views from a range of stakeholders.

In a progress update sent to the Committee on Standards in Public Life – which last year issued a report on AI and public standards – the LSB said its research indicated that unregulated providers tended to be more innovative and bigger users of technology.

It went on: “This gives rise to questions on whether there is a widening consumer protection gap between users of regulated and unregulated legal services. It also raises questions on whether the current scope of regulation is limiting technological innovation in the sector.

“We plan to continue our work on technology and innovation, including research on the social acceptability of emerging technologies such as AI.

“This will help ensure that regulatory approaches to technology are broadly acceptable to both legal services consumers and providers, and compatible with wider public interest.”

The LSB said the evidence pointed to “a number of challenges and questions regarding AI in the legal services sector”.

These included the potential for AI “to increase the power imbalances that lawyers mediate and the need for legal professionals to understand AI decision-making tools in order to do the best for their clients and ensure fundamental values and principles are protected”.

The use of AI also raised regulatory issues around human accountability and respect, it went on, along with questions about the extent of regulation required.

“For example, should there be greater regulation when AI is used to provide services directly to consumers? Is it sensible to expect the legal services providers who use AI-based applications to understand how they work and are trained in their use and implications? If the answer is no, then should regulators be setting standards for AI and its use?”

The LSB identified specific concerns that it would explore in the next phase of the project, including: autonomous automated decision-making that takes lawyers out of the loop; possible discrimination, such as underlying biases in the data used to train AI systems; transparency of decision-making; insecure data and record-keeping; and “individual fairness being less important than general utility”.

It added: “We plan to carry out research on the social acceptability of developments in technology. As part of our work on the scope of regulation, we will also consider how a risk-based approach to regulation could better enable innovation and the use of technology.”