
High Court judge: ethical and legal framework for AI “imperative”

Clement-Jones: Independent AI regulator not the best way forward

An ethical and legal framework is “imperative” for artificial intelligence (AI), and the financial world has shown what happens without one, a High Court judge has warned.

Mr Justice Robin Knowles said that how the need for an ethical and legal framework for AI was met would determine whether the technology realised its true potential.

“AI is going to go deeper into people’s lives than many things have before, and it’s imperative that we take the opportunity for law and ethics to travel with it.

“We have as a society developed law and ethics over thousands of years. That means they are ready for us and available for us in this journey.”

Giving evidence at the latest meeting of the Law Society’s technology and law policy commission, the judge said: “Haven’t we seen in the world of finance what happens without codes and the ethics they drive?

“Aren’t we at a stage where we are beginning to realise that, in business, there is a positive incentive to promote ethics and codes and standards, because without it business ultimately does not succeed?”

Lord Clement-Jones, Liberal Democrat peer and co-chair of the All-Party Parliamentary Group on AI, agreed on the need for an ethical framework, but said there was no point in having a set of ethics unless there was a way of checking that an algorithm was conforming to them.

“Audit is, in my opinion, absolutely crucial, and this is something that is only in its infancy.”

Lord Clement-Jones said that, when his committee suggested an ethical framework, the response from the tech industry was “incredibly positive”.

He said the committee did not believe an independent AI regulator was the best way forward, but that, where AI was “embedded in their sector”, regulators such as the Competition and Markets Authority and the Financial Conduct Authority should ensure ethical standards were respected.

Earlier in the session, Adrian Weller, programme director for AI at the Alan Turing Institute, asked whether algorithms should be held to “higher standards” than humans.

He said the impact of humans getting things wrong was “unlikely to be catastrophic”, whereas AI failures could be far more damaging.

Mr Weller said it could be “very dangerous morally to distance ourselves from some decisions”, giving the example of executions by drone.

Matthew Lavy, a barrister specialising in technology-related disputes, said AI could be used to avoid the problem of “sometimes arbitrary and, dare I say it, capricious decisions by harassed and underfunded courts”.

Mr Lavy said he had found during his early experience representing people involved in low-velocity road traffic accidents that the quality of justice was “generally dreadful”.

He said the cases often involved no injuries and were essentially “prangs” which cost people their insurance excess, but there were “queues of cases” to be dealt with across the country.

“You’ve got harried district judges who have no time to read anything in advance, you generally have no hard evidence – although I do remember a couple of occasions where there were photographs of damages which were completely inconsistent – and what you have is witness statements written by numbers.”

Mr Lavy said there were often “screeds of information” about a particular type of seat-belt, but when it came to what happened in the accident itself, only “one incoherent sentence”.

“It’s not a surprise that’s the kind of evidence you get because, almost by definition, one party didn’t have a clue what was going on. You have two people in the room being cross-examined by a hapless junior barrister like me and a bored judge trying to work out which way to toss the coin.”

Mr Lavy said that, where cases were lost, litigants felt it was unfair and did not understand the reason.

In the future, he said, AI could enable litigants to be given a detailed analysis of all the accidents on a particular stretch of road, all the accident records of the car they were driving and other relevant information, and to be told that “on the balance of probabilities, your story is less credible”.

Mr Lavy said they might prefer to be told “this is why you lost”, rather than “because the judge did not like what you said”.