The legal profession will have to develop “cyber ethics” to cope with the very different issues that the growing use of technology is having on the practice of law, a leading academic has suggested.
Professor Richard Moorhead said “a whole system approach” was necessary.
He explained: “We will need a cybernetic legal ethics to cope with interlocking technologies, logics and values that should be demanded of a good legal system.”
In a blog that expanded on a talk he gave to a recent roundtable on law and technology organised by CILEx Regulation, Professor Moorhead – professor of law and professional ethics at University College London – talked about the “innovation ethics dilemma”.
He recounted testing the well-known DoNotPay chatbot with a fictional situation that led to a parking ticket at a hospital.
The result was a draft letter to the parking authority that contained a lie: “The urgency of the situation prevented me from having enough time to ensure that my parking was in complete compliance.”
Professor Moorhead explained: “The application had made it up on the assumption that the medical emergency prevented me from parking properly. It didn’t; I was in a rush and I’d taken a chance, but it wasn’t an emergency and I could have taken more time and parked properly.
“To my mind, it lied on my behalf.
“The psychology of that is interesting, because I didn’t make anything up, it did, and I might be inclined – I’m certainly more inclined – to send such a letter than to draft one myself.
“And someone less familiar with the legal system might think that’s just the kind of thing one needs to write to challenge a parking ticket.”
This led to the dilemma. According to its creator, US-based British student Josh Browder, around half of parking ticket challenges made through DoNotPay succeed.
Professor Moorhead said the chatbot was very easy to use, had none of the problems associated with using lawyers or websites to cobble together information, and allowed some people to ease past traditional access to justice barriers.
“But it also encourages the making of meritless claims. If my experience is widespread, it encourages the making of misleading or dishonest claims.
“And it provides a very rough, low-competence version of a lawyer which may jeopardise the interests of clients with good cases if, and it is a big if, they would otherwise go elsewhere.”
The question was whether those ethical risks were worth the access to justice gains. This led to a second aspect of the innovation ethics dilemma. Who is making those trade-offs, and with what information?
Professor Moorhead said: “If it is the innovators themselves, should we be concerned about their values and how they make the trade-offs? For instance, a series of psychological studies suggest that an innovatory mindset is also associated with a greater risk of misconduct.
“Creative types and those focused on getting the job done rather than doing the job according to the rules are more prone to lying and cheating than their alternates.
“That’s before we factor in any effects caused by not knowing legal professional obligations or the incentives and mindsets that come with particular business models.”
But given the state of access to justice around the developed world, and the misuse of law in some countries, there was an argument that such technologies were nonetheless worthwhile.
“I do not entirely swallow that argument,” the academic wrote. “An escalation of mechanical or designed-in malfeasance does not constitute progress. But we do need to be careful not to throw out the baby of innovation with the bathwater of its less seemly side.”
Professor Moorhead pointed out that these technologies and innovations tended to fall outside legal regulatory boundaries, but where they were used by regulated providers, “a lot of regulatory work will need to be done by existing professional obligations to be competent and to supervise work product and systems of work”.
Using the example of e-disclosure and machine learning, he asked: “How many lawyers understand sufficiently how e-discovery [sic] systems work, or how to balance the risks of being over- and under-inclusive in what the system produces, and how reliant are they on the providers with their own commercial incentives on selling their systems?”
The more profound point, he continued, was that “the digitisation and systematisation of law probably requires a completely different mindset to that which dominates lawyers, law firms, and perhaps regulators”.
He explained: “How will we set standards for inadequate professional services, or negligence, for instance in relation to legal services delivered through systems? What kinds of evidence and testing will we require, if any? How will regulators scrutinise such systems? What should our tolerances be for misconduct or poor practice?”
The “final innovation ethics dilemma” was whether, once risks became known, they were simply tolerated, or whether there were levels at which they should not be.
As to the impact on legal education, Professor Moorhead said: “I suspect the real juice will be in getting law students to better understand how an analytical world is transformed by digital thinking; how rules and the social systems that surround them influence behaviour; and how the complexities of this may be systemic and need lawyers, legal engineers, and others able to take system-wide views.”