
Woebots and robots

A guest post by Professor Roger Smith, who blogs [1] on the use of technology to advance access to justice

Meet Nadia. Now start talking

The chances are that you are not entirely sure what a bot or a chatbot is. So, the news that, “starting today, DoNotPay is opening up so that anyone can create legal bots for free (with no technical knowledge)” may be a bit opaque.

But bots have their devotees. Take Nadia [2], an Australian bot being developed to give information on disability benefits with the voice of Cate Blanchett. The editor of Chatbots Magazine [3] (OK, not a neutral source) is pretty clear about their future. He writes articles with titles like ‘How bots will completely kill websites and mobile apps’.

Joshua Browder, the creator of the DoNotPay parking ticket challenger, is behind what he hopes will be this major expansion of legal bots. Mr Browder knows a thing or two about publicity. [4] His DoNotPay app was followed by apps that took on asylum claims and housing disrepair. He obtained coverage to die for and has now announced [5] the launch of a “robot lawyer for 1,000 areas of the law”. Sign up before the deadline and he will facilitate the free construction of a chatbot for you.

Mr Browder revels in the description given to him by the BBC as “the Robin Hood of the internet”. He told Business Insider: [6] “I originally started DoNotPay two years ago to fight my own parking tickets and became an accidental witness to how lawyers are exploiting human misery…

“From discrimination in Silicon Valley to the tragedy in London with an apartment building setting on fire, it seems the only people benefitting from injustice are a handful of lawyers… I hope that DoNotPay, by helping with these issues and many more, will ultimately give everyone the same legal power as the richest in society.”

Mr Browder’s enthusiasm for the democratisation of law meets predictable resistance – largely from lawyers. They tend to point out that bots may help in simple cases but that, actually, proving liability in Silicon Valley discrimination cases and in London’s Grenfell Tower fire is likely to involve legal skills and evidential prowess significantly beyond a basic – or even expert – bot.

Yahoo Finance [7] is but one source on a row that erupted at Stanford University’s FutureLaw Conference this spring. There, Mr Browder came up against Clio group’s Joshua Lennon. “We are about to enter a reign of tech terror,” the article quoted the latter as saying, noting that funding into chatbots may be taking dollars that otherwise would go toward courtroom innovations.

“And while chatbots are incredibly scalable and accessible, they can potentially direct users to incorrect information and mislead users into thinking there’s a real-life attorney sitting on the other side of the chat.”

Chatbots have certainly got people talking. Uses are getting more and more imaginative. What, for example, do you make of ‘Woebot’? Yes. This really exists. It is an automated counselling service that reportedly [8] passed a blind test: “70 students, aged between 18 and 28, were randomly assigned to either the Woebot agent or an information control group who were offered the National Institute of Mental Health’s ebook on managing depression among college students.

“After two weeks the participants who were communicating with Woebot reported significantly lower symptoms of anxiety and depression relative to the information control group.”

The reporter of this success – Rich Haridy, a seasoned Australian journalist – expressed some personal scepticism: “After playing with the system for a little while, it seems frustratingly prescriptive. The entire tool is less ‘artificial intelligence’ and more an assortment of self-help aphorisms that pick up on certain key words being inputted. The chatbot then spits out prescribed bits of rationalisation or inspirational videos to improve your mood.”
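In code terms, the mechanism Haridy describes is roughly this: scan the user’s message for known trigger words and return a canned reply. The short Python sketch below is only an illustration of that pattern; the trigger words and replies are invented for this example and have nothing to do with Woebot’s actual content.

# A minimal sketch of the keyword-driven behaviour described above.
# All trigger words and canned replies are invented for illustration.

CANNED_REPLIES = {
    "anxious": "Feeling anxious is very common. Try naming three things you can see right now.",
    "sad": "I'm sorry you're feeling low. Would a short breathing exercise help?",
    "exam": "Exams are stressful. Breaking revision into small chunks can make it feel more manageable.",
}

DEFAULT_REPLY = "Tell me a bit more about how you're feeling."

def reply(message: str) -> str:
    """Return the first canned reply whose trigger word appears in the message."""
    text = message.lower()
    for keyword, response in CANNED_REPLIES.items():
        if keyword in text:
            return response
    return DEFAULT_REPLY

print(reply("I'm so anxious about my exam tomorrow"))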

Mr Haridy’s assessment of the Woebot seems prescient in relation to legal chatbots:

“As a low-level self-help tool, chatbots like Woebot are undoubtedly useful, especially when looking at how younger generations use social media and digital technology. But the technology is still nowhere near capable of recreating the experience of sitting in a room with a trained psychologist.

“For a student in college, anxious and overwhelmed, Woebot may offer the enthusiastic aphorisms they need to get through a rough few days, but when that mild depression turns into something more dangerous, an automated chatbot is probably not the solution. At least not yet…”

And that sums up the paradox. The interactive form of the chatbot means that the user is interrogating information rather than just having it thrust down their throat. It can be tailored for individual consumption. At current levels of development, this processing is pretty simple and liable, in a complex matter, to be simplistic.

But it is impossible not to be struck by the potential, particularly once artificial intelligence begins really to take off. You could certainly imagine information sites like citizensadvice.org.uk, Ontario’s Steps for Justice [9] or any other more specialist site being supplemented by a chatbot as a form of simplified search engine.
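To make that concrete, here is a hedged sketch, again in Python, of a chatbot acting as a simplified search engine over an advice site’s pages: the user asks a plain-English question and the bot points to the most relevant page. The page titles and the crude word-overlap scoring are invented for illustration; they do not reflect how citizensadvice.org.uk, Steps for Justice or any other real site works.

# Sketch of a chatbot as a simplified search engine over an advice site.
# The pages and the word-overlap scoring are purely illustrative.

PAGES = {
    "Challenging a parking ticket": "appeal parking ticket penalty charge notice deadline evidence",
    "Problems with housing repairs": "landlord repairs damp disrepair complaint council tenant",
    "Claiming disability benefits": "disability benefit assessment application appeal support",
}

def suggest_page(question: str) -> str:
    """Return the page whose keywords overlap most with the user's question."""
    words = set(question.lower().split())
    best_title, best_score = None, 0
    for title, keywords in PAGES.items():
        score = len(words & set(keywords.split()))
        if score > best_score:
            best_title, best_score = title, score
    if best_title is None:
        return "Sorry, I couldn't find anything relevant. Could you rephrase your question?"
    return "You might find this page helpful: " + best_title

print(suggest_page("My landlord will not fix the damp in my flat"))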

It is difficult not to see fantastic potential – towards which Mr Browder’s initiative will drive innovation even if the immediate results disappoint, as, frankly, they are likely to do.