Posted by Yazad Bajina, director at Legal Futures Associate Checkboard

Bajina: Inaccuracies can easily slip through the net without care
The legal profession has been a keen adopter of artificial intelligence (AI).
We’ve now learned that three-quarters of the top 20 law firms are using some kind of third-party AI tool, establishing teams focused on digital transformation, or even building or customising their own AI tools. The same is true of around 60% of the next tranche of firms.
Meanwhile, use of generative AI continues apace. A new report has revealed that around 60% of lawyers are using it in their day-to-day work; this time, smaller firms are more than twice as likely to use it as larger ones.
But, for most firms, AI use remains fundamentally experimental. Although larger firms are introducing training programmes and AI strategies, most of the rest use AI on a more ad hoc basis.
Although it offers many benefits, particularly in terms of ideation and productivity, AI also introduces new challenges. AI is still prone to error – or ‘hallucination’, as it has become known – so using it for legal work without any kind of strategy risks exposing firms to misinformation.
Getting smart about AI
The profession needs to be smart about the way it integrates AI. While the big firms can put money, time and resources into it, smaller firms have to be all the more careful and intentional.
While the headline numbers are all about innovation and digital transformation, other stories show where thoughtless use of AI can lead to trouble. Just a few weeks ago, a barrister was referred to the Bar Standards Board after misleading a tribunal with information “learned” from ChatGPT, apparently without realising that a large language model (LLM) can produce misleading output.
At Checkboard, we value LLMs as a useful tool for ideating and brainstorming.
But our area – anti-money laundering compliance – requires precise outcomes, accurate information and correctly interpreted reports. LLMs like ChatGPT can be useful for sifting through information and summarising documents, but they should augment, not replace, human intuition and understanding.
Relying on ChatGPT to give you accurate compliance information is risky business. If you’re not paying attention, inaccuracies can easily slip through the net.
Not all bad news
In August, a solicitor told Legal Futures that lawyers have a “new duty” to understand AI, especially when dealing with confidential material. They must also, he said, learn how biases might shape its output.
Something we often hear from our clients is that there are almost always some teething problems among solicitors and conveyancers as they adapt to using our technology platform.
It’s natural. That’s why we always start a new partnership with training and support to get things off the ground. AI seems to have been let loose on the profession without that kind of oversight.
It’s heartening, then, to see that some of those larger law firms are beginning to introduce training programmes and oversight for their use of AI, helping staff use it responsibly and intentionally.
Once they get the hang of that, the legal profession will reap the benefits of greater speed, productivity and better ideation – without risking compliance breaches or spreading false information.
So, we think the profession is right to be excited about AI. It just needs to be intentional about how it’s used.