Guest post by Andrew Fremlin-Key [1], a partner, and Millie Dickson, a trainee solicitor, in the media and reputation management team of London law firm Withers

The only certainty with generative AI in the legal world is that it is here to stay. Those willing to embrace its potential are rapidly gaining the edge on their more hesitant peers.
Lawyers should be excited about the possibilities and opportunities this creates. Yet, as GenAI's influence grows, so do the risks, which are already playing out in courtrooms across England and Wales, where some early adopters are setting precedents they would rather not.
The legal profession (in common with many other industries) is at a critical juncture, and as lawyers grapple with the fast pace of change, they need to be alive to the potential pitfalls and limitations of AI to avoid organisational and individual damage to reputation.
Judicial red flags
Over the past 12 months, cases involving the misuse of AI have become increasingly commonplace. Last autumn's case of Ndaryiyumvire v Birmingham City University [2] in Birmingham County Court, for example, involved a solicitor who had cited two false authorities.
The opposing side requested copies of the cases, at which point the authorities were withdrawn. It was later explained that the cases had been generated using software containing a built-in research function which automatically creates case law.
The judge sympathetically ruled that this case “is not at the more serious end” of putting false citations before the court, largely because the authorities were never relied on at a hearing and were promptly withdrawn. The solicitor narrowly avoided a referral to the Solicitors Regulation Authority.
Contrast this with MS v Secretary of State for the Home Department [3], which involved the deployment of a fabricated authority that was key to permission to appeal being granted. The Upper Tribunal referred the barrister to the Bar Standards Board.
It said: “We find a referral is appropriate so that proper standards are set to stop false material coming before the tribunal, which, in this case, we find contributed to the grant of permission, and thus potentially to the wrongful prolonging of the litigation, and has led to considerable public expense in addressing it through this hearing.
“We also find that it is relevant to the making of this referral that there was no immediate, full and truthful explanation given.”
These cases follow the decision last June in Ayinde [4], where Dame Victoria Sharp warned that there would be “serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused”.
More to come?
The cases above all involved judges and/or the other side diligently reviewing every case referenced in the submissions.
In circumstances where judges are overstretched and solicitors trust the expertise of their colleagues and other firms, it may well be that other instances of ‘hallucinated case law’ have been relied on and gone under the radar. It is inevitable that these cases will not be the last to make headlines.
Law firms face pressure, from both clients and employees, to rely more heavily on GenAI models to streamline work and get ahead of the game in an industry often seen as resistant to change.
There are many ways in which AI can enhance and support the work that lawyers do, but it is clear that AI models cannot be relied on without checks and balances in place, which at present means lawyers checking AI output against reputable, reliable sources.
This is reflected in recent judicial guidance on the use of AI [5], which suggests that, while the legal profession becomes familiar with these tools, judges should ask lawyers to confirm that they have independently verified the accuracy of any research or case citations generated with the assistance of AI.

What can law firms do to protect themselves against misuse of AI?
Law firms cannot ignore AI, nor would it be wise to ban its use altogether given the possible benefits. However, there are a number of proactive steps they can and should take to ensure that their lawyers use AI appropriately.
Establish clear AI usage policies
There must be a clear firm policy on what constitutes acceptable use of AI tools. This policy should:
- define prohibited activities, for example putting confidential client information into open or public AI tools (as opposed to closed systems);
- explain when approval is, and is not, needed to use AI tools;
- set out which AI tools are acceptable and approved for use within the firm;
- set out how to report suspected policy violations, and the consequences of breaching the policy; and
- be reviewed and updated regularly in light of the constantly shifting regulatory environment.
Implement regular mandatory training and awareness
All lawyers and other staff must understand the basics of AI, including its capabilities and limitations. Training should cover how to verify AI-generated outputs and the risks of bias, inaccuracies and hallucinations in AI responses.
It should also consider any ethical and regulatory issues and ensure employees understand any applicable internal policies.
Monitor and audit AI usage
Regular audits can help identify misuse of, or over-reliance on, AI, as well as problem areas. Monitoring should involve tracking which tools are being used and for what purposes, and whether employees have a good understanding of AI, the firm's policies and the relevant regulatory and legal obligations.
Compliance checks are also likely to help the firm demonstrate to regulators that it is meeting its own obligations.
Keep up to date with guidance
As noted above, this is a rapidly shifting landscape, and an individual or team at the firm should be responsible for keeping the firm (and its staff) up to date with its obligations.
GenAI’s double-edged sword
Notwithstanding the clear benefits, the increased uptake of GenAI models exposes law firms and their solicitors to a real risk of reputational damage.
Although the courts have so far been (perhaps surprisingly) sympathetic to many of the law firms and lawyers involved in these cases, the wider publicity such a finding attracts can be costly in itself.
Lawyers who fall foul of AI use in the future are unlikely to be so fortunate: they risk having their client's case struck out, as well as regulatory referrals, wasted costs orders and significant reputational harm. As Dame Victoria Sharp set out in Ayinde, the courts' powers also extend to initiating contempt proceedings and, in the most serious cases, referral to the police.
It is said that there will be two types of lawyers in the future – those who use AI and those who don't, who will be left behind.
But there is likely to be a third type – those who use AI without sufficient oversight or a good enough understanding of its limitations. As the AI race continues to gather speed, lawyers (and their clients) will want to ensure that they do not fall at the first hurdle.