- Legal Futures - https://www.legalfutures.co.uk -

Solicitor faces probe after putting client documents into ChatGPT


The Upper Tribunal has warned lawyers against putting client documents into ChatGPT and other open-source AI tools after a solicitor admitted doing so.

To do so “is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege”, said Judge Fiona Lindsley.

Any regulated lawyer or firm that did this would have to report themselves to their regulator and “be advised to consult” with the Information Commissioner’s Office.

“Closed source AI tools which do not place information in the public domain, such as Microsoft Copilot, are available for tasks such as summarising without these risks,” she said.

Judge Lindsley, who sits in the Immigration and Asylum Chamber, was giving the three-judge panel’s ruling after two Hamid hearings concerning hallucinated case references put before it. The decision from November has only just been published.

“The Upper Tribunal cannot afford to have its limited resources absorbed by representatives who place false information before the tribunal,” she said.

The citation of cases which did not exist sent judges “on a fool’s errand… at the expense of other judicial business and is not in the interests of justice”. It also risked a loss of public confidence.

“Despite all of this, the Upper Tribunal has seen a considerable increase in the latter half of 2025 in the citation of fictitious authorities in both statutory appeals and applications for judicial review.”

The two matters before the tribunal arose before a change to the Upper Tribunal’s forms, which now require a legal representative to confirm by a statement of truth that any authority cited “(a) exists; (b) may be located using the citation provided; and (c) supports the proposition of law for which it is cited”.

The first case concerned Tahir Mehmood Mohammed, a solicitor at Manchester firm TMF Immigration Lawyers who is also regulated by the Immigration Advice Authority (IAA).

Mr Mohammed was unable to say how the fake case name, which was attached to the citation of an unrelated employment law case, appeared in his application for permission to appeal.

His “best guess” was that he had inadvertently used the AI mode of a Google search. He accepted that he should have checked it.

Whilst he was clear that ChatGPT was not used to create grounds of appeal, Mr Mohammed said he had put client emails he had drafted explaining Home Office decisions into ChatGPT to try to improve them, and also uploaded Home Office decision letters to summarise them for clients.

“He informed us that he now realises that this is a data breach and will inform the clients that he has done this, as well as the IAA and the SRA [Solicitors Regulation Authority].”

The tribunal said it would have referred Mr Mohammed to both the IAA and the SRA had he not already self-reported.

The second case concerned several false citations in an application for judicial review lodged by Birmingham immigration law firm City Laws.

Zubair Rasheed, the firm’s senior solicitor and compliance officer for legal practice, said the grounds were drafted by Waheed Malik, a “part-time trainee lawyer” who was working at the firm under supervision and used “an outdated precedent on our system, practitioner blogs and personal notes”.

Mr Rasheed said his supervision was compromised due to his mother’s illness.

It turned out that Mr Malik was a “very junior caseworker”, not a trainee solicitor, and actually Mr Rasheed’s brother – the tribunal was not clear why he had not attended the hearing.

The tribunal also could not understand “why it was thought by Mr Rasheed to be appropriate for Waheed Malik to draft grounds for judicial review”, and why he did not check them, especially as the medical evidence about his mother did not indicate she was ill at the time.

The solicitor had also “demonstrated a worrying lack of understanding of the extent to which AI is available in the modern world”.

The judge explained: “He stated that there was no mechanism by which staff at his firm could use AI. That is to overlook the fact that anyone with access to Google has access to AI.”

Judge Lindsley said the matter was “principally about supervision and the obligation to ensure that the tribunal is not misled”, rather than the misuse of AI.

“It matters not how such citation errors come about. Whether they are inserted by a hapless trainee or by ChatGPT is really neither here nor there; the point is that the qualified legal professional with conduct of the matter is expected to ensure that such documents are checked, that errors are identified, and that only accurate documents are sent to the tribunal.”

The tribunal said there was “every likelihood that Mr Malik will have worked on other unidentified files for the firm and may have relied on false citations which were generated by AI, as Mr Rasheed accepted he had done in this case”.

Despite Mr Rasheed urging it not to, the tribunal referred him to the SRA.

It concluded by stressing that lawyers who delegate their work to another fee-earner retain responsibility for ensuring both the accuracy of that work and that fee-earners are aware of the dangers of using non-specialist AI for legal research and drafting.