Litigant unwittingly put fake cases generated by AI before tribunal


ChatGPT: a system like this created the cases

Nine authorities put before the First-tier Tribunal (FTT) by a litigant in person challenging a penalty from HM Revenue & Customs (HMRC) were fakes generated by an artificial intelligence (AI) system like ChatGPT, a judge has ruled.

Tax tribunal judge Anne Redston accepted that Felicity Harber had not known they were the product of what is known as ‘hallucination’, where generative AI delivers plausible but incorrect results.

Mrs Harber had received a £3,265 penalty from HMRC for failing to notify her liability to capital gains tax after disposing of a property.

She appealed on the basis that she had a reasonable excuse, because of her mental health condition and/or because it was reasonable for her to be ignorant of the law.

In each of the nine summaries of FTT decisions she cited, the appellant succeeded in showing that a reasonable excuse existed: five on the basis of mental health and four on ignorance of the law.

Mrs Harber told the tribunal that the cases had been provided to her by “a friend in a solicitor’s office” whom she had asked to assist with her appeal.

She did not have any further details of the cases, such as the full texts or FTT reference numbers.

Neither HMRC nor the tribunal itself could find the cases, although two of them bore resemblances to other cases.

‘Baker v HMRC (2020)’ had similarities with a 2018 ruling, Richard Baker v HMRC, in which the taxpayer appealed on the basis that his depression constituted a reasonable excuse. “However, not only was the year different, but Mr Richard Baker lost his appeal,” Judge Redston noted.

The appellant in ‘David Perrin (2019)’ had the same surname as the appellant in the main authority on reasonable excuse, Christine Perrin, but her case was heard in a different year and she lost.

Asked whether she had used an AI system such as ChatGPT, Mrs Harber said it was “possible”, but the judge said she then “moved quickly on to say that she couldn’t see that it made any difference”, as there must have been other relevant FTT cases.

Mrs Harber also asked how the tribunal could be confident that the cases relied on by HMRC were genuine.

“The tribunal pointed out that HMRC had provided the full copy of each of those judgments and not simply a summary, and the judgments were also available on publicly accessible websites such as that of the FTT and the British and Irish Legal Information Institute (BAILII). Mrs Harber had been unaware of those websites.”

Judge Redston quoted from the Solicitors Regulation Authority’s Risk Outlook report on AI, published last month, which said: “All computers can make mistakes. AI language models such as ChatGPT, however, can be more prone to this.

“That is because they work by anticipating the text that should follow the input they are given, but do not have a concept of ‘reality’. The result is known as ‘hallucination’, where a system produces highly plausible but incorrect results.”
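What the SRA describes is, in essence, next-word prediction. Purely by way of illustration (this is not the system before the tribunal, and the word table and function below are invented for the example), a toy Python sampler shows how such a process can always produce a fluent-looking citation without ever checking whether it corresponds to a real judgment:

    # Toy illustration only: a next-word sampler chooses statistically
    # plausible continuations with no concept of whether the resulting
    # citation refers to a real case. All entries below are invented.
    import random

    # Hypothetical word table: each word maps to plausible next words.
    NEXT_WORDS = {
        "<start>": ["Baker", "Perrin"],
        "Baker": ["v"],
        "Perrin": ["v"],
        "v": ["HMRC"],
        "HMRC": ["(2019)", "(2020)"],  # years are sampled, never verified
    }

    def generate_citation(seed: int) -> str:
        """Chain plausible next words until the table runs out."""
        random.seed(seed)
        words, current = [], "<start>"
        while current in NEXT_WORDS:
            current = random.choice(NEXT_WORDS[current])
            words.append(current)
        return " ".join(words)

    # Every output reads like a genuine citation, e.g. "Baker v HMRC (2020)",
    # yet none has been checked against any law report.
    for i in range(3):
        print(generate_citation(i))

The point of the sketch is that the sampler’s only criterion is statistical plausibility; verifying the output against a real law report is a separate step the model never performs.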

Here, all but one of the nine cases related to penalties for late filing, not failures to notify a liability.

There were stylistic points as well, such as the American spelling of “favor” (ie “found in their favor”) in six of them, and frequent repetition of identical phrases.

The tribunal concluded: “We find as a fact that the cases in the response are not genuine FTT judgments but have been generated by an AI system such as ChatGPT.”

Judge Redston, sitting with Helen Myerscough, said: “We acknowledge that providing fictitious cases in reasonable excuse tax appeals is likely to have less impact on the outcome than in many other types of litigation, both because the law on reasonable excuse is well-settled, and because the task of a tribunal is to consider how that law applies to the particular facts of each appellant’s case.

“But that does not mean that citing invented judgments is harmless. It causes the tribunal and HMRC to waste time and public money, and this reduces the resources available to progress the cases of other court users who are waiting for their appeals to be determined.”

She quoted New York Judge P Kevin Castel – who in the summer fined two lawyers who unwittingly submitted fake cases generated by ChatGPT – in saying that the practice also “promotes cynicism” about judicial precedents.

Judge Redston continued by quoting Lord Bingham from the 2006 case of Kay v LB of Lambeth in saying that the use of precedent was “a cornerstone of our legal system” and “an indispensable foundation upon which to decide what is the law and its application to individual cases”.

While FTT judgments were not binding, they were nevertheless persuasive authorities, she observed.

The tribunal went on to reject Mrs Harber’s appeal, finding that the reasonable person in her position would not have been prevented by her mental health condition from contacting HMRC and that her ignorance of the requirement to notify her liability was not objectively reasonable.



