CJC calls for declaration about AI use in drafting witness statements


Birss: Working group chair

Litigators should have to declare that they did not use artificial intelligence (AI) in preparing witness statements for trial, the Civil Justice Council (CJC) has proposed.

But otherwise, so long as documents such as statements of case and skeletons bear the name of the legal representative who is taking professional responsibility for them, they should not have to declare whether they were created with the aid of AI.

An interim report and consultation paper published this week by the CJC’s working group on the use of AI in preparing court documents – chaired by Lord Justice Birss, deputy head of civil justice – acknowledged the “enormous potential” for AI to be used for social good.

It said: “In the context of the justice system it has already transformed the way in which the legal profession goes about its work, including in relation to research, data analysis and the preparation of court documents. However, these benefits do not come without significant risk.”

The objective was to maintain a balance between using the latest technology “to maximum advantage in the civil justice system… while at the same time maintaining confidence in the rule of law”.

The report said that, aside from the well-known issue of hallucination, AI tools based on large language models “inevitably reflect the errors and biases in the data used to train the model”.

While some jurisdictions required a declaration as to whether AI had been used in the preparation of statements of case, skeleton arguments and other advocacy documents, the working group said it was “not convinced that is needed provided it is clear who is taking appropriate professional responsibility for the document in question”.

A declaration of this nature – though put forward as an alternative option for consultees to consider – would “necessarily raise the question of what use of AI ought to be disclosed”.

The position was trickier with witness statements. The report noted suggestions that, even though PD57AC – concerning witness statements for use at trials in the Business and Property Courts – contemplated a lawyer preparing a witness statement by transcribing the witness’s words, “it might be said that using AI (even simply to assist in that task) would contravene PD57AC”.

For witness statements not intended for use at trial, the working group proposed the same approach as for statements of case, but it was “difficult to see that the aims and objectives of PD57AC in the legally represented context can properly be met if AI is used, other than for non-text generating purposes [such as transcription], in the process of drafting witness statements covered by that rule”.

It went on: “With this in mind, we consider that a rule requiring a declaration that AI has not been used for the purposes of generating the content of such a statement (including by way of altering, embellishing, strengthening, diluting or rephrasing the witness’s evidence) would be consistent with the aims of the practice direction and reinforce the importance of witness statements being in the witness’s own words.”

The report recommended the same approach for trial witness statements under part 32 of the Civil Procedure Rules.

For witness statements in a foreign language, a translation must already be provided, with the translator signing the original statement and certifying the translation as accurate. The CJC suggested that this sufficed.

But when it came to experts, the report proposed that the statement of truth should include confirmation that the expert’s report identified and explained any AI which had been used, other than for administrative uses such as transcription.

There was no need to require any such statements in relation to disclosure, where the use of AI was long established and “anecdotal evidence suggests that, at present, parties are cooperating” in their use of it.

The use of AI by litigants in person (LiPs) was beyond the working group’s remit, but it said this needed to be looked at.

“Any regulation of the use of AI by LiPs presents a particularly difficult challenge, owing to its potential to assist with access to justice and thus the undesirability as a matter of policy of discouraging its use as well as the lack of regulatory framework to govern the conduct of LiPs.

“It is inevitable that LiPs will use AI to produce court documents and that many may do so without appreciating the risks involved or intending to mislead the court.

“It may be that requiring a declaration on such documents as to the use of AI would at least alert the court to the possibility that the material being presented may be inaccurate or fictitious (albeit the requirement for a declaration might of course be ignored).”

The working group added: “We also consider that there is potential for a wider consideration of the use, and potential regulation, of AI tools by legal representatives in other areas of the justice system. However, as things stand our remit is firmly related to civil justice.”



