The next frontier of AI in the courts: Deepfake video evidence


Video: Woman’s mouth did not match the words being spoken

While fake cases generated by artificial intelligence (AI) have become a problem for courts around the world, a judge in the US has faced the next frontier: deepfake videos submitted as evidence.

Victoria Kolakowski, a judge in the Superior Court of California, issued a ‘terminating sanction’ – essentially striking out the action – after determining that two videos that had been submitted by self-represented claimants seeking summary judgment were fake.

The people featured in the videos (view one of them here) “lack expressiveness, are monotone, do not pause at moments where pauses are expected, use odd word choices, and appear generally robotic”, she wrote. “Further, the mouth flap does not match the words being spoken.”

The judge also noted the “looping video feed” as another reason for believing the videos were the products of generative AI.

At an earlier hearing, she said, “the court’s suspicion was reinforced when Maridol Mendones [one of the claimants] mentioned that some witnesses depicted in the suspect evidentiary submissions were deceased or could not be contacted by her”.

She also identified other evidence – such as a photograph purportedly taken by a Ring doorbell camera – that had been “materially altered”, albeit not very well. “A close inspection shows that the background is in black and white, while the man is in color,” the judge wrote.

She had suspicions that more evidence had been doctored but said she did not have “the time, funding, or technical expertise” to determine whether this was the case.

Judge Kolakowski decided against referring the claimants for criminal prosecution – saying it was “simultaneously too severe and not sufficiently remedial” – and instead found a terminating sanction to be appropriate.

“This sanction is proportional to the harm that plaintiffs’ misuse of the court’s processes has caused. A terminating sanction serves the appropriate remedial effect of denying plaintiffs – and other litigants seeking to make use of GenAI to submit video testimonials – of the ability to further prosecute this action after violating the court’s and the defendants’ trust so egregiously…

“Further, a terminating sanction serves the appropriate deterrent effect of showing the public that the court has zero tolerance with attempting to pass deepfakes as evidence. This sanction serves the appropriately chilling message to litigants appearing before this court: use GenAI in court with great caution.”

In a blog on the case, Judge Scott Schlegel – a member of the Fifth Circuit Court of Appeal in Louisiana – said he had been warning for a while that this day would come.

He acknowledged that a lot of AI was “still a bit off or easy to spot” but added: “It is not about what GenAI can do today; it is about the pace of change. If courts can barely keep up now, how will they fare when the fakes become indistinguishable from reality?

“The Mendones case is a warning shot. It shows the cost of letting AI forgeries seep into the system. The deepfakes in that case were crude enough that the judge could spot them, but the technology has already advanced to the point where many of us would struggle to tell the difference.”

Judge Schlegel, who sits on the American Bar Association’s taskforces on the law and AI, said the federal Advisory Committee on Evidence Rules was considering a rule to require AI-generated evidence to meet the same reliability standards as expert testimony – although he noted that this may not have stopped what happened here, given that the claimants were self-represented litigants.



