
‘Deepfake’ warning over online courts

Facial recognition: Issue for online courts

Video manipulation software, including ‘deepfake’ technology, poses problems for remote courts both in verifying evidence and in confirming that litigants or witnesses are who they say they are, a report has warned.

Not only could successful deepfakes find their way into evidence, “potentially condemning the innocent or exonerating the guilty”, it said, but the mere existence of deepfakes allowed litigants and their lawyers “to cast doubt on video or audio that is legitimate”.

The report on ‘virtual justice’ [1] by New York-based privacy group Surveillance Technology Oversight Project (STOP) noted that parties to online court proceedings may be asked to verify their identity by providing sensitive personal information, biometric data, or facial scans – in the state of Oregon, judges sign into their virtual court systems using facial recognition.
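Such sign-in checks generally reduce to comparing a stored facial embedding with one computed from the live camera feed. Below is a minimal sketch of that comparison in Python, using the open-source face_recognition library; the file names and the 0.6 threshold are illustrative assumptions, not details taken from the report.

    import face_recognition

    # Reference image captured at enrolment, when the account was created.
    # (Assumes the reference photo contains exactly one clear face.)
    enrolled = face_recognition.load_image_file("enrolled_user.jpg")
    enrolled_encoding = face_recognition.face_encodings(enrolled)[0]

    # Frame captured from the webcam at sign-in time.
    candidate = face_recognition.load_image_file("signin_frame.jpg")
    candidate_encodings = face_recognition.face_encodings(candidate)

    if not candidate_encodings:
        print("No face found in the sign-in frame")
    else:
        # Euclidean distance between 128-dimensional face embeddings;
        # 0.6 is the library's conventional match threshold.
        distance = face_recognition.face_distance(
            [enrolled_encoding], candidate_encodings[0])[0]
        print(f"distance={distance:.3f}:",
              "match" if distance < 0.6 else "no match")

The weakness STOP identifies is that a convincing real-time deepfake attacks exactly this step: the frame being verified may not come from a real camera at all.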

It said: “Distrust around digital records has persisted with the advent and ease of photoshopping. Altered evidence can still be introduced if the authenticating party is itself fooled or is lying.

“In the coming years, courts must also be mindful of emerging AI technology around deepfakes, which allows a user to manipulate images and audio in real time. While this technology is nascent today, it is rapidly advancing and may soon pose a potent threat to trust in online communication.”

STOP said programs such as Avatarify superimpose another person’s face onto a user in real time and are already being used on conferencing platforms.
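The mechanics are straightforward to sketch: such a tool sits between the physical webcam and the conferencing app, rewriting each frame before the platform ever sees it. A minimal illustration in Python, assuming OpenCV and the pyvirtualcam package are installed; swap_face is a hypothetical placeholder for the trained face-swap model, not Avatarify’s actual code.

    import cv2
    import pyvirtualcam

    def swap_face(frame):
        # Hypothetical placeholder: a real tool would superimpose another
        # person's face here before the frame reaches the video call.
        return frame

    cap = cv2.VideoCapture(0)  # the physical webcam
    with pyvirtualcam.Camera(width=640, height=480, fps=30) as cam:
        while True:  # stop with Ctrl+C
            ok, frame = cap.read()
            if not ok:
                break
            frame = swap_face(cv2.resize(frame, (640, 480)))
            # The virtual camera expects RGB frames; OpenCV captures BGR.
            cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            cam.sleep_until_next_frame()
    cap.release()

Because the conferencing platform sees only an ordinary camera device, nothing on the receiving end distinguishes the altered feed from a genuine one.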

“While faceswap technologies like Avatarify use an algorithm trained on another’s image, usually requiring several photos of the person’s face that you’re trying to animate, technology like First Order Motion approaches deepfakes inversely, manipulating a user’s photo by way of video of another person without any prior training on the target image.

“AI software companies like SenseTime can create deepfakes from audio sources by using a third party’s audio clip and video of the user to generate footage of the user saying the words from the recording.

“This can not only allow a person to fabricate their identity but can allow a litigant or witness to use their own voice to make the claim that they said something different than what the opposing party claims.”
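The inversion STOP describes is easy to see in outline: instead of training on photos of the target, a motion-transfer model extracts pose from each frame of a driving video and re-applies that motion to a single still photo. The Python sketch below uses hypothetical placeholder functions in place of the trained networks; it shows the data flow of the technique, not any vendor’s implementation.

    import numpy as np

    def detect_keypoints(frame):
        # Placeholder for a trained keypoint detector: a real model returns
        # sparse facial landmarks (eyes, jaw, etc.) for the given frame.
        h, w = frame.shape[:2]
        return np.array([[h / 2.0, w / 2.0]])

    def warp_to_motion(source_photo, source_kp, driving_kp):
        # Placeholder for the generator network: a real model warps the
        # photo so its keypoints move the way the driving keypoints moved.
        return source_photo

    def animate(source_photo, driving_frames):
        # The target's keypoints are computed once, from a single photo...
        source_kp = detect_keypoints(source_photo)
        frames_out = []
        for frame in driving_frames:
            # ...then only the motion of the driving video is extracted
            # and re-applied, frame by frame. No prior training on the
            # target's image is needed, which is the inversion described.
            driving_kp = detect_keypoints(frame)
            frames_out.append(
                warp_to_motion(source_photo, source_kp, driving_kp))
        return frames_out

    photo = np.zeros((256, 256, 3), dtype=np.uint8)          # one still image
    video = [np.zeros((256, 256, 3), dtype=np.uint8)] * 10   # driving frames
    print(len(animate(photo, video)), "animated frames")

Audio-driven systems of the kind attributed to SenseTime follow the same pattern, with features extracted from a speech recording serving as the driving signal instead of video keypoints.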

The report said courts could learn from China, where the Beijing Internet Court requires litigants to set up an online account using their national identity cards and a facial recognition system before bringing a case remotely.
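In outline, such an enrolment gate binds the photo on an identity document to a live capture of the applicant before any account is created. A hedged sketch follows, reusing the same embedding comparison as above; the liveness check is a hypothetical placeholder, and nothing here reflects the Beijing Internet Court’s actual system.

    import face_recognition

    def check_liveness(image):
        # Hypothetical placeholder: production systems test for blinking,
        # depth, or challenge-response so a printed photo or replayed
        # video cannot stand in for a live applicant.
        return True

    def enroll(id_card_photo_path, live_capture_path):
        id_faces = face_recognition.face_encodings(
            face_recognition.load_image_file(id_card_photo_path))
        live_image = face_recognition.load_image_file(live_capture_path)
        live_faces = face_recognition.face_encodings(live_image)
        if not id_faces or not live_faces or not check_liveness(live_image):
            return False  # no usable face, or liveness check failed
        # The account is created only if the document photo and the live
        # capture match (0.6 is the library's conventional threshold).
        return face_recognition.face_distance(
            [id_faces[0]], live_faces[0])[0] < 0.6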

More broadly, the STOP report warned that online courts “may transform the digital divide into a justice divide, as the lack of computer access and broadband internet robs low-income litigants of their day in court”.

It also highlighted privacy and due process concerns with online court software and the growing role of private vendors, including the lack of clear rules on how confidential data is collected, stored, and accessed, as well as the inability of lawyers and clients to communicate confidentially.

The report also noted that courts had no way to monitor for unauthorised recordings of proceedings.