Nothing “inherently improper” about barristers using generative AI


ChatGPT: Anthropomorphism is misleading

There is nothing “inherently improper” about barristers using tools based on generative artificial intelligence (AI), so long as they are “properly understood” and “used responsibly”, the Bar Council has said.

However, barristers were warned in new guidance to be “extremely vigilant” about sharing any legally privileged or confidential information with generative AI systems.

The guidance was developed by the Bar Council’s IT panel and regulatory review panel, and noted that large language model (LLM) systems were not “concerned with concepts like ‘truth’ or accuracy”.

The first “key risk” was that they were “designed and marketed in such a way as to give the impression that the user is interacting with something that has human characteristics”, for example by putting the word ‘Chat’ in ‘ChatGPT’.

Despite this “anthropomorphism”, LLMs, at least at the current stage in their development, did not “have human characteristics in any relevant sense”.

Use of the term “hallucinations” to describe the phenomenon of outputs generated by LLMs that “sound plausible but are either factually incorrect or unrelated to the given context” was a further example of anthropomorphism.

The ability of ChatGPT “inadvertently to generate information disorder, including misinformation” was “a serious issue of which to be aware”.

This was illustrated by the affidavit filed by a New York lawyer to explain his conduct after he included six fictitious cases suggested by ChatGPT in his court submissions.

The lawyer thought that the LLM was “engaging in the human process of reading and understanding the question, searching for the correct answer and then communicating the correct answer to the lawyer”.

In fact, all it was doing was “producing outputs (which just happened to be in the form of words)” through the use of mathematical processes.

“It appears that there has been at least one example in England and Wales where a litigant in person has sought to use ChatGPT in the same way.

“Of course, it may be unnecessary to add that there are also examples of LLMs being used to manufacture entirely fictitious allegations of misconduct against individuals.”

Another “key risk” was the way in which generative AI was trained using data “trawled from the internet”, which inevitably contained biases or stereotypes.

“Although the developers of ChatGPT have attempted to put safeguards in place to address these issues, it is not yet clear how effective these safeguards are.”

The use of user prompts to develop systems was also “plainly problematic”, not only if inputs were incorrect, but also if they were confidential or subject to legal professional privilege.

Barristers should be “extremely vigilant not to share with a generative LLM system any legally privileged or confidential information (including trade secrets), or any personal data”, which could feature in outputs.

The ability of LLMs to generate “convincing but false” content also raised “ethical concerns” and barristers should not “take such systems’ outputs on trust and certainly not at face value”.

Barristers would need to “critically assess” whether content generated by AI might violate intellectual property rights, especially third-party copyright, and must be careful not to use words in response to system prompts that breached trademarks or gave rise to passing-off claims.

Generative AI could “complement and augment human processes to improve efficiency but should not be a substitute for the exercise of professional judgment, quality legal analysis and the expertise which clients, courts and society expect from barristers”.

Irresponsible use could have “harsh and embarrassing consequences”, including claims for professional negligence, breach of contract, breach of confidence, defamation and data protection infringements.

“There is nothing inherently improper about using reliable AI tools for augmenting legal services; but they must be properly understood by the individual practitioner and used responsibly, ensuring accuracy and compliance with applicable laws, rules and professional codes of conduct.”

Sam Townend KC, chair of the Bar Council, added: “The growth of AI tools in the legal sector is inevitable and, as the guidance explains, the best-placed barristers will be those who make the effort to understand these systems so that they can be used with control and integrity.

“Any use of AI must be done carefully to safeguard client confidentiality and maintain trust and confidence, privacy, and compliance with applicable laws.”




