Guest post by Eleonora Dimitrova, who holds an LLB and LLM in corporate and commercial law from Queen Mary University of London.

Dimitrova: Profession is not addressing the technical reality of AI
Discussion about whether solicitors can safely rely on AI has focused heavily on competence and due diligence.
In my earlier blog on AI-assisted due diligence, I examined the profession’s uncertainty around competence and procedural oversight. Yet the area most likely to expose firms to regulatory jeopardy in 2026 receives far less structured guidance: client confidentiality in the age of commercial AI systems.
The Solicitors Regulation Authority’s (SRA) code of conduct is unambiguous. The duty to protect confidential information is fundamental, strict and continuous. It is not diluted by efficiency pressures or the allure of new technology.
But solicitors are now routinely using tools that process data through opaque, proprietary models operated by third-party providers. The tension between these two realities is becoming harder to ignore.
The confidentiality problem the profession is not confronting
Much of the profession’s approach reflects marketing language, not technical reality. Many firms assume that if an AI provider states it “does not store prompts” or “does not use client data for training”, the confidentiality risk has been managed.
That assumption is misplaced. Even with contractual assurances, several unresolved issues remain:
- Third-party access: AI vendors frequently rely on subcontractors, hosting partners, or internal review teams. Each link in that chain is a potential disclosure route.
- Data-retention ambiguity: Providers may log inputs for troubleshooting or optimisation, even where training is excluded.
- Jurisdictional exposure: Cross-border processing can trigger disclosure to foreign authorities under their domestic laws, beyond the control of any firm.
- Model inferences: Even if raw data is deleted, derivative information may persist in ways that are difficult to audit or verify.
A firm cannot discharge its confidentiality obligations by relying on vendor assurances alone. The code requires solicitors to understand how their tools operate and to ensure they are used “appropriately and securely”. At present, too many practices are unable to explain even the basic data flows of the systems they have adopted.
Privilege risk: a weak point in firm governance
Privilege is not merely a confidentiality issue; it is a legal right capable of being waived, sometimes inadvertently.
Feeding privileged documents into an AI system that transfers data to external servers risks constituting a disclosure to a third party. Whether a court would treat that as waiver remains untested, and the risk profile is poorly understood.
Firms need internal rules reflecting the seriousness of this point. A blanket instruction that “staff must not enter client documents into public AI tools” is insufficient. Practitioners require clarity about what constitutes a “public” tool, how firm-licensed systems handle data, and which categories of information are strictly prohibited from processing through any external platform.
The regulatory gap
The SRA’s public statements on AI emphasise outcomes and principles. That is consistent with its general approach, but it is not a substitute for operational standards where confidential data is concerned.
Many firms want to adopt AI but lack a clear regulatory benchmark for what constitutes acceptable use when confidential information is implicated.
The consequence is predictable. Some firms are becoming overly cautious and avoiding AI entirely, losing efficiency gains. Others are adopting tools rapidly without understanding the associated risks, relying on informal assurances rather than structured governance.
Neither approach aligns with regulatory expectations around informed risk management.
Targeted guidance is now required: how to assess providers, what minimum contractual protections are expected, when local hosting is necessary, and how firms should audit compliance. Without this, inconsistent practices will become entrenched.
Practical safeguards firms should implement now
While regulation develops more slowly than technology, firms cannot afford inaction. Several safeguards are already essential:
- Structured data-flow assessment for every AI tool, mapping where information travels, who can access it, and how long it is retained;
- Contractual clarity, including explicit prohibitions on training, subcontractor access, cross-border transfers, and data retention beyond strict operational need;
- Internal categorisation rules identifying information that may and may not be processed using AI, with privilege-sensitive material firmly outside scope;
- A central register of approved tools, with clear governance on versioning, permissions, and risk classification; and
- Training that addresses technical operation, not merely high-level warnings, ensuring practitioners understand how specific models and firm-sanctioned tools actually function.
These are not optional extras. They represent the minimum operational discipline required to meet the profession’s existing confidentiality obligations.
Looking ahead
The adoption of AI in legal practice is irreversible. But the profession cannot treat confidentiality as an afterthought simply because technology has outpaced regulation.
The regulatory risks are clear, and they will crystallise rapidly if firms continue to treat confidential data as something that can be safely delegated to third-party systems without rigorous scrutiny.
As with competence and due diligence, the absence of structured SRA guidance leaves practitioners navigating material regulatory risk without clear operational standards.
This develops themes I explored in an earlier blog on the solicitor’s duty of competence, highlighting that regulatory guidance has yet to catch up with technological change.
The next phase of the AI discussion must shift from general enthusiasm or anxiety to the precise operational realities of confidentiality and privilege.
If the SRA does not address this soon, it risks allowing a critical gap to develop in professional practice – one that will be harder to close once compromised data or inadvertent disclosures have already occurred.
For solicitors, the message is straightforward: AI will not relax the duty of confidentiality. If anything, it raises the standard of care expected. Firms that fail to recognise this are not merely gambling with technology. They are gambling with regulatory risk.
Eleonora Dimitrova is engaged in legal research and writing, with an interest in regulatory developments and legal technology.