Guest post by Eleonora Dimitrova, who holds an LLB and LLM in corporate and commercial law from Queen Mary University of London. She is engaged in legal research and writing, with an interest in regulatory developments and legal technology.

Artificial intelligence (AI) is no longer a novelty in legal services. From contract analysis to legal research and automated correspondence, AI tools are already embedded in many firms’ day-to-day workflows. Yet the regulatory framework has failed to keep pace.
Specifically, the Solicitors Regulation Authority (SRA) has offered no substantive guidance on how the duty of competence applies to the use of AI tools in legal practice.
This regulatory silence is unsustainable. Under the SRA code of conduct, the obligation to provide competent advice applies irrespective of the technology used. Whether advice is written by hand or assisted by generative AI, the professional duty remains the same.
But what that actually requires in practice – in the context of evolving and opaque AI systems – is increasingly unclear. The result is a risk-laden regulatory grey area. If AI is here to stay, the SRA cannot remain silent.
The duty of competence – and its ambiguity
Principle 7 of the SRA Standards and Regulations requires solicitors to “act in the best interests of each client”. The accompanying code of conduct expands this into a set of obligations: to provide a proper standard of service, to maintain up-to-date knowledge and skills, and to supervise legal work appropriately.
Yet nowhere do these rules engage directly with AI-generated content, AI-assisted legal analysis or automated client communications. As the tools become more sophisticated – and less transparent – the regulatory framework becomes more brittle.
Does a solicitor need to verify every output from an AI legal assistant? What if the error is subtle but material? At what point does over-reliance on AI undermine supervision and accountability? These are not hypothetical questions – they are practical issues arising daily, and the SRA offers no answers.
Practical risk scenarios
Three current scenarios illustrate the stakes:
1. AI-assisted legal research: A solicitor under time pressure uses an AI legal assistant to draft case references. The output appears accurate and well-cited. However, several cases included have been overturned. The error goes unnoticed, the advice is issued and the client suffers financial loss.
2. AI-generated client letters: A junior lawyer uses an AI drafting tool to create client engagement letters. The tool inserts a boilerplate disclaimer inconsistent with the firm’s own terms. The letter is sent without senior review. A dispute later arises over scope and responsibility.
3. Due diligence by AI: A commercial property team uses AI to review hundreds of leasehold documents. One document contains a restrictive covenant embedded in a scanned image, which the tool misses. The issue is discovered only after completion, exposing the firm to a potential negligence claim.
In all three cases, the solicitor, not the AI vendor, bears professional responsibility. The duty to supervise, to exercise independent judgement, and to ensure client care remains undiluted. Yet the SRA has not addressed where those duties begin and end when AI tools are in use.
Why the SRA should act now
Regulators abroad have started to tackle this. The Law Society of Ontario has acknowledged that emerging technologies require revised competence frameworks. The American Bar Association has issued ethics opinions stating that lawyers must understand, supervise and verify the outputs of AI tools they use.
The SRA, by contrast, has said little. Its 2021 guidance on innovation encourages firms to assess risks of legal tech, but provides no concrete standards around AI usage and supervision. It leaves firms and individual solicitors to interpret their duties without regulatory scaffolding.
This is neither sustainable nor fair. The risks are systemic: uneven standards between firms, inconsistent client experiences and a lack of clarity on when liability arises. Providing non-binding guidance would not restrict innovation – it would give firms and practitioners a benchmark for responsible adoption.
Such guidance could:
- Clarify the minimum standard of review when AI is used in drafting or legal analysis;
- Offer supervisory expectations for junior staff using AI tools; and
- Outline where liability lies between practitioner and software vendor.
This need not be complex regulation. A short practice note or illustrative guidance could go a long way towards supporting consistency, accountability and confidence.
Evolving standards and professional responsibility
The regulatory gap is not just a matter of policy – it affects how solicitors discharge their daily obligations. Professional indemnity insurers are already scrutinising how firms use legal tech, especially in relation to risk assessment and claims exposure.
Without regulatory guidance, firms are left to create ad hoc protocols, leading to divergence in practice and uneven risk management standards.
Moreover, clients, particularly institutional ones, increasingly expect firms to deploy technology not only efficiently but also responsibly. This creates a dual pressure: to innovate while maintaining demonstrable competence. Regulatory silence leaves practitioners exposed on both fronts.
The Law Society could supplement the SRA’s efforts by issuing good practice guidance or model protocols, but it is for the regulator to set baseline expectations. Without a defined standard, solicitors risk being judged retrospectively – after a claim or complaint arises – based on standards that were never clearly articulated in the first place.
Conclusion
The question is not whether solicitors should be allowed to use AI – they already do. The question is whether the regulator will offer guidance that reflects this reality. The profession cannot wait for the first major negligence case involving AI-assisted advice to set the standard retroactively.
Solicitors remain accountable for what they produce and advise, regardless of the method used. The tools may change, but the professional duty does not. The SRA’s current silence risks confusion, inconsistency and reputational damage to the profession.
It is time for the regulator to step forward. A brief but targeted set of expectations – reinforcing that competence, supervision and care apply equally in the age of AI – would provide clarity where it is urgently needed.
Competence must evolve with tools – and regulation must not fall behind practice.