AI and due diligence: the SRA’s next regulatory blind spot


Guest post by Eleonora Dimitrova, who holds an LLB and an LLM in corporate and commercial law from Queen Mary University of London. She works in legal research and writing, with an interest in regulatory developments and legal technology.

Dimitrova: SRA guidance needed

Artificial intelligence (AI) tools have moved from novelty to necessity in transactional practice. In mergers and acquisitions, platforms like Kira and Luminance now review thousands of documents in minutes, flagging clauses and anomalies that once consumed teams of junior lawyers.

What once felt experimental has become standard operating procedure.

But as AI-assisted due diligence becomes embedded, a sharper question emerges: what happens when the system misses something material?

A hidden restrictive covenant, an unflagged change-of-control clause, or an overlooked indemnity can alter the entire commercial outcome. When it does, liability rests not with the software vendor but with the solicitor who relied on it.

That point sounds simple in theory. In practice, it exposes a gap between technological adoption and regulatory clarity – a gap the SRA has yet to close.

Delegation versus responsibility

The solicitor-client retainer still carries the same implied obligation: to exercise reasonable care and skill. That duty does not diminish because part of the work is automated.

Delegating document review to AI is no different, in legal principle, from delegating it to a trainee – the solicitor remains responsible for the result.

Where the due-diligence report proves defective because an algorithm failed to detect a clause, the key question will be whether reliance on the AI output was reasonable and adequately supervised. Pointing to ‘industry standard’ use of the tool will not be enough. Courts will expect evidence of human verification and quality control.

Some firms now attempt to qualify their exposure contractually, through engagement-letter wording on the use of AI tools.

But under the Unfair Contract Terms Act 1977, any exclusion or restriction of liability for negligence must still satisfy the requirement of reasonableness. Unless a client has expressly accepted the risk – which in most cases they have not – such disclaimers are unlikely to hold.

The expanding risk perimeter

AI does not eliminate error; it changes its form. A missed clause is no longer the result of human oversight but of algorithmic limitation – yet the legal consequence is identical.

Recent case law on professional negligence, particularly the 2021 Supreme Court ruling in Manchester Building Society v Grant Thornton, re-emphasises that professionals are liable for losses within the scope of their assumed duty.

Where a solicitor’s report states that no material encumbrances exist, any undiscovered covenant or liability will almost certainly fall within that scope, regardless of whether AI assisted the review.

Tortious duties also extend beyond the immediate client. A solicitor may owe a duty of care to foreseeable third parties who rely on their reports, such as lenders, investors or counterparties. The risk perimeter therefore widens as digital workflows multiply.

The supervision deficit

Paragraph 3.5 of the SRA Code of Conduct for Solicitors requires solicitors to supervise work done for clients effectively and makes them accountable for work carried out through others. Yet no regulatory guidance defines what that means when AI is involved. Must every AI-generated output be checked line by line? Is sampling enough? Should clients be told when AI forms part of the due-diligence process?

For now, firms answer those questions in isolation. Some insist on dual human review; others rely entirely on vendor testing. The inconsistency creates uneven standards and leaves both practitioners and clients uncertain about where liability begins and ends.

The Law Society’s 2021 practice note on technology stresses that solicitors remain accountable “regardless of the tools used” but offers no operational direction. The SRA’s ethics material echoes the sentiment without adding substance. As with competence more broadly, the principle is clear but practical guidance has yet to catch up.

What the SRA should clarify

The regulator does not need to reinvent the rulebook, but it must update its application. A short, practical note could:

  • Set minimum expectations for human verification of AI-generated due-diligence output;
  • Outline supervisory protocols for junior lawyers using AI platforms; and
  • Encourage transparent client communication about the extent and limits of automation.

Such guidance would not impede innovation. It would provide the same certainty the profession already expects in areas like outsourcing, confidentiality and conflicts.

Commercial pressure and professional risk

Transaction timetables now assume that AI will accelerate the due-diligence phase. Clients reward speed. But compressed review cycles can erode supervision, particularly where teams rely too heavily on machine summaries.

Professional indemnity insurers are already alert to this tension. Many now ask firms to disclose what AI systems they use and what verification procedures are in place.

Without regulatory parameters, the market is left to self-regulate. The result is predictable: divergent standards, uneven quality control and inconsistent liability exposure.

Beyond competence: accountability in practice

In an earlier blog, I examined the solicitor’s duty of competence in the age of AI. The issue now is accountability. Competence defines what a solicitor must know; accountability defines how they must act when technology is part of the process.

AI does not shift responsibility away from the professional. It merely obscures where, within the workflow, error may arise. Regulators should address this now, before the first negligence case involving AI-assisted due diligence sets the precedent by default.

Conclusion

AI has transformed how solicitors perform due diligence but not who bears responsibility for the outcome. When the machine misses, the solicitor still answers.

The SRA should move from encouragement to expectation, making explicit that supervision, verification and disclosure obligations apply equally to AI-assisted work. Clear guidance would protect clients, clarify liability and strengthen confidence in innovation.

The profession has embraced the tools. Regulation must now catch up – not to restrict them, but to ensure that in the pursuit of efficiency, the essential safeguard of accountability is not lost.



