Shining a light on the shadows


Guest post by Camilo Artiga-Purcell, general counsel at Kiteworks

Artiga-Purcell: Human-dependent approach can create a gaping vulnerability

Legal departments and businesses are embracing artificial intelligence (AI) at an unprecedented rate. And for good reason. AI is great for automating tasks, handling big data, facilitating decision-making and reducing human error.

However, this enthusiasm has given rise to a dangerous trend known as ‘Shadow AI’, in which employees use personal or unapproved AI tools for work.

In fact, according to one recent survey, 83% of in-house counsel are using AI tools not provided by their organisations and 47% operate without any governance policies at all. The report highlights a 56% rise in AI-related incidents, with data leaks a primary concern.

Our recent AI data security and compliance risk report reveals an even starker reality: only 17% of organisations have implemented automated controls with data loss prevention capabilities, while 83% rely on training sessions, warning emails, policies or nothing at all.

This human-dependent approach creates a gaping vulnerability that no amount of policy documentation can close.

Regulatory landscape

The UK’s regulatory landscape heightens these challenges. The UK GDPR imposes stringent obligations on data processing, storage and cross-border transfers, with fines of up to £17.5m or 4% of global annual turnover for violations. The proposed AI Bill signals increased scrutiny of AI governance.

For legal departments, a single employee uploading client data to an unapproved AI tool can expose privileged communications, trade secrets or merger strategies to servers in unknown jurisdictions, undermining the foundations of legal practice. This needs to change.

The legal and compliance risks of ungoverned AI use are profound. Data protection violations top the list. When lawyers upload client data to consumer AI tools like ChatGPT or Claude, they relinquish control over that information. The data may be processed via third-party APIs, stored on servers in multiple jurisdictions, or used to train AI models, all potentially breaching GDPR.

Confidentiality and privilege concerns are equally grave. Solicitor-client privilege can be waived when communications are shared with third-party AI providers.

The volume of exposed data defies belief. In our report, 27% of organisations said that more than 30% of the data they send to AI tools contains private information, with customer records, employee data and trade secrets flowing freely into AI systems.

Mitigating the risks

Consumer AI tools are simply not designed for the rigorous security needs of legal work, making it nearly impossible to retrieve or delete data once uploaded. To mitigate these risks, law firms and legal departments must establish a robust AI governance framework tailored to their needs.

Comprehensive AI usage policies should outline acceptable tools, data-handling protocols and consequences for non-compliance, addressing confidentiality, privilege and data security. Regular risk assessments are vital to identify vulnerabilities, while a formal approval process ensures only secure, compliant AI platforms are used.

Technical controls are critical. Data classification systems should identify sensitive information before it is processed by AI tools. Access controls, such as role-based permissions and monitoring, can prevent unauthorised use of consumer AI platforms. An approved list of enterprise-grade AI tools, designed with legal and compliance requirements in mind, ensures efficiency without sacrificing security.
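To make the idea of such a gate concrete, the short sketch below (in Python, purely illustrative and not drawn from any particular product) shows how a pre-submission check might work: a prompt destined for an AI tool is scanned for obvious markers of sensitive data and only released to tools on an approved list. The tool names, patterns and blocking behaviour are assumptions made for illustration; in practice this role is played by an enterprise data loss prevention platform rather than hand-written rules.

```python
import re

# Hypothetical allow-list of enterprise-grade AI tools approved by the firm.
APPROVED_TOOLS = {"firm-private-llm", "approved-contract-review-ai"}

# Illustrative patterns for obviously sensitive content; a real DLP system
# would use far richer classification (document labels, client matter IDs, etc.).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "privilege marker": re.compile(r"\b(legally privileged|privileged and confidential)\b", re.I),
}


def classify(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]


def submit_to_ai(tool: str, text: str) -> str:
    """Gate a prompt before it leaves the organisation's environment."""
    if tool not in APPROVED_TOOLS:
        raise PermissionError(f"'{tool}' is not an approved AI tool")
    findings = classify(text)
    if findings:
        # Block (or escalate for review) rather than silently send the data on.
        raise ValueError(f"Blocked: prompt appears to contain {', '.join(findings)}")
    return f"(prompt would be sent to {tool} here)"


if __name__ == "__main__":
    try:
        submit_to_ai("consumer-chatbot", "Summarise this merger agreement")
    except PermissionError as err:
        print(err)  # unapproved tool is rejected outright

    try:
        submit_to_ai("firm-private-llm", "Client john.smith@example.com asked about...")
    except ValueError as err:
        print(err)  # sensitive content is flagged before it leaves the organisation
```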

Training and awareness underpin effective governance. Mandatory training for all staff, from partners to associates, should cover the technical and legal risks of AI. Regular updates on emerging threats, such as new data breach tactics or regulatory changes, keep teams informed, while clear reporting mechanisms for AI-related incidents foster transparency and enable swift responses to potential breaches, minimising damage.

Shining a light

With the proposed AI Bill, the UK GDPR and the EU’s NIS2 Directive (for firms with EU operations) all signalling heightened scrutiny, the regulatory landscape is evolving rapidly.

Law firms and legal departments that fail to act risk becoming cautionary tales, facing fines, client loss and reputational damage. Conversely, those that implement robust governance demonstrate to clients their commitment to security and compliance.

The urgency of addressing AI data leaks is undeniable. Lawyers must act now to shine a light on the shadows by auditing AI usage, implementing robust controls and educating staff. By balancing innovation with risk management, they can protect sensitive data, uphold client trust and navigate a complex regulatory landscape.

The legal profession is built on trust and diligence. In the AI era, these principles demand proactive governance to ensure technology serves as a tool for progress.



