
By Matthew Stringer, founder and CEO of Stridon, Legal Futures Associate
There’s a lot of buzz around AI in the legal sector right now, and rightly so. From boosting employee experience and productivity to cutting administrative burden and improving client service, generative AI presents an exciting opportunity. But before we start putting sensitive data into AI tools, or asking AI questions that are confidential in nature, we’ve got to talk about something a little less flashy but absolutely critical: security and compliance.
Here are some of the core insights for legal leaders who want to move forward with AI, without putting their firm or clients at risk. At the heart of this transformation sits Microsoft 365 Copilot. While our focus here is on this platform specifically, the underlying principles apply across most AI platforms.
Start with purpose and not hype
Let’s get one thing straight: AI isn’t just about efficiency. It’s about helping people focus on what really matters. For lawyers, that might be spending more time with clients and less time wading through admin tasks. For operations, it’s about surfacing insights quickly and streamlining business processes. For everyone, it’s about unlocking time to do meaningful work and maybe even logging off a bit earlier.
But to unlock that kind of value, law firms need a foundation of trust.
The legal sector’s unique challenges
When it comes to data sensitivity, confidentiality and compliance, law firms sit in a different league. So it’s no surprise that many of the firms we work with share similar worries. Here are some of those concerns and how we address them.
Q: What data is Microsoft 365 Copilot accessing?
A: All data stays within the Microsoft 365 boundary; it doesn’t leak to some mysterious third-party service.
Q: Will prompts or outputs be used to train AI models?
A: Your data stays your data. Microsoft 365 Copilot doesn’t use your prompts, documents or outputs to train any models.
Q: Can we control who sees what?
A: Copilot respects your existing data security and sensitivity labels, so if a document is tagged “confidential”, the output is too. Microsoft 365 Copilot can only surface what an individual already has access to; no new permissions are granted.
Q: Are we risking client trust, and will AI use jeopardise insurance or increase regulatory risk?
A: Microsoft 365 Copilot doesn’t operate in the wild; it lives inside your secure, GDPR-compliant Microsoft environment. That means it follows the same access rules, encryption and data residency policies you already have in place.
These are all valid questions. The good news is that Microsoft has done a lot of the groundwork. If you’re already using Microsoft 365, you are part of an ecosystem designed with security, privacy and compliance at its core.
The role of zero trust
One of the biggest concerns with generative AI is that it can access a lot of information, fast. That’s great for productivity, but only if it’s properly governed.
Whether you seek to use Microsoft 365 Copilot or any other Generative AI tool, applying Zero Trust security principles, which treat every person, request and device as untrusted until proven otherwise, can help you to build a strong foundation of security. Zero Trust principles cover:
- Always authenticating individuals and verifying their devices
- Giving people access only to what they need
- Assuming breaches can happen, and being ready to minimise the fallout
This is essential for law firms, where one misstep could result in a client data breach, regulatory penalty or reputational hit.
Cleaning up the cupboards: keeping on top of data hygiene
I came across a brilliant old English dialect word recently, “scurryfunge”, used to describe that frantic tidy-up you do when unexpected guests arrive, quickly hiding mess behind cupboard doors to make everything look presentable. I think it’s a good analogy for how many firms have treated data over the years.
Many legal practices still battle digital clutter such as outdated file shares, neglected Teams folders, loose access controls and archived content no one’s looked at in years.
Microsoft 365 Copilot only accesses information a person already has permission to view, but if your firm’s access permissions are too broad or have not been reviewed in some time, you risk dragging poor-quality or outdated information into your AI outputs.
That’s why getting your data hygiene right, including classifying, tagging and cleaning up what no longer needs to be there, is essential. It’s not just about compliance; it’s about ensuring AI delivers accurate, relevant and compliant results.
At Stridon we talk a lot about how laying the right foundations can ensure long-term success when implementing Generative AI in your law firm. Here’s where many of our legal clients begin and some practical first steps:
- Run a data risk assessment: Identify where sensitive data lives, who has access and what needs fixing.
- Enable restricted search policies: If, for example, you don’t know what’s in a SharePoint library, your AI shouldn’t either.
- Use sensitivity labels: Apply them consistently to documents and emails and Microsoft 365 Copilot will inherit them.
- Set up a working group to build your own AI principles: Bring together IT, compliance and client-facing teams to decide where AI can and should be used.
- Leverage the tools you already have: Many firms have Microsoft subscriptions in place that provide access to features such as Microsoft Purview, SharePoint Advanced Management and Data Security Posture Management for AI (DSPM), which offer powerful insights, often without the need for additional spend.
This doesn’t need to be a massive transformation effort right out of the gate. Start small, build confidence and scale from there.
Managing risk and building confidence
Getting the technology right is only part of the AI journey. Real AI success in law firms comes down to people.
It’s about helping everyone, from trainee lawyers to senior partners, legal assistants to business directors, feel confident, informed and empowered. That means:
- Clear internal guidance
- Hands-on, tailored support and training
- Being open about what’s changing and why
- Encouraging experimentation and collaboration within a safe and controlled environment
Final thoughts for creating a secure foundation and empowered teams
Adopting generative AI isn’t about rushing in. It’s about moving forward with purpose.
Get your data in order. Clarify your internal policies and principles. Empower your teams. Use the tools you already pay for and lean into the opportunity, because it’s a big one.
You’ll unlock time, yes. But more importantly, you’ll help your people do more of what they’re great at. And that means you can build a firm that’s more efficient, more client-focused and ready for the future.
Take your first small steps today
We’re helping several law firms begin their AI journey, and we recognise the challenges it can pose. The following is designed to help you take those all-important first steps:
- Download our step-by-step guide. No technical jargon, just practical insights you can put into practice now. Download the guide here
- Watch our series of webinars on demand here
If you want a more in-depth chat about how you can implement GenAI in your firm then don’t hesitate to reach out to me at insights@stridon.co.uk.