AI and law firm risk – the view of professional indemnity insurers


Posted by Marc Rowson, a partner at Legal Futures Associate Lockton

Rowson: Additional liability risks

Many law firms are exploring and implementing AI technology, including generative AI (GenAI), within their businesses to provide legal services, and some are also developing their own in-house AI tools.

This article sets out the approaches insurers may take to AI risk and how law firms can manage it effectively.

There are various use cases for AI within legal services, including:

  • Administration, such as answering queries with AI-enabled chatbots;
  • Drafting text using GenAI tools;
  • Profiling, such as checking legal documents for drafting errors and suggesting modifications to improve client communications;
  • Legal research; and
  • Identifying and predicting risk, for example automating routine tasks in a disclosure or anti-money laundering exercise.

As the capability of existing AI tools expands and new tools enter the market, the legal sector is expected to increase the volume and range of its use of AI.

To distinguish themselves amid a competitive market, some firms are already creating their own AI tools specifically for legal work, in partnership with internal or third-party developers.

Risks and liabilities of AI use in the legal sector

Although AI presents many positive opportunities for firms, its use in legal services also threatens to create additional liability risk, such as where outputs result in unfair or incorrect outcomes.

Potential risks that apply to all organisations using AI include failure to properly:

  • Train or implement the AI system;
  • Monitor and check the outputs;
  • Train staff to use and understand AI tools;
  • Carry out adequate risk assessments; and
  • Have appropriate internal policies and frameworks in place to govern use of AI tools, to monitor the AI’s outputs and to rectify issues with the AI models.

However, the use of AI to deliver legal services also throws up more specific risks for law firms, such as:

  • Errors and inaccuracies. When drafting legal arguments, AI “hallucinations” may create fictitious cases. This may be exacerbated by a lack of human oversight and review of the outputs;
  • Breach of confidentiality, for example, where:
    • Confidential client information is entered into an AI tool in order to answer a question on a client’s case;
    • Personal data is disclosed when information is transferred to a potential third-party vendor; or
    • Systems holding confidential information are subject to a data leak or security threat;
  • Failure to obtain informed consent before using AI to process client data;
  • Infringement of intellectual property (IP) rights and copyright, for example, when using AI to draft legal briefs, conduct research and so on; and
  • Breaching contractual obligations.

Exposure to these risks also depends on whether the firm is using its own AI tool or a third-party tool. Firms are likely to have a greater understanding of the function and implementation of tools they have developed themselves, making any issues easier to resolve and risk management and governance procedures easier to document.

In contrast, while third-party tools are potentially a quicker, more practical and more cost-effective solution, they may lack transparency, which could prevent or complicate efforts to identify risks ahead of time.

Integrating third-party AI tools into a firm’s operations also poses a counterparty risk (for example, if that tool is withdrawn or ceases to operate) and security and privacy risks.

Insurance approaches to AI risk

Inevitably, underwriters in the law firm insurance market are taking a keen interest in how AI is impacting firms’ ways of working. In considering law firm applications for cover, many insurers will expect to see evidence of how firms are adapting to the changes and preparing for the future.

This does not mean that firms are expected to be at the forefront of implementing AI tools, but nor should they be overly averse to their potential benefits.

Insurers recognise that a sensible approach for law firms to take in adopting AI tools is to proceed with change while being aware of the risks and managing or mitigating them as far as possible.

Professional indemnity insurance policies should respond where AI is used to perform legal duties and a claim against the insured later arises in relation to an alleged breach of those duties.

AI risk management

AI technology provides law firms with many positive opportunities to develop their businesses.

However, in making the most of the technology, firms should take proactive steps to identify and manage the risks that AI use also presents to avoid issues, breaches and potential claims.

Doing so assists the firm in addressing any concerns or information requests from insurers and may therefore help secure insurance on the best possible terms.

Examples of best practice measures to consider include:

  • Creating internal AI policies and risk frameworks that are detailed and adhered to. The firm should ensure that these are regularly updated as the AI technology and its use within the firm evolve. This should include identifying relevant accountable persons and making sure they are aware of their roles in governing the use of AI tools within the firm.
  • Conducting ongoing monitoring of the algorithms underpinning the AI systems that the firm uses. Where the AI tools have been developed and supplied by third parties, the firm should seek evidence that they are undertaking monitoring procedures.
  • Ensuring staff are properly trained in the use and potential risks of AI, as well as methods for checking its outputs. Similarly, the firm should ensure that leadership teams and the board have sufficient knowledge and understanding of the tools, as certain legislative and regulatory responsibilities ultimately lie with them.
  • Ensuring that all relevant personnel are aware of the additional risks to their department arising from the implementation of AI. For example, a legal department may need to be more aware of any IP threats where GenAI tools are used in the performance of their legal work. Data security teams should also be alert to the increased threat of data breaches.

Firms may wish to discuss these issues with their insurance brokers, who will be familiar with insurer concerns, attitudes and requirements.

Brokers may be able to support the firm in designing its AI risk management programme and assist with presenting the firm’s AI risk profile to insurers as positively as possible.

It is worth noting that the nature and extent of the risks arising from the use of AI are still emerging and will continue to develop over time. As insurers come to understand more about these risks, they are likely to ask firms additional questions and insurance products may evolve in response.
