There may need to be some coverage disputes before professional indemnity (PI) insurers work out how to deal with bad advice given by artificial intelligence (AI) systems used by lawyers, a leading City firm has warned.
It said the widespread use of technology that utilised AI “contributes to additional complexity and uncertainty for insureds and insurers when assessing risk and apportioning liability”.
A short paper written by Tristan Hall, a partner, and Amit Tyagi, a senior associate, at CMS Cameron McKenna Nabarro Olswang explained how, prior to the widespread use of technology and increasing use of AI, loss scenarios for professionals were “relatively uncomplicated”.
“For example, according to a study produced by QBE, the most common causes of claims against solicitors historically included a failure to correctly identify their client resulting in instructions being taken from the ‘wrong’ client, or inadequate supervision of junior lawyers…
“The potential risk exposure changes if the work is actually done by AI, either where the client contracts with the AI provider directly or, in particular, via a professional services firm so it is that firm that still ostensibly provides the ‘service’ to its client.”
In the event of incorrect advice or missing an electronic filing deadline, for example, the question was where the liability would lie.
The paper said: “The client would no doubt look to the professional with whom they have a retainer. The professional will seek to argue it should be the software developer, and their PI insurers, that should meet the claim.
“The position becomes more complicated where the professionals have amended the software or if the software requires the interaction of the professional or a third party to produce its final work product.
“Here any number of potential insurers may be implicated and, at least, may be required to meet defence costs for their insured as claims are pursued.”
The lawyers said insurers were dealing with such claims on an ad hoc basis at the moment.
“In due course, we anticipate insurers may expand the scope of coverage available under traditional PI type policies so that ultimately AI failures come to be considered to be a ‘standard’ risk.
“We can envisage a scenario where firms using AI declare that to their insurer and the risk of the technology failing is priced into their ‘usual’ policies.
“Another possibility is that new insurance products are created for the developers of AI technology which provide cover for claims by third parties against the technology’s ultimate users in the event of a failure and these types of risk are carved out of PI policies.”
The paper suggested that the best way to prevent gaps in coverage was an open dialogue between insureds, their brokers and insurers on the types and levels of AI being used.
“In reality, the position may remain unclear for some time,” it concluded. “There may need to be a number of claims, and related coverage disputes, before parties begin to appreciate where the potential exposures for PI insurers truly lie and the market can respond.
“In any event, developments in policy coverage will need to try and keep pace with those in technology in order to try and keep ahead of the curve on which type of policy will respond and to try and mitigate against unexpected and uninsured losses.”