AI technology “transformative but carries risks”, says Slaughters report


Kingsley: boards need to be aware of AI risks

Company directors should urgently consider the risks of using artificial intelligence (AI) technology in order to understand and manage their liability, according to a report by magic circle law firm Slaughter and May.

AI could be “the most transformative technology” of this century, it said. However, the risks included AI being maliciously ‘re-purposed’ – for example, benign financial analysis systems being used illegitimately to disrupt financial market activity.

In a ‘white paper’ produced by its fintech team, the firm identified six areas of “particularly acute” potential risk from AI deployment: failure to perform, discrimination, vulnerability to misuse, social disruption, privacy, and malicious re-purposing.

Some of the risks were common to all software, but some were unique to AI, the authors argued, adding: “Civil engineers do not design a bridge and then later contemplate what safety features should be implemented; safe deployment is critical to the design process. The same reasoning applies to AI.”

Slaughter and May itself employs a transactional due diligence tool called Luminance, an AI platform that allows the firm to analyse vast numbers of documents rapidly.

The authors observed that the issue was becoming critical as AI-based machine learning technologies took off, made possible by a massive increase in computational power and the huge volumes of accumulated data.

Global investment in AI start-ups, which stood at just £2.4bn in 2015, was expected to reach £153bn by 2020.

The white paper expanded on the identified categories of risk. On failure to perform, it acknowledged that AI, like any system, whether mechanical or human, would fail from time to time. Failure could stem from bad design, bad data, or bad application.

Unlike conventional software, the complexity of AI presented problems: “The relative lack of transparency and of predictability makes assessing both the types of possible failure and the risk of that failure occurring, more difficult.

“For this reason, where risk tolerance is low, extensive testing is typically necessary.”

Discrimination by AI-based systems could reflect bias in the data used to train them, sometimes in subtle forms. An example was AI used by a police force to analyse people’s previous interactions with the law; such a system risked entrenching existing forms of discrimination.
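The mechanism is straightforward to illustrate. The sketch below is not drawn from the white paper; it is a hypothetical toy example in which a classifier is trained on historical policing decisions that were skewed towards one group, and duly learns the skew rather than actual behaviour:

```python
# Hypothetical toy example (not from the Slaughter and May paper): a model
# trained on historically biased decisions learns the bias, not the behaviour.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
heavily_policed = rng.integers(0, 2, size=n)  # assumed proxy feature
offending = rng.integers(0, 2, size=n)        # actual behaviour, independent

# Historical stop decisions were driven mostly by where people lived,
# so the training labels encode that bias rather than actual offending.
past_stops = (0.7 * heavily_policed + 0.3 * offending) >= 0.7

X = np.column_stack([heavily_policed, offending])
model = LogisticRegression().fit(X, past_stops)
print(model.coef_)  # the biased "area" feature dominates the learned weights
```

In this toy set-up the model’s strongest signal is the biased feature, not offending itself – precisely the kind of entrenchment the paper warns about.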

AI’s vulnerability to malicious misuse was also significant, the authors argued, because malicious data feeds could change a system’s model in ways not immediately apparent.

The effect could be reputationally damaging for an organisation, as it was for Microsoft, whose Twitter-based bot, Tay, began re-posting racist and misogynist tweets sent to it maliciously soon after it went live.
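The underlying technique is often called data poisoning: a system that continues to learn from incoming data can be steered by a stream of deliberately mislabelled inputs. A minimal sketch, using synthetic data and an online learner, and again not taken from the white paper:

```python
# Illustrative sketch with synthetic data (not from the Slaughter and May
# paper): an online learner drifts when fed deliberately mislabelled inputs.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier(random_state=0)

# Legitimate traffic: label 1 when the feature sum is positive.
X = rng.normal(size=(500, 2))
y = (X.sum(axis=1) > 0).astype(int)
model.partial_fit(X, y, classes=[0, 1])

probe = np.array([[2.0, 2.0]])
print("before poisoning:", model.predict(probe))  # expected: [1]

# Malicious feed: plausible-looking inputs with flipped labels.
for _ in range(50):
    X_bad = rng.normal(loc=1.0, size=(20, 2))
    model.partial_fit(X_bad, np.zeros(20, dtype=int))

print("after poisoning:", model.predict(probe))  # drifts towards the attacker
```

Nothing in the legitimate traffic has changed, which is why, as the authors note, the shift in the model may not be immediately apparent.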

Relatedly, malicious re-purposing of AI systems presented risks if they fell into the wrong hands. Slaughter and May’s experts warned that AI systems built to analyse financial market activity could be turned against those same markets in order to disrupt them.

Other AI-related risks involved privacy and data protection breaches. The authors also warned that AI and automation could have far-reaching effects on social cohesion.

For instance, efficiency-driven displacement of the human workforce by AI could carry business risks: “Businesses will need to consider how they begin to reallocate and re-focus their resources as their operational processes change.”

The authors concluded: “We believe that [AI] could be the most transformative technology of the 21st Century…

“However, while learning algorithms have enormous positive potential, they also carry significant legal, security, and performance risks that, if not managed well, could jeopardise reputations, or worse.”

Speaking to Legal Futures, the leader of Slaughter and May’s fintech team, partner Ben Kingsley, said AI software presented unprecedented challenges: “It’s different to other forms of software, or indeed hardware, in that it has the potential to develop, or at least refine, itself.

“I think that’s very much part of the risk assessment that we are advocating in the paper; that boards need to be thinking through before exposing their customers, or their clients, or the world at large, to mass-market [AI] usage.”

He added that to protect themselves, companies needed to be aware of the risks they faced: “It’s not that you are looking to escape liability, but that you are looking to understand your exposure and to manage your liability.”
