Algorithms and the law

A guest post by Jeremy Barnett, Adriano Soares Koshiyama and Philip Treleaven

Our aim is to start a discussion in the legal profession about the legal impact of algorithms on firms, software developers, insurers and lawyers. In a longer paper, which can be found here [1], we consider whether algorithms should have legal personality, an issue likely to provoke intense debate between those who believe in regulation and those who believe that ‘code is law’.

In law, companies have the rights and obligations of a person. Algorithms are rapidly emerging as artificial persons: legal entities that are not human beings but are, for certain purposes, legally treated as persons. Intelligent algorithms will increasingly require formal training, testing, verification, certification, regulation, insurance, and status in law.

The science fiction writer Isaac Asimov proposed the ‘Three Laws of Robotics’: robots (1) may not injure a human or, through inaction, allow a human to come to harm; (2) must obey orders given by humans except where such orders would conflict with the first law; and (3) must protect their own existence as long as such protection does not conflict with the first or second laws.

In 2007, the South Korean government proposed a Robot Ethics Charter. In 2011, the UK research council EPSRC published five ethical principles for designers, builders and users of robots. In 2017, the Association for Computing Machinery published seven principles for algorithmic transparency and accountability.

Algorithmic trading systems already account for 70-80% of US equity trades. Apple, Google and Amazon provide ‘intelligent’ virtual assistants, chatbots and ‘smart’ devices that interact through speech. Numerous financial firms provide ‘robo’ advisers. Car manufacturers are working on autonomous vehicles.

In response, governments and regulators are modifying national laws to encourage innovation, with lawyers and insurers scrambling to absorb the implications.

Key technologies

Key technologies to consider are artificial intelligence (knowledge-based systems, machine learning, natural language processing and sentiment analysis), blockchain (distributed ledger technologies and smart contracts), the Internet of Things, and behavioural and predictive analytics.

Rogue trading

Algorithmic trading has significantly impacted financial markets. Notably, the 2010 Flash Crash wiped $600bn in market value off US stocks in 20 minutes. Market-making firm Knight Capital deployed an untested trading algorithm which went rogue and lost $440m in 30 minutes, destroying the company.

The law

After the 2008 financial crisis, Warren Buffett warned: “Wall Street’s beautifully designed risk algorithms contributed to the mass murder of $22 trillion.”

Trading algorithms are increasingly regulated. Concerns around market manipulation and the trading of competitor algorithms have led to some industry practices being banned under legislation such as MiFID II.

Although robo-advisers are registered with the US Securities and Exchange Commission, they are not fiduciaries, nor do they fit under the traditional standard applied to human registered investment advisers.

The European Securities and Markets Authority (ESMA) has proposed regulatory and technical standards based on existing guidance. High-frequency algorithmic traders are now obliged to store records and trading algorithms for at least five years.

Algorithm testing and certification

AI algorithms are different from ordinary software, as they adapt, learn and influence the environment without being explicitly programmed to do so. This makes testing and certification immensely challenging.

Algorithm testing assesses the risks of an algorithm’s implementation and provides a view of its quality. The techniques divide into:

Algorithm formal verification – proving or disproving the correctness of algorithms underlying a system with respect to a certain formal specification or property.

Algorithm cross-validation – statistical techniques to assess how the results of an algorithm will generalise to data it has not seen before (a sketch of both techniques follows this list).
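To make these two categories concrete, here is a minimal, hypothetical Python sketch (our illustration, not drawn from the longer paper): the first half uses the Z3 SMT solver to prove a simple property of a toy algorithm, a small-scale instance of formal verification; the second half uses scikit-learn’s five-fold cross-validation to estimate how a trained classifier will perform on unseen data. The dataset, model and libraries are illustrative assumptions only.

```python
# Sketch only: assumes 'pip install z3-solver scikit-learn'.

# Formal verification: prove a property of a tiny algorithm with an SMT solver.
from z3 import If, Int, prove

x = Int('x')
abs_x = If(x >= 0, x, -x)  # a symbolic absolute-value 'algorithm'
prove(abs_x >= 0)          # prints 'proved': the property holds for every integer x

# Cross-validation: estimate performance on data the model has not seen.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)   # a standard benchmark dataset
model = LogisticRegression(max_iter=5000)    # an illustrative classifier
scores = cross_val_score(model, X, y, cv=5)  # five train/test splits
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

A certifier could plausibly demand artefacts of both kinds: machine-checked proofs that safety-critical properties hold, and out-of-sample performance estimates for any learned components.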

Certification ensures that an algorithm conforms to one or more ISO, IEEE or FDA standards.

Algorithmic Star Chamber

In the workplace, algorithms are rapidly becoming a judge-and-jury ‘star chamber’: firms such as Uber, Google and Amazon rely on them to make many of their business decisions.

Gaps between the design and the operation of algorithms, their unresolved ethical implications, and the lack of redress may have severe consequences. Ethical challenges arise, for example, in programming autonomous vehicles to decide how to act in an unavoidable collision.

Predictive justice

In 1977, the US academic Anthony D’Amato asked whether computers could or should replace judges, concluding that decisions on facts must remain the province of humans. The use of algorithms to predict the outcome of court decisions is nevertheless gaining traction, especially in France.

Algorithms regulator and new code

There is an argument over whether a new regulator, akin to the Health and Safety Executive, should be established, backed by a code of sanctions. Assistance can be drawn from the European system for regulating the placing of unsafe goods on the market, which exists to protect consumers.

Algorithms as artificial persons

Recent case law has decided that a software program cannot enter into a contract on behalf of an insurer. Similarly, a company director must be a ‘legal person’: the Companies Act 2006 is to be amended by a new section 156A, providing that a person may not be appointed a company director unless they are a ‘natural person’.

This followed press reports of a venture capital firm appointing an algorithm to vote on whether or not to invest in a specific company.

Jeremy Barnett is a regulatory barrister at St Paul’s Chambers and Gough Square Chambers, and sits as a Recorder of the Crown and County Court. With a background in advanced computing, he is currently involved in research and development of blockchain and smart contracts.

Adriano Soares Koshiyama is a PhD Student in the UCL Department of Computer Science, funded by the Brazilian government’s CNPq programme.

Philip Treleaven is professor of computing at UCL and director of the UK Centre for Financial Computing & Analytics.