Why we need robot rules


Guest post by Jacob Turner, a barrister at Fountain Court Chambers and author of Robot Rules: Regulating Artificial Intelligence

Turner: We need systems of rules dedicated to AI

In 2016, an artificial intelligence (AI) program called AlphaGo beat world champion Lee Sedol at the board game Go. At one point, AlphaGo made a decision which baffled observers and even its programmers, but turned out hours later to be the winning move. Commentators called it “God’s Touch”.

AI is unlike any other technology, because of its ability to act independently and unpredictably. In my new book, I explain why AI is unique, what legal and ethical problems it could cause, and how we can address them.

There are three issues:

  • Responsibility – who is liable if AI causes harm?
  • Rights – are there grounds for giving corporate personality to AI?
  • Ethics – how should AI make choices, and are there any decisions it should not take?

Responsibility for AI

Where AI systems make choices, there is no settled legal framework for determining who or what should be held responsible. It could be the programmer, the owner, the operator, some combination of these, or perhaps no one at all.

We might start to think of an AI’s designers as parents and the AI as their offspring. After a certain point in their development, children attain independence and become responsible for themselves.

Two features of AI make it difficult always to hold the original programmer responsible. First, AI is becoming more independent; some AI systems are now able to develop new AI. Secondly, the barriers between designers and users are breaking down as AI becomes more user-friendly: shaping an AI’s behaviour is starting to look more like training a dog than writing code.

For self-driving vehicles, the UK has recently enacted the Automated and Electric Vehicles Act 2018. Where an accident is caused by a vehicle when driving itself, the vehicle’s insurer is now liable for any damage. An insurer can’t avoid liability by saying “the self-driving car did it”.

But although insurers are required to pay victims under the Act, they can then make contribution claims against other parties. The tricky issues of responsibility are not solved, just kicked down the road.

Responsibility is not only about harm. Most legal systems at present do not tell us who owns the rights if AI creates something beneficial which, had it been produced by a human, might attract intellectual property protection. AI art is already becoming very desirable: one AI-generated painting was sold by Christie’s in October 2018 for over $430,000.

Rights for AI

One solution to the question of responsibility is to give AI its own legal personality. This would involve giving an AI system the ability to hold property, to sue and be sued. Indeed, the European Parliament proposed legal personality for AI in a resolution of February 2017.

Artificial legal persons are not new; companies have had their own legal personality, distinct from that of their owners, for hundreds of years.

It will only take one country to start a domino effect for AI personality. If a smaller jurisdiction with the ability to move quickly – perhaps Singapore, Malta or the British Virgin Islands – adopts AI legal personality, then others are likely to follow so as not to lose out on any competitive advantage.

Indeed, EU countries are already required to recognise corporate persons registered in other member states.

Ethics of AI

A well-known thought experiment asks how a driverless car should decide whom to prioritise in a collision. This issue was considered so serious in Germany that the Federal Ministry of Transport and Digital Infrastructure issued guidance to car manufacturers on the principles to be taken into account.

Many have argued that AI should not make life-or-death decisions in autonomous weapons (or “killer robots”). The same debate might apply to a medical AI program choosing which patients to prioritise for treatment.

Some people say that there should always be a “human in the loop” to ratify or second-guess any decision by AI. Article 22 of the GDPR (the right not to be subject to a decision based solely on automated processing) may even make human input mandatory for certain important decisions, such as the approval of bank loans.

Rules for Robots

We could take a ‘business as usual’ approach to AI, leaving principles to be adapted organically through case law.

However, test cases are rarely the best way to create new policy, especially in situations where difficult ethical and societal decisions need to be made. If we do nothing, legal developments may be haphazard and contradictory.

In my view, it would be better for governments to work proactively, together with companies, academia, the legal industry and the public to lay down systems of rules dedicated to AI. This could be by amendment to existing laws, or by creating entirely new ones.

To this end, the second half of Robot Rules suggests a roadmap for creating new institutions and rules on a cross-industry and international basis.

Either way, two things seem clear: AI will have an increasing impact on all areas of the world economy, and lawyers have a major role to play in determining its relationship with society.

Jacob Turner tweets at @robotrules



