AI lawyers coming but hold off on legislation, says Microsoft

5 February 2018


Smith: all lawyers will depend on AI

A new breed of specialist artificial intelligence (AI) lawyer will emerge within 20 years, but governments should only legislate on the subject once technology companies have had time to develop their own ethical principles, according to Microsoft.

In a 120-page report on the future of technology, The Future Computed: Artificial Intelligence and its Role in Society, the software giant urged regulators to rely on existing laws, while anticipating a discrete band of specialist lawyers, themselves supported by the technology.

“By 2038 it’s safe to assume that… not only will there be AI lawyers practising AI law, but these lawyers, and virtually all others, will rely on AI itself to assist them with their practice,” the company’s president and former general counsel, Brad Smith, co-wrote with a colleague in the foreword.

However, current laws on privacy, data protection, competition, and negligence were sufficient to regulate many of the issues thrown up by AI, the company argued.

It called for any new AI-specific laws to strike a balance between addressing challenges and enabling innovation and its potential “to improve people’s lives”.

Meanwhile, stakeholders in the technology should be given “sufficient time to identify and articulate key principles guiding the development of responsible and trustworthy AI, and to implement these principles by adopting and refining best practices”.

In the short term, Microsoft identified data collection as the focus for regulators, since the development of AI required data, often as much of it as possible.

Data also had a bearing on competition and governments should be even-handed in enabling access to public data, while ensuring that no one company had a monopoly on private data.

They should also be mindful of whether “sophisticated algorithms will enable rivals to effectively ‘fix’ prices”.

Governments should stimulate the adoption of AI technologies across a “wide range of industries”, to “promote economic growth and opportunity”, the company suggested.

It explained: “[AI] can play an important role in addressing income stagnation and mitigating political and social tensions that can arise as income inequality increases”.

Meanwhile, current negligence laws could be used to address “injuries arising from the deployment and use of AI systems”.

It continued: “Relying on a negligence standard that is already applicable to software generally to assign responsibility for harm caused by AI is the best way for policymakers and regulators to balance innovation and consumer safety, and promote certainty for developers and users of the technology.

“This will help keep firms accountable for their actions, align incentives and compensate people for harm.”

The ultimate goal for the technology was a “human-centred approach”, Microsoft concluded, but this would require “researchers, policymakers, and leaders from government, business and civil society”, to come together to “develop a shared ethical framework” for AI.

