AI lawyers coming but hold off on legislation, says Microsoft


Smith: all lawyers will depend on AI

A new breed of specialist artificial intelligence (AI) lawyer will emerge within 20 years, but governments should only legislate on the subject once technology companies have had time to develop their own ethical principles, according to Microsoft.

In a 120-page report on the future of technology, The Future Computed: Artificial intelligence and its role in society, the software giant urged regulators to use existing laws, while anticipating a discrete band of specialist lawyers, themselves supported by the technology.

“By 2038 it’s safe to assume that… not only will there be AI lawyers practising AI law, but these lawyers, and virtually all others, will rely on AI itself to assist them with their practice,” the company’s president and former general counsel, Brad Smith, co-wrote with a colleague in the foreword.

However, current laws on privacy, data protection, competition, and negligence were sufficient to regulate many of the issues thrown up by AI, the company argued.

It called for any new AI-specific laws to strike a balance between addressing challenges and enabling innovation and its potential “to improve people’s lives”.

Meanwhile, stakeholders in the technology should be given “sufficient time to identify and articulate key principles guiding the development of responsible and trustworthy AI, and to implement these principles by adopting and refining best practices”.

In the short term, Microsoft identified data collection as the focus for regulators: the development of AI required data, often as much of it as could be gathered.

Data also had a bearing on competition and governments should be even-handed in enabling access to public data, while ensuring that no one company had a monopoly on private data.

They should also be mindful of whether “sophisticated algorithms will enable rivals to effectively ‘fix’ prices”.

Governments should stimulate the adoption of AI technologies across a “wide range of industries”, to “promote economic growth and opportunity”, the company suggested.

It explained: “[AI] can play an important role in addressing income stagnation and mitigating political and social tensions that can arise as income inequality increases”.

Meanwhile, current negligence laws could be used to address “injuries arising from the deployment and use of AI systems”.

It continued: “Relying on a negligence standard that is already applicable to software generally to assign responsibility for harm caused by AI is the best way for policymakers and regulators to balance innovation and consumer safety, and promote certainty for developers and users of the technology.

“This will help keep firms accountable for their actions, align incentives and compensate people for harm.”

The ultimate goal for the technology was a “human-centred approach”, Microsoft concluded, but this would require “researchers, policymakers, and leaders from government, business and civil society” to come together to “develop a shared ethical framework” for AI.
