Ethical impacts from AI “unimaginable”, says EU think tank


AI: carries risks as well as benefits

Artificial intelligence (AI) software poses risks to society including tracking and identifying individuals, ‘scoring’ people without their knowledge, and powering lethal autonomous weapons systems, an influential EU group has warned.

The risks were outlined by the European Commission’s high-level expert group on AI when it published its ethics guidelines for trustworthy AI this week.

At the same time it launched a pilot project to test the guidance in practice.

With growing use of AI in legal practice, many of the issues raised have resonance.

Academic lawyers sat on the group, including experts from the universities of Birmingham and Oxford.

Several years in the making, the guidelines are the final version of proposals made in draft at the beginning of the year, which urged that AI be both human-centric and trustworthy.

The EU’s ambition is to boost spending on AI to €20bn (£17bn) annually over the next decade. The bloc is currently behind Asia and North America in private investment in AI.

In order for AI to be trustworthy and thereby gain public acceptance, the group recommended that it have three components: it should be lawful, complying with all applicable laws and regulations; it should be ethical; and it should be robust from a technical and social perspective, so that it does not cause harm unintentionally.

Those developing and using AI should bear in mind that while the technology could bring benefits, it could also impact negatively on “democracy, the rule of law and distributive justice, or on the human mind itself”.

The experts continued: “AI is a technology that is both transformative and disruptive, and its evolution over the last several years has been facilitated by the availability of enormous amounts of digital data, major technological advances in computational power and storage capacity, as well as significant scientific and engineering innovation in AI methods and tools.

“AI systems will continue to impact society and citizens in ways that we cannot yet imagine.”

Noteworthy risks included face recognition technology, the use of involuntary biometric data – such as “lie detection [or] personality assessment through micro expressions” – and automatic identification that raised legal and ethical concerns.

They also highlighted “citizen scoring in violation of fundamental rights”. Any such system must be transparent and fair, with mechanisms allowing the challenging and rectifying of discriminatory scores.

“This is particularly important in situations where an asymmetry of power exists between the parties,” they added.

The final example of risk brought about by AI was of lethal autonomous weapon systems, such as “learning machines with cognitive skills to decide whom, when and where to fight without human intervention”.

They concluded: “It is important to build AI systems that are worthy of trust, since human beings will only be able to confidently and fully reap its benefits when the technology, including the processes and people behind the technology, are trustworthy.”

