Ethical impacts from AI “unimaginable”, says EU think tank


AI: carries risks as well as benefits

Artificial intelligence (AI) software poses risks to society including tracking and identifying individuals, ‘scoring’ people without their knowledge, and powering lethal autonomous weapons systems, an influential EU group has warned.

The risks were outlined by the European Commission’s high-level expert group on AI when it published its ethics guidelines for trustworthy AI this week.

At the same time it launched a pilot project to test the guidance in practice.

With growing use of AI in legal practice, many of the issues raised have resonance.

Academic lawyers sat on the group, including experts from the universities of Birmingham and Oxford.

Several years in the making, the guidelines are the final version of proposals made in draft at the beginning of the year, which urged that AI be both human-centric and trustworthy.

The EU’s ambition is to boost spending on AI to €20bn (£17bn) annually over the next decade. The bloc is currently behind Asia and North America in private investment in AI.

For AI to be trustworthy and thereby gain public acceptance, the group recommended that it have three components: it should be lawful, complying with all applicable laws and regulations; it should be ethical; and it should be robust from a technical and social perspective, so that it does not cause harm unintentionally.

Those developing and using AI should bear in mind that, while the technology could bring benefits, it could also have a negative impact on “democracy, the rule of law and distributive justice, or on the human mind itself”.

The experts continued: “AI is a technology that is both transformative and disruptive, and its evolution over the last several years has been facilitated by the availability of enormous amounts of digital data, major technological advances in computational power and storage capacity, as well as significant scientific and engineering innovation in AI methods and tools.

“AI systems will continue to impact society and citizens in ways that we cannot yet imagine.”

Noteworthy risks included face recognition technology, the use of involuntary biometric data – such as “lie detection [or] personality assessment through micro expressions” – and automatic identification, all of which raised legal and ethical concerns.

They also highlighted “citizen scoring in violation of fundamental rights”. Any such system must be transparent and fair, with mechanisms allowing discriminatory scores to be challenged and rectified.

“This is particularly important in situations where an asymmetry of power exists between the parties,” they added.

The final example of risk brought about by AI was of lethal autonomous weapon systems, such as “learning machines with cognitive skills to decide whom, when and where to fight without human intervention”.

They concluded: “it is important to build AI systems that are worthy of trust, since human beings will only be able to confidently and fully reap its benefits when the technology, including the processes and people behind the technology, are trustworthy.”
