Ethical impacts from AI “unimaginable”, says EU think tank


AI: carries risks as well as benefits

Artificial intelligence (AI) software poses risks to society including tracking and identifying individuals, ‘scoring’ people without their knowledge, and powering lethal autonomous weapons systems, an influential EU group has warned.

The risks were outlined by the European Commission’s high-level expert group on AI when it published its ethics guidelines for trustworthy AI this week.

At the same time it launched a pilot project to test the guidance in practice.

With AI increasingly used in legal practice, many of the issues raised will resonate with the profession.

Academic lawyers sat on the group, including experts from the universities of Birmingham and Oxford.

Several years in the making, the guidelines are the final version of proposals made in draft at the beginning of the year, which urged that AI be both human-centric and trustworthy.

The EU’s ambition is to boost spending on AI to €20bn (£17bn) annually over the next decade. The bloc is currently behind Asia and North America in private investment in AI.

For AI to be trustworthy, and thereby gain public acceptance, the group recommended that it have three components: it should be lawful, complying with all applicable laws and regulations; it should be ethical; and it should be robust from a technical and social perspective, so that it does not cause harm unintentionally.

Those developing and using AI should bear in mind that while the technology can bring benefits, it can also have a negative impact on “democracy, the rule of law and distributive justice, or on the human mind itself”.

The experts continued: “AI is a technology that is both transformative and disruptive, and its evolution over the last several years has been facilitated by the availability of enormous amounts of digital data, major technological advances in computational power and storage capacity, as well as significant scientific and engineering innovation in AI methods and tools.

“AI systems will continue to impact society and citizens in ways that we cannot yet imagine.”

Noteworthy risks included facial recognition technology, the use of involuntary biometric data – such as “lie detection [or] personality assessment through micro expressions” – and automatic identification, all of which raise legal and ethical concerns.

They also highlighted “citizen scoring in violation of fundamental rights”. Any such system must be transparent and fair, with mechanisms allowing the challenging and rectifying of discriminatory scores.

“This is particularly important in situations where an asymmetry of power exists between the parties,” they added.

The final example of risk brought about by AI was of lethal autonomous weapon systems, such as “learning machines with cognitive skills to decide whom, when and where to fight without human intervention”.

They concluded: “it is important to build AI systems that are worthy of trust, since human beings will only be able to confidently and fully reap its benefits when the technology, including the processes and people behind the technology, are trustworthy.”
