Keynote at the ICML Workshop on Algorithmic Recourse

Sandra Wachter - How AI weakens legal recourse and remedies


Abstract:

AI is increasingly used to make automated decisions about humans, including assessing creditworthiness, making hiring decisions, and sentencing criminals. Due to the inherent opacity of these systems and their potential discriminatory effects, policy and research efforts around the world are needed to make AI fairer, more transparent, and more explainable.

To tackle this issue, the European Commission recently published the Artificial Intelligence Act, the world's first comprehensive framework to regulate AI. The proposal contains several provisions requiring bias testing and monitoring as well as transparency tools. But is Europe ready for this task?

In this session I will examine several EU legal frameworks, including data protection and non-discrimination law, and demonstrate how AI weakens legal recourse mechanisms. I will also explain how current technical fixes such as bias tests, which are often developed in the US, are not only insufficient to protect marginalised groups but also clash with legal requirements in Europe.

I will then introduce some of the solutions I have developed to test for bias, explain black-box decisions, and protect privacy, which have been implemented by tech companies such as Google, Amazon, Vodafone, and IBM and have fed into public policy recommendations and legal frameworks around the world.