All our seminars are in hybrid format and accessible from the same connecting link. You can subscribe to our mailing list to receive all information and reminders of our events.

“The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law”, Sandra Wachter (University of Oxford)
9/11/2022, 14:30–16:00
- Speaker: Sandra Wachter (Professor of Technology and Regulation, Oxford Internet Institute)
- Paper: available here
- Abstract:
Artificial intelligence is increasingly used to make life-changing decisions, including who succeeds with their job application and who gets into university. To do this, AI often creates groups that have not previously been used by humans. Many of these groups are not covered by non-discrimination law (e.g., ‘dog owners’ or ‘sad teens’), and some of them are even incomprehensible to humans (e.g., people classified by how fast they scroll through a page or by which browser they use).
This is important because decisions based on algorithmic groups can be harmful. If a loan applicant scrolls through the page quickly or types only in lower-case letters when filling out the form, their application is more likely to be rejected. If a job applicant uses browsers such as Internet Explorer or Safari instead of Chrome or Firefox, they are less likely to be successful. Non-discrimination law aims to prevent these kinds of harms, for example by guaranteeing equal access to employment, goods, and services, but it has never protected “fast scrollers” or “Safari users”. Granting these algorithmic groups protection will be challenging because the European Court of Justice has historically been reluctant to extend the law to cover new groups.
This paper argues that algorithmic groups should be protected by non-discrimination law and shows how this could be achieved.
Online only