Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI

In recent years a substantial literature has emerged on bias, discrimination, and fairness in AI and machine learning. Connecting this work to existing legal non-discrimination frameworks is essential if tools and methods are to be practically useful across divergent legal regimes. While much work has been undertaken from an American legal perspective, comparatively little has mapped the effects and requirements of EU law. This Article addresses the resulting gap between legal, technical, and organisational notions of algorithmic fairness. Through analysis of EU non-discrimination law and the jurisprudence of the European Court of Justice (ECJ) and national courts, we identify a critical incompatibility between European notions of discrimination and existing work on algorithmic and automated fairness. A clear gap separates the statistical measures of fairness embedded in myriad fairness toolkits and governance mechanisms from the context-sensitive, often intuitive and ambiguous discrimination metrics and evidential requirements used by the ECJ; we refer to the Court's approach as "contextual equality."
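To make the contrast concrete, the sketch below (purely illustrative, not taken from the Article) computes demographic parity difference, one common statistical fairness measure of the kind such toolkits embed. All decision data and group labels are hypothetical.

```python
# Minimal sketch of one common "statistical measure of fairness":
# demographic parity difference, the gap in positive-outcome rates
# between two groups. All data here are hypothetical.

def positive_rate(outcomes: list[int]) -> float:
    """Share of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a: list[int], group_b: list[int]) -> float:
    """Absolute gap in positive-outcome rates between two groups.

    0.0 means identical rates; larger values mean greater disparity.
    """
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical loan decisions (1 = approved) for two protected groups.
approvals_group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% approved
approvals_group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 37.5% approved

gap = demographic_parity_difference(approvals_group_a, approvals_group_b)
print(f"Demographic parity difference: {gap:.3f}")  # prints 0.375
```

A single summary number of this kind is precisely what the Article contrasts with "contextual equality": the ECJ weighs context and evidence case by case rather than applying a fixed statistical threshold.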

Focus: AI Ethics/Policy
Source: Computer Law & Security Review
Readability: Expert
Type: PDF Article
Open Source: No
Keywords: N/A
Learn Tags: Bias, AI and Machine Learning, Fairness, Ethics
Summary: This paper examines the dynamics, in the European context, between the legal and technological communities as they explore solutions for objectivity and non-discrimination in automated systems. The authors argue, however, that such solutions may not be achievable within the current legal framework, which does not provide a standard for testing discrimination in AI systems.