Why AI Fairness Tools Might Actually Cause More Problems

Focus: AI Ethics/Policy
Source: Protocol
Readability: Intermediate
Type: Website Article
Open Source: No
Keywords: N/A
Learn Tags: Bias, Business, Design/Methods, Ethics, Fairness, Inclusive Practice, AI and Machine Learning
Summary: Codifying a rule commonly used to measure hiring impacts on protected groups into AI fairness tools has, in some instances, backfired by removing the human element from decision-making.