Toward Fairness in AI for People with Disabilities: A Research Roadmap

AI technologies have the potential to dramatically impact the lives of people with disabilities (PWD). Indeed, improving the lives of PWD is a motivator for many state-of-the-art AI systems, such as automated speech recognition tools that can caption videos for people who are deaf and hard of hearing, or language prediction algorithms that can augment communication for people with speech or cognitive disabilities. However, widely deployed AI systems may not work properly for PWD, or worse, may actively discriminate against them. These considerations regarding fairness in AI for PWD have thus far received little attention. In this position paper, we identify potential areas of concern regarding how several AI technology categories may impact particular disability constituencies if care is not taken in their design, development, and testing. We intend for this risk assessment of how various classes of AI might interact with various classes of disability to provide a roadmap for future research that is needed to gather data, test these hypotheses, and build more inclusive algorithms.

Focus: AI and Disability/Outliers
Source: ACM ASSETS 2019
Readability: Expert
Type: PDF Article
Open Source: No
Keywords: Artificial intelligence, machine learning, data, disability, accessibility, inclusion, AI fairness, AI bias, ethical AI.
Learn Tags: Ethics, Fairness, Accessibility, Assistive Technology, Data, Tools, Disability, Bias, AI and Machine Learning
Summary: Identifies how several AI technology categories, such as automated speech recognition tools and language prediction algorithms, may not work properly for people with disabilities or may actively discriminate against them, and outlines a research roadmap toward more inclusive algorithms.