AI and Disability: A Double-Edged Sword


To celebrate the anniversary of the We Count Digging Deeper Series, we’re bringing you a special webinar that features an intimate conversation between Wendy Chisholm and Jutta Treviranus about the complex relationship between innovative technology and disability, with a focus on artificial intelligence (AI). Drawing on their wealth of knowledge and experience, these panelists will discuss the risks and rewards of AI for people with disabilities, as well as what the future of the industry may hold for the disability community.

Thursday, July 22, 2021, 1:30 PM – 3:00 PM (EDT)

AI and Disability webinar video

Panelists:

Wendy Chisholm has been working to make the world more equitable since 1995, when she started working on what would become the W3C’s Web Content Accessibility Guidelines, the basis of web accessibility policy worldwide. Since then, she has co-written Universal Design for Web Applications with Matt May (O’Reilly, 2008), worked as a consultant, and appeared as Wonder Woman in a web comic with the other HTML5 Super Friends. In 2010, she joined Microsoft to infuse accessibility into its engineering processes, and she drove the development of what is now Accessibility Insights. Since 2018, she has managed the selection and funding of projects for Microsoft’s AI for Accessibility program, a $25 million grant program to accelerate AI innovations that are developed with or by people with disabilities.

Jutta Treviranus is the Director of the Inclusive Design Research Centre (IDRC) and a professor in the Faculty of Design at OCAD University. Jutta established the IDRC in 1993 as the nexus of a growing global community that works proactively to ensure that our digitally transformed and globally connected society is designed inclusively. She also heads the Inclusive Design Institute, a multi-university regional centre of expertise. Jutta founded an innovative graduate program in inclusive design at OCAD University, and she leads international multi-partner research networks that have created broadly implemented innovations supporting digital equity. She has played a leading role in developing accessibility legislation, standards, and specifications internationally (including W3C WAI ATAG, IMS AccessForAll, ISO 24751, and the AODA Information and Communication standard). She serves on many advisory bodies globally, providing expertise in equitable policy design. Jutta’s work has been credited as the impetus for the adoption of more inclusive practices at large enterprise companies such as Microsoft and Adobe.

Earn a Learner badge

You will learn:

  • The risks and rewards of AI for people with disabilities
  • How future developments in different industries may impact the disability community

Learn and earn badges from this event:

  1. Watch the accessible AI and Disability: A Double-Edged Sword webinar
  2. Apply for your Learner badge

Comments


Ayesha Rafi | July 22, 2021

I really enjoyed the conversation today about AI and Disability. I believe we still need more supports for people with disabilities in the health care sector, and we have much to improve!

David Dyer Lawson | July 22, 2021

Outside of ethical considerations, I wondered if the panelists could discuss the economic advantages of creating inclusive AI. Specifically, it'd be very cool to hear if Microsoft has any data showing how inclusive AI improved corporate ROI/ROE. Since we live in an era of neoliberalism, can we avoid thinking about economics?

Mitali Karnat | July 22, 2021

Can you talk about accessible AI-based 3D scanning applications that can be used by blind and partially sighted individuals?

Jennifer Chadwick | July 22, 2021

You shared great examples of barriers caused by AI. Does We Count have a document sharing specific examples, user feedback, and the experiences of people with disabilities, to help understand the depth of the impact?

Alan Harnum | July 22, 2021

Do we need to move beyond the primarily complaint-based and individual-rights-based enforcement models for disabled people who are impacted or potentially impacted by technology? (Cards on the table: I feel the continued retrenchment of exclusion every time there's a new technology stems in large part from those predominant models of "resolution".)

Margot Whitfield | July 22, 2021

Can AI critique be used to help us see our bias, or to help employers realize their bias? Are there examples of this? Is there a way to assess an AI error or bias (for example, in the case of hiring) for an individual, so they could advocate for getting a chance at an interview? Maybe people who use AI in their applications would need to state their assumptions publicly on their products/services, as a way for individuals to critique those assumptions... Would this be an important issue to advocate for?

Merve Hickok | July 22, 2021

I am an AI ethicist (and a former senior HR practitioner and HR tech SME). Everything you said is crucial in the employment context. I am also concerned that these practices will lock certain people 'permanently' out of certain industries, positions, or companies as these tools become more prevalent. If AI systems are designed only to ensure all outcomes comply with the four-fifths rule, it leaves no room to challenge these systems legally.
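For context, the four-fifths rule referenced above is an adverse-impact heuristic from US employment guidance: a group whose selection rate falls below 80% of the highest group's rate is commonly flagged for adverse impact. The sketch below is a minimal illustration of that check, not anyone's production system; the group names and applicant counts are hypothetical.

```python
# Minimal sketch of a four-fifths (80%) rule check for adverse impact.
# All group names and applicant counts below are hypothetical.

FOUR_FIFTHS = 0.8

def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's selection rate divided by the highest group's rate.

    `groups` maps a group name to (number selected, number of applicants).
    """
    rates = {name: selected / applicants
             for name, (selected, applicants) in groups.items()}
    highest = max(rates.values())
    return {name: rate / highest for name, rate in rates.items()}

# Hypothetical hiring data: (selected, applicants)
groups = {
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate
}

for name, ratio in impact_ratios(groups).items():
    status = "adverse impact flagged" if ratio < FOUR_FIFTHS else "within threshold"
    print(f"{name}: ratio {ratio:.2f} -> {status}")
```

As the comment notes, a system tuned only to pass this aggregate check can satisfy the ratio while still excluding particular individuals, which is part of the legal-challenge concern raised.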

Susan Till | July 22, 2021

What do you think of gathering personal data (related to disability, race, and other aspects of diversity) via a survey on a business's website to collect info about its users? Assume that users would be fully informed on how this data would be used.

Monica Tsang | July 22, 2021

How do data collection platforms/co-ops garner trust from the PWD community? How do we ensure that the PWD population gets a voice in the development of ML/AI platforms at large companies? These platforms hugely impact our lives, yet we seem to have no say when harm is being done. Besides differential privacy, are there other data techniques to protect PWD who are sharing their data for the greater good?