We Count: Artificial Intelligence Inclusion Projects from Inclusive Design Research Centre

Resources

Support your learning through our searchable research library and discover valuable resources about many topics in artificial intelligence and data analytics, such as AI ethics, bias and data tools.

Select the We Count at Large tag to view a selection of speaking engagements and presentations by IDRC team members. Many of these resources showcase the efforts of IDRC Director Jutta Treviranus, whose pioneering work and insights in inclusive AI continue to inspire and lead the field.


What Type of Explanation Do Rejected Job Applicants Want? Implications for Explainable AI

Source: arXiv
Media Type: PDF Article
Readability: 
  • Expert
Summary:

This paper finds that rejected job applicants want personalized explanations for their rejection, along with guidance on how they could improve their employment prospects. It also argues that employers owe applicants such an explanation.

What We Learned Auditing Sophisticated AI for Bias

Source: O'Reilly
Media Type: Website Article
Readability: 
  • Intermediate
Summary:

AI bias stems from a number of factors, not just data. The regulatory framework for effective AI auditing is still being developed. As AI is poised to be the next transformative technology, audits and risk-management tools similar to those used in other high-stakes industries, such as nuclear power and aviation, are likely to follow.

When AI Reads Medical Images: Regulating to Get It Right

Source: HAI
Media Type: Website Article
Readability: 
  • Intermediate
Summary:

Stanford researchers propose a framework for regulating diagnostic algorithms, intended to ensure world-class clinical performance and to build trust among clinicians and patients.

When AI’s Output Is a Threat to AI Itself

Source: New York Times
Media Type: Website Article
Readability: 
  • Intermediate
Summary:

AI model collapse is an ongoing concern both for those who train AI models and for those who use them. Further compounding the issue is the increasing difficulty in identifying AI-generated data.

When Robots Can’t Riddle: What Puzzles Reveal About the Depths of Our Own Minds

Source: BBC
Media Type: Website Article
Readability: 
  • Intermediate
Summary:

Improving AI's ability to solve puzzles and logic problems may be the key to improving the technology.

When the Machine Meets the Expert: An Ethnography of Developing AI for Hiring

Source: MIS Quarterly
Media Type: Website Article
Readability: 
  • Expert
Summary:

This ethnographic study examines developers building a machine learning system to support hiring at a large international organization. It focuses on how they managed the tension between producing knowledge independent of domain experts and remaining relevant to the domain the system serves.

When We Design for Disability, We All Benefit

Source: TED Talks
Media Type: Video with Transcript
Readability: 
  • Beginner
Summary:

Persons with disabilities have unique ways of experiencing and reframing the world. Inclusive designers often discover better solutions when designing for persons with disabilities rather than for the norm.

White Faces Generated by AI Are More Convincing Than Photos, Finds Survey

Source: Guardian
Media Type: Website Article
Readability: 
  • Intermediate
Summary:

A new international study has found that people are more likely to think pictures of white faces generated by AI are human than photographs of real individuals.

Who Audits the Auditors? Recommendations from a Field Scan of the Algorithmic Auditing Ecosystem

Source: FAccT 2022
Media Type: PDF Article
Readability: 
  • Expert
Summary:

Algorithmic audits are an increasingly popular mechanism for algorithmic accountability. But without a clear understanding of audit practices, AI audit claims are difficult to verify and may exacerbate, rather than mitigate, bias and harm. To address this, the Algorithmic Justice League (AJL) has completed a field scan of the AI audit ecosystem and has shared its findings in a new paper.

Who Audits the Auditors? Recommendations from a Field Scan of the Algorithmic Auditing Ecosystem (Overview)

Source: Algorithmic Justice League
Media Type: Video
Readability: 
  • Intermediate
Summary:

Algorithmic audits are an increasingly popular mechanism for algorithmic accountability. But without a clear understanding of audit practices, AI audit claims are difficult to verify and may exacerbate, rather than mitigate, bias and harm. To address this, the AJL has completed a field scan of the AI audit ecosystem and has shared its findings in a video overview of its new paper.
