AI Hiring System Policies
Identifying and Addressing Bias in Machine Learning Models on Selection of Candidates from a Policy Perspective
Panelists will discuss how machine learning models used to select candidates can carry bias that disadvantages persons with disabilities and others with individual differences. We will focus on the legal and ethical implications of machine learning bias as we explore the policies and practices that should be adopted both by tech companies in the design of their algorithms and by the employment organizations that use them. This webinar offers an understanding of the policy issues at stake in algorithmic bias.
November 17, 2020, 1:30 PM – 3:00 PM (EST)
Alexandra Reeve Givens is the CEO of the Center for Democracy & Technology, a leading U.S. think tank that focuses on protecting democracy and individual rights in the digital age. The organization works on a wide range of tech policy issues, from consumer privacy and data-driven discrimination to free expression, surveillance, internet governance, and competition.
Julia Stoyanovich is an Assistant Professor of Computer Science and Engineering and of Data Science at New York University. Julia’s research focuses on responsible data management and analysis, including operationalizing fairness, diversity, transparency, and data protection at all stages of the data science lifecycle. She is the founding director of the Center for Responsible AI at NYU, a comprehensive laboratory building toward a future in which responsible AI is the only kind of AI accepted by society.
Dr. Vera Roberts is Senior Manager of Research, Consulting and Projects at the Inclusive Design Research Centre (IDRC) at OCAD University. Vera’s primary research area is fostering a culture of inclusion through outreach activities and the implementation of inclusive technology and digital sharing platforms.