Machine Learning: Bias In, Bias Out

Learn more: Free Digging DEEPer webinar on machine learning discrimination, presented by Dr. Toon Calders on July 8, 2020, 11 AM-12 PM

Featuring: Dr. Toon Calders
(University of Antwerp)

Artificial intelligence is increasingly responsible for decisions that have a huge impact on our lives. But predictions made using data mining and algorithms can affect population subgroups differently. Academic researchers and journalists have shown that decisions made by predictive algorithms sometimes lead to biased outcomes, reproducing inequalities already present in society. Is it possible to make the data mining process fairness-aware? Are algorithms biased because people are? Or is bias inherent to how machine learning works at the most fundamental level?

Machine Learning: Bias In, Bias Out webinar video

Earn a Learner Badge from this webinar

Machine learning, a subset of artificial intelligence (AI), depends on the quality, objectivity, and size of the training data used to teach it. We Count encourages participants and learners to explore this concept so that an understanding of data gaps and biases can inform more equitable decisions and supports.
You will learn:

  • How predictive algorithms and data mining affect different populations in a discriminatory manner, and
  • How specific data resources are used to train and reinforce machine learning models to produce biased outputs.
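The "bias in, bias out" idea above can be sketched with a toy example. The numbers, group labels, and the frequency-based "model" below are all hypothetical illustrations, not material from the webinar: a model trained on historically skewed hiring decisions simply learns, and then reproduces, the skew in its training data.

```python
# Hypothetical illustration of "bias in, bias out": a minimal model trained
# on historically biased hiring decisions reproduces that bias, because it
# only learns label frequencies from the data it is given.

# Synthetic training data: (group, hired) pairs. Historical decisions hired
# group "A" 70% of the time (350 of 500) but group "B" only 30% (150 of 500).
train = (
    [("A", True)] * 350 + [("A", False)] * 150 +
    [("B", True)] * 150 + [("B", False)] * 350
)

def fit(data):
    """Learn the historical hire rate per group (a deliberately naive 'model')."""
    rates = {}
    for group in {g for g, _ in data}:
        outcomes = [hired for g, hired in data if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(rates, group):
    """Predict 'hire' when the learned rate for the group exceeds 0.5."""
    return rates[group] >= 0.5

model = fit(train)
print(model)                # {'A': 0.7, 'B': 0.3} (dict order may vary)
print(predict(model, "A"))  # True  -> the historical advantage is reproduced
print(predict(model, "B"))  # False -> the historical disadvantage persists
```

Nothing in the training step is "wrong" computationally; the disparity in the output comes entirely from the disparity in the input data, which is exactly the gap a fairness-aware data mining process would need to detect and correct.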
  1. Watch Dr. Toon Calders's accessible webinar.
  2. Apply for your Learner Badge.



