Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson from an Explainable AI Competition

In 2018, a landmark challenge in artificial intelligence (AI) took place, namely, the Explainable Machine Learning Challenge. The goal of the competition was to create a complicated black box model for a given dataset and explain how it worked. One team did not follow the rules. Instead of sending in a black box, they created a model that was fully interpretable. This leads to the question of whether the real world of machine learning is similar to the Explainable Machine Learning Challenge, where black box models are used even when they are not needed. We discuss this team’s thought processes during the competition and their implications, which reach far beyond the competition itself.

Focus: AI Ethics/Policy
Source: HDSR
Readability: Expert
Type: Website Article
Open Source: No
Keywords: N/A
Learn Tags: AI and Machine Learning, Ethics, Trust
Summary: The main thrust of this article is that an interpretable AI model can often be constructed without sacrificing accuracy. Stakeholders therefore should not resign themselves to black box models, particularly for high-stakes decisions.