Towards Trustable Explainable AI

Explainable artificial intelligence (XAI) is arguably one of the most crucial challenges currently faced by AI. Although the majority of approaches to XAI are heuristic in nature, recent work proposed the use of abductive reasoning to compute provably correct explanations for machine learning (ML) predictions. This rigorous approach was shown to be useful not only for computing trustable explanations but also for validating explanations computed heuristically. It was also applied to uncover a close relationship between XAI and the verification of ML models.
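As a rough illustration of the abductive idea (a minimal sketch, not the paper's exact algorithm), the snippet below computes a subset-minimal explanation for a hypothetical toy linear classifier using the z3 SMT solver; the model, feature bounds, and instance are all invented for the example. A subset of the instance's feature values is an explanation precisely when fixing those values makes the opposite prediction unsatisfiable.

```python
from z3 import Real, And, Not, Solver, unsat

# Hypothetical toy "model": class 1 iff 2*x0 + x1 - x2 > 0, features in [0, 5].
x = [Real(f"x{i}") for i in range(3)]
domain = And([And(v >= 0, v <= 5) for v in x])
model = 2 * x[0] + x[1] - x[2] > 0

instance = {0: 3, 1: 1, 2: 2}   # concrete input; the model predicts class 1

def entails(fixed):
    # The fixed feature values entail the prediction iff
    # domain AND fixed AND NOT(model) is unsatisfiable.
    s = Solver()
    s.add(domain, Not(model))
    s.add(And([x[i] == instance[i] for i in fixed]))
    return s.check() == unsat

# Deletion-based linear search for a subset-minimal abductive explanation:
# try to drop each feature in turn; keep it only if entailment breaks without it.
explanation = sorted(instance)
for i in sorted(instance):
    reduced = [j for j in explanation if j != i]
    if entails(reduced):
        explanation = reduced

print([(f"x{i}", instance[i]) for i in explanation])   # -> [('x0', 3)]
```

Here x0 = 3 alone suffices: given the bounds, 2*3 + x1 - x2 is at least 1, so the prediction cannot flip regardless of x1 and x2, and the solver certifies this.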

Focus: Methods or Design
Source: IJCAI 2020
Readability: Expert
Type: PDF Article
Open Source: No
Keywords: N/A
Learn Tags: AI and Machine Learning, Trust, Design/Methods
Summary: This paper surveys recent advances in rigorous abductive approaches to explainable AI and argues that such approaches are absolutely necessary if trustable explainable AI is of concern.