Explainable Artificial Intelligence: Exploring XAI Techniques in Military Deep Learning Applications

As a result of advancements in artificial intelligence (AI) and machine learning, in particular deep learning, the explainable artificial intelligence (XAI) research field has recently received considerable attention. XAI focuses on ensuring that the reasoning and decision-making of AI systems can be explained to human users. In a military context, such explanations are typically required to ensure that: human users have appropriate mental models of the AI systems they operate; specialists can gain insight into, and extract knowledge from, AI systems and their hidden tactical and strategic behavior; AI systems obey international and national law; and developers can identify flaws or bugs in AI systems even prior to deployment. The objective of this report is to explore XAI techniques developed specifically to provide explanations for deep learning-based AI systems. Such systems are inherently difficult to explain because the processes they model are often too complex to capture with interpretable alternatives. Even though the deep learning XAI field is still in its infancy, many explanation techniques have already been proposed in the scientific literature. Today's XAI techniques are useful primarily for development purposes (e.g., identifying bugs). More research is needed to determine whether these techniques can also support users in building appropriate mental models of the AI systems they operate, aid tactics development, and ensure that future military AI systems comply with national and international law.
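The abstract does not name the specific techniques the report surveys. Purely as an illustrative sketch of the class of methods involved, the following shows a vanilla gradient saliency map, one of the simplest deep learning XAI techniques; the pretrained torchvision classifier and the random placeholder input are assumptions for demonstration, not taken from the report (requires a recent torchvision with the weights API).

    # Illustrative sketch only (not from the report): a vanilla gradient
    # saliency map. The model and random placeholder input are assumptions.
    import torch
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    # Placeholder input; in practice this would be a preprocessed image.
    image = torch.rand(1, 3, 224, 224, requires_grad=True)

    scores = model(image)
    top_class = scores.argmax(dim=1).item()

    # Gradient of the top-class score with respect to the input pixels.
    scores[0, top_class].backward()

    # Saliency map: largest absolute gradient across the color channels,
    # indicating which pixels most influenced the prediction.
    saliency = image.grad.abs().max(dim=1).values.squeeze()
    print(saliency.shape)  # torch.Size([224, 224])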

Focus: AI Ethics/Policy
Source: FOI
Readability: Expert
Type: PDF Article
Open Source: Yes
Keywords: N/A
Learn Tags: AI and Machine Learning, Ethics, Fairness, Framework Design/Methods
Summary: The objective of this report is to present representative explainable AI techniques that have been developed in the context of deep learning.