Overview

The Need

Future military operations may rely increasingly on machine learning and artificial intelligence to parse vast amounts of data. One limitation of these systems is their inability to explain their reasoning or decisions. Warfighters will need this information to understand and trust these systems.

The Defense Advanced Research Projects Agency (DARPA) seeks to address this limitation through its Explainable Artificial Intelligence (XAI) program.

The Solution

Under XAI, Charles River Analytics led a team to build systems that explain how AI tools classify activities, such as detecting pedestrians in images, and perform autonomous decision-making, such as in game environments. 

 

The Benefit

Our team used causality as the central concept for creating explanations humans can understand and trust.

Explainable Artificial Intelligence

Our CAMEL approach supports dialogue between humans and artificial intelligence systems.

"First, we modeled how highly complex machine learning systems work—systems that use deep neural networks and deep reinforcement learning. Then, we built an interface on top of our models that explains how the system came to its conclusion in an intuitive and human-understandable way." - Dr. James Tittle, Principal Scientist


The Department of Defense (DoD) views human-machine teaming as vital to future operations. However, current artificial intelligence (AI) systems cannot explain how they reach their conclusions. As machine learning becomes integral to these teams, the ability of AI to communicate effectively with its human teammates will only become more important.

Dialogue with AI Systems

DARPA’s Explainable Artificial Intelligence (XAI) effort aims to enable dialogue between humans and AI systems. Under XAI, Charles River Analytics led a team that included Brown University, the University of Massachusetts at Amherst, and Roth Cognitive Engineering. The team developed probabilistic causal modeling techniques and an interpretive interface that let users interact naturally with machines. Our Causal Models to Explain Learning (CAMEL) approach simplifies explanations of how these complex, deep learning systems work.
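CAMEL's actual interface is not reproduced here, but the idea of an interpretive layer that answers a user's "why?" in causal terms can be illustrated with a brief sketch. The class and method names below (CausalFactor, ExplanationInterface, why) are hypothetical placeholders rather than the team's API, and the factors and strengths are invented for the example.

```python
# Hypothetical sketch of an interpretive interface over a causal model.
# CausalFactor, ExplanationInterface, and why() are illustrative names,
# not CAMEL's actual API; the factors and strengths are invented.

from dataclasses import dataclass

@dataclass
class CausalFactor:
    name: str        # human-readable cause, e.g. "upright silhouette"
    strength: float  # how strongly this factor drove the decision (0..1)

class ExplanationInterface:
    def __init__(self, decision: str, factors: list):
        self.decision = decision
        self.factors = sorted(factors, key=lambda f: f.strength, reverse=True)

    def why(self, top_k: int = 2) -> str:
        """Answer a user's 'why?' with the strongest causal factors."""
        top = ", ".join(f"{f.name} ({f.strength:.0%})" for f in self.factors[:top_k])
        return f"I concluded '{self.decision}' mainly because of: {top}."

# One dialogue turn: the user asks why the system reported a pedestrian.
ui = ExplanationInterface(
    decision="pedestrian detected",
    factors=[
        CausalFactor("upright silhouette", 0.62),
        CausalFactor("walking-pace motion", 0.27),
        CausalFactor("position on crosswalk", 0.11),
    ],
)
print(ui.why())
```

A real system would derive the factors and their strengths from the learned causal model rather than hard-coding them.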


The Need for Explainable AI (Image courtesy of DARPA)

Strengthening Human-Machine Trust

CAMEL’s explanations are grounded in causality, a concept critical to making AI reasoning understandable to humans. Causal models explain how machine learning techniques work, so users can correctly interpret the results of complex and increasingly mission-critical AI systems.

Learning causal models of systems this complex is a challenging and largely unaddressed problem. CAMEL unifies causal modeling with the emerging field of probabilistic programming to create a novel framework that uses causal inference to explain the machine learning techniques behind data analysis and autonomy systems.
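The team's actual models and tooling are not reproduced here, but the pairing this paragraph describes can be sketched in a few lines of Python: express the causal structure as a small probabilistic program, then explain an output by intervening on a candidate cause and observing how the prediction changes. Everything in this sketch, from the variable names to the probabilities, is an assumption made for illustration.

```python
# Toy structural causal model for "pedestrian detected", written as a tiny
# probabilistic program and queried with do-style interventions.
# All variables, probabilities, and effect sizes are invented for illustration.

import random

def model(do_silhouette=None):
    """Draw one forward sample from the toy causal model."""
    crosswalk = random.random() < 0.3
    if do_silhouette is None:
        silhouette = random.random() < (0.8 if crosswalk else 0.2)
    else:
        silhouette = do_silhouette  # intervention: force the cause on or off
    motion = random.random() < 0.5
    # The detector's output depends mostly on the silhouette cue.
    p_detect = 0.1 + 0.7 * silhouette + 0.15 * motion
    return random.random() < p_detect

def detection_rate(n=50_000, **kwargs):
    return sum(model(**kwargs) for _ in range(n)) / n

print(f"P(detect)                    ~ {detection_rate():.2f}")
print(f"P(detect | do(silhouette=1)) ~ {detection_rate(do_silhouette=True):.2f}")
print(f"P(detect | do(silhouette=0)) ~ {detection_rate(do_silhouette=False):.2f}")
```

The gap between the two intervened rates is what a causal explanation would report as the silhouette cue's contribution to the detection.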

Broadening the Applicability of CAMEL

CAMEL will significantly influence how machine learning systems are deployed, operated, and used inside and outside the DoD. Users of mission-critical systems will be given the rationale behind AI conclusions and can request more detailed explanations. For decision-makers facing life-or-death situations, these explanations are vital to interpreting and applying recommendations effectively, and they are increasingly required by government institutions.


This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
