ACM News

A Breakthrough in Explainable AI



A new artificial intelligence (AI) agent offers easy-to-understand English explanations of its analysis and decisions.

Such explanation tools are considered critical by those working in AI, who fear users may be reluctant to embrace AI applications that make recommendations whose rationales are shrouded in mystery.

"As AI pervades all aspects of our lives, there is a distinct need for human-centered AI design that makes black-boxed AI systems explainable to everyday users," says Upol Ehsan, a doctoral student in the School of Interactive Computing at the Georgia Institute of Technology (Georgia Tech) and lead author of the study.  "Our work takes a formative step toward understanding the role of language-based explanations and how humans perceive them."

Adds Devi Parikh, an assistant professor in Georgia Tech's School of Interactive Computing, "Just like human teams are more effective when team members are transparent and can understand the rationale behind each other's decisions or opinions, human-AI teams will be more effective if AI systems explain themselves and are somewhat interpretable to a human."

Ehsan and his research team set out to solve the explainable AI problem by developing an AI agent that could offer easy-to-understand explanations to humans in certain settings.

For their research, the team—including researchers from Georgia Tech, Cornell University, and the University of Kentucky—decided to create an AI agent that would analyze and explain moves in the video game "Frogger." The game is an ideal choice for developing an AI agent, given its simplicity: the entire goal is to guide an animated frog across the screen while dodging oncoming vehicles and other animated hazards.

The researchers trained their AI agent by first asking gamers to play Frogger as they explained the rationale behind each action they took, move by move.
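
That think-aloud protocol yields a corpus pairing each game state and the action taken with the player's spoken rationale. As a rough illustration only (the field names and grid encoding here are hypothetical, not drawn from the study), a single training record might look like this:

```python
# Hypothetical illustration of one training example: a Frogger game state
# and the action taken, paired with the player's natural-language rationale.
training_record = {
    "state": {
        "frog_position": (4, 0),           # grid cell occupied by the frog
        "cars": [(1, 3), (2, 7), (3, 5)],  # positions of oncoming vehicles
        "logs": [(6, 2), (7, 8)],          # floating logs in the river rows
    },
    "action": "move_up",
    "rationale": "I moved up because the lane ahead is clear "
                 "and the nearest car is still far to the right.",
}
```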

"When it comes to generating rationales, we treat it as a machine translation problem," Ehsan says.  "Just like when translating from one language to another—for example, French to English—we need to find a mapping of one sequence of words in the source language to another sequence of words in the target language. For us, the source 'language' is the internal state of the AI agent," and the target language is English.

Subsequently, the researchers fed a record of that game play (along with the blow-by-blow explanations of each move) into their AI agent software.

"This is where the neural machine translation happens," says Ehsan.  "In essence, the network learns the associations between the state-action pair with that of the explanations. Both the encoder and decoder are recurrent neural networks comprised of Gated Recurrent Unit cell, with additional attention mechanism on the decoder side."

Test subjects evaluated the AI agent by watching a Frogger game in action as the agent generated written, play-by-play explanations of each move made in the game.

One of the key findings was that test subjects preferred a comprehensive explanation of an action taken by the program, one that gave them a frame of reference for understanding an event, over an explanation that illuminated only that specific event. Essentially, the test subjects preferred comprehensive explanations of game moves because those explanations offered information about future choices the AI might make in the game, according to Ehsan.

In the end, the test subjects still preferred human-generated descriptions of game play over descriptions offered by the AI agent, even though the test subjects were never informed which descriptions had been AI-generated and which had been provided by humans.

Looking ahead, Ehsan says his team may build on this study by exploring how AI might be used as a companion agent in task-oriented software. They are also interested in examining how explainable AI might be employed in emergency response applications, as well as in educational AI software.

"We have a ways to go before commercializing the technology," Ehsan says.  "We must keep issues of fairness, accountability, and transparency in mind first before thinking of commercialization."

Joyce Chai, a professor of computer science and engineering at Michigan State University, found the test subjects' preference for comprehensive explanations generated by the AI agent notable. "This work tackles an important problem in explainable AI. The proposed approach has demonstrated the potential to generate rationales that can mimic human justification behaviors," Chai says.

"The finding that human users predominantly prefer the rationales generated based on the entire game board—the complete view—over the ones generated only based on the neighborhood of the agent—the focused local view—is interesting.  It provides insight on what kind of contextual information should be captured to generate rationales, especially for the sequential decision making agents."

While a great deal of research into explainable AI focuses on unearthing the rationale behind deep learning image classification and visual question answering, the work by Georgia Tech and its collaborators centers on reinforcement learning, according to Chai.

"Prior work on interpreting reinforcement learning agents has attempted to explain how agents learn action policies and make decisions; for example, through visual salience maps," Chai says. "The rationales from Georgia Tech, Cornell, and Kentucky do not attempt to explain the agent's underlying decision making process, but rather, provide justifications for selected actions that can be understood by humans."

Adds Ehsan, "Our work is focused on natural language-based explanations on sequential decision-making tasks, an underexplored area in explainable AI."

Ideally, work on explainable AI from these and other researchers will make more AI applications easier to understand and trust.

"From a developer's point of view, humans must be able to understand how a model performs and why it performs the way it does before they can apply it to solve real-world problems," Chai says. "From an end user's point of view, humans must be able to understand why a decision is made in order to trust machines' predictions and recommendations," she adds.  "This is particularly important in applications where humans and AIs need to communicate and collaborate with each other to solve a joint task." 

Adds Klaus-Robert Müller, professor of machine learning at the Technical University of Berlin, "Explainable AI is one of the most important steps towards a practical application and dissemination of AI.  We need to ensure that no AI algorithms with suspect problem-solving strategies or algorithms which employ cheating strategies are used, especially in such areas as medical diagnosis or safety-critical systems."

Joe Dysart is an Internet speaker and business consultant based in Manhattan, NY, USA.


 
