
Communications of the ACM

News

AI, Explain Yourself


[Image: question mark on a mobile phone screen. Credit: Letters-Shmetters]

Artificial Intelligence (AI) systems are taking over a vast array of tasks that previously depended on human expertise and judgment. Often, however, the "reasoning" behind their actions is unclear, and they can produce surprising errors or reinforce biased processes. One way to address this issue is to make AI "explainable" to humans: for example, to designers who can improve it, or to users who can better judge when to trust it. Although the best styles of explanation for different purposes are still being studied, they will profoundly shape how future AI is used.

Some explainable AI, or XAI, has long been familiar, as part of online recommender systems: book purchasers or movie viewers see suggestions for additional selections described as having certain similar attributes, or being chosen by similar users. The stakes are low, however, and occasional misfires are easily ignored, with or without these explanations.
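The "chosen by similar users" style of explanation can be made concrete with a small sketch. The following Python toy is purely illustrative (the data, names, and scoring rule are assumptions, not drawn from any real recommender): it suggests items a user does not yet own and attaches a one-line explanation derived from overlapping purchase histories.

```python
# Minimal sketch (hypothetical data): a toy item recommender that attaches a
# human-readable explanation of the "chosen by similar users" kind.

from collections import defaultdict

# Toy purchase history: user -> set of items they bought (assumed data).
purchases = {
    "alice": {"Dune", "Foundation", "Hyperion"},
    "bob":   {"Dune", "Foundation", "Neuromancer"},
    "carol": {"Dune", "Snow Crash"},
}

def recommend_with_explanation(user, history):
    """Suggest items the user lacks, explained by overlap with similar users."""
    owned = history[user]
    support = defaultdict(list)           # candidate item -> supporting users
    for other, items in history.items():
        if other == user or not (owned & items):
            continue                      # skip the user and non-overlapping users
        for item in items - owned:
            support[item].append(other)
    # Rank candidates by how many overlapping ("similar") users chose them.
    ranked = sorted(support.items(), key=lambda kv: len(kv[1]), reverse=True)
    return [
        (item, f"chosen by {len(users)} user(s) with similar purchases")
        for item, users in ranked
    ]

for item, why in recommend_with_explanation("alice", purchases):
    print(f"Recommend {item}: {why}")
```

Real systems use far richer similarity measures; the point of the sketch is only that the explanation is generated from the same evidence that produced the ranking.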


Comments


Martin Smith

I worry that demanding AIs explain their decisions in terms we find understandable will lead to their lying to us. Perhaps "lying" is a little hyperbolic, but "telling human-style stories" is very close to that. Humans explain to each other in terms of stories that "make sense" and deal with a small number of facts/factoids based on which a decision or result seems logically inevitable.

Facts/factoids that don't fit the logic of the narrative are omitted from the "explanation." To appreciate how important the selection and interpretation of a given set of facts can be, consider the context of a court case. The contesting parties routinely build quite different narratives leading "logically" from the same set of admissible evidence. Consider also the observation of Nassim Taleb ("Black Swan", the financial book, not the movie about intrigue at the ballet): "We like to simplify, i.e., to reduce the dimension of matters. We prefer compact stories over raw truths." Who writes a memo or makes a business proposal that goes into detail on the unknowns, contraindications, or risks of the recommended action?

For any AI decision that has major consequences, it seems certain that the AI developer's approach to explanation will consist of a justification of the decision in terms of applicable law, regulation, or policy. And I expect AIs will be really good at finding and marshaling statutory provisions and case law that support any given decision. Is that what we want in the way of "explanation"?


