Artificial Intelligence (AI) systems are taking over a vast array of tasks that previously depended on human expertise and judgment. Often, however, the "reasoning" behind their actions is unclear, and they can produce surprising errors or reinforce biased processes. One way to address this issue is to make AI "explainable" to humans, for example so that designers can improve it or users can better judge when to trust it. Although the best styles of explanation for different purposes are still being studied, they will profoundly shape how future AI is used.
Some explainable AI, or XAI, has long been familiar as part of online recommender systems: book purchasers or movie viewers see suggestions for additional selections described as having similar attributes, or as being chosen by similar users. The stakes are low, however, and occasional misfires are easily ignored, with or without these explanations.
I worry that demanding that AIs explain their decisions in terms we find understandable will lead to their lying to us. Perhaps "lying" is a little hyperbolic, but "telling human-style stories" comes very close to it. Humans explain things to each other in terms of stories that "make sense" and rest on a small number of facts/factoids from which a decision or result seems logically inevitable.
Facts/factoids that don't fit the logic of the narrative are omitted from the "explanation." To appreciate how much the selection and interpretation of a given set of facts can matter, consider the context of a court case. The contesting parties routinely build quite different narratives leading "logically" from the same set of admissible evidence. Consider also the observation of Nassim Taleb ("The Black Swan," the financial book, not the movie about intrigue at the ballet): "We like to simplify, i.e., to reduce the dimension of matters. We prefer compact stories over raw truths." Who writes a memo or makes a business proposal that goes into detail on the unknowns, contraindications, or risks of the recommended action?
For any AI decision that has major consequences, it seems certain that the AI developer's approach to explanation will consist of a justification of the decision in terms of applicable law, regulation, or policy. And I expect AIs will be very good at finding and marshaling statutory provisions and case law that support any given decision. Is that what we want in the way of "explanation"?