
Communications of the ACM

India Region Special Section: Hot Topics

Toward Explainable Deep Learning


[Illustration: workers with machinery surround an oversized robotic head. Credit: NITI Aayog]

Deep learning (DL) models have enjoyed tremendous success across application domains within the broader umbrella of artificial intelligence (AI) technologies. However, their "black-box" nature, coupled with their extensive use across application sectors—including safety-critical and risk-sensitive ones such as healthcare, finance, aerospace, law enforcement, and governance—has created an increasing need for explainability, interpretability, and transparency of decision-making in these models.11,14,18,24 With the recent progression of legal and policy frameworks that mandate explaining decisions made by AI-driven systems (for example, the European Union's GDPR Article 15(1)(h) and the Algorithmic Accountability Act of 2019 in the U.S.), explainability has become a cornerstone of responsible AI use and deployment. In the Indian context, NITI Aayog recently released a two-part strategy document on envisioning and operationalizing Responsible AI in India,15,16 which places significant emphasis on the explainability and transparency of AI models. Explainability of DL models lies at the human-machine interface, and different users may expect different explanations in different contexts. A data scientist may want an explanation to help improve the model; a regulator may want one to verify the fairness of decision-making; and a customer support agent may want one to respond appropriately to a customer query. This subjectivity necessitates a multipronged technical approach, so that a suitable method can be chosen for a specific application and user context.

Researchers across academic and industry organizations in India have explored the explainability of DL models in recent years. A specific catalyst of these efforts was the development of explainable COVID-19 risk prediction models to support decision-making during the pandemic over the last two years.10,12,17 Noteworthy efforts from research groups in India have focused on the transparency of DL models, especially in computer vision and natural language processing. Answering the question "Which part of the input image or document did the model look at while making its prediction?" is essential to validate DL model predictions against human understanding, and thereby increase the trust of human users in model predictions. To this end, methods have been developed to provide saliency maps (regions of an image a DL model looks at while making a prediction) through gradient-based19 and gradient-free6 approaches in computer vision. Similar methods to provide transparency in attention-based language models13 have also been proposed. Looking forward, toward next-generation AI systems that can reason and strategize, Indian researchers have also addressed the integration of commonsense reasoning in language models,2 as well as obtaining model explanations through logic and neurosymbolic reasoning.1,21,22 Industry researchers in India have also led and contributed to the development of practical software toolkits for explainability and their use in AIOps.3,4
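The saliency-map idea described above is straightforward to prototype. What follows is a minimal, illustrative sketch of a vanilla gradient-based saliency map in PyTorch, assuming a generic pretrained torchvision ResNet-18 and a random stand-in image; it is not the specific method of any work cited here.

    # Minimal sketch of a gradient-based saliency map ("vanilla gradients").
    # Assumptions: torch and torchvision are installed; the random tensor
    # below stands in for a real preprocessed image of shape (1, 3, 224, 224).
    import torch
    from torchvision import models

    def saliency_map(model, x):
        # Track gradients with respect to the input pixels.
        x = x.clone().requires_grad_(True)
        logits = model(x)
        top_class = logits.argmax(dim=1).item()
        # Backpropagate the top-class score to the input.
        logits[0, top_class].backward()
        # Saliency = largest absolute gradient across the color channels.
        return x.grad.abs().max(dim=1).values.squeeze(0)  # shape (H, W)

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed photo
    print(saliency_map(model, x).shape)    # torch.Size([224, 224])

Gradient-based methods used in practice, such as Grad-CAM and its variants, aggregate gradients at an intermediate convolutional layer rather than at the raw pixels, which tends to yield smoother, class-discriminative heatmaps.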

