
Communications of the ACM

ACM TechNews

AI Is Explaining Itself to Humans. It's Paying Off.


A robot tries to explain itself.

U.S. consumer protection regulators including the Federal Trade Commission have warned over the last two years that AI that is not explainable could be investigated.

Credit: Alamy

Startups and major technology companies are investing heavily in explainable artificial intelligence (XAI), as U.S. and EU regulators campaign for fairness and transparency in automated decision-making.

XAI advocates say it has helped make AI more effective in fields such as healthcare and sales.

Microsoft saw LinkedIn subscription revenue increase 8% after equipping its sales team with CrystalCandle software, which flags clients at risk of cancellation and explains its reasoning.

Skeptics say an AI’s explanations of why it made the predictions it did are still too unreliable.

LinkedIn says an algorithm's integrity cannot be judged without understanding its reasoning; tools like CrystalCandle, for example, could help physicians learn why an AI predicts a patient is at greater risk of disease.
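The abstract does not describe how CrystalCandle computes its explanations. As a rough, purely hypothetical sketch (not LinkedIn's method), the snippet below shows one common way a risk score can "explain its reasoning": for a linear churn model, the score decomposes exactly into per-feature contributions that can be read as reasons. The feature names and data are invented for illustration.

```python
# Minimal sketch of prediction explanation via per-feature contributions.
# For a logistic regression, the logit is intercept + sum(coef_i * x_i),
# so each term is that feature's contribution to the risk score.
# All names and data below are hypothetical, not from the article.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["logins_last_30d", "support_tickets", "seats_unused", "tenure_years"]

# Hypothetical training data: 500 accounts, binary churn label.
X = rng.normal(size=(500, len(features)))
y = (X @ np.array([-1.5, 1.0, 0.8, -0.5]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one account's churn-risk score as a sum of per-feature contributions.
account = X[0]
contributions = model.coef_[0] * account
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>18}: {c:+.2f}")
print(f"{'intercept':>18}: {model.intercept_[0]:+.2f}")
```

For nonlinear models, commercial XAI tools typically rely on more general attribution techniques, but the output has the same flavor: a ranked list of factors pushing an individual prediction up or down.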

From Reuters
View Full Article

 

Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA


 

