The complexity of state-of-the-art artificial intelligence (AI) systems means the reasoning behind any particular conclusion the software reaches is often opaque, even to its own creators.
This opacity can hurt developers trying to sell AI systems, because it is hard for customers to trust a system they cannot understand.
This is especially true in fields such as healthcare, finance, and law enforcement. Regulation is also driving companies to seek out more explainable AI; in Europe, for example, the General Data Protection Regulation gives citizens a "right to a human review" of algorithmic decisions that significantly affect them.
IBM recently surveyed 5,000 businesses about using AI and found that 82% of respondents wanted to do so, but 66% were reluctant to proceed because of a lack of explainability.
As a result, software vendors and information technology systems integrators have started advertising their ability to give customers insights into how AI programs work.
From Bloomberg