
Communications of the ACM

ACM TechNews

The Dark Secret at the Heart of AI


No one really knows how the most advanced algorithms do what they do. That could be a problem.

Pondering the mysteries inherent in artificial intelligence.
Credit: Keith Rankin

Some experts warn that deep-learning artificial intelligence (AI) technologies should not be adopted without reservations if their creators cannot understand how they reason or guarantee accountability to users.

"You don't want to just rely on a 'black box' method," says Massachusetts Institute of Technology (MIT) professor Tommi Jaakkola.

As AI technology progresses, users increasingly face a leap of faith in relying on systems whose reasoning they cannot inspect. Some researchers are attempting to introduce "explainability" to AI to instill trust.

MIT professor Regina Barzilay thinks human-machine collaboration will go a long way toward implementing explainability; one project in this area seeks to develop a deep-learning algorithm that can detect early signs of breast cancer in mammograms. However, this strategy cannot escape the fact that explanations are simplifications, meaning some information may be lost along the way.
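
To make that trade-off concrete, here is a minimal sketch of one common style of post-hoc explanation: fitting a simple linear surrogate to a black-box model in the neighborhood of a single prediction. The `black_box` function and feature names are hypothetical stand-ins, not the mammography system described above; the point is that the linear surrogate is readable but, by construction, discards whatever nonlinear structure the real model actually uses.

```python
import numpy as np

def black_box(x):
    # Hypothetical stand-in for an opaque deep model: a fixed nonlinear score.
    return np.tanh(2.0 * x[..., 0] - x[..., 1] ** 2 + 0.5 * x[..., 2])

def explain_locally(model, x0, n_samples=500, radius=0.1, seed=0):
    """Fit a linear surrogate to `model` around `x0`; return per-feature weights.

    The weights are the "explanation": how much each feature pushes the
    prediction up or down near x0. Because the surrogate is linear, it
    necessarily drops the nonlinear interactions the real model may rely on,
    which is the information loss the article warns about.
    """
    rng = np.random.default_rng(seed)
    X = x0 + radius * rng.standard_normal((n_samples, x0.shape[0]))  # perturb x0
    y = model(X)                                                     # query the black box
    A = np.hstack([X, np.ones((n_samples, 1))])                      # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)                     # least-squares fit
    return coef[:-1]                                                 # weights, intercept dropped

x0 = np.array([0.3, -0.2, 0.8])
for name, w in zip(["feature_0", "feature_1", "feature_2"],
                   explain_locally(black_box, x0)):
    print(f"{name}: {w:+.3f}")
```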

University of Wyoming professor Jeff Clune speculates some aspects of machine intelligence will always be instinctual or inscrutable.

From Technology Review

Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA


 
