
Communications of the ACM

ACM TechNews

Expert: Artificial Intelligence Systems More Apt to Fail Than to Destroy



Oregon State University professor and Association for the Advancement of Artificial Intelligence president Thomas Dietterich says he is more concerned about potential ways artificial intelligence (AI) might fail or be abused by people than he is about it wiping out humanity. Dietterich is referring to a recent $10 million contribution made by Elon Musk to the Future of Life Institute to support AI safety research. Dietterich says he agrees with Musk that AI safety issues need to be addressed, but disagrees about the potential magnitude of those issues.

Dietterich says he is more worried about the ways AI, which is increasingly trusted with high-stakes tasks such as driving cars or handling weapons, could fail or make mistakes that put people at risk. "We need to be conscious of this risk and create systems that can still function safely even when the AI components commit errors," he says.

Dietterich also is concerned that powerful AI could be used to carry out cyberattacks on computer networks. "The biggest risk is that those algorithms may not always work," he says. He believes Musk's contribution will help to tackle several of these issues by enabling further research through open grant competitions.

From Oregon State University

Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA


 
