
Communications of the ACM

ACM TechNews

Artificial Intelligence Is Already Weirdly Inhuman


[Image: machine learning, illustration. Credit: smartnoob]

Artificial intelligence (AI) systems such as neural networks can behave in ways humans find incomprehensible, and not knowing why they behave as they do is a challenge that must be resolved if AI is to be made predictable, especially when it fails. For example, a neural net may classify two pictures of the same subject that differ only slightly as two different subjects. Unlike with humans, it is currently impossible to determine why an AI makes such errors, so researchers cannot reverse-engineer the process.
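A toy sketch (not from the article) can make this concrete. For a high-dimensional linear classifier, a per-pixel nudge far too small to notice can flip the predicted label, in the spirit of the misclassification the paragraph describes; the classifier, weights, and sizes below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 28 * 28                # pretend inputs are 28x28 grayscale images
w = rng.normal(size=d)     # weights of a hypothetical trained linear classifier
x = rng.normal(size=d)     # an input; the sign of the score is the label

score = w @ x
# Pick the smallest uniform per-pixel nudge guaranteed to flip the sign.
eps = 1.1 * abs(score) / np.abs(w).sum()
x_adv = x - np.sign(score) * eps * np.sign(w)

print(f"largest per-pixel change: {eps:.4f}")   # tiny next to typical |pixel| ~ 0.8
print(f"label flips: {np.sign(score):+.0f} -> {np.sign(w @ x_adv):+.0f}")
```

Because the nudge is spread across hundreds of pixels, each individual change is minuscule, yet their aligned sum overwhelms the classifier's margin, so the two nearly identical "pictures" land on opposite sides of the decision boundary.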

"We need to be prepared to accept that computers, even though they're performing tasks that we perform, are performing them in ways that are very different," says Solon Barocas, a postdoctoral research associate at the Center for Information Technology Policy at Princeton University. The layered architecture of a neural net, which is trained by human programmers, enables processing that can detect patterns in vast volumes of data and match those patterns to the right images. However, this schematic means errors or misidentification cannot be explained because humans cannot yet determine what computer-created rules or criteria the AI is following.

Various research teams have created methods that make neural nets reveal what their layers, and even individual neurons, are doing when they perform an operation. University of Wyoming professor Jeff Clune thinks the quirks of neural-net cognition can lead to fascinating insights into how computers think.
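One such method, often called activation maximization, synthesizes the input that most excites a chosen neuron so researchers can see what pattern it has learned to detect. The one-layer setup, sizes, and step count below are simplifying assumptions, not the specific techniques the article summarizes.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(16, 784))   # one hidden layer of a hypothetical trained net
neuron = 3                       # the individual unit we want to understand

x = 0.01 * rng.normal(size=784)  # start from a near-blank "image"
for _ in range(100):
    grad = W[neuron]             # gradient of this neuron's activation w.r.t. x
    x += 0.1 * grad              # gradient ascent: make the neuron fire harder
    x /= np.linalg.norm(x)       # keep the image's overall intensity fixed

# x now shows the pattern the neuron responds to (here, its weight vector).
print(np.corrcoef(x, W[neuron])[0, 1])   # correlation near 1.0
```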

From Nautilus

 

Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA


 
