
Communications of the ACM

ACM News

Cognitive Computers Work Like the Human Brain


neural network chip, illustration
Credit: iStockPhoto.com

Today, major corporations commonly use neural networks to perform tasks that emulate human thinking.

The first of these tasks was voice recognition, but now dozens of neural networks, realized in both software and hardware, are acquiring all sorts of cognitive capabilities: diagnosing previously undiagnosable diseases, beating humans at chess, poker, and the game of Go, and besting human champions on the TV game show Jeopardy!, as IBM's Watson computing system did. IBM, in fact, has added an extension called CognizeR to the R statistical programming language that gives programmers direct access to Watson's cognitive computing capabilities.

"Neural networks have been used to dramatically improve a number of different technologies," says Said Dave Schubmehl, research director of Cognitive Systems and Content Analytics at IDC. "Speech recognition is now much more accurate due to neural networks. Image and video recognition has improved tremendously due to them as well, and the ability to recognize shapes, objects, and patterns within video and pictures is due primarily to them as well. Neural networks are being used to categorize and identify many different types of data, from television listings to legal e-discovery. They're also being used to provide predictive answers such as when machinery needs maintenance or when oil pipes need cleaning,"

Every major software house now offers cognitive computing software: neural networks that emulate the human brain.

Microsoft's neural network algorithms and its Computational Network Toolkit enable programmers to train models for tasks that are difficult for engineers to program manually, such as recognizing where faces appear in photographs and determining which images are of the same person.

Apple is taking a more diversified tack with its Basic Neural Network Subroutines, which enable programmers to use Apple's ready-made cognitive algorithms to build their own learning applications for Mac desktop computers and iOS smartphones and tablets. Apps using Apple's neural networks are being sold in the App Store for many recognition problems, such as identifying the species of a flower in a photo taken with an iPhone camera.

"Cognitive/AI systems are replacing tables and heuristics in enterprise applications," Schubmehl says. "We're seeing AI being embedded in all sorts of applications providing recommendations and predictions from sales enablement and predicting 'next best action,' to automated contract review and logistics optimization. We’re also seeing a wave of new conversational AI-based assistants coming to market for customer service, sales, and other applications that have been traditionally the realm of interactive voice response systems or humans. These assistants will be available to aid and advise workers on topics from medical diagnoses to what kinds of problems a car or truck is having. Office-based assistants will search out information for your projects, help you to organize and structure your reports, assist in making contacts with experts in your organization (and outside), and generally help you in your day-to-day activities. Almost all enterprise software will include these kinds of capabilities as part of their offering."

Many companies are enticing users to take advantage of their brain-like, neural-network-based cognitive algorithms to build sophisticated applications never before possible. Facebook, Google, and Twitter, for instance, are using Torch; Google's Brain Team is also using TensorFlow. Universities are releasing open-source software as well, such as the University of California, Berkeley's Caffe, which is described in the paper "Caffe: Convolutional Architecture for Fast Feature Embedding."

These cognizers are intended to allow users to train on any kind of data (voice, images, numerical data, unstructured text) and come up with algorithms that can recognize and identify instances they have never seen before. This means you need a lot of data with which to train, and you need to hold some of it in reserve to test with afterward, to see whether your recognition algorithm works on cases it has never seen, as the sketch below illustrates.
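As a concrete illustration, here is a minimal sketch of that train-then-hold-out workflow using TensorFlow's Keras API, one of the toolkits named above. The synthetic dataset, layer sizes, and training settings are illustrative assumptions, not any vendor's recommended configuration.

    # Minimal sketch: train a small network on most of the data, then
    # evaluate it on a held-out slice it has never seen. The synthetic
    # dataset and the layer sizes are illustrative assumptions.
    import numpy as np
    import tensorflow as tf

    rng = np.random.default_rng(0)
    x = rng.normal(size=(1000, 20)).astype("float32")  # 1,000 examples, 20 features
    y = (x.sum(axis=1) > 0).astype("float32")          # a toy binary label

    # Hold 20% of the data in reserve for testing.
    x_train, y_train = x[:800], y[:800]
    x_test, y_test = x[800:], y[800:]

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5, verbose=0)

    # Accuracy on examples the network never saw during training.
    loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
    print(f"held-out accuracy: {accuracy:.2f}")

If the held-out accuracy falls far below the training accuracy, the network has merely memorized its training data rather than learned a pattern that generalizes.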

Hewlett Packard is taking an even easier road for programmers doing common recognition tasks, supplying them with its Vertica analytics platform, which runs atop Apache Hadoop. Vertica has been pre-trained for common tasks such as finding faces in an image, but it also offers simple subroutines that mine a database for almost any recognition task.

The big names in semiconductors, including Intel, Qualcomm, Nvidia, IBM, and ARM, are all designing specialized neural network hardware to speed up execution and to shrink the footprint of software that today needs cloud computing resources, including supercomputers, to run its millions of lines of code.

IBM made a splash with its TrueNorth neural network chip, which is already available in systems designed in cooperation with Lawrence Livermore National Laboratory (LLNL) to help safeguard the U.S. nuclear arsenal. The biggest supercomputers in the world have had the job of simulating nuclear explosions (to make sure aging U.S. warheads remain functional) since underground testing was banned in the 1990s. However, even the fastest supercomputers have had to approximate the task to get the job done in a reasonable amount of time. Now the TrueNorth neural network, running as an attached processor to LLNL's Sequoia supercomputer, aims to accelerate the process enough to permit extremely detailed simulations of the aging U.S. arsenal.

Not to be outdone, Intel acquired neural network expertise this year by buying Nervana Systems, developer of the brain-inspired Nervana Engine. Intel promises to optimize its Math Kernel Library for the Xeon and massively parallel Xeon Phi supercomputer processors so they can directly access the Nervana Engine.

At the other end of the spectrum, from supercomputers down to smartphones, Qualcomm has announced its Snapdragon Neural Processing Engine and a Machine Learning Software Development Kit to program it. Using what Qualcomm dubs its Zeroth cognitive computing platform, the company aims to enable its latest Snapdragon processors to perform cognitive functions, such as speech recognition, on the handset itself, rather than requiring a broadband connection to a cloud computer, as is the case today.
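In essence, on-device recognition means running a trained network's forward pass locally instead of shipping the data to a server. The sketch below illustrates the idea in plain Python with NumPy; the weights, layer sizes, and keyword-detector framing are hypothetical stand-ins for illustration, not Qualcomm's actual Neural Processing Engine interface.

    # Minimal sketch of on-device inference: a tiny, already-trained
    # network's forward pass runs locally, with no round trip to the cloud.
    # The weights and the keyword-detector framing are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)

    # Pretend these weights were trained offline and shipped with the app.
    W1 = rng.normal(scale=0.1, size=(40, 16))  # 40 audio features -> 16 hidden units
    b1 = np.zeros(16)
    W2 = rng.normal(scale=0.1, size=(16, 2))   # 2 classes: keyword / not keyword
    b2 = np.zeros(2)

    def detect_keyword(features):
        """Run one frame of audio features through the network, entirely on-device."""
        hidden = np.maximum(features @ W1 + b1, 0.0)  # ReLU hidden layer
        logits = hidden @ W2 + b2
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                          # softmax over the 2 classes
        return probs[1] > 0.5                         # True if the keyword is detected

    frame = rng.normal(size=40)  # one frame of (synthetic) audio features
    print(detect_keyword(frame))

A model this small is only a couple of matrix multiplies per frame, well within a phone's memory and power budget; dedicated silicon like Qualcomm's engine pushes that economy further.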

ARM, recently purchased by Japan's SoftBank, allows users to create neural network versions of its microprocessor cores, which it licenses rather than manufactures. For instance, the University of Manchester's Spiking Neural Network Architecture (SpiNNaker) is built from ARM processor cores. Using funds from the European Union's Human Brain Project, the university plans to combine up to a million tiny ARM cores to simulate the performance of the human brain.

Likewise, Nvidia, the designer of massively parallel graphics processing units (GPUs), does not make neural networks itself, but it offers dozens of machine learning tools to help users create hardware-accelerated neural networks and other cognitive computing applications using its GPUs.

R. Colin Johnson is a Kyoto Prize Fellow who works as a technology journalist.


 
