The ability of the human brain to process massive amounts of information while consuming minimal energy has long fascinated scientists. When the need arises, the brain dials up computation, then rapidly reverts to a baseline state. Silicon-based computing has never achieved anything close to this efficiency: processing large volumes of data requires enormous amounts of electrical energy. When artificial intelligence (AI) enters the picture, along with the machine learning and deep learning techniques that power it, the problem grows dramatically worse.
Emerging neuromorphic chip designs may change all of this. The concept of a brain-like computing architecture, conceived in the late 1980s by California Institute of Technology professor Carver Mead, is suddenly taking shape. Neuromorphic frameworks pair radically different chip designs with algorithms that mimic the way the human brain works, while consuming only a fraction of the energy of today's microprocessors. The model takes direct aim at a core inefficiency of conventional computing, the von Neumann bottleneck: because processor and memory are separate, the processor sits idle while data shuttles back and forth between them. That data movement, rather than computation itself, slows systems down and caps the performance of more demanding applications.
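The bottleneck is easy to observe on any conventional machine. The sketch below (an illustration only, not taken from any neuromorphic toolchain; the function names and array sizes are arbitrary choices) performs the same total number of additions two ways: streaming once through a large array that must be fetched from main memory, and repeatedly updating a small array that fits in the processor's cache. On typical hardware the cached version runs several times faster, even though the arithmetic is identical, because the large-array version spends most of its time waiting on memory traffic.

```python
import time
import numpy as np

def bandwidth_bound(n=20_000_000):
    """One pass over large arrays: the processor waits on DRAM traffic,
    so throughput is set by memory bandwidth, not by the adder."""
    a = np.ones(n, dtype=np.float64)
    b = np.ones(n, dtype=np.float64)
    t0 = time.perf_counter()
    c = a + b          # n additions, but roughly 3n * 8 bytes of memory traffic
    dt = time.perf_counter() - t0
    return n / dt      # additions per second

def cache_bound(n=50_000, reps=400):
    """The same total number of additions on a ~400 KB working set that
    stays in cache: far less main-memory traffic, much higher throughput."""
    a = np.ones(n, dtype=np.float64)
    b = np.ones(n, dtype=np.float64)
    t0 = time.perf_counter()
    for _ in range(reps):
        a += b         # in-place update keeps the working set cache-resident
    dt = time.perf_counter() - t0
    return n * reps / dt

if __name__ == "__main__":
    print(f"large array (memory-bound): {bandwidth_bound():.2e} adds/sec")
    print(f"small array (cache-bound) : {cache_bound():.2e} adds/sec")
```

The exact ratio varies by machine, but the gap is the von Neumann bottleneck in miniature; neuromorphic designs attack it by collapsing the distance between memory and computation, much as the brain computes where its "data" is stored.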