The continuation of Moore's Law (the observation that the number of devices that can be economically placed on a processor chip doubles roughly every two years) will mainly take the form of a growing population of cores, but software must be extensively rewritten to exploit those cores.
"We have to reinvent computing, and get away from the fundamental premises we inherited from [John] von Neumann," says Microsoft technical fellow Burton Smith. "He assumed one instruction would be executed at a time, and we are no longer even maintaining the appearance of one instruction at a time."
Although vendors offer the possibility of higher performance by adding more cores to the central processing unit, realizing that performance depends on software that is aware of those cores and uses them to run code segments in parallel.
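As a minimal sketch of what that assumption means in practice (not from the article; the work function and inputs are hypothetical placeholders), parallel gains only appear when the code explicitly distributes its segments across the available cores:

```python
# Minimal sketch: spreading independent, CPU-bound work across cores.
from concurrent.futures import ProcessPoolExecutor
import os

def work(n):
    # Stand-in for a CPU-bound code segment.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [2_000_000] * 8
    # Serial execution uses one core no matter how many are present.
    serial = [work(n) for n in inputs]
    # Parallel execution helps only because the program explicitly
    # hands the segments to a pool sized to the available cores.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        parallel = list(pool.map(work, inputs))
    assert serial == parallel
```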
However, Amdahl's Law dictates that the expected speedup from parallelization is 1 divided by the sum of the fraction of the task that cannot be parallelized and the parallelized fraction divided by its own speedup. "It says that the serial portion of a computation limits the total speedup you can get through parallelization," says Adobe Systems' Russell Williams.
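For concreteness, here is a short sketch of that arithmetic; the 90% parallel fraction and the core counts are illustrative numbers, not figures from the article:

```python
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n),
# where p is the fraction of the task that can be parallelized
# and n is the speedup of the parallelized portion (e.g., core count).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the work parallelized, the 10% serial portion
# caps the total speedup well below the core count.
for cores in (2, 4, 8, 16):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
# 2 -> 1.82, 4 -> 3.08, 8 -> 4.71, 16 -> 6.4
```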
Consultant Jim Turley maintains that consumer operating systems overall "don't do anything very smart" with multiple cores, and he points out that the ideal tool, a compiler that takes older source code and distributes it across multiple cores, remains elusive. Consumers are adjusting to multicore faster than application vendors are, with hardware vendors saying that today's buyers count cores rather than gigahertz.
From Computerworld