In the 19th century, writing about his work on mechanical calculating devices, Charles Babbage noted, “The most constant difficulty in contriving the engine has arisen from the desire to reduce the time in which the calculations were executed to the shortest which is possible.” Roughly a century later, Daniel Slotnick wrote retrospectively about the ILLIAC IV parallel computing design, “By sacrificing a factor of roughly three in circuit speed, it's possible that we could have built a more reliable multi-quadrant system in less time, for no more money, and with a comparable overall performance.”
Babbage’s design challenged the machining and manufacturing capabilities of his day, though others have recently built a functioning engine using parts fabricated to tolerances achievable with 19th-century processes. Similarly, Slotnick’s design challenged electronics and early semiconductor fabrication and assembly. Today, of course, parallel computing designs embodying tens of thousands of processors are commonplace, built from inexpensive commodity hardware.
There is a lesson here that systems designers repeatedly ignore at their peril. Simple designs usually triumph, and the artful exploitation of mainstream technologies usually bests radical change. Or, as Damon Runyon once archly observed, “The race may not always be to the swift, nor the victory to the strong, but that's how you bet.”
All of which is to say that incrementalism wins repeatedly, right up to the point when a dislocating phase transition occurs. There are, of course, many paths to failure. One can be too early or too late. Or to put it another way, you want to be the first person to design a successful transistorized computer system, not the last person to design a vacuum tube computer. The same is true of design approaches such as pipelining, out-of-order issue and completion, superscalar dispatch, cache design, system software, and programming tools.
Any designer’s challenge is to pick the right technologies at the right time, recognizing when inflection points (maturing, disruptive technologies) are near. This is the essence of Clayton Christensen’s well-documented innovator’s dilemma.
The shift from largely proprietary high-performance computing (HPC) designs to predominantly commodity clusters a decade ago was only the most recent of these transitions. Arguably, we are near another disruptive technology point. The embedded hardware ecosystem offers one intriguing new performance-power-price point, particularly as we consider trans-petascale and exascale designs that are energy constrained. The experience of cloud providers in building massive-scale infrastructures for data analytics and on-demand computing suggests another.
As I frequently told my graduate students at Illinois, the great thing about parallel computing is that the question never changes (“How can I increase performance?”) but the answers do. Babbage would have understood.