In the early days of the automobile, there was a lively competition among disparate technologies for hegemony as the motive power source. Steam engines were common, given their history in manufacturing and locomotives, and electric vehicles trundled through the streets of many cities. The supremacy of the internal combustion engine as the de facto power source was by no means an early certainty. Yet it triumphed due to a combination of range, reliability, cost and safety, relegating other technologies to historical curiosities.
Thus, it is ironic that we are now assiduously re-exploring several of these same alternative power sources to reduce carbon emissions and dependence on dwindling global petroleum reserves. Today’s hybrid and electric vehicles embody 21st-century versions of some very old ideas.
There are certain parallels in computing to this phylogenic recapitulation of the automobile. Perhaps it is time to revisit some old ideas.
Use of the word “computer” conjures certain images and brings certain assumptions. One of them, so deeply ingrained that we rarely question it, is that computing is digital and electronic. Yet there was a time not so long ago when those adjectives were neither readily assumed nor implied when discussing computing, just as the internal combustion engine was not de rigueur in automobile design.
The alternative to digital computing – analog computing – has a long and illustrious history. Its antecedents lie in every mechanical device built to solve some problem in a repeatable way, from the sundial to the astrolabe. Without doubt, analog computing found its apotheosis in the slide rule, which dominated science and engineering calculations for more than three centuries, coexisting and thriving alongside the latecomer, digital computing.
The attraction of analog computing has always been its ability to accommodate uncertainty and continuity. As Cantor showed, the real numbers are uncountably infinite, and their discretization in a floating-point representation is fraught with difficulty. Because of this, the IEEE floating-point standard is a delicate and ingenious balance between range and precision.
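To make that balance concrete, here is a minimal Python sketch (my illustration, not part of the original argument) of how IEEE double precision trades an enormous range for roughly sixteen significant digits:

```python
import math
import sys

# IEEE 754 double precision: 64 bits divided among sign, exponent, and fraction.
print(sys.float_info.max)      # ~1.8e308: enormous range...
print(sys.float_info.epsilon)  # ~2.2e-16: ...but only ~16 significant digits

# Discretizing the uncountable reals is lossy: 0.1 has no exact binary form.
print(0.1 + 0.2 == 0.3)        # False
print(f"{0.1 + 0.2:.20f}")     # 0.30000000000000004441

# Precision is relative, not absolute: the gap between adjacent representable
# doubles grows with magnitude (math.ulp requires Python 3.9+).
print(math.ulp(1.0))           # ~2.2e-16
print(math.ulp(1e16))          # 2.0: near 1e16, adjacent doubles are 2 apart
```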
All experimental measurements have uncertainty, and quantifying that uncertainty and its propagation through digital computing models is part of the rich history of numerical analysis. Forward error propagation models, condition numbers, and stiffness all reflect this uncertainty and continuity.
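As a small worked illustration (mine, with a deliberately contrived matrix), a condition number bounds how much input uncertainty a linear system can amplify:

```python
import numpy as np

# A nearly singular 2x2 system: its condition number (~4e4) bounds how much
# relative input error can be amplified in the solution.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])

print(np.linalg.cond(A))        # ~4e4
print(np.linalg.solve(A, b))    # exact solution: [1, 1]

# Perturb one input by one part in 10^4, a plausible measurement uncertainty.
b_noisy = b + np.array([0.0, 1e-4])
print(np.linalg.solve(A, b_noisy))  # ~[0, 2]: the answer changes completely
```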
I raise the issue of analog computing because we face some deep and substantive challenges in wringing more performance from sequential execution and the von Neumann architecture model of digital computing. Multicore architectures, limits on chip power, near-threshold voltage computation, functional heterogeneity, and the rise of dark silicon are forcing us to confront fundamental design questions. Might analog computing and sub-threshold computing bring some new design flexibility and optimization opportunities?
We face an equally daunting set of challenges in scientific and technical computing at very large scale. For exascale computing, reliability, resilience, numerical stability, and confidence become problematic when input uncertainties propagate and single- or multiple-bit upsets disturb numerical representations. How can we best assess the stability and error ranges of exascale computations? Could analog computing play a role?
Please note that I am not advocating a return to slide rules or pneumatic computing systems. Rather, I am suggesting that we step back and remember that the evolution of technologies brings new opportunities to revisit old assumptions. Hybrid computing may be one possible way to address the challenges we face on the intersecting frontiers of device physics, computer architecture and software.
A brave new world is aborning. Might there be a hybrid computer in your hybrid vehicle?
Digital computers have one fatal shortcoming – they use a clock (or trigger) to change from one state to another. During (or absent) a clock trigger, they are deaf, dumb, and blind. The more precision in time that is demanded, the more state cycles are forced. Analog devices derive their end functions directly from input conditions without having to walk through the state changes to get there. If we can ever get over the fact that precision is not accuracy, analog systems may make a comeback. (Digital air data computers are one of the logical absurdities – aerodynamic data has no business going from analog physics through digital math to analog output (control position) when none of the steps in between have any need of step-wise state-change computation.)
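To illustrate the commenter's point with a hypothetical sketch (the equation and step counts are my own choices): a digital solver must march through discrete state changes to approximate what an ideal analog integrator would settle to directly.

```python
import math

# Digital approach to y' = -y, y(0) = 1: forward Euler marches through
# discrete states, one clocked update per step.
def euler(steps, t_end=5.0):
    dt, y = t_end / steps, 1.0
    for _ in range(steps):   # between these updates the machine "sees" nothing
        y += dt * (-y)
    return y

exact = math.exp(-5.0)       # what an ideal analog integrator would settle to
for steps in (10, 100, 10000):
    print(steps, abs(euler(steps) - exact))
# Demanding more temporal precision forces more state cycles; a continuous
# integrator has no step count to tune.
```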
You point out that "computer" these days implies digital and electronic. That's certainly true, but "digital vs analog" is not the only dimension to question in the meaning of "computer". The "electronic" part has not always been there either, and in ways other than devices like the astrolabe or the slide rule that you mention.
You can also ask: "tool" or biological? Here's the first definition of "computer" from the (current!) Oxford English Dictionary: "A person who makes calculations or computations; a calculator, a reckoner". Isaac Newton had a computer - a person who was employed to do calculations for him.
Why do I bring this up? Does it help us solve new and interesting problems like your discussion of analog computers might? No, probably not – but as someone who works in the field of computer science, this is the way I view the term "computer" in "computer science": something that computes, whether it is digital or analog, electronic or mechanical, machine or human (and maybe even questions we haven't thought to ask yet). And if we stressed that a little more often when talking about the field of computer science, people might have a better understanding of the science that is studied (and stop asking us to fix their Windows machines because we're computer scientists).
It may not be known to this community, but some of us have been doing what Daniel Reed's article suggests. A single-chip analog computer that can solve differential equations up to 80th order, often faster than a digital computer and without any convergence problems, was described a few years ago: see G. Cowan, R. Melville, and Y. Tsividis, "A VLSI analog computer / digital computer accelerator," IEEE Journal of Solid-State Circuits, vol. 41, no. 1, pp. 42-53, January 2006. There is a lot more that can be done in this area; a simulated sketch of the integrator-loop idea appears below.
Yannis Tsividis
Columbia University
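As a rough illustration of the idea behind such machines (a digital emulation sketched for exposition, not the Cowan–Melville–Tsividis design), an analog computer solves y'' = -y by wiring two integrators into a feedback loop:

```python
import math

# Emulate the classic two-integrator analog patch for y'' = -y, y(0)=1, y'(0)=0.
# On a real analog computer the loop evolves continuously; tiny digital steps
# stand in for that continuity here.
def analog_patch(t_end=2 * math.pi, dt=1e-4):
    y, v = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        v += dt * (-y)   # integrator 1 accumulates -y into v (= y')
        y += dt * v      # integrator 2 accumulates v into y
    return y

print(analog_patch())    # ~1.0, i.e. cos(2*pi): one full oscillation period
```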
Great post!
A study on modern workloads that can be improved with hybrid analog-digital computers is here:
http://www.cs.columbia.edu/~simha/hdcacase.pdf
Simha
One of the best examples of "hybrid computing" is found in the amazing, awe-inspiring combination of digital and analog technology involved in DNA expression, the RNA interpreter language in the fabrication of versatile, ubiquitous proteins and workhorse micromolecules, and the use of analog sensors and outputs to regulate the biological cell. --Alan Cassidy
I think George Dyson's Edge comment on analog computing is relevant, adding the computational capacity of, for example, social networks to the discussion:
"Complex networks of molecules, people, or ideas constitute their own simplest behavioral descriptions. This behavior can be more easily and accurately approximated by continuous, analog networks than it can be defined by digital, algorithmic codes. These analog networks may be composed of digital processors, but it is in the analog domain that the interesting computation is being performed."
http://edge.org/response-detail/782/what-scientific-concept-would-improve-everybodys-cognitive-toolkit