
Communications of the ACM

The Profession of IT

Don't Feel Bad If You Can't Predict the Future


[Image: answer ball. Credit: Brian Greenberg / Andrij Borys Associates]

"The Machine That Would Predict the Future." An article of that title appeared in the December 2011 issue of Scientific American. It suggested that advances in big data and supercomputing will finally enable the old dream of an automated oracle. It set me to reflecting on what machines we already have available for forecasting and what our track record with them is. It also reminded me of a predicament I have faced many times as a professional when asked to make forecasts: When can I offer forecasts that others can trust? When should I refrain?


The Work of Futurists

I began by inquiring into the work of the professionals who get paid for their forecasts.2 Forecasting the future became a profession in the 1940s. Most professional futurists see their mission as investigating how social, demographic, economic, and technological developments will shape the future. They advise on global trends, plausible scenarios, emerging market opportunities, and risk management. They are heavy users of information technology. Futurists rely on three main methods.

Revelation of current realities. Often we are oblivious or blind to what is going on around us. We operate with interpretations of the world that are unsupported by evidence. Futurists gather data and propose new interpretations grounded in that data. They then examine how policy and action might change to align with that reality. For many people, simply being shown what is already going on around them is a revelation of the future.

Peter Drucker was a master at this. His book The New Realities (Harper Business, 1989) is loaded with examples. In his chapter "When the Russian Empire Is Gone," he analyzed economic data, the conversations of politicians and the media, and the moods of Soviet citizens to conclude that the Soviet Union would soon fall. It did: the Soviet empire collapsed within a few years of the book's publication, even sooner than he expected.

Drucker was once asked what his method of forecasting was. He replied that he made no forecasts; he simply looked at the current realities and told people what the consequences were. When pressed to make long-term forecasts, he offered probability estimates based on past history.

Modeling. A model is a set of equations or simulations that takes some observed variables (parameters) of a system and computes other values (metrics). A validated model is one whose track record shows consistently good agreement between computed and actual metrics. A validated model can be used for forecasting by asserting that its assumptions will still hold in a future period and setting its parameters to the values expected in that period. The forecast will be in error if the model's assumptions do not hold or if the parameter estimates are wrong. Such models have long been used in the sciences and engineering to describe natural recurrences.
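
To make this parameters-to-metrics view concrete, here is a minimal sketch in Python; the linear "model," the data, and the tolerance are hypothetical illustrations, not anything from a real forecasting system:

    # A model maps observed parameters to computed metrics.
    # Validation = consistent agreement between computed and actual metrics.

    def model(params):
        """Hypothetical model: predict utilization from load and capacity."""
        load, capacity = params
        return load / capacity

    def validated(model, history, tolerance=0.05):
        """True if the model's computed metrics track the actual metrics."""
        return all(abs(model(p) - actual) <= tolerance
                   for p, actual in history)

    # Past (parameters, actual metric) observations; invented numbers.
    history = [((50, 100), 0.52), ((30, 100), 0.29), ((80, 100), 0.78)]

    if validated(model, history):
        # Forecasting: assume the model still holds and supply the
        # parameter values expected in the future period.
        print("forecast:", model((120, 150)))   # -> 0.8

Both failure modes named above appear in the sketch: the forecast goes wrong if the assumed relation no longer holds, or if (120, 150) turns out to be a bad guess about the future parameters.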

Trend extrapolation is one of the simplest models. When a trend can be detected in some measure of performance, futurists can calculate future values and draw conclusions about the consequences. In 1965 Gordon Moore, a cofounder of Intel Corporation, noticed an 18-month doubling trend in the development of computer circuits ("Cramming More Components onto Integrated Circuits," Electronics 38, April 1965). That is a 100-fold speedup for the same price over a decade. An industry rule of thumb is that any technology change providing a 10-fold speedup can usher in a disruptive change. Many entrepreneurs began using the law to gauge whether their proposed disruptive technologies would be supportable by the computing power available a few years out. Moore's Law became a guiding business model that has sustained the computer-chip industry for nearly 50 years. It has begun to break down as a trend because the sizes of transistors and wires are approaching a few atoms each, too small for them to function reliably. Most trend analyses break down over longer forecast periods because eventually the trend encounters a limit.
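
The 100-fold figure is simple compounding: a decade contains 120/18, or about 6.7, doubling periods, and 2 to the power 6.7 is roughly 100. In code (a sketch; the function is illustrative):

    # Trend extrapolation: project a performance multiple by compounding
    # doublings over the forecast horizon.
    def extrapolate(doubling_months, horizon_months):
        return 2 ** (horizon_months / doubling_months)

    print(extrapolate(18, 120))   # ~101.6: roughly 100-fold in a decade

Note what the sketch does not capture: the formula compounds forever, but the physical trend cannot, which is exactly how such extrapolations fail.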

In The Age of Spiritual Machines (Viking, 1999), Ray Kurzweil observed the same doubling trend in four previous generations of information technologies, and he claimed it would be present in technologies that supersede silicon. Based on that, he extrapolated Moore's Law well into the future. He predicted a "singularity" around 2030, when he believes artificial brains will become intelligent.




On the other side, in The Social Life of Information (Harvard Business School Press, 2000), John Seely Brown and Paul Duguid warned against overconfidence in trend extrapolation, because social systems often resist and redirect technological change. They catalogued a series of major predictions that never came to pass. Belief in such predictions helped fuel the dot-com bust of the early 2000s.

Scenarios. A scenario is a story that lays out in some detail what the future might look like under certain assumptions about trends and other factors. Futurists usually offer several scenarios under different assumptions. The method helps people see how they might react to different futures, and then try to influence policies and trends so that the most attractive futures come to be. Futurists do not offer scenarios as forecasts or predictions, although they sometimes give probabilities for the various futures they depict.

One thing I learned from this inquiry is that futurists studiously avoid making predictions. They give you model results and scenarios and leave it to you to draw your own conclusions.


Expert Predictions

Despite the caution of professional futurists, expert predictions have acquired a bad reputation. In Future Babble (Dutton, 2011), Dan Gardner argued that the media's misplaced trust in "legions of experts" has led many people down false paths. He based his conclusions on the work of psychologist Philip Tetlock, who performed a long and careful study of 27,450 predictions by 284 experts in many fields. Tetlock found that the performance of the experts overall was no better than random guessing; celebrity experts tended to be worse than random, and "humble" experts, like the cautious futurists, tended to be slightly better. Consumers of these predictions tend to celebrate the successes and forget the failures.

Tetlock only evaluated predictions that were stated as definitively testable hypotheses; for example, "In five years, unemployment will be under 10%." Many expert predictions are not so precise. Gardner says that experts are even less successful with vaguely worded long-term hypotheses than with precisely worded short-term hypotheses.

Dave Walter presented a dramatic example of failed long-term forecasts in his book Today Then (1992). For the World's Columbian Exposition in Chicago, exhibitors speculated about how electricity, telephony, and automobiles would bring peace and prosperity in the coming century. In 1892 the American Press Association invited 74 leading authors, journalists, industrialists, business leaders, engineers, social critics, lawyers, politicians, religious leaders, and other luminaries of the day to pen their forecasts of the world 100 years hence.

Those forecasters believed that, a century on, railroads and pneumatic tubes would be the primary means of transportation, governments would be smaller, and increased commerce would end wars. None foresaw the interstate highway system, genetic engineering, quantum physics, universal health care, mass state-sponsored education, broadcast TV and radio, or the computer. Walter concluded that many modern expert predictions are no more reliable than these.


Prediction Machines

Prediction machines are machines that forecast the future with reasonable accuracy. There is nothing mysterious about them: in almost every case they are validated models applied to future conditions. How well have such machines done to date? Can they do better than the experts?

Mathematical models of physical processes are the most successful examples.1 Newtonian models of planetary motion give highly reliable predictions of the future positions of planets, asteroids, comets, and manmade vehicles. Jay Forrester's system dynamics models were very reliable for material and information flows in industrial plants. Queueing network models have been very reliable for forecasting throughputs and response times of communication networks and assembly lines. Finite element models have been very reliable for determining whether airplanes will fly or buildings will withstand earthquakes.
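
To give one concrete instance of such a model (an illustration, not an example from the column): the elementary M/M/1 queueing formula predicts a server's response time from its measured arrival rate and service time, and it can be checked against measurements in exactly the validation sense described earlier. The code and numbers below are hypothetical:

    # M/M/1 queue: response time R = S / (1 - U), utilization U = rate * S.
    # A law-like recurrence, but only while its assumptions
    # (steady state, random arrivals, U < 1) continue to hold.

    def response_time(arrival_rate, service_time):
        utilization = arrival_rate * service_time
        if utilization >= 1.0:
            raise ValueError("saturated: model assumptions violated")
        return service_time / (1.0 - utilization)

    # 8 requests/sec at 0.1 sec each -> U = 0.8, predicted R = 0.5 sec.
    print(response_time(8.0, 0.1))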

The common feature of these physical models is that they describe and exploit natural recurrences—laws of nature. We can assume that Newtonian physics, system feedback loops, congestion at bottleneck queues, and forces in rigid structures will continue to behave the same way in the future. We do not have to worry that the assumptions of the model will be invalid.

Our problems with forecasts arise when we wrongly believe model assumptions or parameter forecasts will be valid. In other words, we assume a recurrence that will not happen.

Many things can invalidate our assumptions of recurrence: human declarations in social systems; chaotic or low-probability disruptive events; inherently complex systems whose rules of operation are unknown; complex adaptive systems whose rules change; environmental changes that invalidate key assumptions; and unanticipated interactions, especially those never seen before. This list is hardly exhaustive.

Of these, I think the first is the most underappreciated. Human social systems are networks of commitments, and most commitments ultimately follow from human declarations. The timing and nature of declarations are unpredictable. Whether a technology is adopted or sustained in a community depends on the support of its social structure and belief systems, both of which resulted from previous declarations.3 Brown and Duguid, mentioned earlier, give numerous examples of technology forecasts foiled by human declarations.

We know from experience that many validated models deteriorate over time. A locality principle is at work: a model's assumptions are less likely to change over a short period than over a long one, so our short-range predictions are better than our long-range predictions. As a consequence, we need to revalidate models frequently to maintain our confidence that they still apply, at least to current circumstances.
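
In code terms, revalidation is just re-running the validation check against a recent window of observations rather than the whole history. A sketch, continuing the earlier example (the window size and tolerance are arbitrary illustrative choices):

    # Locality principle: trust the model only if it still fits recent data,
    # since old agreement says little about whether assumptions still hold.
    def still_valid(model, observations, window=30, tolerance=0.05):
        recent = observations[-window:]
        return all(abs(model(p) - actual) <= tolerance
                   for p, actual in recent)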




What about long-term predictions? Most often they are just flat-out wrong, as in the examples Dan Gardner and Dave Walter gave us. Occasionally they are correct but way off in the timing. Researchers at MIT predicted in the 1960s that computer utilities (forerunners of today's "cloud") would be common by the 1980s; they were off by 30 years. Alan Kay predicted in the 1970s that personal computers would revolutionize computing; he was off by 20 years. In 1950, Alan Turing speculated that by the year 2000 machines would play his imitation game well enough that an average interrogator would have no more than a 70% chance of making the right identification after five minutes of questioning.4 He also thought memory capacity for the machine's database would be the main obstacle. As of 2012, our natural-language systems are not close to this goal even though we have the memory capacity, but maybe in a few more years they will be.

The few long-range predictions that do eventually come true give us a forlorn hope that we can at least get the outcome right, even if the timing is off.

Nevertheless, the dream of good prediction by machine lives on. The Scientific American article mentioned earlier envisions a project to build a computing system with more storage and computing power than ever before, connected globally to sensors and to personal information. Using data-mining methods yet to be developed, the system would find correlations in the data and use them for predictions. Despite the soaring rhetoric, this system is no more likely to succeed than any other prediction machine, except where it can find and validate recurrences. It is unlikely to succeed whenever the outcome depends on human declarations or unpredictable events.


Conclusion

We seek technology predictions in an attempt to reduce our risks, losses, and missed opportunities. We do so against great odds. Unpredictability arises not from insufficient information about a system's operating laws, inadequate processing power, or limited storage; it arises because the system's outcomes depend on unpredictable events and human declarations. Do not be fooled into thinking that wise experts or powerful machines can overcome such odds.

If you are called on to make forecasts, do so with great humility. Make sure your models are validated and that their assumed recurrences fit the world you are forecasting. Ground your speculations in observable data, or else label them as opinion. Be skeptical about your ability to make longer-term predictions, even with the best of models. Do not worry about the forecasts made by experts—they are no better than forecasts you can make.

Often, the most powerful and useful statement you can make when asked for a prediction is: "I don't know."


References

1. Denning, P. Modeling reality. American Scientist 78 (Nov.–Dec. 1990); http://denninginstitute.com/pjd/PUBS/AmSci-1990-6-modeling.pdf.

2. Denning, P. Innovating the future: From ideas to adoption. The Futurist, World Future Society (Jan.–Feb. 2012), 40–45.

3. Schön, D. Beyond the Stable State. Norton, 1971.

4. Turing, A. M. Computing machinery and intelligence. Mind 59 (1950), 433–460; http://www.loebner.net/Prizef/TuringArticle.html.


Author

Peter J. Denning ([email protected]) is Distinguished Professor of Computer Science and Director of the Cebrowski Institute for Information Innovation at the Naval Postgraduate School in Monterey, CA, Editor of ACM Ubiquity, and a past president of ACM.


Copyright held by author.



Comments


CACM Administrator

The following letter was published in the Letters to the Editor in the November 2012 CACM (http://cacm.acm.org/magazines/2012/11/156596).
--CACM Administrator

In his Viewpoint "Don't Feel Bad If You Can't Predict the Future," Peter J. Denning (Sept. 2012) wrote: "Make sure your models are validated and that their assumed recurrences fit the world you are forecasting. Ground your speculations in observable data..." Hmm... Who validates the models? Physics-based models can be validated, in light of, say, their ability to predict/replicate the results of observable phenomena (such as gravity and inertia); experts in the discipline agree that the assumptions, calculations and/or algorithms, and predicted results match what is seen in the real world. On the other hand, social models rely on assumptions about human behavior, both individual and en masse, that cannot be measured or demonstrated and on predictions that can never be more than "face-validated." That is, "I can't tell you we got the right answer for the right reason; the best I can say is the predicted behavior corresponds to what is observed in real life x% of the time."

This inability to validate the quantification of variables is seen in efforts to model military interactions, as well as social, economic, and political phenomena; for example, no version of the Lanchester model reflecting the relative strengths of a predator/prey pair, nor any of the many "expanded" Lanchester variants, is capable of predicting the outcome of the 1879 Battle of Rorke's Drift between British troops and Zulu warriors in South Africa, depicted in the 1964 movie Zulu. Tank on tank, we can predict the odds; add human crews, and things get dicey; witness the dramatically uneven results of combat in Operation Desert Storm, when a U.S.-led coalition reversed Iraq's 1990 invasion and nominal annexation of Kuwait. Similarly, one has only to open the newspaper to understand the degree to which we have so far failed to model the American economy well enough to suggest effective measures to relieve the ongoing recession. As Denning pointed out, predicting the future is difficult and fraught with danger. Be humble...

Joseph M. Saur
Atlanta, GA


CACM Administrator

The following letter was published in the Letters to the Editor in the November 2012 CACM (http://cacm.acm.org/magazines/2012/11/156596).
--CACM Administrator

I very much agree with Peter J. Denning (Sept. 2012) that one should be humble when predicting anything, especially if the prediction depends on some future human action or decision. Unlike atoms and molecules, humans have free will. More than 60 years ago, the economist and philosopher Ludwig von Mises explored this idea in his monumental book Human Action. More recently, Walter Isaacson's biography of Steve Jobs and Malcolm Gladwell's book Outliers: The Story of Success only reinforced the impossibility of predicting human behavior. In the description of his course The Wisdom of History, historian J. Rufus Fears wrote: "Nations and empires rise and fall not because of anonymous social and economic forces but because of decisions made by individuals." As for Jobs, predicting even his next five minutes would have been futile. Any given human action or even random event might have yielded a totally different technological (or economic or political) world from the one we have today.

Per Kjeldaas
Monroe, LA

