
Communications of the ACM

Viewpoints

Historical Reflections: Will the Future of Software Be Open Source?


Question mark illustration by Celia Johnson

If one were forecasting the future of software today, open source software (OSS) would likely figure prominently in most projections. Indeed, open source zealots might expect to see OSS everywhere, with "innovation networks" abounding, Microsoft humbled, and Linux on every desktop. Personally, I wouldn't bet on it.

Historians are cautious about forecasting the future, with good reason. They know that when technical experts gaze into the crystal ball, they usually extrapolate well but fail to spot the discontinuities that can transform a technology. One such attempt at futurology was the book The Future of Software, published in 1995.a The book included contributions from leading experts in the field. They correctly extrapolated that PCs would become more powerful, numerous, and pervasive, and that software would proliferate to fill the applications vacuum. They were right, up to a point, but their collective take on new software development methods and technologies was wide of the mark. One contributor forecast that visual programming by ordinary users would herald the "fall of software's aristocracy." Another predicted the maturing of the software factory, by which our "craft industry" would be transformed "toward Ford-style mass production." Another contributor expected to see stunning advances in natural language interfaces. What no contributor foresaw, or even mentioned, was the impact of open source software and development techniques. At the very moment they were making their projections, Linux was under their noses, but they could not see it.

The idea of open source software goes back to the very dawn of computing, when the mainframe computer was getting established in the early 1950s. At that time, and for many years after, IBM and the other computer manufacturers gave their software away for free—software was seen largely as a marketing initiative that made their hardware more saleable. Software was supplied in both source and object code form because some people found the source code useful and there was no reason not to let them have it. Where manufacturers' provision fell short, cooperative user groups, such as IBM's SHARE, coordinated the writing and free distribution of programs. When it came to applications, computer users wrote their own or hired a "software contractor," such as the Computer Sciences Corporation or Electronic Data Systems, to write software for them.

There was a radical transformation in the software world in 1964, with the launch of IBM's System/360 computer. The 360 created, for the first time, a standard computer platform, and it massively expanded the computer population, particularly in medium-sized businesses. Most of the new computer owners did not have the resources to hire a staff of programmers or to buy the services of a software contractor. There was thus an applications vacuum filled by the first software product firms. These firms wrote programs for specific industries (such as the insurance or construction industries), or for generic, cross-industry functions (such as payroll or stock control). The sales of individual software products were quite modest: if a product had 100 or so customers it was considered quite successful. Software product prices were high, typically $50,000 upward. This was not only because of the low sales volume, but because software writing was very capital intensive. The only way to run a software business was to hire a team of programmers plus a mainframe computer and put them to work. This cost at least $1 million a year (closer to $10 million in today's currency).

The first software products were usually supplied in both source code and object code. This was necessary because customizing software was a little-understood technology, and most users configured their application software by modifying the source and recompiling it. Software-product companies were naturally concerned about disclosing source code: if it fell into the hands of a competitor, producing a rival product would become easy. In a somewhat uneasy compromise, paying customers received a copy of the source code but were bound by license terms containing a trade-secrecy clause that forbade them from disclosing the source code or documentation to third parties.

The advent of personal computers in the late 1970s gave rise to a new software industry that rewrote the rules for making and selling software. The cost of computer power plummeted, the computer population soared, and the number of software firms increased exponentially. However, although the hardware-cost barrier to software making had been lowered, code development still needed a disciplined environment of salaried programmers who worked office hours in the same physical location. Although computer networks existed in the 1980s, they were slow and impractical; software development remained a same-time, same-place, collaborative activity. PC software products were comparatively inexpensive (usually less than $500), but only because their sales volumes were high compared with mainframe software. Software writing remained an expensive, highly capitalized activity.

In the new PC environment, with thousands of software companies and millions of users, it was no longer feasible for software companies to supply their source code to users: their products would have been rapidly duplicated. Firms such as Microsoft, Lotus, and WordPerfect had invested hundreds of millions of dollars in software development; disclosing their source code would have been akin to giving away the family jewels. Of course, software had some legal protection through copyright law, but this did not protect the data structures and algorithms that access to the source code would have exposed. By the mid-1980s, source code disclosure had almost completely ceased—IBM, with its so-called OCO (object-code-only) policy of 1983, was one of the last major companies to stop disclosing source code. Competitors and users alike objected to the OCO policy, but IBM was resolute and was doing no more or less than the rest of the industry. By the mid-1980s, trade secrecy was endemic in the software products industry.

The ascendancy of the Internet in the early 1990s began another radical transformation of software development. Inexpensive network access removed the constraint of having salaried programmers working together in a dedicated facility. It was now possible for programmers to collaborate in software development via the Internet—whether they were salaried personnel or volunteers, and whether they were trained computer professionals or talented amateurs. This was the birth of today's open source community. Linux was the defining product of the community, and the open source principle was also responsible for much of the Internet's infrastructure. Besides enabling the new open source development regimen, the Internet also removed the barriers to software distribution. Whereas the existing software products industry had relied on retail channels, which could carry only a limited range of products, or on an (expensive) sales force, open source products were freely available for download from the Internet. Open source programs soon appeared in many of the established software categories.


In the initial euphoria of open source in the mid-1990s, it looked as though in the future software would be "free" in both senses of the word: free of cost to consumers, and with freely available source code. Ten years on, however, it became clear that nothing was that simple. Fundamentally, open source was a new development method. Traditionally, code development accounted for 10%–15% of the cost of a software product; the rest went to activities such as marketing, packaging, and after-sales support (for example, telephone help lines). For users, too, software was only a fraction of what came to be called the TCO (total cost of ownership), which included computer and infrastructure costs and technical support. Today, there are numerous firms supplying open source products, and their cost structure turns out to be not very different from that of traditional software firms: they spend 10%–15% of their income on code development, and the rest is taken up with activities such as marketing and after-sales support. Because of the open source development method, their products may well be better and less expensive than their proprietary equivalents, but for most users they do not drastically change total information-processing costs.

So, if a person were attempting to peer into the future of software today, what would he or she predict? Such a forecast has two dimensions: first, predictable extrapolation; and second, the unknowable paradigm shifts that might take place. Predictably, the open source paradigm will gain strength and be increasingly adopted by the traditional software industry, and there will be some convergence between the two sides of the industry. But in the next 10 or 15 years there will surely be unanticipated technological discontinuities, comparable with the launch of the IBM System/360 in the 1960s, the personal computer in the late 1970s, and the open source movement in the 1990s.


History shows us that the preferred software development method of the day has always been the one that seemed to work best within the contemporary technological and economic constraints, particularly the costs of computer ownership, programming personnel, and data communications. The next paradigm shift might well be the currently much-hyped SaaS (software as a service)—software delivered as a service over the Internet rather than as a product installed on a local computer. SaaS seems to offer a technological prospect in which both proprietary and open source software can flourish. But it is at least as likely that some other technological development—perhaps already here and waiting in the wings—will create a software future that is currently unimaginable. That's the fundamental reason historians are so reluctant to attempt to predict the future of software.


Author

Martin Campbell-Kelly ([email protected]) is a professor in the Department of Computer Science at the University of Warwick, where he specializes in the history of computing.


Footnotes

a. Leebaert, D., Ed. The Future of Software. MIT Press, Cambridge, MA, 1995.

DOI: http://doi.acm.org/10.1145/1400181.1400189


©2008 ACM  0001-0782/08/1000  $5.00



