Software engineering evolved to deal with the needs of large systems developed by multiple large teams, and it has struggled to keep up with the increasing size and complexity of large systems ever since. Meanwhile, the typical small system has become larger and the typical team size has, if anything, gotten smaller. This situation leaves a mismatch between the methods developed for large projects and the smaller but significant projects that make up much of the software development industry.
Unlike traditional engineering, the targets of software engineering continually become larger. Contrast this with civil engineering. While we can build much larger structures today, a large class of structures remains constant. A bridge built over a creek 500 years ago is the same size as one built this year. What has changed are the methods and materials used to build the bridge. In software, very few projects remain of constant magnitude, so it is difficult to directly compare the methods and tools over time.
This growth has led to a few consequences. First, the small project of today is substantially larger than the small project of even 10 years ago. And second, we have overlooked the downward-scalable range of software engineering tools and management methods.
Large systems, too complex for individual comprehension, must be subdivided into smaller tasks coordinated between groups. In fact, a large portion of software engineering is devoted to the documentation, notification, and management review needed to coordinate large projects. Attempting to scale down is more problematic. Software engineering texts, if they mention smaller applications at all, usually recommend that existing techniques be scaled down to fit the resources of the organization. This approach assumes that scaling down is merely the compression or elimination of some methods or development phases.
A small group is likely to reject the idea of using large-scale methods, arguing that it makes no more sense than to attempt scaling down battleship blueprints to build a dinghy. And since much of the small project data is old enough to not reflect current development environments and methods, we have little in the way of examples or guidelines. This lack has led to some "less than useful" recommendations.
If you don't drop any of the methods, then how, exactly, do you go about scaling them down?
Scaling down cannot mean that we should keep the same tasks and somehow make them smaller, nor can it mean that we should have fewer people doing more. At a certain point, a single person will have much more to do than he or she can possibly do accurately. In fact, a scaled-down group must have exceptionally talented and motivated people in order to function. In large groups, having irreplaceable people is a weakness; in small groups it is a necessity.
Taking this discussion further, there are a number of problems with scaling down software engineering methods; here we discuss a few.
If our models are best for large groups, does it mean we should increase the size of smaller groups? No. There is evidence that smaller groups are more efficient. Smaller groups usually have higher per-capita productivity than larger groups. Moreover, while many software engineering methods have proved valuable, the totality of these methods has not been proved optimal for small groups. We must start studying the development needs of smaller groups and develop methods that work for them.
We use the term "scalability" almost without thinking. When we talk about scalability, we think of software such as Unix that can scale from PCs to large servers. Scalability is a fundamental quality of software: the same operations can be applied, unchanged, to programs of all sizes and to any volume of data. But scalability is not a limitless quality. For the term to be meaningful, it must be understood within a particular context and regarded as variation within a range. So first, we should develop a clear definition of scalability.
Scalability in the context of software engineering is the property of reducing or increasing the scope of methods, processes, and management according to the problem size. One way of assessing scalability is with the notion of scalable adequacy: the effectiveness of a software engineering notation or process when used on differently sized problems. Inherent in this idea is that software engineering techniques should provide good mechanisms for partitioning, composition, and visibility control. Scalable adequacy includes the ability to scale a notation to particular problem needs, contractual requirements, or even budgetary and business goals. Methods that can omit unneeded notations and techniques without destroying overall functionality possess scalable adequacy.
Tailorability, another way of assessing scalability, is the customizability of a technique or process to a specific domain or set of standards. If process and notational changes can be made effectively, then a system is tailorable. For example, object-oriented techniques such as UML can be tailored to fit different problem domains and organizations. A process such as configuration management is usually adapted to the development project. And development standards like DoD-STD-498 must be altered to fit a particular organization, contract, and problem domain.
Unfortunately, there is no existing process for scaling up or scaling down that addresses large changes in problem size. In most cases a major change in scale results in a fundamentally different method or a different process. We are more familiar with scaling up, which resembles practice in other engineering disciplines, but we know much less about scaling down. In particular, assessments of scalable adequacy and tailorability must account for overhead and learning curves as notations are scaled down.
Standards present another aspect of the scalability challenge for small software development groups. Without contractual requirements and without any marketing advantage in standards usage, all that remains is to determine the value of a standard in terms of development improvement. Clearly, if a standard cannot be economically tailored to the development effort, it cannot and should not be used.
In summary, scaling down is not a simple task, and it has clear limits. Methods requiring too much overhead for their relative benefits are ultimately not sustainable. This is the central problem with scaling down large, formal-communication-laden systems. A method is scalable only if it can be applied to problems of different sizes without fundamentally changing the method, and it is entirely unclear that many methods can be scaled down without such change. Moreover, scaling forces significant changes in the software architecture, software processes, methods, life cycles, and domain knowledge that usually introduce new sets of errors. This by itself puts a limit on scaling. Because we have been so focused on large-scale development, software engineering methods have tended to intertwine management and coordination with technical aspects. Deciding how to extricate these two areas and deciding how to make some methods work in small organizations will take some study.
©2000 ACM 0002-0782/00/0900 $5.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.