Here's a message from software practitioners to software researchers: We need your help. What help do practitioners need? We need some better advice on how and when to use methodologies.
Now, before the researchers give a "shoot-from-the-lip" answer, let me tell you one answer that won't be satisfactory. We don't want to hear "Use the latest methodology out of the research labs." There's plenty of evidence, from the broader computing literature, that that answer just doesn't cut it. Let me give you one example. In [3], we see "People think you can come up with one universal ...solution that solves a whole bunch of problems. [Those people] are wrong...The suggestion that all of us should be served by one solution [approach] is just insane."
Or perhaps that's not enough to convince you. Here's another example comment: "We argue that the traditional IS development methodologies are treated primarily as a necessary fiction to present an image of control...Alternative approaches that recognize the particular character of work...are required" [11].
Still not convinced? Then try this one: "Prescriptive information systems methodologies are unlikely to cope...use approaches tailored to each project" [9].
In fact, this was the fundamental message of my own work [12]. Universal, project-independent methodologies are what the problem-solving literature characterizes as "weak," while solution approaches focused on the problem at hand are considered "strong."
I am pleased to report here that, over roughly the last five years, a veritable groundswell of support has emerged for what I consider to be the fact that universal or "meta" problem-solving approaches just don't work very well.
But, given that groundswell, there's a problem. It may well be that universal approaches are the wrong answer. But what's the right one? That's where we return to my original cry for help. Software researchers, we need some advice on when and how to choose from that plethora of methodologies you researchers have been showering us with. And, I strongly believe, the only way to generate the meaningful advice I'm asking of you is for you to do the necessary research to provide an answer.
It is not enough to propose a new methodology without discussing when its use might be appropriate.
What kind of research? Research that:

1. Defines the realm of applicability of each methodology;
2. Defines a taxonomy of the problem domains to which methodologies might be applied;
3. Maps the methodologies of task 1 onto the domains of task 2.
Don't hasten to judgment here, software researcher. Task 1 in the list is a difficult problem. There has been almost no effort, to date, to define the realm of applicability of the average methodology. Advocates of the structured approaches, as far back as when they appeared on the scene several decades ago, saw them as universal, and advocated their use on all projects. The same thing happened, more recently, with respect to the object-oriented approaches. One author even said that, with the advent of the object approaches, "no software engineer would be caught dead using the 'structured' paradigm to build software" [8], implying that the new methodology had swept away the old.
Note the quotation appearing at the beginning of this column. Here's a respected software author, Scott Ambler, taking the same approach with the agile methodologies. There's almost a fundamental need, among software authors, to believe that a universal elixir of some kind is right around the corner, or even presently at hand, in the latest concept to emerge from the research labs or the guru's perch. In fact, there is often a superficially convincing reason for the claim that the latest whatever is universal. Later in the article in which the column-opening quote appears, the author writes "...you can develop the same system with a team one-fifth the size [using the agile approaches]..." That kind of hyped pronouncement is not at all unusual among those who claim the discovery of a universal methodology.
Task 2 in the list is also problematic. It's not that there aren't starting points for such a taxonomy of domains. I provided one myself some years ago (in [5], followed by [6]). Capers Jones has provided some pretty powerful ones (see, for example, [7]). The problem is, Jones' taxonomy and my taxonomy don't agree! There are some fundamental similarities, of course, but in the end someone choosing to define such a taxonomy would probably have to choose sides between Jones' starting point and my own.
Given the problems of task 1 and task 2, task 3 is, by contrast, quite simple. Once we understand what methodologies provide, and what domains need, providing the needed map is probably relatively easy to accomplish.
In the past, I've been accused of being incredibly Pollyanna-ish. Concerned with what I called the "communication chasm" between academic computing and its industrial equivalent, I proposed a set of solutions to bridge that gap approximately 30 years ago (for example, academics spending sabbaticals in industry, and practitioners serving as adjuncts in academe). Members of the audience at the conference where I first presented those solutions mumbled, I was told by someone who was there, that those things were not going to happen. Guess what? They didn't. So I suppose I ought to admit that my message to researchers in this column is also likely to fall on deaf ears.
To help ward off that possibility, though, let me provide some existence proofs that what I am suggesting can happen. Let's start, in fact, with those agile methodologies that the opening quote is so enthusiastic about. One of the best books in the agile realm is [2]. In it, the author refers to the "sweet spots" where the agile approaches are at their best (projects involving two- to eight-member teams working in one room, on-site usage experts, one-month development increments, fully automated regression tests, and experienced developers). That's not quite a mapping from methodology to domain (since the sweet spots are more about project sociology than domain characteristics), but at least it's a healthy start (and it also should have caused the author of the opening quote to at least provide some qualifications as to when his advice was appropriate!). The author goes even further in that same book, coming as close to a taxonomy of methodologies as I have seen anywhere (he notes such facets as methodology "size," the degree of ritualization, problem size, project size (in people), precision needed, accuracy needed, tolerance for variability, and stability). Once again, this is not quite the taxonomy I am crying out for, but at least it's a start.
The agile world is not the only one in which such progress is being made. In an article on aspect-oriented programming, the authors' conclusions note that "We have determined particular situations where AOP benefits developers, and we have characterized 'aspects' of how AOP has worked" [10]. The article might not contain many particular details, but at least the authors have acknowledged that it is not enough to propose a new methodology without discussing when its use might be appropriate.
This is, I feel, an important problem. Even vitally important. I treat it as a "Fallacy" of the software field in [4] ("Software needs more methodologies," I say there, is a fallacy), noting that Karl Wiegers has made much the same point: "We don't need any new methodologies"; what we need is advice on how to use the ones we already have. As the software field matures, I feel strongly, it won't be enough to invent a new approach to constructing software products. It will be necessary, I fervently believe, to explain why the invention is worthwhile, and to suggest times when its use might be beneficial. Until that happens, I am afraid, we will still be a discipline crying in the technological wilderness, pretending we know more than we really do.
1. Ambler, S.W. Outsourcing examined. Software Development (Apr. 2003).
2. Cockburn, A. Agile Software Development. Addison-Wesley, 2002.
3. Cooper, M. Everyone is wrong: Q&A with Martin Cooper. Technology Review, (June 2001).
4. Glass, R.L. Facts and Fallacies of Software Engineering, Addison-Wesley, 2003.
5. Glass, R.L. and Vessey, I. Toward a taxonomy of software application domains: History. Journal of Systems and Software, (Feb. 1992).
6. Glass, R.L. and Vessey, I. Contemporary application-domain taxonomies. IEEE Software, (July 1995).
7. Jones, C. Software Assessment, Benchmarks, and Best Practices. Addison-Wesley, 2000.
8. LeClerc, A. Mechanisms for reengineering legacy applications. Cutter IT E-mail Adviser, (Fall 2001).
9. Middleton, P. Managing information systems development in bureaucracies. Information and Software Technology 41 (1999), 473–482.
10. Murphy, G.C. et al. Does aspect-oriented programming work? Commun. ACM 44, 10 (Oct. 2001).
11. Nandhakumar, J. and Avison, D.E. The fiction of methodological development: A field study of information systems development. Information Technology and People 12, 2 (1999).
12. Vessey, I. and Glass, R.L. Strong vs. weak approaches to systems development. Commun. ACM 41, 4 (Apr. 1998).
©2004 ACM 0002-0782/04/0500 $5.00