
Communications of the ACM

BLOG@CACM

Feature-Based Development: The Lasagne and the Linguini


Bertrand Meyer

The following observations address a core concern of software development, made particularly vivid by the spread of agile methods and their insistence that we can build systems one feature (or "user story") at a time. They are loosely drawn from chapter 4 of my book "Agile!" [1], an analysis and critique of agile methods; they also serve as a plug for the ACM webinar on the same topic, "Agile methods: the Good, the Hype and the Ugly", which will take place this Wednesday (Feb. 18) at 1 PM New York time; registration is free.

The topic is of interest beyond agile methods and the plug is also (other than for excellent Italian food specialties) for work by Pamela Zave, well known in requirements engineering circles, but deserving a wider audience.

What agile methods did with user stories was to make explicit an approach to requirements and more generally to software development that many people had hoped to practice: one feature at a time.

A user story is a unit of user-visible functionality. Here is an example for a blog-writing tool, in a frequently recommended standard style for user stories: "As a blog writer, I want the system to save my text regularly and automatically, so that I will not lose more than a few minutes of work in case of an incident".

Would it not be great? You pile up user story after user story, and at the end you get a system. You might think that I am caricaturing, but no; the book cites numerous examples from the best agile authors suggesting that such a process will work. A typical quote is  "I can live with something simple that works properly. The complexity can be folded in later."

Unfortunately things do not work that way in practice. The feature-upon-feature approach can work, but only for systems of a specific kind, not those for which truly hard design issues arise.

There are two forms of design complexity: additive and multiplicative. Additive complexity applies when the features to be piled up are largely independent of each other. You can just add a layer without disturbing the previous ones too much. If you are into pastry you could picture a mille-feuille, but our theme for today is Italian, so we will think of lasagne:

 [Image: a plate of lasagne]

 A typical example would be the addition of a new user interface language, assuming your system already supports a few. Adding Swiss German may not be trivial, but probably will not require extensive redesign of the existing software base.
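
To make the contrast concrete, here is a minimal sketch in Python, entirely of my own making (the catalogue structure, the function name and the sample strings are illustrative assumptions, not taken from any actual system), of a design in which language support is additive: every user-visible message goes through a single catalogue, so supporting Swiss German is one more entry rather than a redesign.

  # Minimal sketch of an "additive" design: all user-visible text is looked up
  # in a per-language catalogue, so supporting a new language is a local change.

  CATALOGUES = {
      "en": {"greeting": "Welcome", "saved": "Your text has been saved"},
      "fr": {"greeting": "Bienvenue", "saved": "Votre texte a été enregistré"},
  }

  def message(key: str, language: str) -> str:
      """Return the text for 'key' in 'language', falling back to English."""
      catalogue = CATALOGUES.get(language, CATALOGUES["en"])
      return catalogue.get(key, CATALOGUES["en"][key])

  # Adding Swiss German is one more entry; no existing feature is reworked.
  CATALOGUES["gsw"] = {"greeting": "Willkomme", "saved": "Dyn Text isch gspeicheret"}

  print(message("greeting", "gsw"))   # Willkomme
  print(message("saved", "sv"))       # no Swedish entry yet: falls back to English

The point is not the few lines of Python but the shape of the dependency: the new layer sits on top of the existing ones, which is exactly what makes the addition additive.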

Additive complexity is not the truly challenging kind of feature addition. Many systems, however, look more like a plate of linguine:

Here the interdependencies between features are so intricate that you can hardly pull any one part without having to pull many others too. Such systems exhibit multiplicative complexity. It is naïve in this case to assume that you can just add features one after the other. A manager who believes in that fairy tale will end up confronting a more prosaic reality: every feature addition risks forcing the team to rework the previously implemented features. You try to grab one piece with your fork, and the whole plate's content ends up on your lap.

Assume for example that a system was not initially designed for multiple user interface languages, and that a customer now asks for that feature. Satisfying this request will require a major redesign. (As a consultant I once took a look, unfortunately after the project had failed, at a system that had been designed for a single country and then re-engineered to support multiple UI languages. The team had applied its best efforts, using a run-time, on-the-fly message-catching translation tool, but once in a while the system would still display a message in the wrong language, say in Portuguese for Swedish customers. No wonder the customer refused to deploy the system and sued the provider for some forty million dollars. That kind of design decision is best made at the beginning, even if that implies what the agile literature scornfully calls "big-upfront thinking".)
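
For contrast with the additive sketch above, here is what the multiplicative case typically looks like in code. This is again a made-up Python illustration, not the failed system just described: user-visible text is hard-wired into each feature, so introducing a second language means reopening every one of them.

  # Purely illustrative: every feature builds its own user-visible strings,
  # so a second UI language cannot be added as a layer on top.

  def confirm_order(order_id: int) -> str:
      return f"Order {order_id} has been confirmed."        # English baked in

  def warn_low_stock(item: str) -> str:
      return f"Only a few units of {item} are left."        # English baked in

  def cancel_order(order_id: int) -> str:
      return f"Order {order_id} has been cancelled."        # English baked in

  # A retrofit has to reopen all of these (and the hundreds like them in a real
  # system), decide how the user's language reaches each call site, and handle
  # word order and plural rules that differ by language: a redesign, not a new layer.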

For a precise and extensive understanding of the feature-based approach, it is essential to turn to the work of Pamela Zave of AT&T; you can start from her Web page and specifically from her informative "feature interaction FAQ" [2]. Zave has devoted much of her career to understanding the role of features in telecommunications software, where this notion arises naturally: to gain competitive advantage, telecom providers constantly want to add features, from call forwarding to voice mail to many others. She describes numerous examples of "bad" feature interaction, such as these two:

  1. Bob has Call Forwarding, and is forwarding all calls to Carol. Carol has Do Not Disturb enabled. Alice calls Bob, the call is forwarded to Carol, and Carol's phone rings, because Do Not Disturb is not applied to a forwarded call.
  2. Bob has Three-Way Calling. If he picks up his phone and dials Alice, he can use Three-Way Calling to add Carol to the conversation. However, if he uses Click-to-Dial to reach Alice from a Web-based mailbox, address book, or call log, he does not have Three-Way Calling, even though he is talking to her on the same telephone.

I cite others of Zave's examples in the book, and she has many more on her page. These are typical examples of multiplicative complexity: try to grab a linguine, and the whole plate follows.
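
The first of the two examples above is easy to reproduce in a toy model. The following Python sketch is my own illustration (the routing function and its data are invented for the purpose, not drawn from Zave's formal treatment): Call Forwarding and Do Not Disturb are each correct in isolation, but because forwarding re-routes the call after the Do Not Disturb check has already been applied to the original callee, Carol's phone rings anyway.

  # Toy model of two independently added telephony features. Each is fine on
  # its own; together they reproduce the first interaction described above.

  forwarding = {"bob": "carol"}     # Bob forwards all his calls to Carol
  do_not_disturb = {"carol"}        # Carol has Do Not Disturb enabled

  def route_call(caller: str, callee: str) -> str:
      # Feature 1: Do Not Disturb, checked against the original callee only.
      if callee in do_not_disturb:
          return f"{callee} does not ring (Do Not Disturb)"
      # Feature 2: Call Forwarding, applied after the check above, so the
      # forwarding target's Do Not Disturb setting is never consulted.
      target = forwarding.get(callee, callee)
      return f"{target}'s phone rings"

  print(route_call("alice", "carol"))  # carol does not ring (Do Not Disturb)
  print(route_call("alice", "bob"))    # carol's phone rings: the interaction bug

Fixing this is not a matter of adding a third feature; the routing logic itself has to be rethought so that every feature sees a consistent view of where the call is really going: multiplicative complexity in miniature.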

Another source of insight, entirely different in its perspective, is Boehm and Turner's empirical analysis of agile methods [3]. Their book dates from 2003, but its conclusions remain applicable. They write:

Experience to date indicates that low-cost refactoring cannot be depended upon as projects scale up.

"Refactoring" is the process whereby, in agile methods, you are supposed to improve on a working but less-than-ideal design. Further:

The only sources of empirical data we have encountered come from less-experienced early adopters who found that even for small applications the percentage of refactoring and defect-correction effort increases with [the size of requirements].

So much for the hope that we can build "The Simplest Thing That Can Possibly Work" (an agile slogan) and then refine it until it is really up to expectations. Bad news: you have to think hard about the whole process, right from the start. Of course it is not really news to people who are experienced in the practice of software engineering and its theory.

Everyone wants simple solutions. But it is not enough for a solution to be simple; it must also work.  The idea that you can specify and build a system by discovering user stories as you go and implementing them one after the other, with some refactoring here and there to clean up the design, is clear and simple, but only in the sense of the famous H. L. Mencken quote ("clear, simple and wrong"). Any sophisticated software endeavor will exhibit complexity of the multiplicative kind. To address it, there is no substitute for professional-grade modular decomposition, requiring serious upfront analysis and the techniques of software architecture, in particular object-oriented software construction, as refined over decades of progress in software engineering.

References

[1] Bertrand Meyer, Agile! The Good, the Hype and the Ugly, Springer, 2014.

[2] Pamela Zave, Feature Interaction FAQ, available from her Web page at AT&T.

[3] Barry Boehm and Richard Turner, Balancing Agility and Discipline: A Guide for the Perplexed, Addison-Wesley, 2003. 


Comments


Hendrik Boom

Sometimes doing the simplest thing that could possibly work is the best way to start a project, even though it's very unlikely to produce something that works. It ends up being the simplest thing that could possibly resemble what's really wanted.

The reason is that in the process of tossing together this simplest thing, you end up exploring the problem space and figuring out what you should be doing instead. Then you start anew, using the existing broken system as a kind of mnemonic reference while writing completely new code.

And why should the initial system be the simplest possible thing? So you don't waste time on it. Its purpose is not to run, but to teach the programmer.


