I began writing programs in 1957. For the past four decades I have been a computer science researcher, doing only a small amount of programming. I am the creator of the TLA+ specification language. What I have to say is based on my experience programming and helping engineers write specifications. None of it is new; but sensible old ideas need to be repeated or silly new ones will get all the attention. I do not write safety-critical programs, and I expect that those who do will learn little from this.
Architects draw detailed plans before a brick is laid or a nail is hammered. But few programmers write even a rough sketch of what their programs will do before they start coding. We can learn from architects.
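As a minimal sketch of what such a blueprint can look like, here is the canonical hour-clock specification in TLA+ (taken from Lamport's book Specifying Systems, not from this article). It states what a clock does, the variable hr cycles through 1..12, without saying anything about how to implement it:

    ---------------------- MODULE HourClock ----------------------
    EXTENDS Naturals
    VARIABLE hr  \* the hour currently displayed

    \* Initially, the clock shows some hour between 1 and 12.
    HCini == hr \in (1 .. 12)

    \* Each step advances the hour by one, wrapping from 12 to 1.
    HCnxt == hr' = IF hr # 12 THEN hr + 1 ELSE 1

    \* The full specification: a valid initial state, then steps
    \* that either advance the clock or leave hr unchanged.
    HC == HCini /\ [][HCnxt]_hr
    ===============================================================

A few lines of mathematics pin down the design precisely; an implementation in any language can then be judged against it.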
The program is not the house; the compiled binary (or the running service) is. The program is the blueprint. So yes, we focus on the blueprint (the program), because the building step (compiling) is easy. Building a cookie-cutter two-story house is easy, just like compiling a small program. A running service is like building a mansion or some other mega-project that requires a lot of interaction and adjustment during construction; that is why you need devops.
The implication that simply writing a spec will guarantee a successful development project does not hold up. Having a blueprint doesn't mean the house won't fall down.
Here's a scenario: take a year to write the spec for a complicated system, start programming, and discover a couple of deep and incorrect assumptions on which the design was built.
But that's ok, we can fix the methodology: plan for and do a design review. Multiple sets of eyes, wisdom of the crowd, and all that.
But wait, we can't do them too late, or we end up in the same position. Easy to fix: schedule the design reviews periodically. OK, we'll do them every so often, when it feels right.
But that still doesn't guarantee a good spec or a good project. Software is written by people, and people fall into psychological failure modes: for example, one strong personality dominates and the team slips into groupthink. In the face of that, one or several design reviews could fail to find deeply held but false assumptions because of politics or social pressure.
And so on.
Here we are again, adjusting development methodologies over and over, epicycles within epicycles. It's safe to say Waterfall doesn't work in all cases, and Agile doesn't either.
So what does work? What techniques can we use to protect against project failures? I propose we perform a large-scale, multi-project Process FMEA (Failure Mode and Effects Analysis) to find out:
1) Find out how teams and projects actually fail in real world scenarios (sorry no student projects allowed)
2) Propose methodological solutions for specific failures
3) Test each solution's effectiveness
4) Repeat until project failure rates are "acceptable"
It may well be that writing a spec helps, but we should get some real-world evidence that it does. And when it doesn't help, find out why and which techniques can mitigate the failure. And repeat.
In short, we need to do the grunt work, the low-level research, to find out specifically and precisely: how does software development fail?
Here are some possible research areas:
- can we find a taxonomy of project types? And then follow up: is there a methodology that is more applicable to one type than to another?
- can specific risky aspects of a given project be identified and mitigated up front? Are there any of these that can only be identified along the way (i.e. not up front)? How do we mitigate those?
- what is the impact of Management (good and bad)? What is "good"?
- what is the impact of project timelines and associated pressures? Are there scheduling techniques to mitigate the risks?
- what is the impact of the available team member skills? How do we identify critical skills for a particular project? How do we know if there is a lack of them in the current team?
- what are the most common "failure modes" of a software developer? Can they be mitigated by language design, by design techniques or analysis, or, as a last resort, by training?
A thoughtful, valid, and well-presented article. But it's a shame that it had to be written in 2015.
http://www.idinews.com/Lamport.html
It's hard to disagree with the view that thinking before acting is a good idea. And, as the piece makes crystal clear, thoughts need to be explained clearly in writing. Perhaps one of the curses of business life is the bullet point, which encourages the presentation of ill-thought-out, vague ideas that are subsequently open to interpretation in many different ways: a recipe for disaster.
For software development, the larger the project the earlier the thinking needs to start. While new IT systems in both public and private sectors may be commissioned explicitly as IT projects, IT is now so pervasive that almost any major business or government policy initiative has a significant IT component. It will not work unless the IT works. It is therefore essential that those launching an initiative either understand the IT involved or have advisors who do.
However, it is not enough just to have competent advisors. They have to articulate their thinking clearly, and you have to listen to them. Launching new initiatives sometimes appears to be a matter of political vanity, with or without a capital P, with a liberal sprinkling of unjustified optimism and no realistic cost/benefit analysis. The public sector, at least in the UK, seems particularly prone to these problems. The private sector is not guilt-free but may be better at hiding problems. And the cost of the failure to think clearly is considerable, as illustrated by the many reports of failed or significantly underperforming IT projects.
Principles taken for granted in other forms of engineering need to be consistently applied to software production if we are to think of IT practitioners as engineers.
Peter Bye, 22nd April 2015