I read Robert Glass's "Practical Programmer" column ("Evolving a New Theory of Project Success," Nov. 1999, p. 17) and would like to add a practitioner's point of view.
First, I agree software projects need "More accurate and honest estimates. More understanding of the complexity of scope. More expert help." If we had all this, we'd be in great shape. But there are obstacles, often insurmountable, to each of these goals.
Regarding accurate estimates, it is difficult to estimate a project's effort fully when the scope is not fully known ahead of time. Yet it is almost impossible to know the full scope until the project is finished, or unless you have previously done exactly the same work on exactly the same platform. The latter case, in my experience, is rare. Nevertheless, good programmers with a little experience can make rough estimates. However, when the estimates "seem" to be too high, either to management or to the client, the reaction is: "It can't possibly take that long. What could you do to take that long?" I have heard similar reactions from both internal management and from clients. The sad fact is that if, as in our case, you are doing custom work for outside clients, you don't win many jobs with high estimates. So projects proceed, with boss or client directly or indirectly mandating a schedule that is not based on the programmers' best guesses.
Related to the estimation issue is understanding the complexity. Aside from the point about not knowing the full scope until you get there, software development has evolved from the specification-first waterfall method to various iterative approaches. This means everybody presumably agrees all the details won't be known until a project begins, and revisions are made along the way, including a certain amount of "scope creep." The iterative approach works best for the end user but means the full complexity is unknown at the outset. (It also can cause problems from a sales point of view, because it is difficult to convince clients to fund a project in small stages or to be open to changes in cost as the project evolves; the alternative is a set price that in the end causes a loss of money.)
One conclusion, then, from Glass's column is that if scope creep is bad and understanding everything up front is good, we should go back to the waterfall method of ironclad specifications as the first step. But hasn't history shown, especially from the user's point of view, that this is not the best approach?
Finally, we come to the matter of expertise. In my experience, on almost all projects, the programmer is doing something he or she has never done before, whether it involves hardware, software, or application details. Even if the programmer uses the same software development environment as before, chances are good that new (to the programmer) functions will be used, or known functions will be used in new ways. The programmer is rarely really and truly expert in the job at hand, if we take "expert" to mean a master who knows everything there is to know about the task. If our programmers are not experts, can they seek expert advice? Usually not. There may be no such person who knows everything about the task, or if there is, that person may be impossible to find. Or, if the advice is found, it is too expensive. Generally, good programmers ask around, send email, post queries on bulletin boards, and read articles and books in an attempt to learn more about how something works. But this is all time-consuming, costly, and may not lead to a good answer anyway. Then there are the situations in which the programmer simply doesn't realize that expert advice at a given point could prevent trouble later. In practice then, I believe that while full expertise is certainly desirable, it is rarely achievable.
If Glass knows how we can get around these obstacles, especially in a competitive environment where we have to pitch project schedules and costs to skeptical clients, I'm listening.
Richard H. Veith
Port Murray, NJ
Having recently read a book on software production, Peopleware (DeMarco and Lister, 2nd ed.), I found that Glass's column highlighting the research of Kurt R. Lindberg reminded me of an earlier study. In Peopleware, the authors quote a paper by Jeffrey and Lawrence from as far back as 1985, which found that the most productive teams were the ones with no effort estimate, that is, no planning for how many people and how much time are needed to complete a project.
Projects with no effort estimate are surely out of control. If we believe one of the book's ideas, that practitioners get the greatest satisfaction when they are productive, then certainly this is in complete accord with Lindberg's findings.
From my own experience, I can say that bad management and useless pressure on people are worse than no control at all, since technical experts, like everyone else, cannot concentrate on productive work while simultaneously fighting short-sighted management.
Lauri Pirttiaho
Oulu, Finland
I was struck by Hal Berghel's statement in his "Digital Village" column that the Manhattan Project was created to prove the impossibility of making an atomic bomb ("The Cost of Having Analog Executives in a Digital World," Nov. 1999, p. 11). I have never before heard this advanced as a goal of the project.
In his letter to President Roosevelt, Einstein stated clearly that, in his opinion, the science of physics had advanced sufficiently to possibly build an atomic device, and the allies should consider a project to explore this possibility. Einstein's fear, given the talent pool available to them, was that the Axis powers would surely recognize the same possibility (indeed they did) and try to build the bomb. The goal of the Manhattan Project was never to prove a negative at all; it was to beat the Axis powers to the punch.
Berghel's thesis that only a pure technologist of "enormous depth and breadth within a technological area" should fill the role of an IT manager is unworkable in the real world. He ridicules as misguided two universities advertising for broadly grounded IT czars. Berghel surely knows technology management involves resolving the competing demands of running an organization and those of technology and science. This creates an inevitable tension among competing values and goals. It is within and because of this tension that visionary leaders show their true worth. They are the ones universities rightly seek. To state that an assistant provost or vice chancellor must have breadth and depth within a (presumably single) technological area, as though such were the alpha and omega of the argument, is both provincial and arrogant. Any organization (academic, commercial, or governmental) that hires its senior technical managers on any other basis will soon flounder and fail.
Berghel apparently doesn't recognize IT as a broad multidisciplinary field drawing on many management, visionary, technical, academic, and market skills. Fear not the dreaded generalist, Mr. Berghel. Without us there would be no one to support the breadth and depth of the specialists.
Ralph Miller
Moorestown, NJ
Hal Berghel Responds:
There are really two issues here: the optimal skills of IT managers, and the optimal skills of executives in charge of the strategic planning of IT (so-called "info-czars"). These two issues should not be confused. My column, and the arguments and examples I put forth, concerns only the second. By my account, the position of technology strategist is one that is optimally filled by technologists and not by managers. I do not suggest we fill the management ranks with techies. For purposes of sound technology forecasting, reliance on managers rather than on technologists is a mistake of the first order.
As I review the major technology blunders in IT over the past 25 years, they are almost always traceable to planning decisions made by executives who were not technically qualified. My analysis of the history behind OS/2, CP/M, Z-80 microcomputers, SuperCalc, VisiON, Easy Writer, the Amiga and the Mac, IBM's Future Systems and Olympiad projects, MicroChannel architecture, the Xerox Star, Presentation Manager, Univac, DEC, Osborne, Kaypro, Commodore, Xerox' inability to take advantage of PARC's technology, IBM's inability to take advantage of RISC (and the list goes on and on) seems to confirm my point.
Failure to exploit technology, failure to innovate quickly enough, failure to orient the organization with the appropriate technology horizon, failure to avoid technology surprises, chasing after trends rather than following paradigms, and so forth, are all maladies that befall organizations lacking info-czars with accurate technology compasses. My position, incidentally, is also not to be confused with the stronger claim one hears from time to time, especially in new IT start-ups, that major high-tech blunders result from technical inversion, where managers lack the technical competence to supervise their subordinates properly. This latter position, which one might attack as "provincial and arrogant" (not to mention threatening to IT managers), is not mine.
Regarding my brief history of atomic weaponry: it is clear that, at the time of Einstein's letter to Roosevelt (1939), the uranium atom had been split and that released neutrons had been detected as by-products of the fission. Thus, the possibility of an uncontrolled chain reaction using certain isotopes of uranium had been established. However, it was by no means obvious to any physicist at the time that the technical barriers to producing a workable bomb could be overcome anytime soon. These technical barriers were nontrivial, involving the separation of the fissionable material (Uranium 235 or Plutonium 239) from their nonfissionable parent elements, and the development of the technology to bring the fissionable material to critical mass. It is my understanding that until late 1941 the National Bureau of Standards was still studying the viability of such technology, and that the "go" decision to attempt to make an atomic bomb was made only the day before the Pearl Harbor attack. If I've got this wrong, I'm confident there's a reader with a physics background who will correct me.
I read the "Log on Education" column ("K12 and the Internet," Jan. 2000, p. 19) with much interest. However, I believe the authors miss a key point in their discussion of David Gelertner's commentary on the Internet and school: the Web is of little use to a child who cannot read fluently.
The sad fact is that an embarrassingly large percentage of our students do not learn to read fluently: well over 30% in many areas. This estimate is for well-to-do neighborhoods with involved parents; I shudder to think what conditions must be like in low-income, inner-city areas. The students who fail to read fluently are less able to appreciate and contribute to our rich cultural and technological heritage. They are not the only ones who are impoverished: (1) the rest of society is deprived of the contributions that they might have made; (2) feelings of failure will induce some of them to become destructive; and (3) the curriculum is watered down to accommodate them, thus depriving their more fortunate peers of the level of challenge that many of them need to excel.
Fortunately, there are proven, effective methods for teaching children to read, and Gelernter would probably be comfortable with them. Unfortunately, many schools seem to be as uncomfortable with these methods as are Soloway et al. But fortunately, and more important, computer technology offers great assistance. All three of my children were taught how to read by a vanilla-color PC running an inexpensive software package. Soloway et al. might be disappointed to hear that this PC was not connected to the Internet, but perhaps they can take solace in the fact that the software is advertised on the Web.
Indeed, the Web has the potential to change our society profoundly. But mastering the basics must always be the highest priority.
Paul E. McKenney
Beaverton, OR
The article "A Case Study of a Netizen's Guide to Elections" (Dec. 1999, p. 48) prompted me to write this note confirming the effect of candidate Web information in local elections in the San Francisco Bay area.
In the fall of 1998, the League of Women Voters hosted a Web site (the "SmartVoter" project) covering the candidates for all levels of government in all the local races. The LWV posted the minimal Registrar of Voters information for each candidate, but asked candidates to submit their own information, including picture, platform, and endorsements. In Santa Clara County (Silicon Valley), there were 220 candidates in 55 races, and a candidate who went to the trouble of supplying information was 10% more likely to win than one who ignored SmartVoter (statistically significant at the 75% level). In Marin County (just north of the Golden Gate bridge) and San Mateo County (just south of San Francisco), Web information suppliers were actually less likely to win, though only by a statistically insignificant 2% and 6%, respectively.
In another high-tech part of the state, Orange County, the winning probability difference was less than 1% (also not statistically distinguishable from 0).
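For readers curious about what such a comparison involves, here is a minimal sketch of a win-rate comparison with a two-proportion z-test. The letter reports only the percentage differences, the size of the candidate field, and the significance levels, so the counts below are hypothetical placeholders (chosen merely to total 220 candidates and 55 winners, roughly matching Santa Clara County); they are not the actual SmartVoter data, and the computation is illustrative rather than a reconstruction of the study's method.

```python
# Sketch of a win-rate comparison between candidates who supplied Web
# information and those who did not. All counts are hypothetical.
from math import erf, sqrt

def win_rate_difference(wins_a: int, n_a: int, wins_b: int, n_b: int):
    """Return the difference in win rates and a two-sided p-value from a
    two-proportion z-test of the hypothesis that the rates are equal."""
    p_a, p_b = wins_a / n_a, wins_b / n_b
    pooled = (wins_a + wins_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))) is the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a - p_b, p_value

# Hypothetical split: 110 candidates supplied information, 110 did not.
diff, p = win_rate_difference(wins_a=33, n_a=110, wins_b=22, n_b=110)
print(f"win-rate difference: {diff:+.1%}, two-sided p-value: {p:.2f}")
# "Significant at the 75% level" corresponds roughly to p < 0.25;
# these made-up counts need not reproduce that exact figure.
```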
It will be interesting to watch how increasing Net usage, publicity through AOL and other ISPs, and the Center for Governmental Studies' increasingly sophisticated designs change these statistics in the future.
Tom Moran
Saratoga, CA
Frank Cuccias's claim that the IBM 3083s were/are vacuum tube systems may be in error. If my memory serves me right, the 3083 was a post-370 system, a post-360 for sure.
According to the comprehensive, still authoritative "ACM '71, A Quarter Century View," "...1959, the year ... marks the start of the second and transistorized computer generation..." (p. 16). The appearance in 1960-1961 of the System 360 opened the way to the "third generation" (several variations of integrated circuits). This was followed by "3.5," then the "fourth" generation, followed by "blurred generations," and, most likely, the "no generation" mainframes of today.
Antanas V. Dundzila
McLean, VA
Frank Cuccias Responds:
In the process of paring down 50+ pages of the original article to the version in Communications, it seems the rewording implied that all of the systems were/are vacuum tube-based systems. This is not the case.
Referring back to the original document, I make mention of those systems: "The current mainframe computers acting as the central computers or Hosts for most of the ARTCCs (the IBM 3083, IBM 9020E, IBM ES/9121, and the Raytheon 750) are old, antiquated systems, some containing vacuum tube processors. IBM indicates it no longer has the resources to remediate the code and recommends replacing the systems as soon as possible. Raymond Long, the FAA's Y2K program director, came up with a solution to this apparent problem. He found two former IBM programmers who used to work with these systems and commissioned them to work on this project."