In the January 1999 Communications (p. 27), ACM President Barbara Simons says we should insist that leaders "spend less time worrying about how to censor the Net and more time on how to use it to provide timely and easy-to-access information about the workings of government." I'm not convinced that these two efforts are in opposition to one another.
If we want people to use the Internet to get information about the government and other things, then people need to be able to find the information they want without being forced to deal with information they don't want. I'm a computer professional, but I tend to avoid using the Internet to search for things because it's too easy to find things I don't want. For example, when looking for information about Watts Humphrey's Personal Software Process, I searched for "Discipline Software Engineering." A large number of hits had to do with whips and chains and such things. I don't think I should have to think about that particular kind of behavior to become a better programmer. Based on experiences like this, I don't believe claims that the only people who find pornography on the Internet are those who go looking for it.
In Hamburg, Germany, the Reeperbahn used to be where prostitution took place. Making government information available someplace like the Reeperbahn would not help me be a more informed citizen, because I would try to avoid going there. That is how I feel about the Internet: I can avoid the Reeperbahn in the physical world by not going there, but because the Internet is not locational in the same way, it's more difficult to keep from going somewhere on the Internet I don't want to go.
Given this kind of environment online, I'm uncomfortable allowing my children to access the Net without direct supervision. And given that I might have to explain offensive material to a child as a result of an inadvertent hit, I'm not sure I want to allow my children access to the Internet even with supervision.
The balance of freedom on the Internet is tilted the way the freedom to smoke was in the U.S. 10 or 15 years ago. It used to be that anyone was free to smoke cigarettes almost anywhere, regardless of whether anyone else wanted to breathe smoke. That meant if one person smoked, everyone in the place smoked. The move toward smoke-free environments follows a recognition that there is a right not to smoke, as well as a right to smoke. Allowing some people the freedom not to breathe smoke impinges on the freedom of others to smoke, but there needs to be a balance between these freedoms, rather than complete freedom for one at the expense of the other.
Our goal for the Internet should not be total freedom for pornographers to deliver their content to whomever they can, but a balance between the freedom of those who want this material and the freedom of those who do not.
In the same issue of Communications (p. 25), Max Hailperin hopes the U.S. Supreme Court will decide that "substantial burdens" on free speech are unconstitutional. He is concerned about burdens on the speakers and transmitters of free speech, but what about the burdens on the receivers? Just as email spam benefits senders at a substantial cost to receivers, it seems to me that those opposing any censorship on the Internet are preserving freedom for providers and leaving the "substantial burden" to the accessors.
Hailperin also discusses "routing around the law," using, for example, FTP instead of HTTP to circumvent the Child Online Protection Act (COPA). While I appreciate the power of the Internet to "route around" political repression, I am not happy that this same capability can override my control over what I or my children receive from the Internet. Why don't those who invest so much effort in "routing around" the law instead work out a way for individuals, families, and schools to censor their own personal or institutional access to the Net? If the Internet provided the means for personalized, decentralized censorship, there would be less need for attempted centralized solutions like COPA. Or is it perhaps the goal of those who oppose all censorship on the Net to force everyone to listen to their free speech whether or not we want to?
Although Communications articles on this subject have not been completely one-sided, it does seem that, in general, they have slanted toward opposing any censorship. While I agree it is appropriate to favor freedom, I also think we need to take responsibility for the consequences of our freedom. In this light, anyone addressing the issue of freedom on the Internet should respond in some way to the question: Do I have the right to choose not to have access to everything available on the Internet? Or does accessing information on the Internet mean I may have to receive unsavory material whether or not I want it?
John Ebert
Columbus, OH
I just finished reading Robert Glass's column "How Not to Prepare for a Consulting Assignment, and Other Ugly Consultancy Truths" (Dec. 1998, p. 11). I must admit I have run into this argument many times before, and I am continually amazed. People consistently try to compare apples and oranges. It doesn't work. I strongly disagree with Brooks's forecast of a lack of silver bullets. Such studies often seem to be too general to be of any worth. You can't compare 100 business majors programming at a consulting firm with 100 hackers in Silicon Valley. Comparing people writing in C, then switching to C++, isn't the same as comparing someone writing in C, then switching to Smalltalk. Both switches involve an "upgrade" to OO, but C++ retains many C traits (including pointers, for instance), while Smalltalk is an interpreted language.
The studies focus on the wrong changes and the wrong statistics. Now, I don't have any studies to back up my conclusions, only my experiences, some of which I'll list.
4GLs. I worked on a project where a team of two built an application similar to one already created for a different department. A group of seven had been working on theirs for two-and-a-half years and was almost finished; they wrote their application in C. We finished ours before they did, in just 12 weeks, simply by using a 4GL and existing C libraries (they wrote their own). A 4GL can work in the right situation.
Process. Newer software engineering techniques can vastly improve a large project. I worked on a project where we developed a new switch from scratch. The first switch had taken almost five years; we had ours done in 18 months. The major difference was the implementation of a new process. I later went on to work on a domain-engineering team whose goal was to consistently reduce effort by an order of magnitude. In one instance, we took an 18-to-24-month process down to 2 to 3 weeks. Domain engineering at the firm has been a main reason why it keeps beating its competitors to market.
People. Virtually ignored are the people contributing to the project. I once worked on a large project where half of the group consistently met deadlines and the other half did not. The latter half wasn't any less intelligent, any less educated, or any less of a programming team. They just weren't organized; they didn't know how to meet deadlines.
Scripting languages. A couple of years ago, I switched from C/C++ to a scripting language as my language of choice. The interactive environment lets me quickly develop applications that would have taken forever in compiled languages. It's OO, and there are no pointers. And yes, my applications are extremely reusable, unlike my C++ ones (a sketch of this follows these points).
Information. Unlike 15 years ago, when I want to do something today, someone has most likely already done it. With the help of the Internet, I can find the software already written, or at least a framework to follow. In some cases, I find I don't need to write much at all.
OO. Few languages implement true OO; using C++, or even in some instances Java, doesn't count. I thought OO was the biggest fallacy until someone turned me on to Smalltalk. The light finally clicked, and "reuse" entered my vocabulary (a second sketch below illustrates the idea).
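To make the scripting-language point concrete, a minimal sketch follows. The letter never names the language, so Python is assumed here purely for illustration, and the word-counting task is hypothetical; the point is only that a small utility needing no pointers or manual memory management can be written and tested interactively in a few lines:

    # Hypothetical example: count word frequencies in a string.
    # In C or C++ this would mean hash-table code, allocation, and
    # pointer bookkeeping; here the runtime manages all of that.
    from collections import Counter

    def word_frequencies(text):
        # Everything is an object; no pointers, no malloc/free.
        return Counter(text.lower().split())

    if __name__ == "__main__":
        sample = "the quick brown fox jumps over the lazy dog the end"
        for word, count in word_frequencies(sample).most_common(3):
            print(word, count)

The function is immediately reusable by any program that imports it, which is the kind of reuse the letter reports getting from a scripting language but not from C++.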
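The reuse point about Smalltalk can be sketched the same way (again in Python, as an assumed stand-in for a language built on message passing; the classes here are invented for illustration): a routine written once runs unchanged against any object that answers the right message, with none of the base-class or template plumbing C++ would demand.

    # Hypothetical sketch of message-passing reuse: render() works on any
    # object that responds to describe(), with no shared base class,
    # templates, or casts required.
    class Invoice:
        def describe(self):
            return "Invoice #42: $100.00"

    class SensorReading:
        def describe(self):
            return "Temperature: 21.5 C"

    def render(items):
        # Written once; reused unchanged for every class above and for
        # any class written later that answers the same message.
        for item in items:
            print(item.describe())

    render([Invoice(), SensorReading()])

In Smalltalk itself this would read as sending the describe message to each element of a collection; the sketch shows only the shape of the technique, not the letter writer's actual code.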
I have tried for years to convince people that there is a payoff to new software techniques. The implementation of the X Window System on Unix alone probably did more than most ideas. My point is, it's not the technique but the people behind the technique that count. Anyone can use OO, a 4GL, a scripting language, or a good process, but it doesn't mean anything unless they use it right, which none of these studies actually accounts for. As for me, I got sick of having to show people you could get a software project done quickly and have since moved on. But don't get me wrong; this isn't an attack on the validity of Glass's column. I just wish that when people did this type of research, they'd focus on the proper constraints that would in turn lead to the proper conclusions.
Joe Saltiel
Champaign, IL
Robert Glass Responds:
My biggest disagreement with Saltiel is over his failure to believe in "no silver bullet." I think the data I've acquired shows clearly that Brooks was right; there are no "breakthrough" technologies on the software horizon. But since Saltiel couches his response largely in terms of the power of people (rather than techniques and technologies) to make a difference, I certainly agree with that.
I also agree that domain engineering is a powerful concept. Remember that my column was about technologies for which claims of "breakthrough" had been made. Oddly enough, for real technology winners, like inspections and domain engineering, no one is making such claims.
I admire the format changes Communications has made in recent years to be more relevant to a wider audience. However, I would beg the staff to make sure the publication does not become just another "popular press" computer magazine. While it is not my intention to single out an individual author, I am referring to Larry Press's column "The Next Generation of Business Data Processing" (Feb. 1999, p. 13).
The premise of the title, namely, Web information systems (WIS) as the next generation of business data processing, was lost on me by the time I completed my initial reading of the piece. With its subtle comparisons between two companies and numerous statistics, such as browser market share and server installations, the column generalized its theme rather than supporting it with more detailed information. The deployment example at Toyota might have better supported the premise if the specifications for the implementation of the WIS were described in more detail, with comments on whether Toyota achieved a return on its investment once the system went into production. Anyone who believes that "Microsoft is not looking for a profit on this service" (consulting services) has their head in the sand or has not read the company's operating results for the last quarter. Hence, I had to look at the cover of the magazine to make sure I was not reading another popular computer magazine.
The beef I have with the popular press is the rush to publicize the "next" best thing in computing, whether a piece of hardware, a paradigm, or an idea. It is strongly implied that the world ought to quickly embrace or adopt the technology. I realize vendors, advertisers, and circulation are the driving force; however, most of the content usually does not convince me to buy into the story. To the serious reader, it should be obvious there are many articles and stories, however disconnected from one another they may be, that describe the state of data processing and software products. The list of causes for such effects ranges from poor project management to lack of skills, and worse. But has anyone ever considered that the media itself may be accountable for the low quality found in personal computing today? Articles seem to say you can have your cake and eat it too without paying a fair price, or they fail to give enough technical evidence to support the premise. Yesterday's client/server headlines have been replaced with new hype about the Internet and newer, faster ways to do, at a lower cost, something that has never been done before.
And if you don't believe we have a major problem with computing based on what you read in the press, how come when I called a major credit card company last week to give a change of address, it couldn't be done because the company's computers were down? And this week I had a problem with the cable TV company, which couldn't tell me why a channel wasn't coming in ... its computer was down. Need I go on?
I am a skeptic who now believes that, despite the long list of causes for low-grade software products, there must be a direct relationship between the cost or perceived savings and the level of quality found in a product once it goes to market. This promise of lower cost and more savings is found time and again in the popular press, reinforcing the consumer's decision to buy. Who suffers the most? The mass market. Few articles examine or quantify the productivity lost to the economy or to individual companies as a consequence of defective hardware and software products. If IBM's or Microsoft's development organizations could be billed for all the reformats, reinstalls, and downtime customers and end users suffer, they just might produce better products. Instead, their bottom lines look good at everyone else's expense.
The format change in Communications has been excellent, not only in diversifying the subject matter but also in making a certain percentage of the material easier to read. That said, and regardless of how difficult they are to read, I value most those articles that present sound evidence based on research or surveys, something that empowers my critical thinking. In my opinion, Communications can rise above the popular press, leading the way as the conscience of the industry with columns like "Inside Risks," so that software quality can eventually be elevated to new heights because IS people and customers have an unbiased source of information to turn to, rather than hearsay or market dominance.
John Wubbel
Waukegan, IL
Communications is one of my favorite publications, but I was disappointed to find an ad for Microsoft technology posing as a column by Larry Press.
I find myself bombarded by Microsoft's marketing message everywhere. To have that message repeated without meaningful analysis is of no value to me as a Communications reader. The occasional inclusion of a product-oriented piece is expected as part of a survey of a technology or issue. But while it might be only fair to give equal time to all players in this field by printing their marketing pieces, I would rather see Communications return to the high editorial standards of the past.
John Willmore
Orange, CA
"The Realities of Software Technology Payoffs," (Feb. 1999, p.74) raises my hopes that new software paradigms will be studied via empirical science, rather than armchair argument. We can leave the snake oil and prophet stage, and look for remedies that are proven in practice to be safe and effective. Please publish, and encourage authors to write, more "clinical trials" of proposed software technologies.
Tom Moran
Saratoga, CA
©1999 ACM 0002-0782/99/0400 $5.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.