
Communications of the ACM

Forum



Simone Santini's comment (Forum, Dec. 2003) brought to mind something I constantly tell my students: There is nothing new about computers; they simply mechanize the application of logical principles that have been around since Socrates, a logic that can be formally expressed through the mathematics invented by George Boole.

The two best courses I ever took for learning to work with computers were formal logic and Boolean algebra. The former was taught by the philosophy department, the latter by the mathematics department. Neither professor knew much about computers. Indeed, the subject of computers never came up. Yet my experience suggests that everything a computer does is based on the principles expressed in these two courses.
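
To make the point concrete, here is a minimal sketch, in Python and entirely my own illustration (neither course, of course, involved a computer): a one-bit full adder built from nothing but Boole's connectives AND, OR, and XOR, then chained into a ripple-carry adder. Even the machine's arithmetic is just his algebra.

    # Illustrative sketch only: arithmetic built from Boolean connectives.
    # The function names and bit width are my own choices, not from any text.

    def full_adder(a: bool, b: bool, carry_in: bool):
        """Add two bits and an incoming carry using only AND, OR, and XOR."""
        s = a ^ b ^ carry_in                        # sum bit (XOR)
        carry_out = (a & b) | (carry_in & (a ^ b))  # carry bit (AND, OR)
        return s, carry_out

    def add(x: int, y: int, width: int = 8) -> int:
        """Ripple-carry addition: chain full adders across the bits."""
        total, carry = 0, False
        for i in range(width):
            bit, carry = full_adder(bool(x >> i & 1), bool(y >> i & 1), carry)
            total |= int(bit) << i
        return total

    assert add(23, 19) == 42  # pure logic suffices for arithmetic

A processor's arithmetic circuits are, at bottom, hardware realizations of exactly such Boolean expressions.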

Thus, Santini's assertion appears to be correct: The mind should be developed first; technical details should come later. It's a mistake to sit people in front of a computer as the first step in teaching them how to program it to do something useful.

I have found that people who use their brains get further and faster than people who know only technical details. In the end, it is not the computer that is important but the results it helps achieve. Knowing what to do with the machine is far more important than knowing how to do things with the machine. Well-considered definitions of what to do inspire us to greater accomplishments than trivial applications do. Applying basic principles of logical thought shields us from being overwhelmed by technical detail.

Peter G. Raeth
Fairborn, OH

The notion that we're going to fix, or even help, U.S. educational problems by putting a computer on every desk or even in every classroom is ridiculous. Simone Santini's Forum comment (Dec. 2003) was eloquent on this point.

Computers in the classroom are what the law calls an "attractive nuisance," like a swimming pool in your neighbor's backyard. The definition is "any inherently hazardous object ... that can be expected to attract children to investigate or play." In the case of classroom computers, the hazards are encouragement for superficial thinking, facilitation of general academic laziness up to and including plagiarism, ease of access to materials children ought not to see, and general time wasting, including game playing and aimless Web surfing.

As Santini pointed out, computers are useful in a school library, but their value in the classroom is far from proven. If Aristotle, Newton, and Einstein were educated without computers, how much of a handicap is not having one?

My one disagreement with Santini concerns the issue of school funding. I do not lament "the idea that public funding of education is destined to decrease." I embrace it. When it decreases to zero, children now confined to public schooling will finally get the same chance to excel as the U.S.'s seven million private school and home-schooled children have had for decades. The best alternative to public funding is not corporate sponsorship but self-funding.

For decades, government has brainwashed parents into trusting bureaucrats to educate their children. The result has been a disaster. The only way parents can take back control is to pay fully for their children's schooling themselves.

Mark Wallace
Irvine, CA


Why Reinvent When We Have Open Source?

Robert L. Glass's Practical Programmer column "A Sociopolitical Look at Open Source" (Nov. 2003) discounted the practicality of users fixing their own source code, on the grounds that the only users capable of doing so are interested only in system programs. But application end users can hire consultants to do the fixing for them, and open source then works out very nicely indeed.

I'd like to understand the reasons software developers insist on reinventing the wheel despite the availability of open source solutions. As pointed out in Eric Steven Raymond's The Art of Unix Programming (www.faqs.org/docs/artu/), the notion of "not invented here" is not simply a response to the lack of transparency in proprietary solutions. The consequence in the open source world of programmers' roll-your-own tendencies is that, except for a few areas with category-killer solutions, most problems are covered with haystacks of partial solutions of questionable utility.

David Hawley
Tokyo, Japan


Still Seeking Software Productivity

In our article "Measuring Software Productivity in the Software Industry" (Nov. 2003), we developed a rationale for measuring the productivity of development and support environments as a first step toward solving the problem of software productivity.

A 1995 BusinessWeek article surveyed productivity in 25 industries, reporting each industry's percent change in productivity over the previous five years and reaching some astounding conclusions, outlined in a later article in Software Developer & Publisher (July 1996). Computers (hardware) ranked first (+153%), semiconductors second (+84%), and software dead last (−11%). Independently, the Standish Group published a report on the software industry in Feb. 1995 that further supported the negative productivity findings.

These facts were at odds with the implications of a 1991 BusinessWeek article, "Software Made Simple," which interviewed key players in the world of object-oriented programming. The article cited naysayers who dismissed OOP, like artificial intelligence before it, as just another computer industry buzzword. Countering this comparison, the authors wrote that, unlike AI, object technology would have "an immediate, practical payoff."

Much more recently, BusinessWeek (Jan. 12, 2004) discussed the annual percent productivity gains of various major industries from 1998 to 2001. Computer chips (+19%) and consumer electronics (+18%) had the greatest gains. Software was last (−1%).

These facts continue to support our observations, measures, and rationale concerning the low productivity of current software technology, as well as the steps needed to turn it around. None of this seems to have stopped the software industry from putting itself further behind in the productivity race each year.

Donald Anselmo
Phoenix, AZ
Henry Ledgard
Toledo, OH


Spare Me the Self-Service

The News Track item "Checked Out" (Jan. 2004) cited the skyrocketing use of self-service kiosks. I, for one, am not convinced that the statistic of 14.5 million passengers really means what it suggests. As a recent first-class passenger on a major airline, I was directed to the self-service kiosk. Glancing at the counter, I noticed that all passengers were being directed to the "self-service" line; there was no personal service at all. This is just another way station on the road to eliminating personal (but costly) face-to-face service without lowering the price to the consumer. It cost me no less in effort (checking in) or money (ticket price), but it most certainly saved the airline some of both.

Personally, I would prefer that the self-service kiosks at airlines and grocery stores (along with the telephone menu systems corporations use to punish us) simply go away.

Dwayne Nelson
Washington, D.C.


Lineage of the Internet's Open Systems Foundation

I'd like to point out an important contradiction in Coskun Bayrak and Chad Davis's "The Relationship between Distributed Systems and Open Source Development" (Dec. 2003), which stated: "The original specifications of a layered protocol stack upon which heterogeneous distributed systems could build their communication systems, and upon which TCP/IP was closely modeled, is of course the Open Systems Interconnect model [5]." Reference 5 is Peterson and Davie's Computer Networks: A Systems Approach, which stated: "The Internet and ARPANET were around before the OSI architecture, and the experience gained from building them was a major influence on the OSI reference model."

The contradiction is the authors' own, but a thorough review should have caught it.

Yitzchak Gottlieb
Princeton, NJ

The article by Coskun Bayrak and Chad Davis would have been more informative and more interesting had it let us in on the mechanisms the Linux community uses to sort good features and code from bad, so that we "can bet the product will be substantially better in only a few weeks," as the authors say.

Moreover, the authors assert that TCP/IP was closely modeled on ISO's OSI. Vinton Cerf and Robert Kahn began their work on TCP in 1973, whereas OSI came out in 1983. What is the basis for the authors' claim?

Alex Simonelis
Montreal, Canada

Authors Respond:
Though we acknowledge the semantic error we were lax not to catch prior to publication, it should not be viewed as an indictment of the general line of our argument. A key part of our purpose, and of this citation in particular, was to highlight the open system nature of the Internet's technological foundations, at the core of which is the TCP/IP suite.

From a historical perspective, the evolution of OSI vs. TCP/IP is a matter of emphasis. One may be interested in the invention of specific protocols or, alternatively, in the invention of open system concepts, including layering and interoperability. The citation was meant to reinforce the openness of the Internet's core communications mechanisms. The question of which won, OSI or TCP/IP, is not relevant in this sense. What is relevant is the nonproprietary, open system quality of the Internet protocol suite.

We do not claim that the protocols within the TCP/IP suite were derived from OSI. We would, however, like to know whether anyone finds fault with our characterization of the TCP/IP stack as an essentially and significantly open system. With regard to history and our discussion of openness, we cited the OSI project to clarify the role of openness in the Internet's developing infrastructure.

The lineage of the individual protocols in the TCP/IP stack clearly predates the concept of the standardized, layered stack itself; TCP dates to the early 1970s. But the notion of the TCP/IP protocol stack is not the same as the individual protocols themselves. It wasn't until 1989 that the architecture of Internet communications was written into RFCs 1122 and 1123, long after OSI and other projects had brought attention to the significance of openness and interoperability.

We appreciate the opportunity to revisit these issues. The history of these systems had been at the margins of our thinking prior to this reexamination of our topic. We thank Gottlieb and Simonelis for initiating this dialogue and encouraging us to delve further into the genealogy of our topic. We also hope to hear more responses to our discussion of the Internet's fundamental reliance on communication systems exhibiting high levels of openness.

As for the mechanisms of development used by open source projects, including Linux itself, the context of the article did not allow us to fully explore the inner structure of the open source development model and its intriguing similarities to the inner structure of distributed software systems.

Coskun Bayrak
Chad Davis
Little Rock, AR


Author

Please address all Forum correspondence to the Editor, Communications, 1515 Broadway, New York, NY 10036; email: [email protected].


©2004 ACM  0002-0782/04/0300  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.



 
