Michael A. Cusumano's "Technology Strategy and Management" column ("The Puzzle of Japanese Software," July 2005) was intriguing but took a strange direction. He wrote that one study found that Japanese software projects have only one-twentieth as many defects as their U.S. counterparts, and that "so few bugs may suggest an overly rigid style of development and a preoccupation with 'zero defects' rather than innovation and experimentation." How can he criticize a preoccupation with reliability? Most ordinary users would love zero-defect software and would gladly dispense with all the new features, many of which they will never use. Western companies ship now and patch later. I hope some Japanese software companies give them the kicking they deserve.
Lawrence C. Paulson
Cambridge, U.K.
Author Responds:
We know from research in many industries that innovation in product development requires experimentation, trial and error, and thus iterations that often produce defects, especially in early versions of a product. Japanese programmers are too often reluctant to depart from a highly structured waterfall style. Yes, it produces high reliability, but it is not a very creative way to develop new software for fast-paced, changing markets. The Japanese tend to treat every software project as if it were mission-critical, even when it is not.
Michael A. Cusumano
Cambridge, MA
Development of the relational database represents an important software milestone (more important than, say, fourth-generation languages), but Robert L. Glass did not mention it in his "Practical Programmer" column ("'Silver Bullet' Milestones in Software History," Aug. 2005). The relational database put large-scale data storage and manipulation on a sound theoretical footing and made possible many of the large-scale applications we use today.
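That footing is E.F. Codd's relational algebra, in which a query is a set expression independent of any storage layout. As a minimal illustration (the Employee relation here is hypothetical),

\[
\pi_{\mathit{name}}\bigl(\sigma_{\mathit{dept}=\text{'CS'}}(\mathit{Employee})\bigr)
\]

names the employees of one department without saying anything about how the data is stored, leaving the system free to choose indexes and evaluation order.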
Stephen Schmid
San Rafael, CA
In their "Viewpoint" column ("Is the Thrill Gone?," Aug. 2005), Sanjeev Arora and Bernard Chazelle sought a remedy to an apparently paradoxical situationthat even as computers pervade our everyday lives, computer science in the U.S. faces a deep crisis. Attributing it to "our collective failure as educators, researchers, and practitioners to articulate a cogent, compelling narrative about the science of computing (as opposed to just the technology)," they generally blamed programming for the falloff in both students enrolling in CS and research funding.
While I agree with them on the benefits of integrating CS with other disciplines, it is also important to show how, through an epistemological approach, other disciplines lead to the development of computer applications. A first step in this direction was Richard M. Karp's 1985 Turing Award Lecture, which examined the historical development of an aspect of discrete mathematics and showed how theoretical computer science emerges from a set of mosaic tesserae that combine like a puzzle.
C.P. Snow, in his 1959 "Two Cultures" lecture, cited the divergence that had developed between the humanistic and the scientific cultures during the 19th and 20th centuries. Since the end of the 18th century, when the philosopher Immanuel Kant and the mathematician Pierre-Simon Laplace joined forces in a cosmological theory, the humanistic and scientific disciplines have evolved so far apart that, while any cultured person knows of Shakespeare and at least some of his works, few humanists know of the second law of thermodynamics; of how propositional and predicate calculus, along with the logic-programming paradigm, derive from Aristotle's syllogism; of how developments in electron physics and abstract algebra gave rise to modern integrated circuits; or of how theoretical linguistics and abstract algebra merge in the implementation of computer-language translators.
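To take just one of those threads: the classical syllogism "All men are mortal; Socrates is a man; therefore Socrates is mortal" is rendered in predicate calculus as

\[
\forall x\,\bigl(\mathit{Man}(x)\rightarrow \mathit{Mortal}(x)\bigr),\quad \mathit{Man}(\mathit{socrates})\ \vdash\ \mathit{Mortal}(\mathit{socrates}),
\]

precisely the kind of inference a logic-programming system such as Prolog carries out mechanically by resolution.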
Work in this direction could help attract brilliant young people to CS. In the 1980s, I used this approach, with good results, to introduce the field to teachers in Italian high schools (liceo classico).
Fabio A. Schreiber
Milano, Italy
The "Spyware" special section (Aug. 2005) drove home at least one mountain-size point: how many dishonest people there are in the world, and the extraordinary time, effort, and money the rest of us must expend to have a fighting chance of not being their next victim. Fixing the root cause of the problem is probably beyond the capabilities of ACM, but you did a good job of identifying the problem.
Hal Lowe
Oswego, IL
While the special section on spyware (Aug. 2005) contributed to my awareness of the subject, I feel I'm no better informed, for several reasons. The first is that some of the articles lacked information that would be useful to "users," reporting instead what researchers know or think about spyware, and even what they think about other people's thoughts about spyware.
Several articles had similar introductions, including tables listing the various categories of spyware. It might have been better to collect such material in the section's Introduction (by Thomas F. Stafford); the articles could have referred to it there instead of repeating it.
And I would have preferred hard facts, and perhaps case studies in case hard facts were too difficult to come by. For example, I'd like to know whether, and how often, key-logging has led to identity theft or similar criminal behavior. Or is the problem only a potential danger, as suggested in one article? In that sense, I liked Kirk P. Arnett and Mark B. Schmidt's "Busting the Ghost in the Machine."
Günter Rote
Berlin, Germany
Concerning the special section on spyware (Aug. 2005): in the 1980s I devised a virus-proof PC but was unable to interest anyone in building or investing in it. IBM said it had its own approach, and the anti-virus vendors would apparently not have welcomed being put out of business. Generally, nobody seemed to care about being infected by a virus.
With the proper architecture, however, you can build a totally safe PC; trying to "add" security after the fact does not work. I have not addressed the issue of denial-of-service (DoS) attacks, though fixing the Internet's protocols would be politically far more difficult than changing the PC architecture. Covert channels could be fixed, too, contrary to conventional wisdom.
Instead of trying to fix security problems, we should devise a PC immune to them. Devising a systems architecture to support this goal is not rocket science but a matter of interest and will. Unfortunately, the public doesn't seem to care, and the government probably won't do anything visible until a disaster occurs. If the government did act proactively, or if a sufficient number of companies banded together, we could have a totally virus-, trojan-, keylogger-, adware-, scumware-, dialer-, spyware-, hijacker-, rootkit-, Flash-, and ActiveX-free PC in 18 months, including a basic operating system.
We would architect for quality, not to be first to market with some useless "feature." Moreover, our machine would not crawl, as so many do today, forced to carry the deadweight of buggy, unwanted features.
William Adams
Springfield, VA
The articles in the spyware special section (Aug. 2005) mentioned "PC" over and over, yet with one exception (Microsoft Windows) ignored operating systems. Were the authors writing about all personal computers, even those running Mac OS X Tiger, Linux Fedora Core 4, or OpenBSD? Should I assume all OS flavors have the same problems with spyware? If they do, why didn't the authors say so? If they don't, why didn't they say so? Illustrious authors contributed articles, so I know they know more about these questions than I could find in their articles.
David Anderson
Burlingame, CA
Guest Editor Responds:
As with virus attacks, the easy targets get the most attention from malware developers. The vulnerabilities of Windows-based systems are widely known, and, owing simply to market share, Windows-based systems present far more available spyware targets than other systems do.
This does not mean that Macs, for example, are inherently safe by virtue of having a different operating system or a smaller market share. The same consideration applies to Unix/Linux variants; in fact, given the recent rise in popularity of open source operating systems, one could expect malware developers to be busily figuring out strategies for penetrating these systems, too.
The main purpose of remote-monitoring spyware seems to be to steer customer-relationship-management-style targeted advertising to specific users over Internet connections. From a mass-marketing perspective, this generally implies a consumer audience, meaning Windows systems are the prime target. But to say that corporate platforms running Unix/Linux are not at risk would be inaccurate, since industrial espionage is likely a more compelling motivation for spying on corporate computers than any desire to target consumers with advertising through their office machines.
Thomas F. Stafford
Memphis, TN
I enjoyed the special section "Designing for the Mobile Device" (July 2005) because it addressed wireless, handheld mobility. However, it emphasized the consumer market, even as businesses are adopting handheld communications mobility for all aspects of their person-to-person contact, as well as for their information delivery (messaging).
Just as the PC invaded the enterprise, the personalized, wireless, handheld communication device is supplementing or replacing wired desktop communication devices like telephones and PCs. Person-to-person communication is the primary application motivating businesses, as well as consumers, to carry wireless communication devices. Everything else is secondary, even information access and retrieval, information delivery from application services, and entertainment.
The challenge is how to render personal communications device- and modality-independent while allowing individual users to satisfy their personal time priorities, device needs and preferences, and environmental circumstances (in a car, in an elevator, or in a noisy setting). An application that proactively delivers time-sensitive information faces the same multimodal challenge in making contact with a recipient as a human caller does, along with the added burden of being recognized as a legitimate (non-spam) contact initiator. Presence-management "buddy" lists must also include "application buddies" for notification and delivery of urgent information (such as whether an airline flight is delayed or canceled).
Enterprise communications should allow all forms of multimodal access, including the ability to dynamically shift between modes as the situation warrants. For example, the recipient of an email message should be able to respond through, say, an instant message exchange, which might then be escalated to a voice (or video) conference.
Person-to-person communication should be treated as a distinct "application," independent of business-process applications. Personal communication interfaces must be geared to standard communication control functions that work across wired and wireless networks, as well as across communication modalities, with a variety of handheld device form factors.
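To make that concrete, here is a minimal sketch, in TypeScript, of what such standard, modality-independent control functions might look like; every name in it is a hypothetical illustration, not an existing API:

// Hypothetical sketch of modality-independent communication control.
// All identifiers are illustrative assumptions, not an existing API.
type Modality = "email" | "im" | "voice" | "video";

interface Presence {
  user: string;
  available: Modality[];                      // modalities the user accepts right now
  context?: "in-car" | "elevator" | "noisy";  // environmental circumstances
}

interface Session {
  id: string;
  participants: string[];
  modality: Modality;
}

// Standard control functions, independent of device, network, and modality.
interface CommunicationControl {
  initiate(from: string, to: Presence, preferred: Modality): Session;
  escalate(session: Session, to: Modality): Session;  // e.g., email -> IM -> voice
  notify(to: Presence, message: string): void;        // an "application buddy" delivering urgent information
}

// One possible policy: honor the initiator's preference only if the
// recipient's presence allows it; otherwise fall back to what does.
function chooseModality(p: Presence, preferred: Modality): Modality {
  return p.available.includes(preferred) ? preferred : p.available[0];
}

Escalating an email exchange to instant messaging and then to a voice conference is then just two successive escalate calls on the same session, with a policy like chooseModality deciding what the recipient's current circumstances permit.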
Arthur M. Rosenberg
Santa Monica, CA