
Communications of the ACM

Forum



I found Erickson and Siau's article "E-ducation" (Sept. 2003) an eloquent but, alas, typical example of the inadequacy of many technological approaches to education. Nothing I've read in the technical literature has dared ask whether education needs computer technology to begin with or, if it does, what role that technology might usefully play. The general attitude is one of technological primacy: because technology exists, it must be used, and educators must find ways to use whatever we offer. The technological imperative is stronger than the pedagogical one.

For IT vendors, this is an appealing modus operandi, not only for the equipment and software they sell to schools but, mainly, for the prospect of creating generations of lucrative techno-enthusiasts unable to take an intellectual step without a computer's help.

The only obligation of educational institutions should be toward their students; we are here to help give them a foundation for rich and rewarding intellectual lives. Teaching them a job is not our primary goal. Transforming them into techno-addicts is the antithesis of one.

The authors cited a decade as a likely time horizon for dramatic changes in the classroom. A decade is indeed a long time for a computer vendor whose product life cycle is likely less than five years. But educational institutions teach teenagers and young adults mental habits that will accompany them the rest of their lives. The time horizon of what educators do is closer to 50 years than to five.

The authors concluded by saying the next 10 years "should be extremely exciting and fast-paced for educators." The myth of fast-paced changes and of the struggle to keep up is rooted in industry, though even there, its social consequences can be dire. No attempt was made to justify technology's haphazard application to education.

The article also reflected a cavalier attitude toward the prevailing commercial influence on education. Though it included a "real-world caveat" to educators, overall, it accepted the idea that public funding of education is destined to decrease and that the presence of "commercial partners" in education will be with us for a long time to come.

All this still leaves us with the question of the computer's role in education. Computers are valid technical instruments for encyclopedic information searches, with a solid place in any school library (along with a librarian, of course). Whether a computer belongs in the classroom, apart from special job-training classrooms, is debatable. While some children respond well to computers, just as some respond well to the violin, I know of no school board pressuring schools to put a violin in every classroom.

Computers can be useful instruments, even in schools, as long as the impetus for their use comes from the needs of educators, not from pressure to use technology. An excessive fixation on them (often driven by commercialism) and on the silliness of e-education will result, I'm afraid, only in the creation of a lot of gullible e-diots.

Simone Santini
La Jolla, CA


Minds Over Math

It seems to me that if "Why CS Students Need Math" is worthy of being the main theme of a special section of Communications (Sept. 2003), then the underlying question must be the topic of some debate in the community. Consequently, if it is a worthy topic of debate, does it not seem reasonable to make some attempt to cover both sides?

In a world where more and more people use computational devices in an ever-wider range of contexts, let me ask a simple question: Which of the following is the more significant insofar as computers are concerned?

  • Declining literacy in math on the part of CS students, or
  • General illiteracy of computing professionals in the human aspects of computing?

We live in a world where, despite the real human and cultural implications of ubiquitous computing, virtually no university with a CS degree program requires its students, in order to graduate, to write a program that will be used by another human being.

Let me beg to differ with guest editor Keith Devlin. CS is not "entirely about abstractions." Responsible CS is as much about people as it is about machines, code, or abstractions.

The historian of technology Melvin Kranzberg spoke of three laws:

  • Technology is not good;
  • Technology is not bad; and
  • Technology is not neutral.

It is more important for a computer scientist to understand their implications (especially of the third) than it is to know the Peano Postulates.

Yes, the capacity for abstract thought is important. So is a basic foundation in math. But like all components of the curriculum, they must be balanced against other aspects of the discipline. Ultimately, CS is about people and the effect our profession has on them. This is not an abstraction but a simple truth. It is time our profession reflected it.

Bill Buxton
Toronto

I fully agree with Keith Devlin and with Kim Bruce et al. (Sept. 2003) that universities should provide foundations rather than specific techniques. But, following the same logic, why is writing neglected in so many CS programs?

While many institutions require three or more semesters of math, few require more than one writing course beyond the first-year composition classes many students test out of. This is despite research consistently indicating that engineering graduates entering the workforce are surprised to discover the central role writing plays in their careers. Survey after survey suggests employers rank communication among the top skills they need in their employees, and that it is an area where CS and other engineering majors are most lacking.

If the goal is to focus on fundamentals rather than specific techniques that can be taught on the job, why not require CS students to take a technical writing class designed to prepare them for the communication demands they will inevitably face, no matter where their careers take them?

Joanna Wolfe
Louisville, KY


Requirements vs. Components

The article "Software Reuse Strategies and Component Markets" (Aug. 2003) defined the "two key costs" of component acquisition as search cost and component price. Apparently included in the search cost is the cost of evaluating the components as a minor element of the acquisition cost. But the component evaluation cost can govern the make vs. buy decision, especially for medical and avionics applications where regulatory agencies require evaluation of all components.

Ravichandran et al. wrote that black-box components would be less costly than in-house development due to the "economies of scale" of a larger market. But commercial components are typically priced at what the market will bear, not according to some return-on-investment model. The simple reason, in the case of software, is that the buyer is not buying the object or the license but knowledge and/or time.

They concluded that component quality can be certified through license agreements. I'm not sure this is possible, because I don't think a precise definition of quality everyone would accept can be developed. Is quality a lack of defects, broad functionality, or component performance? Well-written software components have all these qualities and more.

To increase reuse, it is necessary to increase the number of components suitable for an application. Suitability cannot be established through standards or licenses for every component and application. Developers must establish a set of requirements and then determine how a collection of candidate components relates to these requirements. More tools to automate this process and reduce the cost of the evaluation would make more components available for reuse.
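As a rough illustration of the kind of tool support this implies, here is a minimal sketch in Python that ranks candidate components by how many of an application's requirements they claim to cover. The requirement names, the component descriptions, and the coverage score are illustrative assumptions, not an established evaluation method.

    # Sketch of automated requirements-vs-component screening (illustrative only).
    requirements = {"parse_hl7", "audit_log", "encrypt_at_rest", "runs_on_vxworks"}

    # Capabilities each candidate component claims to provide (hypothetical data).
    candidates = {
        "ComponentA": {"parse_hl7", "audit_log"},
        "ComponentB": {"parse_hl7", "audit_log", "encrypt_at_rest"},
        "ComponentC": {"encrypt_at_rest", "runs_on_vxworks"},
    }

    def coverage(claimed, required):
        """Fraction of the required capabilities a component claims to satisfy."""
        return len(claimed & required) / len(required)

    # Rank candidates by coverage and report what each one is missing.
    for name, claimed in sorted(candidates.items(),
                                key=lambda kv: coverage(kv[1], requirements),
                                reverse=True):
        missing = sorted(requirements - claimed)
        print(f"{name}: {coverage(claimed, requirements):.0%} coverage, missing {missing}")

A real tool would still have to verify the claims rather than merely match them, but even this much automation lowers the cost of the initial screening.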

Carl J. Mueller
Carol Stream, IL

Authors Respond:
The evaluation cost we described as a minor element of the component acquisition cost refers to an assessment of a component's fit with requirements. Since evaluation of a black-box component by definition does not involve analysis of code, it is limited to an assessment based on the documentation provided by the developers. Mueller points out that in certain domains, including medicine and avionics, government regulations require a thorough quality evaluation of components. We agree that in such domains evaluation may require more effort. But our basic argument, that evaluation costs are likely to be lower for black-box components than for white-box components, is still valid.

We agree that value-based pricing applies to commercial software components. However, the upper price boundary is the cost of in-house development. To be competitive, a commercial component provider prices its components below this boundary, unless component users have a reason to pay a premium. Moreover, when markets allow component developers with different cost structures to market their software, there is bound to be an effect on prices. It already happens in the form of components developed in low-cost locations, including India.

As Mueller points out, the definition of quality in component markets should include at least a lack of defects, functionality as documented, and performance as documented. Component providers lacking in any of these categories may be unable to prevail in the market. In other domains, a component user may rely on the reputation of the supplier or on quality standards defined in license agreements.


Make Every Vote Count

Could there be a solution to the reliability problem of e-voting systems ("Voting and Technology: Who Gets to Count Your Vote?," Aug. 2003)? Most of what I say here originated in a project I've been working on (with Lila Kari at the University of Western Ontario) to develop a secret, secure, reliable voting system usable over any network. Dill et al. emphasized that although e-voting is increasingly popular, the systems are far from trustworthy. They were especially concerned about the use of direct-recording electronic (DRE) machines to count and record votes. Since no paper records are associated with the votes cast, there is no way to perform a recount when the outcome is in doubt.

Printing a receipt for each voter might solve the problem—but only partly. Voters would still have no way to know whether the final results of an election were calculated correctly. I strongly agree that voters and candidates need proof that their votes were indeed counted correctly; runoffs can be decided by only a few votes out of millions cast.

Our approach to DRE thus represents an enormous improvement in system reliability. The idea we implemented was introduced in 1991 by Kari et al. in the journal Computers and Security, and their protocol has proved to be time independent. Here's how it works: As in other voting systems, voters interact with computers to record their votes. The system assigns a number (a large positive integer) to every vote cast. The number, unique for each vote, is produced by a one-way cryptographic hash function. It is presented to voters after they cast their votes, and they are told to take note of it. As votes are cast, the numbers are published as confirmation to the voters that their votes were indeed recorded. The numbers are not linked to candidates at this stage, because the system cannot allow voters to be influenced by interim results. After the final voting date, all voting numbers are published, along with the names of the candidates. Voters can then check whether their votes were counted correctly, and the candidates can check the results by counting the votes.
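To make the mechanism concrete, here is a minimal sketch in Python of the receipt-number idea as described above. It is only an illustration, not the Kari et al. protocol itself: the bulletin-board class, the use of SHA-256, and the random salt attached to each vote are assumptions of the sketch.

    import hashlib
    import secrets

    class BulletinBoard:
        """Toy model of the receipt-number scheme sketched above (illustrative only)."""

        def __init__(self):
            self._records = []            # (receipt, candidate) pairs, withheld until the close
            self.published_receipts = []  # receipts published as votes are cast

        def cast_vote(self, candidate):
            # A random salt keeps identical votes from colliding; the one-way hash
            # yields a large positive integer that reveals nothing about the ballot.
            salt = secrets.token_hex(16)
            receipt = int(hashlib.sha256(f"{candidate}:{salt}".encode()).hexdigest(), 16)
            self._records.append((receipt, candidate))
            self.published_receipts.append(receipt)   # the voter is told to note this number
            return receipt

        def close_election(self):
            # After the final voting date the receipts are published with candidate
            # names, so voters can verify their votes and anyone can repeat the tally.
            return list(self._records)

    # Usage: a voter keeps the receipt and later confirms it appears, with the
    # intended candidate, in the final published list; anyone can recount from it.
    board = BulletinBoard()
    my_receipt = board.cast_vote("Candidate A")
    board.cast_vote("Candidate B")
    final = board.close_election()
    assert (my_receipt, "Candidate A") in final

    tally = {}
    for _, candidate in final:
        tally[candidate] = tally.get(candidate, 0) + 1
    print(tally)   # {'Candidate A': 1, 'Candidate B': 1}

The salt is what keeps two votes for the same candidate from producing the same number, and publishing the full list after the close is what lets both voters and candidates repeat the count.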

Voters and candidates alike would thus be able to count the votes. And a secure and reliable paperless voting system would be on its way to serving us all.

Halina Kaminski
London, Ontario, Canada



Please address all Forum correspondence to the Editor, Communications, 1515 Broadway, New York, NY 10036; email: [email protected].


©2003 ACM  0002-0782/03/1200  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.



 
