We were happy to see the special section on voting systems (Oct. 2004). However, like most of the recent interest in such systems, both in research centers and in the popular press, it largely ignored usability. This is a serious blind spot, especially since user interface design (as in the butterfly ballot), rather than security, was an important factor in the Florida 2000 voting debacle.
While the security of voting systems is clearly important, it isn't the whole problem. Effective election technology must also help voters cast ballots for the candidates they intend to choose and help poll workers manage the equipment. Moreover, the security mechanisms themselves must be understandable and usable. A large body of research shows that ballot design strongly affects whether voters' intentions are accurately recorded. Even if security and reliability are assured, e-voting could still be a disaster if voters cast ballots for candidates they don't want to elect.
We hope the great and appropriate concern so many of our colleagues have about voting system security will expand to include user interface issues and the complete voter experience. E-voting machines could improve the voting process and increase citizen trust, but only if attention is paid to user interface design and to every other part of the system, and if public awareness of these issues is raised.
SIGCHI U.S. Public Policy Committee:
Ben Bederson
University Park, MD;
Harry Hochheiser
Baltimore;
Jeff Johnson
San Francisco;
Clare-Marie Karat
Hawthorne, NY;
Jonathan Lazar
Towson, MD
It is unfortunate that decisions concerning e-voting system architecture are in the hands of politicians and the e-voting industry, rather than in those of computer scientists. This makes it unlikely that the authors of the special section on voting systems (Oct. 2004) will be able to influence the direction of future e-voting systems.
What is needed is a worldwide standard that guarantees correct operation independent of any particular e-voting implementation. Regardless of how a system is designed, it would be secure as long as the standard was in place and upheld. Such a standard would make certification of the integrity of a particular e-voting system's software unnecessary; it would treat all systems as "untrustable."
Using RSA encryption and a paper audit trail, such a standard is certainly possible.
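As a minimal sketch of how a machine-independent paper audit record might work, assuming a Python environment with the cryptography package (the ballot fields, key handling, and use of RSA signatures here are illustrative assumptions, not part of any existing standard):

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Election-authority key pair; in practice the private key would be kept offline.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Hypothetical ballot record; a real standard would define the exact fields.
ballot_record = b"precinct=42;contest=governor;choice=candidate_B;serial=000123"

# The machine signs the record, and the signature is printed on the paper audit slip.
signature = private_key.sign(
    ballot_record,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Anyone holding the published public key can later check the paper record
# without trusting the voting machine's software at all.
public_key.verify(
    signature,
    ballot_record,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("paper record verified against the published public key")

The point of the sketch is that verification depends only on the public key and the paper record, not on certifying the software inside any particular machine.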
John David
Fairfield, CT
I am glad ACM took a stand against the current e-voting systems. ACM needed to look only as far as the IEEE Computer Society's Technical Council on Software Engineering to see that even the "software engineering" experts are incapable of running a secure online election. I recently received a postcard explaining that I should vote "online by September 28, 2004 using the access numbers printed with your address on this card." I can only hope all the people handling the card before I received it did not decide to vote as me.
Frances C. Bernstein
Bellport, NY
Bruce Schneier's "Inside Risks" column "The Nonsecurity of Secrecy" (Oct. 2004) explored the appealing notion that open scrutiny enhances security. It is certainly true that many secrecy-based security approaches produce nothing more than a false sense of security. And many computer software vendors tout secrecy as a way of covering their own mistakes. But I can't agree that secrecy is never part of the equation.
Schneier did not address the key fallacy that continues to afflict public discussion of computer security, namely, that a system is either "secure" or "insecure." As long as we chase the mirage of a secure system we will ignore the graded steps that contribute to security. Security includes multiple levels and types of defenses, and secrecy is an important aspect of such a defense.
Computer security and computer attacks revolve around two axes: cost to attack, in terms of human effort, and time to get through, especially for verifying attack success. Someday, hopefully, computer security will begin to entail a third axis, namely, the attacker's own risk of capture and punishment.
Secrecy imposes a burden on the attacker that slows the attack and obscures its results. It definitely keeps the lightweight thugs out of the running. The key thing an explorer of any sort needs when trying to map, understand, and master unknown terrain is a quick probe-record-test-verify feedback loop. Secrecy slows this exploration, even for serious professional attackers.
The notion of secrecy cannot realistically be understood as "never disclosed." However, if you slow the disclosure or discovery of a secret long enough, you might win. A DES key is good only for so long. I would suggest that Saddam Hussein's 1991 practice of moving his Scud missile batteries around is illustrative. U.S. forces received intelligence on the locations of many of the batteries, but it came too late for them to act effectively. It should be the same way with attackers exploring critical computing resources.
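To make concrete the idea that a secret needs to hold only for a limited time, consider a back-of-the-envelope calculation of a brute-force DES key search; the search rate below is an assumed figure chosen purely for illustration:

keyspace = 2 ** 56             # DES has 56 effective key bits
keys_per_second = 1e9          # assumed search rate, for illustration only
expected_tries = keyspace / 2  # on average, half the keyspace must be searched
seconds = expected_tries / keys_per_second
print(f"expected search time: {seconds / 86400:.0f} days")  # about 417 days at this rate

At that assumed rate the key falls in roughly a year; if the protected information loses its value sooner than that, the secrecy has done its job.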
Andrew D. Wolfe, Jr.
Wakefield, MA
As Volker Wulf and Matthias Jarke wrote in their article "The Economics of End-User Development" (Sept. 2004): "EUD leads to more efficient appropriation processes by empowering users to adapt software to their local needs." Some users are not satisfied with simply being users; they are inclined to explore new things instead of limiting themselves to using applications the way programmers have determined for them. Motivated by the desire to accomplish a given task more efficiently, they wander proudly through the technology environment, where they are further motivated to explore the limits of their own knowledge.
Wulf and Jarke wrote: "In non-EUD environments, all adaptations must be realized by the software vendor, external consultants, or in-house development team." Users periodically make new demands regarding application features and functions. It would be wonderful if users, without formal training in programming, could implement these functions through drag-and-drop buttons. Users programming their own computers would fundamentally change the nature of software and software development tools.
In order to make the power of computers fully available to all users, programs would need not only great user interfaces that make programming easier but also a shift from easy-to-use to easy-to-develop interactive software. End users would then be inspired to build tools to help themselves and to create yet more tools. The phenomenal growth of the Internet reflects the need to create active and interactive Web content. Without user participation, applications might end up as impressive artifacts with nothing to do.
EUD encourages users to learn, even when starting over from the seeds of failure. Being familiar with programming and the structure of software would also help make their use of computers more effective.
What can be done in office environments that lack staff to maintain the code or to know what the code is supposed to do? Management, as well as developers, must accept accountability for the artifacts, though IT professionals must be in charge when deploying mission-critical applications. No doubt these goals pose challenges for programming languages and environments if they are to be used by a large population of end users. Nonetheless, as the use of computers and software becomes ever more pervasive, we can anticipate an ever-greater need for user-based programming and customization.
Hong-Lok Li
Vancouver, BC
Peter Denning's "The Field of Programmers Myth" (July 2004) was a step in the right direction toward making computer science more scientific, improving the connection between the teaching of CS and the needs of the information industry, and giving new impetus to the development of IT.
My aim here is to explore some of the issues Denning touched upon. The first is "system thinking," a mental practice of engineering and system development that conceptualizes problems through organized representation. Denning wrote that the fundamental obstacle to systematically building reliable, dependable, useful computing systems was, and always will be, the complex behavior of large software systems. System thinking is a tool that helps tame this complexity.
System thinking comes in several varieties: problem-oriented, when systems are organized or separated according to problems; function-oriented, when systems are organized or separated according to the functions they perform; and relation-oriented, when systems are organized or separated according to relationships among elements and components. System thinking also involves circular causation, in which a variable is both a cause and an effect.
The second set of issues Denning touched upon includes human-computer interaction and the social aspects of computing practices. Computing is increasingly foolproof, invisible, and everywhere. Thus, we need to teach and learn more about the psychological, ergonomic, and social issues of computer and network development and use.
A third set of issues in this context involves how to address the problems of information in terms of data and knowledge. Though computers are called information-processing devices, they process data only in terms of future knowledge processing. The role of information is insufficiently presented in CS courses. It is thus necessary to teach not only the data structures used in programming languages and databases but also elements of information theory and the foundations of knowledge systems.
Mark Burgin
Los Angeles
Several pieces in the Oct. 2004 issue inspired the following comments:
It was nice to hear Phillip G. Armour say in his "The Business of Software" column ("Not-Defect: The Mature Discipline of Testing") that specifications will never be complete and that the main issue in software development is dealing with what we do not know about the problem to be solved. If only this perspective were accepted more widely.
The question "Who Values Certification?" asked by Casey G. Cegielski in "Technical Opinion" is one I have dealt with personally, as my organization's management has encouraged me to certify in certain technologies, even as my technical colleagues say, "Certification isn't worth the paper it's written on." My own opinion is more in-between. Although certification indicates that someone is motivated to learn, study independently, and take an exam, it doesn't indicate how well this person will perform on the job.
The "Forum" comment "Prepare Graduates for Industrial Reality" seems to be from the school of thought that claims, "I have a degree in engineering, therefore I am better than anybody else." I am tired of it. One reason I left the faculty of engineering and went into computer science was just this attitude from the majority of students and faculty.
Thanks for a great magazine.
Nicholas Roeder
Calgary, Alberta, Canada