Many thanks for Cynthia Dwork's article "A Firm Foundation for Private Data Analysis" (Jan. 2011), explaining why, in trying to formalize perfect privacy, we cannot use the criterion of the late Stockholm University statistician Tore E. Dalenius that, by asking allowed queries of a statistical database, we should not be able to learn new (private) information about a particular individual. When preparing to discuss Dwork's article at a recent colloquium in our computer science department, we came up with an even simpler explanation of this impossibility:
One important purpose of collecting statistical data is to help identify correlations between, say, weight and blood pressure. Suppose, for example, it turns out that blood pressure is equal to weight, and we know that person A (not in this database) weighs 180 pounds. Without the database, A's blood pressure might be private, but once we learn the perfect correlation from it, we can conclude that A's blood pressure is 180.
In real life, we never see such perfect correlation, but, by analyzing the database and discovering some correlation, we learn more about the probability of different values of A's blood pressure than we otherwise would.
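To make the inference concrete, here is a minimal sketch in Java; the numbers and class name are hypothetical (only person A's 180-pound weight comes from the example above), and the point is simply that a relationship learned from released database records lets anyone estimate A's blood pressure even though A is not in the database:

public class CorrelationLeak {
    public static void main(String[] args) {
        // Hypothetical (weight, blood pressure) pairs released from the database.
        double[] weight = {150, 160, 170, 190, 200};
        double[] bp     = {150, 160, 170, 190, 200}; // perfectly correlated, as in the example

        // Ordinary least-squares fit: bp ~ slope * weight + intercept.
        int n = weight.length;
        double sw = 0, sb = 0, sww = 0, swb = 0;
        for (int i = 0; i < n; i++) {
            sw  += weight[i];
            sb  += bp[i];
            sww += weight[i] * weight[i];
            swb += weight[i] * bp[i];
        }
        double slope = (n * swb - sw * sb) / (n * sww - sw * sw);
        double intercept = (sb - slope * sw) / n;

        // Person A is not in the database, but A's weight (180 lb) is known.
        double inferredBp = slope * 180 + intercept;
        System.out.println("Inferred blood pressure for A: " + inferredBp); // 180.0
    }
}

With the perfect correlation of the example, the fit yields slope 1 and intercept 0, so A's blood pressure is inferred to be exactly 180; with the weaker correlations seen in real data, the same fit still narrows the range of plausible values.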
Vladik Kreinovich and Luc Longpre,
El Paso, TX
The Future Tense essay "Rebirth of Worlds" (Dec. 2010) lamented the demise of historic, online interactive 3D destinations. Since 1997, when they first appeared on the Web, virtual worlds have inspired artists, engineers, and scientists alike to explore and build the emerging frontiers of cyberspace. As Rumilisoun (a.k.a. William Sims Bainbridge) wrote, despite the wonderful destinations across entertainment, education, and community, we are left to ask, "How can I still get there?"
What came through clearly in "Rebirth of Worlds" is the author's nostalgia for the experience of those worlds: their realities and possibilities. Such compelling emotional, perceptual, existential content may indeed be gone for good. Loss of an appealing game world is lamentable, but the loss is even more disheartening with engineering and scientific content, where we require the durability and reproducibility of our interactive 3D digital content (models, behaviors, worlds, and scenarios) for decades to come.
Enterprise-scale adopters, along with many others, also feel the pain of virtual-world babelization, as developing and maintaining innovative assets like worlds, avatars, and business logic across platforms become increasingly complex. Content models and network protocols are fragmented, making it difficult to create integrated information spaces and a compelling user experience. In the tumult of proprietary virtual-world technology, lack of reuse is a major obstacle to achieving improved efficiencies and economies of scale.
In the face of this market churn, there is a proven path for interactive 3D environments that includes royalty-free, extensible content models designed for the Web and semantic integration. Consumers and computer professionals alike should therefore demand and participate in the development of international standards needed to raise the greatest common denominator of future-proof 3D content.
Nicholas F. Polys
(president of Web3D Consortium), Blacksburg, VA
The lock-free pop operation Nir Shavit described in his article "Data Structures in the Multicore Age" (Mar. 2011) depends on the semantics of the Java implementation in an important way. The push operation allocates a new node object during the call, and it is this object that is placed on the stack. In addition, the assignment of oldTop at line 13 creates a reference to the top node, keeping it alive until the return from the function.
This is of interest because if any of these constraints is not true, the pop operation would not work. In particular, if one were to naively implement a push-and-pop mechanism along these lines in a language like C++, letting clients provide the object to be pushed and returning that object to the client when the pop occurred, the program would be wrong. This is because, after fetching oldTop (line 13) and newTop (line 17), other threads could remove the top node, remove or push other nodes, then push the top node again. The compareAndSet would then succeed, even though newTop was no longer the correct new value. Similarly, if the implementation allocated a node in push and freed it in pop, the program would be wrong, because the freed-node storage might be reused in a subsequent push, leading to the same error.
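As a concrete illustration, here is a minimal sketch of such a lock-free stack in Java (essentially a Treiber stack); the class and identifier names are illustrative, not Shavit's exact listing:

import java.util.concurrent.atomic.AtomicReference;

public class LockFreeStack<T> {
    // Each push allocates a fresh node; clients never supply or reclaim nodes.
    private static final class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    public void push(T value) {
        Node<T> newTop = new Node<>(value);   // new node object on every call
        Node<T> oldTop;
        do {
            oldTop = top.get();
            newTop.next = oldTop;
        } while (!top.compareAndSet(oldTop, newTop));
    }

    public T pop() {
        Node<T> oldTop;
        Node<T> newTop;
        do {
            oldTop = top.get();               // local reference keeps this node alive
            if (oldTop == null) return null;  // empty stack
            newTop = oldTop.next;
        } while (!top.compareAndSet(oldTop, newTop));
        return oldTop.value;
    }
}

Because the garbage collector cannot recycle a node while any thread still holds a reference to it, and because nodes are never reused, the compareAndSet in pop cannot succeed against a stale top; a port that let clients supply nodes, or that freed and reused node storage, would reintroduce exactly the error described above.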
The Java implementation also involves hidden costs, including allocation and garbage collection of the node objects and the concurrency control required in the memory-allocation system to make it work. These costs must be considered, as the mechanisms that incur them are essential to the correctness of the program. Be warned against using apparently identical algorithms that do not satisfy these hidden constraints.
Marc Auslander,
Yorktown Heights, NY
I regret that Joel F. Brenner responded to my letter to the editor "Hold Manufacturers Liable" (Feb. 2011) concerning his Viewpoint "Why Isn't Cyberspace More Secure?" (Nov. 2010) with two strawman arguments and one outright misstatement.
Brenner said software "is sold pursuant to enforceable contracts." As the Viewpoint "Do You Own the Software You Buy?" by Pamela Samuelson (Mar. 2011) made clear, software is not "sold." Every EULA insists software is licensed and only the media on which it is recorded are sold; a series of court decisions, of which the Vernor v. Autodesk decision Samuelson cited is the most recent and one of the most conclusive, has upheld this stance.
This mischaracterization by Brenner is one of the keys to understanding how manufacturers of such shoddy goods get off essentially scot-free. If software were actually sold, the argument that it should be exempt from the protections of the Uniform Commercial Code would be much more difficult to maintain, in addition to other benefits thoroughly discussed elsewhere (including by Samuelson in her column).
Even though EULAs have been held enforceable, such a determination comes at the expense of the consumer. Almost without exception, EULAs have the effect of stripping the consumer of essentially all reasonable rights and expectations, compared with other goods and services. And while click-through and shrink-wrap EULAs have indeed been found to be enforceable, many reasonable people (including me) believe it should not be the case, since the vast majority of consumers do not read these "contracts" and do not understand their consequences. Brenner apparently does not consider them a significant problem.
Finally, Brenner simply reiterated his assertion that "Congress shouldn't decide what level of imperfection is acceptable." I agree. There are basic consumer protections that apply to all other goods, as embodied in the UCC. Neither a further act of Congress nor detailed specifications of product construction are required to give consumers the right to expect, say, a stove, properly used and maintained, will not burn down their house. The corresponding right of freedom from gross harm, like the other protections of the UCC, is not available for software, though it and they should be; Brenner apparently disagrees.
I emphasized good engineering practices in my February letter not because (as Brenner seems to believe) I thought they were sufficient to guarantee a reasonable level of product quality, but because they are well-established means toward the end of meeting the basic standards of non-harm and reliability taken as a given for all other products. In any case, Brenner did not say why he thinks a different process should be used for setting functional safety and reliability standards for software than for other consumer goods. Simply asserting "software is different" is not a reasoned argument.
L Peter Deutsch,
Palo Alto, CA
Thanks to Deutsch for correcting my error. Software is of course licensed rather than sold. As Deutsch says, this is why UCC product-liability standards for purchased goods haven't improved software quality. But his point strengthens my argument. I was explaining, not defending, the status quo, which is lamentable precisely because liability is weak. I cannot fathom why Deutsch thinks I'm indifferent to higher engineering standards for software. They represent the only basis on which a liability regime can be founded, even for licensed products.
Joel F. Brenner,
Washington, D.C.
Sarah Underwood's news story "British Computer Scientists Reboot" (Apr. 2011) incorrectly attributed statements by King's College London professor Michael Luck to King's College London professor Andrew Jones. This has been corrected in the online article. We apologize for the error.
Communications welcomes your opinion. To submit a Letter to the Editor, please limit your comments to 500 words or less and send to [email protected].
©2011 ACM 0001-0782/11/0500 $10.00
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from [email protected] or fax (212) 869-0481.