
Communications of the ACM

Letters to the Editor

Free Speech for Algorithms?



In "Regulating the Information Gatekeepers" (Nov. 2010), Patrick Vogl and Michael Barrett said a counterargument against the regulation of search-engine bias is that "Search results are free speech and therefore cannot be regulated." While I have no quarrel as to whether this claim is true, I'm astounded that anyone could seriously make such a counterargument—or any judge accept it.

Search results are the output of an algorithm. I was unaware the field of artificial intelligence had advanced to the point that we must now consider granting algorithms the right of free speech. To illustrate the absurdity, suppose I were clever enough to have devised an algorithm that could crawl the Web and produce opinionated articles, rather than search results, as its output. Would anyone seriously suggest the resulting articles be granted all the constitutional protections afforded the works of a human author? Taking the analogy further, suppose, too, my algorithm produced something equivalent to shouting "Fire!" in a crowded theater. Or, further still, perhaps it eventually produced something genuinely treasonous.

If we accept the idea that the output of an algorithm can be protected under the right of free speech, then we ought also to accept the idea that it is subject to the same limitations we place on truly unfettered free speech in a civilized society. But who would we go after when these limitations are exceeded? I may have created the algorithm, but I'm not responsible for the input it found that actually produced the offensive output. Who's guilty? Me? The algorithm? (Put the algorithm on trial?) The machine that executed the algorithm? How about those responsible for the input that algorithmically produced the output?

Unless humans intervene to modify the output of algorithms producing search results, arguments involving search results and free speech are absurd, at least until artificial intelligence has advanced to the point where machines must indeed be granted the same rights we grant our fellow humans.

Roger Neate, Seattle, WA


Authors' Response:

Neate touches a nerve concerning the increasingly complex relationship between humans and material technologies in society. Accountability in these sociomaterial settings is challenging for judge and regulator alike. In the 2003 case Search King v. Google Technology, a U.S. District Court noted the ambiguity of deciding whether PageRank is mechanical and objective or subjective, ruling that PageRank represents constitutionally protected opinions. Whether search results are indeed free speech remains controversial, meaning we can expect the debate to continue.

Patrick Vogl and Michael Barrett, Cambridge, U.K.


Science Has 1,000 Legs

It's great to reflect in Communications on the foundations of science, as in Tony Hey's comment "Science Has Four Legs" (Dec. 2010) and Moshe Y. Vardi's Editor's Letter "Science Has Only Two Legs" (Sept. 2010), and also on how the philosophy of science sheds light on the question of how many legs a natural science has.

Willard Van Orman Quine's 1951 paper "Two Dogmas of Empiricism" convincingly argued that the attempt to distinguish experiment from theory fails in modern science because every observation is theory-laden; for example, in a Large Hadron Collider experiment, scientists do not perceive, say, muons or other particles directly, but rather visual input from the computer screen displaying experimental data. The interpretation of this perception depends on the validity of many nonempirical factors, including physics theories and methods.

With computation, even more factors are needed, including the correctness of hardware design and the validity of the software packages being used, as argued by Nick Barnes in his comment "Release the Code" (Dec. 2010) concerning Dennis McCafferty's news story "Should Code Be Released?" (Oct. 2010).

For such a set of scientific assumptions, Thomas S. Kuhn coined the term "paradigm" in his 1962 book The Structure of Scientific Revolutions. Imre Lakatos later evolved the concept into the notion of a "research program" in his 1970 paper "Falsification and the Methodology of Scientific Research Programmes."

In this light, neither the two-leg nor the four-leg hypothesis is convincing. If we keep the leg metaphor at all, science is perhaps more accurately viewed as a millipede.

Wolf Siberski, Hannover, Germany


Certify Software Professionals and Their Work

As a programmer for the past 40 years, I wholeheartedly support David L. Parnas's Viewpoint "Risks of Undisciplined Development" (Oct. 2010) concerning the lack of discipline in programming projects. We could be sitting on a time bomb and should take immediate action to prevent the potentially catastrophic consequences of careless software professionals. I agree with Parnas that undisciplined software development must be curbed.

I began with structured programming, moved on to objects, and now do Web programming, and I find that software today is a mess. When I travel on a plane, I hope its embedded software does not execute some untested loop in some exotic function never previously recognized or documented. When I conduct an online banking transaction, I likewise hope nothing goes wrong.

The Web site "Software Horror Stories" (http://www.cs.tau.ac.il/~nachumd/horror.html) shows why the facts can no longer be ignored. Moreover, certification standards like CMMI do not work. I have been part of CMMI-certification drives and find that real software-development processes have no relation to what is ultimately certified. Software development in real life starts with ambiguous specifications. When a project is initiated and otherwise unrelated employees are assembled into a team, the project manager creates a process template and fills it with virtual data for the quality-assurance review. But the actual development is an uncontrolled process, where programs are assembled from random collections of code available online, often taken verbatim from earlier projects.

Most software winds up with an unmanageable set of bugs, a scenario repeated in almost 80% of the projects I've seen. In them, software from dropped projects might be revived, fixed by a new generation of coders, and deployed in new computer systems and business applications ultimately delivered to everyday users.

Software developers must ensure their code puts no lives at risk, and the profession must enforce a licensing program for all software developers. Proof of professional discipline and competency must be provided before anyone is allowed to write, modify, or patch any software to be used by the public.

As suggested by Parnas,1,2 software should be viewed as a professional engineering discipline. Science is limited to creating and disseminating knowledge. When a task involves creating products for others, it becomes an engineering discipline and must be controlled, as it is in every other engineering profession. Therefore, software-coding standards should be written into penal codes and national laws, as are the standards that guide other engineering, as well as medical, professions. Moreover, software developers should be required to undergo periodic relicensing, perhaps every five or 10 years.

Basudeb Gupta, Kolkata, India


Unicode Not So Unifying

Poul-Henning Kamp's attack on ASCII as the basis of modern programming languages in "Sir, Please Step Away from the ASR-33!" was somewhat misplaced. While, as Kamp said, most operating systems support Unicode, a glance at the keyboard shows that users are stuck with an ASCII subset (or regional equivalent).

I had the dubious honor of learning and using APL* at university in the 1970s, which required a special "golf ball" and stick-on key labels for the IBM Selectric terminals supporting it. A vexing challenge in using the language was finding one of the many Greek or other special characters required to write even the simplest code.

Also, while Kamp mentioned Perl, he failed to mention that the regular expressions made popular by that language—employing many special characters as operators—are virtually unintelligible to all but the most diehard fans. The prospect of a programming language making extensive use of the Unicode character set is a frightening proposition.
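To see the point, consider even a modest pattern for matching an email address, sketched here in Python's Perl-inspired re module (a hypothetical illustration of the letter's complaint, not an example drawn from Kamp's article):

    import re

    # Nearly every character in this pattern is a metacharacter or operator,
    # which is precisely why such expressions are hard to read at a glance.
    EMAIL = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

    print(bool(EMAIL.match("reader@example.com")))  # True
    print(bool(EMAIL.match("not an address")))      # False

If a dozen ASCII punctuation marks can render a one-line pattern opaque, a language drawing operators from the full Unicode repertoire would only multiply the problem.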

William Hudson, Abingdon, U.K.


The Merchant Is Still Liable

In his Viewpoint "Why Isn't Cyberspace More Secure?" (Nov. 2010), Joel F. Brenner said that in the U.K. the customer, not the bank, usually pays in cases of credit-card fraud. I would like to know the statistical basis for this claim. For transactions conducted in cyberspace, the situation in both the U.K. and the U.S. is that liability generally rests with the merchant, unless it provides proof of delivery or has used the 3-D Secure protocol to enable the card issuer to authenticate the customer directly. While uptake of the 3-D Secure authentication scheme may differ between the two countries, I have difficulty believing that difference translates into a significant difference in levels of consumer liability.

The process in the physical retail sector is quite different in the U.K. as a result of the EMV (Europay, MasterCard, and VISA) protocol, or "Chip & PIN," though flaws in EMV and its hardware mean that, in practice, the onus is still on the bank to demonstrate its customer is at fault.

Alastair Houghton, Fareham, England


Author's Response:

The U.K. Financial Services Authority took over regulation of this area on November 1, 2009, because many found the situation, as I described it, objectionable. In practice, however, it is unclear whether the FSA's jurisdiction has made much difference. While the burden of proof is now on the bank, one source (see Dark Reading, Apr. 26, 2010) reported that 37% of credit-card fraud victims get no refund. The practice in the U.S. is not necessarily better but is different.

Joel F. Brenner, Washington, D.C.


Format Migration or Unforgiving Obsolescence

David S.H. Rosenthal's response (Jan. 2011) to Robin Williams' comment "Interpreting Data 100 Years On" said he was unaware of a single widely used format that has actually become obsolete. Though I understand the sentiment, it brought to mind Apple's switch from PowerPC to Intel architecture about six years ago. Upgrading my company's computers in response to that switch required migrating all our current and legacy data to the new format used by Intel applications at the time. Though we didn't have to do it straightaway, since we could have kept running our older hardware and software, we had no choice but to begin migrating over time.

This decision directly affected only my company, not the entire computing world, but it was an additional factor we had to consider when addressing data exchange and sharing. Rather than facing some general obsolescence, we may all inevitably have to address format obsolescence as a natural consequence of IT's historically unforgiving evolution.

Bob Jansen, Erskineville, NSW, Australia


References

1. Parnas, D.L. Licensing software engineers in Canada. Commun. ACM 45, 11 (Nov. 2002), 96–98.

2. Parnas, D.L. Software engineering: An unconsummated marriage. Commun. ACM 40, 9 (Sept. 1997), 128.


Footnotes

* APL stands for "A Programming Language," so "the APL programming language" deconstructs as "the a programming language programming language."

Communications welcomes your opinion. To submit a Letter to the Editor, please limit your comments to 500 words or less and send to [email protected].

DOI: http://doi.acm.org/10.1145/1897852.1897854


©2011 ACM  0001-0782/11/0300  $10.00

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from [email protected] or fax (212) 869-0481.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2011 ACM, Inc.


Comments


Anonymous

Regarding Basudeb Gupta's comments "Certify Software Professionals and Their Work," in reaction to David L. Parnas's Viewpoint "Risks of Undisciplined Development" (Oct. 2010) concerning the lack of discipline in programming projects:

While I agree that vast swaths of computer code "out there in the wild" show careless coding, enforcing conformity to central planning will create more, and bigger, problems than it solves.

Mr. Gupta says any task that involves creating products for others "becomes an engineering discipline and must be controlled," by which one assumes he means software products, because he cannot mean enforcing penal codes for tasteless bakery products!

Demanding government enforcement of a common standard, even within specializations, would inevitably see good intentions collapse into a disaster of unintended consequences, nowhere more so than in modern software.

In a fine dose of irony, Mr. Gupta himself provides a counterexample to this demand: "I have been part of CMMI-certification drives and find that real software-development processes have no relation to what is ultimately certified."

Granted, if CMMI doesn't work because it certifies the wrong skills, it does not follow that certification of the right skills won't work.

But that illustrates the very problem with standards enforced upon all by the rule of the biggest army (the government's). First, who decides? And who decides who will decide?

Standards are commercially politicized enough as it is without handing interested parties a new lever. That's just what we need: another reason for lobbyists and anti-lobbyists to pull elected Congressmen and regulators every which way to protect their own particular investments.

There might be a market out there for somebody who can actually come up with ten software standards. But then I wonder whether Mr. Gupta can guarantee that all of his code will fit every circumstance for which his clients have reasonable expectations.

There are already stiff penalties for lax standards. If management accepts a product, if a client accepts the product, if a company accepts the product, then they pay the penalty, and the industry certainly has paid dearly. Adding another layer of bloat to already over-bloated government will only work against the intended purpose.

Where the difference really matters, such as in life-or-death issues relating to medical devices, there are already tens of thousands of rules, laws, and regulations in place, and they apply to the software already. Sarbanes-Oxley, for example, is overkill designed by politicians who don't know the industry.

Assembler coding, operating system development, accounting software, geographical applications, scientific code, network programming: each has its own engineering methodology, and the standards of each are very different from the others'.

The free market can handle this best; we could try that for a change. Poorly engineered software is its own penalty.

