
Communications of the ACM

Privacy and security

Why Isn't Cyberspace More Secure?


[Illustration: cyberspace security. Credit: Celia Johnson]

In cyberspace it's easy to get away with criminal fraud, easy to steal corporate intellectual property, and easy to penetrate governmental networks. Last spring the new Commander of USCYBERCOM, NSA's General Keith Alexander, acknowledged for the first time that even U.S. classified networks have been penetrated.2 Not only do we fail to catch most fraud artists, IP thieves, and cyber spies—we don't even know who most of them are. Yet every significant public and private activity—economic, social, governmental, military—depends on the security of electronic systems. Why has so little happened in 20 years to alter the fundamental vulnerability of these systems? If you're sure this insecurity is either (a) a hoax or (b) a highly desirable form of anarchy, you can skip the rest of this column.

Presidential Directives to Fix This Problem emerge like clockwork from the White House echo chamber, chronicling a history of executive torpor. One of the following statements was made in a report to President Obama in 2009, the other by President George H.W. Bush in 1990. Guess which is which:

"Telecommunications and information processing systems are highly susceptible to interception, unauthorized electronic access, and related forms of technical exploitation, as well as other dimensions of the foreign intelligence threat."

"The architecture of the Nation's digital infrastructure, based largely on the Internet, is not secure or resilient. Without major advances in the security of these systems or significant change in how they are constructed or operated, it is doubtful that the United States can protect itself from the growing threat of cybercrime and state-sponsored intrusions and operations."

Actually, it doesn't much matter which is which.a In between, for the sake of nonpartisan continuity, President Clinton warned of the insecurities created by cyber-based systems and directed in 1998 that "no later than five years from today the United States shall have achieved and shall maintain the ability to protect the nation's critical infrastructures from intentional acts that would significantly diminish" our security.6 Five years later would have been 2003.

In 2003, as if in a repeat performance of a bad play, the second President Bush stated that his cybersecurity objectives were to "[p]revent cyber attacks against America's critical infrastructure; [r]educe national vulnerability to cyber attacks; and [m]inimize damage and recovery time from cyber attacks that do occur."7

These Presidential pronouncements will be of interest chiefly to historians and to Congressional investigators who, in the aftermath of a disaster that we can only hope will be relatively minor, will be shocked, shocked to learn that the nation was electronically naked.

Current efforts in Washington to deal with cyber insecurity are promising—but so was Sisyphus' fourth or fifth trip up the hill. These efforts are moving at a bureaucratically feverish pitch—which is to say, slowly—and so far they have produced nothing but more declarations of urgency and more paper. Why?


Lawsuits and Markets

Change in the U.S. is driven by three things: liability, market demand, and regulatory (usually federal) action. The role and weight of these factors vary in other countries, but the U.S. experience may nevertheless be instructive transnationally since most of the world's intellectual property is stored in the U.S., and the rest of the world perceives U.S. networks as more secure than we do.5 So let's examine each of these three factors.

Liability has been a virtually nonexistent factor in achieving greater Internet security. This may be surprising until you ask: Liability for what, and who should bear it? Software licenses are enforceable, whether shrink-wrapped or negotiated, and they nearly always limit the manufacturer's liability to the cost of the software. So suing the software manufacturer for allegedly lousy security would not be worth the money and effort expended. What are the damages, say, from finding your computer is an enslaved member of a botnet run out of Russia or Ukraine? And how do you prove the problem was caused by the software rather than your own sloppy online behavior?




Asking Congress to make software manufacturers liable for defects would be asking for trouble: All software is defective, because it's so astoundingly complicated that even the best of it hides surprises. Deciding what level of imperfection is acceptable is not a task you want your Congressional representative to perform. Any such legislation would probably drive some creative developers out of the market. It would also slow down software development—which would not be all bad if it led to higher security. But the general public has little or no understanding of the vulnerabilities inherent in poorly developed applications. On the contrary, the public clamors for rapidly developed apps with lots of bells and whistles, so an equipment vendor that wants to control this proliferation of vulnerabilities in the name of security is in a difficult position.

Banks, merchants, and other holders of personal information do face liability for data breaches, and some have paid substantial sums for data losses under state and federal statutes granting liquidated damages for breaches. In one of the best-known cases, Heartland Payment Systems may end up paying approximately $100 million as a result of a major breach, not to mention millions more in legal fees. But the defendants in such cases are buyers, not makers and designers, of the hardware and software whose deficiencies create many (but not all) cyber insecurities. Liability presumably makes these companies somewhat more vigilant in their business practices, but it doesn't make hardware and software more secure.

Many major banks and other companies already know they have been persistently penetrated by highly skilled, stealthy, and anonymous adversaries, very likely including foreign intelligence services and their surrogates. These firms spend millions fending off attacks and cleaning their systems, yet no forensic expert can honestly tell them that all advanced persistent intrusions have been defeated. (If you have an expert who will say so, fire him right away.)

In an effective liability regime, insurers play an important role in raising standards because they tie premiums to good practices. Good automobile drivers, for example, pay less for car insurance. Without a liability dynamic, however, insurers play virtually no role in raising cybersecurity standards.

If liability hasn't made cyberspace more secure, what about market demand? The simple answer is that the consuming public buys on price and has not been willing to pay for more secure software. In some cases the aftermath of identity theft is an ordeal. In most instances of credit card fraud, however, the bank absorbs 100% of the loss, so its customers have little incentive to spend more for security. (In Britain, where the customer rather than the bank usually pays, the situation is arguably worse because banks are in a better position than customers to impose higher security requirements.) Most companies also buy on price, especially in the current economic downturn.

Unfortunately we don't know whether consumers or corporate customers would pay more for security if they knew the relative insecurities of the products on the market. As J. Alex Halderman of the University of Michigan recently noted, "most customers don't have enough information to accurately gauge software quality, so secure software and insecure software tend to sell for about the same price."3 This could be fixed, but doing so would require agreed metrics for judging products and either the systematic disclosure of insecurities or a widely accepted testing and evaluation service that enjoyed the public's confidence. Consumer Reports plays this role for automobiles and many other consumer products, and it wields enormous power. The same day Consumer Reports issued a "Don't buy" recommendation for the 2010 Lexus GX 460, Toyota took the vehicle off the market. If the engineering and computer science professions could organize a software security laboratory along the lines of Consumer Reports, it would be a public service.
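
Halderman's observation describes a classic information asymmetry, and a few lines of Python arithmetic show why indistinguishability flattens prices. The following is a stylized sketch; the dollar values and the assumed share of secure products are illustrative assumptions, not data from this column or from Halderman.

    # Stylized "market for lemons" arithmetic; all numbers are illustrative assumptions.
    value_secure = 100.0   # what a buyer would pay for software known to be secure
    value_insecure = 60.0  # what a buyer would pay for software known to be insecure
    share_secure = 0.5     # assumed fraction of products that are actually secure

    # Buyers cannot tell the two apart, so a risk-neutral buyer offers the
    # expected value of a randomly chosen product:
    blended_price = share_secure * value_secure + (1 - share_secure) * value_insecure
    print(blended_price)   # 80.0 -- secure and insecure software fetch the same price

    # Under this pooling, a secure vendor earns the same 80 as an insecure one,
    # so extra security engineering never pays for itself. With credible product
    # information, security worth up to value_secure - value_insecure = 40 per
    # unit could be recouped -- which is why agreed metrics or a trusted testing
    # service would matter.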


Federal Action

Absent market- or liability-driven improvement, there are eight steps the U.S. federal government could take to improve Internet security, and none of them would involve creating a new bureaucracy or intrusive regulation:

  1. Use the government's enormous purchasing power to require higher security standards of its vendors. These standards would deal, for example, with verifiable software and firmware, means of authentication, fault tolerance, and a uniform vocabulary and taxonomy across the government in purchasing and evaluation. The Federal Acquisition Regulations, guided by the National Institute of Standards and Technology, could drive higher security into the entire market by ensuring federal demand for better products.
  2. Amend the Privacy Act to make it clear that Internet Service Providers (ISPs) must disclose to one another and to their customers when a customer's computer has become part of a botnet, regardless of the ISP's customer contract, and may disclose that fact to a party that is not its own customer. ISPs may complain that such a service should be elective, at a price. That's equivalent to arguing that cars should be allowed on the highway without brakes, lights, and seatbelts. This requirement would generate significant remedial business.
  3. Define behaviors that would permit ISPs to block or sequester traffic from botnet-controlled addresses—not merely from the botnet's command-and-control center. (A minimal illustrative sketch of this kind of filtering follows this list.)
  4. Forbid federal agencies from doing business with any ISP that is a hospitable host for botnets, and publicize the list of such companies.
  5. Require bond issuers that are subject to the jurisdiction of the Federal Energy Regulatory Commission to disclose in the "Risk Factors" section of their prospectuses whether the command-and-control features of their SCADA networks are connected to the Internet or another publicly accessible network. Issuers would scream about this, even though a recent McAfee study plainly indicates that many of the issuers that follow this risky practice concede it creates an "unresolved security issue."1 SCADA networks were built as isolated, limited-access systems. Allowing them to be controlled via public networks is rash. This point was driven home forcefully this summer by the discovery of the "Stuxnet" computer worm, which was specifically designed to attack SCADA systems.4 Yet public utilities show no sign of upgrading their typically primitive systems.
  6. Increase support for research into attribution techniques, verifiable software and firmware, and the benefits of moving more security functions into hardware.
  7. Definitively remove the antitrust concern when U.S.-based firms collaborate on researching, developing, or implementing security functions.
  8. Engage like-minded governments to create international authorities to take down botnets and make naming-and-addressing protocols more difficult to spoof.
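
To make item 3 concrete, here is a minimal, hypothetical Python sketch of the kind of filtering it contemplates: the ISP checks each traffic flow against a shared list of botnet-implicated addresses (the enslaved machines as well as the command-and-control host) and sequesters matching traffic instead of silently forwarding it. The Flow record, the address feeds, and the disposition function are illustrative assumptions, not a real ISP API or an actual threat feed.

    from dataclasses import dataclass
    from ipaddress import ip_address

    # Hypothetical shared feed: the C2 host plus machines observed acting as
    # bots. All addresses here are from IETF documentation ranges.
    C2_ADDRESSES = {ip_address("203.0.113.10")}
    BOT_ADDRESSES = {ip_address("198.51.100.7"), ip_address("198.51.100.23")}

    @dataclass
    class Flow:
        src: str
        dst: str

    def disposition(flow: Flow) -> str:
        """Return 'sequester' if either endpoint is botnet-implicated, else 'forward'."""
        endpoints = {ip_address(flow.src), ip_address(flow.dst)}
        if endpoints & (C2_ADDRESSES | BOT_ADDRESSES):
            return "sequester"  # divert for notification and cleanup, don't just drop
        return "forward"

    print(disposition(Flow("198.51.100.7", "192.0.2.50")))  # sequester: known bot endpoint
    print(disposition(Flow("192.0.2.8", "192.0.2.50")))     # forward: neither endpoint listed

Sequestering rather than dropping preserves what the ISP needs in order to notify the affected customer, as item 2 would require.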


Political Will

These practical steps would not solve every problem of cyber insecurity, but they would dramatically improve the situation. Nor would they involve government snooping, reengineering the Internet, or other grandiose schemes. They would require a clear-headed understanding of the risks to privacy, intellectual property, and national security when an entire society relies for its commercial, governmental, and military functions on a decades-old information system designed for a small number of university and government researchers.

Translating repeated diagnoses of insecurity into effective treatment would also require the political will to marshal the resources and effort necessary to do something about it. The Bush Administration came by that will too late in the game, and the Obama Administration has yet to acquire it. After his inauguration, Obama dithered for nine months over the package of excellent recommendations put on his desk by a nonpolitical team of civil servants from several departments and agencies. The Administration's lack of interest was palpable; its hands are full with a war, health care, and a bad economy. In difficult economic times the President naturally prefers invisible risk to visible expense and is understandably reluctant to increase costs for business. In the best of times cross-departmental (or cross-ministerial) governance would be extremely difficult—and not just in the U.S. Doing it well requires an interdepartmental organ of directive power that can muscle entrenched and often parochial bureaucracies, and in the cyber arena, we simply don't have it. The media, which never tires of the cliché, told us we were getting a cyber "czar," but the newly created cyber "Coordinator" actually has no directive power and has yet to prove his value in coordinating, let alone governing, the many departments and agencies with an interest in electronic networks.

And so cyber-enabled crime and political and economic espionage continue apace, and the risk of infrastructure failure mounts. As for me, I'm already drafting the next Presidential Directive. It sounds a lot like the last one.


References

1. Baker, S. et al. In the Crossfire: Critical Infrastructure in the Age of Cyber War. CSIS and McAfee (Jan. 28, 2010), 19; http://img.en25.com/Web/McAfee/NA_CIP_RPT_REG_2840.pdf. See also Kurtz, P. et al. Virtual Criminology Report 2009: Virtually Here: The Age of Cyber Warfare. McAfee and Good Harbor Consulting, 2009, 17; http://iom.invensys.com/EN/pdfLibrary/McAfee/WP_McAfee_Virtual_Criminology_Report_2009_03-10.pdf.

2. Gertz, B. 2008 intrusion of networks spurred combined units. The Washington Times (June 3, 2010); http://www.washingtontimes.com/news/2010/jun/3/2008-intrusion-of-networks-spurred-combined-units/.

3. Halderman, J.A. To strengthen security, change developers' incentives. IEEE Security & Privacy (Mar./Apr. 2010), 79.

4. Krebs, B. "Stuxnet" worm far more sophisticated than previously thought. Krebs on Security, Sept. 14, 2010; http://krebsonsecurity.com/2010/09/stuxnet-worm-far-more-sophisticated-than-previously-thought/.

5. McAfee. Unsecured Economies: Protecting Vital Information. 2009, 4, 13–14; http://www.cerias.purdue.edu/assets/pdf/mfe_unsec_econ_pr_rpt_fnl_online_012109.pdf.

6. Presidential Decision Directive 63 (May 22, 1998); http://www.fas.org/irp/offdocs/pdd/pdd-63.htm.

7. The National Strategy to Secure Cyberspace. U.S. Department of Homeland Security, 2003.


Author

Joel F. Brenner ([email protected]) of the law firm Cooley LLP in Washington, D.C., was the U.S. National Counterintelligence Executive from 2006 to 2009 and the Inspector General of the National Security Agency from 2002 to 2006.


Footnotes

a. The first quotation is from President G.H.W. Bush's National Security Directive 42 (July 5, 1990), redacted for public release, April 1, 1992; http://www.fas.org/irp/offdocs/nsd/nsd_42.htm. The second quotation is from the preface to "Cyberspace Policy Review: Assuring a Trusted and Resilient Information and Communications Infrastructure," May 2009; http://www.whitehouse.gov/assets/documents/Cyberspace_Policy_Review_final.pdf.

DOI: http://doi.acm.org/10.1145/1839676.1839688


Copyright held by author.



Comments


CACM Administrator

The following letter was published in the Letters to the Editor in the February 2011 CACM (http://cacm.acm.org/magazines/2011/2/104382).
--CACM Administrator

In his Viewpoint "Why Isn't Cyberspace More Secure?" (Nov. 2010), Joel F. Brenner erroneously dismissed the value of making software manufacturers liable for defects, with this misdirected statement: "Deciding what level of imperfection is acceptable is not a task you want your Congressional representative to perform." But Congress doesn't generally make such decisions for non-software goods. The general concept of "merchantability and fitness for a given application" applies to all other goods sold and likewise should be applied to software; the courts are available to resolve any dispute over whether an acceptable level of fitness has indeed been met.

In no other commercial realm do we tolerate the incredible level of unreliability and insecurity characteristic of today's consumer software; and while better engineering is more challenging, and the software industry could experience dislocations as its developers learn to follow basic good engineering practices in every product they bring to market, that does not excuse the harm done to consumers when such practices are not employed.

L. Peter Deutsch
Palo Alto, CA

--------------------------------------------------

AUTHOR'S RESPONSE:

The challenge is in writing standards that would improve security without destroying creativity. "Basic good engineering" is not a standard. A "merchantability and fitness" standard works for, say, lawnmowers, where everyone knows what a defect looks like. It doesn't work for software because defining "defect" is so difficult, and the stuff being written is flying off the shelves; that is, it's merchantable. It's also sold pursuant to enforceable contracts. So while courts are indeed available to resolve disputes, they usually decide them in favor of the manufacturer. Deutsch and I both want to see more secure and reliable software, but, like it or not, progress in that direction won't be coming from Congress.

Joel F. Brenner
Washington, D.C.


CACM Administrator

The following letter was published in the Letters to the Editor in the May 2011 CACM (http://cacm.acm.org/magazines/2011/5/107681).
--CACM Administrator

I regret that Joel F. Brenner responded to my letter to the editor "Hold Manufacturers Liable" (Feb. 2011) concerning his Viewpoint "Why Isn't Cyberspace More Secure?" (Nov. 2010) with two strawman arguments and one outright misstatement.

Brenner said software "is sold pursuant to enforceable contracts." As the Viewpoint "Do You Own the Software You Buy?" by Pamela Samuelson (Mar. 2011) made clear, software is not "sold." Every EULA insists software is licensed and only the media on which it is recorded are sold; a series of court decisions, of which the Vernor v. Autodesk decision Samuelson cited is the most recent and one of the most conclusive, has upheld this stance.

This mischaracterization by Brenner is one of the keys to understanding how manufacturers of such shoddy goods get off essentially scot-free. If software were actually sold, the argument that it should be exempt from the protections of the Uniform Commercial Code would be much more difficult to maintain, in addition to other benefits thoroughly discussed elsewhere (including by Samuelson in her column).

Even though EULAs have been held enforceable, such a determination comes at the expense of the consumer. Almost without exception, EULAs have the effect of stripping the consumer of essentially all reasonable rights and expectations, compared with other goods and services. And while click-through and shrink-wrap EULAs have indeed been found to be enforceable, many reasonable people (including me) believe it should not be the case, since the vast majority of consumers do not read these "contracts" and do not understand their consequences. Brenner apparently does not consider them a significant problem.

Finally, Brenner simply reiterated his assertion that "Congress shouldn't decide what level of imperfection is acceptable." I agree. There are basic consumer protections that apply to all other goods, as embodied in the UCC. Neither a further act of Congress nor detailed specifications of product construction are required to give consumers the right to expect, say, that a stove, properly used and maintained, will not burn down their house. The corresponding right to freedom from gross harm, like the other protections of the UCC, is not available for software, though all of them should be; Brenner apparently disagrees.

I emphasized good engineering practices in my February letter not because (as Brenner seems to believe) I thought they were sufficient to guarantee a reasonable level of product quality, but because they are well-established means toward the end of meeting the basic standards of non-harm and reliability taken as a given for all other products. In any case, Brenner did not say why he thinks a different process should be used for setting functional safety and reliability standards for software than for other consumer goods. Simply asserting "software is different" is not a reasoned argument.

L. Peter Deutsch
Palo Alto, CA

----------------------------------------------

AUTHOR'S RESPONSE:

Thanks to Deutsch for correcting my error. Software is of course licensed rather than sold. As Deutsch says, this is why UCC product-liability standards for purchased goods haven't improved software quality. But his point strengthens my argument. I was explaining, not defending, the status quo, which is lamentable precisely because liability is weak. I cannot fathom why Deutsch thinks I'm indifferent to higher engineering standards for software. They represent the only basis on which a liability regime can be founded, even for licensed products.

Joel F. Brenner
Washington, D.C.


CACM Administrator

The following letter was published in the Letters to the Editor in the March 2011 CACM (http://cacm.acm.org/magazines/2011/3/105325).
--CACM Administrator

In his Viewpoint "Why Isn't Cyberspace More Secure?" (Nov. 2010), Joel F. Brenner said that in the U.K. the customer, not the bank, usually pays in cases of credit-card fraud. I would like to know the statistical basis for this claim, since for transactions conducted in cyberspace the situation in both the U.K. and the U.S. is that liability generally rests with the merchant, unless it provides proof of delivery or has used the 3-D Secure protocol to enable the card issuer to authenticate the customer directly. While the rates of uptake of the 3-D Secure authentication scheme may differ, I have difficulty believing that difference translates into a significant related difference in levels of consumer liability.

The process in the physical retail sector is quite different in the U.K. as a result of the EMV (Europay, MasterCard, and Visa) protocol, or "Chip & PIN," though flaws in EMV and its hardware mean that, in practice, the onus is still on the bank to demonstrate its customer is at fault.

Alastair Houghton
Fareham, England

------------------------------------------------

AUTHOR'S RESPONSE:

The U.K. Financial Services Authority took over regulation of this area November 1, 2009, because many found the situation, as I described it, objectionable. In practice, however, it is unclear whether the FSA's jurisdiction has made much difference. While the burden of proof is now on the bank, one source (see Dark Reading, Apr. 26, 2010) reported that 37% of credit-card fraud victims get no refund. The practice in the U.S. is not necessarily better but is different.

Joel F. Brenner
Washington, D.C.


