
Communications of the ACM

Inside Risks

The Nonsecurity of Secrecy


Considerable confusion exists between the concepts of secrecy and security, and that confusion often causes bad security and surprising political arguments. Secrecy usually contributes only to a false sense of security.

In June 2004, the U.S. Department of Homeland Security urged regulators to keep network outage information secret. The Federal Communications Commission requires telephone companies to report large disruptions of telephone service, and wants to extend that requirement to high-speed data lines and wireless networks. The Department of Homeland Security fears that such information would give cyberterrorists a "virtual road map" to target critical infrastructures.

Is publishing computer and network vulnerability information useful, or does it just help the hackers? This is a common question, as malware takes advantage of software vulnerabilities soon after they become known.

The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is beneficial to security only in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they're lost, they're lost forever. Security that relies on secrecy is also fragile; once secrecy is lost, there's no way to recover security. Trying to base security on secrecy is simply bad design.

Cryptography is based on secrets—keys—but look at all the work that goes into making keys effective. Keys are short and easy to transfer. They're easy to update and change. And the key is the only secret component of a cryptographic system. Cryptographic algorithms make terrible secrets, which is why one of cryptography's most basic principles, known as Kerckhoffs's principle, is to assume that the algorithm is public.
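As a minimal sketch of that principle (using HMAC-SHA256 from Python's standard library; the key and message here are invented purely for illustration), the algorithm is completely public, and the key is the only secret:

import hmac
import hashlib
import secrets

# The algorithm (HMAC-SHA256) is public and standardized; the key is
# the only secret, and it is short, random, and trivial to replace.
key = secrets.token_bytes(32)  # illustrative 256-bit key
message = b"outage report: region 4"  # made-up message

tag = hmac.new(key, message, hashlib.sha256).digest()

# Anyone may inspect the algorithm; only holders of the key can
# produce or verify a valid authentication tag.
assert hmac.compare_digest(
    tag, hmac.new(key, message, hashlib.sha256).digest()
)

# If the key ever leaks, security is restored by generating a new one;
# nothing about the algorithm needs to be redesigned.
key = secrets.token_bytes(32)

Losing a key costs you nothing but a key change; losing the secrecy of the algorithm would cost you the algorithm itself.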

A fallacy of the secrecy argument is the assumption that secrecy works. Do we really believe that the physical weak points of networks are a mystery to the bad guys, or that attackers are unable to discover vulnerabilities on their own?

Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies denied their existence and wouldn't bother fixing them, believing in the security of secrecy. And because customers didn't know any better, they bought these systems, believing them to be secure. If we return to a practice of keeping software bugs secret, we'll again have vulnerabilities known to a few in the security community and to much of the hacker underground, but not to the users who need to defend against them.

Secrecy prevents people from assessing their own risks. Public reporting of network outages encourages telephone companies to improve their service. It allows consumers to compare the reliability of different companies, and to choose those that best serve their needs. Without public disclosure, companies can hide their weaknesses.

Who supports secrecy? Software vendors such as Microsoft want to keep vulnerability information secret, and the Department of Homeland Security's recommendations were loudly echoed by the phone companies. Secrecy serves the interests of these companies, not the interests of consumers, citizens, or society.

In the post-9/11 world, we're seeing this clash of secrecy versus openness everywhere. The U.S. government is trying to keep details of many anti-terrorism countermeasures—and even routine government operations—secret: information about the infrastructure of plants and government buildings; the profiling information used to flag certain airline passengers; the standards for the Department of Homeland Security's color-coded terrorism threat levels; even information about government operations without any terrorism connection.

This may help to keep terrorists in the dark, especially "dumb" terrorists who might not be able to figure out these vulnerabilities on their own. But at the same time, the citizenry—to whom the government is ultimately accountable—is not allowed to evaluate the countermeasures, or comment on their efficacy. Security can't improve because there's no public debate or public education.

Recent studies have shown that most water, power, gas, telephone, data, transportation, and distribution systems are scale-free networks, which by their nature have a few highly connected hubs. Attackers know this intuitively and go after the hubs; defenders are beginning to learn how to harden the hubs and provide redundancy. Trying to hide the fact that a network has hubs is futile. Better to identify and protect them.
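As a rough sketch of why hiding hubs is futile (this uses the networkx graph library and a textbook Barabasi-Albert model with made-up parameters, not data from any real infrastructure), anyone can generate a scale-free network and read its hubs straight off the degree distribution:

import networkx as nx

# A Barabasi-Albert graph is a standard model of a scale-free network:
# 1,000 nodes, each new node attaching to 2 existing nodes.
G = nx.barabasi_albert_graph(1000, 2, seed=42)

# Sorting by degree exposes the hubs immediately; no secret map needed.
hubs = sorted(G.degree, key=lambda nd: nd[1], reverse=True)[:5]
print("Top hubs (node, degree):", hubs)

# Removing those few hubs fragments the network far more than removing
# the same number of random nodes would.
G.remove_nodes_from(node for node, _ in hubs)
print("Connected components after hub removal:",
      nx.number_connected_components(G))

The hubs are a structural property of the network itself; any attacker with a map of the network can find them, which is why the defense has to be hardening and redundancy, not secrecy.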

We're all safer when we have the information we need to exert market pressure on vendors to improve security. We are all less secure if software vendors don't make their security vulnerabilities public, and if telephone companies don't have to report network outages. Governments operating without accountability serve their own security interests, not the people's.


Author

Bruce Schneier (www.schneier.com), CTO of Counterpane Internet Security, Inc., wrote Beyond Fear: Thinking Sensibly About Security in an Uncertain World and produces Crypto-Gram, his free monthly newsletter.


©2004 ACM  0001-0782/04/1000  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.



 
