For over 50 years we have been trying to build computing systems that are trustworthy. The efforts are most notable for their lack of enduring success, and for the oftentimes spectacular security and privacy failures along the way. With each passing year (and each new threat and breach) we seem to be further from our goals.
Consider what is present in too many organizations. Operating systems with weak controls and flaws have been widely adopted because of cost and convenience. Thus, firewalls have been deployed to put up another layer of defense against the most obvious problems. Firewalls are often configured laxly, so complex intrusion and anomaly detection tools are deployed to discover when the firewalls are penetrated. These are also imperfect, especially when insider threats are considered, so we deploy data loss detection and prevention tools. We also employ virtual machine environments intended to erect barriers against buggy implementations. These are all combined with malware detection and patch management, yet still attacks succeed. Each time we apply a new layer, new attacks appear to defeat it.
I conjecture that one reason for these repeated failures is that we may be trying to answer the wrong questions. Asking how to make system "XYZ" secure against all threats is, at its core, a nonsensical question. Almost every environment and its threats are different. A system controlling a communications satellite is different from one in a bank, which in turn is different from one in an elementary school computer lab, which is different from one used to control military weapons. There are some issues in common, certainly, but the overall design and deployment should reflect the differences.
The availability and familiarity of a few common artifacts have led us to deploy them (or variants) everywhere, even in unsuitable environments. By analogy, what if everything in society were constructed of bricks because they are cheap, common, and easy to use? Imagine not only homes built of bricks, but everything else from the space shuttle to submarines to medical equipment. Thankfully, other fields have better sense and choose appropriate tools for important tasks.
A time-honored way of reinforcing a point is by means of a story told as a parable, a fairy tale, or a joke. One classic example I tell my students:
Two buddies leaving a tavern find a distressed and somewhat inebriated man on his hands and knees in the parking lot, apparently searching for something. They ask him what he has lost, and he replies that he has dropped his keys. He describes the keys, and says if the two men find them they will receive a reward. They begin to help search. Other people come by and they too are drawn into the search. Soon, there is a crowd combing the lot, with an air of competition to see who will be the first to find the keys. Periodically someone informs the crowd of the discovery of a coin or a particularly interesting piece of rock.
After a while, one in the crowd stands up and inquires of the fellow who lost his keys, "Say, are you sure you lost your keys out here in the lot?" To which the man replies, "No. I lost them in the alley." Everyone stops to stare at the man. "Well, why the heck are you searching for them here in the parking lot!?" someone exclaims. To which the man replies, "Well, the light is so much better here. And besides, now I have such good company!"
There are many lessons that can be inferred from this story, but the one I stress with my students is that if they don't properly define the problem, ask the right questions, and search in the proper places, they may have good company and funding, but they shouldn't expect to find what they are really seeking.a
So it is in research, especially in cyber security and privacy. We have people seeking answers to the wrong questions, often because that is where "the light is better" and there seems to be a bigger crowd around them. Until we start asking questions that better address the problems that really need to be solved, we shouldn't expect to see progress. Here are a few examples of misleading questions:
Each of these questions implies it can be answered in a positive, meaningful manner. That is not necessarily the case.
We have generally failed to understand that when we build and deploy systems they are used in a variety of environments, facing different threats. There is no perfect security in any real system: hardware fails, people make mistakes, and attacks outside our expectations may defeat our protection mechanisms. If an attacker is sufficiently motivated and has enough resources (including time), every system can be defeated in some manner.b If the attacker doesn't care whether the defeat is noticed, the work factor involved may be reduced; as an obvious example, an assured denial-of-service attack can be accomplished with enough nuclear weapons. The goal in the practice of security is to construct sufficient defenses against the likely threats so as to reduce the risk of compromise to an acceptable level; if the attack can be made to cost far more than the perceived gain resulting from its success, then that is usually sufficient.
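To make that cost-benefit arithmetic concrete, here is a minimal sketch in Python. The simple expected-loss model (likelihood times impact), the function names, and all of the figures are illustrative assumptions of mine, not a prescribed method:

    # Illustrative sketch of the cost-benefit reasoning above. All names
    # and figures are hypothetical; real risk assessment is far messier.

    def expected_annual_loss(likelihood_per_year: float, impact: float) -> float:
        """Expected loss: how often a compromise occurs times what it costs."""
        return likelihood_per_year * impact

    def defense_is_sufficient(attacker_cost: float, attacker_gain: float,
                              residual_risk: float, risk_tolerance: float) -> bool:
        """A defense is 'good enough' when the attack costs far more than
        it yields and the remaining expected loss is within tolerance."""
        return attacker_cost > attacker_gain and residual_risk <= risk_tolerance

    # A control that cuts breach likelihood from 0.5/yr to 0.05/yr on a
    # $1,000,000 asset avoids $450,000/yr of expected loss...
    baseline = expected_annual_loss(0.5, impact=1_000_000)    # $500,000/yr
    residual = expected_annual_loss(0.05, impact=1_000_000)   # $50,000/yr
    control_cost = 100_000

    # ...so spending $100,000 on it is rational: mitigating the risk
    # costs less than tolerating it would.
    print(control_cost < baseline - residual)                 # True

    # The defended system is then "secure enough" for this environment
    # if mounting the attack costs more than it gains the attacker.
    print(defense_is_sufficient(attacker_cost=2_000_000, attacker_gain=250_000,
                                residual_risk=residual, risk_tolerance=75_000))  # True

The particular numbers do not matter; the form of the question does. Sufficiency is judged against a specific threat environment and risk tolerance, never in the absolute.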
By asking the wrong questions, such as how to patch or modify existing items rather than what is appropriate to build or acquire, we end up with systems that cannot be adequately protected against the threats they face. Few current systems are designed according to known security practices,c nor are they operated within an appropriate policy regime. Without understanding the risks involved, management seeks to "add on" security technology to the current infrastructure, which may introduce new vulnerabilities.
The costs of replacing existing systems with different ones requiring new training seem so daunting that replacement is seldom considered, even by organizations that face prospects of catastrophic loss. There is so much legacy code that developers and customers alike believe they cannot afford to move to something else. Thus, the market tends toward "add on" solutions and patches rather than fundamental reengineering. Significant research funding is applied to tinkering with current platforms rather than addressing the more fundamental issues. Instead of asking "How do we design and build systems that are secure in a given threat environment?" and "What tools and programming constructs should we be using to produce systems that do not exhibit easily exploited flaws?" we, as a community, continue to ask the wrong questions.
Note that I am not arguing against standards, per se. Standards are important for interoperability and innovation. However, standards are best applied at the interfaces so as to allow innovation and good engineering practice to take place inside. I am also not overlooking the potential expense. Creating new systems, training developers, and developing new code bases might be costly, but only initially; given current losses and trends, this approach would eventually reduce costs in many environments.
Robert H. (Bob) Courtney Jr., one of the first computer security professionals and an early recipient of the NIST/NCSC National Computer Systems Security Award, articulated three "laws" for those who seek to build secure, operational computational artifacts:d

1. Nothing useful can be said about the security of a mechanism except in the context of a specific application and environment.
2. Never spend more money mitigating a risk than tolerating it will cost you.
3. There are no technical solutions to management problems, but there are management solutions to technical problems.
Although not everyone will agree with these three laws, they provide a good starting point for thinking about the practice of information security. The questions we should be asking are not about how to secure system "XYZ," but whether "XYZ" is appropriate for use in the environment at hand. Can it be configured and protected against the expected threats to a level that matches our risk tolerance? What policies and procedures need to be put in place to augment the technology? What is the true value of what we are protecting? Do we even know what we are protecting?e
As researchers and practitioners, we need to stop looking for solutions where the light is good and people seem to be gathered. Consider a quote I have been using recently: "Insanity is doing the same thing over and over again while expecting different results."f Asking the wrong questions repeatedly is not only hindering us from making real progress but may even be considered insane.
So, what questions are you trying to answer?
a. Another story that resonates with my students is http://spaf.cerias.purdue.edu/Archive/racehorse.html.
b. There are many books on this topic, and the basic premise is at the heart of nearly every big heist movie, including Ocean's 11, The Italian Job, and The Thomas Crown Affair. For some interesting, real-life examples outside computing, I recommend the book Spycraft by Robert Wallace and H. Keith Melton.
c. There are many fine works on security engineering, including Ross Anderson's opus of that title. If we return to the fundamentals, tried-and-true design principles were articulated by Jerome H. Saltzer and Michael D. Schroeder in "The Protection of Information in Computer Systems," republished in Communications of the ACM 17, 7 (July 1974), but few systems are designed using these principles.
d. My thanks to William Hugh Murray for his restatement of Courtney's Laws.
e. Many firms do not understand the value of what they are protecting or where it is located; see http://snipurl.com/sec-econ.
f. This quote is widely attributed to Albert Einstein and to John Dryden. I have been unable to find a definitive source for it, however.
DOI: http://doi.acm.org/10.1145/1516046.1516056