Cryptography is often treated as if it were magic security dust: "sprinkle some on your system and it is secure, as long as the key length is large enough: 112 bits, 128 bits, 256 bits" (I've even seen companies boast of 16,000-bit keys). "Sure, there are always new developments in cryptanalysis, but we've never seen an operationally useful cryptanalytic attack against a standard algorithm. Even the analyses of DES aren't any better than brute force in most operational situations. As long as you use a conservative published algorithm, you're secure."
This just isn't true. We've seen attacks that go beyond traditional cryptanalysis, attacks that sidestep the mathematics of cryptography entirely and do something new, different, and unexpected. For example: timing attacks that recover private keys by measuring how long cryptographic operations take; power-analysis attacks that read secrets out of a smart card's power consumption; and fault attacks that extract keys by inducing errors during a computation.
The common thread through these exploits is that they pushed the envelope of what constitutes cryptanalysis, using out-of-band information to determine keys. Before side-channel attacks, the open cryptographic community did not think about using anything other than plaintext and ciphertext to attack algorithms. After the first such paper appeared, researchers began to look at invasive side channels, at attacks based on introducing transient and permanent faults, and at still other side channels. Suddenly there was a whole new way to do cryptanalysis.
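To make the flavor of these attacks concrete, here is a minimal sketch of my own (not from this column) of the simplest possible timing side channel: a tag check that bails out at the first mismatching byte leaks, through its running time, how many leading bytes of an attacker's guess are correct, letting a valid tag be recovered one byte at a time.

```python
import hmac

def leaky_verify(expected: bytes, provided: bytes) -> bool:
    # Vulnerable: returns at the first mismatching byte, so running
    # time grows with the length of the correct prefix. That timing
    # is exactly the out-of-band information described above.
    if len(expected) != len(provided):
        return False
    for e, p in zip(expected, provided):
        if e != p:
            return False
    return True

def constant_time_verify(expected: bytes, provided: bytes) -> bool:
    # Fix: examine every byte regardless of where mismatches occur,
    # so timing is independent of the secret. Python's standard
    # library provides such a comparison.
    return hmac.compare_digest(expected, provided)
```

This is only the toy version; real timing attacks statistically extract far subtler variations, such as those in modular exponentiation or table lookups.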
Several years ago, I was talking with an NSA employee about a particular exploit. He explained how a system was broken; it was a sneaky attack, one that I didn't think should even count. "That's cheating," I said. He looked at me as if I'd just arrived from Neptune.
"Defense against cheating" (that is, not playing by the assumed rules) is one of the basic tenets of security engineering. Conventional engineering is about making things work. It's the genesis of the term "hack," as in "He worked all night and hacked the code together." The code works; it doesn't matter what it looks like. Security engineering is different; it's about making sure things don't do something they shouldn't. It's making sure security isn't broken, even in the presence of a malicious adversary who does everything in his power to make sure things don't work in the worst possible way at the worst possible times. A good attack is one that the engineers never thought about.
Defending against these unknown attacks is impossible, but the risk can be mitigated with good system design. The mantra of any good security engineer is: "Security is not a product, but a process." It's more than designing strong cryptography into a system; it's designing the entire system so that all security measures, including cryptography, work together. It's designing the entire system so that when an unexpected attack arrives, the system can be upgraded and resecured. It's never a matter of "if a security flaw is found," but rather "when a security flaw is found."
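One concrete way to build "when, not if" into a design is algorithm agility: label every protected message with the suite that produced it, so a broken suite can be retired without redesigning everything around it. The following is a minimal sketch under assumptions of my own; the one-byte header, the suite numbering, and the HMAC-based "suites" are illustrative, not a real format.

```python
import hashlib
import hmac

def _tag_v1(key: bytes, msg: bytes) -> bytes:
    # Legacy suite; imagine a construction later found to be weak.
    return hmac.new(key, msg, hashlib.sha1).digest()

def _tag_v2(key: bytes, msg: bytes) -> bytes:
    # Current suite.
    return hmac.new(key, msg, hashlib.sha256).digest()

SUITES = {1: _tag_v1, 2: _tag_v2}
ACCEPTED = {2}   # when a suite falls, drop it here; senders move on
CURRENT = 2

def protect(key: bytes, msg: bytes) -> bytes:
    # Envelope = [1-byte suite id][tag][message].
    return bytes([CURRENT]) + SUITES[CURRENT](key, msg) + msg

def verify(key: bytes, envelope: bytes) -> bytes:
    # Dispatch on the suite id; retired or unknown suites are
    # rejected instead of silently processed.
    suite = envelope[0]
    if suite not in ACCEPTED:
        raise ValueError(f"suite {suite} is retired or unknown")
    tag_len = len(SUITES[suite](key, b""))
    tag, msg = envelope[1:1 + tag_len], envelope[1 + tag_len:]
    if not hmac.compare_digest(tag, SUITES[suite](key, msg)):
        raise ValueError("bad tag")
    return msg
```

TLS's negotiated cipher suites apply the same idea at protocol scale: when a suite falls, implementations stop accepting it while the surrounding protocol machinery survives.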
This isn't a temporary problem. Cryptanalysts will forever be pushing the envelope of attacks. And wherever cryptography protects massive financial resources (especially where worldwide master keys are involved), malicious attackers can be expected to exploit these violations of designers' assumptions all the more aggressively. As our society becomes more reliant on digital infrastructure, the process of security must be designed in from the beginning.