William Hugh (Bill) Murray is a management consultant and trainer in Information Assurance specializing in policy, governance, and applications. He has more than 60 years of experience in information technology and more than 50 years in security. During more than 25 years with IBM, his management responsibilities included development of access control programs, advising IBM customers on security, and the articulation of the IBM security product plan. He is the author of the IBM publication Information System Security Controls and Procedures. He has been recognized as a founder of the systems audit field and by Information Security Magazine as a Pioneer in Computer Security. He has served as adjunct faculty at the Naval Postgraduate School and Idaho State University. In 1999, he was elected a Distinguished Fellow of the Information System Security Association. In 2007, he received the Harold F. Tipton Award in recognition of his lifetime achievement and contribution. In 2016, he was inducted into the National Cyber Security Hall of Fame. In 2018, he was elected a Fellow of (ISC)2 (see https://www.isc2.org/).
Bill Murray has been responding to security threats for years with unconventional thinking. When he sees a security breakdown, he asks: what current practice allows the breakdown to happen, and what new practice would stop it? Most of our security vulnerabilities arise from poor practice, not from inadequate technology.
Many people today are concerned about cybersecurity and want to know how to protect themselves from malware, identity thieves, invading hackers, botnets, phishers, and more. I talked to Bill about what practices we have to deal with these issues, and where we need to look for new practices.
Q: Weak passwords have been the bane of security experts for years. Early studies of time-sharing systems showed that in a community of 100 users, two or three are likely to use their own names as passwords. A hacker can break in easily if passwords are so easy to guess. You declared that the root cause of this is the reusability of passwords. You proposed that we use technologies where a password can be used only once. How does this work and why is it now feasible?
A: This is not simply about "weak passwords" but all passwords. It is time to abandon passwords for all but trivial applications. Passwords are fundamentally vulnerable to fraudulent reuse. They put the user at risk of fraudulent use of identity, capabilities, and privileges and the system or application at risk of compromise and contamination by illicit users. Strong passwords protect against brute force attacks but these are not the attacks that we are seeing.
We need "strong authentication," defined as at least two kinds of evidence of identity, one resistant to brute force attacks and the other resistant to replay, that is, includes a one-time value. All strong authentication is "multi-factor" but not all multi-factor is strong. Strong authentication protects us against both brute force attacks and the fraudulent reuse of compromised credentials, for example from so called "phishing" attacks, the attacks that we are actually seeing.
Steve Jobs and the ubiquitous mobile computer have lowered the cost and improved the convenience of strong authentication enough to overcome all arguments against it.
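To make the one-time element concrete, here is a minimal sketch of a time-based one-time password (in the style of RFC 6238) using only Python's standard library. The shared secret and the 30-second interval are illustrative; a real deployment would provision the secret over a secure channel and pair the code with a second factor resistant to brute force.

```python
# Minimal TOTP sketch in the style of RFC 6238; the secret and interval
# are illustrative, not a production provisioning scheme.
import hmac, hashlib, struct, time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // interval           # advances every interval
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp(b"shared-secret"))  # a fresh six-digit code each interval
```

Because the counter advances every interval, a code captured by a phisher is useless for replay moments later.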
Q: The Internet is seen as a flat network where any node can communicate with any other. One of the fundamental ideas baked into the Internet protocols is anonymity. This presents immense problems for local networks that want to be secure because they cannot easily validate whether requested connections are from authorized members. What technologies are available to define secure subnets, abandoning the idea of flatness and anonymity?
A: The Internet is flat in the sense that the cost and time of communication between two points approximates that of any two points chosen at random. Enterprise networks are often, not to say usually, designed and intended to be as flat as possible.
It is time to abandon the flat network. Flat networks lower the cost of attack against a network of systems or applications—successfully attacking a single node gains access to the network. Secure and trusted communication must now trump ease of any-to-any communication.
It is time for end-to-end encryption for all applications. Think TLS, VPNs, VLANs, and physically segmented networks. Encrypted pathways must reach all the way to applications or services, not stop at network perimeters or operating systems. Software-defined networks put this within the budget of most enterprises.
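As a minimal sketch of what "all the way to applications" means, the following Python fragment opens a TLS session that is authenticated and encrypted between the client process and the serving application itself, not merely to a perimeter device. The host name is illustrative.

```python
# Sketch: an encrypted pathway that terminates at the application, not at
# a network perimeter. The host name is illustrative.
import socket, ssl

context = ssl.create_default_context()   # verifies the server certificate
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection(("example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(200))             # data arrives over the protected channel
```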
Q: Most file systems use the old Unix convention of regulating access by the read-write-execute bits. Why is this a security problem and what would be a better practice for controlling access?
A: It is not so much a question of the controls provided by the file system but the permissive default policy chosen by management. It is a problem because it makes us vulnerable to data leakage, system compromise, extortion, ransomware, and sabotage. It places convenience and openness ahead of security and accountability. It reduces the cost of attack to that of duping an otherwise unprivileged user into clicking on a bait object.
It is time to abandon this convenient but dangerously permissive default access control rule in favor of the more restrictive "read/execute-only" or, even better, "least privilege." These rules are more expensive to administer, but they are more effective; they raise the cost of attack and shrink the population of people who can do harm. Our current strategies of convenience over security and "ship low-quality early and patch late" are proving to be not just ineffective and inefficient, but dangerous. They are more expensive in maintenance and breaches than we could ever have imagined.
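To illustrate at the file-system level, here is a small Python sketch, for a Unix-like system, of replacing a permissive default with "read/execute-only"; the file name is illustrative.

```python
# Sketch: a restrictive "read/execute-only" default on a Unix-like system.
# The file name is illustrative.
import os, stat

os.umask(0o027)                 # new files: group cannot write, others get nothing

with open("app.sh", "w") as f:  # stand-in for an installed program
    f.write("#!/bin/sh\necho hello\n")

# Owner and group may read and execute; no one may write without first
# changing the mode, so a duped user cannot quietly modify the program.
os.chmod("app.sh", stat.S_IRUSR | stat.S_IXUSR | stat.S_IRGRP | stat.S_IXGRP)
print(oct(os.stat("app.sh").st_mode & 0o777))   # 0o550
```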
Q: What about malware? When it gets on your computer, it can do all sorts of harm, such as stealing your personal data or, in the worst case, holding it for ransom. What effective defenses are there against these attacks?
A: The most efficient measures are those that operate early, preventing the malware from being installed and executed in the first place. This includes familiar antivirus programs as well as the restrictive access control rules mentioned earlier. It may include explicitly permitting only intended code to run (so-called "whitelisting"). It will include process-to-process isolation, which prevents malicious code from spreading; isolation can be implemented at the operating system layer, as in, for example, Apple's iOS, or, failing that, by running untrusted processes in separate hardware boxes. We should not be running vulnerable applications, such as email and browsing, on porous operating systems, such as Windows and Linux, alongside sensitive enterprise applications.
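As a rough sketch of whitelisting, the following Python fragment refuses to execute a program unless its hash matches an explicit allowlist; the path and hash shown are hypothetical.

```python
# Sketch of whitelisting: run a program only if its hash is on an explicit
# allowlist. The path and hash below are hypothetical.
import hashlib, subprocess, sys

ALLOWED = {
    # program path -> SHA-256 of the approved build (hypothetical value)
    "/usr/local/bin/backup": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def run_if_allowed(path: str) -> None:
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    if ALLOWED.get(path) != digest:
        sys.exit(f"refusing to run {path}: not on the allowlist")
    subprocess.run([path], check=True)   # only intended code executes
```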
However, since prevention will never be much more than 80% effective, we should also be monitoring for indicators of compromise, the evidence of its presence that any code, malicious or otherwise, must leave.
Oh, I almost forgot. We must monitor traffic flows. Malware generates anomalous and unexpected traffic. Automated logging and monitoring of the origin and destination of all traffic moves from "nice to do" to "must do." While effective logging generates large quantities of data, there is software to help in the efficient organization and analysis of this data.
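A toy sketch of such monitoring, in Python, flags traffic to destinations a host has never contacted before. The flow-record format here is hypothetical; a real deployment would consume NetFlow/IPFIX records or firewall logs.

```python
# Toy flow monitor: alert on traffic to destinations a host has never
# contacted before. The flow-record format is hypothetical.
from collections import defaultdict

known = defaultdict(set)        # source host -> destinations seen so far

def check_flow(src: str, dst: str, port: int) -> None:
    if dst not in known[src]:
        print(f"ALERT: {src} contacted new destination {dst}:{port}")
    known[src].add(dst)

check_flow("10.0.0.5", "198.51.100.7", 443)   # first contact -> alert
check_flow("10.0.0.5", "198.51.100.7", 443)   # seen before -> silent
```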
Q: Early in the development of operating systems, we looked for solutions to the problem of running untrusted software on our computers. The principle of confinement was very important. The idea was to execute the program in a restricted memory where it could not access any data other than what it asked for and what you approved. The basic von Neumann architecture had nothing built in that would allow confinement. Modern operating systems like iOS and Android include confinement functions called "sandboxes" to protect users from untrusted software downloaded from the Internet. Is this a productive direction for OS designers and chip makers?
A: The brilliance of the von Neumann architecture was that it used the same storage for both procedures and data. While this was convenient and efficient, it is at the root of many of our current security problems. It permits procedures to be contaminated by their data and by other procedures, notably malware. Moreover, in a world in which one can put two terabytes of storage in one's pocket for less than $100, the problem that von Neumann set out to solve—efficiently using storage—no longer exists.
In the modern world of ubiquitous and sensitive applications running in a single environment, with organized criminals and hostile nation-states, convenience and efficiency can no longer be allowed to trump security. It is time to at least consider abandoning the open and flexible von Neumann Architecture for closed application-only operating environments, like Apple's iOS or the IBM iSeries, with strongly typed objects and APIs, process-to-process isolation, and a trusted computing base (TCB) protected from other processes. These changes must be made in the architecture and operating systems. There is nothing the iOS user can do from the user interface that will make a persistent change to the integrity of the software. There is little the developers of programs can do that will nullify defects in the operating system or other programs.
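On a conventional Unix-like system one can only approximate such confinement. The following Python sketch runs an untrusted program in a child process under hard resource limits; the script name and the limits are illustrative, and true sandboxes such as Apple's are enforced far more thoroughly by the operating system itself.

```python
# Rough approximation of confinement on Unix: run untrusted code in a
# separate process with hard resource limits. The script name and limits
# are illustrative; real sandboxes (iOS, seccomp) go much further.
import resource, subprocess

def limits():
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                 # 2 seconds of CPU
    resource.setrlimit(resource.RLIMIT_AS, (256 << 20, 256 << 20))  # 256 MB of memory
    resource.setrlimit(resource.RLIMIT_NOFILE, (8, 8))              # few open files

subprocess.run(["python3", "untrusted.py"], preexec_fn=limits, timeout=10)
```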
It is ironic that one can get a so-called "computer science" degree without even being aware of alternatives to the von Neumann architecture.
Q: There have been many attempts at intrusion detection in operating systems. Is it possible to identify that someone appearing to be an authorized user is actually someone else?
A: There are recognizable differences in the behavior of authorized users and impersonators. The simple measure of identifying repeated failed attempts to do something can reveal intruders. More complex measures exploiting advances in artificial intelligence can detect more subtle differences. We must tune these measures to balance false positives against the failure to detect. We must also ensure the alarms and alerts reach the responsible managers, usually the manager of the user and the owner of the asset, who are in a position to recognize the need for corrective action and who have the authority and resources to take it.
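Here is a minimal Python sketch of the simple measure just described, counting repeated failures within a sliding window. The threshold and window are illustrative; tuning them is precisely the balance between false positives and the failure to detect.

```python
# Sketch: detect repeated failed attempts within a sliding window.
# The threshold and window are illustrative tuning parameters.
import time
from collections import defaultdict, deque

WINDOW, THRESHOLD = 300, 5            # 5 failures within 5 minutes
failures = defaultdict(deque)

def record_failure(user: str) -> None:
    now = time.time()
    q = failures[user]
    q.append(now)
    while q and q[0] < now - WINDOW:  # drop events outside the window
        q.popleft()
    if len(q) >= THRESHOLD:
        # route the alert to the user's manager and the asset owner
        print(f"ALERT: {user} exceeded {THRESHOLD} failed attempts in {WINDOW}s")
```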
Q: When OSs started to span networks, traffic analysis of packets became an ingredient of a signature of computer use. Is this a valuable approach today?
A: It's tough but not hopeless. While we may never be sure that all nodes in the public networks properly identify themselves, cryptography can improve our trust in the source of traffic. While we may never solve the problem of compromised systems being used as "cutouts" to hide the identity and location of the sources of attack traffic, by storing more metadata about the sources and destinations of traffic we can improve the effectiveness and efficiency of forensics.
Q: Another common attack method is phishing: email or voicemail messages that appear legitimate and entice you into revealing your personal information. Are there any practical ways to defend against phishing?
A: Courtney's Third Law taught us "there are management solutions to technical problems but there are no technical solutions to management problems." Substitute "human" for "management" and the statement remains true.
Masquerading and fraud attacks appeal to the Seven Deadly Sins and to gullibility, fear, curiosity, and even the mere desire to be helpful. Fraud and deceit, what rogue hackers call "social engineering," are as old as language. They have exploited every communication medium ever used.
However, in the modern world, these appeals are mostly used to get us to compromise our credentials or the integrity of our systems. We can caution and train our users but experience suggests the best of these efforts will not be sufficient. We must also use the measures recommended here to limit the consequences of the inevitable errors.
Q: What about insider attacks?
A: Threats have both source and rate. Insiders have a low rate but high consequences. Outsiders may damage the brand but insiders may bring down the business.
There are risks with privileged users and escalation of privileges. Edward Snowden was able to expand his privileges in an organization with "security" in its name. He did this over an extended period of time without being detected.
Pervasively, we have too many overprivileged users with too little accountability. Indeed, privileged users are among the most likely to share IDs and passwords; when they do, there is no accountability if something goes wrong. Often the privileges are so great and accountability so poor that the privileges, once granted, cannot be reliably withdrawn.
To reduce this threat, start with strong authentication for the use of any privileged capabilities. Implement multiparty controls over these capabilities. Improve accountability by ensuring privilege is available to only one user at a time, only when needed. Keep a record of all grants and uses of privilege.
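A toy Python sketch of those rules: a privilege is granted by a second party, to one user at a time, expires automatically, and leaves an audit record of every grant and use. The names and the 15-minute lifetime are illustrative.

```python
# Toy just-in-time privilege manager: multiparty grant, one holder at a
# time, automatic expiry, full audit trail. Names and TTL are illustrative.
import time

audit_log = []    # record of all grants and uses
active = {}       # privilege -> (holder, expiry)

def grant(privilege: str, user: str, approver: str, ttl: int = 900) -> None:
    assert approver != user, "multiparty control: requester cannot self-approve"
    holder = active.get(privilege)
    assert holder is None or time.time() > holder[1], "one holder at a time"
    active[privilege] = (user, time.time() + ttl)
    audit_log.append(("GRANT", privilege, user, approver, time.time()))

def use(privilege: str, user: str) -> None:
    holder, expiry = active.get(privilege, (None, 0.0))
    if holder != user or time.time() > expiry:
        raise PermissionError(f"{user} holds no current grant of {privilege}")
    audit_log.append(("USE", privilege, user, time.time()))

grant("db-admin", "alice", approver="bob")   # granted, logged, expires in 15 minutes
use("db-admin", "alice")                     # permitted and logged
```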
Q: You clearly have strong opinions about how to secure our computer systems and networks. You place a great deal of weight on past security practices. Are these not obsolete? Don't we need the results of modern security research more than ever?
A: I plead guilty to having strong opinions and I beg for tolerance. I would like to defend my respect for past practices. Believe it or not, designers of operating systems have made security and protection a high priority since the 1960s. Their research and experience with real systems proves that many of the methods they discovered work. It astounds me that we would downplay those older successes in favor of unproven research.
What has changed over those years is not the need for security, but the risks and costs of insecurity. It should be clear to a casual reader of the news, let alone those with access to intelligence sources, that what we are doing is not working. It is both costly and dangerous.
While these recommendations may represent a change in the way we are doing things, we know they work. There is little new in them. Most of these ideas are as old as computing and some we inherited from more primitive information technology. Most of the resistance to using these practices comes from loss of convenience. Good security is not convenient. But it is absolutely necessary for the security of our assets and the reliability of the many critical systems on which we all depend. We need not suffer from the scourge of systems that so easily succumb to invaders.