Communications of the ACM

Practice

Opportunity Cost and Missed Chances in Optimizing Cybersecurity


Credit: Cagkan Sayin

Cybersecurity involves everyday decisions about balancing costs that influence defensive outcomes—where to focus resources, which threats pose the greatest potential impact, and which mitigations must be deployed before others. Problems abound in this endeavor. Cost is not just monetary; resources are finite and scarce. Consequently, cybersecurity decisions risk suboptimal outcomes and missed opportunities.

In theory, decisions should be made relative to the expected returns on each option. For example, will backups protect against the expected losses from ransomware? Other alternatives are, by necessity, not pursued. The calculation of ROI (return on investment) determines the value of a particular choice but ignores what might have been.

This is the very definition of opportunity cost: the loss of potential gain from other alternatives when one alternative is chosen.6 The money spent on data backups cannot also be used for endpoint protection as a defense against ransomware.
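
To make the distinction concrete, here is a minimal napkin-math sketch (every figure is invented for illustration) of how an ROI calculation alone hides opportunity cost when one budget must fund either backups or endpoint protection:

    # Napkin math (all figures invented): ROI alone vs. opportunity cost.
    budget = 100_000  # one budget; the two options are mutually exclusive

    # Hypothetical expected annual losses avoided by each option.
    backups_benefit = 150_000   # recovery from ransomware via backups
    endpoint_benefit = 220_000  # blocking ransomware at endpoints

    def roi(benefit: float, cost: float) -> float:
        """Classic ROI: net gain relative to cost."""
        return (benefit - cost) / cost

    print(f"Backups ROI:  {roi(backups_benefit, budget):.0%}")   # 50%
    print(f"Endpoint ROI: {roi(endpoint_benefit, budget):.0%}")  # 120%

    # Opportunity cost of choosing backups: the forgone net gain of the
    # best alternative. A positive ROI alone does not make backups the
    # best use of the budget.
    forgone = (endpoint_benefit - budget) - (backups_benefit - budget)
    print(f"Opportunity cost of choosing backups: ${forgone:,}")  # $70,000

Backups show a positive ROI, so the decision looks sound in isolation; only by pricing the forgone alternative does the $70,000 opportunity cost surface.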

Cybersecurity has much to gain by incorporating opportunity cost into decision making, from minimizing the impact of external threats to maximizing company productivity. If resources are spent defending against supply-chain attacks when social engineering results in larger or more frequent losses, the business suffers. Inside the company, allocating time and other costs to security can even be detrimental to the bottom line.

This article explores how opportunity cost is commonly neglected in practice and discusses ways to incorporate it into routine decision processes; a case study illustrates the benefits of making opportunity cost a core component of cybersecurity decision making.

A frustration for many people is calculating the expected benefit of a given choice. What is the value in spending four hours reading a new book rather than practicing the violin? Considering alternatives is not a default human tendency, but these are not impossible questions to answer, even absent absolute dollar values.

Cost is usually associated with monetary value. Financial resources spent on one opportunity mean those resources are not available for something else. Businesses calculate the financial cost of various components, including licensing, capital expenses, and the fully loaded cost of labor.

The monetary cost and benefit of a specific choice are often considered in isolation, rather than taking into account its externalities—the cost and benefit imposed on other entities by the choice.12 A classic example of a negative externality is pollution. A company can perform a cost-benefit analysis to calculate its optimal production rate, but this will consider only the cost incurred to make a widget and the benefit the company receives by selling the widget. It will not consider the costs imposed on the local community by lowering the quality of drinking water, nor the costs imposed on society through carbon emissions. In cybersecurity, implementation of an application security tool may present a positive ROI for the security team but may also result in a negative externality of slower or fewer software releases, imposing a cost on the software engineering team(s), the organization, and its customers.
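
The application-security example can be sketched in the same napkin-math style (all numbers invented): the tool looks profitable on the security team's ledger but turns negative once the externality imposed on engineering is counted.

    # Illustrative (invented numbers): the same tool decision from two ledgers.
    tool_cost = 80_000
    vulns_prevented_value = 120_000  # benefit as booked by the security team

    security_team_net = vulns_prevented_value - tool_cost
    print(f"Security-team net benefit: ${security_team_net:,}")  # $40,000

    # Negative externality: slower, fewer releases impose a cost on the
    # engineering teams, the organization, and its customers.
    slower_releases_cost = 90_000  # hypothetical estimate

    org_net = security_team_net - slower_releases_cost
    print(f"Organization-wide net benefit: ${org_net:,}")        # $-50,000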

The costs and benefits are likely to be quite different from the perspective of a security team, the organization and other teams within it, the users and customers, and the society around them. A benefit for one stakeholder may beget a cost for another. Tables 1 and 2 untangle this complexity by illustrating the costs and effects of opportunity cost in cybersecurity. They distinguish tangible from intangible costs and benefits and highlight that these apply to several key groups: employees, organizations, and society.

Table 1. Costs of opportunity cost in cybersecurity.

Table 2. Effects of opportunity cost in cybersecurity.

Time is an important component of opportunity cost but is often neglected in practice. It takes time to answer help-desk tickets and to develop new security policies. The investment of time also has second-order impacts, on employee satisfaction, burnout, turnover, and more. Emotional experience, such as anxiety, frustration, and confusion, is often overlooked as a cost when evaluating options, despite its well-documented salience during human decision making.10 The stress of time pressure can exacerbate users' frustrations with a security tool's lack of usability, making them more likely to bypass the security requirement.8

Security programs are components of organizations and can expend energy or absorb it, but energy is neither created nor destroyed. Beseeching employees to be vigilant to phishing threats requires them to expend energy, which the security team absorbs (as these user efforts allow the security team to expend energy elsewhere). Requiring software engineers to triage bugs discovered by vulnerability scanners is another example; developers expend energy combing through findings and fixing them, and the security team absorbs that energy. Thinking of where energy is expended and absorbed, and by whom, can help excavate the opportunity cost of a security decision.


Salience and the Null Baseline

The most effective method of encouraging exploration of opportunity cost during decision making is to make alternatives more salient.

How can alternatives outside the focal point be made more salient? One way is to require a "do nothing" baseline—that is, the costs and benefits of an option must be compared with those of doing nothing. This is known as the null baseline. The idea is to encourage salience around alternatives that serve the same goal. For example, if your goal is relaxation, you should compare not only vacation options, but also local spa days, outsourcing domestic toil such as cleaning, or implementing a yoga routine. This comparison should involve not just monetary and effort costs, but also benefits in terms of positive memories, health outcomes, increased leisure time, and whether the benefits are acute or extended.

Focal options influence security consumption. For example, security teams will often evaluate products of the same type, known as a bake-off, which reflects the focal options. They are less likely to include nonfocal alternatives during the same decision making process. Nor are they likely to compare the costs and benefits of buying the focal options vs. not buying any of them at all.

A security team may select three security-awareness training vendors for a bake-off to compare their costs and benefits. The salient costs might include the upfront purchase price and an estimate of initial implementation costs. The salient benefits might include the volume of content, variety of content, and customer support.

The costs and benefits that are not salient are those outside of the focal options and local context of the security team. The cost of internal end users disrupting their work to perform the training is not salient. The cognitive cost of users—or the security team itself—viewing the problem as "solved" when it remains unsolved is not salient.9 The benefits of pursuing alternative options—such as using single sign-on and multifactor authentication to reduce the impact of users falling for phishing schemes—are not salient. Neither are the benefits of simply not purchasing the tool, which could include better goodwill for the security team.


Always considering a null baseline is a worthy heuristic for encouraging security decision makers to consider opportunity cost. This heuristic simplifies the consideration of opportunity cost and makes best use of finite time and attention. When evaluating a solution to a problem area, security professionals should consider "do nothing" as their baseline and approximate its benefits.

Suppose a security leader wishes to evaluate application-security testing tools. By considering the null baseline, the leader might come up with benefits including faster lead time for changes, increased deployment frequency, lower change failure rate, and regained time that would otherwise be diverted to triaging test findings.3 The opportunity cost of pursuing application security testing may be considered too great an impact on the business relative to the benefit of fewer bugs reaching production (which relies on a hypothetical outcome that those vulnerabilities will be exploited). If the security leader considers only the focal options, comparing the relative costs and benefits of each tool, these facets of the potential decision outcomes would be neglected despite bearing a nontrivial cost to the organization.

The null baseline can also support reframing the problem statement toward more globally optimal outcomes rather than local maximums. If the security leader's motivation behind application security testing tools is reflected in the problem statement of, say, "Fewer vulnerabilities must reach production," then considering the "do nothing" option may prompt the leader to rethink this problem in light of speed being vital to the organization. That is, the organization's priorities will become more salient, resulting in a problem statement more aligned with them such as, "Vulnerabilities exploited in production must not impact business operations." This changes the focal area substantially.

Alternative hypotheses not being salient also impacts security decision making. For example, once a security leader decides to adopt a zero-trust architecture, they are less likely to gather evidence about alternative hypotheses (that something other than zero trust might solve the problem best) and may become overconfident in the singular, original hypothesis.11 Making opportunity cost salient, such as by considering the null baseline, increases the likelihood that consumers will compare options across categories and benefit types rather than solely comparing the narrow competitive set reflected by focal options.15

In cybersecurity, the null baseline would therefore allow security leaders to perceive competition across solution categories and reconsider the ultimate problem being solved. No longer is it the narrow framing of, "What insider threat tool is best?" Now the question is, "What solution will most effectively reduce the impact of unwanted activity by an insider, given resource constraints?" This facilitates comparison between insider threat tools and identity controls or other options.

Finally, the null baseline is helpful as a trivial stopping rule. The pursuit of additional information and evaluation of alternative choices could continue indefinitely if not for a decision to halt the process. Decision makers considering opportunity cost for the first time can consider the null baseline and then stop seeking other options. Once this becomes routine, other stopping rules can be considered.


Areas of Application for Opportunity Cost in Cybersecurity

There are many ways to apply opportunity cost to cybersecurity, as illustrated by the examples just mentioned. This section explores four common applications in more depth.

Development and solution selection. Security practitioners struggle to select the best solutions to their problems under budget and time constraints. The set of potential solutions to a given problem expands when considering options such as build vs. buy; manual vs. automated processes; a tightly scoped MVP (minimum viable product) vs. a more complete initial feature set. While this reflects a slightly more expansive consideration of solutions, it still reflects a zoomed-in perspective.

An opportunity-cost framing exposes situations when the problem definition is overly prescriptive, which stifles alternatives that might be superior solutions. For example, "What static application security tool is best?" begets a narrow focal point and prescribes only static application security testing (SAST) tools. "How can we minimize the number of security bugs developers introduce into code?" is less tool-specific and might introduce the focal option of security training for developers, but is still anchored to a relatively narrow focal point.


Instead, "How do we minimize the impact of security bugs in code running in production?" allows an expansive list of potential solutions to best prioritize allocation of finite resources. Brainstorming with other teams could result in a broader list including not only SAST tools and training, but also ephemeral or immutable infrastructure, standardizing libraries and patterns so developers are less likely to make mistakes, architecting the system with strong isolation properties, or running security chaos experiments to expose how the system behaves in adverse scenarios.16

Opportunity cost can assist with decisions on when to outsource work or perform it internally. Most organizations have a defined purpose to fulfill; their missions are not to secure their endpoints or networks. Through this lens, any work that does not directly support the organization's fundamental purpose bears the opportunity cost of taking time, budget, and cognitive effort away from work that more directly fulfills its purpose.

If a retail corporation wants a resilient online store, expertise in databases, ad placement, and content delivery may not exist among its internal skills. Instead, it could outsource the problem to a platform service provider, advertising technology firm, and content-delivery network whose employees are skilled at such tasks. The opportunity cost of trying to create and operate a successful online store in-house is time and cognitive effort that could be better spent on the retail business logic that would deliver more value to customers. This is a considerable cost to pay.

Requirements and patterns. A routine part of cybersecurity practice is the use of requirements and patterns to define expectations around qualities and behaviors.

Defining requirements and creating reusable patterns reflects consideration of opportunity cost (which is perhaps why they are notoriously neglected in cybersecurity programs). Documented security requirements limit the variation of technology and attack surfaces, which supports repeatability and maintainability. Repeatability and maintainability reflect nonfunctional requirements that may not be in mind during cost-benefit analysis within a narrow focal point, despite their benefits to system resilience.2

For example, a common manual task for security teams is to answer engineering teams' ad hoc questions about how to build a product, feature, or system in a way that will be approved (or, at least, not be blocked) by the security program. This is often considered a high-priority activity, as ignoring it leaves engineering teams "stuck." Manually responding to each query, however, bears an opportunity cost: the time spent answering each one could be used elsewhere to fulfill the security program's goals.

Defining explicit requirements and giving engineering teams the flexibility to build their projects while adhering to those requirements frees up time and effort for both sides: Engineering teams can self-serve and self-start without interrupting their work to discuss and negotiate with the security team, and security teams are no longer as inundated with requests and questions, so they can perform work with more enduring value. As a recent example, Greg Poirier applied this approach to CI/CD (continuous integration/continuous delivery) pipelines, eliminating the need for a centralized CI/CD system while maintaining the ability to attest software changes and determine software provenance.14

Spending time thinking through how to implement standards while minimizing friction and writing down the resulting security requirements—which become another set of nonfunctional requirements in a software project—provides a higher ROI than common security "toil" that provides only acute value. The opportunity cost of engineering teams asking for security requirements for each project includes tangible costs such as time and delayed realization of revenue, as well as intangible costs such as divided attention and cognitive overload. Writing documentation around, for example, "Here is how to administer a password policy" means that each engineering team no longer must ask for requirements, reducing these costs while providing the benefits of repeatability and maintainability. When faced with what type of password policy to implement, engineering teams should be able to access documentation and understand the requirements. Importantly, to minimize the opportunity cost of ad hoc discussions, everyone should reference the same single document to avoid one-off definition of requirements and negotiation based on individual tech stacks.
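
As a minimal sketch of what such a self-serve requirement can look like in executable form (the policy values and denylist here are hypothetical, not recommendations), a shared module encodes the documented password policy once, so engineering teams can check compliance without opening a discussion with the security team:

    # Hypothetical shared module encoding the documented password policy once,
    # so engineering teams can self-serve instead of asking security ad hoc.
    MIN_LENGTH = 12    # illustrative values; a real module would cite the
    MAX_LENGTH = 128   # organization's written standard

    def check_password_policy(password: str) -> list[str]:
        """Return the list of policy violations (empty means compliant)."""
        violations = []
        if len(password) < MIN_LENGTH:
            violations.append(f"shorter than {MIN_LENGTH} characters")
        if len(password) > MAX_LENGTH:
            violations.append(f"longer than {MAX_LENGTH} characters")
        if password.lower() in {"password12345", "letmein-letmein"}:
            violations.append("appears on the common-password denylist")
        return violations

    # Any team can import and call this; no meeting required.
    assert check_password_policy("correct horse battery staple") == []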

Therefore, the opportunity cost of not standardizing security requirements and disseminating them is high, but this cost is overlooked when practitioners consider the cost-benefit analysis of other work that takes time away from pursuing this standardization.

Risk prioritization. Despite much attention, there is insufficient analysis of the ROI and opportunity cost of risk quantification. The time cost of implementing information collection, tuning quantitative models, and interpreting information is often omitted; these activities explicitly siphon time from performing the activities that would address the concerns they attempt to measure. Calculating the probability of a particular security event sacrifices time that could be spent ensuring the organization is prepared for that security event and can minimize its impact. Risk quantification can be informative, but the most important outcome is how the results influence subsequent decisions and actions.

There is also the opportunity cost of delaying decision making because of the desire to calculate the probabilities of security events. Possessing a rough sense of what is easiest for the attacker—that is, the actions they are likely to attempt first in their operations—suffices to inform prioritization of effort. The benefits (in terms of real security outcomes) of refining those estimates with complex statistical models remain unproven.
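
To make this concrete, a minimal sketch (the attack actions and effort scores below are invented for illustration) shows that an ordinal ranking of attacker effort, with no statistical machinery, is enough to order defensive work:

    # Rough ordinal ranking of attacker effort (1 = easiest); invented values.
    attack_actions = [
        ("phish credentials and log in", 1),
        ("exploit an unpatched internet-facing service", 2),
        ("abuse API keys leaked in public repositories", 2),
        ("chain a zero-day against hardened infrastructure", 5),
    ]

    # Prioritize defenses against what attackers are likely to try first.
    for action, effort in sorted(attack_actions, key=lambda pair: pair[1]):
        print(f"effort {effort}: {action}")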

Risk quantification can also lead to a false sense of security. By reducing uncertainty, practitioners may feel that the situation is more under control, but the ambiguity of the situation remains and can only be uncovered during real operations when real events occur.17 This is true even beyond the realm of cybersecurity. As seismologist Susan Elizabeth Hough noted, "A building doesn't care if an earthquake or shaking was predicted or not; it will withstand the shaking, or it won't."5 That is, the opportunity cost of attempting to predict an event is the time that could be spent cultivating resilience to the event.

Opportunity cost of delay. In cybersecurity, decisions related to waiting or proceeding are common:1 Should we release or delay? Should we go for the temporary fix or wait for a more perfect solution?

Developing internal projects that aim for perfect and all-encompassing rather than a "good enough" MVP bears a similar opportunity cost. This is often witnessed as the dark side of the desire for automation. An organization may wish for "one automation framework to rule them all," even if it has not delivered anything after years of dedicated development work (exacerbated by the sunk cost fallacy). The opportunity cost of building the "perfect" thing for all use cases a year from now is the chance to solve one use case "well enough" and deliver value sooner.

Security and software engineering teams alike often err in attempting to create automation frameworks that can be generalized to all potential use cases, often taking years to complete. This approach neglects opportunity cost, which would encourage comparison with alternative options, such as spending the same time, attention, and money on building automation for one specific use case to launch the project. For example, a security team could interview internal stakeholders and determine that automation for asset inventory would provide immediate value. They could constrain their time and effort by meeting a set of minimum requirements for asset inventory automation, quickly building an MVP to solicit initial feedback and inform what work should come next.

The security engineering team can consider the opportunity cost of this subsequent work, too; perhaps the marginal benefit provided by requested features is less than other tasks the team could perform. The team can thereby reallocate its time to other tasks once the MVP is released rather than confining it in perpetuity.

Multiple smaller releases targeted at a few use cases foster more flexibility when allocating time and effort, especially when it can take time for users and stakeholders to provide feedback on releases with enough clarity and consensus to inform next steps. Attempting a single massive release satisfying the ideal requirements not only makes time and effort an illiquid resource, but can also require other team members to work overtime to cover for the people building the Rube Goldberg machine. This leads to tangible outcomes such as poorer work quality, worse metrics, and increased attrition, as well as intangible outcomes such as declines in productivity, increased burnout, and climbing resentment. (The SPACE Framework for developer productivity suggests that software quality can be captured by measuring reliability, ongoing service health, and absence of bugs. These are all quantifiable metrics and the former two, being relevant for uptime, can be tied to revenue.4)


Application and Practice in Cybersecurity

Here, we explore cybersecurity opportunity cost in practice, beginning with a detailed case study of Twilio, which highlights the principles discussed thus far. We then present tools that help practitioners and security decision makers implement opportunity-cost consideration with maximum ease in real-world situations.

Case Study: Twilio is a technology company offering APIs and services for communications, including phone calls and text messages. For example, Uber uses Twilio to send text messages to customers about ride status. To make this possible, Twilio uses telecommunications protocols required for mobile network operations. To make this reliable, these protocols must be secured.

NightOwl is an automated testing framework built by two security architects at Twilio that performs attack-tree modeling of telecommunications protocols. Twilio gained a provisional patent for NightOwl, enhancing its intellectual property portfolio.

NightOwl saved $200,000 in its first year of use by just one team. Now multiple internal teams can use it to self-serve security testing ahead of product releases and are actively expanding their usage of it, proposing new features and functionality.

NightOwl's creators built automation and testing for two specific telecommunications protocols: GTP (GPRS Tunneling Protocol) and Diameter. These protocols are essential for operating Twilio's core mobile network, connecting users to Twilio's SIM cards across its global telecommunications network. Neither GTP nor Diameter supports authentication or authorization, so testing each protocol's security and reliability is of paramount importance.

Contemplating opportunity cost led NightOwl's creators to scope an MVP that constrained its design to just these two protocols, ensuring the best use of their time and highest ROI for the organization. GTP and Diameter not only represented the first use cases for Twilio's Super SIM team, thereby offering the highest marginal benefits, but also were the easiest to build; one required a specialized transport layer, SCTP (Stream Control Transmission Protocol), and the other was over UDP (User Datagram Protocol).

Twilio's status quo was hiring an outside consulting firm to perform penetration tests for each protocol before a product became generally available (GA). Each test cost approximately $100,000 and required weeks to months of the security team's time not only to organize the tests with the consultants, but also to work through the findings afterward and coordinate with internal teams to fix them. Given the complex and specialized nature of telecommunications protocols, there were few off-the-shelf automated penetration testing tools that would present a direct alternative.

Because NightOwl's creators considered opportunity cost when pondering this problem, they understood that internal penetration testing was not viable. Hence, they expanded the range of viable alternatives to include developing an automated tool, trading off an initial upfront time investment for a reduction in ongoing costs and an increase in perpetual benefits. By building their own tool, they could implement the comprehensive testing needed for such a complex system. There was more benefit in crafting targeted messages based on Twilio's real-world implementation than in attempting to retrofit existing tools, which covered only a subset of cases (many inapplicable to Twilio) and omitted other necessary tests. It also allowed them to implement new approaches that existing tools did not offer but that provided palpable marginal benefits, such as building a fuzzing engine and stateful messages to perform more in-depth tests.

Taking an MVP approach, which made best use of constrained time resources, allowed NightOwl's creators to deliver these benefits quickly while minimizing time costs. They arrived at the MVP-plus-iteration model by also considering the opportunity costs of delaying NightOwl's release, including not receiving user feedback, delayed delivery of value to users, and time that could be better spent on other projects.

The resulting savings from NightOwl for launching products to GA are both temporal and monetary. The first engineering team to use NightOwl was releasing a product to GA by a certain target date. The team ran the necessary tests for GTP and Diameter using NightOwl, leaving only one test for the outside consultants to cover. By covering testing for two out of three protocols, NightOwl saved $200,000. Had those two tests not been satisfied by NightOwl, launching the product would have required an additional eight weeks of work. The engineering team even generated a report via NightOwl without assistance from the security team, including graphs of system outcomes when performing the simulated attacks. This provided not only the intangible benefit of gaining confidence in releasing the product to GA, but also knowledge documentation.

Before NightOwl, engineering teams would perform penetration tests only for major releases, given the substantial time, effort, and monetary expense. With NightOwl, engineering teams can now perform penetration tests before every release, including minor ones, thereby increasing the level of security coverage. The toil of coordinating the tests, creating tickets for each finding, and tracking completion of those tickets also imposed intangible costs—including stress, frustration, and lost productivity—that adopting NightOwl eliminated.

While it required more effort upfront, NightOwl offers ongoing benefits that would have been missed had its creators not considered opportunity cost. They might have simply performed a cost-benefit analysis comparing different consultants or performing the same manual work internally, overlooking this superior option.

Finally, a consideration of opportunity cost inspired NightOwl's creators to reframe the problem statement. By considering the costs of interpreting the findings from penetration testing consultants, translating them for Twilio's engineering teams, and collaborating to prioritize fixes, they began thinking about the goal as "actionable security findings that engineers can fix on their own" instead of "conduct a penetration test." This refined NightOwl's initial functional and nonfunctional requirements, enabling its creators to deliver more value than either the external penetration test or a manually scripted replacement could.

Implementing opportunity-cost consideration in practice. Any proposal to make decision making more burdensome risks skepticism and low adoption. Generating a list of alternatives might require research about which alternatives exist, the costs associated with different proposals, or even baseline measurements about the sum costs and value of a single option. Indeed, there is opportunity cost when considering opportunity cost.

Daniel Kahneman and other psychologists have differentiated fast and automatic thinking (system 1) from slower, analytical thinking (system 2).7 Fast and automatic thinking—sometimes described as the "lizard brain"—is optimized for ease. The basic instincts of the human brain in the limbic cortex support survival and efficiency. Anything invoking slower, analytical thinking must feel to the thinker like it offers an outsized payoff, given the effort involved. Therefore, ease of implementation into decision making workflows is essential for opportunity-cost consideration to become the default. In real life, especially in fast-paced and stressful environments, people routinely have neither the time nor desire to think about more difficult decisions.


To incorporate opportunity cost in practice, decision makers should expend only enough time and brainpower to stimulate consideration of options beyond the focal point. The null baseline can serve as a new decision making heuristic—a mental shortcut that can lead to better decision outcomes—that can become more automatic with repetition.

In any given decision, you can try the following workflow to consider opportunity costs:

  1. Consider the null baseline. What are the tangible and intangible benefits of doing nothing? What are the costs?
  2. The null baseline's benefits become the starting point for considering the opportunity cost of the focal option.
  3. The null baseline's costs may lead the decision maker to reframe the problem statement. For example, consider the focal option of manual security change approvals. The cost of "doing nothing" relative to this focal option is that a Web application running in production may be compromised. Thus, the problem statement should likely focus on "minimizing impact of compromise of code in production," for which manual security change approvals is one of many possible options.
  4. With the reframed problem statement, consider the expanded set of options that could help solve it. View time, effort, and budget as fixed units interchangeable among options. Estimate the benefits and costs of each—rough "napkin math" estimates are not only fine, but ideal here. Which option least erodes the benefits of the "do nothing" option? Answering that question can help uncover which option bears the most minimal opportunity costs.

These steps help the decision maker consider the null baseline, refine the goal, and explore the diversity of solutions.
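
As a worked illustration of these four steps (every figure below is invented), the focal option from step 3 (manual security change approvals) can be compared against the null baseline and one alternative surfaced by the reframed problem statement:

    # Napkin math for the four-step workflow (all figures invented).
    # Focal option: manual security change approvals for production code.

    # Step 1: the null baseline. Doing nothing preserves fast deploys but
    # leaves a rough expected annual loss from production compromise.
    null_benefit_fast_deploys = 250_000
    expected_compromise_loss = 300_000

    # Steps 2-4: score each option by the compromise loss it avoids, its
    # direct cost, and how much of the null baseline's benefit it erodes.
    options = {
        # name: (loss avoided, direct cost, null benefit eroded)
        "do nothing":                   (0,       0,       0),
        "manual change approvals":      (180_000, 100_000, 150_000),
        "automated rollback/isolation": (160_000, 120_000, 20_000),
    }

    for name, (avoided, cost, eroded) in options.items():
        print(f"{name:30s} net ${avoided - cost - eroded:>9,}")
    # The focal option avoids the most loss yet nets out worst once the
    # eroded deploy-speed benefit (slower deploys) is counted.

The focal option "wins" on loss avoided, but the option that least erodes the benefits of doing nothing comes out ahead; rough figures like these are enough to expose that.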

There is no need to delve into precise opportunity-cost quantification; doing so could lead practitioners to become profligate consumers of time and energy. In essence, there is a nontrivial opportunity cost when quantifying opportunity cost with precision (if precision is even possible in cases where intangible costs abound). Each unit of time expended attempting to assign a specific number to opportunity cost in a decision area is a unit of time not spent implementing the right option based on existing analysis.

The greatest marginal benefits can be harnessed from considering opportunity cost by baselining against the "do nothing" option. The null baseline is likely to, at a minimum, make alternative goals salient—such as "delivering code to production faster," "being able to make more sales calls," or "being able to consume more data for analysis"—which can highlight what will be lost if a focal choice is pursued. The null baseline can also highlight otherwise overlooked intangible costs of the focal option, such as the impairment to productivity from engineers relearning access workflows when a zero-trust network access tool is adopted by the security team.

The null baseline is likely to encourage a reframing of the problem statement to expand the range of alternatives being considered as well—for example, reframing the statement "Buy the best code-scanning tool," to the more business-aligned and less myopic "Minimize the impact of bugs in production."

Once practitioners are comfortable with this opportunity-cost consideration workflow (that is, it becomes more automatic for the lizard brain), there are a few relatively thrifty measurement options to complement it. Pilot programs can uncover indirect and intangible costs before full implementation. Proofs of concept of vendor solutions are often designed to hide implementation challenges, especially if teams affected by the tool are not involved in the bake-off. Piloting a solution, whether software or process, on one engineering team (with at least one other as a control) and comparing their goal metrics can generate evidence about the implementation's impact beyond the focal benefits and costs.
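
A minimal sketch of such a comparison (the team metrics below are fabricated): track the same goal metric for the pilot and control teams before and after rollout, and look at the difference in changes rather than either team in isolation.

    # Fabricated weekly lead times (hours) before/after piloting a tool.
    pilot_before,   pilot_after   = [20, 22, 19, 21], [26, 28, 27, 25]
    control_before, control_after = [21, 20, 22, 21], [21, 22, 20, 21]

    def mean(xs):
        return sum(xs) / len(xs)

    # Difference-in-differences: the pilot team's change minus the control
    # team's change isolates the tool's impact from ambient drift.
    effect = (mean(pilot_after) - mean(pilot_before)) - \
             (mean(control_after) - mean(control_before))
    print(f"Estimated lead-time impact: {effect:+.1f} hours")  # +6.0 hours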

Another potential avenue for measuring opportunity cost is determining stakeholders' willingness to pay to avoid a security implementation. For example, how much would employees pay to avoid using the corporate virtual private network (VPN)? How much would developers pay to avoid using a SAST tool that takes six hours to scan their code or to avoid having to file a ticket to gain access to a service? Surveys about willingness to pay can serve to translate intangible burdens into tangible values, making it easier to compare benefits and costs across options.
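
One way to aggregate such answers into a comparable figure (the survey responses below are fabricated for illustration): take the median monthly willingness to pay, then annualize it across the affected population as an intangible cost of the focal option.

    from statistics import median

    # Fabricated survey: dollars per month each developer would pay to
    # avoid a six-hour SAST scan in their workflow.
    wtp_responses = [40, 25, 60, 0, 35, 50, 20, 45]

    team_size = 120                  # developers affected (hypothetical)
    monthly = median(wtp_responses)  # 37.5

    # Annualize to make the intangible burden comparable with tangible costs.
    annual_intangible_cost = monthly * 12 * team_size
    print(f"Implied intangible cost: ${annual_intangible_cost:,.0f}/year")  # $54,000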


Conclusion

Opportunity cost is an integral aspect of cybersecurity that must be considered in every decision—a prominent, first-order component of decision making, not an afterthought. Everyone in the cybersecurity ecosystem plays a role in security outcomes, no matter who is making decisions, from individual software developers to security managers to the CEO. The results will be the best possible options for security and privacy—ones that balance organizations' multifaceted goals to nurture resilient business operations in a complex digital landscape.

Security decision makers would benefit from recognizing the many types of costs in addition to money. They need not ignore opportunity cost for lack of precise measurements. In particular, one way to ease into considering complex alternatives is to consider the null baseline of doing nothing instead of the choice at hand. This can elucidate the "true" problem being solved.

Opportunity cost can feel abstract, elusive, and imprecise, but it can be understood by everyone, given the right introduction and framing. Using the approach presented here—especially the null baseline heuristic—will make it natural and accessible. The widespread inclusion of opportunity cost as a routine consideration in research and practice is a worthy goal.


References

1. Arora, A., Caulkins, J.P., and Telang, R. Research note—sell first, fix later: Impact of patching on software quality. Mgmt. Sci. 52, 3 (2006), 465–471; https://pubsonline.informs.org/doi/10.1287/mnsc.1050.0440.

2. Cybersecurity and Infrastructure Security Agency, U.S. Digital Service, and Federal Risk and Authorization Management Program. CISA cloud security technical reference architecture, 2021; https://bit.ly/3IXmNIt.

3. Forsgren, N., Humble, J., and Kim, G. Accelerate—The Science of Lean Software and DevOps: Building and scaling high-performing technology organizations. IT Revolution Press, 2018.

4. Forsgren, N., et al. The SPACE of developer productivity: There's more to it than you think. acmqueue 19, 1 (2021), 20–48; https://dl.acm.org/doi/10.1145/3454122.3454124.

5. Hough, S. Predicting the Unpredictable: The Tumultuous Science of Earthquake Prediction. Princeton University Press, Princeton, NJ, 2010.

6. Huynh, T.N., Kleerup, E.C., Raj, P.P., and Wenger, N.S. The opportunity cost of futile treatment in the intensive care unit. Critical Care Medicine 42, 9 (2014), 1977–1982; http://bit.ly/3kObdat.

7. Kahneman, D. Thinking, Fast and Slow. Macmillan, 2011.

8. Kurowski, S., Fähnrich, N., and Roßnagel, H. On the possible impact of security technology design on policy adherent user behavior—Results from a controlled empirical experiment. SICHERHEIT. H. Langweg, M. Meier, B.C. Witt, and D. Reinhardt, eds. Gesellschaft für Informatik e.V., Bonn, Germany, 2018, 145–158; https://dl.gi.de/handle/20.500.12116/16276.

9. Lain, D., Kostiainen, K., and Capkun, S. Phishing in organizations: Findings from a large-scale and long-term study, 2021; https://arxiv.org/abs/2112.07498.

10. Loewenstein, G. and Lerner, J.S. The role of affect in decision making. Handbook of Affective Science. R. Davidson, H. Goldsmith, and K. Scherer, eds. Oxford University Press, Oxford, U.K., 619–664; https://bit.ly/3yh6X6s.

11. McKenzie, C.R.M. Taking into account the strength of an alternative hypothesis. J. Experimental Psychology: Learning, Memory, and Cognition 24, 3 (1998), 771–792; https://bit.ly/3ZAUENY.

12. Organization for Economic Cooperation and Development. Externalities—OECD. Glossary of statistical terms, 2003; https://www.oecd.org/regreform/sectors/2376087.pdf.

13. Podkul, C. Despite decades of hacking attacks, companies leave vast amounts of sensitive data unprotected. ProPublica (Jan. 25, 2022); http://bit.ly/3JhmFot.

14. Poirier, G. Die softwareherkunft (software provenance): an opera in two acts. Why would anyone do that? (Jan. 14, 2022); https://grepory.substack.com/p/der-softwareherkunft-software-provenance.

15. Russell, G. et al. Multiple-category decision making: review and synthesis. Marketing Letters 10, 3 (1999), 319–332; https://link.springer.com/article/10.1023/A:1008143526174#article-info.

16. Shortridge, K. and Rinehart, A. Security Chaos Engineering: Sustaining Resilience in Software and Systems. O'Reilly Media, Sebastopol, CA, 2022.

17. van Stralen, D. and Mercer, T.A. Ambiguity in the operator's sense. J. Contingencies and Crisis Mgmt. 23, 2 (2015), 54–58; https://onlinelibrary.wiley.com/doi/abs/10.1111/1468-5973.12084.


Authors

Kelly Shortridge is a senior principal product technologist at Fastly, New York, NY, USA and co-author of Security Chaos Engineering (O'Reilly Media).

Josiah Dykstra is a technical fellow at the National Security Agency and the owner of Designer Security, LLC, Severn, MD, USA.


Copyright held by authors/owners. Publication rights licensed to ACM.
Request permission to publish from [email protected]
