
Communications of the ACM

Contributed articles

Cyber-Physical Testbeds

Credit: Roy Wiemann

Modern societies depend on the quality and reliability of the services provided by networked critical infrastructures (NCIs). Physical infrastructures, including transportation systems, electricity grids, and telecommunication networks, deliver fundamental services for the smooth functioning of the economy and for the lives of all citizens. Their accidental or intentional failure represents one of the most serious risks societies face today.


Key Insights


The past few years have seen a dramatic increase in the use of the information and communication technologies (ICTs) within NCIs. The motivation for companies was mainly to reduce the cost of industrial installations and implement new services (such as remote monitoring and maintenance of infrastructures, energy markets, and the emerging smart grid). Despite clear advantages, the downside is that widespread use of standard technology components exposes NCIs to significant but common cyberthreats; for instance, deliberate attacks through computer malware6 or unintentional threats due to misconfiguration and software bugs5 can lead to severe service outages. This was also highlighted by several studies and reports concerning security of supervisory control and data acquisition, or SCADA, systems,6,15 which represent core NCI infrastructure, monitoring and controlling physical processes. They consist mainly of actuators, sensors, and hardware devices that perform physical actions (such as open a valve), as well as the ICT devices and software that monitor and control physical processes. Unlike traditional ICT systems, where the effects of disruptive cyberattacks are generally limited to cyber operations, in the context of critical infrastructure assets, such attacks can result in the loss of vital services (such as transportation and water and gas supply). Assessing the effect of cyberthreats against both the physical and the cyber dimensions of NCIs requires an accurate scientific instrument for conducting experimental tests and taking measurements.

Cyber-physical testbeds that actively support the scientific method are an example of such an instrument. Testbed development may leverage real systems, emulators, or software simulators. Unfortunately, experimenting with production systems in security and resilience tests involves the risk of side effects to mission-critical services.10 Likewise, dedicated experimentation infrastructure with real components involves safety risks10 and high installation costs.11 Software-based simulation is an efficient way to study physical systems, offering low-cost, fast, accurate analysis. However, it has limited applicability in the context of cybersecurity in light of the diversity and complexity of computer networks. Software simulators effectively model normal network conditions but fail to capture the way computer networks fail.7 On the other hand, in many cases, emulators capture not only whether a system will fail but also how it will fail.

To address the need for cyber-physical testbeds, we propose a scientific instrument called the Experimentation Platform for Internet Contingencies, or EPIC, which we developed to provide accurate, repeatable assessments of the effect cyberattacks might have on the cyber and physical dimensions of NCIs. To model the complexity of NCIs, EPIC uses a computer testbed based on Emulab (an advanced software suite for creating emulation testbeds)20,24 to recreate the cyber elements of an NCI and software simulation units (SSims) for the physical components. (The term Emulab refers to both a facility at the University of Utah, Salt Lake City, and to a type of software.)


Motivation

A major limitation of existing testbeds is their inability to run security experiments on multiple heterogeneous NCIs. NCIs are highly interconnected and interdependent, meaning a single failure within one NCI can have a cascading effect on others; for example, the collapse of India's northern electricity grid in July 2012 affected more than 600 million people and led to loss of power for transportation, health care, and many other services. Such scenarios (which can also be caused by cyberattacks) must be recreated, analyzed, and understood in a laboratory environment in order to develop the necessary security measures that can be applied in real-world settings.

By recreating key connections between the cyber and physical dimensions of NCIs, EPIC provides a diverse palette of research applications. Along with vulnerability testing, effect analysis, and validation of different techniques, EPIC also provides tools for closing an important loop in cyber-physical experimentation—the human operator. In the NCI context, human operators help ensure the stability and normal functioning of physical processes. Human operators can interact directly with EPIC as part of an experiment or be simulated by modeling their standard operating procedures. Either way, EPIC can be used to build complex experiments for testing the effect of commands issued by human operators on physical processes or to measure the reaction of human operators to changes in the state of a physical process. EPIC thus brings an important development in experimentation testbeds through accurate experiments that are closer to real NCI operation.

Testbed requirements. A cyber-physical testbed must be compatible with and actively support the scientific method, ensuring the fidelity, repeatability, measurement accuracy, and safe execution of experiments:20

Fidelity. Experimentation testbeds must be able to reproduce as accurately as possible the real system under study. However, in many cases reproducing all details of a real system in an absolute way might not be necessary or even possible. It is thus preferable for an experimental platform to offer an adjustable level of realism, meaning researchers can use the level of detail that is sufficient to test the experimental hypothesis; for example, one experiment might need to reproduce a network at the physical layer using real routers, while for another a software router might be enough. An adjustable level of realism means having the option of using real hardware when it is truly needed and emulators, simulators, or other abstractions when it is not.

Repeatability. This requirement reflects the need to repeat an experiment and obtain the same or statistically consistent results. Repeatable experiments require a controlled environment, but to achieve them researchers must first define the experiment's initial and final state, as well as all events in between. To reproduce a previously stored experiment scenario researchers must be able to set up the experimental platform in the initial state and trigger all necessary events in the right order and time of occurrence.

Measurement accuracy. Experiment monitoring must not interfere with the experiment in a way that might alter its outcome, which requires separating the control, measurement, and experiment processes.

Safe execution. In most cases security experiments assume the presence of adversaries employing malicious software to achieve their goals. The effect of the software is often unpredictable, including disruptive effects on physical systems. Experiments must recreate such instances without jeopardizing the physical testbed itself or the researchers.


Before choosing the simulation step, EPIC researchers must verify the output of real-time simulation reproduces as accurately as possible its counterpart real-world process.


Existing approaches. To assess the state of the art, we performed a literature review and evaluated the features of available testbeds against the previously defined set of requirements (see Table 1).

The U.S. National SCADA TestBed (NSTB) program run by the U.S. Department of Energy23 constitutes a national collaborative laboratory project intended to support industry and government efforts to enhance the cybersecurity of industrial installations, providing a range of facilities to recreate real-world systems, from generation to transmission, including real power-grid components, as well as industry-specific software products.

Although the NSTB helped identify vulnerabilities and harden control-system protection mechanisms, the cost of deploying a similar installation limits its practical application in multi-domain heterogeneous cyber-physical systems.

A collaborative effort between Enel S.p.A., the largest power company in Italy, and the Joint Research Centre, Italy, the scientific and technical arm of the European Union's executive body, led to development in 2009 of a protected environment recreating the physical characteristics of a real turbogas power plant.15 The testbed accurately reproduces both cyber and physical characteristics of a typical power plant, including a scaled-down physical process, typical field networks, process network, security zones, horizontal services, corporate domain, and standard software. The testbed is used to analyze attack scenarios and test countermeasures in a safe environment. Unfortunately, the high fidelity of a pure physical testing environment is offset by poor flexibility and the high cost of maintenance of similar architectures.

The Emulab-based cyber DEfense Technology Experimental Research (DETER) testbed2 provides repeatable security-related experiments, part of the DETER Enabled Federated Testbeds (DEFT) for interconnecting geographically distributed testbeds in cyber-physical experimentation. Within the DEFT consortium DETER was interconnected25 in 2009 with the Virtual Power System Testbed (VPST) developed by the University of Illinois.3 VPST provides simulation capabilities for electricity grids through real-time simulators (such as PowerWorld, a proprietary power-system simulator), extending DETER capabilities to experimentation with cyber-physical systems.

The key difference between EPIC and DEFT is EPIC provides a scalable cost-effective solution for experimenting with multi-domain heterogeneous physical processes (through its software simulators), while DEFT is more focused on a specific infrastructure (such as the power grid). EPIC can also be viewed as complementary to the DEFT initiative since the software simulators developed for EPIC are easily reused through DETER.

The PowerCyber testbed developed at Iowa State University11 in 2013 integrates SCADA-specific hardware and software with real-time digital simulators to simulate electrical grids. It uses virtualization techniques to address scalability and cost, along with the Internet-Scale Event and Attack Generation Environment project, also developed at Iowa State University, for wide-area network emulation. The testbed further provides non-real-time simulation capabilities, primarily for simulating larger systems and for performing state estimation and contingency analysis.

An approach using real components for the physical dimension and partly simulated components for the cyber dimension comes from Tsinghua University in Beijing,8 using real SCADA control servers and the NS-2 network simulator combined with real control hardware and field devices. This testbed was designed to determine the effect of cyberattacks on the SCADA system, including packet forging, compromised access-control mechanisms, and compromised SCADA servers. Although it provides reliable experimental data, since almost everything in it is real, it is unable to support tests on large infrastructures (such as a national electric grid).

Sandia National Laboratory developed the Virtual Control System Environment (VCSE) testbed14 to explore vulnerabilities, train operators, and validate mitigation techniques. VCSE employs computer-network performance-analysis software called OPNET to integrate real devices with simulated networks and PowerWorld as its power system simulator. VCSE also incorporates Umbra, Sandia's patented framework that provides a centralized environment for monitoring and controlling multiple simulated components.


NCIs are highly interconnected and interdependent, meaning a single failure within one NCI could have a cascading effect on others.


The SCADASim framework17 developed at the Royal Melbourne Institute of Technology, Melbourne, Australia, provides predefined modules for building SCADA simulations, employing the OMNeT++ discrete event simulation engine to recreate typical SCADA components while providing an underlying inter-model communications layer. SCADASim supports integration with real devices through modules implementing industry-standard protocols. It can be used to develop a range of SCADA simulations and evaluate the effect of cyberattack scenarios on communications, as well as on the normal functioning of physical processes.

Finally, the system-of-systems approach to testbed development at the Swiss Federal Institute of Technology, Zurich,16 uses the High Level Architecture simulation standard to provide a multi-domain experimentation environment for interconnecting simulators from multiple domains. The testbed supports exploration of what-if scenarios in the context of complex interdependencies between critical infrastructures. Unfortunately, such an approach might be effective for interdependency studies but is unable to recreate the cyber layer accurately.


EPIC Overview

EPIC architecture involves an emulation testbed based on Emulab software20,24 to recreate the cyber dimensions of NCIs and software simulation for the physical dimension. By employing an emulation-based testbed, EPIC ensures fidelity, repeatability, measurement accuracy, and safety of the cyber layer, an approach well established in the field of cybersecurity2 and chosen to overcome the major difficulties of simulating how ICT components behave under attack or during failures. EPIC uses simulation for the physical layer, since it provides an efficient, safe, low-cost approach with fast, accurate analysis capabilities. Although it weakens the fidelity requirement, software simulation enables disruptive experiments on multiple heterogeneous physical processes. Moreover, the literature provides complex models of many physical systems, and the behavior of real physical systems can be reproduced accurately by integrating these models into software simulators; one example is the energy sector, where simulation is so accurate and trusted it is commonly used to aid decision making by operators of transmission systems.

Recreating cyber systems. Emulab24 is an advanced software suite for emulation testbeds, with many private installations worldwide and support from multiple universities. In 2009, we developed our first installation using the Emulab architecture and software, and we have continuously developed and expanded it since (see Figure 1a). Adopting Emulab in EPIC, we automatically and dynamically map physical components (such as servers and switches) to a virtual topology; that is, Emulab software configures the physical topology to emulate the virtual topology as transparently as possible. The basic Emulab architecture consists of two control servers, a pool of physical resources used as experimental nodes (such as generic PCs and routers), and a set of switches interconnecting the nodes. Emulab software provides a Web interface to describe the steps that define the experiment life cycle within the EPIC testbed:

Virtual network topology. EPIC researchers must first create a detailed description of the virtual network topology, the "experiment script"; using a formal language for experiment setup eases recreation of a similar arrangement by other researchers who might want to reproduce our results;

Emulab software. Experiments are instantiated through Emulab software, which automatically reserves and allocates the physical resources needed from the pool of available components;

Experimental nodes. The software configures network switches to recreate virtual topology by connecting experimental nodes through multiple virtual local-area networks, then configures packet capturing of predefined links for monitoring purposes; and

Defined events. Experiment-specific software (such as simulators) is launched automatically through events defined in the experiment script or manually by logging in to each node.
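Emulab experiment scripts of the kind referenced above are written in Emulab's NS-based topology language. The following minimal fragment is illustrative only (node names and link parameters are our own assumptions, not taken from an actual EPIC experiment); it describes two nodes joined by a bandwidth-limited link:

```tcl
set ns [new Simulator]
source tb_compat.tcl

# Two experimental nodes, e.g., a control server and a simulated substation
set ctrl [$ns node]
set ssim [$ns node]

# A 10Mb duplex link with 2ms latency between them
set link0 [$ns duplex-link $ctrl $ssim 10Mb 2ms DropTail]

$ns run
```

Because the topology is captured in a script rather than assembled by hand, the same arrangement can be re-instantiated exactly, which is what makes experiments shareable and repeatable.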

Recreating physical systems. The software units that recreate physical systems within EPIC are outlined in Figure 1b. Physical process models are built in Matlab Simulink, from which the corresponding C code is generated through Simulink Coder. The generated code is then integrated into the software simulation unit (SSim) to enable real-time interaction of simulated processes with the rest of the emulation testbed. At its core, the SSim unit binds the cyber and the physical layers. Viewed from the SSim's perspective, models are the equivalent of black boxes, with inputs and outputs dynamically mapped to an internal memory region. Values written into this region are copied to the model's inputs, while model outputs are copied back to internal memory. This way, EPIC enables experimentation with a range of physical processes without having to provide details of their content. To enable interdependency studies on multiple NCIs, SSim implements a remote procedure call (RPC) interface accessible to other SSim instances. RPCs provide access to the internal memory region and consequently to the model's inputs and outputs, enabling real-time interaction between models. Moreover, EPIC supports industrial protocols (such as Modbus) through Proxy units that translate calls between SSim and other units, including servers in industrial installations.
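The black-box binding just described can be pictured with a small sketch. All names here are hypothetical and the actual SSim unit is written in C#, but the structure is the same: a model's inputs and outputs are mapped to an internal memory region, and an RPC interface gives other units read/write access to that region.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

class SSim:
    """Minimal stand-in for a software simulation unit: the model is a
    black box whose inputs/outputs are mapped to an internal memory region."""
    def __init__(self, model, inputs, outputs):
        self.model = model                      # callable: dict -> dict
        self.inputs, self.outputs = inputs, outputs
        self.memory = {name: 0.0 for name in inputs + outputs}

    def write(self, name, value):               # RPC: set a model input
        self.memory[name] = value
        return True

    def read(self, name):                       # RPC: read a model output
        return self.memory[name]

    def step(self):                             # advance the model one step
        ins = {k: self.memory[k] for k in self.inputs}
        self.memory.update(self.model(ins))     # copy outputs back to memory
        return True

# A toy "model": a valve whose flow tracks its commanded opening
def valve_model(ins):
    return {"flow": 10.0 * ins["opening"]}

sim = SSim(valve_model, inputs=["opening"], outputs=["flow"])
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False, allow_none=True)
for fn in (sim.read, sim.write, sim.step):
    server.register_function(fn)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another unit (e.g., a Proxy) interacts with the model purely via RPC,
# without any knowledge of the model's internals
port = server.server_address[1]
client = ServerProxy(f"http://127.0.0.1:{port}")
client.write("opening", 0.5)
client.step()
print(client.read("flow"))   # 5.0
```

The same read/write interface is what makes interdependency studies possible: one SSim instance can drive the inputs of another over RPC in real time.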

Integrating real-world hardware and software. Since almost all its components are real, EPIC supports any software that usually runs on a regular PC and can integrate practically any hardware equipped with an Ethernet network interface; for instance, the EPIC-based testbed includes real control hardware and real industrial software that enable studies of specific industrial architectures. Interaction between real-world software and EPIC software units is achieved in several ways: First, real software interacts with the simulated models through industrial protocols (such as Modbus). Modbus calls are sent to a Proxy unit that forwards them as RPCs to the SSim unit. Another way to interact is through operating system-level shared memory. Software units can access a shared memory region mapped to the model's inputs/outputs by the SSim unit, as in Figure 1b. This technique enables interaction with software that does not implement RPC or Modbus, providing a simple way to run more complex security studies.
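How a Proxy unit might translate between register-oriented Modbus requests and the name-oriented SSim interface can be sketched as follows. The register map and scaling factors are illustrative assumptions, not EPIC's actual configuration; Modbus holding registers carry 16-bit integers, so analog values must be scaled:

```python
# Hypothetical register map: Modbus holding register address -> (model I/O
# name, integer scaling factor).
REGISTER_MAP = {
    0: ("valve_opening", 1000),   # register 0 holds opening * 1000
    1: ("flow_rate", 100),        # register 1 holds flow * 100
}

class Proxy:
    """Translates Modbus register reads/writes into SSim RPC calls."""
    def __init__(self, ssim_read, ssim_write):
        self.read_rpc, self.write_rpc = ssim_read, ssim_write

    def write_register(self, addr, raw):
        """Handle a Modbus 'write single register' from a SCADA master."""
        name, scale = REGISTER_MAP[addr]
        self.write_rpc(name, raw / scale)

    def read_register(self, addr):
        """Handle a Modbus 'read holding register' from a SCADA master."""
        name, scale = REGISTER_MAP[addr]
        return int(round(self.read_rpc(name) * scale))

# Stand-in for the SSim RPC endpoint: a plain dict of model I/O values
model_io = {"valve_opening": 0.0, "flow_rate": 3.21}
proxy = Proxy(model_io.get, model_io.__setitem__)

proxy.write_register(0, 750)          # master commands a 0.750 valve opening
print(model_io["valve_opening"])      # 0.75
print(proxy.read_register(1))         # 321
```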

Real-time simulation on multitasking OS. With real-time simulation, models run in a discrete time domain linked closely to the clock of the operating system, meaning the simulated model runs at the same rate as the actual physical system. EPIC uses generic PCs with multitasking operating systems to run real-time software simulation units. Despite major advantages, our choice of Simulink Coder to produce the simulators imposes several constraints on the simulated models, including model-execution rate, or "simulation step." The model's internal dynamics limit the range of possible simulation steps. Before choosing a simulation step, EPIC researchers must verify the output of real-time simulation reproduces as accurately as possible its counterpart real-world process. In parallel, the model execution time on a specific computer is limited by the model's complexity and the host's processing power. In general, if the model's execution time exceeds its simulation step, real-time simulation is not possible.
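The real-time constraint above can be made concrete with a sketch (our own illustration, not EPIC code): each model step is scheduled against the operating system's monotonic clock rather than by sleeping a fixed amount, so small timing errors do not accumulate, and the 118-bus figures reported later in the article (155ms execution time against a 24ms step) fail the feasibility test.

```python
import time

def realtime_feasible(exec_time_s, step_s):
    """Real-time simulation is possible only if one model step executes
    faster than the simulation step itself."""
    return exec_time_s < step_s

# The IEEE 118-bus case discussed in the text: 155ms execution, 24ms step
assert not realtime_feasible(0.155, 0.024)

def run_realtime(model_step, step_s, duration_s):
    """Drift-free fixed-step loop linked to the OS clock."""
    t0 = time.monotonic()
    n = 0
    while n * step_s < duration_s:
        model_step()                      # advance the model one step
        n += 1
        delay = (t0 + n * step_s) - time.monotonic()
        if delay > 0:
            time.sleep(delay)             # wait out the rest of the step
        # if delay <= 0 the model overran its step and real time is lost

steps = []
run_realtime(lambda: steps.append(time.monotonic()), step_s=0.01, duration_s=0.1)
print(len(steps))   # 10
```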

Testing the limitations of software simulation in EPIC, we have experimented with several physical processes (see Figure 2). Here, we mention small-scale processes (such as Bell and Åström's oil-fired 160MW electric power plant1 based on the Sydsvenska Kraft AB plant in Malmö, Sweden, and the Tennessee-Eastman chemical process,9 likewise based on a real process), though Downs and Vogel9 introduced minor changes to mask the identity of reactants and products. The railway systems used throughout our experiments are based on the train models proposed by Rios and Ramos18 that account for several dimensions of real transportation systems (such as weight, speed, acceleration, deceleration, and power consumption).

Then there is the IEEE suite of power grid systems22 used with EPIC. The nine-bus test case is the Western System Coordinating Council's three-machine nine-bus system, and the 30-bus, 39-bus, and 118-bus test cases represent a portion of the system operated by the American Electric Power utility, Columbus, OH, in the early 1960s. These test cases constitute realistic models long established in the power-systems community, providing a range of power-system configurations.

Data in Figure 2 reflects how real-time software simulation is well suited to small- and mid-scale models. However, software simulation is limited by CPU speed and model size; for instance, the IEEE 118 bus system is a complex model that includes 54 generators with frequency of 50Hz and a maximum simulation step, or maximum model execution rate, of approximately 24ms. Since the model's execution time on a 2.8GHz CPU is 155ms, real-time simulation is not possible in this case.

This is a well-known limitation of real-time software simulation but can be addressed in several ways; for instance, researchers can leverage parallel processing techniques (such as GPU computing) or dedicated hardware simulators that are more powerful and specifically designed for simulations. However, these approaches are still quite expensive and, in a multi-model environment, could render the cost of the cyber-physical testbed prohibitive.

Implementation details. Installation of EPIC at the European Commission's Joint Research Centre in Ispra, Italy, consists of 120 PCs and approximately 100 virtual machines massively interconnected through two stacks of network switches. In addition, carrier-grade routers (such as Cisco 6503) and industrial-control hardware and software (such as ABB AC 800M control hardware, including Modbus interfaces with control server and human-machine interface software from ABB) are available as experimental resources. We have also developed software units (such as SSim and Proxy) in C# and ported and tested them on Unix-based systems with the help of the Mono platform, enabling cross-platform deployment of C# applications.


Scalability and Applicability

EPIC's ability to recreate both the cyber and the physical dimensions of NCIs provides a spectrum of experimentation options addressing critical infrastructures. Here, we offer an overview of typical experiments conducted with EPIC, along with a full experimental scenario:

Typical experiments. EPIC is used concurrently by many researchers for developing, testing, and validating a range of concepts, prototypes, and tools (see Table 2); see also scientific reports and papers on the EPIC website http://ipsc.jrc.ec.europa.eu/?id=693. The first experiment covered in this article weighed the effect of network parameters on cyberattacks targeting NCIs. We looked into the effect of network delay, packet loss, background traffic, and network segmentation on a spoofing cyberattack in which an adversary is able to send legitimate commands to process controllers. We showed that while communications parameters have an insignificant effect on cyberattacks, physical process-aware network segmentation can yield more resilient systems.


As a general rule, if the model's execution time exceeds its simulation step, real-time simulation is not possible.


The second experiment showed how such studies help validate the effectiveness of newly proposed protection mechanisms. Using EPIC, we recreated a complex setting, including real networks, protocols, hosts, and routers, hence a realistic environment in which to launch distributed denial-of-service (DDoS) attacks together with spoofing attacks on critical infrastructure assets. These attacks contributed to the validation of a novel anomaly-detection system able to efficiently detect anomalies in both the cyber and the physical dimensions of NCIs.

The third experiment focused on the human operator, closing a significant loop in cyber-physical experimentation, including a coordinated cyberattack in which the attacker prevents the normal remote operation of several substations, or "reduction of load," by blocking communications. Consequently, several substations exhibited a significant drop of voltage below nominal operating levels. The experiment showed operational decisions can be the difference between a complete breakdown and system survival, and that collaborations between operators can limit the propagation of cyber disturbances.

The fourth experiment recreated the well-known 2008 YouTube border gateway-protocol-route-hijacking incident,19 analyzing several hypothetical scenarios. We developed an abstraction of Internet backbone networks in Europe and recreated the incident by replaying real traffic traces. The results highlighted the importance of tools and mechanisms for fast discovery of border-gateway-protocol-hijacking events and, most important, well-trained operators able to communicate over a trusted medium.

Together, these four experiments represent only a fraction of the many directions and applications in which EPIC has proved itself a modern scientific instrument. Moreover, its use is not limited to disruptive experiments but can also take on educational and preparedness activities (such as an environment for cybersecurity exercises).

Illustrative experiment. Here, we illustrate EPIC applicability by exploring the consequences ICT disruptions can have on a critical infrastructure (such as a national power grid). We consider the hypothetical scenario of a cyberattack—specifically a DDoS attack—causing significant telecommunication service degradation propagating across critical infrastructures.

Experiment setup. We thus recreated a typical architecture in which the power grid is controlled remotely (see Figure 3a). Site A located on the operator's premises runs a simplified model of an energy-management system (EMS)21 to ensure voltage stability. The EMS continuously monitors and adjusts the operational parameters of the power grid model running at Site B located remotely (such as in an electrical substation). The EMS sends commands to emulated control hardware or through proxy units that provide access to the power-grid model inputs and outputs. Communications are provided through the Modbus protocol.
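One EMS supervision cycle of the kind described above can be sketched as follows. The per-unit voltage band, bus numbers, and load shedding as the sole corrective action are illustrative assumptions, not the actual EMS model:21

```python
# Hypothetical EMS supervision step: poll bus voltages through the proxy
# and shed load at any bus that drops below its lower operating limit.
V_LOW = 0.95        # lower limit of the normal band, in per-unit voltage

def ems_step(read_voltage, shed_load, buses):
    """One EMS monitoring cycle; returns the buses where load was shed."""
    shed = []
    for bus in buses:
        if read_voltage(bus) < V_LOW:
            shed_load(bus)      # command sent toward the control hardware
            shed.append(bus)
    return shed

# Toy stand-in for the simulated grid: bus 7 is sagging below the band
voltages = {3: 1.00, 7: 0.93, 12: 0.97}
commands = []
result = ems_step(voltages.get, commands.append, buses=[3, 7, 12])
print(result)     # [7]
```

In the experiment, it is precisely this loop that breaks when the communications path between Site A and Site B is degraded: the read and command calls never reach the remote substation.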

The scientific community uses IEEE electrical-grid models extensively in similar studies since they encapsulate (accurately) the basic characteristics of real infrastructures. We adopted the IEEE 39-bus New England system, including 39 substations and 10 generators. The daily load imposed on our system derives from real data,13 and EMS intervention is required to keep the grid stable.

To recreate a realistic communications infrastructure between the EMS and the power-grid simulator, we assumed the service provider was using a Multi Protocol Label Switching (MPLS) network; telecom operators use the MPLS protocol to replace older implementations based on frame relay and asynchronous transfer mode.12

Using our Emulab installation, we created a minimal MPLS network with four Cisco 6503 routers on which we defined two MPLS virtual private networks (VPNs). VPN 1 functioned as a protected virtual circuit between Site A and Site B, an approach telecom operators usually follow to isolate customer traffic. Since telecom operators route diverse traffic (such as public Internet traffic) through the same MPLS cloud, we used VPN 2 to create a virtual circuit between two different "public" regions.

Telecom disruption and propagation to the power grid. We then launched a bandwidth-consumption DDoS attack in VPN 2 and measured its effect on the power-grid operator's virtual circuit in VPN 1. The attack severely degraded the grid operator's private circuit. The EMS consequently lost control of the power grid and was unable to send commands that could restore stability. Once an attack begins the grid is able to run approximately seven minutes without intervention (see Figure 3b). However, after those seven minutes the changes in the daily load require the intervention of load-shedding algorithms implemented in the EMS. Since the commands from the EMS cannot reach the emulated control hardware, the voltages in the different segments of the grid inevitably begin to collapse.

Shortly after an attack begins, the model becomes highly unstable, exhibiting large oscillations that are difficult to map to reality. One major limitation of simulation-based studies is researchers can reason only within the model's boundaries. However, voltage collapse is a clear indication of grid instability, possibly forcing operators to rebuild an entire grid. For our EMS-related security study it was enough to verify the attacker could redirect the system outside normal operating limits. If experiments go beyond these limits then researchers must likewise extend the models of physical systems to cover extreme and unstable conditions or extend the cyber-physical testbed through real physical devices, assuming it is feasible, economically cost effective, and safe.

A look at reality. Most telecom operators limit the interference between separate VPNs; for example, with deployment of quality of service (QoS) in the MPLS network an attack on the public Internet barely affects the private traffic of other telecom customers. We validated this claim by running our EMS-related experiment after activating QoS with packet prioritization (a feature also used to implement packet prioritization in industrial communications) in the MPLS cloud. The only measurable effect was a slight increase of packet round-trip times (by 1ms–2ms), a tolerable delay if we apply the IEEE 1646-2004 standard for communication delays in substation automation, meaning high-speed messages must be delivered in the 2ms–10ms range.

However, such measures, delivered through policies and regulation, are not compulsory. Our EMS-related experiment demonstrated the severe risk if the measures are not implemented, highlighting the potential effect of ICT disruption on a range of physical systems. Moreover, by designing and conducting experiments based on real incidents we were able to explore a number of what-if scenarios. For example, we investigated a 2004 incident that affected Rome's remotely controlled power grid managed through a public telecommunications network.4 Communications between remote sites were disrupted due to a broken water pipe flooding the server room of a telecom operator, short-circuiting critical hardware. Power-grid operators were unable to monitor or control the remote site. Fortunately, none of the disturbances was harmful, and the grid remained stable. Nevertheless, as shown in our experiments on EPIC, a change in the balance between generated and consumed energy would have serious consequences on the electrical grid. In Rome, with a population of 2.5 million in 2004, it could have caused blackouts throughout the city and affected other critical infrastructure (such as transportation and health care).


Conclusion

Combining an Emulab-based testbed with real-time software simulators, EPIC takes a novel approach to cybersecurity studies involving multiple heterogeneous NCIs. EPIC can be viewed as an instance of a new class of scientific instrument—cyber-physical testbeds—suitable for assessing cyberthreats against physical infrastructures. It supports interesting studies in many interdependent critical infrastructure sectors with heterogeneous systems (such as transportation, chemical manufacturing, and power grids); to explore several, see http://ipsc.jrc.ec.europa.eu/?id=691.


References

1. Bell, R. and Åström, K. Dynamic Models for Boiler-Turbine Alternator Units: Data Logs and Parameter Estimation for a 160MW Unit. Technical Report TFRT-3192. Lund Institute of Technology, Lund, Sweden, 1987.

2. Benzel, T., Braden, R., Kim, D., Neuman, C., Joseph, A., Sklower, K., Ostrenga, R., and Schwab, S. Experience with DETER: A testbed for security research. In Proceedings of the International Conference on Testbeds and Research Infrastructures for the Development of Networks and Communities (Barcelona, Mar. 1–3). IEEE, New York, 2006, 379–388.

3. Bergman, D.C., Jin, D., Nicol, D.M., and Yardley, T. The virtual power system testbed and inter-testbed integration. In Proceedings of the Second Conference on Cyber Security Experimentation and Test (Montreal, Aug. 10–14). USENIX Association, Berkeley, CA, 2009, 5–5.

4. Bobbio, A., Bonanni, G., Ciancamerla, E., Clemente, R., Iacomini, A., Minichino, M., Scarlatti, A., Terruggia, R., and Zendri, E. Unavailability of critical SCADA communication links interconnecting a power grid and a telco network. Reliability Engineering and System Safety 95, 12 (Dec. 2010), 1345–1357.

5. Charette, R. IT Hiccups of the week: Southwest Airlines computer failure grounded all flights. IEEE Spectrum Risk Factor Blog (June 2013); http://spectrum.ieee.org/riskfactor/computing/it/it-hiccups-of-the-week-southwest-airlines-computer-failure-grounded-all-flights

6. Chen, T. and Abu-Nimeh, S. Lessons from Stuxnet. Computer 44, 4 (Apr. 2011), 91–93.

7. Chertov, R., Fahmy, S., and Shroff, N.B. Fidelity of network simulation and emulation: A case study of TCP-targeted denial-of-service attacks. ACM Transactions on Modeling and Computer Simulation 19, 1 (Jan. 2009), 4:1–4:29.

8. Chunlei, W., Lan, F., and Yiqi, D. A simulation environment for SCADA security analysis and assessment. In Proceedings of the 2010 International Conference on Measuring Technology and Mechatronics Automation (Changsha City, China, Mar. 13–14). IEEE, New York, 2010, 342–347.

9. Downs, J. and Vogel, E. A plantwide industrial process control problem. Computers & Chemical Engineering 17, 3 (Mar. 1993), 245–255.

10. Duggan, D. Penetration Testing of Industrial Control Systems. Technical Report SAND2005-2846P. Sandia National Laboratories, Albuquerque, NM, 2005.

11. Hahn, A., Ashok, A., Sridhar, S., and Govindarasu, M. Cyber-physical security testbeds: Architecture, application, and evaluation for smart grid. IEEE Transactions on the Smart Grid 4, 2 (June 2013), 847–855.

12. IBM and Cisco. Cisco and IBM provide high-voltage grid operator with increased reliability and manageability of its telecommunication infrastructure. IBM Case Studies, 2007; https://www.cisco.com/web/partners/pr67/downloads/756/partnership/ibm/success/terna_success_story.pdf

13. Manera, M. and Marzullo, A. Modelling the load curve of aggregate electricity consumption using principal components. Environmental Modeling Software 20, 11 (Nov. 2005), 1389–1400.

14. McDonald, M.J., Mulder, J., Richardson, B.T., Cassidy, R.H., Chavez, A., Pattengale, N.D., Pollock, G.M., Urrea, J.M., Schwartz, M.D., Atkins, W.D., and Halbgewachs, R.D. Modeling and Simulation for Cyber-Physical System Security Research, Development, and Applications. Technical Report SAND2010-0568. Sandia National Laboratories, Albuquerque, NM, 2010.

15. Nai Fovino, I., Masera, M., Guidi, L., and Carpi, G. An experimental platform for assessing SCADA vulnerabilities and countermeasures in power plants. In Proceedings of the Third Conference on Human System Interactions (Rzeszow, Poland, May 13–15). IEEE, New York, 2010, 679–686.

16. Nan, C., Eusgeld, I., and Kröger, W. Analyzing vulnerabilities between SCADA system and SUC due to interdependencies. Reliability Engineering & System Safety 113 (May 2013), 76–93.

17. Queiroz, C., Mahmood, A., and Tari, Z. SCADASim: A framework for building SCADA simulations. IEEE Transactions on Smart Grid 2, 4 (Sept. 2011), 589–597.

18. Ríos, M.A. and Ramos, G. Power system modelling for urban mass-transportation systems. In Infrastructure Design, Signaling and Security in Railway. InTech, Rijeka, Croatia, 2012, 179–202.

19. RIPE Network Coordination Centre. YouTube Hijacking: A RIPE NCC RIS Case Study, 2008; http://www.ripe.net/internet-coordination/news/industry-developments/youtube-hijacking-a-ripe-ncc-ris-case-study

20. Siaterlis, C., Garcia, A., and Genge, B. On the use of Emulab testbeds for scientifically rigorous experiments. IEEE Communications Surveys and Tutorials 15, 2 (Second Quarter 2013), 929–942.

21. Tuan, T., Fandino, J., Hadjsaid, N., Sabonnadiere, J., and Vu, H. Emergency load shedding to avoid risks of voltage instability using indicators. IEEE Transactions on Power Systems 9, 1 (Feb. 1994), 341–351.

22. University of Washington. Power Systems Test Case Archive. Electrical Engineering Department, Seattle, 2012; http://www.ee.washington.edu/research/pstca/

23. U.S. Department of Energy. National SCADA Test Bed. Washington, D.C., 2009; http://energy.gov/sites/prod/files/oeprod/DocumentsandMedia/NSTB_Fact_Sheet_FINAL_09-16-09.pdf

24. White, B., Lepreau, J., Stoller, L., Ricci, R., Guruprasad, S., Newbold, M., Hibler, M., Barb, C., and Joglekar, A. An integrated experimental environment for distributed systems and networks. In Proceedings of the Fifth Symposium on Operating Systems Design and Implementation (Boston, Dec. 9–11). USENIX Association, Berkeley, CA, 2002, 255–270.

25. Yardley, T., Berthier, R., Nicol, D., and Sanders, W. Smart grid protocol testing through cyber-physical testbeds. In Proceedings of the Fourth IEEE PES Innovative Smart Grid Technologies Conference (Washington, D.C., Feb. 24–27). IEEE Power and Energy Society, Piscataway, NJ, 2013, 1–6.


Authors

Christos Siaterlis ([email protected]) is a project officer in the Institute for the Protection and Security of the Citizen of the European Commission's Joint Research Centre, Ispra, Italy.

Béla Genge ([email protected]) is a Marie Curie postdoctoral fellow and a member of the Informatics Department at Petru Maior University of Tîrgu Mureş, Mureş, Romania.


Figures

Figure 1. EPIC testbed architecture: (a) overview and experimentation steps; and (b) software modules, including SSims and proxy units.

Figure 2. Execution time on a 2.8GHz CPU and limitations of various models; EPIC enables experimentation with power plants, chemical plants, railway systems, and power grid models from a suite of standard IEEE models.

Figure 3. Effect of a cyberattack on critical infrastructures: (a) experimental setting, with three physical SSim units, an energy-management system simulator, attacker nodes, and two virtual circuits offered by a telecom operator; VPN 1 = grid operator's private circuit, VPN 2 = public Internet; and (b) effect on voltage stability.


Tables

Table 1. Testbed features and cost-effectiveness compared: ••• = strong support; •• = moderate support; and • = weak support for a specific feature.

Table 2. Typical experiments performed through EPIC.



©2014 ACM  0001-0782/14/06

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from [email protected] or fax (212) 869-0481.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2014 ACM, Inc.


 
