
Communications of the ACM

Adaptive complex enterprises

Test Beds For Complex Systems


Global competition requires major manufacturers to increase productivity, quality, and responsiveness while simultaneously decreasing costs and time to market. To achieve these seemingly conflicting requirements, many manufacturers have expanded the outsourcing trends that began in the late 1980s. Now design, engineering, and logistics, as well as production, are outsourced to companies located all over the world.

This expansion has resulted in a new organizational structure known as the value chain. The members of this chain form a globally distributed system of companies that rely completely on the timely and error-free exchange of information. The availability of this information is crucial for making good decisions at every level in the chain. Many of these decisions are formulated as optimization problems, which are solved using commercial software applications. These applications assume that required inputs are either stored locally or available from other members. They further assume that inputs are current, accurate, and meaningful. Current computing technologies can assure information currency, but they cannot assure accuracy and meaning.


Information Importance

The information needed to make decisions must be conveyed in physical symbols like marks on paper, sounds, and electrical pulses. Nevertheless, information has an effect on system performance that is not explainable by its physical properties alone. That effect is related to the organization of the symbols, the meaning ascribed to that organization, and the change in system performance that comes from understanding and acting on that meaning [6].

In the earliest physical systems, the carriers of information were, for example, mechanical links in steam engine governors, punched holes in Jacquard weaving looms, or electrical connectivity in thermostats. The performance of these simple systems could be observed, measured, and quantified directly. Consequently, it was possible to establish a mathematical link between the physical meaning of the information and the physical performance of the system. This link allowed predictions to be made about the performance of the system. Moreover, when something went wrong—there was a disturbance—it was usually easy to determine the extent of the problem and the likely remedy.

Establishing such a direct link between information meaning and system performance in a value chain is much more difficult because it is not a simple system. A value chain represents a complex system in which every decision of every component impacts, in complicated and sometimes unpredictable ways, total system performance. Despite the qualitative appreciation of the concept of complex systems, there is very limited understanding of their behavior and evolution over time, and the relationships between the performance of the parts and the performance of the whole. This means it is very difficult to predict the long-term performance of value chains.





Defining a Complex Manufacturing System

The terms complexity and manufacturing have been linked for a long time. Computational or algorithmic complexity is often used for classifying manufacturing planning and control problems [4]. However, computational complexity does not capture all the aspects of complexity in a manufacturing system. In fact, the question, "does a system fundamentally change or become simpler if a better algorithm is invented for solving a particular problem?" suggests the need for system-related as well as algorithm-related complexity measures. Also, computational complexity does not necessarily relate to system performance. For example, consider two heuristics used for solving the n job/m machine, job-shop scheduling problem, which is known to be NP-hard. The quality of the solutions generated by these two heuristics cannot be determined solely by comparing their computational complexities.
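To make that last point concrete, the following Python sketch (our own illustration, not drawn from the scheduling literature) runs the same greedy dispatching procedure on a small, randomly generated job-shop instance with two priority rules: shortest and longest processing time first. Both rules have identical computational complexity, yet they typically produce schedules of different quality, so comparing complexities alone says nothing about which solution is better.

```python
import random

def dispatch(jobs, rule):
    """Greedy list scheduling for a job shop.

    jobs: each job is a list of (machine, processing_time) operations that
    must be performed in the given order.
    rule: priority key over a job's next operation; the candidate with the
    smallest key is scheduled next.
    Returns the makespan of the resulting schedule.
    """
    next_op = [0] * len(jobs)       # index of each job's next unscheduled operation
    job_ready = [0] * len(jobs)     # earliest time each job's next operation may start
    machine_ready = {}              # time at which each machine becomes free
    makespan, remaining = 0, sum(len(job) for job in jobs)
    while remaining:
        candidates = [j for j in range(len(jobs)) if next_op[j] < len(jobs[j])]
        j = min(candidates, key=lambda c: rule(jobs[c][next_op[c]]))
        machine, proc = jobs[j][next_op[j]]
        start = max(job_ready[j], machine_ready.get(machine, 0))
        job_ready[j] = machine_ready[machine] = start + proc
        makespan = max(makespan, start + proc)
        next_op[j] += 1
        remaining -= 1
    return makespan

random.seed(1)
jobs = [[(m, random.randint(1, 9)) for m in random.sample(range(5), 5)]
        for _ in range(6)]                          # 6 jobs x 5 machines
print(dispatch(jobs, rule=lambda op: op[1]))        # shortest processing time first
print(dispatch(jobs, rule=lambda op: -op[1]))       # longest processing time first
```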

In recent years, several researchers have attempted to capture the essence of complexity in manufacturing systems using information-theoretic metrics. For example, Deshmukh [3] characterizes complexity in terms of structure and behavior. Static complexity can be viewed as a function of the structure of the system, connectivity patterns, variety of components, and the strengths of interactions. Dynamic complexity is concerned with unpredictability in the behavior of the system over time. Both of these characterizations are applicable to a value chain.
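As a rough illustration of the information-theoretic flavor of such metrics (a simplified proxy, not Deshmukh's exact formulation), the static complexity contribution of a single resource can be approximated by the Shannon entropy of its observed state distribution: a machine that is almost always busy takes fewer bits to describe than one that wanders among many states.

```python
import math
from collections import Counter

def state_entropy(observations):
    """Shannon entropy (bits) of an observed sequence of resource states."""
    counts = Counter(observations)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical state logs for two machines, sampled at regular intervals.
machine_a = ["busy"] * 90 + ["idle"] * 10                                   # predictable
machine_b = ["busy"] * 40 + ["idle"] * 30 + ["blocked"] * 20 + ["down"] * 10

print(state_entropy(machine_a))   # low entropy: behavior is easy to describe
print(state_entropy(machine_b))   # higher entropy: more complex behavior
# A system-level measure could aggregate these entropies over all resources.
```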


Modeling Value Chains

The usual approach to understanding any system is to build models. A value chain can be modeled as a highly interconnected, layered network of informational and physical systems. Typically, the layering occurs in both the temporal domain and the spatial domain. The bottom layer most often contains physical processes such as machining, inspection, assembly, and transportation. These processes are modeled using continuous-time, continuous-state techniques such as physics-based differential equations and computer-based simulations and are subject to the second law of thermodynamics. Hence, without any external intelligence to guide their evolution, entropy will increase and these processes will go out of control over time.
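A minimal sketch of that last claim, under purely illustrative assumptions (a random-walk-with-drift model of something like tool wear, and a simple proportional correction): left alone, the process variable wanders outside any fixed tolerance band; with periodic measurement and compensation it stays in control.

```python
import random

random.seed(0)

def simulate(correct, steps=500, drift=0.02, noise=0.05, gain=0.5):
    """Random-walk-with-drift model of a process variable (e.g., dimensional error)."""
    x, worst = 0.0, 0.0
    for _ in range(steps):
        x += drift + random.gauss(0.0, noise)   # wear and disturbances accumulate
        if correct:
            x -= gain * x                       # periodic measurement plus compensation
        worst = max(worst, abs(x))
    return worst

print(simulate(correct=False))   # drifts far outside a +/-1.0 tolerance band
print(simulate(correct=True))    # stays near the target when actively controlled
```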

As we move up the layers, we no longer deal directly with physical systems. Instead, we deal with decision making and information systems that affect those physical systems, but on a longer-term basis. The models for these systems have many characteristics including discrete or continuous time, discrete or continuous state, linear or nonlinear behavior, and deterministic or nondeterministic parameters. There are several, often conflicting, quantitative performance measures and the techniques are implemented in a number of software applications such as linear programming, demand forecasting, system dynamics, discrete-event simulation, and supply chain management.

These applications also produce plans implemented in other, lower-layer software applications—demands lead to production plans, which lead to schedules, which lead to sequences, and so on. These plans are based on information that has a high degree of uncertainty. Some of this uncertainty arises because of the influence of the second law on the bottom-layer processes. Some of it arises because of the uncertain nature of predictions (associated with demand projections, priority orders, and material arrivals, to name a few) at higher layers.
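The following Monte Carlo sketch, with invented numbers, shows how uncertainty at the top of this chain is inherited by everything below it: a noisy demand forecast is converted into a production plan and then into a schedule load, and the spread in required shifts comes directly from the forecast error.

```python
import random, statistics

random.seed(2)

HOURS_PER_UNIT = 0.8      # assumed routing: machine-hours needed per unit
SHIFT_HOURS = 480         # assumed capacity of one work center per planning period

def plan_from_demand(demand):
    """Toy planning chain: demand -> production plan -> schedule load."""
    plan_qty = demand * 1.05                   # plan includes a 5% safety allowance
    machine_hours = plan_qty * HOURS_PER_UNIT  # schedule load implied by the plan
    return machine_hours / SHIFT_HOURS         # number of shifts to sequence

# Uncertain demand forecast: mean 1,000 units with substantial forecast error.
samples = [plan_from_demand(random.gauss(1000, 150)) for _ in range(10_000)]
print(statistics.mean(samples), statistics.stdev(samples))
# The spread in required shifts is inherited directly from the demand uncertainty,
# before any shop-floor (second-law) disturbances are even considered.
```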

As noted previously, when we move up and down various layers of a complex system like a value chain, the models change. The characteristics of the information associated with these models change as well. At the bottom, information is very detailed, relatively simple, and deterministic; at the top, it is very general, more complicated, and highly uncertain. No one knows exactly how or why these changes take place. Moreover, at every layer, there is some influence of entropy from both the second law and information uncertainty. At the bottom, the second law dominates. At the top, information uncertainty dominates. We have a very good idea of how to measure and control the effects of the second law on physical system performance. We have almost no idea how to measure and control the effects of information uncertainty on system performance.

To deal with this latter problem and to better understand how the physical and informational relate to and impact one another, we propose some new research directions. Before discussing our proposals, we describe some of the traditional approaches and why they do not work for complex systems such as value chains.


Traditional Approach

Once we have good models of all the system components, we can make decisions that influence system performance. The traditional approach derives from the philosophy of Descartes. It is best described as reductionist and involves three steps:

  • Decompose the original global problem into independent, local subproblems;
  • Find solutions to each local subproblem, ignoring all others; and
  • Recompose these local solutions to get the solution to the global problem.

Researchers in operations research have spent the last 50 years developing techniques to implement these steps [7] and have proposed sophisticated decomposition strategies based on the principles of mathematical programming, graph theory, and control theory. Thousands of algorithms and heuristics that produce optimal or near-optimal solutions to the local subproblems have been generated. Researchers have not been, however, as successful with recomposition.

Even for a simple, two-level, hierarchical decomposition, a number of theoretical conditions must hold before the local solutions can be recomposed into the global solution. These conditions are related to the relative independence of the subproblems through what are commonly called coupling constraints [7]. The more coupling constraints there are, the less likely the conditions will hold.

In a value chain, there are multiple levels of decomposition, not necessarily hierarchical, with complicated coupling constraints at every level. These constraints arise from the vast amounts of information sharing and material exchange among the members of the chain. The result is that the theoretical conditions necessary for recomposition will probably never hold for any decision in a value chain. Consequently, new research directions are necessary.
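A toy numerical illustration of why recomposition fails (our own example, not a value-chain model): two subproblems solved independently are each optimal in isolation, but their recomposed solution violates the coupling constraint they share.

```python
# Coupling constraint shared by the two subproblems: x1 + x2 <= 4
# (think of a shared transport or assembly capacity).
CAPACITY = 4

def solve_local(upper_bound):
    """Local subproblem: maximize x subject only to its own bound."""
    return upper_bound

x1 = solve_local(3)   # plant 1, ignoring plant 2, plans to ship 3
x2 = solve_local(3)   # plant 2, ignoring plant 1, plans to ship 3

recomposed = x1 + x2
print(recomposed, "feasible" if recomposed <= CAPACITY else "violates coupling constraint")
# Output: 6 violates coupling constraint -- the independent local optima cannot
# simply be recomposed; some coordination (prices, quotas, negotiation) is needed.
```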


New Research Directions in Decision Making

We noted previously that multiscale modeling on spatial, control, and temporal scales is required. We need to develop a science of multiscale analysis of enterprises. Questions to be addressed include: How do we identify higher-level structures and phenomena based on micro-level interactions? What would be the calculus for understanding interactions among higher-level structures? How do we zoom in or out of the modeling framework while introducing minimal error?

Planning is useful over longer time horizons; it is not, however, as important over short time scales. The availability of information about the state of the system (via RFID, sensor networks, wireless communications, and video image analysis) is making real-time control of enterprises possible. Mass customization and rapid response to customers have created the need for rapid reconfiguration and reorganization. Techniques for analyzing the trade-offs among available time, computation, information, communication, and decision quality must be developed. The notion of stability and robustness, in addition to performance optimization, may become a key factor in these systems.
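One simple way to study the computation-versus-quality trade-off is with an anytime procedure that can be stopped at any budget. The sketch below (an illustrative single-machine tardiness problem with made-up data, not a proposed technique) shows solution quality improving as more computation is allowed.

```python
import random

random.seed(3)

# Toy single-machine sequencing problem: minimize total tardiness.
jobs = [(random.randint(1, 10), random.randint(5, 60)) for _ in range(30)]  # (proc, due)

def total_tardiness(seq):
    t, tard = 0, 0
    for proc, due in seq:
        t += proc
        tard += max(0, t - due)
    return tard

def anytime_search(budget):
    """Random pairwise-swap improvement, stopped after `budget` evaluations."""
    seq = jobs[:]
    best = total_tardiness(seq)
    for _ in range(budget):
        i, j = random.randrange(len(seq)), random.randrange(len(seq))
        seq[i], seq[j] = seq[j], seq[i]
        cost = total_tardiness(seq)
        if cost <= best:
            best = cost
        else:
            seq[i], seq[j] = seq[j], seq[i]   # undo the non-improving swap
    return best

for budget in (10, 100, 1000, 10000):
    print(budget, anytime_search(budget))     # quality improves as computation grows
```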

Selling a single product is not the goal for most manufacturing organizations. Rather, most enterprises sell a complete portfolio of products and provide related services and operations over the entire life cycle of those products. So, from the standpoint of enterprise performance, how do we create utility portfolios?

It is likely that suppliers will be partners to many major companies, which are often simultaneously in competition with one another. How do we ensure "best" results from decentralized systems composed of entities competing and cooperating at the same time? What are the infrastructure needs for such global enterprises? We need to identify incentive schemes that work in different settings and interoperability/security standards that work for the entire global enterprise.


New Research Directions in Information Uncertainty

Early in his landmark book, Herbert Simon said that in complex systems, "it is the organization of the components, not their physical properties, that determines behavior" [10]. We interpret this to mean that we must focus on the role and impact of information if we want to improve system performance. Researchers have proposed approaches and measures for dealing with various aspects of information [1, 2]. Two pioneering works relate information to entropy [9] or negentropy [11].

A number of unanswered questions remain related to the meaning of information. How do we quantify information meaning? How do we measure its accuracy and relevance to the current state of the system—in other words—how important is this piece of information? How do we value information, and how much should we pay for it? How much information is necessary to make a decision? How do we hedge against risk with additional information? How should information be traded in enterprises? How do we express age, relevance, and accuracy of a piece of information?
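For the question of how much to pay for a piece of information, decision theory offers one classical answer when probabilities are available: the expected value of information is the expected payoff of deciding with it minus the expected payoff of deciding without it. A minimal sketch with invented demand and cost figures:

```python
PRICE, COST = 10.0, 6.0
SCENARIOS = [(0.5, 60), (0.5, 140)]   # (probability, demand) -- assumed numbers
QUANTITIES = [60, 100, 140]           # candidate production quantities

def profit(q, d):
    return PRICE * min(q, d) - COST * q

# Decision without the information: one quantity maximizing expected profit.
best_without = max(sum(p * profit(q, d) for p, d in SCENARIOS) for q in QUANTITIES)

# Decision with perfect demand information: the best quantity in each scenario.
best_with = sum(p * max(profit(q, d) for q in QUANTITIES) for p, d in SCENARIOS)

print(best_without, best_with, best_with - best_without)
# The difference is the most one should rationally pay for (perfect) demand information.
```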

A growing number of AI researchers argue that probability does not always provide a basis for measuring information uncertainty because its sources are not related to sampling. For information objects, uncertainty is often linked with language and described by terms such as vagueness, nonspecificity, dissonance, and confusion. In addition, there are many cases where samples from a population of information objects do not exist because there is only one object. Consequently, AI researchers have proposed alternative theories and computational approaches for computing uncertainties and updating them as new information becomes available.

Fuzzy theory, possibility theory, belief theory, and Bayesian networks are commonly used to measure such uncertainties and to make inferences [8]. While inference is important, it is not the only concern. From the performance perspective, we need to know how to propagate uncertainties around the network, combine uncertainties for information objects when they are used as input to a decision, and determine the uncertainty of the decision as a function of its inputs. Numerous techniques, including queuing theory, simulation, and stochastic programming, are available to handle these three issues when the uncertainties are measured using probability theory. New techniques are needed when they are not.
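A small sketch of the propagation issue, with invented lead-time figures: when distributions are known, Monte Carlo sampling propagates probabilistic uncertainty through a decision function; when only bounds are known and there is no sampling basis, the bounds themselves can be propagated instead (valid here because the function is monotone). Richer possibilistic or belief-based propagation follows the same pattern with different calculi.

```python
import random

random.seed(4)

def total_lead_time(purchase, transport, production):
    """Decision-relevant output derived from three uncertain inputs (days)."""
    return purchase + transport + production

# Probabilistic propagation: distributions are known, so Monte Carlo applies.
samples = [total_lead_time(random.uniform(5, 9),
                           random.uniform(2, 6),
                           random.uniform(10, 14))
           for _ in range(10_000)]
print(min(samples), max(samples))

# Non-probabilistic propagation: only intervals are known (no sampling basis),
# so the bounds themselves are pushed through the same function.
lo = total_lead_time(5, 2, 10)
hi = total_lead_time(9, 6, 14)
print(lo, hi)   # guaranteed envelope without assuming any distribution
```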


Performance Test Bed

As described earlier, we propose a systems-based research program that focuses on two areas: the global performance of the entire value chain and the impact that local decisions and local disturbances have on that global performance. To conduct such a program, researchers must have access to models of various value-chain activities that impact performance. Four major value-chain activities are being considered: supply chain management and logistics, collaborative design and engineering, distributed planning and scheduling, and shop floor process control.

Researchers at the National Institute of Standards and Technology (NIST) are partnering with researchers in academia to build a reflective virtual manufacturing environment. This environment will contain manufacturing software applications, as well as a number of different simulation tools (see Figure 1) to model information flows, product capabilities, manufacturing processes, and logistics. Software systems will be augmented with manufacturing hardware from both NIST and university laboratories. Manufacturing hardware may include numerically controlled machines, coordinate measuring machines, robots, and other manufacturing equipment.


Interoperability Test Bed

To successfully address the two focus areas in the performance test bed, the components in our reflective environment must form one integrated system. They must be able to exchange information and interoperate with one another seamlessly and without error. Different organizations are creating interoperability standards for exchanging information, and vendors are using the latest technologies to implement these standards. Before manufacturers buy these products, they want to be sure that the standards are implemented correctly and that they can use the products to do business with their partners around the world. These requirements are usually satisfied through a number of interoperability demonstrations.

Until recently, vendors conducted such demonstrations as part of their normal marketing and sales operations. However, making the required modifications for every potential customer has become prohibitively expensive. A number of users and vendors suggested that NIST create a persistent environment, tools, and test suites for such demonstrations. In response, NIST created an interoperability test bed [5].

The testing approach adopted in the interoperability test bed is shown in the top part of Figure 2. The OEM and the supplier represent virtual trading partners attempting to exchange messages using two different vendor products. The Reflector is a testing tool that supports both disconnected and connected testing scenarios and allows the transactions to be routed to the specified end points, reflected to the originator, and stored in a permanent transaction log. These transactions are tested for conformance to specific standards governing messaging, syntax, semantics, and choreography, among others. NIST and its partners have identified numerous tools for conducting such tests. Several of those tools are described here; information about other related tools is available in [5].

The Process Checker enables monitoring and conformance checking for choreographed transactions between business partners. The tool provides a Web-based graphical user interface to monitor the business interactions in real time. The monitoring tool checks that each message has the correct sender and receiver and that messages arrive in the correct order. Further, each transaction may have a time constraint associated with its execution. Should a constraint be exceeded, the monitoring tool flags the collaboration as failed.
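The sketch below is not the NIST tool itself, only an illustration of the kinds of checks such a monitor performs: expected message name, sender/receiver pair, arrival order, and a time constraint per transaction. The choreography and deadlines are invented.

```python
from datetime import datetime, timedelta

# Expected choreography for a simple purchase-order collaboration (illustrative).
EXPECTED = [
    ("OrderRequest",  "OEM",      "Supplier", timedelta(hours=2)),
    ("OrderResponse", "Supplier", "OEM",      timedelta(hours=24)),
    ("ShipNotice",    "Supplier", "OEM",      timedelta(hours=72)),
]

def check_choreography(messages):
    """messages: list of (name, sender, receiver, timestamp), in arrival order."""
    errors, previous_time = [], None
    for (name, sender, receiver, ts), (exp_name, exp_from, exp_to, limit) in zip(messages, EXPECTED):
        if name != exp_name:
            errors.append(f"expected {exp_name}, got {name}")
        if (sender, receiver) != (exp_from, exp_to):
            errors.append(f"{name}: wrong sender/receiver {sender}->{receiver}")
        if previous_time is not None and ts - previous_time > limit:
            errors.append(f"{name}: time constraint of {limit} exceeded")
        previous_time = ts
    if len(messages) < len(EXPECTED):
        errors.append("collaboration incomplete")
    return errors or ["collaboration conforms"]

t0 = datetime(2005, 5, 1, 9, 0)
log = [("OrderRequest", "OEM", "Supplier", t0),
       ("OrderResponse", "Supplier", "OEM", t0 + timedelta(hours=30))]   # too late
print(check_choreography(log))
```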

The Content Checker enables specification and execution of content constraints that define valid syntax, structure, or semantics of the business messages. This facility allows standard developers, users, and implementers to precisely specify, extend, and test for conformance with the semantics of a common data dictionary (lexicon).

A Syntax Checker can be likened to a validating XML parser. Its role is to check that a message has the correct structure as specified by a standard and that all necessary elements are present and in the right order, as specified in the XML Schema instance for that message.
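In practice this kind of check can be delegated to an off-the-shelf validating parser. The sketch below uses the third-party lxml library as one possible choice; the schema and message file names are placeholders.

```python
from lxml import etree   # third-party library; one common choice for schema validation

# Placeholder file names for a standard message schema and a received message.
schema = etree.XMLSchema(etree.parse("purchase_order.xsd"))
message = etree.parse("incoming_message.xml")

if schema.validate(message):
    print("message conforms to the schema")
else:
    for error in schema.error_log:        # element-by-element diagnostics
        print(error.line, error.message)
```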

A Grammar Checker can be considered a superset of the Syntax Checker responsible for enforcing business document structural rules defined in some application domain and business context. These rules are not easily expressible in the form of XML Schema instances and require additional expressive capability.
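A minimal illustration of such a rule (the rule itself is invented): a cross-element dependency that an ordinary content model cannot conveniently express, checked with a few lines of code.

```python
import xml.etree.ElementTree as ET

def check_business_rules(xml_text):
    """Invented context rule of the kind a Grammar Checker enforces: if the
    order's Incoterm is "EXW", a pickup address must be present. XML Schema
    alone cannot easily express this cross-element dependency."""
    doc = ET.fromstring(xml_text)
    errors = []
    if doc.findtext("Incoterm") == "EXW" and doc.find("PickupAddress") is None:
        errors.append("EXW orders must carry a PickupAddress element")
    return errors or ["document satisfies the business rules"]

order = "<Order><Incoterm>EXW</Incoterm><BuyerParty>ACME</BuyerParty></Order>"
print(check_business_rules(order))
```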


Conclusion

Manufacturing has become a truly global activity. The competition is no longer between individual companies; it is now between individual value chains. These chains contain independent companies that form complicated business structures. The tight vertical controls of the past no longer work in these federated structures; the result is behavior that is often complex, sometimes chaotic, and always unpredictable.

We contend there is a crucial, intrinsic relationship among performance, structure, decision, and information—a relationship that is not fully understood. We have discussed some of the limitations of traditional approaches to decision making and views of information and have proposed some new research directions to overcome some of these limitations. Based on our investigations, we have determined that two types of test beds are needed to support this new research: performance test beds and interoperability test beds.


References

1. Arndt, C. Information Measures: Information and its Description in Science and Engineering. Springer-Verlag, Berlin, Germany, 2001.

2. Chaitin, G.J. Information-Theoretic Incompleteness. World Scientific, 1992 (reprinted 1998).

3. Deshmukh, A. Complexity and chaos in manufacturing systems. Ph.D. dissertation, School of Industrial Engineering, Purdue University, West Lafayette, IN, 1993.

4. Garey, M. and Johnson, D. Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman, New York, 1979.

5. Ivezic, N., Kulvatunyou, B., and Jones, A.T. A manufacturing B2B interoperability testbed. In Proceedings of the European Commission E-Challenges 2003 (Bologna, Italy, Oct. 2003), 551-558.

6. Jones, A., Reeker, L., and Deshmukh, A. On information and performance of complex manufacturing systems. In Proceedings of the 2002 Conference of Manufacturing Complexity Networks, G. Frizelle and H. Richards, Eds. (Cambridge, U.K., April 2002), 173-182.

7. Lasdon, L. Optimization Theory of Large Systems. Dover Publications, Mineola, NY, 2002.

8. Parsons, S. Qualitative Methods for Reasoning Under Uncertainty. MIT Press, Cambridge, MA, 2001.

9. Shannon, C. and Weaver, W. The Mathematical Theory of Communication. University of Illinois Press, Urbana, IL, 1971.

10. Simon, H. The Sciences of the Artificial. MIT Press, Cambridge, MA, 1981.

11. Stonier, T. Information and the Internal Structure of the Universe. Springer-Verlag, Berlin, Germany, 1991.


Authors

Albert Jones ([email protected]) is the leader of the Enterprise Systems Group in the Manufacturing Engineering Lab of the National Institute of Standards and Technology in Gaithersburg, MD.

Abhijit Deshmukh ([email protected]) is an associate professor of Mechanical and Industrial Engineering at the University of Massachusetts in Amherst.


Footnotes

Certain commercial software products are identified in this article. These products were used only for demonstration purposes. This use does not imply approval or endorsement by NIST, nor does it imply that these products are necessarily the best available for the purpose.


Figures

Figure 1. Some of the current simulation tools in the virtual manufacturing environment.

Figure 2. Schematic diagram of the test bed approach.



©2005 ACM  0001-0782/05/0500  $5.00
