
Communications of the ACM

Contributed articles

Process Mining


[Illustration: hands on cogs. Credit: Myddleton Croft]

Recent breakthroughs in process mining research make it possible to discover, analyze, and improve business processes based on event data. Activities executed by people, machines, and software leave trails in so-called event logs. What events (such as entering a customer order into SAP, a passenger checking in for a flight, a doctor changing a patient's dosage, or a planning agency rejecting a building permit) have in common is that all are recorded by information systems. Data volume and storage capacity have grown spectacularly over the past decade, while the digital universe and the physical universe are increasingly aligned. Business processes thus ought to be managed, supported, and improved based on event data rather than on subjective opinions or obsolete experience. Application of process mining in hundreds of organizations worldwide shows that managers and users alike tend to overestimate their knowledge of their own processes. Process mining results can thus be viewed as X-rays revealing what really goes on inside processes and can be used to diagnose problems and suggest proper treatment. The practical relevance of process mining and related interesting scientific challenges make process mining a hot topic in business process management (BPM). This article offers an introduction to process mining by discussing the core concepts and applications of the emerging technology.


Process mining aims to discover, monitor, and improve real processes by extracting knowledge from event logs readily available in today's information systems.1,2 Although event data is everywhere, management decisions tend to be based on PowerPoint charts, local politics, or management dashboards rather than on careful analysis of event data; the knowledge hidden in event logs thus remains untapped instead of being turned into actionable information. Advances in data mining made it possible to find valuable patterns in large datasets and support complex decisions based on the data. However, classical data mining problems (such as classification, clustering, regression, association rule learning, and sequence/episode mining) are not process-centric. BPM approaches therefore tend to resort to handmade models, and process mining research aims to bridge the gap between data mining and BPM. Metaphorically, process mining can be seen as taking X-rays to help diagnose/predict problems and recommend treatment.

An important driver for process mining is the incredible growth of event data4,5 in any context—sector, economy, organization, and home—and system that logs events. For less than $600, one can buy, say, a disk drive with the capacity to store all of the world's music.5 A 2011 study by Hilbert and Lopez4 found that storage space worldwide grew from 2.6 optimally compressed exabytes (2.6 × 10¹⁸ bytes) in 1986 to 295 optimally compressed exabytes in 2007. In 2007, 94% of all information storage capacity on Earth was digital, with the other 6% in the form of books, magazines, and other non-digital formats; in 1986, only 0.8% of all information-storage capacity was digital. These numbers reflect the continuing exponential growth of data.

The further adoption of technologies (such as radio frequency identification, location-based services, cloud computing, and sensor networks) will accelerate the growth of event data. However, organizations have problems using it effectively, with most still diagnosing problems based on fiction (such as PowerPoint slides and Visio diagrams) rather than on facts (such as event data). This is illustrated by the poor quality of process models in practice; for example, over 20% of the 604 process diagrams in SAP's reference model have obvious errors and their relation to actual business processes supported by SAP is unclear.6 It is thus vital to turn the world's massive amount of event data into relevant knowledge and reliable insights—and this is where process mining can help.

The growing maturity of process mining is illustrated by the Process Mining Manifesto9 released earlier this year by the IEEE Task Force on Process Mining (http://www.win.tue.nl/ieeetfpm/) supported by 53 organizations and based on contributions from 77 process-mining experts. The active contributions from end users, tool vendors, consultants, analysts, and researchers highlight the significance of process mining as a bridge between data mining and business process modeling.

The starting point for process mining is an event log in which each event refers to an activity, or well-defined step in some process, and is related to a particular case, or process instance. The events belonging to a case are ordered and can be viewed as one "run" of the process. Event logs may also store additional information about events; when possible, process mining techniques use extra information (such as the resource, person, or device executing or initiating the activity), the timestamp of the event, and data elements recorded with the event (such as the size of an order).
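To make this structure concrete, the following minimal sketch (in Python; the class and function names are mine, not those of any particular process mining tool) represents events with the attributes just described and groups them into traces:

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Optional

    @dataclass
    class Event:
        """One recorded event, with the attributes process mining relies on."""
        case_id: str                    # the case (process instance) it belongs to
        activity: str                   # the well-defined step in the process
        timestamp: datetime             # when the event was recorded
        resource: Optional[str] = None  # who or what executed the activity
        attributes: dict = field(default_factory=dict)  # e.g., size of an order

    def traces(log):
        """Group events by case and order them in time; each case yields one
        sequence of activities, that is, one "run" of the process."""
        cases = {}
        for e in sorted(log, key=lambda e: (e.case_id, e.timestamp)):
            cases.setdefault(e.case_id, []).append(e.activity)
        return cases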

Event logs can be used to conduct three types of process mining (see Figure 1).1 The first and most prominent is discovery; a discovery technique takes an event log and produces a model without using a priori information. For many organizations it is surprising that existing techniques are able to discover real processes based only on example behaviors recorded in event logs. The second type is conformance, where an existing process model is compared with an event log of the same process. Conformance checking can be used to check if reality, as recorded in the log, conforms to the model and vice versa. The third type is enhancement, where the idea is to extend or improve an existing process model using information about the actual process recorded in an event log. Whereas conformance checking measures alignment between model and reality, this third type of process mining aims to change or extend the a priori model; for instance, using timestamps in the event log, one can extend the model to show bottlenecks, service levels, throughput times, and frequencies.

Process Discovery

The goal of process discovery is to learn a model based on an event log. Events can have all kinds of attributes (such as timestamps, transactional information, and resource usage) that can be used for process discovery. However, for simplicity, we often represent events by activity names only. That way, a case, or process instance, can be represented by a trace describing a sequence of activities. Consider, for example, the event log in Figure 2 (from van der Aalst1), which contains 1,391 cases, or instances of some reimbursement process. There are 455 process instances following trace acdeh, with each activity represented by a single character: a = register request, b = examine thoroughly, c = examine casually, d = check ticket, e = decide, f = reinitiate request, g = pay compensation, and h = reject request. Hence, trace acdeh models a reimbursement request that was rejected after a registration, examination, check, and decision step; 455 cases followed this path, which consists of five steps, so the first line in the table corresponds to 455 × 5 = 2,275 events. The whole log consists of 7,539 events.
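This bookkeeping is easy to reproduce mechanically; a small sketch (the function is mine) derives event counts from trace frequencies:

    def total_events(trace_counts):
        """Number of events in a log, given how often each trace occurs:
        each occurrence of a trace of length n contributes n events."""
        return sum(len(trace) * count for trace, count in trace_counts.items())

    # The one frequency given in the text: 455 cases followed trace acdeh.
    assert total_events({"acdeh": 455}) == 455 * 5 == 2275
    # Summing over all distinct traces of the log in Figure 2 yields 7,539.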

Process-discovery techniques produce process models based on event logs (such as the one in Figure 2); for example, the classical α-algorithm produces model M1 for this log. This process model is represented as a Petri net consisting of places and transitions. The state of a Petri net, or "marking," is defined by the distribution of tokens over places. A transition is enabled if each of its input places contains a token; for example, a is enabled in the initial marking of M1, because the only input place of a contains a token (black dot). Transition e in M1 is enabled only if both its input places contain a token. An enabled transition may fire, thereby consuming a token from each of its input places and producing a token for each of its output places. Firing a in the initial marking corresponds to removing one token from start and producing two tokens, one for each output place. After firing a, three transitions—b, c, and d—are enabled. Firing b disables c because the token is removed from the shared input place (and vice versa). Transition d is concurrent with b and c; that is, it can fire without disabling another transition. Transition e becomes enabled after d and either b or c have occurred. By executing e, three transitions—f, g, and h—become enabled; these transitions compete for the same token, thus modeling a choice. When g or h fires, the process ends with a token in place end. If f fires, the process returns to the state just after executing a. Process mining techniques must be able to discover such advanced patterns, including the concurrency of d with b and c, and should not be restricted to simple sequential processes.
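The firing rule just described can be captured in a few lines. The following sketch implements enabledness and firing for such a net; the structure of M1 follows the description above, though the internal place names (p1–p5) are my own labels:

    from collections import Counter

    class PetriNet:
        """Minimal Petri net: a transition maps to (input places, output places)."""
        def __init__(self, transitions):
            self.transitions = transitions

        def enabled(self, marking):
            """Transitions with at least one token on each input place."""
            return [t for t, (ins, _) in self.transitions.items()
                    if all(marking[p] >= 1 for p in ins)]

        def fire(self, marking, t):
            """Consume a token from each input place, produce one on each output."""
            ins, outs = self.transitions[t]
            if any(marking[p] < 1 for p in ins):
                raise ValueError(f"transition {t} is not enabled")
            m = Counter(marking)
            for p in ins:
                m[p] -= 1
            for p in outs:
                m[p] += 1
            return m

    M1 = PetriNet({
        "a": (["start"], ["p1", "p2"]),   # a produces two tokens
        "b": (["p1"], ["p3"]),            # b and c compete for the token in p1
        "c": (["p1"], ["p3"]),
        "d": (["p2"], ["p4"]),            # d is concurrent with b and c
        "e": (["p3", "p4"], ["p5"]),      # e needs both input places marked
        "f": (["p5"], ["p1", "p2"]),      # f returns to the state just after a
        "g": (["p5"], ["end"]),
        "h": (["p5"], ["end"]),
    })
    m = Counter({"start": 1})
    print(M1.enabled(m))                 # ['a']
    print(M1.enabled(M1.fire(m, "a")))   # ['b', 'c', 'd']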

Checking that all traces in the event log can be reproduced by M1 is easy. The same does not hold for the second process model in Figure 2, as M2 is able to reproduce only the most frequent trace acdeh. The model does not fit the log well because observed traces (such as abdeg) are not possible according to M2. The third model is able to reproduce the entire event log, but M3 also allows for traces (such as ah and adddddddg). M3 is therefore considered "underfitting"; too much behavior is allowed because M3 clearly overgeneralizes the observed behavior. Model M4 is also able to reproduce the event log, though the model simply encodes the example traces in the log; we call such a model "overfitting," as the model does not generalize behavior beyond the observed examples.

In recent years, powerful process mining techniques have been developed to automatically construct a suitable process model, given an event log. The goal is to construct a simple model able to explain most observed behavior without overfitting or underfitting the log.


Conformance Checking

Process mining is not limited to process discovery; the discovered process is just the starting point for deeper analysis. Conformance checking and enhancement relate model and log, as in Figure 1. The model may have been made by hand or discovered through process discovery. In conformance checking, the modeled behavior and the observed behavior, or event log, are compared. When checking the conformance of M2 with respect to the log in Figure 2, only the 455 cases following acdeh can be replayed from beginning to end. If the model tried to replay trace acdeg, it would get stuck after executing acde because g is not enabled. If it tried to replay trace adceh, it would get stuck after executing the first step because d is not (yet) enabled.
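Replay is equally mechanical; the following sketch (same net representation as the earlier one) replays a trace and reports where it gets stuck. It assumes M2 is the strictly sequential model for acdeh, which is consistent with the behavior described above; the place names are mine:

    from collections import Counter

    def replay(net, trace, initial=("start",), final=("end",)):
        """Replay a trace; returns (fits, i), where i is the index of the
        first event that could not be executed."""
        m = Counter(initial)
        for i, t in enumerate(trace):
            if t not in net or any(m[p] < 1 for p in net[t][0]):
                return False, i              # stuck: t is not enabled here
            ins, outs = net[t]
            for p in ins:
                m[p] -= 1
            for p in outs:
                m[p] += 1
        return +m == Counter(final), len(trace)  # +m drops zero counts

    M2 = {"a": (["start"], ["q1"]), "c": (["q1"], ["q2"]),
          "d": (["q2"], ["q3"]), "e": (["q3"], ["q4"]),
          "h": (["q4"], ["end"])}
    print(replay(M2, "acdeh"))  # (True, 5): replays from beginning to end
    print(replay(M2, "acdeg"))  # (False, 4): stuck after acde; g not enabled
    print(replay(M2, "adceh"))  # (False, 1): stuck after a; d not (yet) enabled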

Among the approaches to diagnosing and quantifying conformance is one that seeks an optimal alignment between each trace in the log and the most similar behavior in the model. Consider, for example, process model M1, a fitting trace σ1 = adceg, a non-fitting trace σ2 = abefdeg, and the following three alignments:

γ1 =  a  d  c  e  g
      a  d  c  e  g

and

γ2 =  a  b  >>  e  f  d  >>  e  g
      a  b  d   e  f  d  b   e  g

and

γ3 =  a  b  e   f   d  e  g
      a  b  >>  >>  d  e  g

γ1 shows perfect alignment between σ1 and M1; all moves of the trace in the event log (top part of alignment) can be followed by moves of the model (bottom part of alignment). γ2 shows an optimal alignment for trace σ2 in the event log and model M1; the first two moves of the trace in the event log can be followed by the model. However, e is not enabled after executing only a and b. In the third position of alignment γ2, a d move of the model is not synchronized with a move in the event log. This move in just the model is denoted as (>>,d), signaling a conformance problem. In the next three moves model and log agree. The seventh position of alignment γ2 involves a move in the model that is not also in the log: (>>,b). γ3 shows another optimal alignment for trace σ2. In γ3 there are two situations where log and model do not move together: (e,>>) and (f,>>). Alignments γ2 and γ3 are both optimal if the penalties for "move in log" and "move in model" are the same. Both alignments have two >> steps, and no alignments are possible with fewer than two >> steps.
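Under this unit-cost view, the optimality claim can be checked mechanically; a sketch using the alignments as written above (>> marks the skipped side):

    SKIP = ">>"

    def alignment_cost(alignment, log_penalty=1, model_penalty=1):
        """Cost of an alignment given as (log move, model move) pairs:
        penalize each move in log only and each move in model only."""
        cost = 0
        for log_move, model_move in alignment:
            if model_move == SKIP:
                cost += log_penalty    # move in log not followed by the model
            elif log_move == SKIP:
                cost += model_penalty  # move in the model only
        return cost

    gamma2 = [("a", "a"), ("b", "b"), (SKIP, "d"), ("e", "e"), ("f", "f"),
              ("d", "d"), (SKIP, "b"), ("e", "e"), ("g", "g")]
    gamma3 = [("a", "a"), ("b", "b"), ("e", SKIP), ("f", SKIP),
              ("d", "d"), ("e", "e"), ("g", "g")]
    print(alignment_cost(gamma2), alignment_cost(gamma3))  # 2 2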

Conformance may be viewed from two angles: either the model does not capture real behavior (the model is wrong) or reality deviates from the desired model (the event log is wrong). The first viewpoint is taken when the model is supposed to be descriptive, capturing or predicting reality; the second when the model is normative, intended to influence or control reality.

Various types of conformance can be checked, and creating an alignment between log and model is just the starting point for conformance checking.1 For example, various fitness metrics, measuring the ability to replay the log, are available for determining the conformance of a business process model; a model has fitness 1 if all traces can be replayed from beginning to end, and fitness 0 if model and event log "disagree" on all events. In Figure 2, process models M1, M3, and M4 have a fitness of 1, or perfect fitness, with respect to the event log. Model M2 has a fitness of 0.8 for the event log consisting of 1,391 cases; intuitively, this means 80% of the events in the log are explained by the model. Our experience with conformance checking in dozens of organizations shows real-life processes often deviate from the simplified Visio or PowerPoint representations traditionally used by process analysts.
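Event-level fitness of this intuitive kind reduces to simple counting (a simplification of the metrics in van der Aalst,1 not their exact definition):

    def fitness(trace_counts, events_explained):
        """Fraction of events in the log the model explains;
        events_explained(trace) says how many events of a trace the model
        can follow (len(trace) for a perfectly fitting trace)."""
        total = sum(len(t) * n for t, n in trace_counts.items())
        explained = sum(events_explained(t) * n for t, n in trace_counts.items())
        return explained / total

    # A perfectly fitting model explains every event:
    print(fitness({"acdeh": 455}, len))  # 1.0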


Model Enhancement

A process model can be extended or improved through alignment between event log and model, and a non-fitting process model can be corrected through the diagnostics provided by the alignment; if the alignment contains many (e,>>) moves, it might make sense to allow for skipping activity e in the model. Moreover, event logs may contain information about resources, timestamps, and case data; for example, an event referring to activity "register request" and case "992564" may also have attributes describing the person registering the request (such as "John"), the time of the event (such as "30-11-2011:14.55"), the age of the customer (such as "45"), and the claimed amount (such as "650 euro"). After aligning model and log, the event log can be replayed on the model, and these additional attributes can be analyzed during replay; Figure 3 shows, for example, that it is possible to analyze wait times between activities. Measuring the time difference between causally related events and computing basic statistics (such as averages, variances, and confidence intervals) makes it possible to identify the main bottlenecks.
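Such a bottleneck statistic boils down to arithmetic over timestamps. A sketch with hypothetical data (the two cases are invented; only the technique is from the text):

    from datetime import datetime
    from statistics import mean

    def completion_gaps(cases, from_act, to_act):
        """Per case, the time from the completion of from_act to the
        completion of to_act; cases: {case_id: [(activity, timestamp), ...]}."""
        gaps = []
        for events in cases.values():
            times = dict(events)  # keeps the last occurrence per activity
            if from_act in times and to_act in times:
                gaps.append((times[to_act] - times[from_act]).total_seconds())
        return gaps

    cases = {
        "992564": [("register request", datetime(2011, 11, 30, 14, 55)),
                   ("decide",           datetime(2011, 12, 2, 9, 10))],
        "992565": [("register request", datetime(2011, 12, 1, 10, 0)),
                   ("decide",           datetime(2011, 12, 6, 16, 30))],
    }
    gaps = completion_gaps(cases, "register request", "decide")
    print(mean(gaps) / 86400, "days on average")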

Information about resources can help discover roles, or groups of people executing related activities frequently, through standard clustering techniques. Social networks can be constructed based on the flow of work, and resource performance (such as the relation between workload and service times) can be analyzed. Standard classification techniques can be used to analyze the decision points in the process model; for example, activity e ("decide") has three possible outcomes: "pay," "reject," and "redo." Using data known about the case prior to the decision, a decision tree can be constructed explaining the observed behavior.
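As an illustration of such decision-point analysis, the following sketch fits a standard decision tree (via scikit-learn) on invented case attributes; the data and resulting rules are hypothetical, and only the outcomes "pay," "reject," and "redo" come from the running example:

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Case attributes known before decision e: [claimed amount, customer age].
    X = [[650, 45], [120, 30], [900, 52], [80, 23], [700, 61], [95, 33]]
    y = ["reject", "pay", "reject", "pay", "redo", "pay"]

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree, feature_names=["amount", "age"]))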

Figure 3 outlines why process mining is not limited to control-flow discovery. Moreover, process mining is not limited to offline analysis and can be used for predictions and recommendations at runtime; for example, the completion time of a partially handled customer order can be predicted through a discovered process model with timing information.


Practical Value

Here, I focus on the practical value of process mining. As mentioned earlier, process mining is driven by the continuing exponential growth of event-data volume; for example, according to the McKinsey Global Institute, in 2010 enterprises stored more than seven exabytes of new data on disk drives, while consumers stored more than six exabytes of new data on such devices as PCs and notebooks.5

The remainder of the article explores how process mining provides value, referring to case studies that used our open-source software package ProM (http://www.processmining.org)1 created and maintained by the process-mining group at Eindhoven University of Technology, though other research groups have contributed, including the University of Padua, Universitat Politècnica de Catalunya, University of Calabria, Humboldt-Universität zu Berlin, Queensland University of Technology, Technical University of Lisbon, Vienna University of Economics and Business, Ulsan National Institute of Science and Technology, K.U. Leuven, Tsinghua University, and University of Innsbruck. Besides ProM, approximately 10 commercial software vendors worldwide develop and distribute process-mining software, often embedded in larger tools (such as Pallas Athena, Software AG, Futura Process Intelligence, Fluxicon, Businesscape, Iontas/Verint, Fujitsu, and Stereologic).

Provides insight. For the past 10 years we have used ProM in more than 100 organizations, including municipalities (such as Alkmaar, Harderwijk, and Heusden), government agencies (such as Centraal Justitieel Incasso Bureau, the Dutch Justice Department, and Rijkswaterstaat), insurance-related agencies (such as UWV), banks (such as ING Bank), hospitals (such as AMC Hospital in Amsterdam and Catharina hospital in Eindhoven), multinationals (such as Deloitte and DSM), high-tech system manufacturers and their customers (such as ASML, Philips Healthcare, Ricoh, and Thales), and media companies (such as Winkwaves). For each, we discovered some of their processes based on the event data they provided, with discovered processes often surprising even the stakeholders. The variability of processes is typically much greater than expected. Such insight represents tremendous value, as unexpected differences often point to sources of waste and mismanagement.

Improve performance. Event logs can be replayed on discovered or handmade process models to support conformance checking and model enhancement. Since most event logs contain timestamps, replay can be used to extend the model with performance information.

Figure 4 includes some performance-related diagnostics that can be obtained through process mining. The model was discovered based on 745 objections raised by citizens against the so-called Waardering Onroerende Zaken, or WOZ, valuation in a Dutch municipality. Dutch municipalities are required by law to estimate the value of houses and apartments within their borders. They use the WOZ value as a basis for determining real-estate property tax. The higher the WOZ value, the more tax an owner must pay. Many citizens appeal against the WOZ valuation, asserting it is too high.

Each of the 745 objections corresponds to a process instance. Together, these instances generated 9,583 events, all with timestamps; Figure 4 outlines how frequently the different paths are used in the model. The different stages, or "places" in Petri net jargon, of the model include color to highlight where, on average, most process time is spent; the purple stages of the process take the most time, the blue stages the least. It is also possible to select two activities and measure the time that passes between them. On average, 202.73 days pass from completion of activity "OZ02 Voorbereiden" (preparation) to completion of "OZ16 Uitspraak" (final judgment); this is longer than the average overall flow time of approximately 178 days. Approximately 416, or 56%, of the objections follow this route; the other cases follow the branch "OZ15 Zelf uitspraak" that takes, on average, less time.

Diagnostics, as in Figure 4, can be used to improve processes by removing bottlenecks and rerouting cases. Since the model is connected to event data, it is possible to drill down immediately and investigate groups of cases that take notably more time than others.1

Ensure conformance. Replay can also be used to check conformance (see Figure 5). Based on 745 appeals against the WOZ valuation, ProM was used to compare the normative model and the observed behavior, finding that 628 of the 745 cases can be replayed without encountering any problems. The fitness of the model and log is 0.98876214, indicating the model explains almost all recorded events. Despite the good fit, ProM identified all deviations; for example, "OZ12 Hertaxeren" (reevaluate property) occurred 23 times despite not being allowed according to the normative model, as indicated by the "-23" in Figure 5. With ProM the analyst can drill down to see what these cases have in common.

The conformance of the appeal process is high; approximately 99% of events are possible according to the model. Many processes have a much lower conformance; it is not uncommon to find processes where only 40% of events are possible according to the model. Process mining revealed, for example, that ASML's modeled test process strongly deviated from the real process.8

The increasing importance of corporate governance, risk, compliance management, and legislation (such as the Sarbanes-Oxley Act and the Basel II Accord) highlights the practical relevance of conformance checking. Process mining can help auditors check whether processes execute within certain boundaries set by managers, governments, and other stakeholders.3 Violations discovered through process mining might indicate fraud, malpractice, risks, or inefficiency; for example, in the municipality for which the WOZ appeal process was analyzed, ProM revealed misconfigurations of its eiStream workflow-management system. Municipal employees frequently bypassed the system because system administrators could manually change the status of cases (such as to skip activities or roll back the process).7

Show variability. Handmade process models tend to provide an idealized view of the business process being modeled. However, such "PowerPoint reality" often has little in common with real processes, which have much more variability. To improve conformance and performance, process analysts should not naively abstract away this variability.

Process mining often involves spaghetti-like models; the one in Figure 6 was discovered based on an event log containing 24,331 events referring to 376 different activities describing the diagnosis and treatment of 627 gynecological oncology patients in the AMC Hospital in Amsterdam. The spaghetti-like structures are not caused by the discovery algorithm but by the variability of the process.

Although stakeholders should be able to see reality in all its detail (see Figure 6), spaghetti-like models can be simplified. As with electronic maps, it is possible to seamlessly zoom in and out.1 When zooming out, insignificant activities and connections are either left out or dynamically clustered into aggregate shapes, in the same way streets and suburbs amalgamate into cities in Google Maps. The significance level of an activity or connection may be based on frequency, costs, or time.
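One simple way to realize such zooming (a sketch of the general idea, not ProM's actual algorithm) is to keep only the connections of a directly-follows graph whose significance reaches a threshold; the counts below are hypothetical:

    def zoom_out(dfg, min_significance):
        """Keep only edges at or above the threshold; significance here is
        frequency, but costs or time would work the same way."""
        return {edge: n for edge, n in dfg.items() if n >= min_significance}

    # Hypothetical directly-follows counts: (x, y) means y directly follows x.
    dfg = {("a", "c"): 455, ("c", "d"): 455, ("d", "e"): 500,
           ("e", "g"): 42, ("b", "d"): 3}
    print(zoom_out(dfg, min_significance=50))  # low-frequency edges disappear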

Improve reliability. Process mining can also help improve the reliability of systems and processes; for example, since 2007, we have used process mining to analyze the event logs of X-ray machines from Philips Healthcare1 that record massive amounts of events describing actual use. Regulations in different countries require proof that systems have been tested under realistic circumstances; for this reason, process discovery was used to construct realistic test profiles. Philips Healthcare also used process mining for fault diagnosis to identify potential failures within its X-ray systems. By learning from earlier system failures, fault diagnosis was able to find the root causes of newly emerging problems. For example, we used ProM to analyze the circumstances under which particular components are replaced, resulting in a set of "signatures," or historical fault patterns; when a malfunctioning X-ray machine exhibits a particular signature behavior, the service engineer knows what component to replace.

Enable prediction. Combining historic event data with real-time event data can also help predict problems before they occur; for instance, Philips Healthcare can anticipate that an X-ray tube in the field is about to fail by discovering signature fault patterns in the machine's event logs, so the tube can be replaced before the machine begins to malfunction. Many data sources today are updated in (near) real time, and sufficient computing power is available to analyze events as they occur. Process mining is not restricted to offline analysis and is useful for online operational support. Predictions can even be made for a running process instance (such as expected remaining flow time).1
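A deliberately naive version of such a prediction, learning only the historical average time from each activity's completion to case completion (real techniques1 use annotated models and are far more sophisticated), might look as follows; the history is hypothetical:

    from collections import defaultdict
    from datetime import datetime
    from statistics import mean

    def fit_predictor(cases):
        """cases: {case_id: [(activity, timestamp), ...]} for completed cases;
        returns, per activity, the mean time to case completion."""
        remaining = defaultdict(list)
        for events in cases.values():
            end = max(ts for _, ts in events)
            for act, ts in events:
                remaining[act].append((end - ts).total_seconds())
        return {act: mean(v) for act, v in remaining.items()}

    def predict_remaining(model, last_activity):
        """Expected seconds until completion for a running case."""
        return model.get(last_activity)

    # Hypothetical history with one completed case.
    hist = {"c1": [("register", datetime(2012, 1, 1)),
                   ("decide",   datetime(2012, 1, 3)),
                   ("pay",      datetime(2012, 1, 7))]}
    model = fit_predictor(hist)
    print(predict_remaining(model, "decide") / 86400, "days to completion")  # 4.0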


Conclusion

Process-mining techniques enable organizations to X-ray their business processes, diagnose problems, and identify promising treatments. Process discovery often provides surprising insight that can be used to redesign processes or improve management, and conformance checking can be used to identify where processes deviate. This is especially relevant now that organizations are required to emphasize corporate governance, risk, and compliance. Process-mining techniques offer a means to more rigorously check compliance while improving performance.

This article introduced the basic concepts and showed that process mining can provide value in several ways. For more on process mining see van der Aalst,1 the first book on the subject, and the Process Mining Manifesto9 available in 13 languages; for sample logs, videos, slides, articles, and software see http://www.process-mining.org.


Acknowledgments

I thank the members of the IEEE Task Force on Process Mining and all who contributed to the Process Mining Manifesto9 and the ProM framework.


References

1. Aalst, W. van der. Process Mining: Discovery, Conformance and Enhancement of Business Processes. Springer-Verlag, Berlin, 2011.

2. Aalst, W. van der. Using process mining to bridge the gap between BI and BPM. IEEE Computer 44, 12 (Dec. 2011), 77–80.

3. Aalst, W. van der, Hee, K. van, Werf, J.M. van der, and Verdonk, M. Auditing 2.0: Using process mining to support tomorrow's auditor. IEEE Computer 43, 3 (Mar. 2010), 90–93.

4. Hilbert, M. and Lopez, P. The world's technological capacity to store, communicate, and compute information. Science 332, 6025 (Feb. 2011), 60–65.

5. Manyika, J., Chui, M., Brown, B., Bughin, J., Dobbs, R., Roxburgh, C., and Byers, A. Big Data: The Next Frontier for Innovation, Competition, and Productivity. Report by McKinsey Global Institute, June 2011; http://www.mckinsey.com/mgi

6. Mendling, J., Neumann, G., and Aalst, W. van der. Understanding the occurrence of errors in process models based on metrics. In Proceedings of the OTM Conference on Cooperative Information Systems (Vilamoura, Algarve, Portugal, Nov. 25–30), F. Curbera, F. Leymann, and M. Weske, Eds. Lecture Notes in Computer Science Series, Vol. 4803. Springer-Verlag, Berlin, 2007, 113–130.

7. Rozinat, A. and Aalst, W. van der. Conformance checking of processes based on monitoring real behavior. Information Systems 33, 1 (Mar. 2008), 64–95.

8. Rozinat, A., de Jong, I., Günther, C., and Aalst, W. van der. Process mining applied to the test process of wafer scanners in ASML. IEEE Transactions on Systems, Man and Cybernetics, Part C 39, 4 (July 2009), 474–479.

9. Task Force on Process Mining. Process Mining Manifesto. In Proceedings of Business Process Management Workshops, F. Daniel, K. Barkaoui, and S. Dustdar, Eds. Lecture Notes in Business Information Processing Series 99. Springer-Verlag, Berlin, 2012, 169–194.


Author

Wil van der Aalst ([email protected]) is a professor in the Department of Mathematics & Computer Science of the Technische Universiteit Eindhoven, the Netherlands, where he is chair of the Architecture of Information Systems group.


Figures

Figure 1. The three basic types of process mining in terms of input and output.

Figure 2. An event log and four potential process models—M1, M2, M3, and M4—aiming to describe observed behavior.

Figure 3. The process model can be extended using event attributes (such as timestamps, resource information, and case data); the model also shows frequencies, as in, say, 1,537 times a decision was made and 930 cases were rejected.

Figure 4. Performance analysis based on 745 appeals against the WOZ valuation.

Figure 5. Conformance analysis showing deviations between event log and process model.

Figure 6. Process model discovered for a group of 627 gynecological oncology patients.



©2012 ACM  0001-0782/12/0800  $10.00

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from [email protected] or fax (212) 869-0481.



 
