
Communications of the ACM

Virtual Extension

Practitioner-Based Measurement: A Collaborative Approach


The established philosophy within the software development industry is that an organization implementing a program to improve software quality can expect to recoup the cost of implementation many times over through the reduced costs associated with improved quality.4 Measurement initiatives are perceived to provide a key contribution to quality improvement, as evidenced by the focus of early measurement-based initiatives and the place of measurement in the higher echelons of process initiatives. In general, organizations pursue measurement initiatives from the perspective that, without measurement, control is not possible.3 While organizations recognize the potential benefits of measuring their processes and products, they typically find it difficult to structure ad hoc measures into a formal program – a situation compounded by the significant cost of implementing such programs. Although these problems have led some organizations to move away from measurement programs, many companies still use them, as illustrated by the continued interest in, for example, the Capability Maturity Model. Given the appetite for measurement frameworks and initiatives, and their potential returns on investment, ways of implementing them successfully are important.

With that importance in mind, this work evaluates the implementation of such a measurement framework in a major insurance organization. A hybrid, practitioner-based model was devised to incorporate the best aspects of current approaches and mitigate their identified shortcomings. In pursuit of continually improving software quality, research was conducted to understand the critical success factors in implementing software measurement programs, develop a measurement framework that addresses those factors, implement a pilot program based on that framework, and reflect on the outcomes of implementation for future practice. We first examine existing measurement frameworks in order to identify the critical success factors and assess the relative strengths and weaknesses of existing approaches against those factors, and we describe the model that results from this analysis. We then describe the implementation of a pilot of the model in an established IT department and evaluate the success of the pilot and its implications for the state of the art.


Existing Measurement Frameworks

A variety of frameworks have been proposed as the basis for measurement programs; these can broadly be categorized as top-down or bottom-up. Top-down approaches focus on the goals of the organization as expressed through senior management. The Goal Question Metric (GQM) paradigm1 underpins a number of top-down approaches and is considered one of the most effective, along with the AMI approach,10 which itself has its basis in GQM. In outline, GQM is a structured method of breaking organizational goals down into questions or sub-goals and then decomposing these further into metrics. A clear link therefore exists between metrics and goals, though significant early effort is demanded in deriving the questions and measures. AMI adds to the measurement-based improvement philosophy by considering process maturity when setting goals. In contrast, the bottom-up approach challenges the assertion that measurement is focused solely on providing information for managerial decision making, suggesting instead that providing practitioners with objective data will help them improve both the service they provide and the products they produce.8 As an example, the MQG framework is predicated on a reversal of the GQM framework8 – the fundamental difference being that the measures come first, not last. A key concept of the MQG spiral is that the practitioner is the focus of the program, as measures should serve practitioners in helping them improve the quality of their work products.
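
To make the contrast concrete, the following Python sketch models a GQM-style decomposition as a simple data structure. The goal, questions, and metrics are hypothetical illustrations, not those used in the pilot described later; a bottom-up, MQG-style program would instead populate the metrics first and work upwards.

from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str   # e.g. "post-release defects per release" (hypothetical)
    unit: str

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)

@dataclass
class Goal:
    statement: str
    questions: list = field(default_factory=list)

# Top-down (GQM): start from an organizational goal and decompose downwards.
goal = Goal(
    statement="Improve the reliability of delivered releases",
    questions=[
        Question("How many defects escape into production?",
                 [Metric("post-release defects per release", "count")]),
        Question("How effective is pre-release testing?",
                 [Metric("defects found in test / total defects", "ratio")]),
    ],
)

# Every measure remains traceable to the goal that motivated it.
for q in goal.questions:
    for m in q.metrics:
        print(f"{goal.statement} <- {q.text} <- {m.name} ({m.unit})")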

Failed measurement programs have been reported in the literature5,7 and, irrespective of the framework adopted, anecdotal evidence suggests that 75% to 80% or more of measurement programs fail to deliver their objectives.2 It is therefore important to consider the factors argued to affect implementation success in order to evaluate the strengths and weaknesses of the top-down and bottom-up approaches. In summary, the factors are:

  • Complexity. Measurement programs intended to improve quality may require a significant number of measures to cover each quality characteristic; typical facets include correctness, reliability, integrity, usability, efficiency, maintainability, flexibility, testability, portability, reusability, and interoperability. Multiple facets and multiple measures increase complexity, which increases the risk that the measurement program will fail to establish itself8 (the sketch following this list illustrates how quickly the number of measures grows).
  • Practitioner commitment. The effort involved in implementing a framework requires consensus with those generating the measurement data. Facets that are important in this respect include transparency, feedback, usefulness, automated data collection, and training.
  • Management commitment. Measurement programs can impose a significant ongoing cost on all future software development projects, so it is important to secure management commitment.7
  • Metrics integrity. Accurate data is required irrespective of whether it is used by managers to make decisions or by practitioners to improve their software development products and processes.2
  • Communication. Both approaches require effective communication. For top-down approaches, the aim is to provide transparency so that practitioners are aware of what metrics are being collected and how they are being used, thereby encouraging their participation.7 For bottom-up approaches, communication allows success to be publicized and so helps ensure continued management support.
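
As a simple illustration of the complexity risk noted in the first item, the sketch below shows how quickly the data-collection burden grows when a program tries to cover every quality characteristic; the facet names come from the list above, while the number of measures per facet is an assumption made purely for illustration.

# Hypothetical sizing of a "complete" quality-measurement program.
facets = [
    "correctness", "reliability", "integrity", "usability", "efficiency",
    "maintainability", "flexibility", "testability", "portability",
    "reusability", "interoperability",
]
measures_per_facet = 3  # assumption: a modest three measures per facet

total = len(facets) * measures_per_facet
print(f"{len(facets)} facets x {measures_per_facet} measures each = {total} measures")
# -> 11 facets x 3 measures each = 33 measures: a substantial ongoing
#    data-collection burden, which is the complexity risk described above.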

Table 1 assesses the key advantages and disadvantages of each approach in relation to these factors. Of the five key factors, the primary areas of concern for top-down approaches are complexity, practitioner commitment, and metrics integrity; for bottom-up approaches, the areas needing particular attention are complexity and management commitment.


PBM: A Hybrid Approach

Given that neither the top-down nor the bottom-up approach is ideal, we sought to implement a hybrid designed to maximize the advantages and minimize the disadvantages associated with each – a Practitioner-Based Model (PBM). An analysis of the strengths and weaknesses across the five factors given in Table 1 suggested that the top-down approach was likely to be effective if additional practitioner involvement could be included at the goal-setting stage. Broadly speaking, the model thus represents a direct attempt to develop the top-down approach by improving the commitment of practitioners – incorporating their objectives – while ensuring continuation of the financial support required to institutionalize the program. PBM therefore focuses on a mechanism for achieving practitioner participation in the design of measurement programs, an idea supported by the research community.7 This participation aims for active support, a sense of ownership, and a common understanding of the measures and their value.

The PBM is illustrated in Figure 1. The general stages of the framework are similar to those found in existing frameworks, as is the iterative nature of the model. The distinguishing feature of PBM is found at the goal-setting stage, where practitioners and decision makers are given equal status; as far as the authors are aware, this is the first published account of such a process. The approach addresses a widely cited area of concern5 and seeks to provide a method of significantly improving practitioner commitment. The focus of PBM in relation to the key success factors outlined above is detailed in Table 2.


Implementing the PBM

The PBM was implemented in Property and Casualty Solutions Delivery (PCSD), an established IT department of the AXA Group that provides IT services to the motor and home insurance lines of business within the U.K. AXA's U.K. IT division employs more than 1,000 staff, with development carried out both in the U.K. and offshore; applications, hosted across multiple platforms, cover life and pensions as well as property and casualty insurance systems. The organization has been involved in several acquisitions and mergers, which have placed a significant emphasis within PCSD on integrating large, complex legacy systems, development processes, and software development departments. At the time, PCSD did not deploy a formal, structured measurement program although, in line with many organizations, a variety of basic measures were taken throughout the development life cycle. Though the organization had delivered many successful IT projects, the need for continuous improvement provided an environment in which to pilot the PBM framework.

In order to assess the impact of the pilot, the initial position was determined through a questionnaire with sections covering basic information, experience of previous measurement programs, and the value of measurement programs. The questionnaire was administered to 72 potential respondents: five decision makers and 67 practitioners. Overall, 39 of the 72 responded, a response rate of 54%; of these 39, four were IT Managers (decision makers), seven were Project Managers, and 28 were developers. The respondents ranked the factors considered to affect success, and Figure 2 illustrates the average ranking of the factors. Both groups supported the principal assertion of PBM that the commitment of the practitioner is crucial. Practitioners felt that the cost and complexity of the implementation was also an important factor though, interestingly, the decision makers ranked it as the least important.
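
For transparency, the following snippet reproduces the simple arithmetic behind the response figures reported above; all numbers are taken directly from the text.

# Figures from the text: 72 invited (5 decision makers, 67 practitioners);
# the responses break down into 4 IT Managers, 7 Project Managers, 28 developers.
invited = 72
responses = {"IT Managers": 4, "Project Managers": 7, "Developers": 28}

total = sum(responses.values())  # 39
print(f"{total} of {invited} responded ({total / invited:.0%})")
# -> 39 of 72 responded (54%)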

A key element of PBM is the assertion that a common understanding of the measurement program's objectives, and of how the gathered data is used, affects the success of the program. It was found that 31% of the developers were unaware of the reasons for collecting data in previous measurement programs. As might be expected of a top-down implementation, all of the decision makers and project managers were aware of the objectives of the previous programs. Similarly, although all decision makers knew how the gathered data was used, only 25% of the practitioner group had a similar understanding – again consistent with a top-down approach having previously been adopted, with information on data usage either not communicated beyond the decision makers or not perceived as valuable. Finally, although 82% of the decision maker and management practitioner groups believed that developers benefit from a measurement program, only 39% of the developers themselves agreed – a figure consistent with criticisms made of the GQM framework.

The PBM was introduced in the early lifecycle stages of a program so that it appeared as a natural part of the process. The program was part of the deployment of a new policy management system, based on the reuse of an in-house application from another company in the AXA Group. The project team consisted of experienced staff from within PCSD, all with at least five years of IT experience, supplemented by staff from an external supplier; only the views of staff directly employed by PCSD were considered. The introduction of PBM was preceded by a team training presentation describing the model and the proposed stakeholder involvement. The initial focus was placed on the joint goal-setting stage, which was facilitated via a brainstorming session. The outcome of that session was that only two goals would be used, based on evidence that the use of one or two simple goals and a small set of easily gathered measures helps establish a measurement program and contributes towards its success.9 During the workshop, the team assessed the potential goals in order to determine which two to use. Despite the training, practitioners were surprised that they had joint responsibility with their management for the objectives of the program. The goals recorded in the workshop were translated into questions and measures that were circulated via email among the team for review, comment, and finally agreement; these are shown in Table 3. A central repository was developed to capture the measurement data.
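
The article does not describe how the central repository was implemented. The following is a minimal sketch of the kind of structure it might take, assuming a SQLite store; the schema, column names, and example row are hypothetical, chosen only to preserve the goal-question-measure traceability agreed in the workshop.

import sqlite3

# Assumed schema: each row records one datum against the agreed
# goal/question/measure hierarchy, keeping data traceable to the workshop outputs.
conn = sqlite3.connect("measurements.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS measurement (
        id          INTEGER PRIMARY KEY,
        goal        TEXT NOT NULL,   -- one of the two agreed goals
        question    TEXT NOT NULL,   -- question derived from that goal
        measure     TEXT NOT NULL,   -- agreed measure (cf. Table 3)
        value       REAL NOT NULL,
        recorded_by TEXT NOT NULL,   -- practitioner submitting the datum
        recorded_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
# Hypothetical example row; goal/question/measure names are placeholders.
conn.execute(
    "INSERT INTO measurement (goal, question, measure, value, recorded_by) "
    "VALUES (?, ?, ?, ?, ?)",
    ("Goal 1", "Question 1.1", "example measure", 42.0, "developer-a"),
)
conn.commit()
conn.close()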


Evaluating the PBM

Once measurement data had been generated, each of the 10 participants was interviewed in order to assess the impact of PBM. The interviews were semi-structured, based on an interview plan, and were followed up with a second questionnaire (both were based on the initial questionnaire so that responses could be compared). The interview questions also attempted to determine whether there was an increase in practitioner commitment towards measurement programs, assessed via the effect practitioners felt the model would have on data accuracy and how important they felt a measurement program was to PCSD. The hypothesis was that participants are well placed to judge data accuracy: if they felt accuracy had improved, it was because the data they themselves reported was more accurate.

The results of the follow-up questionnaire are presented in Figure 3. Metrics integrity is the factor most positively affected by the application of PBM, providing support for the hypothesis above concerning the accuracy of data. It is interesting that respondents did not identify the more obvious 'Support of IT Staff' as the most important factor, and that the pilot team believed there would be negligible impact on support from senior management.

The members of the pilot team who had previously been involved in a measurement program at PCSD were unanimous in the view that practitioner commitment was critical to the success of a measurement program. Comments during the interviews supported earlier research in highlighting that involving practitioners was a positive step. For example, one interviewee stated that "(Developers) will buy-in if they are involved early," while another said "(previous programs made) no attempt to get people on board and explain it (the program objectives)" and "involving developers (in the goal-setting stage) can only be a good thing." A third stated that "(previously) no-one ever asked us our opinions in terms of what it (the program) was trying to achieve ... it was difficult to see where it was going ... it's a positive move asking practitioners to contribute." However, one participant, while supporting the use of PBM in PCSD, suggested that the approach might not work in some organizations. Comments also supported the hypothesis that practitioner involvement leads to improvements in accuracy, one developer citing "... gaining people's commitment and ultimately getting more accurate answers" while another stated that the approach produces results that are "more meaningful." In addition, one person relayed direct personal experience of the manipulation of data gathered by previous programs: developers with responsibility for identifying the cause of problems deliberately recorded inaccurate data to prevent 'blame' being attributed to their work, "avoid(ing) the finger being pointed at them."

The pilot study also highlighted the need for more interactive communication. An individual's level of experience with measurement programs may be an indicator of their ability to look beyond the measures and consider the approach itself. This suggestion is corroborated by the four interviewees with the most experience, who could readily distinguish between the outcomes and the framework. Others, though, needed these distinctions made clearer, particularly in order to identify the boundaries of responsibility that went with their role in the process. Previous research involving group interviews of practitioners11 also found evidence of this blurring of boundaries – a situation that appeared to call for a strengthening and combination of written and oral communication.

Given the focus on practitioners, there was a danger that PBM would have a negative impact on the commitment of senior management, as it reduces their control. The senior manager involved in the pilot acknowledged that loss of control would be a consequence of PBM but highlighted the potential "manipulation and fabrication" of results in previous top-down programs in which he had been involved. He was of the view that "concerns over loss of control are outweighed by positive benefits of gaining people's (practitioners') commitment and ultimately getting more accurate answers." The practitioners were asked to assess how they would view PBM were they senior managers, and their responses reflected positive views of teamwork and of maximizing the advantage of experienced practitioners.


Conclusion

This research identified practitioner commitment as a key factor affecting success and attempted to secure it by including practitioners in the goal-setting stage. The response of the pilot team suggests that practitioner resistance is not necessarily inherent, but may be a reaction to exclusion from the early stages of a top-down program. The inclusive nature of PBM, while potentially more complex, appears to provide an alternative perspective that may be of value to some IS organizations.

The research used experienced staff as participants, and this may have influenced the results. However, the background of this group made it possible to compare and contrast the practitioner-based approach with previous experience. The indications are that practitioners, particularly experienced ones, can add value at the goal-setting stage of a measurement program. Moreover, these practitioners appear to go on to display more commitment to the program.

Although the pilot study indicated that PBM helps secure practitioner commitment to a measurement program, feedback from two pilot team members suggested that the style of communication within the program was at times inappropriate; they felt it important to support written communication with oral communication. The pilot team also considered that PBM added cost and complexity to the implementation. While this point was not explored further, an increase in cost is almost inevitable given the involvement of more people in the early stages; however, this short-term cost could be recouped if decisions are made on the basis of more accurate data.

The pilot study was limited to a single implementation on a small software development program. While the participants provided significant insight, it would clearly be useful to study the effectiveness of PBM across a range of projects.


References

1. Basili, V.R. and Rombach, H.D. The TAME project: Towards improvement-oriented software environments. IEEE Transactions on Software Engineering 14, 6, (1988) 758–773.

2. Dekkers, C.A. and McQuaid, P.A. The dangers of using software metrics to (Mis)manage. IT Professional 4, 2, (2002) 24–30.

3. DeMarco, T. Controlling software projects: Management, measurement & estimation. Yourdon Press, Englewood Cliffs, N.J., 1982.

4. Fenton, N.E. and Pfleeger, S.L. Software metrics: A rigorous & practical approach, 2nd Edition. PWS Publishing, 1997.

5. Fenton, N.E. and Neil, M. Software metrics: Successes, failures and new directions. Journal of Systems and Software 47, 2-3, (1999) 149–157.

6. Grady, R.B. and Caswell, D.L. Software metrics: Establishing a company-wide program. Prentice-Hall, 1987.

7. Hall, T. and Fenton, N. Implementing effective software metrics programs. IEEE Software 14, 2, (1997) 55–65.

8. Hetzel, W.C. Making software measurement work: Building an effective measurement program. QED Publishing Group, 1993.

9. Pfleeger, S.L. Maturity, models, and goals: How to build a metrics plan. Journal of Systems and Software 31, 2, (1995) 143–155.

10. Pulford, K., Kuntzmann-Combelles, A. and Shirlaw, S. A quantitative approach to software management: The AMI approach. Addison-Wesley, 1996.

11. Rainer, A. and Hall, T. A quantitative and qualitative analysis of factors affecting software processes. Journal of Systems and Software 66, 1, (2003) 7–21.


Authors

S. T. Parkinson ([email protected]) is a Senior Project Manager at AXA UK.

R. M. Hierons ([email protected]) is a Professor of Computing at the School of Information Systems, Computing and Mathematics at Brunel University, Uxbridge, Middlesex, U. K.

M. Lycett ([email protected]) is a Professor of Information Systems Development at the School of Information Systems, Computing and Mathematics at Brunel University, Uxbridge, Middlesex, U. K.

M. Norman ([email protected]) is a Visiting Professor at the School of Information Systems, Computing and Mathematics at Brunel University, Uxbridge, Middlesex, U. K.


Footnotes

DOI: http://doi.acm.org/10.1145/1666420.1666456


Figures

Figure 1.

Figure 2. Relative Importance of Key Factors Affecting Implementation Success

Figure 3. Impact of Practitioner Based Model on Factors Affecting Implementation Success


Tables

Table 1.

Table 2.

Table 3.



©2010 ACM  0001-0782/10/0300  $10.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2010 ACM, Inc.


 
