Software vendors depend on writing, maintaining, and selling quality software products and solutions. But software product conception, planning, development, and deployment are complex, time-consuming, and costly activities. While the market for commercial software demands ever-higher quality and ever-shorter product development cycles, overall technological complexity and diversification keep increasing; one result is that the rate of project failure is increasing, too. Software products consist of data, functions, files, and images, but the resources that create them are human beings. Development organizations therefore need ways to measure human code-writing productivity, along with quality-assurance processes that ensure continuous improvement in each new product release.
The following steps outline a hypothetical software product life cycle:
Customer data. The sales force collects and enters customer data into a Siebel Systems customer relationship management system.
Product requirements. Product architects convert the customer data into product requirements and enter them into a Borland CaliberRM collaborative requirements management system for high-level product definition.
Development. Product development managers use project management tools (such as MS Project, ER/Studio, and Artemis) during product design and engineering. At the same time, the Concurrent Versions System (CVS) [2] supports source code development, allowing programmers to track code changes and work in parallel.
Testing. As the coding phase concludes, the quality-assurance team uses various testing tools (such as Purify, PureCoverage, and TestDirector) to isolate defects and perform integration, scalability, and stress tests, while the build-and-packaging team uses other tools to generate installable CD images and burn CDs. In this phase of product development, the Vantive system tracks product issues and defects. Testers open Vantive tickets, or descriptions of problems, that product developers then examine and eventually resolve. RoboHelp and Documentum support the team's documentation efforts.
Release and maintenance. The software product itself is finally released to the market, where a simpler customer-support and maintenance life cycle begins, aided by such tools as MS Project, the Concurrent Versions System (CVS), and Vantive.
What happens when something inevitably goes wrong: milestones slip, or productivity, quality, or customer satisfaction falls off? How does the development company address and solve these problems? Critical questions the product developer should be able to answer include:
Those struggling to find answers include product-line and quality-assurance managers, process analysts, directors of research, and chief technology officers.
One approach they might take is to change the product development process by adopting a more formal product life cycle [4], possibly introducing an integrated product development suite (such as those from Rational Software, Starbase, and Telelogic), leveraging embedded guidelines, integration, and collaboration functions. It may be their only option when the degree of software product failure is so great that the software product development life cycle must be completely reengineered. However, this approach is too often subjective and politically driven, producing culture shock yet still not solving problems in such critical areas as project management and customer support.
Another approach is to maintain the current process, acquire a much better understanding of life cycle performance, and take more specific actions targeting one or more phases of the life cycle. The resulting data is used to plan and guide future changes in a particular product's life cycle. This approach is highly objective, involving fewer opportunities for subjective decision making, while minimizing change and culture shock and promising extension to other phases of the product life cycle.
Software product life cycle management starts by measuring the critical features of the product and the activities performed in each phase of its life cycle. Useful life cycle metrics [1, 3] include:
Ideally, the data is aggregated and summarized in a product life cycle data warehouse, where it is then available for generating views, automated alerts, and correlation rules covering day-to-day operations, product status summaries, and top-level aggregated executive data. These views, alerts, and rules support a variety of users, from the product line manager to the process analyst, and from the software test engineer to the chief technology officer. Figure 2 outlines an agent-based data acquisition, alert, and correlation architecture for the hypothetical life cycle described earlier.
Much of this work is performed by software agents autonomously mining the data and providing automated alerts based on thresholds and correlation rules. Phase-specific alerts are generated in, for example, engineering, when fixing defects would take too long or require too many new lines of code. Global alerts are generated when, for example, the research and development expenses are not proportional to the sales levels for a specific product or when new requirements crop up toward the end of the development cycle. Such a system might alert development managers to the following situations:
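The threshold checks described above can be sketched as a simple agent rule. This is an illustrative sketch only: the metric names, threshold values, and `Alert` structure are assumptions, not the actual rules of any BMC system.

```python
# Hypothetical sketch of a threshold-based alert agent; metric names,
# thresholds, and the Alert structure are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    scope: str        # "phase" or "global"
    metric: str
    value: float
    threshold: float
    message: str

# Phase-specific thresholds for the engineering phase (illustrative values)
THRESHOLDS = {
    "avg_days_to_fix_defect": 14.0,   # alert when fixing defects takes too long
    "loc_per_defect_fix": 500.0,      # alert when fixes need too much new code
}

def check_phase_metrics(metrics: dict) -> list:
    """Compare collected engineering metrics against their thresholds."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(Alert("phase", name, value, limit,
                                f"{name} = {value} exceeds threshold {limit}"))
    return alerts

alerts = check_phase_metrics({"avg_days_to_fix_defect": 21.0,
                              "loc_per_defect_fix": 320.0})
for a in alerts:
    print(a.message)  # only the defect-resolution time crosses its threshold
```

A real agent would run such checks on a schedule against the metric history database and notify the responsible team, but the comparison logic itself stays this simple.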
Too much time to resolve product defects. The managers drill into details provided by the system and notice that some components keep changing, prompting them to organize a code review of the components and identify and order improvements to their design and modularity. As a result, the product becomes more stable, and the time to resolve defects decreases.
Too many defects. The system reports that many more defects are generated for product X than for the other, say, eight products for which the quality assurance managers are responsible. After analyzing the current resource allocation with their direct reports, they move resources from the most stable products to product X and notify the development organization of the situation. Focusing on the right products and quickly reacting to alerts increases overall product quality.
Missed milestones. The system reports that the correlation of metrics (such as rate of defect generation, time needed to resolve a support issue, and overall stability and quality) indicates a product's next release milestones are likely to slip. Further analysis shows the entire development staff is busy addressing current release problems. An analyst prepares a detailed report to alert company executives, who might then decide to: assign an expert product architect to assess the situation and propose a recovery plan; notify customers the next release is delayed (quantified based on the assessment); and review the product team's technical and management skills to determine whether and which actions (such as training and adjusting responsibilities) are needed to increase product quality and customer satisfaction.
In 2002, BMC Software implemented a prototype life cycle management approach called Measure, Alert, Understand, Improve, or MAUI, to manage several problematic software projects. Focusing on the engineering phase of the software product life cycle, it was designed to monitor development activities carried out through CVS and Telelogic's Synergy Configuration Management (SCM) system. Several months of daily monitoring revealed trends and patterns in the metrics and parameters of these projects' life cycles. The tables here summarize this data, along with the advice we generated for a number of critical BMC projects and teams.
The metrics in Table 1 help the development team analyze project activities. The software engineering monitor (SEM) agent collects them daily at the file level, aggregates them into directories and projects according to the hierarchies defined in the underlying SCM tools, and stores them in a metric history database for later use [5]. The SEM agent generates alerts and events and notifies development team members automatically when metric thresholds are crossed.
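The roll-up from files to directories to projects that the SEM agent performs can be sketched as follows. The file paths and the single lines-of-code metric are simplifying assumptions; the real agent reads the hierarchy from the underlying SCM tools.

```python
# Sketch of rolling daily file-level metrics up a directory hierarchy,
# in the spirit of the SEM agent; the data layout is an assumption.
from collections import defaultdict
from pathlib import PurePosixPath

# File-level daily metrics: path -> lines of code changed (illustrative)
file_metrics = {
    "jupiter/server/core.c": 120,
    "jupiter/server/net.c": 40,
    "jupiter/client/ui.c": 15,
}

def aggregate_by_directory(metrics: dict) -> dict:
    """Sum file-level metrics into every ancestor directory."""
    totals = defaultdict(int)
    for path, value in metrics.items():
        for parent in PurePosixPath(path).parents:
            if str(parent) != ".":  # skip the implicit root
                totals[str(parent)] += value
    return dict(totals)

totals = aggregate_by_directory(file_metrics)
print(totals["jupiter"])         # 175: the whole project
print(totals["jupiter/server"])  # 160: one component
```

Storing these per-level totals daily is what makes the 200-day trend analysis and threshold alerts possible without re-scanning the repository.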
For any given metric collection cycle the observation window is 200 days, so all activities older than 200 days from collection time are ignored by the SEM agent. The indexes are special metrics defined by the development team with BMC's project managers in light of their own criteria for stability and quality. The indexes are defined as weighted sums of basic metrics and calculated using a formula normalizing their value from 0 to 10.
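An index of the kind described, a weighted sum of basic metrics normalized into the 0-to-10 range, might look like the sketch below. The weights, metric names, and normalization constant are illustrative assumptions, not BMC's actual definitions.

```python
# Sketch of a stability index: a weighted sum of basic metrics scaled
# into [0, 10]. Weights and metric names are illustrative assumptions.
def stability_index(metrics: dict, weights: dict, max_raw: float) -> float:
    """Weighted sum of basic metrics, normalized to the range 0..10."""
    raw = sum(weights[name] * metrics.get(name, 0.0) for name in weights)
    # Clamp, then scale so a raw score of max_raw maps to 10.
    raw = min(max(raw, 0.0), max_raw)
    return 10.0 * raw / max_raw

weights = {"defects_fixed": 0.5, "files_stable": 0.3, "tests_passing": 0.2}
metrics = {"defects_fixed": 8.0, "files_stable": 10.0, "tests_passing": 10.0}
print(stability_index(metrics, weights, max_raw=10.0))  # 9.0
```

Because the team chooses the weights, the index encodes the team's own criteria for stability and quality, as the text describes, while the 0-to-10 scale keeps indexes comparable across projects.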
Table 2 indicates the SEM agent has reported an overall alarm status for a project called Jupiter due to the low number of lines of code (LOC) per developer, thus supporting the following analysis:
This analysis allows project managers to proactively review resource allocation and task assignments and perform targeted code reviews of the aspects of the product that change most often and that involve an unacceptably high number of defects. Long-term savings of time and money in customer support and maintenance are potentially significant.
Table 3 indicates that the SEM agent has reported an overall OK status for another project, this one called Saturn, supporting the following analysis:
This analysis shows that project managers are doing a good job. Further analysis might also suggest these managers would probably be comfortable releasing the product earlier than predicted, even beating the schedule. Benefits from reinvesting immediate revenue into product improvements are potentially significant.
These early experiments in MAUI real-time development monitoring demonstrate the value of continuously measuring software engineering metrics (see Figure 3). The MAUI prototype provides a real-time feedback loop that helps teams and managers quickly identify problem areas and steer software development projects in the right direction.
BMC's adoption of MAUI has been limited by three main concerns:
Future MAUI improvements include:
Most of the value of life cycle management follows from automating data acquisition, providing alerts and correlation rules, identifying bottlenecks, increasing quality, optimizing critical processes, saving time, money, and resources, and reducing risk and failure rate. Key benefits include:
A software product life cycle that is stable, predictable, and repeatable ensures timely delivery of software applications within budget. A predictable life cycle is achievable only through continuous improvement and refinement of the processes and their tools. The real-time MAUI approach to product life cycle monitoring and analysis promises the continuous improvement and refinement of the software product life cycle from initial product concept to customer delivery and beyond.
1. Florac, W., Park R., and Carleton A. Practical Software Measurement: Measuring for Process Management and Improvement. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, 1997.
2. Fogel, K. Open Source Development with CVS. Coriolis Press, Scottsdale, AZ, 1999.
3. Grady, R. Practical Software Metrics for Project Management and Process Improvement. Prentice Hall, Inc., Upper Saddle River, NJ, 1992.
4. Jacobson, I., Booch, G., and Rumbaugh, J. The Unified Software Development Process. Addison-Wesley Publishing Co., Reading, MA, 1999.
5. Spuler, D. Enterprise Application Management With PATROL. Prentice Hall, Inc., Upper Saddle River, NJ, 1999.
Figure 1. Software product life cycle improvement scenarios.
Figure 2. Managed software product life cycle.
Figure 3. Project Jupiter and project Saturn trends viewed through the SEM Web interface.
Table 1. Metrics definitions; LOC = lines of code.
Table 2. Project Jupiter engineering activities monthly analysis, Jan. 2002.
Table 3. Project Saturn engineering activities monthly analysis, June 2002.
©2004 ACM 0002-0782/04/0300 $5.00