Quality - Innovation - Vision

Effective Metrics for Managing a Test Effort
Presented By:

Shaun Bradshaw, Director of Quality Solutions, Questcon Technologies

Objectives
- Why Measure?
- Definition
- Metrics Philosophy
- Types of Metrics
- Interpreting the Results
- Metrics Case Study
- Q&A
Slide 2

Why Measure?
“Software bugs cost the U.S. economy an estimated $59.5 billion per year. An estimated $22.2 billion could be eliminated by improved testing that enables earlier and more effective identification and removal of defects.”
- US Department of Commerce (NIST)
Slide 3

Why Measure?
It is often said, “You cannot improve what you cannot measure.”
Slide 4

Definition
Test Metrics:
- Are a standard of measurement.
- Are gathered and interpreted throughout the test effort.
- Gauge the effectiveness and efficiency of several software development activities.
- Provide an objective measurement of the success of a software project.
Slide 5

Metrics Philosophy
- Keep It Simple
- Make It Meaningful
- Track It
- Use It
When tracked and used properly, test metrics can aid in software development process improvement by providing pragmatic & objective evidence of process change initiatives.
Slide 6

Metrics Philosophy – Keep It Simple
- Measure the basics first
- Clearly define each metric
- Get the most “bang for your buck”
Slide 7

Metrics Philosophy – Make It Meaningful
- Metrics are useless if they are meaningless (use the GQM model)
- Must be able to interpret the results
- Metrics interpretation should be objective
Slide 8

Metrics Philosophy – Track It
- Incorporate metrics tracking into the Run Log or defect tracking system
- Automate the tracking process to remove time burdens
- Accumulate throughout the test effort & across multiple projects
Slide 9

Metrics Philosophy – Use It
- Interpret the results
- Provide feedback to the Project Team
- Implement changes based on objective data
Slide 10

Types of Metrics – Base Metrics
- Raw data gathered by Test Analysts
- Tracked throughout the test effort
- Used to provide project status and evaluations/feedback
Examples: # Test Cases, # Executed, # Passed, # Failed, # Under Investigation, # Blocked, # 1st Run Failures, # Re-Executed, Total Executions, Total Passes, Total Failures
Slide 11

Types of Metrics – Base Metrics
- Raw data gathered by Test Analysts
- Tracked throughout the test effort
- Used to provide project status and evaluations/feedback
Examples: # Test Cases, # Executed, # Passed, # Failed, # Under Investigation, # Blocked, # 1st Run Failures, # Re-Executed, Total Executions, Total Passes, Total Failures
# Blocked: The number of distinct test cases that cannot be executed during the test effort due to an application or environmental constraint. Defines the impact of known system defects on the ability to execute specific test cases.
Slide 12

Types of Metrics – Calculated Metrics
- Tracked by the Test Lead/Manager
- Converts base metrics to useful data
- Combinations of metrics can be used to evaluate process changes
Examples: % Complete, % Test Coverage, % Test Cases Passed, % Test Cases Blocked, 1st Run Fail Rate, Overall Fail Rate, % Defects Corrected, % Rework, % Test Effectiveness, Defect Discovery Rate
Slide 13

Types of Metrics – Calculated Metrics
- Tracked by the Test Lead/Manager
- Converts base metrics to useful data
- Combinations of metrics can be used to evaluate process changes
Examples: % Complete, % Test Coverage, % Test Cases Passed, % Test Cases Blocked, 1st Run Fail Rate, Overall Fail Rate, % Defects Corrected, % Rework, % Test Effectiveness, Defect Discovery Rate
1st Run Fail Rate: The percentage of executed test cases that failed on their first execution. Used to determine the effectiveness of the analysis and development process. Comparing this metric across projects shows how process changes have impacted the quality of the product at the end of the development phase.
Slide 14
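Slide 14 defines the 1st Run Fail Rate in words; as code, both fail rates reduce to simple ratios over the base counts. A minimal sketch in Python (the function names are assumptions, and the Overall Fail Rate ratio is inferred from the sample values on the following slides rather than stated in the deck):

```python
# Illustrative sketch only; the deck defines these metrics in words, not code.

def first_run_fail_rate(first_run_failures: int, executed: int) -> float:
    """Percentage of executed test cases that failed on their first execution."""
    return 100.0 * first_run_failures / executed


def overall_fail_rate(total_failures: int, total_executions: int) -> float:
    """Percentage of all executions, including re-runs, that failed (assumed ratio)."""
    return 100.0 * total_failures / total_executions


# Values from the Sample Run Log on the following slides: 2 first-run failures
# out of 13 executed test cases, and 3 failures across 15 total executions.
print(round(first_run_fail_rate(2, 13), 1))  # 15.4
print(round(overall_fail_rate(3, 15), 1))    # 20.0
```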

Sample Run Log – Sample System Test

TC ID    Run Date   Actual Results                         Run Status   Current Status   # of Runs
SST-001  01/01/04   Actual results met expected results.   P            P                1
SST-002  01/01/04   Sample failure                         F            F                1
SST-003  01/02/04   Sample multiple failures               F, F, P      P                3
SST-004  01/02/04   Actual results met expected results.   P            P                1
SST-005  01/02/04   Actual results met expected results.   P            P                1
SST-006  01/03/04   Sample Under Investigation             U            U                1
SST-007  01/03/04   Actual results met expected results.   P            P                1
SST-008             Sample Blocked                         B            B                0
SST-009             Sample Blocked                         B            B                0
SST-010  01/03/04   Actual results met expected results.   P            P                1
SST-011  01/03/04   Actual results met expected results.   P            P                1
SST-012  01/03/04   Actual results met expected results.   P            P                1
SST-013  01/03/04   Actual results met expected results.   P            P                1
SST-014  01/03/04   Actual results met expected results.   P            P                1
SST-015  01/03/04   Actual results met expected results.   P            P                1
SST-016                                                                                  0
SST-017                                                                                  0
SST-018                                                                                  0
SST-019                                                                                  0
SST-020                                                                                  0

Slide 15
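To show how a run log like this rolls up into the base metrics from slide 11, here is a minimal sketch (Python; the record fields and status codes mirror the sample log's columns, and every name is an assumption made for illustration rather than anything prescribed by the deck):

```python
# Illustrative sketch; field names and status codes are assumed to mirror the sample log.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class TestCase:
    """One row of the run log: a test case plus the status of each of its runs."""
    tc_id: str
    run_statuses: List[str] = field(default_factory=list)  # "P", "F", or "U" per run
    blocked: bool = False                                   # "B" in the log, zero runs


def base_metrics(cases: List[TestCase]) -> Dict[str, int]:
    executed = [c for c in cases if c.run_statuses]
    return {
        "# Test Cases": len(cases),
        "# Executed": len(executed),
        "# Passed": sum(c.run_statuses[-1] == "P" for c in executed),
        "# Failed": sum(c.run_statuses[-1] == "F" for c in executed),
        "# Under Investigation": sum(c.run_statuses[-1] == "U" for c in executed),
        "# Blocked": sum(c.blocked for c in cases),
        "# Re-executed": sum(len(c.run_statuses) > 1 for c in executed),
        "1st Run Failures": sum(c.run_statuses[0] == "F" for c in executed),
        "Total Executions": sum(len(c.run_statuses) for c in executed),
        "Total Passes": sum(s == "P" for c in executed for s in c.run_statuses),
        "Total Failures": sum(s == "F" for c in executed for s in c.run_statuses),
    }


# A few rows from the sample log: SST-002 failed once, SST-003 failed twice and
# passed on its third run, and SST-008 is blocked with no executions.
log = [
    TestCase("SST-001", ["P"]),
    TestCase("SST-002", ["F"]),
    TestCase("SST-003", ["F", "F", "P"]),
    TestCase("SST-008", blocked=True),
]
print(base_metrics(log))
```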

Sample Run Log

Base Metrics
Metric              Value
Total # of TCs        100
# Executed             13
# Passed               11
# Failed                1
# UI                    1
# Blocked               2
# Unexecuted           87
# Re-executed           1
Total Executions       15
Total Passes           11
Total Failures          3
1st Run Failures        2

Calculated Metrics
Metric                Value
% Complete            11.0%
% Test Coverage       13.0%
% TCs Passed          84.6%
% TCs Blocked          2.0%
% 1st Run Failures    15.4%
% Failures            20.0%
% Defects Corrected   66.7%
% Rework             100.0%

Slide 16
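Read against the base metrics, most of the calculated values fall out as simple ratios: 13 of 100 test cases executed gives 13.0% Test Coverage, 11 passes out of 13 executed cases gives 84.6% TCs Passed, 2 blocked out of 100 gives 2.0%, 2 first-run failures out of 13 executed gives 15.4%, and 3 failures across 15 total executions gives 20.0%. The 11.0% Complete figure is consistent with counting only the 11 passed cases against all 100; the exact formulas behind % Defects Corrected and % Rework are not spelled out in the deck.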

Metrics Program – No Analysis
Issue: The test team tracks and reports various test metrics, but there is no effort to analyze the data.
Result: Potential improvements are not implemented, leaving process gaps throughout the SDLC. This reduces the effectiveness of the project team and the quality of the applications.
Slide 17

Metrics Analysis & Interpretation
Solution:
- Closely examine all available data
- Use the objective information to determine the root cause
- Compare to other projects
  - Are the current metrics typical of software projects in your organization?
  - What effect do changes have on the software development process?
Result: Future projects benefit from a more effective and efficient application development process.
Slide 18

Metrics Case Study
- Volvo IT of North America had little or no testing involvement in its IT projects.
- The organization's projects were primarily maintenance related and operated in a COBOL/CICS/Mainframe environment.
- The organization had a desire to migrate to newer technologies and felt that testing involvement would assure and enhance this technological shift.
- While establishing a test team, we also instituted a metrics program to track the benefits of having a QA group.
Slide 19

Metrics Case Study – Project V
- Introduced a test methodology and metrics program
- Project was 75% complete (development was nearly finished)
- Test team developed 355 test scenarios
- 30.7% 1st Run Fail Rate
- 31.4% Overall Fail Rate
- Defect Repair Costs = $519,000
Slide 20

Metrics Case Study – Project T
- Instituted requirements walkthroughs and design reviews with test team input
- Same resources comprised both project teams
- Test team developed 345 test scenarios
- 17.9% 1st Run Fail Rate
- 18.0% Overall Fail Rate
- Defect Repair Costs = $346,000
Slide 21

Metrics Case Study

               1st Run Fail Rate   Overall Failure Rate   Cost of Defects
Project V      30.7%               31.4%                  $ 519,000.00
Project T      17.9%               18.0%                  $ 346,000.00
Reduction of   41.7%               42.7%                  $ 173,000.00

Reduction of 33.3% in the cost of defect repairs.
Every project moving forward, using the same QA principles, can achieve the same type of savings.
Slide 22
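The Reduction row follows directly from the two project rows: (30.7 - 17.9) / 30.7 is roughly 41.7% for the 1st Run Fail Rate, (31.4 - 18.0) / 31.4 is roughly 42.7% for the Overall Failure Rate, and $173,000 / $519,000 is roughly 33.3% for the cost of defect repairs ($519,000 - $346,000 = $173,000).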

Q&A
Slide 23