Test Metrics

Reference: GLT Testing Services

What are Test Metrics?
Parameters that objectively measure various aspects of the software testing process:
• Test Effort
• Test Schedule
• Test Status
• Defects
• Test Efficiency
• Test Effectiveness

Why do we need Test Metrics?
• To quantitatively analyse the current level of maturity in testing and set goals/objectives for the future
• To provide a means for managing, tracking and controlling the status of testing
• To provide a basis for estimation (today's data is tomorrow's historic data)
• To objectively measure the effectiveness and efficiency of testing
• To identify areas for process improvement
• To give insight into the quality of the product

Index
Base Metrics
• Project Management Metrics
• Test Progress Metrics
• Defect Metrics
Derived Metrics
• Test Efficiency Metrics
• Test Effectiveness Metrics
• Group Standard Testing Metrics

Project Management Metrics
• Effort Variance
Indicates deviation of actual effort from planned effort for the project.

Where collected: All Phases
Data Source: NIKU
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks

[Chart: Test Effort Variance (Planned vs Actual) — planned vs actual effort (person days) across release versions 1-5, e.g. planned 70 vs actual 77]
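As a minimal sketch, the variance can be computed per release. Note the percentage form is an assumption; the deck only describes the metric as the deviation of actual from planned effort.

```python
def effort_variance_pct(planned_days: float, actual_days: float) -> float:
    """Effort variance as a percentage of planned effort.

    Percentage convention ((actual - planned) / planned * 100) is an
    assumption; the deck defines only the deviation itself.
    """
    if planned_days <= 0:
        raise ValueError("planned effort must be positive")
    return (actual_days - planned_days) / planned_days * 100

# Example echoing the chart figures: planned 70 vs actual 77 person days.
print(round(effort_variance_pct(70, 77), 1))  # 10.0 (10% over plan)
```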

Project Management Metrics
• Schedule Variance
Indicates deviation of actual schedule dates (start and end dates) from planned schedule dates for the project.

Where collected: All Phases
Data Source: NIKU
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks

[Chart: Schedule Variance — planned vs actual start and end dates across release versions 1-3, Apr-Sep 2007]

Test Progress Metrics
• Test Case Preparation Status
Indicates status of test case preparation against the number of test cases planned to be written.
Where collected: Test Case Preparation Phase
Data Source: Manual Tracking, QC
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks
[Chart: Test Case Preparation Status — counts of test cases completed, in progress, and not completed (approx. 50 / 40 / 10)]

Test Progress Metrics
• Test Execution Status
Indicates status of test execution against the number of test cases planned to be executed.
Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks

Test Progress Metrics
• Planned vs Actual Execution
Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks

[Chart: Test Status S-Curve — cumulative planned vs actual test case execution by week (weeks 1-3); planned approx. 100/116/120 vs actual 92/102/105]
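A sketch of the planned-vs-actual comparison behind an execution S-curve; the weekly cumulative counts below loosely echo the chart's figures and are illustrative, not authoritative data.

```python
def execution_status_pct(planned: list[int], actual: list[int]) -> list[float]:
    """Percent of planned test case executions achieved each week."""
    return [round(a / p * 100, 1) for p, a in zip(planned, actual)]

planned_cum = [100, 116, 120]  # cumulative planned executions, weeks 1-3
actual_cum = [92, 102, 105]    # cumulative actual executions, weeks 1-3
print(execution_status_pct(planned_cum, actual_cum))  # [92.0, 87.9, 87.5]
```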

Defect Metrics
• Defects Distribution by Severity/Status
Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks

Defect Metrics
• Defects Distribution by Root cause
Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Process improvement

[Chart: Defects Distribution by Root Cause — number of defects per root cause, e.g. requirements miss/unclear, environment setup, integration test, code/unit test, script miss, control record, data]

Test Efficiency Metrics
• Test Case Writing Productivity
Total Test Cases created / Total person days of effort involved in creating test cases
Where collected: Test Case Preparation Phase
Data Source: QC, NIKU
Main User: Test Lead, Test Manager
Reason for Use: Track progress; analyse test efficiency; process improvement

• Test Case Execution Productivity
Total Test Cases executed / Total person days of effort involved in executing test cases
Where collected: Test Execution Phase
Data Source: QC, NIKU
Main User: Test Lead, Test Manager
Reason for Use: Track progress; analyse test efficiency; process improvement
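Both productivity metrics above are the same ratio, taken straight from the deck's formulas; the figures in the example are hypothetical.

```python
def productivity(test_cases: int, person_days: float) -> float:
    """Test cases created (or executed) per person day of effort,
    per the deck's productivity formulas."""
    return test_cases / person_days

# Hypothetical figures: 120 test cases written in 15 person days.
print(productivity(120, 15))  # 8.0 test cases per person day
```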

Test Efficiency Metrics
• % (Review + Rework) Effort = (A / B) × 100
where
A = (review + rework) effort for writing test cases
B = total effort for writing test cases (creation + review + rework)

Where collected: Test Case Preparation Phase
Data Source: NIKU
Main User: Test Lead, Test Manager
Reason for Use: Track progress; analyse test efficiency; process improvement
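The ratio above as a sketch; the effort figures are hypothetical.

```python
def review_rework_pct(review_rework_effort: float, total_effort: float) -> float:
    """% (Review + Rework) Effort = (A / B) x 100, per the deck's formula."""
    return review_rework_effort / total_effort * 100

# Hypothetical: A = 6 person days of review + rework out of B = 30 total.
print(review_rework_pct(6, 30))  # 20.0
```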

Test Efficiency Metrics
• Rejected Defects
% rejected defects = (Number of defects rejected / Total number of defects logged) × 100

Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Analyse test efficiency; process improvement
[Charts: Defect Rejection Ratio (Defects Detected vs Rejected) — 154 valid defects (97%) vs 5 rejected (3%); Root Cause of Rejected Defects — code/unit test, data, use case update, working as designed]
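A sketch of the rejection ratio using the slide's own pie-chart figures (5 rejected alongside 154 valid, so 159 logged in total — the total is inferred, not stated).

```python
def rejected_defect_pct(rejected: int, total_logged: int) -> float:
    """% rejected defects = (rejected / total logged) x 100."""
    return rejected / total_logged * 100

# Slide figures: 5 rejected out of 159 logged (154 valid + 5 rejected).
print(round(rejected_defect_pct(5, 159), 1))  # 3.1
```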

Test Effectiveness Metrics
• Defect Removal Efficiency (DRE)
% DRE (for a testing phase) =
  # valid defects detected in the current testing phase
  ÷ (# valid defects detected in the current phase + # valid defects detected in the next testing phase)
  × 100

% DRE (for all testing phases) =
  # valid defects detected pre-production
  ÷ (# valid defects detected pre-production + # defects detected post-production)
  × 100

Where collected: Post Test Execution Phase
Data Source: QC, defects data for next testing phase
Main User: Test Lead, Test Manager
Reason for Use: Analyse test effectiveness; process improvement
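Both DRE formulas as a sketch; the defect counts are hypothetical.

```python
def dre_phase(current: int, next_phase: int) -> float:
    """% DRE for one testing phase (first formula on the slide):
    valid defects caught now vs those that escaped to the next phase."""
    return current / (current + next_phase) * 100

def dre_overall(pre_production: int, post_production: int) -> float:
    """% DRE across all testing phases (second formula):
    pre-production vs post-production defects."""
    return pre_production / (pre_production + post_production) * 100

# Hypothetical: 80 defects caught in SIT, 20 escaped to UAT.
print(dre_phase(80, 20))               # 80.0
# Hypothetical: 150 caught before go-live, 10 found in production.
print(round(dre_overall(150, 10), 1))  # 93.8
```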

Test Effectiveness Metrics
• Requirements Coverage
Indicates the distribution of requirements covered by test cases along with the status
Where collected: Test Case Preparation and Test Execution Phases
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Analyse test effectiveness; process improvement

Test Effectiveness Metrics
• Test Coverage
# valid defects not mapped to test cases vs # valid defects mapped to test cases

Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Analyse test effectiveness; process improvement

[Chart: Test Coverage — valid defects mapped to test cases (89%) vs not mapped (11%)]
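The mapped/unmapped split as a sketch; the counts below are hypothetical values chosen to reproduce the slide's 89% / 11% ratio.

```python
def coverage_split(mapped: int, unmapped: int) -> tuple[float, float]:
    """Percent of valid defects mapped vs not mapped to test cases."""
    total = mapped + unmapped
    return (round(mapped / total * 100, 1), round(unmapped / total * 100, 1))

# Hypothetical counts reproducing the slide's split.
print(coverage_split(89, 11))  # (89.0, 11.0)
```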

Test Effectiveness Metrics
• Cost of Quality
Total testing effort (person days) / # valid defects found

Where collected: Test Execution Phase
Data Source: QC, NIKU
Main User: Test Lead, Test Manager
Reason for Use: Analyse test effectiveness and efficiency; process improvement
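The ratio as defined on the slide; the figures are hypothetical.

```python
def effort_per_valid_defect(testing_effort_days: float, valid_defects: int) -> float:
    """Cost of Quality = total testing effort (person days) / valid defects found."""
    return testing_effort_days / valid_defects

# Hypothetical: 77 person days of testing, 154 valid defects found.
print(effort_per_valid_defect(77, 154))  # 0.5 person days per valid defect
```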

Group Standard Testing Metrics
• Testing Effort
Actual Testing Effort / Total Project Effort (in person hours)

Where collected: All testing phases
Data Source: NIKU
Main User: Test Lead, Test Manager
Reason for Use: Measures the proportion of effort spent on testing against the whole project

Group Standard Testing Metrics
• Test Effectiveness Indicator
A / (A + B)
where
A = number of defects found in all test stages
B = number of latent defects found during the first month after implementation

Where collected: All testing phases
Data Source: QC; manual submission by Testing CoE or Regional IT Quality Leads
Main User: Test Lead, Test Manager
Reason for Use: Measures the proportion of defects discovered in all formal testing stages (SIT, UAT, NFR/performance test, OAT, etc.) against those discovered in the first month of production operations
