

Camelia Rosca
Carmen Buzdugan
"We can't control things which we
can't measure"
Types of Software Testing Metrics

Manual
- Test Case Productivity
- Test Execution Summary
- Defect Acceptance
- Defect Rejection
- Bad Fix Defect
- Test Execution Productivity
- Test Efficiency
- Defect Severity Index

Performance
- Performance Scripting Productivity
- Performance Execution Summary
- Performance Execution Data - Client Side
- Performance Execution Data - Server Side
- Performance Test Efficiency
- Performance Severity Index

Automation
- Automation Scripting Productivity
- Automation Test Execution Productivity
- Automation Coverage
- Cost Comparison

Common Metrics
- Effort Variance
- Schedule Variance
- Scope Change
Testing Metrics - examples
%ge Test cases Passed = (No. of Test cases Passed / Total no. of Test cases Executed) * 100.
%ge Test cases Failed = (No. of Test cases Failed / Total no. of Test cases Executed) * 100.
%ge Test cases Blocked = (No. of Test cases Blocked / Total no. of Test cases Executed) * 100.
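The three percentages above can be sketched in Python; the counts used here are hypothetical example values, not figures from the document:

```python
def pct(count, total_executed):
    """Express a test-case count as a percentage of executed test cases."""
    return count / total_executed * 100

# Hypothetical run: 200 test cases executed in total (assumed numbers).
executed = 200
passed, failed, blocked = 150, 30, 20

print(pct(passed, executed))   # 75.0 -> %ge Test cases Passed
print(pct(failed, executed))   # 15.0 -> %ge Test cases Failed
print(pct(blocked, executed))  # 10.0 -> %ge Test cases Blocked
```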

Test Case Productivity = (Total Raw Test Steps / Efforts (hours)) Step(s)/hour

Defect Acceptance = (Number of Valid Defects / Total Number of Defects) * 100 %
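The same metric as a sketch, with invented defect counts:

```python
def defect_acceptance(valid_defects, total_defects):
    """Defect Acceptance = (Number of Valid Defects / Total Number of Defects) * 100."""
    return valid_defects / total_defects * 100

# e.g. 40 of 50 raised defects accepted as valid (hypothetical figures)
print(defect_acceptance(40, 50))  # 80.0 %
```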

Performance Test Efficiency = ((Req met during PT) - (Req not met after Signoff of PT)) / (Req met during PT) * 100 %
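A minimal sketch of the formula above, assuming hypothetical requirement counts:

```python
def performance_test_efficiency(req_met_during_pt, req_not_met_after_signoff):
    """Performance Test Efficiency per the formula above, as a percentage."""
    return (req_met_during_pt - req_not_met_after_signoff) / req_met_during_pt * 100

# e.g. 20 requirements met during PT, 2 not met after signoff (hypothetical)
print(performance_test_efficiency(20, 2))  # 90.0 %
```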

Effort Variance = ((Actual Effort - Estimated Effort) / Estimated Effort) * 100 %
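Effort Variance can be sketched the same way; the effort figures are assumed for illustration:

```python
def effort_variance(actual_effort, estimated_effort):
    """Effort Variance as a percentage of the estimated effort."""
    return (actual_effort - estimated_effort) / estimated_effort * 100

# e.g. 110 actual hours against a 100-hour estimate (hypothetical)
print(effort_variance(110, 100))  # 10.0 % over estimate
```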

Importance of Test Coverage?
Finding areas of a requirement not implemented by a set of test cases

Helps to create additional test cases to increase coverage

Identifying meaningless test cases that do not increase coverage
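The first point, finding requirement areas no test case implements, can be sketched as a simple traceability check; the requirement and test-case IDs here are invented for illustration:

```python
# Hypothetical requirements-to-test-cases traceability map (IDs assumed).
requirement_coverage = {
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": [],             # no test case exercises this requirement
    "REQ-3": ["TC-03"],
}

# Requirements with no covering test case are candidates for new tests.
uncovered = [req for req, tests in requirement_coverage.items() if not tests]
print(uncovered)  # ['REQ-2']
```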

Benefits of Test Coverage?
It can help assure the quality of the tests

It can help identify what portions of the code were actually tested for the
release or fix

It can help to determine the paths in your application that were not tested

Prevent defect leakage

To keep Time, Scope and Cost under control

Early defect prevention.

Test coverage
by feature
by GUI icon
by instrumentation
by structure
by scenario
by transition
by web script, web page, application and