SOFTWARE QUALITY ASSURANCE (SQA)
Chapter 5
Objectives
• After completing this chapter, you should be
able to
✓Explain and define what measurement is
✓Explain the importance of measurement
✓Define the measurement steps
✓Understand the criteria for effective software
metrics
✓Understand the defined metrics used
throughout the Software Life Cycle
✓Understand and define measurement in
Process Improvement and Project Tracking
and Control
SOFTWARE MEASUREMENT
AND METRICS
Software Metrics:
3. Compute appropriate metrics
4. Once computed, the metrics are analyzed based on pre-established
guidelines and data on past performance
5. Interpret the results of the analysis to gain insight into the quality of the
software; the results of the interpretation lead to modification of the work
products
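The compute, analyze, and interpret steps above can be sketched in code. This is a minimal illustration, not a prescribed implementation: the chosen metric (defect density), the baseline value, and the tolerance are all assumptions for the example.

```python
# Hypothetical sketch of the compute -> analyze -> interpret steps.
# The metric, baseline, and tolerance below are illustrative, not from the text.

def compute_defect_density(defects: int, size_kloc: float) -> float:
    """Step: compute a metric (here, defects per KLOC)."""
    return defects / size_kloc

def analyze(value: float, baseline: float, tolerance: float = 0.2) -> str:
    """Step: compare the computed metric against pre-established guidelines
    derived from data on past performance."""
    if value > baseline * (1 + tolerance):
        return "above guideline"
    if value < baseline * (1 - tolerance):
        return "below guideline"
    return "within guideline"

# Step: interpret -- a result above the guideline would trigger
# modification (rework) of the affected work products.
verdict = analyze(compute_defect_density(defects=30, size_kloc=12.0), baseline=2.0)
print(verdict)  # 30/12 = 2.5 > 2.0 * 1.2 -> "above guideline"
```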
Attributes of Effective Software Metrics (1/2)
2. Language independence
• Metrics should be based on the phases of the life cycle
model, but not be dependent on programming language,
syntax, or semantics
5. Consistent dimensions
• The mathematical computation of the metric should use
measures that do not lead to inconsistent combinations of
units (units of measure)
[Figure: measurement activities across project phases – Project Assessment,
Defect & Effort Analysis at each phase, Project Measurement Evaluation, and
Post-Implementation Effort & Defects Analysis]
*Enhancement project – A project where the work involves extending existing
functionality/modules/components
*Defect Injection Rate – Ratio of in-process and customer-reported defects with respect
to the Total Size of the Product in Lines of Code or Function Points
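The Defect Injection Rate definition above translates directly into code. The sample counts and product size below are made up purely for illustration.

```python
def defect_injection_rate(in_process_defects: int,
                          customer_defects: int,
                          size: float) -> float:
    """Defect Injection Rate as defined above:
    (in-process defects + customer-reported defects) / total product size,
    where size is measured in LOC or Function Points."""
    return (in_process_defects + customer_defects) / size

# Illustrative numbers: 40 in-process + 10 customer-reported defects,
# product size 25 (e.g. KLOC).
print(defect_injection_rate(40, 10, size=25.0))  # (40+10)/25 = 2.0
```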
Measurement during Software Life Cycle
❑ Measurement for Requirement Phase
▪ For a new project, the project cost tracking system should
be initialized at the requirements phase
▪ It is also appropriate to initialize the project defect and
quality tracking system at this phase, since requirements
problems are often a major source of both expense and
later troubles
❑ Measure for Product Size
▪ The common units for measuring the size of a software work
product are either
o Lines of Code (LOC), or
o Function Points (FP), or
o Feature Points (FP).
*Function-oriented software metrics have been used to measure the functionality delivered
by the application
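As a sketch of function-oriented sizing, the snippet below computes an unadjusted Function Point count using the standard IFPUG average complexity weights. The component counts are illustrative assumptions, not data from the slides.

```python
# Unadjusted Function Point count, a minimal sketch.
# Weights are the standard IFPUG *average* complexity weights:
# External Input=4, External Output=5, External Inquiry=4,
# Internal Logical File=10, External Interface File=7.

AVERAGE_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_function_points(counts: dict) -> int:
    """Weighted sum of external inputs/outputs/inquiries and logical files."""
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

# Illustrative application profile.
ufp = unadjusted_function_points({"EI": 6, "EO": 4, "EQ": 2, "ILF": 3, "EIF": 1})
print(ufp)  # 6*4 + 4*5 + 2*4 + 3*10 + 1*7 = 89
```

A full Function Point count would further classify each component as low/average/high complexity and apply a value adjustment factor; the average-weight form above is the simplest usable variant.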
Measurement during Software Life Cycle
[Requirements Stability Index scale: most stable = 1; less stable > 1]
▪ After consultation with customers and research on their needs, the project
begins with the creation of a requirements document.
o The requirements document expresses what the customer or client needs and expects,
and what the developer will provide.
o The customer representative group reviews the document and, if in agreement
with its specifications, signs it (signing off).
o The signing-off process is intended to ensure that customer representatives or clients
have agreed, in writing, on the specifics involved
Requirements Related Metrics
2) Requirements Stability Index (RSI) (continued)
▪ Almost certainly, once the design and development process is
underway, customers will think of changes or enhancements they
would like, a phenomenon known as "requirements creep" or
"feature creep"
▪ If not managed with a firm hand, requirements/feature
creep can result in lost time and money, and either a project far
beyond the scope of what was originally foreseen, or a failed
project.
▪ Requirements stability is related to size stability.
o Size stability, which is derived from changes in the size estimate as
time goes on, provides an indication of the completeness and
stability of the requirements, the understanding of the requirements,
design thoroughness and stability, and the capability of the software
development staff to meet the current budget and schedule
o Size instability may indicate the need for corrective action
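The slides give RSI's interpretation (most stable = 1, less stable > 1) but not its formula. The sketch below assumes one common form consistent with that scale: the baselined requirement count plus cumulative changes, divided by the baselined count. Treat the formula as an assumption, and the numbers as illustration only.

```python
# ASSUMED formula (not stated on the slides), chosen to match the
# interpretation "most stable = 1, less stable > 1":
#   RSI = (baselined + added + deleted + modified) / baselined

def requirements_stability_index(baselined: int, added: int,
                                 deleted: int, modified: int) -> float:
    return (baselined + added + deleted + modified) / baselined

print(requirements_stability_index(100, 0, 0, 0))    # 1.0 -> perfectly stable
print(requirements_stability_index(100, 10, 5, 15))  # 1.3 -> requirements creep
```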
[Figure: Requirements Stability Indicator]
Measurement during Software Life Cycle
❑ Measurement during Construction Phase
▪ Utilize reviews and inspections, and record defect data
▪ A second, more rigorous cost estimate should be
prepared as defect data continues to accumulate
▪ A third formal cost estimate should be prepared: it
will be very rigorous in accumulating costs to date
and very accurate in estimating costs to the
completion of the project
▪ Defect and quality data should also be recorded
during code reviews and inspections
▪ Complexity measures of the code itself can now be
performed well.
Measurement during Software Life Cycle
❑ Measurement during Testing Phase
▪ Both the defect data and the cost data from the testing
phase should be measured in detail and then analyzed for
use in subsequent defect prevention activities
▪ During the maintenance and enhancement phase, both
user satisfaction measures and defect measures should
be carried out
▪ User satisfaction is
o A basic metric for organizations that market software
o An important metric for internal information systems too
▪ Industry data shows a strong observed correlation
between defect levels and user satisfaction. From this
point of view, measuring software defect rates becomes
important
Measurement during Software Life Cycle
Terms Definition
Errors • Human mistakes
• Quantification would depend on an understanding of
the programmer's intentions
• Example: typographical and syntactical errors (typos and
syntax errors)
Defects • Improper conditions that are generally the result
of one or more errors
• Not all errors produce software defects (e.g. incorrect
comments or minor documentation errors)
• A defect could also result from non-programmer causes such as
improper packaging or handling of the software product
• Example: an error in the logo of the splash screen shown on
product start-up
Measurement during Software Life Cycle
Terms Definition
Bugs/Faults • A program defect that is encountered while operating the
product, either under test or in use
• The operational condition or operating environment is often
responsible for revealing a bug that could otherwise have
remained unexposed
• Thus, bugs result from defects, but not all defects cause
bugs (some are hidden and may never be found)
Failures • A malfunction of a user's installation
• It may result from a bug, incorrect installation, a
communication line fault, a hardware failure, incorrect settings
of environmental parameters for the software product, etc.
Problems • User-encountered difficulties
• May result from failures, misuse, or, at times, even from
misunderstanding
Defect Metrics
Defect Density at Acceptance Testing =
(Total no. of defects reported during System Testing) / (Actual Size of Product)
*Size = LOC/FP
Defect Metrics
➢ The differences/similarities between MTTF and
Defect Density are as follows
o MTTF shows the temporal dimension of defects (it measures the
time between successive failures), whereas defect density
measures defects relative to the software size (lines of code,
function points, etc.)
o "Failures" and "defects" have different meanings, but for practical
purposes there is no difference between the two terms
o We usually encounter the defects that cause higher failure rates.
The probability of failure associated with a "latent defect" is called its
size or "bug size". For mission-critical applications (such as air
traffic control systems), the operational profile and scenarios are
better defined. As such, the MTTF metric is more appropriate there.
o From the data-gathering perspective, time-between-failures data is
expensive. To be useful, such data requires a high
degree of accuracy. Gathering it requires special
arrangements, and that is not always easy
o Defect density has another appeal for the development organization
of commercial products: the opportunity to strengthen/improve their
testing processes.
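The contrast above can be made concrete: MTTF is computed from a timeline of failures, while defect density needs only a defect count and a size. The failure times and sizes below are made-up illustration data.

```python
# MTTF is temporal (mean time between successive failures);
# defect density is size-relative (defects per KLOC or per FP).

def mttf(failure_times_hours: list) -> float:
    """Mean time between successive failures, from a sorted failure timeline."""
    gaps = [b - a for a, b in zip(failure_times_hours, failure_times_hours[1:])]
    return sum(gaps) / len(gaps)

def defect_density(defects: int, size_kloc: float) -> float:
    """Defects per thousand lines of code."""
    return defects / size_kloc

# Illustrative data: failures observed at 0, 120, 260, and 420 operating hours.
print(mttf([0, 120, 260, 420]))  # gaps 120, 140, 160 -> mean 140.0 hours
print(defect_density(42, 16.8))  # about 2.5 defects/KLOC
```

Note the data-gathering asymmetry the text mentions: the first metric needs accurately timestamped failures, while the second needs only totals.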
Defect Metrics
2) Defect Rate
▪ "Defect Rate" is the expected number of defects over a
specified time period
▪ It is important for cost and resource estimates of the
maintenance phase of the software life cycle
▪ "Defect Rate" and "Defect Injection Rate" are not the same
▪ Defect Injection Rate (DIR) is defined as the ratio of in-process
defects plus customer-reported defects to the total
size of the product.
DRE = (No. of defects identified up to System Testing (both during review and testing)) /
[(No. of defects identified up to System Testing) + (No. of defects identified at
Acceptance Testing) + (No. of defects identified during warranty support) + (No. of
defects identified by the customer during work product reviews)]
Defect Metrics
4) Defect Removal Effectiveness
▪ To define defect removal effectiveness, we must first understand
the activities in the development process that are related to
defect injection (a defect being injected into a product, though
not intentionally) and defect removal, as shown below
[Figure: defect flow through a development step – defects existing at step
entry plus defects injected during development feed into defect detection and
defect repair; incorrect repairs and undetected defects carry forward as
defects existing at step exit]
*No. of defects removed = no. of defects detected minus the number of incorrect
repairs
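The DRE formula and the removed-defects note translate directly into code. The defect counts below are invented for illustration.

```python
# Sketch of the DRE formula from the slides, plus the note that
# no. of defects removed = no. of defects detected - no. of incorrect repairs.

def defects_removed(detected: int, incorrect_repairs: int) -> int:
    return detected - incorrect_repairs

def defect_removal_effectiveness(up_to_system_test: int,
                                 acceptance_test: int,
                                 warranty: int,
                                 customer_reviews: int) -> float:
    """DRE = defects found up to System Testing, divided by all defects found
    (up to System Testing + Acceptance Testing + warranty support +
    customer work-product reviews)."""
    total = up_to_system_test + acceptance_test + warranty + customer_reviews
    return up_to_system_test / total

print(defects_removed(detected=50, incorrect_repairs=3))  # 50 - 3 = 47
print(defect_removal_effectiveness(90, 6, 3, 1))          # 90/100 = 0.9
```

A DRE close to 1 means nearly all defects were caught before the product reached the customer.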
Test Metrics [1/6]
1) Testability measures
▪ Testability metrics are usually size or structure measures
used for the purpose of assessing ease of testing
▪ Useful testability metrics are those that can be derived from
requirements and design documents; we can use them to
assess future testability during the early stages of systems
development
▪ Testability metrics are used:
a) To identify potentially un-testable (or difficult to test)
components at an early stage in the development process
o Provides opportunity to redesign components/provide
additional testing effort
b) To assess the feasibility of test specifications in terms of the
provision of adequate test cases and appropriate test
completion criteria
c) To estimate the effort and schedule of test activities (testing
effort and duration)
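The uses above can be sketched with one common structural proxy for testability, McCabe's cyclomatic complexity (decision points + 1). The slides do not name a specific metric or threshold, so both the choice of proxy and the threshold of 10 are assumptions for the example.

```python
# ASSUMED proxy: McCabe's cyclomatic complexity, V(G) = decision points + 1.
# A high value flags components that may be difficult (or costly) to test.
# The threshold of 10 is a conventional but illustrative choice.

def cyclomatic_complexity(decision_points: int) -> int:
    return decision_points + 1

def flag_hard_to_test(components: dict, threshold: int = 10) -> list:
    """Identify potentially hard-to-test components early, so they can be
    redesigned or given additional testing effort (use (a) from the text)."""
    return [name for name, d in components.items()
            if cyclomatic_complexity(d) > threshold]

# Hypothetical design-level decision counts per component.
print(flag_hard_to_test({"parser": 14, "logger": 2, "scheduler": 9}))
# parser: V(G) = 15 > 10 -> flagged; the others stay within the threshold
```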
Test Metrics [5/6]
2) Test control metrics
▪ Test Resource Metrics
o These are measures of effort, time-scale and staffing levels,
and other cost items such as tool purchases, special test
equipment, training, etc.
o Test resource metrics are used for planning and monitoring
testing activities
▪ Test Coverage Metrics
o Measure the amount of the product that has been tested at
different levels of granularity
▪ Test Plan Metrics
o These indicate the amount of testing to be performed and the
effectiveness of test plans
o Test Plan metrics are used for planning and monitoring testing
activities
Test Metrics [6/6]
3) Test results metrics
▪ Usually based on defect counts
o Such counts can be related to the test items and the test
activity in terms of error rates
o Can also be made in terms of various defect categories
▪ A useful defect classification scheme would identify the
phase and/or activity during which the error was detected,
and the type and severity of the error.
▪ Defect rates are often used to define the pass and fail
criteria for a test item. For example, a module may be rejected
if its defect rate exceeds some predetermined limit.
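Such a pass/fail criterion is simple to express in code. The limit of 3 defects per KLOC below is an illustrative assumption, not a value from the slides.

```python
# Pass/fail criterion based on defect rate: reject a test item whose
# defect rate exceeds a predetermined limit (limit here is illustrative).

def passes(defects_found: int, size_kloc: float, limit: float = 3.0) -> bool:
    """Pass the module if its defects per KLOC are within the limit."""
    return (defects_found / size_kloc) <= limit

print(passes(defects_found=12, size_kloc=5.0))  # 12/5 = 2.4 <= 3.0 -> True
print(passes(defects_found=20, size_kloc=5.0))  # 20/5 = 4.0 >  3.0 -> False
```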
Metrics for Software Maintenance
Maintenance Process