Daniel Moune: Software Measurement and Metrics (Full Set of Slides)
Introduction
Basics of Measurements
Goal-based Framework of Measurement
Daniel Moune
MSc, Software Engineering, 2013
Network Engineer CCNA, CCNP, MCITP 2008
ICT Department
ICT University, Yaounde Campus
August 2018
Outline
• Introduction
• What is measurement?
• Measurement purpose
• Software Measurement Challenges
• Basics of Measurements
• Theory of measurement
• Measurement scales and types
• Empiric Investigations
• Products Measurement Models
• LOC Model
• Function-Points Model
• Cost Construction Model
• Process Measurement Models
• Attributes classification
• External / Internal Metrics
• Goal-based Framework for software measurement
• Attributes classification
• External / Internal Metrics
Introduction
Measurement = Process used to assign values to characteristics or
attributes of a real world entity.
Measure = Quantitative/Qualitative indication of the size of some
product or process attribute.
Metric = Quantitative measure of the degree to which a system
component or process possesses a given attribute.
We use measurement in general
• To describe objects (modeling)
• To assess objects/process/methods (quality assessment)
• To predict trends and tendencies (decision-making)
Introduction
Problems
• Software is an abstract product, so how do we define
measurable characteristics on something abstract?
• Software quality depends on its development process/methods.
How do we assess processes and methods?
• Unlike other engineering domains, Software Engineering has only
fuzzy models for defining project requirements, costs and risks.
So how do we assess these management parameters?
Introduction
Problems
• The British Computer Society made a survey:
• Out of 1027 projects, only 130 were successful!
• $81 billion was lost on US software projects in 1995
• 53% of projects cost almost 200% of the original estimate
• Success was defined as
• Delivered with all requirements met
• Within budget
• Within time
• Conformant to quality standards agreed on
Introduction
• In the software industry, developers will never declare
their products defect free (why?).
• Software complexity: the number of possible uses the product permits.
A typical industrial product permits fewer than a few thousand use cases
and configurations; software typically permits far more.
• Product visibility: software products are abstract, so software faults,
stored on diskettes or CDs, are invisible.
• Product development and production process: for an industrial product,
defects can be detected during both development and manufacturing.
Unfortunately, software defects can be detected only during the
development stage
Introduction
In Software Engineering, we need measurements
in order to achieve almost the same goals
• To describe software components with measurable
characteristics (modeling and design)
• To test and validate software deliverables/methods (quality
assessment)
• To predict project feasibility, risks, costs, maintenance and
delivery schedules (project management)
Introduction
Questions requiring Measurements for
project management
• What is the cost of each stage of the Software Development process?
• How productive is the staff ?
• How good are the Software deliverables?
• Will the customer be satisfied with the final product?
• When are we going to deliver the early releases, the final releases ?
• What do we need to do in order to mitigate delays and budget overflow?
• How much can we sell the product to a customer?
Introduction
Questions requiring Measurements for
project design and implementation
• Are the requirements clearly defined and testable?
• What kind of faults are we going to test?
• How are we going to detect these faults ?
• Are the sizes of project modules / units correct?
• How complex is the project design?
• Which framework are we going to use to reduce time for coding / testing /
maintenance?
• When should we stop modifying / working on the project to satisfy the user
requirements?
Basics of Measurements
Measurement types
Measurements can be classified as
• Direct measurement
• Qualitative
• Nominal (unordered)
• Ordinal (ordered using some logic)
Ordinal Scale
• Values of the measure are grouped into classes and these classes can be
compared using a relation of order
• Any mapping to numerical values should preserve the order
Suppose we have to evaluate the conformance of each section of the SRS documentation
against the description provided in the IEEE 830-1998 SRS template. This can be done
with a survey using the following scale:

Incomprehensible < Not conformant < Neutral < Moderately conformant < Fully conformant

M(x)          0                  1                2         3                       4
Conformance   Incomprehensible   Not conformant   Neutral   Moderately conformant   Fully conformant
Then each section score would be obtained by calculating the mode of this variable’s distribution.
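As a minimal sketch of this mode-based scoring (the survey responses below are invented for illustration):

```python
from statistics import mode

# Ordinal conformance scale from the slide: 0 = incomprehensible ... 4 = fully conformant
SCALE = ["Incomprehensible", "Not conformant", "Neutral",
         "Moderately conformant", "Fully conformant"]

# Hypothetical survey responses for one SRS section, as class indices
responses = [3, 4, 3, 2, 3, 4, 1, 3]

score = mode(responses)        # the most frequent class is the section score
print(score, SCALE[score])     # -> 3 Moderately conformant
```

Note that only the mode (not the mean) is meaningful here, since ordinal classes carry order but no arithmetic weight.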
Interval Scale
• Values of the measure are grouped into classes, and these classes can be
compared using relational operators
• Each class has a weight, given by the size of its interval. This weight
influences the metrics computation
• Any mapping should preserve ordering and weights
• Adding and subtracting values is possible, but not multiplication and division
Suppose we have to evaluate the responsiveness of the web pages of our project's prototype, which has been online since last week.
This can be done by asking each participant to specify the time taken by their browser to render the gallery page while it is
loading, using the following scale:

Class   0          1            2             3              4
M(x)    0–200 ms   200–500 ms   500–1000 ms   1000–5000 ms   > 5000 ms

The responsiveness score would be obtained by calculating the average of this variable's distribution.
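A sketch of this averaging, assuming each interval is represented by its midpoint (the midpoint values and participant answers below are illustrative; the open-ended > 5000 ms class needs an arbitrary representative value):

```python
# Representative midpoints (ms) for the five response-time classes above;
# the value for the open-ended top class is an arbitrary assumption.
MIDPOINTS = {0: 100, 1: 350, 2: 750, 3: 3000, 4: 7500}

# Hypothetical participant answers, given as class indices
answers = [0, 1, 1, 2, 0, 1]

avg_ms = sum(MIDPOINTS[a] for a in answers) / len(answers)
print(f"average rendering time: {avg_ms:.1f} ms")
```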
Proportional/Ratio Scale
• Values of the measure are grouped into classes, and these classes can be
compared using relational operators
• There is a zero value representing the absence of the measured attribute
• Each class has a weight, given by the size of its interval. This weight
influences the metrics computation
• Any mapping should preserve ordering, weights and ratios, and start from 0
• Adding, subtracting, multiplying and dividing values are all allowed
• It is possible to compare several attributes with one single measure
For instance, the number of repeated sections of code per software module is a clear indicator of the level of
modularity in the project design.
Another example is algorithmic complexity: the number of elementary operations necessary to successfully
execute an algorithm, as a function of the size of its inputs, is one basic measure of software efficiency.
Absolute Scale
• Values of the measure are obtained by counting the
number of elements in an entity set
• There is only one possible measurement mapping
namely the actual count
• All arithmetic operations can be performed on the
resulting count
For instance, the length of code measured in KLOC.
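A minimal LOC counter can be sketched as below; what counts as a "line of code" is a convention, and this sketch uses one simple rule (non-blank, non-comment physical lines):

```python
def count_loc(source: str) -> int:
    """Count non-blank, non-comment physical lines in Python source."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

# Illustrative snippet: two code lines, one blank line, one comment
sample = "x = 1\n\n# a comment\ny = x + 1\n"
print(count_loc(sample))   # -> 2
```

Dividing the result by 1000 yields the KLOC figure used by the size-based models later in these slides.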
Meaningfulness of scales in
measurement
• Analyzing scales helps to determine the weight of the measure
in the product assessment
• Scales help to determine the data collection method and the
measurement method
• Scales help to determine which metrics to produce and how to
interpret them
• A statement involving measures or metrics interpretation is
meaningful if its truth value is not modified by any
transformation applied to the allowable scales
• Range: the distribution range is the difference between the largest and smallest values.
Researchers often quote the interquartile range, which is the range between the lower
quartile and the upper quartile
• Standard deviation: a measure of the average spread around the mean. It gives you an
idea of the average distance of the values to the mean.
• Variance: the square of the standard deviation.
• Project metrics : describe the project characteristics and execution (e.g. number of
developers, productivity, success rate, etc...)
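The descriptive statistics above can be computed directly with Python's standard library (the sample data, notionally module sizes in LOC, are invented for illustration):

```python
import statistics

# Illustrative sample: sizes (LOC) of seven modules
data = [120, 150, 180, 200, 240, 300, 420]

dist_range = max(data) - min(data)              # distribution range
q1, _, q3 = statistics.quantiles(data, n=4)     # lower and upper quartiles
iqr = q3 - q1                                   # interquartile range
sd = statistics.stdev(data)                     # sample standard deviation
var = statistics.variance(data)                 # variance = sd ** 2

print(dist_range, iqr, round(sd, 2), var)
```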
Metrics (Requirements)
Software metrics should satisfy the following requirements

General requirements
• Relevant
• Valid
• Reliable
• Comprehensive
• Mutually exclusive

Operative requirements
• Easy and simple
• Do not require independent data collection
• Immune to biased interventions
What to measure?
• This is one of the nightmares of QA analysts
• There are many approaches and models that have been defined
over the years
• Each model targets some dimensions of the software quality
assessment domain, so the model should be selected according
to the project goals
• A few of these models will be studied here.
Product metrics
LOC Model
Function-Point Model
Function-Point (FP)
A Function Point (FP) is a unit of
measurement expressing the amount of
business functionality an information
system (as a product) provides to a user. It is
used to measure software size. There are
two types of functions measured:
● Data Functions
● Transaction Functions
Function-Point (FP)
M(x)     0               1            2          3         4             5
Weight   Not important   Incidental   Moderate   Average   Significant   Essential
Adjustment Factor
1 Backup and recovery
2 Data communication
3 Distributed processing functions
4 Is performance critical
5 Existing Operating Environment
6 On-line data entry
7 Input transaction built over multiple screens
8 Master files updated on-line
9 Complexity of inputs, outputs, files, inquiries
10 Complexity of processing
11 Code design for re-use
12 Are conversion/installation included in design?
13 Multiple installations
14 Application designed to facilitate change by the user
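The slides do not give the combination formula; assuming the classic IFPUG-style computation, each of the 14 adjustment factors is rated 0–5 on the scale above, and the ratings combine into a Value Adjustment Factor applied to the unadjusted function points:

```python
def adjusted_fp(ufp: float, ratings: list[int]) -> float:
    """FP = UFP * (0.65 + 0.01 * sum of the 14 factor ratings) -- assumed IFPUG-style VAF."""
    assert len(ratings) == 14 and all(0 <= r <= 5 for r in ratings)
    vaf = 0.65 + 0.01 * sum(ratings)   # VAF ranges from 0.65 to 1.35
    return ufp * vaf

# Hypothetical example: 120 unadjusted FP, every factor rated "average" (3)
print(round(adjusted_fp(120, [3] * 14), 2))   # -> 128.4
```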
COCOMO Basics
• COCOMO (Constructive Cost Model), Barry W. Boehm, 1981, for cost and effort estimation.
• Empirical model based on projects’ experience derived from the analysis of 63
software projects in 1981.
• Computes software development effort as a function of program size (SLOC, KLOC)
• Procedural cost estimation model based on 02 parameters:
• Effort: Amount of labor required to complete a task. It is measured in person-days/
person-weeks/person-months
• Schedule: Amount of calendar time required to complete an activity. Unit here is hour/day/month/year
• Differentiates 03 types of software projects:

Software Project   Description
Organic            Small teams with good experience working with less rigid requirements
Semi-detached      Medium teams with mixed experience working with a mix of rigid and less rigid requirements
Embedded           Projects developed under tight hardware, software and operational constraints
COCOMO Basics
The basic model estimates Effort = a × (KLOC)^b person-months and
development Time = c × (Effort)^d months, using these coefficients
(Boehm, 1981):

Software Project   a     b      c     d
Organic            2.4   1.05   2.5   0.38
Semi-detached      3.0   1.12   2.5   0.35
Embedded           3.6   1.20   2.5   0.32
COCOMO Intermediate
The intermediate model multiplies the effort by an adjustment factor
derived from 15 cost drivers, Effort = a × (KLOC)^b × EAF, with
coefficients (Boehm, 1981):

Software Project   a     b      c     d
Organic            3.2   1.05   2.5   0.38
Semi-detached      3.0   1.12   2.5   0.35
Embedded           2.8   1.20   2.5   0.32
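As a sketch of the basic model (using the published 1981 basic-model coefficients; the values the slides' table intended may differ), the COCOMO equations can be coded directly:

```python
# Basic COCOMO (Boehm, 1981): Effort = a * KLOC**b (person-months),
# Time = c * Effort**d (calendar months).
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, kind: str) -> tuple[float, float]:
    a, b, c, d = COEFFS[kind]
    effort = a * kloc ** b       # person-months
    time = c * effort ** d       # calendar months
    return effort, time

# Hypothetical 32 KLOC organic project
effort, time = basic_cocomo(32, "organic")
print(f"effort = {effort:.1f} PM, time = {time:.1f} months")
```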
TrProg = Time required to program
NDelBug = Number of delivered bugs
Helpdesk Quality
HD Metrics (What?)
HD quality metrics : involves 03 groups of measures
• Calls metrics
• Calls density metrics = rate of customer requests for HD services as measured by the number of
calls
• Calls severity metrics = categorization of customer requests based on a weighted severity scale
applied to the customer requests recorded over a period of time
• HD success metrics = rate of success in response to customer request. A success is achieved by
completing the required service within the time determined in the service contract
• Productivity and effectiveness metrics
Calls severity metrics aim at detecting one type of adverse situation: increasingly
severe HD calls.
The computed results may contribute to improvements in all or parts of the user
interface (its “user friendliness”) as well as the user manual and integrated help
menus.
The Average Severity of HD Calls (ASHC): refers to failures detected during a period of
one year
HDP = HD Productivity
HDE = HD Effectiveness
CM quality metrics: deal with several aspects of the quality of maintenance services. A
distinction is needed between software system failures treated by the maintenance teams and failures of
the maintenance service, i.e. cases where the maintenance failed to provide a repair that meets
the designated standards or contract requirements. Software maintenance metrics are classified as
follows:
• Software System failures density metrics = rate of demand for corrective
maintenance, based on the records of failures identified during regular operation of the
software system
• Software system failures severity metrics = rate of severe software system failures
encountered by the corrective maintenance team, based on a weighted scale.
• Maintenance service failures metrics = rate of the maintenance service's
incapacity to complete failure corrections on time, or rate of failed
corrections.
• Software system availability metrics = rate of disturbance caused to the customer by
periods of time during which the services of the software system are unavailable or only
partly available
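As one illustrative reading of the availability metric (the formula is not given in the slides), availability can be expressed as the fraction of scheduled service hours during which the system was usable:

```python
def full_availability(service_hours: float, downtime_hours: float) -> float:
    """Fraction of scheduled service time the system was available (assumed formula)."""
    return (service_hours - downtime_hours) / service_hours

# Hypothetical year of service (8760 h) with 87.6 h of downtime
print(round(full_availability(8760, 87.6), 3))   # -> 0.99
```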
Process Metrics
Software process metrics fall into one of the following categories:
• Quality metrics: describe the spread of defects/errors in the final product (e.g. error
density, error severity, error removal efficiency, etc...)
• Timetable metrics: describe the rate of activities/tasks completion along with the
milestones achievements (e.g. timetable observance, average delay of milestone completion,
etc...)
• Productivity metrics : describe the rate of code and documentation release over
the time (e.g. development productivity, code reuse, documentation reuse, etc...)
• Errors accounting : obtained by counting errors discovered during each stage of the
software development process
• NCE = Number of Code Errors detected in the source code through code inspections and
testing
• NDE = Number of Design Errors detected in the product documentation through
validation and verification
• WCE = Weighted total Code Errors detected in the source code through code inspections
and testing
• WDE = Weighted total Design Errors detected in the product documentation through
validation and verification
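The slides do not give formulas for turning these counts into metrics; one common approach (an assumption here, not taken from the slides) normalizes the counts by product size, e.g. errors per KLOC:

```python
# Assumed density metrics: error counts normalized by product size in KLOC
def code_error_density(nce: int, kloc: float) -> float:
    return nce / kloc             # code errors per thousand lines of code

def weighted_code_error_density(wce: float, kloc: float) -> float:
    return wce / kloc             # severity-weighted code errors per KLOC

# Hypothetical project: 42 code errors found in a 120 KLOC product
print(code_error_density(42, 120.0))    # -> 0.35
```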
Software developers can measure the effectiveness of error removal by the software
quality assurance system after a period of regular operation (usually 6 or 12 months) of
the system. The metrics combine the error records of the development stage with the
failures records compiled during the first year (or any defined period) of regular
operation.
NYF = Number of Software failures detected during a year of maintenance service
WYF = Weighted number of Software failures detected during a year of maintenance
service
Daniel Moune | moune.Daniel@ictuniversity.org | +237675082872 Software Metrics and Measurements 88
Outline
Introduction
Basics of Measurements
Goal-based Framework of Measurement
Software process timetable metrics may be based on accounts of success (completion of milestones
per schedule) in addition to failure events (non-completion per schedule). An alternative approach
calculates the average delay of milestones. TTO and ADMC metrics are based on data for all
relevant milestones scheduled in the project plan. In other words, only milestones that were
designated for completion in the project plan stage are considered in the metrics’ computation.
Therefore, these metrics can be applied throughout development and need not wait for the
project’s completion.
MSOT = Number of milestones completed on time
TCDAM = Total Completion Delays(days, weeks, etc...) for all milestones.
MS = Total number of milestones
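A natural reading of these metrics, inferred from their names (TTO = timetable observance, ADMC = average delay of milestone completion) rather than stated explicitly in the slides, is:

```python
def tto(msot: int, ms: int) -> float:
    """Timetable observance: fraction of milestones completed on time."""
    return msot / ms

def admc(tcdam: float, msot: int) -> float:
    """Average delay of milestone completion (assumed denominator: MSOT)."""
    return tcdam / msot

# Hypothetical project plan: 20 milestones, 18 on time, 30 days of total delay
print(tto(18, 20))        # -> 0.9
print(admc(30.0, 15))     # -> 2.0
```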
DevH = Total working hours invested in the development of the software system.
ReKLOC = Number of thousands of reused lines of code.
ReDoc = Number of reused pages of documentation.
NDoc = Number of pages of documentation.
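These quantities suggest productivity and reuse ratios; the exact formulas below are inferred from the names, not stated in the slides:

```python
def dev_productivity(dev_h: float, kloc: float) -> float:
    return dev_h / kloc            # working hours invested per KLOC

def code_reuse(re_kloc: float, kloc: float) -> float:
    return re_kloc / kloc          # fraction of the code that is reused

def doc_reuse(re_doc: float, n_doc: float) -> float:
    return re_doc / n_doc          # fraction of the documentation that is reused

# Hypothetical project: 5000 h for 100 KLOC, 20 KLOC reused, 30 of 120 doc pages reused
print(dev_productivity(5000, 100), code_reuse(20, 100), doc_reuse(30, 120))
```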
Tools
Here is a non-exhaustive list of tools for source
code metrics visualization:
• NDepend
• Sextant
Conclusion