
Software Process and Project Metrics
Chapter 4

Software Process and Product Metrics

Quantitative measures that enable software people to gain
insight into the efficacy of the software process and the
projects that are conducted using the process as a framework
Software Managers

Software managers are the ones who analyze and assess
software metrics
A Good Manager Measures

(Figure: measurement of the process yields process metrics and
project metrics; measurement of the product yields product metrics)

Measures, Metrics, and Indicators

• A metric is a quantitative measure of the degree to
which a system, component, or process possesses
a given attribute
• When a single data point has been collected, a
measure has been established
• An indicator is a metric or combination of metrics
that provides insight into the software process, a
software project, or the product

Metrics in the Process and Project Domains

• Many of the same metrics are used in both the
process and project domains
• Determinants for software quality and
organizational effectiveness

Metrics in the Process and Project Domains

• Process indicators – enable a software
engineering organization to gain insight into the
efficacy of an existing process
• Project indicators – enable a software project
manager to assess the status of an ongoing
project, track potential risks, uncover problem
areas before they go “critical”, adjust work flow
or tasks, and evaluate the project team's ability to
control the quality of software work products
Metrics in the Process and Project Domains

(Figure: determinants of software quality and organizational
effectiveness – product characteristics, customer characteristics,
business conditions, the process, people, technology, and the
development environment)
Process Metrics

• The majority focus on quality achieved as a consequence of
a repeatable or managed process
• Derived from the outcomes of the process
– Errors, work products delivered, …
• Derived by measuring the characteristics of specific
software engineering tasks
– The effort or the time spent performing the umbrella activities

Process Metrics

• There are “private and public” uses for different
types of process data
• Private metrics should be private to the individual
and serve as an indicator for the individual only
– Examples include defect rates (by individual), defect rates (by
module), and errors found during development
– Conforms well with the Personal Software Process (PSP)
Public Metrics

• Public metrics generally assimilate information
that originally was private to individuals and teams
• Project-level defect rates, effort, calendar times,
and related data
Software Metrics Etiquette

• Use common sense and organizational sensitivity when
interpreting metrics data
• Provide regular feedback to the individuals and teams who
collect measures and metrics
• Don’t use metrics to appraise individuals
• Work with practitioners and teams to set clear goals and
metrics that will be used to achieve them
• Never use metrics to threaten individuals or teams
• Metrics data that indicate a problem area should not be
considered negative
• Don’t obsess on a single metric to the exclusion of other
important metrics
Statistical Software Process Improvement (SSPI)

• You can’t improve your approach to software
engineering unless you understand where you’re
strong and where you’re weak
• Use software failure analysis to collect information
about all errors and defects encountered as an
application is developed and used

Project Metrics

• Used when estimating new projects and assessing
product quality
• Effort/time per SE task
• Errors uncovered per review hour
• Scheduled vs. actual milestone dates
• Changes (number) and their characteristics
• Distribution of effort on SE tasks

Software Measurement

• Direct measures include cost and effort applied
– Lines of code (LOC) produced, execution speed, memory
size, and defects reported over some set period of time
• Indirect measures include functionality, quality,
complexity, efficiency, reliability, maintainability,
and other abilities

Typical Size-Oriented Metrics

• errors per KLOC (thousand lines of code)
• defects per KLOC
• $ per LOC
• pages of documentation per KLOC
• errors per person-month
• LOC per person-month
• $ per page of documentation
Typical Size-Oriented Metrics

• Measures should be normalized by size so that
projects of different scale can be compared (see
the sketch below)
• Size-oriented metrics are not universally accepted
as the best way to measure the process of
software development
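
For illustration, a minimal sketch of how the size-normalized
metrics above could be computed from raw project measures; all
figures are made-up example values, not data from these slides:

# Illustrative sketch: deriving size-normalized metrics from raw measures.
# The figures below are made-up example values, not data from the slides.
loc = 12_100          # lines of code produced
effort_pm = 24        # effort, in person-months
cost = 168_000        # total cost, in $
doc_pages = 365       # pages of documentation
errors = 134          # problems found before release
defects = 29          # problems found after release

kloc = loc / 1000
print(f"errors per KLOC : {errors / kloc:.2f}")
print(f"defects per KLOC: {defects / kloc:.2f}")
print(f"$ per LOC       : {cost / loc:.2f}")
print(f"pages of documentation per KLOC: {doc_pages / kloc:.1f}")
print(f"LOC per person-month: {loc / effort_pm:.0f}")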

Typical Function-Oriented Metrics

• Based on function points (FP)
• errors per FP
• defects per FP
• $ per FP
• pages of documentation per FP
• FP per person-month
Why FP Measures?

• independent of programming language
• uses readily countable characteristics of the
"information domain" of the problem
• does not "penalize" inventive implementations that
require fewer LOC than others
• makes it easier to accommodate reuse and the
trend toward object-oriented approaches
Analyzing the Information Domain

                                            weighting factor
measurement parameter         count      simple   avg.   complex
number of user inputs         ____   x      3       4       6     = ____
number of user outputs        ____   x      4       5       7     = ____
number of user inquiries      ____   x      3       4       6     = ____
number of files               ____   x      7      10      15     = ____
number of ext. interfaces     ____   x      5       7      10     = ____
count-total                                                        = ____
function points = count-total x complexity multiplier
Computing Function Points

• Analyze the information domain of the application and
develop counts – establish counts for the input domain
and system interfaces
• Weight each count by assessing complexity – assign a
level of complexity (weight) to each count
• Assess the influence of global factors that affect the
application – grade the significance of external factors,
Fi, such as reuse, concurrency, OS, ...
• Compute function points (a worked sketch follows below):
function points = count-total x C
where count-total is the sum of the weighted counts and
C = 0.65 + 0.01 x sum(Fi) is the complexity multiplier
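
As an illustration, a minimal sketch of this computation, using
the average weights from the Analyzing the Information Domain
table; the counts and the Fi ratings are made-up example values:

# Sketch of the function point computation described above.
# Counts and Fi ratings are illustrative values, not from the slides.
counts_and_weights = [
    (32, 4),   # user inputs          x average weight 4
    (60, 5),   # user outputs         x average weight 5
    (24, 4),   # user inquiries       x average weight 4
    (8, 10),   # files                x average weight 10
    (2, 7),    # external interfaces  x average weight 7
]
count_total = sum(count * weight for count, weight in counts_and_weights)

# Global factors Fi (listed on the next slide), each rated
# 0 (not important) to 5 (very important)
fi = [4, 1, 0, 3, 3, 5, 4, 4, 3, 2, 2, 5]
C = 0.65 + 0.01 * sum(fi)          # complexity multiplier

function_points = count_total * C
print(f"count-total = {count_total}, C = {C:.2f}, FP = {function_points:.0f}")
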
Taking Complexity into Account

Factors are rated on a scale of 0 (not important)
to 5 (very important):
• data communications
• distributed functions
• heavily used configuration
• transaction rate
• on-line data entry
• end user efficiency
• on-line update
• complex processing
• installation ease
• operational ease
• multiple sites
• facilitate change
ORGANIZATION and ADMINISTRATION

• The relationship between lines of code and function
points depends upon the programming language
and the quality of the design
• A historical baseline of information should be
established before these metrics are employed for
estimation
Measuring Quality

• Correctness – the degree to which a program
operates according to specification
– Defects per KLOC
• Maintainability – the degree to which a program is
amenable to change
– Mean-time-to-change (MTTC)
– Spoilage – the cost to correct defects encountered after
the software has been released
Measuring Quality

• Integrity – the degree to which a program is
impervious to outside attack
– Threat is the probability that an attack of a specific type
will occur within a given time
– Security is the probability that an attack of a specific type
will be repelled
• Usability – the degree to which a program is easy to
use
Defect Removal Efficiency

DRE = errors / (errors + defects)

where
errors  = problems found before release
defects = problems found after release

For both the project and the process, the ideal value is 1
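
As a worked illustration, a minimal sketch of the DRE
computation; the error and defect counts are made-up values:

# Illustrative sketch of defect removal efficiency (DRE).
# The counts below are made-up example values.
def dre(errors_before_release: int, defects_after_release: int) -> float:
    """DRE = errors / (errors + defects); 1.0 is the ideal value."""
    return errors_before_release / (errors_before_release + defects_after_release)

print(dre(120, 8))   # 0.9375 -> about 94% of problems caught before release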

Integrating Metrics Within The Software Process

• Realistically, establishing a successful company-wide
software metrics program is hard work
• But it is worth the effort
• If we do not measure, there is no real way of
determining whether we are improving
• If we are not improving, we are lost

ORGANIZATION and ADMINISTRATION

Metrics being collected can answer questions such as:
• Which user requirements are most likely to change?
• Which components in this system are most error
prone?
• How much testing should be planned for each
component?
• How many errors (of specific types) can I expect
when testing commences?
Establishing a Metric Baseline

To be an effective aid in process improvement, a
metrics baseline must be established
The process (see the figure on page 101):
• Data collection → measures
• Metrics computation → metrics
• Metric evaluation → indicators
Managing Variation

• The same process metrics will vary from project to
project
– Statistical process control
• Control charts are used to determine whether changes and
variation in metrics data are meaningful
– The moving range (mR) control chart
– The individual control chart
Managing Variation

An example – errors uncovered per review hour, Er,
for 20 small projects
The procedure required to develop an mR control chart
for determining the stability of the process:
• Calculate the moving ranges
• Calculate the mean of the moving ranges
• Multiply the mean by 3.268 and plot this line as the
upper control limit (UCL)
• If all the moving range values fall inside the UCL, the
process metric is stable (see the sketch below)
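
A minimal sketch of this procedure; the Er values below are
made-up example data, not the 20 projects referred to above:

# Sketch of the moving range (mR) control chart check described above.
# The Er values (errors uncovered per review hour) are illustrative only.
er = [1.1, 1.3, 0.8, 1.0, 1.6, 1.2, 0.9, 1.4, 1.0, 1.3,
      1.5, 1.1, 0.7, 1.2, 1.0, 1.6, 1.3, 0.9, 1.1, 1.2]

# Moving range: absolute difference between successive values
moving_ranges = [abs(b - a) for a, b in zip(er, er[1:])]
mr_mean = sum(moving_ranges) / len(moving_ranges)

# Upper control limit for the mR chart
ucl = 3.268 * mr_mean

stable = all(mr <= ucl for mr in moving_ranges)
print(f"mean mR = {mr_mean:.3f}, UCL = {ucl:.3f}, stable = {stable}")
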
Establishing a software metrics program

• Identify your business goals
• Identify what you want to know or learn
• Identify your subgoals
• Identify entities and attributes related to your
subgoals

Establishing a software metrics program

• Formalize your measurement goals
• Identify quantifiable questions and the related
indicators that you will use to help you achieve
your measurement goals
• Identify the data elements that you will collect to
construct the indicators that help answer your
questions
• Define the measures to be used, and make these
definitions operational
Establishing a software metrics program

• Identify the actions that you will take to implement
the measures
• Prepare a plan for implementing the measures
