Engineering-I
The Product and the Process: Evolving Role of Software
• Today, software takes on a dual role.
• It is a product and, at the same time, the tool/method for delivering a product.
• As a product, it delivers the computing potential embodied by computer hardware to a user.
• As the tool used to deliver the product, it acts as:
– the basis for the control of the computer (operating systems)
– the communication of information (networks), and
– the creation and control of other programs (software tools and environments).
• Software delivers the most important product of our time—information.
• The central role played by software in today's scenario has come about over the last 50 years.
• During the initial years, development of software was considered to be an art.
• It was more of a personal activity carried out by hobbyists.
• Software was meant to solve fragmented, individual and small-scale problems.
• As a result, no proper method, process or approach was standardised.
• This led to a number of problems when developing software.
• Some of them were:
– Why does it take so long to get software finished?
– Why are development costs so high?
– Why can't we find all the errors before we give the software to customers?
– Why do we continue to have difficulty in measuring progress as software is being developed?
• The lone programmer of an earlier era has been replaced by a team of software specialists.
• Each team focuses on one part of the technology required to deliver a complex application.
• And yet, the same questions asked of the lone programmer are still being asked.
• This list of questions is not exhaustive; there are many other questions.
• It is such questions about software and the manner in which it is developed that have led to the adoption of software engineering practice.
Software: Characteristics
• To gain an understanding of software (and ultimately an understanding of software engineering), it is important to examine the characteristics of software that make it different from other things that human beings build.
• When hardware is built, the human creative process (analysis, design, construction, testing) is ultimately translated into a physical form.
• Software is a logical rather than a physical system element. Therefore, software has characteristics that are considerably different from those of hardware.
1. Software is developed or engineered; it is not manufactured in the classical sense.
• Although some similarities exist between software development and hardware manufacture, the two activities are fundamentally different.
• In both activities, high quality is achieved through good design, but the manufacturing phase for hardware can introduce quality problems that are easily corrected for software.
• Both activities are dependent on people, but the relationship between people applied and work accomplished is entirely different (as we shall see later).
• Both activities require the construction of a "product" but the approaches are different.
• Software costs are concentrated in engineering. This means that software projects cannot be managed as if they were manufacturing projects.
2. Software doesn't "wear out."
[Figure 1.1: Failure curve for hardware (failure rate plotted against time).]
[Figure 1.2: Idealized and actual failure rate curves for software. When changes are made, it is likely that some new defects will be introduced, causing the actual failure rate curve to spike above the idealized curve (increased failure rate due to side effects); software engineering methods strive to reduce the magnitude of the spikes and the slope of the actual curve.]
• Software deteriorates due to change.
• When a hardware component wears out, it is replaced by a spare part.
• There are no software spare parts.
• Every software failure indicates an error in design or in the process through which design was translated into machine-executable code.
• Therefore, software maintenance involves considerably more complexity than hardware maintenance.
3. Although the industry is moving toward component-based assembly, most software continues to be custom built.
• For instance, each IC chip has a part number, a defined and validated function, a well-defined interface, and a standard set of integration guidelines.
• Hence, it can be ordered off the shelf.
• In general, as an engineering discipline evolves, a collection of standard design components is created.
• In the hardware world, component reuse is a natural part of the engineering process.
• In the software world, it is something that has only begun to be achieved on a broad scale.
• Ideally, a software component should be designed and implemented so that it can be reused in many different programs.
• However, it is only very recently that the concept of off-the-shelf reusable software components has begun to take root.
• In the 1960s, scientific subroutine libraries were developed that were reusable in a broad array of engineering and scientific applications.
• These subroutine libraries reused well-defined algorithms in an effective manner but had a limited domain of application.
• Today, this view has been extended to encompass not only algorithms but also data structures.
• Modern reusable components encapsulate both data and the processing applied to the data, enabling the software engineer to create new applications from reusable parts.
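The idea above can be made concrete with a small sketch (my own construction, not from the text): a component that bundles its data with the processing applied to that data, and is then reused unchanged in two quite different applications.

```python
class MovingAverage:
    """Reusable component: encapsulates its data (the window of values)
    together with the processing applied to that data."""
    def __init__(self, size):
        self.size = size
        self.values = []

    def add(self, x):
        self.values.append(x)
        if len(self.values) > self.size:
            self.values.pop(0)       # keep only the most recent `size` values

    def value(self):
        return sum(self.values) / len(self.values)

# The same part serves very different applications:
sensor_filter = MovingAverage(3)     # e.g., smoothing noisy sensor readings
for reading in [10.0, 12.0, 11.0]:
    sensor_filter.add(reading)
print(sensor_filter.value())         # 11.0

price_trend = MovingAverage(2)       # e.g., a simple price trend line
for price in [100.0, 104.0]:
    price_trend.add(price)
print(price_trend.value())           # 102.0
```

Because the component hides its internals behind a small interface, neither application needs to know how the window is stored or averaged.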
Software Applications
• Software application domains can be broadly categorised into 2 categories:
– Determinate applications
– Indeterminate applications
• Determinate: software that accepts data in a predefined order, executes the analysis algorithm(s) without interruption, and produces resultant data is a case of a determinate application.
• E.g.: an engineering analysis program
• Indeterminate: software that accepts inputs with varied content and arbitrary timing, executes algorithms that can be interrupted by external conditions, and produces output that varies as a function of environment and time is said to have an indeterminate characteristic.
• E.g.: a multi-user OS
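As an illustration only (both functions are invented for this sketch), the contrast can be shown in a few lines: the determinate function maps the same ordered input to the same output every time, while the indeterminate one reacts to events whose content and order are not fixed in advance.

```python
def determinate_analysis(readings):
    """Determinate: fixed input order, runs to completion without
    interruption, same output for the same input every time."""
    return sum(readings) / len(readings)

def indeterminate_server(event_queue):
    """Indeterminate: inputs arrive with varied content and arbitrary
    order, and processing can be pre-empted by external conditions."""
    log = []
    while event_queue:
        event = event_queue.pop(0)   # arrival order is not predefined
        if event == "interrupt":     # an external condition pre-empts work
            log.append("handled interrupt")
        else:
            log.append("processed " + event)
    return log

print(determinate_analysis([2.0, 4.0, 6.0]))                  # always 4.0
print(indeterminate_server(["login", "interrupt", "query"]))
```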
• It is somewhat difficult to develop meaningful generic categories for software applications.
• As software complexity grows, neat compartmentalization disappears.
• However, the following software areas indicate the breadth of potential applications:
• System software
• Real-time software
• Business software
• Engineering and scientific software
• Embedded software
• Personal computing software
• Web-based software
• Artificial intelligence software
• System software is a collection of programs written to service other programs.
• Some system software (e.g., compilers, editors, and file management utilities) processes complex, but determinate, information structures.
• Other systems applications (e.g., operating system components, drivers, telecommunications processors) process largely indeterminate data.
• In either case, the system software area is characterized by:
– heavy interaction with computer hardware
– heavy usage by multiple users
– concurrent operation that requires scheduling
– resource sharing
– sophisticated process management
– complex data structures
– multiple external interfaces.
• Real-time software: software that monitors/analyzes/controls real-world events as they occur is called real time.
• Elements of real-time software include:
– a data gathering component that collects and formats information from an external environment
– an analysis component that transforms information as required by the application
– a control/output component that responds to the external environment
– and a monitoring component that coordinates all other components so that real-time response (typically ranging from 1 millisecond to 1 second) can be maintained.
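A minimal sketch of these four elements (all names and thresholds are my own, and a real system would use a real-time executive rather than plain Python):

```python
import time

def gather(raw):                  # data gathering: collect and format input
    return float(raw)

def analyze(value):               # analysis: transform as the application requires
    return value > 30.0           # e.g., an over-temperature check

def control(alarm):               # control/output: respond to the environment
    return "FAN ON" if alarm else "fan off"

def monitor(samples, deadline_s=1.0):
    """Monitoring: coordinate the other components and check that each
    response stays within the deadline."""
    actions = []
    for raw in samples:
        start = time.monotonic()
        actions.append(control(analyze(gather(raw))))
        elapsed = time.monotonic() - start
        assert elapsed < deadline_s, "real-time response missed"
    return actions

print(monitor(["25.0", "31.5"]))   # ['fan off', 'FAN ON']
```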
• Business software: business information processing is the largest single software application area.
• Discrete "systems" (e.g., payroll, accounts receivable/payable, inventory) have evolved into management information system (MIS) software that accesses one or more large databases containing business information.
• Applications in this area restructure existing data in a way that facilitates business operations or management decision making.
• In addition to conventional data processing applications, business software also encompasses interactive computing (e.g., point-of-sale transaction processing).
• Engineering and scientific software has been characterized by "number crunching" algorithms.
• Applications range from astronomy to volcanology, from automotive stress analysis to space shuttle orbital dynamics, and from molecular biology to automated manufacturing.
• However, modern applications within the engineering/scientific area are moving away from conventional numerical algorithms.
• Computer-aided design, system simulation, and other interactive applications have begun to take on real-time and even system software characteristics.
• Embedded software: intelligent products have become commonplace in nearly every consumer and industrial market.
• Embedded software resides in ROM and is used to control products and systems for the consumer and industrial markets.
• Embedded software can perform very limited and esoteric functions (e.g., keypad control for a microwave oven) or provide significant function and control capability (e.g., digital functions in an automobile such as fuel control, dashboard displays, and braking systems).
• Personal computing software: the personal computer software market has burgeoned over the past two decades.
• Word processing, spreadsheets, computer graphics, multimedia, entertainment, database management, personal and business financial applications, and external network and database access are only a few of hundreds of applications.
• Web-based software: the Web pages retrieved by a browser are software that incorporates executable instructions (e.g., CGI, HTML, Perl, or Java) and data (e.g., hypertext and a variety of visual and audio formats).
• In essence, the network becomes a massive computer providing an almost unlimited software resource.
• Artificial intelligence software: artificial intelligence (AI) software makes use of non-numerical algorithms to solve complex problems that are not amenable to computation or straightforward analysis.
• Expert systems, also called knowledge-based systems, pattern recognition (image and voice), artificial neural networks, theorem proving, and game playing are representative of applications within this category.
Software Myths
• Software myths have given rise to misleading attitudes that have caused serious problems for managers and technical people alike.
• Many causes of a software affliction can be traced to a mythology that arose during the early history of software development.
• Today, most knowledgeable professionals recognize these myths.
• However, old attitudes and habits are difficult to modify, and remnants of software myths are still believed.
• Myths can be categorised into:
– Management myths
– Customer myths
– Practitioner's myths
• Management myths:
• Managers with software responsibility, like managers in most disciplines, are often under pressure to maintain budgets, keep schedules from slipping, and improve quality.
• In order to cope with this pressure, a manager may start to believe in any of the following myths:
• Myth: "We already have a book that's full of standards and procedures for building software. Won't that provide my people with everything they need to know?"
• Reality:
• The book of standards may very well exist, but is it used?
• Are software practitioners aware of its existence?
• Does it reflect modern software engineering practice?
• Is it complete?
• Is it streamlined to improve time to delivery while still maintaining a focus on quality?
• In many cases, the answer to all of these questions is "no."
• Myth: "My people have state-of-the-art software development tools; after all, we buy them the newest computers."
• Reality: It takes much more than the latest model mainframe, workstation, or PC to do high-quality software development.
• Computer-aided software engineering (CASE) tools are more important than hardware for achieving good quality and productivity, yet the majority of software developers still do not use them effectively.
• Myth: "If we get behind schedule, we can add more programmers and catch up (sometimes called the Mongolian horde concept)."
• Reality:
• Software development is not a mechanistic process like manufacturing.
• In the words of Brooks: "adding people to a late software project makes it later."
• At first, this statement may seem counterintuitive.
• However, as new people are added, people who were working must spend time educating the newcomers, thereby reducing the amount of time spent on productive development effort.
• People can be added, but only in a planned and well-coordinated manner.
• Myth: If I decide to outsource the software project to a third party, I can just relax and let that firm build it.
• Reality: If an organization does not understand how to manage and control software projects internally, it will invariably struggle when it outsources software projects.
• Customer myths:
• A customer who requests computer software may be a person at the next desk, a technical group down the hall, the marketing/sales department, or an outside company that has requested software under contract.
• In many cases, the customer believes myths about software because software managers and developers do little to correct misinformation.
• Myths lead to false expectations (by the customer) and, ultimately, dissatisfaction with the developer.
• Myth: A general statement of objectives is sufficient to begin writing programs—we can fill in the details later.
• Reality: A poor up-front definition is the major cause of failed software efforts.
• A formal and detailed description of the information domain, function, behavior, performance, interfaces, design constraints, and validation criteria is essential.
• These characteristics can be determined only after thorough communication between customer and developer.
• Myth: Project requirements continually change, but change can be easily accommodated because software is flexible.
• Reality: It is true that software requirements change, but the impact of change varies with the time at which it is introduced.
• The figure below illustrates the impact of change.
[Figure 1.3: The impact of change. The cost to change grows from 1× early in the project to 1.5–6× during development and 60–100× after release.]
[Figure 2.1: Software engineering layers: a quality focus, process, methods, and tools. The process layer comprises framework activities, each with task sets (tasks; milestones, deliverables; SQA points) and umbrella activities.]
• In recent years, there has been a significant emphasis on "process maturity."
• The Software Engineering Institute (SEI) has developed a comprehensive model predicated on a set of software engineering capabilities that should be present as organizations reach different levels of process maturity.
• To determine an organization's current state of process maturity, the SEI uses an assessment that results in a five-point grading scheme.
• The grading scheme determines compliance with a capability maturity model (CMM).
• It defines key activities required at different levels of process maturity.
• The SEI approach provides a measure of the global effectiveness of a company's software engineering practices and establishes five process maturity levels that are defined in the following manner:
• Level 1: Initial
– The software process is characterized as ad hoc and occasionally even chaotic.
– Few processes are defined, and success depends on individual effort.
• Level 2: Repeatable
– Basic project management processes are established to track cost, schedule, and functionality.
– The necessary process discipline is in place to repeat earlier successes on projects with similar applications.
• Level 3: Defined
– The software process for both management and engineering activities is documented, standardized, and integrated into an organization-wide software process.
– All projects use a documented and approved version of the organization's process for developing and supporting software.
– This level includes all characteristics defined for level 2.
• Level 4: Managed
– Detailed measures of the software process and product quality are collected.
– Both the software process and products are quantitatively understood and controlled using detailed measures.
– This level includes all characteristics defined for level 3.
• Level 5: Optimizing
– Continuous process improvement is enabled by quantitative feedback from the process and from testing innovative ideas and technologies.
– This level includes all characteristics defined for level 4.
[Figure: the problem-solving loop. (a) The cycle status quo → problem definition → technical development → solution integration. (b) Each stage of the loop contains the same loop nested within it.]
• All software development can be characterized as a problem solving loop.
• Four distinct stages are encountered:
– status quo
– problem definition
– technical development
– solution integration
• The generic software engineering phases and steps defined in slide 71 easily map into these stages.
• This problem solving loop applies to software engineering work at many different levels of resolution.
• It can be used:
– at the macro level when the entire application is considered
– at a mid-level when program components are being engineered
– at the line-of-code level.
• Therefore, each stage in the problem solving loop contains an identical problem solving loop (this continues to some rational boundary; for software, a line of code).
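This nesting can be sketched as a short recursion (my own construction): each stage at one level of resolution expands into the same four-stage loop at the next, finer level, down to a fixed boundary.

```python
STAGES = ["status quo", "problem definition",
          "technical development", "solution integration"]

def problem_solving_loop(level, max_depth, trace, label="application"):
    """Each stage at one level of resolution contains the identical
    four-stage loop at the next level, until a rational boundary."""
    if level == max_depth:                 # the rational boundary: a line of code
        trace.append("  " * level + label + ": line of code")
        return
    for stage in STAGES:
        trace.append("  " * level + label + ": " + stage)
        problem_solving_loop(level + 1, max_depth, trace, stage)

trace = []
problem_solving_loop(0, 2, trace)
print(len(trace))   # 36: 4 stages, each containing 4 stages, each hitting the boundary
```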
Linear Sequential Model
• Sometimes called the classic life cycle or the waterfall model, the linear sequential model suggests a systematic, sequential approach to software development that begins at the system level and progresses through analysis, design, coding, testing, and support.
[Figure 2.5: The prototyping paradigm: listen to customer → build/revise mock-up → customer test drives mock-up.]
[Figure: the RAD model: multiple teams, each progressing through business modeling, data modeling, process modeling, application generation, and testing & turnover, within a 60–90 day cycle.]
• Used primarily for information systems applications, the RAD (rapid application development) approach encompasses the following phases:
1. Business modeling: The information flow among business functions is modeled in a way that answers the following questions:
– What information drives the business process?
– What information is generated?
– Who generates it?
– Where does the information go?
– Who processes it?
2. Data modeling:
– The information flow defined as part of the business modeling phase is refined into a set of data objects that are needed to support the business.
– The characteristics (called attributes) of each object are identified and the relationships between these objects defined.
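As an illustration (the objects, attributes, and relationship below are invented for this sketch), a data object with its attributes and a relationship to another object might look like:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    customer_id: int          # attribute
    name: str                 # attribute

@dataclass
class Order:
    order_id: int             # attribute
    total: float              # attribute
    customer: Customer        # relationship: each Order belongs to a Customer

alice = Customer(1, "Alice")
order = Order(101, 99.50, alice)
print(order.customer.name)    # Alice
```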
3. Process modeling:
• The data objects defined in the data modeling phase are transformed to achieve the information flow necessary to implement a business function.
• Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.
4. Application generation:
• RAD assumes the use of fourth generation techniques.
• Rather than creating software using conventional third generation programming languages, the RAD process works to reuse existing program components (when possible) or create reusable components (when necessary).
• In all cases, automated tools are used to facilitate construction of the software.
5. Testing and turnover:
• Since the RAD process emphasizes reuse, many of the program components have already been tested.
• This reduces overall testing time.
• However, new components must be tested and all interfaces must be fully exercised.
Advantages
• Obviously, the time constraints imposed on a RAD project demand "scalable scope" (advantage??)
• If a business application can be modularized in a way that enables each major function to be completed in less than three months (using the approach described previously), it is a candidate for RAD.
• Each major function can be addressed by a separate RAD team and then integrated to form a whole.
Drawbacks
1. For large but scalable projects, RAD requires sufficient human resources to create the right number of RAD teams.
• RAD requires developers and customers who are committed to the rapid-fire activities necessary to get a system complete in a much abbreviated time frame.
• If commitment is lacking from either constituency, RAD projects will fail.
2. Not all types of applications are appropriate for RAD.
• If a system cannot be properly modularized, building the components necessary for RAD will be problematic.
• If high performance is an issue and performance is to be achieved through tuning the interfaces to system components (since the interface is the weak link here), the RAD approach may not work.
3. RAD is not appropriate when technical risks are high.
• This occurs when a new application makes heavy use of new technology or when the new software requires a high degree of interoperability with existing computer programs.
EVOLUTIONARY SOFTWARE PROCESS MODELS
• Software, like all complex systems, evolves over a period of time:
1. Business and product requirements often change as development proceeds, making a straight path to an end product unrealistic;
2. Tight market deadlines make completion of a comprehensive software product impossible, but a limited version must be introduced to meet competitive or business pressure;
3. A set of core product or system requirements is well understood, but the details of product or system extensions have yet to be defined.
• In these and similar situations, software engineers need a process model that has been explicitly designed to accommodate a product that evolves over time.
Why are earlier process models not evolutionary?
• The linear sequential model is designed for straight-line development.
• In essence, this waterfall approach assumes that a complete system will be delivered after the linear sequence is completed.
• The prototyping model is designed to assist the customer (or developer) in understanding requirements.
• In general, it is not designed to deliver a production system.
• Therefore, the evolutionary nature of software is not considered in either of these classic software engineering paradigms.
• Evolutionary models are iterative.
• They are characterized in a manner that enables software engineers to develop increasingly more complete versions of the software.
1. The Incremental Model
• The model combines elements of the linear sequential model (applied repetitively) with the iterative philosophy of prototyping.
• The incremental model applies linear sequences in a staggered fashion as calendar time progresses.
• Each linear sequence produces a deliverable "increment" of the software.
• For example, word-processing software developed using the incremental paradigm might deliver basic file management, editing, and document production functions in the first increment;
• More sophisticated editing and document production capabilities in the second increment;
• Spelling and grammar checking in the third increment;
• And advanced page layout capability in the fourth increment.
• It should be noted that the process flow for any increment can incorporate the prototyping paradigm.
[Figure: the incremental model. Each increment runs its own linear sequence (beginning with system/information engineering), staggered across calendar time; increments are delivered one after another until the complete product is produced.]
So what does this example tell us about the incremental model?
• When an incremental model is used, the first increment is often a core product.
• That is, basic requirements are addressed, but many supplementary features (some known, others unknown) remain undelivered.
• The core product is used by the customer (or undergoes detailed review).
• As a result of use and/or evaluation, a plan is developed for the next increment.
• The plan addresses the modification of the core product to better meet the needs of the customer and the delivery of additional features and functionality.
• This process is repeated following the delivery of each increment, until the complete product is produced.
So then how is the incremental model different from prototyping?
• The incremental process model, like prototyping and other evolutionary approaches, is iterative in nature.
• But unlike prototyping, the incremental model focuses on the delivery of an operational product with each increment.
• Early increments are stripped-down versions of the final product, but they do provide capability that serves the user and also provide a platform for evaluation by the user.
So when is it useful?
• Incremental development is particularly useful when staffing is unavailable for a complete implementation by the business deadline that has been established for the project.
• Early increments can be implemented with fewer people.
• If the core product is well received, then additional staff (if required) can be added to implement the next increment.
• In addition, increments can be planned to manage technical risks.
• For example, a major system might require the availability of new hardware that is under development and whose delivery date is uncertain.
• It might be possible to plan early increments in a way that avoids the use of this hardware, thereby enabling partial functionality to be delivered to end-users without inordinate delay.
• Self Study: 2. The Spiral Model
3. The Concurrent Development Model
• The concurrent development model is sometimes called concurrent engineering.
• Consider the following scenario: "Project managers who track project status in terms of the major phases [of the classic life cycle] have no idea of the status of their projects. This is because many phases of the project are under development simultaneously. Personnel are writing requirements, designing, coding, testing, and integration testing [all at the same time]."
• Therefore, concurrency is a feature that is prevalent in software projects.
• A process model is needed that incorporates this feature.
• The concurrent process model can be represented schematically as a series of major technical activities, tasks, and their associated states.
[Figure 2.10: One element of the concurrent process model: the analysis activity shown with its states (e.g., none, under development, done). Each box represents a state of a software engineering activity.]
• The activity—analysis—may be in any one of the states at any given time.
• Similarly, other activities (e.g., design or customer communication) can be represented in an analogous manner.
• All activities exist concurrently but reside in different states.
• For example, early in a project the customer communication activity (not shown in the figure) has completed its first iteration and exists in the awaiting changes state.
• The analysis activity (which existed in the none state while initial customer communication was completed) now makes a transition into the under development state.
• If, however, the customer indicates that changes in requirements must be made, the analysis activity moves from the under development state into the awaiting changes state.
• The concurrent process model defines a series of events that will trigger transitions from state to state for each of the software engineering activities.
• For example, during early stages of design, an inconsistency in the analysis model is uncovered.
• This generates the event analysis model correction, which will trigger the analysis activity from the done state into the awaiting changes state.
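The state/event mechanics can be sketched as a tiny state machine (my own construction; the transition table below is a plausible subset, not the model's definitive set of events):

```python
# (current state, event) -> next state
TRANSITIONS = {
    ("none", "start"): "under development",
    ("under development", "requirements changed"): "awaiting changes",
    ("awaiting changes", "changes resolved"): "under development",
    ("under development", "complete"): "done",
    ("done", "analysis model correction"): "awaiting changes",
}

class Activity:
    """One software engineering activity; events trigger its transitions."""
    def __init__(self, name):
        self.name = name
        self.state = "none"

    def on(self, event):
        self.state = TRANSITIONS[(self.state, event)]
        return self.state

analysis = Activity("analysis")
analysis.on("start")
analysis.on("complete")
# During design, an inconsistency is found in the analysis model:
print(analysis.on("analysis model correction"))   # awaiting changes
```

Every activity (analysis, design, customer communication, ...) would be one such machine, all existing concurrently in different states.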
Advantages of the concurrent model
• In reality, the concurrent process model is applicable to all types of software development and provides an accurate picture of the current state of a project.
• Rather than confining software engineering activities to a sequence of events, it defines a network of activities.
• Each activity on the network exists simultaneously with other activities.
• Events generated within a given activity or at some other place in the activity network trigger transitions among the states of an activity.
Summary: A quick list of evolutionary process models
• The incremental model
• The Spiral model (left as self study)
• The WINWIN Spiral model (safe to ignore)
• The concurrent development model
• Self Study: Component Based Development
– The CBD model incorporates many of the characteristics of the spiral model.
– It is evolutionary in nature, demanding an iterative approach to the creation of software.
– However, the component-based development model composes applications from prepackaged software components (called classes).
FOURTH GENERATION TECHNIQUES
• The term fourth generation techniques (4GT) encompasses a broad array of software tools that have one thing in common.
• Each enables the software engineer to specify some characteristic of software at a high level.
• The tool then automatically generates source code based on the developer's specification.
• The higher the level at which software can be specified to a machine, the faster a program can be built.
• The 4GT paradigm for software engineering focuses on the ability to specify software using specialized language forms or a graphic notation that describes the problem to be solved in terms that the customer can understand.
• Currently, a software development environment that supports the 4GT paradigm includes some or all of the following tools:
– Nonprocedural languages for database query
– Report generation
– Data manipulation
– Screen interaction and definition
– Code generation
– High-level graphics capability
– Spreadsheet capability
– Automated generation of HTML and similar languages used for Web-site creation using advanced software tools.
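The generation idea can be illustrated with a toy sketch (entirely my own construction): a declarative specification of what is wanted, from which a tool emits code (here, SQL text). The developer states the what; the generator supplies the how.

```python
def generate_sql(spec):
    """Generate a SQL query from a small nonprocedural specification."""
    cols = ", ".join(spec["columns"])
    sql = "SELECT " + cols + " FROM " + spec["table"]
    if "where" in spec:                      # optional selection criterion
        sql += " WHERE " + spec["where"]
    return sql

report_spec = {                              # the developer states *what*, not *how*
    "table": "orders",
    "columns": ["customer", "total"],
    "where": "total > 100",
}
print(generate_sql(report_spec))
# SELECT customer, total FROM orders WHERE total > 100
```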
• Like other paradigms, 4GT begins with a requirements gathering step.
• Ideally, the customer would describe requirements and these would be directly translated into an operational prototype.
• But this is unworkable.
• The customer may be unsure of what is required, may be ambiguous in specifying facts that are known, and may be unable or unwilling to specify information in a manner that a 4GT tool can consume.
• For this reason, the customer/developer dialog described for other process models remains an essential part of the 4GT approach.
• For small applications, it may be possible to move directly from the requirements gathering step to implementation using a fourth generation language (4GL).
• However, for larger efforts, it is necessary to develop a design strategy for the system, even if a 4GL is to be used.
• The use of 4GT without design (for large projects) will cause the same difficulties (poor quality, poor maintainability, poor customer acceptance) that have been encountered when developing software using conventional approaches.
• Implementation using a 4GL enables the software developer to represent desired results in a manner that leads to automatic generation of code to create those results.
• Obviously, a data structure with relevant information must exist and be readily accessible by the 4GL.
• To transform a 4GT implementation into a product, the developer must conduct thorough testing, develop meaningful documentation, and perform all other solution integration activities that are required in other software engineering paradigms.
• In addition, 4GT-developed software must be built in a manner that enables maintenance to be performed expeditiously.
Debate surrounding the 4GL model
• Like all software engineering paradigms, the 4GT model has advantages and disadvantages.
• Proponents claim dramatic reduction in software development time and greatly improved productivity for people who build software.
• Opponents claim that current 4GT tools are not all that much easier to use than programming languages, that the resultant source code produced by such tools is "inefficient," and that the maintainability of large software systems developed using 4GT is open to question.
Summary
• There is some merit in the claims of both sides:
1. The use of 4GT is a viable approach for many different application areas. Coupled with computer-aided software engineering tools and code generators, 4GT offers a credible solution to many software problems.
2. Data collected from companies that use 4GT indicate that the time required to produce software is greatly reduced for small and intermediate applications, and that the amount of design and analysis for small applications is also reduced.
3. However, the use of 4GT for large software development efforts demands as much or more analysis, design, and testing (software engineering activities) to achieve the substantial time savings that result from the elimination of coding.
Software Project Management
• Problems usually faced by software organizations:
– nightmarish projects
– impossible deadlines
– outrageously buggy and/or expensive products
– inordinately long maintenance times
• Reason: weak project management
What is it?
• Project management involves:
– planning
– monitoring
– control
of the people, process, and events that occur as software evolves from a preliminary concept to an operational implementation.
Why is it important?
• Building computer software is a complex undertaking, since it involves many people working over a relatively long time.
What are the steps?
• Software project management involves four P's:
– People
– Product
– Process
– Project
• This order is important.
Why is the order important?
• A manager who forgets that software development is an intensely human endeavour will never have success in project management.
• A manager who fails to encourage comprehensive stakeholder communication early in the evolution of the project risks building an elegant solution for the wrong problem.
• A manager who pays little attention to the process runs the risk of inserting competent technical methods and tools into a vacuum.
• A manager who embarks without a solid project plan jeopardizes the success of the product.
The management spectrum:
1. The people
• The need for motivated and highly skilled software people has been felt since the 1960s.
• In fact, the "people factor" is so important that the Software Engineering Institute has developed a people management capability maturity model (PM-CMM).
• The purpose of this model is to "enhance the readiness of software organizations to undertake increasingly complex applications by helping to attract, grow, motivate, deploy and retain the talent needed to improve their software development capability".
• The people management maturity model defines the following key practice areas for software people:
– recruiting
– selection
– performance management
– training
– compensation
– career development
– organization and work design
– team/culture development
Advantage
• Organizations that achieve high levels of maturity in people management have a higher likelihood of implementing effective software engineering practices.
2. The Product
• Before a project can be planned:
– product objectives and scope should be established
– alternative solutions should be considered
– technical and management constraints should be identified.
• Without this information, it is impossible to define:
– reasonable (and accurate) estimates of the cost,
– an effective assessment of risk,
– a realistic breakdown of project tasks,
– or a manageable project schedule that provides a meaningful indication of progress.
• The software developer and customer must meet to define product objectives and scope.
• In many cases, this activity begins as part of system engineering or business process engineering and continues as the first step in software requirements analysis.
• Objectives identify the overall goals for the product (from the customer's point of view) without considering how these goals will be achieved.
• Scope identifies the primary data, functions and behaviors that characterize the product, and, more important, attempts to bound these characteristics in a quantitative manner.
• Once the product objectives and scope are understood, alternative solutions are considered.
• Although very little detail is discussed, the alternatives enable managers and practitioners to select a "best" approach, given the constraints imposed by delivery deadlines, budgetary restrictions, personnel availability, technical interfaces, and myriad other factors.
3. The Process
• A software process provides the framework from which a comprehensive plan for software development can be established.
• A small number of framework activities are applicable to all software projects, regardless of their size or complexity.
• A number of different task sets—tasks, milestones, work products, and quality assurance points—enable the framework activities to be adapted to the characteristics of the software project and the requirements of the project team.
• Finally, umbrella activities—such as software quality assurance, software configuration management, and measurement—overlay the process model.
• Umbrella activities are independent of any one framework activity and occur throughout the process.
4. The Project
• Software projects are planned and controlled for one primary reason—it is the only known way to manage complexity.
• And yet, the success rate is dismal.
• In 1998, industry data indicated that 26 percent of software projects failed outright and 46 percent experienced cost and schedule overruns.
• Although the success rate for software projects has improved somewhat, our project failure rate remains higher than it should be.
• In order to avoid project failure, a software project manager and the software engineers who build the product must:
– avoid a set of common warning signs
– understand the critical success factors that lead to good project management
– and develop a commonsense approach for planning, monitoring and controlling the project.
PEOPLE
• In a study published by the IEEE [CUR88], the engineering vice presidents of three major technology companies were asked the most important contributor to a successful software project. They answered in the following way:
• VP 1:
– I guess if you had to pick one thing out that is most important in our environment, I'd say it's not the tools that we use, it's the people.
• VP 2:
– The most important ingredient that was successful on this project was having smart people . . . very little else matters in my opinion. . . . The most important thing you do for a project is selecting the staff . . . The success of the software development organization is very, very much associated with the ability to recruit good people.
• VP 3:
– The only rule I have in management is to ensure I have good people—real good people—and that I grow good people—and that I provide an environment in which good people can produce.
The Stakeholders
• These are the people involved in the software process, together with the manner in which they are organized to perform effective software engineering.
• They can be categorized into five constituencies:
1. Senior managers: who define the business issues that often have significant influence on the project.
2. Project (technical) managers: who must plan, motivate, organize and control the practitioners who do software work.
3. Practitioners: who deliver the technical skills that are necessary to engineer a product or application.
4. Customers: who specify the requirements for the software to be engineered, and other stakeholders who have a peripheral interest in the outcome.
5. End users: who interact with the software once it is released for production use.
• Every software project has people who fall within this taxonomy.
• To be effective, the project team must be organized in a way that maximizes each person's skills and abilities.
• The person who makes this possible: the team leader.
Bad Team Leaders
• It is clear that project management is a people-intensive activity.
• Because of this, competent software practitioners often make poor team leaders (they don't have the right mix of "people skills").
• "Unfortunately and all too frequently it seems, individuals just fall into a project manager role and become accidental project managers."
Key traits of an effective project manager
• Four key traits:
1. Problem Solving:
– The ability to diagnose the technical and organizational issues that are most relevant.
– To systematically structure a solution or properly motivate other practitioners to develop the solution.
– To apply lessons learned from past projects to new situations, and to remain flexible enough to change direction if initial attempts at problem solution are futile.
2. Managerial Identity:
– The ability to take charge of the project.
– The confidence to assume control when necessary, and the assurance to allow good technical people to follow their instincts.
3. Achievement:
– To optimize the productivity of the project team.
– Must reward initiative and accomplishment.
– Demonstrate through their own actions that controlled risk taking will not be penalized.
4. Influence and team building:
– The ability to "read" people.
– To understand verbal and non-verbal signals and react to the needs of the people sending those signals.
– The ability to remain under control in high-stress situations.
The Software Team
• The project manager can usually decide the organization of the people involved in a software project.
• The "best" team structure depends on:
– the management style of the organization
– the number of people who will populate the team
– their skill levels
– the overall problem difficulty
• Mantei describes seven factors that should be considered when planning the structure of software engineering teams:
1. The difficulty of the problem to be solved
2. The "size" of the resulting program(s) in lines of code or function points
3. The time that the team will stay together (team lifetime)
4. The degree to which the problem can be modularised
5. The required quality and reliability of the system to be built
6. The rigidity of the delivery date
7. The degree of sociability required for the project
A few paradigms for software teams
• Constantine suggests four "organizational paradigms" for software engineering teams:
1. A closed paradigm:
– Structures a team according to a traditional hierarchy of authority.
– Such teams work well when producing software similar to past efforts, but are less likely to be innovative.
2. A random paradigm:
– Structures a team loosely and depends on the individual initiative of the team members.
– Works well when innovation or technological breakthrough is required, but may struggle when "orderly" performance is needed.
3. An open paradigm:
– A mix of the above two methods, i.e. the "order" associated with the closed paradigm combined with the innovation that occurs when using the random paradigm.
✓ Work is performed collaboratively.
✓ Heavy communication and consensus-based decision making.
✓ Well suited to finding solutions to complex problems.
✗ May not perform as efficiently as other teams.
4. A synchronous paradigm:
– Relies on the natural compartmentalization of a problem and organizes team members to work on pieces of the problem with little active communication among themselves.
Another approach
• The following options are available for applying human resources to a project that will require n people working for k years:
1. n individuals are assigned to m different functional tasks:
– relatively little combined work occurs;
– coordination is the responsibility of a software manager who may have six other projects to be concerned with.
2. n individuals are assigned to m different functional tasks (m < n) so that informal "teams" are established:
– an ad hoc team leader may be appointed;
– coordination among teams is the responsibility of a software manager.
3. n individuals are organized into t teams:
– each team is assigned one or more functional tasks;
– each team has a specific structure that is defined for all teams working on a project;
– coordination is controlled by both the team and a software project manager.
So which team structure is the best?
• Although it is possible to voice arguments for and against each of these approaches, a growing body of evidence indicates that a formal team organization (option 3) is most productive.
• However, the "best" team structure still depends on:
– the management style of the organization,
– the number of people who will populate the team and their skill levels,
– the overall problem difficulty.
Mantei's team organisation
• Mantei suggests three generic team organizations:
1. Democratic decentralized (DD):
• This software engineering team has no permanent leader.
• Rather, "task coordinators are appointed for short durations and then replaced by others who may coordinate different tasks."
• Decisions on problems and approach are made by group consensus.
• Communication among team members is horizontal.
2. Controlled decentralized (CD):
• This software engineering team has a defined leader who coordinates specific tasks, and secondary leaders that have responsibility for subtasks.
• Problem solving remains a group activity, but implementation of solutions is partitioned among subgroups by the team leader.
• Communication among subgroups and individuals is horizontal.
• Vertical communication along the control hierarchy also occurs.
3. Controlled Centralized (CC):
• Top-level problem solving and internal team coordination are managed by a team leader.
• Communication between the leader and team members is vertical.
Pros and Cons
• A centralized structure completes tasks faster; it is the most adept at handling simple problems.
• A rigid centralised team structure can be successfully applied to simple problems.
• Decentralized teams generate more and better solutions than individuals. Therefore such teams have a greater probability of success when working on difficult problems.
• Because the performance of a team is inversely proportional to the amount of communication that must be conducted, very large projects are best addressed by teams with a CC or CD structure, where subgrouping can be easily accommodated.
• The length of time that the team will "live together" affects team morale.
• It has been found that DD team structures result in high morale and job satisfaction, and are therefore good for teams that will be together for a long time.
• The DD team structure is best applied to problems with relatively low modularity, because of the higher volume of communication needed.
• When high modularity is possible (and people can do their own thing), the CC or CD structure will work well.
• CC and CD teams have been found to produce fewer defects than DD teams, but these data have much to do with the specific quality assurance activities that are applied by the team.
• Decentralized teams generally require more time to complete a project than a centralized structure, but are best when high sociability is required.
Software Metrics: Process, product and project metrics
What is meant by metrics?
• Metrics are quantitative measures that enable software people to gain insight into the efficacy of the software process, the projects that are conducted using the process as a framework, and the product created.
• Basic quality and productivity data are collected.
• These data are then analyzed, compared against past averages, and assessed to determine whether quality and productivity improvements have occurred.
• Metrics are also used to pinpoint problem areas so that remedies can be developed and the software process can be improved.
Who does it?
• Software metrics are analyzed and assessed by software managers.
• Measures are often collected by software engineers.
Why is it important?
• If you don't measure, judgement can be based only on subjective evaluation.
• With measurement, trends (either good or bad) can be spotted, better estimates can be made, and true improvement can be accomplished over time.
What are the steps?
• Begin by defining a limited set of process, project, and product measures that are easy to collect.
• These measures are often normalized using either size- or function-oriented metrics.
• The result is analyzed and compared to past averages for similar projects performed within the organization.
• Trends are assessed and conclusions are generated.
What is the work product?
• A set of software metrics that provide insight into the process and understanding of the project.
Confused? Measures, Metrics, And Indicators
• Measure:
• A measure provides a quantitative indication of the extent, amount, or size of some attribute of a product or process; measurement is the act of determining a measure.
• For e.g.: the number of errors uncovered in the review of a single module.
• Measurement occurs as the result of the collection of one or more data points.
• For e.g., a number of module reviews are investigated to collect measures of the number of errors for each.
• Metric:
• A software metric relates the individual measures in some way.
• For e.g.: the average number of errors found per review, or the average number of errors found per person-hour expended on reviews.
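The measure-to-metric step can be sketched in a few lines of Python (the review counts and hours below are made-up illustrative data):

```python
# Measures: errors uncovered in each of five module reviews (illustrative data),
# plus the person-hours expended on each review.
review_errors = [4, 2, 7, 3, 4]
review_hours = [2.0, 1.5, 3.0, 2.0, 1.5]

# Metrics relate the individual measures:
errors_per_review = sum(review_errors) / len(review_errors)
errors_per_person_hour = sum(review_errors) / sum(review_hours)

print(errors_per_review)        # 4.0
print(errors_per_person_hour)   # 2.0
```

The raw counts are the measures; only the ratios computed from them are metrics.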
• Indicator:
• A software engineer collects measures and develops metrics so that indicators will be obtained.
• An indicator is a metric or a combination of metrics that provides insight into the software process, a software project, or the product itself.
• An indicator provides insight that enables the project manager or software engineers to adjust the process, the project, or the product to make things better.
• For example, four software teams are working on a large software project.
• Each team must conduct design reviews, but is allowed to select the type of review that it will use.
• Upon examination of the metric errors found per person-hour expended, the project manager notices that the two teams using more formal review methods exhibit an errors-found-per-person-hour that is 40% higher than the other teams.
• Assuming all other parameters are equal, this provides the project manager with an indicator that formal review methods may provide a higher return on time investment than another, less formal review approach.
• She may decide to suggest that all teams use the more formal approach.
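A sketch of how this indicator would emerge from the data (all team figures are invented for illustration):

```python
# Errors found and person-hours expended on reviews, per team (illustrative numbers).
teams = {
    "A (formal reviews)":   {"errors": 30, "hours": 20.0},
    "B (formal reviews)":   {"errors": 27, "hours": 18.0},
    "C (informal reviews)": {"errors": 21, "hours": 21.0},
    "D (informal reviews)": {"errors": 22, "hours": 20.0},
}

for name, t in teams.items():
    rate = t["errors"] / t["hours"]  # the metric: errors found per person-hour
    print(f"{name}: {rate:.2f} errors/person-hour")
```

With these numbers the formal-review teams run at 1.50 errors/person-hour against roughly 1.0-1.1 for the others, which is the kind of gap the manager reads as an indicator.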
• The metric provides the manager with insight.
• And insight leads to informed decision making.
Role of process and project indicators
• Metrics should be collected so that process and product indicators can be ascertained.
• Process indicators enable a software engineering organization to gain insight into the efficacy of an existing process, i.e.:
– the paradigm
– software engineering tasks
– work products
– milestones
• They enable managers and practitioners to assess what works and what doesn't.
• Process metrics are collected across all projects and over long periods of time.
• Their intent is to provide indicators that lead to long-term software process improvement.
• Project indicators enable a software project manager to:
(1) assess the status of an ongoing project
(2) track potential risks
(3) uncover problem areas before they go "critical"
(4) adjust work flow or tasks
(5) evaluate the project team's ability to control the quality of software work products.
• In some cases, the same software metrics can be used to determine project and then process indicators.
• In fact, measures that are collected by a project team and converted into metrics for use during a project can also be transmitted to those with responsibility for software process improvement.
• For this reason, many of the same metrics are used in both the process and project domains.
• Three other factors have a profound influence on software quality and organizational performance:
1. The skill and motivation of people has been shown to be the single most influential factor in quality and performance.
2. The complexity of the product can have a substantial impact on quality and team performance.
3. The technology (i.e., the software engineering methods) that populates the process also has an impact.
Process Metrics and Software Process Improvement
• As an organization becomes more comfortable with the collection and use of process metrics, it tends to apply a more rigorous approach called statistical software process improvement (SSPI).
• In essence, SSPI uses software failure analysis to collect information about all errors and defects encountered as an application, system, or product is developed and used.
• Failure analysis works in the following manner:
1. All errors and defects are categorized by origin (e.g., flaw in specification, flaw in logic, nonconformance to standards).
2. The cost to correct each error and defect is recorded.
3. The number of errors and defects in each category is counted and ranked in descending order.
4. The overall cost of errors and defects in each category is computed.
5. Resultant data are analyzed to uncover the categories that result in the highest cost to the organization.
6. Plans are developed to modify the process with the intent of eliminating (or reducing the frequency of) the class of errors and defects that is most costly.
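Steps 1 through 5 amount to grouping, counting, and cost-ranking the defect records; a minimal sketch (the categories and fix costs are invented):

```python
from collections import defaultdict

# Step 1: each recorded defect carries an origin category and (step 2) a fix cost.
defects = [
    ("specification", 1200), ("logic", 800), ("specification", 950),
    ("standards", 150), ("logic", 600), ("specification", 700),
]

count = defaultdict(int)
cost = defaultdict(int)
for origin, fix_cost in defects:
    count[origin] += 1        # step 3: count per category
    cost[origin] += fix_cost  # step 4: overall cost per category

# Step 5: rank categories by total cost, highest first.
for origin in sorted(cost, key=cost.get, reverse=True):
    print(origin, count[origin], cost[origin])
```

Here "specification" tops the ranking, so step 6 would target specification flaws first.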
(Figure 4.2 [GRA94], "Software Process and Project Metrics": causes of defects and their origin for four software projects: Specifications 25.5%, Logic 20%, User interface 11.7%, Error checking 10.9%, Data handling 10.5%, Hardware interface 7.7%, Standards 6.9%, Software interface 6.0%; origins of errors/defects: specification/requirements, design, code.)
• Following steps 1 and 2, a simple defect distribution can be developed.
• For the pie chart noted in the figure, eight causes of defects and their origin (indicated by shading) are shown.
• Grady suggests the development of a fishbone diagram to help in diagnosing the data represented in the frequency diagram.
(Fishbone diagram for specification defects: ribs for missing, ambiguous, incorrect, and changed requirements; the "incorrect" rib expands into: customer gave wrong info, inadequate inquiries, used outdated info.)
• The spine of the diagram (the central line) represents the quality factor under consideration (in this case, specification defects, which account for 25 percent of the total).
• Each of the ribs (diagonal lines) connecting to the spine indicates a potential cause for the quality problem (e.g., missing requirements, ambiguous specification, incorrect requirements, changed requirements).
• The spine-and-ribs notation is then added to each of the major ribs of the diagram to expand upon the cause noted.
• Expansion is shown only for the "incorrect" cause in the previous diagram.
Project Metrics
• Software process metrics are used for strategic purposes.
• Software project measures are tactical.
• That is, project metrics and the indicators derived from them are used by a project manager and a software team to adapt project work flow and technical activities.
• The first application of project metrics on most software projects occurs during estimation.
• Metrics collected from past projects are used as a basis from which effort and time estimates are made for current software work.
• As a project proceeds, measures of effort and calendar time expended are compared to original estimates (and the project schedule).
• The project manager uses these data to monitor and control progress.
• As technical work commences, other project metrics begin to have significance.
• Production rates represented in terms of pages of documentation, review hours, function points, and delivered source lines are measured.
• In addition, errors uncovered during each software engineering task are tracked.
• As the software evolves from specification into design, technical metrics are collected to assess design quality and to provide indicators that will influence the approach taken to code generation and testing.
Benefits of project metrics
• The intent of project metrics is twofold:
• First, these metrics are used to minimize the development schedule by making the adjustments necessary to avoid delays and mitigate potential problems and risks.
• This becomes possible once estimation has been done.
• Second, project metrics are used to assess product quality on an ongoing basis and, when necessary, modify the technical approach to improve quality.
• As quality improves, defects are minimized, and as the defect count goes down, the amount of rework required during the project is also reduced.
• This leads to a reduction in overall project cost.
Software Measurement
• Software can be measured directly or indirectly.
• Direct measures of the software engineering process include cost and effort applied.
• Direct measures of the product include:
– lines of code (LOC) produced,
– execution speed,
– memory size,
– defects reported over some set period of time.
• Indirect measures of the product include:
– functionality,
– quality,
– complexity,
– efficiency,
– reliability,
– maintainability,
– many other "–abilities".
• Direct measures are relatively easy to collect, as long as specific conventions for measurement are established in advance.
• Correspondingly, indirect measures are relatively difficult to collect.
• Another aspect of measurement is the distinction between private and public metrics.
• Examples of private metrics include defect rates (by individual), defect rates (by module), and errors found during development.
• Some process metrics are private to the software project team but public to all team members.
• Examples include:
– defects reported for major software functions (that have been developed by a number of practitioners),
– errors found during formal technical reviews,
– LOC or function points per module and function.
• These data are reviewed by the team to uncover indicators that can improve team performance.
• Public metrics generally assimilate information that originally was private to individuals and teams.
• Examples include:
– project-level defect rates (absolutely not attributed to an individual),
– effort,
– calendar times,
– any other related data.
• Therefore, we can conclude that product metrics that are private to an individual are often combined to develop project metrics that are public to a software team.
• Project metrics are then consolidated to create process metrics that are public to the software organization as a whole.
• But how does an organization combine metrics that come from different individuals or projects?
• To illustrate the problem, consider a simple example:
Individuals on two different project teams record and categorize all errors that they find during the software process. Individual measures are then combined to develop team measures. Team A found 342 errors during the software process prior to release. Team B found 184 errors. All other things being equal, which team is more effective in uncovering errors throughout the process?
• Because the size or complexity of the projects is not known, this question cannot be answered.
• However, if the measures are normalized, it is possible to create software metrics that enable comparison to broader organizational averages.
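Normalization resolves the Team A vs. Team B question; a sketch that assumes hypothetical project sizes for the two teams (the error counts come from the example, the KLOC figures are invented):

```python
# Errors found prior to release (from the example) and assumed project sizes in KLOC.
teams = {"A": {"errors": 342, "kloc": 38.0},   # sizes invented for illustration
         "B": {"errors": 184, "kloc": 12.0}}

for name, t in teams.items():
    print(f"Team {name}: {t['errors'] / t['kloc']:.1f} errors/KLOC")
```

With these sizes, Team B's 15.3 errors/KLOC exceeds Team A's 9.0, reversing the impression given by the raw counts.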
Size-Oriented Metrics
• Size-oriented software metrics are derived by normalizing quality and/or productivity measures by the size of the software that has been produced.
• If a software organization maintains simple records, a table of size-oriented measures, such as the one shown on the next slide, can be created.
(Table of size-oriented measures: for each project it records lines of code, effort, cost, pages of documentation, errors, defects, and people; for instance, in one project 134 errors were recorded before the software was released, and 29 defects were reported after release.)
• From the rudimentary data contained in the table, a set of simple size-oriented metrics can be developed for each project:
– errors per KLOC (thousand lines of code),
– defects per KLOC,
– $ per LOC,
– pages of documentation per KLOC.
• In addition, other interesting metrics can be computed:
– errors per person-month,
– LOC per person-month,
– $ per page of documentation.
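Given one project's row from such a table, the derived metrics are simple ratios; a sketch with illustrative values:

```python
# One project's row from a size-oriented measures table (illustrative values).
project = {"loc": 12_100, "effort_pm": 24, "cost": 168_000,
           "pages_doc": 365, "errors": 134, "defects": 29}

kloc = project["loc"] / 1000
print(f"errors per KLOC:      {project['errors'] / kloc:.2f}")
print(f"defects per KLOC:     {project['defects'] / kloc:.2f}")
print(f"$ per LOC:            {project['cost'] / project['loc']:.2f}")
print(f"LOC per person-month: {project['loc'] / project['effort_pm']:.0f}")
```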
Pros and Cons
• Size-oriented metrics are not universally accepted as the best way to measure the process of software development.
• Most of the controversy swirls around the use of lines of code as a key measure.
Pros
• Proponents of the LOC measure claim that LOC is an "artifact" of all software development projects.
• It can be easily counted.
• Many existing software estimation models use LOC or KLOC as a key input.
• A large body of literature and data predicated on LOC already exists.
Cons
• LOC measures are programming language dependent.
• They penalize well-designed but shorter programs.
• They cannot easily accommodate nonprocedural languages.
• Their use in estimation requires a level of detail that may be difficult to achieve (i.e., the planner must estimate the LOC to be produced long before analysis and design have been completed).
Function-Oriented Metrics
• Function-oriented software metrics use a measure of the functionality delivered by the application as a normalization value.
• Since 'functionality' cannot be measured directly, it must be derived indirectly using other direct measures.
• Function points are derived using an empirical relationship based on countable (direct) measures of the software's information domain and assessments of software complexity.
• Function points are computed by completing a table in which each information domain count is multiplied by a weighting factor for simple, average, or complex entries (e.g., number of files × 7, 10, or 15) and the results are summed to give a count total.
• Information domain values are defined in the following way:
• Number of user inputs: each user input that provides distinct application-oriented data to the software is counted. Inputs should be distinguished from inquiries, which are counted separately.
• Number of user outputs: each user output that provides application-oriented information to the user is counted. In this context, output refers to reports, screens, error messages, etc. Individual data items within a report are not counted separately.
• Number of user inquiries: an inquiry is defined as an on-line input that results in the generation of some immediate software response in the form of an on-line output. Each distinct inquiry is counted.
• Number of files: each logical master file (i.e., a logical grouping of data that may be one part of a large database or a separate file) is counted.
• Number of external interfaces: all machine-readable interfaces (e.g., data files on storage media) that are used to transmit information to another system are counted.
• External Inputs: screens, forms, dialog boxes...
• External Outputs: screens, reports, graphs
• External Queries: an input/output combination where a query leads to a simple output
• Logical Files: major groups of end-user data
• Interface Files: files controlled by other programs
hcp://www.codeproject.com/ArBcles/18024/CalculaBng-‐FuncBon-‐Points
• Once these data have been collected, a complexity value is associated with each count.
• Organizations that use function point methods develop criteria for determining whether a particular entry is simple, average, or complex.
• Nonetheless, the determination of complexity is somewhat subjective.
• To compute function points (FP), the following relationship is used:
FP = count total × [0.65 + 0.01 × Σ(Fi)]
where count total is the sum of all FP entries obtained.
• The Fi (i = 1 to 14) are "complexity adjustment values" based on responses to the following questions:
1. Does the system require reliable backup and recovery?
2. Are data communications required?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized operational environment?
6. Does the system require on-line data entry?
7. Does the on-line data entry require the input transaction to be built over multiple screens or operations?
8. Are the master files updated on-line?
9. Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and ease of use by the user?
• Each of these questions is answered using a scale that ranges from 0 (not important or applicable) to 5 (absolutely essential).
• The constant values in the equation and the weighting factors that are applied to the information domain counts are determined empirically.
• Once function points have been calculated, they are used in a manner analogous to LOC as a way to normalize measures for software productivity, quality, and other attributes:
– Errors per FP.
– Defects per FP.
– $ per FP.
– Pages of documentation per FP.
– FP per person-month.
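The FP relationship above can be sketched in a few lines of Python. This is a minimal illustration, not any standard counting tool; the function and argument names are invented for the example.

```python
def function_points(count_total, adjustment_values):
    """Compute FP = count_total x [0.65 + 0.01 x sum(Fi)].

    count_total: the sum of all weighted information-domain counts.
    adjustment_values: the 14 complexity adjustment answers, each 0..5.
    """
    assert len(adjustment_values) == 14, "exactly 14 Fi values expected"
    assert all(0 <= f <= 5 for f in adjustment_values), "each Fi is 0..5"
    return count_total * (0.65 + 0.01 * sum(adjustment_values))
```

Note that the adjustment can only scale the raw count between 0.65 (all answers 0) and 1.35 (all answers 5).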
Pros and Cons
• The function point (and its extensions), like the LOC measure, is controversial.
• Pros:
– Programming-language independent, making it ideal for applications using conventional and nonprocedural languages;
– Based on data that are more likely to be known early in the evolution of a project, making FP more attractive as an estimation approach.
• Cons:
– The method requires some "sleight of hand" in that computation is based on subjective rather than objective data;
– FP has no direct physical meaning: it's just a number.
RECONCILING DIFFERENT METRICS APPROACHES
• The relationship between lines of code and function points depends upon the programming language that is used to implement the software and the quality of the design.
Programming Language LOC/FP (average)
Assembly language 320
C 128
COBOL 106
FORTRAN 106
Pascal 90
C++ 64
Ada95 53
Visual Basic 32
Smalltalk 22
Powerbuilder (code generator) 16
SQL 12
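Using the average LOC/FP figures from the table above, a function-point estimate can be converted into a rough LOC figure per language. A sketch, assuming those average ratios; real projects will deviate from the averages.

```python
# Average LOC per function point, per the table above.
LOC_PER_FP = {
    "Assembly language": 320, "C": 128, "COBOL": 106, "FORTRAN": 106,
    "Pascal": 90, "C++": 64, "Ada95": 53, "Visual Basic": 32,
    "Smalltalk": 22, "Powerbuilder": 16, "SQL": 12,
}

def estimated_loc(fp, language):
    """Convert a function-point count into an approximate LOC estimate."""
    return fp * LOC_PER_FP[language]
```

The same 100 FP of functionality implies roughly 12,800 LOC in C but only 1,200 LOC in SQL, which is why FP is preferred when comparing across languages.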
Table 5.1: Complexity weights for object types [BOE96]

Object type       Simple   Medium   Difficult
Screen               1        2         3
Report               2        5         8
3GL component                          10

• To derive an estimate of effort, the object point count is determined according to Table 5.1: once the complexity of each object is determined, the number of screens, reports, and components is weighted, and the object point count is then determined, where NOP is defined as new object points. Productivity depends on different levels of developer experience and environment maturity.
• PROD = NOP/person-month

Table 5.2: Productivity rates for object points [BOE96]

Developer's experience/capability    Very low   Low   Nominal   High   Very high
Environment maturity/capability      Very low   Low   Nominal   High   Very high
PROD                                     4       7      13       25       50
• Once the productivity rate has been determined, an estimate of project effort can be derived as:
estimated effort = NOP/PROD
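The object-point calculation above can be sketched directly from Tables 5.1 and 5.2. The dictionary layout and function name are illustrative assumptions, not part of the COCOMO tooling.

```python
# Complexity weights from Table 5.1.
WEIGHTS = {
    "screen": {"simple": 1, "medium": 2, "difficult": 3},
    "report": {"simple": 2, "medium": 5, "difficult": 8},
    "3gl_component": {"difficult": 10},  # 3GL components carry a single weight
}
# Productivity rates (NOP/person-month) from Table 5.2.
PROD = {"very low": 4, "low": 7, "nominal": 13, "high": 25, "very high": 50}

def object_point_effort(counts, rating):
    """counts: {(object_type, complexity): number_of_instances}.
    Returns (NOP, estimated effort in person-months)."""
    nop = sum(n * WEIGHTS[t][c] for (t, c), n in counts.items())
    return nop, nop / PROD[rating]
```

For example, 3 simple screens, 2 medium reports, and 1 3GL component give NOP = 3 + 10 + 10 = 23, or about 1.8 person-months at a nominal rating.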
The Software Equation
• The software equation is a dynamic multivariable model that assumes a specific distribution of effort over the life of a software development project.
• The model has been derived from productivity data collected for over 4000 contemporary software projects.
• Based on these data, an estimation model of the form:
E = [LOC × B^0.333 / P]^3 × (1/t^4)
where
E = effort in person-months or person-years
t = project duration in months or years
B = "special skills factor"
P = "productivity parameter"
• B increases slowly as "the need for integration, testing, quality assurance, documentation, and management skills grow".
• For small programs (KLOC = 5 to 15), B = 0.16. For programs greater than 70 KLOC, B = 0.39.
• P reflects:
– Overall process maturity and management practices
– The extent to which good software engineering practices are used
– The level of programming languages used
– The state of the software environment
– The skills and experience of the software team
– The complexity of the application
• Typical values might be P = 2,000 for development of real-time embedded software;
• P = 10,000 for telecommunication and systems software;
• P = 28,000 for business systems applications.
• The productivity parameter can be derived for local conditions using historical data collected from past development efforts.
• It is important to note that the software equation has two independent parameters:
1. an estimate of size (in LOC), and
2. an indication of project duration in calendar months or years.
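The software equation itself is a one-liner once its inputs are fixed. A sketch, following the formula above; units follow the text (t in months or years, matching the units desired for E).

```python
def software_equation_effort(loc, t, B, P):
    """E = [LOC x B**0.333 / P]**3 x (1 / t**4)."""
    return (loc * B ** 0.333 / P) ** 3 / t ** 4
```

Because t appears to the fourth power in the denominator, even a modest schedule extension sharply reduces the predicted effort, which is the time/effort trade-off this model is known for.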
• To simplify the estimation, Putnam and Myers suggest a set of equations derived from the software equation.
• Minimum development time is defined as:
tmin = 8.14 (LOC/P)^0.43 in months, for tmin > 6 months
E = 180 B t^3 in person-months, for E ≥ 20 person-months, where t is in years
• Using the above equations with P = 12,000 (the recommended value for scientific software) for a CAD software:
tmin = 8.14 (33200/12000)^0.43 = 12.6 calendar months
E = 180 × 0.28 × (1.05)^3 = 58 person-months
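The CAD example can be reproduced mechanically under the same assumptions (LOC = 33,200, P = 12,000, B = 0.28, and t = 12.6 months ≈ 1.05 years):

```python
def t_min_months(loc, P):
    """Putnam/Myers minimum development time: 8.14 * (LOC/P)**0.43 months."""
    return 8.14 * (loc / P) ** 0.43

def effort_person_months(B, t_years):
    """Putnam/Myers effort: E = 180 * B * t**3, with t in years."""
    return 180 * B * t_years ** 3
```

Evaluating these with the slide's inputs gives roughly 12.6 months and 58 person-months, matching the figures above.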
SOFTWARE QUALITY ASSURANCE
• Even the most jaded software developers will agree that high-quality software is an important goal.
• But how do we define quality?
• A wag once said, "Every program does something right, it just may not be the thing that we want it to do."
• Many definitions of software quality have been proposed in the literature.
• For most purposes, software quality is defined as:
Conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.
• This definition serves to emphasize three important points:
1. Software requirements are the foundation from which quality is measured. Lack of conformance to requirements is lack of quality.
2. Specified standards define a set of development criteria that guide the manner in which software is engineered. If the criteria are not followed, lack of quality will almost surely result.
3. A set of implicit requirements often goes unmentioned (e.g., the desire for ease of use and good maintainability). If software conforms to its explicit requirements but fails to meet implicit requirements, software quality is suspect.
SQA within a software organisation
• The implication for software is that many different constituencies have software quality assurance responsibility:
– software engineers,
– project managers,
– customers,
– salespeople, and
– the individuals who serve within an SQA group.
• The SQA group serves as the customer's in-house representative.
• That is, the people who perform SQA must look at the software from the customer's point of view.
• They could be asking questions such as:
– Does the software adequately meet the quality factors (as discussed earlier)?
– Has software development been conducted according to pre-established standards?
– Have technical disciplines properly performed their roles as part of the SQA activity?
SQA activities
• Software quality assurance is composed of a variety of tasks associated with two different constituencies:
1. the software engineers who do technical work, and
2. an SQA group that has responsibility for quality assurance planning, oversight, record keeping, analysis, and reporting.
• Software engineers address quality (and perform quality assurance and quality control activities) by:
– applying solid technical methods and measures,
– conducting formal technical reviews, and
– performing well-planned software testing.
• The role of the SQA group is to assist the software team in achieving a high-quality end product.
• The Software Engineering Institute recommends a set of SQA activities that address:
– quality assurance planning,
– oversight,
– record keeping,
– analysis, and
– reporting.
• These activities are performed (or facilitated) by an independent SQA group that:
• Prepares an SQA plan for a project.
– The plan is developed during project planning and is reviewed by all interested parties.
– Quality assurance activities performed by the software engineering team and the SQA group are governed by the plan.
• The plan identifies:
– evaluations to be performed
– audits and reviews to be performed
– standards that are applicable to the project
– procedures for error reporting and tracking
– documents to be produced by the SQA group
– amount of feedback provided to the software project team
• Participates in the development of the project's software process description.
– The software team selects a process for the work to be performed.
– The SQA group reviews the process description for compliance with organizational policy, internal software standards, externally imposed standards (e.g., ISO-9001), and other parts of the software project plan.
• Reviews software engineering activities to verify compliance with the defined software process.
– The SQA group identifies, documents, and tracks deviations from the process and verifies that corrections have been made.
• Audits designated software work products to verify compliance with those defined as part of the software process.
– The SQA group reviews selected work products; identifies, documents, and tracks deviations; verifies that corrections have been made; and
– periodically reports the results of its work to the project manager.
• Ensures that deviations in software work and work products are documented and handled according to a documented procedure.
– Deviations may be encountered in the project plan, process description, applicable standards, or technical work products.
• Records any noncompliance and reports to senior management.
– Noncompliance items are tracked until they are resolved.
• In addition to these activities, the SQA group coordinates the control and management of change and helps to collect and analyze software metrics.
FORMAL APPROACHES TO SQA
• It can be argued that a computer program is a mathematical object.
• A rigorous syntax and semantics can be defined for every programming language.
• It is also possible to similarly develop a rigorous approach to the specification of software requirements.
• If the requirements model (specification) and the programming language can be represented in a rigorous manner, it should be possible to apply mathematical proof of correctness to demonstrate that a program conforms exactly to its specifications.
• One of the methods to do so is known as statistical SQA.
Statistical SQA
• For software, statistical quality assurance implies the following steps:
1. Information about software defects is collected and categorized.
2. An attempt is made to trace each defect to its underlying cause (e.g., non-conformance to specifications, design error, violation of standards, poor communication with the customer).
3. Using the Pareto principle (80 percent of the defects can be traced to 20 percent of all possible causes), isolate the 20 percent (the "vital few").
4. Once the vital few causes have been identified, move to correct the problems that have caused the defects.
• To illustrate how statistical SQA works, assume that a software engineering organization collects information on defects for a period of one year.
• Some of the defects are uncovered as software is being developed.
• Others are encountered after the software has been released to its end-users.
• Although hundreds of different errors are uncovered, all can be tracked to one (or more) of the following causes:
• incomplete or erroneous specifications (IES)
• misinterpretation of customer communication (MCC)
• intentional deviation from specifications (IDS)
• violation of programming standards (VPS)
• error in data representation (EDR)
• inconsistent component interface (ICI)
• error in design logic (EDL)
• incomplete or erroneous testing (IET)
• inaccurate or incomplete documentation (IID)
• error in programming language translation of design (PLT)
• ambiguous or inconsistent human/computer interface (HCI)
• miscellaneous (MIS)
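Steps 1 to 3 of statistical SQA amount to tallying defects by cause code and ranking the causes. A sketch using the cause codes above; the defect log is invented purely for illustration.

```python
from collections import Counter

def vital_few(defect_log, coverage=0.8):
    """Return the smallest set of causes accounting for >= `coverage`
    of all logged defects (the Pareto "vital few")."""
    counts = Counter(defect_log)          # step 1: collect and categorize
    total = sum(counts.values())
    causes, covered = [], 0
    for cause, n in counts.most_common():  # step 3: rank by frequency
        causes.append(cause)
        covered += n
        if covered / total >= coverage:
            break
    return causes

# Invented one-year defect log using the cause codes above.
log = ["IES"] * 50 + ["MCC"] * 30 + ["VPS"] * 10 + ["MIS"] * 10
```

`vital_few(log)` yields the causes to attack first (step 4 of the process).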
• To apply statistical SQA, a table is built (Table 8.1: Data collection for statistical SQA).
[Figure 11.1: Requirements analysis as a bridge between system engineering and software design.]
[Figure: a module structure used to illustrate coupling, with modules a through k connected via a data argument list, a data structure, a control flag, and a global data area; modules a and d have no direct coupling.]
• No coupling:
– Modules a and d
– Subordinate to different modules.
– Each is unrelated and therefore no direct coupling occurs.
• Data coupling (low coupling):
– Modules c and a
– Module a is accessed via a conventional argument list, through which data are passed.
– As long as a simple argument list is present (i.e., simple data are passed; a one-to-one correspondence of items exists), low coupling is exhibited in this portion of the structure.
• Stamp coupling (a variation of data coupling):
– Modules a and b
– Found when a portion of a data structure (rather than simple arguments) is passed via a module interface.
• Control coupling (moderate):
– Modules d and e
– Characterized by passage of control between modules.
– Very common in most software designs.
– Occurs when a "control flag" (a variable that controls decisions in a subordinate or superordinate module) is passed between modules.
• External coupling (high):
– Occurs when modules are tied to an environment external to the software.
– For example, I/O couples a module to specific devices, formats, and communication protocols.
• Common coupling (high):
– Modules c, g, and k
– Occurs when a number of modules reference a global data area (e.g., a disk file or a globally accessible memory area).
• Example of common coupling:
– Module c initializes the item.
– Later, module g recomputes and updates the item.
– An error occurs and g updates the item incorrectly.
– Much later in processing, module k reads the item, attempts to process it, and fails, causing the software to abort.
– The apparent cause of the abort is module k; the actual cause, module g.
• Diagnosing problems in structures with considerable common coupling is time consuming and difficult.
• However, this does not mean that the use of global data is necessarily "bad."
• It does mean that a software designer must be aware of the potential consequences of common coupling and take special care to guard against them.
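The c/g/k failure scenario can be made concrete in a small sketch. With common coupling, module k is the apparent point of failure even though module g corrupted the shared item; with data coupling, the item travels through an argument list and the bad value is visible at the interface. The function names simply mirror the module labels from the figure.

```python
# Common coupling: modules c, g, and k all reference one global data area.
shared_item = 0

def module_c():            # initializes the item
    global shared_item
    shared_item = 10

def module_g():            # recomputes the item; here it updates incorrectly
    global shared_item
    shared_item = -1

def module_k():            # reads the item much later and fails
    if shared_item < 0:
        # apparent cause of the abort is k; the actual cause is g
        raise ValueError("invalid item")
    return shared_item

# Data coupling: the same check with the item passed as a simple argument,
# so a bad value is traceable at the call site rather than through a global.
def process_item(item):
    if item < 0:
        raise ValueError("invalid item")
    return item
```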
• Content coupling (highest):
– Occurs when one module makes use of data or control information maintained within the boundary of another module.
– Secondarily, content coupling occurs when branches are made into the middle of a module.
– This mode of coupling can and should be avoided.
• The coupling modes just discussed occur because of design decisions made when the structure was developed.
• Variants of external coupling, however, may be introduced during coding.
• For example, compiler coupling ties source code to specific (and often nonstandard) attributes of a compiler;
• Operating system (OS) coupling ties design and resultant code to operating system "hooks" that can create havoc when OS changes occur.
Software Testing
• "You're never done testing, the burden simply shifts from you (the software engineer) to your customer."
• "Every time the customer/user executes a computer program, the program is being tested."
• "You're done testing when you run out of time or you run out of money."
• This sobering fact underlines the importance of testing in software quality assurance activities.
• Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and code generation.
• Testing presents an interesting anomaly for the software engineer.
• During earlier software engineering activities, the engineer attempts to build software from an abstract concept to a tangible product.
• Now comes testing, wherein the engineer creates a series of test cases that are intended to "demolish" the software that has been built.
• In fact, testing is the one step in the software process that could be viewed (psychologically, at least) as destructive rather than constructive.
• But is it really destructive?
• The answer is no, since the objectives of testing are somewhat different than we might expect.
Testing Objectives
• Glen Myers states a number of rules that can serve well as testing objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
3. A successful test is one that uncovers an as-yet-undiscovered error.
• These objectives imply a dramatic change in viewpoint.
• They move counter to the commonly held view that a successful test is one in which no errors are found.
• Our objective is to design tests that systematically uncover different classes of errors and to do so with a minimum amount of time and effort.
• If testing is conducted successfully, it will uncover errors in the software.
• As a secondary benefit, testing demonstrates that software functions appear to be working according to specification, and that behavioral and performance requirements appear to have been met.
• In addition, data collected as testing is conducted provide a good indication of software reliability and some indication of software quality as a whole.
• Hence, testing cannot show the absence of errors and defects; it can show only that software errors and defects are present.
• It is important to keep this (rather gloomy) statement in mind as testing is being conducted.
Testing Principles
• Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide software testing.
• Davis suggests a set of testing principles:
• All tests should be traceable to customer requirements.
– As we have seen, the objective of software testing is to uncover errors.
– It follows that the most severe defects (from the customer's point of view) are those that cause the program to fail to meet its requirements.
• Tests should be planned long before testing begins.
– Test planning can begin as soon as the requirements model is complete.
– Detailed definition of test cases can begin as soon as the design model has been solidified.
– Therefore, all tests can be planned and designed before any code has been generated.
• The Pareto principle applies to software testing.
– Stated simply, the Pareto principle implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all program components.
– The problem, of course, is to isolate these suspect components and to thoroughly test them.
• Testing should begin "in the small" and progress toward testing "in the large."
– The first tests planned and executed generally focus on individual components.
– As testing progresses, focus shifts in an attempt to find errors in integrated clusters of components and ultimately in the entire system.
• Exhaustive testing is not possible.
– The number of path permutations for even a moderately sized program is exceptionally large.
– For this reason, it is impossible to execute every combination of paths during testing.
– It is possible, however, to adequately cover program logic and to ensure that all conditions in the component-level design have been exercised.
• To be most effective, testing should be conducted by an independent third party.
– By most effective, we mean testing that has the highest probability of finding errors (the primary objective of testing).
– The software engineer who created the system is not the best person to conduct all tests for the software.
Test Case Design
• Recalling the objectives of testing, we must design tests that have the highest likelihood of finding the most errors with a minimum amount of time and effort.
• A rich variety of test case design methods has evolved for software.
• These methods provide the developer with a systematic approach to testing.
• More important, the methods provide a mechanism that can help to ensure the completeness of tests and provide the highest likelihood for uncovering errors in software.
• Any engineered product (and most other things) can be tested in one of two ways:
1. Knowing the specified function that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function;
• This is termed black-box testing.
2. Knowing the internal workings of a product, tests can be conducted to ensure that all internal operations are performed according to specifications and all internal components have been adequately exercised.
• This approach is called white-box testing.
• When computer software is considered, black-box testing alludes to tests that are conducted at the software interface.
• Although they are designed to uncover errors, black-box tests are also used to:
– demonstrate that software functions are operational,
– show that input is properly accepted and output is correctly produced, and
– confirm that the integrity of external information (e.g., a database) is maintained.
• A black-box test examines some fundamental aspect of a system with little regard for the internal logical structure of the software.
• White-box testing of software is predicated on close examination of procedural detail.
• Logical paths through the software are tested by providing test cases that exercise specific sets of conditions and/or loops.
• The "status of the program" may be examined at various points to determine if the expected or asserted status corresponds to the actual status.
• It would seem that very thorough white-box testing would lead to "100 percent correct programs."
• However, the number of paths to be tested increases exponentially as the program size increases and can quickly become impractical.
• The way out is to exercise a limited number of important logical paths and data structures.
• The attributes of both black- and white-box testing can be combined to provide an approach that validates the software interface and selectively ensures that the internal workings of the software are correct.
White-box testing
• White-box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases.
• Using white-box testing methods, the software engineer can derive test cases that:
1. Guarantee that all independent paths within a module have been exercised at least once,
2. Exercise all logical decisions on their true and false sides,
3. Execute all loops at their boundaries and within their operational bounds, and
4. Exercise internal data structures to ensure their validity.
Basis path testing
• The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths.
• Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.
Flow Graph Notation
• The flow graph depicts logical control flow using the following notation:
[Figure: (A) a flowchart with statements numbered 1 through 11, and (B) the corresponding flow graph, showing nodes (with 2,3 and 4,5 merged), edges, and regions R1 through R4.]
• Areas bounded by edges and nodes are called regions.
• When counting regions, we include the area outside the graph as a region.
Cyclomatic Complexity
• Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program.
• When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program and provides us with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.
• An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.
• When stated in terms of a flow graph, an independent path must move along at least one edge that has not been traversed before the path is defined.
• The set of independent paths for the flow graph shown is:
– path 1: 1-11
– path 2: 1-2-3-4-5-10-1-11
– path 3: 1-2-3-6-8-9-10-1-11
– path 4: 1-2-3-6-7-9-10-1-11
• Note that each new path introduces a new edge.
• The path 1-2-3-4-5-10-1-2-3-6-8-9-10-1-11 is not considered to be an independent path because it is simply a combination of already specified paths and does not traverse any new edges.
• Paths 1, 2, 3, and 4 constitute a basis set for the flow graph shown.
• That is, if tests can be designed to force execution of these paths (a basis set), every statement in the program will have been guaranteed to be executed at least one time, and every condition will have been executed on its true and false sides.
• It should be noted that the basis set is not unique.
• In fact, a number of different basis sets can be derived for a given procedural design.
• How do we know how many paths to look for?
• The computation of cyclomatic complexity provides the answer.
• Cyclomatic complexity has a foundation in graph theory and provides us with an extremely useful software metric.
• Complexity is computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph G, is defined as
V(G) = E − N + 2
where E is the number of edges and N is the number of nodes.
3. Cyclomatic complexity, V(G), for a flow graph G, is also defined as
V(G) = P + 1
where P is the number of predicate nodes in the flow graph G.
• The cyclomatic complexity for the flow graph shown is…?
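Both formulas are trivial to evaluate mechanically. A sketch; the counts used in the usage note (11 edges, 9 nodes, 3 predicate nodes, 4 regions) are what the flow graph in these slides appears to have, so treat them as assumptions.

```python
def v_from_edges_nodes(num_edges, num_nodes):
    """Cyclomatic complexity via V(G) = E - N + 2."""
    return num_edges - num_nodes + 2

def v_from_predicates(num_predicates):
    """Cyclomatic complexity via V(G) = P + 1."""
    return num_predicates + 1
```

For a flow graph with 11 edges, 9 nodes, and 3 predicate nodes, both formulas give V(G) = 4, matching the 4 regions; the three methods should always agree for a well-formed flow graph.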
Deriving Test Cases
• The basis path testing method can be applied to a procedural design or to source code.
• Basis path testing can be performed as a series of steps:
1. Using the design or code as a foundation, draw a corresponding flow graph.
[Figure 17.4: PDL for test case design with flow graph nodes identified. PROCEDURE average: "This procedure computes the average of 100 or fewer numbers that lie between bounding values; it also computes the sum and the total number valid." INTERFACE RETURNS average, total.input, total.valid; INTERFACE ACCEPTS value, minimum, maximum.]
2. Determine the cyclomatic complexity of the resultant flow graph.
– The cyclomatic complexity of the flow graph shown in the previous slide is 6.
3. Determine a basis set of linearly independent paths.
– Path 1: 1-2-10-11-13
– Path 2: 1-2-10-12-13
– Path 3: 1-2-3-10-11-13
– Path 4: 1-2-3-4-5-8-9-2-…
– Path 5: 1-2-3-4-5-6-8-9-2-…
– Path 6: 1-2-3-4-5-6-7-8-9-2-…
• It is also worthwhile to identify predicate nodes for the derivation of test cases.
4. Prepare test cases that will force execution of each path in the basis set.
– Data should be chosen so that conditions at the predicate nodes are appropriately set as each path is tested.
– Some sample test cases that satisfy the basis set for this flow graph (average) are:
are:
• Path
1
test
case:
value(k)
=
valid
input,
where
k
<
i
for
2
≤
i
≤
100
value(i)
=
−
999
where
2
≤
i
≤
100
Expected
results:
Correct
average
based
on
k
values
and
proper
totals.
Note:
Path
1
cannot
be
tested
stand-‐alone
but
must
be
tested
as
part
of
path
4,
5,
and
6
tests.
• Path 2 test case:
value(1) = −999
Expected results: average = −999; other totals at initial values.
• Path 3 test case:
Attempt to process 101 or more values; the first 100 values should be valid.
Expected results: same as test case 1.
• Path 4 test case:
value(i) = valid input, where i < 100
value(k) < minimum, where k < i
Expected results: correct average based on k values and proper totals.
• Path 5 test case:
value(i) = valid input, where i < 100
value(k) > maximum, where k ≤ i
Expected results: correct average based on n values and proper totals.
• Path 6 test case:
value(i) = valid input, where i < 100
Expected results: correct average based on n values and proper totals.
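To make the path cases above concrete, here is one possible Python rendering of PROCEDURE average. The −999 sentinel, the 100-value cap, and the bounding test follow Figure 17.4; the variable names are paraphrased, so treat the details as an illustrative assumption rather than the textbook's exact PDL.

```python
def average(values, minimum, maximum):
    """Average up to 100 values that lie between minimum and maximum.

    A value of -999 ends the input. Returns (average, total_input,
    total_valid); the average is -999 when no valid value was seen.
    """
    total_input = total_valid = running_sum = 0
    for v in values:
        if v == -999 or total_input >= 100:   # sentinel or cap reached
            break
        total_input += 1
        if minimum <= v <= maximum:           # bounding test
            total_valid += 1
            running_sum += v
    avg = running_sum / total_valid if total_valid > 0 else -999
    return avg, total_input, total_valid
```

Feeding it value(1) = −999 exercises the Path 2 case (average = −999, totals at initial values), and an input containing an out-of-bounds value exercises the Path 4/5 flavor.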
• Each test case is executed and compared to expected results.
• Once all test cases have been completed, the tester can be sure that all statements in the program have been executed at least once.
• It is important to note that some independent paths (e.g., path 1) cannot be tested in stand-alone fashion.
• In such cases, these paths are tested as part of another path test.
• Self study: Control Structure Testing
Testing Strategies
• Testing is a set of activities that can be planned in advance and conducted systematically.
• For this reason, a template for software testing (a set of steps into which we can place specific test case design techniques and testing methods) should be defined for the software process.
• A number of software testing strategies have been proposed in the literature.
• All provide the software developer with a template for testing, and all have the following generic characteristics:
– Testing begins at the component level and works "outward" toward the integration of the entire computer-based system.
– Different testing techniques are appropriate at different points in time.
– Testing is conducted by the developer of the software and (for large projects) an independent test group.
– Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
• A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented, as well as high-level tests that validate major system functions against customer requirements.
• A strategy must provide guidance for the practitioner and a set of milestones for the manager.
• Because the steps of the test strategy occur at a time when deadline pressure begins to rise, progress must be measurable and problems must surface as early as possible.
Verification and Validation
[Figure: the testing spiral. Moving outward from code through design, requirements, and system engineering, the corresponding tests are unit testing, integration testing, validation testing, and system testing.]
• Initially, system engineering defines the role of software and leads to software requirements analysis, where the information domain, function, behavior, performance, constraints, and validation criteria for software are established.
• Moving inward along the spiral, we come to design and finally to coding.
• To develop computer software, we spiral inward along streamlines that decrease the level of abstraction on each turn.
• A strategy for software testing may also be viewed in the context of the spiral.
• Unit testing begins at the vortex of the spiral and concentrates on each unit (i.e., component) of the software as implemented in source code.
• Testing progresses by moving outward along the spiral to integration testing, where the focus is on design and the construction of the software architecture.
• Taking another turn outward on the spiral, we encounter validation testing, where requirements established as part of software requirements analysis are validated against the software that has been constructed.
• Finally, we arrive at system testing, where the software and other system elements are tested as a whole.
• To test computer software, we spiral out along streamlines that broaden the scope of testing with each turn.
• Considering the process from a procedural point of view, testing within the context of software engineering is actually a series of four steps that are implemented sequentially.
[Figure 18.2: software testing steps. Testing "direction" moves from unit tests of the code, through integration tests of the design, to high-order tests of the requirements.]
• Initially, tests focus on each component individually, ensuring that it functions properly as a unit.
• Hence the name unit testing.
• Unit testing makes heavy use of white-box testing techniques, exercising specific paths in a module's control structure to ensure complete coverage and maximum error detection.
• Next, components must be assembled or integrated to form the complete software package.
• Integration testing addresses the issues associated with the dual problems of verification and program construction.
• Black-box test case design techniques are the most prevalent during integration, although a limited amount of white-box testing may be used to ensure coverage of major control paths.
• After the software has been integrated (constructed), a set of high-order tests is conducted.
• Validation criteria (established during requirements analysis) must be tested.
• Validation testing provides final assurance that software meets all functional, behavioral, and performance requirements.
• Black-box testing techniques are used exclusively during validation.
• The last high-order testing step falls outside the boundary of software engineering and into the broader context of computer system engineering.
• Software, once validated, must be combined with other system elements (e.g., hardware, people, databases).
• System testing verifies that all elements mesh properly and that overall system function/performance is achieved.
Criteria for completion of testing
• A classic question arises every time software testing is discussed: "When are we done testing—how do we know that we've tested enough?"
• Sadly, there is no definitive answer to this question, but there are a few pragmatic responses and early attempts at empirical guidance.
• Musa and Ackerman suggest a response that is based on statistical criteria:
"No, we cannot be absolutely certain that the software will never fail, but relative to a theoretically sound and experimentally validated statistical model, we have done sufficient testing to say with 95 percent confidence that the probability of 1000 CPU hours of failure-free operation in a probabilistically defined environment is at least 0.995."
• Using statistical modeling and software reliability theory, models of software failures (uncovered during testing) as a function of execution time can be developed.
• A version of the failure model, called a logarithmic Poisson execution-time model, takes the form:

f(t) = (1/p) ln [λ0 p t + 1]

– where f(t) = the cumulative number of failures that are expected to occur once the software has been tested for a certain amount of execution time, t,
– λ0 = the initial software failure intensity (failures per time unit) at the beginning of testing,
– p = the exponential reduction in failure intensity as errors are uncovered and repairs are made.
• The instantaneous failure intensity, λ(t), can be derived by taking the derivative of f(t):

λ(t) = λ0 / (λ0 p t + 1)
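The two formulas can be sketched numerically in Python; the parameter values below are illustrative assumptions, not data from the text:

```python
import math

def cumulative_failures(t, l0, p):
    """f(t) = (1/p) * ln(l0*p*t + 1): expected cumulative failures after execution time t."""
    return (1.0 / p) * math.log(l0 * p * t + 1.0)

def failure_intensity(t, l0, p):
    """lambda(t) = l0 / (l0*p*t + 1): instantaneous failure intensity, the derivative of f(t)."""
    return l0 / (l0 * p * t + 1.0)

# Assumed parameters for illustration: 10 failures per hour at the start of
# testing, with an exponential reduction factor p of 0.05.
l0, p = 10.0, 0.05
for t in (0, 10, 100):
    print(t, round(cumulative_failures(t, l0, p), 2), round(failure_intensity(t, l0, p), 3))
```

Running this shows the intensity falling as execution time grows, which is the drop-off of errors the model is meant to predict.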
• Using this relationship, testers can predict the drop-off of errors as testing progresses.
• The actual error intensity can be plotted and compared with the model's prediction.
CHAPTER 18: SOFTWARE TESTING STRATEGIES
[Figure: predicted and actual failure intensity plotted against execution time, t]
• If the actual data gathered during testing and the logarithmic Poisson execution-time model are reasonably close to one another over a number of data points, the model can be used to predict the total testing time required to achieve an acceptably low failure intensity.
• By collecting metrics during software testing and making use of existing software reliability models, it is possible to develop meaningful guidelines for answering the question: "When are we done testing?"
• There is little debate that further work remains to be done before quantitative rules for testing can be established, but the empirical approaches that currently exist are considerably better than raw intuition.
Integration Testing
• You might ask a seemingly legitimate question once all modules have been unit tested: "If they all work individually, why do you doubt that they'll work when we put them together?"
• The problem, of course, is "putting them together"—interfacing.
– Data can be lost across an interface;
– One module can have an inadvertent, adverse effect on another;
– Subfunctions, when combined, may not produce the desired major function;
– Individually acceptable imprecision may be magnified to unacceptable levels;
– Global data structures can present problems.
• Sadly, the list goes on and on.
• Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing.
• The objective is to take unit-tested components and build a program structure that has been dictated by design.
• There is often a tendency to attempt nonincremental integration; that is, to construct the program using a "big bang" approach.
• All components are combined in advance.
• The entire program is tested as a whole.
• And chaos usually results!
• A set of errors is encountered.
• Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program.
• Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.
• Incremental integration is the antithesis of the big bang approach.
• The program is constructed and tested in small increments, where:
– Errors are easier to isolate and correct;
– Interfaces are more likely to be tested completely;
– A systematic test approach may be applied.
• The following slides discuss a number of different incremental integration strategies.
Top-Down Integration
• Top-down integration testing is an incremental approach to construction of the program structure.
• Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program).
• Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.

[Figure 18.6: Top-down integration. A module hierarchy with M1 at the top; M2, M3, and M4 at the second level; M5, M6, and M7 at the third; M8 at the lowest level.]
• Referring to the figure in the previous slide, depth-first integration would integrate all components on a major control path of the structure.
• Selection of a major path is somewhat arbitrary and depends on application-specific characteristics.
• For example, selecting the left-hand path, components M1, M2, M5 would be integrated first.
• Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated.
• Then, the central and right-hand control paths are built.
• Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally.
• From the figure, components M2, M3, and M4 (a replacement for stub S4) would be integrated first.
• The next control level, M5, M6, and so on, follows.
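The two integration orders can be sketched as tree traversals over the figure's hierarchy; the exact parent-child edges below are an assumption chosen to be consistent with the slide's example (M5 under M2, M8 under M5):

```python
from collections import deque

# Hypothetical control hierarchy consistent with the slide's example:
# M1 controls M2, M3, M4; M5 and M6 are under M2; M8 under M5; M7 under M4.
hierarchy = {
    "M1": ["M2", "M3", "M4"],
    "M2": ["M5", "M6"],
    "M5": ["M8"],
    "M4": ["M7"],
}

def depth_first_order(root):
    """Integrate all components on one major control path before moving on."""
    order, stack = [], [root]
    while stack:
        module = stack.pop()
        order.append(module)
        # Push children in reverse so the left-hand path is explored first.
        stack.extend(reversed(hierarchy.get(module, [])))
    return order

def breadth_first_order(root):
    """Incorporate all components directly subordinate at each level."""
    order, queue = [], deque([root])
    while queue:
        module = queue.popleft()
        order.append(module)
        queue.extend(hierarchy.get(module, []))
    return order

print(depth_first_order("M1"))    # left-hand path M1, M2, M5, M8 comes first
print(breadth_first_order("M1"))  # level by level: M1, then M2, M3, M4, ...
```

Under these assumed edges, the depth-first order begins M1, M2, M5, M8, matching the left-hand path described above, while the breadth-first order completes each level before descending.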
• The integration process is performed in a series of five steps:
1. The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing (discussed later) may be conducted to ensure that new errors have not been introduced.
• The process continues from step 2 until the entire program structure is built.
Benefits of top-down integration
• The top-down integration strategy verifies major control or decision points early in the test process.
• In a well-factored program structure, decision making occurs at upper levels in the hierarchy and is therefore encountered first.
• If major control problems do exist, early recognition is essential.
• If depth-first integration is selected, a complete function of the software may be implemented and demonstrated.
Problems with top-down integration
• The top-down strategy sounds relatively uncomplicated, but in practice, logistical problems can arise.
• The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels.
• Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure.
• The tester is left with three choices:
1. Delay many tests until stubs are replaced with actual modules,
2. Develop stubs that perform limited functions that simulate the actual module, or
3. Integrate the software from the bottom of the hierarchy upward.
• The first approach (delay tests until stubs are replaced by actual modules) causes us to lose some control over correspondence between specific tests and incorporation of specific modules.
• This can lead to difficulty in determining the cause of errors and tends to violate the highly constrained nature of the top-down approach.
• The second approach is workable but can lead to significant overhead, as stubs become more and more complex.
• The third approach, called bottom-up testing, is discussed in the next section.
Bottom-up Integration
• Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure).
• Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated.
• A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
• Integration follows the pattern illustrated below:

[Figure 18.7: Bottom-up integration. Clusters 1, 2, and 3 are each tested with a driver (D1, D2, D3); clusters 1 and 2 are subordinate to Ma, cluster 3 to Mb; Ma and Mb are in turn subordinate to Mc.]
• Components are combined to form clusters 1, 2, and 3.
• Each of the clusters is tested using a driver (shown as a dashed block).
• Components in clusters 1 and 2 are subordinate to Ma.
• Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma.
• Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb.
• Both Ma and Mb will ultimately be integrated with component Mc, and so forth.
• As integration moves upward, the need for separate test drivers lessens.
• In fact, if the top two levels of program structure are integrated top down, the number of drivers can be reduced substantially and integration of clusters is greatly simplified.
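A cluster driver can be sketched as follows; the components, the subfunction, and the driver itself are all hypothetical. The point is that during bottom-up testing a throwaway driver, not a higher-level module, coordinates test-case input and output:

```python
# Hypothetical low-level components combined into a cluster (a "build")
# performing one software subfunction: parsing and totaling order amounts.
def parse_amount(text):
    return float(text)

def total(amounts):
    return sum(amounts)

def cluster_subfunction(lines):
    return total(parse_amount(line) for line in lines)

def driver():
    """A throwaway control program that feeds test-case input to the cluster
    and checks its output; it is removed once the cluster is integrated with
    its superordinate module."""
    cases = [(["1.50", "2.25"], 3.75), ([], 0.0)]
    for inputs, expected in cases:
        actual = cluster_subfunction(inputs)
        assert actual == expected, f"{inputs}: expected {expected}, got {actual}"
    return "cluster tested"

print(driver())
```

Because the cluster's own subordinates are already integrated, no stubs are needed; the driver is the only scaffolding, and it is discarded in step 4.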
Regression Testing
• Each time a new module is added as part of integration testing, the software changes.
– New data flow paths are established;
– New I/O may occur;
– New control logic is invoked.
• These changes may cause problems with functions that previously worked flawlessly.
• In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.
• In a broader context, successful tests (of any kind) result in the discovery of errors, and errors must be corrected.
• Whenever software is corrected, some aspect of the software configuration (the program, its documentation, or the data that support it) is changed.
• Regression testing is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors.
• Regression testing may be conducted manually, by re-executing a subset of all test cases, or using automated capture/playback tools.
• Capture/playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison.
• The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
1. A representative sample of tests that will exercise all software functions.
2. Additional tests that focus on software functions that are likely to be affected by the change.
3. Tests that focus on the software components that have been changed.
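Assembling the three classes can be sketched as a simple filter over tagged test cases; the test-case records, tags, and helper below are hypothetical illustrations, not a prescribed tool:

```python
# Hypothetical test-case records tagged with the functions/components they touch.
test_cases = [
    {"name": "smoke_all_functions", "kind": "representative", "targets": {"*"}},
    {"name": "checkout_flow", "kind": "focused", "targets": {"checkout"}},
    {"name": "report_totals", "kind": "focused", "targets": {"reporting"}},
    {"name": "payment_unit", "kind": "component", "targets": {"payment_module"}},
]

def regression_suite(changed_components, affected_functions):
    """Build the three-class regression suite described above:
    1. a representative sample, 2. tests for likely-affected functions,
    3. tests for the changed components themselves."""
    suite = []
    for case in test_cases:
        if case["kind"] == "representative":
            suite.append(case)                          # class 1
        elif case["targets"] & affected_functions:
            suite.append(case)                          # class 2
        elif case["targets"] & changed_components:
            suite.append(case)                          # class 3
    return [case["name"] for case in suite]

# A change to the payment module that is likely to affect checkout:
print(regression_suite({"payment_module"}, {"checkout"}))
```

Tests outside all three classes (here, `report_totals`) are deliberately excluded, which is the selectivity the next bullets argue for.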
• As integration testing proceeds, the number of regression tests can grow quite large.
• Therefore, the regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions.
• It is impractical and inefficient to re-execute every test for every program function once a change has occurred.