
Software Engineering-I
The Product and the Process: Evolving Role of Software
• Today, software takes on a dual role.
• It is a product and, at the same time, the tool/method for delivering a product.
• As a product, it delivers the computing potential embodied by computer hardware to a user.
• As the tool used to deliver the product, it acts as:
– the basis for the control of the computer (operating systems)
– the communication of information (networks), and
– the creation and control of other programs (software tools and environments).
• Software delivers the most important product of our time—information.
• The central role played by software in today's scenario has come about over the last 50 years.
• During the initial years, development of software was considered to be an art.
• It was more of a personal activity carried out by hobbyists.
• Software was meant to solve fragmented, individual and small-scale problems.
• As a result, no proper method, process or approach was standardised.
• This led to a number of problems when developing software.
• Some of them were:
– Why does it take so long to get software finished?
– Why are development costs so high?
– Why can't we find all the errors before we give the software to customers?
– Why do we continue to have difficulty in measuring progress as software is being developed?
• The lone programmer of an earlier era has been replaced by a team of software specialists.
• Each team focuses on one part of the technology required to deliver a complex application.
• And yet, the same questions asked of the lone programmer are still being asked.
• This list of questions is not exhaustive; there are many other questions.
• It is such questions about software, and the manner in which it is developed, that have led to the adoption of software engineering practice.
Software: Characteristics
• To gain an understanding of software (and ultimately an understanding of software engineering), it is important to examine the characteristics of software that make it different from other things that human beings build.
• When hardware is built, the human creative process (analysis, design, construction, testing) is ultimately translated into a physical form.
• Software is a logical rather than a physical system element. Therefore, software has characteristics that are considerably different from those of hardware.
1. Software is developed or engineered; it is not manufactured in the classical sense.
• Although some similarities exist between software development and hardware manufacture, the two activities are fundamentally different.
• In both activities, high quality is achieved through good design, but the manufacturing phase for hardware can introduce quality problems that are nonexistent (or easily corrected) for software.
• Both activities are dependent on people, but the relationship between people applied and work accomplished is entirely different (as we shall see later).
• Both activities require the construction of a "product", but the approaches are different.
• Software costs are concentrated in engineering. This means that software projects cannot be managed as if they were manufacturing projects.
2. Software doesn't "wear out."
[Figure 1.1: Failure curve for hardware (the "bathtub curve"), showing failure rate against time: high "infant mortality" failures early in a component's life, a low steady rate in between, and "wear out" failures late in life.]
[Figure 1.2: Idealized and actual failure curves for software. Software does not wear out, but as changes are made, it is likely that some new defects will be introduced, causing the actual failure-rate curve to spike and its minimum failure rate to rise due to side effects. Software engineering methods strive to reduce the magnitude of the spikes and the slope of the actual curve.]
• Software deteriorates due to change.
• When a hardware component wears out, it is replaced by a spare part.
• There are no software spare parts.
• Every software failure indicates an error in design or in the process through which design was translated into machine-executable code.
• Therefore, software maintenance involves considerably more complexity than hardware maintenance.
3. Although the industry is moving toward component-based assembly, most software continues to be custom built.
• For instance, each IC chip has a part number, a defined and validated function, a well-defined interface, and a standard set of integration guidelines.
• Hence, it can be ordered off the shelf.
• In general, as an engineering discipline evolves, a collection of standard design components is created.
• In the hardware world, component reuse is a natural part of the engineering process.
• In the software world, it is something that has only begun to be achieved on a broad scale.
• Ideally, a software component should be designed and implemented so that it can be reused in many different programs.
• However, it is only very recently that the concept of off-the-shelf reusable software components has begun to take root.
• In the 1960s, scientific subroutine libraries were created that were reusable in a broad array of engineering and scientific applications.
• These subroutine libraries reused well-defined algorithms in an effective manner but had a limited domain of application.
• Today, this view has been extended to encompass not only algorithms but also data structures.
• Modern reusable components encapsulate both data and the processing applied to the data, enabling the software engineer to create new applications from reusable parts.
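As a minimal sketch of this idea (the class and the two uses are invented for illustration), a component that packages data together with the processing applied to it can be reused unchanged across quite different applications:

```python
class Stack:
    """A reusable component: the data (items) and the processing
    (push/pop) applied to it are packaged behind one interface."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def __len__(self):
        return len(self._items)


# The same component serves very different applications, unmodified:
undo_history = Stack()           # an editor's undo feature
undo_history.push("insert 'a'")

call_frames = Stack()            # an interpreter's call stack
call_frames.push("main()")
print(len(undo_history), len(call_frames))  # -> 1 1
```

Because callers depend only on the interface (push/pop), the component can be validated once and then ordered "off the shelf", much like the IC chip above.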
Software Applications
• Software application domains can be broadly categorised into two categories:
– Determinate applications
– Indeterminate applications
• Determinate: software that accepts data in a predefined order, executes the analysis algorithm(s) without interruption, and produces resultant data is a case of a determinate application.
• E.g., an engineering analysis program.
• Indeterminate: software that accepts inputs with varied content and arbitrary timing, executes algorithms that can be interrupted by external conditions, and produces output that varies as a function of environment and time is said to have an indeterminate characteristic.
• E.g., a multi-user OS.
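The contrast can be sketched in code (a toy illustration only; the function names and event types are invented for this example). The determinate program reads its input in a fixed order and runs to completion; the indeterminate one reacts to events whose content and arrival order are not known in advance:

```python
# Determinate: data in a predefined order, algorithm runs without
# interruption, and the result depends only on the input.
def stress_analysis(loads):
    """Toy 'engineering analysis program': one uninterrupted pass."""
    return {"total": sum(loads), "peak": max(loads)}


# Indeterminate: inputs with varied content and arbitrary timing;
# handling depends on what arrives and when (as in a multi-user OS).
def event_loop(events):
    responses = []
    for kind, payload in events:  # order and content not known in advance
        if kind == "keystroke":
            responses.append(f"echo {payload}")
        elif kind == "interrupt":
            responses.append("service interrupt")  # external condition preempts work
        else:
            responses.append("ignore")
    return responses


print(stress_analysis([10, 25, 5]))   # same input always gives the same output
print(event_loop([("keystroke", "a"), ("interrupt", 7)]))
```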
• It is somewhat difficult to develop meaningful generic categories for software applications.
• As software complexity grows, neat compartmentalization disappears.
• However, the following software areas indicate the breadth of potential applications:
• System software
• Real-time software
• Business software
• Engineering and scientific software
• Embedded software
• Personal computing software
• Web-based software
• Artificial intelligence software
• System software: a collection of programs written to service other programs.
• Some system software (e.g., compilers, editors, and file management utilities) processes complex, but determinate, information structures.
• Other systems applications (e.g., operating system components, drivers, telecommunications processors) process largely indeterminate data.
• In either case, the system software area is characterized by:
– heavy interaction with computer hardware
– heavy usage by multiple users
– concurrent operation that requires scheduling
– resource sharing
– sophisticated process management
– complex data structures
– multiple external interfaces.
• Real-time software: software that monitors/analyzes/controls real-world events as they occur is called real time.
• Elements of real-time software include:
– a data gathering component that collects and formats information from an external environment
– an analysis component that transforms information as required by the application
– a control/output component that responds to the external environment
– and a monitoring component that coordinates all other components so that real-time response (typically ranging from 1 millisecond to 1 second) can be maintained.
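The four elements above can be sketched as a single control loop (a schematic only; the sensor values, the alarm threshold, and the 100 ms deadline are invented for this illustration):

```python
import time


def gather(raw):
    """Data gathering: collect and format information from the environment."""
    return float(raw)


def analyze(value, threshold=100.0):
    """Analysis: transform the information as the application requires."""
    return value > threshold


def control(alarm):
    """Control/output: respond to the external environment."""
    return "RAISE ALARM" if alarm else "ok"


def monitor(samples, deadline_s=0.1):
    """Monitoring: coordinate the other components and check that each
    response stays within the real-time deadline (here, 100 ms)."""
    responses = []
    for raw in samples:
        start = time.perf_counter()
        responses.append(control(analyze(gather(raw))))
        if time.perf_counter() - start > deadline_s:
            responses[-1] = "DEADLINE MISSED"
    return responses


print(monitor(["42.0", "120.5"]))  # -> ['ok', 'RAISE ALARM']
```

The defining property is the deadline check in the monitoring component: a real-time system is judged not only on producing the right output but on producing it within the required response time.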
• Business software: business information processing is the largest single software application area.
• Discrete "systems" (e.g., payroll, accounts receivable/payable, inventory) have evolved into management information system (MIS) software that accesses one or more large databases containing business information.
• Applications in this area restructure existing data in a way that facilitates business operations or management decision making.
• In addition to conventional data processing applications, business software also encompasses interactive computing (e.g., point-of-sale transaction processing).
• Engineering and scientific software: has been characterized by "number crunching" algorithms.
• Applications range from astronomy to volcanology, from automotive stress analysis to space shuttle orbital dynamics, and from molecular biology to automated manufacturing.
• However, modern applications within the engineering/scientific area are moving away from conventional numerical algorithms.
• Computer-aided design, system simulation, and other interactive applications have begun to take on real-time and even system software characteristics.
• Embedded software: intelligent products have become commonplace in nearly every consumer and industrial market.
• Embedded software resides in ROMs and is used to control products and systems for the consumer and industrial markets.
• Embedded software can perform very limited and esoteric functions (e.g., keypad control for a microwave oven) or provide significant function and control capability (e.g., digital functions in an automobile such as fuel control, dashboard displays, and braking systems).
• Personal computing software: the personal computer software market has burgeoned over the past two decades.
• Word processing, spreadsheets, computer graphics, multimedia, entertainment, database management, personal and business financial applications, and external network and database access are only a few of hundreds of applications.
• Web-based software: the Web pages retrieved by a browser are software that incorporates executable instructions (e.g., CGI, HTML, Perl, or Java) and data (e.g., hypertext and a variety of visual and audio formats).
• In essence, the network becomes a massive computer providing an almost unlimited software resource.
• Artificial intelligence software: artificial intelligence (AI) software makes use of nonnumerical algorithms to solve complex problems that are not amenable to computation or straightforward analysis.
• Expert systems, also called knowledge-based systems, pattern recognition (image and voice), artificial neural networks, theorem proving, and game playing are representative of applications within this category.
Software Myths
• Software myths have given rise to misleading attitudes that have caused serious problems for managers and technical people alike.
• Many causes of a software affliction can be traced to a mythology that arose during the early history of software development.
• Today, most knowledgeable professionals recognize these myths.
• However, old attitudes and habits are difficult to modify, and remnants of software myths are still believed.
• Myths can be categorised into:
– Management myths
– Customer myths
– Practitioner's myths
• Management myths:
• Managers with software responsibility, like managers in most disciplines, are often under pressure to maintain budgets, keep schedules from slipping, and improve quality.
• To relieve this pressure, a manager may start to believe in any of the following myths:
• Myth: "We already have a book that's full of standards and procedures for building software; won't that provide my people with everything they need to know?"

• Reality:
• The book of standards may very well exist, but is it used?
• Are software practitioners aware of its existence?
• Does it reflect modern software engineering practice?
• Is it complete?
• Is it streamlined to improve time to delivery while still maintaining a focus on quality?
• In many cases, the answer to all of these questions is "no."
• Myth: "My people have state-of-the-art software development tools; after all, we buy them the newest computers."

• Reality: It takes much more than the latest model mainframe, workstation, or PC to do high-quality software development.
• Computer-aided software engineering (CASE) tools are more important than hardware for achieving good quality and productivity, yet the majority of software developers still do not use them effectively.
• Myth: "If we get behind schedule, we can add more programmers and catch up (sometimes called the Mongolian horde concept)."
 
• Reality:
• Software development is not a mechanistic process like manufacturing.
• In the words of Brooks: "adding people to a late software project makes it later."
• At first, this statement may seem counterintuitive.
• However, as new people are added, people who were working must spend time educating the newcomers, thereby reducing the amount of time spent on productive development effort.
• People can be added, but only in a planned and well-coordinated manner.
• Myth: If I decide to outsource the software project to a third party, I can just relax and let that firm build it.

• Reality: If an organization does not understand how to manage and control software projects internally, it will invariably struggle when it outsources software projects.
• Customer myths:
• A customer who requests computer software may be a person at the next desk, a technical group down the hall, the marketing/sales department, or an outside company that has requested software under contract.
• In many cases, the customer believes myths about software because software managers and developers do little to correct misinformation.
• Myths lead to false expectations (by the customer) and, ultimately, dissatisfaction with the developer.
• Myth: A general statement of objectives is sufficient to begin writing programs—we can fill in the details later.

• Reality: A poor up-front definition is the major cause of failed software efforts.
• A formal and detailed description of the information domain, function, behavior, performance, interfaces, design constraints, and validation criteria is essential.
• These characteristics can be determined only after thorough communication between customer and developer.
• Myth: Project requirements continually change, but change can be easily accommodated because software is flexible.

• Reality: It is true that software requirements change, but the impact of change varies with the time at which it is introduced.
• The figure below illustrates the impact of change.
[Figure 1.3: The impact of change. The cost to change software grows with time: relative to a change made during definition, a change made during development costs 1.5–6×, and a change made after release costs 60–100×.]
• If serious attention is given to up-front definition, early requests for change can be accommodated easily.
• The customer can review requirements and recommend modifications with relatively little impact on cost.
• When changes are requested during software design, the cost impact grows rapidly.
• Resources have been committed and a design framework has been established.
• Change can cause upheaval that requires additional resources and major design modification, that is, additional cost.
• Changes in function, performance, interface, or other characteristics during implementation (code and test) have a severe impact on cost.
• Change, when requested after software is in production, can be over an order of magnitude more expensive than the same change requested earlier.
 
• Practitioner's myths:
• Myths that are still believed by software practitioners have been fostered by 50 years of programming culture.
• During the early days of software, programming was viewed as an art form.
• Old ways and attitudes die hard.
• Myth: Once we write the program and get it to work, our job is done.

• Reality: Someone once said that "the sooner you begin 'writing code', the longer it'll take you to get done."
• Industry data indicate that between 60 and 80 percent of all effort expended on software will be expended after it is delivered to the customer for the first time.
• Myth: Until I get the program "running" I have no way of assessing its quality.
• Reality: One of the most effective software quality assurance mechanisms can be applied from the inception of a project—the formal technical review.
• Software reviews are a "quality filter" that have been found to be more effective than testing for finding certain classes of software defects.
• Myth: The only deliverable work product for a successful project is the working program.

• Reality: A working program is only one part of a software configuration that includes many elements.
• Documentation provides a foundation for successful engineering and, more important, guidance for software support.
• Myth: Software engineering will make us create voluminous and unnecessary documentation and will invariably slow us down.

• Reality: Software engineering is not about creating documents.
• It is about creating quality.
• Better quality leads to reduced rework. And reduced rework results in faster delivery times.
In conclusion…
• Many software professionals recognize the fallacy of the myths just described.
• Regrettably, habitual attitudes and methods foster poor management and technical practices, even when reality dictates a better approach.
• Recognition of software realities is the first step toward formulation of practical solutions for software engineering.
Software Crisis
• Many industry observers have characterized the problems associated with software development as a "crisis."
• The set of problems that are encountered in the development of computer software is not limited to software that "doesn't function."

• Rather, the crisis encompasses problems associated with:
– how we develop software,
– how we support a growing volume of existing software,
– how we can expect to keep pace with a growing demand for more software.
• However, it is also a fact that software people succeed more often than they fail.
• It is also true that the software crisis predicted 30 years ago never seemed to materialize.
• The industry prospers in spite of the above-mentioned problems.
• And yet, things would be much better if we could find and broadly apply a cure.
Software Engineering: A layered technology
• Although hundreds of authors have developed personal definitions of software engineering, a definition proposed by Fritz Bauer still serves as the basis:
Software engineering is the establishment and use of sound engineering principles in order to obtain economically software that is reliable and works efficiently on real machines.
• However, this definition gives rise to questions that continue to challenge software engineers:
• What "sound engineering principles" can be applied to computer software development?
• How do we "economically" build software so that it is "reliable"?
• What is required to create computer programs that work "efficiently" on not one but many different "real machines"?
• The IEEE has developed a more comprehensive definition when it states:
Software Engineering:
(1) The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.
(2) The study of approaches as in (1).
Process, Methods, and Tools
• Software engineering is a layered technology.
• Referring to the figure below, any engineering approach (including software engineering) must rest on an organizational commitment to quality.
[Figure 2.1: Software engineering layers: tools, resting on methods, resting on process, resting on a quality focus.]
• Total quality management and similar philosophies foster a continuous process improvement culture, and this culture ultimately leads to the development of increasingly more mature approaches to software engineering.
• The bedrock that supports software engineering is a quality focus.
• The foundation for software engineering is the process layer.
• The software engineering process is the glue that holds the technology layers together and enables rational and timely development of computer software.
• Process defines a framework for a set of key process areas that must be established for effective delivery of software engineering technology.
• The key process areas form the basis for:
– management control of software projects
– establishing the context in which technical methods are applied
– producing work products (models, documents, data, reports, forms, etc.)
– establishing milestones
– ensuring quality
– managing change properly.
• Software engineering methods provide the technical how-to's for building software.
• Methods encompass a broad array of tasks that include:
– requirements analysis,
– design,
– program construction,
– testing, and
– support.
• Software engineering methods rely on a set of basic principles that govern each area of the technology and include modeling activities and other descriptive techniques.
• Software engineering tools provide automated or semi-automated support for the process and the methods.
• When tools are integrated so that information created by one tool can be used by another, a system for the support of software development, called computer-aided software engineering (CASE), is established.
• CASE combines software, hardware, and a software engineering database (a repository containing important information about analysis, design, program construction, and testing) to create a software engineering environment analogous to CAD/CAE (computer-aided design/engineering) for hardware.
A Generic View of Software Engineering
• Engineering is the analysis, design, construction, verification, and management of technical (or social) entities.
• Regardless of the entity to be engineered, the following questions must be asked and answered:
• What is the problem to be solved?
• What characteristics of the entity are used to solve the problem?
• How will the entity (and the solution) be realized?
• How will the entity be constructed?
• What approach will be used to uncover errors that were made in the design and construction of the entity?
• How will the entity be supported over the long term, when corrections, adaptations, and enhancements are requested by users of the entity?
• The work associated with software engineering can be categorized into three generic phases, regardless of application area, project size, or complexity:
– The definition phase
– The development phase
– The support phase
• Each phase addresses one or more of the questions noted previously.
The definition phase
• Focuses on what.
• During definition, the software engineer attempts to identify:
– what information is to be processed
– what function and performance are desired
– what system behavior can be expected
– what interfaces are to be established
– what design constraints exist
– what validation criteria are required to define a successful system.
• The key requirements of the system and the software are identified.
• The methods applied during the definition phase will vary depending on the software engineering paradigm (or combination of paradigms) that is applied.
• Irrespective of the paradigm chosen, three major tasks will occur in some form:
– system or information engineering
– software project planning
– requirements analysis
The development phase
• Focuses on how.
• During development, a software engineer attempts to define:
– how data are to be structured
– how function is to be implemented within a software architecture
– how procedural details are to be implemented
– how interfaces are to be characterized
– how the design will be translated into a programming language
– how testing will be performed.
• The methods applied during the development phase will vary, but three specific technical tasks should always occur:
– software design
– code generation
– software testing
The support phase
• Focuses on change.
• Change is associated with:
– error correction
– adaptations required as the software's environment evolves
– changes due to enhancements brought about by changing customer requirements.
• The support phase reapplies the steps of the definition and development phases but does so in the context of existing software.
• Four types of change are encountered during the support phase:
1. Correction:
• Even with the best quality assurance activities, it is likely that the customer will uncover defects in the software.
• Corrective maintenance changes the software to correct defects.

2. Adaptation:
• Over time, the original environment (e.g., CPU, operating system, business rules, external product characteristics) for which the software was developed is likely to change.
• Adaptive maintenance results in modification to the software to accommodate changes to its external environment.
3. Enhancement:
• As software is used, the customer/user will recognize additional functions that will provide benefit.
• Perfective maintenance extends the software beyond its original functional requirements.
4. Prevention:
• Computer software deteriorates due to change.
• Because of this, preventive maintenance, often called software reengineering, must be conducted to enable the software to serve the needs of its end users.
• In essence, preventive maintenance makes changes to computer programs so that they can be more easily corrected, adapted, and enhanced.
• The phases and related steps just described are normally complemented by a number of umbrella activities.
• Typical activities in this category include:
– Software project tracking and control
– Formal technical reviews
– Software quality assurance
– Software configuration management
– Document preparation and production
– Reusability management
– Measurement
– Risk management
• Umbrella activities are applied throughout the software process.
THE SOFTWARE PROCESS
• A software process can be characterized as a framework.
• This framework is established by defining a small number of framework activities that are applicable to all software projects, regardless of their size or complexity.
• A number of task sets—each a collection of software engineering work tasks, project milestones, work products, and quality assurance points—enable the framework activities to be adapted to the characteristics of the software project and the requirements of the project team.
• Finally, umbrella activities—such as software quality assurance, software configuration management, and measurement—overlay the process model.
• Umbrella activities are independent of any one framework activity and occur throughout the process.
[Figure: The common process framework. Framework activities contain task sets; each task set comprises tasks, milestones and deliverables, and SQA points. Umbrella activities overlay the entire framework.]
•  In recent years, there has been a significant emphasis on “process maturity.”
•  The Software Engineering Institute (SEI) has developed a comprehensive model predicated on a set of software engineering capabilities that should be present as organizations reach different levels of process maturity.
•  To determine an organization’s current state of process maturity, the SEI uses an assessment that results in a five-point grading scheme.
•  The grading scheme determines compliance with a capability maturity model (CMM).
•  It defines key activities required at different levels of process maturity.
•  The SEI approach provides a measure of the global effectiveness of a company's software engineering practices and establishes five process maturity levels that are defined in the following manner:
•  Level 1: Initial
–  The software process is characterized as ad hoc and occasionally even chaotic.
–  Few processes are defined, and success depends on individual effort.
•  Level 2: Repeatable
–  Basic project management processes are established to track cost, schedule, and functionality.
–  The necessary process discipline is in place to repeat earlier successes on projects with similar applications.
•  Level 3: Defined
–  The software process for both management and engineering activities is documented, standardized, and integrated into an organization-wide software process.
–  All projects use a documented and approved version of the organization's process for developing and supporting software.
–  This level includes all characteristics defined for level 2.
•  Level 4: Managed
–  Detailed measures of the software process and product quality are collected.
–  Both the software process and products are quantitatively understood and controlled using detailed measures.
–  This level includes all characteristics defined for level 3.
•  Level 5: Optimizing
–  Continuous process improvement is enabled by quantitative feedback from the process and from testing innovative ideas and technologies.
–  This level includes all characteristics defined for level 4.

²  Read about the key process areas (KPAs) associated with each of the maturity levels.
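The five-level grading scheme above is strictly ordered, with each level from 2 upward including all characteristics of the levels below it. A minimal Python sketch of that ordering (the level names come from the slides; the class and function names are illustrative, not part of the SEI model):

```python
from enum import IntEnum

class CMMLevel(IntEnum):
    """The five SEI CMM process maturity levels (higher is more mature)."""
    INITIAL = 1      # ad hoc, occasionally chaotic
    REPEATABLE = 2   # basic project management: cost, schedule, functionality
    DEFINED = 3      # documented, standardized, organization-wide process
    MANAGED = 4      # detailed quantitative measures collected
    OPTIMIZING = 5   # continuous improvement from quantitative feedback

def includes_characteristics_of(level: CMMLevel) -> list:
    """Each level includes all characteristics of the levels below it."""
    return [l for l in CMMLevel if l < level]

# Level 4 (Managed) includes the characteristics of levels 1-3.
print([l.name for l in includes_characteristics_of(CMMLevel.MANAGED)])
# → ['INITIAL', 'REPEATABLE', 'DEFINED']
```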
SOFTWARE PROCESS MODELS
•  To solve actual problems in an industry setting, a software engineer or a team of engineers must incorporate a development strategy that encompasses:
–  the process, methods, and tools layers (slide 62)
–  the generic phases (slide 71)
•  This strategy is often referred to as a process model or a software engineering paradigm.
 
•  A process model for software engineering is chosen based on:
–  the nature of the project and application
–  the methods and tools to be used
–  the controls and deliverables that are required.

•  L. B. S. Raccoon uses fractals as the basis for a discussion of the true nature of the software process.
FIGURE 2.3: (a) The phases of a problem solving loop [RAC95]; (b) the phases within phases of the problem solving loop [RAC95]. Each loop cycles through status quo, problem definition, technical development, and solution integration, and each stage contains a nested copy of the same loop.
•  All software development can be characterized as a problem solving loop.
•  Four distinct stages are encountered:
–  status quo
–  problem definition
–  technical development
–  solution integration
•  The generic software engineering phases and steps defined in slide 71 easily map into these stages.
•  This problem solving loop applies to software engineering work at many different levels of resolution.
•  It can be used:
–  at the macro level when the entire application is considered
–  at a mid-level when program components are being engineered
–  at the line of code level.
•  Therefore, each stage in the problem solving loop contains an identical problem solving loop (this continues to some rational boundary; for software, a line of code).
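The nesting of identical loops described above is naturally recursive. A minimal Python sketch, assuming three levels of resolution (the function and level names are illustrative, not from the source):

```python
def problem_solving_loop(level, sublevels):
    """Run one pass of the four-stage loop; each stage at this level
    contains an identical loop at the next level of resolution."""
    stages = ["status quo", "problem definition",
              "technical development", "solution integration"]
    log = []
    for stage in stages:
        log.append(f"{level}: {stage}")
        if sublevels:  # recurse until a rational boundary (a line of code)
            log.extend(problem_solving_loop(sublevels[0], sublevels[1:]))
    return log

# Macro level -> component level -> line-of-code level:
trace = problem_solving_loop("application", ["component", "line of code"])
print(len(trace))  # 4 + 4*(4 + 4*4) = 84 stage visits across three levels
```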
Linear Sequential Model
•  Sometimes called the classic life cycle or the waterfall model, the linear sequential model suggests a systematic, sequential approach to software development that begins at the system level and progresses through analysis, design, coding, testing, and support.
Figure: the linear sequential model. System/information engineering is followed by analysis, design, code, and test.
1.  System/information engineering and modeling
•  Because software is always part of a larger system (or business), work begins by establishing requirements for all system elements and then allocating some subset of these requirements to software.
•  This system view is essential when software must interact with other elements such as hardware, people, and databases.
•  System engineering and analysis encompass requirements gathering at the system level with a small amount of top level design and analysis.
•  Information engineering encompasses requirements gathering at the strategic business level and at the business area level.
 
2.  Software requirements analysis
•  The requirements gathering process is intensified and focused specifically on software.
•  To understand the nature of the program(s) to be built, the software engineer ("analyst") must understand:
–  the information domain for the software
–  the required function
–  behavior and performance
–  the interface
•  Requirements for both the system and the software are documented and reviewed with the customer.
3.  Design
•  Software design is actually a multistep process that focuses on four distinct attributes of a program:
–  data structure
–  software architecture
–  interface representations
–  procedural (algorithmic) detail
•  The design process translates requirements into a representation of the software that can be assessed for quality before coding begins.
•  Like requirements, the design is documented and becomes part of the software configuration.
4.  Code generation
•  The design must be translated into a machine-readable form.
•  The code generation step performs this task.
•  If design is performed in a detailed manner, code generation can be accomplished mechanistically.
 
5.  Testing
•  Once code has been generated, program testing begins.
•  The testing process focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals.
•  This involves conducting tests to uncover errors and ensure that defined input will produce actual results that agree with required results.
6.  Support
•  Software will undoubtedly undergo change after it is delivered to the customer (a possible exception is embedded software).
•  Change will occur because:
–  errors have been encountered
–  the software must be adapted to accommodate changes in its external environment (e.g., a change required due to a new OS or peripheral device)
–  the customer requires functional or performance enhancements.
•  Software support/maintenance reapplies each of the preceding phases to an existing program rather than a new one.
Advantages

•  The linear sequential model is the oldest and the most widely used paradigm for software engineering.
•  It provides a template into which methods for analysis, design, coding, testing, and support can be placed.
•  It is significantly better than a haphazard approach to software development.
Drawbacks

1.  Real projects rarely follow the sequential flow that the model proposes.
•  Although the linear model can accommodate iteration, it does so indirectly.
•  As a result, changes can cause confusion as the project team proceeds.
2.  It is often difficult for the customer to state all requirements explicitly.
•  The linear sequential model requires this and has difficulty accommodating the natural uncertainty that exists at the beginning of many projects.
3.  The customer must have patience.
•  A working version of the program(s) will not be available until late in the project time-span.
•  A major blunder, if undetected until the working program is reviewed, can be disastrous.
4.  Bradac found that the linear nature of the classic life cycle leads to “blocking states” in which some project team members must wait for other members of the team to complete dependent tasks.
•  In fact, the time spent waiting can exceed the time spent on productive work!
•  The blocking state tends to be more prevalent at the beginning and end of a linear sequential process.
The prototyping model
•  Often, a customer defines a set of general objectives for software but does not identify detailed input, processing, or output requirements.
•  In other cases, the developer may be unsure of the efficiency of an algorithm, the adaptability of an OS, or the form that human/machine interaction should take.
•  In these, and many other situations, a prototyping paradigm may offer the best approach.
•  The prototyping paradigm begins with requirements gathering.
•  Developer and customer meet and define the overall objectives for the software, identify whatever requirements are known, and outline areas where further definition is mandatory.
•  A "quick design" then occurs.
•  The quick design focuses on a representation of those aspects of the software that will be visible to the customer/user (e.g., input approaches and output formats).
•  The quick design leads to the construction of a prototype.
•  The prototype is evaluated by the customer/user and used to refine requirements for the software to be developed.
•  Iteration occurs as the prototype is tuned to satisfy the needs of the customer, while at the same time enabling the developer to better understand what needs to be done.
FIGURE 2.5: The prototyping paradigm. The cycle runs from "listen to customer" to "build/revise mock-up" to "customer test drives mock-up," and back.

Advantages
•  The prototype can serve as "the first system."
•  Both customers and developers like the prototyping paradigm.
•  Users get a feel for the actual system and developers get to build something immediately.
Drawbacks
1.  The customer sees what appears to be a working version of the software, unaware that the prototype is held together “with chewing gum and baling wire.”
•  In the rush to get it working, no one has considered overall software quality or long-term maintainability.
•  When informed that the product must be rebuilt so that high levels of quality can be maintained, the customer cries foul and demands that "a few fixes" be applied to make the prototype a working product.
•  Too often, software development management relents.
2.  The developer often makes implementation compromises in order to get a prototype working quickly.
•  An inappropriate operating system or programming language may be used simply because it is available and known.
•  An inefficient algorithm may be implemented simply to demonstrate capability.
•  After a time, the developer may become familiar with these choices and forget all the reasons why they were inappropriate.
•  The less-than-ideal choice has now become an integral part of the system.
To conclude
•  Although problems can occur, prototyping can be an effective paradigm for software engineering.
•  The key is to define the rules of the game at the beginning.
•  That is, the customer and developer must both agree that the prototype is built to serve as a mechanism for defining requirements.
•  It is then discarded (at least in part) and the actual software is engineered with an eye toward quality and maintainability.
The RAD model
•  Rapid application development (RAD) is an incremental software development process model that emphasizes an extremely short development cycle.
•  The RAD model is a “high-speed” adaptation of the linear sequential model in which rapid development is achieved by using component-based construction.
•  If requirements are well understood and project scope is constrained, the RAD process enables a development team to create a “fully functional system” within very short time periods (e.g., 60 to 90 days).
FIGURE 2.6: The RAD model. Multiple teams (Team #1, Team #2, Team #3) each proceed through business modeling, data modeling, process modeling, application generation, and testing & turnover within a 60–90 day window.
•  Used primarily for information systems applications, the RAD approach encompasses the following phases:
1.  Business modeling:
The information flow among business functions is modeled in a way that answers the following questions:
–  What information drives the business process?
–  What information is generated?
–  Who generates it?
–  Where does the information go?
–  Who processes it?
2.  Data modeling:
–  The information flow defined as part of the business modeling phase is refined into a set of data objects that are needed to support the business.
–  The characteristics (called attributes) of each object are identified and the relationships between these objects defined.
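Data modeling, as described above, identifies data objects, their attributes, and the relationships between them. A minimal Python sketch; the Customer and Order objects are invented examples, not from the source:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    # attributes (characteristics) of the data object
    name: str
    email: str

@dataclass
class Order:
    order_id: int
    total: float
    customer: Customer  # a relationship between two data objects

alice = Customer("Alice", "alice@example.com")
order = Order(1001, 49.90, alice)
print(order.customer.name)  # Alice
```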
 
3.  Process modeling:
•  The data objects defined in the data modeling phase are transformed to achieve the information flow necessary to implement a business function.
•  Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.
4.  Application generation:
•  RAD assumes the use of fourth generation techniques.
•  Rather than creating software using conventional third generation programming languages, the RAD process works to reuse existing program components (when possible) or create reusable components (when necessary).
•  In all cases, automated tools are used to facilitate construction of the software.
5.  Testing and turnover:
•  Since the RAD process emphasizes reuse, many of the program components have already been tested.
•  This reduces overall testing time.
•  However, new components must be tested and all interfaces must be fully exercised.
Advantages
•  Obviously, the time constraints imposed on a RAD project demand “scalable scope” (advantage??)
•  If a business application can be modularized in a way that enables each major function to be completed in less than three months (using the approach described previously), it is a candidate for RAD.
•  Each major function can be addressed by a separate RAD team and then integrated to form a whole.
Drawbacks
1.  For large but scalable projects, RAD requires sufficient human resources to create the right number of RAD teams.
•  RAD requires developers and customers who are committed to the rapid-fire activities necessary to get a system complete in a much abbreviated time frame.
•  If commitment is lacking from either constituency, RAD projects will fail.
2.  Not all types of applications are appropriate for RAD.
•  If a system cannot be properly modularized, building the components necessary for RAD will be problematic.
•  If high performance is an issue and performance is to be achieved through tuning the interfaces to system components (since the interface is the weak link here), the RAD approach may not work.
3.  RAD is not appropriate when technical risks are high.
•  This occurs when a new application makes heavy use of new technology or when the new software requires a high degree of interoperability with existing computer programs.
EVOLUTIONARY SOFTWARE PROCESS MODELS
•  Software, like all complex systems, evolves over a period of time:
1.  Business and product requirements often change as development proceeds, making a straight path to an end product unrealistic;
2.  Tight market deadlines make completion of a comprehensive software product impossible, but a limited version must be introduced to meet competitive or business pressure;
3.  A set of core product or system requirements is well understood, but the details of product or system extensions have yet to be defined.
•  In these and similar situations, software engineers need a process model that has been explicitly designed to accommodate a product that evolves over time.
Why are earlier process models not evolutionary?
•  The linear sequential model is designed for straight-line development.
•  In essence, this waterfall approach assumes that a complete system will be delivered after the linear sequence is completed.
•  The prototyping model is designed to assist the customer (or developer) in understanding requirements.
•  In general, it is not designed to deliver a production system.
•  Therefore, the evolutionary nature of software is not considered in either of these classic software engineering paradigms.
•  Evolutionary models are iterative.
•  They are characterized in a manner that enables software engineers to develop increasingly more complete versions of the software.
1.  The Incremental Model
•  The model combines elements of the linear sequential model (applied repetitively) with the iterative philosophy of prototyping.
•  The incremental model applies linear sequences in a staggered fashion as calendar time progresses.
•  Each linear sequence produces a deliverable “increment” of the software.
•  For example, word-processing software developed using the incremental paradigm might deliver basic file management, editing, and document production functions in the first increment;
•  more sophisticated editing and document production capabilities in the second increment;
•  spelling and grammar checking in the third increment;
•  and advanced page layout capability in the fourth increment.
•  It should be noted that the process flow for any increment can incorporate the prototyping paradigm.
Figure: the incremental model. Each increment applies system/information engineering, analysis, design, code, and test, with delivery of the 1st through 4th increments staggered across calendar time.
So what does this example tell us about the incremental model?
•  When an incremental model is used, the first increment is often a core product.
•  That is, basic requirements are addressed, but many supplementary features (some known, others unknown) remain undelivered.
•  The core product is used by the customer (or undergoes detailed review).
•  As a result of use and/or evaluation, a plan is developed for the next increment.
•  The plan addresses the modification of the core product to better meet the needs of the customer and the delivery of additional features and functionality.
•  This process is repeated following the delivery of each increment, until the complete product is produced.
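The repeat-until-complete cycle above can be sketched as a loop over planned increments, reusing the word-processor example from the slides (the feature lists follow the slides; the function is illustrative):

```python
increments = [
    ["file management", "basic editing", "document production"],  # core product
    ["advanced editing", "advanced document production"],
    ["spelling checking", "grammar checking"],
    ["advanced page layout"],
]

def deliver_incrementally(increments):
    """Each pass extends the core product and ships an operational version."""
    product = []
    deliveries = []
    for features in increments:
        product.extend(features)           # modify/extend the core product
        deliveries.append(list(product))   # every delivery is usable on its own
    return deliveries

deliveries = deliver_incrementally(increments)
print(len(deliveries[0]), len(deliveries[-1]))  # 3 8
```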
So then how is the incremental model different from prototyping?
•  The incremental process model, like prototyping and other evolutionary approaches, is iterative in nature.
•  But unlike prototyping, the incremental model focuses on the delivery of an operational product with each increment.
•  Early increments are stripped down versions of the final product, but they do provide capability that serves the user and also provide a platform for evaluation by the user.
So when is it useful?
•  Incremental development is particularly useful when staffing is unavailable for a complete implementation by the business deadline that has been established for the project.
•  Early increments can be implemented with fewer people.
•  If the core product is well received, then additional staff (if required) can be added to implement the next increment.
•  In addition, increments can be planned to manage technical risks.
•  For example, a major system might require the availability of new hardware that is under development and whose delivery date is uncertain.
•  It might be possible to plan early increments in a way that avoids the use of this hardware, thereby enabling partial functionality to be delivered to end-users without inordinate delay.
•  Self Study: 2. The Spiral Model
3.  The Concurrent Development Model
•  The concurrent development model is sometimes called concurrent engineering.
•  Consider the following scenario:
Project managers who track project status in terms of the major phases [of the classic life cycle] have no idea of the status of their projects. This is because many phases of the project are under development simultaneously. Personnel are writing requirements, designing, coding, testing, and integration testing [all at the same time].

•  Therefore, concurrency is a feature that is prevalent in software projects.
•  A process model is needed that incorporates this feature.
•  The concurrent process model can be represented schematically as a series of major technical activities, tasks, and their associated states.
FIGURE 2.10: One element of the concurrent process model. The analysis activity is shown with its states (none, under development, awaiting changes, under review, under revision, baselined, done); each box represents a state of a software engineered activity.
•  The figure shows a schematic representation of one activity within the concurrent process model.
•  The activity—analysis—may be in any one of the states at any given time.
•  Similarly, other activities (e.g., design or customer communication) can be represented in an analogous manner.
•  All activities exist concurrently but reside in different states.
•  For example, early in a project the customer communication activity (not shown in the figure) has completed its first iteration and exists in the awaiting changes state.
•  The analysis activity (which existed in the none state while initial customer communication was completed) now makes a transition into the under development state.
•  If, however, the customer indicates that changes in requirements must be made, the analysis activity moves from the under development state into the awaiting changes state.
•  The concurrent process model defines a series of events that will trigger transitions from state to state for each of the software engineering activities.
•  For example, during early stages of design, an inconsistency in the analysis model is uncovered.
•  This generates the event analysis model correction, which will trigger the analysis activity from the done state into the awaiting changes state.
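The event-triggered transitions described above form a small state machine per activity. A minimal Python sketch; the state and event names follow the slides, while the class itself is illustrative:

```python
class Activity:
    """One software engineering activity in the concurrent process model."""

    # (current state, event) -> next state
    TRANSITIONS = {
        ("none", "start"): "under development",
        ("under development", "changes requested"): "awaiting changes",
        ("done", "analysis model correction"): "awaiting changes",
    }

    def __init__(self, name):
        self.name = name
        self.state = "none"

    def on_event(self, event):
        # events trigger transitions; unknown events leave the state unchanged
        self.state = self.TRANSITIONS.get((self.state, event), self.state)

analysis = Activity("analysis")
analysis.on_event("start")                      # none -> under development
analysis.state = "done"                         # analysis completed earlier
analysis.on_event("analysis model correction")  # inconsistency found in design
print(analysis.state)  # awaiting changes
```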
Advantages of the concurrent model
•  In reality, the concurrent process model is applicable to all types of software development and provides an accurate picture of the current state of a project.
•  Rather than confining software engineering activities to a sequence of events, it defines a network of activities.
•  Each activity on the network exists simultaneously with other activities.
•  Events generated within a given activity or at some other place in the activity network trigger transitions among the states of an activity.
Summary: A quick list of evolutionary process models
•  The incremental model
•  The Spiral model (left as self study)
•  The WINWIN Spiral model (safe to ignore)
•  The concurrent development model
•  Self Study: Component Based Development
–  The CBD model incorporates many of the characteristics of the spiral model.
–  It is evolutionary in nature, demanding an iterative approach to the creation of software.
–  However, the component-based development model composes applications from prepackaged software components (called classes).
FOURTH GENERATION TECHNIQUES
•  The term fourth generation techniques (4GT) encompasses a broad array of software tools that have one thing in common.
•  Each enables the software engineer to specify some characteristic of software at a high level.
•  The tool then automatically generates source code based on the developer's specification.
•  The higher the level at which software can be specified to a machine, the faster a program can be built.
•  The 4GT paradigm for software engineering focuses on the ability to specify software using specialized language forms or a graphic notation that describes the problem to be solved in terms that the customer can understand.
•  Currently, a software development environment that supports the 4GT paradigm includes some or all of the following tools:
–  Nonprocedural languages for database query
–  Report generation
–  Data manipulation
–  Screen interaction and definition
–  Code generation
–  High-level graphics capability
–  Spreadsheet capability
–  Automated generation of HTML and similar languages used for Web-site creation using advanced software tools.
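The core 4GT idea, a high-level, nonprocedural specification from which source code is generated automatically, can be sketched with a tiny generator. The spec format and function below are invented for illustration; real 4GT tools are far more elaborate:

```python
def generate_query(spec):
    """Generate SQL from a declarative report specification:
    the 'developer' states WHAT is wanted, not HOW to retrieve it."""
    cols = ", ".join(spec["columns"])
    sql = f"SELECT {cols} FROM {spec['table']}"
    if "where" in spec:
        sql += f" WHERE {spec['where']}"
    return sql

report_spec = {"table": "orders",
               "columns": ["customer", "total"],
               "where": "total > 100"}
print(generate_query(report_spec))
# SELECT customer, total FROM orders WHERE total > 100
```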
•  Like other paradigms, 4GT begins with a requirements gathering step.
•  Ideally, the customer would describe requirements and these would be directly translated into an operational prototype.
•  But this is unworkable.
•  The customer may be unsure of what is required, may be ambiguous in specifying facts that are known, and may be unable or unwilling to specify information in a manner that a 4GT tool can consume.
•  For this reason, the customer/developer dialog described for other process models remains an essential part of the 4GT approach.
•  For small applications, it may be possible to move directly from the requirements gathering step to implementation using a 4GL.
•  However, for larger efforts, it is necessary to develop a design strategy for the system, even if a 4GL is to be used.
•  The use of 4GT without design (for large projects) will cause the same difficulties (poor quality, poor maintainability, poor customer acceptance) that have been encountered when developing software using conventional approaches.
•  Implementation using a 4GL enables the software developer to represent desired results in a manner that leads to automatic generation of code to create those results.
•  Obviously, a data structure with relevant information must exist and be readily accessible by the 4GL.
•  To transform a 4GT implementation into a product, the developer must conduct thorough testing, develop meaningful documentation, and perform all other solution integration activities that are required in other software engineering paradigms.
•  In addition, the 4GT-developed software must be built in a manner that enables maintenance to be performed expeditiously.
Debate surrounding the 4GT model
•  Like all software engineering paradigms, the 4GT model has advantages and disadvantages.
•  Proponents claim dramatic reduction in software development time and greatly improved productivity for people who build software.
•  Opponents claim that current 4GT tools are not all that much easier to use than programming languages, that the resultant source code produced by such tools is "inefficient," and that the maintainability of large software systems developed using 4GT is open to question.
Summary
•  There is some merit in the claims of both sides:
1.  The use of 4GT is a viable approach for many different application areas. Coupled with computer-aided software engineering tools and code generators, 4GT offers a credible solution to many software problems.
2.  Data collected from companies that use 4GT indicate that the time required to produce software is greatly reduced for small and intermediate applications and that the amount of design and analysis for small applications is also reduced.
3.  However, the use of 4GT for large software development efforts demands as much or more analysis, design, and testing (software engineering activities), which offsets the substantial time savings that result from the elimination of coding.
Software Project Management
•  Problems usually faced by software organizations:
–  nightmarish projects
–  impossible deadlines
–  outrageously buggy and/or expensive products
–  inordinately long maintenance times
•  Reason: weak project management
What is it?
•  Project management involves:
–  planning
–  monitoring
–  control of the people, process and events that occur as software evolves from a preliminary concept to an operational implementation
Why is it important?
•  Building computer software is a complex undertaking since it involves many people working over a relatively long time
What are the steps?
•  Software project management involves four P's:
–  People
–  Product
–  Process
–  Project
•  This order is important
Why is the order important?
•  A manager who forgets that software development is an intensely human endeavour will never have success in project management
•  A manager who fails to encourage comprehensive stakeholder communication early in the evolution of the project risks building an elegant solution for the wrong problem
•  A manager who pays little attention to the process runs the risk of inserting competent technical methods and tools into a vacuum
•  A manager who embarks without a solid project plan jeopardizes the success of the product
The management spectrum:
1.  The people
•  The need for motivated and highly skilled software people has been felt since the 1960s
•  In fact, the "people factor" is so important that the Software Engineering Institute has developed a people management capability maturity model (PM-CMM)
•  The purpose of this model is to "enhance the readiness of software organizations to undertake increasingly complex applications by helping to attract, grow, motivate, deploy and retain the talent needed to improve their software development capability"
•  The people management maturity model defines the following key practice areas for software people:
–  recruiting
–  selection
–  performance management
–  training
–  compensation
–  career development
–  organization and work design
–  team/culture development
Advantage
•  Organizations that achieve high levels of maturity in people management have a higher likelihood of implementing effective software engineering practices
2.  The Product
•  Before a project can be planned:
–  product objectives and scope should be established
–  alternative solutions should be considered
–  technical and management constraints should be identified
•  Without this information, it is impossible to define:
–  reasonable (and accurate) estimates of cost
–  an effective assessment of risk
–  a realistic breakdown of project tasks
–  a manageable project schedule that provides a meaningful indication of progress
•  The software developer and customer must meet to define product objectives and scope.
•  In many cases, this activity begins as part of system engineering or business process engineering and continues as the first step in software requirements analysis.
•  Objectives identify the overall goals for the product (from the customer's point of view) without considering how these goals will be achieved.
•  Scope identifies the primary data, functions and behaviors that characterize the product and, more important, attempts to bound these characteristics in a quantitative manner.
•  Once the product objectives and scope are understood, alternative solutions are considered.
•  Although very little detail is discussed, the alternatives enable managers and practitioners to select a "best" approach, given the constraints imposed by delivery deadlines, budgetary restrictions, personnel availability, technical interfaces, and myriad other factors.
3.  The Process
•  A software process provides the framework from which a comprehensive plan for software development can be established.
•  A small number of framework activities are applicable to all software projects, regardless of their size or complexity.
•  A number of different task sets (tasks, milestones, work products, and quality assurance points) enable the framework activities to be adapted to the characteristics of the software project and the requirements of the project team.
•  Finally, umbrella activities (such as software quality assurance, software configuration management, and measurement) overlay the process model.
•  Umbrella activities are independent of any one framework activity and occur throughout the process.
4.  The Project
•  Software projects are planned and controlled for one primary reason: it is the only known way to manage complexity.
•  And yet, the success rate is dismal.
•  In 1998, industry data indicated that 26 percent of software projects failed outright and 46 percent experienced cost and schedule overruns.
•  Although the success rate for software projects has improved somewhat, our project failure rate remains higher than it should be.
•  In order to avoid project failure, a software project manager and the software engineers who build the product must:
–  avoid a set of common warning signs
–  understand the critical success factors that lead to good project management
–  and develop a commonsense approach for planning, monitoring and controlling the project.
PEOPLE
•  In a study published by the IEEE [CUR88], the engineering vice presidents of three major technology companies were asked the most important contributor to a successful software project. They answered in the following way:
•  VP 1:
–  I guess if you had to pick one thing out that is most important in our environment, I'd say it's not the tools that we use, it's the people.
•  VP 2:
–  The most important ingredient that was successful on this project was having smart people . . . very little else matters in my opinion. . . . The most important thing you do for a project is selecting the staff . . . The success of the software development organization is very, very much associated with the ability to recruit good people.
•  VP 3:
–  The only rule I have in management is to ensure I have good people—real good people—and that I grow good people—and that I provide an environment in which good people can produce.
The Stakeholders
•  These are the people involved in the software process and the manner in which they are organized to perform effective software engineering
•  They can be categorized into 5 constituencies:
1.  Senior managers: who define business issues that often have significant influence on the project
2.  Project (technical) managers: who must plan, motivate, organize and control the practitioners who do software work
3.  Practitioners: who deliver the technical skills that are necessary to engineer a product or application
4.  Customers: who specify the requirements for the software to be engineered and other stakeholders who have a peripheral interest in the outcome
5.  End users: who interact with the software once it is released for production use
•  Every software project has people who fall within this taxonomy.
•  To be effective, the project team must be organized in a way that maximizes each person's skills and abilities.
•  The person who makes this possible: the team leader
Bad Team Leaders
•  It is clear that project management is a people-intensive activity
•  For this reason, competent software practitioners often make poor team leaders (they don't have the right mix of "people skills")
•  "Unfortunately and all too frequently it seems, individuals just fall into a project manager role and become accidental project managers"
Key traits of an effective project manager
•  Four key traits:
1.  Problem solving:
–  ability to diagnose the technical and organizational issues that are most relevant
–  systematically structure a solution or properly motivate other practitioners to develop the solution
–  apply lessons learned from past projects to new situations and remain flexible enough to change direction if initial attempts at problem solution are futile
2.  Managerial identity:
–  ability to take charge of the project
–  the confidence to assume control when necessary and the assurance to allow good technical people to follow their instincts
3.  Achievement:
–  to optimize the productivity of the project team
–  must reward initiative and accomplishment
–  demonstrate through own actions that controlled risk taking will not be penalized
4.  Influence and team building:
–  ability to "read" people
–  to understand verbal and non-verbal signals and react to the needs of the people sending those signals
–  ability to remain under control in high-stress situations
The Software Team
•  The project manager usually can decide the organization of the people involved in a software project
•  The "best" team structure depends on:
–  the management style of the organization
–  the number of people who will populate the team
–  their skill levels
–  overall problem difficulty
•  Mantei describes 7 factors that should be considered when planning the structure of software engineering teams:
1.  The difficulty of the problem to be solved
2.  The "size" of the resulting program(s) in lines of code or function points
3.  The time that the team will stay together (team lifetime)
4.  The degree to which the problem can be modularised
5.  The required quality and reliability of the system to be built
6.  The rigidity of the delivery date
7.  The degree of sociability required for the project
A few paradigms for software teams
•  Constantine suggests 4 "organizational paradigms" for software engineering teams:
1.  A closed paradigm:
–  structures a team according to a traditional hierarchy of authority
–  such teams work well when producing software similar to past efforts, but are less likely to be innovative
2.  A random paradigm:
–  structures a team loosely and depends on individual initiative of the team members
–  works well when innovation or technological breakthrough is required, but may struggle when "orderly" performance is needed
3.  An open paradigm:
–  blends the two paradigms above, i.e. the "order" associated with the closed paradigm with the innovation that occurs when using the random paradigm
✓ work is performed collaboratively
✓ heavy communication and consensus-based decision making
✓ well suited to finding solutions to complex problems
✗ may not perform as efficiently as other teams
4.  A synchronous paradigm:
•  relies on the natural compartmentalization of a problem and organizes team members to work on pieces of the problem with little active communication among themselves
Another approach
•  The following options are available for applying human resources to a project that will require n people working for k years:
1.  n individuals are assigned to m different functional tasks:
–  relatively little combined work occurs
–  coordination is the responsibility of a software manager who may have six other projects to be concerned with
2.  n individuals are assigned to m different functional tasks (m < n) so that informal "teams" are established:
–  an ad hoc team leader may be appointed
–  coordination among teams is the responsibility of a software manager
3.  n individuals are organized into t teams:
–  each team is assigned one or more functional tasks
–  each team has a specific structure that is defined for all teams working on a project
–  coordination is controlled by both the team and a software project manager
So which team structure is the best?
•  Although it is possible to voice arguments for and against each of these approaches, a growing body of evidence indicates that a formal team organization (option 3) is most productive.
•  However, the "best" team structure depends on:
–  the management style of the organization
–  the number of people who will populate the team and their skill levels
–  the overall problem difficulty
Mantei's team organisation
•  Mantei suggests three generic team organizations:
1.  Democratic decentralized (DD):
•  This software engineering team has no permanent leader.
•  Rather, "task coordinators are appointed for short durations and then replaced by others who may coordinate different tasks."
•  Decisions on problems and approach are made by group consensus.
•  Communication among team members is horizontal.
2.  Controlled decentralized (CD):
•  This software engineering team has a defined leader who coordinates specific tasks and secondary leaders that have responsibility for subtasks.
•  Problem solving remains a group activity, but implementation of solutions is partitioned among subgroups by the team leader.
•  Communication among subgroups and individuals is horizontal.
•  Vertical communication along the control hierarchy also occurs.
3.  Controlled centralized (CC):
•  Top-level problem solving and internal team coordination are managed by a team leader.
•  Communication between the leader and team members is vertical.
Pros and Cons
•  A centralized structure completes tasks faster and is the most adept at handling simple problems.
•  A rigid centralized team structure can therefore be successfully applied to simple problems.
•  Decentralized teams generate more and better solutions than individuals, and therefore have a greater probability of success when working on difficult problems.
•  Because the performance of a team is inversely proportional to the amount of communication that must be conducted, very large projects are best addressed by teams with a CC or CD structure, when subgrouping can be easily accommodated.
•  The length of time that the team will "live together" affects team morale.
•  It has been found that DD team structures result in high morale and job satisfaction and are therefore good for teams that will be together for a long time.
•  The DD team structure is best applied to problems with relatively low modularity, because of the higher volume of communication needed.
•  When high modularity is possible (and people can do their own thing), the CC or CD structure will work well.
•  CC and CD teams have been found to produce fewer defects than DD teams, but these data have much to do with the specific quality assurance activities that are applied by the team.
•  Decentralized teams generally require more time to complete a project than a centralized structure, but are best when high sociability is required.
Software Metrics: Process, Product and Project Metrics
What is meant by metrics?
•  Metrics are quantitative measures that enable software people to gain insight into the efficacy of the software process, of the projects conducted using that process as a framework, and of the product created.
•  Basic quality and productivity data are collected.
•  These data are then analyzed, compared against past averages, and assessed to determine whether quality and productivity improvements have occurred.
•  Metrics are also used to pinpoint problem areas so that remedies can be developed and the software process can be improved.
Who does it?
•  Software metrics are analyzed and assessed by software managers.
•  Measures are often collected by software engineers.
Why is it important?
•  If you don't measure, judgement can be based only on subjective evaluation.
•  With measurement, trends (either good or bad) can be spotted, better estimates can be made, and true improvement can be accomplished over time.
What are the steps?
•  Begin by defining a limited set of process, project, and product measures that are easy to collect.
•  These measures are often normalized using either size- or function-oriented metrics.
•  The result is analyzed and compared to past averages for similar projects performed within the organization.
•  Trends are assessed and conclusions are generated.
What is the work product?
•  A set of software metrics that provide insight into the process and understanding of the project.
Confused?
Measures, Metrics, and Indicators
•  Measure:
•  Measurement is the act of determining a measure.
•  E.g., the number of errors uncovered in the review of a single module.
•  Measurement occurs as the result of the collection of one or more data points.
•  E.g., a number of module reviews are investigated to collect measures of the number of errors in each.
•  Metric:
•  A software metric relates the individual measures in some way.
•  E.g., the average number of errors found per review, or the average number of errors found per person-hour expended on reviews.
•  Indicator:
•  A software engineer collects measures and develops metrics so that indicators will be obtained.
•  An indicator is a metric or combination of metrics that provides insight into the software process, a software project, or the product itself.
•  An indicator provides insight that enables the project manager or software engineers to adjust the process, the project, or the product to make things better.
•  For example, four software teams are working on a large software project.
•  Each team must conduct design reviews but is allowed to select the type of review that it will use.
•  Upon examination of the metric "errors found per person-hour expended," the project manager notices that the two teams using more formal review methods exhibit an errors-found-per-person-hour figure that is 40% higher than the other teams.
•  Assuming all other parameters equal, this provides the project manager with an indicator that formal review methods may provide a higher return on time investment than another, less formal review approach.
•  She may decide to suggest that all teams use the more formal approach.
•  The metric provides the manager with insight.
•  And insight leads to informed decision making.
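The measure-to-metric-to-indicator chain in this example can be sketched in a few lines. All team names and numbers below are invented for illustration (chosen so the formal teams come out 40% ahead, as in the example); they are not data from the text.

```python
# Hypothetical review logs: (team, review type, errors found, person-hours).
# Errors and hours are the raw MEASURES collected per team.
review_data = [
    ("Team 1", "formal",   140, 100),
    ("Team 2", "formal",   140, 100),
    ("Team 3", "informal", 100, 100),
    ("Team 4", "informal", 100, 100),
]

def avg_rate(kind):
    """METRIC: average errors found per person-hour for one review type."""
    rates = [errors / hours for _, k, errors, hours in review_data if k == kind]
    return sum(rates) / len(rates)

formal, informal = avg_rate("formal"), avg_rate("informal")
# INDICATOR: the comparison that prompts a management decision.
print(f"formal: {formal:.2f} errors/person-hour, informal: {informal:.2f}")
print(f"formal reviews uncover {100 * (formal / informal - 1):.0f}% more errors per hour")
```

With these assumed numbers, the script reports that formal reviews uncover 40% more errors per person-hour, which is the indicator the project manager acts on.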
Role of process and project indicators
•  Metrics should be collected so that process and product indicators can be ascertained.
•  Process indicators enable a software engineering organization to gain insight into the efficacy of an existing process, i.e.:
–  the paradigm
–  software engineering tasks
–  work products
–  milestones
•  They enable managers and practitioners to assess what works and what doesn't.
•  Process metrics are collected across all projects and over long periods of time.
•  Their intent is to provide indicators that lead to long-term software process improvement.
•  Project indicators enable a software project manager to:
1.  assess the status of an ongoing project
2.  track potential risks
3.  uncover problem areas before they go "critical"
4.  adjust work flow or tasks
5.  evaluate the project team's ability to control the quality of software work products
•  In some cases, the same software metrics can be used to determine project and then process indicators.
•  In fact, measures that are collected by a project team and converted into metrics for use during a project can also be transmitted to those with responsibility for software process improvement.
•  For this reason, many of the same metrics are used in both the process and project domain.
•  Three other factors have a profound influence on software quality and organizational performance:
1.  The skill and motivation of people has been shown to be the single most influential factor in quality and performance.
2.  The complexity of the product can have a substantial impact on quality and team performance.
3.  The technology (i.e., the software engineering methods) that populates the process also has an impact.
Process Metrics and Software Process Improvement
•  As an organization becomes more comfortable with the collection and use of process metrics, it tends to apply a more rigorous approach called statistical software process improvement (SSPI).
•  In essence, SSPI uses software failure analysis to collect information about all errors and defects encountered as an application, system, or product is developed and used.
•  Failure analysis works in the following manner:
1.  All errors and defects are categorized by origin (e.g., flaw in specification, flaw in logic, nonconformance to standards).
2.  The cost to correct each error and defect is recorded.
3.  The number of errors and defects in each category is counted and ranked in descending order.
4.  The overall cost of errors and defects in each category is computed.
5.  Resultant data are analyzed to uncover the categories that result in the highest cost to the organization.
6.  Plans are developed to modify the process with the intent of eliminating (or reducing the frequency of) the class of errors and defects that is most costly.
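Steps 1–5 amount to building a cost-ranked defect distribution. A minimal sketch of that bookkeeping, with an invented defect log (the categories echo the examples in step 1; the costs are made up):

```python
from collections import Counter, defaultdict

# Hypothetical defect log: (origin category, cost to correct in $).
defect_log = [
    ("specification", 1200), ("logic", 300), ("specification", 900),
    ("standards", 150), ("logic", 450), ("specification", 700),
    ("data handling", 500), ("standards", 100),
]

# Steps 1 and 3: categorize by origin and count defects per category.
counts = Counter(origin for origin, _ in defect_log)

# Steps 2 and 4: total correction cost per category.
costs = defaultdict(int)
for origin, cost in defect_log:
    costs[origin] += cost

# Step 5: rank categories by overall cost, highest first, so that step 6
# (process changes) can target the most expensive defect class.
for origin, total in sorted(costs.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{origin:15s} count={counts[origin]} cost=${total}")
```

Here "specification" tops the ranking, so the process changes of step 6 would be aimed at how specifications are produced and reviewed.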
CHAPTER 4: SOFTWARE PROCESS AND PROJECT METRICS
[Figure 4.2 (Causes of defects and their origin for four software projects [GRA94]): Specifications 25.5%, Logic 20%, User interface 11.7%, Error checking 10.9%, Data handling 10.5%, Hardware interface 7.7%, Standards 6.9%, Software interface 6.0%; origin of errors/defects: specification/requirements, design, code]
•  Following steps 1 and 2, a simple defect distribution can be developed.
•  For the pie chart noted in the figure, eight causes of defects and their origin (indicated by shading) are shown.
•  Grady suggests the development of a fishbone diagram to help in diagnosing the data represented in the frequency diagram.
PART TWO: MANAGING SOFTWARE PROJECTS
[Fishbone diagram: the spine is labeled "Specification defects"; the major ribs are "Missing", "Ambiguous", "Incorrect" and "Changes"; the "Incorrect" rib is expanded into "Wrong customer queried", "Customer gave wrong info", "Inadequate inquiries" and "Used outdated info"]
•  The spine of the diagram (the central line) represents the quality factor under consideration (in this case, specification defects, which account for 25 percent of the total).
•  Each of the ribs (diagonal lines) connecting to the spine indicates a potential cause for the quality problem (e.g., missing requirements, ambiguous specification, incorrect requirements, changed requirements).
•  The spine-and-ribs notation is then added to each of the major ribs of the diagram to expand upon the cause noted.
•  Expansion is shown only for the "incorrect" cause in the previous diagram.
Project Metrics
•  Software process metrics are used for strategic purposes.
•  Software project measures are tactical.
•  That is, project metrics and the indicators derived from them are used by a project manager and a software team to adapt project work flow and technical activities.
•  The first application of project metrics on most software projects occurs during estimation.
•  Metrics collected from past projects are used as a basis from which effort and time estimates are made for current software work.
•  As a project proceeds, measures of effort and calendar time expended are compared to original estimates (and the project schedule).
•  The project manager uses these data to monitor and control progress.
•  As technical work commences, other project metrics begin to have significance.
•  Production rates represented in terms of pages of documentation, review hours, function points, and delivered source lines are measured.
•  In addition, errors uncovered during each software engineering task are tracked.
•  As the software evolves from specification into design, technical metrics are collected to assess design quality and to provide indicators that will influence the approach taken to code generation and testing.
Benefits of project metrics
•  The intent of project metrics is twofold:
•  First, these metrics are used to minimize the development schedule by making the adjustments necessary to avoid delays and mitigate potential problems and risks.
•  This becomes possible only once estimates have been made.
•  Second, project metrics are used to assess product quality on an ongoing basis and, when necessary, modify the technical approach to improve quality.
•  As quality improves, defects are minimized, and as the defect count goes down, the amount of rework required during the project is also reduced.
•  This leads to a reduction in overall project cost.
Software Measurement
•  Software can be measured directly or indirectly.
•  Direct measures of the software engineering process include cost and effort applied.
•  Direct measures of the product include:
–  lines of code (LOC) produced
–  execution speed
–  memory size
–  defects reported over some set period of time
•  Indirect measures of the product include:
–  functionality
–  quality
–  complexity
–  efficiency
–  reliability
–  maintainability
–  many other "-abilities"
•  Direct measures are relatively easy to collect, as long as specific conventions for measurement are established in advance.
•  Conversely, indirect measures are relatively difficult to collect.
•  Another aspect of measurement is private versus public metrics.
•  Examples of private metrics include defect rates (by individual), defect rates (by module), and errors found during development.
•  Some process metrics are private to the software project team but public to all team members.
•  Examples include:
–  defects reported for major software functions (that have been developed by a number of practitioners)
–  errors found during formal technical reviews
–  LOC or function points per module and function
•  These data are reviewed by the team to uncover indicators that can improve team performance.
•  Public metrics generally assimilate information that originally was private to individuals and teams.
•  Examples include:
–  project-level defect rates (absolutely not attributed to an individual)
–  effort
–  calendar times
–  any other related data
•  Therefore, we can conclude that product metrics that are private to an individual are often combined to develop project metrics that are public to a software team.
•  Project metrics are then consolidated to create process metrics that are public to the software organization as a whole.
•  But how does an organization combine metrics that come from different individuals or projects?
•  To illustrate the problem, consider a simple example:
Individuals on two different project teams record and categorize all errors that they find during the software process. Individual measures are then combined to develop team measures. Team A found 342 errors during the software process prior to release. Team B found 184 errors. All other things being equal, which team is more effective in uncovering errors throughout the process?
•  Because the size or complexity of the projects is not known, this question cannot be answered.
•  However, if the measures are normalized, it is possible to create software metrics that enable comparison to broader organizational averages.
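A minimal sketch of that normalization. Only the error counts (342 and 184) come from the example above; the KLOC figures are invented to show how normalization can reverse the raw-count comparison:

```python
# Raw error counts from the example; project sizes are hypothetical.
teams = {
    "Team A": {"errors": 342, "kloc": 45.0},  # assumed 45 KLOC
    "Team B": {"errors": 184, "kloc": 12.0},  # assumed 12 KLOC
}

# Normalizing by size turns incomparable raw counts into a comparable metric.
for name, t in teams.items():
    t["errors_per_kloc"] = t["errors"] / t["kloc"]
    print(f'{name}: {t["errors_per_kloc"]:.1f} errors/KLOC')
```

With these assumed sizes, Team A's 342 raw errors become about 7.6 errors/KLOC while Team B's 184 become about 15.3 errors/KLOC, so the team that found fewer errors overall is actually the more effective one per unit of software produced.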
Size-Oriented Metrics
•  Size-oriented software metrics are derived by normalizing quality and/or productivity measures by considering the size of the software that has been produced.
•  If a software organization maintains simple records, a table of size-oriented measures, such as the one shown in the next slide, can be created.
Project    LOC      Effort   $(000)   Pp. doc.   Errors   Defects   People
alpha      12,100   24       168      365        134      29        3
beta       27,200   62       440      1224       321      86        5
gamma      20,200   43       314      1050       256      64        6
(Effort is in person-months; "Pp. doc." is pages of documentation. For project alpha, 134 errors were recorded before the software was released and 29 defects were reported afterwards.)
•  From the rudimentary data contained in the table, a set of simple size-oriented metrics can be developed for each project:
–  Errors per KLOC (thousand lines of code).
–  Defects per KLOC.
–  $ per LOC.
–  Pages of documentation per KLOC.
•  In addition, other interesting metrics can be computed:
–  Errors per person-month.
–  LOC per person-month.
–  $ per page of documentation.
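These derivations can be sketched directly from the alpha row of the table:

```python
# Size-oriented metrics for project "alpha", using the table's values.
loc = 12_100
effort_pm = 24        # person-months
cost_k = 168          # cost in $ thousands
pages = 365
errors = 134          # found before release
defects = 29          # reported after release

kloc = loc / 1000
print(f"Errors per KLOC:      {errors / kloc:.2f}")
print(f"Defects per KLOC:     {defects / kloc:.2f}")
print(f"$ per LOC:            {cost_k * 1000 / loc:.2f}")
print(f"Pages per KLOC:       {pages / kloc:.2f}")
print(f"LOC per person-month: {loc / effort_pm:.0f}")
```

The same computation repeated per project row is what lets an organization build the baseline averages mentioned earlier.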
Pros and Cons
•  Size-oriented metrics are not universally accepted as the best way to measure the process of software development.
•  Most of the controversy swirls around the use of lines of code as a key measure.
Pros
•  Proponents of the LOC measure claim that LOC is an "artifact" of all software development projects
•  Can be easily counted
•  Many existing software estimation models use LOC or KLOC as a key input
•  A large body of literature and data predicated on LOC already exists.
Cons
•  LOC measures are programming language dependent
•  They penalize well-designed but shorter programs
•  Cannot easily accommodate nonprocedural languages
•  Their use in estimation requires a level of detail that may be difficult to achieve (i.e., the planner must estimate the LOC to be produced long before analysis and design have been completed).
Function-Oriented Metrics
•  Function-oriented software metrics use a measure of the functionality delivered by the application as a normalization value.
•  Since 'functionality' cannot be measured directly, it must be derived indirectly using other direct measures.
•  Function points are derived using an empirical relationship based on countable (direct) measures of software's information domain and assessments of software complexity.
•  Function points are computed by completing the following table:
                                           Weighting factor
Measurement parameter            Count    Simple  Average  Complex
Number of user inputs            ___   ×    3        4        6     = ___
Number of user outputs           ___   ×    4        5        7     = ___
Number of user inquiries         ___   ×    3        4        6     = ___
Number of files                  ___   ×    7       10       15     = ___
Number of external interfaces    ___   ×    5        7       10     = ___
Count total                                                           ___

Counts are entered in the appropriate table locations, multiplied by the chosen weighting factor, and summed. Information domain values are defined in the following way:
•  Number of user inputs. Each user input that provides distinct application-oriented data to the software is counted. Inputs should be distinguished from inquiries, which are counted separately.
•  Number of user outputs. Each user output that provides application-oriented information to the user is counted. In this context output refers to reports, screens, error messages, etc.
•  Individual data items within a report are not counted separately.
•  Number of user inquiries. An inquiry is defined as an on-line input that results in the generation of some immediate software response in the form of an on-line output. Each distinct inquiry is counted.
•  Number of files. Each logical master file (i.e., a logical grouping of data that may be one part of a large database or a separate file) is counted.
•  Number of external interfaces. All machine readable interfaces (e.g., data files on storage media) that are used to transmit information to another system are counted.
•  External Inputs: screens, forms, dialog boxes...
•  External Outputs: screens, reports, graphs
•  External Queries: an input/output combination where a query leads to a simple output
•  Logical Files: major groups of end-user data
•  Interface Files: files controlled by other programs
http://www.codeproject.com/Articles/18024/Calculating-Function-Points
•  Once these data have been collected, a complexity value is associated with each count.
•  Organizations that use function point methods develop criteria for determining whether a particular entry is simple, average, or complex.
•  Nonetheless, the determination of complexity is somewhat subjective.
•  To compute function points (FP), the following relationship is used:
FP = count total × [0.65 + 0.01 × ∑(Fi)]
where count total is the sum of all FP entries obtained
•  The Fi (i = 1 to 14) are "complexity adjustment values" based on responses to the following questions:
1.  Does the system require reliable backup and recovery?
2.  Are data communications required?
3.  Are there distributed processing functions?
4.  Is performance critical?
5.  Will the system run in an existing, heavily utilized operational environment?
6.  Does the system require on-line data entry?
7.  Does the on-line data entry require the input transaction to be built over multiple screens or operations?
8.  Are the master files updated on-line?
9.  Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and ease of use by the user?
 
•  Each of these questions is answered using a scale that ranges from 0 (not important or applicable) to 5 (absolutely essential).
•  The constant values in the equation and the weighting factors that are applied to information domain counts are determined empirically.
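The full computation can be sketched as follows; the information-domain counts and the answers to the 14 questions are hypothetical, and every entry is assumed to be rated average:

```python
# Hypothetical information-domain counts, each paired with its
# "average" weighting factor from the table.
counts = {
    "user inputs":         (12, 4),
    "user outputs":        (8, 5),
    "user inquiries":      (6, 4),
    "files":               (4, 10),
    "external interfaces": (2, 7),
}

count_total = sum(n * w for n, w in counts.values())

# Hypothetical answers (0-5) to the 14 complexity adjustment questions.
F = [3, 4, 2, 4, 3, 4, 2, 3, 3, 2, 2, 1, 1, 4]
assert len(F) == 14

# FP = count total x [0.65 + 0.01 x sum(Fi)]
fp = count_total * (0.65 + 0.01 * sum(F))
print(f"count total = {count_total}, FP = {fp:.2f}")
```

Note how the adjustment term can only scale the count total between 0.65 (all answers 0) and 1.35 (all answers 5).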
•  Once function points have been calculated, they are used in a manner analogous to LOC as a way to normalize measures for software productivity, quality, and other attributes:
–  Errors per FP.
–  Defects per FP.
–  $ per FP.
–  Pages of documentation per FP.
–  FP per person-month.
Pros and Cons
•  The function point (and its extensions), like the LOC measure, is controversial.
•  Pros:
–  Programming language independent, making it ideal for applications using conventional and nonprocedural languages;
–  Based on data that are more likely to be known early in the evolution of a project, making FP more attractive as an estimation approach.
•  Cons:
–  The method requires some "sleight of hand" in that computation is based on subjective rather than objective data
–  FP has no direct physical meaning—it's just a number.
RECONCILING DIFFERENT METRICS APPROACHES
•  The relationship between lines of code and function points depends upon the programming language that is used to implement the software and the quality of the design.
Programming Language LOC/FP (average)
Assembly language 320
C 128
COBOL 106
FORTRAN 106
Pascal 90
C++ 64
Ada95 53
Visual Basic 32
Smalltalk 22
Powerbuilder (code generator) 16
SQL 12

•  A review of these data indicates that one LOC of C++ provides approximately 1.6 times the "functionality" (on average) as one LOC of FORTRAN.
•  Furthermore, one LOC of Visual Basic provides more than three times the functionality of a LOC for a conventional programming language.
•  More detailed data on the relationship between FP and LOC can be used to compute the number of function points when the number of delivered LOC is known.
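This "backfiring" conversion can be sketched directly from the LOC/FP averages in the table above; the 32,000-line C++ project is a hypothetical input:

```python
# Average LOC per FP, taken from the table above.
LOC_PER_FP = {
    "Assembly language": 320, "C": 128, "COBOL": 106, "FORTRAN": 106,
    "Pascal": 90, "C++": 64, "Ada95": 53, "Visual Basic": 32,
    "Smalltalk": 22, "Powerbuilder": 16, "SQL": 12,
}

def estimated_fp(delivered_loc: int, language: str) -> float:
    """Estimate function points from delivered LOC ("backfiring")."""
    return delivered_loc / LOC_PER_FP[language]

print(f"{estimated_fp(32_000, 'C++'):.0f} FP")  # 32,000 LOC of C++

# The functionality ratio cited in the text: FORTRAN LOC vs. C++ LOC.
print(f"{LOC_PER_FP['FORTRAN'] / LOC_PER_FP['C++']:.2f}x")
```

The ratio printed last is where the "approximately 1.6 times" figure in the text comes from (106/64 ≈ 1.66).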
Summary
•  LOC and FP measures are often used to derive productivity metrics such as LOC per person-month or FP per person-month
•  Function point and LOC based metrics have been found to be relatively accurate predictors of software development effort and cost.
•  However, in order to use LOC and FP for estimation, a historical baseline of information must be established.
Metrics for Software Quality
•  The overriding goal of software engineering is to produce a high-quality system, application, or product.
•  To achieve this goal, software engineers must apply effective methods coupled with modern tools within the context of a mature software process.
•  In addition, a good software engineer (and good software engineering managers) must measure if high quality is to be realized.
•  The quality of a system, application, or product is only as good as:
–  the requirements that describe the problem,
–  the design that models the solution,
–  the code that leads to an executable program, and
–  the tests that exercise the software to uncover errors.
 
•  A good software engineer uses measurement to assess the quality of the analysis and design models, the source code, and the test cases that have been created as the software is engineered.
•  To accomplish this real-time quality assessment, the engineer must use technical measures to evaluate quality in objective, rather than subjective, ways.
•  The project manager must also evaluate quality as the project progresses.
•  Private metrics collected by individual software engineers are assimilated to provide project-level results.
•  Although many quality measures can be collected, the primary thrust at the project level is to measure errors and defects.
•  Metrics derived from these measures provide an indication of the effectiveness of individual and group software quality assurance and control activities.
•  Metrics such as work product (e.g., requirements or design) errors per function point, errors uncovered per review hour, and errors uncovered per testing hour provide insight into the efficacy of each of the activities implied by the metric.
•  Error data can also be used to compute the defect removal efficiency (DRE) for each process framework activity.
Factors that affect quality
•  Over 25 years ago, McCall and Cavano defined a set of quality factors that were a first step toward the development of metrics for software quality.
•  These factors assess software from three distinct points of view:
(1) product operation (using it),
(2) product revision (changing it), and
(3) product transition (modifying it to work in a different environment; i.e., "porting" it).
•  In their work, the authors describe the relationship between these quality factors (what they call a framework) and other aspects of the software engineering process:
 
“First, the framework provides a mechanism for the project manager to identify what qualities are important. These qualities are attributes of the software in addition to its functional correctness and performance which have life cycle implications. Such factors as maintainability and portability have been shown in recent years to have significant life cycle cost impact . . .”
“Secondly, the framework provides a means for quantitatively assessing how well the development is progressing relative to the quality goals established . . .”
“Thirdly, the framework provides for more interaction of QA personnel throughout the development effort . . .”
“Lastly, . . . QA personnel can use indications of poor quality to help identify [better] standards to be enforced in the future.”
So what does this mean?
•  If a software organization adopts a set of quality factors as a "checklist" for assessing software quality, it is likely that software built today will still exhibit quality well into the future
•  Even as computing architectures undergo radical change, software that exhibits high quality in operation, transition, and revision will continue to serve its users well.
Measuring quality
•  Although there are many measures of software quality, correctness, maintainability, integrity, and usability provide useful indicators for the project team.
•  They are defined as:
•  Correctness:
•  A program must operate correctly or it provides little value to its users.
•  Correctness is the degree to which the software performs its required function.
•  The most common measure for correctness is defects per KLOC, where a defect is defined as a verified lack of conformance to requirements.
•  When considering the overall quality of a software product, defects are those problems reported by a user of the program after the program has been released for general use.
•  For quality assessment purposes, defects are counted over a standard period of time, typically one year.
•  Maintainability:
•  Software maintenance accounts for more effort than any other software engineering activity.
•  Maintainability is the ease with which a program can be corrected if an error is encountered, adapted if its environment changes, or enhanced if the customer desires a change in requirements.
•  There is no way to measure maintainability directly; therefore indirect measures are used
•  A simple time-oriented metric is mean-time-to-change (MTTC)
•  It is the time taken to analyze the change request, design an appropriate modification, implement the change, test it, and distribute the change to all users.
•  On average, programs that are maintainable will have a lower MTTC (for equivalent types of changes) than programs that are not maintainable.
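A sketch of the metric over a set of hypothetical change requests, each timed from analysis of the request through distribution of the change:

```python
# Hypothetical elapsed times (hours) for comparable change requests,
# covering analysis, design, implementation, test, and distribution.
change_times_product_a = [18, 24, 30, 22, 26]
change_times_product_b = [40, 55, 47, 62, 51]

def mttc(times: list[float]) -> float:
    """Mean time to change: average elapsed time over recorded changes."""
    return sum(times) / len(times)

print(f"Product A MTTC: {mttc(change_times_product_a):.1f} h")
print(f"Product B MTTC: {mttc(change_times_product_b):.1f} h")
# For equivalent change types, the lower MTTC suggests product A is
# the more maintainable of the two.
```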
•  Hitachi has used a cost-oriented metric for maintainability called spoilage—the cost to correct defects encountered after the software has been released to its end-users.
•  The ratio of spoilage to overall project cost (for many projects) is plotted as a function of time
•  This allows a manager to determine whether the overall maintainability of software produced by a software development organization is improving.
•  Actions can then be taken in response to the insight gained from this information.
•  Integrity:
•  Software integrity has become increasingly important in the age of hackers and firewalls.
•  This attribute measures a system's ability to withstand attacks (both accidental and intentional) on its security.
•  Attacks can be made on all three components of software: programs, data, and documents.
•  To measure integrity, two additional attributes must be defined: threat and security.
•  Threat is the probability (which can be estimated or derived from empirical evidence) that an attack of a specific type will occur within a given time.
•  Security is the probability (which can be estimated or derived from empirical evidence) that an attack of a specific type will be repelled.
•  The integrity of a system can then be defined as:
integrity = ∑ [(1 – threat) × (1 – security)]
•  where threat and security are summed over each type of attack.
 
•  Usability:
•  The catch phrase "user-friendliness" has become ubiquitous in discussions of software products.
•  If a program is not user-friendly, it is often doomed to failure, even if the functions that it performs are valuable.
•  Usability is an attempt to quantify user-friendliness and can be measured in terms of four characteristics:
(1) the physical and/or intellectual skill required to learn the system,
(2) the time required to become moderately efficient in the use of the system,
(3) the net increase in productivity (over the approach that the system replaces) measured when the system is used by someone who is moderately efficient, and
(4) a subjective assessment (sometimes obtained through a questionnaire) of users' attitudes toward the system.
•  Defect Removal Efficiency:
•  A quality metric that provides benefit at both the project and process level is defect removal efficiency (DRE).
•  In essence, DRE is a measure of the filtering ability of quality assurance and control activities as they are applied throughout all process framework activities.
•  When considered for a project as a whole, DRE is defined in the following manner:
DRE = E/(E + D)    (Ideal value?)
where E is the number of errors found before delivery of the software to the end-user and D is the number of defects found after delivery.
 
•  The ideal value for DRE is 1.
•  That is, no defects are found in the software after delivery.
•  Realistically, D will be greater than 0, but the value of DRE can still approach 1.
•  As E increases (for a given value of D), the overall value of DRE begins to approach 1.
•  In fact, as E increases, it is likely that the final value of D will decrease (errors are filtered out before they become defects).
•  If used as a metric it provides an indicator of the filtering ability of quality control and assurance activities
•  DRE encourages a software project team to institute techniques for finding as many errors as possible before delivery.
•  DRE can also be used within the project to assess a team's ability to find errors before they are passed to the next framework activity or software engineering task.
•  For example, the requirements analysis task produces an analysis model that can be reviewed to find and correct errors.
•  Those errors that are not found during the review of the analysis model are passed on to the design task (where they may or may not be found).
•  When used in this context, we redefine DRE as
DREi = Ei/(Ei + Ei+1)
where Ei is the number of errors found during software engineering activity i and Ei+1 is the number of errors found during software engineering activity i+1 that are traceable to errors that were not discovered in software engineering activity i.
•  A quality objective for a software team (or an individual software engineer) is to achieve a DREi that approaches 1.
•  That is, errors should be filtered out before they are passed on to the next activity.
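Both forms of the metric can be sketched with hypothetical error counts:

```python
# Project-level DRE: E errors found before delivery, D defects after.
def dre(errors_before: int, defects_after: int) -> float:
    return errors_before / (errors_before + defects_after)

# Hypothetical: 342 errors caught before release, 18 defects reported after.
print(f"Project DRE: {dre(342, 18):.3f}")

# Activity-level DRE_i: errors found during activity i vs. errors traceable
# to activity i but only discovered in activity i+1.
def dre_i(found_in_i: int, escaped_to_next: int) -> float:
    return found_in_i / (found_in_i + escaped_to_next)

# Hypothetical: 50 errors caught in the analysis review, 10 analysis
# errors discovered later during design.
print(f"Analysis DRE: {dre_i(50, 10):.3f}")
```

As the text notes, a value approaching 1 means the activity's filtering is effective; the activity-level form pinpoints which framework activity is leaking errors.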
Project Planning & Estimation
•  Software project management begins with a set of activities that are collectively called project planning
•  Before the project can begin, the manager and the software team must estimate:
–  The work to be done
–  The resources that will be required, and
–  The time that will elapse from start to finish
•  This looking into the future always has a certain degree of uncertainty associated with it
•  Although estimating is as much art as it is science, this important activity need not be conducted in a haphazard manner.
•  Useful techniques for time and effort estimation do exist.
•  Process and project metrics can provide historical perspective and powerful input for the generation of quantitative estimates.
•  Past experience (of all people involved) can aid immeasurably as estimates are developed and reviewed.
•  Estimation lays a foundation for all other project planning activities, and project planning provides the road map for successful software engineering
•  Therefore it is ill-advised to embark without it.
What is it?
•  Project planning and estimation involves an attempt to determine:
–  how much money
–  how much effort
–  how many resources
–  how much time it will take to build a specific software-based system or product.
Who does it?
•  Software managers—using information solicited from customers and software engineers, and software metrics data collected from past projects.
Why is it important?
•  Would you build a house without knowing how much you were about to spend? Of course not!
•  Since most computer-based systems and products cost considerably more to build than a large house, it would seem reasonable to develop an estimate before you start creating the software.
What are the steps?
•  Estimation begins with a description of the scope of the product.
•  Until the scope is "bounded" it's not possible to develop a meaningful estimate.
•  The problem is then decomposed into a set of smaller problems and each of these is estimated using historical data and experience as guides.
•  It is advisable to generate your estimates using at least two different methods (as a cross check).
•  Problem complexity and risk are considered before a final estimate is made.
What is the work product?
•  A simple table delineating:
–  the tasks to be performed
–  the functions to be implemented, and
–  the cost, effort, and time involved for each is generated.
•  A list of required project resources is also produced.
So what are the controlling factors?
•  Estimation of resources, cost, and schedule for a software engineering effort requires:
–  experience
–  access to good historical information, and
–  the courage to commit to quantitative predictions when qualitative information is all that exists.
•  Estimation carries inherent risk and this risk leads to uncertainty.
•  Project complexity has a strong effect on the uncertainty inherent in planning.
•  Complexity, however, is a relative measure that is affected by familiarity with past effort.
•  The first-time developer of a sophisticated e-commerce application might consider it to be exceedingly complex.
•  However, a software team developing its tenth e-commerce Web site would consider such work run of the mill.
•  A number of quantitative software complexity measures have been proposed
•  Such measures are applied at the design or code level and are therefore difficult to use during software planning (before a design and code exist).
•  However, other, more subjective assessments of complexity (e.g., the function point complexity adjustment factors described earlier) can be established early in the planning process.
•  Project size is another important factor that can affect the accuracy and efficacy of estimates.
•  As size increases, the interdependency among various elements of the software grows rapidly.
•  Problem decomposition, an important approach to estimating, becomes more difficult because decomposed elements may still be formidable.
•  The degree of structural uncertainty also has an effect on estimation risk.
•  In this context, structure refers to the degree to which requirements have been solidified, the ease with which functions can be compartmentalized, and the hierarchical nature of the information that must be processed.
•  The availability of historical information has a strong influence on estimation risk.
•  By looking back, approaches that worked can be emulated and problem areas can be improved.
•  When comprehensive software metrics are available for past projects, estimates can be made with greater assurance, schedules can be established to avoid past difficulties, and overall risk is reduced.
SOFTWARE PROJECT ESTIMATION
•  In the early days of computing, software costs constituted a small percentage of the overall computer-based system cost.
•  An order of magnitude error in estimates of software cost had relatively little impact.
•  Today, software is the most expensive element of virtually all computer-based systems.
•  For complex, custom systems, a large cost estimation error can make the difference between profit and loss.
•  Cost overrun can be disastrous for the developer.
•  Software cost and effort estimation will never be an exact science.
•  Too many variables—human, technical, environmental, political—can affect the ultimate cost of software and the effort applied to develop it.
•  However, software project estimation can be transformed from a black art to a series of systematic steps that provide estimates with acceptable risk.
•  To achieve reliable cost and effort estimates, a number of options arise:
1.  Delay estimation until late in the project (obviously, we can achieve 100% accurate estimates after the project is complete!).
–  Unfortunately, this option, however attractive, is not practical.
–  Cost estimates must be provided "up front."
–  However, we should recognize that the longer we wait, the more we know, and the more we know, the less likely we are to make serious errors in our estimate
2.  Base estimates on similar projects that have already been completed.
–  This option can work reasonably well if the current project is quite similar to past efforts and other project influences (e.g., the customer, business conditions, deadlines, etc.) are equivalent.
–  Unfortunately, past experience has not always been a good indicator of future results.
3.  Use relatively simple decomposition techniques to generate project cost and effort estimates.
4.  Use one or more empirical models for software cost and effort estimation.
–  The remaining options are viable approaches to software project estimation.
–  Ideally, the techniques noted for each option should be applied in tandem, each used as a cross-check for the other.
•  Decomposition techniques take a "divide and conquer" approach to software project estimation.
•  By decomposing a project into major functions and related software engineering activities, cost and effort estimation can be performed in a stepwise fashion.
•  Empirical estimation models can be used to complement decomposition techniques and offer a potentially valuable estimation approach in their own right.
•  A model is based on experience (historical data) and takes the form
d = f(vi)
where d is one of a number of estimated values (e.g., effort, cost, project duration) and vi are selected independent parameters (e.g., estimated LOC or FP).
•  Automated estimation tools implement one or more decomposition techniques or empirical models.
•  When combined with a graphical user interface, automated tools provide an attractive option for estimating.
•  In such systems, the characteristics of the development organization (e.g., experience, environment) and the software to be developed are described.
•  Cost and effort estimates are derived from these data.
Self Study
•  Examples of:
–  LOC based estimation
–  FP based estimation
EMPIRICAL ESTIMATION MODELS
•  An estimation model for computer software uses empirically derived formulas to predict effort as a function of LOC or FP.
•  Values for LOC or FP are estimated using the approach described previously.
•  But instead of using the tables described in those sections, the resultant values for LOC or FP are plugged into the estimation model.
•  A typical empirical estimation model is derived using regression analysis on data collected from past software projects.
•  The overall structure of such models takes the form:
E = A + B × (ev)^C
where A, B, and C are empirically derived constants, E is effort in person-months, and ev is the estimation variable (either LOC or FP).
•  In addition to the relationship shown above, the majority of estimation models have some form of project adjustment component that enables E to be adjusted by other project characteristics (e.g., problem complexity, staff experience, development environment).
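As a sketch of the general form, the constants below are those of one published LOC-oriented model, the Walston-Felix model (E = 5.2 × KLOC^0.91); the 33.2 KLOC project size is an illustrative input:

```python
# Walston-Felix model: E = 5.2 * KLOC^0.91, an instance of the
# general form E = A + B * (ev)^C with A = 0, B = 5.2, C = 0.91.
def effort_person_months(kloc: float, a: float = 0.0,
                         b: float = 5.2, c: float = 0.91) -> float:
    return a + b * kloc ** c

# Illustrative project of 33.2 KLOC:
print(f"{effort_person_months(33.2):.0f} person-months")
```

Different published models (with different A, B, C values) can give widely divergent estimates for the same ev, which is why such models must be calibrated against an organization's own historical data.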
The COCOMO Model
•  Barry Boehm introduced a hierarchy of software estimation models bearing the name COCOMO, for COnstructive COst MOdel.
•  The original COCOMO model became one of the most widely used and discussed software cost estimation models in the industry.
•  It has evolved into a more comprehensive estimation model, called COCOMO II
•  COCOMO II is actually a hierarchy of estimation models that address the following areas:
–  Application composition model. Used during the early stages of software engineering, when prototyping of user interfaces, consideration of software and system interaction, assessment of performance, and evaluation of technology maturity are paramount.
–  Early design stage model. Used once requirements have been stabilized and basic software architecture has been established.
–  Post-architecture-stage model. Used during the construction of the software.
•  Like all estimation models for software, the COCOMO II models require sizing information.
•  Three different sizing options are available as part of the model hierarchy: object points, function points, and LOC.
•  The COCOMO II application composition model uses object points and is illustrated in what follows.
•  It should be noted that other, more sophisticated estimation models (using FP and KLOC) are also available as part of COCOMO II.
•  Like FP, the object point is an indirect software measure that is computed using counts of the number of:
1.  screens (at the user interface),
2.  reports, and
3.  components likely to be required to build the application.
•  Each object instance (e.g., a screen or report) is classified into one of three complexity levels (i.e., simple, medium, or difficult) using criteria suggested by Boehm.
•  In essence, complexity is a function of the number and source of the client and server data tables that are required to generate the screen or report and the number of views or sections presented as part of the screen or report.
•  Once complexity is determined, the number of screens, reports, and components are weighted according to the table below:
Complexity weighting for object types [BOE96] (Table 5.1):

  Object type       Simple   Medium   Difficult
  Screen               1        2         3
  Report               2        5         8
  3GL component        —        —        10
•  The object point count is then determined by multiplying the original number of object instances by the weighting factor given in the above table and summing to obtain a total object point count.
•  When component-based development or general software reuse is to be applied, the percent of reuse (%reuse) is estimated and the object point count is adjusted:

        NOP = (object points) x [(100 - %reuse)/100]

        where NOP is defined as new object points.
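The sizing steps above can be sketched in Python. The complexity weights follow Boehm's table; the instance counts and reuse percentage below are purely hypothetical:

```python
# Sketch of the COCOMO II application-composition sizing step.
# Weights come from Boehm's complexity table; 3GL components carry
# only a "difficult" weight of 10 in that table.
WEIGHTS = {
    "screen": {"simple": 1, "medium": 2, "difficult": 3},
    "report": {"simple": 2, "medium": 5, "difficult": 8},
    "3gl_component": {"difficult": 10},
}

def object_points(counts):
    """counts: {(object_type, complexity): number_of_instances}"""
    return sum(WEIGHTS[otype][cplx] * n for (otype, cplx), n in counts.items())

def new_object_points(op, pct_reuse):
    """Adjust for reuse: NOP = (object points) x (100 - %reuse)/100."""
    return op * (100 - pct_reuse) / 100

# Hypothetical project: 4 simple screens, 2 medium screens,
# 3 medium reports, 1 reusable 3GL component, 20% reuse.
counts = {("screen", "simple"): 4, ("screen", "medium"): 2,
          ("report", "medium"): 3, ("3gl_component", "difficult"): 1}
op = object_points(counts)       # 4*1 + 2*2 + 3*5 + 1*10 = 33
nop = new_object_points(op, 20)  # 33 * 0.8 = 26.4
```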
   
•  To derive an estimate of effort based on the computed NOP value, a “productivity rate” must be derived.
•  The table below presents the productivity rate for different levels of developer experience and development environment maturity.
•  PROD = NOP/person-month

Productivity rates for object points [BOE96] (Table 5.2):

  Developer's experience/capability   Very low   Low   Nominal   High   Very high
  Environment maturity/capability     Very low   Low   Nominal   High   Very high
  PROD                                    4        7      13       25       50
•  Once the productivity rate has been determined, an estimate of project effort can be derived as:

        estimated effort = NOP/PROD
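As a minimal sketch, the effort formula can be applied directly; the PROD values follow the productivity-rate table, and the NOP input of 26.4 is a hypothetical value, not data from a real project:

```python
# Sketch: effort estimate for the COCOMO II application composition model.
# PROD values (NOP/person-month) are from Boehm's productivity-rate table.
PROD = {"very_low": 4, "low": 7, "nominal": 13, "high": 25, "very_high": 50}

def estimated_effort(nop, rating):
    """estimated effort (person-months) = NOP / PROD"""
    return nop / PROD[rating]

effort = estimated_effort(26.4, "nominal")  # 26.4 / 13 ≈ 2.0 person-months
```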
The Software Equation
•  The software equation is a dynamic multivariable model that assumes a specific distribution of effort over the life of a software development project.
•  The model has been derived from productivity data collected for over 4000 contemporary software projects.
•  Based on these data, an estimation model of the following form was derived:

        E = [LOC x B^0.333 / P]^3 x (1/t^4)

        where
        E = effort in person-months or person-years
        t = project duration in months or years
        B = “special skills factor”
        P = “productivity parameter”

•  B increases slowly as “the need for integration, testing, quality assurance, documentation, and management skills grows”.
•  For small programs (KLOC = 5 to 15), B = 0.16. For programs greater than 70 KLOC, B = 0.39.
•  P reflects:
–  Overall process maturity and management practices
–  The extent to which good software engineering practices are used
–  The level of programming languages used
–  The state of the software environment
–  The skills and experience of the software team
–  The complexity of the application
•  Typical values might be P = 2,000 for development of real-time embedded software;
•  P = 10,000 for telecommunication and systems software;
•  P = 28,000 for business systems applications.
•  The productivity parameter can be derived for local conditions using historical data collected from past development efforts.
•  It is important to note that the software equation has two independent parameters:
1.  an estimate of size (in LOC) and
2.  an indication of project duration in calendar months or years.
•  To simplify the estimation, Putnam and Myers suggest a set of equations derived from the software equation.
•  Minimum development time is defined as:

        tmin = 8.14 (LOC/P)^0.43  in months, for tmin > 6 months

        E = 180 B t^3  in person-months, for E ≥ 20 person-months, where t is in years

•  Using the above equations with P = 12,000 (the recommended value for scientific software) for a CAD software of 33,200 LOC, with B = 0.28 and t = 12.6 months ≈ 1.05 years:

        tmin = 8.14 (33200/12000)^0.43 = 12.6 calendar months

        E = 180 x 0.28 x (1.05)^3 = 58 person-months
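The worked example can be checked with a short script, using the same inputs as above (P = 12,000, LOC = 33,200, B = 0.28):

```python
# Sketch of the Putnam–Myers simplified equations for the CAD example.
def t_min_months(loc, p):
    """Minimum development time: tmin = 8.14 (LOC/P)^0.43, valid for tmin > 6 months."""
    return 8.14 * (loc / p) ** 0.43

def effort_pm(b, t_years):
    """E = 180 * B * t^3 person-months, valid for E >= 20, with t in years."""
    return 180 * b * t_years ** 3

tmin = t_min_months(33200, 12000)   # ≈ 12.6 calendar months
e = effort_pm(0.28, tmin / 12)      # t converted to years; ≈ 58 person-months
```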
SOFTWARE QUALITY ASSURANCE
•  Even the most jaded software developers will agree that high-quality software is an important goal.
•  But how do we define quality?
•  A wag once said, "Every program does something right, it just may not be the thing that we want it to do."
•  Many definitions of software quality have been proposed in the literature.
•  For most purposes, software quality is defined as:
        Conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.
•  This definition serves to emphasize three important points:
1.  Software requirements are the foundation from which quality is measured. Lack of conformance to requirements is lack of quality.
2.  Specified standards define a set of development criteria that guide the manner in which software is engineered. If the criteria are not followed, lack of quality will almost surely result.
3.  A set of implicit requirements often goes unmentioned (e.g., the desire for ease of use and good maintainability). If software conforms to its explicit requirements but fails to meet implicit requirements, software quality is suspect.
SQA within a software organisation
•  The implication for software is that many different constituencies have software quality assurance responsibility:
–  software engineers,
–  project managers,
–  customers,
–  salespeople, and
–  the individuals who serve within an SQA group.
•  The SQA group serves as the customer's in-house representative.
•  That is, the people who perform SQA must look at the software from the customer's point of view.
•  They could be asking questions such as:
–  Does the software adequately meet the quality factors? (as discussed earlier)
–  Has software development been conducted according to pre-established standards?
–  Have technical disciplines properly performed their roles as part of the SQA activity?
SQA activities
•  Software quality assurance is composed of a variety of tasks associated with two different constituencies:
1.  the software engineers who do technical work and
2.  an SQA group that has responsibility for quality assurance planning, oversight, record keeping, analysis, and reporting.
•  Software engineers address quality (and perform quality assurance and quality control activities) by:
–  applying solid technical methods and measures,
–  conducting formal technical reviews, and
–  performing well-planned software testing.
•  The role of the SQA group is to assist the software team in achieving a high-quality end product.
•  The Software Engineering Institute recommends a set of SQA activities that address:
–  quality assurance planning,
–  oversight,
–  record keeping,
–  analysis, and
–  reporting
•  These activities are performed (or facilitated) by an independent SQA group that:
•  Prepares an SQA plan for a project. The plan is developed during project planning and is reviewed by all interested parties.
•  Quality assurance activities performed by the software engineering team and the SQA group are governed by the plan.
•  The plan identifies
–  evaluations to be performed
–  audits and reviews to be performed
–  standards that are applicable to the project
–  procedures for error reporting and tracking
–  documents to be produced by the SQA group
–  amount of feedback provided to the software project team
•  Participates in the development of the project’s software process description.
–  The software team selects a process for the work to be performed.
–  The SQA group reviews the process description for compliance with organizational policy, internal software standards, externally imposed standards (e.g., ISO-9001), and other parts of the software project plan.
•  Reviews software engineering activities to verify compliance with the defined software process.
–  The SQA group identifies, documents, and tracks deviations from the process and verifies that corrections have been made.
•  Audits designated software work products to verify compliance with those defined as part of the software process.
–  The SQA group reviews selected work products; identifies, documents, and tracks deviations; verifies that corrections have been made; and
–  periodically reports the results of its work to the project manager.
•  Ensures that deviations in software work and work products are documented and handled according to a documented procedure.
–  Deviations may be encountered in the project plan, process description, applicable standards, or technical work products.
•  Records any noncompliance and reports to senior management.
–  Noncompliance items are tracked until they are resolved.
•  In addition to these activities, the SQA group coordinates the control and management of change and helps to collect and analyze software metrics.
FORMAL APPROACHES TO SQA
•  It can be argued that a computer program is a mathematical object.
•  A rigorous syntax and semantics can be defined for every programming language.
•  It is also possible to develop a similarly rigorous approach to the specification of software requirements.
•  If the requirements model (specification) and the programming language can be represented in a rigorous manner, it should be possible to apply mathematical proof of correctness to demonstrate that a program conforms exactly to its specifications.
•  One of the methods used to this end is known as statistical SQA.
Statistical SQA
•  For software, statistical quality assurance implies the following steps:
1.  Information about software defects is collected and categorized.
2.  An attempt is made to trace each defect to its underlying cause (e.g., nonconformance to specifications, design error, violation of standards, poor communication with the customer).
3.  Using the Pareto principle (80 percent of the defects can be traced to 20 percent of all possible causes), isolate the 20 percent (the "vital few").
4.  Once the vital few causes have been identified, move to correct the problems that have caused the defects.
•  To illustrate how statistical SQA works, assume that a software engineering organization collects information on defects for a period of one year.
•  Some of the defects are uncovered as software is being developed.
•  Others are encountered after the software has been released to its end-users.
•  Although hundreds of different errors are uncovered, all can be tracked to one (or more) of the following causes:
•  incomplete or erroneous specifications (IES)
•  misinterpretation of customer communication (MCC)
•  intentional deviation from specifications (IDS)
•  violation of programming standards (VPS)
•  error in data representation (EDR)
•  inconsistent component interface (ICI)
•  error in design logic (EDL)
•  incomplete or erroneous testing (IET)
•  inaccurate or incomplete documentation (IID)
•  error in programming language translation of design (PLT)
•  ambiguous or inconsistent human/computer interface (HCI)
•  miscellaneous (MIS)
•  To apply statistical SQA, a table is built (Table 8.1: Data collection for statistical SQA):

             Total        Serious       Moderate        Minor
  Error     No.    %     No.    %      No.    %      No.    %
  IES       205   22%     34   27%      68   18%     103   24%
  MCC       156   17%     12    9%      68   18%      76   17%
  IDS        48    5%      1    1%      24    6%      23    5%
  VPS        25    3%      0    0%      15    4%      10    2%
  EDR       130   14%     26   20%      68   18%      36    8%
  ICI        58    6%      9    7%      18    5%      31    7%
  EDL        45    5%     14   11%      12    3%      19    4%
  IET        95   10%     12    9%      35    9%      48   11%
  IID        36    4%      2    2%      20    5%      14    3%
  PLT        60    6%     15   12%      19    5%      26    6%
  HCI        28    3%      3    2%      17    4%       8    2%
  MIS        56    6%      0    0%      15    4%      41    9%
  Totals    942  100%    128  100%     379  100%     435  100%
•  The table indicates that IES, MCC, and EDR are the vital few causes that account for 53 percent of all errors.
•  It should be noted, however, that IES, EDR, PLT, and EDL would be selected as the vital few causes if only serious errors are considered.
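The "vital few" selection over the Total column of the table can be sketched as a short Pareto analysis (counts transcribed from the table; the 50% cut-off is one reasonable choice of threshold):

```python
# Sketch: Pareto ("vital few") analysis of the Total defect counts above.
totals = {"IES": 205, "MCC": 156, "IDS": 48, "VPS": 25, "EDR": 130,
          "ICI": 58, "EDL": 45, "IET": 95, "IID": 36, "PLT": 60,
          "HCI": 28, "MIS": 56}

def vital_few(counts, threshold=0.5):
    """Smallest set of causes, largest first, covering at least
    `threshold` of all recorded defects."""
    ranked = sorted(counts, key=counts.get, reverse=True)
    total, cum, few = sum(counts.values()), 0, []
    for cause in ranked:
        few.append(cause)
        cum += counts[cause]
        if cum / total >= threshold:
            break
    return few

print(vital_few(totals))  # → ['IES', 'MCC', 'EDR'], just over half of all errors
```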
•  Once the vital few causes are determined, the software engineering organization can begin corrective action.
•  For example, to correct MCC, the software developer might implement facilitated application specification techniques to improve the quality of customer communication and specifications.
•  To improve EDR, the developer might acquire CASE tools for data modeling and perform more stringent data design reviews.
•  It is important to note that corrective action focuses primarily on the vital few.
•  As the vital few causes are corrected, new candidates pop to the top of the stack.
•  Statistical quality assurance techniques for software have been shown to provide substantial quality improvement.
•  In some cases, software organizations have achieved a 50 percent reduction per year in defects after applying these techniques.
•  In conjunction with the collection of defect information, software developers can calculate an error index (EI) for each major step in the software process.
•  After analysis, design, coding, testing, and release, the following data are gathered:
–  Ei = the total number of errors uncovered during the ith step in the software engineering process
–  Si = the number of serious errors
–  Mi = the number of moderate errors
–  Ti = the number of minor errors
–  PS = size of the product (LOC, design statements, pages of documentation) at the ith step
–  ws, wm, wt = weighting factors for serious, moderate, and trivial (minor) errors
•  Recommended values are ws = 10, wm = 3, wt = 1.
•  The weighting factors for each phase should become larger as development progresses.
•  This rewards an organization that finds errors early.
•  At each step in the software process, a phase index, PIi, is computed:

        PIi = ws (Si/Ei) + wm (Mi/Ei) + wt (Ti/Ei)

•  The error index is computed by calculating the cumulative effect of each PIi, weighting errors encountered later in the software engineering process more heavily than those encountered earlier:

        EI = Σ (i x PIi)/PS
           = (PI1 + 2PI2 + 3PI3 + ... + iPIi)/PS
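The two formulas above can be sketched as follows, using the recommended weights ws = 10, wm = 3, wt = 1; the per-phase error counts and product size are hypothetical:

```python
# Sketch of the phase index (PI) and error index (EI) computations.
WS, WM, WT = 10, 3, 1  # recommended weights for serious, moderate, minor errors

def phase_index(serious, moderate, minor):
    """PIi = ws(Si/Ei) + wm(Mi/Ei) + wt(Ti/Ei), where Ei is the step's total."""
    e = serious + moderate + minor
    return (WS * serious + WM * moderate + WT * minor) / e

def error_index(phase_indices, product_size):
    """EI = (1*PI1 + 2*PI2 + 3*PI3 + ...) / PS"""
    return sum(i * pi for i, pi in enumerate(phase_indices, start=1)) / product_size

# e.g. errors found during the analysis and design steps of a 4200-LOC product
pis = [phase_index(2, 5, 13), phase_index(1, 4, 10)]
ei = error_index(pis, product_size=4200)
```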
•  The error index can be used in conjunction with information collected in the table shown earlier to develop an overall indication of improvement in software quality.
•  The application of statistical SQA and the Pareto principle can be summarized in a single sentence:
        Spend your time focusing on things that really matter, but first be sure that you understand what really matters!
Software Reliability
•  There is no doubt that the reliability of a computer program is an important element of its overall quality.
•  If a program repeatedly and frequently fails to perform, it matters little whether other software quality factors are acceptable.
•  Software reliability, unlike many other quality factors, can be measured directly and estimated using historical and developmental data.
•  Software reliability is defined in statistical terms as "the probability of failure-free operation of a computer program in a specified environment for a specified time".
•  To illustrate, program X is estimated to have a reliability of 0.96 over eight elapsed processing hours.
•  In other words, if program X were to be executed 100 times and require eight hours of elapsed processing time (execution time), it is likely to operate correctly (without failure) 96 times out of 100.
•  Whenever software reliability is discussed, a pivotal question arises: What is meant by the term failure?
•  In the context of any discussion of software quality and reliability, failure is nonconformance to software requirements.
•  Yet, even within this definition, there are gradations.
•  Failures can be only annoying or catastrophic.
•  One failure can be corrected within seconds while another requires weeks or even months to correct.
•  Complicating the issue even further, the correction of one failure may in fact result in the introduction of other errors that ultimately result in other failures.
Measures of Reliability and Availability
•  If we consider a computer-based system (hardware + software), a simple measure of reliability is mean-time-between-failure (MTBF), where:

        MTBF = MTTF + MTTR

        where MTTF = mean-time-to-failure
              MTTR = mean-time-to-repair

•  Many researchers argue that MTBF is a far more useful measure than defects/KLOC or defects/FP.
•  Stated simply, an end-user is concerned with failures, not with the total error count.
•  Because each error contained within a program does not have the same failure rate, the total error count provides little indication of the reliability of a system.
•  For example, consider a program that has been in operation for 14 months.
•  Many errors in this program may remain undetected for decades before they are discovered.
•  The MTBF of such obscure errors might be 50 or even 100 years.
•  Other errors, as yet undiscovered, might have a failure rate of 18 or 24 months.
•  Even if every one of the first category of errors (those with long MTBF) is removed, the impact on software reliability is negligible.
•  In addition to a reliability measure, we must develop a measure of availability.
•  Software availability is the probability that a program is operating according to requirements at a given point in time and is defined as:

        Availability = [MTTF/(MTTF + MTTR)] x 100%

•  The MTBF reliability measure is equally sensitive to MTTF and MTTR.
•  The availability measure is somewhat more sensitive to MTTR, an indirect measure of the maintainability of software.
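As a small illustration of the two measures (the MTTF and MTTR hour values below are hypothetical):

```python
# Sketch: MTBF and availability computed from MTTF and MTTR.
def mtbf(mttf, mttr):
    """Mean time between failures: MTBF = MTTF + MTTR."""
    return mttf + mttr

def availability(mttf, mttr):
    """Availability = MTTF/(MTTF + MTTR) x 100%."""
    return mttf / (mttf + mttr) * 100

# A system that runs 68 hours between failures and takes 2 hours to repair:
print(mtbf(68, 2))          # → 70 hours
print(availability(68, 2))  # ≈ 97.1% available
```

Note that a shorter repair time raises availability even when MTBF is unchanged, which is why availability is the more MTTR-sensitive measure.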
Software Requirement and Design: Requirement specification
•  Once requirements have been gathered, a set of work products can be generated.
•  These form the basis for requirements analysis.
•  Possible work products are:
–  A statement of need and feasibility.
–  A bounded statement of scope for the system or product.
–  A list of customers, users, and other stakeholders who participated in the requirements elicitation activity.
–  A description of the system’s technical environment.
–  A list of requirements (preferably organized by function) and the domain constraints that apply to each.
–  A set of usage scenarios that provide insight into the use of the system or product under different operating conditions.
–  Any prototypes developed to better define requirements.
•  Each of these work products is reviewed by all people who have participated in the requirements elicitation.
Requirement Analysis
•  Analysis categorizes requirements and organizes them into related subsets;
•  explores each requirement in relationship to others;
•  examines requirements for consistency, omissions, and ambiguity; and
•  ranks requirements based on the needs of customers/users.
•  As the requirements analysis activity commences, the following questions are asked and answered:
–  Is each requirement consistent with the overall objective for the system/product?
–  Have all requirements been specified at the proper level of abstraction? That is, do some requirements provide a level of technical detail that is inappropriate at this stage?
–  Is each requirement bounded and unambiguous?
–  Does each requirement have attribution? That is, is a source (generally, a specific individual) noted for each requirement?
–  Do any requirements conflict with other requirements?
–  Is each requirement achievable in the technical environment that will house the system or product?
–  Is each requirement testable, once implemented?
•  It isn’t unusual for customers and users to ask for more than can be achieved, given limited business resources.
•  It also is relatively common for different customers or users to propose conflicting requirements, arguing that their version is “essential for our special needs.”
•  The system engineer must reconcile these conflicts through a process of negotiation.
•  Customers, users and stakeholders are asked to rank requirements and then discuss conflicts in priority.
•  Risks associated with each requirement are identified and analyzed.
•  Rough guesstimates of development effort are made and used to assess the impact of each requirement on project cost and delivery time.
•  Using an iterative approach, requirements are eliminated, combined, and/or modified so that each party achieves some measure of satisfaction.
Issues in Requirement Analysis
•  Requirements analysis and specification may appear to be a relatively simple task, but appearances are deceiving.
•  Communication content is very high.
•  Chances for misinterpretation or misinformation abound.
•  Ambiguity is probable.
•  The dilemma that confronts a software engineer may best be understood by repeating the statement of an anonymous (infamous?) customer:
        "I know you believe you understood what you think I said, but I am not sure you realize that what you heard is not what I meant."
Significance of Requirement Analysis
1.  Requirements analysis is a software engineering task that bridges the gap between system-level requirements engineering and software design (Figure 11.1: Analysis as a bridge between system engineering and software engineering).
2.  Requirements engineering activities result in:
–  the specification of software’s operational characteristics (function, data, and behavior),
–  an indication of software's interface with other system elements, and
–  the establishment of constraints that software must meet.
3.  Requirements analysis allows the software engineer (sometimes called analyst in this role) to refine the software allocation and build models of the data, functional, and behavioral domains that will be treated by software.
4.  Requirements analysis provides the software designer with a representation of information, function, and behavior that can be translated to data, architectural, interface, and component-level designs.
5.  Finally, the requirements specification provides the developer and the customer with the means to assess quality once software is built.
•  Software requirements analysis may be divided into five areas of effort:
1.  Problem recognition
2.  Evaluation and synthesis
3.  Modeling
4.  Specification, and
5.  Review.
•  Initially, the analyst studies the System Specification (if one exists) and the Software Project Plan.
•  Problem evaluation and solution synthesis is the next major area of effort for analysis.
•  The analyst must:
–  define all externally observable data objects,
–  evaluate the flow and content of information,
–  define and elaborate all software functions,
–  understand software behavior in the context of events that affect the system,
–  establish system interface characteristics, and
–  uncover additional design constraints.
•  Each of these tasks serves to describe the problem so that an overall approach or solution may be synthesized.
ANALYSIS PRINCIPLES
•  Over the past two decades, a large number of analysis modeling methods have been developed.
•  However, all analysis methods are related by a set of operational principles:
1.  The information domain of a problem must be represented and understood.
2.  The functions that the software is to perform must be defined.
3.  The behavior of the software (as a consequence of external events) must be represented.
4.  The models that depict information, function, and behavior must be partitioned in a manner that uncovers detail in a layered (or hierarchical) fashion.
5.  The analysis process should move from essential information toward implementation detail.
•  By applying these principles, the analyst approaches a problem systematically.
•  The information domain is examined so that function may be understood more completely.
•  Models are used so that the characteristics of function and behavior can be communicated in a compact fashion.
•  Partitioning is applied to reduce complexity.
•  Essential and implementation views of the software are necessary to accommodate the logical constraints imposed by processing requirements and the physical constraints imposed by other system elements.
•  In addition to these operational analysis principles, Davis suggests a set of guiding principles for requirements engineering:
•  Understand the problem before you begin to create the analysis model.
–  There is a tendency to rush to a solution, even before the problem is understood.
–  This often leads to elegant software that solves the wrong problem!
•  Develop prototypes that enable a user to understand how human/machine interaction will occur.
–  Since the perception of the quality of software is often based on the perception of the “friendliness” of the interface, prototyping (and the iteration that results) is highly recommended.
•  Record the origin of and the reason for every requirement.
–  This is the first step in establishing traceability back to the customer.
•  Use multiple views of requirements.
–  Building data, functional, and behavioral models provides the software engineer with three different views.
–  This reduces the likelihood that something will be missed and increases the likelihood that inconsistency will be recognized.
•  Rank requirements.
–  Tight deadlines may preclude the implementation of every software requirement.
–  If an incremental process model is applied, those requirements to be delivered in the first increment must be identified.
•  Work to eliminate ambiguity.
–  Because most requirements are described in a natural language, the opportunity for ambiguity abounds.
–  The use of formal technical reviews is one way to uncover and eliminate ambiguity.
•  A software engineer who takes these principles to heart is more likely to develop a software specification that will provide an excellent foundation for design.
Brief discussion of Design
•  Software design sits at the technical kernel of software engineering and is applied regardless of the software process model that is used.
•  Beginning once software requirements have been analyzed and specified, software design is the first of three technical activities—design, code generation, and test—that are required to build and verify the software.
•  Each activity transforms information in a manner that ultimately results in validated computer software.
•  Each of the elements of the analysis model provides information that is necessary to create the four design models required for a complete specification of design.
•  Software requirements, manifested by the data, functional, and behavioral models, feed the design task.
•  Using one of a number of design methods, the design task produces a data design, an architectural design, an interface design, and a component design.
DESIGN PRINCIPLES
• Software design is both a process and a model.
• The design process is a sequence of steps that enable the designer to describe all aspects of the software to be built.
• It is important to note, however, that the design process is not simply a cookbook.
• Creative skill, past experience, a sense of what makes “good” software, and an overall commitment to quality are critical success factors for a competent design.
• The design model is the equivalent of an architect’s plans for a house.
• It begins by representing the totality of the thing to be built (e.g., a three-dimensional rendering of the house) and slowly refines the thing to provide guidance for constructing each detail (e.g., the plumbing layout).
• Similarly, the design model that is created for software provides a variety of different views of the computer software.
• Basic design principles enable the software engineer to navigate the design process.
• Davis suggests a set of principles for software design, which have been adapted and extended in the following list:
• The design process should not suffer from “tunnel vision.”
– A good designer should consider alternative approaches, judging each based on the requirements of the problem, the resources available to do the job, and applicable design concepts.
• The design should be traceable to the analysis model.
– Because a single element of the design model often traces to multiple requirements, it is necessary to have a means for tracking how requirements have been satisfied by the design model.
• The design should not reinvent the wheel.
– Systems are constructed using a set of design patterns, many of which have likely been encountered before.
– These patterns should always be chosen as an alternative to reinvention.
– Time is short and resources are limited!
– Design time should be invested in representing truly new ideas and integrating those patterns that already exist.
• The design should “minimize the intellectual distance” between the software and the problem as it exists in the real world.
– That is, the structure of the software design should (whenever possible) mimic the structure of the problem domain.
• The design should exhibit uniformity and integration.
– A design is uniform if it appears that one person developed the entire thing.
– Rules of style and format should be defined for a design team before design work begins.
– A design is integrated if care is taken in defining interfaces between design components.
• The design should be structured to accommodate change.
– The design concepts should be flexible enough to accommodate justified change.
• The design should be structured to degrade gently, even when aberrant data, events, or operating conditions are encountered.
– Well-designed software should never “bomb.”
– It should be designed to accommodate unusual circumstances, and if it must terminate processing, it should do so in a graceful manner.
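To make "degrading gently" concrete, here is a minimal Python sketch (the `read_measurement` and `process` functions and the 0 to 1000 bounds are hypothetical, invented for illustration) of a module that catches aberrant input, reports context, and terminates its processing cleanly instead of crashing mid-stream:

```python
import sys

def read_measurement(line):
    """Parse one measurement; raise ValueError on aberrant data."""
    value = float(line)  # non-numeric input raises ValueError
    if not (0.0 <= value <= 1000.0):
        raise ValueError(f"out of range: {value}")
    return value

def process(lines):
    """Process measurements until the data run out or go bad."""
    results = []
    for n, line in enumerate(lines, start=1):
        try:
            results.append(read_measurement(line))
        except ValueError as err:
            # Graceful termination: report what happened and where,
            # then stop in a controlled way with partial results intact.
            print(f"line {n}: bad input ({err}); stopping cleanly",
                  file=sys.stderr)
            break
    return results
```

The point of the sketch is the shape, not the details: the failure path is designed in from the start, so an unusual input produces a diagnostic and a controlled exit rather than an unhandled crash.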
• Design is not coding, coding is not design.
– Even when detailed procedural designs are created for program components, the level of abstraction of the design model is higher than source code.
– The only design decisions made at the coding level address the small implementation details that enable the procedural design to be coded.
• The design should be assessed for quality as it is being created, not after the fact.
– A variety of design concepts and design measures are available to assist the designer in assessing quality.
• The design should be reviewed to minimize conceptual (semantic) errors.
– There is sometimes a tendency to focus on minutiae when the design is reviewed, missing the forest for the trees.
– A design team should ensure that major conceptual elements of the design (omissions, ambiguity, inconsistency) have been addressed before worrying about the syntax of the design model.
EFFECTIVE MODULAR DESIGN
• All the fundamental design concepts serve to precipitate modular designs.
• In fact, modularity has become an accepted approach in all engineering disciplines.
• A modular design:
– reduces complexity,
– facilitates change (a critical aspect of software maintainability), and
– results in easier implementation by encouraging parallel development of different parts of a system.
Functional Independence
• The concept of functional independence is a direct outgrowth of modularity and the concepts of abstraction and information hiding.
• Functional independence is achieved by developing modules with "single-minded" function and an "aversion" to excessive interaction with other modules.
• Stated another way, we want to design software so that each module addresses a specific subfunction of requirements and has a simple interface when viewed from other parts of the program structure.
Benefits of functional independence
• Software with effective modularity, that is, independent modules, is:
– easier to develop, and
– easier to maintain.
• Why easier to develop?
• Why easier to maintain?
– Easier to develop because function can be compartmentalized and interfaces are simplified, so the work is easily mapped onto a team.
– Independent modules are easier to maintain (and test) because secondary effects caused by design or code modification are limited, error propagation is reduced, and reusable modules are possible.
• To summarize, functional independence is a key to good design, and design is the key to software quality.
Coupling and cohesion
• Functional independence is measured using two qualitative criteria:
– cohesion, and
– coupling.
• Cohesion is a measure of the relative functional strength of a module.
• Coupling is a measure of the relative interdependence among modules.
Cohesion
• Cohesion is a natural extension of the information hiding concept.
• A cohesive module performs a single task within a software procedure, requiring little interaction with procedures being performed in other parts of a program.
• Stated simply, a cohesive module should (ideally) do just one thing.
• Cohesion may be represented as a "spectrum."
• We always strive for high cohesion, although the mid-range of the spectrum is often acceptable.
• The scale for cohesion is nonlinear.
• That is, low-end cohesiveness is much "worse" than middle range, which is nearly as "good" as high-end cohesion.
• In practice, a designer need not be concerned with categorizing cohesion in a specific module.
• Rather, the overall concept should be understood and low levels of cohesion should be avoided when modules are designed.
• At the low (undesirable) end of the spectrum, we encounter a module that performs a set of tasks that relate to each other loosely, if at all.
• Such modules are termed coincidentally cohesive.
• A module that performs tasks that are related logically (e.g., a module that produces all output regardless of type) is logically cohesive.
• When a module contains tasks that are related by the fact that all must be executed within the same span of time, the module exhibits temporal cohesion.
• As an example of low cohesion, consider a module that performs error processing for an engineering analysis package.
• The module is called when computed data exceed prespecified bounds.
• It performs the following tasks:
1. computes supplementary data based on original computed data,
2. produces an error report (with graphical content) on the user's workstation,
3. performs follow-up calculations requested by the user,
4. updates a database, and
5. enables menu selection for subsequent processing.
So what is the problem you see here?
• Although the preceding tasks are loosely related, each is an independent functional entity that might best be performed as a separate module.
• Combining the functions into a single module can serve only to increase the likelihood of error propagation when a modification is made to one of its processing tasks.
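The refactoring this implies can be sketched in a few lines of Python (all names here are hypothetical, loosely modeled on the error-processing scenario above):

```python
# Low cohesion: one module bundles loosely related tasks, so a
# change to any one task risks breaking the others.
def handle_error(data):
    supplementary = [x * 2 for x in data]       # task 1: derived data
    report = f"ERROR REPORT: {supplementary}"   # task 2: reporting
    # ... follow-up calculations, database update, menu selection ...
    return report

# Higher cohesion: each module does just one thing, so each can be
# developed, tested, and modified independently.
def compute_supplementary(data):
    return [x * 2 for x in data]

def format_error_report(supplementary):
    return f"ERROR REPORT: {supplementary}"
```

The single-purpose versions have simple interfaces and can be reused or changed without touching the other tasks, which is exactly the functional independence argument made above.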
• Moderate levels of cohesion are relatively close to one another in the degree of module independence.
• When processing elements of a module are related and must be executed in a specific order, procedural cohesion exists.
• When all processing elements concentrate on one area of a data structure, communicational cohesion is present.
• High cohesion is characterized by a module that performs one distinct procedural task.
• As we have already noted, it is unnecessary to determine the precise level of cohesion.
• Rather, it is important to strive for high cohesion and to recognize low cohesion so that the software design can be modified to achieve greater functional independence.
Coupling
• Coupling is a measure of interconnection among modules in a software structure.
• Coupling depends on the interface complexity between modules, the point at which entry or reference is made to a module, and what data pass across the interface.
• In software design, we strive for the lowest possible coupling.
• Simple connectivity among modules results in software that is easier to understand and less prone to a "ripple effect" caused when errors occur at one location and propagate through a system.
[Figure: a program structure used to illustrate coupling, showing modules a through k, data passed as simple variables, a data structure passed across an interface, a control flag, and a global data area referenced by several modules]
• No coupling:
– Modules a and d.
– They are subordinate to different modules.
– Each is unrelated to the other, and therefore no direct coupling occurs.
• Data coupling (low coupling):
– Modules c and a.
– Module a is accessed via a conventional argument list, through which data are passed.
– As long as a simple argument list is present (i.e., simple data are passed; a one-to-one correspondence of items exists), low coupling is exhibited in this portion of the structure.
• Stamp coupling (a variation of data coupling):
– Modules a and b.
– Found when a portion of a data structure (rather than simple arguments) is passed via a module interface.
• Control coupling (moderate):
– Modules d and e.
– Characterized by passage of control between modules.
– Very common in most software designs.
– Occurs when a "control flag" (a variable that controls decisions in a subordinate or superordinate module) is passed between modules.
• External coupling (high):
– Occurs when modules are tied to an environment external to the software.
– For example, I/O couples a module to specific devices, formats, and communication protocols.
• Common coupling (high):
– Modules c, g, and k.
– Occurs when a number of modules reference a global data area (e.g., a disk file or a globally accessible memory area).
• Example of common coupling:
– Module c initializes the item.
– Later, module g recomputes and updates the item.
– An error occurs and g updates the item incorrectly.
– Much later in processing, module k reads the item, attempts to process it, and fails, causing the software to abort.
– The apparent cause of the abort is module k; the actual cause is module g.
• Diagnosing problems in structures with considerable common coupling is time consuming and difficult.
• However, this does not mean that the use of global data is necessarily "bad."
• It does mean that a software designer must be aware of the potential consequences of common coupling and take special care to guard against them.
• Content coupling (highest):
– Occurs when one module makes use of data or control information maintained within the boundary of another module.
– Secondarily, content coupling occurs when branches are made into the middle of a module.
– This mode of coupling can and should be avoided.
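Several of the coupling levels above can be sketched in Python (hypothetical function and variable names; a sketch of the idea, not a prescribed design):

```python
# Data coupling (low): only simple values cross the interface.
def compute_total(price, quantity):
    return price * quantity

# Stamp coupling: a whole data structure is passed, even though the
# callee needs only part of it.
def print_label(order):  # order is a dict with many fields
    return f"{order['name']}: {order['total']}"

# Control coupling (moderate): a flag passed in steers the callee's logic.
def render(report, as_html):
    return f"<p>{report}</p>" if as_html else report

# Common coupling (high): modules communicate through a global data area.
shared_state = {"item": 0}

def initialize():   # plays the role of module c
    shared_state["item"] = 1

def recompute():    # plays the role of module g
    shared_state["item"] += 1
```

In the common-coupling case, a bug in `recompute` would surface wherever `shared_state` is next read, which is the diagnosis problem described above.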
• The coupling modes just discussed occur because of design decisions made when the structure was developed.
• Variants of external coupling, however, may be introduced during coding.
• For example, compiler coupling ties source code to specific (and often nonstandard) attributes of a compiler.
• Operating system (OS) coupling ties design and resultant code to operating system "hooks" that can create havoc when OS changes occur.
Software Testing
• "You're never done testing; the burden simply shifts from you (the software engineer) to your customer."
• "Every time the customer/user executes a computer program, the program is being tested."
• "You're done testing when you run out of time or you run out of money."
• This sobering fact underlines the importance of testing in software quality assurance activities.
• Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and code generation.
• Testing presents an interesting anomaly for the software engineer.
• During earlier software engineering activities, the engineer attempts to build software from an abstract concept into a tangible product.
• Now comes testing, wherein the engineer creates a series of test cases that are intended to "demolish" the software that has been built.
• In fact, testing is the one step in the software process that could be viewed (psychologically, at least) as destructive rather than constructive.
• But is it really destructive?
• The answer is no, since the objectives of testing are somewhat different than we might expect.
Testing Objectives
• Glen Myers states a number of rules that can serve well as testing objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
3. A successful test is one that uncovers an as-yet-undiscovered error.
• These objectives imply a dramatic change in viewpoint.
• They move counter to the commonly held view that a successful test is one in which no errors are found.
• Our objective is to design tests that systematically uncover different classes of errors and to do so with a minimum amount of time and effort.
• If testing is conducted successfully, it will uncover errors in the software.
• As a secondary benefit, testing demonstrates that software functions appear to be working according to specification, and that behavioral and performance requirements appear to have been met.
• In addition, data collected as testing is conducted provide a good indication of software reliability and some indication of software quality as a whole.
• However, testing cannot show the absence of errors and defects; it can show only that software errors and defects are present.
• It is important to keep this (rather gloomy) statement in mind as testing is being conducted.
Testing Principles
• Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide software testing.
• Davis suggests a set of testing principles:
• All tests should be traceable to customer requirements.
– As we have seen, the objective of software testing is to uncover errors.
– It follows that the most severe defects (from the customer's point of view) are those that cause the program to fail to meet its requirements.
• Tests should be planned long before testing begins.
– Test planning can begin as soon as the requirements model is complete.
– Detailed definition of test cases can begin as soon as the design model has been solidified.
– Therefore, all tests can be planned and designed before any code has been generated.
• The Pareto principle applies to software testing.
– Stated simply, the Pareto principle implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all program components.
– The problem, of course, is to isolate these suspect components and to thoroughly test them.
• Testing should begin "in the small" and progress toward testing "in the large."
– The first tests planned and executed generally focus on individual components.
– As testing progresses, focus shifts in an attempt to find errors in integrated clusters of components and ultimately in the entire system.
• Exhaustive testing is not possible.
– The number of path permutations for even a moderately sized program is exceptionally large.
– For this reason, it is impossible to execute every combination of paths during testing.
– It is possible, however, to adequately cover program logic and to ensure that all conditions in the component-level design have been exercised.
• To be most effective, testing should be conducted by an independent third party.
– By most effective, we mean testing that has the highest probability of finding errors (the primary objective of testing).
– The software engineer who created the system is not the best person to conduct all tests for the software.
Test Case Design
• Recalling the objectives of testing, we must design tests that have the highest likelihood of finding the most errors with a minimum amount of time and effort.
• A rich variety of test case design methods have evolved for software.
• These methods provide the developer with a systematic approach to testing.
• More important, the methods provide a mechanism that can help to ensure the completeness of tests and provide the highest likelihood for uncovering errors in software.
• Any engineered product (and most other things) can be tested in one of two ways:
1. Knowing the specified function that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function.
• This is termed black-box testing.
2. Knowing the internal workings of a product, tests can be conducted to ensure that all internal operations are performed according to specifications and all internal components have been adequately exercised.
• This approach is called white-box testing.
• When computer software is considered, black-box testing alludes to tests that are conducted at the software interface.
• Although they are designed to uncover errors, black-box tests are used to:
– demonstrate that software functions are operational,
– show that input is properly accepted and output is correctly produced, and
– show that the integrity of external information (e.g., a database) is maintained.
• A black-box test examines some fundamental aspect of a system with little regard for the internal logical structure of the software.
• White-box testing of software is predicated on close examination of procedural detail.
• Logical paths through the software are tested by providing test cases that exercise specific sets of conditions and/or loops.
• The "status of the program" may be examined at various points to determine whether the expected or asserted status corresponds to the actual status.
• It would seem that very thorough white-box testing would lead to "100 percent correct programs."
• However, the number of paths to be tested increases exponentially as the program size increases and can quickly become impractical.
• The way out is to exercise a limited number of important logical paths and data structures.
• The attributes of both black-box and white-box testing can be combined to provide an approach that validates the software interface and selectively ensures that the internal workings of the software are correct.
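The contrast between the two views can be shown on a tiny Python function (a hypothetical `clamp`, invented for illustration): the black-box cases come from the specification alone, while the white-box cases are chosen by looking at the code's decisions.

```python
# Function under test: clamp x into the range [lo, hi].
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# Black-box view: cases derived from the specification only
# (inputs and expected outputs, no knowledge of the code).
assert clamp(5, 0, 10) == 5      # in-range value passes through
assert clamp(-3, 0, 10) == 0     # below range is raised to lo

# White-box view: cases chosen so every decision in the code
# takes both its true and false outcome at least once.
assert clamp(-3, 0, 10) == 0     # first if true
assert clamp(99, 0, 10) == 10    # first if false, second if true
assert clamp(7, 0, 10) == 7      # both ifs false
```

Here the two sets overlap, which is common for small functions; the difference lies in how the cases were derived, not necessarily in the cases themselves.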
White-box testing
• White-box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases.
• Using white-box testing methods, the software engineer can derive test cases that:
1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.
Basis path testing
• The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths.
• Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.
Flow Graph Notation
• The flow graph depicts logical control flow using the following notation:
[Figure: the structured constructs (sequence, if, while, until, case) in flow graph form, where each circle represents one or more nonbranching PDL or source code statements]
• Each circle, called a flow graph node, represents one or more procedural statements.
• A sequence of process boxes and a decision diamond (in a flowchart) can map into a single node.
• The arrows on the flow graph, called edges or links, represent flow of control and are analogous to flowchart arrows.
• An edge must terminate at a node, even if the node does not represent any procedural statements (e.g., the symbol for the if-then-else construct).
[Figure: (A) a flowchart and (B) the corresponding flow graph, with edges, nodes 1 through 11, and regions R1 through R4 labeled]
• Areas bounded by edges and nodes are called regions.
• When counting regions, we include the area outside the graph as a region.
Cyclomatic Complexity
• Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program.
• When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program and provides us with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.
• An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.
• When stated in terms of a flow graph, an independent path must move along at least one edge that has not been traversed before the path is defined.
[Figure (repeated from the previous slide): flow graph (B) with nodes 1 through 11 and regions R1 through R4]
• The set of independent paths for the flow graph shown is:
– path 1: 1-11
– path 2: 1-2-3-4-5-10-1-11
– path 3: 1-2-3-6-8-9-10-1-11
– path 4: 1-2-3-6-7-9-10-1-11
• Note that each new path introduces a new edge.
• The path 1-2-3-4-5-10-1-2-3-6-8-9-10-1-11 is not considered to be an independent path because it is simply a combination of already specified paths and does not traverse any new edges.
• Paths 1, 2, 3, and 4 constitute a basis set for the flow graph shown.
• That is, if tests can be designed to force execution of these paths (a basis set), every statement in the program will have been guaranteed to be executed at least one time and every condition will have been executed on its true and false sides.
• It should be noted that the basis set is not unique.
• In fact, a number of different basis sets can be derived for a given procedural design.
• How do we know how many paths to look for?
• The computation of cyclomatic complexity provides the answer.
• Cyclomatic complexity has a foundation in graph theory and provides us with an extremely useful software metric.
• Complexity is computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph, G, is defined as V(G) = E − N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity, V(G), for a flow graph, G, is also defined as V(G) = P + 1, where P is the number of predicate nodes in the flow graph G.
• The cyclomatic complexity for the flow graph shown is…?
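The answer can be checked mechanically. The sketch below computes V(G) two of the three ways for the flow graph above; the edge list is my transcription from the figure (nodes 1 through 11, basis paths 1 through 4), so treat it as an assumption:

```python
# Edges of the flow graph with nodes 1..11, transcribed from the figure.
edges = [(1, 2), (1, 11), (2, 3), (3, 4), (3, 6), (4, 5), (5, 10),
         (6, 7), (6, 8), (7, 9), (8, 9), (9, 10), (10, 1)]
nodes = {n for edge in edges for n in edge}

# Way 2: V(G) = E - N + 2.
E, N = len(edges), len(nodes)
v_edges_nodes = E - N + 2

# Way 3: V(G) = P + 1, where predicate nodes are those with
# more than one outgoing edge (a decision).
out_degree = {}
for src, _ in edges:
    out_degree[src] = out_degree.get(src, 0) + 1
predicates = [n for n, d in out_degree.items() if d > 1]
v_predicates = len(predicates) + 1

print(v_edges_nodes, v_predicates)  # both give 4
```

Both computations give V(G) = 4, matching the four regions (R1 through R4) in the figure and the four paths in the basis set.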
Deriving Test Cases
• The basis path testing method can be applied to a procedural design or to source code.
• Basis path testing can be performed as a series of steps:
1. Using the design or code as a foundation, draw a corresponding flow graph.
FIGURE 17.4: PDL for test case design, with flow graph nodes identified

PROCEDURE average;
*  This procedure computes the average of 100 or fewer numbers
   that lie between bounding values; it also computes the sum
   and the total number valid.
INTERFACE RETURNS average, total.input, total.valid;
INTERFACE ACCEPTS value, minimum, maximum;

TYPE value[1:100] IS SCALAR ARRAY;
TYPE average, total.input, total.valid,
     minimum, maximum, sum IS SCALAR;
TYPE i IS INTEGER;

i = 1;                                              <-- node 1
total.input = total.valid = 0;
sum = 0;
DO WHILE value[i] <> -999 AND total.input < 100     <-- nodes 2, 3
   increment total.input by 1;                      <-- node 4
   IF value[i] >= minimum AND value[i] <= maximum   <-- nodes 5, 6
      THEN increment total.valid by 1;              <-- node 7
           sum = sum + value[i]
      ELSE skip
   ENDIF                                            <-- node 8
   increment i by 1;                                <-- node 9
ENDDO
IF total.valid > 0                                  <-- node 10
   THEN average = sum / total.valid;                <-- node 11
   ELSE average = -999;                             <-- node 12
ENDIF                                               <-- node 13
END average

FIGURE 17.5: Flow graph for the procedure average (nodes 1 through 13; each compound condition contributes an extra predicate node: 2-3 for the DO WHILE, 5-6 for the IF)
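For readers who want to execute the basis-path test cases, here is a rough Python translation of the PDL procedure `average` (the tuple return and the guard on the list length are my additions for safety, not part of the original PDL):

```python
def average(value, minimum, maximum):
    """Average the inputs that fall within [minimum, maximum].

    Processing stops at a -999 sentinel or after 100 inputs,
    mirroring the PDL's DO WHILE condition (nodes 2-3).
    """
    i = 0
    total_input = total_valid = 0
    total = 0
    while i < len(value) and value[i] != -999 and total_input < 100:
        total_input += 1                      # node 4
        if minimum <= value[i] <= maximum:    # nodes 5-6
            total_valid += 1                  # node 7
            total += value[i]
        i += 1                                # node 9
    if total_valid > 0:                       # node 10
        avg = total / total_valid             # node 11
    else:
        avg = -999                            # node 12
    return avg, total_input, total_valid
```

For example, `average([-999], 0, 100)` exercises path 2 (the sentinel as the first input drives node 10 to its false side), while a list containing an out-of-range value exercises path 4 or 5 depending on which bound is violated.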
2. Determine the cyclomatic complexity of the resultant flow graph.
– The cyclomatic complexity of the flow graph shown in the previous slide is 6.
3. Determine a basis set of linearly independent paths.
– Path 1: 1-2-10-11-13
– Path 2: 1-2-10-12-13
– Path 3: 1-2-3-10-11-13
– Path 4: 1-2-3-4-5-8-9-2-…
– Path 5: 1-2-3-4-5-6-8-9-2-…
– Path 6: 1-2-3-4-5-6-7-8-9-2-…
• It is also worthwhile to identify the predicate nodes for the derivation of test cases.
4. Prepare test cases that will force execution of each path in the basis set.
– Data should be chosen so that conditions at the predicate nodes are appropriately set as each path is tested.
– Some sample test cases that satisfy the basis set for this flow graph (average) are:
• Path 1 test case:
value(k) = valid input, where k < i for 2 ≤ i ≤ 100
value(i) = -999 where 2 ≤ i ≤ 100
Expected results: correct average based on k values and proper totals.
Note: Path 1 cannot be tested stand-alone but must be tested as part of the path 4, 5, and 6 tests.
• Path 2 test case:
value(1) = -999
Expected results: average = -999; other totals at initial values.
• Path 3 test case:
Attempt to process 101 or more values.
The first 100 values should be valid.
Expected results: same as test case 1.
• Path 4 test case:
value(i) = valid input where i < 100
value(k) < minimum where k < i
Expected results: correct average based on k values and proper totals.
• Path 5 test case:
value(i) = valid input where i < 100
value(k) > maximum where k <= i
Expected results: correct average based on n values and proper totals.
• Path 6 test case:
value(i) = valid input where i < 100
Expected results: correct average based on n values and proper totals.
• Each test case is executed and its results are compared to the expected results.
• Once all test cases have been completed, the tester can be sure that all statements in the program have been executed at least once.
• It is important to note that some independent paths (e.g., path 1) cannot be tested in stand-alone fashion.
• In such cases, these paths are tested as part of another path test.
• Self Study: Control Structure Testing
Testing Strategies
• Testing is a set of activities that can be planned in advance and conducted systematically.
• For this reason a template for software testing (a set of steps into which we can place specific test case design techniques and testing methods) should be defined for the software process.
• A number of software testing strategies have been proposed in the literature.
• All provide the software developer with a template for testing and all have the following generic characteristics:
– Testing begins at the component level and works "outward" toward the integration of the entire computer-based system.
– Different testing techniques are appropriate at different points in time.
– Testing is conducted by the developer of the software and (for large projects) an independent test group.
– Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
• A strategy for software testing must accommodate low-level tests that verify that a small source code segment has been correctly implemented, as well as high-level tests that validate major system functions against customer requirements.
• A strategy must provide guidance for the practitioner and a set of milestones for the manager.
• Because the steps of the test strategy occur at a time when deadline pressure begins to rise, progress must be measurable and problems must surface as early as possible.
Verification and Validation
• Software testing is one element of a broader topic that is often referred to as verification and validation (V&V).
• Verification refers to the set of activities that ensure that software correctly implements a specific function.
• Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
• Boehm states this another way:
– Verification: "Are we building the product right?"
– Validation: "Are we building the right product?"
• The definition of V&V encompasses many of the activities that we have referred to as software quality assurance (SQA).
• These SQA activities include:
– formal technical reviews,
– quality and configuration audits,
– performance monitoring,
– simulation,
– feasibility study,
– documentation review,
– database review,
– algorithm analysis,
– development testing,
– qualification testing, and
– installation testing.
• Testing does provide the last bastion from which quality can be assessed and, more pragmatically, errors can be uncovered.
• But testing should not be viewed as a safety net.
• As they say:
"You can't test in quality. If it's not there before you begin testing, it won't be there when you're finished testing."
• Quality is incorporated into software throughout the process of software engineering.
 
Myths about testing
• There are often a number of misconceptions about testing:
1. The developer of software should do no testing at all.
2. The software should be "tossed over the wall" to strangers who will test it mercilessly.
3. Testers get involved with the project only when the testing steps are about to begin.
• Each of these statements is incorrect.
Facts about testing
• The software developer is always responsible for testing the individual units (components) of the program, ensuring that each performs the function for which it was designed.
• In many cases, the developer also conducts integration testing, a testing step that leads to the construction (and test) of the complete program structure.
• Only after the software architecture is complete does an independent test group (ITG) become involved.
• The role of an independent test group (ITG) is to remove the inherent problems associated with letting the builder test the thing that has been built.
• Independent testing removes the conflict of interest that may otherwise be present.
• After all, personnel in the independent test group are paid to find errors.
• However, the software engineer doesn't turn the program over to the ITG and walk away.
• The developer and the ITG work closely throughout a software project to ensure that thorough tests will be conducted.
• While testing is conducted, the developer must be available to correct errors that are uncovered.
• The ITG is part of the software development project team in the sense that it becomes involved during the specification activity and stays involved (planning and specifying test procedures) throughout a large project.
• However, in many cases the ITG reports to the software quality assurance organization, thereby achieving a degree of independence that might not be possible if it were a part of the software engineering organization.
A software testing strategy
• The software engineering process may be viewed as a spiral.
[Figure, from Chapter 18 (Software Testing Strategies): the testing spiral. Moving inward: system engineering, requirements, design, code. Moving outward: unit testing, integration testing, validation testing, system testing.]
• Initially, system engineering defines the role of software and leads to software requirements analysis, where the information domain, function, behavior, performance, constraints, and validation criteria for software are established.
• Moving inward along the spiral, we come to design and finally to coding.
• To develop computer software, we spiral inward along streamlines that decrease the level of abstraction on each turn.
• A strategy for software testing may also be viewed in the context of the spiral.
• Unit testing begins at the vortex of the spiral and concentrates on each unit (i.e., component) of the software as implemented in source code.
• Testing progresses by moving outward along the spiral to integration testing, where the focus is on design and the construction of the software architecture.
• Taking another turn outward on the spiral, we encounter validation testing, where requirements established as part of software requirements analysis are validated against the software that has been constructed.
• Finally, we arrive at system testing, where the software and other system elements are tested as a whole.
• To test computer software, we spiral out along streamlines that broaden the scope of testing with each turn.
• Considering the process from a procedural point of view, testing within the context of software engineering is actually a series of four steps that are implemented sequentially.
[Figure 18.2, from Part Three (Conventional Methods for Software Engineering): software testing steps. The testing "direction" runs from code through design to requirements: unit test, then integration test, then high-order tests.]
• Initially, tests focus on each component individually, ensuring that it functions properly as a unit.
• Hence, the name unit testing.
• Unit testing makes heavy use of white-box testing techniques, exercising specific paths in a module's control structure to ensure complete coverage and maximum error detection.
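As a minimal illustration of this white-box style of unit testing (the module under test and both test names are hypothetical), each control path of a small function gets its own test case, so every branch is exercised at least once:

```python
import unittest

def classify(x):
    """Toy unit under test with two control paths (a hypothetical
    stand-in for a real component)."""
    if x < 0:
        return "negative"
    return "non-negative"

class ClassifyTest(unittest.TestCase):
    # White-box style: one test per control path, so both branches
    # of the unit's control structure are covered.
    def test_negative_path(self):
        self.assertEqual(classify(-1), "negative")

    def test_non_negative_path(self):
        self.assertEqual(classify(0), "non-negative")

# Run the unit tests programmatically so the sketch is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ClassifyTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```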
• Next, components must be assembled or integrated to form the complete software package.
• Integration testing addresses the issues associated with the dual problems of verification and program construction.
• Black-box test case design techniques are the most prevalent during integration, although a limited amount of white-box testing may be used to ensure coverage of major control paths.
• After the software has been integrated (constructed), a set of high-order tests is conducted.
• Validation criteria (established during requirements analysis) must be tested.
• Validation testing provides final assurance that the software meets all functional, behavioral, and performance requirements.
• Black-box testing techniques are used exclusively during validation.
• The last high-order testing step falls outside the boundary of software engineering and into the broader context of computer system engineering.
• Software, once validated, must be combined with other system elements (e.g., hardware, people, databases).
• System testing verifies that all elements mesh properly and that overall system function/performance is achieved.
Criteria for completion of testing
• A classic question arises every time software testing is discussed: "When are we done testing? How do we know that we've tested enough?"
• Sadly, there is no definitive answer to this question, but there are a few pragmatic responses and early attempts at empirical guidance.
• Musa and Ackerman suggest a response that is based on statistical criteria:
"No, we cannot be absolutely certain that the software will never fail, but relative to a theoretically sound and experimentally validated statistical model, we have done sufficient testing to say with 95 percent confidence that the probability of 1000 CPU hours of failure-free operation in a probabilistically defined environment is at least 0.995."
• Using statistical modeling and software reliability theory, models of software failures (uncovered during testing) as a function of execution time can be developed.
• One version of the failure model, called a logarithmic Poisson execution-time model, takes the form:
f(t) = (1/p) ln(λ0 p t + 1)
– where f(t) = the cumulative number of failures that are expected to occur once the software has been tested for a certain amount of execution time t,
– λ0 = the initial software failure intensity (failures per time unit) at the beginning of testing,
– p = the exponential reduction in failure intensity as errors are uncovered and repairs are made.
• The instantaneous failure intensity, λ(t), can be derived by taking the derivative of f(t):
λ(t) = λ0 / (λ0 p t + 1)
• Using the relationship in the previous slide, testers can predict the drop-off of errors as testing progresses.
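The two formulas above can be sketched directly in code. The parameter values below are made up purely for illustration; the model itself (f(t) and its derivative λ(t)) follows the equations on the previous slides:

```python
import math

def cumulative_failures(t, lambda0, p):
    """f(t) = (1/p) * ln(lambda0 * p * t + 1): expected cumulative
    number of failures after t units of execution time."""
    return (1.0 / p) * math.log(lambda0 * p * t + 1)

def failure_intensity(t, lambda0, p):
    """lambda(t) = lambda0 / (lambda0 * p * t + 1): instantaneous
    failure intensity, the derivative of f(t)."""
    return lambda0 / (lambda0 * p * t + 1)

# Illustrative (made-up) parameters: 10 failures per test hour at the
# start of testing, reduction parameter p = 0.05.
lambda0, p = 10.0, 0.05
for t in (0, 10, 100):
    print(t, cumulative_failures(t, lambda0, p), failure_intensity(t, lambda0, p))
```

As the model predicts, the intensity starts at λ0 and falls as execution time grows, while cumulative failures keep rising ever more slowly.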
• The actual error intensity can be plotted against the predicted curve.
[Figure 18.3: failure intensity as a function of execution time. Data collected during testing (failures per test hour) is plotted against the predicted failure intensity λ(t), which starts at λ0 and falls as execution time t grows.]
• If the actual data gathered during testing and the logarithmic Poisson execution-time model are reasonably close to one another over a number of data points, the model can be used to predict the total testing time required to achieve an acceptably low failure intensity.
• By collecting metrics during software testing and making use of existing software reliability models, it is possible to develop meaningful guidelines for answering the question: "When are we done testing?"
• There is little debate that further work remains to be done before quantitative rules for testing can be established, but the empirical approaches that currently exist are considerably better than raw intuition.
Integration Testing
• You might ask a seemingly legitimate question once all modules have been unit tested: "If they all work individually, why do you doubt that they'll work when we put them together?"
• The problem, of course, is "putting them together": interfacing.
– Data can be lost across an interface;
– One module can have an inadvertent, adverse effect on another;
– Subfunctions, when combined, may not produce the desired major function;
– Individually acceptable imprecision may be magnified to unacceptable levels;
– Global data structures can present problems.
• Sadly, the list goes on and on.
• Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing.
• The objective is to take unit-tested components and build a program structure that has been dictated by design.
• There is often a tendency to attempt nonincremental integration; that is, to construct the program using a "big bang" approach.
• All components are combined in advance.
• The entire program is tested as a whole.
• And chaos usually results!
• A set of errors is encountered.
• Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program.
• Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.
• Incremental integration is the antithesis of the big bang approach.
• The program is constructed and tested in small increments, where:
– errors are easier to isolate and correct;
– interfaces are more likely to be tested completely;
– a systematic test approach may be applied.
• The following slides discuss a number of different incremental integration strategies.
Top-Down Integration
• Top-down integration testing is an incremental approach to construction of the program structure.
• Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program).
• Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
[Figure 18.6: a sample program structure. M1 at the top; M2, M3, and M4 directly subordinate; M5, M6, and M7 at the next level; M8 at the bottom.]
• Referring to the figure in the previous slide, depth-first integration would integrate all components on a major control path of the structure.
• Selection of a major path is somewhat arbitrary and depends on application-specific characteristics.
• For example, selecting the left-hand path, components M1, M2, and M5 would be integrated first.
• Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated.
• Then, the central and right-hand control paths are built.
• Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally.
• From the figure, components M2, M3, and M4 would be integrated first.
• The next control level, M5, M6, and so on, follows.
 
• The integration process is performed in a series of five steps:
1. The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth first or breadth first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing (discussed later) may be conducted to ensure that new errors have not been introduced.
• The process continues from step 2 until the entire program structure is built.
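The five steps above can be sketched in miniature. All module names here are hypothetical: a stub with canned behavior stands in for a subordinate module while the main control module is tested, and is then replaced by the real component and the tests re-run:

```python
# Minimal sketch of top-down integration (hypothetical module names).

def stub_m2():
    # Step 1: a stub with limited, canned behavior stands in for
    # the real subordinate component M2.
    return "canned-m2"

def real_m2():
    # The actual component that will eventually replace the stub.
    return "real-m2"

def main_control(m2=stub_m2):
    # The main control module acts as the test driver; its subordinate
    # is injected so a stub or the real component can be plugged in.
    return f"main({m2()})"

# Step 1: exercise the main control module against the stub.
assert main_control() == "main(canned-m2)"

# Steps 2-4: replace the stub with the actual component and re-test.
assert main_control(m2=real_m2) == "main(real-m2)"
```

In a real project the "canned" behavior of a stub can grow complex, which is exactly the overhead problem discussed in the slides that follow.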
 
Benefits of top-down integration
• The top-down integration strategy verifies major control or decision points early in the test process.
• In a well-factored program structure, decision making occurs at upper levels in the hierarchy and is therefore encountered first.
• If major control problems do exist, early recognition is essential.
• If depth-first integration is selected, a complete function of the software may be implemented and demonstrated.
Problems with top-down integration
• The top-down strategy sounds relatively uncomplicated, but in practice, logistical problems can arise.
• The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels.
• Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure.
• The tester is left with three choices:
1. Delay many tests until stubs are replaced with actual modules;
2. Develop stubs that perform limited functions that simulate the actual module; or
3. Integrate the software from the bottom of the hierarchy upward.
• The first approach (delay tests until stubs are replaced by actual modules) causes us to lose some control over the correspondence between specific tests and the incorporation of specific modules.
• This can lead to difficulty in determining the cause of errors and tends to violate the highly constrained nature of the top-down approach.
• The second approach is workable but can lead to significant overhead, as stubs become more and more complex.
• The third approach, called bottom-up testing, is discussed in the next section.
Bottom-up integration
• Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure).
• Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated.
• A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
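These steps can also be sketched in miniature (all names hypothetical): two atomic components form a cluster, and a throwaway driver coordinates test input and output until the cluster's real superordinate module exists:

```python
# Minimal sketch of bottom-up integration (hypothetical names).

def parse(raw):
    # Atomic component 1: turn "1,2,3" into [1, 2, 3].
    return [int(x) for x in raw.split(",")]

def total(nums):
    # Atomic component 2: sum a list of numbers.
    return sum(nums)

def cluster_driver(raw):
    # Driver for the parse+total cluster; it is discarded once the real
    # superordinate module integrates the cluster directly.
    return total(parse(raw))

# Step 3: the cluster is tested through the driver.
assert cluster_driver("1,2,3") == 6
```

Because the atomic components already exist, no stubs are needed; the only throwaway code is the driver, which mirrors the trade-off described above.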
• Integration follows the pattern illustrated below:
[Figure 18.7: bottom-up integration. Clusters 1, 2, and 3 are tested with drivers D1, D2, and D3 (shown as dashed blocks); clusters 1 and 2 are combined under Ma, cluster 3 under Mb, and Ma and Mb under Mc.]
• Components are combined to form clusters 1, 2, and 3.
• Each of the clusters is tested using a driver (shown as a dashed block).
• Components in clusters 1 and 2 are subordinate to Ma.
• Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma.
• Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb.
• Both Ma and Mb will ultimately be integrated with component Mc, and so forth.
• As integration moves upward, the need for separate test drivers lessens.
• In fact, if the top two levels of the program structure are integrated top down, the number of drivers can be reduced substantially and integration of clusters is greatly simplified.
Regression Testing
• Each time a new module is added as part of integration testing, the software changes:
– New data flow paths are established;
– New I/O may occur;
– New control logic is invoked.
• These changes may cause problems with functions that previously worked flawlessly.
• In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.
• In a broader context, successful tests (of any kind) result in the discovery of errors, and errors must be corrected.
• Whenever software is corrected, some aspect of the software configuration (the program, its documentation, or the data that support it) is changed.
• Regression testing is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors.
• Regression testing may be conducted manually, by re-executing a subset of all test cases, or using automated capture/playback tools.
• Capture/playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison.
• The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
1. A representative sample of tests that will exercise all software functions.
2. Additional tests that focus on software functions that are likely to be affected by the change.
3. Tests that focus on the software components that have been changed.
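The three classes above can be combined mechanically. The sketch below is a hedged illustration, not a real tool: every identifier (test names, the function-to-test and component-to-test maps) is hypothetical, and it simply unions one selection per class:

```python
def select_regression_suite(all_tests, representative, tests_by_function,
                            affected_functions, changed_components,
                            tests_by_component):
    """Assemble a regression suite from the three classes of test cases."""
    suite = set(representative)                    # class 1: broad sample
    for fn in affected_functions:                  # class 2: likely affected
        suite.update(tests_by_function.get(fn, []))
    for comp in changed_components:                # class 3: changed code
        suite.update(tests_by_component.get(comp, []))
    # Only keep tests that actually exist in the full test set.
    return sorted(suite & set(all_tests))

# Hypothetical example: four tests total, one change to an "auth" component.
suite = select_regression_suite(
    all_tests=["t1", "t2", "t3", "t4"],
    representative=["t1"],
    tests_by_function={"login": ["t2"]},
    affected_functions=["login"],
    changed_components=["auth"],
    tests_by_component={"auth": ["t3"]},
)
assert suite == ["t1", "t2", "t3"]
```

Note that t4 is excluded: it belongs to none of the three classes, which is exactly the pruning the next slide argues for.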
• As integration testing proceeds, the number of regression tests can grow quite large.
• Therefore, the regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions.
• It is impractical and inefficient to re-execute every test for every program function once a change has occurred.