
Definition of COCOMO Model

The COCOMO (Constructive Cost Model) is one of the most widely used
software cost estimation models: it estimates or predicts the effort required
for a project, the total project cost and the scheduled time for the project. The model
is based on the number of lines of code of the software product to be developed. It was
developed by the software engineer Barry Boehm in 1981.
What is COCOMO Model?
COCOMO estimates the cost of software product development in terms of
effort (the resources required to complete the project work) and schedule (the time
required to complete the project work), based on the size of the software product.
It estimates the number of Man-Months (MM) required for the full development
of the software product. According to COCOMO, there are three modes of software
development projects, depending on complexity:

1. Organic Project
A small and simple software project handled by a small team
with good domain knowledge and few rigid requirements.

Example: Small data processing or Inventory management system.

2. Semidetached Project
An intermediate project (in terms of size and complexity), where a team
with mixed experience (both experienced and inexperienced resources) deals
with a mix of rigid and non-rigid requirements.

Example: Database design or OS development.

3. Embedded Project
A project with a high level of complexity and a large team size, in which
all sets of parameters (software, hardware and operational) must be considered.

Example: Banking software or Traffic light control software.

Types of COCOMO Model


Depending upon the complexity of the project, COCOMO has three types:
1. The Basic COCOMO
It is a static model that estimates software development effort
quickly and roughly. It deals mainly with the number of lines of code, and the level
of estimation accuracy is low because not all parameters of the project are
considered. The estimated effort and scheduled time for the project are given by
the relations:

Effort (E) = a*(KLOC)^b MM

Scheduled Time (D) = c*(E)^d Months (M)
Where,

 E = Total effort required for the project in Man-Months (MM).


 D = Total time required for project development in Months (M).
 KLOC = the size of the code for the project in Kilo lines of code.
 a, b, c, d = The constant parameters for a software project.

PROJECT TYPE     a     b      c     d

Organic          2.4   1.05   2.5   0.38
Semidetached     3.0   1.12   2.5   0.35
Embedded         3.6   1.20   2.5   0.32

Example: A project is estimated to have a size of 300 KLOC. Calculate the
effort and scheduled time for development. Also calculate the average resource size
and the productivity of the software for the Organic project type.

Ans: The given estimated size of the project is 300 KLOC.

For Organic
Effort (E) = a*(KLOC)^b = 2.4*(300)^1.05 = 957.61 MM
Scheduled Time (D) = c*(E)^d = 2.5*(957.61)^0.38 = 33.95 Months (M)
Avg. Resource Size = E/D = 957.61/33.95 = 28.21 persons
Productivity of Software = KLOC/E = 300/957.61 = 0.3132 KLOC/MM = 313 LOC/MM

For Semidetached
Effort (E) = a*(KLOC)^b = 3.0*(300)^1.12 = 1784.42 MM
Scheduled Time (D) = c*(E)^d = 2.5*(1784.42)^0.35 = 34.35 Months (M)

For Embedded
Effort (E) = a*(KLOC)^b = 3.6*(300)^1.2 = 3379.46 MM
Scheduled Time (D) = c*(E)^d = 2.5*(3379.46)^0.32 = 33.66 Months (M)
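The worked example above can be reproduced with a short script. The following is a minimal sketch of the Basic COCOMO relations using the constant table given earlier; the function and variable names are ours, not part of any standard tool.

```python
# Minimal Basic COCOMO sketch using the constant table above.
BASIC_COCOMO = {
    # mode: (a, b, c, d)
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = BASIC_COCOMO[mode]
    effort = a * kloc ** b          # Effort E in man-months (MM)
    schedule = c * effort ** d      # Development time D in months
    avg_staff = effort / schedule   # Average resource size in persons
    productivity = kloc / effort    # KLOC per man-month
    return effort, schedule, avg_staff, productivity

if __name__ == "__main__":
    for mode in ("organic", "semidetached", "embedded"):
        e, d, staff, prod = basic_cocomo(300, mode)
        print(f"{mode:12s} E={e:8.2f} MM  D={d:5.2f} M  "
              f"staff={staff:5.2f}  productivity={prod*1000:.0f} LOC/MM")
```

Running it for 300 KLOC reproduces the figures above (for example, about 957.61 MM and 33.95 months in the organic mode).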
Estimation Factors
 How much effort is required to complete an
activity?
 How much calendar time is needed to
complete an activity?
 What is the total cost of an activity?
 Project estimation and scheduling are
interleaved management activities.
 Hardware and software costs.
 Travel and training costs.
 Effort costs (the dominant factor in most
projects)
 The salaries of engineers involved in the project;
 Social and insurance costs.
 Effort costs must take overheads into
account
 Costs of building, heating, lighting.
 Costs of networking and communications.
 Costs of shared facilities (e.g. library, staff restaurant, etc.).
 Estimates are made to discover the cost, to
the developer, of producing a software
system.
 There is not a simple relationship between
the development cost and the price charged
to the customer.
 Broader organisational, economic, political
and business considerations influence the
price charged.
A measure of the rate at which individual
engineers involved in software development
produce software and associated
documentation.
 Not quality-oriented although quality
assurance is a factor in productivity
assessment.
 Essentially, we want to measure useful
functionality produced per time unit.
 Size-related measures based on some output from the software process.
 This may be lines of delivered source code,
object code instructions, etc.
 Function-related measures based on an
estimate of the functionality of the delivered
software.
 Function-points are the best known of this type
of measure.
 What's a line of code? –
 The measure was first proposed when programs
were typed on cards with one line per card;
 This model assumes that there is a linear
relationship between system size and volume
of documentation.
 The lower level the language, the more
productive the programmer
 The same functionality takes more code to
implement in a lower-level language than in a
high-level language.
 The more verbose the programmer, the
higher the productivity
 Measures of productivity based on lines of code
suggest that programmers who write verbose
code are more productive than programmers who
write compact code.
 Based on a combination of program
characteristics
 external inputs and outputs;
 user interactions;
 external interfaces;
 files used by the system.
A weight is associated with each of these and
the function point count is computed by
multiplying each raw count by the weight
and summing all values.
 The function point count is modified by
complexity of the project
 FPs can be used to estimate LOC depending
on the average number of LOC per FP for a
given language
 LOC = AVC * number of function points;
 AVC is a language-dependent factor varying from 200-300 for assembler language to 2-40 for a 4GL;
 FPs are very subjective. They depend on the estimator.
 Automatic function-point counting is impossible.
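As a rough illustration of how an unadjusted function-point count is formed (a weighted sum of raw counts) and then backfired into LOC via LOC = AVC * FP, here is a minimal sketch. The weights, raw counts and the AVC value are illustrative assumptions, not official IFPUG figures.

```python
# Illustrative function-point count: weighted sum of raw counts.
# The weights below are assumed average weights, not an official table.
WEIGHTS = {
    "external_inputs":     4,
    "external_outputs":    5,
    "user_interactions":   4,   # external inquiries
    "external_interfaces": 7,
    "files":               10,  # files used by the system
}

def unadjusted_fp(raw_counts):
    """Multiply each raw count by its weight and sum the values."""
    return sum(WEIGHTS[name] * count for name, count in raw_counts.items())

def backfire_loc(fp, avc):
    """Estimate LOC from function points: LOC = AVC * FP (AVC is language dependent)."""
    return avc * fp

counts = {"external_inputs": 10, "external_outputs": 7,
          "user_interactions": 5, "external_interfaces": 2, "files": 4}
fp = unadjusted_fp(counts)
print(fp, "function points")
print(backfire_loc(fp, avc=100), "LOC (assumed generic 100 LOC per FP)")
```

In practice the raw count would still be modified by the project complexity adjustment before use.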
 Object points (alternatively named application points) are an alternative function-related measure to function points when 4GLs or similar languages are used for development.
 Object points are NOT the same as object
classes.
 The number of object points in a program is a
weighted estimate of
 The number of separate screens that are displayed;
 The number of reports that are produced by the
system;
 The number of program modules that must be
developed to supplement the database code;
 Object points are easier to estimate from a
specification than function points as they are
simply concerned with screens, reports and
programming language modules.
 They can therefore be estimated at a fairly
early point in the development process.
 At this stage, it is very difficult to estimate
the number of lines of code in a system.
 Real-time embedded systems, 40-160 LOC/P-
month.
 Systems programs, 150-400 LOC/P-month.
 Commercial applications, 200-900 LOC/P-
month.
 In object points, productivity has been
measured between 4 and 50 object
points/month depending on tool support and
developer capability.
 All metrics based on volume/unit time are
flawed because they do not take quality into
account.
 Productivity may generally be increased at
the cost of quality. It is not clear how
productivity/quality metrics are related.
 If requirements are constantly changing then
an approach based on counting lines of code
is not meaningful as the program itself is not
static;
 The dictionary defines "economics" as "a
social science concerned chiefly with
description and analysis of the production,
distribution, and consumption of goods and
services."
 Economics is the study of how people make
decisions in resource-limited situations.
 Macroeconomics is the study of how people
make decisions in resource-limited situations
on a national or global scale.
 It deals with the effects of decisions that
national leaders make on such issues as tax
rates, interest rates, foreign and trade
policy.
 Microeconomics is the study of how people
make decisions in resource-limited situations
on a more personal scale.
 It deals with the decisions that individuals
and organizations make on such issues as how
much insurance to buy, which word processor
to buy, or what prices to charge for their
products or services.
 If we look at the discipline of software
engineering, we see that the
microeconomics branch of economics deals
more with the types of decisions we need to
make as software engineers or managers.
 Clearly, we deal with limited resources.
 There is never enough time or money to
cover all the good features we would like to
put into our software products.
 And even in these days of cheap hardware
and virtual memory, our more significant
software products must always operate
within a world of limited computer power
and main memory.
 If you have been in the software engineering
field for any length of time, I am sure you
can think of a number of decision situations
in which you had to determine some key
software product feature as a function of
some limiting critical resource.
 Throughout the software life cycle, there are many
decision situations involving limited resources in
which software engineering economics techniques
provide useful assistance.
 To provide a feel for the nature of these economic
decision issues, an example is given below for each of
the major phases in the software life cycle.
 Feasibility Phase: How much should we invest
in information system analyses (user questionnaires
and interviews, current-system analysis, workload
characterizations, simulations, scenarios, prototypes)
in order that we converge on an appropriate
definition and concept of operation for the system
we plan to implement?
 Plans and Requirements Phase: How
rigorously should we specify requirements?
How much should we invest in requirements
validation activities (automated
completeness, consistency, and traceability
checks, analytic models, simulations,
prototypes) before proceeding to design and
develop a software system?
 Product Design Phase: Should we organize
the software to make it possible to use a
complex piece of existing software which
generally but not completely meets our
requirements?
 Programming Phase: Given a choice between three data storage and retrieval schemes which are primarily execution time-efficient, storage-efficient, and easy-to-modify, respectively, which of these should we choose to implement?
 Integration and Test Phase: How much
testing and formal verification should we
perform on a product before releasing it to
users?
 Maintenance Phase: Given an extensive list
of suggested product improvements, which ones
should we implement first?
 Phase out: Given an aging, hard-to-modify
software product, should we replace it with a
new product, restructure it, or leave it alone?
 The microeconomics field provides a number of techniques
for dealing with software life-cycle decision issues such as
the ones given in the previous section. Fig. 1 presents an
overall master key to these techniques and when to use
them.
 As indicated in Fig. 1, standard optimization techniques
can be used when we can find a single quantity such as
dollars (or pounds, yen, cruzeiros, etc.) to serve as a
"universal solvent” into which all of our decision variables
can be converted.
 Or, if the nondollar objectives can be expressed as
constraints (system availability must be at least 98
percent; throughput must be at least 150 transactions per
second), then standard constrained optimization
techniques can be used. And if cash flows occur at
different times, then present-value techniques can be
used to normalize them to a common point in time.
 More frequently, some of the resulting
benefits from the software system are not
expressible in dollars.
 In such situations, one alternative solution
will not necessarily dominate another
solution.
 An example situation is shown in Fig. 2,
which compares the cost and benefits (here,
in terms of throughput in transactions per
second) of two alternative approaches to
developing an operating system for a
transaction processing system.
 Option A: Accept an available operating
system. This will require only $80K in
software costs, but will achieve a peak
performance of 120 transactions per second,
using five $10K minicomputer processors,
because of a high multiprocessor overhead
factor.
 Option B: Build a new operating system.
This system would be more efficient and
would support a higher peak throughput, but
would require $180K in software costs.
 The cost-versus-performance curves for these two options are shown in Fig. 2. Here, neither option dominates the
other, and various cost-benefit decision-making
techniques (maximum profit margin, cost benefit ratio,
return on investments, etc.) must be used to choose
between Options A and B.
 In general, software engineering decision problems are
even more complex than Fig. 2, as Options A and B will
have several important criteria on which they differ
(e.g., robustness, ease of tuning, ease of change,
functional capability).
 If these criteria are quantifiable, then some type of figure of merit can be defined to support a comparative analysis of the preferability of one option over another.
 If some of the criteria are unquantifiable (user goodwill, programmer morale, etc.), then some techniques for comparing unquantifiable criteria need to be used.
 In software engineering, our decision issues are generally
even more complex than those discussed above. This is
because the outcome of many of our options cannot be
determined in advance.
 In such circumstances, we are faced with a problem of
decision making under uncertainty, with a considerable risk
of an undesired outcome.
 The main economic analysis techniques available to
support us in resolving such problems are the following.
 1) Techniques for decision making under complete
uncertainty, such as the maximax rule, the maximin rule,
and the Laplace rule [38]. These techniques are generally
inadequate for practical software engineering decisions.
It may be necessary to add at least one digit to U.S. telephone numbers by about the year 2015. The UNIX calendar expires in the year
2038 and could be as troublesome as the year 2000 problem. Even larger, it may be necessary
to add at least one digit to U.S. social security numbers by about the year 2050.

The imbalance between software development and maintenance is opening up new


business opportunities for software outsourcing groups. It is also generating a significant
burst of research into tools and methods for improving software maintenance
performance.

What is Software Maintenance?

The word “maintenance” is surprisingly ambiguous in a software context. In normal


usage it can span some 21 forms of modification to existing applications. The two most
common meanings of the word maintenance include: 1) Defect repairs; 2) Enhancements
or adding new features to existing software applications.

Although software enhancements and software maintenance in the sense of defect repairs
are usually funded in different ways and have quite different sets of activity patterns
associated with them, many companies lump these disparate software activities together
for budgets and cost estimates.

The author does not recommend the practice of aggregating defect repairs and
enhancements, but this practice is very common. Consider some of the basic differences
between enhancements or adding new features to applications and maintenance or defect
repairs as shown in table 1:

Table 1: Key Differences Between Maintenance and Enhancements

                        Enhancements      Maintenance
                        (New features)    (Defect repairs)

Funding source          Clients           Absorbed
Requirements            Formal            None
Specifications          Formal            None
Inspections             Formal            None
User documentation      Formal            None
New function testing    Formal            None
Regression testing      Formal            Minimal

Because the general topic of “maintenance” is so complicated and includes so many


different kinds of work, some companies merely lump all forms of maintenance together
and use gross metrics such as the overall percentage of annual software budgets devoted
to all forms of maintenance summed together.

This method is crude, but can convey useful information. Organizations which are
proactive in using geriatric tools and services can spend less than 30% of their annual
software budgets on various forms of maintenance, while organizations that have not
used any of the geriatric tools and services can top 60% of their annual budgets on
various forms of maintenance.

Although the use of the word “maintenance” as a blanket term for more than 20 kinds of
update activity is not very precise, it is useful for overall studies of national software
populations. Table 2 shows the estimated software population of the United States
between 1950 and 2025, divided into "development" and "maintenance" segments.

In this table the term “development” implies creating brand new applications or adding
major new features to existing applications. The term “maintenance” implies fixing bugs
or errors, mass updates such as the Euro and Year 2000, statutory or mandatory changes
such as rate changes, and minor augmentation such as adding features that require less
than a week of effort.

Table 2: U.S. Software Populations in Development and Maintenance

Year     Development    Maintenance    Total          Maintenance
         Personnel      Personnel      Personnel      Percent

1950 1,000 100 1,100 9.09%


1955 2,500 250 2,750 9.09%
1960 20,000 2,000 22,000 9.09%
1965 50,000 10,000 60,000 16.67%
1970 125,000 25,000 150,000 16.67%
1975 350,000 75,000 425,000 17.65%
1980 600,000 300,000 900,000 33.33%
1985 750,000 500,000 1,250,000 40.00%
1990 900,000 800,000 1,700,000 47.06%
1995 1,000,000 1,100,000 2,100,000 52.38%
2000 750,000 2,000,000 2,750,000 72.73%
2005 775,000 2,500,000 3,275,000 76.34%
2010 800,000 3,000,000 3,800,000 78.95%
2015 1,000,000 3,500,000 4,500,000 77.78%
2020 1,100,000 3,750,000 4,850,000 77.32%
2025 1,250,000 4,250,000 5,500,000 77.27%

Notice that under the double impact of the Euro and the Year 2000, so many development
projects were delayed or cancelled that the population of software developers in the
United States actually shrank below the peak year of 1995. The burst of mass update
maintenance work is one of the main reasons why there is such a large shortage of
software personnel.

As can be seen from table 2, the work of fixing errors and dealing with mass updates to
aging legacy applications has become the dominant form of software engineering. This

tendency will continue indefinitely so long as maintenance work remains labor-intensive.

Before proceeding, let us consider 21 discrete topics that are often coupled together under
the generic term “maintenance” in day to day discussions, but which are actually quite
different in many important respects:

Table 3: Major Kinds of Work Performed Under the Generic Term “Maintenance”

1. Major Enhancements (new features of > 20 function points)


2. Minor Enhancements (new features of < 5 function points)
3. Maintenance (repairing defects for good will)
4. Warranty repairs (repairing defects under formal contract)
5. Customer support (responding to client phone calls or problem reports)
6. Error-prone module removal (eliminating very troublesome code segments)
7. Mandatory changes (required or statutory changes)
8. Complexity analysis (quantifying control flow using complexity metrics)
9. Code restructuring (reducing cyclomatic and essential complexity)
10. Optimization (increasing performance or throughput)
11. Migration (moving software from one platform to another)
12. Conversion (Changing the interface or file structure)
13. Reverse engineering (extracting latent design information from code)
14. Reengineering (transforming legacy application to client-server form)
15. Dead code removal (removing segments no longer utilized)
16. Dormant application elimination (archiving unused software)
17. Nationalization (modifying software for international use)
18. Year 2000 Repairs (date format expansion or masking)
19. Euro-currency conversion (adding the new unified currency to financial applications)
20. Retirement (withdrawing an application from active service)
21. Field service (sending maintenance members to client locations)

Although the 21 maintenance topics are different in many respects, they all have one
common feature that makes a group discussion possible: They all involve modifying an
existing application rather than starting from scratch with a new application.

Although the 21 forms of modifying existing applications have different reasons for being
carried out, it often happens that several of them take place concurrently. For example,
enhancements and defect repairs are very common in the same release of an evolving
application. There are also common sequences or patterns to these modification
activities. For example, reverse engineering often precedes reengineering and the two
occur so often together as to almost comprise a linked set. For releases of large
applications and major systems, the author has observed from six to 10 forms of
maintenance all leading up to the same release!

Nominal Default Values for Maintenance and Enhancement Activities

The nominal default values for exploring these 21 kinds of maintenance are shown in
table 4. However, each of the 21 has a very wide range of variability and reacts to a
number of different technical factors, and also to the experience levels of the maintenance
personnel. Let us consider some generic default estimating values for these various
maintenance tasks using two useful metrics: “assignment scopes” and “production rates.”

The term “assignment scope” refers to the amount of software one programmer can keep
operational in the normal course of a year, assuming routine defect repairs and minor
updates. Assignment scopes are usually expressed in terms of function points and the
observed range is from less than 300 function points to more than 5,000 function points.

The term “production rate” refers to the number of units that can be handled in a standard
time period such as a work month, work week, day, or hour. Production rates are usually
expressed in terms of either “function points per staff month” or the similar and
reciprocal metric, “work hours per function point.”

We will also include “Lines of code per staff month” with the caveat that the results are
merely based on an expansion of 100 statements per function point, which is only a
generic value and should not be used for serious estimating purposes.
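The two metrics are simple reciprocals once a number of work hours per staff month is fixed. The sketch below assumes roughly 132 work hours per staff month (which is what the figures in table 4 imply) and the generic 100 statements per function point mentioned above; both are nominal assumptions, not calibrated values.

```python
# Converting between the maintenance productivity metrics used in table 4.
WORK_HOURS_PER_MONTH = 132   # nominal assumption implied by the table values
LOC_PER_FP = 100             # generic expansion rate; not for serious estimating

def production_metrics(fp_per_month):
    hours_per_fp = WORK_HOURS_PER_MONTH / fp_per_month   # reciprocal metric
    loc_per_month = fp_per_month * LOC_PER_FP
    return hours_per_fp, loc_per_month

# Example: defect repairs at 25 function points per staff month.
hours, loc = production_metrics(25)
print(f"{hours:.2f} work hours per function point")   # about 5.28
print(f"{loc} LOC per staff month")                   # about 2,500
```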

Table 4: Default Values for Maintenance Assignment Scopes and Production Rates

                               Assignment    Production     Production        Production
                               Scopes        Rates          Rates             Rates
                               (Function     (Funct. Pts.   (Work Hours       (LOC per
                               Points)       per Month)     per Funct. Pt.)   Staff Month)

Customer support 5,000 3,000 0.04 300,000


Code restructuring 5,000 1,000 0.13 100,000
Complexity analysis 5,000 500 0.26 50,000
Reverse engineering 2,500 125 1.06 12,500
Retirement 5,000 100 1.32 10,000
Field service 10,000 100 1.32 10,000
Dead code removal 750 35 3.77 3,500
Enhancements (minor) 75 25 5.28 2,500
Reengineering 500 25 5.28 2,500
Maintenance (defect repairs) 750 25 5.28 2,500
Warranty repairs 750 20 6.60 2,000
Migration to new platform 300 18 7.33 1,800
Enhancements (major) 125 15 8.80 1,500
Nationalization 250 15 8.80 1,500
Conversion to new interface 300 15 8.80 1,500
Mandatory changes 750 15 8.80 1,500
Performance optimization 750 15 8.80 1,500
Year 2000 repairs 2,000 15 8.80 1,500

Euro-currency conversion 1,500 15 8.80 1,500
Error-prone module removal 300 12 11.00 1,200
Average 2,080 255 5.51 25,450

Each of these forms of modification or support activity has wide variations, but these
nominal default values at least show the ranges of possible outcomes for all of the major
activities associated with support of existing applications.

Table 5 shows some of the factors and ranges that are associated with assignment scopes,
or the amount of software that one programmer can keep running in the course of a
typical year.

In table 5 the term “experienced staff” means that the maintenance team has worked on
the applications being modified for at least six months and is quite familiar with the
available tools and methods.

The term “good structure” means that the application adheres to the basic tenets of
structured programming; has clear and adequate comments; and has cyclomatic
complexity levels that are below a value of 10.

The term “full maintenance tools” implies the availability of most of these common
forms of maintenance tools: 1) Defect tracking and routing tools; 2) Change control
tools; 3) Complexity analysis tools; 4) Code restructuring tools; 5) Reverse engineering
tools; 6) Reengineering tools; 7) Maintenance “workbench” tools; 8) Test coverage
tools.

The term “high level language” implies a fairly modern programming language that
requires less than 50 statements to encode 1 function point. Examples of such languages
include most object-oriented languages such as Smalltalk, Eiffel, and Objective C.

By contrast, “low level languages” implies languages requiring more than 100 statements
to encode 1 function point. Obviously assembly language would be in this class, since it
usually takes 200 to 300 or more assembly statements per function point. Other
languages that top 100 statements per function point include many mainstream languages
such as C, Fortran, and COBOL.

In between the high-level and low-level ranges are a variety of mid-level languages that
require roughly 70 statements per function point, such as Ada83, PL/I, and Pascal.

The variations in maintenance assignment scopes are significant in understanding why so
many people are currently engaged in maintenance of aging legacy applications. If a
company owns a portfolio of 100,000 function points maintained by generalists, many
more people will be required than if maintenance specialists are used. If the portfolio
consists of poorly structured code written in low-level languages, then the assignment
scope might be less than 500 function points, implying a staff of 200 maintenance personnel.

If the company has used complexity analysis tools, code restructuring tools, and has a
staff of highly trained maintenance specialists then the maintenance assignment scope
might top 3,000 function points. This implies that only 33 maintenance experts are
needed, as opposed to 200 generalists. Table 5 illustrates how maintenance assignment
scopes vary in response to four different factors, when each factor switches from “worst
case” to “best case.” Table 5 assumes Version 4.1 of the International Function Point
Users Group (IFPUG) counting practices manual.
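The staffing arithmetic in the two preceding paragraphs is simply portfolio size divided by assignment scope; a minimal sketch:

```python
# Maintenance staff needed = portfolio size / assignment scope per person.
def maintenance_staff(portfolio_fp, assignment_scope_fp):
    return portfolio_fp / assignment_scope_fp

print(maintenance_staff(100_000, 500))    # generalists, poorly structured code: 200 people
print(maintenance_staff(100_000, 3_000))  # trained specialists with full tools: about 33 people
```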

Table 5: Variations in Maintenance Assignment Scopes Based on Four Key Factors


(Data expressed in terms of function points per maintenance team member)

                               Worst Case    Average Case    Best Case

Inexperienced staff 100 200 350


Poor structure
Low-level language
No maintenance tools

Inexperienced staff 150 300 500


Poor structure
High-level language
No maintenance tools

Inexperienced staff 225 400 600


Poor structure
Low-level language
Full maintenance tools

Inexperienced staff 300 500 750


Good structure
Low-level language
No maintenance tools

Experienced Staff 350 575 900


Poor structure
Low-level language
No maintenance tools

Inexperienced staff 450 650 1,100


Good structure
High-level language
No maintenance tools

Inexperienced staff 575 800 1,400


Good structure
Low-level language
Full maintenance tools

Experienced staff 700 1,100 1,600
Good structure
Low-level language
No maintenance tools

Inexperienced staff 900 1,400 2,100


Poor structure
High-level language
Full maintenance tools

Experienced staff 1,050 1,700 2,400


Poor structure
Low-level language
Full maintenance tools

Experienced staff 1,150 1,850 2,800


Poor structure
High-level language
No maintenance tools

Experienced staff 1,600 2,100 3,200


Good structure
High-level language
No maintenance tools

Inexperienced staff 1,800 2,400 3,750


Good structure
High-level language
Full maintenance tools

Experienced staff 2,100 2,800 4,500


Poor structure
High-level language
Full maintenance tools

Experienced staff 2,300 3,000 5,000


Good structure
Low-level language
Full maintenance tools

Experienced staff 2,600 3,500 5,500


Good structure
High-level language
Full maintenance tools

Average 1,022 1,455 2,278

None of the values in table 5 is sufficiently rigorous by itself for formal cost
estimates, but they are sufficient to illustrate some of the typical trends in various kinds of
maintenance work. Obviously, adjustments for team experience, complexity of the
application, programming languages, and many other local factors are needed as well.

Metrics Problems With Small Maintenance Projects

There are several difficulties in exploring software maintenance costs with accuracy. One
of these difficulties is the fact that maintenance tasks are often assigned to development
personnel who interleave both development and maintenance as the need arises. This
practice makes it difficult to distinguish maintenance costs from development costs
because the programmers are often rather careless in recording how time is spent.

Another and very significant problem is the fact that a great deal of software maintenance
consists of making very small changes to software applications. Quite a few bug repairs
may involve fixing only a single line of code. Adding minor new features such as
perhaps a new line-item on a screen may require less than 50 source code statements.

These small changes are below the effective lower limit for counting function point
metrics. The function point metric includes weighting factors for complexity, and even if
the complexity adjustments are set to the lowest possible point on the scale, it is still
difficult to count function points below a level of perhaps 15 function points.

Quite a few maintenance tasks involve changes that are either a fraction of a function
point, or may at most be less than 10 function points or about 1000 COBOL source code
statements. Although normal counting of function points is not feasible for small
updates, it is possible to use the “backfiring” method of converting counts of logical
source code statements into equivalent function points. For example, suppose an update
requires adding 100 COBOL statements to an existing application. Since it usually takes
about 105 COBOL statements in the procedure and data divisions to encode 1 function
point, it can be stated that this small maintenance project is “about 1 function point in
size.”

If the project takes one work day consisting of six hours, then at least the results can be
expressed using common metrics. In this case, the results would be roughly “6 staff
hours per function point.” If the reciprocal metric “function points per staff month” is
used, and there are 20 working days in the month, then the results would be “20 function
points per staff month.”
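The backfiring arithmetic of this example can be written out as a short sketch; the 105 COBOL statements per function point, the six-hour work day, and the 20 working days per month are the values used in the text above.

```python
# Backfiring a small COBOL maintenance update into function-point terms.
COBOL_STATEMENTS_PER_FP = 105   # typical expansion rate cited in the text

statements_changed = 100
work_hours = 6                  # one work day of six hours
working_days_per_month = 20

fp = statements_changed / COBOL_STATEMENTS_PER_FP   # ~0.95, i.e. "about 1 function point"
hours_per_fp = work_hours / fp                      # ~6.3, i.e. roughly 6 staff hours per FP
fp_per_staff_month = fp * working_days_per_month    # ~19, i.e. roughly 20 FP per staff month

print(round(fp, 2), round(hours_per_fp, 1), round(fp_per_staff_month, 1))
```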

Best and Worst Practices in Software Maintenance

Because maintenance of aging legacy software is very labor-intensive, it is quite important
to explore the best and most cost-effective methods available for dealing with the millions
of applications that currently exist. The sets of best and worst practices are not
symmetrical. For example, the practice that has the most positive impact on maintenance
productivity is the use of trained maintenance experts. However, the factor that has the
greatest negative impact is the presence of “error-prone modules” in the application that
is being maintained.

Table 6 illustrates a number of factors which have been found to exert a beneficial
positive impact on the work of updating aging applications and shows the percentage of
improvement compared to average results:

Table 6: Impact of Key Adjustment Factors on Maintenance


(Sorted in order of maximum positive impact)

Maintenance Factors                      Plus Range

Maintenance specialists 35%


High staff experience 34%
Table-driven variables and data 33%
Low complexity of base code 32%
Y2K and special search engines 30%
Code restructuring tools 29%
Reengineering tools 27%
High level programming languages 25%
Reverse engineering tools 23%
Complexity analysis tools 20%
Defect tracking tools 20%
Y2K “mass update” specialists 20%
Automated change control tools 18%
Unpaid overtime 18%
Quality measurements 16%
Formal base code inspections 15%
Regression test libraries 15%
Excellent response time 12%
Annual training of > 10 days 12%
High management experience 12%
HELP desk automation 12%
No error prone modules 10%
On-line defect reporting 10%
Productivity measurements 8%
Excellent ease of use 7%
User satisfaction measurements 5%
High team morale 5%

Sum 503%

At the top of the list of maintenance “best practices” is the utilization of full-time, trained
maintenance specialists rather than turning over maintenance tasks to untrained
generalists. The positive impact from utilizing maintenance specialists is one of the
reasons why maintenance outsourcing has been growing so rapidly. The maintenance
productivity rates of some of the better maintenance outsource companies are roughly
twice those of their clients prior to the completion of the outsource agreement. Thus even
if the outsource vendor costs are somewhat higher, there can still be useful economic
gains.

Let us now consider some of the factors which exert a negative impact on the work of
updating or modifying existing software applications. Note that the top-ranked factor
which reduces maintenance productivity, the presence of error-prone modules, is very
asymmetrical. The absence of error-prone modules does not speed up maintenance work,
but their presence definitely slows down maintenance work.

Error-prone modules were discovered by IBM in the 1960s, when IBM’s quality
measurements began to track errors or bugs down to the level of specific modules. For
example, it was discovered that IBM’s IMS data base product contained 425 modules, but
more than 300 of these were zero-defect modules that never received any bug reports.
About 60% of all reported errors were found in only 31 modules, and these were very
buggy indeed.

When this form of analysis was applied to other products and used by other companies, it
was found to be a very common phenomenon. In general more than 80% of the bugs in
software applications are found in less than 20% of the modules. Once these modules are
identified then they can be inspected, analyzed, and restructured to reduce their error
content down to safe levels.

Table 7 summarizes the major factors that degrade software maintenance performance.
Not only are error-prone modules troublesome, but many other factors can degrade
performance too. For example, very complex “spaghetti code” is quite difficult to
maintain safely. It is also troublesome to have maintenance tasks assigned to generalists
rather than to trained maintenance specialists.

A very common situation which often degrades performance is lack of suitable


maintenance tools, such as defect tracking software, change management software, test
library software, and so forth. In general it is very easy to botch up maintenance and
make it such a labor-intensive activity that few resources are left over for development
work. The simultaneous arrival of the year 2000 and Euro problems has basically
saturated the available maintenance teams, and is also drawing developers into the work
of making mass updates. This situation can be expected to last for many years, and may
introduce permanent changes into software economic structures.

Table 7: Impact of Key Adjustment Factors on Maintenance
(Sorted in order of maximum negative impact)

Maintenance Factors                      Minus Range

Error prone modules -50%


Embedded variables and data -45%
Staff inexperience -40%
High complexity of base code -30%
No Y2K or special search engines -28%
Manual change control methods -27%
Low level programming languages -25%
No defect tracking tools -24%
No Y2K “mass update” specialists -22%
Poor ease of use -18%
No quality measurements -18%
No maintenance specialists -18%
Poor response time -16%
Management inexperience -15%
No base code inspections -15%
No regression test libraries -15%
No HELP desk automation -15%
No on-line defect reporting -12%
No annual training -10%
No code restructuring tools -10%
No reengineering tools -10%
No reverse engineering tools -10%
No complexity analysis tools -10%
No productivity measurements -7%
Poor team morale -6%
No user satisfaction measurements -4%
No unpaid overtime 0%

Sum -500%

Given the enormous amount of effort that is now being applied to software maintenance,
and which will be applied in the future, it is obvious that every corporation should
attempt to adopt maintenance “best practices” and avoid maintenance “worst practices” as
rapidly as possible.

Software Entropy and Total Cost of Ownership

The word “entropy” refers to the tendency of systems to destabilize and become more
chaotic over time. Entropy is a term from physics rather than a software-related word,
but it applies to all complex systems, including software: all known
compound objects decay and become more complex with the passage of time unless
effort is exerted to keep them repaired and updated. Software is no exception. The
accumulation of small updates over time tends to gradually degrade the initial structure of
applications and makes changes grow more difficult over time.

 2) Expected-value techniques, in which we
estimate the probabilities of occurrence of each
outcome (successful or unsuccessful
development of the new operating system) and
compute the expected payoff of each option:
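A minimal numeric sketch of such an expected-value comparison follows; the probabilities and payoffs are invented purely for illustration and are not taken from the text.

```python
# Expected-value comparison of two options under uncertainty.
# All probabilities and payoffs below are illustrative assumptions.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

option_a = [(1.0, 100_000)]                      # accept existing OS: certain, modest payoff
option_b = [(0.6, 250_000), (0.4, -50_000)]      # build new OS: 60% success, 40% loss

print("Option A expected payoff:", expected_value(option_a))   # 100,000
print("Option B expected payoff:", expected_value(option_b))   # 130,000
```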

 3) Techniques in which we reduce uncertainty by


buying information. For example, prototyping is a
way of buying information to reduce our uncertainty
about the likely success or failure of a multiprocessor
operating system; by developing a rapid prototype of
its high-risk elements, we can get a clearer picture of
our likelihood of successfully developing the full
operating system.
 In general, prototyping and other options for buying
information are most valuable aids for software
engineering decisions.
 However, they always raise the following question: "how
much information-buying is enough?"
 In principle, this question can be answered via statistical
decision theory techniques involving the use of Bayes' Law,
which allows us to calculate the expected payoff from a
software project as a function of our level of investment in
a prototype or other information-buying option. (Some
examples of the use of Bayes' Law to estimate the
appropriate level of investment in a prototype are given in
[1, ch. 20].)
 In practice, the use of Bayes' Law involves the estimation
of a number of conditional probabilities which are not easy
to estimate accurately.
 However, the Bayes' Law approach can be translated
into a number of value-of-information guidelines, or
conditions under which it makes good sense to decide on
investing in more information before committing
ourselves to a particular course of action.
 Condition 1: There exist attractive alternatives whose
payoff varies greatly, depending on some critical states
of nature. If not, we can commit ourselves to one of the
attractive alternatives with no risk of significant loss.
 Condition 2: The critical states of nature have an
appreciable probability of occurring. If not, we can
again commit ourselves without major risk. For situations
with extremely high variations in payoff, the appreciable
probability level is lower than in situations with smaller
variations in payoff.
 Condition 3: The investigations have a high
probability of accurately identifying the occurrence
of the critical states of nature. If not, the
investigations will not do much to reduce our risk of
loss due to making the wrong decision.
 Condition 4: The required cost and schedule of the
investigations do not overly curtail their net value.
It does us little good to obtain results which cost
more than they can save us, or which arrive too late
to help us make a decision.
 Condition 5: There exist significant side benefits
derived from performing the investigations. Again,
we may be able to justify an investigation solely on
the basis of its value in training, team-building,
customer relations, or design validation.
 The pitfalls below are expressed in terms of
some frequently expressed but faulty pieces
of software engineering advice.
 Pitfall 1: Always use a simulation to
investigate the feasibility of complex real-
time software.
 Pitfall 2: Always build the software twice.
 Pitfall 3: Build the software purely top-
down.
 Pitfall 4: Every piece of code should be
proved correct.
 Pitfall 5: Nominal-case testing is sufficient.
 There is no simple way to make an accurate
estimate of the effort required to develop a
software system
 Initial estimates are based on inadequate
information in a user requirements definition;
 The software may run on unfamiliar computers or
use new technology;
 The people in the project may be unknown.
 Project cost estimates may be self-fulfilling
 The estimate defines the budget and the product
is adjusted to meet the budget.
 Algorithmic cost modelling.
 Expert judgement.
 Estimation by analogy.
 Parkinson's Law.
 Pricing to win.
 Each method has strengths and weaknesses.
 Estimation should be based on several
methods.
 If these do not return approximately the
same result, then you have insufficient
information available to make an estimate.
 Some action should be taken to find out
more in order to make more accurate
estimates.
 Pricing to win is sometimes the only
applicable method.
 Any of these approaches may be used top-
down or bottom-up.
 Top-down
 Start at the system level and assess the overall
system functionality and how this is delivered
through sub-systems.
 Bottom-up
 Start at the component level and estimate the
effort required for each component. Add these
efforts to reach a final estimate.
 Usable without knowledge of the system
architecture and the components that might
be part of the system.
 Takes into account costs such as integration,
configuration management and
documentation.
 Can underestimate the cost of solving
difficult low-level technical problems.
 Usable when the architecture of the system
is known and components identified.
 This can be an accurate method if the system
has been designed in detail.
 It may underestimate the costs of system
level activities such as integration and
documentation.
 The project costs whatever the customer has
to spend on it.
 Advantages:
 You get the contract.
 Disadvantages:
 The probability that the customer gets the
system he or she wants is small. Costs do not
accurately reflect the work required.
 This approach may seem unethical and un-
businesslike.
 However, when detailed information is
lacking it may be the only appropriate
strategy.
 The project cost is agreed on the basis of an
outline proposal and the development is
constrained by that cost.
 A detailed specification may be negotiated or
an evolutionary approach used for system
development.
 Cost is estimated as a mathematical function
of product, project and process attributes
whose values are estimated by project
managers:
 Effort = A x Size^B x M
 A is an organisation-dependent constant, B
reflects the disproportionate effort for large
projects and M is a multiplier reflecting product,
process and people attributes.
 The most commonly used product attribute
for cost estimation is code size.
 Most models are similar but they use
different values for A, B and M.
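A minimal sketch of evaluating the generic model Effort = A x Size^B x M follows; the values of A, B and the individual cost-driver multipliers are illustrative assumptions and would in practice be calibrated to an organisation's own data.

```python
from math import prod

# Generic algorithmic cost model: Effort = A * Size^B * M
# A, B and the cost-driver multipliers below are illustrative assumptions.
def effort(size_kloc, A=2.9, B=1.10, multipliers=()):
    M = prod(multipliers) if multipliers else 1.0
    return A * size_kloc ** B * M

# M as the product of product, process and people attribute multipliers.
drivers = (1.15,   # e.g. high required reliability
           0.88,   # e.g. very capable team
           1.07)   # e.g. tight schedule
print(f"{effort(50, multipliers=drivers):.1f} person-months")
```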
 The size of a software system can only be
known accurately when it is finished.
 Several factors influence the final size
 Use of COTS and components;
 Programming language;
 Distribution of system.
 As the development process progresses, the size estimate becomes more accurate.
Software Cost Estimation Techniques
Presentation by Kudzai G. Rerayi B1542349
 In the actual cost estimation process there are other inputs and constraints that need to be considered besides the cost drivers.

 One of the primary constraints of the software cost estimate is the financial constraint, which is the amount of money that can be budgeted or allocated to the project. There are other constraints as well, such as manpower constraints and date constraints.
Expert judgement
 Several experts on the proposed software development techniques and the application domain are consulted.

 They each estimate the project cost. These estimates are compared and discussed. The estimation process iterates until an agreed estimate is reached.

 This technique captures the experience and knowledge of the estimator, who provides the estimate based on their experience from a similar project in which they have participated.
ADVANTAGES
 Relatively cheap estimation method.
 Can be accurate if experts have direct experience of similar systems
 Useful in the absence of quantified, empirical data.
 Can factor in differences between past project experiences and
requirements of the proposed project
 Can factor in impacts caused by new technologies, applications and
languages.
DISADVANTAGES
 The estimate is only as good as the expert’s opinion.
 It is very inaccurate if there are no experts!
 Hard to document the factors used by the experts
Pricing to win

 The software cost is estimated to be whatever the customer has available to spend on the project.

 The estimated effort depends on the customer’s budget and not on the software functionality.

 In other words, the cost estimate is the price that is necessary to win the contract or the project.
ADVANTAGES

 Often rewarded with the contract

DISADVANTAGES
 Time and money run out before the job is done
 The probability that the customer gets the system
he or she wants is small. Costs do not accurately
reflect the work required.
Estimation by analogy

 This technique is applicable when other projects in the same application domain have been completed. The cost of a new project is estimated by analogy with these completed projects.

 It works by comparing the proposed project to previously completed similar projects in the same application domain.

 The actual data from the completed projects are extrapolated. It can be used either at system or component level.
ADVANTAGES
 Based on actual project data
 It is accurate if project data are available

DISADVANTAGES
 Impossible if no comparable project has been tackled in the past
 How well does the previous project represent this one?
Bottom up
 The cost of each software component is estimated and then the results are combined to arrive at the total cost for the project.

 The goal is to construct the estimate of the system from the knowledge accumulated about the small software components and their interactions.
ADVANTAGES
 More stable
 More detailed
 Allows each software group to hand in an estimate

DISADVANTAGES
 May overlook system level costs
 More time consuming
TOP-DOWN

 This technique is also called the Macro Model: it takes a global view of the product, which is then partitioned into various low-level components.

 Start at the system level and assess the overall system functionality and how this is delivered through sub-systems.
ADVANTAGES

 Requires minimal project detail


 Usually faster and easier to implement
 Focus on system level activities

DISADVANTAGES
 Tend to overlook low level components
 No detailed basis
Algorithmic cost modelling
 Use of mathematical equations to predict cost estimates.
 Equations are based on theory or historical data.
 An algorithmic cost model can be built by analyzing the costs and attributes of completed projects and finding the closest-fit formula to actual experience.
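As a sketch of "finding the closest-fit formula to actual experience", the snippet below fits the A and B of Effort = A * Size^B by least squares in log space; the historical data points are invented for illustration.

```python
import math

# Fit Effort = A * Size^B to completed-project data by least squares in log space.
# The (size in KLOC, effort in person-months) pairs below are invented examples.
history = [(10, 28), (25, 80), (60, 210), (120, 470)]

xs = [math.log(size) for size, _ in history]
ys = [math.log(effort) for _, effort in history]
n = len(history)
mean_x, mean_y = sum(xs) / n, sum(ys) / n

B = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
A = math.exp(mean_y - B * mean_x)

print(f"Effort = {A:.2f} * Size^{B:.2f}")
print(f"Predicted effort for 80 KLOC: {A * 80 ** B:.0f} person-months")
```

Calibrating the formula to an organisation's own completed projects is what makes the model "objectively calibrated to experience".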
ADVANTAGES
 Generates repeatable estimates
 Easy to modify input data
 Easy to refine and customize formulas
 Objectively calibrated to experience

DISADVANTAGES
 Unable to deal with exceptional conditions
 Some experience and factors can not be quantified
 Sometimes algorithms may be proprietary
The Waterfall Model
 The Waterfall Model was the first Process Model to be
introduced.
 It is also referred to as a linear-sequential life cycle
model.
 It is very simple to understand and use.
 In a waterfall model, each phase must be completed before
the next phase can begin and there is no overlapping in the
phases.
 The sequential phases in Waterfall model are :
 Requirement Gathering and analysis − All possible
requirements of the system to be developed are captured in
this phase and documented in a requirement specification
document.
 System Design − The requirement specifications from first
phase are studied in this phase and the system design is
prepared. This system design helps in specifying hardware and
system requirements and helps in defining the overall system
architecture.
 Implementation − With inputs from the system design, the
system is first developed in small programs called units, which
are integrated in the next phase. Each unit is developed and
tested for its functionality, which is referred to as Unit Testing.
 Integration and Testing − All the units developed in the
implementation phase are integrated into a system after testing
of each unit. Post integration the entire system is tested for any
faults and failures.
 Deployment of system − Once the functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.
 Maintenance − There are some issues which come up in the
client environment. To fix those issues, patches are released. Also
to enhance the product some better versions are released.
Maintenance is done to deliver these changes in the customer
environment.
Sequential Development: The
Waterfall Model
Discussion of the Waterfall Model
 The waterfall model is a heavyweight process with full
documentation of each process step.
 Advantages:
 Process visibility
 Separation of tasks
 Quality control at each step
 Cost monitoring at each step
 Disadvantages:
 In practice, each stage in the process reveals new
understanding of the previous stages, which often requires
the earlier stages to be revised.
 The Waterfall Model is not flexible enough.
Discussion of the Waterfall Model
 A pure sequential model is impossible
 Examples:
 A feasibility study cannot create a proposed budget and
schedule without a preliminary study of the
requirements and a tentative design.
 Detailed design and implementation reveal gaps in the
requirements specification.
 Requirements and/or technology may change during the
development.
 The plan must allow for some form of iteration.
Modified Waterfall Model
Sequential Development
 Sequential processes work best when the requirements are
well understood and the design is straightforward, e.g.,
 Conversions of manual data processing systems where the
requirements were well understood and few changes were
made during the development (e.g., electricity billing).
 New models of a product where the functionality is closely
derived from an earlier product (e.g. automatic braking
system for a car).
 Portions of a large system where some components have
clearly defined requirements and are clearly separated from
the rest of the system.
Contracts
 Note about contracts for software development
 Some organizations contract for software development
by placing separate contracts for each stage of the
Waterfall Model or arrange for payment after each stage.
 This is a very bad practice.
Economic Rationale for the
Waterfall Model
 To achieve a successful software product all sub goals
must be met
 Avoidable costly consequences will occur unless early
goals are thoroughly satisfied
 Any different ordering of the sub goals will produce a
less successful software product
 Early detection of errors (particularly requirements)
will mean simple, less costly changes are needed
Refinements – Incremental
Development
 Increments of functional capability
 Increment 1 – basic capability to operate
 Increment 2 – value added production-mode capabilities
 Increment 3 – nice-to-have features
 Advantages
 More helpful & easier to test
 Incorporates user experience in a less expensive way
 Reduces labour costs
Refinements – advancemanship
 Anticipatory documentation
 Define detailed objectives & plans for future software
development activities
 Produce early versions of user documentation
 Software scaffolding
 Extra products that need to be developed to ensure smooth &
efficient build of main software
 Advantages
 Reduce overall costs by limiting the time & energy spent in
non-productive activities
 Redistribute costs – greater early investment reduces late
investment costs
What is a Work Breakdown Structure?
Breaking work into smaller tasks is a common productivity technique used to make
the work more manageable and approachable. For projects, the Work Breakdown
Structure (WBS) is the tool that utilizes this technique and is one of the most
important project management documents. It singlehandedly integrates scope, cost
and schedule baselines ensuring that project plans are in alignment.
The Project Management Institute (PMI) Project Management Body of Knowledge
(PMBOK) defines the Work Breakdown Structure as a “deliverable oriented
hierarchical decomposition of the work to be executed by the project team.” There
are two types of WBS:
1) Deliverable-Based
2) Phase-Based.
The most common and preferred approach is the Deliverable-Based approach. The
main difference between the two approaches is the Elements identified in the first
Level of the WBS.
Deliverable-Based Work Breakdown Structure
A Deliverable-Based Work Breakdown Structure clearly demonstrates the relationship
between the project deliverables (i.e., products, services or results) and the scope (i.e.,
work to be executed). Figure 1 is an example of a Deliverable-Based WBS for building
a house. Figure 2 is an example of a Phase-Based WBS for the same project.

FIGURE 1 – DELIVERABLE BASED WORK BREAKDOWN STRUCTURE

In Figure 1, the Level 1 Elements are summary deliverable descriptions. The Level 2
Elements in each Leg of the WBS are all the unique deliverables required to create
the respective Level 1 deliverable.
Phase-Based Work Breakdown Structure
In Figure 2, a Phase-Based WBS, the Level 1 has five Elements. Each of these Elements
is a typical phase of a project. The Level 2 Elements are the unique deliverables in
each phase. Regardless of the type of WBS, the lower Level Elements are all
deliverables. A Phase-Based WBS requires work associated with multiple elements be
divided into the work unique to each Level 1 Element. A WBS Dictionary is created to
describe the work in each Element.

FIGURE 2 - PHASE BASED WORK BREAKDOWN STRUCTURE

A good WBS is simply one that makes the project more manageable. Every project is
different; every project manager is different and every WBS is different. So, the right
WBS is the one that best answers the question, “What structure makes the project
more manageable?”.

How to Make a Work Breakdown Structure


A good Work Breakdown Structure is created using an iterative process by following
these steps and meeting these guidelines:

1. GATHER CRITICAL DOCUMENTS


a. Gather critical project documents.
b. Identify content containing project deliverables, such as the Project
Charter, Scope Statement and Project Management Plan (PMP)
subsidiary plans.
2. IDENTIFY KEY TEAM MEMBERS
a. Identify the appropriate project team members.
b. Analyze the documents and identify the deliverables.
3. DEFINE LEVEL 1 ELEMENTS
a. Define the Level 1 Elements. Level 1 Elements are summary deliverable
descriptions that must capture 100% of the project scope.
b. Verify 100% of scope is captured. This requirement is commonly
referred to as the 100% Rule.
4. DECOMPOSE (BREAKDOWN) ELEMENTS
a. Begin the process of breaking the Level 1 deliverables into unique
lower Level deliverables. This “breaking down” technique is called
Decomposition.
b. Continue breaking down the work until the work covered in each
Element is managed by a single individual or organization. Ensure that
all Elements are mutually exclusive.
c. Ask the question, would any additional decomposition make the
project more manageable? If the answer is “no”, the WBS is done.
5. CREATE WBS DICTIONARY
a. Define the content of the WBS Dictionary. The WBS Dictionary is a
narrative description of the work covered in each Element in the WBS.
The lowest Level Elements in the WBS are called Work Packages.
b. Create the WBS Dictionary descriptions at the Work Package Level with
detail enough to ensure that 100% of the project scope is covered. The
descriptions should include information such as, boundaries,
milestones, risks, owner, costs, etc.
6. CREATE GANTT CHART SCHEDULE
a. Decompose the Work Packages to activities as appropriate.
b. Export or enter the Work Breakdown Structure into a Gantt chart for
further scheduling and project tracking.
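The decomposition described in the steps above naturally forms a tree whose leaves are the Work Packages. The following is an illustrative data-structure sketch only; the element names and codes are assumptions based on the house example, not part of any PMI standard.

```python
# Illustrative WBS tree: leaves are Work Packages, inner nodes are summary Elements.
class Element:
    def __init__(self, code, name, children=None):
        self.code, self.name = code, name
        self.children = children or []     # empty list -> this Element is a Work Package

    def work_packages(self):
        if not self.children:
            return [self]
        return [wp for child in self.children for wp in child.work_packages()]

# A fragment of a Deliverable-Based WBS for the house example (names assumed).
wbs = Element("1.0", "House", [
    Element("1.1", "Foundation", [Element("1.1.1", "Excavation"),
                                  Element("1.1.2", "Concrete pour")]),
    Element("1.2", "Exterior",   [Element("1.2.1", "Framing"),
                                  Element("1.2.2", "Roofing")]),
])

for wp in wbs.work_packages():
    print(wp.code, wp.name)   # each leaf would get a WBS Dictionary entry
```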

There are many WBS software tools available. Some of them are based on mind
mapping and others are drawing tools.

How to Use a Work Breakdown Structure


The Work Breakdown Structure is used for many different things. Initially, it serves as
a planning tool to help the project team plan, define and organize scope with
deliverables. The WBS is also used as the primary source for schedule and cost
estimating activities. But its biggest contributions to a project are its use as a
description of all of the work and as a monitoring and controlling tool.

WORK PACKAGES
Figure 3 shows the House Project Work Breakdown Structure expanded to Level 1, 2,
and 3 Elements. The lowest Levels of each Leg and Branch of the WBS are called
Work Packages. Work Packages cover information related to the deliverable, such as
owner, milestones, durations, resources, risks, etc. This information is described in the
WBS Dictionary.
PLANNING PACKAGES
There is another type of Work Package called a Planning Package. When the project
management plan is approved, scope is known, but not necessarily all of the details.
In order to apply the 100% Rule and capture all of the scope, Planning Packages are
created. It is understood that as details are defined, the Planning Packages eventually
evolve to Work Packages. In the House Project, the project manager knows that the
house will have fixtures, but at the time construction begins, there is only a fixture
allowance and no fixtures identified. Once the fixtures are determined, the associated
Planning Package becomes a Work Package. This planning process is called Rolling
Wave Planning and is a form of Progressive Elaboration.

FIGURE 3 – WBS WORK PACKAGES AND CONTROL ACCOUNTS

CONTROL ACCOUNTS
The other application of the WBS is as a monitoring and controlling tool. This is
accomplished by defining Control Accounts. Control Accounts are WBS Elements at
which the project plans to monitor and report performance. The Control Accounts
can be any Element in the WBS. In the House Project, the project manager decides
that the project risks associated with using subcontractors can be better managed if
the project reports performance for each subcontractor. To monitor their
performance, Elements 3.1, 3.2 and 3.3 have been identified as Control Accounts.
However, the remaining work in Elements 1.0 and 2.0 will be performed by company
resources with less risk, and the project team does not feel that monitoring and controlling
are needed at lower Levels. To assist with the monitoring and reporting, project
management information tools are used to collect, analyze and report information at
any Element within the WBS.
