COCOMO (Constructive Cost Model) is one of the most widely used
software cost estimation models: it estimates, or predicts, the effort required
for a project, the total project cost, and the scheduled time for the project. The model
depends on the number of lines of code of the software product being developed. It was
developed by the software engineer Barry Boehm in 1981.
What is the COCOMO Model?
COCOMO estimates the cost of software product development in terms of
effort (the resources required to complete the project work) and schedule (the time
required to complete the project work), based on the size of the software product.
It estimates the number of Man-Months (MM) required for the full development
of the software product. According to COCOMO, there are three modes of software
development project, depending on complexity:
1. Organic Project
A small and simple software project handled by a small team
with good domain knowledge and few rigid requirements.
2. Semidetached Project
An intermediate project (in terms of size and complexity), where a team
with mixed experience (both experienced and inexperienced members) deals
with a mix of rigid and nonrigid requirements.
3. Embedded Project
A project with a high level of complexity and a large team size, constrained by
all sets of parameters (software, hardware, and operational).
PROJECT TYPE     a     b     c     d
Organic          2.4   1.05  2.5   0.38
Semidetached     3.0   1.12  2.5   0.35
Embedded         3.6   1.20  2.5   0.32
Example: A given project was estimated at a size of 300 KLOC. Calculate the
effort and scheduled time for development. Also calculate the average resource size
and the productivity of the software for the Organic project type.
For Organic
Effort (E) = a*(KLOC)^b = 2.4*(300)^1.05 = 957.61 MM
Scheduled Time (D) = c*(E)^d = 2.5*(957.61)^0.38 = 33.95 Months (M)
Avg. Resource Size = E/D = 957.61/33.95 = 28.21 persons
Productivity of Software = KLOC/E = 300/957.61 = 0.3132 KLOC/MM = 313 LOC/MM
For Semidetached
Effort (E) = a*(KLOC)^b = 3.0*(300)^1.12 = 1784.42 MM
Scheduled Time (D) = c*(E)^d = 2.5*(1784.42)^0.35 = 34.35 Months (M)
For Embedded
Effort (E) = a*(KLOC)^b = 3.6*(300)^1.2 = 3379.46 MM
Scheduled Time (D) = c*(E)^d = 2.5*(3379.46)^0.32 = 33.66 Months (M)
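The worked examples above can be sketched as a short script. The coefficients are the standard Basic COCOMO constants, and the formulas follow the ones used in the calculations:

```python
# Basic COCOMO: Effort E = a*(KLOC)^b man-months, Schedule D = c*E^d months.
COEFFS = {  # mode: (a, b, c, d), standard Basic COCOMO constants
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b        # MM (man-months)
    schedule = c * effort ** d    # months
    staff = effort / schedule     # average resource size
    productivity = kloc / effort  # KLOC per MM
    return effort, schedule, staff, productivity

e, d, s, p = basic_cocomo(300, "organic")
print(f"E={e:.2f} MM, D={d:.2f} M, staff={s:.2f}, {p*1000:.0f} LOC/MM")
```

Running this for the 300 KLOC organic case reproduces the figures above (about 957.61 MM and 33.95 months).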
Estimation Factors
How much effort is required to complete an
activity?
How much calendar time is needed to
complete an activity?
What is the total cost of an activity?
Project estimation and scheduling are
interleaved management activities.
Hardware and software costs.
Travel and training costs.
Effort costs (the dominant factor in most
projects)
The salaries of engineers involved in the project;
Social and insurance costs.
Effort costs must take overheads into
account
Costs of building, heating, lighting.
Costs of networking and communications.
Costs of shared facilities (e.g., library, staff
restaurant, etc.).
Estimates are made to discover the cost, to
the developer, of producing a software
system.
There is not a simple relationship between
the development cost and the price charged
to the customer.
Broader organisational, economic, political
and business considerations influence the
price charged.
A measure of the rate at which individual
engineers involved in software development
produce software and associated
documentation.
Not quality-oriented although quality
assurance is a factor in productivity
assessment.
Essentially, we want to measure useful
functionality produced per time unit.
Size-related measures based on some output
from the software process.
This may be lines of delivered source code,
object code instructions, etc.
Function-related measures based on an
estimate of the functionality of the delivered
software.
Function-points are the best known of this type
of measure.
What's a line of code? –
The measure was first proposed when programs
were typed on cards with one line per card;
This model assumes that there is a linear
relationship between system size and volume
of documentation.
The lower level the language, the more
productive the programmer
The same functionality takes more code to
implement in a lower-level language than in a
high-level language.
The more verbose the programmer, the
higher the productivity
Measures of productivity based on lines of code
suggest that programmers who write verbose
code are more productive than programmers who
write compact code.
Based on a combination of program
characteristics
external inputs and outputs;
user interactions;
external interfaces;
files used by the system.
A weight is associated with each of these and
the function point count is computed by
multiplying each raw count by the weight
and summing all values.
The function point count is modified by
complexity of the project
FPs can be used to estimate LOC depending
on the average number of LOC per FP for a
given language
LOC = AVC * number of function points;
AVC is a language-dependent factor varying from
200-300 for assembly language to 2-40 for a 4GL;
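As a sketch, the LOC = AVC * FP conversion is a one-liner. The AVC values below are illustrative assumptions picked within the ranges just quoted (the COBOL figure matches the roughly 105 statements per function point cited later in this document):

```python
# LOC = AVC * FP, where AVC is the language-dependent average LOC per
# function point. These AVC values are illustrative assumptions only.
AVC = {"assembly": 250, "c": 128, "cobol": 105, "4gl": 20}

def estimate_loc(function_points, language):
    return AVC[language] * function_points

print(estimate_loc(100, "cobol"))  # -> 10500
```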
FPs are very subjective. They depend on the
estimator.
Automatic function-point counting is impossible.
Object points (alternatively named application
points) are an alternative function-related
measure to function points when 4GLs or similar
languages are used for development.
Object points are NOT the same as object
classes.
The number of object points in a program is a
weighted estimate of
The number of separate screens that are displayed;
The number of reports that are produced by the
system;
The number of program modules that must be
developed to supplement the database code;
Object points are easier to estimate from a
specification than function points as they are
simply concerned with screens, reports and
programming language modules.
They can therefore be estimated at a fairly
early point in the development process.
At this stage, it is very difficult to estimate
the number of lines of code in a system.
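The weighted object-point estimate described above can be sketched as follows. The weights follow the COCOMO II application-point scheme (simple/medium/difficult screens and reports, 3GL modules fixed at 10) and should be treated as illustrative defaults rather than fixed values:

```python
# Object (application) points: weighted count of screens, reports, and
# 3GL modules. Weights follow the COCOMO II application-point scheme
# (assumed defaults, not taken from this document).
WEIGHTS = {
    "screen": {"simple": 1, "medium": 2, "difficult": 3},
    "report": {"simple": 2, "medium": 5, "difficult": 8},
    "3gl_module": {"difficult": 10},  # 3GL modules always rate as difficult
}

def object_points(items):
    """items: iterable of (kind, complexity, count) tuples."""
    return sum(WEIGHTS[kind][cplx] * n for kind, cplx, n in items)

nop = object_points([("screen", "simple", 4),
                     ("report", "medium", 2),
                     ("3gl_module", "difficult", 3)])
print(nop)  # 4*1 + 2*5 + 3*10 = 44
```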
Real-time embedded systems, 40-160 LOC/P-
month.
Systems programs, 150-400 LOC/P-month.
Commercial applications, 200-900 LOC/P-
month.
In object points, productivity has been
measured between 4 and 50 object
points/month depending on tool support and
developer capability.
All metrics based on volume/unit time are
flawed because they do not take quality into
account.
Productivity may generally be increased at
the cost of quality. It is not clear how
productivity/quality metrics are related.
If requirements are constantly changing then
an approach based on counting lines of code
is not meaningful as the program itself is not
static;
The dictionary defines "economics" as "a
social science concerned chiefly with
description and analysis of the production,
distribution, and consumption of goods and
services."
Economics is the study of how people make
decisions in resource-limited situations.
Macroeconomics is the study of how people
make decisions in resource-limited situations
on a national or global scale.
It deals with the effects of decisions that
national leaders make on such issues as tax
rates, interest rates, foreign and trade
policy.
Microeconomics is the study of how people
make decisions in resource-limited situations
on a more personal scale.
It deals with the decisions that individuals
and organizations make on such issues as how
much insurance to buy, which word processor
to buy, or what prices to charge for their
products or services.
If we look at the discipline of software
engineering, we see that the
microeconomics branch of economics deals
more with the types of decisions we need to
make as software engineers or managers.
Clearly, we deal with limited resources.
There is never enough time or money to
cover all the good features we would like to
put into our software products.
And even in these days of cheap hardware
and virtual memory, our more significant
software products must always operate
within a world of limited computer power
and main memory.
If you have been in the software engineering
field for any length of time, I am sure you
can think of a number of decision situations
in which you had to determine some key
software product feature as a function of
some limiting critical resource.
Throughout the software life cycle, there are many
decision situations involving limited resources in
which software engineering economics techniques
provide useful assistance.
To provide a feel for the nature of these economic
decision issues, an example is given below for each of
the major phases in the software life cycle.
Feasibility Phase: How much should we invest
in information system analyses (user questionnaires
and interviews, current-system analysis, workload
characterizations, simulations, scenarios, prototypes)
in order that we converge on an appropriate
definition and concept of operation for the system
we plan to implement?
Plans and Requirements Phase: How
rigorously should we specify requirements?
How much should we invest in requirements
validation activities (automated
completeness, consistency, and traceability
checks, analytic models, simulations,
prototypes) before proceeding to design and
develop a software system?
Product Design Phase: Should we organize
the software to make it possible to use a
complex piece of existing software which
generally but not completely meets our
requirements?
Programming Phase: Given a choice
between three data storage and retrieval
schemes which are primarily execution time-
efficient, storage-efficient, and easy-to modify,
respectively; which of these should we choose to
implement?
Integration and Test Phase: How much
testing and formal verification should we
perform on a product before releasing it to
users?
Maintenance Phase: Given an extensive list
of suggested product improvements, which ones
should we implement first?
Phase out: Given an aging, hard-to-modify
software product, should we replace it with a
new product, restructure it, or leave it alone?
The microeconomics field provides a number of techniques
for dealing with software life-cycle decision issues such as
the ones given in the previous section. Fig. 1 presents an
overall master key to these techniques and when to use
them.
As indicated in Fig. 1, standard optimization techniques
can be used when we can find a single quantity such as
dollars (or pounds, yen, cruzeiros, etc.) to serve as a
“universal solvent” into which all of our decision variables
can be converted.
Or, if the nondollar objectives can be expressed as
constraints (system availability must be at least 98
percent; throughput must be at least 150 transactions per
second), then standard constrained optimization
techniques can be used. And if cash flows occur at
different times, then present-value techniques can be
used to normalize them to a common point in time.
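The present-value normalization mentioned above discounts each cash flow back to a common point in time, PV = F / (1 + r)^t. A minimal sketch, with hypothetical cash flows and discount rate:

```python
# Present value: discount each (year, amount) cash flow at annual rate r.
def present_value(cash_flows, rate):
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

# Hypothetical example: invest $180K now, receive $80K/year for three years.
npv = present_value([(0, -180_000), (1, 80_000), (2, 80_000), (3, 80_000)], 0.10)
print(f"NPV = ${npv:,.0f}")  # positive NPV -> the investment pays off at 10%
```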
More frequently, some of the resulting
benefits from the software system are not
expressible in dollars.
In such situations, one alternative solution
will not necessarily dominate another
solution.
An example situation is shown in Fig. 2,
which compares the cost and benefits (here,
in terms of throughput in transactions per
second) of two alternative approaches to
developing an operating system for a
transaction processing system.
Option A: Accept an available operating
system. This will require only $80K in
software costs, but will achieve a peak
performance of 120 transactions per second,
using five $10K minicomputer processors,
because of a high multiprocessor overhead
factor.
Option B: Build a new operating system.
This system would be more efficient and
would support a higher peak throughput, but
would require $180K in software costs.
The cost-versus-performance curves for these two options
are shown in Fig. 2. Here, neither option dominates the
other, and various cost-benefit decision-making
techniques (maximum profit margin, cost benefit ratio,
return on investments, etc.) must be used to choose
between Options A and B.
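A cost-benefit ratio comparison of the two options can be sketched as below. Option A's figures come from the text ($80K software plus five $10K processors for 120 transactions per second); Option B's hardware count and peak throughput are not stated in the text, so the values used for it here are hypothetical:

```python
# Cost-benefit ratio: total dollars per transaction/second of throughput.
def dollars_per_tps(software_cost, hardware_cost, throughput_tps):
    return (software_cost + hardware_cost) / throughput_tps

option_a = dollars_per_tps(80_000, 5 * 10_000, 120)   # figures from the text
option_b = dollars_per_tps(180_000, 3 * 10_000, 210)  # hypothetical figures
print(f"A: ${option_a:,.0f}/tps  B: ${option_b:,.0f}/tps")
```

Under these assumed numbers neither option dominates: B costs more in total but less per unit of throughput.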
In general, software engineering decision problems are
even more complex than Fig. 2, as Options A and B will
have several important criteria on which they differ
(e.g., robustness, ease of tuning, ease of change,
functional capability).
If these criteria are quantifiable, then some type of
figure of merit can be defined to support a comparative
analysis of the preferability of one option over another.
If some of the criteria are unquantifiable (user goodwill,
programmer morale, etc.), then some techniques for
comparing unquantifiable criteria need to be used.
In software engineering, our decision issues are generally
even more complex than those discussed above. This is
because the outcome of many of our options cannot be
determined in advance.
In such circumstances, we are faced with a problem of
decision making under uncertainty, with a considerable risk
of an undesired outcome.
The main economic analysis techniques available to
support us in resolving such problems are the following.
1) Techniques for decision making under complete
uncertainty, such as the maximax rule, the maximin rule,
and the Laplace rule [38]. These techniques are generally
inadequate for practical software engineering decisions.
U.S. telephone numbers by about the year 2015. The UNIX calendar expires in the year
2038 and could be as troublesome as the Year 2000 problem. Even larger, it may be necessary
to add at least one digit to U.S. Social Security numbers by about the year 2050.
Although software enhancements and software maintenance in the sense of defect repairs
are usually funded in different ways and have quite different sets of activity patterns
associated with them, many companies lump these disparate software activities together
for budgets and cost estimates.
The author does not recommend the practice of aggregating defect repairs and
enhancements, but this practice is very common. Consider some of the basic differences
between enhancements or adding new features to applications and maintenance or defect
repairs as shown in table 1:
Enhancements        Maintenance
(New features)      (Defect repairs)
This method is crude, but can convey useful information. Organizations which are
proactive in using geriatric tools and services can spend less than 30% of their annual
software budgets on various forms of maintenance, while organizations that have not
used any of the geriatric tools and services can top 60% of their annual budgets on
various forms of maintenance.
Although the use of the word “maintenance” as a blanket term for more than 20 kinds of
update activity is not very precise, it is useful for overall studies of national software
populations. Table 2 shows the estimated U.S. software population for the United States
between 1950 and 2025 divided into “development” and “maintenance” segments.
In this table the term “development” implies creating brand new applications or adding
major new features to existing applications. The term “maintenance” implies fixing bugs
or errors, mass updates such as the Euro and Year 2000, statutory or mandatory changes
such as rate changes, and minor augmentation such as adding features that require less
than a week of effort.
Notice that under the double impact of the Euro and the Year 2000, so many development
projects were delayed or cancelled that the population of software developers in the
United States actually shrank below its peak year of 1995. The burst of mass-update
maintenance work is one of the main reasons why there is such a large shortage of
software personnel.
As can be seen from table 2, the work of fixing errors and dealing with mass updates to
aging legacy applications has become the dominant form of software engineering. This
tendency will continue indefinitely so long as maintenance work remains labor-intensive.
Before proceeding, let us consider 21 discrete topics that are often coupled together under
the generic term “maintenance” in day to day discussions, but which are actually quite
different in many important respects:
Table 3: Major Kinds of Work Performed Under the Generic Term “Maintenance”
Although the 21 maintenance topics are different in many respects, they all have one
common feature that makes a group discussion possible: They all involve modifying an
existing application rather than starting from scratch with a new application.
Although the 21 forms of modifying existing applications have different reasons for being
carried out, it often happens that several of them take place concurrently. For example,
enhancements and defect repairs are very common in the same release of an evolving
application. There are also common sequences or patterns to these modification
activities. For example, reverse engineering often precedes reengineering and the two
occur so often together as to almost comprise a linked set. For releases of large
applications and major systems, the author has observed from six to 10 forms of
maintenance all leading up to the same release!
Nominal Default Values for Maintenance and Enhancement Activities
The nominal default values for exploring these 21 kinds of maintenance are shown in
table 4. However, each of the 21 has a very wide range of variability and reacts to a
number of different technical factors, and also to the experience levels of the maintenance
personnel. Let us consider some generic default estimating values for these various
maintenance tasks using two useful metrics: “assignment scopes” and “production rates.”
The term “assignment scope” refers to the amount of software one programmer can keep
operational in the normal course of a year, assuming routine defect repairs and minor
updates. Assignment scopes are usually expressed in terms of function points and the
observed range is from less than 300 function points to more than 5,000 function points.
The term “production rate” refers to the number of units that can be handled in a standard
time period such as a work month, work week, day, or hour. Production rates are usually
expressed in terms of either “function points per staff month” or the similar and
reciprocal metric, “work hours per function point.”
We will also include “Lines of code per staff month” with the caveat that the results are
merely based on an expansion of 100 statements per function point, which is only a
generic value and should not be used for serious estimating purposes.
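The two productivity metrics are reciprocals once a work calendar is fixed. The sketch below assumes 132 effective work hours per staff month (an assumption, not stated in the text), which makes the numbers line up with table 4 rows such as 15 FP/month and 8.80 work hours per FP:

```python
# Reciprocal maintenance metrics, assuming 132 effective hours per staff
# month and the generic 100 statements per function point (a generic
# expansion only, not for serious estimating).
HOURS_PER_MONTH = 132  # assumption; actual calendars vary by organization
LOC_PER_FP = 100       # generic expansion value from the text

def hours_per_fp(fp_per_month):
    return HOURS_PER_MONTH / fp_per_month

def loc_per_month(fp_per_month):
    return fp_per_month * LOC_PER_FP

print(hours_per_fp(15), loc_per_month(15))  # 8.8 hours/FP, 1500 LOC/month
```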
Table 4: Default Values for Maintenance Assignment Scopes and Production Rates
Task                          Scope (FP)   FP/month   Hours/FP   LOC/month
Euro-currency conversion      1,500        15         8.80       1,500
Error-prone module removal    300          12         11.00      1,200
Average                       2,080        255        5.51       25,450
Each of these forms of modification or support activity has wide variations, but these
nominal default values at least show the ranges of possible outcomes for all of the major
activities associated with support of existing applications.
Table 5 shows some of the factors and ranges that are associated with assignment scopes,
or the amount of software that one programmer can keep running in the course of a
typical year.
In table 5 the term “experienced staff” means that the maintenance team has worked on
the applications being modified for at least six months and is quite familiar with the
available tools and methods.
The term “good structure” means that the application adheres to the basic tenets of
structured programming; has clear and adequate comments; and has cyclomatic
complexity levels that are below a value of 10.
The term “full maintenance tools” implies the availability of most of these common
forms of maintenance tools: 1) Defect tracking and routing tools; 2) Change control
tools; 3) Complexity analysis tools; 4) Code restructuring tools; 5) Reverse engineering
tools; 6) Reengineering tools; 7) Maintenance “workbench” tools; 8) Test coverage
tools.
The term “high level language” implies a fairly modern programming language that
requires less than 50 statements to encode 1 function point. Examples of such languages
include most object-oriented languages such as Smalltalk, Eiffel, and Objective C.
By contrast “low level languages” implies language requiring more than 100 statements
to encode 1 function point. Obviously assembly language would be in this class since it
usually takes more than 200 to 300 assembly statements per function point. Other
languages that top 100 statements per function point include many mainstream languages
such as C, Fortran, and COBOL.
In between the high-level and low-level ranges are a variety of mid-level languages that
require roughly 70 statements per function point, such as Ada83, PL/I, and Pascal.
If the company has used complexity analysis tools, code restructuring tools, and has a
staff of highly trained maintenance specialists then the maintenance assignment scope
might top 3,000 function points. This implies that only 33 maintenance experts are
needed, as opposed to 200 generalists. Table 5 illustrates how maintenance assignment
scopes vary in response to four different factors, when each factor switches from “worst
case” to “best case.” Table 5 assumes Version 4.1 of the International Function Point
Users Group (IFPUG) counting practices manual.
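The staffing arithmetic in the paragraph above is simply portfolio size divided by assignment scope; the 100,000-function-point portfolio implied by the 33-versus-200 comparison is used below:

```python
# Staff needed = portfolio size / maintenance assignment scope (FP per
# programmer per year), rounded to whole people.
def maintenance_staff(portfolio_fp, assignment_scope_fp):
    return round(portfolio_fp / assignment_scope_fp)

PORTFOLIO = 100_000  # function points; implied by the 33-vs-200 comparison
print(maintenance_staff(PORTFOLIO, 500))    # generalists -> 200
print(maintenance_staff(PORTFOLIO, 3_000))  # trained specialists -> 33
```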
Table 5 (fragment):
Experienced staff       700      1,100      1,600
Good structure
Low-level language
No maintenance tools
None of the values in table 5 are sufficiently rigorous by themselves for formal cost
estimates, but are sufficient to illustrate some of the typical trends in various kinds of
maintenance work. Obviously adjustments for team experience, complexity of the
application, programming languages, and many other local factors are needed as well.
There are several difficulties in exploring software maintenance costs with accuracy. One
of these difficulties is the fact that maintenance tasks are often assigned to development
personnel who interleave both development and maintenance as the need arises. This
practice makes it difficult to distinguish maintenance costs from development costs
because the programmers are often rather careless in recording how time is spent.
Another and very significant problem is the fact that a great deal of software maintenance
consists of making very small changes to software applications. Quite a few bug repairs
may involve fixing only a single line of code. Adding minor new features such as
perhaps a new line-item on a screen may require less than 50 source code statements.
These small changes are below the effective lower limit for counting function point
metrics. The function point metric includes weighting factors for complexity, and even if
the complexity adjustments are set to the lowest possible point on the scale, it is still
difficult to count function points below a level of perhaps 15 function points.
Quite a few maintenance tasks involve changes that are either a fraction of a function
point, or may at most be less than 10 function points or about 1000 COBOL source code
statements. Although normal counting of function points is not feasible for small
updates, it is possible to use the “backfiring” method of converting counts of logical
source code statements into equivalent function points. For example, suppose an update
requires adding 100 COBOL statements to an existing application. Since it usually takes
about 105 COBOL statements in the procedure and data divisions to encode 1 function
point, it can be stated that this small maintenance project is “about 1 function point in
size.”
If the project takes one work day consisting of six hours, then at least the results can be
expressed using common metrics. In this case, the results would be roughly “6 staff
hours per function point.” If the reciprocal metric “function points per staff month” is
used, and there are 20 working days in the month, then the results would be “20 function
points per staff month.”
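The backfiring arithmetic in the paragraphs above can be sketched directly; the 105 COBOL statements per function point and the six-hour work day both come from the text:

```python
# Backfiring: statements / (statements per FP) approximates size in FP.
STATEMENTS_PER_FP = {"cobol": 105}  # value quoted in the text

def backfire_fp(statements, language):
    return statements / STATEMENTS_PER_FP[language]

fp = backfire_fp(100, "cobol")  # ~0.95, i.e., "about 1 function point"
hours = 6                       # one six-hour work day, per the text
print(f"{hours / round(fp):.0f} staff hours per function point")  # -> 6
```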
Because maintenance of aging legacy software is very labor intensive it is quite important
to explore the best and most cost effective methods available for dealing with the millions
of applications that currently exist. The sets of best and worst practices are not
symmetrical. For example, the practice that has the most positive impact on maintenance
productivity is the use of trained maintenance experts. However, the factor that has the
greatest negative impact is the presence of “error-prone modules” in the application that
is being maintained.
Table 6 illustrates a number of factors which have been found to exert a beneficial
impact on the work of updating aging applications and shows the percentage of
improvement compared to average results:
Sum 503%
At the top of the list of maintenance “best practices” is the utilization of full-time, trained
maintenance specialists rather than turning over maintenance tasks to untrained
generalists. The positive impact from utilizing maintenance specialists is one of the
reasons why maintenance outsourcing has been growing so rapidly. The maintenance
productivity rates of some of the better maintenance outsource companies are roughly
twice that of their clients prior to the completion of the outsource agreement. Thus even
if the outsource vendor costs are somewhat higher, there can still be useful economic
gains.
Let us now consider some of the factors which exert a negative impact on the work of
updating or modifying existing software applications. Note that the top-ranked factor
which reduces maintenance productivity, the presence of error-prone modules, is very
asymmetrical. The absence of error-prone modules does not speed up maintenance work,
but their presence definitely slows down maintenance work.
Error-prone modules were discovered by IBM in the 1960s when IBM’s quality
measurements began to track errors or bugs down to the levels of specific modules. For
example, it was discovered that IBM’s IMS database product contained 425 modules, but
more than 300 of these were zero-defect modules that never received any bug reports.
About 60% of all reported errors were found in only 31 modules, and these were very
buggy indeed.
When this form of analysis was applied to other products and used by other companies, it
was found to be a very common phenomenon. In general more than 80% of the bugs in
software applications are found in less than 20% of the modules. Once these modules are
identified then they can be inspected, analyzed, and restructured to reduce their error
content down to safe levels.
Table 7 summarizes the major factors that degrade software maintenance performance.
Not only are error-prone modules troublesome, but many other factors can degrade
performance too. For example, very complex “spaghetti code” is quite difficult to
maintain safely. It is also troublesome to have maintenance tasks assigned to generalists
rather than to trained maintenance specialists.
Table 7: Impact of Key Adjustment Factors on Maintenance
(Sorted in order of maximum negative impact)
Sum -500%
Given the enormous amount of effort that is now being applied to software maintenance,
and which will be applied in the future, it is obvious that every corporation should
attempt to adopt maintenance “best practices” and avoid maintenance “worst practices” as
rapidly as possible.
The word “entropy” means the tendency of systems to destabilize and become more
chaotic over time. Entropy is a term from physics, not a software-specific word.
However, entropy is true of all complex systems, including software: all known
compound objects decay and become more complex with the passage of time unless
effort is exerted to keep them repaired and updated. Software is no exception. The
accumulation of small updates over time tends to gradually degrade the initial structure of
applications and makes changes grow more difficult over time.
2) Expected-value techniques, in which we
estimate the probabilities of occurrence of each
outcome (successful or unsuccessful
development of the new operating system) and
compute the expected payoff of each option:
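A minimal expected-value sketch; the probabilities and payoffs below are hypothetical, since the text does not supply them:

```python
# Expected value: sum of probability-weighted payoffs over all outcomes.
def expected_value(outcomes):
    """outcomes: iterable of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Build-the-new-OS option: 0.7 chance of success worth $250K net,
# 0.3 chance of failure costing $180K (all figures hypothetical).
ev_build = expected_value([(0.7, 250_000), (0.3, -180_000)])
print(f"Expected payoff: ${ev_build:,.0f}")
```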
DISADVANTAGES
Time and money run out before the job is done
The probability that the customer gets the system
he or she wants is small. Costs do not accurately
reflect the work required.
Estimation by analogy
DISADVANTAGES
Impossible if no comparable project had been
tackled in the past
How well does the previous project represent this
one?
Bottom up
cost of each software component is estimated
and the results are then combined to arrive at
the total cost for the project.
DISADVANTAGES
May overlook system level costs
More time consuming
TOP-DOWN
DISADVANTAGES
Tend to overlook low level components
No detailed basis
Algorithmic cost modelling
use of mathematical equations to predict
cost estimations
Equations are based on theory or
historical data.
An algorithmic cost model can be built by
analyzing the costs and attributes of
completed projects and finding the
closest-fit formula to actual experience.
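Such a fit can be sketched with ordinary least squares in log space, since E = a * Size^b becomes linear after taking logarithms. The project data below is illustrative (it roughly follows the organic-mode numbers used earlier in this document):

```python
import math

# Fit E = a * Size^b: log E = log a + b*log Size, solved by least squares.
def fit_power_model(sizes_kloc, efforts_mm):
    xs = [math.log(s) for s in sizes_kloc]
    ys = [math.log(e) for e in efforts_mm]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Illustrative historical data, roughly matching organic-mode COCOMO.
a, b = fit_power_model([10, 50, 100, 300], [27, 146, 302, 958])
print(f"E = {a:.2f} * KLOC^{b:.2f}")
```

With this data the fit recovers coefficients close to the organic-mode constants (a near 2.4, b near 1.05), illustrating the "calibrated to experience" idea.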
ADVANTAGES
Easy to modify input data
Easy to refine and customize formulas
Objectively calibrated to experience
DISADVANTAGES
Unable to deal with exceptional conditions
Some experience and factors can not be quantified
Sometimes algorithms may be proprietary
The Waterfall Model
The Waterfall Model was the first Process Model to be
introduced.
It is also referred to as a linear-sequential life cycle
model.
It is very simple to understand and use.
In a waterfall model, each phase must be completed before
the next phase can begin and there is no overlapping in the
phases.
The sequential phases in Waterfall model are :
Requirement Gathering and analysis − All possible
requirements of the system to be developed are captured in
this phase and documented in a requirement specification
document.
System Design − The requirement specifications from the first
phase are studied in this phase and the system design is
prepared. This system design helps in specifying hardware and
system requirements and helps in defining the overall system
architecture.
Implementation − With inputs from the system design, the
system is first developed in small programs called units, which
are integrated in the next phase. Each unit is developed and
tested for its functionality, which is referred to as Unit Testing.
Integration and Testing − All the units developed in the
implementation phase are integrated into a system after testing
of each unit. Post integration the entire system is tested for any
faults and failures.
Deployment of system − Once the functional and non-
functional testing is done, the product is deployed in the
customer environment or released into the market.
Maintenance − There are some issues which come up in the
client environment. To fix those issues, patches are released. Also
to enhance the product some better versions are released.
Maintenance is done to deliver these changes in the customer
environment.
Sequential Development: The
Waterfall Model
Discussion of the Waterfall Model
The waterfall model is a heavyweight process with full
documentation of each process step.
Advantages:
Process visibility
Separation of tasks
Quality control at each step
Cost monitoring at each step
Disadvantages:
In practice, each stage in the process reveals new
understanding of the previous stages, which often requires
the earlier stages to be revised.
The Waterfall Model is not flexible enough.
Discussion of the Waterfall Model
A pure sequential model is impossible
Examples:
A feasibility study cannot create a proposed budget and
schedule without a preliminary study of the
requirements and a tentative design.
Detailed design and implementation reveal gaps in the
requirements specification.
Requirements and/or technology may change during the
development.
The plan must allow for some form of iteration.
Modified Waterfall Model
Sequential Development
Sequential processes work best when the requirements are
well understood and the design is straight forward, e.g.,
Conversions of manual data processing systems where the
requirements were well understood and few changes were
made during the development (e.g., electricity billing).
New models of a product where the functionality is closely
derived from an earlier product (e.g. automatic braking
system for a car).
Portions of a large system where some components have
clearly defined requirements and are clearly separated from
the rest of the system.
Contracts
Note about contracts for software development
Some organizations contract for software development
by placing separate contracts for each stage of the
Waterfall Model or arrange for payment after each stage.
This is a very bad practice.
Economic Rationale for the
Waterfall Model
To achieve a successful software product, all subgoals
must be met
Avoidable costly consequences will occur unless early
goals are thoroughly satisfied
Any different ordering of the subgoals will produce a
less successful software product
Early detection of errors (particularly requirements)
will mean simple, less costly changes are needed
Refinements – Incremental
Development
Increments of functional capability
Increment 1 – basic capability to operate
Increment 2 – value added production-mode capabilities
Increment 3 – nice-to-have features
Advantages
Each increment is smaller, so it is easier to build & test
Incorporates user experience in a less expensive way
Reduces labour costs
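The three increments listed above can be sketched as staged releases, each extending the last. This is an illustrative sketch; the feature names are invented and are not from the source.

```python
# Minimal sketch of incremental development: each increment adds a
# set of capabilities on top of the previous release.

INCREMENTS = [
    ["login", "view_records"],      # Increment 1: basic capability to operate
    ["search", "reports"],          # Increment 2: production-mode capabilities
    ["themes", "export_pdf"],       # Increment 3: nice-to-have features
]

def release(upto):
    """Features delivered after `upto` increments (1-based)."""
    features = []
    for increment in INCREMENTS[:upto]:
        features.extend(increment)
    return features

# After two increments, users already have a working, testable product.
print(release(2))  # ['login', 'view_records', 'search', 'reports']
```

Because each release is usable, user experience with increment 1 can shape increments 2 and 3, which is the cost advantage the slide lists.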
Refinements – advancemanship
Anticipatory documentation
Define detailed objectives & plans for future software
development activities
Produce early versions of user documentation
Software scaffolding
Extra products that need to be developed to ensure a smooth &
efficient build of the main software
Advantages
Reduce overall costs by limiting the time & energy spent in
non-productive activities
Redistribute costs – greater early investment reduces late
investment costs
What is a Work Breakdown Structure?
Breaking work into smaller tasks is a common productivity technique used to make
the work more manageable and approachable. For projects, the Work Breakdown
Structure (WBS) is the tool that utilizes this technique and is one of the most
important project management documents. It singlehandedly integrates the scope, cost
and schedule baselines, ensuring that project plans are in alignment.
The Project Management Institute (PMI) Project Management Body of Knowledge
(PMBOK) defines the Work Breakdown Structure as a “deliverable-oriented
hierarchical decomposition of the work to be executed by the project team.” There
are two types of WBS:
1) Deliverable-Based
2) Phase-Based.
The most common and preferred approach is the Deliverable-Based approach. The
main difference between the two approaches is the Elements identified in the first
Level of the WBS.
Deliverable-Based Work Breakdown Structure
A Deliverable-Based Work Breakdown Structure clearly demonstrates the relationship
between the project deliverables (i.e., products, services or results) and the scope (i.e.,
work to be executed). Figure 1 is an example of a Deliverable-Based WBS for building
a house. Figure 2 is an example of a Phase-Based WBS for the same project.
In Figure 1, the Level 1 Elements are summary deliverable descriptions. The Level 2
Elements in each Leg of the WBS are all the unique deliverables required to create
the respective Level 1 deliverable.
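The two-level structure just described can be sketched as a nested mapping. This is an illustrative sketch only; the Element names are invented to echo the house example, since the figures themselves are not reproduced here.

```python
# Minimal sketch of a Deliverable-Based WBS: Level 1 Elements are
# summary deliverables, and the Level 2 Elements in each Leg are the
# unique deliverables required to create the Level 1 deliverable.
# (Element names are invented, not taken from Figure 1.)

house_wbs = {
    "1.0 Foundation": ["1.1 Excavation", "1.2 Footings", "1.3 Slab"],
    "2.0 Structure":  ["2.1 Framing", "2.2 Roofing"],
    "3.0 Interior":   ["3.1 Electrical", "3.2 Plumbing", "3.3 Finishes"],
}

def count_elements(wbs):
    """Total number of Elements across Level 1 and Level 2."""
    return len(wbs) + sum(len(children) for children in wbs.values())

print(count_elements(house_wbs))  # 11
```

A Phase-Based WBS would have the same shape, but with project phases (e.g. planning, construction) as the Level 1 keys instead of deliverables.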
Phase-Based Work Breakdown Structure
In Figure 2, a Phase-Based WBS, the Level 1 has five Elements. Each of these Elements
are typical phases of a project. The Level 2 Elements are the unique deliverables in
each phase. Regardless of the type of WBS, the lower Level Elements are all
deliverables. A Phase-Based WBS requires work associated with multiple Elements to be
divided into the work unique to each Level 1 Element. A WBS Dictionary is created to
describe the work in each Element.
A good WBS is simply one that makes the project more manageable. Every project is
different; every project manager is different and every WBS is different. So, the right
WBS is the one that best answers the question, “What structure makes the project
more manageable?”.
There are many WBS software tools available. Some of them are based on mind
mapping and others are drawing tools.
WORK PACKAGES
Figure 3 shows the House Project Work Breakdown Structure expanded to Level 1, 2,
and 3 Elements. The lowest Levels of each Leg and Branch of the WBS are called
Work Packages. Work Packages cover information related to the deliverable, such as
owner, milestones, durations, resources, risks, etc. This information is described in the
WBS Dictionary.
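A Work Package record carrying the attributes the text lists (owner, milestones, durations, resources, risks) might be sketched like this. The field names and sample values are assumptions for illustration, not PMBOK definitions.

```python
# Minimal sketch of a Work Package entry as it might appear in a
# WBS Dictionary. Field names and example data are invented.

from dataclasses import dataclass, field

@dataclass
class WorkPackage:
    wbs_id: str                                 # e.g. "3.1" (lowest-Level Element)
    deliverable: str                            # what this package produces
    owner: str                                  # who is responsible
    duration_days: int                          # planned duration
    milestones: list = field(default_factory=list)
    resources: list = field(default_factory=list)
    risks: list = field(default_factory=list)

wp = WorkPackage("3.1", "Electrical rough-in", "Subcontractor A", 10,
                 milestones=["Inspection passed"])
print(wp.wbs_id, wp.owner)  # 3.1 Subcontractor A
```

Collecting one such record per lowest-Level Element is, in effect, the WBS Dictionary the text refers to.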
PLANNING PACKAGES
There is another type of Work Package called a Planning Package. When the project
management plan is approved, scope is known, but not necessarily all of the details.
In order to apply the 100% Rule and capture all of the scope, Planning Packages are
created. It is understood that as details are defined, the Planning Packages eventually
evolve to Work Packages. In the House Project, the project manager knows that the
house will have fixtures, but at the time construction begins, there is only a fixture
allowance and no fixtures identified. Once the fixtures are determined, the associated
Planning Package becomes a Work Package. This planning process is called Rolling
Wave Planning and is a form of Progressive Elaboration.
CONTROL ACCOUNTS
The other application of the WBS is as a monitoring and controlling tool. This is
accomplished by defining Control Accounts. Control Accounts are WBS Elements at
which the project plans to monitor and report performance. The Control Accounts
can be any Element in the WBS. In the House Project, the project manager decides
that the project risks associated with using subcontractors can be better managed if
the project reports performance for each subcontractor. To monitor their
performance, Elements 3.1, 3.2 and 3.3 have been identified as Control Accounts.
However, the remaining work in Elements 1.0 and 2.0 will be performed by company
resources with less risk, so the project manager does not feel that monitoring and
controlling are needed at lower Levels. To assist with the monitoring and reporting, project
management information tools are used to collect, analyze and report information at
any Element within the WBS.
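Reporting performance at Control Accounts amounts to rolling Work Package data up to the chosen Elements. Here is a small sketch of that rollup for the subcontractor Control Accounts 3.1, 3.2 and 3.3 mentioned above; all cost figures and lower-level WBS ids are invented for illustration.

```python
# Minimal sketch of rolling actual costs recorded at Work Packages
# up to their Control Accounts, so performance can be reported per
# subcontractor. All ids and amounts are invented.

work_package_costs = {        # WBS id -> actual cost to date
    "3.1.1": 4000, "3.1.2": 1500,
    "3.2.1": 2500,
    "3.3.1": 3000, "3.3.2": 500,
}

control_accounts = ["3.1", "3.2", "3.3"]

def rollup(costs, account):
    """Sum the costs of all Work Packages under one Control Account."""
    return sum(cost for wbs_id, cost in costs.items()
               if wbs_id.startswith(account + "."))

report = {ca: rollup(work_package_costs, ca) for ca in control_accounts}
print(report)  # {'3.1': 5500, '3.2': 2500, '3.3': 3500}
```

The same rollup works for schedule or earned-value data; the Control Account simply fixes the Level of the WBS at which the numbers are aggregated and reported.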