Scoping
What is the problem?
(Tools: Deliverable Definition Table; Deliverable Structure Chart;
Context-Level Data Flow Diagram; Use Case Diagram)
How much will it cost?
Budget estimation
How long will it take?
Schedule
Experience-based approaches
Experience-based techniques rely on judgments formed from
experience of past projects and the effort expended in those
projects on software development activities.
Typically, you identify the deliverables to be produced in a
project and the different software components or systems
that are to be developed.
You document these in a spreadsheet, estimate them
individually and compute the total effort required.
It usually helps to involve a group of people in the effort
estimation and to ask each member of the group to explain
their estimate (the Delphi technique).
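The spreadsheet procedure described above can be sketched as follows. All deliverable names and person-month figures are invented for illustration; in practice each figure would come from an estimator's judgment and group discussion:

```python
# Experience-based estimation sketch: each group member estimates every
# deliverable (in person-months), the group discusses outliers, and the
# per-deliverable mean is summed to get the total effort.
# All names and numbers below are illustrative, not real project data.

estimates = {
    "user interface":   [4.0, 5.0, 6.0],   # one figure per estimator
    "database layer":   [3.0, 3.5, 4.0],
    "reporting module": [2.0, 2.5, 1.5],
}

total = 0.0
for deliverable, figures in estimates.items():
    mean = sum(figures) / len(figures)     # consensus after discussion
    total += mean
    print(f"{deliverable}: {mean:.2f} person-months")

print(f"Total effort: {total:.2f} person-months")
```

A real Delphi round would iterate: estimators revise their figures after hearing each explanation, until the spread narrows.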
Function Point Analysis (FPA)
Focuses on the functionality and complexity of the
application.
Independent of technology: avoids the problem of
comparing across different programming languages or
technology platforms.
FPA is reliable in the sense that two developers trained
in FPA will obtain the same results within an accepted
margin of error.
Breaks a system down into its functional components.
Function Point Analysis
An FPA is done at project onset, based on the project's scope, followed
by a more detailed analysis during the analysis and design stage.
FPA can also be used to evaluate the functionality of an off-the-shelf
package.
FPA is based on an evaluation of five primary elements that define
the application boundary.
The elements are:
Inputs
Outputs
Enquiries
Logical files
Interfaces
(These five elements can be determined from DFDs and use case
diagrams.)
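Counting these five elements gives an unadjusted function point total. The weights below are the standard IFPUG complexity weights, which the slides do not reproduce; the element counts themselves are invented for illustration:

```python
# Unadjusted Function Points (UFP): each of the five element types is
# counted at low/average/high complexity and weighted.  The weights are
# the standard IFPUG values (an assumption: the slides omit them); the
# counts below are illustrative only.

WEIGHTS = {
    "inputs":        (3, 4, 6),
    "outputs":       (4, 5, 7),
    "enquiries":     (3, 4, 6),
    "logical_files": (7, 10, 15),   # internal logical files
    "interfaces":    (5, 7, 10),    # external interface files
}

# counts[element] = (low, average, high) occurrences -- illustrative
counts = {
    "inputs":        (3, 2, 1),
    "outputs":       (2, 1, 0),
    "enquiries":     (2, 0, 0),
    "logical_files": (1, 1, 0),
    "interfaces":    (0, 1, 0),
}

ufp = sum(
    n * w
    for elem, weights in WEIGHTS.items()
    for n, w in zip(counts[elem], weights)
)
print("Unadjusted Function Points:", ufp)
```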
The Application Boundary for Function
Point Analysis
[Figure: Elements of FPA — the application boundary separating the
application from the user and from other applications, derived from
user requirements.]
Function Point Analysis – Defining the elements
I. Data Function Types
i. Internal Logical File (ILF) – An ILF is a file that stores data within the
application boundary.
For example, each entity in an E-R diagram would be considered
an ILF. The complexity of an ILF is classified as low,
average, or high based on the number of data elements and
subgroups (subclasses) of data elements maintained by the ILF.
ILFs with fewer data elements (attributes) and subgroups are
less complex than ILFs with more data elements and subgroups.
Conducting an FPA - Steps
1. Determine the type of function point count to be conducted. It can be:
Development – a new system built from scratch,
Enhancement – maintenance of an existing system, or
Application – an inventory of an existing system or an off-the-shelf
system.
FPA Steps ct’d
6. Calculate the Value Adjustment Factor (VAF) based on a set of General
System Characteristics. Each characteristic is rated for its degree of
influence, and the Total Degree of Influence (TDI) is computed as the
sum of these ratings:
0 = not present / no influence
1 = incidental influence
2 = moderate influence
3 = average influence
4 = significant influence
5 = strong influence
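Step 6 can be sketched numerically. The slides define only TDI as the sum of the ratings; the formula VAF = 0.65 + 0.01 × TDI is the standard FPA value adjustment (an assumption here), and the 14 ratings below are invented:

```python
# Value Adjustment Factor from the 14 General System Characteristics.
# Each is rated 0-5; TDI is their sum.  The standard FPA formula
# VAF = 0.65 + 0.01 * TDI is assumed (the slides state only the TDI
# definition); the ratings below are invented for illustration.

ratings = [3, 2, 4, 3, 5, 1, 0, 2, 3, 4, 2, 3, 1, 2]  # 14 GSC ratings
assert len(ratings) == 14 and all(0 <= r <= 5 for r in ratings)

tdi = sum(ratings)            # Total Degree of Influence
vaf = 0.65 + 0.01 * tdi       # ranges from 0.65 to 1.35
print("TDI =", tdi, " VAF =", vaf)

ufp = 66                      # illustrative unadjusted count
afp = ufp * vaf               # adjusted function points
print("Adjusted FP =", round(afp, 1))
```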
Function Point Analysis Example
Suppose the following elements and their properties have been
determined after reviewing an application system:
Estimation accuracy
The size of a software system can only be known
accurately when it is finished.
Several factors influence the final size:
Use of COTS and components;
Programming language;
Distribution of the system.
As the development process progresses, the size
estimate becomes more accurate.
The estimates of the factors contributing to B and M
are subjective and vary according to the judgment of
the estimator.
The Constructive Cost Model II (COCOMO 2)
An empirical model based on project experience.
Well-documented, ‘independent’ model which is not tied to a
specific software vendor.
Long history from initial version published in 1981 (COCOMO-81)
through various instantiations to COCOMO 2.
COCOMO 2 takes into account different approaches to software
development:
reuse,
automatic code generation,
component based engineering.
COCOMO 2 models
COCOMO 2 incorporates a range of sub-models that produce
increasingly detailed software estimates.
The sub-models in COCOMO 2 are:
i. Application composition model. Used when software is composed
from existing parts (component based engineering).
ii. Early design model. Used when requirements are available but
design has not yet started.
iii. Reuse model. Used to compute the effort of integrating reusable
components (reuse).
iv. Post-architecture model. Used once the system architecture has
been designed and more information about the system is available.
For large systems, the various sub-models can be used to compute the
sizes of different system components, and the results then integrated.
i. Application Composition model
Used for prototyping and component-based SE.
Software size estimates are based on application points and developer
productivity. Application points (NAP) count the number of screens,
reports, modules, and lines of scripting language.

Table 1: Application-point productivity, derived empirically

Developer's capability & experience: Very Low | Low | Nominal | High | Very High
ICASE maturity and capability:       Very Low | Low | Nominal | High | Very High
PROD (NAP/month):                        4    |  7  |   13    |  25  |    50
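The productivity table above feeds a simple effort formula, commonly quoted for this sub-model as PM = NAP × (1 − %reuse/100) / PROD; the NAP and reuse figures below are invented for illustration:

```python
# Application-composition effort: PM = NAP * (1 - %reuse/100) / PROD.
# PROD comes from Table 1; the NAP and %reuse figures are invented.

PROD = {"very low": 4, "low": 7, "nominal": 13, "high": 25, "very high": 50}

nap = 130                 # number of application points (illustrative)
pct_reuse = 20            # percentage of reused components (illustrative)
prod = PROD["nominal"]    # developer capability / ICASE maturity: nominal

pm = nap * (1 - pct_reuse / 100) / prod
print(f"Estimated effort: {pm:.1f} person-months")
```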
Reuse Model ct’d
AAM = AAF + SU + AA; adjusts estimates to reflect the extra effort
needed to reuse code:
AAF = adaptation component (design, code & integration
changes)
SU = understanding component (10 (simple) to 50 (complex));
gauges how well engineers understand the code.
AA = assessment factor; decision effort required to check
whether to reuse or not (0 to 8).
The equivalent size (ESLOC) computed from AAM is added to the size of
new code, and the total is then used in the effort equation
PM = A × Size^B × M.
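A small sketch of the reuse adjustment. Interpreting AAM as a percentage of the adapted size (ESLOC = ASLOC × AAM / 100) is an assumption consistent with the usual COCOMO II reuse model; all figures below are invented:

```python
# Reuse model sketch: equivalent new lines of code (ESLOC) for adapted
# code.  AAM = AAF + SU + AA is treated as a percentage of the adapted
# size -- an assumption, since the slides do not show the scaling.
# All figures are invented for illustration.

asloc = 20_000   # adapted source lines of code (illustrative)
aaf = 30         # design/code/integration changes, as a percentage
su = 20          # understanding component (10 simple .. 50 complex)
aa = 4           # assessment factor (0 .. 8)

aam = aaf + su + aa           # adaptation adjustment multiplier: 54
esloc = asloc * aam / 100     # equivalent size in new-code terms
print(f"ESLOC = {esloc:.0f}") # fed into PM = A * Size^B * M with new code
```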
iv. Post-architecture Model
The most detailed and most accurate model; used with the architectural design.
Used to make estimates for detailed system components, based on the
equation: PM (effort) = A × Size^B × M
Recall:
A = 2.94
M is the product of the effort multipliers (the values of 17 cost drivers
in the post-architecture model)
B relates to project complexity and is computed from the scale factor
ratings. Scale factors for the post-architecture model are rated 0 to 5
(0 = extra high, 5 = very low):
B = 1.01 + (sum of the scale factor ratings / 100)
Example
Suppose a team is working on a project that is new to it. The project client has
not defined the process to be used or allowed time in the project schedule for
significant risk analysis. The team was put together specifically to implement
this system. The organization has recently put in place a process improvement
programme and has been rated as a Level 2 organization according to the
Capability Maturity Model (CMM).
Find the value of B based on this information.
Solution:
Precedentedness: 4 (low – new project)
Development flexibility: 1 (high – no client involvement)
Architecture/risk resolution: 5 (very low – no risk analysis)
Team cohesion: 3 (nominal – no information available)
Process maturity: 3 (nominal – some process in place)
B = 1.01 + (total ratings / 100) = 1.01 + 16/100 = 1.17
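The worked solution above maps directly to a few lines of arithmetic:

```python
# Exponent B for the worked example: B = 1.01 + (sum of scale factor
# ratings) / 100.  Ratings are taken from the solution above.
scale_factors = {
    "precedentedness":         4,  # low: project is new to the team
    "development_flexibility": 1,  # high: no client involvement
    "architecture_risk":       5,  # very low: no risk analysis done
    "team_cohesion":           3,  # nominal: no information available
    "process_maturity":        3,  # nominal: CMM level 2
}

b = 1.01 + sum(scale_factors.values()) / 100
print("B =", b)   # 1.01 + 16/100 = 1.17
```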
COCOMO II Cost Drivers
Multiplicative factors that determine the effort required to complete a project.
A rating falling between two levels is always rounded down to the lower level.
Divided into product, computer, personnel and project attributes.
i. Product attributes
RELY – required system reliability (very low to extra high)
CPLX – complexity of system modules
DOCU – extent of documentation required
DATA – size of the database used
RUSE – required percentage of reusable components
COCOMO II Cost Drivers ct'd
ii. Computer attributes
TIME – execution time constraints
PVOL – volatility of the development platform
STOR – memory constraints

Suppose the team given above notices that RELY, CPLX, STOR, TOOL and
SCED are the only key cost drivers in the project. All of the other cost drivers
have a nominal value of 1, so they do not affect the computation of the effort.
Compute the effort required with the values:
Size = 128 KSLOC
Reliability (RELY) = very high (1.39)
Complexity (CPLX) = very high (1.3)
Memory constraint (STOR) = high (1.21)
Tool use (TOOL) = low (1.12)
Schedule (SCED) = accelerated (1.29)
Effort = A × Size^B × M
= 2.94 × 128^1.17 × M ≈ 2 710 person-months,
where M = 1.39 × 1.3 × 1.21 × 1.12 × 1.29 ≈ 3.16, since all other
multipliers are 1.
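The effort computation above can be checked directly:

```python
# Effort for the worked example: PM = A * Size^B * M, with B = 1.17
# from the scale-factor example and the five non-nominal multipliers.
A = 2.94
size = 128            # KSLOC
B = 1.17              # exponent from the scale-factor example

# RELY * CPLX * STOR * TOOL * SCED; all other multipliers are 1.
M = 1.39 * 1.3 * 1.21 * 1.12 * 1.29   # ~3.16

effort = A * size ** B * M
print(f"Effort = {effort:.0f} person-months")   # ~2 710, as on the slide
```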
Project Duration Estimation
TDEV = 3 × PM^(0.33 + 0.2 × (B − 1.01)), where:
TDEV is the nominal schedule for the project, in
calendar months, ignoring any multiplier that is
related to the project schedule;
PM is the effort computed by the COCOMO model;
B is the complexity-related exponent.
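Applying the schedule formula to the effort example above (using the slide's rounded figure of 2 710 person-months):

```python
# Nominal schedule: TDEV = 3 * PM^(0.33 + 0.2 * (B - 1.01)),
# using the effort and exponent from the preceding example.
pm = 2710             # person-months (slide's rounded effort figure)
B = 1.17

exponent = 0.33 + 0.2 * (B - 1.01)   # 0.33 + 0.032 = 0.362
tdev = 3 * pm ** exponent
print(f"TDEV = {tdev:.1f} calendar months")   # roughly 52 months
```

Note how weakly TDEV grows with effort: even a 2 710 person-month project compresses to about four years of calendar time, implying a large team.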