
Recap - Planning Essentials

Project Management Resources Site www.infogoal.com/pmc

 Scoping
 What is the problem?
 (Tools: Deliverable Definition Table; Deliverable Structure Chart; Context-Level Data Flow Diagram; Use Case Diagram)
 How much will it cost?
 Budget estimation
 How long will it take?
 Schedule
 How many people will it take?
 Resources
 What might go wrong?
 Risk
 How do we manage scope?
 Scope control strategy

Check the 40-20-40 rule

Planning, Estimating, Scheduling
 What’s the difference?
 Plan: Identify activities. No specific start and
end dates.
 Estimating: Determining the size & duration of
activities.
 Schedule: Adds specific start and end dates,
relationships, and resources.
Estimation techniques
 Once the WBS is complete, there is a need to make software effort and cost estimates. There are two types of techniques that can be used to do this:
 Experience-based techniques: The estimate of future effort requirements is based on the manager's experience of past projects and the application domain. Essentially, the manager makes an informed judgment of what the effort requirements are likely to be.
 Algorithmic cost modeling: In this approach, a formulaic approach is used to compute the project effort based on estimates of product attributes, such as size, and process characteristics, such as the experience of the staff involved.

Experience-based approaches
 Experience-based techniques rely on judgments based on
experience of past projects and the effort expended in these
projects on software development activities.
 Typically, you identify the deliverables to be produced in a
project and the different software components or systems
that are to be developed.
 You document these in a spreadsheet, estimate them
individually and compute the total effort required.
 It usually helps to get a group of people involved in the effort
estimation and to ask each member of the group to explain
their estimate (Delphi technique).
Function Point Analysis (FPA)
 Focuses on the functionality and complexity of the application
 Independent of the technology
 Avoids the problem of different programming languages or technology platforms
 FP analysis is reliable in the sense that two developers trained in FP analysis will obtain the same results within an accepted margin of error
 Breaks a system into its functional components

Function Point Analysis
 An FPA is done at project onset based on the project's scope, followed by a more detailed analysis during the analysis and design stage
 FPA can also be done to evaluate the functionality of an off-the-shelf package.
 FPA is based on an evaluation of five primary elements that define the application boundary
 The elements are:
 Inputs
 Outputs
 Enquiries
 Logical Files
 Interfaces
(These 5 elements can be determined from DFDs & use case diagrams)
The Application Boundary for Function Point Analysis
[Figure: elements of FPA at the application boundary – the User and Other Apps interact with the application across its boundary]
 FPA success is heavily dependent on a good understanding of user requirements
Function Point Analysis – Defining the elements
I. Data Function Types
i. Internal Logical File (ILF) – An ILF is a file that stores data within the application boundary.
 For example, each entity in an E-R diagram would be considered an ILF. The complexity of an ILF can be classified as low, average, or high based on the number of data elements and subgroups (subclasses) of data elements maintained by the ILF.
 ILFs with fewer data elements (attributes) and subgroups will be less complex than ILFs with more data elements and subgroups.

ii. External Interface File (EIF) – An EIF is similar to an ILF; however, an EIF is a file maintained by another application system. The complexity of an EIF is determined using the same criteria used for an ILF.
Function Point Analysis Elements
II. Transactional Types
iii. External Input (EI) – An EI refers to transactional data that originate outside the application and cross the application boundary from outside to inside. The data generally are added, deleted, or updated in one or more files internal to the application (i.e., internal logical files).
 A common example of an EI would be an input screen.
 Data can, however, pass through the application boundary from other applications.
 Based on its complexity, in terms of the number of internal files referenced, the number of data elements (i.e., fields) included, and any other human factors, each EI is classified as low, average, or high.

iv. External Output (EO) – Similarly, an EO is a transaction that allows data to exit the application boundary.
 Examples of EOs include reports, confirmation messages, derived or calculated totals, and graphs or charts. This data could go to screens, printers, or other applications. After the number of EOs is counted, they are rated based on their complexity, like the external inputs (EI).
Function Point Analysis Elements
Transactional Types ct'd
v. External Inquiry (EQ) – An EQ is a process or transaction that includes a combination of inputs and outputs for retrieving data from either the internal files or from files external to the application.
 EQs do not update or change any data stored in a file. They only read this information.
 Queries with different processing logic or a different input or output format are each counted as a separate EQ. Once the EQs are identified, they are classified based on their complexity as low, average, or high, according to the number of files referenced and the number of data elements included in the query.

Conducting an FPA - Steps
1. Determine the type of function point count to be conducted; this can be:
 Development – a new system built from scratch,
 Enhancement – maintenance of an existing system, or
 Application – an inventory of an existing system or off-the-shelf system.

2. Define the boundary of the application – identify the system scope using DFDs and use case diagrams.

3. Define all data functions and their degree of complexity – i.e. identify the data to be updated and queried by the system.
 Data functions are ILFs & EIFs
 Define their complexity based on their record types (subclasses) (RETs) & fields (DETs)
Determining ILF & EIF Complexity
[Figure: a Student record type showing its record subgroups (RETs) and fields (DETs)]
 Example complexity rating of a record type Student: there are 2 RETs & 6 DETs, therefore the complexity is Low from Table A2
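The rating in this example can be expressed as a small lookup. The sketch below is illustrative only: the thresholds follow the commonly published IFPUG-style complexity matrix, and the module's Table A2 is authoritative if it differs.

```python
# Hypothetical sketch: rating ILF/EIF complexity from RETs and DETs.
# Thresholds are assumed from the standard IFPUG-style matrix, not from Table A2.

def ilf_complexity(rets: int, dets: int) -> str:
    """Return 'Low', 'Average' or 'High' for an ILF/EIF."""
    det_band = 0 if dets <= 19 else 1 if dets <= 50 else 2
    ret_band = 0 if rets == 1 else 1 if rets <= 5 else 2
    matrix = [
        ["Low", "Low", "Average"],      # 1 RET
        ["Low", "Average", "High"],     # 2-5 RETs
        ["Average", "High", "High"],    # 6+ RETs
    ]
    return matrix[ret_band][det_band]

print(ilf_complexity(rets=2, dets=6))   # Student example -> 'Low'
```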
FPA steps ct'd
4. Define all transactional functions and their complexity.
These perform data processing:
 Can be EIs, EOs, or EQs
 EIs handle data originating externally; their complexity is given by the number of file types referenced and the DETs in the files (refer to Table A3)
 EOs perform data output; their complexity is determined in the same way as EI complexity, using Table A4
 EQs retrieve or query (but do not update) the data in ILFs & EIFs; determine their complexity using Table A5

Table A5: EQ Complexity


FPA Steps ct’d
5. Calculate the Unadjusted Function Point Count using the weighting table.

FPA Steps ct'd
6. Calculate the Value Adjustment Factor based on a set of General System Characteristics. Assuming the shown values for degree of influence, TDI is computed as the sum of the degrees of influence.

Degree of influence scale:
0 = not present / no influence
1 = incidental influence
2 = moderate influence
3 = average influence
4 = significant influence
5 = strong influence
FPA Steps ct'd
7. Calculate the final Adjusted Function Point Count:
 Adjusted Function Points = VAF * UFP
 This gives the FP count
 This can be used to estimate the SLOC based on the programming language to be used.
 For an FP count of 210, the SLOC estimate is read from a language conversion table.
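As a minimal sketch of that last step, the conversion can be coded as a table lookup. The SLOC-per-FP factors below are illustrative assumptions only; use your organisation's or a published conversion table for real estimates.

```python
# Hypothetical sketch: converting an adjusted FP count into a SLOC estimate.
# SLOC_PER_FP values are assumed for illustration, not authoritative.

SLOC_PER_FP = {
    "C": 97,
    "Java": 53,
    "Python": 27,
}

def fp_to_sloc(fp: int, language: str) -> int:
    return fp * SLOC_PER_FP[language]

print(fp_to_sloc(210, "Java"))   # ~11,130 SLOC for FP = 210 (with the assumed factor)
```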
Function Point Analysis Example
 Suppose the following elements & their properties have been determined after reviewing an application system:
 ILF: 3 low, 2 average and 1 complex
 EIF: 2 average
 EI: 3 low, 5 average and 4 complex
 EO: 4 low, 2 average and 1 complex
 EQ: 2 low, 5 average and 3 complex
 Unadjusted function points can be computed using the table:

      Low           Average       High          Total
ILF   3 * 7 = 21    2 * 10 = 20   1 * 15 = 15   56
EIF   0 * 5 = 0     2 * 7 = 14    0 * 10 = 0    14
EI    3 * 3 = 9     5 * 4 = 20    4 * 6 = 24    53
EO    4 * 4 = 16    2 * 5 = 10    1 * 7 = 7     33
EQ    2 * 3 = 6     5 * 4 = 20    3 * 6 = 18    44
UFP (sum of all rows)                           200
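The arithmetic in the table can be reproduced with a short script; a minimal sketch using the counts and weights from the example:

```python
# Unadjusted Function Point (UFP) count for the worked example.

WEIGHTS = {            # (low, average, high) weights per element type
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
    "EI":  (3, 4, 6),
    "EO":  (4, 5, 7),
    "EQ":  (3, 4, 6),
}

counts = {             # counts from the example: (low, average, high)
    "ILF": (3, 2, 1),
    "EIF": (0, 2, 0),
    "EI":  (3, 5, 4),
    "EO":  (4, 2, 1),
    "EQ":  (2, 5, 3),
}

ufp = sum(n * w for t in WEIGHTS
          for n, w in zip(counts[t], WEIGHTS[t]))
print(ufp)   # 200
```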
Computing the Value Adjustment Factor (VAF)
 The next step is to compute a Value Adjustment Factor (VAF)
 It is based on the Degrees of Influence (DI), often called the Processing Complexity Adjustment (PCA)
 Derived from the 14 General System Characteristics (GSC)
 To determine the total DI, each GSC is rated based on the following:
0 = not present or no influence
1 = incidental influence
2 = moderate influence
3 = average influence
4 = significant influence
5 = strong influence ..\FP General System Char.docx
Computing TDI & VAF, assuming the characteristics given

General System Characteristic      Degree of Influence
Data Communications                3
Distributed Data Processing        2
Performance                        4
Heavily Used Configuration         3
Transaction Rate                   3
On-line Data Entry                 4
End User Efficiency                4
Online Update                      3
Complex Processing                 3
Reusability                        2
Installation Ease                  3
Operational Ease                   3
Multiple Sites                     1
Facilitate Change                  2
Total Degrees of Influence (TDI)   40

Value Adjustment Factor: VAF = (TDI * 0.01) + 0.65 = (40 * 0.01) + 0.65 = 1.05

Total Adjusted Function Points: FP = UFP * VAF = 200 * 1.05 = 210
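Steps 6 and 7 can be checked with a few lines of code; a minimal sketch using the degrees of influence from the table above:

```python
# TDI, VAF and the adjusted FP count for the worked example.

degrees_of_influence = [3, 2, 4, 3, 3, 4, 4, 3, 3, 2, 3, 3, 1, 2]  # the 14 GSC ratings

tdi = sum(degrees_of_influence)          # 40
vaf = tdi * 0.01 + 0.65                  # 1.05
ufp = 200                                # from the UFP table
adjusted_fp = ufp * vaf
print(tdi, vaf, adjusted_fp)             # 40 1.05 210.0
```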


Function Point Analysis
 After reviewing the application, the Total Adjusted Function Point count (TAFP) is computed to be 210 (TAFP = UFP * VAF).
 That number can be transformed into development estimates:
 Productivity – how many function points a programmer can produce in a given period of time
 Can be based on lessons learned from prior project experience
 LOC – convert function points into lines of code based on the programming language
 Accuracy is not high due to individual programming styles, but the counts can be used to create an FP inventory of an organization's project portfolio
Algorithmic cost modelling (COCOMO)
 Cost is estimated as a mathematical function of product, project and process attributes whose values are estimated by project managers:
 Effort = A * Size^B * M (the software equation)
 Effort is measured in person-months; a person-month is the amount of time one person spends working on the software development project for one month.
 A is an organization-dependent constant that also reflects the type of software.
 B reflects the disproportionate effort for large projects and relates to the size and complexity of the system.
 M is a multiplier reflecting product, project, process, and people attributes.
 The most commonly used product attribute for cost estimation is code size.
 Most parametric models are similar but differ in the values of A, B and M.
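The equation itself is a one-liner; a minimal sketch, with placeholder values for A, B and M (the COCOMO-specific values are introduced in the following slides):

```python
# Basic algorithmic cost equation: Effort = A * Size^B * M.

def effort_person_months(size_ksloc: float, A: float, B: float, M: float) -> float:
    """Effort in person-months for a system of size_ksloc thousand SLOC."""
    return A * size_ksloc ** B * M

# Example with assumed values: A = 2.94, B = 1.10, M = 1.0
print(round(effort_person_months(50, A=2.94, B=1.10, M=1.0), 1))
```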

Estimation accuracy
 The size of a software system can only be known
accurately when it is finished.
 Several factors influence the final size
 Use of COTS and components;
 Programming language;
 Distribution of system.
 As the development process progresses, the
size estimate becomes more accurate.
 The estimates of the factors contributing to B and M
are subjective and vary according to the judgment of
the estimator.

The Constructive Cost Model II (COCOMO 2)
 An empirical model based on project experience.
 Well-documented, ‘independent’ model which is not tied to a
specific software vendor.
 Long history from initial version published in 1981 (COCOMO-81)
through various instantiations to COCOMO 2.
 COCOMO 2 takes into account different approaches to software
development:
 reuse,
 automatic code generation,
 component based engineering.

COCOMO 2 models
 COCOMO 2 incorporates a range of sub-models that produce
increasingly detailed software estimates.
 The sub-models in COCOMO 2 are:
i. Application composition model. Used when software is composed
from existing parts (component based engineering).
ii. Early design model. Used when requirements are available but
design has not yet started.
iii. Reuse model. Used to compute the effort of integrating reusable
components (reuse).
iv. Post-architecture model. Used once the system architecture has
been designed and more information about the system is available.
For large systems, the various sub-models can be used to compute the
size of different system components, and the results are then integrated.

COCOMO 2 Models Summary
[Figure: summary of the four COCOMO 2 sub-models and when each is used]
i. Application Composition model
 Used for prototyping and component-based SE
 Software size estimates are based on application points and developer productivity
 Application points: number of screens, reports, modules, and lines of scripting language

Table 1: Application point productivity (derived empirically)

Developer's capability & experience   Very Low   Low   Nominal   High   Very High
ICASE maturity and capability         Very Low   Low   Nominal   High   Very High
PROD (NAP/month)                      4          7     13        25     50

 PROD (productivity) depends on the developer's experience and the capability of the (ICASE) software tools used for development.
 It is expressed in application points (NAP) delivered per month; the dependency is given in the table above.
Application Composition model ct'd
 Gives a high-level estimate
 Formula:
 PM (effort) = (NAP * (1 - %reuse / 100)) / PROD, where:
 PM = effort in person-months
 NAP = total number of application points delivered in the system
 %reuse = estimate of the amount of code reuse in the system
 PROD = application point productivity, as shown in the table in the previous slide
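A minimal sketch of this estimate, with assumed example inputs (100 application points, 20% reuse, nominal productivity from Table 1):

```python
# Application composition estimate: PM = NAP * (1 - %reuse/100) / PROD.

def app_composition_effort(nap: int, pct_reuse: float, prod: float) -> float:
    """Effort in person-months from application points."""
    return nap * (1 - pct_reuse / 100) / prod

print(round(app_composition_effort(nap=100, pct_reuse=20, prod=13), 1))  # ~6.2 person-months
```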
ii. Early Design Model
 Helps to evaluate various options for implementing a software product before settling on an architectural design
 Formula: Effort = A * Size^B * M, where:
 A = 2.94, Size is in KSLOC (computed using FPA)
 1.1 <= B <= 1.24, varies with project novelty
 M = PERS * RCPX * RUSE * PDIF * PREX * FCIL * SCED
 PERS (personnel capability)
 RCPX (reliability and complexity)
 RUSE (reuse required)
 PDIF (platform difficulty)
 PREX (personnel experience)
 FCIL (support facilities)
 SCED (schedule)
Attribute values for these are then estimated on a scale of 1 - 6
http://www.SoftwareEngineering-9.com/Web/Planning/costdrivers.html
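A minimal sketch of an early design estimate with the seven multipliers above. All multiplier values, the size, and the exponent below are assumptions for illustration; real values come from the COCOMO II cost-driver tables linked above.

```python
# Early design estimate: Effort = A * Size^B * M, M = product of 7 multipliers.
from math import prod

A = 2.94
B = 1.15                      # assumed: mid-range project novelty
size_ksloc = 30               # assumed size, e.g. from an FPA-based conversion
multipliers = {               # assumed illustrative ratings
    "PERS": 0.85, "RCPX": 1.10, "RUSE": 1.00, "PDIF": 1.05,
    "PREX": 1.10, "FCIL": 1.00, "SCED": 1.00,
}

M = prod(multipliers.values())
effort = A * size_ksloc ** B * M
print(round(effort, 1))       # person-months
```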
iii. Reuse Model
 Used to estimate the effort required to integrate components (reusable or generated); normally used with the post-architecture model. Reusable code can either be black box or white box.
 Black-box code requires zero effort; white-box code effort needs computation.
 PM Auto = (ASLOC * (AT / 100)) / ATPROD, where:
 ASLOC is the total number of lines of reused code, including code that is automatically generated.
 AT is the percentage of reused code that is automatically generated.
 ATPROD is the productivity of engineers in integrating such code; ATPROD = 2400 source statements/month.
 PM Auto is the effort needed to integrate automatically generated code.
Reuse Model ct'd
 Reused code from other systems:
 Formula: ESLOC = ASLOC * AAM, where:
 The formula considers the effort required for software understanding, making changes to the reused code, and making changes to the system to integrate that code.
 ESLOC is the equivalent number of lines of new source code.
 ASLOC is the number of lines of code in the components that have to be changed.
 AAM is an Adaptation Adjustment Multiplier (adjusts estimates to reflect the effort required to reuse code).
Reuse Model ct'd
 AAM = AAF + SU + AA, adjusts estimates to reflect the effort needed to reuse code
 AAF = Adaptation component (design, code & integration changes)
 SU = Understanding component (10 (simple) to 50 (complex)); gauges the engineer's understanding of the code.
 AA = Assessment factor; the decision effort required to check whether to reuse or not (0 to 8)
 PM ESLOC = A * ESLOC^B * M; therefore, adding the two:

PM Reuse = PM Auto + PM ESLOC
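The reuse model equations on this and the previous slide can be sketched as below. All numeric inputs (reused-code sizes, percentage generated, the AAM value, and A, B, M) are assumptions for illustration.

```python
# Reuse model: effort for auto-generated code plus effort for adapted code.

def pm_auto(asloc: int, at_pct: float, atprod: float = 2400) -> float:
    """Effort (person-months) to integrate automatically generated code."""
    return (asloc * at_pct / 100) / atprod

def pm_esloc(asloc_changed: int, aam: float, A: float, B: float, M: float) -> float:
    """Effort for adapted (white-box) code via its equivalent new size."""
    esloc = asloc_changed * aam          # ESLOC = ASLOC * AAM
    return A * (esloc / 1000) ** B * M   # size expressed in KSLOC

# Example: 20,000 reused lines of which 30% is auto-generated, plus
# 10,000 adapted lines with an assumed AAM of 0.5.
pm_reuse = pm_auto(20_000, 30) + pm_esloc(10_000, 0.5, A=2.94, B=1.15, M=1.0)
print(round(pm_reuse, 1))
```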

iv. Post-architecture Model
 The most detailed and most accurate model; used once there is an architectural design
 Used to make estimates for detailed system components, based on the equation: PM (Effort) = A * Size^B * M
 Recall:
 A = 2.94
 M is the product of effort multipliers (17 for the post-architecture model)
 B relates to project complexity and is computed using the table in the next slide

 Uses 3 parameters to give the total size of code in KSLOC, i.e.:
 i. SLOC – total number of lines of new code to be developed
 ii. ESLOC – estimate of the reuse costs, calculated using the reuse model
 iii. An estimate of the number of lines of code that are likely to be modified because of changes to the system requirements
Size = (i + ii + iii) is substituted in the effort formula.
Post-architecture model
 Scale factors for the post-architecture model are rated from 0 to 5, where 0 = Extra High and 5 = Very Low.
 B = 1.01 + (sum of the scale factor ratings / 100)
 M is the product of the values of the 17 cost drivers in the post-architecture model
Example
 Suppose a team is working on a project that is new to it. The project client has not defined the process to be used or allowed time in the project schedule for significant risk analysis. The team was put together to implement this system. The organization has recently put in place a process improvement program and has been rated as a Level 2 organization according to the Capability Maturity Model (CMM).
 Find the value of B based on this information.
Solution:
 Precedentedness: 4 (low – new project)
 Development flexibility: 1 (high – no client involvement)
 Architecture/risk resolution: 5 (very low – no risk analysis)
 Team cohesion: 3 (nominal – no information available)
 Process maturity: 3 (nominal – some process in place)
B = 1.01 + (sum of ratings / 100) = 1.01 + 16/100 = 1.17
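A minimal sketch of this scale-factor computation, using the standard COCOMO II scale-factor acronyms for the factors named above:

```python
# Computing the exponent B from the five scale-factor ratings in the example.

scale_factors = {   # ratings (0 = extra high ... 5 = very low)
    "PREC (precedentedness)":              4,
    "FLEX (development flexibility)":      1,
    "RESL (architecture/risk resolution)": 5,
    "TEAM (team cohesion)":                3,
    "PMAT (process maturity)":             3,
}

B = 1.01 + sum(scale_factors.values()) / 100
print(B)   # 1.17
```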
COCOMO II Cost Drivers
 Multiplicative factors that determine the effort required to complete a project.
 Always rounded to the lower level if the rating falls between values
 Divided into product, computer, personnel and project attributes
i. Product attributes
 RELY – required system reliability (Very Low to Extra High)
 CPLX – complexity of system modules
 DOCU – extent of documentation required
 DATA – size of the database used
 RUSE – required percentage of reusable components
COCOMO II Cost Drivers
ii. Computer attributes
 TIME – execution time constraints
 PVOL – volatility of the development platform
 STOR – memory constraints

iii. Personnel attributes
 ACAP – capability of project analysts
 PCON – personnel continuity
 PEXP – programmer experience in the project domain
 PCAP – programmer capability
 AEXP – analyst experience in the project domain
 LTEX – language & tool experience
COCOMO II Cost Drivers
iv. Project attributes
 TOOL – use of software tools
 SCED – development schedule compression
 SITE – extent of multi-site working and quality of site communications

 Cost driver values are obtained from the COCOMO manual
 Values not affecting effort are taken as nominal (1)
Post-Architecture Example ct'd
 Suppose the team given above notices that RELY, CPLX, STOR, TOOL, and SCED are the only key cost drivers in the project. All of the other cost drivers have a nominal value of 1, so they do not affect the computation of the effort.
 Compute the effort required with the values:
 Size = 128 KSLOC
 Reliability = Very High (1.39)
 Complexity = Very High (1.30)
 Memory constraint = High (1.21)
 Tool use = Low (1.12)
 Schedule = Accelerated (1.29)
Effort = A * Size^B * M = 2.94 * 128^1.17 * M ≈ 2,710 person-months,
where M = 1.39 * 1.30 * 1.21 * 1.12 * 1.29, since all other multipliers are 1.
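A minimal sketch reproducing this computation with the values given in the example:

```python
# Post-architecture effort for the worked example.
from math import prod

A = 2.94
B = 1.17                                          # from the scale-factor example
size_ksloc = 128
cost_drivers = [1.39, 1.30, 1.21, 1.12, 1.29]     # RELY, CPLX, STOR, TOOL, SCED

M = prod(cost_drivers)                            # all other drivers are nominal (1)
effort = A * size_ksloc ** B * M
print(round(effort))                              # ~2710 person-months
```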
Project Duration Estimation
 TDEV = 3 * (PM)^(0.33 + 0.2*(B - 1.01)), where:
 TDEV is the nominal schedule for the project, in calendar months, ignoring any multiplier that is related to the project schedule.
 PM is the effort computed by the COCOMO model.
 B is the complexity-related exponent.
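Continuing the running example (PM ≈ 2,710 person-months, B = 1.17), a minimal sketch of the schedule equation:

```python
# Nominal schedule: TDEV = 3 * PM^(0.33 + 0.2*(B - 1.01)).

def tdev_months(pm: float, B: float) -> float:
    """Nominal schedule in calendar months."""
    return 3 * pm ** (0.33 + 0.2 * (B - 1.01))

print(round(tdev_months(2710, 1.17), 1))   # ~52 calendar months
```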
