
Planning and Executing a Software Project

Software Project Management - Project size estimation metrics:
Lines of Code (LOC), Function Point (FP).
Project estimation techniques -
Empirical estimation techniques,
Putnam’s model,
Basic COCOMO model,
Halstead’s Software Science.
Staffing Level Estimation,
Risk management,
Managing people -
group working,
choosing and keeping people.
Software cost estimation -
software productivity,
estimation of productivity,
factors affecting programming productivity,
project duration and staffing;
Quality Assurance Plans – Project Monitoring Plans
Software Project Management

It is a combination of processes, tasks, and tools used to transition a project from
start to finish.
Initiation: when the project starts
Planning: when all of the key decisions are made
Execution: when project work actually takes place
Control: when adjustments are made to the plan
Monitoring: when project progress is checked
Termination: when the project comes to an end

Project management involves the use of various principles, procedures and
policies established to guide a project from its conception all the way
through to its completion.
Project management is important for the following reasons:
Clearly defines the plan of the project before it begins: Its primary function is to tame the
chaos by mapping out a clear plan of the project from beginning to end.
Establishes an agreed schedule and plan: Schedules help to eliminate delays or overruns
and provide a plan to be followed for all those involved with the project.
Creates a base for teamwork: A project requires people to work as a team, sharing and
supporting each other's knowledge and skills, and brings them together as a collaborative
team.
Resources are maximised: Project tracking and project risk management ensure that all
resources are used efficiently and are accounted for economically.
Helps to manage integration: Projects that are completed within an organization are
generally integrated with wider business processes and systems. Integration forms the
value aspect of projects and their management.
Helps to keep control of costs: Depending on the scope of the project, some projects can
incur significant costs for an organization. It is therefore important to keep on budget and to
control spending. Project management greatly reduces the risk of budget overruns.
Helps to manage change: Project management allows for effective change management
and makes it less of a complex task.
Quality is continuously managed: Project management helps to identify, manage and
control quality.
Knowledge: Captured knowledge serves as an asset to any business, and project management
helps to capture and retain it.
Creates an opportunity for learning: Project management ensures that the lessons of each
project are learned and applied in the future.
Project scope :
 Define before estimation
 Defines/ describes the functions and features that are to be delivered to
the end users
 The data that are input and output
 The context that is presented to users as a consequence of using the
S/w
 Performance – Processing time, response time, etc.
 Constraints – Limits imposed by external hardware, memory, processor, etc.
 Interfaces and reliability that bound the system
 Describes the following information.
 The elements included and excluded in the project
 The processes and entities
 The functions and features required in software according to the
user requirements.
Techniques that can be used are:

 A narrative description of the s/w scope after communicating with all
stakeholders
 Set of use-cases developed by the end users
 Feasibility
 Technology
 Finance
 Time
 Resources
Project Management Software

Project management plan: Project management software allows you to plan projects
whilst taking previous track records into account.
Tracking project progress in terms of completion, time and costs: The software makes
certain warning signs easy to spot, allowing for timely warnings.
Scheduling and time management: The importance of project scheduling cannot be
ignored. Each member receives their schedule and is aware of what is expected and
when.
Resource allocation: Certain software measures resource spending and also allocates
resources for each task and each member.
Budgeting: Costs and time are evaluated and controlled in real time, with regular updates of progress.
Communication and Collaboration: Supports proper communication among project team
members.
Documentation: Documents can be created and stored there safely.
User-friendly: Effective software should act as an enabler and not as a barrier.
Empirical Estimation:
• Estimation is the process of finding an estimate, or approximation, which is a
value that can be used for some purpose even if input data may be incomplete,
uncertain, or unstable.
• Estimation determines how much money, effort, resources, and time it will
take to build a specific system or product. Estimation is based on −
– Past Data/Past Experience
– Available Documents/Knowledge
– Assumptions
– Identified Risks
• The four basic steps in Software Project Estimation are −
– Estimate the size of the development product.
– Estimate the effort in person-months or person-hours.
– Estimate the schedule in calendar months.
– Estimate the project cost in agreed currency.
Empirical Estimation:
Model derived using regression analysis on data collected from past projects
Empirically derived formulas are used to predict data that are a required part
of the software project-planning step.
The empirical data are derived from a limited sample of projects.
Empirically derived formulae to predict effort as a function of Lines of Code
(LOC) or Function Points (FP)
Empirical Estimation
Uses the formula
E = A + B x (ev)^C
A, B and C are empirically derived constants
E – Effort in person-months
ev – Estimation variable, either FP or LOC
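A minimal Python sketch of this generic formula; the default constants below are hypothetical placeholders, since real values of A, B and C must be derived by regression over an organization's past project data:

# Sketch of the generic empirical effort formula E = A + B * (ev)**C.
# The default constants are hypothetical placeholders.
def empirical_effort(ev, A=1.0, B=3.0, C=1.12):
    """Estimated effort in person-months for estimation variable ev (LOC or FP)."""
    return A + B * (ev ** C)

print(round(empirical_effort(33.2), 1))   # e.g. ev = 33.2 KLOC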
LOC Based
 Line of Code – Focuses on Software Functions
 Source Lines of Code (SLOC) is a software metric frequently used to measure the
size and complexity of a software project. It is typically used to predict the amount
of effort and time that will be required to develop a program, as well as to estimate
the programming productivity once the software is produced.
 Advantages:
 Universally accepted and is used in many models like COCOMO.
 Estimation is closer to developer’s perspective.
 Simple to use.
 Disadvantages:
 Different programming languages require different numbers of lines for the same functionality.
 No proper industry standard exists for this technique.
 It is difficult to estimate the size using this technique in the early stages of a project.
Name                                      Abbreviation   Estimated LOC
User Interface and Control Facilities     UICF           No. of lines in code
Two Dimensional Geometrical Analysis      2DGA           "
Three Dimensional Geometrical Analysis    3DGA           "
Data Base Management                      DBM            "
Computer Graphics Display Facilities      CGDF           "
Peripheral Control Function               PCF            "
Design Analysis Modules                   DAM            "
                                          Estimated Lines of Code (total)


Function Point Analysis: In this method, the number and type of functions supported
by the software are used to find the FPC (Function Point Count). The steps in function
point analysis are:
Count the number of functions of each proposed type.
Compute the Unadjusted Function Points(UFP).
Find Total Degree of Influence(TDI).
Compute Value Adjustment Factor(VAF).
Find the Function Point Count(FPC).
The explanation of the above points is given below:
Count the number of functions of each proposed type: Find the number of
functions belonging to the following types:
External Inputs: Functions related to data entering the system.
External outputs: Functions related to data exiting the system.
External Inquiries: These lead to data retrieval from the system but do not change the
system.
Internal Files: Logical files maintained within the system. Log files are not
included here.
External interface Files: These are logical files for other applications which are
used by our system.
Compute the Unadjusted Function Points (UFP): Categorise each of the five
function types as simple, average or complex based on their complexity. Multiply
the count of each function type by its weighting factor and find the weighted sum.
Function Points
 Focuses on the information domain rather than software functions
 FP value is computed using weighted factors and the number of information
domain values

Information Domain Value     Count   Simple   Average   Complex   FP Count
External Inputs               No.      3        4          6      No. x weight
External Outputs              No.      4        5          7        "
External Inquiries            No.      3        4          6        "
Internal Logical Files        No.      7       10         15        "
External Interface Files      No.      5        7         10        "
Total FP Count (Unadjusted Function Point count – UFP)               --

 14 characteristics define the Total Degree of Influence, and a Value
Adjustment Factor is calculated using weights varying from 0-5 for
each characteristic.
14 Characteristics for TDI (Total Degree of Influence) (F1 to F14)
Backup and Recovery                        Data Communication
Distributed Processing                     Performance Critical
Existing Operating Environment             Online Data Entry
Input Transaction over Multiple Screens    Master File Updated Online
Information Domain Value Complexity        Internal Processing Complexity
Code Designed for Reuse                    Conversion / Installation in Design
Multiple Installations                     Application Designed for Change

Value Adjustment Factor (VAF) = [0.65 + 0.01 x ∑(Fi)]
FP(estimated) = UFP x VAF
FP(estimated) = Total Unadjusted FP Count x [0.65 + 0.01 x ∑(Fi)]
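A small Python sketch of this FP computation, assuming hypothetical function counts and influence ratings; the weights come from the table above:

# Sketch of the FP computation: UFP from weighted counts, then FP = UFP x VAF.
WEIGHTS = {                       # (simple, average, complex)
    "external_inputs":     (3, 4, 6),
    "external_outputs":    (4, 5, 7),
    "external_inquiries":  (3, 4, 6),
    "internal_files":      (7, 10, 15),
    "external_interfaces": (5, 7, 10),
}

def unadjusted_fp(counts):
    """counts: {type: (n_simple, n_average, n_complex)} -> UFP."""
    return sum(n * w
               for t, ns in counts.items()
               for n, w in zip(ns, WEIGHTS[t]))

def function_points(counts, fi_ratings):
    """fi_ratings: the 14 influence ratings F1..F14, each 0-5."""
    vaf = 0.65 + 0.01 * sum(fi_ratings)
    return unadjusted_fp(counts) * vaf

# Hypothetical example: five 'average' functions of each type, all Fi rated 3.
counts = {t: (0, 5, 0) for t in WEIGHTS}
print(function_points(counts, [3] * 14))    # UFP = 150, VAF = 1.07 -> 160.5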
COCOMO
 COCOMO (Constructive Cost Model) is a regression model based on
LOC, i.e. the number of Lines of Code.
 It is a procedural cost estimation model for software projects, often
used as a process of reliably predicting the various parameters
associated with a project such as size, effort, cost, time and
quality.
 It was proposed by Barry Boehm in 1981.
 Most widely used in Industry
 The key parameters - Effort & Schedule:
 Effort: Amount of labour that will be required to complete a task. It
is measured in person-months units.
 Schedule: means the amount of time required for the completion of
the job (proportional to the effort put). It is measured in the units of
time such as weeks, months.
 COCOMO-1 and COCOMO-II
Boehm’s definition of Projects

Organic – Simple, small size; the problem is well understood, has been
solved in the past, and the team members have nominal experience with
the problem.

Semi-detached – More complex than organic; comparatively less familiar
and more difficult to develop than organic projects, requiring more
experience, better guidance and creativity.

Embedded – The highest level of complexity, creativity and experience
requirement. Requires a larger team size than the other two types, and
the developers need to be sufficiently experienced and creative to
develop such complex projects.
Types of Models: COCOMO consists of a hierarchy of three
increasingly detailed and accurate forms. Any of the three forms can
be adopted according to our requirements.
The types of COCOMO models are:
Basic COCOMO Model
Intermediate COCOMO Model
Detailed COCOMO Model

Basic COCOMO can be used for quick and slightly rough calculations of
Software Costs. Its accuracy is somewhat restricted due to the absence of
sufficient factor considerations.
E = a x (KLOC)^b
E – Effort in person-months
a and b are constants for different types of projects
KLOC – Kilo Lines of Code, i.e. size in thousands of lines
Basic COCOMO a and b values
SOFTWARE PROJECTS      a       b
Organic               2.4    1.05
Semi-Detached         3.0    1.12
Embedded              3.6    1.20
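A short Python sketch of the Basic COCOMO calculation using the constants from the table above; the 32 KLOC input is only an illustrative value:

# Basic COCOMO: E = a * (KLOC)**b, with a/b taken from the table above.
BASIC_COCOMO = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_cocomo_effort(kloc, project_type="organic"):
    """Estimated effort in person-months for a project of `kloc` thousand lines."""
    a, b = BASIC_COCOMO[project_type]
    return a * (kloc ** b)

print(round(basic_cocomo_effort(32, "organic"), 1))   # e.g. a 32 KLOC organic project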

Intermediate COCOMO: The basic model assumes that the effort is only a
function of the number of lines of code and some constants evaluated
according to the type of software system.
In reality, no system's effort and schedule can be calculated solely on the
basis of Lines of Code.
Various other factors such as reliability, experience and capability also
have to be considered.
These factors are known as Cost Drivers, and the Intermediate Model
uses 15 such drivers for cost estimation.
An Effort Adjustment Factor (EAF) is calculated from these 15 parameters,
each rated as Very Low, Low, Nominal, High or Very High.
Intermediate COCOMO:
Effort is estimated as
E = a x (KLOC)^b x EAF
a and b are constants
KLOC – Kilo Lines of Code
EAF – Effort Adjustment Factor
Values for a and b are:

SOFTWARE PROJECTS      a       b
Organic               3.2    1.05
Semi-Detached         3.0    1.12
Embedded              2.8    1.20
Classification of Cost Drivers and their attributes:
(i) Product attributes –
Required software reliability extent
Size of the application database
The complexity of the product
(ii) Hardware attributes –
Run-time performance constraints
Memory constraints
The volatility of the virtual machine environment
Required turnaround time
(iii) Personnel attributes –
Analyst capability
Software engineering capability
Applications experience
Virtual machine experience
Programming language experience
(iv) Project attributes –
Use of software tools
Application of software engineering methods
Required development schedule
COST DRIVERS                          VERY LOW   LOW   NOMINAL   HIGH   VERY HIGH
Product Attributes
Required Software Reliability            0.75    0.88    1.00    1.15     1.40
Size of Application Database              --     0.94    1.00    1.08     1.16
Complexity of the Product                0.70    0.85    1.00    1.15     1.30

COST DRIVERS                          VERY LOW   LOW   NOMINAL   HIGH   VERY HIGH
Hardware Attributes
Runtime Performance Constraints           --      --     1.00    1.11     1.30
Memory Constraints                        --      --     1.00    1.06     1.21
Volatility of the Virtual Machine
Environment                               --     0.87    1.00    1.15     1.30
Required Turnaround Time                  --     0.94    1.00    1.07     1.15

COST DRIVERS                          VERY LOW   LOW   NOMINAL   HIGH   VERY HIGH
Personnel Attributes
Analyst Capability                       1.46    1.19    1.00    0.86     0.71
Applications Experience                  1.29    1.13    1.00    0.91     0.82
Software Engineer Capability             1.42    1.17    1.00    0.86     0.70
Virtual Machine Experience               1.21    1.10    1.00    0.90      --
Programming Language Experience          1.14    1.07    1.00    0.95      --

COST DRIVERS                          VERY LOW   LOW   NOMINAL   HIGH   VERY HIGH
Project Attributes
Application of Software Engineering
Methods                                  1.24    1.10    1.00    0.91     0.82
Use of Software Tools                    1.24    1.10    1.00    0.91     0.83
Required Development Schedule            1.23    1.08    1.00    1.04     1.10
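A minimal Python sketch of the Intermediate COCOMO calculation: the EAF is the product of the chosen cost-driver multipliers from the tables above, and the driver ratings selected below are hypothetical examples:

# Intermediate COCOMO: E = a * (KLOC)**b * EAF, where EAF is the product of
# the multipliers for the rated cost drivers (Nominal drivers contribute 1.00).
import math

INTERMEDIATE_COCOMO = {
    "organic":       (3.2, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (2.8, 1.20),
}

def intermediate_effort(kloc, project_type, driver_multipliers):
    a, b = INTERMEDIATE_COCOMO[project_type]
    eaf = math.prod(driver_multipliers)
    return a * (kloc ** b) * eaf

# Hypothetical ratings: high reliability (1.15), high product complexity (1.15),
# high analyst capability (0.86); all other drivers nominal.
print(round(intermediate_effort(32, "organic", [1.15, 1.15, 0.86]), 1))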
Detailed Model
Detailed COCOMO incorporates all characteristics of the intermediate
version with an assessment of the cost drivers' impact on each step of the
software engineering process. The detailed model uses different effort
multipliers for each cost driver attribute. In Detailed COCOMO, the whole
software is divided into modules, COCOMO is applied to each module to
estimate its effort, and the module efforts are then summed.
The Six phases of detailed COCOMO are:
Planning and requirements
System design
Detailed design
Module code and test
Integration and test
Cost Constructive model
The effort is calculated as a function of program size and a set of cost
drivers given for each phase of the software life cycle.
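A brief Python sketch of the module-wise approach described above: estimate each module separately and sum the efforts. The module sizes and types are hypothetical, and for brevity each module uses the basic E = a x (KLOC)^b form; Detailed COCOMO would additionally apply phase-specific effort multipliers to each module:

# Divide the software into modules, estimate each, and sum the efforts.
CONSTANTS = {"organic": (2.4, 1.05), "semi-detached": (3.0, 1.12), "embedded": (3.6, 1.20)}

def module_effort(kloc, project_type):
    a, b = CONSTANTS[project_type]
    return a * (kloc ** b)

modules = [("user interface",   6.0, "organic"),
           ("database layer",  10.0, "semi-detached"),
           ("control firmware", 4.0, "embedded")]

total = sum(module_effort(kloc, ptype) for _, kloc, ptype in modules)
print(round(total, 1), "person-months")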
COCOMO-II
COCOMO-II consists of the following models:
Application Composition Model
Early Design Stage Model
Post-Architecture Model
Function points as Object Points

Object Point Factors            Simple   Medium   Difficult
Screens (user interfaces)          1        2         3
Reports                            2        5         8
3GL Components                     --       --       10
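A small Python sketch of an object-point count using the weights in the table above; the counts of screens, reports and components are hypothetical inputs:

# Object points: weighted count of screens, reports and 3GL components.
OBJECT_POINT_WEIGHTS = {
    "screen": {"simple": 1, "medium": 2, "difficult": 3},
    "report": {"simple": 2, "medium": 5, "difficult": 8},
    "3gl_component": {"difficult": 10},   # 3GL components carry a single weight
}

def object_points(counts):
    """counts: list of (object_type, complexity, number) tuples -> total object points."""
    return sum(OBJECT_POINT_WEIGHTS[obj][cplx] * n for obj, cplx, n in counts)

print(object_points([("screen", "simple", 4),
                     ("report", "medium", 2),
                     ("3gl_component", "difficult", 1)]))   # 4 + 10 + 10 = 24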
Putnam’s Model
Halstead’s Software Science
Risk Management
A risk is a potential problem—it might happen, it might not. But, regardless of
the outcome, it’s a really good idea to identify it, assess its probability of
occurrence, estimate its impact, and establish a contingency plan should the
problem actually occur.

Risk concerns future happenings

Risk management is more proactive. Instead of responding to problems after they
occur, risk management identifies possible risks, determines their potential
impacts, and studies possible work-arounds ahead of time.

Risk is an expectation of loss, a potential problem that may or may not occur in
the future. It is generally caused by a lack of information, control or time.

Two Characteristics

Uncertainty – the risk may or may not happen
Loss – loss of something, or of everything
The risk management process and its outputs:
Risk identification – list of potential risks
Risk analysis – prioritised risk list
Risk planning – risk avoidance and contingency plans
Risk monitoring – risk assessment
For each task, you should determine:

➤ Likelihood —Do you know more or less how to perform this task? Or is this something
you’ve never done before so it might hold unknown problems?
➤ Severity —Can the users live without this feature if the task proves difficult? Can you
cancel this feature or push it into a future release?
➤ Consequences —Will problems with this task affect other tasks? If this task fails, will
that cause other tasks to fail or make other tasks unnecessary?
➤ Work-arounds —Are there work-arounds? What other approaches could you take to
solve this problem? For each work-around consider:
➤ Difficulty —How hard will it be to implement this work-around? How long will it
take? What are the chances that this work-around will work?
➤ Impact —What effects do the work-arounds have on the project’s usability? Is this
going to make a lot of extra work for the users?
➤ Pros —What are the work-arounds’ advantages?
➤ Cons —What are the work-arounds’ disadvantages?
Types of Risks
Software project
• Vague requirements
• Users not sure of their needs
• Huge number of people
• Large number of resources
• Time span
• Requirement changes
Core Risks [1]

• Ambitious time plan
• Ambiguity in requirements
• Requirement creep
• Staff turnover
• Performance variance

Project manager
Software risk category
• Project risk
• Process risk
• Product risk
Risk management process
• Risk identification
• Risk analysis
• Risk planning
• Risk monitoring
• Risk resolving
Risk identification
• Human resource
• Organizational
• Technology
• Tools
• Estimation
Quantification of risk [4]

• Risk exposure
RE = Probability * consequence
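A minimal Python sketch of this risk exposure calculation; the risks, probabilities and cost impacts below are hypothetical examples:

# Risk exposure: RE = probability x consequence (consequence given here as cost).
risks = [
    ("key developer leaves",    0.30, 40_000),   # (risk, probability, cost impact)
    ("requirements creep",      0.60, 25_000),
    ("third-party API changes", 0.10, 10_000),
]

for name, probability, consequence in risks:
    exposure = probability * consequence
    print(f"{name}: RE = {exposure:,.0f}")

# Sorting the risks by exposure yields a simple prioritised risk list for risk planning.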
Risk analysis
• Questions
– What is causing the risk
– How much will it affect
– Are the risks dependent
– The probability that it will occur
– Is the exposure acceptable
• Severity
• Probability
Risk planning
• Avoidance
• Protection
• Reduction
• Research
• Reserves
• Transfer
• Risk resolving
• Risk documentation

Risk management [4]


Levels of risk
• Disaster management
• Repair on failure
• Risk mitigation
• Prevention
• Eliminate root cause
Managing People


What is Software Productivity?
In standard economic terms, productivity is the ratio between the amount of goods or services
produced and the labor or expense that goes into producing them. The assumption that follows, then, is
that software productivity is the ratio of the amount of software produced to the labor and expense
of producing it. This is a simple theory that appears to be logical, but in practice it becomes a matter of
some debate.
In order to define software productivity, we must first establish a definition of software. At its most
fundamental level, software is a computer program composed of lines of code. However, lines of code,
in and of themselves, are not the primary deliverables of a software project, and customers often do not
know how many lines of code are in the software they are buying.
A broader definition of software encompasses not only the computer program, but also the related
procedures and documentation associated with the program. This often includes documentation of
requirements, specifications, software design, and end-user procedures. The complete set of
documentation provides a more tangible deliverable of a software project than does the program itself.
However, even though program code and documentation are the primary outputs of software
production, they are not of direct interest to the software consumer. Software is bought based on what it
can do, not on how it was coded or documented. This means that the economic value of goods and
services consumed is not measured in the same units as the natural units of production. Subsequently, a
different measure of software needs to be used in order to get a meaningful definition of software
productivity. This measure needs to reflect the utility value of the software, to wit the function that the
software is intended to perform.
Basing our measurements on the utility value of software, we can revise our original assumption
and define software productivity as the ratio of the functional value of the software produced to the
labor and expense of producing it. This definition allows us to measure productivity based on the value
of results to the software consumer, which is more realistic than basing results on lines of code.
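A trivial Python sketch of this ratio, using function points as the measure of functional value; the figures are hypothetical:

# Productivity = functional value delivered / effort expended.
function_points_delivered = 320
effort_person_months = 40

productivity = function_points_delivered / effort_person_months
print(f"{productivity:.1f} FP per person-month")   # -> 8.0 FP per person-month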
How is Software Productivity Measured?
With a working definition of software productivity established, we are next faced
with the question of what to measure. Unlike lines of code and pages of
documentation that are easy to count, program functionality does not have a natural
unit of measure and is thus harder to quantify. Software metrics that we can use as
quantifiable measure of various characteristics of a software system or software
development process need to be established. These metrics need to capture both the
effort required to produce the software and the functionality provided to the software
consumer.
There are various methods by which software productivity is measured, but
whichever method is employed the goal should be uniform: to give software
managers and professionals a set of useful, tangible data points for sizing, estimating,
managing, and controlling software projects with rigor and precision (Jones 1). Some
of the more common methods of measuring software productivity are Function Point
Analysis, Constructive Cost Modeling, and Cyclomatic Complexity.
Function Points and Function Point Analysis
Cyclomatic Complexity (McCabe Metrics)
While COCOMO employs a size-oriented approach to analyzing a program, Cyclomatic
Complexity, developed by Thomas McCabe, is based on program complexity. Cyclomatic
Complexity (often referred to as the McCabe metric) reasons that complexity is directly
related to the paths created by control and decision statements. As the number of paths
through a program or module increases, the program or module complexity increases. As
complexity increases, the effort required to produce the program increases and its
testability and maintainability decrease.
Cyclomatic complexity metrics use graph theory to illustrate the number of linearly
independent paths in the program or module. A control graph for the program is created
that shows blocks of sequentially executable code as nodes and the flow or paths
through the program as arcs. The cyclomatic complexity number is calculated by
subtracting the number of nodes from the number of arcs (including a dummy arc from
the exit node back to the entry node), and adding the number of components (program
modules or programs). This value represents the number of independent paths in the
program.
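A small Python sketch of the calculation described above, for a hypothetical control-flow graph; nodes are blocks of sequential code, arcs are the possible flows, and a dummy arc runs from the exit node back to the entry node:

# Cyclomatic complexity = arcs - nodes + components (dummy exit->entry arc included).
def cyclomatic_complexity(num_nodes, arcs, num_components=1):
    """arcs: list of (from_node, to_node) pairs, including one dummy
    exit->entry arc per component."""
    return len(arcs) - num_nodes + num_components

# Example: a single function containing one if/else.
#   1: entry, 2: condition, 3: then-branch, 4: else-branch, 5: exit
arcs = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 1)]   # (5, 1) is the dummy arc
print(cyclomatic_complexity(5, arcs))    # -> 2 linearly independent paths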
Although initially used to analyze code after it was written, the cyclomatic complexity is
now routinely being used to analyze the control and data flow diagrams created in the
design phase. The early detection of complex programs or modules significantly lowers
the time and effort expended to code, test, and maintain them in subsequent life cycle
phases. This, in turn, reduces the cost of the entire software project and improves
productivity
Personnel Planning
Personnel Planning deals with staffing. Staffing deals with appointing
personnel to the positions identified by the organizational
structure.
It involves:
Defining requirements for personnel
Recruiting (identifying, interviewing, and selecting candidates)
Compensating
Developing and promoting personnel

For personnel planning and scheduling, it is helpful to have effort and
schedule estimates for the subsystems and necessary components of the
system.
Team Structure
Ego-Less or Democratic Teams
Chief Programmer Team
Controlled Decentralized Team (Hierarchical Team Structure)
Project duration and staffing
Factors affecting programming productivity

Productivity metrics in IT software projects are mainly based on ratios between the
size of delivered software and the effort expended to obtain it:
Function Points (FP) and Source Lines Of Code (SLOC)
multiple size measures
Object points
Factors affecting productivity include storage constraints, timing constraints, reliability
requirements, high-level languages, team size, requirements volatility, staff tool skills,
staff availability, customer participation, and project duration.
Quality Assurance Plans – Section 8.3.2

Project Monitoring Plans
