
UNIT II

PROJECT LIFE CYCLE AND EFFORT ESTIMATION


Software process and Process Models – Choice of Process models – Incremental delivery – Rapid
Application Development – Agile methods – Extreme Programming – SCRUM – Managing
iterative processes – Basics of Software estimation – Effort and Cost estimation techniques –
COSMIC Full function points – COCOMO II: A Parametric Productivity Model – Staffing
Pattern.
------------------------------------------------------------------------------------------------------------

16 MARKS QUESTIONS

1. Explain Extreme Programming and its advantages and disadvantages in detail. (April/May 2018)
2. Explain the components of staffing and staffing level estimation. (April/May 2018)
3. Explain how a project can be evaluated against strategic, technical and economic
criteria. (Nov 2011) (8) (June 2014) (Nov 2017)
4. Explain in detail Amanda's decision tree. (Refer class notes) (Nov 2017)
5. Explain the "Internal rate of return" method. Also mention its advantages over
the NPV method. (8) (May 2012) (May 2016) (June 2017)
6. Where are estimates done? Explain the problems with over- and under-estimates. (16) (May 2016)
7. Explain the COCOMO II model. (June 2017) (May 2016) (Nov/Dec 2019)
8. What is incremental delivery? Explain the advantages and disadvantages of
incremental delivery. Explain the incremental delivery plan. (16) (Apr/May 2019)
9. Explain Rapid Application Development in detail.
10. Discuss the spiral software development life cycle model with diagrammatic
illustration. What are the spiral model's strengths? What are its deficiencies?
When should the spiral model be used? Discuss. (Nov/Dec 2017) (Nov/Dec 2019)
11. Components of function point analysis. (Apr/May 2019)
-----------------------------------------------------------------------------------
1. Structure versus speed of delivery
Structured approach
Also called 'heavyweight' approaches.
Step-by-step methods where each step and intermediate product is carefully defined.
Emphasis on getting quality right first time.
Example: use of UML and the USDP.
Future vision: Model-Driven Architecture (MDA) – UML supplemented with the
Object Constraint Language; press the button and application code is generated
from the UML/OCL model.

Agile methods
Emphasis on speed of delivery rather than documentation.
RAD (Rapid Application Development) emphasizes the use of quickly developed prototypes.
JAD (Joint Application Development):
requirements are identified and agreed in intensive workshops with users.

2. Choice of process models

The word 'process' is sometimes used to emphasize the idea of a system in action.
In order to achieve an outcome, the system will have to execute one or more
activities: this is its process. This idea can be applied to the development of
computer-based systems, where a number of interrelated activities have to be
undertaken to create a final product.

A major part of the planning will be the choosing of the development methods to
be used and the slotting of these into an overall process model.

The planner needs not only to select methods but also to specify how each method is
to be applied. With methods such as SSADM there is a considerable degree of
choice about how it is to be applied: not all parts of SSADM are compulsory.
It might be stipulated that SSADM is to be used; in the event, all that is produced
are a few SSADM fragments, such as a top-level data flow diagram and a preliminary
logical data structure diagram. If this is all the particular project requires,
it should be stated at the outset.

3.Software process model


A software process model is a standardised format for
• planning
• organising, and
• running a development project.

Definition.
A (software/system) process model is a description of the sequence of activities carried out in an
SE project, and the relative order of these activities.
It provides a fixed generic framework that can be tailored to a specific project.
Project specific parameters will include:
• Size, (person-years)
• Budget,
• Duration.

There are hundreds of different process models to choose from, e.g.:


• waterfall,
• code-and-fix
• spiral
• rapid prototyping
• unified process (UP)
• agile methods, extreme programming (XP)
• COTS …
But most are minor variations on a small number of basic models.
By changing the process model, we can improve and/or tradeoff:
• Development speed (time to market)
• Product quality
• Project visibility
• Administrative overhead
• Risk exposure
• Customer relations, etc.

We can sometimes combine process models, e.g.:

1. Waterfall inside evolutionary – onboard shuttle software
2. Evolutionary inside waterfall – e.g. GUI prototyping

The Waterfall Model

The waterfall model is the classic process model – it is widely known, understood and used.
• In some respects, waterfall is the "common sense" approach.

The 'classical' model imposes structure on the project: every stage needs to be checked and signed
off.
BUT there is limited scope for iteration.
The V model approach is an extension of waterfall in which different testing phases are identified,
each checking the quality of a different development phase.

Advantages

1. Easy to understand and implement.
2. Widely used and known (in theory!)
3. Fits other engineering process models: civil, mechanical, etc.
4. Reinforces good habits: define-before-design, design-before-code.
5. Identifies deliverables and milestones.

Disadvantages
1. Doesn't reflect the iterative nature of exploratory development.
2. Sometimes unrealistic to expect accurate requirements early in a project.
3. Software is delivered late, which delays the discovery of serious errors.
4. No inherent risk management.

4. Incremental Model
Incremental delivery

This is similar to the 'incremental prototyping' approach mentioned above. The approach
involves breaking the system down into small components which are then implemented
and delivered in sequence.

Advantages of this approach

These are some of the justifications given for the approach:
• the feedback from early increments can influence the later stages;
• the possibility of changes in requirements is not so great as with large monolithic
projects, because of the shorter timespan between the design of a component and its delivery;
• users get benefits earlier than with a conventional approach;
• early delivery of some useful components improves cash flow, because you get some
return on investment early on;
• smaller sub-projects are easier to control and manage;
• 'gold-plating', the requesting of features that are unnecessary and not in fact used,
should be less, as users will know that they get more than one opportunity
to make their requirements known: if a feature is not in the current increment then it can
be included in the next;
• the project can be temporarily abandoned if more urgent work crops up;
• job satisfaction is increased for developers who see their labours bearing fruit at
regular, short intervals.

Disadvantages
On the other hand, these disadvantages have been put forward:
• 'software breakage', that is, later increments might require the earlier
increments to be modified;
• developers might be more productive working on one large system than on a
series of smaller ones.

The incremental delivery plan

The content of each increment and the order in which the increments are to be
delivered to the users of the system have to be planned at the outset.

Basically the same process has to be undertaken as in strategic planning, but at a
more detailed level, where the attention is given to increments of a user
application rather than whole applications. The elements of the incremental plan
are the system objectives, the incremental plan and the open technology plan.

Identify system objectives

The purpose is to give an idea of the 'big picture', that is, the overall objectives
that the system is to achieve. These can then be expanded into more specific
functional goals and quality goals.
 Functional goals will include:
 objectives it is intended to achieve;
 jobs the system is to do;
 computer/non-computer functions to achieve them.
Plan increments
Having defined the overall objectives, the next stage is to plan the
increments using the following guidelines:

• steps typically should consist of 1% to 5% of the total project;
• non-computer steps may be included – but these must deliver benefits directly to the users;
• ideally, an increment should take one month or less and should never take more than three months;
• each increment should deliver some benefit to the user;
• some increments will be physically dependent on others;
• value-to-cost ratios may be used to decide priorities.
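The value-to-cost guideline can be illustrated with a short sketch: each candidate increment is given a value score and a cost score, and increments are scheduled highest ratio first. The increment names and figures below are invented for illustration.

```python
# Rank candidate increments by value-to-cost ratio (illustrative data only).
def rank_increments(increments):
    """increments: list of (name, value_score, cost_score) tuples.
    Returns names ordered highest value/cost ratio first."""
    return [name for name, value, cost in
            sorted(increments, key=lambda t: t[1] / t[2], reverse=True)]

candidates = [
    ("online ordering", 9, 3),    # high value, low cost -> delivered first
    ("management reports", 4, 4),
    ("archive migration", 2, 5),  # low value, high cost -> delivered last
]
print(rank_increments(candidates))
```

In practice the scores themselves are subjective, but ranking by the ratio makes the trade-off between early benefit and effort explicit.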
Create open technology plan
If the system is to be able to cope with new components being continually added, then it
has to be built so that it is extendible, portable and maintainable.
As a minimum this will require the use of:
• a standard high-level language;
• a standard operating system;
• small modules;
• variable parameters; for example, items such as organization name, department
names and charge rates are held in a parameter file that can be amended
without programmer intervention;
• a standard database management system.
These are all things that might be expected as a matter of course in a
modern IS development environment.

5. Evolutionary Development
Types
Type 1: Exploratory Development:
customer-assisted development that evolves a product from ignorance to insight, starting from
core, well-understood components (e.g. a GUI).
Spiral Model


Extends waterfall model by adding iteration to explore /manage risk
Project risk is a moving target. Natural to progress a project cyclically in four step phases
1. Consider alternative scenarios, constraints
2. Identify and resolve risks
3. Execute the phase
4. Plan next phase: e.g. user req, software req, architecture … then goto 1
Advantages
1. Realism: the model accurately reflects the iterative nature of software development on projects
with unclear requirements
2. Flexible: incorporates the advantages of the waterfall and evolutionary methods
3. Comprehensive model decreases risk
4. Good project visibility

Disadvantages
1. Needs technical expertise in risk analysis and risk management to work well.
2. Model is poorly understood by nontechnical management, hence not so widely used
3. Complicated model, needs competent professional management. High administrative
overhead.

Type 2: Throwaway Prototyping:

customer-assisted development that evolves requirements from ignorance to insight by means of
lightweight disposable prototypes.
6. RAD: Rapid Application Development
The RAD (Rapid Application Development) model is based on prototyping and iterative
development with no specific planning involved. The process of writing the software itself
involves the planning required for developing the product.

Rapid Application Development focuses on gathering customer requirements through workshops
or focus groups, early testing of the prototypes by the customer using an iterative concept, reuse
of the existing prototype components, continuous integration and rapid delivery.

What is RAD?
Rapid Application Development (RAD) is a software development methodology that uses minimal
planning in favor of rapid prototyping. A prototype is a working model that is functionally
equivalent to a component of the product.

In the RAD model the functional modules are developed in parallel as prototypes and are integrated
to make the complete product for faster product delivery.

Since there is no detailed preplanning, it is easier to incorporate changes within the
development process. RAD projects follow an iterative and incremental model and have small
teams comprising developers, domain experts, customer representatives and other IT
resources working progressively on their component or prototype.

The most important aspect for this model to be successful is to make sure that the prototypes
developed are reusable.
RAD Model Design
RAD model distributes the analysis, design, build, and test phases into a series of short, iterative
development cycles. Following are the phases of RAD Model:

Business Modeling: The business model for the product under development is designed in terms
of the flow of information and the distribution of information between various business channels. A
complete business analysis is performed to find the vital information for the business, how it can be
obtained, how and when the information is processed, and what factors drive the successful
flow of information.

Data Modeling: The information gathered in the Business Modeling phase is reviewed and
analyzed to form sets of data objects vital for the business. The attributes of all data sets are
identified and defined. The relations between these data objects are established and defined in
detail in relevance to the business model.
Process Modeling: The data object sets defined in the Data Modeling phase are converted to
establish the business information flow needed to achieve specific business objectives as per the
business model. The process model for any changes or enhancements to the data object sets is
defined in this phase. Process descriptions for adding, deleting, retrieving or modifying a data
object are given.

Application Generation: The actual system is built and coding is done by using automation
tools to convert process and data models into actual prototypes.

Testing and Turnover: The overall testing time is reduced in the RAD model as the prototypes are
independently tested during every iteration. However, the data flow and the interfaces between all
the components need to be thoroughly tested with complete test coverage. Since most of the
programming components have already been tested, this reduces the risk of any major issues.

RAD Model Application


The RAD model can be applied successfully to projects in which clear modularization is possible.
If the project cannot be broken into modules, RAD may fail. Following are the typical scenarios
where RAD can be used:

 RAD should be used only when a system can be modularized to be delivered in an
incremental manner.
 It should be used if there is high availability of designers for modeling.
 It should be used only if the budget permits use of automated code-generating tools.
 The RAD SDLC model should be chosen only if domain experts are available with relevant
business knowledge.
 It should be used where the requirements change during the course of the project and
working prototypes are to be presented to the customer in small iterations of 2-3 months.

Advantages
1. Reduces risk of incorrect user requirements
2. Good where requirements are changing or uncommitted
3. Regular visible progress aids management
4. Supports early product marketing

Disadvantages
1. An unstable/badly implemented prototype often becomes the final product.
2. Requires extensive customer collaboration
– Costs customers time/money
– Needs committed customers
– Difficult to finish if customer withdraws
– May be too customer specific

12. Agile Software Processes


Need for an adaptive process model suited to changes in:
• User requirements
• Customer business models
• Technology
• In-house environment
De-emphasise documentation, especially the user requirements document (URD).
Emphasise change management, e.g. reverse engineering of design.
Examples include XP, Scrum, Agile Modeling, etc.

Agile Principles
• Incremental delivery of software
• Continuous collaboration with the customer
• Embrace change
• Value participants and their interaction
• Simplicity in code

Agile methods
13. “eXtreme Programming” - XP
eXtreme Programming (XP) is a software development process as well as a methodology.

XP is also a process framework because it can be (and most likely will be) tailored to the specific
needs of teams, projects, companies etc.

XP is also a lightweight methodology or what Alistair Cockburn calls a “Crystal Methodology”.


In short,
methodologies of this family have high productivity and high tolerance. Communication is
usually strong, with short paths, especially informal (not documented) ones. There is only a small
range of deliverables (artifacts), but these are delivered frequently (releases). Processes of the
Crystal family identify only a few roles and activities.

To date, XP has been applied to business problems only, e.g. projects with an external customer
that wants a specific product. The projects usually ranged from 6 to 15 months. XP was used by
small teams ranging from two to twelve members (and it is likely to be limited to teams of this
size).

The four values


XP defines four “values” which are used as guidelines throughout development. These are

Communication
Good communication is one of the key factors necessary for the success of software projects.
Customers need to communicate the requirements to the developers. Developers communicate
ideas and design to each other and so on.

Simplicity
XP strives for simple systems. This means they should be as simple as possible, but they must
work.
XP also strives for simplicity in the methodology. It reduces the amount of artifacts to an
absolute minimum – the requirements (User Stories), plans (Planning Game) and the product
(code). The practices and techniques can be learned in a matter of hours (although mastering
them of course takes more time).
The main reason for the desire for simplicity is that XP tries to cope with changes and other
risks.

Feedback
XP is a feedback-driven process. You need feedback at all scales, whether you are a customer,
manager or developer. While coding you get immediate feedback from whitebox testing (Unit
Tests). The customer defines blackbox tests (Functional Tests) and the team delivers releases
frequently. From these practices, both the customers and the developers get feedback about the
status of the system.

Courage
This is a somewhat vague value. It includes courage as well as a certain amount of
aggressiveness. Courage is needed because a lot of the rules and practices are “extreme” in the
way that they go against “tradition” or “wisdom” of software engineering.

Process:
Roles and Responsibilities

Practices:
Advantages
1. Lightweight methods suit small-medium size projects
2. Produces good team cohesion
3. Emphasises final product
4. Iterative
5. Test-based approach to requirements and quality assurance

Limitations of extreme programming

Reliance on the availability of high-quality developers.
Dependence on personal knowledge – after development, knowledge of the software may decay,
making future development less easy.
The rationale for decisions may be lost, e.g. which test case checks a particular requirement.
Reuse of existing code is less likely.

14. Scrum
Process:
Roles and Responsibilities

15. Managing iterative processes

Grady Booch's concern
Booch, an OO authority, is concerned that with requirements-driven projects:
'Conceptual integrity sometimes suffers because there is little motivation to deal with scalability,
extensibility, portability, or reusability beyond what any vague requirement might imply'
There is a tendency towards a large number of discrete functions with little common infrastructure.
Macro and micro processes

Macro process: related to the waterfall model. Within the macro process there will be micro
activities. System testing is done here.

Micro process: it is possible for agile projects using XP practices to exist within a more
traditional stage-gate project environment.

The basis for software estimating

The need for historical data

Nearly all estimating methods need information about how projects have been
implemented in the past. However, care needs to be taken in judging the
applicability of data to the estimator's own circumstances, because of possible
differences in environmental factors such as the programming languages used, the
software tools available, the standards enforced and the experience of the staff.

Measure of work
It is normally not possible to calculate directly the actual cost or time
required to implement a project. The time taken to write a program might
vary according to the competence or experience of the programmer.
Implementation times might also vary because of environmental factors
such as the software tools available. The usual practice is therefore to
express the work content of the system to be implemented independently
of effort, using a measure such as source lines of code (SLOC). The reader
might also come across the abbreviation KLOC, which refers to thousands
of lines of code.
Complexity
Two programs with the same KLOC will not necessarily take the same time
to write, even if done by the same developer in the same environment. One
program might be more complex. Because of this, SLOC estimates have to be
modified to take complexity into account. Attempts have been made to find
objective measures of complexity, but often it will depend on the subjective
judgement of the estimator.

11. Software Effort Estimation Techniques

Barry Boehm, in his classic work on software effort models, identified the main
ways of deriving estimates of software development effort as:

• algorithmic models – which use 'effort drivers' representing characteristics
of the target system and the implementation environment to predict effort;

• expert judgement – where the advice of knowledgeable staff is solicited;

• analogy – where a similar, completed project is identified and its actual
effort is used as a basis for the new project;

• Parkinson – which identifies the staff effort available to do a project and
uses that as the 'estimate';

• price to win – where the 'estimate' is a figure that appears to be
sufficiently low to win a contract;

• top-down – where an overall estimate is formulated for the whole project and
is then broken down into the effort required for component tasks;

• bottom-up – where component tasks are identified and sized and these
individual estimates are aggregated.

Clearly, the 'Parkinson' method is not really an effort prediction method, but a
method of setting the scope of a project. Similarly, 'price to win' is a way of
deciding a price and not a prediction method.
1. Bottom-up estimating
1. Break the project into smaller and smaller components.
[Stop when you get to what one person can do in one or two weeks.]
2. Estimate costs for the lowest-level activities.
3. At each higher level, calculate the estimate by adding the estimates for the lower levels.

Estimating methods can be generally divided into bottom-up and top-down
approaches. With the bottom-up approach, the estimator breaks the project into
its component tasks and then estimates how much effort will be required to carry
out each task. The bottom-up part comes in adding up the calculated effort for
each activity to get an overall estimate.

The bottom-up approach is most appropriate at the later, more detailed,
stages of project planning. If this method is used early on in the project
cycle, then the estimator will have to make some assumptions about the
characteristics of the final system, for example the number and size of
software modules. These will be working assumptions that imply no
commitment when it comes to the design of the system.

Where a project is completely novel or there is no historical data
available, the estimator would be well advised to use the bottom-up
approach.
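The three steps above amount to a recursive roll-up over a work breakdown structure, which can be sketched in a few lines. The task names and effort figures below are invented for illustration.

```python
# Bottom-up estimating: sum leaf-task estimates up through a work
# breakdown structure (all names and figures are illustrative).
def bottom_up_estimate(task):
    """task: (name, effort_days) for a leaf, or (name, [subtasks]) otherwise.
    Returns the aggregated effort in person-days."""
    name, body = task
    if isinstance(body, list):                           # composite task:
        return sum(bottom_up_estimate(t) for t in body)  # add up children
    return body                                          # leaf: direct estimate

project = ("order system", [
    ("design", [("data model", 5), ("UI sketches", 3)]),
    ("build",  [("order entry", 10), ("reports", 7)]),
    ("test",   6),
])
print(bottom_up_estimate(project))   # 5 + 3 + 10 + 7 + 6 = 31 person-days
```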

2. The top-down approach and parametric models

The top-down approach is normally associated with parametric (or
algorithmic) models. These may be explained using the analogy of
estimating the cost of rebuilding a house. This would be of practical
concern to a house-owner who needs sufficient insurance cover to
allow for rebuilding the property if it were destroyed. This is a simple
parametric model.

The effort needed to implement a project will be related mainly to variables
associated with characteristics of the final system. The form of the parametric
model will normally be one or more formulae in the form:

effort = (system size) × (productivity rate)

For example, system size might be in the form 'thousands of lines of code'
(KLOC) and the productivity rate 40 days per KLOC. The values to be used will
often be matters of subjective judgement.
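As a sketch, the formula and the 40-days-per-KLOC example can be computed directly; the project size below is invented for illustration.

```python
# Simple top-down parametric model: effort = system size x productivity rate.
def parametric_effort(size_kloc, days_per_kloc=40.0):
    """Return estimated effort in person-days for a system of the given size."""
    return size_kloc * days_per_kloc

# A hypothetical 12 KLOC system at the 40 days/KLOC rate from the text:
print(parametric_effort(12))   # 480.0 person-days
```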

The top-down and bottom-up approaches are not mutually exclusive. Project
managers will probably try to get a number of different estimates from different
people using different methods. Some parts of an overall estimate could be derived
using a top-down approach while other parts could be calculated using a bottom-up
method.
Bottom-up versus top-down
• Bottom-up
– use when there is no data about similar past projects
– identify all tasks that have to be done – so quite time-consuming
• Top-down
– produce overall estimate based on project cost drivers
– based on past project data
– divide overall estimate between jobs to be done

3. Expert judgement

This is asking someone who is knowledgeable about either the application
area or the development environment to give an estimate of the effort
needed to carry out a task. This method will most likely be used when
estimating the effort needed to change an existing piece of software. The
estimator would have to carry out some kind of impact analysis in order
to judge the proportion of code that would be affected, and from that
derive an estimate. Someone already familiar with the software would
be in the best position to do this.

4. Estimating by analogy

The use of analogy is also called case-based reasoning. The estimator
seeks out projects that have been completed (source cases) and that
have similar characteristics to the new project (the target case). The
effort that has been recorded for the matching source case can then be
used as a base estimate for the target. The estimator should then try to
identify any differences between the target and the source and make
adjustments to the base estimate for the new project.

A problem here is how you actually identify the similarities and
differences between the different systems. Attempts have been made to
automate this process. One software application that has been developed
to do this is ANGEL. This identifies the source case that is nearest the
target by measuring the Euclidean distance between cases. The source case
that is at the shortest Euclidean distance from the target is deemed to be
the closest match. The Euclidean distance is calculated as:

distance = square root of ((target_parameter1 − source_parameter1)² + … + (target_parametern − source_parametern)²)

For example, suppose the new project is known to require 7 inputs and 15 outputs.
One of the past cases, Project A, has 8 inputs and 17 outputs. The Euclidean
distance between the source and the target is therefore the square root of
((7−8)² + (17−15)²), that is 2.24.
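The worked example is easy to reproduce; the sketch below is a plain distance calculation in the spirit of tools such as ANGEL, not the actual tool.

```python
import math

# Euclidean distance between a target project and a source case,
# as used by analogy-based estimating tools (illustrative sketch).
def euclidean_distance(target, source):
    """target, source: equal-length sequences of parameter values."""
    return math.sqrt(sum((t - s) ** 2 for t, s in zip(target, source)))

# Worked example from the text: 7 inputs/15 outputs vs 8 inputs/17 outputs.
d = euclidean_distance([7, 15], [8, 17])
print(round(d, 2))   # 2.24
```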

Find a similar project:
 Type of business
 Size of applications
 Scope of systems
 Technical methods and standards
Must adjust for:
 Organizational culture
 Users' level of computer literacy
 Degree of management support for the project

5. Algorithmic/Parametric models ---- Function Point


• focus on task or system size
• Based on analysis of inputs, outputs and files accessed in system
• Starts with unadjusted function points
• Then adjusts for:
- technical complexity
- performance-influencing factors
- risks.
6. Algorithmic/Parametric models ---Function Point Mark II

7. Algorithmic/Parametric models ---- COCOMO

• Formulae based on thousands of delivered source instructions (KDSI)
• Basic, intermediate and detailed versions
• Based on industry productivity standards – the database is constantly updated
• Allows an organization to benchmark its software development productivity
• Basic model:
effort = c × size^k
• c and k depend on the type of system: organic, semi-detached, embedded
• Size is measured in 'kloc', i.e. thousands of lines of code

8. Algorithmic/Parametric models ---- COCOMO II

• Estimates produced at different stages:


– Application composition – based on design from user perspective
– Early design – based on top-level design
– Post architecture – once detailed design in place and construction has started.
• Uses different methods at different stages.
• Takes account of innovation, flexibility of approach, fixed or variable scope etc.

Development effort multipliers (dem)


Product attributes: required reliability, database size, product complexity
Computer attributes: execution time constraints, storage constraints, virtual machine (VM)
volatility
Personnel attributes: analyst capability, application experience, VM experience, programming
language experience
Project attributes: modern programming practices, software tools, schedule constraints

9. Delphi technique
• Several estimators are given the specification of the work and asked for estimates.
• Estimates are summarized anonymously and the results circulated to the estimators.
• Estimators can revise their estimates in the light of others' ideas.
• The method reduces personal disagreements and ego-based issues.

12.COSMIC Full Function points


The limitations of traditional FPA were the reason to start the COSMIC initiative in 1998.
The Common Software Measurement International Consortium (COSMIC), aimed to develop,
test, bring to market and to seek acceptance of a new software sizing method to support
estimating and performance measurement (productivity, time‐to‐market and quality).

The measurement method must be applicable for estimating the effort for developing and
maintaining software in various software domains. Not only business software (MIS)
but also real-time software (avionics, telecom, process control) and embedded software (mobile
phones, consumer electronics) can be measured.

The basis for measurement must be found, just as in FPA, in the user requirements the software
must fulfil.

The result of the measurement must be independent of the development environment and the
method used to specify these requirements. Sizes depend only on the user requirements for the
ultimately delivered software product.

COSMIC Concepts
The Functional User Requirements (FUR) are, according to the definition of a functional size
measurement method, the basis for measurement. They specify the user's needs and the
procedures that the software should fulfil.

The FUR are analysed to identify the functional processes. A Functional Process is an
elementary component of a set of FUR. It is triggered by one or more events in the world of the
user of the software being measured. The process is complete when it has executed all that is
required to be done in response to the triggering event.

Each functional process consists of a set of subprocesses that are either movements or
manipulations of data. Since no-one knows how to measure data manipulation, and since the aim
is to measure 'data-movement-rich' software, the simplifying assumption is made that each
functional process consists of a set of data movements.

A Data Movement moves one Data Group. A Data Group is a unique cohesive set of data
(attributes) specifying an 'object of interest' (i.e. something that is 'of interest' to the user). Each
Data Movement is counted as one CFP (COSMIC function point).

COSMIC recognises 4 (types of) Data Movements:


• Entry --- moves data from outside into the process
• Exit ----- moves data from the process to the outside world
• Read --- moves data from persistent storage to the process;
• Write ---- moves data from the process to persistent storage.
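Counting CFP then amounts to summing the data movements of each functional process. The process names and movement lists below are invented for illustration.

```python
# COSMIC sizing sketch: one CFP per data movement (Entry/Exit/Read/Write).
# The functional processes and their movements are illustrative only.
def cosmic_size(processes):
    """processes: dict mapping functional process name -> list of data movements.
    Returns total size in CFP (one CFP per movement)."""
    return sum(len(movements) for movements in processes.values())

app = {
    "register customer": ["Entry", "Write", "Exit"],          # 3 CFP
    "list orders":       ["Entry", "Read", "Read", "Exit"],   # 4 CFP
}
print(cosmic_size(app))   # 7 CFP
```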

The benefits
The COSMIC method:
• is based on fundamental software engineering principles, so 'future-proof';
• supports realistic scheduling and budgeting;
• is objective, simple to learn and easy to use;
• supports communication between principal, user and supplier;
• rates increasing complexity;
• is applicable in most software domains;
• is applicable for complete applications and for components, in any layer;
• complies with ISO 14143.

13. COCOMO-II--parametric model


Empirical model based on project experience
 Well-documented, independent model, Independent of a specific software vendor.
 Long history – initially published in 1981 (COCOMO-81), with COCOMO-II following in 1999
 COCOMO-II takes into account different approaches to software development, reuse,
etc.
 COCOMO (COnstructive COst MOdel) proposed by Boehm.
 Divides software product developments into 3 categories:
o Organic
o Semidetached
o Embedded

Organic:
Relatively small groups
working to develop well-understood applications.
Semidetached:
Project team consists of a mixture of experienced and inexperienced staff.
Embedded:
The software is strongly coupled to complex hardware, or real-time systems.

Gives only an approximate estimation:


Effort = a1 × (KLOC)^a2
Tdev = b1 × (Effort)^b2
 KLOC is the estimated kilo lines of source code,
 a1,a2,b1,b2 are constants for different categories of software
products,
 Tdev is the estimated time to develop the software in months,
 Effort estimation is obtained in terms of person months (PMs).

Development Effort Estimation


Organic:
Effort = 2.4 × (KLOC)^1.05 PM
Semi-detached:
Effort = 3.0 × (KLOC)^1.12 PM
Embedded:
Effort = 3.6 × (KLOC)^1.20 PM

Development Time Estimation

Organic:
Tdev = 2.5 × (Effort)^0.38 months
Semi-detached:
Tdev = 2.5 × (Effort)^0.35 months
Embedded:
Tdev = 2.5 × (Effort)^0.32 months
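The effort and development-time formulas above can be combined into a small sketch (the constants come from the tables; the 32 KLOC input is illustrative):

```python
# (a1, a2, b1, b2) constants for the three basic COCOMO categories
COCOMO = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    """Return (effort in person-months, development time in months)."""
    a1, a2, b1, b2 = COCOMO[mode]
    effort = a1 * kloc ** a2
    tdev = b1 * effort ** b2
    return effort, tdev

effort, tdev = basic_cocomo(32, "organic")
print(f"{effort:.1f} PM over {tdev:.1f} months")  # about 91 PM over 14 months
```

Note that for the same size, the embedded mode gives a larger effort estimate than the organic mode, reflecting its tighter constraints.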

Effort is somewhat super-linear in problem size.


Three-level model that allows increasingly detailed estimates to be prepared as development
progresses

Early prototyping level


 Based on “object points”
 Simple formula
Early design level
 Based on “function points” that are then translated to lines of source code (LOC)
Post-architecture level
 Estimates based on LOC

1. Early Prototyping Level


 Supports prototyping projects and projects where there is extensive reuse
 Estimates effort in object points/staff month
PM = ( NOP × (1 - %reuse/100 ) ) / PROD,
 where:
 PM is the effort in person-months
 NOP is the number of object points
 PROD is the productivity
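A minimal sketch of this formula (the NOP, reuse and productivity values below are illustrative; PROD would normally be read from a table based on developer experience and CASE tool maturity):

```python
def pm_object_points(nop, pct_reuse, prod):
    """Application-composition effort estimate in person-months."""
    return (nop * (1 - pct_reuse / 100)) / prod

# e.g. 840 object points, 20% reused, productivity of 25 NOP per staff-month
print(pm_object_points(840, 20, 25))  # about 27 person-months
```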

2. Early Design Level


 Estimates made after requirements confirmed
PM = A × Size^B × M + PM_m
 where:
 A = 2.5 in initial calibration
 Size in KLOC
 B varies from 1.1 to 1.24 depending on novelty of project, development flexibility, risk
management approaches, and process maturity
 PM_m = (ASLOC × (AT / 100)) / ATPROD, the effort for automatically translated code
 M = PERS × RCPX × RUSE × PDIF × PREX × FCIL × SCED

Multipliers
Multipliers reflect capability of developers, nonfunctional requirements, familiarity with
development platform, etc.
RCPX - product reliability and complexity
RUSE - the reuse required
PDIF - platform difficulty
PREX - personnel experience
PERS - personnel capability
SCED - required schedule
FCIL - the team support facilities
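A sketch of the early-design estimate with the multipliers applied as a product; all input values here are illustrative, and nominal (1.0) multipliers leave the base estimate unchanged:

```python
def pm_early_design(size_kloc, a=2.5, b=1.12, multipliers=()):
    """PM = A * Size^B * M, where M is the product of the effort multipliers."""
    m = 1.0
    for em in multipliers:  # PERS, RCPX, RUSE, PDIF, PREX, FCIL, SCED
        m *= em
    return a * size_kloc ** b * m

# 100 KLOC, slightly complex product (RCPX = 1.1), all other drivers nominal
pm = pm_early_design(100, multipliers=(1.0, 1.1, 1.0, 1.0, 1.0, 1.0, 1.0))
print(round(pm, 1))
```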

3. Post-Architecture Level
 Uses same formula as early design estimates
 Estimate of size adjusted to account for:
 Requirements volatility
 Rework required to support change
 Extent of possible reuse

ESLOC = ASLOC × (AA + SU + 0.4×DM + 0.3×CM + 0.3×IM) / 100

 ESLOC is equivalent number of lines of new code


 ASLOC is the adjusted number of lines of reusable code which must be modified
 DM is the % of design modified
 CM is the % of the code that is modified
 IM is the % of the original integration effort required for integrating the reused software
 SU is a factor based on the cost of software understanding
 AA is a factor which reflects the initial assessment costs of deciding if software may be
reused
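The reuse adjustment can be sketched as follows (the percentage and factor values below are illustrative):

```python
def esloc(asloc, aa, su, dm, cm, im):
    """Equivalent new lines of code for adapted/reused software."""
    return asloc * (aa + su + 0.4 * dm + 0.3 * cm + 0.3 * im) / 100

# 10,000 adapted SLOC: AA = 4, SU = 10, 10% of design and 20% of code
# modified, 25% of the original integration effort required
print(esloc(10_000, aa=4, su=10, dm=10, cm=20, im=25))  # → 3150.0 ESLOC
```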

Exponent Scale Factors
(precedentedness, development flexibility, architecture/risk resolution, team cohesion and
process maturity)

Project Cost Drivers
(product, platform, personnel and project attributes)

14. Staffing pattern -Staffing Level Estimation

Norden’s work
 After the effort required to complete a software project has been estimated, the
staffing requirement for the project can be determined.

 Norden suggested that at the start of an R&D project, the activities of the project
are planned and investigations are made. At this time the manpower requirements
are low. As the project progresses, the manpower requirement increases until it
reaches a peak.

Number of personnel required during any development project: not constant.


Norden in 1958 analyzed many R&D projects, and observed:
Rayleigh curve represents the number of full-time personnel required at any time.
Rayleigh curve is specified by two parameters:
 td the time at which the curve reaches its maximum
 K the total area under the curve.
L=f(K, td)
Rayleigh Curve
 A very small number of engineers is needed at the beginning of a project to carry out
planning and specification.
 As the project progresses and more detailed work is required, the number of engineers
slowly increases and reaches a peak.
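The Rayleigh curve itself can be sketched numerically, assuming its common form E(t) = (K/td²)·t·exp(−t²/2td²), with K the total effort (area under the curve) and td the time of peak staffing; the values below are illustrative:

```python
import math

def rayleigh_staff(t, K, td):
    """Full-time staff required at time t on the Rayleigh-Norden curve."""
    return (K / td ** 2) * t * math.exp(-t ** 2 / (2 * td ** 2))

K, td = 100, 10  # 100 person-months of total effort, peak staffing at month 10
for month in (2, 5, 10, 15, 20):
    print(month, round(rayleigh_staff(month, K, td), 2))
```

The printed values rise slowly, peak at t = td, then tail off, matching the staffing pattern described above.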

Putnam’s Work:

In 1976, Putnam studied the problem of staffing of software projects and:

 observed that the level of effort required in software development has a similar
envelope;
 found that the Rayleigh-Norden curve relates the number of delivered lines of code
to effort and development time.
 After the effort required to complete a software project has been estimated, the staffing
requirement for the project can be determined.
 Putnam suggested that starting from a small number of developers, there should be a
staff build up and after a peak size has been achieved, staff reduction is required.

Putnam analyzed a large number of army projects, and derived the expression:
L = Ck × K^(1/3) × td^(4/3)

 K is the effort expended and L is the size in KLOC.


 td is the time to develop the software.
 Ck is the state of technology constant
 reflects factors that affect programmer productivity.
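Rearranging Putnam's equation gives K = (L / (Ck·td^(4/3)))³, which also exposes the schedule-compression penalty: effort grows as td⁻⁴. The size, schedule and Ck values below are illustrative only (with K in person-years):

```python
def putnam_effort(L, td, Ck):
    """Effort K from size L, development time td and technology constant Ck."""
    return (L / (Ck * td ** (4 / 3))) ** 3

# 100,000 delivered lines, 2-year schedule, Ck = 10,000 (assumed)
k2 = putnam_effort(100_000, 2, 10_000)
# same product on a 1-year schedule: effort rises by 2^4 = 16 times
k1 = putnam_effort(100_000, 1, 10_000)
print(round(k2, 1), round(k1 / k2, 1))
```

Halving the schedule multiplying the effort sixteen-fold is exactly the "extreme penalty for schedule compression" noted below.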

Putnam observed that:


 the time at which the Rayleigh curve reaches its maximum value corresponds to system
testing and product release.
 After system testing, the number of project staff falls till product installation and
delivery.
 From the Rayleigh curve observe that: approximately 40% of the area under the
Rayleigh curve is to the left of td and 60% to the right.
 Putnam model indicates extreme penalty for schedule compression and extreme reward
for expanding the schedule.
 Putnam estimation model works reasonably well for very large systems, but seriously
overestimates the effort for medium and small systems.

Solved problems

UNIT – II

2 MARKS QUESTIONS & ANSWERS

Use of the rapid application development model (Apr/May 2019)

Rapid application development is a software development methodology that uses minimal
planning in favor of rapid prototyping. In the RAD model, the functional modules are
developed in parallel as prototypes and are integrated to make the complete product for faster
product delivery.
What are the stages of the estimation process? (Apr/May 2019)
The Estimation Process
1. Scoping: scope the project even if you do not have the full detailed requirements; you
can assume some of them or add margins later.
2. Decomposition
3. Sizing
4. Expert and Peer Review
5. Estimation Finalization

Give examples for the rapid application development model. (Nov/Dec 2019)


1. Purchase Order
Collecting data for purchase orders and approving them sounds like a very simple process, but
readymade options often complicate it. However, you also want to build them on a platform that
gives you more than just basic functionality.
To start this RAD example, gather all the people who know the process best, starting with the
procurement team. Bring together current forms and a complete understanding of the workflow.
Discuss how you want the app to function. With purchase orders, it's often helpful to also have a
vendor database for quick reference to call up information in the form.
Decide what fields should be shown at what steps, and if you want to add some conditional steps
that only happen when certain parameters are met.
Then the procurement team can sit alongside someone familiar with Kissflow to build the first
prototype. You should be able to have a working form and workflow built within 1-2 hours
depending on the complexity of your form and how many databases you want to link it to.
After getting the basic app up and running, it should be shared with those who are going to be
using it. Those who are requesting purchase orders may have some additional ideas for how to
improve the form or workflow. These changes can be implemented immediately and shown to
stakeholders on the spot.
2. Employee Resignation
Another RAD example is handling employee resignation. HR teams have a lot to coordinate
when an employee decides to leave the company. This app might seem trickier to build just
because there are so many moving parts involved. But you can use the same approach.
HR will be leading the charge on this development. However, they need input from management,
payroll, IT, and many others.
Creating the perfect workflow is the key challenge here. As you develop the application, you'll
continually think of other people who need to be informed and take action. RAD can play a key
role in quickly adding steps to your workflow and testing to make sure that confidential data is
hidden from those who don't need to see it.
At some point in the development process, the HR team may also want to handle resignations
and terminations in the same application. In traditional development, this means going back to
the beginning and building the app from scratch. But if you are using Kissflow and RAD
principles, you can quickly go through and create a different workflow for terminations, or make
some tasks conditional.
3. Travel Request
The last RAD example we'll look at is Travel Request. This one may be used more broadly by
the entire company anytime someone is traveling for official business. Depending on your
company, the sales team or the customer accounts team might use it the most. However, it's
usually the finance team that is responsible for the application.
The key with travel requests is keeping tight control over adherence to policies. There's usually a
lot of chaos surrounding travel. Even if the finance team has set a detailed policy, some
departments might find ways around it. That's where building an application in Kissflow with
RAD principles can help keep things in order.
The finance team can create a form that displays the written travel policy, and validates key data
to make sure everything is correct before getting a manager's approval. It's important for the
finance team to be able to do a budget and/or cash flow check as well before the travel is
arranged.
By using RAD principles, the finance team can quickly create a prototype of the application and
get feedback from various departments before going live. With a no-code platform like Kissflow,
they can also take responsibility to maintain the app and make changes along the way.

What is SCRUM?(Nov/Dec 2019)


Scrum is an Agile project management methodology involving a small team led by a Scrum
Master, whose main job is to clear away all obstacles to the team completing work. Work is done
in short cycles called sprints, and the team meets daily to discuss current tasks and roadblocks
that need clearing.

2. What is process model?


A life cycle model, also called a process model, of a software product is a graphical or
textual representation of its life cycle. A process model may describe the details of the
various types of activities carried out during the different phases and the documents
produced.

3. What is called maintenance stage?


The product is used by the customer and during this time the product needs to be
maintained for fixing bugs and enhancing functionalities. This stage is called
maintenance stage.

4. What is RAD?
One response to the structured methods is rapid application development. This puts
emphasis on quickly producing the prototypes of the software for users to evaluate.

5. What is Model-driven architectures?


A contrasting approach is the attempt to create Model Driven Architectures (MDA).
System development using MDA involves creating a platform-independent model which
specifies system functionality using UML diagrams, supplemented by additional
information recorded in the Object Constraint Language (OCL).

6. What is incremental delivery?


A project lifecycle strategy used to reduce risk of project failure by dividing projects into
smaller, more manageable pieces. The resulting sub-projects may deliver parts of the full
product, or product versions. These will be enhanced to increase functionality or improve
product quality in subsequent sub-projects.
7. What is time-boxing?
It is associated with an incremental approach. The scope of deliverables for an increment
is rigidly constrained by an agreed deadline. This deadline has to be met, even at the
expense of dropping some of the planned functionality.

8. What is gold-plating?
Gold-plating is the requesting of features that are unnecessary and not in fact used. It is
reduced with incremental delivery, as users know that if a feature is not in the current
increment then it can be included in the next.

9. What is software breakage?


Later increments might require modifications to earlier increments that is known as
software breakage.

10. What are functional goals?


It includes:
Objectives it is intended to achieve
Jobs the system is to do
Computer / non-computer functions to achieve them.
11. What are the major aims of RAD model?
1. To decrease the time taken and the cost incurred to develop software systems
2. To limit the costs of accommodating change requests by incorporating them as early
as possible before large investments have been made on development and testing.
12. What is the main disadvantage of traditional heavy weight methodologies?
The difficulty of efficiently accommodating change requests from customers during the
execution of the project.

13. What are the various agile approaches?


1. Crystal Technologies
2. Atern (formely DSDM)
3. Feature-driven Development
4. Scrum
5. Extreme Programming (XP)
14. What is agile project?
An agile project includes a customer representative in the team. At the end of each
iteration, the customer representative along with the stakeholders review the progress
made, re-evaluate the requirements, and give suitable feedback to the development team.

15. What is XP?


The prime source of information on XP is Kent Beck's book Extreme Programming
Explained: Embrace Change, first published in 1999 and updated in 2004. XP takes
commonsense principles to extreme levels.

16. What are the four core values presented as the foundations of XP?
1. Communication and feedback
2. Simplicity
3. Responsibility
4. Courage

17. What are the advantages of incremental delivery?


1. Feedback from early increments improves the later stages.
2. The user gets benefits earlier than with a conventional approach.
3. Smaller sub-projects are easier to control and manage.
4. Job satisfaction is increased for developers.
18. What are the scale factors of COCOMO models?
1. Precedentedness (PREC)
2. Development flexibility (FLEX)
3. Architecture/risk resolution (RESL)
4. Team cohesion (TEAM)
5. Process maturity (PMAT)
19. What is refactoring?
Refactoring is restructuring existing code without changing its external behaviour. In XP
it means having the courage to rewrite whole sections of code, rather than making
minimal changes, if this will keep the code well structured.

20. What is scrum model?


In this model, projects are divided into small parts of work that can be incrementally
developed and delivered over time boxes that are called sprints. The product gets
developed over a series of manageable chunks. Each sprint takes only a couple of weeks.

21. What are the two levels of managing an iterative process?


Macro process: related to the waterfall model. Within the macro process there will be
micro activities; system testing is done here.

Micro process: the iterative activities within each stage. It is possible for agile projects
using XP practices to exist within a more traditional stage-gate project environment.

22. List out software effort estimation techniques?


Algorithm model
Expert judgment
Analogy method
Parkinson method
Price to win method
Top down method
Bottom up method

23. What is COSMIC full function points?


 An extension of the IFPUG method for real-time systems.
 COSMIC deals with decomposing the system architecture into a hierarchy of
software layers. Software components communicate using peer-to-peer
communication. Inputs and outputs are aggregated into data groups. The
FFP count is derived from the 4 types of data movements.
24. What are the four types of data movements?
Entry (E)
Exit (X)
Read (R)
Write (W)
25. What are the modes of the COCOMO model, and what is its common equation?
 Organic mode: the system being developed is small.
 Embedded mode: the product is developed within very tight constraints.
 Semi-detached mode: combines elements of the organic and embedded modes.
Equation: Effort = c × (size)^k

26. What are the three stages of COCOMO models?


Application composition
Early design
Post architecture
27. What are the scale factors of COCOMO II models?
Precedentedness (PREC)
Development flexibility (FLEX)
Architecture/risk resolution (RESL)
Team cohesion (TEAM)
Process maturity (PMAT)
28. What are the effort multipliers of the COCOMO II model?
They adjust the estimate to take account of productivity factors, but do not involve
economies or diseconomies of scale.

29. What is staffing pattern - Putnam's work?


After the effort required to complete a software project has been estimated, the staffing
requirement for the project can be determined.

Putnam suggested that starting from a small number of developers, there should be a
staff build up and after a peak size has been achieved, staff reduction is required.

30. What is staffing Pattern- Norden’s work?


After the effort required to complete a software project has been estimated, the staffing
requirement for the project can be determined.

Norden’s work suggested that starting of R and D project, the activities of the project are
planned and investigations are made. At this time the man power requirements are low.
As the project progress man power requirements increases until it reach peak.

31. What is function point analysis?


Quantify the functional size of programs independently of their programming language.
The components are
External input types, External output types, External inquiry types, logical internal file
types, external interface file.
