
Software Engineering

Module – 01
Introduction
1. Motivation:
This chapter lays the foundation of the basic software process. The motivation for this
course is that many problems can plague a software project. Software planning involves
estimating how much time, effort, money, and resources will be required to build a
specific software system.

2. Learning Objective:
This module explains that everyone involved in the software process—managers,
software engineers, and customers—participates in the development of a software
project.

1. Students will be able to understand software engineering process paradigms.
2. Students will be able to understand the typical application of the process
models: incremental and evolutionary.
3. Students will be able to understand the Agile methodology.
4. Students will be able to understand process and project metrics.

3. Prerequisite:
SOOAD, WTL

4. Syllabus:

Prerequisites: Basic concept of the software life cycle

Syllabus                                            Duration   Self Study
1.1 Software engineering process paradigms          2 Hrs      2 Hrs
1.2 Process models: incremental and evolutionary    1 Hr       1 Hr
1.3 Typical application for each model              –          –
1.4 Agile methodology                               1 Hr       1 Hr
1.5 Process and project metrics                     2 Hrs      2 Hrs

5. Learning Outcomes:


 The student will learn the steps that help a software team understand the various
process models.
 The student will learn to identify a problem, assess its probability of occurrence,
estimate its impact, and establish a contingency plan should the problem actually occur.
 The student will learn the key elements of good software project management.

6. Weightage:

06 hours

7. Abbreviations

RAD – Rapid Application Development

8. Key Definitions:

Software Engineering: Software engineering encompasses a process, management
techniques, technical methods, and the use of tools.

Software: Software is a set of items or objects that form a “configuration” that
includes programs, documents, and data.

Software Engineering: Software engineering is the application of a systematic,
disciplined, quantifiable approach to the development, operation, and maintenance
of software.

Agile software engineering: Represents a reasonable alternative to conventional
software engineering for certain classes of software and certain types of software
projects.

Software process & project metrics: Are quantitative measures that enable software
engineers to gain insight into the efficacy of the software process & the projects that
are conducted using the process as a framework.

Incremental Models: The incremental model combines elements of the linear
sequential model (applied repetitively) with the iterative philosophy of prototyping.



Software estimation: Software planning involves estimating how much time, effort,
money, and resources will be required to build a specific software system.

9. Theory:

Overview
 The roadmap to building high-quality software products is the software process.

 Software processes are adapted to meet the needs of software engineers and
managers as they undertake the development of a software product.

 A software process provides a framework for managing activities that can very
easily get out of control.

 Different projects require different software processes.

 The software engineer's work products (programs, documentation, data) are
produced as consequences of the activities defined by the software process.

 The best indicators of how well a software process has worked are the quality,
timeliness, and long-term viability of the resulting software product.

Software Engineering

 Software engineering encompasses a process, management techniques, technical
methods, and the use of tools.

Generic Software Engineering Phases

 Definition phase - focuses on what (information engineering, software project
planning, and requirements analysis).

 Development phase - focuses on how (software design, code generation,
software testing).

 Support phase - focuses on change (corrective maintenance, adaptive
maintenance, perfective maintenance, preventative maintenance).

1.1 Software Engineering and Software Paradigms


The term "software engineering" was coined in about 1969 to mean "the establishment
and use of sound engineering principles in order to economically obtain software that is
reliable and works efficiently on real machines".


This view opposed the perceived uniqueness and "magic" of programming, in an effort
to move the development of software from "magic" (which only a select few can do) to
"art" (which the talented can do) to "science" (which supposedly anyone can do!). There
have been numerous definitions given for software engineering (including those above
and below).

Software Engineering is not a discipline; it is an aspiration, as yet unachieved. Many
approaches have been proposed including reusable components, formal methods,
structured methods and architectural studies. These approaches chiefly emphasize the
engineering product: the solution rather than the problem it solves.

Software Development current situation:

 People developing systems were consistently wrong in their estimates of time,
effort, and costs

 Reliability and maintainability were difficult to achieve

 Delivered systems frequently did not work

 A 1979 study of a small number of government projects showed that:
 2% worked
 3% could work after some corrections
 45% delivered but never successfully used
 20% used but extensively reworked or abandoned
 30% paid and undelivered
 Fixing bugs in delivered software produced more bugs

 Increase in size of software systems

 NASA
 StarWars Defense Initiative
 Social Security Administration
 financial transaction systems
 Changes in the ratio of hardware to software costs

 early 60's - 80% hardware costs
 middle 60's - 40-50% software costs
 today - less than 20% hardware costs
 Increasingly important role of maintenance

 Fixing errors, modification, adding options
 Cost is often twice that of developing the software
 Advances in hardware (lower costs)


 Advances in software techniques (e.g., user interaction)

 Increased demands for software


 Medicine, Manufacturing, Entertainment, Publishing
 Demand for larger and more complex software systems

 Airplanes (crashes), NASA (aborted space shuttle launches), "ghost" trains, runaway missiles,
 ATM machines (have you had your card "swallowed"?), life-support systems, car systems, etc.
 US National security and day-to-day operations are highly dependent on computerized systems.

The manufacture of software can be characterized by a series of steps ranging from concept
exploration to final retirement; this series of steps is generally referred to as a software
lifecycle.

Steps or phases in a software lifecycle fall generally into these categories:

 Requirements (Relative Cost 2%)


 Specification (analysis) (Relative Cost 5%)
 Design (Relative Cost 6%)
 Implementation (Relative Cost 5%)
 Testing (Relative Cost 7%)
 Integration (Relative Cost 8%)
 Maintenance (Relative Cost 67%)
 Retirement
Software engineering employs a variety of methods, tools, and paradigms.

Paradigms refer to particular approaches or philosophies for designing, building and
maintaining software. Different paradigms each have their own advantages and
disadvantages, which make one more appropriate in a given situation than another.

A method (also referred to as a technique) is heavily dependent on a selected paradigm
and may be seen as a procedure for producing some result. Methods generally involve
some formal notation and process(es).

Tools are automated systems implementing a particular method.

Thus, the following phases are heavily affected by the selected software paradigms:


 Design
 Implementation
 Integration
 Maintenance
The software development cycle involves the activities in the production of a software
system. Generally the software development cycle can be divided into the following
phases:

 Requirements analysis and specification
 Design
 Preliminary design
 Detailed design
 Implementation
 Component Implementation
 Component Integration
 System Documenting
 Testing
 Unit testing
 Integration testing
 System testing
 Installation and Acceptance Testing
 Maintenance
 Bug Reporting and Fixing
 Change requirements and software upgrading

1.2 Process Models – Incremental and Evolutionary Models


Prescriptive Models

Many people (and not a few professors) believe that prescriptive models are “old school”—
ponderous, bureaucratic document-producing machines.

1.2.1 The Waterfall Model

[Figure: The waterfall model: communication (project initiation, requirements gathering); planning (estimating, scheduling, tracking); modeling (analysis, design); construction (code, test); deployment (delivery, support, feedback).]

Many people dismiss the waterfall as obsolete and it certainly does have problems. But this
model can still be used in some situations.

Among the problems that are sometimes encountered when the waterfall model is applied are:

 A real project rarely follows the sequential flow that the model proposes. Change can
cause confusion as the project proceeds.
 It is difficult for the customer to state all the requirements explicitly, yet the waterfall
model demands exactly that.
 The customer must have patience. A working version of the program will not be available
until late in the project time-span.
Advantages

1. Testing is inherent to every phase of the waterfall model

2. It is an enforced disciplined approach

3. It is documentation driven, that is, documentation is produced at every stage.

Disadvantages

1. Projects rarely follow the model's sequential flow, as we cannot go back to introduce changes.

2. Small changes or errors that arise in the completed software may cause a lot of problems.

3. It often happens that the client is not very clear about what he exactly wants from the
software. Any changes that he mentions in between may cause a lot of confusion.

4. The greatest disadvantage of the waterfall model is that until the final stage of the
development cycle is complete, a working model of the software does not lie in the
hands of the client


1.2.2 Incremental Process Models

The process models in this category tend to be among the most widely used (and effective) in
the industry.

1.2.2.1 The Incremental Model

The incremental model combines elements of the waterfall model applied in an iterative fashion.
The model applies linear sequences in a staggered fashion as calendar time progresses.

Each linear sequence produces deliverable “increments” of the software. (Ex: a word processor
delivers basic file management and editing in the first increment; more sophisticated editing and
document production capabilities in the 2nd increment; and spelling and grammar checking in the
3rd increment.)

When an incremental model is used, the 1st increment is often a core product. The core product is
used by the customer.

As a result of use and / or evaluation, a plan is developed for the next increment.

The plan addresses the modification of the core product to better meet the needs of the
customer and the delivery of additional features and functionality.


The process is repeated following the delivery of each increment, until the complete product is
produced.

If the customer demands delivery by a date that is impossible to meet, suggest delivering one or
more increments by that date and the rest of the Software later.

1.2.2.2 The RAD Model

Rapid Application Development (RAD) is an incremental software process model that emphasizes
a short development cycle.

RAD is a “high-speed” adaptation of the waterfall model, in which rapid development is
achieved by using a component-based construction approach.

If requirements are well understood and project scope is constrained, the RAD process enables a
development team to create a fully functional system within a short period of time.

Advantages

1. To converge early toward a design acceptable to the customer and feasible for the developers

2. To limit a project's exposure to the forces of change

3. To save development time, possibly at the expense of economy or product quality

What are the drawbacks of the RAD model?

1. For large but scalable projects, RAD requires sufficient human resources to create the
right number of RAD teams.


2. If developers and customers are not committed to the rapid-fire activities necessary to
complete the system in a much abbreviated time frame, RAD projects will fail.
3. If a system cannot properly be modularized, building the components necessary for
RAD will be problematic.


[Figure: The RAD model: communication and planning, followed by multiple parallel teams (Team #1 ... Team #n), each performing modeling (business modeling, data modeling, process modeling) and construction (component reuse, automatic code generation, testing), then deployment (integration, delivery, feedback), all within a 60-90 day cycle.]


1.2.3 Evolutionary Process Models

Software evolves over a period of time; business and product requirements often change
as development proceeds, making a straight-line path to an end product unrealistic.

Software Engineering needs a process model that has been explicitly designed to
accommodate a product that evolves over time.

Evolutionary process models are iterative. They produce increasingly more complete
versions of the Software with each iteration.

1.2.3.1 Prototyping

Customers often define a set of general objectives for Software, but do not identify
detailed input, processing, or output requirements.

The prototyping paradigm assists the Software engineer and the customer to better
understand what is to be built when requirements are fuzzy.


[Figure: The prototyping paradigm: communication, quick plan, modeling (quick design), construction of prototype, and deployment with delivery & feedback, repeated in a cycle.]


The prototyping paradigm begins with communication where requirements and goals of
Software are defined.

Prototyping iteration is planned quickly and modeling in the form of quick design
occurs.

The quick design focuses on a representation of those aspects of the Software that will be
visible to the customer (e.g., the human interface).

The quick design leads to the Construction of the Prototype.

The prototype is deployed and then evaluated by the customer.

Feedback is used to refine requirements for the Software.

Iteration occurs as the prototype is tuned to satisfy the needs of the customer, while
enabling the developer to better understand what needs to be done.

The prototype can serve as the “first system”. Both customers and developers like the
prototyping paradigm as users get a feel for the actual system, and developers get to
build Software immediately. Yet, prototyping can be problematic:

1. The customer sees what appears to be a working version of the Software,
unaware that the prototype is held together “with chewing gum” and that little
thought has been given to quality or long-term maintainability. When informed
that the product is a prototype, the customer cries foul and demands that a few
fixes be applied to make it a working product. Too often, Software development
management relents.
2. The developer makes implementation compromises in order to get a
prototype working quickly. An inappropriate operating system or programming
language may be used simply because it is available and known. After a time,
the developer may become comfortable with these choices and forget all the
reasons why they were inappropriate.

The key is to define the rules of the game at the beginning. The customer and the
developer must both agree that the prototype is built to serve as a mechanism for
defining requirements.

1.2.3.2 The Spiral Model

The spiral model is an evolutionary Software process model that couples the iterative
nature of prototyping with the controlled and systematic aspects of the waterfall model.


It has two distinguishing features:

a. A cyclic approach for incrementally growing a system’s degree of definition
and implementation while decreasing its degree of risk.
b. A set of anchor point milestones for ensuring stakeholder commitment to
feasible and mutually satisfactory solutions.

Using the spiral model, Software is developed in a series of evolutionary releases.

During early stages, the release might be a paper model or prototype.

During later iterations, increasingly more complete versions of the engineered system
are produced.

A spiral model is divided into a set of framework activities defined by the Software
engineering team.

As this evolutionary process begins, the Software team performs activities that are
implied by a circuit around the spiral in a clockwise direction, beginning at the center.

Risk is considered as each revolution is made.

Anchor-point milestones – a combination of work products and conditions that are
attained along the path of the spiral – are noted for each evolutionary pass.

The first circuit around the spiral might result in the development of a product
specification; subsequent passes around the spiral might be used to develop a prototype
and then progressively more sophisticated versions of the Software.

Each pass through the planning region results in adjustments to the project plan. Cost
and schedule are adjusted based on feedback derived from the customer after delivery.

Unlike other process models that end when Software is delivered, the spiral model can
be adapted to apply throughout the life of the Software.


[Figure: The spiral model: starting at the center, each circuit passes through communication; planning (estimation, scheduling, risk analysis); modeling (analysis, design); construction (code, test); and deployment (delivery, feedback).]

1.3 Typical Application for each model


– The waterfall model can be used in applications where the requirements are
fixed and known in advance.
– The iterative model can be used where the requirements are known and
product feedback has to be taken from the customer.
1.4 Agile methodology
• Agility
– The ability to both create and respond to change in order to profit in a
turbulent business environment
• Companies need to determine the amount of agility they need to be
competitive
• Chaordic
– Exhibiting properties of both chaos and order
• The blend of chaos and order inherent in the external environment
and in people themselves argues against the prevailing wisdom
about predictability and planning
• Things get done because people adapt, not because they
slavishly follow processes
– An agile view is a chaordic view
• “Balanced between chaos and order, perched on the precipice at the
edge of chaos.”
• Some people are not comfortable in this environment; others thrive
on it
Current Problem in Program Management & Software development:


 31.1% of projects will be canceled before they ever get completed. 52.7% of
projects will cost 189% of their original estimates.
– The Standish Group
 Plus project complexity is increasing
– Demand for quicker delivery of useful systems
– Increasingly vague, volatile requirements
– Greater uncertainty/risk from limited knowledge of:
• Underlying technologies
• Off-the-shelf (OTS) components used

When to Apply Agile Methodologies?

• Problems characterized by change, speed, and turbulence are best solved by agility.
– Accelerated time schedule combined with significant risk and uncertainty
that generate constant change during the project.
• Is your project more like drilling for oil or like managing a production line?
– Oil exploration projects need agile processes.
– Production-line projects are often well-served by rigorous methodologies.

Why do we go for an Agile process?

1) The customer did not give all of the expected requirements, and

2) engineers did not always understand the requirements.

Some Agile Methodologies

• Extreme Programming (XP)
• Scrum

Scrum – it is iterative in nature

• Sprints – the basic unit of development in Scrum. A sprint produces working and
tested software within given time limits.

• Sprints tend to last between one week and one month.

1.5 Understanding Process and Project Metrics


A common process framework is established by defining a small number of framework
activities that are applicable to all software projects, regardless of their size or
complexity. A number of task sets—each a collection of software engineering work tasks,
project milestones, work products, and quality assurance points—enable the framework
activities to be adapted to the characteristics of the software project and the
requirements of the project team. Finally, umbrella activities—such as software quality
assurance, software configuration management, and measurement —overlay the
process model. Umbrella activities are independent of any one framework activity and
occur throughout the process.

1.5.1 Process Metric


Primary metrics are also called process metrics. These are the metrics that Six Sigma
practitioners care about and can influence. A primary metric is typically a direct output
characteristic of a process; it is a measure of the process itself and not a measure of a
high-level business objective. Primary process metrics usually include process defects,
process cycle time, and process resource consumption.
There are four basic types of metrics for product development:
1. Process metrics - short-term metrics that measure the effectiveness of the product
development process and can be used to predict program and product performance
- Staffing (hours) vs. plan
- Turnover rate
- Errors per 1,000 lines of code (KSLOC)
2. Program/project metrics - medium-term metrics that measure effectiveness in
executing the development program/project
- Schedule performance
- Program/project cost performance
- Balanced team scorecard
3. Product metrics - medium-term metrics that measure effectiveness in meeting
product objectives - technical performance measures
- Weight
- Range
- Mean time between failure (MTBF)
- Unit production costs

4. Enterprise metrics - longer term metrics that measure the effectiveness of the
enterprise in undertaking IR&D and developing new products


- Breakeven time
- Percent of revenue from products developed in last 4 years
- Proposal win %
- Development cycle time trend (normalized to program complexity)

Capability Maturity Model


To determine an organization’s current state of process maturity, the SEI uses an
assessment that results in a five point grading scheme. The grading scheme determines
compliance with a capability maturity model (CMM) that defines key activities required at
different levels of process maturity. The levels are defined in the following manner:

 Level 1: Initial. The software process is characterized as ad hoc and
occasionally even chaotic. Few processes are defined, and success depends on
individual effort.
 Level 2: Repeatable. Basic project management processes are established to
track cost, schedule, and functionality. The necessary process discipline is in
place to repeat earlier successes on projects with similar applications.
 Level 3: Defined. The software process for both management and engineering
activities is documented, standardized, and integrated into an organization wide
software process. All projects use a documented and approved version of the
organization's process for developing and supporting software. This level
includes all characteristics defined for level 2.
 Level 4: Managed. Detailed measures of the software process and product
quality are collected. Both the software process and products are quantitatively
understood and controlled using detailed measures. This level includes all
characteristics defined for level 3.
 Level 5: Optimizing. Continuous process improvement is enabled by
quantitative feedback from the process and from testing innovative ideas and
technologies. This level includes all characteristics defined for level 4.

10. References:

1. Roger Pressman, Software Engineering: A Practitioner's Approach, 6th Edition,
McGraw-Hill, 2005.

11. Objective Questions


1. Software is a product and can be manufactured using the same technologies used
for other engineering artifacts.
A) True
B) False
2. Software deteriorates rather than wears out because…
A) Software suffers from exposure to hostile environments
B) Defects are more likely to arise after software has been used often
C) Multiple change requests introduce errors in component interactions
D) Software spare parts become harder to order
3. Agility is nothing more than the ability of a project team to respond rapidly to
change.
A) True
B) False
4. Which of the items listed below is not one of the software engineering layers?
A) Process
B) Manufacturing
C) Methods
D) Tools
5. Software engineering umbrella activities are only applied during the initial phases
of software development projects.
A) True
B) False
6. Process models are described as agile because they
A) eliminate the need for cumbersome documentation
B) emphasize maneuverability and adaptability
C) do not waste development time on planning activities
D) make extensive use of prototype creation
7. Which of these terms are level names in the Capability Maturity Model?
A) Performed
B) Repeated
C) Reused
D) Optimized
E) both a and d
8. Software processes can be constructed out of pre-existing software patterns to best
meet the needs of a software project.
A) True
B) False
9. Which of these are standards for assessing software processes?
A) SEI
B) SPICE
C) ISO 19002
D) ISO 9001
E) Both b and d


10. The best software process model is one that has been created by the people who will
actually be doing the work.
A) True
B) False

(Ans: 1-B, 2-C, 3-B, 4-B, 5-B, 6-B, 7-E, 8-A, 9-E, 10-A)

12. Subjective Questions


1. Explain:
a) Waterfall model
Ans: Refer to section 1.2.1.
2. State the difference between process and project metrics, and describe process metrics
in detail.
Ans: Refer to section 1.5.
3. Explain Agile development.
Ans: Refer to section 1.4.

13. Questions
1. Compare Waterfall model and Spiral model of Software development. [May 2010]
[10 marks]
Ans:
Waterfall
– Testing at end
– Changes cannot be incorporated in between
– Cascade effect
– Document driven
– Well understood products as requirements cannot change
– Linear sequential model or classical life cycle model
Spiral
– Testing is done at every step
– Iterations
– Not completely document driven
– the requirements are not defined in detail
2. Explain the Open Source software life cycle model. [May 2010] [10 marks]
Ans:
The Open Source Definition is used by the Open Source Initiative to determine whether
or not a software license can be considered open source.

Distribution terms of open source


i. Free redistribution
ii. Source code
iii. Derived works
iv. Integrity of the author’s source code
v. No discrimination against fields of endeavor
vi. Distribution of license
vii. License must not be specific to a product
viii. License must not restrict other software
ix. License must be technology-neutral

3. What are the advantages of agile methodology? [May 2010] [5 marks]


Ans:
Agile development delivers increased value, visibility, and adaptability much earlier in
the life cycle, significantly reducing project risk.

4. What is an Agile Process? Explain any one Agile Process model with its advantages
and disadvantages. [Nov. 2010] [10 marks]
Ans:
Agile software development refers to a group of software development methodologies
based on iterative development, where requirements and solutions evolve through
collaboration between self-organizing cross-functional teams
The different Agile Process models are

• Agile Unified Process (AUP)


• Extreme Programming (XP)
• Scrum


Extreme Programming:

Extreme Programming is a discipline of software development based on values of
simplicity, communication, feedback, and courage. It works by bringing the whole team
together in the presence of simple practices, with enough feedback to enable the team to
see where they are and to tune the practices to their unique situation.

XP Practices

 Core Practices: Whole Team

In Extreme Programming, every contributor to the project is an integral part of
the “Whole Team”. The team forms around a business representative called “the
Customer”, who sits with the team and works with them daily.

 Core Practices: Planning Game, Small Releases, Customer Tests

Extreme Programming teams use a simple form of planning and tracking to
decide what should be done next and to predict when the project will be done.
Focused on business value, the team produces the software in a series of small
fully-integrated releases that pass all the tests the Customer has defined.

 Core Practices: Simple Design, Pair Programming, Test-Driven Development, Design Improvement


Extreme Programmers work together in pairs and as a group, with simple design
and obsessively tested code, improving the design continually to keep it always
just right for the current needs.

 Core Practices: Continuous Integration, Collective Code Ownership, Coding Standard

The Extreme Programming team keeps the system integrated and running all the
time. The programmers write all production code in pairs, and all work together
all the time. They code in a consistent style so that everyone can understand and
improve all the code as needed.

 Core Practices: Metaphor, Sustainable Pace

The Extreme Programming team shares a common and simple picture of what
the system looks like. Everyone works at a pace that can be sustained
indefinitely.

5. Write suitable applications of different software models. [5M Jun 2015]

Ans: The development models are the various processes or methodologies that are
being selected for the development of the project depending on the project’s aims and
goals. There are many development life cycle models that have been developed in order
to achieve different required objectives. The models specify the various stages of the
process and the order in which they are carried out.

The selection of model has very high impact on the testing that is carried out. It will
define the what, where and when of our planned testing, influence regression testing
and largely determines which test techniques to use.


There are various Software development models or methodologies. They are as follows:

Waterfall model

V model

Incremental model

RAD model

Agile model

Iterative model

Spiral model

Choosing right model for developing of the software product or application is very
important. Based on the model the development and testing processes are carried out.

Different companies select the type of development model that best suits their software
application or product. These days the ‘Agile Methodology’ is the most widely used
model in the market. The ‘Waterfall Model’ is a very old model; in the ‘Waterfall
Model’ testing starts only after development is completed, because of which many
defects and failures are reported at the end, and the cost of fixing these issues is high.
Hence, these days people prefer the ‘Agile Model’. In the ‘Agile Model’, after every
sprint there is a demo-able feature for the customer, so the customer can see whether
the features satisfy their needs or not.

‘V-model’ is also used by many of the companies in their product. ‘V-model’ is nothing
but ‘Verification’ and ‘Validation’ model. In ‘V-model’ the developer’s life cycle and
tester’s life cycle are mapped to each other. In this model testing is done side by side of
the development.

Likewise ‘Incremental model’, ‘RAD model’, ‘Iterative model’ and ‘Spiral model’ are
also used based on the requirement of the customer and need of the product.


Module – 02
Software Project Scheduling, Control & Monitoring

1. Motivation:
Poorly managed software production leads to wastage of money and time.


2. Learning Objective:
This module explains how people involved in the software estimation and planning
process can come out with a cost-effective solution to develop any kind of software.
1. Students will be able to understand the Software Estimation- empirical estimation
models-Cost/Effort estimation model.
2. Students will be able to understand the Planning- work breakdown structure, Gantt
chart. They will be able to discuss cost and how to manage the schedule slippage.

3. Objective:
To get knowledge about the activities involved in project management.

4. Prerequisite:
Phases of software life cycle.

5. Learning Outcomes:
 The student will learn steps that help a software team to understand Software
Estimation using empirical estimation models.
 The student will learn steps of Planning- work breakdown structure.
 The student will understand Gantt charts and be able to discuss cost and schedule slippage.

6. Syllabus:
Module   Content                                              Duration     Self Study Time

2.1      Software Estimation – empirical estimation models,  2 lectures   2 hours
         Cost/Effort estimation

2.2      Planning – work breakdown structure, Gantt chart,   2 lectures   2 hours
         discussion of cost and schedule slippage

7. Learning:
• Product metrics

• Estimation techniques for project

• Project Planning


• Project Scheduling

• Project Tracking

8. Weightage:
04 hours

9. Abbreviations:
(1) PERT - Program Evaluation Review Technique
(2) LOC – Lines Of Code
(3) FP - Function Point
(4) COCOMO - Constructive Cost Model
(5) WBS –Work Breakdown Structure

10. Key Definitions:
1. Planning: Project planning is part of project management, which relates to the
use of schedules such as Gantt charts to plan and subsequently report progress
within the project environment.
2. Estimation: The ability to accurately estimate the time and/or cost taken for a
project to come to its successful conclusion is a serious problem for software
engineers. The use of a repeatable, clearly defined and well understood software
development process has, in recent years, shown itself to be the most effective
method of gaining useful historical data that can be used for statistical
estimation. In particular, the act of sampling more frequently, coupled with the
loosening of constraints between parts of a project, has allowed more accurate
estimation and more rapid development times.
3. Software metric: Software metric is a measure of some property of a piece of
software or its specifications. Since quantitative measurements are essential in all
sciences, there is a continuous effort by computer science practitioners and
theoreticians to bring similar approaches to software development. The goal is
obtaining objective, reproducible and quantifiable measurements, which may
have numerous valuable applications in schedule and budget planning, cost
estimation, quality assurance testing, software debugging, software performance
optimization, and optimal personnel task assignments.
4. LOC (Lines Of Code): Lines of code is a software metric used to measure the size
of a software program by counting the number of lines in the text of the
program's source code. SLOC is typically used to predict the amount of effort that will
be required to develop a program, as well as to estimate programming productivity or
effort once the software is produced.
5. FP (Function Point): It is a unit of measurement used in software project
estimation. It is used to measure the functional size of an information system.
The functional size reflects the amount of functionality that is relevant to and
recognized by the user in the business. It is independent of the technology used
to implement the system.
6. COCOMO (Constructive Cost Model): The Constructive Cost Model
(COCOMO) is an algorithmic software cost estimation model developed by
Barry Boehm. The model uses a basic regression formula, with parameters that
are derived from historical project data and current project characteristics.
7. Gantt chart: A Gantt chart is a type of bar chart that illustrates a project schedule.
Gantt charts illustrate the start and finish dates of the terminal elements and
summary elements of a project. Terminal elements and summary elements
comprise the work breakdown structure of the project. Some Gantt charts also
show the dependency (i.e., precedence network) relationships between activities.
8. PERT chart: A PERT chart is a project management tool used to schedule,
organize, and coordinate tasks within a project. PERT stands for Program
Evaluation Review Technique, a methodology developed by the U.S. Navy in the
1950s to manage the Polaris submarine missile program.

11. Theory
2.1 Software estimation – Empirical estimation models – Cost/Effort
estimation

Planning & Estimation

Software project management begins with a set of activities that are collectively
called project planning. Before the project can begin, the manager and the software
team must estimate the work to be done, the resources that will be required, and the
time that will elapse from start to finish. Whenever estimates are made, the future is looked
into and some degree of uncertainty is accepted as a matter of course.

2.1.1 Product metrics

Metrics are measurable ways to design and assess the software product. Software
metrics domain is partitioned into process, project and product metrics. The product
metrics are private to an individual and are often combined to develop project metrics
that are public to a software team. Project metrics are then consolidated to create
process metrics that are public to the software organization as a whole. An organization
combines metrics that come from different individuals or projects by normalizing the
measures. By doing so, it is possible to create software metrics that enable comparison
to broader organizational averages.
In this section product metric is concentrated. Product metrics help software
engineers gain insight into the design and construction of the software they build.
Product metrics focus on specific attributes of software engineering work products and
are collected as technical tasks (analysis, design, coding, and testing) are being
conducted. Software engineers use product metrics to help them build higher-quality
software.
The product Metrics Landscape:
Although a wide variety of metrics taxonomies have been proposed, the
following outline addresses the most important metrics areas:

i. Metrics for the analysis model. These metrics address various aspects of the analysis
model and include:
Functionality delivered - provides an indirect measure of the functionality that is
packaged within the software.
System size-measures of the overall size of the system defined in terms of
information available as part of the analysis model.
Specification quality-provides an indication of the specificity and completeness of
a requirements specification.

ii. Metrics for the design model. These metrics quantify design attributes in a manner
that allows a software engineer to assess design quality. Metrics include:
Architectural metrics-provide an indication of the quality of the architectural
design.
Component-level metrics-measure the complexity of software components and
other characteristics that have a bearing on quality.
Interface design metrics-focus primarily on usability.
Specialized OO design metrics-measure characteristics of classes and their
communication and collaboration characteristics.

iii. Metrics for source code. These metrics measure the source code and can be used to
assess its complexity, maintainability, and testability, among other characteristics:
Halstead metrics-controversial but nonetheless fascinating, these metrics provide
unique measures of a computer program.
Complexity metrics-measure the logical complexity of source code (can also be
considered to be component-level design metrics).


Length metrics-provide an indication of the size of the software.

iv. Metrics for testing. These metrics assist in the design of effective test cases and
evaluate the efficacy of testing:
Statement and branch coverage metrics-lead to the design of test cases that provide
program coverage.
Defect-related metrics-focus on bugs found, rather than on the tests themselves.
Testing effectiveness-provide a real-time indication of the effectiveness of tests that
have been conducted.
In-process metrics-process related metrics that can be determined as testing is
conducted.

In many cases, metrics for one model may be used in later software engineering
activities. For example, design metrics may be used to estimate the effort required to
generate source code. In addition, design metrics may be used in test planning and test
case design.

2.1.2 Estimation- LOC, FP, and COCOMO models

Estimation of resources, cost, and schedule for a software engineering effort
requires experience, access to good historical information, and the courage to commit to
quantitative predictions when qualitative information is all that exists. Estimation
carries inherent risk, and this risk leads to uncertainty.
Project complexity has a strong effect on the uncertainty inherent in planning.
Complexity, however, is a relative measure that is affected by familiarity with past
effort. The first-time developer of a sophisticated e-commerce application might
consider it to be exceedingly complex. However, a software team developing its tenth e-
commerce Web site would consider such work run of the mill. A number of quantitative
software complexity measures have been proposed. Such measures are applied at the
design or code level and are therefore difficult to use during software planning (before
a design and code exist). However, other, more subjective assessments of complexity
(e.g., the function point complexity adjustment factors which will be discussed below)
can be established early in the planning process.
Project size is another important factor that can affect the accuracy and efficacy of
estimates. As size increases, the interdependency among various elements of the
software grows rapidly. Problem decomposition, an important approach to estimating,
becomes more difficult because decomposed elements may still be formidable. To
paraphrase Murphy's law: "What can go wrong will go wrong”—and if there are more
things that can fail, more things will fail.


The degree of structural uncertainty also has an effect on estimation risk. In this
context, structure refers to the degree to which requirements have been solidified, the
ease with which functions can be compartmentalized, and the hierarchical nature of the
information that must be processed.
The availability of historical information has a strong influence on estimation
risk. By looking back, we can emulate things that worked and improve areas where
problems arose. When comprehensive software metrics are available for past projects,
estimates can be made with greater assurance, schedules can be established to avoid
past difficulties, and overall risk is reduced.
LOC (Lines Of Code):
Measurements in the physical world can be categorized in two ways: direct
measures (e.g., the length of a bolt) and indirect measures (e.g., the "quality" of bolts
produced, measured by counting rejects). Software metrics can be categorized similarly
(direct and indirect measures).
Direct measures of the software engineering process include cost and effort
applied. Direct measures of the product include lines of code (LOC) produced,
execution speed, memory size, and defects reported over some set period of time. The
cost and effort required to build software, the number of lines of code produced, and
other direct measures are relatively easy to collect, as long as specific conventions for
measurement are established in advance.
LOC comes under the category of size-oriented metrics. Size-oriented software
metrics are derived by normalizing quality and/or productivity measures by
considering the size of the software that has been produced. If a software organization
maintains simple records, a table of size-oriented measures, such as the one shown in
the Figure below, can be created. The table lists each software development project that
has been completed over the past few years and corresponding measures for that
project.

Figure 2.1: Size-oriented metrics


In order to develop metrics that can be assimilated with similar metrics from
other projects, we choose lines of code as our normalization value. From the
rudimentary data contained in the table, a set of simple size-oriented metrics can be
developed for each project:
• Errors per KLOC (thousand lines of code).
• Defects per KLOC.
• $ per LOC.
• Page of documentation per KLOC.
In addition, other interesting metrics can be computed:
• Errors per person-month.
• LOC per person-month.
• $ per page of documentation.
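As a quick illustration, the sketch below (in Python) derives these size-oriented metrics from a single project record; the raw figures are hypothetical stand-ins, since the table in Figure 2.1 is not reproduced here:

    # Size-oriented metrics derived from raw project measures.
    # All project figures below are hypothetical, for illustration only.
    project = {
        "loc": 12100,       # delivered lines of code
        "effort_pm": 24,    # effort, person-months
        "cost": 168000,     # total cost, dollars
        "doc_pages": 365,   # pages of documentation
        "errors": 134,      # errors found before release
        "defects": 29,      # defects reported after release
    }

    kloc = project["loc"] / 1000.0
    print(f"Errors per KLOC:         {project['errors'] / kloc:.2f}")
    print(f"Defects per KLOC:        {project['defects'] / kloc:.2f}")
    print(f"$ per LOC:               {project['cost'] / project['loc']:.2f}")
    print(f"Doc pages per KLOC:      {project['doc_pages'] / kloc:.2f}")
    print(f"Errors per person-month: {project['errors'] / project['effort_pm']:.2f}")
    print(f"LOC per person-month:    {project['loc'] / project['effort_pm']:.0f}")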
Size-oriented metrics are not universally accepted as the best way to measure the
process of software development. Proponents of the LOC measure claim that LOC is an
“artifact” of all software development projects that can be easily counted. On the other
hand, opponents argue that LOC measures are programming language dependent, that
they penalize well-designed but shorter programs, that they cannot easily accommodate
nonprocedural languages.
LOC and FP (Function Point) estimation are distinct estimation techniques. Yet
both have a number of characteristics in common. The project planner begins with a
bounded statement of software scope and from this statement attempts to decompose
software into problem functions that can each be estimated individually. LOC or FP (the
estimation variable) is then estimated for each function. Alternatively, the planner may
choose another component for sizing such as classes or objects, changes, or business
processes affected.
Baseline productivity metrics (e.g., LOC/person-month or FP/person-month)
are then applied to the appropriate estimation variable, and cost or effort for the
function is derived. Function estimates are combined to produce an overall estimate for
the entire project.
The LOC and FP estimation techniques differ in the level of detail required for
decomposition and the target of the partitioning. When LOC is used as the estimation
variable, decomposition is absolutely essential and is often taken to considerable
levels of detail. For FP estimates, decomposition works differently. Rather than focusing
on function, each of the information domain characteristics—inputs, outputs, data files,
inquiries, and external interfaces—as well as the 14 complexity adjustment values
(discussed below) is estimated. The resultant estimates can then be used to derive a FP
value that can be tied to past data and used to generate an estimate.
Regardless of the estimation variable that is used, the project planner begins by
estimating a range of values for each function or information domain value. Using
historical data or (when all else fails) intuition, the planner estimates an optimistic, most
likely, and pessimistic size value for each function or count for each information
domain value. An implicit indication of the degree of uncertainty is provided when a
range of values is specified.
A three-point or expected value can then be computed. The expected value for the
estimation variable (size), S, can be computed as a weighted average of the optimistic
(sopt), most likely (sm), and pessimistic (spess) estimates. For example,
S = (sopt + 4sm + spess) / 6                          (1)
gives heaviest credence to the “most likely” estimate and follows a beta probability
distribution. We assume that there is a very small probability the actual size result will
fall outside the optimistic or pessimistic values.
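As a minimal sketch, the expected value computation of equation (1) can be written in a few lines of Python; the sample figures are the optimistic, most likely, and pessimistic LOC values quoted for the 3D geometric analysis function in the example that follows:

    # Three-point (expected value) estimate, per equation (1):
    #   S = (sopt + 4*sm + spess) / 6
    # The weighting follows a beta probability distribution, giving the
    # "most likely" estimate the heaviest credence.
    def expected_size(s_opt: float, s_m: float, s_pess: float) -> float:
        return (s_opt + 4 * s_m + s_pess) / 6.0

    # 3D geometric analysis function from the CAD example below (values in LOC):
    print(expected_size(4600, 6900, 8600))   # -> 6800.0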
Once the expected value for the estimation variable has been determined,
historical LOC or FP productivity data are applied. One can’t be sure that the estimates
are correct. Any estimation technique, no matter how sophisticated, must be cross-
checked with another approach. Even then, common sense and experience must prevail.
An example of LOC-Based estimation:
As an example of LOC and FP problem-based estimation techniques, a software
package to be developed for a computer-aided design application for mechanical
components is considered. A review of the System Specification indicates that the
software is to execute on an engineering workstation and must interface with various
computer graphics peripherals including a mouse, digitizer, high resolution color
display and laser printer.
Using the System Specification as a guide, a preliminary statement of software scope can
be developed:
“The CAD software will accept two- and three-dimensional geometric data from an
engineer. The engineer will interact and control the CAD system through a user
interface that will exhibit characteristics of good human/machine interface design. All
geometric data and other supporting information will be maintained in a CAD
database. Design analysis modules will be developed to produce the required output,
which will be displayed on a variety of graphics devices. The software will be designed
to control and interact with peripheral devices that include a mouse, digitizer, laser
printer, and plotter.”
This statement of scope is preliminary—it is not bounded. Every sentence would
have to be expanded to provide concrete detail and quantitative bounding. For
example, before estimation can begin the planner must determine what “characteristics
of good human/machine interface design” means or what the size and sophistication of
the “CAD database” are to be.


For our purposes, it is assumed that further refinement has occurred and that the
following major software functions are identified:

• User interface and control facilities (UICF)


• Two-dimensional geometric analysis (2DGA)
• Three-dimensional geometric analysis (3DGA)
• Database management (DBM)
• Computer graphics display facilities (CGDF)
• Peripheral control function (PCF)
• Design analysis modules (DAM)
Following the decomposition technique for LOC, an estimation table, shown in the
Figure below, is developed. A range of LOC estimates is developed for each function.
For example, the range of LOC estimates for the 3D geometric analysis function is
optimistic—4600 LOC, most likely—6900 LOC, and pessimistic—8600 LOC.

Figure 2.2: Estimation table for the LOC method

Applying equation (1) mentioned above, the expected value for the 3D geometric
analysis function is 6800 LOC. Other estimates are derived in a similar fashion. By
summing vertically in the estimated LOC column, an estimate of 33,200 lines of code is
established for the CAD system.
A review of historical data indicates that the organizational average productivity
for systems of this type is 620 LOC/pm. Based on a burdened labor rate of $8000 per
month, the cost per line of code is approximately $13. Based on the LOC estimate and
the historical productivity data, the total estimated project cost is $431,000 and the
estimated effort is 54 person-months (the estimates are rounded-off here).
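The arithmetic behind these figures is easy to reproduce; a short sketch using the totals quoted above (the small difference against the quoted $431,000 comes from rounding the cost per LOC to $13):

    # LOC-based estimation arithmetic for the CAD example above.
    estimated_loc = 33200    # sum of expected LOC over the seven functions
    productivity = 620       # organizational average, LOC per person-month
    labor_rate = 8000        # burdened labor rate, $ per person-month

    cost_per_loc = labor_rate / productivity           # ~ $12.90, rounded to ~$13
    effort_pm = estimated_loc / productivity           # ~ 53.5, rounded to 54 pm
    total_cost = estimated_loc * round(cost_per_loc)   # 33,200 * 13 = $431,600

    print(f"Cost per LOC:     ${cost_per_loc:.2f}")
    print(f"Estimated effort: {effort_pm:.1f} person-months")
    print(f"Estimated cost:   ${total_cost:,}")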
Function Point (FP):
As mentioned earlier, software metrics can be categorized as direct measures and
indirect measures. As discussed before direct measures of the product include LOC.


Indirect measures of the product include functionality, quality, complexity, efficiency,
reliability, maintainability, and many other "–abilities". These indirect measures come
under the category of ‘function-oriented metrics’.
Function-oriented software metrics use a measure of the functionality delivered
by the application as a normalization value. Since ‘functionality’ cannot be measured
directly, it must be derived indirectly using other direct measures. Function-oriented
metrics were first proposed by Albrecht, who suggested a measure called the function
point. Function points are derived using an empirical relationship based on countable
(direct) measures of software's information domain and assessments of software
complexity.
Function points are computed by completing the table shown below. Five
information domain characteristics are determined and counts are provided in the
appropriate table location. Information domain values are defined in the following
manner:

Figure 2.3: Computing function points

Number of user inputs. Each user input that provides distinct application oriented data
to the software is counted. Inputs should be distinguished from inquiries, which are
counted separately.
Number of user outputs. Each user output that provides application oriented
information to the user is counted. In this context output refers to reports, screens, error
messages, etc. Individual data items within a report are not counted separately.
Number of user inquiries. An inquiry is defined as an on-line input that results in the
generation of some immediate software response in the form of an on-line output. Each
distinct inquiry is counted.
Number of files. Each logical master file (i.e., a logical grouping of data that may be
one part of a large database or a separate file) is counted.


Number of external interfaces. All machine readable interfaces (e.g., data files on
storage media) that are used to transmit information to another system are counted.
Once these data have been collected, a complexity value is associated with each
count. Organizations that use function point methods develop criteria for determining
whether a particular entry is simple, average, or complex. Nonetheless, the
determination of complexity is somewhat subjective.
To compute function points (FP), the following relationship is used:
FP = count total × [0.65 + 0.01 × Σ(Fi)]                          (2)
where count total is the sum of all FP entries obtained from the figure for computing
function points shown above.
The Fi (i = 1 to 14) are "complexity adjustment values" based on responses to the
following questions:
1. Does the system require reliable backup and recovery?
2. Are data communications required?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized operational environment?
6. Does the system require on-line data entry?
7. Does the on-line data entry require the input transaction to be built over multiple
screens or operations?
8. Are the master files updated on-line?
9. Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and ease of use by the user?
Each of these questions is answered using a scale that ranges from 0 (not
important or applicable) to 5 (absolutely essential). The constant values in equation (2)
and the weighting factors that are applied to information domain counts are
determined empirically.
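To make equation (2) concrete, here is a sketch of the computation in Python. The information domain weights below are the commonly published values for an "average" complexity rating (simple and complex entries carry different weights), and the counts themselves are hypothetical:

    # Function point computation, per equation (2):
    #   FP = count_total * [0.65 + 0.01 * sum(Fi)]

    # Commonly published weights for an "average" complexity rating.
    AVERAGE_WEIGHTS = {
        "inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7,
    }

    def function_points(counts, factors):
        """counts: information domain counts; factors: the 14 values F1..F14, each 0..5."""
        assert len(factors) == 14
        count_total = sum(n * AVERAGE_WEIGHTS[k] for k, n in counts.items())
        return count_total * (0.65 + 0.01 * sum(factors))

    # Hypothetical counts for a small interactive application:
    counts = {"inputs": 12, "outputs": 8, "inquiries": 6, "files": 4, "interfaces": 2}
    factors = [3] * 14   # every adjustment factor rated "average"
    print(round(function_points(counts, factors)))   # -> 178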
Once function points have been calculated, they are used in a manner analogous
to LOC as a way to normalize measures for software productivity, quality, and other
attributes:
• Errors per FP.
• Defects per FP.
• $ per FP.
• Pages of documentation per FP.
• FP per person-month.
An example of FP-based estimation:


Decomposition for FP-based estimation focuses on information domain values rather
than software functions. Referring to the function point calculation table presented in
the Figure shown below, the project planner estimates inputs, outputs, inquiries, files,
and external interfaces for the CAD software (this was the one used for LOC-based
estimation also). For the purposes of this estimate, the complexity weighting factor is
assumed to be average. The figure below, presents the results of this estimate.

Figure 2.4: Estimating information domain values


Each of the complexity weighting factors is estimated and the complexity adjustment
factor is computed as described below:
------------------------------------------------------------------
Factor Value
Backup and recovery 4
Data communications 2
Distributed processing 0
Performance critical 4
Existing operating environment 3
On-line data entry 4
Input transaction over multiple screens 5
Master files updated on-line 3
Information domain values complex 5
Internal processing complex 5
Code designed for reuse 4
Conversion/installation in design 3
Multiple installations 5
Application designed for change 5
Complexity adjustment factor 1.17
---------------------------------------------------------------------
Finally, the estimated number of FP is derived:
FPestimated = count total × [0.65 + 0.01 × Σ(Fi)]
FPestimated = 375


The organizational average productivity for systems of this type is 6.5 FP/pm. Based on
a burdened labor rate of $8000 per month, the cost per FP is approximately $1230. Based
on the FP estimate and the historical productivity data, the total estimated project cost
is $461,000 and the estimated effort is 58 person-months.
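A short sketch reproducing the arithmetic above (all figures are taken from the example):

fp_estimated = 375
productivity = 6.5      # FP per person-month, organizational average
labor_rate = 8000       # burdened rate, dollars per person-month

cost_per_fp = labor_rate / productivity   # ~$1231, quoted as ~$1230
effort_pm = fp_estimated / productivity   # ~57.7, quoted as 58 person-months
total_cost = fp_estimated * cost_per_fp   # ~$461,500; the text rounds via
                                          # $1230/FP to ~$461,000
print(round(cost_per_fp), round(effort_pm), round(total_cost))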

COCOMO Model:

This model is based on empirical estimation. This estimation for computer software uses empirically derived formulas to predict effort as a function of LOC or FP. Values for LOC or FP are estimated using the approach described in the previous paragraphs. But instead of using the tables described in those sections, the resultant values for LOC or FP are plugged into the estimation model.
In his classic book Software Engineering Economics, Barry Boehm
introduced a hierarchy of software estimation models bearing the name COCOMO, for
COnstructive COst MOdel. The original COCOMO model became one of the most
widely used and discussed software cost estimation models in the industry. It has
evolved into a more comprehensive estimation model, called COCOMO II. Like its
predecessor, COCOMO II is actually a hierarchy of estimation models that address the
following areas:
Application composition model. Used during the early stages of software engineering,
when prototyping of user interfaces, consideration of software and system interaction,
assessment of performance, and evaluation of technology maturity are paramount.
Early design stage model. Used once requirements have been stabilized and basic
software architecture has been established.
Post-architecture-stage model. Used during the construction of the software.
Like all estimation models for software, the COCOMO II models require sizing
information. Three different sizing options are available as part of the model hierarchy:
object points, function points, and lines of source code.
The COCOMO II application composition model uses object points and is
illustrated in the following paragraphs. It should be noted that other, more
sophisticated estimation models (using FP and KLOC) are also available as part of
COCOMO II.


Table 2.1: Complexity weighting for object types

Like function points (described earlier), the object point is an indirect software
measure that is computed using counts of the number of (1) screens (at the user
interface), (2) reports, and (3) components likely to be required to build the application.
Each object instance (e.g., a screen or report) is classified into one of three complexity
levels (i.e., simple, medium, or difficult) using criteria suggested by Boehm. In essence,
complexity is a function of the number and source of the client and server data tables
that are required to generate the screen or report and the number of views or sections
presented as part of the screen or report.

Once complexity is determined, the number of screens, reports, and components are weighted according to the Table shown above. The object point count is then
determined by multiplying the original number of object instances by the weighting
factor in the table and summing to obtain a total object point count. When component-
based development or general software reuse is to be applied, the percent of reuse
(%reuse) is estimated and the object point count is adjusted:
NOP = (object points) × [(100 − %reuse) / 100]
where NOP is defined as new object points.
To derive an estimate of effort based on the computed NOP value, a
“productivity rate” must be derived. The table below presents the productivity rate
‘PROD = NOP/person-month’ for different levels of developer experience and
development environment maturity.

Table 2.2: Productivity rates for object points

Once the productivity rate has been determined, an estimate of project effort can be
derived as:
estimated effort = NOP/PROD
In more advanced COCOMO II models, a variety of scale factors, cost drivers, and
adjustment procedures are required.
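A minimal sketch of the application composition model (the productivity value used in the example call assumes a nominal PROD of 13 NOP per person-month, one of the rates typically listed in Table 2.2):

def application_composition_effort(object_points, pct_reuse, prod):
    # NOP = (object points) x [(100 - %reuse) / 100]
    nop = object_points * (100 - pct_reuse) / 100
    # estimated effort = NOP / PROD, in person-months
    return nop / prod

# e.g., 120 object points with 20% reuse at nominal productivity
print(application_composition_effort(120, 20, 13))  # ~7.4 person-months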


2.2 Project Management

Project management involves the planning, monitoring, and control of the people,
process, and events that occur as software evolves from a preliminary concept to an
operational implementation.

2.2.1 Planning

Software project management begins with a set of activities that are collectively called project planning. Before the project can begin, the manager and the software team must estimate the work to be done, the resources that will be required, and the time that will elapse from start to finish.
Project planning focuses on the definition phase (the definition of the problem, planning a solution, and estimating resources) and results in the following work products:
• The problem statement is a short document describing the problem the system
should address, the target environment, client deliverables, and acceptance
criteria. The problem statement is an initial description and is the seed for the
Project Agreement, formalizing the common understanding of the project by the
client and management, and for the Requirements Analysis Document (RAD), a
precise description of the system under development.
• The top-level design represents the initial decomposition of the system into
subsystems. It is used to assign subsystems to individual teams. The top-level
design will be the seed for the System Design Document (SDD), a precise
description of the software architecture.
• The Software Project Management Plan (SPMP) describes all the managerial
aspects of the project, in particular the work breakdown structure, the schedule,
organization, work packages, and budget.
Moving the design of the software architecture to the beginning is called architecture-centric project management. However, this is a deviation from long-standard software project management practices. In university environments and in small company
projects, the software project manager and software architect roles are often assumed by
the same person. In large projects the roles should be assumed by different people. The
software architect and project manager should work closely together as a decision-
making team, dividing responsibilities between management and technical decisions.
Developing the problem statement


The problem statement is developed by the project manager and the client as a
mutual understanding of the problem to be addressed by the system. The problem
statement describes the current situation, the functionality the system should support, and the environment in which it will be deployed. It also defines the deliverables
expected by the client, together with delivery dates and a set of acceptance criteria. The
problem statement may also specify constraints on the development environment, such
as the programming language to be used. The problem statement is not a precise or
complete specification of the system. Instead, it is a high-level summary of two project
documents yet to be developed, the RAD and the SPMP. The following figure shows an
outline for a problem statement:

Figure 2.5: Outline of problem statement document.

The first section describes the problem domain. Section 2 of the problem statement
provides example scenarios describing the interaction between the users and the system
for the essential functionality. Scenarios are used for describing both the current
situation and the future situation. Section 3 summarizes the functional requirements of
the system. Section 4 summarizes the nonfunctional requirements including constraints
placed by the client, such as the choice of the programming language, component reuse,
or selection of a separate testing team. Section 5 describes the deployment environment,
including platform requirements, a description of the physical environment, users, and
so on. Section 6 lists client deliverables and their associated deadlines.
Defining the top-level design
The top-level design describes the software architecture of the system. The
subsystem decomposition should be high level, focusing on functionality. It should be
kept constant until the analysis phase produces a stable system model. The subsystem
decomposition can and should be revisited during subsystem design, and new
subsystems are usually created during the system design phase. In this case the
organization might have to be changed to reflect the new design. The software architect
identifies the major subsystems and their services, but does not yet define their


interfaces at this point. The subsystem decomposition is refined and modified later
during system design.
Identifying the work breakdown structure
There are several different approaches to developing the work breakdown
structure (WBS) for a project. The most commonly used approach is functional
decomposition based on the software process. For example, the work breakdown
structure shown in the below table is based on a functional breakdown of the work to
be performed.
A functional decomposition based on the functions of the product itself can also
be used to define the tasks. This is often used in high-risk projects where the system
functionality is released incrementally in a set of prototypes.
An object-oriented work breakdown structure is also possible. For example, if a
product consists of five components, an object-oriented WBS would include five tasks
associated with the construction of each of these components. Specifying the
development of these subsystems of the software architecture as tasks comes quite
naturally in object-oriented software projects.
Another decomposition is based on geographical areas. If the project has
developer teams in different geographical regions, the tasks might be identified for each
of these regions. Another WBS is based on organizational units. These can be the
organizational units of the company-for example, marketing, operations, and sales-or
they can be based on the people in the project.

Use case: Authenticate
1. Realize authentication use case
1.1 Develop user interface forms (Login, change PIN)
1.2 Realize authentication protocol with server
1.3 Develop initial account creation

Use case: Withdraw money
2. Develop money withdrawal use case
2.1 Develop user interface forms (Select Account, Specify Amount)
2.2 Realize communication with server
2.3 Develop business logic for approving withdrawal
2.4 Develop interface with each cash distributor

Use case: Deposit Check
3. Develop deposit check use case
3.1 Develop user interface forms (Specify Check, Insert Check)
3.2 Realize communication with server
3.3 Develop business logic for recording the deposit
3.4 Develop interface with label printer

Table 2.3: Example of functional work breakdown structure for a simple ATM

No matter what approach is used, the ultimate goal is to decompose the total work into tasks that are small enough to be doable, whose duration can easily be estimated, and that can be assigned to a single person.
Creating the initial schedule
After the temporal dependencies are established in the task model, a schedule is
created. This requires the estimation of durations for each of the tasks. Estimated times
can of course be based on prior experience of the manager or the estimator, but they
often do not give the needed precision. It is impossible to come up with a precise model
for the complete project. One solution is to start with an initial schedule representing
deadlines mutually agreed by the client and the project manager. These dates are
generous enough that, even in the event of change, they can still be met.
The initial version of the SPMP should be viewed as a proposed version to be reviewed by the software architect and the subsystem teams before it can be
considered binding for all project participants. The individual teams should start
working on a revision of the initial version of the subsystem decomposition right after
the initial team meeting, focusing on their subsystem and work breakdown structure for
the development of the subsystem.
This planning activity has to be done by the team in parallel with ongoing
requirements analysis and system design activities. The newly found tasks should be
presented during a formal planning review scheduled during or near the end of the
analysis phase. The result of the planning review should be revised work breakdown
structure and software architecture. This revision, in turn, is the basis for the second
version of the SPMP, which should be baselined and used as the basis for the project
agreement with the client. Scheduling requires a continuous trade-off between
resources, time, and implemented functionality and should be done regularly by the
project manager.


2.2.2 Scheduling:

A schedule is the mapping of tasks onto time: each task is assigned start and end
times. This allows us to plan the deadlines for individual deliverables. The two most
often used diagrammatic notations for schedules are PERT (Program Evaluation
Review Technique) and Gantt charts. A Gantt chart is a compact way to present the
schedule of a software project along the time axis. A Gantt chart is a bar graph on which
the horizontal axis represents time and the vertical axis lists the different tasks to be
done. Tasks are represented as bars whose length corresponds to the planned duration
of the task. A schedule for the database subsystem example is represented as a Gantt chart in the following figure; before that, the tasks for the database subsystem are given in a table.

Table 2.4: Examples of tasks for the realization of the database subsystem


Figure 2.6: An example of schedule for the database subsystem (Gantt chart)

A PERT chart represents a schedule as an acyclic graph of tasks. The following figure is a PERT chart for the database subsystem schedule. The planned start and duration of the tasks are used to compute the critical path, which represents the longest path through the graph. The length of the critical path corresponds to the shortest possible schedule, assuming sufficient resources to accomplish, in parallel, tasks that are independent. Moreover, tasks on the critical path are the most important, as a delay in any of these tasks will result in a delay in the overall project. The tasks and bars represented in thicker lines belong to the critical path.

Figure 2.7: Schedule for the database subsystem (PERT chart)
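The critical-path computation that a PERT chart supports can be sketched as follows (the task names, durations, and dependencies here are illustrative, not taken from the figure):

# Critical path = longest path through the acyclic task graph.
durations = {"A": 3, "B": 5, "C": 2, "D": 4}              # days
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

memo = {}
def finish_time(task):
    # Earliest finish = own duration + latest finish of predecessors.
    if task not in memo:
        memo[task] = durations[task] + max(
            (finish_time(p) for p in deps[task]), default=0)
    return memo[task]

print(max(finish_time(t) for t in durations))  # 12 days, via A -> B -> D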


2.2.3 Tracking

As the project progresses, the project manager understands the activities to be completed and the milestones to be tracked and controlled with the help of the project schedule. Tracking of the project schedule is done in several ways.
* Conducting periodic meetings with team members: - By conducting periodic
meetings, the project manager is able to distinguish between completed and
uncompleted activities or those that are yet to start. In addition, the project manager
considers the problems in the project as reported by the team members.
* Assessing the results of reviews: - Software reviews are conducted when one or more
activities of the project are complete or when a particular development phase is
complete. The purpose of conducting reviews is to check whether the software is
developed according to user requirements or not.
* Determining the milestones: - Milestones indicating the expected outputs are described. These milestones check the status of a project by comparing the progress of activities with the estimated end date of the project.
* Using earned value analysis to determine the progress of the project: - The progress of the project is determined quantitatively by the earned value analysis technique. This technique provides a common value scale for every task, regardless of its type, based on the total hours required to accomplish the project. Based on this estimation, each activity is given an earned value, which is a measure of progress and describes the percentage of the activities that have been completed.
Earned value analysis:
The earned value system provides a common value scale for every task,
regardless of the type of work being performed. The total hours to do the whole project
are estimated, and every task is given an earned value based on its estimated
percentage of the total. Basically, earned value is useful as it provides a quantitative
technique of assessing progress on the project as a whole (even when tasks are
completed in an order that differs from the original schedule).
Consider the following example project:


Figure 2.8: Example project for illustrating earned value analysis


Calculating the planned value:
The planned value for a task is the estimated percentage of the planned duration
of that task of the total planned duration of all tasks.
Planned Value(T) = D(T) / TD, where T is a task, D(T) is the planned duration of T, and TD is the total planned duration of all tasks.
Applying this formula, the planned value for the planning task is calculated from
the above table. It is estimated that planning will take 180 minutes. The total duration of
all of our estimates for development is 2305 minutes. The planned value for the
planning task by the formula above is, therefore, given by
Planned Value = 180/2305 ~ 7.81%
This value is written in the planned value (unit) column for the planning task.
The same is done for all of the tasks in the plan. The total of all of the unit planned
values will be 100% if this is done correctly.
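A sketch of this calculation (only the planning task's 180-minute estimate and the 2305-minute total are taken from the example; the second task's duration is hypothetical):

total_planned = 2305                  # minutes, all tasks combined
tasks = {"Planning": 180,             # from the example
         "Scope": 60}                 # hypothetical duration

for name, minutes in tasks.items():
    print(name, round(100 * minutes / total_planned, 2), "%")
# Planning -> 7.81 %; summing the unit values of all tasks yields 100%,
# and a running total gives the cumulative planned value column.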


Figure 2.9: Calculating the planned value

Consider that one wishes to document when he/she expects each task to be
completed. In this example, this has been done by filling in the week number in the
planned completion column. In order to track progress against this plan, one should
have an idea of what percentage complete he/she expects to be at the end of each week.
A cumulative total of the planned value is kept in the cumulative planned value
column. This column can be used to determine what percentage of the project will be
complete by the end of week 3, for example.
It can be seen that the sum of all of the planned values for tasks planned to be
complete before the end of week 3 is approx. 48.81% (because the tasks are in order, the
cumulative planned value for the last week 3 entry gives us this number - if the tasks
are listed unordered by completion date, the sum must be calculated by adding the
planned values of all tasks due to be completed before the date in question).

Tracking a project with earned value:


Consider that the end of the second week is being reached on this project and one needs to know the progress toward completion on time. Also consider that everything has not been done in the order in which it was planned, so this may not be easy to guess at. In this case the concept of an earned value can be used. Earned value is
assigned only to completed tasks. If a task is completed with a planned value of p%,
then the earned value for that task is p%. Incomplete tasks have no earned value.
In the example below (shown by the following figure), compiled at the end of the
second week, it is observed that the group has completed the following tasks with their
associated earned value:
– Planning – 7.81%
– Scope – 2.64%
– Product Features – 2.64%
– User Profile – 1.98%
– Assumptions – 2.64%
– UI requirements – 5.21%
By summing the earned values for the completed tasks it is determined that at the end of week 2, the cumulative earned value (the percent complete) is 7.81 + 2.64 + 2.64 + 1.98 + 2.64 + 5.21 = 22.91%. Now, knowing the earned value, the project group can determine whether they are ahead of or behind schedule.


Figure 2.10: Project status at the end of second week

Looking at the cumulative planned value in the table, it is observed that to be on schedule at the end of week 2, the group should have been 33.19% complete. By the above calculation, they are actually only 22.91% complete. Hence it is noted that the project is falling behind schedule. Knowing this allows the project team to take action. Such action may involve adjusting work practices, negotiating with the client for an extension to the deadline, etc.
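The week-2 status check can be sketched directly from the unit planned values listed above:

# Earned value accrues only for completed tasks.
completed = {"Planning": 7.81, "Scope": 2.64, "Product Features": 2.64,
             "User Profile": 1.98, "Assumptions": 2.64,
             "UI requirements": 5.21}       # unit planned values, %

earned = sum(completed.values())            # ~22.9% (22.91% in the text)
planned_so_far = 33.19                      # cumulative planned value, week 2
print("behind schedule" if earned < planned_so_far
      else "on or ahead of schedule")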

2.2.4 Schedule and cost slippage

There is a direct relation between schedule delay and cost. As the schedule gets delayed, the cost of the project increases.


10. Objective Questions:

1. Software project estimation techniques can be broadly classified under which of the
following:
a. automated processes
b. decomposition techniques
c. empirical models

d. regression models
e. both b and c
2. The size estimate for the software product to be built must be based on a direct
measure like LOC
a. True
b. False
3. Problem-based estimation is based on problem decomposition which focuses on
a. information domain values
b. project schedule
c. software functions
d. process activities
e. both a and c
4. LOC-based estimation techniques require problem decomposition based on
a. information domain values
b. project schedule
c. software functions
d. process activities
5. FP-based estimation techniques require problem decomposition based on
a. information domain values
b. project schedule
c. software functions
d. process activities
6. Process-based estimation techniques require problem decomposition based on
a. information domain values
b. project schedule
c. software functions
d. process activities
e. both c and d
7. Unlike a LOC or function point, each person’s “use-case” is exactly the same size
a. True
b. False
8. When agreement between estimates is poor the cause may often be traced to
‘inadequately defined project scope’ or ‘inappropriate productivity data’
a. True
b. False


9. Empirical estimation models are typically based on


a. Expert judgment based on past project experiences
b. refinement of expected value estimation
c. regression models derived from historical project data
d. trial and error determination of the parameters and coefficients
10. COCOMO II is an example of a suite of modern empirical estimation models that
require sizing information expressed as:
a. function points
b. lines of code
c. object points
d. any one of the above
11. Function points are of no use in developing estimates for object-oriented software.
a. True
b. False
12. The objective of software project planning is to
a. convince the customer that a project is feasible.
b. make use of historical project data.
c. enable a manager to make reasonable estimates of cost and schedule.
d. determine the probable profit margin prior to bidding on a project.
13. Since project estimates are not completely reliable, they can be ignored once a
software development project begins.
a. True
b. False

Answers: 1. e 2. b 3. e 4. c 5. a 6. e 7. b 8. a 9. c 10. d 11. b 12. c 13. b

11. Subjective Questions:


1. What are the various activities during software project planning? Describe them in
detail?

2. Explain in detail about scheduling a project and tracking its progress?


3. Write a note on COCOMO model?
4. Explain the following estimation techniques:
a) LOC b) FP
5. Write notes on product metrics?

12. Questions:


1. Explain the COCOMO used for software estimation.


Ans:
COCOMO is a simple cost model for estimating the number of person-months required to develop software. The model also estimates the development schedule in months and produces an effort and schedule distribution by major phases. It is based on Barry Boehm's Constructive Cost Model (COCOMO).

This is a top-level model. Basic COCOMO is applicable to the large majority of software projects. The model estimates cost using one of three different development modes: organic, semidetached, and embedded.

Organic: small projects developed in a familiar, in-house environment.

Semidetached: intermediate projects, a mixture of organic and embedded characteristics; product size generally extends up to 300 KDSI (thousand delivered source instructions).

Embedded: projects tightly coupled with complex hardware, software, and operational regulations, e.g., electronic funds transfer, air traffic control.
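The Basic COCOMO effort and schedule equations, with Boehm's published coefficients for the three modes, can be sketched as:

# Basic COCOMO: effort E = a * KLOC^b (person-months),
# development time D = c * E^d (months).
COEFFICIENTS = {"organic":      (2.4, 1.05, 2.5, 0.38),
                "semidetached": (3.0, 1.12, 2.5, 0.35),
                "embedded":     (3.6, 1.20, 2.5, 0.32)}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b
    duration = c * effort ** d
    return effort, duration

print(basic_cocomo(32, "organic"))  # e.g., a 32 KLOC in-house project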

2. Write two advantages of PERT chart.


Ans:
– Makes visible dependencies (precedence relations) between WBS elements
– Facilitates identification of critical path and makes this visible
– Facilitates computation of early start, late start, and slack for each activity
– Provides for potentially reduced project duration due to better understanding of
dependencies leading to improved overlapping of activities and tasks where
feasible

3. Explain how project scheduling and tracking is done for a software development
project.
Ans:
A schedule is the mapping of tasks onto time: each task is assigned start and end times.
This allows us to plan the deadlines for individual deliverables. The two most often
used diagrammatic notations for schedules are PERT (Program Evaluation Review
Technique) and Gantt charts.

As the project progresses, the project manager understands the activities to be completed and the milestones to be tracked and controlled with the help of the project schedule. Tracking of the project schedule is done in several ways.

• Conducting periodic meetings with team members


• Assessing the results of reviews


• Determining the milestones


• Using earned value analysis to determine the progress of the project

13. Learning Resources:

1. Roger Pressman, “Software Engineering”, sixth edition, Tata McGraw Hill.


2. Bernd Bruegge, “Object oriented software engineering”, Second Edition, Pearson
Education.

03 Risk Management


1. Motivation:
The following are the motivational factors in risk management

• The prior knowledge of risks should be understandable to the developers and managers.
• The planning model should be developed such that risk cannot disturb the
project.
2. Learning Objective:
This module explains that people involved in software risk management will be able to understand the factors involved in risk management.
1. Students will be able to understand the Software Risk factors during the
development of the software.
2. Students will be able to understand Risk Identification, Risk assessment, Risk
projection.
3. Students will be able to understand the RMMM model.

3. Objective:
Risks are potential problems that might affect the successful completion of a
software project. Risks involve uncertainty and potential losses. Risk analysis and
management are intended to help a software team understand and manage
uncertainty during the development process. The important thing is to remember
that things can go wrong and to make plans to minimize their impact when they do. The work product is called a Risk Mitigation, Monitoring, and Management Plan (RMMM).

4. Prerequisite:
Knowledge of planning and estimation of software.

5. Syllabus:

Module Content Duration Self Study Time

3.1 Risk Identification 1 lecture 1 hour

3.2 Risk assessment 1 lecture 1 hour

3.3 Risk projection 1 lecture 1 hour


3.4 RMMM 1 lecture 1 hour

6. Learning Outcomes:
Risks involve uncertainty and potential losses. Risk analysis and management
are intended to help a software team understand and manage uncertainty
during the development process. The important thing is to remember that
things can go wrong and to make plans to minimize their impact when
they do. The work product is called a Risk Mitigation, Monitoring and
Management Plan (RMMM).

7. Weightage: Marks

8. Abbreviations:

9. Key Definitions:
Risk: An undesired event or circumstance that occurs while a project is underway.

Uncertainty—the risk may or may not happen; that is, there are no 100% probable risks.

Loss—if the risk becomes a reality, unwanted consequences or losses will occur.

10. Theory
3.1 Risk management:

• It is necessary for the project manager to anticipate and identify different risks
that a project may be susceptible to.


Risk Management: It aims at reducing the impact of all kinds of risks that may affect a project by identifying, analyzing, and managing them. Risk analysis and management are a series of steps that help a software team to understand and manage uncertainty.

Types of risks

 Project risks threaten the project plan. If project risks become real, it is likely that the project schedule will slip and that costs will increase.
 Technical risks are the risks that threaten the quality and timeliness of the
software to be produced. If a technical risk becomes a reality, implementation
may become difficult or impossible.

 Business risks threaten the viability of the software to be built and often
jeopardize the project or the product.

1. Risk identification

Risk identification is an attempt to specify threats to the project plan (estimates, schedule, resource loading, etc.). By identifying known and predictable risks, the project manager can take steps to avoid them.

One method for identifying risks is to create a risk item checklist.

 Product size—risks associated with the overall size of the software to be built
or modified.
 Business impact—risks associated with constraints imposed by management or
the marketplace.

 Customer characteristics—risks associated with the sophistication of the customer and the developer's ability to communicate with the customer in a timely manner.

 Process definition—risks associated with the degree to which the software process has been defined and is followed by the development organization.

 Development environment—risks associated with the availability and quality of the tools to be used to build the product.

 Technology to be built—risks associated with the complexity of the system to be built and the "newness" of the technology that is packaged by the system.


 Staff size and experience—risks associated with the overall technical and
project experience of the software engineers who will do the work.

Risk Components and Drivers

The risk components are defined in the following manner:

Performance risk—the degree of uncertainty that the product will meet its
requirements and be fit for its intended use.

Cost risk—the degree of uncertainty that the project budget will be maintained.

Support risk—the degree of uncertainty that the resultant software will be easy to
correct, adapt, and enhance.

Schedule risk—the degree of uncertainty that the project schedule will be maintained and that the product will be delivered on time.

The impact of each risk driver on the risk component is divided into one of four
impact categories—negligible, marginal, critical, or catastrophic.

2. Risk Projection

Risk projection involves the following steps:

Impact Assessment:


Creating a Risk Table


The RMMM plan

The RMMM plan consists of risk avoidance, risk monitoring, risk management and
contingency planning

Risk mitigation is a problem avoidance activity.

Risk monitoring is a project tracking activity with three primary objectives:

(1) to assess whether predicted risks do, in fact, occur;

(2) to ensure that risk aversion steps defined for the risk are being properly applied;
and

(3) to collect information that can be used for future risk analysis.


11. Objective Questions:

1. The most important feature of the spiral model is


(A) requirement analysis. (B) risk management. (C) quality management. (D)
configuration management.

2. The RMMM plan consists of


(A) risk avoidance,
(B) risk monitoring,
(C) risk management and contingency planning
(D) all of above

12. Subjective Questions:

1. Describe the difference between “known risks” and “predictable risks”.

2. Explain RMMM plan

13. Learning Resources:

1. Roger Pressman, “Software Engineering”, sixth edition, Tata McGraw Hill.


2. Bernd Bruegge, “Object oriented software engineering”, Second Edition, Pearson
Education.


04 Software Configuration
Management
_______________________________
1. Motivation:
Presented chapter is the foundation of basic software engineering and therefore the motivation of this course is to understand how the software engineering process involves SCM.

2. Learning Objective:
This module explains that people involved in Software Configuration Management will be able to understand the factors involved in configuration management.
1. Students will be able to understand the Software configuration management
process, Identification of objects in software configuration.
2. Students will be able to understand Managing and controlling changes,
Managing and controlling versions.
3. Students will be able to understand the configuration audit, status reporting,
SCM standards and SCM issues.

3. Objective:
This module explains set of activities designed to control change by identifying the
work products that are likely to change, establishing relationships among them,
defining mechanisms for managing different versions of these work products,
controlling the changes imposed, and auditing and reporting on the changes made.

4. Prerequisite:
Workflow of different phases of software life cycle

5. Syllabus:

Prerequisites Syllabus Duration Self Study

Basic Project Knowledge 4.1 Software configuration management, process, Identification of objects in software configuration 2 Hrs 2 Hrs
4.2 Managing and controlling changes, Managing and controlling versions
4.3 Configuration audit, status reporting, SCM standards and SCM issues. 2 Hrs 2 Hrs

6. Learning Outcomes:
 Students will be able to do the software configuration and also able to Identify
the objects in software configuration.
 Students will be able to Manage and control changes during the configuring
process.
 Students will be able to understand the configuration audits and prepare reports
of the status.
 Students will be able to understand software engineering process involved with
SCM.

7. Weightage: 10 marks

8. Abbreviations

(1) SCM – Software Configuration Management


(2) SCI - Software Configuration Item
(3) CCA - Change Control Authority
(4) ECO - Engineering Change Order

9. Key Definitions:

Baselines: A baseline is a software configuration management concept that helps us to control change without seriously impeding justifiable change.

Software Configuration Objects: To control and manage configuration items, each must
be named and managed using an object-oriented approach. Basic objects are created by
software engineers during analysis, design, coding, or testing.

Version Control: Version control is to identify and manage project elements as they
change over time.


Change Control: Change control refers to the policy, rules, procedures, information,
activities, roles, authorization levels and states relating to creation, updates, approvals,
tracking and archiving of items involved with the implementation of a change process.

10. Theory

4.1 Software Configuration Management

General Points

 Changes are inevitable when software is built.


 A primary goal of software engineering is to improve the ease with which
changes can be made to software.

 Configuration management is all about change control.

 Every software engineer has to be concerned with how changes made to work
products are tracked and propagated throughout a project.

 To ensure that quality is maintained the change process must be audited.

 Software configuration management is an important element of software quality assurance.

 Its primary responsibility is the control of change.

 Software configuration management (SCM) is an umbrella activity that is applied throughout the software process.

 Five SCM tasks:

 Identification (tracking multiple versions to enable efficient changes)

 Version control (control changes before and after release to customer)

 Change control (authority to approve and prioritize changes)

 Configuration auditing (ensure changes made properly)

 Reporting (tell others about changes made)

Difference between SCM and Software Support (Maintenance)

 Support is a set of software engineering activities that occur after software has
been delivered to the customer and put into operation.


 Software configuration management is a set of tracking and control activities that begin when a software engineering project begins and terminate only when the software is taken out of operation.
Software Configuration Management Items

 The output of the software process is information that may be divided into three
broad categories:
 Computer programs (both source level and executable forms);
 Documents that describe the computer programs (targeted at both technical
practitioners and users),
 Data (contained within the program or external to it).

 The items that comprise all information produced as part of the software process
are collectively called a software configuration.
 As the software process progresses, the number of software configuration items
(SCIs) grows rapidly.
 A System Specification spawns a Software Project Plan and Software
Requirements Specification (as well as hardware related documents).
 These in turn spawn other documents to create a hierarchy of information. If
each SCI simply spawned other SCIs, little confusion would result.
Unfortunately, another variable enters the process—change.
 In the extreme, an SCI could be considered to be a single section of a large
specification or one test case in a large suite of tests.
 More realistically, an SCI is a document, an entire suite of test cases, or a named
program component (e.g., a C++ function or an Ada package).
 In addition to the SCIs that are derived from software work products, many
software engineering organizations also place software tools under configuration
control.
 That is, specific versions of editors, compilers, and other CASE tools are "frozen"
as part of the software configuration.
 Because these tools were used to produce documentation, source code, and data,
they must be available when changes to the software configuration are to be
made.
 Although problems are rare, it is possible that a new version of a tool (e.g., a
compiler) might produce different results than the original version.
 For this reason, tools, like the software that they help to produce, can be
baselined as part of a comprehensive configuration management process.
 In reality, SCIs are organized to form configuration objects that may be cataloged
in the project database with a single name.


Figure 5.1 Configuration objects

 A configuration object has a name, attributes, and is "connected" to other objects by relationships.
 Referring to the figure, the configuration objects, Design Specification, data model,
component N, source code and Test Specification are each defined separately.
 However, each of the objects is related to the others as shown by the arrows. A
curved arrow indicates a compositional relation.
 That is, data model and component N are part of the object Design Specification.
 A double-headed straight arrow indicates an interrelationship.
 If a change were made to the source code object, the interrelationships enable a
software engineer to determine what other objects (and SCIs) might be affected.

4.1.1 Managing and controlling changes

Fundamental Sources of Change


 New business or market conditions dictate changes in product requirements or business rules.
 New customer needs demand modification of data produced by information
systems, functionality delivered by products, or services delivered by a
computer-based system.
 Reorganization or business growth/downsizing causes changes in project
priorities or software engineering team structure.
 Budgetary or scheduling constraints cause a redefinition of the system or
product.

Baselines

 A baseline is a software configuration management concept that helps us to control change without seriously impeding justifiable change.
 A specification or product that has been formally reviewed and agreed upon,
that thereafter serves as the basis for further development, and that can be
changed only through formal change control procedures.

Figure 5.2 Baselined SCIs and the project database


 Before a software configuration item becomes a baseline, change may be made quickly and informally.
 However, once a baseline is established, we figuratively pass through a swinging
one way door.

 Changes can be made, but a specific, formal procedure must be applied to evaluate and verify each change.

 A work product becomes a baseline only after it is reviewed and approved.

 A baseline is a milestone in software development that is marked by the delivery of one or more configuration items.

 Once a baseline is established, each change request must be evaluated and verified by a formal procedure before it is processed.

 The progression of events that lead to a baseline is also illustrated in Figure.

 Software engineering tasks produce one or more SCIs.

 After SCIs are reviewed and approved, they are placed in a project database (also
called a project library or software repository).

 When a member of a software engineering team wants to make a modification to a baselined SCI, it is copied from the project database into the engineer's private work space.

 However, this extracted SCI can be modified only if SCM controls (discussed
later in this chapter) are followed.

 The arrows in Figure 5.2 illustrate the modification path for a baselined SCI.

Description of main tasks in detail

1. Software Configuration Objects

 To control and manage configuration items, each must be named and managed
using an object-oriented approach.
 Basic objects are created by software engineers during analysis, design, coding,
or testing.

 For example, a basic object might be a section of a requirements specification, a source listing for a component, or a suite of test cases that are used to exercise the code.


 Aggregate objects are collections of basic objects and other aggregate objects.
Design Specification is an aggregate object.

 Configuration object attributes:

 unique name,

 description,

 list of resources,

 realization (a pointer to a work product for a basic object or null for an aggregate object)

 An entity-relationship (E-R) diagram can be used to show the interrelationships among the objects.
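A minimal sketch of a configuration object as a data structure, following the attributes just listed (the class and field names are illustrative):

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ConfigurationObject:
    name: str                          # unique name
    description: str
    resources: List[str] = field(default_factory=list)
    realization: Optional[str] = None  # pointer to a work product;
                                       # None (null) for an aggregate object
    parts: List["ConfigurationObject"] = field(default_factory=list)
    related: List["ConfigurationObject"] = field(default_factory=list)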

2. Version Control

 Configuration management allows a user to specify alternative configurations of the software system through the selection of appropriate versions.
 This is supported by associating attributes with each software version, and then
allowing a configuration to be specified [and constructed] by describing the set of
desired attributes.
 These "attributes" mentioned can be as simple as a specific version number that is
attached to each object or as complex as a string of Boolean variables (switches) that
indicate specific types of functional changes that have been applied to the system
 One representation of the different versions of a system is the evolution graph
presented in Figure 5.3.
 Each node on the graph is an aggregate object, that is, a complete version of the
software.
 Each version of the software is a collection of SCIs (source code, documents,
data), and each version may be composed of different variants.
 To illustrate this concept, consider a version of a simple program that is
composed of entities 1, 2, 3, 4, and 5
 Entity 4 is used only when the software is implemented using color displays.
Entity 5 is implemented when monochrome displays are available. Therefore, two
variants of the version can be defined: (1) entities 1, 2, 3, and 4; (2) entities 1, 2, 3, and
5.
 To construct the appropriate variant of a given version of a program, each entity
can be assigned an "attribute-tuple"—a list of features that will define whether the
entity should be used when a particular variant of a software version is to be
constructed.


 One or more attributes is assigned for each variant. For example, a color attribute
could be used to define which entity should be included when color displays are to
be supported.
 Another way to conceptualize the relationship between entities, variants and
versions (revisions) is to represent them as an object pool. Referring to Figure 5.4,
the relationship between configuration objects and entities, variants and versions
can be represented in a three-dimensional space.
 An entity is composed of a collection of objects at the same revision level. A
variant is a different collection of objects at the same revision level and therefore
coexists in parallel with other variants.
 A new version is defined when major changes are made to one or more objects.
 A number of different automated approaches to version control have been
proposed over the past decade. The primary difference in approaches is the
sophistication of the attributes that are used to construct specific versions and
variants of a system and the mechanics of the process for construction.

Figure 5.3 Evolution graph


Figure 5.4 Object pool representation of components, variants and versions
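The attribute-tuple idea can be sketched as follows, using the color/monochrome example above: a variant is constructed by selecting every entity whose attributes match the desired configuration.

# Entities 1-3 are common; 4 is the color variant, 5 the monochrome one.
entities = {1: {}, 2: {}, 3: {},
            4: {"display": "color"}, 5: {"display": "monochrome"}}

def build_variant(**wanted):
    # An entity is included if each wanted attribute either matches
    # or is absent from the entity's attribute-tuple.
    return [e for e, attrs in entities.items()
            if all(attrs.get(k, v) == v for k, v in wanted.items())]

print(build_variant(display="color"))       # [1, 2, 3, 4]
print(build_variant(display="monochrome"))  # [1, 2, 3, 5]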

3. Change Control

 Change request is submitted and evaluated to assess technical merit and impact
on the other configuration objects and budget
 Change report contains the results of the evaluation
 Change control authority (CCA) makes the final decision on the status and
priority of the change based on the change report
 Engineering change order (ECO) is generated for each change approved
(describes change, lists the constraints, and criteria for review and audit)
 Object to be changed is checked-out of the project database subject to access
control parameters for the object
 Modified object is subjected to appropriate SQA and testing procedures
 Modified object is checked-in to the project database and version control
mechanisms are used to create the next version of the software
 Synchronization control is used to ensure that parallel changes made by different
people don’t overwrite one another


Figure 5.5: The change control process
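The flow above can be sketched as a simple state progression (the states and helper function are illustrative, not a prescribed SCM tool API):

from dataclasses import dataclass

@dataclass
class ChangeRequest:
    description: str
    status: str = "submitted"  # submitted -> evaluated -> approved/denied -> closed

def process_change(cr, approved_by_cca):
    cr.status = "evaluated"        # evaluation produces the change report
    if not approved_by_cca:        # the CCA decides status and priority
        cr.status = "denied"
        return cr
    cr.status = "approved"         # an ECO is generated for the change
    # check out the object, modify it, run SQA and testing, check it
    # back in, and let version control create the next version
    cr.status = "closed"
    return cr

print(process_change(ChangeRequest("fix login form"), True).status)  # closed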

4. Configuration Audit

 Identification, version control, and change control help the software developer to
maintain order in what would otherwise be a chaotic and fluid situation.
 To ensure that the change has been properly implemented we conduct :
o formal technical reviews and

o the software configuration audit.

 The formal technical review focuses on the technical correctness of the configuration object that has been modified.


 The reviewers assess the SCI to determine consistency with other SCIs,
omissions, or potential side effects.
 A formal technical review should be conducted for all but the most trivial
changes.
 A software configuration audit complements the formal technical review by
assessing a configuration object for characteristics that are generally not
considered during review.
 The audit asks and answers the following questions:

 Has the change specified in the ECO (Engineering change order) been
made? Have any additional modifications been incorporated?

 Has a formal technical review been conducted to assess technical correctness?

 Has the software process been followed and have software engineering
standards been properly applied?

 Has the change been "highlighted" in the SCI? Have the change date and
change author been specified? Do the attributes of the configuration object
reflect the change?

 Have SCM procedures for noting the change, recording it, and reporting it
been followed?

 Have all related SCIs been properly updated?

 In some cases, the audit questions are asked as part of a formal technical review.

 However, when SCM is a formal activity, the SCM audit is conducted separately by
the quality assurance group.

5. Status Reporting

 Configuration status reporting (sometimes called status accounting) is an SCM task that answers the following questions:
 What happened?

 Who did it?

 When did it happen?

 What else will be affected?

 Each time an SCI is assigned new or updated identification, a CSR entry is made.


 Each time a configuration audit is conducted, the results are reported as part of the
CSR task.
 Output from CSR may be placed in an on-line database, so that software developers
or maintainers can access change information by keyword category.
 In addition, a CSR report is generated on a regular basis and is intended to keep
management and practitioners appraised of important changes.
 Configuration status reporting plays a vital role in the success of a large software
development project.
 When many people are involved, it is likely that "the left hand not knowing what the
right hand is doing" syndrome will occur.
 Two developers may attempt to modify the same SCI with different and conflicting
intents.
 A software engineering team may spend months of effort building software to an
obsolete hardware specification.

Importance of version and change control

 Version control is used to combine procedures and tools to manage different versions of configuration objects that are created during software development.

 Change control adds human procedures to the automated tools so that effective control of change is achieved.

Difference

Version Control: Version control is to identify and manage project elements as they change over time.

Change Control: Change control refers to the policy, rules, procedures, information, activities, roles, authorization levels and states relating to creation, updates, approvals, tracking and archiving of items involved with the implementation of a change process.

4.1.2 Managing and controlling versions

Refer page no.195-197

11. References:


1) Roger Pressman, Software Engineering: A Practitioner's Approach, 6th Edition, McGraw Hill.

2) I. Sommerville, Software Engineering, 7th edition, Addison-Wesley.

12. Objective Questions

1. Which of these are valid software configuration items?


a) software tools
b) documentation
c) executable programs
d) test data
e) all of the above
2. Which of the following is not considered one of the four important elements
that should exist when a configuration management system is developed?
a) Component elements
b) human elements
c) process elements
d) validation elements
3. Once a software engineering work product becomes a baseline it cannot be
changed again

a) True b) False

4. Which configuration objects would not typically be found in the project database?

a) design specification
b) marketing data
c) organizational structure description
d) test plans
e) both b and c
5. Modern software engineering practice suggests that a software team maintain
SCI's in a project database or repository
a) True b) False
6. A data repository meta model is used to determine how
a) information is stored in the repository
b) data integrity can be maintained
c) the existing model can be extended
d) All of the above
7. Many data repository requirements are the same as those for a typical
database application.


a) True b) False

8. The ability to track relationships and changes to configuration objects is one of the most important features of the SCM repository.

a) True b) False
9. Which of the following tasks is not part of software configuration
management?
a) change control
b) reporting
c) statistical quality control
d) version control
10. A basic configuration object is a __________ created by a software engineer
during some phase of the software development process.
a) program data structure
b) a software component
c) unit of information
d) all of the above
11. A new __________ is defined when major changes have been made to one or
more Configuration objects
a) Entity
b) Item
c) Variant
d) Version
12. Change control is not necessary if a development group is making use of an
automated project database tool
a) True b) False
13. When software configuration management is a formal activity, the software
configuration audit is conducted by the
a) development team
b) quality assurance group
c) senior managers
d) testing specialists
14. The primary purpose of configuration status reporting is to
a) allow revision of project schedules and cost estimates by project
managers
b) evaluate the performance of software developers and organizations
c) make sure that change information is communicated to all affected
parties
d) none of the above
15. Configuration issues that need to be considered when developing Web Apps
include:
a) Content
b) Cost


c) People
d) Politics
e) a, b, and c

(Ans : 1-E,2-B,3-B,4-E,5-A,6-B,7-B,8-A,9-C,10-D,11-D,12-B,13-B,14-C,15-E)

13. Subjective Questions


1. What is SCM? Explain in detail?
Ans: Refer page no.190-193
2. Explain in detail about change control?
Ans: Refer page no.193-195, 198
3. Explain in detail about version control?
Ans: Refer page no.195-197

14. University Questions


1. Write short note on Software Configuration Management. [May 2010] [10 marks]
Ans:

It is an umbrella activity that is applied throughout the software process.

– Software Configuration Objects


– Software Configuration Management Items

2. Explain Software Configuration Management and Change Control Management in detail. [Nov. 2010] [10 marks]

05 Software Design
Specification
1. Motivation:


Presented chapter is the foundation of basic software engineering and therefore the motivation of this course is to understand the design process involved with software.

2. Learning Objective:
This module explains that people involved in the software design process will be able to understand the design process during the development of the software.
1. Students will be able to understand Software Design – Abstraction , Modularity.
2. Students will be able to understand the Software Architecture – Effective
modular design, Cohesion and Coupling, Example of code for cohesion and
coupling.
3. Students will be able to understand User Interface Design – Human Factors,
Interface standards, Design Issues – User Interface Design.

3. Objective:
This module explains the Software Design Process during the development of the
software.

4. Learning Outcomes:
 This module will help students to learn the Software Design Process during the
development of the software along with the concepts of Abstraction, Modularity.

 The students will also be able to develop an effective modular software architecture, applying concepts like cohesion and coupling in making the design of the software.
 The Students will also be able to create efficient UI designs.

5. Prerequisite:
Workflow of different phases of software life cycle

6. Syllabus:

Prerequisites Syllabus Duration Self Study

Basic design Knowledge 5.1 Software Design – Abstraction, Modularity 4 Hrs 4 Hrs
5.2 Software Architecture – Effective modular design, Cohesion and Coupling, Example of code for cohesion and coupling.
5.3 User Interface Design – Human Factors, Interface standards, Design Issues – User Interface Design Process. 4 Hrs 4 Hrs

Design Workflow
This covers the activities of the design phase of the software life cycle.
System design concepts, architectural styles and design patterns are explained below.

5.1 System Design Concepts:


Abstraction

At the highest level of abstraction, a solution is stated in broad terms using the language
of the problem environment. At lower levels of abstraction, a more detailed description
of the solution is provided.

System design concepts describe subsystem decompositions and their properties. This includes the concept of subsystems and classes, subsystem interfaces, coupling, and cohesion.

Modularity

Modularity is the most common manifestation of separation of concerns. Software is divided into separately named and addressable components, sometimes called modules.

Coupling:


Coupling is the number of dependencies between two subsystems. A desirable property of subsystem decomposition is that subsystems are as loosely coupled as reasonable. This minimizes the impact that errors or future changes in one subsystem have on other subsystems.
Modules that interact with each other through message passing have low coupling, while modules that interact through shared variables that maintain state information have high coupling. The sketch below illustrates two such designs.
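As an illustration (all class and method names below are invented for this sketch, not taken from a real system), the following Java fragment contrasts the two cases: the first printer depends only on the value passed to it, while the second pair of classes communicates through shared state.

    // Low coupling: the module receives everything it needs as a parameter.
    class ReportPrinter {
        void print(String reportText) {
            System.out.println(reportText);
        }
    }

    // High coupling: both modules read and write the same shared variable,
    // so a change to SharedState can silently break either of them.
    class SharedState {
        static String reportText;
    }

    class ReportBuilder {
        void build() {
            SharedState.reportText = "Quarterly report";
        }
    }

    class CoupledReportPrinter {
        void print() {
            // Correct only if ReportBuilder.build() has already run.
            System.out.println(SharedState.reportText);
        }
    }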

Interface design
 Using information developed during interface analysis, define interface objects
and actions (operations).
 Define events (user actions) that will cause the state of the user interface to
change. Model this behavior.
 Depict each interface state as it will actually look to the end user.
 Indicate how the user interprets the state of the system from information
provided through the interface.

Design Issues
 Response time. System response time is the primary complaint for many
interactive applications. System response time is measured from the point at
which the user performs some control action (e.g., hits the return key or clicks a
mouse) until the software responds with desired output or action.
 Help facilities. Almost every user of an interactive, computer-based system
requires help now and then. In some cases, a simple question addressed to a
knowledgeable colleague can do the trick. In others, detailed research in a
multivolume set of “user manuals” may be the only option. In most cases,
however, modern software provides online help facilities that enable a user to
get a question answered or resolve a problem without leaving the interface.
 Error handling. Error messages and warnings that impart useless or misleading information serve only to increase user frustration, and there are few computer users who have not encountered a cryptic, unhelpful message of this kind. A well-designed error message instead describes the problem in language the user can understand and suggests how to recover.
 Menu and command labeling. The typed command was once the most common
mode of interaction between user and system software and was commonly used
for applications of every type. Today, the use of window-oriented, point-and-pick
interfaces has reduced reliance on typed commands, but some power-users
continue to prefer a command-oriented mode of interaction.
 Application accessibility. As computing applications become ubiquitous,
software engineers must ensure that interface design encompasses mechanisms that enable easy access for those with special needs. Accessibility for users (and software engineers) who may be physically challenged is an imperative for ethical, legal, and business reasons.
 Internationalization. Software engineers and their managers invariably
underestimate the effort and skills required to create user interfaces that
accommodate the needs of different locales and languages.

Types of Coupling:

1. Data coupling
2. Stamp coupling
3. Control coupling
4. Hybrid coupling
5. Common coupling
6. Content coupling

Data coupling:

Modules communicate by parameters. Each parameter is an elementary piece of data. Each parameter is necessary to the communication. Nothing extra is needed.

Drawbacks: (i) Too many parameters make the interface difficult to understand, and errors become more likely.
(ii) Tramp data: data that 'travels' across modules before being used.

Stamp coupling:

A composite data structure is passed between modules, although its internal structure contains data that is not used. Grouping unrelated data into such an artificial structure is called 'bundling'.

Control coupling:

A module controls the logic of another module through the parameter. The
controlling module needs to know how the other module works.

Hybrid coupling:

This is defined as a subset of data used as control.

Common coupling:


Here global data is used as communication between modules.

Drawbacks: (i) Inflexibility
(ii) Difficulty in understanding the use of data

Content coupling:

A module refers to the inside of another module. It can also branch into another
module and refers to data within another module. By doing so, a module can change
the internal workings of another module.
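A short Java sketch may make three of these types concrete (the Employee record and the method names are invented for illustration, not taken from the text):

    class Payroll {
        // Data coupling: only the elementary data needed is passed.
        static double grossPay(double hours, double rate) {
            return hours * rate;
        }

        // Stamp coupling: the whole Employee structure is passed even
        // though only two of its fields are actually used.
        static double grossPay(Employee e) {
            return e.hours * e.rate;
        }

        // Control coupling: a flag steers the callee's logic, so the
        // caller must know how the callee works internally.
        static double pay(Employee e, boolean overtime) {
            return overtime ? e.hours * e.rate * 1.5 : e.hours * e.rate;
        }
    }

    class Employee {
        String name;
        String address;   // travels along unused in the stamp-coupled call
        double hours;
        double rate;
    }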

Cohesion:

Cohesion is the number of dependencies within a subsystem. If a subsystem contains many objects that are related to each other and perform similar tasks, its
cohesion is high. If a subsystem contains a number of unrelated objects, its cohesion is
low. A desirable property of a subsystem decomposition is that it leads to subsystems
with high cohesion.

Strong cohesion implies that all parts of a subsystem should have a close logical
relationship with each other. That means that, if some kind of change is required in the software, all the related pieces are found in one place.

A class will be cohesive if most of the methods defined in a class use most of the
data members most of the time. If we find different subsets of data within the same
class being manipulated by separate groups of functions then the class is not cohesive
and should be broken down as shown below.

Figure 3.13: Illustration of cohesion
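The figure itself is not reproduced in these notes; the following Java sketch (with invented names) shows the kind of break-up the text describes: a class whose methods manipulate two unrelated subsets of data is split into two cohesive classes.

    // Low cohesion: two unrelated groups of data and methods in one class.
    class AccountAndReport {
        String customerName;
        double balance;                                  // used only by credit()
        java.util.List<String> lines = new java.util.ArrayList<>();  // used only by addLine()

        void credit(double amount) { balance += amount; }
        void addLine(String line)  { lines.add(line); }
    }

    // Higher cohesion: each class keeps only the data its methods use.
    class Account {
        String customerName;
        double balance;
        void credit(double amount) { balance += amount; }
    }

    class Report {
        java.util.List<String> lines = new java.util.ArrayList<>();
        void addLine(String line) { lines.add(line); }
    }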


Types of cohesion:

1. Functional cohesion
2. Sequential cohesion
3. Communicational cohesion
4. Procedural cohesion
5. Temporal cohesion
6. Logical cohesion
7. Coincidental cohesion

Functional cohesion:

All elements contribute to the execution of one and only one problem-related task. It
has a focused, strong, single-minded purpose. Unrelated activities are not done by the
elements.

Sequential cohesion:

Elements are involved in activities such that output data from one activity becomes
input data to the next. Usually this cohesion has good coupling and is easily
maintained. It is not so readily reusable, however, because it groups activities that will not, in general, be useful together.

Communicational cohesion:

Elements contribute to activities that use the same input or output data. This
cohesion is not flexible, for example, if we need to focus on some activities and not the
others. It is possible that links cause activities to affect each other. So it is better to split
the elements into functional cohesive ones.

Procedural cohesion:

Elements are related only by sequence, otherwise the activities are unrelated.
This is similar to sequential cohesion, except for the fact that elements are unrelated.
Commonly found at the top of hierarchy, such as the main program module.

Temporal cohesion:

Elements are involved in activities that are related in time. Commonly found in
initialization and termination modules. Elements are basically unrelated, so the module
will be difficult to reuse. A good practice is to initialize as late as possible and terminate
as early as possible.


Logical cohesion:

Elements contribute to activities of the same general category (type). For example, a
report module, display module or I/O module. The elements usually have control
coupling, since one of the activities will be selected.

Coincidental cohesion:

Elements contribute to activities with no meaningful relationship to one another


(i.e., mixture of activities). Similar to logical cohesion, except the activities may not even
be the same type. This is difficult to understand and maintain, with strong possibilities
of causing side effects every time the module is modified.
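To make the two ends of the scale concrete, here is a small Java sketch (names invented for the example): the first method is functionally cohesive, doing exactly one problem-related task, while the second is only logically cohesive, grouping several same-category activities behind a control flag (which also creates control coupling with its callers).

    class CohesionExamples {
        // Functional cohesion: one focused, single-minded task.
        static boolean isLeapYear(int year) {
            return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
        }

        // Logical cohesion: several activities of the same general
        // category, one of which is selected by a flag.
        static void output(int device, String data) {
            switch (device) {
                case 0: System.out.println(data);       break;  // console
                case 1: /* write data to a log file */  break;
                case 2: /* send data to a printer  */   break;
            }
        }
    }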

Human Interface

A user interface is a collection of techniques and mechanisms to interact with something. In a graphical interface the primary interaction mechanism is a pointing
device of some kind. This device is the electronic equivalent to the human hand. What
the user interacts with is a collection of elements referred to as objects. They can be seen,
heard, touched, or otherwise perceived. Objects are always visible to the user and are
used to perform tasks. They are interacted with as entities independent of all other
objects. People perform operations, called actions, on objects. The operations include
accessing and modifying objects by pointing, selecting, and manipulating. All objects
have standard resulting behaviors.

Common usability problems


• Ambiguous menus and icons.
• Languages that permit only single direction movement through a system.
• Input and direct manipulation limits.
• Complex linkage.
• Inadequate feedback.
• Lack of system anticipation.
• Inadequate error messages.


Important Human Characteristics in Design

Human characteristics important in design include perception, memory, visual acuity, foveal and peripheral vision, sensory storage, information processing, learning, skill, and individual differences.
• Memory: Memory is not the most stable of human attributes, as anyone who has
forgotten why they walked into a room, or forgotten a very important birthday,
can attest.
 Short-term, or working, memory.
 Long-term memory
 Mighty memory
 Sensory Storage

Mental Models: As a result of our experiences and culture, we develop mental models
of things and people we interact with
Movement Control: Once data has been perceived and an appropriate action decided
upon, a response must be made;
Learning: Learning, as has been said, is the process of encoding in long-term memory
information that is contained in short-term memory.
Skill: The goal of human performance is to perform skillfully. To do so requires linking
inputs and responses into a sequence of action.
Individual Differences: In reality, there is no average user. A complicating but very advantageous human characteristic is that we all differ in looks, feelings, motor abilities, intellectual abilities, learning abilities and speed, and so on.

Questions:
1. What is Modularity?
2. What is Abstraction?
3. Explain different types of Coupling.
4. What is Cohesion? Explain different types of Cohesion


5. What are the design issues in Human interface design?

06 Software Quality Assurance

6.1 Motivation:
This chapter is part of the foundation of basic software engineering; the motivation of this module is to understand the software quality assurance process.

6.2 Learning Objective:


This module explains the meaning of Software Quality Assurance, with the help of
concepts that define the quality of a software product.
Students will be able to understand the importance of quality assurance during the development of the software.
1. Students will be able to understand Software Quality Assurance Concepts and
the various Software standards.
2. Students will be able to understand the Quality metrics and how to improve
Software Reliability.


3. Students will be able to understand the Quality Measurement and Metrics.

6.3 Objective:
The main objective is to introduce to the students about the product that is to be
engineered and the process that provides a framework for the engineering technology.
1. To provide knowledge of software engineering discipline.
2. To analyze risk in software design and quality.
3. To introduce the concept of advance software methodology.

6.4 Prerequisite:
Workflow of different phases of software life cycle

6.5 Learning Outcomes:


1. The Student will be able to learn Software Quality Assurance Concepts and
Software standards.
2. The Student will be able to learn Quality Measurement and Metrics.
3. The Student will be able to learn to improve Software Reliability.

6.6 Syllabus:

Prerequisites      Syllabus                                      Duration   Self Study
Basic knowledge    6.1 Software Quality Assurance –              2 Hrs      2 Hrs
of quality             Software standards
                   6.2 Quality metrics, Software Reliability
                   6.3 Quality Measurement and Metrics           2 Hrs      2 Hrs

Software Quality

Software quality is conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.
The definition serves to emphasize three important points:


1. Software requirements are the foundation from which quality is measured. Lack of
conformance to requirements is lack of quality.
2. Specified standards define a set of development criteria that guide the manner in
which software is engineered. If the criteria are not followed, lack of quality will
almost surely result.
3. There is a set of implicit requirements that often goes unmentioned (e.g., the desire
for ease of use). If software conforms to its explicit requirements but fails to meet
implicit requirements, software quality is suspect.
Software quality is a complex mix of factors that will vary across different
applications and the customers who request them.
McCall’s Quality Factors:
The factors that affect software quality can be categorized in two broad groups:
(1) factors that can be directly measured (e.g., defects per function-point) and (2) factors
that can be measured only indirectly (e.g., usability or maintainability). In each case
measurement must occur. The software (documents, programs, data) must be compared to some datum and an indication of quality arrived at.
McCall, Richards, and Walters propose a useful categorization of factors that
affect software quality. These software quality factors, shown in the figure below, focus
on three important aspects of a software product: its operational characteristics, its
ability to undergo change, and its adaptability to new environments.
Referring to the factors noted in the figure below, McCall and his colleagues
provide the following descriptions:
Correctness. The extent to which a program satisfies its specification and fulfills the
customer's mission objectives.

Reliability. The extent to which a program can be expected to perform its intended
function with required precision.


Figure 4.30: McCall’s software quality factors

Efficiency. The amount of computing resources and code required by a program to perform its function.

Integrity. Extent to which access to software or data by unauthorized persons can be controlled.

Usability. Effort required to learn, operate, prepare input, and interpret output of a
program.

Maintainability. Effort required to locate and fix an error in a program.

Flexibility. Effort required to modify an operational program.

Testability. Effort required to test a program to ensure that it performs its intended
function.

Portability. Effort required to transfer the program from one hardware and/or software
system environment to another.

Reusability. Extent to which a program [or parts of a program] can be reused in other
applications—related to the packaging and scope of the functions that the program
performs.

Interoperability. Effort required to couple one system to another.

It is difficult, and in some cases impossible, to develop direct measures of these quality factors. In fact, many of the metrics defined by McCall et al. can be measured only subjectively. The metrics may be in the form of a checklist that is used to grade specific attributes of the software.

Quality Standards

A quality assurance system may be defined as the organizational structure, responsibilities, procedures, processes, and resources for implementing quality management. Quality assurance systems are created to help organizations ensure their products and services satisfy customer expectations by meeting their specifications.
These systems cover a wide variety of activities encompassing a product’s entire life
cycle including planning, controlling, measuring, testing and reporting, and improving quality levels throughout the development and manufacturing process. ISO 9000
describes quality assurance elements in generic terms that can be applied to any
business regardless of the products or services offered.

The ISO 9000 standards have been adopted by many countries including all
members of the European Community, Canada, Mexico, the United States, Australia,
New Zealand, and the Pacific Rim. Countries in Latin and South America have also
shown interest in the standards.

After adopting the standards, a country typically permits only ISO registered
companies to supply goods and services to government agencies and public utilities.
Telecommunication equipment and medical devices are examples of product categories
that must be supplied by ISO registered companies. In turn, manufacturers of these
products often require their suppliers to become registered. Private companies such as
automobile and computer manufacturers frequently require their suppliers to be ISO
registered as well.

To become registered to one of the quality assurance system models contained in ISO 9000, a company's quality system and operations are scrutinized by third party
auditors for compliance to the standard and for effective operation. Upon successful
registration, a company is issued a certificate from a registration body represented by
the auditors. Semi-annual surveillance audits ensure continued compliance to the
standard.

The ISO approach to software quality systems:

The ISO 9000 quality assurance models treat an enterprise as a network of interconnected processes. For a quality system to be ISO compliant, these processes
must address the areas identified in the standard and must be documented and
practiced as described.

ISO 9000 describes the elements of a quality assurance system in general terms.
These elements include the organizational structure, procedures, processes, and
resources needed to implement quality planning, quality control, quality assurance, and
quality improvement. However, ISO 9000 does not describe how an organization
should implement these quality system elements. Consequently, the challenge lies in
designing and implementing a quality assurance system that meets the standard and
fits the company’s products, services, and culture.

The ISO 9001 Standard:


ISO 9001 is the quality assurance standard that applies to software engineering.
The standard contains 20 requirements that must be present for an effective quality
assurance system. Because the ISO 9001 standard is applicable to all engineering
disciplines, a special set of ISO guidelines (ISO 9000-3) has been developed to help
interpret the standard for use in the software process.

The requirements delineated by ISO 9001 address topics such as management responsibility, quality system, contract review, design control, document and data
control, product identification and traceability, process control, inspection and testing,
corrective and preventive action, control of quality records, internal quality audits,
training, servicing, and statistical techniques. In order for a software organization to
become registered to ISO 9001, it must establish policies and procedures to address each
of the requirements just noted (and others) and then be able to demonstrate that these
policies and procedures are being followed.

ISO 9126 Quality Factors:

The ISO 9126 standard was developed in an attempt to identify the key quality
attributes for computer software. The standard identifies six key quality attributes:

Functionality. The degree to which the software satisfies stated needs as indicated by the
following subattributes: suitability, accuracy, interoperability, compliance, and security.

Reliability. The amount of time that the software is available for use as indicated by the
following sub-attributes: maturity, fault tolerance, recoverability.

Usability. The degree to which the software is easy to use as indicated by the following
subattributes: understandability, learnability, operability.

Efficiency. The degree to which the software makes optimal use of system resources as
indicated by the following sub-attributes: time behavior, resource behavior.

Maintainability. The ease with which repair may be made to the software as indicated by
the following subattributes: analyzability, changeability, stability, testability.

Portability. The ease with which the software can be transposed from one environment
to another as indicated by the following subattributes: adaptability, installability,
conformance, replaceability.


Like McCall’s quality factors, the ISO 9126 factors do not necessarily lend
themselves to direct measurement. However, they do provide a worthwhile basis for
indirect measures and an excellent checklist for assessing the quality of a system.

Quality Metrics
The overriding goal of software engineering is to produce a high-quality system,
application, or product. To achieve this goal, software engineers must apply effective
methods coupled with modern tools within the context of a mature software process. In
addition, a good software engineer (and good software engineering managers) must
measure if high quality is to be realized.
The quality of a system, application, or product is only as good as the
requirements that describe the problem, the design that models the solution, the code
that leads to an executable program, and the tests that exercise the software to uncover
errors. A good software engineer uses measurement to assess the quality of the analysis
and design models, the source code, and the test cases that have been created as the
software is engineered. To accomplish this real-time quality assessment, the engineer
must use technical measures to evaluate quality in objective, rather than subjective
ways.
The project manager must also evaluate quality as the project progresses. Private
metrics collected by individual software engineers are assimilated to provide project
level results. Although many quality measures can be collected, the primary thrust at
the project level is to measure errors and defects. Metrics derived from these measures
provide an indication of the effectiveness of individual and group software quality
assurance and control activities.
Metrics such as work product (e.g., requirements or design) errors per function
point, errors uncovered per review hour, and errors uncovered per testing hour provide
insight into the efficacy of each of the activities implied by the metric. Error data can
also be used to compute the defect removal efficiency (DRE) for each process framework
activity.
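The usual form of this metric, following Pressman, is DRE = E / (E + D), where E is the number of errors found before delivery and D is the number of defects found after delivery. A minimal Java sketch (the figures used below are purely illustrative):

    class DefectRemovalEfficiency {
        // DRE = E / (E + D): E = errors found before delivery,
        // D = defects reported after delivery.
        static double dre(int errorsBeforeDelivery, int defectsAfterDelivery) {
            return (double) errorsBeforeDelivery
                    / (errorsBeforeDelivery + defectsAfterDelivery);
        }

        public static void main(String[] args) {
            // 90 problems caught before release, 10 reported afterwards:
            System.out.println(dre(90, 10));   // prints 0.9
        }
    }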

Measuring quality:

Although there are many measures of software quality, correctness, maintainability, integrity, and usability provide useful indicators for the project team. Gilb suggests definitions and measures for each.
Correctness. A program must operate correctly or it provides little value to its users.
Correctness is the degree to which the software performs its required function. The
most common measure for correctness is defects per KLOC, where a defect is defined as a verified lack of conformance to requirements. When considering the overall quality of a software product, defects are those problems reported by a user of the program after
the program has been released for general use. For quality assessment purposes, defects
are counted over a standard period of time, typically one year.
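As a worked example of this measure (the numbers are illustrative only, not from the text): a product with 12 verified defects reported in its first year against 48,000 delivered lines of code has 12 / 48 = 0.25 defects per KLOC. In Java:

    class Correctness {
        // Defects per KLOC over the standard measurement period.
        static double defectsPerKloc(int verifiedDefects, int linesOfCode) {
            return verifiedDefects / (linesOfCode / 1000.0);
        }

        public static void main(String[] args) {
            System.out.println(defectsPerKloc(12, 48_000));  // prints 0.25
        }
    }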
Maintainability. Software maintenance accounts for more effort than any other
software engineering activity. Maintainability is the ease with which a program can be
corrected if an error is encountered, adapted if its environment changes, or enhanced if
the customer desires a change in requirements. There is no way to measure
maintainability directly; therefore, we must use indirect measures. A simple time-
oriented metric is mean-time-to-change (MTTC), the time it takes to analyze the change
request, design an appropriate modification, implement the change, test it, and
distribute the change to all users. On average, programs that are maintainable will have
a lower MTTC (for equivalent types of changes) than programs that are not
maintainable.
Hitachi has used a cost-oriented metric for maintainability called spoilage—the
cost to correct defects encountered after the software has been released to its end-users.
When the ratio of spoilage to overall project cost (for many projects) is plotted as a
function of time, a manager can determine whether the overall maintainability of
software produced by a software development organization is improving. Actions can
then be taken in response to the insight gained from this information.
Integrity. Software integrity has become increasingly important in the age of hackers
and firewalls. This attribute measures a system's ability to withstand attacks (both
accidental and intentional) to its security. Attacks can be made on all three components
of software: programs, data, and documents.
To measure integrity, two additional attributes must be defined: threat and
security. Threat is the probability (which can be estimated or derived from empirical
evidence) that an attack of a specific type will occur within a given time. Security is the
probability (which can be estimated or derived from empirical evidence) that the attack
of a specific type will be repelled. The integrity of a system can then be defined as
integrity = summation [(1 – threat) × (1 – security)]
where threat and security are summed over each type of attack.
Usability. The catch phrase "user-friendliness" has become ubiquitous in discussions of
software products. If a program is not user-friendly, it is often doomed to failure, even
if the functions that it performs are valuable. Usability is an attempt to quantify user-
friendliness and can be measured in terms of four characteristics: (1) the physical and or
intellectual skill required to learn the system, (2) the time required to become
moderately efficient in the use of the system, (3) the net increase in productivity (over
the approach that the system replaces) measured when the system is used by someone who is moderately efficient, and (4) a subjective assessment (sometimes obtained through a questionnaire) of users' attitudes toward the system.

4.3.3 Testing & SQA

Different testing strategies are discussed in the Software Testing module; the following paragraphs explain Software Quality Assurance.

SQA (Software Quality Assurance):

Even the most jaded software developers will agree that high-quality software is
an important goal. Many definitions have been proposed in the literature. One such
definition is as follows: “Conformance to explicitly stated functional and performance
requirements, explicitly documented development standards, and implicit
characteristics that are expected of all professionally developed software.” The
definition serves to emphasize the following three important points:

• Software requirements are the foundation from which quality is measured. Lack
of conformance to requirements is lack of quality.
• Specified standards define a set of development criteria that guide the manner in
which software is engineered. If the criteria are not followed, lack of quality will
almost surely result.
• A set of implicit requirements often goes unmentioned (e.g., the desire for ease of
use and good maintainability). If software conforms to its explicit requirements
but fails to meet implicit requirements, software quality is suspect.
SQA activities:

Software quality assurance is composed of a variety of tasks associated with two different constituencies—the software engineers who do technical work and an SQA group that has responsibility for quality assurance planning, oversight, record keeping, analysis, and reporting.

Software engineers address quality (and perform quality assurance and quality
control activities) by applying solid technical methods and measures, conducting
formal technical reviews, and performing well-planned software testing.

The charter of the SQA group is to assist the software team in achieving a high
quality end product. The Software Engineering Institute recommends a set of SQA
activities that address quality assurance planning, oversight, record keeping, analysis,
and reporting. These activities are performed (or facilitated) by an independent SQA
group that:


Prepares an SQA plan for a project. The plan is developed during project planning and is
reviewed by all interested parties. Quality assurance activities performed by the
software engineering team and the SQA group are governed by the plan. The plan
identifies:

• evaluations to be performed
• audits and reviews to be performed
• standards that are applicable to the project
• procedures for error reporting and tracking
• documents to be produced by the SQA group
• amount of feedback provided to the software project team

Participates in the development of the project’s software process description. The software team
selects a process for the work to be performed. The SQA group reviews the process
description for compliance with organizational policy, internal software standards,
externally imposed standards (e.g., ISO-9001), and other parts of the software project
plan.
Reviews software engineering activities to verify compliance with the defined software process.
The SQA group identifies, documents, and tracks deviations from the process and
verifies that corrections have been made.
Audits designated software work products to verify compliance with those defined as part of the
software process. The SQA group reviews selected work products; identifies, documents,
and tracks deviations; verifies that corrections have been made; and periodically reports
the results of its work to the project manager.

Ensures that deviations in software work and work products are documented and handled
according to a documented procedure. Deviations may be encountered in the project plan,
process description, applicable standards, or technical work products.

Records any noncompliance and reports to senior management. Noncompliance items are
tracked until they are resolved.
In addition to these activities, the SQA group coordinates the control and
management of change and helps to collect and analyze software metrics.

Difference between testing and SQA:


Software quality is the degree to which a system, component, or process meets
specified requirements and customer or user needs or expectations.

Quality assurance and quality control both contribute to delivering a high-quality software product, though the way they go about it is different. This can be illustrated by looking at the definitions of the two.


Software Quality Assurance:-

Software QA involves the entire software development process - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

This is a 'staff' function, and is responsible for establishing standards and procedures to prevent defects and breakdowns in the SDLC. The focus of QA is prevention, processes, and continuous improvement of these processes.

Software Quality Control:-

This is a department function, which compares the standards to the product, and
takes action when non-conformance is detected for example testing.

This involves operation of a system or application under controlled conditions and evaluating the results. The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.

Relationship between Quality Control and Quality Assurance:-

An application that meets its requirements totally can be said to exhibit quality.
Quality is not based on a subjective assessment but rather on a clearly demonstrable,
and measurable, basis.

- Quality Control is a process directed at validating that a specific deliverable meets standards, is error free, and is the best deliverable that can be produced. It is a responsibility internal to the team.

- Quality Assurance (QA), on the other hand, is a review with a goal of improving the
process as well as the deliverable. QA is often an external process. QA is an effective
approach to producing a high quality product.

One aspect is the process of objectively reviewing project deliverables and the
processes that produce them (including testing), to identify defects, and then making
recommendations for improvement based on the reviews. The end result is the
assurance that the system and application is of high quality, and that the process is
working. The achievement of quality goals is well within reach when organizational strategies are used in the testing process. From the client's perspective, an application's
quality is high if it meets their expectations.

10. Objective Questions:

7. A key concept of quality control is that all work products


a. are delivered on time and under budget
b. have complete documentation
c. have measurable specifications for process outputs
d. are thoroughly tested before delivery to the customer
8. People who perform software quality assurance must look at the software from the
customer's perspective.
a. True
b. False
9. Which of these activities is not one of the activities recommended to be performed by
an independent SQA group?
a. prepare SQA plan for the project
b. review software engineering activities to verify process compliance
c. report any evidence of noncompliance to senior management
d. serve as the sole test team for any software produced
10. The purpose of software reviews is to uncover errors in work products so they can
be removed before moving on to the next phase of development
a. True
b. False
11. In general the earlier a software error is discovered and corrected the less costly to
the overall project budget.
a. True
b. False
12. Which of the following are objectives for formal technical reviews?
a. allow senior staff members to correct errors
b. assess programmer productivity
c. determining who introduced an error into a program
d. uncover errors in software work products
13. At the end of a formal technical review all attendees can decide to
a. accept the work product without modification
b. modify the work product and continue the review
c. reject the product due to stylistic discrepancies


d. reject the product due to severe errors


e. both a and d
14. In any type of technical review, the focus of the review is on the product and not the
producer.
a. True
b. False
15. The ISO quality assurance standard that applies to software engineering is
a. ISO 9000:2004
b. ISO 9001:2000
c. ISO 9002:2001
d. ISO 9003:2004
16. Which of the following is not a section in the standard for SQA plans recommended
by IEEE?
a. budget
b. documentation
c. reviews and audits
d. test

Answers: 1. d 2. b 3. e 4. a 5. a 6. d 7. c 8. a 9. d 10. a 11. a 12. d 13. e 14. a 15. b 16. a

11. Subjective Questions:


2. What are software reviews? Differentiate between inspection and walkthrough.
6. What is software quality? Explain Software Quality Assurance.
7. Write short notes on Quality Standards.
12. University Questions:

1. What are five of the most important attributes of software quality? Explain them. (May 2010) (10 marks)


7 Software Testing

1. Motivation:
• Object design, which precedes implementation, leads to system degradation if it is not well understood and well done.
• The average software product released on the market is not error free

2. Objective:
To get knowledge about the activities in the different testing strategies.

3. Learning Objective:
This module explains the meaning of Software Testing, with the help of concepts that
define the Testing process.
Students will be able to understand the importance of software testing during the testing of the software.
1. Students will be able to understand Software Testing Concepts and the various
Software standards.
2. Students will be able to understand the testing process: basic concepts and terminology, verification & validation, and White Box Testing (Path Testing, Control Structures Testing, DEFUSE testing).
3. Students will be able to understand Black Box Testing, OO testing methods.
4. Students will be able to understand Software Maintenance and Reverse Engineering.

4. Prerequisite:
Activities involved in requirement, analysis and design phases of software life cycle

5. Learning Outcomes:


 Students will be able to test software with the help of various techniques such as Black Box Testing, OO testing methods, and White Box Testing (Path Testing, Control Structures Testing, DEFUSE testing).

 Students will be able to maintain software and also to carry out Reverse Engineering.

6. Syllabus:
Module Content                                                   Duration     Self Study Time

7.1 Basic concept and terminology, Verification & validation,    3 lectures   3 hours
    White Box Testing – Path Testing, Control Structures
    Testing, DEFUSE testing
7.2 Black Box Testing – BVA, Integration, Validation and         3 lectures   3 hours
    system testing
7.3 OO testing methods – Class Testing, Interclass testing,      3 lectures   3 hours
    testing architecture, Behavioral testing
7.4 Software Maintenance – Reverse Engineering                   3 lectures   3 hours

5. Learning
• Various testing strategies
• Types of White box and black box testing
• Object oriented testing
• Maintenance types
• Maintenance log and defect reports
• Software reengineering

6. Weightage: Marks

7. Abbreviations:


(1) BVA – Boundary Value Analysis

8. Key Definitions:
1. Testing: Software testing is an investigation conducted to provide stakeholders
with information about the quality of the product or service under test. Software
testing also provides an objective, independent view of the software to allow the
business to appreciate and understand the risks of software implementation.
2. Black box testing: It is a software testing technique whereby the internal
workings of the item being tested are not known by the tester. For example, in a
black box test on a software design, the tester only knows the inputs and what
the expected outcomes should be and not how the program arrives at those
outputs. The tester does not ever examine the programming code and does not
need any further knowledge of the program other than its specifications.
3. White box testing: It is a software testing technique whereby explicit knowledge
of the internal workings of the item being tested is used to select the test data.
Unlike black box testing, white box testing uses specific knowledge of
programming code to examine outputs. The test is accurate only if the tester
knows what the program is supposed to do. He or she can then see if the
program diverges from its intended goal. White box testing does not account for
errors caused by omission, and all visible code must also be readable.
4. Reverse engineering: Software reverse engineering is done to retrieve the source
code of a program because the source code was lost, to study how the program
performs certain operations, to improve the performance of a program, to fix a
bug (correct an error in the program when the source code is not available), to
identify malicious content in a program such as a virus or to adapt a program
written for use with one microprocessor for use with another.
5. Forward engineering: Forward engineering is defined as the normal execution of
the software life cycle, i.e. in the forward direction. Therefore it is the traditional
process of moving from high-level abstractions and logical, implementation-
independent designs to the physical implementation of a system.
6. Defect report: It is a document reporting on any flaw in a component or system
that can cause the component or system to fail to perform its required function.
7. Maintenance log: It means a log in which unserviceabilities, rectifications and
daily inspections are recorded.
8. Unit testing: Unit testing is a software development process in which the
smallest testable parts of an application, called units, are individually and
independently scrutinized for proper operation. Unit testing is often automated
but it can also be done manually. This testing mode is a component of Extreme
Programming (XP), a pragmatic method of software development that takes a
meticulous approach to building a product by means of continual testing and
revision.


9. Integration testing: Integration testing, also known as integration and testing (I&T), is a software development process in which program units are combined and tested as groups in multiple ways. In this context, a unit is defined as the smallest
testable part of an application. Integration testing can expose problems with the
interfaces among program components before trouble occurs in real-world
program execution. Integration testing is also a component of Extreme
Programming (XP).
10. Regression testing: Regression testing is the process of testing changes to
computer programs to make sure that the older programming still works with
the new changes. Regression testing is a normal part of the program
development process and, in larger companies, is done by code testing
specialists.
11. Acceptance testing: Acceptance testing is a final stage of testing that is
performed on a system prior to the system being delivered to a live environment.
Systems subjected to acceptance testing might include such deliverables as a
software system or a mechanical hardware system. Acceptance tests are
generally performed as "black box" tests.

9. Theory
9.1 Testing
Software testing is an investigation conducted to provide stakeholders with
information about the quality of the product or service under test. Software testing also
provides an objective, independent view of the software to allow the business to
appreciate and understand the risks of software implementation.

The following sections explain the different testing strategies.

9.1.1 FTR (Formal Technical Review)


The purpose of FTR is to find defects (errors) before they are passed on to
another software engineering activity or released to the customer. It is an effective
means for improving the software quality. Walkthroughs and inspection are two types
of reviews. The fundamental difference between them is that walkthroughs have fewer
steps and are less formal than inspections.

Walkthroughs:


A walkthrough team should consist of four to six individuals. The team should
include at least one representative of the team drawing up the specifications (in case of a specification
walkthrough), the manager responsible for the specifications, a client representative, a
representative of the team that will perform the next phase of the development (for
specification walkthrough it is a representative from design team), and a representative
of the Software quality assurance group. The walkthrough should be chaired by the
SQA representative. The members of the walkthrough team, as far as possible, should
be experienced senior technical staff members because they tend to find the important
faults. The material for the walkthrough must be distributed to the participants well in
advance to allow for careful preparation. Each reviewer should study the material and
develop two lists: a list of items the reviewer does not understand and a list of items the
reviewer believes are incorrect.

The person leading the walkthrough guides the other members of the
walkthrough team through the document to uncover any faults. It is not the task of the
team to correct faults, merely to record them for later correction. There are two ways of
conducting a walkthrough. The first is participant driven. Participants present their lists
of unclear items and items they think are incorrect. The representative of the
specifications team must respond to each query, clarifying what is unclear to the
reviewer and either agreeing that indeed there is a fault or explaining why the reviewer
is mistaken. The second way of conducting a review is document driven. A person
responsible for the document, either individually or as part of a team, walks the
participants through that document, with the reviewers interrupting either with their
prepared comments or comments triggered. The second is likely to be more thorough
leading to detection of more faults.

The primary role of the walkthrough leader is to elicit questions and facilitate
discussion. A walkthrough is an iterative process; it is not supposed to be one-sided
instruction by the presenter. It also is essential that the walkthrough not be used as a
means of evaluating the participants, because the walkthrough degenerates into a point-
scoring session and does not detect faults. Walkthrough includes all kinds like:
specification walkthrough, design walkthrough, plan walkthrough and code
walkthrough.

Inspections:


Inspections were first proposed by Fagan for testing designs and code. An
inspection goes far beyond a walkthrough and has five formal steps. First, an overview
of the document to be inspected (specification, design, code, or plan) is given by one of
the individuals responsible for producing that document. At the end of the overview
session, the document is distributed to the participants. In the second step, preparation,
the participants try to understand the document in detail. Lists of fault types found in
recent inspections, with the fault types ranked by frequency, are excellent aids. The
third step is the inspection. To begin, one participant walks through the document with
the inspection team, ensuring that every item is covered and that every branch is taken
at least once. Then fault finding commences. Within one day the leader of the inspection
team (the moderator) must produce a written report of the inspection to ensure
meticulous follow-through. The fourth stage is the rework, in which the individual
responsible for that document resolves all faults and problems noted in the written
report. The final stage is the follow-up. The moderator must ensure that every single
issue raised has been resolved satisfactorily, by either fixing the document or clarifying
items incorrectly flagged as faults. All fixes must be checked to ensure that no new
faults have been introduced. If more than 5 percent of the material inspected has been
reworked, then the team must reconvene for a 100 percent reinspection.

The inspection should be conducted by a team of four. For example, in the case
of a design inspection, the team will consist of a moderator, designer, implementer, and
tester. The moderator is both manager and leader of the inspection team. There must be
a representative of the team responsible for the current phase as well as a representative
of the team responsible for the next phase. The tester may be any programmer responsible
for setting up test cases; preferably the tester should be a member of the SQA group.
The IEEE standard recommends a team of between three and six participants. Special
roles are played by the moderator; the reader, who leads the team through the design (or
code etc.); and the recorder, who is responsible for producing a written report of the
detected faults.

An essential component of an inspection is the checklist of potential faults. For example, the checklist for a design inspection should include items such as these: Is
each item of the specification document adequately and correctly addressed? For each
interface, do the actual and formal arguments correspond? etc. An important
component of the inspection procedure is the record of fault statistics. Faults must be
recorded by severity (major or minor) and fault type (e.g., interface faults, logic faults).

9.1.2 Unit Testing, Integration, System and Regression Testing


Unit Testing:

Unit testing focuses on the building blocks of the software system, that is, objects
and subsystems. The specific candidates for unit testing are chosen from the object
model and the system decomposition. In principle, all the objects developed during the
development process should be tested, which is often not feasible because of time and
budget constraints. The minimal set of objects to be tested should be the participating
objects in the use cases. Subsystems should be tested after each of the objects and classes
within that subsystem have been tested individually. Unit testing focuses verification effort on the smallest unit of software design—the software component or module. The unit test is white-box oriented. In unit testing, the following are tested:

1. The module interface is tested to ensure that information properly flows into and
out of the program unit under test.

2. The local data structure is examined to ensure that data stored temporarily
maintains its integrity.

3. Boundary conditions are tested to ensure that the module operates properly at
boundaries established to limit or restrict processing.

4. All independent paths through the control structure are exercised to ensure that
all statements in a module have been executed at least once.

5. And finally, all error handling paths are tested
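As a minimal sketch of points 3 and 5 above, here is what such a unit test might look like in JUnit 4 (the Account class and its withdraw() behavior are invented for this example, not taken from the text):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class AccountTest {
        // Point 3: exercise the boundary established to restrict processing.
        @Test
        public void withdrawingTheExactBalanceLeavesZero() {
            Account account = new Account(100.0);
            account.withdraw(100.0);
            assertEquals(0.0, account.getBalance(), 1e-9);
        }

        // Point 5: exercise an error-handling path.
        @Test(expected = IllegalArgumentException.class)
        public void withdrawingMoreThanTheBalanceIsRejected() {
            new Account(100.0).withdraw(100.01);
        }
    }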


The most important unit testing techniques are discussed below:

i. Equivalence testing

This black box testing technique minimizes the number of test cases. The possible
inputs are partitioned into equivalence classes, and a test case is selected for each class.
The assumption of equivalence testing is that systems usually behave in similar ways
for all members of a class. Only one member of an equivalence class needs to be tested
in order to test the behavior associated with that class. Equivalence testing consists of two steps: identification of the equivalence classes and selection of the test inputs. The following criteria are used in determining the equivalence classes:

• Coverage (Every possible input belongs to one of the equivalence classes)


• Disjointedness (No input belongs to more than one equivalence class)

• Representation (If the execution demonstrates an erroneous state when a


particular member of a equivalence class is used as input, then the same
erroneous state can be detected by using any other member of the class as input)


For each equivalence class, at least two pieces of data are selected: a typical input, which
exercises the common case, and an invalid input, which exercises the exception
handling capabilities of the component. After all equivalence classes have been
identified, a test input for each class has to be identified that covers the equivalence
class. If not all the elements of the equivalence class are covered by the test input, the
equivalence class must be split into smaller equivalence classes, and test inputs must be
identified for each of the new classes.

A method that returns the number of days in a month, given the month and year
is considered as example here. The month and year are specified as integers. By
convention 1 represents the month of January, 2 the month of February, and so on. The
range of valid inputs for the year is 0 to maxInt.

Figure 4.16: Interface for a method computing the number of days in a given month.

From the code it is understood that ‘getNumDaysInMonth()’ is that particular


method that takes two parameters, a month and a year, both specified as integers. Three
equivalence classes can be found out for the month parameter: months with 31 days
(i.e., 1, 3, 5, 7, 8, 10, 12), months with 30 days (i.e., 4, 6, 9, 11), and February, which can
have 28 or 29 days. Nonpositive integers and integers larger than 12 are invalid values
for the month parameter. Also, negative integers are invalid values for the year. First, one valid value for each parameter and equivalence class is selected (e.g., February, June, July, 1901, and 1904). Given that the return value of the getNumDaysInMonth() method depends on both parameters, these values are combined to test their interaction. This results in the six equivalence classes displayed in the following table.

Equivalence class                          Value for month input   Value for year input
Months with 31 days, non-leap years        7 (July)                1901
Months with 31 days, leap years            7 (July)                1904
Months with 30 days, non-leap years        6 (June)                1901
Months with 30 days, leap years            6 (June)                1904
Month with 28 or 29 days, non-leap year    2 (February)            1901
Month with 28 or 29 days, leap year        2 (February)            1904
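A sketch of how these six test cases might be written in JUnit 4 follows; the class hosting getNumDaysInMonth() is assumed to be called CalendarUtil here, since the text does not name it.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class GetNumDaysInMonthTest {
        // One representative input per equivalence class from the table.
        @Test public void monthWith31Days_nonLeapYear() {
            assertEquals(31, CalendarUtil.getNumDaysInMonth(7, 1901));
        }
        @Test public void monthWith31Days_leapYear() {
            assertEquals(31, CalendarUtil.getNumDaysInMonth(7, 1904));
        }
        @Test public void monthWith30Days_nonLeapYear() {
            assertEquals(30, CalendarUtil.getNumDaysInMonth(6, 1901));
        }
        @Test public void monthWith30Days_leapYear() {
            assertEquals(30, CalendarUtil.getNumDaysInMonth(6, 1904));
        }
        @Test public void february_nonLeapYear() {
            assertEquals(28, CalendarUtil.getNumDaysInMonth(2, 1901));
        }
        @Test public void february_leapYear() {
            assertEquals(29, CalendarUtil.getNumDaysInMonth(2, 1904));
        }
    }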

ii. Boundary testing

This special case of equivalence testing focuses on the conditions at the boundary
of the equivalence classes. Rather than selecting any element in the equivalence class,
boundary testing requires that the elements be selected from the “edges” of the
equivalence class. In the example discussed in equivalence testing, the month of
February presents several boundary cases. In general, years that are multiples of 4 are
leap years. Years that are multiples of 100, however, are not leap years, unless they are
also multiple of 400. For example, 2000 was a leap year, whereas 1900 was not. Both
year 1900 and 2000 are good boundary cases that should be tested. Other boundary
cases include the months 0 and 13. A disadvantage of equivalence class and boundary
testing is that these techniques do not explore combinations of test input data. This
problem is addressed by cause-effect testing, which establishes logical relationships between inputs and outputs or inputs and transformations. The inputs are called causes; the outputs or transformations are called effects.
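The boundary cases named above translate into tests such as the following (the same assumed CalendarUtil class as before; it is also assumed, as in the path-testing discussion below, that invalid inputs are rejected with an exception):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class BoundaryTest {
        // Century years sit exactly on the boundary of the leap-year rules.
        @Test public void yearDivisibleBy400IsLeap() {
            assertEquals(29, CalendarUtil.getNumDaysInMonth(2, 2000));
        }
        @Test public void yearDivisibleBy100ButNot400IsNotLeap() {
            assertEquals(28, CalendarUtil.getNumDaysInMonth(2, 1900));
        }
        // Months 0 and 13 lie just outside the valid range.
        @Test(expected = IllegalArgumentException.class)
        public void monthZeroIsRejected() {
            CalendarUtil.getNumDaysInMonth(0, 1901);
        }
        @Test(expected = IllegalArgumentException.class)
        public void monthThirteenIsRejected() {
            CalendarUtil.getNumDaysInMonth(13, 1901);
        }
    }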

iii. Path testing

This whitebox testing technique identifies faults in the implementation of the component. The assumption behind path testing is that, by exercising all possible paths through the code at least once, most faults will trigger failures.

The starting point for path testing is the flow graph. A flow graph consists of
nodes representing executable blocks and associations representing flow of control. A
flow graph is constructed from the code of a component by mapping decision
statements (e.g., if statements, while loops) to nodes. Statements between each decision (e.g., the then block and the else block) are mapped to other nodes. The following figures
depict the example faulty implementation of the getNumDaysInMonth() method and
the equivalent flow graph as a UML activity diagram. In the activity diagram, decisions

are modeled with UML branches, blocks with UML action states, and control flow with
UML transitions.

Figure 4.17: An example of a (faulty) implementation of the getNumDaysInMonth() method (Java)
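The original figure is not reproduced in these notes. The Java sketch below is reconstructed from the surrounding description (the hosting class name CalendarUtil is assumed, matching the test sketches elsewhere in this section) and keeps the two faults the text discusses: months with 31 days wrongly return 32, and century years such as 1900 are treated as leap years.

    class CalendarUtil {
        public static int getNumDaysInMonth(int month, int year) {
            if (year < 1) {
                throw new IllegalArgumentException("Invalid year");    // {throw1}
            }
            int numDays;
            if (month == 1 || month == 3 || month == 5 || month == 7
                    || month == 8 || month == 10 || month == 12) {
                numDays = 32;                          // fault: should be 31
            } else if (month == 4 || month == 6 || month == 9 || month == 11) {
                numDays = 30;
            } else if (month == 2) {
                numDays = (year % 4 == 0) ? 29 : 28;   // omission: 1900 mishandled
            } else {
                throw new IllegalArgumentException("Invalid month");   // {throw2}
            }
            return numDays;
        }
    }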


Figure 4.18: Equivalent flow graph for the (faulty) implementation of the
getNumDaysInMonth() method of the above implementation (UML activity diagram)

Complete path testing is done by examining the condition associated with each
branch point and selecting an input for the true branch and another input for the false
branch. For example, examining the first branch point in the UML activity diagram, two
inputs are selected: ‘year=0’ (such that year<1 is true) and ‘year=1901’ (such that year<1 is false). The process is then repeated for the second branch and the inputs such as
‘month=1’ and ‘month=2’ are selected. The input (year=0, month=1) produces the path
{throw1}. The input (year=1901, month=1) produces a second complete path {n=32
return}, which uncovers one of the faults in the getNumDaysInMonth() method. The
following table depicts the test cases and equivalent paths generated by repeating the
above process for each node.


Test case Path

(year=0, month=1) {throw1}

(year=1901, month=1) {n=32 return}

(year=1901, month=2) {n=28 return}

(year=1904, month=2) {n=29 return}

(year=1901, month=4) {n=30 return}

(year=1901, month=0) {throw2}

Using graph theory, it can be shown that the minimum number of tests necessary to
cover all edges is equal to the number of independent paths through the flow graph.
This is defined as the cyclomatic complexity CC of the flow graph, which is

CC = number of edges − number of nodes + 2

where the number of nodes is the number of branches and action states, and the
number of edges is the number of transitions in the activity diagram. The cyclomatic
complexity of the getNumDaysInMonth() method is 6, which is also the number of test
cases found (shown in the above table). Path testing and other whitebox methods can
detect only faults resulting from exercising a path in the program, such as the faulty
numDays=32 statement. Whitebox testing methods cannot detect omissions, such as the
failure to handle the non-leap year 1900. Path testing is also heavily dependent on the
control structure of the program. In any case, no testing method short of exhaustive
testing can guarantee the discovery of all faults.
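As a worked illustration of the formula (with assumed counts, since the flow graph of Figure 4.18 is not reproduced here): a flow graph with 16 transitions (edges) and 12 branches and action states (nodes) yields CC = 16 − 12 + 2 = 6, which matches the six test cases in the table above. The exact edge and node counts depend on the flow graph actually drawn.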

iv. State-based testing

This testing technique was recently developed for object-oriented systems. State-
based testing compares the resulting state of the system with the expected state. In the
context of a class, state-based testing consists of deriving test cases from the UML
statechart diagram for the class. For each state, a representative set of stimuli is derived
for each transition (similar to equivalence testing). The attributes of the class are then
instrumented and tested after each stimulus has been applied to ensure that the class
has reached the specified state.


This testing can be illustrated using an application called ‘2Bwatch’ (an explanation
of this application is given inside the box below). The statechart diagram of this
application is given below:

Figure 4.19: UML statechart diagram and resulting tests for 2Bwatch SetTime function
(only the first eight stimuli are shown)

2Bwatch is a watch with two buttons (hence the name). Setting the time on 2Bwatch requires
the actor 2BwatchOwner to first press both buttons simultaneously, after which 2Bwatch
enters the set time mode. In the set time mode, 2Bwatch blinks the number being changed
(e.g., the hours, minutes, seconds, day, month, or year). Initially, when the 2BwatchOwner
enters the set time mode, the hours blink. If the actor presses the first button, the next
number blinks (e.g., if the hours are blinking and the actor presses the first button, the hours
stop blinking and the minutes start blinking). If the actor presses the second button, the
blinking number is incremented by one unit. If the blinking number reaches the end of its
range, it is reset to the beginning of its range (e.g., if the minutes are blinking and their
current value is 59, their new value is set to 0 when the actor presses the second button).
The actor exits the set time mode by pressing both buttons simultaneously.

The statechart diagram specifies which stimuli change the watch from the high-level
state MeasureTime to the high-level state SetTime. It does not show the low-level states
of the watch when the date and time change, either because of actions of the user or
because of time passing. The test inputs shown in the figure were generated such that
each transition is traversed at least once. After each input, instrumentation code checks
if the watch is in the predicted state and reports a failure otherwise. Some transitions
(e.g., transition 3) are traversed several times, as it is necessary to put the watch back
into the SetTime state (e.g., to test transitions 4, 5, and 6). The test inputs for the
DeadBattery state were not generated (only the first eight stimuli are displayed).

State-based testing presents several difficulties. Because the state of a class is
encapsulated, test sequences must include sequences for putting classes in the desired
state before given transitions can be tested. It can become an effective testing technique
for object-oriented systems if proper automation is provided.
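As a sketch of how such instrumentation might look in practice, the following JUnit test drives a hypothetical Watch2B class through some of the transitions of Figure 4.19 and checks the resulting state after each stimulus. All class, method, and state names here are illustrative assumptions; only the two high-level states are modeled so the sketch stays self-contained:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Minimal stand-in for the instrumented class under test.
class Watch2B {
    private String state = "MeasureTime";
    public void pressBothButtons() {
        state = state.equals("MeasureTime") ? "SetTime" : "MeasureTime";
    }
    public void pressFirstButton() { /* next number blinks; high-level state unchanged */ }
    public String getState() { return state; }
}

public class Watch2BStateTest {
    @Test
    public void enterAndExitSetTimeMode() {
        Watch2B watch = new Watch2B();
        assertEquals("MeasureTime", watch.getState());
        watch.pressBothButtons();                  // stimulus: enter set time mode
        assertEquals("SetTime", watch.getState());
        watch.pressFirstButton();                  // stimulus: next number blinks
        assertEquals("SetTime", watch.getState());
        watch.pressBothButtons();                  // stimulus: exit set time mode
        assertEquals("MeasureTime", watch.getState());
    }
}
```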

v. Polymorphism testing

Polymorphism introduces a new challenge in testing because it enables messages
to be bound to different methods based on the class of the target. Although this enables
developers to reuse code across a larger number of classes, it also introduces more cases
to test.

A Strategy design pattern for a NetworkInterface is considered as an example here. The
strategy pattern is applied for encapsulating multiple implementations of a
NetworkInterface. The LocationManager, implementing a specific policy, configures
NetworkConnection with a concrete NetworkInterface (i.e., the mechanism) based on
the current location. The Application uses the NetworkConnection independently of
concrete NetworkInterfaces. The strategy design pattern uses polymorphism to shield
the context (i.e., the NetworkConnection class) from the concrete strategy (i.e., the
Ethernet, WaveLAN, and UMTS classes). For example, the NetworkConnection.send()
method calls the NetworkInterface.send() method to send bytes across the current
NetworkInterface, regardless of the actual concrete strategy. This means that, at run
time, the NetworkInterface.send() method invocation can be bound to one of three
methods: Ethernet.send(), WaveLAN.send(), or UMTS.send().


Figure 4.20: A strategy design pattern for encapsulating multiple implementations of a NetworkInterface (UML class diagram)


Figure 4.21: Java source code for the NetworkConnection.send() message (left) and
equivalent Java source code without polymorphism (right).
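Since the source code of Figure 4.21 is not reproduced in these notes, the following is a hedged sketch of what the two versions might look like, assuming (as the text describes) that Ethernet, WaveLAN, and UMTS are the concrete implementations of a NetworkInterface strategy interface:

```java
// Assumed strategy interface and concrete strategies (bodies elided).
interface NetworkInterface { void send(byte[] msg); }
class Ethernet implements NetworkInterface { public void send(byte[] msg) { /* ... */ } }
class WaveLAN  implements NetworkInterface { public void send(byte[] msg) { /* ... */ } }
class UMTS     implements NetworkInterface { public void send(byte[] msg) { /* ... */ } }

public class NetworkConnection {
    private NetworkInterface nif; // current concrete strategy

    // With polymorphism: one statement, three possible dynamic bindings.
    public void send(byte[] msg) {
        nif.send(msg); // bound at run time to Ethernet, WaveLAN, or UMTS
    }

    // Without polymorphism: the expansion used for path testing, where each
    // possible binding becomes an explicit branch in the flow graph.
    public void sendExpanded(byte[] msg) {
        if (nif instanceof Ethernet) {
            ((Ethernet) nif).send(msg);
        } else if (nif instanceof WaveLAN) {
            ((WaveLAN) nif).send(msg);
        } else if (nif instanceof UMTS) {
            ((UMTS) nif).send(msg);
        }
    }
}
```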

When applying the path testing technique to an operation that uses
polymorphism, all dynamic bindings should be considered, one for each message that
could be sent. In NetworkConnection.send() (see the Java source code in Figure 4.21), the
NetworkInterface.send() operation is invoked, which can be bound to either the
Ethernet.send(), WaveLAN.send(), or UMTS.send() methods, depending on the
class of the nif object. To deal with this situation explicitly, the original source code is
expanded by replacing each invocation of NetworkInterface.send() with a nested if-else
statement that tests for all subclasses of NetworkInterface. Once the source code is
expanded, the flow graph is extracted and test cases are generated covering all paths.
When many interfaces and abstract classes are involved, generating the flow graph for a
method of medium complexity can result in an explosion of paths. The equivalent flow
graph is shown below:

Figure 4.22: Equivalent flow graph for the expanded source code of the
NetworkConnection.send() method of the Java implementation (UML activity diagram)

Integration Testing:

Integration testing detects faults that have not been detected during unit testing
by focusing on small groups of components. Two or more components are integrated
and tested; when no new faults are revealed, additional components are added to the
group. Developing test stubs and drivers is time consuming, which is one reason
approaches such as Extreme Programming integrate and test continuously. The order in
which components are tested influences integration testing: a careful ordering of
components can reduce the resources needed for the overall integration test.


Integration testing strategies:

Several approaches have been devised to implement an integration testing
strategy: big-bang testing, bottom-up testing, top-down testing, and sandwich testing.

The big-bang testing strategy assumes that all components are first tested
individually and then tested together as a single system. The advantage is that no
additional test stubs or drivers are needed. Although this sounds simple, it is expensive:
if a test uncovers a failure, it is impossible to distinguish failures in the interface from
failures within a component, and it is difficult to pinpoint the specific component at fault.

The bottom-up testing strategy first tests each component of the bottom layer
individually, and then integrates them with components of the next layer up. When two
components are tested together, it is called a double test; three components form a
triple test, and four a quadruple test. This is repeated until all components
from all layers are combined. Test drivers are used to simulate the components of
higher layers that have not yet been integrated. Test stubs are not necessary during this
testing. The advantage of bottom-up testing is that interface faults can be more easily
found. The disadvantage is that it tests the most important subsystem, namely the
components of the user interface, last.

Figure 4.23: Bottom-up testing strategy


The above figure illustrates bottom-up testing. Here, the subsystems E, F, and G
are unit tested first; then the triple test B-E-F and the double test D-G are executed,
and so on.
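As an illustration of a test driver, the following sketch exercises a hypothetical bottom-layer Storage component directly, playing the role of the not-yet-integrated higher layer. The Storage class shown here is a trivial stand-in so the sketch is self-contained:

```java
import java.util.HashMap;
import java.util.Map;

// Bottom-layer component under test (a trivial stand-in for illustration).
class Storage {
    private final Map<String, String> data = new HashMap<>();
    public void put(String key, String value) { data.put(key, value); }
    public String get(String key) { return data.get(key); }
}

// The driver issues the calls a higher layer would normally make and checks
// the results; no test stubs are needed, only this driver.
public class StorageTestDriver {
    public static void main(String[] args) {
        Storage storage = new Storage();
        storage.put("ticketPrice:zone2", "1.25");
        String result = storage.get("ticketPrice:zone2");
        if ("1.25".equals(result)) {
            System.out.println("Driver test passed.");
        } else {
            System.err.println("FAILURE: expected '1.25', got " + result);
        }
    }
}
```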

The top-down testing strategy unit tests the components of the top layer first,
and then integrates the components of the next layer down. When all components of the
new layer have been tested together, the next layer is selected. Again, the tests
incrementally add one component at a time. This is repeated until all layers are
combined and involved in the test. Test stubs are used to simulate the components of
lower layers that have not yet been integrated. Test drivers are not required during this
testing process. The advantage of top-down testing is that it starts with user interface
components. The same set of tests, derived from the requirements, can be used in
testing the increasingly more complex set of subsystems. The disadvantage is that the
development of test stubs is time consuming and prone to error.

Figure 4.24: Top-down testing strategy

The above figure illustrates the top-down testing strategy. Here, the subsystem A is
unit tested, then the double tests A-B, A-C, and A-D are executed, then the quadruple
test A-B-C-D is executed, and so on.


The sandwich testing strategy combines the top-down and bottom-up strategies
to make use of the best of both. During this testing, the tester must be able to map the
subsystem decomposition into three layers, a target layer (“the meat”), a layer above the
target layer (“the top slice of bread”), and a layer below the target layer (“the bottom
slice of bread”). Top-down integration testing is done by testing the top layer
incrementally with the components of the target layer, and bottom-up testing is used
for testing the bottom layer incrementally with the components of the target layer. As a
result, test stubs and drivers need not be written for the top and bottom layers, because
they use the actual components from the target layer. The problem with this testing is
that it does not thoroughly test the individual components of the target layer before
integration. For example, the sandwich test shown below does not unit test component
C of the target layer.

Figure 4.25: Sandwich testing strategy (UML activity diagram)


Figure 4.26: Modified sandwich testing strategy (UML activity diagram)

The modified sandwich testing strategy tests the three layers individually before
combining them in incremental tests with one another. The individual layer tests consist
of a group of three tests:

• a top layer test with stubs for the target layer
• a target layer test with drivers and stubs replacing the top and bottom layers
• a bottom layer test with a driver for the target layer.

The combined layer tests consist of two tests:

• The top layer accessing the target layer. This test can reuse the target layer tests
from the individual layer tests, replacing the drivers with components from the top layer.
• The bottom layer accessed by the target layer. This test can reuse the target layer
tests from the individual layer tests, replacing the stubs with components from
the bottom layer.

The advantage of modified sandwich testing is that many testing activities can be
performed in parallel, as indicated by the UML activity diagrams. The disadvantage is
the need for additional test stubs and drivers. However, modified sandwich testing leads
to a significantly shorter overall testing time than top-down or bottom-up testing.

System testing:

Once components have been integrated, system testing ensures that the complete
system complies with the functional and nonfunctional requirements. System testing is
a blackbox technique: test cases are derived from the use case model. Several system
testing activities are performed which are listed below:

• Functional testing
• Performance testing
• Pilot testing
• Acceptance testing
• Installation testing

Functional testing:

Functional testing, also called requirements testing, finds differences between the
functional requirements and the system. The goal of the tester is to select those tests that
are relevant to the user and have a high probability of uncovering a failure. Functional
testing is different from usability testing: functional testing finds differences between
the use case model and the observed system behavior, whereas usability testing finds
differences between the use case model and the user's expectation of the system.

To identify functional tests, the use case model is inspected and use case
instances that are likely to cause failures are identified. Test cases identified should
exercise both common and exceptional use cases. For example, the use case model for a
‘subway ticket distributor’ (shown by the UML use case diagram) is considered. Here
the common functionality is ‘PurchaseTicket’ use case, describing the steps necessary
for a passenger to successfully purchase a ticket and the various exceptional conditions
are ‘TimeOut’, Cancel, OutOfOrder, and NoChange use cases. The exceptional
conditions result from the state of the distributor or actions by the Passenger.


Figure 4.27: An example of use case model for a subway ticket distributor (UML use
case diagram)

The following is the PurchaseTicket use case which describes the normal interaction
between the Passenger actor and the Distributor.

Use case name PurchaseTicket

Entry condition The Passenger is standing in front of the ticket Distributor.
The Passenger has sufficient money to purchase a ticket.
Flow of events 1. The Passenger selects the number of zones to be traveled. If the
Passenger presses multiple zone buttons, only the last button pressed is
considered by the Distributor.
2. The Distributor displays the amount due.
3. The Passenger inserts money.
4. If the Passenger selects a new zone before inserting sufficient money,
the Distributor returns all the coins and bills inserted by the Passenger.
5. If the Passenger inserted more money than the amount due, the
Distributor returns excess change.
6. The Distributor issues the ticket.
7. The Passenger picks up the change and the ticket.

Exit condition The Passenger has the selected ticket.

Figure 4.28: An example of use case from the ticket distributor use case model
PurchaseTicket.


The following are the three features of the Distributor that are likely to fail and should
be tested:

• The Passenger may press multiple zone buttons before inserting money, in which
case the Distributor should display the amount of the last zone.
• The Passenger may select another zone button after beginning to insert money, in
which case the Distributor should return all money inserted by the Passenger.
• The Passenger may insert more money than needed, in which case the
Distributor should return the correct change.

The following is the test case PurchaseTicket CommonCase, which exercises the above
three features. The flow of events describes both the input to the system (stimuli that
the Passenger sends to the Distributor) and desired outputs (correct responses from the
Distributor).

Test case name PurchaseTicket_CommonCase


Entry condition The Passenger is standing in front of the ticket Distributor.
The Passenger has two $5 bills and three dimes.

Flow of events 1. The Passenger presses in succession the zone buttons 2, 4, 1, and
2.
2. The Distributor should display in succession $1.25, $2.25, $0.75, and $1.25.
3. The Passenger inserts a $5 bill.
4. The Distributor returns three $1 bills and three quarters and issues a 2-zone ticket.
5. The Passenger repeats steps 1-4 using his second $5 bill.
6. The Passenger repeats steps 1-3 using four quarters and three
dimes. The Distributor issues a 2-zone ticket and returns a nickel.
7. The Passenger selects zone 1 and inserts a dollar bill. The
Distributor issues a 1-zone ticket and returns a quarter.
8. The Passenger selects zone 4 and inserts two $1 bills and a
quarter. The Distributor issues a 4-zone ticket.
9. The Passenger selects zone 4. The Distributor displays $2.25. The
Passenger inserts a $1 bill and a nickel, and selects zone 2. The
Distributor returns the $1 bill and the nickel and displays $1.25.

Exit condition The Passenger has three 2-zone tickets, one 1-zone ticket, and one 4-
zone ticket.

Figure 4.29: An example of test case derived from the PurchaseTicket use case.

Similar test cases can also be derived for the exceptional use cases NoChange,
OutOfOrder, TimeOut, and Cancel. Test cases such as PurchaseTicket_CommonCase,
are derived for all use cases, including use cases representing exceptional behavior. Test
cases are associated with the use cases from which they are derived, making it easier to
update the test cases when use cases are modified.
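A fragment of PurchaseTicket_CommonCase could be automated along the following lines, assuming a testable Distributor facade. All class and method names are hypothetical; a minimal stand-in is included only so the sketch is self-contained:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Minimal stand-in for the system under test; a real functional test would
// target the actual Distributor. The zone 3 fare is an illustrative guess.
class Distributor {
    private static final double[] FARES = {0.0, 0.75, 1.25, 1.75, 2.25};
    private int selectedZone;
    public void selectZone(int zone) { selectedZone = zone; } // last button wins
    public double getAmountDue() { return FARES[selectedZone]; }
}

public class PurchaseTicketTest {
    @Test
    public void lastZoneButtonDeterminesAmountDue() {
        Distributor distributor = new Distributor();
        distributor.selectZone(2);
        distributor.selectZone(4);
        distributor.selectZone(1);
        distributor.selectZone(2);
        // Per the use case, only the last button pressed counts: 2 zones -> $1.25.
        assertEquals(1.25, distributor.getAmountDue(), 0.001);
    }
}
```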

Performance testing:


Performance testing finds differences between the design goals selected during
system design and the system. Because the design goals are derived from the
nonfunctional requirements, the test cases can be derived from the SDD (System Design
Document) or from the RAD (Requirements Analysis Document). The following tests are
performed during performance testing:

• Stress testing checks if the system can respond to many simultaneous requests.
• Volume testing attempts to find faults associated with large amounts of data, such
as static limits imposed by the data structure, or high-complexity algorithms, or
high disk fragmentation.
• Security testing attempts to find security faults in the system. Usually this test is
accomplished by “tiger teams” who attempt to break into the system, using their
experience and knowledge of typical security flaws.
• Timing testing attempts to find behaviors that violate timing constraints described
by the nonfunctional requirements.
• Recovery testing evaluates the ability of the system to recover from erroneous
states, such as the unavailability of resources, a hardware failure, or a network
failure.
After all the functional and performance tests have been performed, and no failures
have been detected during these tests, the system is said to be validated.
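As a minimal sketch of a stress test, the following driver fires many simultaneous requests at a hypothetical TicketService and fails if they do not complete within a time budget. The service class and its method are assumptions, with a trivial stand-in included so the sketch compiles:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Trivial stand-in; a real stress test would target the deployed system.
class TicketService {
    private final AtomicInteger issued = new AtomicInteger();
    public void issueTicket(int zones) { issued.incrementAndGet(); }
}

public class StressTestDriver {
    public static void main(String[] args) throws InterruptedException {
        final TicketService service = new TicketService();
        ExecutorService pool = Executors.newFixedThreadPool(100);
        for (int i = 0; i < 10_000; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    service.issueTicket(2); // many simultaneous 2-zone requests
                }
            });
        }
        pool.shutdown();
        // Fail if the system cannot service the load within the time budget.
        if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
            System.err.println("FAILURE: requests did not complete in time");
        } else {
            System.out.println("Stress test completed.");
        }
    }
}
```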

Pilot testing:

During the pilot test, also called the field test, the system is installed and used by
a selected set of users. Users exercise the system as if it had been permanently installed.
No explicit guidelines or test scenarios are given to the users. A group of people is
invited to use the system for a limited time and to give their feedback to the developers.
This test is useful for systems without a specific set of requirements.

An alpha test is a pilot test with users exercising the system in the development
environment. In a beta test, the acceptance test is performed by a limited number of end
users in the target environment. The Internet has made the distribution of software very
easy. As a result, beta tests are more and more common.

Acceptance testing:

There are three ways the client evaluates a system during acceptance testing. In a
benchmark test, the client prepares a set of test cases that represent typical conditions
under which the system should operate. Benchmark tests can be performed with actual
users or by a special test team exercising the system functions.


Another kind of system acceptance testing is used in reengineering projects,
when the new system replaces an existing system. In competitor testing, the new system
is tested against an existing system or competitor product. In shadow testing, a form of
comparison testing, the new and the legacy systems are run in parallel and their
outputs are compared.

After acceptance testing, the client reports to the project manager which
requirements are not satisfied. If requirements must be changed, the changes should be
reported in the minutes to the client acceptance review and should form the basis for
another iteration of the software life-cycle process. If the customer is satisfied, the
system is accepted, possibly contingent on a list of changes recorded in the minutes of
the acceptance test.

Installation testing:

After the system is accepted, it is installed in the target environment. The desired
outcome of the installation test is that the installed system correctly addresses all
requirements. In most cases, the installation test repeats the test cases executed during
functional and performance testing in the target environment. Once the customer is
satisfied with the results of the installation test, system testing is complete, and the
system is formally delivered and ready for operation.

Regression Testing:

Object-oriented development is an iterative process. When modifying a
component, developers design new unit tests exercising the new feature under
consideration. They may also retest the component by updating and rerunning previous
unit tests. The modification can introduce side effects or reveal previously hidden faults
in other components. A system that fails after the modification of a component is said to
regress. Hence, integration tests that are rerun on the system to produce such failures
are called regression tests.

The most robust and straightforward technique for regression testing is to
accumulate all integration tests and rerun them whenever new components are
integrated into the system. This requires developers to keep all tests up-to-date, to
evolve them as the subsystem interfaces change, and to add new integration tests as
new services or new subsystems are added. As regression testing can become time
consuming, different techniques have been developed for selecting specific regression
tests. Such techniques include:


• Retest dependent components: testing the components that depend on the
modified component, because they are the most likely to fail.
• Retest risky use cases: focusing first on the use cases that present the highest risk.
By doing so, developers can minimize the likelihood of catastrophic failures.
• Retest frequent use cases: focusing on the use cases that are most often used by the
users.
In all cases, regression testing leads to running many tests many times. Hence,
regression testing is feasible only when an automated testing infrastructure is in place.
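The accumulate-and-rerun approach is commonly automated with a test suite. The following JUnit 4 sketch collects tests into one suite that can be rerun after every change; the member classes are the illustrative test sketches from earlier in this chapter:

```java
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// An accumulated regression suite: tests are collected here and rerun after
// each change, so regressions are detected automatically.
@RunWith(Suite.class)
@Suite.SuiteClasses({
    PurchaseTicketTest.class,
    Watch2BStateTest.class,
    MonthBoundaryTest.class
})
public class RegressionSuite {
    // Intentionally empty: the annotations define the suite.
}
```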

User Acceptance Testing:

Refer to acceptance testing in system testing section.

Product testing:

Product testing is a type of usability testing. Usability testing is a technique for
ensuring that the intended users of a system can carry out the intended tasks efficiently,
effectively, and satisfactorily. During product testing, the end users are presented with a
functional version of the system. This test can be conducted after most of the system is
developed. It also requires that the system be easily modifiable, so that the results of
the usability test can be taken into account. The basic elements of this test include:

 development of test objectives
 a representative sample of end users
 the actual or simulated work environment
 controlled, extensive interrogation and probing of the users by the person performing the test
 collection and analysis of quantitative and qualitative results
 recommendations on how to improve the system
Typical objectives in a usability test address the comparison of two user interaction
styles, the main stumbling blocks, the identification of useful features for novice and
expert users, when help is needed, and what type of training information is required.

Maintenance


Once the product has passed its acceptance test, it is handed over to the client. The
product is installed and used for the purpose for which it was constructed. Any useful
product, however, is almost certain to undergo maintenance during the maintenance
phase, either to fix faults (corrective maintenance) or to extend the functionality of the
product (enhancement).

6.1.1 Types of Maintenance


The following are the types of maintenance:

i. Corrective maintenance: This involves correcting faults, whether specification faults,
design faults, coding faults, documentation faults, or any other types of faults. A study
of 69 organizations showed that maintenance programmers spend only 17.5 percent of
their time on corrective maintenance.

ii. Perfective maintenance: Here, changes are made to the code to improve the
effectiveness of the product. For instance, the client may wish for additional functionality
or request that the product be modified so that it runs faster. Improving the
maintainability of a product is another example of perfective maintenance. The study
showed that 60.5 percent of maintenance time was spent on this type of maintenance.

iii. Adaptive maintenance: Here, changes are made to the product to react to changes in
the environment in which the product operates. For example, a product almost certainly
has to be modified if it is ported to a new compiler, operating system, or hardware.
Adaptive maintenance is not requested by a client; instead, it is externally imposed on
the client. The study showed that 18 percent of software maintenance was adaptive in
nature.

The remaining 4 percent of maintenance time was devoted to other types of
maintenance that did not fall into the above-mentioned three categories. Preventive
maintenance is another such type, which concerns activities aimed at increasing the
system's maintainability, such as updating documentation, adding comments, and
improving the modular structure of the system.

6.1.2 Maintenance Log and Defect Reports

The maintenance log keeps track of the cost of operations performed, scheduled
maintenance, and unscheduled repairs. In short, it is the storage of activities and daily
updates involved in the maintenance phase.
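As an illustration of the information such records might capture, the following sketch outlines a fault/defect report entry; the fields shown are illustrative, not a prescribed format:

```java
import java.util.Date;

// A sketch of a fault/defect report: it must hold enough information for a
// maintenance programmer to recreate the reported failure.
public class FaultReport {
    private String reportId;
    private Date dateReported;
    private String reportedBy;
    private String productVersion;
    private String stepsToReproduce;   // detailed enough to recreate the failure
    private String observedBehavior;
    private String expectedBehavior;
    private String suggestedWorkaround; // way to bypass the fault until fixed
    private String status;              // e.g., OPEN, INVESTIGATING, FIXED
    // Getters, setters, and validation are omitted for brevity.
}
```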


The first thing needed when maintaining a product is a mechanism for changing
the product. With regard to corrective maintenance, that is, removing residual faults: if
the product appears to be functioning incorrectly, then a fault/defect report should be
filed by the user. This must include enough information to enable the maintenance
programmer to recreate the problem, which usually will be some sort of software failure.

Ideally, every fault reported by a user should be fixed immediately. In practice,
programming organizations usually are understaffed, with a backlog of work, both
development and maintenance. If the fault is critical, such as when a payroll product
crashes the day before payday or overpays employees, immediate corrective action
must be taken. Otherwise, each fault report must at least receive an immediate
preliminary investigation.

The maintenance programmer should first consult the fault report file. This
contains all reported faults that have not yet been fixed, together with suggestions for
working around them, that is, ways for the user to bypass the portion of the product
that apparently is responsible for the failure, until such time as the fault can be fixed. If
the fault has been reported previously, any information in the fault report file should be
given to the user. But if what the user reports appears to be a new fault, then the
maintenance programmer should study the problem and attempt to find the cause and
a way to fix it. In addition, an attempt should be made to find a way to work around the
problem, because it may take 6 or 9 months before someone can be assigned to make
the necessary changes to the software. In light of the serious shortage of
programmers, and in particular programmers good enough to perform maintenance,
suggesting a way to live with the fault until it can be solved often is the only way to
deal with fault reports that are not true emergencies.

The maintenance programmer's conclusions then should be added to the fault
report file, together with any supporting documentation, such as listings, designs, and
manuals used to arrive at those conclusions. The manager in charge of maintenance
should consult the file regularly, setting priorities for the various fixes. The file also
should contain the client's requests for perfective and adaptive maintenance. The next
modification made to the product then will be the one with the highest priority.

When copies of a product have been distributed to a variety of sites, copies of
fault reports must be circulated to all users of the product, together with an estimate of
when each fault can be fixed. Then, if the same failure occurs at another site, the user
can consult the relevant fault report to determine if it is possible to work around the
fault and when it will be fixed. It would be preferable to fix every fault immediately and
then distribute a new version of the product to all sites, of course. However, organizations
often prefer to accumulate noncritical maintenance tasks and then implement the
changes as a group.

6.1.3 Reverse Engineering and Reengineering

An application has served the business needs of a company for 10 or 15 years.
During that time it has been corrected, adapted, and enhanced many times. People
approached this work with the best intentions, but good software engineering practices
were always shunted to the side (under the press of other matters). Now the application is
unstable. It still works, but every time a change is attempted, unexpected and serious
side effects occur. Yet the application must continue to evolve. The solution is software
reengineering, a discipline that has been spawned by software maintenance.

Reengineering is concerned with adapting existing systems to changes in their
external environment and making enhancements requested by users. Only about 20
percent of all maintenance work is spent "fixing mistakes"; the remaining 80 percent is
spent in reengineering for future use.

A Software Reengineering Process Model:

Reengineering takes time; it costs significant amounts of money; and it absorbs
resources that might otherwise be occupied with immediate concerns. For all of these
reasons, reengineering is not accomplished in a few months or even a few years.
Reengineering of information systems is an activity that will absorb information
technology resources for many years. That is why every organization needs a pragmatic
strategy for software reengineering. A software reengineering model that defines six
activities is shown in the following figure:


Figure 6.1: A software reengineering model

The reengineering paradigm shown in the figure is a cyclical model. This means
that each of the activities presented as a part of the paradigm may be revisited. For any
particular cycle, the process can terminate after any one of these activities. Each activity
is described below.
Inventory analysis:
Every software organization should have an inventory of all applications. The
inventory can be nothing more than a spreadsheet model containing information that
provides a detailed description (e.g., size, age, business criticality) of every active
application. By sorting this information according to business criticality, longevity,
current maintainability, and other locally important criteria, candidates for
reengineering appear. Resources can then be allocated to candidate applications for
reengineering work.
Document restructuring:


Weak documentation is the trademark of many legacy systems. The following options
can be adopted:

• Creating documentation is far too time consuming. If the system works, we’ll
live with what we have.
• Documentation must be updated, but we have limited resources. We’ll use a
“document when touched” approach. (i.e., documenting only changed portions
of the system)
• The system is business critical and must be fully redocumented. (Even in this
case, an intelligent approach is to pare documentation to an essential minimum)
Each of these options is viable. A software organization must choose the one that is
most appropriate for each case.

Reverse Engineering:

The term reverse engineering has its origins in the hardware world. A company
disassembles a competitive hardware product in an effort to understand its competitor's
design and manufacturing "secrets." These secrets could be easily understood if the
competitor's design and manufacturing specifications were obtained. But these
documents are proprietary and unavailable to the company doing the reverse
engineering. In essence, successful reverse engineering derives one or more design and
manufacturing specifications for a product by examining actual specimens of the
product.
Reverse engineering for software is quite similar. In most cases, however, the
program to be reverse engineered is not a competitor's. Rather, it is the company's own
work (often done many years earlier). The "secrets" to be understood are obscure
because no specification was ever developed. Therefore, reverse engineering for
software is the process of analyzing a program in an effort to create a representation of
the program at a higher level of abstraction than source code. Reverse engineering is a
process of design recovery. Reverse engineering tools extract data, architectural, and
procedural design information from an existing program.
Code restructuring:
The most common type of reengineering is code restructuring. Some legacy
systems have relatively solid program architecture, but individual modules were coded
in a way that makes them difficult to understand, test, and maintain. In such cases, the
code within the suspect modules can be restructured.
To accomplish this activity, the source code is analyzed using a restructuring
tool. Violations of structured programming constructs are noted and code is then
restructured (this can be done automatically). The resultant restructured code is
reviewed and tested to ensure that no anomalies have been introduced. Internal code
documentation is updated.
Data restructuring:
A program with weak data architecture will be difficult to adapt and enhance. In
fact, for many applications, data architecture has more to do with the long-term
viability of a program than the source code itself.
Unlike code restructuring, which occurs at a relatively low level of abstraction,
data restructuring is a full-scale reengineering activity. In most cases, data restructuring
begins with a reverse engineering activity. The current data architecture is dissected and
necessary data models are defined. Data objects and attributes are identified, and
existing data structures are reviewed for quality.
When data structure is weak, the data are reengineered. Because data
architecture has a strong influence on program architecture and the algorithms that
populate it, changes to the data will invariably result in either architectural or code-
level changes.
Forward engineering:
In an ideal world, applications would be rebuilt using an automated
“reengineering engine.” The old program would be fed into the engine, analyzed,
restructured, and then regenerated in a form that exhibited the best aspects of software
quality. In the short term, it is unlikely that such an “engine” will appear, but CASE
vendors have introduced tools that provide a limited subset of these capabilities that
addresses specific application domains (e.g., applications that are implemented using a
specific database system). More important, these reengineering tools are becoming
increasingly more sophisticated.
Forward engineering, also called renovation or reclamation, not only recovers
design information from existing software, but uses this information to alter or
reconstitute the existing system in an effort to improve its overall quality. In most cases,
reengineered software reimplements the function of the existing system and also adds
new functions and/or improves overall performance.

10. Objective Questions:

1. The following is a maintenance activity:
a. fixing faults
b. generating code
c. extending the functionality of the system
d. both a and c


2. Which of the following is a maintenance type?
a. corrective
b. perfective
c. adaptive
d. all of the above
3. Maintenance Log keeps track of which of the following:
a. cost of operations performed
b. scheduled maintenance
c. unscheduled repairs
d. all of the above
4. Business process reengineering has no start or end—it is an evolutionary process.
a. True
b. False
5. How much of software maintenance work involves fixing errors?
a. 20 percent
b. 40 percent
c. 60 percent
d. 80 percent
6. Which of the following activities is not part of the software reengineering process
model?
a. forward engineering
b. inventory analysis
c. prototyping
d. reverse engineering
7. The software reengineering process model includes restructuring activities for which
of the following work items?
a. code
b. documentation
c. data
d. all of the above
8. Reverse engineering of data focuses on
a. database structures
b. internal data structures
c. both a and b
d. none of the above
9. Reverse engineering should precede the reengineering of any user interface.
a. True
b. False
10. Forward engineering is not necessary if an existing software product is producing
the correct output.
a. True
b. False
11. The cost benefits derived from reengineering are realized largely due to decreased
maintenance and support costs for the new software product.


a. True
b. False

Answers: 1. d 2. d 3. d 4. a 5. a 6. c 7. d 8. c 9. a 10. b 11. a

11. Subjective Questions:

1. Explain in detail the different types of maintenance.
Ans: Refer page no. 207-208

2. Write notes on reengineering.
Ans: Refer page no. 209-213

3. Explain in detail reverse engineering and forward engineering.
Ans: Refer page no. 211-213

12. Questions:

1. Write short note on Re-engineering. (May 2010) (10 marks)

Ans:
It is concerned about adapting existing systems to changes in their external
environment and making enhancements requested by users.

Software re-engineering model:

– Forward engineering
– Data restructuring
– Code restructuring
– Inventory analysis
– Document restructuring
– Reverse engineering

2. What are the different types of maintenance? Also explain the different steps involved
in creating a maintenance log. (Nov 2010) (10 marks)
Ans:
Types of maintenance:

• Corrective
• Perfective
• Adaptive
• Preventive

Maintenance log:

The maintenance log keeps track of the cost of operations performed, scheduled
maintenance, and unscheduled repairs. In short, it is the storage of activities and daily
updates involved in the maintenance phase.

3. Write short notes on Reverse and Re-engineering. (Nov 2010) (10 marks)
Ans:
Reengineering is concerned with adapting existing systems to changes in their external
environment and making enhancements requested by users. Only about 20 percent of
all maintenance work is spent "fixing mistakes"; the remaining 80 percent is spent in
reengineering for future use.
A software reengineering model that defines six activities as follows:
– Forward engineering
– Inventory analysis
– Data restructuring
– Document restructuring
– Code restructuring
– Reverse engineering

4. Explain various software testing strategies. (May 2010, Nov 2010) (10 marks)
Ans:
– Unit testing
– Regression testing (additional test cases for software functions)
– Integration testing (bottom-up, top-down, sandwich)
– Validation testing
o Alpha (tests by customers at the developer's site)
o Beta (tests by end users at their own sites)
– Acceptance testing
o Benchmark
o Pilot
o Parallel
– System testing
o Recovery
o Security
o Stress
o Performance


5. Explain objectives for testing. Also explain the following terms: (Nov 2010) (10 marks)
(i) System testing
(ii) Scalability
(iii) Regression
(iv) Black box testing.
Ans:
Objectives for testing:

Software testing is an investigation conducted to provide stakeholders with information
about the quality of the product or service under test. Software testing also provides an
objective, independent view of the software, allowing the business to appreciate and
understand the risks of software implementation.

(i) System testing:


Once components have been integrated, system testing ensures that the complete
system complies with the functional and nonfunctional requirements. System testing is
a blackbox technique: test cases are derived from the use case model. Several system
testing activities are performed which are listed below:
• Functional testing
• Performance testing
• Pilot testing
• Acceptance testing
• Installation testing
(ii) Scalability:
Scalability is the ability of a system, or process, to handle growing amounts of work in a
graceful manner or its ability to be enlarged to accommodate that growth.
(iii) Regression testing:
A system that fails after the modification of a component is said to regress. Hence,
integration tests that are rerun on the system to produce such failures are called
regression tests.
Different techniques for selecting specific regression tests:
• Reset dependent components
• Retest risky use cases
• Retest frequent use case

(iv) Black box testing


Black-box testing is a method of software testing that tests the functionality of an
application as opposed to its internal structures or workings.


Typical black-box test design techniques include:

 Decision table testing
 All-pairs testing
 State transition tables
 Equivalence partitioning
 Boundary value analysis

Object-Oriented Testing Methods

Fault-based testing

 The tester looks for plausible faults (i.e., aspects of the implementation of the system
that may result in defects). To determine whether these faults exist, test cases are
designed to exercise the design or code.

Class Testing and the Class Hierarchy

 Inheritance does not obviate the need for thorough testing of all derived classes. In fact,
it can actually complicate the testing process.

Scenario-Based Test Design

 Scenario-based testing concentrates on what the user does, not what the product does.
This means capturing the tasks (via use-cases) that the user has to perform, then
applying them and their variants as tests.

13. Learning Resources:

1. Bernd Bruegge and Allen H. Dutoit, "Object-Oriented Software Engineering", Second Edition, Pearson Education.

2. Roger S. Pressman, "Software Engineering: A Practitioner's Approach", Sixth Edition, Tata McGraw-Hill.


8 Web Engineering

1. Motivation:

2. Objective:

3. Learning Objective:
This module explores the area of Web Engineering.

1. Students will be able to understand the importance of Web Engineering for
web-based applications: attributes, analysis, design, and testing.
2. Students will be able to understand Security engineering and Service-oriented
software engineering.
3. Students will be able to understand the meaning of Test-driven development
along with Software engineering with aspects.
3. Prerequisite:
Activities involved in requirement, analysis and design phases of software life cycle

4. Syllabus:

Module | Content | Duration | Self Study Time

8.1 | For web-based applications: attributes, analysis, design and testing | 2 lectures | 2 hours
8.2 | Security engineering | 1 lecture | 1 hour
8.3 | Service oriented software engineering | 1 lecture | 1 hour
8.4 | Test driven development | 1 lecture | 1 hour
8.5 | Software engineering with aspects | 1 lecture | 1 hour

5. Learning Outcome:
 This topic gives the knowledge of developing web applications, security
engineering and development with aspects.

 Students will be able to learn about Service oriented software engineering.

 Students will be able to learn about Test driven development.

6. Weightage: Marks

7. Abbreviations:
(1) TDD – test-driven development

(2) AOP- aspect-oriented programming


8. Key Definitions:

9. Theory

In test-driven development (TDD), requirements for a software component serve as the
basis for the creation of a series of test cases that exercise the interface and attempt to
find errors in the data structures and functionality delivered by the component. TDD is
not really a new technology but rather a trend that emphasizes the design of test cases
before the creation of source code.

During TDD, code is developed in very small increments (one sub-function at a time),
and no code is written until a test exists to exercise it. Note that each
iteration results in one or more new tests that are added to a regression test suite that is
run with every change. In TDD, tests drive the detailed component design and the
resultant source code.
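A minimal TDD illustration, with hypothetical names: the test is written first and fails, then just enough production code is added to make it pass, after which the code is refactored and the test joins the regression suite:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Step 1: the test exists before any production code; at this point it
// fails (or does not even compile), which is expected in TDD.
public class FareCalculatorTest {
    @Test
    public void testTwoZoneFare() {
        assertEquals(1.25, new FareCalculator().fareFor(2), 0.001);
    }
}

// Step 2: write just enough production code to make the test pass, then
// refactor. Each new sub-function starts with another failing test.
class FareCalculator {
    double fareFor(int zones) {
        return zones * 0.625; // simplest implementation that passes the test
    }
}
```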

Security Engineering

Security engineering is a specialized field of engineering that focuses on
the security aspects in the design of systems that need to be able to deal robustly with
possible sources of disruption, ranging from natural disasters to malicious acts.


Security is required due to:
• Intrusion into computer system
• Initially there were private networks.
• Today the need to connect to outside users through virtual private networks
(VPNs) with help of third party vendor services and encryption.
• Public key cryptography
• Digital signatures
• SET
• Digital wallets

Phishing –
The user is misled to another site which is designed to look like the original site, and
important customer data is copied.

Phishing URLs can be of the following types:
• Domain name which is incorrect
http://www.banckname.com/
http://www.bankname.xtrasecuresite.com/
• Insecure end user- website is unknown
http://www.example.com/~user/www.bankname.com/
• Insecure machine- machine is unknown
http://www.example.com/bankname/login/
http://49320.0401/bankname/login/
• Free web hosting
http://www.bank.com.freespacesitename.com/

According to the Microsoft Developer Network, the patterns & practices of Security
Engineering consist of the following activities:

 Security Objectives
 Security Design Guidelines

 Security Modeling

 Security Architecture and Design Review

 Security Code Review

 Security Testing

 Security Tuning

 Security Deployment Review

These activities are designed to help meet security objectives in the software life cycle.

Service Oriented Software Engineering

Service-oriented software engineering (SOSE) is a software engineering methodology
which is focused on the development of software systems by composition of reusable
services (service-orientation) often provided by other service providers.
Since it involves composition, it shares many characteristics of component-based
software engineering, the composition of software systems from reusable components,
but it adds the ability to dynamically locate necessary services at run-time.

Aspect-Oriented Software Development


As modern computer-based systems become more sophisticated (and complex), certain
concerns—customer required properties or areas of technical interest—span the entire
architecture. Some concerns are high-level properties of a system (e.g., security, fault
tolerance). Other concerns affect functions (e.g., the application of business rules), while
others are systemic (e.g., task synchronization or memory management).
When concerns cut across multiple system functions, features, and information, they are
often referred to as crosscutting concerns. Aspectual requirements define those
crosscutting concerns that have an impact across the software architecture. Aspect-
oriented software development (AOSD), often referred to as aspect-oriented
programming (AOP), is a relatively new software engineering paradigm that provides a
process and methodological approach for defining, specifying, designing, and
constructing aspects.

The process is not yet mature, but it is likely that such a process will adopt
characteristics of both evolutionary and concurrent process models.
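As a sketch of how a crosscutting concern can be modularized, the following AspectJ fragment (AspectJ is an aspect-oriented extension of Java) weaves logging into every public method of an assumed 'banking' package, so no individual class needs its own logging code. The package name and pointcut are illustrative assumptions:

```java
// LoggingAspect.aj: a crosscutting logging concern, woven into the code by
// the AspectJ compiler rather than scattered across every class.
public aspect LoggingAspect {
    // Join points: execution of any public method in banking and subpackages.
    pointcut publicOps(): execution(public * banking..*.*(..));

    // Advice: behavior inserted before each matched join point.
    before(): publicOps() {
        System.out.println("Entering " + thisJoinPoint.getSignature());
    }
}
```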

Questions:


1. What is aspect-oriented programming?
2. Explain Security engineering.
3. Explain Service-oriented software engineering.
