
DESIGN PHASE

Chapter 4
Design Phase
Objective:

On completion of this chapter, you will be able to understand:

❖ Concepts in the design phase of Software Development Life Cycle

❖ Primary differences between analysis and design activities

❖ Transition from problem domain to solution domain

❖ Desirable characteristics of a good software design

❖ Concepts of modularization, abstraction, cohesion, coupling, functional independence, etc.

❖ Difference between function-oriented design and object-oriented design

Structure:

4.1 Design Phase - Introduction

4.2 Design Quality

4.3 Characteristics of good design

4.4 Design Concepts

4.4.1. Design Concepts - Abstraction

4.4.2. Design Concepts - Architecture

4.4.3. Design Concepts - Modularity

4.4.4. Design Concepts - Information Hiding

4.4.5. Design Concepts - Functional Independence


4.4.6. Design Concepts - Refactoring

4.4.7. Design Concepts - Refinement

4.5 Function-Oriented Design vs. Object-Oriented Design

4.6 Design Documentation

4.7 Summary

4.8 Self-Assessment questions


4.1 DESIGN PHASE

The next phase and a crucial one in the Software Development Life Cycle is
the design phase. A software design is a meaningful engineering
representation of some software product that is to be built. The aim of
design is to move from the problem domain to the solution domain and to
produce a model that will provide a seamless transition to the coding
phase. Once requirements are analyzed and found to be satisfactory, a
design model is created which can be easily implemented. The design
phase is the intermediate language between requirements and code where
one proceeds from the abstract to more concrete representations. This is the phase in which the customer's requirements are translated into a finished software product or system.

In the software engineering context, design focuses on four major areas of concern: data, architecture, interfaces and components. Each of the
elements of the analysis model that were covered in the previous chapter
provides information that is necessary to create the design models required
for a complete specification of design. Information obtained from the class-based models, flow models, and behavioral models serves as the basis for
component design. The classes and relationships defined by CRC index
cards and the detailed data content depicted by class attributes provide the
basis for the data design activity. The interface design describes how the
software communicates with systems that interoperate with it, and with
humans who use it. Usage scenarios and behavioral models provide much
of the information required for interface design.

The design activity begins with the set of requirements identified, reviewed
and approved in the requirements gathering and analysis phases. For each
requirement, a set of one or more design elements will be produced as a
result of interviews, workshops, and/or prototype efforts. Design elements
describe the desired software features in detail, and include functional
hierarchy diagrams, screen layout diagrams, business rules, business
process diagrams, pseudo-code, and entity-relationship diagram with a full
data dictionary. These design elements describe the software in sufficient
detail so that skilled programmers may develop the software with minimal
additional input. Design is the "technical kernel" of software
engineering and is applied regardless of the process model used.


The main difference between the analysis and design phases is that the output of analysis consists of smaller problems to solve, and the analysis remains largely the same even when carried out by different team members or groups. Design focuses on capabilities, and there can be multiple designs for the same problem depending on the environment in which the solution will be hosted, such as operating systems, web pages, mobile devices, or cloud-based systems.

It must be emphasized here that design is not coding and coding is not
design. The level of abstraction of the design model is higher than source
code. Software design being an iterative process, initially the design will be
at a high level of abstraction. As design iterations occur, subsequent
refinement leads to design representations at much lower levels of
abstraction.

4.2 DESIGN QUALITY

The importance of software design can be stated with a single word: quality. The design process is very important. From a practical standpoint, a builder would not attempt to build a house without an approved blueprint, thereby risking structural integrity and customer satisfaction. The approach to building software products is no different. The emphasis in design is on quality; this phase provides us with a representation of software that can be assessed for quality.

Design is the place where quality needs to be embedded in software engineering. This phase is where the customers' requirements are translated into a finished software application or system, and it serves as the foundation for all the software engineering and software support activities downstream. Without a proper design there is a high risk of developing unstable systems, with delays and increased costs following. The quality of the evolving design is assessed with a series of formal technical reviews.


In order to assess the quality of a design, some guidelines to keep in mind are:

❖ A design should be derived from information obtained during software requirements analysis.

❖ The design should be traceable to the analysis model. Because a single element of the design model often traces to multiple requirements, it is necessary to have a means for tracking how requirements have been satisfied by the design model.

❖ A design should be modular; software should be logically partitioned into elements or subsystems.

❖ A design should contain distinct representations of data, architecture, interfaces, and components.

❖ A design should lead to components that exhibit independent functional characteristics.

❖ A design should lead to interfaces that reduce the complexity of connections between components and with the external environment.

❖ The design process should not suffer from "tunnel vision." A good designer should consider alternative approaches, judging each based on the requirements of the problem and the resources available to do the job.

❖ A design should be represented using a notation that effectively communicates its meaning.

❖ The design should not reinvent the wheel. Systems are constructed using a set of design patterns, many of which have likely been encountered before. These patterns should always be chosen as an alternative to reinvention.

❖ The structure of the software design should closely mimic the structure of the problem domain.

❖ The design should exhibit uniformity and integration, i.e. appear as if one person has designed the entire thing.

❖ The design should be structured to be flexible and accommodate change.

4.3 CHARACTERISTICS OF A GOOD SOFTWARE DESIGN

The definition of a "good software design" varies depending on the application being designed. For example, when designing embedded applications for a spacecraft, the memory size used by a program is an important issue. Memory size is critical for embedded applications, and in this case one has to balance the weight of the chips, space, power consumption, cost and other constraints. Design comprehensibility may be sacrificed to achieve code compactness. Hence the criteria used to judge a given design solution vary depending on the application.

Some desirable characteristics that every good software design for general
application must possess are listed below:

❖ Correctness: A good design should correctly implement all the functionalities identified in the SRS document. A design has to be correct to be acceptable. The design must implement all of the explicit requirements contained in the analysis model, and it must accommodate all of the implicit requirements desired by the customer.

❖ Understandability: A good design is easily understandable. A design that is easy to understand is also easy to develop, maintain and change. The design must be a readable, understandable guide for those who generate code and for those who test and subsequently support the software.

❖ Efficiency: A design should be efficient. The software should perform its tasks within a user-acceptable time and not consume too much memory.

❖ Maintainability: Because of increases in the size and complexity of software products, software maintenance tasks have become increasingly difficult. Maintenance includes enhancing existing functions, modifying for hardware upgrades, and correcting code errors. Software maintenance cannot be a design afterthought; it should be possible for software maintainers to enhance the product without tearing down and rebuilding the majority of the code.

❖ Robustness (reliability): The software must be able to perform a required function under stated conditions for a specified period of time. The software must be able to operate under stress or tolerate unpredictable or invalid input.

❖ Reusability: Design features of a software element (or collection of software elements) must enhance its suitability for reuse. Reusability is the use of existing assets in some form within the software product development process. Reuse could be of code, subroutines, functions, modules, test suites, designs and documentation. The ability to reuse relies on the ability to build larger things from smaller parts, and on being able to identify commonalities among those parts.

❖ Compatibility: The software must be able to operate with other products that are designed for interoperability. For example, MS Office 2010 is backward-compatible with older versions of itself.

❖ Flexibility: New capabilities can be added to the software without major changes to the underlying architecture. The design must allow further features to be added with little or no modification to what already exists.

❖ Modularity: The resulting software must comprise well-defined, independent components, which leads to better maintainability. The components can then be implemented and tested in isolation before being integrated to form the desired software system.

❖ Security: The software must be able to withstand external threats and influences that can affect the organization's business interests.

❖ Portability: In an age of ubiquitous computing and fast-moving technology, implementation in multiple environments over a product's total lifetime is an emerging need. As technology changes, the design should allow the same software to be used in different environments. The prerequisite for portability is a generalized abstraction between the application logic and the system interfaces. When software with the same functionality is produced for several computing platforms, portability is the key issue for reducing development cost.

❖ Scalability: As the usage of any application grows, there is an increase in the number of users and the volume of data. A good design should be able to scale to meet the increasing data or number of users.

4.4 DESIGN CONCEPTS

There are several significant design concepts which provide a foundation on which more sophisticated design methods can be applied:

❖ Abstraction - Procedural and Data
❖ Modularity - Make it intellectually manageable
❖ Architecture - Overall structure of the software
❖ Patterns - Proven solution to a known recurring problem
❖ Refactoring - A reorganization technique that simplifies the design
❖ Functional independence - Single-minded function and low coupling
❖ Information Hiding - Constrain access to data and procedures
❖ Refinement - Top-down design strategy, complementary to abstraction

4.4.1 Design Concepts - Abstraction

The earlier section covered some aspects of abstraction. Abstraction in simple terms means concentrating on the essentials and ignoring the details. It is a process of generalization by reducing the information
details. It is a process of generalization by reducing the information
content of a concept or an observable phenomenon in order to retain only
information which is relevant for a particular purpose. There are multiple
levels of abstraction - at the highest level of abstraction, a solution is
stated in broad terms using the language of the problem environment; at
lower levels of abstraction, a more detailed description of the solution is
provided. ERP applications are large complex systems and can be made
understandable by decomposing them into modules. When viewed from the
outside, each module should be simple, with the complexity hidden inside.

Abstraction maxim: Simplicity is good; complexity is bad.

There are two kinds of abstraction, described below: procedural abstraction and data abstraction.



Procedural abstraction is the separation of the logical properties of an action from the details of how the action is implemented. It refers to a sequence of instructions that have a specific and limited function. An example would be the word "Enter" for a door. The procedural abstraction implies the instructions and functions, but the specific details are suppressed; it is implemented with "knowledge" of the object associated with entering, e.g. walk to door, reach out, grasp knob, turn knob, pull door, etc. The door may be opened by a switch, a card, manually, or automatically by sensors. These details are not relevant for "entering". The door being opened could be the door of a castle, a fridge, a house or a cupboard.

Data abstraction is the separation of the logical properties of data from the details of how the data are represented. A data abstraction is a named collection of data that describes a data object. In the context of the procedural abstraction "open", one can define a data abstraction called door. The data abstraction for door would encompass a set of attributes that describe the door, e.g. manufacturer, door type, door material, swing direction, weight, lights coming on, pull or push, mechanisms, etc. In data abstraction, the focus is on the problem's data rather than the tasks to be carried out.
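The two kinds of abstraction can be sketched in a few lines of Python. This is only an illustration of the door example above; the attribute and function names are invented, not part of any standard design.

```python
from dataclasses import dataclass

# Data abstraction: a named collection of attributes describing a door.
@dataclass
class Door:
    manufacturer: str
    material: str
    swing_direction: str  # "push" or "pull"
    locked: bool = False

# Procedural abstraction: callers just say "enter"; the individual
# steps (reach, grasp, turn, push/pull) are hidden inside.
def enter(door: Door) -> str:
    if door.locked:
        return "blocked"  # details of unlocking are suppressed
    action = "push" if door.swing_direction == "push" else "pull"
    return f"{action} open and walk through"

front = Door(manufacturer="Acme", material="oak", swing_direction="pull")
print(enter(front))  # pull open and walk through
```

Whether the door belongs to a castle or a cupboard, the caller's view is the same: a `Door` value and an `enter` operation.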

4.4.2 Design Concepts - Architecture

Architecture refers to the overall structure of the software and the ways in
which that structure provides conceptual integrity for a system. In its
simplest form, architecture is the hierarchical structure of program
modules, the manner in which these components interact and the structure
of data that are used by the components. Like any civil engineering
structure, software must have a solid foundation. Failing to consider key scenarios, to design for common problems, or to predict the long-term consequences of a key decision can put the development work at risk. Poor architecture can make the software unstable, produce more bugs during the coding phase, and make it hard to support development for future business requirements.

The architecture design is an important phase of the whole development process; with full consideration of user requirements, business goals and system capability, it draws a blueprint for the later work. At this stage all the key scenarios are outlined in great detail. Some questions that need to be answered are:

- How will the user be using the application?
- How will the features of the application benefit the user?
- How can the application be designed to be maintainable and still meet the development schedule?

An ideal architecture is a faithful conversion of business requirements into technical requirements, leading to implementation of those requirements by programming the software. The architecture must be designed with future evolution in mind so that it will be able to adapt to requirements that are not fully known at the start of the design process. The key is to build software for change instead of building it to last; there will always be new requirements and feedback.


Architecture design can be represented using a number of different models:

❖ Structural models represent architecture as an organized collection of program components.

❖ Framework models increase the level of design abstraction by attempting to identify repeatable architectural design frameworks (patterns) that are encountered in similar types of applications.

❖ Dynamic models address behavioral aspects of the program architecture, indicating how the structure or system configuration may change as a function of external events.

❖ Process models focus on the design of the business or technical process that the system must accommodate.

❖ Functional models can be used to represent the functional hierarchy of a system.

4.4.3 Design Concepts - Modularity

Modularity is the degree to which software can be understood by examining its components independently of one another. It refers to the extent to which a software or Web application may be divided into smaller modules. Monolithic software cannot be easily understood due to the complexity of control paths, span of reference, number of variables, etc. Software is divided into separately named and addressable components referred to as "modules". The basic principle is "divide and conquer": divide the problem into manageably small pieces, where each piece can be solved and/or modified separately. The pieces need to be related to the application and cannot be fully independent; they must communicate.

Modularity provides greater software development manageability. Modules are divided among teams based on functionality, and programmers need not get involved with the functionalities of other modules. New functionalities may be easily programmed in separate modules.

Besides reduction in cost and flexibility in design, modularity offers other benefits such as augmentation, i.e. adding a new solution by merely plugging in a new module. A computer is an example of modular design, with modules like power supply units, processors, motherboards, graphics cards, hard drives, optical drives, etc. All of these parts are easily interchangeable as long as parts that support the same standard interfaces are used. One good example of modularization in the software domain is MS Office products like Excel, Word or PowerPoint. The main menu shows "modules" like Home, Insert, Options, Format, Review, View, etc. Each module further subdivides into smaller modules. The design is flexible, interchangeable and modifiable; new functionalities can also be added in separate modules.
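The plug-in idea can be sketched in code as a registry of independently written modules. The module names below are invented purely for illustration; the point is that a new feature is added by registering another entry, not by editing existing code.

```python
# A minimal plug-in sketch: each "module" is an independent callable
# stored in a registry keyed by name.
modules = {}

def register(name):
    """Decorator that adds a function to the module registry."""
    def wrap(func):
        modules[name] = func
        return func
    return wrap

@register("spellcheck")
def spellcheck(text):
    return f"spellchecked: {text}"

@register("word_count")
def word_count(text):
    return len(text.split())

# Augmentation: a new capability plugs in without touching the code above.
@register("shout")
def shout(text):
    return text.upper()

print(modules["word_count"]("divide and conquer"))  # 3
```

Each module can be developed and tested in isolation; the registry is the shared interface that lets them be integrated.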

The cost of developing smaller modules decreases as the number of components increases. However, integration of the modules requires more planning and effort, and the integration cost increases as the number of modules increases. The diagram given below illustrates the "dilemma" of how much to partition: when to stop partitioning, and how to decide the right number of modules.

[Figure: module development cost vs. integration cost as the number of modules increases]


The cost to develop modules decreases as the modules become smaller, but the integration cost increases. Smaller modules are easier to build, easier to change, easier to fix. There is no magic wand to decide what the "right" number of modules is for a specific software design. One should stop partitioning when the total cost exceeds the benefits.

4.4.4 Design Concepts - Information Hiding

Information hiding is the hiding of those design decisions in a computer program that are most likely to change, thus protecting other parts of the program from change if the design decision is changed. Modules are designed so that information contained within a module is inaccessible to other modules that have no need for such information. "Encapsulation" is often used interchangeably with information hiding; in simple terms, "information hiding" is the principle and "encapsulation" is the technique. A software module hides information by encapsulating the information into a module or other construct which presents an interface.

A common example of information hiding is designing a car. In order to make the design, manufacturing, and maintenance of a car reasonable, the complex piece of equipment is divided into modules with particular interfaces hiding design decisions. By designing a car in this fashion, a car manufacturer can also offer various options while still having a vehicle which is economical to manufacture.

The car manufacturer may have a luxury version of the car and a standard version. The luxury version has a more powerful engine than the standard one. Engineers designing the two engines provide the same interface for both engines. Both engines fit into the engine bay of the car, fit the same transmission, the same engine mounts and the same controls. The difference is that the more powerful luxury version has a larger displacement, with a fuel injection system that is programmed to provide the fuel-air mixture.

In addition, the luxury version may also offer other options such as a better CD player, more comfortable seats, a better suspension system with wider tires, and different paint colors. The radio with CD player is a module which replaces the standard radio; more comfortable seats are installed into the same seat mounts, and whether the seats are leather or plastic, or offer lumbar support or not, doesn't matter.


Engineers design the car by dividing the task up into pieces of work which
are assigned to teams who design their component to a particular standard
or interface. Such a "platform" also provides an example of information
hiding, since the floor-plan can be built without knowing whether it is to be
used in a sedan or a hatchback.

Taking another example from a software program: suppose each object (employee code) is required to have a unique ID stored in a member variable called ID. One design approach would be to use integers for the IDs and to store the highest ID assigned so far in a global variable called MaxID. Each place a new object (employee code) is allocated, one could increment the value of MaxID by 1 and assign the new value to ID. That would guarantee a unique ID, and it would add the absolute minimum of code in each place an object is created. This works fine until some more constraints or requirements are added for the ID.

Suppose one wants to reserve ranges of IDs for special purposes (trainees, contractors)? Or suppose one wants to reuse the IDs of objects that have been destroyed (employees who left the organization)? If there are several statements in the program where ID is assigned, each of them needs to be changed for these new requirements.

One could instead put the simple statement ID = NewID() throughout the program, where NewID() is a function that is called every time a value has to be assigned to ID. Here one hides the information about how new IDs are created. Inside the NewID() function one might have just one line of code or its equivalent, but if at a later date it is required to reserve certain ranges of IDs for special purposes or to reuse old IDs, one makes changes within the NewID() function itself, without touching any of the ID = NewID() statements.

The use of information hiding as a design criterion for modular systems provides the greatest benefits when modifications are required during testing and later, during software maintenance. Because most data and procedures are hidden from other parts of the software, inadvertent errors introduced during modifications are less likely to propagate to other locations within the software.

Hiding implies that effective modularity can be achieved by defining a set of independent modules that communicate with one another only the information necessary to achieve the software's function. Modules should be specified and designed so that information (algorithms and data) contained within a module is inaccessible to other modules that have no need for such information.

4.4.5 Design Concepts - Functional Independence

The previous sections looked at the concepts of modularity, information hiding and abstraction; together they result in functional independence. Functions are designed with a single-minded purpose and minimum interaction with other functions. When a module has a single function to perform, it is easier to achieve its objective. Ideally, no two modules have the same function to achieve.

Functional independence is a key to any good design due to:

❖ Error isolation: Functional independence reduces error propagation. If a module is functionally independent, its degree of interaction with the other modules is low. Therefore, any error existing in a module will not directly affect other modules.

❖ Scope of reuse: Since each module performs some well-defined and precise function, and the interaction of the module with the other modules is simple and minimal, a cohesive module can be easily taken out and reused in a different program.

❖ Understandability: Complexity of the design is reduced, because different modules can be understood in isolation, as modules are more or less independent of each other.

There are two qualitative criteria to assess functional independence:

❖ Coupling is the degree to which a module is connected to other modules in the system. It is an indication of the interconnection between modules.

❖ Cohesion is the degree to which a module performs one and only one function. It is an indication of the functional strength of a module.


Functional Independence - Coupling

❖ The goal: modules should be as loosely coupled as possible.

❖ In general, the more one must know about module A in order to understand module B, the more closely A is connected to B.

❖ "Highly coupled" modules are joined by strong interconnections.

❖ "Loosely coupled" modules have weak interconnections.

❖ If two modules interchange large amounts of data, then they are highly interdependent.

The degree of coupling between two modules depends on their interface complexity.

There are five types of coupling that can occur between any two modules.

❖ Data coupling: Two modules are data coupled if they communicate through a parameter. An example is an elementary data item passed as a parameter between two modules, e.g. an integer, a float, a character, etc.

❖ Stamp coupling: Two modules are stamp coupled if they communicate using a composite data item such as a record in PASCAL or a structure in C.

❖ Control coupling: Control coupling exists between two modules if data from one module is used to direct the order of instruction execution in another. An example of control coupling is a flag set in one module and tested in another module.

❖ Common coupling: Two modules are common coupled if they share data through some global data items.

❖ Content coupling: Content coupling exists between two modules if they share code, e.g. a branch from one module into another module.
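The first four kinds can be illustrated in a few lines of Python. The function names are invented for illustration; content coupling has no direct analogue here, since one cannot branch into the middle of another function's body.

```python
# Data coupling: an elementary value passed as a parameter.
def fahrenheit(celsius):             # receives a single number
    return celsius * 9 / 5 + 32

# Stamp coupling: a composite structure passed between modules.
def full_name(employee):             # receives a whole record (dict)
    return f"{employee['first']} {employee['last']}"

# Control coupling: a flag from the caller directs the callee's execution.
def report(data, summary_only):      # summary_only steers control flow
    return f"{len(data)} rows" if summary_only else ", ".join(data)

# Common coupling: two functions share a global data item.
TAX_RATE = 0.2                       # global shared by both functions below
def net(amount):
    return amount * (1 - TAX_RATE)
def tax(amount):
    return amount * TAX_RATE

print(fahrenheit(100))                                   # 212.0
print(full_name({"first": "Ada", "last": "Lovelace"}))   # Ada Lovelace
```

Note how the interface complexity grows down the list: a single number, then a whole record, then a flag that dictates behavior, then a shared global that silently ties the last two functions together.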


Functional Independence - Cohesion

Goal: High cohesion is the goal

❖ A module is said to be "independent" if it can function completely without the presence of other modules. Generally, in a system all modules cannot be independent; modules must cooperate with each other.

❖ Cohesion considers maximizing the relationship between elements of the same module.

❖ A good software design implies decomposition of the problem into modules, and a neat arrangement of these modules in a hierarchy.

There are seven types of cohesion that a module may possess.

❖ Coincidental cohesion: A module is said to have coincidental cohesion if it performs a set of tasks that relate to each other very loosely, if at all. For example, in a transaction processing system (TPS), the get-input, print-error, and summarize-members functions are grouped into one module.

❖ Logical cohesion: A module is said to be logically cohesive if all elements of the module perform similar operations, e.g. error handling, data input, data output, etc. An example of logical cohesion is the case where a set of print functions generating different output reports are arranged into a single module.

❖ Temporal cohesion: When a module contains functions that are related by the fact that all the functions must be executed in the same time span, the module is said to exhibit temporal cohesion. For example, a set of functions responsible for initialization, start-up, shutdown of some process, etc. exhibits temporal cohesion.

❖ Procedural cohesion: A module is said to possess procedural cohesion if the set of functions of the module are all part of a procedure (algorithm) in which a certain sequence of steps has to be carried out for achieving an objective, e.g. the algorithm for decoding a message.

❖ Communicational cohesion: A module is said to have communicational cohesion if all functions of the module refer to or update the same data structure, e.g. the set of functions defined on an array or a stack.

❖ Sequential cohesion: A module is said to possess sequential cohesion if the elements of a module form the parts of a sequence, where the output from one element of the sequence is input to the next. For example, in a TPS, the get-input, validate-input, sort-input functions are grouped into one module.

❖ Functional cohesion: Functional cohesion is said to exist if different elements of a module cooperate to achieve a single function. For example, a module containing all the functions required to manage employees' payroll exhibits functional cohesion.
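Sequential cohesion, for instance, can be sketched as a pipeline where each step feeds the next. The function names follow the TPS example above; the sample data and validation rule are invented for illustration.

```python
# Sequential cohesion: the output of each element is input to the next.
def get_input():
    return ["  42 ", "7", "oops", " 13"]

def validate_input(raw):
    # keep only entries that parse as integers
    return [int(s) for s in raw if s.strip().isdigit()]

def sort_input(values):
    return sorted(values)

# The module's single job: produce clean, sorted transaction input.
def prepare_transactions():
    return sort_input(validate_input(get_input()))

print(prepare_transactions())  # [7, 13, 42]
```

Grouping these three steps in one module is cohesive because they form one get-validate-sort sequence; a print-error routine, by contrast, would be only coincidentally related and belongs elsewhere.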

In practice both coupling and cohesion are used; cohesion and coupling are interrelated. The greater the cohesion of modules, the lower the coupling between modules. Low coupling is often a sign of a well-structured computer system and a good design, and when combined with high cohesion, supports the general goals of high readability and maintainability. The mantra is "low coupling, high cohesion".

Consider mobile applications, in which the "Contacts" module and the "WhatsApp" module are coupled only to the extent of using the contacts data. One module maintains the contacts, and the other has its own groups or individual members. Each module is cohesive within itself and loosely coupled to the other.

4.4.6 Design Concepts - Refactoring

Refactoring is a reorganization technique that simplifies the design of a component without changing its function or behavior. When software is refactored, the existing design is examined for redundancy, unused design elements, inefficient or unnecessary algorithms, poorly constructed data structures, or any other design failures that can be corrected to yield a better design. Refactoring the source code means modifying it without changing its behavior. It neither fixes bugs nor adds new functionality, though it might precede either activity. It improves the understandability of the code, changes its internal structure and design, and removes dead code, to make it easier to comprehend, more maintainable and amenable to change.



An example of a trivial refactoring is to change a variable name into something more meaningful, such as from a single letter 'i' to 'interestRate', or to turn the code within an "if block" into a subroutine.
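Both trivial refactorings can be shown side by side. The interest calculation itself is invented for illustration; what matters is that the before and after versions behave identically.

```python
# Before: cryptic name 'i', logic buried inside an "if" block.
def balance_before(p, i, yrs):
    if yrs > 0:
        a = p
        for _ in range(yrs):
            a = a * (1 + i)
        return a
    return p

# After: 'i' renamed to 'interest_rate', and the loop extracted
# into a subroutine. Behavior is unchanged.
def compound(principal, interest_rate, years):
    amount = principal
    for _ in range(years):
        amount = amount * (1 + interest_rate)
    return amount

def balance_after(principal, interest_rate, years):
    if years > 0:
        return compound(principal, interest_rate, years)
    return principal

print(balance_before(100, 0.05, 2) == balance_after(100, 0.05, 2))  # True
```

A test suite that passes before the refactoring must still pass after it; that is the practical check that behavior was preserved.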

4.4.7 Design Concepts - Refinement

It is the process of elaboration. A hierarchy is developed by decomposing a


macroscopic statement of function in a step-wise fashion until
programming language statements are reached. In each step, one or
several instructions of a given program are decomposed into more detailed
instructions. Stepwise refinement is a top-down design strategy. A program
is developed by successively refining levels of procedural detail.

Design Concepts - Refinement: opening a door

Open:
• Walk to Door
• Reach for knob
• Open Door
• Walk thru Door
• Close Door

"Open Door", refined one level further:
Repeat until Door opens
    Turn Knob Clockwise
    If Knob does not turn, then
        Take Key out;
        Find correct key;
        Insert in Lock
    endif
    Push/Pull door, move out of way
End repeat


One begins with a statement of function that is defined at a high level of
abstraction. That is, the statement describes function or information
conceptually but provides no information about the internal workings of the
function or the internal structure of the information. Refinement causes the
designer to elaborate on the original statement, providing more and more
detail as each successive refinement (elaboration) occurs. The diagram
above is an example of the refinement of the process of opening a door.

Abstraction and refinement are complementary concepts. Abstraction
enables a designer to specify procedure and data and yet suppress
low-level details. Refinement helps the designer to reveal low-level details
as design progresses. Both concepts aid the designer in creating a complete
design model as the design evolves.
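The door example can also be rendered as a hypothetical code sketch, where each function refines one abstract step of the level above it; all names are illustrative:

```python
# The door example as a stepwise-refinement sketch. Each function
# refines one abstract step of the level above it.

def open_and_pass_through(door):
    # Level 1: the high-level steps from the diagram.
    walk_to(door)
    reach_for_knob(door)
    make_door_open(door)      # still abstract; refined below
    walk_through(door)
    close_door(door)

def make_door_open(door):
    # Level 2: the "Open Door" step decomposed further.
    while not door["open"]:
        if not turn_knob_clockwise(door):
            take_key_out_and_unlock(door)
        push_or_pull(door)

# Minimal stubs so the sketch runs end to end.
def walk_to(door): pass
def reach_for_knob(door): pass
def walk_through(door): pass
def close_door(door): door["open"] = False
def turn_knob_clockwise(door): return not door["locked"]
def take_key_out_and_unlock(door): door["locked"] = False
def push_or_pull(door):
    if not door["locked"]:
        door["open"] = True

door = {"open": False, "locked": True}
open_and_pass_through(door)
```

Each level could be refined again until every statement is directly executable, which is exactly the top-down strategy described above.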

4.5 FUNCTION-ORIENTED DESIGN VS. OBJECT-ORIENTED DESIGN

It is essential to pause here and understand a few differences between the
function-oriented and the object-oriented design approaches.

In Function-Oriented Design, a system is viewed as something that
performs a set of functions. Starting at this high-level view of the system,
each function is successively refined into more detailed functions. For
example, consider a function create-new-library-member which essentially
creates the record for a new member, assigns a unique membership
number to him, and prints a bill towards his membership charge. This
function may consist of sub-functions such as "assigning-membership-number",
"creating-member-record", "printing-bill", etc. Each of these sub-functions
may be split into more detailed sub-functions and so on.

The system state is centralized and shared among different functions; e.g.,
data such as member-records are available for reference and update to
several functions such as "create-new-member", "delete-member",
"update-member-record", etc.
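The library example above can be sketched as a function-oriented design (all names are illustrative). Note how the member records and the membership counter form a centralized state shared by every function:

```python
# Function-oriented sketch of create-new-library-member.
# The system state is centralized and visible to all functions.

member_records = {}            # shared state: number -> record
next_membership_number = [1]   # shared counter (mutable)

def assign_membership_number():
    number = next_membership_number[0]
    next_membership_number[0] += 1
    return number

def create_member_record(number, name):
    member_records[number] = {"name": name}

def print_bill(number, charge):
    return f"Member {number}: membership charge Rs. {charge}"

def create_new_library_member(name, charge):
    # Top-level function, refined into the sub-functions above.
    number = assign_membership_number()
    create_member_record(number, name)
    return print_bill(number, charge)

print(create_new_library_member("Asha", 500))
# → Member 1: membership charge Rs. 500
```

Any function in the system could read or modify `member_records` directly, which is precisely the centralized-state property being described.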

In traditional analysis methodologies, the two aspects, processes and data,
are considered separately. Data may be modeled by ER diagrams, and
behavior by flow charts or structure charts.


The OO design approach is fundamentally different from the
function-oriented design approaches, primarily due to the different
abstraction that is used. It requires a different way of thinking and
partitioning. The main difference between OO analysis and other forms of
analysis is that in the OO approach requirements are organized around
objects, which integrate both behaviors (processes) and states (data),
modeled after the real-world objects that the system interacts with.

In OO Design, the system is viewed as a collection of objects (i.e.
entities). The state is decentralized among the objects, and each object
manages its own state information. For example, in a Library Automation
Software, each library member may be a separate object with its own data
and functions to operate on these data. Functions defined for one object
cannot refer to or change data of other objects. Objects have their own
internal data which define their state. Similar objects constitute a class.
Conceptually, objects communicate by message passing.
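The library-member example can be sketched in an object-oriented style for contrast (again, all names are illustrative). Here each member object owns its own data, and no other object can reach inside it:

```python
# Object-oriented sketch of the library example: state is
# decentralized, owned by the individual member objects.

class LibraryMember:
    _next_number = 1                      # class-level counter

    def __init__(self, name):
        self.name = name
        self.membership_number = LibraryMember._next_number
        LibraryMember._next_number += 1
        self._books_issued = []           # state private to this object

    def issue_book(self, title):
        # Other objects must send a "message" (method call) like this
        # one; they never touch _books_issued directly.
        self._books_issued.append(title)

    def bill(self, charge):
        return f"Member {self.membership_number}: Rs. {charge}"

asha = LibraryMember("Asha")
ravi = LibraryMember("Ravi")
asha.issue_book("Software Engineering")
print(asha.bill(500))   # each object manages its own data
```

Compared with the function-oriented version, there is no global `member_records` table: the same information lives inside each `LibraryMember` object.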

In OOD, the basic abstractions are not real-world functions such as sort,
display, track, etc., but real-world entities such as employee, picture,
machine, radar system, etc. For example, in OOD an employee pay-roll
software is not developed by designing functions such as
update-employee-record, get-employee-address, etc., but by designing
objects such as employees, departments, etc. Grady Booch explains the
difference as "identify verbs if one uses procedural design and identify
nouns if one uses object-oriented design".

The state information is not represented in a centralized shared memory
but is distributed among the objects of the system. For example, while
developing an employee pay-roll system, the employee data such as the
names of the employees, their code numbers, basic salaries, etc. are
usually implemented as global data in a traditional programming system;
whereas in an object-oriented system these data are distributed among the
different employee objects of the system. Objects communicate by
message passing.

Even though object-oriented and function-oriented approaches are
remarkably different approaches to software design, they do not replace
each other but complement each other in some sense. Obviously,
somewhere or other the real-world functions must be implemented even in
OOD. The functions are usually associated with specific real-world entities
(objects) and directly access only part of the system state information.

4.6 DESIGN DOCUMENTATION

The final goal of any engineering activity is to create some kind of
documentation. When a design effort is complete, the design
documentation is given to a manufacturing team. The manufacturing team
is a different set of people with a different set of skills from those of the
design team. The manufacturing team proceeds to build the product with
or without much further assistance from the designers.

A software design document (SDD) is a written description of a software
product that a software designer writes in order to give a software
development team overall guidance on the architecture of the software
project. An SDD is usually accompanied by an architecture diagram with
pointers to detailed feature specifications of smaller pieces of the design.
Practically, a design document is required to coordinate a large team under
a single vision. A design document needs to be a stable reference, outlining
all parts of the software and how they will work. The document is expected
to give a fairly complete description, while maintaining a high-level view of
the software.

A typical Table of Contents for a design document is:

❖ Scope
❖ Data Design
❖ Architectural Design
❖ Interface Design
❖ Procedural Design
❖ Requirements Cross Reference
❖ Test Provisions
❖ Special Notes
❖ Appendices


4.7 SUMMARY

A software design is a meaningful engineering representation of some
software product that is to be built, moving from the problem domain to
the solution domain. Design focuses on four major areas of concern: data,
architecture, interfaces and components. Each of the elements of the
analysis model, i.e. data, class, flow and behavior, provides information
that is necessary to create the design models. Design is the place where
quality needs to be embedded in software engineering, and it provides
representations of software that can be assessed for quality. The design
process should not suffer from "tunnel vision" and should consider
alternative approaches based on the requirements of the problem. A good
design is traceable to the analysis model and is modular, where the
software is logically partitioned into subsystems. A good design should also
possess characteristics like correctness, understandability, efficiency,
maintainability, robustness (reliability), reusability, compatibility,
flexibility, modularity, security, portability and scalability. Modular design
naturally follows the rules of 'divide and conquer', one of the
problem-solving strategies.

Abstraction in design is a technique in which unwanted details are not
included and only the needed information is given. At the highest level of
abstraction the solution is stated in general terms. Procedural abstraction
separates the logical properties of an action from the details of how the
action is implemented, while data abstraction separates the logical
properties of data from the details of how the data are represented. Design
architecture is the overall hierarchical structure of the software and the
ways in which that structure provides conceptual integrity for a system.
Encapsulation and information hiding are two concepts used in modular
design that help in making changes to some modules, during testing or
later, with minimum impact on other modules or code.

It is important to design modules such that each module meets specific
functional requirements. This functional independence is achieved by high
cohesion and low coupling. Cohesion is a measure that defines the degree
of intra-dependability within elements of a module. Coupling is a measure
that defines the level of inter-dependability among modules of a program.
Design is never a one-step process. Stepwise refinement is a top-down
design strategy; in each step, one or several instructions of a given
program are decomposed into more detailed instructions.


In a function-oriented design approach a system is viewed as something
that performs a set of functions, and each function is successively refined
into more detailed functions. In Object-Oriented Design, the system is
viewed as a collection of objects (i.e. entities). Even though
object-oriented and function-oriented approaches are remarkably different
approaches to software design, they do not replace each other but
complement each other in some sense. The final goal of design is to create
some kind of documentation.

4.8 SELF-ASSESSMENT QUESTIONS

1. What is software design? How different is software design from coding?

2. What are the differences between software design and software analysis?

3. Identify at least five important items developed during the software design phase.

4. State and explain a few major design activities.

5. When can one say that the quality of a design is good?

6. What are the desirable characteristics of a good software design? Explain each of them with some examples.

7. What is abstraction? Explain the difference between data abstraction and procedural abstraction with examples.

8. What are the key concepts of design? Explain the concepts of modularization, abstraction, cohesion, coupling, functional independence, etc. in simple terms with examples.

9. What is architecture? What are the several models to support architecture design? What is an ideal architecture?

10. Why should a design be modular? Can there be situations where a monolithic design is beneficial? Explain with examples.

11. What are the pros and cons of modular design?

12. If a design is already modular, why is information hiding required?

13. Explain the difference between information hiding and encapsulation.

14. What is information hiding? Give an example to explain the answer.

15. Discuss functional independence in software design. What are the two qualitative criteria to measure functional independence?

16. Identify at least three reasons why functional independence is the key factor for a good software design.

17. "Low coupling, high cohesion" is the mantra for functional independence. What does this mean? Explain with an example.

18. What are the different types of coupling? Briefly explain each of them.

19. What are the different types of cohesion? Briefly explain each of them.

20. Explain the difference between function-oriented design and object-oriented design in your own words.

21. Identify and explain at least three salient features of an object-oriented design approach.

22. Explain refactoring in the context of software design.

23. Explain step-wise refinement in the context of software design.



USER INTERFACE DESIGN

Chapter 5
User Interface Design
Objective:

On completion of this chapter you would be able to understand

❖ Identify some characteristics of a user interface

❖ Simple rules to keep in mind while designing user interfaces

❖ Importance of analyzing the user inputs for interface design

Structure:

5.1. User Interface Design - Characteristics

5.2. User Analysis

5.3. Types of Interfaces

5.4. Summary

5.5. Self-Assessment Questions


5.1 USER INTERFACE DESIGN - CHARACTERISTICS

Considering that the usage of mobiles is all-pervading, I had asked several
youngsters during my teaching sessions what they consider important in
the user interface of a mobile. Why would they choose the same company
when they had to upgrade handsets? 99% of them said it was easy to
learn, easy to understand and easy to remember. When I asked them if
their parents would use the upgraded phone, more than 50% said NO.
They indicated that their parents would find the "user interface" difficult to
handle. One step further: "Would their grandparents use the upgraded
phone?" 99% of the youth just laughed. The user interface was not
different, but the users and their expectations were.

Hence in any software application design it is very important to identify the
characteristics desired of a good user interface. Some characteristics to
remember while designing a good user interface are:

Easy to Learn: A good user interface should be easy to learn. Speed of
learning is hampered by complex syntax and semantics of the command
issue procedures. To use an interface effectively, people must be able to
recognize what it is, know why they would use it, understand what they
are interacting with, predict what will happen when they use it, and then
successfully interact with it. There is no room for confusion. Clarity inspires
confidence and leads to further use.

User in Control: Users are most comfortable when they feel in control of
themselves and their environment. Programmers sometimes create
software that forces people into unplanned interactions, confusing
navigation and surprising outcomes. A good interface design should let the
user be in control by regularly displaying system status, describing the
effect of any action and highlighting what to expect at every turn. Even the
obvious needs to be stated; one knows very well that the obvious almost
never IS.

No overloading of the user's memory: The system should remember
pertinent information rather than the user. A good user interface should
not require its users to memorize commands. The user need not be asked
to remember information from one screen to another while performing
various tasks using the interface. For example, familiar actions such as
undo and redo, and clipboard actions like cut, copy and paste, allow users
to manipulate pieces of information needed in multiple places and within a
particular task.

Meaningful defaults: For example, a user need not remember and type a
15-character account number every time he pays an electricity bill online.
The system should have some logic to show the last entries made and
reduce the demand on short-term memory. Such features are evident in
several applications like Facebook, LinkedIn, MS Office, etc. When entering
dates for any transaction or data on a screen, the current date is shown as
the default in many applications. If someone has entered "Mrs." for "Title"
in an online application form, the gender by default can be shown as "F"
for female. While booking a return air ticket, the system can display the
onward travel date as the default travel date for the return journey.
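The defaults described above can be sketched as a hypothetical form-initialization routine; the field names, rules and sample account number are all illustrative:

```python
# Sketch of "meaningful defaults": pre-fill a form so the user
# types as little as possible. All fields and rules are illustrative.
import datetime

def booking_form_defaults(title, onward_date, last_account_number=None):
    """Return a form pre-filled with sensible default values."""
    return {
        "date": datetime.date.today(),         # current date as default
        "gender": "F" if title == "Mrs." else "",
        "return_date": onward_date,            # default to onward date
        "account_number": last_account_number or "",  # reuse last entry
    }

form = booking_form_defaults("Mrs.", datetime.date(2024, 5, 1),
                             last_account_number="123456789012345")
print(form["gender"], form["return_date"])
```

The user can still override any field; the defaults only reduce the demand on short-term memory.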

Hiding technical internals: The interface should be based on
user-oriented terms and concepts rather than computer concepts. For
example, an office system should use concepts such as letters, documents,
folders, etc. rather than directories, file identifiers, etc. Menu items,
options and features should not be confused with terminologies like
modules, programs, sub-routines and functions. A programmer should not
let implementation issues override the ease of use of the system.

Graphical User Interface: Today, with excellent graphics cards and
visual design tools available, designing the interface with objects on
screen is a given. A visual layout is far easier to interpret and use, and the
visuals should be based on the real world. In a GUI, multiple windows with
different information can simultaneously be displayed on the user's
screen. This is perhaps one of the biggest advantages of GUI over
text-based interfaces, since the user has the flexibility to simultaneously
interact with several related items at any time and can have access to
different system information displayed in different windows. Symbolic
information manipulation, such as dragging an icon representing an image
or file to a trash can in order to delete it, is intuitively very appealing and
the user can instantly remember it. Usage of metaphors, i.e. abstractions
of real-life objects or concepts, in user interface design is highly
recommended and effective. For example, if the user interface of a text
editor uses concepts such as cutting lines and paragraphs and pasting
them at other places, users can immediately relate to it. Another
commonly seen metaphor is the shopping cart on online shopping sites,
where the cart is used to hold choices just as while purchasing items in a
supermarket.


Context-Sensitive Terminology: The user interface should use words,
phrases and concepts familiar to the user, rather than system-oriented
terms. For example, any person checking into a hotel or getting admitted
to a hospital fills up basic details of name, date of birth, address, contact
numbers, email ids and date of entry. However, the person checking into a
hotel is a "guest" while the person being admitted to a hospital is referred
to as a "patient". The user interface should clearly indicate whether the
system is accepting a patient or a guest; mixing up these can have
disastrous effects. Another example is the "Contacts" application in a
mobile. If a new number is detected by the application, it allows the user
to "Add a New contact", "Add to existing Contact", "Edit the number" or
just "Discard" - options that are relevant to the context of "Contacts".

Intuitive Short Cuts: By definition, a user interface is intuitive when
users understand its behavior and effect without reasoning,
experimentation, assistance or special training. This is possible if the user
has prior knowledge, either from experience in the real world or with other
software. Consider the example of a hotel stay. Most hotels have phone
systems in rooms which require "9" to be pressed for making an external
call. Most adults have understood from an early age that the '9' button will
get an outside line when using a business or hotel phone system. This is
part of the guests' current knowledge as they travel from phone system to
phone system; here '9' becomes intuitive. In some hotels the designers
may choose another digit, e.g. the '8' button, as a better choice. With
some mental adjustment the digit '8' becomes intuitive by training.
Similarly, in software user interface design one defines short cuts that are
intuitive. For example, if something looks like a push button in the real
world, one clicks on it to make something happen. Alternatively, if
something looks like a link, one clicks on it to open a new site, web page
or associated program.

Consistent Interface: Most users have used the different applications in
the MS suite like MS Word, MS Paint, MS Outlook, MS PowerPoint, MS
Excel, etc. It is very obvious that the interfaces across all the Microsoft
applications are similar and consistent. The options available under a given
sub-menu, e.g. Edit, are consistent across all the applications. Once a user
learns about a command, he is able to use similar commands in different
circumstances for carrying out similar actions. Such interface design
makes it easier to learn the interface. All visual information should be
organized as per a design standard maintained across all screens. In
financial applications like PeopleSoft and Tally, or ERP applications like
SAP, the input mechanisms are consistent throughout the application and
family of applications, and mechanisms to navigate from task to task are
consistently defined and implemented.

Component-based interface: Users can learn an interface faster if the
interaction style of the interface is very similar to the interface of other
applications with which the user is already familiar. This can be achieved if
the interfaces of different applications are developed using some standard
user interface components. Examples of standard user interface
components are: radio button, check box, text field, slider, progress bar,
date-picker, drop-down list, etc.

Error prevention: A good user interface should minimize the scope for
committing errors while initiating different commands. Intelligent user
interface designs can prevent errors by disabling unnecessary menu
commands or options, or by providing filters to select from. For example,
in an online travel booking site, while selecting a state in India, typing "M"
will allow selection of only those Indian states beginning with M, i.e.
Madhya Pradesh, Maharashtra, etc. At the next level, selecting a city would
ensure that the user selects only cities of the state chosen in the previous
step. Errors can also be prevented by asking users to confirm any
potentially destructive actions specified by them; for example, while
deleting a group of files or emptying the Recycle Bin, the user is asked to
confirm the action before it is carried out.
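The state/city filtering described above can be sketched as follows; the state and city lists are small illustrative samples, not real booking-site data:

```python
# Sketch of error prevention by filtering choices instead of
# letting the user mistype free text. Sample data only.

STATES = ["Madhya Pradesh", "Maharashtra", "Manipur", "Kerala", "Goa"]
CITIES = {"Maharashtra": ["Mumbai", "Pune"], "Kerala": ["Kochi"]}

def states_matching(prefix):
    # Typing "M" narrows the list to valid choices only.
    return [s for s in STATES if s.startswith(prefix)]

def cities_for(state):
    # The next step offers only cities of the chosen state.
    return CITIES.get(state, [])

print(states_matching("M"))
# → ['Madhya Pradesh', 'Maharashtra', 'Manipur']
print(cities_for("Maharashtra"))
```

Because the user can only pick from the filtered lists, an invalid state/city combination simply cannot be entered.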

Prevent unnecessary actions: A user interface should not force a user
into unnecessary or undesired actions. For example, while browsing a
photo album in Picasa or Facebook, the navigation should not display the
"Next" button or right arrow for the last image in the series, and similarly
it must avoid displaying the "Prev" button or left arrow for the first image.
The user should be guided through the interface to make the right entries
and selections.

Provide Feedback: A good user interface must provide feedback to
various user actions. In a game of "Spider Solitaire", if a user takes more
than a few seconds to play, the application gives a hint to the user by
flashing some card or group of cards. Similarly, if a user request takes
long to process, the user should be informed about the state of the
processing of his request. This is seen in online payments through banks,
where the system displays a message requesting the user to avoid using
the "Backspace" or "Cancel" buttons while the transition is happening
between, say, ICICI Bank and the payment gateway. In the absence of any
response from the computer for a long time, a user might start recovery or
shutdown procedures in panic. MS programs display the current status and
likely time of completion when the user specifies a file copy or file
download operation.

Interruptible Interactions: No environment is interruption-free. Every
employee in a workplace handles email, voicemail and web pages, attends
meetings, takes calls, supervises work, etc. Interruptions come at
unexpected times in the form of a call, a visitor or an on-screen message.
During each interruption, the user may switch from one application to
another, perform some tasks, or interrupt his current work and resume his
earlier tasks. Putting a call on hold and sending an SMS message to
someone else is a typical example in our mobile world.

Multiple skill levels: A good user interface should support multiple levels
of sophistication of command issue procedure for different categories of
users. Many applications today are not designed for handicapped persons
or visually impaired persons although several standards exist for
developing such applications. Designing an interface for different skill levels
is necessary because users with different levels of experience in using an
application prefer different types of user interfaces. Experienced users are
more concerned about the efficiency of the command issue procedure,
whereas beginners look for usability aspects. Cryptic and complex
commands discourage a novice, whereas elaborate command sequences
make the command issue procedure very slow and therefore put off
experienced users. As users become more and more familiar with an
interface their focus shifts from usability aspects to speed of command
issue aspects. Experienced users look for options such as "hot-keys",
"macros", etc. Providing both keyboard and mouse interfaces offers users
flexibility and allows users of different skill levels or physical handicaps to
use input devices in whatever ways they feel comfortable.

Support error recovery: While issuing commands, even expert users
commit errors. Therefore, a good user interface should allow a user to
undo a mistake committed while using the interface. Users are put to
inconvenience if they cannot recover from the errors they commit while
using the software. Ctrl+Z is one of the most useful hot-keys and must
have saved many users time and effort, apart from the stress of
recovering lost data or information.

On-line help: Users seek guidance and on-line help when they either
forget a command or are unaware of some features of the software. Users
should be provided with the appropriate guidance and help any time while
using the software. This is different from the guidance and error messages
which are flashed automatically without the user asking for them. The
guidance messages prompt the user regarding the options he has
regarding the next command, and the status of the last command, etc. A
good on-line help system should keep track of what a user is doing while
invoking the help system and provide the output message in a context-
dependent way. Here again MS products are excellent examples of how
online help can be provided - some are so exhaustive that one wonders
who reads them!

5.2 USER ANALYSIS

While designing user interfaces it is necessary to understand the user.


Interface analysis means understanding the end-users who will interact
with the system through the interface. User interface design is as much a
study of people as it is of usage of technology. It is a design of interface
between a human (user) and the computer. A poorly designed interface can
cause a user to make catastrophic errors. Some of the questions that need
to be answered are

❖ Who is the user?

❖ Are users Novices, Knowledgeable users, Intermittent or frequent users?

❖ Are users qualified, trained professionals, technicians, clerical or manufacturing workers?

❖ Are users experts in the subject matter that is addressed by the system?

❖ How does the user learn to interact with a new computer based system?

❖ Can users learn from written materials or do they need classroom training?

❖ What would the user want the system to do?

❖ How would the system fit in with the user's normal workflow or daily
activities?

❖ How technically savvy is the user and what similar systems does the user
already use?

❖ What interface look & feel styles appeal to the user?

❖ How does the user interpret information provided by the system?

❖ What will the user expect of the system?

❖ What are the tasks and sub-tasks that end-users must perform to do
their work?

❖ What is the environment in which these tasks will be conducted?

❖ What is the content that is presented as part of the interface?

❖ What are the consequences if a user makes a mistake using the system?

❖ Are users technically savvy?

❖ How does a work process get completed when several people (and roles)
are involved? And many more....

There are several modes for gathering inputs for designing the user interface:

❖ User interviews - designers meet with end-users individually or in groups

❖ Observation - watch users as they attempt to perform tasks with the user interface

❖ Sales input - sales people help designers categorize users and better
understand their needs

❖ Marketing input - marketing analysis can help define market segments
and help understand how each segment might use the software

❖ Support input - support staff can provide good input on what works and what does not, what users like, what features generate questions, and what features are easy to use

5.3 TYPES OF INTERFACES

User interfaces can be classified into the following three categories:

❖ Command language-based interfaces
❖ Menu-based interfaces
❖ Direct manipulation interfaces

Command Language-based Interface

In the pre-Windows era, when DOS was the operating system, interfaces
were command language based. A command language-based interface, as
the name itself suggests, is based on designing a command language
which the user can use to issue commands. The user frames the
appropriate commands in the language and types them in whenever
required. For example, the command:
ren "C:\My Documents\SW_Engg\Testing.doc" Testing_Old.doc
will rename the file (the quotes are needed because the path contains a
space, and the new name is given without a path).

A simple command language-based interface might simply assign unique
names to the different commands. However, a more sophisticated
command language-based interface may allow users to compose complex
commands by using a set of primitive commands. Command
language-based interfaces allow fast interaction with the computer and
simplify the input of complex commands.

Menu-based Interface

For persons not familiar with programming languages, or people wary of
technology, command language-based interfaces are intimidating, difficult
to learn or complex to remember. A menu-based interface, which does not
require the users to remember the exact syntax of the commands, is
therefore preferred over a command language-based interface. A
menu-based interface is based on recognition of the command names,
rather than recollection. For example, one would right-click on a file and
select the "Rename" option, which will then allow the user to type the new
name for the file directly or make necessary changes by use of keyboard
and mouse. Typing effort is minimal as most interactions are carried out
through menu selections using a pointing device. One major challenge in
the design of a menu-based interface is structuring a large number of
menu choices into manageable forms.

Direct Manipulation Interfaces

Direct manipulation interfaces present the interface to the user in the form
of visual models (i.e. icons or objects). For this reason, direct manipulation
interfaces are sometimes called iconic interfaces. In this type of interface,
the user issues commands by performing actions on the visual
representations of the objects, e.g. dragging an icon representing a file to
another location in order to change the location of the file. Important
advantages of iconic interfaces include the fact that the icons can be
recognized by users very easily, and that icons are language-independent.
However, it is difficult to give complex commands using a direct
manipulation interface. For example, if one wants to copy all files in a
given folder to another folder, the user has to select the files in the
"source" folder, right-click to select the "copy" option, move to the
"destination" folder and again right-click to select the "paste" option. This
could be done very easily by issuing a single command like
copy c:\source_folder\*.* destination_folder


5.4 SUMMARY

Defining "what is a user interface?" is itself a challenge. For youngsters
living in the "mobile" and "internet" world, the characteristics of a good
interface could be ease of use, ease of learning or good graphics. The
same characteristics, multiplied ten times, are still not suitable for the
elderly. Most youngsters indicated that their parents would find the "user
interface" difficult to handle. The user interfaces are not different, but the
users and their expectations are. For a good software application design it
is important to identify the characteristics desired of a good user
interface, e.g. ease of learning, keeping the user in control, no
overloading of the user's memory, meaningful defaults, hiding technical
details from the user, context sensitivity, consistency, intuitive short cuts,
error prevention, continuous feedback, handling different skill levels,
on-line help, etc.

The user is paramount while designing user interfaces, and interface analysis implies understanding the end-users who will interact with the system through the interface. Interfaces can be classified into command language based, menu-based and direct manipulation categories. User interface design is as much a study of people as it is of technology.


5.5 SELF-ASSESSMENT QUESTIONS

1. Identify some characteristics of a user interface and explain them with examples.

2. Consider any application you have been using for more than 1 year, however simple it may be. Explain how the characteristics of the user interface have been implemented for that application.

3. What are the simple rules to keep in mind while designing user interfaces?

4. Why is the user important for designing user interfaces?

5. It is said user interface design is more a "study of people" than use of technology. Debate this statement.

6. Why do we need to analyze user inputs for interface designs?

7. What are the modes of collecting user inputs for designing the user interface?

8. State and explain the three different categories of user interface design.






Chapter 6
Code Construction - Standards and
Guidelines
Objective:

On completion of this chapter you would be able to understand

❖ Basic coding concepts

❖ Identify the necessity of coding standards.

❖ Differentiate between coding standards and coding guidelines

❖ Examples of General Coding Standards, Language Specific Standards and Project Specific Standards

Structure

6.1 Code Construction

6.2 Standards Vs. Guidelines

6.3 General Coding Standards

6.4 Language Specific Standards

6.5 Project Specific Standards

6.6 Summary

6.7 Self-Assessment questions

! !220
CODE CONSTRUCTION - STANDARDS AND GUIDELINES

6.1 CODE CONSTRUCTION

"Before software can be re-usable it has to be first usable" - Ralph Johnson. A profound statement.

Much software has never seen the light of day or has remained unusable. Coding errors and quality issues either delay the development process or result in projects being shelved. Coding, the next phase in the SDLC after design, is a key phase where all ideas, customer requirements and design get converted into a more concrete entity, i.e. code. Coding is the phase of a software development project where developers actually input the source code into a computer, which will be compiled into the final software program.

Source code is the high-level language text (e.g. C#, Java, Python) that is typed into an IDE (integrated development environment) and stored in a text file on the computer. This text file is compiled into machine code, the instructions actually understood by the computer. Code construction involves multitudes of algorithms, operating systems, languages and databases - covering them is beyond the scope of this book. However, irrespective of the platform, there are certain rules and guidelines which need to be adhered to in the coding phase.

Imagine a large IT company of over 100,000 employees delivering multitudes of applications to different organizations across the globe. Given a choice, each of these employees will use their creativity to develop programs with fancy program names, coding styles and variable names like "Michael_Schoemaker_Amount" or "Katrina_Kaif_Height" that look appealing to them. If such software keeps getting added over the years, the "horrific" plight of the next team of software engineers who need to enhance, maintain or improve the existing programs can only be imagined. It must be emphasized that code is read much more often than it is written.

Good software development organizations normally require programmers to adhere to some well-defined and standard style of coding - conventions, practices and methods for each aspect of a program written in a specific language - called coding standards. These conventions usually cover file organization, indentation, comments, declarations, statements, white space, naming conventions, programming practices, programming


principles, programming rules of thumb, etc. Most software development organizations formulate their own coding standards that suit them best, and require their engineers to follow these standards rigorously. A coding standard lists several rules to be followed during coding, such as the way variables are to be named, the way the code is to be laid out, error return conventions, etc. Most arguments against a particular standard come from the ego. One needs to be flexible, control the ego a bit, and remember that any project is a team effort. The goal is to improve the productivity of all software development.

The benefits of enforcing a standard style of coding are:

❖ Software development & maintenance has become a critical component supporting the operations of any large organization. Experience over many projects indicates that coding standards help a project to run smoothly.

❖ The intent of standards is to define a natural style and consistency, yet leave programmers the freedom to be creative.

❖ A large portion of project scope is post-delivery maintenance. Coding standards reduce the cost of a project by easing the learning curve when code needs to be addressed by people other than its author, or even by the author after a long gap.

❖ Programmers can go into any code and figure out what's going on, so maintainability, readability and reusability are increased.

❖ Code walkthroughs become less painful.

❖ People new to a language can adapt to an existing style & get up to speed quickly.

❖ People new to a language can avoid making the same mistakes over and over again, so reliability is increased.

❖ College-learned behaviors are replaced with an emphasis on business realities - high productivity, maintainability, shared authorship, etc.

❖ A coding standard gives a uniform appearance to the code written by different engineers.

❖ Coding standards encourage good programming practices.

6.2 STANDARDS VS. GUIDELINES

Many times "guidelines" are confused with "standards" or vice versa. It is ideal to comply with all standards and guidelines; however, guidelines are different from standards, with different connotations. It begins with the organization policy, which is a formal and high-level statement or plan that embraces an organization's general beliefs, goals, objectives, and acceptable procedures for a specified subject area (in our case, coding). Policies always state required actions, including pointers to standards. Policies require mandatory compliance and focus on desired results, not on means of implementation.

One comes across the terms policy, standard, procedure and guideline, and they are frequently confused. A policy is a high-level statement uniform across the organization; it embodies business rules for fair and consistent staff treatment and ensures compliance. Example: a dress code policy, an e-mail and internet policy. A standard is the lowest-level control that cannot be changed; an acceptable level of quality or attainment. Example: standard of living, standard size. A procedure tells us step by step what to do to accomplish an end. Example: standard operating procedures (SOPs), a medical procedure. A guideline simply gives an overview of how to perform a task; a piece of advice on how to act in a given situation. Example: employment discrimination guidelines, a screening guideline.


Standard: A mandatory action or rule designed to support and conform to a defined policy; a standard makes a policy more meaningful and effective. Standards are rules which programmers are expected to follow. The goal of a coding standard is to increase reliability by promulgating intelligent coding practices.

For example, a coding standard may contain rules that help developers
avoid complicated language constructs, limit complexity of functions, and
use a consistent syntactical and commenting style. These rules can
drastically reduce the occurrence of flaws, make software easier to test,
and improve long term maintainability.

After a standard is approved by management, compliance is mandatory. All standards are used as reference points to ensure organizational compliance. During testing or auditing, reference is made to the standard, e.g. C++ standards, Web Accessibility standards, to ensure minimum compliance. A standard includes one or more accepted specifications for hardware, software or behavior. For example, a password standard may set out rules for password complexity, and a Windows standard may set out the rules for hardening Windows clients.

Guideline: General statements, recommendations, or administrative instructions designed to achieve the policy's objectives by providing a framework within which to implement procedures. A guideline can change frequently based on the environment and should be reviewed more frequently than standards and policies. A guideline is not mandatory; rather, it is a suggestion of a best practice, and guidelines are discretionary.


Guidelines are suggestions which help programmers write better software; they are optional, but highly recommended.

Guidelines should be viewed as best practices that are not usually requirements, but are strongly recommended. They could consist of additional recommended controls that support a standard, or help fill in the gaps where no specific standard applies. For example, a standard may require passwords to be at least 8 characters long with at least 1 capital letter, at least 1 special character and at least 1 numeric character. A supporting guideline may state that it is best practice to ensure the password expires after 30 days.
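The distinction can be made concrete in code. Below is a minimal sketch - meetsPasswordStandard is a hypothetical function name, and the exact rule set is an assumption based on the example above - that enforces only the mandatory standard (length and character classes). The 30-day expiry from the supporting guideline is deliberately left out, since guidelines are discretionary.

```cpp
#include <cctype>
#include <string>

// Hypothetical check for the password *standard* described above:
// at least 8 characters, at least one uppercase letter, one special
// character and one numeric character. The 30-day expiry from the
// supporting *guideline* is deliberately not enforced here.
bool meetsPasswordStandard(const std::string& password) {
    if (password.size() < 8) return false;
    bool hasUpper = false, hasDigit = false, hasSpecial = false;
    for (unsigned char c : password) {
        if (std::isupper(c)) hasUpper = true;
        else if (std::isdigit(c)) hasDigit = true;
        else if (!std::isalnum(c)) hasSpecial = true;  // anything non-alphanumeric
    }
    return hasUpper && hasDigit && hasSpecial;
}
```

A compliance audit would test only this standard; the expiry guideline would be checked, if at all, as a recommendation.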

There are three parts to Coding Standards viz. General Coding Standards,
Language Specific Standards and Project Specific Standards

6.3 GENERAL CODING STANDARDS

These standards are generally specified by the customer or the organization. A coding style that is too clever or too difficult to understand should be avoided; code should be easy to understand. Many inexperienced engineers actually take pride in writing cryptic and incomprehensible code. Clever coding can obscure the meaning of the code and makes maintenance difficult.

Also one must avoid obscure side effects. An obscure side effect is one that
is not obvious from a casual examination of the code. Obscure side effects
make it difficult to understand a piece of code. For example if some file
Input/Output is performed which is difficult to infer from the function's
name and header information, it becomes difficult for anybody trying to
understand the code.

Coding for Efficiency vs. Coding for Readability - there is a tension between writing software that runs efficiently and writing software that is easy to maintain. The programmer must carefully weigh efficiency gains against program complexity and readability.


Some General Coding Standards Examples:

❖ Every module, program, function or subroutine should be documented with the name of the author who created the file, the date created, a brief description or explanation of what it does, assumptions, modification history, etc. As a rule of thumb, there should be at least one comment line on average for every three source lines.

❖ Avoid using one identifier for multiple purposes. Programmers often use the same identifier to denote several temporary entities. For example, some programmers use a temporary loop variable for memory efficiency while computing and storing the final result. There are several things wrong with this approach, and hence it should be avoided. Use of variables for multiple purposes usually makes future enhancements more difficult.
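The rule above can be illustrated with a short sketch (sumOfElements is an illustrative function, not from the text): each identifier serves exactly one purpose - the loop variable only iterates, and a separate variable holds the final result.

```cpp
#include <vector>

// Each identifier has a single purpose: the loop variable only
// iterates, and a dedicated variable accumulates the result.
int sumOfElements(const std::vector<int>& values) {
    int sum = 0;                 // dedicated to the result, nothing else
    for (int value : values) {   // value is only the current element
        sum += value;
    }
    return sum;
}
```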

❖ Variables shall have mnemonic or meaningful names that convey to a casual observer the intent of their use. Variables shall be initialized prior to their first use.

❖ Error messages should be meaningful. Error handling is an important aspect of computer programming. When possible, messages should indicate what the problem is, where the problem occurred, and when the problem occurred. Programmers generally "hard-code" error and notification messages in programs, which makes them difficult to review and modify in case of mistakes.
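One way to avoid scattering hard-coded error strings is to route every message through a single formatting helper. The sketch below is only one possible convention; formatError is a hypothetical name, and the what/where/when layout is an assumption based on the rule above.

```cpp
#include <string>

// Single point where every error message is assembled, stating
// what went wrong, where it occurred and when. Changing the message
// format later means editing one function, not every call site.
std::string formatError(const std::string& what,
                        const std::string& where,
                        const std::string& when) {
    return "ERROR: " + what + " [in " + where + " at " + when + "]";
}
```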

❖ Proper and consistent indentation is important to produce easy-to-read and maintainable programs. Indentation is used to

• Emphasize the body of a conditional statement

• Emphasize the body of a control statement such as a loop or a select statement

• Emphasize a new scope block

• A simple rule: a minimum of 3 spaces shall be used to indent, consistently applied throughout the program.
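Applied to a small function (countPositives is an illustrative example, not from the text), the three-space rule looks like this - each nested body is indented a further three spaces:

```cpp
#include <vector>

// Consistent three-space indentation: the loop body and the
// conditional body are each indented exactly one more level.
int countPositives(const std::vector<int>& values) {
   int count = 0;
   for (int value : values) {
      if (value > 0) {
         count = count + 1;
      }
   }
   return count;
}
```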

❖ Inline comments - where appropriate, inline comments adding important information are recommended, e.g. int gStateFlag; /* This state variable is defined here, initialized in Main */. As the name suggests, inline comments appear in the body of the source code itself. Inline comments promote program readability, allow a person not familiar with the code to understand it more quickly, and reduce the amount of time required to perform maintenance tasks. A rule of thumb is that inline comments should make up 20% of the total lines of code in a program.

❖ The name of the source file or script shall represent its function. All of
the routines in a file shall have a common purpose. All folder and file
names shall begin with a 3 character prefix indicating the company
name. Or folder names shall indicate what the contents are e.g.
SUV_HIREaTEMPO_DesignDiagrams.

❖ The length of any function should not exceed 10 source lines. A function that is very lengthy is usually very difficult to understand, as it probably carries out many different functions. For the same reason, lengthy functions are likely to have a disproportionately larger number of bugs.

❖ Subroutines, functions and methods should be reasonably sized. Restrict each module to one function or action (i.e. each module should only do one "thing"). Don't try to accomplish too many functions in one module.

❖ Strictly avoid "goto" statements: Use of goto statements makes a


program unstructured and makes it very difficult to understand.

❖ Proper use of spaces within lines of code enhances readability. A keyword followed by a parenthesis should be separated by a space. A blank space should appear after each comma in an argument list. Blank spaces should never separate unary operators such as unary minus, increment ("++"), and decrement ("--") from their operands.

❖ Wrapping lines - when an expression will not fit on a single line, break it, e.g. after a comma or after an operator.

❖ Use of parentheses - it is better to use parentheses liberally. Even in cases where operator precedence unambiguously dictates the order of evaluation of an expression, include parentheses to improve readability.

❖ And many more…

6.4 LANGUAGE SPECIFIC STANDARDS

Every language has its own language specific coding standards. C++ is standardized by an ISO working group; one of the major revisions of the C++ standard, C++11, was released on 12 August 2011. Some coding standard examples are:

❖ Rules for limiting the use of "global". These rules list what types of data can be declared global and what cannot. In C++, global variables should always be referred to using the :: operator, e.g. ::mainWindow.open(), ::applicationContext.getName(). In general, the use of global variables should be avoided and the use of singleton objects needs to be considered.

❖ Naming conventions for global variables, local variables, and constant identifiers: a possible naming convention is that global variable names always start with a capital letter, local variable names are made of small letters, and constant names are always capital letters. In C++, names representing types must be in mixed case starting with upper case, e.g. Line, SavingsAccount; named constants must be all uppercase, using underscore to separate words, e.g. MAX_ITERATIONS, COLOR_RED, PI.
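Taken together, these C++ conventions might look like the following sketch; SavingsAccount and INTEREST_RATE are illustrative names, not a real API.

```cpp
#include <string>

// Named constant: all uppercase, underscore-separated.
const double INTEREST_RATE = 0.04;

// Type name: mixed case starting with upper case.
class SavingsAccount {
public:
    explicit SavingsAccount(double openingBalance)
        : balance(openingBalance) {}            // local names: lower case
    double yearlyInterest() const { return balance * INTEREST_RATE; }
private:
    double balance;
};
```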

❖ Error handling: the way error conditions are reported and handled by different functions in a program should be standard within an organization. For example, different functions encountering an error condition should consistently return either a 0 or a 1. Exception handling in C++ is a construct designed to handle the occurrence of exceptions, that is, special conditions that change the normal flow of program execution. C++ supports the use of language constructs to separate error handling and reporting code from ordinary code, that is, constructs that can deal with these exceptions (errors and abnormalities).
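A minimal sketch of this separation - safeDivide and describeDivision are hypothetical examples - keeps the ordinary computation free of error reporting and concentrates the handling in a catch block:

```cpp
#include <stdexcept>
#include <string>

// Ordinary code: raises an exception on the error condition and
// otherwise contains no error-handling logic at all.
double safeDivide(double numerator, double denominator) {
    if (denominator == 0.0)
        throw std::invalid_argument("division by zero");
    return numerator / denominator;
}

// Error handling and reporting are kept separate, in the catch block.
std::string describeDivision(double a, double b) {
    try {
        return "result: " + std::to_string(safeDivide(a, b));
    } catch (const std::invalid_argument& e) {
        return std::string("error: ") + e.what();
    }
}
```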

❖ Classes: classes should be named using "CamelCase", e.g. abstract class DatabaseConnection extends PDO. Class methods and properties should use "lowerCamelCase", e.g. public $lastStatement;.

❖ Subroutines, functions, and methods: subroutines, functions, and methods shall be reasonably sized. A good rule of thumb for module length is to constrain each module to one function or action (i.e. each module should only do one "thing"). The names of subroutines, functions, and methods shall have verbs in them, i.e. names shall specify an action, e.g. "get_name", "compute_temperature".
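Applied to the two example names from the text, a sketch might read as follows; both function bodies are illustrative assumptions.

```cpp
#include <string>

// Verb-named subroutines, each doing exactly one thing.
std::string get_name(const std::string& first, const std::string& last) {
    return first + " " + last;
}

double compute_temperature(double fahrenheit) {
    return (fahrenheit - 32.0) * 5.0 / 9.0;   // convert to Celsius
}
```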

6.5 PROJECT SPECIFIC STANDARDS

These standards are based on the general coding standards & language standards. They apply to the specific project for future ease of maintenance. The language & project standards supplement, rather than override, the general coding standards. Coding and language standards are generally customized for each project based on

❖ Company standards for the language

❖ Customer standards for the language

❖ Requirements of the domain - security, performance, etc.


6.6 SUMMARY

Lack of standards and guidelines for coding has resulted in many software applications not seeing the light of day or remaining unstable. The coding phase follows the design phase; here all ideas, customer requirements and design get converted into a more concrete entity, i.e. code. This is the phase where developers actually input the source code into a computer, which will be compiled into the final software program. It must be emphasized that code is read much more often than it is written. Code construction includes multitudes of algorithms, operating systems, languages and databases. Irrespective of the platform, there are certain rules and guidelines which need to be adhered to in the coding phase. A coding standard lists several rules to be followed during coding, the goal being improvement of the productivity of all software development. The benefits of enforcing a standard style of coding accrue not just to the programmer coding the program but to future maintenance, the learning speed of new programmers assigned to the project, and the productivity and quality of deliverables. Guidelines, often confused with standards, are different and stem from best practices. They are general statements, recommendations, or administrative instructions designed to achieve the policy's objectives by providing a framework within which to implement procedures. Standards are mandatory actions or rules designed to support and conform to a defined policy; a standard makes a policy more meaningful and effective. Standards are rules which programmers are expected to follow. There are general coding standards, language specific standards and project specific standards. It is important to remember that when one codes for efficiency the code must also be easy to maintain, i.e. code for readability.



6.7 SELF-ASSESSMENT QUESTIONS

1. What is the coding phase? What is its significance?

2. What is meant by the word "standard"? Give some real life examples of a "standard" and its usage.

3. Why is a standard required for coding?

4. Are guidelines the same as standards? Why not? Give examples of a standard and its related guidelines.

5. Give some examples of general coding standards for any project, irrespective of the language of coding.

6. Give some examples of language specific standards for any programming language that you are familiar with.

7. What are the different ways of documenting program code? Which of these are usually the most useful for understanding a piece of code?

8. Give some examples of software project specific standards.

9. What is a coding standard? Identify the problems that might occur if the engineers of an organization do not adhere to any coding standard.

10. What is the difference between coding standards and coding guidelines? Why are these considered important in a software development organization?

11. Write down five important coding standards.

12. Write down five important coding guidelines.

13. Why is it important to properly document a software product?

14. Differentiate between the external and internal documentation of a software product.






Chapter 7
Testing Phase
Objective:

On completion of this chapter you would be able to understand

❖ Importance and Need for Testing

❖ The terminologies used in testing phase

❖ Objectives of testing

❖ What is testing?

❖ Verification and Validation and their differences

❖ Static Testing, what code review is, code inspection is, etc.

❖ Differentiate between functional testing and structural testing

❖ Differentiate between testing in the large and testing in the small

❖ The SDLC and V-model for Testing

❖ Different techniques of testing

❖ Concepts of Debugging and the difference between Testing and Debugging

❖ Different types of testing - Unit, Integration, System, Acceptance Testing

❖ Usage of Stubs and Drivers during testing

❖ Functional and Non-Functional Testing

❖ What is Black Box Testing and its advantages and disadvantages

❖ What is White Box Testing and its advantages and disadvantages

❖ Experience Based, Error Guessing and Cause-Effect Graphing Testing

❖ Automation in Testing and tools used for testing

❖ Defect Management

Structure:

7.1 Importance and Need of Testing

7.2 Testing Myths

7.3 Strategic Approach- Verification and Validation

7.4 Principles, Terminologies and Objectives of Testing


7.4.1. Principles of Testing
7.4.2. Terminologies Used in Testing
7.4.3. Objectives of Testing
7.4.4. Writing Test Cases

7.5 Types of Testing


7.5.1. Static Testing
7.5.2. Dynamic Testing

7.6 Debugging

7.7 SDLC and V-model of Testing

7.8 Levels of Testing


7.8.1. Unit Testing
7.8.2. Integration Testing
7.8.3. System Testing
7.8.4. User Acceptance Testing
7.8.5. Smoke Testing, Regression Testing and Exhaustive Testing

7.9 Dynamic Testing - White Box


7.9.1. White Box Testing - Coverage
7.9.2. Basis Path Testing
7.9.3. Cyclomatic Complexity


7.9.4. Loop Testing


7.9.5. Memory Leaks
7.9.6. Mutation Testing
7.9.7. White Box Testing - Advantages & Disadvantages

7.10 Dynamic Testing - Black Box Testing


7.10.1. Black Box Testing - Equivalence Partitioning
7.10.2. Black Box Testing - Boundary Value Analysis
7.10.3. Black Box Testing - Error Guessing
7.10.4. Black Box Testing - Cause Effect Graphing
7.10.5. Black Box Testing - Advantages & Disadvantages

7.11 Automation in Testing, When to Stop Testing


7.11.1. Automation in Testing
7.11.2. Automation Tools - Examples
7.11.3. When to Stop Testing

7.12 Defect Management


7.12.1. Severity and Priority
7.12.2. Defect Life Cycle
7.12.3. Defect Reporting
7.12.4. Defect Measurement
7.12.5. Test Metrics
7.12.6. Defect Management Process

7.13. Summary

7.14. Self-Assessment Questions


7.1 IMPORTANCE AND NEED OF TESTING

Software testing is the next phase after completion of design and code construction. Testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing also provides an objective, independent view of the software that allows the business to appreciate and understand the risks of software implementation.

Software testing is necessary because mistakes are made by humans. Some mistakes are unimportant, but some of them are expensive or dangerous. Before delving into the realms of testing, one needs to understand its importance and its need. Some examples are:

❖ Between January 4th and May 24th, 2000, 158 women were told that they had very little reason to worry about having a child with Down syndrome, when in fact four of them were carrying fetuses with the abnormality. An investigation into the incident revealed that the software automatically assumed that patients weighed zero pounds if the actual weight was not known.

❖ Knight Capital's $440 million loss (The Register): Knight Capital, a firm that specializes in executing trades for retail brokers, took $440m in cash losses in 2012 due to a faulty test of new trading software. Unfortunately, the trading algorithm the program was using was a bit eccentric as well: Knight Capital's software went out and bought at the "market" price and then sold at the bid price - instantly - over and over and over again. Knight's fast buys and sells moved prices up and attracted more action from other trading programs. This only increased the losses resulting from their trades to the point where, at the end of the debacle 45 minutes later, Knight Capital had lost $440m and was teetering on the brink of insolvency.



❖ Overcharging by Walgreen Co. in the U.S.: Walgreen Co., the largest U.S. drugstore chain, accidentally overcharged as many as 4 million customers buying gifts and decorations two days before Christmas because its payment-processing system malfunctioned from overuse.

❖ The 2003 blackout in North America deprived an estimated 50 million


people of power. The blackout was made possible by a bug in General
Electric's Unix-based monitoring software that kept operators from
learning of a local power outage. The glitch's domino effect cut off power
in eight U.S. states and in Ontario, Canada.



The list is endless…

"Computers do what they're told," says Lawrence Pingree, an analyst at Gartner. "If they're told to do the wrong thing, they're going to do it and they're going to do it really, really well." The cost of some IT failures has climbed into the billions.

One of the most expensive computer problems of the millennium was the Y2K bug. Research firm IDC estimated that U.S. businesses, government agencies and individuals spent nearly $200 billion dealing with the problem. To be fair to the programmers who introduced this defect, the cost of disk space and associated infrastructure in the pre-Y2K era was enormous; disk sizes were minuscule compared to the 1 TB drives available currently, and hence using 2 digits for the year in any date made a lot of economic sense.

It is ironic that software failure is predictable and avoidable. Most


organizations don't see preventing failure as an urgent matter, even though
that view risks harming the organization and maybe even destroying it.


The examples of disasters caused by faulty programming only re-emphasize the need for testing. The reasons for such failures are several:

❖ Erroneous programming: wrong calculations, omitting essential logic paths, parameterization problems (lack of it, too little of it, or too much), lack of audit trails, legal provisions overlooked, etc.

❖ Ambiguous requirements specifications: for example, the statements "The response time should be reasonable" or "A minor account holder will be allowed to operate the account with the guardian" are very ambiguous. What is "reasonable" could be different for different people. Can a guardian operate the account?

❖ Wrong interpretation of requirements: this arises mainly due to ambiguity, but sometimes two persons may interpret specifications in two different ways. For example, "Railway reservations can be effected sixty days in advance": one programmer may check for an exact 60-day match between the ticket booking date and the travel date, while another programmer will check "correctly" for travel dates less than or equal to 60 days from the booking.

❖ Absence of standardized methods of programming: overzealous creativity kills. Programmers make mistakes in branching and looping; there is an absence of a modular approach in programming; the concept of 'reusability of code' is not implemented.

And many more…

Of course, IT projects rarely fail for just one or two reasons. Most failures,
in fact, can be traced to a combination of technical, project management,
and business decisions. Testing becomes all the more important to avoid
project failures.


7.2 TESTING MYTHS

Software testing has its share of myths, just as every other field, arising due to a lack of authoritative facts, the evolving nature of the industry and general flaws in human logic. Though common sense tells us that testing is part of learning, it is challenged by myths. Some of the prevalent myths are:

❖ It's a manual process


Many professionals believe that manual testing is a simple set of step-by-step tasks that anyone can run through to check an expected output. Nothing could be further from reality. Though automated testing is necessary and possible, manual and automated testing go hand-in-hand and complement each other. Automated testing is a form of testing that utilizes scripts to automatically run a set of procedures on the software under test, to check (which is a small part) that the steps coded in the script work. Manual testing is more than just checking. The most useful and important tool when it comes to manual testing is the "brain"; a computer just cannot replicate this. An automated script can do a lot of things, but there are still many things it cannot do. In order to produce the highest quality application, one should have a strong manual testing element in place alongside an automated framework.

❖ It's a repetitive & boring task


Fresh graduates, who are exposed to testing for a few hours in a software engineering course and have no experience in industry, believe that testing is a mindless and boring task. Testing can indeed be a boring, mundane and monotonous task if one is NOT doing it right. In reality, testing requires creativity, alertness, and most importantly a passion for quality. As in every field - be it web design, accounting, banking or flying an aircraft - there are sometimes mundane tasks of a repeating nature. But a good tester usually finds creative ways to solve repetitive tasks. A good tester looks at testing as an information gathering activity done with the intent of exploring and discovering answers - NOT just flaws or bugs in the software - to questions that nobody had asked before.

❖ Testers are second-class citizens
Too many people assume that testing can't be that hard if a general user
finds bugs all the time, and that testers are therefore "second-class
citizens" compared to designers and developers. In fact, that is a totally
unfair assessment, since testing is a complex craft and not every
professional's cup of tea. Google's Patrick Copeland describes what makes
a great tester:

Quote: "From the 100s of interviews I've done "great" boils down to: 1) a
special predisposition to finding problems and 2) a passion for testing to go
along with that predisposition. In other words, they love testing and they
are good at it. They also appreciate that the challenges of testing are,
more often than not, equal or greater than the challenges of programming.
A great "career" tester with the testing gene and the right attitude will
always be able to find a job. They are gold."- Unquote

❖ Testing involves paperwork


"Ink is better than the best memory" - Chinese proverb
The general opinion about testing documentation is that only when the
programmer or analyst has free time one should do documentation for Test
cases, Test plans, test status report, Bug report, test metrics etc. Though
documentation cannot be the objective of testing, it is a great habit to
place all the data in black and white and to update others about that as
well. It is when things go wrong especially with customers' bringing up
mistakes which were due to missing information in their requirements -
that one realizes the importance of documentation. Careful documentation
can save an organization's time, efforts and money.

❖ There's no career growth in testing


With the power and reach of the internet, customers have multiple options
to explore before deciding on a vendor for a product or application. Gone
are the days when users accepted any product given to them, irrespective
of quality. There is an abundance of competing software and increasingly
demanding users, which increases the demand for good software testers to
ensure high quality. There are dedicated institutions for training fresh
engineers in the testing domain alone, and IT organizations that
specialize in testing-related projects. People choose testing as a
pre-decided career path rather than a stumbled-upon or "Hobson's" choice,
and have made significant strides in taking up testing as a demanding
professional career.

❖ There's no challenge in testing
Ask any programmer the time that he/she will take to complete a simple
login screen, from design to working module. The requirements of a login
screen are pretty standard and universal, yet 80% of programmers will
exclude testing time from their estimates. The assumption is that
development is more challenging and time-consuming. The reality is that
testing can be difficult, time-consuming and challenging, sometimes more
so than coding. Creativity is essential when formulating test approaches,
designing tests, writing test scripts, creating test data and, more
importantly, putting oneself in the "customer's shoes" to simulate live
working conditions during testing. A skilled tester is often an expert in
the product or application being tested. Programmers spend most of their
time working on a very specific area, function or component of the
application, whereas a tester analyzes and understands how the entire
system works from an end-to-end standpoint. Testers need to demonstrate
their understanding of the product in a way that adds value to the product.

There are many more myths, but in summary they are misplaced beliefs; the
reality of testing is different. The subsequent sections provide some
insight into the challenges of the testing phase.


7.3 STRATEGIC APPROACH - VERIFICATION AND VALIDATION

Verification and validation (V&V) is the process of checking that a
software system meets specifications and that it fulfills its intended
purpose. It is also referred to as software quality control.

According to the CMMI definitions:

❖ Software Verification: The process of evaluating software to determine
whether the products of a given development phase satisfy the conditions
imposed at the start of that phase.

❖ Software Validation: The process of evaluating software during or at the
end of the development process to determine whether it satisfies specified
requirements.

Verification and validation are not the same thing, although they are
often confused. Boehm clarified the difference between them with a play on
English words as follows:

❖ Verification: Are we building the product right? It refers to the set of
activities that ensure that the software correctly implements a specific
function. This is a static method for verifying design and code.

❖ Validation: Are we building the right product? Validation refers to a
different set of activities that ensure that the software that has been
built is traceable to customer requirements. This is a dynamic process for
checking and testing the real product.

Software verification is ensuring that the product has been built according
to the requirements and design specifications, while software validation
ensures that the product actually meets the user's needs, and that the
specifications were correct in the first place.

V&V encompasses a wide range of SQA activities that include formal
technical reviews, quality and configuration audits, performance
monitoring, simulation, etc. V&V tasks vary during different phases of the
development lifecycle. Verification and validation are performed in each
of the phases of the lifecycle.


Planning
❖ Verification of contract
❖ Review and evaluation of concept document
❖ Review of Test plans and Strategy
❖ Performing risk analysis

Requirement phase
❖ Review and evaluation of software requirements
❖ Review and evaluation of the interfaces
❖ Review of Software Requirement Specifications
❖ Generation of systems test plan
❖ Generation of acceptance test plan

Design Phase
❖ Review and evaluation of software design
❖ Review of diagrams and documents
❖ Review and evaluation of the Interfaces (UI)
❖ Generation of Integration test plan
❖ Generation of Component test plan
❖ Generation of Test design

Code Construction Phase
❖ Review of source code
❖ Review of documents
❖ Generation of test cases
❖ Generation of test procedure
❖ Execution of Components test cases

Testing Phase
❖ Execution of White Box and Black Box testing
❖ Execution of Functional and Non-Functional Tests
❖ Execution of unit test cases
❖ Execution of integration test cases
❖ Execution of systems test cases
❖ Execution of acceptance test cases
❖ Capturing Defects and Generating Metrics for testing phase
• Updating of traceability matrix
• Risk analysis

Implementation phase
❖ Review of installation documents
❖ Review of installation and configuration
❖ Final test of the installation candidate build

In summary, verification means the review activities (the static testing
techniques) and validation means the actual test execution activities (the
dynamic testing techniques).

The activities listed above look daunting for one individual to complete.
Verification and validation are performed by different groups of people
during the development lifecycle. The table below gives some examples of
verification activities and the people performing them.

Who will Test ….. VERIFICATION

Activity: Requirement Review
Performed By: Users, Developers
Explanation: Study & discussion of the computer system requirements to ensure they meet stated user needs and are feasible.
Deliverable: Reviewed statement of requirements ready to be translated into system design.

Activity: Design Reviews
Performed By: Developers
Explanation: Study & discussion of the computer system design to ensure it will support the system requirements.
Deliverable: System design ready to be translated into computer programs, hardware configurations, documentation, training.

Activity: Code Walkthrough
Performed By: Developers
Explanation: An informal analysis of the program source code to find defects & verify coding techniques.
Deliverable: Computer software ready for testing or more detailed inspections by the developer.

Activity: Code Inspection
Performed By: Developers (team and subject matter experts)
Explanation: A formal analysis of the program source code to find defects as defined by meeting computer system design specifications.
Deliverable: Computer software ready for testing by the developer.


The table below gives some examples of validation activities and the
people performing them.

Who Will Test ….VALIDATION

Activity: Unit Testing
Performed By: Developers
Explanation: Testing of a single program, module, or unit of code. Validates that the software performs as designed.
Deliverable: Software unit ready for testing with other system components, such as other software units, hardware, documentation, or users.

Activity: Integration Testing
Performed By: Developers
Explanation: Testing of related programs, modules, or units of code. Validates that multiple parts of the system interact according to the system design.
Deliverable: Portions of the system ready for testing with other portions of the system.

Activity: System Testing
Performed By: Users, Developers
Explanation: Testing of an entire computer system. Can include functional & structural testing, such as stress testing. Validates the system requirements.
Deliverable: A tested computer system, based on what was specified to be developed or purchased.

Activity: Acceptance Testing
Performed By: Users
Explanation: Testing of a computer system or parts of a computer system to make sure it will work in the system regardless of what the system requirements indicate.
Deliverable: A tested computer system, based on user needs.


7.4 PRINCIPLES, TERMINOLOGIES AND OBJECTIVES OF TESTING

7.4.1 PRINCIPLES OF TESTING

Alan M. Davis has proposed seven fundamental principles for testing

1. Testing shows presence of defects


"Probably the major weakness in all software testing methods is their
inability to guarantee that a program has no errors."- Glenford Myers,
Software Reliability: Principles & Practices, 1976

Testing can show that defects are present, but cannot prove that there are
no defects. Even after an application has been thoroughly tested and has
been in use for more than 10 years, one can never claim that the software
is 100% defect free. Testing reduces the number of undiscovered defects
remaining in the software, but even if no defects are found, that is not a
proof of correctness. Therefore, it is important to design test cases that
find as many defects as possible.

2. Exhaustive testing is impossible


Testing everything, including all combinations of inputs and
preconditions, is not possible. Unless the application under test (AUT)
has a very simple logical structure and limited input, it is not possible
to test all possible combinations of data and scenarios. For example, if
one screen of an application has 15 input fields, each with 5 possible
values, then to test all the valid combinations you would need
30,517,578,125 (5^15) tests. It is practically impossible to allow for
this number of tests in any project schedule. Hence testing has to be
planned by assessing and managing the risks and prioritizing the test
cases to achieve an objective as close to the ideal as possible.
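The arithmetic behind this example is easy to verify:

```python
# 15 input fields, 5 possible values each: the number of valid
# combinations grows as values ** fields.
fields, values = 15, 5
total_tests = values ** fields
print(total_tests)  # 30517578125

# Even at an (optimistic) rate of one automated test per millisecond,
# running them all would take roughly a year:
days = total_tests / 1000 / 86400
print(round(days))  # 353
```

A single extra field would multiply the total by five again, which is why risk-based prioritization, not exhaustiveness, drives real test planning.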

3. Begin Testing Early


It is a general myth among programmers that testing can be done only after
the code is written and compiled. In a typical business application
development project, around 30-40% of the time is spent on testing. This
huge effort by itself signifies the importance of testing.

It is quite common for the testing phase to get squeezed at the end of the
development lifecycle, i.e. when development has finished, and surprises


surface, tracing defects back to the requirement or design phases.
The cascading delays impact both quality and customer interests. The
reality is that testing activities should start as early as possible and
should be focused on defined objectives. Bugs can be introduced into
software from the first stage of the SDLC itself, and can be costly if not
fixed in time, so it is best to start testing from the requirements
gathering phase itself.

An analogy can be taken from the polio vaccination given to children
before they reach school age. Failure to vaccinate may result in "defects"
surfacing much later in life, when it is too late.

In software development too, when defects are found earlier in the
lifecycle they are much easier and cheaper to fix. Right from the time of
requirement gathering through design, testing and implementation, testing
goes hand in hand with development. The sooner testing activities are
started, the better the quality of the deliverables and the fewer the
delays in projects. It is much cheaper to change an incorrect requirement
than to change functionality in a large system that is not working as
specified by the customer.



4. Defects tend to cluster


"As the number of detected errors in a piece of software increases, the
probability of the existence of more undetected errors also increase"
Glenford Myers, Software Reliability - Principles and Practices, 1976

During testing it is generally observed that most of the reported defects
are related to a small number of modules within a system, i.e. a small
number of modules contain most of the defects. The Pareto principle
applies to software testing: 80% of all errors are found in 20% of the
modules. Testers can focus on these areas in order to find more defects,
which reduces the time and cost of finding them and improves the quality
of the software.
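The principle can be illustrated with hypothetical per-module defect counts; the module names and numbers below are invented for the sketch:

```python
# Hypothetical defect counts per module, illustrating defect clustering.
defects = {"billing": 120, "auth": 95, "reports": 9, "ui": 7, "search": 5,
           "export": 4, "admin": 3, "logs": 4, "help": 2, "about": 1}
total = sum(defects.values())

# The two most defect-ridden modules (20% of the 10 modules)...
ranked = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)
top_two = ranked[:2]

# ...hold the lion's share of the defects.
share = sum(count for _, count in top_two) / total
print(f"{share:.0%} of defects live in 20% of the modules")
```

With these invented numbers, 86% of the defects sit in two of the ten modules, which is why a tester who tracks defect density per module knows where to dig next.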

5. Pesticide paradox
Boris Beizer wrote:
"Every method you use to prevent or find bugs leaves a residue of subtler
bugs against which those methods are ineffectual."
In simple terms, no single method or technique will find or prevent all
bugs, so one must use a variety of approaches, techniques and methods in
testing. If the same kinds of tests are repeated again and again,
eventually the same set of test cases will no longer find any new bugs.
After a certain number of testing iterations most bugs get fixed and the
clustered defect areas get cleaned up, yet developers may keep focusing on
these areas and ignore other potentially defect-ridden ones. To overcome
this "pesticide paradox" it is important to review the test cases
regularly and to write new and different tests that exercise different
parts of the software or system, to potentially find more defects.

6. Testing is context dependent


Different kinds of applications and products are tested differently. For
example, testing software that launches a space satellite is significantly
different from testing a payroll application for a large organization, and
safety-critical software is tested differently from an online shopping
site. The methodologies, techniques and types of testing depend on the
type and nature of the application. Today, with online payment and online
travel ticket booking becoming common, applications handling such
transactions need to go through rigorous performance testing as well as
functionality testing to make sure performance is not affected by the load
on the servers.


7. Absence of errors is a fallacy


Testing is done to find defects, but if testing did not find any defects
one cannot conclude that the software contains none and is ready to be
shipped. If no defects are found, alarm bells should ring:

❖ Were the executed tests well designed to catch the most defects?
❖ Were the test cases sufficient?
❖ Were the test cases designed to match the user's requirements?
❖ Was the execution done properly?

In other words, a test that finds no errors is different from proof that
the software is error-free. It should be assumed that all software
contains some faults, even if they are hidden.

A few other principles to keep in mind are:

❖ Testing must be done by an independent party. In many software
engineering methodologies, the testing phase is a separate phase performed
by a different team after the implementation is completed. There is merit
in this approach; it is hard to see one's own mistakes, and a fresh eye
can discover obvious errors much faster than the person who has read and
re-read the material many times. Testing should not be performed by the
person or team that developed the software, since they tend to defend the
correctness of the program.

❖ All tests should be traceable to customer requirements. The proof of the
pudding is in the eating: the customer is the final authority for
accepting the software after it passes through testing. Unless the
software conforms to the requirements specified (and implied) by the
customer it is of no value. Tests should be designed so that every test
case can be traced back to one or more requirements.

❖ One must assign the best personnel to the task. Because testing requires
high creativity and responsibility, only the best personnel should be
assigned to design, implement and analyze test cases, test data and test
results.

❖ Testing must be done for invalid and unexpected input conditions as well
as valid conditions. The program should generate correct messages when


an invalid test is encountered and should generate correct results when
the test is valid.

❖ The software must be kept static during testing. The program must not be
modified during the execution of the set of designed test cases.
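The traceability principle above can be checked mechanically. Here is a hedged sketch with invented requirement and test-case IDs, flagging both orphan tests and untested requirements:

```python
# Invented IDs for illustration; a real project would pull these from its
# requirements and test-management tools.
requirements = {"REQ-01", "REQ-02", "REQ-03"}
test_traces = {
    "TC-001": {"REQ-01"},
    "TC-002": {"REQ-01", "REQ-02"},
    "TC-003": set(),          # orphan test: traces to no requirement
}

# Tests that cannot be traced back to any requirement...
orphans = [tc for tc, reqs in test_traces.items() if not reqs & requirements]
# ...and requirements no test exercises.
untested = requirements - set().union(*test_traces.values())

print(orphans)            # ['TC-003']
print(sorted(untested))   # ['REQ-03']
```

Both lists should be empty before testing is considered complete; either finding points to a gap in the traceability matrix.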

7.4.2 Terminologies Used in Testing

Large applications like banking, insurance, ERP, accounting, human
resource management, tours and travel booking, online shopping, etc. take
a long time and a lot of effort to develop. Testing such applications
cannot be an ad hoc process taken up after the product or application is
fully completed. Testing starts with a strategy, which is a road map to
ensure quality deliverables throughout the development cycle. Testing can
never be completed in a single iteration, because code changes when
defects found in one testing cycle are fixed and the fixes need to be
retested, or modules get added in phases, which requires multiple cycles
to complete the testing. In each cycle there is a further segregation of
testing activities, where sets of test cases are executed as a package
(test suite). Many times the same tests need to be executed repeatedly,
either with the same data or different data, and it is impossible to
manually type each command and each keystroke. Test scripts, which are
automation scripts, are written in a programming language like VBScript,
Java or Python and can be interpreted and executed automatically by a
testing tool. For each test suite there are several test cases and test
data that need to be prepared, covering all scenarios gathered and
analyzed during the requirement gathering and design phases.

Some commonly used terms associated with testing are:

❖ Bug: This has become generic usage. A software bug is a flaw or mistake
in a computer program that prevents it from working as intended, or
produces an incorrect result.

❖ Error (or mistake): A human action that produces an incorrect result,
for example incorrect syntax in code.

❖ Defect (fault): A product anomaly, as found in a hardware device or
component, e.g. a short circuit or broken wire. In software, a defect is
code that does not correctly implement the requirements or intended
behavior.

❖ Failure: If, as a result of the defect/error, the system performs an
undesired action or fails to perform a desired action, then this is referred
to as a failure. It is a deviation of the software from its expected delivery
or services. A system may be reliable but not correct, i.e. it may contain
faults but if those faults are never executed the system is considered
reliable.

❖ Test Case: A set of test inputs, execution conditions & expected results
developed for a particular objective, such as to exercise a particular
program path or to verify compliance with a specific requirement. It is
referred to as a triplet "I S O", where I is the data input to the system,
S is the state of the system at which the data is input, and O is the
expected output of the system.

❖ Test Data: Inputs devised to test the system. Both Live Data as well as
Test Data can be used to test the system. Live Data is mostly valid but
Test data must include both valid as well as invalid data.

❖ Test Script: A series of commands or events stored in a scripting
language file that execute a test case and report the results.

❖ Test Suite: A set of individual test cases/scenarios that are executed
as a package, in a particular sequence, to test a particular aspect - for
example a test suite for a GUI or a test suite for functionality. This is
the set of all test cases with which a given software product is to be
tested.

❖ Test Cycle: A test cycle consists of a series of test suites which
comprise a complete execution set, from the initial setup of the test
environment through reporting and clean-up, e.g. an integration test cycle
or a regression test cycle.

❖ Test Strategy: Provides a road map that describes the steps to be
conducted as part of testing, when these steps are planned and undertaken,
and how much effort, time and resources will be required. It incorporates
test planning, test case design, test execution and resultant data
collection and evaluation.

Reading the terminologies in reverse order gives a perspective of the
testing process, where one starts with a test strategy, determines the
test cycles, creates test suites for individual test scenarios, writes
scripts, devises

appropriate test data, creates a set of test cases for all scenarios and
executes them to detect bugs, which could be errors, defects, or result in
failures.
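That hierarchy (strategy holding cycles, cycles holding suites, suites holding test cases) can be sketched as nested data; all names and IDs below are illustrative:

```python
# A test strategy broken down into cycles, suites and cases.
test_strategy = {
    "cycles": [
        {"name": "integration test cycle",
         "suites": [
             {"name": "GUI suite", "cases": ["TC-101", "TC-102"]},
             {"name": "functionality suite", "cases": ["TC-201"]},
         ]},
        {"name": "regression test cycle",
         "suites": [
             {"name": "smoke suite", "cases": ["TC-001"]},
         ]},
    ]
}

# Walking the hierarchy top-down enumerates every case to be executed.
all_cases = [case
             for cycle in test_strategy["cycles"]
             for suite in cycle["suites"]
             for case in suite["cases"]]
print(all_cases)  # ['TC-101', 'TC-102', 'TC-201', 'TC-001']
```

Test-management tools store essentially this structure, which is what makes it possible to re-run a whole cycle, a single suite, or one case in isolation.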

7.4.3 Objectives of Testing

"Testing is a destructive, even sadistic, process, which explains why most


people find it difficult”
- Glenford Myers

Most programmers believe that testing is done to prove that the software
works or that the software does what the customer wanted. Although a
program may perform all of its intended functions, it may still contain
errors in that it also performs unintended functions. If the testing attitude
is to show that no errors are present, the likelihood of finding an error is
greatly decreased.

This however is not the objective of testing.

"Testing is the process of executing a program with the intent of


finding errors"

The "blood tests" recommended by doctors is usually not to find out if the
patient is healthy. The aim of the tests is to find defects in the patient's
body to determine physiological states, biochemical states, such as
disease, mineral content, pharmaceutical drug effectiveness, and organ
functions.

The aim of the software testing process is to identify all defects in a
software product that can cause potential damage. Testing provides a
practical way of reducing defects in a system and increasing the users'
confidence in the developed system. Testing a program consists of
providing the program with a set of test inputs (or test cases) and
observing whether the program behaves as expected. If the program fails to
behave as expected, the conditions under which the failure occurs are
noted for later debugging and correction.


A good "Test Case" is one that has a high probability of detecting


an as yet undiscovered error.
A blood test prescribed by the doctor should be such that it can detect
"errors" in the human system. Prescribing a simple "Hemogram" test to
measure glucose levels or cholesterol levels will not help. A "lipid profile"
test has a higher probability to detect abnormalities in the cholesterol
levels. Similarly a good "Test Case" must be relevant to detecting
undiscovered errors. Developing test cases requires good domain
knowledge, requirement understanding and user scenario visualization to
ensure that the developed software will meet customer expectations.

A successful test case is one that detects an as yet undiscovered
error.

Continuing the blood test analogy: if a report shows everything normal but
the patient still has physical symptoms of pain or discomfort, the report
becomes redundant. A successful blood test should ideally indicate some
deviation from the norms for some of the test conditions. Similarly, while
testing a software application or product, if no errors are detected alarm
bells must ring, and the test cases must be reviewed and refined to detect
undiscovered errors. There is an extremely low probability that any
software written contains no errors.

7.4.4 Writing Test Cases

Having talked about test cases, one would like to know how to write one.
This is more of an art than a science; every domain brings its own
challenges for writing test cases that cover the maximum possible
scenarios. To reiterate, a test case is a set of conditions under which a
tester determines whether an application or program is working as it was
originally intended to.

The topic of test cases could fill a book by itself. This book gives a
brief idea with an example of typical test cases for an ATM application.

A test case usually contains the following elements:

❖ Test Suite ID: The ID of the test suite to which this test case belongs.

❖ Test Case ID: The ID of the test case.

❖ Test Case Summary: The summary/objective of the test case.

❖ Prerequisites: Any prerequisites or preconditions that must be fulfilled
prior to executing the test.

❖ Test Procedure: Step-by-step procedure to execute the test.

❖ Test Data: The test data, or links to the test data, to be used while
conducting the test.

❖ Expected Result: The expected result of the test.

❖ Actual Result: The actual result of the test; to be filled in after
executing the test.

❖ Pass or Fail: Success or failure.

❖ Remarks: Additional information in case of failure.

❖ Status: Status of the test case.

Other details such as date of execution, date of creation, authors,
testers and test environment are also included along with the list of test
cases. An example of test cases is given below.
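The elements above can be sketched as a simple record; the field names mirror the list, but the class and its example values are a minimal sketch, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    suite_id: str
    case_id: str
    summary: str
    prerequisites: list = field(default_factory=list)
    procedure: list = field(default_factory=list)
    test_data: dict = field(default_factory=dict)
    expected_result: str = ""
    actual_result: str = ""
    status: str = "Not Run"
    remarks: str = ""

    def record(self, actual):
        # Fill in the actual result after execution and derive Pass/Fail.
        self.actual_result = actual
        self.status = "Pass" if actual == self.expected_result else "Fail"

tc = TestCase("TS-ATM-01", "TC-001",
              "Card reader accepts a readable card",
              expected_result="PIN prompt shown")
tc.record("PIN prompt shown")
print(tc.case_id, tc.status)  # TC-001 Pass
```

Keeping test cases as structured records like this is what allows suites, cycles and traceability reports to be generated rather than maintained by hand.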


Initial Functional Test Cases for Example ATM System (Courtesy Russell C.
Bjork)

Each test case below lists the use case being tested, the function being
tested, the initial system state, the input and the expected output.

Use Case: Session
Function: System reads a customer's ATM card
Initial System State: System is on and not servicing a customer
Input: Insert a readable card
Expected Output: Card is accepted; system asks for entry of PIN

Use Case: Session
Function: System rejects an unreadable card
Initial System State: System is on and not servicing a customer
Input: Insert an unreadable card
Expected Output: Card is ejected; system displays an error screen; system is ready to start a new session

Use Case: Session
Function: System accepts customer's PIN
Initial System State: System is asking for entry of PIN
Input: Enter a PIN
Expected Output: System displays a menu of transaction types

Use Case: Session
Function: System allows customer to perform a transaction
Initial System State: System is displaying menu of transaction types
Input: Perform a transaction
Expected Output: System asks whether customer wants another transaction

Use Case: Session
Function: System allows multiple transactions in one session
Initial System State: System is asking whether customer wants another transaction
Input: Answer yes
Expected Output: System displays a menu of transaction types

Use Case: Session
Function: Session ends when customer chooses not to do another transaction
Initial System State: System is asking whether customer wants another transaction
Input: Answer no
Expected Output: System ejects card and is ready to start a new session

Transaction: individual types of transaction are tested below.

Use Case: Transaction
Function: System handles an invalid PIN properly
Initial System State: A readable card has been entered
Input: Enter an incorrect PIN and then attempt a transaction
Expected Output: The Invalid PIN Extension is performed

Use Case: Withdrawal
Function: System asks customer to choose an account to withdraw from
Initial System State: Menu of transaction types is being displayed
Input: Choose Withdrawal transaction
Expected Output: System displays a menu of account types

Use Case: Withdrawal
Function: System asks customer to choose an amount to withdraw
Initial System State: Menu of account types is being displayed
Input: Choose checking account
Expected Output: System displays a menu of possible withdrawal amounts

Use Case: Withdrawal
Function: System performs a legitimate withdrawal transaction properly
Initial System State: System is displaying the menu of withdrawal amounts
Input: Choose an amount that the system currently has and which is not greater than the account balance
Expected Output: System dispenses this amount of cash; system prints a correct receipt showing amount and correct updated balance; system records transaction correctly in the log (showing both message to the bank and approval back)

Use Case: Withdrawal
Function: System verifies that it has sufficient cash on hand to fulfill the request
Initial System State: System has been started up with less than the maximum withdrawal amount in cash on hand; system is requesting a withdrawal amount
Input: Choose an amount greater than what the system currently has
Expected Output: System displays an appropriate message and asks customer to choose a different amount

Use Case: Withdrawal
Function: System verifies that customer's balance is sufficient to fulfill the request
Initial System State: System is requesting a withdrawal amount
Input: Choose an amount that the system currently has but which is greater than the account balance
Expected Output: System displays an appropriate message and offers customer the option of choosing to do another transaction or not

Use Case: Withdrawal
Function: A withdrawal transaction can be cancelled by the customer any time prior to choosing the dollar amount
Initial System State: System is displaying menu of account types
Input: Press "Cancel" key
Expected Output: System displays an appropriate message and offers customer the option of choosing to do another transaction or not

Use Case: Withdrawal
Function: A withdrawal transaction can be cancelled by the customer any time prior to choosing the dollar amount
Initial System State: System is displaying menu of dollar amounts
Input: Press "Cancel" key
Expected Output: System displays an appropriate message and offers customer the option of choosing to do another transaction or not

Use Case: Invalid PIN Extension
Function: Customer is asked to reenter PIN
Input: Enter an incorrect PIN; attempt an inquiry transaction on the customer's checking account
Expected Output: Customer is asked to re-enter PIN

Use Case: Invalid PIN Extension
Function: Correct re-entry of PIN is accepted
Initial System State: Request to re-enter PIN is being displayed
Input: Enter correct PIN
Expected Output: Original transaction completes successfully

Use Case: Invalid PIN Extension
Function: A correctly re-entered PIN is used for subsequent transactions
Initial System State: An incorrect PIN has been re-entered and transaction completed normally
Input: Perform another transaction
Expected Output: This transaction completes successfully as well

Use Case: Invalid PIN Extension
Function: Incorrect re-entry of PIN is not accepted
Initial System State: Request to re-enter PIN is being displayed
Input: Enter incorrect PIN
Expected Output: Appropriate message displayed and re-entry of the PIN is requested

Use Case: Invalid PIN Extension
Function: Correct re-entry of PIN on the second try is accepted
Initial System State: Request to re-enter PIN is being displayed
Input: Enter incorrect PIN the first time, then correct PIN the second time
Expected Output: Original transaction completes successfully

Use Case: Invalid PIN Extension
Function: Correct re-entry of PIN on the third try is accepted
Initial System State: Request to re-enter PIN is being displayed
Input: Enter incorrect PIN the first and second times, then correct PIN the third time
Expected Output: Original transaction completes successfully

Use Case: Invalid PIN Extension
Function: Three incorrect re-entries of PIN result in retaining the card and aborting the transaction
Initial System State: Request to re-enter PIN is being displayed
Input: Enter an incorrect PIN three times
Expected Output: An appropriate message is displayed; card is retained by machine; session is terminated
The example given is one of the simplest, yet one can see the effort and time required to create such test cases; the task is obviously daunting. Many Indian programmers and project teams develop test cases only after testing is completed! Japanese clients, in contrast, are very particular and insist on writing test cases before coding starts. Unless they approve the test cases, which can run to more than 20,000 for an ATM application, the coding phase cannot start. Meticulous checking and 100% feedback on the total test cases is part of their commitment; they also hire professional testers to cover all possible test cases.
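
To make the idea concrete, the Invalid PIN cases above can be sketched as a table-driven test in Python (a hypothetical sketch; REAL_PIN, MAX_TRIES and validate_pin are illustrative names, not part of any real ATM system):

```python
# Table-driven sketch of the "Invalid PIN Extension" cases: each entry is
# (sequence of PIN attempts, expected outcome). Assumes the real PIN is "1234".
REAL_PIN = "1234"
MAX_TRIES = 3

def validate_pin(attempts):
    """Return 'accepted' if a correct PIN arrives within MAX_TRIES attempts,
    otherwise 'card-retained' (card kept, session terminated)."""
    for pin in attempts[:MAX_TRIES]:
        if pin == REAL_PIN:
            return "accepted"
    return "card-retained"

cases = [
    (["1234"], "accepted"),                      # correct on first try
    (["0000", "1234"], "accepted"),              # correct on second try
    (["0000", "1111", "1234"], "accepted"),      # correct on third try
    (["0000", "1111", "2222"], "card-retained"), # three incorrect entries
]
for attempts, expected in cases:
    assert validate_pin(attempts) == expected
print("all PIN cases pass")
```

Each row of a written test case table maps onto one tuple, which is what makes suites as large as the 20,000-case ATM example manageable.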

7.5 TYPES OF TESTING

Based on whether the actual execution of software under evaluation is


needed or not, there are two major categories of quality assurance
activities - Static and Dynamic Testing

7.5.1 Static Testing

Static Testing focuses on the range of methods that are used to determine
or estimate software quality without reference to actual executions.

❖ It implies testing software without execution on a computer.


❖ Involves just examination/review and evaluation.
❖ It is a process of reviewing the work product & is done using a checklist.
❖ Static Testing helps weed out many errors/bugs at an early stage.
❖ Static Testing lays strict emphasis on conforming to specifications.
❖ Static Testing can discover dead codes, infinite loops, uninitialized and
unused variables, standard violations.
❖ Is effective in finding 30-70% of errors.


Techniques in Static Testing include code inspection, code review, peer review, desk checking, program review, etc. One of the simplest forms of static testing is compiling. A compiler delivers error messages when it finds syntax errors or other invalid operations but does not execute the code. A "linking loader" that links a set of modules into one executable program will fail unless it finds every variable and function referred to; here too the modules are not executed.
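
In Python, for instance, the same idea can be demonstrated with the built-in compile() function, which checks syntax without ever running the code (a hypothetical sketch; syntax_ok is an illustrative name):

```python
# Static check: compile source text without executing it.
def syntax_ok(src: str) -> bool:
    """Return True if src compiles; nothing in src is ever executed."""
    try:
        compile(src, "<static-check>", "exec")
        return True
    except SyntaxError:
        return False

print(syntax_ok("x = 1\nprint(x)"))  # True  (valid syntax, never run)
print(syntax_ok("x = = 1"))          # False (syntax error caught statically)
```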

Code review
Code review for a module is carried out after the module has been successfully compiled and all the syntax errors have been eliminated. Code reviews are an extremely cost-effective strategy for reducing coding errors and producing high-quality code. Normally, two types of reviews are carried out on the code of a module: "code inspection" and "code walk through".

Code Walk Through


Code walk through is an informal code analysis technique. In this technique, after a module has been coded, successfully compiled, and all syntax errors eliminated, a few members of the development team are given the code a few days before the walk through meeting to read and understand it. Each member selects some test cases and simulates execution of the code by hand (i.e., traces execution through each statement and function call). The main objective of the walk through is to discover the algorithmic and logical errors in the code. The members note down their findings and discuss them in a walk through meeting where the coder of the module is present.

Code Inspection
It is a formal analysis of the program source code, done by a team of developers and subject matter experts, to find defects and to verify that the code meets the system design. In contrast to code walk through, the aim of code inspection is to discover common types of errors caused by oversight and improper programming. In other words, during code inspection the code is examined for the presence of certain kinds of errors.
In addition to the commonly made errors, adherence to coding standards is
also checked during code inspection. It is a good practice to collect
statistics regarding different types of errors commonly committed by the
developers and identify the type of errors most frequently committed. Such


a list of commonly committed errors can be later used during code


inspection to look out for possible errors.

Some classical programming errors which can be checked during code


inspection are:

❖ Use of uninitialized variables.


❖ Nonterminating loops.
❖ Incompatible assignments.
❖ Array indices out of bounds.
❖ Improper storage allocation and de-allocation.
❖ Mismatches between actual and formal parameters in procedure calls.
❖ Use of incorrect logical operators or incorrect precedence among
operators.
❖ Improper modification of loop variables.
❖ Comparison of equality of floating point variables, etc.
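
One of the classical errors above, comparing floating point values for equality, is easy to demonstrate in Python:

```python
import math

# Classical error: testing floating point values for equality.
a = 0.1 + 0.2
print(a)                     # 0.30000000000000004 due to binary rounding
print(a == 0.3)              # False - the equality test is the bug
print(math.isclose(a, 0.3))  # True  - compare with a tolerance instead
```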

Checklists are a great tool in code reviews - they ensure that reviews are
consistently performed by all teams - even if globally located. They are a
handy way to ensure that common issues are identified and resolved.

Usage of a checklist is very commonly seen whenever a car or a two-


wheeler is given for servicing. The first thing the service person does is
inspection of the vehicle and either ticking of boxes in a sheet of paper or
writing useful data like mileage, petrol levels, kms logged, dents, lights
functioning, customer request etc. The format of the "checklist" they use is
just one page but contains very useful information which helps to avoid
mistakes or miscommunication between the customer and service provider.


Research by the Software Engineering Institute suggests that programmers make 15-20 common mistakes. By adding such mistakes to a checklist, they can be spotted whenever they occur and driven out over time.

A typical Code Review Checklist looks like:

❖ Does the code work? Does it perform its intended function? Is the logic correct?

❖ Is all the code easily understood?

❖ Does it conform to the agreed coding conventions?

❖ Are braces, variable and function names, line length, indentations,


formatting, and comments rightly used?

❖ Are all variables properly defined with meaningful, consistent, and clear
names?

❖ Are all assigned variables consistent?

❖ Are there any redundant or unused variables?

❖ Is there any redundant or duplicate code?

❖ Are there any uncalled or unneeded procedures or any unreachable


code?
❖ Are there any unnecessary drivers, stubs or test routines in the code?

❖ Is the code as modular as possible?

❖ Can any global variables be replaced?

❖ Is there any commented out code?

❖ Can any of the code be replaced with library functions?

❖ Can any debugging code be removed?

❖ Are all data inputs checked (for the correct type, length, format, and
range) and encoded?

❖ Where third-party utilities are used, are returning errors being caught?

❖ Are output values checked and encoded?

Obviously the checklist cannot be an epic by itself, nor can it be exhaustive of all issues that can arise; a checklist that long would never be used.

7.5.2 Dynamic Testing

Dynamic testing deals with specific methods to ascertain software quality through actual execution, i.e. with real data and under real or simulated conditions. Techniques in this area include synthesis of inputs, the use of structurally dictated testing procedures, and the automation of testing environment generation. The static and dynamic methods are inseparable but will be discussed separately.


Dynamic Testing
Under Dynamic Testing, code is executed. As the name implies, it checks the functional behavior of the software system, memory usage, CPU usage and overall performance of the system; it tests the dynamic behavior of the code. Dynamic Testing is performed to confirm that the software product works in conformance with the business requirements. This testing is also called validation testing.

In dynamic testing the software must actually be compiled and run. It


involves working with the software, giving input values and checking if the
output is as expected by executing specific test cases which can be done
manually or with the use of an automated process. Dynamic testing is
performed at all levels of testing i.e. Unit, Integration, System and
Acceptance and it can be either black or white box testing. The levels of
testing, White Box and Black Box testing will be covered in subsequent
sections.

The following table highlights some of the differences between static and dynamic testing:

Static Testing | Dynamic Testing
Testing done without executing the program | Testing done by executing the program
It is a verification process | It is a validation process
It is about prevention of defects | It is about finding and fixing the defects
Gives assessment of code and documentation | Gives assessment of defects and quality of the developed system
Involves checklist usage | Involves test cases for execution
Can be performed before compilation | Is performed after compilation
Covers the structural and statement coverage testing | Covers the executable items of the code
Cost of finding and fixing defects is less | Cost of finding and fixing defects is high
More reviews and feedback ensure good quality | More defects found and coverage ensure good quality
Requires more meetings | Comparatively requires fewer meetings

7.6 DEBUGGING

It is necessary here to digress a bit from testing and cover another important aspect of the development life cycle: debugging. Once errors are identified in program code, it is necessary to first identify the precise program statements responsible for the errors and then to fix them. Identifying errors in program code and then fixing them is known as debugging.

Debugging approaches
The following are some of the approaches popularly adopted by
programmers for debugging.

Brute Force Method


This is the most common but least efficient method of debugging. In this approach, print statements are scattered through the program to print intermediate values, with the hope that some of the printed values will help to identify the statement in error. In some cases memory storage dumps are taken to find the source of the error, or run-time traces are invoked to debug. This approach becomes more systematic with the use of a symbolic debugger (also called a source code debugger), because values of different variables can be easily checked and break points can be easily set to test the values of variables effortlessly. One irritant of this method is that "lax" programmers tend to leave the debugging code in the programs even after testing and debugging are completed.
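
A minimal sketch of the approach in Python (the buggy average function is hypothetical, made up for illustration):

```python
# Brute-force debugging: scatter print statements to expose intermediate
# values and spot where the computation goes wrong.
def average(values):
    total = 0
    for i in range(len(values) - 1):      # BUG: the last element is skipped
        total += values[i]
        print(f"i={i} total={total}")     # debugging print to trace state
    return total / len(values)

print(average([10, 20, 30]))  # 10.0 instead of the expected 20.0;
                              # the trace shows i never reaches index 2
```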

Backtracking
This is also a fairly common approach. In this approach, beginning from
the statement at which an error symptom has been observed, the source
code is traced backwards until the error is discovered. Unfortunately, as the
number of source lines to be traced back increases, the number of
potential backward paths increases and may become unmanageably large
thus limiting the use of this approach.


Cause Elimination Method


In this approach, a list of causes which could possibly have contributed to the error symptom is developed, and tests are conducted to eliminate each. The method could be "by induction" or "by deduction".

❖ Induction: Locate data about what the program did correctly/incorrectly, organize the data, devise a hypothesis about the cause of the error, and prove the hypothesis.

❖ Deduction: Enumerate the causes of error, eliminate each cause of error


and zoom in on the right cause(s).

Program Slicing
This technique is similar to backtracking. Here the search space is reduced by defining slices. A slice of a program, for a particular variable at a particular statement, is the set of source lines preceding this statement that can influence the value of that variable.
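
A small hypothetical illustration: for the variable z at the final print statement, the slice contains only the lines that can influence z.

```python
x = 5        # in the slice for z: x flows into z
y = x + 2    # in the slice for z: y flows into z
w = 100      # NOT in the slice: w never influences z
z = y * x    # in the slice: defines z
print(z)     # slicing criterion: value of z at this statement
```

A debugger guided by this slice would ignore the assignment to w entirely, shrinking the search space.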

Debugging is often carried out by programmers based on their ingenuity.


Few general guidelines for effective debugging are:

❖ Debugging often requires a thorough understanding of the program


design. Trying to debug based on a partial understanding of the system
design and implementation may require an inordinate amount of effort to
be put into debugging even simple problems.

❖ Debugging may sometimes even require a full redesign of the system. In such cases, a common mistake that novice programmers often make is attempting to fix not the error itself but its symptoms.

❖ There is a possibility that an error correction may introduce new errors.


Therefore after every round of error-fixing, regression testing must be
carried out.


Testing Vs. Debugging


There is usually confusion among programmers that debugging and testing
are the same. They are totally different and complementary. Testing leads
to debugging when defects are found and need to be fixed.

Testing | Debugging
Starts with known conditions, uses predefined procedures, and has predictable outcomes | Starts from possibly unknown initial conditions, and the end cannot be predicted
Is a structured process that identifies an error's "symptoms" | Is a diagnostic process that identifies an error's "cause"
Testing can and should be planned, designed, and scheduled | The procedures for, and duration of, debugging cannot be so constrained
Testing is a demonstration of error or apparent correctness | Debugging is a deductive process
Testing should strive to be rigid, predictable, dull and inhuman | Debugging demands intuitive leaps, conjectures, experiments and freedom
Much of testing can be done without design knowledge | Debugging is impossible without detailed design knowledge
Can often be done by an outsider | Must be done by an insider
Much of test execution and design can be automated | Automated debugging is still a dream

7.7 SDLC AND V-MODEL FOR TESTING

There are distinct test phases that take place in each of the software life cycle activities. It is easier to visualize these phases through the Waterfall model of development and the V-model of testing. The V proceeds from left to right, depicting the basic sequence of development and testing activities. Unlike the waterfall model, instead of moving down in a linear way, the process steps are bent upwards after the coding phase to form the typical V shape. The V-model, also called the Verification and Validation model, demonstrates the relationship between each phase of the development life cycle and its associated phase of testing. The V-model explicitly suggests that testing (quality assurance) should be considered early in the life of a project.


Please refer to the diagram given above.

Based on the same wish-lists, requirement specifications and inputs from various stakeholders, the development, coding and testing activity is started. The test planning, test case writing, test scripting and testing happen in parallel to the development activities. Testing is performed in each phase of the Software Testing Life Cycle: in the first half of the model, Verification testing is integrated in each phase, and in the second half, Validation testing comes into the picture.

Verification Phase
Requirements Specifications: In the Requirements analysis phase, the first step in the verification process, the requirements of the system are collected by analyzing the needs of the user(s). The user requirements document will typically describe the system's functional, interface, performance, data, security and other requirements as expected by the user. The user acceptance test plans and tests are also designed in this phase.


Functional (System) design: It is the phase where system engineers


analyze and understand the business of the proposed system by studying
the user requirements document. They figure out possibilities and
techniques by which the user requirements can be implemented. During
this phase the system testing is designed and system testing plans and
test cases are prepared.

Detailed Design (High Level): The high-level design phase focuses on


system architecture and design. A baseline in architecture is designed
which typically consists of the list of modules, functionality of each module,
their interface relationships dependencies, database tables, architecture
diagrams, technology details etc. An integration test plan is created in this
phase as well in order to test the pieces of the software systems ability to
work together.

Program Specifications (Low Level): The low-level design phase is


where the actual software components are designed. The module design
phase can also be referred to as low-level design. The designed system is
broken up into smaller units or modules and each of them is explained so
that the programmer can start coding directly. During this stage unit test
design is developed and unit tests are created.

Validation Phase
In the V-model, each stage of verification phase has a corresponding stage
in the validation phase.

Unit testing: The unit test plans (UTPs) developed during module design
phase are executed to detect and eliminate defects at program code level
or unit level. A unit is the smallest entity which can independently exist,
e.g. a function, program or a module. Unit testing verifies that the smallest
entity can function correctly when isolated from the rest of the units.

Integration testing: Integration test plans that are developed during the
high Level design phase are executed in this stage. These tests verify that
units created and tested independently can coexist and communicate
among themselves.

System testing: System tests plans that are developed during system
design phase are executed in this stage. The System test plans are
generally composed by client's business team. Applications are tested for


functionality, interdependency and communication. System testing verifies that both functional and non-functional requirements have been met.

User acceptance testing: User acceptance test (UAT) plans, developed during the requirements analysis phase, are composed by business users. In this stage UAT is performed in a user environment that resembles the production environment, using realistic data. UAT verifies that the delivered system meets the user's requirements and that the system is ready for use in real time.

The V-model has several advantages but has also attracted criticism. It has been criticized by Agile advocates and others as an inadequate model of software development for numerous reasons.

Advantages

❖ It provides a simple and easy to follow map of the software development


process.

❖ It defines a logical relationship between the tangible phases of the


process, and proposes a logical sequence in which these phases should
be approached.

❖ It demands that testing documentation is written as soon as possible, for


example, the integration tests are written when the high level design is
finished, the unit tests are written when the detailed specifications are
finished, and so on.

❖ It gives equal weight to development and testing.

Criticism

❖ It is too simple to accurately reflect the software development process,


and can lead managers into a false sense of security. The V-model
reflects a project management view of software development and fits the
needs of project managers, accountants and lawyers rather than
software developers or users.

❖ The V-model is not suitable for bigger and complex projects.

❖ Although it is easily understood by fresh programmers, early
understanding is useful only if the novice develops a deeper
understanding of the development process and how the V-model must be
adapted and extended in practice.

❖ Since it is closely tied to the Waterfall model, it is inflexible and


encourages a rigid and linear view of software development and has no
inherent ability to respond to change.

❖ It implicitly promotes writing test scripts in advance rather than


exploratory testing; it encourages testers to look for what they expect to
find, rather than discover what is truly there. It does not encourage
testers to select the most effective and efficient way to plan and execute
testing.

❖ It lacks coherence and precision. There is widespread confusion about


what exactly the V-model is. Disagreement about the merits of the V-
model often reflects a lack of shared understanding of its definition.

7.8 LEVELS OF TESTING

Software products are normally tested first at the individual component (or
unit) level. One begins by 'testing-in-the-small' and moves toward 'testing-
in-the-large'. After testing all the components individually, the components
are slowly integrated and tested at each level of integration (integration
testing). Finally, the fully integrated system is tested - called system
testing. Integration and system testing are known as testing in the large.

Testing progresses by moving outward along the spiral: Unit Testing - Integration Testing - Validation Testing - System Testing - Acceptance Testing.

The different levels of testing are:

❖ Unit/Module testing
❖ Integration testing
   ✴ Non-incremental
   ✴ Incremental
      ✴ Top-down approach
      ✴ Bottom-up approach
      ✴ Sandwich approach

❖ System testing
❖ User acceptance testing

7.8.1. Unit Testing

Unit testing is undertaken after a module has been coded and successfully reviewed. Unit testing (or module testing) is the testing of different units (or modules) of a system in isolation. In order to test a single module, a complete environment is needed to provide all that is necessary for execution of the module. For example, to test the "SMS" module of a mobile application, the "contacts" module should be developed, or contact data made available, for testing the SMS module. In this case "one function" of the SMS module will undergo "unit" testing. The SMS module itself becomes a "unit" once all its functions are tested and integrated.

Unit tests are typically done by programmers and NOT by "testers", as this requires detailed knowledge of the internal program design and code. These tests examine the software in isolation and in greatest detail. At the program level the type of testing used is White Box Testing (to be discussed later). In order to complete some of the unit tests, stubs, drivers and simulators may be used (to be discussed later). Unit tests include black box and white box testing techniques (to be discussed later). The tests attempt to uncover errors in logic and function within the boundaries of a component. They cover validation rules, navigation requirements, user interfaces, processing of the unit, logical paths, loops, etc.
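
A minimal unit test sketch using Python's unittest framework (the format_recipient function is hypothetical, standing in for one function of the SMS module):

```python
import unittest

def format_recipient(name: str, number: str) -> str:
    """Unit under test: builds an SMS recipient string."""
    return f"{name.strip()} <{number}>"

class FormatRecipientTest(unittest.TestCase):
    def test_strips_surrounding_whitespace(self):
        self.assertEqual(format_recipient("  Asha ", "9820012345"),
                         "Asha <9820012345>")

# Run the test case programmatically (instead of unittest.main()).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(FormatRecipientTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("successful:", result.wasSuccessful())  # successful: True
```

The unit is exercised in complete isolation: no contacts module, database or user interface is needed.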

Stubs and Drivers

While developing large business applications or complex software, the modules required to provide the necessary environment (those which either call or are called by the module under test) are usually not available on time. In such cases stubs or drivers are used to complete the testing. The role of stub and driver modules is shown in the diagram below.


'Stubs' are created for the modules at the lower level. Stubs are dummy modules which produce the same output as the called modules. Once the calling module is tested, testing proceeds to the modules at the lower level and the 'stub' is replaced by the real module. Developing a stub allows the programmer to call a method in the code being developed, even if the method does not yet have the desired behavior.

For example "A Customer Invoicing Program" calls a "Financial Accounting"


function to update the accounts receivables, discounts and sales amounts.
If the Financial Accounting function is not ready then a dummy module i.e.
"stub" is created to simulate the "called" module and complete the testing
of the Invoicing program.

Drivers are dummy modules created in place of the 'calling modules' so


that the testing of the 'called modules' can be carried out.

A 'Driver' is a piece of software that drives (invokes) the Unit being tested.
A driver creates necessary 'Inputs' required for the Unit and then invokes


the Unit. A "Driver" passes test cases to another piece of code. "Test
Harness" or a "test driver" is supporting code and data used to provide an
environment for testing part of a system in isolation.

For example a "Sales order printing" program requires a 'Sales order' as an


input, which is actually an output of 'Sales order creation' program. If the
order creation program is not completed a driver module is created to
simulate an "order" and the "Printing" program is tested.

Both the "Driver" and the "Stub" must be kept at a minimum level of
complexity, so that they do not induce any errors while testing the Unit in
question. Once the real modules are developed for a Stub or a Driver they
are replaced with the real modules and the programs retested. The Stubs
and Drivers are often viewed as throwaway code.
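
A minimal Python sketch of both roles, using the invoicing example (all names are hypothetical):

```python
# Stub: stands in for the Financial Accounting module that is not ready yet.
class FinancialAccountingStub:
    def post_receivable(self, amount):
        self.last_posted = amount   # record the call so the driver can check it
        return True                 # canned success response

# Unit under test: the invoicing routine that calls Financial Accounting.
def create_invoice(accounting, amount, discount):
    net = amount - discount
    accounting.post_receivable(net)
    return net

# Driver: prepares inputs, invokes the unit, and checks the outcome.
stub = FinancialAccountingStub()
net = create_invoice(stub, amount=1000, discount=50)
print(net, stub.last_posted)  # 950 950
```

Kept this simple, neither the stub nor the driver can itself introduce errors, and both are thrown away once the real modules arrive.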

7.8.2 Integration testing

This is also referred to as Link Testing. Interfaces are the means by which data is passed to and from modules. A group of modules is tested together to check the dependencies between modules and to test the interfaces between modules. Integration testing tests interfaces between components, and interactions with different parts of a system, such as the operating system, file system, hardware, or interfaces between systems. This behavior covers both functional and non-functional aspects of the integrated system.

"Interface integrity" tests ensure that when data is passed to another module by way of a call, none of the data becomes lost or corrupted. This loss or corruption can happen in a number of ways: calling and receiving parameters may be of the wrong type, so that the data appears in the receiving program in a garbled form. During integration testing, different modules of a system are integrated in a planned manner using an integration plan. The integration plan specifies the steps and the order in which modules are combined to realize the full system. After each integration step, the partially integrated system is tested.


Modules are integrated in two ways:

❖ Non-incremental testing (Big-bang testing) - each module is tested


independently and at the end, all modules are combined to form a single
application

❖ Incremental module testing. There are three types


✴ Top down testing
✴ Bottom up testing
✴ Sandwich (mixed) testing

Big-Bang Integration Testing


It is the simplest integration testing approach, where all the modules
making up a system are integrated in a single step. It is assumed that
since all components have already undergone testing at the unit level and
have no defects, they can be now put together and tested. In simple
words, all the modules of the system are simply put together and tested.
The main advantage of this approach is that it is very quick.

However, this technique is practicable only for very small systems. The
main problem with this approach is that once an error is found during the
integration testing, it is very difficult to localize the error as the error may
potentially belong to any of the modules being integrated. Therefore,
debugging errors reported during big bang integration testing are very
expensive to fix.

Bottom-Up Integration Testing


In bottom-up testing, each subsystem at the lower hierarchy is tested separately and then the full system is tested. A subsystem might consist of many modules which communicate with each other through well-defined interfaces. Large software systems normally require several levels of subsystem testing; lower-level subsystems are successively combined to form higher-level subsystems. In this approach testing is conducted from sub-module to main module; if the main module is not developed, a temporary program, i.e. a "driver", is used to simulate the main module.

A principal advantage of bottom-up integration testing is that several


disjoint subsystems can be tested simultaneously. This testing is
advantageous if major flaws occur toward the bottom of the program. Test
conditions are easier to create and there is no need to create stubs. A


disadvantage of bottom-up testing is the complexity that occurs when the


system is made up of a large number of small subsystems. Also driver
modules must be produced for this type of testing. The program as an
entity does not exist until the last module is added.

Top-Down Integration Testing


Top-down integration testing starts after a main routine and one or two
subordinate modules in the system are developed. Top-down integration
testing approach requires the use of program stubs to simulate the effect
of lower-level routines that are called by the routines under test and does
not require any driver routines. This testing is advantageous if major flaws
occur toward the top of the program. Also early working skeletal programs
allows demonstrations and boosts morale. A disadvantage of the top-down
integration testing approach is that in the absence of lower-level routines,
many times it may become difficult to exercise the top-level routines in the
desired manner since the lower-level routines perform several key low-
level functions. This also requires "stub" modules to be written for
completing the "calling" module testing.

Sandwich (Mixed) Integration Testing


A mixed integration testing follows a combination of top-down and bottom-
up testing approaches. In top-down approach, testing can start only after
the top-level modules have been coded and unit tested. Similarly, bottom-
up testing can start only after the bottom level modules are ready. The
mixed approach overcomes this shortcoming of the top-down and bottom-
up approaches. The modules are prioritized for testing depending on their
logical sequence of implementation and roll out. For example a Savings
Bank Module will need the bank account master creation program before
the interest calculation program to be completed first so that the bank can
start capturing the existing account details. Testing happens as and when
modules become available and sandwich testing is a commonly used
integration testing approach. In this approach, both stubs and drivers are
used.

7.8.3. System Testing

System testing is testing conducted on a complete, integrated system to


evaluate the system's compliance with its specified requirements. This level
of testing is the most crucial stage where tests are done to ensure
conformance to requirements and ensure quality that will meet or exceed


customer expectations. As a rule, system testing takes, as its input, all of


the "integrated" software components that have passed the integration
testing stage. The aim is to test the software in a real environment in
which it is planned to operate i.e. hardware, end users, live data,
information, etc. The System Testing is carried out against the Functional
Specs and the type of testing is mainly Black Box Testing (to be discussed
later).

System Testing is divided into two categories:

❖ Functional system testing - Tests the completed application as a


whole, to determine that it provides all of the behaviors required.

❖ Non-Functional system testing - This is defined as "testing of system requirements that do not relate to functionality", e.g. Performance (load, volume and stress) testing, Installation testing, Security testing, Usability testing, Backup and Recovery testing, Configuration testing, Documentation testing, Localization testing and Release testing.

In each of the two types of tests, various kinds of test cases are designed
by referring to the SRS document.

Non Functional Testing - Performance Testing


Performance testing is testing conducted to evaluate compliance of a
system with specified performance requirements. It is carried out to check
whether the system meets the non- functional requirements identified in
the SRS document. Performance testing determines how fast some aspect
of a system performs under a particular workload. It can also compare two
systems to find which performs better.

Performance testing checks whether industry-defined benchmarks are met by the application under test. It can measure which part of the system or workload causes the system to perform badly. The process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions.

Example:
While testing a web application, the network performance is checked for connection speed vs. latency. Latency is the time taken for data to travel from source to destination. A 70 KB page might take more than 15 seconds to load over a poor 28.8 kbps modem connection (latency = 1000 milliseconds), while the same page would appear within 5 seconds over an average 256 kbps DSL connection (latency = 100 milliseconds).
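The arithmetic behind such load-time estimates can be sketched as a simple model: total time is roughly latency plus page size divided by bandwidth. The sketch below assumes a 70-kilobyte page and ignores protocol overheads, so its numbers are illustrative rather than exact measurements:

```python
# Back-of-the-envelope page-load model: latency + transfer time.
# Assumes "70 KB" means 70 kilobytes and ignores TCP/HTTP overheads.

PAGE_SIZE_BITS = 70 * 1024 * 8  # 70 KB page expressed in bits

def load_time_seconds(bandwidth_bps: float, latency_s: float) -> float:
    """Latency plus the time to push the page through the link."""
    return latency_s + PAGE_SIZE_BITS / bandwidth_bps

modem_time = load_time_seconds(28_800, 1.0)   # 28.8 kbps modem, 1000 ms latency
dsl_time = load_time_seconds(256_000, 0.1)    # 256 kbps DSL, 100 ms latency

print(f"modem: {modem_time:.1f} s, DSL: {dsl_time:.1f} s")
```

The model confirms the qualitative point of the example: the high-latency, low-bandwidth connection is roughly an order of magnitude slower for the same page.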

The focus of performance testing is on checking a software program's:

❖ Speed - Determines whether the application responds quickly

❖ Scalability - Determines the maximum user load the software application can handle

❖ Stability - Determines if the application is stable under varying loads

Some types of performance testing include:

Load Testing: Subject the target of the test to varying workloads to measure and evaluate its performance under each. A 1 HP motor pump is expected to pump water to a head of 40 feet at a flow of 100 gallons per minute at normal temperatures and water specific gravity = 1, and to fill a tank within a specified time. Load testing of this pump implies testing with some minimum flow rates, heights less than 40 feet and specific gravity of water = 1. Similarly, a payroll application generating pay-slips at the end of the month will require data of employees with different criteria - full attendance, partial attendance, bonus time, overtime pay, performance incentives, etc. The tests check the application's ability to perform under anticipated user loads. The objective is to identify performance bottlenecks before the software application goes live.
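A minimal load-test harness can be sketched with Python's thread pool. Here `process_payslip` is a hypothetical stand-in for the operation under test, and the "light" and "heavy" runs illustrate subjecting the target to varying user loads:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def process_payslip(employee_id: int) -> float:
    """Stand-in for the operation under test; returns its response time."""
    start = time.perf_counter()
    sum(i * i for i in range(1000))  # simulated work
    return time.perf_counter() - start

def run_load(users: int, requests_per_user: int) -> dict:
    """Fire concurrent requests and summarize the response times."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(process_payslip,
                              range(users * requests_per_user)))
    return {"requests": len(times),
            "max_s": max(times),
            "avg_s": sum(times) / len(times)}

light = run_load(users=2, requests_per_user=5)    # light workload
heavy = run_load(users=20, requests_per_user=5)   # heavier workload
print(light["requests"], heavy["requests"])
```

Comparing the summaries from the two runs is what exposes a bottleneck: if average response time degrades sharply between the light and heavy workloads, the anticipated user load is near the system's limit.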

Volume Testing: Testing an application with a certain volume of data. The 1 HP motor pump will be tested for a flow close to 100 gallons per minute, a height of 40 feet and specific gravity of 1. For a payroll application, under volume testing a large number of employee records is populated in a database and the overall software system's behavior is monitored. Complex queries are executed with large volumes of data to check the performance and response times of the application. This also helps in finding defects in the SQL queries written for report generation. The objective is to check the software application's performance under varying database volumes. It is especially important to check whether the data structures (arrays, queues, stacks, etc.) have been designed to successfully handle extraordinary situations. A compiler might be tested to check whether the symbol table overflows when a very large program is compiled.
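A volume-test sketch using Python's built-in sqlite3 module: a table is populated with a large number of rows and a grouping query is timed against it. The table, columns and row count are made up for illustration:

```python
import sqlite3
import time

# Populate an employee table (names and columns are contrived) and
# time a report-style grouping query against it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [(i, f"emp{i}", 1000.0 + i % 500) for i in range(50_000)])

start = time.perf_counter()
rows = conn.execute("SELECT salary, COUNT(*) FROM employees "
                    "GROUP BY salary ORDER BY salary").fetchall()
elapsed = time.perf_counter() - start

print(f"{len(rows)} salary groups in {elapsed:.3f} s")
```

In a real volume test the row count would be scaled up toward production volumes, and the measured query times compared against the response-time requirements in the SRS.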

Stress Testing
Stress Testing is testing conducted to evaluate a system or component at
or beyond the limits of its specified requirements or normal operation.
Stress testing, also known as endurance testing, determines the breaking point or unacceptable performance point of a system. For example, any
component used in high-voltage circuit breakers is subjected to a "stress"
test for power surges far beyond the normal working voltages. Most of us
have experienced the crashing of a desktop or laptop halfway through
some routine program without any reason or the system saying "Not
responding". Such errors indicate stress on the memory of the machine
and the operating system.

Stress testing involves testing an application under extreme workloads to see how it handles high traffic or data processing. Input data volume, input data rate, processing time, utilization of memory, etc. are pushed beyond the designed capacity. For example, if an operating system is supposed to support 15 multi-programmed jobs, the system is stressed by attempting to run 15 or more jobs simultaneously.
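The 15-job example can be sketched with a toy scheduler whose designed capacity is 15 jobs; the stress test submits 20 and checks that the excess is rejected gracefully rather than crashing. The `Scheduler` class is a stand-in for illustration, not a real operating-system interface:

```python
# Toy multiprogramming limit: the "system" accepts at most 15 jobs.
CAPACITY = 15

class Scheduler:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.running = []

    def submit(self, job_id: int) -> bool:
        """Accept the job if under capacity; reject (don't crash) otherwise."""
        if len(self.running) >= self.capacity:
            return False
        self.running.append(job_id)
        return True

sched = Scheduler(CAPACITY)
results = [sched.submit(i) for i in range(20)]  # stress: 20 jobs offered
accepted = results.count(True)
rejected = results.count(False)
print(f"accepted={accepted}, rejected={rejected}")
```

The point of the stress test is the behavior at and beyond the limit: a well-designed system degrades predictably (here, by rejecting the surplus jobs) instead of failing in an uncontrolled way.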

In Oct 2014, Flipkart declared that it had created Indian e-commerce history by clocking $100 million (Rs 600 crores) in sales in just 10 hours of its much-heralded discount sale. Flipkart adopted a conscious strategy of offering massive discounts across various segments of branded products to test the market ahead of the Diwali festival season. It ended up selling 2 million items at the rate of 60 items per second - a television set sold every second, and half a million mobiles and an equal number of garments in a day. But a day after its Big Billion Day sale, e-commerce giant Flipkart sent letters to its customers apologizing for the glitches the site encountered as it struggled to keep up with the heavy traffic. Disgruntled consumers took to social networks to express their displeasure about jacked-up prices, cancelled orders, and the time and attempts taken to complete orders after adding items to the cart, along with Flipkart's servers crumbling under the pressure of heavy traffic and throwing up random errors. Flipkart admitted that the shopping experience for many was frustrating due to errors and unavailability of the website at times. They had deployed nearly 5,000 servers and had prepared for 20 times the traffic growth - but the volume of traffic at different times of the day was much higher than this.

Stress testing is especially important for systems that usually operate


below the maximum capacity but are severely stressed at some peak
demand hours.

Configuration Testing
This is used to analyze system behavior in various hardware and software
configurations specified in the requirements. Today applications are built
work on desktops, mobiles, Notepads, i-phones etc. These applications are
built in variable configurations for different users. The system is configured
in each of the required configurations and it is checked if the system
behaves correctly in all required configurations.

Configuration testing is the process of testing a system under development


on machines which have various combinations of hardware and software.
In many situations the number of possible configurations is far too large to
test. For example for an application which works on a PC, the number of
combinations of operating system versions, memory sizes, hard drive
types, CPUs etc. is enormous. Assuming that there are 10 different
operating system versions, 8 different memory sizes, 6 different hard
drives, and 7 different CPUs, there are already 10 * 8 * 6 * 7 = 3,360
different hardware configurations. If one adds the standard stuff which
people load on their desktops like a Web browser, anti-virus software,
Picasa, MS Office, Facebook, LinkedIn, etc. the number of possible
configurations quickly becomes unmanageable.
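The combinatorial arithmetic above can be verified directly; this sketch simply enumerates the hypothetical hardware combinations from the example:

```python
from itertools import product

# Placeholder names standing in for the example's configuration options.
os_versions = [f"os{i}" for i in range(10)]     # 10 OS versions
memory_sizes = [f"mem{i}" for i in range(8)]    # 8 memory sizes
hard_drives = [f"hd{i}" for i in range(6)]      # 6 hard drive types
cpus = [f"cpu{i}" for i in range(7)]            # 7 CPUs

# Every hardware configuration is one element of the cross product.
configs = list(product(os_versions, memory_sizes, hard_drives, cpus))
print(len(configs))  # prints 3360, i.e. 10 * 8 * 6 * 7
```

Adding even one more dimension (say, 5 browser versions) multiplies the count again, which is why test planning must prioritize configurations rather than enumerate them all.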

Because the number of possible configurations is typically too large to test effectively, it is crucial that test planning clearly identify which platforms will be supported. The plan must prioritize testing of different configurations based on factors such as the size of the user base and the risk associated with an undiscovered bug in a particular configuration.

One configuration testing approach is to test on virtual machines. A virtual machine consists of a single file called a virtual hard drive (VHD), which when installed on a host machine can simulate a particular software configuration. Multiple virtual machines, each with a different software configuration, can be installed and run on a single physical host machine and tested simultaneously. On Microsoft platforms, free virtualization systems include Virtual PC, Virtual Server and the 64-bit Hyper-V system. One of the open source virtual machine systems for Unix platforms is VirtualBox.

Sometimes, when development work is outsourced to offshore vendors, the configuration of hardware, software licenses and application versions differs from the live environment at the client site. The developed modules and applications must be finally tested on the client configuration before being launched for end users.

Compatibility Testing
This type of testing is required when the system interfaces with other types of systems. Compatibility testing aims to check whether the interface functions
perform as required. For instance, if the system needs to communicate
with a large database system to retrieve information, compatibility testing
is required to test the speed and accuracy of data retrieval.

Recovery Testing
Recovery testing tests the response of the system to the presence of
faults, or loss of power, devices, services, data, etc. The system is
subjected to the loss of the mentioned resources and it is checked if the
system recovers satisfactorily. For example, the printer can be
disconnected to check if the system hangs. Or, the power may be shut
down to check the extent of data loss and corruption.

Installation Testing: Ensures the successful ship-out of all components of the software and its installation with the help of the installation documents. Today most applications are downloaded by end users onto their own machines, and setup files are executed to unzip the components of the application and build the executable on the end user's machine. After a series of steps in which the user is allowed to choose some parameters such as language, location of files, add-on software, etc., the application is launched with the help of a "key" which is given on purchase of the software. For such applications it is very important to simulate these downloads and test the installation process thoroughly, so that a common user is able to procure and install the software with the fewest possible problems.


Documentation Testing
It is checked that the required user manual, maintenance manuals, and
technical manuals exist and are consistent. With today's global reach, applications are used across the world by people of different languages, cultures and capabilities. In the Arabic world, documentation must be readable from right to left. If the requirements specify the types of audience for which a specific manual should be designed, then the manual is checked for compliance.

Usability Testing
Usability testing checks the user interface to see whether it meets all user requirements. One of its goals is to check the system's suitability for the users' current work-style, culture and the organization's existing employees. The tests also find out how end users will react to the system. Obviously this test is very subjective.

There are several areas that are covered during usability testing. These include:

❖ Ease of learning

❖ Ease of understanding
❖ Ease of installation
❖ Screen display - formats, layout, look and feel
❖ Report formats
❖ Ease of using in terms of keystrokes, mouse navigation, short cuts or
memory-aids
❖ Difficulty of orientation and navigation
❖ Ease of completing a specific task using the system
❖ Information consistency and presentation
❖ Usage by different categories and skill levels of users

Security Testing
Security testing has become very crucial with the advent of technology, the internet and cloud computing. The testing verifies that the protection mechanisms built into a system will protect it from illegal penetration and logical access violations.

Software security is about making software behave correctly in the presence of a malicious attack. Testing is basically done to check whether the application or the product is secure, and to see whether the application is vulnerable to attacks. It is a process to determine that an information system protects data and maintains functionality as intended; it determines that confidential data stays confidential, i.e. it is not exposed to individuals or entities for which it is not meant. Users must be able to perform only those tasks that they are authorized to perform, e.g. a user should not be able to change the functionality of the web application in an unintended way, etc.

Security testing ensures that the systems and applications in an organization are free from loopholes that may cause a big loss. Typical security requirements may include specific elements of confidentiality, integrity, authentication, availability, authorization and non-repudiation.

Today more and more vital data is stored in web applications and the
number of transactions on the web has increased; proper security testing
of web applications is becoming very important. Some key terms used in
web application security testing are:

❖ Vulnerability: This is a weakness in the web application. The cause of such a "weakness" can be bugs in the application, an injection (SQL/script code) or the presence of viruses.

❖ URL manipulation: A uniform resource locator (URL) is a reference to a resource that specifies its location on a computer network; it is the generic term for all types of names and addresses that refer to objects on the World Wide Web. The term "web address" is a synonym for a URL. Some web applications communicate additional information between the client (browser) and the server in the URL. Changing some information in the URL may sometimes lead to unintended behavior by the server.

❖ SQL injection: This is the process of inserting SQL statements through the web application user interface into some query that is then executed by the server.

❖ XSS (Cross Site Scripting): When a user inserts HTML/client-side script in the user interface of a web application and this insertion is visible to other users, it is called XSS.

❖ Spoofing: The creation of hoax look-alike websites or emails is called spoofing. Cheats create dummy URLs to lure gullible users into revealing bank account details, and commit frauds.
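The SQL injection risk listed above can be demonstrated with Python's built-in sqlite3 module: concatenating user input into the query text lets a crafted value change the query's meaning, while a parameterized query treats the same value as plain data. The table and the payload are contrived for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "a-secret"), ("bob", "b-secret")])

malicious = "nobody' OR '1'='1"  # classic injection payload

# Vulnerable: user input concatenated straight into the SQL text.
# The payload closes the string literal and adds an always-true clause.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + malicious + "'").fetchall()

# Safe: the same input passed as a bound parameter - it is matched
# literally against the name column, and no row matches.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()

print(len(unsafe), len(safe))  # prints: 2 0
```

A security test suite would feed payloads like this into every input field and flag any query whose result set changes, which is exactly the symptom the unsafe variant shows here.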

Due to the logical limitations of security testing, passing security testing is


not an indication that no flaws exist or that the system adequately satisfies
the security requirements.

Backup & Recovery Testing


This is a test of the backup and restore procedures. Ask any youngster about the reasons for losing valuable pictures, photographs, video shoots, YouTube downloads, music albums or messages - from a desktop, laptop or mobile - and the main reason would be "there was no backup". Most users do not take backups of their data and experience hardship and anguish when the hard disk crashes, a drive is not readable or the files get corrupted by a virus attack. Many users take regular backups of their machine data, but very few have restored from those backups and recovered lost data; in fact, only a few have the conviction that the backup data is reliable. It is essential therefore to check that the backup and restore options provided in an application work properly and are reliable.

Localization Testing
Localization Testing focuses on areas affected during localization viz. user
interface and content, culture, language specific & region specific areas.
For example, any help message in English needs to be translated to French or German for the benefit of users in those countries, and the translation of some "list of strings" may be out of context or wrongly translated. In some cases more words may be required in a particular language to explain something that is expressed in one or two words in English. The word "commode" in French has a totally different connotation from the similar-sounding word "commodity" in English. Testing must ensure that such ambiguities do not exist, and also that the translated content fits into the same user interface areas on screen. Sometimes a keyboard shortcut may have no function in the source language (English) but is used for typing characters in the layout of the target language (Tamil).

Release Testing
Consider widely used applications like McAfee anti-virus, Google Chrome, Skype and several gaming applications. Changes happen in each of these applications on a daily basis due to defect fixing, enhancements, new features, etc., and a new version of the application is ready for installation. It is impossible for every user to upgrade all his applications on a daily basis - if he did, he would not be able to do any productive work at all in a given day.

Hence vendors select the bug fixes, new features, and documentation for a
particular release. The bugs are prioritized, the source files are retrieved
from the repository, changes made and recompiled, linked and a new build
is ready with a new version number. After successful testing an installable
media package is created and kept ready. At a specified timeline the
package is released to users who have an option to install the new upgrade
or defer it for a later date. Care is taken such that every release checks the
existing configuration, the current version of the application at the user site
and upgrades it to the latest version in a proper sequential manner to
ensure that all changes from the previous upgrade till the current release
are implemented.

7.8.4 User Acceptance Testing

User acceptance testing is performed by the client to certify the system with respect to the requirements that were agreed upon. This testing happens in the final phase of testing, before moving the software application to market or to the live production environment. User acceptance testing consists of verifying that the solution works for the user. It is done after system testing to ensure that the solution will work for the user, i.e. to test that the user accepts the solution. It is assumed that if the software works as required and without issues during normal use, one can reasonably extrapolate the same level of stability in production.

A subject-matter expert (SME) from the client site is preferably made the
owner for these tests and the SME provides a summary of the findings for
review. Users of the system perform tests similar to what would occur in
real-life scenarios. The purpose of this testing is to validate the end to end
business flow. For example while testing a banking application the testers
will test for creating customer accounts, depositing money, withdrawing
money with no balance, printing statements, stopping cheques payments,
suspending accounts, changing customer details, assigning nominees,
closing accounts, transferring funds, calculating bank interest, etc. - not necessarily in any logical sequence. The system should work properly, giving appropriate error messages for invalid transactions and confirmation messages for valid transactions.

There are two types of acceptance testing: Alpha and Beta testing

Alpha testing takes place at the developers' site and involves testing of the operational system by the client's internal staff before it is released to external customers. It is a type of acceptance testing performed on a version of the software product which is not yet final, to identify all possible issues/bugs before releasing the product to everyday users or the public. The focus of this testing is to simulate real users by using black box techniques and carrying out tasks that a typical user might perform. Alpha testing is carried out in a controlled lab environment, and usually the testers are internal employees of the organization.

Beta testing, also called "field testing", takes place at customers' sites and involves testing by a group of customers who use the system and provide feedback before the system is released to other customers. Beta testing is performed on the completed (100%) application by "real users" in a "real environment", and can be considered a form of external user acceptance testing. Some vendors release a "beta version" of their products, especially gaming software, to specified users and take their feedback before releasing the product. Since it is the final test before shipping a product to customers, direct feedback from customers is a major advantage of beta testing. This is also the last stage of testing, at which a product vendor offers the product outside the company for free trials.

7.8.5 Smoke Testing, Regression Testing and Exhaustive Testing

Smoke Testing
Smoke testing refers to various classes of tests of systems, usually intended to determine whether they are ready for more robust testing. The expression was first used in plumbing, referring to tests for the detection of cracks, leaks or breaks in closed systems of pipes. By metaphorical extension, the term moved to electronic hardware testing: one plugs in a new board and turns on the power, and if smoke is seen coming from the board, the power is turned off - no further testing is needed.


Smoke testing in software is performed after a software build to ascertain that the critical functionalities of the program are working fine. It is executed before any detailed functional or regression tests are run on the build. The purpose is to reject a badly broken application, so that the QA team does not waste time installing and testing it. In smoke testing, the test cases chosen cover the most important functionality or components of the system. The objective is not to perform exhaustive testing, but to verify that the critical functionalities of the system are working properly.
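A smoke suite can be sketched as a short list of fast checks on critical functionality that must all pass before deeper testing begins. The three check functions below are hypothetical stand-ins for real application probes:

```python
# Minimal smoke suite: a few fast checks of critical functionality.
# If any check fails, the build is rejected before detailed testing begins.

def app_starts() -> bool:          # stand-in: does the application initialize?
    return True

def login_works() -> bool:         # stand-in: can a user log in?
    return True

def main_page_renders() -> bool:   # stand-in: does the main screen load?
    return True

SMOKE_CHECKS = [app_starts, login_works, main_page_renders]

def run_smoke_tests() -> bool:
    """Return True only if every critical check passes."""
    return all(check() for check in SMOKE_CHECKS)

build_accepted = run_smoke_tests()
print("build accepted" if build_accepted else "build rejected")
```

In practice each stand-in would be replaced by a real probe (launching the executable, hitting a login endpoint, and so on), and the whole suite is kept fast enough to run on every build.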

Regression Testing
Since the objective of testing is to find defects, fixes are required to remove the defects found. Changes can occur in one unit or in multiple units. It is important to ensure that such changes only fix the problems detected and do not add further defects. Regression testing is the process of testing changes to computer programs to make sure that the older programming still works with the new changes. It seeks to uncover new software bugs, or regressions, in existing functional and non-functional areas of a system after changes such as enhancements, patches or configuration changes have been made. Common methods of regression testing include rerunning previously completed tests to check whether program behavior has changed and whether previously fixed faults have re-emerged.
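The "rerun previous tests" idea can be sketched as follows; the discount function, its buggy "fix", and the saved expected values are all contrived for illustration:

```python
# Saved test cases capturing known-good behavior of a discount routine:
# pairs of (amount, expected discount).
saved_cases = [(100, 10.0), (200, 20.0), (50, 5.0)]

def discount_v1(amount):
    """Original version: flat 10% discount."""
    return amount * 0.10

def discount_v2(amount):
    """A 'fix' that accidentally changes behavior for small amounts."""
    if amount < 60:
        return 0.0
    return amount * 0.10

def regressions(func, cases):
    """Rerun the stored cases and report any that now fail."""
    return [(amt, exp, func(amt)) for amt, exp in cases if func(amt) != exp]

assert regressions(discount_v1, saved_cases) == []  # old version passes
broken = regressions(discount_v2, saved_cases)
print(broken)  # the regression introduced by the change
```

The stored suite acts as the safety net: the change to `discount_v2` is only caught because the old cases are rerun, which is exactly what automated regression tools do at scale.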

Regressions occur whenever software functionality that was previously working correctly stops working as intended. They occur as an unintended consequence of program changes, when the newly developed part of the software overlaps with previously existing code, or when a fix for a problem in one area inadvertently causes a software bug in another. When some feature is redesigned, some of the same mistakes that were made in the original implementation of the feature may be made in the redesign.


"Also as a consequence of the introduction of new bugs, program maintenance requires far more system testing per statement written than any other programming. Theoretically, after each fix one must run the entire batch of test cases previously run against the system, to ensure that it has not been damaged in an obscure way. In practice, such regression testing must indeed approximate this theoretical idea, and it is very costly." - Fred Brooks, The Mythical Man-Month

Over the life of any product or application, regression testing is often the largest test effort in industrial software development, because numerous details and changes must be rechecked. Regression testing is usually done using automated testing tools: initial tests are recorded, and the test suite contains software tools that allow the testing environment to execute all the regression test cases automatically.

Exhaustive Testing
Exhaustive testing, as the name suggests, checks the outputs given by an application for all possible inputs, with the maximum possible permutations and combinations of those inputs. It is feasible only when the program and the scope of the project are small; for bigger projects it is impractical - in effect, a myth. It has theoretical significance and is useful to know and learn, but in practice it implies an enormous amount of effort and cost.
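For a tiny input domain exhaustive testing really is possible. The sketch below exhaustively checks a toy 8-bit addition routine, then counts how many cases two full 32-bit inputs would require, showing why the approach does not scale:

```python
from itertools import product

def add8(a: int, b: int) -> int:
    """Toy unit under test: 8-bit wrap-around addition."""
    return (a + b) % 256

# Exhaustive testing is feasible here: only 256 * 256 = 65,536 cases.
cases = 0
for a, b in product(range(256), repeat=2):
    assert add8(a, b) == (a + b) % 256
    cases += 1

# For two 32-bit inputs the count explodes far beyond practicality:
# (2^32)^2 = 2^64, roughly 1.8 * 10^19 test cases.
cases_32bit = (2 ** 32) ** 2
print(cases, cases_32bit)
```

Even at a billion test executions per second, the 32-bit case would take centuries, which is why practical testing selects representative cases instead of enumerating all of them.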

Refer to the diagram below. Fig. A shows the logical paths that can be executed for a simple program with five conditions to be checked, 10 decisions and 7 statements to be executed. The number of tests that would need to be executed is impossible to achieve even if one works at the speed of light (maybe the Indian Superman Rajnikant in Robot can!). If one considers Microsoft, where the tester-developer ratio is 1:1, their products should have no defects. Yet in July 1988, IBM and Microsoft released IBM DOS 4.0, and soon disaster was written all over the walls - data-eating bugs, corrupted disks, and mismanaged memory. Microsoft's version was that IBM botched the testing; IBM's story was that Microsoft shouldn't have expected IBM DOS 4.0 to work on non-IBM hardware. From the DOS era till date, Microsoft products have shipped with numerous bugs, security holes, vulnerability issues, etc.


Testers can find bugs in the software but can't make it 100% bug free. This is the truth, and one has to live with it. There is no such thing as truly exhaustive testing, and it is very rare for products to completely pass it. There are always a few things that fail, but they may occur only in very rare and unlikely scenarios, and so are labeled as low-priority bugs affecting only a very small population of users. One must design an optimal test suite that is of reasonable size and can uncover as many of the errors existing in the system as possible.

7.9 DYNAMIC TESTING - WHITE BOX TESTING

White-box testing is also known as clear box testing, glass box testing,
transparent box testing, and structural testing. It is a method of testing
software that tests internal structures or workings of an application, as
opposed to its functionality (i.e. black-box testing). In white-box testing an
internal perspective of the system, as well as programming skills, are used
to design test cases. In the white-box testing approach, designing test
cases requires thorough knowledge about the internal structure of
software, and therefore the white-box testing is called structural testing.

The tester chooses inputs to exercise paths through the code and
determine the appropriate outputs. White-box testing can be applied at the
unit, integration and system levels of the software testing process.

7.9.1 White Box Testing - Coverage

White-box test design techniques include the following:

❖ Statement (logic) coverage
❖ Decision (branch) coverage
❖ Condition coverage
❖ Control flow testing

White box is logic driven testing and permits the test engineer to
examine the internal structure of the program. The different techniques
exercise every visible path of the source code to minimize errors and
create an error-free environment. The whole point of white-box testing is
the ability to know which line of the code is being executed and being able
to identify what the correct output should be. It uses explicit knowledge of


the internal workings of the item being tested to select the test data. Input
documents required here will be program specifications.

What does "coverage" mean?

❖ Practically, NOT all possible combinations of data values or paths can be tested
❖ Coverage is a way of defining how many of the paths were actually exercised by the tests
❖ Coverage goals can vary by risk, trust, and level of test
❖ Coverage tools are available to find the percentage covered
❖ 90% is a good achievement; 95% is daunting

White Box Testing - Statement (Logic) Coverage


The statement coverage strategy aims to design test cases so that every
statement in a program is executed at least once. The principal idea is that
unless a statement is executed, it is very hard to determine if an error
exists in that statement or observe whether it causes failure due to some
illegal memory access, wrong result computation, etc. Statement testing covers a set of paths such that every node lies on at least one path. It attempts to cover all statements during test execution; the tester chooses input data that will result in the selected path.

Statement coverage measures the degree to which the test cases exercise or cover the logic (source code) of the program. It is the percentage of executable statements exercised by a test suite:

Statement Coverage % = (No. of statements exercised / Total no. of statements) x 100

Example: If a program has 100 statements and the tests exercise 87 statements, then statement coverage = 87%.

Refer to the accompanying diagram.
The simple program for this diagram is:

    void procedure(int a, int b, int x)
    {
        if ((a > 1) && (b == 0)) {
            x = x / a;
        }
        if ((a == 2) || (x > 1)) {
            x = x + 1;
        }
    }

Path 1 - 3 - 4 - 5 - 7 is sufficient for statement coverage.
Possible input: a = 2, b = 0, x = 4
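Statement coverage can also be measured mechanically. The sketch below re-expresses the procedure above in Python (the translation is illustrative, not from the text) and uses the standard library's sys.settrace hook to record which of its lines execute for the suggested input:

```python
import sys

def procedure(a, b, x):
    if a > 1 and b == 0:   # decision 1
        x = x / a
    if a == 2 or x > 1:    # decision 2
        x = x + 1
    return x

executed = set()  # line offsets within procedure() that actually ran

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "procedure":
        executed.add(frame.f_lineno - frame.f_code.co_firstlineno)
    return tracer

prev = sys.gettrace()
sys.settrace(tracer)
result = procedure(2, 0, 4)  # the input suggested for statement coverage
sys.settrace(prev)

print(sorted(executed), result)
```

With a = 2, b = 0, x = 4 both conditions are true, so every statement line of the function is hit - a single test case achieves 100% statement coverage, exactly as the path analysis above claims. Real coverage tools automate this same bookkeeping.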

Statement coverage is the weakest form of coverage. The risk is that some branches may be missed: just because a test suite covers all statements does not make it an ideal method of testing, as it may not ensure that all branches or conditions get covered.

White Box Testing - Decision Coverage


Decision coverage testing is also known as "branch coverage", "all-edges coverage" and "basis path coverage". In the branch coverage-based testing strategy, test cases are designed to make each branch condition assume true and false values in turn. Branch testing is also known as edge testing, as in this scheme each edge of a program's control flow graph is traversed at least once. Branch testing guarantees statement coverage, and is thus a stronger testing strategy than statement coverage-based testing.

Decision Coverage % = (No. of decision outcomes exercised / Total no. of decision outcomes) x 100

Decision coverage measures whether the Boolean expressions tested in control structures (such as the if-statement and while-statement) evaluate to both true and false.

Test cases must be such that each decision has a true and a false outcome at least once. Consider the same example as shown in the accompanying diagram:

❖ Case 1: a=2, b=0, x>1
Here decision 1 is true and decision 2 is true; path ACE is covered.

❖ Case 2: a<=1, b!=0, x<=1
Here decision 1 is false and decision 2 is false; path ABD is covered.

Even though these test cases satisfy decision coverage, they still do not cover path ACD and path ABE; hence decision coverage, though stronger than statement coverage, is still weak. For the same example, at least two test cases are needed to execute the true and false outcomes of the decisions at least once.
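The two decision-coverage cases can be checked by instrumenting a Python re-expression of the same routine to record each decision's outcome (the translation and the outcome log are illustrative additions):

```python
# Record every outcome each decision takes across the test runs.
outcomes = {"d1": set(), "d2": set()}

def procedure(a, b, x):
    d1 = a > 1 and b == 0
    outcomes["d1"].add(d1)   # log decision 1's outcome
    if d1:
        x = x / a
    d2 = a == 2 or x > 1
    outcomes["d2"].add(d2)   # log decision 2's outcome
    if d2:
        x = x + 1
    return x

procedure(2, 0, 4)   # Case 1: both decisions true  (path ACE)
procedure(1, 1, 1)   # Case 2: both decisions false (path ABD)

# Each decision has now taken both outcomes at least once.
print(outcomes)
```

After the two runs, both decisions have been seen true and false, confirming that this pair of cases achieves decision coverage even though two of the four paths remain unexecuted.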


White Box Testing - Condition Coverage


Condition testing exercises the logical conditions contained in a program module. In this structural testing, test cases are designed to make each component of a composite conditional expression assume both true and false values.

For a composite conditional expression of n components, condition coverage requires 2^n test cases. The number of test cases thus increases exponentially with the number of component conditions, so a condition coverage-based testing technique is practical only if n (the number of conditions) is small.
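The 2^n figure can be checked by enumeration: for n component conditions there are exactly 2^n true/false assignments:

```python
from itertools import product

def condition_cases(n: int) -> list:
    """All true/false assignments for n component conditions."""
    return list(product([True, False], repeat=n))

# 2 components -> 4 cases; 5 components -> 32 cases.
print(len(condition_cases(2)), len(condition_cases(5)))
```

Each tuple in the result corresponds to one test case forcing a particular combination of component outcomes, which makes the exponential growth in n concrete.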

For the example shown in the diagram:


Test Case1: a=1, b=0, x=3


Condition1 is F, Condition2 is T and Path ABE is covered
Test Case2: a=1, b=1, x=1
Condition1 is F, Condition2 is F and path ABD is covered.

Condition testing is a stronger testing strategy than branch testing, and
branch testing is a stronger testing strategy than statement coverage-based
testing.
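One way to make condition coverage concrete is to instrument each atomic condition and record the truth values it takes across a test suite. The sketch below reuses the assumed Myers-style conditions; names and the candidate suite are illustrative.

```python
# A sketch (assumed conditions, as before) that records the truth value
# each atomic condition takes, so a candidate test suite can be audited
# for condition coverage.
from collections import defaultdict

seen = defaultdict(set)

def cond(name, value):
    seen[name].add(value)     # remember every outcome this condition took
    return value

def sample(a, b, x):
    if cond("a>1", a > 1) and cond("b==0", b == 0):
        x = x / a
    if cond("a==2", a == 2) or cond("x>1", x > 1):
        x = x + 1
    return x

for a, b, x in [(2, 0, 4), (1, 1, 1)]:    # a candidate test suite
    sample(a, b, x)

# Conditions whose True and False outcomes were both exercised:
covered = {name for name, vals in seen.items() if vals == {True, False}}
print(sorted(covered))
```

Because of short-circuit evaluation, `b==0` and `x>1` are not evaluated on every run, so this two-case suite covers only two of the four components; more cases would be needed for full condition coverage.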


7.9.2 Basis Path Testing

In software engineering, basis path testing (or structured testing) is a
white box method for designing test cases. It uses the source code of a
program to attempt to find every possible executable path. The aim is to
test each individual path in as many ways as possible in order to maximize
the coverage of each test case. This gives the best possible chance of
discovering all faults within a piece of code. The method analyzes the
control flow graph of a program to find a set of linearly independent paths
of execution. It normally uses McCabe's cyclomatic complexity to determine
the number of linearly independent paths and then generates test cases for
each path thus obtained.

In order to understand the path coverage-based testing strategy, it is


necessary to understand the control flow graph (CFG) of a program. A
control flow graph describes the sequence in which the different
instructions of a program get executed. In other words, a control flow
graph describes how the control flows through the program.
On a flow graph:

❖ Arrows called edges represent the flow of control. An edge in the CFG
represents the ability of the program to flow from its current statement to
the statement at the other end of the edge.

❖ Circles called nodes represent one or more actions. A node in a CFG
represents a program statement.

❖ Areas bounded by edges and nodes are called regions.

❖ A predicate node is a node containing a condition.

A control flow graph can be derived from a simple flow chart of a program.
An example is given in the diagram given below.


Flow Chart and Flow Graph

Referring to figure B above

❖ Each circle, called a flow graph node, represents one or more procedural
statements. A sequence of process boxes and a decision diamond can
map into a single node.

❖ The arrows on the flow graph, called edges or links, represent flow of
control and are analogous to flowchart arrows.

❖ An edge must terminate at a node, even if the node does not represent
any procedural statements.

❖ Areas bounded by edges and nodes are called regions. When counting
regions, include the area outside the graph as a region.

❖ When compound conditions are encountered in the procedural design,
the generation of the flow graph becomes slightly more complicated.

❖ Each node that contains a condition is called a predicate node and is
characterized by two or more edges emanating from it.


In order to draw the control flow graph of a program, all the statements of
a program must be numbered first. The different numbered statements
serve as nodes of the control flow graph. An edge from one node to
another node exists if the execution of the statement representing the first
node can result in the transfer of control to the other node.

Writing test cases to cover all the paths of a typical program is impractical.
For this reason, the path-coverage testing does not require coverage of all
paths but only coverage of "linearly independent paths". A linearly
independent path is any path through the program that introduces at least
one new edge that is not included in any other linearly independent paths.
If a path has one new node compared to all other linearly independent
paths, then the path is also linearly independent.

7.9.3 Cyclomatic Complexity

For more complicated programs it is not easy to determine the number of
independent paths of the program. McCabe's cyclomatic complexity is a
software quality metric that quantifies the complexity of a software
program. It defines an upper bound for the number of linearly independent
paths through a program, and is a practical way of determining that
maximum. The higher the number, the more complex the code.

There are three different ways to compute the cyclomatic complexity V(G).
The answers computed by the three methods are the same and guaranteed
to agree.

❖ V(G) = E - N + 2, where E is the number of edges and N is the number
of nodes of the graph

❖ V(G) = P + 1, where P is the number of predicate nodes

❖ V(G) = R, where R is the number of regions in the graph


For the accompanying diagram E = 7, N=6 and R= 3.


❖ V(G) = 7-6+2 = 3
❖ V(G) = 2+1 = 3
❖ V(G) = R = 3

Measurement of McCabe's cyclomatic complexity metric makes developers
sensitive to the fact that programs with high McCabe numbers (e.g. > 10)
are likely to be difficult to understand and therefore have a higher
probability of containing defects. Complex systems have more lines of code
and more interactions, and therefore more security bugs. Complex systems
are harder to test and therefore more likely to have untested portions. The
cyclomatic complexity number also indicates the number of test cases that
would have to be written to execute all linearly independent paths in a
program.
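The V(G) = E - N + 2 computation can be sketched directly from a control flow graph stored as an adjacency list. The node numbering below is illustrative, chosen only to match the worked values E = 7 and N = 6; it is not the book's figure.

```python
# Minimal sketch: V(G) = E - N + 2 computed from an adjacency-list CFG.
def cyclomatic_complexity(cfg):
    # Collect every node that appears as a source or a target.
    nodes = set(cfg) | {m for targets in cfg.values() for m in targets}
    edges = sum(len(targets) for targets in cfg.values())
    return edges - len(nodes) + 2

# Illustrative graph with E = 7 and N = 6, matching the worked example:
cfg = {
    1: [2],
    2: [3, 4],     # predicate node
    3: [6],
    4: [5, 6],     # predicate node
    5: [6],
    6: [],
}
print(cyclomatic_complexity(cfg))   # 7 - 6 + 2 = 3
```

The two predicate nodes give the same answer by V(G) = P + 1 = 3, as the three formulas must agree.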

The thresholds for cyclomatic complexity and reliability risk, based on
categories established by the Software Engineering Institute, are

❖ 1 - 10 Simple procedure, little risk

❖ 11- 20 More complex, moderate risk

❖ 21 - 50 Complex, high risk

❖ >50 Untestable, very high risk


Advantages of using McCabe's metric

❖ Quantifies the logical complexity
❖ Can be used as an ease-of-maintenance metric
❖ Used as a quality metric, it gives the relative complexity of various designs
❖ Indicates the minimum testing effort and the best areas of concentration for testing
❖ Is easy to apply

Drawbacks

❖ CC is a measure of the program's control complexity, not its data
complexity

❖ The same weight is placed on nested and non-nested loops, although
deeply nested conditional structures are harder to understand than
non-nested structures

❖ It may give a misleading figure for programs with many simple
comparisons and decision structures

7.9.4 Loop Testing

Loop testing is a white box testing technique that focuses exclusively on
the validity of loop constructs. The goal of loop testing is to test
"while-do", "repeat-until", "do-while" and any other loops in a program
thoroughly, by trying to ensure that each is executed.

Loops can be of different types (Refer diagram):

Simple Loops
❖ Loops whose loop bodies contain no other loops
❖ The innermost loops when loops are nested

Nested Loop
❖ They are combinations of loops such that each is contained inside the

loop body of the next.


❖ For testing start at the innermost loop, conduct simple loop test for the

innermost loop and then work outward, conducting tests for the next
loop but keeping all other loops at minimum

Concatenated Loop
❖ In such constructs, one loop is dependent on the other
❖ They are loops such that each follows the next in the code
❖ Execution of the next loop begins after the previous terminates.


Loop Testing

Spaghetti Loop: Such loops are not desirable; one must redesign them using
structured constructs.

Loop testing attempts to 'break' the program by trying to have a loop
executed with fewer than the minimum, as well as more than the maximum,
number of iterations.
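The simple-loop case can be sketched with the standard heuristic: for a loop that may run at most n times, exercise it with 0, 1, 2, a typical m, n-1, n and n+1 iterations. The function below is an invented stand-in for a loop under test, not from the book.

```python
# Hypothetical loop under test: sums at most `limit` leading items.
def sum_first(values, limit):
    total = 0
    for i, v in enumerate(values):
        if i >= limit:      # loop exit condition under test
            break
        total += v
    return total

n = 5                        # assumed maximum iteration count
data = [1, 2, 3, 4, 5]
# Standard simple-loop test values: skip the loop, one pass, two passes,
# a typical count, n - 1, n, and an attempt at n + 1 passes:
for iterations in (0, 1, 2, 3, n - 1, n, n + 1):
    print(iterations, sum_first(data, iterations))
```

The n + 1 case checks that the loop cannot be driven past its maximum; here it simply runs out of data, which is the behavior being verified.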

7.9.5 Memory Leak

All MS Windows users have experienced at least one occurrence of an
unexpected and unexplainable crash or program stoppage with no warning
whatsoever. One simply reboots or restarts the application and carries on,
hoping that the error will not recur. One of the likely reasons for such
erratic behavior is a "memory leak". In simple language, a memory leak is a
loss of available memory that occurs when a program fails to return memory
it obtained for temporary use.

Memory leaks are present whenever a program loses track of memory. As a
result, the available memory for that application drains away completely
and the program can no longer function. In object-oriented programming, a
memory leak may happen when an object is stored in memory but cannot


be accessed by the running code. For a program that is frequently opened


or that runs continuously, even a very small memory leak can eventually
cause the program to terminate.

A memory leak is a common type of defect and the result of a programming
bug. Leaks are difficult to detect, so it is very important to test for
them during the development phase. One must remember that constantly
increasing memory usage is not necessarily evidence of a memory leak:
applications may store some information in memory in the form of a cache.
If the cache grows large enough to cause problems, this may be a
programming or design error, but it is not a memory leak, as the
information nominally remains in use.

The following example of software to control an elevator, written in pseudo
code, is intended to show how a memory leak can come about, and its
effects, without needing any programming knowledge. The program code is
executed whenever the passenger inside the lift presses a button to select
a floor.

When button is pressed


Get some memory, which will be used to remember the floor number
Put the floor number into the memory
Are we already on the target floor?
If so
Do nothing: finished
Otherwise:
Wait until the lift is idle
Go to the required floor
Release the memory used to remember the floor number: finished

In this example, the memory leak occurs if the floor number requested is
the same floor that the lift is on; the condition for releasing the memory
is skipped. Each time this case occurs, more memory is leaked. The
probability that the user presses the button for the current floor is low,
so the error may not be detected during testing, but the memory will leak
over a long period of use of the lift. The consequences would be
unpleasant; at the very least, the lift would stop responding to requests
to move to another floor. Passengers stuck in the lift is a crisis, apart
from being a big embarrassment for a reputed hotel. The leak can only be
cleared by restarting the application or by a power reset.
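The elevator pseudo code can be mirrored in Python, modeling memory with a simple allocation counter so the leak is visible without a real allocator. The names and structure here are illustrative, not part of the book's example.

```python
# Allocation counter stands in for real memory management.
allocated = 0

def press_button(current_floor, target_floor):
    global allocated
    allocated += 1                  # get some memory for the floor number
    floor = target_floor            # put the floor number into the memory
    if current_floor == floor:
        return                      # BUG: memory never released on this path
    # ... wait until the lift is idle, go to the required floor ...
    allocated -= 1                  # release the memory: finished

for _ in range(3):
    press_button(4, 4)              # passenger presses the current floor
print(allocated)                    # 3 blocks leaked
```

The fix is to release the memory on every exit path, for example with a try/finally block, so the early return can no longer skip the release.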



There are memory leak detection tools that help to identify


❖ memory allocated but not de-allocated
❖ uninitialized memory locations

7.9.6 Mutation testing

Tests can be created to verify the correctness of the implementation of a


given software system, but the creation of tests still poses the question
whether the tests are correct and sufficiently cover the requirements that
have originated the implementation. This technological problem is itself an
instance of a deeper philosophical problem named "Who will guard the
guards?".

In this context, mutation testing was pioneered in the 1970s to locate and
expose weaknesses in test suites. In mutation testing, the software is first
tested using an initial test suite built from the different white box
testing strategies. After the initial testing is complete, mutation testing
is taken up. The idea behind mutation testing is to make a few arbitrary
changes to a program at a time. Each changed program is called a mutated
program and the change effected is called a mutant. A mutated program is
tested against the full test suite of the program. If at least one test
case exists in the test suite for which a mutant gives an incorrect result,
then the mutant is said to be dead.

If a mutant remains alive even after all the test cases have been
exhausted, the test data is enhanced to kill the mutant. The process of
generation and killing of mutants can be automated by predefining a set of
primitive changes that can be applied to the program. These primitive
changes can be alterations such as changing an arithmetic operator,
changing the value of a constant, changing a data type, etc. A major
disadvantage of the mutation-based testing approach is that it is
computationally very expensive, since a large number of possible mutants
can be generated.
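A toy illustration of the idea (not a real mutation tool): a single primitive change, flipping a comparison operator, produces a mutant that the test suite should kill.

```python
# Original unit under test.
def original_max(a, b):
    return a if a >= b else b

# Mutant: the comparison operator has been flipped from `>=` to `<=`.
def mutant_max(a, b):
    return a if a <= b else b

test_suite = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

def killed(fn):
    # A mutant is dead if at least one test case gives an incorrect result.
    return any(fn(*args) != expected for args, expected in test_suite)

print(killed(original_max))   # False: the original passes every test
print(killed(mutant_max))     # True: the suite kills this mutant
```

If the suite had contained only the case (4, 4), the mutant would have survived, signaling that the test data needs to be enhanced, exactly the feedback mutation testing is designed to give.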

This problem of the expense of mutation testing had reduced its practical
use as a method of software testing, but the increased use of object
oriented programming languages and unit testing frameworks has led to
the creation of mutation testing tools for many programming languages as
a way to test individual portions of an application.


Since mutation testing generates a large number of mutants and requires
us to check each mutant against the full test suite, it is not suitable for
manual testing. Mutation testing should be used in conjunction with a
testing tool that runs all the test cases automatically.

7.9.7. White Box Testing - Advantages & Disadvantage

Advantages

❖ Since knowledge of the internal code structure is a prerequisite, it
becomes easy to find out which type of input/data can help test the
application effectively.

❖ Helps in optimization of code by revealing hidden errors and being able


to remove these possible defects.

❖ Provides traceability of tests from the source, allowing future changes to


the software to be easily captured in changes to the tests.

❖ White box tests are easy to automate.

Disadvantages:

❖ As the knowledge of internal coding structure is a pre-requisite, a skilled


tester is needed to carry out this type of testing, which increases the
cost.

❖ It is nearly impossible to look into every bit of code to find out hidden
errors, which may create problems, resulting in failure of the application.

❖ The tests focus on the software as it exists, and missing functionality


may not be discovered.


7.10 DYNAMIC TESTING -BLACK BOX TESTING

Black-box testing is a method of software testing that examines the


functionality of an application without peering into its internal structures or
workings. This testing is also known as behavioral, functional, opaque-box
and closed-box. This method of test can be applied to virtually every level
of software testing: unit, integration, system and acceptance.

Black box testing is data-driven or input/output-driven testing, and the
tester is completely unconcerned about the internal behavior and structure
of the program. In this approach test cases are designed using the
functional specification of the software, i.e. without any knowledge of its
internal structure. For this reason, black-box testing is also known as
functional testing. One must remember that black box testing is not an
alternative to white box techniques; it is a complementary approach that is
likely to uncover a different class of errors than white box methods.

Black box testing attempts to find errors in the following categories


❖ Incorrect or missing functions
❖ Interface errors
❖ Errors in data structures or external database access
❖ Behavior or performance errors
❖ Initialization errors

These tests are best carried out by testers, not by the developers.
Programmers are logical thinkers, so they catch many of the 'logical'
defects. But they also get possessive of the code they write and tend to
ignore functional features while testing. Real users are not necessarily
logical and, in real environmental circumstances, often behave illogically.

There are two main approaches to designing black box test cases.

❖ Equivalence class partitioning

❖ Boundary value analysis


7.10.1 Black Box Testing - Equivalence Partitioning

In this approach, the domain of input values to a program is partitioned


into a set of equivalence classes. The main idea behind defining the
equivalence classes is that testing the code with any one value belonging
to an equivalence class is as good as testing the software with any other
value belonging to that equivalence class. The assumption is that if one
value in a group works, all will work. One from each partition is better than
all from one. This method divides the input domain of a program into
categories of data for deriving test cases.

For those familiar with elementary statistical techniques, equivalence
partitioning is very similar to class intervals and tally-marks analysis.
Equivalence classes for software can be designed by examining the input
data and output data.

The process consists of two steps:

❖ Identify the equivalence classes
❖ Write test cases for each class

If an input condition specifies a continuous range of values, there is one


valid class and two invalid classes.

Example 1:

The input variable is a mortgage applicant's monthly income. The valid
range is $1,000 to $75,000 per month.
❖ Valid class: {1000 <= income <= 75000}
❖ Invalid classes: {income < 1000} and {income > 75000}

Example 2: A software program calculates interests for savings account in


a bank. The interest rates vary depending on the balance in the account.
Suppose 3% rate of interest is given if the balance in the account is in the
range of Rs 0 to Rs. 1000, 5% rate of interest is given if the balance in the
account is in the range of Rs. 1000 to Rs. 10000, and 7% rate of interest is
given if the balance in the account is Rs. 10000 and above. Maximum
balance in an account cannot exceed Rs. 2 Crores. One must initially
identify three valid equivalence partitions and one invalid partition.



Invalid Partition | Valid (for 3%)  | Valid (for 5%)     | Valid (for 7%)          | Invalid Partition
-0.01             | 0.00 to 1000.00 | 1000.01 to 9999.99 | 10000.00 to 20000000.00 | 20000000.01

Assuming only numeric partitions, in the above example there are five
partitions, even though the specification mentioned only four.

An inexperienced tester will take the easy way out, testing at every
Rs. 500 increment: Rs. 500.00, 1000.00, 1500.00, 2000.00, 2500.00, and so
on up to Rs. 9000.00. This covers only two of the five partitions, so the
approach is less effective than equivalence partitioning, while using
nearly four times as many test cases as the five that partitioning needs.
The invalid partitions represent inputs that are not expected for this
particular field.

It can be seen that equivalence partitioning uses fewest test cases to cover
maximum requirements
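The interest-rate example can be sketched as code: one representative value is drawn from each partition rather than stepping through the whole range. The function below is an assumed implementation of the stated norms, written only to make the partitions executable.

```python
# Assumed implementation of the interest-rate rules described above.
def interest_rate(balance):
    if balance < 0 or balance > 20_000_000:
        raise ValueError("invalid balance")
    if balance <= 1000:
        return 3          # 3% for Rs 0.00 to Rs 1000.00
    if balance < 10_000:
        return 5          # 5% for Rs 1000.01 to Rs 9999.99
    return 7              # 7% for Rs 10000.00 and above

# One representative value from each of the five partitions:
cases = [(-0.01, "invalid"), (500.00, 3), (5000.00, 5),
         (15_000.00, 7), (20_000_000.01, "invalid")]
for balance, expected in cases:
    try:
        print(balance, interest_rate(balance))
    except ValueError:
        print(balance, "rejected")
```

Five test values cover all five partitions, where the Rs. 500-increment approach above needed eighteen values to cover only two.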

7.10.2 Black Box Testing - Boundary Value Analysis

"Bugs lurk in corners and congregate at boundaries" - Boris Beizer

A typical error committed by programmers occurs in programs that print
statements of accounts, invoices, purchase orders or long lists of
inventory items. Many times the page skips are not consistent, or some
lines are missed out at the beginning or end of a page. The programmer
attempts to correct the error by replacing a "less than" check with a
"greater than" check, and still the error persists; or the programmer ends
up changing an "AND" in the code to an "OR", with no resulting success.

This is because one type of programming error frequently occurs at the
boundaries of the different equivalence classes of inputs. Programmers
often fail to see the special processing required by input values that lie
at the boundary of the different equivalence classes. The less likely a
piece of code is to execute, the more likely it is that a bug lurks there,
simply because that "corner" has not been tested extensively. Boundaries
are usually found at the limits of loops and ranges, which deal with such
margins.


Boundary value analysis is a test case design technique that complements
equivalence partitioning. A greater number of errors occur at the
boundaries of the input domain than in the "center". It derives test cases
from both the input domain and the output domain.

Example: If an input condition specifies that a variable, say count, can
take a range of values (1 to 999), one has
❖ one valid equivalence class (1 <= count <= 999)
❖ two invalid equivalence classes (count < 1) and (count > 999)

According to boundary value analysis, test cases are written for

❖ count=0, count=1, count=2
❖ count=998, count=999 and count=1000
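The count example translates directly into code: generate the six boundary cases around a valid range and check the validation routine at each. The helper names are illustrative.

```python
# Boundary cases for a variable valid in the inclusive range low..high.
def boundary_cases(low, high):
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def is_valid(count):
    return 1 <= count <= 999      # the input condition from the example

for count in boundary_cases(1, 999):
    print(count, is_valid(count))
```

An off-by-one error, such as writing `1 < count` instead of `1 <= count`, would be caught immediately by the count=1 case, which is exactly why these values are chosen.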

7.10.3 Black Box Testing - Error Guessing

Error guessing is a test method in which test cases used to find bugs in
programs are established based on experience in prior testing.

Among a large group of programmers there are always one or two persons
who are approached to solve problems when someone is stuck for a
solution. The "expert" sometimes may not even move from his desk but
ask simple questions to diagnose the problem or sometimes may suggest a
solution seemingly "out of the hat".

Myers' study states that:


"The probability of errors remaining in the program is proportional to the
number of errors that have been found so far, which provides a rich source
for productive error guessing."

Experienced testers guess where the errors are, based on intuition and
experience, to determine what situations commonly cause software failure.
This testing is very ad hoc and not really a technique; it is more of an
art of guessing where the errors could be lurking. Some people seem to be
naturally good at testing, and others are good testers because they have a
lot of experience, either as a tester or working with a particular system,
and so are able to find out its weaknesses.

Error guessing has no explicit rules for testing; test cases can be designed
depending on the situation, either drawing from functional documents or


when an unexpected/undocumented error is found while testing operations.
There are no specific tools or techniques for this; one must write test
cases depending on the situation while testing. Typical errors include:

❖ Suppose the login screen of an application has to be tested. An


experienced test engineer may immediately see if the password typed in
the password field can be copied to a text field which may cause a
breach in the security of the application.

❖ Divide by zero is another typical error committed by programmers. The


denominator in any division process cannot be zero and programs can
give unpredictable errors if such a division is allowed.

❖ Entering blank spaces in the text fields

❖ Pressing submit button without entering values.

❖ Sorting list (playlist) is empty

❖ Uploading files exceeding maximum limits.

❖ Sorted list contains only one entry

❖ All entries in the sorted list have the same value

The basis of this approach is that, in general, people have a knack of
"smelling out" errors.

7.10.4 Black Box Testing - Cause Effect Graphing

Cause effect graphing is a testing technique that aids in selecting, in a
systematic way, a high-yield set of test cases by logically relating causes
to effects. It has the beneficial side effect of pointing out
incompleteness and ambiguities in specifications.

A cause is a distinct input condition; an effect is a distinct output
condition. Examples:

❖ Cause: Got caught in rain; Effect: Cold and cough
❖ Cause: Hours of Dance practice; Effect: First Prize in the competition

In software testing, a cause-effect graph is a directed graph that maps a


set of causes to a set of effects. The causes may be thought of as the input
to the program, and the effects may be thought of as the output. Usually
the graph shows the nodes representing the causes on the left side and the
nodes representing the effects on the right side. There may be
intermediate nodes in between that combine inputs using logical operators
such as AND and OR. Steps are quite simple:

❖ Identify the causes and effects from the specification.


❖ Develop the cause effect diagram.
❖ Create a decision table.
❖ Develop test cases from the decision table.

Example: An insurance agency has the following norms to provide premium


to its policy holders

❖ If age<=30 & No claims made, premium increase will be 200 else 500
❖ For any age if claims made are 1 to 4, premium increase will be 1000
❖ If one or more claims made then warning letter
❖ If 5 or more claims made then cancel policy


The diagram shows the cause-effect graph. The decision table derived for
the norms specified is given below.
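The insurance norms can also be sketched as an executable decision table; the function name and return format below are assumptions for illustration, not from the book.

```python
# Assumed encoding of the four insurance norms as a decision table.
def premium_decision(age, claims):
    actions = []
    if claims == 0:
        # Norm 1: age <= 30 and no claims -> increase 200, else 500
        actions.append("increase 200" if age <= 30 else "increase 500")
    elif claims <= 4:
        # Norm 2: 1 to 4 claims at any age -> increase 1000
        actions.append("increase 1000")
    if claims >= 1:
        actions.append("warning letter")     # Norm 3
    if claims >= 5:
        actions.append("cancel policy")      # Norm 4
    return actions

print(premium_decision(25, 0))   # ['increase 200']
print(premium_decision(45, 2))   # ['increase 1000', 'warning letter']
print(premium_decision(30, 6))   # ['warning letter', 'cancel policy']
```

Each print line corresponds to one column of the decision table, so the table's rules can be checked directly against the specification.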

7.10.5 Black Box Testing - Advantages & Disadvantages

Advantages

❖ Efficient when used on Large Systems

❖ Tester and Developer are Independent of Each other

❖ Testers can be Non-Technical

❖ Detailed Functional Knowledge not needed to the Tester

❖ Tests done with the end-user's point of view

❖ Helps to identify the vagueness and contradictions in the implementation


of specifications

❖ Test cases can be designed along with functional specifications

Disadvantages

❖ Difficult to identify tricky inputs if the test cases are not developed with
Functional specifications

❖ It is difficult to identify all possible inputs in limited testing time. As a


result, writing test cases may be slow and difficult.

❖ There are chances of having unidentified paths during the testing


process.

❖ There is a high probability of repeating tests already performed by the
programmer

7.11 AUTOMATION IN TESTING, WHEN TO STOP TESTING

7.11.1 Automation in Testing

Automation by definition is the use of tools and strategies that reduce


human involvement or interaction in unskilled, repetitive or redundant
tasks. Automation implies use of available technologies to reduce need of
human work. Some software testing tasks such as low-level interface
programming or regression testing, can be laborious and time consuming
to do manually. In addition, a manual approach might not always be
effective in finding certain classes of defects.

One wishes that there was a magic tool that would automate all of the
testing. There are a number of very useful tools for many different aspects
of software testing. Success with tools is not guaranteed, even if an
appropriate tool is acquired - there are also risks in using tools. It is a good
idea to use computers to do things that computers are really good at
compared to people. Tool support is useful for repetitive tasks; a computer
doesn't get bored and will be able to exactly repeat what was done before.
Handling large volumes of data (comparison) etc. would be easier.

Test automation offers a possibility to perform these types of testing


effectively. Once automated tests have been developed, they can be run
quickly and repeatedly. Many times, this can be a cost-effective method for


regression testing of software products that have a long maintenance life.


Even minor patches over the lifetime of the application can cause existing
features to break which were working at an earlier point in time.

Software Test Automation is the process of automating the steps of manual


test cases using an automation tool OR utility to shorten the testing life
cycle with respect to time. Automation helps to avoid human errors and
also speed up the testing process & ensure high quality.

Why automate?

❖ Automated software testing is the best way to increase the effectiveness,


efficiency and coverage of software testing.

❖ Manual testing of all work flows, all fields and all negative scenarios
is time-consuming and costly

❖ Automation does not require human intervention. Automated test can be


done unattended - overnight

❖ Automation increases speed of test execution

❖ Automation helps increase test coverage

❖ Manual Testing can become boring and hence error prone.

❖ Automation tests are reliable. Tests perform precisely the same


operations each time they are run, thereby eliminating human error.

❖ They are repeatable. One can Test how the application reacts after
repeated execution of the same operation.

❖ They are comprehensive. One can build a suite of tests that covers every
feature in the application.

❖ They are reusable. Can reuse tests on different versions of an


application, even if the user-interface changes.

❖ It is difficult to test multilingual sites manually


The goal of automation is to reduce the number of test cases to be run
manually, not to eliminate manual testing altogether. When developing
financial applications for banks, insurance companies or financial
institutions, where repeated testing must ensure compliance with stringent
standards, quality and financial risk coverage, testing is typically 80%
automated and 20% manual.

To implement test automation, detailed planning and effort are required.

Test cases Suitable for automation

❖ High Risk- Business Critical test cases.

❖ Test cases that are executed repeatedly

❖ Test Cases that are very tedious or difficult to perform manually

❖ Test Cases which are time consuming

❖ Test cases that need to be executed for every build of the application i.e.
test cases that are part of Regression Testing

❖ Data driven test cases i.e. those test cases that need to be executed for
multiple data values

❖ Test cases used for Load Testing of an application

Categories of test cases not suitable for automation:

❖ Test Cases that are newly designed and not executed manually at least
once

❖ Test Cases for which the requirements are changing frequently

❖ Test cases which are executed on ad-hoc basis.

The comparison below contrasts manual testing and automated testing.



Manual Testing vs. Automated Testing

❖ Execution: in manual testing the flow is completed by humans; in
automated testing the flow is completed by automation scripts or code.

❖ Skills: no special skills are required to write manual test cases;
special skills are required to write automation scripts/code.

❖ Reusability: manual tests are not reusable; automated tests are
completely reusable.

❖ Visibility: manual tests provide limited visibility and have to be
repeated for all stakeholders; automated tests provide global visibility.

❖ Risk: manual testing has a high risk of missing out on something;
automated tests have zero risk of missing a pre-decided test.

❖ Cost: no tool is required for manual testing, so cost is low; the cost
of an automation tool is high, but is nullified in the long run.

❖ Human resources: manual testing needs high investment in testers, since
test cases are executed by hand; with automation, test cases are executed
by the tool and fewer testers are required.

❖ Limitations: colour-related issues can be identified manually but cannot
be detected by automation testing.

The advantages of using a tool are many. However, automation of testing by
itself does not minimize risks or avoid defects. Some of the risks of
using tools include

❖ Unrealistic expectations for the tool

❖ Underestimating time, cost & effort for initial introduction of a tool

❖ Underestimating time and effort needed to achieve significant and


continuing benefits from the tool

❖ Underestimating effort required to maintain test assets generated

❖ Over-reliance on the tool


Some of the test execution automation tools used for testing are

❖ Quick Test Professional (HP)

❖ WinRunner (HP)

❖ Robot (IBM Rational)

❖ Functional Tester (IBM Rational)

❖ SilkTest (Borland)

❖ TestComplete (AutomatedQA)

❖ QAWizard (Seapine)

❖ TestPartner (Compuware)

❖ QEngine (AdventNet)

❖ Open source tools (Sahi, Watir)

The book gives a brief idea of two of the popular tools

7.11.2 Automation Tools - Examples

Quick Test Professional (HP UFT)


HP QuickTest Professional (QTP) is now known as HP Unified Functional
Testing (UFT). UFT provides functional and regression test automation for
software applications and environments, and can be used for enterprise
quality assurance. It supports keyword and scripting interfaces and
features a graphical user interface. It uses the Visual Basic Scripting
Edition (VBScript) language to specify a test procedure and to manipulate
the objects and controls of the application under test.

HP Unified Functional Testing was originally written by Mercury Interactive
and called QuickTest Professional. Mercury Interactive was acquired by
Hewlett-Packard (HP) in 2006. UFT version 12 was launched on March 20,
2014.


Some of the features of UFT (QTP) are

❖ Developed for web functionality

❖ Supports both web applications & windows applications

❖ Used for GUI testing and functional testing.

❖ Test scripts are developed in VBScript

❖ Supports Multiple Technologies

• .NET, J2EE, Mainframe, XML, Java, Delphi
• ERP - SAP, Siebel, PeopleSoft
• Mobile technology - smartphones, tablets, iPhones, etc.

❖ Supports both Technical & Non-Technical People

❖ It is sold with the ability to understand a few technologies; add-ins provide UFT with the ability to understand additional technologies

WinRunner
HP WinRunner was originally written by Mercury Interactive. Mercury
Interactive was acquired by Hewlett Packard (HP) in 2006. In Feb 2008, HP
announced the end of support for WinRunner suggesting migration to HP
Functional Testing software as a replacement. It is still worthwhile to
understand the features of WinRunner for automated testing.

WinRunner software was an automated functional GUI testing tool that allowed a user to record and play back user interface (UI) interactions as test scripts. As a functional test suite, it worked with HP QuickTest Professional and supported enterprise quality assurance. It captured, verified and replayed user interactions automatically, in order to identify defects and determine whether business processes worked as designed. The software implemented a proprietary Test Script Language (TSL) that allowed customization and parameterization of user input.

As WinRunner runs tests, it simulates a human user by moving the mouse cursor over the application, clicking Graphical User Interface (GUI) objects, and entering keyboard input - but WinRunner does this faster than any human user.

Salient features of WinRunner were:

❖ Testing Support: Functional GUI testing

❖ Functional Regression Testing Tool

❖ Windows Platform Dependent

❖ Only for Graphical User Interface (GUI) based Application

❖ Based on Object Oriented Technology (OOT) concept

❖ Only for Static content

❖ Record/Playback Tool

❖ WinRunner includes add-ins: WebTest, Visual Basic, ActiveX, PowerBuilder

Test data preparation tools


Setting up test data can be a significant effort, especially if an extensive
range or volume of data is needed for testing. Test data preparation tools
may be used by developers, but they may also be used during system or
acceptance testing. They are particularly useful for performance and
reliability testing, where a large amount of realistic data is needed.

Test data preparation tools enable data to be selected from an existing database or created, generated, manipulated and edited for use in tests. Features include support to:

❖ Extract selected data records from files or databases

❖ Massage data records to make them anonymous or not able to be identified with real people (for data protection)

❖ Enable records to be sorted or arranged in a different order

❖ Generate new records populated with pseudo-random data, or data set
up according to some guidelines

❖ Construct a large number of similar records from a template, to give a large set of records for volume tests
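The last two capabilities can be illustrated with a minimal Python sketch. The field names, masking rules and value ranges below are invented for illustration; a real test data preparation tool would typically extract and massage records from existing files or databases.

```python
import random
import string

def anonymize(record):
    """Mask personally identifiable fields so records cannot be
    identified with real people (data protection)."""
    masked = dict(record)
    masked["name"] = "Customer-" + "".join(random.choices(string.digits, k=6))
    masked["email"] = masked["name"].lower() + "@example.invalid"
    return masked

def generate_from_template(template, count):
    """Construct a large number of similar records from one template,
    varying the numeric fields pseudo-randomly, for volume tests."""
    records = []
    for i in range(count):
        rec = dict(template)
        rec["id"] = i + 1  # unique key per generated record
        rec["balance"] = round(random.uniform(0, 10_000), 2)
        records.append(rec)
    return records

template = {"id": 0, "name": "Jane Doe", "email": "jane@corp.example", "balance": 0.0}
volume_data = [anonymize(r) for r in generate_from_template(template, 1000)]
print(len(volume_data))  # 1000
```

A thousand anonymized, similar records are produced from a single template record, which is exactly the kind of data set needed for volume and reliability tests.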

7.11.3 When to Stop Testing

"One of the most difficult questions to answer when testing a program is


determining when to stop, since there is no way of knowing if the error just
detected is the last remaining error"
- The Art of Software Testing by Glenford Myers

A Project Manager's nightmare is to decide "when to stop testing" and deliver the final version of software to the customer. For projects with loads of quality issues and delays this nightmare gets compounded. One cannot determine with certainty when to stop testing. Software applications are so complex and run in such an interdependent environment that complete 100% testing can never be done.

Testing is a never-ending process, and since one can never assume that 100% testing has been done, one can only minimize the risk of shipping the product to the client with a given amount of testing done. Testing can be stopped:

❖ When the test budget is exceeded

❖ When all the high-priority bugs are fixed

❖ When deadlines like release deadlines or testing deadlines have been reached

❖ When the test cases have been completed with some prescribed pass percentage

❖ When the code coverage and functionality requirements come to a desired level

❖ When the bug rate drops below a prescribed level

❖ When the risk in the project is under acceptable limits

❖ When the beta or alpha testing period ends
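The criteria above can be combined into a simple exit-criteria check. The thresholds in this sketch are invented for illustration; in practice they would be agreed per project before test execution begins.

```python
def can_stop_testing(pass_rate, open_high_priority, budget_used_pct, coverage_pct):
    """Return True only when every agreed exit criterion is met."""
    return (
        pass_rate >= 95.0             # prescribed pass percentage reached
        and open_high_priority == 0   # all high-priority bugs fixed
        and budget_used_pct <= 100.0  # test budget not exceeded
        and coverage_pct >= 80.0      # desired code coverage reached
    )

print(can_stop_testing(96.5, 0, 92.0, 85.0))  # True
print(can_stop_testing(96.5, 2, 92.0, 85.0))  # False - high-priority bugs remain
```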

Testing metrics can help testers take better and more accurate decisions about stopping testing. One method is to have a fixed number of test cases ready well before the beginning of the test execution cycle, and subsequently measure the testing progress by recording the total number of test cases executed, using the following metrics which are quite helpful in measuring the quality of the software product:

❖ Percentage Completion = (Number of executed test cases) / (Total number of test cases)

❖ Percentage Test Cases Passed = (Number of passed test cases) / (Number of executed test cases)

❖ Percentage Test Cases Failed = (Number of failed test cases) / (Number of executed test cases)
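These three progress metrics can be sketched as simple functions; the counts in the example are invented for illustration.

```python
def percentage_completion(executed, total):
    """Share of planned test cases already executed."""
    return executed / total * 100

def percentage_passed(passed, executed):
    """Share of executed test cases that passed."""
    return passed / executed * 100

def percentage_failed(failed, executed):
    """Share of executed test cases that failed."""
    return failed / executed * 100

# Example: 400 test cases planned, 300 executed, 225 passed, 75 failed
print(percentage_completion(300, 400))  # 75.0
print(percentage_passed(225, 300))      # 75.0
print(percentage_failed(75, 300))       # 25.0
```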


Whatever method is used to decide, words like 'Good', 'Large', 'Low' and 'High' are subjective terms and depend on the type of product being
tested. Ultimately, the risk associated with moving the application into
production, as well as the risk of not moving forward, must be taken into
consideration before ceasing testing.

7.12 DEFECT MANAGEMENT

Testing results in identifying defects and assists in improving the quality of the product being developed. The primary goal is to prevent defects. Where this is not possible or practical, the goals are to both find the defect as quickly as possible and minimize the impact of the defect.

What is a defect?
It is a product anomaly or error reported by the tester. The possibilities
could be that

❖ The software doesn't do something that the product specification says it should do.

❖ The software does something that the product specification says it shouldn't do.

❖ The software does something that the product specification doesn't mention.

❖ The software doesn't do something that the product specification doesn't mention but should do.

A software bug is a generic term for an error, defect or failure in a computer program or system that causes it to produce an incorrect or unexpected result or to behave in unintended ways. Most bugs arise from mistakes and errors made by people in either a program's source code, its design or in the operating systems used by such programs. A defect is a condition in a software product which does not meet a software requirement or end-user expectations. Defects occur when there is a variance between expected results and actual results. A simple defect could be an SMS alert being sent to a customer in an online banking application for every login to the system, irrespective of whether the customer performs a transaction or not.


Software defects are expensive and practically impossible to avoid. Moreover, the cost of finding and correcting defects represents one of the most expensive activities. While defects may be inevitable, their numbers can be minimized and the impact of defects on projects can be reduced. Catching defects as early in the process as possible is necessary and investment in this process can yield significant returns.

7.12.1 Severity and Priority

Severity and Priority are the two key parameters to measure the impact
and nature of the defect. Defects found during testing are classified based
on their severity.

Defect severity indicates how critical the bug is for the application. Severities are usually pre-defined by the organization and are assigned by the tester. Consistency is important since it helps test teams avoid disagreement with development teams about the criticality of a defect. Testers must assign the severity of the defect objectively and avoid "personality" and "ego" clashes with the development teams. Severity describes the bug in terms of functionality and how bad or critical the bug is.

An example of classifying severities can be:

❖ Blocker (Showstopper): No further testing work can be done. The software crashes, hangs, or causes loss of data.

❖ Major: Major loss of function. A major feature does not exist or is not working.

❖ Minor: Minor loss of function. There may be an easy workaround.

❖ Trivial: Some UI enhancements. Cosmetic errors like spelling mistakes, grammatical errors, issues in the look and feel of the application.

❖ Enhancement: Request for a new feature or some enhancement in an existing one.

Defect priority determines the order in which defects should be fixed. Priority assigned to a defect is usually subjective as it is based on input from users regarding which defects are most important, resources available, risk, etc.

Priority highlights the degree of impact and urgency. It defines the order in which defects should be resolved and is set based on the customer requirements. Priority can also be decided on the basis of how frequently the defect occurs, and it describes the bug from the customer's point of view.

The priority levels of a defect can be set as follows:

❖ Very High: Immediate fix needed. Blocks further testing. Is very evident

❖ High: Must get fixed before the product is released

❖ Medium: Needs to be fixed as per the time schedule

❖ Low: Good to be fixed, but the software can be released with the defect

Hence there are four different combinations of defects with different priority and severity:

❖ High Severity and High Priority
Example: A payroll management application has basic salary data of employees but any changes made to the salary amounts are not stored properly.

❖ High Severity and Low Priority
Example: The sorting algorithm for an inventory stock-take report is wrong and lists are printed in a different sorted order. But since the report is required only once in 3 months, it has a lower priority and the fix can be postponed till the next release.

❖ Low Severity and High Priority
Example: The website contains several spelling mistakes in the name of the website. The functionality is working fine and the severity is low, but from the customer's perspective this is a major flaw in the software and needs to be fixed as early as possible. For a Japanese organization, a spelling error in its "company name" due to translation on a global website is a "show stopper" and not trivial.

❖ Low Severity and Low Priority
Example: The color combinations on the home page are not consistent
with other pages of the website. There are no functional impacts due to
this defect.
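One way to see how the two parameters interact is to sort a defect list by priority first and severity second. The rank values and sample defects below are illustrative, not prescriptive; each organization defines its own scales.

```python
from dataclasses import dataclass

# Lower rank = more severe / more urgent (ranks are illustrative)
SEVERITY_RANK = {"Blocker": 0, "Major": 1, "Minor": 2, "Trivial": 3}
PRIORITY_RANK = {"Very High": 0, "High": 1, "Medium": 2, "Low": 3}

@dataclass
class Defect:
    summary: str
    severity: str  # impact on functionality, assigned by the tester
    priority: str  # urgency of the fix, driven by the customer

defects = [
    Defect("Salary changes not saved", "Blocker", "Very High"),
    Defect("Stock-take report mis-sorted", "Major", "Low"),
    Defect("Company name misspelt on home page", "Trivial", "Very High"),
    Defect("Inconsistent page colours", "Trivial", "Low"),
]

# The fix order is driven by priority first, then severity
fix_queue = sorted(defects, key=lambda d: (PRIORITY_RANK[d.priority],
                                           SEVERITY_RANK[d.severity]))
print([d.summary for d in fix_queue])
```

Sorting puts the high-severity, high-priority salary defect first, the misspelt company name (low severity but high priority) second, and the cosmetic colour issue last, mirroring the four combinations described above.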

Defects are the health indicators of a software application. Management gets an idea of how stable the application is just by looking at the defect repository. Generally, if a number of high-severity defects are being detected in each build of the testing cycle, it indicates that the application is not stable.

7.12.2 Defect Life Cycle

Defect tracking is a systematic step-by-step process followed from defect discovery to closure. In engineering, defect tracking is the process of tracking the logged defects in a product from beginning to closure (by inspection, testing, or recording feedback from customers), and making new versions of the product that fix the defects. Defect tracking is important in software engineering as complex software systems typically have tens, hundreds or thousands of defects; managing, evaluating and prioritizing these defects is a difficult task. When the number of defects gets quite large, and the defects need to be tracked over extended periods of time, use of a defect tracking system can make the management task much easier.

The defect life cycle consists of the stages a defect goes through during its lifetime. The states include:

❖ New: When a defect or bug is logged and posted for the first time.

❖ Open: The bug is now under analysis and remediation. The developer has
started analyzing and working on the defect fix. The defect status stays
as "open" till it is “closed"


[Figure: Defect life cycle - a defect moves through the states New, Open, Assign, Test, Verified and Closed, with Rejected, Deferred and Reopened as alternative paths]
❖ Assign: Once a tester has found a defect, he posts it in the system (which could be a tool or just an MS Excel sheet). The test lead reviews the reported bug and, once it is found genuine, it is assigned to a developer and the development team.

❖ Test: When the developer makes the necessary code changes, the code is retested. In case the defect persists, the bug status is changed to "reopened" and it goes through the cycle of assign and test again.

❖ Verified: If no defects are found after the necessary code changes are
completed and testing is done, the bug status is changed to fixed or
verified.

❖ Closed: Once the bug is fixed, tested and approved the status of the bug
is changed to "closed".

❖ Rejected: If the developer feels that the bug is not genuine the bug is
marked as rejected.

❖ Deferred: The bug, changed to deferred state means the bug is expected
to be fixed in next releases.

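The life cycle can be encoded as a small state machine. The exact set of allowed transitions varies between organizations and defect tracking tools, so the table below is one plausible reading of the states described above.

```python
# One plausible encoding of the allowed transitions between defect states.
TRANSITIONS = {
    "New":      {"Open", "Rejected", "Deferred"},
    "Open":     {"Assign"},
    "Assign":   {"Test"},
    "Test":     {"Verified", "Reopened"},
    "Reopened": {"Assign"},   # a persisting bug is re-assigned
    "Deferred": {"Assign"},   # picked up again in a later release
    "Verified": {"Closed"},
    "Rejected": set(),        # terminal state
    "Closed":   set(),        # terminal state
}

def advance(state, new_state):
    """Move a defect to a new state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A defect that fails its first retest and is later fixed and closed
state = "New"
for step in ["Open", "Assign", "Test", "Reopened",
             "Assign", "Test", "Verified", "Closed"]:
    state = advance(state, step)
print(state)  # Closed
```

Encoding the transitions explicitly makes it easy for a tracking tool to reject bookkeeping mistakes, such as closing a defect that was never verified.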

7.12.3 Defect Reporting

A defect report should be precise and contain the following information. Defect attributes are the details about the defect which are included in the defect report.

Defect Report (Template)

1. Defect Number: Unique identification number which helps in tracking the defect

2. Reported By: Tester's identification or name

3. Project/Product/Module Under Test and Version Numbers: Can include the project name, module name, program name and the version numbers relevant to trace the source code of the defective unit

4. Date Bug Detected: Incident date

5. Summary of Defect: Brief description to highlight the defect and its characteristics

6. Description: Detailed description of the defect. The tester provides as much detail as possible to help the developer understand the symptoms of the defect and trace it to fix the defect

7. Probable Cause or Steps to Reproduce the Defect: The tester, based on experience and knowledge of the application, indicates a probable cause and the steps to reproduce the defect

8. Test Case Name: The test case and condition which was being executed when the defect occurred

9. Expected Result: What was expected when the defect occurred?

10. Actual Result: What actually happened?

11. Configuration during the Test when the Defect was Found: The setup of the hardware, software licenses, tool used, data used for testing, the machine configuration, etc. This helps the development team simulate identical conditions to reproduce the defect

12. Severity Code: Indicates the impact of the defect - Blocker, Very High, High, etc.

13. Priority Code: Indicates the urgency of fixing the defect - Very High, High, Medium, Low, etc.

14. Assigned to Developer: Usually the name of the developer to whom the bug report is sent for fixing. In some organizations the testing teams are separate from the development team; in those cases the project lead's name is mentioned here

15. Date Sent to Developer: The defect detection date can be different from this date; the bug report is reviewed before being sent to the developer team

16. Status of Defect: Current status - New, Open, Assigned, etc. as per agreed norms in the organization

Defect Reports are generally written by test engineers after finding defects
during testing of an Application-Under-Test (AUT).
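The template above can be mirrored in code, for example as a simple record type that a defect tracking tool might store. The field names and sample values below are illustrative, not a standard schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DefectReport:
    defect_number: int     # unique id used to track the defect
    reported_by: str
    module_under_test: str
    version: str
    date_detected: date
    summary: str
    description: str
    steps_to_reproduce: str
    test_case_name: str
    expected_result: str
    actual_result: str
    configuration: str
    severity: str          # e.g. Blocker, Major, Minor
    priority: str          # e.g. Very High, High, Medium, Low
    assigned_to: str
    status: str = "New"    # New, Open, Assigned, ... per organization norms

# Sample report for the SMS-alert defect mentioned earlier in the chapter
report = DefectReport(
    defect_number=101,
    reported_by="A. Tester",
    module_under_test="Online Banking - Login",
    version="1.4.2",
    date_detected=date(2015, 3, 2),
    summary="SMS alert sent on every login",
    description="An SMS alert is sent for each login even when the "
                "customer performs no transaction.",
    steps_to_reproduce="Log in; observe the SMS without doing a transaction.",
    test_case_name="TC_LOGIN_07",
    expected_result="No SMS unless a transaction occurs",
    actual_result="SMS sent on every login",
    configuration="Chrome 40 / Windows 7 / test DB snapshot",
    severity="Minor",
    priority="High",
    assigned_to="B. Developer",
)
print(report.status)  # New
```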

7.12.4 Defect Measurement

The aim of any measurement is to supervise the process and resources and enhance the efficiency of the team during the course of a project. Organizations often treat measurement as an additional, non-value-added task, or just another thing to do. However, measurement is now considered to be a basic software engineering practice, and based on the data collected, more useful information is gained to assist the project manager in making decisions.


Can one predict the number of defects in a program which is still not written? Sounds like an impossible and ridiculous question? Well, the reality is that it is possible. If one works with a Japanese organization like Hitachi, Toshiba or Toyota, their approach to software development is not much different from their manufacturing domain. Planning for defect prevention includes prediction of defects. The logic is quite simple. If an organization has collected metrics for defects, the Japanese company reviews this data, sets "targets" for each delivery and expects the vendors to improve their processes to reduce the defects compared to their "own" benchmarks. For example: if the known metric for defects in ONE page of a WORD document is 8 defects/page, this is the starting point, and the defect management process should aim to bring this down to say 4 defects/page in the first 6 months and later to 2 defects/page in the long term.

Defect measurement should be integrated into the software development process and be used by the project teams to improve the processes. In other words, the project staff, by doing their job, should capture information on defects at the source. It should not be done after the fact by people unrelated to the project or system.

7.12.5 Test Metrics

Metrics are the cornerstone of assessment and also the foundation for any business improvement. A metric is a unit used for describing or measuring an attribute; e.g. the inch is a metric used for measuring the length attribute. Software metrics is a measurement-based technique applied to processes, products and services to supply engineering and management information, and to act on the information supplied to improve those processes, products and services, if required. Lines of code is a metric for measuring the size attribute of software.

Metrics help in answering project related questions:


❖ How long would it take to test?
❖ How much will it cost to test?
❖ How bad/good is the product?
❖ How many problems still remain in the software?
❖ Will testing be completed on time?
❖ Was the testing effective?
❖ How much effort went into testing?


All these questions require some type of measurement and record keeping to resolve.

Test metrics should be


❖ Quantifiable: The method of measurement needs to be standard, concise and quantifiable
❖ Easy to collect: The information collection process must not take too much of the collector's time
❖ Simple: The information collected should be simple to gather
❖ Meaningful: The information gathered must have a specific purpose
❖ Non-threatening: Avoid using test metrics for employee evaluation purposes

Benefits of Test metrics


❖ Helps predict the long-term direction and scope for an organization.
❖ Provides a basis for estimation
❖ Provides a means for control / status reporting
❖ Identifies risk areas that require more testing
❖ Provides meters to flag actions for faster, more informed decision making
❖ Quickly identifies and helps resolve potential problems.
❖ Provide an objective measure of the effectiveness and efficiency of testing.
❖ Focus on different key metrics helps create a better business design.

Examples of Testing Metrics


Some of the key metrics (more will be covered later in the book) are:

Requirement Stability Index (RSI)

It is essential to track the requirements that are being verified to know the extent of coverage. As the component undergoes testing, the requirements that have been tested successfully shall be tracked and reported.

RSI = [(Number of baselined requirements - Number of changes in requirements after baselining) / (Number of baselined requirements)] * 100

Defect Removal Efficiency (DRE)

Defects should be corrected effectively, requiring only one regression test to verify removal. If more iterations through a defect removal process are required, then those processes may require improvement. The DRE metric tracks the history of these defect removals.

DRE = [(Total no. of defects corrected before release) / (Total no. of defects detected before & after release)] * 100

Mean Time To Failure (MTTF)

Gives an estimate of the mean time to the next failure, obtained by accurately recording the failure times t(i), the elapsed time between the ith and the (i-1)th failures, and computing the average of all the failure times. This metric is the basic parameter required by most software reliability models. High values imply good reliability.

Defect Age

Defect age is the time from when a defect is introduced to when it is detected (or fixed). Defect age analysis provides good feedback on the effectiveness of the testing and the defect removal activities. E.g. if the majority of older, unresolved defects are in a pending state, it probably means that not enough resources are applied to the re-testing effort.

Average Defect Age = Sum of (Date Defect Detected - Date Defect Introduced) / Number of defects

Cost of Finding a Defect in Testing (CFDT)

The total time spent on testing includes the time to create, review, rework and execute the test cases and record the defects. It should not include the time spent fixing defects.

CFDT = Total effort spent on testing / Number of defects found in testing

Test Case Effectiveness

This defines the effectiveness of test cases, measured as the proportion of all detected defects that were found using the test cases (rather than being found incidentally without them).

TCE = (No. of defects detected using test cases * 100) / (Total no. of defects detected)

❖ Defect Acceptance Ratio = (Number of defects accepted as valid) / (Number of defects reported by the test team)

❖ Defect Density = defects related to the size of the software, such as "defects/1000 lines of code"

❖ Review Effectiveness = 100 * (Total no. of defects found in reviews) / (Total no. of defects found)
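Several of the metrics above can be sketched as one-line Python functions. The sample figures are invented for illustration only.

```python
def rsi(baselined, changed_after_baseline):
    """Requirement Stability Index (%)."""
    return (baselined - changed_after_baseline) / baselined * 100

def dre(corrected_before_release, detected_total):
    """Defect Removal Efficiency (%)."""
    return corrected_before_release / detected_total * 100

def tce(found_by_test_cases, total_found):
    """Test Case Effectiveness (%)."""
    return found_by_test_cases * 100 / total_found

def defect_density(defects, kloc):
    """Defects per 1000 lines of code."""
    return defects / kloc

print(rsi(200, 20))             # 90.0 - 20 of 200 baselined requirements changed
print(dre(90, 100))             # 90.0 - 10 defects escaped to the field
print(tce(45, 50))              # 90.0 - 5 defects found outside the test cases
print(defect_density(120, 40))  # 3.0 defects per KLOC
```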

7.12.6 Defect Management Process

Having studied the manifestation of defects, their classification, the states a defect goes through in its life cycle, the reporting formats, the metrics gathered during testing, etc., the defect management process includes:

❖ Defect Prevention
❖ Establishing Milestones
❖ Defect Discovery
❖ Defect Resolution
❖ Process Improvement and
❖ Management Reporting

Defect Prevention

❖ Implementation of techniques, methodology and standard processes to reduce the risk of defects

❖ Understand the critical risks that could largely affect project or system.

❖ For each risk make an assessment of the financial impact if the risk
becomes a problem.

❖ Once the most important risks are identified try to eliminate each risk
and minimize the expected impact

Establishment of milestones

❖ A deliverable is baselined when it reaches a predefined milestone in its development

❖ Defects in the baselined product are those where the given set of requirements is not satisfied

❖ Select the deliverables that are baselined.

❖ Set the requirements for each deliverable and the criteria that must be
met before the deliverable can be baselined.


Defect Discovery

❖ It is important to minimize the time between defect origination and defect discovery

❖ Discover defects before they become major problems

❖ Report defects to developers so that they can be resolved

❖ Obtain the developers' acknowledgement that the defect is valid and should be addressed

Defect Resolution:

❖ Process starts when developer acknowledges that the defect is valid.

❖ Developers determine the importance of fixing a particular defect

❖ Developers schedule when to fix a defect

❖ Developers fix defects in order of importance

❖ Developers notify all relevant parties how and when the defect was
repaired

Process Improvement

❖ A defect represents a weakness in the project. Understanding the cause of the defect is most important

❖ Back-trace the validation process to see where the defect could have been caught earlier. The process gets strengthened to prevent defects

❖ The process also helps to find defects that have been created, but not yet discovered


Management Reporting

❖ Analysis and reporting of defect information to assist management

❖ The basis for management reporting is the information collected for individual defects by the project teams

❖ Report on the status of individual defects

❖ Provide tactical information and metrics to help project management make more informed decisions - e.g. redesign of error-prone modules, the need for more testing, etc.

❖ Provide strategic information and metrics to senior management - defect trends, problem systems, etc.

❖ Provide insight into areas where the process could be improved to either prevent defects or minimize their impact


7.13 SUMMARY

The importance of the software testing phase after completion of design and coding cannot be stressed enough. Mistakes made by humans have caused enormous damage and financial losses, from aerospace disasters to trade losses, and some failures have run into billions of dollars. One of the root causes of many of these incidents has been the lack of proper testing. Most organizations don't see preventing failure as an urgent matter, even though that view risks harming the organization and maybe even destroying it. Erroneous programming, ambiguous requirements specifications, wrong interpretation of requirements, absence of standardized methods of programming and many more reasons have caused project failures, and this highlights the importance of testing. Software testing has its share of myths. In reality, testing is more than a manual process and is highly challenging; testers need equal if not greater skills and knowledge compared to developers; paperwork is essential to prevent further defects; and testers do have a good career if one chooses to excel and face the challenges.

Verification and validation (V&V) is the process of checking that a software system meets specifications and that it fulfills its intended purpose. Verification answers the question "Are we building the product right?" Validation ensures that the product being developed matches customer requirements; it answers the question "Are we building the right product?" V&V is applicable from the planning stage till the implementation and roll-out phases of the software development life cycle.

The important objective of testing is not to prove that the software works
but to show presence of defects. Exhaustive testing is a myth and not
possible and hence planning for testing is an important activity. It is also a
myth that testing begins after coding. The fact that 30-40% of efforts are
spent on testing indicates that testing starts quite early at the requirement
stage itself. Costs of fixing defects are much less if testing is successful in
the early stages. Many organizations have an independent testing team to
avoid the risk of missing out defect detection in products by the teams that
created the product.

Testing starts with a strategy, i.e. a road map to ensure quality deliverables throughout the development cycle. Several test cycles may be involved, some using test scripts, and one needs to prepare test cases and test data with both valid and invalid data to detect failures, defects and errors. Writing
test cases even for simple modules is an arduous task - more an art than a
science. A good "Test Case" is one that has a high probability of detecting
an as yet undiscovered error and it is successful only if it finds the
undiscovered defect. Checklists are one of the effective tools used for
testing that can minimize errors due to oversight. Applying the V-model of
SDLC in testing gives a good perspective of the test activities and test
plans that need to be in place at each of the phases of the SDLC for
efficient and successful testing. Based on whether the actual execution of
software under evaluation is needed or not, there are two major categories
of quality assurance activities - Static and Dynamic Testing. Static testing
covers inspection, code walkthrough, reviews and desk checking. Dynamic
Testing includes White Box Testing and Black box testing each of them with
their own advantages and disadvantages. White box testing done by the
programmers is where the internals like statements, loops, decisions,
conditions, memory leaks etc. are covered while Black Box Testing
generally done by testers focuses on tests from the user perspective.
Usage of McCabe's cyclomatic complexity - a software quality metric - is a
practical way of determining the maximum number of linearly independent
paths in a program and helps in evaluating the complexity of the programs
and also in estimating the efforts required for white box testing. Testing
leads to finding the source of the defects or errors which requires good
debugging skills.

Testing is rarely a big-bang exercise and happens in increments. It starts with 'testing-in-the-small', i.e. units, and moves toward 'testing-in-the-large', i.e. the complete system. Individual units are first tested, and the integrated modules are tested in either a top-down approach, a bottom-up approach or the mixed sandwich approach. Incomplete or work-in-progress modules are represented by stubs and drivers to test the completed
modules. System testing follows integration testing and covers the
functional requirements gathered in the first phase of the SDLC. System
tests are exhaustive and also cover non-functional tests covering
performance, Installation Testing, Security, Configuration, Usability, etc.
that do not have direct bearing to the functions. Past experience, skill and
domain knowledge play an important part in error- guessing and cause-
effect graphing tests.

User Acceptance Testing is the proof of the pudding, which happens in two stages - alpha and beta - one in the presence of developers and the other by the customer alone. Fixes happen whenever defects are found and the
units, modules or the system need to be regression tested, usually by
using tools to ensure that the changes work and that the changes have not
impacted other parts of the system or programs. Software Test Automation
though not applicable for all situations, helps to avoid human errors and
also speed up the testing process and ensure high quality. A Project Manager's nightmare is to decide "when to stop testing", and the widespread use of Microsoft products with bugs is a clear example that 100% testing can never be done.

Testing results in identifying defects and also minimizing their impact. The impact can be measured by using two important parameters - severity and priority. Metrics like requirement stability index, defect removal efficiency, mean time to failure and defect age should be integrated into the software development process and be used to improve the processes and assure the quality of deliverables.


7.14 SELF-ASSESSMENT QUESTIONS

1. What is the importance of testing? Give examples to justify your answer.

2. Why is testing necessary? Give some examples to justify your answer.

3. State and briefly explain the terminologies used in testing phase.

4. Distinguish between error and failure. Testing detects which of these two? Justify it.

5. Do you agree with the statement: "The effectiveness of a testing suite in detecting errors in a system can be determined by examining the number of test cases in the suite"? Justify the answer.

6. Explain the flow of the testing phase from testing strategy till defect
detection and correction.

7. What is verification? What kind of testing is covered by verification? Who does the verification?

8. What is validation? What kind of testing is covered by validation? Who does the validation?

9. Define and explain verification and Validation and their differences.

10.What are the objectives of testing?

11.When should we start testing? Why?

12.What are the myths about testing? What is the reality?

13.Is testing a negative activity? If not, why?

14.Why do professionals not prefer a testing career? How will you justify or
negate their beliefs?


15. Given the many challenges facing a tester, what types of skills do you believe should be required of a person being hired as a test specialist?

16. Name and explain some of the principles of testing.

17. Give arguments for/against an independent testing group in an organization. Consider organizational size, resources, culture, and types of software systems developed as factors in your argument.

18. Using the V-model diagram, describe the test-related activities that should be done, and why they should be done, during the various phases of the SDLC process.

19. What is meant by static testing? What is accomplished by static testing?

20. Identify the types of errors that can be detected during code walk-throughs.

21. Identify the types of errors that can be detected during code inspection.

22. What is meant by code review? Why is it required to be completed before performing integration and system testing?

23. What is meant by dynamic testing? What is accomplished by dynamic testing?

24. Is static testing sufficient to prove that the product works? If not, why?

25. Differentiate between functional testing and structural testing.

26. What is meant by the statement "Begin testing in the small and then move to testing in the large"?

27. What are the different techniques of testing? Explain each of them with examples.


28. What are the differences between testing and debugging? What specific tasks are involved in each? Which groups should have responsibility for each of these processes?

29. What is debugging? If we are doing testing to find the defects, why do we need debugging?

30. What are the differences between testing and debugging?

31. Explain the different levels of testing - unit, integration, system and acceptance testing. Consider an example of developing a Savings Bank module in a large banking application for a cooperative bank. What could be the unit, integration and other types of testing for this module?

32. Differentiate between the different types of integration testing, i.e. top-down, bottom-up, sandwich, etc.

33. Is it advantageous to do Big Bang testing? Where would one use such a method?

34. What is exhaustive testing? Is it feasible? If not, how can one justify incomplete testing?

35. Is random selection of test cases effective? Justify.

36. Why is it important to develop test cases for both valid and invalid input conditions?

37. Why do we need stubs and drivers during testing? Give examples.

38. What is the difference between functional and non-functional testing?

39. Describe in brief some of the types of non-functional testing, highlighting their importance.

40. What is performance testing? Give examples for each of the types of performance testing.


41. Why do we need the different types of white box testing, i.e. statement coverage, decision coverage and condition coverage? Why can't only one of these be sufficient for testing?

42. Explain regression testing and the need for it.

43. What is the meaning of the word "coverage"? Explain this for all the types used in white box testing.

44. What are the different types of loop testing?

45. What is black box testing, and what are its advantages and disadvantages?

46. What is white box testing, and what are its advantages and disadvantages?

47. What are the differences between white box and black box testing?

48. Which is the strongest structural testing technique among statement coverage-based testing, branch coverage-based testing, and condition coverage-based testing? Why?

49. Give examples to explain equivalence partitioning in black box testing.

50. Give examples to explain boundary value analysis in black box testing.

51. A software program computes the average of 25 floating point numbers that lie on or between bounding values, which are positive values from 1.0 (lowest allowed boundary value) to 5000.0 (highest allowed boundary value). The bounding values and the numbers to average are inputs to the unit. The upper bound must be greater than the lower bound. If an invalid set of values is input for the boundaries, an error message appears and the user is re-prompted. If the boundary values are valid, the unit computes the sum and the average of the numbers on and within the bounds. The average and sum are output by the unit, as well as the total number of inputs that lie within the boundaries.

52. Derive a set of equivalence classes for the averaging unit using the specification, and complement the classes using boundary value analysis. Be sure to identify valid and invalid classes.


53. Identify and briefly explain two guidelines for the design of equivalence classes for a problem.

54. Explain why boundary value analysis is so important for the design of a black box test suite for a problem.

55. From your own testing experience and what you have learned from this text, why do you think it is important for a tester to use both white box- and black box-based testing techniques to evaluate a given software module?

56. Explain memory leakage with an example.

57. What is a control flow graph? How is it used in white box test design?

58. Discuss how the control flow graph (CFG) of a program helps in understanding a path coverage-based testing strategy.

59. Discuss McCabe's cyclomatic complexity measure and its relevance to testing.

60. Discuss experience-based testing, error guessing and cause-effect graphing, with examples.

61. Is automation in testing feasible? Why is it required? What do the tools help in achieving?

62. Give an overview of some of the tools used in automated testing.

63. Why is it important to begin a measurement program with measures of defects and costs?

64. Summarize the benefits of putting a defect-prevention program in place.

65. What measures can be utilized to analyze the impact of defects?

66. Discuss how severity and priority are assigned for defect management, with examples.


67. You are developing a module whose logic impacts data acquisition for a flight control system. Your test manager has given you limited time and budget to test the module. The module is fairly complex: it has many loops, conditional statements and nesting levels. You are going to unit test the module using white box testing approaches. Describe three test adequacy criteria you would consider applying to develop test cases for this module. What are the pros and cons of each? Which has the highest strength?

68. Consider an online fast food restaurant system. The system reads customer orders, relays orders to the kitchen, calculates the customer's bill, and gives change. It also maintains inventory information. Each wait-person has a terminal. Only authorized wait-persons and a system administrator can access the system. What are the system tests that are applicable to this module?

69. When does one stop testing? Why? Suppose a test group was testing a mission-critical software system. The group has found 85 out of the 100 seeded defects. If you were the test manager, would you stop testing at this point? Explain your decision. If, in addition, you found 70 actual non-seeded defects, what would be your decision, and why?

70. What are test metrics? Describe some of the metrics that are gathered during testing and how they help in assuring the quality of the deliverables.



! !346
IMPLEMENTATION, MAINTENANCE, CHANGE MANAGEMENT, RISK MANAGEMENT

Chapter 8
Implementation, Maintenance, Change
Management, Risk Management
Objective:

On completion of this chapter you would be able to understand

❖ Steps followed during implementation and the challenges involved

❖ What the Maintenance phase implies, its challenges and steps

❖ How changes are managed in the software after development

❖ The types of risks in software development projects and how they can
be addressed

Structure:

8.1 Implementation Phase

8.2 Maintenance Phase

8.3 Software Change Management

8.4 Risk Management

8.5 Summary

8.6 Self-Assessment Questions


8.1 IMPLEMENTATION PHASE

Software Implementation or Deployment is the SDLC phase that comes
after successful testing of the product or application. It includes all the
activities that make a software system available for use to the end-user.
The general deployment process consists of several interrelated activities
with possible transitions between them. These activities generally occur at
the customer's site. Because every software system is unique, the precise
processes within each activity vary.

Implementation implies a process of converting a new or revised system
into an operational one. There are three types of implementation:

❖ Replace a manual system with a computer system.

❖ Replace an existing computer system with a new system.

❖ Replace an existing system with a modified system.

First-time implementation of any automated system is more challenging
than replacing an existing computer system or enhancing an existing one.
The activities involved in converting a totally manual system (rare today)
to a computer system span everything from the hardware setup to the
final roll-out of the application suite to production. Imagine that a simple
manual payroll process has to be replaced with an automated payroll
system. Activities for this will include capturing all employees' data and
salary details, setting up bank transfer details, checking leave availed,
handling statutory deductions, reimbursement of expenses or allowances,
etc. Once the application setup is done, monthly processing of each
employee's payroll, printing pay-slips, transfers to bank accounts, etc. will
happen. All this won't happen overnight; it takes a few weeks and
sometimes months to complete and become a routine operation.


Deployment activities can be one or more of the following:

❖ Release: includes all the operations to prepare a system for assembly
and transfer to the customer site after completion of development.

❖ Install and activate: activation is the activity of starting up the
executable components of software. In larger software deployments, the
working copy of the software might be installed on a production server.
Today most software vendors send a license key or code to the user,
who can download the software from a website and complete the
installation themselves.

❖ Adapt: the process of modifying a software system that has been
previously installed. It differs from updating in that adaptations are
initiated by local events, such as a change in the environment at the
customer site, while updates are mostly initiated by the remote software
producer.

❖ Update: replaces an earlier version of all or part of a software system
with a newer release. Some software systems have built-in mechanisms
(automatic or manual) for installing updates. Software patches are sent
by several software application vendors like Adobe, Google Chrome,
Quick Heal, etc. on a regular basis, which users can install and activate.

❖ Deactivate: the inverse of activation; refers to shutting down any
executing components of a system.

❖ Uninstall: the inverse of installation; removes a system that is no longer
required.

❖ Retire: the system is marked as obsolete and support is withdrawn by
the producer.
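The deployment activities above can be viewed as states that a software installation moves through. The sketch below models them as a small transition table; the specific transitions allowed are an illustrative assumption for this chapter, not a standard:

```python
# Deployment activities modelled as states with allowed transitions.
# The transition table is an assumption made for illustration.

ALLOWED = {
    "released":    {"installed"},
    "installed":   {"activated", "uninstalled"},
    "activated":   {"adapted", "updated", "deactivated"},
    "adapted":     {"activated"},
    "updated":     {"activated"},
    "deactivated": {"activated", "uninstalled"},
    "uninstalled": {"retired"},
}

def can_transition(current, target):
    """Return True if deployment activity `target` may follow `current`."""
    return target in ALLOWED.get(current, set())
```

For example, a system must be installed and activated before it can be updated, and only an uninstalled system can be retired.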

Conversion of any system from manual to automated, or from an old
system to a new one, includes several activities with the objective of
putting a tested system into live usage. It is the last step before the
system begins to show results.

The activities start with an Implementation Plan. This includes setting up
the hardware and operating systems, loading software and database
licenses if necessary, and setting up any other special services, interfaces
with external systems, etc. It also includes planning for the resources
needed for the implementation, prioritization of tasks, time schedules,
defect reporting, reporting structures, escalations, risk management, etc.
Users of the system need to be trained and made familiar with their roles
with respect to the system.

Transition is not a one-step process. Assume an organization is
implementing a simple accounting application like Tally; the existing
manual (or old) accounting system does not grind to a halt once Tally is
installed. Data from the manual system is captured in the computer
system, outputs are compared with manual reports, and either the data is
corrected or the programs are rectified. For existing automated systems,
data is converted using bespoke software from the old formats to the new
formats. For a couple of months both the manual (existing) system and
the new computerized system are operational, and once it is established
that the new system works properly the manual (old) system is phased
out. The switch-over is gradual and involves running both systems until
implementation of the new system is considered complete and successful.
Periodic post-implementation reviews are essential to stabilize the system
and also to get feedback from users on any potential issues with the new
system.
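During such a parallel run, outputs of the old and new systems are compared item by item before the old system is phased out. A minimal sketch of that comparison step follows; the dictionary-of-figures record layout and the tolerance value are assumptions made for illustration:

```python
# Compare a report produced by the old system with the same report from
# the new system during a parallel run. Keys present in only one report,
# or values differing beyond the tolerance, are flagged for investigation.

def compare_runs(old_report, new_report, tolerance=0.01):
    """Return the sorted list of keys whose old/new values disagree."""
    mismatches = []
    for key in sorted(set(old_report) | set(new_report)):
        old_val = old_report.get(key)
        new_val = new_report.get(key)
        if old_val is None or new_val is None:
            mismatches.append(key)          # present in only one system
        elif abs(old_val - new_val) > tolerance:
            mismatches.append(key)          # values disagree
    return mismatches
```

An empty mismatch list over several consecutive periods is one practical signal that the old system can be retired.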

The challenges for any organization implementing a system, especially a
large ERP or financial system, are several. Current employees or users
have to shoulder the burden of continuing their work with the current
systems, which need to be phased out. "This new system won't work" is a
general refrain from some die-hards, and dealing with it is part of the
challenge. The challenges generally found are:

❖ Resistance to change is a human trait; people resist changes.

❖ Reactions vary in different situations.

❖ Hostility or non-cooperation.

❖ Fear - loss of job, adverse effects of the new system, loss of
independence.

❖ Mental blocks - refusal to accept IT changes or computers.

❖ Personal - "What's in it for me?", no job satisfaction, sabotaging the
system.

❖ Monetary - less overtime, less importance and hence impact on salary
rise.

❖ Communication gaps - inability to understand the system.

❖ Inertia - old systems were better, adaptability issues.

In spite of all the problems stated, the implementation phase is the most
rewarding for both the vendor and the customer. The fruits of the entire
SDLC effort are seen in this phase, where a wish list of requirements gets
translated into a concrete application that works and that users can use.
In the 1970s, automatic printing of a statement in a passbook was a big
event in banking. Today online systems and mobile applications are
launched every day and are lapped up without batting an eyelid. For the
youth, a new mobile game is "manna from heaven" - one can imagine the
pride and happiness of the producers who made it happen and saw it
through till deployment.

8.2 MAINTENANCE PHASE

Software maintenance in software engineering is the modification of a
software product after delivery to correct faults or to improve performance
or other attributes. Maintenance is the last stage of the software life cycle.
After the product has been released, the maintenance phase keeps the
software up to date with environment changes and changing user
requirements. The construction and design phases should be such that the
resultant code can be easily read, understood and changed. Maintenance
can only happen efficiently if the earlier phases are done properly. It
addresses problems and enhancement requests after the software is
released. Maintenance is normally for removing defects which entered
during development and for adding new features. The cost of fixing a
defect increases the later in the life cycle it is found.

In some organizations, a change control board maintains the quality of the
product by reviewing each change made in the maintenance stage.
Sometimes the entire life cycle (SDLC) model is applied in the
maintenance phase, where there are major enhancements or corrections.


Why is maintenance important?

❖ Massive investments of time and resources are made in developing and
implementing systems.

❖ Maintenance is inevitable. It is hard and costly. Considerable resources
are required to keep systems active and dependable.

❖ Technology impact is very high in the systems world today. Change is
constant. That calls for reengineering systems and software. Even
reengineered software needs maintenance soon after its
implementation.

❖ Massively parallel processing systems and networking resources are
changing database services into corporate data warehouses.

❖ Software engineering environments and rapid application development
tools are changing the way we develop and maintain software.

❖ Software maintenance is moving from code maintenance to design
maintenance, even on to specification maintenance.

❖ Modifications today are made at the specification level, regenerating
the software components, testing and integrating them with the
system.

There are several types of maintenance - the "traditional" one being
corrective, and "software evolution" covering adaptive, preventive and
perfective.

❖ Corrective maintenance: required for correcting a defect found while
using the system. This deals with fixing bugs in the code.

❖ Adaptive maintenance: required for adapting to changed conditions like
hardware, business situations, etc. This deals with adapting the
software to new environments.

❖ Perfective maintenance: done to improve or add new features to the
system for better or easier functioning. This deals with updating the
software according to changes in user requirements.

❖ Preventive maintenance: done to prevent the occurrence of some new
defect which might not have been foreseen earlier. This deals with
updating documentation and making the software more maintainable.
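A maintenance organization typically tags each incoming request with one of the four types above before scheduling it. The sketch below does this with naive keyword matching; the keyword lists are illustrative assumptions, and in practice this triage relies on human judgement rather than string matching:

```python
# Tag a maintenance request with one of the four types described above.
# Keyword lists are illustrative only.

MAINTENANCE_TYPES = {
    "corrective": ["bug", "defect", "crash", "fix"],
    "adaptive":   ["migrate", "new os", "new hardware", "regulation"],
    "perfective": ["enhance", "new feature", "improve performance"],
    "preventive": ["refactor", "documentation", "maintainability"],
}

def classify_request(description):
    """Return the first maintenance type whose keywords match the text."""
    text = description.lower()
    for mtype, keywords in MAINTENANCE_TYPES.items():
        if any(k in text for k in keywords):
            return mtype
    return "unclassified"
```

For instance, "Fix crash on login" would be tagged corrective, while "Migrate reports to the new OS" would be tagged adaptive.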

Reverse engineering is another form of maintenance, where old software
is reengineered so that its internal working can be better understood and
its performance improved.

The dream of every student wishing to pursue a career in IT is to work on
"development" projects. Maintenance is considered a secondary or
downgraded job. Practitioners enjoy maintaining code that they have
written themselves, whereas maintaining code they did not write or build
is very painful for them. Many developers see maintenance as a prison
sentence and boring stuff. The reality is that in the life of any IT
professional, hardly 20% of the work will involve fresh development while
80% will involve maintenance or enhancement work. Programmers spend
more time maintaining applications than developing them.

If one buys any automobile, machine or equipment like a car, a washing
machine, a fridge, an air-conditioner or a chiller plant, the maintenance
cost is easy to compute and understand. There are thumb rules for each
kind of maintenance task for these. For example, a car comes with service
manuals that clearly explain what maintenance work will be done during
the warranty period and what needs to be done after some mileage or
lapse of time. There is very little ambiguity in hardware maintenance
costs. It is relatively person-independent, whereby any good engineer can
take up the maintenance of new equipment - not built by him, nor
maintained by him in the past. It is relatively simple because the designs,
the components and the interconnections are all well documented, visible
to the engineers, and proper maintenance manuals are available for users
and the service providers.

It is not so in software, because:

❖ There is no "standard" documentation, no service manuals and no rates
per task for software maintenance. Software maintenance cost can
exceed half of the total software development cost. Maintenance tasks
are difficult and complicated by several factors: applications operate in
some "domain", and maintenance professionals must understand the
elements of the domain which affect and are affected by the software.

❖ Software systems are typically evolved from a complex set of
requirements and are based on an overall design. Maintenance
professionals must understand the requirements and design, but this
information is rarely available.

❖ Maintenance professionals depend on information in the documentation
and comments in the source code of a software system to guide them
in understanding the software. This information is typically not kept up
to date with changes in the system, and can be mostly inaccurate.

❖ Software systems are typically designed and implemented by different
groups of people than those who maintain them. Each maintenance
professional must attempt to understand not only what the original
designers and implementers did, but what their predecessors in
maintenance did as well.

❖ Software systems are maintained by groups of people. Working in
groups implies its own set of problems, which are not unique to
software engineering.

❖ The system needs to be re-tested after each change, programs must be
tested for side effects, and related documentation must also be
updated.

❖ Software maintenance requires more orientation and training than any
other programming activity.

There are four major problems that can slow down the maintenance
process: unstructured code; insufficient knowledge of the system;
documentation that is absent, out of date, or at best insufficient; and
mental blocks about maintenance tasks.

One of the definitions of a programmer is "anybody who knows the syntax
of a language and can develop some code that works." However, a
software engineer is someone who not only writes code that works
correctly, but also writes high-quality code. His responsibility is to produce
software that is a business asset that will last for many years. If the code
produced is of low quality and others find it hard to understand or
maintain, then it might as well be thrown away and rewritten from
scratch. This unfortunately happens all too often. Making code readable
and maintainable is as important, if not more important, than making it
work correctly. If code does not work, it can be fixed; if it can't be
maintained, it is scrap.

In conclusion, development is an art and a science; maintenance is a
craft.

8.3 SOFTWARE CHANGE MANAGEMENT

The "First Law"

"No matter where you are in the system life cycle, the system will change,
and the desire to change it will persist throughout the life cycle."
- Bersoff et al., 1980

Understanding and controlling change is one of the biggest challenges in
software engineering projects. Change happens for different reasons
(refer to the diagram below). Some are planned and many are unplanned.
Errors detected in the software need to be corrected. New business or
market conditions dictate changes in product requirements or business
rules. For example, when the Euro was launched in Europe, many
business applications went through significant changes to handle the new
currency. Some countries which were not using decimal points in their
currency had to implement systems which could handle the Euro. In one
country the comma was used as the decimal separator and there were no
fractions; this too required changes in the systems handling transactions
in the Euro currency.
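The decimal-separator issue in the Euro example is concrete enough to sketch. The function below normalizes an amount written in either the comma-decimal or the point-decimal convention; the two-convention assumption is illustrative, and production code should use a proper locale library rather than string replacement:

```python
# Normalize a currency amount whose decimal separator may be ',' or '.'
# e.g. "1.234,56" (comma-decimal) and "1,234.56" (point-decimal) both
# denote one thousand two hundred thirty-four and 56/100.

def parse_amount(text, decimal_sep=","):
    """Parse an amount string given its decimal separator convention."""
    if decimal_sep == ",":
        text = text.replace(".", "")      # strip thousands separators
        text = text.replace(",", ".")     # convert decimal comma to point
    else:
        text = text.replace(",", "")      # strip thousands separators
    return float(text)
```

A system that hard-codes one convention will silently misread amounts from the other, which is exactly the kind of change the Euro transition forced.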

Changes can also occur when new customer needs demand modifications
to the data produced by information systems, the functionality delivered
by products, or the services delivered by a computer-based system.
Sometimes reorganization or business growth/downsizing causes changes
in project priorities or software engineering team structure. Budgetary
constraints can also cause a redefinition of the system or product.

[Diagram: What are the changes?]

It is often quoted that "The only constant in software development is
change."

There is confusion over the terminology used in this context, i.e.
configuration management, change control and change management.
Each of these has its own nuances, but they are inter-related. The
acronym SCM is also confusing, since it can imply Software Configuration
Management or Software Change Management. Here SCM is assumed to
mean "Software Change Management".

Change control is a formal process used to ensure that changes to a
product or system are introduced in a controlled and coordinated manner.
It reduces the possibility that unnecessary changes will be introduced to a
system, introducing faults or undoing changes made by other users of the
software. For IT systems, change control is a major aspect of the broader
discipline of change management.

In software engineering, software configuration management (CM) is the
task of tracking and controlling changes in the software, part of the larger
cross-discipline field of configuration management. Configuration
management practices include revision control and the establishment of
baselines. If something goes wrong, CM can determine what was changed
and who changed it. If a configuration is working well, CM can determine
how to replicate it across many hosts.

The purpose of Software Configuration Management is to establish and
maintain the integrity of the products of the software project throughout
the project's software life cycle. Software Configuration Management
involves identifying configuration items for the software project,
controlling these configuration items and changes to them, and recording
and reporting status and change activity for these configuration items.

Change Management System

One can have several changes in a project at any time. For example,
projects run out of budget and need approval of additional budgets, or
there could be delays in a project due to quality issues and an extension
of time is required. These types of changes, involving the project
processes or the project baselines, are managed through the Change
Management System. The purpose of the Change Management System is
to implement the approved changes into the project with a minimum of
disruption.

It is necessary to pause here and introduce the terminology "change
request". A change request is a document containing a call for an
adjustment of a system, and it has high significance in the change
management process.

A change request is declarative, i.e. it states what needs to be
accomplished, but leaves out how the change should be carried out.
Important elements of a change request are an ID, the project ID, the
program ID, a deadline (if applicable), an indication whether the change is
required or optional, the change type, a change abstract, and
assumptions and constraints.
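The elements listed above can be captured as a simple record type. The sketch below mirrors the field names from the text; the class name, the tuple shape of the history log, and the `record_state` helper are assumptions made for illustration:

```python
# A change request record holding the elements listed in the text,
# plus a history log of (state, date, reason) entries.
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    cr_id: str
    project_id: str
    program_id: str
    change_type: str              # e.g. corrective, adaptive, ...
    abstract: str
    required: bool = True         # required vs. optional change
    deadline: str = ""            # blank if not applicable
    assumptions: list = field(default_factory=list)
    constraints: list = field(default_factory=list)
    history: list = field(default_factory=list)

    def record_state(self, state, date, reason):
        """Log a state change with its date and reason."""
        self.history.append((state, date, reason))
```

Keeping the history inside the request itself means every state change travels with the document through the approval cycle.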

Change requests typically originate from one of five sources:

❖ Problem reports that identify bugs that must be fixed - the most
common source

❖ System enhancement requests from users

❖ Events in the development of other systems

❖ Changes in underlying technology and/or standards

❖ Demands from senior management or stakeholders

All change history is logged with the change request, including all state
changes, along with dates and reasons for each change. The Change
Management System ensures that every change request is received,
analyzed and either approved or rejected. If it is approved, all other
project constraints are also analyzed for any possible impact from
implementing the change.

A good change management system ensures that all affected parameters
are identified and analyzed for impact before the change is implemented,
in order to avoid or minimize adverse effects.

Baselines
One important aspect of change management is to keep track of the
changes and control them before they control the project. Baselining is a
software change management concept that helps practitioners to control
change without seriously impeding justifiable change.

The IEEE standards define a baseline as:

❖ A specification or product that has been formally reviewed and agreed
upon, that thereafter serves as the basis for further development, and
that can be changed only through formal change control procedures.

❖ A milestone in the development of software that is marked by the
delivery of one or more software configuration items (SCIs) and the
approval of these SCIs obtained through a formal technical review.

A simple process for controlling the configuration items produced during a
software development life cycle is shown in the diagram below.


The configuration items include any artifacts that are created during the
project and controlled. Examples are all plans, SRS documents, UML
diagrams, design documents, interface designs, test cases, code, test
results, implementation manuals, user manuals and many other items.
Even minutes of meetings are configuration items. Items are created,
formally reviewed, approved and "checked in" through a "toll gate" to a
project database. Once these artifacts are baselined, any change to any of
the items needs to go through a change control mechanism. The item is
"checked out" through another toll gate, the required change and its
impact are reviewed, the item is taken up for changes, and it then follows
the same cycle of formal review, approval and "check-in".

The significance of the toll gate is that once an item is "checked out" (i.e.
work in progress), it cannot be "checked out" by another person until the
previous version is "checked in". This avoids inadvertent overwriting of
changes made in the code by one programmer.
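The toll-gate rule is essentially a per-item lock. A minimal sketch follows; the class and method names are illustrative assumptions (real configuration management tools implement this, along with versioning, far more completely):

```python
# A checked-out configuration item cannot be checked out again
# until the holder checks it back in.

class ProjectDatabase:
    def __init__(self):
        self._checked_out = {}   # item name -> user holding the lock

    def check_out(self, item, user):
        """Acquire the item for editing, or fail if someone else holds it."""
        holder = self._checked_out.get(item)
        if holder is not None:
            raise RuntimeError(f"{item} already checked out by {holder}")
        self._checked_out[item] = user

    def check_in(self, item, user):
        """Return the item, releasing the lock for the next person."""
        if self._checked_out.get(item) != user:
            raise RuntimeError(f"{item} is not checked out by {user}")
        del self._checked_out[item]
```

With this rule, a second programmer attempting to check out the same item is refused until the first checks it in, which is precisely how overwrites are prevented.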
The Software Change Management (SCM) process addresses the following
questions:

❖ How does a software team identify the discrete elements of a software
configuration?

❖ How does an organization manage the many existing versions of a
program (and its documentation) in a manner that will enable change to
be accommodated efficiently?

❖ How does an organization control the changes before and after software
is released to a customer?

❖ Who has responsibility for approving and ranking changes?

❖ How can we ensure that changes have been made properly?

❖ What mechanism is used to apprise others of changes that are made?

The British Standards Institute, in its Code of Practice for IT Service
Management, defines the scope of change management to include the
following process steps:

Recording Changes
In practice, the basic details of a change request from the business are
recorded to initiate the change process, including references to documents
that describe and authorize the change. Well-run change management
programs use a uniform means to communicate the request for change,
and work to ensure that all constituencies are trained and empowered to
request changes to the infrastructure.

Assessing the Impact, Cost, Benefits, and Risks of Changes

The business owner of the configuration item to be changed (i.e., a CI in
the Configuration Management Database, which records the exact state of
the IT infrastructure - e.g., IT for infrastructure, Finance for a billing
application, etc.) and all affected groups (e.g., users, management, IT,
etc.) are identified and asked to contribute to an assessment of the risk
and impact of the requested change. Through this means, the process is
extended well beyond the IT department and draws on input from
throughout the organization.

Developing the Business Justification and Obtaining Approval


Formal approval should be obtained for each change from the "change
authority." The change authority may be a person or a group. The levels of
approval for each change should be judged by the size and risk of the
change. For example, changes in a large enterprise that affect several

distributed groups may need to be approved by a higher-level change
authority than a low-risk routine change event. In this way, the process is
speeded up for the routine kinds of changes IT departments deal with every
day.

Implementing the Changes


A change should normally be made by a change owner within the group
responsible for the components being changed. A release or
implementation plan should be provided for all but the simplest of changes
and it should document how to back-out or reverse the change should it
fail. On completion of the change the results should be reported back for
assessment to those responsible for managing changes, and then
presented as a completed change for customer agreement. The relevant
documentation is updated to reflect the applied changes.

Verifying the Change
The implementation of the change in the new system release is verified one
final time, typically by the project manager. Some descriptions of the
process place this verification before the release; literature sources differ
on the ordering, and it is shown here after implementation for simplicity.

Monitoring and Reporting on the Implementation


The change owner monitors the progress of the change and actual
implementation. The people implementing the change update the
configuration management database proactively and record or report each
milestone of change. Key elements of IT management information can be
produced as a result of change management, such as regular reports on
the status of changes. Reports should be communicated to all relevant
parties.

Closing and Reviewing the Change Requests


The change request and configuration management database should be
updated, so that the person who initiated the change is aware of its status.
Actual resources used and the costs incurred are recorded as part of the
record. A post-implementation review should be done to check that the
completed change has met its objectives, that customers are happy with
the results; and that there have been no unexpected side-effects. Lessons
learned are fed back into future changes as an element of continuous
process improvement.

As the above process makes clear, true change management differs from
change control in the depth of its overall process scope and in the range of
inputs it uses. Where change control ensures that changes are recorded
and approved, change management considers overall business impact and
justification, and focuses not only on the decision to make or not make a
given change, but on the implementation of the change and the impact of
that implementation as well.
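The change management steps described in this section - recording, assessment, approval, implementation, verification and closure - can be modelled as a small state machine for a change request. The state names and transitions below are an illustrative sketch, not taken from any standard:

```python
# Allowed transitions mirror the process steps described in this section.
TRANSITIONS = {
    "recorded":    ["assessed"],
    "assessed":    ["approved", "rejected"],
    "approved":    ["implemented"],
    "implemented": ["verified"],
    "verified":    ["closed"],
}


class ChangeRequest:
    """A change request that may only move through the defined lifecycle."""

    def __init__(self, summary):
        self.summary = summary
        self.state = "recorded"          # recording initiates the process
        self.history = ["recorded"]

    def advance(self, new_state):
        # Reject any transition that skips a step, e.g. approval without
        # assessment, or closure without verification.
        if new_state not in TRANSITIONS.get(self.state, []):
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.state = new_state
        self.history.append(new_state)
```

A request that a team tries to close straight from "approved" is rejected, forcing it through implementation and verification first - which is exactly the discipline the process above is meant to enforce.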

Version Control
Version control combines procedures and tools to manage the different
versions of configuration objects that are created during the software
process. A version control system implements, or is directly integrated
with, four major capabilities:

❖ A project database (repository) that stores all relevant configuration
objects

❖ A version management capability that stores all versions of a
configuration object

❖ A make facility that enables the software engineer to collect all relevant
configuration objects and construct a specific version of the software

❖ An issue-tracking (also called bug-tracking) capability that enables the
team to record and track the status of all outstanding issues associated
with each configuration object
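As a rough illustration of the version management capability, the toy class below keeps every committed version of a configuration object so that any earlier version can be retrieved. It is a sketch only; real tools store deltas, branches and metadata rather than full copies:

```python
class VersionStore:
    """Stores every version of each configuration object, never overwriting."""

    def __init__(self):
        # name -> list of contents; list index + 1 is the version number
        self._versions = {}

    def commit(self, name, content):
        """Record a new version of the object and return its version number."""
        self._versions.setdefault(name, []).append(content)
        return len(self._versions[name])

    def get(self, name, version=None):
        """Retrieve a specific version, or the latest if none is given."""
        history = self._versions[name]
        if version is None:
            version = len(history)
        return history[version - 1]
```

Because nothing is ever overwritten, any baseline can be reconstructed by asking for the version numbers it was composed of.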

The benefits of instituting and developing a mature change management
process include:

❖ Improved overall visibility into and communication of changes across a
distributed enterprise

❖ Assurance that only changes that provide true business benefit are
approved

❖ Assurance that all proposed changes are scheduled based on business
priority, infrastructure impact and service risk

❖ Improved ability to smoothly regress to a previous state in the event of
change failure or unanticipated results

❖ Time to implement changes is reduced

❖ Disruptions to ongoing service provision are minimized

Manage Change or It Will Manage You!


While "change" is often thought of as a dirty word, the reality is that
change happens for legitimate business reasons. In today's fast-moving
and competitive marketplace, it's unrealistic to expect stakeholders to have
perfect knowledge of what they want or need to achieve business
objectives. One wants to avoid drastic changes not tied to business
objectives, but not at the expense of ignoring real opportunities to deliver
more value to the organization. The most important thing is that an
informed decision is made about if and how to incorporate the change.

To sum up with a quote from Winston Churchill: "To improve is to change;
to be perfect is to change often" - very apt for a successful change
management program.

8.4 RISK MANAGEMENT

Protecting business and stakeholder interests in all spheres is critical for
every management team. For example, a US organization outsourcing its
software development activities to a vendor-partner in India or elsewhere
needs to identify, analyze and plan for the risks of outsourcing (which could
range from technology and domain knowledge to simple English
communication skills) and ensure that the right mitigation plans are in
place.

In real life everyone manages risk, consciously or unconsciously, but very
rarely systematically. Getting to work could require forward thinking, e.g.
filling petrol in a car or 2-wheeler, ironing the right dress for a
presentation, having a backup pen-drive, etc. Managing risk is trying to
maximize the opportunities one gets and avoid surprises by minimizing
threats. Even in a game like football or cricket, right from the word "GO",
the captains and the management of the competing teams have to
contemplate all the risks - winning the toss, pitch conditions, weather
uncertainty, player fitness, umpires' past records, crowd friendliness and so
on. As the game progresses the risks change depending on the winning or

losing streak. Adjustments have to be made to avert a loss or bridge gaps
in scoring, time left, etc. Risk management is in play along with the game.

To begin with, what is a Risk?

Risk is defined in the ISO 31000 standard as "the effect of uncertainty on
objectives". This differs from the earlier concept of risk as the "chance or
probability of loss". The salient points emphasized in the standard are:

❖ Risks include both positive possibilities as well as negative ones.

❖ Risk by itself is not positive or negative, but its consequences are

❖ Shift of emphasis from event to effects

❖ Risk management creates and protects value

❖ Definition of specific attributes of enhanced risk management

Risk can be perceived as the likelihood that a particular threat, using a
specific attack, will exploit a particular vulnerability of a system and result
in an undesirable consequence, where

❖ Threat is any circumstance or event with the potential to cause harm to
an information system in the form of destruction, disclosure, adverse
modification of data, and/or denial of service.

❖ Likelihood of the threat occurring is the estimate of the probability that
a threat will succeed in achieving an undesirable event.

❖ Attack is the undesirable event.

❖ Vulnerability is a weakness in the system that could be exploited by a
threat.

❖ Consequence is that which logically or naturally follows an action or
condition. (Definitions are from the National Information Systems
Security (INFOSEC) Glossary, NSTISSI No. 4009, Aug. 1997)

Risks can manifest in many ways. They could be

❖ Performance risk - the degree of uncertainty that the product will meet
its requirements and be fit for its intended use.

❖ Cost risk - the degree of uncertainty that the project budget will be
maintained.

❖ Support risk - the degree of uncertainty that the resultant software will
be easy to correct, adapt, and enhance.

❖ Schedule risk - the degree of uncertainty that the project schedule will
be maintained and that the product will be delivered on time.

Project Size Risks

❖ Estimated size of the product in LOC, FP, number of programs, files,
transactions?

❖ Percentage deviation in size of the product from the average for
previous products?

❖ Size of database created or used by the product?

❖ Amount of reused software?

Business Impact Risks

❖ Effect of this product on company revenue?

❖ Reasonableness of the delivery deadline?

❖ Number of customers who will use this product?

❖ Amount and quality of product documentation that must be produced
and delivered to the customer?

❖ Costs associated with late delivery?

❖ Costs associated with a defective product?

Customer Risks

❖ Have you worked with the customer in the past?

❖ Does the customer have a solid idea of requirements?

❖ Is the customer willing to participate in reviews?

❖ Is the customer technically sophisticated?

❖ Will the customer resist looking over your shoulder during technically
detailed work?

Process Maturity Risks

❖ Have you established a common process framework?

❖ Is it followed by project teams? Buy-in from Management?

❖ Do you conduct formal technical reviews?

❖ Are CASE tools used for analysis, design and testing? Integrated tools?

❖ Have document formats been established?

Technology Risks

❖ Is the technology new to your organization?

❖ Is new or unproven hardware involved?

❖ Is a specialized user interface required?

❖ Are there significant performance constraints?

Staff/People Risks

❖ Are enough people available?

❖ Are the best people available?

❖ Has the staff received necessary training?

❖ Will some people work part time?

❖ Is the staff committed to the project duration?

❖ Will turnover among staff be low?

Can one foresee and prevent all risks? Take the example of the floods of
July 2005 in Maharashtra. Unprecedented torrential rain flooded many
parts of the state, including large areas of Mumbai city, and at least 5,000
people died. The term "26 July" is now, in this context, always used for the
day the city of Mumbai came to a standstill. The "threat" was always there,
the "likelihood" was high during the monsoons, the "vulnerability" was
exposed, the "attack" happened and the "consequence" was for all to see.
Large numbers of people were stranded on the road, lost their homes, and
many walked long distances back home from work that evening. Transport
services were at a standstill, and the state government declared a holiday
on the 27th and 28th of July. Statisticians recorded this as the eighth
heaviest 24-hour rainfall figure ever: 994 mm (39.1 inches). IT companies
had major problems maintaining continuity of their services, adhering to
schedules or responding to crisis calls from their customers or clients; they
could not respond to this "attack" effectively. However, the crisis led to a
thorough review of disaster recovery and business continuity plans and of
the coverage of risks in contracts and service level agreements.

A suggested list in order of preference on how to deal with risk:

❖ Avoiding the risk by deciding not to start or continue with the activity
that gives rise to the risk

❖ Accepting or increasing the risk in order to pursue an opportunity

❖ Removing the risk source

❖ Changing the likelihood

❖ Changing the consequences

❖ Sharing the risk with another party or parties (including contracts and
risk financing)

❖ Retaining the risk by informed decision

Risk assessment, one of the important activities, is the process of
analyzing threats to and vulnerabilities of a system (or organization) and
the potential impact that the loss of its capabilities would have. Risk
assessment is the determination of the quantitative and/or qualitative
value of risk related to a concrete situation and a recognized threat; the
resulting analysis is used as a basis for identifying appropriate and
cost-effective counter-measures. Risk assessment applies the classic
process of:

❖ Risk Identification: Identify sources of risk, areas of impact and
consequences.

❖ Risk Analysis: Understand the risk and whether it needs to be fully
evaluated.

❖ Risk Evaluation: Compare the level of risk established in the previous
stage with the established risk tolerance criteria.

❖ Risk Treatment: Modify the risk and decide on a treatment option.

Qualitative Risk Assessment is essentially judging an organization's
exposure to threats. One ranks the seriousness of the threats based on
judgment, past history or intuition; it is very subjective, and it is difficult to
justify any return on investment from it. For example, the likelihood of a
risk event occurring can be classified as very high, medium, very low, etc.,
or the risk impact can be classified by the level of damage that can occur
when the risk event occurs, again as very high, low, very low, etc.
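One common way to make such qualitative judgments comparable is to map the labels onto an ordinal scale. The labels, scores and thresholds below are illustrative assumptions, not taken from any standard:

```python
# Ordinal scores for the qualitative labels (illustrative values only).
LIKELIHOOD = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}
IMPACT     = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}


def qualitative_rating(likelihood, impact):
    """Combine two qualitative labels into a single ordinal risk rating."""
    score = LIKELIHOOD[likelihood.lower()] * IMPACT[impact.lower()]
    if score >= 15:
        return "high risk"
    if score >= 6:
        return "medium risk"
    return "low risk"
```

A "very high" likelihood combined with a "high" impact would then rank as high risk, while "low" likelihood with "medium" impact ranks as medium risk; the judgment remains subjective, but at least it becomes consistent across assessors.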

Quantitative Risk Assessment requires assigning real numbers to the costs
of the risk components, calculating the potential loss and assigning a
probability to the event occurring; it can have its own margin of error.
Quantitative risk assessment calculates risk (R) from two components: the
magnitude of the potential loss (L), and the probability (p) that the loss
will occur.

Risk Exposure = Risk Impact x Risk Probability

Example: if there is a risk of losing a car, the risk impact is the cost to
replace the car, e.g. $10,000. If the probability of losing the car is 0.10,
then Risk Exposure = 10,000 x 0.10 = 1,000.
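The calculation can be expressed directly in code; the figures repeat the car example from the text:

```python
def risk_exposure(impact, probability):
    """Risk Exposure = Risk Impact x Risk Probability."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must lie between 0 and 1")
    return impact * probability


# Car example from the text: impact $10,000, probability 0.10
print(risk_exposure(10_000, 0.10))   # -> 1000.0
```

The guard on the probability value matters in practice: a common spreadsheet error is mixing percentages (10) with fractions (0.10), which inflates every exposure figure a hundredfold.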

An effective strategy for dealing with risk must consider three issues:

❖ Risk mitigation (avoidance) is the primary strategy and is achieved
through a plan.

❖ Risk monitoring is a tracking activity to assess whether predicted risks
occur, to ensure that the risk aversion steps defined for the risk are
being properly applied, and to collect information that can be used for
future risk analysis.

❖ Risk management and contingency planning: take corrective and
preventive action based on the results of the tracking activity.

Building a Risk Table (outlined below) is a useful way of identifying,
tracking and managing risks:

❖ Identify the risks
❖ Estimate the probability of occurrence
❖ Estimate the impact on the project on a scale of 1 to 5, where
1 = low impact on project success and
5 = catastrophic impact on project success
❖ Sort the table by probability and impact
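The steps above can be sketched as follows; the risks, probabilities and impact values are invented purely for illustration:

```python
# Each entry: (risk description, probability of occurrence, impact 1..5)
risks = [
    ("Key developer leaves mid-project", 0.30, 4),
    ("Requirements change late",         0.60, 3),
    ("New database version is unstable", 0.15, 5),
]

# Sort by exposure (probability x impact), highest first.
risk_table = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for desc, prob, impact in risk_table:
    print(f"{desc:35s} p={prob:.2f} impact={impact} exposure={prob * impact:.2f}")
```

Sorting by exposure makes the "Top 10" or "Top 3" risks fall out immediately, which is how the table supports the prioritization discussed later in this section.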

The big challenge is estimating or knowing the two quantities: potential
loss and probability of occurrence. Going back in time, around 1995 the
users and owners of mega applications running on multiple mainframes,
chugging along happily for 15 years, realized the huge risk of using their
software beyond 1999 without making it Y2K compliant. When it dawned
that the programs would collapse on 1st January 2000, the risk analysts
were all out with their calculators and pens to address this gigantic
software menace. The users and the IT teams in these organizations had a
difficult time putting together a road map for the conversion of the large
applications. Identifying which source code required change was
sometimes decided by the toss of a coin; performing magic was easier
than calculating the loss and probability of occurrence of a project failure.
For example, an IT project could be delayed by a few weeks and result in
a loss of revenue of a few thousand dollars for the outsourcing partner,
but if Y2K applications were not rolled out before the crossover of the
millennium (from Year 1999 to Year 2000), the consequences for the
business could be enormous.

What is Risk Management? It can be defined as a process concerned with
the identification, measurement, control and minimization of security risks
in information systems to a level commensurate with the value of the assets
protected. An enterprise-wide approach to risk management enables an
organization to consider the potential impact of all types of risks on all
processes, activities, stakeholders, products and services. Implementing a
comprehensive approach will result in an organization benefiting from what
is often referred to as the 'upside of risk'. The world has seen how the
global financial crisis in 2008 brought into focus the weaknesses of well-
established systems and highlighted the importance of adequate risk
management.

To quote James Lam: "The only alternative to risk management is crisis
management --- and crisis management is much more expensive, time
consuming and embarrassing." (James Lam, Enterprise Risk Management,
Wiley Finance, 2003)

The principles, framework and processes for Risk Management are depicted
in the diagram below.

[Figure: Risk Management - principles, framework and process]
Good Risk Management helps in:
❖ Focusing efforts - helps prioritize (Top 10, Top 3, least priority, etc.)
❖ Being proactive, not reactive - preparing for risks before they happen
❖ Identifying risks and developing appropriate risk mitigation strategies
❖ Improving outcomes - achievement of objectives
❖ Enabling accountability, transparency and responsibility
❖ Sometimes, assuring survival

Risk Management cannot be reactive. A reactive project team reacts to
risks only when they occur, plans for additional resources in anticipation of
fire-fighting, and performs "fix on failure" - finding resources and applying
them when the risk strikes.

Risk Management should be proactive: formal risk analysis is performed,
the organization corrects the root causes of risk, examines risk sources
that lie beyond the bounds of the software, and develops skills to manage
change.

Risk management is not a "snap-shot" process. Risks are alive all the time
and changing. They need to be reviewed periodically and the parameters of
risk clearly confirmed. The probability of the risks, the impact of risks, the
severity etc. can change as the project progresses or as the organization
matures. Standards once established can promote continuous improvement
by being periodically reviewed and updated.

As technology changes at a fast pace, the challenges in implementing the
risk assessment and management model are also likely to change,
requiring amendments to policies from time to time. Periodic review,
updates and feedback for strengthening the risk assessment and
management model are therefore a must.

8.5 SUMMARY

Software Implementation or Deployment follows user acceptance testing
and includes all of the activities that make a software system available for
use to the end-user. Deployment can be a simple release, installing and
activating, upgrading, or retiring an existing application. In spite of all the
challenges, this is the most rewarding phase for both the vendor and the
customer.

Software maintenance is the modification of a software product after
delivery to correct faults or to improve performance or other attributes.
Changes can happen due to technology changes, defect fixing, upgrades,
enhancements, statutory requirements, etc. Unlike hardware maintenance,
software maintenance costs can be significantly higher compared to the
development costs. There are no rate cards for maintenance of software or
"standard" documentation. Though the dream of every programmer is to
do only development in his career lifetime, reality is that 80% of the life of
a product is in maintenance. The challenges in maintenance stem from
unstructured code, insufficient knowledge of the system and many other
reasons. It is said that development is an art and science, maintenance is a
craft.

Understanding and controlling changes is one of the biggest challenges in


software engineering projects. Unplanned changes due to errors, statutory
changes, business conditions, technology changes or threats can force any
software product to be changed. Change Management System managed
through change requests implements the approved changes into the
project with a minimum of disruption. Baselining helps practitioners to
control change without seriously impeding justifiable change. Change
management begins from identifying the change, assessing its impact,
evaluating the costs, benefits and the risks of changes, obtaining approval,
implementing the changes and verifying them. Version control is a critical
activity required for good change management to manage different
versions of configuration objects.

Software development is not a cake-walk and risks exist in any project


even with the best of models, tools, resources and skills. Protecting
business and stakeholder interests in all spheres is critical for every
management team. Risk i.e. "the effect of uncertainty on objectives" must
be managed along with the project life cycle. Risks need to be assessed

both quantitatively and qualitatively, the treatment options evaluated, and
the risk exposure calculated, tracked and monitored. Risk Management
needs to be very proactive and cannot be reactive.

8.6 SELF-ASSESSMENT QUESTIONS

1. Discuss the different types of implementation, e.g. manual to
automation, existing automation to new systems, etc. What are the
likely challenges in each of the implementation types?

2. Discuss various deployment activities with examples.

3. What are the differences between the deployment activities release,
install and activate, and upgrade?

4. Implementation is not a one-step process. Why not? Discuss with an
example of any large application implementation in any organization.

5. Why is the implementation phase the most rewarding phase for vendor
and customer?

6. What does maintenance phase imply? What are the challenges and
steps followed for maintenance?

7. Why is maintenance important?

8. What needs to be maintained in a working application? If a customer
has been using the product for 4 or more years, does it need to be
changed? Answer with explanations.

9. If a customer seeks modifications or enhancements of a product that
was executed by following all software engineering principles (!), which
of the project-related configuration items can change? Why?
Programmers consider maintenance a second-class job. Discuss the
reality in industry and the skills needed for maintenance work.

10.What are the different types of maintenance? Give examples for each of
them.

11.Differentiate between maintenance of a car and maintenance of
software. Explain the differences with examples.

12.How are changes managed in the software after development and


deployment?

13.How does one control the changes that arise after deployment? Explain
a logical process flow that can be implemented to track and control the
changes from its inception till implementation including exceptions.

14.What is meant by baselining? With changes happening continuously in


any project, how does it help to have a baseline version?

15.What is a toll gate? How is it useful to manage changes?

16.Explain the terms "check-in" and "check-out" in change management.
How do they help control changes?

17.In any project changes may impact only some of the configuration
items. For example the test case document can change with no change
to code or design. Will the version numbers be changed for all
configuration items if the test case document version number is
changed? How is it possible to correlate different items with different
version numbers?

18.Explain the sentence "Manage the Change or Change will manage you"

19.Using the analogy of the risks in a cricket match, discuss the various
risks, the impact, etc. that can arise in a project. As a project leader
how will you address each of the identified risks?

20.Explain risk management with examples from any real-life project or
situation you have faced or handled, for example "getting stuck in the
flood" due to rains in Maharashtra.

21.Why do we need qualitative and quantitative assessment of risks?


Explain with an example.

22.What are the steps in Risk assessment? Give examples of the steps as
applicable to a project.

23.What is the difference between risk avoidance, acceptance and
mitigation? Give examples.

24.Briefly explain the term "risk exposure" with some examples.

25.What is the difference between risk management and crisis


management?

26.Assume you have been assigned the responsibility of "project manager"


for a marriage of a relative. Identify and tabulate the top 20 risks that
you can visualize from the beginning till completion of marriage. For
each of the risks explain the impact, the mitigation plans, the tracking
and monitoring required for this.

27.Differentiate between reactive and proactive risk management? Explain


giving examples from real life problems.

28.Risks are not passive but active. What does this statement imply? What
must be done to handle such risks?


REFERENCE MATERIAL
Click on the links below to view additional reference material for this
chapter

Summary

PPT

MCQ

Video Lecture

QUALITY MANAGEMENT AND METRICS

Chapter 9
Quality Management and Metrics
Objective:

On completion of this chapter you would be able to understand

❖ Different perceptions and perspectives of quality

❖ Importance of Quality in Software development

❖ Concepts of Quality Assurance, Quality Control and Quality Management
Systems

❖ Difference between Assurance and Control

❖ Different standards in practice in industry to ensure quality of products
or services to customers

❖ SEI Capability Maturity Model Integration - a maturity framework to
help IT organizations improve their software engineering capabilities
and develop the right software, defect free, within budget and on time,
every time

❖ Six Sigma - a methodology that provides businesses with the tools to
improve the capability of their business processes

❖ Basics of Metrics and their usage

❖ How quality can be managed with metrics

Structure:

9.1 Quality - A perspective

9.2 Software Quality Management


9.2.1. Quality planning

9.2.2. Quality Assurance

9.2.3. Quality Control

9.2.4. The Seven Basic Tools of Quality

9.3 Cost of Quality

9.4 Quality Management System - Requirements (ISO 9001)

9.4.1. Overview of the Standard

9.4.2. Principles of ISO 9001

9.4.3. Criticism of ISO 9001

9.5 SEI-Capability Maturity Model Integration (CMMI)

9.6 Six Sigma

9.7 Metrics

9.7.1 Measurement

9.7.2 Why Measure?

9.7.3 Software Metrics

9.7.4 Attributes of Effective Software Metrics & Etiquette of Metrics

9.8 Quality is a journey

9.9 Summary

9.10 Self-Assessment Questions


9.1 QUALITY - A PERSPECTIVE

What do the following 3 statements mean?

❖ Burgers at McDonalds are very good

❖ I prefer to drive a bike than a car

❖ I want 3 Xerox copies of this document

Each one of these indicates some attributes and perspectives of quality.
Quality can be viewed in a:

❖ Comparative sense: the degree or level of excellence that a thing
possesses

❖ Quantitative sense: the extent of departure from certain well-defined
standards

❖ Fitness-for-purpose sense: an article is made in order to serve a
purpose, and if it does not fulfill its intended purpose it is termed a low
quality product

❖ Subjective sense: the quality of a creative or artistic item depends very
much on the individual perception of the person viewing it and the
value each person places on the item

The perception of quality differs depending on whose perspective it is. The
differences in views arise mainly because any activity, project or business
has many stakeholders, each perceiving quality in their own way. Some of
these are direct, tangible views while others are indirect or derived.

Customer's View could be


❖ Receiving the right product, "fit for use"
❖ Getting a defect free product
❖ Getting on-time delivery
❖ Receiving value for money
❖ Being treated with Integrity, courtesy & respect


Producer's (Employee) View includes


❖ Conformance to Requirements
❖ Doing the right thing (Developing the right product, Proper

understanding of the requirements)


❖ Doing it the right way (Right Processes, Appropriate Design methodology,

Training, etc.)
❖ Doing it right the first time and every time (Emphasis on prevention,

Working smarter)
❖ Clear Communication with Customer
❖ Better Work-life Balance

Provider's (Organization) View


❖ Delivery of products or services within time and budget
❖ Increase in productivity
❖ Increase in Customer Satisfaction
❖ Repeat business
❖ Differentiator between own offerings and those of competitors

ISO standards define quality as "the totality of features and characteristics
of a product or service that bear on its ability to satisfy stated or implied
needs." Quality is a manifestation of some characteristics IN a product or
OF a process. Quality could be the "non-inferiority or superiority" of
something.

Quality has been with us since time immemorial; since the Second World
War it has been used more and more as a competitive weapon or
competitive advantage. Japan is a classic example of how a nation used
quality to become a world player in trade and industry. After World War II,
as Japan was rebuilding from the war, many business executives went
through training in quality conducted by Dr. Deming and Dr. Juran. These
executives took the quality message to heart, and the results today are too
obvious to mention. One associates Japan with "quality" as one associates
"fish with water". Other countries like Taiwan, Singapore and South Korea
adopted similar models and became very successful in the global market.
Customers all over the world have become so demanding, and so expect
good quality, that quality is no longer a competitive advantage but a sheer
necessity for survival. Therefore quality cannot be left to "inspection" and
"rejection"; it has to be designed and built into the processes and the
products.


To quote Robert Pirsig in the best seller "Zen and the Art of Motorcycle
Maintenance":

"If we can show that a world without Quality functions abnormally, then
we have shown that Quality exists, whether it is defined or not."

Why is quality important?

❖ In 1986, two hospital patients died after receiving fatal doses of radiation
from a Therac 25 machine after a software problem caused the machine
to ignore calibration data

❖ In one of the biggest software errors in banking history, Chemical Bank


mistakenly deducted about $15 million from more than 100,000
customer accounts

❖ In August 2008, the Privacy Rights Clearinghouse stated that more than
236 million data records of U.S. residents have been exposed due to
security breaches since January 2005

There are many more such examples where quality has been compromised
and fatal results have occurred. The importance of quality is well
emphasized in the following quote:

"Programmers are responsible for software quality - quality in their own


work, quality in the products that incorporate their work, and quality at the
interfaces between components. Quality has never been and will never be
tested in. The responsibility is both moral and professional".
-Boris Beizer

9.2 SOFTWARE QUALITY MANAGEMENT

A quality management system (QMS) is a set of policies, processes and


procedures required for planning and execution (production/development/
service) in the core business area of an organization. ISO 9001:2008 is an
example of a Quality Management System.

Generically a group of documents is referred as a QMS, but specifically it


refers to the entire system - the documents just describe it. A QMS
integrates the various internal processes within the organization and


intends to provide a process approach for project execution. A Process


Based QMS enables the organizations to identify, measure, control and
improve the various core business processes that will ultimately lead to
improved business performance. A quality management system is a
management technique used to communicate to employees what is
required to produce the desired quality of products and services and to
influence employee actions to complete tasks according to the quality
specifications.

What Purpose Does a Quality Management System Serve?


❖ Establishes a vision for the employees.
❖ Sets standards for employees.
❖ Builds motivation within the company.
❖ Sets goals for employees.
❖ Helps fight the resistance to change within organizations.
❖ Helps direct the corporate culture.

It includes a process for "identifying which quality standards are relevant to
the project and determining how to satisfy them". Obviously, one size does
not fit all. Selecting and modifying applicable quality standards and
procedures for a particular project is part of the planning process.

Countries have raised their own standards of quality in order to meet


International standards and customer demands. There are many methods
for quality management and improvement. These cover product
improvement, process improvement and people based improvement.
Some of these are

❖ ISO 9004:2008 - guidelines for performance improvement.

❖ ISO 15504-4: 2005 - information technology - process assessment - Part


4: Guidance on use for process improvement and process capability
determination.

❖ QFD - quality function deployment, also known as the house of quality


approach.

❖ Kaizen - Japanese for change for the better; the common English term is
continuous improvement.

❖ Zero Defect Program - created by NEC Corporation of Japan, based upon
statistical process control and one of the inputs for the inventors of Six
Sigma.

❖ Six Sigma - 6σ. Six Sigma combines established methods such as


statistical process control, design of experiments and failure mode and
effects analysis (FMEA) in an overall framework.

❖ PDCA - plan, do, check, act cycle for quality control purposes.

❖ The Toyota Production System - reworked in the west into lean


manufacturing.

❖ TQM - total quality management is a management strategy aimed at


embedding awareness of quality in all organizational processes

Some of these standards are covered in the subsequent sections.

Software Quality Management is structured into 3 principal activities


❖ Quality Planning
❖ Quality Assurance
❖ Quality Control

"The purpose of the quality management is to set up a system and a


management discipline that prevents defects from happening" - Philip B.
Crosby.

9.2.1 Quality planning

"Quality planning" is a structured process for developing products or


services that ensures that customer needs are met by the final result. The
tools and methods of quality planning are incorporated along with the
technological tools for the particular product being developed and
delivered. Designing a new automobile requires automotive engineering
and related disciplines and planning a new approach for a call-centre
services will require the techniques of an experienced business-process
expert. All need process, methods, tools, and techniques of quality
planning to ensure that they not only fulfill the best technical requirements
of the relevant disciplines but also meet (or exceed) the needs and
expectations of the customers.


Quality planning process is essential because in the history of modern


society, organizations have rather universally demonstrated a consistent
failure to produce the goods and services that unerringly delight their
customers. As a customer, everyone has been dismayed time and time
again when flights are delayed, a toy fails to function, public transport
vehicles break down, bridges and buildings collapse, system discs crash,
power grids fail, water pipes burst - all due to quality issues.
Most often the gap is because the producer simply fails to consider who the
customers are and what they need.

To meet such challenges, quality planning is essential: a systematic
process that translates quality policy into measurable objectives and
requirements, and lays down a sequence of steps for realizing them within
a specified timeframe.

9.2.2 Quality Assurance

Quality Assurance (QA) is a set of planned and systematic activities that
provide confidence that products and services will conform to specified
requirements and meet user needs. QA is a broad concept and includes all
those planned or systematic actions necessary to provide adequate
confidence that a product or service will satisfy given needs. QA covers
all the activities related to satisfying the relevant quality standards for
a project, including periodically evaluating overall project performance to
ensure the project will satisfy those standards. QA refers to the process
used to create the deliverables, and can be performed by a manager,
customer, auditor or third-party reviewer. It is primarily a management
responsibility but is performed as a staff function.

Examples of quality assurance include process checklists, project audits,
and methodology and standards development.

QA involves preventing defects (management by inputs).

Another goal of quality assurance is continuous quality improvement.


Benchmarking generates ideas for quality improvements by comparing
specific project practices or product characteristics to those of other
projects or products within or outside the performing organization. A
quality audit is another quality assurance process whereby structured

reviews of specific quality management activities help identify lessons


learned that could improve performance on current or future projects.

QA activities are determined before production work begins and these


activities are performed while the product is being developed. QA is generic
and does not concern the specific requirements of the product being
developed.

9.2.3 Quality Control

Quality Control defines and implements processes which ensure that the
project quality procedures and standards are followed by the software
development team. It is a process by which product quality is compared
against standards and action is taken if there is nonconformance.

It involves detecting and fixing defects (management by outputs)

Examples of quality control activities include inspection, deliverable reviews


and the testing process.

The main outputs of quality control are:

❖ Acceptance Decision: Describes whether the products or services produced
as part of the project will be accepted or rejected.

❖ Rework: Describes the action taken to bring rejected items into
compliance with product requirements, specifications or other stakeholder
expectations.

❖ Process Adjustments: Describes the adjustments made to correct or
prevent further quality problems, based on quality control findings.

QC refers to quality-related activities associated with the creation of
product or project deliverables. QC, with a narrower focus than QA, is used
to verify that deliverables are of acceptable quality and that they are
complete and correct. QC is about adherence to requirements. QC activities
are performed after the product is developed.


9.2.4 The Seven Basic Tools of Quality

This is a designation given to a fixed set of graphical techniques identified


as being most helpful in troubleshooting issues related to quality. They are
called basic because they are suitable for people with little formal training
in statistics and because they can be used to solve the vast majority of
quality-related issues.

The seven tools are:

1. Cause-and-effect diagram (also known as the "fishbone" or Ishikawa


diagram)

2. Check sheet

3. Control chart

4. Histogram

5. Pareto chart

6. Scatter diagram

7. Run Chart

Other useful tools are Testing and Six Sigma. Testing has already been
covered in detail earlier. Six Sigma topics will be covered later in this
chapter. Some of the tools mentioned above are explained below.

Ishikawa Diagrams
Ishikawa diagrams are also called fishbone diagrams, herringbone
diagrams and cause-and-effect diagrams. They are causal diagrams
created by Kaoru Ishikawa (1968) that show the causes of a specific event.
Common uses of the Ishikawa diagram are product design and quality
defect prevention, to identify potential factors causing an overall effect.
Each cause or reason for imperfection is a source of variation. Causes are
usually grouped into major categories to identify these sources of variation.
The categories typically include

❖ People: Anyone involved with the process

❖ Methods: How the process is performed and the specific requirements for
doing it, such as policies, procedures, rules, regulations and laws

❖ Machines: Any equipment, computers, tools, etc. required to accomplish


the job

❖ Materials: Raw materials, parts, pens, paper, etc. used to produce the
final product

❖ Measurements: Data generated from the process that are used to


evaluate its quality

❖ Environment: The conditions, such as location, time, temperature, and
culture, in which the process operates

Cause-and-effect diagrams trace complaints about quality problems back


to the responsible production operations. They help to find the root cause
of a problem. A typical diagram is shown adjacent, where the various
causes for a problem of users' inability to log on to a system is analyzed
and root causes identified.


Control Charts
The control chart was invented by Walter A. Shewhart while working for
Bell Labs in the 1920s. The control chart is one of the seven basic tools of
quality control. Typically control charts are used for time-series data,
though they can be used for data that have logical comparability. A control
chart is a graphic display of data that illustrates the results of a process
over time. The main use of control charts is to prevent defects, rather than
to detect or reject them.

Control Charts are useful when

❖ Controlling ongoing processes by finding and correcting problems as they


occur.

❖ Predicting the expected range of outcomes from a process.

❖ Determining whether a process is in control or out of control.

❖ Analyzing patterns of process variation from special causes or common
causes

❖ Determining whether quality improvement projects should aim to prevent


specific problems or to make fundamental changes to the process.

Please refer to the diagram given. Quality control charts are used in
conjunction with the "seven run rule" to look for patterns in data. The
seven run rule states that if seven data points in a row are all below the
mean, all above the mean, or all increasing or decreasing, then the process
needs to be examined for non-random problems.
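The control limits and the seven run rule described above can be sketched in a few lines of Python. This is a minimal sketch: the limits use the common mean plus or minus three standard deviations convention, and the sample data are illustrative.

```python
# Sketch of control-chart limits and the "seven run rule".
from statistics import mean, pstdev

def control_limits(data):
    """Return (LCL, centre line, UCL) using mean +/- 3 std deviations."""
    m = mean(data)
    s = pstdev(data)
    return m - 3 * s, m, m + 3 * s

def violates_seven_run_rule(data):
    """True if seven consecutive points lie on one side of the mean,
    or seven consecutive points are strictly increasing or decreasing."""
    m = mean(data)
    for i in range(len(data) - 6):
        window = data[i:i + 7]
        if all(x > m for x in window) or all(x < m for x in window):
            return True
        diffs = [b - a for a, b in zip(window, window[1:])]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            return True
    return False

samples = [10.1, 10.3, 9.9, 10.2, 10.0, 10.4, 9.8, 10.1, 10.2, 9.9]
lcl, centre, ucl = control_limits(samples)
print(f"LCL={lcl:.2f} mean={centre:.2f} UCL={ucl:.2f}")
print("Seven run rule violated:", violates_seven_run_rule(samples))
```

A steadily drifting process (for example, seven increasing measurements) would trip the rule even while every point stays inside the control limits, which is exactly why the rule complements the limits.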

If analysis of the control chart indicates that the process is currently under
control (i.e., is stable, with variation only coming from sources common to
the process), then no corrections or changes to process control parameters
are needed or desired. In addition, data from the process can be used to
predict the future performance of the process. If the chart indicates that
the monitored process is not in control, analysis of the chart can help
determine the sources of variation, as this will result in degraded process
performance. A process that is stable but operating outside of desired


limits needs to be improved through a deliberate effort to understand the
causes of current performance and fundamentally improve the process.


Run Charts

A run chart, also known as a run-sequence plot is a graph that displays


observed data in a time sequence. The data displayed represent some
aspect of the output or performance of a manufacturing or other business
process. A run chart displays the history & pattern of variation of a process
over time. The chart shows data points plotted in the order in which they
occur. These charts are used to perform trend analysis to forecast future
outcomes based on historical patterns. Run charts can help identify
problems and the time when a problem occurred, or monitor progress
when solutions are implemented. A simple example is given in the adjacent
diagram.


Run charts are similar in some regards to the control charts used in
statistical process control, but do not show the control limits of the
process. They are therefore simpler to produce, but do not allow for the full
range of analytic techniques supported by control charts.

Scatter Diagrams
A scatter diagram is a tool for analyzing relationships between two
variables. One variable is plotted on the horizontal axis and the other is
plotted on the vertical axis. The pattern of their intersecting points can
graphically show relationship patterns. Most often a scatter diagram is used
to prove or disprove cause-and-effect relationships. While the diagram
shows relationships, it does not by itself prove that one variable causes the
other. In addition to showing possible cause-and-effect relationships, a
scatter diagram can show that two variables result from a common cause
that is unknown, or that one variable can be used as a surrogate for the
other. The closer the data points are to a diagonal line, the more closely
the two variables are related. An example is given in the diagram above.
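The strength of the relationship a scatter diagram suggests can be quantified with Pearson's correlation coefficient. The following is a minimal sketch; the effort-versus-defects data are illustrative, not drawn from any real project.

```python
# Pearson's correlation coefficient for two paired samples.
from math import sqrt

def pearson_r(xs, ys):
    """r near +1 or -1 means a strong linear relationship; near 0, none."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative data: hours of review effort vs. defects found later.
effort = [1.0, 2.0, 3.0, 4.0, 5.0]
defects = [9.0, 7.5, 6.0, 4.0, 2.5]
r = pearson_r(effort, defects)
print(f"r = {r:.3f}")  # close to -1: a strong inverse relationship
```

As the text cautions, a strong r value shows association only; it does not by itself prove that one variable causes the other.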


Histograms
A histogram is a graphical representation of the distribution of data. It is
a bar graph of a distribution of variables. It is an estimate of the
probability distribution of a continuous (quantitative) variable and was
first introduced by Karl Pearson. Histograms give an idea of the density of
the data and are often used for density estimation: estimating the
probability density function of the underlying variable. Each bar
represents an attribute or characteristic of a problem or situation, and
the height of the bar represents its frequency. The adjacent diagram is a
simple example.

Histograms are often confused with bar charts. A histogram is used for
continuous data, where the bins represent ranges of data, and the areas of
the rectangles are meaningful, while a bar chart is a plot of categorical
variables and the discontinuity should be indicated by gaps between the
rectangles, from which only the length is meaningful. Often this
distinction is neglected, which may lead to a bar chart being confused
with a histogram.

Pareto Chart
A Pareto chart, named after Vilfredo Pareto, is a type of chart that contains
both bars and a line graph, where individual values are represented in
descending order by bars and the cumulative total is represented by the
line. It is a sorted bar chart that helps identify and prioritize problem
areas. The adjacent diagram is a simple example of a Pareto chart.


The purpose of the Pareto chart is to highlight the most important among a
typically large set of factors. Pareto charts are extremely useful for
analyzing what problems need attention first because the taller bars on the
chart, which represent frequency, clearly illustrate which variables have the
greatest cumulative effect on a given system.

Pareto analysis is also called the 80-20 rule, meaning that 80 percent of
problems are often due to 20 percent of the causes.
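The Pareto computation itself, sorting causes by frequency and accumulating their share, can be sketched in a few lines. The defect categories and counts below are illustrative.

```python
# Pareto analysis: order causes by frequency and compute the cumulative
# percentage, exposing the "vital few" causes to tackle first.
def pareto(cause_counts):
    total = sum(cause_counts.values())
    ordered = sorted(cause_counts.items(), key=lambda kv: kv[1], reverse=True)
    cumulative, rows = 0, []
    for cause, count in ordered:
        cumulative += count
        rows.append((cause, count, 100.0 * cumulative / total))
    return rows

# Illustrative defect counts by category.
defect_counts = {
    "UI defects": 55, "Logic errors": 25, "Data errors": 10,
    "Config issues": 6, "Docs": 4,
}
for cause, count, cum_pct in pareto(defect_counts):
    print(f"{cause:15s} {count:3d}  cum {cum_pct:5.1f}%")
```

With these illustrative figures the top two categories already account for 80 percent of the defects, mirroring the 80-20 rule.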

9.3 COST OF QUALITY

Nothing in life comes without a price (except our parents). In process


improvement efforts and to ensure quality of service or product one must
keep the costs in mind. Cost of quality is a means to quantify the total cost
of quality-related efforts and deficiencies. It was first described by Armand
V. Feigenbaum in a 1956 Harvard Business Review article.

The "cost of quality" is not just the price of creating a quality product or
service. It also includes the cost of NOT creating a quality product or
service. Every time work is redone, the cost of quality increases. Any cost

that would not have been incurred had quality requirements been met the
first time contributes to the cost of quality.

Cost of Quality is the cost of conformance plus the cost of


nonconformance. Conformance means delivering products that meet
requirements and fitness for use. Cost of nonconformance means taking
responsibility for failures or not meeting quality expectations.

A study reported that software bugs cost the U.S. economy $59.6 billion each
year; one third of the bugs could be eliminated by an improved testing
infrastructure.

There are five Categories of Cost of Quality:


Prevention cost: cost of planning and executing a project so it is error-free
or within an acceptable error range. This is cost of all activities specifically
designed to prevent poor quality in products or services. Examples are the
costs of New product reviews, Quality planning, Supplier capability surveys,
Process capability evaluations, Quality improvement team meetings,
Quality education and training, etc.

Appraisal cost: cost of evaluating processes and their outputs to ensure
quality. This is the cost associated with measuring, evaluating or auditing
products or services to assure conformance to quality standards and
performance requirements. Examples are inspection of purchased material,
in-process and final inspection, product, process or service audits,
calibration of measuring and test equipment, etc.

Internal failure cost: cost incurred to correct an identified defect before


the customer receives the product. These are failure costs occurring prior
to delivery or shipment of the product, or the furnishing of a service to the
customer. Examples include rework, re-inspection, regression testing,
material review, discontinued projects, etc.

External failure cost: cost that relates to all errors not detected and
corrected before delivery to the customer. These are failure costs occurring
after delivery or shipment of the product - and during or after furnishing of
a service - to the customer. Examples are processing customer complaints,
customer returns, warranty claims, product recalls, loss of client, loss of
other potential customers, etc.


Measurement and test equipment costs: capital cost of equipment used to


perform prevention and appraisal activities. In the software development
context, automation tools for testing, defect tracking and quality data
analysis tools are part of the measurement costs.
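The categories above can be rolled up into a simple cost-of-quality summary, where cost of conformance is prevention plus appraisal and cost of nonconformance is internal plus external failure. The following is a minimal sketch with illustrative figures.

```python
# Cost of quality = cost of conformance + cost of nonconformance
# (+ measurement and test equipment costs).
def cost_of_quality(costs):
    conformance = costs["prevention"] + costs["appraisal"]
    nonconformance = costs["internal_failure"] + costs["external_failure"]
    total = conformance + nonconformance + costs.get("measurement_equipment", 0)
    return {"conformance": conformance,
            "nonconformance": nonconformance,
            "total": total}

project_costs = {  # illustrative figures, in $K
    "prevention": 40, "appraisal": 60,
    "internal_failure": 80, "external_failure": 120,
    "measurement_equipment": 10,
}
summary = cost_of_quality(project_costs)
print(summary)  # {'conformance': 100, 'nonconformance': 200, 'total': 310}
```

Tracking the split this way makes the usual argument visible in numbers: money spent on conformance (prevention and appraisal) is intended to shrink the typically larger nonconformance bill.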

9.4 QUALITY MANAGEMENT SYSTEM - REQUIREMENTS


(ISO 9001)

The ISO 9000 series of standards, developed and published by the
International Organization for Standardization (ISO), helps organizations
define, establish, and maintain effective quality assurance systems for
manufacturing and service industries. ISO 9000 deals with the fundamentals
of quality
management systems, including eight management principles upon which
the family of standards is based. ISO 9001 deals with the requirements
that organizations must fulfill to meet the standards. Third-party
certification bodies provide independent confirmation that organizations
meet the requirements of ISO 9001. Over a million organizations
worldwide are independently certified, making ISO 9001 one of the best
known and most widely used management tools in the world.

The reasons for such widespread adoption of ISO 9001 include

❖ Purchasers require their suppliers to be ISO 9001 certified

❖ Significant financial benefits for organizations certified to ISO 9001

❖ Higher Return on investments compared to otherwise similar


organizations without certification; Shareholders rewarded for the
investment in an ISO 9001 system

❖ Better operational performance

❖ In manufacturing companies, improvements were seen in cycle-time
reduction and inventory reduction after ISO certification.

❖ Internal process improvements in organizations lead to externally


observable improvements

❖ Increased trade and market share

❖ Higher customer satisfaction

❖ Better interdepartmental communications

An ISO survey in 2011 shows that the number of certificates increased
continuously from more than 457,000 in December 2000 to more than
1,111,000 in December 2011. In 2010, India ranked 8th while China headed
the top 10 countries for ISO 9001 certificates, accounting for
approximately a quarter of global certifications. The ranking of the top 10
countries in 2010 was: 1-China (297,037), 2-Italy (138,892), 3-Russian
Federation (62,265), 4-Spain (59,854), 5-Japan (59,287), 6-Germany
(50,583), 7-UK (44,849), 8-India (33,250), 9-USA (25,101) and 10-Republic
of Korea (24,778). Note that ISO itself does not certify organizations.

Many countries have formed accreditation bodies to authorize the


certification bodies. The various accreditation bodies have mutual
agreements with each other to ensure that certificates issued by one of the
Accredited Certification Bodies are accepted globally. An ISO 9001
certificate must be renewed at regular intervals recommended by the
certification body, usually once every three years. Also, there are no
levels or grades of competence within ISO 9001 - a company is either
certified or not.


The ISO 9001 standard is generalized and abstract. Developing software is
different from producing cheese or cars, offering counseling services or
running soccer teams, selling fish or constructing buildings. Yet ISO 9001
guidelines, because they are business management guidelines, have been
successfully applied to each of these businesses. ISO 9001 promotes the
adoption of a "process approach" when developing, implementing and
improving the effectiveness of a quality management system, to enhance
customer satisfaction by meeting customer requirements and exceeding
expectations. This approach provides control over the linkage between the
individual processes within the system of processes, as well as over their
combination and interaction. It requires that the quality policy is
understood and followed at all levels and by all employees, with measurable
objectives. The approach requires that the business regularly reviews
performance through internal audits and meetings, determining whether the
quality system is working and what improvements can be made, with a
documented procedure for internal audits.


In addition, the methodology known as "Plan-Do-Check-Act" (PDCA) can be


applied to all processes. PDCA can be briefly described as follows.

❖ Plan: establish the objectives and processes necessary to deliver results


in accordance with customer requirements and the organization's
policies.

❖ Do: implement the processes.

❖ Check: monitor and measure processes and product against policies,


objectives and requirements for the product and report the results.

❖ Act: take actions to continually improve process performance
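The four PDCA steps above can be sketched as an iterative loop. This is a minimal sketch; the step functions are illustrative placeholders, not prescribed by the standard.

```python
# PDCA as an iterative loop: each cycle's Act feeds the next cycle's Plan.
def pdca_cycle(plan, do, check, act, iterations=3):
    """Run Plan-Do-Check-Act repeatedly, carrying results forward."""
    state = None
    for _ in range(iterations):
        objectives = plan(state)   # Plan: set objectives and processes
        results = do(objectives)   # Do: implement the processes
        findings = check(results)  # Check: measure against objectives
        state = act(findings)      # Act: improve for the next cycle
    return state

# Toy usage: each cycle nudges a numeric "quality score" upward.
final = pdca_cycle(
    plan=lambda s: (s or 0) + 1,
    do=lambda obj: obj,
    check=lambda res: res,
    act=lambda f: f,
)
print(final)  # 3 after three cycles
```

The point of the sketch is the shape of the loop: the output of Act becomes the input to the next Plan, which is what makes improvement continual rather than one-off.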

9.4.1 Overview of the Standard

Sections 1 to 3 cover the Introduction, Scope and References. Sections 4
through 8 form the core of the standard. The coverage of each is briefly
described below to give an idea of what the requirements are as per the
standard.

Section 4: General Requirements


This section mandates the development of a proper quality management
system: develop it, use it, and continue to improve it. The section
emphasizes creating formal documentation for describing, managing and
operating the quality system. This documentation starts with the enterprise
"quality policy" and includes objectives, procedures, records, work
instructions, formats and templates. The documentation needs to be accurate
and current at all times, with proper document-control procedures to track
changes from documentation to implementation, and with records created that
demonstrate that the system does work.

Section 5: Requirements for Management


This section defines requirements that management must meet to
demonstrate commitment to the quality management system. The
commitment for any quality system starts with the management who need
to have a vision, define and communicate how important quality is, have a
policy on quality with both qualitative and quantitative objectives that can
be measured. The policy must be a living document and include commitment to meet all

relevant requirements - ISO 9001, and legal and statutory requirements
related to the organization's services and products. Management commitment
is important: management must communicate the policy and plan, conduct
periodic reviews of the system, and provide adequate resources (people,
equipment, tools, and infrastructure) for the system. It is important to
identify who has what responsibility and
authority in the system and assign a senior management level person
(Management Representative) responsible for the quality system overall.
Management needs to focus on the customers who bring home the bacon,
understand their requirements and aim to enhance their satisfaction. They
must schedule regular, periodic formal reviews that include customer
feedback, reports on process efficiency, product performance, failures &
weaknesses, audit results, actions from previous reviews, improvements,
and best practices and so on. The focus is on continual improvement.

Section 6: Resource Requirements


Resource management is essential for a well-managed quality system. The
first step is to decide what resources are needed - people, equipment,
machines, materials, facilities, etc. - to meet the requirements, and then
to provide them. The people need to be competent for the work assigned, for
operating the system, for meeting customer requirements, and for improving
and enhancing customer satisfaction. Shortcomings in skills or competencies
need to be addressed by training and hands-on experience. People need to be
provided with the required infrastructure, properly maintained, and with
the work environment needed for delivering the product or services. The
management invariably has to do a balancing act to
manage what is needed and what can be afforded to still satisfy the
customer. Work environment also includes the culture, the social
atmosphere, the philosophy of the organization and employee recognition
programs.

Section 7: Requirements for Services or Products (Requirements for
Realization)

Product realization starts with planning the activities leading to the
product. Gathering customer requirements, both explicit and implicit, is
important. Implicit requirements include
product requirements not specified by the customer but necessary for
intended product use and regulatory and legal requirements. Tracking of
requirements, contracts and orders is essential for realization of the correct
products. Communication plays a key role and every organization must
plan and implement customer communications processes for handling
customer queries, customer feedback, product information and regular

review processes. There are very specific requirements and records


required for design and development planning, inputs, outputs, review,
verification, validation and changes. Purchasing is part of all projects or
business activities. The organization must determine criteria for evaluation,
selection and re-evaluation of suppliers and have proper processes for
purchasing information and for verification of purchased product. This
section also covers the requirements for control of measuring devices:
determining which measuring devices are needed and calibrating them
according to the standard's requirements.

Section 8: Requirements for Analysis, Measurement and


Improvement
The organization must facilitate the continuous improvement of the quality
management system through the use of the quality policies, objectives,
audit results, data analysis, corrective and preventive actions, and
management review. Data on customer satisfaction levels is one of the
parameters to confirm the ability of the organization to prove conformance
to requirements and ensure satisfaction to customers. These methods shall
confirm the continuing ability of each process to satisfy its intended
purpose. Regular audits of the quality management system against defined
requirements will provide data on the nonconformance areas that require
management oversight and control. At appropriate stages of the product
realization process, the organization must measure and monitor the
characteristics of the product to verify that requirements are met. After
every audit, management must establish corrective and preventive measures
to fix problems and avoid potential problems.

9.4.2. Principles of ISO 9001

The eight principles of the standard are:

❖ Customer Focus - Start with a relentless focus on the customer.


Understand their wants and needs. Meet their requirements. Exceed their
expectations.

❖ Leadership - Provide strategic unity of purpose. Create a clear vision. Set


direction. Provide resources.

❖ Involvement of People - Communicate. Inform. Involve. Leads to


ownership, pride, passion.

❖ Process approach - Everything is a process. Everyone has a supplier and
a customer.

❖ Systems approach to management - Understand that interrelated processes
form a system; for example, the order-fulfillment process as a whole, not
departmental silos.

❖ Continual Improvement - Products, processes, systems, services, people


everything.

❖ Factual approach to decision making - data collection, display, analysis.


Data to information to knowledge. Analytical and creative. Quantitative
and qualitative.

❖ Mutually beneficial supplier relationships - a company and its suppliers


are interdependent. Shared goals. Shared resources.

9.4.3 Criticism of ISO 9001

A common criticism of ISO 9000 and 9001 is the amount of money, time, and
paperwork required for registration. Some feel the effort goes only into
documentation, that the workplace becomes oppressive, and that quality is
not actually improved. Estimating the cost of certification, the resource
requirements for implementation and the effort of sustaining an ISO culture
is not easy. The other strong criticism is that ISO systems merely check
whether processes are being followed, not how good the processes are or
whether the correct parameters are being measured and controlled to ensure
quality.

9.5 SEI - CAPABILITY MATURITY MODEL INTEGRATION (CMMI)

The Software Engineering Institute (SEI) is a federally funded research and
development center established in 1984 and headquartered at Carnegie Mellon
University in Pittsburgh, USA. The SEI works closely with defense and
government organizations, industry, and academia to continually improve
software systems. Its vision was to bring an engineering discipline to the
development and maintenance of software products, and its core purpose was
to help IT organizations improve their software engineering capabilities
and develop the right software, defect free, within budget and on time, every


time. The simple methodology to achieve the desired results is shown in the
diagram below. The approach is to know "Where to go?", "How to go?" and,
most importantly, "Where is the starting point?"

Methodology to Achieve the Desired Result

1. Identify Current State: know your current capability maturity level.

2. Identify Desired State: understand the description of the next level.

3. Reduce the Gap: plan, implement, and institutionalise the key practices
of the next level. Repeat until continuous optimisation is part of the
culture.

CMMI, developed by a group of experts from industry, government, and the
SEI, is also used as a framework for appraising the process maturity of an
organization. CMMI is administered and marketed by Carnegie Mellon
University and required by many U.S. Department of Defense (DoD) and U.S.
Government contracts, especially for software development. Though CMMI
originated in software engineering, it has been generalized to address
other areas such as the development of hardware products, the delivery of
all kinds of services, and the acquisition of products and services. The
word "software" does not appear in the definitions of CMMI.

CMMI Version 1.3, released in November 2010, comprises models, appraisal
methods, and training for three areas of interest (called constellations).
A constellation is a particular collection of process areas specifically
chosen to help improve a given business need:

❖ Product and service development (CMMI for Development)

❖ Service establishment, management, and delivery (CMMI for Services)

❖ Product and service acquisition (CMMI for Acquisition)


To give a brief description of the SEI-CMM model:

❖ It is a model that describes software development practices critical to
success

❖ It is a descriptive model, not a prescriptive one

❖ Reasonable interpretation is required and tailoring is important

❖ Capability indicates the range of results that can be expected

❖ Maturity indicates a higher or lower degree of capability

❖ High capability implies greater predictability and low variation

In the current high-technology environment organizations build


increasingly complex products and services. Some of the components of
the products are outsourced while others are built in-house. Organizations
need to manage and control the complex development and maintenance
processes of production, procurement and integration.


The three critical dimensions that organizations must focus on are people,
procedures and methods, and tools and equipment. They are the major
determinants of product cost, schedule, and quality. The importance of a
motivated, quality workforce is normally realized, but even the finest
people cannot perform at their best when the process is not understood or
is not operating "at its best." What holds all of these together are the
processes: they allow scalability to be addressed, provide a way to
incorporate knowledge of how to do things better, allow resources to be
leveraged, and help examine business trends and align the way business is
done.

Organizations and their project managers often live with strong myths.
They believe that with advanced technology and tools, an experienced
manager and a good team, processes are not needed. They believe that
process hampers creativity, introduces bureaucracy, works only for large
projects, adds to costs and is a hindrance in a competitive marketplace.
Fortunately these myths have been proven wrong and organizations have
benefited from a process-driven approach.

The key words in the CMMI model are Maturity and Capability. Immature
organizations can be successful occasionally, but will ultimately run into
difficulties because they depend on "heroics" which cannot be guaranteed
to be repeated; their success depends on having the same people on the
team which is practically impossible. In immature organizations processes
are reactively introduced as crisis management, quality is compromised
and cost overruns are normal. Quality problems result in rework, incorrect
functions and customer complaints & dissatisfaction. Teams working on
such projects end up with low morale. A mature organization has processes
that are managed throughout the organization, quality can be
quantitatively assessed and estimating, scheduling or budgeting is based
on metrics from past historical data.

The SEI builds on the process management premise that "the quality of a
system or product is highly influenced by the quality of the process
used to develop and maintain it". This helps organizations improve
their performance and their capability to consistently and predictably
deliver the products and services their customers want, when they want
them, at a price they are willing to pay. Internally, CMMI helps companies
improve operational performance through lower costs of production, delivery
and procurement.


The CMMI model focuses on four key areas - Project Management, Engineering,
Process Management and Support. CMMI consists of 25 process areas and 460
practices. A process area (PA) is a cluster of related practices in an
area that, when performed collectively, satisfy a set of goals considered
important for making significant improvement in that area. Practices are
actions to be performed to achieve the goals of a process area.

CMMI uses a common structure to describe each of the 25 process areas
(PAs). A process area has one to four goals, and each goal comprises
practices. Each process area has only one generic goal at each level;
generic goals are called "generic" because the same goal statement appears
in multiple process areas. Within each PA there are specific goals and
practices which describe activities specific to that PA. A specific goal
applies to a process area and addresses the unique characteristics that
describe what must be implemented to satisfy the process area. A specific
practice is an activity that is considered important in achieving the
associated specific goal. One set of goals and practices, called generic
goals and generic practices, applies in common across all of the PAs.

CMMI addresses only "process areas" (PAs). CMMI practices can improve
existing work practices but do not define them; organizations need to look
elsewhere to define their own practices.

There are two types of representations in the CMMI models: Staged &
Continuous. The staged model groups process areas into 5 maturity
levels and is used to achieve a "CMMI Level Rating". The continuous
representation defines capability levels within each profile. The continuous
representation focuses on process area capability as measured by
capability levels and the staged representation focuses on overall maturity
as measured by maturity levels. The capability/maturity dimension of CMMI
is used for benchmarking and appraisal activities, as well as guiding an
organization's improvement efforts. The contents of both representations
are the same. Both CMMI representations contain all the model
components i.e. Process Areas, Specific Goals, Specific Practices, Generic
Goals and Generic practices. Also metrics are collected and used at all
levels of the CMMI, in both the staged and continuous representations.


What is a Maturity Level or a Capability level?

❖ A "Maturity Level" is what one can be appraised to and rated as when the
organization uses the Staged Representation of the CMMI. A maturity level
is a well-defined evolutionary stage of process improvement. There are
five maturity levels. Each level is a layer in the foundation for
continuous process improvement, using a proven sequence of improvements,
beginning with basic management practices and progressing through a
predefined and proven path of successive levels.

❖ A "Capability Level" is what one can be appraised to and rated as when
the organization uses the Continuous Representation of the CMMI. A
capability level is a well-defined evolutionary plateau describing the
organization's capability relative to a particular process area. There are
six capability levels. Each level is a layer in the foundation for
continuous process improvement. Capability levels are cumulative (i.e., a
higher capability level includes the attributes of the lower levels).

The figures shown below illustrate the structures of the continuous and
staged representations.

Continuous Representation


Staged Representation

The differences between the structures are subtle but significant. The
staged representation uses maturity levels to characterize the overall state
of the organization's processes relative to the model as a whole, whereas
the continuous representation uses capability levels to characterize the
state of the organization's processes relative to an individual process area.


The table given below depicts the levels for both representations:

Level   | Continuous Representation | Staged Representation
        | (Capability Levels)       | (Maturity Levels)
Level 0 | Incomplete                | NA
Level 1 | Performed                 | Initial
Level 2 | Managed                   | Managed
Level 3 | Defined                   | Defined
Level 4 | Quantitatively Managed    | Quantitatively Managed
Level 5 | Optimizing                | Optimizing
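The level table can also be encoded as a small lookup structure. The sketch below is illustrative only (the level names follow the table; the function and dictionary names are ours):

```python
# Capability (continuous) and maturity (staged) level names, per the table above.
# None marks a level that does not exist in that representation.
LEVELS = {
    0: {"continuous": "Incomplete", "staged": None},
    1: {"continuous": "Performed", "staged": "Initial"},
    2: {"continuous": "Managed", "staged": "Managed"},
    3: {"continuous": "Defined", "staged": "Defined"},
    4: {"continuous": "Quantitatively Managed", "staged": "Quantitatively Managed"},
    5: {"continuous": "Optimizing", "staged": "Optimizing"},
}

def level_name(level, representation):
    """Return the level name, or None where the representation has no such level."""
    return LEVELS[level][representation]

print(level_name(1, "continuous"))  # Performed
print(level_name(0, "staged"))      # None (the staged representation starts at level 1)
```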

Continuous Representation vs. Staged Representation:

Continuous Representation:
• Process areas are organized by process area categories.
• Improvement is measured using capability levels. Capability levels
measure the maturity of a particular process across an organization and
range from 0 through 5.
• There are two types of specific practices: base and advanced. All
specific practices appear in the continuous representation.
• Capability levels are used to organize the generic practices.
• All generic practices are included in each process area.
• Comparisons across organizations are possible, since equivalent staging
allows determination of a maturity level from an organization's
achievement profile.

Staged Representation:
• Process areas are organized by maturity level.
• Improvement is measured using maturity levels. Maturity levels measure
the maturity of a set of processes across an organization and range from 1
through 5.
• There is only one type of specific practice; the concept of base and
advanced practices is not used. All specific practices appear in the
staged representation, with some exceptions.
• Common features are used to organize generic practices.
• Only the level 2 and level 3 generic practices are included.
• No equivalence mechanism is needed, since maturity levels themselves are
the basis of comparison.


The staged model representation is based on the idea that the groupings of
process areas make sense as building blocks, with higher blocks depending
on the blocks below on the way to maturing processes towards higher and
higher capabilities (refer to the diagrams given below). The groupings
resemble stair-steps, or "levels". The other view is to focus on improving
a specific process area to the point of optimization, which can have a
high value to the business, and then move on to other process areas. For
example, an organization well known for its expertise in the Verification
(VER) process area may want to improve its processes to run verification
at a continually optimizing pace. The continuous representation allows an
organization to select the process area or areas, and the depth of
capability it wants to achieve in those process areas.

CMMI Staged Representation - Maturity Levels

The staged representation has five maturity levels (1-Initial, 2-Managed,
3-Defined, 4-Quantitatively Managed and 5-Optimizing). Maturity levels
consist of a predefined set of process areas. The maturity levels are
measured by the achievement of the specific and generic goals that apply
to each predefined set of process areas. The diagrams given below depict
1) the staged representation maturity levels and the characteristics of
each level and 2) the mapping of the process areas to each level.


Maturity Level 1 - Initial


At level 1, processes are usually ad hoc and chaotic. The organization
usually does not provide a stable environment. Success in these
organizations depends on the competence and heroics of the people in the
organization and not on the use of proven processes. Products and services
of such organizations may work, but projects frequently exceed budget and
schedule. Overcommitment leads to crisis, and processes are neglected.

Maturity Level 2 - Managed


At level 2, an organization has achieved all the specific and generic goals
of the maturity level 2 process areas. This implies that the projects of the
organization have ensured that requirements are managed; processes are
planned, performed, measured, and controlled. Existing practices are
retained during times of stress and projects are performed and managed
according to their documented plans. The work products and services
satisfy their specified requirements, standards, and objectives.


Maturity Level 3 - Defined


At level 3, an organization has achieved all the specific and generic goals
of the process areas assigned to maturity levels 2 and 3. Processes are
well characterized and understood, and are described in standards,
procedures, tools, and methods. Unlike level 2 where standards, process
descriptions, and procedures may be quite different in each specific
instance of the process, at maturity level 3, the standards, process
descriptions, and procedures for a project are tailored from a set of
standard processes to suit a particular project. The organization's set of
standard processes includes the processes addressed at maturity level 2
and maturity level 3. Hence processes are consistent across the
organization, barring differences allowed by the tailoring guidelines.
Processes are described in more detail and more rigorously than at
maturity level 2. Also processes are managed more proactively using an
understanding of the interrelationships of the process activities and
detailed measures of the process, its work products, and its services.

Maturity Level 4 - Quantitatively Managed


At level 4, an organization has achieved all the specific goals of the
process areas assigned to maturity levels 2, 3, and 4 and the generic
goals assigned to maturity levels 2 and 3. At maturity level 4 sub-
processes are selected that significantly contribute to overall process
performance and are controlled using quantitative techniques. Quantitative
objectives (based on customer requirements, organization and process
stakeholders) for quality and process performance are established and
used as criteria in managing processes. Quality and process performances
are understood in statistical terms and are managed throughout the life of
the processes. Quality and process performance measures are incorporated
into the organization's measurement repository to support fact-based
decision making in the future.

It is important to note that at maturity level 4, the performance of


processes is controlled using statistical and other quantitative techniques
and is quantitatively predictable. Processes are concerned with addressing
special causes of process variation and providing statistical predictability of
the results.


Maturity Level 5 - Optimizing


At level 5, an organization has achieved all the specific goals of the
process areas assigned to maturity levels 2, 3, 4, and 5 and the generic
goals assigned to maturity levels 2 and 3.

The key factor in this level is that processes are continually improved
based on a quantitative understanding of the common causes of variation
inherent in processes. The entire organization focuses on continually
improving process performance through both incremental and innovative
technological improvements. Objectives are continually revised to reflect
changing business objectives, and used as criteria in managing process
improvement. Optimizing processes that are agile and innovative depends
on the participation of an empowered workforce aligned with the objectives
and values of the organization. Improvement of the processes is inherently
part of everybody's role, resulting in a cycle of continual improvement. At
maturity level 5, processes are concerned with addressing common causes
of process variation and changing the process to improve process
performance to achieve the established quantitative process-improvement
objectives.

In the staged model the maturity levels should not be skipped: each
maturity level provides a necessary foundation for effective
implementation of processes at the next level. Higher level processes have
less chance of success without the discipline provided by lower levels.
The higher the level, the greater the quality and productivity, and the
lower the time-to-market and risk.

CMMI Continuous Representation - Capability Levels

The continuous representation has six capability levels (0-Incomplete,
1-Performed, 2-Managed, 3-Defined, 4-Quantitatively Managed and
5-Optimizing). Capability levels apply to individual process areas and are
measured by the achievement of the specific and generic goals that apply
to each process area.

Capability Level 0: Incomplete


An "incomplete process" is a process that is either not performed or only
partially performed. One or more of the specific goals of the process area
are not satisfied, and no generic goals exist for this level, since there
is no reason to institutionalize a partially performed process. This is
comparable to Maturity Level 1 in the staged representation.


Capability Level 1: Performed

A capability level 1 process is expected to perform all of the capability
level 1 specific and generic practices. Performance may not be stable and
may not meet specific objectives such as quality, cost, and schedule, but
useful work can be done. This is only a start, or baby step, in process
improvement.

Capability Level 2: Managed


A capability level 2 process is a "managed" process that is planned,
performed, monitored, and controlled for individual projects, groups, or
stand-alone processes to achieve a given purpose. Managing the process
achieves both the model objectives for the process and other objectives,
such as cost, schedule, and quality. This level implies that things are
actively managed in the organization.

CMMI - Continuous Representation Capability Levels



Capability Level 3: Defined


A capability level 3 process is characterized as a "defined" process. A
defined process is a managed (capability level 2) process that is tailored
from the organization's set of standard processes according to the
organization's tailoring guidelines, and contributes work products,
measures, and other process-improvement information to the organizational
process assets.

Capability Level 4: Quantitatively Managed


A capability level 4 process is characterized as a "quantitatively
managed" process. A quantitatively managed process is a defined
(capability level 3) process that is controlled using statistical and
other quantitative techniques. Quantitative objectives for quality and
process performance are established and used as criteria in managing the
process.

Capability Level 5: Optimizing


An optimizing process is a quantitatively managed process that is
improved, based on an understanding of the common causes of process
variation inherent in the process. It is important to note the verb used is
"optimizing" and not "optimized"; the focus is on continually improving
process performance through both incremental and innovative
improvements.

❖ Capability Level 4 focuses on establishing baselines, models, and
measurements for process performance.

❖ Capability Level 5 focuses on studying performance results across the
organization or entire enterprise, finding common causes of problems in
how the work is done and in the processes used, and fixing those problems
in the process.

Organization of Process Areas in Continuous Representation:

The optimizing level (Level 5) is not the destination of process
management. The destination is better value products for an optimum
price. The optimizing level is only a foundation for building an
ever-improving capability. Some improvements are incremental, while
others are revolutionary.



Category: Project Management
• Project Planning
• Project Monitoring and Control
• Supplier Agreement Management
• Integrated Project Management (IPPD)
• Integrated Supplier Management (SS)
• Integrated Teaming (IPPD)
• Risk Management
• Quantitative Project Management

Category: Support
• Configuration Management
• Measurement and Analysis
• Causal Analysis and Resolution
• Decision Analysis and Resolution
• Organizational Environment for Integration (IPPD)

Category: Engineering
• Requirements Management
• Requirements Development
• Technical Solution
• Product Integration
• Verification
• Validation

Category: Process Management
• Organizational Process Focus
• Organizational Process Definition
• Organizational Training
• Organizational Process Performance
• Organizational Innovation and Deployment

When ISO or SEI-CMM related processes were implemented in many IT
organizations, testing was one major weak area and needed a war-like
footing to make revolutionary changes, starting with the test plans and
going all the way to test defect capturing and root-cause analysis. All
changes need to be managed in a disciplined manner and everyone must be
involved in improvements.

To give a simple example for the discussions above, let us assume that a
task-force formed by the management was assigned the responsibility of
identifying priority areas to fix a series of customer complaints regarding
quality, delays and incomplete work deliverables. The task force identified
that the Requirements Management was one of the areas of concern and
high risks.

Requirements Management (REQM) is a Project Management process area at
Maturity Level 2.


Its purpose is to manage the requirements of the project's products and
product components and to ensure alignment between those requirements and
the project's plans and work products.

Specific Practices by Goal

❖ SG 1 Manage Requirements
✴ SP 1.1 Understand Requirements
✴ SP 1.2 Obtain Commitment to Requirements
✴ SP 1.3 Manage Requirements Changes
✴ SP 1.4 Maintain Bidirectional Traceability of Requirements
✴ SP 1.5 Ensure Alignment Between Project Work and Requirements

For the Requirements Management process area, a goal is to "Manage
Requirements". An example practice supporting the goal is "Maintain
bidirectional traceability of requirements". This implies that any change
in downstream life-cycle activities after requirements gathering, i.e.
during analysis, design, user-interface design, coding and testing, MUST
be traceable back to the requirements phase. Similarly, any change in the
requirements documents MUST be reflected in all the downstream SDLC
phases. In many projects customers communicate changes verbally or through
e-mails, and the changes are implemented in the code, but the related
documents and artifacts are not updated; traceability is lost in such
cases. One solution (a typical work product) is to build and maintain a
"requirements traceability matrix" or a "requirements tracking system".
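As an illustration only, a traceability matrix can be modelled as a simple forward/backward mapping between requirements and the artifacts that realize them. All class, method and identifier names below are hypothetical, not part of any CMMI work product definition:

```python
# Minimal sketch of a requirements traceability matrix: each requirement
# maps to the downstream artifacts (designs, code modules, test cases) that
# realize it, so a change on either side can be traced to the other.
class TraceabilityMatrix:
    def __init__(self):
        self.links = {}  # requirement id -> set of artifact ids

    def link(self, req_id, artifact_id):
        """Record that an artifact realizes a requirement."""
        self.links.setdefault(req_id, set()).add(artifact_id)

    def forward(self, req_id):
        """Artifacts that implement a requirement (forward traceability)."""
        return sorted(self.links.get(req_id, set()))

    def backward(self, artifact_id):
        """Requirements realized by an artifact (backward traceability)."""
        return sorted(r for r, arts in self.links.items() if artifact_id in arts)

rtm = TraceabilityMatrix()
rtm.link("REQ-1", "DES-4")
rtm.link("REQ-1", "TC-9")
rtm.link("REQ-2", "TC-9")
print(rtm.forward("REQ-1"))   # ['DES-4', 'TC-9']
print(rtm.backward("TC-9"))   # ['REQ-1', 'REQ-2']
```

A real tracking system would add change history and document links, but even this structure makes the bidirectional practice concrete: deleting a test case or editing a requirement immediately shows what else is affected.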

Another important word used across CMMI process areas (and many other
standards) is "institutionalization". It sounds like a magic wand for
spreading good practices across organizations. What it actually means is
the extent to which processes have taken root within an organization,
whether with one project or hundreds: how they are performed, how they are
managed, how they are defined, what is measured, how the processes are
controlled, and how continuous improvement happens. This is the most
difficult part of CMMI implementation, and project managers have to play a
big role and make the biggest impact. It is important to build and
reinforce a corporate culture that supports the methods, practices and
procedures, so that they become the ongoing way of doing business.
Organizations adopting the SEI-CMMI model need to be assessed for their
compliance to the CMMI framework. CMMI


Assessments are also known as CMMI Appraisals. Assessment firms follow the
Standard CMMI Appraisal Method for Process Improvement (SCAMPI) Method
Definition Document, which provides information on the CMMI assessment
methodology. In CMMI there is no "certification"; instead the SEI uses
"assessment". This is because the SEI does not impose defined principles
laid down by a model or standard, and allows implementers to implement the
specific practices of process areas in their own way. In common usage,
however, certification and assessment are treated as synonymous and used
interchangeably. The SEI does not provide a certificate on successful
completion of a CMMI assessment; the name of the organization is published
on the SEI website.

9.6 SIX SIGMA

The quote "In God we trust, all others bring data" is attributed to W.
Edwards Deming, the famous statistician and management scientist who
promoted the Shewhart "Plan-Do-Check-Act" cycle. The underlying idea is
that in problem solving, data is far more valuable than opinion. Everyone
has opinions and beliefs, but to really solve problems one needs to be
objective and not rule out possibilities without proof. At school, tally
charts, histograms and bar charts were taught for data representation.

Data is collected for several reasons. Protests against collecting toll
charges at toll gates on roads in India led to counting the vehicles using
an expressway or major arterial roads. One problem was that tolls were
being collected long past the milestone dates or recovered costs. The
other problem arose from the statistics presented to justify the
collection: there was no clear understanding and agreement on the process
of counting, the scope of what was counted, or how long the entire
exercise should take. How does one decide when to stop collecting toll?
What are the possible variations? This is where science and statistical
models come into play. Collecting and analyzing data to improve the
processes and then making "informed" decisions is the key.

Sigma is a Greek letter, represented by "σ", used in statistics to denote
the standard deviation from the mean value, an indicator of the degree of
variation in a process. Sigma measures how far a given process deviates
from perfection. The higher the sigma capability, the better the
performance. The


term "sigma" is used to designate the distribution or spread about the


mean (average) of any process or procedure.

Let us digress here to understand the basics of sigma in a statistical
sense. The "mean" is the average of a set of values: add up the values and
divide by the number of items. If one subtracts the mean from each value,
the result is called the deviation from the mean. Dividing the sum of the
squares of these deviations by the number of items gives the variance, a
measure of how data points differ from the mean. The standard deviation,
the square root of the variance, shows the variation in the data. The
following table shows the math test scores of two batches of five students
each.

In spite of having different scores, both batches have the same mean, i.e.
76. The standard deviation shows the variation in the scores. With a
standard deviation of 14.53 for the first batch and 19.60 for the second
batch, the conclusion one can make is that the scores of the second batch
are more spread out than those of the first batch.

Batch A score | Squared deviation     Batch B score | Squared deviation
              | from mean                           | from mean
92            | 256                   92            | 256
88            | 144                   92            | 256
80            | 16                    92            | 256
68            | 64                    52            | 576
52            | 576                   52            | 576
Mean = 76     | Total = 1056          Mean = 76     | Total = 1920

Batch A: Variance = 1056/5 = 211.2; Standard Deviation = √211.2 = 14.53
Batch B: Variance = 1920/5 = 384; Standard Deviation = √384 = 19.60
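The arithmetic above can be checked with a few lines of Python. This is a sketch for verification only; the helper name is ours:

```python
# Recompute mean, population variance and standard deviation for each batch.
def mean_var_sd(scores):
    m = sum(scores) / len(scores)
    # Population variance: average of squared deviations from the mean.
    var = sum((x - m) ** 2 for x in scores) / len(scores)
    return m, var, var ** 0.5

batch_a = [92, 88, 80, 68, 52]
batch_b = [92, 92, 92, 52, 52]

for name, scores in (("A", batch_a), ("B", batch_b)):
    m, var, sd = mean_var_sd(scores)
    print(f"Batch {name}: mean={m}, variance={var}, sd={sd:.2f}")
```

Running this reproduces the table: both batches have mean 76, with variance 211.2 (sd 14.53) for batch A and variance 384 (sd 19.60) for batch B.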

In probability theory, the normal (or Gaussian) distribution is a very
commonly occurring continuous probability distribution - a function that
gives the probability that an observation in some context will fall
between any two real numbers. For example, it can describe the
distribution of grades on a test administered to many people. Normal
distributions, or the bell curve (as shown in the figure), with mean (µ=0)
and standard deviation, are extremely important in statistics and are
often used.

'Bell curve' refers to the shape that is created when a graph is plotted
using the data points of an item that meets the criteria of 'normal
distribution'. The mean identifies the position of the center, and the
standard deviation determines the height and width of the bell. The center
contains the greatest number of values and is therefore the highest point
on the arc of the line. The important thing to note about a normal
distribution is that the curve is concentrated in the center and decreases
on either side: the data has less of a tendency to produce unusually
extreme values, called outliers, than other distributions. The bell curve
also signifies that the data is symmetrical, so there is a known
probability that an outcome will lie within a range to the left or right
of the center, and the amount of deviation contained in the data can be
measured.

The interesting thing about the normal distribution is that 68% of all
measurements fall within one sigma on either side of the mean. If one
takes all measurements that fall within 3-sigma of the mean - that is,
between (mean - 3 sigma) and (mean + 3 sigma) - they cover 99.73% of all
outcomes.

In practical terms - if one measures the shoe sizes of the entire population,
the plotted measurements will look like the Normal Distribution, with a
mean (M) and a sigma (S). Almost 100% of all people will have shoe sizes
from M-3S to M+3S, so if one makes shoes one can satisfy 99.74% of all
their customers with just this range of shoe sizes.
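These coverage figures follow from the normal cumulative distribution: the fraction of a normal population within k standard deviations of the mean is erf(k/√2). A quick check (Python, not from the text):

```python
# Fraction of a normal population within k standard deviations of the mean:
#   P(|X - mean| <= k * sigma) = erf(k / sqrt(2))
import math

for k in (1, 2, 3):
    coverage = math.erf(k / math.sqrt(2))
    print(f"within ±{k} sigma: {coverage:.4%}")
```

The ±1 sigma figure evaluates to about 68.27%, and the ±3 sigma figure to about 99.73%, in line with the roughly 99.74% quoted above.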


Six Sigma is a set of techniques and tools for process improvement. It is a
methodology that provides businesses with the tools to improve the
capability of their business processes. In the late 1970s Motorola
experimented with problem solving through statistical analysis, and in 1987
it officially launched its Six Sigma program. In 1995, General Electric's CEO
Jack Welch decided to implement Six Sigma in GE, and by 1998 GE claimed
that Six Sigma had generated over three-quarters of a billion dollars of cost
savings. All vendors, associates, partners and GE subsidiaries had to swear
by Six Sigma if they did any business with GE. Within a little over a decade,
Six Sigma became a brand in large manufacturing corporations and outside
the manufacturing sector too.

The maturity of a manufacturing process can be described by a sigma
rating indicating its yield, or the percentage of defect-free products it
creates. A six sigma process is one in which 99.99966% of the products
manufactured are statistically expected to be free of defects (3.4 defective
parts/million). Motorola set a goal of "six sigma" for all of its manufacturing
operations, and this goal became a byword for the management and
engineering practices used to achieve it. In 2005 Motorola attributed over
US$17 billion in savings to Six Sigma. Other companies which also adopted
Six Sigma methodologies early on and continue to practice them today
include Bank of America, Caterpillar, Honeywell International, Raytheon,
Merrill Lynch and General Electric. By the late 1990s, about two-thirds of
the Fortune 500 organizations had begun Six Sigma initiatives with the aim
of reducing costs and improving quality.

The Six Sigma premise is:

❖ Everything is the result of some process

❖ Variation exists in everything

❖ The process introduces product variation

❖ Variation in the product is proportional to variation in the process

❖ Sources of variation can be identified, quantified and mitigated by controls


Six Sigma is also not a standard; neither is it a certification, and it cannot
be treated like just another metric. A Chinese proverb says, "If you don't
know where you are going, any road will do." Watts Humphrey altered this
slightly: "If you don't know where you are, a map won't help." One needs
to understand the deep "quality philosophy" behind Six Sigma and its way
of improving performance by knowing where you are and where you could
be.

The objective of Six Sigma quality is to reduce process output variation on
a long-term basis. Sigma levels can be expressed in defects per million
opportunities (DPMO) as given in the table below.

Sigma Level      DPMO       Defects (%)    Yield (%)
     1         6,91,462      69%            31%
     2         3,08,538      31%            69%
     3           66,807      6.7%           93.3%
     4            6,210      0.62%          99.38%
     5              233      0.023%         99.977%
     6              3.4      0.00034%       99.99966%
     7            0.019      0.0000019%     99.999998%

Defects per Million Opportunities (DPMO) = (Total Defects / Total Opportunities) × 1,000,000

Defects (%) = (Total Defects / Total Opportunities) × 100

Yield (%) = 100 - Defects (%)
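The formulas above translate directly into code. This sketch (the function names are my own) reproduces the six sigma row of the table:

```python
# DPMO, defect percentage and yield, per the formulas above.
def dpmo(defects, opportunities):
    return defects / opportunities * 1_000_000

def defects_pct(defects, opportunities):
    return defects / opportunities * 100

def yield_pct(defects, opportunities):
    return 100 - defects_pct(defects, opportunities)

# Example: 34 defects over 10 million opportunities -> the six sigma row
print(f"DPMO = {dpmo(34, 10_000_000):.1f}")            # DPMO = 3.4
print(f"Defects = {defects_pct(34, 10_000_000):.5f}%") # Defects = 0.00034%
print(f"Yield = {yield_pct(34, 10_000_000):.5f}%")     # Yield = 99.99966%
```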

The Six Sigma philosophy is


❖ Know What's Important to the Customer (CTQ)
❖ Reduce Defects
❖ Center Around Target (Mean)
❖ Reduce Variation (Standard Deviation)

Variation means that a process (X) does not produce the same result (Y)
every time. Some variation will exist in all processes. Variation directly
affects customer experiences.

Why do companies adopt Six Sigma? The answer is simple - to make
money. Poor quality and a failure to focus on customers ultimately cost
organizations a lot of money. The fundamental goal of the Six Sigma
methodology is to drive a measurement-based approach that focuses on
process improvement and variation reduction, with the goal of improving
financial results and meeting customer needs. The emphasis is on
"measurement-based."

Companies should only measure what they value: quality, customer
satisfaction, and productivity. An increase in performance and a decrease
in process variation lead to defect reduction and a vast improvement in
profits, employee morale and product quality.

Some of the benefits of Six Sigma are


❖ Generates sustained success
❖ Sets performance goal for everyone
❖ Enhances value for customers
❖ Accelerates rate of improvement
❖ Promotes learning across boundaries
❖ Executes strategic change

For a process, the sigma capability is a metric that indicates how well that
process is performing. The higher the sigma capability the better it is.
Sigma capability measures the capability of the process to produce defect-
free outputs. A defect is anything that results in customer dissatisfaction.
99% Good is 3.8 Sigma while 99.99966% Good is 6 Sigma. What does this
mean in practical terms?


Practical Meaning of Six Sigma

3.8 Sigma (99% Good)                               6 Sigma (99.99966% Good)

20,000 lost articles of mail per hour              Seven articles lost per hour

Unsafe drinking water for 15 minutes each day      One unsafe minute every seven months

5,000 incorrect surgical operations per week       1.7 incorrect operations per week

Two short or long landings at most major           One short or long landing every five
airports each day                                  years

200,000 wrong drug prescriptions each year         68 wrong drug prescriptions per year

No electricity for almost seven hours each month   One hour without electricity every 34 years

The table is quite self-explanatory to indicate what can be achieved.

Six Sigma projects follow two project methodologies inspired by Deming's
Plan-Do-Check-Act cycle: DMAIC and DMADV.

❖ DMAIC (Define, Measure, Analyze, Improve, Control) is used for projects
aimed at improving an existing business process.

❖ DMADV (Define, Measure, Analyze, Design, Verify) is used for projects
aimed at creating new product or process designs.

The DMAIC project methodology has five phases as shown in the figure
below.



Define
❖ It is the initial stage of starting the project and the most significant step.

❖ Define the system, the voice of the customer and their requirements, and
the project goals, specifically.

❖ Identify projects that are measurable. Projects are defined, including the
demands of the customer and the process.

Measure
❖ Measure key aspects of the current process and collect relevant data.

❖ Collect data that can precisely pinpoint the areas causing problems.

❖ Defects found must be well defined, and possible and potential causes
for the problems must be identified.


Analyze
❖ Projects are statistically analyzed and the problems are documented.

❖ The gap between the target and actual states is clearly defined in
statistical terms, e.g. mean, average, etc.

❖ A comprehensive list of the potential causes of the problems is created.

❖ Statistical analysis is carried out to narrow the potential causes down to
a few causes.

❖ Analyze the data to verify cause-and-effect relationships between
problems and causes.

❖ Seek out the root cause of the defect under investigation.

Improve
❖ Improvements for the potential causes identified in the 'Analyze' step
are carried out in this step.

❖ Improve or optimize the current process based upon data analysis.

❖ Solutions to all the potential problems are found.

❖ A trial run is carried out for a planned period of time to ensure that the
revisions and improvements implemented in the process result in
achieving the targeted values.

❖ Steps are repeated if necessary.

Control
❖ Proper control and maintenance of the improved states are established
in this step.

❖ Control the future-state process to ensure that any deviations from the
target are corrected before they result in defects.

❖ The results and accomplishments of all the improvement activities are
documented.

❖ The improved process is continuously monitored to check that it remains
well maintained.

The DMADV basic methodology consists of the following five steps:

❖ Define the goals of the design activity that are consistent with customer
demands and enterprise strategy.

❖ Measure and identify CTQs (critical-to-quality characteristics), product
capabilities, production process capability, and risk assessments.

❖ Analyze to develop and design alternatives, create high-level design and


evaluate design capability to select the best design.

❖ Design details, optimize the design, and plan for design verification. This
phase may require simulations.

❖ Verify the design, set up pilot runs, implement production process and
handover to process owners.

Six Sigma professionals exist at every level - each with a different role to
play. At the project level, there are black belts, master black belts, green
belts, yellow belts and white belts. These people conduct projects and
implement improvements.

❖ Master Black Belt: Develops key metrics and the strategic direction;
acts as an organization's Six Sigma technologist and internal consultant.
The master black belt will make sure everything continues running
smoothly and all the training the company learned stays in the company.
They train and coach Black Belts and Green Belts. They are there to
execute the practices throughout the company, not just within the
project.

❖ Black Belt: They work full time and lead problem-solving projects. They
train and coach project teams. Once the project is completed and
everything has been implemented, they will return to their regular
duties. The projects they head up typically are expected to save the
company at least $100,000.

❖ Green Belt: They assist with data collection and analysis for Black Belt
projects. People with this certification are often referred to as worker
bees because they do the majority of the work during projects. They also
conduct the experiments and tests throughout the project. The main
goals of a green belt are to ensure the success of the training techniques
and lead smaller improvement projects. People with green belts must
have a strong understanding of what the Six Sigma training is all about.

❖ Yellow Belt: They participate as project team members. They review


process improvements to support the project.

❖ White Belt: Can work on local problem-solving teams that support


overall projects, but may not be part of a Six Sigma project team.
Understands basic Six Sigma concepts from an awareness perspective.

Every project needs organizational support. Six Sigma executives and


champions set the direction for selecting and deploying projects.

❖ Champions: Translate the company's vision, mission, goals and metrics


to create an organizational deployment plan and identify individual
projects. Identify resources and remove roadblocks.

❖ Executives: Provide overall alignment by establishing the strategic focus


of the Six Sigma program within the context of the organization's culture
and vision.

There are many success stories of implementing Six Sigma and benefiting
from it, with GE at the forefront. One example involves United Technologies
Automotive (UTA), which molds plastic into casings used for car side-view
mirrors. Environmental laws prevented UTA from making more casings
because output was limited by the pollution caused by painting. Using Six
Sigma, GE found a way to add a carbon-based conductor to the plastic,
causing far more paint to stick and cutting UTA's pollution by 35%. Now GE
sells more plastic to UTA.

A simpler case in India was when a leading IT company found during the
Y2K and e-Biz boom that their readiness to serve was hampered severely
by the time and effort spent on recruitment, hiring and training of fresh
talent. The stages in recruitment after the placement of advertisements for
fresh talent at that time were:

❖ Receiving physical applications and responses from candidates (30 days
lead time)

❖ Physical scanning of resumes (1st Level filtering of eligible candidates)

❖ Conduct aptitude tests (60% elimination)

❖ Conduct Group Discussions (60% elimination)

❖ Conduct technical interviews (40% elimination)

❖ Conduct personal interviews (60% elimination)

The final hit rates hovered around 15%; for selecting 100 successful
candidates one needed to select at least 695 candidates at the aptitude
test stage. The supply-demand ratio was such that 8000+ applications
would come in for 100 positions. Each stage required a panel of HR staff,
technical staff, interviewing staff and Senior Managers for the final
selection. HR (recruitment) had a challenging task of coordinating the
panel teams who had to juggle the schedules (and pressures) of project
responsibilities, weekends and interview schedules to finally get 100
candidates on board.

The company put a small task force to study the problem and applied Six
Sigma techniques to find a solution and reduce this cycle time. The task
force analyzed the steps in the recruitment process, the activities and
gathered data on the efforts and the time taken to recruit 100 candidates
in different business groups and locations of the organization. It was found
that the average effort spent was 13 person days per selected candidate.
Without getting into the nitty-gritty of numbers, in summary the task force
found two areas of risks and cascading effect of delays. The first one was
the physical resume scanning which could take as much as 5 days per
candidate and the final interview stage which took 3+ days per candidate.
The second bottleneck was the availability of the appropriate interview
panel (technology, role, seniority, etc.) during weekends or during office
hours when there is peak load of work.

Seniors were always found to be busy and candidates had to wait out
sometimes for hours and cancellation of interviews was not uncommon.
Though sounding too simple today the solutions were


a. The organization first implemented an online registration system which
allowed candidates to fill in their details directly into the system, and the
first-level filtering time was reduced significantly. Once sites like
naukri.com, monster.com, SimplyHired.com, freshersworld.com,
bixee.com, etc. came along, first-level filtering of candidates by degree,
college, qualification, marks, skill sets, etc. became far simpler.

b. HR arranged a backup member for each interview panel member. Since


mobile technology was already in place, it was easy to call up the
standby person and proceed with the interviews with minimum hiccups.

The net result was that the recruitment time reduced to 5.6 days per
selected candidate after the Six Sigma based exercise.

The Dabbawala Story

The dabbawalas of Mumbai are an amazing story (though not in the IT
space) and a great case study for management courses (Source: http://
www.mydabbawala.com), with their unmatched "daily-lunch-box-courier"
system involving barely any technology.

A dabbawala in Mumbai is part of a delivery system that collects hot food
in lunch boxes (more than 175,000 of them) from the residences of
workers or from meal suppliers in the late morning. The dabbawalas pick
up the lunchboxes and move fast using a combination of bicycles, trains
and Shank's pony (i.e. their own two legs). In a 3-hour period they wade
through 25 km of public transportation involving multiple transfer points to
deliver the boxes to their customers. Delivery to the workplace happens
before the workers' lunch breaks, and the empty boxes are returned to the
source the same afternoon.

BBC has produced a documentary on dabbawalas and Prince Charles


visited them in 2003, during his visit to India; it is said that he had to fit in
with their schedule, since their timing was too precise to allow any
flexibility. Owing to the tremendous publicity, some of the dabbawalas were
invited to give guest lectures in some of the top business schools of India,
which is very rare.


Most of the carriers are illiterate; the dabbas have distinguishing marks on
them for identification, sorting, picking, transferring and delivery. These
include abbreviations for collection points, color code for starting station,
number for destination station and markings for the handling dabbawala at
destination, building and floor. Each dabbawala has a designated role,
takes his collected boxes to a designated sorting place, where the boxes
are sorted into groups or bundles. The grouped boxes are put in the
designated coaches of trains, with markings to identify the destination of
the box. The boxes may traverse multiple rail routes before reaching their
destination addresses. The process is identical in the return direction.

In 1998, Forbes Global magazine conducted a quality assurance study on


their operations and gave it an accuracy rating of 99.999999, more than
Six Sigma. The Dabbawalas made one error in six million transactions!
That put them on the list of Six Sigma rated companies, along with
multinationals like Motorola and GE. Only one mistake in every 6,000,000
deliveries speaks volumes of the process compliance and the commitment
from each person in the chain. The success of the dabbawala trade has
involved no advanced technology except for trains and SMS for booking.

The investment by each dabbawala is a minimum of capital in kind: two
bicycles, a wooden crate for the boxes (called tiffins), white cotton kurta-
pyjamas, and the white Gandhi cap (topi). Today they have their websites
and delivery requests can be made through SMS. An online poll on their
website indicates that customer feedback is given the pride of place. The
success of the system depends on teamwork and time management and
there is no documentation at all. Management layer is also very thin.
Personal trust at both ends of the delivery chain bonds the worker and the
dabbawalas which makes it a great success.

The GE success story can be read in the article Why GE's Six Sigma
Success Story Is Still Relevant, March 31, 2011 by Mark Micheletti.

In recent years, some practitioners have combined Six Sigma ideas with
lean manufacturing to create a methodology named Lean Six Sigma. Lean
was developed by Toyota and is mainly focused on process flow and
eliminating waste issues. Six Sigma focuses on variation and design. These
two methodologies act as complementary disciplines aimed at promoting
"business and operational excellence". Companies such as GE, Verizon,
GENPACT, and IBM use Lean Six Sigma.

Going further, it was felt that even though one could achieve a good
degree of defect-free products using lowest-cost production that is world
class and earns profit, one ingredient that was missing was the concept of
"value". Hence a new technique, "Third Generation Six Sigma", came into
force. It aims to show companies how to deliver products or services that,
in the eyes of customers, have real value. Korean steel maker Posco (the
third-largest steel maker) and electronics maker Samsung are two
examples, while the Government of India has bought into the idea and has
begun promoting it in both private and government-owned industries.
Gen III addresses issues and shortcomings in past Six Sigma programmes.

The International Organization for Standardization (ISO) has published
ISO 13053:2011, defining the Six Sigma process.

Every good initiative comes with its share of criticism. Six Sigma also faces
some of them.

❖ Lack of originality: Noted quality expert Joseph M. Juran described Six
Sigma as "a basic version of quality improvement". He claimed that Six
Sigma merely adopted more flamboyant terms, like belts with different
colors, and that most of its concepts already existed.

❖ Role of consultants: The use of "Black Belts" has nurtured an industry of
training and certification. Six Sigma is oversold by tall-claiming
consulting firms, many of which have very limited understanding or
expertise.

❖ Over-reliance on (statistical) tools: One criticism concerns the "rigid"
nature of Six Sigma, with its over-reliance on methods and tools. More
attention is paid to reducing variation and searching for significant
factors, and less attention is paid to developing robustness in the first
place, which could eliminate the need for reducing variation.

❖ Stifling creativity in research environments: Some argue that Six Sigma
can stifle creativity in research environments: its emphasis on excessive
metrics, steps and measurements and its intense focus on reducing
variability water down the discovery process.

❖ Lack of systematic documentation: Unlike GE and Motorola, most cases
are not documented in a systematic or academic manner. In fact, the
majority are sketchy case studies illustrated on websites. They do not
provide much information on the specific Six Sigma methods that were
used to resolve the problems.

9.7 METRICS

Where do all these standards and their implementation lead to? How is it
relevant to Software Engineering?

The assumption is that the entire purpose of having the tools, methods and
processes was to produce "quality" products or deliver "quality" services to
meet and/or exceed customer expectations.

Reiterating what has been covered so far, high-quality software:

❖ Must be useful (to the original customer)
❖ Must be portable
❖ Must be maintainable
❖ Must be reliable
❖ Must have integrity (produce correct results, with a high degree of
accuracy)
❖ Must be efficient
❖ Must be consistent
❖ Must be easy to learn and easy to use

When a company promises and delivers on quality, there is a good chance


that customer satisfaction and retention will be high. But paving the road
to success depends on companies being well-informed about their own
business. They achieve that knowledge by developing and utilizing effective
metrics.

Suppose one collects metrics from everyday life. What would one measure?


Working and living
❖ Cost of utilities for the month
❖ Cost of groceries for the month
❖ Amount of monthly rent
❖ Time spent at work each Saturday for the past month
❖ Time spent mowing the lawn for the past two weekends

College experience
❖ Grades received in class last semester
❖ Number of classes taken each semester
❖ Amount of time spent in class this week
❖ Amount of time spent on studying and homework this week
❖ Number of hours of sleep last night

Travel
❖ Time to drive from home to the airport
❖ Number of miles traveled today
❖ Cost of meals and lodging for yesterday

There are several terms used when one talks of metrics i.e. Measures,
Metrics, and Indicators. These are terms often used interchangeably - but
have subtle differences.

❖ Measure: Provides a quantitative indication of the extent, amount,


dimension, capacity, or size of some attribute of a product or process
❖ Measurement: The act of determining a measure
❖ Metric (IEEE): A quantitative measure of the degree to which a system,
component, or process possesses a given attribute
❖ Indicator: A metric or combination of metrics that provides insight into
the software process, a software project, or the product itself

9.7.1 Measurement

Measurement is a process by which numbers or symbols are assigned to
attributes in the real world to describe them according to clearly defined
rules. Measurement helps people to understand the world. Without
measurement one cannot manage anything. All other engineering
disciplines have a high level of rigorous measurement; the same rigor,
however, is not followed in software engineering.


Due to the lack of measurement in most software projects:

❖ Measurable targets such as reliability, maintainability and
user-friendliness for software products are not set

❖ Estimating component costs for software, for example design costs and
coding costs, is rarely done

❖ The ability to predict or quantify the quality of software products is low

❖ It is difficult to judge the effectiveness and efficiency of new
development techniques before trying them

Measurement is essential for understanding, controlling and improving the
processes and products of a project.

The overall theme of metrics for any IT project covers:

❖ Effort: productivity, utilization, throughput - Practitioners should


persuade process owners to log this information. Even if the data is not
100 percent accurate, it adds great value in the calculation of resource
capacity utilization and productivity.

❖ Quality: defect, defect prevention, training - Defect-related information


is more easily available, but most defects are not logged. More mature
organizations record metrics related to training and penetration of the
process improvement program.

❖ Budget: cost variance, return on investment - Cost variation, along with


productivity, is a metric that the leadership at most organizations
captures and monitors.

❖ Schedule: service-level agreement (SLA), slippage - Schedule slippage


can be seen from two aspects: go-live slippage, which is visible and
impactful to the customer, and internal slippage, which is visible to the
project team but not to the customer.


9.7.2 Why Measure?

Tom DeMarco puts it thus: "You cannot control what you cannot measure.
You can neither predict nor control what you cannot measure."

Metrics are used to drive improvements and help businesses focus their
people and resources on what's important. The range of metrics that
companies can employ vary from those that are mandatory - for legal,
safety or contractual purposes - to those that track increases in efficiency,
reductions in complaints, greater profits and better savings. Metrics
indicate the priorities of the company and provide a window on
performance, ethos and ambition.

Metrics are useful to:

❖ Assess the status of an ongoing project
❖ Track potential risks
❖ Uncover problem areas before they go "critical"
❖ Adjust work flow or tasks
❖ Evaluate the project team's ability to control the quality of software work
products
❖ Understand what is happening during development and maintenance
❖ Control the projects
❖ Improve the processes and products of software projects

Every stakeholder benefits by using metrics.

❖ Software developers get a sense of
• Whether the requirements are consistent and complete
• Whether the design is of high quality
• Whether the code is ready to be tested
❖ Project managers get a measure of
• Attributes of the process
• Whether the product will be ready within the time schedule
• Whether the budget will be exceeded
❖ Customers can measure
• Whether the final product meets the requirements
• Whether the product is of sufficient quality

To control any activity, one must have knowledge about the activity inside
the project. Each activity has some specific area of measure to control. If
one knows what is to be controlled, i.e. if one knows the measure, then
one can control any activity of that project. Measurement of a particular
project or program leads to effective control of any problem.

9.7.3 Software Metrics

A software metric is a quantitative measure of the degree to which a
software system or process possesses some property. The goal is to obtain
objective, reproducible and quantifiable measurements, which may have
numerous valuable applications in schedule and budget planning, cost
estimation, quality assurance testing, software debugging, software
performance optimization, and optimal personnel task assignments.

Software metrics can be classified into three categories:


❖ Product metrics
❖ Process metrics
❖ Project metrics

Product Metrics

Size Metric - Lines of Code

An internal product attribute describes a software product in a way that
depends on the product itself. The size of a software system is the most
obvious and useful such attribute.

Why do we need to measure size? Assume that software A has 120 defects
while software B has 1 defect. How does one compare A and B? Suppose,
additionally, that software A has 10,000 lines of code and software B has
10 lines of code; what can be said about A and B now? From the given
facts one can say that Defect Density (A) < Defect Density (B). This
becomes clear ONLY when the size of the code is examined, and not just
the number of defects. Hence, it is important to measure size.
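A defect-density calculation makes the A-versus-B comparison concrete (a sketch; the numbers are the ones from the paragraph above):

```python
# Defect density normalizes defect counts by size (here, per KLOC).
def defect_density(defects, loc):
    return defects / (loc / 1000)   # defects per thousand lines of code

density_a = defect_density(120, 10_000)  # software A: 120 defects, 10,000 LOC
density_b = defect_density(1, 10)        # software B: 1 defect, 10 LOC

print(density_a)                 # 12.0 defects per KLOC
print(density_b)                 # 100.0 defects per KLOC
assert density_a < density_b     # A is the less defect-dense product
```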

Code has traditionally been produced using procedural languages, and
traditional code is measured based on the number of lines of code, i.e.
LOC. The Hewlett-Packard definition of LOC is:

Total Length of Code = NCLOC + CLOC

where CLOC is the number of commented lines of code and NCLOC is the
number of non-commented lines.


Measuring Lines of Code (LOC) is not as simple a metric as it appears.
There is ambiguity in the counting; the meaning is not the same for
Assembler as for high-level languages. There are questions like "What do
we count? Blank lines, comments, data definitions, only executable lines?"
The answers change any metric derived from LOC. There must be
consistency across the project or organization in deciding what to count.
Also, if benchmarks are used to compare against other external
organizations, then the basis for arriving at LOC must be clearly
understood. There are also problems for productivity studies - the amount
of LOC is negatively correlated with design efficiency.

Typical Size-Oriented Metrics


❖ Errors per KLOC (thousand lines of code)
❖ Defects per KLOC
❖ $ per LOC
❖ Pages of documentation per KLOC
❖ Errors per person-month
❖ Errors per review hour
❖ LOC per person-month
❖ Density of comments = CLOC/LOC
❖ $ per page of documentation
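As an illustration of the NCLOC/CLOC split and the comment-density metric above, here is a minimal counter (the counting convention, like any LOC convention, is a choice; this sketch skips blank lines and counts comment-only lines as CLOC):

```python
# Minimal LOC counter for Python-style source: splits lines into
# non-commented (NCLOC) and comment-only (CLOC) lines, skipping blanks.
def count_loc(source):
    ncloc = cloc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            continue              # blank line: excluded by this convention
        if stripped.startswith("#"):
            cloc += 1             # comment-only line
        else:
            ncloc += 1            # code line (trailing comments count as code here)
    return ncloc, cloc

sample = "x = 1\n# set up done\n\ny = x + 1  # increment\n"
ncloc, cloc = count_loc(sample)
print(ncloc, cloc)                                        # 2 1
print(f"comment density = {cloc / (ncloc + cloc):.2f}")   # comment density = 0.33
```

Changing any of these conventions (for example, counting blank lines, or treating a trailing comment as CLOC) changes every metric built on LOC, which is exactly the consistency problem described above.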

Size Metric - Function Points

Function points were defined in 1979 in Measuring Application
Development Productivity by Allan Albrecht at IBM. The functional user
requirements of the software are identified and each one is categorized
into one of five types: outputs, inquiries, inputs, internal files, and external
interfaces. Some software engineers prefer the functionality of a product
over lines of code, as it gives a better understanding of the product size.
Code is available for measurement ONLY after development, not in the
early phases; hence it cannot be used for predicting cost or productivity
early on.

A function point is a unit of measurement to express the amount of
business functionality that an information system provides to a user.
Function points measure software size. The IFPUG counting practices
committee (http://www.ifpug.org) is the de facto standard for counting
methods.


Once the function is identified and categorized into a type, it is then


assessed for complexity and assigned a number of function points. Each of
these functional user requirements maps to an end-user business function,
such as a data entry for an Input or a user query for an Inquiry.

Using historical data, function points can be used to

❖ Estimate the cost or effort for design, coding & testing software

❖ Predict the number of errors that will be encountered during testing

❖ Forecast the number of components and/or the number of projected


source code lines in the implemented system

Computing Function Points


FP is a weighted total of five major components that form an application.

Number of external inputs: Each external input originates from a user or


is transmitted from another application. They are often used to update
internal logical files. They are not inquiries (counted under another
category)

Number of external outputs: Each external output is derived within the application and provides information to the user. This refers to reports, screens, error messages, etc.

Number of external inquiries: An external inquiry is defined as an online input that results in the generation of some immediate software response. The response is in the form of an online output.

Number of internal logical files: Each internal logical file is a logical grouping of data that resides within the application's boundary.

Number of external interface files: Each external interface file is a logical grouping of data that resides external to the application but provides data that may be of use to the application.

Once the basic Function Points are calculated, one assigns a weighting factor (complexity value) to each count based on criteria established by the organization. The overall characteristics of the system must be assessed and factored in to get the total number of Adjusted Function Points. This is done by examining 14 general system characteristics of the system, such as the transaction rate, performance, and installation ease. Each characteristic is evaluated as to its degree of influence on the system. There are 14 value adjustment factors, each ranging in value from 0 (not important) to 5 (absolutely essential). The Total Degree of Influence is used in a formula to give the Adjusted Function Point Count, commonly called the Function Point Count.
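To make the arithmetic concrete, the counting scheme can be sketched in a few lines of Python. The weights and sample counts below are illustrative assumptions (the commonly quoted IFPUG average-complexity weights); an actual IFPUG count first classifies every function as low, average or high before weighting.

```python
# Sketch of Function Point computation. The weights are the commonly
# quoted IFPUG average-complexity weights; a real count would rate
# each function individually as low, average or high.
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_fp(counts):
    """counts maps each of the five component types to how many were found."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

def adjusted_fp(counts, influence_ratings):
    """influence_ratings: the 14 general system characteristics,
    each rated 0 (not important) to 5 (absolutely essential)."""
    assert len(influence_ratings) == 14
    tdi = sum(influence_ratings)        # Total Degree of Influence
    vaf = 0.65 + 0.01 * tdi             # Value Adjustment Factor
    return unadjusted_fp(counts) * vaf

# Invented example: a small application.
counts = {
    "external_inputs": 10,
    "external_outputs": 7,
    "external_inquiries": 5,
    "internal_logical_files": 4,
    "external_interface_files": 2,
}
print(unadjusted_fp(counts))                    # 10*4 + 7*5 + 5*4 + 4*10 + 2*7 = 149
print(round(adjusted_fp(counts, [3] * 14), 2))  # 149 * (0.65 + 0.42) = 159.43
```

With all 14 characteristics rated "average" (3), the Value Adjustment Factor is 1.07, so the adjusted count comes out 7% above the unadjusted one.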

Typical Function-Oriented Metrics


❖ Errors per FP
❖ Defects per FP
❖ $ per FP
❖ Pages of documentation per FP
❖ FP per person-month

Although FP is usually a more realistic metric than LOC, calculating FPs requires training, and sometimes LOC is a good enough metric.

Process Metrics
Most students are puzzled when one talks of measuring processes. How does one measure "testing", "coding", "requirement gathering" or "estimating"? How does one improve such processes? The only rational way to improve any process is to measure specific attributes of the process, develop a set of meaningful metrics based on these attributes and use the metrics to provide indicators that will lead to a strategy for improvement.

Process metrics gathered from projects over a long period of time are used
for making strategic decisions. The intent is to provide a set of process
indicators that lead to long-term software process improvements.

The effectiveness of a process can be measured by deriving a set of metrics based on outcomes of the process. A Japanese firm, for example, measured the effectiveness of a document review process by specifying a minimum number of defects to be found for every page reviewed. Typical outcome-based process metrics include:

❖ Errors uncovered before release of the software
❖ Percentage of total errors uncovered in the requirements, design and coding phases
❖ Percentage of total errors uncovered in the testing phase
❖ Defects discovered after delivery to the customer
❖ Percentage of effort spent on each SDLC phase
❖ Effort/time per software engineering task
❖ Errors uncovered per review hour
❖ Number of changes and their characteristics
❖ Propagation of errors from one process activity to another
❖ The number of components produced and their degree of reusability

Defect removal efficiency is one of the important process metrics to measure and control.

DRE = E / (E + D), where
E = number of defects found before delivery of the software to the end-user
D = number of defects found after delivery

Ideally, DRE should be 1. The lower the efficiency, the higher the risk.
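A minimal sketch of the calculation, with invented defect counts:

```python
def defect_removal_efficiency(found_before_delivery, found_after_delivery):
    """DRE = E / (E + D): the share of all known defects caught in-house."""
    e, d = found_before_delivery, found_after_delivery
    return e / (e + d)

# Reviews and testing found 95 defects; users reported 5 more after release:
print(defect_removal_efficiency(95, 5))   # 0.95
```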

Project Metrics
Management of the software development process requires the ability to identify measures that characterize the underlying parameters, so as to control and aid continuous improvement of the project. Organizations research metrics programs inside and outside the company and often end up with a huge set of proposed metrics. In the initial stages, any data collection for metrics management is an uphill task. One must start with a small set of easy metrics that places little additional burden on the project teams, and then incrementally add new metrics or improve the data being gathered.

If one harks back to one's childhood, many would be guilty of "swallowing toothpaste" or "tooth powder" while brushing one's teeth. Apart from forcing the child to comply with the process of brushing the teeth, parents have to admonish the child to first brush and not waste the paste. The taste of the tooth powder was too tempting to resist. Every child outgrows this tendency over time. This was explained beautifully by a colleague:



It starts with building a HABIT


From HABIT - Remove H, A BIT remains
From "A BIT" - Remove A, BIT remains
From "BIT" - Remove B, IT remains.

This applies to metrics compliance programs as well. Initially, filling a form for test cases looks tedious and arduous. Soon only IT remains: the HABIT is formed. Fresh joiners to an organization adapt to the process easily if the HABIT is seen as a "culture" thing in the organization.

Any project can measure:

❖ Inputs: measures of the resources (e.g., people, tools) required to do the work

❖ Outputs: measures of the deliverables or work products created during the software engineering process

❖ Results: measures that indicate the effectiveness of the deliverables

Project metrics enable a software project manager to:

❖ Assess the status of an ongoing project
❖ Track potential risks
❖ Uncover problem areas before their status becomes critical
❖ Modify the technical approach
❖ Adjust project workflow or tasks
❖ Evaluate the project team's ability to control the quality of software work products
❖ Make tactical decisions
❖ Minimize the development schedule by making the necessary adjustments
❖ Assess product quality on an ongoing basis

Project Metrics can include (not exhaustive)

❖ Schedule Variance: The difference, in calendar days, between the scheduled completion of an activity and its actual completion.

SV = (Actual calendar days - Planned calendar days) / Planned calendar days x 100

❖ Effort Variance: The difference between the planned effort and the effort actually required to undertake a task.

EV = (Actual Effort - Planned Effort) / Planned Effort x 100

❖ Size Variance: The difference between the estimated size of the project and the actual size of the project (normally in KLOC or FP).

SZV = (Actual size - Estimated size) / Estimated size x 100

❖ Requirement Stability Index: Provides visibility and understanding into the magnitude and impact of requirements changes.

RSI = (1 - (No. of changed + No. of deleted + No. of added requirements) / Total no. of initial requirements) x 100

❖ Productivity: A measure of output from a related process for a unit of input.

Productivity = Actual project size / Actual effort spent on the project

❖ Cost of Quality: A measure, in terms of money, of the quality performance within an organization.

Cost of quality = (review + testing + verification review + verification testing + QA + configuration management + measurement + training + rework review + rework testing) / total effort x 100

❖ Defect Density: The number of defects detected in the software during development divided by the size of the software (typically in KLOC or FP).

DD = Total number of defects / Project size in KLOC or FP
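Several of the variances above share the same (actual - planned) / planned shape, so one small helper covers them; all figures below are invented for illustration:

```python
def variance_pct(actual, planned):
    """(actual - planned) / planned, expressed as a percentage."""
    return (actual - planned) / planned * 100

# Schedule variance: planned 40 calendar days, actually took 46.
print(variance_pct(46, 40))    # 15.0, i.e. a 15% schedule slip

# Effort variance: planned 120 person-days, actually spent 150.
print(variance_pct(150, 120))  # 25.0

# Size variance: estimated 20 KLOC, delivered 23 KLOC.
print(variance_pct(23, 20))    # 15.0

def requirement_stability_index(changed, deleted, added, initial):
    """Share of the initial requirements that survived unchanged, in %."""
    return (1 - (changed + deleted + added) / initial) * 100

# 5 changed, 2 deleted, 3 added against 100 initial requirements:
print(requirement_stability_index(5, 2, 3, 100))  # 90.0

def defect_density(total_defects, size):
    """Defects per unit of size (size typically in KLOC or FP)."""
    return total_defects / size

print(defect_density(45, 30))  # 1.5 defects per KLOC
```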

The list obviously is not exhaustive. Depending on the maturity of the organization, its current processes and the tools available, several useful metrics can be captured and used to assure better quality to the customers.

Internal Product Attributes: Structure

The structure of requirements, design, and code may help developers understand the difficulty they sometimes have in converting one product to another, in testing a product, or in predicting external software attributes like maintainability, testability, reusability and reliability from internal product measures. There are several aspects of structure, but the major ones are:

❖ Control-flow structure: Deals with the sequence in which instructions are executed in a program

❖ Data-flow structure: Deals with the trail of a data item as it is created or handled by a program

❖ Data structure: Gives the organization of the data itself, which is independent of the program

McCabe proposed a measure of structuredness in a program based on the cyclomatic number of its control-flow graph, and used it for predicting LOC, effort, cost, etc. Cyclomatic complexity has already been covered in earlier chapters.
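As a brief refresher, McCabe's metric can be computed directly from a control-flow graph; the if/else graph below is a made-up minimal example.

```python
def cyclomatic_complexity(num_edges, num_nodes, num_components=1):
    """McCabe's cyclomatic number: V(G) = E - N + 2P."""
    return num_edges - num_nodes + 2 * num_components

# Control-flow graph of a single if/else:
#   start -> decision -> then-branch -> end
#                     -> else-branch -> end
# 5 nodes and 5 edges in one connected component:
print(cyclomatic_complexity(num_edges=5, num_nodes=5))  # 2 independent paths

# Equivalent shortcut for structured code: binary decision points + 1.
def vg_from_decisions(num_binary_decisions):
    return num_binary_decisions + 1

print(vg_from_decisions(1))  # 2: the same if/else
```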

External Product Attributes

External attributes are those that can be measured only with respect to how the product relates to its environment. They are related to software quality. External attribute quality is impacted by time, functionality and effort. Predicting external attributes by measuring and analyzing internal attributes is important; internal attributes are available for measurement early in the development cycle, whereas external attributes are measurable only when the product is complete.

The external attributes, in combination, present a comprehensive picture of quality. Measurement of these attributes involves decomposing an attribute into measurable components.


McCall's Software Quality Model

McCall's quality model focuses on three categories of a software product's characteristics:

❖ Operational characteristics
❖ Ability to undergo change
❖ Adaptability to new environments

McCall's Quality Factors

Product revision (ability to undergo change):
❖ Maintainability (Can I fix it?)
❖ Flexibility (Can I change it?)
❖ Testability (Can I test it?)

Product transition (adaptability to new environments):
❖ Portability (Will I be able to use it on another machine?)
❖ Reusability (Will I be able to reuse some of the software?)
❖ Interoperability (Will I be able to interface it with another system?)

Product operations (operational characteristics):
❖ Correctness (Does it do what I want?)
❖ Reliability (Does it do it accurately all of the time?)
❖ Efficiency (Will it run on my hardware as well as it can?)
❖ Integrity (Is it secure?)
❖ Usability (Is it designed for the user?)


McCall's 11 quality factors are as follows:

❖ Correctness: The extent to which a program satisfies its specifications and fulfills the customer's mission objectives

❖ Efficiency: The amount of computing resources and code required by a program to perform its function

❖ Integrity: The extent to which access to software or data by unauthorized persons can be controlled

❖ Usability: The effort required to learn, operate, prepare input for, and interpret the output of a program

❖ Maintainability: The effort required to locate and fix an error in a program

❖ Flexibility: The effort required to modify an operational program

❖ Testability: The effort required to test a program to ensure that it performs its intended function

❖ Portability: The effort required to transfer the program from one hardware and/or software system environment to another

❖ Reliability: The extent to which a program can be expected to perform its intended function with required precision

❖ Reusability: The extent to which a program can be reused in other applications; related to the packaging and scope of the functions that the program performs

❖ Interoperability: The effort required to couple one system to another

9.7.4 Attributes of Effective Software Metrics & Etiquette of Metrics

Managing with metrics needs to be effective and cannot be a snapshot activity or one-time effort. Deciding which metrics are to be emphasized, or which will serve the quality objectives decided by the organization, requires proper management study and analysis.


Attributes

For software metrics to be effective they must be:

❖ Simple and computable: It should be relatively easy to learn how to derive the metric, and its computation should not demand inordinate effort or time

❖ Empirically and intuitively persuasive: The metric should satisfy the engineer's intuitive notions about the product attribute under consideration

❖ Consistent and objective: The metric should always yield results that are unambiguous

❖ Consistent in the use of units and dimensions: The mathematical computation of the metric should use measures that do not lead to bizarre combinations of units

❖ Programming language independent: Metrics should be based on the analysis model, the design model, or the structure of the program itself

❖ An effective mechanism for high-quality feedback: The metric should lead to a higher-quality end product

Etiquette in Software Metrics

Metrics are transforming the ways companies and teams work, but numbers can become a double-edged sword. While metrics can improve processes and productivity, they can also be misused, abused, or not used at all. Metrics are merely measurements, i.e., methods by which to measure something. Though metrics have been around for many years, there are companies that fear them. One strong reason is the potential misuse of metrics by the organization. Immature managers and insensitive leaders ruin the purpose of gathering them. The misuse of metrics can destroy good processes and perpetuate poor ones, as well as improperly reward or punish people.

It is also observed in some organizations that developers get so wrapped up in metrics that they spend less time writing code. Obsession with data drives managers to collect all kinds of meaningless data, and with an Excel tool in hand the results can be error prone; thinking and overthinking is not the answer.

Simple rules to keep in mind are:

❖ Use common sense and sensitivity when interpreting metrics data

❖ Provide regular feedback to the individuals and teams who collect measures and metrics

❖ Don't use metrics to evaluate individuals and teams

❖ Work with practitioners and teams to set clear goals and the metrics that will be used to achieve them

❖ Metrics data that indicate a problem should not be considered "negative"; such data are merely an indicator for process improvement

❖ Don't obsess on a single metric to the exclusion of other important metrics

9.8 QUALITY IS A JOURNEY

There are several great minds who have given their views on quality and
some of their quotes are legendary.

❖ "Quality is not an act. It is a habit."- Aristotle

❖ "Always do things right. This will gratify some people and astonish the
rest."- Mark Twain

❖ "Be a yardstick of quality. Some people aren't used to an environment where excellence is expected." - Steve Jobs

❖ "Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives." - William A. Foster

❖ "Quality means doing it right when no one is looking." - Henry Ford

❖ "People forget how fast you did a job - but they remember how well you
did it" - Howard Newton

❖ "Total quality management is a journey, not a destination." - Berry

❖ "If a thing's worth doing, it's worth doing well."- Chinese Proverb

And there are many more.


When a batch of students was asked "Is a customer King or God?", most of them replied: God. The reality is that the customer is KING, because God will forgive quality problems; a king will not. A king as a customer can be very unforgiving. IT vendors have experienced closure of not just projects but entire relationships, overnight on a Friday evening, with millions of dollars at stake.

"Quality is what the customer defines it to be" is a very old, but nevertheless apt, definition of quality. If it is perfection that the customer wants, the pursuit of quality would mean the pursuit of perfection. Quality becomes the key differentiator that can provide the competitive edge. In the current globally competitive environment, one has to be nimble and quality conscious to keep up with a demanding customer base.

One of the most advanced companies with regard to continuous improvement and lean transformation is Toyota Motor Corp. Their senior management is involved in, and supportive of, the continuous improvement process, but it is certainly not a top-down driven effort at Toyota. It is the way they do business; it is in their corporate DNA. The Toyota Production System (TPS) is the way the company operates, and their continuous improvement efforts are bottom-up driven, with all the different work teams throughout every function of the company leading their own projects.

Japanese clients work with their vendors to define the parameters for measuring quality attributes, and expect the vendor to keep improving skills, methods and processes so as to not only maintain high quality of deliverables but also reduce costs, thereby improving margins. At GE, quality is not only well known across the company; it is the only language spoken and understood.


To inculcate a sense of quality, the journey of quality must begin in the schools and colleges in India. This is definitely a big missing link in the nation's progress. Continuous improvement is more about rigor and discipline than it is about technique. Organizations need to train everyone to work on the transformation toward a different culture. People need to understand the tools of continuous improvement and use those tools to identify and eliminate non-value-added waste. Everyone can contribute to the continuous improvement effort. Senior management has to be visibly and actively involved in supporting the teams in their improvement efforts.

In conclusion, continuous improvement is a journey, not a destination. Properly supported and rewarded, continuous improvement will produce a positive environment and make it a way of life: everyone wins.


9.9 SUMMARY

Quality is defined as "the totality of features and characteristics of a product or service that bears its ability to satisfy stated or implied needs." Quality can be viewed in a comparative sense, a quantitative sense, a fitness-for-purpose sense or a subjective sense. The perspective of quality differs depending on whose perspective it is, because any activity, project or business has many stakeholders. Some of these are direct, tangible views while others are indirect or derived.

It is well known how, after World War II, Japan rebuilt itself, and many industries and executives went through training in quality conducted by Dr. Deming and Dr. Juran. Poor software quality has resulted in several financial, physical, security and other disasters. Software Quality Management comprises three principal activities: Quality Assurance, Quality Planning and Quality Control. Assurance involves preventing defects (management by inputs) while Control involves detecting and fixing defects (management by outputs). The Seven Basic Tools of Quality, along with Testing and Six Sigma, are most helpful in troubleshooting issues related to quality. Obviously quality comes with its associated costs, which include the total cost of conformance and non-conformance. Ironically, this also includes the cost of NOT creating a quality product or service. External failure costs are the most critical compared to the other costs of prevention, appraisal, internal failure and measurement.

For better quality management, industries implement several quality standards like ISO, SEI-CMMI and PCMM, and methodologies like Six Sigma, to meet international standards and customer demands. The Plan, Do, Check, Act (PDCA) cycle for monitoring processes is commonly used for quality control purposes. ISO standards believe in "Do what you say and say what you do". With the premise that the quality of a system or product is highly influenced by the quality of the process used to develop and maintain it, CMM models focus on the maturity and capability of the processes in each phase of the delivery cycle. Achieving the Optimizing Level 5 in a CMMI assessment is only a foundation for building an ever-improving capability.

Six Sigma aims to reduce process output variation and follows the Define, Measure, Analyze, Improve and Control (DMAIC) methodology which, in the decades since it was initiated by Motorola and adopted by GE and other companies, has become a brand in large manufacturing corporations and has been adapted in IT too.


Constant monitoring of quality implies measurement of the attributes of the product or process. Metrics provide a quantitative measure of the degree to which an attribute is possessed. The range of metrics which companies can employ varies from those that are mandatory to those that track increases in efficiency, reductions in complaints, greater profits and better savings. Software metrics, comprising product, process and project metrics, are a quantitative measure of the degree to which a software system or process possesses some property. Defect removal efficiency is one of the important process metrics to measure and control. Project metrics include variance in schedule, effort, size, productivity, etc.

9.10 SELF-ASSESSMENT QUESTIONS

1. What is the meaning of the word "Quality"? Explain briefly the different views of Quality.

2. Describe the different perceptions and perspectives of quality. Why is it different for customers and practitioners?

3. Discuss the importance of Quality in software development with examples.

4. Write short notes on the following topics, highlighting their purpose, objectives, coverage, importance, benefits and challenges:

5. Quality Management Systems i.e. ISO 9001

6. SEI - Capability Maturity Model Integration i.e. SEI-CMMI

7. Six Sigma

8. Explain the PDCA model and its importance.

9. Explain the difference between Quality Assurance and Quality Control with some examples.


10. What is meant by Cost of Quality? Explain each component with some examples.

11. Discuss some of the standards that are prevalent in non-IT industries to ensure quality of products or services to customers.

12. Why is ISO 9001 popular in the industries?

13. Name the eight principles of the ISO 9001 standards.

14. What do capability and maturity mean? What are Capability and Maturity Levels?

15. What does SEI CMMI focus on?

16. Briefly describe the structure of SEI CMMI.

17. What is the difference between staged and continuous representation? Are the Key Process Areas different for the two? Explain.

18. Briefly explain the different maturity levels of SEI CMMI.

19. Briefly explain the different capability levels of SEI CMMI.

20. What does institutionalization mean? What are the challenges for this?

21. What is the basic premise of Six Sigma?

22. Explain the statistical concept of Six Sigma. What is the practical meaning of achieving Six Sigma?

23. Briefly explain the components of the DMAIC methodology in Six Sigma.

24. Explain the difference in roles of people with different types of Belts as per Six Sigma definitions.

25. Explain the benefits of using the Six Sigma methodology with some examples.


26. Where do all standards, models, techniques, tools etc. and their implementation lead to? How are metrics relevant to Software Engineering?

27. What, according to your experience, is implied by "high quality software"?

28. Briefly indicate some examples of metrics that one can collect in our everyday life (other than those given in the book).

29. Differentiate between the terms measures, measurement metrics, and indicators. Give an example of each for an automobile or airplane.

30. Explain what can go wrong if one does not "measure" anything in a pharmaceutical company.

31. Tom DeMarco's quote was: "You cannot control what you cannot measure. You can neither predict nor control what you cannot measure." Discuss this quote with some practical example, for instance a patient in a hospital recovering after surgery.

32. Explain some of the uses of metrics.

33. Assume that the organization is an automobile vendor with departments like vehicle sales, service, spare part sales, accessories sales, marketing, finance, credit control, purchase, audit, etc. Discuss how every stakeholder in this organization can benefit from metrics.

34. Explain how every stakeholder in an IT organization benefits by using metrics. Can a programmer benefit from these metrics?

35. What are software metrics? Give some examples of each of the categories of software metrics.

36. Explain the use of the size metric - lines of code. How can it be measured and used? What kind of benefits can follow from using the LOC metric?

37. Explain the use of Function Points - FPs. How can they be measured and used? What kind of benefits can follow from using FP metrics?


38. Briefly explain how Function Points are computed. What does the FP indicate? And how can it be useful?

39. What are process metrics? Give examples of some processes that need to be measured in a school. How will it benefit the students?

40. Why are process metrics difficult to gather?

41. Explain some of the process metrics gathered for a software project and indicate how they will help in assuring quality to the end customer.

42. Discuss defect removal efficiency and its use.

43. A large construction company builds a sea-link bridge across a backwater creek in Mumbai. List down some of the project metrics the organization could have gathered to monitor the project. Explain each of these metrics, how it can be measured and how the project can be controlled.

44. Explain the transition from "A HABIT" to "IT" for a software organization where employees adapt to a new process or rule, e.g. wearing ties.

45. Discuss some of the software project metrics that are useful.

46. Discuss McCabe's measure for structuredness in a program. What is cyclomatic complexity?

47. What are the 11 quality factors? Write a few lines on each of them.

48. Name and briefly explain the attributes of Effective Software Metrics.

49. What is "etiquette" in Software Metrics?

50. It is said that "Quality is a journey". Why? Discuss your own journey of quality in the organization you worked for.

! !455
QUALITY MANAGEMENT AND METRICS

References and Links

Google Links

1. http://asq.org/learn-about-quality/cause-analysis-tools/overview/
scatter.html

2. http://asq.org/service/body-of-knowledge/tools-run-chart

3. http://blog.iesve.com/index.php/2010/01/26/the-architecture-design-
concept-of-software-engineering/

4. http://blog.lnsresearch.com/blog/bid/200624/A-Metric-Misused-or-
Misunderstood-Is-Worse-than-No-Metric-at-All

5. http://blog.proqc.com/quality-quotes/

6. http://bokardo.com/principles-of-user-interface-design/

7. http://c2.com/cgi/wiki?
SoftwareDevelopmentImprovementParadigmShift

8. http://codebetter.com/raymondlewallen/2005/07/19/4-major-principles-
of-object-oriented-programming/

9. http://computer-literacybase.blogspot.in/2011/03/performing-user-
interface-design-golden.html

10.http://docstore.mik.ua/orelly/java-ent/jnut/ch03_05.htm (Data Hiding


and Encapsulation)

11.http://en.wikibooks.org/wiki/C%2B%2B_Programming/
Exception_Handling

12.http://en.wikibooks.org/wiki/Introduction_to_Software_Engineering/
Architecture/Design_Patterns

13.http://en.wikibooks.org/wiki/Introduction_to_Software_Engineering/
Implementation/Code_Convention

! !456
QUALITY MANAGEMENT AND METRICS

14.http://en.wikipedia.org/wiki/Acceptance_testing

15.http://en.wikipedia.org/wiki/Agile_software_development

16.http://en.wikipedia.org/wiki/Alan_M._Davis

17.http://en.wikipedia.org/wiki/Black-box_testing

18.http://en.wikipedia.org/wiki/C%2B%2B

19.http://en.wikipedia.org/wiki/Change_management_%28engineering
%29

20.http://en.wikipedia.org/wiki/Change_request

21.http://en.wikipedia.org/wiki/Class_diagram

22.http://en.wikipedia.org/wiki/Class-responsibility-collaboration_card

23.http://en.wikipedia.org/wiki/Coding_conventions

24.http://en.wikipedia.org/wiki/Configuration_management

25.http://en.wikipedia.org/wiki/Control_chart

26.http://en.wikipedia.org/wiki/Custom_software

27.http://en.wikipedia.org/wiki/Cyclomatic_complexity

28.http://en.wikipedia.org/wiki/Dynamic_testing

29.http://en.wikipedia.org/wiki/Encapsulation_%28object-
oriented_programming%29

30.http://en.wikipedia.org/wiki/Entity%E2%80%93relationship_model

31.http://en.wikipedia.org/wiki/Equivalence_partitioning

32.http://en.wikipedia.org/wiki/Falkirk_Wheel

! !457
QUALITY MANAGEMENT AND METRICS

33.http://en.wikipedia.org/wiki/Function_point

34.http://en.wikipedia.org/wiki/Histogram

35.http://en.wikipedia.org/wiki/History_of_the_euro

36.http://en.wikipedia.org/wiki/HP_QuickTest_Professional

37.http://en.wikipedia.org/wiki/HP_WinRunner

38.http://en.wikipedia.org/wiki/Incremental_build_model

39.http://en.wikipedia.org/wiki/Information_hiding

40.http://en.wikipedia.org/wiki/Inheritance_%28object-
oriented_programming%29

41.http://en.wikipedia.org/wiki/Integration_testing

42.http://en.wikipedia.org/wiki/Ishikawa_diagram

43.http://en.wikipedia.org/wiki/
List_of_failed_and_overbudget_custom_software_projects

44.http://en.wikipedia.org/wiki/Memory_leak

45.http://en.wikipedia.org/wiki/Method_%28computer_programming%29

46.http://en.wikipedia.org/wiki/Modular_design

47.http://en.wikipedia.org/wiki/Modularity

48.http://en.wikipedia.org/wiki/Mutation_testing

49.http://en.wikipedia.org/wiki/Object-oriented_analysis_and_design

50.http://en.wikipedia.org/wiki/Open/closed_principle

51.http://en.wikipedia.org/wiki/Pareto_chart

! !458
QUALITY MANAGEMENT AND METRICS

52.http://en.wikipedia.org/wiki/Quality_costs

53.http://en.wikipedia.org/wiki/Rapid_application_development

54.http://en.wikipedia.org/wiki/Regression_testing

55.http://en.wikipedia.org/wiki/Reusability

56.http://en.wikipedia.org/wiki/Run_chart

57.http://en.wikipedia.org/wiki/Scatter_plot

58.http://en.wikipedia.org/wiki/Scrum_(software_development)

59.http://en.wikipedia.org/wiki/Security_testing

60.http://en.wikipedia.org/wiki/Seven_Basic_Tools_of_Quality

61.http://en.wikipedia.org/wiki/Smoke_testing_(software)

62.http://en.wikipedia.org/wiki/Software_configuration_management

63.http://en.wikipedia.org/wiki/Software_deployment

64.http://en.wikipedia.org/wiki/Software_design

65.http://en.wikipedia.org/wiki/Software_development_process

66.http://en.wikipedia.org/wiki/Software_engineering

67.http://en.wikipedia.org/wiki/Software_evolution

68.http://en.wikipedia.org/wiki/Software_maintenance

69.http://en.wikipedia.org/wiki/Software_portability

70.http://en.wikipedia.org/wiki/Software_prototyping

71.http://en.wikipedia.org/wiki/Software_testing

! !459
QUALITY MANAGEMENT AND METRICS

72.http://en.wikipedia.org/wiki/Software_verification_and_validation

73.http://en.wikipedia.org/wiki/Static_testing

74.http://en.wikipedia.org/wiki/Test_automation

75.http://en.wikipedia.org/wiki/Test_script

76.http://en.wikipedia.org/wiki/The_Chicken_and_the_Pig

77.http://en.wikipedia.org/wiki/Unified_Process

78.http://en.wikipedia.org/wiki/Value_object

79.http://en.wikipedia.org/wiki/Verification_and_validation

80.http://en.wikipedia.org/wiki/V-Model_%28software_development%29

81.http://en.wikipedia.org/wiki/White-box_testing

82.http://eras.readthedocs.org/en/latest/doc/guidelines.html

83.http://geosoft.no/development/cppstyle.html

84.http://infolab.stanford.edu/~burback/watersluice/node19.html

85.http://istqbexamcertification.com/what-is-a-defect-life-cycle/

86.http://istqbexamcertification.com/what-is-rad-model-advantages-disadvantages-and-when-to-use-it/

87.http://istqbexamcertification.com/what-is-security-testing-in-software/

88.http://istqbexamcertification.com/what-is-v-model-advantages-disadvantages-and-when-to-use-it/

89.http://istqbexamcertification.com/why-is-testing-necessary/

90.http://oer.nios.ac.in/wiki/index.php/Phases_of_System_Development_Life_Cycle


91.http://pmstudycircle.com/2012/01/configuration-management-vs-change-management/

92.http://programmers.stackexchange.com/questions/134256/what-is-the-difference-between-a-software-process-model-and-software-engineering

93.http://programmers.stackexchange.com/questions/173547/what-is-the-difference-between-data-hiding-and-encapsulation

94.http://reqtest.com/testing-blog/differences-between-different-test-levels/

95.http://scrummethodology.com/

96.http://searchsoftwarequality.techtarget.com/guides/Quality-metrics-A-guide-to-measuring-software-quality

97.http://spectrum.ieee.org/computing/software/why-software-fails

98.http://stackoverflow.com/questions/359790/what-are-pitfalls-for-agile-development-methodologies

99.http://testermindset.blogspot.in/2011_05_01_archive.html

100.http://testingbasicinterviewquestions.blogspot.in/2012/01/why-we-use-stubs-and-drivers.html

101.http://testingbasicinterviewquestions.blogspot.in/search/labelSmoke%20Testing%20Example

102.http://users.csc.calpoly.edu/~jdalbey/206/Lectures/BasisPathTutorial/

103.http://www.3csoftware.com/to-build-or-to-buy-comparing-custom-and-off-the-shelf-software-applications/

104.http://www.avionyx.com/publications/e-newsletter/issue-3/126-demystifying-software-coupling-in-embedded-systems.html


105.http://www.cavehill.uwi.edu/staff/eportfolios/paulwalcott/courses/comp2145/2010/design_-_concepts_and_principles.htm

106.http://www.cs.cornell.edu/courses/cs501/2005sp/syllabus.html

107.http://www.cs.olemiss.edu/~hcc/csci581oo/notes/dataAbstraction.html

108.http://www.c-sharpcorner.com/Forums/Thread/175854/what-is-inheritance-in-oops-with-example-why-we-are-using.aspx

109.http://www.defectmanagement.com/defectmanagement/index.htm

110.http://www.designyourway.net/blog/inspiration/menus-and-buttons-in-mobile-design-45-examples/

111.http://www.engineeringtoolbox.com/pumping-water-horsepower-d_753.html

112.http://www.etestinghub.com/v_model.php

113.http://www.freepatentsonline.com/article/International-Journal-Business-Research/190463129.html (Object-oriented programming Analysis)

114.http://www.fsa.usda.gov/FSA/sdlcapp?area=home&subject=dev&topic=req

115.http://www.geometrick.com/fv_wys_mis_metric.htm

116.http://www.guru99.com/static-dynamic-testing.html

117.http://www.guru99.com/user-acceptance-testing.html

118.http://www.harmonicss.co.uk/index.php/tutorials/software-engineering/56-the-death-of-the-v-model

119.http://www.informit.com/articles/article.aspx?p=19796&seqNum=3

120.http://www.infoworld.com/article/2623631/agile-development/an-agile-pioneer-versus-an--agile-ruined-my-life--critic.html

121.http://www.isixsigma.com/methodology/metrics/importance-implementing-effective-metrics/

122.http://www.isixsigma.com/methodology/metrics/tips-defining-and-collecting-it-process-metrics/

123.http://www.math-cs.gordon.edu/courses/cs211/ATMExample/InitialFunctionalTests.html

124.http://www.methodsandtools.com/archive/archive.php?id=32

125.http://www.methodsandtools.com/archive/archive.php?id=32 (Understanding the Unified Process (UP))

126.http://www.mindtools.com/pages/article/newTMC_03.htm

127.http://www.mountaingoatsoftware.com/blog/my-primary-criticism-of-scrum

128.http://www.my-project-management-expert.com/the-advantages-and-disadvantages-of-agile-software-development.html

129.http://www.nngroup.com/articles/first-rule-of-usability-dont-listen-to-users/

130.http://www.nngroup.com/articles/ten-usability-heuristics/ and http://www.cse.lehigh.edu/~gtan/bug/softwarebug.html

131.http://www.qcin.org/nbqp/qualityindia/Vol-1-No1/qualityjourney.php

132.http://www.slu.edu/its/policies-and-processes

133.http://www.softwaretestingclub.com/profiles/blogs/defect-clustering-pesticide-paradox

134.http://www.softwaretestinghelp.com/security-testing-of-web-applications/


135.http://www.softwaretestinghelp.com/static-testing-and-dynamic-testing-difference/

136.http://www.softwaretestinghelp.com/test-case-template-examples/

137.http://www.softwaretestinghelp.com/what-is-performance-testing-load-testing-stress-testing/

138.http://www.softwaretestinghelp.com/why-documentation-is-important-in-software-testing/

139.http://www.softwaretestingstuff.com/2007/10/top-down-testing-vs-bottom-up-testing.html

140.http://www.stevemcconnell.com/rdenum.htm

141.http://www.stevemcconnell.com/rdenum.htm (Classic Mistakes Enumerated)

142.http://www.useoftechnology.com/5-ethical-challenges-information-technology/

143.http://en.wikipedia.org/wiki/Cloud_computing

144.http://www.news24.com/Technology/News/Drone-delivers-beer-at-Oppikoppi-20130808 - Drone delivers beer at Oppikoppi

145.http://en.wikipedia.org/wiki/History_of_the_Internet - History of the Internet

146.http://en.wikipedia.org/wiki/Information_technology_controls - Information technology controls

147.http://www.risktec.co.uk/knowledge-bank/technical-articles/lessons-learned-from-lehman-brothers.aspx - Lessons learned from Lehman Brothers

148.http://www.aaxnet.com/design/where.html - Where is Information Technology Headed?


149.http://en.wikipedia.org/wiki/Capability_Maturity_Model_Integration - Capability Maturity Model Integration

150.http://en.wikipedia.org/wiki/CMMI_Version_1.3 - CMMI Version 1.3

151.http://en.wikipedia.org/wiki/History_of_software_engineering - History of software engineering

152.http://hrsuccess.wordpress.com/2013/01/25/in-god-we-trust-all-others-bring-data/ - In God We Trust, all others bring data

153.http://en.wikipedia.org/wiki/Quality_management - Quality management

154.http://www.tutorialspoint.com/cmmi/cmmi-overview.htm - SEI CMMI Overview

155.http://en.wikipedia.org/wiki/Six_Sigma - Six Sigma

156.http://en.wikipedia.org/wiki/Software_crisis - Software crisis

157.http://en.wikipedia.org/wiki/Software_quality - Software quality

158.http://en.wikipedia.org/wiki/W._Edwards_Deming - W. Edwards Deming

159.http://www.response-uk.co.uk/blog/is-process-killing-creativity.html - by Audrey Fitzpatrick

160.http://en.wikipedia.org/wiki/Google_China - Google China

161.http://www.sw-engineering-candies.com/blog-1/top10thingseverysoftwareengineershouldknow

162.http://www.techiwarehouse.com/engine/18a41ffa/Software-Engineering-Phases

163.http://www.technologyexecutivesclub.com/Articles/management/artChangeControl.php

164.http://www.technotrice.com/rad-model-software-engineering/


165.http://www.uxdesignedge.com/2010/06/intuitive-ui-what-the-heck-is-it/

166.https://danashby04.wordpress.com/2013/07/09/manual-testing-and-automated-testing-the-myths-the-misconceptions-and-the-reality/

167.https://intensetesting.wordpress.com/2014/03/28/memory-leak-testing-why-it-is-important-how-is-it-done/

168.https://users.csc.calpoly.edu/~djanzen/secsdiff.html

169.https://www.artima.com/weblogs/viewpost.jsp?thread=218013 (Agitating Thoughts & Ideas - Software Metrics Don't Kill Projects, Moronic Managers Kill Projects)

170.https://www.veracode.com/blog/2013/12/static-testing-vs-dynamic-testing

Documents & Books Referred (including through websites)

1. An Illustrated History of Computers, John Kopplin (2002)

2. Six Sigma Belts, Executives and Champions - What Does It All Mean?

3. Information Technology Offshoring to India: Sriram N. & Jayashankar M. S. (2007)

4. An Overview of Information Security Standards, February 2008, by the Government of the Hong Kong Special Administrative Region

5. International Standard ISO 9001, 4th edition, 2008 - Quality management systems - Requirements

6. Quality management principles from ISO's website, www.iso.org

7. ISO 9001 - It's in the detail, by BSI

8. Capability Maturity Model Integration Version 1.2 Overview by SEI Carnegie Mellon


9. CMMI® for Development, Version 1.3 - CMMI Product Team

10. Strianzblog.com/wordpress/?p=217 - Why GE's Six Sigma Success Story Is Still Relevant?

11. Basic Statistics and Six Sigma Concepts by Douglas T. Meyers, Concurrent Technologies Corporation, Aug 2009

12. Six Sigma - Past, Present and Future - BSI lecture material

13. People Capability Maturity Model (P-CMM) V 2.0, 2nd Edition - SEI, July 2009 Technical Report

14. Software Process Improvement Capability Maturity Models, 2010, by Theo Schouten

15. The Changing Faces of Risk Management: The Evolution of a Concept, by Simona Fionda, The Institute of Risk Management

16. Software Engineering: A Practitioner's Approach (6th Edition), Roger Pressman, McGraw-Hill

17. Design Patterns: Elements of Reusable Object-Oriented Software, by Erich Gamma

18. The Mythical Man-Month: Essays on Software Engineering, by Frederick P. Brooks Jr.

19. Refactoring: Improving the Design of Existing Code, by Martin Fowler

20. Extreme Programming Explained: Embrace Change (The XP Series), by Kent Beck

21. Software Engineering (International Computer Science Series), by Ian Sommerville

22. Writing Effective Use Cases, October 15, 2000, by Alistair Cockburn


23. Object-Oriented Analysis and Design with Applications (3rd Edition), 2007, by Grady Booch, Robert A. Maksimchuk, Michael W. Engle, et al.

24. Information Technology Project Management, 2007, by Kathy Schwalbe

25. Agile Software Development with Scrum (Series in Agile Software Development), October 21, 2001, by Ken Schwaber and Mike Beedle

26. Fundamentals of Software Engineering, Rajib Mall, PHI Learning Pvt. Ltd., 18-May-2009

27. An Integrated Approach to Software Engineering, Third Edition, Pankaj Jalote, Springer/Narosa Publishing House

28. ISO 9001:2000 & Capability Maturity Model® Integration - several authors, Software Quality Institute, Griffith University & Australian Department of Defense

29. Multiple standards: is this the future for organizations? Merce Bernardo, Universitat de Barcelona (Spain) & Alexandra Simon, Universitat de Girona (Spain)

30. A study of Six Sigma implementation and critical success factors, Mr. Obaidullah Hakeem Khan Kundi, Pakistan Institute of Quality Control

31. Software product quality: Theory, Model, and Practice, R. Geoff Dromey, Software Quality Institute, Brisbane, Australia

32. Practical Software Measurement: Measuring for Process Management and Improvement, by William A. Florac, Robert E. Park, Anita D. Carleton, April 1997 (CMU/SEI)

33. Software Metrics, SEI Curriculum Module SEI-CM-12-1.1, Dec 1988, Everald E. Mills, Seattle University (CMU/SEI)

34. Books and Documents referred (on Websites)


35. Stephen H. Kan, "Metrics and Models in Software Quality Engineering", Pearson Education Limited 2003, Boston, United States

36. Mutation Testing, by Stuart Anderson

37. Software Testing - Goals, Principles, and Limitations, by S.M.K. Quadri, Head of Department, Department of Computer Sciences, University of Kashmir (India)

38. A study of software metrics, by Hilda B. Klasky

39. Control flow graphs and code coverage, Robert Gold, Faculty of Electrical Engineering and Computer Science, Ingolstadt University of Applied Sciences, Esplanade 10, D-85049 Ingolstadt, Germany

40. Writing Effective Use Cases, Alistair Cockburn, Humans and Technology, pre-publication draft #3, edit date: 2000.02.21, published by Addison-Wesley, c. 2001

41. Software Metrics, Alex Boughton

42. Coding Guidelines and Quick Start Tips For Software Development, Version 0.6 (in progress)

43. NOAA National Weather Service NWS/OHD General Software Coding Standards and Guidelines

44. International Journal of Emerging Technology and Advanced Engineering, website: www.ijetae.com (ISSN 2250-2459, ISO 9001:2008 Certified Journal, Volume 4, Issue 2, February 2014)

45. Software Maintenance As Part of the Software Life Cycle, Comp180: Software Engineering, Department of Computer Science, Tufts University, Prof. Stafford, prepared by: Kagan Erdil, Emily Finn, Kevin Keating, Jay Meattle


47. Advanced Topics in Computer Science: Testing, Path Testing, Luke Gregory 321512, Professor H. Schligloff and Dr. M. Roggenbach

48. ISTQB Certification Preparation Guide: Chapter 1 - Principles of Testing

49. The Scope and Importance of Human Interruption in Human-Computer Interaction Design, Daniel C. McFarlane, Lockheed Martin Advanced Technology Laboratories, Kara A. Latorella, NASA Langley Research Center

50. Software Engineering: A Practitioner's Approach, Copyright © 1996, 2001, R.S. Pressman & Associates, Inc., For University Use Only

51. A First Step Towards Nuance-Oriented Interfaces for Virtual Environments, Chadwick A. Wingrave, Doug A. Bowman and Naren Ramakrishnan, Department of Computer Science, Virginia Tech, Blacksburg, VA 24061 USA, {cwingrav,bowman,naren}@cs.vt.edu

53. Software Testing Methods and Techniques, Jovanović, Irena

54. Test Release Processes, Rex Black: President and Principal Consultant, RBCS, Inc., Bulverde, TX

55. ESE Einführung in Software Engineering (Introduction to Software Engineering) - Software Quality, by Prof. O. Nierstrasz

56. Software Testing and Quality Assurance: Theory and Practice, Chapter 4, Control Flow Testing, © Naik & Tripathy, Wiley Publishers

57. Software Process and Product Metrics, CIS 375, Bruce R. Maxim, UM-Dearborn

58. Metrics to improve software process, Juha Tarvainen

59. Software Quality Metrics to Identify Risk, Department of Homeland Security Software Assurance Working Group, by Thomas McCabe Jr.


60. Software Quality Measurement: A Framework for Counting Problems and Defects, William A. Florac with the Quality Subgroup of the Software Metrics Definition Working Group and the Software Process Measurement Project Team



