MANUAL TESTING
Introduction to Software Testing
1.1 Evolution of Software Testing
The ability to produce software cost-effectively is a key factor in the effective functioning of modern systems. Over the years, a number of activities have been practiced to produce cost-effective software.
The attitude towards software testing has undergone a major positive change in recent years. In the 1950s, when machine languages were used, testing was nothing more than debugging. In the 1960s, when compilers were developed, testing started to be considered a separate activity from debugging. In the 1970s, when software engineering concepts were introduced, software testing began to evolve as a technical discipline.
Over the last two decades there has been an increased focus on better, faster, more cost-effective, and more secure software. This has increased the acceptance of testing as a technical discipline and as a career choice.
Software testing is the process used to identify the correctness, completeness, and quality of developed software. It is a means of evaluating a system or a system component to determine whether it meets the requirements of the customer.
SOFTWARE PROCESS
2.1 Introduction to Software process
The Capability Maturity Model (CMM) has found its way from Carnegie Mellon University's (CMU) Software Engineering Institute (SEI) to major software developers all over the world. Some consider it an answer to
Software Industry’s chaotic problems, and some consider it just another exhaustive framework that requires too
much to do and too little to show for it. This article is not intended to be a comprehensive introduction to CMM; the
interested readers should read official CMM documentation available from SEI’s web site to get a comprehensive
discussion of CMM. This article is intended to show that CMM is not a framework that advocates magical and
revolutionary new ideas, but it is in fact a tailored compilation of the best practices in Software engineering.
The intention of this article is to introduce CMM as a logical and obvious evolution of the Software
Engineering practices. The article does not require any prior knowledge of CMM; however it is assumed that the
reader is cognizant of issues involved in Software development.
Before we move any further, we must define one term that is central to almost every industry - Process. This term has also found its rightful place in the software industry. It was Deming who popularized this term, and the Japanese have managed a miraculous industrial revolution based on the simple concept of a process. "Process is a mean by which people, procedures, methods, equipment, and tools are integrated to produce a desired end result" [quoted from CMM for Software, version 2B]. Humphrey, in his book Introduction to the PSP (1997), defines a process in the software development context as: "Process defines a way to do a project, Projects typically produces a product, and Product is something you produce for a co-worker, an employer, or a customer."
Now that we know what Process means, how can we use this knowledge to achieve success? The answer lies
in the following three-step strategy:
1- Analyze the current process by which your organization executes its projects,
2- Identify the weaknesses in that process and improve it, and
3- Execute future projects using the improved process.
The above seemingly simple steps have baffled the software industry for years. Different software developers have adopted different techniques to implement the three-step recipe, with varying degrees of success.
Having noted the above "three-step approach to success", we will now concentrate on mastering each of the three steps.
Let us start by considering the normal course of events that follow when a software project is undertaken. We will only outline the steps, without going into the details of each, since our purpose is to highlight the most common events and not their particular details, as these may vary depending on the contract and the nature of the project.
Step-1 – The Requirements:
The client gives a set of Requirements of the product to the contracting company (referred to as “the
Company”). The first step is to discuss these requirements with the client. The discussion will focus on removing
any ambiguity, conflict, or any other issues related to the product in question. The outcome of this discussion will
ideally be a “Well-defined set of functionalities that the product will achieve”.
Step-2 – Planning the Project:
The next step is to plan the project. Given the required set of functionalities, the million-dollar question is "How much time and money will the Company require to complete the project?" Based on these estimates, resources (both human and non-human) will be allocated to the project. Various milestones will be defined to facilitate project monitoring. Plans will also be made to outsource any part of the project, if deemed necessary.
Step-3 – On with the Project:
Assuming that the plans have been made, the team formed, and the estimates in place, the Company is now ready to start actual work on the project.
Step-4 – How is the Project doing (continuous monitoring):
Once the project is under way, the project team will continuously monitor its progress against the plans and milestones made in Step-2.
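Such monitoring amounts to comparing actuals against the plan. The sketch below illustrates one way this could be done; the milestone names and dates are invented for illustration, not taken from any real project.

```python
# Hypothetical sketch: tracking actual milestone dates against the plan.
from datetime import date

plan = {
    "requirements signed off": date(2003, 1, 15),
    "design complete":         date(2003, 3, 1),
}
actual = {
    "requirements signed off": date(2003, 1, 20),
    "design complete":         date(2003, 3, 1),
}

def slippage_days(plan, actual):
    """Days of schedule slippage per milestone (positive = late)."""
    return {m: (actual[m] - plan[m]).days for m in plan if m in actual}

print(slippage_days(plan, actual))
# {'requirements signed off': 5, 'design complete': 0}
```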
Step-5 – How are the sub-contractors Doing:
In Step-2, if the Company decided to outsource or sub-contract a part of the project, then the sub-
contractors will also be managed and their progress monitored closely. This will ensure that no delays occur due
to lapses caused by the sub-contractor.
Step-6 – Software Quality Assurance:
In Step-4 the Company monitored the project for any cost overrun or schedule slippage, but that is not all that needs to be monitored. An in-budget, on-time project may still have serious problems. In Step-4 the Company ensured that the project is going according to schedule and is within budget, but is it doing what it is supposed to do? That is, are all the tasks completed according to the Company's standards and according to the Requirements agreed in Step-1? In Step-6, the Company will ensure that no work is done in violation of any standard or any system Requirement.
Step-7 – Handling the Inevitable Changes:
A software project usually involves different teams working on different aspects of the project; e.g. one team may be coding a module while another may be working on writing the user manual. Although each team works on a certain aspect of the project, the project is eventually going to be delivered as a single product. It is evident that all the teams MUST co-ordinate their work to produce a well-integrated final product. In Step-2, the
plan was well laid, and all the involved personnel were assigned their share of work. But some changes will
almost always have to be made. These changes may affect more than one team. Therefore it is necessary for the
Company to ensure that all the bits-and-pieces of the project remain well coordinated. The Company must
determine if a change made to one piece of the product also necessitates a change to one or more other pieces,
and if it does then those changes must be made accordingly. In Software terms this is called “Configuration
Management”.
One can come up with many other activities that a software company would normally follow. But we would stop
here and will focus only on the above-mentioned activities.
It is obvious that the above-mentioned activities are performed by almost all software companies; so what is it that makes one company a Microsoft while another goes belly up?
The answer is simple: Not all the companies observe the above steps with the same vigor. These steps are all
very simple to understand but extremely difficult to execute effectively.
The purpose of the above discussion was to enable the readers to appreciate the need for a guideline, or road map, that software companies can follow to produce quality software, within budget and on time. One such roadmap is called the Capability Maturity Model (CMM).
CAPABILITY MATURITY MODEL
(CMM)
3.1 CAPABILITY MATURITY MODEL (CMM)SM
Capability Maturity Model, as already mentioned, is the outcome of decades of research and study of successful
and unsuccessful projects. The major philosophy of CMM is very similar to life itself. When a child is born it is at a very "initial" level of maturity. The child grows up, learns, and attains a higher level of maturity. This keeps on
going until he/she becomes a fully mature adult; and even after that the learning goes on.
According to CMM, a software company also goes (or should go) through similar maturity evolutions. The CMM
maturity levels are discussed later.
Readers should notice that CMM is NOT a software development life cycle model. Instead it is a strategy for
improving the software process irrespective of the actual life-cycle model used [Schach 1996].
Let's dive right into the intricacies of CMM.
Given below is a brief explanation of various components of CMM. This explanation has been extracted
from SEI's official documents. This section is followed by a more detailed explanation of each component.
Maturity levels
A maturity level is a well-defined evolutionary plateau toward achieving a mature software process. The five
maturity levels provide the top-level structure of the CMM.
Process capability
Software process capability describes the range of expected results that can be achieved by following a software
process. The software process capability of an organization provides one means of predicting the most likely outcomes
to be expected from the next software project the organization undertakes.
Key process areas
Each maturity level is composed of key process areas. Each key process area identifies a cluster of related activities
that, when performed collectively, achieve a set of goals considered important for establishing process capability at that
maturity level. The key process areas have been defined to reside at a single maturity level. For example, one of the
key process areas for Level 2 is Software Project Planning.
Goals
The goals summarize the key practices of a key process area and can be used to determine whether an organization or
project has effectively implemented the key process area. The goals signify the scope, boundaries, and intent of each
key process area. An example of a goal from the Software Project Planning key process area is "Software estimates
are documented for use in planning and tracking the software project." See "Capability Maturity Model for Software,
Version 1.1" [Paulk93a] and Section 4.5, Applying Professional Judgment, of this document for more information on
interpreting the goals.
Common features
The key practices are divided among five Common Features sections: Commitment to Perform, Ability to Perform,
Activities Performed, Measurement and Analysis, and Verifying Implementation. The common features are attributes
that indicate whether the implementation and institutionalization of a key process area is effective, repeatable, and
lasting. The Activities Performed common feature describes implementation activities. The other four common features
describe the institutionalization factors, which make a process part of the organizational culture.
Key practices
Each key process area is described in terms of key practices that, when implemented, help to satisfy the goals of that
key process area. The key practices describe the infrastructure and activities that contribute most to the effective
implementation and institutionalization of the key process area. For example, one of the practices from the Software
Project Planning key process area is "The project's software development plan is developed according to a documented
procedure."
As mentioned earlier, the above description of the various components of CMM has been taken from SEI's official documents. The readers need not worry if they don't understand some or all of what has been written above; I will explain these components in more detail in the sections that follow.
MATURITY LEVELS
Level 1 – Initial
This is the lowest of the maturity levels; you may consider it the immature level. At this level the software process is not documented and is not fixed. Everything in these companies is done on an ad-hoc basis. The projects are usually late, over budget, and have quality issues. This does not mean that a company at this level cannot do successful projects. As a matter of fact, the author himself works for a company that is somewhere between Level 1 and Level 2, and despite this it has a very impressive track record of producing quality software, within budget and on time.
Companies at Level 1 do manage to produce good software, mainly because of the immense competency of their personnel. These companies are characterized by heroes - individuals with good programming, communication, and people skills. It is because of these individual heroes that companies at Level 1 manage to complete successful projects. Most of the companies around the world are at Level 1. These companies make their decisions on the spur of the moment, rather than anticipating problems and fixing them before they occur. Software developers in these companies are usually over-worked, over-burdened, and spend a major portion of their time re-working or fixing bugs. The success of a project depends totally on the team working on the project and on the project manager's abilities, rather than on the company's processes. As the team changes or some key individuals of the team leave, the project usually falls flat on its face.
Level 2 – Repeatable
At this level basic software project management practices are in place. Project planning, monitoring, and measurement are properly done according to certain well-defined processes. Typical measurements include tracking of costs and schedule. The results of these measurements are used in future projects to make a better and more realistic
project plan. Projects have a stronger chance of being successful, and if unsuccessful the mistakes are recorded and
thus are avoided in future projects. The key point is that without measurements it is impossible to foresee and avoid potential problems.

Level 3 – Defined
At this level the process of software development is fully documented. Both the managerial and technical aspects are
fully defined and continued efforts are made to better the process. At this level CASE tools are used for development of
software. If a Level 1 company tries to follow the activities involved in Level 3, the results are usually disastrous. This is because in CMM each preceding level lays the groundwork for the next level. In order to achieve Level 3, one must first fulfill all the requirements of Level 2.
An example of a documented process could be "the process for identifying software defects/bugs". This process may
be documented by using checklist for identification of common defects; the check list may contain entries like "All
variables initialized, all pointers initialized, all pointers deleted, all exceptions caught" etc. The process of defect
identification may also include the total count of defects and the categories of each software defect. A company may use any method to document its processes. CMM lays no compulsion on how a process should be documented. The only compulsion is that the process should be documented in such a manner that a new recruit to the company can read it and understand the process.
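As an illustration, such a checklist-based defect-identification process might be represented as follows. This is a hypothetical sketch; CMM does not prescribe any particular format, and the checklist items are the ones mentioned above.

```python
# Hypothetical sketch of a documented defect-identification checklist.
# CMM does not prescribe a format; this is one possible representation.

CHECKLIST = [
    "All variables initialized",
    "All pointers initialized",
    "All pointers deleted",
    "All exceptions caught",
]

def record_review(findings):
    """Count defects per checklist category for one code review.

    `findings` maps a checklist item to the number of violations found.
    Returns the total defect count and the per-category breakdown.
    """
    counts = {item: findings.get(item, 0) for item in CHECKLIST}
    total = sum(counts.values())
    return total, counts

total, by_category = record_review({"All pointers deleted": 3,
                                    "All exceptions caught": 1})
print(total)  # 4
```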
Level 4 – Managed
Level 3 provides a way to document the processes; Level 4 allows that documentation to be used in a meaningful manner. Level 4 involves software metrics and statistical quality control techniques. In Level 3, I gave an example of documenting a software defects/bugs identification process. Imagine that the total count of defects per thousand lines of code turns out to be 500. Level 4 would have activities aimed at identifying the root cause(s) of these bugs and would use these statistics to bring the defect count down.
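The defect-density figure quoted above is just a ratio of defects to thousands of lines of code (KLOC); a minimal sketch of the measurement (the numbers are invented to reproduce the 500 figure):

```python
# Hypothetical sketch: defect density per thousand lines of code (KLOC),
# the kind of metric a Level 4 organization would track statistically.

def defects_per_kloc(defect_count, lines_of_code):
    """Return the number of defects per thousand lines of code."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defect_count / (lines_of_code / 1000)

# 5,000 defects found in a 10,000-line product -> 500 defects/KLOC
print(defects_per_kloc(5000, 10000))  # 500.0
```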
Level 5 – Optimizing
The software environment changes all the time. Technology changes and so do the techniques. Level 5 deals with the
ongoing changes and with ways to improve the current processes to meet the changing environment. In essence Level
5 provides a positive feedback loop. Level 5 is about continuous improvement. A company at Level 5 uses statistical
methods to incorporate future changes and to be receptive to ideas and technologies for continuous growth.
The above discussion will make sense only to readers who already know about CMM; for others, the above lines just add to the confusion. Once again I remind the readers that "patience has its virtue". CMM is a vast subject, and a few lines cannot even begin to explain it. The rest of the article breaks the above levels down further, with the hope that this will help the readers in understanding CMM. So if the above discussion has left you confused and has not added much to your understanding of CMM, then keep reading, as the best is yet to come :)
3.4 KEY PROCESS AREAS (KPAs)
Each level (Levels 1, 2, 3, 4, and 5) has been divided into certain KPAs. For a company to achieve a certain maturity level it must fulfill all the KPAs of the desired maturity level. Since every company is at least at Level 1, there are no Key Process Areas for Level 1 - meaning that a software company does not need to do anything to be at Level 1. You may think of Key Process Areas as the "TO DOs of a maturity level", or a task list that must be performed. A Key Process Area contains a group of common activities that a company must perform to fully address that Key Process Area. Given below are the KPAs for each maturity level:
Level 1 – Initial
Level 2 - Repeatable
• Requirements Management
• Software Project Planning
• Software Project Tracking & Oversight
• Software Subcontract Management
• Software Quality Assurance
• Software Configuration Management
Level 3 - Defined
• Organization Process Focus
• Organization Process Definition
• Training Program
• Integrated Software Management
• Software Product Engineering
• Intergroup Coordination
• Peer Reviews
Level 4 - Managed
• Quantitative Process Management
• Software Quality Management
Level 5 - Optimizing
• Defect Prevention
• Technology Change Management
• Process Change Management
There are 18 KPAs in CMM. So what should the reader make of the above KPAs? A detailed book on CMM would explain what each KPA means, but within the space and scope restrictions of this article I cannot delve deep into each KPA. Just by reading the KPAs, readers will realize that some of the KPAs immediately make sense while others are difficult to understand. For example, the "Peer Reviews" KPA of Level 3 is easily understood, and so are most of the KPAs of Level 2. However, KPAs like "Organization Process Focus, Organization Process Definition, Integrated Software Management, etc." are difficult to understand without some explanation. There is a reason why some of the KPAs are easily understood while others take considerable effort. Those KPAs that are usually performed by many companies (namely the KPAs of Level 2) are the ones that are easily understood - while the other KPAs alienate us -
not because they are some abstract terms being churned out in the labs of CMU, but simply because most of the
companies in the world do not follow the activities encompassed by these KPAs. And that is why CMM is such a
wonderful roadmap to follow. It tells us exactly what successful, big software companies have been doing to achieve
success.
Unfortunately the scope of this article restricts me from explaining the above KPAs in detail.
What CMM tells us by virtue of the above KPAs is this: for a company to be on a level with the best, it MUST address all 18 KPAs. Failing to address one or more of the above KPAs results in a relatively immature company - and hence a lower maturity level.
3.4.1 GOALS
Looking at the KPAs, an obvious question comes to mind: how can a company be sure that it has successfully addressed a KPA? CMM assigns GOALS to each KPA. In order to successfully address a KPA, a company must achieve ALL the goals associated with that KPA. Given below is the list of GOALS associated with each of the above KPAs.
Level 2 - Repeatable
• Requirements Management
o GOAL 1:
System requirements allocated to software are controlled to establish a baseline for software
engineering and management use.
o GOAL 2:
Software plans, products, and activities are kept consistent with the system requirements allocated to software.
• Software Project Planning
o GOAL 1:
Software estimates are documented for use in planning and tracking the software project.
o GOAL 2:
Software project activities and commitments are planned and documented.
o GOAL 3:
Affected groups and individuals agree to their commitments related to the software project.
• Software Project Tracking & Oversight
o GOAL 1:
Actual results and performances are tracked against the software plans.
o GOAL 2:
Corrective actions are taken and managed to closure when actual results and performance deviate
significantly from the software plans.
o GOAL 3:
Changes to software commitments are agreed to by the affected groups and individuals.
• Software Subcontract Management
o GOAL 1:
The prime contractor selects qualified software subcontractors.
o GOAL 2:
The prime contractor and the software subcontractor agree to their commitments to each other.
o GOAL 3:
The prime contractor and the software subcontractor maintain ongoing communications.
o GOAL 4:
The prime contractor tracks the software subcontractor's actual results and performance against its
commitments.
• Software Quality Assurance
o GOAL 1:
Software quality assurance activities are planned.
o GOAL 2:
Adherence of software products and activities to the applicable standards, procedures, and
requirements is verified objectively.
o GOAL 3:
Affected groups and individuals are informed of software quality assurance activities and results.
o GOAL 4:
Noncompliance issues that cannot be resolved within the software project are addressed by senior
management.
• Software Configuration Management
o GOAL 1:
Software configuration management activities are planned.
o GOAL 2:
Selected software work products are identified, controlled, and available.
o GOAL 3:
Changes to identified software work products are controlled.
o GOAL 4:
Affected groups and individuals are informed of the status and content of software baselines.
Level 3 - Defined
Level 4 - Managed
• Quantitative Process Management
o GOAL 1:
o GOAL 2:
• Software Quality Management
o GOAL 1:
o GOAL 2:
Level 5 - Optimizing
• Defect Prevention
o GOAL 1:
o GOAL 2:
• Technology Change Management
o GOAL 1:
o GOAL 2:
• Process Change Management
o GOAL 1:
o GOAL 2:
The interrelationship of the terms discussed above can be best shown by the following diagram:
The Structure of Capability Maturity Model
ISO
4.1 What is ISO?
ISO or the International Organization for Standardization is a non-governmental organization that was established in
1947. ISO includes a network of 146 national standards bodies (as of 12/31/02) from the world’s leading industrial
nations. One of the main goals of ISO is to develop worldwide standardization by promoting the adoption of international quality standards. By doing so, barriers to trade are eliminated.
ISO has created 13,736 standards as of 12/31/02 in a variety of industries. Examples of standards ISO has created
include the standardized codes for country names, currencies and languages, standardized format of worldwide
telephone and banking cards, as well as sizes and colors of road signs, and automobile bumper heights.
ISO includes 2,937 technical working bodies (as of 12/31/02), in which some 30,000 experts from industry, labor,
government, and standardization bodies in all parts of the world develop and revise standards. ISO has created
standards for the automotive, manufacturing, mechanics, packaging, and health care fields amongst many others.
The ISO standards are structured around the Process Approach concept. Two of the eight quality management
principles are key to understanding this principle:
• Process Approach - Understand and organize company resources and activities to optimize how the
organization operates.
• System Approach to Management - Determine sequence and interaction of processes and manage them as
a system. Processes must meet customer requirements.
Therefore, when company resources and activities are optimally organized, and managed as a system, the desired
result is achieved more efficiently.
In order to effectively manage and improve your processes, use the Plan-Do-Check-Act or PDCA cycle as a guide.
First, you Plan by defining your key processes and establishing quality standards for those processes. Next, you Do by
implementing the plan. Thirdly, you Check by using measurements to assess compliance with your plan, and finally,
you Act by continuously improving your product performance.
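The Plan-Do-Check-Act loop described above can be sketched in code. This is purely illustrative; the stage functions and the defect-rate example are hypothetical, not part of any standard.

```python
# Illustrative sketch of the Plan-Do-Check-Act cycle as a loop.
# The stage functions passed in are hypothetical placeholders.

def pdca(plan, do, check, act, max_cycles=3):
    """Run PDCA cycles; return the cycle number at which `check` passes."""
    for cycle in range(1, max_cycles + 1):
        target = plan(cycle)        # Plan: define the process and a quality target
        result = do(target)         # Do: implement the plan
        ok = check(target, result)  # Check: measure results against the target
        act(ok)                     # Act: adopt, abandon, or adjust the change
        if ok:
            return cycle
    return None

# Toy usage: each cycle of "improvement" halves a defect rate of 8
state = {"defect_rate": 8}
cycle = pdca(
    plan=lambda c: 2,  # target: at most 2 defects
    do=lambda t: state.update(defect_rate=state["defect_rate"] // 2) or state["defect_rate"],
    check=lambda t, r: r <= t,
    act=lambda ok: None,
)
print(cycle)  # 2  (8 -> 4 -> 2; the target is met on the second cycle)
```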
What are the ISO Elements?
ISO standards are documented rules and guidelines for implementing a quality system into your company. Specific
technical specifications and/or other specific criteria may also be included depending on the standard you select.
The ISO 9001 standard is a model of a quality system, describing the processes and resources required for registration
of a company's quality system. This ISO System diagram shows the management system and processes that are part
of the ISO quality management standard. A brief summary of the key requirements is detailed below.
• QMS - Document processes necessary to ensure product or service is of high quality and conforms to
customer requirements.
• Management Responsibility - Provide a vision. Show commitment. Focus on the customer. Define policy.
Keep everyone informed.
• Resource Management - Assign the right person to the job. Create and maintain positive workspace.
• Product Realization - Clearly understand customer, product, legal and design requirements. Ensure
specifications are followed. Check your suppliers.
• Measurement, Analysis & Improvement - Identify current and potential problems. Monitor and measure
customer satisfaction. Perform internal audits. Fix problems.
Implementing ISO in your company is a management decision that requires consideration of your organization’s
operations, strategy, staff and, most importantly, your customers.
ISO standards are now readily being applied by organizations in industries ranging from manufacturers and labs to auto
suppliers and pharmaceuticals. In many instances, the choice to implement an ISO standard into a company is not only
the result of a company seeking to improve quality, efficiency, and profitability, but also as a result of ISO
implementation being:
• Mandated by certain industry leaders, as the Big Three (DaimlerChrysler, Ford and GM) have required of their automotive suppliers (see ISO/TS 16949 for more information on deadlines)
• Required by your Customers, especially internationally-focused businesses
• Required by overseas regulatory bodies for suppliers of quality-sensitive products, e.g. medical devices
• Necessary to maintain market presence and a competitive advantage
For whatever reason your company decides to pursue or update its ISO certification, you need to consider the benefits
and costs involved with this process.
ISO standards are a guide that can help transform your company’s quality system into an effective system that meets
and exceeds customer expectations. Your company will start to realize these benefits as you implement and adhere to
the quality standards, and you will see the internal and external benefits accrue over time.
Internally, processes will be aligned with customer expectations and company goals, therefore forming a more
organized operating environment for your management and employees. Product and service quality will improve which
decreases defects and waste. Process improvements will help to motivate employees and increase staff involvement.
Products and services will be continually improved. All of these internal benefits will continually drive better financial
results, hence creating more value for your business.
As for the external benefits, ISO certification shows your customers and suppliers worldwide that your company desires
their confidence, satisfaction and continued business. Your company also has the opportunity to increase its
competitive advantage, retain and build its customer list, and more easily respond to market opportunities around the
world.
Although the costs of implementation can be offset with increased sales, reduced defects and improved productivity
throughout the organization, the investment of implementing and maintaining an ISO quality system needs to be
considered.
Many factors should be considered when calculating your company’s ISO implementation costs. The time, effort and
money your organization puts into ISO registration depends on the number of employees, locations, the ISO standard
selected for registration and the current state of your quality system and processes. Typical costs include:
• Upgrading and creating documentation
• Training employees
• Registration fees
• Maintenance
As with implementation of any new tool, the key to minimizing costs is to arm yourself with knowledge about the
process, and then to design a sensible plan that has realistic objectives, adequate resources and a practical time
schedule. Having a leader or consultant to guide you through the process and manage deadlines can also help you to
control costs and achieve your goals more quickly. In addition, if you have multiple locations or departments, costs can
be minimized by leveraging the information you learn and the resources you use as you move through the
implementation and maintenance process.
ISO 9001 defines the rules and guidelines for implementing a quality management system into organizations of any size
or description. The standard includes process-oriented quality management standards that have a continuous
improvement element. Strong emphasis is given to customer satisfaction. ISO 9001 registered companies can give their
customer important assurances about the quality of their product and/or service.
If your company is currently registered to the ISO 9001:1994 standard, you must update your quality system to the ISO
9001:2000 standard by December 15, 2003. Additionally, companies registered to the discontinued ISO 9002 or ISO
9003 must also transition to the ISO 9001:2000 standard by December 15, 2003 to maintain a valid certification.
4.5.2 ISO 14001
ISO 14001 defines Environmental Management best practices for global industries. The standard is structured like the
ISO 9001 standard. ISO 14001 gives Management the tools to control environmental aspects, improve environmental
performance and comply with regulatory standards. The standards apply uniformly to organizations of any size or
description.
4.5.3 Automotive: ISO/TS 16949
ISO/TS 16949 defines global quality standards for the automotive supply chain. These QMS standards are gradually
replacing the multiple national specifications now used by the sector. The main focus of these standards is on COPs (Customer Oriented Processes) and how each key process relates to the company strategy.
Depending on your place in the automotive supply chain or the current standard to which you subscribe, ISO/TS 16949
compliance dates vary:
• For DaimlerChrysler’s Tier 1 suppliers worldwide, the transition from QS-9000 to ISO/TS 16949 must be
complete by July 1, 2004
• For the Big Three’s Tier 1 suppliers worldwide, the transition from QS-9000 to ISO/TS 16949 must be complete
by December 14, 2006
• For other OEMs, evidence suggests that the transition deadline most likely will be in accordance with a 2006
deadline
Additionally, ISO 9001/2/3:1994 registered companies are required to upgrade their system to the ISO 9001:2000
standard by December 15, 2003. If you are one of the many automotive suppliers currently registered to both QS-9000 (the standard on which ISO/TS 16949 is based) and ISO 9001/2/3:1994, your company should also consider a
transition to ISO/TS 16949 by December 15, 2003. For practical reasons, it may be difficult, confusing and costly to
meet both the QS-9000 and the revised ISO 9001 standard requirements and then have to upgrade your system to
ISO/TS 16949 shortly thereafter.
4.5.4 ISO 17025
ISO 17025 contains specific calibration and testing lab requirements in addition to the ISO 9001 quality standards. The
central focus of these standards is on calculation of measurement uncertainty as well as assuring quality and
repeatability of measurement results. ISO 17025 applies to independent and in-house labs.
PDCA
5.1 Description
The PDCA (or PDSA) Cycle was originally conceived by Walter Shewhart in the 1930s and later adopted by W. Edwards
Deming. The model provides a framework for the improvement of a process or system. It can be used to guide the
entire improvement project, or to develop specific projects once target improvement areas have been identified.
5.2 Use
The PDCA cycle is designed to be used as a dynamic model. The completion of one turn of the cycle flows into the
beginning of the next. Following in the spirit of continuous quality improvement, the process can always be reanalyzed
and a new test of change can begin. This continual cycle of change is represented in the ramp of improvement. Using
what we learn in one PDCA trial, we can begin another, more complex trial.
Plan - a change or a test, aimed at improvement.
In this phase, analyze what you intend to improve, looking for areas that hold opportunities for change. The
first step is to choose areas that offer the most return for the effort you put in - the biggest bang for your buck.
To identify these areas for change, consider using a Flow Chart or Pareto Chart.
Do - Carry out the change or the test, preferably on a small scale.
Check or Study - the results. What was learned? What went wrong?
This is a crucial step in the PDCA cycle. After you have implemented the change for a short time, you must
determine how well it is working. Is it really leading to improvement in the way you had hoped? You must
decide on several measures with which you can monitor the level of improvement. Run Charts can be helpful
with this measurement.
Act - Adopt the change, abandon it, or run through the cycle again.
After planning a change, implementing and then monitoring it, you must decide whether it is worth
continuing that particular change. If it consumed too much of your time, was difficult to adhere to, or even led to
no improvement, you may consider aborting the change and planning a new one. However, if the change led to
a desirable improvement or outcome, you may consider expanding the trial to a different area, or slightly
increasing its complexity. This sends you back into the Plan phase and can be the beginning of the ramp of
improvement.
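The repeated Plan-Do-Check-Act turns described above can be sketched in a few lines of code. Note that the trial names, the scores, and the "keep the change only if the measure improved" rule below are purely illustrative assumptions, not part of the PDCA definition:

```python
# A minimal sketch of repeated PDCA turns: each trial plans and carries out
# a change, the Check step studies the measured result, and the Act step
# adopts the change only if the measure improved -- otherwise it is abandoned
# and a new change is planned on the next turn of the cycle.

def run_pdca(trials):
    """Each trial is a (change, measured_result) pair; return the adopted change."""
    adopted, best_result = None, float("-inf")
    for change, result in trials:          # Plan + Do: try the next change
        if result > best_result:           # Check: did our chosen measure improve?
            adopted, best_result = change, result   # Act: adopt this change
        # otherwise: abandon the change and plan a new one (next iteration)
    return adopted

# Illustrative trials loosely modeled on Isabel's three cycles (scores invented):
trials = [
    ("add 15 study hours", 60),
    ("rewrite lecture notes", 75),
    ("study while exercising", 90),
]
print(run_pdca(trials))
```

Each turn here feeds its measurement into the comparison against the best result so far, mirroring how one PDCA trial informs the next, more refined trial.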
5.3 Examples
Personal Improvement - Example 1: The student with poor grades
Improving Patient Compliance in Personal Health Maintenance - Example 2: The businesswoman who wants to lose weight
Student Section: Improving Your History-Taking Skills - Example 3: Feedback for the medical student
Clinician Section: Improving Your Office - Example 4: The medical student who made a difference
5.3.1 Personal Improvement
The PDCA cycle is a valuable process that can be applied to practically anything. In this chapter, we discuss cases
related to patient care and medical student performance, but the PDCA cycle can be used in everything from making a
meal to walking your dog. An immediate concern of yours may be improving your study skills. Consider Isabel, the
student with poor grades from Example 1.
• What is she trying to accomplish? Isabel knows that she needs to improve her studying skills in order to gain a
better understanding of the material.
• How will she know that a change is an improvement? Isabel considers the most important measure of her
study skills to be her exam grades. However, she does not want to risk another exam period just to find out that
her skills are still not good. She decides that a better way to measure improvement is by taking old exams.
• What changes can she make that will result in improvement? Isabel thinks that she has spent too little time
studying. She feels that the best way to improve her study skills is by putting in more hours.
Cycle 1
Plan: Isabel decides to add an additional thirty hours per week to her already busy schedule. She resolves that she
must socialize less, get up earlier, and stay up later. At the end of the week she will take an old exam to see how she is
progressing.
Do: By the end of the week, Isabel finds that she was able to add only fifteen hours of studying. When she takes the
exam she is dismayed to find that she does no better.
Check: The fifteen extra hours of studying have made Isabel feel fatigued. In addition, she finds that her ability to
concentrate during those hours is rather limited. She has not exercised all week and has not seen any of her friends.
This forced isolation is discouraging her.
Act: Isabel knows that there must be another way. She needs to design a better, more efficient way to study that will
allow her time to exercise and socialize.
Cycle 2
Plan: Isabel contacts all her medical school friends who she knows are doing well yet still have time for outside lives.
Many of these friends have similar advice that Isabel thinks she can use. Based on her findings, she decides to always
attend lectures, to rewrite her class notes in a format she can understand and based on what the professor has
emphasized, and to use the assigned text only as a reference.
Do: Isabel returns to her original schedule of studying. However, instead of spending a majority of her time poring over
the text, she rewrites and studies her notes. She goes to the text only when she does not understand her notes. When
Isabel takes one of the old exams, she finds that she has done better, but she still sees room for improvement.
Check: Isabel now realizes that she had been spending too much time reading unimportant information in the required
text. She knows that her new approach works much better, yet she still feels that she needs more studying time. She is
unsure what to do, because she doesn't want to take away from her social and physically active life.
Act: Isabel decides to continue with her new studying approach while attempting to find time in her busy day to study
more.
Cycle 3
Plan: In her search for more time to study, Isabel realizes that there are many places that she can combine exercising
and socializing with studying. First, she decides to study her rewritten notes while she is exercising on the Stairmaster.
Next, she intends to spend part of her socializing time studying with her friends.
Do: Isabel's friends are excited about studying together, and their sessions turn into a fun and helpful use of everyone's
time. Isabel has found that she enjoys studying while she exercises. In fact, she discovers that she remains on the
Stairmaster longer when she's reading over her notes. When Isabel takes her exams this week, she is happy to find
that her grades are significantly higher.
Check: Isabel now knows that studying does not mean being locked up in her room reading hundreds of pages of text.
She realizes that she can gain a lot by studying in different environments while focusing on the most important points.
Act: Isabel chooses to continue with the changes she has made in her studying habits.
What Isabel initially thought would be an improvement turned out to only discourage her further. Many people who are
in Isabel's place do not take the time to study their changes and continue them even though they lead down a
disheartening path. By using the PDCA cycle, Isabel was able to see that her initial change did not work and that she
had to find one that would better suit her. With perseverance and the willingness to learn, Isabel was able to turn a
negative outcome into a positive improvement experience.
5.3.2 Improving Patient Compliance in Personal Health Maintenance
• What is she trying to accomplish? Mrs. T and her doctor are trying to find and implement a viable exercise
regimen for her. The goal is to design an exercise schedule that the patient can maintain despite traveling four
days a week on business.
• How will she know that a change is an improvement? Improvement will be measured by how frequently she
exercises and for how long, and whether her blood pressure decreases.
• What changes can she make that will result in improvement? The doctor and patient need to design a plan that
she enjoys as well as one that she can (and will) follow, even when she is traveling.
Cycle 1
Plan: Ride an exercise bike four days a week for twenty minutes. To continue her exercise program while traveling, Mrs.
T will make reservations only at hotels equipped with gyms. She will also lease an exercise bike for her home.
Do: Mrs. T tries to exercise four days a week for twenty minutes. The patient finds that the exercise bike is too difficult
and makes her back sore. She can ride for only three minutes before she gets dizzy and has to stop. Mrs. T finds that at
hotels, it is hard to get time on the bike, since there are usually many people who want to use it.
Check: Mrs. T exercised only one day a week and could go for only three minutes. The patient is not motivated to use
the exercise bike because she doesn't enjoy it. Also, the hassle of using bikes at hotels is a big hindrance. Mrs. T
needs to find an exercise that permits her to set her own pace and her own hours.
Act: Mrs. T and her doctor decide to find a different program.
Cycle 2
Plan: Mrs. T will try a treadmill instead of the exercise bike.
Do: Mrs. T tries to exercise four days a week for twenty minutes, but can go for only about five minutes before she gets
bored. Also, she feels sick after getting off the treadmill. There was no problem finding an available treadmill at the
hotels.
Check: Mrs. T exercised twice a week for five minutes. However, the patient did not enjoy it. She enjoys the walking but
has trouble with motion sickness.
Act: Mrs. T will continue to walk but will walk outside to avoid inconvenient gym hours and the motion sickness. The
patient considers purchasing a dog, knowing that this will provide greater motivation to walk and make it more
enjoyable.
Cycle 3
Plan: Mrs. T will get a dog and walk it every morning she is home. When she is away, she will try to take a short sight-
seeing trip on foot, while her husband takes care of their dog at home.
Do: Mrs. T exercises as frequently as possible. She finds walking her dog very enjoyable and does it every day she is
home (approximately three days a week) for about forty-five minutes. When she is away, she tries to take a walking tour
of the city. This isn't always possible but occurs about 50 percent of the time.
Check: Mrs. T exercises three to six days a week for at least twenty minutes. She finds walking the dog most enjoyable
because of the early-morning fresh air. Her blood pressure has become less elevated as well.
Act: Now that she has found a program she enjoys, Mrs. T decides to commit herself to this new exercise regimen:
walking the dog and sight-seeing by foot.
By directly considering Mrs. T's needs as well as Mrs. T's likes and dislikes, the physician and the patient were able to
design and implement an unconventional but highly effective exercise program that improved both the physical and the
emotional wellness of the patient.
5.3.3 Student Section: Improving Your History-Taking Skills
• What is he trying to accomplish? Jake would like to improve his history-taking skills.
• How will he know that a change is an improvement? Jake knows that he needs more information concerning
his history-taking skills. The only way he can get that information is through feedback from others in the
medical field. He decides that the most important measure of his performance should come from Dr. Eastman.
• What changes can he make that will result in improvement? Jake is unsure how to answer this question. He
feels confident in his ability to take a patient history. The only weakness he feels is a lack of questions to ask.
Cycle 1
Plan: Jake asks Dr. Eastman to sit in on at least two interviews so that he can receive immediate feedback. On any
interview that Dr. Eastman doesn't sit in on, Jake will see the patient first and report all his findings.
Do: Dr. Eastman is very busy the next time Jake visits him, and he sits in on only one interview. However, he has his
nurse practitioner, Ms. Irvine, observe Jake for two additional interviews. Because Dr. Eastman is so busy, Jake doesn't
have time to report his findings to him.
Check: The feedback that Dr. Eastman and Ms. Irvine gave Jake was very different. Dr. Eastman told Jake that he was
doing a good job but that he forgot to ask a couple of questions in the HPI. Ms. Irvine said that Jake needed to work on
asking open-ended questions and pausing to let the patient think. In addition, she mentioned that he completely left out
the social history.
Act: Jake decides to make some changes that will affect both his history taking and the feedback he is receiving. He
needs more feedback from both Dr. Eastman and Ms. Irvine, in addition to other sources such as his classmates and
the doctors he works with at school.
Cycle 2
Plan: Jake decides to continue receiving regular feedback from both Dr. Eastman and Ms. Irvine. He specifically asks
Dr. Eastman what questions he may have missed while interviewing and what the doctor thinks of his interviewing style.
Jake also works with other medical students at mock interviewing. He tries to find a group of four so that two can watch
and critique while Jake interviews the fourth student. Finally, DMS tests its students' interviewing skills twice a year
during objective structured clinical encounters (OSCEs). In this process, medical students are videotaped while they
interview patients (paid actors). Jake just went through his first OSCE a month ago. He received feedback from the
mock patient he interviewed, but he also wants feedback from some of the physicians who run the OSCE program. He
sets up a time to meet with them to watch his video.
Do: It takes only two weeks for Jake to receive more feedback. Dr. Eastman seems more comfortable criticizing Jake
now that he knows what he wants. Also, Jake and his fellow classmates have a lot of fun doing the mock interviews.
Check: Jake receives a lot more feedback from Dr. Eastman, who notes that Jake tends to rush patients and ask
closed-ended (yes or no) questions. "Take the time to let them tell their story," Ms. Irvine tells him. In the OSCE
videotape, Jake and the physician who watched it with him notice that he needs to work on his skills taking blood
pressures, that he missed the social history, and that he didn't ask any questions regarding the patient's habits. In
addition, the videotape reveals Jake's poor habit of rushing the patient and asking closed-ended questions. In the mock
interviews with his peers, Jake notices that he is slowing down and does a better job covering the social history aspect
of the interview.
Act: Jake decides to continue receiving regular feedback from Dr. Eastman and Ms. Irvine. He also continues to meet
with his peers to work on his interviewing skills and receive criticism from them. Jake works on all the weaknesses he
discovers in these learning sessions when he sees real patients in Dr. Eastman's office.
Jake's major improvements came from his ability to study his changes in the check phase of the PDCA cycle. In this
phase, Jake was able to recognize that Dr. Eastman and Ms. Irvine provided different kinds of feedback. This
knowledge led him to a second PDCA cycle in which he experimented with using more and different health care
professionals to test his history-taking performance. As Jake proceeds with each cycle, he will gain more knowledge
and continue to improve his history-taking skills.
5.3.4 Clinician Section: Improving Your Office
Cycle 1
Plan: Tucker asked his preceptor for all her referrals in the past six months. After stratifying the referrals by specialty,
Tucker realized that 70 percent of the patients went to the orthopedics department at the local tertiary care center,
mostly for sprained ankles and knee trauma. He also noted that a number of the initial calls to the family practice came
when the office was closed, on weekends and after 5 p.m. Tucker presented this information to his preceptor, and
together they realized that the practice might benefit from a change in its delivery of orthopedic care. Their plan was
simple: have the orthopedics department at the local hospital train the four physicians in the practice how to treat
sprained ankles and some knee trauma. Since the local hospital physicians are on a salaried status, not fee-for-service,
there is no disincentive for this training.
Do: The family practitioners arranged for a one-week, after-hours training session in these two areas of high-volume
injuries. They decided that they would test this change for two months to determine whether they would be able to
reduce the number of referrals and maintain their patients' continuum of care at the practice. They also decided to stay
open until 9 p.m. every Wednesday and from 10 a.m. to 1 p.m. every Sunday as an open clinic. One physician, one
nurse, and one administrator would staff each open clinic.
Check: The practice is interested in the number and type of referrals, as well as financial productivity. After two months
of implementing this change, the number of orthopedic referrals fell by 30 percent compared with the same period in
previous years. By staying open longer, treating more patients, and referring less, the profits at the practice were 18
percent higher than they were during those two months in any previous year. Further, although they had no formal
metric for patient satisfaction, all four physicians received positive feedback for the orthopedic care they were delivering
and for their new convenient open clinic.
Act: Clearly, this change resulted in major improvement. The physicians decided to institute this change permanently.
Because of its success, the physicians are considering applying this technique to other specialties to which they refer
patients.
As demonstrated by this case study, the PDCA cycle can be applied to any situation. By employing the PDCA cycle, the
family practice first carefully assessed what needed to be changed and then implemented an effective improvement
plan. Implementing an improvement plan that is hastily selected rarely leads to effective change. This family practice did
not fall into the trap of shooting without properly aiming.
SDLC
6.1 Software Development Life Cycle (SDLC)
The software development life cycle (SDLC) is the entire process of formal, logical steps taken to
develop a software product. The phases of SDLC can vary somewhat but generally include the
following:
conceptualization;
requirements and cost/benefits analysis;
detailed specification of the software requirements;
software design;
programming;
testing;
user and technical training;
and finally, maintenance.
There are several methodologies or models that can be used to guide the software development life cycle, including
the waterfall, prototyping, incremental and spiral models, discussed below.
6.2 Waterfall Model
Note that this model is sometimes referred to as the linear sequential model or the software life cycle.
The model consists of six distinct stages, namely:
1. In the requirements analysis phase, (a) the problem is specified along with the desired service objectives (goals),
and (b) the constraints are identified.
2. In the specification phase the system specification is produced from the detailed definitions of (a) and (b)
above. This document should clearly define the product function.
Note that in some text, the requirements analysis and specifications phases are combined and represented
as a single phase.
3. In the system and software design phase, the system specifications are translated into a software
representation. The software engineer at this stage is concerned with:
Data structure
Software architecture
Algorithmic detail and
Interface representations
The hardware requirements are also determined at this stage along with a picture of the overall system
architecture. By the end of this stage the software engineer should be able to identify the relationship between
the hardware, software and the associated interfaces. Any faults in the specification should ideally not be
passed ‘downstream’.
4. In the implementation and testing phase, the designs are translated into the software domain.
Detailed documentation from the design phase can significantly reduce the coding effort.
Testing at this stage focuses on making sure that any errors are identified and that the software
meets its required specification.
5. In the integration and system testing phase all the program units are integrated and tested to ensure that the
complete system meets the software requirements. After this stage the software is delivered to the customer
[Deliverable – The software product is delivered to the client for acceptance testing.]
6. The maintenance phase is usually the longest stage of the software life cycle. In this phase the software is updated
to:
meet the changing customer needs
adapt to accommodate changes in the external environment
correct errors and oversights previously undetected in the testing phases
enhance the efficiency of the software
Observe that feedback loops allow for corrections to be incorporated into the model. For example, a problem/update in
the design phase requires a ‘revisit’ to the specification phase. When changes are made at any phase, the relevant
documentation should be updated to reflect that change.
Advantages
Testing is inherent to every phase of the waterfall model
It is an enforced disciplined approach
It is documentation driven, that is, documentation is produced at every stage
Disadvantages
Although the waterfall model is the oldest and the most widely used paradigm, in practice projects rarely follow its
sequential flow. This is due to the inherent problems associated with its rigid format, namely:
It only incorporates iteration indirectly, thus changes may cause considerable confusion as the project
progresses.
As the client usually has only a vague idea of exactly what is required from the software product, the
waterfall model has difficulty accommodating the natural uncertainty that exists at the beginning of the project.
The customer only sees a working version of the product after it has been coded. This may result in
disaster if any undetected problems are precipitated to this stage.
6.3 Prototyping Model
The Prototyping Model is a systems development method (SDM) in which a prototype (an early approximation of a final
system or product) is built, tested, and then reworked as necessary until an acceptable prototype is finally achieved
from which the complete system or product can now be developed. This model works best in scenarios where not all of
the project requirements are known in detail ahead of time. It is an iterative, trial-and-error process that takes place
between the developers and the users.
1. The new system requirements are defined in as much detail as possible. This usually involves interviewing a
number of users representing all the departments or aspects of the existing system.
2. A preliminary design is created for the new system.
3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down
system, and represents an approximation of the characteristics of the final product.
4. The users thoroughly evaluate the first prototype, noting its strengths and weaknesses, what needs to be
added, and what should be removed. The developer collects and analyzes the remarks from the users.
5. The first prototype is modified, based on the comments supplied by the users, and a second prototype of the
new system is constructed.
6. The second prototype is evaluated in the same manner as was the first prototype.
7. The preceding steps are iterated as many times as necessary, until the users are satisfied that the prototype
represents the final product desired.
8. The final system is constructed, based on the final prototype.
The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to
prevent large-scale failures and to minimize downtime.
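The build-evaluate-rework loop of steps 1 through 8 can be sketched as follows. The function shapes, the remark list, and the round limit are hypothetical illustrations, not part of the prototyping model itself:

```python
# Sketch of the prototyping model's trial-and-error loop: build a prototype,
# have users evaluate it, rework it based on their remarks, and repeat until
# the users are satisfied (i.e., evaluation raises no more remarks).

def prototype_until_accepted(build, evaluate, rework, max_rounds=10):
    """Return (prototype, rounds) once user evaluation raises no more remarks."""
    proto = build()                       # first prototype from the preliminary design
    for rounds in range(1, max_rounds + 1):
        remarks = evaluate(proto)         # users note weaknesses and missing features
        if not remarks:                   # users are satisfied: prototype accepted
            return proto, rounds
        proto = rework(proto, remarks)    # next prototype incorporates the remarks
    return proto, max_rounds

# Toy run: users start with three remarks; each rework resolves one of them.
final, rounds = prototype_until_accepted(
    build=lambda: ["no search", "slow startup", "confusing menu"],
    evaluate=lambda p: p,                 # here the prototype *is* its remark list
    rework=lambda p, remarks: remarks[1:],
)
print(rounds)
```

The loop terminates either when users accept the prototype or when an iteration budget runs out, which mirrors the practical need to cap how many prototype cycles a project can afford.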
6.4 Incremental Model
This model combines the elements of the waterfall model with the iterative philosophy of prototyping. However, unlike
prototyping, the incremental model focuses on the delivery of an operational product at the end of each increment.
An example of this incremental approach is observed in the development of word processing applications where the
following services are provided on subsequent builds:
1. Basic file management, editing and document production functions
2. Advanced editing and document production functions
3. Spell and grammar checking
4. Advance page layout
The first increment is usually the core product, which addresses the basic requirements of the system. This may
either be used by the client or subjected to detailed review to develop a plan for the next increment. This plan
addresses the modification of the core product to better meet the needs of the customer, and the delivery of additional
functionality. More specifically, at each stage:
· The client assigns a value to each build not yet implemented
· The developer estimates cost of developing each build
· The resulting value-to-cost ratio is the criterion used for selecting which build is delivered next
Essentially the build with the highest value-to-cost ratio is the one that provides the client with the most functionality
(value) for the least cost. Using this method the client has a usable product at all of the development stages.
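The value-to-cost selection rule just described can be sketched in a few lines. The build names and figures below are illustrative assumptions, not taken from the text:

```python
# Selecting the next increment: the client assigns a value to each pending
# build, the developer estimates its cost, and the build with the highest
# value-to-cost ratio is delivered next.

def next_build(pending):
    """Return the pending build with the highest value-to-cost ratio."""
    return max(pending, key=lambda b: b["value"] / b["cost"])

pending = [
    {"name": "advanced editing",        "value": 40, "cost": 20},  # ratio 2.0
    {"name": "spell/grammar checking",  "value": 30, "cost": 10},  # ratio 3.0
    {"name": "advanced page layout",    "value": 20, "cost": 25},  # ratio 0.8
]

print(next_build(pending)["name"])  # the spell/grammar build offers most value per cost
```

After each delivery, the chosen build is removed from the pending list and the remaining builds are re-evaluated, so the client always receives the most functionality for the least cost next.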
Incremental Model
Iterative: many releases (increments)
– First increment: core functionality
– Successive increments: add/fix functionality
– Final increment: the complete product
• Each iteration: a short mini-project with a separate lifecycle
– e.g., waterfall
• Increments may be built sequentially or in parallel
[Figure: Incremental model - increments #1, #2 and #3 each pass through the same phase sequence (A D C T M) to produce successive versions of the product, delivered over time.]
6.5 Spiral Model
The spiral model is a software development model combining elements of both design and
prototyping-in-stages, in an effort to combine advantages of top-down and bottom-up
concepts.
The spiral model was defined by Barry Boehm. It was not the first model to discuss iteration, but it was the first to
explain why iteration matters. As originally
envisioned, the iterations were typically 6 months to 2 years long. This persisted until around
2000.
Each phase starts with a design goal (such as a user interface prototype as an early phase)
and ends with the client (which may be internal) reviewing the progress thus far.
Analysis and engineering efforts are applied to each phase of the project, with an eye toward
the end goal of the project.
So, for a typical shrink-wrap application, this might mean that you have a rough-cut of user
elements (without the pretty graphics) as an operable application, add features in phases, and,
at some point, add the final graphics.
The spiral model is not used today (2004) as such. However, it has influenced the modern-day
concept of agile software development, which tends to be rather more extreme in its approach
than the spiral model.
QUALITY
7.1 What is Quality?
Quality is the customer’s perception of how well a good or service fits their purpose and satisfies both stated and
implicit specifications.
Quality in an organization is best achieved by Management creating a Quality Management System (QMS). A QMS is a
formalized system that documents the company structure, management and employee responsibilities, and the
procedures required to deliver a quality product or service. Four quality tools should be utilized when creating a QMS: a
Quality Manual, Standard Operating Procedures (SOPs), work instructions, and supporting documentation such as flowcharts
and quality records. All four tools must be consistent, coherent and work together to increase the perceived value of the
good or service.
Quality Management is effectively managing your company QMS to achieve maximum customer satisfaction at the
lowest overall cost. Quality Management (QM) is a continuous process that requires inputs of time, effort and
commitment from all company resources. It rests on eight management principles:
1. Customer Focus - Understand your customer’s needs. Measure customer satisfaction. Strive to exceed their
expectations.
2. Leadership - Management establishes the strategy and leads the company toward achieving its objectives.
Management creates an environment that encourages staff to continuously improve and work towards
satisfying the customer.
3. People Involvement - Train your staff effectively. Teamwork and full employee involvement make quality a
reality.
4. Continuous Improvement - Continue to make things better.
5. Process Approach - Understand and organize company resources and activities to optimize how the
organization operates.
6. Factual Approach to Decision Making - Make decisions based on the facts. Data must be gathered,
analyzed and assessed against the objectives.
7. System Approach to Management - Determine sequence and interaction of processes and manage them as
a system. Processes must meet customer requirements.
8. Mutually Beneficial Supplier Relationships - Work with your suppliers to produce a win-win outcome.
The quality of a product or service refers to the perception of the degree to which the product or service meets
the customer's expectations.
Quality is essentially about learning what you are doing well and doing it better. It also means finding out what
you may need to change to make sure you meet the needs of your service users.
Quality is defined by the customer. A quality product or service is one that meets customer requirements. Not
all customers have the same requirements so two contrasting products may both be seen as quality products by their
users. For example, one house-owner may be happy with a standard light bulb - they would see this as a quality
product. Another customer may want an energy efficient light bulb with a longer life expectancy - this would be their
view of quality. Quality can therefore be defined as being fit for the customer's purpose.
There are three main ways in which a business can create quality:
One key distinction to make is that there are two common applications of the term Quality as a form of activity or
function within a business. One is Quality Assurance, which is the "prevention of defects", such as the deployment of a
Quality Management System and preventive activities like FMEA. The other is Quality Control, which is the "detection
of defects", most commonly associated with testing that takes place within a Quality Management System, typically
referred to as Verification and Validation.
Quality is about:
3. Product-Based - The product has something that other similar products do not, which adds value.
Typically, these are the stages that organizations implementing a quality system aim to follow:
• Agree on standards. These concern the performance that staff, trustees and users expect from the
organization
• Carry out a self-assessment. This means that you compare how well you are doing against these
expectations.
• Draw up an action plan. This will include what needs to be done, who will do it, how it will be done, and when
• Implement. Do the work
• Review. At this stage, you check what changes have been made and whether they have made the difference
you were hoping to achieve.
• they meet the often conflicting needs and demands of their service users, and that users are satisfied with the
quality of services offered
• they provide users with efficient, consistent services
• the organization is making a real difference
• they can work effectively with limited resources or short-term project funding.
• end customers - people like you and me, looking to buy an iPod or plasma screen television
• organizational customers - for example, a company recording audio CDs would buy in blank CDs, record music
to them and sell them on as a finished product.
Quality, in the eye of the consumer, means that a product must provide the benefits required by the consumer
when it was purchased. If all the features and benefits satisfy the consumer, a quality product has been bought. It is
consumers, therefore, who define quality.
Quality as defined by the consumer, he argued, is more important than price in determining demand for most
goods and services. Consumers will be prepared to pay for the best quality. Value is thus added by creating those
quality standards required by consumers.
Consumer quality standards involve:
7.6 Quality management and software development
7.7 Quality plan
7.8 Quality attributes
The process which is described as Total Quality Management (TQM) involves taking quality to new heights.
When the term 'quality assurance system' is used, it means a formal management system you can use to
strengthen your organization. It is intended to raise standards of work and to make sure everything is done consistently.
A quality assurance system sets out expectations that a quality organization should meet. Quality assurance is the
system set up to monitor the quality and excellence of goods and services.
Quality assurance demands a degree of detail in order to be fully implemented at every step.
• Planning, for example, could include investigation into the quality of the raw materials used in manufacturing,
the actual assembly, or the inspection processes used.
• The Checking step could include customer feedback, surveys, or other marketing vehicles to determine whether
customer needs are being met or exceeded, and why or why not.
• Acting could mean a total revision in the manufacturing process in order to correct a technical or cosmetic
flaw.
Quality assurance verifies that any customer offering, whether new or evolved, is produced and offered with the best possible materials, in the most comprehensive way, and to the highest standards. Quality assurance provides a measurable, accountable process for meeting the goal of exceeding customer expectations.
Essentially, quality control involves the examination of a product, service, or process for certain minimum levels
of quality. The goal of a quality control team is to identify products or services that do not meet a company’s specified
standards of quality. If a problem is identified, the job of a quality control team or professional may involve stopping
production temporarily. Depending on the particular service or product, as well as the type of problem identified,
production or implementation may not cease entirely.
Quality control can cover not just products, services, and processes, but also people. Employees are an
important part of any company. If a company has employees who lack adequate skills or training, have trouble
understanding directions, or are misinformed, quality may be severely diminished. When quality control is considered in
terms of human beings, it concerns correctable issues.
(1) Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is
adequate to ensure a system will meet its objectives.
(2) Quality Control: A set of activities designed to evaluate a developed work product.
The difference is that QA is process oriented and QC is product oriented.
Testing therefore is product oriented and thus is in the QC domain. Testing for quality isn't assuring quality, it's
controlling it.
Quality Assurance makes sure you are doing the right things, the right way. Quality Control makes sure the results of
what you've done are what you expected.
7.12 QA Activity
The mission of the QA Activity is fourfold. QA improves the quality of specifications, through guidelines and
reviews of specifications at critical stages of their development. QA promotes wide deployment and proper
implementation of these specifications through articles, tutorials and validation services. QA communicates the value of
test suites and helps Working Groups produce quality test suites. And QA designs effective processes that, if followed,
will help groups achieve these goals.
The overall mission of the QA Activity is to improve the quality of specification implementation in the field. In
order to achieve this, the QA Activity will work on the quality of the specifications themselves, making sure that each
specification has a conformance section and a primer, is clear, unambiguous, and testable, and maintains consistency
with other specifications; it will also promote the development of good validators, test tools, and harnesses for
implementers and end users to use.
The QA Activity was initiated to address these demands and improve the quality of specifications as well as
their implementation. In particular, the Activity has a dual focus:
(1) To solidify and extend current quality practices regarding the specification publication process,
validation tools, test suites, and test frameworks.
(2) To share with the Web community their understanding of issues related to ensuring and promoting
quality, including conformance, certification and branding, education, funding models, and
relationship with external organizations.
QA activities ensure that the process is defined and appropriate. Methodology and standards development are
examples of QA activities. A QA review would focus on the process elements of a project - e.g., are requirements being
defined at the proper level of detail?
QC activities focus on finding defects in specific deliverables - e.g., are the defined requirements the right
requirements? Testing is one example of a QC activity, but there are others, such as inspections.
Validation and Verification
8.1 V & V
In the process of testing, two terms carry particular significance and need to be understood:
1. Verification
2. Validation
8.1.1 Verification:
"Are we building the product right?” i.e., does the product conform to the specifications?
It is one aspect of testing a product's fitness for purpose.
The verification process consists of static and dynamic parts. E.g., for a software product one can inspect the source
code (static) and run against specific test cases (dynamic). Validation usually can only be done dynamically, i.e., the
product is tested by putting it through typical usages and atypical usages ("Can we break it?").
Static testing - Testing that does not involve the operation of the system or component. Some of these
techniques are performed manually while others are automated. Static testing can be further divided into
2 categories - techniques that analyze consistency and techniques that measure some program property.
Consistency techniques - Techniques that are used to ensure program properties such as correct syntax, correct
parameter matching between procedures, correct typing, and correct requirements and specifications translation.
Measurement techniques - Techniques that measure properties such as error proneness, understandability, and well-structuredness.
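As an illustration of a consistency technique, the sketch below uses Python's ast module to check one such property statically - that calls to locally defined functions pass the declared number of arguments. The checker and the source fragment are hypothetical examples, not part of any real tool:

```python
import ast

# Hypothetical source fragment to be checked statically (never executed).
SOURCE = """
def add(a, b):
    return a + b

total = add(1, 2)
bad = add(1)
"""

def check_parameter_matching(source):
    """Flag calls to locally defined functions with the wrong argument count."""
    tree = ast.parse(source)
    arity = {n.name: len(n.args.args) for n in ast.walk(tree)
             if isinstance(n, ast.FunctionDef)}
    problems = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            expected = arity.get(node.func.id)
            if expected is not None and len(node.args) != expected:
                problems.append((node.lineno, node.func.id))
    return problems

print(check_parameter_matching(SOURCE))  # [(6, 'add')] - the add(1) call
```

Note that the program is analyzed without being run, which is exactly what distinguishes static testing from dynamic testing.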
8.2 Validation:
"Are we building the right product?", i.e., does the product do what the user really requires?
Validation is the complementary aspect. Often one refers to the overall checking process as V & V.
• Formal methods - Formal methods is not only a verification technique but also a validation technique.
Formal methods means the use of mathematical and logical techniques to express, investigate, and
analyze the specification, design, documentation, and behavior of both hardware and software.
• Fault injection - Fault injection is the intentional activation of faults by either hardware or software
means to observe the system operation under fault conditions.
• Hardware fault injection - Can also be called physical fault injection because we are actually
injecting faults into the physical hardware.
• Software fault injection - Errors are injected into the memory of the computer by software
techniques. Software fault injection is basically a simulation of hardware fault injection.
• Dependability analysis - Dependability analysis involves identifying hazards and then proposing
methods that reduce the risk of the hazard occurring.
• Hazard analysis - Involves using guidelines to identify hazards, their root causes, and possible
countermeasures.
• Risk analysis - Takes hazard analysis further by identifying the possible consequences of each
hazard and their probability of occurring.
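A minimal software fault-injection sketch (all names here are illustrative): the fault is activated deliberately by substituting a failing dependency, so the system's behavior under fault conditions can be observed:

```python
class DiskError(Exception):
    """Fault type injected for the experiment."""

def read_sensor():
    return 42  # normal operation

def read_sensor_faulty():
    raise DiskError("injected fault")  # the intentionally activated fault

def monitored_read(reader):
    # System under test: must degrade gracefully when its dependency fails.
    try:
        return reader()
    except DiskError:
        return -1  # documented fallback value

print(monitored_read(read_sensor))         # 42: no fault
print(monitored_read(read_sensor_faulty))  # -1: fault injected and handled
```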
Verification ensures the product is designed to deliver all functionality to the customer; it typically involves
reviews and meetings to evaluate documents, plans, code, requirements and specifications;
Validation ensures that functionality, as defined in requirements, is the intended behavior of the product;
validation typically involves actual testing and takes place after verifications are completed.
Testing Lifecycle
9.1 Phases of Testing Life cycle
The testing life cycle ensures that all the relevant requirements (inputs) are obtained, planning is adequately
carried out, and the test cases are designed and executed as per plan. It also ensures that the results are
obtained, reviewed and monitored.
Test Requirements
Test Planning
Test Design
Test Environment
Test Execution
Final Reporting
Testing Methods
10.1 Methods of Testing
There are two primary methods of testing. They are
1. Functional or Black Box testing
2. Logical or White box Testing.
WHITE BOX TESTING
11 White Box Testing
White box testing is a test case design approach that employs the control architecture of the procedural design
to produce test cases. Using white box testing approaches, the software engineer can produce test cases that
(1) guarantee that all independent paths in a module have been exercised at least once,
(2) exercise all logical decisions,
(3) execute all loops at their boundaries and within their operational bounds, and
(4) exercise internal data structures to maintain their validity.
11.2.1.1 Statement Coverage:
In this type of testing the code is executed in such a manner that every statement of the application is executed
at least once. It helps in assuring that all the statements execute without any side effect.
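A small sketch of the idea, using a hand-instrumented (hypothetical) function: the two chosen inputs together execute every statement at least once:

```python
executed = set()  # records which statements actually ran

def classify(x):
    executed.add("s1")
    if x < 0:
        executed.add("s2")
        sign = "negative"
    else:
        executed.add("s3")
        sign = "non-negative"
    executed.add("s4")
    return sign

# A single input cannot reach both branches; these two inputs together
# execute every statement at least once.
classify(-5)
classify(3)
print(executed == {"s1", "s2", "s3", "s4"})  # True: full statement coverage
```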
[Figure: flow chart and corresponding flow graph with nodes 1-11, illustrating nodes, predicate nodes, edges, and regions]
11.2.7 Cyclomatic Complexity
As we have seen before, McCabe's cyclomatic complexity is a software metric that offers an indication of the
logical complexity of a program. When used in the context of the basis path testing approach, the value computed
for cyclomatic complexity defines the number of independent paths in the basis set of a program and offers an upper
bound for the number of tests that ensures all statements have been executed at least once. An independent path is any
path through the program that introduces at least one new group of processing statements or a new condition. A set of
independent paths for the example flow graph is:
Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-11
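Cyclomatic complexity can be computed directly from the flow graph as V(G) = E - N + 2, where E is the number of edges and N the number of nodes. The sketch below uses an assumed four-node graph, not the figure from the text:

```python
def cyclomatic_complexity(edges):
    """V(G) = E - N + 2 for a connected flow graph given as an edge list."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Flow graph of a single if/else decision: 4 edges, 4 nodes -> V(G) = 2,
# i.e. two independent paths in the basis set.
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
print(cyclomatic_complexity(edges))  # 2
```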
[Figure: a flow graph with five nodes (1-5) connected by edges a-g, together with its graph matrix; each matrix cell holds the letter of the edge connecting the row node to the column node.]
In the graph and matrix, each node is represented by a number and each edge by a letter. A letter is entered
into the matrix cell corresponding to the connection between two nodes. By adding a link weight to each matrix entry the graph
matrix can be used to examine program control structure during testing. In its basic form the link weight is 1 or 0. The
link weights can be given more interesting characteristics:
• The probability that a link will be executed.
• The processing time expended during traversal of a link
• The memory required during traversal of a link
Although basis path testing is simple and highly effective, it is not enough in itself. Next we consider variations on
control structure testing that broaden testing coverage and improve the quality of white box testing.
11.2.12 Condition Testing
Condition testing is a test case design approach that exercises the logical conditions contained in a program
module. A simple condition is a Boolean variable or a relational expression, possibly with one NOT operator. A
relational expression takes the form E1 <relational-operator> E2, where E1 and E2 are arithmetic expressions and <relational-operator> is one of the following: <, ≤, =,
≠ (nonequality), >, or ≥. A compound condition is made up of two or more simple conditions, Boolean operators,
and parentheses. We assume that the Boolean operators allowed in a compound condition include OR, AND and NOT.
The condition testing method concentrates on testing each condition in a program. The purpose of condition
testing is to determine not only errors in the conditions of a program but also other errors in the program. A number of
condition testing approaches have been identified. Branch testing is the most basic. For a compound condition, C, the
true and false branches of C and each simple condition in C must be executed at least once.
Domain testing requires three or four tests to be produced for a relational expression. For a relational
expression of the form E1 <relational-operator> E2, three tests are required: making the value of E1 greater than, equal to, and less than E2, respectively.
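A branch-testing sketch for a compound condition (the condition and function are hypothetical examples): the three inputs drive the compound condition, and each simple condition within it, to both true and false outcomes:

```python
def under_test(a, b):
    # Compound condition C = (a > 1) AND (b == 0).
    if a > 1 and b == 0:
        return "taken"
    return "not taken"

cases = [
    ((2, 0), "taken"),      # C true  (a > 1 true,  b == 0 true)
    ((0, 0), "not taken"),  # C false (a > 1 false)
    ((2, 5), "not taken"),  # C false (b == 0 false)
]
for (a, b), expected in cases:
    assert under_test(a, b) == expected
print("branch and condition tests passed")
```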
Loops are the basis of most algorithms implemented in software. However, we often do not consider them
when conducting testing. Loop testing is a white box testing approach that concentrates on the validity of loop
constructs. Four classes of loops can be defined: simple loops, concatenated loops, nested loops, and unstructured loops.
11.2.13.2 Nested Loop
Nested loop: For the nested loop, the number of possible tests increases as the level of nesting grows, which would
result in an impractical number of tests. An approach that helps to limit the number of tests:
• Start at the innermost loop. Set all other loops to minimum values.
• Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration
parameter values.
• Work outward, performing tests for the next loop, but keeping all other outer loops at minimum values and
other nested loops to “typical” values.
• Continue until all loops have been tested.
Concatenated loops can be tested using the techniques outlined for simple loops, if each of the loops is
independent of the other. When the loops are not independent the approach applied to nested loops is
recommended.
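The nested-loop strategy above can be sketched as follows (loop bounds and "typical" values are assumed for illustration):

```python
def run(outer_n, inner_n):
    # Unit under test: a nested loop whose body executes outer_n * inner_n times.
    count = 0
    for _ in range(outer_n):
        for _ in range(inner_n):
            count += 1
    return count

OUTER_MIN, INNER_MAX = 1, 5
# Steps 1-2: simple-loop tests on the innermost loop, outer held at minimum.
for n in (0, 1, 2, INNER_MAX - 1, INNER_MAX):
    assert run(OUTER_MIN, n) == n
# Step 3: work outward, keeping the inner loop at a "typical" value.
TYPICAL_INNER = 2
for n in (0, 1, 3):
    assert run(n, TYPICAL_INNER) == n * TYPICAL_INNER
print("nested-loop tests passed")
```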
Example program.
11.3 Advantages of White box testing:
i) As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data
can help in testing the application effectively.
ii) The other advantage of white box testing is that it helps in optimizing the code
iii) It helps in removing the extra lines of code, which can bring in hidden defects.
11.4 Disadvantages of White box testing:
i) As knowledge of code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of
testing, which increases the cost.
ii) And it is nearly impossible to look into every bit of code to find out hidden errors, which may create problems,
resulting in failure of the application.
Black Box Testing
12.1 Black Box Testing:
Black Box Testing is testing without knowledge of the internal workings of the item being tested. For example,
when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the
expected outputs should be, but not how the program actually arrives at those outputs. It is because of this that black
box testing can be considered testing with respect to the specifications, no other knowledge of the program is
necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer
bias toward his own work. Black box testing is sometimes also called "Opaque Testing", "Functional/Behavioral
Testing", or "Closed Box Testing".
In order to implement the Black Box Testing strategy, the tester needs to be thorough with the requirement
specifications of the system and, as a user, should know how the system should behave in response to a particular
action.
Various testing types that fall under the Black Box Testing strategy are: functional testing, stress testing,
recovery testing, volume testing, User Acceptance Testing (also known as UAT), system testing, Sanity or Smoke
testing, load testing, Usability testing, Exploratory testing, ad-hoc testing, alpha testing, beta testing etc.
These testing types are again divided into two groups:
a) testing in which the user plays the role of tester, and
b) testing in which the user is not required.
• Black box testing should make use of randomly generated inputs (only a test range should be specified by the
tester), to eliminate any guess work by the tester as to the methods of the function
• Data outside of the specified input range should be tested to check the robustness of the program
• Boundary cases should be tested (top and bottom of specified range) to make sure the highest and lowest
allowable inputs produce proper output
• The number zero should be tested when numerical data is to be input
• Stress testing should be performed (try to overload the program with inputs to see where it reaches its
maximum capacity), especially with real time systems
• Crash testing should be performed to see what it takes to bring the system down
• Test monitoring tools should be used whenever possible to track which tests have already been performed and
the outputs of these tests to avoid repetition and to aid in the software maintenance
• Other functional testing techniques include: transaction testing, syntax testing, domain testing, logic testing,
and state testing.
• Finite state machine models can be used as a guide to design functional tests
• According to Beizer the following is a general order by which tests should be designed:
• Clean tests against requirements.
• Additional structural tests for branch coverage, as needed.
• Additional tests for data-flow coverage as needed.
• Domain tests not covered by the above.
• Special techniques as appropriate--syntax, loop, state, etc.
• Any dirty tests not covered by the above.
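Several of the black box guidelines above - boundary cases, the number zero, and data outside the specified range - can be sketched against a hypothetical function whose specification is "accept a value from 0 to 100":

```python
import random

def percent(value):
    # Specification known to the black box tester: accept 0..100, echo input.
    if not 0 <= value <= 100:
        raise ValueError("out of range")
    return value

# Boundary cases: top and bottom of the specified range (zero is the bottom).
for v in (0, 100):
    assert percent(v) == v
# Data outside the specified range must be rejected (robustness check).
for v in (-1, 101):
    try:
        percent(v)
        raise AssertionError("should have raised")
    except ValueError:
        pass
# Randomly generated inputs, restricted only to the tester-specified range.
for _ in range(10):
    v = random.randint(0, 100)
    assert percent(v) == v
print("black box boundary tests passed")
```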
Disadvantages of Black Box testing:
• only a small number of possible inputs can actually be tested; to test every possible input stream would take
nearly forever
• without clear and concise specifications, test cases are hard to design
• there may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer
has already tried
• may leave many program paths untested
• cannot be directed toward specific segments of code which may be very complex (and therefore more error
prone)
• most testing related research has been directed toward glass box testing
Levels of testing
Levels of testing
Unit testing. Isn't that some annoying requirement that we're going to ignore? Many developers get very
nervous when you mention unit tests. Usually what comes to mind is a vision of a grand table with every single method listed, along
with the expected results and pass/fail date. It's important, but not relevant in most programming projects.
The unit test will motivate the code that you write. In a sense, it is a little design document that says, "What will this bit
of code do?" Or, in the language of object oriented programming, "What will these clusters of objects do?"
The crucial issue in constructing a unit test is scope. If the scope is too narrow, then the tests will be trivial and the
objects might pass the tests, but there will be no design of their interactions. Certainly, interactions of objects are the
crux of any object oriented design.
Likewise, if the scope is too broad, then there is a high chance that not every component of the new code will
get tested. The programmer is then reduced to testing-by-poking-around, which is not an effective test strategy.
How do you know that a method doesn't need a unit test? First, can it be tested by inspection? If the code is
simple enough that the developer can just look at it and verify its correctness then it is simple enough to not require a
unit test. The developer should know when this is the case.
Unit tests will most likely be defined at the method level, so the art is to define the unit test on the methods that
cannot be checked by inspection. Usually this is the case when the method involves a cluster of objects. Unit tests that
isolate clusters of objects for testing are doubly useful, because they test for failures, and they also identify those
segments of code that are related. People who revisit the code will use the unit tests to discover which objects are
related, or which objects form a cluster. Hence: Unit tests isolate clusters of objects for future developers.
Another good litmus test is to look at the code and see if it throws an error or catches an error. If error handling
is performed in a method, then that method can break. Generally, any method that can break is a good candidate for
having a unit test, because it may break at some time, and then the unit test will be there to help you fix it.
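Following that litmus test, here is a minimal sketch (the parse_port method is a hypothetical example): the method can raise, so it "can break" and therefore earns a unit test:

```python
import unittest

def parse_port(text):
    # Method under test: it can raise, so it can break, so it gets a unit test.
    port = int(text)  # may raise ValueError
    if not 0 < port < 65536:
        raise ValueError("port out of range")
    return port

class ParsePortTest(unittest.TestCase):
    def test_valid(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_not_a_number(self):
        with self.assertRaises(ValueError):
            parse_port("http")

    def test_out_of_range(self):
        with self.assertRaises(ValueError):
            parse_port("70000")

# Run the suite programmatically for this sketch.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParsePortTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```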
The danger of not implementing a unit test on every method is that the coverage may be incomplete. Just because we
don't test every method explicitly doesn't mean that methods can get away with not being tested. The programmer
should know that their unit testing is complete when the unit tests cover at the very least the functional requirements of
all the code. The careful programmer will know that their unit testing is complete when they have verified that their unit
tests cover every cluster of objects that form their application.
Testing will occur throughout the project life cycle, i.e., from Requirements till User Acceptance Testing. The
main objectives of Unit Testing are as follows:
• Screen Functionalities
• Field Dependencies
• Auto Generation
• Algorithms and Computations
• Normal and Abnormal terminations
• Specific Business Rules, if any
Coverage measures applied during unit testing include:
Function coverage
Condition/decision coverage
Path coverage
Loop coverage
Race coverage
Unit Testing Flow:
Coverage measures
Statement coverage:
• Can be applied directly to object code and does not require processing source code.
• Performance profilers commonly implement this measure.
Decision coverage:
• Also known as: branch coverage, all-edges coverage, basis path coverage, decision-decision-path testing.
• "Basis path" testing selects paths that achieve decision coverage.
Advantage:
• Simplicity without the problems of statement coverage.
Disadvantage:
• This measure ignores branches within Boolean expressions which occur due to short-circuit operators.
Method for Condition Coverage:
- Test every condition (sub-expression) in a decision for both true and false outcomes.
- Select a unique set of test cases.
• Reports the true or false outcome of each Boolean sub-expression, separated by logical-and and logical-or if
they occur.
• Condition coverage measures the sub-expressions independently of each other.
Multiple condition coverage:
• Reports whether every possible combination of Boolean sub-expressions occurs. As with condition coverage,
the sub-expressions are separated by logical-and and logical-or, when present.
• The test cases required for full multiple condition coverage of a condition are given by the logical operator truth
table for the condition.
Disadvantage:
• Tedious to determine the minimum set of test cases required, especially for very complex Boolean expressions
• Number of test cases required could vary substantially among conditions that have similar complexity
• Condition/Decision Coverage is a hybrid measure composed of the union of condition coverage and decision
coverage.
• It has the advantage of simplicity but without the shortcomings of its component measures
Path coverage:
• This measure reports whether each of the possible paths in each function has been followed.
• A path is a unique sequence of branches from the function entry to the exit.
• Also known as predicate coverage. Predicate coverage views paths as possible combinations of logical
conditions
• Path coverage has the advantage of requiring very thorough testing
Function coverage:
• This measure reports whether you invoked each function or procedure.
• It is useful during preliminary testing to assure at least some coverage in all areas of the software.
• Broad, shallow testing finds gross deficiencies in a test suite quickly.
Loop coverage
This measure reports whether you executed each loop body zero times, exactly once, twice and more than
twice (consecutively).
For do-while loops, loop coverage reports whether you executed the body exactly once, and more than once.
The valuable aspect of this measure is determining whether while-loops and for-loops execute more than once,
information not reported by other measures.
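A hand-instrumented sketch of loop coverage (illustrative, not a real coverage tool): each call records how many times the loop body executed, bucketed as zero, once, twice, or more than twice:

```python
observed = set()  # iteration-count buckets seen so far

def total(values):
    iterations = 0
    s = 0
    for v in values:
        iterations += 1
        s += v
    observed.add(min(iterations, 3))  # bucket 3 stands for "more than twice"
    return s

# Drive the loop body zero times, once, twice, and more than twice.
for data in ([], [1], [1, 2], [1, 2, 3, 4]):
    total(data)
print(observed == {0, 1, 2, 3})  # True: full loop coverage
```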
Race coverage
This measure reports whether multiple threads execute the same code at the same time.
Helps detect failure to synchronize access to resources.
Useful for testing multi-threaded programs such as in an operating system.
Integration testing (sometimes called Integration and Testing, abbreviated I&T) is the phase of Software testing
in which individual software modules are combined and tested as a group. It follows unit testing and precedes system
testing.
Testing performed to expose faults in the interfaces and in the interaction between integrated components.
Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates,
applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system
ready for system testing.
Integration testing is a logical extension of unit testing. In its simplest form, two units that have already been
tested are combined into a component and the interface between them is tested. A component, in this sense, refers to
an integrated aggregate of more than one unit. In a realistic scenario, many units are combined into components, which
are in turn aggregated into even larger parts of the program. The idea is to test combinations of pieces and eventually
expand the process to test your modules with those of other groups. Eventually all the modules making up a process
are tested together. Beyond that, if the program is composed of more than one process, they should be tested in pairs
rather than all at once. Integration testing identifies problems that occur when units are combined. By using a test plan
that requires you to test each unit and ensure the viability of each before combining units, you know that any errors
discovered when combining units are likely related to the interface between units. This method reduces the number of
possibilities to a far simpler level of analysis.
Purpose
The purpose of integration testing is to verify functional, performance and reliability requirements placed on
major design items. These "design items", i.e. assemblages (or groups of units), are exercised through their interfaces
using Black box testing, success and error cases being simulated via appropriate parameter and data inputs. Simulated
usage of shared data areas and inter-process communication is tested, individual subsystems are exercised through
their input interface. All test cases are constructed to test that all components within assemblages interact correctly, for
example, across procedure calls or process activations, and is done after testing the individual modules, i.e. unit testing. The
overall idea is a "building block" approach, in which verified assemblages are added to a verified base which is then
used to support the Integration testing of further assemblages.
The different types of integration testing are big bang, top-down and bottom-up.
A big bang project is one that has no staged delivery. The customer must wait, sometimes months, before
seeing anything from the development team. At the end of the wait comes a "big bang". A common argument against
big bang projects is that there are no checkpoints during the project where the customer's expectations can be tested,
thus risking that the final delivery is not what the customer had in mind.
All components are tested in isolation, and will be mixed together when we first test the final system.
Disadvantages:
13.1.2.1..2
You can do integration testing in a variety of ways but the following are common strategies:
The top-down approach to integration testing requires that the highest-level modules be tested and integrated first.
This allows high-level logic and data flow to be tested early in the process and it tends to minimize the need for drivers.
However, the need for stubs complicates test management and low-level utilities are tested relatively late in the
development cycle. Another disadvantage of top-down integration testing is its poor support for early release of limited
functionality.
The bottom-up approach requires the lowest-level units be tested and integrated first. These units are
frequently referred to as utility modules. By using this approach, utility modules are tested early in the development
process and the need for stubs is minimized. The downside, however, is that the need for drivers complicates test
management and high-level logic and data flow are tested late. Like the top-down approach, the bottom-up approach
also provides poor support for early release of limited functionality.
Top-down and bottom-up are strategies of information processing and knowledge ordering, mostly involving
software, and by extension other humanistic and scientific System theories.
In the top-down model an overview of the system is formulated, without going into detail for any part of it. Each
part of the system is then refined by designing it in more detail. Each new part may then be refined again, defining it in
yet more detail until the entire specification is detailed enough to validate the model.
[Figure: module hierarchy with M1 at the top, M2-M4 below it, M5-M7 at the next level, and M8 at the bottom]
1. The main control module is used as a test driver and stubs are substituted for all components directly
subordinate to the main module.
2. Depending on the integration approach, subordinate stubs are replaced one at a time with actual
components.
3. Tests are conducted as each component is integrated.
4. Stubs are removed and integration moves downward in the program structure.
Advantage
Can verify major control or decision points early in the testing process.
Disadvantage
Stubs are required when performing the integration testing and, generally, developing stubs is very difficult.
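The top-down steps above can be sketched in miniature (module names are hypothetical): the main control module is exercised first, with a stub substituted for its subordinate component:

```python
def pricing_stub(item):
    # Stub: stands in for the unfinished subordinate pricing module,
    # returning a canned answer.
    return 10.0

def checkout(items, price_of):
    # Main control module under test; price_of is its subordinate component.
    return sum(price_of(item) for item in items)

# Integration test against the stub verifies the control flow early;
# later the stub is replaced by the actual component.
assert checkout(["a", "b", "c"], pricing_stub) == 30.0
print("top-down step passed with stub")
```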
13.1.2.1..2.b Bottom-up
In bottom-up design, first the individual parts of the system are specified in great detail. The parts are then
linked together to form larger components, which are in turn linked until a complete system is formed. This strategy
often resembles a "seed" model, whereby the beginnings are small, but eventually grow in complexity and
completeness.
Major steps
1. Low-level components will be tested individually first.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The driver is removed and integration moves upward in the program structure.
4. Repeat the process until all components are included in the test.
Advantage
Compared with stubs, drivers are much easier to develop.
Disadvantage
Major control and decision problems will be identified later in the testing process.
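A matching bottom-up sketch (again with hypothetical names): a small driver coordinates test case input and output for a low-level utility module before any higher-level module exists:

```python
def normalize(text):
    # Low-level utility module, tested first in bottom-up integration.
    return " ".join(text.split()).lower()

def driver(cases):
    # Driver: feeds test case inputs to the unit and collects the results.
    return [normalize(given) == expected for given, expected in cases]

results = driver([("  Hello   World ", "hello world"), ("A\tB", "a b")])
print(results)  # [True, True]
```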
13.1.3 System Testing:
System testing is testing conducted on a complete, integrated system to evaluate the system's compliance
with its specified requirements. System testing falls within the scope of Black box testing and as such, should require no
knowledge of the inner design of the code or logic. System testing should be performed by testers who are trained to
plan, execute, and report on application and system code. They should be aware of scenarios that might not occur to
the end user, like testing for null, negative, and format inconsistent values.
System testing is actually performed on the entire system against the Functional Requirement Specifications (FRS)
and/or the System Requirement Specification (SRS). Moreover, system testing is an investigatory testing phase,
where the focus is to have almost a destructive attitude and test not only the design, but also the behavior and even the
believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the
software/hardware requirements specification.
Types of System Testing:
• Sanity Testing
• Compatibility Testing
• Recovery Testing
• Usability Testing
• Exploratory Testing
• Ad-hoc Testing
• Stress Testing
• Volume Testing
• Load Testing
• Performance Testing
• Security Testing
13.1.3.1 Sanity Testing:
Testing the major working functionality of the system to check whether the system is working fine for the major testing
effort. This testing is done after the coding and before the main testing phase. The tests performed during sanity testing are
13.1.3.4 Usability Testing:
This testing is also called 'Testing for User-Friendliness'. This testing is done if the User Interface of the
application is an important consideration and needs to be specific for a specific type of user.
This testing is similar to the ad-hoc testing and is done in order to learn/explore the application.
This type of testing is done without any formal Test Plan or Test Case creation. Ad-hoc testing helps in
deciding the scope and duration of the various other testing efforts, and it also helps testers in learning the application prior
to starting with any other testing.
13.1.3.7 Stress Testing:
The application is tested against heavy load, such as complex numerical values, a large number of inputs, a large
number of queries, etc., to check how much stress/load the application can withstand.
13.1.3.8 Volume Testing:
Volume testing checks the efficiency of the application. A huge amount of data is processed through the
application under test in order to find the extreme limitations of the system.
13.1.3.9 Load Testing:
The application is tested against heavy loads or inputs, such as the testing of web sites, in order to find out at what
point the web site/application fails or at what point its performance degrades.
Regression Testing
On each iteration of true regression testing, all existing, validated tests are run, and the new results are
compared to the already-achieved standards. Normally, one or more additional tests are run, debugged, and rerun
until the project successfully passes the test.
Regression tests begin as soon as there is anything to test at all. The regression test suite grows as the project
moves ahead and acquires new or rewritten code. Soon it may contain thousands of small tests, which can only be run
in sequence with the help of an automated test management tool like Test Complete.
Regression testing is the selective retesting of a software system that has been modified, to ensure that any bugs
have been fixed, that no previously working functions have failed as a result of the repairs, and that newly added features
have not created problems with previous versions of the software. Also referred to as verification testing, regression
testing is initiated after a programmer has attempted to fix a recognized problem or has added source code to a
program that may have inadvertently introduced errors. It is a quality control measure to ensure that the newly modified
code still complies with its specified requirements and that unmodified code has not been affected by the maintenance
activity.
Quality is usually appraised by a collection of regression tests forming a suite of programs that test one or
more features of the system.
The advantage to this procedure is that if there is a malfunction in one of the regression tests, you know it
resulted from a code edit made since the last run.
Purpose
The standard purpose of regression testing is to avoid getting the same bug twice. When a bug is found, the
programmer fixes the bug and adds a test to the test suite. The test should fail before the fix and pass after the fix.
When a new version is about to be released, all the tests in the regression test suite are run and if an old bug
reappears, this will be seen quickly since the appropriate test will fail.
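The fix-plus-test workflow described above can be sketched with Python's built-in unittest module; the `word_count` function and its bug are hypothetical stand-ins for the application code:

```python
import unittest

def word_count(text):
    # Fixed version: split() with no argument splits on runs of
    # whitespace. The original (buggy) version used split(" "),
    # which miscounted inputs containing double spaces.
    return len(text.split())

class RegressionSuite(unittest.TestCase):
    # Existing test that has always passed stays in the suite.
    def test_simple_sentence(self):
        self.assertEqual(word_count("one two three"), 3)

    # Test added when the bug was found: it failed before the fix
    # and passes after it. If the old bug reappears in a later
    # version, this test fails and flags it immediately.
    def test_double_space_bug(self):
        self.assertEqual(word_count("one  two"), 2)

# Run the whole suite before each release, e.g.:
#   python -m unittest regression_suite.py
```

Each new bug thus leaves behind a permanent test, and the suite grows alongside the product.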
(1)It increases our chances of detecting bugs caused by changes to an application - either enhancements or bug
fixes. Note that we don't guarantee that there are no side effects. We'll talk later about what you need to guarantee that
you've detected any side effects.
(2) It can also detect undesirable side effects caused by changing the operating environment. For example,
hardware changes, or upgrades to system software such as the operating system or the database management
system.
(3) The Regression Test Set is also useful for a new way of doing integration testing. This new method is much faster
and less confusing than the old way of doing integration testing - but you need a Regression Test Set to do it.
Summary:
• Regression testing means rerunning tests of things that used to work to make sure that a change didn't break
something else.
• The set of tests used is called the Regression Test Set, or RTS for short.
• It's enormously helpful when you change an application, change the environment, and during integration of
pieces.
• Regression testing is a simple concept, but it needs to be done just right to work in the real world.
Acceptance Testing
Acceptance Testing checks the system against the "Requirements". It is similar to systems testing in that the whole
system is checked but the important difference is the change in focus:
Systems Testing checks that the system that was specified has been delivered.
Acceptance Testing checks that the system delivers what was requested.
• The customer, and not the developer, should always perform acceptance testing. The customer knows what is required
from the system to achieve value in the business and is the only person qualified to make that judgment.
Hence the goal of acceptance testing is to verify the overall quality, correct operation, scalability,
completeness, usability, portability, and robustness of the functional components supplied by the software system.
Factors influencing Acceptance Testing
The User Acceptance Test Plan will vary from system to system but, in general, the testing should be planned in order
to provide a realistic and adequate exposure of the system to all reasonably expected events. The testing can be based
upon the User Requirements Specification to which the system should conform.
In user acceptance testing, the software is handed over to the user in order to find out if the software meets the user's
expectations and works as expected.
Alpha Testing: In this type of testing, the users are invited to the development center, where they use the application while the
developers note every particular input or action carried out by the user. Any abnormal behavior of the system is
noted and rectified by the developers.
Beta Testing: In this type of testing, the software is distributed as a beta version to the users, and the users test the application at
their own sites. As the users explore the software, any exception/defect that occurs is reported to the developers.
TEST PLAN
14.1 TEST PLAN
• The test plan keeps track of possible tests that will be run on the system after coding.
• The test plan is a document that develops as the project is being developed.
• Record tests as they come up.
• Test error-prone parts of the software.
• The initial test plan is abstract and the final test plan is concrete.
• The initial test plan contains high level ideas about testing the system without getting into the details of exact
test cases.
• The most important test cases come from the requirements of the system.
• When the system is in the design stage, the initial tests can be refined a little.
• During the detailed design or coding phase, exact test cases start to materialize.
• After coding, the test points are all identified and the entire test plan is exercised on the software.
• To achieve 100% correct code: ensure all Functional and Design Requirements are implemented as
specified in the documentation.
• To provide a procedure for Unit and System Testing.
• To identify the documentation process for Unit and System Testing.
• To identify the test methods for Unit and System Testing.
In software testing, a test plan gives detailed testing information regarding an upcoming testing effort, including
• Scope of testing
• Schedule
• Test Deliverables
• Release Criteria
• Risks and Contingencies
14.4 Process of the Software Test Plan
• Identify the requirements to be tested. All test cases shall be derived using the current Design Specification.
• Identify which particular test(s) you're going to use to test each module.
• Review the test data and test cases to ensure that the unit has been thoroughly verified and that the test data
and test cases are adequate to verify proper operation of the unit.
• Identify the expected results for each test.
• Document the test case configuration, test data, and expected results. This information shall be submitted via
the on-line Test Case Design (TCD) and filed in the unit's Software Development File (SDF). A successful Peer
Technical Review baselines the TCD and initiates coding.
• Perform the test(s).
• Document the test data, test cases, and test configuration used during the testing process. This information
shall be submitted via the on-line Unit/System Test Report (STR) and filed in the unit's Software Development
File (SDF).
• Successful unit testing is required before the unit is eligible for component integration/system testing.
• Unsuccessful testing requires a Program Trouble Report to be generated. This document shall describe the
test case, the problem encountered, its possible cause, and the sequence of events that led to the problem. It
shall be used as a basis for later technical analysis.
• Test documents and reports shall be submitted on-line. Any specifications to be reviewed, revised, or updated
shall be handled immediately.
Deliverables: Test Case Design, System/Unit Test Report, Problem Trouble Report (if any).
• Approvals
• Glossary
• Project Plan
• System Requirements specifications.
• High Level design document.
• Detail design document.
• Development and Test process standards.
• Methodology guidelines and examples.
• Corporate standards and guidelines.
These items should be described in relation to the Software Project Plan to which the test plan relates. Other items
may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation
activities (Analysis & Reviews), and possibly the process to be used for change control and communication and
coordination of key activities.
This can be controlled through a local Configuration Management (CM) process if you have one. This information
includes version numbers and configuration requirements where needed (especially if multiple versions of the product are
supported). It may also include key delivery schedule issues for critical elements.
Remember, what you are testing is what you intend to deliver to the client.
This section can be oriented to the level of the test plan. For higher levels it may be by application or functional
area, for lower levels it may be by program, unit, module or build.
There are some inherent software risks such as complexity; these need to be identified.
• Safety.
• Multiple interfaces.
• Impacts on Client.
• Government regulations and rules.
Another key area of risk is a misunderstanding of the original requirements. This can occur at the
management, user and developer levels. Be aware of vague or unclear requirements and requirements that cannot be
tested.
The past history of defects (bugs) discovered during unit testing will help identify potential areas within the
software that are risky. If the unit testing discovered a large number of defects, or a tendency towards defects in a
particular area of the software, this is an indication of potential future problems. It is the nature of defects to cluster and
clump together. If an area was defect-ridden earlier, it will most likely continue to be defect-prone.
One good approach to define where the risks are is to have several brainstorming sessions.
Set the level of risk for each feature. Use a simple rating scale such as (H, M, L): High, Medium and Low.
These types of levels are understandable to a User. You should be prepared to discuss why a particular level was
chosen.
Identify why the feature is not to be tested, there can be any number of reasons.
• Are any special tools to be used and what are they?
• Will the tool require special training?
• What metrics will be collected?
• Which level is each metric to be collected at?
• How is Configuration Management to be handled?
• How many different configurations will be tested?
• Hardware
• Software
• Combinations of HW, SW and other vendor packages
• What levels of regression testing will be done and how much at each test level?
• Will regression testing be based on severity of defects detected?
• How will elements in the requirements and design that do not make sense or are untestable be
processed?
If this is a master test plan the overall project testing approach and coverage requirements must also be identified.
(7) MTBF, Mean Time Between Failures - if this is a valid measurement for the test involved and if the data is
available.
(8) SRE, Software Reliability Engineering - if this methodology is in use and if the information is available.
• All developed code must be unit tested. Unit and Link Testing must be completed and signed off by
development team.
• System Test plans must be signed off by Business Analyst and Test Controller.
• All human resources must be assigned and in place.
• All test hardware and environments must be in place, and free for System test use.
• The Acceptance Tests must be completed, with a pass rate of not less than 80%.
• All High Priority errors from System Test must be fixed and tested
• If any medium or low-priority errors are outstanding, the implementation risk must be signed off as acceptable
by the Business Analyst and Business Expert.
Resumption Criteria
In the event that system testing is suspended, resumption criteria will be specified, and testing will not
recommence until the software meets these criteria.
• Project Integration Test must be signed off by Test Controller and Business Analyst.
• Business Acceptance Test must be signed off by Business Expert.
Summary
The goal of this exercise is to familiarize students with the process of creating test plans.
Experience has shown that good planning can save a lot of time, even in an exercise, so do not underestimate
the effort required for this phase.
The goal of all these exercises is to carry out system testing on WordPad, a simple word processor. Your task
is to write a thorough test plan in English using the above-mentioned sources as guidelines. The plan should be based
on the documentation of WordPad.
The role of a review is to make sure that a document (or code in a code review) is readable and clear and that
it contains all the necessary information and nothing more. Some implementation details should be kept in mind:
• The groups will divide their roles themselves before arriving at the inspection. A failure to follow the roles
correctly will be reflected in the grading. However, one of the assistants will act as the moderator and will not
assume any other roles.
• There will be only one meeting with the other group and the moderator. All planning, overview, and preparation
are up to the groups themselves. You should use the suggested checklists in the lecture notes while preparing.
Task 3 deals with the after-meeting activities.
• The meeting is rather short, only 60 minutes for a pair (that is, 30 minutes each). Hence, all comments on the
language used in the other group's test plan are to be given in writing. The meeting itself concentrates on the
form and content of the plan.
Task 3: Improved Test Plan and Inspection Report
After the meeting, each group will prepare a short inspection report on their test plan listing their most typical
and important errors in the first version of the plan together with ideas for correcting them. You should also answer the
following questions in a separate document:
Furthermore, the test plan is to be revised according to the input from the inspection.
Test Case
Test Case:
A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to
exercise a particular program path or to verify compliance with a specific requirement.
In software engineering, a test case is a set of conditions or variables under which a tester will determine if a
requirement upon an application is partially or fully satisfied. It may take many test cases to determine that a
requirement is fully satisfied. In order to fully test that all the requirements of an application are met, there must be at
least one test case for each requirement unless a requirement has sub requirements. In that situation, each sub
requirement must have at least one test case. Some methodologies recommend creating at least two test cases for
each requirement: one should perform positive testing of the requirement and the other should perform negative
testing.
If the application is created without formal requirements, then test cases are written based on the accepted normal
operation of programs of a similar class.
What characterises a formal, written test case is that there is a known input and an expected output, which is worked
out before the test is executed. The known input should test a precondition and the expected output should test a
postcondition.
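As a hypothetical illustration (the `withdraw` function is not from the text), a formal test case fixes the input and expected output before execution and checks the pre- and postconditions explicitly:

```python
def withdraw(balance, amount):
    # Illustrative function under test.
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_withdraw_valid_amount():
    balance, amount = 100, 40      # known input, fixed before the test runs
    assert balance >= amount       # precondition
    result = withdraw(balance, amount)
    assert result == 60            # expected output, worked out beforehand
    assert result >= 0             # postcondition: balance never goes negative

test_withdraw_valid_amount()
```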
Under special circumstances, there could be a need to run the test, produce results, and then have a team of experts
evaluate whether the results can be considered a pass. This often happens when determining performance numbers
for new products. The first test is taken as the baseline for subsequent test/product release cycles.
Written test cases include a description of the functionality to be tested taken from either the requirements or use cases,
and the preparation required to ensure that the test can be conducted.
A variation of test cases is most commonly used in acceptance testing. Acceptance testing is done by a group of end-
users or clients of the system to ensure the developed system meets their requirements. User acceptance testing is
usually differentiated by the inclusion of happy path or positive test cases.
15.1 Test Case Template
15.2 Test Case Design Techniques
– Equivalence Partitioning
– Boundary Value Analysis
– Cause-Effect Diagram
– State-Transition
15.2.1 Equivalence Partitioning
Take each input condition described in the specification and derive at least two equivalence classes for it. One class
represents the set of cases which satisfy the condition (the valid class) and one represents the cases which do not (the
invalid class).
Following are some general guidelines for identifying equivalence classes:
a) If the requirements state that a numeric value is input to the system and must be within a range of values, identify
one valid equivalence class (inputs within the valid range) and two invalid equivalence classes (inputs which are too low
and inputs which are too high). For example, if an item in inventory can have a quantity of -9999 to +9999, identify the
following classes:
1. One valid class: QTY is greater than or equal to -9999 and less than or equal to 9999, written as (-9999 <= QTY <= 9999)
2. One invalid class: QTY is less than -9999, written as (QTY < -9999)
3. One invalid class: QTY is greater than 9999, written as (QTY > 9999)
b) If the requirements state that the number of items input by the system at some point must lie within a certain range,
specify one valid class where the number of inputs is within the valid range, one invalid class where there are too few
inputs and one invalid class where there are too many inputs.
For example, the specifications state that a maximum of 4 purchase orders can be registered against any one product. The
equivalence classes are: the valid equivalence class (the number of purchase orders is greater than or equal to 1 and
less than or equal to 4, written as 1 <= no. of purchase orders <= 4), the invalid class (no. of purchase
orders > 4), and the invalid class (no. of purchase orders < 1).
c) If the requirements state that a particular input item must match one of a set of values, and each case will be dealt with in
the same way, identify one valid class for values in the set and one invalid class representing values outside of the set.
Example: the code accepts between 4 and 24 inputs; each is a 3-digit integer.
• One partition: number of inputs
• Classes: "x < 4", "4 <= x <= 24", "24 < x"
• Chosen values: 3, 4, 5, 14, 23, 24, 25
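The two range examples above can be sketched as follows; the validator functions are hypothetical, but the class boundaries come straight from the text:

```python
def qty_is_valid(qty):
    # Valid class: -9999 <= QTY <= 9999; values outside fall into
    # one of the two invalid classes (too low, too high).
    return -9999 <= qty <= 9999

def input_count_class(x):
    # Partition on the number of inputs: x < 4, 4 <= x <= 24, 24 < x.
    if x < 4:
        return "invalid: too few"
    if x <= 24:
        return "valid"
    return "invalid: too many"

# One representative value per equivalence class (plus boundaries) is enough:
assert qty_is_valid(0) and not qty_is_valid(-10000) and not qty_is_valid(10000)
for value, expected in [(3, "invalid: too few"), (14, "valid"), (25, "invalid: too many")]:
    assert input_count_class(value) == expected
```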
15.2.2 Boundary Value Analysis
Boundary value analysis broadens the portions of the business requirement document used to generate tests. Unlike
equivalence partitioning, it takes into account the output specifications when deriving test cases.
2. A similar rule applies where the condition states that the number of values must lie within a certain range: select two
valid test cases, one for each boundary of the range, and two invalid test cases, one just below and one just above the
acceptable range.
3. Design tests that highlight the first and last records in an input or output file.
4. Look for any other extreme input or output conditions, and generate a test for each of them.
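Rule 2 above can be sketched as a small helper that, for any range, yields the two valid boundary values and the two invalid values just outside them; the 1-to-4 purchase-order range is taken from the earlier example:

```python
def boundary_values(low, high):
    # Two valid test values (the boundaries themselves) and two
    # invalid ones (just below and just above the range).
    valid = [low, high]
    invalid = [low - 1, high + 1]
    return valid, invalid

valid, invalid = boundary_values(1, 4)   # purchase orders: 1..4
# valid == [1, 4], invalid == [0, 5]
```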
15.2.3 Cause-Effect Graphing
1. A cause represents a distinct input condition or an equivalence class of input conditions. A cause can be interpreted
as an entity which brings about an internal change in the system. In a CEG, a cause is always positive and atomic.
2. An effect represents an output condition or a system transformation which is observable. An effect can be a state or
a message resulting from a combination of causes.
3. Constraints represent external constraints on the system.
15.2.4 State-Transition
This technique represents changes occurring in the state-based behavior or attributes of an object, or in the
various links that the object has with other objects. State models are ideal for describing the behavior of a
single object. State-transition testing exercises the state-based behavior of the instances of a class.
For example
Operation of an Elevator
An elevator has to serve all 5 floors in a building. Consider each floor as one state.
Let the lift initially be at the 0th floor (the initial state). When a request comes from the 5th floor, the lift has to
respond to that request and move to the 5th floor (the next state). If a request now comes from the 3rd floor
(another state), it has to respond to this request also. Likewise, requests may come from the other floors.
Each floor is a different state; the lift has to take care of the requests from all the states and has to
transit through the states in the sequence in which the requests come.
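The elevator example can be sketched as a minimal state machine; the `Elevator` class is illustrative, with each floor modelled as a state and each request as a transition:

```python
class Elevator:
    FLOORS = range(0, 6)      # floors 0..5, each floor one state

    def __init__(self):
        self.floor = 0        # initial state: ground floor

    def request(self, floor):
        # A request triggers a transition from the current state
        # (current floor) to the requested floor.
        if floor not in self.FLOORS:
            raise ValueError("no such floor")
        previous, self.floor = self.floor, floor
        return (previous, self.floor)

lift = Elevator()
assert lift.request(5) == (0, 5)   # state 0 -> state 5
assert lift.request(3) == (5, 3)   # state 5 -> state 3
```

Test cases derived from this model exercise each transition and each sequence of transitions at least once.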
State Transition Diagram
15. 3 Sample Test Cases
1.1.1.1.1.10 Search
Test List :
Steps :
Step 1
Description: Login to Citadel using correct GID Number, password and 'Citadel' server selected from drop-down menu.
Expected Result: Search Screen should appear if login is successful.
Step 2
Description: Check the GUI items in the User search screen.
Expected Result:
1. The User search screen should display "Reports, Change Password and Log Off" buttons at the top right corner of the screen.
2. An Identifier drop-down box should be displayed at the top left side of the screen.
3. A Folder check box and text field should be displayed next to the Identifier drop-down list.
4. A Major Folder list box should be displayed.
5. A Minor Folder list box should be displayed below the Major Folder.
6. A Document Type Code text field should be displayed along with a "Doc Code List" button.
7. A Document Type Description text field should be displayed.
8. A Document Date field along with a display options list box should be displayed.
9. A Scan Date field along with a display options list box should be displayed.
10. Buttons named "Import, Search and Reset" should be displayed below the above fields.
1.1.1.1.1.10.2 Test Name : CDL_TCD_USH_TCS_002
Subject : Search
Status : Review
Designer : Edwin
Creation Date : 05/09/2003
Type : AUTOMATED
Description : Identifier Drop Down check and Folder selection
Execution Status : Passed
Steps :
Step 1
Description: Login to Citadel using correct GID Number, password and 'Citadel' server selected from drop-down menu.
Expected Result: Main Search Screen should appear if login is successful.
Step 2
Description: Click the Identifier drop-down button.
Expected Result: Banker Last Name, CAS ID, CAS Last Name, SPN, SPN Name, Polaris Doc Number, Processor Name, Specialists Name, Fiduciary Manager, Investment Manager, Portfolio Manager, Sales Manager, Account Number and Account Title should be displayed in the drop-down list.
Step 3
Description: Select either of the below options from the drop-down list:
1. Polaris Doc Number
2. Processor Name
Expected Result: The Folder check box shouldn't be selected.
Step 4
Description: Select any of the below options from the drop-down list:
1. Banker Last Name
2. Specialists Name
3. Fiduciary Manager
4. Investment Manager
5. Portfolio Manager
6. Sales Manager
Expected Result: The Folder check box should be selected.
Step 5
Description: Select any of the below options from the drop-down list:
1. Account Number
2. Account Title
3. CAS ID
4. CAS Name
5. SPN
6. SPN Name
Expected Result: The Folder check box should not be affected by this item selection; it retains the previous selection.
Test execution
16.1 Test execution
When the test design activity has finished, the test cases are executed. Test execution is the phase that follows
everything discussed to this point: with test strategies and test planning complete, test procedures designed and developed, and
the test environment operational, it is time to execute the tests created in the preceding phases.
Once development of the system is underway and software builds become ready for testing, the testing team
must have a precisely defined work flow for executing tests, tracking defects found, and providing information, or
metrics, on the progress of the testing effort.
Realistically, testing is a trade-off between budget, time and quality. It is driven by profit models. There are two
approaches:
• Pessimistic Approach.
• Optimistic Approach.
The pessimistic and unfortunately most often used approach is to stop testing whenever some or any of the
allocated resources -- time, budget, or test cases -- are exhausted.
The optimistic stopping rule is to stop testing when either reliability meets the requirement, or the benefit from
continuing testing cannot justify the testing cost. This will usually require the use of reliability models to evaluate and
predict reliability of the software under test. Each evaluation requires repeated running of the following cycle: failure
data gathering -- modeling -- prediction.
16.3 Defect
Defects are commonly defined as "failure to conform to specifications," e.g., incorrectly implemented
specifications or specified requirement(s) missing from the software. A bug in a software product is any exception that
can hinder the functionality of either the whole software or a part of it.
A defect is termed as a variance from a desired attribute. These attributes include complete and correct
requirements and specifications, designs that meet the requirements, and programs that observe the requirements and
business rules.
16.4.2 - Contents of a Bug Report
Below is a complete list of the contents of a bug/error/defect report that are needed at the time of raising a bug during
software testing. These fields help in identifying a bug uniquely.
When a tester finds a defect, he/she needs to report a bug and enter certain fields, which help in uniquely
identifying the bug reported by the tester. The contents of a bug report are as given below:
Project: Name of the project under which the testing is being carried out.
Subject: A short description of the bug which will help in identifying it. This generally starts with the project
identifier number/string. The string should be clear enough to help the reader anticipate the problem/defect for which
the bug has been reported.
Description: Detailed description of the bug. This generally includes the steps that are involved in the test case and the
actual results. At the end of the summary, the step at which the test case fails is described along with the actual result
obtained and expected result.
Summary: This field contains some keyword information about the bug, which can help in minimizing the number of
records to be searched.
Assigned To: Name of the developer who is supposed to fix the bug. Generally this field contains the name of the
developer group leader, who then delegates the task to a member of his team and changes the name accordingly.
Test Lead: Name of the leader of the testing team, under whom the tester reports the bug.
Detected in Version: This field contains the version information of the software application in which the bug was
detected.
Closed in Version: This field contains the version information of the software application in which the bug was fixed.
Date Detected: Date at which the bug was detected and reported.
Expected Date of Closure: Date at which the bug is expected to be closed. This depends on the severity of the bug.
Actual Date of Closure: As the name suggests, actual date of closure of the bug i.e. date at which the bug was fixed
and retested successfully.
Priority: The priority of fixing the bug. This specifically depends upon the functionality that the bug is hindering.
Generally Low, Medium, High and Urgent are the priority levels used.
Severity: This is typically a numerical field which displays the severity of the bug. It can range from 1 to 5, where 1 is
the highest severity and 5 the lowest.
Status: This field displays current status of the bug. A status of ‘New’ is automatically assigned to a bug when it is first
time reported by the tester, further the status is changed to Assigned, Open, Retest, Pending Retest, Pending Reject,
Rejected, Closed, Postponed, Deferred etc. as per the progress of bug fixing process.
Bug ID: This is a unique ID i.e. number created for the bug at the time of reporting, which identifies the bug uniquely.
Attachment: Sometimes it is necessary to attach screen-shots of the tested functionality. These help the tester in
explaining the testing he has done, and they also help the developers in re-creating the same testing conditions.
Test Case Failed: This field contains the test case that is failed for the bug.
Any of the above fields can be made mandatory, in which case the tester has to enter valid data at the time of
reporting a bug.
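For illustration, the fields listed above could be held in a record like the following; the field names follow the text, but the `BugReport` class itself is a hypothetical sketch, not part of any particular defect-tracking tool:

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    bug_id: int                   # unique ID created at reporting time
    project: str                  # project under which testing is carried out
    subject: str                  # short description, prefixed with the project identifier
    description: str              # steps, actual result, expected result
    assigned_to: str              # developer (group leader) expected to fix it
    detected_in_version: str
    severity: int = 3             # 1 (highest) .. 5 (lowest)
    priority: str = "Medium"      # Low / Medium / High / Urgent
    status: str = "New"           # New -> Assigned -> Open -> ... -> Closed

bug = BugReport(bug_id=101, project="Citadel",
                subject="CDL-101: folder list empty on search",
                description="Step 2 of the search test case fails",
                assigned_to="dev-group-leader",
                detected_in_version="1.2")
assert bug.status == "New"        # status assigned automatically on first report
```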
16.5.1 Critical
The defect results in the failure of the complete software system, of a subsystem, or of a software unit
(program or module) within the system. A defect that prevents the user from moving ahead in the application, a "show
stopper" is classified as "Critical," e.g., performing an event causes a general protection fault in the application.
Performance defects may also be classified as "Critical" for certain software that must meet predetermined performance
metrics.
16.5.2 Major
The defect results in the failure of the complete software system, of a subsystem, or of a software unit
(program or module) within the system. There is no way to make the failed component(s) work; however, there are
acceptable processing alternatives which will yield the desired result. An overly long processing time may be classified
as "Major" because, although it does not prevent the user from proceeding, it is a performance deficiency.
16.5.3 Average
The defect does not result in a failure, but causes the system to produce incorrect, incomplete, or inconsistent
results, or the defect impairs the system's usability. If the user is able to formulate a work-around where there are defects,
these defects may be classified as "Average." Defects with severity "Average" will be repaired when the higher-category
defects have been repaired and if time permits.
16.5.4 Minor
The defect does not cause a failure, does not impair usability, and the desired processing results are easily
obtained by working around the defect. Certain graphical user interface defects, such as placement of push buttons on
the window, may be classified as "Minor" since this does not impede the application functionality. Although defect
priority indicates how quickly the defect must be repaired, its severity is determined by the importance of that aspect of
the application in relation to the software requirements.
16.5.5 Exception
The defect is the result of non-conformance to a standard, is related to the aesthetics of the system, or is a
request for an enhancement. Defects at this level may be deferred or even ignored.
16.6.1 Urgent
Further development and/or testing cannot occur until the defect has been repaired. The system cannot be
used until the repair has been effected; for example, a system crash or an error message forcing the user to close the
window. The tester's ability to operate the system is either totally (System Down) or almost totally affected. A major
area of the user's system is affected by the incident and it is significant to business processes.
A misstatement of a requirement or a serious design flaw must be resolved immediately, before the developer
translates it into code that is implemented in the software; it is much cheaper to amend a requirement document
than to make program code changes.
16.6.2 High
The defect must be resolved as soon as possible because it is impairing development and/or testing activities.
System use will be severely affected until the defect is fixed.
The critical path for development is another determinant of defect priority. If one piece of the functionality must
work before the next piece is added, any functional defects of the first piece will be given the "High" priority level.
For example: A query engine retrieved transactions matching user-specified criteria upon which further processing was
performed. If the query engine had been defective, no further development (or testing) would have been practical.
Therefore, all functional defects of the query engine were prioritized as "High".
16.6.3 Medium
The defect should be resolved in the normal course of development activities. It can wait until a new build or
version is created.
16.6.4 Low
The defect is an irritant which should be repaired, but which can be repaired after more serious defects have
been fixed. The wrong font size for a label may be classified as "Low Priority".
16.6.5 Defer
The defect repair can be put off indefinitely. It can be resolved in a future major system revision or not resolved
at all.
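The five priority levels above form a natural ordering against which a defect queue can be triaged. The following sketch models them in Python; the `Priority` enumeration and the dictionary-based defect records are illustrative assumptions, not part of any particular tracking tool.

```python
from enum import IntEnum

class Priority(IntEnum):
    """Defect priority levels from section 16.6, most pressing first."""
    URGENT = 1   # blocks all further development and testing
    HIGH = 2     # impairs development and/or testing; fix as soon as possible
    MEDIUM = 3   # fix in the normal course of development
    LOW = 4      # irritant; fix after more serious defects
    DEFER = 5    # repair may be put off indefinitely

def triage(defects):
    """Sort defects so the most pressing are handled first."""
    return sorted(defects, key=lambda d: d["priority"])

queue = [
    {"id": 101, "priority": Priority.LOW},
    {"id": 102, "priority": Priority.URGENT},
    {"id": 103, "priority": Priority.MEDIUM},
]
print([d["id"] for d in triage(queue)])  # [102, 103, 101]
```

Because `IntEnum` values compare as integers, the sort needs no extra mapping from level names to ranks.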
There are seven different life cycles that a bug can pass through:
Cycle I:
Cycle II:
1) A tester finds a bug and reports it to Test Lead.
2) The Test lead verifies if the bug is valid or not.
3) The bug is verified and reported to development team with status as ‘New’.
4) The development leader and team verify if it is a valid bug. The bug is invalid and is marked with a status of ‘Pending
Reject’ before passing it back to the testing team.
5) After getting a satisfactory reply from the development side, the test leader marks the bug as ‘Rejected’.
Cycle III:
Cycle IV:
Cycle V:
Cycle VI:
1) After confirmation that the data or certain functionality is unavailable, the solution and retest of the bug
are postponed indefinitely and the bug is marked as 'Postponed'.
Cycle VII:
1) If the bug is not important enough to fix now and can be, or needs to be, postponed, then it is given the status 'Deferred'.
This way, any bug that is found ends up with a status of Closed, Rejected, Deferred or Postponed.
Any software development process is incomplete if its most important phase, testing of the developed
product, is excluded. Software testing is a process carried out in order to find and fix previously undetected
bugs/errors in the software product. It helps to improve the quality of the software product and makes it secure for the client
to use.
Right from the first time any bug is detected till the point when the bug is fixed and closed, it is assigned
various statuses which are New, Open, Postpone, Pending Retest, Retest, Pending Reject, Reject, Deferred, and
Closed.
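These statuses can be viewed as states in a small state machine. The sketch below encodes a simplified subset of the transitions described in the cycles above; the transition table is an illustrative assumption, since real defect trackers each define their own workflow.

```python
# Simplified subset of the bug status transitions described above.
# Real defect trackers define their own workflows; this table is illustrative.
TRANSITIONS = {
    "New": {"Assigned", "Pending Reject"},
    "Pending Reject": {"Rejected"},
    "Assigned": {"Open"},
    "Open": {"Pending Retest", "Postponed", "Deferred"},
    "Pending Retest": {"Retest"},
    "Retest": {"Closed", "Open"},  # reopened if the fix fails retest
}

def move(status, new_status):
    """Validate a status change against the table; raise on an illegal move."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"cannot move from {status!r} to {new_status!r}")
    return new_status

# Walk one happy-path life cycle from New to Closed.
status = "New"
for nxt in ("Assigned", "Open", "Pending Retest", "Retest", "Closed"):
    status = move(status, nxt)
print(status)  # Closed
```

Encoding the workflow as a table makes illegal jumps (for example, New straight to Closed) fail loudly instead of silently corrupting the bug's history.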
16.8.1b Assigned:
After the bug is reported as 'New', it comes to the development team. The development team verifies whether the
bug is valid. If the bug is valid, the development leader assigns it to a developer to fix, and it is given the status
'Assigned'.
16.8.1l Deferred:
In some cases a particular bug is of no immediate importance and can, or needs to, be set aside; at that point it is
marked with the 'Deferred' status.
16.10 Defect tracking
Defect tracking is the process of finding defects in a product (by inspection, testing, or recording feedback
from customers) and making new versions of the product that fix the defects. Defect tracking is important in software
engineering because complex software systems typically have tens or hundreds of thousands of defects. Managing,
evaluating and prioritizing these defects is a difficult task; defect tracking systems are computer database systems that
store defects and help people to manage them.
The purpose of Defect Tracking is to help engineering management achieve their goal of producing quality
products on time and to budget.
• Planning and Estimation
• Tracking
• Control
• Process Implementation and Change
• Accessibility
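As a concrete illustration of the "computer database systems" mentioned above, the sketch below keeps defect records in memory and summarizes the open ones by severity. The field names follow this chapter's terminology, but the records and figures are invented for the example; a real tracker would use a database.

```python
from collections import Counter

# Illustrative in-memory defect store; a real tracker would use a database.
defects = [
    {"id": 1, "severity": "Critical", "status": "Open"},
    {"id": 2, "severity": "Average", "status": "Closed"},
    {"id": 3, "severity": "Average", "status": "Open"},
    {"id": 4, "severity": "Minor", "status": "Open"},
]

def open_defects_by_severity(store):
    """Count unresolved defects per severity so they can be prioritized."""
    return Counter(d["severity"] for d in store if d["status"] == "Open")

print(dict(open_defects_by_severity(defects)))
```

Summaries of this kind feed directly into the planning, tracking and control purposes listed above.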
Coding was well under way when incomplete system specifications caused the transfer of data on the bridge to
fail. The failure was not due to coding errors but to specification errors that were translated into program code. Had the
deficiency been discovered before coding began, we could have saved the substantial time and money required to
repair the programs.
TESTING METRICS
17.1 What is a test metric?
Test metrics provide a means of analyzing the current level of maturity of the testing process and predicting
future trends. Ultimately they are meant to enhance testing: activities that were missed in the current cycle can be
added in the next build to improve the testing process.
Metrics are numerical data that help us to measure test effectiveness.
Metrics are produced in two forms:
1. Base Metrics and
2. Derived Metrics.
# Test Cases
# New Test Cases
# Test Cases Executed
# Test Cases Unexecuted
# Test Cases Re-executed
# Passes
# Fails
# Test Cases Under Investigation
# Test Cases Blocked
# 1st Run Fails
# Test Case Execution Time
# Testers
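The base counts listed above can be combined into derived metrics. The sketch below computes a few common ones (pass rate, fail rate, execution coverage, first-run failure rate); the dictionary keys and figures are illustrative assumptions, and the formulas presuppose a non-zero number of executed test cases.

```python
def derived_metrics(base):
    """Combine base test-case counts into common derived metrics (percentages)."""
    executed = base["executed"]
    return {
        "pass_rate": base["passes"] / executed * 100,
        "fail_rate": base["fails"] / executed * 100,
        "execution_pct": executed / base["total_cases"] * 100,
        "first_run_fail_pct": base["first_run_fails"] / executed * 100,
    }

# Invented figures for illustration.
base = {"total_cases": 200, "executed": 160, "passes": 140,
        "fails": 20, "first_run_fails": 8}
m = derived_metrics(base)
print(f"pass rate {m['pass_rate']:.1f}%, executed {m['execution_pct']:.1f}%")
# pass rate 87.5%, executed 80.0%
```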
1. Test coverage
3. Convergence of testing
As we all know, a major percentage of software projects suffer from quality problems. Software testing
provides visibility into product and process quality. Test metrics are key "facts" with which project managers can understand
their current position and prioritize their activities to reduce the risk of schedule over-runs on software releases.
Test metrics are a very powerful management tool. They help you to measure your current performance.
Because today's data becomes tomorrow's historical data, it is never too late to start recording key information on your
project. This data can be used to improve future work estimates and quality levels. Without historical data, estimates
will be guesses.
You cannot track the project status meaningfully unless you know the actual effort and time spent on each task
as compared to your estimates. You cannot sensibly decide whether your product is stable enough to ship unless you
track the rates at which your team is finding and fixing defects. You cannot quantify the performance of your new
development processes without some statistics on your current performance and a baseline to compare it with. Metrics
help you to better control your software projects. They enable you to learn more about the functioning of your
organization by establishing a Process Capability baseline that can be used to better estimate and predict the quality of
your projects in the future.
1. Test metrics data collection helps predict the long-term direction and scope for an organization and enables a more
holistic view of business and identifies high-level goals
2. Provides a basis for estimation and facilitates planning for closure of the performance gap
5. Provides meters to flag actions for faster, more informed decision making
6. Quickly identifies and helps resolve potential problems and identifies areas of improvement
7. Test metrics provide an objective measure of the effectiveness and efficiency of testing
1. Collect only the data that you will actually use/need to make informed decisions to alter your strategies; if you are not
going to change your strategy regardless of the findings, your time is better spent in testing.
2. Do not base decisions solely on data that is variable or can be manipulated. For example, measuring testers on the
number of tests they write per day can reward them for speeding through superficial tests or punish them for tracking
trickier functionality.
3. Use statistical analysis to get a better understanding of the data. Difficult metrics data should be analyzed carefully.
The templates used for presenting data should be self explanatory.
4. One of the key inputs to the metrics program is the defect tracking system in which the reported process and product
defects are logged and tracked to closure. It is therefore very important to carefully decide on the fields that need to be
captured per defect in the defect tracking system and then generate customizable reports.
5. Metrics should be decided on the basis of their importance to stakeholders rather than ease of data collection.
Metrics that are of no interest to the stakeholders should be avoided.
6. Inaccurate data should be avoided and complex data should be collected carefully. Proper benchmarks should be
defined for the entire program.
There are literally thousands of possible software metrics to collect and possible things to measure about
software development, and many books and training programs are available about software metrics. Which of the
many metrics are appropriate for your situation? One method is to start with one of the many available published suites
of metrics and a vision of your own management problems and goals, and then customize the metrics list based on the
following metrics collection checklist. For each metric, you must consider:
1) What are you trying to manage with this metric? Each metric must relate to a specific management
area of interest in a direct way. The more convoluted the relationship between the measurement and the
management goal, the less likely you are to be collecting the right thing.
2) What does this metric measure? Exactly what does this metric count? High-level attempts to answer
this question (such as "it measures how much we've accomplished") may be misleading. Detailed answers
(such as "it reports how much we had budgeted for design tasks that first-level supervisors are reporting as
greater than 80 percent complete") are much more informative, and can provide greater insight regarding the
accuracy and usefulness of any specific metric.
3) If your organization optimized this metric alone, what other important aspects of your software design,
development, testing, deployment, and maintenance would be affected? Asking this question will provide a list
of areas where you must check to be sure that you have a balancing metric. Otherwise, your metrics program
may have unintended effects and drive your organization to undesirable behavior.
4) How hard/expensive is it to collect this information? This is where you actually get to identify whether
collection of this metric is worth the effort. If it is very expensive or hard to collect, look for automation that can
make the collection easier, or consider alternative metrics that can be substituted.
5) Does the collection of this metric interact with (or interfere with) other business processes? For
example, does the metric attempt to gather financial information on a different periodic basis or with different
granularity than your financial system collects and reports it? If so, how will the two quantitative systems be
synchronized? Who will reconcile differences? Can the two collection efforts be combined into one and provide
sufficient software metrics information?
6) How accurate will the information be after you collect it? Complex or manpower-intensive metrics
collection efforts are often short circuited under time and schedule pressure by the people responsible for the
collection. Metrics involving opinions (e.g., what percentage complete do you think you are?) are notoriously
inaccurate. Exercise caution, and carefully evaluate the validity of metrics with these characteristics.
7) Can this management interest area be measured by other metrics? What alternatives to this metric
exist? Always look for an easier-to-collect, more accurate, more timely metric that will measure relevant
aspects of the management issue of concern.
Use of this checklist will help ensure the collection of an efficient suite of software development metrics that directly
relates to management goals. Periodic review of existing metrics against this checklist is recommended.
Projects that are underestimated, over-budget, or that produce unstable products, have the potential to devastate
the company. Accurate estimates, competitive productivity, and renewed confidence in product quality are critical to the
success of the company.
Hoping to solve these problems as quickly as possible, the company management embarks on the 8-Step
Metrics Program
Step 1: Document the Software Development Process
Integrated Software does not have a defined development process. However, the new metrics coordinator
does a quick review of project status reports and finds that the activities of requirements analysis, design, code, review,
recode, test, and debugging describe how the teams spend their time. The inputs, work performed, outputs and
verification criteria for each activity have not been recorded. He decides to skip these details for this "test" exercise. The
recode activity includes only effort spent addressing software action items (defects) identified in reviews.
Step 2: State the Goals
The metrics coordinator sets out to define the goals of the metrics program. The list of goals in Step 2 of the 7-Step
Metrics Program is broader than (yet still related to) the immediate concerns of Integrated Software. Discussion
with development staff leads to some good ideas on how to tailor these goals into specific goals for the company.
1. Estimates
The development staff at Integrated Software considers past estimates to have been unrealistic, as they were
established using "finger in the wind" techniques. They suggest that the current plan could benefit from past experience, as
the present project is very similar to past projects.
Goal: Use previous project experience to improve estimates of productivity.
2. Productivity
Discussions about the significant effort spent in debugging center on a comment by one of the developers that
defects found early in reviews have been faster to repair than defects discovered by the test group. It seems that
both reviews and testing are needed, but the amount of effort to put into each is not clear.
Goal: Optimize defect detection and removal.
3. Quality
The test group at the company argues for exhaustive testing. This, however, is prohibitively expensive.
Alternatively, they suggest looking at the trends of defects discovered and repaired over time to better understand the
probable number of defects remaining.
Goal: Ensure that the defect detection rate during testing is converging towards a level that indicates that less than five
defects per KSLOC will be discovered in the next year.
Step 3: Define Metrics Required to Reach Goals and Identify Data to Collect
Working from the Step 3 tables, the metrics coordinator chooses the following metrics for the metrics program.
Goal 1: Improve Estimates
• Actual effort for each type of software in PH
• Size of each type of software in SLOC
• Software product complexity (type)
• Labor rate (PH/SLOC) for each type
Goal 2: Improve Productivity
• Total number of person hours per activity
• Number of defects discovered in reviews
• Number of defects discovered in testing
• Effort spent repairing defects discovered in reviews
• Effort spent repairing defects discovered in testing
• Number of defects removed per effort spent in reviews and recode
• Number of defects removed per effort spent in testing and debug
Goal 3: Improve Quality
• Total number of defects discovered
• Total number of defects repaired
• Number of defects discovered / schedule date
• Number of defects repaired / schedule date
17.7.1 Product test metrics
I. Number of remarks
Definition
The total number of remarks found in a given time period/phase/test type. A remark is a claim made by a test
engineer that the application shows an undesired behavior. It may or may not result in software modification or changes
to documentation.
Purpose
One of the earliest indicators to measure once testing commences; provides initial indications about the
stability of the software.
Data to collect
Total number of remarks found.
The severity level of a defect indicates the potential business impact for the end user (business impact = effect
on the end user × frequency of occurrence).
Purpose
Provides indications about the quality of the product under test. A high-severity defect means low product
quality, and vice versa. At the end of this phase, this information is useful to make the release decision based on the
number of defects and their severity levels.
Data to collect
Every defect has severity levels attached to it. Broadly, these are Critical, Serious, Medium and Low.
VII. Time to solve a defect
Definition
Effort required to resolve a defect (diagnosis and correction).
Purpose
Provides an indication of the maintainability of the product and can be used to estimate projected maintenance
costs.
Data to collect
Divide the number of hours spent on diagnosis and correction by the number of defects resolved during the
same period.
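The data-collection rule above (hours spent on diagnosis and correction divided by defects resolved in the same period) can be sketched directly; the figures below are invented for illustration.

```python
def time_to_solve(diagnosis_hours, correction_hours, defects_resolved):
    """Average effort in hours to resolve one defect in the period."""
    return (diagnosis_hours + correction_hours) / defects_resolved

# Example: 30 h of diagnosis and 50 h of correction across 16 defects.
print(time_to_solve(30, 50, 16))  # 5.0 hours per defect
```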
X. Defects/KLOC
Definition
The number of defects per 1,000 lines of code.
Purpose
This metric indicates the quality of the product under test. It can be used as a basis for estimating defects to
be addressed in the next phase or the next version.
Data to collect
Ratio of the number of defects found vs. the total number of lines of code (thousands)
Formula used
Defects/KLOC = Number of defects found / (Total lines of code / 1,000)
Uses of defects/KLOC
Defect density is used to compare the relative number of defects in various software components. This helps
identify candidates for additional inspection or testing, or for possible re-engineering or replacement. Identifying
defect-prone components allows the concentration of limited resources into areas with the highest potential return on
investment.
Another use of defect density is to compare subsequent releases of a product to track the impact of defect
reduction and quality improvement activities. Normalizing by size allows releases of various sizes to be compared.
Differences between products or product lines can also be compared in this manner.
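Both uses of defect density described above (comparing components and comparing releases) reduce to the same calculation. A sketch with invented component figures:

```python
def defects_per_kloc(defects_found, lines_of_code):
    """Defect density: defects per 1,000 lines of code."""
    return defects_found / (lines_of_code / 1000)

# Invented (defects found, SLOC) figures per component.
components = {"parser": (12, 4000), "ui": (30, 25000), "engine": (45, 9000)}
density = {name: defects_per_kloc(d, loc) for name, (d, loc) in components.items()}

# The densest component is the best candidate for additional inspection.
worst = max(density, key=density.get)
print(worst, density[worst])  # engine 5.0
```

Note that the raw defect counts alone would have pointed at the same component here, but on a larger codebase normalizing by size is what makes components of different sizes comparable.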
Definition
The planned value in relation to the actual value.
Purpose
Shows how well estimation was done.
Data to collect
The ratio of the actual effort spent to the planned effort
Purpose
The effort spent in testing, in relation to the effort spent in the development activities, will give us an indication
of the level of investment in testing. This information can also be used to estimate similar projects in the future.
Data to collect
This metric can be computed by dividing the overall test effort by the total project effort.
Data to collect
This metric can be computed by dividing the defects that belong to a particular category by the total number of
defects.
V. Phase yield
Definition
Defined as the number of defects found during the phase of the development life cycle vs. the estimated
number of defects at the start of the phase.
Purpose
Shows the effectiveness of the defect removal. Provides a direct measurement of product quality; can be used
to determine the estimated number of defects for the next phase.
Data to collect
Ratio of the number of defects found to the total number of estimated defects. This can be used during a
phase and also at the end of the phase.
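A minimal sketch of the phase-yield ratio described above, using invented figures:

```python
def phase_yield(defects_found, defects_estimated):
    """Defects found in a phase as a fraction of those estimated for it."""
    return defects_found / defects_estimated

# Example: 40 defects found against an estimate of 50 for the phase.
print(f"{phase_yield(40, 50):.0%}")  # 80%
```

A yield well below 100% at the end of a phase suggests that defects remain to be found, which feeds the estimate for the next phase.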
Definition
The number of resolved remarks that are yet to be retested by the testing team.
Purpose
Indicates how well the test engineers are coping with the development efforts.
Data to collect
The number of remarks that have been resolved.
Formula used
VII. Valid remark ratio
Definition
Percentage of valid remarks during a certain period.
Purpose
Indicates the efficiency of the test process.
Data to collect
Ratio of the total number of remarks that are valid to the total number of remarks found
Formula used
Valid remark ratio = (Valid remarks / Total number of remarks found) × 100, where
valid remarks = number of defects + duplicate remarks + number of remarks that will be resolved in the next
phase or release
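Putting the two parts of the formula above together (valid remarks as a sum of categories, then the ratio of valid to total remarks), a sketch with invented figures:

```python
def valid_remark_ratio(defects, duplicates, next_release, total_remarks):
    """Percentage of raised remarks that turned out to be valid."""
    valid = defects + duplicates + next_release
    return valid / total_remarks * 100

# Example: of 120 remarks, 80 were defects, 10 were duplicates,
# and 6 will be resolved in the next release.
print(valid_remark_ratio(80, 10, 6, 120))  # 80.0
```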
• Defect data includes dates of defect detection and repair, and the number of defects discovered and
repaired per activity. Defect data should be available from the minutes of meetings, test reports, and code
headers. However, as Integrated Software has not previously kept such data, the metrics coordinator must
assume that all defects detected in reviews were repaired in the recode activity.
• Effort data includes total person hours to complete each activity and is available in project status
reports only.
• Implementation data includes the type and size of software for each project. This data is available from
the development staff.