
Chapter 1: Software Testing Fundamentals

Learning Objectives
After completing this chapter, you will be able to:
• Explain system failures caused by software bugs
• Describe software ‘quality’
• Explain software reliability
• Explain software reusability
• Describe the history and evolution of software testing
Some recent computer system failures caused by software bugs
Software testing is a critical element of software quality assurance and represents the ultimate
process to ensure the correctness of the product. A quality product enhances customer
confidence in using the product and thereby improves the business economics. In other
words, a good quality product means zero defects, which is derived from a better quality process.
Some case studies:
Software errors have caused spectacular failures, some with dire consequences, such as the following.
On 31 March 1986, a Mexicana Airlines Boeing 727 airliner crashed into a mountain because the
software system did not correctly negotiate the mountain's position.
Between March and June 1986, Therac-25 radiation therapy machines in Marietta, Georgia;
Boston, Massachusetts; and Texas overdosed cancer patients due to flaws in the software
program controlling the highly automated devices.
On 10 December 1990, the Space Shuttle Columbia was forced to land early due to software problems.
On 17 September 1991, a power outage at the AT&T switching facility in New York City interrupted
service to 10 million telephone users for nine hours. The problem was due to the deletion of three bits
of code in a software upgrade, and to the failure to test the software before its installation in the public
network. These and other such events have made it apparent that we must determine
the reliability of software before putting it into operation.
Disney’s Lion King, 1994-1995
Handout - Software Testing Fundamentals
Page 6
©Copyright 2007, Cognizant Technology Solutions, All Rights Reserved
C3: Protected
In the fall of 1994, Disney Company released its first multimedia CD-ROM game for children, The
Lion King Animated storybook. This was Disney’s first venture into the market and it was highly
promoted and advertised. Sales were huge. It was “the game to buy” for children that holiday
season. What happened, however, was a huge debacle. On December 26, the day after
Christmas, Disney’s customer support phones began to ring, and ring, and ring. Soon the phone
support technicians were swamped with calls from angry parents with crying children who couldn’t
get the software to work. Numerous stories appeared in newspapers and on TV news. The
problem, it was later found, was that the software had not been tested for all the conditions under
which it would be used.
Why is Software testing necessary?
Software systems are an increasing part of life, from business applications (e.g. banking) to
consumer products (e.g. cars). Most people have had an experience with software that did not
work as expected. Software that does not work correctly can lead to many problems, including loss
of money, time or business reputation, and could even cause injury or death.
General Testing Principles
A number of testing principles have been suggested over the past 40 years and offer general
guidelines common to all testing.
Principle 1 – Testing shows the presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing
reduces the probability of undiscovered defects remaining in the software but, even if no defects
are found, it is not a proof of correctness.
Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial
cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
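A quick back-of-the-envelope calculation shows why exhaustive testing is infeasible. The form fields and domain sizes below are illustrative assumptions, not taken from this handout; the point is only how quickly input combinations multiply:

```python
# Rough count of input combinations for a small, hypothetical input form.
# Even modest field domains multiply into an untestable number of cases.

def total_combinations(field_domain_sizes):
    """Multiply the domain sizes of each independent input field."""
    total = 1
    for size in field_domain_sizes:
        total *= size
    return total

# Hypothetical form: a 10-character alphanumeric ID (36**10 values),
# an age field (0-129), and a country code (about 250 values).
fields = [36 ** 10, 130, 250]
print(total_combinations(fields))  # about 1.2e20 cases: exhaustive testing is hopeless
```

Even at a million test executions per second, this hypothetical form would take millions of years to cover, which is why risk and priorities must drive test selection instead.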
Principle 3 – Early testing
Testing activities should start as early as possible in the software or system development life cycle,
and should be focused on defined test objectives.
Principle 4 – Defect clustering
A small number of modules usually contains the majority of the defects discovered during
pre-release testing, and is responsible for most of the operational failures.
Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no
longer find any new defects. To overcome this "pesticide paradox", the test cases need to be
regularly reviewed and revised, and new and different tests need to be written to exercise
different parts of the software or system to potentially find more defects.
Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested
differently from an e-commerce site.
Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the
users’ needs and expectations.
Entry Criteria:
This section explains the various steps to be performed before the start of a test, i.e.:
• Timely environment set-up, starting the web server / app server, and successful
deployment of the latest build.
• Reviewing the test basis (such as requirements, architecture, design, interfaces).
• Designing and prioritizing test cases.
• Identifying necessary test data to support the test conditions and test cases.
• Designing the test environment set-up and identifying any required infrastructure and tools.
Exit Criteria:
The purpose of exit criteria is to define when to stop testing, such as at the end of a test level or
when a set of tests has achieved a specific goal. Typically, exit criteria may consist of:
• Thoroughness measures, such as coverage of code, functionality or risk.
• Estimates of defect density or reliability measures.
• Cost.
• Residual risks, such as defects not fixed or lack of test coverage in certain areas.
• Schedules, such as those based on time to market.
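One of the thoroughness measures above, defect density, can be sketched in a few lines; the counts and threshold here are invented for illustration, not from any real project:

```python
def defect_density(defects_found, lines_of_code):
    """Defects per thousand lines of code (KLOC), a common exit-criteria metric."""
    return defects_found / (lines_of_code / 1000)

# Hypothetical exit-criteria check: stop testing when density drops below a threshold.
THRESHOLD = 0.5  # defects per KLOC -- an assumed project-specific target
density = defect_density(defects_found=18, lines_of_code=45000)
print(round(density, 2))          # 0.4
print(density < THRESHOLD)        # True -- this exit criterion is satisfied
```

In practice such a metric would be combined with coverage and residual-risk criteria rather than used alone.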
Why does software have bugs?
A human being can make an error (mistake), which produces a defect (fault, bug) in the code, in
software or a system, or in a document. If a defect in code is executed, the system will fail to do
what it should do (or do something it shouldn’t), causing a failure. Defects in software, systems or
documents may result in failures, but not all defects do so.
Defects occur because human beings are fallible and because there is time pressure, complex
code, complexity of infrastructure, changed technologies, and/or many system interactions.
Failures can be caused by environmental conditions as well: radiation, magnetism, electronic
fields, and pollution can cause faults in firmware or influence the execution of software by
changing hardware conditions.
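The error, defect and failure chain described above can be made concrete with a deliberately buggy function (a contrived example, not from this handout): the defect is present in the code whether or not it runs, but a failure is observed only when the faulty branch executes:

```python
def discount(price, is_member):
    """Intended behaviour: members get 10% off.
    The developer's mistake (error) left the wrong operator in the
    member branch -- a latent defect in the code."""
    if is_member:
        return price + price * 0.10  # defect: should be price - price * 0.10
    return price

print(discount(100, False))  # 100 -- defect present but not executed: no failure
print(discount(100, True))   # 110.0 -- defect executed: observable failure (expected 90)
```

This also illustrates why "no failures observed" is weaker than "no defects present": the non-member path never reveals the fault.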
Handout - Software Testing Fundamentals
Page 8
©Copyright 2007, Cognizant Technology Solutions, All Rights Reserved
C3: Protected
What is Software ‘Quality’?
The quality of software varies widely from system to system. Some common quality
attributes are stability, usability, reliability, portability, and maintainability.
To the producer, a product is a quality product if it is defect free and meets or conforms to the
statement of requirements that defines the product. This statement is usually shortened to:
quality means meeting requirements. From a customer's perspective, quality means "fitness for use".
What is software reliability?
Software reliability is the probability that the software will not fail for a specified period of time
under specified conditions.
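One common way to make this definition concrete is the exponential reliability model R(t) = exp(-λt); the constant failure rate λ is a modeling assumption, not something stated in this handout:

```python
import math

def reliability(failure_rate_per_hour, hours):
    """Probability of no failure over `hours`, assuming a constant failure rate
    (the exponential reliability model R(t) = exp(-lambda * t))."""
    return math.exp(-failure_rate_per_hour * hours)

# Hypothetical system averaging one failure per 1000 hours of operation:
print(round(reliability(0.001, 100), 3))  # 0.905 -- ~90% chance of 100 failure-free hours
```

Real reliability estimation fits observed failure data to such a model rather than assuming a rate up front.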
Load testing is a blanket term that is used in many different ways across the professional software
testing community. The term is often used synonymously with stress testing, performance testing,
reliability testing, and volume testing. Load testing generally stops short of stress testing: during
stress testing, the load is so great that errors are the expected result, though there is a gray area
between stress testing and load testing.
What is software reusability?
In computer science and software engineering, reusability is the likelihood that a segment of source
code can be used again to add new functionality with little or no modification. Reusable
modules and classes reduce implementation time, increase the likelihood that prior testing and use
have eliminated bugs, and localize code modifications when a change in implementation is required.
Measurement of Software ‘Quality’
With the help of testing, it is possible to measure the quality of software in terms of defects found,
for both functional and non-functional software requirements and characteristics (e.g. reliability,
usability, efficiency, maintainability and portability).
Testing can give confidence in the quality of the software if it finds few or no defects. A properly
designed test that passes reduces the overall level of risk in a system. When testing does find
defects, the quality of the software system increases when those defects are fixed. Lessons should
be learned from previous projects: by understanding the root causes of defects found in other
projects, processes can be improved, which in turn should prevent those defects from recurring
and, as a consequence, improve the quality of future systems. This is an aspect of quality assurance.
History & Evolution of Software Testing
The effective functioning of modern systems depends on our ability to produce software in a
cost-effective way. The term software engineering was first used at a 1968 NATO workshop in West
Germany, which focused on the growing software crisis! Thus we see that the software crisis of
quality, reliability, high costs etc. started way back when most of today's software testers were not
even born!
Handout - Software Testing Fundamentals
Page 9
©Copyright 2007, Cognizant Technology Solutions, All Rights Reserved
C3: Protected
The attitude towards software testing has undergone a major positive change in recent years. In
the 1950s, when machine languages were used, testing was nothing but debugging. In the
1960s, when compilers were developed, testing started to be considered a separate activity from
debugging. In the 1970s, when software engineering concepts were introduced, software
testing began to evolve as a technical discipline. Over the last two decades there has been an
increased focus on better, faster and more cost-effective software. There has also been a growing
interest in software safety, protection and security, and hence an increased acceptance of testing
as a technical discipline and also as a career choice!
Now to answer, “What is testing?” we can go by the famous definition of Myers, which says,
“Testing is the process of executing a program with the intent of finding errors”
What is Software Quality Assurance?
Quality assurance ensures that all parties concerned with the project adhere to the process,
procedures, standards and templates, and to test readiness reviews.
Rob Davis' QA service depends on the customers and projects. A lot will depend on team leads or
managers, feedback to developers, and communication among customers, managers, developers,
test engineers and testers.
Software Quality Assurance (SWQA), as Rob Davis practices it, is oriented to *prevention*. It
involves the entire software development process: monitoring and improving the process,
making sure any agreed-upon standards and procedures are followed, and ensuring
problems are found and dealt with. Software testing, when performed by Rob Davis, is
oriented to *detection*. Testing involves the operation of a system or application under controlled
conditions and evaluating the results. Organizations vary considerably in how they assign
responsibility for QA and testing. Sometimes they are the combined responsibility of one group or
individual. Also common are project teams, which include a mix of test engineers, testers and
developers who work closely together, with overall QA processes monitored by project managers.
It depends on what best fits your organization's size and business structure. Rob Davis can
provide QA and/or SWQA.
What is a Defect & its significance?
A mismatch between the application and its specification is a defect. A software error is present
when the program does not do what its end user expects it to do.
A defect is a product anomaly or flaw. Defects include such things as omissions and imperfections
found during testing phases. Symptoms (flaws) of faults contained in software that is sufficiently
mature for production are considered defects. A deviation from expectation that is to be
tracked and resolved is also termed a defect.
Rigorous evaluation uses assumptions about the arrival or discovery rates of defects during the
testing process. The actual data about defect rates are then fit to the model. Such an evaluation
estimates the current system reliability and predicts how the reliability will grow if testing and defect
removal continue. This evaluation is described as system reliability growth modeling.
Defect Classification
The severity of bugs will be classified as follows:
Critical: The problem prevents further processing and testing. The development team must be
informed immediately and they need to take corrective action immediately.
Handout - Software Testing Fundamentals
Page 10
©Copyright 2007, Cognizant Technology Solutions, All Rights Reserved
C3: Protected
High: The problem affects selected processing to a significant degree, making it inoperable,
causing data loss, or possibly causing a user to make an incorrect decision or entry. The
development team must be informed that day, and needs to take corrective action within 0 to 24 hours.
Medium: The problem affects selected processing, but has a work-around that allows continued
processing and testing. No data loss is suffered. These may be cosmetic problems that hamper
usability or divulge client-specific information. The development team must be informed within 24
hours, and they need to take corrective action within a day or two.
Low: The problem is cosmetic, and does not affect further processing and testing. The
development team must be informed within 48 hours, and they need to take corrective action
within 48-96 hours.
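The classification above can be sketched as a small severity lookup (a hypothetical helper, not part of any tool mentioned in this handout; the exact hour values encode the prose deadlines above):

```python
# Severity policy from the classification above: (notify within, fix within), in hours.
SEVERITY_POLICY = {
    "critical": (0, 0),    # inform immediately, corrective action immediately
    "high":     (24, 24),  # inform that day, fix within 0-24 hours
    "medium":   (24, 48),  # inform within 24 hours, fix within a day or two
    "low":      (48, 96),  # inform within 48 hours, fix within 48-96 hours
}

def response_window(severity):
    """Return the (notify, fix) deadlines in hours for a reported defect."""
    return SEVERITY_POLICY[severity.lower()]

print(response_window("Medium"))  # (24, 48)
```

Encoding the policy as data like this makes it easy for a defect-tracking script to flag overdue bugs automatically.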
Cost of Defect
An evaluation of defects discovered during testing provides the best indication of software quality.
Quality is the indication of how well the system meets the requirements. So in this context defects
are identified as any failure to meet the system requirements. Defect evaluation is based on
methods that range from simple number count to rigorous statistical modeling.
The cost of defects is very high when defects are uncovered at a later stage of the software
development life cycle.
Summary
• “Testing is the process of executing a program with the intent of finding errors.”
• Software testing basics.
• The testing process and lifecycle.
• Defects and their significance.
Test Your Understanding
1. The primary objective of testing is
a) to show that the program works
b) to provide a detailed indication of quality
c) to find errors
d) to protect the end-user
2. Which of the following will be the best definition for Testing?
a) The goal / purpose of testing is to demonstrate that the program works.
b) The purpose of testing is to demonstrate that the program is defect free.
c) The purpose of testing is to demonstrate that the program does what it is supposed to do.
d) Testing is executing Software for the purpose of finding defects.

Chapter 2: Software Development Life Cycle

Software Development Life Cycle and Testing Phase
Let us look at the traditional software development life cycle versus the presently most commonly
used life cycle.
In Fig A above, the Testing phase comes after development (coding) is complete and
before the product is launched and goes into the Maintenance phase. This model has some
disadvantages: the cost of fixing errors is high because errors are not found until coding is
complete, and if there is an error in the Requirements phase then all subsequent phases must
change, so the total cost becomes very high.
The Fig B shows the recommended Test Process involves testing in every phase of the life cycle.
During the Requirements phase, the emphasis is upon validation to determine that the defined
requirements meet the needs of the customers. During Design and Development phases, the
emphasis is on verification to ensure that the design and program accomplish the defined
requirements. During the Test and Installation phases, the emphasis is on inspection to determine
that the implemented system meets the system specification. During the maintenance phases, the
system will be re-tested to determine that the changes work and that the unchanged portion
continues to work.
The Fig B approach is the more useful and significant one because testing is involved at an
early stage of the SDLC, which helps to expose any ambiguities or inconsistencies in the
requirement specification. It also reduces the cost of fixing defects, since defects are found in
the early stages.
The thought process of designing tests early in the life cycle (verifying the test basis via test
design) can help to prevent defects from being introduced into code. Reviews of documents (e.g.
requirements) also help to prevent defects appearing in the code.
Involving software testing in all phases of the software development life cycle has become a
necessity as part of the software quality assurance process. Right from the requirements study till
the implementation, testing is involved at every phase. The V-Model of the Software Testing
Life Cycle, along with the Software Development Life Cycle, works as a verification and
validation model.
Fig A: Advantages:
• Testing is inherent to every phase of the waterfall model.
• It is an enforced disciplined approach.
• It is documentation driven, that is, documentation is produced at every stage.
The disadvantage of waterfall development is that it does not allow for much reflection or revision.
Once an application is in the testing stage, it is very difficult to go back and change something that
was not well-thought out in the concept stage. Alternatives to the waterfall model include joint
application development (JAD), rapid application development (RAD), synch and stabilize, build
and fix, and the spiral model.
It also increases the cost and time of fixing a defect, since defects are found at later stages.
Fig B Advantages:
In the V model, testing commences at the beginning, unlike in traditional models where testing is
given attention only after the coding phase, which causes failures and huge repair costs.
It is a proactive model.
It reduces the cost and time of fixing defects, since defects are found at early stages.
Requirement Analysis
The main objective of requirement analysis is to prepare a document that includes all the
customer requirements; that is, the Software Requirement Specification (SRS) document is the
primary output of this phase. Proper requirements and specifications are critical for a
successful project. Removing errors at this phase is far cheaper than removing the same errors
once they have propagated to the Design phase or later. You should also verify the following
activities:
• Determine the verification approach.
• Determine the adequacy of requirements.
• Generate functional test data.
• Determine the consistency of design with requirements.
The requirement document is used to determine the testable requirements. So what is a testable
requirement?
A testable requirement is a requirement that is precisely and unambiguously defined, such that
someone can write a test case that would validate whether or not the requirement has been
implemented correctly. This is the source of the term "testable requirement."
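As a sketch, suppose a hypothetical requirement reads "a login ID must be 6 to 12 alphanumeric characters" (an invented requirement, not from this handout). It is testable precisely because a test case can check it:

```python
def is_valid_login_id(login_id):
    """Hypothetical implementation of the requirement under test:
    a login ID must be 6-12 alphanumeric characters."""
    return 6 <= len(login_id) <= 12 and login_id.isalnum()

# Test cases that validate whether the requirement is implemented correctly:
assert is_valid_login_id("user123")          # within 6-12 chars, alphanumeric
assert not is_valid_login_id("ab1")          # too short
assert not is_valid_login_id("user name99")  # contains a space
print("requirement verified")
```

A vague requirement such as "login IDs should be reasonably short" would admit no such test, which is exactly what makes it untestable.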
As a tester, you need to review documents (e.g. requirements), which also helps to prevent
defects from appearing in the code.
Design Phase
In this phase the design of the project is divided into two levels:
• High-Level Design or System Design.
• Low-Level Design or Detailed Design.
High-Level Design or System Design (HLD)
High-level design gives the overall system design in terms of functional architecture and
database design. This is very useful for the developers to understand the flow of the system. In
this phase the design team, review team (testers) and customers play an important role. The
entry criterion is the requirement document, that is, the SRS. The exit criteria are the HLD, project
standards, the functional design documents, and the database design document.
Low-Level Design (LLD)
During the detailed design phase, the view of the application developed during high-level design is
broken down into modules and programs. Logic design is done for every program and then
documented as program specifications. For every program, a unit test plan is created.
The entry criterion for this phase is the HLD document. The exit criteria are the program
specifications and unit test plan (LLD).
Development Phase
This is the phase where coding actually starts. After the preparation of the HLD and LLD, the
developers know what their role is, and they develop the project according to the specifications.
This stage produces the source code, executables, and database. The output of this phase is the
subject of subsequent testing and validation.
We should also verify these activities:
• Determine the adequacy of the implementation.
• Generate structural and functional test data for programs.
The inputs for this phase are the physical database design document, project standards, program
specifications, unit test plan, program skeletons, and utility tools. The outputs are test data,
source code, executables, and code reviews.
Testing Phase
This phase is intended to find defects that can be exposed only by testing the entire system. This
can be done by static testing or dynamic testing. Static testing means testing the product without
executing the code; we do it by examining and reviewing, and by inspecting testing artifacts
and documents.
Dynamic testing is what you would normally think of as testing: we test the executing parts of the
project. When we test the software by executing it and comparing the actual and expected results,
it is called dynamic testing.
A series of different tests is done to verify that all system elements have been properly integrated
and the system performs all its functions.
Note that system test planning can occur before coding is completed; indeed, it is often done in
parallel with coding. The input for this phase is the requirements specification document, and the
outputs are the system test plan and the test results.
Implementation or Deployment Phase
This phase includes two basic tasks:
• Getting the software accepted
• Installing the software at the customer site.
Acceptance consists of formal testing conducted by the customer according to the acceptance test
plan prepared earlier, and analysis of the test results to determine whether the system satisfies its
acceptance criteria. When the results of the analysis satisfy the acceptance criteria, the user
accepts the software.
Maintenance Phase
This phase covers all modifications, whether the delivered system fails to meet the customer's
requirements or something must be appended to the present system. All types of corrections to
the project or product take place in this phase, and the cost of risk is very high here. This is the
last phase of the software development life cycle. The input is the project to be corrected, and
the output is a modified version of the project.
V- Model
Software testing has been accepted as a separate discipline to the extent that there is a separate
life cycle for the testing activity. Involving software testing in all phases of the software
development life cycle has become a necessity as part of the software quality assurance process.
Right from the requirements study till the implementation, testing needs to be done at every
phase. The V-Model of the Software Testing Life Cycle, along with the Software Development
Life Cycle given below, indicates the various phases or levels of testing.
From the V-model, we see that there are various levels or phases of testing, namely unit testing,
integration testing, system testing, user acceptance testing, etc. Let us look at brief definitions of
the widely employed types of testing.
Unit Testing: The testing done to a unit or the smallest piece of software, to verify that it
satisfies its functional specification or its intended design structure.
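A minimal unit-testing sketch using Python's built-in unittest module; the function under test is an invented example, not taken from this handout:

```python
import unittest

def add_tax(amount, rate=0.08):
    """Unit under test: price plus sales tax, rounded to cents."""
    return round(amount * (1 + rate), 2)

class TestAddTax(unittest.TestCase):
    """Unit tests verifying the function against its (assumed) specification."""

    def test_default_rate(self):
        self.assertEqual(add_tax(100), 108.0)

    def test_zero_rate(self):
        self.assertEqual(add_tax(50, rate=0), 50.0)

# Run the suite programmatically so the example reports its own result:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAddTax)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("failures:", len(result.failures))  # failures: 0
```

Each test exercises the unit in isolation, which is exactly the scope the definition above describes.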
Integration Testing: Testing which takes place as sub elements are combined (i.e., integrated) to
form higher-level elements.
Regression Testing: Selective re-testing of a system to verify that modifications (bug fixes) have
not caused unintended effects and that the system still complies with its specified requirements.
System Testing: Testing the software for the required specifications on the intended hardware.
Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its
acceptance criteria, which enables a customer to determine whether to accept the system or not.
Performance Testing: To evaluate the time taken or response time of the system to perform its
required functions, in comparison with the specified performance requirements.
Stress Testing: To evaluate a system beyond the limits of its specified requirements or system
resources (such as disk space, memory, or processor utilization) to ensure that the system does
not break down.
Load Testing: A subset of stress testing that verifies a web site can handle a particular number of
concurrent users while maintaining acceptable response times.
Alpha Testing: Testing of a software product or system conducted at the developer’s site by the
customer.
Beta Testing: Testing conducted at one or more customer sites by the end users of a delivered
software product or system.
Software Testing Life Cycle
According to the respective project, the scope of testing can be tailored, but the process
mentioned above is common to any testing activity.
Agile or Iterative Model
Agile development methods apply time-boxed iterative and evolutionary development and
adaptive planning, promote evolutionary delivery, and include other values and practices that
encourage rapid and flexible response to change.
Agile Principles
• The highest priority is to satisfy the customer through early and continuous delivery of
valuable software.
• Welcome changing requirements, even late in development. Agile processes harness
change for the customer’s competitive advantage.
• Deliver working software frequently, from a couple of weeks to a couple of months, with
a preference for the shorter time scale.
• Business people, developers and the testing team must work together daily throughout
the project.
• Build projects around motivated individuals. Give them the environment and support they
need, and trust them to get the job done.
• The most efficient and effective method of conveying information to and within a
development team is face-to-face conversation.
• Working software is the primary measure of progress.
• Agile processes promote sustainable development.
• Continuous attention to technical excellence and good design enhances agility.
• The best architectures, requirements, and designs emerge from self-organizing teams.
• At regular intervals, the team reflects on how to become more effective, then tunes
and adjusts its behavior accordingly.
The iterative model is the foundation of modern software methods. Iterative development is an
approach to building software in which the overall lifecycle is composed of several iterations in
sequence. Each iteration is a self-contained mini-project composed of activities such as
requirements analysis, design, programming, and test. The goal at the end of an iteration is an
iteration release: a stable, integrated and tested, partially complete system. The final iteration
release is the complete product, released to the market or to clients.
Iterative-incremental development is the process of establishing requirements, designing, and
building and testing a system, done as a series of shorter development cycles.
Examples are prototyping, rapid application development (RAD), the Rational Unified Process (RUP)
and agile development models. A system produced by an iteration may be tested at
several levels as part of its development. An increment, added to others developed previously,
forms a growing partial system, which should also be tested. Regression testing is increasingly
important on all iterations after the first one.
Independent Testing
The mindset used while testing and reviewing is different from that used while developing
software. With the right mindset, developers are able to test their own code, but separating this
responsibility out to a tester is typically done to help focus the effort and to provide additional
benefits, such as an independent view by trained, professional testing resources. Independent
testing may be carried out at any level of testing.
A certain degree of independence (avoiding the author bias) is often more effective at finding
defects and failures. Independence is not, however, a replacement for familiarity and developers
can efficiently find many defects in their own code. Several levels of independence can be defined:
• Tests designed by the person(s) who wrote the software under test (low level of
independence).
• Tests designed by another person(s) (e.g. from the development team).
• Tests designed by a person(s) from a different organizational group (e.g. an
independent test team) or by test specialists (e.g. usability or performance test
specialists).
• Tests designed by a person(s) from a different organization or company (i.e.
outsourcing or certification by an external body).
People and projects are driven by objectives. People tend to align their plans with the objectives
set by management and other stakeholders, for example, to find defects or to confirm that software
works. Therefore, it is important to clearly state the objectives of testing.
Identifying failures during testing may be perceived as criticism against the product and against the
author. Testing is, therefore, often seen as a destructive activity, even though it is very constructive
in the management of product risks. Looking for failures in a system requires curiosity,
professional pessimism, a critical eye, attention to detail, good communication with development
peers, and experience on which to base error guessing.
If errors, defects or failures are communicated in a constructive way, bad feelings between the
testers and the analysts, designers and developers can be avoided. This applies to reviewing as
well as in testing.
What is the difference between independent testing and development-cum-testing?
Complex business needs, multiple delivery and computing platforms, time-to-market compulsions
and increasing user sophistication have led to an exponential increase in software complexity and
Handout - Software Testing Fundamentals
Page 18
©Copyright 2007, Cognizant Technology Solutions, All Rights Reserved
C3: Protected
size. This trend has created avenues to examine paradigms that emphasize software
development and testing as dedicated streams in software engineering.
• Independent testing is important because independent testers see other and
different defects, and are unbiased.
• Improves software quality: The primary benefit of engaging organizations
with an independent testing practice is improved software quality.
• Reduces time to market: Organizations with an independent testing practice ensure
improved time to market by enabling faster turnaround of releases.
• Optimizes testing costs: Testing costs are a significant component of the total software
project cost. Organizations with an independent testing practice let you optimize your
testing spend, since they use automation, employ specialized resources across
projects, and leverage multiple assignments for resource optimization.
• Lowers lifecycle costs: Software firms with an independent testing practice can
provide exclusive focus on quality and conformance to requirements, ensuring that
software is engineered for low failure rates and reduced maintenance costs.
This is achieved by subjecting the software to rigorous testing cycles across
testing streams such as functionality testing, performance testing, load testing and so
on, and by an improved focus on maintainability and scalability to address future needs.
Development-cum-testing (software development engineers allocated to testing):
• Conformance to requirements and performance standards is suspect.
• Defect reporting may be biased, which can result in poor quality.
• Effort goes into solution building rather than testing.
• Development-cum-testing loses the end-user perspective.
• Delivery deadline pressures may result in shallow testing.
Summary
• Evolution of software testing
• The testing process and lifecycle
• Broad categories of testing
• Widely employed types of testing
• Agile or iterative testing methods
• Independent testing
Test Your Understanding
1. Which of the following is a benefit of independent testing?
a) Code cannot be released into production until independent testing is complete.
b) Testing is isolated from development.
c) Developers do not have to take as much responsibility for quality.
d) Independent testers see other and different defects, and are unbiased.
2. Which of the following statements regarding static testing is false?
a) Static testing requires the running of tests through the code
b) Static testing includes desk checking
c) Static testing includes techniques such as reviews and inspections
d) Static testing can give measurements such as cyclomatic complexity
3. Which of the following is a characteristic of good testing in any life cycle model?
a) All document reviews involve the development team.
b) Some, but not all, development activities have corresponding test activities.
c) Each test level has test objectives specific to that level.
d) Analysis and design of tests begins as soon as development is complete.

Introduction:
This module explains various levels of software testing:
• Unit testing
• Integration testing
• System testing
• Acceptance testing
• Migration/Upgrade testing
© 2007, Cognizant Technology Solutions Confidential 5
Levels of Testing: Objectives
Objective:
After completing this chapter, you will be able to:
• Explain unit testing
• Describe integration testing
• List the various integration test approaches
• Describe system testing
• Explain acceptance testing
• Evaluate migration/upgrade testing
Unit Testing
• Unit testing:
• Testing of individual program components
• Usually the responsibility of the component developer
• Tests are derived from the developer's experience
• Purpose of unit testing:
• To ensure that each as-built module behaves according to its
specification defined during detailed design
• To remove coding errors such as typos, basic logic problems, and
syntax errors
Unit Testing (Contd.)
• Key things to be tested are:
• Information flow in and out of the module
• Number of input parameters equals the number of arguments
• Parameter and argument types match
• Parameters passed in the correct order
• Incorrect variable names
• Inconsistent data types
• Boundary conditions
• All error-handling paths
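The checks above can be sketched as a small unit test, here in Python with `unittest`; the `divide` function and its specification are purely illustrative assumptions, not from the handout:

```python
import unittest

def divide(numerator, denominator):
    """Illustrative unit under test: return numerator / denominator,
    rejecting bad input per its (assumed) detailed-design spec."""
    if not isinstance(numerator, (int, float)) or not isinstance(denominator, (int, float)):
        raise TypeError("arguments must be numeric")
    if denominator == 0:
        raise ZeroDivisionError("denominator must be non-zero")
    return numerator / denominator

class DivideUnitTest(unittest.TestCase):
    def test_normal_case(self):
        # Information flow in and out of the module.
        self.assertEqual(divide(10, 2), 5)

    def test_boundary_condition(self):
        # Zero numerator is a legal boundary value.
        self.assertEqual(divide(0, 5), 0)

    def test_error_handling_zero_denominator(self):
        # One of the error-handling paths.
        with self.assertRaises(ZeroDivisionError):
            divide(1, 0)

    def test_error_handling_bad_type(self):
        # Parameter type mismatch is rejected.
        with self.assertRaises(TypeError):
            divide("1", 2)
```

Run the file with `python -m unittest` to execute the test cases.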
Integration Testing
• Integration testing:
• Testing of groups of components integrated to create a system or subsystem
• The responsibility of an independent testing team
• Tests are based on a system specification
• Purpose of integration testing:
• To ensure that each as-built component behaves according to the
specification defined during preliminary design
• To test the communicating interfaces among integrated components to
avoid communication errors
Integration Testing (Contd.)
• Key things to be tested are:
• Combining and testing multiple components together
• Integration of modules, programs and functions
• Testing internal program interfaces
• Testing external interfaces for modules
• Data flow between two modules
• Control flow between two modules
Integration Testing Approaches
• Integration testing approaches:
• Top-down
• Bottom-up
• Bi-directional
Integration Testing: Top Down
• Top-down integration strategy:
• Top-down integration focuses on testing the top layer, or controlling
subsystem (the main program), first.
• The general process is to gradually add the subsystems that are
referenced/required by the already tested subsystems while testing
the application.
• Continue the process until all subsystems are incorporated into the test.
• Start with the high-level system and integrate from the top down, replacing
individual components with stubs wherever appropriate.
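The stub-based, top-down process described above can be sketched as follows; the `checkout` controller, the tax subsystem, and the flat 10% rate are hypothetical names and values used only for illustration:

```python
# Top-down integration sketch: test the controlling layer first,
# replacing a not-yet-integrated subsystem with a stub.

def tax_service_stub(amount):
    # Stub standing in for the real tax subsystem (not yet integrated).
    return round(amount * 0.10, 2)

def checkout(amount, tax_service):
    """Top-level controller under test; the tax subsystem is injected."""
    tax = tax_service(amount)
    return round(amount + tax, 2)

# Step 1: test the top layer against the stub.
assert checkout(100.0, tax_service_stub) == 110.0

# Step 2: once the real subsystem is ready, swap the stub for it
# and re-run the same test, gradually growing the integrated system.
def real_tax_service(amount):
    rate = 0.10  # assume a flat rate for this sketch
    return round(amount * rate, 2)

assert checkout(100.0, real_tax_service) == 110.0
```

Injecting the subsystem as a parameter is what makes the stub-for-real swap trivial at each integration step.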
Integration Testing: Top Down (Contd.)
• Advantages of top-down integration:
• Core functionality tested early
• Disadvantages of top-down integration:
• Key interface defects trapped late in the cycle
Integration Testing: Bottom Up
• Bottom-up integration:
• Bottom-up integration focuses on testing the units at the lowest
levels first, that is, the units at the leaves of the
decomposition tree.
• The general process is to gradually include the subsystems that
reference/require the previously tested subsystems while testing
the application.
• This is done repeatedly until all subsystems are included in the testing.
Integration Testing: Bottom Up (Contd.)
• Advantages of bottom-up integration:
• Key interface defects trapped early in the cycle
• Disadvantages of bottom-up integration:
• Core functionality tested late in the cycle
Integration Testing: Bi Directional
• Bi-directional integration:
• Combines the top-down strategy with the bottom-up strategy
• Pros and cons of bi-directional integration:
• Top and bottom layer tests can be done in parallel
• Does not test the individual subsystems thoroughly before integration
System Testing
• System testing comprises:
• Specification-based testing
• Typically independent team testing
• Simulated environment testing
• Live/simulated user data
• Testing the whole system
• Functional and non-functional requirements testing
• Business transaction-driven testing
• Uncovering compatibility errors
• Uncovering performance limitations
System Testing (Contd.)
• Purpose of system testing:
• To test the system as a whole
• To identify defects which cannot be attributed to individual components
• To identify defects which cannot be attributed to interaction between
two components
• To verify end-to-end workflows and scenarios
• To find unexpected behavior in the system
System Testing Types
• Types of system testing:
• Functional testing (sanity/regression)
• Performance and scalability testing
• Load/stress testing
• Usability testing
• Installability testing
• Disaster and recovery testing
• Security testing
• Compatibility testing
• Concurrency testing
Acceptance Testing
• Acceptance testing:
• To determine whether a system satisfies its acceptance criteria or not
• To enable the customer to determine whether to accept the system or not
• To test the software in the "real world" by the intended audience
• Purpose of acceptance testing:
• To verify that the system or its changes meet the original needs
• To evaluate the system under normal business circumstances in a
controlled environment
Acceptance Testing (Contd.)
• Procedures for conducting acceptance testing:
• Define the acceptance criteria:
• Functionality requirements
• Performance requirements
• Interface quality requirements
• Overall software quality requirements
• Develop an acceptance plan:
• Project description
• User responsibilities
• Acceptance description
• Execute the acceptance test plan.
Migration/Upgrade Testing
• Migration/Upgrade testing:
• Determines the differences between the expected results (from the source
environment) and the observed results (from the migrated application)
• Need for migration/upgrade testing:
• Identifying the errors in the database that occur due to migration
• Checking the business data format
• Ensuring the data clean-up process is done properly
• Checking that all rows are imported into the target database
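One of these checks (all rows imported into the target database, with the data intact) can be sketched with an in-memory SQLite database; the `customers` table and its rows are illustrative, not from any real migration:

```python
import sqlite3

# Sketch of one migration check: every row in the source table must
# arrive in the target database (table name and schema are illustrative).
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for db in (source, target):
    db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

source.executemany("INSERT INTO customers VALUES (?, ?)",
                   [(1, "Ada"), (2, "Grace"), (3, "Edsger")])

# Simulated migration: copy all rows from source to target.
rows = source.execute("SELECT id, name FROM customers").fetchall()
target.executemany("INSERT INTO customers VALUES (?, ?)", rows)

def row_count(db):
    return db.execute("SELECT COUNT(*) FROM customers").fetchone()[0]

# Migration test: row counts match and the data survived intact.
assert row_count(source) == row_count(target) == 3
assert (source.execute("SELECT name FROM customers ORDER BY id").fetchall()
        == target.execute("SELECT name FROM customers ORDER BY id").fetchall())
```

In a real migration the same pattern would compare counts and checksums per table between the source environment and the migrated application.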
Questions from participants
Test Your Understanding
1. What is unit testing?
2. What are the various integration testing approaches?
Levels of Testing: Summary
• Levels of testing help you to:
• Ensure the customer requirements are implemented.
• Evaluate the product from a user perspective.
• Identify defects as early as possible.
• Reduce the risk of finding defects in production.

Chapter 1: Verification and Validation

Learning Objectives
After completing this chapter, you will be able to:
• Define verification and validation
• Differentiate verification and validation
• List the activities performed during verification and validation
What is Verification
The standard definition of Verification is: "Are we building the product RIGHT?" That is, Verification is a
process which ensures that the software product is developed the right way. The software should
conform to its predefined specifications; as product development goes through different stages, an
analysis is done to ensure that all required specifications are met. The methods and techniques
used in Verification and Validation should be designed carefully, and their planning starts right
at the beginning of the development process. The Verification part of the 'Verification and Validation
Model' comes before Validation and incorporates software inspections, reviews, audits,
walkthroughs, buddy checks, etc. in each phase (every phase of Verification is a
phase of the Testing Life Cycle).
During Verification, the work product (the ready part of the software being developed and
its various documents) is reviewed/examined personally by one or more persons in order to find
and point out defects in it. This process helps prevent potential bugs that could cause
the project to fail.
Terms Related to Verification
Inspection involves a team of about 3-6 people, led by a leader, which formally reviews the
documents and work product during various phases of the product development life cycle. The
work product and related documents are presented to the inspection team, whose members
bring different interpretations to the presentation. The bugs detected during the
inspection are communicated to the next level so that they can be taken care of.
Walkthrough can be considered the same as an inspection but without formal preparation (of any
presentation or documentation). During the walkthrough meeting, the presenter/author introduces
the material to all the participants in order to make them familiar with it. Although a
walkthrough can help in finding potential bugs, walkthroughs are mainly used for knowledge
sharing and communication.
Buddy Checks:
This is the simplest type of review activity used to find bugs in a work product during
verification. In a buddy check, one person goes through the documents prepared by another person
in order to find out if that person has made any mistakes or errors.
Handout - Testing Type
Page 6
©Copyright 2007, Cognizant Technology Solutions, All Rights Reserved
C3: Protected
The activities involved in the Verification process are: requirement specification verification,
functional design verification, internal/system design verification and code verification (these
phases can also be subdivided further). Each activity ensures that the product is developed the
right way and that every requirement, specification, design element and piece of code is verified.
What is Validation
Validation is the process of finding out if the product being built is right, i.e. whatever software
product is being developed, it should do what the user expects it to do. The software product
should functionally do what it is supposed to; it should satisfy all the functional requirements set by
the user. Validation is done during or at the end of the development process in order to determine
whether the product satisfies the specified requirements.
The Validation and Verification processes go hand in hand, but visibly the Validation process starts
after the Verification process ends (after coding of the product ends). Each Verification activity (such
as Requirement Specification Verification, Functional Design Verification, etc.) has a corresponding
Validation activity (such as Functional Validation/Testing, Code Validation/Testing,
System/Integration Validation, etc.).
All types of testing methods are basically carried out during the Validation process. Test plans, test
suites and test cases are developed, which are used during the various phases of the Validation
process. The phases involved in Validation process are: Code Validation/Testing, Integration
Validation/Integration Testing, Functional Validation/Functional Testing, and System/User
Acceptance Testing/Validation.
Terms used in Validation Process:
Code Validation/Testing:
Developers as well as testers do the code validation. Unit Code Validation or Unit Testing is a type
of testing, which the developers conduct in order to find out any bug in the code unit/module
developed by them. Code testing other than Unit Testing can be done by testers or developers.
Integration Validation/Testing:
Integration testing is carried out in order to find out whether different (two or more) units/modules
integrate properly. This test helps in finding out whether there is any defect in the interface between
different modules.
Functional Validation/Testing:
This type of testing is carried out to find out whether the system meets the functional requirements.
In this type of testing, the system is validated for its functional behavior. Functional testing does not
deal with the internal coding of the project; instead, it checks whether the system behaves as per the
specified functional requirements.
User Acceptance Testing or System Validation:
In this type of testing, the developed product is handed over to the user/third-party testers in order
to test it in a real-time scenario. The product is validated to find out whether it works according to the
system specifications and satisfies all the user requirements. As the user/third-party testers use
the software, bugs that are as yet undiscovered may come up; these are communicated to the
developers to be fixed. This helps in improving the final product.
Software Verification and Validation Model
Major Software V & V Activities
Table 2-1. Major Software V&V Activities
Software V&V Management
• Planning
• Monitoring
• Evaluating results, impact of change
• Reporting
Software Requirements V&V
• Review of concept documentation (if not performed
prior to software requirements development)
• Traceability analysis
• Software Requirements Evaluation
• Interface Analysis
• Initial Planning for Software System Test
• Reporting
Software Design V&V
• Traceability Analysis
• Software Design Evaluation
• Interface Analysis
• Initial Planning for Unit Test
• Initial Planning for Software Integration Test
• Report
Code V&V
• Traceability Analysis
• Code Evaluation
• Interface Analysis
• Completion of Unit Test Preparation
• Reporting
Unit Test
• Unit Test Execution
• Reporting
Software Integration Test
• Completion of Software Integration Test Preparation
• Execution of Software Integration Tests
• Reporting
Software System Test
• Completion of Software System Test Preparation
• Execution of Software System Tests
• Reporting
Software Installation Test
• Installation Configuration Audit
• Reporting
Software Operation and Maintenance V&V
• Impact-of-Change Analysis
• Repeat Management V&V
• Repeat Technical V&V Activities
Goals for Validation and Verification
• Verification and validation should establish confidence that the software is fit for
purpose. This does not mean completely free of defects; rather, it must be good
enough for its intended use.
• The type of use will determine the degree of confidence that is needed.
• The discovery of defects in the system
• The assessment of whether or not the system is usable in an operational situation

Chapter 2: Black Box Testing

Learning Objectives
After completing this chapter, you will be able to:
• Explain black box testing
• Describe the testing methods available for black box testing
Black-box test design treats the system as a literal "black box", so it does not explicitly use
knowledge of the internal structure. It is usually described as focusing on testing functional
requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box.
Black box testing is testing without knowledge of the internal workings of the item being
tested. For example, when black box testing is applied to software engineering, the tester would
only know the "legal" inputs and what the expected outputs should be, but not how the program
actually arrives at those outputs. Because of this, black box testing can be considered testing
with respect to the specifications; no other knowledge of the program is necessary. For this
reason, the tester and the programmer can be independent of one another, avoiding programmer
bias toward his own work. Test groups are often used for this testing; although they are centered
on knowledge of the user requirements, black box tests do not necessarily involve the participation
of users. Among the most important black box tests that do not involve users are functionality testing,
volume tests, stress tests, recovery testing, and benchmarks. Additionally, there are two types of
black box test that involve users, i.e. field and laboratory tests. In the following, the most important
aspects of these black box tests will be described briefly.
Black Box Testing – Without User Involvement
So-called "functionality testing" is central to most testing exercises. Its primary objective is to
assess whether the program does what it is supposed to do, i.e. what is specified in the
requirements. There are different approaches to functionality testing. One is to test each
program feature or function in sequence. The other is to test module by module, i.e. each function
where it is called first. The objective of volume tests is to find the limitations of the software by
processing a huge amount of data. A volume test can uncover problems related to the
efficiency of a system, e.g. incorrect buffer sizes or consumption of too much memory space, or it
may simply show that an error message is needed to tell the user that the system cannot process
the given amount of data.
During a stress test, the system has to process a huge amount of data or perform many function
calls within a short period of time. A typical example could be to perform the same function from all
workstations connected in a LAN within a short period of time (e.g. sending e-mails, or, in the NLP
area, to modify a term bank via different terminals simultaneously). The aim of recovery testing is
to determine to what extent data can be recovered after a system breakdown. Does the system
provide the possibility to recover all of the data or only part of it? How much can be recovered, and
how? Is the recovered data still correct and consistent? Recovery testing is very important,
particularly for software that needs to meet high reliability standards. The notion of benchmark tests
involves testing program efficiency. The efficiency of a piece of software strongly depends on the
hardware environment, and therefore benchmark tests always consider the software/hardware
combination. Whereas for most software engineers benchmark tests are concerned with the
quantitative measurement of specific operations, some also consider user tests that compare the
efficiency of different software systems as benchmark tests. In the context of this document,
however, benchmark tests only denote operations that are independent of personal variables.
Black Box Testing – With User Involvement
For tests involving users, methodological considerations are rare in the software engineering
literature. Rather, one may find practical test reports that distinguish roughly between field and
laboratory tests. In the following, only a rough description of field and laboratory tests will be given,
e.g. scenario tests. The term "scenario" entered software evaluation in the early 1990s. A scenario
test is a test case which aims at a realistic user background for the evaluation of software as it is
defined and performed. It is an instance of black box testing where the major objective is to assess
the suitability of a software product for every-day routines. In short, it involves putting the system into
its intended use by its envisaged type of user, performing a standardized task. In field tests, users
are observed while using the software system at their normal working place. Apart from general
usability-related aspects, field tests are particularly useful for assessing the interoperability of the
software system, i.e. how the technical integration of the system works.
Moreover, field tests are the only real means to elucidate problems of the organizational
integration of the software system into existing procedures. Particularly in the NLP environment
this problem has frequently been underestimated. A typical example of the organizational problem
of implementing a translation memory is the language service of a big automobile manufacturer,
where the major implementation problem is not the technical environment, but the fact that many
clients still submit their orders as print-out, that neither source texts nor target texts are properly
organized and stored and, last but not least, individual translators are not too motivated to change
their working habits.
Laboratory tests are mostly performed to assess the general usability of the system. Due to the
high laboratory equipment costs laboratory tests are mostly only performed at big software houses
such as IBM or Microsoft. Since laboratory tests provide testers with many technical possibilities,
data collection and analysis are easier than for field tests.
Testing Strategies/Techniques
• Black box testing should make use of randomly generated inputs (only a test range
should be specified by the tester), to eliminate any guesswork by the tester as to the
methods of the function.
• Data outside the specified input range should be tested to check the robustness of
the program.
• Boundary cases should be tested (top and bottom of the specified range) to make sure
the highest and lowest allowable inputs produce proper output.
• The number zero should be tested when numerical data is to be input.
• Stress testing should be performed (try to overload the program with inputs to see
where it reaches its maximum capacity), especially with real-time systems.
• Crash testing should be performed to see what it takes to bring the system down.
• Test monitoring tools should be used whenever possible to track which tests have
already been performed and the outputs of these tests, to avoid repetition and to aid in
software maintenance.
• Other functional testing techniques include: transaction testing, syntax testing, domain
testing, logic testing, and state testing.
• Finite state machine models can be used as a guide to design functional tests.
• According to Beizer, the following is a general order in which tests should be
performed:
1. Clean tests against requirements.
2. Additional structural tests for branch coverage, as needed.
3. Additional tests for data-flow coverage, as needed.
4. Domain tests not covered by the above.
5. Special techniques as appropriate: syntax, loop, state, and so on.
6. Any dirty tests not covered by the above.
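The first few strategies above (random inputs within the specified range, boundary cases, and out-of-range robustness checks) can be sketched against a hypothetical function; `percentage_to_grade` and its 0..100 specification are assumptions made only for this illustration:

```python
import random

def percentage_to_grade(score):
    """Hypothetical unit under test: specified only for scores 0..100."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 40 else "fail"

random.seed(42)  # make the random run reproducible

# Randomly generated inputs inside the specified test range.
for _ in range(100):
    score = random.randint(0, 100)
    assert percentage_to_grade(score) in ("pass", "fail")

# Boundary cases: top and bottom of the specified range.
assert percentage_to_grade(0) == "fail"
assert percentage_to_grade(100) == "pass"

# Data outside the specified range should be rejected (robustness).
for bad in (-1, 101):
    try:
        percentage_to_grade(bad)
        assert False, "out-of-range input was accepted"
    except ValueError:
        pass
```

Note the tester only needs the specified range and expected outputs, not the function's internals, which is what makes this a black box strategy.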
Black Box Testing Methods
Graph Based Testing Methods
• Black-box methods based on the nature of the relationships (links) among the program
objects (nodes); test cases are designed to traverse the entire graph.
• Transaction flow testing (nodes represent steps in some transaction and links
represent logical connections between steps that need to be validated)
• Finite state modeling (nodes represent user-observable states of the software and
links represent transitions between states)
• Data flow modeling (nodes are data objects and links are transformations from one
data object to another)
• Timing modeling (nodes are program objects and links are sequential connections
between these objects; link weights are required execution times)
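As a sketch of finite state modeling, the transitions of a hypothetical login screen can be written as a graph and one test case derived per link, so the whole graph is traversed; all state and event names here are illustrative assumptions:

```python
# Finite state model of a hypothetical login screen: nodes are
# user-observable states, links are transitions triggered by events.
transitions = {
    ("logged_out", "login_ok"): "logged_in",
    ("logged_out", "login_bad"): "locked_out_warning",
    ("locked_out_warning", "login_ok"): "logged_in",
    ("logged_in", "logout"): "logged_out",
}

def next_state(state, event):
    # Unknown (state, event) pairs leave the state unchanged.
    return transitions.get((state, event), state)

# Derive one test case per link so the entire graph is covered.
for (state, event), expected in transitions.items():
    assert next_state(state, event) == expected

# A longer path test stitched together from the links.
state = "logged_out"
for event in ("login_bad", "login_ok", "logout"):
    state = next_state(state, event)
assert state == "logged_out"
```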
Equivalence Partitioning
• Black-box technique that divides the input domain into classes of data from which test
cases can be derived.
• An ideal test case uncovers a class of errors that might otherwise require many
arbitrary test cases to be executed before a general error is observed.
• Equivalence class guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence
classes are defined.
2. If an input condition requires a specific value, one valid and two invalid
equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid equivalence class
are defined.
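Guideline 1 can be sketched for a hypothetical field specified as an age in the range 18..65 (the range and the `is_valid_age` check are illustrative assumptions); one representative value per equivalence class stands in for the whole class:

```python
# Equivalence partitioning sketch for a field specified as "age 18..65":
# one valid class (inside the range) and two invalid classes (below, above).

def is_valid_age(age):
    """Hypothetical input check under test."""
    return 18 <= age <= 65

partitions = {
    "invalid_below": 10,   # representative of ages < 18
    "valid": 30,           # representative of ages 18..65
    "invalid_above": 70,   # representative of ages > 65
}

expected = {"invalid_below": False, "valid": True, "invalid_above": False}

# One test case per class stands in for every value in that class.
for name, representative in partitions.items():
    assert is_valid_age(representative) == expected[name]
```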
Boundary Value Analysis
Black-box technique that focuses on the boundaries of the input domain rather than its center.
BVA guidelines:
• If an input condition specifies a range bounded by values a and b, test cases should
include a and b and values just above and just below a and b.
• If an input condition specifies a number of values, test cases should exercise the
minimum and maximum numbers, as well as values just above and just below the
minimum and maximum values.
• Apply guidelines 1 and 2 to output conditions; test cases should be designed to
produce the minimum and maximum output reports.
• If internal program data structures have boundaries (e.g. size limitations), be certain to
test the boundaries.
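The first guideline can be sketched for the same kind of range-bounded input; the 18..65 age range and the `is_valid_age` check are illustrative assumptions (integer inputs assumed):

```python
def boundary_values(a, b):
    """BVA guideline 1: for a range [a, b], test a and b plus the
    values just below and just above each (integer inputs assumed)."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

def is_valid_age(age):
    """Hypothetical input check under test, specified for ages 18..65."""
    return 18 <= age <= 65

# Boundary test cases for the hypothetical range 18..65.
cases = boundary_values(18, 65)
assert cases == [17, 18, 19, 64, 65, 66]

# The boundaries themselves are valid; just outside them is not.
assert [is_valid_age(c) for c in cases] == [False, True, True, True, True, False]
```

Off-by-one mistakes (`<` written instead of `<=`) are exactly the class of defects these six values catch.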
Comparison Testing
• Black-box testing for safety-critical systems in which independently developed
implementations of redundant systems are tested for conformance to specifications.
• Often equivalence class partitioning is used to develop a common set of test cases for
each implementation.
Orthogonal Array Testing
• Black-box technique that enables the design of a reasonably small set of test cases
that provide maximum test coverage.
• Focus is on categories of faulty logic likely to be present in the software component
(without examining the code).
• Priorities for assessing tests using an orthogonal array:
• Detect and isolate all single-mode faults
• Detect all double-mode faults
• Detect multimode faults
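A minimal sketch of why such arrays are small: the standard L4(2^3) orthogonal array covers every pair of levels for three two-level factors in four runs instead of the eight exhaustive combinations; the factors themselves are left abstract here:

```python
from itertools import combinations, product

# L4(2^3) orthogonal array: 4 runs over three two-level factors.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Verify pairwise coverage: for every pair of factor columns, all four
# level combinations (0,0), (0,1), (1,0), (1,1) appear in some run,
# which is what lets the array detect all double-mode faults.
for i, j in combinations(range(3), 2):
    seen = {(run[i], run[j]) for run in L4}
    assert seen == set(product((0, 1), repeat=2))
```

Mapping each factor to a real test parameter (e.g. browser, OS, locale) turns each row of the array into one concrete test case.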
Specialized Testing
• Graphical user interfaces
• Client/server architectures
• Documentation and help facilities
• Real-time systems:
1. Task testing (test each time-dependent task independently)
2. Behavioral testing (simulate system response to external events)
3. Intertask testing (check communication errors among tasks)
4. System testing (check interaction of integrated system software and
hardware)
Advantages of Black Box Testing
• More effective on larger units of code than glass box testing
• Tester needs no knowledge of implementation, including specific programming
languages
• Tester and programmer are independent of each other
• Tests are done from a user's point of view
• Will help to expose any ambiguities or inconsistencies in the specifications
• Test cases can be designed as soon as the specifications are complete
Disadvantages of Black Box Testing
• Only a small number of possible inputs can actually be tested; to test every possible
input stream would take nearly forever
• Without clear and concise specifications, test cases are hard to design
• There may be unnecessary repetition of test inputs if the tester is not informed of test
cases the programmer has already tried
• May leave many program paths untested
• Cannot be directed toward specific segments of code which may be very complex
(and therefore more error-prone)
• Most testing-related research has been directed toward glass box testing

Chapter 3: Online Vs Batch Testing

Learning Objectives
After completing this chapter, you will be able to:
• Define online testing and batch testing
• Differentiate online and batch testing
Online and batch testing address two different categories of applications. To differentiate the
two terms, we first need to look at each category of testing in detail.
Batch Testing
Batch testing verifies that a batch job runs correctly, that is, that the batch is processed
properly. Understanding batch testing in detail requires a good knowledge of batch processing.
Batch Definition:
A Batch is used to define a sequence of Job steps. When a Batch is started, Job steps are
performed one at a time in the order they are defined. A batch is a set of data or jobs to be
processed in a single program run. It is a category of data processing in which data is accumulated
into "batches" and processed periodically.
Batch testing is the testing of whether the sequence of jobs is performed and gives proper results.
I'm sure that you are familiar with printing queues and the concepts behind them. The idea is very
simple: when a user has finished a document and wants to print it, the document is copied into a
'print queue', where it sits awaiting its turn to print. Eventually, a printer on the network will take the
document out of the 'print queue' and print it out.
Batch processing systems do more or less the same job, but they work with programs and
computers rather than documents and printers [1]. As with print queues, the whole point of a batch
processing system is:
a) To maximize the use of a limited resource
b) To make sure everyone can get a slice of that resource
With a print queue system, the idea is that you don't need to purchase one printer per user. You
can buy a small number of printers and make each one available to many users at once. Because a
printer can only print one document at a time, the queuing system is also necessary to ensure that
many users can share the printer; otherwise everyone would be sitting at their desks unable to print
because the printer was busy.
The same is true with a batch processing system. Instead of buying very powerful workstations for
every user, you only need to buy a few, and you can make them available to many users at once.
As any UNIX administrator knows, the most powerful machines on the network quickly become
overloaded, because too many people try to run their programs at the same time. In this case, the
queuing system is necessary to ensure that the machine isn't trying to do too much at any one
time; overall, programs finish more quickly, and everyone gets their results back a lot sooner than
would otherwise be the case.
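The queue mechanics described above can be sketched in a few lines of Python (the jobs shown are illustrative stand-ins for real batch programs):

```python
from collections import deque

def run_batch(job_queue):
    """Execute queued jobs one at a time, in the order submitted,
    so a single shared resource never does two things at once."""
    results = []
    while job_queue:
        job = job_queue.popleft()   # FIFO, just like a print queue
        results.append(job())       # run to completion before the next starts
    return results

# Users submit jobs during the day; the batch system drains the queue later.
queue = deque([lambda: "report-A", lambda: "report-B", lambda: "report-C"])
print(run_batch(queue))  # ['report-A', 'report-B', 'report-C']
```

Batch testing then amounts to verifying that every submitted job ran, ran in order, and produced the expected result.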
Examples of Batch Processing:
1. A payroll system might be implemented using batch processing. The master file could
contain the records for all of company employees, including their employee number,
rates of pay and how much they have been paid so far this year. The records in the
master file are sorted using the employee number as the primary key field. The input
data put into the transaction file would consist of records showing how many hours
each employee had worked in the current week. Sometimes transactions would be
used to add a new record if a new employee started or delete a record when an
employee left the company.
At the end of the week the transaction file would be processed. Before processing it
would have to be sorted into the same order as the master file, i.e. records in order of
employee number.
The computer would then process the transactions, using the information about how
many hours each employee has worked this week (from the transaction file) and their
rates of pay (from the master file) to calculate the employee's wages for the week.
Payslips can then be printed and the master file can be updated to increase the
amount paid so far this year by the wages paid this week. An error report will also be produced.
2. An example of batch processing is the way that credit card companies process billing.
The customer does not receive a bill for each separate credit card purchase but one
monthly bill for all of that month’s purchases. The bill is created through batch
processing, where all of the data are collected and held until the bill is processed as a
batch at the end of the billing cycle.
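The weekly payroll run described above can be sketched in Python (the file layouts, rates and figures here are illustrative, not from the handout):

```python
# Master file: keyed by employee number, holding rate of pay and
# how much the employee has been paid so far this year.
master = {
    101: {"rate": 12.0, "ytd": 4800.0},
    102: {"rate": 15.0, "ytd": 6100.0},
}

# Transaction file: hours worked this week, arriving in arbitrary order.
transactions = [(102, 40), (101, 35)]

def run_payroll(master, transactions):
    """Sort transactions into master-file order (by employee number),
    compute each wage from the master rate, and update the YTD total."""
    payslips = []
    for emp_no, hours in sorted(transactions):   # same key order as master
        record = master[emp_no]
        wage = hours * record["rate"]
        record["ytd"] += wage                    # master-file update
        payslips.append((emp_no, wage))
    return payslips

print(run_payroll(master, transactions))  # [(101, 420.0), (102, 600.0)]
print(master[101]["ytd"])                 # 5220.0
```

The batch tests listed in the next section map directly onto this sketch: correct input from the master file, correct wage calculation, and correct update of the year-to-date amount.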
Testing of Batches:
The example of the payroll system needs to be tested for the following,
• Testing whether the batch job takes its input from the master file
• Testing the processing of newly added data by the batch
• Updating of the table with the data from the master file
• Calculation of wages based on the hours worked by the employee
• Payslip generation from the batch job
• The error report generated by the batch
The batch job of the credit card company needs to be tested for:
• Generation of the bill for the month
• The accuracy of the data in the bill
• The information stored in the database for every purchase
• A correct bill when no purchases occur during the month
Online Testing
Online testing is the testing of any application that returns the result of a transaction instantly.
Consider any website that offers online banking or online shopping: the user supplies the
required information and immediately checks the result.
OLTP (online transaction processing) is a class of program that facilitates and manages
transaction-oriented applications, typically for data entry and retrieval transactions in a number of
industries, including banking, airlines, mail-order, supermarkets, and manufacturers. Probably the
most widely installed OLTP product is IBM's CICS (Customer Information Control System).
Today's online transaction processing increasingly requires support for transactions that span a
network and may include more than one company. For this reason, new OLTP software uses
client/server processing and brokering software that allows transactions to run on different
computer platforms in a network.
Examples of online transaction processing applications include:
• Automatic teller machines
• Online banking websites
• E-buying (online shopping) sites
Testing of Online Applications:
Online transaction processing is a type of computer processing in which the computer responds
immediately to user requests. Each request is considered to be a transaction. Automatic teller
machines for banks are an example of transaction processing.
The testing of online application needs the testing for,
• The response from the system is proper for any user request
• The format of the system response
• The data returned is intact
• The processing of the data
• The data stored in the backend by the AUT (application under test)
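A minimal sketch of such checks, assuming a hypothetical `withdraw` transaction handler (not a real banking API):

```python
def withdraw(account, amount):
    """Hypothetical online transaction: respond to the request
    immediately with a structured result."""
    if amount <= 0 or amount > account["balance"]:
        return {"status": "DECLINED", "balance": account["balance"]}
    account["balance"] -= amount
    return {"status": "OK", "balance": account["balance"]}

account = {"balance": 100.0}

# The response is proper for a valid request ...
resp = withdraw(account, 40.0)
assert resp["status"] == "OK" and resp["balance"] == 60.0

# ... its format is as expected ...
assert set(resp) == {"status", "balance"}

# ... and the data stored in the backend is intact.
assert account["balance"] == 60.0

# An invalid request must be declined without touching the data.
assert withdraw(account, 500.0)["status"] == "DECLINED"
assert account["balance"] == 60.0
print("online checks passed")
```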
Online Vs Batch Testing
Batch Testing | Online Testing
Batch processing executes a stored batch of requests all at one time. | Interactive processing responds to commands as soon as the user enters them.
Batch processing can take place without a user being present. | Transaction processing requires interaction with a user.
Batch testing is oriented more toward testing the backend and the tables. | Online testing is the testing of the UI and the responses in the webpage.
Real-time example: a postpaid mobile connection produces the bill once a month, processing the month's usage data. | Real-time example: a prepaid mobile connection deducts the balance based on usage instantly.

Chapter 4: Regression Testing

Learning Objectives
After completing this chapter, you will be able to:
• Explain regression testing
• List the importance of regression testing
• Perform regression testing
What is Regression Testing
If a piece of software is modified for any reason, testing needs to be done to ensure that it works
as specified and that the change has not negatively impacted any functionality it offered previously.
This is known as regression testing.
Regression testing attempts to verify:
• That the application works as specified even after changes/additions/modifications
were made to it
• That the original functionality continues to work as specified even after
changes/additions/modifications to the software application
• That the changes/additions/modifications to the software application have not introduced
any new bugs
When to go for Regression Testing
Regression testing plays an important role in any scenario where a change has been made to
previously tested software code. Regression testing is hence an important aspect of various
software methodologies where software changes and enhancements occur frequently.
Any Software Development Project is invariably faced with requests for changing Design, code,
features or all of them.
Some Development Methodologies embrace change.
For example ‘Extreme Programming’ Methodology advocates applying small incremental changes
to the system based on the end user feedback.
Each change implies more Regression Testing needs to be done to ensure that the System meets
the Project Goals.
Why Regression Testing is Important
Any Software change can cause existing functionality to break. Changes to a Software component
could impact dependent Components. It is commonly observed that a Software fix could cause
other bugs. All this affects the quality and reliability of the system. Hence Regression Testing,
since it aims to verify all this, is very important.
Making Regression Testing Cost Effective
Every time a change occurs one or more of the following scenarios may occur:
• More functionality may be added to the system
• More complexity may be added to the system
• New bugs may be introduced
• New vulnerabilities may be introduced in the system
• The system may tend to become more and more fragile with each change
After the change the new functionality may have to be tested along with all the original
functionality. With each change Regression Testing could become more and more costly. To make
the Regression Testing Cost Effective and yet ensure good coverage one or more of the following
techniques may be applied:
Test Automation: If the test cases are automated, they may be executed using scripts
after each change is introduced in the system. Executing test cases this way helps
eliminate oversight and human error. It may also result in faster and cheaper execution of test
cases. However, there is a cost involved in building the scripts.
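A minimal sketch of such an automated suite, using Python's unittest module (the `apply_discount` function and its recent change are hypothetical):

```python
import unittest

def apply_discount(price, percent):
    """Function under test; hypothetically just changed to allow 0% discounts."""
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    def test_existing_discount(self):
        # Original functionality must still work after the change.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        # The new behaviour works as specified.
        self.assertEqual(apply_discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

Rerunning the whole suite after every change is exactly the regression check described above; because it is scripted, the cost of each rerun is close to zero.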
Selective Testing: Some teams choose test cases selectively for execution. They do not
execute all the test cases during regression testing; they test only what they decide is
relevant. This helps reduce the testing time and effort, but there is also a risk of leaving out
indirectly impacted code.
Regression Testing – What to Test?
Since Regression Testing tends to verify the software application after a change has been made
everything that may be impacted by the change should be tested during Regression Testing.
Generally the following areas are covered during Regression Testing:
• Any functionality that was addressed by the change
• The original functionality of the system
• The performance of the system after the change was introduced
Regression Testing – How to Test?
Like any other testing, regression testing needs proper planning. For effective regression
testing the following steps are necessary:
Create a Regression Test Plan: The test plan identifies focus areas, strategy, and test entry and
exit criteria. It can also outline testing prerequisites, responsibilities, etc.
Create Test Cases: Test Cases that cover all the necessary areas are important. They describe
what to Test, Steps needed to test, Inputs and Expected Outputs. Test Cases used for Regression
Testing should specifically cover the functionality addressed by the change and all components
affected by the change. The Regression Test case may also include the testing of the performance
of the components and the application after the change(s) were done.
Defect Tracking: As in all other testing levels and types, it is important that defects are tracked
systematically; otherwise the testing effort is undermined.

Chapter 5: Non Functional Testing

Learning Objectives
After completing this chapter, you will be able to:
• Define non functional testing
• List the types of non functional testing
What is Non Functional Testing
In order to ensure that the system is ready to go live, it is necessary to go beyond functional
testing. Non-functional testing is designed to evaluate the readiness of the system according to
several criteria not covered by functional testing. These criteria include, but are not limited to:
• Performance testing
• Disaster recovery
• Security
What is Performance Testing
Performance testing is the process of determining the speed or effectiveness of a computer,
network, software program or device. This process can involve quantitative tests done in a lab,
such as measuring the response time or the number of MIPS (millions of instructions per second)
at which a system functions.
Performance testing is commonly classified into:
• Load testing
• Stress testing
• Volume testing
Load Testing
Load testing is subjecting a system to a statistically representative (usually) load. The two main
reasons for using such loads are in support of software reliability testing and of performance
testing. In performance testing, load is varied from a minimum (zero) to the maximum level the
system can sustain without running out of resources or having transactions suffer
(application-specific) excessive delay.
Stress Testing
Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g.,
RAM, disk, interrupts, etc.) needed to process that load. The idea is to stress a system to the
breaking point in order to find bugs that will make that break potentially harmful. The system is not
expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent
manner (e.g., not corrupting or losing data). Bugs and failure modes discovered under stress
testing may or may not be repaired depending on the application, the failure mode, consequences,
etc. The load (incoming transaction stream) in stress testing is often deliberately distorted so as to
force the system into resource depletion.
Volume Testing
Volume testing confirms that any values that may become large over time (such as accumulated
counts, logs, and data files) can be accommodated by the program and will not cause the
program to stop working or degrade its operation in any manner.
What is Disaster Recovery
Disaster recovery is a procedure defined for any application that is deployed in a live (production)
environment. It ensures that when a system breaks or stops abnormally it comes back to its original
state and continues to function as before. The disaster recovery plan should be developed in such a
way that the business is not impacted, or least impacted, by system malfunctions.
What is Security Testing
Security testing comprises techniques used to confirm the design and/or operational effectiveness
of security controls implemented within a system. Examples include attack and penetration studies
to determine whether adequate controls have been implemented to prevent breach of system
controls and processes, and password strength testing using tools ("password crackers").
What is Interoperability Testing
With respect to software, the term interoperability is used to describe the capability of different
programs to exchange data via a common set of business procedures, and to read and write the
same file formats and use the same protocols. (The ability to execute the same binary code on
different processor platforms is 'not' assumed to be part of the interoperability definition!) The lack
of interoperability strongly implies that the described product or products were not designed with
standardization in mind. Indeed, interoperability is not taken for granted in the non-standards-based
portion of the computing and EDP world.
According to ISO/IEC 2382-01, Information Technology Vocabulary, Fundamental Terms,
interoperability is defined as follows: "The capability to communicate, execute programs, or
transfer data among various functional units in a manner that requires the user to have little or no
knowledge of the unique characteristics of those units."

Chapter 6: White Box Testing

Learning Objectives
After completing this chapter, you will be able to:
• Describe white box testing
• List the types of white box testing
White box testing comprises software testing approaches that examine the program structure and
derive test data from the program logic. Structural testing is sometimes referred to as clear box
testing, since white boxes are considered opaque and do not really permit visibility into the code.
Synonyms for white box testing
• Glass box testing
• Structural testing
• Clear box testing
• Open box testing
Types of White Box Testing
Code Coverage Analysis
Code coverage is a measure used in software testing. It describes the degree to which the source
code of a program has been tested. It is a form of testing that looks at the code directly.
Basis Path Testing
A testing mechanism proposed by McCabe whose aim is to derive a logical complexity measure of
a procedural design and use this as a guide for defining a basis set of execution paths. Test
cases that exercise the basis set will execute every statement in the program at least once.
Flow Graph Notation
A notation for representing control flow similar to flow charts and UML activity diagrams.
Cyclomatic Complexity
The cyclomatic complexity gives a quantitative measure of the logical complexity. This value gives
the number of independent paths in the basis set, and an upper bound for the number of tests to
ensure that each statement is executed at least once. An independent path is any path through a
program that introduces at least one new set of processing statements or a new condition (i.e., a
new edge). Cyclomatic complexity provides upper bound for number of tests required to guarantee
coverage of all program statements.
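One standard way to compute this measure from a flow graph is V(G) = E - N + 2, where E is the number of edges and N the number of nodes. A small sketch (the flow graph below is an illustrative example):

```python
def cyclomatic_complexity(edges):
    """V(G) = E - N + 2 for a connected flow graph given as an edge list."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Flow graph of a routine with one loop test (node 2) and one
# if/else decision (node 3); nodes 4 and 5 rejoin the loop, 6 is exit.
edges = [
    (1, 2), (2, 3), (3, 4), (3, 5),
    (4, 2), (5, 2), (2, 6),
]
print(cyclomatic_complexity(edges))  # 3
```

Two binary decisions give V(G) = 2 + 1 = 3, matching E - N + 2 = 7 - 6 + 2, so at most three basis-path test cases guarantee statement coverage here.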
Control Structure Testing
Condition Testing
Condition testing aims to exercise all logical conditions in a program module.
Condition testing may involve:
• Relational expressions: (E1 op E2), where E1 and E2 are arithmetic expressions
• Simple conditions: a Boolean variable or relational expression, possibly preceded by a
NOT operator
• Compound conditions: composed of two or more simple conditions, Boolean operators
and parentheses
• Boolean expressions: conditions without relational expressions
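A sketch of how the combinations of simple conditions inside a compound condition can be enumerated (the condition itself is an arbitrary example, not from the handout):

```python
from itertools import product

def compound(a, b, c):
    """Compound condition under test: (a AND b) OR (NOT c)."""
    return (a and b) or (not c)

# Condition testing exercises every combination of the simple
# conditions that make up the compound condition.
cases = [((a, b, c), compound(a, b, c))
         for a, b, c in product([False, True], repeat=3)]

for inputs, outcome in cases:
    print(inputs, "->", outcome)

# Both outcomes of the compound condition must be reached.
assert {outcome for _, outcome in cases} == {False, True}
```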
Data Flow Testing
Data flow testing selects test paths according to the locations of definitions and uses of variables.
Loop Testing
Loops are fundamental to many algorithms. Loops can be classified as simple, concatenated,
nested, and unstructured.
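A sketch of the classic simple-loop test cases (skip the loop, one pass, two passes, a typical count, and the n-1, n, n+1 boundaries), applied to an illustrative loop:

```python
def sum_first(values, n):
    """Loop under test: sums at most the first n values."""
    total = 0
    for i, v in enumerate(values):
        if i >= n:
            break
        total += v
    return total

data = [1, 2, 3, 4, 5]          # maximum of n = 5 loop iterations

# Skip the loop, one pass, two passes, a typical count,
# then the boundary values n-1, n, and n+1.
for n, expected in [(0, 0), (1, 1), (2, 3), (3, 6), (4, 10), (5, 15), (6, 15)]:
    assert sum_first(data, n) == expected
print("loop cases passed")
```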
Design By Contract (DbC)
DbC is a formal way of using comments to incorporate specification information into the code itself.
Basically, the code specification is expressed unambiguously using a formal language that
describes the code's implicit contracts. These contracts specify such requirements as:
• Conditions that the client must meet before a method is invoked
• Conditions that a method must meet after it executes
• Assertions that a method must satisfy at specific points of its execution
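A minimal sketch of these contracts expressed as plain assertions (the `sqrt_floor` routine is an illustrative example, not a formal DbC language):

```python
def sqrt_floor(n):
    # Precondition: the client must supply a non-negative integer.
    assert isinstance(n, int) and n >= 0, "precondition violated"
    root = 0
    while (root + 1) * (root + 1) <= n:
        # Assertion the method must satisfy at this point of execution.
        assert root * root <= n
        root += 1
    # Postcondition the method must meet after it executes.
    assert root * root <= n < (root + 1) * (root + 1)
    return root

print(sqrt_floor(10))  # 3
```

Dedicated DbC languages (such as Eiffel's require/ensure clauses) express the same contracts declaratively; plain assertions are the lowest-common-denominator form.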
Profiling
Profiling provides a framework for analyzing Java code performance for speed and heap memory
use. It identifies the routines that consume the majority of the CPU time so that problems may be
tracked down to improve performance. Tools include the Microsoft Java Profiler API and
Sun's profiling tools that are bundled with the JDK.
Error Handling
Exception and error handling is checked thoroughly by simulating partial and complete failures,
operating on error-causing test vectors. Proper error recovery, notification and logging are checked
against references to validate the program design.
Transaction Testing
Systems that employ transactions, local or distributed, may be validated to ensure that the ACID
properties (Atomicity, Consistency, Isolation, Durability) are preserved. Each of the individual
properties is tested individually against a reference data set. Transactions are checked thoroughly
for partial/complete commits and rollbacks encompassing databases and other XA-compliant
transaction processors.
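A sketch of an atomicity check against a toy in-memory "database" (the `transfer` routine and its snapshot-based rollback are illustrative, not a real XA transaction manager):

```python
import copy

def transfer(accounts, src, dst, amount):
    """Move money between accounts. Must be atomic: on any failure
    the partial debit is rolled back and no balance changes."""
    snapshot = copy.deepcopy(accounts)   # stand-in for a real rollback log
    try:
        accounts[src] -= amount
        if accounts[src] < 0:
            raise ValueError("insufficient funds")
        accounts[dst] += amount
    except Exception:
        accounts.clear()
        accounts.update(snapshot)        # rollback: restore original state
        raise

accounts = {"A": 50, "B": 10}

try:
    transfer(accounts, "A", "B", 80)     # must fail ...
except ValueError:
    pass
assert accounts == {"A": 50, "B": 10}    # ... leaving balances untouched

transfer(accounts, "A", "B", 30)         # a valid transfer commits fully
assert accounts == {"A": 20, "B": 40}
print("atomicity checks passed")
```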
Advantages of White Box Testing
• Forces the test developer to reason carefully about the implementation
• Approximates the partitioning done by execution equivalence
• Reveals errors in "hidden" code
• Beneficent side-effects
Disadvantages of White Box Testing
• Expensive
• Cases omitted from the code could be missed, since tests are derived from the
implementation

Chapter 7: Static Vs Dynamic Testing

Learning Objectives
After completing this chapter, you will be able to:
• Explain static and dynamic testing
• Differentiate static and dynamic testing techniques
Static Testing
Static testing is a form of software testing where the software isn't actually executed. This is in
contrast to dynamic testing. It is generally not detailed testing, but checks mainly for the sanity of
the code, algorithm, or document. It is primarily syntax checking of the code and manual
reading of the code or document to find errors. This type of testing can be used by the developer
who wrote the code, in isolation. Code reviews, inspections and walkthroughs are also used.
Static testing involves review of requirements or specifications. This is done with an eye toward
completeness or appropriateness for the task at hand. This is the verification portion of Verification
and Validation. Bugs discovered at this stage of development are less expensive to fix than later in
the development cycle.
In software development, static testing, also called dry run testing, is a form of software testing
where the actual program or application is not executed. Instead, this method requires
programmers to manually read their own code to find errors. Verification activities fall into
the category of static testing. During static testing, you use a checklist to check whether the
work you are doing follows the set standards of the organization.
Static Testing Techniques
Static testing is performed using any of the following techniques:
Feasibility Reviews: Tests for this structural element would verify the logic flow of a unit of
software.
Requirements Reviews: These reviews verify software relationships; for example, in any
particular system, the structural limits of how much load (e.g., transactions or number of
concurrent users) a system can handle.
Integration and Deployment Reviews: These reviews verify whether the integration of code is
done for the related modules and whether the code is deployed in the proper environment.
Code Walkthrough: This is going through the code manually to find errors such as syntax
errors or loop control errors. (Unit, integration, system and acceptance tests, by contrast, are
dynamic testing methods.)
Dynamic Testing
Dynamic testing (or dynamic analysis) is a term used in software engineering to describe the
testing of the dynamic behavior of code. That is, dynamic analysis refers to the examination of the
physical response from the system to variables that are not constant and change with time. In
dynamic testing the software must actually be compiled and run; this is in contrast to static testing.
Dynamic testing is the validation portion of Verification and Validation.
Dynamic Testing involves working with the software, giving input values and checking if the output
is as expected. These are the Validation activities.
Dynamic Testing Techniques
Dynamic testing is performed using any of the following techniques:
• Unit Testing – These tests verify that the system functions properly; for example,
pressing a function key to complete an action.
• Integration Testing – The system runs tasks that involve more than one application or
database to verify that it performed the tasks accurately.
• System Testing – The tests simulate operation of the entire system and verify that it
ran correctly.
• User Acceptance Testing – This real-world test means the most to your business, and
unfortunately, there's no way to conduct it in isolation. Once your organization's staff,
customers, or vendors begin to interact with your system, they'll verify that it functions
properly for them.
Static Vs Dynamic Testing
Category | Static Testing | Dynamic Testing
Definition | Manual examination (reviews) and automated analysis (static analysis) of the code or other project documentation. | Examination of the system's response to variables that are not constant and change with time.
Work involved | Primarily syntax checking of the code and manual reading of the code or document to find errors. | Working with the software, giving input values and checking whether the output is as expected.
Object of test | Code, algorithm, or document relevant to the application tested. | Software or the application to be executed.
Techniques | Reviews, inspections, walkthroughs. | Unit tests, integration tests, system tests, acceptance tests.