
REVISION HISTORY

S. No.   Version / Revision No.   Date of Release   Revised Sections   Remarks
1        1.0                      25-04-98                             1st release
2        2.0                      24-11-2001                           Enhanced by detailing out Definitions

Rev No. 2.0    Header not to be changed    11-01

Table of Contents

INTRODUCTION
    Purpose
    Scope
    Definitions, Acronyms and Abbreviations
        Definitions
        Acronyms
        Abbreviations
    References
APPLICABLE ENVIRONMENT
STANDARD/GUIDELINES
    Software Testing - Principles and Practices
        Role of Testing
        Objective of Testing
        Testing in Software Product/Project Industry
        Effective Testing
        Testing Axioms
    Software Testing Terminology
        Introduction
        Testing Strategies
    Testing Types
        White Box Testing Techniques
        Black Box Testing Techniques
    Components of Testing Procedure
    Testing Tools
    Test Plan, Test Case and Test Data Preparation
        Introduction
        Test Planning
        Test Criteria
        Test Completion Criteria
        Test Cases
        Test Sequence Preparation
        Test Data
        Guidelines for Test Case Preparation

Guidelines for Testing 2



INTRODUCTION

Purpose
This document provides Guidelines on Software Testing.

Scope
The following topics are covered by this guideline:

• Software Testing - Principles and Practices
• Software Testing Terminology
• Test Plan, Test Case and Test Data Preparation

Definitions, Acronyms and Abbreviations

Definitions
Not Applicable

Acronyms
Not Applicable

Abbreviations
SDLC : Software Development Life Cycle
SCM : Software Configuration Management

References
Software Engineering - A Practitioner's Approach by Roger S. Pressman.

APPLICABLE ENVIRONMENT

This guideline is applicable to all development groups in the organization.


STANDARD/GUIDELINES

Software Testing - Principles and Practices

Role of Testing
The increasing cost of failures and the mission-critical nature of software have brought a sharp focus on software testing in the development organization. Software testing has become the most critical element of software quality assurance.
Every component of the software developed at Newgen passes through the distinct stages of the Quality Life Cycle shown below:

[Quality Life Cycle diagram: REQUIREMENTS (checked by Black Box Testing) → DESIGN (checked by Integration Testing) → CODE (checked by Unit Testing)]

The approach to development is to focus sharply on the first two stages of the above cycle, so that only a minimal number of defects reach the third stage, testing. The product has to be put through carefully planned test cycles to ensure quality. As we know, testing only reveals defects; it does not prove their absence.

Objective of Testing
A clear objective is essential for the success of any task, and a critical activity like software testing is no exception. There is, however, a common misconception that software testing is an activity to prove the correctness of software. In reality, testing should be viewed more as a "destructive" process than as a "constructive" part of software development.
The objective of testing is not to show the absence of defects, but to show their presence. Hence any structured testing activity has to comply with this underlying objective.


Testing in Software Product/Project Industry


Testing strategies are most often directly adaptable only to a project development organization, since they have evolved in parallel with the software development life cycle (SDLC). Software product development involves continuous "maintenance", which calls for more stringent measures of quality control.
Product development involves maintaining the software products for present customers and developing future releases that incorporate additional features. There is a constant need for correction and enhancement activities; "change" is hence permanent. Problems arising from this constantly changing scenario include:

• Change without a proper impact study creates havoc in configuration management.
• Unplanned changes lead to old defects reappearing and fresh defects surfacing.
• Loss of, or change in, the original functionality of a given feature is also a common outcome of unplanned defect rectification.

Since a software product services multiple clients, these problems take a heavy toll on client confidence, credibility and a good client referral base. Hence we are likely to lose out on the factors that are the major marketing strengths of any software product.

To ensure that constant change does not affect product quality and performance, a means of quality control has to be adopted which will streamline the effecting of changes and provide greater control over maintenance activity. This calls for:
• Proactive problem identification and resolution.
• Generating results through pre-defined data.
• Regression testing through a recorded sequence of test steps for comparison with expected outcomes.
• Performance measures built into the test sequences to monitor the performance of functional features.
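The regression-testing point above can be sketched as a small harness that replays recorded test steps against pre-defined data and flags any deviation from the baseline. The discount function, the case values and the record format are illustrative assumptions, not part of the guideline:

```python
def apply_discount(amount, rate):
    """Hypothetical business function under test: apply a percentage discount."""
    return round(amount * (1 - rate / 100.0), 2)

# Recorded test sequence: (inputs, expected outcome) pairs captured from a
# known-good release, re-run after every change.
RECORDED_CASES = [
    ((100.0, 10), 90.0),
    ((200.0, 25), 150.0),
    ((80.0, 50), 40.0),
]

def run_regression(cases):
    """Replay each recorded step and report any deviation from the baseline."""
    failures = []
    for args, expected in cases:
        actual = apply_discount(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures
```

With equivalent behaviour across releases, `run_regression(RECORDED_CASES)` returns an empty list; any entries it returns are candidate regressions for analysis.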

Effective Testing
Implementing software testing in a structured manner involves the preparation of well-designed Test Plans and Test Cases for checking the functionality of the software. Though mechanisms such as test case automation are available to increase the efficiency of testing, the critical success factor of effective testing lies in designing the test plan and test cases to meet the objective of testing.


Testing Axioms
Presented below is a set of testing axioms from an excellent book on software testing by Glenford J. Myers:
• A good test case is one that is likely to detect a previously undetected error.
• One of the most difficult problems in testing is knowing when to stop.
• A programmer should avoid testing his or her own program.
• A necessary part of every test case is a description of the expected output or results.
• Avoid "on the fly" testing.
• Write test cases for invalid as well as valid inputs.
• Thoroughly inspect the results of each test.
• As the number of detected errors in a piece of software increases, the probability of the existence of more undetected errors also increases.
• Assign your most creative developers to testing.
• The design of a system should be such that each module is integrated into the system only once.
• Never alter the program to make testing easier.
• Testing, like almost every other activity, must start with objectives.

Software testing terminology

Introduction
To adopt a thorough testing process, one has to be familiar with the various testing techniques and terminology. Presented below are some of the terms for vital testing techniques, methods and product quality indices.

Testing Strategies
Verification : Verification refers to the set of activities that ensure that the software correctly implements a specific function imposed at the start of that phase. Testing activity focuses on verifying the correct implementation of business requirements and customer requirements.

Validation : Validation refers to the set of activities which ensure that the software that has been built is traceable to customer requirements. Validation includes activities like code walkthroughs to ensure that the software conforms to set standards.

Software Testing is a systematic activity aimed at uncovering errors in a software program with respect to its specification, in order to fulfil stated requirements.


Unit Testing : Unit Testing refers to the testing of individual software units or groups of related units, where a unit is the smallest functional part of an application. Unit testing makes heavy use of white box testing techniques along with black box techniques.
In our environment, a typical screen and its associated components make up a unit. White box testing measures like code walkthroughs and control flow graphing are used extensively at this level, apart from functional testing efforts covering messages, boundary values etc.

Integration Testing : Integration testing refers to the testing in which software units of an application are combined and tested to evaluate the interaction between them. Black box test case designs are most prevalent during integration, though white box testing techniques like control flow graphing and execution tracing are also carried out.
Inter-module and inter-product integration issues are the prime focus areas here. We concentrate on the application's business rules and ensure they are validated across different modules.

System Testing : Testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.
Software, once validated for meeting functional requirements, must be verified for proper interfacing with other system elements like hardware, databases and people. System testing verifies that all these system elements mesh properly and that the software achieves its overall function and performance.
We carry out product audit and acceptance testing, and performance testing, as part of system testing.

Testing Types

Structural Testing (White Box Testing) : Those testing techniques that involve understanding the control structure of software components and their procedural design form a part of Structural Testing.
Code walkthroughs of front-end and back-end code come under this type of testing.

Functional Testing (Black Box Testing) : Those testing methods that need a functional understanding of 'what' a software unit is supposed to perform, rather than 'how', form a part of Functional Testing.
Business rule validation through sample data in a test sequence comes under this type of testing.


White Box Testing Techniques

Branch Testing : Testing designed to execute each outcome of each decision point in a program.
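As a minimal sketch of branch testing, the following hypothetical function has two decision points, so branch coverage requires test inputs that drive each decision to both of its outcomes; the function and its thresholds are illustrative only:

```python
def classify_stock(quantity, reorder_level):
    """Hypothetical unit under test with two decision points."""
    if quantity <= 0:              # decision 1
        return "OUT_OF_STOCK"
    if quantity < reorder_level:   # decision 2
        return "REORDER"
    return "OK"

# One test per branch outcome:
assert classify_stock(0, 10) == "OUT_OF_STOCK"   # decision 1: true
assert classify_stock(5, 10) == "REORDER"        # decision 1: false, decision 2: true
assert classify_stock(20, 10) == "OK"            # decision 2: false
```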

Control Flow Graphing : A technique (using a dynamic analysis tool) to generate the graphical representation of the sequence in which operations are performed during the execution of a program.

Basis Path Testing : This is a white box control structure testing technique that enables the definition of a "basis set" of execution paths. Test cases derived to exercise the basis set ensure that every statement in the program is executed at least once during testing.

Condition Testing : This is a white box control structure testing technique that exercises each logical condition contained in a program.

Data Flow Testing : This is a white box control structure testing technique that aims at generating test cases to exercise the program's execution control depending upon the data values and sequences of operation.

Black Box Testing Techniques

Back-to-back Testing : Testing in which two or more variants of a program are executed with the same inputs; the outputs are compared and errors are analyzed in case of discrepancies.
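A back-to-back check can be sketched as below: two variants of the same computation are run on identical inputs and the inputs on which they disagree are collected for analysis. Both implementations are illustrative stand-ins:

```python
def total_iterative(values):
    """Variant 1: explicit accumulation loop."""
    acc = 0
    for v in values:
        acc += v
    return acc

def total_builtin(values):
    """Variant 2: the same computation via the built-in sum()."""
    return sum(values)

def back_to_back(inputs):
    """Return the inputs on which the two variants disagree."""
    return [x for x in inputs if total_iterative(x) != total_builtin(x)]

# With genuinely equivalent variants, no discrepancies are expected.
discrepancies = back_to_back([[1, 2, 3], [], [10, -4]])
```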

Big Bang Testing : A type of integration testing in which software components of an application are combined all at once into an overall system.

Bottom-up Testing : A type of integration testing that begins with the construction and testing of atomic modules (components at the lowest level in the application structure) and moves up to integrate and test the entire application.

Top-down Testing : A type of integration testing that employs depth-first or breadth-first integration, starting with the controlling module of the application and integrating and testing the rest of the modules. This testing employs test drivers and stubs, and relies on regression testing to ensure testing completion.


Regression Testing : Regression testing refers to the selective re-testing of a system or component to verify that modifications have not caused unintended effects and that the system or component still conforms to its specified requirements.

Stress Testing : Stress testing is a type of system testing that confronts the system with varied levels of abnormal situations in terms of consumption of computer resources, such as quantity, frequency or volume.

Performance Testing : Performance testing is a type of system testing that aims to determine whether a system meets its performance requirements within the physical limitations of the computing environment (CPU processing speed, memory capacity, number of users etc.).

Recovery Test : This system test examines how a system recovers, within a pre-specified time, from hardware failures, circuit errors, power blackouts and other problems caused by program errors.

Security Test : Security features are essential to protect data from unauthorized access and modification. Security testing is a system testing technique that aims at verifying the protection mechanisms built into the system to protect it from improper penetration.

Fault Tolerance : The type of system testing that aims at testing the ability of the system or component to continue normal operation despite the presence of software or hardware faults.

Equivalence Class Partitioning : This is a black box testing method that divides the input domain of a program into classes of data from which test cases can be prepared. In a unit testing situation, for a field-level validation criterion, an equivalence class represents a set of valid or invalid data values used to check the validity of inputs.
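For a field-level validation of the kind described above, the partitioning can be sketched as follows: the input domain of an "age" field is divided into classes and one representative value is tested per class. The field, its valid range and the class boundaries are assumed for illustration:

```python
def valid_age(value):
    """Hypothetical field-level validation: accept integer ages 18..60 inclusive."""
    return isinstance(value, int) and 18 <= value <= 60

# One representative test value per equivalence class:
CLASSES = {
    "valid range (18-60)": (30, True),
    "below valid range": (10, False),
    "above valid range": (75, False),
    "wrong data type": ("thirty", False),
}

for name, (sample, expected) in CLASSES.items():
    assert valid_age(sample) == expected, name
```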

Boundary Value Analysis : This black box technique relies on the observation that a greater number of errors tends to occur at the boundaries of the input domain than in the "center". This leads to developing test cases with boundary conditions as the test criteria, wherein appropriate inputs are chosen to study the system's behavior.
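A boundary-value sketch for a range check might look like this: test inputs are chosen at, and immediately on either side of, each boundary, where errors cluster. The 1..100 range is an assumed example:

```python
def valid_quantity(q):
    """Hypothetical range check: quantities 1..100 inclusive are valid."""
    return 1 <= q <= 100

# Boundary-focused test cases: each boundary, plus the values just
# outside and just inside it.
boundary_cases = [
    (0, False),    # just below the lower boundary
    (1, True),     # lower boundary
    (2, True),     # just above the lower boundary
    (99, True),    # just below the upper boundary
    (100, True),   # upper boundary
    (101, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    assert valid_quantity(value) == expected
```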


Components of Testing Procedure

Test : An activity in which a component is executed under specified conditions, and the results are observed or recorded and evaluated against the expected outcome or target.

Test Bed : An environment containing the simulators, tools and other support
elements needed to conduct a test.

Test Plan : Test Plan refers to the document that plans the testing of a system or component against the various criteria of the testing strategy and procedures.

Test Case : A document that specifies the test inputs, execution conditions, and
predicted results for an item to be tested with respect to the testing criteria.

Test Criteria : Refers to the major focus areas of testing for a given test strategy. The test criteria rely heavily on the testing technique adopted.

Test Log : Refers to a chronological record of all relevant statistics about the
execution of a test.

Testing Tools

Test Driver : A test driver is a simulation of a controlling module. It performs two functions: repeatedly calling the lower module being tested, and passing input data to the lower module in each call.

Test Stub : A test stub is a dummy program module used by the driving /
controlling units. It gets invoked by a function call and posts an action to notify
execution.

Test Management Tools : These tools allow data collection about testing, test
administration and help in tracking defects from testing and their status.

Test Design Tools : These tools allow design of tests based on user inputs of
program specification and heuristic algorithms.

Test Requirements Generator : Generates test case requirements based on system specifications.

Test Case Generators : Generates test cases based on test specifications.


Static Analysis Tools : Static Analysis is the process of analyzing the quality
attributes of the software by analyzing the source code.

Load Analyzer : These tools analyze and predict database loads (considering network traffic, transaction processing etc.) for several metrics.

Code Auditors : Tools that act as special-purpose filters to check the quality of software code, ensuring that it meets expected coding standards.

Plan Analyzers : These tools analyze SQL query execution plans to identify performance problems in a database and predict deadlocks.

Standards Auditing Tools : These tools allow users to configure programming standards and audit code for conformance to those standards.

Static Data Flow Analyzers : These tools follow the flow of a given set of data through a program and report any anomalies.

Complexity Metric Predictors : These utilities predict the complexity of code according to well-known metrics like McCabe's complexity metric.

Dynamic Analysis Tools : Dynamic Analysis is the process of analyzing the behavior of the software as it is executed with test data sets, to produce test coverage reports.

Timing Analyzer : A tool that measures the execution time of a software system
or unit.

Test Coverage Verifiers : These tools measure the internal coverage of program execution to detect dead code.

Data Flow analyzers : These tools track the flow of data through a system and
attempt to find undefined data references, incorrect indexing and other data
related problems.

Control Flow Tracing : Traces the path taken through the code execution.

Spy and Playback Tools : These tools offer scripting facility to specify the
execution and testing parameters. They enable recording of test sessions and
automatic playback of recorded scripts. (These are very useful in regression
testing of an application component).


Test plan, test case and test data preparation

Introduction

Effective testing depends on a well-planned test specification and on steps to execute the tests with the necessary test data. A product's quality depends upon the quality of the test cases it has been put through. Hence test planning for a software product should aim to detect as many errors as possible.

Test Planning

Test Plan preparation, or test planning, is the foremost activity in preparing to test a product. Test plans can be prepared for any of the identified testing strategies. A test plan enables the identification and definition of test case objectives across the "functional areas" or "test criteria" applicable to the chosen test strategy. These criteria are the major focus areas of testing for a given test strategy.

Test Criteria

A typical Client/Server business application system like ours can have the following test criteria / functional areas:

Field Level Validations : The aim of this functional area of testing is to uncover field-property-related errors. The test cases associated with this functional area target null / not-null checks, data type validations, data entry masks etc.

Field Level Dependencies : Very often, values entered for a particular field are supposed to be validated with respect to the value contained in another field. 'Field level dependency' test cases should aim to uncover defects relating to the non-validation of dependent screen field values with respect to the independent fields. E.g., the 'To' date in a range of dates depends upon the 'From' date.
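The 'From'/'To' date example can be sketched as a dependent-field check using the standard library's date type; the function and field names are illustrative:

```python
from datetime import date

def validate_date_range(from_date, to_date):
    """Dependent-field rule: the 'To' date must not precede the 'From' date."""
    errors = []
    if to_date < from_date:
        errors.append("'To' date must be on or after the 'From' date")
    return errors

# A valid range yields no errors; a reversed range is flagged.
assert validate_date_range(date(2001, 11, 1), date(2001, 11, 24)) == []
assert validate_date_range(date(2001, 11, 24), date(2001, 11, 1)) != []
```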

Functionality Validations : Application functionality is implemented through a variety of interface controls. 'Functionality validation' test cases aim to test each interface control provided by the application and verify its intended implementation, e.g. the graying of status icons, the activeness of menu items etc.


Referential Integrity Checks : Most business applications involve master data that are referenced by the application's transactions. The application cannot have transactions without the corresponding master data. 'Referential integrity' test cases aim to uncover such errors in the application.
Referential integrity checks can be planned for unit testing and for multiple levels of integration testing.

Business Rule Validations : Business applications are built on the business model of that application. Business rules are the sequencing rules of the application; e.g., approved purchase requisitions precede the preparation of purchase orders. 'Business rule validation' test cases aim to uncover functional deficiencies of the application with respect to the requirements specification.

Boundary Value Validations : Applications allow the definition of a range of parameters and operate within their boundaries. 'Boundary value validation' test cases aim to check the functionality of the application with respect to parameter values at the upper and lower boundaries.

Test Completion Criteria

Test completion criteria should state the exit criteria for a test plan; the statement of "when to stop testing" should come out clearly. The objective of the testing exercise must be defined as the test completion criterion. A sample test completion criterion for a unit test plan of a transaction screen could be stated as: "A software unit can be declared unit tested if and only if all defined test cases are executed and meet the expected outcomes."
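A completion criterion of this kind can even be checked mechanically, as in the sketch below; the test-log record format (case id, executed flag, passed flag) is an assumption for illustration:

```python
def unit_test_complete(test_log):
    """Exit-criteria check: every defined test case must have been executed
    and have met its expected outcome.
    test_log: list of (case_id, executed, passed) records."""
    return all(executed and passed for _, executed, passed in test_log)

log = [
    ("TC-01", True, True),
    ("TC-02", True, True),
    ("TC-03", True, False),   # one failed case: the unit is not yet test-complete
]
```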

Test Cases

A Test Case is a document that specifies the test inputs, execution conditions, and predicted results for an item to be tested, with respect to the testing criteria and its objective.

Test Sequence Preparation


Test sequences are the execution steps for meeting the test case objectives. Test sequences are prepared in such a manner that each individual step is traceable during execution. Expected outcomes may not be available for every test sequence step; however, test sequences should be structured so that expected outcomes are definable.
Test sequences should cover both valid and invalid outcomes.


Test Data

Test sequences for many test criteria, such as referential integrity validation, business rule checks and functionality validations, require sample data to check valid and invalid outcomes. These test data are listed along with the corresponding sequences or loaded through script files.

Guidelines for Test Case Preparation

Presented below are some sample test sequences:

• Execution of scripts to delete table data.
• Execution of scripts to load pre-defined data into the respective tables.
• Execution of reporting programs involving all combinations of filtering with a given volume of data, to check against a pre-printed report and to benchmark reporting time.
• Execution of reporting programs involving erroneous combinations of parameters ('from' date > 'to' date etc.).
• Report comparison (with the pre-printed report annexed to the test case) for report content, totals, fonts and format.
• Entry of data to check the maximum dimensions and formats for all fields of a table.
• 'Data entry' validations (depending on data type) for each screen entry field.
• 'Applied' data checked against benchmarked performance for proper insertion.
• 'Deletion' of authorized records.
• Multi-user checking of feature characteristics.
• Concurrency checks for updates involved in the feature.
• 'Help' check for every field, and filtration through selected parameters.
• 'Tool bar' checking of every tool bar icon to ensure correctness.
• Help text checking for each screen related to the feature.
• 'Apply' checking of 'erroneous entry' in multi-lines.
• 'Sequence of entry' checking through multi-lines.
• Server or client 'date and timestamp' checking during apply.
• 'Commit' / 'Rollback' checks in local / common / multi-server common databases.
• Security checks with varying assignments of access rights to the functionalities (entry, modify, delete, authorize, post) of a feature.
• Multi-company checking.


• Year-change processing and housekeeping activities like old-year data archival.

Some of the test sequences may involve direct execution of a software component and a check for the expected outcome, while others may involve a sequence of entries and a subsequent analysis / comparison of actions.
