
Software Testing Techniques

CS-321
Instructor: Nadeem Sarwar
University Of Gujrat Sialkot Sub Campus
Software Testing

Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.

Testing is the one step in the software process that could be viewed psychologically as destructive rather than constructive.
Software Testing
Some commonly used terms associated with
testing are:
Failure: This is a manifestation of an error (or
defect or bug). However, the mere presence of an
error does not necessarily lead to a failure.
Test case: This is the triplet [I,S,O], where I is
the data input to the system, S is the state of the
system at which the data is input, and O is the
expected output of the system.
Test suite: This is the set of all test cases with
which a given software product is to be tested.
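The [I, S, O] triplet can be made concrete with a small sketch. The struct, the toy system, and all names below are invented for illustration and are not part of the course material:

```cpp
#include <cassert>
#include <string>

// Illustrative sketch of the [I, S, O] triplet (hypothetical names):
// I is the data input, S the system state at input time, O the expected output.
struct TestCase {
    int input;            // I: data input to the system
    std::string state;    // S: state of the system when the data is input
    int expected;         // O: expected output of the system
};

// Toy system under test: doubles its input only when in the "ready" state.
int system_under_test(int input, const std::string& state) {
    return state == "ready" ? input * 2 : -1;
}

// A test case passes when the actual output matches the expected output O.
bool passes(const TestCase& tc) {
    return system_under_test(tc.input, tc.state) == tc.expected;
}
```

A test suite, in these terms, is simply a collection of such triplets run against the same system.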
Aim of Software Testing

The aim of the testing process is to identify all defects existing in a software product. However, for most practical systems, even after satisfactorily carrying out the testing phase, it is not possible to guarantee that the software is error free.
What Testing Shows
errors
requirements conformance
performance
an indication of quality
Who Tests the Software?

Developer: understands the system, but will test "gently," and is driven by "delivery."
Independent tester: must learn about the system, but will attempt to break it, and is driven by "quality."
Testing Objectives
1) Testing is a process of executing a program with the
intent of finding an error
2) A good test case is one that has a high probability
of finding an as-yet-undiscovered error
3) A successful test is one that uncovers an as-yet-
undiscovered error

Objective: To design tests that systematically uncover different classes of errors, and to do so in a minimum amount of time and effort
Testing Principles
All tests should be traceable to customer requirements
Tests should be planned long before testing begins
The Pareto principle applies to software testing
Testing should begin “in the small” and progress toward
testing “in the large”
Exhaustive Testing is not possible
To be most effective, testing should be conducted by an
independent third party
Attributes of a Good Test
A good test has a high probability of finding an error
A good test is not redundant
A good test should be “best of breed”
A good test should be neither too simple nor too
complex
Exhaustive Testing

[Figure: a flow graph containing a loop of up to 20 iterations]

There are 10^14 possible paths! If we execute one
test per millisecond, it would take 3,170 years to
test this program!!
Selective Testing
[Figure: the same flow graph with a selected path highlighted]
What are the steps?
Software is tested from two different
perspectives
Internal program logic is exercised using
“white box” test case design techniques
Software requirements are exercised using
“black box” test case design techniques.
… cont’d
There are two ways of testing a product
 Black-Box testing
 tests are conducted at the software interface
 Checks whether inputs, outputs and functions are
properly working
 It pays little regard to the internal logical structure
 White-Box testing
 Closely examines the procedural details
 Logical paths are thoroughly tested
Testing of Engineered Product
Black-box testing
Knowing the specified function that a product has been
designed to perform, tests can be conducted that
demonstrate each function is fully operational while at
the same time searching for errors in each function
White-box testing
Knowing the internal workings of a product, tests can
be conducted to ensure that “all gears mesh,” that is,
internal operations are performed according to
specifications and all internal components have been
adequately exercised.
White-Box Testing
White-Box Testing (or Glass Box Testing)

... our goal is to ensure that all statements and conditions have been executed at least once ...
White Box Testing
Examines the program structure and derive test
cases from program logic
Also known as
Glass box
Structural
Clear box
Open box

White-box testing derives test cases that:
1) Guarantee that all independent paths within a module
have been exercised at least once
2) Exercise all logical decisions on their true and false
sides
3) Execute all loops at their boundaries and within their
operational bounds
4) Exercise internal data structures to ensure their
validity
Why Cover?

logic errors and incorrect assumptions are inversely proportional to a path's execution probability

we often believe that a path is not likely to be executed; in fact, reality is often counterintuitive

typographical errors are random; it's likely that untested paths will contain some
White Box Testing

[Fig: Stronger and complementary testing strategies]

Statement coverage
Example: Consider Euclid's GCD computation algorithm:

int compute_gcd(int x, int y)
{
1   while (x != y) {
2       if (x > y)
3           x = x - y;
4       else y = y - x;
5   }
6   return x;
}
By choosing the test set {(x=3, y=3), (x=4, y=3), (x=3,
y=4)}, we can exercise the program such that all
statements are executed at least once.
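As a quick check, the routine can be run against that test set. This is a sketch in modern C++ (the slide's listing uses K&R-style C):

```cpp
#include <cassert>

// Runnable version of the compute_gcd() routine above, exercised with
// the slide's statement-coverage test set {(3,3), (4,3), (3,4)}.
int compute_gcd(int x, int y) {
    while (x != y) {     // exercised by (4,3) and (3,4); skipped by (3,3)
        if (x > y)
            x = x - y;   // exercised by (4,3)
        else
            y = y - x;   // exercised by (3,4)
    }
    return x;            // exercised by every test case
}
```

Running all three cases executes every statement at least once, as the slide claims.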
Branch coverage
Black-Box Testing

[Figure: black-box view — requirements drive the tests; inputs and events go into the system, and outputs are observed]
Black Box Testing
Internal working is taken as perfect
Focus is on the interface
The internal working of the software being tested is not known to the tester
Checks conformance to requirements
Also known as
Functional Testing
Why White Box ?
Why use white-box testing when black-box testing has been done?
 Logic errors and incorrect assumptions: assumptions about
execution paths may be incorrect, leading to design errors.
White-box testing can find these errors.
Software Testing
Strategies
Software Testing Strategy
What is it ?
Designing effective test cases is important, but so is the strategy
you use to execute them
Why is it important ?
o Testing often takes more project effort than any other
software engineering activity
o If it is conducted haphazardly, time is wasted, unnecessary
effort is expended, and, even worse, errors sneak through
undetected
o It would therefore seem reasonable to establish a
SYSTEMATIC STRATEGY for testing software
Testing
• Testing begins at the module level and works outward
toward the integration of the entire computer-based
system.
• Different testing techniques are appropriate at different
points in time.
• Testing is conducted by the developer or tester of the
software and (for large projects) an independent test
group.
• Testing and debugging are different activities, but
debugging must be accommodated in any testing
strategy.
Verification and Validation
• Verification – are we building the product right ?
– refers to the set of activities that ensure that
software correctly implements a specific function.

• Validation – are we building the right product?


– refers to the set of activities that ensure that the
software that has been built is traceable to
customer requirements.
Verification

“Are we
building the
product right?”
Validation

“Are we
building the
right product?”
Testing Strategy

unit test → integration test → validation test → system test
Unit Testing

[Figure: the software engineer applies test cases to the module to be tested and examines the results]
Unit Testing
It is a verification effort on the smallest unit of the
software design – the software component or module
The unit test is white-box oriented
The steps can be conducted in parallel for multiple
modules
Unit Testing
[Figure: unit test cases target the module's interface, local data structures, boundary conditions, independent paths, and error handling paths]
Unit Test Procedures
Unit testing is normally considered as an adjunct to the
coding step
Because a component is not a stand-alone program, driver and/or
stub software must be developed

Driver: a driver is nothing more than a "main program" that accepts test data, passes it to the component to be tested, and prints relevant results.

Stub: a stub serves to replace modules that are subordinate to (called by) the component to be tested. It uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing.
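The driver/stub relationship can be sketched as follows. All names here are hypothetical: the stub stands in for a subordinate module and returns a canned answer, while the driver feeds test data to the component and checks the result:

```cpp
#include <cstdio>

// --- Stub: replaces a subordinate (called-by) module that is not yet
// integrated. It announces entry and returns a canned answer.
int lookup_rate(int region) {
    std::printf("stub: lookup_rate(%d) entered\n", region);  // verify entry
    return 10;  // canned answer; minimal data manipulation
}

// Hypothetical component under test; it calls the subordinate module.
int compute_charge(int units, int region) {
    return units * lookup_rate(region);
}

// --- Driver: stands in for a "main program" that accepts test data,
// passes it to the component, and prints relevant results.
bool driver(int units, int region, int expected) {
    int result = compute_charge(units, region);   // pass test data
    std::printf("driver: result = %d\n", result); // print relevant results
    return result == expected;
}
```

Neither piece ships with the product; once the real `lookup_rate` module is integrated, the stub is discarded.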
Unit Testing Environment

[Figure: unit test environment — a driver feeds test cases to the module; stubs replace its subordinate modules; the tests cover the interface, local data structures, boundary conditions, independent paths, and error handling paths; results are collected]
Comments

Drivers and stubs represent overhead
Both are software that is written but not delivered with
the final product
If drivers and stubs are kept simple, the actual overhead is
relatively low
Unit testing is simplified when a component with "high
cohesion" is designed
Integration Testing Strategies
Options:
• The big bang approach – slap everything together,
and hope it works. If not, debug it.
• An incremental construction strategy – add modules
one at a time, or in small groups, and debug problems
incrementally.
Integration Testing
Integration testing is a systematic technique for constructing
the program structure while at the same time conducting tests
to uncover errors associated with interfacing.

Objective: Take unit-tested components and build a program structure that has been dictated by design

Incremental Integration Strategies:
Top-down Integration
Bottom-up Integration
Top-down Integration
It is an incremental approach to construction of program
structure
Modules are integrated by moving downward through the
control hierarchy, beginning with the main control module
(main program)
Modules subordinate (and ultimately subordinate) to the
main control module are incorporated into the structure in
either a depth-first or breadth-first manner
Top-down Integration

[Figure: top-down integration — top module A is tested with stubs for B, F, and G; stubs are replaced one at a time, "depth first" (C, then D and E); as new modules are integrated, some subset of tests is re-run]
Top Down Integration
"Breadth first"

[Figure: breadth-first integration incorporates all components directly subordinate at each level — B, F, G first, then C, then D and E]
Steps in Top-down Integration

1) The main control module is used as a test driver, and stubs
are substituted for all components directly subordinate to
the main control module
2) Depending on the integration approach selected (depth-
first or breadth-first), subordinate stubs are replaced one
at a time with actual components
3) Tests are conducted as each component is integrated
4) On completion of each set of tests, another stub is
replaced with the real component
5) Regression testing may be conducted to ensure that new
errors have not been introduced
Pros
 major decision points are verified early in the test process
 Using depth-first integration, a complete function of the software
may be implemented
Cons
 Stubs are required which are overhead
 Problems occur when processing at low levels in the hierarchy is
required to test upper levels because stubs replace low-level
modules at the beginning of top-down testing
 No significant data flow upward in the program structure until all
the stubs are replaced by the actual components
Bottom-up Integration
It begins construction and testing with atomic modules
(i.e., components at the lowest levels in the program
structure)
processing required for components subordinate to a given
level is always available
need for stubs is eliminated
Bottom-Up Integration

[Figure: bottom-up integration — worker modules D and E are grouped into a cluster and integrated; drivers are replaced one at a time as integration moves upward through C toward B, F, and G]
Bottom Up Integration
[Figure: worker modules are grouped into clusters (Cluster 1, Cluster 2) and tested with drivers D1 and D2 before being integrated beneath modules B and G of main module A]
Steps in Bottom-up Integration
1) Low-level modules are combined into clusters that
perform a specific software function
2) A driver (a control program for testing) is written to
coordinate test case input and output
3) The cluster is tested
4) Drivers are removed and clusters are combined moving
upwards in the program structure
Pros
 Processing required for components subordinate to a given
level is always available
 The need for stubs is eliminated
 As integration moves upward, the need for separate test
drivers lessens
Cons
 “the program as an entity does not exist until the last
module is added”
Sandwich Testing

A combined approach that uses top-down tests for the upper levels of the program structure, coupled with bottom-up tests for the subordinate levels.

[Figure: top modules are tested with stubs; worker modules are grouped into builds (clusters) and integrated bottom-up]
High-Order Testing

• Validation Testing
• Verifies conformance with requirements
• Answers the question “Did we build the correct
product?”
• Alpha and Beta Testing
• Testing by the customer of a near-final version
of the product.
• System Testing
• Testing of the entire software system, focused
on end-to-end qualities.
Validation Testing
• Validation succeeds when the software under test functions in
a manner that can reasonably be expected by the customer.
• Validation is achieved through a series of black-box tests that
demonstrate conformity with requirements.
• The test plan should outline the classes of tests to be
conducted and define specific test cases that will be used in an
attempt to uncover errors in conformity with requirements.
• Deviations or errors discovered at this stage in a project can
rarely be corrected prior to scheduled delivery
Alpha and Beta Testing

Alpha Test: the customer tests the software at the developer's site.
Beta Test: the customer tests the software at the customer's site; the developer reviews the reported results.
System Testing
System testing is actually a series of different tests whose
primary purpose is to fully exercise the computer-based
system.

Although each test has a different purpose, all work to verify that system elements have been properly integrated and perform allocated functions.
System Testing
• Recovery testing – forces the software to fail in a variety of
ways and verifies that recovery is properly performed.
• Security testing – attempts to verify that protection
mechanisms built into a system will in fact protect it from
improper access.
• Stress testing – executes a system in a manner that demands
resources in abnormal quantity, frequency, or volume.
• Performance testing – tests run-time performance of
software within the context of an integrated system.
Debugging:
A Diagnostic Process
Debugging
Debugging is not Testing
 Debugging occurs as a consequence of
successful testing
 When a test case uncovers an error,
debugging is the process that results in the
removal of the error
 Debugging is the process that connects a
symptom to a cause, thereby leading to error
correction
 It's an art. Some people are good at it and
some aren't
The Debugging Process
[Figure: the debugging process — test cases yield results; debugging relates suspected causes to identified causes, producing corrections, regression tests, and new test cases]
Debugging Effort

[Figure: debugging effort divides into the time required to diagnose the symptom and determine the cause (the bulk of the effort), and the time required to correct the error and conduct regression tests]
Symptoms and Causes

• Symptom and cause may be geographically separated.
• Symptom may disappear when another problem is fixed.
• Cause may be due to a combination of non-errors.
• Cause may be due to a system or compiler error.
• Cause may be due to assumptions that everyone believes.
• Symptom may be intermittent.
Consequences of Bugs
[Figure: the damage caused by a bug ranges from mild, annoying, and disturbing through serious, extreme, and catastrophic to infectious, depending on bug type]

Bug Categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.
Debugging Techniques
1. Brute force – print out values of "all" the
variables at "every" step, look through the mass
of data, and find the error.
• Applied when all else fails.
• Uses a "let the computer find the error" philosophy.

2. Backtracking – begin with the symptom of the error
and trace backwards from there (by hand) to the cause.
• Used in small programs.
• Beginning at the site where a symptom has been
uncovered, the source code is traced backward until
the site of the cause is found.
Debugging Techniques
3. Cause Elimination
• Induction – moving from particular to general:
• Run error-detecting test with lots of different input.
• Based on all the data generated, hypothesize an error
cause.
• Run another test to validate the hypothesis.
• If hypothesis is not validated, generate a new
hypothesis and repeat.
Debugging Techniques
• Deduction – moving from general to particular:
• Consider all possible error causes.
• Generate tests to eliminate each (we
hope/expect all but one will succeed).
• We may then be able to use further tests to
further refine the error cause.
Debugging Hints
• First, think about the symptom you are seeing – don’t
run off half-cocked.
• Use tools (e.g., dynamic debuggers) to gain more
insight, and to make the steps reproducible.
• If at an impasse, get help from someone else.
• Be absolutely sure to conduct regression tests when
you do fix the bug.
References:
Software Engineering - A Practitioner's Approach by
Roger S. Pressman (6th Ed)
• 13.1, 13.2, 13.3A-13.3, 13.5, 13.6, 13.7
Software Engineering - A Practitioner's Approach by
Roger S. Pressman (5th Ed)
Chapter 18
• 18.1 - (18.1.1, 18.1.2, 18.1.3), 18.2 , 18.3 - (18.3.1,
18.3.2)
• 18.4 - (18.4.1, 18.4.2), 18.4 - (18.4.3, 18.4.4, 18.4.5)
• 18.5 - (18.5.1, 18.5.2, 18.5.3) , 18.6 - (18.6.1, 18.6.2,
18.6.3, 18.6.4), 18.7 - (18.7.1, 18.7.2, 18.7.3)
Validation Testing

 Aims to demonstrate that the software functions in a manner that can be
reasonably expected by the customer.
 Tests conformance of the software to the Software Requirements
Specification. This should contain a section "Validation Criteria"
which is used to develop the validation tests.
In Validation Testing

 Validation Test Criteria
 set of black-box tests to demonstrate conformance with
requirements
 Configuration Review
• An audit to ensure that all elements of the software
configuration are properly developed, catalogued, and
have the necessary detail to support maintenance.
 Alpha (α) and Beta (β) Testing

 Acceptance tests
• Carried out by customers to validate all requirements
 Alpha tests
• Conducted at developer’s site by a customer
• Conducted in controlled environment
 Beta tests
• Conducted at customer’s site by end users
• Environment not controllable by developer
System Testing
 Recovery testing - tests of fault tolerance (power
failures, hardware failures)
 Security testing - tests to be sure hackers
cannot gain access or that users can only
access functions/data intended for them
 Stress testing - tests limits of the system,
overload the amount or rate at which data comes
in.
 Performance tests - verify the system works fast
enough, using the defined set of resources, etc.
Basis Path Testing

 White-box testing technique
 Derive a logical complexity measure
 Define a basis set of execution paths
 Thus, test cases derived to exercise the basis set are
guaranteed to execute every statement in the program at
least once
Basis Path Testing
Flow Graph Notation
Cyclomatic Complexity Analysis
Graph Matrices
Flow Graph Notation
Node: one or more procedural statements
Edge / Link: indicates flow of control
Areas bounded by edges and nodes are called regions;
the area outside the graph is also a region
A predicate node is a node that contains a condition
Flow Graph Fundamentals
The flow graph is somewhat similar to a flowchart
There are nodes (vertices) for
Assignment and procedure call statements
Predicates (conditional expressions)
• If the predicate involves an “and” or an “or”,
multiple nodes will be needed
All “End If”, “End While” and “End Case” clauses
A linear sequence of nodes can be combined into one node
[Figure: a flow chart and its equivalent flow graph. Picture Source: Roger Pressman, Software Engineering, 5th Edition]
Cyclomatic Complexity
A Software metric that provides a quantitative measure of
the logical complexity of the program.

It defines the number of independent paths in the basis set,
i.e., an upper bound on the number of tests that need to be
made.

Independent path: any path through the program that
introduces at least one new set of processing statements
or condition variables.
How many paths to look for?
Formulae for measuring Cyclomatic Complexity
 V(G) = R

 V(G) = E - N + 2

 V(G) = P + 1

Where,
• R = Number of regions
• E = Number of edges
• N = Number of nodes
• P = Predicate nodes
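The three formulas are trivially checkable in code. This sketch encodes each one and verifies it against the worked example that appears later in these slides (E = 11, N = 9, P = 3, R = 4, each formula giving V(G) = 4):

```cpp
// The three cyclomatic-complexity formulas from the slide, as one-liners.
int vg_from_regions(int r)          { return r; }          // V(G) = R
int vg_from_edges(int e, int n)     { return e - n + 2; }  // V(G) = E - N + 2
int vg_from_predicates(int p)       { return p + 1; }      // V(G) = P + 1
```

All three must agree for any flow graph; disagreement means the graph was drawn or counted incorrectly.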
Steps in Testing
1. Draw the flow graph

2. Calculate the cyclomatic complexity, V(G).

3. Determine a basis set.

4. Prepare test cases that will force the execution of each path in the
basis set.
Example program

[Figure: the example program's flow chart and flow graph. Picture Source: Roger Pressman, Software Engineering, 5th Edition]

Complexity of example program

E = 11, N = 9, so V(G) = 11 – 9 + 2 = 4

P = 3, so V(G) = 3 + 1 = 4

R = 4, so V(G) = 4

V(G) is 4
Let’s look at the paths

path 1: 1-11
path 2: 1-2-3-4-5-10-1-11
path 3: 1-2-3-6-8-9-10-1-11
path 4: 1-2-3-6-7-9-10-1-11
Great, so now what?
So now you set about to design test cases that cause the
basis set of paths to execute
To summarize:
1. Using the code, draw the flow graph
2. Determine V(G) of the flow graph
3. Determine a basis set of independent paths
4. Prepare a set of test cases that will force all paths in the
basis set
Example
Draw the flow graph, calculate the cyclomatic
complexity, list the basis paths and prepare a test case for
one path using the following C++ code fragment:

while (value[i] != -999.0 && totinputs < 100)
{  totinputs++;
   if (value[i] >= min && value[i] <= max)
   {  totvalid++;
      sum = sum + value[i];
   } // End if
   i++;
} // End while
[Figure: the same fragment annotated with flow graph nodes 1–7, and the resulting flow graph]
Cyclomatic Complexity:
V(G) = number of enclosed areas + 1 = 5
V(G) = number of simple predicates + 1 = 5
V(G) = edges - nodes + 2 = 10 - 7 + 2 = 5

1-7 (value[i] = -999.0)
1-2-7 (value[i] = 0, totinputs = 100)
1-2-3-6-1-7
1-2-3-4-6-1-7
1-2-3-4-5-6-1-7
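The fragment can be wrapped in a function so these paths can actually be executed. This is a sketch using the slide's identifiers; the bounds check on `i` is an addition, since the slide assumes the -999.0 sentinel is always present:

```cpp
#include <cstddef>
#include <vector>

// Runnable wrapper for the fragment above, so the basis paths listed in
// the slide can be driven with concrete input vectors.
struct Totals { int totinputs = 0; int totvalid = 0; double sum = 0.0; };

Totals process(const std::vector<double>& value, double min, double max) {
    Totals t;
    std::size_t i = 0;
    while (i < value.size() && value[i] != -999.0 && t.totinputs < 100) {
        t.totinputs++;
        if (value[i] >= min && value[i] <= max) {
            t.totvalid++;
            t.sum = t.sum + value[i];
        } // End if
        i++;
    } // End while
    return t;
}
```

For example, an input starting with the sentinel drives path 1, an out-of-range value drives the path that skips the inner if, and an in-range value drives the full path through `totvalid++`.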

Cyclomatic Complexity
A number of industry studies have indicated
that the higher V(G), the higher the probability
of errors.

[Figure: distribution of modules by V(G); modules with higher V(G) are more error prone]
Graph Matrices
A graph matrix is a square matrix, whose
size (i.e. No. of rows and columns) = No. of Nodes in the
flow graph
Each row and column corresponds to an identified node
Matrix entries correspond to connections (an edge)
between nodes
A link weight is added to each matrix entry
Graph Matrices
Assist in basis path testing

[Figure: a five-node flow graph with edges a through g, and its connection matrix; a matrix entry of 1 marks an edge from the row node to the column node]
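One use of the connection matrix (following Pressman) is to compute V(G) directly: each row with more than one connection contributes (connections − 1), and the sum plus 1 is V(G). A sketch; the four-node if-else graph in the test is hypothetical, not the figure's five-node graph:

```cpp
#include <utility>
#include <vector>

// Build an n-by-n connection matrix from an edge list, then derive V(G):
// every row with more than one outgoing connection contributes
// (connections - 1); V(G) is that sum plus 1.
int vg_from_connection_matrix(int n,
                              const std::vector<std::pair<int, int>>& edges) {
    std::vector<std::vector<int>> m(n, std::vector<int>(n, 0));
    for (const auto& e : edges) m[e.first][e.second] = 1;  // mark each edge
    int sum = 0;
    for (int i = 0; i < n; ++i) {
        int row = 0;
        for (int j = 0; j < n; ++j) row += m[i][j];  // connections in row i
        if (row > 1) sum += row - 1;  // extra connections mark decisions
    }
    return sum + 1;
}
```

A single if-else (one decision node with two outgoing edges) correctly yields V(G) = 2.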


Black-Box Testing
Focuses on the functional requirements of the software

It is a complementary approach to white-box techniques
and uncovers a different class of errors

It is applied during later stages of testing
Black-Box
Finds Errors In:
Incorrect or missing functions
Interface errors
Data structures or external data base access
Performance errors
Initialization and termination errors
Black-Box Testing Methods
Graph-Based Testing

Equivalence Partitioning

Boundary Value Analysis

Comparison Testing
Equivalence Partitioning Method
Divides the input domain of a program into
classes of data from which test cases can be
derived.
Test case design is based on an evaluation of
equivalence classes for an input condition.
Equivalence class represents a set of valid or
invalid states for input conditions.
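For example, a hypothetical "month" input field yields one valid class (1..12) and two invalid classes (below and above the range); one representative per class becomes a test case. A sketch:

```cpp
#include <string>

// Equivalence partitioning for a hypothetical "month" input field:
// valid class 1..12, plus two invalid classes outside the range.
std::string month_class(int m) {
    if (m < 1)  return "invalid: below range";
    if (m > 12) return "invalid: above range";
    return "valid";
}
```

Three test cases, one per class (e.g. 0, 6, 13), cover the whole input domain under this partitioning.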
Black-box Testing — Equivalence Partitioning

[Figure: invalid and valid input classes feed the system, which produces outputs]
Boundary Value Analysis Method
Create test cases that exercise bounding values (the edges
of the class)
Complements equivalence class testing
Derives test cases from the output domain as
well as the input domain
Guidelines For Boundary Value
If an input condition specifies a range A to B, create test cases
for values A and B and just above and just below A and B.
If an input condition specifies a set of values, create test cases
for the minimum and maximum values and just above and below them.
Apply the above guidelines to output conditions as well.
If internal program data structures have prescribed boundaries,
create test cases for those boundaries.
Black-box Testing — Boundary Value Analysis
Faults frequently exist at and on either side of the
boundaries of equivalence sets
For a range v1 ... v2, test data should be designed with v1
and v2, just above and just below v1 and v2, respectively
Example
Suppose that the range is 1 ... 25
Then test data are 0, 1, 2, 24, 25, 26
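That guideline is mechanical enough to automate. A sketch that generates the six boundary values for any range [lo, hi]:

```cpp
#include <vector>

// Boundary test data for a range [lo, hi]: each edge, plus the values
// just below and just above it, per the guideline above.
std::vector<int> boundary_values(int lo, int hi) {
    return {lo - 1, lo, lo + 1, hi - 1, hi, hi + 1};
}
```

For the range 1..25 this reproduces the slide's test data 0, 1, 2, 24, 25, 26.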
Equivalence Partitioning:
Sample Equivalence Classes
Valid data
user supplied commands
responses to system prompts
file names
computational data
physical parameters
bounding values
initiation values
output data formatting
responses to error messages
graphical data (e.g., mouse picks)

Invalid data
data outside bounds of the program
physically impossible data
proper value supplied in wrong place
References:
Software Engineering - A Practitioner's Approach
by Roger S. Pressman (6th Ed)
14.3, 14.4

Software Engineering - A Practitioner's Approach
by Roger S. Pressman (5th Ed)
Chapter 17
• 17.1 - (17.1.1, 17.1.2, 17.1.3), 17.2, 17.6 - (17.6.2,
17.6.3)
• 17.3, 17.4 - (17.4.1, 17.4.2, 17.4.3, 17.4.4)
