
20CSE71 SOFTWARE TESTING

By
Chempavathy.B
MODULE 2
White Box and Black Box Testing:
White box testing, static testing, static analysis tools, Structural
testing: Module/Code functional testing, Code coverage testing,
Code complexity testing,
Black Box testing, Requirements based testing, Boundary value
analysis, Equivalence partitioning, state/ graph based testing,
Model based testing and model checking, Differences between
white box and Black box testing.
WHAT IS WHITE BOX TESTING?
• White box testing is a way of testing the external
functionality of the code by examining and testing the
program code that realizes the external functionality. This
is also known as clear box, or glass box or open box
testing.
• White box testing takes into account the program code,
code structure, and internal design flow.
• A number of defects come about because of incorrect
translation of requirements and design into program code.
Some other defects are created by programming errors and
programming language idiosyncrasies.
Classification of white box testing.
STATIC TESTING
• Static testing is a type of testing which requires only the
source code of the product, not the binaries or
executables. Static testing does not involve executing
the programs on computers; instead, it involves select people
going through the code to find out whether:
– the code works according to the functional requirements;
– the code has been written in accordance with the design
developed earlier in the project life cycle;
– the code for any functionality has been missed out; and
– the code handles errors properly.
Static testing can be done by humans or with the help of specialized tools.
Static Analysis Tools
• There are several static analysis tools available in the market that
can reduce manual work by analyzing the code to find
errors such as those listed below (a small illustrative snippet follows the list).
1. Whether there is unreachable code (usage of GOTO
statements sometimes creates this situation; there could be
other reasons too)
2. Variables declared but not used
3. Mismatch in definition and assignment of values to variables
4. Illegal or error prone typecasting of variables
5. Use of non-portable or architecture-dependent programming
constructs
6. Memory allocated but without corresponding statements to
free it up
7. Calculation of cyclomatic complexity
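As a hedged illustration of a few of the issues above, the hypothetical Python snippet below contains the kinds of problems a typical static analysis tool (for example, a linter) would flag without ever running the code; the function and variable names are made up for this sketch.

def compute_discount(price, rate, unused_flag):   # parameter declared but never used (item 2)
    if price < 0:
        return 0
    return price * rate
    print("discount computed")                    # unreachable code: follows an unconditional return (item 1)

def risky_cast(value):
    count = "10"                                  # mismatch between definition (string) and intended numeric use (item 3)
    return value + int(count)                     # error-prone conversion buried inside an expression (item 4)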
• These static analysis tools can also be considered as
an extension of compilers as they use the same
concepts and implementation to locate errors. A
good compiler is also a static analysis tool.
• Some of the static analysis tools can also check
compliance for coding standards as prescribed by
standards such as POSIX. These tools can also check
for consistency in coding guidelines (for example,
naming conventions, allowed data types,
permissible programming constructs, and so on).
• While following any of the methods of human
checking (desk checking, walkthroughs, or formal
inspections), it is useful to have a code review
checklist. Every organization should develop its own
code review checklist. The checklist should be kept
current with new learnings as they come about.
• In a multi-product organization, the checklist may
be at two levels: first, an organization-wide checklist
that includes issues such as organizational coding
standards, documentation standards, and so on;
second, a product- or project-specific checklist that
addresses issues specific to the product or project.
CODE REVIEW CHECKLIST
• Data Item Declaration related
• Data Usage related
• Control Flow related
• Standard related
• Style related
• Miscellaneous
• Documentation related
STRUCTURAL TESTING
• Structural testing takes into account the code, code structure,
internal design, and how they are coded. The fundamental
difference between structural testing and static testing is that in
structural testing tests are actually run by the computer on the
built product, whereas in static testing, the product is tested by
humans using just the source code and not the executables or
binaries.
• Structural testing entails running the actual product against
some predesigned test cases to exercise as much of the code as
possible or necessary. A given portion of the code is exercised if
a test case causes the program to execute that portion of the
code when running the test.
Unit/Code Functional Testing
• This initial part of structural testing corresponds to some quick checks that a developer
performs before subjecting the code to more extensive code coverage testing or code
complexity testing. This can happen by several methods, described below.
Initially, the developer can perform certain obvious tests, knowing the input variables and the
corresponding expected output variables. This can be a quick test that checks out any obvious
mistakes. By repeating these tests for multiple values of input variables, the confidence level of
the developer to go to the next level increases. This can even be done prior to formal reviews of
static testing so that the review mechanism does not waste time catching obvious errors.
For modules with complex logic or conditions, the developer can build a “debug version” of the
product by putting intermediate print statements and making sure the program is passing
through the right loops and iterations the right number of times. It is important to remove the
intermediate print statements after the defects are fixed.
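As a sketch of the "debug version" idea, the hypothetical routine below (a made-up function that sums positive values) uses intermediate print statements to confirm that the loop runs the expected number of times; these statements would be removed once the defects are fixed.

DEBUG = True  # flip to False (or strip the prints) once defects are fixed

def sum_positive(values):
    total = 0
    iterations = 0
    for v in values:
        iterations += 1
        if DEBUG:
            print(f"iteration {iterations}: v={v}, running total={total}")
        if v > 0:
            total += v
    if DEBUG:
        print(f"loop executed {iterations} times for {len(values)} inputs")
    return total

print(sum_positive([3, -1, 4]))  # expected output: 7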
Another approach to doing the initial test is to run the product under a debugger or an Integrated
Development Environment (IDE). These tools allow single-stepping through instructions (allowing the
developer to stop at the end of each instruction, view or modify the contents of variables, and
so on), setting breakpoints at any function or instruction, and viewing the various system
parameters or program variable values.
Code Coverage Testing
• Code coverage testing involves designing and
executing test cases and finding out the percentage
of code that is covered by testing. The percentage of
code covered by a test is found by adopting a
technique called instrumentation of code.
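A rough sketch of what instrumentation means, using Python's standard sys.settrace hook to record which line numbers of a function actually execute during a test; real coverage tools do this far more efficiently, and the classify function here is just an assumed example.

import sys

executed_lines = set()

def tracer(frame, event, arg):
    # record every line executed inside the functions called while tracing is on
    if event == "line":
        executed_lines.add(frame.f_lineno)
    return tracer

def classify(x):
    if x >= 0:
        return "non-negative"
    else:
        return "negative"

sys.settrace(tracer)
classify(5)              # exercises only the 'non-negative' branch
sys.settrace(None)

print("lines executed:", sorted(executed_lines))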
• Code coverage testing is made up of the following
types of coverage.
1. Statement coverage
2. Path coverage
3. Condition coverage
4. Function coverage
Statement coverage
• Program constructs in most conventional
programming languages can be classified as
1. Sequential control flow
2. Two-way decision statements like if then else
3. Multi-way decision statements like Switch
4. Loops like while do, repeat until and for
• Statement Coverage=(Total statements exercised/Total number of
executable statements in program)*100
• Statement coverage refers to writing test cases that execute each
of the program statements. One can start with the assumption
that the more the code covered, the better the testing of the
functionality, as the code realizes the functionality.
• For a section of code that consists of statements that are
sequentially executed (that is, with no conditional branches), test
cases can be designed to run through from top to bottom. A test
case that starts at the top would generally have to go through the
full section till the bottom of the section.
• When we consider a two-way decision construct like the if
statement, then to cover all the statements, we should also cover
the then and else parts of the if statement. This means we should
have, for each if then else, (at least) one test case to test the
then part and (at least) one test case to test the else part.
• A multi-way decision construct such as a switch statement can
be reduced to multiple two-way if statements. Thus, to cover all
possible switch cases, there would be multiple test cases.
• A good percentage of the defects in programs come about
because of loops that do not function properly. More often, loops
fail in what are called "boundary conditions."
• In order to make sure that there is better statement
coverage for statements within a loop, there should be test
cases (sketched in the example after this list) that
1. Skip the loop completely, so that the situation of the
termination condition being true before starting the loop
is tested.
2. Exercise the loop between once and the maximum
number of times, to check all possible "normal"
operations of the loop.
3. Try covering the loop around the "boundary" of n, the
maximum number of iterations; that is, just below n, at n, and just above n.
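A hedged sketch of these three kinds of loop test cases, for a hypothetical function that returns the largest of the first n elements of a list (n here plays the role of the maximum number of loop iterations).

def largest_of_first(values, n):
    """Return the largest among the first n values (None if n <= 0 or the list is empty)."""
    largest = None
    i = 0
    while i < n and i < len(values):
        if largest is None or values[i] > largest:
            largest = values[i]
        i += 1
    return largest

# 1. Skip the loop completely (termination condition true before the loop starts)
assert largest_of_first([5, 2, 9], 0) is None

# 2. Exercise the loop a "normal" number of times (between once and the maximum)
assert largest_of_first([5, 2, 9], 2) == 5

# 3. Exercise the loop around the boundary n: just below, at, and just above
assert largest_of_first([5, 2, 9], 2) == 5      # just below the list length
assert largest_of_first([5, 2, 9], 3) == 9      # exactly at the boundary
assert largest_of_first([5, 2, 9], 4) == 9      # just above; the loop must still stop safely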
Example
• Consider the following program.
Total = 0; /* set total to zero */
if (code == "M") {
    Stmt1;
    Stmt2;
    Stmt3;
    Stmt4;
    Stmt5;
    Stmt6;
    Stmt7;
} else
    percent = value/Total*100; /* divide by zero */
In the above program, when we test with code = "M", we will get 80 percent code
coverage. But if the data distribution in the real world is such that 90 percent of the time
the value of code is not "M", then the program will fail 90 percent of the time (because of
the divide by zero in the else branch).
Path coverage
• In path coverage , we split a program into a number
of distinct paths. A program (or a part of a program)
can start from the beginning and take any of the
paths to its completion.
• Let us take an example of a date validation routine.
The date is accepted as three fields mm, dd and yyyy.
We have assumed that prior to entering this routine,
the values are checked to be numeric.
• Path Coverage=(Total Paths exercised / Total number
of paths in program)*100
Flowchart for a date validation routine
There are different paths that can be taken through
the program.
• A
• B-D-G
• B-D-H
• B-C-E-G
• B-C-E-H
• B-C-F-G
• B-C-F-H
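The flowchart itself is not reproduced here, but a rough Python sketch of such a date validation routine shows how different input combinations drive the program down different paths; the exact branch structure below is an assumption for illustration, loosely mapped to the path labels above.

def validate_date(mm, dd, yyyy):
    # Path A: month invalid, no further checks performed
    if mm < 1 or mm > 12:
        return "invalid month"
    # Paths through B: month valid, now check the day against the month length
    if mm == 2:
        max_day = 29 if (yyyy % 4 == 0 and yyyy % 100 != 0) or yyyy % 400 == 0 else 28
    elif mm in (4, 6, 9, 11):
        max_day = 30
    else:
        max_day = 31
    if 1 <= dd <= max_day:
        return "valid date"      # paths ending in G
    return "invalid day"         # paths ending in H

print(validate_date(0, 10, 2024))   # takes the "invalid month" path
print(validate_date(2, 30, 2023))   # February path ending in an invalid day
print(validate_date(7, 15, 2023))   # 31-day-month path ending in a valid date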
Condition coverage
• In the above example, even if we have covered all the paths possible, it would not
mean that the program is fully tested. For example, we can make the program
take the path A by giving a value less than 1 (for example, 0) to mm and find that
we have covered the path A and the program has detected that the month is
invalid.
• But, the program may still not be correctly testing for the other condition namely
mm > 12.
• It is necessary to have test cases that exercise each Boolean expression and to have
test cases that produce both the TRUE and FALSE outcomes.
• Condition Coverage = (Total decisions exercised / Total number of decisions in
program ) * 100
• Condition coverage, as defined by the formula above, gives an indication of the
percentage of conditions covered by a set of test cases. Condition coverage is a much
stronger criterion than path coverage, which in turn is a much stronger criterion than
statement coverage (a small illustration follows).
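A small illustration of the difference, using a made-up month check: condition coverage needs test cases that make each Boolean condition (mm < 1 and mm > 12) evaluate to both TRUE and FALSE, not merely test cases that reach each branch once.

def month_is_invalid(mm):
    # one decision made up of two conditions; condition coverage exercises each condition both ways
    return mm < 1 or mm > 12

# mm = 0  : (mm < 1) is TRUE                -> decision TRUE (low month rejected)
# mm = 13 : (mm < 1) FALSE, (mm > 12) TRUE  -> decision TRUE (exercises the second condition)
# mm = 6  : both conditions FALSE           -> decision FALSE (valid month)
assert month_is_invalid(0) is True
assert month_is_invalid(13) is True
assert month_is_invalid(6) is False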
Function coverage
• This is a new addition to structural testing, used to identify how many program functions
(similar to functions in C or methods in object-oriented languages) are covered by test cases.
• Function Coverage = (Total functions exercised / Total number of functions in program)
* 100
• The advantages that function coverage provides over the other types of coverage are as
follows.
1. Functions are easier to identify in a program and hence it is easier to write test cases
to provide function coverage.
2. Since functions are at a much higher level of abstraction than code, it is easier to
achieve 100 percent function coverage than 100 percent coverage in any of the earlier
methods.
3. Functions have a more logical mapping to requirements and hence can provide a
more direct correlation to the test coverage of the product.
4. Since functions are a means of realizing requirements, the importance of functions
can be prioritized based on the importance of the requirements they realize. Thus, it
would be easier to prioritize the functions for testing. This is not necessarily the case
with the earlier methods of coverage.
5. Function coverage provides a natural transition to black box testing.
Code Complexity Testing
• The previous sections discussed different types of coverage that can be provided to test a
program. Two questions that come to mind while using these
types of coverage are:
1. Which of the paths are independent? If two paths are not
independent, then we may be able to minimize the number of
tests.
2. Is there an upper bound on the number of tests that must be
run to ensure that all the statements have been executed at
least once?
• Cyclomatic complexity is a metric that quantifies the complexity
of a program and thus provides answers to the above questions.
• A program is represented in the form of a flow graph. A flow graph consists of nodes
and edges. In order to convert a standard flow chart into a flow graph to compute
cyclomatic complexity, the following steps can be taken.
1. Identify the predicates or decision points (typically the Boolean conditions or
conditional statements) in the program.
2. Ensure that the predicates are simple (that is, no and/or, and so on in each predicate).
If there are loop constructs, break the loop termination checks into simple predicates.
3. Combine all sequential statements into a single node. The reasoning here is that these
statements all get executed, once started.
4. When a set of sequential statements are followed by a simple predicate (as simplified
in (2) above), combine all the sequential statements and the predicate check into one
node and have two edges emanating from this one node. Such nodes with two edges
emanating from them are called predicate nodes.
5. Make sure that all the edges terminate at some node; add a node to represent all the
sets of sequential statements at the end of the program
Flow graph translation of an OR to a simple
predicate.
Converting a conventional flow chart to a
flow graph.
A hypothetical program with no decision
node.

This graph has no predicate nodes because there are no decision points. Hence, the
cyclomatic complexity is equal to the number of predicate nodes (0) + 1 = 1.
Note that in this flow graph, edges (E) = 1 and nodes (N) = 2, so the cyclomatic
complexity also works out to E – N + 2 = 1 – 2 + 2 = 1.
Adding one decision node

Incidentally, this number of independent paths, 2, is again equal to the number of
predicate nodes (1) + 1. When we add a predicate node (a node with two edges),
the complexity increases by 1, since the "E" in the E – N + 2 formula increases by one
while the "N" is unchanged. As a result, the complexity using the formula E – N + 2 also
works out to 2.

Cyclomatic Complexity = Number of Predicate Nodes + 1
Cyclomatic Complexity = E – N + 2
• Using the flow graph, an independent path
can be defined as a path in the flow graph that
has at least one edge that has not been
traversed before in other paths. A set of
independent paths that cover all the edges is a
basis set. Once the basis set is formed, test
cases should be written to execute all the
paths in the basis set.
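As a worked illustration (the function below is an assumption, not taken from the text), counting predicate nodes gives the cyclomatic complexity, and a basis set of that many independent paths tells us how many test cases are needed.

def grade(score, bonus_applies):
    # predicate 1: score >= 90
    if score >= 90:
        result = "A"
    # predicate 2: score >= 75
    elif score >= 75:
        result = "B"
    else:
        result = "C"
    # predicate 3: bonus_applies
    if bonus_applies:
        result += "+"
    return result

# Cyclomatic complexity = number of predicate nodes + 1 = 3 + 1 = 4,
# so a basis set needs 4 independent paths; one possible basis set:
assert grade(95, False) == "A"    # path: predicate 1 true
assert grade(80, False) == "B"    # path: predicate 1 false, predicate 2 true
assert grade(60, False) == "C"    # path: both score predicates false
assert grade(60, True) == "C+"    # path: adds the previously untraversed bonus edge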
Calculating and using cyclomatic complexity
• It will be very difficult to manually create flow graphs for large
programs. There are several tools available in the market
which can compute cyclomatic complexity.
• Interpretation of the complexity value:
– 1–10: well-written code; testability is high; cost/effort to maintain is low
– 10–20: moderately complex; testability is medium; cost/effort to maintain is medium
– 20–40: very complex; testability is low; cost/effort to maintain is high
– >40: not testable; any amount of money/effort to maintain may not be enough
Black Box Testing
• Black box testing involves looking at the specifications
and does not require examining the code of a
program.
• Black box testing is done from the customer's
viewpoint. The test engineer engaged in black box
testing only knows the set of inputs and expected
outputs and is unaware of how those inputs are
transformed into outputs by the software.
• Black box testing is done without the knowledge of
the internals of the system under test
WHY BLACK BOX TESTING
Black box testing helps in the overall functionality
verification of the system under test.
• Black box testing is done based on requirements
• Black box testing addresses the stated
requirements as well as implied requirements
• Black box testing encompasses the end user
perspectives
• Black box testing handles valid and invalid inputs
Requirements Based Testing
• Requirements testing deals with validating the
requirements given in the Software Requirements
Specification (SRS) of the software system.
• The precondition for requirements testing is a detailed
review of the requirements specification.
Requirements review ensures that they are consistent,
correct, complete, and testable.
• All explicit requirements (from the Systems
Requirements Specifications) and implied requirements
(inferred by the test team) are collected and
documented as “Test Requirements Specification”
(TRS).
Sample requirements specification for lock
and key system
Sample Requirements Traceability Matrix.(RTM)
Once the test case creation is completed, the RTM helps in
identifying the relationship between the requirements and test
cases. The following combinations are possible
• One to one—For each requirement there is one test case (for
example, BR-01)
• One to many—For each requirement there are many test cases
(for example, BR-03)
• Many to one—A set of requirements can be tested by one test
case
• Many to many—Many requirements can be tested by many test
cases (these kinds of test cases are normal with integration and
system testing; however, an RTM is not meant for this purpose)
• One to none—The set of requirements can have no test cases.
The test team can take a decision not to test a requirement due
to non-implementation or the requirement being low priority
(for example, BR-08)
An RTM plays a valuable role in requirements based testing.
1. Regardless of the number of requirements, ideally each of the requirements has to be
tested. When there is a large number of requirements, it would not be possible for
someone to manually keep track of the testing status of each requirement. The RTM
provides a tool to track the testing status of each requirement, without missing any (key)
requirements.
2. By prioritizing the requirements, the RTM enables testers to prioritize test case
execution to catch defects in the high-priority areas as early as possible. It is also used to
find out whether there are adequate test cases for high-priority requirements and to
reduce the number of test cases for low-priority requirements. In addition, if there is a
time crunch for testing, the prioritization enables selecting the right features to test.
3. Test conditions can be grouped to create test cases or can be represented as unique test
cases. The list of test case(s) that address a particular requirement can be viewed from
the RTM.
4. Test conditions/cases can be used as inputs to arrive at a size / effort / schedule
estimation of tests.
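A minimal sketch of how an RTM can be held as data and queried for these purposes; the requirement and test case IDs are hypothetical, with the BR-xx naming following the sample RTM above.

# Hypothetical RTM: requirement ID -> (priority, list of test case IDs)
rtm = {
    "BR-01": ("High", ["TC-01"]),                     # one to one
    "BR-03": ("High", ["TC-05", "TC-06", "TC-07"]),   # one to many
    "BR-08": ("Low",  []),                            # one to none (not tested)
}

# Test cases per requirement, and requirements left without any test case
for req, (priority, cases) in rtm.items():
    print(f"{req} ({priority}): {len(cases)} test case(s)")

untested = [req for req, (_, cases) in rtm.items() if not cases]
print("Requirements with no test cases:", untested)

# Prioritization: collect high-priority test cases so they can be executed first
high_priority_cases = [tc for _, (p, cases) in rtm.items() if p == "High" for tc in cases]
print("Execute first:", high_priority_cases)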
The Requirements Traceability Matrix provides a wealth of
information on various test metrics. Some of the metrics that
can be collected or inferred from this matrix are as follows.
• Requirements addressed priority wise—This metric helps in
knowing the test coverage based on the requirements: the
number of tests created for high-priority requirements
versus tests created for low-priority requirements.
• Number of test cases requirement wise—For each
requirement, the total number of test cases created.
• Total number of test cases prepared—Total of all the test
cases prepared for all requirements.
Once the test cases are executed, the test results can be used to collect metrics such as
• Total number of test cases (or requirements) passed—Once test execution is completed,
the total number of passed test cases and the percentage of requirements they correspond to.
• Total number of test cases (or requirements) failed—Once test execution is completed,
the total number of failed test cases and the percentage of requirements they correspond to.
• Total number of defects in requirements—List of defects reported for each requirement
(defect density for requirements). This helps in doing an impact analysis of what
requirements have more defects and how they will impact customers. A comparatively
high-defect density in low-priority requirements is acceptable for a release. A high-defect
density in high-priority requirement is considered a high-risk area, and may prevent a
product release.
• Number of requirements completed—Total number of requirements successfully
completed without any defects.
• Number of requirements pending—Number of requirements that are pending due to
defects.
Sample test execution data.
Graphical representation of test case results.
Types of Black Box Testing
Several types of Black Box Testing are possible, but the two mentioned below are the
fundamental ones.
#1) Functional Testing
• This testing type deals with the functional requirements or specifications of
an application. Here, different actions or functions of the system are being
tested by providing the input and comparing the actual output with the
expected output.
For example, when we test a Dropdown list, we click on it and verify if it
expands and all the expected values are showing in the list.
A few major types of Functional Testing are:
• Smoke Testing
• Sanity Testing
• Integration Testing
• System Testing
• Regression Testing
• User Acceptance Testing
#2) Non-Functional Testing
Apart from the functionalities of the requirements, there
are even several non-functional aspects that are required to
be tested to improve the quality and performance of the
application.
A few major types of Non-Functional Testing include:
• Usability Testing
• Load Testing
• Performance Testing
• Compatibility Testing
• Stress Testing
• Scalability Testing
Black Box Testing Techniques
In order to systematically test a set of functions, it is
necessary to design test cases. Testers can create test
cases from the requirement specification document
using the following Black Box Testing techniques:
• Equivalence Partitioning
• Boundary Value Analysis
• Decision Table Testing
• State Transition Testing
• Error Guessing
• Graph-Based Testing Methods
• Comparison Testing
BOUNDARY VALUE ANALYSIS (BVA)
Definition
• Boundary Value Analysis is a black box test design technique
where test cases are designed by using boundary values
• Boundary value analysis (BVA) is based on testing at the
boundaries between partitions
• Most errors occur at the extremes (for example, < instead of <=,
or a counter off by one)
• Here we have both:
– valid boundaries (in the valid partitions) and
– invalid boundaries (in the invalid partitions)
• BVA is used in range checking
• When a function F is implemented as a
program, the input variables x1 and x2 will
have some boundaries:
a <= x1 <= b
c <= x2 <= d
where [a,b] and [c,d] are the intervals.
• BVA focuses on the boundary of the input space to identify test cases.
EXAMPLE 1 FOR BVA
• Suppose you have to check the User Name and Password field that accepts
– minimum 8 characters and
– maximum 12 characters
– Valid range 8-12
– Invalid range 7 or less than 7
– Invalid range 13 or more than 13

• Test Cases for Valid partition value, Invalid partition value and exact
boundary value
• Test Cases 1: Consider password length less than 8.
• Test Cases 2: Consider password of length exactly 8.
• Test Cases 3: Consider password of length between 9 and 11.
• Test Cases 4: Consider password of length exactly 12.
• Test Cases 5: Consider password of length more than 12.
EXAMPLE 2 FOR BVA
• Test cases for the application whose input box accepts numbers between
1-1000
- Valid range 1-1000
- Invalid range 0 or less
- Invalid range 1001 or more
• Test Cases for Valid partition value, Invalid partition value and exact boundary
value.
• Test Cases 1: Consider test data exactly as the input boundaries of
input domain i.e. values 1 and 1000
• Test Cases 2: Consider test data with values just below the extreme
edges of input domains i.e. values 0 and 999
• Test Cases 3: Consider test data with values just above the extreme edges of
input domain i.e. values 2 and 1001
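A hedged sketch of these boundary test cases in code, assuming a hypothetical accepts_number function that should return True only for values from 1 to 1000.

def accepts_number(n):
    """Hypothetical validation for an input box that accepts numbers 1-1000."""
    return 1 <= n <= 1000

# Test case 1: exactly on the input boundaries of the valid domain
assert accepts_number(1) and accepts_number(1000)

# Test case 2: just below the extreme edges
assert not accepts_number(0)      # below the lower boundary -> rejected
assert accepts_number(999)        # just below the upper boundary -> accepted

# Test case 3: just above the extreme edges
assert accepts_number(2)          # just above the lower boundary -> accepted
assert not accepts_number(1001)   # above the upper boundary -> rejected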
• SINGLE FAULT ASSUMPTION
Assumes the 'single fault' or "independence of input variables."
– e.g. If there are 2 input variables, these input variables are independent of each other.
• MULTIPLE FAULT ASSUMPTION
Assumes "dependence among the inputs."
VALUE SELECTION IN BVA
• The basic idea of BVA is to use input variable values at their:
- Minimum (min)
- Above minimum (min+)
- Nominal value (nom) (average value)
- Below maximum (max-)
- Maximum (max)
(In the figure, the blue shaded region is the input domain space.)
Test cases for a variable x, where a <= x <= b.
INPUT BOUNDARY VALUE TEST CASES (2 VARIABLES)
X1 and X2 are the input variables; each dot in the figure represents a test value at
which the program is to be tested.
Inputs: 5 <= X1 <= 15, 25 <= X2 <= 35
Test cases:
• <X1nom, X2min> = <10,25>
• <X1nom, X2min+> = <10,26>
• <X1nom, X2nom> = <10,30>
• <X1nom, X2max-> = <10,34>
• <X1nom, X2max> = <10,35>
• <X1min, X2nom> = <5,30>
• <X1min+, X2nom> = <6,30>
• <X1max-, X2nom> = <14,30>
• <X1max, X2nom> = <15,30>
GENERALIZING BVA
• BVA can be generalized in two ways:
– By the number of variables: (4n + 1) test cases for n variables, where
- 4 is the number of boundary values per variable (min, min+, max-, max),
- n is the number of variables, and
- 1 is for the nominal (average value) case.
– By the kinds of ranges of the variables:
- programming language dependent
- logical variables
- bounded (upper or lower bounds clearly defined)
- unbounded (no upper or lower bounds clearly defined)
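A small sketch of how the 4n + 1 rule can be mechanized: hold every variable at its nominal value and sweep one variable at a time through min, min+, max- and max. The ranges below are the X1/X2 example from the earlier slide; the function name is made up.

def bva_test_cases(ranges):
    """ranges: list of (min, max) per variable. Returns the 4n + 1 BVA test cases."""
    nominal = [(lo + hi) // 2 for lo, hi in ranges]
    cases = [tuple(nominal)]                      # the single all-nominal case (the "+ 1")
    for i, (lo, hi) in enumerate(ranges):
        for value in (lo, lo + 1, hi - 1, hi):    # min, min+, max-, max for variable i
            case = list(nominal)
            case[i] = value
            cases.append(tuple(case))
    return cases

cases = bva_test_cases([(5, 15), (25, 35)])       # two variables -> 4*2 + 1 = 9 cases
print(len(cases), cases)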
LIMITATIONS OF BVA
ROBUSTNESS TESTING
• Robustness can also be defined as the ability of an algorithm to continue
operating despite abnormalities in input, calculations, etc.
• Robustness testing is a simple extension of boundary value analysis.
• In addition to the 5 BVA values of each variable, add a value slightly greater
than the maximum (max+) and a value slightly less than the minimum (min-):
– min - 1
– min
– min + 1
– nom
– max - 1
– max
– max + 1
• Number of test cases = 6n + 1, where 6 is the number of boundary values per
variable, n is the number of variables, and 1 is for the nominal (average value) case.
• The main value of robustness testing is to force attention on exception handling.
• In some strongly typed languages, values beyond the predefined range will cause a
run-time error.
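Extending the earlier 4n + 1 sketch with min - 1 and max + 1 gives the 6n + 1 robustness test cases; as before, the function name and ranges are assumptions for illustration.

def robust_test_cases(ranges):
    """ranges: list of (min, max) per variable. Returns the 6n + 1 robustness test cases."""
    nominal = [(lo + hi) // 2 for lo, hi in ranges]
    cases = [tuple(nominal)]
    for i, (lo, hi) in enumerate(ranges):
        # min-1 and max+1 are the two extra, deliberately invalid values
        for value in (lo - 1, lo, lo + 1, hi - 1, hi, hi + 1):
            case = list(nominal)
            case[i] = value
            cases.append(tuple(case))
    return cases

print(len(robust_test_cases([(5, 15), (25, 35)])))   # 6*2 + 1 = 13 test cases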
ROBUSTNESS TEST CASES FOR A FUNCTION OF TWO VARIABLES
In the figure, some dots lie outside the range [a, b] of variable x1; similarly, for
variable x2, some test values cross its legitimate boundary [c, d].
WORST-CASE TESTING
ROBUST WORST-CASE TEST CASES
Robust worst-case testing involves 7^n test cases (the Cartesian product of the seven
robust values of each of the n variables).
EXAMPLES
TRIANGLE PROBLEM BVA TEST CASES
Conditions:
c1: 1 <= a <= 200
c2: 1 <= b <= 200
c3: 1 <= c <= 200
Boundary values for each side: min = 1, min+ = 2, nom = 100, max- = 199, max = 200
TEST CASES FOR THE NEXTDATE FUNCTION
(Boundary values min, min+, nom, max- and max are chosen for each input.)
EQUIVALENCE CLASS TESTING
CONS OF BVA
• Boundary value testing derives test cases with:
– serious gaps
– massive redundancy
• Motivations for equivalence class testing are:
– complete testing
– avoiding redundancy
• It is a black box technique.
• It utilizes a subset of data which is representative of a larger class.
• Equivalence partitioning involves partitioning the input domain of a
software system into several equivalence classes in such a manner
that testing one particular representative from a class is equivalent
to testing any other value from that class.
• Applicability
– Program is a function from input to output
– Input and/or output variables have well defined intervals
DIFFERENCE BETWEEN BVA AND
EQUIVALENCE PARTITIONING
• The idea of equivalence class testing is to identify test
cases by using one element from each equivalence class.
• If the equivalence classes are chosen wisely, the
potential redundancy among test cases can be reduced.
• [ ] denotes a closed interval, which includes its end-points.
• ( ) denotes an open interval, which does not include its end-points.
For example, (0,1) means greater than 0 and less than 1.
TYPES OF EQUIVALENCE CLASS TESTING
1) Weak Normal Equivalence Class Testing
2) Strong Normal Equivalence Class Testing
3) Weak Robust Equivalence Class Testing
4) Strong Robust Equivalence Class Testing
WEAK NORMAL EQUIVALENCE CLASS TESTING
• Weak: single fault assumption (one test case from each class)
• Normal: classes of valid values of the input (identify the classes)
• Choose the test case from each of the equivalence
classes for each input variable independently of
the other input variable
• Test cases have all valid values
• Detects faults due to calculations with valid values
of a single variable
In the figure, each dot indicates a valid value (test data), one from each class.
Example
Assume the equivalence partitioning of input X2 is: 1 to 10; 11 to 20; 21 to 30,
and the equivalence partitioning of input X1 is: 1 to 5; 6 to 10; 11 to 15; 16 to 20.
NUMBER OF TEST CASES = the number of classes in the partition with the largest
number of subsets (here, 4).
For (X2, X1) we have: (24, 2), (15, 8), (4, 13), (23, 17).
These cover every one of the 3 equivalence classes for input X2 and the 4 classes for input X1.
PROBLEM: May miss some combinations of equivalence classes.
STRONG NORMAL EQUIVALENCE CLASS TESTING
• Strong: multiple fault assumption (one test case from each class in the Cartesian product)
• Normal: identify equivalence classes of valid inputs
• Test cases are taken from the Cartesian product of valid values
• The Cartesian product guarantees a notion of completeness
• Detects faults due to interaction of valid values of any number of variables
In the figure, the dots are valid values covering all possibilities: one combination from
each partition.
Example of Strong Normal Equivalence testing
Assume the equivalence partitioning of input X2 is: 1 to 10; 11 to 20; 21 to 30,
and the equivalence partitioning of input X1 is: 1 to 5; 6 to 10; 11 to 15; 16 to 20.
NUMBER OF TEST CASES: based on the Cartesian product of the partition subsets; we
cover every one of the 3 x 4 combinations of equivalence classes.
PROBLEM: Does not test values outside the intervals.
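Strong normal selection simply takes the Cartesian product of the same class representatives, one combination for every pair of classes; a sketch continuing the previous example.

from itertools import product

x1_classes = [(1, 5), (6, 10), (11, 15), (16, 20)]
x2_classes = [(1, 10), (11, 20), (21, 30)]

def representative(cls):
    lo, hi = cls
    return (lo + hi) // 2

# Strong normal: one test case for every element of the Cartesian product of classes
strong_normal = [
    (representative(c1), representative(c2))
    for c1, c2 in product(x1_classes, x2_classes)
]
print(len(strong_normal))   # 4 x 3 = 12 test cases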


WEAK ROBUST EQUIVALENCE CLASS TESTING
• Weak: single fault assumption (one test case from each class)
• Robust: classes of valid and invalid values of the input
• Up to now we have only considered partitioning the valid input space
• "Weak robust" is similar to "weak normal" equivalence class testing,
except that invalid values of the input variables are now also considered
Example of Weak Robust Equivalence testing
Assume the equivalence partitioning of input X2 is: 1 to 10; 11 to 20; 21 to 30,
and the equivalence partitioning of input X1 is: 1 to 5; 6 to 10; 11 to 15; 16 to 20.
The test cases cover every one of the 5 equivalence classes (valid and invalid) for each input.
PROBLEM: Misses some combinations of equivalence classes.


PROBLEMS WITH ROBUST EQUIVALENCE TESTING
a) Very often the specification does not define what the expected output for an
invalid test case should be.
b) Thus, testers spend a lot of time defining expected outputs for these cases.
c) Strongly typed languages like Pascal and Ada eliminate the need for the
consideration of invalid inputs.
STRONG ROBUST EQUIVALENCE CLASS TESTING
• Strong: multiple fault assumption (one test case from each class in the Cartesian product)
• Robust: classes of valid and invalid values of the input
• Obtain the test cases from each element of the Cartesian product of all the
equivalence classes
Example of Strong Robust Equivalence testing
Assume the equivalence partitioning of input X is: 1 to 10; 11 to 20; 21 to 30,
and the equivalence partitioning of input Y is: 1 to 5; 6 to 10; 11 to 15; 16 to 20.
The test cases cover every one of the 5 x 6 combinations in the Cartesian product of
equivalence classes (including the invalid inputs).
Problem: none, but it results in lots of test cases (expensive).
EQUIVALENCE CLASS TEST CASES FOR THE TRIANGLE PROBLEM
• Four possible outputs:
– Not a Triangle
– Scalene Triangle
– Isosceles Triangle
– Equilateral Triangle
• Output (range) equivalence classes:
– input sides <a, b, c> do not form a triangle
– input sides <a, b, c> form an isosceles triangle
– input sides <a, b, c> form a scalene triangle
– input sides <a, b, c> form an equilateral triangle
WEAK NORMAL AND STRONG NORMAL EQUIVALENCE CLASS TEST CASES
Four weak normal equivalence class test cases, chosen arbitrarily from each class.
No subintervals exist for a, b, and c, so the weak normal and strong normal test cases
are identical.
WEAK ROBUST TEST CASES
The invalid values could be:
- zero (0)
- any negative number
- any number > 200
Considering the invalid values for a, b, and c yields additional weak robust
equivalence class test cases.
STRONG ROBUST TEST CASES
TRIANGLE INPUT EQUIVALENCE CLASSES
For example, the triplet <1,4,1> has exactly one pair of equal sides, but those sides
do not form a triangle.
If we want to be even more thorough, we could separate the "greater than or equal to"
condition into two distinct cases; D6 would become:
D6' = {<a,b,c> : a = b + c}
D6'' = {<a,b,c> : a > b + c}
Similarly for D7 and D8.
EQUIVALENCE CLASS TEST CASES FOR THE NEXTDATE FUNCTION
• Valid equivalence classes:
M1 = {month : 1 <= month <= 12}
D1 = {day : 1 <= day <= 31}
Y1 = {year : 1812 <= year <= 2012}
WEAK NORMAL EQUIVALENCE CLASS
WEAK ROBUST TEST CASES
STRONG ROBUST TEST CASES
SPECIAL VALUE TESTING
• Most widely practiced form of functional testing
• Tester uses (to devise test cases):
– his/her domain knowledge,
– experience with similar programs, and
– information about "soft spots" (error-prone areas of the program)
• Dependent on the abilities of the tester!!!
• Test cases involving Feb 28, 29 and leap years are most
likely to be identified as test cases using special value
testing
• Robustness testing and worst case testing do not
identify Feb 28 as a test case
• Also known as Ad-hoc Testing
State Based or Graph Based Testing
• State or graph based testing is very useful in situations
where
1. The product under test is a language processor (for
example, a compiler), wherein the syntax of the language
automatically lends itself to a state machine or a context
free grammar represented by a railroad diagram.
2. Workflow modeling where, depending on the current
state and appropriate combinations of input variables,
specific workflows are carried out, resulting in new
output and new state.
3. Dataflow modeling, where the system is modeled as a
set of dataflow, leading from one state to another.
• Consider an application that is required to validate a
number according to the following simple rules.
1. A number can start with an optional sign.
2. The optional sign can be followed by any number
of digits.
3. The digits can be optionally followed by a decimal
point, represented by a period.
4. If there is a decimal point, then there should be
two digits after the decimal.
5. Any number—whether or not it has a decimal
point, should be terminated by a blank.
An example of a state transition diagram.
Key things:
• State
• Transition
• Event (for example, clicking a button on a website is an event)
• Action (for example, an error message displayed on the login page is an action)
State transition table
• The above state transition table can be used to
derive test cases to test valid and invalid numbers.
Valid test cases can be generated by:
1. Start from the Start State (State #1 in the example).
2. Choose a path that leads to the next state (for
example, +/-/digit to go from State 1 to State 2).
3. If you encounter an invalid input in a given state
(for example, encountering an alphabetic character in
State 2), generate an error condition test case.
4. Repeat the process till you reach the final state
(State 6 in this example).
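A hedged sketch of this number-validation state machine in code; the state numbers and transitions below are an assumption reconstructed from the rules above (optional sign, digits, a decimal point followed by exactly two digits, terminated by a blank).

# state -> {input category: next state}; state 6 is the accepting (final) state
TRANSITIONS = {
    1: {"sign": 2, "digit": 2},
    2: {"digit": 2, "dot": 3, "blank": 6},
    3: {"digit": 4},
    4: {"digit": 5},
    5: {"blank": 6},
}

def category(ch):
    if ch in "+-":
        return "sign"
    if ch.isdigit():
        return "digit"
    return {".": "dot", " ": "blank"}.get(ch, "other")

def is_valid_number(text):
    state = 1
    for ch in text:
        state = TRANSITIONS.get(state, {}).get(category(ch))
        if state is None:
            return False           # invalid input for the current state -> error condition test case
    return state == 6

assert is_valid_number("+120.34 ")     # valid path through every state
assert is_valid_number("42 ")          # no decimal part, terminated by a blank
assert not is_valid_number("42.5 ")    # only one digit after the decimal point
assert not is_valid_number("4a ")      # alphabetic character in state 2 -> error condition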
• Graph based testing methods are applicable to generate
test cases for state machines such as language
translators, workflows, transaction flows, and data flows.
A general outline for using state based testing methods
with respect to language processors is:
1. Identify the grammar for the scenario. In the above
example, we have represented the diagram as a state
machine. In some cases, the scenario can be a context-free
grammar, which may require a more sophisticated
representation of a “state diagram.”
2. Design test cases corresponding to each valid state-input
combination.
3. Design test cases corresponding to the most common
invalid combinations of state-input.
A second situation where graph based testing is useful is to
represent a transaction or workflows. Consider a simple example of
a leave application by an employee. A typical leave application
process can be visualized as being made up of the following steps.
1. The employee fills up a leave application, giving his or her
employee ID, and start date and end date of leave required.
2. This information then goes to an automated system which
validates that the employee is eligible for the requisite number of
days of leave. If not, the application is rejected; if the eligibility
exists, then the control flow passes on to the next step below.
3. This information goes to the employee's manager who validates
that it is okay for the employee to go on leave during that time (for
example, there are no critical deadlines coming up during the
period of the requested leave).
4. Having satisfied himself/herself with the feasibility of leave, the
manager gives the final approval or rejection of the leave application.
State graph to represent workflow.
Graph based testing such as in this example will be
applicable when
1. The application can be characterized by a set of
states.
2. The data values (screens, mouse clicks, and so on)
that cause the transition from one state to another
are well understood.
3. The methods of processing within each state to
process the input received are also well understood.
Key things:
• State
• Transition
• Event (for example, clicking a button on a website is an event)
• Action (for example, an error message displayed on the login page is an action)
Exercises:
• Draw a state transition diagram for the ATM lab program
• Draw a state transition diagram for the triangle problem
• Draw a state transition diagram for the NextDate function
• Model based or specification based testing:
Model-based testing is a testing approach where test cases are
automatically generated from models.
Model based or specification based testing occurs when the
requirements are specified formally, for example using one or
more mathematical or graphical notations such as state charts
or event sequence graphs.
• Examples of the model are:
• Data Flow
• Control Flow
• Dependency Graphs
• Decision Tables
• State transition machines
• Model-based testing describes how a system behaves in
response to an action (determined by a model): supply an
action and see if the system responds as per the expectation.
• It is a lightweight formal method to validate a system.
This testing can be applied to both hardware and
software testing.
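A tiny sketch of the idea: a model (here a state transition table for a hypothetical on/off switch, both names made up) is used to generate action sequences, each of which is replayed against the implementation and the responses compared with what the model predicts.

from itertools import product

# The model: expected behavior of a simple on/off switch
MODEL = {("off", "press"): "on", ("on", "press"): "off",
         ("off", "hold"): "off", ("on", "hold"): "off"}

class SwitchUnderTest:
    """Hypothetical implementation being validated against the model."""
    def __init__(self):
        self.state = "off"
    def apply(self, action):
        if action == "press":
            self.state = "on" if self.state == "off" else "off"
        elif action == "hold":
            self.state = "off"
        return self.state

# Generate test sequences from the model (all action pairs) and check conformance
for sequence in product(["press", "hold"], repeat=2):
    sut, expected = SwitchUnderTest(), "off"
    for action in sequence:
        expected = MODEL[(expected, action)]
        actual = sut.apply(action)
        assert actual == expected, f"{sequence}: expected {expected}, got {actual}"
print("implementation conforms to the model for all generated sequences")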
Model based testing and model checking
• Model based testing refers to the acts of modeling and the
generation of tests from a formal model of application behavior.
• Model checking refers to a class of techniques that allow the
validation of one or more properties from a given model of an
application.
• A model, usually finite state, is extracted from some source. The source could be
the requirements or, in some cases, the application code itself.
• One or more desired properties are then coded in a formal specification
language. Often, such properties are coded in temporal logic, a language for
formally specifying timing properties. The model and the desired properties are
then input to a model checker. The model checker attempts to verify whether
the given properties are satisfied by the given model.
• For each property, the checker could come up with one of three possible answers:
o the property is satisfied,
o the property is not satisfied, or
o the checker is unable to determine whether the property holds.
• In the second case, the model checker provides a counter example showing why
the property is not satisfied.
• The third case might arise when the model checker is unable to terminate after
an upper limit on the number of iterations has been reached.
• While both model checking and model based testing use models, model checking uses
finite state models augmented with local properties that must hold at individual
states. The local properties are known as atomic propositions and the augmented
models as Kripke structures.
Advantages of Model-Based Testing:
The following are benefits of MBT:
• Easy test case/suite maintenance
• Reduction in cost
• Improved test coverage
• Can run different tests on any number of machines
• Early defect detection
• Increase in defect count
• Time savings
• Improved tester job satisfaction
DIFFERENCES BETWEEN BLACK BOX AND WHITE BOX TESTING
1. Black box testing is the software testing method used to test the software without
knowing the internal structure of the code or program. White box testing is the software
testing method in which the internal structure is known to the tester who tests the software.
2. Black box testing is carried out by testers. White box testing is generally carried out by
software developers.
3. Implementation knowledge is not required to carry out black box testing. Implementation
knowledge is required to carry out white box testing.
4. Programming knowledge is not required to carry out black box testing. Programming
knowledge is required to carry out white box testing.
5. Black box testing is applicable to higher levels of testing like system testing and
acceptance testing. White box testing is applicable to lower levels of testing like unit
testing and integration testing.
6. Black box testing means functional or external testing. White box testing means
structural or interior testing.
7. Black box testing primarily concentrates on the functionality of the system under test.
White box testing primarily concentrates on testing the program code of the system under
test, such as code structure, branches, conditions, and loops.
8. The main aim of black box testing is to check what functionality the system under test
performs. The main aim of white box testing is to check how the system performs it.
9. Black box testing can be started based on the requirement specification documents.
White box testing can be started based on detailed design documents.
10. Functional testing, behavior testing, and closed box testing are carried out under black
box testing, so programming knowledge is not required. Structural testing, logic testing,
path testing, loop testing, code coverage testing, and open box testing are carried out
under white box testing, so programming knowledge is compulsory.
Activity for Software Testing
What You Will Learn:
• Exercise #1: Finding Defects
• Exercise #2: Writing Test Scenarios
• Exercise #3: Defect Reporting
• Exercise #4: Providing Suggestions
1. AIRLINES SYSTEM
2. DEFENSE
3. HEALTH CARE
4. RESERVATION
5. WEB APPLICATION
6. MOBILE APPLICATION
7. AUTOMOBILE
8. MECHANICAL
9. ELECTRICAL AND ELECTRONICS
10. BIOTECHNOLOGY
11. AERONAUTICAL
12. CIVIL
13.SATELLITE
14.FORENSIC
15. AGRICULTURE
• Following are the fundamental steps involved in
testing an application:
• Create a test plan according to the application
requirements
• Develop manual test case scenarios from the
end user's perspective
• Automate the test scenarios using scripts
• Perform functional tests and validate if
everything works according to requirements
