
Software Testing Techniques and Strategies

Fundamentals of Testing
Testing is a critical element of SQA. Reviews and other SQA activities can uncover errors, but they are not sufficient.

Testing Objective

Testing is the process of executing a program with the intent of finding an error.

Who tests the software better?

Developer: understands the system, but will test it gently, and is driven by delivery.

Independent tester: must learn about the system, but will attempt to break it, and is driven by quality.

Testing Principles

All tests should be traceable to customer requirements.
Tests should be planned long before testing begins.
Testing should begin "in the small" and progress toward testing "in the large".
To be most effective, testing should be conducted by an independent third party.

Testability
Testability is a measure of how easily a software program can be tested.

Characteristics of testable software

Operability: the better it works, the more efficiently it can be tested.
Observability: what you see is what you test.
Controllability: the better we can control the software, the more the testing can be automated and optimized.
Decomposability: by controlling the scope of testing, we can more quickly isolate problems and perform smarter testing.
Simplicity: the less there is to test, the more quickly we can test it.
Stability: the fewer the changes, the fewer the disruptions to testing.
Understandability: the more information we have, the smarter we will test.

Kaner, Falk, and Nguyen: attributes of a good test

A good test has a high probability of finding an error.
A good test is not redundant.
A good test should be "best of breed".
A good test should be neither too simple nor too complex.

Testing cannot show the absence of defects; it can only show that defects are present.

A software configuration includes a Software Requirements Specification, a Design Specification, and source code.

A test configuration includes a Test Plan and Procedures, test cases, and testing tools.

Test Information Flow

Test Case Design

Test case design can be as difficult as the initial design, and testing cannot prove correctness because not all execution paths can be tested. Testing an engineered product can be based on:
knowing the specified function that the product has been designed to perform (black box testing);
knowing the internal workings of the product (white box testing).

A program with the structure illustrated above (fewer than 100 lines of Pascal code) has about 10^14 (100,000,000,000,000) possible paths. If we attempted to test them at a rate of 1,000 tests per second, it would take 3,170 years to test all paths.

White Box Testing (Glass Box Testing)



Uses the control structure of a procedural design to derive test cases. Test cases can be derived to ensure that:
all independent paths are exercised at least once;
all logical decisions are exercised for both true and false paths;
all loops are executed at their boundaries and within operational bounds;
all internal data structures are exercised to ensure validity.

1. Basis Path Testing
2. Control Structure Testing

Two White Box Testing Techniques

Basis Path Testing


A testing mechanism proposed by Tom McCabe. The aim is to derive a logical complexity measure of a procedural design and use it as a guide for defining a basis set of execution paths. Test cases that exercise the basis set will execute every statement at least once.

Flow Graph Notation
Cyclomatic Complexity
Deriving Test Cases
Graph Matrices

Flow Graph Notation

A notation for representing control flow:
Arrows, called edges, represent flow of control.
Circles, called nodes, represent one or more actions.
Areas bounded by edges and nodes are called regions.
A predicate node is a node containing a condition.

[Figure: a flow chart and its corresponding flow graph. Sequences of statements (e.g., 2,3 and 4,5) collapse into single flow-graph nodes.]

Cyclomatic Complexity

Cyclomatic complexity gives a quantitative measure of a program's logical complexity. This value is the number of independent paths in the basis set, and an upper bound on the number of tests needed to ensure that each statement is executed at least once. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition (i.e., a new edge).

Cyclomatic complexity V(G) can be calculated as:
1. the number of regions of the flow graph;
2. #Edges - #Nodes + 2;
3. #Predicate Nodes + 1.
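As an aside (not from the slides), a minimal C sketch can confirm that the three formulas agree for a hypothetical structured flow graph with 7 nodes, 8 edges, and 2 predicate nodes:

#include <stdio.h>

int main(void) {
    /* counts for an assumed flow graph, chosen only for illustration */
    int edges = 8, nodes = 7, predicates = 2, regions = 3;

    int v_regions    = regions;            /* 1. number of regions   */
    int v_edges      = edges - nodes + 2;  /* 2. E - N + 2           */
    int v_predicates = predicates + 1;     /* 3. predicate nodes + 1 */

    printf("V(G) = %d = %d = %d\n", v_regions, v_edges, v_predicates);
    return 0;
}

All three expressions print 3 for this graph; for a structured flow graph the three formulas coincide.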


Independent paths:
a) 1, 8
b) 1, 2, 3, 7b, 1, 8
c) 1, 2, 4, 5, 7a, 7b, 1, 8
d) 1, 2, 4, 6, 7a, 7b, 1, 8

Deriving Test Cases


Using the design or code, draw the corresponding flow graph.
Determine the cyclomatic complexity of the flow graph.
Determine a basis set of independent paths.
Prepare test cases that will force execution of each path in the basis set.


Documenting test cases


Name
Number
Values for the inputs
Expected outputs
Short description (if needed)
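As a small illustration (not part of the slides), these fields map naturally onto a record that an automated test driver could consume; the struct name and example rows are invented, with the GCD values taken from the example quoted later in these notes:

#include <stdio.h>

struct test_case {
    const char *name;        /* Name                          */
    int number;              /* Number                        */
    int input_a, input_b;    /* Values for the inputs         */
    int expected;            /* Expected output               */
    const char *description; /* Short description (if needed) */
};

int main(void) {
    struct test_case cases[] = {
        {"both positive", 1, 45, 27,  9, "typical case"},
        {"zero operand",  2, 13,  0, 13, "GCD(a, 0) = a"},
    };
    for (int i = 0; i < 2; i++)
        printf("%d %-13s inputs (%d, %d) expect %d : %s\n",
               cases[i].number, cases[i].name,
               cases[i].input_a, cases[i].input_b,
               cases[i].expected, cases[i].description);
    return 0;
}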


Graph Matrices

A graph matrix is a square matrix whose number of rows and columns equals the number of nodes in the flow graph. Rows and columns correspond to the nodes, and entries correspond to the edges. A number can be associated with each edge entry. Using a value of 1 for each entry, the cyclomatic complexity can be calculated:
For each row, sum the column values and subtract 1.
Sum these totals and add 1.


[Figure: a flow graph with nodes 1-5 and edges a-g, and its graph matrix. Row connection counts: 1-1 = 0, 2-1 = 1, 2-1 = 1, 2-1 = 1; total 3; 3 + 1 = 4 = cyclomatic complexity.]
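A minimal sketch, assuming a made-up 5-node connection matrix (not the graph in the figure), of how the row-sum rule yields V(G); rows with fewer than two outgoing edges contribute nothing:

#include <stdio.h>

#define NODES 5

int main(void) {
    /* conn[i][j] = 1 if there is an edge from node i+1 to node j+1
       (an assumed example graph, not the one from the slides) */
    int conn[NODES][NODES] = {
        {0, 0, 0, 1, 0},   /* node 1: 1 edge  -> contributes 0 */
        {0, 0, 0, 0, 1},   /* node 2: 1 edge  -> contributes 0 */
        {1, 0, 0, 0, 1},   /* node 3: 2 edges -> contributes 1 */
        {0, 1, 1, 0, 0},   /* node 4: 2 edges -> contributes 1 */
        {0, 0, 0, 0, 0}    /* node 5: exit    -> contributes 0 */
    };

    int total = 0;
    for (int i = 0; i < NODES; i++) {
        int row_sum = 0;
        for (int j = 0; j < NODES; j++)
            row_sum += conn[i][j];
        if (row_sum > 1)          /* only branching rows add to V(G) */
            total += row_sum - 1;
    }

    /* prints 3; the same graph gives E - N + 2 = 6 - 5 + 2 = 3 */
    printf("Cyclomatic complexity V(G) = %d\n", total + 1);
    return 0;
}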

Control Structure Testing


Basis path testing is one example of control structure testing.

1) Condition Testing
Condition testing aims to exercise all logical conditions in a program module. We can define:
Relational expression: (E1 op E2), where E1 and E2 are arithmetic expressions.
Simple condition: a Boolean variable or a relational expression, possibly preceded by a NOT operator.
Compound condition: composed of two or more simple conditions, Boolean operators, and parentheses.
Boolean expression: a condition without relational expressions.

Causes of errors in expressions:
Boolean operator error
Boolean variable error
Boolean parenthesis error
Relational operator error
Arithmetic expression error

Condition testing methods focus on testing each condition in the program.
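To make this concrete, here is a hedged sketch (the eligible() function and its thresholds are invented for illustration) of one simple way to drive each simple condition inside a compound condition both true and false:

#include <assert.h>
#include <stdbool.h>

/* module under test: eligible when age is in range AND the member flag is set */
static bool eligible(int age, bool member) {
    return (age >= 18 && age <= 65) && member;   /* compound condition */
}

int main(void) {
    assert(eligible(30, true)  == true);    /* all simple conditions true */
    assert(eligible(30, false) == false);   /* member is false            */
    assert(eligible(17, true)  == false);   /* age >= 18 is false         */
    assert(eligible(70, true)  == false);   /* age <= 65 is false         */
    return 0;
}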


2) Data Flow Testing
Selects test paths according to the locations of definitions and uses of variables in the program.

Data flow testing identifies paths in the program that go from the assignment of a value to a variable (its definition) to a use of that variable, to make sure the variable is used properly. The DU (Definition-Use) testing strategy requires that each DU chain be covered at least once.


[1]  void k() {
[2]    x = 11;
[3]    if (p(cond1)) {
[4]      y = x + 1;
[5]    } else if (q(cond2)) {
[6]      w = x + 3;
[7]    } else {
[8]      w = y + 1;
[9]    }
[10] }

Considering x, there are two DU paths: (a) [2]-[4] (b) [2]-[6]

We therefore need to derive test cases that match the following conditions:
(a) k() is executed and p(cond1) is true
(b) k() is executed and p(cond1) is false and q(cond2) is true
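A hedged, self-contained sketch of those two test cases: cond1 and cond2 are passed in as explicit parameters standing in for whatever p() and q() inspect, and du_path_taken is an added probe so the test can observe which definition-use pair actually ran:

#include <assert.h>
#include <stdbool.h>

static int du_path_taken;          /* records which DU pair for x was exercised */

static void k(bool cond1, bool cond2) {
    int x = 11, y = 0, w = 0;      /* [2] definition of x */
    if (cond1) {
        y = x + 1;                 /* [4] use of x: DU path [2]-[4] */
        du_path_taken = 1;
    } else if (cond2) {
        w = x + 3;                 /* [6] use of x: DU path [2]-[6] */
        du_path_taken = 2;
    } else {
        w = y + 1;                 /* no use of x on this path */
        du_path_taken = 3;
    }
    (void)y; (void)w;
}

int main(void) {
    k(true,  false); assert(du_path_taken == 1);   /* covers DU path [2]-[4] */
    k(false, true);  assert(du_path_taken == 2);   /* covers DU path [2]-[6] */
    return 0;
}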

3) Loop Testing
Loops are fundamental to many algorithms and a cornerstone of almost every program. Loops can be classified as simple, concatenated, nested, or unstructured. Loops can lead to non-terminating programs, and errors in loop indexes are easy to make.

Loop testing is a white box testing technique that focuses exclusively on the validity of loop constructs.


To test simple loops, where n is the maximum number of allowable passes:

Skip the loop entirely.
Only one pass through the loop.
Two passes through the loop.
m passes through the loop, where m < n.
(n-1), n, and (n+1) passes through the loop.
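As an illustration (the sum_first() function and its MAX capacity are assumptions, not from the slides), a driver that exercises a simple loop at exactly those pass counts might look like this:

#include <assert.h>

#define MAX 10   /* maximum number of allowable passes, i.e. n */

static int sum_first(const int *buf, int n) {
    int total = 0;
    for (int i = 0; i < n && i < MAX; i++)   /* simple loop under test */
        total += buf[i];
    return total;
}

int main(void) {
    int buf[MAX + 1];
    for (int i = 0; i <= MAX; i++) buf[i] = 1;

    /* 0, 1, 2, a typical m < n, n-1, n, n+1 passes */
    int passes[] = {0, 1, 2, 5, MAX - 1, MAX, MAX + 1};
    for (int t = 0; t < 7; t++) {
        int n = passes[t];
        int expected = (n <= MAX) ? n : MAX;  /* loop is capped at MAX passes */
        assert(sum_first(buf, n) == expected);
    }
    return 0;
}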

To test nested loops:

Start with the innermost loop; set all other loops to their minimum values.
Conduct simple loop testing on the inner loop.
Work outwards.
Continue until all loops have been tested.

To test concatenated loops:

1. If the loops are independent, use simple loop testing.
2. If they are dependent, treat them as nested loops.

Example (the loops are dependent, because the second loop's starting value of k depends on where the first loop stopped):

int k;
for (k = 0; k < 10; k++) {
    w();
    if (p(m)) break;
}
for (; k < 10; k++) {
    r();
}

To test Unstructured loops


Don't test - redesign.


Black Box Testing


Focuses on functional requirements; complements white box testing.

Categories of errors found by black box testing:
incorrect or missing functions
interface errors
errors in data structures or external database access
performance errors
initialization and termination errors


Issues in black box testing:

How is functional validity tested?
What classes of input will make good test cases?
Is the system particularly sensitive to certain input values?
How are the boundaries of a data class isolated?
What data rates and data volumes can the system tolerate?
What effect will specific combinations of data have on system operation?


Graph-Based Testing
Complex systems are hard to understand, but they can be modeled as collections of interacting objects. Tests can then be performed to ensure that all the required relationships among the objects are in place.
[Figure: graph-based testing example. A 'new file' menu selection generates a document window (generation time < 1.0 sec). The document window is linked to the document text by the relationships 'is represented as', 'allows editing of', and 'contains'. Window attributes: start dimension (default setting or preferences), background color (white), text color (default color or preferences).]

Kinds of Valid data


user-supplied commands
responses to system prompts
file names
computational data
physical parameters
bounding values
initiation values
output data formatting commands
responses to error messages
graphical data (e.g., mouse picks)


Kinds of Invalid data


data outside the bounds of the program
physically impossible data
a proper value supplied in the wrong place


Boundary Value Analysis (BVA)

[Diagram: the input domain of a system (user queries, mouse picks on menus, numerical data, command key input, responses to prompts, output format requests) and its output domain; boundary value analysis selects test cases at the edges of these domains.]

A large number of errors tends to occur at the boundaries of the input domain. BVA leads to the selection of test cases that exercise boundary values. BVA complements equivalence partitioning: rather than selecting any element of an equivalence class, select elements at the "edge" of the class.


Examples:
1. For a range of values bounded by a and b, test (a-1), a, (a+1), (b-1), b, and (b+1).
2. If input conditions specify a number of values n, test with (n-1), n, and (n+1) input values.
3. Apply guidelines 1 and 2 to output conditions (e.g., generate tables of minimum and maximum size).
4. If internal program data structures have prescribed boundaries (e.g., buffer size, table limits), use input data to exercise the structures at their boundaries.
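A small sketch of rule 1, assuming a hypothetical field that must accept values in the range [a, b] = [1, 100] (the in_range() validator is invented for illustration):

#include <assert.h>
#include <stdbool.h>

static bool in_range(int value) {        /* unit under test */
    return value >= 1 && value <= 100;
}

int main(void) {
    assert(in_range(0)   == false);      /* a - 1: just below the range */
    assert(in_range(1)   == true);       /* a                           */
    assert(in_range(2)   == true);       /* a + 1                       */
    assert(in_range(99)  == true);       /* b - 1                       */
    assert(in_range(100) == true);       /* b                           */
    assert(in_range(101) == false);      /* b + 1: just above the range */
    return 0;
}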


Comparison Testing
In some applications reliability is critical, and redundant hardware and software may be used. For redundant software, use separate teams to develop independent versions of the software. Test each version with the same test data to ensure that all provide identical output. Run all versions in parallel with a real-time comparison of results.


Even if only one version will run in the final system, for some critical applications we can develop independent versions and use comparison (back-to-back) testing. When the outputs of the versions differ, each is investigated to determine whether there is a defect. The method does not catch errors in the specification.


Equivalence Partitioning

Partitioning is based on input conditions

[Diagram: the same input domain (user queries, mouse picks on menus, numerical data, command key input, responses to prompts, output format requests) partitioned into equivalence classes.]

Divide the input domain into classes of data from which test cases can be derived. The aim is to uncover whole classes of errors by defining equivalence classes for the input conditions. An equivalence class represents a set of valid or invalid input states. An input condition is typically a specific numeric value, a range of values, a set of related values, or a Boolean condition.


Equivalence classes can be defined as follows:
If an input condition specifies a range or a specific value, one valid and two invalid equivalence classes are defined.
If an input condition specifies a Boolean or a member of a set, one valid and one invalid equivalence class are defined.
In equivalence partitioning, test cases are developed and executed for each input-domain data item.
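For example, here is a hedged sketch of the set case (the command set echoes the ATM example that follows; the valid_command() function is invented for illustration), giving one valid and one invalid class, each represented by a single test value:

#include <assert.h>
#include <stdbool.h>
#include <string.h>

static bool valid_command(const char *cmd) {     /* unit under test */
    const char *set[] = {"check", "deposit", "bill pay", "transfer"};
    for (int i = 0; i < 4; i++)
        if (strcmp(cmd, set[i]) == 0) return true;
    return false;
}

int main(void) {
    assert(valid_command("deposit")  == true);    /* valid class: member of the set */
    assert(valid_command("withdraw") == false);   /* invalid class: not in the set  */
    return 0;
}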


[Diagram: equivalence partitioning separates the valid and invalid inputs to the system under test, which produces the outputs.]

Example: ATM

Consider the data maintained for an ATM banking application:
The user should be able to access the bank using a PC and a modem (by dialing in).
The user should provide a six-digit password.
The user needs to follow a set of typed commands.

Phone number data format: xxx-xxx-xxxx. The software accepts:
Area code: blank or a three-digit number
Prefix: a three-digit number not beginning with 0 or 1
Suffix: a four-digit number
Password: a six-digit alphanumeric value
Command: {check, deposit, bill pay, transfer, etc.}

Input conditions for the ATM

Area code:
Boolean: the area code may or may not be present
Range: values defined between 200 and 999
Specific value: no value > 905

Prefix:
Range: specified value > 200

Suffix:
Value: four-digit length

Password:
Boolean: the password may or may not be present
Value: six-character string

Command:
Set: containing the commands noted previously
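A hedged sketch of equivalence-class test values derived from two of these conditions; the validator functions valid_prefix() and valid_password() are invented for illustration and are not part of the slides:

#include <assert.h>
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* Prefix: three-digit number not beginning with 0 or 1 */
static bool valid_prefix(int prefix) {
    return prefix >= 200 && prefix <= 999;
}

/* Password: six-character alphanumeric value */
static bool valid_password(const char *pw) {
    if (strlen(pw) != 6) return false;
    for (int i = 0; i < 6; i++)
        if (!isalnum((unsigned char)pw[i])) return false;
    return true;
}

int main(void) {
    assert(valid_prefix(555));           /* valid class: 200..999                */
    assert(!valid_prefix(123));          /* invalid class: begins with 0 or 1    */
    assert(!valid_prefix(1234));         /* invalid class: not three digits      */

    assert(valid_password("ab12cd"));    /* valid class: six alphanumerics       */
    assert(!valid_password("ab1"));      /* invalid class: too short             */
    assert(!valid_password("ab12c!"));   /* invalid class: non-alphanumeric char */
    return 0;
}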

GCD Test Planning (1)

Let's look at an example of testing a unit designed to compute the greatest common divisor (GCD) of a pair of integers (not both zero).

GCD(a, b) = c where:
c is a positive integer;
c is a common divisor of a and b (i.e., c divides a and c divides b);
c is greater than all other common divisors of a and b.

For example:
GCD(45, 27) = 9
GCD(7, 13) = 1
GCD(-12, 15) = 3
GCD(13, 0) = 13
GCD(0, 0) is undefined

GCD Test Planning (2)


How do we proceed to determine the test cases?
1. Design an algorithm for the GCD function.
2. Analyze the algorithm using basis path analysis.
3. Determine appropriate equivalence classes for the input data.
4. Determine the boundaries of the equivalence classes.
5. Then choose test cases that include the basis path set, data from each equivalence class, and data at and near the boundaries.

GCD Algorithm
Note: based on Euclid's algorithm.

1.  function gcd (int a, int b) {
2.    int temp, value;
3.    a := abs(a);
4.    b := abs(b);
5.    if (a = 0) then
6.      value := b;      // b is the GCD
7.    else if (b = 0) then
8.      raise exception;
9.    else
10.     loop
11.       temp := b;
12.       b := a mod b;
13.       a := temp;
14.     until (b = 0)
15.     value := a;
16.   end if;
17.   return value;
18. end gcd

[Figure: flow graph of the GCD algorithm, with nodes labeled by statement numbers 1, 5, 6, 7, 9, 10, 17, and 18.]

GCD Test Planning (3)


Basis Path Set
V(G) = 4
(1,5,6,17,18), (1,5,7,18), (1,5,7,9,10,17,18), (1,5,7,9,10,9,10,17,18)

Equivalence Classes
Although the GCD algorithm should accept any integers as input, one could consider 0, positive integers, and negative integers as special values. This yields the following classes:
a < 0 and b < 0,  a < 0 and b > 0,  a > 0 and b < 0,
a > 0 and b > 0,  a = 0 and b < 0,  a = 0 and b > 0,
a < 0 and b = 0,  a > 0 and b = 0,  a = 0 and b = 0

Boundary Values
a = -2^31, -1, 0, 1, 2^31 - 1 and b = -2^31, -1, 0, 1, 2^31 - 1

GCD Test Plan


Test Description                Data          Expected Results
Basis Path Set
  path (1,5,6,17,18)            (0, 15)       15
  path (1,5,7,18)               (15, 0)       15
  path (1,5,7,9,10,17,18)       (30, 15)      15
  path (1,5,7,9,10,9,10,17,18)  (15, 30)      15
Equivalence Classes
  a < 0 and b < 0               (-27, -45)    9
  a < 0 and b > 0               (-72, 100)    4
  a > 0 and b < 0               (121, -45)    1
  a > 0 and b > 0               (420, 252)    84
  a = 0 and b < 0               (0, -45)      45
  a = 0 and b > 0               (0, 45)       45
  a < 0 and b = 0               (-27, 0)      27
  a > 0 and b = 0               (27, 0)       27
  a = 0 and b = 0               (0, 0)        exception raised
Boundary Points
                                (1, 0)        1
                                (-1, 0)       1
                                (0, 1)        1
                                (0, -1)       1
                                (0, 0)        (redundant) exception raised
                                (1, 1)        1
                                (1, -1)       1
                                (-1, 1)       1
                                (-1, -1)      1
Test Experience / Actual Results: (recorded when the tests are run)

Anything missing?

Test Implementation (1)


Once one has determined the testing strategy and the units to be tested, and completed the unit test plans, the next concern is how to carry out the tests.

If you are testing a single, simple unit that does not interact with other units (like the GCD unit), then you can write a program that runs the test cases in the test plan. However, if you are testing a unit that must interact with other units, it can be difficult to test it in isolation. The next slide defines some terms that are used in implementing and running test plans.

Test Implementation Terms


Test Driver: a class or utility program that applies test cases to a component being tested.
Test Stub: a temporary, minimal implementation of a component, used to increase controllability and observability in testing. When testing a unit that references another unit, the referenced unit must either be complete (and tested) or stubs must be created that can be used when executing a test case that references it.
Test Harness: a system of test drivers, stubs, and other tools that supports test execution.

Test Implementation (2)


Here is a suggested sequence of steps to follow when testing a unit:
1. Once the design for the unit is complete, carry out a static test of the unit. This might be a simple desk check, or it may involve a more extensive symbolic execution or mathematical analysis.
2. Complete a test plan for the unit.
3. If the unit references other units that are not yet complete, create stubs for those units.
4. Create a driver (or set of drivers) for the unit, which includes the following:
   construction of test case data (from the test plan);
   execution of the unit, using the test case data;
   provision for the results of the test case execution to be printed or logged as appropriate.
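A hedged C sketch tying these terms together: ship_order() is an invented unit under test, inventory_count() is stubbed so the test can control what it returns, and main() acts as the driver that builds test data, executes the unit, and logs the results (none of these names come from the slides):

#include <stdio.h>
#include <stdbool.h>

/* Unit that ship_order depends on, replaced here by a stub so the test
   can set its return value (controllability). */
static int stub_inventory = 0;
static int inventory_count(int item_id) {
    (void)item_id;
    return stub_inventory;          /* minimal, canned implementation */
}

/* Unit under test: an order ships only if enough stock is on hand. */
static bool ship_order(int item_id, int quantity) {
    return inventory_count(item_id) >= quantity;
}

/* Test driver: constructs test case data, executes the unit, logs results. */
int main(void) {
    struct { int stock, quantity; bool expected; } cases[] = {
        {10, 5, true},   /* plenty of stock          */
        { 5, 5, true},   /* boundary: exactly enough */
        { 4, 5, false},  /* boundary: one short      */
    };

    for (int i = 0; i < 3; i++) {
        stub_inventory = cases[i].stock;                 /* set up the stub  */
        bool actual = ship_order(42, cases[i].quantity); /* execute the unit */
        printf("case %d: expected %d, actual %d -> %s\n", i,
               cases[i].expected, actual,
               actual == cases[i].expected ? "pass" : "fail");
    }
    return 0;
}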

What is this? A failure? An error? A fault? We need to specify the desired behavior first!

Related notions: an erroneous state (error), an algorithmic fault, a mechanical fault.

How do we deal with errors and faults? Verification? Modular redundancy? Declaring the bug as a feature? Patching? Testing?
