
Functional Testing, Part 1

Boundary Value Testing and Equivalence Class Testing

Agenda

- Boundary Value Testing
  - Boundary Value Analysis
  - Generalizing Boundary Value Analysis
  - Limitations of Boundary Value Analysis
  - Robustness Testing
  - Worst Case Testing
  - Special Value Testing
  - Examples
    - Test Cases for the Triangle Problem
    - Test Cases for the NextDate Problem
    - Test Cases for the Commission Problem
  - Guidelines for Boundary Value Testing
- Equivalence Classes
- Equivalence Class Testing
  - Weak Equivalence Class Testing
  - Strong Equivalence Class Testing
  - Traditional Equivalence Class Testing
  - Equivalence Class Test Cases for the Triangle Problem
  - Equivalence Class Test Cases for the NextDate Function
  - Equivalence Class Test Cases for the Commission Problem
  - Guidelines and Observations
Boundary Value Analysis

- Consider a function F(x1, x2) with a <= x1 <= b and c <= x2 <= d.
- Boundary value analysis focuses on the boundary of the input space to identify test cases.
- The rationale behind boundary value testing is that errors tend to occur near the extreme values of an input variable, e.g. in loop conditions (< written instead of <=) and counters.

Boundary Value Analysis

- Basic idea: use input variable values at their minimum (min), just above the minimum (min+), at a nominal value (nom), just below the maximum (max-), and at the maximum (max).
- Single fault assumption (from reliability theory): failures are only rarely the result of the simultaneous occurrence of two (or more) faults.
- The boundary value analysis test cases are obtained by holding the values of all but one variable at their nominal values, and letting that one variable assume its extreme values.
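As a concrete illustration, here is a minimal Python sketch (not from the original slides) that generates boundary value analysis test cases for variables given as inclusive (min, max) ranges; taking the midpoint as the nominal value is an assumption of the sketch:

    # Boundary value analysis: vary one variable at a time over its five
    # boundary values while all other variables stay at their nominal values.
    def five_values(lo, hi):
        nom = (lo + hi) // 2
        return [lo, lo + 1, nom, hi - 1, hi]  # min, min+, nom, max-, max

    def bva_test_cases(ranges):
        noms = [(lo + hi) // 2 for lo, hi in ranges]
        cases = {tuple(noms)}                 # the all-nominal test case
        for i, (lo, hi) in enumerate(ranges):
            for v in five_values(lo, hi):
                case = list(noms)
                case[i] = v
                cases.add(tuple(case))
        return sorted(cases)

    # Triangle problem: 1 <= a, b, c <= 200 gives 4*3 + 1 = 13 cases.
    print(len(bva_test_cases([(1, 200)] * 3)))  # 13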

Boundary Value Analysis

[Figure: boundary value analysis test cases for a function F(x1, x2) of two variables.]

Generalizing Boundary Value Analysis

- By the number of variables:
  - for n variables, 4n + 1 test cases
- By the kinds of ranges, which depend on the type (nature) of the variables:
  - variables with discrete, bounded values (e.g. the NextDate function, the commission problem)
  - variables with no explicit bounds: create artificial bounds (e.g. the triangle problem)
  - Boolean variables: decision table-based testing (e.g. PIN and transaction type in the SATM system)
  - logical variables (bound to a value or to another logical variable)

Limitations of Boundary Value Analysis

- Boundary value analysis works well when the program to be tested is a function of several independent variables that represent bounded physical quantities
  - e.g. variables that refer to physical quantities such as temperature, air speed, or load
  - e.g. the NextDate test cases are inadequate: there is little stress on February, and the dependencies among month, day, and year are ignored


Robustness Testing

- A simple extension of boundary value analysis: in addition to the five boundary value analysis values of a variable, see what happens when the extrema are exceeded, using a value slightly greater than the maximum (max+) and a value slightly less than the minimum (min-)
- Focuses attention on the expected outputs (e.g. what should happen when the load capacity of a public elevator is exceeded?)
- Forces attention on exception handling
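A hedged sketch of the same idea in code: adding min- and max+ to each variable's value set yields 6n + 1 test cases (the structure mirrors the BVA sketch above and is illustrative only):

    # Robustness testing: the five BVA values plus the out-of-range
    # values min- and max+, varied one variable at a time.
    def robust_test_cases(ranges):
        noms = [(lo + hi) // 2 for lo, hi in ranges]
        cases = {tuple(noms)}
        for i, (lo, hi) in enumerate(ranges):
            for v in (lo - 1, lo, lo + 1, noms[i], hi - 1, hi, hi + 1):
                case = list(noms)
                case[i] = v
                cases.add(tuple(case))
        return sorted(cases)

    print(len(robust_test_cases([(1, 200)] * 3)))  # 6*3 + 1 = 19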

Robustness Testing

[Figure: robustness test cases for a function of two variables, including the out-of-range values min- and max+.]


Worst Case Testing

- Worst case analysis rejects the single fault assumption: more than one variable may hold an extreme value at the same time
- Procedure:
  - for each variable, create the set {min, min+, nom, max-, max}
  - take the Cartesian product of these sets to generate test cases
- More thorough than boundary value analysis, but represents more effort: for n variables, 5^n test cases (as opposed to 4n + 1 test cases for boundary value analysis)
- Follows the same generalization pattern and has the same limitations
- Robust worst case testing can also be applied
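The Cartesian product construction is easy to express directly; this sketch (an illustration, not the book's code) shows why the count grows to 5^n:

    from itertools import product

    # Worst case testing: the Cartesian product of every variable's
    # five boundary values, so n variables yield 5**n test cases.
    def worst_case_test_cases(ranges):
        value_sets = [(lo, lo + 1, (lo + hi) // 2, hi - 1, hi)
                      for lo, hi in ranges]
        return list(product(*value_sets))

    print(len(worst_case_test_cases([(1, 200)] * 3)))  # 5**3 = 125
    # Robust worst case testing would use seven-value sets: 7**3 = 343.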

Worst Case Testing

[Figure: worst case test cases for a function of two variables.]

Worst Case Testing

[Figure: robust worst case test cases for a function of two variables x1 in [a, b] and x2 in [c, d].]


Special Value Testing

- The most widely practiced form of functional testing
- The most intuitive and least uniform: there are no guidelines
- The tester uses domain knowledge, experience with similar programs, and ad hoc testing
- It is dependent on the abilities of the tester
- Even though it is highly subjective, it often results in a set of test cases that is more effective in revealing faults than the test sets generated by the other methods


Test Cases for the Triangle Problem

Boundary Value Analysis Test Cases (min = 1, min+ = 2, nom = 100, max- = 199, max = 200):

Case  a    b    c    Expected Output
1     100  100  1    Isosceles
2     100  100  2    Isosceles
3     100  100  100  Equilateral
4     100  100  199  Isosceles
5     100  100  200  Not a Triangle
6     100  1    100  Isosceles
7     100  2    100  Isosceles
8     100  199  100  Isosceles
9     100  200  100  Not a Triangle
10    1    100  100  Isosceles
11    2    100  100  Isosceles
12    199  100  100  Isosceles
13    200  100  100  Not a Triangle

Test Cases for the Triangle Problem

Worst Case Test Cases (cases 1-60 of 125)

[Table: the worst case test cases combine a, b, and c drawn from {1, 2, 100, 199, 200}; the expected outputs are Equilateral, Isosceles, Scalene, or Not a Triangle as appropriate.]


Test Cases for the NextDate Problem

Boundary Value Analysis Test Cases
(month: min = 1, min+ = 2, nom = 6, max- = 11, max = 12;
 day: min = 1, min+ = 2, nom = 15, max- = 30, max = 31;
 year: min = 1812, min+ = 1813, nom = 1912, max- = 2011, max = 2012)

Case  month  day  year  Expected Output
1     6      15   1812  June 16, 1812
2     6      15   1813  June 16, 1813
3     6      15   1912  June 16, 1912
4     6      15   2011  June 16, 2011
5     6      15   2012  June 16, 2012
6     6      1    1912  June 2, 1912
7     6      2    1912  June 3, 1912
8     6      30   1912  July 1, 1912
9     6      31   1912  error
10    1      15   1912  January 16, 1912
11    2      15   1912  February 16, 1912
12    11     15   1912  November 16, 1912
13    12     15   1912  December 16, 1912

Test Cases for the NextDate Problem

Worst Case Test Cases (cases 1-60 of 125)

[Table: the worst case test cases combine month in {1, 2, 6, 11, 12}, day in {1, 2, 15, 30, 31}, and year in {1812, 1813, 1912, 2011, 2012}. For example, <1, 31, 1812> yields February 1, 1812, while <2, 30, year> and <2, 31, year> yield error for every year.]


Test Cases for the Commission Problem

- 13 boundary value analysis test cases; 125 worst case test cases
- Use boundary values for the output range, especially near the threshold points of $1000 and $1800
- Part of the reason for using the output range to determine test cases is that test cases derived from the input range fall almost entirely in the 20% commission zone
- We want to find input variable combinations that stress the sales boundary values: $100, $1000, $1800, and $7800

Test Cases for the Commission Problem

Output Boundary Value Analysis Test Cases

[Table: 25 test cases choosing locks, stocks, and barrels so that sales land at and around the output boundaries $100, $1000, $1800, and $7800; for example, sales of 1000 give a commission of 100, sales of 1800 give 220, sales of 4800 give 820, and sales of 7800 (the output maximum) give 1420.]

Test Cases for the Commission Problem

- Test case 9 is the $1000 border point
- If we tweak the input variables, we get sales values just below and just above the border
- This is a form of special value testing based on mathematical insight

Output Special Value Test Cases

Case  locks  stocks  barrels  sales  commission  comment
1     10     11      9        1005   100.75      border point +
2     18     17      19       1795   219.25      border point -
3     18     19      17       1805   221.00      border point +
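The border-point commissions above can be checked with a small sketch of the commission function; the 10%/15%/20% rates and the $1000/$1800 breakpoints are inferred from the tabulated values (e.g. sales of 1005 giving 100.75), so treat them as assumptions:

    # Commission problem: sales = 45*locks + 30*stocks + 25*barrels;
    # commission is assumed to be 10% of the first $1000 of sales, 15%
    # of the next $800, and 20% of everything over $1800.
    def commission(locks, stocks, barrels):
        sales = 45 * locks + 30 * stocks + 25 * barrels
        if sales <= 1000:
            comm = 0.10 * sales
        elif sales <= 1800:
            comm = 100 + 0.15 * (sales - 1000)
        else:
            comm = 220 + 0.20 * (sales - 1800)
        return sales, comm

    # The three output special value test cases above:
    for case in [(10, 11, 9), (18, 17, 19), (18, 19, 17)]:
        print(case, commission(*case))
    # (10, 11, 9)  -> (1005, 100.75)   border point +
    # (18, 17, 19) -> (1795, 219.25)   border point -
    # (18, 19, 17) -> (1805, 221.0)    border point +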


Equivalence Classes

- Motivations:
  - have a sense of complete testing
  - avoid redundancy
- Equivalence classes form a partition of a set, where a partition is a collection of mutually disjoint subsets whose union is the entire set (completeness, non-redundancy)
- The idea is to identify test cases by using one element from each equivalence class; the elements of a class are "treated the same", e.g. they traverse the same execution path
- The key is the choice of the equivalence relation that determines the classes: second-guess the likely implementation, and think about the functional manipulations that must somehow be present in it

Equivalence Classes

Program under consideration:
- a function of variables a, b, c
- the input domain consists of sets A, B, and C:
  - A = A1 ∪ A2 ∪ A3
  - B = B1 ∪ B2 ∪ B3 ∪ B4
  - C = C1 ∪ C2
- example class members: a1 ∈ A1, b3 ∈ B3, c2 ∈ C2


Weak Equivalence Class Testing

Accomplished by using one variable value from each equivalence class in a test case
(A = A1 ∪ A2 ∪ A3, B = B1 ∪ B2 ∪ B3 ∪ B4, C = C1 ∪ C2; a1 ∈ A1, b3 ∈ B3, c2 ∈ C2):

Test Case  a   b   c
WE1        a1  b1  c1
WE2        a2  b2  c2
WE3        a3  b3  c1
WE4        a1  b4  c2

The number of weak equivalence class test cases equals the number of classes in the partition with the largest number of subsets.


Strong Equivalence Class Testing

- Based on the Cartesian product of the partition subsets: |A x B x C| = 3 x 4 x 2 = 24 test cases
- Equivalence relations can also be defined on the output range of the program function being tested

Test Case  a   b   c
SE1        a1  b1  c1
SE2        a1  b1  c2
SE3        a1  b2  c1
SE4        a1  b2  c2
SE5        a1  b3  c1
SE6        a1  b3  c2
SE7        a1  b4  c1
SE8        a1  b4  c2
SE9        a2  b1  c1
SE10       a2  b1  c2
SE11       a2  b2  c1
SE12       a2  b2  c2
SE13       a2  b3  c1
SE14       a2  b3  c2
SE15       a2  b4  c1
SE16       a2  b4  c2
SE17       a3  b1  c1
SE18       a3  b1  c2
SE19       a3  b2  c1
SE20       a3  b2  c2
SE21       a3  b3  c1
SE22       a3  b3  c2
SE23       a3  b4  c1
SE24       a3  b4  c2
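A small sketch contrasting the two selection strategies for the A, B, C example above (class names stand in for representative values; cycling the shorter partitions for the weak cases is one reasonable convention, not the only one):

    from itertools import product

    A = ["a1", "a2", "a3"]
    B = ["b1", "b2", "b3", "b4"]
    C = ["c1", "c2"]

    # Weak: one case per "column", cycling the shorter partitions, so
    # the count equals the largest partition (4 here).
    width = max(len(A), len(B), len(C))
    weak = [tuple(p[i % len(p)] for p in (A, B, C)) for i in range(width)]

    # Strong: the full Cartesian product, 3 x 4 x 2 = 24 cases.
    strong = list(product(A, B, C))

    print(weak)         # matches WE1..WE4 in the table above
    print(len(strong))  # 24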


Traditional Equivalence Class Testing

- Defines equivalence classes in terms of validity
- Commission problem:
  - valid inputs: 1 <= lock <= 70, 1 <= stock <= 80, 1 <= barrel <= 90
  - invalid inputs: lock < 1, lock > 70, stock < 1, stock > 80, barrel < 1, barrel > 90
- For valid inputs, use one value from each valid class (as in weak equivalence class testing)
- For invalid inputs, a test case has one invalid value while the remaining values are all valid (single failure assumption)

Traditional Equivalence Class Testing

Problems:
- Very often, the specification does not define what the expected output for an invalid test case should be, so a lot of time is spent defining expected outputs
- Strongly typed languages eliminate the need to consider many kinds of invalid inputs


Equivalence Class Test Cases for the Triangle Problem

- Possible outputs: Not a Triangle, Scalene, Isosceles, Equilateral
- It is easier to identify output (range) equivalence classes:
  - R1 = {<a, b, c> : the triangle with sides a, b, and c is equilateral}
  - R2 = {<a, b, c> : the triangle with sides a, b, and c is isosceles}
  - R3 = {<a, b, c> : the triangle with sides a, b, and c is scalene}
  - R4 = {<a, b, c> : sides a, b, and c do not form a triangle}

Test Case  a  b  c  Expected Output
OE1        5  5  5  Equilateral
OE2        2  2  3  Isosceles
OE3        3  4  5  Scalene
OE4        4  1  2  Not a Triangle

Equivalence Class Test Cases for the Triangle Problem

Input (domain) equivalence classes:
- D1 = {<a, b, c> : a = b = c}
- D2 = {<a, b, c> : a = b, a ≠ c}
- D3 = {<a, b, c> : a = c, a ≠ b}
- D4 = {<a, b, c> : b = c, a ≠ b}
- D5 = {<a, b, c> : a ≠ b, a ≠ c, b ≠ c}
- D6 = {<a, b, c> : a ≥ b + c}, refined into D6' = {<a, b, c> : a = b + c} and D6'' = {<a, b, c> : a > b + c}
- D7 = {<a, b, c> : b ≥ a + c}, refined into D7' = {<a, b, c> : b = a + c} and D7'' = {<a, b, c> : b > a + c}
- D8 = {<a, b, c> : c ≥ a + b}, refined into D8' = {<a, b, c> : c = a + b} and D8'' = {<a, b, c> : c > a + b}
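A minimal oracle sketch for the triangle problem, matching the output classes R1..R4 and the refined input classes above; it is useful for generating the expected outputs of equivalence class test cases:

    # Classify three sides into the four output equivalence classes.
    def triangle_type(a, b, c):
        if a >= b + c or b >= a + c or c >= a + b:
            return "Not a Triangle"   # R4: some side >= sum of the others
        if a == b == c:
            return "Equilateral"      # R1
        if a == b or a == c or b == c:
            return "Isosceles"        # R2
        return "Scalene"              # R3

    for sides in [(5, 5, 5), (2, 2, 3), (3, 4, 5), (4, 1, 2)]:
        print(sides, triangle_type(*sides))   # reproduces OE1..OE4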


Equivalence Class Test Cases for the NextDate Function

Input variables: 1 <= month <= 12, 1 <= day <= 31, 1812 <= year <= 2012

Traditional approach:
- Valid equivalence classes:
  - M1 = { month : 1 <= month <= 12 }
  - D1 = { day : 1 <= day <= 31 }
  - Y1 = { year : 1812 <= year <= 2012 }
- Invalid equivalence classes:
  - M2 = { month : month < 1 }, M3 = { month : month > 12 }
  - D2 = { day : day < 1 }, D3 = { day : day > 31 }
  - Y2 = { year : year < 1812 }, Y3 = { year : year > 2012 }

Case ID  Month  Day  Year  Expected Output
TE1      6      15   1912  6/16/1912
TE2      -1     15   1912  invalid input
TE3      13     15   1912  invalid input
TE4      6      -1   1912  invalid input
TE5      6      32   1912  invalid input
TE6      6      15   1811  invalid input
TE7      6      15   2013  invalid input

Equivalence Class Test Cases for the NextDate Function

- The traditional approach is deficient because it treats the elements of a class only at the valid/invalid level
- A different approach asks: what must be done to an input date?
  - M1 = { month : month has 30 days }
  - M2 = { month : month has 31 days }
  - M3 = { month : month is February }
  - D1 = { day : 1 <= day <= 28 }
  - D2 = { day : day = 29 }
  - D3 = { day : day = 30 }
  - D4 = { day : day = 31 }
  - Y1 = { year : year = 1900 }
  - Y2 = { year : 1812 <= year <= 2012 AND year ≠ 1900 AND year mod 4 = 0 }
  - Y3 = { year : 1812 <= year <= 2012 AND year mod 4 ≠ 0 }
- Not a perfect set of equivalence classes!
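The class membership can be made mechanical; this sketch (illustrative, assuming a syntactically valid day in 1..31 and a year in range) maps an input date onto the M, D, Y classes above:

    # Map (month, day, year) onto the equivalence classes M1..M3,
    # D1..D4, Y1..Y3 defined above.
    THIRTY_ONE = {1, 3, 5, 7, 8, 10, 12}

    def classify(month, day, year):
        m = "M3" if month == 2 else ("M2" if month in THIRTY_ONE else "M1")
        d = "D1" if day <= 28 else {29: "D2", 30: "D3", 31: "D4"}[day]
        if year == 1900:
            y = "Y1"
        elif year % 4 == 0:
            y = "Y2"   # assumes 1812 <= year <= 2012 and year != 1900
        else:
            y = "Y3"
        return m, d, y

    print(classify(6, 14, 1900))   # ('M1', 'D1', 'Y1')
    print(classify(2, 29, 1912))   # ('M3', 'D2', 'Y2')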

Equivalence Class Test Cases for the NextDate Function

Weak equivalence class test cases:

Case ID  Month  Day  Year  Expected Output
WE1      6      14   1900  6/15/1900
WE2      7      29   1912  7/30/1912
WE3      2      30   1913  invalid input
WE4      6      31   1900  invalid input

Strong equivalence class test cases (one per element of M x D x Y):

Case ID  Month  Day  Year  Expected Output
SE1      6      14   1900  6/15/1900
SE2      6      14   1912  6/15/1912
SE3      6      14   1913  6/15/1913
SE4      6      29   1900  6/30/1900
SE5      6      29   1912  6/30/1912
SE6      6      29   1913  6/30/1913
SE7      6      30   1900  7/1/1900
SE8      6      30   1912  7/1/1912
SE9      6      30   1913  7/1/1913
SE10     6      31   1900  error
SE11     6      31   1912  error
SE12     6      31   1913  error
SE13     7      14   1900  7/15/1900
SE14     7      14   1912  7/15/1912
SE15     7      14   1913  7/15/1913
SE16     7      29   1900  7/30/1900
SE17     7      29   1912  7/30/1912
SE18     7      29   1913  7/30/1913
SE19     7      30   1900  7/31/1900
SE20     7      30   1912  7/31/1912
SE21     7      30   1913  7/31/1913
SE22     7      31   1900  8/1/1900
SE23     7      31   1912  8/1/1912
SE24     7      31   1913  8/1/1913
SE25     2      14   1900  2/15/1900
SE26     2      14   1912  2/15/1912
SE27     2      14   1913  2/15/1913
SE28     2      29   1900  error
SE29     2      29   1912  3/1/1912
SE30     2      29   1913  error
SE31     2      30   1900  error
SE32     2      30   1912  error
SE33     2      30   1913  error
SE34     2      31   1900  error
SE35     2      31   1912  error
SE36     2      31   1913  error


Equivalence Class Test Cases for the Commission Problem

Input domain equivalence classes:
- Lock:
  - L1 = { lock : 1 <= lock <= 70 }
  - L2 = { lock : lock < 1 }
  - L3 = { lock : lock > 70 }
- Stock:
  - S1 = { stock : 1 <= stock <= 80 }
  - S2 = { stock : stock < 1 }
  - S3 = { stock : stock > 80 }
- Barrel:
  - B1 = { barrel : 1 <= barrel <= 90 }
  - B2 = { barrel : barrel < 1 }
  - B3 = { barrel : barrel > 90 }

Equivalence Class Test Cases for the Commission Problem

Weak input domain equivalence class test cases:

Test Case  locks  stocks  barrels  sales  commission
WE1        35     40      45       3900   640
WE2        0      0       0        error  error
WE3        71     81      91       error  error

Strong input domain equivalence class test cases (3 x 3 x 3 = 27): locks in {35, 0, 71}, stocks in {40, 0, 81}, barrels in {45, 0, 91}. Only SE1 = <35, 40, 45> is entirely valid (sales 3900, commission 640); each of the other 26 combinations contains at least one invalid value and yields an error.

Equivalence Class Test Cases for the Commission Problem

- sales = 45 x locks + 30 x stocks + 25 x barrels
- Output range equivalence classes:
  - L1 = { <lock, stock, barrel> : sales <= 1000 }
  - L2 = { <lock, stock, barrel> : 1000 < sales <= 1800 }
  - L3 = { <lock, stock, barrel> : sales > 1800 }

Output Range Equivalence Class Test Cases

Test Case  locks  stocks  barrels  sales  commission
OR1        5      5       5        500    50
OR2        15     15      15       1500   175
OR3        25     25      25       2500   360


Guidelines and Observations

1. The traditional form of equivalence class testing is generally not as thorough as weak equivalence class testing, which, in turn, is not as thorough as the strong form of equivalence class testing.
2. The only time it makes sense to use the traditional approach is when the implementation language is not strongly typed.
3. If error conditions are a high priority, we can extend strong equivalence class testing to include invalid classes (e.g. the commission problem).
4. Equivalence class testing is appropriate when input data is defined in terms of ranges and sets of discrete values. This is certainly the case when system malfunctions can occur for out-of-limit variable values.

Guidelines and Observations

5. Equivalence class testing is strengthened by a hybrid approach with boundary value testing; we can reuse the effort made in defining the equivalence classes (e.g. the NextDate function).
6. Equivalence class testing is indicated when the program function is complex. In such cases, the complexity of the function can help identify useful equivalence classes, as in the NextDate function.
7. Strong equivalence class testing presumes that the variables are independent when the Cartesian product is taken. If there are dependencies, these will often generate "error" test cases, as they did in the NextDate function.

Guidelines and Observations

8. Several tries may be needed before the right equivalence relation is discovered, as we saw in the NextDate example. In other cases, there is an obvious or natural equivalence partition. When in doubt, the best bet is to try to second-guess aspects of any reasonable implementation.


References

Paul C. Jorgensen, Software Testing: A Craftsman's Approach, 2nd edition, CRC Press (Chapters 5 and 6).

Chapter 6
Equivalence Class Testing


Equivalence Class Testing

[Figure: a function F mapping its Domain to its Range.]

Equivalence class testing uses information about the functional mapping itself to identify test cases.

Equivalence Relations

Given a relation R defined on some set S, R is an equivalence relation if (and only if), for all x, y, and z in S:
- R is reflexive, i.e., xRx
- R is symmetric, i.e., if xRy, then yRx
- R is transitive, i.e., if xRy and yRz, then xRz

An equivalence relation R induces a partition on the set S, where a partition is a set of subsets of S such that:
- the intersection of any two subsets is empty, and
- the union of all the subsets is the original set S

Note that the intersection property assures no redundancy, and the union property assures no gaps.

Equivalence Partitioning

Define a relation R on the domain of F as follows: xRy iff F(x) = F(y). Facts:
1. R is an equivalence relation.
2. An equivalence relation induces a partition on a set.
3. This works best when F is many-to-one (each class is a pre-image set).
4. Test cases are formed by selecting one value from each equivalence class:
   - this reduces redundancy
   - identifying the classes may be hard

Forms of Equivalence Class Testing

- Normal: classes of valid values of inputs
- Robust: classes of valid and invalid values of inputs
- Weak (single fault assumption): one value from each class
- Strong (multiple fault assumption): one value from each class in the Cartesian product

We compare these for a function of two variables, F(x1, x2); the extension to problems with three or more variables is direct.

Weak Normal Equivalence Class Testing

- Identify equivalence classes of valid values.
- Test cases have all valid values.
- Detects faults due to calculations with valid values of a single variable.
- OK for regression testing.

Weak Normal Equivalence Class Test Cases

[Figure: one test point in each valid class of x1 and x2.]

Equivalence classes (of valid values):
{a <= x1 < b}, {b <= x1 < c}, {c <= x1 <= d}
{e <= x2 < f}, {f <= x2 <= g}

Weak Robust Equivalence Class Testing

- Identify equivalence classes of valid and invalid values.
- Test cases have all valid values except one invalid value.
- Detects faults due to calculations with valid values of a single variable.
- Detects faults due to invalid values of a single variable.
- OK for regression testing.

Weak Robust Equivalence Class Test Cases

[Figure: test points covering each valid class plus each invalid class of x1 and x2.]

Equivalence classes (of valid values):
{a <= x1 < b}, {b <= x1 < c}, {c <= x1 <= d}, {e <= x2 < f}, {f <= x2 <= g}
Invalid classes: {x1 < a}, {x1 > d}, {x2 < e}, {x2 > g}

[Figure: two alternative placements of weak robust test cases over the same classes. Is the first preferable to the second? Why?]

Strong Normal Equivalence Class Testing

- Identify equivalence classes of valid values.
- Test cases come from the Cartesian product of the valid classes.
- Detects faults due to interactions with valid values of any number of variables.
- OK for regression testing, better for progression testing.

Strong Normal Equivalence Class Test Cases

[Figure: one test point in each cell of the Cartesian product of the valid classes of x1 and x2.]

Equivalence classes (of valid values):
{a <= x1 < b}, {b <= x1 < c}, {c <= x1 <= d}
{e <= x2 < f}, {f <= x2 <= g}

Strong Robust Equivalence Class Testing

- Identify equivalence classes of valid and invalid values.
- Test cases come from the Cartesian product of all classes.
- Detects faults due to interactions with any values of any number of variables.
- OK for regression testing, better for progression testing.
- This is the most rigorous form of equivalence class testing, BUT Jorgensen's First Law of Software Engineering applies: "The product of two big numbers is a really big number." (Scaling up can be problematic.)

Strong Robust Equivalence Class Test Cases

[Figure: one test point in each cell of the Cartesian product of all classes, valid and invalid, of x1 and x2.]

Equivalence classes (of valid values):
{a <= x1 < b}, {b <= x1 < c}, {c <= x1 <= d}, {e <= x2 < f}, {f <= x2 <= g}
Invalid classes: {x1 < a}, {x1 > d}, {x2 < e}, {x2 > g}

Selecting an Equivalence Relation

There is no such thing as THE equivalence relation. If x and y are days, some possibilities for NextDate are:
- x R y iff x and y are mapped onto the same year
- x R y iff x and y are mapped onto the same month
- x R y iff x and y are mapped onto the same date
- x R y iff x(day) and y(day) are treated the same
- x R y iff x(month) and y(month) are treated the same
- x R y iff x(year) and y(year) are treated the same

Best practice is to select an equivalence relation that reflects the behavior being tested.

NextDate Equivalence Classes

Month:
M1 = { month : month has 30 days }
M2 = { month : month has 31 days }
M3 = { month : month is February }

Day:
D1 = { day : 1 <= day <= 28 }
D2 = { day : day = 29 }
D3 = { day : day = 30 }
D4 = { day : day = 31 }

Year (are these disjoint?):
Y1 = { year : year = 2000 }
Y2 = { year : 1812 <= year <= 2012 AND (year ≠ 0 mod 100) AND (year = 0 mod 4) }
Y3 = { year : 1812 <= year <= 2012 AND (year ≠ 0 mod 4) }

Not Quite Right

A better set of equivalence classes for year is:
Y1 = { century years divisible by 400 }, i.e., century leap years
Y2 = { century years not divisible by 400 }, i.e., century common years
Y3 = { non-century years divisible by 4 }, i.e., ordinary leap years
Y4 = { non-century years not divisible by 4 }, i.e., ordinary common years

All years must be in range: 1812 <= year <= 2012. Note that these equivalence classes are disjoint.
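The revised year classes follow the full Gregorian leap year rule; a short sketch:

    # Y1/Y2: century leap/common years; Y3/Y4: ordinary leap/common years.
    def year_class(year):
        assert 1812 <= year <= 2012
        if year % 100 == 0:
            return "Y1" if year % 400 == 0 else "Y2"
        return "Y3" if year % 4 == 0 else "Y4"

    print([year_class(y) for y in (2000, 1900, 1912, 1913)])
    # ['Y1', 'Y2', 'Y3', 'Y4']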

Weak Normal Equivalence Class Test Cases

Select test cases so that one element from each input domain equivalence class is used as a test input value.

Test Case  Input Domain Equiv. Classes  Input Values    Expected Outputs
WN-1       M1, D1, Y1                   April 1, 2000   April 2, 2000
WN-2       M2, D2, Y2                   Jan. 29, 1900   Jan. 30, 1900
WN-3       M3, D3, Y3                   Feb. 30, 1812   impossible
WN-4       M1, D4, Y4                   April 31, 1901  impossible

Notice that all forms of equivalence class testing presume that the variables in the input domain are independent; logical dependencies are unrecognized.

Strong Normal Equivalence Class Test Cases

With 4 day classes, 3 month classes, and 4 year classes, the Cartesian product will have 48 equivalence class test cases. (Jorgensen's First Law of Software Engineering strikes again!)

Note that some judgment is required: would it be better to have 5 day classes, 4 month classes, and only 2 year classes? (40 test cases) Questions such as this can be resolved by considering risk.

Revised NextDate Domain Equivalence Classes

Month:
M1 = { month : month has 30 days }
M2 = { month : month has 31 days, except December }
M3 = { month : month is February }
M4 = { month : month is December }

Day:
D1 = { day : 1 <= day <= 27 }
D2 = { day : day = 28 }
D3 = { day : day = 29 }
D4 = { day : day = 30 }
D5 = { day : day = 31 }

Year:
Y1 = { year : year is a leap year }
Y2 = { year : year is a common year }

The Cartesian product of these contains 40 elements.

When to Use Equivalence Class Testing

- Variables represent logical (rather than physical) quantities.
- Variables support useful equivalence classes.
- Try to define equivalence classes for:
  - the Triangle Problem: 0 < sideA < 200, 0 < sideB < 200, 0 < sideC < 200
  - the Commission Problem (exercise)

Another Equivalence Class Strategy

Work backwards from output classes. For the Triangle Problem, we could have:
- {x, y, z such that they form an Equilateral triangle}
- {x, y, z such that they form an Isosceles triangle with x = y}
- {x, y, z such that they form an Isosceles triangle with x = z}
- {x, y, z such that they form an Isosceles triangle with y = z}
- {x, y, z such that they form a Scalene triangle}

How many equivalence classes will be needed for "x, y, z do not form a triangle"?

In-Class Exercise

Apply the "working backwards" approach to develop equivalence classes for the Commission Problem. Hint: use boundaries in the output space.

Assumption Matrix

                  Valid Values                  Valid and Invalid Values
Single fault      Boundary Value                Robust Boundary Value
                  Weak Normal Equiv. Class      Weak Robust Equiv. Class
Multiple fault    Worst Case Boundary Value     Robust Worst Case Boundary Value
                  Strong Normal Equiv. Class    Strong Robust Equiv. Class

Chapter 7
Decision Table Based Testing


Decision Table Based Testing

Originally known as Cause and Effect Graphing:
- done with a graphical technique that expressed AND-OR-NOT logic
- causes and effects were graphed like circuit components: inputs to a circuit caused outputs (effects)

Equivalent to forming a decision table in which:
- inputs are conditions
- outputs are actions

Test every (possible) rule in the decision table. Recommended for logically complex situations.

Decision Tables

- Represent complex conditional behavior.
- Support extensive analysis: consistency, completeness, redundancy, algebraic simplification.
- Executable (and compilable).
- Two forms: limited and extended entry.
- "Don't care" condition entries require special attention.
- Dependencies usually yield impossible situations.

Decision Table Terminology

[Table: a decision table with condition stubs c1, c2, c3 (entries True, False, or -- for don't care) and action stubs a1..a4 (entries marked X). The stubs form the left column, the entries fill the body, and each column of entries is a rule.]

One Decision Table for the Triangle Problem

Rule                          1  2  3  4  5  6  7  8  9
c1: a, b, c form a triangle?  N  Y  Y  Y  Y  Y  Y  Y  Y
c2: a = b?                    -  Y  Y  Y  Y  N  N  N  N
c3: a = c?                    -  Y  Y  N  N  Y  Y  N  N
c4: b = c?                    -  Y  N  Y  N  Y  N  Y  N
a1: Not a triangle            X  -  -  -  -  -  -  -  -
a2: Scalene                   -  -  -  -  -  -  -  -  X
a3: Isosceles                 -  -  -  -  X  -  X  X  -
a4: Equilateral               -  X  -  -  -  -  -  -  -
a5: Impossible                -  -  X  X  -  X  -  -  -

Why are rules 3, 4, and 6 impossible?

Decision Table with Mutually Exclusive Conditions

Conditions          Rule 1  Rule 2  Rule 3
c1: 30-day month    T       --      --
c2: 31-day month    --      T       --
c3: February        --      --      T

Rule Counting to Check for Completeness

Conditions         R1  R2  R3
c1: month in M1    T   --  --
c2: month in M2    --  T   --
c3: month in M3    --  --  T
Rule count         4   4   4

- A limited entry decision table with n conditions has 2^n rules.
- A don't care entry doubles the count of a rule.
- What are the possibilities when the rule count total is not 2^n?
- It is more precise to use F! (must be false) than -- (don't care).
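The rule-count check is easy to automate; in this sketch a rule is a string over T, F, and '-' (don't care), and a complete, non-overlapping table must have counts summing to exactly 2^n:

    # Each don't care entry doubles the number of elementary rules a
    # column covers: a rule with k dashes counts as 2**k.
    def rule_count(rule):
        return 2 ** rule.count("-")

    table = ["T--", "-T-", "--T"]   # the mutually exclusive example above
    counts = [rule_count(r) for r in table]
    print(counts, sum(counts), 2 ** 3)   # [4, 4, 4] 12 8
    # 12 != 8: the don't cares overlap, which is why F! entries are
    # more precise than -- here.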

A Redundant Decision Table

Conditions  1-4  5  6  7  8  9
c1:         T    F  F  F  F  T
c2:         --   T  T  F  F  F
c3:         --   T  F  T  F  F
a1:         X    X  X  -  -  X
a2:         -    X  X  X  -  -
a3:         X    -  X  X  X  X

Rule 9 is identical to Rule 4 (T, F, F). Since the action entries for rules 4 and 9 are identical, there is no ambiguity, just redundancy.

Decision Table Exercise

(Revisit the false negative / false positive question.) Suggested conditions:
- expected output is correct
- observed output is correct
- expected and observed outputs agree

Suggested actions:
- true pass
- true fail
- false pass
- false fail
- impossible

An Inconsistent Decision Table

Conditions  1-4  5  6  7  8  9
c1:         T    F  F  F  F  T
c2:         --   T  T  F  F  F
c3:         --   T  F  T  F  F
a1:         X    X  X  -  -  -
a2:         -    X  X  X  -  X
a3:         X    -  X  X  X  -

Rule 9 is identical to Rule 4 (T, F, F), but since the action entries for rules 4 and 9 are different, there is ambiguity. This table is inconsistent, and the inconsistency implies nondeterminism.

NextDate Limited Entry Decision Table

Conditions:
c1: month in M1?
c2: month in M2?
c3: month in M3?
c4: day in D1?
c5: day in D2?
c6: day in D3?
c7: day in D4?
c8: leap year?

Actions:
a1: impossible
a2: next date

This decision table will have 2^8 = 256 rules, many of which are logically impossible.

Decision Table Based Test Cases

1. Decision table testing begins with equivalence classes for the conditions, as in equivalence class testing.
2. The sparseness due to the assumption of independence is addressed by careful examination of the elements of the cross product.
3. For the equivalence classes defined earlier, the cross product contains 36 elements, and the corresponding decision table has 36 rules:
   <M1, D1, Y1>, <M1, D2, Y1>, <M1, D3, Y1>, <M1, D4, Y1>,
   <M2, D1, Y1>, <M2, D2, Y1>, <M2, D3, Y1>, <M2, D4, Y1>,
   <M3, D1, Y1>, <M3, D2, Y1>, <M3, D3, Y1>, <M3, D4, Y1>,
   <M1, D1, Y2>, <M1, D2, Y2>, <M1, D3, Y2>, <M1, D4, Y2>,
   <M2, D1, Y2>, <M2, D2, Y2>, <M2, D3, Y2>, <M2, D4, Y2>,
   <M3, D1, Y2>, <M3, D2, Y2>, <M3, D3, Y2>, <M3, D4, Y2>,
   <M1, D1, Y3>, <M1, D2, Y3>, <M1, D3, Y3>, <M1, D4, Y3>,
   <M2, D1, Y3>, <M2, D2, Y3>, <M2, D3, Y3>, <M2, D4, Y3>,
   <M3, D1, Y3>, <M3, D2, Y3>, <M3, D3, Y3>, <M3, D4, Y3>
4. Notice that many of these are impossible, e.g., <M1, D4, *>.

NextDate Extended Entry Decision Table

Conditions:
c1: month in {M1, M2, M3}
c2: day in {D1, D2, D3, D4}
c3: year in {Y1, Y2, Y3}

Actions:
a1: impossible
a2: increment day
a3: reset day
a4: increment month
a5: reset month
a6: increment year

This decision table has 36 rules and corresponds to the cross product. Many of the rules are logically impossible, and many rules would collapse, except for the considerations needed for December.

Revised NextDate Domain Equivalence Classes

Month:
M1 = { month : month has 30 days }
M2 = { month : month has 31 days, except December }
M3 = { month : month is December }
M4 = { month : month is February }

Day:
D1 = { day : 1 <= day <= 27 }
D2 = { day : day = 28 }
D3 = { day : day = 29 }
D4 = { day : day = 30 }
D5 = { day : day = 31 }

Year:
Y1 = { year : year is a leap year }
Y2 = { year : year is a common year }

The corresponding decision table contains 40 rules.

NextDate Extended Entry Decision Table

[Table: the 40-rule extended entry decision table, one rule per element of the cross product M x D x Y of the revised equivalence classes; the actions are impossible, increment day, reset day, increment month, reset month, and increment year.]

Notice there are 40 rules in this decision table, corresponding to the 40 elements of the cross product of the revised equivalence classes.

NextDate Extended Entry Decision Table

Algebraically condensed to 13 rules (test cases):

Rules   c1: month in  c2: day in      c3: year in  Actions
1-3     M1            D1, D2, D3      --           increment day
4       M1            D4              --           reset day, increment month
5       M1            D5              --           impossible
6-9     M2            D1, D2, D3, D4  --           increment day
10      M2            D5              --           reset day, increment month
11-14   M3            D1, D2, D3, D4  --           increment day
15      M3            D5              --           reset day, reset month, increment year
16      M4            D1              --           increment day
17      M4            D2              Y1           increment day
18      M4            D2              Y2           reset day, increment month
19      M4            D3              Y1           reset day, increment month
20      M4            D3              Y2           impossible
21, 22  M4            D4, D5          --           impossible

Procedure for Decision Table Based Testing

1. Determine conditions and actions (might need to iterate).
2. Develop a (the!) decision table, watching for:
   - completeness
   - don't care entries
   - redundant and/or inconsistent rules
   - impossible rules
3. Each rule defines a test case.

Chapter 8
Retrospective on Functional Testing


Retrospective on Functional Testing

- Test case development effort
- Test case effectiveness
- Test method selection guidelines
- Case study

Test Case Development Effort

As with so many things in life, "You get out of it what you put into it." (Dad)
- Boundary value: almost mechanical
- Equivalence class: effort to identify classes
- Decision table: still more effort

Numbers of Test Cases

[Chart: the number of test cases decreases as method sophistication increases, from boundary value through equivalence class to decision table.]

Test Case Development Effort

[Chart: the effort to identify test cases increases with sophistication, from boundary value through equivalence class to decision table.]

Test Case Effectiveness

There is a true trade-off between development effort and the number of test cases. Vulnerabilities:
- Boundary value testing has gaps and redundancies, and many test cases.
- Equivalence class testing eliminates the gaps and redundancies, but cannot deal with dependencies among variables.
- Decision table testing extends equivalence class testing by dealing with dependencies, and supports algebraic reduction of test cases.

Appropriate Choices of Test Methods

Conditions: c1 variables (P = physical, L = logical); c2 independent variables?; c3 single fault assumption?; c4 exception handling? Rules:
- P, Y, Y, Y -> a2: robustness BVA
- P, Y, Y, N -> a1: boundary value analysis (BVA)
- P, Y, N, Y -> a4: robust worst case BVA
- P, Y, N, N -> a3: worst case BVA
- P, N, -, - -> a9: decision table
- L, Y, Y, Y -> a6: weak robust equiv. class
- L, Y, Y, N -> a5: weak normal equiv. class
- L, Y, N, Y -> a8: strong robust equiv. class
- L, Y, N, N -> a7: strong normal equiv. class
- L, N, -, - -> a9: decision table

Case Study

A hypothetical Insurance Premium Program computes the semiannual car insurance premium based on two parameters, the policy holder's age and driving record:

    Premium = BaseRate * ageMultiplier - safeDrivingReduction

The ageMultiplier is a function of the policy holder's age, and the safe driving reduction is given when the current points (assigned by traffic courts for moving violations) on the policy holder's driver's license are below an age-related cutoff. Policies are written for drivers in the age range of 16 to 100. Once a policy holder has 12 points, his/her driver's license is suspended (hence there is no need for insurance). The BaseRate changes from time to time; for this example, it is $500 for a semiannual premium.

Insurance Premium Program Data

Age Range         Age Multiplier  Points Cutoff  Safe Driving Reduction
16 <= age < 25    2.8             1              50
25 <= age < 35    1.8             3              50
35 <= age < 45    1.0             5              100
45 <= age < 60    0.8             7              150
60 <= age <= 100  1.5             5              200
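A sketch of the premium calculation implied by the data table, assuming the reduction applies when the points are strictly below the age band's cutoff (as the case study text states):

    BANDS = [  # (age lo, age hi, multiplier, points cutoff, reduction)
        (16, 25, 2.8, 1, 50),
        (25, 35, 1.8, 3, 50),
        (35, 45, 1.0, 5, 100),
        (45, 60, 0.8, 7, 150),
        (60, 101, 1.5, 5, 200),   # covers 60 <= age <= 100
    ]

    def premium(age, points, base_rate=500):
        for lo, hi, mult, cutoff, reduction in BANDS:
            if lo <= age < hi:
                return base_rate * mult - (reduction if points < cutoff else 0)
        raise ValueError("age out of range")

    print(premium(30, 2))   # 500*1.8 - 50 = 850.0
    print(premium(30, 4))   # 500*1.8      = 900.0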

Insurance Premium Program Calculations, Test Method Selection

- Premium = BaseRate * ageMultiplier - safeDrivingReduction
- ageMultiplier = F1(age) [from the table]
- safeDrivingReduction = F2(age, points) [from the table]
- age and points are physical variables, with a dependency in F2
- Boundary values for age: 16, 17, 54, 99, 100
- Boundary values for points: 0, 1, 6, 11, 12
- Robust values for age and points are not allowed by the business rules
- Worst case BVA yields 25 test cases, with many gaps and some redundancy; we need something better

Graph of Boundary Value Test Cases

[Figure: worst case boundary value test cases plotted on the age (16-100) versus points (0-12) plane: severe gaps!]

Graph of Boundary Value Test Cases (Refined Boundaries)

[Figure: boundary value test cases with the refined age-range boundaries: severe redundancy!]

Insurance Premium Program Test Method Selection

- age has ranges that receive similar treatment, so equivalence class testing is indicated: use the age ranges from the data table.
- The points cutoff is also a range, a further indication for equivalence class testing:
  - points in {0, 1}
  - points in {2, 3}
  - points in {4, 5}
  - points in {6, 7}
  - points in {8, 9, 10, 11, 12}

Insurance Premium Program Strong Normal Equivalence Class Test Cases

[Figure: strong normal equivalence class test cases on the age versus points plane: still a lot of redundancy; try decision tables.]

Insurance Premium Program Decision Table Test Cases

c1: age is  c2: points  a1: age multiplier  a2: safe driving reduction
16-25       0           2.8                 50
16-25       1-12        2.8                 --
25-35       0-2         1.8                 50
25-35       3-12        1.8                 --
35-45       0-4         1.0                 100
35-45       5-12        1.0                 --
45-60       0-6         0.8                 150
45-60       7-12        0.8                 --
60-100      0-4         1.5                 200
60-100      5-12        1.5                 --

Insurance Premium Program Decision Table Test Cases

[Figure: decision table test cases on the age versus points plane. What about the age range endpoints?]

Insurance Premium Program Test Cases (Decision Table with Boundary Values Hybrid)

[Figure: hybrid decision table plus boundary value test cases on the age versus points plane. Ahhhh, at last!]

Wrap Up

- The inherent nature of the program being tested should dictate the test method; the decision table "expert system" recommendation (the Appropriate Choices table above) is just a start.
- Applications are seldom chemically pure: hybrid combinations of test methods can be very useful.
- Good judgment, based on insight, is the sign of a craftsperson.

Chapter 9
Path Testing, Part 2

(McCabe) Basis Path Testing


in math, a basis "spans" an entire space, such that everything in the space can be derived from the basis elements. the cyclomatic number of a strongly connected directed graph is the number of linearly independent cycles. given a program graph, we can always add an edge from the sink node to the source node to create a strongly connected graph. (assuming single entry, single exit) computing V(G) = e - n + p from the modified program graph yields the number of independent paths that must be tested. since all other program execution paths are linear combinations of the basis path, it is necessary to test the basis paths. (Some say this is sufficient; but that is problematic.) the next few slides follow McCabe's original example.


McCabe's Example
McCabe's Original Graph
[Figure: directed graph on nodes A through G, edges numbered 1-10]
V(G) = 10 - 7 + 2(1) = 5

Derived, Strongly Connected Graph
[Figure: the same graph with an added edge from sink G back to source A]
V(G) = 11 - 7 + 1 = 5
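A minimal sketch (assuming the edge list read off McCabe's original graph above) of the cyclomatic-number computation in Python:

    # My own sketch: V(G) = e - n + 2p for McCabe's example graph.
    edges = [("A","B"), ("A","D"), ("C","B"), ("B","C"), ("B","E"),
             ("D","E"), ("D","F"), ("E","F"), ("C","G"), ("F","G")]
    nodes = {n for edge in edges for n in edge}

    e, n, p = len(edges), len(nodes), 1   # p = 1 connected component
    print(e - n + 2 * p)                  # 5 -- five independent paths to test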

McCabe's Baseline Method


To determine a set of basis paths:
1. Pick a "baseline" path that corresponds to normal execution. (The baseline should have as many decisions as possible.)
2. To get succeeding basis paths, retrace the baseline until you reach a decision node. "Flip" the decision (take another alternative) and continue along as much of the baseline as possible.
3. Repeat this until all decisions have been flipped. When you reach V(G) basis paths, you're done.
4. If there aren't enough decisions in the first baseline path, find a second baseline and repeat steps 2 and 3.

Following this algorithm, we get basis paths for McCabe's example, as in the sketch below.
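A toy Python sketch of the baseline "flip" idea (my own illustration, not McCabe's code); it recovers four of the five basis paths, and step 4's second baseline supplies the fifth:

    # Decision nodes have two out-edges; flipping takes the alternative
    # edge once, then rejoins the baseline choices.
    graph = {"A": ["B", "D"], "B": ["C", "E"], "C": ["G", "B"],
             "D": ["E", "F"], "E": ["F"], "F": ["G"], "G": []}

    def follow(start, flip_at=None):
        path, node, flipped = [start], start, False
        while graph[node]:
            if node == flip_at and not flipped:
                node, flipped = graph[node][1], True  # take the alternative once
            else:
                node = graph[node][0]                 # follow the baseline choice
            path.append(node)
        return path

    basis = [follow("A")]                              # p1: A,B,C,G (mainline)
    for d in [n for n in graph if len(graph[n]) > 1]:  # decisions: A, B, C, D
        p = follow("A", flip_at=d)
        if p not in basis:
            basis.append(p)
    print(basis)  # p1, p4, p3, p2 -- p5 (A,D,F,G) needs a second baseline (step 4)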


Basis Paths
[Figure: McCabe's example graph, nodes A-G, edges 1-10]

path \ edge            1   2   3   4   5   6   7   8   9   10
p1: A,B,C,G            1   0   0   1   0   0   0   0   1   0
p2: A,B,C,B,C,G        1   0   1   2   0   0   0   0   1   0
p3: A,B,E,F,G          1   0   0   0   1   0   0   1   0   1
p4: A,D,E,F,G          0   1   0   0   0   1   0   1   0   1
p5: A,D,F,G            0   1   0   0   0   0   1   0   0   1
ex1: A,B,C,B,E,F,G     1   0   1   1   1   0   0   1   0   1
ex2: A,B,C,B,C,B,C,G   1   0   2   3   0   0   0   0   1   0

ex1 = p2 + p3 - p1
ex2 = 2p2 - p1
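The linear combinations can be checked directly on the edge-traversal vectors from the table above (a quick Python sketch of mine):

    p1 = [1,0,0,1,0,0,0,0,1,0]
    p2 = [1,0,1,2,0,0,0,0,1,0]
    p3 = [1,0,0,0,1,0,0,1,0,1]
    ex1 = [a + b - c for a, b, c in zip(p2, p3, p1)]
    ex2 = [2*a - c for a, c in zip(p2, p1)]
    print(ex1)  # [1,0,1,1,1,0,0,1,0,1] -- matches A,B,C,B,E,F,G
    print(ex2)  # [1,0,2,3,0,0,0,0,1,0] -- matches A,B,C,B,C,B,C,G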

McCabe Basis Paths in the Triangle Program


[Figure: program graph of the triangle program, nodes 4-23]

V(G) = 23 - 20 + 2(1) = 5

Basis Path Set B1:
p1: 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 16, 18, 19, 20, 22, 23   (mainline)
p2: 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 16, 18, 19, 20, 22, 23   (flipped at 9)
p3: 4, 5, 6, 7, 8, 9, 11, 12, 13, 21, 22, 23                   (flipped at 13)
p4: 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 15, 20, 22, 23           (flipped at 14)
p5: 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 16, 17, 19, 20, 22, 23   (flipped at 16)

There are 8 topologically possible paths: 4 are feasible and 4 are infeasible.
Exercise: Is every basis path feasible?


Essential Complexity
McCabe's notion of Essential Complexity deals with the extent to which a program violates the precepts of Structured Programming. To find the Essential Complexity of a program graph:
1. Identify a group of source statements that corresponds to one of the basic Structured Programming constructs.
2. Condense that group of statements into a separate node (with a new name).
3. Continue until no more Structured Programming constructs can be found.
The Essential Complexity of the original program is the cyclomatic complexity of the resulting program graph.

The essential complexity of a Structured Program is 1. Violations of the precepts of Structured Programming increase the essential complexity.


Essential Complexity of Schach's Program Graph

[Figure: the original program graph condenses to a graph on nodes first, A, B, C, last]

V(G) = 8 - 5 + 2(1) = 5
Essential complexity is 5


Condensation with Structured Programming Constructs


[Figure: successive condensations of a program graph (nodes first, A-L, last) using the Structured Programming constructs]


Violations of Structured Programming Precepts


Branching into a loop
Branching out of a loop

Branching into a decision

Branching out of a decision


Cons and Pros


Issues
Linear combinations of execution paths are counterintuitive. What does 2p2 - p1 really mean?
How does the baseline method guarantee feasible basis paths?
Given a set of feasible basis paths, is this a sufficient test?

Advantages
McCabe's approach does address both gaps and redundancies. Essential complexity leads to better programming practices. McCabe proved that violations of the structured programming constructs increase cyclomatic complexity, and violations cannot occur singly.


Program NextDate
Dim tomorrowDay, tomorrowMonth, tomorrowYear As Integer
Dim day, month, year As Integer
1.  Output("Enter today's date in the form MM DD YYYY")
2.  Input(month, day, year)
3.  Case month Of
4.  Case 1: month Is 1, 3, 5, 7, 8, Or 10:   '31 day months (except Dec.)
5.    If day < 31
6.      Then tomorrowDay = day + 1
7.      Else
8.        tomorrowDay = 1
9.        tomorrowMonth = month + 1
10.   EndIf
11. Case 2: month Is 4, 6, 9, Or 11:         '30 day months
12.   If day < 30
13.     Then tomorrowDay = day + 1
14.     Else
15.       tomorrowDay = 1
16.       tomorrowMonth = month + 1
17.   EndIf
18. Case 3: month Is 12:                     'December
19.   If day < 31
20.     Then tomorrowDay = day + 1
21.     Else
22.       tomorrowDay = 1
23.       tomorrowMonth = 1
24.       If year = 2012
25.         Then Output("2012 is over")
26.         Else tomorrowYear = year + 1
27.       EndIf
28.   EndIf

29. Case 4: month Is 2:                      'February
30.   If day < 28
31.     Then tomorrowDay = day + 1
32.     Else
33.       If day = 28
34.         Then
35.           If ((year MOD 4) = 0) AND ((year MOD 400) <> 0)
36.             Then tomorrowDay = 29        'leap year
37.             Else                         'not a leap year
38.               tomorrowDay = 1
39.               tomorrowMonth = 3
40.           EndIf
41.         Else If day = 29
42.             Then tomorrowDay = 1
43.               tomorrowMonth = 3
44.             Else Output("Cannot have Feb.", day)
45.           EndIf
46.       EndIf
47.   EndIf
48. EndCase
49. Output("Tomorrow's date is", tomorrowMonth, tomorrowDay, tomorrowYear)
50. End NextDate
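One detail worth flagging (my observation, not the slide's): the leap-year predicate on line 35 differs from the full Gregorian rule at century years. A quick Python comparison:

    # Sketch: line 35's predicate vs. the full Gregorian leap-year rule.
    def line35(year):
        return (year % 4 == 0) and (year % 400 != 0)

    def gregorian(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    for y in (1900, 1996, 2000, 2012):
        print(y, line35(y), gregorian(y))
    # 1900: True  False  -- disagreement at a century year
    # 1996: True  True
    # 2000: False True   -- disagreement again
    # 2012: True  True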


[Figure: program graph of NextDate, nodes 1-50]


Commission Program Pseudo-Code


Program Commission
Dim lockPrice, stockPrice, barrelPrice As Real
Dim locks, stocks, barrels As Integer
Dim totalLocks, totalStocks, totalBarrels As Integer
Dim lockSales, stockSales, barrelSales As Real
Dim sales, commission As Real
1.  lockPrice = 45.0
2.  stockPrice = 30.0
3.  barrelPrice = 25.0
4.  totalLocks = 0
5.  totalStocks = 0
6.  totalBarrels = 0
7.  Input(locks)
8.  While NOT(locks = -1)
9.    Input(stocks, barrels)
10.   totalLocks = totalLocks + locks
11.   totalStocks = totalStocks + stocks
12.   totalBarrels = totalBarrels + barrels
13.   Input(locks)
14. EndWhile
15. Output("Locks sold: ", totalLocks)
16. Output("Stocks sold: ", totalStocks)
17. Output("Barrels sold: ", totalBarrels)
18. sales = lockPrice*totalLocks + stockPrice*totalStocks + barrelPrice*totalBarrels
19. Output("Total sales: ", sales)
20. If (sales > 1800.0)
21.   Then
22.     commission = 0.10 * 1000.0
23.     commission = commission + 0.15 * 800.0
24.     commission = commission + 0.20*(sales - 1800.0)
25.   Else If (sales > 1000.0)
26.     Then
27.       commission = 0.10 * 1000.0
28.       commission = commission + 0.15*(sales - 1000.0)
29.     Else
30.       commission = 0.10 * sales
31.   EndIf
32. EndIf
33. Output("Commission is $", commission)
End Commission


[Figure: program graph of the Commission program]


Chapter 10
Data Flow Testing and Slice Testing


Data Flow Testing


Often confused with "dataflow diagrams. Main concern: places in a program where data values are defined and used. Static (compile time) and dynamic (execution time) versions. Static: Define/Reference Anomalies on a variable that
is defined but never used (referenced) is used but never defined is defined more than once

Starting point is a program, P, with program graph G(P), and the set V of variables in program P. "Interesting" data flows are then tested as mini-functions.


Definitions
Node n ∈ G(P) is a defining node of the variable v ∈ V, written as DEF(v, n), iff the value of the variable v is defined at the statement fragment corresponding to node n.
Node n ∈ G(P) is a usage node of the variable v ∈ V, written as USE(v, n), iff the value of the variable v is used at the statement fragment corresponding to node n.
A usage node USE(v, n) is a predicate use (denoted as P-use) iff the statement n is a predicate statement; otherwise, USE(v, n) is a computation use (denoted C-use).


More Definitions
A definition-use path with respect to a variable v (denoted du-path) is a path in PATHS(P) such that, for some v ∈ V, there are definition and usage nodes DEF(v, m) and USE(v, n) such that m and n are the initial and final nodes of the path.
A definition-clear path with respect to a variable v (denoted dc-path) is a definition-use path in PATHS(P) with initial and final nodes DEF(v, m) and USE(v, n) such that no other node in the path is a defining node of v.


Example: first part of the Commission Program


1.  Program Commission (INPUT, OUTPUT)
2.  Dim locks, stocks, barrels As Integer
3.  Dim lockPrice, stockPrice, barrelPrice As Real
4.  Dim totalLocks, totalStocks, totalBarrels As Integer
5.  Dim lockSales, stockSales, barrelSales As Real
6.  Dim sales, commission As Real
7.  lockPrice = 45.0
8.  stockPrice = 30.0
9.  barrelPrice = 25.0
10. totalLocks = 0
11. totalStocks = 0
12. totalBarrels = 0
13. Input(locks)
14. While NOT(locks = -1)
15.   Input(stocks, barrels)
16.   totalLocks = totalLocks + locks
17.   totalStocks = totalStocks + stocks
18.   totalBarrels = totalBarrels + barrels
19.   Input(locks)
20. EndWhile
21. Output("Locks sold: ", totalLocks)
22. Output("Stocks sold: ", totalStocks)
23. Output("Barrels sold: ", totalBarrels)


Rest of Commission Problem


23. Output("Barrels sold: ", totalBarrels)
24. lockSales = lockPrice * totalLocks
25. stockSales = stockPrice * totalStocks
26. barrelSales = barrelPrice * totalBarrels
27. sales = lockSales + stockSales + barrelSales
28. Output("Total sales: ", sales)
29. If (sales > 1800.0)
30.   Then
31.     commission = 0.10 * 1000.0
32.     commission = commission + 0.15 * 800.0
33.     commission = commission + 0.20 * (sales - 1800.0)
34.   Else If (sales > 1000.0)
35.     Then
36.       commission = 0.10 * 1000.0
37.       commission = commission + 0.15 * (sales - 1000.0)
38.     Else
39.       commission = 0.10 * sales
40.   EndIf
41. EndIf
42. Output("Commission is $", commission)
43. End Commission


Program Graph of Commission Problem


[Figure: program graph of the Commission problem, nodes 7-43]


Define/Use Test Cases


Technique: for a particular variable,
  - find all its definition and usage nodes,
  - then find the du-paths and dc-paths among these,
  - and for each path, devise a "suitable" (functional?) set of test cases.

Note: du-paths and dc-paths have both static and dynamic interpretations:
  Static: just as seen in the source code
  Dynamic: must consider execution-time flow (particularly for loops)

Definition clear paths are easier to test


No need to check each definition node, as is necessary for du-paths


Define and Use Nodes


Variable        Defined at Node    Used at Node
locks           13, 19             14, 16
stocks          15                 17
barrels         15                 18
totalLocks      10, 16             16, 21, 24
totalStocks     11, 17             17, 22, 25
totalBarrels    12, 18             18, 23, 26


Example (continued)
13. Input(locks)
14. While NOT(locks = -1)
15.   Input(stocks, barrels)
16.   totalLocks = totalLocks + locks
17.   totalStocks = totalStocks + stocks
18.   totalBarrels = totalBarrels + barrels
19.   Input(locks)
20. EndWhile

We have DEF(locks, 13) and DEF(locks, 19); USE(locks, 14), a predicate use; and USE(locks, 16), a computation use.
The du-paths for locks are the node sequences <13, 14> (a dc-path), <13, 14, 15, 16>, <19, 20, 14>, and <19, 20, 14, 15, 16>.
Is <13, 14, 15, 16> definition clear? Is <19, 20, 14, 15, 16> definition clear? What about repetitions?
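A small Python sketch (mine) that encodes the DEF/USE data above and answers the definition-clear questions mechanically:

    DEF = {13, 19}                  # nodes where locks is defined
    du_paths = [(13, 14), (13, 14, 15, 16),
                (19, 20, 14), (19, 20, 14, 15, 16)]

    def is_definition_clear(path):
        # no interior node may redefine the variable
        return not any(n in DEF for n in path[1:-1])

    for p in du_paths:
        print(p, is_definition_clear(p))
    # In the static view, all four du-paths turn out to be definition-clear.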


Coverage Metrics Based on du-paths


In the following definitions, T is a set of paths in the program graph G(P) of a program P, with the set V of variables.
The set T satisfies the All-Defs criterion for the program P iff for every variable v ∈ V, T contains definition-clear paths from every defining node of v to a use of v.
The set T satisfies the All-Uses criterion for the program P iff for every variable v ∈ V, T contains definition-clear paths from every defining node of v to every use of v, and to the successor node of each USE(v, n).

Coverage Metrics Based on du-paths


(continued) The set T satisfies the All-P-Uses/Some C-Uses criterion for the program P iff for every variable v ∈ V, T contains definition-clear paths from every defining node of v to every predicate use of v; if a definition of v has no P-uses, a definition-clear path leads to at least one computation use.


Coverage Metrics Based on du-paths


(continued) The set T satisfies the All-C-Uses/Some P-Uses criterion for the program P iff for every variable v ∈ V, T contains definition-clear paths from every defining node of v to every computation use of v; if a definition of v has no C-uses, a definition-clear path leads to at least one predicate use.


Coverage Metrics Based on du-paths


(concluded) The set T satisfies the All-du-paths criterion for the program P iff for every variable v ∈ V, T contains definition-clear paths from every defining node of v to every use of v and to the successor node of each USE(v, n), and these paths are either single-loop traversals or cycle-free.


Rapps-Weyuker Coverage Subsumption


All Paths
    |
All DU-Paths
    |
All Uses
    |__________________________
    |                          |
All C-Uses/Some P-Uses    All P-Uses/Some C-Uses
    |                          |
All Defs                  All P-Uses
                               |
                          All Edges
                               |
                          All Nodes

S. Rapps and E. J. Weyuker, "Selecting Software Test Data Using Data Flow Information," IEEE Transactions on Software Engineering, vol. 11, no. 4, April 1985, pp. 367-375.


Exercise: Where does the All definition-clear paths coverage metric fit in the Rapps-Weyuker lattice?


Data Flow Testing Strategies


Data flow testing is indicated in:
  - computation-intensive applications
  - long programs
  - programs with many variables

A definition-clear du-path represents a small function that can be tested by itself. If a du-path is not definition-clear, it should be tested for each defining node.


Slice Testing
Often confused with "module execution paths" Main concern: portions of a program that "contribute" to the value of a variable at some point in the program. Nice analogy with history -- a way to separate a complex system into "disjoint" components that interact:
European history North American history Orient history

A dynamic construct.


Slice Testing Definitions


Starting point is a program, P, with program graph G(P), and the set V of variables in program P. Nodes in the program graph are numbered and correspond to statement fragments.

Definition: The slice on the variable set V at statement fragment n, written S(V, n), is the set of node numbers of all statement fragments in P prior to n that contribute to the values of variables in V at statement fragment n.

This is actually a backward slice. Exercise: define a forward slice.


Fine Points
"prior to" is the dynamic part of the definition. "contribute" is best understood by extending the Define and Use concepts:
P-use: used in a predicate (decision) C-use: used in computation O-use: used for output L-use: used for location (pointers, subscripts) I-use: iteration (internal counters, loop indices) I-def: defined by input A-def: defined by assignment

usually, the set V of variables consists of just one element. can choose to define a slice as a compilable set of statement fragments -- this extends the meaning of "contribute" because slices are sets, we can develop a lattice based on the subset relationship.

In the program fragment


13. Input(locks)
14. While NOT(locks = -1)
15.   Input(stocks, barrels)
16.   totalLocks = totalLocks + locks
17.   totalStocks = totalStocks + stocks
18.   totalBarrels = totalBarrels + barrels
19.   Input(locks)
20. EndWhile

There are these slices on locks (notice that statements 15, 17, and 18 do not appear):
S1: S(locks, 13) = {13}
S2: S(locks, 14) = {13, 14, 19, 20}
S3: S(locks, 16) = {13, 14, 19, 20}
S4: S(locks, 19) = {19}
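A minimal backward-slice sketch in Python; the dependency map is hand-coded from the fragment above (my own illustration) rather than computed from source:

    affects = {                    # node -> nodes it directly depends on
        14: {13, 19, 20},          # loop predicate sees both defs of locks
        16: {13, 14, 19, 20},      # use of locks inside the loop body
        13: set(), 19: set(), 20: {14},
    }

    def backward_slice(node):
        result, work = {node}, [node]
        while work:                # transitive closure over the dependencies
            n = work.pop()
            for m in affects.get(n, set()) - result:
                result.add(m)
                work.append(m)
        return result

    print(sorted(backward_slice(14)))   # [13, 14, 19, 20] -- matches S2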


Lattice of Slices
Because a slice is a set of statement fragment numbers, we can find slices that are subsets of other slices. This allows us to work backwards from points in a program, presumably where a fault is suspected. The statements leading to the value of commission when it is output are an excellent example of this pattern. Some researchers propose that this is the way good programmers think when they debug code.


Example Lattice of Slices


[Figure: lattice of the slices below under the subset relation]

S34: S(commission, 41) = {41}
S35: S(commission, 42) = {41, 42}
S36: S(commission, 43) = {3, 4, 5, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 30, 36, 41, 42, 43}
S37: S(commission, 47) = {47}
S38: S(commission, 48) = {3, 4, 5, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 36, 47, 48}
S39: S(commission, 50) = {3, 4, 5, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 36, 50}
S40: S(commission, 51) = {3, 4, 5, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 36, 41, 42, 43, 47, 48, 50}

Diagnostic Testing with Slices


Relative complements of slices yield a "diagnostic" capability. The relative complement of a set B with respect to another set A is the set of all elements of A that are not elements of B. It is denoted as A - B. Consider the relative complement set S(commission, 48) - S(sales, 35):

S(commission, 48) = {3, 4, 5, 36, 18, 19, 20, 23, 24, 25, 26, 27, 34, 38, 39, 40, 44, 45, 47}
S(sales, 35) = {3, 4, 5, 36, 18, 19, 20, 23, 24, 25, 26, 27}
S(commission, 48) - S(sales, 35) = {34, 38, 39, 40, 44, 45, 47}

If there is a problem with commission at line 48, we can divide the program into two parts, the computation of sales at line 34, and the computation of commission between lines 35 and 48. If sales is OK at line 34, the problem must lie in the relative complement; if not, the problem may be in either portion.
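The relative complement is just set difference; a quick check in Python with the node numbers above:

    commission_48 = {3, 4, 5, 36, 18, 19, 20, 23, 24, 25, 26, 27,
                     34, 38, 39, 40, 44, 45, 47}
    sales_35 = {3, 4, 5, 36, 18, 19, 20, 23, 24, 25, 26, 27}
    print(sorted(commission_48 - sales_35))  # [34, 38, 39, 40, 44, 45, 47]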


Programming with Slices


One researcher suggests the possibility of slice splicing:
  - Code a slice, compile and test it.
  - Code another slice, compile and test it, then splice the two slices.
  - Continue until the whole program is complete.

Exercise: in what ways is slice splicing distinct from agile (bottom up) programming?


Exercise/Discussion: When should testing stop?


  - when you run out of time?
  - when continued testing causes no new failures?
  - when continued testing reveals no new faults?
  - when you can't think of any new test cases?
  - when you reach a point of diminishing returns?
  - when mandated coverage has been attained?
  - when all faults have been removed?


Chapter 11
Retrospective on Structural Testing


Structural Testing Comparison


  - How much testing is enough?
  - Effort and size trendlines
  - Metrics for test method comparison


Exercise/Discussion: When should testing stop?


  - when you run out of time?
  - when continued testing causes no new failures?
  - when continued testing reveals no new faults?
  - when you can't think of any new test cases?
  - when you reach a point of diminishing returns?
  - when mandated coverage has been attained?
  - when all faults have been removed?


Number of Test Coverage Items

[Chart: the number of test coverage items rises from low to high with sophistication: DD-Path, Basis Path, DU-Path, Slice]


Effort to Identify Test Coverage Items

[Chart: the effort to identify test coverage items also rises from low to high with sophistication: DD-Path, Basis Path, DU-Path, Slice]


Number of Coverage Items in the Commission Problem


[Bar chart, scale 0-40: number of coverage items in the Commission problem for DD-Path, Basis Path, DU-Path, and Slices]


Metrics for Test Method Comparison


Assume that a functional testing technique M generates m test cases, and that these test cases are tracked with respect to a structural metric S that identifies s elements in the unit under test. When the m test cases are executed, they traverse n of the s structural elements. This framework supports the definition of metrics for testing effectiveness.


Metrics for Testing Effectiveness


The coverage of a methodology M with respect to a metric S is the ratio of n to s. We denote it as C(M, S).
The redundancy of a methodology M with respect to a metric S is the ratio of m to s. We denote it as R(M, S).
The net redundancy of a methodology M with respect to a metric S is the ratio of m to n. We denote it as NR(M, S).
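A minimal Python sketch of the three metrics, checked against one row of the comparison table on the next slide (triangle-program BVA: m = 15, n = 7, s = 11):

    def coverage(m, n, s):        return n / s    # C(M,S)
    def redundancy(m, n, s):      return m / s    # R(M,S)
    def net_redundancy(m, n, s):  return m / n    # NR(M,S)

    m, n, s = 15, 7, 11
    print(round(coverage(m, n, s), 2),        # 0.64
          round(redundancy(m, n, s), 2),      # 1.36
          round(net_redundancy(m, n, s), 2))  # 2.14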


Sample Comparisons
Method                  m     n    s    C(M,S)  R(M,S)  NR(M,S)
Triangle Program
  BVA                   15    7    11   0.64    1.36    2.14
  Worst Case BVA        125   11   11   1.00    11.36   11.36
Commission Program
  Output BVA            25    11   11   1.00    2.27    2.27
  Decision Table        3     11   11   1.00    0.27    0.27
  DD-Path               25    11   11   1.00    2.27    2.27
  DU-Path               25    33   33   1.00    0.76    0.76
  Slices                25    40   40   1.00    0.63    0.63


Chapter 12
Levels of Testing


Levels and Life Cycle Models


Levels of testing depend primarily on the software life cycle used. BUT, most forms of testing levels are derived from the V-Model version of the good, old Waterfall Model. Iterative models introduce the need for regression testing. System testing is greatly enhanced when an executable specification is used.


Software Development Life Cycle Models


  - Waterfall (presumes perfect foresight)
  - Incremental (delayed prototypes)
  - Rapid Prototyping (elicit user feedback)
  - Operational Specification (executable)
  - Transformational Implementation (lab only)
  - Agile Methods

Increasingly Operational views


Ref.: W. Agresti (ed.), New Paradigms for Software Development, IEEE Tutorial.

The Waterfall Model

[Figure: Requirements Specification -> Preliminary Design -> Detailed Design -> Coding -> Unit Testing -> Integration Testing -> System Testing]

Output of a phase is the input to the next phase.


The Waterfall Model (continued)


[Figure: each phase turns a "what" into a "how" for the next phase: Requirements Specification -> Preliminary Design -> Detailed Design -> Coding, then Unit Testing -> Integration Testing -> System Testing, with feedback cycles between adjacent phases]


The Waterfall Model (aka the V-Model)


Levels of Abstraction and Testing

[Figure: the V-Model pairs Requirements Specification with System Testing, Preliminary Design with Integration Testing, and Detailed Design with Unit Testing, with Coding at the bottom of the V]


The Waterfall Model: Advantages


Framework fits well with:
  - levels of abstraction
  - levels of management
  - programming language packaging
Phases have clearly defined end products.
Convenient for project management.
Works well with functional decomposition.
Massive parallel development.


The Waterfall Model: Disadvantages


Long customer feedback cycle: resolving faults found during system testing is extremely expensive.
"Exists for the convenience of management" (M. Jackson): stifles creativity and unnecessarily constrains the designer's thought processes.
Stresses analysis to the exclusion of synthesis.
High peak in the manpower loading profile.
"Requires perfect foresight" (William Agresti): any errors or omissions in early phases will propagate.


Incremental Software Development


[Figure: Requirements Specification -> Preliminary Design -> Detailed Design -> Coding, feeding a series of builds; each build undergoes Unit Testing and Integration Testing, plus Regression Testing and Progression Testing]


Rapid Prototyping Software Development


[Figure: define prototype objectives -> build prototype -> customer feedback, iterated as a series of prototypes in place of the requirements phase; then Preliminary Design, Detailed Design, Coding, Unit Testing, Integration Testing, System Testing]



Rapid Prototyping
Why prototype?
  1. To determine feasibility
  2. To obtain early customer feedback
Keep or dispose? To be rapid, many compromises are made. If a prototype is kept, it will be extremely difficult to modify and maintain. Best practice: dispose once the purpose has been served.


Software Development with an Executable Specification


[Figure: develop executable specification -> execute spec -> customer feedback, iterated in place of the requirements phase; then Preliminary Design, Detailed Design, Coding, Unit Testing, Integration Testing, System Testing]


Executable Specifications
Why use an executable specification?
  1. To determine behavior
  2. To obtain early customer feedback
Other uses?
  1. Automatic generation of system test cases
  2. Develop the order of test case execution
  3. Training
  4. Early analysis


Transformational Implementation
[Figure: a formal requirements specification is refined by a series of transformations into a working system; customer testing takes the place of system testing]


Transformational Specification Pros and Cons


Advantages:
  - Intermediate waterfall phases (design, code, test) are eliminated.
  - The customer tests the delivered system.
  - All maintenance is done on the specification.
Disadvantages:
  - The specification must be very formal (predicate calculus).
  - The series of transformations is not well understood; it tends to be specific to application domains.
  - Very limited success so far; doesn't scale up well.
  - Knowing what to change in the formal specification is difficult.


Agile Methods
Many flavors:
  - eXtreme Programming (XP)
  - SCRUM
  - Test-Driven Development

Customer-Driven Goals:
  - Respond to the customer
  - Reduce unnecessary effort
  - Always have something that works


Length of Feedback Cycle (Customer/Developer)


[Figure: spectrum of feedback-cycle length, from Waterfall (longest) to Agile (shortest)]

Where would you put the other life cycle models?

Hybrid Models

Because they replace the Requirements specification phase, rapid prototyping and executable specifications can be merged with the iterative models. The Agile Models are highly iterative, but they typically are not used in combination with rapid prototyping or executable specifications.


Integration testing
by Kristian Sandahl, IDA, TDDD04 spring 2012

Levels of testing

[Figure: V-model of levels. Requirements are validated ("Validate Requirements, Verify Specification") by the Acceptance Test (release testing). System Design (architecture, high-level design) is verified by System Testing (integration testing of modules). Module Design (program design, detailed design) is verified by Module Testing (integration testing of units). Implementation of units (classes, procedures, functions) is verified by Unit Testing. Maintenance sits at the top; Project Management, Software Quality Assurance (SQA), Supporting Tools, and Education span all levels.]

Outline
  - Informal example
  - Theory:
    - Decomposition-based integration
    - Call-graph-based integration
    - Path-based integration

Example: Local bus card reader


[Figure: function groups (aka capabilities, aka anatomy) and their dependencies: Sell tickets, Choose ticket, User buttons, Display, Read RFID, Show balance, Register travel, Check balance, Deduct money, Check validity, Communicate with server, Power supply]

Example: Organic integration plan

[Figure: the function groups organized into layers: Services (Sell tickets, Choose ticket, Show balance, Register travel); User interface (User buttons, Display, Read RFID); Server functions (Check balance, Deduct money, Check validity); Communication (Communicate with server); Hardware and supply (Power supply)]

Big Bang integration testing

[Figure: all function groups connected at once]

Bring everything together, switch on the current, and try to buy a ticket!

Is Big Bang smart?


Perhaps for small systems with well-defined components and interfaces, since no extra software is needed. Problems:
  - It is very difficult to isolate the defects found, as it is very difficult to tell whether a defect is in a component or in an interface.
  - Defects present at the interfaces of components are identified at a very late stage.
  - There is a high probability of missing some critical defects which might surface in production.
  - It is very difficult to make sure that all the cases for integration testing are covered. (www.TestingGeek.com)

Bottom-up integration 1

[Figure: start with the bottom layer: Power supply (Hardware and supply)]

Environment creation: install components, draw cables, measure and compare with calculations.

Bottom-up integration 2

[Figure: add the Communication layer (Communicate with server) on top of Hardware and supply, exercised by drivers: a rudimentary client and a rudimentary server]

Communication possible?

Driver

A pretend module that requires a sub-system and passes a test case to it.

[Figure: the driver performs setup, calls SUT(x), and then performs verification; a black-box view of the SUT]

SUT = System Under Test

Is bottom-up smart?

Indicated if the basic functions are complicated, error-prone, or carry development risks; if a bottom-up development strategy is used; or if there are strict performance or real-time requirements.

Problems:
  - Lower-level functions are often off-the-shelf or trivial
  - Complicated user-interface testing is postponed
  - End-user feedback is postponed
  - Effort to write drivers

Top-down integration 1

[Figure: start with the Services layer (Sell tickets, Choose ticket, Show balance, Register travel); the environment consists of modules for the services and a prototype interface driving them; test with end-users]

Is it possible to perform the services in a good way?

Top-down integration 2

[Figure: Services and User interface layers (User buttons, Display, Read RFID) integrated; the server functions are replaced by stubs (Function 1, Function 2, ...) backed by a rudimentary server]

Are all our test scenarios fulfilled?

Stub

A program or a method that simulates the input-output functionality of a missing sub-system by answering to the decomposition sequence of the calling sub-system and returning back simulated or canned data.

[Figure: the SUT calls Service(x); the stub checks x and returns canned data y]

Is top-down smart?
Advantages:
  - Test cases are defined for the functional requirements of the system
  - Defects in the general design can be found early
  - Works well with many incremental development methods
  - No need for drivers

Problems:
  - Technical details are postponed (potential show-stoppers)
  - Many stubs are required
  - Stubs with many conditions are hard to write

Decomposition-based integration

The functional decomposition tree:
  - hierarchical order of processes
  - return edges excluded
  - reflects the lexical inclusion of units, in terms of the order in which they need to be compiled

Jorgensen, Paul C., "Chapter 13 - Integration Testing," Software Testing: A Craftsman's Approach, Third Edition, Auerbach Publications, 2008 (Books24x7: go to http://guide.bibl.liu.se/datavetenskap, click Books24x7, log in, and search ISBN 9780849374753).

Functional Decomposition of the SATM System

Example: SATM from Jorgensen

[Figure: functional decomposition tree, root 1 with intermediate units A-F and leaves 2-27]

Table 1: SATM units and abbreviated names

Unit  Level      Unit Name
1     1          SATM System
A     1.1        Device Sense & Control
D     1.1.1      Door Sense & Control
2     1.1.1.1    Get Door Status
3     1.1.1.2    Control Door
4     1.1.1.3    Dispense Cash
E     1.1.2      Slot Sense & Control
5     1.1.2.1    WatchCardSlot
6     1.1.2.2    Get Deposit Slot Status
7     1.1.2.3    Control Card Roller
8     1.1.2.4    Control Envelope Roller
9     1.1.2.5    Read Card Strip
10    1.2        Central Bank Comm.
11    1.2.1      Get PIN for PAN
12    1.2.2      Get Account Status
13    1.2.3      Post Daily Transactions
B     1.3        Terminal Sense & Control
14    1.3.1      Screen Driver
15    1.3.2      Key Sensor
C     1.4        Manage Session
16    1.4.1      Validate Card
17    1.4.2      Validate PIN
18    1.4.2.1    GetPIN
F     1.4.3      Close Session
19    1.4.3.1    New Transaction Request
20    1.4.3.2    Print Receipt
21    1.4.3.3    Post Transaction Local
22    1.4.4      Manage Transaction
23    1.4.4.1    Get Transaction Type
24    1.4.4.2    Get Account Type
25    1.4.4.3    Report Balance
26    1.4.4.4    Process Deposit
27    1.4.4.5    Process Withdrawal

Three-level functional decomposition tree

[Figure: root A at Level 1; B, C, D at Level 2; leaves E, F under B and G, H under D at Level 3]

Big-Bang testing

[Figure: the three-level tree]

Unit test A, unit test B, ..., unit test H; then one system-wide test.
Environment: A, B, C, D, E, F, G, H

Bottom-up testing

[Figure: the three-level tree]

Environments:
  Session 1: E, driver(B)
  S2: F, driver(B)
  S3: E, F, driver(B)
  S4: G, driver(D)
  S5: H, driver(D)
  S6: G, H, driver(D)
  S7: E, F, B, driver(A)
  S8: C, driver(A)
  S9: G, H, D, driver(A)
  S10: E, F, B, C, G, H, D, A

General formula:
  Number of drivers = nodes - leaves = 3
  Number of sessions = (nodes - leaves) + edges = 10
  SATM: 10 drivers, 42 sessions

Top-down testing

[Figure: the three-level tree]

Environments:
  Session 1: A, stub(B), stub(C), stub(D)
  S2: A, B, stub(C), stub(D)
  S3: A, stub(B), C, stub(D)
  S4: A, stub(B), stub(C), D
  S5: A, B, stub(E), stub(F), C, D, stub(G), stub(H)
  S6: A, B, E, stub(F), C, D, stub(G), stub(H)
  S7: A, B, stub(E), F, C, D, stub(G), stub(H)
  S8: A, B, stub(E), stub(F), C, D, G, stub(H)
  S9: A, B, stub(E), stub(F), C, D, stub(G), H
  S10: A, B, E, F, C, D, G, H

General formula:
  Number of stubs = nodes - 1 = 7
  Number of sessions = (nodes - leaves) + edges = 10
  SATM: 32 stubs, 42 sessions

(Both counting formulas are checked in the sketch below.)
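A few lines of Python (my sketch; node, leaf, and edge counts taken from the example tree and the SATM decomposition):

    def counts(nodes, leaves, edges):
        drivers  = nodes - leaves           # bottom-up
        stubs    = nodes - 1                # top-down
        sessions = (nodes - leaves) + edges
        return drivers, stubs, sessions

    print(counts(nodes=8,  leaves=5,  edges=7))   # (3, 7, 10)  example tree
    print(counts(nodes=33, leaves=23, edges=32))  # (10, 32, 42) SATM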

Sandwich testing

[Figure: the three-level tree, with a target level in the middle]

Environments:
  Session 1: A, stub(B), stub(C), stub(D)
  S2: A, B, stub(C), stub(D)
  S3: A, stub(B), C, stub(D)
  S4: A, stub(B), stub(C), D
  S5: E, driver(B)
  S6: F, driver(B)
  S7: E, F, driver(B)
  S8: G, driver(D)
  S9: H, driver(D)
  S10: G, H, driver(D)
  S11: A, B, E, F, C, D, G, H

Fewer stubs and drivers; risk-driven; "small bang" at the target level; more complicated.
Number of stubs: 3. Number of drivers: 2. Number of sessions: 11.

Potential problems

  - Artificial: assumes correct units and interfaces
  - Tests the correct structure only
  - Investment in stubs and drivers
  - Retesting

Call-graph integration testing

  - Use the call graph instead of the decomposition tree
  - The call graph is directed
  - Two types of tests:
    - Pair-wise integration testing
    - Neighborhood integration testing
  - Matches well with development and builds
  - Tests behaviour

Pairwise integration of SATM

[Figure: the SATM call graph]

One session per edge: 40 sessions, using real code but no extra code.

Neighborhood integration of SATM

[Figure: the SATM call graph grouped into neighborhoods]

Integrating the direct neighbors of nodes.
Number of sessions = nodes - sink nodes (a sink node has no outgoing calls).
SATM: 11 sessions. (A small counting sketch follows.)
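A small Python sketch (edges invented for illustration, not the SATM call graph) of both call-graph counting rules:

    calls = {1: [4, 5], 4: [], 5: [7], 7: []}    # node -> called nodes

    nodes = set(calls) | {m for ms in calls.values() for m in ms}
    sinks = {n for n in nodes if not calls.get(n)}
    print(sum(len(v) for v in calls.values()))   # pairwise sessions = edges = 3
    print(len(nodes) - len(sinks))               # neighborhood sessions = 2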

Potential problems

  - "Small bang" problems, especially fault isolation in large neighborhoods
  - Retesting is needed if a node is changed
  - Assumes correct units

Path-based integration

  - Base testing on system-level threads
  - Motivated by overall system behaviour, not by the structure
  - Smooth preparation for system-level testing

Example: an MM-Path. Module A calls module B, which in turn calls module C.

[Figure: program graphs of modules A (nodes 1-6), B (nodes 1-4), and C (nodes 1-5), with the messages A -> B and B -> C and their returns]

Definitions (extensions for integration testing)

Definition: A source node in a program is a statement fragment at which program execution begins or resumes.
Definition: A sink node in a program is a statement fragment at which program execution halts or terminates.
Definition: A module execution path (MEP) is a sequence of statements that begins with a source node and ends with a sink node, with no intervening sink nodes.
Definition: A message is a programming-language mechanism by which one unit transfers control to another unit.

More definitions

Definition: An MM-Path is an interleaved sequence of module execution paths (MEPs) and messages.
Definition: Given a set of units, their MM-Path graph is the directed graph in which nodes are module execution paths and edges correspond to messages and returns from one unit to another.

Example (cont.): source nodes and sink nodes

[Figure: the program graphs of A, B, and C with their source and sink nodes marked]

Module A: source nodes 1 and 5; sink nodes 4 and 6
Module B: source nodes 1 and 3; sink nodes 2 and 4
Module C: source node 1; sink node 5

Example (cont.): module execution paths (MEPs)

MEP(A, I)   = <1, 2, 3, 6>
MEP(A, II)  = <1, 2, 4>
MEP(A, III) = <5, 6>
MEP(B, I)   = <1, 2>
MEP(B, II)  = <3, 4>
MEP(C, I)   = <1, 2, 4, 5>
MEP(C, II)  = <1, 3, 4, 5>

Creating an MM-Path graph

[Figure: the program graphs of A, B, and C overlaid with the messages and returns connecting their MEPs]

MM-Path graph example

[Figure: nodes MEP(A,I), MEP(A,II), MEP(A,III), MEP(B,I), MEP(B,II), MEP(C,I), MEP(C,II); solid edges are messages, dashed edges are returns]

Test cases are selected to cover these paths. A small path-enumeration sketch follows.
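A sketch of path-based test-case derivation (the edge set below is my own illustrative reading, not taken from the figure): enumerate the source-to-sink paths of an MM-Path graph.

    graph = {
        "MEP(A,II)":  ["MEP(B,I)"],               # message A -> B
        "MEP(B,I)":   ["MEP(C,I)", "MEP(C,II)"],  # message B -> C
        "MEP(C,I)":   ["MEP(B,II)"],              # return C -> B
        "MEP(C,II)":  ["MEP(B,II)"],
        "MEP(B,II)":  ["MEP(A,III)"],             # return B -> A
        "MEP(A,III)": [],
    }

    def paths(node, prefix=()):
        prefix += (node,)
        if not graph[node]:                       # sink MEP: one complete path
            yield prefix
            return
        for nxt in graph[node]:
            yield from paths(nxt, prefix)

    for p in paths("MEP(A,II)"):
        print(" -> ".join(p))
    # Two MM-Paths to cover: one through MEP(C,I), one through MEP(C,II).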

Problems

More effort is needed to identify the MM-Paths. This effort is probably offset by the elimination of stub and driver development.

Chapter 14
System Testing


System Testing
  - Threads (= system-level test cases)
  - Basic concepts of requirements specification
  - Identifying threads
  - Metrics for system testing


Various Views of Threads


  - a scenario of normal usage
  - a Use Case
  - a system-level test case
  - a stimulus/response pair
  - behavior that results from a sequence of system-level inputs
  - an interleaved sequence of port input and output events
  - a sequence of transitions in a state machine description of the system
  - an interleaved sequence of object messages and method executions
  - a sequence of machine instructions
  - a sequence of source instructions
  - a sequence of atomic system functions


Some Observations
Threads are dynamic; threads occur at execution time.
Threads can be identified in (or, even better, derived from) many models:
  - Finite state machines
  - Decision tables
  - Statecharts
  - Petri nets
  - Use Cases (sufficiently detailed)


Candidate Threads in the Simple ATM System


  - Entry of a digit
  - Entry of a Personal Identification Number (PIN)
  - A simple transaction: ATM card entry, PIN entry, select transaction type (deposit, withdraw), present account details (checking or savings, amount), conduct the operation, and report the results
  - An ATM session, containing two or more simple transactions


Levels of Threads
A unit-level thread is a path in the program graph of a unit.
An integration-level thread is an MM-Path.
There are two levels of system-level threads:
  - a single thread
  - a set of interacting threads
If necessary, we can deal with threads in systems of systems.


System Level Threads


An atomic system function (ASF) is an action that is observable at the system level in terms of port input and output events.
Given a system defined in terms of atomic system functions, the ASF graph of the system is the directed graph in which nodes are atomic system functions and edges represent sequential flow.
A source ASF is an atomic system function that appears as a source node in the ASF graph of a system; similarly, a sink ASF is an atomic system function that appears as a sink node in the ASF graph.
A system thread is a path from a source ASF to a sink ASF in the ASF graph of a system.


ASFs and MM-Paths


MM-Paths end at a point of message quiescence. In an event-driven system, ASFs frequently occur between points of event quiescence. There is no nice connection between ASFs and MM-Paths.
An ASF corresponds to a stimulus/response pair:
  - "stimulus/response cluster" is more accurate
  - depending on its context, an input event may result in several distinct output events


Classifying the Candidate Threads


an MM-Path:            Entry of a digit
an ASF:                Entry of a Personal Identification Number (PIN)
a thread:              A simple transaction: ATM card entry, PIN entry, select transaction type (deposit, withdraw), present account details (checking or savings, amount), conduct the operation, and report the results
a sequence of threads: An ATM session, containing two or more simple transactions


Closer Look at the PIN Entry ASF


A sequence of system-level inputs and outputs:
  - a screen requesting PIN digits
  - an interleaved sequence of digit keystrokes and screen responses
  - the possibility of cancellation by the customer before the full PIN is entered
  - a system disposition (depending on the validity of the PIN)

Observe:
  - several stimulus/response pairs
  - this is the cross-over point between integration and system testing


E/R Model of Basis Concepts

[Figure: entity/relationship model. Data (1..n) and Events (1..n) are the inputs and outputs of Actions; Events occur on Devices; a Thread is a sequence (1..n) of Actions]

Mainline requirements specification techniques populate some (or all) portions of this database.


Modeling with the Basis Concepts


[Figure: Data belongs to the structural model; Event and Device belong to the context model; Action, Thread, and Condition belong to the behavior model]


[Figure: SATM session FSM with transition probabilities.
1. Card Entry: bad card 0.05; legitimate card 0.95 leads to 2.1 First PIN Try.
2.1 First PIN Try: correct PIN 0.90; incorrect PIN 0.10 leads to 2.2 Second PIN Try.
2.2 Second PIN Try: correct PIN 0.90; incorrect PIN 0.10 leads to 2.3 Third PIN Try.
2.3 Third PIN Try: correct PIN 0.90.
3. Await Transaction Choice: Button B1 (Balance) 0.05; Button B2 (Deposit) 0.10; Button B3 (Withdrawal) 0.85.
Withdrawal: normal 0.85; low cash 0.10; low balance 0.05. Balance and Deposit lead (1.00) to Print Receipt.]


Decomposition of the Await PIN State (PIN Entry FSM)

[Figure: states Idle, Awaiting First PIN Try, Awaiting Second PIN Try, Awaiting Third PIN Try, Wrong Card, and Await Transaction Choice.
Idle: legitimate card / display screen S2 -> Awaiting First PIN Try.
Each Awaiting state: correct PIN / display screen S5 -> Await Transaction Choice; incorrect PIN / display screen S3 -> next try.
Third incorrect PIN / display screen S4 -> Wrong Card; then display screen S1, eject card -> Idle.]


PIN Try finite state machine (PIN Try x)

[Figure: states 2.x.1 "0 Digits Received" through 2.x.5 "4 Digits Received", plus 2.x.6 "Cancel Hit" and "Cancelled".
Transitions x1-x4: digit / echo 'X---', 'XX--', 'XXX-', 'XXXX'.
From "4 Digits Received": x5 Correct PIN, x6 Incorrect PIN.
Transitions x7-x10: cancel, from each digit-entry state; x11 leads to Cancelled.]

Port input events: digit, cancel
Port output events: echo 'X---', echo 'XX--', echo 'XXX-', echo 'XXXX'
Logical output events: Correct PIN, Incorrect PIN, Cancelled


Deriving An ASF Test Case


Description: Correct PIN on First Try

Port event sequence in the PIN Try FSM:
  Port Input Event      Port Output Event
                        Screen 2 displayed with '----'
  1 pressed             Screen 2 displayed with 'X---'
  2 pressed             Screen 2 displayed with 'XX--'
  3 pressed             Screen 2 displayed with 'XXX-'
  4 pressed             Screen 2 displayed with 'XXXX'
  (Correct PIN)         Screen 5 displayed


Deriving An ASF Test Case (cont'd)


Description: Correct PIN on First Try

Test operator instructions
Initial conditions: Screen 2 being displayed with no digit echoes
Perform the following sequence of steps:
  1. Verify: Screen 2 displayed with '----'
  2. Cause:  1 pressed
  3. Verify: Screen 2 displayed with 'X---'
  4. Cause:  2 pressed
  5. Verify: Screen 2 displayed with 'XX--'
  6. Cause:  3 pressed
  7. Verify: Screen 2 displayed with 'XXX-'
  8. Cause:  4 pressed
  9. Verify: Screen 2 displayed with 'XXXX'
Post condition: Screen 5 displayed
Test result: ___ Pass  ___ Fail


Slight Digression: Architecture of an Automated Test Executor

[Figure: the ATE processor issues "cause" commands (e.g., a digit keypress) and "verify" commands (e.g., screen text) through a harness that sits at the port boundary of the system under test]

Metrics for System Testing

In the PIN Entry ASF, for a given PIN, there are 156 distinct paths from the First PIN Try state to the Await Transaction Choice or Card Entry states in the PIN Entry FSM. Of these, 31 correspond to eventually correct PIN entries (1 on the first try, 5 on the second try, and 25 on the third try); the other 125 paths correspond to those with incorrect digits or with cancel keystrokes.

To control this explosion, we have two possibilities:
  - pseudo-structural coverage metrics
  - "true" structural coverage metrics


Pseudo-structural Coverage Metrics

Behavioral models provide "natural" metrics:
  Decision table metrics: every condition, every action, every rule
  FSM metrics: every state, every transition, every port event
  Petri net metrics: every place, every transition, every marking

These are pseudo-structural because they are just models of the eventual system.

Pseudo-structural Coverage Metrics for PIN Try


Input Event Sequence    State Sequence
1,2,3,4                 2.x.1, 2.x.2, 2.x.3, 2.x.4, 2.x.5
1,2,3,5                 2.x.1, 2.x.2, 2.x.3, 2.x.4, 2.x.5
C                       2.x.1, 2.x.6
1,C                     2.x.1, 2.x.2, 2.x.6
1,2,C                   2.x.1, 2.x.2, 2.x.3, 2.x.6
1,2,3,C                 2.x.1, 2.x.2, 2.x.3, 2.x.4, 2.x.6

Two test cases yield state coverage.

Input Event Sequence    Path of Transitions
1,2,3,4                 x1, x2, x3, x4, x5
1,2,3,5                 x1, x2, x3, x4, x6
C                       x7, x11
1,C                     x1, x8, x11
1,2,C                   x1, x2, x9, x11
1,2,3,C                 x1, x2, x3, x10, x11

All six are needed for transition coverage.



Pseudo-structural Coverage Metrics for PIN Entry FSM


Input Event Sequence    State Sequence
1,2,3,4
1,2,3,5,1,2,3,4
1,2,3,5,C,1,2,3,4
C,C,C

How many test cases yield state coverage?

Input Event Sequence    Path of Transitions
1,2,3,4
1,2,3,5,1,2,3,4
1,2,3,5,C,1,2,3,4
C,C,C

How many are needed for transition coverage?



Consequences of Pseudo-structural Coverage Metrics

1. Combinatoric explosion is controlled
   Selecting test cases from the FSM decomposition reduced 156 threads to 10 test cases.

2. Fault isolation is improved
   When a "verify" operation fails, use the FSMs to determine what went wrong, where it went wrong, and when it went wrong.

3. Base information for testing management


SATM System Threads


1. Insertion of an invalid card (this is probably the "shortest" system thread).
2. Insertion of a valid card, followed by three failed PIN entry attempts.
3. Insertion of a valid card, a correct PIN entry attempt, followed by a balance inquiry.
4. Insertion of a valid card, a correct PIN entry attempt, followed by a deposit.
5. Insertion of a valid card, a correct PIN entry attempt, followed by a withdrawal.
6. Insertion of a valid card, a correct PIN entry attempt, followed by an attempt to withdraw more cash than the account balance.


SATM System Thread Testing


SATM Test Data

ATM Card   PAN         Expected PIN   Checking Balance   Savings Balance
1          100         1234           $1000.00           $800.00
2          200         4567           $100.00            $90.00
3          300         6789           $25.00             $20.00
4          (invalid)

Port Input Events            Port Output Events
Insert ATM Card (n)          Display Screen(n, text)
Key Press Digit (n)          Open Door(dep, withdraw)
Key Press Cancel             Close Door(dep, withdraw)
Key Press Button B(n)        Dispense Notes(n)
Insert Deposit Envelope      Print Receipt(text)
                             Eject ATM Card


Thread 1 Test Procedure

Description: invalid card

Test operator instructions
Initial conditions: Screen 1 being displayed
Perform the following sequence of steps:
  1. Cause:  Insert ATM Card 4
  2. Verify: Eject ATM Card
  3. Verify: Display Screen(1, null)

Post condition: Screen 1 displayed
Test result: ___ Pass  ___ Fail


Thread 2 Test Procedure

Description: valid card, 3 failed PIN attempts
Initial conditions: Screen 1 being displayed
Perform the following sequence of steps:
   1. Cause:  Insert ATM Card 1
   2. Verify: Display Screen(2, '----')
   3. Cause:  Key Press Digit (1)
   4. Verify: Display Screen(2, 'X---')
   5. Cause:  Key Press Cancel
   6. Verify: Display Screen(2, '----')
   7. Cause:  Key Press Digit (1)
   8. Verify: Display Screen(2, 'X---')
   9. Cause:  Key Press Digit (2)
  10. Verify: Display Screen(2, 'XX--')
  11. Cause:  Key Press Cancel
  12. Verify: Display Screen(2, '----')
  13. Cause:  Key Press Digit (1)
  14. Verify: Display Screen(2, 'X---')
  15. Cause:  Key Press Digit (2)
  16. Verify: Display Screen(2, 'XX--')
  17. Cause:  Key Press Digit (3)
  18. Verify: Display Screen(2, 'XXX-')
  19. Cause:  Key Press Digit (5)
  20. Verify: Display Screen(2, 'XXXX')
  21. Verify: Display Screen(4, null)
  22. Verify: Display Screen(1, null)

Post condition: Screen 1 displayed
Test result: ___ Pass  ___ Fail

Thread 3 Test Procedure

Description: valid card, a correct PIN entry attempt, followed by a balance inquiry of the checking account
Initial conditions: Screen 1 being displayed
Perform the following sequence of steps:
   1. Cause:  Insert ATM Card 1
   2. Verify: Display Screen(2, '----')
   3. Cause:  Key Press Digit (1)
   4. Verify: Display Screen(2, 'X---')
   5. Cause:  Key Press Digit (2)
   6. Verify: Display Screen(2, 'XX--')
   7. Cause:  Key Press Digit (3)
   8. Verify: Display Screen(2, 'XXX-')
   9. Cause:  Key Press Digit (4)
  10. Verify: Display Screen(2, 'XXXX')
  11. Verify: Display Screen(5, null)
  12. Cause:  Key Press Button B(1)
  13. Verify: Display Screen(6, null)
  14. Cause:  Key Press Button B(1)
  15. Verify: Display Screen(14, null)
  16. Cause:  Key Press Button B(2)
  17. Verify: Print Receipt('$1000.00')
  18. Verify: Display Screen(15, null)
  19. Verify: Eject ATM Card
  20. Verify: Display Screen(1, null)

Post condition: Screen 1 displayed
Test result: ___ Pass  ___ Fail

Thread 4 Test Procedure

Description: valid card, a correct PIN entry attempt, followed by a $25.00 deposit to the checking account
   1.        Cause:  Insert ATM Card 1
   2.        Verify: Display Screen(2, '----')
   3,5,7,9.  Cause:  Key Press Digit (1, 2, 3, 4)
   4,6,8,10. Verify: Display Screen(2, 'XXXX')
  11.        Verify: Display Screen(5, null)
  12.        Cause:  Key Press Button B(2)
  13.        Verify: Display Screen(6, null)
  14.        Cause:  Key Press Button B(1)
  15.        Verify: Display Screen(7, '$----.--')
  16.        Cause:  Key Press Digit (2)
  17.        Verify: Display Screen(7, '$----.-2')
  18.        Cause:  Key Press Digit (5)
  19.        Verify: Display Screen(7, '$----.25')
  20.        Cause:  Key Press Digit (0)
  21.        Verify: Display Screen(7, '$---2.50')
  22.        Cause:  Key Press Digit (0)
  23.        Verify: Display Screen(7, '$--25.00')
  24.        Verify: Display Screen(13, null)
  25.        Verify: Open Door(deposit)
  26.        Cause:  Insert Deposit Envelope
  27.        Verify: Close Door(deposit)
  28.        Verify: Display Screen(14, null)
  29.        Cause:  Key Press Button B(2)
  30.        Verify: Print Receipt('$1025.00')
  31.        Verify: Display Screen(15, null)
  32.        Verify: Eject ATM Card
  33.        Verify: Display Screen(1, null)

Post condition: Screen 1 displayed
Test result: ___ Pass  ___ Fail

Thread 5 Test Procedure

Description: valid card, a correct PIN entry attempt, followed by an attempt to withdraw more cash than the savings account balance
   1.              Cause:  Insert ATM Card 2
   2.              Verify: Display Screen(2, '----')
   3,5,7,9.        Cause:  Key Press Digit (4, 5, 6, 7)
   4,6,8,10.       Verify: Display Screen(2, 'XXXX')
  11.              Verify: Display Screen(5, null)
  12.              Cause:  Key Press Button B(3)
  13.              Verify: Display Screen(6, null)
  14.              Cause:  Key Press Button B(2)
  15.              Verify: Display Screen(7, '$----.--')
  16,18,20,22,24.  Cause:  Key Press Digit (1, 1, 0, 0, 0)
  17,19,21,23,25.  Verify: Display Screen(7, '$-110.00')
  26.              Verify: Display Screen(8, '----.--')
  27.              Cause:  Key Press Cancel
  28.              Verify: Display Screen(14, null)
  29.              Cause:  Key Press Button B(2)
  30.              Verify: Print Receipt('$90.00')
  31.              Verify: Eject ATM Card
  32.              Verify: Display Screen(1, null)

Post condition: Screen 1 displayed
Test result: ___ Pass  ___ Fail

Operational Profiles

Zipf's Law: 80% of the activities occur in 20% of the space. Examples:
  - productions of a language syntax
  - natural-language vocabulary
  - menu options of a commercial software package
  - area of an office desktop
  - floating-point divide on the Pentium chip

For threads: a small fraction of all possible threads represents the majority of system execution time. Therefore: find the occurrence probabilities of threads and use these to order thread testing.

Operational Profiles of SATM
[Figure: operational profile decision tree for the SATM system. 1. Card Entry: bad card 0.05, legitimate card 0.95. 2.1 First PIN Try, 2.2 Second PIN Try, 2.3 Third PIN Try: correct PIN 0.90, incorrect PIN 0.10 at each try. 3. Await Transaction Choice: Button B1 (Balance) 0.05, Button B2 (Deposit) 0.10, Button B3 (Withdrawal) 0.85. Withdrawal outcomes: normal 0.85, low cash 0.10, low balance 0.05. All paths end at Print Receipt.]

Common Thread: Legitimate Card, PIN ok 1st try, Withdraw, Normal
Probability: 0.95 x 0.90 x 0.85 x 0.85 = 0.6177375

Rare Thread: Legitimate Card, Bad PIN 1st try, Bad PIN 2nd try, PIN ok 3rd try, Withdraw, Low Cash
Probability: 0.95 x 0.10 x 0.10 x 0.90 x 0.85 x 0.10 = 0.00072675
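The computations above generalize: a thread's occurrence probability is the product of the branch probabilities along its path, and threads are then tested in decreasing order of probability. A minimal Python sketch (mine, not the book's):

    from math import prod

    # Branch probabilities along each thread's path through the profile tree.
    threads = {
        "common: PIN ok 1st try, withdraw, normal": [0.95, 0.90, 0.85, 0.85],
        "rare: 3 PIN tries, withdraw, low cash":    [0.95, 0.10, 0.10, 0.90, 0.85, 0.10],
    }

    # Order threads by occurrence probability, most frequent first.
    for name, branches in sorted(threads.items(), key=lambda kv: -prod(kv[1])):
        print(name, prod(branches))
    # common: PIN ok 1st try, withdraw, normal  0.6177375
    # rare: 3 PIN tries, withdraw, low cash     0.00072675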
SATM Atomic System Functions
ASF1: Examine ATM Card
  Inputs: PAN from card, list of acceptable cards
  Outputs: Legitimate Card, Wrong Card
ASF2: Control PIN Entry
  Inputs: Expected PIN, Offered PIN
  Outputs: PIN OK, Wrong PIN
ASF3: Get Transaction Type
  Inputs: Button1, Button2, or Button3 depressed
  Outputs: call Get Account Type (not externally visible)
ASF4: Get Account Type
  Inputs: Button1 or Button2 depressed
  Outputs: call one of Process Withdrawal, Process Deposit, or Display Balance (not externally visible)
ASF5: Process Withdrawal
  Inputs: Amount Requested, Cash Available, Local Balance
  Outputs: Process Request (call Dispense Cash), Reject Request (insufficient funds or insufficient balance)
SATM Atomic System Functions
ASF6: Process Deposit
  Inputs: Deposit Amount, Deposit Envelope, Deposit Door Status, Local Balance
  Outputs: Process Request (call Credit Local Balance), Reject Request
ASF7: Display Balance
  Inputs: Local Balance
  Outputs: (call Screen Handler)
ASF8: Manage Session
  Inputs: New Transaction Requested, Done
  Outputs: (call Get Transaction Type or call Print Receipt)
ASF9: Print Receipt
  Inputs: Account Number, transaction type and amount, new local balance, time, date
  Outputs: formatted text for receipt, (call Eject Card)
ASF10: Eject Card
  Inputs: (invoked)
  Outputs: (control rollers)
"Hidden" Atomic System Functions
ASF11: Dispense $10 Note
ASF12: Screen Handler
ASF13: Button Sensor
ASF14: Digit Keypad Sensor
ASF15: Cancel Sensor
ASF16: Card Roller Controller
ASF17: Control Deposit Door
ASF18: Control Deposit Rollers
ASF19: Control Cash Door
ASF20: Count Cash on Hand
ASF21: Timer
SATM Threads
Thread 1: Wrong Card
  State Sequence: Idle, Idle
  Event Sequence: Display Screen 1, Wrong Card
  ASF Sequence:

Thread 2: Wrong PIN
  State Sequence: Idle, Await 1st PIN Try, Await 2nd PIN Try, Await 3rd PIN Try, Idle
  Event Sequence: Display Screen 1, Legitimate Card, Wrong PIN, Wrong PIN, Wrong PIN, Display Screen 4, Display Screen 1
  ASF Sequence:
Thread 3: Balance Inquiry
  State Sequence: Idle, Await 1st PIN Try, Acquire Transaction Data, Display Balance, Display Balance, Close Session, Idle
  Event Sequence: Display Screen 1, Legitimate Card, Wrong PIN, Wrong PIN, Wrong PIN, Display Screen 4, Display Screen 1
  ASF Sequence:
SATM Threads
Thread 4: Balance then Deposit then Withdraw
  State Sequence: Idle, Await 1st PIN Try, Acquire Transaction Data, Display Balance, Close Session, Acquire Transaction Data, Process Deposit, Close Session, Acquire Transaction Data, Process Withdrawal, Close Session, Idle
  Event Sequence: Display Screen 1, Legitimate Card, Wrong PIN, Wrong PIN, Wrong PIN, Display Screen 4, Display Screen 1
  ASF Sequence:

Thread 5: 3 PIN Tries then Balance then Deposit then Withdraw
  State Sequence: Idle, Await 1st PIN Try, Await 2nd PIN Try, Await 3rd PIN Try, Acquire Transaction Data, Display Balance, Close Session, Acquire Transaction Data, Process Deposit, Close Session, Acquire Transaction Data, Process Withdrawal, Close Session, Idle
  Event Sequence: Display Screen 1, Legitimate Card, Wrong PIN, Wrong PIN, Wrong PIN, Display Screen 4, Display Screen 1
  ASF Sequence:
ATM States and Transitions
[Figure: ATM state-transition diagram with states S1-S9 and transitions t1-t16.]
Chapter 15
Interaction Testing
Interaction: the hot topic of the 1990s
- "Elements of Interaction," Communications of the ACM (Jan. 1993), Robin Milner (1991 Turing Award lecture)
- "An Investigation of the Therac-25 Accidents," IEEE Computer (July 1993), Nancy G. Leveson and Clark S. Turner
- "Feature Interactions and Formal Specifications in Telecommunications," IEEE Computer (Aug. 1993), Pamela Zave

The underlying issue: non-determinism.
Our approach: a taxonomy of interactions, an executable specification, and analysis from graph theory.
Definitions of Interaction
1. "System behavior as a whole does not satisfy the separate specifications of all its [parts]." (Pamela Zave)
2. Relationship between the whole and its parts.
3. Totality of connections among components.
4. Consequences of connections among components.
5. The result of composition. (Robin Milner)
Feature Interaction
Feature: a service provided by a system (activated by a subscriber paying for the service).
Telephony examples (from P. Zave). Notation: d1, d2, d3, and d4 are directory numbers.
- POTS (Plain Old Telephone Service)
- Call Forwarding: calls to d1 are terminated on d2 iff call forwarding is enabled on d1 and activated by defining d2 as the current destination
- Calling Party Identification: when active on d2, d2 receives the directory number of all incoming calls
- Call Rejection: allows a subscriber to define a list of directory numbers from which calls will not be completed
- Busy Treatment: several possibilities; call override, call rejection, call forward, call forward on busy, do not disturb, automatic re-call
Zave's Sample Interactions

1. Calling Party Identification and Call Rejection: calling party identification offers a directory number (data) which is used as an input in the call rejection process.
2. Call Forwarding and Call Rejection: d2 rejects calls from d1, d1 forwards calls to d2, d3 calls d1.
3. Call Forward Loop: d1 forwards calls to d2, d2 forwards calls to d3, d3 forwards calls to d1, d4 calls d1.
4. Voice Mail and Credit Card Calling: in credit card calling, # terminates a call so that a new call can be dialed without re-entering the credit card digits. In many Voice Mail systems, # is a command (e.g., to hear your recorded message). d1 makes a credit card call to d2 and gets d2's voice mail. What happens when d1 enters #?
First Clues
Feature interaction is a consequence of adaptive maintenance (i.e., adding new capabilities to an existing system).
Interactions involve connections, and the essence of every modeling technique is to find/establish connections (see Wurman, Information Anxiety).
Composition creates connections. What can be connected?

First approximation:

interaction      Is          Is not
Should be        intended    missing
Should not be    unintended  null
Definitions of Determinism
Let C be some calculation, C(input) = output.
Definition 1: If the result of C can always be predicted, C is deterministic (or pre-determined); otherwise C is non-deterministic.
Definition 2: If the result of C is always the same, C is deterministic (or pre-determined); otherwise C is non-deterministic.

Example: ComputeSalesTax(price, taxRate). If the tax rate is 4%, sales tax on a $100 item will be $4.00. If the legislature changes the tax rate to 6%, we have interaction. If we understand all the points of interaction, the function is still deterministic.

"Concurrency inflicts non-determinism." (Robin Milner)
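To make the two definitions concrete, here is a small sketch (mine, not the book's) of a ComputeSalesTax-style function whose interaction point is an external rate:

    TAX_RATE = 0.04   # external data the calculation interacts with

    def compute_sales_tax(price):
        # Definition 1: deterministic, because the result is predictable
        # once TAX_RATE (the point of interaction) is known.
        # Definition 2: repeated calls with the same price can differ when
        # TAX_RATE changes, so the black-box view looks non-deterministic.
        return price * TAX_RATE

    print(compute_sales_tax(100.00))   # 4.0
    TAX_RATE = 0.06                    # the legislature changes the rate
    print(compute_sales_tax(100.00))   # 6.0: same input, different output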
Toward a Taxonomy of Interactions
What elements can interact? In what ways can these elements interact?
"Basis" system elements (the reality of any system): Data, Action, Device, Event, Thread.
Basis Concepts for Requirements Specification
1. Data
When a system is described in terms of its data, the focus is on the information used and created by the system. Data refers to information that is either initialized, stored, updated, or (possibly) destroyed.

2. Actions
Actions have inputs and outputs, and these can be either data or port events. Some methodology-specific synonyms for actions: transform, data transform, control transform, process, activity, task, method, and service. Actions can be decomposed into lower level actions.

3. Devices
Every system has devices; these are the sources and destinations of system level inputs and outputs (events that occur at the port boundary). Physical actions (e.g., keystrokes, light emissions from a screen) occur on port devices, and these are translated from physical to logical (or logical to physical) appearances by actions that execute on other devices (e.g., a CPU executing software).

4. Events
A system level input (or output) that occurs on a port device. Like data, events can be inputs to or outputs of actions. Events can be discrete (such as keystrokes) or they can be continuous (such as temperature, altitude, or pressure). There are situations where the context of present data values changes the logical meaning of physical events. We refer to such situations as "context sensitive port events".

5. Threads
A thread is an instance of execution-time behavior of a system. Two synonyms: a scenario, a use case. A thread is a sequence of actions, and these in turn have data and events as their inputs and outputs.
Connections (interactions) Among Basis Concepts
[Table: connections (interactions) among the five basis concepts. Representative cell entries: Data-Data: Square of Opposition; Data-Action: I/O, usage; Data-Event: context sensitivity; Data-Thread: points of n-connection; Action-Action: timing, race conditions; Action-Device: execution; Action-Event: I/O, usage; Device-Device: resource contention; Device-Event: incidence; Event-Event: timing, concurrency; Event-Thread: points of n-connection; Thread-Thread: n-connectivity.]
Toward a Taxonomy of Interactions
Each element needs another "dimension"; we'll call it Location, where location refers to time and position.
Each of these elements can interact with itself. Interactions among pairs of the elements (Data, Action, Device, Event, Thread) are more interesting.
Some Ground Rules
Views of location: a point in space-time; something that happens in a processor (position).
1. For now, a processor is something that executes threads, or a device where events occur.
2. Since threads execute, they have a strictly positive time duration.
3. In a single processor, two threads cannot execute simultaneously.
4. Events have a strictly positive time duration.
5. Two (or more) input events can occur simultaneously, but an event cannot occur simultaneously in two (or more) processors.
6. In a single processor, two output events cannot begin simultaneously.
Taxonomy of Interactions
Static interaction: independent of time. Dynamic interaction: time dependent.
Each of the five "basis elements" can interact in each quadrant of the 2x2 grid {static, dynamic} x {single processor, multiple processor}.
We will consider static interactions of data with data, and dynamic interactions of threads with threads.
Static Interactions (Known to Aristotle)
[Figure: the Square of Opposition. Corners: "All S is P" (top left), "No S is P" (top right), "Some S is P" (bottom left), "Some S is not P" (bottom right). Top edge: contraries; bottom edge: sub-contraries; diagonals: contradictories; vertical edges: subalternation.]

Interactions in the Square of Opposition:
1. Contradictories: exactly one is true.
2. Contraries: cannot both be true.
3. Sub-contraries: cannot both be false.
4. Subalternation: truth of the superaltern guarantees truth of its subaltern.

Examples:
1. When the pre-condition for a thread is a conjunction of data propositions, contrary or contradictory data values will prevent thread execution.
2. Context sensitive port input events usually involve contradictory data.
3. Case statement clauses are contradictories.
4. Rules in a decision table are contradictories.
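These relations are easy to check mechanically; a small sketch (mine, not from the text) with data propositions modeled as Python sets (S must be non-empty for contraries and subalternation to behave classically):

    def all_s_is_p(S, P):      return S <= P       # A: universal affirmative
    def no_s_is_p(S, P):       return not (S & P)  # E: universal negative
    def some_s_is_p(S, P):     return bool(S & P)  # I: particular affirmative
    def some_s_is_not_p(S, P): return bool(S - P)  # O: particular negative

    S, P = {1, 2}, {1, 2, 3}
    assert all_s_is_p(S, P) != some_s_is_not_p(S, P)   # contradictories: exactly one true
    assert no_s_is_p(S, P) != some_s_is_p(S, P)        # contradictories: exactly one true
    assert not (all_s_is_p(S, P) and no_s_is_p(S, P))  # contraries: never both true
    assert all_s_is_p(S, P) <= some_s_is_p(S, P)       # subalternation: A implies I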
Static Interactions in Multiple Processors
The "Call Forwarding Loop" in telephony. Background: when a subscriber defines a call forwarding destination, this becomes call routing data local to the subscriber's telephone office. Suppose Subscriber A (in Grand Rapids) forwards calls to Subscriber B in Northbrook, Subscriber B (in Northbrook) forwards calls to Subscriber C in Phoenix, and Subscriber C (in Phoenix) forwards calls to Subscriber A in Grand Rapids. What happens when someone outside this loop calls one of A, B, or C?
Observations:
1. The call forwarding data is locally correct, but globally, it is contrary.
2. The global contrary condition is a fault, not a failure.
3. The fault only becomes a failure when a thread creates a dynamic interaction.
4. (Most telephone systems avoid this by refusing to forward a forwarded call.)
n-Connectedness
Linear graph theory sheds some light on interaction.

[Figure: four small digraphs on nodes i and j illustrating 0-connected (no path), 1-connected (connected only through a common ancestor or descendant), 2-connected (a path from one node to the other), and 3-connected (paths in both directions).]

Two kinds of faults:
- Missing n-connectedness occurs when a pair lacks an essential connection.
- Inappropriate n-connectedness occurs when a pair has an undesired connection.
Interpretations of n-Connectedness
0-connected: true independence
1-connected: (by an ancestor) resource conflict, context sensitivity; (by a descendant) ambiguous cause
2-connected: define-reference, enable, disable, precedence, prohibit, prerequisite
3-connected: mutual influence, repetition, deadlock

Examples: failures under the "single failure" assumption of reliability theory, context sensitive port events, the Falkland Islands submarine incident, an email loop.
Interactions Based on n-Connectedness
Data with Data
  0-connected: data that are independent
  1-connected: data that are inputs to the same action
  2-connected: data that are sub-alternates; data that are used in a computation
  3-connected: data that are contraries, contradictories, or sub-contraries; "deeply" related data, as in iteration or semaphores

Threads with Threads
  Threads can be n-connected with each other in two ways: via events and/or via data. To explore this, we need a sufficiently expressive notation. My candidate: Event-Driven Petri Nets.
Elements of the Interaction Taxonomy
1. Basis concept interactions: data-data, data-action, data-event, data-thread, action-action, action-event, port-port, port-event, event-event, event-thread, thread-thread
2. Refined by appropriate relationship (e.g., n-connectivity, square of opposition)
3. Further refined by Is/Should modality

Placement of Zave's examples:
1. Calling Party Identification and Call Rejection: intended thread-thread 2-connectivity
2. Call Forwarding and Call Rejection: unintended data-data contraries
3. Call Forward Loop: unintended thread-thread 3-connectivity
4. Voice Mail and Credit Card Calling: unintended data-event context sensitivity
A Petri Net Model for Interactions
An Event-Driven Petri Net is a tripartite directed graph (P, D, S, In, Out) composed of three sets of nodes, P, D, and S, and two mappings, In and Out, where:
  P is a set of port events
  D is a set of data places
  S is a set of transitions
  In is a set of ordered pairs from (P ∪ D) × S
  Out is a set of ordered pairs from S × (P ∪ D)
Event-Driven Petri Nets express four of our five basic system constructs; only devices are missing. In an Event-Driven Petri Net, the external inputs are the places with indegree = 0, and the external outputs are the places with outdegree = 0.
[Figure: graphical conventions for Event-Driven Petri Nets: distinct node symbols for port events, transitions, and data places.]
Composing Event-driven Petri Nets
Given two (or more) EDPNs, their composition (unique!) is obtained as follows (a sketch of this in code follows the list):
1. If any places or port events have synonymous names, collapse these into unique names. (For example, collapse Lamp ON and Light ON into Lamp ON.)
2. Consider all data places and port events to be global (if a place appears in two individual nets, it need only appear once in their composition).
3. Consider actions to be local. (This preserves the execution sequence of the individual threads.)
4. Adjust all input and output relations (arrows) to preserve the original input and output relations.
5. It may be necessary to add transitions and input/output edges to represent interactions that were not captured in the prior components.
6. Trace through some mainline threads to convince yourself that the composition (especially interactions) is accurate.
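A rough illustration (mine, not the book's) of rules 1 through 4, with an EDPN held as sets of places and In/Out pairs; rules 5 and 6 remain manual steps:

    from dataclasses import dataclass, field

    @dataclass
    class EDPN:
        places: set = field(default_factory=set)       # P ∪ D (global names)
        transitions: set = field(default_factory=set)  # S
        inputs: set = field(default_factory=set)       # (place, transition) pairs
        outputs: set = field(default_factory=set)      # (transition, place) pairs

    def compose(nets, synonyms=None):
        synonyms = synonyms or {}                      # rule 1: collapse synonyms
        ren = lambda p: synonyms.get(p, p)
        result = EDPN()
        for tid, net in enumerate(nets):
            result.places |= {ren(p) for p in net.places}               # rule 2: global
            local = {t: "T%d.%s" % (tid, t) for t in net.transitions}   # rule 3: local
            result.transitions |= set(local.values())
            result.inputs |= {(ren(p), local[t]) for p, t in net.inputs}    # rule 4
            result.outputs |= {(local[t], ren(p)) for t, p in net.outputs}  # rule 4
        return result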
Thread Interaction and Composition
When two threads interact (via an event or data), we can always compose their respective EDPNs.

[Figure: EDPNs for Thread 1 and Thread 2 and their composition. The shared port event p1 and data places d1-d4 appear only once in the composition, while both threads' transitions (s1, s2, s3) and the output event p3 are preserved.]
Producer/Consumers Composition
[Figure: a Producer net and two Consumer nets (places p1-p7, transitions t1-t6) and their EDPN composition, in which the shared places connect the producer to both consumers.]
Saturn Windshield Wiper Controller
The windshield wiper on the Saturn automobile (at least on the 1992 models) is controlled by a lever with a dial. The lever has four positions, OFF, INT (for intermittent), LOW, and HIGH, and the dial has three positions, numbered simply 1, 2, and 3. The dial positions indicate three intermittent speeds, and the dial position is relevant only when the lever is at the INT position. The decision table below shows the windshield wiper speeds (in wipes per minute) for the lever and dial positions.
Lever:  OFF  INT  INT  INT  LOW  HIGH
Dial:   n/a  1    2    3    n/a  n/a
Wiper:  0    4    6    12   30   60

Input Event   Description
ie1           lever from OFF to INT
ie2           lever from INT to LOW
ie3           lever from LOW to HIGH
ie4           lever from HIGH to LOW
ie5           lever from LOW to INT
ie6           lever from INT to OFF
ie7           dial from 1 to 2
ie8           dial from 2 to 3
ie9           dial from 3 to 2
ie10          dial from 2 to 1

Output Event  Description
oe1           0 w.p.m.
oe2           4 w.p.m.
oe3           6 w.p.m.
oe4           12 w.p.m.
oe5           30 w.p.m.
oe6           60 w.p.m.
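The decision table translates directly into a lookup; a minimal Python sketch (mine, not from the text):

    # Wiper speed in wipes per minute, from the decision table above.
    DIAL_SPEED  = {1: 4, 2: 6, 3: 12}          # used only when lever is at INT
    LEVER_SPEED = {"OFF": 0, "LOW": 30, "HIGH": 60}

    def wiper_speed(lever, dial):
        if lever == "INT":
            return DIAL_SPEED[dial]
        return LEVER_SPEED[lever]

    assert wiper_speed("OFF", 1) == 0
    assert wiper_speed("INT", 3) == 12
    assert wiper_speed("HIGH", 2) == 60        # dial position is irrelevant here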
Saturn Windshield Wiper FSMs
[Figure: two finite state machines. Lever FSM: states OFF, INT, LOW, HIGH with transitions ie1 (OFF to INT), ie2/oe5 (INT to LOW), ie3/oe6 (LOW to HIGH), ie4/oe5 (HIGH to LOW), ie5 (LOW to INT), ie6/oe1 (INT to OFF). Dial FSM: states 1, 2, 3 with transitions ie7, ie8, ie9, ie10. Several transition outputs are marked "?". Input events ie1-ie10 and output events oe1-oe6 are as defined above.]

Note that several transition actions are indeterminate because of composition.
Deriving an EDPN from a State Machine
Port Input Event   Description
p1 (ie1)           lever from OFF to INT
p2 (ie2)           lever from INT to LOW
p3 (ie3)           lever from LOW to HIGH
p4 (ie4)           lever from HIGH to LOW
p5 (ie5)           lever from LOW to INT
p6 (ie6)           lever from INT to OFF
p7 (ie7)           dial from 1 to 2
p8 (ie8)           dial from 2 to 3
p9 (ie9)           dial from 3 to 2
p10 (ie10)         dial from 2 to 1

Port Output Event  Description
p11 (oe1)          0 w.p.m.
p12 (oe2)          4 w.p.m.
p13 (oe3)          6 w.p.m.
p14 (oe4)          12 w.p.m.
p15 (oe5)          30 w.p.m.
p16 (oe6)          60 w.p.m.

Data Place         Description
d1                 Lever at OFF position
d2                 Lever at INT position
d3                 Lever at LOW position
d4                 Lever at HIGH position
d5                 Dial at position 1
d6                 Dial at position 2
d7                 Dial at position 3

[Figure: the EDPN derived from the Lever FSM. Each FSM transition becomes an EDPN transition (s1-s6) whose inputs are a port input event and the data place for the source state, and whose outputs are the data place for the target state plus, where defined, a port output event (p11, p15, p16).]
n-Connected Thread Interaction
Let T1 and T2 be two EDPN threads in which synonym places have been resolved, and with external input and output sets EI1, EI2, EO1, and EO2. Furthermore, let T be the composition of threads T1 and T2, where EI = EI1 ∪ EI2 and EO = EO1 ∪ EO2 are the external input and output sets of the composed thread T. The threads T1 and T2 are:
  0-connected if EI1 ∩ EI2 = ∅, EO1 ∩ EO2 = ∅, EI1 ∩ EO2 = ∅, and EO1 ∩ EI2 = ∅
  1-connected if either EI1 ∩ EI2 ≠ ∅ or EO1 ∩ EO2 ≠ ∅
  2-connected if either EI1 ∩ EO2 ≠ ∅ or EI2 ∩ EO1 ≠ ∅
  3-connected if both EI1 ∩ EO2 ≠ ∅ and EI2 ∩ EO1 ≠ ∅
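A minimal sketch (mine) of this classification over the four external sets; it returns the highest level that applies:

    def n_connected(EI1, EO1, EI2, EO2):
        forward  = bool(set(EO1) & set(EI2))   # T1 outputs feed T2 inputs
        backward = bool(set(EO2) & set(EI1))   # T2 outputs feed T1 inputs
        shared   = bool(set(EI1) & set(EI2)) or bool(set(EO1) & set(EO2))
        if forward and backward:
            return 3                           # mutual influence
        if forward or backward:
            return 2                           # one-way influence
        if shared:
            return 1                           # common ancestor or descendant
        return 0                               # true independence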
[Figure: a sample EDPN thread with port events p1 and p3, data places d1-d4, and transitions s1, s2, and s3.]
Full EDPN for the Saturn Windshield Wiper
[Figure: the full EDPN for the Saturn windshield wiper, composing the Lever and Dial EDPNs: port input events p1-p10, port output events p11-p16, data places d1-d7, and transitions s1-s13.]
Composition in a Database
(all relations are 0..n at each end)
[Figure: E/R diagram with entities Data, Action, Thread, Event, and Device, and relationships DataInput, DataOutput, EventInput, EventOutput, SequenceOf, and OccursOn.]
DataInput Table for d3
DataID  DataName  ActionID  ActionName
d3      Low       s3        LowToHigh
d3      Low       s5        LowToInt
DataInput Table for d2
DataID  DataName      ActionID  ActionName
d2      Intermittent
d2      Intermittent
d2      Intermittent
d2      Intermittent
d2      Intermittent
What Connections Can You Identify Between an EDPN Data Model and Graph Theory?
- Indegree of a place
- Outdegree of a place
- Indegree of a transition
- Outdegree of a transition
- Indegree of an event
- Outdegree of an event
Chapters 16 - 20
Testing Object-Oriented Software
Object-Oriented Testing
1. Traditional vs. Object-Oriented Testing
2. Saturn Windshield Wiper Example
3. Testing with O-O Notations
Traditional vs. Object-Oriented Testing
[Figure: side-by-side E/R diagrams. Traditional: Data, Action, Device, Event, and Thread, with data as inputs and outputs of actions. Object-oriented: an Object encapsulates Data and Action; Device, Event, and Thread remain.]

Object-orientation repackages the basis concepts (all relations are 0..n).
Issues in Object-Oriented Testing

1. Implications of message-based communication
   With a procedural language, program graphs are "natural". O-O testing must deal with event and message quiescence.
2. Decomposition to composition
   The functional decomposition tree as a basis of integration testing is lost. Composition implies unknowable contexts: we can never know all the possible objects with which a given object may be composed.
3. O-O language packaging requires redefinition of testing levels
   What is a unit? Can we continue with integration level constructs (MM-Path, ASF)? What do O-O threads look like?
Levels of Object-Oriented Testing
Level        Item                     Boundary
Unit         Method of an object?     Program graph
             Class?
Integration  MM-Path                  Message quiescence
             Atomic System Function   Event quiescence
System       Thread                   Source to sink ASF
             Thread Interaction       (none)

Notice the cascading levels of interaction: unit testing covers statement interaction, MM-Path testing covers method interaction, ASF testing covers MM-Path interaction, thread testing covers object interaction, and all of this culminates in thread interaction.
Re-usable Testing Techniques
Level        Item                     Technique
Unit         Method of an object      Traditional functional and/or structural
             Class                    StateChart-based
Integration  MM-Path                  New definition?
             Atomic System Function   New definition?
System       Thread                   New definition? (StateCharts)
             Thread Interaction       (as before)
Saturn Windshield Wiper Controller
The windshield wiper on Saturn automobiles is controlled by a lever with a dial. The lever has four positions, OFF, INT (for intermittent), LOW, and HIGH, and the dial has three positions, numbered simply 1, 2, and 3. The dial positions indicate three intermittent speeds, and the dial position is relevant only when the lever is at the INT position. The decision table below shows the windshield wiper speeds (in wipes per minute) for the lever and dial positions.
Lever:  OFF  INT  INT  INT  LOW  HIGH
Dial:   n/a  1    2    3    n/a  n/a
Wiper:  0    4    6    12   30   60

Input Event   Description
ie1           lever from OFF to INT
ie2           lever from INT to LOW
ie3           lever from LOW to HIGH
ie4           lever from HIGH to LOW
ie5           lever from LOW to INT
ie6           lever from INT to OFF
ie7           dial from 1 to 2
ie8           dial from 2 to 3
ie9           dial from 3 to 2
ie10          dial from 2 to 1

Output Event  Description
oe1           0 w.p.m.
oe2           4 w.p.m.
oe3           6 w.p.m.
oe4           12 w.p.m.
oe5           30 w.p.m.
oe6           60 w.p.m.
Saturn Windshield Wiper Objects
Wiper: data Speed, Lever Position, Dial Position; method Compute Speed
Lever: data position; methods Sense Up, Sense Down
Dial: data position; methods Sense Left, Sense Right
Saturn Windshield Wiper Object FSMs
[Figure: the Lever FSM (OFF, INT, LOW, HIGH) and Dial FSM (1, 2, 3), with input events ie1-ie10 and output events oe1-oe6 as defined above. Several transition outputs are marked "?".]

Note that several transition actions are indeterminate because of composition.
Resolving the Indeterminacy with Messages
[Figure: the Lever and Dial FSMs annotated with messages: each input event ieN now sends message mN (ie1/m1 through ie10/m10), and the Wiper object's own FSM (states 0, 4, 6, 12, 30, and 60 wpm) changes state on the incoming messages m1-m10.]

Messages m1 to m6 inform the Wiper object of the state of the Lever object. Messages m7 to m10 inform the Wiper object of the state of the Dial object.
Windshield Wiper StateChart
[Figure: a StateChart for the windshield wiper with orthogonal regions for the Lever (Off, Int, Low, High; events leverUp, leverDown), the Dial (1, 2, 3; events dialUp, dialDown), and the Wiper speed (0, 4, 6, 12, 30, 60 wpm), whose transitions are guarded by instate() conditions on the other regions.]
Testing Object-Oriented Software
- definition in UML
- two kinds of o-o software: data-driven and event-driven
- how are these described (for a tester)?
- what is an o-o unit: a class? a method?
- what is the basis for integration testing?
First Example (data-driven)

The o-oCalendar program is an object-oriented implementation of the NextDate function: NextDate(Mar. 5, 2002) = Mar. 6, 2002. When this is implemented in procedural code, it is approximately 50 lines long, with a cyclomatic complexity less than 15 (it can be 11). A "pure" (i.e., good practice) object-oriented implementation contains one abstract class and five classes. The next few slides show the UML description of o-oCalendar.
Classes
testIt

CalendarUnit  'abstract class
  currentPos As Integer
  CalendarUnit(pCurrentPos)
  setCurrentPos(pCurrentPos)
  increment()  'boolean

Date
  Day d; Month m; Year y
  Date(pDay, pMonth, pYear)
  increment()
  printDate()

Day
  Month m
  Day(pDay, Month pMonth)
  setCurrentPos(pCurrentPos)
  setDay(pDay, Month pMonth)
  getDay()
  increment()

Month
  private Year y
  private sizeIndex = <31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31>
  Month(pcur, Year pYear)
  setCurrentPos(pCurrentPos)
  setMonth(pcur, Year pYear)
  getMonth()
  getMonthSize()
  increment()

Year
  Year(int pYear)
  setCurrentPos(pCurrentPos)
  getYear()
  increment()
  isLeap()  'boolean
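A compact executable sketch of this design (mine, in Python rather than the slides' VB-style notation); names follow the UML above, and the leap-year and rollover rules are the standard ones:

    class CalendarUnit:                          # the abstract class
        def __init__(self, pos): self.pos = pos
        def set_current_pos(self, pos): self.pos = pos

    class Year(CalendarUnit):
        def increment(self): self.pos += 1
        def is_leap(self):
            y = self.pos
            return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

    class Month(CalendarUnit):
        size_index = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
        def __init__(self, pos, year): super().__init__(pos); self.year = year
        def get_month_size(self):
            days = Month.size_index[self.pos - 1]
            return days + 1 if self.pos == 2 and self.year.is_leap() else days
        def increment(self):                     # False once past December
            self.pos += 1
            return self.pos <= 12

    class Day(CalendarUnit):
        def __init__(self, pos, month): super().__init__(pos); self.month = month
        def increment(self):                     # False once past month end
            self.pos += 1
            return self.pos <= self.month.get_month_size()

    class Date:
        def __init__(self, d, m, y):
            self.y = Year(y); self.m = Month(m, self.y); self.d = Day(d, self.m)
        def increment(self):
            if not self.d.increment():           # day rolls over...
                self.d.set_current_pos(1)
                if not self.m.increment():       # ...month rolls over...
                    self.m.set_current_pos(1)
                    self.y.increment()           # ...year increments
        def print_date(self):
            print("%d/%d/%d" % (self.m.pos, self.d.pos, self.y.pos))

    d = Date(31, 12, 2002); d.increment(); d.print_date()   # 1/1/2003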
Inheritance and Aggregation
[Figure: inheritance - Day, Month, and Year inherit from CalendarUnit; aggregation - Date is composed of a Day, a Month, and a Year.]
Program Graphs of Date Methods
[Figure: program graphs of testIt (nodes 1-3) and the Date methods: Date.constructor (nodes 4-11), Date.increment (nodes 12-18), and Date.printDate (nodes 19-20).]
Program Graphs of Day Methods
[Figure: program graphs of the Day methods: Day.constructor, Day.setCurrentPos, Day.setDay, Day.getDay, and Day.increment (nodes 21-33).]
Program Graphs of Month Methods
[Figure: program graphs of the Month methods: Month.constructor, Month.setCurrentPos, Month.setMonth, Month.getMonth, Month.getMonthSize, and Month.increment (nodes 36-52).]
Program Graphs of Year Methods
[Figure: program graphs of the Year methods: Year.constructor, Year.setCurrentPos, Year.getYear, Year.increment, and Year.isLeap (nodes 53-64).]
Collaboration Diagram
[Figure: UML collaboration diagram. testIt sends Date: 1: create, 2: increment, 3: printDate. Date sends Year: 1: create, 2: increment, 3: getYear; Month: 1: create, 2: increment, 3: setMonth, 4: getMonth; Day: 1: create, 2: increment, 3: setDay, 4: getDay. Day sends Month: 1: getMonthSize; Month sends Year: 1: isLeap.]
All Message Flows
[Figure: the complete message flow graph among testIt (main), Date, Day, Month, and Year, with messages m1-m21 connecting testIt to the Date methods (m1-m3) and the Date methods to the Day, Month, and Year methods.]
Message Sequence Diagram (for an easy test case)
[Figure: message sequence diagram over testIt, Date:testdate, Day:d, Month:m, and Year:y for an easy test case: 1: printDate(), 2: getMonth(), 3: getDay(), 4: getYear(), with time running downward.]
Message Flow for Easy Case
[Figure: the subset of the full message flow graph exercised by the easy test case: messages m1, m4, m5, m6, m15, m16, m18, m19, and m21.]
Second Example (event-driven)
[Figure: the Currency Converter GUI: a U.S. Dollar amount text field, an "Equivalent in" output field, option buttons for Brazil, Canada, European Community, and Japan, and Compute, Clear, and Quit buttons.]
Procedural Implementation of Compute Button
procedure Compute (USDollarAmount, EquivCurrencyAmount)
  dim brazilRate, canadaRate, euroRate, japanRate, USDollarAmount As Single
  If (optionBrazil) Then
    EquivCurrencyAmount = brazilRate * USDollarAmount
  Else If (optionCanada) Then
    EquivCurrencyAmount = canadaRate * USDollarAmount
  Else If (optionEuropeanUnion) Then
    EquivCurrencyAmount = euroRate * USDollarAmount
  Else If (optionJapan) Then
    EquivCurrencyAmount = japanRate * USDollarAmount
  Else
    Output ("No country selected")
  EndIf
  EndIf
  EndIf
  EndIf
Object-Oriented Implementation of Compute Button

object optionBrazil ()
  Constant USdollarToBrazilReal = 2.067
  private procedure senseClick
    commandCompute(USdollarToBrazilReal)
  End senseClick

object optionCanada ()
  Constant USdollarToCanadianDollar = 1.16
  private procedure senseClick
    commandCompute(USdollarToCanadianDollar)
  End senseClick

object optionEuropeanUnion ()
  Constant USdollarToEuro = 0.752
  private procedure senseClick
    commandCompute(USdollarToEuro)
  End senseClick

object optionJapan ()
  Constant USdollarToJapanYen = 117.82
  private procedure senseClick
    commandCompute(USdollarToJapanYen)
  End senseClick

procedure commandCompute (exchangeRate)
  dim exchangeRate, USDollarAmount As Single
  USDollarAmount = Val(txtUSDollarAmount.text)
  EquivCurrencyAmount = exchangeRate * USDollarAmount
End procedure commandCompute
Visual Basic Style Implementation
Public exchangeRate As Single

Private Sub optBrazil_Click()
  exchangeRate = 2.067
End Sub

Private Sub optCanada_Click()
  exchangeRate = 1.16
End Sub

Private Sub optEuropeanUnion_Click()
  exchangeRate = 0.752
End Sub

Private Sub optJapan_Click()
  exchangeRate = 117.82
End Sub

Private Sub commandCompute()
  EquivCurrencyAmount = exchangeRate * Val(txtUSDollarAmount.text)
End Sub
Input Events for the Currency Converter
Input Event  Description
ip1          Enter US Dollar amount
ip2          Click on a country button
ip2.1        Click on Brazil
ip2.2        Click on Canada
ip2.3        Click on European Community
ip2.4        Click on Japan
ip3          Click on Compute button
ip4          Click on Clear button
ip5          Click on Quit button
ip6          Click on OK in error message
Output Events for the Currency Converter
Output Event  Description
op1           Display US Dollar Amount
op2           Display currency name
op2.1         Display Brazilian Reals
op2.2         Display Canadian Dollars
op2.3         Display European Community Euros
op2.4         Display Japanese Yen
op2.5         Display ellipsis
op3           Indicate selected country
op3.1         Indicate Brazil
op3.2         Indicate Canada
op3.3         Indicate European Community
op3.4         Indicate Japan
op4           Reset selected country
op4.1         Reset Brazil
op4.2         Reset Canada
op4.3         Reset European Community
op4.4         Reset Japan
op5           Display foreign currency value
op6           Error Msg: Must select a country
op7           Error Msg: Must enter US Dollar amount
op8           Error Msg: Must select a country and enter US Dollar amount
op9           Reset US Dollar Amount
op10          Reset equivalent currency amount
ASFs and Data Places for the Currency Converter
Atomic System Function  Description
s1                      Store US Dollar amount
s2                      Sense Click on Brazil
s3                      Sense Click on Canada
s4                      Sense Click on E. U.
s5                      Sense Click on Japan
s6                      Sense Click on Compute button
s7                      Sense Click on Clear button
s8                      Sense Click on Quit button

Data Place  Description
d1          US Dollar Amount entered
d2          Country selected

Software Testing: A Craftsmans Approach, 3rd Edition

Object-Oriented Testing

[Figure: high-level FSM for the Currency Converter. States: Idle, US Dollar Amount Entered, Country Selected, Both Inputs Done, Equiv Amount Displayed, and three error-message states (Missing US Dollar Message, Missing Country Message, Missing Country and Dollar Message). Transitions are labeled with input/output event pairs: ip1/op1 (enter amount), ip2/op2 op3 (select country), ip3/op5 (compute), ip3/op6, ip3/op7, ip3/op8 (error messages), ip6 (OK on error message), and ip4 or ip5 (clear or quit) back to Idle with op2.5, op4, op9, and op10.]
Details of Option Button FSM
[Figure: detailed FSM for the option buttons. States: Idle, Brazil, Canada, Euro Comm, Japan. Each click event ip2.1-ip2.4 produces the corresponding display and indicate events (op2.x, op3.x) and, when another country was already selected, the reset event op4.x for the previously selected country.]
StateChart for the Currency Converter
[Figure: StateChart for the Currency Converter. An outer lifecycle (In Storage, Executing, End Application) wraps the GUI behavior: ip5 quits, ip4 clears with op2.5, op4, op9, and op10. Inside, the states Idle, Country Selected, U.S. Dollar Amount Selected, Both Inputs Done, Equiv. Amount Displayed, and the three missing-input message states are connected by the transitions ip1/op1, ip2/op2 op3, ip3/op5, ip3/op7, ip3/op8, and ip6.]
Observations and Conclusions
Classes/objects are MUCH more complicated than procedures:
- need to consider inheritance and polymorphism (the encapsulation part is easy)
- o-o design proceeds by composition, not decomposition
Complexity is moved from methods to messaging among objects:
- hence unit level testing becomes integration level testing
- it's like that bump in the rug: you can push down on it, but it pops up somewhere else
Models to describe integration level testing are inadequate (you can't test what you don't model).
Extended Definitions
An MM-Path in object-oriented software is a sequence of method executions linked by messages.
An MM-Path starts with a method and ends when it reaches a method which does not issue any messages of its own (message quiescence). Since MM-Paths are composed of linked method-message pairs in an object network, they interleave and branch off from other MM-Paths.
An atomic system function (ASF) is a sequence of statements that begins with an input port event and ends with an output port event.
An ASF begins with a port input event. This system level input triggers the method-message sequence of an MM-Path, which may trigger other MM-Paths. The sequence of MM-Paths ends with a port output event (event quiescence).
Constructs for Event- and Message-Driven Petri Nets
[Figure: graphical symbols for Event- and Message-Driven Petri Nets: port input event, port output event, data place, method execution path, and message send/return.]
Message from object A to object B
[Figure: an EDPN fragment in which a method execution path in object A sends a message to object B and receives the return.]
(Second Edition Cover Story)

[Figure: an EDPN in which two objects' method execution paths (mep1-mep6) over data places d1-d6 are linked by the messages msg1 and msg2.]
ASFs for the Currency Converter
[Figure: EDPNs for the eight ASFs s1-s8 of the Currency Converter, with port input events p1-p8, data places d1 and d2, and port output events p10-p27.]
EDPN Composition of four ASFs
[Figure: the EDPN composition of four ASFs (s1, s4, s6, s7): the data places d1 and d2 are shared, so the store-amount, select-country, compute, and clear threads connect through them.]
Directed Graph of Intended ASF Sequences
[Figure: directed graph of the intended sequencing of ASFs s1 through s8.]
Forced Navigation Adjacency Matrix
(rows: from ASF; columns: to ASF)

      s1  s2  s3  s4  s5  s6  s7  s8
s1    1   1   1   1   1   0   0   0
s2    0   0   1   1   1   1   0   0
s3    0   1   0   1   1   1   0   0
s4    0   1   1   0   1   1   0   0
s5    0   1   1   1   0   1   0   0
s6    1   1   1   1   1   0   1   1
s7    1   1   1   1   1   0   0   1
s8    0   0   0   0   0   0   0   0
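Read as a successor relation, the matrix drives a quick legality check on candidate ASF sequences; a minimal Python sketch (mine, not from the text):

    ALLOWED = {   # successor sets, one per row of the matrix above
        "s1": {"s1", "s2", "s3", "s4", "s5"},
        "s2": {"s3", "s4", "s5", "s6"},
        "s3": {"s2", "s4", "s5", "s6"},
        "s4": {"s2", "s3", "s5", "s6"},
        "s5": {"s2", "s3", "s4", "s6"},
        "s6": {"s1", "s2", "s3", "s4", "s5", "s7", "s8"},
        "s7": {"s1", "s2", "s3", "s4", "s5", "s8"},
        "s8": set(),
    }

    def is_legal(sequence):
        # True iff every adjacent ASF pair is an allowed transition.
        return all(b in ALLOWED[a] for a, b in zip(sequence, sequence[1:]))

    print(is_legal(["s1", "s2", "s6", "s8"]))  # True: amount, country, compute, quit
    print(is_legal(["s8", "s6"]))              # False: nothing follows quit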