Agenda
Test Cases for the Triangle Problem
Test Cases for the NextDate Problem
Test Cases for the Commission Problem
Weak Equivalence Class Testing
Strong Equivalence Class Testing
Traditional Equivalence Class Testing
Equivalence Class Test Cases for the Triangle Problem
Equivalence Class Test Cases for the NextDate Function
Equivalence Class Test Cases for the Commission Problem
Guidelines and Observations
F(x1, x2), a ≤ x1 ≤ b, c ≤ x2 ≤ d
Boundary value analysis focuses on the boundary of the input space to identify test cases. The rationale is that errors tend to occur near the extreme values of an input variable.
Basic idea: use input variable values at their minimum (min), just above the minimum (min+), at a nominal value (nom), just below the maximum (max-), and at the maximum (max).
Single fault assumption in reliability theory: failures are only rarely the result of the simultaneous occurrence of two (or more) faults. The boundary value analysis test cases are therefore obtained by holding all but one variable at their nominal values and letting that one variable assume its extreme values.
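The single-fault pattern above can be sketched in Python. The function name and the triangle-problem range 1..200 are illustrative assumptions, not from the slides; the 4n + 1 count is the one stated below for boundary value analysis.

```python
def bva_test_cases(ranges):
    """Boundary value analysis under the single-fault assumption.

    ranges: dict mapping variable name -> (min, max).
    One variable at a time takes a boundary value (min, min+, max-, max)
    while the others stay nominal, giving 4n + 1 cases for n variables
    (the all-nominal case is shared)."""
    nominal = {v: (lo + hi) // 2 for v, (lo, hi) in ranges.items()}
    cases = [dict(nominal)]                      # the single all-nominal case
    for v, (lo, hi) in ranges.items():
        for value in (lo, lo + 1, hi - 1, hi):   # min, min+, max-, max
            case = dict(nominal)
            case[v] = value
            cases.append(case)
    return cases

# Triangle problem sketch: three sides, each assumed to range over 1..200
cases = bva_test_cases({"a": (1, 200), "b": (1, 200), "c": (1, 200)})
print(len(cases))  # 4*3 + 1 = 13
```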
e.g. NextDate function, commission problem
Create artificial bounds, e.g. triangle problem
Decision table-based testing, e.g. PIN and transaction type in the SATM system
Boolean variables
Boundary value analysis works well when the program to be tested is a function of several independent variables that represent bounded physical quantities
e.g. the NextDate test cases are inadequate (little stress on February; dependencies among month, day, and year)
e.g. variables that refer to physical quantities, such as temperature, air speed, load, etc.
Robustness Testing
Simple extension of boundary value analysis. In addition to the five boundary value analysis values of a variable, see what happens when the extrema are exceeded: a value slightly greater than the maximum (max+) and a value slightly less than the minimum (min-). Focuses on the expected outputs for these out-of-range inputs.
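A minimal sketch of the extension, reusing the single-fault pattern. The NextDate ranges come from the slides; the function name is illustrative.

```python
def robustness_test_cases(ranges):
    """Robustness testing: the boundary value analysis values plus the
    two out-of-range values min- and max+ per variable, so 6n + 1 cases
    for n variables under the single-fault assumption."""
    nominal = {v: (lo + hi) // 2 for v, (lo, hi) in ranges.items()}
    cases = [dict(nominal)]
    for v, (lo, hi) in ranges.items():
        # min-, min, min+, max-, max, max+  (nominal is the shared case)
        for value in (lo - 1, lo, lo + 1, hi - 1, hi, hi + 1):
            case = dict(nominal)
            case[v] = value
            cases.append(case)
    return cases

# NextDate ranges: month 1..12, day 1..31, year 1812..2012
cases = robustness_test_cases({"month": (1, 12), "day": (1, 31),
                               "year": (1812, 2012)})
print(len(cases))  # 6*3 + 1 = 19
```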
Worst case analysis: more than one variable can hold an extreme value at the same time. Procedure:
For each variable, create the set <min, min+, nom, max-, max>
Take the Cartesian product of these sets to generate test cases
For n variables this yields 5^n test cases (as opposed to 4n + 1 test cases for boundary value analysis)
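The Cartesian product step is a one-liner with itertools; the triangle-problem range 1..200 is again an assumed example.

```python
from itertools import product

def worst_case_test_cases(ranges):
    """Worst-case testing: drop the single-fault assumption and take the
    Cartesian product of the five-value set <min, min+, nom, max-, max>
    of every variable, giving 5**n test cases for n variables."""
    five = {v: (lo, lo + 1, (lo + hi) // 2, hi - 1, hi)
            for v, (lo, hi) in ranges.items()}
    names = list(five)
    return [dict(zip(names, combo)) for combo in product(*five.values())]

cases = worst_case_test_cases({"a": (1, 200), "b": (1, 200), "c": (1, 200)})
print(len(cases))  # 5**3 = 125
```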
Follows the generalization pattern Same limitations Robust worst case testing can be applied
The most widely practiced form of functional testing. It is the most intuitive and the least uniform, with no guidelines: the tester uses domain knowledge, experience with similar programs, and ad hoc testing. Its effectiveness depends on the abilities of the tester. Even though it is highly subjective, it often results in a set of test cases that is more effective at revealing faults than the test sets generated by the other methods.
[Table: the worst-case test cases for the triangle problem: combinations of the five boundary values of sides a, b, and c, each with its expected output (Equilateral, Isosceles, Scalene, or Not a Triangle)]
year: min = 1812, min+ = 1813, nom = 1912, max- = 2011, max = 2012
Boundary value analysis yields 13 test cases; worst-case testing yields 125. Use boundary values for the output range, especially near the threshold points of $1000 and $1800. Part of the reason for using the output range to determine test cases is that test cases drawn from the input range are almost all in the 20% commission zone. We want input variable combinations that stress the output boundary values: $100, $1000, $1800, and $7800.
Test case 9 is the $1000 border point. If we tweak the input variables, we get values just below and just above the border. This is a form of special value testing based on mathematical insight.
Output Special Value Test Cases
Case   locks   stocks   barrels   sales   commission   comment
1      10      11       9         1005    100.75       border point +
2      18      17       19        1795    219.25       border point -
3      18      19       17        1805    221.00       border point +
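The border-point cases can be checked against the commission schedule (10% on the first $1000 of sales, 15% on the next $800, 20% above $1800; unit prices $45, $30, $25 from the sales formula). A minimal sketch; the function name is illustrative.

```python
def commission(locks, stocks, barrels):
    """Commission for the commission problem: 10% on the first $1000 of
    sales, 15% on the next $800, 20% on anything over $1800."""
    sales = 45 * locks + 30 * stocks + 25 * barrels
    if sales > 1800:
        return 0.10 * 1000 + 0.15 * 800 + 0.20 * (sales - 1800)
    if sales > 1000:
        return 0.10 * 1000 + 0.15 * (sales - 1000)
    return 0.10 * sales

# The three output special value cases: sales of 1005, 1795, and 1805
for case in [(10, 11, 9), (18, 17, 19), (18, 19, 17)]:
    print(case, round(commission(*case), 2))
```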
Equivalence Classes
Motivations
Equivalence classes form a partition of a set, where a partition is a collection of mutually disjoint subsets whose union is the entire set (completeness, non-redundancy). The idea is to identify test cases by using one element from each equivalence class; elements of the same class are treated the same and traverse the same execution path. The key is the choice of the equivalence relation that determines the classes.
Second-guess the likely implementation, and think about the functional manipulations that must somehow be present in it.
Accomplished by using one value from each equivalence class of each variable in a test case
Test Case   a    b    c
WE1         a1   b1   c1
WE2         a2   b2   c2
WE3         a3   b3   c1
WE4         a1   b4   c2

A = A1 ∪ A2 ∪ A3, B = B1 ∪ B2 ∪ B3 ∪ B4, C = C1 ∪ C2; a1 ∈ A1, b3 ∈ B3, c2 ∈ C2
Number of weak equivalence class test cases = number of classes in the partition with the largest number of subsets
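The "cycle the smaller partitions" construction behind the table can be sketched as follows; class labels stand in for representative values, and the function name is illustrative.

```python
from itertools import cycle, islice

def weak_normal_cases(partitions):
    """Weak normal equivalence class testing: cover every class of
    every variable, one class per variable per test case.  The number
    of cases equals the size of the largest partition; the smaller
    partitions are recycled."""
    width = max(len(classes) for classes in partitions.values())
    columns = {v: list(islice(cycle(classes), width))
               for v, classes in partitions.items()}
    return [{v: columns[v][i] for v in partitions} for i in range(width)]

# A has 3 classes, B has 4, C has 2 -> 4 weak test cases, as in the table
cases = weak_normal_cases({"a": ["a1", "a2", "a3"],
                           "b": ["b1", "b2", "b3", "b4"],
                           "c": ["c1", "c2"]})
for c in cases:
    print(c)
```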
Based on the Cartesian Product of the partition subsets A x B x C = 3 x 4 x 2 = 24 elements Equivalence relations can be defined on the output range of the program function being tested
Test Case   a    b    c
SE1         a1   b1   c1
SE2         a1   b1   c2
SE3         a1   b2   c1
SE4         a1   b2   c2
SE5         a1   b3   c1
SE6         a1   b3   c2
SE7         a1   b4   c1
SE8         a1   b4   c2
SE9         a2   b1   c1
SE10        a2   b1   c2
SE11        a2   b2   c1
SE12        a2   b2   c2
SE13        a2   b3   c1
SE14        a2   b3   c2
SE15        a2   b4   c1
SE16        a2   b4   c2
SE17        a3   b1   c1
SE18        a3   b1   c2
SE19        a3   b2   c1
SE20        a3   b2   c2
SE21        a3   b3   c1
SE22        a3   b3   c2
SE23        a3   b4   c1
SE24        a3   b4   c2
Valid inputs: 1 ≤ lock ≤ 70, 1 ≤ stock ≤ 80, 1 ≤ barrel ≤ 90
Invalid inputs: lock < 1, lock > 70, stock < 1, stock > 80, barrel < 1, barrel > 90
For valid inputs, use one value from each valid class (like weak equivalence testing) For invalid inputs, a test case will have one invalid value and the remaining values will all be valid (single failure)
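The "one valid case plus one single-invalid case per invalid class" rule can be sketched directly; the representative values (35, 40, 45, and the out-of-range values) are illustrative picks from the ranges above.

```python
def traditional_cases(valid, invalid):
    """Traditional equivalence class testing: one case with a valid
    value for every variable, plus one case per invalid value in which
    only that variable is invalid (single-failure assumption)."""
    base = {v: vals[0] for v, vals in valid.items()}
    cases = [dict(base)]
    for v, bad_values in invalid.items():
        for bad in bad_values:
            case = dict(base)
            case[v] = bad
            cases.append(case)
    return cases

# Commission problem: 1 <= lock <= 70, 1 <= stock <= 80, 1 <= barrel <= 90
cases = traditional_cases(
    valid={"lock": [35], "stock": [40], "barrel": [45]},
    invalid={"lock": [0, 71], "stock": [0, 81], "barrel": [0, 91]})
print(len(cases))  # 1 all-valid case + 6 single-invalid cases = 7
```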
Problems:
Very often, the specification does not define what the expected output for an invalid test case should be, so a lot of time is spent defining expected outputs. Strongly typed languages eliminate the need to consider many kinds of invalid inputs.
Outputs: Not a Triangle, Scalene, Isosceles, Equilateral Easier to identify output (range) equivalence classes
R1 = {<a, b, c> : the triangle with sides a, b, and c is equilateral} R2 = {<a, b, c> : the triangle with sides a, b, and c is isosceles} R3 = {<a, b, c> : the triangle with sides a, b, and c is scalene} R4 = {<a, b, c> : sides a, b, and c do not form a triangle}
Test Case   a   b   c   Expected Output
OE1         5   5   5   Equilateral
OE2         2   2   3   Isosceles
OE3         3   4   5   Scalene
OE4         4   1   2   Not a Triangle
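A sketch of the function under test, classifying a triple into the four output classes R1-R4; checked against the OE test cases above.

```python
def triangle_type(a, b, c):
    """Classify <a, b, c> into the four output equivalence classes of
    the triangle problem."""
    # Triangle inequality: every side shorter than the sum of the others
    if a >= b + c or b >= a + c or c >= a + b:
        return "Not a Triangle"
    if a == b == c:
        return "Equilateral"
    if a == b or b == c or a == c:
        return "Isosceles"
    return "Scalene"

# One representative per output class:
for sides in [(5, 5, 5), (2, 2, 3), (3, 4, 5), (4, 1, 2)]:
    print(sides, triangle_type(*sides))
```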
Input variables
Traditional approach
M1 = { month : 1 ≤ month ≤ 12 }
D1 = { day : 1 ≤ day ≤ 31 }
Y1 = { year : 1812 ≤ year ≤ 2012 }
M2 = { month : month < 1 }
M3 = { month : month > 12 }
D2 = { day : day < 1 }
D3 = { day : day > 31 }
Y2 = { year : year < 1812 }
Y3 = { year : year > 2012 }
Month   Day   Year      Expected Output
6       15    1912      6/16/1912
-1      15    1912      invalid input
13      15    1912      invalid input
6       -1    1912      invalid input
6       32    1912      invalid input
6       15    < 1812    invalid input
6       15    > 2012    invalid input
Traditional approach is deficient because it treats the elements of a class at the valid/invalid level Different approach: What must be done to an input date?
M1 = { month : month has 30 days }
M2 = { month : month has 31 days }
M3 = { month : month is February }
D1 = { day : 1 ≤ day ≤ 28 }
D2 = { day : day = 29 }
D3 = { day : day = 30 }
D4 = { day : day = 31 }
Y1 = { year : year = 1900 }
Y2 = { year : 1812 ≤ year ≤ 2012 AND (year ≠ 1900) AND (year mod 4 = 0) }
Y3 = { year : 1812 ≤ year ≤ 2012 AND (year mod 4 ≠ 0) }
Not a perfect set of equivalence classes!
Lock:   L1 = { lock : 1 ≤ lock ≤ 70 },  L2 = { lock : lock < 1 },  L3 = { lock : lock > 70 }
Stock:  S1 = { stock : 1 ≤ stock ≤ 80 },  S2 = { stock : stock < 1 },  S3 = { stock : stock > 80 }
Barrel: B1 = { barrel : 1 ≤ barrel ≤ 90 },  B2 = { barrel : barrel < 1 },  B3 = { barrel : barrel > 90 }
sales = 45 x locks + 30 x stocks + 25 x barrels
L1 = { <lock, stock, barrel> : sales ≤ 1000 }
L2 = { <lock, stock, barrel> : 1000 < sales ≤ 1800 }
L3 = { <lock, stock, barrel> : sales > 1800 }
Output Range Equivalence Class Test Cases
locks   stocks   barrels   sales   commission
5       5        5         500     50.00
15      15       15        1500    175.00
25      25       25        2500    360.00
1. The traditional form of equivalence class testing is generally not as thorough as weak equivalence class testing, which in turn is not as thorough as the strong form of equivalence class testing.
2. The only time it makes sense to use the traditional approach is when the implementation language is not strongly typed.
3. If error conditions are a high priority, we could extend strong equivalence class testing to include invalid classes (e.g. the commission problem).
4. Equivalence class testing is appropriate when input data is defined in terms of ranges and sets of discrete values. This is certainly the case when system malfunctions can occur for out-of-limit variable values.
5. Equivalence class testing is strengthened by a hybrid approach with boundary value testing; we can reuse the effort made in defining the equivalence classes (e.g. the NextDate function).
6. Equivalence class testing is indicated when the program function is complex. In such cases, the complexity of the function can help identify useful equivalence classes, as in the NextDate function.
7. Strong equivalence class testing presumes that the variables are independent when the Cartesian product is taken. If there are any dependencies, these will often generate error test cases, as they did in the NextDate function.
8. Several tries may be needed before the right equivalence relation is discovered, as we saw in the NextDate example. In other cases there is an obvious or natural equivalence partition. When in doubt, the best bet is to try to second-guess aspects of any reasonable implementation.
References
Paul C. Jorgensen, Software Testing: A Craftsman's Approach, 2nd edition, CRC Press (Chapters 5 and 6)
Chapter 6
Equivalence Class Testing
Equivalence class testing uses information about the functional mapping itself to identify test cases
Equivalence Relations
Given a relation R defined on some set S, R is an equivalence relation if (and only if), for all x, y, and z in S:
R is reflexive, i.e., xRx
R is symmetric, i.e., if xRy, then yRx
R is transitive, i.e., if xRy and yRz, then xRz
An equivalence relation, R, induces a partition on the set S, where a partition is a set of subsets of S such that:
The intersection of any two subsets is empty, and
The union of all the subsets is the original set S
Note that the intersection property assures no redundancy, and the union property assures no gaps.
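Both properties can be checked concretely. A minimal sketch: any key function induces an equivalence relation (x R y iff key(x) = key(y)), and grouping by the key yields the partition; the relation "same remainder mod 4" is an illustrative example.

```python
def partition_by(key, elements):
    """Group elements into equivalence classes of the relation
    x R y iff key(x) == key(y); the classes partition the set."""
    classes = {}
    for x in elements:
        classes.setdefault(key(x), set()).add(x)
    return list(classes.values())

# Example relation on 0..19: x R y iff x mod 4 == y mod 4
parts = partition_by(lambda x: x % 4, range(20))

# No redundancy: classes are pairwise disjoint
assert all(p & q == set() for p in parts for q in parts if p is not q)
# No gaps: the union is the whole set
assert set().union(*parts) == set(range(20))
print(len(parts))  # 4 classes
```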
Equivalence Partitioning
Define relation R as follows: for x, y in the domain, xRy iff F(x) = F(y). Facts:
1. R is an equivalence relation.
2. An equivalence relation induces a partition on a set.
3. Works best when F is many-to-one.
4. Test cases are formed by selecting one value from each equivalence class (a pre-image set).
This reduces redundancy, though identifying the classes may be hard.
[Figure: the input space a ≤ x1 ≤ d, e ≤ x2 ≤ g divided into equivalence classes by the points b and c on x1 and f on x2]
Equivalence classes (of valid values): {a <= x1 < b}, {b <= x1 < c}, {c <= x1 <= d} {e <= x2 < f}, {f <= x2 <= g}
Equivalence classes (of valid and invalid values): {a <= x1 < b}, {b <= x1 < c}, {c <= x1 <= d}, {e <= x2 < f}, {f <= x2 <= g} Invalid classes: {x1 < a}, {x1 > d}, {x2 < e}, {x2 > g}
Software Testing: A Craftsman's Approach, 3rd Edition, Equivalence Class Testing
Best practice is to select an equivalence relation that reflects the behavior being tested.
Year (are these disjoint?)
Y1 = { year : year = 2000 }
Y2 = { year : 1812 ≤ year ≤ 2012 AND (year ≠ 0 mod 100) AND (year = 0 mod 4) }
Y3 = { year : 1812 ≤ year ≤ 2012 AND (year ≠ 0 mod 4) }
All years must be in range: 1812 <= year <= 2012 Note that these equivalence classes are disjoint.
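The disjointness claim (and the remaining gap in the partition) can be checked mechanically from the class definitions above:

```python
# Verify that Y1, Y2, Y3 are pairwise disjoint, and show that they do
# not cover every year in range: a century year other than 2000 falls
# in no class (within 1812..2012 that is exactly 1900).
years = range(1812, 2013)
Y1 = {2000}
Y2 = {y for y in years if y % 100 != 0 and y % 4 == 0}
Y3 = {y for y in years if y % 4 != 0}

assert Y1 & Y2 == set() and Y1 & Y3 == set() and Y2 & Y3 == set()
uncovered = set(years) - (Y1 | Y2 | Y3)
print(sorted(uncovered))  # [1900]
```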
Notice that all forms of equivalence class testing presume that the variables in the input domain are independent; logical dependencies are unrecognized.
How many equivalence classes will be needed for <x, y, z> in the Not a Triangle case?
In-Class Exercise
Apply the working backwards approach to develop equivalence classes for the Commission Problem. Hint: use boundaries in the output space.
Assumption Matrix

                  Valid values
Single fault      Boundary Value Analysis; Weak Normal Equiv. Class
Multiple fault    Worst Case Analysis; Strong Normal Equiv. Class
Chapter 7
Decision Table Based Testing
Equivalent to forming a decision table in which: inputs are conditions outputs are actions Test every (possible) rule in the decision table. Recommended for logically complex situations.
Decision Tables
Represent complex conditional behavior. Support extensive analysis:
consistency, completeness, redundancy, algebraic simplification
Executable (and compilable). Two forms: limited and extended entry. Don't-care condition entries require special attention. Dependencies usually yield impossible situations.
[Table: an example limited-entry decision table with conditions and actions a1-a4 as rows, rules as columns, and X entries marking the actions each rule selects]
A limited-entry decision table with n conditions has 2^n rules. A don't-care entry doubles the count of a rule. What are the possibilities when the rule count is not 2^n? It is more precise to use F! (must be false) than "-" (don't care).
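The doubling rule can be made concrete: expanding a rule's don't-cares enumerates the full rules it stands for. A small sketch, with None standing in for a don't-care entry.

```python
from itertools import product

def expand_rule(entries):
    """Expand a limited-entry decision table rule.  Each condition
    entry is True, False, or None for a don't-care; a rule with k
    don't-cares stands for 2**k full rules."""
    choices = [(e,) if e is not None else (True, False) for e in entries]
    return list(product(*choices))

# A rule (T, -, -) over three conditions covers 2**2 = 4 full rules:
for rule in expand_rule((True, None, None)):
    print(rule)
```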
Rule 9 is identical to Rule 4 (T, F, F) Since the action entries for rules 4 and 9 are identical, there is no ambiguity, just redundancy.
Suggested actions: true pass, true fail, false pass, false fail, impossible
Rule 9 has the same condition entries as Rule 4 (T, F, F), but since the action entries for rules 4 and 9 are different, there is ambiguity. This table is inconsistent, and the inconsistency implies non-determinism.
4. Notice that many of these are impossible, e.g., <M1, D4, * >
This decision table will have 36 rules, and corresponds to the cross product. Many of the rules will be logically impossible. Many rules would collapse, except for considerations for December.
Notice there are 40 rules in this decision table, corresponding to the 40 elements in the cross product of the revised equivalence classes.
Procedure for Decision Table Based Testing
1. Determine conditions and actions (might need to iterate).
2. Develop a (the!) decision table, watching for completeness, don't-care entries, and redundant or inconsistent rules.
3. Each rule defines a test case.
Chapter 8
Retrospective on Functional Testing
Case Study
A hypothetical Insurance Premium Program computes the semiannual car insurance premium based on two parameters, the policy holder's age and driving record:
Premium = BaseRate * ageMultiplier - safeDrivingReduction
The ageMultiplier is a function of the policy holder's age, and the safe driving reduction is given when the current points (assigned by traffic courts for moving violations) on the policy holder's driver's license are below an age-related cutoff. Policies are written for drivers in the age range of 16 to 100. Once a policy holder has 12 points, his/her driver's license is suspended (hence there is no need for insurance). The BaseRate changes from time to time; for this example, it is $500 for a semiannual premium.
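A sketch of the computation. Only the formula and the ranges (age 16..100, points 0..12, BaseRate $500) come from the case study; the multiplier bands, cutoffs, and reduction amount below are illustrative assumptions.

```python
def premium(age, points, base_rate=500.0):
    """Premium = BaseRate * ageMultiplier - safeDrivingReduction.
    The bands, cutoffs, and $50 reduction are ASSUMED values for
    illustration; the case study does not give them numerically."""
    assert 16 <= age <= 100 and 0 <= points <= 12
    if age < 25:
        multiplier, cutoff = 2.8, 1     # assumed young-driver band
    elif age < 65:
        multiplier, cutoff = 1.0, 3     # assumed middle band
    else:
        multiplier, cutoff = 1.5, 2     # assumed senior band
    reduction = 50.0 if points < cutoff else 0.0  # assumed flat reduction
    return base_rate * multiplier - reduction

print(premium(40, 0))  # 500 * 1.0 - 50 = 450.0
print(premium(40, 5))  # 500 * 1.0 -  0 = 500.0
```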
[Figure: a candidate set of test points plotted over age (0-100) and points: severe gaps!]
[Figure: another candidate set of test points over the same ranges: severe redundancy!]
The points cutoff is also a range, further indication for equivalence class testing.
Points {0, 1} Points {2, 3} Points {4, 5} Points {6, 7} Points {8, 9, 10, 11, 12}
[Figures: equivalence-class test points over age (0-100) and points, refined step by step: ahhhh, at last!]
Wrap Up
The inherent nature of the program being tested should dictate the test method.
The decision table expert system (slide 7) recommendation is just a start. Applications are seldom chemically pure.
Hybrid combinations of test methods can be very useful. Good judgment, based on insight, is a sign of a craftsperson.
Chapter 9
Path Testing, Part 2
Path Testing II
McCabe's Example
McCabe's Original Graph
[Figure: McCabe's original graph with nodes A-G and edges 1-10]
V(G) = 10 - 7 + 2(1) = 5
For the strongly connected version (one edge added from the sink back to the source): V(G) = e - n + p = 11 - 7 + 1 = 5
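Both computations are instances of the cyclomatic number formula, sketched below; the function name is illustrative.

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """V(G) = e - n + 2p for a program graph with e edges, n nodes,
    and p connected components."""
    return edges - nodes + 2 * components

# McCabe's example graph: 10 edges, 7 nodes, one component
print(cyclomatic_complexity(10, 7))  # 5
# Strongly connected variant uses V(G) = e - n + p with the added edge:
print(11 - 7 + 1)  # 5
```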
Basis Paths

p1: A, B, C, G
p2: A, B, C, B, C, G
p3: A, B, E, F, G
p4: A, D, E, F, G
p5: A, D, F, G

[Table: edge-traversal counts for p1-p5 over edges 1-10, alongside the figure of the graph with nodes A-G]
V(G) = 23 - 20 + 2(1) = 5
Basis Path Set B1:
p1: 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 16, 18, 19, 20, 22, 23
p2: 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 16, 18, 19, 20, 22, 23
p3: 4, 5, 6, 7, 8, 9, 11, 12, 13, 21, 22, 23
p4: 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 15, 20, 22, 23
p5: 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 16, 17, 19, 20, 22, 23
There are 8 topologically possible paths. 4 are feasible, and 4 are infeasible. Exercise: Is every basis path feasible?
Essential Complexity
McCabe's notion of Essential Complexity deals with the extent to which a program violates the precepts of Structured Programming. To find the Essential Complexity of a program graph:
1. Identify a group of source statements that corresponds to one of the basic Structured Programming constructs.
2. Condense that group of statements into a separate node (with a new name).
3. Continue until no more Structured Programming constructs can be found.
The Essential Complexity of the original program is the cyclomatic complexity of the resulting program graph.
The essential complexity of a Structured Program is 1. Violations of the precepts of Structured Programming increase the essential complexity.
[Figure: an original program graph reduces, by condensing its structured constructs, to a single node]
Advantages
McCabe's approach does address both gaps and redundancies. Essential complexity leads to better programming practices. McCabe proved that violations of the structured programming constructs increase cyclomatic complexity, and violations cannot occur singly.
Program NextDate
Dim tomorrowDay, tomorrowMonth, tomorrowYear As Integer
Dim day, month, year As Integer
1.  Output("Enter today's date in the form MM DD YYYY")
2.  Input(month, day, year)
3.  Case month Of
4.  Case 1: month Is 1,3,5,7,8, Or 10:  '31-day months (except Dec.)
5.    If day < 31
6.      Then tomorrowDay = day + 1
7.      Else
8.        tomorrowDay = 1
9.        tomorrowMonth = month + 1
10.   EndIf
11. Case 2: month Is 4,6,9, Or 11  '30-day months
12.   If day < 30
13.     Then tomorrowDay = day + 1
14.     Else
15.       tomorrowDay = 1
16.       tomorrowMonth = month + 1
17.   EndIf
18. Case 3: month Is 12:  'December
19.   If day < 31
20.     Then tomorrowDay = day + 1
21.     Else
22.       tomorrowDay = 1
23.       tomorrowMonth = 1
24.       If year = 2012
25.         Then Output("2012 is over")
26.         Else tomorrowYear = year + 1
27.       EndIf
28.   EndIf
29. Case 4: month Is 2:  'February
30.   If day < 28
31.     Then tomorrowDay = day + 1
32.     Else
33.       If day = 28
34.         Then
35.           If ((year MOD 4) = 0) AND ((year MOD 400) <> 0)
36.             Then tomorrowDay = 29  'leap year
37.             Else  'not a leap year
38.               tomorrowDay = 1
39.               tomorrowMonth = 3
40.           EndIf
41.         Else If day = 29
42.             Then tomorrowDay = 1
43.               tomorrowMonth = 3
44.             Else Output("Cannot have Feb.", day)
45.           EndIf
46.       EndIf
47.   EndIf
48. EndCase
49. Output("Tomorrow's date is", tomorrowMonth, tomorrowDay, tomorrowYear)
50. End NextDate
[Figure: program graph of NextDate, nodes 1-50]
Output("Locks sold: ", totalLocks)
Output("Stocks sold: ", totalStocks)
Output("Barrels sold: ", totalBarrels)
sales = lockPrice * totalLocks + stockPrice * totalStocks + barrelPrice * totalBarrels
Output("Total sales: ", sales)
If (sales > 1800.0)
  Then
    commission = 0.10 * 1000.0
    commission = commission + 0.15 * 800.0
    commission = commission + 0.20 * (sales - 1800.0)
  Else
    If (sales > 1000.0)
      Then
        commission = 0.10 * 1000.0
        commission = commission + 0.15 * (sales - 1000.0)
      Else
        commission = 0.10 * sales
    EndIf
EndIf
Output("Commission is $", commission)
End Commission
[Figure: program graph of the commission computation, nodes 10-33]
Chapter 10
Data Flow Testing and Slice Testing
Starting point is a program, P, with program graph G(P), and the set V of variables in program P. "Interesting" data flows are then tested as mini-functions.
Definitions
Node n ∈ G(P) is a defining node of the variable v ∈ V, written DEF(v, n), iff the value of the variable v is defined at the statement fragment corresponding to node n.
Node n ∈ G(P) is a usage node of the variable v ∈ V, written USE(v, n), iff the value of the variable v is used at the statement fragment corresponding to node n.
A usage node USE(v, n) is a predicate use (denoted P-use) iff statement n is a predicate statement; otherwise, USE(v, n) is a computation use (denoted C-use).
More Definitions
A definition-use path with respect to a variable v (denoted du-path) is a path in PATHS(P) such that, for some v ∈ V, there are defining and usage nodes DEF(v, m) and USE(v, n) such that m and n are the initial and final nodes of the path.
A definition-clear path with respect to a variable v (denoted dc-path) is a definition-use path in PATHS(P) with initial and final nodes DEF(v, m) and USE(v, n) such that no other node in the path is a defining node of v.
[Figure: program graph fragment with nodes 15-43]
Example (continued)
13. Input(locks)
14. While NOT(locks = -1)
15.   Input(stocks, barrels)
16.   totalLocks = totalLocks + locks
17.   totalStocks = totalStocks + stocks
18.   totalBarrels = totalBarrels + barrels
19.   Input(locks)
20. EndWhile

We have DEF(locks, 13) and DEF(locks, 19); USE(locks, 14), a predicate use; and USE(locks, 16), a computation use.
du-paths for locks are the node sequences <13, 14> (a dc-path), <13, 14, 15, 16>, <19, 20, 14>, and <19, 20, 14, 15, 16>.
Is <13, 14, 15, 16> definition clear? Is <19, 20, 14, 15, 16> definition clear? What about repetitions?
Exercise: Where does the All definition-clear paths coverage metric fit in the Rapps-Weyuker lattice?
A definition-clear du-path represents a small function that can be tested by itself. If a du-path is not definition-clear, it should be tested for each defining node.
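The definition-clear check is mechanical once the defining nodes are known. A small sketch using the DEF(locks, 13) and DEF(locks, 19) facts from the commission fragment; the function name is illustrative.

```python
def is_definition_clear(path, var, defining_nodes):
    """A du-path for var is definition-clear when no node strictly
    between the initial defining node and the final usage node is
    itself a defining node of var."""
    return not any(n in defining_nodes[var] for n in path[1:-1])

defs = {"locks": {13, 19}}  # DEF(locks, 13) and DEF(locks, 19)
print(is_definition_clear([13, 14, 15, 16], "locks", defs))      # True
print(is_definition_clear([19, 20, 14, 15, 16], "locks", defs))  # True
```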
Slice Testing
Often confused with "module execution paths". Main concern: the portions of a program that "contribute" to the value of a variable at some point in the program. Nice analogy with history, a way to separate a complex system into "disjoint" components that interact:
European history
North American history
History of the Orient
A dynamic construct.
Fine Points
"prior to" is the dynamic part of the definition. "contribute" is best understood by extending the Define and Use concepts:
P-use: used in a predicate (decision)
C-use: used in a computation
O-use: used for output
L-use: used for location (pointers, subscripts)
I-use: iteration (internal counters, loop indices)
I-def: defined by input
A-def: defined by assignment
Usually, the set V of variables consists of just one element. One can choose to define a slice as a compilable set of statement fragments -- this extends the meaning of "contribute". Because slices are sets, we can develop a lattice based on the subset relationship.
Software Testing: A Craftsman's Approach, 3rd Edition -- Data Flow Testing
There are these slices on locks (notice that statements 15, 17, and 18 do not appear): S1: S(locks, 13) = {13} S2: S(locks, 14) = {13, 14, 19, 20} S3: S(locks, 16) = {13, 14, 19, 20} S4: S(locks, 19) = {19}
Lattice of Slices
Because a slice is a set of statement fragment numbers, we can find slices that are subsets of other slices. This allows us to work backwards from points in a program, presumably where a fault is suspected. The statements leading to the value of commission when it is output are an excellent example of this pattern. Some researchers propose that this is the way good programmers think when they debug code.
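The subset relation among slices can be computed directly. A small sketch, using the four slices on locks listed earlier (S1..S4 from the slide):

```python
# The slices on `locks` from the commission-problem example.
slices = {
    "S1": {13},
    "S2": {13, 14, 19, 20},
    "S3": {13, 14, 19, 20},
    "S4": {19},
}

# Report every proper-subset pair: these are the edges of the lattice.
for a in slices:
    for b in slices:
        if a != b and slices[a] < slices[b]:
            print(a, "is a proper subset of", b)
```

Working backwards from a suspect point in the program then amounts to walking down this lattice toward smaller slices.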
S36: S(commission, 43) = {3, 4, 5, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 30, 36, 41, 42, 43}
S37: S(commission, 47) = {47}
S38: S(commission, 48) = {3, 4, 5, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 36, 47, 48}
S39: S(commission, 50) = {3, 4, 5, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 36, 50}
S40: S(commission, 51) = {3, 4, 5, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 36, 41, 42, 43, 47, 48, 50}
If there is a problem with commission at line 48, we can divide the program into two parts, the computation of sales at line 34, and the computation of commission between lines 35 and 48. If sales is OK at line 34, the problem must lie in the relative complement; if not, the problem may be in either portion.
Exercise: in what ways is slice splicing distinct from agile (bottom up) programming?
Chapter 11
Retrospective on Structural Testing
Sample Comparisons
[Table: sample comparisons -- for the Triangle and Commission programs, the number of test cases generated by BVA, Worst-Case BVA, Output BVA, Decision Table, DD-Path, DU-Path, and Slice testing, with columns m, n, s, C(M,S), R(M,S), and NR(M,S)]
Chapter 12
Levels of Testing
Levels of Testing

[Figure: waterfall phases -- Preliminary Design, Detailed Design, Coding -- each annotated with "what" it must do and "how" it is done]
Feedback Cycles
Levels of Testing
[Figure: the waterfall V -- Requirements Specification pairs with System Testing, Preliminary Design with Integration Testing, Detailed Design with Unit Testing, above Coding]
Levels of Testing
Levels of Testing
"Exists for the convenience of management" (-- M. Jackson); stifles creativity and unnecessarily constrains the designer's thought processes.
Stresses analysis to the exclusion of synthesis.
High peak in the manpower loading profile.
"Requires perfect foresight" (-- William Agresti): any errors or omissions in early phases will propagate.
Levels of Testing
Rapid Prototyping
Why prototype? 1. To determine feasibility. 2. To obtain early customer feedback. Keep or dispose? To be rapid, many compromises are made; if a prototype is kept, it will be extremely difficult to modify and maintain. Best practice: dispose of it once its purpose has been served.
Levels of Testing
Executable Specifications
Why use an executable specification? 1. To determine behavior 2. To obtain early customer feedback Other uses? 1. Automatic generation of system test cases. 2. Develop order of test case execution 3. Training 4. Early analysis
Levels of Testing
Transformational Implementation
[Figure: Formal Requirements Specification -> series of transformations -> working system; the customer tests the working system]
Requirements Specification
System Testing
Levels of Testing
Agile Methods
Many flavors
eXtreme Programming (XP) SCRUM Test-Driven Development
Customer-Driven Goals
Respond to customer Reduce unnecessary effort Always have something that works
Levels of Testing
Waterfall
Levels of Testing
Agile
Hybrid Models
Because they replace the Requirements specification phase, rapid prototyping and executable specifications can be merged with the iterative models. The Agile Models are highly iterative, but they typically are not used in combination with rapid prototyping or executable specifications.
Levels of Testing
Integration testing, by Kristian Sandahl, IDA, TDDD04, spring 2012

Levels of testing
Requirements -- Acceptance Test (Release testing): validate requirements, verify specification; then maintenance
System Design (Architecture, High-level Design) -- System Testing (Integration testing of modules)
Module Design (Program Design, Detailed Design) -- Module Testing (Integration testing of units)
Implementation of Units (classes, procedures, functions) -- Unit testing
Outline

Informal example
Theory: Decomposition-based integration
Call-graph based integration
Path-based integration
Check balance
Deduct money
Check validity
Communicate with server
Dependency: power supply
Server functions
Register travel
User buttons
Display
Bring everything together: switch on the current, try to buy a ticket!
Bottom-up integration 1

Environment creation: install components, draw cables. Measure, compare with calculation. Power supply.
Bottom-up integration 2

Communication possible? Drivers. Environment: rudimentary client, rudimentary server, communicate with server, power supply.
Driver

A pretend module that requires a sub-system and passes a test case to it.

setup -> driver -> SUT(x) -> verification

The driver treats the SUT (System Under Test) as a black box.
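A minimal sketch of what such a driver looks like in code. `check_balance` is a hypothetical stand-in for a sub-system of the ticket-machine example; the driver does the setup, passes the test case to the SUT, and verifies the result.

```python
# Hypothetical sub-system under test (SUT) for the ticket-machine example.
def check_balance(card_id, accounts):
    return accounts.get(card_id, 0)

def driver():
    # setup
    accounts = {"card-42": 150}
    # pass the test case to the SUT
    result = check_balance("card-42", accounts)
    # verification
    assert result == 150, f"expected 150, got {result}"
    print("driver: test passed")

driver()
```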
Is bottom-up smart?

Yes:
If the basic functions are complicated, error-prone, or have development risks
If a bottom-up development strategy is used
If there are strict performance or real-time requirements

Problems:
Lower-level functions are often off-the-shelf or trivial
Complicated user-interface testing is postponed
End-user feedback is postponed
Effort to write drivers
Top-down integration 1

Sell tickets / Choose ticket / Services
Environment: modules for the services, driving a prototype interface
Show balance, register travel
Test with end-users
Top-down integration 2

Sell tickets / Choose ticket / Services / User buttons / Display / User interface / Read RFID / Show balance / Register travel
The lower-level functions are replaced by stubs.
Stub

A program or a method that simulates the input-output functionality of a missing sub-system by answering to the decomposition sequence of the calling sub-system and returning simulated or canned data.

SUT -> Service(x) -> Stub
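A minimal sketch of a stub in code, under the same ticket-machine example. `server_stub` is a hypothetical stand-in for the not-yet-integrated central server: it answers the calling sub-system with canned data so the top-level logic can be exercised.

```python
# Canned data the stub answers with; the real server is not integrated yet.
CANNED_BALANCES = {"card-42": 150}

def server_stub(request):
    """Simulate the input-output behavior of the missing server."""
    if request["type"] == "balance":
        return CANNED_BALANCES.get(request["card_id"], 0)
    return None

def sell_ticket(card_id, price, server=server_stub):
    # Top-level logic under test; it only sees the server interface.
    balance = server({"type": "balance", "card_id": card_id})
    return balance >= price

print(sell_ticket("card-42", 100))  # True with the canned balance of 150
```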
Is top-down smart?

Yes:
Test cases are defined for the functional requirements of the system
Defects in the general design can be found early
Works well with many incremental development methods
No need for drivers

Problems:
Technical details are postponed -- potential show-stoppers
Many stubs are required
Stubs with many conditions are hard to write
Decomposition-based integration

The functional decomposition tree:
hierarchical order of processes
return edges excluded
reflects the lexical inclusion of units, in terms of the order in which they need to be compiled
Jorgensen, Paul C. "Chapter 13 - Integration Testing". Software Testing: A Craftsman's Approach, Third Edition. Auerbach Publications, 2008. Books24x7.
Go to: http://guide.bibl.liu.se/datavetenskap
Click: Books24x7, Login
Search: ISBN 9780849374753
Table 1: SATM Units and Abbreviated Names

Unit  Level     Unit Name
1     1         SATM System
A     1.1       Device Sense & Control
D     1.1.1     Door Sense & Control
2     1.1.1.1   Get Door Status
3     1.1.1.2   Control Door
4     1.1.1.3   Dispense Cash
E     1.1.2     Slot Sense & Control
5     1.1.2.1   WatchCardSlot
6     1.1.2.2   Get Deposit Slot Status
7     1.1.2.3   Control Card Roller
8     1.1.2.4   Control Envelope Roller
9     1.1.2.5   Read Card Strip
10    1.2       Central Bank Comm.
11    1.2.1     Get PIN for PAN
12    1.2.2     Get Account Status
13    1.2.3     Post Daily Transactions
B     1.3       Terminal Sense & Control
14    1.3.1     Screen Driver
15    1.3.2     Key Sensor
C     1.4       Manage Session
16    1.4.1     Validate Card
17    1.4.2     Validate PIN
18    1.4.2.1   GetPIN
F     1.4.3     Close Session
19    1.4.3.1   New Transaction Request
20    1.4.3.2   Print Receipt
21    1.4.3.3   Post Transaction Local
22    1.4.4     Manage Transaction
23    1.4.4.1   Get Transaction Type
24    1.4.4.2   Get Account Type
25    1.4.4.3   Report Balance
26    1.4.4.4   Process Deposit
27    1.4.4.5   Process Withdrawal
Big-Bang testing

[Figure: decomposition tree -- level 1: A; level 2: B, C, D; level 3: E, F (under B), G, H (under D)]

Unit test A, unit test B, ..., unit test H, then one system-wide test.
Environment: A, B, C, D, E, F, G, H
Bottom-up testing

[Same decomposition tree -- level 1: A; level 2: B, C, D; level 3: E, F (under B), G, H (under D)]

Environments:
Session 1: E, driver(B)
S2: F, driver(B)
S3: E, F, driver(B)
S4: G, driver(D)
S5: H, driver(D)
S6: G, H, driver(D)
S7: E, F, B, driver(A)
S8: C, driver(A)
S9: G, H, D, driver(A)
S10: E, F, B, C, G, H, D, A
General formulas: number of drivers = (nodes - leaves), here 3; number of sessions = (nodes - leaves) + edges, here 10. SATM: 10 drivers, 42 sessions.
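The formulas above can be checked on the sample tree. A small sketch, using the decomposition tree A-(B, C, D), B-(E, F), D-(G, H) from the figure:

```python
# Decomposition tree: interior node -> children.
tree = {"A": ["B", "C", "D"], "B": ["E", "F"], "D": ["G", "H"]}

nodes = set(tree) | {c for cs in tree.values() for c in cs}
leaves = nodes - set(tree)                     # nodes with no children
edges = sum(len(cs) for cs in tree.values())

drivers = len(nodes) - len(leaves)             # one driver per interior node
sessions = (len(nodes) - len(leaves)) + edges  # bottom-up session count

print(drivers, sessions)  # 3 10, matching the slide
```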
Top-down testing

[Same decomposition tree -- level 1: A; level 2: B, C, D; level 3: E, F (under B), G, H (under D)]

Environments:
Session 1: A, stub(B), stub(C), stub(D)
S2: A, B, stub(C), stub(D)
S3: A, stub(B), C, stub(D)
S4: A, stub(B), stub(C), D
S5: A, B, stub(E), stub(F), C, D, stub(G), stub(H)
S6: A, B, E, stub(F), C, D, stub(G), stub(H)
S7: A, B, stub(E), F, C, D, stub(G), stub(H)
S8: A, B, stub(E), stub(F), C, D, G, stub(H)
S9: A, B, stub(E), stub(F), C, D, stub(G), H
S10: A, B, E, F, C, D, G, H
General formulas: number of stubs = (nodes - 1), here 7; number of sessions = (nodes - leaves) + edges, here 10. SATM: 32 stubs, 42 sessions.
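The top-down counts can be checked the same way on the sample tree:

```python
# Same decomposition tree as in the bottom-up example.
tree = {"A": ["B", "C", "D"], "B": ["E", "F"], "D": ["G", "H"]}

nodes = set(tree) | {c for cs in tree.values() for c in cs}
leaves = nodes - set(tree)
edges = sum(len(cs) for cs in tree.values())

stubs = len(nodes) - 1                         # every node except the root is stubbed at some point
sessions = (len(nodes) - len(leaves)) + edges  # same session formula as bottom-up

print(stubs, sessions)  # 7 10, matching the slide
```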
Sandwich testing

[Same decomposition tree; the target level is level 2]

Environments:
Session 1: A, stub(B), stub(C), stub(D)
S2: A, B, stub(C), stub(D)
S3: A, stub(B), C, stub(D)
S4: A, stub(B), stub(C), D
S5: E, driver(B)
S6: F, driver(B)
S7: E, F, driver(B)
S8: G, driver(D)
S9: H, driver(D)
S10: G, H, driver(D)
S11: A, B, E, F, C, D, G, H
Fewer stubs and drivers; risk driven; small-bang at the target level; more complicated.
Potential problems

Artificial: assumes correct units and interfaces
Tests the correct structure only
Investment in stubs and drivers
Retesting
[Figure: part of the SATM call graph]

One session per edge; real code; 40 sessions, but no extra code.
Integrating the direct neighbors of nodes. Number of sessions = nodes - sink nodes (a sink node has no outgoing calls). SATM: 11 sessions.
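The session count is easy to compute from a call graph. A small sketch on a hypothetical four-unit call graph (not the SATM graph):

```python
# Call graph: unit -> units it calls. D makes no calls, so it is a sink.
calls = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

sinks = [unit for unit, out in calls.items() if not out]
sessions = len(calls) - len(sinks)  # one neighborhood session per non-sink node

print(sessions)  # 3: one neighborhood each for A, B, and C
```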
Potential problems

Small-bang problems, especially fault isolation in large neighborhoods
Retesting needed if a node is changed
Assumes correct units
Path-based integration

Base testing on system-level threads
Motivated by overall system behaviour, not the structure
Smooth preparation for system-level testing
Extensions to Definitions

Definitions

Definition: A source node in a program is a statement fragment at which program execution begins or resumes.
Definition: A sink node in a program is a statement fragment at which program execution halts or terminates.
Definition: A module execution path (MEP) is a sequence of statements that begins with a source node and ends with a sink node, with no intervening sink nodes.
Definition: A message is a programming-language mechanism by which one unit transfers control to another unit.
More definitions

Definition: An MM-Path is an interleaved sequence of module execution paths (MEPs) and messages.
Definition: Given a set of units, their MM-Path graph is the directed graph in which nodes are module execution paths and edges correspond to messages and returns from one unit to another.
Example (cont.): source nodes and sink nodes

[Figure: Module A -- source nodes 1, 2; sink nodes 3, 4, 5. Module B -- source node 1; sink node 2. Module C -- source node 1; sink nodes 2, 3, 4. Module execution paths such as MEP(A, I) and MEP(B, II) are connected by messages and returns]
Problems
More effort is needed to identify the MM-Paths. This effort is probably offset by the elimination of stub and driver development.
Chapter 14
System Testing
Threads (= system-level test cases); basic concepts of requirements specification; identifying threads; metrics for system testing
System Testing
Some Observations
Threads are dynamic Threads occur at execution time Threads can be identified in (or even better: derived from) many models
Finite state machines Decision tables Statecharts Petri nets Use Cases (sufficiently detailed)
System Testing
Levels of Threads
A unit level thread is a path in the program graph of a unit. An integration level thread is an MM-Path. There are two levels of system level threads: A single thread A set of interacting threads If necessary, we can deal with threads in systems of systems
System Testing
a sequence of threads:
System Testing
Observe:
several stimulus/response pairs this is the cross-over point between integration and system testing
System Testing
[Figure: E/R diagram -- Data and Device entities with 1..n relationships]
Mainline requirements specification techniques populate some (or all ) portions of this database.
System Testing
[Figure: SATM session FSM with transition probabilities -- 1. Card Entry (bad card 0.05, legitimate card 0.95); 2.1 First PIN Try, 2.2 Second PIN Try, third try (correct PIN 0.90 each); 3. Await Transaction Choice (Button B1 Balance 0.05, Button B2 Deposit 0.10, Button B3 Withdrawal 0.85; low cash 0.10, normal 0.85, low balance 0.05); Print Receipt]
System Testing
[Figure: PIN Entry FSM -- states 2.x.1 "0 Digits Received", 2.x.2 "1 Digit Received", ...; transitions such as digit / echo 'X---' (x1), digit / echo 'XX--' (x2), cancel (x9)]

Port Input Events: digit, cancel
Port Output Events: echo 'X---', echo 'XX--', echo 'XXX-', echo 'XXXX'
Logical Output Events: Correct PIN, Incorrect PIN, Canceled
System Testing
Harness
screen text
System Testing
Metrics for System Testing

In the PIN Entry ASF, for a given PIN, there are 156 distinct paths from the First PIN Try state to the Await Transaction Choice or Card Entry states in the PIN Entry FSM. Of these, 31 correspond to eventually correct PIN entries (1 on the first try, 5 on the second try, and 25 on the third try); the other 125 paths correspond to those with incorrect digits or with cancel keystrokes. To control this explosion, we have two possibilities: pseudo-structural coverage metrics and "true" structural coverage metrics.
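The 156-path count is consistent with each PIN try having 1 correct outcome and 5 distinct incorrect outcomes (an assumption read off the 1/5/25 breakdown above, not stated explicitly in the slide):

```python
# Assumed: each PIN try has 1 correct outcome and WRONG distinct
# incorrect outcomes, with up to three tries allowed.
WRONG = 5

correct = 1 + WRONG + WRONG**2   # succeed on try 1, 2, or 3
incorrect = WRONG**3             # fail all three tries

print(correct, incorrect, correct + incorrect)  # 31 125 156
```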
System Testing
Pseudo-structural Coverage Metrics

Behavioral models provide "natural" metrics:
Decision table metrics: every condition, every action, every rule
FSM metrics: every state, every transition
Petri net metrics: every place, every port event, every transition, every marking

These are pseudo-structural because they are metrics on models of the eventual system.
Software Testing: A Craftsman's Approach, 3rd Edition -- System Testing
Two test cases yield state coverage. How many test cases yield transition coverage?

Input Event Sequence   Path of Transitions
1,2,3,4                x1, x2, x3, x4, x5
1,2,3,5                x1, x2, x3, x4, x6
C                      x7, x11
1,C                    x1, x8, x11
1,2,C                  x1, x2, x9, x11
1,2,3,C                x1, x2, x3, x10, x11
1,2,3,5,1,2,3,4
1,2,3,5,C,1,2,3,4
C,C,C
1. Combinatoric explosion is controlled: selecting test cases from the FSM decomposition reduced 156 threads to 10 test cases.
2. Fault isolation is improved: when a "verify" operation fails, use the FSMs to determine what went wrong, where it went wrong, and when it went wrong.
System Testing
Port Output Events: Display Screen(n, text), Open Door(dep, withdraw), Close Door(dep, withdraw), Dispense Notes(n), Print Receipt(text), Eject ATM Card

Port Input Events: Insert ATM Card(n), Key Press Digit(n), Key Press Cancel, Key Press Button B(n), Insert Deposit Envelope
System Testing
Description: invalid card

Test operator instructions. Initial conditions: screen 1 being displayed. Perform the following sequence of steps:
1. Cause: Insert ATM Card 4
2. Verify: Eject ATM Card
3. Verify: Display Screen(1, null)

Post condition: screen 1 displayed. Test result: ___ Pass ___ Fail
System Testing
Operational Profiles

Zipf's Law: 80% of the activities occur in 20% of the space. Examples: productions of a language syntax, natural-language vocabulary, menu options of a commercial software package, area of an office desktop, floating-point divide on the Pentium chip.

For threads: a small fraction of all possible threads represents the majority of system execution time. Therefore: find the occurrence probabilities of threads and use these to order thread testing.
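A minimal sketch of this ordering. The withdrawal-thread factors (0.95, 0.90, 0.85, 0.85) are taken from the SATM transition figure; the other two threads' factors are illustrative assumptions built from the same figure.

```python
from math import prod

# Thread -> transition probabilities along the thread.
threads = {
    "withdrawal, PIN ok 1st try": [0.95, 0.90, 0.85, 0.85],
    "balance, PIN ok 1st try":    [0.95, 0.90, 0.05],   # assumed factors
    "bad card":                   [0.05],
}

# Order threads by occurrence probability, most likely first.
ranked = sorted(threads, key=lambda t: prod(threads[t]), reverse=True)
for t in ranked:
    print(f"{prod(threads[t]):.7f}  {t}")
```

The most frequent thread (probability 0.95 x 0.90 x 0.85 x 0.85 = 0.6177375) would be tested first.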
[Figure: SATM FSM with transition probabilities, as before]

Frequent thread (Legitimate Card, Correct PIN 1st try, Withdrawal, Normal cash): 0.95 x 0.90 x 0.85 x 0.85 = 0.6177375
Rare thread (Legitimate Card, Bad PIN 1st try, Bad PIN 2nd try, PIN ok 3rd try, Withdraw, Low cash): 0.95 x 0.10 x 0.10 x 0.90 x 0.85 x 0.10 = 0.00072675
System Testing
SATM Threads
Thread 1: Wrong Card
State sequence: Idle, Idle
Event sequence: Display Screen 1, Wrong Card

Thread 2: Three Failed PIN Tries
State sequence: Idle, Await 1st PIN Try, Await 2nd PIN Try, Await 3rd PIN Try, Idle
Event sequence: Display Screen 1, Legitimate Card, Wrong PIN, Wrong PIN, Wrong PIN, Display Screen 4, Display Screen 1

Thread 3:
State sequence: Idle, Await 1st PIN Try, Acquire Transaction Data, Display Balance, Display Balance, Close Session, Idle
System Testing
SATM Threads
Thread 4: Balance then Deposit then Withdraw
State sequence: Idle, Await 1st PIN Try, Acquire Transaction Data, Display Balance, Close Session, Acquire Transaction Data, Process Deposit, Close Session, Acquire Transaction Data, Process Withdrawal, Close Session, Idle

Thread 5: 3 PIN Tries then Balance then Deposit then Withdraw
State sequence: Idle, Await 1st PIN Try, Await 2nd PIN Try, Await 3rd PIN Try, Acquire Transaction Data, Display Balance, Close Session, Acquire Transaction Data, Process Deposit, Close Session, Acquire Transaction Data, Process Withdrawal, Close Session, Idle
System Testing
System Testing
Chapter 15
Interaction Testing
Definitions of Interaction
1. "System behavior as a whole does not satisfy the separate specifications of all its [parts]." (Pamela Zave)
2. Relationship between the whole and its parts.
3. Totality of connections among components.
4. Consequences of connections among components.
5. The result of composition. (Robin Milner)
Interaction Testing
Feature Interaction
Feature: a service provided by a system (activated by a subscriber paying for the service) Telephony Examples (from P. Zave): Notation: d1, d2, d3, and d4 are directory numbers
Call Forwarding: calls to d1 are terminated on d2 iff call forwarding is enabled on d1 and activated by defining d2 as the current destination
Calling Party Identification: when active on d2, d2 receives the directory number of all incoming calls
Call Rejection: allows a subscriber to define a list of directory numbers from which calls will not be completed
Busy Treatment: several possibilities -- call override, call rejection, call forward, call forward on busy, do not disturb, automatic re-call
Interaction Testing
First Clues

1. Feature interaction is a consequence of adaptive maintenance (i.e., adding new capabilities to an existing system).
2. Interactions involve connections, and the essence of every modeling technique is to find/establish connections (see Wurman, Information Anxiety).
3. Composition creates connections.

What can be connected? First approximation:

interaction   Should be   Should not be
Is            intended    unintended
Is not        missing     null
Interaction Testing
Definitions of Determinism
Let C be some calculation, C(input) = output.
Definition 1: If the result of C can always be predicted, C is deterministic (or pre-determined); otherwise C is non-deterministic.
Definition 2: If the result of C is always the same, C is deterministic (or pre-determined); otherwise C is non-deterministic.

Example: ComputeSalesTax(price, taxrate). If the tax rate is 4%, sales tax on a $100 item will be $4.00. If the legislature changes the tax rate to 6%, we have interaction. If we understand all the points of interaction, the function is still deterministic.

"Concurrency inflicts non-determinism." -- Robin Milner
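The ComputeSalesTax example in code: the function is deterministic in its explicit inputs; the interaction enters only when the rate changes outside it.

```python
def compute_sales_tax(price, tax_rate):
    # Deterministic: same (price, tax_rate) always gives the same result.
    return price * tax_rate

# At the 4% rate, a $100 item carries $4.00 of tax.
print(round(compute_sales_tax(100, 0.04), 2))  # 4.0
# After the legislature's change to 6%, the *point of interaction*
# is the tax_rate argument, not the function itself.
print(round(compute_sales_tax(100, 0.06), 2))  # 6.0
```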
Interaction Testing
Interaction Testing
2. Actions

Actions have inputs and outputs, and these can be either data or port events. Some methodology-specific synonyms for actions: transform, data transform, control transform, process, activity, task, method, and service. Actions can be decomposed into lower-level actions.
3. Devices

Every system has devices; these are the sources and destinations of system-level inputs and outputs (events that occur at the port boundary). Physical actions (e.g., keystrokes, light emissions from a screen) occur on port devices, and these are translated from physical to logical (or logical to physical) appearances by actions that execute on other devices (e.g., a CPU executing software).
4. Events

A system-level input (or output) that occurs on a port device. Like data, events can be inputs to or outputs of actions. Events can be discrete (such as keystrokes) or continuous (such as temperature, altitude, or pressure). There are situations where the context of present data values changes the logical meaning of physical events. We refer to such situations as "context-sensitive port events".
5. Threads

A thread is an instance of execution-time behavior of a system. Two synonyms: a scenario, a use case. A thread is a sequence of actions, and these in turn have data and events as their inputs and outputs.
Interaction Testing
[Table: pairwise interactions among the basis elements (action, data, device, event, thread) -- e.g., I/O and usage, timing, race conditions, execution, resource contention, incidence, context sensitivity, concurrency, points of n-connection]
Interaction Testing
Interaction Testing
5. Two (or more) input events can occur simultaneously, but an event cannot occur simultaneously in two (or more) processors. 6. In a single processor, two output events cannot begin simultaneously.
Interaction Testing
Taxonomy of Interactions
Static interaction: independent of time. Dynamic interaction: time dependent. Each of the five "basis elements" can interact in each quadrant, static or dynamic.

We will consider static interactions of data with data, and dynamic interactions of threads with threads.
Software Testing: A Craftsman's Approach, 3rd Edition -- Interaction Testing
Interactions in the Square of Opposition:
1. Contradictories: exactly one is true
2. Contraries: cannot both be true
3. Sub-contraries: cannot both be false
4. Subalternation: truth of the superaltern guarantees truth of its subaltern

Examples:
1. When the pre-condition for a thread is a conjunction of data propositions, contrary or contradictory data values will prevent thread execution.
2. Context-sensitive port input events usually involve contradictory data.
3. Case statement clauses are contradictories.
4. Rules in a decision table are contradictories.
Interaction Testing
n-Connectedness
Linear graph theory sheds some light on interaction: two elements i and j may be 0-connected, 1-connected, 2-connected, or 3-connected.

Missing n-connectedness occurs when a pair lacks an essential connection; inappropriate n-connectedness, when a pair has an undesired connection.
Interaction Testing
Interpretations of n-Connectedness
0-connected: true independence
1-connected (by an ancestor): resource conflict, context sensitivity; (by a descendant): ambiguous cause
2-connected: define-reference, enable, disable, precedence, prohibit, prerequisite
3-connected: mutual influence, repetition, deadlock

Examples: failures under the "single failure" assumption of reliability theory, context-sensitive port events, the Falkland Islands submarine incident, an email loop
Interaction Testing
Threads with Threads: threads can be n-connected with each other in two ways, via events and/or via data. To explore this, we need a sufficiently expressive notation.
Interaction Testing
Placement of Zave's examples:
1. Calling Party Identification and Call Rejection: intended thread-thread 2-connectivity
2. Call Forwarding and Call Rejection: unintended data-data contraries
3. Call Forward Loop: unintended thread-thread 3-connectivity
4. Voice Mail and Credit Card Calling: unintended data-event context sensitivity
Interaction Testing
[Figure: EDPN composition of two threads -- data places d1..d4, port events p1, p3, transitions s1..s3]
Interaction Testing
Producer/Consumers Composition

[Figure: EDPN composition of a Producer with Consumer 1 and Consumer 2 -- places p1..p7, transitions t1..t6]
Interaction Testing
Saturn windshield wiper: the lever has positions OFF, INT, LOW, and HIGH; the dial (relevant only at INT) has positions 1, 2, and 3.

Lever:  OFF   INT            LOW   HIGH
Dial:   n/a   1    2    3    n/a   n/a
Wiper:  0     4    6    12   30    60    (w.p.m.)

Input Event   Description
ie1           lever from OFF to INT
ie2           lever from INT to LOW
ie3           lever from LOW to HIGH
ie4           lever from HIGH to LOW
ie5           lever from LOW to INT
ie6           lever from INT to OFF
ie7           dial from 1 to 2
ie8           dial from 2 to 3
ie9           dial from 3 to 2
ie10          dial from 2 to 1
Interaction Testing
Output Event   Description
oe1            0 w.p.m.
oe2            4 w.p.m.
oe3            6 w.p.m.
oe4            12 w.p.m.
oe5            30 w.p.m.
oe6            60 w.p.m.
Interaction Testing
[Figure: EDPN of the lever -- transitions s1..s6, port events p2..p16, data places d2..d4]
Interaction Testing
Interaction Testing
[Figure: composed EDPN for the wiper system -- transitions s1..s13, port events p1..p16, data places d1..d7]
Interaction Testing
Composition in a Database (all relations are 0..n at each end)

[Figure: E/R model -- entities Data, Action, Thread, Event, Device; relations DataInput, DataOutput, EventInput, EventOutput, SequenceOf (a Thread is a sequence of Actions), OccursOn (an Event occurs on a Device)]
Interaction Testing
[Table: sample rows of the DataInput relation -- e.g., d3 (Low) is input to s3 (LowToHigh) and to s5 (LowToInt)]
Interaction Testing
Interaction Testing
What Connections Can you Identify Between an EDPN Data Model and Graph Theory?
Indegree of a place Outdegree of a place Indegree of a transition Outdegree of a transition Indegree of an event Outdegree of an event
Interaction Testing
Chapters 16 - 20
Testing Object-Oriented Software
Object-Oriented Testing
1. Traditional vs. Object-Oriented Testing 2. Saturn Windshield Wiper Example 3. Testing with O-O Notations
Object-Oriented Testing
[Figure: the five basis elements (data, action, device, event, thread); in object-oriented software, objects encapsulate data and actions]
Object-Oriented Testing
2. Decomposition to composition.

The functional decomposition tree as a basis of integration testing is lost. Composition implies unknowable contexts: we can never know all the possible objects with which a given object may be composed.
Object-Oriented Testing
Notice the cascading levels of interaction: unit testing covers statement interaction, MM-Path testing covers method interaction, ASF testing covers MM-Path interaction, thread testing covers object interaction, and all of this culminates in thread interaction.
Object-Oriented Testing
Level         Technique
Unit          traditional functional and/or structural; StateChart-based
Integration   new definition? (StateCharts)
System        new definition? (as before)
Object-Oriented Testing
Lever:  OFF   INT            LOW   HIGH
Dial:   n/a   1    2    3    n/a   n/a
Wiper:  0     4    6    12   30    60    (w.p.m.)

Input Event   Description
ie1           lever from OFF to INT
ie2           lever from INT to LOW
ie3           lever from LOW to HIGH
ie4           lever from HIGH to LOW
ie5           lever from LOW to INT
ie6           lever from INT to OFF
ie7           dial from 1 to 2
ie8           dial from 2 to 3
ie9           dial from 3 to 2
ie10          dial from 2 to 1
Object-Oriented Testing
Object-Oriented Testing
Input Event   Description
ie1           lever from OFF to INT
ie2           lever from INT to LOW
ie3           lever from LOW to HIGH
ie4           lever from HIGH to LOW
ie5           lever from LOW to INT
ie6           lever from INT to OFF
ie7           dial from 1 to 2
ie8           dial from 2 to 3
ie9           dial from 3 to 2
ie10          dial from 2 to 1

Output Event   Description
oe1            0 w.p.m.
oe2            4 w.p.m.
oe3            6 w.p.m.
oe4            12 w.p.m.
oe5            30 w.p.m.
oe6            60 w.p.m.

Note that several transition actions are indeterminate because of composition.
Object-Oriented Testing
[Figure: Wiper object state machine -- speeds 6, 12, 30, 60 w.p.m., with messages m2, m3, m5, m6, m10 on the transitions]

Messages m1 to m6 inform the Wiper object of the state of the Lever object. Messages m7 to m10 inform the Wiper object of the state of the Dial object.
Object-Oriented Testing
Object-Oriented Testing
Testing Object-Oriented Software

Definition in UML. Two kinds of o-o software: data-driven and event-driven. How are these described (for a tester)? What is an o-o unit: a class? a method? What is the basis for integration testing?
Object-Oriented Testing
First Example (data-driven)

The o-oCalendar program is an object-oriented implementation of the NextDate function: NextDate(Mar. 5, 2002) = Mar. 6, 2002. When implemented in procedural code, it is approximately 50 lines long, with a cyclomatic complexity less than 15 (it can be 11). A "pure" (i.e., good practice) object-oriented implementation contains one abstract class and five classes. The next few slides show the UML description of o-oCalendar.
Object-Oriented Testing
Classes

testIt

CalendarUnit 'abstract class
  currentPos As Integer
  CalendarUnit(pCurrentPos)
  setCurrentPos(pCurrentPos)
  increment() 'boolean

Date
  Day d; Month m; Year y
  Date(pDay, pMonth, pYear)
  increment()
  printDate()

Day
  Month m
  Day(pDay, Month pMonth)
  setCurrentPos(pCurrentPos)
  setDay(pDay, Month pMonth)
  getDay()
  increment()

Month
  private Year y
  private sizeIndex = <31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31>
  Month(pcur, Year pYear)
  setCurrentPos(pCurrentPos)
  setMonth(pcur, Year pYear)
  getMonth()
  getMonthSize()
  increment()
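A hedged Python sketch of the cascading-increment idea behind these classes (simplified from the UML: the class structure is flattened into functions, but the rollover logic mirrors the boolean increment() cascade from Day to Month to Year):

```python
# Month sizes from the sizeIndex in the Month class above.
SIZES = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def is_leap(y):
    # The standard Gregorian leap-year rule, as in Year.isLeap.
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

def month_size(m, y):
    return 29 if (m == 2 and is_leap(y)) else SIZES[m - 1]

def next_date(d, m, y):
    if d < month_size(m, y):
        return d + 1, m, y   # Day.increment succeeds
    if m < 12:
        return 1, m + 1, y   # Day rolled over; Month.increment
    return 1, 1, y + 1       # Month rolled over too; Year.increment

print(next_date(5, 3, 2002))    # (6, 3, 2002), the slide's example
print(next_date(31, 12, 2001))  # (1, 1, 2002)
```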
Object-Oriented Testing
[Figure: class inclusion -- Date contains Day, Month, and Year]
Object-Oriented Testing
[Figure: program graphs of Date.constructor and Date.increment (nodes 4-18) and Date.printDate (nodes 19-20)]
Object-Oriented Testing
[Figure: program graphs of Day.setCurrentPos, Day.setDay, Day.getDay, and Day.increment (nodes 22-33)]
Object-Oriented Testing
[Figure: program graphs of Month.setMonth, Month.getMonth, Month.getMonthSize, and Month.increment (nodes 36-52)]
Object-Oriented Testing
[Figure: program graph of Year.isLeap (nodes 54-64)]
Object-Oriented Testing
Collaboration Diagram

[Figure: testIt creates a Date; Date creates and increments its Day, Month, and Year (and reads them back via getDay, getMonth, getYear); Day asks Month getMonthSize; Month asks Year isLeap]
Object-Oriented Testing
Object-Oriented Testing
[Figure: sequence diagram of printDate() -- messages getMonth(), getDay(), getYear() ordered over time]
Object-Oriented Testing
Object-Oriented Testing
Object-Oriented Testing
object optionBrazil()
  Constant USdollarToBrazilReal = 2.067
  private procedure senseClick
    commandCompute(USdollarToBrazilReal)
  End senseClick

object optionCanada()
  Constant USdollarToCanadianDollar = 1.16
  private procedure senseClick
    commandCompute(USdollarToCanadianDollar)
  End senseClick

object optionEuropeanUnion()
  Constant USdollarToEuro = 0.752
  private procedure senseClick
    commandCompute(USdollarToEuro)
  End senseClick

object optionJapan()
  Constant USdollarToJapanYen = 117.82
  private procedure senseClick
    commandCompute(USdollarToJapanYen)
  End senseClick

procedure commandCompute(exchangeRate)
  dim exchangeRate, USDollarAmount As Single
  USDollarAmount = Val(txtUSDollarAmount.text)
  EquivCurrencyAmount = exchangeRate * USDollarAmount
End commandCompute
Object-Oriented Testing
Object-Oriented Testing
Input Event   Description
ip1           Enter US Dollar amount
ip2           Click on a country button
ip2.1         Click on Brazil
ip2.2         Click on Canada
ip2.3         Click on European Community
ip2.4         Click on Japan
ip3           Click on Compute button
ip4           Click on Clear button
ip5           Click on Quit button
ip6           Click on OK in error message
Object-Oriented Testing
Object-Oriented Testing
Object-Oriented Testing
Object-Oriented Testing
Object-Oriented Testing
[Figure: FSM of the currency converter -- states Idle, U.S. Dollar Amount Selected, Equiv. Amount Displayed, Missing U.S. Dollar Message, Missing Country and Dollar Message, End Application; transitions such as ip1/op1, ip2/op2,op3, ip3/op5, ip3/op7, ip3/op8, ip4 (op2, op4, op9, op10), ip6]
Object-Oriented Testing
Extended Definitions
An MM-Path in object-oriented software is a sequence of method executions linked by messages.
An MM-Path starts with a method and ends when it reaches a method that does not issue any messages of its own (message quiescence). Since MM-Paths are composed of linked method-message pairs in an object network, they interleave and branch off from other MM-Paths.
An atomic system function (ASF ) is a sequence of statements that begins with an input port event and ends with an output port event.
An ASF begins with a port input event This system level input triggers the method-message sequence of an MM-Path which may trigger other MM-Paths The sequence of MM-Paths ends with a port output event (event quiescence)
Object-Oriented Testing
Message Send/Return
Object-Oriented Testing
Object-Oriented Testing
[Figure: MM-Path graph -- module execution paths mep1..mep6 linked by messages msg1 and msg2]
Object-Oriented Testing
Object-Oriented Testing
Object-Oriented Testing
Object-Oriented Testing