Testing

Defect testing
Testing programs to establish the presence of system defects.

Objectives
• To understand testing techniques that are geared to discover program faults
• To introduce guidelines for interface testing
• To understand specific approaches to object-oriented testing
• To understand the principles of CASE tool support for testing

Topics covered
• Defect testing
• Integration testing
• Object-oriented testing
• Testing workbenches

The testing process
• Component testing
  – Testing of individual program components
  – Usually the responsibility of the component developer (except sometimes for critical systems)
  – Tests are derived from the developer's experience
• Integration testing
  – Testing of groups of components integrated to create a system or sub-system
  – The responsibility of an independent testing team
  – Tests are based on a system specification

Testing phases
(Diagram: component testing, carried out by the software developer, is followed by integration testing, carried out by an independent testing team)

Defect testing
• The goal of defect testing is to discover defects in programs
• A successful defect test is a test which causes a program to behave in an anomalous way
• Tests show the presence, not the absence, of defects

Testing priorities
• Only exhaustive testing can show a program is free from defects; however, exhaustive testing is impossible
• Tests should exercise a system's capabilities rather than its components
• Testing old capabilities is more important than testing new capabilities
• Testing typical situations is more important than boundary value cases

Test data and test cases
• Test data: inputs which have been devised to test the system
• Test cases: inputs to test the system and the predicted outputs from these inputs if the system operates according to its specification
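To make the pairing of inputs and predicted outputs concrete, here is a minimal JUnit-style sketch; the Calculator class and its add method are hypothetical, used only to illustrate the idea:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {
    // A test case pairs test data (the inputs 2 and 3) with the predicted
    // output (5) taken from the specification.
    @Test
    public void addReturnsSumOfOperands() {
        Calculator calc = new Calculator();  // hypothetical component under test
        assertEquals(5, calc.add(2, 3));     // fails if behaviour deviates from the specification
    }
}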

The defect testing process
Design test cases → prepare test data → run the program with the test data → compare the results to the test cases → produce test reports.

Black-box testing
• An approach to testing where the program is considered as a 'black box'
• The program test cases are based on the system specification
• Test planning can begin early in the software process

(Diagram: input test data Ie, including the inputs causing anomalous behaviour, is fed to the system; the output test results Oe include the outputs which reveal the presence of defects)

Equivalence partitioning
• Input data and output results often fall into different classes where all members of a class are related
• Each of these classes is an equivalence partition where the program behaves in an equivalent way for each class member
• Test cases should be chosen from each partition

(Diagram: invalid and valid input partitions feed the system, which produces output partitions)

• Partition system inputs and outputs into 'equivalence sets'
  – If the input is a 5-digit integer between 10,000 and 99,999, the equivalence partitions are <10,000, 10,000-99,999 and >99,999
• Choose test cases at the boundaries of these sets
  – 00000, 09999, 10000, 99999, 100000

(Diagram: equivalence partitions for the number of input values - fewer than 4, between 4 and 10, more than 10 - with test values 3, 4, 7, 10 and 11; and for the input values themselves - less than 10,000, between 10,000 and 99,999, more than 99,999 - with test values 9999, 10000, 50000, 99999 and 100000)
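A minimal sketch of how these partitions and boundary values might drive test cases, assuming a hypothetical isValid check for 5-digit integers in the range 10,000-99,999:

import org.junit.Test;
import static org.junit.Assert.*;

public class FiveDigitInputTest {
    // Hypothetical predicate under test: true iff 10000 <= n <= 99999.
    private boolean isValid(int n) {
        return n >= 10000 && n <= 99999;
    }

    @Test
    public void boundaryValuesOfEachPartition() {
        assertFalse(isValid(0));       // 00000: well inside the <10,000 partition
        assertFalse(isValid(9999));    // 09999: just below the lower boundary
        assertTrue(isValid(10000));    // lower boundary of the valid partition
        assertTrue(isValid(99999));    // upper boundary of the valid partition
        assertFalse(isValid(100000));  // just above the upper boundary
    }
}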

Search routine specification

procedure Search (Key : ELEM; T : ELEM_ARRAY;
                  Found : in out BOOLEAN; L : in out ELEM_INDEX);

Pre-condition
-- the array has at least one element
T'FIRST <= T'LAST

Post-condition
-- the element is found and is referenced by L
(Found and T(L) = Key)
or
-- the element is not in the array
(not Found and not (exists i, T'FIRST <= i <= T'LAST, T(i) = Key))


Search routine - input partitions
• Inputs which conform to the pre-conditions
• Inputs where a pre-condition does not hold
• Inputs where the key element is a member of the array
• Inputs where the key element is not a member of the array

Testing guidelines (sequences)
• Test software with sequences which have only a single value
• Use sequences of different sizes in different tests
• Derive tests so that the first, middle and last elements of the sequence are accessed
• Test with sequences of zero length

Search routine - input partitions and test cases

Array               Element                      Input sequence (T)           Key (Key)   Output (Found, L)
Single value        In sequence                  17                           17          true, 1
Single value        Not in sequence              17                           0           false, ??
More than 1 value   First element in sequence    17, 29, 21, 23               17          true, 1
More than 1 value   Last element in sequence     41, 18, 9, 31, 30, 16, 45    45          true, 7
More than 1 value   Middle element in sequence   17, 18, 21, 23, 29, 41, 38   23          true, 4
More than 1 value   Not in sequence              21, 23, 29, 33, 38           25          false, ??

Structural testing
• Sometimes called white-box testing
• Derivation of test cases according to program structure; knowledge of the program is used to identify additional test cases
• Objective is to exercise all program statements (not all path combinations)

(Diagram: the design derives the tests; the tests run the component code on the test data to produce test outputs)

Binary search (Java)

class BinSearch {
    // This is an encapsulation of a binary search function that takes an array of
    // ordered objects and a key and returns an object with 2 attributes, namely
    //   index - the value of the array index
    //   found - a boolean indicating whether or not the key is in the array
    // An object is returned because it is not possible in Java to pass basic types
    // by reference to a function and so return two values.
    // The index is -1 if the element is not found.
    public static void search (int key, int [] elemArray, Result r) {
        int bottom = 0;
        int top = elemArray.length - 1;
        int mid;
        r.found = false;
        r.index = -1;
        while (bottom <= top) {
            mid = (top + bottom) / 2;
            if (elemArray [mid] == key) {
                r.index = mid;
                r.found = true;
                return;
            } // if part
            else {
                if (elemArray [mid] < key)
                    bottom = mid + 1;
                else
                    top = mid - 1;
            }
        } // while loop
    } // search
} // BinSearch

Binary search - equivalence partitions
• Pre-conditions satisfied, key element in array
• Pre-conditions satisfied, key element not in array
• Pre-conditions unsatisfied, key element in array
• Pre-conditions unsatisfied, key element not in array
• Input array has a single value
• Input array has an even number of values
• Input array has an odd number of values
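A sketch of JUnit-style tests exercising three of these partitions against the BinSearch class above, assuming that Result exposes public found and index fields as the code implies; note that the Java code reports 0-based indexes, while the test case tables in this section count positions from 1:

import org.junit.Test;
import static org.junit.Assert.*;

public class BinSearchPartitionTest {
    @Test
    public void singleValueArrayKeyPresent() {    // pre-conditions satisfied, key in array
        Result r = new Result();
        BinSearch.search(17, new int[] {17}, r);
        assertTrue(r.found);
        assertEquals(0, r.index);
    }

    @Test
    public void keyAbsentFromOddLengthArray() {   // key not in array, odd number of values
        Result r = new Result();
        BinSearch.search(25, new int[] {21, 23, 29, 33, 38}, r);
        assertFalse(r.found);
        assertEquals(-1, r.index);
    }

    @Test
    public void keyInMiddleOfEvenLengthArray() {  // key in array, even number of values
        Result r = new Result();
        BinSearch.search(23, new int[] {17, 21, 23, 29}, r);
        assertTrue(r.found);
        assertEquals(2, r.index);
    }
}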
Equivalence class boundaries
(Diagram: for a sorted array, the equivalence classes are the elements below the mid-point, the mid-point itself, and the elements above the mid-point)

Binary search - test cases

Input array (T)              Key (Key)   Output (Found, L)
17                           17          true, 1
17                           0           false, ??
17, 21, 23, 29               17          true, 1
9, 16, 18, 30, 31, 41, 45    45          true, 7
17, 18, 21, 23, 29, 38, 41   23          true, 4
17, 18, 21, 23, 29, 33, 38   21          true, 3
12, 18, 21, 23, 32           23          true, 4
21, 23, 29, 33, 38           25          false, ??

Path testing
• The objective of path testing is to ensure that the set of test cases is such that each path through the program is executed at least once
• The starting point for path testing is a program flow graph that shows nodes representing program decisions and arcs representing the flow of control
• Statements with conditions are therefore nodes in the flow graph

Program flow graphs
• Describe the program control flow; each branch is shown as a separate path and loops are shown by arrows looping back to the loop condition node
• Used as a basis for computing the cyclomatic complexity
• Cyclomatic complexity = Number of edges - Number of nodes + 2

Cyclomatic complexity
• The number of tests needed to test all control statements equals the cyclomatic complexity
• Cyclomatic complexity equals the number of conditions in a program
• Useful if used with care; does not imply adequacy of testing
• Although all paths are executed, all combinations of paths are not executed

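As a worked example of the formula: if we take the binary search flow graph below to have 9 nodes and 11 edges, its cyclomatic complexity is V(G) = 11 - 9 + 2 = 4, which agrees with the four independent paths listed with the graph.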
Binary search flow graph
(Diagram: node 1 enters the loop test at node 2, "while bottom <= top"; when bottom > top, control passes to node 8 and exits at node 9; node 3 tests "if (elemArray[mid] == key)"; node 4 tests "if (elemArray[mid] < key)", leading to nodes 5 and 6, which rejoin the loop condition at node 7)

Independent paths
• 1, 2, 3, 8, 9
• 1, 2, 3, 4, 5, 7, 2
• 1, 2, 3, 4, 6, 7, 2
• 1, 2, 3, 4, 6, 7, 2, 8, 9
• Test cases should be derived so that all of these paths are executed
• A dynamic program analyser may be used to check that paths have been executed

Integration testing
• Tests complete systems or subsystems composed of integrated components
• Integration testing should be black-box testing with tests derived from the specification
• The main difficulty is localising errors
• Incremental integration testing reduces this problem

Incremental integration testing
(Diagram: modules A and B are tested with tests T1-T3 as test sequence 1; module C is added and T1-T4 are run as test sequence 2; module D is added and T1-T5 are run as test sequence 3)

Approaches to integration testing
• Top-down testing
  – Start with the high-level system and integrate from the top down, replacing individual components by stubs where appropriate
• Bottom-up testing
  – Integrate individual components in levels until the complete system is created
• In practice, most integration involves a combination of these strategies

Top-down testing
(Diagram: the level-1 modules are tested first with level-2 stubs, then the level-2 modules with level-3 stubs, and so on down the testing sequence)

Bottom-up testing
(Diagram: the level-N modules are tested first using test drivers, then the drivers are replaced by the level N-1 modules, and so on up the testing sequence)

Testing approaches
• Architectural validation
  – Top-down integration testing is better at discovering errors in the system architecture
• System demonstration
  – Top-down integration testing allows a limited demonstration at an early stage in the development
• Test implementation
  – Often easier with bottom-up integration testing
• Test observation
  – A problem with both approaches; extra code may be required to observe tests

Interface testing
• Takes place when modules or sub-systems are integrated to create larger systems
• Objectives are to detect faults due to interface errors or invalid assumptions about interfaces
• Particularly important for object-oriented development, as objects are defined by their interfaces

(Diagram: test cases are applied to sub-system A, which calls sub-systems B and C through their interfaces)

Interface types
• Parameter interfaces
  – Data passed from one procedure to another
• Shared memory interfaces
  – A block of memory is shared between procedures
• Procedural interfaces
  – A sub-system encapsulates a set of procedures to be called by other sub-systems
• Message passing interfaces
  – Sub-systems request services from other sub-systems

Interface errors
• Interface misuse
  – A calling component calls another component and makes an error in its use of its interface, e.g. parameters in the wrong order
• Interface misunderstanding
  – A calling component embeds assumptions about the behaviour of the called component which are incorrect
• Timing errors
  – The called and the calling component operate at different speeds and out-of-date information is accessed

Interface testing guidelines
• Design tests so that parameters to a called procedure are at the extreme ends of their ranges

• Always test pointer parameters with null pointers
• Design tests which cause the component to fail
• Use stress testing in message passing systems
• In shared memory systems, vary the order in which components are activated

Stress testing
• Exercises the system beyond its maximum design load; stressing the system often causes defects to come to light
• Stressing the system tests failure behaviour: systems should not fail catastrophically, and stress testing checks for unacceptable loss of service or data
• Particularly relevant to distributed systems, which can exhibit severe degradation as a network becomes overloaded

Object-oriented testing
• The components to be tested are object classes that are instantiated as objects
• Larger grain than individual functions, so approaches to white-box testing have to be extended
• No obvious 'top' to the system for top-down integration and testing

Testing levels
• Testing operations associated with objects
• Testing object classes
• Testing clusters of cooperating objects
• Testing the complete OO system

Object class testing
• Complete test coverage of a class involves
  – Testing all operations associated with an object
  – Setting and interrogating all object attributes
  – Exercising the object in all possible states
• Inheritance makes it more difficult to design object class tests, as the information to be tested is not localised

Weather station object interface
• Test cases are needed for all operations
• Use a state model to identify state transitions for testing
• Examples of testing sequences
  – Shutdown → Waiting → Shutdown
  – Waiting → Calibrating → Testing → Transmitting → Waiting
  – Waiting → Collecting → Waiting → Summarising → Transmitting → Waiting

Object integration
• Levels of integration are less distinct in object-oriented systems
• Cluster testing is concerned with integrating and testing clusters of cooperating objects
• Identify clusters using knowledge of the operation of objects and the system features that are implemented by these clusters
Approaches to cluster testing
• Use-case or scenario testing
  – Testing is based on a user's interactions with the system
  – Has the advantage that it tests system features as experienced by users
• Thread testing
  – Tests the system's response to events as processing threads through the system
• Object interaction testing
  – Tests sequences of object interactions that stop when an object operation does not call on services from another object

Scenario-based testing
• Identify scenarios from use-cases and supplement these with interaction diagrams that show the objects involved in the scenario
• Consider the scenario in the weather station system where a report is generated

Collect weather data
(Sequence diagram: :CommsController receives request(report) and issues acknowledge(); it calls report() on :WeatherStation, which calls summarise() on :WeatherData; the reply is returned with send(report), reply(report) and a final acknowledge())

Weather station testing
• Thread of methods executed
  – CommsController:request → WeatherStation:report → WeatherData:summarise
• Inputs and outputs
  – Input of a report request with an associated acknowledgement, and a final output of a report
  – Can be tested by creating raw data and ensuring that it is summarised properly
  – Use the same raw data to test the WeatherData object

Testing workbenches
• Testing is an expensive process phase; testing workbenches provide a range of tools to reduce the time required and total testing costs
• Most testing workbenches are open systems because testing needs are organisation-specific

• Difficult to integrate with closed design and analysis workbenches

A testing workbench
(Diagram: a test manager coordinates a test data generator driven by the specification, an oracle producing test predictions, the program being tested with a dynamic analyser producing an execution report, a simulator, a file comparator checking test results against the predictions, and a report generator producing the test results report from the source code and test data)

Testing workbench adaptation
• Scripts may be developed for user interface simulators, and patterns for test data generators
• Test outputs may have to be prepared manually for comparison
• Special-purpose file comparators may be developed

Key points
• Test parts of a system which are commonly used rather than those which are rarely executed
• Equivalence partitions are sets of test cases where the program should behave in an equivalent way
• Black-box testing is based on the system specification
• Structural testing identifies test cases which cause all paths through the program to be executed
• Test coverage measures ensure that all statements have been executed at least once
• Interface defects arise because of specification misreading, misunderstanding, errors or invalid timing assumptions
• To test object classes, test all operations, attributes and states
• Integrate object-oriented systems around clusters of objects

1. What is testing?
The act of checking if a part or a product performs as expected.

2. Why test?
• Gain confidence in the correctness of a part or a product.

• Check if there are any errors in a part or a product.

3. What to test?
During the software lifecycle several products are generated, for example:
• Requirements document
• Design document
• Software subsystems
• Software system

Test all!
• Each of these products needs testing.
• Methods for testing various products are different. Examples:
  – Test a requirements document using scenario construction and simulation.
  – Test a design document using simulation.
  – Test a subsystem using functional testing.

4. What is our focus?
• We focus on testing programs. Programs may be subsystems or complete systems.
• There is a large collection of techniques and tools to test programs.

5. Few basic terms
• Program: A collection of functions (as in C) or a collection of classes (as in Java). These are written in a formal programming language.
• Specification: Description of requirements for a program. This might be formal or informal.
• Test case or test input: A set of values of input variables of a program. Values of environment variables are also included.
• Test set: Set of test inputs.
• Program execution: Execution of a program on a test input.
• Oracle: A function that determines whether or not the results of executing a program under test are as per the program's specifications.

6. Example: Correctness
• Let P be a program (say, an integer sort program).
• Let S denote the specification for P. For sort, let S be:

Sample Specification
• P takes as input an integer N > 0 and a sequence of N integers called elements of the sequence. Let K denote any element of this sequence; 0 < K < (e-1) for some e.
• P sorts the input sequence in descending order and prints the sorted sequence.
7. Correctness again
P is considered correct with respect to a specification S if and only if: for each valid input, the output of P is in accordance with the specification S.

Errors, defects, faults
• Error: A mistake made by a programmer. Example: misunderstood the requirements.
• Defect/fault: Manifestation of an error in a program.
  Example of incorrect code: if (a<b) {foo(a,b);}
  Correct code: if (a>b) {foo(a,b);}
• The word bug is slang for fault.

Failure
• Incorrect program behavior due to a fault in the program.
• Failure can be determined only with respect to a set of requirement specifications.
• A necessary condition for a failure to occur is that execution of the program force the erroneous portion of the program to be executed. What is the sufficiency condition?

Errors and failure
(Diagram: inputs flow into the program and outputs flow out; error-revealing inputs cause failure, and erroneous outputs indicate failure)

8. Debugging
• Suppose that a failure is detected during the testing of P.
• The process of finding and removing the cause of this failure is known as debugging.
• Testing usually leads to debugging; testing and debugging usually happen in a cycle.

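A small sketch of the necessary-versus-sufficient point, reusing the faulty guard above; whether an input is error-revealing depends on whether it distinguishes the faulty and correct versions:

public class FaultDemo {
    // Faulty version: the programmer's error (wrong comparison) became a fault.
    static boolean faultyGreater(int a, int b) { return a < b; }

    // Correct version per the (hypothetical) specification: true iff a > b.
    static boolean correctGreater(int a, int b) { return a > b; }

    public static void main(String[] args) {
        // Input (3, 3): both versions return false, so executing the faulty
        // line is necessary but not sufficient for a failure to appear.
        System.out.println(faultyGreater(3, 3) == correctGreater(3, 3));  // true

        // Input (1, 2) is error-revealing: the outputs differ, so the fault
        // manifests as a failure.
        System.out.println(faultyGreater(1, 2) == correctGreater(1, 2));  // false
    }
}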
For example. a telephone system must be able to handle 1000 calls over any 1-minute interval.Testing Test Yes Failure? Yes No Debug Done! 9. What happens when the system is loaded or overloaded? Performance testing Clues come from performance requirements. For example. Does the system process each call in less than 5 seconds? 15 . Stress testing Clues come from “load” requirements. Types of testing Testing complete? No Source of clues for test input construction Object under test Testing: based on source of test inputs • Functional testing/specification testing/black-box testing/conformance testing: Clues for test input generation come from requirements. • White-box testing/coverage testing/code-based testing Clues come from program text. each call must be processed in less than 5 seconds.
• Fault- or error-based testing: Clues come from the faults that are injected into the program text or are hypothesized to be in the program.
• Random testing: Clues come from the requirements; tests are generated randomly using these clues.
• Robustness testing: Clues come from the requirements. The goal is to test a program under scenarios not stipulated in the requirements.
• OO testing: Clues come from the requirements and the design of an OO program.
• Protocol testing: Clues come from the specification of a protocol, as, for example, when testing a communication protocol.

Testing based on the object under test
• Unit testing: Testing of a program unit. A unit is the smallest testable piece of a program.
• Subsystem testing: Testing of a subsystem. One or more units form a subsystem. A subsystem is a collection of units that cooperate to provide a part of system functionality.
• Integration testing: Testing of subsystems that are being integrated to form a larger subsystem or a complete system.
• System testing: Testing of a complete system.
• Regression testing: Test a subsystem or a system on a subset of the set of existing test inputs to check if it continues to function correctly after changes have been made to an older version.

10. Software Testing Fundamentals
Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and coding. Software testing demonstrates that software functions appear to be working according to specifications and performance requirements.

Testing Objectives
Myers [MYE79] states a number of rules that can serve well as testing objectives:
• Testing is a process of executing a program with the intent of finding an error.
• A good test case is one that has a high probability of finding an undiscovered error.
• A successful test is one that uncovers an as-yet undiscovered error.
The major testing objective is to design tests that systematically uncover types of errors with minimum time and effort.

Software Testing Principles
Davis [DAV95] suggests a set of testing principles:
• All tests should be traceable to customer requirements.
• Tests should be planned long before testing begins.
• The Pareto principle applies to software testing: 80% of all errors uncovered during testing will likely be traceable to 20% of all program modules.

• Testing should begin "in the small" and progress toward testing "in the large".
• Exhaustive testing is not possible.
• To be most effective, testing should be conducted by an independent third party.

Software Testability
According to James Bach, software testability is simply how easily a computer program can be tested. A set of program characteristics that lead to testable software:
• Operability: "The better it works, the more efficiently it can be tested."
• Observability: "What you see is what you test."
• Controllability: "The better we can control the software, the more the testing can be automated and optimized."
• Decomposability: "By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting."
• Simplicity: "The less there is to test, the more quickly we can test it."
• Stability: "The fewer the changes, the fewer the disruptions to testing."
• Understandability: "The more information we have, the smarter we will test."

Test Case Design
Two general software testing approaches: black-box testing and white-box testing.
• Black-box testing: knowing the specific functions of a software, design tests to demonstrate each function and check its errors. Major focus: functions, operations, external interfaces, external data and information.
• White-box testing: knowing the internals of a software, design tests to exercise all internals of a software to make sure they operate according to specifications and designs. Major focus: internal structures, logic paths, control flows, data flows, internal data structures, conditions, loops, etc.

White-Box Testing and Basis Path Testing
White-box testing, also known as glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using white-box testing methods, we derive test cases that:
• Guarantee that all independent paths within a module have been exercised at least once
• Exercise all logical decisions on their true and false sides
• Execute all loops at their boundaries and within their operational bounds
• Exercise internal data structures to assure their validity

Basis path testing (a white-box testing technique):
• First proposed by Tom McCabe [MCC76]
• Can be used to derive a logical complexity measure for a procedural design
• Used as a guide for defining a basis set of execution paths

• Guarantees that every statement in the program executes at least one time

Cyclomatic Complexity
Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When this metric is used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program.
Three ways to compute cyclomatic complexity:
• The number of regions of the flow graph corresponds to the cyclomatic complexity.
• V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
• V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.

Deriving Test Cases
Step 1: Using the design or code as a foundation, draw a corresponding flow graph.
Step 2: Determine the cyclomatic complexity of the resultant flow graph.
Step 3: Determine a basis set of linearly independent paths, e.g.:
  path 1: 1-2-10-11-13
  path 2: 1-2-10-12-13
  path 3: 1-2-3-10-11-13
  path 4: 1-2-3-4-5-8-9-2-...
  path 5: 1-2-3-4-5-6-8-9-2-...
  path 6: 1-2-3-4-5-6-7-8-9-2-...
Step 4: Prepare test cases that will force execution of each path in the basis set.
  Path 1 test case: value(k) = valid input, where k < i for the i defined below; value(i) = -999, where 2 <= i <= 100; expected results: correct average based on k values and proper totals.

Equivalence Partitioning
Equivalence partitioning is a black-box testing method:
• divide the input domain of a program into classes of data
• derive test cases based on these partitions
Test case design for equivalence partitioning is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for an input condition. An input condition is:
• a specific numeric value
• a range of values
• a set of related values
• a Boolean condition
Equivalence classes can be defined using the following guidelines:
• If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
• If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
• If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
• If an input condition is Boolean, one valid and one invalid class are defined.
Examples:
• area code: input condition, Boolean: the area code may or may not be present; input condition, range: value defined between 200 and 900
• password: input condition, Boolean: a password may or may not be present; input condition, value: six-character string
• command: input condition, set: containing the commands noted before

Boundary Value Analysis
• a test case design technique
• complements equivalence partitioning
Objective: boundary value analysis leads to a selection of test cases that exercise bounding values.
Guidelines:
• If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b, and just above and below a and b.
  Example: Integer D with input condition [-3, 10]; test values: -3, 10, 11, -2, 0, 5
• If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers. Values just above and below the minimum and maximum are also tested.
  Example: Enumerated data E with input condition {3, 5, 100, 102}; test values: 3, 102, -1, 5
• Guidelines 1 and 2 are applied to output conditions.
• If internal program data structures have prescribed boundaries, be certain to design a test case to exercise the data structure at its boundary. For example, for an array, use the input conditions empty, single element, full, and out-of-boundary; when searching for an element, cover both the case where the element is inside the array and the case where it is not. You can think about other data structures: list, queue, stack, set, and tree.

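A minimal JUnit-style sketch applying the boundary-value guidelines above to the Integer D example; the inRange predicate is hypothetical and stands in for whatever condition the program actually checks:

import org.junit.Test;
import static org.junit.Assert.*;

public class BoundaryValueTest {
    // Hypothetical input condition under test: true iff -3 <= d <= 10.
    private boolean inRange(int d) {
        return d >= -3 && d <= 10;
    }

    @Test
    public void exerciseBoundingValues() {
        assertTrue(inRange(-3));   // lower bound a
        assertTrue(inRange(10));   // upper bound b
        assertFalse(inRange(11));  // just above b
        assertTrue(inRange(-2));   // just above a
        assertTrue(inRange(0));    // interior value
        assertTrue(inRange(5));    // interior value
    }
}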
White-box or structural selection 20 .) – hit and miss Systematic testing: – systematic: consists of a test plan and test strategy – planned: actually design for testability – documented: for repeatability and understandability – maintained: repeat tests after every change Test-case selection Black-box or functional selection – – – – Test cases are based on the requirements or specification. Note: may miss important aspects of the requirements (won’t test missing functionality).g.Testing Validation: – Does a s/w product satisfy its requirement? alternatively Are we building the right-product? Does a s/w product satisfy its specification? alternatively Are we building the product right? – Verification: – – Levels of testing Systematic testing Ad hoc approach to testing is: – ineffective: too late to influence design decisions – inefficient: testing cannot be reused during maintenance (e. Note: may miss important aspects of the implementation. Test cases are based on the implementation.

Functional testing (black-box)
• Choose test values for the state and parameters
  – correct input
  – input to induce exceptions
• Test all combinations (combinatorial explosion)

Example of functional testing
Function f(p, q); p: [0, 100] and q: {red, green, blue}
• Test values for p (interval rule)
  – correct values: {0, 50, 100}
  – exception values: {-100, -1, 101, 1000}
• Test values for q
  – correct values: {red, green, blue}
  – exception values: none
• The number of combinations is 7 * 3 = 21

Structural testing (white-box)
• Execute as many statements as possible
• Coverage can be:
  – statement: each statement executed at least once
  – branch: each decision at least once
  – path: each path in the flowchart at least once

Example of structural testing
Consider the code:
  { if (x > 0) pos = pos + 1;
    if (x % 2 == 0) even = even + 1; }
• statement coverage: {2}
• branch coverage: {-1, 2}
• path coverage: {-2, -1, 1, 2}

The function 'triangle' takes 3 integer parameters that are interpreted as the lengths of the sides of a triangle. It returns whether the triangle is equilateral, isosceles, scalene or invalid. Define a set of test cases for 'triangle'.

typedef enum {EQ, IS, SC, IN} type;

type triangle (a, b, c)
int a, b, c;
{
  if (a >= b+c || b >= a+c || c >= a+b) return(IN);
  if (a == b && b == c) return(EQ);
  if (a == b || b == c || a == c) return(IS);
  return(SC);
}

Test cases for 'triangle'
1) valid scalene
2) valid equilateral
3) valid isosceles
4) the 3 isosceles permutations: a==b, b==c, a==c
5) a=0
6) a=b=c=0
7) a=-1
8) the 3 invalid cases: a=b+c, b=a+c, c=a+b
9) the 3 invalid cases: a>b+c, b>a+c, c>a+b
Assume the compiler checks for correct types.

Myers scoring: 1 point for each test case, and 1 point if you have specified the outcome of each test case. The maximum score is 10. In 1978, when the book was published, experienced professional programmers scored on average about 6.
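A JUnit-style sketch of several of these test cases, assuming a hypothetical Java port of the C triangle function above:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class TriangleTest {
    enum Type { EQ, IS, SC, IN }

    // Hypothetical Java port of the C function above.
    static Type triangle(int a, int b, int c) {
        if (a >= b + c || b >= a + c || c >= a + b) return Type.IN;
        if (a == b && b == c) return Type.EQ;
        if (a == b || b == c || a == c) return Type.IS;
        return Type.SC;
    }

    @Test public void validScalene()     { assertEquals(Type.SC, triangle(3, 4, 5)); }   // case 1
    @Test public void validEquilateral() { assertEquals(Type.EQ, triangle(3, 3, 3)); }   // case 2
    @Test public void validIsosceles()   { assertEquals(Type.IS, triangle(3, 3, 2)); }   // case 3
    @Test public void allSidesZero()     { assertEquals(Type.IN, triangle(0, 0, 0)); }   // case 6
    @Test public void negativeSide()     { assertEquals(Type.IN, triangle(-1, 2, 2)); }  // case 7
    @Test public void degenerateSum()    { assertEquals(Type.IN, triangle(1, 2, 3)); }   // case 8: c = a + b
}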
Module, integration, system testing
• Module testing: test a module in isolation using stubs and drivers
• Integration testing: test combinations of modules
• System testing: test the entire system

Module testing
• Poor testability, like poor performance, is a design weakness.
• Good testability means:
  – controllability: the ease by which arbitrary input can be applied
    · able to initialize a module
    · able to generate all test cases
  – observability: the ease by which output can be observed
    · must have proper error handling (errors signalled to an error handler)

Integration testing
• Incremental composition of module testing
• Two purposes:
  – test subsystems as units
  – test interfaces between units
• Two basic strategies:
  – top-down
  – bottom-up

The system and test scaffolding
(Diagram: the modules of the system under test are surrounded by test scaffolding)
Top-down testing
• Use stubs to test top-level modules
• Stepwise replace stubs by lower-level modules
  – depth-first or breadth-first

Bottom-up testing
• Use drivers to test bottom-level modules
• Stepwise replace drivers by higher-level modules

Top-down vs bottom-up testing
• Top-down testing
  + early availability of an executable
  + early detection of flaws in interfaces
  - poor controllability and observability
  - cost of developing and maintaining stubs
• Bottom-up testing
  + better controllability and observability
  - executable available very late
  - cost of developing and maintaining drivers

System testing
• Test the entire system in a production environment
• Various forms:
  – volume/stress testing (load limits)
  – performance testing (efficiency)
  – reliability testing (e.g. mean time to failure)
  – recovery testing (e.g. h/w failure)
  – acceptance testing (by the client)
  – regression testing (maintenance)

Regression Testing
As a consequence of the introduction of new bugs, program maintenance requires far more system testing per statement than any other programming. Theoretically, after each fix one must run the entire bank of test cases previously run against the system, to ensure that it has not been damaged in an obscure way, and this is very costly.
Three dimensions of quality
• operational
  – correctness (functional testing)
  – reliability (reliability testing)
  – efficiency (performance testing)
  – integrity (recovery testing)
  – usability (acceptance testing)
• transitional
  – portability, reusability, interoperability
• revisional
  – maintainability, flexibility, testability

Test documents
• Test plan: the specification of the test system
  – test selection strategy
• Test report: report of the results of the test implementation

IEEE Standard 829 for S/W Test Documents
• Planning
  – build the test 'scaffolding', which consists of
    · drivers: modules that call the code under test
    · stubs: modules called by the code under test
  – generate the test cases
  – determine the expected output for each test case
  – write the test plan
• Execution
  – execute the test cases and generate a report that includes the actual outputs
• Evaluation
  – compare the actual and expected outputs

Test plan
• an informal specification of the tests
• consists of:
  – assumptions: the assumptions on which the testing depends
  – environment: the environment in which testing is performed
  – test-case selection strategy: the choice of test cases
  – implementation strategy: key aspects of the implementation

Test-case selection strategy
• Functional testing: tests are based on the specification
• Structural testing: tests are based on the implementation
• Both

Test implementation strategy
• Implement the test plan (test cases)
  – need drivers and stubs
  – need test-case data files
  – need a procedure for automating the tests

Test drivers
• Calls the code under test (CUT)
  – initialises the CUT
  – provides the test cases
  – captures the results (outputs and exceptions)
  – may compare actual and expected results
• Benefits
  – isolates the CUT
  – allows test cases to be re-run

Test stubs
• Called by the CUT
  – simulates and replaces modules
  – may accept or return data
• Benefits
  – isolates the CUT
  – allows test cases to be re-run

Kinds of test drivers
• Interactive drivers
  – prompt the user for the name of a routine and its parameters
  – print return values and exceptions
  – good for debugging
  – tedious when there are many cases
  – tedious when cases are repeated often
• Batch drivers
  – test script language
  – automate execution and checking of results
  – e.g. Roast (SVRC, Hoffman and Strooper)

Stack module
name      inputs    outputs   exceptions
s_init    -         -         -
s_push    integer   -         -
s_pop     -         -         empty
g_top     -         integer   empty
g_depth   -         integer   -

• set and get routines
• s_init and g_depth are for test purposes

Stack test plan for roast
assumptions
  MAXSIZ > 2
test environment
  roast driver, no stubs
test implementation strategy
  load(n): loads the stack with 10, 20, . . ., 10*n

test case selection strategy
special values
  module state: interval rule on stack size: [0, MAXSIZ]
  access routine parameters: none
test cases
  for each of the special module state values, call s_push, s_pop, g_top, g_depth
  check exception behavior

Roast test case
• <trace, expexc, actval, expval, type>
  – trace: a sequence of calls
  – expexc: the exception the trace is expected to generate
  – actval: an expression evaluated after the trace
  – expval: the value that actval is expected to have
  – type: the data type of actval and expval
• e.g. <s_init().s_push(10), noexc, g_top(), 10, int>
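For comparison, a hedged Java sketch of what a batch driver does with this tuple; the Stack class here is a hypothetical object wrapper around the module's access routines:

public class StackBatchDriver {
    public static void main(String[] args) {
        Stack s = new Stack();   // hypothetical module under test
        // trace: s_init().s_push(10)
        s.s_init();
        s.s_push(10);
        // expexc was noexc, so reaching this point means no exception was raised
        int actval = s.g_top();  // actval: an expression evaluated after the trace
        int expval = 10;         // expval: the value actval is expected to have
        System.out.println(actval == expval ? "PASS" : "FAIL: got " + actval);
    }
}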



Formal Proof vs. Testing
An argument that testing has limits in certain respects and cannot completely replace proof.

Proof and testing both serve their own purposes. There are some areas where testing is infeasible.
"The problem with testing is not that it cannot show the absence of bugs, but that, in practice, it fails to show their presence."

Proof appears to be substantially more efficient at finding faults than the testing phase:
– relatively little effort
– early in the development phase

Proof can find bugs different from those found with testing techniques

Limits of Testing
• Concurrent / distributed / mobile systems
  – concurrency
  – non-determinism
  – mobility
• Non-functional properties
  – performance
  – reliability, fault tolerance (statistical metrics)
• Non-testable conditions
  – environment
• Testing based solely on analysis of the program implementation is not sufficient
• Few formal methods are available explicitly for testing

Formal Proof
• Good mathematical models exist for the behavior of sequential programs
• Models for concurrent / distributed systems / mobile computing are also available
• Now, in addition to required functionality (the mapping from input to output), we can also model non-functional properties like fault tolerance and timing

Limits of Proof
• Not scalable to large programs
• Models cover only some aspects of a program's behavior
  – models not available
  – correspondence between formal models and software behavior not well understood
• We can make mistakes in the proofs!

Use of Formal Proof
• Inappropriate use:
  – complete, formal verifications of large algorithms
• Appropriate use:
  – verify small but crucial segments of an algorithm
  – achieve cost-effectiveness by developing partial proofs for the "weakest links"
  – maintain acceptable levels of rigor without complete formalization

Relationships between proof and testing?
Formal proof and testing are complementary approaches to software verification and validation.
Exploratory Testing

In scripted testing, tests are first designed and recorded. Then they may be executed at some later time or by a different tester. In exploratory testing, tests are designed and executed at the same time, and they often are not recorded.

Exploratory testing is useful when...
• it is not obvious what the next test should be, OR
• we want to go beyond the obvious tests.

What is ET?
Concurrent, interacting test tasks. A "burst of testing" could mean any of these:
• Study the product.
• Model the test space.
• Select what to cover.
• Determine test oracles.
• Configure the test system.
• Operate the test system.
• Observe the test system.
• Evaluate the test results.
• Organize notes.
• Notice issues.

ET is the core practice of a skilled tester. Excellent exploratory testers...
• challenge constraints and negotiate their mission.
• adapt their test strategy to fit the situation.
• know the difference between observation and inference.
• know how to design questions and experiments.
• train their minds to be cautious, curious, and critical.
• tolerate substantial ambiguity, indecision, and time pressure.
• take notes and report results in a useful and compelling way.
• alert their clients to project issues that prevent good testing.
• spontaneously coordinate and collaborate.
• have developed resources and tools to enhance performance.
• have earned the trust placed in them.

Getting the Most Out of ET
• Augment ET with scripted tests and automation.
  – Usually it's best to use a diversified strategy.
• Exploit the human factor.
  – Use your confusion as a resource.
  – Let yourself be distracted by anomalies and new ideas.
  – Test in short bursts.
  – Work in pairs or groups.
  – Encourage variability among testers.
  – Exploit subject matter expertise.
  – Exploit inconsistency.
  – Get to know your developers.
  – Avoid repeating the exact same test twice, whenever possible.
• Learn the logic of testing
  – conjecture and refutation
  – abductive inference
  – correlation and causality
  – design of experiments
  – forward, backward, and lateral thinking
  – biases, heuristics, and human error
  – risk, benefit, and the meaning of "good enough"
• Practice critical reading and interviewing
  – analyzing natural language specifications
  – analyzing and cross-examining a developer's explanation
Exploratory questions
• How well does your car work? How do you know how well your car works?
• What evidence do you have about how your car works? Is that evidence reliable and up to date?
• What does it mean for your car to "work"? What facts would cause you to believe that your car doesn't work?
• In what ways could your car not work, yet seem to you that it does? In what ways could your car work, yet seem to you that it doesn't?
• What might cause your car not to work well (or at all)? What would cause you to suspect that your car will soon stop working?
• Do other drivers operate your car? How does it work for them?
• How important is it for your car to work?
• Are you qualified to answer these questions? Is anyone else qualified?

Adopt a clear and consistent testing vocabulary
• bug, risk, test plan, test strategy, test case, specification

Learn to model a product rapidly
• flowcharting, data flows, state models
• matrices and outlines
• function/data squares
• study the technology

Use a "grid search" strategy to control coverage
• Model the product in some way, then specify broad test areas in terms of that model (not specific test cases).
• Keep track of what areas you have and have not tested in.
• If you're in a highly structured environment, consider using session-based test management to measure and control exploratory testing.
  – Packages ET into chunks that can be tracked and measured.
  – Protects the intuitive process; gives bosses what they want.

Learn to take reviewable notes
• Take concise notes so that they don't interrupt your work.
• Record at least what your strategy was, what you tested, what problems you found, and what issues you have.
• Notice the test ideas you use that lead to interesting results.

Develop and use testing heuristics
• Try the heuristic test strategy model, on my web site.
• Practice critiquing test techniques and rules of thumb.

Practice responding to scrutiny
• Why did you test that?
• What was your strategy?
• How do you know your strategy was worthwhile?

Learn to spot obstacles to good testing
• Not enough information.
• Don't have enough of the right test data.
• The product isn't testable enough.

Verification, Validation, and Dynamic Testing Techniques

Black-box testing / functional testing
• What is black-box testing?
• Generation of test data
• Examples of generation of test data
• Examples of application of black-box testing

Field testing
• What is field testing?
• Examples of application of field testing

Fault/failure insertion testing
• What is fault/failure insertion testing?
• Examples of fault/failure insertion testing

Reference

Black-Box Testing
• Synonyms for black-box testing include: behavioral, functional, opaque-box, and closed-box testing.
• Black-box testing treats the system as a "black box", so it doesn't explicitly use knowledge of the internal structure. The content (implementation) of a black box is not known, and the function of the black box is understood completely in terms of its inputs and outputs (Input → Black box → Output). Many times we operate very effectively with black-box knowledge; in fact this is central to object orientation. As an example, most people successfully operate automobiles with only black-box knowledge.
• Black-box testing is usually described as focusing on testing functional requirements. It is used to assess the accuracy of the model's input-output transformation: the concern is how accurately the model transforms a given set of input data into a set of output data.
• Black-box testing is based on the view that any model can be considered to be a function that maps values from its input domain to values in its output range. Black-box testing is applied by feeding test data to the model and evaluating the corresponding outputs. If we can test all input-output transformation paths, then we can get 100% confidence. The more of the system's input domain is covered in testing, the more confidence we gain in the accuracy of the system's input-output transformation.
• Generation of test data is a crucially important but very difficult task.

Examples of generation of test data
• Exhaustive testing
• Random testing
• A systematic way
Exhaustive Testing

void printBoolean (bool error)
// Print the Boolean value on the screen
{
    if (error)
        cout << "True";
    else
        cout << "False";
    cout << endl;
}

Unfortunately, we'll never want such a simple printer.

void printBoolean (int inValue)
// Print the integer value on the screen
{
    if (inValue > 10)
        cout << inValue;
    else
        cout << endl;
}

(Example: feeding the integer 9 to this model produces nothing on the screen, although the simple model is designed to print every input integer on the screen.)

Random Testing
• Attempt testing in a haphazard way, entering data randomly until we cause the system to fail.
• Random testing is essentially a black-box testing strategy in which a system is tested by randomly selecting some subset of all possible input values.
• Test data may be chosen randomly or by a sampling procedure reflecting "the actual probability distribution of the input sequences." This allows one to estimate the "operational reliability".
• It is likely to uncover some errors in the system, but it is very unlikely to find them all.

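A sketch of random testing applied to the faulty integer printer above: inputs are drawn at random and each output is checked against the specification ("print every input integer"), so any sufficiently small draw reveals the defect:

import java.util.Random;

public class RandomTester {
    // Model under test, reconstructed from the example above: it should
    // print every input integer, but it only handles values greater than 10.
    static String model(int inValue) {
        return (inValue > 10) ? Integer.toString(inValue) : "";
    }

    public static void main(String[] args) {
        Random rng = new Random();
        for (int i = 0; i < 1000; i++) {
            int input = rng.nextInt(2001) - 1000;       // random value in [-1000, 1000]
            String expected = Integer.toString(input);  // oracle taken from the specification
            if (!model(input).equals(expected)) {
                System.out.println("Failure revealed by input " + input);
                return;
            }
        }
        System.out.println("No failure found in 1000 random tests");
    }
}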
It is not practical to test this model by running it with every possible data input: the number of elements in the set of input integers is clearly too large. So it is virtually impossible to test all input-output transformation paths; for a reasonably large and complex simulation system, the number of such paths can be very large. Therefore, the object of functional testing is to increase our confidence in the accuracy of the model's input-output transformation as much as possible, rather than trying to claim absolute correctness.

To test this simple printer model, we don't attempt exhaustive testing. We may try hundreds of times until finally we feed the integer 9 to it and find that there is an error in the model, for it cannot print input integers less than 10 on the screen.

A Systematic Way of Testing
• Fortunately, there are strategies for testing in a systematic way. One goal-oriented approach is to cover general classes of data.
• We should test at least one example of each category of inputs, as well as boundaries and other special cases.
(Example: the integer inputs fall into three cases: negative values, zero, and positive values.)

Field Testing
• Field testing, known as live environment testing, places the system in an operational (real) environment (i.e., using real data as the input source).
• The purpose is to collect as much information as possible.
• It is necessary to demonstrate system capabilities for acceptance.
• Although it is usually difficult, expensive and sometimes impossible to devise meaningful field tests for complex systems, their use wherever possible helps both the project team and decision makers to develop confidence in the model.

Host and target environments
• Host: the development environment.
• Target: the environment in which the system will be used.
• Which system should be tested in the target environment? Theoretically, all testing should be conducted in the target environment; after all, it is the target environment in which the system will eventually be used. However, constraining all testing to the target environment can result in a number of problems:
  – The target environments may not yet be available.
  – Target environments are usually less convenient to work with than host environments.
• Since the advent of high-level languages, the practice of developing software in a different environment to the environment in which it will eventually be used has become common.

• Target environments and associated development tools are usually much more expensive to provide to system developers than host environments.
• It is rare for a host environment to be identical to a target environment. There may be just minor differences in configuration, or there may be major differences, such as between a workstation and an embedded control processor, where the two environments may even have a different instruction set.

Test Implementation
• Direct access to target hardware, such as input-output devices.
• Calls to target environment system software, such as an operating system or real-time kernel.
• Unexplained behavior may reveal errors in system representation.

Acceptance testing has to be conducted in the target environment! The final stage of system testing, acceptance testing, has to be conducted in the target environment. Acceptance of a system must be based on tests of the actual system, and not a simulation in the host environment.

Examples of application of field testing
• It is especially useful for validating models of military combat systems: put the system in real-life situations and see the direct effectiveness of the system in those situations.
• Testing an expert system with actual data is a method for the validation of expert systems. In the development of an expert system, there is typically substantial asymmetry of expertise: the expert knows more about the domain than the developers of the system. In addition, the folklore of many application areas consists of heuristic rules.

Fault/Failure Insertion Testing
• Fault: an incorrect system component.
• Failure: incorrect behavior of a system component.
• This technique is used to insert a kind of fault or a kind of failure into the system and observe whether the system produces the invalid behavior as expected.
• Fault/failure insertion testing is an error-based testing technique. In error-based testing, the goal is to construct test cases that reveal the presence or absence of specific errors. Error-based testing is present in nearly all heuristic approaches to testing: informal debugging sessions frequently include checks on extreme values of variables, and in testing compilers, one of the first test cases tried is usually the "null" program.
• Mutants: the mutants are a set of models which are "close" to the model being tested. "Close" refers to the potential errors which could have occurred in the model being tested.
• Test data adequacy: a test data set is adequate if the model runs successfully on the data set and if all incorrect models run incorrectly.
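A minimal sketch of the mutant idea with a hypothetical max function: each mutant differs from the model by a single-token change, and test data kills a mutant only if some test distinguishes the two:

public class MutantDemo {
    static int max(int a, int b)      { return (a > b)  ? a : b; }  // model under test
    static int mutantLt(int a, int b) { return (a < b)  ? a : b; }  // mutant: > replaced by <
    static int mutantGe(int a, int b) { return (a >= b) ? a : b; }  // mutant: > replaced by >=

    public static void main(String[] args) {
        // The test input (2, 5) distinguishes the first mutant (5 vs 2),
        // so that mutant dies on this test data.
        System.out.println(max(2, 5) != mutantLt(2, 5));  // true

        // No input distinguishes the second mutant: it is functionally
        // equivalent to the model, which is why equivalent mutants are
        // excluded from the mutation-score denominator defined below.
        System.out.println(max(2, 5) == mutantGe(2, 5));  // true
    }
}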

Basic Methodology
• D: the test data used to test the model.
• M(model): a set of mutants of the model, which differ from the model in containing a single error chosen from a given list of error types.
• m: the number of elements in M(model).
• E(model): the set of equivalent mutants of the model.
• e: the number of elements in E(model).
• DM(model, D): the set of mutants that return results which differ from the results of the model, i.e. the mutants that are distinguished by the test data D.
• dm: the number of elements in DM(model, D).
• ms(model, D): the mutation score, ms(model, D) = dm / (m - e).

A mutation score is a number in the interval [0, 1], defined as the fraction of the nonequivalent mutants of the model which are distinguished by the test data D. A high score indicates that D is very close to being adequate for the model relative to the set of mutants of the model. A low score indicates a weakness in the test data.

• If a mutant of the model being tested gives incorrect results on execution with the test data D, we say it dies on the execution. If all mutants die, it is highly likely that the tested model is correct.
• Some of the mutant models will turn out to be functionally equivalent to the model; they will be indistinguishable from the model under the test data.
• If some mutants are live and the test data D is adequate, then either the live mutants are functionally equivalent to the model or there still might be complex errors in the model: the test data does not distinguish the tested model from a mutant model which contains an error.

Examples of fault/failure insertion testing
• Budd and Miller have studied fault/failure insertion testing as a tool to uncover typographical errors in matrix calculation programs. It can be used to uncover simple statement errors, domain errors, special values errors, data flow errors, dead code errors, dead branch errors, and coincidental correctness errors.
• It is used by Lipton to uncover resistant errors in production programs.
• Its application to regression testing is carried out by DeMillo.
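As a worked example of the score: if, hypothetically, m = 100 mutants are generated, e = 4 of them are equivalent, and the test data kills dm = 90, then ms = 90 / (100 - 4) ≈ 0.94, leaving 6 live non-equivalent mutants that point either to equivalent behaviour not yet recognised or to weaknesses in the test data.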

The Top 10 Testing Problems
10. Not enough training
9. "Us vs. Them" mentality
8. Lack of test tools
7. Lack of management understanding/support of testing
6. Lack of customer and user involvement
5. Not enough time for testing
4. Over-reliance on independent testers
3. Rapid change
2. Testers are in a "lose/lose" situation
1. Having to say "no"

Solutions for Training
• Obtain formal training in testing techniques.
• Seek certification: CSTE (Certified Software Test Engineer).
• Attend conferences.
• Read books and articles.

Solutions to the Teamwork Challenge
The goal is to get to "Us and them." Each person on the team can have a role in testing:
• Developers: unit and structural testing
• Testers: independent testing
• Users: business-oriented testing
• Management: to support testing activities

Solutions for Acquiring and Using Test Tools
• Identify a "champion" for obtaining test tools.
• Base the case for test tools on costs vs. benefits.
• Measure the benefits.
• Train people in tool usage.
• Have a basic testing process in place.

Solutions to Educating Management in Testing Issues
• Cultural change is needed.

• Focus your message to management on:
  – reducing the cost of rework
  – meeting the project schedule
• The benefits of testing must relate to these two things to be persuasive.

Solutions to Identifying and Involving the Customer in Testing
• Involve the customer and users throughout the project by performing reviews and inspections.
• Understand the difference between the customer and users.
• Include users on the system test team.
• Perform user acceptance testing.

Solutions to the Time Crunch
• Base schedules and estimates on measurable testing activities:
  – scripts to be executed
  – cases to be tested
  – requirements to be tested
• Have contingency plans for schedule slippage.
• Use automated testing tools.

Solutions to Overcoming Throwing Stuff Over the Wall
• Developers must take ownership and responsibility for the quality of their work.
• Quality control is most effective when performed at the point of creation.
• Get management support for developer responsibility for quality.
• Train developers to become excellent testers.
• Integrate automated testing tools into the project.

Solutions for Hitting a Moving Target
• The testing process must accommodate change.
• Manage the rate and degree of change.
• Focus on testable requirements.

Solutions for Fighting a Lose-Lose Situation
• The perception of testing must change:
  – Testers are paid to find defects.
  – Each defect found is one more the customer or user will not find.
• Testers are not to blame for bottlenecks; it is management's responsibility to have an efficient process.
• Keep the test results objective.

Solutions for Having to Say "No"

Most responsibility is on management to:
• have a quality software development process in place.
• understand that testing is only an evaluation activity.
• have contingency plans in place in case of problems.
• accept the honest facts.

QAI Workbench Model

Test Terminology
• Verification: all QC activities throughout the life cycle that ensure that interim deliverables meet their specifications.
• Validation: the "test phase" of the life cycle, which ensures that the end product (e.g., software or system) meets specifications or user needs.

When Testing Occurs

(Diagram: a V-model. The business need is defined, requirements are specified, the system is designed and coded; unit tests validate the code, integration tests validate the system design, system tests verify the business need, and acceptance tests validate it. Verification activities parallel each definition phase and validation activities parallel each test phase.)

Test Terminology (cont'd)
• Unit Testing: testing performed on a single, stand-alone module or unit of code.
• Integration Testing: testing performed on groups of related modules to ensure that data and control are passed properly between modules.
• System Testing: a predetermined combination of tests that, when executed successfully, satisfy management that the system meets specifications. Validates that the system was built right.
• User Acceptance Testing: testing to ensure that the system meets the needs of the organization and the end user/customer. Validates that the right system was built.
• Regression Testing: testing after changes have been made, to ensure that no unwanted changes were introduced to the software or system.
• Functional Tests: tests that validate business requirements; they test what the system is supposed to do.
• Black Box Tests: functional testing based on external specifications, without knowledge of how the system is constructed.

 – Usually process and/or data driven
• Structural Tests
 – Tests that validate the system architecture
 – Tests how the system was implemented
• White Box or Glass Box Tests
 – Structural testing
 – Testing based on knowledge of internal structure and logic
 – Usually logic driven, e.g.:

   If x = curr-date then set next-val to 03
   else set next-val to 05.

To effectively test systems, both functional and structural testing need to be performed.
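To make the structural (white-box) idea concrete, here is a rough Python port of the slide's pseudocode; the function name, date type, and sample dates are assumptions made for this sketch. A structural test supplies one case per branch, so the decision is exercised on both its true and false sides.

import random  # not needed here; only the standard library's date type is used
from datetime import date

def next_val(x: date, curr_date: date) -> int:
    # Port of the slide's pseudocode:
    # If x = curr-date then set next-val to 03 else set next-val to 05.
    return 3 if x == curr_date else 5

# White-box testing: one test case per outcome of the decision.
today = date(2024, 1, 15)
assert next_val(today, today) == 3              # true branch
assert next_val(date(2024, 1, 16), today) == 5  # false branch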

The Economics of Testing

Making the Message to Management
[Charts: Where Defects Originate; Where Testing Resources are Used; The Relative Cost of Fixing Defects]

The Bottom Line
• Most defects are created in the early stages of a project.
• Most defects are found in the later stages of a project.
• It costs 10 to 100 times as much to fix a defect in the later phases of a project.
• If you want to reduce the cost of testing, spend time early in the system development (or purchase) process to make sure the requirements and design are correct.

Basic Testing Principles
• Test early and often.
• Involve everyone on the project.
• Management support is critical.
• The greater the risk, the more intense the test should be.
• The higher the test coverage, the more confidence you’ll have in the test.

Test Strategy Planning

Step 1 - Determine Test Strategy
• How critical is the system to the organization?
• What are the tradeoffs?
• What type of project?
• What type of software?
• What type of technical environment?
• What is the project’s scope?
• Who will conduct testing?
• When will testing occur?
• What are the critical success factors?

What Type of Project?
 – Traditional
 – Prototyping/CASE
 – Maintenance

What Type of Software?
 – On-line
 – Real Time
 – Batch

What Type of Technical Environment?
 – Mainframe
 – Client/Server

What is the Project’s Scope?
 – New Development
 – System Maintenance

When Will Testing Occur?
 – Requirements
 – Design
 – Testing

What Are the Critical Success Factors?
 – Correctness

 – Reliability

Who Will Conduct Testing?
 – Users
 – Developers

What Are the Tradeoffs?
 – Schedule
 – Cost/Resources
 – Quality

How Critical is the System to the Organization?
 – Risk Assessment

Risk Assessment
[Figure: A Tool For Performing Risk Assessment]

Effective Testing Methods and Techniques

The QAI Testing Process
• Step 1 - Set Test Objectives
• Step 2 - Develop Test Plan
• Step 3 - Execute Tests
• Step 4 - Evaluate/Report Test Results

Step 1 - Set Test Objectives
• Select the test team.
• Perform risk assessment.
• Define test objectives.
 – A test objective is what the test is to validate.
 – There should be a one-to-one correspondence between system objectives and test objectives.

Step 2 - Develop Test Plan
• The better the test plan, the easier the test.
• Test planning should be a team activity.
• Test plans should be specific, yet flexible for change.
• Test plans should be reviewed just as any other project deliverable.
• The test plan should be easily read by management.

Major Elements of a Test Plan
• Introduction

• Approach (Strategy)
• Test Objectives
• Description of the system or software to be tested
• Test environment
• Description of the test team
• Milestones/Schedule
• Functions and attributes to be tested
• Tests to be performed
• Evaluation criteria
• Data recording
• References

How Much Time Should be Spent on Test Planning?
• Many organizations report spending one-third to one-half of a project’s time in test-related activities.

Planning Time Guidelines
• Of the total test time, roughly one-third of the time can be allocated each to:
 – Test planning
 – Test execution
 – Test evaluation

Tips for Test Planning
• Start early.
• Keep the test plan flexible to deal with change.
• Frequently have the test team review the test plan.
• Keep the test plan concise and readable.

Step 3 - Execute Tests
• Select test tools
• Develop test cases
• Execute tests

Select Test Tools
• A test tool is any vehicle that assists in testing.
• May be manual or automated.

Automated Tools

Testing Not the complete solution. but an important part Requires: –a process –an understanding of testing in general •Knowing what to test •Defining test cases •Knowing how to evaluate test results –cultural acceptance More than just capture/playback Categories of Automated Tools Capture/playback or script execution Defect trackers Test management Test case generators Coverage analyzers Path and complexity analyzers Manual Testing Automated Testing 48 .

Critical Success Factors
• Get senior management support for buying and integrating test tools.
• Know your requirements.
• Be reasonable in your expectations - start small and grow.
• Have a strong testing process that includes tools.
• Don’t cut the training corner.

Regression Testing
• Why perform regression testing?
• The process
• The issues
• The role of automated testing tools
• How much is enough?

[Figure: No Regression Testing: Hidden Defects]

[Figure: Regression Testing: No Hidden Defects]

Regression Testing - The Process

Regression Testing Issues
• Test data must be maintained.
• There must be a way to conduct two identical tests.

• There must be a way to compare the results of two identical tests.
• Some tests cannot use previous versions of test data.
 – Data conversion may be required
 – Date dependencies
• The greater the difference between versions, the less effective the regression test.
• There must be a stable baseline version for comparisons.

Regression Testing - How Much is Enough?
• The easy answer: “It depends.” What does it depend on?
 – Risk
 – Scope of the change
 – System dependencies

Proving the Value of Regression Testing
• You need a benchmark of non-regression testing.
• Measure time and defects.
• Consider the initial investment in creating the test environment and test scripts/procedures.
• Return on Investment (ROI) can include:
 – Shorter test times
 – More accurate testing
 – More consistent testing
 – Improved communication of defects
 – More effective testing (e.g., fewer test cases needed to find more defects)

Tips for Performing Regression Testing
• Control the scope of testing.
• Base the amount of regression testing on risk.
• Consider manual vs. automated.
• Use automated tools.
• Build a repeatable and defined process for regression testing.
• Build a reusable test bed of data.

Step 3 - Execute Tests: Develop Test Cases
• Functional Techniques
 – Requirements-based
 – Process-based

 – Data-oriented
 – Boundary value analysis
 – Decision tables
 – Equivalence partitioning
• Structural Techniques
 – Complexity analysis
 – Coverage
  • Statement
  • Branch
  • Condition
  • Multi-condition
  • Path

Step 4 - Evaluate/Report Test Results
• Occurs throughout the testing life cycle.
• Tracks testing progress.
• Keeps management informed of testing progress.

Valuable Test Metrics
• Two key areas to measure:
 – Time
  • For future estimating
 – Defects
  • For determining the effectiveness of testing
  • For improving the development and testing processes
• Time
 – Time per test case
 – Time per test script
 – Time per unit test
 – Time per system test
• Sizing
 – Function points
 – Lines of code
• Defects
 – Numbers of defects
 – Defects per sizing measure
 – Defects per phase of testing
 – Defect origin
 – Defect removal efficiency

Defect Removal Efficiency

 DRE = number of defects found in producer testing /
       number of defects found during the life of the product
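A worked example of the formula, with made-up defect counts, might look like this in Python:

def defect_removal_efficiency(found_by_producer: int, found_over_life: int) -> float:
    # DRE = defects found in producer testing / defects found during the
    # life of the product (the latter includes those found by the producer).
    return found_by_producer / found_over_life

# Hypothetical numbers: 90 defects found before release, 10 more reported later.
print(defect_removal_efficiency(90, 100))  # 0.9, i.e. 90% removal efficiency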

The Critical Path of Testing

What Must be in Place for Effective Testing?
• Management support
• A defined and repeatable process for testing
• Adequate tools
• Trained testers
• Cooperation between testers, developers and end users
• Maximum coverage with minimal test cases

The Five Levels of Software Process Maturity

Best Practices for Control/Test Management Processes
• Level 5 - Preventive Management: risk analysis, project customization
• Level 4 - Statistical Process Control: defect profiles, dashboards, root cause analysis, statistical analysis
• Level 3 - Defect Management: defect database, defect reporting, defect analysis
• Level 2 - Verification: code analyzers, walkthroughs, inspections
• Level 1 - Validation: acceptance test, unit test, integration test, system test


Software Testing

Table of Contents
• Introduction to Software Testing
• Basic Methods
• Testing Levels
 – Unit Testing
 – Integration Testing
 – External Function Testing
 – System Testing
 – Regression Testing
 – Acceptance Testing
 – Installation Testing
• Completion Criteria
• Metrics
• Organization
• Testing and SQA; Inspections
• A Closer Look: Fault Based Methods
• Conclusion

Introduction to Software Testing

Software testing is a vital part of the software lifecycle. To understand its role, it is instructive to review the definitions of software testing in the literature. According to Humphrey [1], software testing is defined as "the execution of a program to find its faults". Thus, a successful test is one that finds a defect. Among alternative definitions of testing are the following:

"... the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results ..."

"... the process of executing a program with the intent of finding errors."

"... any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results."

"... Testing is the measurement of software quality ..."

Of course, none of these definitions claims that testing shows that software is free from defects. Testing can show the presence, but not the absence, of problems. Besides finding faults, we may also be interested in testing performance, safety, fault-tolerance or security. It is important to remember that testing assumes that requirements are already validated.

This sounds simple enough, but there is much to consider when we want to do software testing. For projects of a large size, more testing will usually reveal more bugs. The question then becomes when to stop testing, and what is an acceptable level of bugs. This is the question of 'good enough software'. Testing often becomes a question of economics.

Basic Methods

White Box Testing

White box testing is performed to reveal problems with the internal structure of a program. This requires the tester to have detailed knowledge of the internal structure. A common goal of white-box testing is to ensure a test case exercises every path through a program. A fundamental strength that all white box testing strategies share is that the entire software implementation is taken into account during testing, which facilitates error detection even when the software specification is vague or incomplete. The effectiveness or thoroughness of white-box testing is commonly expressed in terms of test or code coverage metrics, which measure the fraction of code exercised by test cases.

Black Box Testing

Black box tests are performed to assess how well a program meets its requirements, looking for missing or incorrect functionality. Functional tests typically exercise code with valid or nearly valid input for which the expected output is known; this includes concepts such as 'boundary values'. Performance tests evaluate response time, memory usage, throughput, device utilization, and execution time. Stress tests push the system to or beyond its specified limits to evaluate its robustness and error handling capabilities. Reliability tests monitor system response to representative user input, counting failures over time to measure or certify reliability.

Testing Levels
[Figure: Different levels of testing]

Testing occurs at every stage of system construction. The larger a piece of code is when defects are detected, the harder and more expensive it is to find and correct the defects. The different levels of testing reflect that testing, in the general sense, is not a single phase of the software lifecycle but a set of activities performed throughout the entire lifecycle. In considering testing, most people think of the activities described in the figure; the activities after implementation are normally the only ones associated with testing. However, software testing must be considered before implementation, as is suggested by the input arrows into the testing activities.

The following paragraphs describe the testing activities from the 'second half' of the software lifecycle.

Unit Testing

Unit testing exercises a unit in isolation from the rest of the system. A unit is typically a function or a small collection of functions (libraries, classes), implemented by a single developer. The main characteristic that distinguishes a unit is that it is small enough to test thoroughly, if not exhaustively. The small size of units allows a high level of code coverage, and it is also easier to locate and remove bugs at this level of testing. Developers are normally responsible for the testing of their own units, and these are normally white box tests.
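A minimal sketch of such a unit test, using Python's built-in unittest module and a made-up unit, could look like this:

import unittest

def mean(values):
    # The 'unit' under test: a single, small, thoroughly testable function.
    if not values:
        raise ValueError("mean of empty sequence")
    return sum(values) / len(values)

class MeanTest(unittest.TestCase):
    def test_typical(self):
        self.assertEqual(mean([2, 4, 6]), 4)

    def test_single_value(self):
        self.assertEqual(mean([7]), 7)

    def test_empty_is_rejected(self):
        with self.assertRaises(ValueError):
            mean([])

if __name__ == "__main__":
    unittest.main()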

Integration Testing

One of the most difficult aspects of software development is the integration and testing of large, untested sub-systems: the integrated system frequently fails in significant and mysterious ways, and it is difficult to fix. Integration testing exercises several units that have been combined to form a module, subsystem, or system. It focuses on the interfaces between units, to make sure the units work together. The nature of this phase is certainly 'white box', as we must have a certain knowledge of the units to recognize if we have been successful in fusing them together in the module.

There are three main approaches to integration testing: top-down, bottom-up and 'big bang'. Top-down combines, tests, and debugs top-level routines that become the test 'harness' or 'scaffolding' for lower-level units. Bottom-up combines and tests low-level units into progressively larger modules and subsystems. 'Big bang' testing is, unfortunately, the prevalent integration test 'method': it waits for all the module units to be complete before trying them out together.

Bottom-up
Some people feel that bottom-up is a more intuitive test philosophy. It allows early testing aimed at proving the feasibility and practicality of particular modules; modules can be integrated in various clusters as desired; and the major emphasis is on module functionality and performance.
• Advantages: No test stubs are needed. It is easier to adjust manpower needs. Errors in critical modules are found early.
• Disadvantages: Test drivers are needed. Many modules must be integrated before a working program is available. Interface errors are discovered late.
• Comments: At any given point, more code has been written and tested than with top-down testing.

Top-Down
The control program is tested first, modules are integrated one at a time, and the major emphasis is on interface testing.
• Advantages: No test drivers are needed. The control program plus a few modules forms a basic early prototype. Interface errors are discovered early. Modular features aid debugging.
• Disadvantages: Test stubs are needed. The extended early phases dictate a slow manpower buildup. Errors in critical modules at low levels are found late.
• Comments: An early working program raises morale and helps convince management progress is being made. It is hard to maintain a pure top-down strategy in practice.

'Big bang'
This approach waits for all the modules to be constructed and tested independently, and when they are finished, they are integrated all at once. While this approach is very quick, it frequently reveals more defects than the other methods. These errors have to be fixed, and as we have seen, errors that are found later take longer to fix. The cost of drivers and stubs in the top-down and bottom-up testing methods is what drives the use of 'big bang' testing. Like bottom-up, there is really nothing that can be demonstrated until later in the process.

Integration tests can rely heavily on stubs or drivers. Stubs stand in for finished subroutines or sub-systems. A stub might consist of a function header with no body; it might return hard-coded values, obtain data from the tester, or read and return test data from a file. Stub creation can be a time consuming piece of testing.
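A sketch of the stub idea in Python (all names and rates here are hypothetical): the stub returns hard-coded values so integration of the caller can proceed before the real subroutine exists.

# The real subroutine -- perhaps unfinished, or too expensive to call in a test.
def fetch_exchange_rate(currency: str) -> float:
    raise NotImplementedError("remote rate service not built yet")

# Stub: stands in for the subroutine by returning hard-coded values.
def fetch_exchange_rate_stub(currency: str) -> float:
    return {"EUR": 1.25, "GBP": 1.5}.get(currency, 1.0)

def price_in_usd(amount: float, currency: str, rate_source) -> float:
    # Unit under integration: combines its own logic with a collaborator.
    return amount * rate_source(currency)

# The integration test proceeds even though the real service is missing.
assert price_in_usd(100, "EUR", rate_source=fetch_exchange_rate_stub) == 125.0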

External Function Testing

The 'external function test' is a black box test to verify that the system correctly implements specified functions. This phase is sometimes known as an alpha test. Testers will run tests that they believe reflect the end use of the system.

System Testing

The 'system test' is a more robust version of the external test, and can also be known as an alpha test. The essential difference between 'system' and 'external function' testing is the test platform. In system testing, the platform must be as close to production use in the customers' environment as possible, including factors such as hardware setup and database size and complexity. By replicating the target environment, we can more accurately test 'softer' system features (performance, security and fault-tolerance). Because of the similarities between the test suites in the external function and system test phases, a project may leave one of them out: it may be too expensive to replicate the user environment for the system test, or we may not have enough time to run both.

Acceptance Testing

An acceptance (or beta) test is an exercise of a completed system by a group of end users to determine whether the system is ready for deployment. Here the system will receive more realistic testing than in the 'system test' phase, as the users have a better idea how the system will be used than the system testers.

Regression Testing

Regression testing is an expensive but necessary activity performed on modified software to provide confidence that changes are correct and do not adversely affect other system components. Four things can happen when a developer attempts to fix a bug. Three of these things are bad, and one is good:

                   Successful Change    Unsuccessful Change
   No New Bug      Good                 Bad
   New Bug         Bad                  Bad

Because of the high probability that one of the bad outcomes will result from a change to the system, it is necessary to do regression testing. A frequently asked question about regression testing is 'The developer says this problem is fixed. Why do I need to re-test?', to which the answer is 'The same person probably told you it worked in the first place'.

Most industrial testing is done via test suites: automated sets of procedures designed to exercise all parts of a program and to show defects. While the original suite could be used to test the modified software, this might be very time-consuming. A regression test selection technique chooses, from an existing test set, the tests that are deemed necessary to validate modified software. There are three main groups of test selection approaches in use: Minimization approaches seek to satisfy structural coverage criteria by identifying a minimal set of tests that must be rerun. Coverage approaches are also based on coverage criteria, but do not require minimization of the test set; instead, they seek to select all tests that exercise changed or affected program components. Safe approaches attempt instead to select every test that will cause the modified program to produce different output than the original program.

An interesting approach to limiting test cases is based on whether we can confine testing to the "vicinity" of the change. (Ex. If I put a new radio in my car, do I have to do a complete road test to make sure the change was successful?) It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. A new breed of regression test theory tries to identify, through program flows or reverse engineering, where boundaries can be placed around modules and subsystems. These graphs can determine which tests from the existing suite may exhibit changed behavior on the new version.
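As a toy illustration of the coverage approach (the test names, components, and coverage data are invented for the sketch), selection reruns every test whose covered components intersect the change:

# Each test is mapped to the program components it exercises.
test_coverage = {
    "test_login":    {"auth", "session"},
    "test_checkout": {"cart", "payment"},
    "test_profile":  {"auth", "profile"},
}

changed = {"auth"}  # components touched by the modification under review

# Coverage-based selection: rerun every test that exercises a changed
# component, with no attempt to minimize the resulting set.
selected = sorted(name for name, covered in test_coverage.items() if covered & changed)
print(selected)  # ['test_login', 'test_profile']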

Regression testing has been receiving more attention as corporations focus on fixing the 'Year 2000 Bug'. The goal of most Y2K work is to correct the date handling portions of a system without changing any other behavior. A new 'Y2K' version of the system is compared against a baseline original system. With the obvious exception of date formats, the behavior of the two versions should be identical. This means not only do they do the same things correctly, they also do the same things incorrectly: a non-Y2K bug in the original software should not have been fixed by the Y2K work.

Installation Testing

The testing of full, partial, or upgrade install/uninstall processes.

Completion Criteria

There are a number of different ways to determine that the test phase of the software life cycle is complete. Some common examples are:
• All black-box test cases are run.
• White-box test coverage targets are met.
• The rate of fault discovery goes below a target value.
• A target percentage of all faults in the system are found.
• The measured reliability of the system achieves its target value (mean time to failure).
• Test phase time or resources are exhausted.

When we begin to talk about completion criteria, we move naturally into a discussion of software testing metrics.

Metrics

Goals

As stated above, the major goal of testing is to discover errors in the software. A secondary goal is to build confidence that the system will work without error when testing does not reveal any errors. Then what does it mean when testing does not detect any errors? We can say that either the software is high quality or the testing process is low quality. We need metrics on our testing process if we are to tell which is the right answer. As with all domains of the software process, there are hosts of metrics that can be used in testing. Rather than discuss the merits of specific measurements, it is more important to know what they are trying to achieve. Three themes prevail:
• Quality Assessment (What percentage of defects are captured by our testing process? How many remain?)
• Risk Management (What is the risk related to remaining defects?)
• Test Process Improvement (How long does our testing process take?)

Quality Assessment

An important question in the testing process is "when should we stop?" The answer is when system reliability is acceptable, or when the gain in reliability cannot compensate for the testing cost. To answer either of these concerns we need a measurement of the quality of the system. The most commonly used means of measuring system quality is defect density, represented by:

 defect density = # of defects / system size

where system size is usually expressed in thousands of lines of code, or KLOC. Defect density accounts only for defects that are found in-house or over a given amount of operational field use. Although it is a useful indicator of quality when used consistently within an organization, there are a number of well documented problems with this metric; the most popular relate to inconsistent definitions of defects and system sizes.

Other metrics attempt to estimate how many defects remain undetected. A simplistic case of error estimation is based on "error seeding". We assume the system has X errors, and it is artificially seeded with S additional errors. After a period of testing, we have discovered Tr 'real' errors and Ts seeded errors. If we assume (a questionable assumption) that the testers find the same percentage of seeded errors as real errors, we can calculate X:

 S / (X + S) = Ts / (Tr + Ts)
 X = S * ((Tr + Ts) / Ts - 1)

For example, if we find half the seeded errors, then the number of 'real' defects found represents half of the total defects in the system.
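The estimate is easy to compute; the counts below are hypothetical:

def estimate_real_errors(seeded: int, real_found: int, seeded_found: int) -> float:
    # X = S * ((Tr + Ts) / Ts - 1), which simplifies to X = S * Tr / Ts.
    return seeded * ((real_found + seeded_found) / seeded_found - 1)

# Made-up run: 20 errors seeded; testing finds 10 of them (half) plus 35 real errors.
x = estimate_real_errors(seeded=20, real_found=35, seeded_found=10)
print(x)  # 70.0 -- the 35 real defects found are estimated to be half of the total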

Estimating the number and severity of undetected defects allows informed decisions on whether the quality is acceptable or additional testing is cost-effective. It is very important to consider maintenance costs and redevelopment efforts when deciding on the value of additional testing.

Risk Management

Metrics involved in risk management measure how important a particular defect is (or could be). A truism is that there is never enough time or resources for complete testing, making prioritization a necessity. One approach is known as Risk Driven Testing, where Risk has a specific meaning. The failure of each component is rated by Impact and Likelihood. Impact is a severity rating, based on what would happen if the component malfunctioned. Likelihood is an estimate of how probable it is that the component would fail. Together, Impact and Likelihood determine the Risk for the piece. With a rating scale, this might be represented visually as a risk matrix. The relative importance of likelihood and impact will vary from project to project and company to company; obviously, the higher the rating on each scale, the higher the overall risk involved with defects in the component. These measurements allow us to prioritize our testing and repair cycles. While these are reasonable measures for assessing quality, they are more often used to assess the risk (financial or otherwise) that a failure poses to a customer, or in turn to the system supplier.

A system level measurement for risk management is the Mean Time To Failure (MTTF). Test data sampled from realistic beta testing is used to find the average time until system failure. This data is extrapolated to predict overall uptime and the expected time the system will be operational. Sometimes measured with MTTF is the Mean Time To Repair (MTTR), which represents the expected time until the system will be repaired and back in use after a failure is observed. Availability is the probability that a system is available when needed, obtained by calculating MTTF / (MTTF + MTTR).

Process Improvement

It is generally accepted that to achieve improvement you need a measure against which to gauge performance. To improve our testing processes we need the ability to compare the results from one process to another. By tracking our test efficiency and effectiveness, we can evaluate the changes made to the testing process. Popular measures of the testing process report:

 Effectiveness: number of defects found and successfully removed / number of defects present
 Efficiency: number of defects found in a given time

It is also important to consider system failures reported in the field by the customer. If a high percentage of customer-reported defects were not revealed in-house, it is a significant indicator that the testing process is incomplete. A good defect reporting structure will allow defect types and origins to be identified. We can use this information to improve the testing process by altering and adding test activities to improve our chances of finding the defects that are currently escaping detection. Testing metrics give us an idea of how reliable our testing process has been at finding defects, and this is a reasonable indicator of its performance in the future.
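The availability formula and the risk-driven prioritization described above can be sketched as follows; the component names and the 1-5 rating scales are assumptions, since the text only specifies "a rating scale":

def availability(mttf: float, mttr: float) -> float:
    # Probability the system is available when needed: MTTF / (MTTF + MTTR).
    return mttf / (mttf + mttr)

print(round(availability(mttf=500, mttr=2), 4))  # 0.996

# Risk-driven prioritization: rate each component's failure by impact and
# likelihood, then test the highest-risk components first.
components = {"billing": (5, 2), "report_export": (2, 4), "login": (4, 4)}
by_risk = sorted(components.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (impact, likelihood) in by_risk:
    print(name, impact * likelihood)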

Software Testing Organization

Test Groups

The following summarizes the pros and cons of maintaining separate test groups.

Pros
• Testers are usually the only people to use a system heavily as experts.
• Independent testing is typically more efficient at detecting defects related to special cases, interaction between modules, and system level usability and performance problems.
• Programmers are neither trained, nor motivated, to test.
• Overall, more of the defects in the product will likely be detected.
• Test groups can provide insight into the reliability of the software before it is actually shipped.

Cons
• Having separate test groups can result in duplication of effort (e.g., the test group expends resources executing tests developers have already run).
• The detection of defects happens at a later stage.
• Designers may have to wait for responses from the test group to proceed. This problem can be exacerbated in situations where the test group is not physically collocated with the design group.
• The cost of maintaining separate test groups.

The key to optimizing the use of separate test groups is understanding that developers are able to find certain types of bugs very efficiently, and testers have greater abilities in detecting other bugs. Important considerations are the size of the organization and the criticality of the product.

Testing Problems

When trying to effectively implement software testing, there are several mistakes that organizations typically make. The errors fall into (at least) four broad classes:

Misunderstanding the role of testing. The purpose of testing is to discover defects in the product, report status, and recommend actions. It must be remembered that measurement is not the goal; improvement through measurement, analysis and feedback is what is needed.

Poor planning of the testing effort. Test plans often over-emphasize testing functionality at the expense of potential interactions. This mentality can also lead to incomplete configuration testing and inadequate load and stress testing. Neglecting to test documentation and/or installation procedures is also a risky decision. Furthermore, it is important to have an understanding of the relative criticality of defects when planning tests.

Using the wrong personnel as testers. A test group should include domain experts, and need not be limited to people who can program. The role of testing should not be relegated to junior programmers, nor should it be a place to employ failed programmers. A test team that lacks diversity will not be as effective.

Poor testing methodology. Just as programmers often prefer coding to design, testers can be too focussed on running tests at the expense of designing them. Using code coverage as a performance goal for testers, or ignoring coverage entirely, are poor strategies. The tests must verify that the product does what it is supposed to do, while not doing what it should not.

Testing and SQA; Inspections

In any case, the early detection of defects is critical: the closer to the time of its creation that we detect and remove a defect, the lower the cost, both in terms of time and money. Having said that, testing is something that can be started much earlier than is normally the case. Testers can review their test plans with developers as the developers are creating their designs, well before testing begins. Thus the developer may be more aware of the potential defects and act accordingly.

Inspections are undoubtedly a critical tool to detect and prevent defects. Inspections are strict and close examinations conducted on specifications, design, code, test, and other artifacts. An important point about inspections is that they can be performed much earlier in the design cycle. This is illustrated in the figure.

Figure: Defect detection and cost to correct (Source: McConnell)

Evidence of the benefits of inspections abounds. The literature (Humphrey 1989) reports cases where:
• inspections are up to 20 times more efficient than testing;
• code reading detects twice as many defects/hour as testing;
• 80% of development errors were found by inspections;
• inspections resulted in a 10x reduction in the cost of finding errors.

In the face of all this evidence, it has been suggested that "software inspections can replace testing". Inspections could replace testing if and only if all information gleaned through testing could be obtained through inspection. This is not true for several reasons. Firstly, testing can identify defects due to complex interactions in large systems (e.g., timing/synchronization); while inspections can detect this kind of event, as systems become more complex the chances of one person understanding all the interfaces and being present at all the reviews is quite small. Secondly, testing can provide a measure of software reliability (i.e., failures/execution time) that is unobtainable from inspections; this measure can often be used as a vital input to the release decision. Thirdly, testing identifies system level performance and usability issues that inspections cannot. In short, since inspections and testing provide different, equally important information, one cannot replace the other. Depending on the product, the optimal mix of inspections and testing may be different.

A Closer Look: Fault Based Methods

The following paragraphs describe some newer techniques in the software testing field. These methods attempt to address the belief that current techniques for assessing software quality are not adequate, particularly in the case of mission critical systems. Voas et al., among others, suggest that the traditional belief that improving and documenting the software development process will increase software quality is lacking: they recognize that the amount of testing (which is product focussed) required in order to demonstrate high reliability is impractical. Therefore, quality processes cannot demonstrate reliability, and the testing necessary to do so is impossible to perform.

Fault based methods include error based testing, mutation testing, fault seeding, and fault injection. Error based testing defines classes of errors, as well as inputs that will reveal any error of a particular class. Mutation testing injects faults into code to determine optimal test inputs. Fault seeding implies the injection of faults into software prior to test; based on the number of these artificial faults discovered during testing, inferences are made on the number of remaining 'real' faults (for this to be valid, the seeded faults must be assumed similar to the real faults). Fault injection evaluates the impact of changing the code or state of an executing program on the behavior of the software. Having briefly described each of the four techniques, fault injection will now be discussed in more detail.

Fault injection is not a new concept. Hardware design techniques have long used inserted fault conditions to test system behavior: it is as simple as pulling the modem out of your PC during use and observing the results to determine if they are safe and/or desired. The injection of faults into software is not so widespread, though it would appear that companies such as Hughes Information Systems, Microsoft, and Hughes Electronics have applied the techniques or are considering them. The technique can be applied to internal source code, as well as to 3rd party software, which may be a "black box". As a simple example consider the following code:

Original                      Fault injected
X = (r1 - 2) + (s2 - s1)      X = (r1 - 2) + (s2 - s1)
Y = z - 1                     X = perturb(x)
...                           Y = z - 1
T = x/y                       ...
                              T = x/y
                              If T > 100 then print ('WARNING')

In this case it is catastrophic if T > 100. By using perturb(x) to generate changed values of X (i.e., a random number generator), you can quickly determine how often corrupted values of X lead to undesired values of T. Properly used, fault insertion can give insight as to where testing should be concentrated, how much testing should be done, whether or not systems are fail-safe, etc. A runnable version of this experiment is sketched below.
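This sketch interprets perturb() as adding uniform random noise (one of many possible corruption models); the input values are hypothetical:

import random

def perturb(x: float, scale: float = 10.0) -> float:
    # Corrupt x with uniform random noise -- one reading of the slide's
    # perturb(x) "random number generator".
    return x + random.uniform(-scale, scale)

def run_once(r1: float, s2: float, s1: float, z: float, inject: bool) -> bool:
    # Mirrors the code fragment above; True means the catastrophic state T > 100.
    x = (r1 - 2) + (s2 - s1)
    if inject:
        x = perturb(x)
    y = z - 1
    t = x / y
    return t > 100

# Estimate how often corrupted values of X drive T past the safety limit.
random.seed(1)
trials = 10_000
failures = sum(run_once(r1=8, s2=9, s1=4, z=1.1, inject=True) for _ in range(trials))
print(f"{failures / trials:.1%} of corrupted runs exceeded T > 100")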

Conclusion

Software testing is an important part of the software development process. It is not a single activity that takes place after code implementation, but is part of each stage of the lifecycle. A successful test strategy will begin with consideration during requirements specification; testing details will be fleshed out through high- and low-level system designs, and testing will be carried out by developers and separate test groups after code implementation. As with the other activities in the software lifecycle, testing has its own unique challenges. As software systems become more and more complex, the importance of effective, well planned testing efforts will only increase.

Software Testing Techniques
• Software Testing Fundamentals
 – Testing Objectives, Principles, Testability
• Software Test Case Design
• White-Box Testing
 – Cyclomatic Complexity
 – Graph Matrices
 – Control Structure Testing (not included)
 – Condition Testing (not included)
 – Data Flow Testing (not included)
 – Loop Testing (not included)
• Black-Box Testing
 – Graph-based Testing Methods (not included)
 – Equivalence Partitioning
 – Boundary Value Analysis
 – Comparison Testing (not included)

Software Testing Fundamentals

Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and coding. Software testing demonstrates that software functions appear to be working according to specifications and performance requirements.

Testing Objectives

Myers [MYE79] states a number of rules that can serve well as testing objectives:
• Testing is a process of executing a program with the intent of finding an error.
• A good test case is one that has a high probability of finding an undiscovered error.
• A successful test is one that uncovers an as-yet undiscovered error.
The major testing objective is to design tests that systematically uncover types of errors with minimum time and effort.

Software Testing Principles

Davis [DAV95] suggests a set of testing principles:
• All tests should be traceable to customer requirements.
• Tests should be planned long before testing begins.
• The Pareto principle applies to software testing: 80% of all errors uncovered during testing will likely be traceable to 20% of all program modules.
• Testing should begin “in the small” and progress toward testing “in the large”.
• Exhaustive testing is not possible.
• To be most effective, testing should be conducted by an independent third party.

Software Testability

Software testability is simply how easily a computer program can be tested. A set of program characteristics leads to testable software:
• Operability: “The better it works, the more efficiently it can be tested.”
• Observability: “What you see is what you test.”
• Controllability: “The better we can control the software, the more the testing can be automated and optimized.”
• Decomposability: “By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting.”
• Simplicity: “The less there is to test, the more quickly we can test it.”
• Stability: “The fewer the changes, the fewer the disruptions to testing.”
• Understandability: “The more information we have, the smarter we will test.”

Test Case Design

Two general software testing approaches: black-box testing and white-box testing.

Black-box testing: knowing the specified functions of the software, design tests to demonstrate each function and check its errors. Major focus: functions, operations, external interfaces, external data and information.

White-box testing: knowing the internals of the software, design tests to exercise all internals of the software to make sure they operate according to specifications and designs. Major focus: internal structures, logic paths, control flows, data flows, internal data structures, conditions, loops, etc.

White-Box Testing and Basis Path Testing

White-box testing, also known as glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using white-box testing methods, we derive test cases that:
• Guarantee that all independent paths within a module have been exercised at least once.
• Exercise all logical decisions on their true and false sides.
• Execute all loops at their boundaries and within their operational bounds.
• Exercise internal data structures to assure their validity.

Basis path testing (a white-box testing technique):
• First proposed by Tom McCabe [MCC76].
• Can be used to derive a logical complexity measure for a procedural design.
• Used as a guide for defining a basis set of execution paths.
• Guarantees to execute every statement in the program at least one time.

Cyclomatic Complexity

Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When this metric is used in the context of basis path testing, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program.

Three ways to compute cyclomatic complexity:
• The number of regions of the flow graph corresponds to the cyclomatic complexity.
• Cyclomatic complexity, V(G), for a flow graph G is defined as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
• Cyclomatic complexity, V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.

Deriving Test Cases

Step 1: Using the design or code as a foundation, draw a corresponding flow graph.
Step 2: Determine the cyclomatic complexity of the resultant flow graph.
Step 3: Determine a basis set of linearly independent paths. For example:
 path 1: 1-2-10-11-13
 path 2: 1-2-10-12-13
 path 3: 1-2-3-10-11-13
 path 4: 1-2-3-4-5-8-9-2-...
 path 5: 1-2-3-4-5-6-8-9-2-...
 path 6: 1-2-3-4-5-6-7-8-9-2-...
Step 4: Prepare test cases that will force execution of each path in the basis set. For example, for path 1:
 test case: value(k) = valid input, where k < i defined below; value(i) = -999, where 2 <= i <= 100
 expected results: correct average based on k values and proper totals.
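To make Step 2 concrete for the flow graph above: the paths use nodes 1 through 13, and an edge count of 17 is assumed here, which is consistent with the six basis paths listed (17 - 13 + 2 = 6):

def cyclomatic_complexity(edges: int, nodes: int) -> int:
    # V(G) = E - N + 2 for a connected flow graph.
    return edges - nodes + 2

# 17 edges is an assumption for this sketch; it yields V(G) = 6,
# matching the six basis paths enumerated in Step 3.
print(cyclomatic_complexity(edges=17, nodes=13))  # 6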

Equivalence Partitioning

Equivalence partitioning is a black-box testing method that:
• divides the input domain of a program into classes of data, and
• derives test cases based on these partitions.

An equivalence class represents a set of valid or invalid states for an input condition. Test case design for equivalence partitioning is based on an evaluation of equivalence classes for an input domain. An input condition is:
• a specific numeric value,
• a range of values,
• a set of related values, or
• a Boolean condition.

Equivalence Classes

Equivalence classes can be defined using the following guidelines:
• If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
• If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
• If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
• If an input condition is Boolean, one valid and one invalid class are defined.

Examples:

area code:
 input condition, Boolean - the area code may or may not be present.
 input condition, range - value defined between 200 and 900
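A small sketch deriving one representative test value per equivalence class for the range condition above (the representative values are arbitrary picks):

# One valid and two invalid classes for the area code range 200-900.
def area_code_partitions(low: int = 200, high: int = 900) -> dict:
    return {
        "valid (within range)":  (low + high) // 2,   # 550
        "invalid (below range)": low - 1,             # 199
        "invalid (above range)": high + 1,            # 901
    }

for partition, value in area_code_partitions().items():
    print(f"{partition}: {value}")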

password:
 input condition, Boolean - a password may or may not be present.
 input condition, value - six character string.

command:
 input condition, set - containing the commands noted before.

Boundary Value Analysis

Boundary value analysis is a test case design technique that complements equivalence partitioning. Objective: boundary value analysis leads to a selection of test cases that exercise bounding values.

Guidelines:
• If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b, and with values just above and just below a and b.
 Example: Integer D with input condition [-3, 10]:
  test values: -3, -2, -1, 0, 10, 11
• If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers. Values just above and below the minimum and maximum are also tested.
 Example: Enumerated data E with input condition {3, 5, 100, 102}:
  test values: 3, 102, 200
• If internal program data structures have prescribed boundaries, be certain to design a test case to exercise the data structure at its boundary. For example, for an array:
 input condition: empty, single element, full element, out-of-boundary
 search for element: element is inside the array, or element is not inside the array
A sketch of such boundary cases for a search routine follows.
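The array boundary cases can be exercised against a stand-in search routine (element values are illustrative):

def search(key, items):
    # Stand-in for the routine under test: index of key in items, or None.
    for i, item in enumerate(items):
        if item == key:
            return i
    return None

# Boundary cases: empty array, single element, and the first/last
# positions of a longer sequence.
assert search(3, []) is None                 # empty array
assert search(3, [3]) == 0                   # single element, present
assert search(7, [3]) is None                # single element, absent
assert search(3, [3, 5, 100, 102]) == 0      # first element
assert search(102, [3, 5, 100, 102]) == 3    # last element
assert search(7, [3, 5, 100, 102]) is None   # element not inside array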

You can think about other data structures as well: list, stack, queue, set, and tree.

