Software Testing


The hardest task
- A program that can be written in this 50-minute class can still have 50 million possible input combinations to test.
- The labor involved can cost a fortune.
- Testing is often the first thing to slip when resources are stretched, but it is the entire organization that suffers when a defect causes problems.
- Measurements show that a defect that costs $1 to rectify when discovered during design will cost $1,000 to repair in production. This tremendous cost differential clearly points out the advantage of early error detection.




Introduction to the subject
- What is... error / bug / fault / failure / testing?
- Psychology, goals and testing theory
- Overview of the testing process: concepts and dimensions
- Testing principles
- Standard testing questions

Classifying aspects of testing
- Types
- Levels

What is...
- error: something wrong in the software itself
- fault: manifestation of an error (observable in software behavior)
- failure: something wrong in the software (deviates from the requirements)

Example:
- requirements: for input i, give output 2*(i^3) (so 6 yields 432)
- software: i = input(STDIN); i = double(i); i = power(i, 3); output(STDOUT, i);
- output (verbose):
    input: 6
    doubling input..      <- fault
    computing power..
    output: 1728          <- fault + failure
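Below is a minimal, runnable Python sketch of this example (the names spec and program are illustrative, not from the slides): the requirement asks for 2*(i^3), the faulty code doubles the input before cubing it, and the failure only becomes visible when the actual output is compared with the expected one.

    # Requirement: for input i, the output must be 2 * (i ** 3), so 6 must yield 432.

    def spec(i):
        return 2 * (i ** 3)      # the expected behavior (serves as the oracle)

    def program(i):
        i = i * 2                # fault: the input is doubled first ...
        return i ** 3            # ... so the program computes (2*i)**3 = 8 * i**3

    if __name__ == "__main__":
        i = 6
        actual, expected = program(i), spec(i)
        print(f"input: {i}  actual: {actual}  expected: {expected}")
        if actual != expected:   # the failure: behavior deviates from the requirement
            print("failure detected")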


Your ideas ???



The Psychology of Testing
- "Testing is the process of demonstrating that errors are not present"
- "Testing is the process of establishing confidence that the program does what it is intended to do"
- "Testing is the process of executing programs with the intent of finding errors"

A successful (positive) test is one that exposes an error.


Possible Goals of Testing
- Find faults (Glenford Myers, The Art of Software Testing)
- Provide confidence
  - of reliability
  - of (probable) correctness
  - of detection (and therefore absence) of particular faults

What if testing does not detect any errors?
- Either the software is high quality, or the testing process is low quality.
- Metrics are required on our testing process if we are to tell which is the right answer.

Three themes prevail:
- Quality Assessment: What percentage of defects are captured by our testing process? How many remain?
- Risk Management: What is the risk related to the remaining defects?
- Test Process Improvement: How long does our testing process take?

What is... testing: by experiment,
- find errors in software (Myers, 1979)
- establish quality of software (Hetzel, 1988)

A successful test:
- finds at least one error
- gives a quality judgment with maximum confidence and minimum effort

Testing Theory (such as it is)
- Plenty of negative results:
  - Nothing guarantees correctness
  - Statistical confidence is prohibitively expensive
  - Being systematic may not improve fault detection compared to simple random testing
- So what did you expect, decision procedures for undecidable problems?

What's been said?
- Dijkstra: Testing can show the presence of bugs, but not the absence.
- Beizer's 1st law: Every method you use to prevent or find bugs leaves a residue of subtler bugs.
- Beizer's 2nd law: Software complexity grows to the limits of our ability to manage it.
- Beizer: Testers are not better at test design than programmers are at code design.

Let's start with a global look at the testing process.

Concept map of the testing process (diagram)

Dimensions of software testing
1. What is tested?
   a. Software characteristics (design, embedded?, language, ...)
   b. Requirements (what property/quality do we want from the software)
   c. Purpose (what development phase, what kind of errors, what next step)
2. How to test?
   a. Method (adequacy criterion: how to generate how many tests)
   b. Assumptions (limitations, simplifications, heuristics)
   c. Organization (documentation, implementation, execution: platform, interactive, batch)
   d. Who tests? (programmer, user, third party)
3. How are the results evaluated?

Dimensions + concept map (diagram annotating the concept map with dimensions 1a-1c, 2a-2d, 3)

What information can we exploit?
1b. Requirements
- functional: the behavior of the system is correct; precise
- non-functional: performance, robustness (stress/volume/recovery), reliability, usability, compatibility, ...; possibly quantifiable, always vague

What other information can we exploit?
- Specifications (formal or informal): used in Oracles; for Selection, Generation, Adequacy
- Designs, ..., Code: for Selection, Generation, Adequacy
- Usage (historical or models)
- Organization experience

1c. Test purpose
Software development phases (V-model):
- requirements          <-> acceptance test
- specification         <-> system test
- detailed design       <-> integration test
- implementation code   <-> unit test

1c. Test purpose
What errors to find?
- typical typos
- functional mistakes
- unimplemented required features: "software does not do all it should do"
- implemented non-required features: "software does things it should not do"

2a. Test method dimensions
- data-based vs. structure-based: black box vs. white box
- error seeding
- typical errors
- efficiency
- ...

2b. Assumptions, limitations
- single/multiple fault: clustering/dependency of errors
- perfect repair
- testability: can the software be tested?
- time, resources, ...

2c. Test organization
- Documentation (for reuse!)
- Implementation platform
- Execution: batch? inputs, accessibility, duration, interruptions, ...
- manual/interactive or automated
- in parallel, on several systems

Testing Principles
- A test case must include the definition of the expected results.
- A programmer should not test his/her own code.
- A programming organization should not test its own programs.
- Thoroughly inspect the results of each test.
- Test cases must also be written for invalid inputs.
- Check that programs do not do unexpected things.
- Test cases should not be thrown away.
- Do not plan testing assuming that there are no errors.
- The probability of errors in a piece of code is proportional to the errors found so far in that part of the code.
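To illustrate two of these principles (expected results defined up front, and test cases for invalid inputs), here is a small unittest sketch; the function under test, double_cube, is a hypothetical example rather than something from the slides.

    import unittest

    def double_cube(i):
        """Return 2 * i**3; reject non-numeric input."""
        if not isinstance(i, (int, float)):
            raise TypeError("numeric input required")
        return 2 * i ** 3

    class DoubleCubeTests(unittest.TestCase):
        def test_valid_input_with_expected_result(self):
            # The expected result (432) is part of the test case definition.
            self.assertEqual(double_cube(6), 432)

        def test_invalid_input_is_rejected(self):
            # Invalid inputs get test cases of their own.
            with self.assertRaises(TypeError):
                double_cube("6")

    if __name__ == "__main__":
        unittest.main()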

Test Groups – Pros
- The only people to use a system heavily are experts.
- Independent testing is typically more efficient at detecting defects related to special cases, interaction between modules, and system-level usability and performance problems.
- Programmers are neither trained nor motivated to test.
- Overall, more of the defects in the product will likely be detected.
- Test groups can provide insight into the reliability of the software before it is actually shipped.

Developer's Mindset
- Product expertise
- A good developer is clever
- Emphasis on depth of expertise

Tester's Mindset
- Product expertise
- A good tester is mischievous
- Emphasis on breadth of expertise

Slogan!

Testing is in Your Head
(diagram: Technical Knowledge, Domain Knowledge, Specifications, Critical Thinking, Communication, Experience; Product, Problem, Problem Report, Coverage - "Ah! Problem!")
The important parts of testing don't take place in the computer or on your desk.

A Tester's Attitude: Cautious, Critical, Curious (and Courageous)
- Cautious: Practice admitting "I don't know." Have someone check your work.
- Critical: Proceed by conjecture and refutation. Jump to conjectures, not conclusions. Actively seek counter-evidence.
- Curious: What would happen if...? How does that work? Why did that happen?
- Good testers are hard to fool.

Standard Testing Questions
- Did this test execution succeed or fail?        -> Oracles
- How shall we select test cases?                 -> Selection, generation
- How do we know when we've tested enough?        -> Adequacy
- What do we know when we're done?                -> Assessment

Classifying aspects of testing

Basic Methods of Testing
- Black box
- White box

Black Box Testing
- View the program as a black box.
- Exhaustive testing is infeasible even for tiny programs.
- Can never guarantee correctness.
- The fundamental question is economics: "maximize the investment".
- Example: "partition test" (see the sketch below)
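A minimal sketch of the "partition test" idea, using a hypothetical grade function: instead of testing all valid inputs exhaustively, one representative is chosen from each equivalence class of the input domain.

    def grade(score):
        # Hypothetical function under test: map a 0-100 score to pass/fail.
        if not 0 <= score <= 100:
            raise ValueError("score out of range")
        return "pass" if score >= 50 else "fail"

    # One representative per equivalence class instead of every possible input.
    partitions = {
        "invalid (below range)": (-5,  ValueError),
        "valid, failing range":  (30,  "fail"),
        "valid, passing range":  (75,  "pass"),
        "invalid (above range)": (150, ValueError),
    }

    for name, (inp, expected) in partitions.items():
        try:
            result = grade(inp)
        except ValueError:
            result = ValueError
        assert result == expected, f"{name}: got {result!r}, expected {expected!r}"
    print("all partition representatives behaved as expected")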

Black Box Tests (input -> output)
- Targeted at the apparent simplicity of the software
- Makes assumptions about the implementation
- Good for testing component interactions
- Tests the interfaces and behavior

White Box Testing
- Investigate the internal structure of the program.
- Exhaustive path testing is infeasible, and does not even guarantee correctness:
  - a specification is still needed
  - missing paths
  - data-dependent paths, e.g. if (a - b < epsilon)

White Box Tests (input -> output)
- Targeted at the underlying complexity of the software
- Intimate knowledge of the implementation
- Good for testing individual functions
- Tests the implementation and design

Black and White Box Testing Methods
Black box:
- Equivalence partitioning
- Boundary-value analysis
- Cause-effect graphing
- Error guessing
- State space exploration
White box:
- Coverage/adequacy
- Data flow analysis
- Cleanness
- Correctness
- Mutation
- Slicing

Structural Coverage Testing
- (In)adequacy criteria: if significant parts of the program structure are not tested, testing is surely inadequate.
- Control flow coverage criteria:
  - Statement (node, basic block) coverage
  - Branch (edge) and condition coverage
  - Data flow (syntactic dependency) coverage
  - Various other control-flow criteria
- An attempted compromise between the impossible and the inadequate.

Testing Levels
- Unit test (procedure boundaries)
- Module test
- Function test (discrepancies with the external specification)
- System test (discrepancies with the original objectives)
  - Facility test, volume test, usability test, security test, performance test, ...
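Returning to the coverage criteria above, a small sketch (with a hypothetical function) of why statement coverage alone can be inadequate: one test executes every statement, yet only a branch-coverage test exercises the untaken branch where a fault hides.

    def scaled_inverse(a, b):
        # Hypothetical function under test.
        if a > 0:
            b = b + 1
        return 10 / b

    # Statement coverage: this single test executes every statement
    # (condition true, assignment, return), i.e. 100% statement coverage.
    assert scaled_inverse(1, 1) == 5

    # Branch coverage also requires a test where the condition is false;
    # that test exposes the division by zero on the untested branch.
    try:
        scaled_inverse(0, 0)
        print("no failure observed")
    except ZeroDivisionError:
        print("branch-coverage test revealed a fault on the untaken branch")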

Process-Based Reliability Testing
- Rather than relying only on properties of the program, we may use historical characteristics of the development process.
- Reliability growth models (Musa, Littlewood, et al.) project reliability based on experience with the current system and previous similar systems.

Fault-Based Testing
- Given a fault model: a hypothesized set of deviations from the correct program
  - typically simple syntactic mutations; relies on the coupling of simple faults with complex faults
- Coverage criterion: the test set should be adequate to reveal all (or x% of) the faults generated by the model
- Similar to hardware test coverage
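As a toy illustration of fault-based testing, here is a mutation-testing sketch (the is_adult function and its two mutants are hypothetical): each mutant applies one simple syntactic change, and the test set is considered adequate only if it "kills" every mutant, i.e. distinguishes it from the original program.

    def is_adult(age):
        return age >= 18                      # original program

    # Fault model: simple syntactic mutations of the predicate.
    mutants = {
        "operator >= changed to >":  lambda age: age > 18,
        "constant 18 changed to 17": lambda age: age >= 17,
    }

    tests = [(18, True), (17, False), (30, True)]   # candidate test set

    for name, mutant in mutants.items():
        killed = any(mutant(age) != expected for age, expected in tests)
        print(f"mutant ({name}): {'killed' if killed else 'survived'}")
    # Adequacy criterion: the test set should kill all (or x% of) the mutants.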

The Budget Coverage Criterion
- A common answer to "when is testing done?": when the money is used up, or when the deadline is reached.
- This is sometimes a rational approach!
- Implication 1: Test selection is more important than stopping criteria per se.
- Implication 2: Practical comparison of approaches must consider the cost of test case selection.

The Importance of Oracles
- Much testing research has concentrated on adequacy and ignored oracles.
- Much testing practice has relied on the "eyeball oracle": expensive (especially for regression testing), it makes large numbers of tests infeasible, and it is not dependable.
- Automated oracles are essential to cost-effective testing.

Sources of Oracles
- Specifications, if sufficiently formal (e.g. assertions in Anna, ADL, APP, Nana, a Z spec) but possibly incomplete (e.g. as in protocol conformance testing)
- Design models treated as specifications
- Prior runs (capture/replay): especially important for regression testing and GUIs; the hard problem is parameterization

What can be automated?
- Oracles: assertions, from some specifications, replay
- Selection (Generation) – "YOU": scripting, replay variations, selective regression test
- Coverage (XATAC): statement, branch, dependence
- Management (GATE)
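As one hedged reading of "assertions ... from some specifications", here is a sketch of an automated oracle (the my_sort wrapper is hypothetical): executable assertions derived from the specification check every output, so thousands of generated inputs can be evaluated without an eyeball oracle.

    import random

    def my_sort(xs):
        # Hypothetical implementation under test.
        return sorted(xs)

    def spec_assertions(inp, out):
        # Oracle derived from the specification: the output is a permutation of
        # the input and is in ascending order.
        assert sorted(inp) == sorted(out), "output is not a permutation of the input"
        assert all(out[i] <= out[i + 1] for i in range(len(out) - 1)), "not ascending"

    for _ in range(1000):    # cheap to run at scale because the oracle is automated
        data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        spec_assertions(data, my_sort(data))
    print("1000 generated inputs checked against the specification oracle")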

Design for Test: 3 Principles
- Observability: providing the right interfaces to observe the behavior of an individual unit or subsystem
- Controllability: providing interfaces to force behaviors of interest
- Partitioning: separating control and observation of one component from the details of others

Test Cycle: What You Do With a Build
1. Receive the product. Formal builds vs. informal builds. Save old builds.
2. Clean your system. Completely uninstall earlier builds.
3. Verify testability. Smoke testing. Suspend the test cycle if the product is untestable.

Test Cycle: What You Do With a Build (continued)
4. Determine what has been fixed. Bug tracking system.
5. Determine what is new or changed. Change log.
6. Perform regression testing. Not performed for an incremental cycle. Automated vs. manual. Important tests first!
7. Test fixes. Many fixes fail! Also test nearby functionality.
8. Test new or changed areas. Exploratory testing.
9. Report results. Coverage. Observations. Bug status (new, existing, reopened, closed). Assessment of quality. Assessment of testability.

General Black Box Testing Techniques
- Function testing, domain testing, stress testing, flow testing, scenario testing, user testing, regression testing, risk testing, claims testing, random testing
- For any one of these techniques, there's somebody, somewhere, who believes that is the only way to test.

Smoke Testing
- MSDN: smoke testing describes the process of validating code changes before the changes are checked into the product's source tree. Smoke tests are designed to confirm that changes in the code function as expected and do not destabilize an entire build. After code reviews, smoke testing is the most cost-effective method for identifying and fixing defects in software.
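A minimal smoke-test sketch in Python; the App class is only a stand-in for the real build under test, and the checks are deliberately few, fast, and limited to the most crucial functions.

    import sys

    class App:
        # Stand-in for the application under test (hypothetical interface).
        def login(self, user, password):
            return {"user": user} if password else None
        def create_order(self, session, item, qty):
            return {"item": item, "qty": qty, "total": 9.99 * qty}

    def smoke_tests(app):
        # A handful of fast checks that the build's crucial functions work at all.
        session = app.login("smoke-user", "secret")
        assert session is not None, "login is broken"
        order = app.create_order(session, item="widget", qty=1)
        assert order["total"] > 0, "ordering is broken"

    if __name__ == "__main__":
        try:
            smoke_tests(App())
        except AssertionError as exc:
            print(f"SMOKE TEST FAILED - suspend the test cycle: {exc}")
            sys.exit(1)
        print("smoke tests passed - the build is testable")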

. but not bothering with finer details. . ascertaining that the most crucial functions of a program work.Determine how you would know if they worked. in which the device passed the test if it didn't catch fire the first time it was turned on Software Testing 53 Function Testing Key Idea: “March through the functions” Summary: . one at a time. Good for: assessing capability rather than reliability (kh năng và tin c y) 27 . The term comes to software testing from a similarly basic type of hardware testing. .Identify each function and sub-function.A function is something the product can do.Test each function.Smoke Testing On “What is ?” Smoke testing is non-exhaustive software testing.

Domain Testing
Key idea: "Divide the data, and conquer"
Summary:
- A domain is a set of test data.
- Identify each domain of input and output.
- Identify data and platform elements that relate to them.
- Analyze limits and properties of each domain.
- Identify combinations of domains to test.
- Select a coverage strategy: e.g. boundaries, exhaustive, best representative.
Good for: all purposes.

Stress Testing
Key idea: "Overwhelm the product"
Summary:
- Select test items and functions to stress.
- Select or generate challenging data and platform configurations to test with: e.g. large or complex data structures, high loads, long test runs, many test cases.
Good for: performance, reliability, and efficiency assessment.
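A small stress-testing sketch (the dedupe function is hypothetical): challenge the code with increasingly large inputs and watch how the running time grows under load.

    import random
    import time

    def dedupe(items):
        # Hypothetical function under stress: remove duplicates, keep order.
        seen, out = set(), []
        for x in items:
            if x not in seen:
                seen.add(x)
                out.append(x)
        return out

    for size in (10_000, 100_000, 1_000_000):          # increasingly heavy loads
        data = [random.randint(0, 1000) for _ in range(size)]
        start = time.perf_counter()
        result = dedupe(data)
        elapsed = time.perf_counter() - start
        assert len(result) <= 1001                     # sanity check on the output
        print(f"{size:>9} items -> {elapsed:.3f}s")    # watch for nonlinear blow-ups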

Flow Testing
Key idea: "Do one thing after another"
Summary:
- Define test procedures or high-level cases that incorporate many events and states.
- Don't reset the system between events.
- The events may be in series, in parallel, or some combination thereof.
Good for: finding problems fast (however, bug analysis is more difficult).

Scenario Testing
Key idea: "Test to a compelling story"
Summary:
- A good scenario test is a plausible story of how someone who matters might do something that matters with the product.
- Design tests that involve meaningful and complex interactions with the product.
- Incorporate multiple elements of the product, and data that is realistically complex.
Good for: finding problems that seem important.

User Testing
Key idea: "Involve the users"
Summary:
- Identify categories of users.
- Understand favored and disfavored users.
- Find these users and have them do testing, or help you design tests.
- User testing is powerful when you involve a variety of users.
Good for: all purposes.

Regression Testing
Key idea: "Test the changes"
Summary:
- Identify what product elements changed.
- Identify what elements could have been impacted by the changes.
- Coverage strategy: e.g. recent bug fixes, past bug fixes, likely elements, all elements.
Good for: managing risks related to product enhancement.
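A hedged sketch of selecting regression tests along these lines; the mapping from product elements to saved test cases is hypothetical, and a real project would more likely lean on its test framework's own selection features.

    # Hypothetical mapping from product elements to the saved tests that cover them.
    TEST_SUITE = {
        "parser":  ["test_parse_empty", "test_parse_nested"],
        "pricing": ["test_discount_rounding", "test_tax_edge_cases"],
        "report":  ["test_pdf_layout"],
    }

    def select_regression_tests(changed, impacted):
        # Coverage strategy: changed elements first, then elements they may impact.
        ordered = []
        for element in list(changed) + list(impacted):
            for test in TEST_SUITE.get(element, []):
                if test not in ordered:
                    ordered.append(test)
        return ordered

    # After a change to the pricing module that may also affect reports:
    print(select_regression_tests(changed=["pricing"], impacted=["report"]))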

Claims Testing
Key idea: "Verify every claim"
Summary:
- Identify specifications (implicit or explicit).
- Analyze individual claims about the product.
- Verify each claim. Ask the customer to clarify vague claims.
- Expect the specification and product to be brought into alignment.
Good for: simultaneously testing the product and the specification, while refining expectations.

Risk Testing
Key idea: "Imagine a problem, then look for it"
Summary:
- What kinds of problems could the product have?
- Make a list of interesting problems and design tests specifically to reveal them.
- How would you detect them if they were there?
- It may help to consult experts, design documentation, past bug reports, or apply risk heuristics.
Good for: making the best use of testing resources; leveraging experience.

Random Testing
Key idea: "Run a million different tests"
Summary:
- Look for an opportunity to automatically generate thousands of slightly different tests.
- Write a program to generate, execute, and evaluate all the tests.
- Create an automated, high-speed evaluation strategy.
Good for: assessing reliability across input and time.
(For any one of these techniques, there's somebody, somewhere, who believes that is the only way to test.)

General White Box Testing Techniques
- Coverage testing: branch, logic, statement, data flow, ...
- Unit testing: DotNet Unit, PhP Unit, Cpp Unit, JUnit, Ruby Unit, ...
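Returning to the random-testing recipe above, a minimal sketch: a generator produces thousands of slightly different inputs, and a trusted reference (Python's statistics.median) provides the automated, high-speed evaluation strategy for a hypothetical fast_median under test.

    import random
    import statistics

    def fast_median(xs):
        # Hypothetical optimized implementation under test.
        ys = sorted(xs)
        return ys[len(ys) // 2]        # subtly wrong for even-length inputs

    def evaluate(xs):
        # High-speed oracle: compare against a trusted reference implementation.
        return fast_median(xs) == statistics.median(xs)

    random.seed(7)
    failures = 0
    for _ in range(100_000):           # many slightly different generated tests
        data = [random.randint(-100, 100) for _ in range(random.randint(1, 9))]
        if not evaluate(data):
            failures += 1
    print(f"{failures} failing inputs found out of 100000 generated tests")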
