
Software Testing Strategies

based on
Chapter 13 - Software Engineering: A Practitioner’s Approach, 6/e
copyright © 1996, 2001, 2005
R.S. Pressman & Associates, Inc.
For University Use Only
May be reproduced ONLY for student use at the university level
when used in conjunction with Software Engineering: A Practitioner's Approach.
Any other reproduction or use is expressly prohibited.

Software Testing
Testing is the process of exercising a program with the
specific intent of finding errors prior to delivery
to the end user.

Testing uncovers errors, demonstrates requirements conformance,
exercises performance, and provides an indication of quality.
Who Tests the Software?

The developer understands the system but will test "gently"
and is driven by "delivery."
The independent tester must learn about the system, will
attempt to break it, and is driven by quality.
Levels of Testing

• Unit testing
• Integration testing
• Validation testing: focus is on software requirements
• System testing: focus is on system integration
• Alpha/Beta testing: focus is on customer usage
• Recovery testing: forces the software to fail in a variety of ways and verifies that recovery is properly performed
• Security testing: verifies that protection mechanisms built into a system will, in fact, protect it from improper penetration
• Stress testing: executes a system in a manner that demands resources in abnormal quantity, frequency, or volume
• Performance testing: tests the run-time performance of software within the context of an integrated system
Unit Testing

The software engineer designs test cases for the module to be
tested, examining its interface, local data structures, boundary
conditions, independent paths, and error-handling paths; the
results are compared against expected behavior.
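As a sketch of how that checklist translates into tests, consider a hypothetical `clamp()` utility as the module under test (the function and names are illustrative, not from the slides); each test targets one checklist item.

```python
# Hypothetical module under test (illustrative, not from the slides).
def clamp(value, low, high):
    """Clamp value into [low, high]; reject an invalid range."""
    if low > high:                       # error-handling path
        raise ValueError("low must not exceed high")
    if value < low:
        return low
    if value > high:
        return high
    return value

# pytest-style unit tests, one per checklist item:
def test_typical_value():                # interface + a typical independent path
    assert clamp(5, 0, 10) == 5

def test_boundary_conditions():          # values exactly on the boundaries
    assert clamp(0, 0, 10) == 0
    assert clamp(10, 0, 10) == 10

def test_out_of_range_paths():           # the remaining independent paths
    assert clamp(-1, 0, 10) == 0
    assert clamp(11, 0, 10) == 10

def test_error_handling_path():          # invalid range must raise
    try:
        clamp(5, 10, 0)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```

Run with `pytest`; if the module kept internal state, its local data structures would be checked the same way after each call.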
Integration Testing Strategies
Options:
• the "big bang" approach
• an incremental construction strategy (top-down, bottom-up, or sandwich integration)
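A minimal sketch of top-down incremental integration (all class and method names here are hypothetical): the top-level module is integrated and tested first, with a stub standing in for a lower-level module that is not yet ready.

```python
# Top-down incremental integration sketch (hypothetical modules).
class PaymentGatewayStub:
    """Stub replacing the unintegrated payment module; returns canned data."""
    def charge(self, amount):
        return {"status": "ok", "amount": amount}

class OrderService:
    """Top-level module under integration test."""
    def __init__(self, gateway):
        self.gateway = gateway               # lower-level dependency, injected

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return result["status"] == "ok"

# Integration test: exercise the top module against the stub; as real modules
# become available, each stub is replaced and the tests are rerun.
service = OrderService(PaymentGatewayStub())
assert service.place_order(42.0)
```

Bottom-up integration inverts this picture: the lowest-level modules are tested first, using drivers instead of stubs.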
OOT Strategy

• class testing is the equivalent of unit testing (if there is no nesting of classes):
  • operations within the class are tested
  • the state behavior of the class is examined
• integration is applied at three different strategies/levels of abstraction:
  • thread-based testing: integrates the set of classes required to respond to one input or event
  • use-based testing: integrates the set of classes required to respond to one use case
  • cluster testing: integrates the set of classes required to demonstrate one collaboration
Debugging: A Diagnostic Process

Test cases produce results; debugging relates the symptoms to
suspected causes, which are probed with new test cases and
regression tests until the causes are identified and corrections
can be made.
Software Testing Techniques

based on Chapter 14 - Software Engineering: A Practitioner’s Approach, 6/e
What is a “Good” Test?

• a high probability of finding an error
• not redundant
• neither too simple nor too complex

"Bugs lurk in corners and congregate at boundaries ..." (Boris Beizer)

OBJECTIVE: to uncover errors
CRITERIA: in a complete manner
CONSTRAINT: with a minimum of effort and time
White-Box or Black-Box?

Exhaustive Testing

Consider a small program whose loop may execute up to 20 times.
There are approx. 10^14 possible paths! If we execute one test
per millisecond, it would take 3,170 years to test this program!!

Where does 10^14 come from?
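The arithmetic can be checked in a few lines. The 10^14 figure is commonly explained by assuming the loop body contains about 5 distinct paths, so that up to 20 iterations give roughly 5^20 path combinations (the flow graph itself is omitted from the slide, so the 5 is an assumption):

```python
# Rough check of the slide's numbers (paths_per_iteration is an assumption
# about the flow graph that does not appear on the slide).
paths_per_iteration, max_iterations = 5, 20
paths = paths_per_iteration ** max_iterations
print(paths)                              # 95367431640625, i.e. ~10**14

seconds = 10 ** 14 / 1000                 # one test per millisecond
years = seconds / (365.25 * 24 * 3600)
print(round(years))                       # ~3,169 years, the slide's "3,170"
```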
Level of Abstraction of RE in the V Model

Each specification level on the left arm of the V is verified by a test level on the right arm:

system requirements ↔ system integration
software requirements ↔ acceptance test
preliminary design ↔ software integration
detailed design ↔ component test
code & debug ↔ unit test

(abstraction decreases down the V; time runs left to right)
[Chung]
Software Testing
White-Box Testing vs. Black-Box Testing

White-box testing derives test cases from the program structure:
"... our goal is to ensure that all statements and conditions have
been executed at least once ..."

Black-box testing derives test cases from the requirements: input
events are applied and the output is checked, with no knowledge of
the internals.
White-Box Testing
Basis Path Testing

First, we compute the cyclomatic complexity:

    number of simple decisions + 1
or
    number of enclosed areas + 1

In this case, V(G) = 4
White-Box Testing
Basis Path Testing

Next, we derive the independent paths through the flow graph
(nodes numbered 1-8). Since V(G) = 4, there are four paths:

Path 1: 1,2,4,7,8
Path 2: 1,2,3,5,7,8
Path 3: 1,2,3,6,7,8
Path 4: 1,2,4,7,2,4,...,7,8

Finally, we derive test cases to exercise these paths.

A number of industry studies have indicated that the higher V(G),
the higher the probability of errors.
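A small sketch of the technique: a hypothetical function whose flow graph loosely mirrors the one on the slide. It has three simple decisions (loop, if, elif), so V(G) = 3 + 1 = 4, and each basis path gets one test case.

```python
# Hypothetical function with V(G) = 4 (three simple decisions + 1); the node
# numbers in comments are an informal mapping onto the slide's flow graph.
def classify(values, limit):
    small = large = skipped = 0
    for v in values:             # decision 1: loop taken again or exited
        if v is None:            # decision 2 (node 2 -> 4)
            skipped += 1
        elif v < limit:          # decision 3 (node 3 -> 5 or 6)
            small += 1
        else:
            large += 1
    return small, large, skipped

# One test case per basis path:
assert classify([], 10) == (0, 0, 0)              # loop never entered
assert classify([None], 10) == (0, 0, 1)          # path through node 4
assert classify([3], 10) == (1, 0, 0)             # path through node 5
assert classify([30], 10) == (0, 1, 0)            # path through node 6
assert classify([None, 3, 30], 10) == (1, 1, 1)   # loop taken repeatedly
```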
White-Box Testing
Loop Testing

Four classes of loops must be exercised: simple loops, nested
loops, concatenated loops, and unstructured loops.

Why is loop testing important?
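For a simple loop, the standard heuristic is to run it 0, 1, 2, m (a typical count), and n-1, n, n+1 times, where n is the maximum number of allowable passes. A sketch, using a hypothetical `moving_sum()` as the loop under test:

```python
# moving_sum() is a hypothetical loop under test, not from the slides.
def moving_sum(values, n):
    """Sum at most the first n values."""
    total = 0
    for v in values[:n]:                 # the loop under test
        total += v
    return total

n = 10
for passes in (0, 1, 2, 5, n - 1, n, n + 1):
    data = [1] * passes
    # Even when n+1 items are supplied, the loop must execute only n times.
    assert moving_sum(data, n) == min(passes, n)
```

The n+1 case is the interesting one: many off-by-one defects only surface when the loop is pushed past its intended bound.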
Black-Box Testing
Equivalence Partitioning &
Boundary Value Analysis

• If x = 5 then …
• If x > -5 and x < 5 then …

What would be the equivalence classes?
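For "if x = 5" the classes are simply x == 5 and x != 5. For the second guard a sketch of the analysis:

```python
# For "x > -5 and x < 5" there are three equivalence classes:
#   x <= -5 (invalid), -5 < x < 5 (valid), x >= 5 (invalid);
# boundary value analysis then tests at and around -5 and 5.
def in_range(x):
    return -5 < x < 5

assert in_range(0)                               # representative of the valid class
assert not in_range(-5) and not in_range(5)      # exactly on the boundaries
assert in_range(-4) and in_range(4)              # just inside
assert not in_range(-6) and not in_range(6)      # just outside
```

Per Beizer's remark above, the on-boundary and just-inside/just-outside values are where defects congregate.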


Black-Box Testing
Comparison Testing

• Used only in situations in which the reliability of software is absolutely critical (e.g., human-rated systems)
• Separate software engineering teams develop independent versions of an application using the same specification
• Each version can be tested with the same test data to ensure that all provide identical output
• Then all versions are executed in parallel with real-time comparison of results to ensure consistency
OOT—Test Case Design
Berard [BER93] proposes the following approach:

1. Each test case should be uniquely identified and should be explicitly associated with the class to be tested.
2. A list of testing steps should be developed for each test and should contain [BER94]:
   a. a list of specified states for the object that is to be tested
   b. a list of messages and operations that will be exercised as a consequence of the test (how can this be done?)
   c. a list of exceptions that may occur as the object is tested
   d. a list of external conditions (i.e., changes in the environment external to the software that must exist in order to properly conduct the test) {people, machine, time of operation, etc.}
OOT Methods: Behavior Testing

The tests to be designed should achieve all-state coverage [KIR94].
That is, the operation sequences should cause the Account class to
make transitions through all allowable states.

Figure 14.3 (adapted from [KIR94]) gives the state diagram for the
Account class: an empty acct is set up via open acct into a setup
acct; an initial deposit moves it to a working acct, where deposit,
withdraw, balance, credit, and accntInfo apply; a final withdrawal
moves it to a nonworking acct; and close acct leaves a dead acct.

Is the set of initial input data enough?
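A minimal sketch of the Account state machine and an all-state-coverage test sequence (state and operation names follow the figure; the implementation details are assumed for illustration):

```python
# Minimal Account state machine reconstructed from Figure 14.3; guards and
# method bodies are assumptions made for this sketch.
class Account:
    def __init__(self):
        self.state = "empty acct"

    def open_acct(self):
        assert self.state == "empty acct"
        self.state = "setup acct"

    def deposit(self, amount):
        assert self.state in ("setup acct", "working acct")
        self.state = "working acct"      # the initial deposit activates the account

    def withdraw(self, amount, final=False):
        assert self.state == "working acct"
        if final:                        # the final withdrawal deactivates it
            self.state = "nonworking acct"

    def close_acct(self):
        assert self.state == "nonworking acct"
        self.state = "dead acct"

# One operation sequence that drives the class through every allowable state:
acct = Account()
visited = [acct.state]
for step in (acct.open_acct, lambda: acct.deposit(100),
             lambda: acct.withdraw(100, final=True), acct.close_acct):
    step()
    visited.append(acct.state)
assert visited == ["empty acct", "setup acct", "working acct",
                   "nonworking acct", "dead acct"]
```

Note that this single sequence covers all states but not all transitions (e.g., repeated deposits within working acct), which is why the slide asks whether the initial input data is enough.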


Omitted Slides

Testability

• Operability: it operates cleanly
• Observability: the results of each test case are readily observed
• Controllability: the degree to which testing can be automated and optimized
• Decomposability: testing can be targeted
• Simplicity: reduce complex architecture and logic to simplify tests
• Stability: few changes are requested during testing
• Understandability: of the design
Strategic Issues

• Understand the users of the software and develop a profile for each user category.
• Develop a testing plan that emphasizes “rapid cycle testing.”
• Use effective formal technical reviews as a filter prior to testing.
• Conduct formal technical reviews to assess the test strategy and test cases themselves.
NFRs:
Reliability [Chung, RE Lecture Notes]

Counting Bugs

• Sometimes reliability requirements take the form:
  "The software shall have no more than X bugs/1K LOC"
  But how do we measure bugs at delivery time?

• Bebugging process, based on a Monte Carlo technique for statistical analysis of random events:
  1. Before testing, a known number of bugs (seeded bugs) are secretly inserted.
  2. Estimate the number of bugs in the system.
  3. Remove (both known and new) bugs.

  # of detected seeded bugs / # of seeded bugs = # of detected bugs / # of bugs in the system
  # of bugs in the system = # of seeded bugs × # of detected bugs / # of detected seeded bugs

  Example: secretly seed 10 bugs; an independent test team detects 120 bugs (6 of them seeded).
  # of bugs in the system = 10 × 120 / 6 = 200
  # of bugs in the system after removal = 200 - 120 - 4 = 76
  (the 4 undetected seeded bugs are removed too, since their locations are known)

• But deadly bugs vs. insignificant ones; not all bugs are equally detectable. Suggestion [Musa87]:
  "No more than X bugs/1K LOC may be detected during testing"
  "No more than X bugs/1K LOC may remain after delivery, as calculated by the Monte Carlo seeding technique"
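The slide's proportion and its worked example translate directly into a few lines:

```python
# Bebugging estimate following the slide's proportion:
#   detected seeded / seeded = detected / total bugs in the system
def bebugging_estimate(seeded, detected, detected_seeded):
    return seeded * detected // detected_seeded

seeded, detected, detected_seeded = 10, 120, 6
total = bebugging_estimate(seeded, detected, detected_seeded)
print(total)                                  # 200

# After removal: the 120 detected bugs go, and so do the 10 - 6 = 4 seeded
# bugs that escaped detection (their locations are known).
remaining = total - detected - (seeded - detected_seeded)
print(remaining)                              # 76
```

The estimate rests on the assumption that seeded and real bugs are equally detectable, which the slide itself flags as questionable.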
White-Box Testing
Cyclomatic Complexity

A number of industry studies have indicated that the higher V(G),
the higher the probability of errors. Plotting the number of
modules against V(G) shows that modules with high V(G) values are
more error prone.
