
Software Testing Strategies

1
Software Testing
 Strategy
 Integration of software test case design methods into
a well-planned series of steps
 Results in the successful construction of software
 Provides a road map that describes
 The steps to be conducted as part of testing
 How much effort, time and resources will be required
 Must incorporate test planning, test case design, test
execution, and resultant data collection and
evaluation

2
Software Testing
 Strategy
 Should be flexible enough to promote a customized
testing approach
 Must be rigid enough to promote reasonable planning
and management tracking as the project progresses
 Different test methods are beginning to cluster
themselves into distinct approaches and philosophies

3
Software Testing
 What is it?
 To uncover errors
 How do we conduct the test?
 Do we develop a formal plan?
 Should we test the entire program as a whole or run tests on a
small part of it?
 Should we rerun tests we have already conducted as we add
new components to a large system?
 When should we involve the customer?
 These questions must be answered when we develop a
software testing strategy.

4
Software Testing
 Who does it?
 The strategy is developed by the project manager,
software engineers, and testing specialists.
 Why is it important?
 Testing often accounts for more project effort than
any other software engineering activity
 If conducted haphazardly
 time is wasted
 Unnecessary effort is expended
 Errors sneak through undetected

5
Software Testing
 What are the steps?
 Begins in the small and progresses to the large
 Focus on a single component or a small group of related
components
 After components are tested they may be integrated until the
complete system is constructed
 As errors are uncovered, they must be diagnosed and
corrected using a process that is called debugging

6
Software Testing
 What is the work product?
 A test specification document
 A plan that describes the overall testing strategy
 A procedure that defines specific testing steps and the tests
that will be conducted
 How do I ensure that I’ve done it right?
 By reviewing the test specification prior to testing
 Assess the completeness of test cases and testing tasks

7
Software Testing
 A Strategic Approach
 Testing is a set of activities that can be planned in
advance and conducted systematically
 Conduct effective formal technical reviews
 Begin at the component level and work “outward” toward
the integration of the entire computer-based system
 Different testing techniques at different points in time
 Usually conducted by the software developer and, for large
projects, by an independent test group
 Testing and debugging are different activities, but
debugging must be accommodated in any testing strategy

8
Software Testing
 Verification and Validation
 Software testing is one element of a broader topic that is often
referred to as verification and validation (V&V)
 Verification refers to the set of activities that ensures that
software correctly implements a specific function
 Are we building the product right?
 Validation refers to a different set of activities that ensures that
the software that has been built is traceable to customer
requirements
 Are we building the right product?

9
Software Testing
 Verification and Validation
 Encompass a wide array of Software Quality
Assurance (SQA) activities including
 Formal technical reviews
 Quality and configuration audits
 Performance monitoring
 Simulation
 Feasibility study
 Documentation review
 Database review
 Algorithm analysis
 Development, usability, installation testing

10
Software Testing
 Organizing for Software Testing
 Psychological point of view
 The software engineer analyzes, models, and then creates a
computer program and its documentation
 From the software engineer's perspective, testing is destructive
because it tries to break the thing the engineer has built
 There are often a number of misconceptions
 The developer of the software should do no testing
 Software should be tossed over the wall to strangers who will
test it mercilessly
 Testers get involved with the project only when the testing
steps are about to begin

11
Software Testing
 Organizing for Software Testing
 The role of the independent test group
 Remove the inherent problems associated with letting the
builder test the thing that has been built
 Removes the conflict of interest
 The software engineer, however, does not simply turn the program
over to the ITG and walk away; the developer and the ITG must
work closely throughout the project
 The developer must be available to correct uncovered errors

12
Software Testing
 Strategy for conventional software architecture
 Unit Testing
 Concentrates on each unit (i.e. component) of the software as
implemented in the source code
 Integration Testing
 Focus is on design and the construction of the software
architecture
 Validation Testing
 Requirements established as part of requirements analysis are
validated against the software that has been constructed
 System Testing
 Software and other elements are tested as a whole
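As a concrete illustration of the first of these levels, a unit test exercises one component in isolation. A minimal Python sketch, where the `parse_amount` component and its behaviors are hypothetical:

```python
import unittest

def parse_amount(text):
    """Hypothetical component under test: convert "$12.34" to integer cents."""
    dollars, _, cents = text.strip().lstrip("$").partition(".")
    return int(dollars) * 100 + (int(cents) if cents else 0)

class ParseAmountUnitTest(unittest.TestCase):
    """Unit tests focus on the smallest unit, one behavior per test."""
    def test_whole_dollars(self):
        self.assertEqual(parse_amount("$12"), 1200)

    def test_dollars_and_cents(self):
        self.assertEqual(parse_amount("$12.34"), 1234)

    def test_invalid_input_raises(self):
        with self.assertRaises(ValueError):
            parse_amount("twelve")

# Run just this component's tests -- "testing in the small".
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ParseAmountUnitTest))
```

Integration, validation, and system testing then build outward from components verified this way.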

13
Software Testing
 Strategy for Object Oriented Architecture
 Focus on “testing in the small” and work outward
toward “testing in the large”
 The focus of testing in the small changes from an individual
module to a class that encompasses attributes and operations and
implies communication and collaboration
 A series of regression tests are run to uncover errors due to
communication and collaboration between classes

14
Software Testing
 Strategic Issues
 Specify product requirements in a quantifiable manner long
before testing
 A good strategy not only looks for errors but also assesses other
quality characteristics such as
 Portability
 Maintainability
 Usability
 State testing objectives explicitly
 The specific objectives of testing should be stated in measurable
terms
 Test effectiveness
 The cost to find and fix defects
 Test coverage
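One common way to state a testing objective in measurable terms is defect removal efficiency (DRE): the fraction of all known defects that testing caught before the software reached the customer. A minimal sketch:

```python
def defect_removal_efficiency(found_in_testing, found_after_release):
    """DRE: a measurable testing objective -- the fraction of all known
    defects that were removed before release."""
    total = found_in_testing + found_after_release
    # With no defects recorded at all, treat removal as perfect.
    return found_in_testing / total if total else 1.0

# e.g. 90 defects found during testing, 10 reported by customers -> 0.9
dre = defect_removal_efficiency(90, 10)
```

Tracking DRE per release gives the team a concrete target ("reach 0.95") rather than a vague goal like "test thoroughly".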

15
Software Testing
 Strategic Issues
 Understand the user of the software and develop a
profile for each
 Use-cases that describe the interaction scenario for each class
of user can reduce overall testing effort by focusing testing
on actual use of the product
 Build robust software that is designed to test itself
 Software should be designed in a manner that uses anti-
bugging techniques, so that it can diagnose certain classes of
errors on its own. The design should also accommodate
automated testing and regression testing
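A minimal Python sketch of the anti-bugging idea, using a hypothetical `transfer` operation: the code asserts its own pre- and post-conditions, so certain classes of errors are diagnosed by the software itself instead of corrupting data silently.

```python
def transfer(accounts, src, dst, amount):
    """Move money between accounts. The assertions are the 'anti-bugging'
    part: the code checks its own pre- and post-conditions."""
    assert amount > 0, "transfer amount must be positive"
    assert src in accounts and dst in accounts, "unknown account"
    total_before = sum(accounts.values())

    accounts[src] -= amount
    accounts[dst] += amount

    # Post-condition: a transfer must conserve the total balance.
    assert sum(accounts.values()) == total_before, "balance not conserved"
    return accounts
```

Self-checking code of this kind also makes automated testing easier, because a violated invariant fails loudly at the point of the error.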

16
Software Testing
 Strategic Issues
 Use effective formal technical reviews as a filter prior
to testing
 Reviews can reduce the amount of testing effort that is
required to produce high-quality software
 Develop a continuous improvement approach
 The testing strategy should be measured. The metrics
collected during testing should be used as part of a statistical
process control approach for testing

17
Software Testing
 Test Strategies for Conventional Software
 There are many strategies for testing software
 A software team could wait until the system is fully
constructed and then conduct tests on the overall system
 At the other extreme, a software engineer could conduct tests
on a daily basis, whenever any part of the system is constructed
 Most testing strategies fall between these two extremes
 Begin with the testing of individual program units
 Move on to tests designed to facilitate the integration of the
units

18
Software Testing
 Test Strategies for Conventional Software
 Unit Testing
 Focuses verification effort on the smallest unit of software
design – the software component or module
 Integration Testing
 “If they all work individually, why do you doubt that they’ll
work when we put them together?”
 The problem is “putting them together” – interfacing
 Data may be lost across an interface
 One module can have an inadvertent, adverse effect on another
 Sub-functions, when combined, may not produce the desired
major function
 Global data structures can present problems
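The interfacing problems above are what integration tests target. A minimal Python sketch, where `report_total` and the pricing service are hypothetical: a mock stands in for the separately built pricing module so the interface itself (call count, arguments, handling of returned data) can be checked.

```python
from unittest import mock

def report_total(order_ids, fetch_price):
    """Integration point: combines the order module's IDs with a
    separately built pricing service across a module interface."""
    return sum(fetch_price(oid) for oid in order_ids)

# The mock replaces the real pricing module; the test checks that data
# is neither lost nor mangled as it crosses the interface.
fetch_price = mock.Mock(side_effect=[100, 250])
total = report_total(["A1", "B2"], fetch_price)
fetch_price.assert_has_calls([mock.call("A1"), mock.call("B2")])
```

Each module may pass its own unit tests and still fail here, e.g. if one side sends IDs the other never prices.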

19
Software Testing
 Test Strategies for Conventional Software
 Regression Testing
 Each time a new module is added as part of integration
testing, the software changes
 New data flow paths are established
 New I/O and control logic is invoked
 Regression testing is the re-execution of some subset of tests
that have already been conducted to ensure that changes
have not propagated unintended side effects
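One simple way to choose that subset is impact-based selection: re-run only the tests that touch a changed module. A minimal sketch with a hypothetical suite:

```python
def select_regression_tests(test_map, changed_modules):
    """Pick the subset of previously passing tests to re-execute.
    test_map maps each test name to the set of modules it touches."""
    return sorted(name for name, touched in test_map.items()
                  if touched & changed_modules)   # only impacted tests

# Hypothetical suite: integrating a new "db" module only re-runs the
# tests whose behavior the change could affect.
suite = {"test_login":  {"auth", "db"},
         "test_report": {"reports", "db"},
         "test_search": {"search"}}
impacted = select_regression_tests(suite, {"db"})
```

In practice the module map comes from coverage data or the build system; here it is supplied by hand for illustration.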
 Smoke testing
 An integration testing approach

20
Software Testing
 Test Strategies for Conventional Software
 Smoke testing
 An integration testing approach that is commonly used when
“shrink-wrapped” software products are being developed
 Software components that have been translated into code are
integrated into a “build” – data files, libraries, reusable
modules, and engineered components that are required to
implement one or more product increments
 The intent of testing here is to uncover show stoppers –
errors that have the highest likelihood of throwing the
software project behind schedule
 The build is integrated with other builds and the entire
product is smoke tested daily
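A daily smoke test can be as simple as running a quick health probe against every component in the build and reporting the show stoppers. A minimal sketch with hypothetical components:

```python
def smoke_test(build):
    """Exercise each component of today's build just enough to find
    show stoppers. build maps a component name to a quick health probe
    (a callable returning True when the component basically works)."""
    failures = []
    for name, probe in sorted(build.items()):
        try:
            ok = probe()
        except Exception:
            ok = False              # a crash is the classic show stopper
        if not ok:
            failures.append(name)
    return failures                 # empty list => the build is usable

# Hypothetical daily build with one broken component.
build = {"database": lambda: True,
         "reports":  lambda: 1 / 0,   # crashes -> show stopper
         "ui":       lambda: True}
broken = smoke_test(build)
```

The point is breadth over depth: every build gets a shallow end-to-end exercise daily, so schedule-threatening errors surface within a day of being introduced.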

21
Software Testing
 Test Strategies for Object-Oriented Software
 Unit Testing
 An encapsulated class is the focus of unit testing
 Operations within the class are, however, the smallest testable units
 Integration Testing
 Thread-based testing
 Integrates the set of classes required to respond to one input or
event for the system
 Use-based testing
 Begins the construction of the system by testing independent
classes – classes that use very few, if any, server classes
 Then tests the dependent classes that use the independent classes

22
Software Testing
 Test Strategies for Object-Oriented Software
 Integration Testing
 Cluster Testing
 A cluster of collaborating classes (determined by examining the
CRC model) is exercised to uncover errors in the collaborations

23
Software Testing
 Validation Testing
 At this level the distinction between conventional and object-
oriented software disappears
 Testing focuses on user visible actions and user recognizable output
from the system
 Can be defined in many ways
 A simple definition is that validation succeeds when software
functions in a manner that can be reasonably expected by the
customer.
 Alpha Testing
 Conducted at the developer site by end-users
 Beta Testing
 Conducted at end user sites – the developer is generally not present

24
Software Testing
 System Testing
 Ultimately, software is incorporated with other system elements
 Hardware, people, information
 A series of system integration and validation tests are conducted
 A classic system testing problem – finger pointing
 Software engineers should anticipate potential interfacing problems
 Design error-handling paths that test all information coming from
other elements of the system
 Conduct a series of tests that simulate bad data
 Record the results of tests as evidence
 Participate in planning and design of system tests to ensure that
the software is adequately tested

25
Software Testing
 System Testing
 Recovery Testing
 A system test that forces the software to fail in a variety of
ways and verifies that recovery is properly performed
 If recovery is automatic, re-initialization, checkpointing
mechanisms, data recovery, and restart are each evaluated for
correctness
 If recovery requires human intervention, the mean time to
repair (MTTR) is evaluated
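A minimal Python sketch of the checkpointing and restart mechanisms such a test would evaluate (the job and its state are hypothetical):

```python
import json
import os
import tempfile

def checkpoint(state, path):
    """Persist progress so a restart resumes instead of starting over."""
    with open(path, "w") as f:
        json.dump(state, f)

def recover(path):
    """Re-initialization: reload the last checkpoint if one exists."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"processed": 0}                      # cold start

def process_items(items, path):
    state = recover(path)                        # resume where we stopped
    for i in range(state["processed"], len(items)):
        # ... real work on items[i] would happen here ...
        state["processed"] = i + 1
        checkpoint(state, path)                  # a crash now loses <= 1 item
    return state
```

A recovery test would kill the process mid-run, restart it, and verify that `recover` resumes from the correct item with no data lost or repeated.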

26
Software Testing
 Security Testing
 It verifies that protection mechanisms built into a
system will, in fact, protect it from improper penetration
 Stress Testing
 It executes a system in a manner that demands
resources in abnormal quantity, frequency, or volume
 Performance Testing
 Is often coupled with stress testing and usually requires
both hardware and software instrumentation
 Measures resource utilization
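A minimal sketch of software instrumentation for these tests: drive a workload at abnormally growing volumes and record elapsed wall-clock time per batch (the workload here is a stand-in).

```python
import time

def performance_probe(workload, volumes):
    """Instrumentation harness: run the workload at each volume and
    record (volume, elapsed seconds) so scaling behavior can be seen."""
    timings = []
    for n in volumes:
        start = time.perf_counter()
        workload(n)
        timings.append((n, time.perf_counter() - start))
    return timings

# Stress the workload with abnormally growing volume; a sharp jump in
# elapsed time flags a resource problem before customers find it.
timings = performance_probe(lambda n: sorted(range(n, 0, -1)),
                            [1_000, 10_000, 100_000])
```

Real performance tests would also sample memory and I/O, but the shape is the same: instrument, drive load, record, compare against stated objectives.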

27
