Strategic Approach to Software Testing

A strategy for software testing integrates software test-case design methods into a well-planned series of steps that result in the successful construction of software. Testing is a set of activities that can be planned in advance and conducted systematically. For this reason, a template for software testing, i.e. a set of steps into which we can place specific test-case design techniques and testing methods, is designed and is called a strategy.
General characteristics of a strategy:
- Testing begins at the component level and works outward toward the integration of the entire computer-based system.
- Different testing techniques are appropriate at different points in time.
- The developer of the software conducts testing and may be assisted by independent test groups for large projects.
- The role of the independent tester is to remove the conflict of interest inherent when the builder is testing his or her own product.
- Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
- Make a distinction between verification (are we building the product right?) and validation (are we building the right product?).
Verification and Validation
- Verification refers to the set of activities that ensure that software correctly implements a specific function.
- Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
- Verification and validation include a set of software quality assurance activities: formal technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility study, documentation review, database review, algorithm analysis, development testing, qualification testing, installation testing, etc.
Organizing for Software Testing
- Before starting testing, it must be specified who will do the testing.
- The software developer is responsible for testing the individual units of the program. The developer will confirm that the program works for the function for which it was designed.
- The developer also conducts integration testing.
- Only after the software architecture is complete is the independent test group (ITG) involved. The role of the ITG is to remove the inherent problems associated with letting the builder test the thing that has been built.
A Software Testing Strategy (Process)
A strategy for software testing can be viewed as a spiral model:
1. Unit testing begins at the vortex of the spiral and concentrates on each unit of the software as implemented in source code.
2. Testing progresses by moving outward along the spiral to integration testing, where the focus is on the design and construction of the software architecture.
3. Validation testing begins on the next turn of the spiral, where requirements established as part of software requirements analysis are validated against the software that has been constructed.
4. Finally, system testing, where the software and other system elements are tested as a whole.
Testing is actually a series of four steps that are implemented sequentially:
- Unit testing: initially, tests focus on each component individually, ensuring that it functions properly as a unit. Unit testing uses white-box testing techniques, exercising specific paths in a module's control structure to ensure complete coverage and maximum error detection.
- Integration testing: after unit testing, components must be assembled or integrated to form the complete software package. Integration testing addresses the issues associated with verification and program construction. Black-box test-case design techniques are used during integration testing.
- Validation testing: after the software has been integrated, a set of high-order tests is conducted, and the validation criteria must be tested. Validation testing provides final assurance that the software meets all functional, behavioural, and performance requirements. Black-box techniques are used exhaustively during validation.
Criteria for Completion of Testing
- Once testing has started, it is important to know when to stop testing.
- Using statistical modelling and software reliability theory, models of software failures as a function of execution time can be developed.
- A version of the failure model, called the logarithmic Poisson execution-time model, takes the form:

  f(t) = (1/p) ln[l0 p t + 1]

  where
  f(t) = cumulative number of failures that are expected to occur once the software has been tested for execution time t,
  l0 = the initial software failure intensity (failures per time unit) at the beginning of testing,
  p = the exponential reduction in failure intensity as errors are uncovered and repairs are made.
- The instantaneous failure intensity l(t) can be derived by taking the derivative of f(t):

  l(t) = l0 / (l0 p t + 1)

- Using the above relationship, a graph of failure intensity against execution time t can be drawn, and the tester can predict the drop-off of errors as testing progresses. If the actual data gathered during testing and the logarithmic Poisson execution-time model are reasonably close to one another over a number of data points, the model can be used to predict the total testing time required to achieve an acceptably low failure intensity.
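The model above can be sketched in a few lines of code. The parameter values below (l0 = 10 failures per hour, p = 0.05) are made up purely for illustration; in practice they are fitted to failure data collected during testing.

```python
import math

def expected_failures(t, l0, p):
    """Cumulative expected failures f(t) = (1/p) * ln(l0*p*t + 1)."""
    return (1.0 / p) * math.log(l0 * p * t + 1.0)

def failure_intensity(t, l0, p):
    """Instantaneous failure intensity l(t) = l0 / (l0*p*t + 1),
    the derivative of f(t)."""
    return l0 / (l0 * p * t + 1.0)

def time_to_reach(target, l0, p):
    """Execution time needed to drive l(t) down to `target`,
    solved from target = l0 / (l0*p*t + 1)."""
    return (l0 / target - 1.0) / (l0 * p)

# Illustrative (made-up) parameters:
l0, p = 10.0, 0.05
t = time_to_reach(0.5, l0, p)  # hours of testing to reach 0.5 failures/hour
```

With these assumed parameters, `time_to_reach` answers the "when to stop testing" question directly: it estimates the execution time at which the failure intensity falls to an acceptable level.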
Strategic Testing Issues
Guidelines for successful testing:
- Specify product requirements in a quantifiable manner before testing starts: testing is also about portability, maintainability, and usability. Therefore, if the product requirements are specified in a measurable way, the testing results are unambiguous.
- Specify testing objectives explicitly: the specific objectives of testing should be stated in measurable terms in the test plan, e.g. test effectiveness, test coverage, mean time to failure, the cost to find and fix defects, etc.
- Identify the user classes of the software and develop a profile for each: by identifying the users of the product, the testing effort can be reduced by focusing on actual use of the product.
- Develop a test plan that emphasizes rapid-cycle testing.
- Build robust software that is designed to test itself (e.g. uses antibugging): the developed software should be capable of diagnosing certain classes of errors. In addition, the design should accommodate automated testing and regression testing.
- Use effective formal reviews as a filter prior to testing: technical reviews can uncover inconsistencies, omissions, and outright errors in the testing approach. This saves time and also improves product quality.
- Conduct formal technical reviews to assess the test strategy and the test cases themselves.
Unit Testing
- Uses both black-box and white-box testing.
- Module interfaces are tested for proper information flow.
- Local data are examined to ensure that integrity is maintained.
- Boundary conditions are tested.
- Basis-path testing should be used.
- All error-handling paths should be tested.
- Drivers and/or stubs need to be developed to test incomplete software.
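A minimal sketch of these ideas, with every name hypothetical: `order_total` is the unit under test, `stub_price_lookup` stands in for a component that has not been built yet, and the test case acts as the driver, exercising a boundary condition and an error-handling path.

```python
import unittest

# Hypothetical unit under test; it depends on a price-lookup
# component that does not exist yet.
def order_total(quantity, price_lookup):
    if quantity < 0:
        raise ValueError("quantity must be non-negative")  # error-handling path
    return quantity * price_lookup("widget")

# Stub: a minimal stand-in for the missing component.
def stub_price_lookup(item):
    return 2.5

# Driver: exercises the unit, including boundaries and error paths.
class OrderTotalTest(unittest.TestCase):
    def test_zero_quantity_boundary(self):
        self.assertEqual(order_total(0, stub_price_lookup), 0.0)

    def test_typical_quantity(self):
        self.assertEqual(order_total(4, stub_price_lookup), 10.0)

    def test_negative_quantity_error_path(self):
        with self.assertRaises(ValueError):
            order_total(-1, stub_price_lookup)
```

The stub lets the unit be tested before the real price-lookup component is integrated; the driver is later discarded once integration testing takes over.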
Integration Testing
- Top-down integration testing:
  - The main control module is used as a test driver, and stubs are substituted for components directly subordinate to it.
  - Subordinate stubs are replaced one at a time with real components (following a depth-first or breadth-first approach).
  - Tests are conducted as each component is integrated.
  - On completion of each set of tests, another stub is replaced with a real component.
  - Regression testing may be used to ensure that new errors are not introduced.
- Bottom-up integration testing:
  - Low-level components are combined into clusters that perform a specific software function.
  - A driver (control program) is written to coordinate test-case input and output.
  - The cluster is tested.
  - Drivers are removed and clusters are combined, moving upward in the program structure.
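The bottom-up driver can be sketched as follows. The two low-level functions and the input format are invented for illustration; the point is that `cluster_driver` coordinates test-case input and output for the cluster before any higher-level control module exists.

```python
# Hypothetical low-level cluster: parsing plus validation,
# each already unit-tested in isolation.
def parse_record(line):
    name, value = line.split(",")
    return name.strip(), int(value)

def validate_record(record):
    name, value = record
    return bool(name) and value >= 0

# Driver: feeds test input through the cluster as a whole and
# collects the combined output for checking.
def cluster_driver(lines):
    results = []
    for line in lines:
        record = parse_record(line)
        results.append((record, validate_record(record)))
    return results

out = cluster_driver(["alpha, 3", "beta, -1"])
# Both records are parsed; the second is flagged invalid.
```

Once the cluster behaves correctly, this driver is removed and the cluster is combined with others moving up the program structure.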
- Regression testing (checks for defects propagated to other modules by changes made to the existing program). The regression suite contains:
  - A representative sample of existing test cases that exercises all software functions.
  - Test cases that focus on the changed software components.
  - Additional test cases focusing on software functions likely to be affected by the change.
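A regression suite in miniature, with a hypothetical function and made-up cases: existing test cases are stored with their expected results and rerun after every change, so any newly introduced error shows up as a failing case.

```python
# Hypothetical function under regression test.
def slugify(title):
    return title.lower().replace(" ", "-")

# Representative sample of existing test cases (input, expected output).
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("Testing", "testing"),
    ("A B C", "a-b-c"),
]

def run_regression(func, cases):
    """Rerun every stored case; return (input, expected, actual)
    for each case the changed code now fails."""
    failures = []
    for arg, expected in cases:
        actual = func(arg)
        if actual != expected:
            failures.append((arg, expected, actual))
    return failures
```

An empty result from `run_regression` means the change introduced no new errors for the sampled functions; any non-empty result pinpoints exactly which behaviour regressed.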
- Smoke testing:
  - Software components already translated into code are integrated into a build.
  - A series of tests designed to expose errors that would keep the build from performing its function is created.
  - The build is integrated with the other builds, and the entire product is smoke tested daily (either top-down or bottom-up integration may be used).
General Software Test Criteria
- Interface integrity (internal and external module interfaces are tested as each module or cluster is added to the software).
- Functional validity (tests to uncover functional defects in the software).
- Information content (tests for errors in local or global data structures).
- Performance (verify that specified performance bounds are met).
Validation Testing
- Ensure that each function or performance characteristic conforms to its specification.
- Deviations (deficiencies) must be negotiated with the customer to establish a means for resolving the errors.
- A configuration review or audit is used to ensure that all elements of the software configuration have been properly developed, cataloged, and documented to allow support during the maintenance phase.
Acceptance Testing
- Makes sure the software works correctly for the intended user in his or her normal work environment.
- Alpha test: a version of the complete software is tested by the customer under the supervision of the developer, at the developer's site.
- Beta test: a version of the complete software is tested by the customer at his or her own site, without the developer being present.
System Testing
- Recovery testing (checks the system's ability to recover from failures).
- Security testing (verifies that system protection mechanisms prevent improper penetration or data alteration).
- Stress testing (the program is checked to see how well it deals with abnormal resource demands: quantity, frequency, or volume).
- Performance testing (designed to test the run-time performance of software, especially real-time software).
Debugging
- Debugging (removal of a defect) occurs as a consequence of successful testing.
- Some people are better at debugging than others.
- Common approaches:
  - Brute force (memory dumps and run-time traces are examined for clues to error causes).
  - Backtracking (source code is examined by working backwards from the symptom to potential causes of the error).
  - Cause elimination (uses binary partitioning to reduce the number of potential locations where the error can exist).
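The binary-partitioning idea behind cause elimination can be sketched as a bisection over an ordered list of candidate causes (for example, successive change sets), assuming every candidate before the culprit passes and every one from the culprit onward fails. The change numbers and the `is_bad` predicate below are invented for illustration.

```python
def first_bad(candidates, is_bad):
    """Bisect an ordered list where some prefix is good and the
    rest is bad; return the first bad candidate."""
    lo, hi = 0, len(candidates) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(candidates[mid]):
            hi = mid          # failure already present: look earlier
        else:
            lo = mid + 1      # still good: defect introduced later
    return candidates[lo]

changes = list(range(1, 11))                    # hypothetical changes 1..10
culprit = first_bad(changes, lambda c: c >= 7)  # failure first appears at 7
```

Each probe halves the remaining suspect region, so ten candidate changes need at most four checks instead of ten; this is the same principle behind tools such as `git bisect`.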
Bug Removal Considerations
- Is the cause of the bug reproduced in another part of the program?
- What "next bug" might be introduced by the fix that is being proposed?
- What could have been done to prevent this bug in the first place?