Outline
• Testing Definition
• Goals
• Difficulties
• Dimensions of Test Case Selection
• Stages of Testing
• Test Generation Strategies (Techniques)
• Prioritization
• Automation

Testing Definition
Executing software in a simulated or real environment, using inputs that have been selected in some way.

Goals
• Detect faults
• Establish confidence in the software
• Evaluate properties of the software:
  • Reliability
  • Performance
  • Memory Usage
  • Security
  • Usability

Difficulties
Most of the software testing literature equates test case selection with software testing, but that is just one difficult part. Other difficult issues include:
• Determining whether or not outputs are correct.
• Comparing resulting internal states to expected states.
• Determining whether adequate testing has been done.
• Measuring performance characteristics.

Dimensions of Test Case Selection
• Stage of development
• Source of information used for test case selection

Testing in the Small
• Unit Testing
• Feature Testing
• Integration Testing

Unit Testing
• Tests the smallest individually executable code units.
• Usually done by programmers.
• Test cases might be selected based on code, specification, intuition, etc.
• Tools:
  • Test driver/harness (see the sketch below)
  • Code coverage analyzer
  • Automatic test case generator
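
A minimal sketch of a test driver/harness, using Python's unittest; the unit under test, add_clamped(), is a hypothetical function invented here for illustration:

    import unittest

    def add_clamped(a, b):
        # Hypothetical unit under test: sum clamped to the range [0, 100].
        return max(0, min(100, a + b))

    class AddClampedTest(unittest.TestCase):
        def test_typical(self):
            self.assertEqual(add_clamped(10, 20), 30)

        def test_clamps_low(self):
            self.assertEqual(add_clamped(-50, 20), 0)

        def test_clamps_high(self):
            self.assertEqual(add_clamped(90, 20), 100)

    if __name__ == "__main__":
        unittest.main()  # the harness discovers and runs the test cases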

Integration Testing
• Tests interactions between two or more units or components.
• Emphasizes interfaces.
• Usually done by programmers.
• Issues: In what order are units combined? How are units integrated, and what are the implications of this order?
  • Top-down => need stubs (sketch below); top-level units tested repeatedly.
  • Bottom-up => need drivers; bottom-level units tested repeatedly.
  • Critical units first => need both stubs & drivers; critical units tested repeatedly.
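
A sketch of the stub idea in top-down integration, with all names hypothetical: the top-level unit is real, while a not-yet-integrated lower-level unit is replaced by a stub that returns canned answers.

    def lookup_price_stub(item_id):
        # Stub standing in for the real, not-yet-integrated catalog lookup.
        return 999  # canned price, in cents

    def order_total(item_ids, lookup_price):
        # Real top-level unit; its lower-level dependency is injected, so
        # either a stub (top-down) or the real unit can be supplied.
        return sum(lookup_price(i) for i in item_ids)

    assert order_total(["a", "b"], lookup_price_stub) == 1998

Bottom-up integration is the mirror image: the lower-level units are real, and a throwaway driver calls them in place of the not-yet-written top level.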

Potential Problems:
• Inadequate unit testing.
• Inadequate planning & organization for integration testing.
• Inadequate documentation and testing of externally-supplied components.

Testing in the Large
• System Testing
• End-to-End Testing
• Operations Readiness Testing
• Beta Testing
• Load Testing
• Stress Testing
• Performance Testing
• Reliability Testing
• Regression Testing

System Testing
• Tests the functionality of the entire system.
• Usually done by professional testers.

• Testing must be planned.
• Exhaustive testing is not possible.
• Testing is creative and difficult.
• Testing should be done by people who are independent of the developers.
• A major objective of testing is failure prevention.

• Every systematic test selection strategy can be viewed as a way of dividing the input domain into subdomains and selecting one or more test cases from each.
• The division can be based on such things as code characteristics (white box), specification details (black box), domain structure, risk analysis, etc.
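
As an illustration of subdomain-based selection, here is a sketch with an invented grade() function: its input domain 0..100 is divided into three black-box subdomains, and one representative test case is selected from each.

    def grade(score):
        # Hypothetical unit: maps a score in 0..100 to a result category.
        if score < 50:
            return "fail"
        elif score < 80:
            return "pass"
        return "distinction"

    # One representative test case per subdomain.
    subdomains = {(0, 49): "fail", (50, 79): "pass", (80, 100): "distinction"}
    for (lo, hi), expected in subdomains.items():
        assert grade((lo + hi) // 2) == expected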

Limitations of code-based (white-box) selection:
• Can only be used at the unit testing level, and even then it can be prohibitively expensive.
• We don't know the relationship between a "thoroughly" tested component and faults: such criteria can generally be argued to be necessary conditions, but not sufficient ones.

Limitations of specification-based (black-box) selection:
• Unless there is a formal specification (which there rarely/never is), it is very difficult to assure that all parts of the specification have been used to select test cases.
• Even if every functionality unit of the specification has been tested, that doesn't assure that there aren't faults.
• Specifications are rarely kept up-to-date as the system is modified.

Boundary Analysis
• Look at characteristics of the input domain or subdomains.
• Consider typical, boundary, and near-boundary cases.
• This sort of boundary analysis may be meaningless for non-numeric inputs: what are the boundaries of {Rome, Paris, London, …}?
• Can also apply similar analysis to output values, producing output-based test cases.
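
A sketch of boundary-value selection for a numeric input domain, assuming the valid range is [lo, hi]; it yields typical, boundary, and near-boundary cases, including the invalid neighbours just outside the range.

    def boundary_cases(lo, hi):
        # Invalid neighbours, exact boundaries, near-boundary values,
        # plus one typical interior value.
        return sorted({lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1})

    print(boundary_cases(0, 100))  # -> [-1, 0, 1, 50, 99, 100, 101]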

Risk-Based Testing
• Risk is the expected loss attributable to the failures caused by faults remaining in the software.
• Risk is based on:
  • Failure likelihood (likelihood of occurrence).
  • Failure consequence.
• Risk-based testing involves selecting test cases so as to minimize risk, by making sure that the most likely inputs and the highest-consequence ones are selected.
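
A toy sketch of the risk computation: score each operation by likelihood × consequence and test the riskiest first. All names and numbers here are invented for illustration.

    operations = [
        # (operation, failure likelihood, consequence of failure)
        ("login",        0.10,  1000),
        ("transfer",     0.02, 50000),
        ("view_balance", 0.20,    10),
    ]
    for name, p, cost in sorted(operations, key=lambda o: o[1] * o[2], reverse=True):
        print(f"{name}: risk = {p * cost:.0f}")
    # transfer (risk 1000) outranks login (100) and view_balance (2)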

Acceptance Testing
• The end user runs the system in their environment to evaluate whether the system meets their criteria.
• The outcome determines whether the customer will accept the system.
• This is often part of a contractual agreement.

Regression Testing
• Test modified versions of a previously validated system.
• The goal is to assure that changes to the system have not introduced errors.
• Usually done by testers.
• The primary issue is how to choose an effective regression test suite from existing, previously-run test cases.
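
One simple selection heuristic, sketched here with invented names: rerun only the previously-run tests whose coverage intersects the changed modules.

    coverage_map = {
        "test_login":    {"auth", "session"},
        "test_transfer": {"payments", "auth"},
        "test_reports":  {"reporting"},
    }
    changed = {"payments"}

    regression_suite = [t for t, mods in coverage_map.items() if mods & changed]
    print(regression_suite)  # -> ['test_transfer']

Safer policies rerun more; the trade-off is selection precision versus the risk of missing a regression in code the map doesn't capture.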

Prioritization
• Once a test suite has been selected, it is often desirable to prioritize the test cases based on some criterion, since the time available for testing is limited and therefore not all tests can be run.
• That way, at least the "most important" ones can be run.

Possible prioritization criteria:
• Most critical functions.
• Most critical individual inputs.
• Most frequently executed inputs.
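
A sketch of budget-driven prioritization, with invented scores: each test carries a priority score (derived, say, from criticality or execution frequency) and a runtime, and a greedy pass keeps the best value per unit time that fits the budget.

    tests = [("t1", 9, 5), ("t2", 7, 1), ("t3", 4, 2), ("t4", 2, 4)]  # (name, score, runtime)
    budget, used, chosen = 8, 0, []

    for name, score, runtime in sorted(tests, key=lambda t: t[1] / t[2], reverse=True):
        if used + runtime <= budget:
            chosen.append(name)
            used += runtime
    print(chosen)  # -> ['t2', 't3', 't1']: the most valuable tests that fit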

White-box methods can be used for:
• Test case selection or generation.
• Test case adequacy assessment.
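
For adequacy assessment, one widely used tool for Python code is coverage.py; a minimal sketch of its API (pip install coverage), where run_tests() is a stand-in for a real suite:

    import coverage

    def run_tests():
        # Stand-in for a real test suite; exercises the code under measurement.
        assert abs(-3) == 3

    cov = coverage.Coverage()
    cov.start()
    run_tests()
    cov.stop()
    cov.report(show_missing=True)  # per-file statement coverage, missing lines listed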

Open questions about coverage:
• Is code coverage an effective means of detecting faults?
• How much coverage is enough?
• Is one coverage criterion better than another?
• Does increasing coverage necessarily lead to higher fault detection?
• Are coverage criteria more effective than random test case selection?

Automation can help with:
• Test execution: run large numbers of test cases/suites without human intervention.
• Test generation: produce test cases by processing the specification, code, or model.
• Test management: log test cases & results, map tests to requirements & functionality, track test progress & completeness.
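
A sketch of unattended execution with result logging: a table of (input, expected) pairs is run against a hypothetical unit under test, and verdicts are written to a CSV log.

    import csv

    def unit_under_test(x):
        return x * 2  # hypothetical

    cases = [(1, 2), (3, 6), (5, 11)]  # the last expectation is deliberately wrong
    with open("results.csv", "w", newline="") as f:
        log = csv.writer(f)
        log.writerow(["input", "expected", "actual", "verdict"])
        for x, expected in cases:
            actual = unit_under_test(x)
            log.writerow([x, expected, actual, "pass" if actual == expected else "FAIL"])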

Why automate?
• Testing is repetitive, tedious, and error-prone, particularly during regression testing.
• Test cases are valuable: once they are created, they can and should be used again.
• More testing can be accomplished in less time.

Costs of automation:
• Does the payoff from test automation justify the expense and effort of automation?
• Learning to use an automation tool can be difficult.
• Completely automated execution implies putting the system into the proper state, supplying the inputs, running the test case, collecting the results, and verifying the results.
• Tests have a finite lifetime.

• Automated tests are more expensive to create and maintain (estimates of 3-30 times).
• Automated tests can lose relevancy, particularly when the system under test changes.
• Use of tools requires that testers learn how to use them, cope with their problems, and understand what they can and can't do.

Tests that particularly benefit from automation:
• Load/stress tests: it is very difficult to have very large numbers of human testers simultaneously accessing a system.
• Regression test suites: tests maintained from previous releases, run to check that changes haven't caused faults.
• Sanity tests: run after every new system build to check for obvious problems.
• Stability tests: run the system for 24 hours to see that it can stay up.
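
A sketch of the load-test point: simulated users are cheap to run concurrently. Here target() is a stand-in for a real client call (e.g., an HTTP request).

    from concurrent.futures import ThreadPoolExecutor
    import time

    def target(user_id):
        time.sleep(0.01)  # stand-in for a real request round-trip
        return "ok"

    # 1000 simulated users, up to 100 concurrent: far more simultaneous
    # access than human testers could produce.
    with ThreadPoolExecutor(max_workers=100) as pool:
        results = list(pool.map(target, range(1000)))
    print(results.count("ok"), "successful requests")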

NIST estimates that billions of dollars could be saved each year if improvements were made to the testing process.*

*NIST Report: The Economic Impact of Inadequate Infrastructure for Software Testing, 2002.
