Handout: Software Testing

Version: ST/Handout/1107/1.0 Date: 16-11-07

Cognizant 500 Glen Pointe Center West Teaneck, NJ 07666 Ph: 201-801-0233 www.cognizant.com

Handout – Software Testing

TABLE OF CONTENTS
Introduction ... 5
About this Module ... 5
Target Audience ... 5
Module Objectives ... 5
Pre-requisite ... 5
Chapter 1: Introduction to Testing ... 6
Learning Objectives ... 6
What is Software Testing ... 6
Testing Life Cycle ... 6
Broad Categories of Testing ... 7
The Testing Techniques ... 7
Types of Testing ... 8
SUMMARY ... 8
Test your Understanding ... 9
Chapter 2: Black Box Vs. White Box Testing ... 10
Learning Objective ... 10
Introduction to Black Box and White Box testing ... 10
Black box testing ... 10
Black box testing - without user involvement ... 11
Black box testing - with user involvement ... 11
White Box Testing ... 14
Black Box (Vs) White Box ... 18
SUMMARY ... 20
Test your Understanding ... 20
Chapter 3: Other Testing Types ... 21
Learning Objective ... 21
What is GUI Testing? ... 21
Regression Testing ... 31
Integration Testing ... 38
Acceptance Testing ... 43
Configuration Testing & Installation Testing ... 45
Alpha testing and Beta testing ... 48
Test your Understanding ... 52
Page 2 ©Copyright 2007, Cognizant Technology Solutions, All Rights Reserved C3: Protected

Chapter 4: Levels of Testing ... 53
Learning Objective ... 53
Unit Testing ... 53
Integration Testing ... 60
System Testing ... 61
SUMMARY ... 64
Test your Understanding ... 64
Chapter 5: JUnit Testing ... 65
Learning Objective ... 65
JUNIT Testing - Introduction ... 65
Simple Test Case ... 65
Fixture ... 66
Test Case ... 67
Suite ... 67
TestRunner ... 68
Chapter 6: Testing Artifacts ... 70
Learning Objective ... 70
Test Strategy and Test Plan ... 70
Test Plan ... 75
Test Case ... 100
SUMMARY ... 103
Test your Understanding ... 103
Chapter 7: Defect Management ... 104
Learning Objective ... 104
What is a Defect? ... 104
Defect Lifecycle ... 105
Defect Reporting and Tracking ... 105
SUMMARY ... 107
Test your Understanding ... 108
Chapter 8: Automation ... 109
Learning Objective ... 109
What is Automation? ... 109
Automation Benefits ... 109
Automation Life Cycle ... 111
Test Environment Setup ... 113
Other Phases in Automation ... 116

Automation Methods ... 117
Automation tool comparison ... 118
SUMMARY ... 125
Test your Understanding ... 125
Chapter 9: Sample Test Automation Tool ... 126
Learning Objective ... 126
Sample Test Automation Tool ... 126
Rational Suite of tools ... 126
Rational Administrator ... 127
Rational Robot ... 130
Rational Test Manager ... 140
Supported environments ... 142
SUMMARY ... 143
Chapter 10: Performance Testing ... 144
Learning Objective ... 144
What is Performance testing? ... 144
Performance Testing Requirements ... 146
Performance Testing Process ... 147
Performance Testing Tools ... 154
Volume and Stress Testing ... 163
SUMMARY ... 166
Chapter 11: Test Case Point ... 167
Learning Objective ... 167
What is a Test Case Point (TCP)? ... 167
Test Case Point Analysis ... 167
SUMMARY ... 172
REFERENCES ... 174
WEBSITES ... 174
BOOKS ... 174
STUDENT NOTES: ... 175

Introduction

About this Module
This module provides you with a brief description of the module, the suggested prerequisites, the target audience and the module objectives.

Target Audience
Entry Level Trainees

Module Objectives
After completing this module, the student will be able to:
- Explain Software Testing
- List the types of testing
- Explain Test Strategy
- Describe Test Plan
- Describe Test Design
- Describe Test Cases
- Describe Test Data
- Explain Test Execution
- Perform defect reporting and analyze defects
- List Test Automation advantages and disadvantages
- Work with Winrunner
- Describe Performance Testing
- Work with Loadrunner tool
- Work with Test Director

Pre-requisite
This module does not require any prerequisite.

Chapter 1: Introduction to Testing

Learning Objectives
After completing this topic you will be able to:
- Explain the need for Software testing

What is Software Testing
Software testing is a process used to identify the correctness, completeness and quality of developed computer software. There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following rote procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing is connoted to mean the dynamic analysis of the product: putting the product through its paces.

Actually, testing can never establish the correctness of computer software, as this can only be done by formal verification (and only when there is no mistake in the formal verification process). Testing can only find defects, not prove that there are none.

The quality of the application can and normally does vary widely from system to system, but some of the common quality attributes include reliability, stability, portability, maintainability and usability. Refer to the ISO standard ISO 9126 for a more complete list of attributes and criteria.

Testing Life Cycle
Software Testing has been accepted as a separate discipline, to the extent that there is a separate life cycle for the testing activity. Involving software testing in all phases of the software development life cycle has become a necessity as part of the software quality assurance process. Every testing project has to follow the waterfall model of the testing process. Right from the Requirements study till the implementation, there needs to be testing done in every phase. According to the respective projects, the scope of testing can be tailored, but the process mentioned above is common to any testing activity. The V-Model of the Software Testing Life Cycle, along with the Software Development Life Cycle given below, indicates the various phases or levels of testing.

Broad Categories of Testing
Based on the V-Model mentioned above, we see that there are two categories of testing activities that can be done on software, namely:
- Static Testing
- Dynamic Testing

The kind of verification we do on the software work products before the process of compilation and creation of an executable (requirement review, design review, code review, walkthrough and audits) is called Static Testing. When we test the software by executing it and comparing the actual and expected results, it is called Dynamic Testing.

The Testing Techniques
To perform these types of testing, there are two widely used testing techniques:

Black-Box testing technique: This technique is used for testing based solely on analysis of requirements (specification, user documentation, etc.). Also known as functional testing.

White-Box testing technique: This technique is used for testing based on analysis of internal logic (design, code, etc.). Expected results still come from the requirements. Also known as Structural testing.

These topics will be elaborated in the coming chapters.
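As a small illustration of the dynamic testing described above (execute the software, then compare actual against expected results), consider the sketch below. The `discounted_price` function and its expected values are invented for this example; they are not part of the handout.

```python
# Dynamic testing: run the code and compare actual vs. expected results.
# The function under test (a simple discount calculator) is a made-up example.

def discounted_price(price, discount_percent):
    """Return the price after applying a percentage discount."""
    return round(price * (1 - discount_percent / 100.0), 2)

# Each test case pairs an input with the result we expect from the requirements.
test_cases = [
    ((100.0, 10), 90.0),
    ((59.99, 0), 59.99),
    ((200.0, 50), 100.0),
]

for (price, pct), expected in test_cases:
    actual = discounted_price(price, pct)
    status = "PASS" if actual == expected else "FAIL"
    print(f"discounted_price({price}, {pct}) = {actual}, expected {expected}: {status}")
```

Static testing, by contrast, would examine the function's source and its design documents without ever executing it.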

Types of Testing
From the V-model, we see that there are various levels or phases of testing, namely Unit testing, Integration testing, System testing, User Acceptance testing etc. Let us see a brief definition of the widely employed types of testing.

Unit Testing: The testing done to a unit or to a smallest piece of software, to verify if it satisfies its functional specification or its intended design structure.

Integration Testing: Testing which takes place as sub elements are combined (i.e. integrated) to form higher-level elements.

Regression Testing: Selective re-testing of a system to verify that modifications (bug fixes) have not caused unintended effects and that the system still complies with its specified requirements.

System Testing: Testing the software for the required specifications on the intended hardware.

Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria, which enables a customer to determine whether to accept the system or not.

Performance Testing: To evaluate the time taken or response time of the system to perform its required functions, in comparison with the requirements.

Stress Testing: To evaluate a system beyond the limits of the specified requirements or system resources (such as disk space, memory, processor utilization) to ensure the system does not break unexpectedly.

Load Testing: Load Testing, a subset of stress testing, verifies that a web site can handle a particular number of concurrent users while maintaining acceptable response times.

Alpha Testing: Testing of a software product or system conducted at the developer's site by the Customer.

Beta Testing: Testing conducted at one or more customer sites by the end user of a delivered software product or system.

SUMMARY
"Testing is the process of executing a program with the intent of finding errors"
- Evolution of Software Testing
- The Testing process and lifecycle
- Broad categories of testing
- Widely employed Types of Testing
- The Testing Techniques
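Unit testing, the first of the types above, is revisited with JUnit in Chapter 5. As a language-neutral sketch, the example below uses Python's unittest module; the `absolute` function under test is invented for the illustration.

```python
import unittest

def absolute(n):
    """The smallest testable unit here: return the absolute value of n."""
    return -n if n < 0 else n

class AbsoluteUnitTest(unittest.TestCase):
    # Each test checks the unit against its functional specification.
    def test_negative_input(self):
        self.assertEqual(absolute(-5), 5)

    def test_positive_input(self):
        self.assertEqual(absolute(3), 3)

    def test_zero(self):
        self.assertEqual(absolute(0), 0)

# Run the suite programmatically (equivalent to `python -m unittest`).
suite = unittest.TestLoader().loadTestsFromTestCase(AbsoluteUnitTest)
result = unittest.TextTestRunner(verbosity=2).run(suite)
```

A failing assertion here points at a defect in the smallest piece of software, before it is combined with other elements in integration testing.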

Test your Understanding
1. The primary objective of testing is
a) To show that the program works
b) To provide a detailed indication of quality
c) To find errors
d) To protect the end-user

Answers: 1) c

Chapter 2: Black Box Vs. White Box Testing

Learning Objective:
After completing this chapter, you will be able to:
- Explain the methods of testing

Introduction to Black Box and White Box testing
Test Design refers to understanding the sources of test cases, test coverage, how to develop and document test cases, and how to build and maintain test data. There are 2 primary methods by which tests can be designed, and they are:
- Black box
- White box

Black-box test design treats the system as a literal "black-box", so it doesn't explicitly use knowledge of the internal structure. It is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box. White-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. It is used to detect errors by means of execution-oriented test cases. Synonyms for white-box include: structural, glass-box and clear-box.

While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioral" and "structural". Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test design method. One has to use a mixture of different methods so that they aren't hindered by the limitations of a particular one. Some call this "gray-box" or "translucent-box" test design, but others wish we'd stop talking about boxes altogether.

Black box testing
Black Box Testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. It is because of this that black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. For this testing, test groups are often used.

Though centered around the knowledge of user requirements, black box tests do not necessarily involve the participation of users. Among the most important black box tests that do not involve users are functionality testing, volume tests, stress tests, recovery testing, and benchmarks. Additionally, there are two types of black box test that involve users, i.e. field and laboratory tests. In the following, the most important aspects of these black box tests will be described briefly.
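The contrast between the two design methods can be sketched in code. In this hypothetical example (the `shipping_cost` function and its pricing rules are invented, not from the handout), black-box cases are derived purely from the stated specification, while white-box cases are chosen after reading the code and so reach an internal branch the specification never mentions.

```python
# A made-up function to contrast black-box and white-box test design.
def shipping_cost(weight_kg):
    """Spec: orders up to 10 kg ship for 5.0; heavier orders cost 5.0
    plus 1.0 per kg over 10. The code also contains a hidden fast path
    for exactly 0 kg that the specification alone would not reveal."""
    if weight_kg == 0:
        return 0.0                      # internal special case
    if weight_kg <= 10:
        return 5.0
    return 5.0 + (weight_kg - 10) * 1.0

# Black-box cases: derived only from the stated specification.
black_box_cases = [(1, 5.0), (10, 5.0), (11, 6.0), (25, 20.0)]

# White-box cases: derived by reading the code, so the hidden
# zero-weight branch is exercised as well.
white_box_cases = black_box_cases + [(0, 0.0)]

for weight, expected in white_box_cases:
    assert shipping_cost(weight) == expected
```

Note that only the white-box set executes the `weight_kg == 0` branch; whether that branch is a defect or an intended exception to the specification is exactly the kind of question structural testing surfaces.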

Black box testing - without user involvement
The so-called ``functionality testing'' is central to most testing exercises. Its primary objective is to assess whether the program does what it is supposed to do, i.e. what is specified in the requirements. There are different approaches to functionality testing. One is the testing of each program feature or function in sequence. The other is to test module by module, i.e. each function where it is called first.

The objective of volume tests is to find the limitations of the software by processing a huge amount of data. A volume test can uncover problems that are related to the efficiency of a system, e.g. incorrect buffer sizes or a consumption of too much memory space, or it may only show that an error message would be needed telling the user that the system cannot process the given amount of data.

During a stress test, the system has to process a huge amount of data or perform many function calls within a short period of time. A typical example could be to perform the same function from all workstations connected in a LAN within a short period of time (e.g. sending e-mails or modifying a term bank via different terminals simultaneously).

The aim of recovery testing is to make sure to which extent data can be recovered after a system breakdown. Does the system provide possibilities to recover all of the data or part of it? How much can be recovered, and how? Is the recovered data still correct and consistent? Particularly for software that needs high reliability standards, recovery testing is very important.

The notion of benchmark tests involves the testing of program efficiency. The efficiency of a piece of software strongly depends on the hardware environment, and therefore benchmark tests always consider the soft/hardware combination. Whereas for most software engineers benchmark tests are concerned with the quantitative measurement of specific operations, some also consider user tests that compare the efficiency of different software systems as benchmark tests. In the context of this document, however, benchmark tests only denote operations that are independent of personal variables.

Black box testing - with user involvement
For tests involving users, methodological considerations are rare in SE literature. Rather, one may find practical test reports that distinguish roughly between field and laboratory tests. In the following, only a rough description of field and laboratory tests will be given.

Scenario Tests. The term ``scenario'' has entered software evaluation in the early 1990s. A scenario test is a test case which aims at a realistic user background for the evaluation of software, as it was defined and performed. It is an instance of black box testing where the major objective is to assess the suitability of a software product for every-day routines. In short, it involves putting the system into its intended use by its envisaged type of user, performing a standardised task.

In field tests, users are observed while using the software system at their normal working place. Apart from general usability-related aspects, field tests are particularly useful for assessing the interoperability of the software system, i.e. how the technical integration of the system works. Moreover, field tests are the only real means to elucidate problems of the organizational integration of the software system into existing procedures. Particularly in the NLP environment, this problem has frequently been underestimated.

A typical example of the organizational problem of implementing a translation memory is the language service of a big automobile manufacturer, where the major implementation problem is not the technical environment, but the fact that many clients still submit their orders as print-out, that neither source texts nor target texts are properly organised and stored, and, last but not least, that individual translators are not too motivated to change their working habits.

Laboratory tests are mostly performed to assess the general usability of the system. Since laboratory tests provide testers with many technical possibilities, data collection and analysis are easier than for field tests. Due to the high laboratory equipment costs, laboratory tests are mostly only performed at big software houses such as IBM or Microsoft.

Testing Strategies/Techniques
- Black box testing should make use of randomly generated inputs (only a test range should be specified by the tester), to eliminate any guess work by the tester as to the methods of the function
- Data outside of the specified input range should be tested to check the robustness of the program
- Boundary cases should be tested (top and bottom of specified range) to make sure the highest and lowest allowable inputs produce proper output
- The number zero should be tested when numerical data is to be input
- Stress testing should be performed (try to overload the program with inputs to see where it reaches its maximum capacity), especially with real time systems
- Crash testing should be performed to see what it takes to bring the system down
- Test monitoring tools should be used whenever possible to track which tests have already been performed and the outputs of these tests, to avoid repetition and to aid in the software maintenance
- Other functional testing techniques include: transaction testing, syntax testing, domain testing, logic testing, and state testing
- Finite state machine models can be used as a guide to design functional tests

According to Beizer, the following is a general order by which tests should be designed:
1. Clean tests against requirements
2. Additional structural tests for branch coverage, as needed
3. Additional tests for data-flow coverage, as needed
4. Domain tests not covered by the above
5. Special techniques as appropriate: syntax, loop, state, etc.
6. Any dirty tests not covered by the above

Black box testing Methods

Graph-based Testing Methods
- Black-box methods based on the nature of the relationships (links) among the program objects (nodes); test cases are designed to traverse the entire graph
- Transaction flow testing (nodes represent steps in some transaction and links represent logical connections between steps that need to be validated)
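The input-selection strategies listed under Testing Strategies/Techniques above (random inputs within a specified range, out-of-range data, boundaries, and zero) can be sketched as follows. The `accept_percentage` validator is a hypothetical function invented for this illustration.

```python
import random

def accept_percentage(value):
    """Hypothetical function under test: accept whole-number percentages 0..100."""
    return isinstance(value, int) and 0 <= value <= 100

random.seed(42)  # make the randomly generated inputs reproducible

# Randomly generated inputs inside the specified range (0..100).
for _ in range(100):
    assert accept_percentage(random.randint(0, 100))

# Data outside the specified input range, to check robustness.
for bad in (-1, 101, 1000, -999):
    assert not accept_percentage(bad)

# Boundary cases: top and bottom of the specified range.
assert accept_percentage(0) and accept_percentage(100)

# Zero tested explicitly, since numerical data is being input.
assert accept_percentage(0)

print("all strategy checks passed")
```

Each assertion corresponds to one bullet in the strategy list; a tester would only know the range 0..100 from the specification, never the validator's internals.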

Finite state modeling (nodes represent user-observable states of the software and links represent transitions between states)
Data flow modeling (nodes are data objects and links are transformations from one data object to another)
Timing modeling (nodes are program objects and links are sequential connections between these objects; link weights are required execution times)

Equivalence Partitioning
Black-box technique that divides the input domain into classes of data from which test cases can be derived. An ideal test case uncovers a class of errors that might otherwise require many arbitrary test cases to be executed before a general error is observed. Equivalence class guidelines:
o If an input condition specifies a range, one valid and two invalid equivalence classes are defined
o If an input condition requires a specific value, one valid and two invalid equivalence classes are defined
o If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined
o If an input condition is Boolean, one valid and one invalid equivalence class is defined

Boundary Value Analysis
Black-box technique that focuses on the boundaries of the input domain rather than its center. BVA guidelines:
o If an input condition specifies a range bounded by values a and b, test cases should include a and b, as well as values just above and just below a and b
o If an input condition specifies a number of values, test cases should exercise the minimum and maximum numbers, as well as values just above and just below the minimum and maximum values
o Apply guidelines 1 and 2 to output conditions; test cases should be designed to produce the minimum and maximum output reports
o If internal program data structures have boundaries (e.g. size limitations), be certain to test the boundaries

Comparison Testing
Black-box testing for safety-critical systems in which independently developed implementations of redundant systems are tested for conformance to specifications. Often equivalence class partitioning is used to develop a common set of test cases for each implementation.

Orthogonal Array Testing
Black-box technique that enables the design of a reasonably small set of test cases that provide maximum test coverage. Focus is on categories of faulty logic likely to be present in the software component (without examining the code).
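The equivalence-partitioning and BVA guidelines above can be expressed as a small helper. This is an illustrative sketch, not part of the handout; the function names and the assumption of an inclusive integer range [a, b] are mine:

```python
def bva_values(a, b):
    """Boundary value analysis for an inclusive range [a, b]:
    the bounds themselves plus values just below and just above each."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

def equivalence_classes(a, b):
    """Equivalence partitioning for a range [a, b]: one valid class and
    two invalid classes (below and above), one representative value each."""
    return {
        "valid": (a + b) // 2,   # any value inside the range will do
        "invalid_low": a - 1,    # just below the range
        "invalid_high": b + 1,   # just above the range
    }
```

For a field accepting 1..100, `bva_values(1, 100)` yields the six classic boundary probes: 0, 1, 2, 99, 100, 101.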

Priorities for assessing tests using an orthogonal array:
- Detect and isolate all single-mode faults
- Detect all double-mode faults
- Multimode faults

Specialized Testing
- Graphical user interfaces
- Client/server architectures
- Documentation and help facilities
- Real-time systems:
  o Task testing (test each time-dependent task independently)
  o Behavioral testing (simulate system response to external events)
  o Intertask testing (check communication errors among tasks)
  o System testing (check interaction of integrated system software and hardware)

Advantages of Black Box Testing
- More effective on larger units of code than glass box testing
- Tester needs no knowledge of implementation, including specific programming languages
- Tester and programmer are independent of each other
- Tests are done from a user's point of view
- Will help to expose any ambiguities or inconsistencies in the specifications
- Test cases can be designed as soon as the specifications are complete

Disadvantages of Black Box Testing
- Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever
- Without clear and concise specifications, test cases are hard to design
- There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried
- May leave many program paths untested
- Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone)
- Most testing-related research has been directed toward glass box testing

White Box Testing
Software testing approaches that examine the program structure and derive test data from the program logic. Structural testing is sometimes referred to as clear-box testing, since the term "white box" is a misnomer: white boxes are considered opaque and do not really permit visibility into the code.

Synonyms for white box testing:
- Glass box testing
- Structural testing
- Clear box testing
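The orthogonal-array idea described above, a reasonably small set of test cases with high pairwise coverage, can be approximated with a greedy generator. This is a sketch of pairwise (all-pairs) selection, a simplification of true orthogonal arrays; the parameter names in the example are invented:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedily pick test cases until every value pair of every two
    parameters appears in at least one chosen case (all-pairs coverage)."""
    names = list(params)
    idx_pairs = list(combinations(range(len(names)), 2))
    uncovered = {(i, va, j, vb)
                 for i, j in idx_pairs
                 for va in params[names[i]]
                 for vb in params[names[j]]}
    candidates = list(product(*(params[n] for n in names)))
    suite = []
    while uncovered:
        # Pick the full-factorial candidate covering the most new pairs.
        best = max(candidates,
                   key=lambda c: sum((i, c[i], j, c[j]) in uncovered
                                     for i, j in idx_pairs))
        suite.append(dict(zip(names, best)))
        uncovered -= {(i, best[i], j, best[j]) for i, j in idx_pairs}
    return suite
```

For three two-valued parameters the full factorial is 8 cases, while a pairwise suite needs only about half of that, which is the economy orthogonal-array testing aims at.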

Open Box Testing

Types of White Box Testing
A typical rollout of a product is shown in figure 1 below.

The purpose of white box testing:
- Initiate a strategic initiative to build quality throughout the life cycle of a software product or service
- Provide a complementary function to black box testing
- Perform complete coverage at the component level
- Improve quality by optimizing performance

Practices
This section outlines some of the general practices comprising the white-box testing process. In general, white-box testing practices have the following considerations:
- The allocation of resources to perform class and method analysis and to document and review the same
- Developing a test harness made up of stubs, drivers and test object libraries
- Development and use of standard procedures, naming conventions and libraries
- Establishment and maintenance of regression test suites and procedures
- Allocation of resources to design, document and manage a test history library
- The means to develop or acquire tool support for automation of capture/replay/compare, test suite execution, results verification and documentation capabilities

1 Code Coverage Analysis

1.1 Basis Path Testing
A testing mechanism proposed by McCabe whose aim is to derive a logical complexity measure of a procedural design and use this as a guide for defining a basis set of execution paths. Test cases that exercise the basis set will execute every statement at least once.

1.1.1 Flow Graph Notation
A notation for representing control flow, similar to flow charts and UML activity diagrams.

1.1.2 Cyclomatic Complexity
The cyclomatic complexity gives a quantitative measure of the logical complexity. This value gives the number of independent paths in the basis set, and provides an upper bound for the number of tests required to guarantee that every statement is executed at least once. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition (i.e., a new edge).

1.2 Control Structure Testing

1.2.1 Condition Testing
Condition testing aims to exercise all logical conditions in a program module. Conditions may be defined as:
1) Simple condition: a Boolean variable or a relational expression, possibly preceded by a NOT operator
2) Relational expression: (E1 op E2), where E1 and E2 are arithmetic expressions
3) Compound condition: composed of two or more simple conditions, Boolean operators and parentheses
4) Boolean expression: a condition without relational expressions

1.2.2 Data Flow Testing
Selects test paths according to the location of definitions and uses of variables.

1.2.3 Loop Testing
Loops are fundamental to many algorithms. Loops can be defined as simple, concatenated, nested, and unstructured. Note that unstructured loops are not to be tested; rather, they are redesigned.
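McCabe's measure described above can be computed directly from a flow graph as V(G) = E - N + 2 for a connected graph with E edges and N nodes (it also equals the number of predicate nodes + 1). A minimal sketch; the graph encoding as edge tuples is an assumption for illustration:

```python
def cyclomatic_complexity(edges, num_nodes):
    """V(G) = E - N + 2 for a connected control-flow graph."""
    return len(edges) - num_nodes + 2

# Flow graph of a routine with one while loop:
# node 1: entry, node 2: loop test, node 3: loop body, node 4: exit
edges = [(1, 2), (2, 3), (3, 2), (2, 4)]
v = cyclomatic_complexity(edges, 4)   # 4 - 4 + 2 = 2
```

V(G) = 2 here means the basis set contains two independent paths (loop skipped, loop body taken), so two test cases suffice to execute every statement at least once.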

2 Design by Contract (DbC)
DbC is a formal way of using comments to incorporate specification information into the code itself. Basically, the code specification is expressed unambiguously using a formal language that describes the code's implicit contracts. These contracts specify such requirements as:
- Conditions that the client must meet before a method is invoked
- Conditions that a method must meet after it executes
- Assertions that a method must satisfy at specific points of its execution
Tools that check DbC contracts at runtime, such as Jcontract [http://www.parasoft.com/products/jtract/index.html], are used to perform this function.

3 Profiling
Profiling provides a framework for analyzing Java code performance for speed and heap memory use. It identifies routines that are consuming the majority of the CPU time so that problems may be tracked down to improve performance. These include the use of the Microsoft Java Profiler API and Sun's profiling tools that are bundled with the JDK. Third-party tools such as JaViz [http://www.research.ibm.com/journal/sj/391/kazi.html] may also be used to perform this function.

4 Error Handling
Exception and error handling is checked thoroughly by simulating partial and complete fail-over by operating on error-causing test vectors. Each of the individual parameters is tested individually against a reference data set. Proper error recovery, notification and logging are checked against references to validate program design.

5 Transactions
Systems that employ transactions, local or distributed, may be validated to ensure that the ACID properties (Atomicity, Consistency, Isolation, Durability) hold. Transactions are checked thoroughly for partial/complete commits and rollbacks encompassing databases and other XA-compliant transaction processors.

Advantages of White Box Testing
- Forces the test developer to reason carefully about implementation
- Approximates the partitioning done by execution equivalence
- Reveals errors in "hidden" code
- Beneficent side-effects

Disadvantages of White Box Testing
- Expensive
- Cases omitted in the code could be missed out
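In languages without Jcontract-style tooling, the three kinds of contract clauses listed above can be approximated with plain assertions. A hypothetical `withdraw` routine (not from the handout) as a sketch:

```python
def withdraw(balance, amount):
    # Precondition: conditions the client must meet before the call.
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: amount must not exceed balance"

    new_balance = balance - amount

    # Assertion the method must satisfy at a specific point of execution.
    assert new_balance == balance - amount

    # Postcondition: condition the method must meet after it executes.
    assert new_balance >= 0, "postcondition: balance may not go negative"
    return new_balance
```

A violated clause raises immediately at the faulty call site, which is exactly the "make problems visible" effect runtime contract checkers aim for.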

Black Box (Vs) White Box
An easy way to start up a debate in a software testing forum is to ask the difference between black box and white box testing. These terms are commonly used, yet everyone seems to have a different idea of what they mean.

Black box testing begins with a metaphor. Imagine you're testing an electronics system. It's housed in a black box with lights, switches, and dials on the outside, and you can't see beyond its surface. You must test it without opening it up. You have to see if it works just by flipping switches (inputs) and seeing what happens to the lights and dials (outputs). This is black box testing. Black box software testing is doing the same thing, but with software. The actual meaning of the metaphor, however, depends on how you define the boundary of the box and what kind of access the "blackness" is blocking. An opposite test approach would be to open up the electronics system, see how the circuits are wired, apply probes internally and maybe even disassemble parts of it. By analogy, this is called white box testing.

To help understand the different ways that software testing can be divided between black box and white box techniques, consider the Five-Fold Testing System. It lays out five dimensions that can be used for examining testing:
- People (who does the testing)
- Coverage (what gets tested)
- Risks (why you are testing)
- Activities (how you are testing)
- Evaluation (how you know you've found a bug)
Let's use this system to understand and clarify the characteristics of black box and white box testing.

People: Who does the testing? Some people know how software works (developers) and others just use it (users). Developer testing is called "white box" testing. The distinction here is based on what the person knows or can understand. Accordingly, any testing by users or other non-developers is sometimes called "black box" testing.

Coverage: What is tested? If we draw the box around the system as a whole, "black box" testing becomes another name for system testing. And testing the units inside the box becomes white box testing. This is one way to think about coverage. Another is to contrast testing that aims to cover all the requirements with testing that aims to cover all the code. These are the two most commonly used coverage criteria. Both are supported by extensive literature and commercial tools. Requirements-based testing could be called "black box" because it makes sure that all the customer requirements have been verified. Code-based testing is often called "white box" because it makes sure that all the code (the statements, paths, or decisions) is exercised.

Risks: Why are you testing? Sometimes testing is targeted at particular risks. Boundary testing and other attack-based techniques are targeted at common coding errors. Effective security testing also requires a detailed understanding of the code and the system architecture. Thus, these techniques might be classified as "white box".

Another set of risks concerns whether the software will actually provide value to users. Usability testing focuses on this risk.

Activities: How do you test? A common distinction is made between behavioral test design, which defines tests based on functional requirements, and structural test design, which defines tests based on the code itself. These are two design approaches. Since behavioral testing is based on external functional definition, it is often called "black box," while structural testing, based on the code internals, is called "white box." Indeed, this is probably the most commonly cited definition for black box and white box testing. Another activity-based distinction contrasts dynamic test execution with formal code inspection. In this case, the metaphor maps test execution (dynamic testing) with black box testing, and maps code inspection (static testing) with white box testing. We could also focus on the tools used. Some tool vendors refer to code-coverage tools as white box tools, and to tools that facilitate applying and capturing inputs (most notably GUI capture replay tools) as black box tools. Testing is then categorized based on the types of tools used.

Evaluation: How do you know if you've found a bug? There are certain kinds of software faults that don't always lead to obvious failures. They may be masked by fault tolerance or simply luck. Memory leaks and wild pointers are examples. Certain test techniques seek to make these kinds of problems more visible. Related techniques capture code history and stack information when faults occur, helping with diagnosis. Assertions are another technique for helping to make problems more visible. All of these techniques could be considered white box test techniques, since they use code instrumentation to make the internal workings of the software more visible. These contrast with black box techniques that simply look at the official outputs of a program.

To conclude: black box testing is testing against the specification, and will discover faults of omission, indicating that part of the specification has not been fulfilled. White box testing is testing against the implementation, and will discover faults of commission, indicating that part of the implementation is faulty. In order to fully test a software product, both black and white box testing are required.

White box testing is much more expensive than black box testing. It requires the source code to be produced before the tests can be planned, and it is much more laborious in the determination of suitable input data and the determination of whether the software is or is not correct. The advice given is to start test planning with a black box test approach as soon as the specification is available. White box planning should commence as soon as all black box tests have been successfully passed, with the production of flowgraphs and determination of paths. The paths should then be checked against the black box test plan and any additional required test runs determined and applied. A failure of a white box test may result in a change which requires all black box testing to be repeated and the redetermination of the white box paths. The consequences of test failure at this stage may be very expensive.

White box testing is concerned only with testing the software product; it cannot guarantee that the complete specification has been implemented. Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested.

Finally, apart from the above described analytical methods of both glass and black box testing, there are further constructive means to guarantee high quality software end products.

Among the most important constructive means are the usage of object-oriented programming tools, the integration of CASE tools, rapid prototyping, and, last but not least, the involvement of users in both software development and testing procedures.

SUMMARY
Black box testing can sometimes describe user-based testing (people); system or requirements-based testing (coverage); usability testing (risk); or behavioral testing or capture replay automation (activities). Black-box test design treats the system as a literal "black-box", so it doesn't explicitly use knowledge of the internal structure. It is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box.

White box testing, on the other hand, can sometimes describe developer-based testing (people); unit or code-coverage testing (coverage); boundary or security testing (risks); inspection or code-coverage automation (activities); or testing based on probes, assertions, and logs (evaluation). White-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. It is used to detect errors by means of execution-oriented test cases. Synonyms for white-box include: structural, glass-box and clear-box.

Test your Understanding
1. Equivalence partitioning is a black-box testing method that
a) Looks for equivalent data values in the program
b) Looks for classes of output
c) Focuses on output errors
d) Defines classes of input
2. At a minimum, white-box test case design requires that you have
a) Source code
b) An operational program
c) A detailed procedural design
d) The program architecture
Answers: 1) d  2) c

Chapter 3: Other Testing Types

Learning Objective
After completing this chapter, you will be able to explain the methods of testing.

What is GUI Testing?
GUI is the abbreviation for Graphic User Interface. It is absolutely essential that any application be user-friendly. The end user should be comfortable while using all the components on screen, and the components should also perform their functionality with utmost clarity. Hence it becomes very essential to test the GUI components of any application. GUI testing can refer to just ensuring that the look-and-feel of the application is acceptable to the user, or it can refer to testing the functionality of each and every component involved. The following is a set of guidelines to ensure effective GUI testing, and it can be used even as a checklist while testing a product / application.

Section 1 - Windows Compliance Testing

Application
- Start the application by double clicking on its icon. The Loading message should show the application name, version number, and a bigger pictorial representation of the icon. This icon should correspond to the original icon under Program Manager. No login is necessary.
- The main window of the application should have the same caption as the caption of the icon in Program Manager. The window caption for every application should have the name of the application and the window name.
- Closing the application should result in an "Are you Sure?" message box.
- Attempt to start the application twice; this should not be allowed, and you should be returned to the main window. Also try to start the application twice as it is loading.
- On each window, if the application is busy, then the hour glass should be displayed. If there is no hour glass, then some "enquiry in progress" message should be displayed.
- All screens should have a Help button, and the F1 key should work the same.
- If the window has a Minimize button, click it. The window should return to an icon on the bottom of the screen. This icon should correspond to the original icon under Program Manager. Double click the icon to return the window to its original size.
- If the screen has a Control menu, then use all ungrayed options.
- Check all text on the window for spelling, tense and grammar, especially on the top of the screen. Check that the title of the window makes sense.
- The text in the Micro Help line should change as focus moves. All messages, especially the error messages, should be checked for spelling, English and clarity.
- Use TAB to move focus around the window, and SHIFT+TAB to move focus backwards. Tab order should be left to right, and up to down within a group box on the screen. All controls should get focus, indicated by a dotted box or cursor. Tabbing to an entry field with text in it should highlight the entire text in the field.
- If a field is disabled (grayed) then it should not get focus. It should not be possible to select it with either the mouse or by using TAB. Try this for every grayed control.

- Never updateable fields should be displayed with black text on a gray background with a black label. All others are gray. In a field that may or may not be updateable, the label text and contents change from black to gray depending on the current status.

Text Boxes
- Move the mouse cursor over all enterable text boxes. The cursor should change from an arrow to an insert bar. If it doesn't, then the text in the box should be gray or non-updateable. Refer to the previous page.
- Enter text into the box. Try to overflow the text by typing too many characters; this should be stopped. Check the field width with capital Ws.
- Enter invalid characters: letters in amount fields, and strange characters like + - * etc. in all fields.
- SHIFT and Arrow should select characters. Selection should also be possible with the mouse. Double click should select all text in the box.

Option (Radio) Buttons
- Left and Right arrows should move the 'ON' selection. So should Up and Down. Selection should also be possible with the mouse, by clicking.

Check Boxes
- Clicking with the mouse on the box, or on the text, should SET/UNSET the box. SPACE should do the same.

Command Buttons
- If a command button leads to another screen, and if the user can enter or change details on the other screen, then the text on the button should be followed by three dots.
- All buttons except for OK and Cancel should have a letter access to them. This is indicated by a letter underlined in the button text. Pressing ALT+Letter should activate the button. Make sure there is no duplication. All tab buttons should have a distinct letter.
- Click each button once with the mouse. This should activate it.
- Tab to each button and press SPACE. This should activate it.
- Tab to each button and press RETURN. This should activate it.
The above are VERY IMPORTANT, and should be done for EVERY command button.
- One button on the screen should be the default (indicated by a thick black border). Pressing RETURN in any non-command-button control should activate it.
- If there is a Cancel button on the screen, then pressing <Esc> should activate it.
- If pressing the command button results in uncorrectable data, e.g. closing an action step, there should be a message phrased positively with Yes/No answers, where Yes results in the completion of the action.

Drop Down List Boxes
- Pressing the arrow should give the list of options. This list may be scrollable. You should not be able to type text in the box.
- Pressing a letter should bring you to the first item in the list that starts with that letter.
- Pressing 'Ctrl-F4' should open/drop down the list box.
- A drop down with an item selected should display the list with the selected item on the top.
- Items should be in alphabetical order, with the exception of blank/none, which is at the top or the bottom of the list box. Make sure only one space appears, and the list shouldn't have a blank line at the bottom.
- List boxes are always white background with black text, whether they are disabled or not. All text should be left justified, followed by a colon tight to it.
- Spacing should be compatible with the existing Windows spacing (Word etc.). In general, everything can be done using both the mouse and the keyboard; double-clicking is not essential.

Combo Boxes
- Should allow text to be entered. Clicking the arrow should allow the user to choose from the list.

List Boxes
- Should allow a single selection to be chosen, by clicking with the mouse, or using the Up and Down arrow keys.
- Pressing a letter should take you to the first item in the list starting with that letter.
- If there is a 'View' or 'Open' button beside the list box, then double clicking on a line in the list box should act in the same way as selecting an item in the list box, then clicking the command button.
- Force the scroll bar to appear, and make sure all the data can be seen in the box.

Section 2 - Screen Validation Checklist

Aesthetic Conditions:
- Is the general screen background of the correct color?
- Are the field prompts of the correct color?
- Are the field backgrounds of the correct color?
- In read-only mode, are the field prompts of the correct color?
- In read-only mode, are the field backgrounds of the correct color?
- Are all the screen prompts specified in the correct screen font?
- Is the text in all fields specified in the correct screen font?
- Are all the field prompts aligned perfectly on the screen?
- Are all the field edit boxes aligned perfectly on the screen?
- Are all group boxes aligned correctly on the screen?
- Should the screen be resizable?
- Should the screen be allowed to minimize?
- Are all the field prompts spelt correctly?
- Are all character or alphanumeric fields left justified? This is the default unless otherwise specified.
- Are all numeric fields right justified? This is the default unless otherwise specified.
- Is all the micro-help text spelt correctly on this screen?
- Is all the error message text spelt correctly on this screen?
- Is all user input captured in UPPER case or lowercase consistently?
- Where the database requires a value (other than null), this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.
- Assure that all windows have a consistent look and feel.
- Assure that all dialog boxes have a consistent look and feel.

Validation Conditions:
- Does a failure of validation on every field cause a sensible user error message?
- Is the user required to fix entries which have failed validation tests?
- Have any fields got multiple validation rules, and if so are all rules being applied?
- If the user enters an invalid value and clicks on the OK button (i.e. does not TAB off the field), is the invalid entry identified and highlighted correctly with an error message?
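Several of the validation conditions above (mandatory fields, character limits, numeric ranges) can be driven from a field specification so the same checks apply to every screen. A minimal sketch; the `FieldSpec` structure and `validate` helper are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class FieldSpec:
    name: str
    mandatory: bool = False
    max_len: int = 0        # 0 means no character limit specified
    min_val: float = None   # None means no lower bound
    max_val: float = None   # None means no upper bound

def validate(spec, value):
    """Return a list of error messages; an empty list means the value passes."""
    errors = []
    empty = value is None or str(value).strip() == ""
    if spec.mandatory and empty:
        errors.append(f"{spec.name}: mandatory field requires user input")
    if not empty:
        if spec.max_len and len(str(value)) > spec.max_len:
            errors.append(f"{spec.name}: exceeds character limit of {spec.max_len}")
        if spec.min_val is not None and float(value) < spec.min_val:
            errors.append(f"{spec.name}: below minimum value {spec.min_val}")
        if spec.max_val is not None and float(value) > spec.max_val:
            errors.append(f"{spec.name}: above maximum value {spec.max_val}")
    return errors
```

Each returned message corresponds to one "sensible user error message" the checklist asks for, and the list form supports fields with multiple validation rules.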

- Is validation consistently applied at screen level unless specifically required at field level?
- For all numeric fields, check whether negative numbers can and should be able to be entered.
- For all numeric fields, check the minimum and maximum values and also some mid-range values allowable.
- For all character/alphanumeric fields, check the field to ensure that there is a character limit specified and that this limit is exactly correct for the specified database size.
- Do all mandatory fields require user input?
- If any of the database columns don't allow null values, then the corresponding screen fields must be mandatory. (If any field which initially was mandatory has become optional, then check whether null values are allowed in this field.)

Navigation Conditions:
- Can the screen be accessed correctly from the menu?
- Can the screen be accessed correctly from the toolbar?
- Can the screen be accessed correctly by double clicking on a list control on the previous screen?
- Can all screens accessible via buttons on this screen be accessed correctly?
- Can all screens accessible by double clicking on a list control be accessed correctly?
- Is the screen modal? (i.e. is the user prevented from accessing other functions when this screen is active?) And is this correct?
- Can a number of instances of this screen be opened at the same time, and is this correct?

Usability Conditions:
- Are all the dropdowns on this screen sorted correctly? Alphabetic sorting is the default unless otherwise specified.
- Is all date entry required in the correct format?
- Have all pushbuttons on the screen been given appropriate shortcut keys?
- Do the shortcut keys work correctly?
- Have the menu options that apply to your screen got fast keys associated, and should they have?
- Does the Tab order specified on the screen go in sequence from top left to bottom right? This is the default unless otherwise specified.
- Are all read-only fields avoided in the TAB sequence?
- Are all disabled fields avoided in the TAB sequence?
- Can the cursor be placed in the micro-help text box by clicking on the text box with the mouse?
- Can the cursor be placed in read-only fields by clicking in the field with the mouse?
- Is the cursor positioned in the first input field or control when the screen is opened?
- Is there a default button specified on the screen?
- Does the default button work correctly?
- When an error message occurs, does the focus return to the field in error when the user cancels it?

- When the user Alt+Tab's to another application, does this have any impact on the screen upon return to the application?
- Do all the field edit boxes indicate the number of characters they will hold by their length? e.g. a 30 character field should be a lot longer.

Data Integrity Conditions:
- Is the data saved when the window is closed by double clicking on the close box?
- Check the maximum field lengths to ensure that there are no truncated characters.
- Where the database requires a value (other than null), this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.
- Check maximum and minimum field values for numeric fields.
- If numeric fields accept negative values, can these be stored correctly on the database, and does it make sense for the field to accept negative numbers?
- If a set of radio buttons represents a fixed set of values, such as A, B and C, then what happens if a blank value is retrieved from the database? (In some situations rows can be created on the database by other functions which are not screen based, and thus the required initial values can be incorrect.)
- If a particular set of data is saved to the database, check that each value gets saved fully to the database, i.e. beware of truncation (of strings) and rounding of numeric values.

Modes (Editable / Read-only) Conditions:
- Are the screen and field colors adjusted correctly for read-only mode?
- Should a read-only mode be provided for this screen?
- Are all fields and controls disabled in read-only mode?
- Can the screen be accessed from the previous screen/menu/toolbar in read-only mode?
- Can all screens available from this screen be accessed in read-only mode?
- Check that no validation is performed in read-only mode.

General Conditions:
- Assure the existence of the "Help" menu.
- Assure that the proper commands and options are in each menu.
- Assure that all buttons on all tool bars have a corresponding key command.
- Assure that each menu command has an alternative (hot-key) key sequence which will invoke it where appropriate.
- In drop down list boxes, ensure that the names are not abbreviations / cut short.
- In drop down list boxes, assure that the list and each entry in the list can be accessed via appropriate key / hot key combinations.
- Ensure that duplicate hot keys do not exist on each screen.
- Ensure the proper usage of the escape key (which is to undo any changes that have been made) and that it generates a caution message "Changes will be lost - Continue yes/no".
- Assure that the cancel button functions the same as the escape key.

- Assure that the Cancel button operates as a Close button when changes have been made that cannot be undone.
- Assure that only command buttons which are used by a particular window, or in a particular dialog box, are present; i.e. make sure they don't work on the screen behind the current screen.
- When a command button is used sometimes and not at other times, assure that it is grayed out when it should not be used.
- Assure that OK and Cancel buttons are grouped separately from other command buttons.
- Assure that command button names are not abbreviations.
- Assure that all field labels/names are not technical labels, but rather are names meaningful to system users.
- Assure that command buttons are all of similar size and shape, and the same font & font size.
- Assure that each command button can be accessed via a hot key combination.
- Assure that command buttons in the same window/dialog box do not have duplicate hot keys.
- Assure that each window/dialog box has a clearly marked default value (command button, or other object) which is invoked when the Enter key is pressed - and NOT the Cancel or Close button.
- Assure that focus is set to an object/button which makes sense according to the function of the window/dialog box.
- Assure that all option button (and radio button) names are not abbreviations.
- Assure that option button names are not technical labels, but rather are names meaningful to system users.
- If hot keys are used to access option buttons, assure that duplicate hot keys do not exist in the same window/dialog box.
- Assure that option box names are not abbreviations.
- Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas ("Group Box").
- Assure that the Tab key sequence which traverses the screens does so in a logical way.
- Assure consistency of mouse actions across windows.
- Assure that the color red is not used to highlight active objects (many individuals are red-green color blind).
- Assure that the user will have control of the desktop with respect to general color and highlighting (the application should not dictate the desktop background characteristics).
- Assure that the screen/window does not have a cluttered appearance.
- Ctrl+F6 opens the next tab within a tabbed window.
- Shift+Ctrl+F6 opens the previous tab within a tabbed window.
- Tabbing will open the next tab within a tabbed window if on the last field of the current tab.
- Tabbing will go onto the 'Continue' button if on the last field of the last tab within a tabbed window.
- Tabbing will go onto the next editable field in the window.
- Banner style, size & display exactly the same as existing windows.
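Several of the checks above (unique hot keys, consistent access keys) can be automated against a window's control labels. The sketch below is illustrative only: it assumes the common Windows convention that the character following "&" in a label (e.g. "&Save") is the control's hot key, which may not match every toolkit.

```python
import re

def find_duplicate_hotkeys(labels):
    """Return hot-key letters claimed by more than one control.

    Assumes the Windows '&' mnemonic convention, where the character
    after '&' in a label (e.g. '&Save') is the control's hot key.
    """
    seen = {}
    for label in labels:
        match = re.search(r"&(\w)", label)
        if match:
            key = match.group(1).lower()
            seen.setdefault(key, []).append(label)
    # Keep only hot keys that appear on two or more controls
    return {key: labels for key, labels in seen.items() if len(labels) > 1}

# Hypothetical dialog: Save and Search both claim Alt+S
buttons = ["&Save", "&Search", "&Cancel", "&Help"]
print(find_duplicate_hotkeys(buttons))  # {'s': ['&Save', '&Search']}
```

A check like this can run over a dialog-resource dump as part of a UI review, flagging the duplicate-hot-key violations listed above before manual testing begins.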
Page 26 ©Copyright 2007, Cognizant Technology Solutions, All Rights Reserved C3: Protected

- If 8 or fewer options in a list box, display all options on open of the list box; there should be no need to scroll.
- Errors on Continue will cause the user to be returned to the tab (i.e. the tab is opened, highlighting the field with the error on it), and the focus should be on the field causing the error.
- Pressing Continue while on the first tab of a tabbed window (assuming all fields are filled correctly) will not open all the tabs.
- On open of a tab, focus will be on the first editable field.
- All fonts to be the same.
- Alt+F4 will close the tabbed window and return you to the main screen or previous screen (as appropriate), generating a "changes will be lost" message if necessary.
- Microhelp text for every enabled field & button.
- Ensure all fields are disabled in read-only mode.
- Progress messages on load of tabbed screens.
- Return operates Continue.
- If retrieve on load of a tabbed window fails, the window should not open.

Specific Field Tests

Date Field Checks
- Assure that leap years are validated correctly & do not cause errors/miscalculations.
- Assure that month codes 00 and 13 are validated correctly & do not cause errors/miscalculations.
- Assure that 00 and 13 are reported as errors.
- Assure that day values 00 and 32 are validated correctly & do not cause errors/miscalculations.
- Assure that Feb. 28, 29, 30 are validated correctly & do not cause errors/miscalculations.
- Assure that Feb. 30 is reported as an error.
- Assure that century change is validated correctly & does not cause errors/miscalculations.
- Assure that out of cycle dates are validated correctly & do not cause errors/miscalculations.

Numeric Fields
- Assure that lowest and highest values are handled correctly.
- Assure that invalid values are logged and reported.
- Assure that valid values are handled by the correct procedure.
- Assure that numeric fields with a blank in position 1 are processed or reported as an error.
- Assure that fields with a blank in the last position are processed or reported as an error.
- Assure that both + and - values are correctly processed.
- Assure that division by zero does not occur.
- Assure that upper and lower values in ranges are handled correctly.
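The date-field checks above translate directly into executable test cases. A minimal sketch in Python, using the standard library's `datetime` constructor as the validator (a real application would exercise its own date-field logic instead):

```python
from datetime import datetime

def is_valid_date(day, month, year):
    """Return True if day/month/year form a real calendar date."""
    try:
        datetime(year, month, day)
        return True
    except ValueError:
        return False

# The date-field checks from the list above, expressed as assertions
assert is_valid_date(29, 2, 2004)        # leap year: Feb. 29, 2004 is valid
assert not is_valid_date(29, 2, 2003)    # non-leap year: Feb. 29 rejected
assert not is_valid_date(30, 2, 2004)    # Feb. 30 reported as an error
assert not is_valid_date(1, 0, 2004)     # month code 00 rejected
assert not is_valid_date(1, 13, 2004)    # month code 13 rejected
assert not is_valid_date(0, 6, 2004)     # day value 00 rejected
assert not is_valid_date(32, 1, 2004)    # day value 32 rejected
assert is_valid_date(1, 1, 2000)         # century change handled correctly
```

Each bullet in the checklist becomes one assertion, so a failure points directly at the violated rule.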

- Include value zero in all calculations.
- Include at least one in-range value.
- Include maximum and minimum range values.
- Include out of range values above the maximum and below the minimum.

Alpha Field Checks
- Use blank and non-blank data.
- Include lowest and highest values.
- Include valid characters.
- Include invalid characters & symbols.
- Include data items with first position blank.
- Include data items with last position blank.

Validation Testing - Standard Actions
Examples of Standard Actions - substitute your specific commands:
- Add, View, Change, Delete, Continue (i.e. continue saving changes or additions)
- Add, View, Change, Delete, Cancel (i.e. abandon changes or additions)
- Fill each field - valid data
- Fill each field - invalid data
- Different check box / radio box combinations
- Scroll lists / drop down list boxes
- Help
- Fill lists and scroll
- Tab
- Tab sequence
- Shift+Tab
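The numeric-field guidance above is classic boundary-value analysis. A small sketch showing how those inputs can be generated and run against a field validator; the validator `accepts` and its [1, 100] range are hypothetical stand-ins for the field under test:

```python
def make_boundary_values(minimum, maximum):
    """Generate the numeric-field test inputs suggested above: zero,
    both range limits, one in-range value, and out-of-range values
    just below the minimum and just above the maximum."""
    return {
        "zero": 0,
        "minimum": minimum,
        "maximum": maximum,
        "in_range": (minimum + maximum) // 2,
        "below_min": minimum - 1,
        "above_max": maximum + 1,
    }

def accepts(value, minimum=1, maximum=100):
    """Hypothetical field validator: accepts values within [minimum, maximum]."""
    return minimum <= value <= maximum

cases = make_boundary_values(1, 100)
# Values on or inside the limits must be accepted...
assert accepts(cases["minimum"]) and accepts(cases["maximum"]) and accepts(cases["in_range"])
# ...and values just outside the limits must be rejected
assert not accepts(cases["below_min"]) and not accepts(cases["above_max"])
```

Generating the cases from the range, rather than hard-coding them, keeps the test honest when the field's limits change.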

Shortcut Keys / Hot Keys
Note: The following keys are used in some Windows applications, and are included as a guide.

* These shortcuts are suggested for text formatting applications, in the context for which they make sense. Applications may use other modifiers for these operations.

Regression Testing

What is Regression Testing?
Regression testing is the process of testing changes to computer programs to make sure that the older programming still works with the new changes. Regression testing is a normal part of the program development process. Test department coders develop code test scenarios and exercises that will test new units of code after they have been written. Before a new version of a software product is released, the old test cases are run against the new version to make sure that all the old capabilities still work. The reason they might not work is that changing or adding new code to a program can easily introduce errors into code that is not intended to be changed.

Also referred to as verification testing, regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code to a program that may have inadvertently introduced errors. It is a quality control measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity. In short, it is the selective retesting of a software system that has been modified to ensure that any bugs have been fixed, that no other previously working functions have failed as a result of the reparations, and that newly added features have not created problems with previous versions of the software.

Test Execution
Test Execution is the heart of the testing process. Each time your application changes, you will want to execute the relevant parts of your test plan in order to locate defects and assess quality.

Create Test Cycles
During this stage you decide the subset of tests from your test database you want to execute. Usually you do not run all the tests at once. At different stages of the quality assurance process, you need to execute different tests in order to address specific goals. A related group of tests is called a test cycle, and can include both manual and automated tests.

Example: You can create a cycle containing basic tests that run on each build of the application throughout development. You can run the cycle each time a new build is ready, to determine the application's stability before beginning more rigorous testing.

Example: You can create another set of tests for a particular module in your application. This test cycle includes tests that check that module in depth.

To decide which test cycles to build, refer to the testing goals you defined at the beginning of the process. Also consider issues such as the current state of the application and whether new functions have been added or modified.
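A minimal illustration of the regression idea: after a change, the old test cases are re-run unchanged against the new version. The function and its behaviour here are invented purely for the example.

```python
# Hypothetical function under test. Old behaviour:
#   format_name("smith", "john") -> "John Smith"
# A change added an optional middle initial; the old test cases are
# re-run against the new version to confirm nothing regressed.

def format_name(last, first, middle=None):
    if middle:
        return f"{first.capitalize()} {middle.upper()}. {last.capitalize()}"
    return f"{first.capitalize()} {last.capitalize()}"

# Existing (old) test cases - the regression suite
assert format_name("smith", "john") == "John Smith"
assert format_name("doe", "jane") == "Jane Doe"

# New test case covering the added feature
assert format_name("smith", "john", "q") == "John Q. Smith"
```

The old assertions are kept verbatim; if the new parameter had accidentally changed the two-argument behaviour, the regression suite would fail immediately.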

Following are examples of some general categories of test cycles to consider:
- Sanity cycle: checks the entire system at a basic level (breadth, rather than depth) to see that it is functional and stable. This cycle should include basic-level tests containing mostly positive checks.
- Normal cycle: tests the system a little more in depth than the sanity cycle. This cycle can group medium-level tests, containing both positive and negative checks.
- Advanced cycle: tests both breadth and depth. This cycle can be run when more time is available for testing. The tests in the cycle cover the entire application (breadth), and also test advanced options in the application (depth).
- Regression cycle: tests maintenance builds. The goal of this type of cycle is to verify that a change to one part of the software did not break the rest of the application. A regression cycle includes sanity-level tests for testing the entire software, as well as in-depth tests for the specific area of the application that was modified.

Run Test Cycles (Automated & Manual Tests)
Once you have created cycles that cover your testing objectives, you begin executing the tests in the cycle. You perform manual tests using the test steps; Testing Tools executes automated tests for you. A test cycle is complete only when all tests - automated and manual - have been run.

With Manual Test Execution you follow the instructions in the test steps of each test. You use the application, enter input, compare the application output with the expected output, and log the results. For each test step you assign either pass or fail status.

During Automated Test Execution you create a batch of tests and launch the entire batch at once. Testing Tools runs the tests one at a time. It then imports results, providing outcome summaries for each test.

Analyze Test Results
After every test run, you analyze and validate the test results, identify all the failed steps in the tests, and determine whether a bug has been detected or whether the expected result needs to be updated.

Change Request
Initiating a Change Request
A user or developer wants to suggest a modification that would improve an existing application, notices a problem with an application, or wants to recommend an enhancement. Any major or minor request is considered a problem with an application and will be entered as a change request.

Type of Change Request
- Bug: the application works incorrectly or provides incorrect information (for example, a letter is allowed to be entered in a number field)
- Change: a modification of the existing application (for example, sorting the files alphabetically by the second field rather than numerically by the first field makes them easier to find)
- Enhancement:

new functionality or item added to the application (for example, a new report, a new field, or a new button)

Priority for the request
- Low: the application works, but this would make the function easier or more user friendly.
- High: the application works, but this is necessary to perform a job. This also applies to any Section 508 infraction.
- Critical: the application does not work; job functions are impaired and there is no work around.

Report Bugs
Once you execute the manual and automated tests in a cycle, you report the bugs (or defects) that you detected. The bugs are stored in a database so that you can manage them and analyze the status of your application.

Bug Tracking
Locating and repairing software bugs is an essential part of software development. Bugs can be detected and reported by engineers, testers, and end-users in all phases of the testing process. Information about bugs must be detailed and organized in order to schedule bug fixes and determine software release dates. Bug Tracking involves two main stages: reporting and tracking. When you report a bug, you record all the information necessary to reproduce and fix it. You also make sure that the QA and development personnel involved in fixing the bug are notified.

Track and Analyze Bugs
The lifecycle of a bug begins when it is reported and ends when it is fixed, verified, and closed. First you report New bugs to the database, and provide all necessary information to reproduce, fix, and follow up the bug. The Quality Assurance manager or Project manager periodically reviews all New bugs and decides which should be fixed. These bugs are given the status Open and are assigned to a member of the development team. Software developers fix the Open bugs and assign them the status Fixed. QA personnel test a new build of the application. If a bug does not reoccur, it is Closed. If a bug is detected again, it is reopened.

Communication is an essential part of bug tracking; all members of the development and quality assurance team must be well informed in order to insure that bug information is up to date and that the most important problems are addressed. The number of open or fixed bugs is a good indicator of the quality status of your application. You can use data analysis tools such as reports and graphs to interpret bug data.

Traceability Matrix
A traceability matrix is created by associating requirements with the products that satisfy them. Tests are associated with the requirements on which they are based and the product tested to meet the requirement. Below is a simple traceability matrix structure. There can be more things included in a traceability matrix than shown below.
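The bug lifecycle above (New, Open, Fixed, Closed, with reopening on re-detection) is a small state machine, which can be encoded directly. The transition table below follows the statuses named in the text; the status names themselves vary between tracking tools.

```python
# Allowed transitions in the bug lifecycle described above:
# New -> Open -> Fixed -> Closed, with Fixed -> Reopened when the
# bug is detected again, and Reopened -> Fixed after another fix.
TRANSITIONS = {
    "New": {"Open"},
    "Open": {"Fixed"},
    "Fixed": {"Closed", "Reopened"},
    "Reopened": {"Fixed"},
    "Closed": set(),   # terminal state
}

def advance(status, new_status):
    """Move a bug to a new status, rejecting illegal transitions."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"cannot move a {status} bug to {new_status}")
    return new_status

# A bug that is fixed, reopened on re-detection, fixed again, and closed
status = "New"
for step in ["Open", "Fixed", "Reopened", "Fixed", "Closed"]:
    status = advance(status, step)
print(status)  # Closed
```

Enforcing the transition table in the tracking database is what keeps status reports (open vs. fixed counts) trustworthy as a quality indicator.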

Traceability requires unique identifiers for each requirement and product. Numbers for products are established in a configuration management (CM) plan. Traceability ensures completeness: that all lower level requirements derive from higher level requirements, and that all higher level requirements are allocated to lower level requirements. Traceability is also used in managing change and provides the basis for test planning.

SAMPLE TRACEABILITY MATRIX
A traceability matrix is a report from the requirements database or repository. The examples below show traceability between user and system requirements. User requirement identifiers begin with "U" and system requirements with "S." Tracing S12 to its source makes it clear this requirement is erroneous: it must be eliminated, rewritten, or the traceability corrected.
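The completeness checks that a traceability matrix supports can be automated once the links are stored as data. The sketch below uses illustrative identifiers (including an S12 with no parent, mirroring the erroneous requirement discussed above).

```python
# A minimal traceability matrix: user requirements ("U") traced to the
# system requirements ("S") that satisfy them, and system requirements
# traced to tests. All identifiers here are illustrative.
user_to_system = {
    "U1": ["S1", "S2"],
    "U2": ["S3"],
}
system_to_tests = {
    "S1": ["T1"],
    "S2": ["T2", "T3"],
    "S3": [],
    "S12": [],   # traces to no user requirement - erroneous, like S12 above
}

# Check 1: every system requirement derives from some user requirement
derived = {s for systems in user_to_system.values() for s in systems}
orphans = [s for s in system_to_tests if s not in derived]

# Check 2: every system requirement has at least one test planned
untested = [s for s, tests in system_to_tests.items() if not tests]

print(orphans)    # ['S12'] - must be eliminated, rewritten, or re-traced
print(untested)   # ['S3', 'S12']
```

Running these checks from the requirements repository turns traceability from a static report into an automated gate on completeness.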

In addition to traceability matrices, other reports are necessary to manage requirements. What goes into each report depends on the information needs of those receiving the report(s). Determine their information needs and document the information that will be associated with the requirements when you set up your requirements database or repository.

Phases of Testing
The primary objective of the testing effort is to determine conformance to the requirements specified in the contracted documents. Testing proceeds in phases, beginning with unit testing; in integration testing, the integration of the code with the internal code is the important objective. Techniques can be structural or functional, and can be used in any stage that tests the system as a whole (System testing, Acceptance Testing, Installation, etc.), where the goal is to evaluate the system as a whole, not its parts.

Types and Phases of Testing



Integration Testing
One of the most significant aspects of a software development project is the integration strategy. Integration may be performed all at once, top-down, bottom-up, critical piece first, or by first integrating functional subsystems and then integrating the subsystems in separate phases using any of the basic strategies. In general, the larger the project, the more important the integration strategy.

Very small systems are often assembled and tested in one phase. For most real systems, this is impractical for two major reasons. First, the system would fail in so many places at once that the debugging and retesting effort would be impractical. Second, satisfying any white box testing criterion would be very difficult, because of the vast amount of detail separating the input data from the individual code modules. In fact, most integration testing has been traditionally limited to "black box" techniques.

Large systems may require many integration phases, beginning with assembling modules into low-level subsystems, then assembling subsystems into larger subsystems, and finally assembling the highest level subsystems into the complete system. To be most effective, an integration testing technique should fit well with the overall integration strategy. In a multi-phase integration, testing at each phase helps detect errors early and keep the system under control. Performing only cursory testing at early integration phases and then applying a more rigorous criterion for the final stage is really just a variant of the high-risk "big bang" approach. However, performing rigorous testing of the entire software involved in each integration phase involves a lot of wasteful duplication of effort across phases. The key is to leverage the overall integration structure to allow rigorous testing at each phase while minimizing duplication of effort.

It is important to understand the relationship between module testing and integration testing. In one view, modules are rigorously tested in isolation using stubs and drivers before any integration is attempted. Then, integration testing concentrates entirely on module interactions, assuming that the details within each module are accurate. At the other extreme, module and integration testing can be combined, verifying the details of each module's implementation in an integration context. Many projects compromise, combining module testing with the lowest level of subsystem integration testing, and then performing pure integration testing at higher levels. Each of these views of integration testing may be appropriate for any given project, so an integration testing method should be flexible enough to accommodate them all.

Generalization of module testing criteria
Module testing criteria can often be generalized in several possible ways to support integration testing. As discussed in the previous subsection, the most obvious generalization is to satisfy the module testing criterion in an integration context, in effect using the entire program as a test driver environment for each module. However, this trivial kind of generalization does not take advantage of the differences between module and integration testing. Applying it to each phase of a multi-phase integration strategy, for example, leads to an excessive amount of redundant testing.

More useful generalizations adapt the module testing criterion to focus on interactions between modules rather than attempting to test all of the details of each module's implementation in an integration context. The statement coverage module testing criterion, in which each statement is required to be exercised during module testing, can be generalized to require each module call statement to be exercised during integration testing.

Module design complexity
Rather than testing all decision outcomes within a module independently, structured testing at the integration level focuses on the decision outcomes that are involved with module calls. Since structured testing at the module level requires that all the decision logic in a module's control flow graph be tested independently, the appropriate generalization to the integration level requires that just the decision logic involved with calls to other modules be tested independently. Although the specifics of the generalization of structured testing are more detailed, the approach is the same. The design reduction technique helps identify those decision outcomes, so that it is possible to exercise them independently during integration testing. The idea behind design reduction is to start with a module control flow graph, remove all control structures that are not involved with module calls, and then use the resultant "reduced" flow graph to drive integration testing.

The figure below shows a systematic set of rules for performing design reduction. Although not strictly a reduction rule, the call rule states that function call ("black dot") nodes cannot be reduced. The remaining rules work together to eliminate the parts of the flow graph that are not involved with module calls. The sequential rule eliminates sequences of non-call ("white dot") nodes. Since application of this rule removes one node and one edge from the flow graph, it leaves the cyclomatic complexity unchanged; however, it does simplify the graph so that the other rules can be applied. The repetitive rule eliminates top-test loops that are not involved with module calls. The conditional rule eliminates conditional statements that do not contain calls in their bodies. The looping rule eliminates bottom-test loops that are not involved with module calls. It is important to preserve the module's connectivity when using the looping rule, since for poorly-structured code it may be hard to distinguish the "top" of the loop from the "bottom." For the rule to apply, there must be a path from the module entry to the top of the loop and a path from the bottom of the loop to the module exit. Since the repetitive, conditional, and looping rules each remove one edge from the flow graph, they each reduce cyclomatic complexity by one.

Rules 1 through 4 are intended to be applied iteratively until none of them can be applied, at which point the design reduction is complete. By this process, even very complex logic can be eliminated as long as it does not involve any module calls.
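The effect of design reduction on the testing effort can be illustrated numerically. For a module containing only binary decisions, cyclomatic complexity is the number of decisions plus one; after reduction, only the decisions involved with module calls remain, giving the (smaller) module design complexity. The decision records below are invented for illustration.

```python
# For a module with only binary decisions, cyclomatic complexity is
# v(G) = decisions + 1. Design reduction removes decisions that do not
# involve module calls, so module design complexity counts only the
# call-involved decisions. These decision records are illustrative.
decisions = [
    {"id": "if input_ok", "involves_call": True},
    {"id": "while retry", "involves_call": True},
    {"id": "if log_enabled", "involves_call": False},  # reducible: no calls in body
]

v_G = len(decisions) + 1                                        # full complexity: 4
iv_G = sum(1 for d in decisions if d["involves_call"]) + 1      # reduced: 3

print(v_G, iv_G)  # 4 3
```

Here one decision is eliminated by the conditional rule, so integration testing needs one fewer independent test than full module-level structured testing would require.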

Incremental integration
Hierarchical system design limits each stage of development to a manageable effort, and it is important to limit the corresponding stages of testing as well. Hierarchical design is most effective when the coupling among sibling components decreases as the component size increases, which simplifies the derivation of data sets that test interactions among components. The remainder of this section extends the integration testing techniques of structured testing to handle the general case of incremental integration, including support for hierarchical design. The key principle is to test just the interaction among components at each integration stage, avoiding redundant testing of previously integrated sub-components.

To extend statement coverage to support incremental integration, it is required that each statement be executed during the first phase (which may be anything from single modules to the entire program), and that at each integration phase all call statements that cross the boundaries of previously integrated components are tested. To form a completely flexible "statement testing" criterion, it is required that all module call statements from one component into a different component be exercised at each integration stage. Given hierarchical integration stages with good cohesive partitioning properties, this limits the testing effort to a small fraction of the effort to cover each statement of the system at each integration phase.

Structured testing can be extended to cover the fully general case of incremental integration in a similar manner. The key is to perform design reduction at each integration phase using just the module call nodes that cross component boundaries, yielding component-reduced graphs, and to exclude from consideration all modules that do not contain any cross-component calls.

Figure 7-7 illustrates the structured testing approach to incremental integration. Modules A and C have been previously integrated, as have modules B and D. Modules B and D are removed from consideration because they do not contain cross-component calls. The component module design complexity of module A is 1, and the component module design complexity of module C is 2. It would take three tests to integrate this system in a single phase. However, only two additional tests are required to complete the integration testing, since the design predicate decision to call module D from module B has been tested in a previous phase.

Acceptance Testing
In software engineering, acceptance testing is formal testing conducted to determine whether a system satisfies its acceptance criteria and thus whether the customer should accept the system. The main types of software testing are: Component, Interface, System, Acceptance, Release.

Acceptance Testing checks the system against the "Requirements". It is similar to systems testing in that the whole system is checked, but the important difference is the change in focus: Systems Testing checks that the system that was specified has been delivered; Acceptance Testing checks that the system delivers what was requested. The customer, and not the developer, should always do acceptance testing. The customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgment. The forms of the tests may follow those in system testing, but at all times they are informed by the business needs.

Factors influencing Acceptance Testing
User Acceptance Testing is a critical phase of any 'systems' project and requires significant participation by the 'End Users'. To be of real use, an Acceptance Test Plan should be developed in order to plan precisely, and in detail, the means by which 'Acceptance' will be achieved - the test procedures that lead to formal 'acceptance' of new or changed systems. The User Acceptance Test Plan will vary from system to system but, in general, the testing should be planned in order to provide a realistic and adequate exposure of the system to all reasonably expected events. The testing can be based upon the User Requirements Specification to which the system should conform. The final part of the UAT can also include a parallel run to prove the system against the current system.

As in any system though, problems will arise, and it is important to have determined what will be the expected and required responses from the various parties concerned, including Users,

Project Team, Vendors and possibly Consultants / Contractors.

In order to agree what such responses should be, the End Users and the Project Team need to develop and agree a range of 'Severity Levels'. These levels will range from (say) 1 to 6 and will represent the relative severity, in terms of business / commercial impact, of a problem with the system found during testing. Here is an example which has been used successfully; '1' is the most severe, and '6' has the least impact:
1. 'Show Stopper' - i.e. it is impossible to continue with the testing because of the severity of this error / bug.
2. Critical Problem - testing can continue but we cannot go into production (live) with this problem.
3. Major Problem - testing can continue but this feature will cause severe disruption to business processes in live operation.
4. Medium Problem - testing can continue and the system is likely to go live with only minimal departure from agreed business processes.
5. Minor Problem - both testing and live operations may progress. This problem should be corrected, but little or no changes to business processes are envisaged.
6. 'Cosmetic' Problem - e.g. colours, fonts, pitch size. However, if such features are key to the business requirements they will warrant a higher severity level.

The users of the system, in consultation with the executive sponsor of the project, must then agree upon the responsibilities and required actions for each category of problem. For example, you may demand that any problems in severity level 1 receive priority response and that all testing will cease until such level 1 problems are resolved. N.B. even where the severity levels and the responses to each have been agreed by all parties, the allocation of a problem into its appropriate severity level can be subjective and open to question. To avoid the risk of lengthy and protracted exchanges over the categorization of problems, we strongly advise that a range of examples are agreed in advance to ensure that there are no fundamental areas of disagreement, or, if there are, that these will be known in advance and your organization is forewarned. In any event, any and all fixes from the software developers must be subjected to rigorous System Testing and, where appropriate, Regression Testing.

Finally, it is crucial to agree the Criteria for Acceptance. Because no system is entirely fault free, it must be agreed between End User and vendor the maximum number of acceptable 'outstanding' problems in any particular category. In some cases, users may agree to accept ('sign off') the system subject to a range of conditions. These conditions need to be analyzed as they may, perhaps unintentionally, seek additional functionality which could be classified as scope creep. Caution: prior consideration of this is advisable.

Configuration Testing & Installation Testing

Configuration testing: Testing to determine whether the program operates properly when the software or hardware is configured in a required manner.

Objectives
The typical objectives of configuration testing are to:
o Partially validate the application (i.e., to determine if it fulfills its configurability requirements)
o Cause failures concerning the configurability requirements that help identify defects that are not efficiently found during unit and integration testing:
  - Functional Variants
  - Internationalization (e.g., multiple languages, currencies, taxes and tariffs, time zones, etc.)
  - Personalization
o Report these failures to the development teams so that the associated defects can be fixed
o Determine the effect of adding or modifying hardware resources such as:
  - Memory
  - Disk and tape resources
  - Processors
  - Load balancers
o Determine an optimal system configuration

Examples
Typical examples include configuration testing of an application that must:
o Have multiple functional variants
o Support internationalization
o Support personalization

Preconditions
Configuration testing can typically begin when the following preconditions hold:
o The configurability requirements to be tested have been specified
o Multiple variants of the application exist
o The relevant software components have passed unit testing
o The relevant system components have passed system integration testing
o Software integration testing has started
o The independent test team is adequately staffed and trained in configuration testing
o The test environment is ready
However, configuration testing can begin prior to the distribution of the software components onto the hardware components.

The typical goals of configuration testing are to cause the application to fail to meet its configurability requirements so that the underlying defects can be identified, analyzed, fixed, and prevented in the future.
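A common way to exercise internationalization variants is to run the same functional check against each configuration. The sketch below is illustrative: the price formatter, the locale table, and the expected strings are all invented for the example.

```python
# Sketch of a configuration test that reuses one functional test case
# across internationalization variants. The formatter and the locale
# configurations here are illustrative assumptions.
CONFIGS = {
    "en_US": {"currency": "$", "decimal": "."},
    "de_DE": {"currency": "\u20ac", "decimal": ","},  # euro sign, comma decimal
}

def format_price(amount, config):
    """Format a price per the given locale configuration."""
    whole, frac = divmod(round(amount * 100), 100)
    return f"{config['currency']}{whole}{config['decimal']}{frac:02d}"

# The same functional checks run once per configuration
for locale, config in CONFIGS.items():
    formatted = format_price(19.99, config)
    assert config["currency"] in formatted            # correct currency symbol
    fraction = formatted.rsplit(config["decimal"], 1)[1]
    assert len(fraction) == 2                         # two decimal places

print(format_price(19.99, CONFIGS["en_US"]))  # $19.99
```

Parameterizing by configuration, rather than duplicating test cases per locale, is what makes the "reuse functional test cases as configuration test cases" guideline practical.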

Completion Criteria
Configuration testing is typically complete when the following postconditions hold:
o At least one configuration test suite exists for each configurability requirement
o The test suites for every scheduled configurability requirement execute successfully on the appropriate configuration

Tasks
Configurability testing typically involves the independent test team performing the following testing tasks:
o Test Planning
o Test Reuse
o Test Design
o Test Implementation
o Test Execution
o Test Reporting

Environments
Configuration testing is performed on the following environments using the following techniques:
o Test Environment: Test Harness

Work Products
Configuration testing typically results in the production of all or part of the following work products from the test work product set:
o Documents: Project Test Plan, Master Test List, Test Procedures, Test Report, Test Summary Report
o Software and Data: Test Harness, Test Scripts, Test Suites, Test Cases, Test Data

Phases
Configuration testing typically consists of the following tasks being performed during the following phases:

Guidelines
The iterative and incremental development cycle implies that configuration testing is regularly performed in an iterative and incremental manner. Configuration testing must be automated if adequate regression testing is to occur. To the extent practical, reuse functional test cases as configuration test cases.

Installation testing: Testing to identify the ways in which the installation procedures lead to incorrect results.

Terminology in this testing area can be confusing. Terms such as beta test, site validation, user acceptance test, and installation verification have all been used to describe installation testing. To avoid confusion, and for the purposes of this document, installation testing is defined as any testing that takes place outside of the developer's controlled environment. Installation testing is any testing that takes place at a user's site with the actual hardware and software that will be part of the installed system configuration. The testing is accomplished through either actual or simulated use of the software being tested within the environment in which it is intended to function.

Guidance contained here is general in nature and is applicable to any installation testing. However, in some areas there are specific site validation requirements that need to be considered in the planning of installation testing. Quality System Regulations require installation and inspection procedures (including testing where appropriate) and documentation of inspection and testing to demonstrate proper installation. Likewise, to the extent practical, manufacturing equipment must meet specified requirements, and automated systems must be validated for their intended use. Test planners should check with Soft Solutions International to determine whether there are any additional regulatory requirements for installation testing.

Installation testing should follow a pre-defined plan with a formal summary of testing and a record of formal acceptance. There should be evidence that hardware and software are installed and configured as specified. Measures should ensure that all system components are exercised during the testing and that the versions of these components are those specified.

In addition to an evaluation of the system's ability to properly perform its intended functions, there should be an evaluation of the ability of the users of the system to understand and correctly interface with it. Operators should be able to perform the intended operations and respond in an appropriate and timely manner to all alarms, warnings, and error messages.

Records should be maintained during installation testing of both the system's capability to properly perform and the system's failures, if any, which are encountered. The revision of the system to compensate for faults detected during this installation testing should follow the same procedures and controls as any other software change.

Some of the evaluations that have been performed earlier by the software developer at the developer's site should be repeated at the site of actual use. These may include tests for a high volume of data, heavy loads or stresses, security, fault testing (avoidance, detection, tolerance, and recovery), error messages, implementation of safety requirements, and serviceability. The developer may be able to furnish the user with some of the test data sets to be used for this purpose.

The developers of the software may or may not be involved in the installation testing. If the developers are involved, they may seamlessly carry over to the user's site the last portions of design-level systems testing. If the developers are not involved, it is all the more important that the user have persons knowledgeable in software engineering who understand the importance of such matters as careful test planning, the definition of expected test results, and the recording of all test outputs.

There should be retention of documented evidence of all testing procedures, test input data, and test results. The testing instructions should encourage use through the full range of operating conditions and should continue for a sufficient time to allow the system to encounter a wide spectrum of conditions and events in an effort to detect any latent faults which are not apparent during more normal activities.

Alpha Testing and Beta Testing
Alpha testing is the launch testing consisting of the development organization's initial internal dry runs of the application's acceptance tests in the production environment.

Objectives
The typical objectives of alpha testing are to:
Cause failures that only tend to occur in the production environment.
Report these failures to the development teams so that the associated defects can be fixed.
Provide input to the defect trend analysis effort.
Help determine the extent to which the application is ready for:
o Beta testing
o Acceptance testing
o Launch

Preconditions
Execution of alpha tests can typically begin when the following preconditions hold:
The application has passed all system tests.
The application has been ported to the production environment.
The production environment is ready.
The independent test team is adequately staffed.
The delivery phase has begun.

Completion Criteria
Alpha testing is typically complete when the following postconditions hold:
An initial version of the acceptance test suites exists.
The customer representative has approved these acceptance test suites.
The acceptance tests execute on the production environment.
Acceptance testing does not discover any:
o Severity one defects.
o Severity two defects that do not have adequate workarounds.

Tasks
Alpha testing typically involves the following teams performing the following testing tasks:
Independent Test Team:
o Test Planning: Determine alpha testing completion criteria. Update the alpha testing subsection of the Project Test Plan (PTP).
o Test Design: Select an adequate subset of the system test suites of test cases (both functional and quality) to be repeated on the production environment during alpha testing.
o Test Implementation: Fix any defects in the test suites found during evaluation.
o Test Execution: Execute the alpha test suites on the production environment.
o Test Reporting: Report failures that occurred during testing to the development teams so that the associated defects can be fixed.

Environments
Alpha testing is typically performed on the following environments with the following tools:
Production Environments
Tools: None

Phases
Alpha testing typically involves the following tasks being performed during the following phases:

Guidelines
To the extent practical, reuse the tests from system testing when performing alpha testing rather than producing new tests.

Beta Testing
Definition
Beta testing is the launch testing of the application in the production environment by a few select users prior to acceptance testing and the release of the application to its entire user community.

Objectives
The typical objectives of beta testing are to:
Cause failures that only tend to occur during actual usage by the user community rather than during formal testing.
Report these failures to the development teams so that the associated defects can be fixed.
Obtain additional user community feedback beyond that received during usability testing.
Provide input to the defect trend analysis effort.
Help determine the extent to which the system is ready for:
o Acceptance testing
o Launch

Preconditions
Beta test execution can typically begin when the following preconditions hold:
The application has passed all system tests.
The application has passed alpha testing.
The production environment is ready.
The delivery phase has begun.

The application has been ported to the production environment.
The selected group of users is ready.

Completion Criteria
Beta testing is typically complete when:
The time period scheduled for beta testing ends.
The users have reported any failures observed to the development organization.
These failures have been passed on to the development teams.

Tasks
Beta testing typically involves the following producers performing the following testing tasks:
Independent Test Team:
o Test Planning - Update beta testing subsection of Project Test Plan (PTP). Select beta test user group.
User Organizations:
o Test Execution - Use application under normal conditions of operation.
o Test Reporting - Report failures to customer organization.
Customer Organization:
o Test Reporting - Pass on reported failures to developer organization.

Environments
Beta testing is typically performed on the following environments (limited to a select group of users) using the following tools:
Production Environments:
Client Environment
Contact Center Environment
Content Management Environment
Data Center Environment
Tools:
Defect reporting tool

Phases
Beta testing typically involves the following tasks being performed during the following phases:

Guidelines
Limit the user test group to users who are willing to use a lower quality version of the application in exchange for obtaining it early and having input into its iteration. Beta testing often uses actual live data rather than data created for testing purposes. Beta testing is critical if formal usability testing was not performed during system testing.

Test your Understanding
1. Alpha testing is differentiated from beta testing by
a) the location where the tests are conducted
b) the types of tests conducted
c) the people doing the testing
d) the degree to which white-box techniques are used
2. The testing that ensures that no unwanted changes were introduced is
a) Unit Testing
b) System Testing
c) Acceptance Testing
d) Regression Testing
Answers: 1) a 2) d

Chapter 4: Levels of Testing

Learning Objective
After completing this chapter, you will be able to:
List different levels of testing

Unit Testing
Unit testing: Isn't that some annoying requirement that we're going to ignore? Many developers get very nervous when you mention unit tests. Usually this is a vision of a grand table with every single method listed, along with the expected results and pass/fail date. Such a table is important in some settings, but not relevant in most programming projects.

Certainly, any method that can break is a good candidate for having a unit test. Unit tests will most likely be defined at the method level, so the art is to define the unit test on the methods that cannot be checked by inspection. Usually this is the case when the method involves a cluster of objects. Unit tests that isolate clusters of objects for testing are doubly useful, because they test for failures, and they also identify those segments of code that are related. People who revisit the code will use the unit tests to discover which objects are related, or which objects form a cluster. Hence: unit tests isolate clusters of objects for future developers.

In a sense, a unit test is a little design document that says, "What will this bit of code do?" Or, in the language of object oriented programming, "What will these clusters of objects do?" The unit test will motivate the code that you write.

The crucial issue in constructing a unit test is scope. If the scope is too narrow, then the tests will be trivial and the objects might pass the tests, but there will be no design of their interactions. Certainly, interactions of objects are the crux of any object oriented design. Likewise, if the scope is too broad, then there is a high chance that not every component of the new code will get tested. The programmer is then reduced to testing-by-poking-around, which is not an effective test strategy.

Need for Unit Test
How do you know that a method doesn't need a unit test? First, can it be tested by inspection? If the code is simple enough that the developer can just look at it and verify its correctness, then it is simple enough to not require a unit test. The developer should know when this is the case.

Just because we don't test every method explicitly doesn't mean that methods can get away with not being tested. The danger of not implementing a unit test on every method is that the coverage may be incomplete. Another good litmus test is to look at the code and see if it throws an error or catches an error. If error handling is performed in a method, then that method can break: it may break at some time, and then the unit test will be there to help you fix it.

The programmer should know that their unit testing is complete when the unit tests cover at the very least the functional requirements of all the code. The careful programmer will know that their unit testing is complete when they have verified that their unit tests cover every cluster of objects that form their application.
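The "any method that can break deserves a unit test" rule can be sketched as follows. The Account class and its withdraw method below are invented for illustration (they are not part of this handout); plain Java checks stand in for a test framework so the example is self-contained:

```java
// Hypothetical class under test. withdraw() performs error handling,
// so by the litmus test above it can break and deserves a unit test.
class Account {
    private int balance;

    Account(int balance) { this.balance = balance; }

    void withdraw(int amount) {
        if (amount <= 0 || amount > balance) {
            throw new IllegalArgumentException("invalid amount: " + amount);
        }
        balance -= amount;
    }

    int getBalance() { return balance; }
}

public class AccountTest {
    public static void main(String[] args) {
        // Normal path: the balance is reduced.
        Account a = new Account(100);
        a.withdraw(30);
        if (a.getBalance() != 70) throw new AssertionError("expected 70");

        // Failure path: the error handling itself is exercised,
        // not just the happy path that inspection tends to focus on.
        boolean thrown = false;
        try {
            a.withdraw(1000);
        } catch (IllegalArgumentException e) {
            thrown = true;
        }
        if (!thrown) throw new AssertionError("expected an exception");
        System.out.println("AccountTest passed");
    }
}
```

Note that the test deliberately drives the method down its error-handling branch; a test that only exercised the normal withdrawal would leave the part of the method most likely to break unverified.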

Life Cycle Approach to Testing
Testing will occur throughout the project life cycle, i.e., from Requirements till User Acceptance Testing.

Levels of Unit Testing
UNIT (100% code coverage)
INTEGRATION
SYSTEM
ACCEPTANCE
MAINTENANCE AND REGRESSION

Concepts in Unit Testing:
Unit testing is the most 'micro' scale of testing, used to test particular functions or code modules. It is typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. It is not always easily done unless the application has a well-designed architecture with tight code.

The main objectives of unit testing are as follows:
To execute a program with the intent of finding an error.
To uncover an as-yet undiscovered error.
To prepare a test case with a high probability of finding an as-yet undiscovered error.

Types of Errors Detected
The following are the types of errors that may be caught:
Errors in data structures
Performance errors
Logic errors
Validity of alternate and exception flows identified at analysis/design stages

Unit Testing – Black Box Approach
Field Level Check
Field Level Validation
User Interface Check
Functional Level Check

Unit Testing – White Box Approach
STATEMENT COVERAGE
DECISION COVERAGE
CONDITION COVERAGE
MULTIPLE CONDITION COVERAGE (nested conditions)
CONDITION/DECISION COVERAGE
PATH COVERAGE

Unit Testing – Field Level Checks
Null / Not Null Checks
Uniqueness Checks
Length Checks
Date Field Checks
Numeric Checks
Negative Checks

Unit Testing – Field Level Validations
Test all validations for an input field
Date range checks (From Date/To Dates)
Date check validation with system date

Unit Testing – User Interface Checks
Readability of the controls
Tool tips validation
Ease of use of interface
Across-tab related checks
User interface dialog
GUI compliance checks

Unit Testing – Functionality Checks
Screen functionalities
Field dependencies
Auto generation
Algorithms and computations
Normal and abnormal terminations
Specific business rules, if any
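The field-level checks and validations listed above can be sketched as small unit-testable helpers. The FieldChecks class and its methods are hypothetical examples of this kind of check, not part of any real framework:

```java
import java.time.LocalDate;

// Hypothetical field-level checks of the kinds listed above:
// null/not-null, length, and a From Date/To Date range validation.
public class FieldChecks {
    // Null / Not Null check: reject null and blank values.
    static boolean notNull(String value) {
        return value != null && !value.trim().isEmpty();
    }

    // Length check: value must fit within the field's maximum length.
    static boolean withinLength(String value, int max) {
        return value != null && value.length() <= max;
    }

    // Date range check: From Date must not be later than To Date.
    static boolean validDateRange(LocalDate from, LocalDate to) {
        return !from.isAfter(to);
    }

    public static void main(String[] args) {
        if (!notNull("CHF")) throw new AssertionError();
        if (notNull("   ")) throw new AssertionError();          // blank rejected
        if (!withinLength("abc", 5)) throw new AssertionError();
        if (withinLength("abcdef", 5)) throw new AssertionError(); // too long
        if (!validDateRange(LocalDate.of(2007, 1, 1),
                            LocalDate.of(2007, 12, 31))) throw new AssertionError();
        if (validDateRange(LocalDate.of(2008, 1, 1),
                           LocalDate.of(2007, 1, 1))) throw new AssertionError();
        System.out.println("all field checks passed");
    }
}
```

Each check is deliberately tested with both an accepting and a rejecting input, mirroring the positive and negative checks in the lists above.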

Statement Coverage
This measure reports whether each executable statement is encountered. Also known as: line coverage, segment coverage, and basic block coverage. Basic block coverage is the same as statement coverage except the unit of code measured is each sequence of non-branching statements.

Unit Testing – Other Measures
Function coverage
Loop coverage
Race coverage

Execution of Unit Tests
Design a test case for every statement to be executed.
Select the unique set of test cases.
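The idea of designing a test case for every statement can be sketched on a hypothetical method (abs below is an invented example, not from this handout):

```java
public class StatementCoverageDemo {
    static int abs(int x) {
        if (x < 0) {
            x = -x;        // this statement runs only for negative inputs
        }
        return x;
    }

    public static void main(String[] args) {
        // abs(-5) alone encounters every executable statement
        // (the condition, the negation, and the return),
        // so this single test case achieves 100% statement coverage.
        if (abs(-5) != 5) throw new AssertionError();

        // abs(5) alone would leave the "x = -x" statement unexecuted,
        // which is why a test case is designed per statement to be executed.
        if (abs(5) != 5) throw new AssertionError();

        System.out.println("statement coverage demo passed");
    }
}
```

Note that the second case (abs(5)) adds nothing to statement coverage once abs(-5) has run; it is the kind of redundant case that "select the unique set of test cases" prunes away.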


Unit Testing Flow:
(flow diagram not reproduced)

Advantages of Statement Coverage
Can be applied directly to object code and does not require processing source code.
Performance profilers commonly implement this measure.

Disadvantages of Statement Coverage
Insensitive to some control structures (number of iterations).
Does not report whether loops reach their termination condition.
Statement coverage is completely insensitive to the logical operators (|| and &&).

Decision Coverage
Method for Decision Coverage:
Design a test case for the pass/failure of every decision point.
Select the unique set of test cases.
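Designing a test case for the pass and the failure of a decision point can be sketched as follows (classify is a hypothetical method invented for illustration):

```java
public class DecisionCoverageDemo {
    // One decision point: the if-statement's Boolean expression.
    static String classify(int age) {
        if (age >= 18) {        // decision point
            return "adult";
        }
        return "minor";
    }

    public static void main(String[] args) {
        // Decision coverage requires the decision to evaluate
        // both true (pass) and false (failure):
        if (!classify(30).equals("adult")) throw new AssertionError(); // decision true
        if (!classify(10).equals("minor")) throw new AssertionError(); // decision false
        System.out.println("decision coverage demo passed");
    }
}
```

A single test case (say, classify(30)) would already give full statement coverage of the true branch but only half of decision coverage; the second case forces the decision false.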

This measure reports whether Boolean expressions tested in control structures (such as the if-statement and while-statement) evaluated to both true and false. The entire Boolean expression is considered one true-or-false predicate regardless of whether it contains logical-and or logical-or operators. Additionally, this measure includes coverage of switch-statement cases, exception handlers, and interrupt handlers. Also known as: branch coverage, all-edges coverage, basis path coverage, decision-path testing. "Basis path" testing selects paths that achieve decision coverage.
Advantage: Simplicity without the problems of statement coverage.
Disadvantage: This measure ignores branches within Boolean expressions which occur due to short-circuit operators.

Condition Coverage
Reports the true or false outcome of each Boolean sub-expression, separated by logical-and and logical-or if they occur. Condition coverage measures the sub-expressions independently of each other.
Method for Condition Coverage:
Test that every condition (sub-expression) in a decision evaluates to both true and false.
Select the unique set of test cases.

Multiple Condition Coverage
Reports whether every possible combination of Boolean sub-expressions occurs. As with condition coverage, the sub-expressions are separated by logical-and and logical-or, when present. The test cases required for full multiple condition coverage of a condition are given by the logical operator truth table for the condition.
Disadvantages:
Tedious to determine the minimum set of test cases required, especially for very complex Boolean expressions.
Number of test cases required could vary substantially among conditions that have similar complexity.

Condition/Decision Coverage
Condition/Decision Coverage is a hybrid measure composed by the union of condition coverage and decision coverage. It has the advantage of simplicity but without the shortcomings of its component measures.

Path Coverage
This measure reports whether each of the possible paths in each function has been followed. A path is a unique sequence of branches from the function entry to the exit. Also known as predicate coverage. Predicate coverage views paths as possible combinations of logical conditions. Path coverage has the advantage of requiring very thorough testing.
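The difference between decision, condition, and multiple condition coverage can be sketched on a single two-condition expression (canShip and its parameters are invented for illustration):

```java
public class ConditionCoverageDemo {
    // Hypothetical guard with two sub-expressions joined by logical-and.
    static boolean canShip(boolean inStock, boolean paid) {
        return inStock && paid;
    }

    public static void main(String[] args) {
        // Decision coverage: the whole expression must be true once
        // and false once. Two cases suffice:
        if (!canShip(true, true)) throw new AssertionError();  // decision true
        if (canShip(false, true)) throw new AssertionError();  // decision false
        // (In the second case && short-circuits, so "paid" is never
        // evaluated -- the branch decision coverage ignores.)

        // Condition coverage additionally needs each sub-expression to
        // take both outcomes, so "paid" must also be seen false:
        if (canShip(true, false)) throw new AssertionError();

        // Multiple condition coverage needs the full truth table
        // (4 combinations for two sub-expressions), so one case remains:
        if (canShip(false, false)) throw new AssertionError();

        System.out.println("condition coverage demo passed");
    }
}
```

Two cases give decision coverage, three give condition coverage, and all four rows of the truth table give multiple condition coverage, which illustrates why the required case count grows with the complexity of the Boolean expression.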

Function Coverage
This measure reports whether you invoked each function or procedure. It is useful during preliminary testing to assure at least some coverage in all areas of the software. Broad, shallow testing finds gross deficiencies in a test suite quickly.

Loop Coverage
This measure reports whether you executed each loop body zero times, exactly once, and more than once (consecutively). For do-while loops, loop coverage reports whether you executed the body exactly once and more than once. The valuable aspect of this measure is determining whether while-loops and for-loops execute more than once, information not reported by other measures.

Race Coverage
This measure reports whether multiple threads execute the same code at the same time. It helps detect failure to synchronize access to resources. It is useful for testing multi-threaded programs such as in an operating system.

Integration Testing
Integration testing is the testing of a partially integrated application to identify defects involving the interaction of collaborating components.

Objectives
The typical objectives of integration testing are to:
Determine if components will work properly together.
Identify defects that are not easily identified during unit testing.

Kinds of Integration Testing
Integration testing includes the following kinds of testing:

Commercial Component Integration
Commercial component integration testing is the integration testing of multiple commercial-off-the-shelf (COTS) software components to determine if they are not interoperable (i.e., if they contain any interface defects).

Software Integration
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.

System Integration
System integration testing is the integration testing of two or more system components. Specifically, system integration testing is the testing of software components that have been distributed across multiple platforms (e.g., client, web server, application server, and database server) to produce failures caused by system integration defects (i.e., defects involving distribution and back-office integration).

Guidelines
The iterative and incremental development cycle implies that integration testing is regularly performed in an iterative and incremental manner. Integration testing must be automated if adequate regression testing is to occur.

System Testing
For most organizations, software and system testing represents a significant element of a project's cost in terms of money and management time. Making this function more effective can deliver a range of benefits including reductions in risk, development costs and improved 'time to market' for new systems. Systems with software components and software-intensive systems are more and more complex every day. Industry sectors such as telecom, automotive, railway, and aeronautical and space are good examples. It is often agreed that testing is essential to manufacture reliable products. However, the validation process does not often receive the required attention. Moreover, the validation process is close to other activities such as conformance, acceptance and qualification testing.

A number of time-domain software reliability models attempt to predict the growth of a system's reliability during the system test phase of the development life cycle. One study examined the results of applying several types of Poisson-process models to the development of a large system for which system test was performed in two parallel tracks, using different strategies for test data selection.

We will test that the functionality of your systems meets with your specifications, integrating with whichever type of development methodology you are applying. We test for errors that users are likely to make as they interact with the application, as well as your application's ability to trap errors gracefully, whether testing a financial system, ecommerce, an online casino or games testing. These techniques can be applied flexibly.

The difference between function testing and system testing is that now the focus is on the whole application and its environment. This does not mean that single functions of the whole program are tested again, because this would be too redundant. The main goal is rather to demonstrate the discrepancies of the product from its requirements and its documentation. In other words, this again includes the question, "Did we build the right product?" and not just, "Did we build the product right?"

However, system testing does not only deal with this more economical problem. It also contains some aspects that are orientated on the word "system". This means that those tests should be done in the environment for which the program was designed, like a multiuser network or whatever. Even security guidelines have to be included. Therefore the program has to be given completely. It is beyond doubt that this test cannot be done completely, and nevertheless, while this is one of the most incomplete test methods, it is one of the most important.

System Testing is more than just functional testing, and can also encompass many other types of testing, such as:
Security
Load/stress
Performance
Browser compatibility
Localisation

Need for System Testing
Effective software testing, as a part of software engineering, has been proven over the last three decades to deliver real business benefits. These benefits are achieved as a result of some fundamental principles of testing. For example, increased independence naturally increases objectivity: you will have a personal interest in your system's success, in which case it is only human for your objectivity to be compromised. Your test strategy must take into consideration the risks to your organisation, commercial and technical.

System Testing Techniques
Goal is to evaluate the system as a whole, not its parts.
Techniques can be structural or functional.
Techniques can be used in any stage that tests the system as a whole (acceptance, installation, etc.).
Techniques are not mutually exclusive.

Structural techniques:
Stress testing - test larger-than-normal capacity in terms of transactions, data, users, etc.
Execution testing - test performance in terms of speed, precision, etc.
Recovery testing - test how the system recovers from a disaster, how it handles corrupted data, etc.
Operations testing - test how the system fits in with existing operations and procedures in the user organization.
Compliance testing - test adherence to standards.

Security testing - test security requirements.

Functional techniques:
Requirements testing - makes sure the system does what it's required to do.
Regression testing - make sure unchanged functionality remains unchanged.
Error-handling testing - test required error-handling functions (usually user error).
Manual-support testing - test that the system can be used properly; includes user documentation.
Intersystem handling testing - test that the system is compatible with other systems in the environment.
Control testing - test required control mechanisms.
Parallel testing - feed the same input into two versions of the system to make sure they produce the same output.

Unit Testing
Goal is to evaluate some piece (file, module, program, component, etc.) in isolation.
Techniques can be structural or functional.
In practice, it's usually ad hoc and looks a lot like debugging. More structured approaches exist.

Functional techniques:
Input domain testing - pick test cases representative of the range of allowable input, including high, low, and average values.
Equivalence partitioning - partition the range of allowable input so that the program is expected to behave similarly for all inputs in a given partition, then pick a test case from each partition.
Boundary value - choose test cases with input values at the boundary (both inside and outside) of the allowable range.
Syntax checking - choose test cases that violate the format rules for input.
Special values - design test cases that use input values that represent special situations.
Output domain testing - pick test cases that will produce output at the extremes of the output domain.

Structural techniques:
Statement testing - ensure the set of test cases exercises every statement at least once.
Branch testing - each branch of an if/then statement is exercised.
Conditional testing - each truth statement is exercised both true and false.
Expression testing - every part of every expression is exercised.
Path testing - every path is exercised (impossible in practice).

Error-based techniques:

Basic idea is that if you know something about the nature of the defects in the code, you can estimate whether or not you've found all of them.
Fault seeding - put a certain number of known faults into the code, then test until they are all found.
Mutation testing - create mutants of the program by making single changes, then run test cases until all mutants have been killed.
Historical test data - an organization keeps records of the average numbers of defects in the products it produces, then tests a new product until the number of defects found approaches the expected number.

SUMMARY
Testing, irrespective of the phase of testing, should encompass the following:
The cost of failure associated with defective products getting shipped and used by customers is enormous.
To find out whether the integrated product works as per the customer requirements.
To evaluate the product with an independent perspective.
To identify as many defects as possible before the customer finds them.
To reduce the risk of releasing the product.
Hence the system test phase should begin once modules are integrated enough to perform tests in a whole system environment. System testing can occur in parallel with integration test, especially with the top-down method.

Test your Understanding
1. In general, unit testing is performed by:
a. an independent test group
b. the software engineer
c. SQA
d. the customer
2. Unit testing is predominantly
a. white-box oriented
b. black-box oriented
c. both black- and white-box oriented
d. none of the above
Answers: 1) b 2) a

Handout – Software Testing Chapter 5: JUnit Testing Learning Objective After completing this chapter. Also. Override the method runTest() When you want to check a value. Both styles of tests are limited because they require human judgment to analyze their results. When you need to test something. and it is easy to run many of them at the same time. JUNIT Testing . It is an instance of the xUnit architecture for unit testing frameworks. You can also write test expressions as statements which print to the standard output stream. "CHF"). All Rights Reserved C3: Protected . You can change debug expressions without recompiling. they don't compose nicely. and you can wait to decide what to write until you have seen the running objects. here is what you do: Create an instance of Test Case: Create a constructor which accepts a String as a parameter and passes it to the superclass.you can only execute one debug expression at a time and a program with too many print statements causes the dreaded "Scroll Blindness". Money m14CHF= new Money(14. you will be able to: Write a Junit Testing. "CHF"). Cognizant Technology Solutions. call assertTrue() and pass a boolean that is true if the test succeeds For example. Page 65 ©Copyright 2007. to test that the sum of two Moneys with the same currency contains a value which is the sum of the values of the two Moneys.Introduction JUnit is a simple framework to write repeatable tests. JUnit features include: Assertions for testing expected results Test fixtures for sharing common test data Test suites for easily organizing and running tests Graphical and textual test runners JUnit was originally written by Erich Gamma and Kent Beck Simple Test Case How do you write testing code? The simplest way is as an expression in a debugger. JUnit tests do not require human judgment to interpret. write: public void testSimpleAdd() { Money m12CHF= new Money(12.

When you want to run more than one test. "CHF"). Add an instance variable for each part of the fixture Override setUp() to initialize the variables Override tearDown() to release any permanent resources you allocated in setUp For example. first create a fixture: public class MoneyTest extends TestCase { private Money f12CHF. write a Fixture instead. However. Page 66 ©Copyright 2007. you will be able to use the same fixture for several different tests. a much bigger savings comes from sharing fixture code. private Money f28USD. Each case will send slightly different messages or parameters to the fixture and will check for different results. To some extent. Cognizant Technology Solutions. create a Suite. "CHF"). here is what you do: Create a subclass of TestCase Create a constructor which accepts a String as a parameter and passes it to the superclass. } If you want to write a test similar to one you have already written. protected void setUp() { f12CHF= new Money(12. private Money f14CHF. you can make writing the fixture code easier by paying careful attention to the constructors you write. 14 Swiss Francs. All Rights Reserved C3: Protected . Money result= m12CHF. "CHF"). f14CHF= new Money(14. you can write as many Test Cases as you'd like. Fixture What if you have two or more tests that operate on the same or similar sets of objects? Tests need to run against the background of a known set of objects. Often.equals(result)). When you are writing tests you will often find that you spend more time writing the code to set up the fixture than you do in actually testing values.Handout – Software Testing Money expected= new Money(26. to write several test cases that want to work with different combinations of 12 Swiss Francs. This set of objects is called a test fixture. f28USD= new Money(28. When you have a common fixture. "USD"). assertTrue(expected. } } Once you have the Fixture in place. and 28 US Dollars.add(m14CHF).

Test Case
How do you write and invoke an individual test case when you have a Fixture? Writing a test case without a fixture is simple: override runTest() in an anonymous subclass of TestCase. You can write test cases for a Fixture the same way, by making a subclass of TestCase for your set up code and then making anonymous subclasses for the individual test cases. However, after a few such tests you would notice that a large percentage of your lines of code are sacrificed to syntax. JUnit provides a more concise way to write a test against a Fixture. Here is what you do:
Write the test case method in the fixture class. Be sure to make it public, or it can't be invoked through reflection.
Create an instance of the TestCase class and pass the name of the test case method to the constructor.
For example, to test the addition of a Money and a MoneyBag, write:

    public void testMoneyMoneyBag() {
        // [12 CHF] + [14 CHF] + [28 USD] == {[26 CHF][28 USD]}
        Money bag[]= { f26CHF, f28USD };
        MoneyBag expected= new MoneyBag(bag);
        assertEquals(expected, f12CHF.add(f28USD.add(f14CHF)));
    }

Create an instance of MoneyTest that will run this test case like this:

    new MoneyTest("testMoneyMoneyBag")

When the test is run, the name of the test is used to look up the method to run. Once you have several tests, organize them into a Suite.

Suite
How do you run several tests at once? As soon as you have two tests, you'll want to run them together. You could run the tests one at a time yourself, but you would quickly grow tired of that. Instead, JUnit provides an object, TestSuite, which runs any number of test cases together. For example, to run a single test case, you execute:

    TestResult result= (new MoneyTest("testMoneyMoneyBag")).run();

To create a suite of two test cases and run them together, execute:

    TestSuite suite= new TestSuite();
    suite.addTest(new MoneyTest("testMoneyEquals"));
    suite.addTest(new MoneyTest("testSimpleAdd"));
    TestResult result= suite.run();

Another way is to let JUnit extract a suite from a TestCase. To do so, you pass the class of your Test Case to the TestSuite constructor:
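The point that "the name of the test is used to look up the method to run" can be illustrated without JUnit itself. The sketch below is our own illustration, not JUnit's code (NameBasedRunner and SampleTest are hypothetical names): it resolves a test name to a method via reflection, which is also why test methods must be public:

```java
import java.lang.reflect.Method;

public class NameBasedRunner {
    // A toy fixture with public test methods, standing in for MoneyTest.
    public static class SampleTest {
        public boolean testAlwaysPasses() { return true; }
        public boolean testArithmetic()   { return 2 + 2 == 4; }
    }

    // Mimics the lookup JUnit performs: the test *name* passed to the
    // constructor is resolved to a method via reflection and invoked.
    static boolean runByName(Object fixture, String testName) throws Exception {
        Method m = fixture.getClass().getMethod(testName); // must be public
        return (Boolean) m.invoke(fixture);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runByName(new SampleTest(), "testArithmetic")); // prints true
    }
}
```

If the method were not public, getMethod() would throw NoSuchMethodException, mirroring JUnit's requirement.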

    TestSuite suite= new TestSuite(MoneyTest.class);
    TestResult result= suite.run();

TestSuites don't only have to contain TestCases; they can contain any object that implements the Test interface. For example, you can create a TestSuite in your code and I can create one in mine, and we can run them together by creating a TestSuite that contains both:

    TestSuite suite= new TestSuite();
    suite.addTest(Kent.suite());
    suite.addTest(Erich.suite());
    TestResult result= suite.run();

TestRunner
How do you run your tests and collect their results? Once you have a test suite, you'll want to run it. JUnit provides tools to define the suite to be run and to display its results. You make your suite accessible to a TestRunner tool with a static method suite() that returns a test suite. For example, to make a MoneyTest suite available to a TestRunner, add the following code to MoneyTest:

    public static Test suite() {
        TestSuite suite= new TestSuite();
        suite.addTest(new MoneyTest("testMoneyEquals"));
        suite.addTest(new MoneyTest("testSimpleAdd"));
        return suite;
    }

If a TestCase class doesn't define a suite method, a TestRunner will extract a suite and fill it with all the methods starting with "test". This avoids having to update the suite creation code when you add a new test case, so the automatic extraction is the preferred way; use the manual way when you want a suite to contain only a subset of the test cases.

JUnit provides both a graphical and a textual version of the TestRunner tool. Start the graphical version by typing java junit.awtui.TestRunner or java junit.swingui.TestRunner. The graphical user interface presents a window with:
A field to type in the name of a class with a suite method
A run button to start the test
A progress indicator that turns from green to red as soon as a test fails
A list of failed tests
In the case of an unsuccessful test, JUnit reports the failed tests in a list at the bottom. JUnit distinguishes between failures and errors. A failure is anticipated and checked for with assertions. Errors are unanticipated problems, like an
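The automatic suite extraction described above can be imitated in a few lines of plain Java. This is only an illustration of the idea, not JUnit's actual implementation; the class names are hypothetical:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class SuiteExtraction {
    // A stand-in for a TestCase class that defines no suite() method.
    public static class MoneyishTest {
        public void testSimpleAdd() {}
        public void testMoneyEquals() {}
        public void helperNotATest() {}   // ignored: name doesn't start with "test"
    }

    // Sketch of automatic suite extraction: collect every public method
    // whose name starts with "test".
    static List<String> extractSuite(Class<?> testClass) {
        List<String> names = new ArrayList<>();
        for (Method m : testClass.getMethods()) {
            if (m.getName().startsWith("test")) {
                names.add(m.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) {
        List<String> suite = extractSuite(MoneyishTest.class);
        System.out.println(suite.size()); // prints 2: only the two test* methods
    }
}
```

Because the lookup is purely name-based, adding a new public testXxx() method is enough for it to be picked up on the next run.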

ArrayIndexOutOfBoundsException. The following figure shows an example of a failed test.

There is also a batch interface to JUnit. To use it, type java junit.textui.TestRunner followed by the name of the class with a suite method at an operating system prompt. The batch interface shows the result as text output. An alternative way to invoke the batch interface is to define a main method in your TestCase class. For example, to start the batch TestRunner for MoneyTest, write:

    public static void main(String args[]) {
        junit.textui.TestRunner.run(suite());
    }

With this definition of main you can run your tests by simply typing java MoneyTest at an operating system prompt. For using either the graphical or the textual version, make sure that the junit.jar file is on your CLASSPATH.

In a dynamic programming environment like VisualAge for Java, which supports hot code update, you can leave the JUnit window up all the time. In other environments you have to restart the graphical version for each run, which is tedious and time consuming. As an alternative, JUnit's AWT and Swing UIs use junit.runner.LoadingTestCollector, which reloads all your classes for each test run. This feature can be disabled by unchecking the 'Reload classes every run' checkbox.
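The failure/error distinction can be sketched in plain Java. The classify helper below is our own illustration, not JUnit's code: an AssertionError models an anticipated failure caught by an assertion, while any other runtime exception, such as an ArrayIndexOutOfBoundsException, models an unanticipated error:

```java
public class FailureVsError {
    // Imitates how a runner would categorize a test's outcome:
    // pass, failure (assertion violated), or error (unexpected exception).
    static String classify(Runnable test) {
        try {
            test.run();
            return "pass";
        } catch (AssertionError e) {    // anticipated, checked for with assertions
            return "failure";
        } catch (RuntimeException e) {  // unanticipated problem in the code under test
            return "error";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(() -> { /* nothing thrown */ }));                  // pass
        System.out.println(classify(() -> { throw new AssertionError("not 26 CHF"); })); // failure
        System.out.println(classify(() -> { int[] a = new int[1]; int x = a[5]; }));   // error
    }
}
```

The third case throws an ArrayIndexOutOfBoundsException, which is a RuntimeException, so it is reported as an error rather than a failure.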

Chapter 6: Testing Artifacts

Learning Objective
After completing this chapter, you will be able to:
Create test plans and test cases.

Test Strategy and Test Plan

Introduction
This document gives you a better insight into the test strategy and its methodology. Test Approach and Test Architecture are synonyms for Test Strategy. The testing strategy should define the objectives of all test stages and the techniques that apply. Any test support tools introduced should be aligned with, and in support of, the test strategy. The testing strategy also forms the basis for the creation of a standardized documentation set, and facilitates communication of the test process and its implications outside of the test discipline.

It is the role of test management to ensure that new or modified service products meet the business requirements for which they have been developed or enhanced. Test management is also concerned with both test resource and test environment management.

Key elements of Test Management:
Test organization - the set-up and management of a suitable test organizational structure and explicit role definition. Test organization also involves the determination of configuration standards and the definition of the test environment.
Test planning - the requirements definition and design specifications facilitate the identification of major test items, and these may necessitate updates to the test strategy. A detailed test plan and schedule is prepared, with key test responsibilities indicated. The project framework under which the testing activities will be carried out is reviewed, high level test phase plans are prepared, and resource schedules are considered.
Test specifications - required for all levels of testing and covering all categories of test. The required outcome of each test must be known before the test is attempted.
Unit, integration and system testing - configuration items are verified against the appropriate specifications and in accordance with the test plan. The test environment should also be under configuration control, and test data and results should be stored for future evaluation.
Test monitoring and assessment - ongoing monitoring and assessment of the integrity of the development and construction. The status of the configuration items should be reviewed against the phase plans, and test progress reports prepared, providing some assurance of the verification and validation activities.

Product assurance - the decision to negotiate the acceptance testing program and the release and commissioning of the service product is subject to the 'product assurance' role being satisfied with the outcome of the verification activities. Product assurance may oversee some of the test activity and may participate in process reviews.

Traditionally, the responsibility for testing and commissioning is buried deep within the supply chain as a sub-contract of a sub-contract. Testing and commissioning is often considered by teams to be a secondary activity and given a lower priority, particularly as pressure builds on the program towards completion. A common criticism of construction programs is that insufficient time is allocated to the testing and commissioning of the building systems, together with the involvement and subsequent training of the Facilities Management team. It is possible to gain greater control of this process and the associated risk through the use of specialists, such as Systems Integration, who can be appointed as part of the professional team.

Sufficient time must be dedicated to testing and commissioning, as ensuring the systems function correctly is fundamental to the project's success or failure. The time necessary will vary from project to project depending upon the complexity of the systems and services that have been installed. The Project Sponsor should ensure that the professional team and the contractor consider realistically how much time is needed.

Fitness for purpose checklist:
Is there a documented testing strategy that defines the objectives of all test stages and the techniques that may apply, e.g. non-functional testing and the associated techniques such as performance, stress and security testing?
Does the test plan prescribe the approach to be taken for intended test activities, identifying:
The items to be tested
The testing to be performed
Test schedules
Resource and facility requirements
Reporting requirements
Evaluation criteria
Risks requiring contingency measures?
Are test processes and practices reviewed regularly to assure that the testing processes continue to meet specific business needs? For example, e-commerce testing may involve new user interfaces, and a business focus on usability may mean that the organization must review its testing strategies.

Test Strategy Flow:
Test cases and test procedures should manifest the test strategy.

Test Strategy - Selection
Selection of the test strategy is based on the following factors:

Product
o Test strategy based on the application; in this example, an application that helps people and teams of people in making decisions.

Based on the Key Potential Risks
o Determination of actual risk.
o Incorrect comparison of scenarios.
o Scenarios may be corrupted.
o People will use the product incorrectly.
o Unable to handle complex decisions.
o Suggestion of wrong ideas.
o Understand the underlying algorithm.
o Simulate the algorithm in parallel.
o Capability test each major function.
o Generate a large number of decision scenarios.
o Create complex scenarios and compare them.
o Review documentation and help.
o Test for sensitivity to user error.

Test Strategy Execution:
Understand the decision algorithm and generate a parallel decision analyzer, using Perl or Excel, that will function as a reference for high volume testing of the app.
Create a means to generate and apply large numbers of decision scenarios to the product. This will be done using the GUI test automation system or through the direct generation of DecideRight scenario files that would be loaded into the product during test.
Review the documentation, and test the design of the user interface and functionality for its sensitivity to user error.
Test the product for the risk of silent failures or corruptions in decision analysis.
Test with decision scenarios that are near the limit of complexity allowed by the product.
Compare complex scenarios.

Issues in Execution of the Test Strategy
The difficulty of understanding and simulating the decision algorithm.
The risk of coincidental failure of both the simulation and the product.
The difficulty of automating decision tests.

General Testing Strategies
Top-down
Bottom-up
Thread testing
Stress testing
Back-to-back testing

Need for a Test Strategy
The objective of testing is to reduce the risks inherent in computer systems. The strategy must address those risks and present a process that can reduce them. The system's risk concerns then establish the objectives for the test process. The two components of the testing strategy are the Test Factors and the Test Phases.

Test Factor - the risk or issue that needs to be addressed as part of the test strategy. Not all test factors will be applicable to all software systems; the strategy will select those factors that need to be addressed in the testing of a specific application system.

Test Phase - the phase of the systems development life cycle in which testing will occur. The test phases will vary based on the testing methodology used. For example, the test phases in a traditional waterfall life cycle methodology will be much different from the phases in a Rapid Application Development methodology.

Developing a Test Strategy
The test strategy will need to be customized for any specific software system. The development team will need to select and rank the test factors for the specific software system being developed, and the applicable test factors would be listed against the phases in which the testing must occur. Four steps must be followed to develop a customized test strategy:
Select and rank test factors
Identify the system development phases
Identify the business risks associated with the system under development
Place the risks in the matrix
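The four steps above produce a matrix whose rows are ranked test factors and whose columns are development phases, with business risks placed in the cells. A minimal sketch of that structure (the factor, phase and risk names below are illustrative placeholders, not taken from the handout) might be:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TestStrategyMatrix {
    // Builds an empty factor-by-phase matrix: one row per ranked test
    // factor, each row mapping a phase name to the risk placed there.
    static Map<String, Map<String, String>> emptyMatrix(String[] rankedFactors) {
        Map<String, Map<String, String>> matrix = new LinkedHashMap<>();
        for (String factor : rankedFactors) {
            matrix.put(factor, new LinkedHashMap<>());
        }
        return matrix;
    }

    public static void main(String[] args) {
        // Step 1: select and rank test factors (illustrative names)
        String[] factors = { "Reliability", "Security", "Performance" };
        Map<String, Map<String, String>> matrix = emptyMatrix(factors);
        // Step 4: place an identified business risk in the matrix
        matrix.get("Security").put("Design", "Access rules may be incompletely specified");
        System.out.println(matrix.get("Security").get("Design"));
    }
}
```

The ordered map preserves the factor ranking, so iterating the matrix visits the most important factors first.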

Conclusion:
The test strategy should be developed in accordance with the business risks associated with the software when the test team develops the test tactics. The system accordingly focuses on risks, thereby establishing the objectives for the test process. The test team therefore needs to acquire and study the test strategy, and should question the following:
What is the relationship of importance among the test factors?
Which of the high-level risks are the most significant?
What damage can be done to the business if the software fails to perform correctly?
What damage can be done to the business if the software is not completed on time?
Who are the individuals most knowledgeable in understanding the impact of the identified business risks?
Hence the test strategy must address the risks and present a process that can reduce those risks.

Test Plan
A Test Plan can be defined as a document that describes the scope, approach, resources and schedule of intended test activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

Purpose of preparing a Test Plan
A Test Plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it. The main purpose of preparing a Test Plan is that everyone concerned with the project is in sync with regard to the scope, responsibilities, deadlines and deliverables for the project. It is in this respect that reviews and a sign-off are very important, since they mean that everyone is in agreement with the contents of the test plan; this also helps in case of any dispute during the course of the project (especially between the developers and the testers).

Contents of a Test Plan
Purpose
Scope
Test Approach
Entry Criteria
Resources
Tasks / Responsibilities
Exit Criteria
Schedules / Milestones

Hardware / Software Requirements
Risks & Mitigation Plans
Tools to be used
Deliverables
References
Procedures
Templates
Standards / Guidelines
Annexure
Sign-Off

Contents (in detail)

Purpose
This section should contain the purpose of preparing the test plan.

Scope
This section should describe the areas of the application which are to be tested by the QA team, and specify those areas which are definitely out of scope (screens, database, mainframe processes etc.).

Test Approach
This section contains details on how the testing is to be performed, and whether any specific strategy is to be followed (including configuration management).

Entry Criteria
This section explains the various steps to be performed before the start of a test, i.e. prerequisites. For example: timely environment set up, starting the web server / app server, successful implementation of the latest build, etc.

Resources
This section should list the people who would be involved in the project, their designations, etc.

Tasks / Responsibilities
This section describes the tasks to be performed and the responsibilities assigned to the various members in the project.

Exit Criteria
This section contains tasks like bringing down the system / server, restoring the system to the pre-test environment, database refresh, etc.

Schedules / Milestones
This section deals with the final delivery date and the various milestone dates to be met in the course of the project.

Hardware / Software Requirements
This section would contain the details of the PCs / servers required (with their configuration) to install the application or perform the testing, and any specific software that needs to be installed on the systems to get the application running or to connect to the database.

Risks & Mitigation Plans
This section should list all the possible risks that can arise during the testing (e.g. connectivity related issues) and the mitigation plans that the QA team plans to implement in case a risk actually turns into a reality.

Tools to be used
This section would list the testing tools or utilities (if any) that are to be used in the project (e.g. WinRunner, Test Director, PCOM, WinSQL).

Deliverables
This section contains the various deliverables that are due to the client at various points of time (i.e. daily, weekly, start of the project, end of the project, etc.). These could include Test Plans, Test Procedures, Test Scripts, Test Matrices, Status Reports, etc. Templates for all of these could also be attached.

References
Procedures
Templates (client specific or otherwise)
Standards / Guidelines (e.g. QView)
Project related documents (RSD, ADD, FSD etc.)

Annexure
This could contain embedded documents or links to documents which have been / will be used in the course of testing (e.g. templates used for reports, test cases etc.). Referenced documents can also be attached here.

Sign-Off
This should contain the mutual agreement between the client and the QA team, with both leads / managers signing off their agreement on the Test Plan.

Test Data Preparation - Introduction
A system is programmed by its data. Functional testing can suffer if data is poor, and good data can help improve functional testing. The first stage of any recogniser development project is data preparation. Good test data can be structured to improve understanding and testability. Its contents, correctly chosen, can reduce maintenance effort and allow flexibility. Preparation of the data can also help to focus the business where requirements are vague.

Test data should, however, be prepared which is representative of normal business transactions. Actual customer names or contact details should not be used for such tests. It is recommended that a full test environment be set up for use in the applicable circumstances. Each separate test should be given a unique reference number which will identify the Business Process being recorded, the persons involved in the testing process, and the date the test was carried out. This will enable the monitoring and testing reports to be co-ordinated with any feedback received.

Roles of Data in Functional Testing
Testing consumes and produces large amounts of data. Data describes the initial conditions for a test, forms the input, and is the medium through which the tester influences the software. Data is manipulated, extrapolated, summarized and referenced by the functionality under test, which finally spews forth yet more data to be checked against expectations. Data is a crucial part of most functional testing.

This paper sets out to illustrate some of the ways that data can influence the test process, and will show that testing can be improved by a careful choice of input data. The paper will focus on input data, rather than output data or the transitional states the data passes through during processing, as input data has the greatest influence on functional testing and is the simplest to manipulate. In doing this, the paper will concentrate most on data-heavy applications: those which use databases or are heavily influenced by the data they hold. The paper will not consider areas where data is important to non-functional testing, such as operational profiles, massive datasets and environmental tuning.

A System Is Programmed By Its Data
Many modern systems allow tremendous flexibility in the way their basic functionality can be used. A system can be configured to fit several business models, work (almost) seamlessly with a variety of cooperative systems, and provide tailored experiences to a host of different users. A business may look to an application's configurability to allow it to keep up with the market without being slowed by the development process; an individual may look for a personalized experience from commonly-available software. Configuration data can dictate control flow, data manipulation, presentation and user interface.

Testing is the process of creating, implementing and evaluating tests. Effective quality control testing requires some basic goals and understanding:
You must understand what you are testing. You have to decide such things as what exactly you are testing and testing for; if you're testing a specific functionality, you must know how it's supposed to work, how the protocols behave, what steps are required, etc.
You should have a definition of what success and failure are. In other words, is close enough good enough?
You should have a good idea of a methodology for the test: the way the test is going to be run and applied, and the test cases you should design.
You must understand the limits inherent in the tests themselves.
You must have a consistent schedule for testing. Performing a specific set of tests at appropriate points in the process is more important than running the tests at a specific time.
Tests must be planned and thought out ahead of time, and the more formal a plan the better.

Functional Testing Suffers If Data Is Poor
Tests with poor data may not describe the business model effectively. They may obscure problems or avoid them altogether, they may be hard to maintain, and they may take longer to execute or require lengthy and difficult setup. Poor data tends to result in poor tests. Without good data, it is hard to communicate problems to coders, and it can become difficult to have confidence in the QA team's results, whether they are good or bad.

Good Data Is Vital To Reliable Test Results
An important goal of functional testing is to allow the test to be repeated with the same result, and varied to allow diagnosis. Good data allows diagnosis, effective reporting, and allows tests to be repeated with confidence.

Good Data Can Help Testing Stay On Schedule
An easily comprehensible and well-understood dataset is a tool to help communication. Good data can greatly assist in speedy diagnosis and rapid re-testing. Regression testing and automated test maintenance can be made speedier and easier by using good data, while an elegantly-chosen dataset can often allow new tests without the overhead of new data.

A formal test plan is a document that provides and records important information about a test project, for example:
Project and quality assumptions
Project background information
Resources
Schedule & timeline
Entry and exit criteria
Test milestones
Tests to be performed
Use cases and/or test cases

Criteria for Test Data Collection
This section of the Document specifies the description of the test data needed to test recovery of each business process.

Identify Who is to Conduct the Tests
In order to ensure consistency of the testing process throughout the organization, one or more members of the Business Continuity Planning (BCP) Team should be nominated to co-ordinate the testing process within each business unit and across the organization. This section of the BCP should contain the names of the BCP Team members nominated to co-ordinate the testing process. It should also list the duties of the appointed co-ordinators. Each business process should be thoroughly tested, and the co-ordinator should ensure that each business unit observes the necessary rules associated with ensuring that the testing process is carried out within a realistic environment.

Identify Who is to Control and Monitor the Tests
In order to ensure consistency when measuring the results, the tests should be independently monitored. This task would normally be carried out by a nominated member of the Business Recovery Team or a member of the Business Continuity Planning Team.

This section of the BCP will contain the names of the persons nominated to monitor the testing process throughout the organization. It will also contain a list of the duties to be undertaken by the monitoring staff.

Conducting the Tests
The tests must be carried out under authentic conditions and all participants must take the process seriously. It is important that all persons who are likely to be involved with recovering a particular business process in the event of an emergency should participate in the testing process. It should be mandatory for the management of a business unit to be present when that unit is involved with conducting the tests. It is important that clear instructions are given to the Core Testing Team regarding the simulated conditions which have to be observed.

Training Core Testing Team for each Business Unit
In order for the testing process to proceed smoothly, it is necessary for the core testing team to be trained in the emergency procedures. This is probably best handled in a workshop environment and should be presented by the persons responsible for developing the emergency procedures. This section of the BCP should contain a list of the core testing team for each of the business units who will be responsible for coordinating and undertaking the Business Recovery Testing process.

Prepare Budget for Testing Phase
Each phase of the BCP process which incurs a cost requires that a budget be prepared and approved. This section of the BCP will contain a list of the testing phase activities and a cost for each. The 'Preparing for a Possible Emergency' phase of the BCP process will involve the identification and implementation of strategies for back up and recovery of data files or a part of a business process. It is inevitable that these back up and recovery processes will involve additional costs. Critical parts of the business process, such as the IT systems, may require particularly expensive back up strategies to be implemented. Where the costs are significant, they should be approved separately, with a specific detailed budget for the establishment costs and the ongoing maintenance costs. It should be noted whenever part of the costs is already incorporated within the organization's overall budgeting process.

Prepare Feedback Questionnaires
It is vital to receive feedback from the persons managing and participating in each of the tests. This feedback will hopefully enable weaknesses within the Business Recovery Process to be identified and eliminated. Completion of feedback forms should be mandatory for all persons participating in the testing process. The forms should be completed either during the tests (to record a specific issue) or as soon after finishing as practical; this will enable observations and comments to be recorded whilst the event is still fresh in the person's mind. This section of the BCP should contain a template for a Feedback Questionnaire.

Test each part of the Business Recovery Process
In so far as it is practical, each critical part of the business recovery process should be fully tested, in a realistic manner. Every part of the procedures included as part of the recovery process is to be tested to ensure validity and relevance. This section of the BCP is to contain a list of each business process, with a test schedule and information on the simulated conditions being used. The testing co-ordination and monitoring will endeavor to ensure that the simulated environments are maintained throughout the testing process.

Test Accuracy of Employee and Vendor Emergency Contact Numbers
During the testing process the accuracy of employee and vendor emergency contact information is to be re-confirmed. This is particularly important in the event of an emergency occurring outside of normal business hours. All contact numbers are to be validated for all involved employees. Where a large number of persons are to be contacted, a hierarchical process could be used whereby one person contacts five others. This process must have safety features incorporated to ensure that if one person is not contactable for any reason, then this is notified to a nominated controller. This will enable alternative contact routes to be used. This activity will usually be handled by the HRM Department or Division.

Assess Test Results
Prepare a full assessment of the test results for each business process. The following questions may be appropriate:
Were the objectives of the Business Recovery Process and the testing process met - if not, provide further comment
Did the tests proceed without any problems - if not, provide further comment
Were simulated conditions reasonably "authentic" - if not, provide further comment
Was test data representative - if not, provide further comment
What were the main comments received in the feedback questionnaires
Each test should be assessed as fully satisfactory, adequate, or requiring further testing.

Training Staff in the Business Recovery Process
All staff should be trained in the business recovery process. This training may be integrated with the training phase or handled separately. This is particularly important for management and key employees who are critical to the success of the recovery process. The training should be carefully planned and delivered on a structured basis; this is particularly important when the procedures are significantly different from those pertaining to normal operations. The training should be assessed to verify that it has achieved its objectives and is relevant for the procedures involved.
Training may be delivered either using in-house resources or external resources depending upon available skills and related costs. each critical part of the business recovery process should be fully tested.if not. Where. The testing co-ordination and monitoring will endeavor to ensure that the simulated environments are maintained throughout the testing process. a large numberof persons are to be contacted. The training should be carefully planned and delivered on a structured basis. This will enable alternative contact routes to be used.if not. All Rights Reserved C3: Protected . This is particularly important when the procedures are significantly different from those pertaining to normal operations. This section of the BCP is to contain a list of each business process with a test schedule and information on the simulated conditions being used.if not. This activity will usually be handled by the HRM Department or Division.
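The hierarchical contact process described above can be sketched as a simple call tree. This is an illustrative sketch only: the names, the simulated reach function, and the data layout are assumptions, not part of the handout's procedure.

```python
def cascade(contacts, person, reach, unreachable, fan_out=5):
    """Notify each of this person's delegates; escalate failures."""
    for delegate in contacts.get(person, [])[:fan_out]:
        if reach(delegate):
            cascade(contacts, delegate, reach, unreachable, fan_out)
        else:
            # escalated to the nominated controller, who must use
            # alternative contact routes for this person's sub-tree
            unreachable.append(delegate)

# illustrative call tree and a simulated phone call
contacts = {"coordinator": ["ann", "bob"], "ann": ["carl"], "bob": ["dee"]}
unreachable = []
cascade(contacts, "coordinator", lambda name: name != "bob", unreachable)
# 'bob' is reported to the controller; 'dee' must be reached another way
```

The safety feature is the `unreachable` list: anyone who cannot be contacted is recorded for the controller rather than silently dropped, so their sub-tree can still be reached by another route.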

Managing the Training Process
For the BCP training phase to be successful it has to be both well managed and structured. This will enable the training to be consistent and organized in a manner where the results can be measured. It will be necessary to identify the objective and scope for the training, what specific training is required, who needs it, and a budget prepared for the additional costs associated with this phase.

Develop Objectives and Scope of Training
The objectives and scope of the BCP training activities are to be clearly stated within the plan. The objectives for the training could be as follows: "To train all staff in the particular procedures to be followed during the business recovery process". The scope of the training could be along the following lines: "The training is to be carried out in a comprehensive and exhaustive manner so that staff become familiar with all aspects of the recovery process. The training will cover all aspects of the Business Recovery activities section of the BCP including IT systems recovery". Consideration should also be given to the development of a comprehensive corporate awareness program for communicating the procedures for the business recovery process. The BCP should contain a description of the objectives and scope of the training phase.

Training Needs Assessment
The plan must specify which person or group of persons requires which type of training. It is necessary for all new or revised processes to be explained carefully to the staff. For example, it may be necessary to carry out some process manually if the IT system is down for any length of time. These manual procedures must be fully understood by the persons who are required to carry them out. This section of the BCP will identify for each business process what type of training is required and which persons or group of persons need to be trained.

Training Materials Development Schedule
Once the training needs have been identified it is necessary to specify and develop suitable training materials. This can be a time consuming task, and unless priorities are given to critical training programmes, it could delay the organization in reaching an adequate level of preparedness. This section of the BCP contains information on each of the training programmes with details of the training materials to be developed, an estimate of resources and an estimate of the completion date.

Prepare Training Schedule
Once it has been agreed who requires training and the training materials have been prepared, a detailed training schedule should be drawn up. For larger organizations it may be practical to carry out the training in a classroom environment; however, for smaller organizations the training may be better handled in a workshop style.

This section of the BCP contains the overview of the training schedule and the groups of persons receiving the training.

Communication to Staff
Once the training is arranged to be delivered to the employees, it is necessary to advise them about the training programmes they are scheduled to attend. Each member of staff will be given information on their role and responsibilities applicable in the event of an emergency. This section of the BCP contains a draft communication to be sent to each member of staff to advise them about their training schedule. The communication should provide for feedback from the staff member where the training dates given are inconvenient. A separate communication should be sent to the managers of the business units advising them of the proposed training schedule to be attended by their staff.

Prepare Budget for Training Phase
Each phase of the BCP process which incurs a cost requires that a budget be prepared and approved. However well justified, it has to be recognized that training incurs additional costs, and these should be approved by the appropriate authority within the organization. Depending upon the cross-charging system employed by the organization, the training costs will vary greatly. It should be noted wherever part of the costs is already incorporated within the organization's overall budgeting process. This section of the BCP will contain a list of the training phase activities and a cost for each.

Assessing the Training
The individual BCP training programmes and the overall BCP training process should be assessed to ensure their effectiveness and applicability. This information will be gathered from the trainers and also the trainees through the completion of feedback questionnaires. The assessment covers two activities:
Feedback Questionnaires
Assess Feedback

Feedback Questionnaires
It is vital to receive feedback from the persons managing and participating in each of the training programmes. Completion of feedback forms should be mandatory for all persons participating in the training process. The forms should be completed either during the training (to record a specific issue) or as soon after finishing as practical. This will enable observations and comments to be recorded whilst the event is still fresh in the person's mind. This section of the BCP should contain a template for a Feedback Questionnaire for the training phase.

Assess Feedback
The completed questionnaires from the trainees plus the feedback from the trainers should be assessed. The key issues raised by the trainees should be noted and consideration given to whether the findings are critical to the process or not. Identified weaknesses should be notified to the BCP Team Leader and the process strengthened accordingly. If there are a significant number of negative issues raised, then consideration should be given to possible retraining once the training materials, or the process, or the training, have been improved. This feedback will enable weaknesses within the Business Recovery Process to be identified and eliminated. This section of the BCP will contain a format for assessing the training feedback.
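A format for assessing the feedback could be as simple as tallying the negative answers. The questions, field names, and the retraining threshold below are illustrative assumptions, not the handout's official format.

```python
# Illustrative tally of training feedback; a real BCP questionnaire
# would carry more fields than this sketch assumes.
responses = [
    {"question": "Were the objectives met?", "answer": "yes"},
    {"question": "Were the objectives met?", "answer": "no",
     "comment": "session felt rushed"},
    {"question": "Were the materials clear?", "answer": "yes"},
]

def negative_issues(responses):
    """Collect the negative answers so key issues can be assessed."""
    return [r for r in responses if r["answer"] == "no"]

issues = negative_issues(responses)
# a significant number of negative issues suggests possible retraining
retraining_needed = len(issues) > len(responses) // 2
```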

Keeping the Plan Up-to-date
Changes to most organizations occur all the time. Products and services change, and also their method of delivery. The increase in technology-based processes over the past ten years, and particularly within the last five, has significantly increased the level of dependency upon the availability of systems and information for the business to function effectively. These changes are likely to continue, and probably the only certainty is that the pace of change will continue to increase. It is necessary for the BCP to keep pace with these changes in order for it to be of use in the event of a disruptive emergency. This chapter deals with updating the plan and the managed process which should be applied to this updating activity.

Maintaining the BCP
It is necessary for the BCP updating process to be properly structured and controlled. This will involve the use of formalized change control procedures under the control of the BCP Team Leader. The BCP Team Leader will remain in overall control of the BCP, but business unit heads will need to keep their own sections of the BCP up to date at all times.

Change Controls for Updating the Plan
It is recommended that formal change controls are implemented to cover any changes required to the BCP. This is necessary due to the level of complexity contained within the BCP. A Change Request Form / Change Order form is to be prepared and approved in respect of each proposed change to the BCP. It is important that the relevant BCP coordinator and the Business Recovery Team are kept fully informed regarding any approved changes to the plan. This section of the BCP will contain a Change Request Form / Change Order to be used for all such changes to the BCP.

Responsibilities for Maintenance of Each Part of the Plan
Each part of the plan will be allocated to a member of the BCP Team or a Senior Manager within the organization, who will be charged with responsibility for updating and maintaining the plan. Similarly, the HRM Department will be responsible for ensuring that all emergency contact numbers for staff are kept up to date.

Test All Changes to Plan
The BCP Team will nominate one or more persons who will be responsible for co-ordinating all the testing processes and for ensuring that all changes to the plan are properly tested. Whenever changes are made or proposed to the BCP, the BCP Testing Co-ordinator will be notified. The BCP Testing Co-ordinator will then be responsible for notifying all affected units and for arranging any further testing activities. Whenever changes are made to the BCP they are to be fully tested, and appropriate amendments should be made to the training materials. An assessment should be made on whether the change necessitates any re-training activities. This section of the BCP contains a draft communication from the BCP Co-ordinator to affected business units, with information about the changes which require testing or re-testing.

Advise Person Responsible for BCP Training
A member of the BCP Team will be given responsibility for co-ordinating all training activities (BCP Training Co-ordinator). The BCP Team Leader will notify the BCP Training Co-ordinator of all approved changes to the BCP in order that the training materials can be updated.
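The Change Request / Change Order record described above can be sketched as a small data structure. The field names are illustrative assumptions, not the handout's official form.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeRequest:
    """Sketch of a BCP Change Request / Change Order record.
    Field names are illustrative, not the handout's official form."""
    requested_by: str
    section_affected: str
    description: str
    approved: bool = False
    needs_retraining: bool = False  # assessed for every change, per the text
    needs_retesting: bool = True    # all changes to the plan must be tested
    raised_on: date = field(default_factory=date.today)

cr = ChangeRequest("unit head", "IT systems recovery",
                   "new standby server added to recovery site")
```

Capturing `needs_retesting` and `needs_retraining` as explicit fields mirrors the process above: no change is complete until its testing and re-training impact has been assessed.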

Problems which can be caused by Poor Test Data
Most testers are familiar with the problems that can be caused by poor data. Most projects experience these problems at some stage - recognizing them early can allow their effects to be mitigated. The following list details the most common problems familiar to the author.

Requirements problems can be hidden in inadequate data. Inadequate data can lead to ambiguous or incomplete requirements. It is important to consider the inputs and outputs of a process for requirements modeling.

Unreliable test results. Running the same test twice produces inconsistent results. This can be a symptom of an uncontrolled environment, unrecognized database corruption, or of a failure to recognize all the data that is influential on the system. Data can play a significant role in these failures.

Degradation of test data over time. Program faults can introduce inconsistency or corruption into a database. If not spotted at the time of generation, they can cause hard-to-diagnose failures that may be apparently unrelated to the original fault. Restoring the data to a clean set gets rid of the symptom, but the original fault is undiagnosed and can carry on into live operation and perhaps future releases. Furthermore, as the data is restored, evidence of the fault is lost.

Reduced flexibility in test execution. If datasets are large or hard to set up, it may not be time-effective to construct further data to support investigative tests, and some tests may be excluded from a test run.

Increased test maintenance cost. If each test has its own data, the cost of test maintenance is correspondingly increased. If the datasets are poorly constructed, the cost increases further.

Simpler to make test mistakes. Everybody makes mistakes. Confusing or over-large datasets can make data selection mistakes more common.

Obscure results and bug reports without clearly comprehensible data. Most reports make reference to the input data and the actual and expected results. Poor data can make these reports hard to understand.

Inability to spot data corruption caused by bugs. If the data is itself hard to understand or manipulate, testers stand a greater chance of missing important diagnostic features of a failure, or indeed of missing the failure entirely. A readily understandable dataset can allow straightforward diagnosis; a complex dataset will positively hinder diagnosis.

Poor database/environment integrity. If a large number of testers, or tests, share the same dataset, they can influence and corrupt each other's results as they change the data in the system. This can not only cause false results, but can lead to database integrity problems and data corruption. This can make portions of the application untestable for many testers simultaneously.

Unwieldy volumes of data. Small datasets can be manipulated more easily than large datasets, and a few datasets are easier to manage than many datasets. A few well-known datasets can be more easily checked than a large number of complex datasets.

Business data not representatively tested. Test requirements often don't reflect the way the system will be used in practice. While this may arguably lead to broad testing for a variety of purposes, it can be hard for the business or the end users to feel confidence in the test effort if they feel distanced from it.

Confusion between developers, testers and business. Each of these groups has different data requirements. A failure to understand each other's data can lead to ongoing confusion.

Less time spent hunting bugs. The more time spent doing unproductive testing or ineffective test maintenance, the less time spent testing.

Larger proportion of problems can be traced to poor data. A proportion of all failures logged will be found, after further analysis, not to be faults at all. Poor data will cause more of these problems.

Classification of Test Data Types
In the process of testing a system, many references are made to "The Data" or "Data Problems". Although it is perhaps simpler to discuss data in these terms, it is useful to be able to classify the data according to the way it is used. The following broad categories allow data to be handled and discussed more easily.

Environmental data
Environmental data tells the system about its technical environment. It includes communications addresses, directory trees and paths, and environmental variables. The current date and time can be seen as environmental data.

Setup data
Setup data tells the system about the business rules. It might include a cross reference between country and delivery cost or method, or methods of debt collection from different kinds of customers. Typically, setup data causes different functionality to apply to otherwise similar data. With an effective approach to setup data, the business can offer new intangible products without developing new functionality - as can be seen in the mobile phone industry, where new billing products are supported and indeed created by additions to the setup data. Checks on this data, particularly in configuration data, may lend themselves to automated testing / sanity checks.

Input data
Input data is the information input by day-to-day system functions. Accounts, orders, products, actions, documents can all be input data. For the purposes of testing, it is useful to split the categorization once more.

Fixed input data
Fixed input data is available before the start of the test, and can be seen as part of the test conditions.

Consumable input data
Consumable input data forms the test input. It can also be helpful to qualify data after the system has started to use it.

Transitional data
Transitional data is data that exists only within the program, during processing of input data. Typically held in internal system variables, it is temporary and is lost at the end of processing. Transitional data is not seen outside the system (arguably, test handles and instrumentation make it output data), but its state can be inferred from actions that the system has taken.

Output Data
Output data is all the data that a system outputs as a result of processing input data and events. It includes not only files, transmissions, reports and database updates, but can also include test measurements. It generally has a correspondence with the input data (cf. Jackson's Structured Programming methodology). A subset of the output data is generally compared with the expected results at the end of test execution.

Organizing the data
A key part of any approach to data is the way the data is organized: the way it is chosen and described, influenced by the uses that are planned for it. Although this does not directly influence the quality of the tests, good data assists testing, rather than hinders it. A good approach increases data reliability, reduces data maintenance time and can help improve the test process.

Permutations
Most testers are familiar with the concept of permutation: generating tests so that all possible permutations of inputs are tested. Most are also familiar with the ways in which this generally vast set can be cut down. Pairwise, or combinatorial, testing addresses this problem by generating a set of tests that allow all possible pairs of combinations to be tested. For non-trivial sets, this produces a far smaller set of tests than the brute-force approach for all permutations; the test data can contain all possible pairs in a far smaller set than that which contains all possible permutations. The same techniques can be applied to test data. This method is most appropriate when used on fixed input data. It is most effective when the following conditions are satisfied; typically, these criteria apply to many traditional database-based systems:
Fixed input data consists of many rows
Fields are independent
You want to do many tests without loading / you do not load fixed input data for each test

This small, easy to handle and easy to manipulate dataset is capable of supporting many tests. It allows complete pairwise coverage, and so is comprehensive enough to allow a great many new, ad-hoc, or diagnostic tests. Database changes will affect it, but the data maintenance required will be greatly lessened by the small size of the dataset and the amount of reuse it allows. Finally, this method of working with fixed input data can help greatly in testing the setup data. To sum up, permutation helps because it is familiar from test planning and achieves good test coverage without having to construct massive datasets.
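As a sketch of the pairwise idea, the following example greedily selects rows from the full permutation set until every pair of field values is covered. The factors shown are illustrative, and production work would normally use an established all-pairs tool rather than this minimal greedy loop.

```python
from itertools import combinations, product

def pairwise_suite(factors):
    """Greedily pick rows until every pair of values across any two
    fields appears in at least one row (all-pairs coverage)."""
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(factors), 2):
        uncovered |= {((i, x), (j, y)) for x in a for y in b}
    suite = []
    while uncovered:
        # choose the full permutation covering the most uncovered pairs
        row = max(product(*factors),
                  key=lambda r: len(uncovered & set(combinations(enumerate(r), 2))))
        suite.append(row)
        uncovered -= set(combinations(enumerate(row), 2))
    return suite

# illustrative factors: 2 x 2 x 3 = 12 full permutations
factors = [["red", "green"], ["small", "large"], ["uk", "us", "fr"]]
suite = pairwise_suite(factors)
# far fewer rows than 12, yet every pair of values appears somewhere
```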

Permutation also allows investigative testing to be performed without having to set up more data, reduces the impact of functional/database changes, and can be used to test other data - particularly setup data.

Partitioning
Partitions allow data access to be controlled, reducing uncontrolled changes in the data. Partitions can be used independently; data use in one area will have no effect on the results of tests in another. Data can be safely and effectively partitioned by machine / database / application instance, although this partitioning can introduce configuration management problems in software version, machine setup, environmental data and data load/reload. A useful and basic way to start with partitions is to set up, not a single environment for each test or tester, but three shared by many users, so allowing different kinds of data use. These three have the following characteristics:
Safe area
o Used for enquiry tests, usability tests etc.
o No test changes the data, so the area can be trusted.
o Many testers can use it simultaneously.
Change area
o Used for tests which update/change data.
o Data must be reset or reloaded after testing.
o Used by one test/tester at a time.

Scratch area
o Used for investigative update tests and those which have unusual requirements.
o Existing data cannot be trusted.
o Used at tester's own risk!

Testing rarely has the luxury of completely separate environments for each test and each tester. Controlling data, and the access to data, in a system can be fraught. Many different stakeholders have different requirements of the data, but a common requirement is that of exclusive use. While the impact of this requirement should not be underestimated, a number of stakeholders may be able to work with the same environmental data, and to a lesser extent, setup data - and their work may not need to change the environmental or setup data. The test strategy can take advantage of this by disciplined use of text / value fields, allowing the use of 'soft' partitions. 'Soft' partitions allow the data to be split up conceptually, rather than physically. Although testers are able to interfere with each other's tests, the team can be educated to avoid each other's work. If, for instance, tester 1's tests may only use customers with Russian nationality and tester 2's tests only those with French, the two sets of work can operate independently in the same dataset. A safe area could consist of London addresses, the change area Manchester addresses, and the scratch area Bristol addresses. Typically, values in free-text fields are used for soft partitioning.

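The soft-partitioning idea above can be sketched as a simple filter on an agreed free-text value. The customer rows, field names and values here are illustrative assumptions.

```python
# Soft partitioning sketch: each tester works only on rows matching an
# agreed free-text value, so their work does not collide.
customers = [
    {"id": 1, "nationality": "Russian", "city": "London"},
    {"id": 2, "nationality": "French",  "city": "Manchester"},
    {"id": 3, "nationality": "Russian", "city": "Bristol"},
]

def soft_partition(rows, key, value):
    """Select only the rows belonging to one conceptual partition."""
    return [r for r in rows if r[key] == value]

tester1_rows = soft_partition(customers, "nationality", "Russian")
tester2_rows = soft_partition(customers, "nationality", "French")
# the two testers' slices do not overlap, so their work is independent
```

Nothing physically prevents a tester from touching another partition; the discipline is conventional, which is why the text stresses educating the team.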

Data partitions help because they:
Allow controlled and reliable data, reducing data corruption / change problems
Can reduce the need for exclusive access to environments/machines

Clarity
Permutation techniques may make data easier to grasp by making the datasets small and commonly used, but we can make our data clearer still by describing each row in its own free text fields, allowing testers to make a simple comparison between the free text (which is generally displayed on output) and actions based on fields which tend not to be directly displayed. Use of free text fields with some correspondence to the internals of the record allows output to be checked more easily. Testers often talk about items of data, referring to them by anthropomorphic personification - that is to say, they give them names. This allows shorthand, but also acts as jargon, excluding those who are not in the know. Setting this data, early on in testing, to have some meaningful value can be very useful, allowing testers to sense-check input and output data, and choose appropriate input data for investigative tests. Reports, data extracts and sanity checks can also make use of these; sorting or selecting on a free text field that should have some correspondence with a functional field can help spot problems or eliminate unaffected data. Data is often used to communicate and illustrate problems to coders and to the business. However, there is generally no mandate for outside groups to understand the format or requirements of test data. Giving some meaning to the data that can be referred to directly can help with improving mutual understanding.
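A sanity check of the kind described above can be sketched by comparing a free-text field against the functional field it is meant to mirror. The field names and the "<tier>-<discount>pc" naming convention are assumptions for illustration.

```python
# Rows whose free-text label disagrees with the functional field are suspect;
# the label convention here ("GOLD-20pc" meaning 20% discount) is illustrative.
rows = [
    {"label": "GOLD-20pc",   "discount": 20},
    {"label": "SILVER-10pc", "discount": 10},
    {"label": "GOLD-20pc",   "discount": 15},  # corrupted: label says 20
]

def inconsistent(rows):
    """Return rows whose free-text label disagrees with the functional field."""
    return [r for r in rows if not r["label"].endswith(f"-{r['discount']}pc")]

suspects = inconsistent(rows)
```

Because the label is generally visible on output while the functional field is not, this kind of cross-check can be run as a report or data extract to spot corrupted or inconsistent data.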
Clarity helps because it:
Improves communication within and outside the team
Reduces test errors caused by using the wrong data
Allows another way of doing sanity checks for corrupted or inconsistent data
Helps when checking data after input
Helps in selecting data for investigative tests

Data Load and Data Maintenance
An important consideration in preparing data for functional testing is the ways in which the data can be loaded into the system, and the possibility and ease of maintenance.

Loading the data
Data can be loaded into a test system in three general ways.

Using the system you're trying to test
o The data can be manually entered, or data entry can be automated by using a capture/replay tool.


o This method can be very slow for large datasets. It uses the system's own validation and insertion methods, and can both be hampered by faults in the system, and help pinpoint them. If the system is working well, data integrity can be ensured by using this method, and internally assigned keys are likely to be effective and consistent.
o Data can be well-described in test scripts, or constructed and held in flat files. It may, however, be input in an ad-hoc way, which is unlikely to gain the advantages of good data listed above.

Using a data load tool
o Data load tools directly manipulate the system's underlying data structures. As they do not use the system's own validation, they can be the only way to get broken data into the system in a consistent fashion. As they do not use the system to load the data, they can provide a convenient workaround to known faults in the system's data load routines.
o However, they may come up against problems when generating internal keys, and can have problems with data integrity and parent/child relationships.

Not loaded at all
o Some tests simply take whatever is in the system and try to test with it. This can be appropriate where a dataset is known and consistent, or has been set up by a prior round of testing. It can also be appropriate in environments where data cannot be reloaded, such as the live system.
o However, it can be symptomatic of an uncontrolled approach to data, and is not often desirable.

Data loaded can have a range of origins. In some cases, all new data is created for testing. This data may be complete and well specified, but can be hard to generate. A common compromise is to use old data from an existing system, selected for testing, filtered for relevance and duplicates, and migrated to the target data format. In some cases, particularly for minor system upgrades, the complete set of live data is loaded into the system, but stripped of personal details for privacy reasons. While this last method may seem complete, it has disadvantages in that the data may not fully support testing, and that the large volume of data may make test results hard to interpret.

Environmental data tends to be manually loaded, either at installation or by manipulating environmental or configuration scripts. Large volumes of setup data can often be generated from existing datasets and loaded using a data load tool, while small volumes of setup data often have an associated system maintenance function and can be input using the system. Fixed input data may be generated or migrated and is loaded using any and all of the methods above, while consumable input data is typically listed in test scripts or generated as an input to automation tools. When data is loaded, it can append itself to existing data, overwrite existing data, or delete existing data first. Each is appropriate in different circumstances, and due consideration should be given to the consequences.

Testing the Data
A theme brought out at the start of this paper was 'A System is Programmed by its Data'. In order to test the system, one must also test the data it is configured with: the environmental and setup data. Environmental data is necessarily different between the test and live environment. Although testing can verify that the environmental variables are being read and used correctly, there is little point in testing their values on a system other than the target system. Environmental data is often checked

manually on the live system during implementation and rollout, and the wide variety of possible methods will not be discussed further here.

Setup data can change often, as the business environment changes - particularly if there is a long period between requirements gathering and live rollout. Effective testing of setup data is a necessary part of system testing. Testing done on the setup data needs to cover two questions: Does the planned/current setup data induce the functionality that the business requires? Will changes made to the setup data have the desired effect? Testing for these two questions only becomes possible when that data is controlled. The setup data should be organized to allow a good variety of scenarios to be considered. The setup data needs to be able to be loaded and maintained easily and repeatably. The business needs to become involved in the data so that their setup for live can be properly tested. When testing the setup data, it is important to have a well-known set of fixed input data and consumable input data. This allows the effects of changes made to the setup data to be assessed repeatably, and allows results to be compared.

The advantages of testing the setup data include:
Overall testing will be improved if the quality of the setup data improves
Problems due to faults in the live setup data will be reduced
The business can re-configure the software for new business needs with increased confidence
Data-related failures in the live system can be assessed in the light of good data testing

Conclusion
Data can be influential on the quality of testing. Common data problems can be avoided or reduced with preparation and automation. Well-planned data can allow flexibility and help reduce the cost of test maintenance, and good data can be used as a tool to enable and improve communication throughout the project. Aspects of all the elements above come into play throughout testing. The following points summarize the actions that can influence the quality of the data and the effectiveness of its usage:
Plan the data for maintenance and flexibility
Know your data, and make its structure and content transparent
Use the data to improve understanding throughout testing and the business
Test setup data as you would test functionality

Test Logs

Introduction

A test problem is a condition that exists within the software system that needs to be addressed. Carefully and completely documenting a test problem is the first step in correcting it. The following four attributes should be developed for all test problems:

Statement of condition – Tells what is.
Criteria – Tells what should be.
Effect – Tells why the difference between what is and what should be is significant.
Cause – Tells the reasons for the deviation.

Essentially, the user compares "what is" with "what should be". When a deviation is identified between what is found to actually exist and what the user thinks is correct or proper, the first essential step toward the development of a problem statement has occurred. It is difficult to visualize any type of problem that is not in some way characterized by this deviation. The "what is" can be called the statement of condition; the "what should be" shall be called the criteria. These concepts are the first two, and the most basic, attributes of a problem statement. If a comparison between the two is of little or no practical consequence, no finding exists.

Factors defining the Test Log generation:

Deviation – Problem statements begin to emerge by a process of comparison. The actual deviation is the difference or gap between "what is" and "what is desired".

Statement of condition – Uncovering and documenting the facts as they exist. What is a fact? The statement of condition will depend on the nature and extent of the evidence or support that is examined and noted. For the facts making up the statement of condition, the I/S professional will need to ensure that the information is accurate, well supported, and worded as clearly and precisely as possible.

Criteria – The user's statement of what is desired, representing "what should be".

Together, the statement of condition and the criteria are the basis for a finding. When one or more of these attributes is missing, questions almost always arise, such as:

Criteria: Why is the current state inadequate?
Effect: How significant is it?
Cause: What could have caused the problem?

Identification of the cause is necessary as a basis for corrective action. A well-developed problem statement will include each of these attributes. The statement of condition should document as many of the following attributes as appropriate to the problem.

©Copyright 2007, Cognizant Technology Solutions. All Rights Reserved. C3: Protected
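The four attributes above can be captured in a simple record structure so that no attribute is forgotten when a problem is documented. A minimal sketch in Python; the class and field names are illustrative assumptions, not prescribed by the handout:

```python
from dataclasses import dataclass

@dataclass
class ProblemStatement:
    """One documented test problem, holding the four attributes."""
    condition: str  # Statement of condition: what is
    criteria: str   # Criteria: what should be
    effect: str     # Why the deviation is significant
    cause: str      # Reason for the deviation

    def deviation(self) -> str:
        """The finding is the gap between 'what is' and 'what should be'."""
        return f"Expected: {self.criteria}; Actual: {self.condition}"

# Example usage
p = ProblemStatement(
    condition="Order total omits sales tax",
    criteria="Order total must include applicable sales tax",
    effect="Customers are under-billed on every taxable order",
    cause="Tax rate lookup returns 0 for unknown regions",
)
print(p.deviation())
```

Because every field is required, constructing the record forces the tester to supply condition, criteria, effect, and cause together.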

Activities Involved

The specific business or administrative activities performed during Test Log generation are as follows:

Inputs – The triggers, events, or documents that cause this activity to be executed.
Procedures used to perform work – The specific step-by-step activities used in producing the output of this activity.
Outputs/Deliverables – The products that are produced from the activity.
Users/Customers served – The organization, individuals, or class of users/customers served by this activity.
Deficiencies noted – The status of the results of executing this activity and any appropriate interpretation of those facts.

The criterion is the user's statement of what is desired. It can be stated in either negative or positive terms: for example, it could indicate the need to reduce complaints or delays, or a desired processing turnaround time. The tester should identify and document both the statement of condition and the statement of criteria, and then use a work paper to describe the problem. For example, the following work paper provides the information for Test Log documentation:
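The work-paper fields above map naturally onto a small record. A sketch in Python; the function and field names are assumptions for illustration and are not taken from the handout's actual work paper:

```python
def make_test_log_entry(inputs, procedures, outputs, users_served, deficiencies):
    """Build one Test Log work-paper entry from the five activity attributes."""
    return {
        "inputs": inputs,              # triggers, events, or documents
        "procedures": procedures,      # step-by-step activities performed
        "outputs": outputs,            # products produced from the activity
        "users_served": users_served,  # organizations/individuals/classes served
        "deficiencies": deficiencies,  # results status and interpretation
    }

# Example usage with hypothetical values
entry = make_test_log_entry(
    inputs=["Nightly batch trigger"],
    procedures=["Load orders", "Apply tax", "Post totals"],
    outputs=["Posted ledger"],
    users_served=["Billing department"],
    deficiencies=["Tax omitted for 2 regions"],
)
```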

Collecting Status Data

Four categories of data will be collected during testing. These are explained in the following paragraphs.

Test Results Data

This data will include:

Test factors – The factors incorporated in the plan, the validation of which becomes the test objective.
Business objectives – The validation that specific business objectives have been met.
Interface objectives – Validation that data/objects can be correctly passed among software components.
Functions/sub-functions – Identifiable software components, normally associated with the requirements of the software.
Units – The smallest identifiable software components.
Platform – The hardware and software environment in which the software system will operate.

Test Transactions, Test Suites, and Test Events

These are the test products produced by the test team to perform testing:

Test transactions/events – The types of tests that will be conducted during the execution of tests, which will be based on software requirements.
Inspections – A verification of process deliverables against deliverable specifications.
Reviews – Verification that the process deliverables/phases are meeting the users' true needs.

Defect

This category includes a description of the individual defects uncovered during the testing process. This description includes, but is not limited to:

Date the defect was uncovered
Name of the defect
Location of the defect

Severity of the defect
Type of defect
How the defect was uncovered (test data/test script)

The Test Logs should add to this information in the form of where the defect originated, when it was corrected, and when it was entered for retest.

Developing Test Status Reports

Reporting software status involves the following steps:

Establish a measurement team
Inventory existing project measures
Develop a consistent set of project metrics
Define process requirements
Develop and implement the process
Monitor the process

The test process should produce a continuous series of reports that describe the status of testing. The test reports are for the use of testers, test managers, and the software development team. The frequency of the test reports should be set at the discretion of the team, based on the extensiveness of the test process.

Use of a Function/Test matrix: This matrix shows which tests must be performed in order to validate the functions; it is also used to determine the status of testing. Many organizations use a spreadsheet package to maintain test results, and the most common test report is a simple spreadsheet. It indicates the project component for which status is requested, the test that will be performed to determine the status of that component, and the results of testing at any point in time. Each intersection can be coded with a number or symbol to indicate the following:

1 = Test is needed, but not performed
2 = Test currently being performed
3 = Minor defect noted
4 = Major defect noted
5 = Test complete and function is defect-free for the criteria included in this test

Storing Data Collected During Testing

It is recommended that a database be established in which to store the results collected during testing. It is also suggested that the database be made accessible online through client/server systems, so that everyone with a vested interest in the status of the project can readily access status updates.
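The recommendation above, a shared database of test results coded with the 1-5 status values of the function/test matrix, can be sketched with an embedded database. A minimal illustration using Python's sqlite3; the schema and sample rows are assumptions, not from the handout (a real project would use a shared database server rather than an in-memory database):

```python
import sqlite3

# Status codes from the function/test matrix
STATUS = {
    1: "Test needed, not performed",
    2: "Test currently being performed",
    3: "Minor defect noted",
    4: "Major defect noted",
    5: "Test complete, function defect-free",
}

conn = sqlite3.connect(":memory:")  # stand-in for a shared project database
conn.execute("""CREATE TABLE test_matrix (
    function_name TEXT,
    test_name TEXT,
    status INTEGER,
    PRIMARY KEY (function_name, test_name))""")

# Record results at each function/test intersection
rows = [
    ("Login", "Boundary test", 5),
    ("Login", "Security test", 4),
    ("Checkout", "Boundary test", 1),
]
conn.executemany("INSERT INTO test_matrix VALUES (?, ?, ?)", rows)

# Status report query: every intersection with a major defect noted
major = conn.execute(
    "SELECT function_name, test_name FROM test_matrix WHERE status = 4"
).fetchall()
print(major)
```

Because the results live in one queryable store, any stakeholder can pull the current status report at any time instead of waiting for a circulated spreadsheet.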

Methods of Test Reporting

Reporting tools – Use of word processing, database, defect tracking, and graphic tools to prepare test reports.

Word processing: One way of increasing the utility of computers and word processors for the teaching of writing may be to use software that guides the processes of generating, organizing, composing, and revising text. This allows each person to use the normal functions of the computer keyboard that are common to all word processors, email editors, order entry systems, and database management products. A one-page summary report may be printed with either the Report Manager program or from the individual keyboard or keypad software at any time. From the Report Manager, you can quickly scan through any number of these reports and see how each person's history compares. Individual reports include all of the following information:

Status Report
Word Processing Tests or Keypad Tests
Basic Skills Tests or Data Entry Tests
Progress Graph
Game Scores
Test Report for each test

Database reporting tools: Some database test tools, such as DataVision, are database reporting tools similar to Crystal Reports. Reports can be viewed and printed from the application or output as HTML, XML, PostScript, LaTeX2e, DocBook, or tab- or comma-separated text files. From the LaTeX2e and DocBook output files you can in turn produce PDF, PostScript, HTML, text, and more. Some query tools available for Linux-based databases include:

MySQL
dbMetrix
PgAccess
Cognos Powerhouse (not yet available for Linux; Cognos is looking into what interest people have in the product to assess what its strategy should be with respect to the Linux market)

GRG – GNU Report Generator: The GRG program reads record and field information from a dBase3+ file, a delimited ASCII text file, or an SQL query to an RDBMS, and produces a report listing. The program was loosely designed to produce TeX/LaTeX formatted output, but plain ASCII text, troff, PostScript, HTML, or any other kind of ASCII-based output format can be produced just as easily.
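A report generator in the GRG mold reads delimited records and emits a formatted listing. A small stand-in sketch in Python (this is an illustration of the idea, not GRG itself; the field names and sample data are invented):

```python
import csv
import io

def report_listing(delimited_text, title):
    """Read comma-separated records and produce a plain-text report listing."""
    rows = list(csv.reader(io.StringIO(delimited_text)))
    header, body = rows[0], rows[1:]
    lines = [title, "-" * len(title)]
    for row in body:
        # One listing line per record, field=value pairs
        lines.append("  ".join(f"{h}={v}" for h, v in zip(header, row)))
    return "\n".join(lines)

# Example: a delimited test-status file
data = "tester,tests_run,defects\nasha,40,3\nben,35,1\n"
print(report_listing(data, "Weekly Test Status"))
```

The same record-reading loop could just as easily emit HTML or LaTeX rows instead of plain text, which is exactly the flexibility the GRG description claims.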

Test report standards – Defining the components that should be included in a test report.
Statistical analysis – The ability to draw statistically valid conclusions from quantitative test results.

Test Director

Test Director facilitates a consistent and repetitive testing process:

A central repository for all testing assets facilitates the adoption of a more consistent testing process, which can be repeated throughout the application life cycle.
Provides analysis and decision support: graphs and reports help analyze application readiness at any point in the testing process. Requirements coverage, run schedules, test execution progress, and defect statistics can be used for production planning.
Provides anytime, anywhere access to test assets: using Test Director's web interface, testers, developers, business analysts, and the client can participate in and contribute to the testing process.
Traceability throughout the testing process: test cases can be mapped to requirements, providing adequate visibility over the test coverage of requirements. Test Director links requirements to test cases and test cases to defects.
Manages both manual and automated testing: Test Director can manage both manual and automated tests (WinRunner), and scheduling of automated tests can be done effectively using Test Director.

Testing Data Used for Metrics

Testers are typically responsible for reporting their test status at regular intervals. The following measurements generated during testing are applicable:

o Total number of tests
o Number of tests executed to date
o Number of tests executed successfully to date

Data concerning software defects include:

o Total number of defects corrected in each activity
o Total number of defects entered in each activity
o Average duration between defect detection and defect correction
o Average effort to correct a defect
o Total number of defects remaining at delivery

Software performance data is usually generated during system testing, once the software has been integrated and functional testing is complete:

o Average CPU utilization
o Average memory utilization
o Measured I/O transaction rate
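Several of the defect measurements listed above reduce to simple arithmetic over the defect log. A sketch in Python; the log entries and field names are hypothetical:

```python
def defect_metrics(defects):
    """Compute summary defect metrics from a defect log.

    Each entry is a dict with detected_day, corrected_day (None if still open),
    and correction_effort_hours (None if still open).
    """
    corrected = [d for d in defects if d["corrected_day"] is not None]
    still_open = [d for d in defects if d["corrected_day"] is None]
    # Average duration between defect detection and defect correction
    avg_duration = (sum(d["corrected_day"] - d["detected_day"] for d in corrected)
                    / len(corrected)) if corrected else 0.0
    # Average effort to correct a defect
    avg_effort = (sum(d["correction_effort_hours"] for d in corrected)
                  / len(corrected)) if corrected else 0.0
    return {
        "avg_days_detect_to_correct": avg_duration,
        "avg_effort_hours": avg_effort,
        "remaining_at_delivery": len(still_open),
    }

# Example usage with a hypothetical three-defect log
log = [
    {"detected_day": 1, "corrected_day": 3, "correction_effort_hours": 4.0},
    {"detected_day": 2, "corrected_day": 6, "correction_effort_hours": 2.0},
    {"detected_day": 5, "corrected_day": None, "correction_effort_hours": None},
]
m = defect_metrics(log)
print(m)
```

Collected at regular intervals, these numbers give the trend data that the status reports described earlier are built from.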

Test Reporting

A final test report should be prepared at the conclusion of each test activity. This includes the following:

Individual Project Test Report
Integration Test Report
System Test Report
Acceptance Test Report

These test reports are designed to document the results of testing as defined in the test plan. Knowing which functions have been correctly tested and which ones still contain defects can assist in taking corrective actions. The test report can be a combination of electronic data and hard copy: for example, if the function/test matrix is maintained electronically, there is no reason to print it, as the paper report will summarize the data, draw appropriate conclusions, and present recommendations.

Purpose of a Test Report

The test report has one immediate and three long-term purposes. The immediate purpose is to provide information to the customers of the software system so that they can determine whether the system is ready for production and, if so, assess the potential consequences and initiate appropriate actions to minimize those consequences.

The first of the three long-term uses is for the project to trace problems in the event the application malfunctions in production. The second long-term purpose is to use the data to analyze the rework process and make changes that prevent defects from occurring in the future; the defect-prone components identify tasks/steps that, if improved, could eliminate or minimize the occurrence of high-frequency defects. The third long-term purpose is to show what was accomplished, for example in case of a Y2K lawsuit.

Individual Project Test Report

These reports focus on individual projects (software systems). When different testers test individual projects, they should each prepare a report on their results.

Integration Test Report

Integration testing tests the interfaces between individual projects. A good test plan will identify the interfaces and institute test conditions that will validate them. The integration test report follows the format of the individual project test report, except that the conditions tested are the interfaces.

System Test Report

A system test plan standard identifies the objectives of testing, what is to be tested, how it is to be tested, and when the tests should occur. The system test report should present the results of executing that test plan. If these details are maintained electronically, they need only be referenced in the report, not included in it.

Acceptance Test Report

There are two primary objectives of the acceptance test report. The first is to ensure that the system as implemented meets the real operating needs of the user/customer; if the defined requirements are those true needs, testing should have accomplished this objective. The second is to ensure that the software system can operate in the real-world user environment, which includes people skills and attitudes, time pressures, changing business conditions, and so forth. The acceptance test report should encompass these criteria for user acceptance.

Conclusion

The test logs obtained from the execution of the tests, and finally the test reports, should be designed to accomplish the following objectives:

Provide information to the customer on whether the system should be placed into production and, if so, the potential consequences and the appropriate actions to minimize those consequences.
One long-term objective is for the project, and the other is for the information technology function. The project can use the test report to trace problems in the event the application malfunctions in production; knowing which functions have been correctly tested and which ones still contain defects can assist in taking corrective actions.
The data can also be used to analyze the development process to make changes to prevent defects from occurring in the future. The defect-prone components identify tasks/steps that, if improved, could eliminate or minimize the occurrence of high-frequency defects in the future.

Test Report

A test report is a document that is prepared once the testing of a software product is complete and the delivery is to be made to the customer. This document contains a summary of the entire project and has to be presented in a way that any person who has not worked on the project can still get a good overview of the testing effort.

Contents of a Test Report

The contents of a test report are as follows:

Executive Summary
Overview
Application Overview
Testing Scope
Test Details
Test Approach
Types of testing conducted
Test Environment
Tools Used
Metrics
Test Results
Test Deliverables
Recommendations

Test Case

A test case is a testing work product that automatically performs a single test on an executable work product.

Goals

The goals of a test case are to automate or document the following:

Perform a single test (e.g., a single test of a use case path or class method).
Cause failures that uncover underlying defects so that they can be identified and removed.
Help developers understand the behavior of the item under test.
Help developers improve the quality of the specifications (e.g., use case paths and class responsibilities) of the item under test.
Help improve the quality of the item under test.

Objectives

To support these goals, the objectives of a single test case include:

Document the purpose of the test case (i.e., the part of the item under test being tested, the type of failures to be elicited).
Document the producer of the test case.

Prepare the item under test for testing (i.e., place the item under test, the test stimuli, and the collaborators of the item under test into their correct pre-test states, and provide the necessary test data).
Stimulate the item (e.g., send it test messages, raise test exceptions).
Observe how the item responds (e.g., values returned, exceptions raised, changes in state, and messages sent) to the test stimuli.
Compare the actual responses (i.e., postconditions) with the expected responses to identify failures that imply the existence of defects in the item under test.
Report the results of the associated test.

Benefits

A test case provides the following benefits:

Automates a single test, thereby supporting regression testing.
Documents a single test in terms of objective, oracle, etc.
Documents test results.

Failure to produce test cases increases the probability that the item under test will contain defects that make the application fail to meet its requirements. Failure to automate test cases makes regression testing more expensive and less likely to occur.

Contents

Test case objectives
Test preparation (e.g., to place objects under test into the appropriate pre-test states)
Test stimuli (e.g., to send test messages or raise test exceptions)
Expected behavior (i.e., test oracle)
Test reporting script
Test finalization script

Stakeholders

Producers:
o The Requirements Team for requirements model test cases.
o The Architecture Team for architecture model test cases.
o The Software Development Team for design model and unit test cases.
o The Integration Team for integration test cases.
o The Independent Test Team for system test cases.

Maintainers:
o The Requirements Team for requirements model test cases.
o The Architecture Team for architecture model test cases.
o The Software Development Team for design model and unit test cases.
o The Integration Team for integration test cases.
o The Independent Test Team for system test cases.

Evaluator: Test Inspection Team
Approvers: None.
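The structure above (preparation, stimuli, observed responses, oracle comparison, and reporting) maps directly onto a modern xUnit test. A sketch using Python's unittest; the item under test (a ShoppingCart class) is invented for illustration:

```python
import io
import unittest

class ShoppingCart:
    """Hypothetical item under test, invented for illustration."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        if price < 0:
            raise ValueError("price must be non-negative")
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

class TestShoppingCartTotal(unittest.TestCase):
    """Objective: test the total() responsibility of ShoppingCart."""

    def setUp(self):
        # Test preparation: place the item under test in its pre-test state
        self.cart = ShoppingCart()
        self.cart.add("book", 10.0)

    def test_total_after_add(self):
        # Test stimulus: send a test message
        self.cart.add("pen", 2.5)
        # Expected behavior (oracle): compare actual response with expected
        self.assertEqual(self.cart.total(), 12.5)

    def test_negative_price_raises(self):
        # Stimulus intended to raise a test exception
        with self.assertRaises(ValueError):
            self.cart.add("gift", -1.0)

# Test reporting: run the case and capture the results
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestShoppingCartTotal)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print(f"ran {result.testsRun} tests, success: {result.wasSuccessful()}")
```

Because the whole sequence is scripted, the same test runs identically every time, which is what makes automated test cases usable for regression testing.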

Users:
o The Architecture Team for: 1) model testing of the software architecture; 2) unit testing of the software architecture prototype.
o The Software Development Team for: 1) model testing of the software design; 2) unit testing of the software components.
o The Integration Team for performing integration testing.
o The Test Team for performing system testing.

Phases

Initiation: Completed
Construction: Completed
Delivery: Completed
Usage: Maintained
Retirement: Archived

Preconditions

A test case typically can be started if the following preconditions hold:

The relevant sections of the Project Test Plan are completed.
The relevant requirements, architecture, or design are completed.
The relevant team is staffed.
The relevant test suite is started.
The relevant item under test is started.

Inputs

Work products:

Project Test Plan
System Requirements Specification (use case model, domain object model)
System Architecture Document
Software Architecture Document
Software components (e.g., Javadoc including responsibilities, method signatures, assertions, branching and looping logic)

Stakeholders: None

Guidelines

A test case is constrained by the following conventions:

Content and Format Standard
Inspection Checklist

Test cases will be used at all levels of testing (e.g., model testing, unit testing, integration testing, and system testing). A test case reflects what tests need to be performed: it has a set of test inputs, execution conditions, and expected results. Test cases need not document how to perform the test unless they are automated; when performed manually, this information is documented in the associated test procedure. Test cases do not document the results of the tests, which are documented in the associated test report.

To support regression testing, test cases will be automated whenever practical. The oracle can be incorrect, and the test case developer can make mistakes, so test cases need to be evaluated for defects. If the quality of the test cases is not at least as good as the quality of the item under test, then it will be difficult to know whether the defect causing a failure is in the item under test or in the test case.

SUMMARY

A test plan contains a description of the testing objectives and goals, the test strategy/approach based on customer priorities, the features to test with their priority/criticality, the test environment (hardware, software, network, communication, etc.), and the test deliverables.

Test your Understanding

1) The document that describes the expected output as well as inputs is a:
a. Test Specification
b. Test Result
c. Test Case
d. Test harness

2) A series of test data that is logically tested together is a:
a. Test Case
b. Test Plan
c. Test Log
d. Test Script

Answers: 1) c 2) c

Chapter 7: Defect Management

Learning Objective

After completing this chapter, you will be able to:

Describe the defect lifecycle, and defect tracking and reporting.

What is a Defect?

A mismatch between the application and its specification is a defect. A software error is present when the program does not do what its end user expects it to do. A defect is a product anomaly or flaw; defects include such things as omissions and imperfections found during the testing phases. Symptoms (flaws) of faults contained in software that is sufficiently mature for production are considered defects. A deviation from expectation that is to be tracked and resolved is also termed a defect. Quality is the indication of how well the system meets the requirements, so in this context defects are identified as any failure to meet the system requirements.

An evaluation of the defects discovered during testing provides the best indication of software quality. Defect evaluation is based on methods that range from a simple defect count to rigorous statistical modeling. Rigorous evaluation uses assumptions about the arrival or discovery rates of defects during the testing process; the actual data about defect rates are then fit to the model. Such an evaluation estimates the current system reliability and predicts how the reliability will grow if testing and defect removal continue. This evaluation is described as system reliability growth modelling.

Defect Classification

The severity of bugs will be classified as follows:

Defect Lifecycle

Defect Reporting and Tracking

The key to making a good report is providing the development staff with as much information as necessary to reproduce the bug. This can be broken down into five points:

Give a brief description of the problem.
List the steps that are needed to reproduce the bug or problem.
Supply all relevant information such as version, project, and data used.
Supply a copy of all relevant reports and data, including copies of the expected results.
Summarize what you think the problem is.

When you are reporting a defect, the more information you supply, the easier it will be for the developers to determine the problem and fix it. Simple problems can have a simple report, but the more complex the problem, the more information the developer is going to need. For example, cosmetic errors may only require a brief description of the screen;

an error in processing will require a more detailed description, such as:

The name of the process and how to get to it.
Documentation on what was expected. (Expected results)
Documentation on what actually happened. (Perceived results)
An explanation of how the results differed.
The source of the expected results, if available. (This includes spreadsheets, an earlier version of the software, and any formulas used.)

As a rule, the detail of your report will increase based on:

The severity of the bug.
The level of the processing.
The complexity of reproducing the bug.

Anatomy of a bug report

Bug reports need to do more than just describe the bug. They have to give developers something to work with so that they can successfully reproduce the problem. In most cases, the more information, and the more correct the information, the better. The report should explain exactly how to reproduce the problem and exactly what the problem is. The basic items in a report are as follows:

Version: This is very important. In most cases the product is not static; developers will have been working on it, and if they have found a bug, it may already have been reported or even fixed. In either case, they need to know which version to use when testing out the bug.

Product: If you are developing more than one product, identify the product in question.

Steps: List the steps taken to recreate the bug. When you report the steps, they should be the clearest steps to recreating the bug. Include all proper menu names; don't abbreviate and don't assume anything. After you have finished writing down the steps, follow them: make sure you have included everything you type and do to get to the problem. Go through the process again and see if there are any steps that can be removed.

Data: Unless you are reporting something very simple, such as a cosmetic error on a screen, you should include a dataset that exhibits the error. If you have to enter any data, supply the exact data entered. If there are parameters, list them. If you are reporting a processing error, you should include two versions of the dataset: one before the process and one after. If the dataset from before the process is not included, developers will be forced to try to find the bug based on forensic evidence; with the data, developers can trace what is happening. Copies of any output should be included.
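The basic report items above (version, product, steps, data, expected and perceived results) can be enforced with a small template so that no item is omitted. A sketch in Python; the function, fields, and example values are assumptions for illustration:

```python
def format_bug_report(version, product, steps, data_used, expected, perceived):
    """Render a bug report containing the basic items described above."""
    if not steps:
        raise ValueError("a bug report must list reproduction steps")
    lines = [
        f"Product: {product}",
        f"Version: {version}",
        "Steps to reproduce:",
        # Numbered, explicit steps: no abbreviations, no assumptions
        *[f"  {i}. {s}" for i, s in enumerate(steps, 1)],
        f"Data used: {data_used}",
        f"Expected results: {expected}",
        f"Perceived results: {perceived}",
    ]
    return "\n".join(lines)

# Example usage with hypothetical values
report = format_bug_report(
    version="2.3.1",
    product="Order Entry",
    steps=["Open Orders > New", "Enter quantity 0", "Click Save"],
    data_used="dataset orders_before.csv",
    expected="Validation error for zero quantity",
    perceived="Order saved with quantity 0",
)
print(report)
```

A template like this is a cheap way to guarantee that every report reaching the developers carries the version, the steps, and both the expected and perceived results.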

Description: Explain what is wrong, and try to weed out any extraneous information. It is not enough to say that something is wrong; the report must also say what the system should be doing. Identify the individual items that are wrong, and include what you expected. If the process is a report, include a copy of the report with the problem areas highlighted, along with a list of what was expected. If you have a report to compare against, include it and its source information (if it is a printout from a previous version, include the version number and the dataset used).

Supporting documentation: If available, supply documentation. It should include information about the product, including the version number, and what data was used. This information should be stored in a centralized location so that developers and testers have access to it: the developers need it to reproduce the bug, identify it, and fix it, and the testers will need it for later regression testing and verification.

Remember to report one problem at a time; don't combine bugs in one report.

Defect Tracking

After a defect has been found, it must be reported to development so that it can be fixed. The initial state of a defect will be New. The Project Lead of the development team will review the defect and set it to one of the following statuses:

o Open – Accepts the bug and assigns it to a developer.
o Invalid Bug – The reported bug is not a valid one as per the requirements/design.
o As Designed – This is intended functionality as per the requirements/design.
o Deferred – This will be an enhancement.
o Duplicate – The bug has already been reported.
o Document – Once a defect is set to any of the above statuses apart from Open, and the testing team does not agree with the development team, it is set to Document status.

Once the development team has started working on the defect, the status is set to WIP (Work In Progress); if the development team is waiting for a go-ahead or some technical feedback, they will set it to Dev Waiting. After the development team has fixed the defect, the status is set to Fixed, which means the defect is ready to re-test. On re-testing the defect, if the fixed defect satisfies the requirements/passes the test case, it is set to Closed; if the defect still exists, the status is set to Reopened, and it will follow the same cycle as an open defect.

SUMMARY

A bug report is a case against a product. In order to work, it must supply all the information necessary not only to identify the problem but also what is needed to fix it. The report should be written in clear, concise steps, so that someone who has never seen the system can follow the steps and reproduce the problem.
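The status flow above can be made explicit as a small state machine, so that a tracking tool rejects illegal transitions (for example, New jumping straight to Fixed). A sketch in Python; the transition table is one interpretation of the lifecycle described above, not a prescribed standard:

```python
# Allowed defect status transitions, per the lifecycle described above
TRANSITIONS = {
    "New": {"Open", "Invalid Bug", "As Designed", "Deferred", "Duplicate"},
    "Open": {"WIP", "Dev Waiting"},
    "Dev Waiting": {"WIP"},
    "WIP": {"Fixed"},
    "Fixed": {"Closed", "Reopened"},          # decided on re-test
    "Reopened": {"WIP", "Dev Waiting"},        # follows the open-defect cycle
    "Invalid Bug": {"Document"},
    "As Designed": {"Document"},
    "Deferred": {"Document"},
    "Duplicate": {"Document"},
}

class Defect:
    def __init__(self, summary):
        self.summary = summary
        self.status = "New"       # the initial state of every defect
        self.history = ["New"]

    def move(self, new_status):
        """Apply a status change, rejecting transitions the lifecycle forbids."""
        if new_status not in TRANSITIONS.get(self.status, set()):
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status
        self.history.append(new_status)

# Example: a defect that is accepted, fixed, and passes re-test
d = Defect("Order saved with quantity 0")
for s in ["Open", "WIP", "Fixed", "Closed"]:
    d.move(s)
print(d.history)
```

The recorded history doubles as the audit trail that testers need for regression testing and verification.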

Test your Understanding

1) A defect is defined as:
a. A software problem that requires corrective action
b. An unexpected error/event that needs investigation
c. A temporary problem that is not related to the software
d. All the above

2) The following is NOT a defect management activity:
a. Logging a defect
b. Tracking a defect to closure
c. Finding the person who introduced the defect
d. Preventing defects

Answers: 1) a & b 2) c

Chapter 8: Automation

Learning Objective

After completing this chapter, you will be able to:

Explain automated testing.

What is Automation?

Automated testing is automating the manual testing process currently in use. The very nature of application software development dictates that no matter which methods are employed to carry out testing (manual or automated), the tests remain repetitious throughout the development lifecycle. Automation of testing processes allows machines to complete the tedious, repetitive work while human personnel perform other tasks.

Automation Benefits

Every organization has unique reasons for automating software quality activities, but several reasons are common across industries. Many internal and external factors are forcing organizations to ensure a high level of software quality and reliability. As more organizations develop mission-critical systems to support their business activities, the need is greatly increased for testing methods that support business objectives. It is necessary to ensure that these systems are reliable, built according to specification, and able to support business processes. Owing to the size and complexity of today's advanced software applications, manual testing is no longer a viable option for most testing situations; furthermore, rigorous application testing is a critical part of virtually all software development projects.

Using Testing Effectively

By definition, testing is a repetitive activity. An automated test executes the next operation in the test hierarchy at machine speed, allowing tests to be completed many times faster than the fastest individual could manage, because computers can execute instructions many times faster, and with fewer errors, than individuals. Automation allows the tester to reduce or eliminate the "think time" or "read time" necessary for the manual interpretation of when or where to click the mouse or press the Enter key. In addition, some types of testing, such as load/stress testing, are virtually impossible to perform manually.

Reducing Testing Costs

The cost of performing manual testing is prohibitive when compared to automated methods. In the past, most software tests were performed using manual methods, which required a large staff of test personnel to perform expensive and time-consuming manual test procedures. Many automated testing tools can replicate the activity of a large number of users (and their associated transactions) using a single computer, so load/stress testing using automated methods requires only a fraction of the computer hardware that would be necessary to complete a manual test. Imagine performing a load test on a typical distributed client/server application on which 50 concurrent users were planned.

at night or on weekends without having to assemble an army of end users. the tester has a very high degree of control over which types of tests are being performed.Handout – Software Testing To do the testing manually. and how the tests will be executed. Replicating Testing Across Different Platforms Automation allows the testing organization to perform consistent and repeatable tests. imagine the same application used by hundreds or thousands of users. the entire test operation could be created on a single machine having the ability to run and rerun the test as necessary. automated tests can be executed as many times as necessary without requiring a user to recreate a test script each time the test is run. When applications need to be deployed across different hardware or software platforms. 50 application users employing 50 PCs with associated software. Most importantly. With an automated scenario. automated tests can be built that extract variable data from external files or applications and then run a test using the data as an input value. and a cadre of coordinators to relay instructions to the users would be required. Greater application test coverage also reduces the risk of exposing users to malfunctioning or non-compliant software. In some industries such as healthcare and pharmaceuticals. All Rights Reserved C3: Protected . Cognizant Technology Solutions. Using automated tests enforces consistent procedures that allow developers to evaluate the effect of various application modifications as well as the effect of various user actions. standard or benchmark tests can be created and repeated on target platforms to ensure that new platforms operate consistently. As another example. an available network. It is easy to see why manual methods for load/stress testing is an expensive and logistical nightmare. Page 110 ©Copyright 2007. Repeatability and Control By using automated techniques. 
Greater Application Coverage The productivity gains delivered by automated testing allow and encourage organizations to test more often and more completely. organizations are required to comply with strict quality regulations as well as being required to document their quality assurance efforts for all parts of their systems. For example.
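The load-testing idea described above — replicating many users from a single machine — can be sketched in a few lines. This is a minimal illustration only, assuming a hypothetical `submit_transaction` function standing in for one user's work; a real load tool would drive the actual UI or protocol layer.

```python
import threading

def submit_transaction(user_id, results, lock):
    # Hypothetical stand-in for one virtual user's transaction against
    # the application under test.
    response = f"user-{user_id}:ok"
    with lock:
        results.append(response)

def run_load_test(concurrent_users=50):
    """Replicate the activity of many concurrent users on one machine."""
    results, lock = [], threading.Lock()
    threads = [
        threading.Thread(target=submit_transaction, args=(uid, results, lock))
        for uid in range(concurrent_users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

responses = run_load_test(50)
```

The 50-user scenario from the text thus runs on a single machine, and can be re-run at night or on weekends simply by calling `run_load_test` again.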

Automation Life Cycle

Identifying Tests Requiring Automation - Most, but not all, types of tests can be automated. Certain types of tests, like user comprehension tests, tests that run only once, and tests that require constant human intervention, are usually not worth the investment to automate. The following are examples of criteria that can be used to identify tests that are prime candidates for automation.

High Path Frequency - Automated testing can be used to verify the performance of application paths that are used with a high degree of frequency when the software is running in full production. Examples include: creating customer records, invoicing and other high volume activities where software failures would occur frequently.

Critical Business Processes - In many situations, software applications can literally define or control the core of a company's business. If the application fails, the company can face extreme disruptions in critical operations. Any application with a high degree of risk associated with a failure is a good candidate for test automation. Mission-critical processes are prime candidates for automated testing. Examples include: financial month-end closings, production planning, sales order entry and other core activities.

Repetitive Testing - If a testing procedure can be reused many times, it is also a prime candidate for automation. For example, common outline files can be created to establish a testing session, close a testing session and apply testing values. These automated modules can be used again and again without having to rebuild the test scripts. This modular approach saves time and money when compared to creating a new end-to-end script for each and every test.
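The modular approach described above can be sketched as shared setup and teardown routines reused by every script. The session dictionary and the two sample tests are hypothetical stand-ins for a real tool's session object and recorded scripts.

```python
# Reusable test modules: common routines establish a session, apply
# testing values, and close the session, so no script rebuilds them.

def establish_session():
    return {"open": True, "values": []}

def apply_testing_values(session, values):
    session["values"].extend(values)

def close_session(session):
    session["open"] = False
    return session

def invoice_test():
    # One test script, assembled entirely from the shared modules.
    session = establish_session()
    apply_testing_values(session, ["INV-001", "INV-002"])
    return close_session(session)

def order_entry_test():
    # A second script reuses the same modules with different values.
    session = establish_session()
    apply_testing_values(session, ["ORD-100"])
    return close_session(session)

result = invoice_test()
```

Each new test only supplies its own values; the session plumbing is written once.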

Applications with a Long Life Span - If an application is planned to be in production for a long period of time, the greater the benefits are from automation.

What to Look For in a Testing Tool

Choosing an automated software testing tool is an important step, and one which often poses enterprise-wide implications. Here are several key issues which should be addressed when selecting an application testing solution.

Ease of Use

Testing tools should be engineered to be usable by non-programmers and application end-users. With much of the testing responsibility shifting from the development staff to the departmental level, a testing tool that requires programming skills is unusable by most organizations. Even if programmers are responsible for testing, the testing tool itself should have a short learning curve. User training and experience gained in performing one testing task should be transferable to other testing tasks.

GUI and Client/Server Testing

A robust testing tool should support testing with a variety of user interfaces and create simple-to-manage, easy-to-modify tests. Test components built for performing functional tests should also support other types of testing, including regression and load/stress testing. All products within the testing product environment should be based upon a common, easy-to-understand language.

Internet/Intranet Testing

A good tool will have the ability to support testing within the scope of a web browser. The tests created for testing Internet or intranet-based applications should be portable across browsers, and should automatically adjust for different load times and performance levels.

Testing Product Integration

Testing tools should provide tightly integrated modules that support test component reusability. Test component reusability should be a cornerstone of the product architecture. Also, the architecture of the testing tool environment should be open to support interaction with other technologies such as defect or bug tracking packages.

Test Planning and Management

A robust testing tool should have the capability to manage the testing process, provide organization for testing components, and create meaningful end-user and management reports. It should also allow users to include non-automated testing procedures within automated test plans and test results. A robust tool will allow users to integrate existing test results into an automated test plan. Finally, an automated test should be able to link business requirements to test results, allowing users to evaluate application readiness based upon the application's ability to support the business requirements.

Load and Performance Testing

The selected testing solution should allow users to perform meaningful load and performance tests to accurately measure system performance. It should also provide test results in an easy-to-understand reporting format.

Test Environment Setup

Once the test cases have been created, the test environment can be prepared. The test environment is defined as the complete set of steps necessary to execute the test as described in the test plan. The test environment includes the initial set up and description of the environment, and the procedures needed for installation and restoration of the environment.

Description - Document the technical environment needed to execute the tests.

Installation Procedures - Outline the procedures necessary to install the application software to be tested.

Restoration Procedures - Finally, outline those procedures needed to restore the test environment to its original state. By doing this, you are ready to re-execute tests or prepare for a different set of tests.

Inputs to the Test Environment Preparation Process
Technical Environment Descriptions
Approved Test Plan
Test Execution Schedules
Resource Allocation Schedule
Application Software to be installed

Test Planning

Careful planning is the key to any successful process. To guarantee the best possible result from an automated testing program, those evaluating test automation should consider these fundamental planning steps. The time invested in detailed planning significantly improves the benefits resulting from test automation.

Creating a Test Plan

For the greatest return on automated testing, a testing plan should be created at the same time the software application requirements are defined. This enables the testing team to define the tests, locate and configure test-related hardware and software products, and coordinate the human resources required to complete all testing. This plan is very much a "living document" that should evolve as the application functions become more clearly defined. A good testing plan should be reviewed and approved by the test team, the software development team, all user groups and the organization's management.

Test Schedule - Identify the times during which your testing facilities will be used for a given test. Make sure that other groups that might share these resources are informed of this schedule.

Operational Support - Identify any support needed from other parts of your organization.

Evaluating Business Requirements

Begin the automated testing process by defining exactly what tasks your application software should accomplish in terms of the actual business activities of the end-user. The definition of these tasks, or business requirements, defines the high-level, functional requirements of the software system in question. For example, a business requirement for a payroll application might be to calculate a salary, or to print a salary check. These business requirements should be defined in such a way as to make it abundantly clear that the software system correctly (or incorrectly) performs the necessary business functions.
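The restoration procedure above — returning the environment to its original state so tests can be re-executed — can be sketched as a snapshot/restore pair. The environment dictionary is a hypothetical stand-in for real configuration and test data.

```python
import copy

# A minimal sketch of a restoration procedure: snapshot the test
# environment's state before execution and restore it afterwards, so
# the next test cycle starts from a known baseline.

class TestEnvironment:
    def __init__(self, config):
        self.config = config
        self._baseline = None

    def snapshot(self):
        # Record the original state before any tests run.
        self._baseline = copy.deepcopy(self.config)

    def restore(self):
        # Return the environment to its original state.
        if self._baseline is not None:
            self.config = copy.deepcopy(self._baseline)

env = TestEnvironment({"db_rows": 0, "app_installed": True})
env.snapshot()
env.config["db_rows"] = 500   # test execution mutates the environment
env.restore()                 # ready to re-execute or run a new test set
```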

The following items detail the input and output components of the test planning process.

Inputs to the Test Planning Process

Application Requirements - What is the application intended to do? These should be stated in the terms of the business requirements of the end users.

Application Implementation Schedules - When is the scheduled release? When are updates or enhancements planned? Are there any specific events or actions that are dependent upon the application?

Acceptance Criteria for implementation - What critical actions must the application accomplish before it can be deployed? This information forms the basis for making informed decisions on whether or not the application is ready to deploy.

Test Design and Development

After the test components have been defined, the standardized test cases can be created that will be used to test the application. The type and number of test cases needed will be dictated by the testing plan. A test case identifies the specific input values that will be sent to the application, the procedures for applying those inputs, and the expected application values for the procedure being tested. A proper test case will include the following key components:

Test Case Name(s) - Each test case must have a unique name, so that the results of these test elements can be traced and analyzed.

Test Case Prerequisites - Identify set up or testing criteria that must be established before a test can be successfully executed.

Input Values - This section of the test case identifies the values to be supplied to the application as input, including, if necessary, the action to be completed.

Test Procedures - Identify the application steps necessary to complete the test case.

Expected Results - Document all screen identifier(s) and expected value(s) that must be verified as part of the test. These expected results will be used to measure the acceptance criteria, and therefore the ultimate success of the test.

Test Case Execution Order - Specify any relationships, run orders and dependencies that might exist between test cases.

Test Data Sources - Take note of the sources for extracting test data if it is not included in the test case.
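The test case components listed above can be represented as a simple record. This is an illustrative sketch only; the field names mirror the handout's component list, and the payroll sample values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str                                       # unique Test Case Name
    prerequisites: list = field(default_factory=list)
    input_values: dict = field(default_factory=dict)
    procedure: list = field(default_factory=list)   # Test Procedures
    expected_results: dict = field(default_factory=dict)
    execution_order: int = 0                        # run order / dependencies
    data_source: str = ""                           # external Test Data Source

# Hypothetical test case for the payroll example used earlier.
payroll_case = TestCase(
    name="TC_PAY_001_calculate_salary",
    prerequisites=["employee record exists"],
    input_values={"hours": 160, "rate": 25.0},
    procedure=["open payroll screen", "enter hours", "submit"],
    expected_results={"salary_field": 4000.0},
    execution_order=1,
)
```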

Inputs to the Test Design and Construction Process
Test Case Documentation Standards
Test Case Naming Standards
Approved Test Plan
Business Process Documentation
Business Process Flow
Test Data sources

Outputs from the Test Design and Construction Process
Revised Test Plan
Test Procedures for each Test Case
Test Case(s) for each application function described in the test plan
Procedures for test set up, test execution and restoration

Executing the Test

The test is now ready to be run. This step applies the test cases identified by the test plan, documents the results, and validates those results against expected performance. Specific performance measurements of the test execution phase include:

Application of Test Cases - The test cases previously created are applied to the target software application as described in the testing environment.

Documentation - Activities within the test execution are logged and analyzed as follows:
Actual Results achieved during test execution are compared to expected application behavior from the test cases
Test Case completion status (Pass/Fail)
Actual results of the behavior of the technical test environment
Deviations taken from the test plan or test process

Inputs to the Test Execution Process
Approved Test Plan
Documented Test Cases
Stabilized, repeatable test execution environment
Standardized Test Logging Procedures

Outputs from the Test Execution Process
Test Execution Log(s)
Restored test environment
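The execution-and-logging step above — apply each test case, compare actual results to expected behavior, record Pass/Fail — can be sketched as follows. `check_salary` is a hypothetical stand-in for driving the real application and reading back a screen value.

```python
# Apply test cases and log completion status by comparing actual
# results to the expected application behavior from each test case.

def check_salary(hours, rate):
    # Stand-in for executing the application and reading the result.
    return hours * rate

def execute(test_cases):
    log = []
    for case in test_cases:
        actual = check_salary(**case["inputs"])
        status = "Pass" if actual == case["expected"] else "Fail"
        log.append({"name": case["name"], "actual": actual,
                    "expected": case["expected"], "status": status})
    return log

cases = [
    {"name": "TC_001", "inputs": {"hours": 160, "rate": 25.0}, "expected": 4000.0},
    {"name": "TC_002", "inputs": {"hours": 100, "rate": 30.0}, "expected": 9999.0},
]
execution_log = execute(cases)
```

The second case is deliberately given a wrong expectation so the log shows one Pass and one Fail entry.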

The test execution phase of your software test process will control how the test gets applied to the application. This step of the process can range from very chaotic to very simple and schedule driven. The secret to a controlled test execution is comprehensive planning. Without an adequate test plan in place to control your entire test process, you may inadvertently cause problems for subsequent testing. The problems experienced in test execution are usually attributed to not properly performing steps from earlier in the process. Additionally, there may be several test execution cycles necessary to complete all the types of testing required for your application. For example, a test execution may be required for the functional testing of an application, and a separate test execution cycle may be required for the stress/volume testing of the same application. A complete and thorough test plan will identify this need, and many of the test cases can be used for both test cycles.

Measuring the Results

This step evaluates the results of the test as compared to the acceptance criteria set down in the test plan. Specific elements to be measured and analyzed include:

Test Execution Log Review - The Log Review compiles a listing of the activities of all test cases, noting those that passed, failed or were not executed.

Test Execution Statistics - This summary identifies the total number of tests that were executed, the type of test, and the completion status.

Determine Application Status - This step identifies the overall status of the application after testing, for example: ready for release, needs more testing, etc.

Application Defects - This final and very important report identifies potential defects in the software, including application processes that need to be analyzed further.

Other Phases in Automation

Phase I: Tool Acquisition
Assessment
Evaluation/Selection
Installation

With the right tool, test automation can offer a dramatic increase in productivity. The first and most important step in the process is acquiring a tool that is suitable for your application, operating environment, and test team. An automation assessment allows us to evaluate your tool needs, provide an objective selection of the best tool(s), and install and configure the tool(s) for your application and environment.

Phase II: Tool Implementation
Preparation
Execution
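The Test Execution Statistics described above — total tests executed and their completion status — can be compiled from an execution log in a few lines. The log entries here are hypothetical samples of what a tool would produce.

```python
from collections import Counter

# Hypothetical execution log as produced by the test execution phase.
log = [
    {"name": "TC_001", "status": "Pass"},
    {"name": "TC_002", "status": "Fail"},
    {"name": "TC_003", "status": "Pass"},
    {"name": "TC_004", "status": "Not Executed"},
]

def execution_statistics(log):
    # Summarize total tests and completion status for the report.
    counts = Counter(entry["status"] for entry in log)
    return {
        "total": len(log),
        "passed": counts["Pass"],
        "failed": counts["Fail"],
        "not_executed": counts["Not Executed"],
    }

stats = execution_statistics(log)
```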

Selecting the right test tool is only the beginning of a successful test automation effort. As with any tool, it is how you use it that counts. Although test automation tools can save time through unattended execution, it takes time to define, design, and automate tests. During the implementation phase, we prepare the test environment, design the test cases, develop the automated scripts, and execute the tests. This process will result in a robust, reusable automated test environment.

Test Script execution: In this phase we execute the scripts that have already been created. Scripts need to be reviewed, validated for results, and accepted as functioning as expected before they are used live.

Steps to be followed before execution of scripts:
The test tool is to be installed on the machine.
The test environment / application to be tested is to be installed on the machine.
Prerequisites for running the scripts, such as tool settings, playback options, and any necessary data table or data pool updates, need to be taken care of.
Select the script that needs to be executed and run it. Wait until execution is done.
Analyze the results via the Test Manager or in the logs.

Automation Methods

Capture/Playback Approach

The Capture/Playback tools capture, in a test script, the sequence of manual operations entered by the test engineer. These sequences are played back during the test execution. The benefit of this approach is that the captured session can be re-run at some later point in time to ensure that the system performs the required behavior. Tools like WinRunner provide a scripting language, and it is possible for engineers to edit and maintain such scripts. This sometimes reduces the effort over the completely manual approach; however, the overall savings is usually minimal. The shortcoming of Capture/Playback is that in many cases, if the system functionality changes, the capture/playback session will need to be completely re-run to capture the new sequence of user interactions.

Data Driven Approach

The data driven approach is a test that plays back the same user actions but with varying input values. This allows one script to test multiple sets of positive data. Testing can be done with both positive and negative approaches simultaneously. This is applicable when large volumes and different sets of data need to be fed to the application and tested for correctness. The benefit of this approach is that it consumes less time and is more accurate than testing manually.
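The data driven approach above can be sketched as one scripted action replayed once per row of an external CSV file. The CSV content and the `login_attempt` function are hypothetical stand-ins; a real script would drive the application's UI with each row's values.

```python
import csv
import io

# Hypothetical external data source: each row is one playback of the
# same recorded user actions, with both positive and negative cases.
csv_data = """username,password,expected
alice,secret1,accepted
bob,,rejected
carol,badpw,rejected
"""

def login_attempt(username, password):
    # Stand-in for the recorded user actions being played back.
    if username == "alice" and password == "secret1":
        return "accepted"
    return "rejected"

outcomes = []
for row in csv.DictReader(io.StringIO(csv_data)):
    actual = login_attempt(row["username"], row["password"])
    outcomes.append(actual == row["expected"])
```

One script thus covers a valid login and two invalid ones; adding more cases means adding CSV rows, not editing the script.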

Test script execution process:

Automation tool comparison

Anyone who has contemplated the implementation of an automated test tool has quickly realized the wide variety of options on the market, in terms of both the kinds of test tools being offered and the number of vendors. The best tool for any particular situation depends on the system engineering environment that applies and the testing methodology that will be used, which in turn will dictate how automation will be invoked to support the process. This appendix evaluates major tool vendors on their test tool characteristics: test execution capability, test reporting capability, tool integration capability, performance testing and analysis, and vendor qualification. The tool vendors evaluated are Compuware, Empirix/RSW, Mercury, Rational, and Segue.

Functional Test Tool Matrix

The Tool Matrix is provided for quick and easy reference to the capabilities of the test tools. Each category in the matrix is given a rating of 1 - 5:

1 = Excellent support for this functionality.
2 = Good support, but lacking, or another tool provides more effective support.
3 = Basic support only.
4 = Supported only by use of an API call or third party add-in, not included in the general test tool / below average.
5 = No support.

Usually the lower the score the better, but this is subjective, being based on the experience of the author and the opinions of the test professionals used to create this document. In general, a set of criteria can be built up by using this matrix and an indicative score obtained to help in the evaluation process.
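The indicative scoring described above amounts to summing the 1-5 category ratings per tool, with the lower total winning. The tool names and ratings below are made up purely to show the arithmetic, not real evaluations of any vendor.

```python
# Hypothetical matrix fragment: rating per category (1 = excellent,
# 5 = no support), summed into an indicative score per tool.
ratings = {
    "Tool A": {"record_playback": 1, "web_testing": 2, "database_tests": 1},
    "Tool B": {"record_playback": 3, "web_testing": 1, "database_tests": 2},
}

def indicative_score(tool_ratings):
    return sum(tool_ratings.values())

scores = {tool: indicative_score(r) for tool, r in ratings.items()}
best = min(scores, key=scores.get)  # lower score = better overall support
```

In practice the categories chosen, and any weighting applied to them, should reflect the project's own priorities.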

A detailed description is given below of each of the categories used in the matrix.

Record and Playback

This category details how easy it is to record and play back a test. This is very similar to recording a macro in, say, Microsoft Access. When automating, this is the first thing that most test professionals will do: record a simple script, look at the code and then play it back. Eventually record and playback becomes less and less a part of the automation process, as it is usually more robust to use the built-in functions to directly test objects. However, this should be done as a minimum in the evaluation process, because if the tool of choice cannot recognize the application's objects, then the automation process will be a very tedious experience. Does the tool support low-level recording (mouse drags, exact screen location)? Is there object recognition when recording and playing back, or does it appear to record OK but then fail on playback (without any environment change, unique IDs changing, etc.)? How easy is it to read the recorded script?

Web Testing

Web based functionality on most applications is now a part of everyday life. As such, the test tool should provide good web based test functionality in addition to its client/server functions. Web testing can be riddled with problems if various considerations are not taken into account. With client/server testing, the target customer is usually well defined: you know what network operating system you will be using, the applications, and so on. On the web it is far different. A person may be connecting from the USA or Africa; they will have fast connections and slow connections; they may use various browsers; they may connect using a Mac, Linux or Windows; they will speak different languages; they may be disabled; and the screen resolution on their computers will be different. So the cost to set up a test environment is usually greater than for a client/server test, where the environment is fairly well defined. Here are a few examples: Are there functions to tell me when the page has finished loading? Can I tell the test tool to wait until an image appears? Can I test whether links are valid or not? Can I test web based objects' functions, such as whether an object is enabled or contains data? Are there facilities that will allow me to programmatically look for objects of a certain type on a web page, or locate a specific object? Can I extract data from the web page itself, e.g. the title, or a hidden form element? In judging the rating for this category, I looked at the tools' native support for HTML tables, frames, the DOM, web site maps and links, various platforms for browsers, etc.

Database Tests

Most applications will provide the facility to preserve data outside of themselves. This is usually achieved by holding the data in a database. As such, checking what is in the backend database usually verifies the proper validation of tests carried out on the front end of an application. Of the many databases available, e.g. Oracle, SQL Server, DB2, Sybase, Informix, Ingres, etc., all support a universal query language known as SQL and a protocol for communicating with these databases called ODBC (JDBC can be used in Java environments). I have looked at all the tools' support for SQL and ODBC, and how they hold returned data: e.g. is it in an array, a variable, a cursor? How does the tool manipulate this returned data? Can it call stored procedures and supply required input variables? What is the range of functions supplied for this testing?
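The backend-verification idea above — confirm via SQL that a front-end action really persisted its data — can be sketched with an in-memory database. Here `sqlite3` stands in for Oracle/SQL Server/etc., and the schema and `create_customer` action are hypothetical.

```python
import sqlite3

# In-memory database standing in for the application's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

def create_customer(name):
    # Stand-in for the front-end "create customer record" action.
    conn.execute("INSERT INTO customers (name) VALUES (?)", (name,))
    conn.commit()

create_customer("Acme Ltd")

# Backend check: did the front-end action really persist the row?
row = conn.execute(
    "SELECT name FROM customers WHERE name = ?", ("Acme Ltd",)
).fetchone()
```

A test tool's database functions would issue the same kind of query through ODBC/JDBC against the real backend and compare the returned data to the expected values.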

Data Functions

As mentioned above, applications usually provide a facility for storing data off line, usually in a spreadsheet or database. However, applications (except via manual input) do not usually provide facilities for bulk data input. So to test this, we will need to create data to input into the application. Data-driven tests are tests that replace hard coded names, addresses, numbers, etc. with variables supplied from an external source, usually a CSV (Comma Separated Variable) file, spreadsheet or database. I have looked at all the tools' facilities for creating and manipulating data. Does the tool allow you to specify the type of data you want? Can you automatically generate data? Can you interface with files, spreadsheets, databases, etc. to create, input, return and extract data? Can you randomise the access to that data? Is the data access truly random? This functionality is normally more important than database tests, as the databases will usually have their own interface for running queries. The added benefit (as I have found) is that this functionality can be used for a production reason, e.g. for the aforementioned bulk data input sometimes carried out in data migration or application upgrades.

These functions are also very important as you move from the record/playback phase, to data-driven, to framework testing. Frameworks are usually the ultimate goal in deploying automation test tools. A test framework has parallels to software frameworks, in that you develop an encapsulation layer of software (the framework) around the applications, databases, etc., and expose functions, methods and the like that are used to call the underlying applications. Frameworks provide an interface to all the applications under test by exposing a suitable list of functions. This allows an inexperienced tester/user to run tests by just providing the test framework with known commands/variables. However, to do this requires a lot of time, plus the skilled resources and money to facilitate the first two.

Object Mapping

If you are in a role that can help influence the design of a product, try to get the development/design team to use standard and not custom objects. Then hopefully you will not need this functionality. However, you may find that most (hopefully) of the application has been implemented using standard objects supported by your test tool vendor, but there may be a few objects that are custom ones. Most custom objects will behave like a similar standard control. Here are a few standard objects that are seen in everyday applications:

Pushbuttons
Checkboxes
Radio buttons
List views
Edit boxes
Combo boxes

If you have a custom object that behaves like one of these, are you able to map the custom control (tell the test tool that the custom control behaves like the standard one)? Does it support all the standard control's methods? Can you add the custom control to its own class of control?
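The automatic data generation discussed under Data Functions can be sketched as follows. The field shapes are hypothetical; seeding the generator makes the "random" data reproducible from run to run, which matters when a failure has to be reproduced.

```python
import random

# Reproducible bulk test data generation: the seed fixes the sequence
# so a failing run can be recreated exactly.
random.seed(42)

def generate_customer():
    # Hypothetical record shape for a customer-entry screen.
    return {
        "name": "Customer-" + str(random.randint(1000, 9999)),
        "balance": round(random.uniform(0, 10000), 2),
    }

# Bulk data for data-driven input, data migration rehearsal, etc.
bulk_data = [generate_customer() for _ in range(100)]
```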

Image Testing

Let's hope this is not a major part of your testing effort, but occasionally you may have to use this to test bitmaps and similar images. Also, when the application has painted controls, like those in the calculator app found on a lot of Windows systems, you may need to use this. At least one of the tools allows you to map painted controls to standard controls, but to do this you have to rely on the screen co-ordinates of the image. Does the tool provide OCR (optical character recognition)? Can it compare one image against another? How fast does the compare take? If the compare fails, how long does that take? Does the tool allow you to mask certain areas of the screen when comparing?

Object Name Map

As you test your application using the test tool of your choice, you will notice that it records actions against the objects that it interacts with. These objects are either identified through their coordinates on the screen or, preferably, via some unique object reference, referred to as a tag, ID, name or similar. The last and least desirable means of identification is by coordinates on the screen. Once you are well into automation and build up tens and hundreds of scripts that reference these objects, you will want a mechanism that provides an easy update if the application being tested changes. All tools provide a search and replace facility, but the best implementations are those that provide a central repository to store these object identities. The premise is that it is better to change the reference in one place than to go through each of the scripts replacing it there. I found this to be true, but not as big a point as some have stated, because those tools that don't support the central repository scheme can be programmed to reference windows and object names in one place (say via a variable), and that variable can be used throughout the script wherever that object appears. Does the Object Name Map allow you to alias the name, or change the name given by the tool to some more meaningful name?

Object Identity Tool

Once you become more proficient with automation testing, one of the primary means of identifying objects will be via an ID Tool: a sort of spy that looks at the internals of the object, giving you details like the object's name, ID, and similar. Firstly, the tool should provide services to uniquely identify each object it interacts with, by various means.

Test/Error Recovery

This can be one of the most difficult areas to automate, but if it is automated, it provides the foundation to produce a truly robust test suite. Suppose the application crashes while I am testing: what can I do? If a function does not receive the correct information, how can I handle this? If I get an error message, how do I deal with that? If I access a web site and get a warning, what do I do? If I cannot get a database connection, how do I skip those tests? The test tool should provide facilities to handle the above questions. I looked at the built-in wizards of the test tools for standard test recovery (when you finish tests or when a script fails), and at error recovery caused by the application and environment. How easy is it to build this into your code? The rating given depends on how many errors the tool can capture, the types of errors, and how it recovers from errors. I have looked at these facilities in the base tool set.
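The test/error recovery idea above can be sketched as a wrapper that catches a failing step, runs a recovery routine (such as restarting the application), and lets the remaining tests continue. The step functions and the `recover` routine are hypothetical stand-ins.

```python
# Minimal error-recovery sketch: a failed step is logged, recovery is
# invoked to restore a known state, and execution continues.

def recover():
    # Stand-in for e.g. restarting the application under test.
    return "application restarted"

def run_with_recovery(steps):
    log = []
    for name, step in steps:
        try:
            step()
            log.append((name, "Pass"))
        except Exception as exc:
            log.append((name, f"Fail: {exc}"))
            recover()  # restore a known state before the next step
    return log

steps = [
    ("open screen", lambda: None),
    ("db connect", lambda: (_ for _ in ()).throw(ConnectionError("no db"))),
    ("next test", lambda: None),
]
recovery_log = run_with_recovery(steps)
```

The simulated database failure is logged, recovery runs, and the final step still executes — which is exactly the robustness the handout is asking the tools to provide.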

This will allow you to reference that object within a function call. The tool should give you details of some of the object’s properties, especially those associated with uniquely identifying the object or window. The tool will usually provide the tester with a point and ID service where you can use the mouse to point at the object and in some window you will see all of that objects ID’s and properties. A lot of the tools will allow you to search all the open applications in one swoop and show you the result in a tree that you can look at when required. Extensible Language Here is a question that you will here time and time again in automation forums. “How do I get {insert test tool name here} to do such and such”, there will be one of four answers. I don’t know It can’t do it It can do it using the function x, y or Z It can’t in the standard language but you can do it like this What we are concerned with in this section is the last answer e.g. if the standard test language does not support it can I create a DLL or extend the language in some way to do it? This is usually an advanced topic and is not encountered until the trained tester has been using the tool for at least 6 – 12 months. However when this is encountered the tool should support language extension. If via DLL’s then the tester must have knowledge of a traditional development language e.g. C, C++ or VB. For instance if I wanted to extend a tool that could use DLL’s created by VB I would need to have Visual Basic then open say an ActiveX dll project, create a class containing various methods (similar to functions) then I would make a dll file. Register it on the machine then reference that dll from the test tool calling the methods according to their specification. This will sound a lot clearer as you go on in the tools and this document will be updated to include advanced topics like this in extending the tools capabilities. 
Some tools provide extension by allowing you to create user-defined functions, methods, classes, etc., but these are normally a mixture of the already supported data types, functions, etc., rather than extending the tool beyond its released functionality. Because this is an advanced topic, I have not taken ease of use into account: those people who have got to this level should have already exhausted the current capabilities of the tools, may want to use external functions such as Win32 API calls, and should have a good grasp of programming.

Environment Support

How many environments does the tool support out of the box? Does it support the latest Java release? What about Oracle, PowerBuilder, WAP, etc.? Most tools can interface to unsupported environments if the developers in that environment provide classes, DLLs, etc. that expose some of the application's details, but whether a developer will, or has time to, do this is another question. Ultimately this is the most important part of automation: environment support. If the tool does not support your environment/application then you are in trouble, and in most cases you will need to revert to manually testing the application (more shelfware).

Page 122 ©Copyright 2007, Cognizant Technology Solutions, All Rights Reserved C3: Protected

Integration

How well does the tool integrate with other tools? This is becoming more and more important. Does the tool allow you to run it from various test management suites? Can you raise a bug directly from the tool and feed the information gathered from your test logs into it? Does it integrate with products like Word, Excel or requirements management tools? When managing large test projects, with an automation team greater than five and testers totaling more than ten, the management aspect and the tool's integration move further up the importance ladder. An example: a major bank wants to redesign its workflow management system to allow faster processing of customer queries. The anticipated requirements for the new workflow software number in the thousands. To test these requirements, 40,000 test cases have been identified, 20,000 of which can be automated. How do I manage this? This is where a test management tool comes in real handy. Also, how do I manage the bugs raised as a result of automation testing? Integration becomes very important, rather than having separate systems that don't share data and may require duplication of information. The companies that score higher here are those that provide tools outside the testing arena, as they can build in integration to their other products; when it comes down to the wire on some projects, we have gone with the tool that integrated with the products we already had.

Cost

In my opinion cost is the least significant factor in this matrix. Why? Because all the tools are similar in price except Visual Test, which is at least five times cheaper than the rest; but as you will see from the matrix, there is a reason. Although very functional, it does not provide the range of facilities that the other tools do. Price typically ranges from $2,900 - $5,000 (depending on quantity bought, packages, etc.) in the US and around £2,900 - £5,000 in the UK for the base tools included in this document.
Since the tools all cost a similar amount, it is usually a case of which one will do the job for me rather than which is the cheapest. Visual Test, I believe, will prove to be a bigger hit as it expands its functional range; it was not that long ago that it did not support web-based testing. The prices are kept this high because they can be: all the tools are roughly the same price, and the volume of sales is low relative to, say, a fully fledged programming-language IDE like JBuilder or Visual C++, which are a lot more function-rich and flexible than any of the test tools. On top of the above prices you usually pay an additional maintenance fee of between 10 and 20%. There are not many applications I know of that cost this much per license, not even some very advanced operating systems. However, it is all a matter of supply: the bigger the supply, the lower the price, as you can spread the development costs more. I do not anticipate the prices moving upwards, as this seems to be the price the market will tolerate. Visual Test also provides a free runtime license.


Ease Of Use

This section is very subjective, but I have used testers (my guinea pigs) of various levels and got them from scratch to use each of the tools. In more cases than not they have agreed on which was the easiest to use (initially). Obviously this can change as the tester becomes more experienced and issues such as extensibility, script maintenance, integration, data-driven tests, etc. become important. However, this score is based on the productivity that can be gained in, say, the first three months, when those issues are not such a big concern. Ease of use includes out-of-the-box functions, debugging facilities, on-screen layout, help files and user manuals.

Support

In the UK this can be a problem, as most of the test tool vendors are based in the USA with satellite branches in the UK. Just from my own experience and that of the testers I know in the UK, we have found Mercury to be the best for support, then Compuware, then Rational, and last Segue. Having said that, you can find a lot of resources for Segue on the Internet, including a forum at www.betasoft.com that can provide most of the answers rather than ringing the support line. On their websites Segue and Mercury provide much useful user- and vendor-contributed material. I have also included various other criteria like the availability of skilled resources, online resources, validity of responses from the helpdesk, speed of responses, and the like.

Object Tests

Now, presuming the tool of choice does work with the application you wish to test, what services does it provide for testing object properties? Can it validate several properties at once? Can it validate several objects at once? Can you set object properties to capture the application state? This should form the bulk of your verification as far as the automation process is concerned, so I have looked at the tools' facilities on client/server as well as web-based applications.
Matrix

What will follow after the matrix is a tool-by-tool comparison under the appropriate headings (as listed above), so that the user can get a feel for the tools' functionality side by side. Each category in the matrix is given a rating of 1 - 5:
1 = Excellent support for this functionality
2 = Good support, but lacking, or another tool provides more effective support
3 = Basic support only
4 = Only supported by use of an API call or third-party add-in, not included in the general test tool / below average
5 = No support
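Since lower ratings are better (1 = excellent, 5 = no support), a tool's matrix score is simply the sum of its category ratings, and lower totals win. A small sketch with made-up ratings:

```python
# Hypothetical ratings on the 1-5 scale described above; the category names
# echo the headings in this chapter and the numbers are invented.
ratings = {
    "test_recovery": 2,
    "object_mapping": 1,
    "extensible_language": 3,
    "environment_support": 1,
    "integration": 2,
}

score = sum(ratings.values())  # lower total = better overall support
print(score)
```

This is how scores like "WinRunner = 24" in the matrix below are produced: one rating per category, summed per tool.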


Matrix score: WinRunner = 24, QARun = 25, SilkTest = 24, Visual Test = 39, Robot = 24

SUMMARY

Automated testing is automating the manual testing process currently in use. Owing to the size and complexity of today's advanced software applications, manual testing is no longer a viable option for most testing situations.

Chapter 9: Sample Test Automation Tool

Learning Objective

After completing this chapter, you will be able to:
Work with a Rational testing tool

Sample Test Automation Tool

Rational offers the most complete lifecycle toolset (including testing) of these vendors for the Windows platform. Some of their products are worldwide leaders, e.g. Rational Rose, Rational Robot, RequisitePro, ClearCase, etc. When it comes to object-oriented development they are the acknowledged leaders, with most of the leading OO experts working for them. Their Unified Process is a very good development model that allows mapping of requirements to use cases and test cases, with a whole set of tools to support the process.

Rational Suite of tools

Rational RequisitePro is a requirements management tool that helps project teams control the development process. RequisitePro organizes your requirements by linking Microsoft Word to a requirements repository and providing traceability and change management throughout the project lifecycle. A baseline version of RequisitePro is included with Rational Test Manager. When you define a test requirement in RequisitePro, you can access it in Test Manager.

Rational ClearQuest is a change-request management tool that tracks and manages defects and change requests throughout the development process, including enhancement requests, defect reports, and documentation modifications. With ClearQuest, you can manage every type of change activity associated with software development.

Rational Robot facilitates functional and performance testing by automating record and playback of test scripts. Use Robot to record client/server conversations, store them in scripts, and capture and analyze the results. A client/server system includes client applications accessing a database or application server, and browsers accessing a Web server.

Rational Purify is a comprehensive C/C++ run-time error checking tool that automatically pinpoints run-time errors and memory leaks in all components of an application, including third-party libraries, ensuring that code is reliable.

Rational PureCoverage is a customizable code coverage analysis tool that provides detailed application analysis and ensures that all code has been exercised, preventing untested code from reaching the end-user.

Rational Quantify is an advanced performance profiler that provides application performance analysis, enabling developers to quickly find, prioritize and eliminate performance bottlenecks within an application.

Rational Suite PerformanceStudio is a sophisticated tool for automating performance tests on client/server systems. PerformanceStudio includes Rational Robot and Rational LoadTest. Use Robot to record client/server conversations and store them in scripts; use LoadTest to schedule and play back the scripts.

Rational Test Factory automates testing by combining automatic test generation with source-code coverage analysis. It tests an entire application, including all GUI features and all lines of source code.

During playback, Rational LoadTest can emulate hundreds, even thousands, of users placing heavy loads and stress on your database and Web servers.

The tools to be discussed here are:
Rational Administrator
Rational Robot
Rational Test Manager

Rational Administrator

Rational Administrator is used to create and manage Rational repositories, users and groups, and to manage security privileges.

What is a Rational Project?

A Rational project is a logical collection of databases and data stores that associates the data you use when working with Rational Suite, and optionally places them under configuration management. Rational Test categorizes test information within a repository by project. A Rational project is associated with one Rational Test data store, one ClearQuest database, one RequisitePro database, and multiple Rose models and RequisitePro projects. You can use the Rational Administrator to create and manage projects.

How to create a new project?

Open the Rational Administrator and go to File -> New Project. In the window displayed, enter details such as the project name and location, then click Next. Enter a password if you want to protect the project (the password is then required to connect to it), and click Finish. In the Configure Project window displayed, click the Create button. To manage requirements assets, connect to RequisitePro; to manage test assets, create an associated test data store; and for defect management, connect to a ClearQuest database. Once the Create button in the Configure Project window is chosen, the Create Test Data Store window will be displayed. Accept the default path and click the OK button.

Once the window below is displayed, it confirms that the test data store was successfully created; click OK to close the window. Click OK in the Configure Project window, and your first Rational project is ready to play with. Rational Administrator will display your "TestProject" details as below:

Rational Robot

Use Rational Robot to develop three kinds of scripts: GUI scripts for functional testing, and VU and VB scripts for performance testing. Robot can be used to:
Perform full functional testing. Record and play back scripts that navigate through your application and test the state of objects through verification points.
Perform full performance testing. Use Robot and TestManager together to record and play back scripts that help you determine whether a multi-client system is performing within user-defined standards under varying loads.
Create and edit scripts using the SQABasic, VB, and VU scripting environments. The Robot editor provides color-coded commands with keyword Help for powerful integrated programming during script development.
Test applications developed with IDEs such as Visual Basic, Oracle Forms, PowerBuilder, HTML, and Java.
Collect diagnostic information about an application during script playback. Robot is integrated with Rational Purify, Quantify, and PureCoverage, so you can play back scripts under a diagnostic tool and see the results in the log.
Test objects even if they are not visible in the application's interface.

The Object-Oriented Recording technology in Robot lets you generate scripts quickly by simply running and using the application-under-test. Robot uses Object-Oriented Recording to identify objects by their internal object names, not by screen coordinates. If objects change locations or their text changes, Robot still finds them on playback.

The Object Testing technology in Robot lets you test any object in the application-under-test, including the object's properties and data. You can test standard Windows objects and IDE-specific objects, whether they are visible in the interface or hidden.

Once logged in, you will see the Robot window. Go to File -> New -> Script. In the screen displayed, enter the name of the script, say "First Script", by which the script will be referred to from now on, and optionally a description. The type of the script is GUI for functional testing and VU for performance testing.
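The difference between name-based and coordinate-based recognition can be shown with a toy layout. This is a concept sketch only, not Robot code; the control names and coordinates are invented.

```python
# Two snapshots of a hypothetical dialog: between builds the buttons moved.
before = {"btnOK": (100, 200), "btnCancel": (180, 200)}
after  = {"btnOK": (220, 400), "btnCancel": (300, 400)}

def find_by_name(layout, name):
    # Name-based recognition: the internal object name is stable.
    return layout.get(name)

def find_by_coords(layout, xy):
    # Coordinate-based recognition: breaks as soon as the layout shifts.
    hits = [n for n, pos in layout.items() if pos == xy]
    return hits[0] if hits else None

print(find_by_name(after, "btnOK"))       # still found at its new position
print(find_by_coords(after, (100, 200)))  # None: the old coordinates fail
```

This is the essence of why Object-Oriented Recording survives a rearranged UI while a coordinate-replay script does not.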

The GUI Script window (top pane) displays GUI scripts that you are currently recording, editing, or debugging. It has two panes:
Asset pane (left) - Lists the names of all verification points and low-level scripts for this script.
Script pane (right) - Displays the script.

The Output window (bottom pane) has two tabs:
Build - Displays compilation results for all scripts compiled in the last operation. Line numbers are enclosed in parentheses to indicate lines in the script with warnings and errors.
Console - Displays messages that you send with the SQAConsoleWrite command. Also displays certain system messages from Robot.
To display the Output window, click View -> Output.

How to record and play back a script? To record a script, just go to Record -> Insert at Cursor, then perform the navigation in the application to be tested; once recording is done, stop it with Record -> Stop.


In this window we can set the general recording options:
General tab: identification of lists and menus, and recording of think time.
Web Browser tab: the browser type, IE or Netscape.
Robot Window tab: how Robot should be displayed during recording, and hotkey details.
Object Recognition Order tab: the order in which object recognition is attempted during recording. For example, select a preference in the Object order preference list; if you will be testing C++ applications, change the object order preference to C++ Recognition Order.



Go to Tools -> Playback Options to set the options needed while running the script. These help you handle unexpected windows during playback and error recovery, specify the time-out period, and manage the log and log data.

Verification points

A verification point is a point in a script that you create to confirm the state of an object across builds of the application-under-test. During recording, the verification point captures object information (based on the type of verification point) and stores it in a baseline data file. The information in this file becomes the baseline of the expected state of the object during subsequent builds.

When you play back the script against a new build, Robot retrieves the information in the baseline file for each verification point and compares it to the state of the object in the new build. If the captured object does not match the baseline, Robot creates an actual data file. The information in this file shows the actual state of the object in the build. After playback, the results of each verification point appear in the log in Test Manager. If a verification point fails (the baseline and actual data do not match), you can select the verification point in the log and click View -> Verification Point to open the appropriate Comparator. The Comparator displays the baseline and actual files so that you can compare them.

A verification point is stored in the project and is always associated with a script. When you create a verification point, its name appears in the Asset (left) pane of the Script window. The verification point script command, which always begins with Result =, appears in the Script (right) pane. Because verification points are assets of a script, if you delete a script, Robot also deletes all of its associated verification points. You can easily copy verification points to other scripts if you want to reuse them.

List of Verification Points

The following table summarizes each Robot verification point.
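The baseline/actual mechanism can be summarized in a few lines of Python. This is a sketch of the concept only; the property names are invented, and Robot stores these as files rather than in-memory dictionaries.

```python
# Baseline captured at record time: the expected state of one object.
baseline = {"caption": "Submit", "enabled": True}

def verify(actual):
    """Compare the object's current state against the recorded baseline."""
    if actual == baseline:
        return ("pass", None)
    # On a mismatch, keep the actual data so a comparator can show both sides.
    return ("fail", actual)

print(verify({"caption": "Submit", "enabled": True}))  # passes
print(verify({"caption": "Send", "enabled": True}))    # fails, actual kept
```

The "actual data file" in Robot plays the role of the second tuple element here: it exists only when the comparison fails, and it is what the Comparator shows side by side with the baseline.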



About SQABasic Header Files

SQABasic header files let you declare custom procedures, constants, and variables that you want to use with multiple scripts or SQABasic library source files. They can be accessed by all modules within the project. You can use Robot to create and edit SQABasic header files. SQABasic header files have the extension .sbh. SQABasic files are stored in the SQABas32 folder of the project, unless you specify another location. You can specify another location by clicking Tools -> General Options, then the Preferences tab; under SQABasic path, use the Browse button to find the location. Robot will check this location first; if the file is not there, it will look in the SQABas32 directory.

Adding Declarations to the Global Header File

For your convenience, Robot provides a blank header file called Global.sbh. Global.sbh is a project-wide header file stored in SQABas32 in the project. You can add declarations to this global header file and/or create your own. To open Global.sbh: Click File -> Open -> SQABasic File, set the file type to Header Files (*.sbh), select global.sbh, and then click Open.

Inserting a Comment into a GUI Script

During recording or editing, you can insert lines of comment text into a GUI script. Comments are helpful for documenting and editing scripts. Robot ignores comments at compile time. To insert a comment into a script during recording or editing: If recording, click the Display GUI Insert Toolbar button on the GUI Record toolbar. If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on the Standard toolbar. Click the Comment button on the GUI Insert toolbar, type the comment (60 characters maximum), and click OK to continue recording or editing. Robot inserts the comment into the script (in green by default) preceded by a single quotation mark. For example:
' This is a comment in the script
To change lines of text into comments, or to uncomment text: highlight the text, then click Edit -> Comment Line or Edit -> Uncomment Line.

About Datapools

A datapool is a test dataset. It supplies data values to the variables in a script during script playback. Datapools let you automatically pump test data to virtual testers under high-volume conditions that potentially involve hundreds of virtual testers performing thousands of transactions.

Typically, you use a datapool so that each virtual tester that runs the script can send realistic data (which can include unique data) to the server, and so that a single virtual tester performing the same transaction multiple times can send realistic data to the server in each transaction.

Using Datapools with GUI Scripts

If you are providing one or more values to the client application during GUI recording, you might want a datapool to supply those values during playback. For example, you might be filling out a data entry form and providing values such as order number, part name, and so forth. If you plan to repeat the transaction multiple times during playback, you might want to provide a different set of values each time. Also, when a GUI script is played back in a TestManager suite, the GUI script can access the same datapool as other scripts.

Although there are differences in setting up datapool access in GUI scripts and sessions, you define a datapool for either type of script using TestManager in exactly the same way. The setup differences are: you must add datapool commands to GUI scripts manually while editing the script in Robot, whereas Robot adds datapool commands to VU scripts automatically; and there is no DATAPOOL_CONFIG statement in a GUI script - the SQADatapoolOpen command defines the access method to use for the datapool. A GUI script can access a datapool when it is played back in Robot, or when you debug a GUI script.

Compiling the script

When you play back a GUI script or VU script, Robot compiles the script if it has been modified since it last ran. You can also compile scripts and SQABasic library source files manually.
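At its core, the datapool concept reduces to "next row, please" with wrap-around. A minimal sketch follows; the field names are invented, and real datapools are defined in TestManager and accessed with commands such as SQADatapoolOpen rather than Python code.

```python
# A tiny datapool: each transaction draws the next row of realistic values.
import itertools

DATAPOOL = [
    {"order_no": "1001", "part": "widget"},
    {"order_no": "1002", "part": "sprocket"},
    {"order_no": "1003", "part": "gear"},
]

rows = itertools.cycle(DATAPOOL)   # sequential access with wrap-around

def next_transaction():
    return next(rows)              # one row feeds one form submission

# Five transactions re-use the pool once it is exhausted.
print([next_transaction()["order_no"] for _ in range(5)])
```

Real tools offer several access methods (sequential, random, unique per virtual tester); sequential-with-wrap-around shown here is just the simplest.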
Debug menu

The Debug menu has the following commands:
Go
Go Until Cursor
Animate
Pause
Stop
Set or Clear Breakpoints
Clear All Breakpoints
Step Over
Step Into
Step Out

Note: The Debug menu commands are for use with GUI scripts only.

Cognizant Technology Solutions. The compilation results can be viewed in the Build tab of the Output window. Page 139 ©Copyright 2007. All Rights Reserved C3: Protected .Handout – Software Testing During compilation. the Build tab in the Output window displays compilation results and error messages with line numbers for all compiled scripts and library source files.

Compilation errors

After the script is created and compiled and any errors are fixed, it can be executed. The results need to be analyzed in Test Manager.

Rational Test Manager

Test Manager is the open and extensible framework that unites all of the tools, assets, and data both related to and produced by the testing effort. Under this single framework, all participants in the testing effort can define and refine the quality goals they are working toward. It is where the team defines the plan it will implement to meet those goals. And, most importantly, it provides the entire team with one place to go to determine the state of the system at any time. In Test Manager you can plan, design, implement, execute tests and evaluate results.

With Test Manager we can:
Create, manage, and run reports. The reporting tools help you track assets such as scripts, builds, and test documents, and track test coverage and progress.
Create and manage builds, log folders, and logs.
Create and manage datapools and data types.

When the script execution is started, the following window will be displayed. The folder in which the log is to be stored and the log name need to be given in this window. In the Results tab of Test Manager you can see the stored results. From Test Manager you can also see run details such as the start time of the script.

Supported environments

Operating systems: WinNT 4.0 with Service Pack 5, Win2000, WinXP (Rational 2002), Win98, Win95 with Service Pack 1
Protocols: Oracle, SQL Server, HTTP, Sybase, Tuxedo, SAP, PeopleSoft
Web browsers: IE 4.0 or later; Netscape Navigator (limited support)

Markup languages: HTML and DHTML pages on IE 4.0 or later
Development environments: Visual Basic 4.0 or above, Visual C++, Java, Oracle Forms 4.5, Delphi, PowerBuilder 5.0 and above

The basic product supports Visual Basic, VC++ and basic web pages. To test the other types of application, you have to download and run a free enabler program from Rational's website. For more details visit www.rational.com.

SUMMARY

Rational offers the most complete lifecycle toolset (including testing) of these vendors for the Windows platform. Some of their products are worldwide leaders, e.g. Rational Rose, Rational Robot, RequisitePro, ClearCase, etc. When it comes to object-oriented development they are the acknowledged leaders, with most of the leading OO experts working for them.

Chapter 10: Performance Testing

Learning Objective

After completing this chapter, you will be able to:
Test the performance of a software application

What is Performance testing?

Performance testing is a measure of the performance characteristics of an application. It is basically the process of understanding how the web application and its operating environment respond at various user load levels. In general, we want to measure the latency, throughput, and utilization of the web site while simulating attempts by virtual users to simultaneously access the site. One of the main objectives of performance testing is to maintain a web site with low latency, high throughput, and low utilization.

Maintaining optimum web application performance is a top priority for application developers and administrators. As the user base grows, the cost of failure becomes increasingly unbearable. Typically, to debug applications, developers would execute their applications using different execution streams (i.e., completely exercise the application) in an attempt to find errors. When looking for errors in the application, performance is a secondary issue to features; however, it is still an issue.

Performance analysis is also carried out for various purposes, such as:
During a design or redesign of a module or a part of the system, when more than one alternative presents itself. In such cases, the evaluation of a design alternative is the prime mover for the analysis.
Post-deployment realities create a need for tuning the existing system. A systematic approach like performance analysis is essential to extract maximum benefit from an existing system.
Identification of bottlenecks in a system is more of an effort at troubleshooting. It helps to focus efforts at improving overall system response.
To increase confidence, and to provide advance warning of potential problems under load conditions, analysis must be done to forecast performance under load.

Why Performance testing?

Performance problems are usually the result of contention for, or exhaustion of, some system resource. When a system resource is exhausted, the system is unable to scale to higher levels of performance. The main objective of a performance test is to demonstrate that the system functions to specification, with acceptable response times, while processing the required transaction volumes against a production-sized database in real time. The main deliverables from such a test, prior to execution, are automated test scripts and an infrastructure to be used to execute automated tests for extended periods.
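The three measures named above (latency, throughput, utilization) can all be computed from a simple request log. The timings below are invented, and the single-server utilization estimate is deliberately crude:

```python
# Made-up log of (start, end) times in seconds for four requests.
requests = [(0.0, 0.8), (0.2, 1.1), (0.5, 1.2), (1.0, 2.4)]

latencies = [end - start for start, end in requests]
avg_latency = sum(latencies) / len(latencies)            # mean response time

window = max(e for _, e in requests) - min(s for s, _ in requests)
throughput = len(requests) / window                      # requests per second

busy = sum(latencies)                                    # crude 1-server model
utilization = min(busy / window, 1.0)

print(round(avg_latency, 2), round(throughput, 2), round(utilization, 2))
```

Load test tools report exactly these aggregates, just over thousands of requests and broken down per transaction type.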

Performance Testing Objectives

The objective of a performance test is to demonstrate that the system meets requirements for transaction throughput and response times simultaneously. This helps in determining whether or not the system meets the stated requirements. The performance testing goals are:
End-to-end transaction response time measurements.
Measure Application Server components' performance under various loads.
Measure database components' performance under various loads.
Monitor system resources under various loads.
Measure the network delay between the server and clients.

Pre-requisites for Performance Testing

We can identify five pre-requisites for a performance test. Not all of these need be in place prior to planning or preparing the test (although this might be helpful); rather, the list defines what is required before a test can be executed.

Quantitative, relevant, measurable, realistic, achievable requirements

As a foundation to all tests, performance requirements should be agreed prior to the test. The design specification, or a separate performance requirements document, should:
Define specific performance goals for each feature that is instrumented.
Base performance goals on customer requirements.
Define specific customer scenarios.

First and foremost, a response time must not be an abstract figure but must be relevant to a business process, expressed in quantifiable terms such that when response times are measured, a sensible comparison can be derived. The following attributes will help to have a meaningful performance comparison:
Quantitative - expressed in quantifiable terms such that when response times are measured, a sensible comparison can be derived.
Relevant - a response time must be relevant to a business process.
Measurable - a response time should be defined such that it can be measured, using a tool or stopwatch, and at reasonable cost.
Realistic - response time requirements should be justifiable when compared with the durations of the activities within the business process the system supports.
Achievable - response times should take some account of the cost of achieving them.

A comprehensive test strategy would define a test infrastructure to enable all these objectives to be met. This infrastructure is an asset, and an expensive one too, so it pays to make as much use of it as possible. Fortunately, this infrastructure is a test bed which can be re-used for other tests with broader objectives.

Stable system

A test team attempting to construct a performance test of a system whose software is of poor quality is unlikely to be successful. If the software crashes regularly, it will probably not withstand the relatively minor stress of repeated use. Testers will not be able to record scripts in the first instance, or may not be able to execute a test for a reasonable length of time before the software, middleware or operating systems crash.

Realistic test environment

The test environment should ideally be the production environment or a close simulation, and should be dedicated to the performance test team for the duration of the test. Often this is not possible. However, for the results of the test to be realistic, the test environment should be comparable to the actual production environment. A test environment which bears no similarity to the actual production environment may be useful for finding obscure errors in the code, but is useless for a performance test. Even with an environment which is somewhat different from the production environment, it should still be possible to interpret the results obtained using a model of the system to predict, with some confidence, the behavior of the target environment.

Performance Testing Requirements

Performance requirements normally comprise three components:
- Response time requirements
- Transaction volumes, detailed in 'Load Profiles'
- Database volumes

Response time requirements

When asked to specify performance requirements, users normally focus attention on response times, and often wish to define requirements in terms of generic response times. A single response time requirement for all transactions might be simple to define from the user's point of view, but is unreasonable. Some functions are critical and require short response times, but others are less critical and their response time requirements can be less stringent.

Load profiles

The second component of performance requirements is a schedule of load profiles. A load profile is the level of system loading expected to occur during a specific business scenario. Business scenarios might cover different situations when the users' organization has different levels of activity, or involve a varying mix of activities which must be supported by the system.

Database volumes

Data volumes, defining the numbers of table rows which should be present in the database after a specified period of live running, complete the load profile. Typically, data volumes estimated to exist after one year's use of the system are used, but two-year volumes or greater might be used in some circumstances, depending on the business application.
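A load profile of the kind described above can be recorded as plain data: for each business scenario, the expected rate of each activity in the mix. A sketch with invented scenario names and transaction rates (not taken from this handout):

```python
# Hypothetical load profiles: expected transactions per hour, per scenario.
load_profiles = {
    "normal day":   {"order entry": 400,  "enquiry": 900,  "reporting": 20},
    "month end":    {"order entry": 250,  "enquiry": 600,  "reporting": 300},
    "peak trading": {"order entry": 1200, "enquiry": 2500, "reporting": 10},
}

def total_rate(profile):
    """Combined transaction rate for one scenario (transactions/hour)."""
    return sum(profile.values())

# The test schedule should at least cover the heaviest combined load.
heaviest = max(load_profiles, key=lambda name: total_rate(load_profiles[name]))
print(heaviest, total_rate(load_profiles[heaviest]))  # → peak trading 3710
```

Keeping the profiles as data makes it easy to drive the same test scripts with a different scenario mix.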

Performance Testing Process

Phase 1 – Requirements Study

This activity is carried out during the business and technical requirements identification phase. The objective is to understand the performance test requirements, the hardware & software components, and the usage model. It is important to understand, as accurately and as objectively as possible, the nature of the load that must be generated. The following are the important performance test requirements that need to be captured during this phase:
- Response time
- Transactions per second
- Hits per second
- Workload
- Number of concurrent users
- Volume of data
- Data growth rate
- Resource usage
- Hardware and software configurations

Phase 2 – Test Plan

The following configuration information will be identified as part of performance testing environment requirement identification.

Hardware Platform
- Server machines
- Processors
- Memory
- Disk storage
- Load machine configuration
- Network configuration

Software Configuration
- Operating system
- Server software
- Client machine software
- Applications

Phase 3 – Test Design

Based on the test strategy, detailed test scenarios would be prepared. During the test design period the following activities will be carried out:
- Scenario design
- Detailed test execution plan
- Dedicated test environment setup
- Script recording/programming
- Script customization (delay, checkpoints, synchronization points)
- Data generation
- Parameterization/data pooling
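Parameterization/data pooling replaces hard-coded values in a recorded script with values drawn from a pool, so that each virtual-user iteration submits different data. A minimal sketch of the idea; the account numbers and the request template are invented for illustration:

```python
import itertools

# Hypothetical data pool; a real tool would read this from a file or database.
account_pool = ["AC-1001", "AC-1002", "AC-1003"]

# Cycle through the pool so every virtual-user iteration gets a value.
next_account = itertools.cycle(account_pool).__next__

def parameterized_request(template):
    """Substitute the {account} placeholder, as a scripting tool's parameter would."""
    return template.format(account=next_account())

template = "GET /balance?account={account}"
print(parameterized_request(template))  # → GET /balance?account=AC-1001
print(parameterized_request(template))  # → GET /balance?account=AC-1002
```

Without this step, every virtual user would replay identical data and caching could make the measured response times unrealistically good.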

Phase 4 – Scripting

Phase 5 – Test Execution

The test execution will follow the various types of test as identified in the test plan. Virtual user loads are simulated based on the usage pattern, and load levels are applied as stated in the performance test strategy. All the scenarios identified will be executed. The following artifacts will be produced during the test execution period:
- Test logs
- Test results
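Commercial tools such as LoadRunner generate this load with virtual users. The core idea, many concurrent workers each timing their transactions, can be sketched with plain threads; `transaction()` here is a stand-in for a real recorded script step, not part of any tool's API:

```python
import threading
import time

def transaction():
    """Stand-in for a scripted step; a real test would call the system under test."""
    time.sleep(0.01)

def virtual_user(iterations, timings):
    for _ in range(iterations):
        start = time.perf_counter()
        transaction()
        timings.append(time.perf_counter() - start)  # list.append is thread-safe

def run_load(virtual_users=5, iterations=10):
    """Apply the load level stated in the test plan and collect response times."""
    timings = []
    threads = [threading.Thread(target=virtual_user, args=(iterations, timings))
               for _ in range(virtual_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return timings

timings = run_load()
print(len(timings))  # → 50, one measurement per executed transaction
```

The collected timings are the raw material for the test logs and results listed above.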

Phase 6 – Test Analysis

The test logs and results generated are analyzed for performance under various loads, think time, network delay, resource usage, transactions per second, network throughput, database throughput, transaction distribution and data handling. Manual and automated results analysis methods can be used for performance results analysis.

Phase 7 – Preparation of Reports

The following performance test reports/graphs can be generated as part of performance testing:
- Transaction response time
- Transactions per second
- Transaction summary graph
- Transaction performance summary graph
- Transaction response under load graph
- Virtual user summary graph

- Error statistics graph
- Hits per second graph
- Throughput graph
- Downloads per second graph

Based on the performance report analysis, suggestions on improvement or tuning will be provided to the design team, such as:
- Performance improvements to application software, middleware, or database organization.
- Changes to server system parameters.
- Upgrades to client or server hardware, network capacity or routing.

Common Mistakes in Performance Testing
- No goals
- No general-purpose model (goals determine techniques, metrics and workload; defining them is not trivial)
- Biased goals ("to show that OUR system is better than THEIRS"); the analysts should not also be the jury
- Unsystematic approach
- Analysis without understanding the problem
- Incorrect performance metrics
- Unrepresentative workload
- Wrong evaluation technique
- Overlooking important parameters
- Ignoring significant factors
- Inappropriate experimental design
- Inappropriate level of detail
- No analysis
- Erroneous analysis
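Several of the reports listed earlier, such as transactions per second and the error statistics summary, are simple aggregations over the raw test log. A sketch, assuming a log of (elapsed second, transaction name, passed?) tuples; the log entries are invented sample data:

```python
from collections import Counter

# Hypothetical test log: (elapsed second, transaction name, passed?).
log = [
    (0, "login", True), (0, "search", True), (1, "search", False),
    (1, "login", True), (1, "checkout", True), (2, "search", True),
]

def transactions_per_second(log):
    """Counts per elapsed second - the data behind a 'Transactions per Second' graph."""
    return Counter(second for second, _, _ in log)

def error_statistics(log):
    """Total successful/failed transactions - the 'Error Statistics' summary."""
    passed = sum(1 for _, _, ok in log if ok)
    return {"passed": passed, "failed": len(log) - passed}

print(dict(transactions_per_second(log)))  # → {0: 2, 1: 3, 2: 1}
print(error_statistics(log))               # → {'passed': 5, 'failed': 1}
```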

- No sensitivity analysis
- Ignoring errors in input
- Improper treatment of outliers
- Assuming no change in the future
- Ignoring variability
- Too complex analysis
- Improper presentation of results
- Ignoring social aspects
- Omitting assumptions and limitations

Benchmarking Lessons

Every build needs to be measured. We should run the automated performance test suite against every build and compare the results against previous results, under controlled conditions from build to build. This typically means measuring performance on "clean" test environments.

Creating an automated test suite to measure performance is time-consuming and labor-intensive, so keep the performance test suite fairly static throughout the product development cycle. Significant changes to the performance test suite skew, or make obsolete, all previous data. The performance tests should be modified consistently: if the design or requirements change and you must modify a test, perturb only one variable at a time for each build.

Design the performance test suite to measure response times, not to identify bugs in the product; the performance tests should not be used to find functionality-type bugs. Design the build verification test (BVT) suite to ensure that no new bugs are injected into the build that would prevent the performance test suite from completing successfully.

Without defined performance goals or requirements, testers must guess, without a clear purpose, at how to instrument tests to best measure various response times. It is therefore important to define concrete performance goals, and to establish incremental performance goals throughout the product development cycle. If we decide to make performance a goal and a measure of the quality criteria for release, the management team must decide to enforce the goals. Performance issues must be identified as soon as possible to prevent further degradation, and all the members of the team should agree that a performance issue is not just a bug: it is a software architectural problem.

Performance testing of Web services and applications is paramount to ensuring an excellent customer experience on the Internet. The Web Capacity Analysis (WebCAT) tool provides Web server performance analysis; the tool can also assess Internet Server Application Programming Interface and application server provider (ISAPI/ASP) applications.
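The lesson "measure every build and compare against previous results" can be automated as a simple regression gate over per-transaction response times. A sketch; the 10% tolerance is an assumed policy and the build numbers and timings are invented:

```python
def regressions(previous, current, tolerance=0.10):
    """Flag transactions whose response time grew more than `tolerance` vs. the last build."""
    flagged = {}
    for name, old in previous.items():
        new = current.get(name)
        if new is not None and new > old * (1 + tolerance):
            flagged[name] = (old, new)
    return flagged

# Median response times (seconds) per transaction, per build.
build_41 = {"login": 0.80, "search": 1.20, "checkout": 2.00}
build_42 = {"login": 0.82, "search": 1.55, "checkout": 1.90}

print(regressions(build_41, build_42))  # → {'search': (1.2, 1.55)}
```

Because only one variable should be perturbed per build, a flagged transaction points directly at the change that caused the slowdown.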

Strive to achieve the majority of the performance goals early in the product development cycle, because:
- Most performance issues require architectural change.
- Performance is known to degrade slightly during the stabilization phase of the development cycle.
- Achieving performance goals early also helps to ensure that the ship date is met, because a product rarely ships if it does not meet performance goals.

Ensure that you know what you are measuring and why. Tests are capturing secondary metrics when the instrumented tests have nothing to do with measuring clear and established performance goals. Although secondary metrics look good on wall charts and in reports, if the data is not going to be used in a meaningful way to make improvements in the engineering cycle, it is probably wasted data.

You should reuse automated performance tests. Automated performance tests can often be reused in many other automated test suites. For example, incorporate the performance test suite into the stress test suite to validate stress scenarios and to identify potential performance issues under different stress conditions.

Performance Testing Tools

Testing for most applications will be automated; the tools used would be those specified in the requirement specification. The tools used for performance testing are LoadRunner 6.5 and WebLoad 4.5.

LoadRunner 6.5

LoadRunner is Mercury Interactive's tool for testing the performance of client/server systems. LoadRunner enables you to test your system under controlled and peak load conditions. To generate load, LoadRunner runs thousands of Virtual Users that are distributed over a network. Using a minimum of hardware resources, these Virtual Users provide consistent, repeatable and measurable load that exercises your client/server system just as real users would. LoadRunner's in-depth reports and graphs provide the information that you need to evaluate the performance of your client/server system.

WebLoad 4.5

WebLoad is a testing tool for testing the scalability, functionality and performance of Web-based applications – both Internet and intranet. It can measure the performance of your application under any load conditions. Use WebLoad to test how well your web site will perform under real-world conditions by combining performance, load and functional tests, or by running them individually. WebLoad generates load by creating virtual clients that emulate network traffic. You create test scripts (called agendas) using JavaScript that instruct those virtual clients what to do. When WebLoad runs the test, it gathers results at a per-client, per-transaction and per-instance level from the computers that are generating the load; WebLoad can also gather information from the server's performance monitor. WebLoad displays the results in graphs and tables in real time, and you can save and export the results when the test is finished. WebLoad supports HTTP 1.0 and 1.1, including cookies, proxies, SSL, TLS, client certificates, authentication, persistent connections and chunked transfer coding.

Performance Testing Tools – Summary and Comparison




Architecture Benchmarking

Hardware Benchmarking - Hardware benchmarking is performed to size the application with the planned hardware platform. It is significantly different from a capacity planning exercise in that it is done after development and before deployment.

Software Benchmarking - Defining the right placement and composition of software instances can help in vertical scalability of the system without the addition of hardware resources. This is achieved through a software benchmark test.

General Tests

What follows is a list of tests adaptable to assess the performance of most systems, along with some simple background information that might be helpful during testing. The methodologies below are generic, allowing one to use a wide range of tools to conduct the assessments.

Methodology Definitions
- Result: provides information about what the test will accomplish.
- Purpose: explains the value and focus of the test.
- Type of workload: in order to properly achieve the goals of the test, each test requires a certain type of workload. This specification provides information on the appropriate script of pages or transactions for the user.
- Methodology: a list of suggested steps to take in order to assess the system under test.
- What to look for: contains information on behaviors, issues and errors to pay attention to during and after the test.
- Constraints: details any constraints and values that should not be exceeded during testing.
- Time estimate: a rough estimate of the amount of time that the test may take to complete.

Performance Metrics

The common metrics selected/used during performance testing are as below.

Response time
- Turnaround time = the time between the submission of a batch job and the completion of its output.

Throughput: rate (requests per unit of time). Examples:
- Jobs per second
- Requests per second
- Millions of Instructions Per Second (MIPS)
- Millions of Floating Point Operations Per Second (MFLOPS)
- Packets Per Second (PPS)
- Bits per second (bps)
- Transactions Per Second (TPS)

Capacity:
- Nominal capacity: maximum achievable throughput under ideal workload conditions (e.g., bandwidth in bits per second). The response time at maximum throughput is too high.
- Usable capacity: maximum throughput achievable without exceeding a pre-specified response-time limit.
- Efficiency: the ratio of usable capacity to nominal capacity. (Alternatively, the ratio of the performance of an n-processor system to that of a one-processor system is its efficiency.)

Utilization: the fraction of time the resource is busy servicing requests; for memory, the average fraction used.

Stretch Factor: the ratio of the response time with a single user to that with concurrent users.
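The capacity metrics above reduce to a few ratios. A worked sketch with invented measurements; note that the stretch factor is computed here as loaded time over single-user time, so values above 1 indicate slowdown under load:

```python
def efficiency(usable_capacity, nominal_capacity):
    """Efficiency = usable capacity / nominal capacity."""
    return usable_capacity / nominal_capacity

def utilization(busy_time, total_time):
    """Fraction of time the resource is busy servicing requests."""
    return busy_time / total_time

def stretch_factor(single_user_response, loaded_response):
    """How much the response time stretches under concurrent load."""
    return loaded_response / single_user_response

# Invented measurements for illustration.
nominal_tps = 500   # maximum throughput under ideal workload
usable_tps = 380    # maximum throughput within the response-time limit
print(efficiency(usable_tps, nominal_tps))           # → 0.76

print(utilization(busy_time=45.0, total_time=60.0))  # → 0.75

# Response time grew from 0.5 s (one user) to 2.0 s under concurrent load.
print(stretch_factor(0.5, 2.0))                      # → 4.0
```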

Client Side Statistics
- Running Vusers
- Hits per second
- Throughput
- HTTP status codes
- HTTP responses per second
- Pages downloaded per second
- Transaction response time
- Page component breakdown time
- Page download time
- Component size analysis
- Error statistics
- Errors per second
- Total successful/failed transactions

Server Side Statistics
- System resources – processor utilization, memory and disk space
- Web server resources – threads, cache hit ratio
- Application server resources – heap size, JDBC connection pool
- Database server resources – wait events, SQL queries
- Transaction profiling
- Code block analysis

Network Statistics
- Bandwidth utilization
- Network delay time
- Network segment delay time

Conclusion

Performance testing is an independent discipline and involves all the phases of the mainstream testing lifecycle, i.e. strategy, plan, design, execution, analysis and reporting. As tests are executed, metrics such as response times for transactions, HTTP requests per second, throughput etc. should be collected. It is also important to monitor and collect statistics such as CPU utilization, memory, disk space and network usage on individual web, application and database servers, and to make sure those numbers recede as load decreases. Cognizant has built custom monitoring tools to collect these statistics; third-party monitoring tools are also used based on the requirement.

Performance testing, if executed systematically with appropriate planning, can unearth issues that otherwise cannot be found through mainstream testing. Without the rigor described in this paper, executing performance testing does not yield anything more than finding more defects in the system. It is very typical of the project manager to be overtaken by time and resource pressures,

leading to not enough budget being allocated for performance testing, the consequences of which could be disastrous to the final system. There is another flip side of the coin. Before testing the system for performance requirements, the system should have been architected and designed to meet the required performance goals. If not, it may be too late in the software development cycle to correct serious performance issues.

Load Testing

Load Testing is the creation of a simulated load on a real computer system by using virtual users who submit work as real users would do at real client workstations, thus testing the system's ability to support such a workload. Testing of critical web applications during development and before deployment should include functional testing to confirm conformance to the specifications, performance testing to check whether the system offers an acceptable response time, and load testing to see what hardware or software configuration will be required to provide acceptable response times and handle the load that will be created by the real users of the system. Thus load testing is accomplished by stressing the real application under simulated load provided by virtual users.

Why is load testing important?

Load testing increases the uptime of critical web applications by helping you spot the bottlenecks in the system under large user stress scenarios, before they happen in a production environment. Load testing gives the greatest line of defense against poor performance and accommodates complementary strategies for performance management and monitoring of a production environment. Web-enabled applications and infrastructures must be able to execute evolving business processes with speed and precision while sustaining high volumes of changing and unpredictable user audiences. Fortunately, robust and viable solutions exist to help fend off disasters that result from poor performance. Automated load testing tools and services are available to meet the critical need of measuring and optimizing complex and dynamic application and infrastructure performance, leveraging an ongoing, lifecycle-focused approach. By continuously testing and monitoring the performance of critical software applications, businesses can begin to take charge and leverage information technology assets to their competitive advantage. Once these solutions are properly adopted and utilized, business can confidently and proactively execute strategic corporate initiatives for the benefit of shareholders and customers alike. The discipline helps businesses succeed in leveraging Web technologies to their best advantage, enabling new business opportunity, lowering transaction costs and strengthening profitability.

When should load testing be done?

Load testing should be done when the probable cost of the load test is likely less than the cost of a failed application deployment.

Volume and Stress Testing

Volume testing

Volume Testing, as its name implies, is testing that purposely subjects a system (both hardware and software) to a series of tests where the volume of data being processed is the subject of the test. Such systems can be transaction processing systems capturing real-time sales, or could be database updates and/or data retrieval. Volume testing will seek to verify the physical and logical limits to a system's capacity and ascertain whether such limits are acceptable to meet the projected capacity of the organization's business processing. Volume Testing is conducted in conjunction with Component, Configuration and/or Stress Testing.

Stress testing

Stress testing is the system testing of an integrated, black-box application that attempts to cause failures involving how its performance varies under extreme but valid conditions (e.g., extreme utilization, insufficient memory, inadequate hardware, and dependency on over-utilized shared resources). For example, stress testing could involve an extreme number of simultaneous users, extreme numbers of transactions, queries that return the entire contents of a database, queries with an extreme number of restrictions, or an entry at the maximum amount of data in a field.

Objectives

The typical objectives of stress testing are to:
- Partially validate the application (i.e., determine if it fulfills its scalability requirements).
- Determine how an application degrades and eventually fails as conditions become extreme (e.g., number of users, number of transactions, amount of data).
- Provide data that will assist systems engineers in making intelligent decisions regarding future scaling needs.

Goals

The typical goals of stress testing are to:
- Cause the application to fail to scale gracefully under extreme conditions so that the underlying defects can be identified, analyzed, fixed, and prevented in the future.
- Report these failures to the development teams so that the associated defects can be fixed.
- Determine if the application will support "worst case" production load conditions.
- Help determine the extent to which the application is ready for launch.
- Provide input to the defect trend analysis effort.

Examples

Typical examples include stress testing of an application that is:
- Software only.
- A system including software, hardware, and data components.
- Huge (e.g., ...).
- Batch with no realtime requirements.

- Soft realtime (i.e., human reaction times).
- Hard realtime (e.g., automotive engine control, avionics, cruise-control software, flight-control software, radar).
- Embedded within another system.
- Business-critical or safety-critical.
- Client/server or n-tier distributed.
- A research prototype that will not be placed into service.

Tasks

Stress testing typically involves the independent test team performing the following testing tasks using the following techniques:
- Test Planning
- Test Reuse
- Test Design:
  o Use case based testing
  o Workload analysis to determine the maximum production workloads
- Test Implementation:
  o Develop test scripts
  o Simulate extreme workloads
- Test Execution:
  o Regression testing
  o Profiling
- Test Reporting

Preconditions

Stress test execution can typically begin when the following preconditions hold:
- The scalability requirements to be tested have been specified.
- The relevant software components have passed unit testing.
- The relevant system components have passed system integration testing.
- Software integration has occurred (i.e., load testing can begin prior to the distribution of the software components onto the hardware components).
- The test environment is ready.
- The independent test team is adequately staffed and trained in stress testing.

Completion Criteria

Stress testing is typically complete when the following postconditions hold:
- At least one stress test suite exists for each scalability requirement.
- The test suites for every scheduled scalability requirement execute successfully.

Environments

Load testing is typically performed on the following environments using the following tools:
- Test Environment:
  o Test harness
  o Use case modeling tool
  o Performance analyzer
  o Profiler
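The task of simulating extreme workloads to see how an application degrades and eventually fails is often implemented as a step-load ramp: keep increasing the load until the system breaks. A self-contained sketch in which the "system" is a toy function with an assumed capacity of 800 users; both the capacity and the timing formula are invented for illustration:

```python
def system_under_test(concurrent_users):
    """Toy stand-in: response time (ms) grows with load; 'fails' past an assumed capacity."""
    if concurrent_users > 800:              # hypothetical breaking point
        raise RuntimeError("server overloaded")
    return 200 + concurrent_users           # response time in milliseconds

def ramp_until_failure(start=100, step=100):
    """Step the load up until the system fails; report the last level that survived."""
    level = start
    last_good = None
    while True:
        try:
            response = system_under_test(level)
        except RuntimeError:
            return last_good
        last_good = (level, response)
        level += step

print(ramp_until_failure())  # → (800, 1000): 800 users survived, at 1000 ms
```

The returned pair is exactly the kind of data point the goals above call for: the worst-case load the application supports and the response time at that load.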

Work Products

Stress testing typically results in the production of all or part of the following work products from the test work product set:
- Documents:
  o Project Test Plan
  o Master Test List
  o Test Procedures
  o Test Report
  o Test Summary Report
- Software and Data:
  o Test Harness
  o Test Scripts
  o Test Suites
  o Test Cases
  o Test Data

Phases

(*) Optional stress testing of COTS software components during the technology analysis and technology vendor selection tasks.
(**) Optional stress testing of the executable architecture as well as the COTS components during the vendor and tool evaluation and vendor and tool selection tasks.

Guidelines
- A system can fulfill its operational requirements and still be a failure if it does not scale; stress testing can elicit such failures prior to launch.
- To the extent practical, reuse functional test cases as stress test cases.
- Develop test scripts simulating exceptional workloads.
- Perform stress testing for several minutes to several hours.
- Stress testing must be automated if adequate regression testing is to occur.
- The iterative and incremental development cycle implies that stress testing is regularly performed in an iterative and incremental manner.

SUMMARY

Performance testing is a measure of the performance characteristics of an application. The main objective of performance testing is to demonstrate that the system functions to specification with acceptable response times while processing the required transaction volumes against a real-time production database. The objective of a performance test is to demonstrate that the system meets requirements for transaction throughput and response times simultaneously. The main deliverables from such a test, prior to execution, are automated test scripts and an infrastructure to be used to execute automated tests for extended periods.

Chapter 11: Test Case Point

Learning Objective

After completing this chapter, you will be able to:
- Describe TCP and TCP analysis

What is a Test Case Point (TCP)?

TCP is a measure for estimating the complexity of an application. It is also used as an estimation technique to calculate the size and effort of a testing project. The TCP counts are nothing but a ranking of the requirements, and of the test cases to be written for those requirements, into simple, average and complex, quantifying the same into a measure of complexity. In this courseware we shall give an overview of Test Case Points and not elaborate on using TCP as an estimation technique.

Test Case Point Analysis

Calculating the Test Case Points: based on the Functional Requirement Document (FRD), the application is classified into various modules. For a web application, say, we can have 'Login and Authentication' as a module, and rank that particular module as Simple, Average or Complex based on the number and complexity of the requirements for that module. Each requirement can be given a value on a scale of 1 to 10: a Simple requirement is ranked between 1 and 3, an Average requirement between 4 and 7, and a Complex requirement between 8 and 10.

The test cases for a particular requirement are classified into Simple, Average and Complex based on the following four factors:
- Test case complexity for that requirement, OR
- Interface with other test cases, OR
- Number of verification points, OR
- Baseline test data.

Refer to the test case classification table given below.

A sample guideline for classification of test cases is given below:
- Any verification point containing a calculation is considered 'Complex'.
- Any verification point which interfaces with or interacts with another application is classified as 'Complex'.
- Any verification point consisting of report verification is considered 'Complex'.
- A verification point comprising Search functionality may be classified as 'Complex' or 'Average' depending on the complexity.

Depending on the respective project, the complexity needs to be identified in a similar manner.

From the break-up of Complexity of Requirements done in the first step, we get the number of simple, average and complex test cases. Based on the test case type, an adjustment factor is assigned for simple, average and complex test case types. The Adjustment Factor in the table mentioned below is pre-determined and must not be changed for every project; it has been calculated after a thorough study and analysis done on many testing projects. By multiplying the number of requirements by its corresponding adjustment factor, we get the simple, average and complex test case points. Summing up the three results, we arrive at the count of Total Test Case Points.
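The TCP arithmetic described above is a weighted sum. The adjustment factors in this sketch are placeholders, since the handout's actual pre-determined table is not reproduced here; in practice the fixed project-wide factors would be used instead:

```python
# Hypothetical adjustment factors per test case type (the real table is fixed).
ADJUSTMENT = {"simple": 1, "average": 2, "complex": 3}

def total_test_case_points(counts):
    """Multiply each count by its adjustment factor and sum the three results."""
    return sum(counts[kind] * ADJUSTMENT[kind] for kind in ADJUSTMENT)

# Break-up from the first step: 10 simple, 6 average, 4 complex test cases.
counts = {"simple": 10, "average": 6, "complex": 4}
print(total_test_case_points(counts))  # → 10*1 + 6*2 + 4*3 = 34
```

The total, 34 test case points here, is the size measure that effort estimation would then be based on.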

Test Coverage

Test Coverage is an important measure of quality for software systems. Test coverage analysis is the process of:

- Finding areas of a program not exercised by a set of test cases,
- Creating additional test cases to increase coverage, and
- Determining a quantitative measure of code coverage, which is an indirect measure of quality.

An optional aspect of test coverage analysis is identifying redundant test cases that do not increase coverage. A test coverage analyzer automates this process. Test coverage analysis can be used to assure the quality of the set of tests, not the quality of the actual product.

Test coverage analysis is sometimes called code coverage analysis; the two terms are synonymous. The academic world more often uses the term "test coverage" while practitioners more often use "code coverage".

Code coverage analysis is a structural testing technique (white box testing). Structural testing compares test program behavior against the apparent intention of the source code, taking into account possible pitfalls in the structure and logic. This contrasts with functional testing (black-box testing), which compares test program behavior against a requirements specification. Structural testing examines how the program works; functional testing examines what the program accomplishes, without regard to how it works internally.

Coverage analysis requires access to test program source code and often requires recompiling it with a special command.

Test coverage measures

A large variety of coverage measures exist. Here is a description of some fundamental measures and their strengths and weaknesses.

Procedure-Level Test Coverage

Probably the most basic form of test coverage is to measure which procedures were and were not executed during the test suite. This simple statistic is typically available from execution profiling tools, whose job is really to measure performance bottlenecks. If the execution time in some procedures is zero, you need to write new tests that hit those procedures. But this measure of test coverage is so coarse-grained that it is not very practical.
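The procedure-level measure described above can be sketched with a simple probe that records whether each function was called. This is a minimal illustration, not how a real profiling tool is implemented; the function names are hypothetical:

```javascript
// Minimal sketch of procedure-level coverage: wrap each function with a
// probe that counts its calls. Procedures left at zero after the test
// suite runs were never executed and need new tests.
var callCounts = {};

function probe(name, fn) {
  callCounts[name] = 0;
  return function () {
    callCounts[name]++;
    return fn.apply(this, arguments);
  };
}

// Two hypothetical procedures under test:
var add = probe("add", function (a, b) { return a + b; });
var sub = probe("sub", function (a, b) { return a - b; });

// A test suite that only exercises add():
add(2, 3);

// Report the procedures the suite never reached:
var untested = Object.keys(callCounts).filter(function (k) {
  return callCounts[k] === 0;
});
console.log(untested); // ["sub"]
```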

Line-Level Test Coverage

The basic measure of a dedicated test coverage tool is tracking which lines of code are executed and which are not. This result is often presented in a summary at the procedure, file, or project level, giving a percentage of the code that was executed. A large project that achieved 90% code coverage might be considered a well-tested product.

Typically the line coverage information is also presented at the source code level, allowing you to see exactly which lines of code were executed and which were not. This, of course, is often the key to writing more tests that will increase coverage: by studying the unexecuted code, you can see exactly what functionality has not been tested.

Condition Coverage and Other Measures

It is easy to find cases where line coverage doesn't really tell the whole story. For example, consider a block of code that is skipped under certain conditions (e.g. a statement in an if clause). If that code is shown as executed, you don't know whether you have tested the case when it is skipped. You need condition coverage to know. There are many other test coverage measures, and in theory you should have more than basic line coverage; but in practice, most available code coverage tools do not provide much beyond it. However, if you achieve 95+% line coverage and still have time and budget to commit to further testing improvements, it is an enviable commitment to quality!

How Test Coverage Tools Work

To monitor execution, test coverage tools generally "instrument" the program by inserting "probes". How and when this instrumentation phase happens can vary greatly between different products. Adding probes to the program will make it bigger and slower; if the test suite is large and time-consuming, the performance factor may be significant.

Source-Level Instrumentation

Some products add probes at the source level. They analyze the source code as written and add additional code (such as calls to a code coverage runtime) that will record where the program reached. Such a tool may not actually generate new source files with the additional code; some products, for example, intercept the compiler after parsing but before code generation to insert the changes they need.

One drawback of this technique is the need to modify the build process: a separate version, namely a code coverage version, needs to be maintained in addition to other versions such as debug (unoptimized) and release (optimized). Proponents claim this technique can provide higher levels of code coverage measurement (condition coverage, etc.) than other forms of instrumentation. This type of instrumentation is dependent on programming language: the provider of the tool must explicitly choose which languages to support. But it can be somewhat independent of operating environment (processor, OS, or virtual machine).

Executable Instrumentation

Probes can also be added to a completed executable file. The tool will analyze the existing executable and then create a new, instrumented one. This type of instrumentation is independent of programming language. However, it is dependent on operating environment: the provider of the tool must explicitly choose which processors or virtual machines to support.

Runtime Instrumentation

Probes need not be added until the program is actually run. The probes exist only in the in-memory copy of the executable file; the file itself is not modified. The same executable file used for product release testing should be used for code coverage. Because the file is not modified in any way, just executing it will not automatically start code coverage (as it would with the other methods of instrumentation). Instead, the code coverage tool must start program execution directly or indirectly. Alternatively, the code coverage tool may add a tiny bit of instrumentation to the executable. This added code does not affect the size or performance of the executable; it wakes up and connects to a waiting coverage tool whenever the program executes, and does nothing if the coverage tool is not waiting. Like Executable Instrumentation, Runtime Instrumentation is independent of programming language but dependent on operating environment.

Test Coverage Tools at a Glance

Coverage analysis is a structural testing technique that helps eliminate gaps in a test suite. It helps most in the absence of a detailed, up-to-date requirements specification. Each project must choose a minimum percent coverage for release criteria based on available testing resources and the importance of preventing post-release failures. Clearly, safety-critical software should have a high goal. We must set a higher coverage goal for unit testing than for system testing, since a failure in lower-level code may affect multiple high-level callers.
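The gap between line coverage and condition coverage described earlier in this chapter can be made concrete with a small example (the function and its discount rule are hypothetical, chosen only to illustrate the measurement gap):

```javascript
// Line coverage can report 100% even when a branch was never exercised.
// One call with total >= 100 executes every line of this function, yet
// the case where the if-clause is SKIPPED (total < 100) remains untested.
function applyDiscount(total) {
  var discount = 0;
  if (total >= 100) {
    discount = total * 0.1; // executed => every line is "covered"
  }
  return total - discount;
}

applyDiscount(150); // hits every line: 100% line coverage
// Condition/branch coverage would flag that the false branch of
// (total >= 100) was never taken; a second test case is needed:
applyDiscount(50);
```

This is exactly why a test suite showing full line coverage can still miss the "skipped" case the text warns about.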

SUMMARY

TCP is a measure for estimating the complexity of an application. It is also used as an estimation technique to calculate the size and effort of a testing project.

Test your Understanding

Assignment 1

There is an application, a personal address book, described in the case study that follows.

Case Study: Application - Personal Address Book

The Personal Address Book is designed for people to access their contacts online from a central database on the server. It uses ASP on the server side and JavaScript on the client side; the database is Microsoft Access. The details that can be stored are Name, Address1, Address2, City, State, Country, Phone no. and Email address. However, only the name and phone number are mandatory fields. In the personal information page, an additional field for credit card number is provided. The phone number and credit card number are numeric, and the email address is verified for the character '@'. Note that the pages are not submitted to the server for this validation, which effectively reduces network traffic.

The search page provides a facility to search by name or place, and the results are displayed in a tabular form where the essential fields name and phone no., along with email, are displayed. The entries can also be seen in a tabular form by choosing the search-by-list option; these entries can be modified directly from there by clicking the edit button. The view-only pages are reached through the results page by clicking the hyperlinks.

As a whole, the application is designed for personal use, and security was not a concern while designing it. The system is not compatible with the Netscape browser and is designed specifically for IE 5.0 and above.

You need to submit the following artifacts:
a) Draw the flow chart and find the independent paths
b) Develop test cases based on that
c) Also develop other test cases for complete testing

Validation code at client side:

function save() {
    var flag;
    flag = 0;
    if (document.frm.name1.value == "") {
        alert("Please Fill in the name");
        document.frm.name1.focus();
        return false;
    }
    if (document.frm.phone.value == "") {
        alert("Please Fill in the Phone number");
        document.frm.phone.focus();
        return false;
    }
    if (isNaN(document.frm.phone.value) == true) {
        alert("Phone number should be numeric");
        document.frm.phone.focus();
        document.frm.phone.select();
        return false;
    }
    if (document.frm.email.value != "") {
        for (i = 0; i < document.frm.email.value.length; i++) {
            if (document.frm.email.value.charAt(i) == "@") {
                flag = 1;
                break;
            }
        }
        if (flag != 1) {
            alert("Please Fill email in correct format");
            document.frm.email.focus();
            document.frm.email.select();
            return false;
        }
    }
    document.frm.submit();
    return true;
}
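When writing test cases for the assignment, note that the save() handler above depends on the browser (document, alert), so it cannot be exercised outside IE. One common way to unit test its rules is to factor the same checks into a DOM-free function. The refactoring below is a sketch for test design, not part of the case-study application:

```javascript
// DOM-free version of the save() validation rules, so the checks can be
// unit tested without a browser. Returns the error message a user would
// see, or null when all checks pass. This function is illustrative only;
// it is not part of the Personal Address Book code.
function validateContact(name, phone, email) {
  if (name === "") return "Please Fill in the name";
  if (phone === "") return "Please Fill in the Phone number";
  if (isNaN(phone)) return "Phone number should be numeric";
  // Email is optional, but if present it must contain '@':
  if (email !== "" && email.indexOf("@") === -1) {
    return "Please Fill email in correct format";
  }
  return null;
}

console.log(validateContact("John", "5551234", "john@example.com")); // null
console.log(validateContact("John", "abc", "")); // "Phone number should be numeric"
```

Each return statement corresponds to an equivalence class worth a test case: missing name, missing phone, non-numeric phone, malformed email, empty-but-valid email, and the all-valid path.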

REFERENCES

WEBSITES
http://members.tripod.com/~bazman/
http://www.aptest.com/resources.html
http://www.mtsu.edu/~storm/
http://www.sqatester.com/
http://www.softwareqatest.com/
http://www.softwaretestinginstitute.com/
http://www.testing.com/
http://www.softwaretesting.nildram.co.uk/

Cognizant eResources:
http://elibrary/
\\ctsintcosaca\library

BOOKS
The Art of Software Testing, by Glenford J. Myers
The Complete Guide to Software Testing, by Bill Hetzel
Software Testing Techniques, by Boris Beizer
Black-Box Testing, by Boris Beizer
Client Server Software Testing on the Desktop and the Web, by Daniel J. Mosley
Fundamental Concepts for the Software Quality Engineer, by Taz Daughtrey
Testing Applications on the Web, by Hung Q. Nguyen
Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems, by Hung Q. Nguyen, Bob Johnson, Michael Hackett & Robert Johnson
Software Test Automation, by Mark Fewster & Dorothy Graham
The Automated Testing Handbook, by Linda G. Hayes
Automating Specification-Based Software Testing, by Robert M. Poston (IEEE)
Automated Software Testing: Introduction, Management, and Performance, by Elfriede Dustin, Jeff Rashka & John Paul
The Craft of Software Testing, by Brian Marick
Software Testing: A Craftsman's Approach, by Paul C. Jorgensen
Practical Tools and Techniques for Managing Hardware and Software Testing, by Rex Black
50 Ways to Improve Your Testing, by Rex Black
Effective Use of Test Automation Tools, by Bret Pettichord

STUDENT NOTES:
