
Software Testing

Confidential

Cognizant Technology Solutions

Table of Contents

1 Introduction to Software
1.1 Evolution of the Software Testing Discipline
1.2 The Testing Process and the Software Testing Life Cycle
1.3 Broad Categories of Testing
1.4 Widely Employed Types of Testing
1.5 The Testing Techniques
1.6 Chapter Summary
2 Black Box and White Box Testing
2.1 Introduction
2.2 Black Box Testing
2.3 Testing Strategies/Techniques
2.4 Black Box Testing Methods
2.5 Black Box (vs) White Box
2.6 White Box Testing
3 GUI Testing
3.1 Section 1 - Windows Compliance Testing
3.2 Section 2 - Screen Validation Checklist
3.3 Specific Field Tests
3.4 Validation Testing - Standard Actions
4 Regression Testing
4.1 What is Regression Testing
4.2 Test Execution
4.3 Change Request
4.4 Bug Tracking
4.5 Traceability Matrix
5 Phases of Testing
5.1 Introduction
5.2 Types and Phases of Testing
5.3 The "V" Model
6 Integration Testing
6.1 Generalization of Module Testing Criteria
7 Acceptance Testing
7.1 Introduction - Acceptance Testing
7.2 Factors Influencing Acceptance Testing
7.3 Conclusion
8 System Testing
8.1 Introduction to System Testing
8.2 Need for System Testing
8.3 System Testing Techniques
8.4 Functional Techniques
8.5 Conclusion
9 Unit Testing
9.1 Introduction to Unit Testing
9.2 Unit Testing - Flow
9.3 Execution of Unit Tests
9.4 Conclusion
10 Test Strategy
10.1 Introduction
10.2 Key Elements of Test Management
10.3 Test Strategy Flow
10.4 General Testing Strategies
10.5 Need for Test Strategy
10.6 Developing a Test Strategy
10.7 Conclusion
11 Test Plan
11.1 What is a Test Plan?
11.2 Contents (in Detail)
12 Test Data Preparation - Introduction
12.1 Criteria for Test Data Collection
12.2 Classification of Test Data Types
12.3 Organizing the Data
12.4 Data Load and Data Maintenance
12.5 Testing the Data
12.6 Conclusion
13 Test Logs - Introduction
13.1 Factors Defining the Test Log Generation
13.2 Collecting Status Data
14 Test Report
14.1 Executive Summary
15 Defect Management
15.1 Defect
15.2 Defect Fundamentals
15.3 Defect Tracking
15.4 Defect Classification
15.5 Defect Reporting Guidelines
16 Automation
16.1 Why Automate the Testing Process?
16.2 Automation Life Cycle
16.3 Preparing the Test Environment
16.4 Automation Methods
17 General Automation Tool Comparison
17.1 Functional Test Tool Matrix
17.2 Record and Playback
17.3 Web Testing
17.4 Database Tests
17.5 Data Functions
17.6 Object Mapping
17.7 Image Testing
17.8 Test/Error Recovery
17.9 Object Name Map
17.10 Object Identity Tool
17.11 Extensible Language
17.12 Environment Support
17.13 Integration
17.14 Cost
17.15 Ease of Use
17.16 Support
17.17 Object Tests
17.18 Matrix
17.19 Matrix Score
18 Sample Test Automation Tool
18.1 Rational Suite of Tools
18.2 Rational Administrator
18.3 Rational Robot
18.4 Robot Login Window
18.5 Rational Robot Main Window - GUI Script
18.6 Record and Playback Options
18.7 Verification Points
18.8 About SQABasic Header Files
18.9 Adding Declarations to the Global Header File
18.10 Inserting a Comment into a GUI Script
18.11 About Data Pools
18.12 Debug Menu
18.13 Compiling the Script
18.14 Compilation Errors
19 Rational Test Manager
19.1 Test Manager - Results Screen
20 Supported Environments
20.1 Operating System
20.2 Protocols
20.3 Web Browsers
20.4 Markup Languages
20.5 Development Environments
21 Performance Testing
21.1 What is Performance Testing?
21.2 Why Performance Testing?
21.3 Performance Testing Objectives
21.4 Pre-requisites for Performance Testing
21.5 Performance Requirements
22 Performance Testing Process
22.1 Phase 1 - Requirements Study
22.2 Phase 2 - Test Plan
22.3 Phase 3 - Test Design
22.4 Phase 4 - Scripting
22.5 Phase 5 - Test Execution
22.6 Phase 6 - Test Analysis
22.7 Phase 7 - Preparation of Reports
22.8 Common Mistakes in Performance Testing
22.9 Benchmarking Lessons
23 Tools
23.1 LoadRunner 6.5
23.2 WebLoad 4.5
23.3 Architecture Benchmarking
23.4 General Tests
24 Performance Metrics
24.1 Client Side Statistics
24.2 Server Side Statistics
24.3 Network Statistics
24.4 Conclusion
25 Load Testing
25.1 Why is Load Testing Important?
25.2 When Should Load Testing Be Done?
26 Load Testing Process
26.1 System Analysis
26.2 User Scripts
26.3 Settings
26.4 Performance Monitoring
26.5 Analyzing Results
26.6 Data Flow Diagram
26.7 Techniques Used to Isolate Defects
27 Stress Testing
27.1 Introduction to Stress Testing
27.2 Background to Automated Stress Testing
27.3 Automated Stress Testing Implementation
27.4 Programmable Interfaces
27.5 Graphical User Interfaces
27.6 Conclusion
28 Test Case Coverage
28.1 Test Coverage
28.2 Test Coverage Measures
28.3 Procedure-Level Test Coverage
28.4 Line-Level Test Coverage
28.5 Condition Coverage and Other Measures
28.6 How Test Coverage Tools Work
28.7 Test Coverage Tools at a Glance
29 Test Case Points - TCP
29.1 What is a Test Case Point (TCP)
29.2 Calculating the Test Case Points
29.3 Chapter Summary

1 Introduction to Software

1.1 Evolution of the Software Testing Discipline
The effective functioning of modern systems depends on our ability to produce software in a cost-effective way. The term "software engineering" was first used at a 1968 NATO workshop in West Germany, which focused on the growing software crisis. Thus we see that the software crisis (quality, reliability, high costs and so on) started way back, when most of today's software testers were not even born. The attitude towards software testing has undergone a major positive change in recent years. In the 1950s, when machine languages were used, testing was nothing but debugging. In the 1960s, when compilers were developed, testing started to be considered a separate activity from debugging. In the 1970s, when software engineering concepts were introduced, software testing began to evolve as a technical discipline. Over the last two decades there has been an increased focus on better, faster and cost-effective software. There has also been a growing interest in software safety, protection and security, and hence an increased acceptance of testing as a technical discipline and also as a career choice. Now, to answer "What is testing?", we can go by the famous definition of Myers: "Testing is the process of executing a program with the intent of finding errors."

1.2 The Testing process and the Software Testing Life Cycle
Every testing project has to follow the waterfall model of the testing process:

1. Test Strategy & Planning
2. Test Design
3. Test Environment Setup
4. Test Execution
5. Defect Analysis & Tracking
6. Final Reporting

The scope of testing can be tailored to the respective project, but the process mentioned above is common to any testing activity. Software testing has been accepted as a discipline in its own right, to the extent that there is a separate life cycle for the testing activity. Involving software testing in all phases of the
software development life cycle has become a necessity as part of the software quality assurance process. Right from the requirements study till implementation, testing needs to be done at every phase. The V-Model of the Software Testing Life Cycle, shown below alongside the Software Development Life Cycle, indicates the various phases or levels of testing.

[Figure: SDLC - STLC V-Model. Development arm: Requirement Study, High Level Design, Low Level Design. Testing arm: Unit Testing, Integration Testing, System Testing, User Acceptance Testing, Production Verification Testing.]

1.3 Broad Categories of Testing
Based on the V-Model mentioned above, we see that there are two categories of testing activities that can be done on software, namely:

• Static Testing
• Dynamic Testing

The kind of verification we do on the software work products before compilation and creation of an executable (requirement reviews, design reviews, code reviews, walkthroughs and audits) is called Static Testing. When we test the software by executing it and comparing actual and expected results, it is called Dynamic Testing.

1.4 Widely employed Types of Testing
From the V-model, we see that there are various levels or phases of testing, namely Unit Testing, Integration Testing, System Testing, User Acceptance Testing, etc. Let us see a brief definition of the widely employed types of testing.

Unit Testing: The testing done on a unit, the smallest piece of software, to verify that it satisfies its functional specification or its intended design structure.

Integration Testing: Testing which takes place as sub-elements are combined (i.e., integrated) to form higher-level elements.

Regression Testing: Selective re-testing of a system to verify that modifications (bug fixes) have not caused unintended effects and that the system still complies with its specified requirements.

System Testing: Testing the software against the required specifications on the intended hardware.

Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria, which enables a customer to determine whether to accept the system.

Performance Testing: Evaluating the time taken or response time of the system to perform its required functions, in comparison with different versions of the same product or with competing products.

Stress Testing: Evaluating a system beyond the limits of the specified requirements or system resources (such as disk space, memory or processor utilization) to ensure the system does not break unexpectedly.

Load Testing: A subset of stress testing that verifies a web site can handle a particular number of concurrent users while maintaining acceptable response times.

Alpha Testing: Testing of a software product or system conducted at the developer's site by the customer.

Beta Testing: Testing conducted at one or more customer sites by the end user of a delivered software product or system.

1.5 The Testing Techniques
To perform these types of testing, there are two widely used testing techniques; the testing types described above are performed based on them.

Black box testing technique: Testing based solely on analysis of requirements (specification, user documentation, etc.). Also known as functional testing.

White box testing technique: Testing based on analysis of internal logic (design, code, etc.), though expected results still come from the requirements. Also known as structural testing.

These topics will be elaborated in the coming chapters.

1.6 Chapter Summary
This chapter covered the introduction and basics of software testing, including:

• Evolution of Software Testing
• The testing process and life cycle
• Broad categories of testing
• Widely employed types of testing
• The testing techniques

2.1 Introduction

Test design refers to understanding the sources of test cases, test coverage, how to develop and document test cases, and how to build and maintain test data. There are two primary methods by which tests can be designed:

• BLACK BOX
• WHITE BOX

Black-box test design treats the system as a literal "black box", so it doesn't explicitly use knowledge of the internal structure. It is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box. White-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box and clear-box.

While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioral" and "structural". Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test design method. One has to use a mixture of different methods so that they aren't hindered by the limitations of a particular one. Some call this "gray-box" or "translucent-box" test design, but others wish we'd stop talking about boxes altogether!

2.2 Black box testing

Black box testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. It is because of this that black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. For this testing, test groups are often used. It is used to detect errors by means of execution-oriented test cases.

Though centered around the knowledge of user requirements, black box tests do not necessarily involve the participation of users. Among the most important black box tests that do not involve users are functionality testing, volume tests, stress tests, recovery testing, and benchmarks. Additionally, there are two types of black box test that involve users, i.e. field and laboratory tests. In the following, the most important aspects of these black box tests will be described briefly.

2.2.1 Black box testing without user involvement

The so-called "functionality testing" is central to most testing exercises. Its primary objective is to assess whether the program does what it is supposed to do, i.e. what is specified in the requirements. There are different approaches to functionality testing. One is the testing of each program feature or function in sequence. The other is to test module by module, i.e. each function where it is called first.

The objective of volume tests is to find the limitations of the software by processing a huge amount of data. A volume test can uncover problems that are related to the efficiency of a system, e.g. in the NLP area, incorrect buffer sizes or a consumption of too much memory space, or may only show that an error message would be needed telling the user that the system cannot process the given amount of data.

During a stress test, the system has to process a huge amount of data or perform many function calls within a short period of time. A typical example could be to perform the same function from all workstations connected in a LAN within a short period of time (e.g. sending e-mails or modifying a term bank via different terminals simultaneously).

The aim of recovery testing is to make sure to which extent data can be recovered after a system breakdown. Does the system provide possibilities to recover all of the data or part of it? How much can be recovered, and how? Is the recovered data still correct and consistent? Particularly for software that needs high reliability standards, recovery testing is very important.

The notion of benchmark tests involves the testing of program efficiency. The efficiency of a piece of software strongly depends on the hardware environment, and therefore benchmark tests always consider the soft/hardware combination. Whereas for most software engineers benchmark tests are concerned with the quantitative measurement of specific operations, some also consider user tests that compare the efficiency of different software systems as benchmark tests. In the context of this document, however, benchmark tests only denote operations that are independent of personal variables.

2.2.2 Black box testing with user involvement

For tests involving users, methodological considerations are rare in SE literature. Rather, one may find practical test reports that distinguish roughly between field and laboratory tests. In the following, only a rough description of field and laboratory tests will be given.

In field tests users are observed while using the software system at their normal working place. Apart from general usability-related aspects, field tests are particularly useful for assessing the interoperability of the software system, i.e. how the technical integration of the system works. Moreover, field tests are the only real means to elucidate problems of the organisational integration of the software system into existing procedures. Particularly in the NLP environment this problem has frequently been underestimated. A typical example of the organisational problem of implementing a translation memory is the language service of a big automobile manufacturer, where the major implementation problem is not the technical environment, but the fact that many clients still submit their orders as print-outs, that neither source texts nor target texts are properly organised and stored and, last but not least, that individual translators are not too motivated to change their working habits.

Laboratory tests are mostly performed to assess the general usability of the system. Due to the high laboratory equipment costs, laboratory tests are mostly only performed at big software houses such as IBM or Microsoft. Since laboratory tests provide testers with many technical possibilities, data collection and analysis are easier than for field tests.

Scenario tests: the term "scenario" entered software evaluation in the early 1990s. A scenario test is a test case which aims at a realistic user background for the evaluation of software as it was defined and performed. It is an instance of black box testing where the major objective is to assess the suitability of a software product for every-day routines. In short, it involves putting the system into its intended use by its envisaged type of user, performing a standardised task.

2.3 Testing Strategies/Techniques
• Black box testing should make use of randomly generated inputs (only a test range should be specified by the tester), to eliminate any guess work by the tester as to the methods of the function (see the sketch after this list)
• Data outside of the specified input range should be tested to check the robustness of the program
• Boundary cases should be tested (top and bottom of specified range) to make sure the highest and lowest allowable inputs produce proper output
• The number zero should be tested when numerical data is to be input
• Stress testing should be performed (try to overload the program with inputs to see where it reaches its maximum capacity), especially with real time systems
• Crash testing should be performed to see what it takes to bring the system down
• Test monitoring tools should be used whenever possible to track which tests have already been performed and the outputs of these tests, to avoid repetition and to aid in software maintenance
• Other functional testing techniques include: transaction testing, syntax testing, domain testing, logic testing, and state testing
• Finite state machine models can be used as a guide to design functional tests
• According to Beizer, the following is a general order by which tests should be designed:
  1. Clean tests against requirements.
  2. Additional structural tests for branch coverage, as needed.
  3. Additional tests for data-flow coverage as needed.
  4. Domain tests not covered by the above.
  5. Special techniques as appropriate: syntax, loop, state, etc.
  6. Any dirty tests not covered by the above.
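As a minimal illustration of the first two bullets above, here is a hedged sketch of random input generation inside and outside a specified range. The function under test (isValidQuantity) and its range are hypothetical, not from the original text; run with java -ea to enable assertions.

import java.util.Random;

// Sketch: random inputs inside and outside a specified range.
public class RandomInputSketch {

    // Hypothetical function under test: accepts quantities from 1 to 999.
    static boolean isValidQuantity(int qty) {
        return qty >= 1 && qty <= 999;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42); // fixed seed so failures are reproducible

        // Random inputs inside the specified range: all should be accepted.
        for (int i = 0; i < 100; i++) {
            int in = 1 + rnd.nextInt(999);          // 1..999
            assert isValidQuantity(in) : "rejected valid input " + in;
        }

        // Random inputs outside the range: all should be rejected.
        for (int i = 0; i < 100; i++) {
            int out = rnd.nextBoolean()
                    ? -rnd.nextInt(1000)            // 0 and below
                    : 1000 + rnd.nextInt(1000);     // 1000 and above
            assert !isValidQuantity(out) : "accepted invalid input " + out;
        }
        System.out.println("random-input checks passed");
    }
}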


2.4 Black box testing Methods
2.4.1 Graph-based Testing Methods
• Black-box methods based on the nature of the relationships (links) among the program objects (nodes); test cases are designed to traverse the entire graph
• Transaction flow testing (nodes represent steps in some transaction and links represent logical connections between steps that need to be validated)
• Finite state modeling (nodes represent user observable states of the software and links represent transitions between states)
• Data flow modeling (nodes are data objects and links are transformations from one data object to another)
• Timing modeling (nodes are program objects and links are sequential connections between these objects; link weights are required execution times)

2.4.2 Equivalence Partitioning

• Black-box technique that divides the input domain into classes of data from which test cases can be derived
• An ideal test case uncovers a class of errors that might require many arbitrary test cases to be executed before a general error is observed
• Equivalence class guidelines (illustrated in the sketch below):
  1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined
  2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined
  3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined
  4. If an input condition is Boolean, one valid and one invalid equivalence class is defined
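The following is a hedged sketch of guideline 1 for a hypothetical input condition "age must be 18..60" (not from the original text): one valid and two invalid classes, with one representative value standing in for each whole class.

// Sketch: equivalence partitioning for a range condition.
public class EquivalencePartitioningSketch {

    // Hypothetical function under test.
    static boolean isEligibleAge(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // One representative value per equivalence class.
        int validClass  = 35;  // valid class: 18 <= age <= 60
        int invalidLow  = 10;  // invalid class: age < 18
        int invalidHigh = 75;  // invalid class: age > 60

        assert  isEligibleAge(validClass)  : "valid class rejected";
        assert !isEligibleAge(invalidLow)  : "below-range class accepted";
        assert !isEligibleAge(invalidHigh) : "above-range class accepted";
        System.out.println("equivalence class checks passed");
    }
}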

2.4.3 Boundary Value Analysis
• Black-box technique that focuses on the boundaries of the input domain rather than its center
• BVA guidelines (illustrated in the sketch below):
  1. If an input condition specifies a range bounded by values a and b, test cases should include a and b, and values just above and just below a and b
  2. If an input condition specifies a number of values, test cases should exercise the minimum and maximum numbers, as well as values just above and just below the minimum and maximum values
  3. Apply guidelines 1 and 2 to output conditions; test cases should be designed to produce the minimum and maximum output reports
  4. If internal program data structures have boundaries (e.g. size limitations), be certain to test the boundaries
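A hedged sketch of guideline 1, reusing the same hypothetical 18..60 range from the equivalence partitioning example: test a, b, and the values immediately inside and outside each boundary.

// Sketch: boundary value analysis for a range bounded by a and b.
public class BoundaryValueSketch {

    // Same hypothetical function under test as above.
    static boolean isEligibleAge(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        int a = 18, b = 60;
        int[] shouldPass = { a, a + 1, b - 1, b };   // on and just inside
        int[] shouldFail = { a - 1, b + 1 };         // just outside

        for (int v : shouldPass) {
            assert isEligibleAge(v) : "boundary value " + v + " rejected";
        }
        for (int v : shouldFail) {
            assert !isEligibleAge(v) : "out-of-range value " + v + " accepted";
        }
        System.out.println("boundary value checks passed");
    }
}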


2.4.4 Comparison Testing
• Black-box testing for safety critical systems in which independently developed implementations of redundant systems are tested for conformance to specifications
• Often equivalence class partitioning is used to develop a common set of test cases for each implementation

2.4.5 Orthogonal Array Testing
• Black-box technique that enables the design of a reasonably small set of test cases that provide maximum test coverage (see the pairwise sketch below)
• Focus is on categories of faulty logic likely to be present in the software component (without examining the code)
• Priorities for assessing tests using an orthogonal array:
  1. Detect and isolate all single mode faults
  2. Detect all double mode faults
  3. Multimode faults
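The following hedged sketch shows the idea with an L4(2^3) orthogonal array: every pair of values of three two-level factors is covered in 4 runs instead of all 8 combinations. The factor names (browser, OS, locale) are hypothetical, not from the original text.

// Sketch: orthogonal array testing with an L4(2^3) array.
public class OrthogonalArraySketch {

    public static void main(String[] args) {
        String[] browser = { "IE", "Netscape" };
        String[] os      = { "Windows", "Linux" };
        String[] locale  = { "EN", "DE" };

        // L4(2^3): each pair of columns contains every value pair exactly once.
        int[][] l4 = {
            { 0, 0, 0 },
            { 0, 1, 1 },
            { 1, 0, 1 },
            { 1, 1, 0 },
        };

        for (int[] run : l4) {
            System.out.printf("test run: %s / %s / %s%n",
                    browser[run[0]], os[run[1]], locale[run[2]]);
            // ... execute the test case for this combination ...
        }
    }
}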

2.4.6 Specialized Testing
• Graphical user interfaces
• Client/server architectures
• Documentation and help facilities
• Real-time systems:
  1. Task testing (test each time dependent task independently)
  2. Behavioral testing (simulate system response to external events)
  3. Intertask testing (check communications errors among tasks)
  4. System testing (check interaction of integrated system software and hardware)

2.4.7 Advantages of Black Box Testing
• More effective on larger units of code than glass box testing
• Tester needs no knowledge of implementation, including specific programming languages
• Tester and programmer are independent of each other
• Tests are done from a user's point of view
• Will help to expose any ambiguities or inconsistencies in the specifications
• Test cases can be designed as soon as the specifications are complete

2.4.8 Disadvantages of Black Box Testing
• Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever
• Without clear and concise specifications, test cases are hard to design
• There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried
• May leave many program paths untested
• Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone)
• Most testing related research has been directed toward glass box testing

2.5 Black Box (Vs) White Box

An easy way to start up a debate in a software testing forum is to ask the difference between black box and white box testing. These terms are commonly used, yet everyone seems to have a different idea of what they mean.

Black box testing begins with a metaphor. Imagine you're testing an electronics system. It's housed in a black box with lights, switches, and dials on the outside. You must test it without opening it up, and you can't see beyond its surface. You have to see if it works just by flipping switches (inputs) and seeing what happens to the lights and dials (outputs). This is black box testing. Black box software testing is doing the same thing, but with software. The actual meaning of the metaphor, however, depends on how you define the boundary of the box and what kind of access the "blackness" is blocking.

An opposite test approach would be to open up the electronics system, see how the circuits are wired, apply probes internally and maybe even disassemble parts of it. By analogy, this is called white box testing.

To help understand the different ways that software testing can be divided between black box and white box techniques, consider the Five-Fold Testing System. It lays out five dimensions that can be used for examining testing:

1. People (who does the testing)
2. Coverage (what gets tested)
3. Risks (why you are testing)
4. Activities (how you are testing)
5. Evaluation (how you know you've found a bug)

Let's use this system to understand and clarify the characteristics of black box and white box testing.

People: Who does the testing? Some people know how software works (developers) and others just use it (users). Accordingly, any testing by users or other non-developers is sometimes called "black box" testing, while developer testing is called "white box" testing. The distinction here is based on what the person knows or can understand.

Coverage: What is tested? If we draw the box around the system as a whole, "black box" testing becomes another name for system testing, and testing the units inside the box becomes white box testing. This is one way to think about coverage. Another is to contrast testing that aims to cover all the requirements with testing that aims to cover all the code. These are the two most commonly used coverage criteria. Requirements-based testing could be called "black box" because it makes sure that all the customer requirements have been verified. Code-based testing is often called "white box" because it makes sure that all the code (the statements, paths, or decisions) is exercised.

Risks: Why are you testing? Sometimes testing is targeted at particular risks. Boundary testing and other attack-based techniques are targeted at common coding errors. Effective security testing also requires a detailed understanding of the code and the system architecture; thus, these techniques might be classified as "white box". Another set of risks concerns whether the software will actually provide value to users. Usability testing focuses on this risk, and could be termed "black box".

Activities: How do you test? A common distinction is made between behavioral test design, which defines tests based on functional requirements, and structural test design, which defines tests based on the code itself. These are two design approaches. Since behavioral testing is based on external functional definition, it is often called "black box", while structural testing, based on the code internals, is called "white box". Indeed, this is probably the most commonly cited definition for black box and white box testing. Another activity-based distinction contrasts dynamic test execution with formal code inspection. In this case, the metaphor maps test execution (dynamic testing) to black box testing, and code inspection (static testing) to white box testing. We could also focus on the tools used. Some tool vendors refer to code-coverage tools as white box tools, and to tools that facilitate applying inputs and capturing inputs (most notably GUI capture replay tools) as black box tools. Testing is then categorized based on the types of tools used.

Evaluation: How do you know if you've found a bug? There are certain kinds of software faults that don't always lead to obvious failures. They may be masked by fault tolerance or simply luck. Memory leaks and wild pointers are examples. Certain test techniques seek to make these kinds of problems more visible. Related techniques capture code history and stack information when faults occur, helping with diagnosis. Assertions are another technique for helping to make problems more visible. All of these techniques could be considered white box test techniques, since they use code instrumentation to make the internal workings of the software more visible. These contrast with black box techniques that simply look at the official outputs of a program.

Black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. But because it is concerned only with the specification, it cannot guarantee that all parts of the implementation have been tested. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty. But because it is concerned only with the software product, it cannot guarantee that the complete specification has been implemented. In order to fully test a software product, both black and white box testing are required.

White box testing is much more expensive than black box testing. It requires the source code to be produced before the tests can be planned, and is much more laborious in the determination of suitable input data and the determination of whether the software is or is not correct. The advice given is to start test planning with a black box test approach as soon as the specification is available. White box planning should commence as soon as all black box tests have been successfully passed, with the production of flowgraphs and determination of paths. The paths should then be checked against the black box test plan and any additional required test runs determined and applied. The consequences of test failure at this stage may be very expensive: a failure of a white box test may result in a change which requires all black box testing to be repeated and the re-determination of the white box paths.

To conclude, apart from the above described analytical methods of both glass and black box testing, there are further constructive means to guarantee high quality software end products. Among the most important constructive means are the usage of object-oriented programming tools, the integration of CASE tools, rapid prototyping, and, last but not least, the involvement of users in both software development and testing procedures.

Summary: Black box testing can sometimes describe user-based testing (people); system or requirements-based testing (coverage); usability testing (risk); or behavioral testing or capture replay automation (activities). White box testing can sometimes describe developer-based testing (people); unit or code-coverage testing (coverage); boundary or security testing (risks); structural testing, inspection or code-coverage automation (activities); or testing based on probes, assertions, and logs (evaluation).

2.6 White Box Testing

White box testing covers software testing approaches that examine the program structure and derive test data from the program logic. Structural testing is sometimes referred to as clear-box testing, since a "white box" is still opaque and does not really permit visibility into the code.

Synonyms for white box testing:

• Glass Box testing
• Structural testing
• Clear Box testing
• Open Box testing

Types of White Box testing

A typical rollout of a product is shown in figure 1 below.

The purpose of white box testing:

• Initiate a strategic initiative to build quality throughout the life cycle of a software product or service
• Provide a complementary function to black box testing
• Perform complete coverage at the component level
• Improve quality by optimizing performance

Practices

This section outlines some of the general practices comprising the white-box testing process. In general, white-box testing practices have the following considerations:

1. The allocation of resources to perform class and method analysis and to document and review the same
2. Developing a test harness made up of stubs, drivers and test object libraries
3. Development and use of standard procedures, naming conventions and libraries
4. Establishment and maintenance of regression test suites and procedures
5. Allocation of resources to design, document and manage a test history library
6. The means to develop or acquire tool support for automation of capture/replay/compare, test suite execution, results verification and documentation capabilities

1 Code Coverage Analysis
1.1 Basis Path Testing

A testing mechanism proposed by McCabe whose aim is to derive a logical complexity measure of a procedural design and use this as a guide for defining a basis set of execution paths. Test cases that exercise the basis set will execute every statement at least once.

1.1.1 Flow Graph Notation
A notation for representing control flow, similar to flow charts and UML activity diagrams.

1.1.2 Cyclomatic Complexity

The cyclomatic complexity gives a quantitative measure of the logical complexity. This value gives the number of independent paths in the basis set, and an upper bound for the number of tests required to ensure that each statement is executed at least once. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition (i.e., a new edge). Cyclomatic complexity thus provides an upper bound for the number of tests required to guarantee coverage of all program statements.
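As a hedged worked example (the classify method below is hypothetical, not from the original text): the method contains two binary decisions, so its cyclomatic complexity is V(G) = number of decisions + 1 = 3, which equals E - N + 2 on its flow graph. Three test cases therefore suffice to cover a basis set of independent paths.

// Sketch: a method with cyclomatic complexity V(G) = 3.
public class CyclomaticSketch {

    static String classify(int amount) {
        String label = "none";
        if (amount > 0) {            // decision 1
            label = "credit";
        }
        if (amount > 1000) {         // decision 2
            label = "large credit";
        }
        return label;
    }

    public static void main(String[] args) {
        // One test per independent path in the basis set:
        assert classify(-5).equals("none");           // both decisions false
        assert classify(500).equals("credit");        // first true, second false
        assert classify(2000).equals("large credit"); // both true
        System.out.println("basis path checks passed");
    }
}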

1.2 Control Structure testing
1.2.1 Condition Testing

Condition testing aims to exercise all logical conditions in a program module. Conditions may be:

• Relational expression: (E1 op E2), where E1 and E2 are arithmetic expressions
• Simple condition: Boolean variable or relational expression, possibly preceded by a NOT operator
• Compound condition: composed of two or more simple conditions, Boolean operators and parentheses
• Boolean expression: condition without relational expressions

1.2.2 Data Flow Testing

Selects test paths according to the location of definitions and uses of variables.

1.2.3 Loop Testing

Loops are fundamental to many algorithms. Loops can be defined as simple, concatenated, nested, and unstructured; examples are shown in the figure below, and a loop-testing sketch follows it.

[Figure: examples of simple, concatenated, nested, and unstructured loops.]
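A hedged sketch of loop testing for a simple loop (the sumFirst method is a hypothetical example, not from the original text): exercise 0, 1, 2, a typical number, max-1, max and, where the interface allows it, max+1 iterations.

// Sketch: loop-testing cases for a simple bounded loop.
public class LoopTestingSketch {

    // Hypothetical function under test: sums the first n elements.
    static int sumFirst(int[] data, int n) {
        int total = 0;
        for (int i = 0; i < n && i < data.length; i++) {
            total += data[i];
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = { 1, 2, 3, 4, 5 };             // max = 5 iterations

        assert sumFirst(data, 0) == 0;              // zero iterations
        assert sumFirst(data, 1) == 1;              // one iteration
        assert sumFirst(data, 2) == 3;              // two iterations
        assert sumFirst(data, 3) == 6;              // typical count
        assert sumFirst(data, 4) == 10;             // max - 1
        assert sumFirst(data, 5) == 15;             // max
        assert sumFirst(data, 6) == 15;             // max + 1: must not overrun
        System.out.println("loop tests passed");
    }
}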


Note that unstructured loops are not to be tested; rather, they are redesigned.

2 Design by Contract (DbC)

DbC is a formal way of using comments to incorporate specification information into the code itself. Basically, the code specification is expressed unambiguously using a formal language that describes the code's implicit contracts. These contracts specify such requirements as:

• Conditions that the client must meet before a method is invoked
• Conditions that a method must meet after it executes
• Assertions that a method must satisfy at specific points of its execution

Tools that check DbC contracts at runtime, such as JContract [http://www.parasoft.com/products/jtract/index.htm], are used to perform this function. A sketch of the idea using plain assertions follows the Profiling section below.

3 Profiling

Profiling provides a framework for analyzing Java code performance for speed and heap memory use. It identifies routines that are consuming the majority of the CPU time so that problems may be tracked down to improve performance. These include the use of the Microsoft Java Profiler API and Sun's profiling tools that are bundled with the JDK. Third party tools such as JaViz [http://www.research.ibm.com/journal/sj/391/kazi.html] may also be used to perform this function.
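Returning to Design by Contract: here is a minimal sketch expressing pre- and postconditions with plain Java assert statements, rather than with a runtime contract checker such as JContract. The Account class and its invariant are hypothetical illustrations, not taken from the original text.

// Sketch: contracts expressed as assertions (enable with java -ea).
public class Account {
    private int balance;            // class invariant: balance >= 0

    public void withdraw(int amount) {
        // Precondition: conditions the client must meet before the call.
        assert amount > 0 && amount <= balance : "precondition violated";

        int oldBalance = balance;
        balance -= amount;

        // Postcondition: what the method guarantees after it executes.
        assert balance == oldBalance - amount : "postcondition violated";
        // The class invariant must still hold.
        assert balance >= 0 : "invariant violated";
    }

    public void deposit(int amount) {
        assert amount > 0 : "precondition violated";
        balance += amount;
        assert balance >= 0 : "invariant violated";
    }

    public int getBalance() { return balance; }
}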

4 Error Handling

Exception and error handling is checked thoroughly by simulating partial and complete fail-over and by operating on error-causing test vectors. Proper error recovery, notification and logging are checked against references to validate the program design.

5 Transactions
Systems that employ transactions, local or distributed, may be validated to ensure that the ACID properties (Atomicity, Consistency, Isolation, Durability) hold. Each of the individual properties is tested against a reference data set. Transactions are checked thoroughly for partial/complete commits and rollbacks, encompassing databases and other XA compliant transaction processors.

Advantages of White Box Testing
• Forces the test developer to reason carefully about the implementation.
• Approximates the partitioning done by execution equivalence.
• Reveals errors in "hidden" code.
• Beneficent side-effects.

Disadvantages of White Box Testing
• Expensive.
• Cases omitted in the code could be missed out.
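To make the atomicity check above concrete, the sketch below is our own example (it assumes an in-memory H2 database on the classpath; any JDBC database would do). It forces a failure in mid-transaction and then verifies that the rollback leaves no partial commit behind:

    import java.sql.*;

    public class RollbackCheck {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection("jdbc:h2:mem:test")) {
                con.createStatement().execute(
                    "CREATE TABLE transfer (id INT PRIMARY KEY, amount INT)");
                con.setAutoCommit(false);
                try {
                    con.createStatement().execute("INSERT INTO transfer VALUES (1, 100)");
                    throw new SQLException("simulated failure before commit");
                } catch (SQLException e) {
                    con.rollback();   // atomicity: the partial insert must vanish
                }
                ResultSet rs = con.createStatement()
                                  .executeQuery("SELECT COUNT(*) FROM transfer");
                rs.next();
                System.out.println(rs.getInt(1) == 0
                    ? "PASS: rollback left no partial data"
                    : "FAIL: partial commit detected");
            }
        }
    }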

3 GUI Testing
What is GUI Testing? GUI is the abbreviation for Graphical User Interface. It is absolutely essential that any application be user-friendly: the end user should be comfortable while using all the components on screen, and the components should perform their functionality with utmost clarity. Hence it becomes very essential to test the GUI components of any application. GUI Testing can refer to just ensuring that the look-and-feel of the application is acceptable to the user, or it can refer to testing the functionality of each and every component involved. The following is a set of guidelines to ensure effective GUI Testing, and can be used even as a checklist while testing a product / application.

3.1 Section 1 - Windows Compliance Testing
3.1.1 Application
• Start the Application by Double Clicking on its ICON. The Loading message should show the application name, version number, and a bigger pictorial representation of the icon. No Login is necessary.
• The main window of the application should have the same caption as the caption of the icon in Program Manager.
• Closing the application should result in an "Are you Sure" message box.
• Attempt to start the application twice. This should not be allowed - you should be returned to the main window. Try to start the application twice as it is loading.
• On each window, if the application is busy, then the hour glass should be displayed. If there is no hour glass, then some enquiry-in-progress message should be displayed.
• All screens should have a Help button, and the F1 key should work the same.
• The window caption for every application should have the name of the application and the window name - especially the error messages. These should be checked for spelling, English and clarity, especially on the top of the screen. Check that the title of the window makes sense.
• If the screen has a Control menu, then use all un-grayed options.
• Check all text on the window for Spelling/Tense and Grammar.
• Use TAB to move focus around the Window, and SHIFT+TAB to move focus backwards. Tab order should be left to right, and Up to Down within a group box on the screen. All controls should get focus - indicated by a dotted box or cursor.
• Tabbing to an entry field with text in it should highlight the entire text in the field.
• The text in the Micro Help line should change as focus moves - check it for spelling, clarity and non-updateable fields.
• If a field is disabled (grayed) then it should not get focus. It should not be possible to select it with either the mouse or by using TAB. Try this for every grayed control.
• If the Window has a Minimize Button, click it. The Window should return to an icon on the bottom of the screen, and this icon should correspond to the Original Icon under Program Manager. Double Click the Icon to return the Window to its original size.

3.1.2 Text Boxes
• Move the Mouse Cursor over all Enterable Text Boxes. The cursor should change from an arrow to an Insert Bar. If it doesn't, then the text in the box should be gray or non-updateable. Refer to the previous section.
• Enter text into the Box. Try to overflow the text by typing too many characters - this should be stopped. Check the field width with capital Ws.
• Enter invalid characters - letters in amount fields, and strange characters like + - * etc. in all fields.
• SHIFT and Arrow should select characters. Selection should also be possible with the mouse. Double Click should select all text in the box.
• All text should be left justified, with the label followed by a colon tight to it.
• Never-updateable fields should be displayed with black text on a gray background with a black label. In a field that may or may not be updateable, the label text and contents change from black to gray depending on the current status.

3.1.3 Option (Radio Buttons)
• Left and Right arrows should move the 'ON' selection. So should Up and Down. Selection should also be possible with the mouse, by clicking.

3.1.4 Check Boxes
• Clicking with the mouse on the box, or on the text, should SET/UNSET the box. SPACE should do the same.

3.1.5 Command Buttons
• If a Command Button leads to another Screen, and the user can enter or change details on the other screen, then the text on the button should be followed by three dots.
• All Buttons except for OK and Cancel should have a letter access to them. This is indicated by a letter underlined in the button text. Pressing ALT+Letter should activate the button. Make sure there is no duplication.
• Click each button once with the mouse - this should activate it. Tab to each button and press SPACE - this should activate it. Tab to each button and press RETURN - this should activate it. The above are VERY IMPORTANT, and should be done for EVERY command button.
• One button on the screen should be the default (indicated by a thick black border). Pressing Return in ANY non-command-button control should activate it.
• If there is a Cancel Button on the screen, then pressing <Esc> should activate it.
• If pressing the Command Button results in uncorrectable data (e.g. closing an action step), there should be a message phrased positively with Yes/No answers, where Yes results in the completion of the action.
• In general, everything should be possible using both the mouse and the keyboard; double-clicking should not be essential.

3.1.6 Drop Down List Boxes
• Pressing the Arrow should give the list of options. This list may be scrollable. You should not be able to type text in the box.
• Pressing a letter should bring you to the first item in the list that starts with that letter. Pressing 'Ctrl-F4' should open/drop down the list box.
• List boxes are always white background with black text, whether they are disabled or not.

• Clicking the Arrow should allow the user to choose from the list.
• Items should be in alphabetical order, with the exception of blank/none, which is at the top or the bottom of the list box.
• A drop-down with an item selected should display the list with the selected item on top.
• There shouldn't be a blank line at the bottom.

3.1.7 Combo Boxes
• Should allow text to be entered. Pressing a letter should take you to the first item in the list starting with that letter.

3.1.8 List Boxes
• Should allow a single selection to be chosen, by clicking with the mouse, or using the Up and Down Arrow keys.
• Pressing a letter should take you to the first item in the list starting with that letter.
• If there is a 'View' or 'Open' button beside the list box, then double clicking on a line in the List Box should act in the same way as selecting an item in the list box and then clicking the command button.
• Force the scroll bar to appear, and make sure all the data can be seen in the box.
• Spacing should be compatible with the existing windows spacing (Word etc.). Make sure only one space appears.

3.2 Section 2 - Screen Validation Checklist
3.2.1 Aesthetic Conditions:
1. Is the general screen background the correct color?
2. Are the field prompts the correct color?
3. Are the field backgrounds the correct color?
4. In read-only mode, are the field prompts the correct color?
5. In read-only mode, are the field backgrounds the correct color?
6. Are all the screen prompts specified in the correct screen font?
7. Is the text in all fields specified in the correct screen font?
8. Are all the field prompts aligned perfectly on the screen?
9. Are all the field edit boxes aligned perfectly on the screen?
10. Are all group boxes aligned correctly on the screen?
11. Should the screen be resizable?
12. Should the screen be allowed to minimize?
13. Are all the field prompts spelt correctly?
14. Are all character or alphanumeric fields left justified? This is the default unless otherwise specified.
15. Are all numeric fields right justified? This is the default unless otherwise specified.
16. Is all the micro-help text spelt correctly on this screen?
17. Is all the error message text spelt correctly on this screen?
18. Is all user input captured in UPPER case or lowercase consistently?
19. Where the database requires a value (other than null), is it defaulted into the field? The user must either enter an alternative valid value or leave the default value intact.
20. Do all windows have a consistent look and feel?
21. Do all dialog boxes have a consistent look and feel?

3.2.2 Validation Conditions:
1. Does a failure of validation on every field cause a sensible user error message?
2. Is the user required to fix entries which have failed validation tests?
3. Have any fields got multiple validation rules, and if so, are all rules being applied?
4. If the user enters an invalid value and clicks on the OK button (i.e. does not TAB off the field), is the invalid entry identified and highlighted correctly with an error message?
5. Is validation consistently applied at screen level unless specifically required at field level?
6. For all numeric fields, check whether negative numbers can and should be able to be entered.
7. For all numeric fields, check the minimum and maximum values and also some mid-range values allowable.
8. For all character/alphanumeric fields, check the field to ensure that there is a character limit specified and that this limit is exactly correct for the specified database size.
9. Do all mandatory fields require user input?
10. If any of the database columns don't allow null values, then the corresponding screen fields must be mandatory. (If any field which initially was mandatory has become optional, then check whether null values are allowed in this field.)

3.2.3 Navigation Conditions:
1. Can the screen be accessed correctly from the menu?
2. Can the screen be accessed correctly from the toolbar?
3. Can the screen be accessed correctly by double clicking on a list control on the previous screen?
4. Can all screens accessible via buttons on this screen be accessed correctly?
5. Can all screens accessible by double clicking on a list control be accessed correctly?
6. Is the screen modal? (i.e.) Is the user prevented from accessing other functions when this screen is active, and is this correct?
7. Can a number of instances of this screen be opened at the same time, and is this correct?

3.2.4 Usability Conditions:
1. Are all the dropdowns on this screen sorted correctly? Alphabetic sorting is the default unless otherwise specified.
2. Is all date entry required in the correct format?
3. Have all pushbuttons on the screen been given appropriate Shortcut keys?
4. Do the Shortcut keys work correctly?
5. Have the menu options that apply to your screen got fast keys associated, and should they have?
6. Does the Tab Order specified on the screen go in sequence from Top Left to Bottom Right? This is the default unless otherwise specified.
7. Are all read-only fields avoided in the TAB sequence?
8. Are all disabled fields avoided in the TAB sequence?
9. Can the cursor be placed in the microhelp text box by clicking on the text box with the mouse?
10. Can the cursor be placed in read-only fields by clicking in the field with the mouse?
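Several of the numeric validation checks above can be automated directly against the screen's validation rule. The sketch below is our own illustration (the quantity rule of 1..999 is hypothetical); run with java -ea to enable the assertions:

    public class NumericFieldChecks {
        // Stand-in for the screen's validation rule: quantity must be 1..999
        // (a hypothetical rule used only for this illustration).
        static boolean isValidQuantity(String input) {
            try {
                int v = Integer.parseInt(input.trim());
                return v >= 1 && v <= 999;
            } catch (NumberFormatException e) {
                return false;   // non-numeric input fails validation; it must not crash
            }
        }

        public static void main(String[] args) {
            // minimum, maximum and mid-range values
            assert isValidQuantity("1") && isValidQuantity("999") && isValidQuantity("500");
            // negative numbers are rejected where the field should not accept them
            assert !isValidQuantity("-5");
            // a letter in a number field must fail validation, not crash the screen
            assert !isValidQuantity("abc");
            System.out.println("numeric validation checks passed");
        }
    }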

3.2.5 Data Integrity Conditions:
1. Is the data saved when the window is closed by double clicking on the close box?
2. Check the maximum field lengths to ensure that there are no truncated characters.
3. Where the database requires a value (other than null), it should be defaulted into the field. The user must either enter an alternative valid value or leave the default value intact.
4. Check maximum and minimum field values for numeric fields.
5. If numeric fields accept negative values, can these be stored correctly on the database, and does it make sense for the field to accept negative numbers?
6. If a particular set of data is saved to the database, check that each value gets saved fully to the database - i.e. beware of truncation (of strings) and rounding of numeric values.

3.2.6 Modes (Editable Read-only) Conditions:
1. Are the screen and field colors adjusted correctly for read-only mode?
2. Should a read-only mode be provided for this screen?
3. Are all fields and controls disabled in read-only mode?
4. Can the screen be accessed from the previous screen/menu/toolbar in read-only mode?
5. Can all screens available from this screen be accessed in read-only mode?
6. Check that no validation is performed in read-only mode.

3.2.7 General Conditions:
1. Assure the existence of the "Help" menu.
2. Assure that the proper commands and options are in each menu.
3. Assure that all buttons on all tool bars have corresponding key commands.
4. Assure that each menu command has an alternative (hot-key) key sequence which will invoke it where appropriate.
5. In drop down list boxes, ensure that the names are not abbreviations / cut short.
6. In drop down list boxes, assure that the list and each entry in the list can be accessed via appropriate key / hot key combinations.
7. Ensure that duplicate hot keys do not exist on each screen.
8. If a set of radio buttons represents a fixed set of values such as A, B and C, then what happens if a blank value is retrieved from the database? (In some situations rows can be created on the database by other functions which are not screen based, and thus the required initial values can be incorrect.)
9. Do all the field edit boxes indicate the number of characters they will hold by their length? e.g. a 30 character field should be a lot longer.

10. Ensure the proper usage of the escape key (which is to undo any changes that have been made) and that it generates a caution message "Changes will be lost - Continue yes/no".
11. Assure that the cancel button functions the same as the escape key.
12. Assure that the Cancel button operates as a Close button when changes have been made that cannot be undone.
13. Assure that only command buttons which are used by a particular window, or in a particular dialog box, are present - i.e. make sure they don't act on the screen behind the current screen.
14. When a command button is used sometimes and not at other times, assure that it is grayed out when it should not be used.
15. Assure that OK and Cancel buttons are grouped separately from other command buttons.
16. Assure that command button names are not abbreviations.
17. Assure that all field labels/names are not technical labels, but rather are names meaningful to system users.
18. Assure that command buttons are all of similar size and shape, and the same font & font size.
19. Assure that each command button can be accessed via a hot key combination.
20. Assure that command buttons in the same window/dialog box do not have duplicate hot keys.
21. Assure that each window/dialog box has a clearly marked default value (command button, or other object) which is invoked when the Enter key is pressed - and NOT the Cancel or Close button.
22. Assure that focus is set to an object/button which makes sense according to the function of the window/dialog box.
23. Assure that all option button (and radio button) names are not abbreviations.
24. Assure that option button names are not technical labels, but rather are names meaningful to system users.
25. If hot keys are used to access option buttons, assure that duplicate hot keys do not exist in the same window/dialog box.
26. Assure that option box names are not abbreviations.
27. Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas ("Group Box").
28. Assure that the Tab key sequence which traverses the screens does so in a logical way.
29. Assure consistency of mouse actions across windows.
30. Assure that the color red is not used to highlight active objects (many individuals are red-green color blind).
31. Assure that the user will have control of the desktop with respect to general color and highlighting (the application should not dictate the desktop background characteristics).
32. Assure that the screen/window does not have a cluttered appearance.
33. Ctrl + F6 opens the next tab within a tabbed window.
34. Shift + Ctrl + F6 opens the previous tab within a tabbed window.
35. Tabbing will open the next tab within a tabbed window if on the last field of the current tab.
36. Tabbing will go onto the 'Continue' button if on the last field of the last tab within a tabbed window.
37. Tabbing will go onto the next editable field in the window.
38. Banner style & size & display exactly the same as existing windows.

39. Errors on continue will cause the user to be returned to the tab, and the focus should be on the field causing the error (i.e. the tab is opened, highlighting the field with the error on it).
40. Pressing continue while on the first tab of a tabbed window (assuming all fields are filled correctly) will not open all the tabs.
41. On open of a tab, focus will be on the first editable field.
42. All fonts to be the same.
43. Alt+F4 will close the tabbed window and return you to the main screen or previous screen (as appropriate), generating a "changes will be lost" message if necessary.
44. Microhelp text for every enabled field & button.
45. Ensure all fields are disabled in read-only mode.
46. Progress messages on load of tabbed screens.
47. Return operates continue.
48. If 8 or fewer options are in a list box, display all options on open of the list box - there should be no need to scroll.
49. If retrieve on load of a tabbed window fails, the window should not open.

3.3 Specific Field Tests
3.3.1 Date Field Checks
1. Assure that leap years are validated correctly & do not cause errors/miscalculations.
2. Assure that month codes 00 and 13 are validated correctly & do not cause errors/miscalculations.
3. Assure that day values 00 and 32 are validated correctly & do not cause errors/miscalculations.
4. Assure that Feb. 28, 29, 30 are validated correctly & do not cause errors/miscalculations.
5. Assure that Feb. 30 is reported as an error.
6. Assure that century change is validated correctly & does not cause errors/miscalculations.
7. Assure that out of cycle dates are validated correctly & do not cause errors/miscalculations.

3.3.2 Numeric Fields
1. Assure that lowest and highest values are handled correctly.
2. Assure that invalid values are logged and reported.
3. Assure that valid values are handled by the correct procedure.
4. Assure that numeric fields with a blank in position 1 are processed or reported as an error.
5. Assure that fields with a blank in the last position are processed or reported as an error.
6. Assure that both + and - values are correctly processed.
7. Assure that division by zero does not occur.
8. Include value zero in all calculations.
9. Include at least one in-range value.
10. Include maximum and minimum range values.
11. Include out of range values above the maximum and below the minimum.
12. Assure that upper and lower values in ranges are handled correctly.
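The date-field checks above map directly onto automated tests. The following self-contained sketch is our own example using java.time with strict resolution, which rejects month 13, day 32 and Feb. 30 outright (run with java -ea):

    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;
    import java.time.format.ResolverStyle;

    public class DateFieldChecks {
        static final DateTimeFormatter STRICT = DateTimeFormatter
            .ofPattern("uuuu-MM-dd").withResolverStyle(ResolverStyle.STRICT);

        static boolean isValid(String s) {
            try { LocalDate.parse(s, STRICT); return true; }
            catch (Exception e) { return false; }
        }

        public static void main(String[] args) {
            assert  isValid("2004-02-29");  // leap year: Feb. 29 accepted
            assert !isValid("2003-02-29");  // non-leap year: Feb. 29 rejected
            assert !isValid("2004-02-30");  // Feb. 30 reported as an error
            assert !isValid("2004-13-01");  // month code 13 rejected
            assert !isValid("2004-00-15");  // month code 00 rejected
            assert !isValid("2004-01-32");  // day value 32 rejected
            System.out.println("date checks passed");
        }
    }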

3.3.3 Alpha Field Checks
1. Use blank and non-blank data.
2. Include lowest and highest values.
3. Include invalid characters & symbols.
4. Include valid characters.
5. Include data items with first position blank.
6. Include data items with last position blank.

3.4 Validation Testing - Standard Actions
3.4.1 Examples of Standard Actions - Substitute your specific commands
Add View Change Delete Continue - (i.e. continue saving changes or additions)
Add View Change Delete Cancel - (i.e. abandon changes or additions)
Fill each field - Valid data
Fill each field - Invalid data
Different Check Box / Radio Box combinations
Scroll Lists / Drop Down List Boxes
Help
Fill Lists and Scroll
Tab
Tab Sequence
Shift Tab

3.4.2 Shortcut keys / Hot Keys
Note: The following keys are used in some windows applications, and are included as a guide. For each key, the action is listed for the key alone and with the Shift, CTRL and ALT modifiers where one exists.

F1 - Help; Shift: Enter Help Mode.

F2, F3, F5, F6, F7, F9, F11 and F12 - no standard assignment (N/A).
F4 - CTRL: Close Document / Child window; ALT: Close Application.
F8 - Toggle extend mode, if supported; Shift: Toggle Add mode, if supported.
F10 - Toggle menu bar activation.
Tab - Move to next active/editable field; Shift: Move to previous active/editable field; CTRL: Move to next open Document or Child window (adding SHIFT reverses the order of movement); ALT: Switch to previously used application (holding down the ALT key displays all open applications).
Alt - Toggle menu bar activation; puts focus on the first menu command (e.g. 'File').

3.4.3 Control Shortcut Keys

CTRL + Z - Undo
CTRL + X - Cut
CTRL + C - Copy
CTRL + V - Paste
CTRL + N - New
CTRL + O - Open
CTRL + P - Print
CTRL + S - Save
CTRL + B - Bold*
CTRL + I - Italic*
CTRL + U - Underline*

* These shortcuts are suggested for text formatting applications, in the context for which they make sense. Applications may use other modifiers for these operations.
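Checks such as "duplicate hot keys do not exist" (General Conditions above) lend themselves to automation. The sketch below is our own illustration for Java Swing screens; it walks a container and reports buttons whose mnemonics collide:

    import javax.swing.*;
    import java.awt.*;
    import java.util.*;

    public class MnemonicAudit {
        // collects the mnemonic of every button below the given container
        static void audit(Container root, Map<Integer, java.util.List<String>> seen) {
            for (Component c : root.getComponents()) {
                if (c instanceof AbstractButton) {
                    AbstractButton b = (AbstractButton) c;
                    int m = b.getMnemonic();           // 0 means no mnemonic set
                    if (m != 0) seen.computeIfAbsent(m, k -> new ArrayList<>()).add(b.getText());
                }
                if (c instanceof Container) audit((Container) c, seen);
            }
        }

        public static void main(String[] args) {
            JPanel screen = new JPanel();
            JButton ok = new JButton("OK");     ok.setMnemonic('O');
            JButton open = new JButton("Open"); open.setMnemonic('O');  // duplicate!
            screen.add(ok); screen.add(open);
            Map<Integer, java.util.List<String>> seen = new HashMap<>();
            audit(screen, seen);
            seen.forEach((key, names) -> {
                if (names.size() > 1) System.out.println("Duplicate hot key: " + names);
            });
        }
    }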

4 Regression Testing
4.1 What is regression Testing
• Regression testing is the process of testing changes to computer programs to make sure that the older programming still works with the new changes.
• Regression testing is a normal part of the program development process. Test department coders develop code test scenarios and exercises that will test new units of code after they have been written. Before a new version of a software product is released, the old test cases are run against the new version to make sure that all the old capabilities still work. The reason they might not work is that changing or adding new code to a program can easily introduce errors into code that is not intended to be changed.
• It is the selective retesting of a software system that has been modified, to ensure that any bugs have been fixed, that no other previously working functions have failed as a result of the reparations, and that newly added features have not created problems with previous versions of the software. It is also referred to as verification testing.
• Regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code to a program that may have inadvertently introduced errors.
• It is a quality control measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity.
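A regression suite is, at its simplest, the accumulated old test cases kept runnable forever. A minimal self-contained sketch of our own (the shipping rule is hypothetical; run with java -ea):

    public class PriceRegressionTest {
        // Existing behaviour, pinned down by the old tests below.
        static int shippingCost(int weightKg) {
            if (weightKg <= 0) throw new IllegalArgumentException("weight must be positive");
            return weightKg <= 5 ? 10 : 20;
        }

        public static void main(String[] args) {
            // Old test cases, re-run against every new build.
            assert shippingCost(1) == 10;
            assert shippingCost(5) == 10;   // boundary pinned by an earlier bug fix
            assert shippingCost(6) == 20;
            boolean threw = false;
            try { shippingCost(0); } catch (IllegalArgumentException e) { threw = true; }
            assert threw : "invalid weight must still be rejected";
            System.out.println("regression suite passed");
        }
    }

If a later change silently alters any of these outcomes, the suite fails and the unintended side effect is caught before release.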

4.2 Test Execution
Test Execution is the heart of the testing process. Each time your application changes, you will want to execute the relevant parts of your test plan in order to locate defects and assess quality.

4.2.1 Create Test Cycles
During this stage you decide the subset of tests from your test database you want to execute. Usually you do not run all the tests at once: at different stages of the quality assurance process, you need to execute different tests in order to address specific goals. A related group of tests is called a test cycle, and it can include both manual and automated tests.
Example: You can create a cycle containing basic tests that run on each build of the application throughout development. You can run the cycle each time a new build is ready, to determine the application's stability before beginning more rigorous testing.
Example: You can create another set of tests for a particular module in your application. This test cycle includes tests that check that module in depth.
To decide which test cycles to build, refer to the testing goals you defined at the beginning of the process. Also consider issues such as the current state of the application and whether new functions have been added or modified. Following are examples of some general categories of test cycles to consider:
• sanity cycle - checks the entire system at a basic level (breadth, rather than depth) to see that it is functional and stable. This cycle should include basic-level tests containing mostly positive checks.
• normal cycle - tests the system a little more in depth than the sanity cycle. This cycle can group medium-level tests, containing both positive and negative checks.
• advanced cycle - tests both breadth and depth. This cycle can be run when more time is available for testing. The tests in the cycle cover the entire application (breadth), and also test advanced options in the application (depth).
• regression cycle - tests maintenance builds. The goal of this type of cycle is to verify that a change to one part of the software did not break the rest of the application. A regression cycle includes sanity-level tests for testing the entire software, as well as in-depth tests for the specific area of the application that was modified.

4.2.2 Run Test Cycles (Automated & Manual Tests)
Once you have created cycles that cover your testing objectives, you begin executing the tests in the cycle. A test cycle is complete only when all tests - automated and manual - have been run.
• With Manual Test Execution you follow the instructions in the test steps of each test. You use the application, enter input, compare the application output with the expected output, and log the results. For each test step you assign either pass or fail status.
• During Automated Test Execution you create a batch of tests and launch the entire batch at once. Testing Tools execute the automated tests for you, running them one at a time, and then import the results, providing outcome summaries for each test.

4.2.3 Analyze Test Results
After every test run, analyze and validate the test results: identify all the failed steps in the tests and determine whether a bug has been detected or whether the expected result needs to be updated.

4.3 Change Request
4.3.1 Initiating a Change Request
A user or developer wants to suggest a modification that would improve an existing application, notices a problem with an application, or wants to recommend an enhancement. Any major or minor request is considered a problem with an application and will be entered as a change request.

4.3.2 Type of Change Request
• Bug: the application works incorrectly or provides incorrect information (for example, a letter is allowed to be entered in a number field).
• Change: a modification of the existing application (for example, sorting the files alphabetically by the second field rather than numerically by the first field makes them easier to find).
• Enhancement: new functionality or an item added to the application (for example, a new report, a new field, or a new button).

4.3.3 Priority for the request
• Low: the application works, but this would make the function easier or more user friendly.
• High: the application works, but this is necessary to perform a job. This also applies to any Section 508 infraction.
• Critical: the application does not work; job functions are impaired and there is no work around.

4.4 Bug Tracking
Locating and repairing software bugs is an essential part of software development. Bugs can be detected and reported by engineers, testers, and end-users in all phases of the testing process. Information about bugs must be detailed and organized in order to schedule bug fixes and determine software release dates. Bug Tracking involves two main stages: reporting and tracking.

4.4.1 Report Bugs
Once you execute the manual and automated tests in a cycle, you report the bugs (or defects) that you detected. The bugs are stored in a database so that you can manage them and analyze the status of your application. When you report a bug, you record all the information necessary to reproduce and fix it. You also make sure that the QA and development personnel involved in fixing the bug are notified.

4.4.2 Track and Analyze Bugs
The lifecycle of a bug begins when it is reported and ends when it is fixed, verified, and closed.
• First you report New bugs to the database, and provide all necessary information to reproduce, fix, and follow up the bug.
• The Quality Assurance manager or Project manager periodically reviews all New bugs and decides which should be fixed. These bugs are given the status Open and are assigned to a member of the development team.
• Software developers fix the Open bugs and assign them the status Fixed.
• QA personnel test a new build of the application. If a bug does not reoccur, it is Closed. If a bug is detected again, it is reopened.
Communication is an essential part of bug tracking: all members of the development and quality assurance team must be well informed in order to ensure that bug information is up to date and that the most important problems are addressed. The number of open or fixed bugs is a good indicator of the quality status of your application. You can use data analysis tools such as reports and graphs to interpret bug data.
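Where a test automation framework is available, the cycles described in section 4.2.1 can be encoded directly. A hedged sketch using JUnit 5 tags (it assumes the junit-jupiter library on the classpath; the test names are our own): each test advertises the cycles it belongs to, and the build tool selects a cycle with a tag filter such as --include-tag=sanity.

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    class LoginCycles {
        @Test @Tag("sanity") @Tag("regression")
        void loginScreenOpens() {
            assertTrue(true);    // placeholder for a basic, positive check
        }

        @Test @Tag("advanced")
        void lockoutAfterThreeBadPasswords() {
            assertEquals(3, 3);  // placeholder for an in-depth, negative check
        }
    }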

4.5 Traceability Matrix
A traceability matrix is created by associating requirements with the products that satisfy them. Tests are associated with the requirements on which they are based and with the product tested to meet the requirement. There can be more things included in a traceability matrix than shown here. Traceability requires unique identifiers for each requirement and product; numbers for products are established in a configuration management (CM) plan.

SAMPLE TRACEABILITY MATRIX
A traceability matrix is a report from the requirements database or repository. The examples show traceability between user and system requirements: user requirement identifiers begin with "U" and system requirements with "S." Tracing S12 to its source makes it clear this requirement is erroneous: it must be eliminated, rewritten, or the traceability corrected. Traceability ensures completeness: all lower level requirements derive from higher level requirements, and all higher level requirements are allocated to lower level requirements. Traceability is also used in managing change and provides the basis for test planning.
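A traceability matrix need not live in a heavyweight tool. The sketch below is our own illustration, with hypothetical requirement and test IDs in the "U"/"S" naming style used above; it keeps the requirement-to-test mapping in code and flags uncovered requirements:

    import java.util.*;

    public class Traceability {
        public static void main(String[] args) {
            Map<String, List<String>> matrix = new LinkedHashMap<>();
            matrix.put("U1.1", List.of("TC-01", "TC-02"));  // user requirement -> tests
            matrix.put("S12",  List.of());                  // no covering test: flagged below
            matrix.forEach((req, tests) -> System.out.println(
                req + " -> " + (tests.isEmpty() ? "NOT COVERED" : tests)));
        }
    }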

In addition to traceability matrices, other reports are necessary to manage requirements. What goes into each report depends on the information needs of those receiving the report(s). Determine their information needs and document the information that will be associated with the requirements when you set up your requirements database or repository.

5 Phases of Testing
5.1 Introduction
The primary objective of the testing effort is to determine the conformance to requirements specified in the contracted documents. The integration of this code with the internal code is the important objective. The goal is to evaluate the system as a whole, not its parts. Techniques can be structural or functional, and can be used in any stage that tests the system as a whole (System Testing, Acceptance Testing, Installation, etc.).

5.2 Types and Phases of Testing
Each SDLC document drives a corresponding QA document:

SDLC Document                                      QA Document
Software Requirement Specification                 Requirement Checklist
Design Document                                    Design Checklist
Functional Specification                           Functional Checklist
Design Document & Functional Specs                 Unit Test Case Documents
Design Document & Functional Specs                 Integration Test Case Documents
Design Document & Functional Specs                 System Test Case Documents
Unit / System / Integration Test Case Documents    Regression Test Case Documents
Functional Specs, Performance Criteria             Performance Test Case Documents
Software Requirement Specification, and the Unit / System / Integration / Regression / Performance Test Case Documents    User Acceptance Test Case Documents

5.3 The "V" Model
[Figure: the V model pairs each development phase with a test phase - Requirements with Acceptance Testing, Specification with System Testing, Architecture with Integration Testing, and Detailed Design with Unit Testing - with Coding at the base of the V.]

[Figure: the same V expressed through deliverables - the Requirement Study and Requirement Checklist yield the Software Requirement Specification; the Functional Specification Checklist yields the Functional Specification Document; Architecture Design and the Detailed Design Document lead to Coding; the Functional Specification and Design Documents drive the Unit, Integration, System, Regression (with Performance Criteria) and Performance Test Case Documents, up to the User Acceptance Test Case Documents/Scenarios.]

[Figure: the V model annotated with reviews and regression rounds - Requirements Review, Specification Review, Architecture Review, Design Review and Code Walkthrough on the left leg; Unit Testing, Integration Testing, System Testing and Performance Testing on the right leg, with Regression Rounds 1, 2 and 3 feeding back at the Architecture, Specification and Requirements levels respectively.]

6 Integration Testing
One of the most significant aspects of a software development project is the integration strategy. Integration may be performed all at once, top-down, bottom-up, critical piece first, or by first integrating functional subsystems and then integrating the subsystems in separate phases using any of the basic strategies. In general, the larger the project, the more important the integration strategy.

Very small systems are often assembled and tested in one phase. For most real systems, this is impractical for two major reasons. First, the system would fail in so many places at once that the debugging and retesting effort would be impractical. Second, satisfying any white box testing criterion would be very difficult, because of the vast amount of detail separating the input data from the individual code modules. In fact, most integration testing has been traditionally limited to ``black box'' techniques.

Large systems may require many integration phases, beginning with assembling modules into low-level subsystems, then assembling subsystems into larger subsystems, and finally assembling the highest level subsystems into the complete system. To be most effective, an integration testing technique should fit well with the overall integration strategy. In a multi-phase integration, testing at each phase helps detect errors early and keep the system under control. Performing only cursory testing at early integration phases and then applying a more rigorous criterion for the final stage is really just a variant of the high-risk "big bang" approach. However, performing rigorous testing of the entire software involved in each integration phase involves a lot of wasteful duplication of effort across phases. The key is to leverage the overall integration structure to allow rigorous testing at each phase while minimizing duplication of effort.

It is important to understand the relationship between module testing and integration testing. In one view, modules are rigorously tested in isolation using stubs and drivers before any integration is attempted. Then, integration testing concentrates entirely on module interactions, assuming that the details within each module are accurate. At the other extreme, module and integration testing can be combined, verifying the details of each module's implementation in an integration context. Many projects compromise, combining module testing with the lowest level of subsystem integration testing, and then performing pure integration testing at higher levels. Each of these views of integration testing may be appropriate for any given project, so an integration testing method should be flexible enough to accommodate them all.
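Stubs and drivers are the basic machinery referred to above. A self-contained sketch of our own (the service and rule are hypothetical): a driver exercises OrderService against a stubbed Inventory, so only the interaction across the module boundary is under test (run with java -ea).

    public class IntegrationDriver {
        interface Inventory { boolean reserve(String sku, int qty); }

        static class OrderService {
            private final Inventory inventory;
            OrderService(Inventory inventory) { this.inventory = inventory; }
            String place(String sku, int qty) {
                // the call below is the module interaction being integrated
                return inventory.reserve(sku, qty) ? "CONFIRMED" : "BACKORDERED";
            }
        }

        public static void main(String[] args) {
            Inventory stub = (sku, qty) -> qty <= 10;   // stub stands in for the real module
            OrderService service = new OrderService(stub);
            assert service.place("A-1", 5).equals("CONFIRMED");
            assert service.place("A-1", 50).equals("BACKORDERED");
            System.out.println("integration tests passed");
        }
    }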

6.1 Generalization of module testing criteria
Module testing criteria can often be generalized in several possible ways to support integration testing. As discussed in the previous subsection, the most obvious generalization is to satisfy the module testing criterion in an integration context, in effect using the entire program as a test driver environment for each module. However, this trivial kind of generalization does not take advantage of the differences between module and integration testing. Applying it to each phase of a multi-phase integration strategy, for example, leads to an excessive amount of redundant testing. More useful generalizations adapt the module testing criterion to focus on interactions between modules rather than attempting to test all of the details of each module's implementation in an integration context. The statement coverage module testing criterion, in which each statement is required to be exercised during module testing, can be generalized to require each module call statement to be exercised during integration testing. Since structured testing at the module level requires that all the decision logic in a module's control flow graph be tested independently, the appropriate generalization to the integration level requires that just the decision logic involved with calls to other modules be tested independently. Although the specifics of the generalization of structured testing are more detailed, the approach is the same.

Module design complexity
Rather than testing all decision outcomes within a module independently, structured testing at the integration level focuses on the decision outcomes that are involved with module calls. The design reduction technique helps identify those decision outcomes, so that it is

possible to exercise them independently during integration testing. The idea behind design reduction is to start with a module control flow graph, remove all control structures that are not involved with module calls, and then use the resultant "reduced" flow graph to drive integration testing. Figure 7-2 shows a systematic set of rules for performing design reduction. Although not strictly a reduction rule, the call rule states that function call ("black dot") nodes cannot be reduced. The remaining rules work together to eliminate the parts of the flow graph that are not involved with module calls. The sequential rule eliminates sequences of non-call ("white dot") nodes. Since application of this rule removes one node and one edge from the flow graph, it leaves the cyclomatic complexity unchanged; however, it does simplify the graph so that the other rules can be applied. The repetitive rule eliminates top-test loops that are not involved with module calls. The conditional rule eliminates conditional statements that do not contain calls in their bodies. The looping rule eliminates bottom-test loops that are not involved with module calls. It is important to preserve the module's connectivity when using the looping rule, since for poorly-structured code it may be hard to distinguish the ``top'' of the loop from the ``bottom.'' For the rule to apply, there must be a path from the module entry to the top of the loop and a path from the bottom of the loop to the module exit. Since the repetitive, conditional, and looping rules each remove one edge from the flow graph, they each reduce cyclomatic complexity by one. Rules 1 through 4 are intended to be applied iteratively until none of them can be applied, at which point the design reduction is complete. By this process, even very complex logic can be eliminated as long as it does not involve any module calls.

Incremental integration
Hierarchical system design limits each stage of development to a manageable effort, and it is important to limit the corresponding stages of testing as well. Hierarchical design is most effective when the coupling among sibling components decreases as the component size increases, which simplifies the derivation of data sets that test interactions among components. The remainder of this section extends the integration testing techniques of structured testing to handle the general case of incremental integration, including support for hierarchical design. The key principle is to test just the interaction among components at each integration stage, avoiding redundant testing of previously integrated subcomponents.

To extend statement coverage to support incremental integration, it is required that each statement be executed during the first phase (which may be anything from single modules to the entire program), and that at each integration phase all call statements that cross the boundaries of previously integrated components are tested. Given hierarchical integration stages with good cohesive partitioning properties, this limits the testing effort to a small fraction of the effort to cover each statement of the system at each integration phase. Structured testing can be extended to cover the fully general case of incremental integration in a similar manner. The key is to perform design reduction at each integration phase using just the module call nodes that cross component boundaries, yielding component-reduced graphs, and to exclude from consideration all modules that do not contain any cross-component calls. To form a completely flexible "statement testing" criterion, it is required that all module call statements from one component into a different component be exercised at each integration stage.

Figure 7-7 illustrates the structured testing approach to incremental integration. Modules A and C have been previously integrated, as have modules B and D. It would take three tests to integrate this system in a single phase. However, since the design predicate decision to call module D from module B has been tested in a previous phase, only two additional tests are required to complete the integration testing. Modules B and D are removed from consideration because they do not contain cross-component calls; the component module design complexity of module A is 1, and the component module design complexity of module C is 2.


7 Acceptance Testing
7.1 Introduction - Acceptance Testing
In software engineering, acceptance testing is formal testing conducted to determine whether a system satisfies its acceptance criteria and thus whether the customer should accept the system. The main types of software testing are: Component, Interface, System, Acceptance, and Release.

Acceptance Testing checks the system against the "Requirements". It is similar to systems testing in that the whole system is checked, but the important difference is the change in focus: Systems Testing checks that the system that was specified has been delivered; Acceptance Testing checks that the system delivers what was requested. The customer, and not the developer, should always do acceptance testing: the customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgment. The forms of the tests may follow those in system testing, but at all times they are informed by the business needs.

7.2 Factors influencing Acceptance Testing
The User Acceptance Test Plan will vary from system to system but, in general, the testing should be planned in order to provide a realistic and adequate exposure of the system to all reasonably expected events. The testing can be based upon the User Requirements Specification to which the system should conform. User Acceptance Testing is a critical phase of any 'systems' project and requires significant participation by the 'End Users'. To be of real use, an Acceptance Test Plan should be developed in order to plan precisely, and in detail, the means by which 'Acceptance' will be achieved. The final part of the UAT can also include a parallel run to prove the system against the current system.

As in any system though, problems will arise, and it is important to have determined what will be the expected and required responses from the various parties concerned, including Users, the Project Team, Vendors and possibly Consultants / Contractors. In order to agree what such responses should be, the End Users and the Project Team need to develop and agree a range of 'Severity Levels'. These levels will range from (say) 1 to 6 and will represent the relative severity, in terms of business / commercial impact, of a problem with the system found during testing. Here is an example which has been used successfully; '1' is the most severe, and '6' has the least impact:

1. 'Show Stopper': it is impossible to continue with the testing because of the severity of this error / bug.

2. 'Critical Problem': testing can continue but we cannot go into production (live) with this problem.
3. 'Major Problem': testing can continue, but in live operation this feature will cause severe disruption to business processes.
4. 'Medium Problem': testing can continue and the system is likely to go live with only minimal departure from agreed business processes.
5. 'Minor Problem': both testing and live operations may progress. This problem should be corrected, but little or no changes to business processes are envisaged.
6. 'Cosmetic' Problem: e.g. colours, fonts, pitch size. However, if such features are key to the business requirements they will warrant a higher severity level.

The users of the system, in consultation with the executive sponsor of the project, must then agree upon the responsibilities and required actions for each category of problem. For example, you may demand that any problems in severity level 1 receive priority response and that all testing will cease until such level 1 problems are resolved. Caution: the maximum number of acceptable 'outstandings' in any particular category must be agreed between End User and vendor.

In any event, the allocation of a problem into its appropriate severity level can be subjective and open to question. To avoid the risk of lengthy and protracted exchanges over the categorisation of problems, we strongly advise that a range of examples is agreed in advance to ensure that there are no fundamental areas of disagreement, or, if there are, these will be known in advance and your organisation is forewarned.

Finally, it is crucial to agree the Criteria for Acceptance. Because no system is entirely fault free, users may agree to accept ('sign off') the system subject to a range of conditions. These conditions need to be analysed, as they may, perhaps unintentionally, seek additional functionality which could be classified as scope creep. Any and all fixes from the software developers must be subjected to rigorous System Testing and, where appropriate, Regression Testing.

7.3 Conclusion
Hence the goal of acceptance testing is to verify the overall quality, correct operation, scalability, completeness, usability, portability, and robustness of the functional components supplied by the software system.

8 System Testing
8.1 Introduction to SYSTEM TESTING
For most organizations, software and system testing represents a significant element of a project's cost in terms of money and management time. Making this function more effective can deliver a range of benefits, including reductions in risk and development costs and improved 'time to market' for new systems. Industry sectors such as telecom, automotive, railway, and aeronautical and space are good examples: systems with software components and software-intensive systems are more and more complex every day. It is often agreed that testing is essential to manufacture reliable products. However, the validation process does not often receive the required attention. Moreover, the validation process, as a part of software engineering, is close to other activities such as conformance, acceptance and qualification testing.

The difference between function testing and system testing is that now the focus is on the whole application and its environment. This does not mean that single functions of the whole program are tested again, because this would be too redundant. The main goal is rather to demonstrate the discrepancies of the product from its requirements and its documentation. In other words, this again includes the question ``Did we build the right product?'' and not just ``Did we build the product right?'' It is beyond doubt that this test cannot be done completely, and while it is one of the most incomplete test methods, it is one of the most important. The tests should be done in the environment for which the program was designed, such as a multiuser network, and the whole program has to be in place; even security guidelines have to be included. A number of time-domain software reliability models attempt to predict the growth of a system's reliability during the system test phase of the development life cycle; here we examine the results of applying several types of Poisson-process models to the development of a large system for which system test was performed in two parallel tracks.

System Testing is more than just functional testing, and can, when appropriate, also encompass many other types of testing, such as:
o security
o load/stress
o performance
o browser compatibility
o localisation
We test for errors that users are likely to make as they interact with the application, as well as the application's ability to trap errors gracefully. Whether testing a financial system, an online casino or games, we test that the functionality of your systems meets your specifications.

8.2 Need for System Testing
Effective software testing, as a part of software engineering, has been proven over the last 3 decades to deliver real business benefits, including:

• reduction of costs
• increased productivity
• reduced commercial risks
Rework and support overheads are reduced, so more effort is spent on developing new functionality and less on "bug fixing" as quality increases. If it goes wrong, what is the potential impact on your commercial goals? Knowledge is power, so why take a leap of faith while your competition step forward with confidence? These benefits are achieved as a result of some fundamental principles of testing; for example, increased independence naturally increases objectivity. The developer has a personal interest in the system's success, in which case it is only human for objectivity to be compromised. Your test strategy must take into consideration the risks to your organisation, commercial and technical.

8.3 System Testing Techniques
The goal is to evaluate the system as a whole, not its parts. Techniques can be structural or functional, can be used in any stage that tests the system as a whole (acceptance, installation, etc.), and are not mutually exclusive.

Structural techniques:
• stress testing - test larger-than-normal capacity in terms of transactions, data, users, speed, etc.
• execution testing - test performance in terms of speed, precision, etc.
• recovery testing - test how the system recovers from a disaster, how it handles corrupted data, etc.
• operations testing - test how the system fits in with existing operations and procedures in the user organization.
• compliance testing - test adherence to standards.
• security testing - test security requirements.

Functional techniques:
• requirements testing - the fundamental form of testing - makes sure the system does what it's required to do.
• regression testing - make sure unchanged functionality remains unchanged.
• error-handling testing - test required error-handling functions (usually user error).
• manual-support testing - test that the system can be used properly - includes user documentation.
• intersystem handling testing - test that the system is compatible with other systems in the environment.
• control testing - test required control mechanisms.
• parallel testing - feed the same input into two versions of the system to make sure they produce the same output.

Unit Testing
The goal is to evaluate some piece (file, program, module, component, etc.) in isolation.

Techniques can be structural or functional. In practice, unit testing is usually ad-hoc and looks a lot like debugging, but more structured approaches exist.

8.4 Functional techniques
• input domain testing - pick test cases representative of the range of allowable input, including high, low, and average values.
• equivalence partitioning - partition the range of allowable input so that the program is expected to behave similarly for all inputs in a given partition, then pick a test case from each partition.
• boundary value - choose test cases with input values at the boundary (both inside and outside) of the allowable range.
• syntax checking - choose test cases that violate the format rules for input.
• special values - design test cases that use input values that represent special situations.
• output domain testing - pick test cases that will produce output at the extremes of the output domain.

Structural techniques:
• statement testing - ensure the set of test cases exercises every statement at least once.
• branch testing - each branch of an if/then statement is exercised.
• conditional testing - each truth statement is exercised both true and false.
• expression testing - every part of every expression is exercised.
• path testing - every path is exercised (impossible in practice).

Error-based techniques:
The basic idea is that if you know something about the nature of the defects in the code, you can estimate whether or not you have found all of them.
• fault seeding - put a certain number of known faults into the code, then test until they are all found.
• mutation testing - create mutants of the program by making single changes, then run test cases until all mutants have been killed.
• historical test data - an organization keeps records of the average numbers of defects in the products it produces, then tests a new product until the number of defects found approaches the expected number.

8.5 Conclusion
Hence the System Test phase should begin once modules are integrated enough to perform tests in a whole system environment. System testing can occur in parallel with integration test, especially with the top-down method.
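As a small illustration of the boundary value technique above (our own example; the 1..100 range is hypothetical), the sketch below tests just inside and just outside each boundary, plus typical in-range values (run with java -ea):

    public class BoundaryValueTest {
        static boolean accept(int percent) { return percent >= 1 && percent <= 100; }

        public static void main(String[] args) {
            int[] shouldPass = {1, 2, 50, 99, 100};   // in-range partition, incl. boundaries
            int[] shouldFail = {0, -1, 101, 1000};    // out-of-range partitions
            for (int v : shouldPass) assert  accept(v) : "rejected valid " + v;
            for (int v : shouldFail) assert !accept(v) : "accepted invalid " + v;
            System.out.println("boundary tests passed");
        }
    }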

9 Unit Testing

9.1 Introduction to Unit Testing
Unit testing. Isn't that some annoying requirement that we're going to ignore? Many developers get very nervous when you mention unit tests. Usually this is a vision of a grand table with every single method listed, along with the expected results and pass/fail date. That is important, but not relevant in most programming projects. The unit test will motivate the code that you write. In a sense, it is a little design document that says, "What will this bit of code do?" Or, in the language of object oriented programming, "What will these clusters of objects do?"
The crucial issue in constructing a unit test is scope. If the scope is too narrow, then the tests will be trivial and the objects might pass the tests, but there will be no design of their interactions. Certainly, interactions of objects are the crux of any object oriented design. Likewise, if the scope is too broad, then there is a high chance that not every component of the new code will get tested. The programmer is then reduced to testing-by-poking-around, which is not an effective test strategy.

Need for Unit Test
How do you know that a method doesn't need a unit test? First, can it be tested by inspection? If the code is simple enough that the developer can just look at it and verify its correctness, then it is simple enough not to require a unit test. The developer should know when this is the case.
Unit tests will most likely be defined at the method level, so the art is to define the unit tests on the methods that cannot be checked by inspection. Usually this is the case when the method involves a cluster of objects. Unit tests that isolate clusters of objects for testing are doubly useful, because they test for failures and they also identify those segments of code that are related. People who revisit the code will use the unit tests to discover which objects are related, or which objects form a cluster. Hence: unit tests isolate clusters of objects for future developers.
Another good litmus test is to look at the code and see if it throws or catches an error. If error handling is performed in a method, then that method can break. Likewise, any method that can break is a good candidate for having a unit test, because it may break at some time, and then the unit test will be there to help you fix it.
The danger of not implementing a unit test on every method is that the coverage may be incomplete. Just because we don't test every method explicitly doesn't mean that methods can get away with not being tested. The programmer should
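As a minimal sketch of this litmus test (all names invented): the routine below performs error handling, so it can break, and therefore earns a unit test that exercises both the normal path and the error path.

#include <assert.h>

/* Hypothetical method that can break: returns 0 on success,
 * -1 on the error path (attempted division by zero). */
int safe_divide(int num, int den, int *result)
{
    if (den == 0)
        return -1;      /* error handling: this path can break */
    *result = num / den;
    return 0;
}

int main(void)
{
    int r;
    /* Normal path. */
    assert(safe_divide(10, 2, &r) == 0 && r == 5);
    /* Error path: the unit test exercises the code that can break. */
    assert(safe_divide(10, 0, &r) == -1);
    return 0;
}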

know that their unit testing is complete when the unit tests cover, at the very least, the functional requirements of all the code. The careful programmer will know that their unit testing is complete when they have verified that their unit tests cover every cluster of objects that forms their application.

Life Cycle Approach to Testing
Testing will occur throughout the project lifecycle, i.e. from Requirements till User Acceptance Testing.

The main objectives of Unit Testing are as follows:
• To execute a program with the intent of finding an error
• To uncover an as-yet undiscovered error
• To prepare a test case with a high probability of finding an as-yet undiscovered error
• To test particular functions or code modules

Levels of Unit Testing
• UNIT (100% code coverage)
• INTEGRATION
• SYSTEM
• ACCEPTANCE
• MAINTENANCE AND REGRESSION

Concepts in Unit Testing:
• The most 'micro' scale of testing
• Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code
• Not always easily done unless the application has a well-designed architecture with tight code

9.2 Unit Testing – Flow
[Figure: unit test flow. A driver exercises the module under test with the test cases, covering the module interface, local data structures, boundary conditions, independent paths and error-handling paths.]
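As a rough sketch of that flow, the driver below feeds a table of test cases through an invented module function, exercising the module interface, the boundary conditions and each independent path, and reports pass/fail results.

#include <stdio.h>

/* Module under test (hypothetical): clips a value into [lo, hi]. */
int clip(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

/* Driver: applies a table of test cases covering the boundary
 * conditions and each independent path, and reports pass/fail. */
int main(void)
{
    struct { int in, lo, hi, expected; } cases[] = {
        { 5, 0, 10,  5},   /* interior path        */
        {-1, 0, 10,  0},   /* below lower boundary */
        { 0, 0, 10,  0},   /* on lower boundary    */
        {10, 0, 10, 10},   /* on upper boundary    */
        {11, 0, 10, 10},   /* above upper boundary */
    };
    int i, failures = 0;
    for (i = 0; i < 5; i++) {
        int got = clip(cases[i].in, cases[i].lo, cases[i].hi);
        if (got != cases[i].expected) {
            printf("case %d FAILED: got %d, expected %d\n",
                   i, got, cases[i].expected);
            failures++;
        }
    }
    printf("%d failure(s)\n", failures);
    return failures;
}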

Types of Errors Detected
The following are the types of errors that may be caught:
• Errors in data structures
• Performance errors
• Logic errors
• Validity of alternate and exception flows
• Errors identified at the analysis/design stages

Unit Testing – Black Box Approach
• Field level checks
• Field level validations
• User interface checks
• Functional level checks

Unit Testing – White Box Approach
• STATEMENT COVERAGE
• DECISION COVERAGE
• CONDITION COVERAGE
• MULTIPLE CONDITION COVERAGE (nested conditions)
• CONDITION/DECISION COVERAGE
• PATH COVERAGE

Unit Testing – Field Level Checks
• Null / not null checks
• Uniqueness checks
• Length checks
• Date field checks
• Numeric checks
• Negative checks

Unit Testing – Field Level Validations
• Test all validations for an input field
• Date range checks (From Date / To Date)
• Date check validation against the system date

Unit Testing – User Interface Checks
• Readability of the controls
• Tool tip validation
• Ease of use of the interface
• Tab-related checks
• User interface dialogs
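The field-level checks can be illustrated with a small sketch; the field rule below (a mandatory alphanumeric customer code of at most 10 characters) is invented for the example.

#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Hypothetical field rule: customer code is mandatory (not null),
 * at most 10 characters, alphanumeric only. */
int valid_customer_code(const char *code)
{
    size_t i, len;
    if (code == NULL || code[0] == '\0')
        return 0;                                /* null / not-null check */
    len = strlen(code);
    if (len > 10)
        return 0;                                /* length check */
    for (i = 0; i < len; i++)
        if (!isalnum((unsigned char)code[i]))
            return 0;                            /* character-type check */
    return 1;
}

int main(void)
{
    assert(!valid_customer_code(NULL));          /* null check        */
    assert(!valid_customer_code(""));            /* not-null check    */
    assert(!valid_customer_code("ABCDEFGHIJK")); /* too long (11)     */
    assert(!valid_customer_code("AB-123"));      /* invalid character */
    assert( valid_customer_code("AB123"));       /* valid field value */
    return 0;
}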

Unit Testing – Functionality Checks
• Screen functionalities
• Field dependencies
• Auto generation
• Algorithms and computations
• Normal and abnormal terminations
• Specific business rules, if any
• GUI compliance checks

Unit Testing – Other White Box Measures
• FUNCTION COVERAGE
• LOOP COVERAGE
• RACE COVERAGE

9.3 Execution of Unit Tests

STATEMENT COVERAGE
Method for Statement Coverage:
- Design a test case for every statement to be executed.
- Select the unique set of test cases.
This measure reports whether each executable statement is encountered. Also known as: line coverage, segment coverage and basic block coverage. Basic block coverage is the same as statement coverage, except that the unit of code measured is each sequence of non-branching statements.

Example of Unit Testing:

int invoice(int x, int y)
{
    int d1, d2, s;
    s = 5 * x + 10 * y;
    if (s < 200)
        d1 = 100;
    else if (s < 1000)
        d1 = 95;
    else
        d1 = 80;
    if (x <= 30)
        d2 = 100;
    else
        d2 = 90;
    return (s * d1 * d2 / 10000);
}

Unit Testing Flow:
[Figure: unit testing flow diagram - not reproduced in this extract.]
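Applying the statement-coverage method to invoice() above, the following sketch shows one unique set of test cases that executes every statement: between them, the three calls take each of the three d1 branches and both d2 branches. The input values are chosen for the example, and the expected results are worked out by hand from the listing (the invoice() body is repeated so the sketch compiles on its own); note that the final division is integer division.

#include <assert.h>

/* invoice() repeated from the listing above. */
int invoice(int x, int y)
{
    int d1, d2, s;
    s = 5 * x + 10 * y;
    if (s < 200)       d1 = 100;
    else if (s < 1000) d1 = 95;
    else               d1 = 80;
    if (x <= 30)       d2 = 100;
    else               d2 = 90;
    return (s * d1 * d2 / 10000);
}

int main(void)
{
    assert(invoice(10, 10)   == 150);  /* s=150:  d1=100, d2=100 */
    assert(invoice(40, 30)   == 427);  /* s=500:  d1=95,  d2=90; 4275000/10000 truncates */
    assert(invoice(100, 100) == 1080); /* s=1500: d1=80,  d2=90  */
    return 0;
}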


ADVANTAGES of Statement Coverage
§ Can be applied directly to object code and does not require processing source code.
§ Performance profilers commonly implement this measure.
DISADVANTAGES of Statement Coverage
§ Insensitive to some control structures (number of iterations).
§ Does not report whether loops reach their termination condition.
§ Statement coverage is completely insensitive to the logical operators (|| and &&).

DECISION COVERAGE
Method for Decision Coverage:
- Design a test case for the pass/failure of every decision point.
- Select the unique set of test cases.
§ This measure reports whether Boolean expressions tested in control structures (such as the if-statement and while-statement) evaluated to both true and false.
§ The entire Boolean expression is considered one true-or-false predicate, regardless of whether it contains logical-and or logical-or operators.
§ Additionally, this measure includes coverage of switch-statement cases, exception handlers, and interrupt handlers.
§ Also known as: branch coverage, basis path coverage, decision-decision-path testing. "Basis path" testing selects paths that achieve decision coverage.
ADVANTAGE:
§ Simplicity, without the problems of statement coverage.
DISADVANTAGE:
§ This measure ignores branches within Boolean expressions which occur due to short-circuit operators.
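The short-circuit disadvantage is easy to demonstrate with a sketch (the function and inputs are invented for illustration). The first two calls achieve full statement and decision coverage, yet the sub-expression b > 0 is never evaluated true, so condition coverage requires a third case.

#include <stdio.h>

/* Hypothetical decision containing a short-circuit operator. */
int check(int a, int b)
{
    if (a > 0 || b > 0)    /* one decision, two conditions */
        return 1;
    return 0;
}

int main(void)
{
    /* These two cases give 100% statement and decision coverage:
     * the decision evaluates both true (first) and false (second). */
    printf("%d\n", check(1, -1));   /* a>0 true; b>0 short-circuited, never tested */
    printf("%d\n", check(-1, -1));  /* decision false */

    /* Condition coverage is still incomplete: b>0 has never been true.
     * A third case is needed to exercise it. */
    printf("%d\n", check(-1, 1));
    return 0;
}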

CONDITION COVERAGE
Method for Condition Coverage:
- Test every condition (sub-expression) in a decision for both true and false.
- Select the unique set of test cases.
§ Reports the true or false outcome of each Boolean sub-expression, separated by logical-and and logical-or if they occur.
§ Condition coverage measures the sub-expressions independently of each other.

MULTIPLE CONDITION COVERAGE (nested conditions)
§ Reports whether every possible combination of Boolean sub-expressions occurs. As with condition coverage, the sub-expressions are separated by logical-and and logical-or, when present.
§ The test cases required for full multiple condition coverage of a condition are given by the logical-operator truth table for the condition.
DISADVANTAGES:
§ Tedious to determine the minimum set of test cases required, especially for very complex Boolean expressions.
§ The number of test cases required could vary substantially among conditions that have similar complexity.

CONDITION/DECISION COVERAGE
§ A hybrid measure, composed of the union of condition coverage and decision coverage.
§ It has the advantage of simplicity without the shortcomings of its component measures.

PATH COVERAGE
§ This measure reports whether each of the possible paths in each function has been followed. A path is a unique sequence of branches from the function entry to the exit.
§ Also known as predicate coverage: predicate coverage views paths as possible combinations of logical conditions.
§ Path coverage has the advantage of requiring very thorough testing.

FUNCTION COVERAGE
§ This measure reports whether you invoked each function or procedure.
§ It is useful during preliminary testing to assure at least some coverage in all areas of the software. Broad, shallow testing finds gross deficiencies in a test suite quickly.

LOOP COVERAGE
§ This measure reports whether you executed each loop body zero times, exactly once, twice and more than twice (consecutively). For do-while loops, loop coverage reports whether you executed the body exactly once, and more than once.
§ The valuable aspect of this measure is determining whether while-loops and for-loops execute more than once - information not reported by other measures.

RACE COVERAGE
§ This measure reports whether multiple threads execute the same code at the same time. It helps detect failure to synchronize access to resources, and is useful for testing multi-threaded programs such as an operating system.
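Illustrating the loop coverage measure above with a minimal sketch (the function is invented for the example): the three calls drive the loop body zero times, exactly once, and more than once.

#include <assert.h>

/* Hypothetical function with a simple for-loop. */
int sum_first(const int *a, int n)
{
    int i, total = 0;
    for (i = 0; i < n; i++)    /* loop under measurement */
        total += a[i];
    return total;
}

int main(void)
{
    int data[] = {3, 4, 5};
    assert(sum_first(data, 0) == 0);   /* body executed zero times     */
    assert(sum_first(data, 1) == 3);   /* body executed exactly once   */
    assert(sum_first(data, 3) == 12);  /* body executed more than once */
    return 0;
}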

9.4 Conclusion
Testing, irrespective of the phase in which it occurs, should encompass the following:
• The cost of failure associated with defective products getting shipped and used by the customer is enormous
• To find out whether the integrated product works as per the customer requirements
• To evaluate the product with an independent perspective
• To identify as many defects as possible before the customer finds them
• To reduce the risk of releasing the product

10 Test Strategy

10.1 Introduction
This document leads you towards a better insight into the test strategy and its methodology. Test Approach and Test Architecture are synonyms for Test Strategy. It is the role of test management to ensure that new or modified service products meet the business requirements for which they have been developed or enhanced. The testing strategy should define the objectives of all test stages and the techniques that apply. It also forms the basis for the creation of a standardized documentation set, and facilitates communication of the test process and its implications outside of the test discipline. Any test support tools introduced should be aligned with, and in support of, the test strategy.

10.2 Key elements of Test Management
Test organization - the set-up and management of a suitable test organizational structure and explicit role definition. Test organization also involves the determination of configuration standards and the definition of the test environment.

Test planning - the requirements definition and design specifications facilitate the identification of major test items, and these may necessitate an update to the test strategy. The project framework under which the testing activities will be carried out is reviewed, high-level test phase plans are prepared, and resource schedules are considered. A detailed test plan and schedule is prepared, with key test responsibilities being indicated.

Test specifications - required for all levels of testing and covering all categories of test. The required outcome of each test must be known before the test is attempted.

Unit, integration and system testing - configuration items are verified against the appropriate specifications and in accordance with the test plan. The test environment should also be under configuration control, and test data and results stored for future evaluation.

Test monitoring and assessment - ongoing monitoring and assessment of the integrity of the development and construction. The status of the configuration items should be reviewed against the phase plans, and test progress reports prepared, providing some assurance of the verification and validation activities. Test management is also concerned with both test resource and test environment management.

Product assurance - the decision to negotiate the acceptance testing program and the release and commissioning of the service product is subject to the 'product assurance' role being satisfied with the outcome of the verification activities. Product assurance may oversee some of the test activity and may participate in process reviews.

A common criticism of construction programmes is that insufficient time is frequently allocated to the testing and commissioning of the building systems, together with the involvement and subsequent training of the facilities management team. Testing and commissioning is often considered by teams to be a secondary activity and given a lower priority, particularly as pressure builds on the programme towards completion. Sufficient time must be dedicated to testing and commissioning, as ensuring the systems function correctly is fairly fundamental to the project's success or failure. Traditionally the responsibility for testing and commissioning is buried deep within the supply chain as a

sub-contract of a sub-contract. The project sponsor should ensure that the professional team and the contractor consider realistically how much time is needed. The time necessary for testing and commissioning will vary from project to project, depending upon the complexity of the systems and services that have been installed. It is possible to gain greater control of this process and the associated risk through the use of specialists, such as systems integrators, who can be appointed as part of the professional team.

Fitness for purpose checklist:
• Is there a documented testing strategy that defines the objectives of all test stages and the techniques that may apply, e.g. non-functional testing and the associated techniques such as performance, stress and security testing?
• Does the test plan prescribe the approach to be taken for intended test activities, identifying:
- the items to be tested,
- the testing to be performed,
- test schedules,
- resource and facility requirements,
- reporting requirements,
- evaluation criteria, and
- risks requiring contingency measures?
• Are test processes and practices reviewed regularly to assure that the testing processes continue to meet specific business needs? For example, e-commerce testing may involve new user interfaces, and a business focus on usability may mean that the organization must review its testing strategies.

10.3 Test Strategy Flow
Test cases and test procedures should manifest the test strategy.
[Figure: test strategy flow diagram - not reproduced in this extract.]

Test Strategy – Selection
Selection of the test strategy is based on the following factors:
• The product - here, an application to help people and teams of people in making decisions
• The key potential risks, for example:
- Suggestion of wrong ideas
- People will use the product incorrectly
- Incorrect comparison of scenarios
- Scenarios may be corrupted
- Inability to handle complex decisions
• Determination of the actual risk

Test Strategy Execution:
• Understand the underlying decision algorithm.
• Generate a parallel decision analyzer, using Perl or Excel, that will function as a reference for high-volume testing of the application; simulate the algorithm in parallel.
• Generate a large number of decision scenarios.
• Create complex scenarios and compare them.
• Review the documentation and help.
• Capability-test each major function.
• Test for sensitivity to user error.
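The "parallel decision analyzer" is, in effect, a reference oracle: a second, independent implementation of the decision algorithm run against the same generated scenarios as the product. The text above suggests building it in Perl or Excel; purely as an illustration, the C sketch below (with an invented decision rule standing in for both the product and the reference) shows the shape of the comparison: generate many scenarios, run both implementations, and report any disagreement.

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the product's decision function (system under test). */
int product_decision(int weight, int score)
{
    return weight * score > 50;
}

/* Independent reference implementation of the same published rule:
 * this plays the role of the parallel decision analyzer (the oracle). */
int reference_decision(int weight, int score)
{
    return (weight * score) - 50 > 0;
}

int main(void)
{
    int i, mismatches = 0;
    srand(42);  /* fixed seed so the scenario set is reproducible */

    /* High-volume testing: generate many random decision scenarios
     * and compare the product's output against the oracle's. */
    for (i = 0; i < 100000; i++) {
        int w = rand() % 20, s = rand() % 20;
        if (product_decision(w, s) != reference_decision(w, s)) {
            printf("mismatch: weight=%d score=%d\n", w, s);
            mismatches++;
        }
    }
    printf("%d mismatch(es) in 100000 scenarios\n", mismatches);
    return mismatches != 0;
}

Note that the execution issues listed below, particularly the risk of coincidental failure of both the simulation and the product, apply directly to this pattern: a shared misunderstanding of the algorithm produces matching wrong answers.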

Further execution activities:
• Create a means to generate and apply large numbers of decision scenarios to the product. This will be done using the GUI test automation system, or through the direct generation of DecideRight scenario files that would be loaded into the product during test.
• Test with decision scenarios that are near the limit of complexity allowed by the product, and compare complex scenarios.
• Test the product for the risk of silent failures or corruptions in decision analysis.
• Review the documentation, and test the design of the user interface and functionality for its sensitivity to user error.

Issues in Execution of the Test Strategy
• The difficulty of understanding and simulating the decision algorithm
• The risk of coincidental failure of both the simulation and the product
• The difficulty of automating decision tests

10.4 General Testing Strategies
• Top-down
• Bottom-up
• Thread testing
• Stress testing
• Back-to-back testing

10.5 Need for Test Strategy
The objective of testing is to reduce the risks inherent in computer systems; the strategy must address those risks and present a process that can reduce them. The system concerns, or risks, then establish the objectives for the test process. Coding errors account for around 36% of defects, while analysis and design errors account for around 64%. The two components of the testing strategy are the test factors and the test phases:
Test Factor - the risk or issue that needs to be addressed as part of the test strategy. The strategy will select those factors that need to be addressed in the testing of a specific application system.
Test Phase - the phase of the systems development life cycle in which testing will occur.

10.6 Developing a Test Strategy
The test strategy will need to be customized for any specific software system. Not all the test factors will be applicable to all software systems; the development team will need to select and rank the test factors for the specific software system being developed. The test phases will vary based on the testing methodology used - for example, the test phases in a traditional waterfall life cycle methodology will be much different from the phases in a Rapid Application Development methodology. The system accordingly focuses on risks, and thereby establishes the objectives for the test process.

Four steps must be followed to develop a customized test strategy:
• Select and rank the test factors
• Identify the system development phases
• Identify the business risks associated with the system under development
• Place the risks in the matrix
[Figure: test factor / test phase matrix. The columns are the test phases (Requirements, Design, Build, Dynamic Test, Integrate, Maintain); the rows hold the ranked test factors and their associated risks.]

10.7 Conclusion
The test strategy should be developed in accordance with the business risks associated with the software when the test team develops the test tactics. Thus the test team needs to acquire and study the test strategy, questioning the following:
• What is the relationship of importance among the test factors?
• Which of the high-level risks are the most significant?
• What damage can be done to the business if the software fails to perform correctly?
• What damage can be done to the business if the software is not completed on time?
• Who are the individuals most knowledgeable in understanding the impact of the identified business risks?
Hence the test strategy must address the risks and present a process that can reduce those risks. The applicable test factors would be listed against the phases in which the testing must occur.


11 TEST PLAN

11.1 What is a Test Plan?
A test plan can be defined as a document that describes the scope, approach, resources and schedule of intended test activities. It identifies the test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

Purpose of preparing a Test Plan
A test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it. The main purpose of preparing a test plan is that everyone concerned with the project is in sync with regard to the scope, responsibilities, deadlines and deliverables of the project. It is in this respect that reviews and a sign-off are very important, since they mean that everyone is in agreement with the contents of the test plan; this also helps in case of any dispute during the course of the project (especially between the developers and the testers).

Contents of a Test Plan
1. Purpose
2. Scope
3. Test Approach
4. Entry Criteria
5. Resources
6. Tasks / Responsibilities
7. Exit Criteria
8. Schedules / Milestones
9. Hardware / Software Requirements
10. Risks & Mitigation Plans
11. Tools to be used
12. Deliverables
13. References
a. Procedures
b. Templates
c. Standards/Guidelines
14. Annexure
15. Sign-Off

11.2 Contents (in detail)
Purpose
This section should contain the purpose of preparing the test plan.

Scope
This section should talk about the areas of the application which are to be tested by the QA team, and specify those areas which are definitely out of scope (screens, database, mainframe processes etc.).

Test Approach
This section would contain details on how the testing is to be performed, and whether any specific strategy is to be followed (including configuration management).

Entry Criteria
This section explains the various steps to be performed before the start of a test, i.e. the prerequisites. For example: timely environment set-up, starting the web server / app server, successful implementation of the latest build, etc.

Resources
This section should list out the people who would be involved in the project, their designations, etc.

Tasks / Responsibilities
This section talks about the tasks to be performed and the responsibilities assigned to the various members of the project.

Exit Criteria
Contains tasks like bringing down the system / server, restoring the system to the pre-test environment, database refresh, etc.

Schedules / Milestones
This section deals with the final delivery date and the various milestone dates to be met in the course of the project.

Hardware / Software Requirements
This section would contain the details of the PCs / servers required (with their configuration) to install the application or perform the testing, and the specific software that needs to be installed on the systems to get the application running or to connect to the database.

Risks & Mitigation Plans
This section should list out all the possible risks that can arise during the testing, and the mitigation plans that the QA team intends to implement in case a risk actually turns into a reality.

Tools to be used
This section would list out the testing tools or utilities (if any) that are to be used in the project, e.g. WinRunner, WinSQL, Test Director, PCOM, etc.

Deliverables
This section contains the various deliverables that are due to the client at various points of time, i.e. daily, weekly, at the start of the project, at the end of the project, etc. These could include test plans, test procedures, test matrices, status reports, test scripts, etc. Templates for all of these could also be attached.

References
This section could include:
• Procedures
• Templates (client-specific or otherwise)
• Standards / Guidelines (e.g. QView)
• Project-related documents (RSD, ADD, FSD etc.)

Annexure
This could contain embedded documents, or links to documents, which have been or will be used in the course of testing, e.g. templates used for reports, test cases, etc. Referenced documents can also be attached here.

Sign-Off
This should contain the mutual agreement between the client and the QA team, with both leads / managers signing off their agreement on the test plan.

12 Test Data Preparation - Introduction

A system is programmed by its data. Data, correctly chosen, is the medium through which the tester influences the software. Data describes the initial conditions for a test and forms the input; it is manipulated, extrapolated, summarized and referenced by the functionality under test, which finally spews forth yet more data to be checked against expectations. Data is a crucial part of most functional testing.

This paper sets out to illustrate some of the ways that data can influence the test process, and will show that testing can be improved by a careful choice of input data. The paper will focus on input data, as input data has the greatest influence on functional testing and is the simplest to manipulate, rather than on output data or the transitional states the data passes through during processing. It will concentrate most on data-heavy applications - those which use databases or are heavily influenced by the data they hold. It will not consider areas where data is important to non-functional testing, such as operational profiles, massive datasets and environmental tuning.

Roles of Data in Functional Testing
Testing consumes and produces large amounts of data. Good test data can be structured to improve understanding and testability; its contents, correctly chosen, can reduce maintenance effort and allow flexibility. Preparation of the data can help to focus the business where requirements are vague. Functional testing can suffer if data is poor, and good data can help improve functional testing. The first stage of any recogniser development project is data preparation.

A SYSTEM IS PROGRAMMED BY ITS DATA
Many modern systems allow tremendous flexibility in the way their basic functionality can be used. A system can be configured to fit several business models, work (almost) seamlessly with a variety of cooperative systems, and provide tailored experiences to a host of different users. Configuration data can dictate control flow, data manipulation, presentation and user interface.

Testing is the process of creating, implementing and evaluating tests. Tests must be planned and thought out ahead of time; generally, the more formal the plan, the better. In doing this, you have to decide such things as what exactly you are testing and testing for, what steps are required, the way the test is going to be run and applied, and the simulated conditions used. You must know how the system is supposed to work - for example, if you're testing a specific functionality, how the protocols behave. You should have a definition of what success and failure are: is close enough good enough? You should have a good idea of a methodology for the test, and you should design test cases. You must have a consistent schedule for testing; performing a specific set of tests at appropriate points in the process is more important than running the tests at a specific time. You must also understand the limits inherent in the tests themselves.

Each separate test should be given a unique reference number which will identify the business process being recorded, the persons involved in the testing process, and the date the test was carried out. This will enable the monitoring and testing reports to be co-ordinated with any feedback received. It is recommended that a full test environment be set up for use in the applicable circumstances. Test data should, however, be prepared which is representative of normal business transactions; actual customer names or contact details should not be used for such tests.
In short, effective quality control testing requires some basic goals and understanding: you must understand what you are testing, know how it is supposed to work, and have a clear definition of what success and failure mean.

they may be hard to maintain.  Identify Who is to Conduct the Tests In order to ensure consistency of the testing process throughout the organization. while an elegantly-chosen dataset can often allow new tests without the overhead of new data. GOOD DATA CAN HELP TESTING STAY ON SCHEDULE An easily comprehensible and well-understood dataset is a tool to help communication. A formal test plan is a document that provides and records important information about a test project.  Identify Who is to Control and Monitor the Tests Performance Testing Process & Methodology Proprietary & Confidential . it is hard to communicate problems to coders. Each business process should be thoroughly tested and the coordinator should ensure that each business unit observes the necessary rules associated with ensuring that the testing process is carried out within a realistic environment. and varied to allow diagnosis. a nominated testing and across the organization. and it can become difficult to have confidence in the QA team's results. FUNCTIONAL TESTING SUFFERS IF DATA IS POOR Tests with poor data may not describe the business model effectively. Without this. whether they are good or bad. This section of the BCP should contain the names of the BCP Team members nominated to co-ordinate the testing process. GOOD DATA IS VITAL TO RELIABLE TEST RESULTS An important goal of functional testing is to allow the test to be repeated with the same result. one or more members of the Business Continuity Planning (BCP) Team should be nominated to co-ordinate the testing process within each business unit.. Good data can greatly assist in speedy diagnosis and rapid re-testing. Good data allows diagnosis.72 - .seamlessly with a variety of cooperative systems and provide tailored experiences to a host of different users. Regression testing and automated test maintenance can be made speedier and easier by using good data. A business may look to an application's configurability to allow them to keep up with the market without being slowed by the development process.1 Criteria for Test Data Collection This section of the Document specifies the description of the test data needed to test recovery of each business process. an individual may look for a personalized experience from commonly-available software. and allows tests to be repeated with confidence. for example: project and quality assumptions project background information resources schedule & timeline entry and exit criteria test milestones tests to be performed use cases and/or test cases 12. or require lengthy and difficult setup. It should also list the duties of the appointed coordinators. They may obscure problems or avoid them altogether. that take longer to execute. effective reporting. Poor data tends to result in poor tests.

The forms should be completed either during the tests (to record a specific issue) or as soon after finishing as practical.In order to ensure consistency when measuring the results. This section of the BCP will contain the names of the persons nominated to monitor the testing process throughout the organization. This is probably best handled in a workshop environment and should be presented by the persons responsible for developing the emergency procedures.73 - . It should be mandatory for the management of a business unit to be present when that unit is involved with conducting the tests. Prepare Budget for Testing Phase Each phase of the BCP process which incurs a cost requires that a budget be prepared and approved. The 'Preparing for a Possible Emergency' Phase of the BCP process will involve the identification and implementation of strategies for back up and recovery of data files or a part of a business process. may require particularly expensive back up strategies to be implemented. Conducting the Tests The tests must be carried out under authentic conditions and all participants must take the process seriously. Critical parts of the business process such as the IT systems. This will enable observations and comments to be recorded whilst the event is still fresh in the persons mind. it is necessary for the core testing team to be trained in the emergency procedures. It will also contain a list of the duties to be undertaken by the monitoring staff. This section of the BCP will contain a list of the testing phase activities and a cost for each. This feedback will hopefully enable weaknesses within the Business Recovery Process to be identified and eliminated. It is important that all persons who are likely to be involved with recovering a particular business process in the event of an emergency should participate in the testing process. the tests should be independently monitored.  Prepare Feedback Questionnaires It is vital to receive feedback from the persons managing and participating in each of the tests. This task would normally be carried out by a nominated member of the Business Recovery Team or a member of the Business Continuity Planning Team. Every part of the procedures included as part of the recovery process is to be tested to ensure validity and relevance. Where the costs are significant they should be approved separately with a specific detailed budget for the establishment costs and the ongoing maintenance costs. It is important that clear instructions are given to the Core Testing Team regarding the simulated conditions which have to be observed. each critical part of the business recovery process should be fully tested.  Training Core Testing Team for each Business Unit In order for the testing process to proceed smoothly. Completion of feedback forms should be mandatory for all persons participating in the testing process. This section of the BCP should contain a template for a Feedback Questionnaire. This section of the BCP should contain a list of the core testing team for each of the business units who will be responsible for coordinating and undertaking the Business Recovery Testing process. Test each part of the Business Recovery Process In so far as it is practical. It is inevitable that these back up and recovery processes will involve additional costs. It should be noted whenever part of the costs is already incorporated with the organization’s overall budgeting process. Performance Testing Process & Methodology Proprietary & Confidential .

if not. in a realistic manner.if not. This training may be integrated with the training phase or handled separately. adequate or requiring further testing. Where. This will enable the training to be consistent and organized in a manner where the results can be measured. The training should be assessed to verify that it has achieved its objectives and is relevant for the procedures involved. Test Accuracy of Employee and Vendor Emergency Contact Numbers During the testing process the accuracy of employee and vendor emergency contact information is to be re-confirmed. This process must have safety features incorporated to ensure that if one person is not contactable for any reason then this is notified to a nominated controller. The testing co-ordination and monitoring will endeavor to ensure that the simulated environments are maintained throughout the testing process. The BCP should contain a description of the objectives and scope of the training phase. This is particularly important for management and key employees who are critical to the success of the recovery process. in the event of an emergency occurring outside of normal business hours. what specific training is required. Performance Testing Process & Methodology Proprietary & Confidential . Training may be delivered either using in-house resources or external resources depending upon available skills and related costs. Training Staff in the Business Recovery Process All staff should be trained in the business recovery process. The training should be carefully planned and delivered on a structured basis. a hierarchical process could be used whereby one person contacts five others. provide further comment Did the tests proceed without any problems . Develop Objectives and Scope of Training The objectives and scope of the BCP training activities are to be clearly stated within the plan. provide further comment Were simulated conditions reasonably "authentic" . and the training fine tuned. The following questions may be appropriate: Were objectives of the Business Recovery Process and the testing process met . This activity will usually be handled by the HRM Department or Division.This section of the BCP is to contain a list of each business process with a test schedule and information on the simulated conditions being used. This is particularly important when the procedures are significantly different from those pertaining to normal operations. It will be necessary to identify the objective and scope for the training.74 - .if not. a large number of persons are to be contacted. who needs it and a budget prepared for the additional costs associated with this phase.if not. provide further comment Was test data representative . All contact numbers are to be validated for all involved employees. This will enable alternative contact routes to be used. Managing the Training Process For the BCP training phase to be successful it has to be both well managed and structured. provide further comment What were the main comments received in the feedback questionnaires Each test should be assessed as either fully satisfactory. as appropriate. Assess Test Results Prepare a full assessment of the test results for each business process.

The objectives for the training could be as follows : "To train all staff in the particular procedures to be followed during the business recovery process". This section of the BCP contains the overview of the training schedule and the groups of persons receiving the training. it could delay the organization in reaching an adequate level of preparedness. This section of the BCP will identify for each business process what type of training is required and which persons or group of persons need to be trained. Each member of staff will be given information on their role and responsibilities applicable in the event of an emergency. Performance Testing Process & Methodology Proprietary & Confidential . This section of the BCP contains information on each of the training programmes with details of the training materials to be developed. The scope of the training could be along the following lines : "The training is to be carried out in a comprehensive and exhaustive manner so that staff become familiar with all aspects of the recovery process. A separate communication should be sent to the managers of the business units advising them of the proposed training schedule to be attended by their staff. however. training incurs additional costs and these should be approved by the appropriate authority within the organization. For larger organizations it may be practical to carry out the training in a classroom environment. it is necessary to advise them about the training programmes they are scheduled to attend. for smaller organizations the training may be better handled in a workshop style. an estimate of resources and an estimate of the completion date. For example it may be necessary to carry out some process manually if the IT system is down for any length of time. The communication should provide for feedback from the staff member where the training dates given are inconvenient. Prepare Budget for Training Phase Each phase of the BCP process which incurs a cost requires that a budget be prepared and approved. Consideration should also be given to the development of a comprehensive corporate awareness program for communicating the procedures for the business recovery process. it has to be recognized that. however well justified. Prepare Training Schedule Once it has been agreed who requires training and the training materials have been prepared a detailed training schedule should be drawn up. The training will cover all aspects of the Business Recovery activities section of the BCP including IT systems recovery". Training Needs Assessment The plan must specify which person or group of persons requires which type of training. This can be a time consuming task and unless priorities are given to critical training programmes. These manual procedures must be fully understood by the persons who are required to carry them out. It is necessary for all new or revised processes to be explained carefully to the staff.75 - . the training costs will vary greatly. Communication to Staff Once the training is arranged to be delivered to the employees. Training Materials Development Schedule Once the training needs have been identified it is necessary to specify and develop suitable training materials. However. Depending upon the cross charging system employed by the organization. This section of the BCP contains a draft communication to be sent to each member of staff to advise them about their training schedule.

Completion of feedback forms should be mandatory for all persons participating in the training process. Assess Feedback The completed questionnaires from the trainees plus the feedback from the trainers should be assessed. Keeping the Plan Up-to-date Changes to most organizations occur all the time. This information will be gathered from the trainers and also the trainees through the completion of feedback questionnaires. Performance Testing Process & Methodology Proprietary & Confidential . or the process. This chapter deals with updating the plan and the managed process which should be applied to this updating activity. This feedback will enable weaknesses within the Business Recovery Process. This is necessary due to the level of complexity contained within the BCP. This section of the BCP will contain a Change Request Form / Change Order to be used for all such changes to the BCP. Feedback Questionnaires Assess Feedback Feedback Questionnaires It is vital to receive feedback from the persons managing and participating in each of the training programmes. have significantly increased the level of dependency upon the availability of systems and information for the business to function effectively. This section of the BCP should contain a template for a Feedback Questionnaire for the training phase. The key issues raised by the trainees should be noted and consideration given to whether the findings are critical to the process or not. This section of the BCP will contain a format for assessing the training feedback. Assessing the Training The individual BCP training programmes and the overall BCP training process should be assessed to ensure its effectiveness and applicability.This section of the BCP will contain a list of the training phase activities and a cost for each. It should be noted whenever part of the costs is already incorporated with the organization’s overall budgeting process. It is necessary for the BCP to keep pace with these changes in order for it to be of use in the event of a disruptive emergency. This will involve the use of formalized change control procedures under the control of the BCP Team Leader.76 - . The forms should be completed either during the training (to record a specific issue) or as soon after finishing as practical. A Change request Form / Change Order form is to be prepared and approved in respect of each proposed change to the BCP. or the training. If there are a significant number of negative issues raised then consideration should be given to possible re-training once the training materials. to be identified and eliminated. Whenever changes are made to the BCP they are to be fully tested and appropriate amendments should be made to the training materials. Products and services change and also their method of delivery. Identified weaknesses should be notified to the BCP Team Leader and the process strengthened accordingly. Change Controls for Updating the Plan It is recommended that formal change controls are implemented to cover any changes required to the BCP. have been improved. These changes are likely to continue and probably the only certainty is that the pace of change will continue to increase. This will enable observations and comments to be recorded whilst the event is still fresh in the persons mind. Maintaining the BCP It is necessary for the BCP updating process to be properly structured and controlled. and particularly within the last five. The increase in technological based processes over the past ten years.

as the data is restored. Program faults can introduce inconsistency or corruption into a database. The BCP Testing Co-ordinator will then be responsible for notifying all affected units and for arranging for any further testing activities. or of a failure to recognize all the data that is influential on the system. Similarly. Most projects experience these problems at some stage . Furthermore. the cost of test maintenance is correspondingly increased. The BCP Team Leader will notify the BCP Training Co-ordinator of all approved changes to the BCP in order that the training materials can be updated. It is important that the relevant BCP coordinator and the Business Recovery Team are kept fully informed regarding any approved changes to the plan.77 - .recognizing them early can allow their effects to be mitigated. Degradation of test data over time. Advise Person Responsible for BCP Training A member of the BCP Team will be given responsibility for co-ordinating all training activities (BCP Training Co-ordinator). This section of the BCP contains a draft communication from the BCP Co-ordinator to affected business units and contains information about the changes which require testing or re-testing. some tests may be excluded from a Performance Testing Process & Methodology Proprietary & Confidential . Reduced flexibility in test execution If datasets are large or hard to set up. Increased test maintenance cost If each test has its own data. Restoring the data to a clean set gets rid of the symptom.Responsibilities for Maintenance of Each Part of the Plan Each part of the plan will be allocated to a member of the BCP Team or a Senior Manager with the organization who will be charged with responsibility for updating and maintaining the plan. An assessment should be made on whether the change necessitates any retraining activities. The following list details the most common problems familiar to the author. If not spotted at the time of generation. the BCP Testing Co-ordinator will be notified. the cost increases further. HRM Department will be responsible to ensure that all emergency contact numbers for staff are kept up to date. Advise Person Responsible for BCP Training A member of the BCP Team will be given responsibility for co-ordinating all training activities (BCP Training Co-ordinator). The BCP Team Leader will remain in overall control of the BCP but business unit heads will need to keep their own sections of the BCP up to date at all times. Running the same test twice produces inconsistent results. Unreliable test results. evidence of the fault is lost. they can cause hard-to-diagnose failures that may be apparently unrelated to the original fault. The BCP Team Leader will notify the BCP Training Co-ordinator of all approved changes to the BCP in order that the training materials can be updated. If that data is itself hard to understand or manipulate. Problems which can be caused by Poor Test Data Most testers are familiar with the problems that can be caused by poor data. Test All Changes to Plan The BCP Team will nominate one or more persons who will be responsible for coordinating all the testing processes and for ensuring that all changes to the plan are properly tested. but the original fault is undiagnosed and can carry on into live operation and perhaps future releases. An assessment should be made on whether the change necessitates any retraining activities. unrecognized database corruption. Whenever changes are made or proposed to the BCP. 
test run.

Degradation of test data over time
This can be a symptom of an uncontrolled environment, or of a failure to recognize all the data that is influential on the system, particularly in configuration data.

Business data not representatively tested
Test requirements, and the tests themselves, often don't reflect the way the system will be used in practice. While this may arguably lead to broad testing for a variety of purposes, it can be hard for the business or the end users to feel confidence in the test effort if they feel distanced from it.

Requirements problems can be hidden in inadequate data
It is important to consider the inputs and outputs of a process for requirements modelling. Inadequate data can lead to ambiguous or incomplete requirements.

Confusion between developers, testers and business
Each of these groups has different data requirements. A failure to understand each other's data can lead to ongoing confusion.

Inability to spot data corruption caused by bugs
A few well-known datasets can be more easily checked than a large number of complex datasets, and may lend themselves to automated testing / sanity checks.

Unwieldy volumes of data
Small datasets can be manipulated more easily than large datasets, and a few datasets are easier to manage than many datasets.

Obscure results and bug reports
Most reports make reference to the input data and the actual and expected results. Poor data can make these reports hard to understand. Without clearly comprehensible data, testers stand a greater chance of missing important diagnostic features of a failure, or indeed of missing the failure entirely. A readily understandable dataset can allow straightforward diagnosis; a poorly constructed, complex dataset will positively hinder it.

Simpler to make test mistakes
Everybody makes mistakes. Confusing or over-large datasets can make data selection mistakes more common.

Poor database/environment integrity
If a large number of testers, or tests, share the same dataset, they can influence and corrupt each other's results as they change the data in the system. This can not only cause false results, but can lead to database integrity problems and data corruption, and can make portions of the application untestable for many testers simultaneously.

Larger proportion of problems can be traced to poor data
A proportion of all failures logged will be found, after further analysis, not to be faults at all. Data can play a significant role in these failures, and poor data will cause more of them.

Less time spent hunting bugs
The more time spent doing unproductive testing or ineffective test maintenance, the less time is spent testing. If data is hard to construct, it may not be time-effective to construct further data to support investigatory tests.

12.2 Classification of Test Data Types
In the process of testing a system, many references are made to "The Data" or to "Data Problems".

Although it is perhaps simpler to discuss data in these terms, it is useful to be able to classify the data according to the way it is used. The following broad categories allow data to be handled and discussed more easily.

Setup data
Setup data tells the system about the business rules. It might include a cross-reference between country and delivery cost or method, or methods of debt collection from different kinds of customers. Typically, setup data causes different functionality to apply to otherwise similar data. With an effective approach to setup data, a business can offer new intangible products without developing new functionality - as can be seen in the mobile phone industry, where new billing products are supported, and indeed created, by additions to the setup data.

Environmental data
Environmental data tells the system about its technical environment. It includes communications addresses, directory trees and paths, and environmental variables. The current date and time can be seen as environmental data.

Input data
Input data is the information input by day-to-day system functions. Accounts, orders, products and documents can all be input data. For the purposes of testing, it is useful to split the categorization once more:
FIXED INPUT DATA is available before the start of the test, and can be seen as part of the test conditions.
CONSUMABLE INPUT DATA forms the test input.
It can also be helpful to qualify data after the system has started to use it.

Transitional data
Transitional data is data that exists only within the program, during the processing of input data. Typically held in internal system variables, it is temporary and is lost at the end of processing. Transitional data is not seen outside the system (arguably, test handles and instrumentation make it output data), but its state can be inferred from actions that the system has taken.

Output data
Output data is all the data that a system outputs as a result of processing input data and events. It generally has a correspondence with the input data (cf. Jackson's Structured Programming methodology), and includes not only files, transmissions, reports and database updates, but can also include test measurements. A subset of the output data is generally compared with the expected results at the end of test execution.

12.3 Organizing the Data
A key part of any approach to data is the way the data is organized: the way it is chosen and described, influenced by the uses that are planned for it. A good approach increases data reliability, reduces data maintenance time and can help improve the test process. To sum up, good data assists testing, rather than hinders it.

Permutations
Most testers are familiar with the concept of permutation: generating tests so that all possible permutations of inputs are tested. Most are also familiar with the ways in which this generally vast set can be cut down. Pairwise, or combinatorial, testing addresses this problem by generating a set of tests that allows all possible pairs of combinations to be tested; typically, for non-trivial sets, this produces a far smaller set of tests than the brute-force approach for all permutations. The same technique can be applied to test data: the test data can contain all possible pairs of permutations in a far smaller set than that which contains all possible permutations. It is most effective when the following conditions are satisfied (typically, these criteria apply to many traditional database-based systems):
• Fixed input data consists of many rows
• Fields are independent
• You want to do many tests without loading / you do not load fixed input data for each test
This method is most appropriate when used, as above, on fixed input data. Finally, this method of working with fixed input data can help greatly in testing the setup data.

Permutation helps because:
• Permutation is familiar from test planning
• It allows complete pairwise coverage in a small, easy-to-handle dataset, which also allows a wide range of tests
• It achieves good test coverage without having to construct massive datasets
• Investigative testing can be performed without having to set up more data; the dataset is comprehensive enough to allow a great many new, ad-hoc or diagnostic tests
• It reduces the impact of functional/database changes
• It can be used to test other data - particularly setup data
This small, well-chosen and easy to manipulate dataset is capable of supporting many tests, and an elegantly-chosen dataset can often allow new tests without the overhead of new data.
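As an invented illustration of the pairwise idea: for three fields of three values each, the nine rows below cover every possible pair of field values, where exhaustive permutation would need 27 rows. The program verifies the claim.

#include <stdio.h>

/* Nine hand-picked test rows (an orthogonal array) for three
 * hypothetical fields, each taking the values 0, 1 or 2.
 * The full brute-force set would be 3*3*3 = 27 rows. */
static const int tests[9][3] = {
    {0,0,0}, {0,1,1}, {0,2,2},
    {1,0,1}, {1,1,2}, {1,2,0},
    {2,0,2}, {2,1,0}, {2,2,1},
};

int main(void)
{
    int f1, f2, v1, v2, t, missing = 0;
    /* Verify that every pair of values, for every pair of fields,
     * appears in at least one of the nine rows. */
    for (f1 = 0; f1 < 3; f1++)
        for (f2 = f1 + 1; f2 < 3; f2++)
            for (v1 = 0; v1 < 3; v1++)
                for (v2 = 0; v2 < 3; v2++) {
                    int found = 0;
                    for (t = 0; t < 9; t++)
                        if (tests[t][f1] == v1 && tests[t][f2] == v2)
                            found = 1;
                    if (!found)
                        missing++;
                }
    printf("uncovered pairs: %d (9 rows vs 27 exhaustive)\n", missing);
    return 0;
}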

Partitioning
Partitions allow data access to be controlled, reducing uncontrolled changes in the data. Data can be safely and effectively partitioned by machine / database / application instance, although this partitioning can introduce configuration management problems in software version, machine setup, environmental data and data load/reload. Partitions can be used independently; data use in one area will have no effect on the results of tests in another, so allowing different kinds of data use. A useful and basic way to start with partitions is to set up not a single environment for each test or tester, but three areas shared by many users. These three have the following characteristics:

Safe area
Used for enquiry tests, usability tests etc. No test changes the data, so the area can be trusted. Database changes will affect it. Many testers can use it simultaneously.

Change area
Used for tests which update/change data. Used by one test/tester at a time. Data must be reset or reloaded after testing.

Scratch area
Used for investigative update tests and those which have unusual requirements. Existing data cannot be trusted. Used at tester's own risk!

Controlling data, and the access to data, in a system can be fraught. Testing rarely has the luxury of completely separate environments for each test and each tester. Many different stakeholders have different requirements of the data, but a common requirement is that of exclusive use. While the impact of this requirement should not be underestimated, a number of stakeholders may be able to work with the same environmental data and, to a lesser extent, setup data - their work may not need to change the environmental or setup data. Although testers are able to interfere with each other's tests, the team can be educated to avoid each other's work, allowing the use of 'soft' partitions. 'Soft' partitions allow the data to be split up conceptually, rather than physically. If, for instance, tester 1's tests use only customers with Russian nationality and tester 2's tests only those with French, the two sets of work can operate independently in the same dataset. A safe area could consist of London addresses, the change area Manchester addresses, and the scratch area Bristol addresses. Typically, values in free-text fields are used for soft partitioning.

Data partitions help because they:
 Allow controlled and reliable data, reducing data corruption / change problems
 Can reduce the need for exclusive access to environments/machines

Clarity
Permutation techniques may make data easier to grasp by making the datasets small and commonly used, but we can make our data clearer still by describing each row in its own free-text fields. Testers often talk about items of data, referring to them by anthropomorphic personification - that is to say, they give them names. This allows shorthand, but also acts as jargon, excluding those who are not in the know. Data is often used to communicate and illustrate problems to coders and to the business; however, there is generally no mandate for outside groups to understand the format or requirements of test data. Giving some meaning to the data that can be referred to directly can help with improving mutual understanding.

Setting this data, early on in testing, to have some meaningful value can be very useful. Use of free-text fields with some correspondence to the internals of the record allows output to be checked more easily, allowing testers to make a simple comparison between the free text (which is generally displayed on output) and actions based on fields which tend not to be directly displayed. The test strategy can take advantage of this by disciplined use of text / value fields, allowing testers to sense-check input and output data, and choose appropriate input data for investigative tests. Reports, data extracts and sanity checks can also make use of these: sorting or selecting on a free-text field that should have some correspondence with a functional field can help spot problems or eliminate unaffected data. Data can also be well described in test scripts.

Clarity helps because it:
 Improves communication within and outside the team
 Reduces test errors caused by using the wrong data
 Allows another way of doing sanity checks for corrupted or inconsistent data
 Helps when checking data after input
 Helps in selecting data for investigative tests
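A minimal sketch of soft partitioning follows; the records and the note convention are invented for illustration. The free-text field both names the row for humans and carries the partition marker that each tester filters on:

    # Free-text notes carry a human-readable name plus a soft-partition tag.
    customers = [
        {"id": 101, "nationality": "RU", "note": "safe: Ivan - enquiry only"},
        {"id": 102, "nationality": "FR", "note": "change: Francoise - updates"},
        {"id": 103, "nationality": "RU", "note": "scratch: do not trust"},
    ]

    def partition(rows, tag):
        # Select only rows declared as belonging to the given soft partition.
        return [r for r in rows if r["note"].startswith(tag + ":")]

    for c in partition(customers, "change"):
        print(c["id"], c["note"])    # only the update-test rows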

12.4 Data Load and Data Maintenance

An important consideration in preparing data for functional testing is the ways in which the data can be loaded into the system, and the possibility and ease of maintenance.

Loading the data
Data can be loaded into a test system in three general ways.

Not loaded at all
Some tests simply take whatever is in the system and try to test with it. This can be symptomatic of an uncontrolled approach to data, which is unlikely to gain the advantages of good data listed above, and is not often desirable. However, it can be appropriate where a dataset is known and consistent, where the data has been set up by a prior round of testing, or in environments where data cannot be reloaded - particularly for minor system upgrades.

Using the system you're trying to test
The data can be manually entered, or data entry can be automated by using a capture/replay tool. This method uses the system's own validation and insertion methods, so data integrity can be ensured, and internally assigned keys are likely to be effective and consistent if the system is working well. It can be very slow for large datasets, and can both be hampered by faults in the system and help pinpoint them.

Using a data load tool
Data load tools directly manipulate the system's underlying data structures. As they do not use the system's own validation, they can be the only way to get broken data into the system in a consistent fashion, and they can provide a convenient workaround to known faults in the system's data load routines. As they do not use the system to load the data, they may come up against problems when generating internal keys, and can have problems with data integrity and parent/child relationships.

Data loaded can have a range of origins, such as the live system, or it can be constructed and held in flat files. In some cases, all new data is created for testing; this data may be complete and well specified, but can be hard to generate. A common compromise is to use old data from an existing system, selected for testing, filtered for relevance and duplicates, migrated to the target data format, but stripped of personal details for privacy reasons. In some cases, the complete set of live data is loaded into the system. While this last method may seem complete, it has disadvantages in that the data may not fully support testing, and the large volume of data may make test results hard to interpret.

Environmental data tends to be manually loaded, either at installation or by manipulating environmental or configuration scripts. Small volumes of setup data often have an associated system maintenance function and can be input using the system, while large volumes of setup data can often be generated from existing datasets and loaded using a data load tool. Setup data may, however, be input in an ad-hoc way.
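A minimal sketch of the data load tool route follows; the table and file are invented, and SQLite stands in for the system's underlying database. Rows go straight into the data structures, bypassing the application's own validation - which is exactly why this route can insert broken data, for good or ill:

    import csv, io, sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, country TEXT)")

    # A flat file of prepared test data (held inline here for the example).
    flat_file = io.StringIO("id,name,country\n101,Ivan,RU\n102,Francoise,FR\n")
    rows = list(csv.DictReader(flat_file))

    # Direct insert: no application validation, keys taken as supplied.
    conn.executemany("INSERT INTO customer VALUES (:id, :name, :country)", rows)
    conn.commit()
    print(conn.execute("SELECT COUNT(*) FROM customer").fetchone()[0], "rows loaded")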

Fixed input data may be generated or migrated, and is loaded using any and all of the methods above; consumable input data is typically listed in test scripts or generated as an input to automation tools. Each method is appropriate in different circumstances, and the wide variety of possible methods will not be discussed further here. When data is loaded, it can append itself to existing data, overwrite existing data, or delete existing data first; due consideration should be given to the consequences of each.

12.5 Testing the Data

A theme brought out at the start of this paper was 'A System is Programmed by its Data'. In order to test the system, one must also test the data it is configured with: the environmental and setup data.

Environmental data is necessarily different between the test and live environments. Although testing can verify that the environmental variables are being read and used correctly, there is little point in testing their values on a system other than the target system. Environmental data is often checked manually on the live system during implementation and rollout.

Effective testing of setup data is a necessary part of system testing. Setup data can change often, as the business environment changes - particularly if there is a long period between requirements gathering and live rollout. Testing done on the setup data needs to cover two questions:
 Does the planned/current setup data induce the functionality that the business requires?
 Will changes made to the setup data have the desired effect?

Testing for these two questions only becomes possible when that data is controlled:
 The setup data should be organized to allow a good variety of scenarios to be considered
 The setup data needs to be able to be loaded and maintained easily and repeatably
 The business needs to become involved in the data so that their setup for live can be properly tested

When testing the setup data, it is important to have a well-known set of fixed input data and consumable input data throughout testing. This allows the effects of changes made to the setup data to be assessed repeatably, and allows results to be compared.

The advantages of testing the setup data include:
 Overall testing will be improved if the quality of the setup data improves
 Problems due to faults in the live setup data will be reduced
 The business can re-configure the software for new business needs with increased confidence
 Data-related failures in the live system can be assessed in the light of good data testing
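One way to make the second question testable is to diff the setup data before and after a change, so that only intended differences reach the live configuration. A minimal sketch, with the rule table and values invented:

    # Setup data before and after a proposed business change.
    before = {"UK": ("courier", 5.00), "FR": ("post", 8.50)}
    after  = {"UK": ("courier", 5.00), "FR": ("post", 9.00), "DE": ("post", 7.00)}

    # Report every key whose configured behaviour would change.
    changed = {k: (before.get(k), after.get(k))
               for k in before.keys() | after.keys()
               if before.get(k) != after.get(k)}
    print(changed)   # the FR price rise and the new DE product; nothing else

Run against a well-known fixed input set, the same kind of comparison shows whether the change also had the desired functional effect.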

12.6 Conclusion

Data can be influential on the quality of testing, and aspects of all the elements above come into play throughout testing. Common data problems can be avoided or reduced with preparation and automation. Well-planned data can allow flexibility and help reduce the cost of test maintenance, and good data can be used as a tool to enable and improve communication throughout the project. The following points summarize the actions that can influence the quality of the data and the effectiveness of its usage:
 Plan the data for maintenance and flexibility
 Know your data, and make its structure and content transparent
 Use the data to improve understanding throughout testing and the business
 Test setup data as you would test functionality

13 Test Logs

Introduction
A Test Problem is a condition that exists within the software system that needs to be addressed. Carefully and completely documenting a test problem is the first step in correcting the problem. The following four attributes should be developed for all test problems:

Statement of condition – Tells what is.
Criteria – Tells what should be.
Effect – Tells why the difference between what is and what should be is significant.
Cause – Tells the reasons for the deviation. Identification of the cause is necessary as a basis for corrective action.

13.1 Factors defining the Test Log Generation

Document Deviation: Problem statements begin to emerge by a process of comparison. Essentially the user compares "what is" with "what should be". The "what is" can be called the statement of condition; the "what should be" shall be called the criteria. When a deviation is identified between what is found to actually exist and what the user thinks is correct or proper, the first essential step toward the development of a problem statement has occurred. It is difficult to visualize any type of problem that is not in some way characterized by this deviation: the actual deviation will be the difference or gap between "what is" and "what is desired". These two concepts - the statement of condition and the criteria - are the first and most basic attributes of a problem statement, and are the basis for a finding. If a comparison between the two gives little or no practical consequence, no finding exists. A well-developed problem statement will include each of these attributes. When one or more of these attributes is missing, questions almost always arise, such as:
Criteria: Why is the current state inadequate?
Effect: How significant is it?
Cause: What could have caused the problem?

The statement of condition is uncovering and documenting the facts. What is a fact? The statement of condition will of course depend on the nature and extent of the evidence or support that is examined and noted. For those facts making up the statement of condition, the I/S professional will need to ensure that the information is accurate, well supported, and worded as clearly and precisely as possible. The documenting of the deviation is describing the conditions as they currently exist. The statement of condition should document as many of the following attributes as appropriate to the problem.

Activities Involved: The specific business or administered activities that are being performed during Test Log generation are as follows:
Procedures used to perform work – The specific step-by-step activities that are utilized in producing the output from the identified activities.
Inputs – The triggers, events, or documents that cause this activity to be executed.
Outputs/Deliverables – The products that are produced from the activity.

Users/Customers served – The organization, individuals, or class of users/customers serviced by this activity.
Deficiencies noted – The status of the results of executing this activity and any appropriate interpretation of those facts.

The criterion is the user's statement of what is desired. It can be stated in either negative or positive terms; for example, it could indicate the need to reduce complaints or delays, as well as the desired processing turnaround time. When a deviation is identified, the testers should document the statement of condition and the statement of criteria on a work paper describing the problem. For example, the following work paper provides the information for Test Log documentation:

Field: Instructions for Entering Data
Name of Software Tested – Put the name of the S/W or subsystem tested.
Problem Description – Write a brief narrative description of the variance uncovered from expectations.
Statement of Condition – Put the results of the actual processing that occurred here.
Statement of Criteria – Put what the testers believe was the expected result from processing.
Effect of Deviation – If this can be estimated, the testers should indicate what they believe the impact or effect of the problem will be on computer processing.
Cause of Problem – The testers should indicate what they believe is the cause of the problem, if known. If the testers are unable to do this, the work paper will be given to the development team, and they should indicate the cause of the problem.
Location of the Problem – The testers should document where the problem occurred as closely as possible.
Recommended Action – The testers should indicate any recommended action they believe would be helpful to the project team. If the recommendation is not approved, the alternate action should be listed, or the reason for not following the recommended action should be documented.
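A hedged sketch of that work paper as a structured record; the class and sample values are illustrative, not a prescribed format:

    from dataclasses import dataclass, asdict

    @dataclass
    class TestLogEntry:
        software_tested: str
        problem_description: str
        statement_of_condition: str   # what actually happened
        statement_of_criteria: str    # what was expected
        effect_of_deviation: str
        cause_of_problem: str
        location_of_problem: str
        recommended_action: str

    entry = TestLogEntry(
        software_tested="Payroll subsystem",
        problem_description="Net pay wrong for hourly staff",
        statement_of_condition="Overtime paid at 1.0x rate",
        statement_of_criteria="Overtime should be paid at 1.5x rate",
        effect_of_deviation="Underpayment on every overtime run",
        cause_of_problem="Unknown - referred to development",
        location_of_problem="Overtime calculation routine",
        recommended_action="Correct the rate table and re-test",
    )
    print(asdict(entry))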

13.2 Collecting Status Data

Four categories of data will be collected during testing. These are explained in the following paragraphs.

Test Results Data
This data will include:
Test factors – The factors incorporated in the plan, the validation of which becomes the test objective.
Business objectives – The validation that specific business objectives have been met.
Interface objectives – Validation that data/objects can be correctly passed among software components.
Functions/Sub-functions – Identifiable software components normally associated with the requirements of the software.
Units – The smallest identifiable software components.
Platform – The hardware and software environment in which the software system will operate.

Test Transactions, Test Suites, and Test Events
These are the test products produced by the test team to perform testing:
Test transactions/events – The types of tests that will be conducted during the execution of tests, which will be based on the software requirements.
Inspections – A verification of process deliverables against deliverable specifications.
Reviews – Verification that the process deliverables/phases are meeting the user's true needs.

Defect
This category includes a description of the individual defects uncovered during the testing process. This description includes, but is not limited to:
Date the defect was uncovered
Name of the defect
Location of the defect
Severity of the defect
Type of defect
How the defect was uncovered (test data/test script)
The Test Logs should add to this information in the form of where the defect originated, when it was corrected, and when it was entered for retest.

Storing Data Collected during Testing
It is recommended that a database be established in which to store the results collected during testing. It is also suggested that the database be put online through client/server systems, so that anyone with a vested interest in the status of the project can readily access it for status updates.

Developing Test Status Reports
The test process should produce a continuous series of reports that describe the status of testing. The test reports are for the use of testers, test managers, and the software development team; the frequency of the test reports should be based on the discretion of the team and the extensiveness of the test process. Developing these reports involves:
Report software status
Establish a measurement team
Inventory existing project measures
Develop a consistent set of project metrics
Define process requirements
Develop and implement the process
Monitor the process

Use of Function/Test Matrix
The function/test matrix shows which tests must be performed in order to validate the functions; it is also used to determine the status of testing at any point in time. The most common test report is a simple spreadsheet, and many organizations use a spreadsheet package to maintain test results. Each intersection can be coded with a number or symbol to indicate the following:
1 = Test is needed, but not performed
2 = Test currently being performed
3 = Minor defect noted
4 = Major defect noted
5 = Test complete and function is defect-free for the criteria included in this test

Function Test Matrix: the functions (A-E) form the rows and the tests (1-9) the columns; each cell holds one of the codes above.
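A hedged sketch of such a matrix held as a small Python table rather than a spreadsheet; the functions, tests and cell values are illustrative:

    STATUS = {1: "needed, not performed", 2: "being performed",
              3: "minor defect noted", 4: "major defect noted",
              5: "complete, defect free"}

    # (function, test) -> status code, as in the matrix cells above.
    matrix = {("A", 1): 5, ("A", 2): 3, ("B", 1): 2, ("C", 4): 1}

    for (function, test), code in sorted(matrix.items()):
        print(f"function {function}, test {test}: {STATUS[code]}")

    # Codes 1 and 4 flag the work outstanding before release.
    print("outstanding:", [k for k, v in matrix.items() if v in (1, 4)])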

13.2.1 Methods of Test Reporting

Reporting Tools – Use of word processing, database, defect tracking, and graphic tools to prepare test reports.

GRG (GNU Report Generator) – The GRG program reads record and field information from a dBase3+ file, a delimited ASCII text file or an SQL query to an RDBMS, and produces a report listing. The program was loosely designed to produce TeX/LaTeX formatted output, but plain ASCII text, troff, PostScript, HTML, XML, LaTeX2e, DocBook or any other kind of ASCII-based output format can be produced just as easily. From the LaTeX2e and DocBook output files you can in turn produce PDF, PostScript, HTML, text and more.

DataVision – Some database test tools, like DataVision, are database reporting tools similar to Crystal Reports. Reports can be viewed and printed from the application, or output as HTML, XML, PostScript, or tab- or comma-separated text files.

Some query tools available for Linux-based databases include:
QMySQL
dbMetrix
PgAccess
Cognos Powerhouse
Cognos Powerhouse is not yet available for Linux; Cognos is looking into what interest people have in the product to assess what their strategy should be with respect to the Linux ``market''.

Word Processing – One way of increasing the utility of computers and word processors for the teaching of writing may be to use software that will guide the processes of generating, organizing, composing and revising text. This allows each person to use the normal functions of the computer keyboard that are common to all word processors, email editors, order entry systems, and database management products. A one-page summary report may be printed with either the Report Manager program or from the individual keyboard or keypad software at any time. From the Report Manager, you can quickly scan through any number of these reports and see how each person's history compares. Individual Reports include all of the following information:
Status Report
Word Processing Tests or Keypad Tests

Basic Skills Tests or Data Entry Tests
Progress Graph
Game Scores
Test Report for each test

Test Director:
 Facilitates a consistent and repetitive testing process
 A central repository for all testing assets facilitates the adoption of a more consistent testing process, which can be repeated throughout the application life cycle
 Provides analysis and decision support: graphs and reports help analyze application readiness at any point in the testing process
 Requirements coverage, test execution progress, run schedules and defect statistics can be used for production planning
 Provides anytime, anywhere access to test assets: using Test Director's web interface, testers, developers, business analysts and the client can participate in and contribute to the testing process
 Traceability throughout the testing process: test cases can be mapped to requirements, providing adequate visibility over the test coverage of requirements
 Test Director links requirements to test cases, and test cases to defects
 Manages both manual and automated testing: Test Director can manage both manual and automated tests (WinRunner)
 Scheduling of automated tests can be done effectively using Test Director

Test Report Standards – Defining the components that should be included in a test report.
Statistical Analysis – The ability to draw statistically valid conclusions from quantitative test results.

Testing Data used for metrics
Testers are typically responsible for reporting their test status at regular intervals. The following measurements generated during testing are applicable:
 Total number of tests
 Number of tests executed to date
 Number of tests executed successfully to date

Data concerning software defects includes:
 Total number of defects corrected in each activity
 Total number of defects entered in each activity
 Average duration between defect detection and defect correction
 Average effort to correct a defect
 Total number of defects remaining at delivery
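A hedged sketch of how the defect measurements above can be computed from a simple defect log; the record layout and values are illustrative:

    from datetime import date

    defects = [
        {"found": date(2004, 3, 1), "fixed": date(2004, 3, 4), "effort_hrs": 6},
        {"found": date(2004, 3, 2), "fixed": date(2004, 3, 3), "effort_hrs": 2},
        {"found": date(2004, 3, 5), "fixed": None, "effort_hrs": None},  # open
    ]

    closed = [d for d in defects if d["fixed"]]
    avg_days = sum((d["fixed"] - d["found"]).days for d in closed) / len(closed)
    avg_effort = sum(d["effort_hrs"] for d in closed) / len(closed)
    remaining = sum(1 for d in defects if not d["fixed"])

    print(f"average detection-to-correction: {avg_days:.1f} days")
    print(f"average effort to correct: {avg_effort:.1f} hours")
    print(f"defects remaining at delivery: {remaining}")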

Software performance data is usually generated during system testing, once the software has been integrated and functional testing is complete. It includes measurements such as:
Average CPU utilization
Average memory utilization
Measured I/O transaction rate

Test Reporting
A final test report should be prepared at the conclusion of each test activity. This includes the following:
 Individual Project Test Report
 Integration Test Report
 System Test Report
 Acceptance Test Report
These test reports are designed to document the results of testing as defined in the test plan. The test report can be a combination of electronic data and hard copy; for example, if the function matrix is maintained electronically, there is no reason to print it, as the paper report will summarize the data, draw appropriate conclusions and present recommendations.

Purpose of a Test Report
The test report has one immediate and three long-term purposes. The immediate purpose is to provide information to the customers of the software system so that they can determine whether the system is ready for production and, if so, to assess the potential consequences and initiate appropriate actions to minimize those consequences. The first of the three long-term uses is for the project to trace problems in the event the application malfunctions in production; knowing which functions have been correctly tested and which ones still contain defects can assist in taking corrective action. The second long-term purpose is to use the data to analyze the rework process for making changes to prevent defects from occurring in the future; defect-prone components identify tasks/steps that, if improved, could eliminate or minimize the occurrence of high-frequency defects. The third long-term purpose is to show what was accomplished, in case of a Y2K lawsuit.

 Individual Project Test Report
These reports focus on individual projects (software systems). When different testers test individual projects, they should prepare a report on their results. The Individual Project test report comprises:
1. Scope of Test – This section indicates which functions were and were not tested.
2. Test Results – This section indicates the results of testing, including any variance between what is and what should be.
3. What works/What does not work – This section defines the functions and the interfaces that work and do not work.
4. Recommendations – This section recommends actions that should be taken to fix functions/interfaces that do not work, and to make additional improvements.

 Integration Test Report
Integration testing tests the interfaces between individual projects. A good test plan will identify the interfaces and institute test conditions that will validate the interfaces. The integration test report follows the same format as the Individual Project test report, except that the conditions tested are interfaces.

 System Test Reports

A System Test Plan standard identifies the objectives of testing, what is to be tested, how it is to be tested, and when the tests should occur. The System Test Report should present the results of executing this test plan. If these details are maintained electronically, they need only be referenced, not included in the report.

 Acceptance Test Report
There are two primary objectives of the Acceptance Test Report. The first is to ensure that the system as implemented meets the real operating needs of the user/customer; if the defined requirements are those true needs, testing should have accomplished this objective. The second objective is to ensure that the software system can operate in the real-world user environment, which includes people skills and attitudes, time pressures, changing business conditions, and so forth. The Acceptance Test Report should encompass these criteria for user acceptance.

13.2.2 Conclusion
The Test Logs obtained from the execution of the tests, and finally the test reports, should be designed to accomplish the following objectives:
 Provide information to the customer on whether the system should be placed into production and, if so, the potential consequences and appropriate actions to minimize these consequences.
 Serve two long-term objectives: one for the project and the other for the information technology function.
 Allow the project to trace problems in the event the application malfunctions in production. Knowing which functions have been correctly tested and which ones still contain defects can assist in taking corrective actions.
 Allow the data to be used to analyze the developmental process to make changes to prevent defects from occurring in the future. These defect-prone components identify tasks/steps that, if improved, could eliminate or minimize the occurrence of high-frequency defects in future.

14 Test Report

A Test Report is a document that is prepared once the testing of a software product is complete and the delivery is to be made to the customer. This document would contain a summary of the entire project and would have to be presented in a way that any person who has not worked on the project would also get a good overview of the testing effort.

Contents of a Test Report
The contents of a test report are as follows:
Executive Summary
Overview
  Application Overview
  Testing Scope
Test Details
  Test Approach
  Types of testing conducted
  Test Environment
  Tools Used
Metrics
Test Results
Test Deliverables
Recommendations

These sections are explained as follows:

14.1 Executive Summary
This section would comprise general information regarding the project, the client, the application, the tools and the people involved, presented in such a way that it can be taken as a summary of the Test Report itself (i.e. all the topics mentioned here would be elaborated in the various sections of the report).

1. Overview
This comprises two sections – Application Overview and Testing Scope.
Application Overview – This would include detailed information on the application under test, the end users and a brief outline of the functionality as well.
Testing Scope – This would clearly outline the areas of the application that would / would not be tested by the QA team. This is done so that there would not be any misunderstandings between customer and QA as regards what needs to be tested and what does not need to be tested. This section would also contain information on Operating System / Browser combinations if Compatibility testing is included in the testing effort.

2. Test Details
This section would contain the Test Approach, Types of Testing conducted, Test Environment and Tools Used.
Test Approach – This would discuss the strategy followed for executing the project. This could include information on how coordination was achieved between Onsite and Offshore teams, how information and daily / weekly deliverables were delivered to the client etc.
Types of testing conducted – This section would mention any specific types of testing performed (i.e.) Functional, Compatibility, Performance, Usability etc., along with related specifications.
Test Environment – This would contain information on the Hardware and Software requirements for the project (i.e.) server configuration, client machine configuration, specific software installations required etc.
Tools used – This section would include information on any tools that were used for testing the project. They could be functional or performance testing automation tools, defect management tools, project tracking tools or any other tools which made the testing work easier.

3. Metrics
This section would include details on the total number of test cases executed in the course of the project, the number of defects found etc. In case many defects have been logged for the project, graphs can be generated accordingly and depicted in this section. The graphs can be for defects per build, defects based on severity, and defects based on status (i.e. how many were fixed and how many rejected etc.). Calculations like defects found per test case, or the number of test cases executed per day per person, would also be entered in this section. This can be used in calculating the efficiency of the testing effort.

4. Test Results
This section is similar to the Metrics section, but is more for showcasing the salient features of the testing effort. This could include any innovative methods used for automation or for reducing repetitive workload on the testers.

5. Test Deliverables
This section would include links to the various documents prepared in the course of the testing project (i.e.) Test Plan, Test Procedures, Test Logs, Release Report etc.
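As a small worked example of the efficiency calculations mentioned in the Metrics section (all figures invented):

    test_cases_executed = 480
    defects_found = 96
    testers, working_days = 4, 20

    print(f"defects per test case: {defects_found / test_cases_executed:.2f}")
    print(f"test cases per day per person: "
          f"{test_cases_executed / (testers * working_days):.1f}")

Here 96 defects across 480 test cases gives 0.20 defects per test case, and 480 cases over 4 testers working 20 days gives 6.0 cases per day per person.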

6. Recommendations
This section would include any recommendations from the QA team to the client on the product tested. It could also mention the list of known defects which have been logged by QA but not yet fixed by the development team, so that they can be taken care of in the next release of the application.

15 Defect Management

15.1 Defect
A mismatch between the application and its specification is a defect. A software error is present when the program does not do what its end user expects it to do. Quality is the indication of how well the system meets the requirements, so in this context defects are identified as any failure to meet the system requirements.

15.2 Defect Fundamentals
A defect is a product anomaly or flaw. Defects include such things as omissions and imperfections found during the testing phases. Symptoms (flaws) of faults contained in software that is sufficiently mature for production will be considered as defects. Deviations from expectation that are to be tracked and resolved are also termed defects.

An evaluation of defects discovered during testing provides the best indication of software quality. Defect evaluation is based on methods that range from a simple defect count to rigorous statistical modeling. Rigorous evaluation uses assumptions about the arrival or discovery rates of defects during the testing process; the actual data about defect rates are then fit to the model. Such an evaluation estimates the current system reliability and predicts how the reliability will grow if testing and defect removal continue. This evaluation is described as system reliability growth modelling.
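As a hedged illustration of such modelling (not prescribed by this text), the sketch below fits weekly cumulative defect counts to the exponential (Goel-Okumoto) growth curve, assuming scipy is available; the data are invented:

    import numpy as np
    from scipy.optimize import curve_fit

    weeks = np.arange(1, 9)                              # end of each test week
    found = np.array([12, 21, 28, 33, 37, 40, 42, 43])   # cumulative defects

    def mu(t, a, b):
        # Goel-Okumoto mean-value function: a = expected total defects,
        # b = per-defect detection rate.
        return a * (1 - np.exp(-b * t))

    (a, b), _ = curve_fit(mu, weeks, found, p0=(50, 0.2))
    print(f"expected total defects ~{a:.0f}; still latent ~{a - found[-1]:.0f}")

The flattening of the weekly counts is what lets the model estimate how many defects remain undiscovered if testing continues.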

15.2.1 Defect Life Cycle

15.3 Defect Tracking
After a defect has been found, it must be reported to development so that it can be fixed.

 The initial state of a defect will be New.
 The Project Lead of the development team will review the defect and set it to one of the following statuses:
Open – Accepts the bug and assigns it to a developer.
Duplicate – The bug has already been reported.
Invalid Bug – The reported bug is not a valid one as per the requirements/design.
As Designed – This is intended functionality as per the requirements/design.
Deferred – This will be an enhancement.

 Document – Once a defect is set to any of the above statuses apart from Open, and the testing team does not agree with the development team, it is set to Document status.
 Once the development team has started working on the defect, the status is set to WIP (Work In Progress); or, if the development team is waiting for a go-ahead or some technical feedback, they will set it to Dev Waiting.
 After the development team has fixed the defect, the status is set to FIXED, which means the defect is ready to re-test.
 On re-testing the defect, if the fixed defect satisfies the requirements/passes the test case, it is set to Closed. If the defect still exists, the status is set to REOPENED, which will follow the same cycle as an open defect.

15.4 Defect Classification
The severity of bugs will be classified as follows:

Critical – The problem prevents further processing and testing, or causes data loss, making the system inoperable. The Development Team must be informed immediately, and they need to take corrective action immediately.
High – The problem affects selected processing to a significant degree, or could cause a user to make an incorrect decision or entry. The Development Team must be informed that day, and they need to take corrective action within 0 - 24 hours.
Medium – The problem affects selected processing, but has a work-around that allows continued processing and testing. No data loss is suffered. The Development Team must be informed within 24 hours, and they need to take corrective action within 24 - 48 hours.
Low – The problem is cosmetic and/or does not affect further processing and testing. These may be cosmetic problems that hamper usability or divulge client-specific information. The Development Team must be informed within 48 hours, and they need to take corrective action within 48 - 96 hours.
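Pulling the life-cycle states above together, here is a hedged sketch of the status flow as an allowed-transitions table; the table is an illustration inferred from the text, not a prescribed workflow engine:

    TRANSITIONS = {
        "New":         {"Open", "Duplicate", "Invalid Bug", "As Designed", "Deferred"},
        "Duplicate":   {"Document"},      # testing team may dispute these...
        "Invalid Bug": {"Document"},
        "As Designed": {"Document"},
        "Deferred":    {"Document"},
        "Open":        {"WIP", "Dev Waiting"},
        "Dev Waiting": {"WIP"},
        "WIP":         {"Fixed"},
        "Fixed":       {"Closed", "Reopened"},   # outcome of the re-test
        "Reopened":    {"WIP", "Dev Waiting"},   # same cycle as an open defect
    }

    def move(status, new_status):
        # Refuse any transition the life cycle does not allow.
        if new_status not in TRANSITIONS.get(status, set()):
            raise ValueError(f"illegal transition: {status} -> {new_status}")
        return new_status

    s = "New"
    for step in ("Open", "WIP", "Fixed", "Closed"):
        s = move(s, step)
    print("final status:", s)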

15.5 Defect Reporting Guidelines

The key to making a good report is providing the development staff with as much information as necessary to reproduce the bug. This can be broken down into five points:
1) Give a brief description of the problem
2) List the steps that are needed to reproduce the bug or problem
3) Supply all relevant information such as version, project and data used
4) Supply a copy of all relevant reports and data, including copies of the expected results
5) Summarize what you think the problem is, how to get it, and what needs to be changed

When you are reporting a defect, the more information you supply, the easier it will be for the developers to determine the problem and fix it. Simple problems can have a simple report, but the more complex the problem, the more information the developer is going to need. For example, cosmetic errors may only require a brief description of the screen; an error in processing, however, will require a more detailed description, such as:
1) The name of the process and how to get to it
2) Documentation on what was expected (expected results)
3) The source of the expected results, if available (an earlier version of the software and any formulas used)
4) Documentation on what actually happened (perceived results)
5) An explanation of how the results differed
6) Identification of the individual items that are wrong
7) If specific data is involved, a copy of the data both before and after the process
8) Copies of any output

As a rule, the detail of your report will increase based on a) the severity of the bug, b) the level of the processing, and c) the complexity of reproducing the bug.

Anatomy of a bug report

Bug reports need to do more than just describe the bug; they have to give developers something to work with so that they can successfully reproduce the problem. In most cases, the more information - correct information - given, the better. The report should explain exactly how to reproduce the problem, and exactly what the problem is. The basic items in a report are as follows:

Version: This is very important. In most cases the product is not static; developers will have been working on it, and if they've found a bug it may already have been reported or even fixed. In either case, they need to know which version to use when testing out the bug.

Product: If you are developing more than one product, identify the product in question.

Steps: List the steps taken to recreate the bug. They should be the clearest steps to recreating the bug. Include all proper menu names; don't abbreviate and don't assume anything. If you have to enter any data, supply the exact data entered; if there are parameters, list them. After you've finished writing down the steps, follow them - make sure you've included everything you type and do to get to the problem - then go through the process again and see if there are any steps that can be removed.

Data: Unless you are reporting something very simple, such as a cosmetic error on a screen, you should include a dataset that exhibits the error. If you're reporting a processing error, you should include two versions of the dataset: one before the process and one after. If the dataset from before the process is not included, developers will be forced to try to find the bug based on forensic evidence; with the data, they can trace what is happening,

identify it and fix it.

Description: Explain what is wrong. Try to weed out any extraneous information, but detail what is wrong. It is not enough to say that something is wrong; the report must also say what the system should be doing. Include what you expected. If you have a report to compare against, include it and its source information (if it's a printout from a previous version, include the version number and the dataset used). If the process is a report, include a copy of the report with the problem areas highlighted, and include a list of what was expected.

Supporting documentation: If available, supply documentation. The developers need it to reproduce the bug, and testers will need this information for later regression testing and verification. This information should be stored in a centralized location so that developers and testers have access to it.

15.5.1 Summary
A bug report is a case against a product. In order to work, it must supply all the necessary information to not only identify the problem but also what is needed to fix it. It should include information about the product, including the version number, what data was used, and the steps taken to expose the problem. The report should be written in clear, concise steps, so that someone who has never seen the system can follow them and reproduce the problem. The more organized the information provided, the better the report will be. Remember: report one problem at a time - don't combine bugs in one report.
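A hedged sketch of this anatomy as a simple report builder; the fields mirror the text above, while the function and sample content are invented:

    def bug_report(version, product, steps, data, observed, expected, docs):
        # Assemble the report sections in the order described above.
        lines = [
            f"Version:  {version}",
            f"Product:  {product}",
            "Steps:",
            *steps,
            f"Data:     {data}",
            f"Observed: {observed}",
            f"Expected: {expected}",
            f"Supporting documentation: {docs}",
        ]
        return "\n".join(lines)

    print(bug_report(
        version="2.3.1 (build 417)", product="Payroll",
        steps=["  1. Open Payroll > Run", "  2. Select March, press Calculate"],
        data="dataset before/after attached",
        observed="overtime paid at 1.0x", expected="overtime paid at 1.5x",
        docs="report printout with problem rows highlighted",
    ))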

16 Automation

What is Automation?
Automated testing is automating the manual testing process currently in use.

16.1 Why Automate the Testing Process?
Today, rigorous application testing is a critical part of virtually all software development projects. As more organizations develop mission-critical systems to support their business activities, the need is greatly increased for testing methods that support business objectives. It is necessary to ensure that these systems are reliable, built according to specification, and able to support business processes. Many internal and external factors are forcing organizations to ensure a high level of software quality and reliability.

In the past, most software tests were performed using manual methods. This required a large staff of test personnel to perform expensive and time-consuming manual test procedures. Owing to the size and complexity of today's advanced software applications, manual testing is no longer a viable option for most testing situations. Every organization has unique reasons for automating software quality activities, but several reasons are common across industries.

Using Testing Effectively
By definition, testing is a repetitive activity. The very nature of application software development dictates that no matter which methods are employed to carry out testing (manual or automated), they remain repetitious throughout the development lifecycle. Automation of testing processes allows machines to complete the tedious, repetitive work while human personnel perform other tasks. Furthermore, some types of testing, such as load/stress testing, are virtually impossible to perform manually.

Reducing Testing Costs
The cost of performing manual testing is prohibitive when compared to automated methods. The reason is that computers can execute instructions many times faster, and with fewer errors, than individuals. Automation allows the tester to reduce or eliminate the required "think time" or "read time" necessary for the manual interpretation of when or where to click the mouse or press the enter key. An automated test executes the next operation in the test hierarchy at machine speed, allowing tests to be completed many times faster than the fastest individual.

Most importantly, automated tests can be executed as many times as necessary without requiring a user to recreate a test script each time the test is run. Load/stress testing using automated methods requires only a fraction of the computer hardware that would be necessary to complete a manual test. Imagine performing a load test on a typical distributed client/server application on which 50 concurrent users were planned. To do the testing manually, 50 application users employing 50 PCs with associated software, an available network, and a cadre of coordinators to relay instructions to the users would be required. It is easy to see why manual load/stress testing is an expensive and logistical nightmare. With an automated scenario, the entire test operation could be created on a single machine with the ability to run and rerun the test as necessary, at night or on weekends, without having to assemble an army of end users. As another example, imagine the same application used by hundreds or thousands of users; many automated testing tools can replicate the activity of a large number of users (and their associated transactions) using a single computer.

Replicating Testing Across Different Platforms
Automation allows the testing organization to perform consistent and repeatable tests. When applications need to be deployed across different hardware or software platforms, standard or benchmark tests can be created and repeated on target platforms to ensure that new platforms operate consistently.

Repeatability and Control
By using automated techniques, the tester has a very high degree of control over which types of tests are being performed and how the tests will be executed. Using automated tests enforces consistent procedures that allow developers to evaluate the effect of various application modifications, as well as the effect of various user actions. For example, automated tests can be built that extract variable data from external files or applications and then run a test using the data as an input value.

Greater Application Coverage
The productivity gains delivered by automated testing allow and encourage organizations to test more often and more completely. Greater application test coverage also reduces the risk of exposing users to malfunctioning or non-compliant software. In some industries, such as healthcare and pharmaceuticals, organizations are required to comply with strict quality regulations, as well as being required to document their quality assurance efforts for all parts of their systems.

16.2 Automation Life Cycle

Identifying Tests Requiring Automation
Most, but not all, types of tests can be automated. Certain types of tests, like user comprehension tests, tests that run only once, and tests that require constant human intervention, are usually not worth the investment to automate. The following are examples of criteria that can be used to identify tests that are prime candidates for automation.

High Path Frequency - Automated testing can be used to verify the performance of application paths that are used with a high degree of frequency when the software is running in full production. Examples include: creating customer records, invoicing and other high-volume activities where software failures would occur frequently.

Critical Business Processes - In many situations, software applications can literally define or control the core of a company's business. If the application fails, the company can face extreme disruptions in critical operations. Mission-critical processes are prime candidates for automated testing.

Examples include: financial month-end closings, production planning, sales order entry and other core activities. Any application with a high degree of risk associated with a failure is a good candidate for test automation.

Repetitive Testing - If a testing procedure can be reused many times, the greater the benefits are from automation. Automated test modules can be used again and again without having to rebuild the test scripts. This modular approach saves time and money when compared to creating a new end-to-end script for each and every test.

Applications with a Long Life Span - If an application is planned to be in production for a long period of time, it is also a prime candidate for automation.

What to Look For in a Testing Tool
Choosing an automated software testing tool is an important step, and one which often poses enterprise-wide implications. Here are several key issues which should be addressed when selecting an application testing solution.

Test Planning and Management
A robust testing tool should have the capability to manage the testing process, provide organization for testing components, and create meaningful end-user and management reports. A robust tool will allow users to integrate existing test results into an automated test plan. It should also allow users to include non-automated testing procedures within automated test plans and test results. Finally, an automated test should be able to link business requirements to test results, allowing users to evaluate application readiness based upon the application's ability to support the business requirements.

Testing Product Integration
Testing tools should provide tightly integrated modules that support test component reusability. Test components built for performing functional tests should also support other types of testing, including regression and load/stress testing. All products within the testing product environment should be based upon a common, easy-to-understand language. User training and experience gained in performing one testing task should be transferable to other testing tasks. Also, the architecture of the testing tool environment should be open to support interaction with other technologies such as defect or bug tracking packages.

Internet/Intranet Testing
A good tool will have the ability to support testing within the scope of a web browser. The tests created for testing Internet or intranet-based applications should be portable across browsers, and should automatically adjust for different load times and performance levels.

Ease of Use
Testing tools should be engineered to be usable by non-programmers and application end-users. With much of the testing responsibility shifting from the development staff to the departmental level, a testing tool that requires programming skills is unusable by most

organizations. Even if programmers are responsible for testing, the testing tool itself should have a short learning curve. It should also provide test results in an easy-to-understand reporting format.

GUI and Client/Server Testing
A robust testing tool should support testing with a variety of user interfaces and create simple-to-manage, easy-to-modify tests. Test component reusability should be a cornerstone of the product architecture.

Load and Performance Testing
The selected testing solution should allow users to perform meaningful load and performance tests to accurately measure system performance.

16.3 Preparing the Test Environment
Once the test cases have been created, the test environment can be prepared. The test environment is defined as the complete set of steps necessary to execute the test as described in the test plan. It includes the initial set-up and description of the environment, and the procedures needed for installation and restoration of the environment.

Description - Document the technical environment needed to execute the tests.
Installation Procedures - Outline the procedures necessary to install the application software to be tested.
Restoration Procedures - Outline those procedures needed to restore the test environment to its original state. By doing this, you are ready to re-execute tests or prepare for a different set of tests.
Operational Support - Identify any support needed from other parts of your organization.
Test Schedule - Identify the times during which your testing facilities will be used for a given test. Make sure that other groups that might share these resources are informed of this schedule.

Inputs to the Test Environment Preparation Process
Technical Environment Descriptions
Approved Test Plan
Test Execution Schedules
Resource Allocation Schedule
Application Software to be installed

Test Planning
Careful planning is the key to any successful process. To guarantee the best possible result from an automated testing program, those evaluating test automation should consider these fundamental planning steps. The time invested in detailed planning significantly improves the benefits resulting from test automation.

Creating a Test Plan
For the greatest return on automated testing, a testing plan should be created at the same time the software application requirements are defined. This enables the testing team to define the tests, locate and configure test-related hardware and software products, and coordinate the human resources required to complete all testing. This plan is very much a "living document" that should evolve as the application functions become more clearly defined. A good testing plan should be reviewed and approved by the test team, the software development team, all user groups and the organization's management. The following items detail the input and output components of the test planning process.

Evaluating Business Requirements
Begin the automated testing process by defining exactly what tasks your application software should accomplish in terms of the actual business activities of the end user. These business requirements should be defined in such a way as to make it abundantly clear that the software system correctly (or incorrectly) performs the necessary business functions. For example, a business requirement for a payroll application might be to calculate a salary, or to print a salary check. The definition of these tasks defines the high-level, functional requirements of the software system in question.

Inputs to the Test Planning Process
Application Requirements - What is the application intended to do? These should be stated in terms of the business requirements of the end users.
Application Implementation Schedules - When is the scheduled release? When are updates or enhancements planned? Are there any specific events or actions that are dependent upon the application?
Acceptance Criteria for Implementation - What critical actions must the application accomplish before it can be deployed? This information forms the basis for making informed decisions on whether or not the application is ready to deploy.

Test Design and Development
After the test components have been defined, the standardized test cases that will be used to test the application can be created. The type and number of test cases needed will be dictated by the testing plan. A test case identifies the specific input values that will be sent to the application, the procedures for applying those inputs, and the expected application values for the procedure being tested.

A proper test case will include the following key components:
Test Case Name(s) - Each test case must have a unique name, so that the results of these test elements can be traced and analyzed.
Test Case Prerequisites - Identify the set-up or testing criteria that must be established before a test can be successfully executed.
Input Values - This section of the test case identifies the values to be supplied to the application as input, including, if necessary, the action to be completed.
Test Procedures - Identify the application steps necessary to complete the test case.
Expected Results - Document all screen identifier(s) and expected value(s) that must be verified as part of the test. These expected results will be used to measure the acceptance criteria, and therefore the ultimate success of the test.
Test Case Execution Order - Specify any relationships, run orders and dependencies that might exist between test cases.
Test Data Sources - Take note of the sources for extracting test data if it is not included in the test case.

Inputs to the Test Design and Construction Process
Test Case Documentation Standards
Test Case Naming Standards
Approved Test Plan
Business Process Documentation
Business Process Flow
Test Data Sources

Outputs from the Test Design and Construction Process
Revised Test Plan
Test Procedures for each Test Case
Test Case(s) for each application function described in the test plan
Procedures for test set-up, test execution and restoration

Executing the Test
The test is now ready to be run. This step applies the test cases identified by the test plan, documents the results, and validates those results against expected performance. Activities within the test execution are logged and analyzed as follows:
Actual results achieved during test execution are compared to the expected application behavior from the test cases
Test case completion status (Pass/Fail)
Actual results of the behavior of the technical test environment
Deviations taken from the test plan or test process
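A hedged sketch of the test-case components listed above gathered into one record; the class, field names and sample case are illustrative:

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        name: str                                       # unique, traceable name
        prerequisites: list = field(default_factory=list)
        input_values: dict = field(default_factory=dict)
        procedure: list = field(default_factory=list)
        expected_results: dict = field(default_factory=dict)
        run_after: list = field(default_factory=list)   # execution order
        data_sources: list = field(default_factory=list)

    tc = TestCase(
        name="PAY-001 calculate monthly salary",
        prerequisites=["employee 1001 exists"],
        input_values={"employee_id": 1001, "month": "2004-03"},
        procedure=["open Payroll", "run calculation", "view pay slip"],
        expected_results={"net_pay": 2350.00},
    )
    print(tc.name, "->", tc.expected_results)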

Outputs from the Test Execution Process
Test Execution Log(s)
Restored test environment

The test execution phase of your software test process will control how the test gets applied to the application. This step of the process can range from very chaotic to very simple and schedule-driven. Without an adequate test plan in place to control your entire test process, you may inadvertently cause problems for subsequent testing. The secret to a controlled test execution is comprehensive planning; the problems experienced in test execution are usually attributed to not properly performing steps from earlier in the process. For example, one test execution cycle may be required for the functional testing of an application, and a separate test execution cycle may be required for the stress/volume testing of the same application. A complete and thorough test plan will identify this need, and many of the test cases can be used for both test cycles.

Measuring the Results

This step evaluates the results of the test as compared to the acceptance criteria set down in the test plan. Specific elements to be measured and analyzed include:

Test Execution Log Review - The log review compiles a listing of the activities of all test cases, including the type of test and the completion status.
Test Execution Statistics - This summary identifies the total number of tests that were executed, noting those that passed, failed or were not executed.
Determine Application Status - This step identifies the overall status of the application after testing, for example: ready for release, needs more testing, etc.
Application Defects - This final and very important report identifies potential defects in the software, including application processes that need to be analyzed further.

16.4 Automation Methods

Capture/Playback Approach

The Capture/Playback tools capture the sequence of manual operations in a test script as they are entered by the test engineer. These sequences are played back during the test execution. The benefit of this approach is that the captured session can be re-run at some later point in time to ensure that the system performs the required behavior. The shortcomings of Capture/Playback are that in many cases, if the system functionality changes, the capture/playback session will need to be completely re-recorded to capture the new sequence of user interactions. This approach sometimes reduces the effort over the completely manual approach, but the overall savings is usually minimal. Tools like WinRunner provide a scripting language, and it is possible for engineers to edit and maintain such scripts.

Data Driven Approach

The data-driven approach is a test that plays back the same user actions but with varying input values. This allows one script to test multiple sets of data. It is applicable when large volumes and different sets of data need to be fed to the application and tested for correctness. The benefit of this approach is that it consumes less time and is more accurate than testing manually, and testing can be done with both positive and negative data simultaneously. Scripts need to be reviewed, validated for results, and accepted as functioning as expected before they are used live.

Test Script Execution

In this phase we execute the scripts that have already been created. Steps to be followed before execution of scripts:
1. Test tool to be installed in the machine.
2. Test environment/application to be tested to be installed in the machine.
3. Prerequisites for running the scripts, such as tool settings, playback options and any necessary data table or datapool updates, need to be taken care of.
4. Select the script that needs to be executed and run it.
5. Wait until execution is done.
6. Analyze the results via Test Manager or in the logs.

Test script execution process:

[Flow: Test tool ready -> Test-ready application -> Tool settings and playback options -> Script execution -> Result analysis -> Defect management]
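To make the data-driven approach above concrete, the sketch below replays one recorded transaction with several input values. It is written in SQABasic, the scripting language of Rational Robot (the sample tool discussed in chapter 18), and it is illustrative only: the window caption, object indices and test data are invented, and the exact recorded commands will differ per application.

    Sub Main
        ' Illustrative data-driven loop: the same recorded actions are
        ' replayed with varying inputs (positive and negative values).
        Dim names(3) As String
        names(0) = "Smith"      ' typical value
        names(1) = "O'Brien"    ' value with punctuation
        names(2) = ""           ' negative test: empty input
        names(3) = "X#1234"     ' negative test: invalid characters

        Dim i As Integer
        For i = 0 To 3
            ' Recognition strings below are placeholders for whatever
            ' the tool recorded against the application under test.
            Window SetContext, "Caption=Customer Search", ""
            EditBox Click, "ObjectIndex=1", ""
            InputKeys names(i)
            PushButton Click, "Text=Search"
        Next i
    End Sub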

17 General automation tool comparison

Anyone who has contemplated the implementation of an automated test tool has quickly realized the wide variety of options on the market, in terms of both the kinds of test tools being offered and the number of vendors. The best tool for any particular situation depends on the system engineering environment that applies and the testing methodology that will be used, which in turn will dictate how automation will be invoked to support the process. This appendix evaluates major tool vendors on their test tool characteristics: test execution capability, test reporting capability, tool integration capability, performance testing and analysis, and vendor qualification. The tool vendors evaluated are Compuware, Mercury, Rational, Empirix/RSW, and Segue.

17.1 Functional Test Tool Matrix

The Tool Matrix is provided for quick and easy reference to the capabilities of the test tools. A detailed description is given below of each of the categories used in the matrix. In general, a set of criteria can be built up by using this matrix and an indicative score obtained to help in the evaluation process. Each category in the matrix is given a rating of 1 – 5:

1 = Excellent support for this functionality.
2 = Good support but lacking, or another tool provides more effective support.
3 = Basic support only.
4 = Supported only by use of an API call or third-party add-in, not included in the general test tool/below average.
5 = No support.

Usually the lower the score the better, but this is subjective and is based on the experience of the author and the opinions of the test professionals used to create this document.

17.2 Record and Playback

This category details how easy it is to record and play back a test. Does the tool support low-level recording (mouse drags, exact screen location)? Is there object recognition when recording and playing back, or does it appear to record OK but then fail on playback (without environment changes or unique-ID changes)? How easy is it to read the recorded script? This is very similar to recording a macro in, say, Microsoft Access. When automating, this is the first thing that most test professionals will do: they will record a simple script, look at the code and then play it back. This should be done as a minimum in the evaluation process, because if the tool of choice cannot recognize the application's objects then the automation process will be a very tedious experience. Eventually, record and playback becomes less and less a part of the automation process, as it is usually more robust to use the built-in functions to directly test objects.

17.3 Web Testing

Web based functionality on most applications is now a part of everyday life. As such, the test tool should provide good web based test functionality in addition to its client/server functions. In judging the rating for this category I looked at the tool's native support for HTML tables, frames, the DOM, various browser platforms, web site maps and links. Web testing can be riddled with problems if various considerations are not taken into account. Here are a few examples:
• Are there functions to tell me when the page has finished loading?
• Can I tell the test tool to wait until an image appears?
• Can I test whether links are valid or not?
• Can I test web based objects' functions, like whether an object is enabled, whether it contains data, etc.?
• Are there facilities that will allow me to programmatically look for objects of a certain type on a web page, or locate a specific object?
• Can I extract data from the web page itself? E.g. the title? A hidden form element?
With client/server testing the target customer is usually well defined: you know what network operating system you will be using, the applications and so on. On the web it is far different. A person may be connecting from the USA or Africa, they may be disabled, they may use various browsers, and the screen resolution on their computers will be different. They will speak different languages, will have fast connections and slow connections, and will connect using Mac, Linux or Windows, etc. So the cost to set up a test environment is usually greater than for a client/server test, where the environment is fairly well defined.

17.4 Database Tests
Most applications will provide the facility to preserve data outside of themselves. This is usually achieved by holding the data in a database. As such, checking what is in the back-end database usually verifies the proper validation of tests carried out on the front end of an application. Many databases are available, e.g. Oracle, DB2, SQL Server, Sybase, Informix, Ingres, etc., but all of them support a universal query language known as SQL and a protocol for communicating with these databases called ODBC (JDBC can be used in Java environments). I have looked at all the tools' support for SQL and ODBC, and at how they hold returned data: is it in an array, a cursor, a variable, etc.? How does the tool manipulate this returned data? Can it call stored procedures and supply required input variables? What is the range of functions supplied for this testing?
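As an illustration of such a back-end check, the sketch below uses the ODBC functions that SQABasic provides. The DSN, credentials, table and file names are invented, and the exact function signatures should be checked against the tool's language reference.

    Sub Main
        Dim conn As Long

        ' Connect through an ODBC data source (DSN, user and password
        ' are placeholders for a real environment).
        conn = SQLOpen("DSN=OrdersDB;UID=tester;PWD=secret")

        ' After driving the front end, confirm the row really reached
        ' the back-end table.
        Call SQLExecQuery(conn, "SELECT order_no, part_name FROM orders WHERE order_no = '1001'")

        ' Dump the result set to a file for later comparison.
        Call SQLRetrieveToFile(conn, "C:\results\order_check.txt")

        Call SQLClose(conn)
    End Sub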

17.5 Data Functions
As mentioned above, applications usually provide a facility for storing data offline. So to test this, we will need to create data to input into the application. I have looked at all

the tools' facilities for creating and manipulating data. Does the tool allow you to specify the type of data you want? Can you automatically generate data? Can you interface with files, spreadsheets, etc. to create and extract data? Can you randomize the access to that data? Is the data access truly random? This functionality is normally more important than database tests, as the databases will usually have their own interface for running queries. However, applications (except for manual input) do not usually provide facilities for bulk data input. The added benefit (as I have found) is that this functionality can be used for a production reason, e.g. for the aforementioned bulk data input sometimes carried out in data migration or application upgrades. These functions are also very important as you move from the record/playback phase, to data-driven tests, to framework testing. Data-driven tests are tests that replace hard-coded names, addresses, numbers, etc. with variables supplied from an external source, usually a CSV (comma-separated values) file, spreadsheet or database; a sketch of reading such a file follows. Frameworks are usually the ultimate goal in deploying automation test tools. Frameworks provide an interface to all the applications under test by exposing a suitable list of functions, databases, etc. This allows an inexperienced tester/user to run tests by just providing the test framework with known commands/variables. A test framework has parallels to software frameworks, where you develop an encapsulation layer of software (the framework) around the applications, databases, etc. and expose functions, classes, methods, etc. that are used to call the underlying applications, return data, input data, and so on. However, doing this requires a lot of time, skilled resources and money.
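Here is a minimal SQABasic sketch of pulling data-driven inputs from a CSV file using standard Basic file I/O. The file path, field layout and parsing are illustrative assumptions.

    Sub Main
        Dim lineText As String
        Dim commaPos As Integer
        Dim custName As String
        Dim custId As String

        ' Each line of the (hypothetical) file holds: name,id
        Open "C:\testdata\customers.csv" For Input As #1

        While Not EOF(1)
            Line Input #1, lineText

            ' Split the line on the first comma using standard Basic
            ' string functions.
            commaPos = InStr(lineText, ",")
            custName = Left(lineText, commaPos - 1)
            custId = Mid(lineText, commaPos + 1)

            ' Feed the values to the application under test here,
            ' e.g. via InputKeys against recorded window contexts.
            SQAConsoleWrite "Testing with: " & custName & " / " & custId
        Wend

        Close #1
    End Sub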

17.6 Object Mapping
If you are in a role that can help influence the design of a product, try to get the development/design team to use standard and not custom objects; then hopefully you will not need this functionality. However, you may find that most (hopefully) of the application has been implemented using standard objects supported by your test tool vendor, but there may be a few objects that are custom ones. Most custom objects will behave like a similar standard control. Here are a few standard objects that are seen in everyday applications:
• Pushbuttons
• Checkboxes
• Radio buttons
• List views
• Edit boxes
• Combo boxes

If you have a custom object that behaves like one of these, are you able to map the control (tell the test tool that the custom control behaves like the standard one)? Does it support all the standard control's methods? Can you add the custom control to its own class of control?

17.7 Image Testing
Let's hope this is not a major part of your testing effort, but occasionally you may have to use this to test bitmaps and similar images. You may also need it when the application has painted controls, like those in the calculator applet found on most Windows systems. At least one of the tools allows you to map painted controls to standard controls, but to do this you have to rely on the screen co-ordinates of the image. Does the tool provide OCR (optical character recognition)? Can it compare one image against another? How long does the comparison take? If the comparison fails, how long does that take? Does the tool allow you to mask certain areas of the screen when comparing? I have looked at these facilities in the base tool set.

17.8 Test/Error recovery
This can be one of the most difficult areas to automate, but if it is automated it provides the foundation for a truly robust test suite. Suppose the application crashes while I am testing: what can I do? If a function does not receive the correct information, how can I handle this? If I get an error message, how do I deal with that? If I access a web site and get a warning, what do I do? If I cannot get a database connection, how do I skip those tests? The test tool should provide facilities to handle the above questions. I looked at the built-in wizards of the test tools for standard test recovery (when you finish tests or when a script fails), and at recovery from errors caused by the application and environment. How easy is it to build this into your code? The rating given will depend on how many errors the tool can capture, the types of errors, how it recovers from errors, and so on.
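A minimal sketch of scripted error recovery using the standard Basic On Error trap available in SQABasic follows; the window names and the recovery action are placeholders, and real suites usually centralize this in a shared library.

    Sub Main
        ' Route any run-time error to the recovery handler below.
        On Error Goto Recover

        Window SetContext, "Caption=Order Entry", ""
        PushButton Click, "Text=Process"

        Exit Sub

    Recover:
        ' Log the error so the run can be analyzed afterwards.
        SQAConsoleWrite "Recovering from run-time error " & Err

        ' A real handler might close stray dialogs or restart the
        ' application before carrying on with the next test.
        Resume Next
    End Sub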

17.9 Object Name Map
As you test your application using the test tool of your choice, you will notice that it records actions against the objects that it interacts with. These objects are identified either through their co-ordinates on the screen or, preferably, via some unique object reference referred to as a tag, object ID, index, name, etc. Firstly, the tool should provide services to uniquely identify each object it interacts with, and by various means; the last and least desirable should be by co-ordinates on the screen. Once you are well into automation and have built up tens and hundreds of scripts that reference these objects, you will want a mechanism that provides an easy update if the application being tested changes. All tools provide a search and replace facility, but the best implementations are those that provide a central repository to store these object identities. The premise is that it is better to change the reference in one place rather than having to go through each of the scripts to replace it there. I found this to be true, but not as big a point as some have stated, because those tools that don't support the central repository scheme can be programmed to reference window and object names in one place (say via a variable), and that variable can be used throughout the script wherever that object appears; a sketch of this follows. Does the Object Name Map allow you to alias the name, or change the name given by the tool to some more meaningful name?
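Here is a small sketch of the workaround described above for tools without a central object repository: hold each recognition string in one named constant, so that a UI change means a one-line edit. The caption strings are invented.

    ' Recognition strings kept in one place (they could equally live
    ' in a shared header file so every script picks them up).
    Const wLogin = "Caption=Acme - Login"
    Const wMain = "Caption=Acme - Main Menu"

    Sub Main
        ' Scripts refer only to the named constants, never to the
        ' literal captions, so a renamed window is a one-line fix.
        Window SetContext, wLogin, ""
        InputKeys "tester"
        PushButton Click, "Text=OK"

        Window SetContext, wMain, ""
    End Sub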

17.10 Object Identity Tool

Once you become more proficient with automation testing, one of the primary means of identifying objects will be via an ID tool: a sort of spy that looks at the internals of the object, giving you details like the object name, ID and similar. The tool should give you details of some of the object's properties, especially those associated with uniquely identifying the object or window. The tool will usually provide the tester with a point-and-ID service, where you can use the mouse to point at the object and see, in some window, all of that object's IDs and properties. A lot of the tools will also allow you to search all the open applications in one swoop and show you the result in a tree that you can look at when required. This will allow you to reference that object within a function call.

17.11 Extensible Language

Here is a question that you will hear time and time again in automation forums: “How do I get {insert test tool name here} to do such and such?” There will be one of four answers:
• I don't know
• It can't do it
• It can do it using the function x, y or z
• It can't in the standard language, but you can do it like this
What we are concerned with in this section is the last answer: if the standard test language does not support it, can I create a DLL or extend the language in some way to do it? This is usually an advanced topic and is not encountered until the trained tester has been using the tool for at least 6 – 12 months. However, when it is encountered, the tool should support language extension. If via DLLs, then the tester must have knowledge of a traditional development language, e.g. C, C++ or VB, may want to use external functions like Win32 API functions and so on, and should have a good grasp of programming. For instance, if I wanted to extend a tool that could use DLLs created by VB, I would need to have Visual Basic, open say an ActiveX DLL project, create a class containing various methods (similar to functions), then make a DLL file, register it on the machine, and reference that DLL from the test tool, calling the methods according to their specification; a simpler declaration-based sketch follows. Some tools provide extension by allowing you to create user-defined functions, methods, classes, etc., but these are normally a mixture of the already supported data types rather than an extension of the tool beyond its released functionality. Because this is an advanced topic I have not taken ease of use into account, as those people who have got to this level should have already exhausted the current capabilities of the tools. This will sound a lot clearer as you go on with the tools, and this document will be updated to include advanced topics like this in extending the tools' capabilities.
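As a small illustration of language extension, SQABasic (like most Basic dialects) can declare and call a routine exported by an external DLL; here the Win32 API function GetTickCount is used to time a transaction. The declaration pattern is the point; the timing use is an invented example.

    ' Declare a routine exported by an external DLL (here a Win32 API
    ' call) so the script can use it like a built-in function.
    Declare Function GetTickCount Lib "kernel32" () As Long

    Sub Main
        Dim startMs As Long
        startMs = GetTickCount()

        ' ... drive the transaction under test here ...

        ' Report the elapsed time to the console/log.
        SQAConsoleWrite "Transaction took " & (GetTickCount() - startMs) & " ms"
    End Sub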

17.12 Environment Support

How many environments does the tool support out of the box? Does it support the latest Java release, Oracle, PowerBuilder, WAP, etc.? This is becoming more and more important, and is ultimately the most important part of automation: if the tool does not support your environment/application then you are in trouble, and in most cases you will need to revert to manually testing the application (more shelfware). Most tools can interface to unsupported environments if the developers in that environment provide classes, DLLs, etc. that expose some of the application's details, but whether a developer will, or has time to, do this is another question.

17.13 Integration

How well does the tool integrate with other tools? Integration becomes very important: separate systems that don't share data may require duplication of information. Does the tool allow you to run it from various test management suites? Can you raise a bug directly from the tool and feed the information gathered from your test logs into it? Does it integrate with products like Word, Excel or requirements management tools? This matters most when managing large test projects, with an automation team greater than five and testers totaling more than ten. An example could be a major bank that wants to redesign its workflow management system to allow faster processing of customer queries. The anticipated requirements for the new workflow software number in the thousands; to test these requirements, 40,000 test cases have been identified, of which 20,000 can be automated. How do I manage this? And how do I manage the bugs raised as a result of automation testing? This is where a test management tool comes in really handy, and the management aspect and the tool's integration move further up the importance ladder. The companies that will score better on these criteria are those that provide tools outside the testing arena, as they can build in integration with their other products; when it has come down to the wire on some projects, we have gone with the tool that integrated with the products we already had.

17.14 Cost

In my opinion, cost is the least significant factor in this matrix. Why? Because all the tools are similar in price except Visual Test, which is at least 5 times cheaper than the rest; but as you will see from the matrix there is a reason: although very functional, it does not provide the range of facilities that the other tools do. I believe Visual Test will prove to be a bigger hit as it expands its functional range; it was not that long ago that it did not support web based testing. Price typically ranges from $2,900 - $5,000 (depending on quantity bought, etc.) in the US, and around £2,900 - £5,000 in the UK, for the base tools included in this document. Since the tools all cost a similar price, it is usually a case of which one will do the job rather than which is the cheapest.

The prices are kept this high because they can be. All the tools are roughly the same price, and the volume of sales is low relative to, say, a fully blown programming language IDE like JBuilder or Visual C++, which are a lot more function-rich and flexible than any of the test tools. There are not many applications I know of that cost this much per license, not even some very advanced operating systems. However, it is all a matter of supply: the bigger the supply, the lower the price, as you can spread the development costs more. On top of the above prices you usually pay an additional maintenance fee of between 10 and 20%. I do not anticipate a move upwards on the prices, as this seems to be the price the market will tolerate. Visual Test also provides a free runtime license.

17.15 Ease Of Use

This section is very subjective, but I have used testers (my guinea pigs) of various levels and got them from scratch to use each of the tools. In more cases than not they have agreed on which was the easiest to use (initially). Obviously this can change as the tester becomes more experienced and issues such as extensibility, data-driven tests, script maintenance, debugging facilities, integration, etc. become requirements. This score, however, is based on the productivity that can be gained in, say, the first three months, when those issues are not such a big concern. Ease of use includes out-of-the-box functions, layout on screen, help files and user manuals. I have also included various other criteria, like the availability of skilled resources.

17.16 Support

In the UK this can be a problem, as most of the test tool vendors are based in the USA with satellite branches in the UK. Just from my own experience and that of the testers I know in the UK, we have found Mercury to be the best for support, then Compuware, then Rational, and last Segue. Having said that, you can find a lot of resources for Segue on the Internet, including a forum at www.betasoft.com that can provide most of the answers rather than ringing the support line. On their websites, Segue and Mercury provide much useful user- and vendor-contributed material. I have also considered criteria like the validity of responses from the helpdesk, speed of responses and similar.

17.17 Object Tests

Now, presuming the tool of choice does work with the application you wish to test, what services does it provide for testing object properties? Can it validate several properties at once? Can it validate several objects at once? Can you set object properties to capture the application state? This should form the bulk of your verification as far as the automation process is concerned, so I have looked at the tools' facilities on client/server as well as web based applications.

17.18 Matrix

What follows after the matrix is a tool-by-tool comparison under the appropriate headings (as listed above), so that the user can get a feel for the tools' functionality side by side. Each category in the matrix is given a rating of 1 – 5, as defined in section 17.1.

[Matrix: WinRunner, QARun, SilkTest, Visual Test and Robot are each scored 1 – 5 against the categories described above - Record & Playback, Web Testing, Database tests, Data functions, Object Mapping, Image testing, Test/Error recovery, Object Name Map, Object Identity Tool, Extensible Language, Environment support, Integration, Cost, Ease of use, Support and Object Tests; the per-category scores sum to the totals below.]

17.19 Matrix score

• WinRunner = 24
• QARun = 25
• SilkTest = 24
• Visual Test = 39
• Robot = 24

18 Sample Test Automation Tool

Rational offers the most complete lifecycle toolset (including testing) of these vendors for the Windows platform. Their Unified Process is a very good development model, which I have been involved with, that allows mapping of requirements to use cases, test cases and a whole set of tools to support the process. Some of their products are worldwide leaders, e.g. Rational Robot, Rational Rose, RequisitePro, ClearCase, etc. When it comes to object-oriented development they are the acknowledged leaders, with most of the leading OO experts working for them.

18.1 Rational Suite of tools

Rational RequisitePro is a requirements management tool that helps project teams control the development process. RequisitePro organizes your requirements by linking Microsoft Word to a requirements repository and providing traceability and change management throughout the project lifecycle. A baseline version of RequisitePro is included with Rational Test Manager: when you define a test requirement in RequisitePro, you can access it in Test Manager.

Rational ClearQuest is a change-request management tool that tracks and manages defects and change requests throughout the development process. With ClearQuest, you can manage every type of change activity associated with software development, including enhancement requests, defect reports, and documentation modifications.

Rational Purify is a comprehensive C/C++ run-time error checking tool that automatically pinpoints run-time errors and memory leaks in all components of an application, including third-party libraries, ensuring that code is reliable.

Rational Quantify is an advanced performance profiler that provides application performance analysis, enabling developers to quickly find, prioritize and eliminate performance bottlenecks within an application.

Rational PureCoverage is a customizable code coverage analysis tool that provides detailed application analysis and ensures that all code has been exercised, preventing untested code from reaching the end-user.

Rational Suite Performance Studio is a sophisticated tool for automating performance tests on client/server systems. A client/server system includes client applications accessing a database or application server, and browsers accessing a Web server. Performance Studio includes Rational Robot and Rational Load Test: use Robot to record client/server conversations and store them in scripts, and use Load Test to schedule and play back the scripts.

Rational Robot facilitates functional and performance testing by automating the record and playback of test scripts. It allows you to write, organize, and run tests, and to capture and analyze the results. Rational Load Test can emulate hundreds, even thousands, of users placing heavy loads and stress on your database and Web servers.

Rational Test Factory tests an entire application, including all GUI features and all lines of source code, and automates testing by combining automatic test generation with source-code coverage analysis.

Rational Administrator is used to create and manage Rational repositories, users and groups, and to manage security privileges. Rational Test categorizes test information within a repository by project. A Rational project is a logical collection of databases and data stores that associates the data you use when working with Rational Suite, and optionally places them under configuration management. A Rational project is associated with one Rational Test datastore, one ClearQuest database, one RequisitePro database, and multiple Rose models and RequisitePro projects.

The tools to be discussed here are:
Rational Administrator
Rational Robot
Rational Test Manager

18.2 Rational Administrator

What is a Rational project? As noted above, it is the logical collection of databases and data stores holding your test data. You can use the Rational Administrator to create and manage projects.

How to create a new project?

Open the Rational Administrator and go to File -> New Project. In the window displayed, enter the details such as the project name and location, and enter a password if you want to protect the project. Click Next, then click Finish. In the Configure Project window displayed, click the Create button to create the associated test datastore, which is required to connect to and manage test assets. To manage requirements assets, connect to RequisitePro; for defect management, connect to a ClearQuest database.

Once the Create button in the Configure Project window is chosen, the Create Test Data Store window shown below will be displayed. Accept the default path and click the OK button.

Once the window below is displayed, it confirms that the test datastore has been successfully created; click OK to close the window. Click OK in the Configure Project window, and now your first Rational project is ready to play with.

Rational Administrator will display your “TestProject” details as below:

18.3 Rational Robot

Rational Robot is used to develop three kinds of scripts: GUI scripts for functional testing, and VU and VB scripts for performance testing. Robot can be used to:
• Perform full functional testing: record and play back scripts that navigate through your application and test the state of objects through verification points.
• Perform full performance testing: use Robot and TestManager together to record and play back scripts that help you determine whether a multi-client system is performing within user-defined standards under varying loads.

• Create and edit scripts using the SQABasic, VB, and VU scripting environments. The Robot editor provides color-coded commands with keyword Help for powerful integrated programming during script development.
• Test applications developed with IDEs such as Visual Basic, PowerBuilder, Oracle Forms, HTML, and Java. Robot uses Object-Oriented Recording to identify objects by their internal object names, not by screen coordinates. If objects change locations or their text changes, Robot still finds them on playback.
• Test objects even if they are not visible in the application's interface. The Object Testing technology in Robot lets you test any object in the application-under-test, including the object's properties and data. You can test standard Windows objects and IDE-specific objects, whether they are visible in the interface or hidden.
• Collect diagnostic information about an application during script playback. Robot is integrated with Rational Purify, Quantify, and PureCoverage; you can play back scripts under a diagnostic tool and see the results in the log.
The Object-Oriented Recording technology in Robot lets you generate scripts quickly by simply running and using the application-under-test.

18.4 Robot login window

Once logged in, you will see the Robot window. Go to File -> New -> Script. In the screen displayed, enter the name of the script, say “First Script”, by which the script will be referred to from now on, and optionally a description. The type of the script is GUI for functional testing or VU for performance testing.

18.5 Rational Robot main window - GUI script

The GUI Script window (top pane) displays the GUI scripts that you are currently recording, editing, or debugging. It has two panes:
• Asset pane (left) - Lists the names of all verification points and low-level scripts for this script.
• Script pane (right) - Displays the script.
The Output window (bottom pane) has two tabs:
• Build - Displays compilation results for all scripts compiled in the last operation. Line numbers are enclosed in parentheses to indicate lines in the script with warnings and errors. Also displays certain system messages from Robot.
• Console - Displays messages that you send with the SQAConsoleWrite command.
To display the Output window, click View -> Output.

How to record a playback script?

To record a script, go to Record -> Insert at cursor, then perform the navigation in the application to be tested; once recording is done, stop the recording with Record -> Stop.

18.6 Record and Playback options

Go to Tools -> GUI Record options and the window below will be displayed.

In this window we can set general options like the identification of lists and menus, and the recording think time, on the General tab. For example, select a preference in the Object order preference list: if you will be testing C++ applications, change the object order preference to C++ Recognition Order.

Web browser tab: mention the browser type, IE or Netscape.
Robot window: how Robot should be displayed during recording, and hotkey details.
Object Recognition Order: the order in which the recording is to happen.

18.6.1 Playback options

Go to Tools -> Playback options to set the options needed while running the script.

Here you can mention the timeout period, manage the log and log data, and set error recovery; this will help you to handle unexpected windows during playback.

18.7 Verification points

A verification point is a point in a script that you create to confirm the state of an object across builds of the application-under-test. During recording, the verification point captures object information (based on the type of verification point) and stores it in a baseline data file. The information in this file becomes the baseline of the expected state of the object during subsequent builds. When you play back the script against a new build, Robot retrieves the information in the baseline file for each verification point and compares it to the state of the object in the new build. If the captured object does not match the baseline, Robot creates an actual data file; the information in this file shows the actual state of the object in the build. After playback, the results of each verification point appear in the log in TestManager. If a verification point fails (the baseline and actual data do not match), you can select the verification point in the log and click View -> Verification Point to open the appropriate Comparator. The Comparator displays the baseline and actual files so that you can compare them.

A verification point is stored in the project and is always associated with a script. When you create a verification point, its name appears in the Asset (left) pane of the Script window. The verification point script command, which always begins with Result =, appears in the Script (right) pane. Because verification points are assets of a script, if you delete a script, Robot also deletes all of its associated verification points. You can easily copy verification points to other scripts if you want to reuse them.

18.7.1 List of Verification Points

The following table summarizes each Robot verification point.

Alphanumeric - Captures and compares alphabetic or numeric values.
Clipboard - Captures and compares alphanumeric data that has been copied to the Clipboard.
File Comparison - Compares the contents of two files.
File Existence - Checks for the existence of a specified file.
Menu - Captures and compares the text, accelerator keys, and state of menus. Captures up to five levels of sub-menus.
Module Existence - Checks whether a specified module is loaded into a specified context (process), or is loaded anywhere in memory.
Object Data - Captures and compares the data in objects.
Object Properties - Captures and compares the properties of objects.
Region Image - Captures and compares a region of the screen (as a bitmap).
Web Site Compare - Captures a baseline of a Web site and compares it to the Web site at another point in time.
Web Site Scan - Checks the content of a Web site with every revision and ensures that changes have not resulted in defects.
Window Existence - Checks that the specified window is displayed before continuing with the playback.
Window Image - Captures and compares the client area of a window as a bitmap (the menu, title bar, and border are not captured).
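As an illustration, a recorded verification point command in a GUI script might look like the sketch below. The recognition string and parameters are invented, and the exact syntax Robot generates varies by verification point type, so treat this as a sketch rather than canonical recorder output.

    Sub Main
        Dim Result As Integer

        Window SetContext, "Caption=Order Entry", ""

        ' A window-existence style check against the (invented) Order
        ' Entry window; the VP name identifies the baseline file.
        Result = WindowVP (Exists, "Caption=Order Entry", "VP=Order Window Present")

        SQAConsoleWrite "Verification point result: " & Result
    End Sub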

18.8 About SQABasic Header Files

SQABasic header files let you declare custom procedures, constants, and variables that you want to use with multiple scripts or SQABasic library source files. You can use Robot to create and edit SQABasic header files. SQABasic header files have the extension .sbh, and they can be accessed by all modules within the project. SQABasic files are stored in the SQABas32 folder of the project, unless you specify another location. You can specify another location by clicking Tools -> General Options and then the Preferences tab; under SQABasic path, use the Browse button to find the location. Robot will check this location first; if the file is not there, it will look in the SQABas32 directory.

18.9 Adding Declarations to the Global Header File

For your convenience, Robot provides a blank header file called Global.sbh. Global.sbh is a project-wide header file stored in SQABas32 in the project. You can add declarations to this global header file and/or create your own. To open Global.sbh:
1. Click File -> Open -> SQABasic File.
2. Set the file type to Header Files (*.sbh).
3. Select global.sbh, and then click Open.

18.10 Inserting a Comment into a GUI Script

During recording or editing, you can insert lines of comment text into a GUI script. Comments are helpful for documenting and editing scripts. Robot ignores comments at compile time. To insert a comment into a script during recording or editing:
1. If recording, click the Display GUI Insert Toolbar button on the GUI Record toolbar. If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on the Standard toolbar.
2. Click the Comment button on the GUI Insert toolbar.
3. Type the comment (60 characters maximum).
4. Click OK to continue recording or editing.
Robot inserts the comment into the script (in green by default) preceded by a single quotation mark. For example:
' This is a comment in the script
To change lines of text into comments or to uncomment text:
1. Highlight the text.
2. Click Edit -> Comment Line or Edit -> Uncomment Line.
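To illustrate the global header file of section 18.9, such a file might hold declarations like the sketch below. The constant, the library name and the function signature are invented examples of the kinds of declarations shared across scripts.

    ' Global.sbh - project-wide declarations (illustrative content)

    ' A constant every script in the project can use.
    Global Const MAX_LOGIN_RETRIES = 3

    ' A routine implemented in a (hypothetical) SQABasic library
    ' file, declared here so that any script may call it.
    Declare Function DoLogin BasicLib "CommonLib" (userName As String, password As String) As Integer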

18.11 About Datapools

A datapool is a test dataset: it supplies data values to the variables in a script during script playback. Datapools let you automatically pump test data to virtual testers under high-volume conditions that potentially involve hundreds of virtual testers performing thousands of transactions. Typically, you use a datapool so that:
• Each virtual tester that runs the script can send realistic data (which can include unique data) to the server.
• A single virtual tester that performs the same transaction multiple times can send realistic data to the server in each transaction.
For example, during GUI recording you might be filling out a data entry form, providing values such as order number, part name, and so forth. If you plan to repeat the transaction multiple times during playback, you might want a datapool to supply a different set of values each time.

18.11.1 Using Datapools with GUI Scripts

If you are providing one or more values to the client application during GUI recording, you might want a datapool to supply those values during playback. A GUI script can access a datapool when it is played back in Robot. Also, when a GUI script is played back in a TestManager suite, the GUI script can access the same datapool as other scripts. There are differences in the way GUI scripts and sessions are set up for datapool access:
• You must add datapool commands to GUI scripts manually while editing the script in Robot. Robot adds datapool commands to VU scripts automatically.
• There is no DATAPOOL_CONFIG statement in a GUI script. The SQADatapoolOpen command defines the access method to use for the datapool.
Although there are differences in setting up datapool access in GUI scripts and sessions, you define a datapool for either type of script using TestManager in exactly the same way.
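A minimal sketch of the manual datapool commands described above in a GUI script follows. The datapool name, column indexes and recorded actions are invented; the SQADatapool* calls are the ones the text refers to, but the exact signatures should be checked against the SQABasic reference.

    Sub Main
        Dim dp As Long            ' datapool handle
        Dim orderNo As String
        Dim partName As String
        Dim i As Integer

        ' Open the datapool defined in TestManager (name is illustrative).
        dp = SQADatapoolOpen("OrderData")

        For i = 1 To 5
            ' Advance to the next row of test data.
            Call SQADatapoolFetch(dp)

            ' Copy column values into script variables (columns 1 and 2
            ' are assumed to hold order number and part name).
            Call SQADatapoolValue(dp, 1, orderNo)
            Call SQADatapoolValue(dp, 2, partName)

            ' Replay the recorded transaction with this row's data.
            Window SetContext, "Caption=Order Entry", ""
            EditBox Click, "ObjectIndex=1", ""
            InputKeys orderNo
            EditBox Click, "ObjectIndex=2", ""
            InputKeys partName
            PushButton Click, "Text=Submit"
        Next i

        Call SQADatapoolClose(dp)
    End Sub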

18.12 Debug menu

The Debug menu has the following commands: Go, Go Until Cursor, Animate, Pause, Stop, Set or Clear Breakpoints, Clear All Breakpoints, Step Over, Step Into, and Step Out. Note: the Debug menu commands are for use with GUI scripts only.

18.13 Compiling the script

When you play back a GUI script or VU script, or when you debug a GUI script, Robot compiles the script if it has been modified since it last ran. You can also compile scripts and SQABasic library source files manually. During compilation, the Build tab in the Output window displays compilation results and error messages, with line numbers, for all compiled scripts and library source files.

To compile the active script or library source file, click File -> Compile. To compile all scripts and library source files in the current project, click File -> Compile All; use this if, for example, you have made changes to global definitions that may affect all of your SQABasic files.

18.14 Compilation errors

After the script is created, compiled, and the errors fixed, it can be executed. The results then need to be analyzed in the Test Manager.

19 Rational Test Manager

Test Manager is the open and extensible framework that unites all of the tools, assets, and data both related to and produced by the testing effort. Under this single framework, all participants in the testing effort can define and refine the quality goals they are working toward. It is where the team defines the plan it will implement to meet those goals. And, most importantly, it provides the entire team with one place to go to determine the state of the system at any time.

In Test Manager you can plan, design, implement and execute tests, and evaluate results. With Test Manager we can:
• Create and manage builds, logs, log folders, and reports.
• Create and manage data pools and data types.
The reporting tools help you track assets such as scripts, builds, and test documents, and track test coverage and progress.

When script execution is started, the following window will be displayed. The folder in which the log is to be stored, and the log name, need to be given in this window.

19.1 Test Manager - Results screen

In the Results tab of the Test Manager you can see the stored results. From Test Manager you can see details such as the start time of the script.


20 Supported environments

20.1 Operating system
WinNT 4.0 with Service Pack 5, Win2000, WinXP (Rational 2002), Win98, and Win95 with Service Pack 1.

20.2 Protocols
Oracle, SQL Server, HTTP, Sybase, Tuxedo, SAP, PeopleSoft.

20.3 Web browsers
IE 4.0 or later; Netscape Navigator (limited support).

20.4 Markup languages
HTML and DHTML pages on IE 4.0 or later.

20.5 Development environments
Visual Basic 4.0 and above, Visual C++, Java, Oracle Forms 4.5, Delphi, and PowerBuilder 5.0 or above.

The basic product supports Visual Basic, VC++ and basic web pages. To test other types of application, you have to download and run a free enabler program from Rational's website. For more details visit www.rational.com

21 Performance Testing

Performance testing is a measure of the performance characteristics of an application. The main objective of a performance test is to demonstrate that the system functions to specification, with acceptable response times, while processing the required transaction volumes against a real-time production database. The main deliverables from such a test, prior to execution, are automated test scripts and an infrastructure to be used to execute automated tests for extended periods.

21.1 What is Performance testing?

Performance testing of an application is basically the process of understanding how the web application and its operating environment respond at various user load levels. In general, we want to measure the latency, throughput, and utilization of the web site while simulating attempts by virtual users to simultaneously access the site. One of the main objectives of performance testing is to maintain a web site with low latency, high throughput, and low utilization.

21.2 Why Performance testing?

Performance problems are usually the result of contention for, or exhaustion of, some system resource. When a system resource is exhausted, the system is unable to scale to higher levels of performance. Maintaining optimum web application performance is a top priority for application developers and administrators. As the user base grows, the cost of failure becomes increasingly unbearable. To increase confidence, and to provide an advance warning of potential problems under load conditions, analysis must be done to forecast performance under load.

Typically, to debug applications, developers would execute their applications using different execution streams (i.e., completely exercise the application) in an attempt to find errors. When looking for errors in the application, performance is a secondary issue to features; however, it is still an issue. A systematic approach like performance analysis is essential to extract maximum benefit from an existing system. Performance analysis is also carried out for various purposes, such as:
• During a design or redesign of a module or a part of the system, more than one alternative may present itself. In such cases, the evaluation of a design alternative is the prime mover for an analysis.
• Post-deployment realities create a need for tuning the existing system.
• Identification of bottlenecks in a system is more of an effort at troubleshooting; it helps to focus efforts at improving overall system response.
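To make the three measures introduced in 21.1 concrete, here is a small worked example with invented figures: if a load test drives 1,200 requests through a site in a 60-second window, throughput is 1,200 / 60 = 20 requests per second; if each request takes on average 0.8 seconds from submission to last byte received, latency is 0.8 seconds; and if the web server's CPU is busy for 45 of those 60 seconds, utilization is 45 / 60 = 75%.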

21.3 Performance Testing Objectives

The objective of a performance test is to demonstrate that the system meets requirements for transaction throughput and response times simultaneously. This helps in determining whether or not the system meets the stated requirements. The performance testing goals are:
• End-to-end transaction response time measurements.
• Measure application server component performance under various loads.
• Measure database component performance under various loads.
• Monitor system resources under various loads.
• Measure the network delay between the server and clients.

First and foremost, the design specification or a separate performance requirements document should:
• Define specific performance goals for each feature that is instrumented.
• Base performance goals on customer requirements.
• Define specific customer scenarios.

As a foundation to all tests, performance requirements should be agreed prior to the test. The following attributes will help to have a meaningful performance comparison:
• Quantitative - expressed in quantifiable terms, such that when response times are measured, a sensible comparison can be derived.
• Measurable - a response time should be defined such that it can be measured using a tool or stopwatch, and at reasonable cost.
• Relevant - a response time must be relevant to a business process.
• Realistic - response time requirements should be justifiable when compared with the durations of the activities within the business process the system supports.
• Achievable - response times should take some account of the cost of achieving them.

A comprehensive test strategy would define a test infrastructure to enable all these objectives to be met. This infrastructure is an asset, and an expensive one too, so it pays to make as much use of it as possible. Fortunately, this infrastructure is a test bed which can be re-used for other tests with broader objectives.

21.4 Pre-Requisites for Performance Testing

We can identify five pre-requisites for a performance test. Not all of these need be in place prior to planning or preparing the test (although this might be helpful); rather, the list defines what is required before a test can be executed:
• Quantitative, measurable, relevant, realistic, achievable requirements, as described above.

• Stable system - A test team attempting to construct a performance test of a system whose software is of poor quality is unlikely to be successful. If the software crashes regularly, it will probably not withstand the relatively minor stress of repeated use. Testers will not be able to record scripts in the first instance, or may not be able to execute a test for a reasonable length of time before the software, middleware or operating systems crash.
• Realistic test environment - The test environment should ideally be the production environment, or a close simulation, and be dedicated to the performance test team for the duration of the test. Often this is not possible; however, for the results of the test to be realistic, the test environment should be comparable to the actual production environment. A test environment which bears no similarity to the actual production environment may be useful for finding obscure errors in the code, but is useless for a performance test. Even with an environment which is somewhat different from the production environment, it should still be possible to interpret the results obtained using a model of the system to predict, with some confidence, the behavior of the target environment.

21.5 Performance Requirements

Performance requirements normally comprise three components:
• Response time requirements
• Transaction volumes detailed in ‘load profiles’
• Database volumes

Response time requirements: When asked to specify performance requirements, users normally focus attention on response times, and often wish to define requirements in terms of generic response times. A single response time requirement for all transactions might be simple to define from the user's point of view, but is unreasonable. Some functions are critical and require short response times; others are less critical, and their response time requirements can be less stringent.

Load profiles: The second component of performance requirements is a schedule of load profiles. A load profile is the level of system loading expected to occur during a specific business scenario. Business scenarios might cover different situations when the users' organization has different levels of activity, or involve a varying mix of activities which must be supported by the system.

Database volumes: Data volumes, defining the numbers of table rows which should be present in the database after a specified period of live running, complete the load profile. Typically, data volumes estimated to exist after one year's use of the system are used, but two-year volumes or greater might be used in some circumstances, depending on the business application.
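As an invented illustration of a load profile: if a month-end scenario involves 150 clerks each completing 12 enquiry transactions and 4 update transactions per hour, the profile for that scenario is 150 x 12 = 1,800 enquiries and 150 x 4 = 600 updates per hour, i.e. a sustained load of about 0.67 transactions per second, applied against a database populated to one year's estimated volumes.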

22 Performance Testing Process

[Process flow: Requirements Collection -> Test Plan Preparation -> Test Design Preparation -> Scripting -> Test Execution (with pre-test and post-test procedures) -> Test Analysis -> decision "Is performance goal reached?": if NO, loop back to execution; if YES -> Preparation of Reports. Deliverables along the way: Requirement Collection, Test Plan, Test Design, Test Scripts, Preliminary Report (internal deliverable) and Final Report.]

22.1 Phase 1 – Requirements Study

This activity is carried out during the business and technical requirements identification phase. The objective is to understand the performance test requirements, the hardware and software components, and the usage model. It is important to understand, as accurately and as objectively as possible, the nature of the load that must be generated. The following important performance test requirements need to be captured during this phase:
• Response time
• Transactions per second
• Hits per second
• Workload
• Number of concurrent users
• Volume of data
• Data growth rate
• Resource usage
• Hardware and software configurations

Activity: Requirements Collection. Work items:
• Understand the system and application model.
• Server side and client side hardware and software requirements.
• Client and server side parameters.
• Browser emulation and automation tool selection.
• Decide on the type and mode of testing: Stress Test, Load Test, Volume Test, Spike Test, Endurance Test.
• Operational inputs - time of testing.

22.1.2 Deliverables
• Requirement Collection (sample: RequirementCollection.doc)

22.2 Phase 2 – Test Plan

The following configuration information will be identified as part of performance testing environment requirement identification.

Hardware platform:
• Server machines
• Processors
• Memory
• Disk storage
• Load machine configuration
• Network configuration

Software configuration:
• Operating system
• Server software
• Client machine software
• Applications

Activity: Test Plan Preparation. Work items:
• Hardware and software details.
• Test data.
• Transaction traversals that are to be tested, with sleep times.
• Periodic status updates to the client.

22.2.1 Deliverables
• Test Plan (sample: TestPlan.doc)

22.3 Phase 3 – Test Design

Based on the test strategy, detailed test scenarios are prepared. During the test design period the following activities will be carried out:
• Scenario design
• Detailed test execution plan
• Dedicated test environment setup
• Script recording/programming
• Script customization (delays, checkpoints, synchronization points)
• Data generation
• Parameterization/data pooling

Activity: Test Design Generation. Work items:
• Hardware and software requirements, including the server components and the load generators used.
• Setting up the monitoring servers.
• Setting up the data.
• Preparing all the necessary folders for saving the results once the test is over.
• Pre-test and post-test procedures.

22.3.1 Deliverables
• Test Design (sample: TestDesign.doc)

22.4 Phase 4 – Scripting

Activity: Scripting. Work items:
• Browse through the application and record the transactions with the tool.
• Parameterization, error checks and validations.
• Run the script for a single user to check the validity of the scripts.

22.4.1 Deliverables
• Test Scripts (sample: Script.doc)

22.5 Phase 5 – Test Execution

The test execution will follow the various types of test identified in the test plan. Virtual user loads are simulated based on the usage pattern and the load levels applied, as stated in the performance test strategy. All the scenarios identified will be executed. The following artifacts will be produced during the test execution period:
• Test logs
• Test results

Activity: Test Execution
Work items:
• Starting the pre-test procedure scripts, which include start scripts for server monitoring
• Modification of automated scripts if necessary
• Test result analysis
• Report preparation for every cycle

22.5.1 Deliverables
Deliverable: Test Execution (samples: Time Sheet.doc, Run Logs.doc)

22.6 Phase 6 – Test Analysis

Activity: Test Analysis
Work items:
• Analyzing the run results and preparing the preliminary report

22.6.1 Deliverables
Deliverable: Test Analysis (sample: Preliminary Report.doc)

22.7 Phase 7 – Preparation of Reports

The test logs and results generated are analyzed based on performance under the various loads: response time, transactions per second, think time, resource usage, network delay, network throughput, database throughput, transaction distribution and data handling. Manual and automated results analysis methods can be used for performance results analysis. The following performance test reports/graphs can be generated as part of performance testing:
• Transaction response time
• Transactions per second
• Transaction summary graph
• Transaction performance summary graph
• Transaction response graph – under load
• Virtual user summary graph
• Error statistics graph
• Hits per second graph
• Throughput graph
• Downloads per second graph

Based on the performance report analysis, suggestions on improvement or tuning will be provided to the design team:
• Performance improvements to application software, middleware and database organization
• Changes to server system parameters
• Upgrades to client or server hardware, network capacity or routing

Activity: Preparation of Reports
Work items:
• Preparation of the final report

22.7.1 Deliverables
Deliverable: Final Report (sample: Final Report.doc)
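Most of the graphs above come straight out of the load tool, but the arithmetic behind them is simple. A minimal sketch of reducing a run log to headline report numbers; the log format here is an assumption, not any particular tool's format:

```python
from statistics import mean

# Assumed record format: (epoch_seconds, transaction_name, response_s)
log = [
    (0.2, "login", 0.41), (0.9, "search", 1.10), (1.4, "login", 0.39),
    (2.1, "search", 1.45), (2.8, "checkout", 2.30), (3.7, "search", 0.98),
]

duration = max(t for t, _, _ in log) - min(t for t, _, _ in log)
print(f"overall: {len(log) / duration:.2f} transactions per second")

by_txn = {}
for _, name, rt in log:
    by_txn.setdefault(name, []).append(rt)

for name, rts in sorted(by_txn.items()):
    rts.sort()
    p90 = rts[int(0.9 * (len(rts) - 1))]     # crude 90th percentile
    print(f"{name}: n={len(rts)} avg={mean(rts):.2f}s p90={p90:.2f}s")
```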

22.8 Common Mistakes in Performance Testing

• No goals (there is no general-purpose model: the goals determine the techniques, metrics and workload, and setting them is not trivial)
• Biased goals ("to show that OUR system is better than THEIRS"; the analysts become the jury)
• Unsystematic approach
• Analysis without understanding the problem
• Incorrect performance metrics
• Unrepresentative workload
• Wrong evaluation technique
• Overlooking important parameters
• Ignoring significant factors
• Inappropriate experimental design
• Inappropriate level of detail
• No analysis
• Erroneous analysis
• No sensitivity analysis
• Ignoring errors in input
• Improper treatment of outliers
• Assuming no change in the future
• Ignoring variability
• Too complex analysis
• Improper presentation of results
• Ignoring social aspects
• Omitting assumptions and limitations

22.9 Benchmarking Lessons

Every build needs to be measured. We should run the automated performance test suite against every build and compare the results against previous results, so that performance issues are identified as soon as possible and further degradation is prevented.

Performance goals need to be enforced. If we decide to make performance a goal and a measure of the quality criteria for release, the management team must decide to enforce the goals. Without defined performance goals or requirements, testers must guess at how to instrument tests to best measure the various response times, so it is important to define concrete performance goals. Establish incremental performance goals throughout the product development cycle, and strive to achieve the majority of them early, because:

• Most performance issues require architectural change.
• Performance is known to degrade slightly during the stabilization phase of the development cycle.

Achieving performance goals early also helps to ensure that the ship date is met, because a product rarely ships if it does not meet its performance goals. All the members of the team should agree that a performance issue is not just a bug: it is a software architectural problem.

Run the performance test suite under controlled conditions from build to build. This typically means measuring performance on "clean" test environments. Keep the performance test suite fairly static throughout the product development cycle; if the design or requirements change and you must modify a test, perturb only one variable at a time for each build. Significant changes to the performance test suite skew, or make obsolete, all previous data, so the performance tests should be modified consistently.

Design the performance test suite to measure response times, not to identify bugs in the product; the performance tests should not be used to find functionality-type bugs. Instead, design the build verification test (BVT) suite to ensure that no new bugs are injected into the build that would prevent the performance test suite from completing successfully.

Ensure that you know what you are measuring and why. Tests capture only secondary metrics when the instrumented tests have nothing to do with measuring clear and established performance goals. Although secondary metrics look good on wall charts and in reports, if the data is not going to be used in a meaningful way to make improvements in the engineering cycle, it is probably wasted data.

Reuse automated performance tests. Creating an automated test suite to measure performance is time-consuming and labor-intensive, and automated performance tests can often be reused in many other automated test suites. For example, incorporate the performance test suite into the stress test suite to validate stress scenarios and to identify potential performance issues under different stress conditions.

Testing for most applications will be automated, and the tools used will be those specified in the requirement specification. The tools used for performance testing here are LoadRunner 6.5 and WebLoad 4.5. Performance testing of Web services and applications is paramount to ensuring an excellent customer experience on the Internet. The Web Capacity Analysis (WebCAT) tool provides Web server performance analysis; it can also assess Internet Server Application Programming Interface and Active Server Pages (ISAPI/ASP) applications.
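The "measure every build and compare" lesson is easy to automate. A sketch, assuming response-time baselines are kept in a JSON file and using an illustrative 10% regression threshold:

```python
import json

THRESHOLD = 1.10   # flag anything 10% slower than the stored baseline

def find_regressions(baseline_path, current):
    """current: dict mapping transaction name -> mean response time (s)."""
    with open(baseline_path) as f:
        baseline = json.load(f)
    return {
        name: (baseline[name], rt)
        for name, rt in current.items()
        if name in baseline and rt > baseline[name] * THRESHOLD
    }

# After each build's performance run:
current = {"login": 0.52, "search": 1.31, "checkout": 2.20}
for name, (old, new) in find_regressions("baseline.json", current).items():
    print(f"REGRESSION {name}: {old:.2f}s -> {new:.2f}s")
```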

23 Tools

23.1 LoadRunner 6.5

LoadRunner is Mercury Interactive's tool for testing the performance of client/server systems. LoadRunner enables you to test your system under controlled and peak load conditions, and it can measure the performance of your application under any load condition. To generate load, LoadRunner runs thousands of Virtual Users distributed over a network. Using a minimum of hardware resources, these Virtual Users provide consistent, repeatable and measurable load that exercises your client/server system just as real users would. LoadRunner's in-depth reports and graphs provide the information you need to evaluate the performance of your client/server system.

23.2 WebLoad 4.5

WebLoad is a testing tool for testing the scalability, functionality and performance of Web-based applications, both Internet and intranet. Use WebLoad to test how well your web site will perform under real-world conditions by combining performance, load and functional tests, or by running them individually. WebLoad generates load by creating virtual clients that emulate network traffic. You create test scripts (called agendas) using JavaScript that instruct those virtual clients about what to do. When WebLoad runs the test, it gathers results at a per-client, per-transaction and per-instance level from the computers that are generating the load; WebLoad can also gather information from the server's performance monitor. You can watch the results as they occur – WebLoad displays them in graphs and tables in real time – and you can save and export the results when the test is finished. WebLoad supports HTTP 1.0 and 1.1, including cookies, proxies, SSL/TLS, client certificates, authentication, persistent connections and chunked transfer coding.

Performance Testing Tools – Summary and Comparison

This table lists several performance testing tools available on the market. For your convenience we compared them based on cost and the OS required.

Tool: Web Performance Trainer
URL: http://www.webperfcenter.com
Cost ($ per number of virtual users): 1400-100, 2495-200, 4995-300, 7995-1000, 11995-5000
OS: Windows NT, Windows 2000, SunOS, Solaris, Linux
Description: Load test tool emphasizing ease of use. Supports all browsers and web servers; simulates up to 200 users per playback machine at various connection speeds. Records and allows viewing of the exact bytes flowing between browser and server. Modem simulation allows each virtual user to be bandwidth limited.
Notes: downloadable.

Tool: Astra LoadTest
URL: http://www.astratryandbuy.com/loadtesting.html
Cost ($ per number of virtual users): 9995-50, 17995-100
OS: Windows NT, Windows 2000
Description: Mercury's load/stress testing tool. Can automatically handle variations in session-specific items such as cookies, usernames, passwords and any other parameter, to simulate multiple virtual users. An integrated spreadsheet parameterizes recorded input to exercise the application with a wide variety of data.
Notes: downloadable; the evaluation will emulate 25 users and will expire in 2 weeks (may be extended).

Tool: Benchmark Factory
URL: http://www.benchmarkfactory.com
Cost ($ per number of virtual users): 29995-250
OS: Windows NT, Windows 2000
Description: E-commerce load testing tool from Client/Server Solutions, Inc. Includes record/playback, scripting, web form processing, cookies, user sessions and SSL. 'Scenario Builder' visually combines virtual users and host machines for tests representing real user traffic; 'Content Check' checks for failures under heavy load. Real-time monitors and analysis. Includes optimized database drivers for vendor-neutral comparisons (MS SQL Server, Oracle 7 and 8, Sybase System 11, IBM DB2 CLI, Informix, ODBC) and pre-developed industry-standard benchmarks such as AS3AP, Set-Query, Wisconsin, WebStone and others.
Notes: downloadable evaluation version.

Tool: Radview's WebLoad
URL: http://www.radview.com
Cost: $
OS: Windows NT, Windows 2000, Solaris, AIX
Description: Includes record/playback; handles dynamic web pages, SSL, proxies and cookies. 'LoadSmart Scheduling' capabilities allow complex usage scenarios and randomized transaction sequences.
Notes: one of the advanced tools in the listing; the evaluation version does not support SSL.

Tool: Rational Suite Performance Studio / Rational SiteLoad
URL: http://www.rational.com/products
Cost: $
OS: Windows NT, Windows 2000, Unix
Description: Rational's client/server and web performance testing tool. Supports recording of SSL sessions and dynamic HTML; runs on multiple platforms.
Notes: request a CD only; not downloadable.

Tool: MS Web Application Stress Test
URL: http://homer.rte.microsoft.com
Cost: Free
OS: Windows NT, Windows 2000
Description: Microsoft stress test tool created by Microsoft's Internal Tools Group (ITG) and subsequently made available for external use. Script recording from a browser; supports cookies, SSL and password authentication; adjustable delay between requests.
Notes: downloadable.

Tool: Forecast
URL: http://www.facilita.co.uk/intro.shtml
Cost: $
OS: Win95/98, Windows NT, Unix
Description: Load testing tool from Facilita Software for web, network, client-server and database systems.
Notes: not downloadable.

Tool: E-Load
URL: http://www.rswsoftware.com/products/eload_index.html
Cost: $
OS: Windows NT
Description: Load test tool from RSW geared to testing web applications under load and testing the scalability of e-commerce applications. Allows on-the-fly changes and has real-time reporting capabilities. For use in conjunction with test scripts from their e-Tester functional test tool.
Notes: downloadable evaluation copy; free CD request.

Tool: Zeus
URL: http://webperf.zeus.co.uk
Cost: Free
OS: Unix
Description: Free load test application to generate web server loads.
Notes: free and easy; unsupported, and the download link is broken.

Tool: HTTP-Load
URL: http://www.acme.com/software/http_load
Cost: Free
OS: Unix
Description: Free web benchmarking/load testing tool available as source code.
Notes: downloadable; will compile on any UNIX platform.

Tool: WEBArt
URL: http://www.oclc.org/webart
Cost: $
OS: Win95/98, Windows NT
Description: Tool for load testing of up to 100-200 simulated users; also includes functional and regression testing capabilities, capture/playback, and a scripting language.
Notes: evaluation copy available.

Tool: QALoad
URL: http://www.compuware.com/products/auto/releases/QALoad.htm
Cost: $
OS: Windows NT, Windows 2000
Description: Compuware's QALoad for load/stress testing of database, web and character-based systems. Works with middleware such as SQLnet, ODBC, SQL Server, Telnet, DBLib or CBLib.
Notes: no download.

Tool: SilkPerformer
URL: http://www.segue.com/html/s_solutions/s_performer/s_performer.htm
Cost: $
OS: Windows NT
Description: Load and performance testing component of Segue's Silk web testing toolset.
Notes: free CD request.

Tool: Final Exam WebLoad
URL: http://www.ca.com/products/platinum/appdev/fe_iltps.htm
Cost: $
OS: AIX, SunOS/Solaris, Unix; Windows NT load test player and manager
Description: Final Exam WebLoad integration and pre-deployment testing ensures the reliability, performance and scalability of Web applications. It generates and monitors load stress tests, which can be recorded during a Web session with any browser, and assesses Web application performance under user-defined variable system loads.
Notes: not downloadable.

Tool: WebSizr / WebCorder
URL: http://www.technovations.com
Cost: $
OS: Windows 95/98, Windows NT, Windows 2000
Description: Load testing and capture/playback tools from Technovations. The WebSizr load testing tool supports authentication, cookies and redirects. Load scenarios can include unlimited numbers of virtual users on one or more load servers, as well as single users on multiple client workstations.
Notes: downloadable; 30-day evaluation period.

Tool: WebSpray
URL: http://www.redhillnetworks.com/home.htm
Cost: $199 ($99 with discount)
OS: Windows 98, Windows NT 4.0, Windows 2000
Description: Load testing tool; can simulate up to 1,000 clients from a single IP address, also supports multiple IP addresses with or without aliases, and includes link testing capabilities.
Notes: downloadable; 15-day evaluation period.

Tool: Microsoft WCAT
URL: http://msdn.microsoft.com/workshop/server/toolbox/wcat.asp
Cost: Free
OS: Windows NT, Windows 2000
Description: Web load test tool from Microsoft for load testing of MS IIS on NT.
Notes: downloadable.

23.3 Architecture Benchmarking

• Hardware benchmarking is performed to size the application on the planned hardware platform. It is significantly different from a capacity planning exercise in that it is done after development and before deployment.
• Software benchmarking – defining the right placement and composition of software instances – can help in the vertical scalability of the system without the addition of hardware resources. This is achieved through software benchmark tests.

23.4 General Tests

What follows is a list of tests adaptable to assess the performance of most systems. The methodologies below are generic, allowing one to use a wide range of tools to conduct the assessments. Each methodology is specified using the following fields (a skeletal template follows this list):

• Result: provides information about what the test will accomplish.
• Purpose: explains the value and focus of the test.
• Methodology: a list of suggested steps to take in order to assess the system under test, along with some simple background information that might be helpful during testing.
• What to look for: contains information on behaviors, issues and errors to pay attention to during and after the test.
• Constraints: details any constraints and values that should not be exceeded during testing.
• Type of workload: in order to properly achieve the goals of the test, each test requires a certain type of workload; this field provides information on the appropriate script of pages or transactions for the user.
• Time estimate: a rough estimate of the amount of time that the test may take to complete.
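When a team catalogues many such tests, it helps to keep these fields in a machine-readable form. A minimal sketch in Python; the field names mirror the definitions above, and the record shown is invented:

```python
from dataclasses import dataclass

@dataclass
class TestMethodology:
    name: str
    result: str             # what the test will accomplish
    purpose: str            # value and focus of the test
    methodology: list[str]  # suggested steps
    what_to_look_for: str   # behaviors/errors to watch during and after
    constraints: str        # values that must not be exceeded
    workload_type: str      # script of pages/transactions to use
    time_estimate_hours: float

baseline = TestMethodology(
    name="Baseline response time",
    result="Per-transaction response times at nominal load",
    purpose="Reference point for later load and stress runs",
    methodology=["Restore a clean environment", "Run the single-user script",
                 "Record response times per transaction"],
    what_to_look_for="Errors, timeouts, unusually slow transactions",
    constraints="Response time must stay under the agreed limit",
    workload_type="Recorded end-to-end business transaction",
    time_estimate_hours=2.0,
)
```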

24 Performance Metrics

The common metrics selected/used during performance testing are as below:

• Response time.
• Turnaround time: the time between the submission of a batch job and the completion of its output.
• Throughput: the rate at which requests are serviced (requests per unit of time). Examples:
  – Jobs per second
  – Requests per second
  – Millions of Instructions Per Second (MIPS)
  – Millions of Floating Point Operations Per Second (MFLOPS)
  – Packets Per Second (PPS)
  – Bits per second (bps)
  – Transactions Per Second (TPS)
• Capacity:
  – Nominal capacity: the maximum achievable throughput under ideal workload conditions, e.g. bandwidth in bits per second. The response time at maximum throughput is usually too high to be acceptable.
  – Usable capacity: the maximum throughput achievable without exceeding a pre-specified response-time limit.
• Efficiency: the ratio of usable capacity to nominal capacity. Alternatively, the ratio of the performance of an n-processor system to that of a one-processor system is its efficiency.
• Utilization: the fraction of time the resource is busy servicing requests; for memory, the average fraction used.
• Stretch factor: the ratio of the response time with concurrent users to that with a single user.

As tests are executed, metrics such as response times for transactions, HTTP requests per second and throughput should be collected. It is also important to monitor and collect statistics such as CPU utilization, memory, disk space and network usage on the individual web, application and database servers, and to make sure those numbers recede as the load decreases. Cognizant has built custom monitoring tools to collect these statistics; third-party monitoring tools are also used, based on the requirement.

24.1 Client Side Statistics

• Running Vusers
• Hits per second
• Throughput
• HTTP status codes
• HTTP responses per second
• Pages downloaded per second
• Transaction response time
• Page component breakdown time
• Page download time
• Component size analysis
• Error statistics
• Errors per second
• Total successful/failed transactions
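A small worked computation tying the capacity-related definitions together (all numbers invented for illustration):

```python
# Nominal capacity: 1200 req/s under an ideal workload.
# Usable capacity: 900 req/s is the most the system sustains without
# exceeding the pre-specified response-time limit.
nominal, usable = 1200.0, 900.0
efficiency = usable / nominal              # 0.75

# Stretch factor: response time under load vs. a single user.
rt_single, rt_loaded = 0.8, 2.4            # seconds
stretch = rt_loaded / rt_single            # 3.0

# Utilization: fraction of time the resource is busy.
busy_s, window_s = 42.0, 60.0              # busy seconds per minute
utilization = busy_s / window_s            # 0.70

print(f"efficiency={efficiency:.2f}, stretch={stretch:.1f}, "
      f"utilization={utilization:.0%}")
```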

24.2 Server Side Statistics

• System resources – processor utilization, memory and disk space
• Web server resources – threads, cache hit ratio
• Application server resources – heap size, JDBC connection pool
• Database server resources – wait events, SQL queries
• Transaction profiling
• Code block analysis

24.3 Network Statistics

• Bandwidth utilization
• Network delay time
• Network segment delay time

24.4 Conclusion

Performance testing is an independent discipline and involves all the phases of the mainstream testing lifecycle: strategy, plan, design, execution, analysis and reporting. If executed systematically with appropriate planning, performance testing can unearth issues that mainstream testing cannot. However, there is an important point to be noted here: before the system is tested for its performance requirements, it should have been architected and designed to meet the required performance goals. If not, it may be too late in the software development cycle to correct serious performance issues, the consequences of which could be disastrous to the final system.

There is another flip side of the coin. It is very typical of a project manager to be overtaken by time and resource pressures, leading to not enough budget being allocated for performance testing. Without the rigor described in this paper, executing performance testing does not yield anything more than finding more defects in the system.

Web-enabled applications and infrastructures must be able to execute evolving business processes with speed and precision while sustaining high volumes of changing and unpredictable user audiences. Fortunately, robust and viable solutions exist to help fend off the disasters that result from poor performance. Load testing gives the greatest line of defense against poor performance and accommodates complementary strategies for performance management and monitoring of a production environment. Automated load testing tools and services are available to meet the critical need of measuring and optimizing complex and dynamic application and infrastructure performance, leveraging an ongoing, lifecycle-focused approach. The discipline helps businesses succeed in leveraging Web technologies to their best advantage: enabling new business opportunities, lowering transaction costs and strengthening profitability. Once these solutions are properly adopted and utilized, businesses can begin to take charge and leverage information technology assets to their competitive advantage.

By continuously testing and monitoring the performance of critical software applications, businesses can confidently and proactively execute strategic corporate initiatives for the benefit of shareholders and customers alike.

25 Load Testing

Load testing is the creation of a simulated load on a real computer system, using virtual users who submit work as real users would, at real client workstations, to test the system's ability to support that workload. Testing of critical web applications during development and before deployment should include functional testing to confirm conformance to the specifications, performance testing to check that the application offers an acceptable response time, and load testing to see what hardware or software configuration will be required to provide acceptable response times and handle the load that will be created by the real users of the system. Load testing is thus accomplished by stressing the real application under a simulated load provided by virtual users (a toy sketch of the idea follows below).

25.1 Why is load testing important?

Load testing increases the uptime of critical web applications by helping you spot bottlenecks in the system under large user-stress scenarios, before they happen in a production environment.

25.2 When should load testing be done?

Load testing should be done when the probable cost of the load test is likely less than the cost of a failed application deployment.
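A toy illustration of virtual users: concurrent workers submit requests and record response times. Real tools add ramp-up schedules, protocol emulation and richer reporting; the URL and counts here are placeholders.

```python
import threading
import time
import urllib.request

URL = "http://testserver.example/"      # placeholder system under test
VUSERS, ITERATIONS = 25, 10
times, lock = [], threading.Lock()

def virtual_user():
    for _ in range(ITERATIONS):
        start = time.time()
        try:
            urllib.request.urlopen(URL, timeout=10).read()
            with lock:
                times.append(time.time() - start)
        except OSError:
            pass                        # a real harness would count errors
        time.sleep(1)                   # think time between submissions

threads = [threading.Thread(target=virtual_user) for _ in range(VUSERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
if times:
    print(f"{len(times)} requests, avg {sum(times) / len(times):.3f}s")
```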

26 Load Testing Process

26.1 System Analysis

This is the first step when a project decides on load testing for its system. Evaluating the requirements and needs of the system prior to load testing will provide more realistic test conditions: one should know all the key performance goals and objectives, such as the number of concurrent connections, hits per second and test run time. Another important part of the analysis is choosing the appropriate strategy for testing the application: load testing, stress testing or capacity testing. Load testing is used to test the application against a requested number of users; the objective is to determine whether the site can sustain that number of users with acceptable response times. Stress testing is load testing over extended periods of time, to validate an application's stability and reliability. Capacity testing is used to determine the maximum number of concurrent users that an application can manage; for businesses, capacity testing is the benchmark that states the maximum load of concurrent users the site can sustain before the system fails. Finally, the test tool should be evaluated for its multithreading capabilities and its ability to create the required number of virtual users with minimal resource consumption and a maximal virtual-user count.

26.2 User Scripts

Once the analysis of the system is done, the next step is the creation of user scripts. A script recorder can be used to capture all the business processes into test scripts, often referred to as virtual users or virtual user scripts; a virtual user is simply an emulated real user who drives the real application as a client. All the business processes should be recorded end to end, so that the resulting transactions assist in breaking down all the actions, and the time each takes, to measure the performance of the business processes.

26.3 Settings

Run-time settings define the way the scripts are run in order to accurately emulate real users. Settings can configure the number of concurrent connections, whether to follow HTTP redirects, and so on. System response times can also vary based on connection speed, so throttling bandwidth can emulate dial-up connections at varying modem speeds (28.8 Kbps or 56.6 Kbps) or a T1 line (1.54 Mbps); a conceptual sketch follows.
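Bandwidth throttling can be emulated by pacing reads so the average rate matches the simulated line speed. A conceptual sketch, not how any particular tool implements it:

```python
import time

def read_throttled(stream, kbps):
    """Consume a byte stream no faster than `kbps` kilobits per second."""
    bytes_per_sec = kbps * 1000 / 8
    data, received, start = bytearray(), 0, time.time()
    while True:
        block = stream.read(4096)
        if not block:
            return bytes(data)
        data += block
        received += len(block)
        # Sleep until elapsed time catches up with the modem's pace.
        delay = received / bytes_per_sec - (time.time() - start)
        if delay > 0:
            time.sleep(delay)

# e.g. read_throttled(urllib.request.urlopen(url), kbps=56.6)
```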

26.4 Performance Monitoring

Every component of the system needs monitoring: the clients, the network, the web server, the application server, the database and so on. If the tool supports real-time monitoring, testers can view the application's performance at any time during the test, which means performance bottlenecks are identified instantly during load testing. Running the load test scenario and monitoring the performance together accelerate the test process, thereby producing a more stable application.

26.5 Analyzing Results

The last but most important step in load testing is collecting and processing the data to resolve performance bottlenecks. The reports generated can range from the number of hits and requests per second to socket errors. Analyzing the results will isolate bottlenecks and determine which changes are needed to improve system performance, whether to CPU, memory, the number of test clients, or backend dependencies. After these changes are made, the load test scenarios must be re-run to verify the adjustments.

Load Testing with WAST

Web Application Stress (WAST) is a tool that simulates a large number of users with a relatively small number of client machines. Performance data on a web application can be gathered by stressing the website and measuring the maximum requests per second that the web server can handle while maintaining adequate response times. The next step is then to determine which resource prevents the requests per second from going higher.

26.6 Conclusion

Load testing is the measure of an entire Web application's ability to sustain a number of simultaneous users and transactions while maintaining adequate response times. It is the only way to accurately test the end-to-end performance of a Web site prior to going live. Two common methods for implementing this load testing process are manual and automated testing. Manual testing would involve:

• Coordination of the operations of users
• Measuring response times
• Repeating tests in a consistent way
• Comparing results

As load testing is iterative in nature, the performance problems must be identified so that the system can be tuned and retested to check for bottlenecks; for this reason, manual testing is not a very practical option. Today, automated load testing is the preferred choice for load testing a Web application: automated testing tools provide a more cost-effective and efficient solution than their manual counterparts, they minimize the risk of human error during testing, tests can easily be rerun any number of times, and the results can be reported automatically.

The testing tools typically use three major components to execute a test:

• A console, which organizes, drives and manages the load
• Virtual users, performing a business process on a client application
• Load servers, which are used to run the virtual users

27 Stress Testing

27.1 Introduction to Stress Testing

Typical testing is accomplished through reviews (of product requirements, software functional requirements, software designs, code, test plans, etc.), unit testing, system testing (also known as functional testing), expert user testing (like beta testing, but in-house), smoke tests, etc. All of these testing activities are important and each plays an essential role in the overall effort, but none of them specifically looks for problems like memory and resource management, and they do little to quantify the robustness of the application or determine what may happen under abnormal circumstances. We try to fill this gap in testing by using stress testing.

Stress testing can imply many different types of testing depending upon the audience; even in the literature on software testing, stress testing is often confused with load testing and/or volume testing. For our purposes, we define stress testing as performing random operational sequences at larger than normal volumes, at faster than normal speeds and for longer than normal periods of time, as a method to accelerate the rate of finding defects and to verify the robustness of our product. Stress testing in its simplest form is any test that repeats a set of actions over and over with the purpose of "breaking the product"; the system is put through its paces to find where it may fail. As a first step, you can take a common set of actions for your system and keep repeating them in an attempt to break the system; adding some randomization to these steps will help find more defects. How long can your application stay functioning doing this operation repeatedly? Did the system lock up after 100 attempts or 100,000 attempts?[1] To help you reproduce your failures, one of the most important things to remember is to log everything as you proceed: you need to know exactly what was happening when the system failed.

Some of the defects we have been able to catch with stress testing that were not found in any other way are memory leaks, deadlocks, software asserts and configuration conflicts. For more details about these types of defects and how we were able to detect them, refer to the section 'Typical Defects Found by Stress Testing'. Note that there are many other types of testing not mentioned above, for example risk-based testing, random testing, security testing, etc. Rather than trying to implement every testing type, pick the multiple testing types that will provide the best coverage for the product to be tested, and then master those testing types. Table 1 provides a summary of some of the strengths and weaknesses that we have found with stress testing.

Table 1: Stress Testing Strengths and Weaknesses

Strengths:
• Finds defects that no other type of test would find
• Using randomization increases coverage
• Tests the robustness of the application
• Helpful at finding memory leaks, deadlocks, software asserts and configuration conflicts

Weaknesses:
• Not a real-world situation
• Defects are not always reproducible
• One sequence of operations may catch a problem right away, while another sequence may never find it
• Does not test the correctness of the system's response to user input

27.2 Background to Automated Stress Testing

Stress testing can be done manually, which is often referred to as "monkey" testing. In this kind of stress testing, the tester uses the application "aimlessly" like a monkey – poking buttons, turning knobs, "banging" on the keyboard – in order to find defects. One of the problems with "monkey" testing is reproducibility: where the tester uses no guide or script and no log is recorded, it is often impossible to repeat the steps executed before a problem occurred. Attempts have been made to use keyboard spyware, video recorders and the like to capture user interactions, with varying (often poor) levels of success.

Our applications are required to operate for long periods of time with no significant loss of performance or reliability. We have found that stress testing a software application helps in assessing and increasing its robustness, and it has become a required activity before every software release. Previously we had attempted to stress test our applications using manual techniques, and found them lacking in several respects:

1. Manual techniques cannot provide the kind of intense simulation of maximum user interaction over time: humans cannot keep the rate of interaction high enough for long enough.
2. Manual testing generally does not allow for repeatability of command sequences, so reproducing failures is nearly impossible.
3. People tend to do the same things in the same way over and over, so some configuration transitions do not get tested.
4. Manual testing does not provide the breadth of test coverage of the product features/commands that is needed.
5. Manual testing does not perform automatic recording of discrete values with each command sequence for tracking memory utilization over time – critical for detecting memory leaks.

Performing stress testing manually is not feasible, and repeating the test for every software release is almost impossible, so this is a clear example of an area that benefits from automation: you get a return on your investment quickly, and it provides more than just a mirror of your manual test suite.

With automated stress testing, non-typical sequences of user interaction are tested with the system in an attempt to find latent defects not detectable with other techniques. Automated stress testing exercises the various features of the system at a rate exceeding that at which actual end-users can be expected to operate, and for durations of time that exceed typical use; the order in which the product features are accessed is randomized. The stress test is performed under computer control: the stress test tool is implemented to determine the application's configuration, to execute all valid command sequences in a random order, and to perform data logging. Since the stress test is automated, it becomes easy to execute multiple stress tests simultaneously across more than one product at the same time.

Depending on how the stress inputs are configured, stress can do both 'positive' and 'negative' testing. Positive testing is when only valid parameters are provided to the device under test, whereas negative testing provides both valid and invalid parameters as a way of trying to break the system under abnormal circumstances. For example, if a valid input is in seconds, positive testing would test 0 to 59 and negative testing would try -1 to 60.

Even though there are clear advantages to automated stress testing, it still has its disadvantages. For example, we have found that each time the product application changes we most likely need to change the stress tool (or, more commonly, commands need to be added to or deleted from the input command set). Also, if the input command set changes, then the output command sequence also changes, given the pseudo-randomization. Table 2 provides a summary of some of the advantages and disadvantages that we have found with automated stress testing.

Table 2: Automated Stress Testing Advantages and Disadvantages

Advantages:
• Performed under computer control
• Capability to test all product application command sequences
• Multiple product applications can be supported by one stress tool
• Uses randomization to increase coverage; tests vary with new seed values
• Repeatability of commands and parameters helps reproduce problems or verify that existing problems have been resolved
• Informative log files facilitate investigation of problems

Disadvantages:
• Requires capital equipment and development of a stress test tool
• Requires maintenance of the tool as the product application changes
• Reproducible stress runs must use the same input command set
• Defects are not always reproducible, even with the same seed value
• Requires test application information to be kept and maintained
• May take a long time to execute

In summary, automated stress testing overcomes the major disadvantages of manual stress testing and finds defects that no other testing type can find.

27.3 Automated Stress Testing Implementation

Automated stress testing implementations differ depending on the interface to the product application. The interfaces fall into two main categories:

1) Programmable interfaces: interfaces, like command prompts, that accept strings representing command functions without regard to context or the current state of the device.
2) Graphical User Interfaces (GUIs): interfaces that use the Windows model to allow the user direct control over the device, where individual windows and controls may or may not be visible and/or active depending on the state of the device.

The types of interfaces available to the product drive the design of the automated stress test tool. To take advantage of automated stress testing, our challenge was to create an automated stress test tool that would:

1. Simulate user interaction for long periods of time (since it is computer controlled, we can exercise the product more than a user can).
2. Provide as much randomization of command sequences to the product as possible, to improve test coverage over the entire set of possible features/commands.
3. Stress the resource and memory management features of the system.
4. Record the memory in use over time to allow memory management analysis.
5. Continuously log the sequence of events so that issues can be reliably reproduced after a system failure.

27.4 Programmable Interfaces

These interfaces have allowed users to set up, control and retrieve data in a variety of application areas like manufacturing, research and development, and service – for example, on a manufacturing line where the product is used 24 hours a day, 7 days a week. To meet the needs of these customers, the products provide programmable interfaces: General Purpose Interface Bus (GPIB), RS-232, Ethernet, Universal Serial Bus (USB), etc. These products generally support a large number of commands (1000+) and are required to operate for long periods of time; testing all possible combinations of commands on such products is practically impossible using manual testing methods.

Programmable-interface stress testing is performed by randomly selecting from a list of individual commands and sending these commands to the device under test (DUT) through the interface. If a command has parameters, the parameters are also enumerated by randomly generating a unique command parameter. By using a pseudo-random number generator, each unique seed value will create the same sequence of commands, with the same parameters, each time the stress test is executed. Each command is also written to a log file, which can be used later to reproduce any defects that were uncovered (a sketch follows).
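The seed-based reproducibility just described is easy to demonstrate. In this Python sketch the command names and parameter range are invented stand-ins for a real DUT command set:

```python
import random

COMMANDS = ["RESET", "MEASURE", "SET_RANGE", "TRIGGER", "READ?"]

def command_sequence(seed, count):
    rng = random.Random(seed)            # private generator per run
    for _ in range(count):
        cmd = rng.choice(COMMANDS)
        param = rng.randint(-1, 60)      # includes out-of-range values
        yield f"{cmd} {param}"           # (negative testing)

# The same seed yields an identical sequence on every run.
run1 = list(command_sequence(seed=1234, count=500))
run2 = list(command_sequence(seed=1234, count=500))
assert run1 == run2

# Log each command as it is sent, so a crash can be traced and replayed.
with open("stress_1234.log", "w") as log:
    for line in run1:
        log.write(line + "\n")           # also send to the DUT here
```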

Additionally, other variations of the automated stress test can be performed: the stress test can send the commands across multiple interfaces simultaneously (if the product supports it), it can vary the rate at which commands are sent to the interface, or it can send multiple commands at the same time.

27.5 Graphical User Interfaces

In recent years, Graphical User Interfaces have become dominant, and it became clear that we needed a means to test these user interfaces analogous to that used for programmable interfaces. However, a new approach was needed, since accessing the GUI is not as simple as sending streams of command-line input to the product application. Many controls are not visible until several levels of modal windows have been opened and/or closed. For example, a typical confirm-file-overwrite dialog box for a 'File->Save As…' filename operation is not available until the following sequence has been executed:

1. Set context to the main window
2. Select 'File->Save As…'
3. Select the target directory from the tree control
4. Type a valid filename into the edit box
5. Click the 'SAVE' button
6. If the filename already exists, either confirm the overwrite by clicking the 'OK' button in the confirmation dialog, or click the cancel button

In this case, you need to group these six operations together as one "big" operation in order to correctly exercise that particular 'OK' button. Additionally, the flow of each operation can be important. It is necessary to store not only the object recognition method for each control, but also information about its parent window and other information, like its expected state and certain property values. With this information it is possible to uniquely define all the possible product application operations (i.e., each control can be uniquely identified). An example would be a 'HELP' menu item: there may be multiple windows open with a 'HELP' menu item, so it is not sufficient to simply store "click the 'HELP' menu item"; you have to store "click the 'HELP' menu item for the particular window".

27.6 Data Flow Diagram

A stress test tool can have many different interactions and be implemented in many different ways. Figure 1 shows a block diagram that illustrates some of the stress test tool interactions. The main interactions for the stress test tool are with an input file and the Device Under Test (DUT); the input file provides the stress test tool with a list of all the commands and interactions needed to test the DUT.

1. two different techniques are used: 1. trying to execute the interaction. Some defects are just hard to reproduce – even with the same sequence of commands. then continue to reduce the number of commands in the playback until the defect is isolated. start removing subsystems from the database and re-run the stress test while monitoring the system resources. back-up and run the stress test from the last seed (for us this is normally just the last 500 commands). Instead.1. The basic flow control of an automated stress test tool is to setup the DUT into a known state and then to loop continuously selecting a new random interaction. 27. 2. and logging the results. This technique is most effective after full integration of multiple subsystems (or. To isolate the subsystem. Diminishing resource issues – (memory leaks and the like) are usually limited to a single subsystem.7 Techniques Used to Isolate Defects Depending on the type of defect to be isolated. System crashes – (asserts and the like) do not try to run the full stress test from the beginning. These defects should still be logged into the defect tracking system. data logging (commands and test results) and system resource monitoring are very beneficial in helping determine what the DUT was trying to do before it crashed and how well it was able to manage its system resources. As the defect rePerformance Testing Process & Methodology Proprietary & Confidential .1.6. If the defect still occurs.System Resource Monitor Input File Stress Test Tool DUT Log command Sequence Log Test Results 27. modules) has been achieved.1 Figure 1: Stress Test Tool Interactions Additionally.170 - . This loop continues until a set number of interactions have occurred or the DUT crashes. unless it only takes a few minutes to produce the defect. Continue this process until the subsystem causing the reduction in resources is identified.

Performance Testing Process & Methodology Proprietary & Confidential . Eventually. you will be able to detect a pattern. over time. Some defects just seem to be un-reproducible. continue to add additional data to the defect description. we know that the robustness of our applications increases proportionally with the amount of time that the stress test will run uninterrupted.171 - .occurs. especially those that reside around page faults. but overall. isolate the root cause and resolve the defect.

28 Test Case Coverage

28.1 Test Coverage

Test coverage is an important measure of quality for software systems. Test coverage analysis is the process of:
• Finding areas of a program not exercised by a set of test cases,
• Creating additional test cases to increase coverage, and
• Determining a quantitative measure of code coverage, which is an indirect measure of quality.

An optional aspect of test coverage analysis is:
• Identifying redundant test cases that do not increase coverage.

A test coverage analyzer automates this process. Test coverage analysis is sometimes called code coverage analysis; the two terms are synonymous. The academic world more often uses the term "test coverage", while practitioners more often use "code coverage". Coverage analysis requires access to the test program's source code and often requires recompiling it with a special command.

Code coverage analysis is a structural testing technique (white box testing). Structural testing compares test program behavior against the apparent intention of the source code; this contrasts with functional testing (black box testing), which compares test program behavior against a requirements specification. Structural testing examines how the program works, taking into account possible pitfalls in the structure and logic, while functional testing examines what the program accomplishes, without regard to how it works internally. Test coverage analysis can be used to assure the quality of the set of tests, not the quality of the actual product.

28.2 Test Coverage Measures

A large variety of coverage measures exist. Here is a description of some fundamental measures and their strengths and weaknesses.

28.3 Procedure-Level Test Coverage

Probably the most basic form of test coverage is to measure which procedures were and were not executed during the test suite. This simple statistic is typically available from execution profiling tools, whose real job is to measure performance bottlenecks. If the execution time in some procedures is zero, you need to write new tests that hit those procedures. But this measure of test coverage is so coarse-grained that it is not very practical.

28.4 Line-Level Test Coverage

The basic measure of a dedicated test coverage tool is tracking which lines of code are executed and which are not. This result is often presented in a summary at the procedure, file, or project level, giving the percentage of the code that was executed. Typically the line coverage information is also presented at the source code level, allowing you to see exactly which lines of code were executed and which were not. This, of course, is often the key to writing more tests that will increase coverage: by studying the unexecuted code, you can see exactly what functionality has not been tested.

28.5 Condition Coverage and Other Measures

It is easy to find cases where line coverage does not tell the whole story. For example, consider a block of code that is skipped under certain conditions (e.g., a statement in an if clause). If that code is shown as executed, you still do not know whether you have tested the case where it is skipped; you need condition coverage to know. There are many other test coverage measures, and in theory you should have more than line coverage; but in practice, most available code coverage tools do not provide much beyond basic line coverage. A large project that achieves 90% code coverage might be considered a well-tested product, and if you achieve 95+% line coverage and still have time and budget to commit to further testing improvements, it is an enviable commitment to quality!
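A concrete illustration of the line-versus-condition gap, in Python:

```python
def clamp(value, limit):
    # One test with value > limit executes every line of this function,
    # so line coverage reports 100% -- yet the case where the "if" body
    # is skipped has never been exercised.
    if value > limit:
        value = limit
    return value

assert clamp(10, 5) == 5   # 100% line coverage already
assert clamp(3, 5) == 3    # only condition/branch coverage reveals
                           # that this second test was still needed
```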

28.6 How Test Coverage Tools Work

To monitor execution, test coverage tools generally "instrument" the program by inserting "probes". How and when this instrumentation phase happens can vary greatly between products. Adding probes to the program will make it bigger and slower; if the test suite is large and time-consuming, the performance factor may be significant.

28.6.1 Source-Level Instrumentation

Some products add probes at the source level: they analyze the source code as written and add additional code (such as calls to a code coverage runtime) that records where the program reached. Such a tool may not actually generate new source files with the additional code; some products, for example, intercept the compiler after parsing but before code generation to insert the changes they need. One drawback of this technique is the need to modify the build process: a separate version, namely a code coverage version, needs to be maintained in addition to other versions such as debug (unoptimized) and release (optimized). Proponents claim this technique can provide higher levels of code coverage measurement (condition coverage, etc.) than other forms of instrumentation. This type of instrumentation is dependent on the programming language – the provider of the tool must explicitly choose which languages to support – but it can be somewhat independent of the operating environment (processor, OS, or virtual machine).

28.6.2 Executable Instrumentation

Probes can also be added to a completed executable file: the tool analyzes the existing executable and then creates a new, instrumented one. This type of instrumentation is independent of the programming language. However, it is dependent on the operating environment – the provider of the tool must explicitly choose which processors or virtual machines to support.

28.6.3 Runtime Instrumentation

Probes need not be added until the program is actually run. The probes exist only in the in-memory copy of the executable file; the file itself is not modified, so the same executable file used for product release testing can be used for code coverage. Because the file is not modified in any way, just executing it will not automatically start code coverage (as it would with the other methods of instrumentation): the code coverage tool must start program execution directly or indirectly. Alternatively, the code coverage tool can add a tiny bit of instrumentation to the executable; this new code wakes up and connects to a waiting coverage tool whenever the program executes, and does nothing if no coverage tool is waiting. This added code does not affect the size or performance of the executable. Like executable instrumentation, runtime instrumentation is independent of the programming language but dependent on the operating environment.
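Python happens to expose the hook on which this style of runtime instrumentation rests, so the idea fits in a few lines: the probe lives only in the running process, and nothing on disk is modified.

```python
import sys

executed = set()

def tracer(frame, event, arg):
    # Record (filename, line number) for each line the interpreter runs.
    if event == "line":
        executed.add((frame.f_code.co_filename, frame.f_lineno))
    return tracer

def demo(x):
    if x > 0:
        return "positive"
    return "non-positive"

sys.settrace(tracer)      # probes switched on in memory only
demo(1)                   # exercises just the "positive" branch
sys.settrace(None)

print(f"{len(executed)} distinct lines executed")
```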

28.7 Test Coverage Tools at a Glance

There are lots of tools available for measuring test coverage, for example:

Company             | Product          | OS          | Language
Bullseye            | BullseyeCoverage | Win32, Unix | C/C++
CompuWare           | DevPartner       | Win32       | C/C++, Java, VB
Rational (IBM)      | PurifyPlus       | Win32, Unix | C/C++, Java, VB
Software Research   | TCAT             | Win32, Unix | C/C++, Java
Testwell            | CTC++            | Win32, Unix | C/C++
Paterson Technology | LiveCoverage     | Win32       | C/C++, VB

Coverage analysis is a structural testing technique that helps eliminate gaps in a test suite; it helps most in the absence of a detailed, up-to-date requirements specification. Each project must choose a minimum percent coverage for its release criteria based on the available testing resources and the importance of preventing post-release failures; clearly, safety-critical software should have a high goal. A higher coverage goal should be set for unit testing than for system testing, since a failure in lower-level code may affect multiple high-level callers.

29 Test Case Points (TCP)

29.1 What is a Test Case Point (TCP)?

A TCP is a measure for estimating the complexity of an application; it is also used as an estimation technique to calculate the size and effort of a testing project. TCP counting ranks the requirements, and the test cases to be written for those requirements, as simple, average or complex, and quantifies this into a measure of complexity. In this courseware we give an overview of Test Case Points and do not elaborate on using TCP as an estimation technique.

29.2 Calculating the Test Case Points

Based on the Functional Requirement Document (FRD), the application is classified into various modules. For a web application, say, we can have 'Login and Authentication' as a module and rank that particular module as Simple, Average or Complex based on the number and complexity of the requirements for that module.

29.2.1 Complexity of Requirements

Requirement classification: Simple (1-3) | Average (4-7) | Complex (8-10)

A simple requirement is one which can be given a value on the scale of 1 to 3; an average requirement is ranked between 4 and 7; a complex requirement is ranked between 8 and 10.

The test cases for a particular requirement are classified into simple, average and complex based on the following four factors:
• Test case complexity for that requirement, OR
• Interfaces with other test cases, OR
• Number of verification points, OR
• Baseline test data.
Refer to the test case classification table given below.

29.2.2 Test Case Classification

Complexity Type | Complexity of Test Case | Interface with other Test Cases | No. of Verification Points | Baseline Test Data
Simple          | < 2 transactions        | 0                               | < 2                        | Not required
Average         | 3-6 transactions        | < 3                             | 3-8                        | Required
Complex         | > 6 transactions        | > 3                             | > 8                        | Required

A sample guideline for the classification of test cases is given below:
• Any verification point containing a calculation is considered 'Complex'.
• Any verification point which interfaces with or interacts with another application is classified as 'Complex'.
• Any verification point consisting of report verification is considered 'Complex'.
• A verification point comprising search functionality may be classified as 'Complex' or 'Average' depending on its complexity.
Depending on the respective project, the complexity needs to be identified in a similar manner.

The adjustment factor in the table mentioned below is pre-determined and must not be changed from project to project; it has been calculated after a thorough study and analysis of many testing projects.

Performance Testing Process & Methodology Proprietary & Confidential . average and complex test cases. which interfaces with or interacts with another application is classified as 'Complex' Any verification point consisting of report verification is considered as 'Complex' A verification point comprising Search functionality may be classified as 'Complex' or 'Average' depending on the complexity Depending on the respective project. the complexity needs to be identified in a similar manner. This adjustment factor has been calculated after a thorough study and analysis done on many testing projects.29. The Adjustment Factor in the table mentioned below is pre-determined and must not be changed for every project. Based on the test case type an adjustment factor is assigned for simple.2 Test Case Classification Complexity Type Complexity of Test Case Interface with other Test case 0 <3 >3 Number of verification points <2 3-8 >8 Baseline Test Data Not Required Required Required Simple Average Complex < 2 transactions 3-6 transactions > 6 transactions A sample guideline for classification of test cases is given below.2.177 - . • • • • Any verification point containing a calculation is considered 'Complex' Any verification point.1.

Test Case Type Simple Average Complex Total Test Case Points Complexity Adjustment Weight Factor 1 2 3 2(A) 4(B) 8(C) Number Result No of Simple requirements in the Number*Adjust factor A project (R1) No of Average requirements in Number*Adjust factor B the project (R2) No of Complex requirements in Number*Adjust factor C the project (R3) R1+R2+R3 From the break up of Complexity of Requirements done in the first step.178 - . we arrive at the count of Total Test Case Points. 29. we can get the number of simple. average and complex test case points. average and complex test case types.3 Chapter Summary This chapter covered the basics on      What is Test Coverage Test Coverage measures How does Test coverage tools work List of Test Coverage tools What is TCP and how to calculate the Test Case Points for an application Performance Testing Process & Methodology Proprietary & Confidential . By multiplying the number of requirements with it s corresponding adjustment factor. we get the simple. Summing up the three results.