
Symphony Services

Testing Guidelines

Document Summary

Guideline Name: Testing Guidelines
Version: 2.0
Prepared By: Poornima J
Date Prepared: August 29, 2003
Summary: This document explains the guidelines for testing activities.

Document Revision History

Document Name: QMS_GUID_TES_01.doc
Document Path: QMS\Guidelines\Testing\

Version  Change Date        Changed By    Change Summary
0.1      May 29, 2003       Poornima J    First Draft
0.2      June 4, 2003       Poornima J    Reviewed by QAI
0.3      June 11, 2003      Poornima J    Reviewed by EPG
0.4      June 18, 2003      Poornima J    Reviewed by PC
1.0      August 29, 2003    Preethi. V    Approved and Baselined
1.1      December 8, 2006                 Review comments incorporated
2.0      December 8, 2006                 Reviewed and Baselined


Contents

1.0 Introduction
  1.1 Objective
  1.2 Intended Audience
2.0 References
3.0 Objectives of Testing
4.0 About Testing
  4.1 Static Testing
  4.2 Dynamic Testing
5.0 Levels of Testing
  5.1 Unit Testing
  5.2 Functional Unit Testing
  5.3 Integration Testing
  5.4 System Testing
6.0 Testing Strategies
  6.1 Black Box Testing
  6.2 White Box Testing
  6.3 Functional Testing
  6.4 System Testing
  6.5 End-to-End Testing
  6.6 Sanity Testing
  6.7 Regression Testing
  6.8 Load Testing
  6.9 Stress Testing
  6.10 Performance Testing
  6.11 Usability Testing
  6.12 Install/Uninstall Testing
  6.13 Recovery Testing
  6.14 Security Testing
  6.15 Compatibility Testing
7.0 Test Plan
8.0 Test Cases, Test Scripts and Test Scenarios
9.0 Best Practices in Testing
10.0 Object Oriented Testing Methods
  10.1 OO Test Case Design
  10.2 Fault Based Testing
  10.3 Class Level Methods
  10.4 Random Testing
  10.5 Partition Testing
  10.6 Scenario-based Testing
11.0 Object Oriented Testing Strategies
  11.1 OO Unit Testing
  11.2 OO Integration Testing
  11.3 OO Function Testing and OO System Testing
12.0 Object Oriented Testing Metrics
  12.1 Encapsulation
  12.2 Inheritance
13.0 Web Testing for Websites and Web Applications
  13.1 What is Web Testing?
  13.2 Performance Testing
  13.3 What is Performance Testing Automation?
  13.4 What is Regression Test Automation?
14.0 Testing Java Applets and Applications
  14.1 Economics of Java Clients
  14.2 Cross Platform Support
  14.3 Rapid Enhancement
  14.4 Customized Clients
  14.5 Increased Load
  14.6 Expanded Use-Cases
15.0 Test Case Verification

Testing Guidelines

Version: 2.0 Dated: December 8, 2006

1.0 Introduction

This guideline gives details about the various levels of testing and testing strategies. It covers the process perspectives of the testing areas and shall be used accordingly.

1.1 Objective

This document guides the reader on testing methodologies and testing processes. Executing tests in live projects requires use of the respective Quality Management System procedures, forms and templates, plus complementary reading to learn more about specific testing scenarios.

1.2 Intended Audience

Project team members, Quality Assurance personnel and Engineering Process Group members can use this guideline to execute testing activities.

2.0 References

1. ISO Standards
2. SEI CMMi framework
3. IEEE Standards

3.0 Objectives of Testing

Testing has the following objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet undiscovered error.
3. A successful test is one that uncovers an as-yet undiscovered error.
4. Testing demonstrates that software functions appear to work according to specification and that performance requirements appear to have been met.

4.0 About Testing

Testing is of two types:
1. Static Testing
2. Dynamic Testing

4.1 Static Testing

Static testing is a kind of testing in which the software code is not executed but is reviewed to identify and eliminate defects. Examples: walkthrough reviews, peer reviews, software inspections, etc.

Symphony Services Internal and Confidential

Page 4 of 17


4.2 Dynamic Testing

In dynamic testing the software code is actually executed in the testing environment to identify and remove defects. This guideline details dynamic testing; reviews are not covered here. Refer to the Review Guidelines for additional information on walkthroughs, software inspections and peer reviews.

5.0 Levels of Testing

There are five levels of testing:
1. Unit Level
2. Functional Level
3. Integration Level
4. System Level
5. Acceptance Level

Now let us understand each level of testing in a nutshell.

5.1 Unit Testing

Unit testing focuses verification effort on the smallest unit of software design: the module. Using the procedural design descriptions as a guide, important control paths are tested to uncover errors within the boundary of the module. The unit test is normally white-box oriented, and the steps can be conducted in parallel for multiple modules. Authors of code do unit testing after the self-review and peer review of the unit are over; unit testing is the first round of formal testing conducted after initial code completion.

Unit test cases may be in the form of a document or in the form of unit test programs. In the case of documented unit test cases, the programs are run to make sure that they produce the expected results. In the case of unit test programs, the test programs are executed to demonstrate that the code passes the test cases. The unit tests should demonstrate that each operation or function produces the expected results, and the results are recorded in document form or as the output of the unit test programs. If any errors are found during unit testing, the developer(s) fix the code and rerun the tests; unit testing is carried out in this manner until the unit tests run successfully.

Unit testing can be conducted in different ways, as listed below:
1. Test drivers that call the code
2. Running the code through a debugger
3. Running the code and reviewing trace files
4. Running a User Interface that calls the code
5. Running other components that call the code
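
The test-driver approach in item 1 above can be sketched as follows. This is an illustrative Python example: the unit under test (`is_leap_year`) and the driver are hypothetical, not part of any mandated template.

```python
# Hypothetical unit under test (illustrative): a leap-year check.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# A minimal test driver that calls the code and records any mismatches.
def run_unit_tests() -> list:
    cases = [            # (input, expected result)
        (2000, True),    # divisible by 400
        (1900, False),   # divisible by 100 but not by 400
        (2004, True),    # divisible by 4 only
        (2001, False),   # not divisible by 4
    ]
    failures = []
    for year, expected in cases:
        actual = is_leap_year(year)
        if actual != expected:
            failures.append((year, expected, actual))
    return failures

assert run_unit_tests() == []   # all unit tests pass
```

Each failure tuple records the input and the mismatch, so the results of the unit test can be kept as a record, and the driver can simply be rerun after the developer fixes the code.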



5.2 Functional Unit Testing

Functional testing focuses on verification of independent feature units after the coding for them is complete. This can be white-box or black-box testing. If the necessary interfaces are not available for this kind of testing, test stubs may be developed to stand in for them. A business analyst or the developer may conduct the testing.
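
A test stub standing in for a missing interface can be sketched as below. The price-quoting function and the tax-rate service are assumptions made up for illustration; `unittest.mock.Mock` from the Python standard library plays the role of the stub.

```python
from unittest.mock import Mock

# Hypothetical feature unit: price quoting that depends on a tax-rate
# service whose real interface is not yet available.
def quote_price(net: float, tax_service) -> float:
    return round(net * (1 + tax_service.rate_for("US")), 2)

# Stand in a stub for the missing interface so the unit can be tested now.
tax_stub = Mock()
tax_stub.rate_for.return_value = 0.10   # canned answer instead of a live call

assert quote_price(100.0, tax_stub) == 110.0
tax_stub.rate_for.assert_called_once_with("US")
```

Once the real interface becomes available, the stub is replaced and the same test is re-run against it.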

5.3 Integration Testing

Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing. Even when the individual units work well, problems in putting them together are exposed by integration testing. Integration test team members shall carry out the test after unit testing has been successfully completed.

5.4 System Testing

System testing is actually a series of different tests whose purpose is to fully exercise the overall system. Although each test has a different purpose, all work to verify that all system elements have been properly integrated and perform their allocated functions. System testing team members shall carry out the tests after the successful completion of integration testing.

5.4.1 Types of System Testing

1. Recovery Testing: a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed.
2. Security Testing: attempts to verify that protection mechanisms built into a system will in fact protect it from improper penetration.
3. Stress Testing: executes a system in a manner that demands resources in abnormal quantity, frequency or volume.
4. Performance Testing: designed to test the run-time performance of software within the context of an integrated system.
5. Acceptance Testing: based on the acceptance criteria stated by the customer; customer personnel should test the system at the customer site. Often, system testing and acceptance testing are combined (based on the business and testing scenario), and acceptance testing is done separately only when needed. The planning and execution of acceptance testing depend upon the business needs and the understanding between the customer and the supplier organization.

6.0 Testing Strategies

Having understood the levels of testing, let us now understand the testing strategies and what type of strategy shall be adopted for the various levels of testing. There are two types of testing strategies, namely:



1. Black box
2. White box

6.1 Black box testing

Black box testing is not based on any knowledge of internal design or code; tests are based on requirements and functionality. The following are considered under black box testing:
1. Behavioral testing methods
2. Equivalence partitioning
3. Boundary value analysis
4. Comparison testing
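
Equivalence partitioning and boundary value analysis from the list above can be illustrated with a small sketch. The `grade` function and its 0-100 range are hypothetical; only the specified behavior, not the internal code, drives the test selection.

```python
# Hypothetical function under test (illustrative): classify a percentage score.
def grade(score: int) -> str:
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 60 else "fail"

# Equivalence partitioning: one representative per input class.
assert grade(75) == "pass"    # valid partition: passing scores
assert grade(30) == "fail"    # valid partition: failing scores

# Boundary value analysis: test at and just beyond each boundary.
assert grade(0) == "fail" and grade(100) == "pass"
assert grade(59) == "fail" and grade(60) == "pass"
for invalid in (-1, 101):     # invalid partitions
    try:
        grade(invalid)
        assert False, "expected ValueError"
    except ValueError:
        pass
```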

6.2 White box testing

White box testing is based on knowledge of the internal logic of an application's code; tests are based on coverage of code statements, branches, paths and conditions. The following are considered under white box testing:
1. Path testing
2. Control structure testing
3. Condition testing
4. Data flow testing
5. Loop testing
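
Loop testing and condition testing from the list above can be sketched against a tiny hypothetical function; note that the tests are chosen from knowledge of the internal loop and branch, not from the requirements alone.

```python
# Hypothetical code under test (illustrative): sum of the positive values.
def sum_positive(values):
    total = 0
    for v in values:          # loop under test
        if v > 0:             # condition/branch under test
            total += v
    return total

# Loop testing: exercise the loop zero, one, and many times.
assert sum_positive([]) == 0          # zero iterations
assert sum_positive([5]) == 5         # exactly one iteration
assert sum_positive([1, 2, 3]) == 6   # typical case, many iterations

# Condition testing: drive both outcomes of the branch.
assert sum_positive([-4, 7]) == 7     # false and true branches both taken
```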

6.3 Functional testing

This is black-box testing geared to the functional requirements of an application; testers should do this type of testing. This doesn't mean that programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).

6.4 System testing

This is black-box testing based on the overall requirements specifications; it covers all combined parts of a system.

6.5 End-to-end testing

This is similar to system testing; it is the 'macro' end of the test scale and involves testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems as appropriate.

6.6 Sanity testing

This is typically an initial testing effort to determine whether a new software version is performing well enough to accept it for a major testing effort. For example, if the



new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

6.7 Regression testing

This is re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
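
A minimal regression test pins a previously fixed defect so that later modifications cannot silently reintroduce it. The function and its past bug are illustrative; in practice such suites are run by automated testing tools after every change.

```python
# Illustrative unit with a fixed defect: an earlier version mishandled
# leading/trailing/multiple spaces in names.
def normalize_name(name: str) -> str:
    return " ".join(name.split()).title()

def regression_suite() -> bool:
    # Re-run after every fix or modification of the software.
    assert normalize_name("ada lovelace") == "Ada Lovelace"
    assert normalize_name("  grace   hopper ") == "Grace Hopper"  # the old bug
    assert normalize_name("") == ""
    return True

assert regression_suite()
```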

6.8 Load testing

This is testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
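
A rough sketch of a load test using only the Python standard library is shown below. The "system under load" is a stand-in function; in a real load test the requests would go to the actual website or application, and the timings would be plotted against the load level.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the system under load (illustrative): a handler whose
# response time is measured as the number of concurrent callers increases.
def handle_request(payload: int) -> int:
    return sum(range(payload))   # simulated work

def measure(load: int) -> float:
    # Fire `load` concurrent requests and return the total elapsed seconds.
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=load) as pool:
        list(pool.map(handle_request, [50_000] * load))
    return time.perf_counter() - start

# Step up the load and watch for the point where response time degrades.
for load in (1, 5, 20):
    print(f"{load:>3} concurrent requests: {measure(load):.3f}s")
```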

6.9 Stress testing

This term is often used interchangeably with 'load' and 'performance' testing. It is also used to describe tests such as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

6.10 Performance testing

This term is often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in the requirements documentation or in QA or test plans.

6.11 Usability testing

This is testing for 'user-friendliness'. Clearly this is subjective and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

6.12 Install/uninstall testing

This is testing of full, partial, or upgrade install/uninstall processes.

6.13 Recovery testing

This is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

6.14 Security testing

This tests how well the system protects against unauthorized internal or external access, willful damage, etc.; it may require sophisticated testing techniques.

6.15 Compatibility testing

This tests how well the software performs in a particular hardware/software/operating system/network environment.



7.0 Test Plan

A test plan is a test strategy document covering all the requirements for testing. It can also be considered the test requirements document that captures all the customer's requirements for testing.

Things to do when developing a test plan:
1. Identify the requirements to be tested. All test cases shall be derived using the current Design Specification.
2. Identify which particular test(s) you are going to use to test each module.
3. Review the test data and test cases to ensure that the unit has been thoroughly verified and that the test data and test cases are adequate to verify proper operation of the unit.
4. Identify the expected results for each test.
5. Document the test case configuration, test data, and expected results. A successful Peer Technical Review baselines the TCD and initiates coding.
6. Perform the test(s).
7. Document the test data, test cases, and test configuration used during the testing process.
8. Successful unit testing is required before the unit is eligible for component integration/system testing.
9. Unsuccessful testing requires a Defect Report to be generated. This document shall describe the test case, the problem encountered, its possible cause, and the sequence of events that led to the problem. It shall be used as a basis for later technical analysis.
10. Any specifications to be reviewed, revised, or updated shall be handled immediately.
11. Deliverables: Test Case Design, System/Unit Test Report, Defect Report (if any).

The above points can be restated macroscopically as follows:
1. Prepare the test plan
2. Review the test plan
3. Approve the test plan
4. Prepare the Test Design Document
5. Review the Test Design Document
6. Approve the Test Design Document
7. Execute the tests
8. Prepare test reports
9. Report the test results to the Project Leader / Project Manager



8.0 Test Cases, Test Scripts and Test Scenarios

Application testing is based upon a number of components, including test cases, test scripts and test scenarios. Test cases describe the steps/actions to be taken in testing the application and the expected results. Test scripts describe these steps/actions and expected results from a day-in-the-life-of-a-user perspective. Test scenarios describe (a part of) the business process that the application supports. Test cases, test scripts and scenarios are used to test and demonstrate the correct operation of the application. For manual testing, test cases may be combined with test scripts, but care should be taken not to confuse a test script written for a test case with the test case itself. In other words, test scripts are written for a test case, and the test case to test script relationship is one-to-many.

Preparing test cases, test scripts and scenarios is therefore an important task in preparation for the testing phases. One challenge is determining the number of test cases and scenarios required to adequately test the application. As stated by Linda Hayes of Auto Tester, Inc. in Software Magazine (March 1994), a good rule of thumb is to apply the 80/20 rule. This is based on the assumption that 20% of the application will account for 80% of the business activities; therefore, the test cases and scenarios should focus on the most important 20% of the application.

Herb Isenberg, head of testing services for Charles Schwab, provides the following hints for designing test cases and scenarios:
1. Test case independence: the current test case should not depend on the success of the previous one in order to run.
2. Start and end points: define where the test case starts and ends in the application. Also define how to navigate to the place the test case needs to run.
3. No gap, no overlap: test cases must cover all important business functions, avoiding excessive clustering around major functions and gaps around other functions.
4. Keep-it-simple philosophy: do not make test cases overly complex.

The following are recommended:
1. Each window must be included in at least one scenario, ideally two.
2. Each major piece of functionality from the functionality matrix must be included in at least one scenario.
3. Write more short scenarios as opposed to a few long scenarios.
4. Focus on the business reasoning behind the scenario.
5. Provide enough detail to demonstrate how to execute the scenario without getting too caught up in buttons being pushed and fields being enabled and disabled.

Generally the client's users have the responsibility for developing robust and detailed user scenarios, since they have the most in-depth, first-hand understanding of the real-life business situations the applications must support and understand the special needs of the user community. The user scenarios must be completed before system testing begins.
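
The one-to-many relationship between a test case and its test scripts can be sketched as a simple data model. The class and field names here are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class TestScript:
    steps: list          # actions from a day-in-the-life-of-a-user perspective
    expected: str        # expected result of following the steps

@dataclass
class TestCase:
    case_id: str
    description: str
    scripts: list = field(default_factory=list)   # one case -> many scripts

login = TestCase("TC-01", "User can log in")
login.scripts.append(TestScript(["open login page", "enter valid credentials"],
                                "home page is shown"))
login.scripts.append(TestScript(["enter wrong password three times"],
                                "account is locked"))

assert len(login.scripts) == 2   # one test case, many test scripts
```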



9.0 Best Practices In Testing

We suggest the following best practices be adopted in verification and validation activities:
1. Requirements shall be verified and validated with the system test design.
2. The design document shall be verified and validated with the integration test design.
3. Code shall be verified and validated with the unit test design.
4. User documentation shall be validated with the system test design.

10.0 Object Oriented Testing Methods

While the jury is still out on whether "traditional" testing methods and techniques are applicable to OO models, there seems to be a consensus that because the OO paradigm is different from the traditional one, some alteration or expansion of the traditional testing methods is needed. The OO methods may utilize many or just some aspects of the traditional ones, but they need to be broadened to sufficiently test OO products. Because of inheritance and inter-object communication in an OO environment, much more emphasis is placed on the analysis and design and their "correctness" and consistency. This is imperative to prevent analysis errors from trickling down to design and development, which would increase the effort needed to correct the problem.

10.1 OO Test Case Design


Conventional test case designs are based on the process they are to test and its inputs and outputs. OO test cases need to concentrate on the states of a class. To examine the different states, the cases have to follow the appropriate sequence of operations in the class. The class, as an encapsulation of attributes and operations that can be inherited, thus becomes the main target of OO testing. Operations of a class can be tested using the conventional white-box methods and techniques (basis path, loop, data flow), but there is also a school of thought that applies these at the class level instead.

10.2 Fault Based Testing


This type of testing allows for designing test cases based on the client specification or the code or both. It tries to identify plausible faults (areas of design or code that may lead to errors). For each of these faults a test case is developed to "flush" the errors out. These tests also force each line of code to be executed. This testing method does not find all types of errors, however. Incorrect specifications and interface errors can be missed. You may remember that these types of errors can be uncovered by function testing in the traditional model. In



OO model, interaction errors can be uncovered by scenario-based testing. This form of OO testing can only test against the client's specifications, so interface errors are still missed.

10.3 Class Level Methods


As mentioned above, the class (and its operations) is the module most concentrated on in OO environments. From here testing should expand to other classes and sets of classes, just as traditional models are tested by starting at the module and continuing to module clusters or builds and then the whole program.

10.4 Random Testing


This is one of the methods used to exercise a class. It is based on developing a random test sequence that exercises the minimum number of operations typical of the behavior of the class.

10.5 Partition Testing


This method categorizes the inputs and outputs of a class in order to test them separately, which minimizes the number of test cases that have to be designed. To determine the different categories to test, partitioning can be broken down as follows:
1. State-based partitioning: categorizes class operations based on how they change the state of the class.
2. Attribute-based partitioning: categorizes class operations based on the attributes they use.
3. Category-based partitioning: categorizes class operations based on the generic function the operations perform.
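
State-based partitioning can be illustrated with a small sketch: operations that change the state of the class are tested in one partition and read-only operations in another, so each partition needs fewer cases. The `Account` class is a hypothetical example.

```python
# Illustrative class for partition testing (all names are assumptions).
class Account:
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):   # state-changing operation
        self.balance += amount
    def withdraw(self, amount):  # state-changing operation
        self.balance -= amount
    def report(self):            # read-only operation
        return f"balance={self.balance}"

# Partition 1: operations that change state, tested as a group.
mutators = Account()
mutators.deposit(100)
mutators.withdraw(40)
assert mutators.balance == 60

# Partition 2: read-only operations; state must remain untouched.
readers = Account()
assert readers.report() == "balance=0"
assert readers.balance == 0
```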

10.6 Scenario-based Testing


This form of testing concentrates on what the user does. It basically involves capturing the user's actions and then simulating them and similar actions during the test. These tests tend to find interaction-type errors.

11.0 Object Oriented Testing Strategies

Testing strategy is one area of software testing where the traditional (procedural) and OO models follow the same path: testing starts with unit testing and then continues with integration testing, function testing and finally system testing. The meaning of the individual strategies has had to be adjusted, however.

11.1 OO Unit Testing

In the OO paradigm it is no longer possible to test individual operations as units. Instead they are tested as part of the class, and the class or an instance of a class (an object) then represents the smallest testable unit or module. Because of inheritance, testing individual operations separately (independently of the class) would not be very effective, as they interact with each other by modifying the



state of the object they are applied to (Binder, 1994).
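
The point that operations interact through shared object state, and so must be tested together as a class, can be illustrated with a sketch (the `Stack` class is hypothetical):

```python
# In OO testing the class, not the single operation, is the smallest unit,
# because its operations interact through the shared state of the object.
class Stack:
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def size(self):
        return len(self._items)

# Testing push() alone would miss errors that only appear in sequences of
# operations, so the unit test drives the class through its states.
s = Stack()
s.push(1)
s.push(2)
assert s.pop() == 2             # last in, first out
assert s.size() == 1
assert s.pop() == 1 and s.size() == 0
```

Testing `push` in isolation could not reveal an ordering error that only a sequence of `push` and `pop` calls exposes.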

11.2 OO Integration Testing


This strategy involves testing the classes as they are integrated into the system. The traditional approach would test each operation separately as it is implemented into a class. In an OO system this approach is not viable because of the "direct and indirect interactions of the components that make up the class" (Pressman, 1997). Integration testing in OO can be performed in three basic ways (Binder, 1994):
1. Thread-based: takes all the classes needed to react to a given input. Each class is unit tested, and then the thread constructed from these classes is tested as a set.
2. Uses-based: tests classes in groups. Once a group is tested, the next group that uses the first group (its dependent classes) is tested, then the group that uses the second group, and so on. Use of stubs or drivers may be necessary.
3. Cluster-based: similar to testing builds in the traditional model; collaborating classes are tested in clusters.

11.3 OO Function Testing and OO System Testing


Function testing of OO software is no different than validation testing of procedural software. Client involvement is usually part of this testing stage. In OO environment use cases may be used. These are basically descriptions of how the system is to be used. OO system testing is really identical to its counterpart in the procedural environment.

12.0 Object Oriented Testing Metrics

Testing metrics can be grouped into two categories:
1. Encapsulation
2. Inheritance

12.1 Encapsulation
1. Lack of cohesion in methods (LCOM): the higher the value of LCOM, the more states have to be tested.
2. Percent public and protected (PAP): the percentage of class attributes that are public, and thus the likelihood of side effects among classes.
3. Public access to data members (PAD): the number of classes that access another class's attributes, a violation of encapsulation.
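
A rough sketch of computing a PAP-like figure is shown below. Python has no true access modifiers, so "public" is approximated here by the no-leading-underscore naming convention; the metric function and the sample class are illustrative assumptions, not a standard tool.

```python
# Approximate PAP for a Python class: share of class attributes that are
# "public" by naming convention (underscore prefix marks non-public).
def percent_public_attributes(cls) -> float:
    attrs = [a for a in vars(cls) if not a.startswith("__")]   # skip dunders
    public = [a for a in attrs if not a.startswith("_")]
    return 100.0 * len(public) / len(attrs) if attrs else 0.0

class Leaky:
    rate = 1        # public: other classes may come to depend on it
    limit = 10      # public
    _cache = None   # non-public by convention

# Two of three attributes are public, so side effects are more likely.
assert round(percent_public_attributes(Leaky), 1) == 66.7
```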

12.2 Inheritance

1. Number of root classes (NOR): a count of distinct class hierarchies.
2. Fan-in (FIN): FIN > 1 is an indication of multiple inheritance and should be avoided.
3. Number of children (NOC) and depth of the inheritance tree (DIT): for each



subclass, its super class has to be re-tested. The above metrics (and others) are different from those used in traditional software testing; however, the metrics collected from testing should be the same (i.e. number and type of errors, performance metrics, etc.).

13.0 Web Testing for websites and Web applications

How do you characterize the World Wide Web? Users are around the corner and halfway around the world. There is a wide variety of client hardware and software platforms, and an ever-changing mix of technologies: Java, JavaScript, Java Beans, ActiveX, HTML and the web browsers. Ten hits per second can become ten thousand hits per second after the press release hits the wire.

13.1 What is Web Testing?


Web testing goes beyond the basic functional and system testing of the client/server world to include tests for availability, performance, usability and compatibility. Web testing involves more variables, less time, higher risks and more exposure. An optimal web testing approach begins with a thorough risk analysis of your application to identify and prioritise key areas and testing tasks.
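A risk analysis of this kind can be sketched as a simple likelihood-times-impact ranking. The scoring scale (1 to 5) and the task names below are illustrative assumptions, not part of this guideline.

```python
def prioritise_tasks(tasks):
    """Hypothetical risk-based prioritisation: risk = likelihood x impact,
    highest-risk testing areas first."""
    return sorted(tasks, key=lambda t: t["likelihood"] * t["impact"], reverse=True)

# Scores are made up for illustration (1 = low, 5 = high).
tasks = [
    {"area": "checkout flow", "likelihood": 4, "impact": 5},
    {"area": "help pages",    "likelihood": 2, "impact": 1},
    {"area": "login/session", "likelihood": 3, "impact": 5},
]

ranked = prioritise_tasks(tasks)
print([t["area"] for t in ranked])  # ['checkout flow', 'login/session', 'help pages']
```

The ranked list then drives how test effort is allocated across the areas identified by the risk analysis.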

13.2 Performance Testing


Performance testing addresses questions such as: How many people can use your website, web or client/server application simultaneously? How should performance tests be designed and executed? Will the current application architecture remain viable as the user base grows? All systems have bottlenecks; where are the bottlenecks in your system, and will they become an issue as the user base or transaction rates increase? Performance testing identifies current bottlenecks in the website, web or client/server application and verifies that it meets or exceeds key performance measures. It answers questions like: 1. Can my website support 1000 hits/second, and if so, for how long? 2. Can my e-commerce application handle 500 or more users searching for products while 250 or more users add items to their shopping carts simultaneously? 3. What happens to application performance as the backend database grows larger and larger? 4. If concrete performance goals aren't defined, performance testing can still answer the question: "At what point will my application malfunction or fail?"
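The core measurement behind these questions can be sketched with a thread pool that drives simulated concurrent users and reports achieved throughput. The `handle_request` function is a stand-in for a real request to the system under test; real tools replace it with actual HTTP traffic.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Stand-in for one request to the system under test (hypothetical)."""
    time.sleep(0.01)  # simulated 10 ms service time

def measure_throughput(concurrent_users: int, requests_per_user: int = 5) -> float:
    """Drive N simulated users concurrently; return requests/second achieved."""
    total = concurrent_users * requests_per_user
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(handle_request) for _ in range(total)]
        for future in futures:
            future.result()  # propagate any failure from the worker
    elapsed = time.perf_counter() - start
    return total / elapsed

for users in (1, 10, 50):
    print(f"{users:3d} users: {measure_throughput(users):8.1f} req/s")
```

Plotting throughput against user load shows where it stops scaling, which is exactly the bottleneck question the text raises.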

13.3 What is Performance Testing Automation?


Performance testing automation uses sophisticated tools and high-powered hardware to execute performance tests over a broad range of user loads or transaction rates. Why is this necessary? Manually executing performance tests at large user loads just isn't feasible. For example, consider manually testing an e-commerce website at a 500-user load: imagine what it would take to configure 500 computers, instruct each of the 500 users about what to do and when to do it, coordinate the test execution and collect and analyse the test results! Performance testing needs to be done for: 1. Website performance testing 2. Web application performance testing 3. Ongoing / periodic performance testing of live sites 4. Regression test automation

13.4 What is Regression Test Automation?


Regression testing involves executing a predefined battery of tests against successive builds of an application to verify that bugs are being fixed and that features and functions that worked in the previous build haven't been broken. Regression testing is an essential part of testing, but it is very repetitive and can become tedious when executed manually build after build. Regression test automation alleviates this tedium by executing the tests with an automated regression-testing tool. The tool acts just as a user would, interacting with the application to input data and verify expected responses. Implemented properly, an automated regression battery can be run unattended and overnight, freeing testers to concentrate on testing new features and functionality. Regression test automation services include: 1. Tool evaluation and selection 2. Automation jump-start 3. Turnkey automation implementation and execution 4. Cross-platform automation 5. Automation maintenance and enhancement

14.0 Testing Java Applets and Applications


There is a revolution in enterprise client/server computing: Java clients. Before Java, the cost of distributing and maintaining client software impeded universal access to enterprise computing assets. Information systems (IS) organizations bore the burden of installing, updating and supporting client software for each user; only users whose job tasks directly and significantly benefited from access to a server-based asset warranted client software. Now, with Java clients, anyone with a Web browser can download an applet and access server-based assets. Distributing client software is simply a matter of posting the applet to a Web page, and updating the client software is just as easy.

14.1 Economics Of Java Clients

Much of the support cost is associated with assuring proper configuration of the browser or standalone Java virtual machine, and this cost is amortized across all Java client software that runs on top of these platforms. With the near-zero distribution and support costs of Java clients, IS costs shift to client and server software quality assurance. The shift to Java poses three new problems for client quality assurance.

14.2 Cross platform support


The advantage of Java is that anyone can download an applet and run it in any browser on any platform. Unfortunately, there are slight differences across platforms, virtual machine vendor implementations and windowing toolkits. IS has a new challenge in assuring that Java clients work properly for all combinations of these components.
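The size of that challenge is the cross product of the supported components. A quick sketch of the configuration matrix (the platform, browser and JVM names are illustrative; real values come from the project's support policy):

```python
from itertools import product

# Illustrative support matrix, not a real policy.
platforms = ["Windows", "macOS", "Linux"]
browsers = ["Browser A", "Browser B"]
jvms = ["Vendor A JVM", "Vendor B JVM"]

combinations = list(product(platforms, browsers, jvms))
print(len(combinations))  # 12 configurations to verify
for platform, browser, jvm in combinations[:3]:
    print(platform, "/", browser, "/", jvm)
```

Even this small matrix yields 12 configurations, which is why adding one more supported component multiplies, rather than adds to, the QA workload.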

14.3 Rapid Enhancement


With the instant distribution capabilities of Java, IS will be under pressure to rapidly deliver enhancements to client software. This requirement compounds the quality assurance problem of cross-platform support.

14.4 Customized Clients


Users in different departments will want to access some of the same systems. However, because they probably have different perspectives on the enterprise, each will want Java clients that cater to its perspective. IS will be under pressure to provide Java clients customized to the specific needs of each department. More versions of client software will further compound the quality assurance problem. In addition to these client software quality issues, there are also two issues with server software quality.

14.5 Increased Load


The accessibility of clients will increase application load. Moreover, Java clients often imply n-tier architectures. Such architectures increase the number of components that can fail under the increased load. This issue will make load testing to identify interoperability, capacity and performance problems increasingly important.

14.6 Expanded Use-Cases


With more users from many different departments accessing server-based assets, users will likely exercise application features in new and unexpected combinations. This issue will make comprehensive functional testing increasingly important. The new client and server software quality issues make both functional and load testing crucial. All organizations need a complete solution for functional testing of Java clients, load testing of Java-based applications and managing the increased volume of testing data. As a further complication, many applications will have a combination of traditional and Java clients for the foreseeable future. Therefore, the testing solution must not only work across Java hardware platforms, virtual machines and windowing toolkits; it must also work with both Java and traditional clients. Only Mercury Interactive's Java testing solution can meet these rigorous requirements. Based on the proven WinRunner, LoadRunner and TestDirector products, Mercury Interactive can provide both comprehensive functional testing and load testing across different Java configurations and traditional client platforms.

15.0 Test Case Verification


Test case verification may be done in two ways: 1. Peer review using QMS_CHKL_REQ_01.xls (mandatory) 2. Fagan inspection as described in QMS_GUID_V&V_01.doc (optional)
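As a lightweight aid for the mandatory peer review, a script can flag test cases that are missing checklist items before the review meeting. The field names below are illustrative assumptions; the authoritative checklist remains QMS_CHKL_REQ_01.xls.

```python
# Illustrative field names only; the real checklist is QMS_CHKL_REQ_01.xls.
REQUIRED_FIELDS = {"id", "description", "preconditions", "steps", "expected_result"}

def verify_test_case(case):
    """Peer-review aid: return the checklist fields missing from a test case."""
    return sorted(REQUIRED_FIELDS - case.keys())

case = {"id": "TC-01", "description": "Valid login", "steps": "1. Enter credentials"}
print(verify_test_case(case))  # ['expected_result', 'preconditions']
```

An empty result means the case is structurally complete and ready for the substantive review (or the optional Fagan inspection).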

