Symphony Services: Testing Guidelines
Testing Guidelines
Document Summary
Guideline Name: Testing Guidelines
Version: 2.0
Prepared By: Poornima J
Date Prepared: August 29, 2003
Summary: This document explains the guidelines for testing activities.
Change History
May 29, 2003 - First Draft
June 4, 2003 - Reviewed by QAI
June 11, 2003 - Reviewed by EPG
June 18, 2003 - Reviewed by PC
August 29, 2003 - Approved and Baselined
December 8, 2006 - Review comments incorporated
December 8, 2006 - Reviewed and Baselined
Contents
1.0 Introduction
    1.1 Objective
    1.2 Intended Audience
2.0 References
3.0 Objectives of Testing
4.0 About Testing
    4.1 Static Testing
    4.2 Dynamic Testing
5.0 Levels of Testing
    5.1 Unit Testing
    5.2 Functional Unit Testing
    5.3 Integration Testing
    5.4 System Testing
6.0 Testing Strategies
    6.1 Black Box Testing
    6.2 White Box Testing
    6.3 Functional Testing
    6.4 System Testing
    6.5 End-to-End Testing
    6.6 Sanity Testing
    6.7 Regression Testing
    6.8 Load Testing
    6.9 Stress Testing
    6.10 Performance Testing
    6.11 Usability Testing
    6.12 Install/Uninstall Testing
    6.13 Recovery Testing
    6.14 Security Testing
    6.15 Compatibility Testing
7.0 Test Plan
8.0 Test Cases, Test Scripts and Test Scenarios
9.0 Best Practices in Testing
1.0 Introduction
This testing guideline gives details about the various levels of testing and the testing strategies. It covers the process perspective of the testing areas and shall be used accordingly.
1.1 Objective
This document guides the reader on testing methodologies and testing processes. Executing tests in live projects requires the use of the respective Quality Management System procedures, forms and templates, along with complementary reading to learn more about specific testing scenarios.
1.2 Intended Audience
Project team members, Quality Assurance personnel and Engineering Process Group members can use this guideline to execute testing activities.
2.0 References
1. ISO Standards
2. SEI CMMI framework
3. IEEE Standards
3.0 Objectives of Testing
Testing has the following objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet undiscovered error.
3. A successful test is one that uncovers an as-yet undiscovered error.
4. Testing demonstrates that software functions appear to work according to specification and that performance requirements appear to have been met.
4.0 About Testing
Testing is of two types:
1. Static Testing
2. Dynamic Testing
4.1 Static Testing
Static testing is a kind of testing in which the software code is not executed but is reviewed to identify and eliminate defects; it is purely a review of the software code. Examples: walkthrough reviews, peer reviews, software inspections, etc.
4.2 Dynamic Testing
In dynamic testing, the software code is actually executed in the testing environment to identify and remove defects. This guideline details dynamic testing; reviews are not covered here. Refer to the Review Guidelines for additional information on walkthroughs, software inspections and peer reviews.
5.0 Levels of Testing
There are five levels of testing:
1. Unit Level
2. Functional Level
3. Integration Level
4. System Level
5. Acceptance Level

Now, we will look at each level of testing in a nutshell.
5.1 Unit Testing
Unit testing focuses verification effort on the smallest unit of software design: the module. Using the procedural design descriptions as a guide, important control paths are tested to uncover errors within the boundary of the module. The unit test is normally white-box oriented, and the steps can be conducted in parallel for multiple modules.

The authors of the code do unit testing after the self-review and peer review of the unit are over; unit testing is the first round of formal testing conducted after initial code completion. Unit test cases may take the form of a document or of unit test programs. In the case of documented unit test cases, the programs are run to make sure that they produce the expected results. In the case of unit test programs, the test programs are executed to demonstrate that the code passes the test cases. The unit tests should demonstrate that each operation of a function produces the expected results, and the results are recorded either in document form or as the output of the unit test programs. If any errors are found during unit testing, the developer(s) fix the code and rerun the tests; unit testing is carried out in this manner until the unit tests run successfully.

Unit testing can be conducted in different ways, as listed below:
1. Test drivers that call the code
2. Running the code through a debugger
3. Running the code and reviewing trace files
4. Running a User Interface that calls the code
5. Running other components that call the code
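As an illustration of the first approach above (a test driver that calls the code), here is a minimal sketch using Python's built-in unittest framework. The function under test, is_leap_year, is invented for illustration and is not taken from any project covered by this guideline; each test case exercises one control path within the unit's boundary.

```python
import unittest


def is_leap_year(year: int) -> bool:
    """Unit under test: the smallest testable piece of the design."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


class LeapYearUnitTest(unittest.TestCase):
    """A test driver that calls the unit directly, one control path per case."""

    def test_divisible_by_4_only(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_divisible_by_400_is_leap(self):
        self.assertTrue(is_leap_year(2000))

    def test_ordinary_year(self):
        self.assertFalse(is_leap_year(2023))


if __name__ == "__main__":
    # Running the module executes every case and records the results,
    # which can then be attached to the unit test report.
    unittest.main(exit=False, verbosity=2)
```

Rerunning this driver after every fix, until all cases pass, is exactly the fix-and-rerun cycle described above.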
5.2 Functional Unit Testing

Functional unit testing verifies a unit against its functional specification, treating the unit as a black box rather than exercising its internal control paths.

5.3 Integration Testing
Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing. Even when the individual units work well on their own, the problems of putting them together (interfacing) are exposed by integration testing. Integration test team members shall carry out this test after unit testing has been successfully completed.
5.4 System Testing
System testing is actually a series of different tests whose overall purpose is to fully exercise the system. Although each test has a different purpose, all work to verify that all system elements have been properly integrated and perform their allocated functions. System testing team members shall carry out the test after successful completion of integration testing.
6.0 Testing Strategies
Having understood the levels of testing, let us now look at the testing strategies and the type of strategy to be adopted for the various levels of testing. There are two broad types of testing strategies, namely black box testing and white box testing.
6.1 Black Box Testing

Black box testing is not based on any knowledge of the internal design or code; tests are derived from the requirements and functionality of the application.

6.2 White Box Testing

White box testing is based on knowledge of the internal logic of the application's code; tests are derived from coverage of code statements, branches, paths and conditions.

6.3 Functional Testing
Functional testing is black-box type testing geared to the functional requirements of an application; testers should do this type of testing. This does not mean that programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).
6.4 System Testing
System testing is black-box type testing that is based on the overall requirements specifications and covers all combined parts of a system.
6.5 End-to-End Testing
End-to-end testing is similar to system testing, but sits at the 'macro' end of the test scale: it involves testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems where appropriate.
6.6 Sanity Testing
It is typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the
new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
6.7 Regression Testing
It is re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
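A common way to make re-testing after fixes repeatable is to pin each fixed defect with a named test so it cannot silently return. The sketch below is illustrative: the function normalize_phone and defect number 42 are invented, not taken from any project covered by this guideline.

```python
def normalize_phone(raw: str) -> str:
    """Keep only the digits of a phone number.

    Fix for (illustrative) defect 42: a number prefixed with '+' used to
    be rejected entirely; now the '+' is simply dropped like any other
    non-digit character.
    """
    return "".join(ch for ch in raw if ch.isdigit())


def test_basic_formatting():
    assert normalize_phone("080-2345 6789") == "08023456789"


def test_issue_42_plus_prefix_accepted():
    # Regression case: re-run after every change to the function so the
    # old defect cannot reappear unnoticed.
    assert normalize_phone("+91 80 1234") == "91801234"


if __name__ == "__main__":
    test_basic_formatting()
    test_issue_42_plus_prefix_accepted()
    print("regression suite passed")
```

An automated tool (or a plain test runner) re-executes the whole suite after each modification, which is exactly the "how much re-testing" question answered mechanically.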
6.8 Load Testing
Load testing is testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
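The "range of loads" idea can be sketched as a small Python harness that steps up the number of concurrent requests and reports the amortised time per request at each step. The handle_request function is a stand-in invented for illustration; a real load test would drive the actual system.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def handle_request() -> None:
    """Stand-in for the system under test (illustrative only)."""
    time.sleep(0.001)  # pretend each request costs about 1 ms of work


def measure_response_time(n_concurrent: int) -> float:
    """Fire n_concurrent simultaneous requests; return wall-clock
    seconds per request, amortised over the batch."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_concurrent) as pool:
        futures = [pool.submit(handle_request) for _ in range(n_concurrent)]
        for f in futures:
            f.result()  # propagate any failure from a worker
    return (time.perf_counter() - start) / n_concurrent


if __name__ == "__main__":
    # Step up the load and watch where response time starts to degrade.
    for load in (1, 10, 50, 100):
        print(f"{load:>4} concurrent: {measure_response_time(load):.6f} s each")
```

The point at which the reported time stops improving (or grows sharply) is the degradation point the text describes.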
6.9 Stress Testing
The term stress testing is often used interchangeably with 'load' and 'performance' testing. It is also used to describe tests such as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
7.0 Test Plan
A test plan is a test strategy document that covers the entire requirements for testing. It can also be considered the test requirements document, capturing all the customer's requirements for testing.

Things to do when developing a test plan:
1. Identify the requirements to be tested. All test cases shall be derived using the current Design Specification.
2. Identify which particular test(s) you're going to use to test each module.
3. Review the test data and test cases to ensure that the unit has been thoroughly verified and that the test data and test cases are adequate to verify proper operation of the unit.
4. Identify the expected results for each test.
5. Document the test case configuration, test data, and expected results. A successful Peer Technical Review baselines the TCD and initiates coding.
6. Perform the test(s).
7. Document the test data, test cases, and test configuration used during the testing process.
8. Successful unit testing is required before the unit is eligible for component integration/system testing.
9. Unsuccessful testing requires a Defect Report to be generated. This document shall describe the test case, the problem encountered, its possible cause, and the sequence of events that led to the problem. It shall be used as a basis for later technical analysis.
10. Any specifications to be reviewed, revised, or updated shall be handled immediately.
11. Deliverables: Test Case Design, System/Unit Test Report, Defect Report (if any).

The above points can be restated macroscopically as follows:
1. Prepare the test plan
2. Review the test plan
3. Approve the test plan
4. Prepare the Test Design Document
5. Review the Test Design Document
6. Approve the Test Design Document
7. Execute the tests
8. Prepare test reports
9. Report the results of the tests to the Project Leader / Project Manager
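Steps 4 to 7 above amount to keeping a structured record per test case. A minimal sketch of such a record as a Python dataclass follows; all field names and the sample values are invented for illustration and do not come from any Symphony template.

```python
from dataclasses import dataclass


@dataclass
class TestCaseRecord:
    """One documented test case, per steps 4-7 above (fields illustrative)."""
    case_id: str
    requirement: str           # step 1: the requirement under test
    configuration: str         # documented test configuration
    test_data: dict            # inputs fed to the unit
    expected_result: str       # step 4: the identified expected outcome
    actual_result: str = ""    # filled in when the test is performed

    @property
    def passed(self) -> bool:
        # An unsuccessful test (mismatch) would trigger a Defect Report.
        return self.actual_result == self.expected_result


tc = TestCaseRecord(
    case_id="TC-001",
    requirement="REQ-LOGIN-01",
    configuration="staging environment, build 2.0",
    test_data={"user": "alice", "password": "secret"},
    expected_result="login accepted",
)
tc.actual_result = "login accepted"  # recorded after executing the test
print(tc.case_id, "passed" if tc.passed else "failed -> raise Defect Report")
```

Keeping the record machine-readable makes step 8 (reporting to the Project Leader / Project Manager) a matter of aggregating these objects.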
8.0 Test Cases, Test Scripts and Test Scenarios
9.0 Best Practices in Testing
10.0
In the OO model, interaction errors can be uncovered by scenario-based testing. This form of OO testing can only test against the client's specifications, so interface errors are still missed.
11.0
The results of OO methods depend on the state of the object they are applied to (Binder, 1994).
12.0 Object-Oriented Testing Metrics
12.1 Encapsulation
Lack of cohesion in methods (LCOM) - The higher the value of LCOM, the more states have to be tested.
Percent public and protected (PAP) - This number indicates the percentage of class attributes that are public and thus the likelihood of side effects among classes.
Public access to data members (PAD) - This metric shows the number of classes that access another class's attributes, and thus violations of encapsulation.
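The PAP figure can be approximated mechanically. Here is a minimal Python sketch; since Python has no true access modifiers, the leading-underscore naming convention stands in for "non-public", and the Account class is invented purely for illustration.

```python
def percent_public(cls) -> float:
    """PAP sketch: percentage of a class's attributes that are public.

    Dunder attributes added by Python itself are ignored; a leading
    underscore is treated as 'non-public' by convention.
    """
    attrs = [
        name for name in vars(cls)
        if not (name.startswith("__") and name.endswith("__"))
    ]
    if not attrs:
        return 0.0
    public = [name for name in attrs if not name.startswith("_")]
    return 100.0 * len(public) / len(attrs)


class Account:
    interest_rate = 0.03   # public: part of the side-effect surface
    _balance = 0           # 'protected' by naming convention

    def deposit(self, amount):  # public method
        self._balance += amount


print(f"PAP(Account) = {percent_public(Account):.1f}%")  # 2 public of 3 attributes
```

A higher PAP means more attributes that other classes can touch, and so a larger surface for the side effects the text warns about.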
12.2 Inheritance
Number of root classes (NOR) - A count of distinct class hierarchies.
Fan in (FIN) - FIN > 1 is an indication of multiple inheritance and should be avoided.
Number of children (NOC) and depth of the inheritance tree (DIT) - For each
subclass, its super class has to be re-tested.

The above metrics (and others) are different from those used in traditional software testing; however, the metrics collected from testing should be the same (i.e. number and type of errors, performance metrics, etc.).
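The NOC and DIT counts can likewise be computed mechanically for a class hierarchy. A small Python sketch follows; the Shape hierarchy is invented for illustration.

```python
def depth_of_inheritance(cls) -> int:
    """DIT sketch: length of the longest path from cls up to object."""
    if not cls.__bases__:          # object itself has no bases
        return 0
    return 1 + max(depth_of_inheritance(base) for base in cls.__bases__)


def number_of_children(cls) -> int:
    """NOC sketch: count of the direct subclasses currently defined."""
    return len(cls.__subclasses__())


class Shape: ...
class Polygon(Shape): ...
class Triangle(Polygon): ...
class Circle(Shape): ...


print(depth_of_inheritance(Triangle))  # 3: Triangle -> Polygon -> Shape -> object
print(number_of_children(Shape))       # 2: Polygon and Circle
```

A deep tree (large DIT) or a wide one (large NOC) directly increases the re-testing burden the text describes, since each subclass forces its superclass to be re-tested.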
13.0 Performance Testing Automation
Performance testing automation uses sophisticated tools and high-powered hardware to execute performance tests over a broad range of user loads or transaction rates. Why is this necessary? Manually executing performance tests at large user loads just isn't feasible. For example, consider manually testing an e-commerce website at a 500-user load: imagine what it would take to configure 500 computers, instruct each of the 500 users about what to do and when to do it, coordinate the test execution and collect/analyse the test results!

Performance testing needs to be done for:
1. Website performance testing
2. Web application performance testing
3. Ongoing/periodic performance testing for live sites
4. Regression test automation
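The 500-user scenario above is exactly what a script replaces. As a rough sketch of what such a tool does internally, the Python below runs many scripted "virtual user" transactions concurrently and aggregates their latencies; user_transaction is a simulated stand-in (a real tool would drive the site over HTTP), and the latency range is invented.

```python
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def user_transaction() -> float:
    """One scripted virtual-user transaction; returns its latency in
    seconds. (Simulated stand-in for a real HTTP transaction.)"""
    latency = random.uniform(0.01, 0.05)
    time.sleep(latency)
    return latency


def run_virtual_users(n_users: int) -> dict:
    """Execute n_users scripted transactions concurrently and
    aggregate the collected latencies for the test report."""
    with ThreadPoolExecutor(max_workers=50) as pool:
        latencies = list(pool.map(lambda _: user_transaction(), range(n_users)))
    return {
        "users": n_users,
        "mean_s": round(statistics.mean(latencies), 4),
        "p95_s": round(statistics.quantiles(latencies, n=20)[-1], 4),
    }


if __name__ == "__main__":
    # What would need 500 coordinated manual testers takes one script.
    print(run_virtual_users(500))
```

The aggregation step (mean and 95th-percentile latency here) is the collect/analyse work that would otherwise have to be done by hand across 500 machines.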
14.0
costs are associated with assuring proper configuration of browser or standalone Java virtual machine, which are amortized across all Java client software that runs on top of these platforms. With the near-zero distribution and support costs of Java clients, IS costs shift to client and server software quality assurance. The shift to Java poses three new problems for client quality assurance.
these rigorous requirements. Based on the proven WinRunner, LoadRunner and TestDirector products, Mercury Interactive can provide both comprehensive functional testing and load testing across different Java configurations and traditional client platforms.
15.0