Submitted By:
Revision History
Date: <dd/mmm/yy>   Version: <x.x>   Description: <details>   Author: <name>
Table of Contents
1. Test Plan Identifier
2. Introduction
   2.1 Team Interaction
3. Objective and Scope
   3.1 Objective
      3.1.1 Primary Objective
      3.1.2 Secondary Objective
   3.2 Scope
4. Risk
   4.1 Schedule
   4.2 Technical
   4.3 Management
   4.4 Personnel
   4.5 Requirements
5. Features and Functions to Test
6. Features and Functions not to Test
7. Process Overview
8. Testing Process
9. Testing Strategy
   9.1 Usability Testing
   9.2 Unit Testing
      9.2.1 White Box Testing
      9.2.2 Black Box Testing
   9.3 Iteration/Regression Testing
   9.4 System Testing
      9.4.1 Performance Testing
   9.5 Final Release Testing
   9.6 Testing Completeness Criteria
10. Test Levels
   10.1 Build Tests
      10.1.1 Level 1 - Build Acceptance Tests
      10.1.2 Level 2 - Smoke Tests
      10.1.3 Level 2a - Bug Regression Testing
   10.2 Milestone Tests
      10.2.1 Level 3 - Critical Path Tests
   10.3 Release Tests
      10.3.1 Level 4 - Standard Tests
      10.3.2 Level 5 - Suggested Tests
11. Suspension / Exit Criteria
12. Resumption Criteria
13. Bug Tracking / Bug Process
   13.1 Various Roles in Bug Resolution
14. Test Deliverables
   14.1 Deliverables Matrix
   14.2 Documents
      14.2.1 Test Approach Document
      14.2.2 Test Plan
      14.2.3 Test Schedule
      14.2.4 Test Specifications Requirements Traceability Matrix
15. Remaining Test Tasks
16. Resource & Environment Needs
   16.1 Data Entry Workstations
   16.2 Main Frame
17. Staffing and Training Needs
18. Roles and Responsibilities
19. Schedule
20. Planning Risks and Contingencies
21. Approvals
22. Glossary
23. References
2. Introduction
This document is a high-level overview defining the testing strategy for the Social Hub (Release 1.0) project. Its objective is to communicate project-wide quality standards and procedures. It represents a snapshot of the project as of the end of the planning phase. This document addresses the different standards that will apply to the unit, integration, and system testing of the specified application. We will utilize testing criteria under the white box, black box, and system testing paradigm. This paradigm will include, but is not limited to, the testing criteria, methods, and test cases of the overall design. Throughout the testing process we will apply the test documentation specifications described in IEEE Standard 829 for Software Test Documentation.
4 Risks
4.1 Schedule
The schedule for each phase is very aggressive and could affect testing. A slip in the schedule in one of the other phases could result in a subsequent slip in the test phase. Close project management is crucial to meeting the forecasted completion date.

4.2 Technical
Since this is a new system intended to assist the public, there is no old system to fall back on in the event of a failure. Our tests will therefore run against the new application itself rather than depending on an existing system.
4.3 Management
Management support is required so that when the project falls behind, the test schedule does not get squeezed to make up for the delay. Management can reduce the risk of delays by supporting the test team throughout the testing phase and by assigning people with the required skill set to this project.

4.4 Personnel
Due to the aggressive schedule, it is very important to have experienced testers on this project. Unexpected turnover can impact the schedule. If attrition does happen, every effort must be made to replace the experienced individual.

4.5 Requirements
The test plan and test schedule are based on the current Requirements Document. Any changes to the requirements could affect the test schedule and will need to be approved.
7 Process Overview
The following represents the overall flow of the testing process:
1. Identify the requirements to be tested. All test cases shall be derived using the current Program Specification.
2. Identify which particular test(s) will be used to test each module.
3. Review the test data and test cases to ensure that the unit has been thoroughly verified and that the test data and test cases are adequate to verify proper operation of the unit.
4. Identify the expected results for each test.
5. Document the test case configuration, test data, and expected results.
6. Perform the test(s).
7. Document the test data, test cases, and test configuration used during the testing process. This information shall be submitted via the Unit/System Test Report (STR).
8. Successful unit testing is required before the unit is eligible for component integration/system testing.
9. Unsuccessful testing requires a Bug Report Form to be generated. This document shall describe the test case, the problem encountered, its possible cause, and the sequence of events that led to the problem. It shall be used as a basis for later technical analysis.
10. Test documents and reports shall be submitted, and any specifications requiring review shall be identified.
8 Testing Process
[Figure 1: Test Process Flow]

The diagram above outlines the test process approach that will be followed:
a. Organize Project involves creating a System Test Plan, Schedule & Test Approach, and assigning responsibilities.
b. Design/Build System Test involves identifying Test Cycles, Test Cases, Entrance & Exit Criteria, Expected Results, etc. In general, test conditions/expected results will be identified by the Test Team in conjunction with the Development Team. The Test Team will then identify the Test Cases and the data required. The test conditions are derived from the Program Specifications Document.
c. Design/Build Test Procedures includes setting up procedures such as Error Management systems and status reporting.
d. Build Test Environment includes requesting/building hardware, software, and data set-ups.
e. Execute System Tests: the tests identified in the Design/Build Test Procedures will be executed. All results will be documented, and Bug Report Forms filled out and given to the Development Team as necessary.
f. Signoff happens when all pre-defined exit criteria have been achieved.
9 Testing Strategy
The following outlines the types of testing that will be done for unit, integration, and system testing. While it includes what will be tested, the specific use cases that determine how the testing is done will be detailed in the Test Design Document. The template that will be used for designing use cases is shown in Figure 2.
Figure 2: Test case template
  Tested By:
  Test Type:
  Test Case Number:
  Test Case Name:
  Test Case Description:
  Procedural Steps:
    1.
    2.
    3.
    4.
    5.
    6.
    7.
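For teams that also track their cases in code or a database, the Figure 2 template could be mirrored by a simple record such as the sketch below. The field names and the sample case are illustrative assumptions, not part of any mandated tooling:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    """Mirrors the Figure 2 test case template (field names assumed)."""
    tested_by: str
    test_type: str                 # e.g. "Unit", "System", "Usability"
    number: str                    # Test Case Number
    name: str                      # Test Case Name
    description: str               # Test Case Description
    procedural_steps: List[str] = field(default_factory=list)

# Hypothetical example of a filled-in case.
tc = TestCase(
    tested_by="QA Engineer",
    test_type="Unit",
    number="TC-001",
    name="Insert record at capacity",
    description="Verify behaviour when the data file is full.",
    procedural_steps=[
        "Load the maximum data",          # step 1
        "Attempt one further insert",     # step 2
        "Observe the error handling",     # step 3
    ],
)
```

Keeping the record shape identical to the paper template makes it straightforward to generate the document form from the tracked data, or vice versa.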
9.1 Usability Testing
The purpose of usability testing is to ensure that the new components and features will function in a manner that is acceptable to the customer. Development will typically create a non-functioning prototype of the UI components to evaluate the proposed design. Usability testing can be coordinated by the test team, but the actual testing must be performed by non-testers (as close to end-users as possible). The test team will review the findings and provide the project team with its evaluation of the impact these changes will have on the testing process and on the project as a whole.

9.2 Unit Testing (Multiple)
Unit testing is conducted by the developer during the code development process to ensure that proper functionality, freedom from language-specific programming errors (such as bad syntax and logic errors), and code coverage have been achieved, both during coding and in preparation for acceptance into iteration testing. The unit test cases shall be designed to test the validity of the program's correctness. The following example areas of the project must be unit-tested and signed off before being passed on to regression testing:
- Databases: stored procedures, triggers, tables, and indexes
- NT services
- Database conversion
- .OCX, .DLL, .EXE and other binary-format executables

9.2.1 White Box Testing
In white box testing, the UI is bypassed. Inputs and outputs are tested directly at the code level and the results are compared against specifications. This form of testing ignores the function of the program under test and focuses only on its code and the structure of that code. Test case designers shall generate cases that cause each condition to take on all of its possible values at least once, and that cause each branch to be executed at least once. To ensure this happens, we will be applying branch testing. Because the functionality of the program is relatively simple, this method will be feasible to apply.

9.2.2 Black Box Testing
Black box testing exercises the software through its external interfaces, as an end-user would, verifying that selected inputs produce the right outputs without reference to the internal code. It includes performing Equivalence Partitioning and Boundary Value Analysis testing.

9.3 Iteration/Regression Testing
During the repeated cycles of identifying bugs and taking receipt of new builds (containing bug-fix code changes), there are several processes which are common to this phase across all projects. These include the various types of tests: functionality, performance, stress, configuration, etc. There is also the process of communicating results from testing and ensuring that new drops/iterations contain stable fixes (regression). The project should plan for a minimum of 2-3 cycles of testing (drops/iterations of new builds). At each iteration, a debriefing should be held. Specifically, the report must show that, to the best degree achievable during the iteration testing phase, all identified severity 1 and severity 2 bugs have been communicated and addressed. At a minimum, all priority 1 and priority 2 bugs should be resolved prior to entering the beta phase.

9.4 System Testing
The goals of system testing are to detect faults that can only be exposed by testing the entire integrated system or some major part of it. Generally, system testing is mainly concerned with areas such as performance, security, validation, load/stress, and configuration sensitivity. In our case, however, we will focus only on function validation and performance, and in both cases we will use the black-box method of testing.

9.4.1 Performance Testing
This test will be conducted to evaluate the system's fulfillment of its specified performance requirements, using the black-box testing method. It will be performed by:
- Storing the maximum data in the file, then attempting a further insert, and observing how the application performs when it is out of boundary.
- Deleting data and checking that the right sorting algorithm is applied to the resulting data or output.
- Attempting to store new data and checking whether it overwrites the existing ones.
- Attempting to load data that is already loaded.

9.5 Final Release Testing
The testing team, together with end-users, participates in this milestone process by providing confirmation feedback on new issues uncovered, and input based on identical or similar issues detected earlier. The intention is to verify that the product is ready for distribution and acceptable to the customer, and to iron out potential operational issues. Assuming critical bugs are resolved during previous iteration testing, bug fixes throughout the Final Release test cycle will be focused on minor and trivial bugs (severity 3 and 4). Testing will continue its process of verifying the stability of the application through regression testing (existing known bugs, as well as existing test cases).
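The capacity probe listed in 9.4.1 (fill storage to its maximum, then attempt one more insert) lends itself to automation. The sketch below illustrates the idea against a hypothetical `FixedStore` class standing in for the application's storage layer; it is not the Social Hub API:

```python
class FixedStore:
    """Hypothetical stand-in for the application's storage layer."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.records = []

    def insert(self, record) -> None:
        # Reject inserts beyond capacity instead of corrupting data.
        if len(self.records) >= self.capacity:
            raise OverflowError("store is full")
        self.records.append(record)

store = FixedStore(capacity=3)
for i in range(3):
    store.insert(i)              # fill the store to its maximum

# Out-of-boundary insert must fail cleanly...
try:
    store.insert(99)
    raise AssertionError("expected OverflowError")
except OverflowError:
    pass

# ...and must not have corrupted the data already stored.
assert store.records == [0, 1, 2]
```

The point of the pattern is the two-part check: the over-capacity operation is rejected, and the previously stored data is verified intact afterwards.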
9.6 Testing Completeness Criteria
Release for production can occur only after the application under test has successfully completed all of the phases and milestones discussed above. The milestone target is to place the release/app (build) into production after the app has been shown to have reached a level of stability that meets or exceeds the client expectations as defined in the Requirements and Functional Specifications.
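The black-box techniques named in 9.2.2, Equivalence Partitioning and Boundary Value Analysis, can be illustrated with a short sketch. The validator and its valid range below are hypothetical; the pattern of probing each boundary and one representative per partition is the point:

```python
def accept_age(age: int) -> bool:
    """Hypothetical validator: ages 18..65 inclusive are accepted."""
    return 18 <= age <= 65

# Boundary Value Analysis: test at each boundary and its neighbours.
boundary_cases = {17: False, 18: True, 19: True,
                  64: True, 65: True, 66: False}
for value, expected in boundary_cases.items():
    assert accept_age(value) == expected

# Equivalence Partitioning: one representative per partition suffices.
assert accept_age(40) is True       # valid partition
assert accept_age(5) is False       # below-range partition
assert accept_age(90) is False      # above-range partition
```

Together the two techniques keep the case count small while still covering the input values most likely to expose off-by-one and range-check defects.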
10 Test Levels
Testing of an application can be broken down into three primary categories and several sub-levels. The three primary categories are tests conducted on every build (Build Tests), tests conducted at every major milestone (Milestone Tests), and tests conducted at least once every project release cycle (Release Tests). The test categories and test levels are defined below.

10.1 Build Tests

10.1.1 Level 1 - Build Acceptance Tests
Build Acceptance Tests should take less than 2-3 hours to complete (15 minutes is typical). These test cases simply ensure that the application can be built and installed successfully. Other related test cases ensure that adopters received the proper Development Release Document plus other build-related information (drop point, etc.). The objective is to determine if further testing is possible. If any Level 1 test case fails, the build is returned to developers untested.

10.1.2 Level 2 - Smoke Tests
Smoke Tests should be automated and take less than 2-3 hours (20 minutes is typical). These test cases verify the major functionality at a high level. The objective is to determine if further testing is possible. These test cases should emphasize breadth more than depth: all components should be touched, and every major feature should be tested briefly by the Smoke Test. If any Level 2 test case fails, the build is returned to developers untested.

10.1.3 Level 2a - Bug Regression Testing
Every bug that was open during the previous build but marked as "Fixed, Needs Re-Testing" for the current build under test will need to be regressed, or re-tested. Once the smoke test is completed, all resolved bugs need to be regressed. It should take between 5 minutes and 1 hour to regress most bugs.
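The Level 2 smoke tests described above are intended to be automated, breadth over depth. The sketch below shows the shape such a suite might take; the three check functions are stand-ins for real application entry points (in practice they would be imported from the build under test), not part of the Social Hub codebase:

```python
# Each check touches one major component briefly (breadth over depth).
# The three functions below are stand-ins for real application entry
# points; in practice they would exercise the build under test.

def app_starts() -> bool:
    return True                       # stand-in: launch the application

def login_succeeds(user: str, password: str) -> bool:
    return bool(user and password)    # stand-in: exercise authentication

def main_page_loads() -> bool:
    return True                       # stand-in: request the landing page

def run_smoke_suite() -> bool:
    """Return True only if every major feature passes its brief check."""
    checks = [
        app_starts(),
        login_succeeds("smoke_user", "smoke_pass"),
        main_page_loads(),
    ]
    return all(checks)

# Per 10.1.2: if any check fails, the build goes back to the
# developers untested.
assert run_smoke_suite()
```

Returning a single pass/fail verdict matches the Level 2 gate: one failing check is enough to reject the build.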
10.2 Milestone Tests

10.2.1 Level 3 - Critical Path Tests
Critical Path test cases are targeted at features and functionality that the user will see and use every day. Critical Path test cases must pass by the end of every 2-3 Build Test Cycles. They do not need to be tested every drop, but must be tested at least once per milestone. Thus, the Critical Path test cases must all be executed at least once during the Iteration cycle, and once during the Final Release cycle.

10.3 Release Tests

10.3.1 Level 4 - Standard Tests
These are test cases that need to be run at least once during the entire test cycle for this release. They are run once, not repeated as the test cases in previous levels are. They include Functional Testing and Detailed Design Testing (Functional Spec and Design Spec test cases, respectively), which can be tested multiple times for each Milestone Test Cycle (Iteration, Final Release, etc.). Standard test cases usually include Installation, Data, GUI, and other test areas.

10.3.2 Level 5 - Suggested Tests
These are test cases that would be nice to execute, but may be omitted due to time constraints. Most Performance and Stress test cases are classic examples of Suggested test cases (although some should be considered Standard test cases). Other examples of Suggested test cases include WAN, LAN, Network, and Load testing.
12 Resumption Criteria
If testing is suspended, resumption will only occur when the problem(s) that caused the suspension have been resolved. When a critical defect is the cause of the suspension, the fix must be verified by the test department before testing is resumed.
13 Bug Tracking / Bug Process
The following chart defines the impact levels to be used when entering bugs:

Impact 1 - Fatal (Test Stopper): You cannot access a function and need the bug to be fixed immediately. The defect prevents QA from testing the feature area, sub-area, or functionality of the feature.
Impact 2 - Serious (Beta Stopper): A bug that users would experience, such as data corruption, calculation errors, incorrect data, UEs and system crashes in common user scenarios, significant QA risk, or major UI defects.
Impact 3 - Minor (Live Release): A bug that must be fixed before the product is officially completed: UEs or crashes, content issues, and UI and graphic changes required for release.
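If the bug-tracking system needs these impact levels in code, they could be encoded as a small enumeration. This is a sketch only; the real tracker's field names and values may differ:

```python
from enum import IntEnum

class Impact(IntEnum):
    """Bug impact levels from the chart above (lower value = more severe)."""
    FATAL = 1    # Test stopper: blocks QA from testing the feature area
    SERIOUS = 2  # Beta stopper: data corruption, crashes in common scenarios
    MINOR = 3    # Live release: must be fixed before official completion

# Because IntEnum members compare as integers, sorting a bug list by
# impact puts test stoppers first.
bugs = [Impact.MINOR, Impact.FATAL, Impact.SERIOUS]
assert sorted(bugs)[0] is Impact.FATAL
```

Using `IntEnum` rather than a plain `Enum` is a deliberate choice here: it makes the severity ordering directly comparable and sortable.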
13.1 Various Roles in Bug Resolution
Author: The person who wrote the bug; this will be someone on the QA team.
Resolver: Normally an engineer assigned to a specific area of the application.
Verifier: Normally a QA engineer responsible for testing the fix and closing the bug.
14 Test Deliverables
Testing will provide specific deliverables during the project. These deliverables fall into three basic categories: Documents, Test Cases / Bug Write-ups, and Reports. Here is a diagram indicating the dependencies of the various deliverables:
[Diagram: deliverable dependencies among Requirements [PM], Test Plan, Test Cases, Bugs, Test Case Results / TC Coverage Reports, and Bug Results / Bug Reports]
As the diagram above shows, there is a progression from one deliverable to the next. Each deliverable has its own dependencies, without which it is not possible to fully complete the deliverable.
- Program function specifications
- Program source code
- Test plan document: addresses testing objectives, criteria, standards, schedule and assignments, and testing tools.
  - Unit Testing Plan
  - Integration Plan
  - System Testing Plan
- Test Design Document:
  - Unit white-box test design: covers white-box testing criteria, methods, and test cases
  - Unit black-box test design: covers black-box testing criteria, methods, and test cases
  - System test design: covers system test criteria, methods, test cases, and scripts
- Test report document:
  - Unit white-box test report: covers unit white-box test results, problems, summary, and analysis
  - Unit black-box test report: covers unit black-box test results, problems, summary, and analysis
  - System test report: covers system test results, problems, summary, and analysis
15 Remaining Test Tasks
- Define turnover procedures for each level (TM, Dev)
- Verify prototypes of Screens
- Verify prototypes of Reports
16 Test Environments
There are essentially two parts to the Social Hub application in production: the client side, over which Social Hub has little control because the application will be accessed over the Internet by members of the general public, and the server side, which (initially) will be comprised of a single cluster of servers residing at Social Hub's corporate center.

Available Client-side Environment
Social Hub will utilize a set of desktop and laptop machines, which currently consists of the following machine specification:
Low-end PC:
- Processor: Pentium 4 (2.4 GHz)
- RAM: 512 MB
- Hard disk drive: 40 GB
- Monitor: 17" color screen (default 1024 x 768, 16-bit color)
- 56.6 kbps modem or 100 Mb Ethernet Internet connection
- Typically running Windows XP or Windows 2000 Professional

The following Windows-based browsers are readily available for installation on any of the client platforms:
- Internet Explorer (Microsoft Corporation)
- Firefox, also called Mozilla Firefox (Mozilla Corporation)
- Chrome (Google)
- Opera (Opera Software ASA)
- Netscape Navigator (Netscape Communications Corporation, now part of AOL)
- SeaMonkey (Mozilla Foundation)
- Maxthon Browser (Maxthon)
Browser settings (cache size, number of connections, font selection, etc.) were, where possible, left unchanged, i.e. the installation defaults were used for all testing. No optional plug-ins will be installed.

Available Server-side Environments
In addition to the cluster of servers used for production, two functionally exact replicas of the server-side production environment will be created and maintained. The development team will use one replica for unit and integration testing, while the second replica will be reserved for system testing by the testing team. Prior to a new release being put into production, the Web application will be moved to a staging area on the production system where a final series of acceptance tests can be performed. While the replica systems will be functionally the same as the production environment (e.g. the same system software installed in the same order, with the same installation options selected), budget constraints mean that the replicas will be scaled-down versions of the production system (e.g. instead of several Web servers, there will be only one), and in the case of the unit/integration replica the hardware specifications may not be exactly the same. In addition, several network file and print servers will be made available (on a limited basis) for the testing team to use as load generators during performance tests.
Available Testing Tools
The following third-party free tools were available to scan the Web site and provide feedback:
- Bobby (accessibility, performance & HTML syntax): cast.org
- Freeappraisal (performance from 35 different cities): keynote.com
- Scrubby (meta tag analyzer): scrubtheweb.com
- Site analysis (search engine ratings): site-see.com
- Stylet (style sheet validation): microsoft.com
- Tune up (performance & style checker) & GIF lube (GIF analyzer): websitegarage.com
- Websat (usability): nist.gov
- Web metasearch (search engine ratings): dogpile.com
- WebStone (performance benchmarking tool): mindcraft.com
- Windiff (file comparison): microsoft.com
- W3C validation service (HTML and CSS syntax): w3c.org

In addition, the following commercial tools were available:
- Aetgweb (pair-wise combinations) from Telcordia/Argreenhouse
- Astra Site Manager (linkage) from Mercury Interactive
- eTester suite (capture/replay, linkage & performance) from RSW: 100 virtual user license
- FrontPage (spell checking) from Microsoft
- Ghost (software configuration) from Symantec
- KeyReadiness (large-scale performance testing) from Keynote Systems
- LinkBot Enterprise (link checking, HTML compliance and performance estimates) from Watchfire
- Prophecy (large-scale performance testing) from Envive
- WebLoad (performance) from Radview: 1000 virtual user license
- Word (readability estimates) from Microsoft

A manual digital stopwatch was also available.
18 Roles and Responsibilities
- Ensure Phase 1 is delivered to schedule and quality
- Produce high-level and detailed test conditions
- Produce expected results
- Report progress at regular status reporting meetings
- Co-ordinate review and signoff of test conditions
- Manage individual test cycles and resolve tester queries/problems

Tester:
- Identify test data
- Execute test conditions and mark off results
- Prepare software error reports
- Administer the error measurement system
- Ensure test system outages/problems are reported immediately and followed up
- Ensure entrance criteria are achieved prior to system test start
- Ensure exit criteria are achieved prior to system test signoff
19 Schedule
This section contains the overall project schedule. It discusses the phases and key milestones as they relate to quality assurance, and the testing goals and standards that we'd like to achieve for each phase of testing that will be deployed, e.g. Usability Testing, Code Complete Acceptance, Beta Testing, Integration Testing, Regression Testing, and System Testing. The key dates for overall development and testing are outlined below.
Milestone: Planning Phase
End Date: 00/00/00
Notes: At this milestone, the high-level planning should be completed. Deliverables include the Project Plan and Program function specifications.
QA Deliverables/Roles: High-level test planning activities, including preliminary development of the Master QA Plan (this document) and the QA schedule.

Milestone: Design Phase
End Date: 00/00/00
Notes: A feature-driven milestone where the requirements and initiatives are further defined and solutions are finalized. Deliverables include Program source code and other design-related documents.
QA Deliverables/Roles: Development and test engineers participate actively in feature design by inspecting and reviewing the requirements and design documents. As the design documents are completed, the test engineers are encouraged to start working on the Test Plan document and test design planning.

Milestone: Code Complete
End Date: 00/00/00
Notes: All infrastructure development and functions should be complete. Unit testing and code review of each functional component are performed prior to checking the code into the test phase; unit and integration testing should be performed before checking the code into any build. Deliverables include the system testing specification, unit testing specifications, and integration plan.
QA Deliverables/Roles: The test engineers should have completed, or be in the final stages of, their preliminary Infrastructure Test Plan, test cases, and other QA documents related to test execution for each feature or component (test scenarios, expected results, data sets, test procedures, scripts, and applicable testing tools). The test engineers should have provided the Code Complete Assessment Test to the development engineers one week prior to the Code Complete Review date, and should likewise have completed, or be in the final stages of, their preliminary White Box Test Plan and related QA documents.

Milestone: Feature Complete
End Date: 00/00/00
Notes: All features are ready for Beta release shutdown. This phase allows for feature clean-up to verify remaining bug fixes, with regression testing around the bug fixes.
QA Deliverables/Roles: Two weeks of regression of Binary Tree features to Beta and preparation for Beta shutdown. All bugs verified and QA documentation finalized.

Milestone: Beta Ready
End Date: 00/00/00
Notes: Indicates that the features are ready for Beta regression.
QA Deliverables/Roles: The test engineers should assess that Binary Tree features are ready for Beta regression and have started their preliminary Test Summary Reports.

Milestone: Regression Test
End Date: 00/00/00
Notes: The project is ready for regression testing.
QA Deliverables/Roles: Complete regression test execution of the entire system and update the Test Summary Reports. Any unfinished testing documents should be completed.

Milestone: Ship/Live
End Date: 04/03/03
Notes: Product is out.
20 Planning Risks and Contingencies
Risk: Not enough time to complete all test cases.
Contingency: If time cannot be extended, individual test cases will be skipped, starting with the lowest priority.
21 Approvals
Name (Print)              Signature              Date
1.
2.
3.
4.
5.
22 Glossary
API: Application Program Interface
BCP: Business Continuity Plan
Branch testing: A white box test design technique in which test cases are designed to execute branches.
CAT: Client Acceptance Testing
Coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
End-to-End Testing: Tests user scenarios and various path conditions by verifying that the system runs and performs tasks accurately with the same set of data from beginning to end, as intended.
N/A: Not Applicable
QA: Quality Assurance
Severity: The degree of impact that a defect has on the development or operation of a component or system.
SME: Subject Matter Expert
SOP: Standard Operating Procedure
STR: System Test Report
TBD: To Be Determined
TSR: Test Summary Report
23 References
Pressman, Roger S. Software Engineering: A Practitioner's Approach, 5th ed. McGraw-Hill, 2001.
Kaner, C., Falk, J., and Nguyen, H. Q. Testing Computer Software. Wiley Computer Publishing, 1999.