
Social Hub Release 1.0 Software Test Plan

Version TP 1.0 March 19, 2012

Submitted By:

Revision History
Date <dd/mmm/yy> Version <x.x> Description <details> Author <name>

Table of Contents

1. Test Plan Identifier
2. Introduction
   2.1 Team Interaction
3. Objective and Scope
   3.1 Objective
       3.1.1 Primary Objective
       3.1.2 Secondary Objective
   3.2 Scope
4. Risks
   4.1 Schedule
   4.2 Technical
   4.3 Management
   4.4 Personnel
   4.5 Requirements
5. Features and Functions to Test
6. Features and Functions Not to Test
7. Process Overview
8. Testing Process
9. Testing Strategy
   9.1 Usability Testing
   9.2 Unit Testing
       9.2.1 White Box Testing
       9.2.2 Black Box Testing
   9.3 Iteration/Regression Testing
   9.4 System Testing
       9.4.1 Performance Testing
   9.5 Final Release Testing
   9.6 Testing Completeness Criteria
10. Test Levels
    10.1 Build Tests
         10.1.1 Level 1 - Build Acceptance Tests
         10.1.2 Level 2 - Smoke Tests
         10.1.3 Level 2a - Bug Regression Testing
    10.2 Milestone Tests
         10.2.1 Level 3 - Critical Path Tests
    10.3 Release Tests
         10.3.1 Level 4 - Standard Tests
         10.3.2 Level 5 - Suggested Tests
11. Suspension / Exit Criteria
12. Resumption Criteria
13. Bug Tracking / Bug Process
    13.1 Various Roles in Bug Resolution
14. Test Deliverables
    14.1 Deliverables Matrix
    14.2 Documents
         14.2.1 Test Approach Document
         14.2.2 Test Plan
         14.2.3 Test Schedule
         14.2.4 Test Specifications Requirements Traceability Matrix
15. Remaining Test Tasks
16. Test Environments
17. Staffing and Training Needs
18. Roles and Responsibilities
19. Schedule
20. Planning Risks and Contingencies
21. Approvals
22. Glossary
23. References

1. Test Plan Identifier


Social Hub Release 1.0, TP 1.0. Note: the structure of this document is primarily based on the IEEE 829-2008 Standard for Software Test Documentation.

2. Introduction
This document is a high-level overview defining the testing strategy for the Social Hub (Release 1.0) project. Its objective is to communicate project-wide quality standards and procedures. It represents a snapshot of the project as of the end of the planning phase. This document addresses the different standards that will apply to the unit, integration, and system testing of the specified application. We will utilize testing criteria under the white box, black box, and system testing paradigms. This includes, but is not limited to, the testing criteria, methods, and test cases of the overall design. Throughout the testing process we will apply the test documentation specifications described in IEEE Standard 829 for Software Test Documentation.

2.1 Team Interaction


The following describes the level of team interaction necessary for a successful product. The Test Team will work closely with the Development Team to achieve high-quality design and user interface specifications based on customer requirements. The Test Team is responsible for visualizing test cases and raising quality issues and concerns during meetings, so that issues are addressed early enough in the development cycle. The Test Team will work closely with the Development Team to determine whether or not the application meets standards for completeness. If an area is not acceptable for testing, the code complete date will be pushed out, giving the developers additional time to stabilize the area.

3 Objective and Scope


3.1 Objective

3.1.1 Primary Objective

The primary objective of testing the application system is to assure that the system meets the full requirements, including quality requirements (also known as non-functional requirements) and the fit metrics for each quality requirement, satisfies the use case scenarios, and maintains the quality of the product. At the end of the project development cycle, the user should find that the project has met or exceeded all of their expectations as detailed in the requirements. Any changes, additions, or deletions to the Requirements Document, Functional Specification, or Design Specification will be documented and tested at the highest level of quality allowed within the remaining time of the project and within the ability of the test team.

3.1.2 Secondary Objective

The secondary objective of testing the application system is to identify and expose all issues and associated risks, communicate all known issues to the project team, and ensure that all issues are addressed in an appropriate manner before release. As an objective, this requires careful and methodical testing of the application to first ensure all areas of the system are scrutinized and, consequently, that all issues (bugs) found are dealt with appropriately.

3.2 Scope

The Social Hub (Release 1.0) Test Plan defines the unit, integration, system, regression, and client acceptance testing approach. The test scope includes the following:

- Testing of all functional, application performance, security, and use case requirements listed in the Use Case document.
- Quality requirements and fit metrics applicable to the project.
- End-to-end testing and testing of the interfaces of all systems that interact with the project.

4 Risks
4.1 Schedule

The schedule for each phase is very aggressive and could affect testing. A slip in the schedule in one of the other phases could result in a subsequent slip in the test phase. Close project management is crucial to meeting the forecasted completion date.

4.2 Technical

Since this is a new system intended to assist the public, there is no old system to fall back on in the event of a failure. Testing will therefore be run against the new application itself, without depending on any existing system.

4.3 Management

Management support is required so that when the project falls behind, the test schedule does not get squeezed to make up for the delay. Management can reduce the risk of delays by supporting the test team throughout the testing phase and assigning people with the required skill set to this project.

4.4 Personnel

Due to the aggressive schedule, it is very important to have experienced testers on this project. Unexpected turnover can impact the schedule. If attrition does happen, all efforts must be made to replace the experienced individual.

4.5 Requirements

The test plan and test schedule are based on the current Requirements Document. Any changes to the requirements could affect the test schedule and will need to be approved.

5 Features and Functions to Test


Testing will consist of several phases; each phase may or may not include testing of any one or more of the following aspects of the Social Hub (Release 1.0), listed alphabetically:

- Accessibility
- Audit
- Availability
- Coding standards
- Compatibility
- Content
- Functional
- Legal
- Marketing
- Navigation
- Performance
- Reliability
- Scalability
- Security
- Site recognition
- Usability

6 Features and Functions Not to Test


It is the intent that all of the individual test cases contained in each test plan will be performed. However, if time does not permit, some of the low priority test cases may be dropped.

7 Process Overview
The following represents the overall flow of the testing process:

1. Identify the requirements to be tested. All test cases shall be derived using the current Program Specification.
2. Identify which particular test(s) will be used to test each module.
3. Review the test data and test cases to ensure that the unit has been thoroughly verified and that the test data and test cases are adequate to verify proper operation of the unit.
4. Identify the expected results for each test.
5. Document the test case configuration, test data, and expected results.
6. Perform the test(s).
7. Document the test data, test cases, and test configuration used during the testing process. This information shall be submitted via the Unit/System Test Report (STR).
8. Successful unit testing is required before the unit is eligible for component integration/system testing.
9. Unsuccessful testing requires a Bug Report Form to be generated. This document shall describe the test case, the problem encountered, its possible cause, and the sequence of events that led to the problem. It shall be used as a basis for later technical analysis.
10. Test documents and reports shall be submitted. Any specifications to be reviewed, revised, or updated shall be handled immediately.

8 Testing Process
[Figure 1: Test Process Flow. The diagram shows the sequence: a. Organize Project; b. Design System Test; c. Design/Build Test Procedures; d. Build Test Environment; e. Execute System Test; f. Signoff.]

The diagram above outlines the Test Process approach that will be followed.

a. Organize Project involves creating a System Test Plan, Schedule & Test Approach, and assigning responsibilities.

b. Design/Build System Test involves identifying Test Cycles, Test Cases, Entrance & Exit Criteria, Expected Results, etc. In general, test conditions and expected results will be identified by the Test Team in conjunction with the Development Team. The Test Team will then identify Test Cases and the data required. The test conditions are derived from the Program Specifications Document.

c. Design/Build Test Procedures includes setting up procedures such as Error Management systems and status reporting.

d. Build Test Environment includes requesting/building hardware, software, and data set-ups.

e. Execute System Tests: the tests identified in the Design/Build Test Procedures will be executed. All results will be documented, and Bug Report Forms filled out and given to the Development Team as necessary.

f. Signoff: signoff happens when all pre-defined exit criteria have been achieved.

9 Testing Strategy
The following outlines the types of testing that will be done for unit, integration, and system testing. While it includes what will be tested, the specific use cases that determine how the testing is done will be detailed in the Test Design Document. The template that will be used for designing use cases is shown in Figure 2.

Tested By:
Test Type:
Test Case Number:
Test Case Name:
Test Case Description:

Item(s) to be tested:
1.
2.

Specifications:
  Input:
  Expected Output/Result:

Procedural Steps:
1.
2.
3.
4.
5.
6.
7.

Figure 2: Test Case Template


9.1 Usability Testing

The purpose of usability testing is to ensure that the new components and features will function in a manner that is acceptable to the customer. Development will typically create a non-functioning prototype of the UI components to evaluate the proposed design. Usability testing can be coordinated by the test team, but the actual testing must be performed by non-testers (as close to end-users as possible). The test team will review the findings and provide the project team with its evaluation of the impact these changes will have on the testing process and on the project as a whole.

9.2 Unit Testing (Multiple)

Unit testing is conducted by the developer during the code development process to ensure that proper functionality, freedom from language-specific programming errors such as bad syntax and logic errors, and code coverage have been achieved by each developer, both during coding and in preparation for acceptance into iteration testing. The unit test cases shall be designed to test the validity of the program's correctness. The following example areas of the project must be unit-tested and signed off before being passed on to regression testing:

- Databases, stored procedures, triggers, tables, and indexes
- NT services
- Database conversion
- .OCX, .DLL, .EXE and other binary-formatted executables

9.2.1 White Box Testing

In white box testing, the UI is bypassed. Inputs and outputs are tested directly at the code level and the results are compared against specifications. This form of testing ignores the function of the program under test and focuses only on its code and the structure of that code. Test case designers shall generate cases that not only cause each condition to take on all possible values at least once, but also cause each such condition to be executed at least once. To ensure this happens, we will be applying branch testing. Because the functionality of the program is relatively simple, this method will be feasible to apply.
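As an illustration of the branch testing approach described above, the following minimal sketch (in Python, using a hypothetical post-length rule that is not actual Social Hub code) shows test cases chosen so that every condition is exercised with both of its outcomes:

import unittest

def classify_post_length(text):
    # Hypothetical rule for illustration: reject empty posts,
    # flag overly long posts for review, accept everything else.
    if len(text) == 0:
        return "rejected"
    if len(text) > 500:
        return "needs_review"
    return "accepted"

class BranchCoverageTests(unittest.TestCase):
    def test_empty_post_rejected(self):
        # First condition takes its True branch.
        self.assertEqual(classify_post_length(""), "rejected")

    def test_long_post_flagged(self):
        # First condition False, second condition True.
        self.assertEqual(classify_post_length("x" * 501), "needs_review")

    def test_normal_post_accepted(self):
        # Both conditions take their False branch.
        self.assertEqual(classify_post_length("hello"), "accepted")

if __name__ == "__main__":
    unittest.main()

Together the three cases execute every branch of both conditions at least once, which is the coverage target branch testing sets.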

9.2.2 Black Box Testing

Black box testing typically involves exercising every class of possible input to verify that the right outputs result, using the software as an end-user would. It includes performing equivalence partitioning and boundary value analysis testing.
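As a sketch of equivalence partitioning and boundary value analysis, consider a hypothetical age field with a valid range of 13-120 (the field and its range are assumptions for illustration, not stated Social Hub requirements):

import unittest

def validate_age(age):
    # Hypothetical rule for illustration: ages 13-120 inclusive are valid.
    return 13 <= age <= 120

class AgeFieldBlackBoxTests(unittest.TestCase):
    def test_equivalence_partitions(self):
        # One representative value from each input class.
        self.assertFalse(validate_age(5))    # below the valid range
        self.assertTrue(validate_age(30))    # inside the valid range
        self.assertFalse(validate_age(200))  # above the valid range

    def test_boundary_values(self):
        # Values at and immediately around each boundary.
        self.assertFalse(validate_age(12))
        self.assertTrue(validate_age(13))
        self.assertTrue(validate_age(120))
        self.assertFalse(validate_age(121))

if __name__ == "__main__":
    unittest.main()

Equivalence partitioning keeps the case count small by testing one value per input class; boundary value analysis then adds the edge values, where defects most often cluster.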


9.3 Iteration/Regression Testing

During the repeated cycles of identifying bugs and taking receipt of new builds (containing bug-fix code changes), there are several processes which are common to this phase across all projects. These include the various types of tests: functionality, performance, stress, configuration, etc. There is also the process of communicating results from testing and ensuring that new drops/iterations contain stable fixes (regression). The project should plan for a minimum of 2-3 cycles of testing (drops/iterations of new builds). At each iteration, a debriefing should be held. Specifically, the report must show that, to the best degree achievable during the iteration testing phase, all identified severity 1 and severity 2 bugs have been communicated and addressed. At a minimum, all priority 1 and priority 2 bugs should be resolved prior to entering the beta phase.

9.4 System Testing

The goals of system testing are to detect faults that can only be exposed by testing the entire integrated system or some major part of it. Generally, system testing is mainly concerned with areas such as performance, security, validation, load/stress, and configuration sensitivity. In our case, however, we will focus only on function validation and performance, and in both cases we will use the black-box method of testing.

9.4.1 Performance Testing

This test will be conducted to evaluate the fulfillment of the system's specified performance requirements. It will be done using the black-box testing method and will be performed by:

- Storing the maximum amount of data in the file, then attempting an insert and observing how the application performs when it is out of bounds.
- Deleting data and checking that the application follows the right sorting algorithm to sort the resulting data or output.
- Attempting to store new data and checking whether it overwrites the existing data.
- Attempting to load the data while it is already loaded.
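The following minimal sketch illustrates the first check above, inserting past a capacity limit; the fixed-capacity store and its limit are hypothetical placeholders, not the actual Social Hub storage interface:

import unittest

class StoreFullError(Exception):
    pass

class RecordStore:
    # Hypothetical fixed-capacity store standing in for the real data file.
    def __init__(self, capacity):
        self.capacity = capacity
        self.records = []

    def insert(self, record):
        if len(self.records) >= self.capacity:
            raise StoreFullError("store is full")
        self.records.append(record)

class PerformanceBoundaryTests(unittest.TestCase):
    def test_insert_beyond_capacity_fails_cleanly(self):
        store = RecordStore(capacity=3)
        for i in range(3):
            store.insert(i)  # fill the store to its maximum
        # An out-of-boundary insert should fail loudly, not corrupt data.
        with self.assertRaises(StoreFullError):
            store.insert(99)
        # Existing records must be untouched (no silent overwrite).
        self.assertEqual(store.records, [0, 1, 2])

if __name__ == "__main__":
    unittest.main()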

9.5 Final Release Testing

The testing team, together with end-users, participates in this milestone process by providing confirmation feedback on new issues uncovered, and input based on identical or similar issues detected earlier. The intention is to verify that the product is ready for distribution and acceptable to the customer, and to iron out potential operational issues. Assuming critical bugs are resolved during the previous iterations of testing, bug fixes throughout the Final Release test cycle will be focused on minor and trivial bugs (severity 3 and 4). Testing will continue its process of verifying the stability of the application through regression testing (existing known bugs, as well as existing test cases).

9.6 Testing Completeness Criteria

Release to production can occur only after the application under test has successfully completed all of the phases and milestones discussed above. The milestone target is to place the release/application (build) into production after it has been shown that the application has reached a level of stability that meets or exceeds the client expectations as defined in the Requirements and Functional Specifications.

10 Test Levels
Testing of an application can be broken down into three primary categories and several sub-levels. The three primary categories are tests conducted on every build (Build Tests), tests conducted at every major milestone (Milestone Tests), and tests conducted at least once every project release cycle (Release Tests). The test categories and test levels are defined below.

10.1 Build Tests

10.1.1 Level 1 - Build Acceptance Tests

Build Acceptance Tests should take less than 2-3 hours to complete (15 minutes is typical). These test cases simply ensure that the application can be built and installed successfully. Other related test cases ensure that adopters received the proper Development Release Document plus other build-related information (drop point, etc.). The objective is to determine if further testing is possible. If any Level 1 test case fails, the build is returned to the developers untested.

10.1.2 Level 2 - Smoke Tests

Smoke Tests should be automated and take less than 2-3 hours (20 minutes is typical). These test cases verify the major functionality at a high level. The objective is to determine if further testing is possible. These test cases should emphasize breadth more than depth: all components should be touched, and every major feature should be tested briefly by the Smoke Test. If any Level 2 test case fails, the build is returned to the developers untested. (A sketch of an automated smoke suite appears after section 10.1.3 below.)

10.1.3 Level 2a - Bug Regression Testing

Every bug that was open during the previous build, but marked as "Fixed, Needs Re-Testing" for the current build under test, will need to be regressed, or re-tested. Once the smoke test is completed, all resolved bugs need to be regressed. It should take between 5 minutes and 1 hour to regress most bugs.
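A minimal sketch of an automated Level 2 smoke suite follows; the base URL and page list are hypothetical placeholders, and the real suite would touch every major Social Hub component rather than this handful of pages:

import unittest
from urllib.request import urlopen

BASE_URL = "http://localhost:8080"  # hypothetical test-environment address

# Hypothetical major feature areas: breadth over depth.
PAGES = ["/", "/login", "/profile", "/search", "/messages"]

class SmokeTests(unittest.TestCase):
    def test_every_major_page_responds(self):
        for page in PAGES:
            with self.subTest(page=page):
                # A failed request raises; a bad status fails the subtest.
                response = urlopen(BASE_URL + page, timeout=10)
                self.assertEqual(response.status, 200)

if __name__ == "__main__":
    unittest.main()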


10.2 Milestone Tests

10.2.1 Level 3 - Critical Path Tests

Critical Path test cases target features and functionality that the user will see and use every day. Critical Path test cases must pass by the end of every 2-3 Build Test Cycles. They do not need to be tested in every drop, but must be tested at least once per milestone. Thus, the Critical Path test cases must all be executed at least once during the Iteration cycle and once during the Final Release cycle.

10.3 Release Tests

10.3.1 Level 4 - Standard Tests

Standard test cases need to be run at least once during the entire test cycle for this release. These cases are run once, not repeated as the test cases in the previous levels are. They include Functional Testing and Detailed Design Testing (Functional Spec and Design Spec test cases, respectively), which can be tested multiple times for each Milestone Test Cycle (Iteration, Final Release, etc.). Standard test cases usually cover Installation, Data, GUI, and other test areas.

10.3.2 Level 5 - Suggested Tests

These are test cases that would be nice to execute, but may be omitted due to time constraints. Most Performance and Stress test cases are classic examples of suggested test cases (although some should be considered standard test cases). Other examples of suggested test cases include WAN, LAN, Network, and Load testing.

11 Suspension / Exit Criteria


If any defects are found which seriously impact test progress, the QA manager may choose to suspend testing. Criteria that will justify test suspension are:

- Hardware/software is not available at the times indicated in the project schedule.
- Source code contains one or more critical defects which seriously prevent or limit testing progress.
- Assigned test resources are not available when needed by the test team.

12 Resumption Criteria


If testing is suspended, resumption will only occur when the problem(s) that caused the suspension have been resolved. When a critical defect is the cause of the suspension, the fix must be verified by the test department before testing is resumed.

13 Bug Tracking / Bug Process


During testing, the testing team members will normally encounter behavior that goes against a specified or implied design requirement in the product. When this happens, we will document and reproduce the bugs for the developers. Each bug write-up is expected to include:

- The version of the application in which the bug was found.
- Whether the bug has already been written up.
- The steps to reproduce the bug: write enough detail for others looking at the bug to be able to duplicate it, but exclude unnecessary steps (e.g., if the access point is irrelevant, be more general in your steps).
- Actual results: be specific about your findings.
- Expected results: how the product should behave based on the specified or implied requirements.
- Implications: how the defect affects the quality of the product.

A hypothetical example write-up follows below.
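For illustration only, a bug written up to this standard might look like the following; the feature, build number, and behavior are invented placeholders, not real Social Hub defects:

Bug ID: SH-0042 (hypothetical)
Version found: Build 1.0.0.17
Already written up: No (tracker searched for "profile photo upload")
Steps to reproduce:
1. Log in as any registered member.
2. Open the Edit Profile page.
3. Upload a photo larger than 2 MB.
Actual results: The page returns a blank screen and the photo is not saved.
Expected results: The application rejects the file with a clear size-limit message, per the implied upload requirement.
Implications: Members cannot complete a common profile task and receive no feedback; candidate impact level 2 (Serious).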

The following chart defines the impact levels to be used when entering bugs.

Impact 1 - Fatal: Test stopper. A function cannot be accessed and the bug needs to be fixed immediately; the defect prevents QA from testing the feature area, sub-area, or functionality of the feature.

Impact 2 - Serious: Beta stopper. A bug that users would experience, such as data corruption, calculation errors, incorrect data, UEs and system crashes on common user scenarios, significant QA risk, and major UI defects.

Impact 3 - Minor: Live release. A bug that must be fixed before the product is officially completed: UEs or crashes, content issues, and UI and graphic changes required for release.

13.1 Various Roles in Bug Resolution

Author: The person who wrote the bug; this will be someone on the QA team.
Resolver: Normally an engineer assigned to a specific area of the application.
Verifier: Normally a QA engineer responsible for testing the fix and closing the bug.

14 Test Deliverables


Testing will provide specific deliverables during the project. These deliverables fall into three basic categories: Documents, Test Cases / Bug Write-ups, and Reports. Here is a diagram indicating the dependencies of the various deliverables:
[Deliverables dependency diagram: Requirements [PM], Project Plan [PM], Functional Spec [PM], and Detailed Design [Dev] feed the Test Plan; the Test Plan drives the Test Spec. / Outline, which drives the Test Cases and Bugs; these in turn produce the Test Case Results, Bug Results, TC Coverage Reports, Bug Reports, and Weekly Status Reports.]

As the diagram above shows, there is a progression from one deliverable to the next. Each deliverable has its own dependencies, without which it is not possible to fully complete the deliverable. The deliverables include:

- Program function specifications
- Program source code
- Test plan document: this document should address testing objectives, criteria, standards, schedule and assignments, and testing tools.
- Unit Testing Plan
- Integration Plan
- System Testing Plan
- Test Design Document:
  - Unit white-box test design, covering white-box testing criteria, methods, and test cases
  - Unit black-box test design, covering black-box testing criteria, methods, and test cases
  - System test design, covering system test criteria, methods, test cases, and scripts
- Test report document:
  - Unit white-box test report, covering unit white-box test results, problems, summary, and analysis
  - Unit black-box test report, covering unit black-box test results, problems, summary, and analysis
  - System test report, covering system test results, problems, summary, and analysis

15 Remaining Test Tasks


Task                                          Assigned To          Status
Create Acceptance Test Plan                   TM, PM, Client
Create System/Integration Test Plan           TM, PM, Dev.
Define Unit Test rules and Procedures         TM, PM, Dev.
Define Turnover procedures for each level     TM, Dev.
Verify prototypes of Screens                  Dev., Client, TM
Verify prototypes of Reports                  Dev., Client, TM

16 Test Environments
There are essentially two parts to the Social Hub application in production: the client side, over which Social Hub has little control because the application is going to be accessed over the Internet by members of the general public, and the server side, which (initially) will consist of a single cluster of servers residing at Social Hub's corporate center.

Available Client-side Environment

Social Hub will utilize a set of desktop and laptop machines, which currently consists of the following machine specification:

Low-end PC
- Processor: Pentium 4 (2.4 GHz)
- RAM: 512 MB
- Hard Disk Drive: 40 GB
- Monitor: 17" color screen (default 1024 x 768, 16-bit color)
- Connection: 56.6 kbps modem or 100 Mbps Ethernet Internet connection
- Operating system: typically Windows XP or Windows 2000 Professional

The following Windows-based browsers are readily available for installation on any of the client platforms:

- Internet Explorer, from Microsoft Corporation
- Firefox (also called Mozilla Firefox), from Mozilla Corporation
- Chrome, from Google
- Opera, from Opera Software ASA
- Netscape Navigator, from Netscape Communications Corporation (now part of AOL)
- SeaMonkey, from the Mozilla Foundation
- Maxthon Browser, from Maxthon

Browser settings (cache size, number of connections, font selection, etc.) were left unchanged where possible, i.e. the installation defaults were used for all testing. No optional plug-ins will be installed.

Available Server-side Environments

In addition to the cluster of servers used for production, two functionally exact replicas of the server-side production environment will be created and maintained. The development team will use one replica for unit and integration testing, while the second replica will be reserved for system testing by the testing team. Prior to a new release being put into production, the Web application will be moved to a staging area on the production system where a final series of acceptance tests can be performed. While the replica systems will be functionally the same as the production environment (e.g. the same system software installed in the same order, with the same installation options selected), due to budget constraints the replicas will be scaled-down versions of the production system (e.g. instead of several Web servers, there will be only one), and in the case of the unit/integration replica, the hardware specifications may not be exactly the same. In addition, several network file and print servers will be made available (on a limited basis) for the testing team to use as load generators during performance tests.

Available Testing Tools

The following third-party free tools were available to scan the Web site and provide feedback:

- Bobby (accessibility, performance & HTML syntax) - cast.org
- Freeappraisal (performance from 35 different cities) - keynote.com
- Scrubby (meta tag analyzer) - scrubtheweb.com
- Site analysis (search engine ratings) - site-see.com
- Stylet (style sheet validation) - microsoft.com
- Tune up (performance & style checker) & gif lube (gif analyzer) - websitegarage.com
- Websat (usability) - nist.gov
- Web metasearch (search engine ratings) - dogpile.com
- Webstone (performance benchmarking tool) - mindcraft.com
- Windiff (file comparison) - microsoft.com
- W3C validation service (HTML and CSS syntax) - w3c.org

In addition, the following commercial tools were available:

- Aetgweb (pair-wise combinations) from Telcordia/Argreenhouse
- Astra Site Manager (linkage) from Mercury Interactive
- eTester suite (capture/replay, linkage & performance) from RSW; 100 virtual user license
- FrontPage (spell checking) from Microsoft
- Ghost (software configuration) from Symantec
- KeyReadiness (large-scale performance testing) from Keynote Systems
- LinkBot Enterprise (link checking, HTML compliance and performance estimates) from Watchfire
- Prophecy (large-scale performance testing) from Envive
- WebLoad (performance) from Radview; 1000 virtual user license
- Word (readability estimates) from Microsoft

A manual digital stopwatch was also available.

17 Staffing and Training Needs


If a separate test person is not available, the project manager/test manager will assume this role. In order to provide complete and proper testing, the following areas need to be addressed in terms of training, if the staff are not already experienced:

- General development & testing techniques
- The Web site development lifecycle methodology
- All development and automated testing tools that they may be required to use

The administration staff will require training on the new screens and reports.

18 Roles and Responsibilities


Testing Team

Test Manager

- Ensure Phase 1 is delivered to schedule and quality.
- Produce high-level and detailed test conditions.
- Produce expected results.
- Report progress at regular status reporting meetings.
- Co-ordinate review and signoff of test conditions.
- Manage individual test cycles and resolve tester queries/problems.


Tester

- Identify test data.
- Execute test conditions and mark off results.
- Prepare software error reports.
- Administer the error measurement system.
- Ensure test system outages/problems are reported immediately and followed up.
- Ensure entrance criteria are achieved prior to system test start.
- Ensure exit criteria are achieved prior to system test signoff.

19 Schedule
This section contains the overall project schedule. It discusses the phases and key milestones as they relate to quality assurance, and the testing goals and standards that we'd like to achieve for each phase of testing that will be deployed, e.g., Usability Testing, Code Complete Acceptance, Beta Testing, Integration Testing, Regression Testing, System Testing. The key dates for overall development and testing are outlined below.
Milestone: Planning Phase
End Date: 00/00/00
Notes: At this milestone, the high-level planning should be completed. Some of the deliverables are the Project Plan and Program function specifications.
QA Deliverables/Roles: High-level test planning activities, which include preliminary development of the Master QA Plan (this document) and the QA schedule.

Milestone: Design Phase
Notes: This is a feature-driven milestone where the requirements and initiatives are further defined and solutions are finalized. The deliverables for this phase are Program source code and other design-related documents.
QA Deliverables/Roles: Development and Test engineers participate actively in feature design by inspecting and reviewing the requirements and design documents. As the design documents are completed, the test engineers are encouraged to start working on the Test Plan document and test design planning.

Milestone: Code Complete - Infrastructure
Notes: This milestone is when all infrastructure development and functions should be complete. The testing team should have performed unit & integration testing before checking the code into any build.
QA Deliverables/Roles: The Test Engineers should have completed, or be in the final stages of, their preliminary Infrastructure Test Plan, test cases, and other QA documents related to test execution for each feature or component, such as test scenarios, expected results, data sets, test procedures, scripts, and applicable testing tools. The Test Engineers should have provided the Code Complete Assessment Test to the Development Engineers one week prior to the Code Complete Review date.

Milestone: Code Complete - Function
Notes: This milestone includes unit testing and code review of each function component prior to checking the code into the test phase. The deliverables include the system-testing specification, unit testing specifications, and integration plan.
QA Deliverables/Roles: The Test Engineers should also have completed, or be in the final stages of, their preliminary White Box Test Plan, test cases, and other QA documents related to test execution for each feature or component, such as test scenarios, expected results, data sets, test procedures, scripts, and applicable testing tools.

Milestone: Beta Ready
Notes: This milestone represents that all features are ready for Beta release shutdown.
QA Deliverables/Roles: Two weeks' regression of Binary Tree features to Beta and preparation for Beta shutdown. All bugs verified and QA documentation finalized.

Milestone: Feature Complete
Notes: This phase allows for feature clean-up to verify remaining bug fixes and regression testing around the bug fixes. This milestone indicates that the feature is ready for Beta regression.
QA Deliverables/Roles: The Test Engineers should assess that Binary Tree features are ready for Beta regression and have started their preliminary Test Summary Reports.

Milestone: Regression Test
Notes: This milestone represents that the project is ready for regression testing.
QA Deliverables/Roles: Complete regression test execution of the complete system and update the Test Summary Reports for regression. Any unfinished testing documents should be completed.

Milestone: Ship/Live
End Date: 04/03/03
Notes: Product is out.

20 Planning Risks and Contingencies


The following seeks to identify some of the more likely project risks and propose possible contingencies:

- The Web site becomes unavailable. Testing will be delayed until this situation is rectified; it may be necessary to recruit more staff to do the testing or reduce the number of test cases.
- Web testing software is not available or does not work (e.g. the Web site uses cookies and the tool cannot handle cookies). This will delay the introduction of automated testing and result in more manual testing; it may be necessary to recruit more staff to do the testing or reduce the number of test cases.
- Testing staff shortages/unavailability: many of the test staff are part-time and have other, higher priorities, and no slack time is allocated for illness or vacation. It may be necessary to recruit more staff to do the testing or reduce the number of test cases.
- A large number of defects/incidents makes it functionally impossible to run all of the test cases.
- Not enough time to complete all test cases. If time cannot be extended, individual test cases will be skipped, starting with the lowest priority.

21 Approvals
Name (Print)                    Signature                    Date
1.
2.
3.
4.
5.


22 Glossary
API: Application Program Interface
BCP: Business Continuity Plan
Branch testing: A white box test design technique in which test cases are designed to execute branches.
CAT: Client Acceptance Testing
Coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
End-to-End Testing: Tests user scenarios and various path conditions by verifying that the system runs and performs tasks accurately with the same set of data from beginning to end, as intended.
N/A: Not Applicable
QA: Quality Assurance
Severity: The degree of impact that a defect has on the development or operation of a component or system.
SME: Subject Matter Expert
SOP: Standard Operating Procedure
STR: System Test Report
TBD: To Be Determined
TSR: Test Summary Report

23 References
Pressman, Roger S. Software Engineering: A Practitioner's Approach. Fifth edition. The McGraw-Hill Companies, Inc.
Kaner, C., Falk, J., and Nguyen, H. Q. Testing Computer Software. Wiley Computer Publishing, 1999.
