
TESTING PROCESS

• FIRST: TEST PLAN


INPUTS:

1. Requirements book
2. Functional specs
3. Technical specs
4. Use cases
5. Design document
6. Project schedule

Process:

List of items in the test plan as per the IEEE 829 standard:

• Identify the list of tests to be performed for the system
• Identify test environment needs
• Identify test items
• Identify the approach
• Identify responsibilities, staffing, and training needs
• Identify control procedures
• Identify suspension criteria

OUTPUT:
Checklist for the test plan
1. Test Plan
2. Test Schedule
3. Test case specifications
4. Features to Be Tested
5. Features Not to Be Tested
6. Roles & Responsibilities
7. Risks/Assumptions
8. Resumption criteria
9. Approvals
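
As an illustration only, the checklist items above could be captured as a structured record; the field names below are hypothetical and not mandated by IEEE 829:

from dataclasses import dataclass, field

# Illustrative sketch: the test plan checklist items above as a structured
# record. Field names are hypothetical, not prescribed by IEEE 829.
@dataclass
class TestPlan:
    test_plan_id: str
    test_schedule: str = ""
    test_case_specifications: list[str] = field(default_factory=list)
    features_to_be_tested: list[str] = field(default_factory=list)
    features_not_to_be_tested: list[str] = field(default_factory=list)
    roles_and_responsibilities: dict[str, str] = field(default_factory=dict)
    risks_and_assumptions: list[str] = field(default_factory=list)
    suspension_criteria: list[str] = field(default_factory=list)
    resumption_criteria: list[str] = field(default_factory=list)
    approvals: list[str] = field(default_factory=list)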

• SECOND: TEST SPECIFICATION

INPUTS:

• Test Plan
• Requirements book
• Use cases
• Functional specifications
• Design documents
PROCESS:
1. Test preparation:

• Acquire / set up the test environment
• Analyze the application from both:
a. The user requirements – using techniques like:
   • Ambiguity review
   • Requirements-based testing
b. The technical standpoint:
   • From the developer's view
   • Current development standards and available technology

2. Identify test cases
a. Use testing strategies, levels, approaches, and techniques


3. Build test cases

a. Define test conditions
b. Identify test data
c. Create test cases based on the chosen strategy and techniques
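
For example, a test case built this way ties a test condition to concrete test data and an expected result. A minimal sketch in Python's unittest style, with a hypothetical login() function standing in for the feature under test:

import unittest

# Hypothetical function under test; stands in for a real feature.
def login(username: str, password: str) -> bool:
    return username == "admin" and password == "secret"

class TestLogin(unittest.TestCase):
    def test_valid_credentials_accepted(self):
        # Test condition: a registered user logs in with correct credentials.
        # Test data: username "admin", password "secret".
        # Expected result: login succeeds.
        self.assertTrue(login("admin", "secret"))

    def test_invalid_password_rejected(self):
        # Test condition: a registered user supplies a wrong password.
        self.assertFalse(login("admin", "wrong"))

if __name__ == "__main__":
    unittest.main()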

OUTPUT:

• Test cases to be executed
• Cross-reference:
  Test coverage matrix
  Expected results
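
The cross-reference is often kept as a requirements-to-test-cases traceability matrix. A minimal sketch, with made-up requirement and test-case IDs:

# Minimal sketch of a test coverage matrix: requirements cross-referenced
# to the test cases that exercise them. IDs are made up for illustration.
coverage = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],  # uncovered requirement: flagged below
}

for req, cases in coverage.items():
    status = ", ".join(cases) if cases else "NOT COVERED"
    print(f"{req}: {status}")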

• THIRD: TEST EXECUTION

INPUTS:

• Test cases
• Test execution schedule

PROCESS:
1. The test cases created during Test Specification are executed
according to the risks and priorities set
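
A minimal sketch of this priority-driven execution, assuming each test case carries a numeric priority (1 = highest); the test cases and their run() callables are placeholders:

# Sketch: run test cases in risk/priority order (1 = highest priority).
test_cases = [
    {"id": "TC-003", "priority": 2, "run": lambda: True},
    {"id": "TC-001", "priority": 1, "run": lambda: True},
    {"id": "TC-002", "priority": 3, "run": lambda: False},
]

for tc in sorted(test_cases, key=lambda t: t["priority"]):
    passed = tc["run"]()
    print(f'{tc["id"]}: {"PASS" if passed else "FAIL"}')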

OUTPUT:

• Test Results
  Actual Results

• FOURTH: TEST RECORDING

INPUTS:
1. Test Execution
2. Results from executed tests
3. Test log

PROCESS:
Record:
• Identities and versions of the software under test and the test
specification
• Actual outcomes
• Defect management / incident management

Incident Management Process:

INPUTS:
• Test Results
• Test logs

INCIDENT MANAGEMENT PROCESS


Incident: any significant, unplanned event that occurs during
testing that demands investigation and/or correction to allow
testing to proceed as planned.
An incident can be raised against:
• Code
• Documentation
• The SUT (System Under Test)
• The test environment
• Tests

INCIDENT REPORTING
The report should contain:
• Impact of the incident – severity
• Priority – impact, potential effect, causal analysis
• Test ID, system ID, tester's ID
• Expected and actual results
• Environment in use
• Date and time of execution
• System build information
• Any other relevant information
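
The report fields listed above could be carried in a simple record such as the following sketch; the field names and the severity/priority scales are illustrative:

from dataclasses import dataclass
from datetime import datetime

# Sketch of an incident report record carrying the fields listed above.
@dataclass
class IncidentReport:
    test_id: str
    system_id: str
    tester_id: str
    severity: str          # impact of the incident
    priority: str          # impact, potential effect, causal analysis
    expected_result: str
    actual_result: str
    environment: str
    executed_at: datetime
    system_build: str
    notes: str = ""        # any other relevant information

incident = IncidentReport(
    test_id="TC-001", system_id="SYS-A", tester_id="QA-07",
    severity="major", priority="high",
    expected_result="login succeeds", actual_result="HTTP 500",
    environment="staging", executed_at=datetime.now(),
    system_build="1.4.2",
)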

OUTPUT:
Defect Status Report

Expected vs Actual Defects Uncovered Timeline

• Used to show whether the number of defects uncovered is above or
below the expected number

Defects Uncovered vs. Corrected Gap Timeline

• Used to list the backlog of uncorrected defects that have been
reported

Average Age of Uncorrected Defects by Type

• Used to show the breakdown of the gap from the Defects Uncovered
vs. Corrected Gap Timeline report

Defect Distribution Report

• Used to show how defects are distributed among the modules /
units being tested

Relative Defect Distribution Report

• Used to normalize the defect distribution presented
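
To make the two distribution reports concrete, a sketch that tallies defects per module and normalizes the counts by module size; the defect records and sizes are made up for illustration:

from collections import Counter

# Sketch: defect distribution and relative (normalized) distribution.
defects = ["billing", "billing", "auth", "reports", "billing", "auth"]
module_size_kloc = {"billing": 12.0, "auth": 3.0, "reports": 8.0}

distribution = Counter(defects)
for module, count in distribution.items():
    relative = count / module_size_kloc[module]  # defects per KLOC
    print(f"{module}: {count} defects, {relative:.2f} per KLOC")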

COMBINED OUTPUT FOR TEST RECORDING:

1. Checklist for monitoring and control
2. Test Summary
3. TMX
4. Current Status Reports:
Function Test Matrix
Defect Status Report
Functional Testing Status Report
Functions Working Timeline
Expected vs Actual Defects Uncovered Timeline
Defects Uncovered vs Corrected Gap Timeline
Average Age of Uncorrected Defects by Type
Defect Distribution Report
Relative Defect Distribution Report
Testing Action Report
Individual Project Component Test Results
Summary Project Status Report
Individual Project Status Report
Final Test Reports

• FIFTH: MONITORING AND CONTROL

INPUTS:
• Reports from test records
• Test log

PROCESS:
CHECKING FOR TEST COMPLETION
• Check that the coverage criteria are met
• Check that the completion criteria set during Test Planning are met
• Check which planned deliverables were actually delivered
• Document the acceptance or rejection of the software system.
• Finalize and archive testware, such as scripts, the test
environment, and any other test infrastructure, for later reuse
to compare the results of testing between software versions.
• Evaluate how the testing went and analyze lessons learned for
future releases and projects.
• Use the test results to set targets for improving reviews and
testing with the goal of reducing the number of defects in live
use.
• Incident management
• This is documented in a test summary report or might be part
of an overall project evaluation report.
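
A sketch of an automated completion check against criteria set during Test Planning; the criteria and thresholds below are example values, not prescribed ones:

# Sketch: check test completion criteria set during Test Planning.
def testing_complete(requirements_covered: int, requirements_total: int,
                     open_severe_defects: int) -> bool:
    coverage = requirements_covered / requirements_total
    # Example criteria: full requirement coverage and no open severe defects.
    return coverage >= 1.0 and open_severe_defects == 0

print(testing_complete(requirements_covered=48, requirements_total=50,
                       open_severe_defects=1))  # False: criteria not met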

OUTPUT:
• Deliverables
• Baselined testware
• Process improvement recommendations
• Target setting for the live system

END

Under Unit Testing:


• User and Administrator accounts: Verify that user and group
authentication information is accurate and complete. Verify that
the correct user database is referenced by IIS 5.0 (local or
domain).
• Permissions: Verify file and directory permissions. Check access
to files using different user accounts. Run ported applications
and server extensions, such as CGI scripts and executables or
ISAPI dynamic-link libraries (DLLs), to exercise
Component Services settings and Execute/Script permissions.
• File names and paths: Check for file-name conflicts and verify
that file names and paths are correct. Verify that Windows
conventions are used within migrated files, including referenced
file names and paths, as discussed in Migrating a Web Server to
IIS 5.0 in this book.

• Hyperlinks and page formatting: Run as http://localhost/ and
verify hyperlinks. Also check for corrupt HTML that results in
improper page formatting. Be sure to include ASP in this testing,
if you are using it.

• Applications: Verify that pooled, out-of-process, and in-process
applications run correctly, as well as any applications that rely on
a third-party script interpreter, such as Perl. Test any ported CGI
or server-extending applications (EXE and DLL) to exercise
Component Services settings and Execute/Script permissions.
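
The hyperlink check above can be partly automated. A minimal sketch using only the Python standard library, assuming the server is reachable at http://localhost/: it collects the page's links and verifies that each one responds:

from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.parse import urljoin
from urllib.error import URLError, HTTPError

# Minimal link-check sketch: fetch a page and verify its hyperlinks respond.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

base = "http://localhost/"
collector = LinkCollector()
collector.feed(urlopen(base).read().decode("utf-8", errors="replace"))

for link in collector.links:
    url = urljoin(base, link)
    try:
        print(f"{url}: {urlopen(url).status}")
    except (HTTPError, URLError) as err:
        print(f"{url}: BROKEN ({err})")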

Under Integration Testing:

• Network identification: Verify that the server is correctly
identified on the network.

• Application integration: Test ASP applications that access
backend databases or other remote objects, to verify that they
function as expected and that permissions and script settings
(such as time-out) are set correctly.
• Stress, or load testing: Measure Web site performance,
working with a replica of the site in a lab environment with
multiple clients to simulate load on the servers. WCAT, a useful
tool for this simulation, is provided on the Resource Kit
companion CD. You can also check how individual Web sites are
using the CPU by using IIS 5.0 process accounting, as described
in the IIS 5.0 online product documentation.

• Server availability: Measure availability of the server on the
network by using the HTTP Monitoring Tool, which is included on
the Resource Kit companion CD. It is described in a white paper
you can read at http://msdn.microsoft.com/workshop/server/.
• Performance monitoring and tuning: Monitor server
performance by using the HTTP Monitoring Tool, as described in a
white paper you can read at
http://msdn.microsoft.com/workshop/server/.
See also Monitoring and Tuning Your Server in this book and the
Calculating Connection Performance topic in the IIS 5.0 online
product documentation.
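
WCAT is the tool the text recommends for load simulation; as a rough illustrative stand-in (not a replacement for WCAT), the following sketch simulates multiple concurrent clients against http://localhost/ and reports the average response time:

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Rough load-test sketch: hit the server with concurrent clients and
# report the average per-request response time.
URL = "http://localhost/"          # site under test
CLIENTS = 20                       # simulated concurrent clients
REQUESTS_PER_CLIENT = 10

def client_session() -> float:
    start = time.perf_counter()
    for _ in range(REQUESTS_PER_CLIENT):
        urlopen(URL).read()
    return (time.perf_counter() - start) / REQUESTS_PER_CLIENT

with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
    times = list(pool.map(lambda _: client_session(), range(CLIENTS)))

print(f"average response time: {sum(times) / len(times):.3f}s")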

• Security functionality: Test the various possible iterations of
the system to verify that security performs as expected in each
scenario. You can generate tests to exercise these system
variations from a matrix that includes:
• Each secured system component, such as Microsoft
Internet Explorer 5, IIS 5.0, and SQL Server.
• Variants in the security implementation of each component; for
example, browser Secure Sockets Layer (SSL) security (48 bit, 128
bit, or Server Gated Cryptography [SGC]), IIS 5.0 authentication
(none, Basic, or integrated Windows authentication), and so forth.
• Communication protocols between system components, such as
named pipes and TCP/IP sockets.
• Security against penetration: Test security against intrusion,
as described in Security in this book.
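
The security test matrix can be enumerated mechanically. A sketch over the dimensions named in the bullets above (the labeling scheme is illustrative):

from itertools import product

# Sketch: enumerate security test combinations from the matrix dimensions
# above (components, SSL variants, IIS 5.0 authentication modes).
components = ["Internet Explorer 5", "IIS 5.0", "SQL Server"]
ssl_variants = ["48-bit", "128-bit", "SGC"]
auth_modes = ["none", "Basic", "integrated Windows"]

for case_id, (comp, ssl, auth) in enumerate(
        product(components, ssl_variants, auth_modes), start=1):
    print(f"SEC-{case_id:02d}: component={comp}, SSL={ssl}, auth={auth}")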

Under Application Testing:
• Code review: Check hyperlink references, keywords, and
programming style. Make sure that any UNIX conventions are
changed to Windows conventions, as described in Migrating a
Web Server to IIS 5.0 in this book. For ASP optimization tips, see
ASP Best Practices. For tips on reviewing ASP code, see
http://msdn.microsoft.com/workshop/server/.
• Load or stress testing: Test the number of concurrent users
the application can support. Verify that CPU and memory usage
is acceptable under high loads. You can use the Web Application
Stress Tool, included on the Resource Kit companion CD, for
stress testing multitier ASP applications.
• Performance testing: Test application performance under a
variety of conditions. Test overall performance impact of the
application on the server.
• Application logic: Run as http://localhost/ and check for proper
operation of application logic.
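
A minimal smoke-test sketch for this application-logic check; the page path and expected marker string are placeholders, not part of the original guidance:

from urllib.request import urlopen

# Smoke-test sketch for application logic: request a page on localhost
# and assert that the expected content marker appears.
PAGE = "http://localhost/orders.asp"     # placeholder path
EXPECTED_MARKER = "Order Summary"        # placeholder content marker

body = urlopen(PAGE).read().decode("utf-8", errors="replace")
assert EXPECTED_MARKER in body, f"{PAGE} did not render expected content"
print(f"{PAGE}: OK")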
