RECORD OF CHANGE
SIGNATURE PAGE
Effort: exertion (dictionary definition). In a software project, effort means the time spent to complete a piece of work.
Actual effort is what really happened (a record of events as they occurred in practice).
There are two kinds of plan:
1. Master plan (the project plan, prepared by the Project Manager; it includes the planned period for testing the software). Roughly comparable to a semester-wide calendar, from which date to which date, prepared by the school.
2. Detail plan, prepared by the tester, specific to each day. Roughly comparable to the exam office's day-by-day schedule: which subject is examined, in which slot, where, and how many proctors are needed.
Note
Plans revolve around the people who perform the work and the shared tools they use (roughly comparable to rooms shared for exams). The more staff can be reused across multiple projects, the better; likewise, using shared tools (shared software, shared servers, etc.) across many projects is preferable because it reduces the company's costs.
<Position>
<Position>
<Position>
<Position>
TABLE OF CONTENTS
1 INTRODUCTION
1.1 Purpose
1.2 Background information
1.3 Scope of testing
1.4 Constraints
1.5 Risk list
3 TEST STRATEGY
4 RESOURCE
5 TEST MILESTONES
6 DELIVERABLES
1 INTRODUCTION
1.1 Purpose
<Describe briefly the purpose and organization of this document: how many sections it has and what each section covers.>
1.2 Background information
<Enter a brief description of the target-of-test (components, application, system, etc.) and its goals. Include information such as major functions and features, its architecture, and a brief history of the project.>
1.3 Scope of testing
<Describe the stages of testing (for example, Unit, Integration, or System) and the types of testing that will be addressed by this plan, such as Function or Performance.
Provide a brief list of the target-of-test's features and functions that will or will not be tested.
List any assumptions made during the development of this document that may impact the design, development, or implementation of testing.
Define the trigger for regression testing (applicable to maintenance projects), and the period and scope of regression testing.
State whether the defects found are within the range of expected defects (refer to Planned defect in Fsoft Insight).>
1.4 Constraints
- Test environment differences, or the lack of some external systems that interface with the system under test (a reference to the SRS document may be included here if the constraints have been stated in the SRS)
1.5 Risk list
<List any risks, and the corresponding mitigation and contingency actions, that may affect the design, development, or implementation of testing.>
The listing below identifies the items (use cases, functional requirements, non-functional requirements) that are targets for testing; it represents what will be tested.
3 TEST STRATEGY
<The Test Strategy presents the recommended approach to the testing of the target-of-test.
State clearly the type of test being implemented, the test objectives and how you will conduct
the test.
If a type of test will not be implemented and executed, state this explicitly, such as “This test
will not be implemented or executed. This test is not appropriate.”
The main considerations for the test strategy are the techniques to be used and the criterion
for knowing when the testing is completed.
For each type of test, explain the technique, the completion criteria, and any special considerations.
Technique: The technique should describe how testing will be implemented and executed.
Include what will be tested, the major actions to be taken during test execution, and the
method(s) used to evaluate the results
Completion criteria: how you will know the testing is complete, including the method of measurement.
Special considerations: this section should identify any influences or dependencies that may impact or influence the test effort described in the test strategy. Influences might include:
- Human resources (such as the availability of, or need for, non-test resources to support or participate in testing)
- Testing becoming unproductive
Technique:
- Functional Test:
For each use case flow of events, a representative set of transactions will be identified, each representing the actions taken by the actor when the use case is executed.
A minimum of two test cases will be developed for each transaction; one test case to reflect the
positive condition and one to reflect the negative (unacceptable) condition.
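As a sketch of this positive/negative pairing (the `save_contract` function and its validation rules are hypothetical, not part of this plan, and Python's `unittest` stands in for the plan's tooling):

```python
import unittest

def save_contract(name, amount):
    """Hypothetical transaction under test: save a contract record."""
    if not name or amount <= 0:
        raise ValueError("invalid contract data")
    return {"name": name, "amount": amount}

class SaveContractTransactionTest(unittest.TestCase):
    def test_positive_condition(self):
        # Positive condition: valid data, the transaction succeeds.
        record = save_contract("ACME", 1000)
        self.assertEqual(record["name"], "ACME")

    def test_negative_condition(self):
        # Negative (unacceptable) condition: invalid data is rejected.
        with self.assertRaises(ValueError):
            save_contract("", -5)
```

Running `python -m unittest` against a module containing these cases executes both conditions for the transaction.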
In the first iteration, use cases 1 - 4, and 12 will be tested, in the following manner:
Use Case 1 begins with the actor already logged into the application and at the main window,
and terminates when the user has specified SAVE.
Each test case will be implemented and executed using Rational Robot.
Verification and assessment of execution for each test case will be done using the following
methods:
Test script execution (did each test script execute successfully and as desired?)
Window Existence, or Object Data verification methods (implemented in the test scripts) will be
used to verify that key windows display and specified data is captured / displayed by the target-
of-test during test execution.
The target-of-test's database (using Microsoft Access) will be examined before the test and
again after the test to verify that the changes executed during the test are accurately reflected
in the data.
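The before/after database check can be sketched as follows (SQLite stands in for the plan's Microsoft Access database, and the `contracts` table and inserted record are hypothetical):

```python
import sqlite3

# In-memory SQLite database standing in for the target-of-test's database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contracts (id INTEGER PRIMARY KEY, name TEXT)")

# Examine the database before the test...
before = conn.execute("SELECT COUNT(*) FROM contracts").fetchone()[0]

# ...run the test transaction (here, a single insert)...
conn.execute("INSERT INTO contracts (name) VALUES ('ACME')")
conn.commit()

# ...and examine it again after, verifying the change is reflected in the data.
after = conn.execute("SELECT COUNT(*) FROM contracts").fetchone()[0]
assert after == before + 1, "executed change not reflected in the data"
```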
- Performance Test:
For each use case, a representative set of transactions, as identified in the workload analysis
document will be implemented and executed using Rational Suite PerformanceStudio and
Rational Robot (GUI scripts).
At least three workloads will be reflected in the test scripts and test execution schedules
including the following:
Test scripts used to execute each transaction will include the appropriate timers to capture
response times, such as total transaction time (as defined in the workload analysis document),
and key transaction activity or process times.
The test scripts will execute the workloads for one hour (unless noted differently by the
workload analysis document).
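The timer idea can be sketched like this (the plan's Rational timers are replaced by `time.perf_counter`, `run_transaction` is a hypothetical stub, and the loop count stands in for a one-hour run):

```python
import time

def run_transaction():
    # Hypothetical stand-in for one logical business transaction.
    time.sleep(0.01)

# Wrap each transaction in a timer, as the test scripts would.
response_times = []
for _ in range(20):  # a real schedule would run the workload for one hour
    start = time.perf_counter()
    run_transaction()
    response_times.append(time.perf_counter() - start)

print(f"{len(response_times)} transactions, slowest {max(response_times):.3f}s")
```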
Verification and assessment of execution for each test execution (of a workload) will include:
Test execution will be monitored using state histograms (to verify that the test and workloads
are executing as expected and desired)
Test script execution (did each test script execute successfully and as desired?)
Capture and evaluation of the identified response times using the following reports:
Performance Percentile
Response Time
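A minimal sketch of the percentile report (the nearest-rank method and the sample response times are illustrative assumptions, not figures from this plan):

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times (seconds)."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

# Made-up response times for ten transactions, in seconds.
response_times = [0.21, 0.35, 0.30, 1.20, 0.28, 0.33, 0.40, 0.25, 0.31, 0.95]
print("90th percentile response time:", percentile(response_times, 90))
```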
Completion criteria:
All planned test cases have been re-executed and all known defects have been addressed
as agreed upon, and no new defects have been discovered
Or
All high priority test cases have been re-executed and all known defects have been addressed as agreed upon, and no new defects have been discovered.
Special considerations
Test databases will require the support of a database designer / administrator to create,
update, and refresh test data.
System performance testing will use the servers on the existing network (which supports
non-test traffic). Testing will need to be scheduled after hours to ensure no non-test traffic
on the network.
The target-of-test must be synchronized with the legacy system (or the synchronization simulated) for full functional testing to be implemented and executed.
Testers can stop the test when developers do not perform unit testing.
< Function testing of the target-of-test should focus on any requirements for test that can be
traced directly to use cases or business functions and business rules. The goals of these tests
are to verify proper data acceptance, processing, and retrieval, and the appropriate
implementation of the business rules. This type of testing is based upon black box techniques;
that is, verifying the application and its internal processes by interacting with the application via
the Graphical User Interface (GUI) and analyzing the output or results. Identified below is an
outline of the testing recommended for each application:>
Test Objective: <Ensure proper target-of-test functionality, including navigation, data entry,
processing, and retrieval. >
Technique: <Execute each use case, use-case flow, or function, using valid and invalid data, to verify the following:>
Special Considerations: <Identify or describe those items or issues (internal or external) that impact the implementation and execution of function tests.>
<User Interface (UI) testing verifies a user’s interaction with the software. The goal
of UI testing is to ensure that the User Interface provides the user with the
appropriate access and navigation through the functions of the target-of-test. In
addition, UI testing ensures that the objects within the UI function as expected and
conform to corporate or industry standards. >
Technique: <Create or modify tests for each window to verify proper navigation and object states for each application window and its objects.>
Special Considerations: <Not all properties for custom and third-party objects can be accessed.>
<The databases and the database processes should be tested as a subsystem within the
Project. These subsystems should be tested without the target-of-test’s User Interface as the
interface to the data. Additional research into the DataBase Management System (DBMS) needs
to be performed to identify the tools and techniques that may exist to support the testing
identified below. >
Technique: <Invoke each database access method and process, seeding each with valid and invalid data or requests for data. Inspect the database to ensure the data has been populated as intended and all database events occurred properly, or review the returned data to ensure that the correct data was retrieved for the correct reasons.>
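A sketch of this seed-and-inspect technique, with SQLite standing in for the project's DBMS and a hypothetical `accounts` schema:

```python
import sqlite3

# SQLite stands in for the project's DBMS; the accounts schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")

def insert_account(conn, balance):
    # Hypothetical database access method under test.
    conn.execute("INSERT INTO accounts (balance) VALUES (?)", (balance,))
    conn.commit()

# Seed with valid data, then inspect the database directly (no UI involved).
insert_account(conn, 100)
assert conn.execute("SELECT balance FROM accounts").fetchone()[0] == 100

# Seed with invalid data: a NULL balance must violate the NOT NULL constraint.
try:
    insert_account(conn, None)
    raise AssertionError("invalid data was accepted")
except sqlite3.IntegrityError:
    pass  # the database rejected the request, as intended
```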
<Business Cycle Testing should emulate the activities performed on the <Project Name> over
time. A period should be identified, such as one year, and transactions and activities that would
occur during a year’s period should be executed. This includes all daily, weekly, and monthly
cycles, and events that are date-sensitive, such as in a banking application.>
Technique: <Testing will include using valid and invalid data to verify the following:>
Special Considerations: <System dates and events may require special support activities.>
<Performance profiling is a performance test in which response times, transaction rates, and
other time-sensitive requirements are measured and evaluated. The goal of Performance
Profiling is to verify performance requirements have been achieved. Performance profiling is
implemented and executed to profile and tune a target-of-test's performance behaviors as a
function of conditions such as workload or hardware configurations.
Note: Transactions below refer to “logical business transactions”. These transactions are
defined as specific use cases that an actor of the system is expected to perform using the
target-of-test, such as add or modify a given contract. >
There are several methods that can be used to perform this, including:
- Use multiple physical clients, each running test scripts, to place a load on the system.
- The databases used for Performance Testing should be either actual size or scaled equally.>
<Load testing is a performance test which subjects the target-of-test to varying workloads to
measure and evaluate the performance behaviors and ability of the target-of-test to continue to
function properly under these different workloads. The goal of load testing is to determine and
ensure that the system functions properly beyond the expected maximum workload.
Additionally, load testing evaluates the performance characteristics, such as response times,
transaction rates, and other time sensitive issues). >
<Note: Transactions below refer to “logical business transactions”. These transactions are
defined as specific functions that an end user of the system is expected to perform using the
application, such as add or modify a given contract. >
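A minimal load-generation sketch, with threads standing in for the multiple physical clients described above and a hypothetical `transaction` stub:

```python
import threading
import time

def transaction(results, index):
    # Hypothetical logical business transaction run by one simulated client.
    start = time.perf_counter()
    time.sleep(0.01)  # stands in for real server work
    results[index] = time.perf_counter() - start

clients = 8  # threads stand in for multiple physical client machines
results = [None] * clients
threads = [threading.Thread(target=transaction, args=(results, i))
           for i in range(clients)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{clients} concurrent clients done; slowest response {max(results):.3f}s")
```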
<Stress testing is a type of performance test implemented and executed to find errors due to
low resources or competition for resources. Low memory or disk space may reveal defects in the
target-of-test that aren't apparent under normal conditions. Other defects might result from
competition for shared resources like database locks or network bandwidth. Stress testing can
also be used to identify the peak workload the target-of-test can handle. >
Test Objective: <Verify that the target-of-test functions properly and without error under the following stress conditions:
Note: the goal of Stress Testing might also be stated as identifying and documenting the conditions under which the system FAILS to continue functioning properly.>
Completion Criteria: <All planned tests are executed, and specified system limits are reached or exceeded without the software failing, or the conditions under which system failure occurs are outside of the specified conditions.>
Special Considerations: <Stressing the network may require network tools to load the network with messages or packets.
The DASD used for the system should temporarily be reduced to restrict the available space for the database to grow.>
<Volume Testing subjects the target-of-test to large amounts of data to determine if limits are
reached that cause the software to fail. Volume Testing also identifies the continuous maximum
load or volume the target-of-test can handle for a given period. For example, if the target-of-
test is processing a set of database records to generate a report, a Volume Test would use a
large test database and check that the software behaved normally and produced the correct
report. >
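A scaled-down sketch of the report example above (SQLite, the `records` table, and the 50,000-row size are illustrative assumptions; a real volume test would use an actual-size or equally scaled database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, amount INTEGER)")

# Populate a deliberately large test database (50,000 rows here).
conn.executemany("INSERT INTO records (amount) VALUES (?)",
                 [(i % 100,) for i in range(50_000)])
conn.commit()

# The "report": check the software still behaves normally and totals correctly.
count, total = conn.execute(
    "SELECT COUNT(*), SUM(amount) FROM records").fetchone()
assert count == 50_000 and total == 50_000 // 100 * sum(range(100))
print("report generated correctly at volume:", total)
```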
Test Objective: <Verify that the target-of-test successfully functions under the following high
volume scenarios:
Maximum database size has been reached (actual or scaled) and multiple
queries or report transactions are executed simultaneously. >
Completion Criteria: <All planned tests have been executed and specified system limits are reached or exceeded without the software failing.>
Special Considerations: <What period of time would be considered an acceptable time for high volume conditions, as noted above?>
<Security and Access Control Testing focus on two key areas of security:
System-level Security, including logging into or remote access from the system.
Application-level security ensures that, based upon the desired security, actors are restricted to
specific functions or use cases, or are limited in the data that is available to them. For example,
everyone may be permitted to enter data and create new accounts, but only managers can
delete them. If there is security at the data level, testing ensures that "user type one" can see all customer information, including financial data, whereas "user type two" only sees the demographic data for the same client.
System-level security ensures that only those users granted access to the system are capable of
accessing the applications and only through the appropriate gateways. >
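The per-user-type permission checks can be sketched like this (the user types, actions, and permission table are hypothetical, not part of this plan):

```python
# Hypothetical application-level permission table.
PERMISSIONS = {
    "manager": {"enter_data", "create_account", "delete_account"},
    "clerk": {"enter_data", "create_account"},
}

def can(user_type, action):
    """Return True if the given user type is permitted to perform the action."""
    return action in PERMISSIONS.get(user_type, set())

# One check per user type and permission, as the technique recommends.
assert can("manager", "delete_account")
assert not can("clerk", "delete_account")  # only managers may delete
assert can("clerk", "enter_data")
assert not can("guest", "enter_data")      # unknown user types get nothing
```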
Technique: <Create tests for each user type and verify each permission by creating transactions specific to each user type. Modify the user type and re-run tests for the same users, verifying in each case that access is correctly granted or denied.>
Completion Criteria: <For each known actor type, the appropriate functions or data are available, and all transactions function as expected and as run in prior Application Function tests.>
Special Considerations: <Access to the system must be reviewed or discussed with the
appropriate network or systems administrator. This testing may not be
required as it may be a function of network or systems administration.
>
<Regression testing is a necessary maintenance activity aimed at showing that code has not been
adversely affected by changes>
Technique: <Reuse the set of test cases from an existing test suite to test a modified module.>
Special Considerations:
<Clearly state the stage in which each test will be executed. Identified below are the stages in which common tests are executed.>
                                                      Stage of Test
Type of Tests                                  Unit  Integration  System  Acceptance
<Functional Tests
(Function, User Interface)>                     X        X           X        X
<Performance Tests (Performance
profiles of individual components)>             X        X
<Performance Tests
(Load, Stress, Contention)>                                          X        X
<Reliability
(Integrity, Structure)>                         X        X
3.3 Tools
4 RESOURCE
4.2 System
5 TEST MILESTONES
Testing of v1.2 should incorporate test activities for each of the test efforts identified in the previous sections. Separate project milestones should be identified to communicate project status and accomplishments.
6 DELIVERABLES
<Test procedures>
<Defect log>
<Defect reports>