
Ch 4: Test Management

Overview:
1) Test Planning
2) Test Management
3) Test Process
4) Test Reporting

1. Test Planning:
1) Preparing a Test Plan
2) Scope Management
3) Deciding Test Approach
4) Setting up criteria for testing
5) Identifying Responsibilities
6) Staffing
7) Training needs
8) Resource requirements
9) Test Deliverables
10) Testing Tasks
1.1 Preparing a Test Plan:
A test plan acts as the anchor for the execution, tracking, and reporting of the entire testing project. It covers the following points:
1) What needs to be tested?
2) How the testing is going to be performed?
3) What resources are needed for testing?
4) The timelines by which the testing activities will be performed.
5) Risks that may be faced in all of the above, with
appropriate mitigation and contingency plans.
1.2 Scope Management:
1) Understanding what constitutes a release of a
product;
2) Breaking down the release into features;
3) Prioritizing the features for testing;
4) Deciding which features will be tested and which
will not be; and
5) Gathering details to prepare for estimation of
resources for testing.

1.2.1 Choice & Prioritization of features to be tested:
1) Features that are new and critical for the
release.
2) Features whose failures can be catastrophic.
3) Features that are expected to be complex to
test.
4) Features which are extensions of earlier
features that have been defect prone.

1.3 Deciding Test Approach:
1) What type of testing would you use for
testing the functionality?
2) What are the configurations or scenarios for
testing the features?
3) What integration testing would you do to
ensure these features work together?
4) What localization validations would be
needed?
5) What “non-functional” tests would you need
to do?
1.4 Setting up criteria for testing:
1) Encountering more than a certain number of defects, causing frequent stoppage of testing activity;
2) Hitting show stoppers that prevent further progress of testing (for example, if a database does not start, further tests of queries, data manipulation, and so on are simply not possible to execute); and
3) Developers releasing a new version which they advise should be used instead of the product under test (because of some critical defect fixes).
When such conditions are addressed, the tests can resume.
1.5 Identifying Responsibilities:
1) Ensure there is clear accountability for a given task, so that each person knows what he or she has to do;
2) Clearly list the responsibilities for various functions to various people, so that everyone knows how his or her work fits into the entire project;
3) Complement each other, ensuring that no one steps on another's toes; and
4) Supplement each other, so that no task is left unassigned.
1.6 Staffing:
1) Staffing is done based on the estimation of the effort involved, and
2) the availability of time for the release.
3) In order to ensure that the right tasks get executed, the features and tasks are prioritized on the basis of effort, time, and importance.

1.7 Training needs:
1) It may not always be possible to find a perfect fit between the requirements and the skills available.
2) In case there are gaps between the requirements and the availability of skills, they should be addressed with appropriate training programs.
3) It is important to plan for such training programs upfront, as they are usually de-prioritized under project pressures.

1.8 Resource requirements:
1) Machine configuration (RAM, processor, disk, etc.) needed to run the product under test.
2) Overheads required by the test automation tool, if any.
3) Supporting tools such as compilers, test data generators, configuration management tools, and so on.
4) The different configurations of the supporting software (for example, OS) that must be present.
5) Special requirements for running machine-intensive tests such as load tests and performance tests.
6) An appropriate number of licenses for all the software.
1.9 Test Deliverables:
1) The test plan itself (the master test plan, and various other test plans for the project).
2) Test case design specifications.
3) Test cases, including any automation that is specified in the plan.
4) Test logs produced by running the tests.
5) Test summary reports.

1.10 Testing Tasks:
1) Size Estimation
2) Effort Estimation
3) Schedule Estimation

1.10.1 Size Estimation:
1) Size estimation quantifies the actual amount of testing that needs to be done.
2) Size of the product under test, measured by:
   a) LOC: lines of code
   b) FP: function points
   c) Number of screens, reports, or transactions.
3) Extent of automation required.
4) Number of platforms and inter-operability environments to be tested.
5) The size estimate is expressed in terms of any of the following (see the sketch after this list):
   a) Number of test cases
   b) Number of test scenarios
   c) Number of configurations to be tested.
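To make the idea concrete, here is a minimal, hedged sketch of a size estimate in Python. Every number below (features, scenarios per feature, platforms) is invented for illustration; a real project would substitute counts from its own scope breakdown.

```python
# Illustrative size estimate; every figure below is an assumption,
# not data from the slides.
features = 40              # features in scope for the release
scenarios_per_feature = 5  # average test scenarios per feature
platforms = 3              # platform/inter-operability environments

test_cases = features * scenarios_per_feature   # size in test cases
total_executions = test_cases * platforms       # size in executions

print(f"test cases to design: {test_cases}")            # 200
print(f"test executions planned: {total_executions}")   # 600
```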
1.10.2 Effort Estimation:
1) Estimating effort is important because effort often has a more direct influence on cost than size (a worked sketch follows this list). Factors that affect the effort estimate include:
2) Productivity data.
3) Reuse opportunities.
4) Robustness of processes:
   a) Well-documented standards for writing test specifications, test scripts, and so on;
   b) Proven processes for performing functions such as reviews and audits;
   c) Consistent ways of training people; and
   d) Objective ways of measuring the effectiveness of compliance with processes.
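A hedged arithmetic sketch of turning the size estimate into effort, using assumed productivity and reuse figures; an organization would calibrate these from its own historical data.

```python
# Effort = size / productivity, adjusted for reuse. All numbers are
# placeholder assumptions for illustration.
test_cases_to_design = 200
test_executions = 600

design_productivity = 8      # test cases designed/scripted per person-day
execution_productivity = 25  # test executions per person-day
reuse_fraction = 0.30        # 30% of tests reused from an earlier release

design_effort = test_cases_to_design * (1 - reuse_fraction) / design_productivity
execution_effort = test_executions / execution_productivity

print(f"design effort:    {design_effort:.1f} person-days")     # 17.5
print(f"execution effort: {execution_effort:.1f} person-days")  # 24.0
```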

1.10.3 Schedule Estimation:
1) Identifying external and internal dependencies among the activities.
2) Sequencing the activities, based on the expected duration as well as on the dependencies.
3) Identifying the time required for each of the work breakdown structure (WBS) activities, taking into account the above two factors.
4) Monitoring the progress in terms of time and effort.
5) Rebalancing schedules and resources as necessary.
6) Based on the dependencies and the parallelism possible, the test activities are scheduled in a sequence that helps accomplish them in the minimum possible time, while taking care of all the dependencies (see the sequencing sketch below).
7) This schedule is expressed in the form of a Gantt chart.
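Sequencing activities from their dependencies (step 2 above) is essentially a topological sort. The sketch below shows the idea with invented activity names; the Gantt chart is then drawn from this order plus the duration estimates.

```python
# Dependency-based sequencing of WBS activities (illustrative names).
from graphlib import TopologicalSorter  # Python 3.9+

# activity -> set of activities that must finish first
deps = {
    "receive build": set(),
    "set up test lab": set(),
    "write test specs": {"receive build"},
    "script tests": {"write test specs"},
    "execute tests": {"script tests", "set up test lab"},
    "report results": {"execute tests"},
}

print(" -> ".join(TopologicalSorter(deps).static_order()))
# Activities with no mutual dependency ("receive build",
# "set up test lab") can be scheduled in parallel.
```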
1.10.3.1 External & Internal dependencies:
External dependencies (outside the control of the person performing the activity):
1) Availability of the product from developers;
2) Hiring;
3) Training;
4) Acquisition of hardware/software required for testing; and
5) Availability of translated message files for testing.

Internal dependencies (fully within the control of the manager/person performing the activity), for example:
1) Completing the test specification;
2) Coding/scripting the tests;
3) Executing the tests.
2. Test Management:
1) Choice of standards
2) Test infrastructure management
3) Test people management
4) Integrating with product release

2.1 Choice of Standards:
1) External standards:
   a) Standards that a product should comply with; they are externally visible and are usually stipulated by external consortia.
   b) From a testing perspective, these include standard tests supplied by external consortia and acceptance tests supplied by customers.
   c) Compliance with external standards is usually mandated by external parties.
2.1 Choice of Standards:
2) Internal standards:
   a) Standards formulated by a testing organization to bring in consistency and predictability.
   b) They standardize the processes and methods of working within the organization.
   c) Some of the internal standards include:
      i. Naming and storage conventions for test artifacts;
      ii. Documentation standards;
      iii. Test coding standards; and
      iv. Test reporting standards.
2.1.1 Naming & storage conventions for test artifacts:
1) Every test artifact (test specification, test case, test results, and so on) has to be named appropriately and meaningfully. Such naming conventions should enable:
   a) Easy identification of the product functionality that a set of tests is intended for; and
   b) Reverse mapping to identify the functionality corresponding to a given set of tests.
2) This two-way mapping between tests and product functionality, achieved through appropriate naming conventions, enables identification of the appropriate tests to be modified and run when product functionality changes (see the sketch below).
3) In addition to file naming conventions, the standards may also stipulate the conventions for directory structures for tests. Such directory structures can group logically related tests together (along with related product functionality).
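As a sketch of such a convention, suppose test files are named <product>_<module>_<feature>_<nnn> (this particular pattern is an assumption for illustration, not one prescribed by the slides). Parsing the name then gives the forward and reverse mapping directly:

```python
# Hypothetical naming convention: <product>_<module>_<feature>_<nnn>.py
def parse_test_name(filename: str) -> dict:
    """Recover the product functionality a test artifact belongs to."""
    stem = filename.removesuffix(".py")  # Python 3.9+
    product, module, feature, seq = stem.split("_")
    return {"product": product, "module": module,
            "feature": feature, "seq": seq}

print(parse_test_name("payroll_tax_deduction_001.py"))
# {'product': 'payroll', 'module': 'tax', 'feature': 'deduction', 'seq': '001'}
```

Grouping such files under a directory like tests/payroll/tax/ would be the corresponding directory-structure convention.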
2.1.2 Documentation Standards:
1) Appropriate header-level comments at the beginning of a file that outline the functions to be served by the test.
2) Sufficient in-line comments, spread throughout the file, explaining the functions served by the various parts of a test script. This is especially needed for those parts of a test script that are difficult to understand or have multiple levels of loops and iterations.
3) Up-to-date change history information, recording all the changes made to the test file.
Without such detailed documentation, a person maintaining the test scripts is forced to rely only on the actual test code or script to guess what the test is supposed to do or what changes happened to it. Furthermore, it may place an undue dependence on the person who originally wrote the tests.
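A short sketch of what such documentation might look like in practice; the script, its owner line, and its change history are all invented for illustration:

```python
# payroll_tax_deduction_001.py  (hypothetical test script)
#
# Purpose       : Verify that tax is deducted on the gross salary only
#                 after exemptions have been applied.
# Owner         : <test case owner>
# Change history:
#   2004-03-10  created for release 2.0
#   2004-07-22  expected values updated for revised tax slabs

def test_tax_deduction():
    gross, exemptions, rate = 50_000, 10_000, 0.10
    # In-line comment: tax applies only to the taxable portion.
    deduction = (gross - exemptions) * rate
    assert deduction == 4_000
```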
2.1.3 Test Coding Standards:
1) Enforce the right type of initialization and clean-up that the test should do to make the results independent of other tests;
2) Stipulate ways of naming variables within the scripts to make sure that a reader understands the purpose of a variable consistently (for example, instead of using generic names such as i, j, and so on, the names can be meaningful, such as network_int_flag);
3) Encourage reusability of test artifacts (for example, all tests should call an initialization module init_env first, rather than use their own initialization routines); and
4) Provide standard interfaces to external entities like the OS, hardware, and so on (for example, if tests are required to directly spawn processes, the coding standards may dictate that they should all call a standard function, say create_os_process). By isolating the external interfaces separately, the tests can be reasonably insulated from changes to these lower-level layers. The sketch below illustrates these conventions.
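A minimal sketch of these conventions in one place. The names init_env and create_os_process come from the slide; their bodies, and the test itself, are assumptions:

```python
import subprocess

def init_env():
    """Standard initialization: reset state so results do not depend
    on tests that ran earlier."""
    # e.g. clear temp files, reload known test data
    pass

def create_os_process(args):
    """The single standard interface for spawning OS processes,
    insulating tests from lower-level platform changes."""
    return subprocess.Popen(args)

def test_network_startup():
    init_env()                       # shared init, per the standard
    network_int_flag = True          # meaningful variable name
    proc = create_os_process(["echo", "network up"])
    proc.wait()
    assert network_int_flag and proc.returncode == 0
```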
2.1.4 Test Reporting Standards:
1) Since testing is tightly interlinked with product quality, all the stakeholders must get a consistent and timely view of the progress of tests.
2) Test reporting standards address this issue.
3) They provide guidelines on the level of detail that should be present in the test reports, their standard formats and contents, recipients of the reports, and so on.

2.2 Test infrastructure management:
Testing requires a robust infrastructure to be planned upfront. This infrastructure is made up of three essential elements:
1) A test case database (TCDB)
2) A defect repository (DR)
3) A software configuration management (SCM) repository and tool

2.2.1 A test case database (TCDB):

Entity: Test case
Purpose: Records all the "static" information about the tests.
Attributes: test case ID; test case name (file name); test case owner; associated files for the test case.

Entity: Test case - product cross-reference
Purpose: Provides a mapping between the tests and the corresponding product features; enables identification of tests for a given feature.
Attributes: test case ID; module ID.

Entity: Test case run history
Purpose: Gives the history of when a test was run and what the result was; provides inputs for the selection of tests for regression runs.
Attributes: test case ID; run date; time taken; run status (success/failure).

Entity: Test case - defect cross-reference
Purpose: Gives details of test cases introduced to test certain specific defects detected in the product; provides inputs for the selection of tests for regression runs.
Attributes: test case ID; defect reference # (points to a record in the defect repository).
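One way to realize these entities is as four relational tables. The sketch below uses SQLite; the column names mirror the attributes listed above, while the types and everything else are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test_case (
    test_case_id     INTEGER PRIMARY KEY,
    name             TEXT,   -- file name
    owner            TEXT,
    associated_files TEXT
);
CREATE TABLE test_case_product_xref (
    test_case_id INTEGER REFERENCES test_case,
    module_id    TEXT
);
CREATE TABLE test_case_run_history (
    test_case_id INTEGER REFERENCES test_case,
    run_date     TEXT,
    time_taken   REAL,
    run_status   TEXT    -- success / failure
);
CREATE TABLE test_case_defect_xref (
    test_case_id INTEGER REFERENCES test_case,
    defect_ref   INTEGER -- points to a record in the defect repository
);
""")
```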
2.2.2 A defect repository:
1) A defect repository captures all the relevant details of defects reported for a product.
2) The defect repository is an important vehicle of communication that influences the workflow within a software organization.
3) It also provides the data for arriving at several of the metrics.
4) Most of the metrics classified as testing defect metrics and development defect metrics are derived from the data in the defect repository.
2.2.2 A defect repository:

Entity: Defect details
Purpose: Records all the "static" information about the defects.
Attributes: defect ID; defect priority/severity; defect description; affected product.

Entity: Defect test details
Purpose: Provides details of test cases for a given defect; cross-references the TCDB.
Attributes: defect ID; test case ID.

Entity: Fix details
Purpose: Provides details of fixes for a given defect; cross-references the configuration management repository.
Attributes: defect ID; fix details (files changed, fix release information).

Entity: Communication
Purpose: Captures all the details of the communication that transpired for this defect among the various stakeholders (for example, between the testing team and the development team); provides insights into the effectiveness of communication.
Attributes: test case ID; defect reference #; details of communication.
2.2.3 Configuration management repository:
The CM repository is also called the software configuration management (SCM) repository.
Change control ensures that:
1) Changes to test files are made in a controlled fashion and only with proper approvals.
2) Changes made by one test engineer are not accidentally lost or overwritten by other changes.
3) Each change produces a distinct version of the file that is re-creatable at any point of time.
4) At any point of time, everyone gets access to only the most recent version of the test files (except in exceptional cases).
Version control ensures that the test scripts associated with a given release of a product are baselined along with the product files. Baselining is akin to taking a snapshot of the set of related files of a version and assigning a unique identifier to this set, as sketched below.
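The snapshot-plus-identifier idea can be illustrated with a content hash over the files in a baseline. Real SCM tools do this bookkeeping for you (for example, via version labels or tags); this sketch only shows the concept:

```python
import hashlib
from pathlib import Path

def baseline_id(files):
    """One unique identifier for a set of related file versions."""
    digest = hashlib.sha256()
    for f in sorted(files):          # stable order -> stable identifier
        digest.update(f.name.encode())
        digest.update(f.read_bytes())
    return digest.hexdigest()[:12]

# e.g. baseline_id(Path("tests").glob("*.py")) identifies the test
# scripts that belong with a given product release.
```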
2.2.4 Test infrastructure management:
1) The TCDB, defect repository, and SCM repository should complement each other and work together in an integrated fashion, as shown in the figure below.

[Figure: the TCDB (test case - product info and test case - defect cross-references) linked with the DR (defect details, defect communication, defect fix details, defect test details) and the SCM repository (product source code, product documentation, test files, environment files).]
2.2.5 Test infrastructure management:
1) The defect repository links the defects, fixes, and tests.
2) The files for all of these will be in the SCM.
3) The metadata about the modified test files will be in the TCDB.
4) Thus, starting with a given defect, one can trace all the test cases that test the defect (from the TCDB) and then find the corresponding test case files and source files from the SCM repository.
2.2.5 Test infrastructure management:
Similarly, in order to decide which tests to run for a given regression run (sketched below):
1) The defects recently fixed can be obtained from the defect repository, and the tests for these can be obtained from the TCDB and included in the regression tests.
2) The list of files changed since the last regression run can be obtained from the SCM repository, and the corresponding test files traced from the TCDB.
3) The set of tests not run recently can be obtained from the TCDB, and these can become potential candidates to be run at certain frequencies.
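Pulling the three rules together, a hedged sketch with the repository contents modeled as plain dictionaries; a real implementation would query the actual TCDB, defect repository, and SCM tool:

```python
# All IDs and file names below are invented for illustration.
recently_fixed = {101, 104}                  # from the defect repository
defect_to_tests = {101: {"tc_7"}, 104: {"tc_9", "tc_12"}}  # TCDB xref
changed_files = {"payroll.c"}                # from the SCM repository
file_to_tests = {"payroll.c": {"tc_3", "tc_7"}}            # TCDB xref
stale_tests = {"tc_20"}                      # not run recently (TCDB)

regression = set()
for defect in recently_fixed:                # rule 1: tests for recent fixes
    regression |= defect_to_tests.get(defect, set())
for path in changed_files:                   # rule 2: tests for changed files
    regression |= file_to_tests.get(path, set())
regression |= stale_tests                    # rule 3: long-unrun tests

print(sorted(regression))  # ['tc_12', 'tc_20', 'tc_3', 'tc_7', 'tc_9']
```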
2.3 Test People management:

[Cartoon: a developer grumbles, "These testing folks… they are always nitpicking!"; a tester retorts, "Why don't these developers do anything right?"; and a salesperson asks, "When will I get a product out that I can sell?!"]
2.3.1 Test People Management:
1) People management is an integral part of any project management.
2) People management requires the ability to hire, motivate, and retain the right people.
3) These skills are seldom formally taught (unlike technical skills).
4) Project managers often learn these skills in a "sink or swim" mode, being thrown head-on into the task.
5) Team-building exercises should be ongoing and sustained, rather than done in one burst.
6) The effect of these exercises tends to wear out under the pressure of delivery deadlines and quality.
7) The common goals and the spirit of teamwork have to be internalized by all stakeholders.
8) Such internalization and upfront team building have to be part of the planning process for the team to succeed.
2.4 Integrating with product release:
1) Sync points between development and testing as to when different types of testing can commence (for example, when integration testing could start, when system testing could start, and so on).
2) Service level agreements between development and testing as to how long it would take for the testing team to complete the testing (finding important defects only).
3) Consistent definitions of the various priorities and severities of the defects (a shared vision of the nature of defects).
4) Communication mechanisms to the documentation group to ensure that the documentation is kept in sync with the product in terms of known defects, workarounds, and so on.
3. Test Process:
1) Baselining a Test Plan
2) Test Case Specification
3) Update of the Traceability Matrix

3.1 Baselining a Test Plan:
1) A test plan combines all the points of product release planning into a single document that acts as an anchor point for the entire testing project.
2) The test plan is reviewed by a designated set of competent people in the organization. It is then approved by a competent authority, who is independent of the project manager directly responsible for testing.
3) After this, the test plan is baselined into the configuration management (SCM) repository.
4) From then on, any significant changes in the testing project should be reflected in the SCM repository.
5) In addition, periodically, any changes needed to the test plan templates are discussed among the different stakeholders, and the templates are kept current and applicable to the testing teams.
3.1.1 Test planning checklist:
1) Scope: features to be tested / not to be tested
2) Environment: SCM tool, DR, TCDB
3) Criteria: entry and exit criteria
4) Test cases: naming conventions, approved or not, traceability
5) Effort estimation: size, effort, time
6) Schedule: resource availability, parallelism constraints
7) Risks: mitigation strategies, idle time
8) People: skill level and number of people
9) Execution: as per plan, TCDB update
10) Completion: test summary report, outstanding defects
3.1.2 Test plan template:
1) Introduction: scope
2) References: requirement specification document
3) Test methodology and strategy/approach
4) Test criteria: entry/exit criteria, suspension/resumption criteria
5) Assumptions, dependencies, and risks
6) Estimation: size, effort, schedule (time)
7) Test deliverables and milestones
8) Responsibilities
9) Resource requirements: hardware/software, people, and others
10) Training requirements: possible attendees, constraints
11) Defect logging and tracking process
12) Metrics plan
13) Product release criteria
3.2 Test Case Specification:
1) The purpose of the test.
2) Items being tested, along with their version/release numbers as appropriate.
3) The environment that needs to be set up for running the test case.
4) The input data to be used for the test case.
5) The steps to be followed to execute the test.
6) The expected results that are considered to be "correct results."
7) A step to compare the actual results produced with the expected results.
8) Any relationship between this test and other tests.
(One possible structured representation is sketched below.)
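These fields map naturally onto a structured record. The dataclass below is one possible representation, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class TestCaseSpec:
    purpose: str
    items_under_test: list       # with version/release numbers
    environment: str
    input_data: dict
    steps: list
    expected_results: list
    related_tests: list = field(default_factory=list)

    def passed(self, actual_results: list) -> bool:
        """The comparison step: actual results vs. expected results."""
        return actual_results == self.expected_results
```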
3.3 Traceability Matrix:
1) A requirements traceability matrix ensures that the requirements make it through the subsequent life cycle phases.
2) The traceability matrix is a tool to validate that every requirement is tested.
3) The traceability matrix is created during the requirements gathering phase itself, by filling in the unique identifier for each requirement.
4) Subsequently, as the project proceeds through the design and coding phases, the unique identifiers for design features and the program file names are entered in the traceability matrix.
5) When a test case specification is complete, the row corresponding to the requirement being tested by the test case is updated with the test case specification identifier.
6) This ensures that there is a two-way mapping between requirements and test cases (see the sketch below).
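A minimal sketch of the matrix filling up across phases, with invented identifiers:

```python
# requirement id -> artifacts recorded as the life cycle proceeds
matrix = {
    "REQ-001": {"design": "DES-4", "code": "payroll.c", "tests": []},
    "REQ-002": {"design": "DES-7", "code": "tax.c",     "tests": []},
}

# When a test case specification is completed, update the row for
# the requirement it covers:
matrix["REQ-001"]["tests"].append("TC-101")

# Validating that every requirement is tested:
untested = [req for req, row in matrix.items() if not row["tests"]]
print("untested requirements:", untested)   # ['REQ-002']
```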
4. Test Reporting:
1) Recommending Product Release
2) Traceability Matrix update
3) Executing test cases
4) Collecting and Analyzing Metrics
5) Preparing test summary report

4.1 Recommending Product Release Criteria:
1) Testing can never conclusively prove the absence of defects in a software product.
2) What it provides is evidence of what defects exist in the product, their severity, and their impact.
3) Senior management can then take a meaningful business decision on product release, based on:
   a) What defects the product has;
   b) What the impact/severity of each of the defects is; and
   c) What the risks of releasing the product with the existing defects would be.
4.1.2 Recommending Product Release:
1) Based on the test summary report, an organization can take a decision on whether to release the product or not.
2) Ideally, an organization would like to release a product with zero defects.
3) However, market pressures may cause the product to be released with defects, provided that senior management is convinced that there is no major risk of customer dissatisfaction.
4) If the remnant defects are:
   a) of low priority/impact, or
   b) encountered only under unrealistic conditions,
   then the organization may choose to release the product.
5) Such a decision should be taken by senior management only after consultation with the customer support team, the development team, and the testing team, so that the overall workload for all parts of the organization can be evaluated.
4.2 Traceability Matrix update:
1) During test design and execution, the traceability matrix should be kept current.
2) As and when tests get designed and executed successfully, the traceability matrix should be updated.
3) The traceability matrix itself should be subject to configuration management (SCM).
4) It should be subject to version control and change control.

4.3 Executing test cases:
1) The defect repository is updated with:
   a) Defects from the earlier test cycles that are fixed in the current build; and
   b) New defects that get uncovered in the current run of the tests.
2) The defect repository works as the primary vehicle of communication between the test team and the development team.
3) The defect repository contains all the information about defects uncovered by testing (and defects reported by customers).
4) A test should be run only when the entry criteria for the test are satisfied, and should be deemed complete only when the exit criteria are satisfied (see the sketch below).
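The entry/exit-criteria rule in item 4 can be expressed as a small guard; the criteria themselves are placeholders here:

```python
def run_test(test, entry_criteria, exit_criteria):
    """Run only when entry criteria hold; mark complete only when
    exit criteria hold afterwards."""
    if not all(check() for check in entry_criteria):
        return "blocked"       # entry criteria not satisfied
    test()
    if all(check() for check in exit_criteria):
        return "complete"
    return "incomplete"        # ran, but exit criteria not yet met
```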
4.4 Collecting and Analyzing Metrics:
1) When tests are executed, information about test execution gets collected in test logs and other files.
2) The basic measurements from running the tests are then converted into meaningful metrics by the use of appropriate transformations and formulae, as sketched below.
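As a small illustration, here is one basic measurement-to-metric transformation: computing a pass rate from an invented test log. The log format and the metric chosen are assumptions:

```python
# Hypothetical test log records; real data would come from the runs.
log = [
    {"test": "tc_3",  "status": "pass"},
    {"test": "tc_7",  "status": "fail"},
    {"test": "tc_9",  "status": "pass"},
    {"test": "tc_12", "status": "fail"},
]

executed = len(log)
passed = sum(1 for record in log if record["status"] == "pass")
pass_rate = 100 * passed / executed

print(f"executed={executed} passed={passed} pass rate={pass_rate:.0f}%")
# executed=4 passed=2 pass rate=50%
```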

4.5 Preparing test summary report:
Two types:
a) A phase-wise test summary, which is produced at the end of every phase.
b) A final test summary report, which has all the details of all testing done in all phases and by all teams; also called a "release test report."
A summary report should present:
a) A summary of the activities carried out during the test cycle.
b) Variance of the activities carried out from the activities planned: additional tests run, modified test cases, and so on.
c) A summary of results, which should include the tests that failed and the severity of the impact of the defects uncovered by the tests.
d) A comprehensive assessment and recommendation for release, which should include a "fit for release" assessment and a release recommendation.
