
UNIT - II

TEST MANAGEMENT:

Documenting test plan and test case

Effort estimation

Configuration management

Project progress management.

Use of Testopia for test case documentation and test management.

DEFECT MANAGEMENT:

Test Execution

Logging defects

Defect lifecycle - fixing / closing defects.

Use of Bugzilla for logging and tracking defects.

TEST DATA MANAGEMENT:

Test Data Management –Overview

Why Test Data Management

Test Data Types

Need for Test Data Setup

Test Data Setup Stages

Test data management Challenges.

Creating sample test data using MS-Excel.



TEST MANAGEMENT

Introduction
In this chapter, we cover essential topics for test management in six sections. The first relates to
how to organize the testers and the testing. The second concerns the estimation, planning and
strategizing of the test effort. The third addresses test progress monitoring, test reporting and test
control. The fourth explains configuration management and its relationship to testing. The fifth
covers the central topic of risk and how testing affects and is affected by product and project risks.
The sixth and final section discusses the management of incidents, both product defects and other
events that require further investigation.

What is Test Management?


Test Management is a series of planning, execution, monitoring and control activities that help
achieve project goals.

The Test Management Process is the set of activities that runs from the start of testing to its end.
It brings discipline to testing: following a test process gives us a plan at the outset and provides the
means to plan and control the testing throughout the project cycle. It helps to track and monitor the
testing throughout the project, provides transparency of the testing to stakeholders, and preserves a
record of the tests conducted for future reference. It affords a deep level of detail about the testing
being carried out and gives all stakeholders a clear understanding of the testing activities of prior
and subsequent projects. Many tools (such as qTest, JIRA, Team Service and TestLink) are
available to manage the test process. The test process can be defined and practised differently
according to the needs of the testing at hand. The typical activities in a test process are explained
below.

1) Test Plan:

The test plan serves as the initial blueprint for carrying out the testing, and testing is tracked and
monitored against it. It gives an advance picture of the testing challenges and aspects that will be
addressed for the software, and maintaining a test plan lets us manage changes to the plan. When
starting new projects, the test plan should be improved based on the lessons learned in previous
testing. A test plan explains the overview of the particular requirement that needs to be tested, the
scope, functional and non-functional requirements, risks and mitigation, the testing approaches, the
test schedule and deliverables, out-of-scope items and assumptions, the test team and allocations,
the test environment, test activity mechanisms and any other special notes for testing.

Test plan elements and their descriptions:

Overview: An overview of the test plan and its purpose. What is the project that needs to be tested?
A brief description of the software to be tested and the purpose of providing this software to the
user.

Scope and out of scope: What is the purpose of the testing and what type of testing is going to be
carried out? Whether anything is out of scope of the testing. A brief explanation of the software
project and what is covered in the test plan. Defines a frame for the testing based on resources,
effort, budget and timeline: which features or sections will be covered and which will not be
covered during the testing.

Functional and non-functional requirements: Explains each functional and non-functional
(performance testing, usability testing) test that needs to be carried out and each feature that will be
tested. Each functional and non-functional item should be stated without ambiguity.

Risk and mitigation: Explains the identified project, software and resource-related risks, along with
the mitigation plan and its likelihood. Identifies risks that could be faced during the testing, such as
resource unavailability, delays in developer releases, slips in schedule, limited understanding of
functions and gaps between business and system requirements.

Testing approaches: What kinds of testing approaches will be used and what types of testing will be
carried out, for example installation testing, functional testing and UAT. Specifies the tools to be
used in testing and the tool and license information needed for the testing.

Test schedule and deliverables: Describes the overall start and completion dates of the testing. The
dates and number of developer releases need to be found out, and each developer release date, test
start date and completion date mentioned. Analyze the requirements and the testing to be carried
out and then come up with the effort; based on the resources, plan the schedule with milestones,
also considering time frames such as any specific deadlines.

Assumptions: Any assumptions related to the software, project, resources or any other concepts
have to be written here.

Test team and allocations: Who are the testers that will be involved and what are their
responsibilities in the project? To whom is training required, if any? When responsibilities are set,
it is easy to conduct the testing in the project.

Test environment: Provides all the information related to the test environment. What is the test
environment? In which browsers is the testing carried out? Mentions the UAT environment, any
external systems that will be accessed during the testing, and the capacity of RAM and processor.

2) Test Design:

Test design specifies how the testing will be implemented. Typically it involves creating test cases
with inputs and expected outputs of the system and choosing which test cases are necessary for the
execution of the test. The tester should have a clear understanding and appropriate knowledge to set
the expected result. In this way the coverage of the testing is defined and the tester will not miss
any scenario. There are two types of test design technique: static testing and dynamic testing. Static
testing tests without execution, mostly artifacts such as documents, while dynamic testing tests by
executing the system.

Test case (elements in a test case document):

• Project / Test title, Test executed by, Test executed date, Version of the software and Test
environment

• Test case number

• Test summary

• Steps

• Pre-condition

• Post condition

• Test data

• Actual result

• Expected result

• Test result

• Note

3) Test Execution:

Test execution is the act of running the tests and checking the actual system results against the
expected results. It can be done manually or by using an automation suite. During execution, the
tester needs to make sure that the user's needs are satisfied by the software. Test execution is
conducted step by step by referring to the documents created during test design, and the tester
needs to keep track of progress while executing the test cases.

Example for static testing:

• Test the requirement specification document.

• Test the design document

• Test the user guide



Example for dynamic testing:

• Unit testing

• Functional testing

• Integration testing

4) Exit Criteria:

Exit criteria determine when to stop the test execution. Exit criteria are defined during the test
planning phase and used in the test execution phase as a milestone. The tester needs to set the exit
criteria at the beginning, though they may change during the project run as well. Factors such as
client needs, system stability and fulfilled functionality decide the exit criteria. Once the exit
criteria have been reached, testing is stopped. Below are some common exit criteria.

• All critical defects are closed.

• All the reported defects are closed and verified.

• The areas most frequently used by users have been executed and covered.

• The system caters to all the requirements.

• All the important functions are tested and working as expected.

5) Test Reporting:

Test reporting gives a picture of the test process and results for a particular testing cycle. To define
the elements of the test report, the first thing to consider is who the audiences of the report are. For
example, a project manager will want to see a high-level picture of the testing, intermediate people
will wish to view more detail, and the client will expect the test report organized by criteria such as
requirements or features. The test report is prepared and communicated periodically, such as daily,
weekly or monthly, and needs to be sent at different stages and times. For future projects, the
results in test reports need to be analyzed and the lessons learned applied. A test report contains
elements such as test execution status, percentage completed, planned vs. executed test cases, test
environment, test execution by resource, risks and mitigation if any, defect summary, test scenarios
and conditions, any assumptions, any notes, etc.

Test coverage report: (Elements of test coverage report)

• Percentage completed

• Test scenario

• Software area

• Tested resource

• Tested date

• Test result

Test Management Phases:

This topic briefly introduces the Test Management Process and gives an overview of the Test
Management Phases. Each phase is described in more detail in the sections that follow.

There are two main parts of the Test Management Process:

• Planning

1. Risk Analysis

2. Test Estimation

3. Test Planning

4. Test Organization

• Execution

1. Test Monitoring and Control

2. Issue Management

3. Test Report and Evaluation

Planning

Risk Analysis and Solution

Risk is the potential loss (an undesirable outcome, however not necessarily so) resulting from a
given action or an activity.

Risk Analysis is the first step the Test Manager should consider before starting any project.
Because all projects may contain risks, early risk detection and identification of solutions help the
Test Manager avoid potential loss in the future and save on project cost.

Test Estimation

An estimate is a forecast or prediction. Test Estimation is approximately determining how long a
task would take to complete. Estimating effort for the test is one of the major and important tasks
in Test Management.

Benefits of correct estimation:

1. Accurate test estimates lead to better planning, execution and monitoring of tasks under a
test manager's attention.

2. Allow for more accurate scheduling and help realize results more confidently.

Test Planning

A Test Plan can be defined as a document describing the scope, approach, resources, and
schedule of intended Testing activities. A project may fail without a complete Test Plan. Test
planning is particularly important in large software system development.

In software testing, a test plan gives detailed testing information regarding an upcoming testing
effort, including:

• Test Strategy

• Test Objective

• Exit /Suspension Criteria

• Resource Planning

• Test Deliverables

Test Organization

Now you have a plan, but how will you stick to the plan and execute it? To answer that question,
you have the Test Organization phase.

Generally speaking, you need to organize an effective testing team. You have to assemble a
skilled team to run the ever-growing testing engine effectively.

Execution

Test Monitoring and Control

What will you do when your project runs out of resources or exceeds the time schedule? You need
to monitor and control test activities to bring it back on schedule.

Test Monitoring and Control is the process of overseeing all the metrics necessary to ensure that the
project is running well, on schedule, and not over budget.

Monitoring

Monitoring is a process of collecting, recording, and reporting information about the project
activity that the project manager and stakeholders need to know.

To Monitor, Test Manager does following activities

• Define the project goal, or project performance standard



• Observe the project performance, and compare the actual and the planned performance
expectations

• Record and report any detected problem which happens to the project

Controlling

Project Controlling is a process of using data from monitoring activity to bring actual performance
to planned performance.

In this step, the Test Manager takes action to correct the deviations from the plan. In some cases,
the plan has to be adjusted according to project situation.

Issue Management

As mentioned at the beginning of this topic, all projects may have potential risks. When a risk
happens, it becomes an issue.

In the life cycle of any project, there will always be unexpected problems and questions that
crop up. For example:

• The company cuts down your project budget

• Your project team lacks the skills to complete the project

• The project schedule is too tight for your team to finish the project by the deadline.

Risks to be avoided while testing:

• Missing the deadline

• Exceeding the project budget

• Losing the customer's trust

When these issues arise, you have to be ready to deal with them – or they can potentially affect the
project's outcome.

Test Report & Evaluation

The project has been completed. It is now time to look back at what you have done.

The purpose of the Test Evaluation Report is to describe the results of the testing in terms of test
coverage and exit criteria. The data used in test evaluation are based on the test result data and the
test result summary.

DOCUMENTING TEST PLAN AND TEST CASE

What is Test case?

A test case is a document, which has a set of test data, preconditions, expected results and
postconditions, developed for a particular test scenario in order to verify compliance against a
specific requirement.

A test case acts as the starting point for the test execution; after applying a set of input values, the
application has a definitive outcome and leaves the system at some end point, also known as the
execution postcondition.

What is a Test Plan?

Test Plan is a dynamic document. The success of a testing project depends upon a well-written test
plan document that is current at all times. Test Plan is more or less like a blueprint of how the
testing activity is going to take place in a project.

1) Test Plan is a document that acts as a point of reference, and testing is carried out within the QA
team based only on it.

2) It is also a document that we share with the Business Analysts, Project Managers, Dev team and
the other teams. This helps to enhance the level of transparency of the QA team’s work to the
external teams.

3) It is documented by the QA manager/QA lead based on the inputs from the QA team members.

4) Test planning is typically allocated about one-third of the time taken for the entire QA
engagement. Another third is for test design and the rest is for test execution.

5) This plan is not static and is updated on an on-demand basis.

6) The more detailed and comprehensive the plan is, the more successful will be the testing activity.

Importance of Test Plan

Making Test Plan has multiple benefits

• Test Plan helps us determine the effort needed to validate the quality of the application
under test

• Helps people outside the test team, such as developers, business managers and
customers, understand the details of testing.

• Test Plan guides our thinking. It is like a rule book, which needs to be followed.

• Important aspects like test estimation, test scope, Test Strategy are documented in Test
Plan, so it can be reviewed by Management Team and re-used for other projects.

The entire testing process, from planning through to closure, produces information, some of which
you will need to document. How precisely should testers write the test designs, cases and
procedures? Typical factors to track and report include:

• Tests: the number run, passed, failed, blocked, skipped, and so forth.

• Coverage: the portions of the test basis, the software code or both that have been tested and which
have not.

• Quality: the status of the important quality characteristics for the system.

• Money: the cost of finding the next defect in the current level of testing compared to the cost of
finding it in the next level of testing (or in production).

• Risk: the undesirable outcomes that could result from shipping too early (such as latent defects or
untested areas) or too late (such as loss of market share).

When writing exit criteria, we try to remember that a successful project is a balance of quality,
budget, schedule and feature considerations. This is even more important when applying exit
criteria at the end of the project.

The factors to consider in such decisions are often called 'entry criteria' and 'exit criteria.' For such
criteria, typical factors are:

• Acquisition and supply: the availability of staff, tools, systems and other materials required.

• Test items: the state that the items to be tested must be in to start and to finish testing.

• Defects: the number known to be present, the arrival rate, the number predicted to remain, and
the number resolved.

How to write a Test Plan

You already know that making a Test Plan is the most important task of the Test Management
Process. Follow the eight steps below to create a test plan as per IEEE 829:

1. Analyze the product

2. Design the Test Strategy

3. Define the Test Objectives

4. Define Test Criteria

5. Resource Planning

6. Plan Test Environment

7. Schedule & Estimation

8. Determine Test Deliverables

What is Test Documentation?

Test documentation is documentation of artifacts created before or during the testing of software. It
helps the testing team to estimate testing effort needed, test coverage, resource tracking, execution
progress, etc. It is a complete suite of documents that allows you to describe and document test
planning, test design, test execution, test results that are drawn from the testing activity.

Advantages of Test Documentation


• The main reason behind creating test documentation is to either reduce or remove any
uncertainties about the testing activities. Helps you to remove ambiguity which often arises
when it comes to the allocation of tasks
• Documentation not only offers a systematic approach to software testing, but it also acts as
training material to freshers in the software testing process
• It is also a good marketing & sales strategy to showcase Test Documentation to exhibit a
mature testing process
• Test documentation helps you to offer a quality product to the client within specific time
limits
• In Software Engineering, Test Documentation also helps to configure or set-up the program
through the configuration document and operator manuals
• Test documentation helps you to improve transparency with the client

Disadvantages of Test Documentation


• The cost of the documentation may surpass its value as it is very time-consuming.
• Many times, it is written by people who can't write well or who don't know the material.
• Keeping track of changes requested by the client and updating corresponding documents is
tiring.

• Poor documentation directly reflects the quality of the product, as a misunderstanding
between the client and the organization can occur.

Documenting Test case:

TEST CASE DESCRIPTION


A test case contains all the information necessary to verify some particular functionality of the
software:
Purpose: Describe the features of the software to be tested, and the particular behavior being
verified by this test.
Requirement Traceability: A cross reference to the number of the requirements in the system
specification which are being verified in this test.
Setup: Describe all the steps necessary to set up the software environment needed to carry out the
test.
Test Data: Write the actual input data to be provided and the expected output for your actual
working product. You must provide the actual input data values, not just a description.

Documenting Test Plan and Test case:

Plans, estimates and strategies depend on a number of factors, including the level, targets and
objectives of the testing we're setting out to do. Writing a plan, preparing an estimate and selecting
test strategies tend to happen concurrently and ideally during the planning period for the overall
project, though we must be ready to revise them as the project proceeds and we gain more information.
Let's look closely at how to prepare a test plan, examining issues related to planning for a project,
for a test level or phase, for a specific test type and for test execution.

We'll examine typical factors that influence the effort related to testing, and see two different
estimation approaches: metrics-based and expert-based. We'll discuss selecting test strategies and
ways to establish adequate exit criteria for testing. In addition, we'll look at various tasks related to
test preparation and execution that need planning. As you read, keep your eyes open for the glossary
terms entry criteria, exit criteria, exploratory testing, test approach, test level, test plan, test
procedure and test strategy.

The purpose and substance of test plans: while people tend to have different definitions of what
goes in a test plan, for us a test plan is the project plan for the testing work to be done. It is not a
test design specification, a collection of test cases or a set of test procedures; in fact, most of our
test plans do not address that level of detail.

The test planning process and the plan itself serve as vehicles for communicating with other
members of the project team, testers, peers, managers and other stakeholders. This communication
allows the test plan to influence the project team and the project team to influence the test plan,
especially in the areas of organization-wide testing policies and motivations; test scope, objectives
and critical areas to test; project and product risks, resource considerations and constraints; and the
testability of the item under test.

The test plan also helps us manage change. During early phases of the project, as we gather more
information, we revise our plans. As the project evolves and situations change, we adapt our plans.
Written test plans give us a baseline against which to measure such revisions and changes.
Furthermore, updating the plan at major milestones helps keep testing aligned with project needs.
As we run the tests, we make final adjustments to our plans based on the results. You might not
have the time - or the energy - to update your test plans every time a variance occurs, as some
projects can be quite dynamic. You can include these change records in a periodic test plan update,
as part of a test status report, or as part of an end-of-project test summary.

In terms of the specific project, understanding the purpose of testing means knowing the answers to
questions such as:

• What is in scope and what is out of scope for this testing effort?

• What are the test objectives?

• What are the important project and product risks?

• What constraints affect testing (e.g., budget limitations, hard deadlines, etc.)?

• What is most critical for this product and project?

• Which aspects of the product are more (or less) testable?

• What should be the overall test execution schedule and how should we decide the order in which
to run specific tests? (Product and planning risks, discussed later in this chapter, will influence the
answers to these questions.)

You should then select strategies which are appropriate to the purpose of testing.

Irrespective of the test case documentation method chosen, any good test case template must have
the following fields:

Test case fields and their descriptions:

Test case ID: Each test case should be represented by a unique ID. To indicate the test type, follow
some convention like "TC_UI_1", indicating "User Interface Test Case #1".

Test Priority: Useful while executing the test. Values: Low, Medium, High.

Name of the Module: The name of the main module or sub-module being tested.

Test Designed by: Tester's name.

Date of test designed: Date when the test was designed.

Test Executed by: Name of the tester who executed the test.

Date of the Test Execution: Date when the test needs to be executed.

Name or Test Title: Title of the test case.

Description/Summary of Test: The summary or purpose of the test in brief.

Pre-condition: Any requirement that needs to be met before execution of this test case. List all
pre-conditions required to execute this test case.

Dependencies: Any dependencies on test requirements or other test cases.

Test Steps: All the test steps in detail, written in the order in which they need to be executed. While
writing test steps, ensure that you provide as much detail as you can.

Test Data: Test data used as input for the test case. Provide different data sets with precise values
to be used as input.

Expected Results: The expected result, including any error or message that should appear on
screen.

Post-Condition: The state of the system after running the test case.

Actual Result: The actual test result, filled in after test execution.

Status (Fail/Pass): Mark this field as failed if the actual result is not as per the expected result.

Several standard fields of a sample Test Case template are listed below

Test case ID: Unique ID is required for each test case. Follow some convention to indicate the
types of the test. E.g. ‘TC_UI_1’ indicating ‘user interface test case #1’.

Test priority (Low/Medium/High): This is very useful while test execution. Test priority for
business rules and functional test cases can be medium or higher whereas minor user interface cases
can be of a low priority. Test priority should always be set by the reviewer.

Module Name: Mention the name of the main module or the sub-module.

Test Designed By: Name of the Tester.

Test Designed Date: Date when it was written.

Test Executed By: Name of the Tester who executed this test. To be filled only after test execution.

Test Execution Date: Date when the test was executed.

Test Title/Name: Test case title. E.g. verify login page with a valid username and password.

Test Summary/Description: Describe the test objective in brief.

Pre-conditions: Any prerequisite that must be fulfilled before the execution of this test case. List all
the pre-conditions in order to execute this test case successfully.

Dependencies: Mention any dependencies on the other test cases or test requirement.

Test Steps: List all the test execution steps in detail. Write test steps in the order in which they
should be executed. Make sure to provide as many details as you can. Tip – in order to manage a
test case efficiently with a smaller number of fields, use this field to describe the test conditions,
test data and user roles for running the test.

Test Data: Use of test data as an input for this test case. You can provide different data sets with
exact values to be used as an input.

Expected Result: What should be the system output after test execution? Describe the expected
result in detail including message/error that should be displayed on the screen.

Post-condition: What should be the state of the system after executing this test case?

Actual result: Actual test result should be filled after test execution. Describe the system behavior
after test execution.

Status (Pass/Fail): If actual result is not as per the expected result, then mark this test as failed.
Otherwise, update it as passed.

The test cases will differ depending upon the functionality of the software they are intended for.
However, given below is a template which you can always use for documenting test cases.

Sample Test Cases


Based on the above template, below is an example that showcases the concept in a more
understandable way.
Let’s assume that you are testing the login functionality of any web application, say Facebook.

Below are the Test Cases for the same:
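For illustration, a few representative test cases for this scenario (the data values below are assumed) could be:

TC_LOGIN_1 – Verify login with a valid username and password
Steps: Open the login page; enter a registered email and the correct password; click "Log In".
Test data: user@example.com / Valid@123
Expected result: The user is logged in and redirected to the home page.

TC_LOGIN_2 – Verify login with an invalid password
Steps: Open the login page; enter a registered email and a wrong password; click "Log In".
Test data: user@example.com / Wrong@999
Expected result: Login is rejected and an error message is displayed.

TC_LOGIN_3 – Verify login with blank fields
Steps: Open the login page; leave the email and password fields blank; click "Log In".
Test data: none
Expected result: Validation messages prompt the user to enter the required fields.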



EFFORT ESTIMATION

What is Software Test Estimation?

Test Estimation is a management activity which approximates how long a Task would take to
complete. Estimating effort for the test is one of the major and important tasks in
Test Management.

Estimation techniques:

There are two techniques for estimation covered by the ISTQB Foundation Syllabus. One involves
consulting the people who will do the work and other people with expertise on the tasks to be
done. The other involves analyzing metrics from past projects and from industry data. Let's
look at each in turn. Asking the individual contributors and experts involves working with
experienced staff members to develop a work-breakdown structure for the project. With that done,
you work together to understand, for each task, the effort, duration, dependencies, and resource
requirements. The idea is to draw on the collective wisdom of the team to create your test estimate.
Using a tool such as Microsoft Project or a whiteboard and sticky-notes, you and the team can then
predict the testing end-date and major milestones. This technique is often called 'bottom up'
estimation because you start at the lowest level of the hierarchical breakdown in the work-
breakdown structure - the task - and let the duration, effort, dependencies and resources for each
task add up across all the tasks.

Analyzing metrics can be as simple or sophisticated as you make it. The simplest approach is to
ask, 'How many testers do we typically have per developer on a project?' A somewhat more reliable
approach involves classifying the project in terms of size (small, medium or large) and complexity
(simple, moderate or complex) and then seeing on average how long projects of a particular size
and complexity combination have taken in the past.

Another simple and reliable approach we have used is to look at the average effort per test case in
similar past projects and to use the estimated number of test cases to estimate the total effort.
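For illustration, a minimal sketch of this per-test-case approach in Python, using assumed historical values (the average effort per test case and the estimated test case count below are not from this document):

# Metrics-based test effort estimate (illustrative values only).
avg_effort_per_test_case_hours = 0.75   # assumed historical average from similar past projects
estimated_test_cases = 400              # assumed number of test cases for the new project

total_effort_hours = avg_effort_per_test_case_hours * estimated_test_cases
total_effort_days = total_effort_hours / 8        # assuming an 8-hour working day

print(f"Estimated test effort: {total_effort_hours:.0f} person-hours "
      f"(about {total_effort_days:.0f} person-days)")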

Sophisticated approaches involve building mathematical models in a spreadsheet that look at


historical or industry averages for certain key parameters - number of tests run by tester per day,
number of defects found by tester per day, etc. - and then plugging in those parameters to predict
duration and effort for key tasks or activities on your project.

The tester-to-developer ratio is an example of a top-down estimation technique, in that the entire
estimate is derived at the project level, while the parametric technique is bottom-up, at least when
it is used to estimate individual tasks or activities.

We prefer to start by drawing on the team's wisdom to create the work-breakdown structure and a
detailed bottom-up estimate. We then apply models and rules of thumb to check and adjust the
estimate bottom-up and top-down using past history. This approach tends to create an estimate that
is both more accurate and more defensible than either technique by itself. Even the best estimate
must be negotiated with management.

Factors affecting test effort

Testing is a complex endeavor on many projects and a variety of factors can influence it. When
creating test plans and estimating the testing effort and schedule, you must keep these factors in
mind or your plans and estimates will deceive you at the beginning of the project and betray you at
the middle or end. The test strategies or approaches you pick will have a major influence on the
testing effort.
In this section, let's look at factors related to the product, the process and the results of testing.
• Product factors start with the presence of sufficient project documentation so that the testers
can figure out what the system is, how it is supposed to work and what correct behavior
looks like.
• In other words, adequate and high-quality information about the test basis will help us do a
better, more efficient job of defining the tests.
• The importance of non-functional quality characteristics such as usability, reliability,
security, performance, and so forth also influences the testing effort. These test targets can
be expensive and time consuming.
Complexity is another major product factor. Examples of complexity considerations include:
• The difficulty of comprehending and correctly handling the problem the system is being
built to solve (e.g., avionics and oil exploration software);
• The use of innovative technologies, especially those long on hyperbole and short on proven
track records.
• The need for intricate and perhaps multiple test configurations, especially when these rely
on the timely arrival of scarce software, hardware and other supplies;
• The prevalence of stringent security rules, strictly regimented processes or other regulations;

• The geographical distribution of the team, especially if the team crosses time-zones (as
many outsourcing efforts do).

What to Estimate?
Resources: Resources are required to carry out any project tasks. They can be people, equipment,
facilities, funding, or anything else capable of definition required for the completion of a
project activity.
Time: Time is the most valuable resource in a project. Every project has a deadline for delivery.
Human Skills: Human skills mean the knowledge and the experience of the team members.
They affect your estimation. For example, a team whose members have low testing skills
will take more time to finish the project than one whose members have high testing skills.
Cost: Cost is the project budget. Generally speaking, it means how much money it takes to finish
the project.

Test Estimation Techniques

• Work Breakdown Structure


• 3-Point Software Testing Estimation Technique
• Function Point/Testing Point Analysis

Following is the four-step process to arrive at an estimate:

Step 1) Divide the whole project task into subtasks

Task is a piece of work that has been given to someone. To do this, you can use the Work
Breakdown Structure technique.

In this technique, a complex project is divided into modules. The modules are divided into sub-
modules. Each sub-module is further divided into functionality. It means divide the whole project
task into the smallest tasks.

Use the Work Breakdown Structure to break the Bank project into five smaller tasks.

After that, you can break each task into sub-tasks. The purpose of this activity is to create tasks
that are as detailed as possible.

Tasks and sub-tasks:

• Analyze software requirement specification: investigate the software requirement specs;
interview the developer and other stakeholders to know more about the website.

• Create the Test Specification: design test scenarios; create test cases; review and revise test cases.

• Execute the test cases: build up the test environment; execute the test cases; review test execution
results.

• Report the defects: create the defect reports; report the defects.
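A minimal sketch of this breakdown represented as nested data in Python (task and sub-task names taken from the table above, purely for illustration):

# Work Breakdown Structure for the Bank testing project as nested data.
wbs = {
    "Analyze software requirement specification": [
        "Investigate the software requirement specs",
        "Interview the developer and other stakeholders about the website",
    ],
    "Create the Test Specification": [
        "Design test scenarios",
        "Create test cases",
        "Review and revise test cases",
    ],
    "Execute the test cases": [
        "Build up the test environment",
        "Execute the test cases",
        "Review test execution results",
    ],
    "Report the defects": [
        "Create the defect reports",
        "Report the defects",
    ],
}

# Flatten the hierarchy into the smallest schedulable tasks.
for task, subtasks in wbs.items():
    for subtask in subtasks:
        print(f"{task} -> {subtask}")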



Step 2) Allocate each task to a team member

In this step, each task is assigned to the appropriate member of the project team. You can assign
tasks as follows:

Tasks and assigned members:

• Analyze software requirement specification: all the members

• Create the test specification: Tester / Test Analyst

• Build up the test environment: Test Administrator

• Execute the test cases: Tester, Test Administrator

• Report defects: Tester

Step 3) Effort Estimation For Tasks

There are two techniques which you can apply to estimate the effort for tasks:

1. Functional Point Method


2. Three Point Estimation

Method 1) Function Point Method

In this method, the Test Manager estimates Size, Duration, and Cost for the tasks

Step A) Estimate size for the task

In Step 1, you have already broken the whole project task into small tasks using the WBS method.
Now you estimate the size of those tasks. Let's practice with a particular task, "Create the test
specification".

The size of this task depends on the functional size of the system under test. The functional size
reflects the amount of functionality that is relevant to the user. The more functionality there is, the
more complex the system is.

Before starting to estimate the task effort, the function points are classified into three groups:
Complex, Medium and Simple.

Step B) Estimate duration for the task

After classifying the complexity of the function points, you have to estimate the duration to test
them. Duration means how much time is needed to finish the task.

• Total Effort: the effort needed to completely test all the functions of the website
• Total Function Points: the total modules of the website
• Estimate defined per Function Point: the average effort to complete one function point. This
value depends on the productivity of the member who will take charge of this task.

Suppose your project team has an estimate defined per function point of 5 hours/point. You can
then estimate the total effort to test all the features of the Bank website as follows:

Weightage and function point counts:

• Complex: weightage 5, 3 function points, total 15
• Medium: weightage 3, 5 function points, total 15
• Simple: weightage 1, 4 function points, total 4

Function Total Points: 34
Estimate defined per point: 5 (hours)
Total Estimated Effort (Person Hours): 170

Step C) Estimate the cost for the task

This step helps you to answer the last question from the customer: "How much does it cost?"

Suppose, on average, your team's salary is $5 per hour. The time required for the "Create Test Specs"
task is 170 hours. Accordingly, the cost for the task is 5 * 170 = $850.
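The effort and cost arithmetic above can be reproduced with a short Python sketch (the weights, function point counts, hours per point and hourly rate are the example values used in this section):

# Function-point-based effort and cost estimate for the "Create the test specification" task.
weights = {"Complex": 5, "Medium": 3, "Simple": 1}   # weightage per complexity group
counts  = {"Complex": 3, "Medium": 5, "Simple": 4}   # number of function points per group

total_points = sum(weights[g] * counts[g] for g in weights)   # 15 + 15 + 4 = 34
hours_per_point = 5                                           # estimate defined per point
total_effort_hours = total_points * hours_per_point           # 34 * 5 = 170 person-hours

hourly_rate = 5                                               # average team salary in $ per hour
cost = total_effort_hours * hourly_rate                       # 170 * 5 = 850 dollars

print(total_points, total_effort_hours, cost)                 # 34 170 850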

Method 2) Three-Point Estimation

Three-point estimation is one of the techniques that can be used to estimate a task. The simplicity
of the three-point estimation makes it a very useful tool for a Project Manager who wants to
estimate.

In three-point estimation, three values are produced initially for every task based on prior
experience or best-guesses as follows.

When estimating a task, the Test Manager needs to provide three values, as specified above. The
three values identified estimate what happens in an optimistic (best) case, what is most likely, and
what we think the worst-case scenario would be.
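The weighting used to combine the three values is not spelled out here; a commonly used choice is the PERT weighted average, sketched below in Python with assumed example values:

# Three-point (PERT) estimate for a single task, in person-days (example values are assumed).
optimistic, most_likely, pessimistic = 6, 9, 18

estimate = (optimistic + 4 * most_likely + pessimistic) / 6   # weighted average estimate
std_dev = (pessimistic - optimistic) / 6                      # rough measure of uncertainty

print(f"Estimate: {estimate:.1f} person-days (+/- {std_dev:.1f})")   # Estimate: 10.0 person-days (+/- 2.0)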

Step 4) Validate the estimation

Once you create an aggregate estimate for all the tasks mentioned in the WBS, you need to forward
it to the management board, who will review and approve it. The management board may comprise
the CEO, the Project Manager and other stakeholders. The board will review and discuss your
estimation plan with you; you should explain your estimation logically and reasonably so that they
can approve it.

CONFIGURATION MANAGEMENT:

Configuration Management helps organizations systematically manage, organize, and control the
changes in the documents, code, and other entities during the Software Development Life Cycle. It
is abbreviated as SCM (Software Configuration Management). It aims to control the cost and work
effort involved in making changes to the software system. The primary goal is to increase
productivity with minimal mistakes.

The primary reasons for Implementing Software Configuration Management System are:

• There are multiple people working on software which is continually being updated



• It may be the case that multiple versions, branches and authors are involved in a software
project, and the team is geographically distributed and works concurrently
• Changes in user requirements, policy, budget and schedule need to be accommodated
• Software should be able to run on various machines and operating systems
• Helps to develop coordination among stakeholders
• SCM process is also beneficial to control the costs involved in making changes to a system

Any change in the software configuration items will affect the final product. Therefore, changes to
configuration items need to be controlled and managed.

Configuration management is a topic that often perplexes new practitioners, but, if you ever have
the bad luck to work as a tester on a project where this critical activity is handled poorly, you'll
never forget how important it is. Briefly put, configuration management is in part about determining
clearly what the items are that make up the software or system. These items include source code,
test scripts, third-party software, hardware, data and both development and test documentation.
Configuration management is also about making sure that these items are managed carefully,
thoroughly and attentively throughout the entire project and product life cycle.
Configuration management has a number of important implications for testing. For one thing, it
allows the testers to manage their Testware and test results using the same configuration
management mechanisms, as if they were as valuable as the source code and documentation for the
system itself - which of course they are.
Configuration management supports the build process, which is essential for delivery of a test
release into the test environment. Simply sending Zip archives by e-mail will not suffice, because
there are too many opportunities for such archives to become polluted with undesirable contents or
to harbor left-over previous versions of items. Especially in later phases of testing, it is critical to
have a solid, reliable way of delivering test items that work and are the proper version.
Configuration management allows us to map what is being tested to the underlying files and
components that make it up. This is absolutely critical. For example, when we report defects, we
need to report them against something, something which is version controlled. If it's not clear what
we found the defect in, the programmers will have a very tough time finding the defect in order
to fix it. For the kind of test reports discussed earlier to have any meaning, we must be able to trace
the test results back to what exactly we tested.

Tasks in Software Configuration Management

• Configuration Identification

• Baselines

• Change Control

• Configuration Status Accounting

• Configuration Audits and Reviews

Configuration Identification:

Configuration identification is a method of determining the scope of the software system: you
cannot manage or control an item if you do not know what it is. Each identified configuration item
is described by the CSCI type (Computer Software Configuration Item), a project identifier and
version information.

Activities during this process:

• Identification of configuration Items like source code modules, test case, and requirements
specification.
• Identification of each CSCI in the SCM repository, by using an object-oriented approach
• The process starts with basic objects which are grouped into aggregate objects
• Details of what, why, when and by whom changes in the test are made
• Every object has its own features that identify it, with a name that distinguishes it from all other objects
• List of resources required such as the document, the file, tools, etc.

Baseline:

A baseline is a formally accepted version of a software configuration item. It is designated and


fixed at a specific time while conducting the SCM process. It can only be changed through formal
change control procedures.

Activities during this process:

• Facilitate construction of various versions of an application


• Defining and determining mechanisms for managing various versions of these work
products
• The functional baseline corresponds to the reviewed system requirements
• Widely used baselines include functional, developmental, and product baselines

In simple words, baseline means ready for release.

Change Control:

Change control is a procedural method which ensures quality and consistency when changes are
made to a configuration object. In this step, a change request is submitted to the software
configuration manager.

Activities during this process:

• Control ad-hoc changes to build a stable software development environment. Changes are
committed to the repository
• The request is checked based on its technical merit, possible side effects and overall
impact on other configuration objects
• It manages changes and makes configuration items available during the software lifecycle

Configuration Status Accounting:

Configuration status accounting tracks each release during the SCM process. This stage involves
tracking what each version contains and the changes that led to that version.

Activities during this process:

• Keeps a record of all the changes made to the previous baseline to reach a new baseline
• Identify all items to define the software configuration
• Monitor status of change requests
• Complete listing of all changes since the last baseline
• Allows tracking of progress to next baseline
• Allows to check previous releases/versions to be extracted for testing

Configuration Audits and Reviews:

Software configuration audits verify that the software product satisfies the baseline requirements.
They ensure that what is built is what is delivered.

Activities during this process:

• Configuration auditing is conducted by auditors by checking that defined processes are


being followed and ensuring that the SCM goals are satisfied.
• Verify compliance with configuration control standards, auditing and reporting the
changes made
• SCM audits also ensure that traceability is maintained during the process.

• Ensures that changes made to a baseline comply with the configuration status reports
• Validation of completeness and consistency

TEST PROGRESS MONITORING AND CONTROL:

Project progress Management:

Project management is the practice of initiating, planning, executing, controlling, and closing the
work of a team to achieve specific goals and meet specific success criteria at the specified time. A
project is a temporary endeavor designed to produce a unique product, service or result with a
defined beginning and end (usually time-constrained, and often constrained by funding or staffing)
undertaken to meet unique goals and objectives, typically to bring about beneficial change or added
value.
The primary challenge of project management is to achieve all of the project goals within the given
constraints. This information is usually described in project documentation, created at the beginning
of the development process. The primary constraints are scope, time, quality and budget. The
secondary — and more ambitious — challenge is to optimize the allocation of necessary inputs and
apply them to meet pre-defined objectives. The object of project management is to produce a
complete project which complies with the client's objectives. In many cases the object of project
management is also to shape or reform the client's brief in order to feasibly be able to address the
client's objectives.

Monitoring the progress of test activities:

Having developed our plans, defined our test strategies and approaches and estimated the work to
be done, we must now track our testing work as we carry it out. Test monitoring can serve various
purposes during the project, including the following:
• Give the test team and the test manager feedback on how the testing work is going, allowing
opportunities to guide and improve the testing and the project.
• Provide the project team with visibility about the test results.
• Measure the status of the testing, test coverage and test items against the exit criteria to determine
whether the test work is done.
• Gather data for use in estimating future test efforts.
One way to gather test progress information is to use the IEEE 829 test log template. While much of
the information related to logging events can be usefully captured in a document, we prefer to
capture the test-by-test information in spreadsheets.

Test monitoring and test control are basically management activities. Test monitoring is a
process of evaluating and providing feedback on the "currently in progress" testing phase, and test
control is an activity of guiding and taking corrective action based on some metrics or information
to improve efficiency and quality.

Test monitoring activity includes:

1. Providing feedback to the team and the other required stakeholders about the progress of
the testing efforts.
2. Broadcasting the results of testing to associated members.
3. Finding and tracking the test metrics.
4. Planning and estimation and deciding the future course of action based on the metrics
calculated.

Points 1 and 2 basically talk about test reporting, which is an important part of test monitoring.
Reports should be precise and to the point and should avoid "long stories". It is important here
to understand that the content of reporting differs for every stakeholder.

Points 3 and 4 talk about the metrics. Following are the metrics that can be used for test
monitoring:

1. Test coverage metric


2. Test execution metrics (Number of test cases pass, fail, blocked, on hold)
3. Defect metrics
4. Requirement traceability metrics
5. Miscellaneous metrics like the level of confidence of testers, dates, milestones, cost, schedule
and turnaround time.
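A minimal sketch in Python (with assumed counts) of how execution and defect metrics like these can be computed:

# Test execution and defect metrics from assumed counts.
executed = {"pass": 120, "fail": 15, "blocked": 5, "on_hold": 10}
total_planned = 200
defects_found, defects_fixed = 40, 28

run_rate = sum(executed.values()) / total_planned * 100       # % of planned cases executed
pass_rate = executed["pass"] / sum(executed.values()) * 100   # % of executed cases that passed
defect_fix_rate = defects_fixed / defects_found * 100         # % of reported defects fixed

print(f"Run rate: {run_rate:.1f}%  Pass rate: {pass_rate:.1f}%  "
      f"Defect fix rate: {defect_fix_rate:.1f}%")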

Test control is basically a guiding and taking corrective measures activity, based on the
results of test monitoring. Examples include:

1. Prioritizing the testing efforts


2. Revisiting the test schedules and dates
3. Reorganizing the test environment
4. Reprioritizing the test cases / conditions

Test monitoring and control go hand in hand. Although this is primarily a manager's activity, a Test
Analyst contributes to it by gathering and calculating the metrics which will eventually be used for
monitoring and control.

After finishing the test estimation and test planning, the management board agreed with your plan
and the milestones were set.

You promised to finish and deliver all test artifacts of the Bank Testing project as per above
milestones. Everything seems to be great, and your team is hard at work.

But after 4 weeks, things are not going as per plan. The task of “Making Test specification” is
delayed by 4 working days. It has a cascading effect and all succeeding tasks are delayed.

You missed the milestone as well as the overall project deadline.

As a consequence, your project fails and your company loses the customer's trust. You must take full
responsibility for the project’s failure.

No matter how carefully we plan, something can still go wrong. We need to actively monitor
the project to:

• Detect deviations and changes to plans early and react appropriately

• Communicate to stakeholders, sponsors, and team members exactly where the project
stands and determine how closely your initial plan of action resembles reality

• Help the manager know whether the project is on the right track with respect to the
project goals, and make the necessary adjustments regarding resources or budget

Project monitoring helps you avoid disasters. Monitoring can be compared to checking the gas
gauge in your car as you drive: just as the gauge shows how much gas is left in the tank, monitoring
your project helps you avoid running out of gas before you reach your goal.

What do we monitor?

Monitoring will allow you to make comparisons between your original plan and your progress so
far. You will be able to implement changes, where necessary, to complete the project successfully.

In your project, as the Test Manager, you should monitor the key parameters below.

Cost
Costs are an important aspect of project monitoring and control. You have to estimate and track
basic cost information for your project. Having accurate project estimates and a robust project
budget is necessary to deliver project within the decided budget. Suppose, your boss has agreed to
fund the project with $100,000. You must keep an eye on the actual costs while the project is being
implemented. As mentioned in the Test Estimation section, there are a ton of project activities
which need money. You have to monitor and manage the project budget in order to control all those
activities. Without monitoring the project cost, the project will most likely never be delivered on
budget.

Schedules

How can you work without a schedule? It can be compared to driving your car without any idea
of how long it will take you to get to the destination. No matter how big or small the size and scope
of your project, you must prepare a project schedule. The schedule tells you:

• When should each activity be done?


• What has already been completed?
• The sequence in which things need to be finished.

Here is an example of a project schedule.

You assigned a team member to a task: executing the integration test cases of the Bank website.

This task should be finished in one week. You can create a schedule as given below:
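An illustrative one-week schedule for this task (the day-by-day breakdown below is assumed, not taken from the original plan) might look like:

Day 1: Prepare the integration test environment and test data
Days 2-4: Execute the integration test cases for the Bank website
Day 5: Log defects and retest the fixed cases
Day 6: Complete the remaining cases and regression checks
Day 7: Review the results and report the execution status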

Resources

As mentioned earlier, resources are all the things required to carry out the project tasks.
They can be people or equipment required to complete the project activities. A lack of resources
can affect the project progress.

The truth is, everything may not happen as planned: employees will leave, the project budget may
be cut, or the schedule will get pushed. Monitoring resources will help you to detect any resource
crunch early and find a solution to deal with it.

Quality

Quality monitoring involves monitoring the results of specific work products (like the test case
suite or test execution log) to evaluate whether they meet the defined quality standards. In case
results do not meet quality standards, you need to identify a potential resolution.

Example: Suppose that you monitored and controlled the project progress very well. Finally, you
delivered the product by the deadline. The project seems to be successful.

But two weeks after delivery, you receive negative feedback from the customer about the quality of
the product.

How to monitor?

As your project comes to life, keep these questions in mind:

• Are you on schedule? If not, how far behind are you, and how can you catch up?
• Are you over budget?

• Are you still working toward the same project goal?


• Are you running low on resources?
• Are there warning signs of impending problems?
• Is there pressure from management to complete the project sooner?

These are just a few of the questions you should ask yourself as you monitor the progress of your
project.

It is important to monitor the progress of the project so you will know if adjustments need to be
made to get it moving back in the right direction. To monitor project progress effectively, you
should follow these steps.

Step 1) Create Monitoring Plan

You cannot monitor progress unless you have a plan to monitor progress with DEFINED metrics.
Similar to the Test Plan, the Monitoring Plan is the first and one of the most important steps in
progress monitoring.

In the Monitoring Plan, you must plan carefully about

• What metrics you need to collect and measure?


• When to collect the metrics?
• How to evaluate the project’s progress via metrics?

What metrics to collect and measure?

In the monitoring plan, you should clearly define which metrics you need to collect and measure.
As mentioned in the previous section, the metrics you need to collect include:

• The cost (time, money) spent for the project so far


• How many resources (employees, equipment) are used for the project
• The status of the task (on schedule, behind or before the schedule)
• The quality of the work product (Run rate/pass rate, defect metrics)

When to collect the data?

Now decide when or how often you are going to collect the data for monitoring in the monitoring
plan – weekly or monthly? Or just at the start and end of the project?

As per the plan, the Bank project will be completed in one month. In that case, we recommend you
monitor the project progress on a daily or weekly basis.

How to evaluate the project’s progress via metrics?

In the monitoring plan, you should define the methods to evaluate the project's progress from the
collected metrics. Some methods you can refer to are:

• Compare the progress in the plan with the actual progress that the team has made
• Define the criteria which are used to evaluate the project's progress. For example, if the
effort to complete a task exceeds the planned effort by more than 30%, treat the project as delayed.

Step 2) Update progress record

With time, your team members will be making progress on their project tasks. You must track their
activity as per the schedule and ask them to frequently update progress information such as time
spent and task status. By checking these records, you can immediately see the impact on the project
plan.

One of the best methods to track member progress is to hold regular meetings.

In the meeting, all members report their current status and issues if any. If a team member or
members have fallen behind or have run into obstacles, formulate a plan for identifying and solving
the problem.

Let's practice with the following scenario:

As defined in the monitoring plan, you assigned the task "Setting up Test Environment" for testing
the banking website to a member of your team. His role is Test Administrator. He has to set up the
test environment in 6 days, and you required him to report his current status in every team meeting.
Here is an example of his record of current progress.

Step 3) Analyze record and make the adjustment

There are two sub-steps in this step:

Step 3.1) Analyze

In this step, you compare the planned progress with the actual progress that the team has made. By
analyzing the records, you can also see how much time has been spent on individual tasks and the
total time spent on the project overall.

By tracking and analyzing the project progress, you can detect early any issue which may affect the
project and work out a solution to resolve it.

Step 3.2) Adjustment

Make the necessary adjustments to keep your project on track: reassign tasks, modify schedules, or
reassess your goals. This will help you keep moving toward the finish line.

Step 4) Produce the report

If your boss asks you about the current project progress, and whether the progress is behind or
ahead of schedule, what will you answer? You need to prepare a progress report for the project.
Using a report is a good way to share the overall project progress with team members or the
management board, and it is also a useful way to show your boss whether the project is on track.
You can use report templates to ensure the progress data is presented consistently and clearly, and a
sample report for the Banking project can serve as a reference.

Best Practices in Test Monitoring and Control

• Follow the standards: One important consideration of project planning is to ensure
standardization. It means that all the project activities must follow the standard process
guidelines. Standardized processes, tools, templates, and measurement values make analysis
easy, facilitate communication, and help the project team members understand the situation
better.

• Documentation: What will happen if you do not write down any discussion or decision in a
document? You may forget them and lose many things. You should write down discussions
and decisions in the appropriate place and establish a formal documentation procedure for
meetings. Such documentation helps you resolve issues of miscommunication or
misunderstanding among the project team.

• Proactivity: Issues occur in all projects. The important thing is to adopt a proactive approach
to solving the issues and problems that arise during project execution. Such issues could relate
to budget, scope, time, quality, or human resources.

USE OF TESTOPIA FOR TEST CASE DOCUMENTATION AND TEST MANAGEMENT



Testopia is a test case management extension to the Bugzilla bug tracking system. It is an
open source project licensed under the Mozilla Public License. Testopia allows developers and
testers to work in a single environment to produce better software.
Test case management is the process of tracking test outcomes on a set of test cases for a
given set of environments and development iterations. To do this effectively, an organizational
structure needs to be set in place to track test cases and their outcomes in a given test scenario.
Testopia was developed to provide a central repository for the collaborative efforts of distributed
testers. It serves as both a test case repository and management system. Testopia is designed to meet
the needs of software testers from all sizes of groups and organizations. Though Testopia was
designed primarily for software testing, it can be used to track any type of test cases. Also, being
open source, Testopia can be easily adapted to fit just about any testing model.
Reference link for more details:
http://mozilla.inkedblade.net/source/mozilla/webtools/testopia/testopia/doc/Manual.pdf

DEFECT MANAGEMENT
Goals:
• Prevent the Defect
• Early Detection
• Minimize the impact
• Resolution of the Defect
• Process improvement
When the testing team executes the test cases, they may come across a situation where the actual
test result is different from the expected result. This variation is termed a Defect.

Basically, a software defect is a condition which does not meet the software requirements. A defect
is an error or a flaw which produces unexpected or incorrect behavior in the system.

In order to handle projects appropriately, you need to know how to deal with development and
release, but along with that you also need to know how to handle defects.

LOGGING DEFECTS

What is a Defect?

A defect is a variation or deviation from the original business requirements.



The terms "defect" and "bug" have a very thin line of difference between them; in the industry, both
are faults that need to be fixed, and some testing teams use them interchangeably.

When a tester executes the test cases, he might come across a test result which contradicts the
expected result. This variation in the test result is referred to as a Software Defect. These defects or
variations are referred to by different names in different organizations, such as issues, problems,
bugs, or incidents.

Defect logging is the process of finding defects in the application under test or product, by testing or
by recording feedback from customers, and making new versions of the product that fix the defects
or address the clients' feedback.

Defect tracking is an important process in software engineering, as complex and business-critical
systems can have hundreds of defects. One of the challenging factors is managing, evaluating, and
prioritizing these defects. The number of defects multiplies over a period of time, and to manage
them effectively, a defect tracking system is used to make the job easier.
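
To make the idea of a logged defect concrete, here is a minimal sketch of the kind of fields a defect record usually carries; the field set and values are illustrative and will differ between organizations and tracking tools.

```python
# Minimal sketch: information typically captured when logging a defect.
# The exact fields vary between organizations and tracking tools.
from dataclasses import dataclass, field

@dataclass
class Defect:
    defect_id: str
    summary: str
    steps_to_reproduce: str
    expected_result: str
    actual_result: str
    severity: str = "Medium"   # impact on the system
    priority: str = "Medium"   # urgency of the fix
    status: str = "New"        # first state of the defect life cycle
    attachments: list = field(default_factory=list)

bug = Defect(
    defect_id="BUG-101",       # hypothetical ID
    summary="Funds transfer fails for amounts above 10,000",
    steps_to_reproduce="Login > Transfer > enter 10,001 > Submit",
    expected_result="Transfer completes and a confirmation is shown",
    actual_result="Error 500 page is displayed",
    severity="High",
)
print(bug.status)  # New
```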

DEFECT LIFE CYCLE FIXING / CLOSING DEFECTS

What is Defect Life Cycle?

The defect life cycle, also known as the bug life cycle, is the journey a defect goes through during
its lifetime. It varies from organization to organization and also from project to project, as it is
governed by the software testing process and also depends upon the tools used.

The defect life cycle covers the different states a defect passes through in its entire life. It starts as
soon as a new defect is found by a tester and comes to an end when the tester closes that defect,
assuring that it will not be reproduced again.

Defect Workflow:

It is now time to understand the actual workflow of a defect life cycle, which moves through the
states described below.

Defect States:

1) New: This is the first state of a defect in the defect life cycle. When any new defect is found, it
falls in a ‘New’ state and validations and testing are performed on this defect in the later stages of
the defect life cycle.

2) Assigned: In this stage, a newly created defect is assigned to the development team for working
on the defect. This is assigned by the project lead or the manager of the testing team to a developer.

3) Open: Here, the developer starts analyzing the defect and works on fixing it, if required. If the
developer feels that the defect is not appropriate, it may be moved to one of the following four
states, namely Duplicate, Deferred, Rejected, or Not a Bug, based upon the specific reason.

• Rejected: If the defect is not considered as a genuine defect by the developer then it is
marked as ‘Rejected’ by the developer.
• Duplicate: If the developer finds the defect as same as any other defect or if the concept of
the defect matches with any other defect then the status of the defect is changed to
‘Duplicate’ by the developer.
• Deferred: If the developer feels that the defect is not of high priority and can be fixed in a
later release, he can change the status of the defect to ‘Deferred’.
• Not a Bug: If the defect does not have an impact on the functionality of the application then
the status of the defect gets changed to ‘Not a Bug’.

4) Fixed: When the developer finishes the task of fixing a defect by making the required changes
then he can mark the status of the defect as ‘Fixed’.

5) Pending Retest: After fixing the defect, the developer assigns it to the tester for retesting at the
tester's end; until the tester works on retesting it, the state of the defect remains ‘Pending Retest’.

6) Retest: At this point, the tester starts the task of working on the retesting of the defect to verify if
the defect is fixed accurately by the developer as per the requirements or not.

7) Reopen: If any issue still persists in the defect then it will be assigned to the developer again for
testing and the status of the defect gets changed to ‘Reopen’.

8) Verified: If the tester finds no remaining issue after the defect has been fixed and retested, and he
feels the defect has been fixed accurately, then the status of the defect is changed to ‘Verified’.

9) Closed: When the defect does not exist any longer then the tester changes the status of the defect
to ‘Closed’.
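
The states above behave like a small state machine. The sketch below encodes one possible set of transitions in Python; the transitions allowed in practice vary by organization and by defect-tracking tool, so treat this as an illustration rather than a fixed rule.

```python
# Minimal sketch: the defect states above as a simple state machine.
# The allowed transitions are illustrative; real workflows differ by tool.

TRANSITIONS = {
    "New":            {"Assigned"},
    "Assigned":       {"Open"},
    "Open":           {"Fixed", "Rejected", "Duplicate", "Deferred", "Not a Bug"},
    "Fixed":          {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest":         {"Verified", "Reopen"},
    "Reopen":         {"Assigned"},   # goes back to the developer
    "Verified":       {"Closed"},
    "Rejected": set(), "Duplicate": set(), "Deferred": set(),
    "Not a Bug": set(), "Closed": set(),
}

def move(current: str, new_state: str) -> str:
    """Return the new state if the transition is allowed, else raise an error."""
    if new_state not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {new_state}")
    return new_state

state = "New"
for step in ("Assigned", "Open", "Fixed", "Pending Retest",
             "Retest", "Verified", "Closed"):
    state = move(state, step)
print("Final state:", state)  # Closed
```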

TEST EXECUTION:

Test execution is, without doubt, the most important and ‘happening’ phase in the STLC and in the
entire development lifecycle.

Test execution is the process of executing the code and comparing the expected and actual results.
Test execution starts when the entry criteria have been satisfied. The test manager has to ensure that
test execution starts only when the entry criteria have been satisfied, in order to avoid unnecessary
defects and delays in testing.

When test execution begins, the test analysts start executing the test scripts based on the test strategy
adopted in the project.

Some amount of exploratory testing is also done in the project, and the test manager has to ensure
that he has accounted for it and for the way the necessary information from exploration will be
captured.

During test execution, the test manager should monitor progress as per the test plan and, if required,
take control actions in terms of objectives and strategy. The Requirements Traceability Matrix
(RTM) is used to determine progress and coverage (see the sketch below).
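
As an illustration of how an RTM supports progress and coverage tracking, the sketch below maps requirements to the test cases that cover them, together with their latest run status; all requirement IDs, test case IDs, and statuses are invented for the example.

```python
# Minimal sketch: an RTM as a mapping from requirements to covering test
# cases and their latest execution status. All IDs/statuses are hypothetical.

rtm = {
    "REQ-001": {"TC-01": "Passed", "TC-02": "Failed"},
    "REQ-002": {"TC-03": "Passed"},
    "REQ-003": {},   # no test case yet -> coverage gap
}

for req, cases in rtm.items():
    if not cases:
        print(f"{req}: NOT COVERED")
        continue
    executed = [s for s in cases.values() if s in ("Passed", "Failed")]
    passed = [s for s in cases.values() if s == "Passed"]
    print(f"{req}: {len(cases)} test case(s), "
          f"{len(executed)} executed, {len(passed)} passed")
```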

The test manager also has to ensure that there are not too many false positives or false negatives (a
false positive occurs when a test analyst marks a test as failed although the behavior was correct,
and a false negative occurs when a test analyst marks a test as passed although the behavior was
incorrect). He has to control both situations and try to keep them at an acceptable level.

The test manager also has to ensure that the test analysts are logging results with the necessary data
as test execution goes on.

The test manager needs to ensure that appropriate configuration management is in place, so that the
test team is clear about which version to log the test results against.

The following factors are to be considered for a test execution process:

• Based on risk, select a subset of the test suite to be executed for this cycle.

• Assign the test cases in each test suite to testers for execution.

• Execute tests, report bugs, and capture test status continuously.

• Resolve blocking issues as they arise.

• Report status, adjust assignments, and reconsider plans and priorities daily.

• Report test cycle findings and status.



Test Execution Guidelines:

1. The build being deployed (in other words, installed and made available) to the QA environment
is one of the most important things that needs to happen for test execution to start. (The code
written by the dev team is packaged into what is referred to as a build; this is nothing but an
installable piece of software, the AUT, ready to be deployed to the QA environment.)

2. Test execution happens in the QA environment. To make sure that the dev team's work on the
code is not in the same place where the QA team is testing, the general practice is to have
dedicated Dev and QA environments. (There is also a production environment to host the live
application.) This is basically to preserve the integrity of the application at the various stages of
the SDLC. Otherwise, ideally, all three environments are identical in nature.

3. The test team size is not constant from the beginning of the project. When the test plan is
initiated, the team might have just a team lead. During the test design phase, a few testers come
on board. Test execution is the phase when the team is at its maximum size.

4. Test execution also happens in at least 2 cycles (3 in some projects). Typically, in each cycle all
the test cases (the entire test suite) will be executed. The objective of the first cycle is to identify
any blocking and critical defects, and most of the high-severity defects. The objective of the
second cycle is to identify the remaining high- and medium-severity defects, correct gaps in the
scripts, and obtain results.

5. The test execution phase consists of executing the test scripts, test script maintenance (correcting
gaps in the scripts), and reporting (defects, status, metrics, etc.). Therefore, when planning this
phase, schedules and effort should be estimated taking all these aspects into consideration, not
just script execution.

6. After the test scripts are done and the AUT is deployed, and before the test execution begins,
there is an intermediary step called the "Test Readiness Review (TRR)". This is a transitional
step that ends the test design phase and eases us into test execution.

7. In addition to the TRR, there are a few more checks before we can go ahead and accept the
current build deployed in the QA environment for test execution.

8. On the successful completion of TRR, smoke and sanity tests, the test cycle officially begins.

9. Exploratory Testing would be carried out once the build is ready for testing. The purpose of this
test is to make sure critical defects are removed before the next levels of testing can start. This
exploratory testing is carried out in the application without any test scripts and documentation.
It also helps in getting familiar with the AUT.

10. Just like the other phases of the STLC, work is divided among team members in the test
execution phase as well. The division might be module-wise, test-case-count-wise, or based on
anything else that makes sense.

11. The primary outcome of the test execution phase is in the form of reports, primarily the defect
report and the test execution status report. The detailed reporting process is described in the test
execution reports.

USE OF BUGZILLA FOR LOGGING AND TRACING DEFECTS:

Bugzilla is an open-source issue/bug tracking system that allows developers to keep track of
outstanding problems with their product. It is written in Perl and uses a MySQL database. Bugzilla
is a defect tracking tool; however, it can also be used as a test management tool, since it can easily
be linked with test case management tools like Quality Center, TestLink, etc.

Bugzilla is a robust, feature-rich, and mature defect-tracking (bug-tracking) system. Defect-tracking
systems allow teams of developers to keep track of outstanding bugs, problems, issues,
enhancements, and other change requests in their products effectively. Simple defect-tracking
capabilities are often built into integrated source code management environments such as GitHub or
other web-based or locally installed equivalents. Organizations turn to Bugzilla when they outgrow
the capabilities of those systems, for example because they want workflow management, bug
visibility control (security), or custom fields.

Key features of Bugzilla include:

• Advanced search capabilities
• E-mail notifications
• Modify/file bugs by e-mail
• Time tracking
• Strong security
• Customization
• Localization

How to log in to Bugzilla

Step 1) Use the following link for your hands-on practice. To create an account in the Bugzilla tool,
or to log into an existing account, go to the New Account or Log In option in the main menu.

Step 2) Now, enter your personal details to log into Bugzilla


1. User ID
2. Password
3. And then click on "Log in"

Step 3) You are successfully logged into Bugzilla system


50

Creating a Bug-report in Bugzilla


Step 1) To create a new bug in Bugzilla, visit the home-page of Bugzilla and click on NEW tab
from the main menu

Step 2) In the next window


1. Enter Product
2. Enter Component
3. Give Component description
4. Select version,
5. Select severity
6. Select Hardware
7. Select OS
8. Enter Summary
9. Enter Description
10. Attach Attachment
11. Submit
NOTE: The above fields will vary as per your customization of Bugzilla

NOTE: The mandatory fields are marked with *.

In our case, the fields Summary and Description are mandatory.

Step 3) The bug is created and ID #26320 is assigned to it. You can also add additional information
to the bug, such as URL, keywords, whiteboard, tags, etc. This extra information helps give more
detail about the bug you have created:
1. Large text box
2. URL
3. Whiteboard
4. Keywords
5. Tags
6. Depends on
7. Blocks
8. Attachments

Step 4) In the same window, if you scroll down further, you can select a deadline date and also the
status of the bug. The deadline in Bugzilla gives the time limit for resolving the bug within a given
time frame.
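
Besides the web form, Bugzilla installations from version 5.0 onward also expose a REST API, so a bug can be filed programmatically. The sketch below uses Python's requests library; the base URL, API key, and field values are placeholders, and (as noted above) which fields are required depends on how your Bugzilla instance is customized.

```python
# Hedged sketch: filing a bug through Bugzilla's REST API (Bugzilla 5.0+).
# URL, API key, and field values are placeholders; required fields depend on
# the customization of your Bugzilla instance.
import requests

BUGZILLA_URL = "https://bugzilla.example.com"  # placeholder instance
API_KEY = "your-api-key-here"                  # from Preferences > API Keys

bug = {
    "product": "TestProduct",
    "component": "TestComponent",
    "version": "unspecified",
    "summary": "Login page throws error 500 on valid credentials",
    "description": "Steps: 1. Open login page 2. Enter valid user 3. Submit",
    "op_sys": "Windows",
    "platform": "PC",
    "severity": "normal",
}

response = requests.post(
    f"{BUGZILLA_URL}/rest/bug",
    params={"api_key": API_KEY},  # API-key authentication
    json=bug,
    timeout=30,
)
response.raise_for_status()
print("Created bug id:", response.json()["id"])
```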

TEST DATA MANAGEMENT:

What is Test Data? Why is it Important?

Test data is actually the input given to a software program. It represents data that affects or is
affected by the execution of the specific module. Some data may be used for positive testing,
typically to verify that a given set of input to a given function produces an expected result. Other
data may be used for negative testing to test the ability of the program to handle unusual, extreme,
exceptional, or unexpected input.

What is Test Data Generation? Why should test data be created before test execution?

Depending on your testing environment, you may need to CREATE test data (most of the time) or at
least identify suitable test data for your test cases (if the test data is already created).

Typically test data is created in-sync with the test case it is intended to be used for.

Test Data can be Generated -

• Manually

• Mass copy of data from production to testing environment

• Mass copy of test data from legacy client systems

• Automated Test Data Generation Tools

Typically, sample data should be generated before you begin test execution, because it is difficult to
handle test data management otherwise: in many testing environments, creating test data requires
many pre-steps or test environment configurations, which is very time-consuming.

Also, if test data generation is done while you are in the test execution phase, you may exceed your
testing deadline.

Test Data Management

Test data is of two types: static data and transactional data. Static data comprises names, currencies,
countries, etc., which are not sensitive in nature. But when it comes to transactional data there is
always a risk of the data getting stolen, as it involves data such as credit/debit card numbers,
information pertaining to bank accounts, or medical history; hence transactional data is very
sensitive in nature. Test data management is the method by which we satisfy the test data
requirements of test teams by providing high-quality data in the right quantity and format. It helps
ensure that the data picked up for testing covers everything from quality to appropriate sizing, and
its provisioning can be done by synthetic data creation or production extraction. Clearly defined
processes and manual methods are very significant in implementing test data management.

Why Test Data Management:

Test data management is very critical during the test life cycle. The amount of data generated for
testing the application is enormous; minimizing the time spent on processing the data and creating
reports greatly contributes to the efficiency of the entire product.
Efficient management of data used for testing is essential to maximizing return on investment and
supplementing the testing efforts for the highest levels of success and coverage. If the data used in
testing does not promote ease of use and adaptation, poorly represents the sampled source, or
consumes excessive resources for preparation and maintenance, a negative impact on the desired
outcome quickly manifests and continues to degrade the quality of results. To balance in favor of
positive results and improved returns, consider the process, potential challenges, and possible
solutions involved in TDM.

TEST DATA TYPES:

Test data can be grouped according to different parameters. Based on their importance, the
following types can be distinguished:

• Test-specific data: influences the system behavior and reveals the specifics of the case under
test
• Test-reference data: has little influence on the test performance
• Application reference data: irrelevant to the behavior under test, but needed to start the
application

Test data commonly include the following types



• Valid test data: It is necessary to verify whether the system functions are in compliance
with the requirements, and the system processes and stores the data as intended.
• Invalid test data: QA engineers should inspect whether the software correctly processes
invalid values, shows the relevant messages, and notifies the user that the data are improper.
• Boundary test data: Help to reveal the defects connected with processing boundary values.
• Wrong data: Testers have to check how the system reacts to entering data of an inappropriate
format, and whether it shows the correct error messages.
• Absent data: It is a good practice to verify how the product handles entering a blank field in
the course of software testing.

Ways to create the test data:


• Manually

• Import from the production environment


• Duplication from prior customer systems
• Using test automation tools

To effectively manage test data, here is the framework to be used.

Test data is the documented data that is used to test the software program. Test data is divided into
two categories. The first is positive test data, which is given to the system to generate the expected
result; the other is negative test data, which is used to test unhandled conditions and unexpected,
exceptional, or extreme input. If the test data is inadequately designed, the test inputs do not cover
all possible test scenarios, which impacts the quality of the software application under test.

Test data can be documented in any manner: Excel sheet, Word document, text file, and many more.
The data stored in an Excel sheet can be entered by hand while running test cases, or can be read
automatically from files (XML, flat files, databases, etc.) using automation tools. Using test data,
you can verify the expected result and the software's behavior for invalid input data. It is also used
to challenge the ability of the application to respond to unusual, extreme, exceptional, or unexpected
input (see the sketch below for a simple, spreadsheet-friendly example).
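
As a hedged example of documenting test data in a spreadsheet-friendly form, the sketch below writes a small set of valid, invalid, boundary, wrong-format, and absent values for a hypothetical "age" field (accepted range assumed to be 18-60) into a CSV file that MS-Excel can open; the field, range, and file name are assumptions made for the example.

```python
# Minimal sketch: generating a small CSV of test data that can be opened in
# MS-Excel. The "age" field and its 18-60 accepted range are assumed only
# for this example.
import csv

rows = [
    # (test data type, input value, expected behaviour)
    ("valid",    35,    "accepted"),
    ("boundary", 18,    "accepted (lower boundary)"),
    ("boundary", 60,    "accepted (upper boundary)"),
    ("invalid",  17,    "rejected with error message"),
    ("invalid",  61,    "rejected with error message"),
    ("wrong",    "abc", "rejected - wrong format"),
    ("absent",   "",    "rejected - blank field message"),
]

with open("age_field_test_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["data_type", "age_input", "expected_result"])
    writer.writerows(rows)

print("Wrote", len(rows), "test data rows to age_field_test_data.csv")
```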

In the case of domain testing, test data is created in a systematic way; in other cases, such as high-
volume randomized automation testing, it is not as systematic. Most of the time, test data is provided
by the tester, or by a program or function that helps the tester. Test data can be recorded for re-use in
the application.

Limitations: It is really difficult to create sufficient test data for testing. The quantity and efficiency
of the data to be tested are determined or limited by time, cost, and quality.

Guidelines to create test data:

1) The best test data: Try to create the best data set, one that is not too large, identifies bugs across
all kinds of application functions, and does not exceed the cost and time limits for preparing test
data and running tests.

2) Invalid data set-up: To check data validation, create a data set in the wrong format. This invalid
format of data should not be accepted by the system and should generate an error message; check
that the correct error message is generated.

3) Boundary condition data set: A data set holding out-of-range data. Recognize the application's
boundary cases to organize a data set that covers both lower and upper boundary conditions.

4) Correct data set: Create a correct data set to ensure that the application responds as per the
requirements and that the data is correctly saved in a database/file.

5) Incorrect data set: Create an incorrect data set to confirm the application's behavior for negative
values and alphanumeric string inputs.

6) Create a large data set for performance, load, stress, and regression testing: A large amount
of data is required for these kinds of testing; for example, performance testing of a DB application
that fetches/updates data from/to a DB table requires a large data set.

7) Blank or default data: Execute your test cases in a blank or default data set environment to
check whether the correct error messages are generated.

8) Check for corrupted data: File a bug after correct troubleshooting. Before running a test case on
particular data, ensure that the data is not corrupted.

Create a duplicate copy of valuable input data: In many software testing situations, several testers
are involved in a release. In this situation, more than one tester has rights to access the common test
data, and each tester may operate on that common data according to their own requirements. The
best way to keep the data safe is to keep a personal copy of the same data in a format such as a Word
file, an Excel file, or another file format.

TEST DATA MANAGEMENT CHALLENGES:

Following are the 8 key challenges w.r.t. Test Data Management:

• Lack of awareness of Test Data Management - Often, the testing team themselves provision the
test data required, resulting in improper coverage and in turn leading to production defects. It is
noticed that around 10% of the defects raised in production are due to data issues that could
easily have been caught during the various testing phases.

• Lack of Standardization - As different teams request data in different formats for different types
of testing - System testing, Data warehouse testing, Performance testing, UAT, etc., there is no
standard data request form and provisioning process followed resulting in longer test cycle times.

• Poor data quality and data integrity - The lack of a streamlined process and an inconsistent
approach to the test data refresh process result in poor data quality and integrity. With complex
and heterogeneous systems, coupled with different file formats and multiple touch points, an
inappropriate process leads to serious data quality and integrity issues.

• Regulatory compliance - With increased adherence to regulations such as PCI DSS, the
Gramm-Leach-Bliley Act (GLBA), Basel II, the Dodd-Frank Act, Solvency II, etc., test data
needs to ensure that the sensitivity of the data is maintained, along with adherence to
compliance requirements, which could also be geo-specific.

• High storage cost - High storage, license and maintenance cost as different teams take full
production copies. Due to lack of reuse, redundant data sets and production clones are spread
across various test environments increasing the overall CAPEX and OPEX.

• Absence of traceability - No traceability from test data to test cases to business requirements,
leading to issues with test data coverage for a particular test case.

• Reduced efficiency - With no standard processes followed, teams work in silos doing manual
operations for data engineering, data provisioning, data mocking, etc. This results in
inefficiency, as there are no plans for reuse of the test data artifacts or for optimization of data
and environments.

• Huge effort spent on TDM - A significant amount of effort and time is spent taking huge
volumes of production data to various environments for different types of testing. Test data
identification, extraction, and conditioning consume a large share of effort in the testing life
cycle, as much as 12-14% and at times much higher.
