
Evaluation

How will the tool determine if it has received a proper result?

What type of error recovery capabilities does the tool possess?

Does the tool provide proper logs and reports?

Success factors for tools


Rolling out the tool to the rest of the organization incrementally

Adapting and improving processes to fit with the use of the tool

Providing training, coaching, and mentoring for tool users

Defining guidelines for the use of the tool

Implementing a way to gather usage information from the actual use of the tool

Monitoring tool use and benefits

Providing support to the users of a given tool

Gathering lessons learned from all users

It is also important to make sure that the tool is integrated into the software development lifecycle within the organization.


Tool metrics

The Test Manager can design and gather objective metrics from tools and capture valuable real-time data which can be used to manage the test effort.

Different tools focus on collecting different types of data:

Test management

Traceability from requirements to test cases and automated scripts = coverage metrics

A snapshot of currently available tests to measure planned tests and current execution status (passed, failed, skipped, blocked, in queue)
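The two metrics above can be derived directly from a test management tool's data. The sketch below is a minimal illustration, assuming a hypothetical export format (the test IDs, requirement IDs, and statuses are invented for the example):

```python
from collections import Counter

# Hypothetical export from a test management tool: each entry is
# (test id, requirement it traces to, latest execution status).
tests = [
    ("TC-1", "REQ-1", "passed"),
    ("TC-2", "REQ-1", "failed"),
    ("TC-3", "REQ-2", "passed"),
    ("TC-4", "REQ-3", "in queue"),
]
all_requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# Requirements coverage: share of requirements traced to at least one test.
covered = {req for _, req, _ in tests}
coverage = len(covered) / len(all_requirements)

# Execution status snapshot (passed, failed, skipped, blocked, in queue).
status_counts = Counter(status for _, _, status in tests)

print(f"requirements coverage: {coverage:.0%}")  # 3 of 4 -> 75%
print(dict(status_counts))
```

In a real tool both numbers would be computed from the live repository of tests and requirements rather than an inline list.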


Defect management

A wealth of information on defects for process improvement

Current status, severity and priority, and the distribution of defects throughout the system

The phase in which defects are introduced and found, escape rates, etc.
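The escape rate and the introduced-vs.-found view mentioned above are simple arithmetic over defect records. A minimal sketch, assuming a hypothetical record layout (the severities and phase names are invented):

```python
# Hypothetical defect records exported from a defect management tool:
# (severity, phase introduced, phase found).
defects = [
    ("high",   "design", "system test"),
    ("medium", "coding", "system test"),
    ("low",    "coding", "production"),
    ("high",   "design", "production"),
]

# Escape rate: share of defects that were not caught before production.
escaped = [d for d in defects if d[2] == "production"]
escape_rate = len(escaped) / len(defects)

# Phase-containment view: for each phase where defects were introduced,
# list the phases where they were eventually found.
by_phase_introduced = {}
for severity, introduced, found in defects:
    by_phase_introduced.setdefault(introduced, []).append(found)

print(f"escape rate: {escape_rate:.0%}")  # 2 of 4 -> 50%
```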

Static analysis

can help detect and report maintainability issues

Performance

can supply valuable information on the scalability of the system

Coverage

can help the Test Manager understand how much of the system has actually been tested
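For the coverage case, the number a Test Manager typically reports is executed statements over total executable statements. A minimal sketch, assuming hypothetical per-file line counts exported from a coverage tool:

```python
# Hypothetical per-file line counts from a coverage tool export.
executed = {"app.py": 180, "db.py": 45}
total    = {"app.py": 200, "db.py": 100}

# Overall statement coverage: executed lines over total executable lines.
statement_coverage = sum(executed.values()) / sum(total.values())

# Per-file breakdown, useful for spotting under-tested components.
per_file = {name: executed[name] / total[name] for name in total}

print(f"overall statement coverage: {statement_coverage:.0%}")  # 225 of 300 -> 75%
```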


Reporting requirements of tools should be defined during the tool selection process. Those requirements must be properly implemented during tool configuration to ensure that the information tracked by the tools can be reported in ways that will be understandable to the stakeholders.

Simply acquiring a tool does not guarantee success. Each new tool introduced into an organization will require effort to achieve real and lasting benefits.

In order to have a smooth and successful implementation, there are a number of things that should be considered when selecting and integrating test execution and test management tools into an organization.

Potential benefits of using tools to support test execution (test automation) include:

Reduction in repetitive manual work (save time):
running regression tests
environment set up/tear down tasks
re-entering the same test data
checking against coding standards

Greater consistency and repeatability:
test data is created in a coherent manner
tests are executed by a tool in the same order with the same frequency
tests are consistently derived from requirements

More objective assessment:
static measures
coverage

Easier access to information about testing:
statistics and graphs about test progress, defect rates and performance

Potential RISKS of using tools to support test execution:

Expectations for the tool may be unrealistic

The time, cost and effort for the initial introduction of a tool may be under-estimated

The time and effort needed to achieve significant and continuing benefits may be under-estimated

The effort required to maintain the test assets generated by the tool may be under-estimated

The tool may be relied on too much (seen as a replacement for test design or execution, or use of automated testing where manual would be better)

Version control of test assets may be neglected

An open source project may be suspended

Relationships and interoperability issues between critical tools may be neglected, such as requirements management tools, configuration management tools, defect management tools and tools from multiple vendors

The tool vendor may go out of business, retire the tool, or sell the tool to a different vendor

The vendor may provide a poor response for support, upgrades, and defect fixes

A new platform or technology may not be supported by the tool

There may be no clear ownership of the tool

Glossary

Each of the terms specified below is defined as per the ISTQB® Glossary, which is available online at: https://glossary.istqb.org/en/search/
accessibility testing
Testing to determine the ease by which users with disabilities can use a component or system.

configuration management
A discipline applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.

configuration management tool
A tool that provides support for the identification and control of configuration items, their status over changes and versions, and the release of baselines consisting of configuration items.

data-driven testing
A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data-driven testing is often used to support the application of test execution tools such as capture/playback tools.
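The data-driven technique defined above can be illustrated with a minimal sketch. The table would normally live in a spreadsheet or CSV file; here it is an inline list, and add() is a hypothetical stand-in for the system under test:

```python
# The (hypothetical) function under test.
def add(a, b):
    return a + b

# The test "table": in practice this would be loaded from a
# spreadsheet or CSV file rather than written inline.
test_table = [
    # (input a, input b, expected result)
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
]

# One control script executes every row of the table.
results = {f"add({a}, {b})": add(a, b) == expected
           for a, b, expected in test_table}
print(results)
```

Adding a new test case means adding a row to the table, with no change to the control script, which is the main appeal of the technique.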

debugging tool
A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.