
In the long term, the Test Manager should:
- set process improvement goals for these metrics
- strive to improve the efficiency of the quality risk analysis process

Other metrics and success criteria can be applied to risk-based testing, and the Test Manager should consider:
- the relationship between these metrics
- the strategic objectives the team serves
- the behaviors that will result from management based on a particular set of metrics

Other Techniques for Test Selection

Requirements-based testing

One of the most prominent alternative techniques for developing and prioritizing test conditions. It may utilize several techniques:
- ambiguity reviews
- test condition analysis
- cause-effect graphing

Ambiguity reviews identify and eliminate ambiguities in the requirements, typically using a checklist of common requirements defects.
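Cause-effect graphing maps logical combinations of causes (conditions) to their effects (outcomes) and derives test conditions from that mapping. As a minimal sketch, the graph for an invented requirement ("grant a discount if the customer is a member AND the order exceeds 100") can be enumerated into a decision table; the requirement and all names here are hypothetical:

```python
from itertools import product

# Hypothetical causes taken from the invented requirement above.
causes = ["is_member", "order_over_100"]

def effects(is_member: bool, order_over_100: bool) -> dict:
    """The cause-effect 'graph': map cause values to observable effects."""
    return {"discount_granted": is_member and order_over_100}

# Enumerate every cause combination to derive a decision table,
# one row per candidate test condition.
decision_table = []
for combo in product([False, True], repeat=len(causes)):
    row = dict(zip(causes, combo))
    row.update(effects(*combo))
    decision_table.append(row)

for row in decision_table:
    print(row)
```

In practice the graphs involve many more causes and intermediate nodes, which is why tool support is usually needed; the enumeration step is the same idea at a larger scale.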


Test condition analysis involves a close reading of the requirements to identify the test conditions to cover. If those requirements have an assigned priority, this can be used to allocate effort and prioritize testing.

In the absence of an assigned priority, it is difficult to determine the appropriate effort and order of testing. In this case we need to blend requirements-based testing with risk-based testing.
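The priority-driven allocation described above can be sketched as a simple proportional split of a test-effort budget, with execution order following priority. The requirement IDs, priority scale (1 = highest), and budget below are invented for illustration:

```python
# Invented example: priority per requirement (1 = highest priority).
requirements = {"REQ-1": 1, "REQ-2": 2, "REQ-3": 3}
budget_hours = 60

# Convert priority to a weight: lower priority number -> more effort.
weights = {rid: 1.0 / prio for rid, prio in requirements.items()}
total = sum(weights.values())

# Allocate the budget proportionally to the weights.
allocation = {rid: round(budget_hours * w / total, 1)
              for rid, w in weights.items()}

# Execution order also follows priority: REQ-1 first.
order = sorted(requirements, key=requirements.get)
print(allocation, order)
```

Any monotone priority-to-weight mapping would do; the inverse used here is just one plausible choice.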

Requirements-based testing has a broader usage and can reduce an extremely large testing problem to a manageable number of test cases while still providing 100% functional coverage of the test basis. It can also identify gaps in the test basis during test case design, which can surface defects early.

Tools exist to support this method, which can be complicated to apply manually.

Challenges faced by this approach are:
- adoption of cause-effect graphing, due to the complexity of creating these graphs
- requirements specifications that are often ambiguous, untestable, incomplete, or non-existent

Not all organizations are motivated to solve these problems, so testers confronted with such situations must select another test strategy.


[Figure: requirements-based testing workflow. Requirements quality: validate against business objectives, map against use cases, ambiguity analysis, domain expert reviews, fix requirements. Logical test case design: validate requirements, structure/formalize requirements, define/optimize test cases. Test case quality: review logical test cases with the requirements authors and with domain experts, yielding validated test cases. Design and code quality: review test cases with developers and test experts, review the design against the test cases, then code and execute the tests. Test case execution closes the cycle.]
Model-based approach
- used to enhance the use of existing requirements by the creation of usage or operational profiles, to accurately depict the real-world use of the system
- this method utilizes a mix of:
  - use cases
  - users (also called personas)
  - inputs
  - outputs

- allows testing of:
  - functionality
  - usability
  - interoperability
  - reliability
  - security
  - performance

During test analysis and planning, the test team identifies the usage profiles and attempts to cover them with test cases. The usage profile is an estimate, based on available information, of realistic usage of the software:
- it may not perfectly model the eventual reality
- it is adequate if enough information and stakeholder input is provided
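One common way to apply a usage profile is to draw test sessions with probabilities matching the estimated real-world usage shares. A minimal sketch, with an invented profile and use-case names:

```python
import random

# Hypothetical operational profile: estimated share of real-world usage
# per use case (shares sum to 1.0).
usage_profile = {"search": 0.6, "checkout": 0.3, "admin_report": 0.1}

def pick_test_sessions(profile: dict, n: int, seed: int = 7) -> list:
    """Draw n test sessions so coverage mirrors the usage profile."""
    rng = random.Random(seed)  # fixed seed keeps the selection reproducible
    use_cases = list(profile)
    weights = [profile[u] for u in use_cases]
    return rng.choices(use_cases, weights=weights, k=n)

sessions = pick_test_sessions(usage_profile, 1000)
# The dominant real-world use case should dominate the test sessions.
print({u: sessions.count(u) for u in usage_profile})
```

The same profile can also drive deterministic allocation (e.g. 60% of session time to `search`); weighted sampling is simply the variant that also randomizes ordering.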

[Figure: model-based flow — start from the requirements, create a model and simulate it, proceed to auto-generate test cases, and validate the output.]
Methodical approach
- uses predetermined lists, such as checklists, to determine what to test, how much, and in what order
- mainly used for products that are very stable, via a checklist of major functional and non-functional areas to test, combined with a repository of existing test cases
- such approaches tend to become less valid when used to test more than minor changes
- the checklist provides a trial-and-error allocation of effort and sequencing of tests, usually based on the type and amount of changes that occurred
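The checklist-plus-repository idea above can be sketched as a mapping from checklist areas to existing test cases, with the changed areas for a release driving the run order. All area names, test-case IDs, and the change set are invented:

```python
# Invented checklist of major areas for a stable product.
checklist = ["login", "reporting", "performance", "security"]

# Invented repository: existing test cases per checklist area.
repository = {
    "login": ["TC-01", "TC-02"],
    "reporting": ["TC-10"],
    "performance": ["TC-20"],
    "security": ["TC-30", "TC-31"],
}

# Areas touched by the current (minor) change, e.g. from release notes.
changed_areas = {"login", "security"}

# Run tests for changed areas first, then the rest of the checklist,
# preserving the checklist's own ordering within each group.
ordered_areas = ([a for a in checklist if a in changed_areas]
                 + [a for a in checklist if a not in changed_areas])
run_order = [tc for area in ordered_areas for tc in repository[area]]
print(run_order)
```

This only works while the checklist still reflects the product, which is why the approach degrades once changes go beyond minor ones.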

Reactive approach
- very few test analysis, design, or implementation tasks occur prior to test execution
- the test team is focused on reacting to the product as actually delivered
- bug clusters become the focus of further testing
- prioritization and allocation are dynamic
- can work as a complement to other approaches
- when used exclusively, tends to miss major areas of the application that are important but not suffering from a large number of bugs
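The dynamic re-prioritization toward bug clusters can be sketched as re-ranking modules by defect count as results arrive; the module names and defect log are invented:

```python
from collections import Counter

# Invented defect log, appended to as test execution proceeds.
defect_log = ["payments", "payments", "ui", "payments", "export", "ui"]

def next_targets(log: list, top: int = 2) -> list:
    """Return the modules with the most defects so far, best first."""
    return [module for module, _ in Counter(log).most_common(top)]

# The current clusters guide the next exploratory test charter.
print(next_targets(defect_log))
```

Re-running `next_targets` after each batch of results gives the dynamic allocation described above; the exclusivity risk remains, since modules with few logged defects never rise in the ranking.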


[Cartoon: "Why are you waiting here watching the red light?" — "New code is in production. Waiting for it to crash so I can write new tests."]