
Test Hierarchy

The Test Hierarchy is made up of four items:

Test Plans
A test plan is an ordered collection of scenarios. Test plans are at the top of the test hierarchy and represent a complete set of actions required to verify an entire application or a large portion of one, such as a module. Test plans are assigned to users for specified test environments, which in turn creates test assignments.
Scenarios
A scenario is an ordered collection of test cases. Scenarios represent workflows in an application and generally reflect how an application is used to achieve business goals. Scenarios are required in order to build an executable test plan. It's recommended to create high-level scenarios that contain no test cases for the purpose of exploratory testing, and, at the same time, scenarios with all of the needed test cases added to form a more rigid test script. By combining both approaches, you can ensure a known amount of test coverage while working outside the boundaries of a scripted test plan to find bugs that might not surface when following a rigidly defined regression test.
Test Cases
A test case is an ordered collection of test steps. Test cases represent a single activity that can be accomplished on one screen. For instance, a user administration screen might include test cases for creating, editing, and deleting a user. Test cases can be associated with screenshots, allowing you to determine how much test coverage you have for a given screen in your application. Test cases are optional and are not required to create an executable test plan. By creating and using test cases, you'll be able to create the components needed to piece together all scenarios supported by your application.
Test Steps
Test steps are at the bottom of the test hierarchy. They represent the most basic actions that can be performed in an application, such as clicking buttons or links and selecting or entering values. Test steps can be mapped to your user interface using the Touchpoint Tool, which ultimately provides the ability to assess how much coverage a given test plan has of your user interface. That said, test steps are optional; you are not required to create test steps in order to create an executable test plan. By using steps, you can create a solid foundation from which to build any test case needed to support your testing efforts.

A visual example of the Qualify test hierarchy
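The containment rules described above can be sketched as a simple data model. The following Python is illustrative only: Qualify is a GUI tool and does not expose these types, so every class and field name here is hypothetical. The sketch just captures the ordering rules — plans contain scenarios, scenarios contain test cases, and test cases contain test steps:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical model of the Qualify test hierarchy; Qualify itself does not
# expose these types or names. Each level is an ordered collection of the
# level below it, so plain lists preserve execution order.

@dataclass
class TestStep:
    action: str                                      # e.g. "Click the Save button"

@dataclass
class TestCase:
    name: str
    steps: List[TestStep] = field(default_factory=list)   # steps are optional

@dataclass
class Scenario:
    name: str
    cases: List[TestCase] = field(default_factory=list)   # cases are optional

@dataclass
class TestPlan:
    name: str
    scenarios: List[Scenario] = field(default_factory=list)  # required to execute

plan = TestPlan("User administration regression", [
    Scenario("Create a user", [
        TestCase("Create user form", [
            TestStep("Enter a user name"),
            TestStep("Click the Save button"),
        ]),
    ]),
])

# Walk the whole hierarchy to count the steps in the plan.
total_steps = sum(len(c.steps) for s in plan.scenarios for c in s.cases)
print(total_steps)  # 2
```

Note that the model allows a scenario with an empty case list — matching the exploratory-testing approach described above, where a high-level scenario deliberately contains no test cases.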


For each item in the test hierarchy, there is an associated editor in Qualify.

The test plan editor provides functionality to:

• Create, edit, and delete test plans
• Organize test plans in a tree structure
• Add, remove, and re-order scenarios in a test plan
• Add dynamic results to scenarios added to a test plan
• Assess test coverage for a test plan
• Assign test plans to users


The scenario editor provides functionality to:

• Create, edit, and delete scenarios
• Organize scenarios in a tree structure
• Set estimated execution times and priorities for scenarios
• Add, remove, and re-order test cases in a scenario
• Add static results to a scenario
• Add dynamic expected results to test cases in a scenario
• Link scenarios to requirements


The test case editor provides functionality to:

• Create, edit, and delete test cases
• Organize test cases in a tree structure
• Associate test cases with screenshots of your application
• Add, remove, and re-order test steps in a test case
• Add static results to test cases
• Add dynamic expected results to test steps
• Define data that will be put on the clipboard during test execution
• Link test cases to requirements


The test step editor provides functionality to:

• Create, edit, and delete test steps
• Organize test steps in a tree structure
• Take screenshots of the application under test
• Map test steps to the user interface
• Add static results to test steps

Test Results

Qualify supports the concept of static and dynamic expected test results. Static expected results are always the same regardless of the context of a test; dynamic expected results are context-sensitive.
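The distinction can be illustrated with a short sketch. This is not Qualify's API — Qualify manages results through its editors, and the function names below are hypothetical — but it shows the idea: a static expected result is a fixed value, while a dynamic expected result is computed from the context of the test run:

```python
# Hypothetical illustration of static vs. dynamic expected results.
# Qualify is a GUI tool; these functions do not exist in the product.

def static_expected_result():
    # Always the same, regardless of test context.
    return "A confirmation dialog is displayed."

def dynamic_expected_result(context):
    # Depends on the context of the test, e.g. data entered
    # earlier in the scenario by the tester.
    return f"The user '{context['username']}' appears in the user list."

print(static_expected_result())
print(dynamic_expected_result({"username": "jsmith"}))
```

A static result written once on a test step holds for every execution of that step, whereas a dynamic result attached to a step within a scenario can reflect the specific data flowing through that scenario.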