
TEST CASES

A test case, in software engineering, is a set of conditions or variables under which a tester will determine whether an application, software system, or one of its features is working as it was originally designed to do. The mechanism for determining whether a software program or system has passed or failed such a test is known as a test oracle. In some settings, an oracle could be a requirement or use case, while in others it could be a heuristic. It may take many test cases to determine that a software program or system is sufficiently scrutinized to be released. Test cases are often referred to as test scripts, particularly when written; written test cases are usually collected into test suites.
Formal test cases
In order to fully test that all the requirements of an application are met, there must be at least two test cases for each
requirement: one positive test and one negative test. If a requirement has sub-requirements, each sub-requirement must
have at least two test cases. Keeping track of the link between the requirement and the test is frequently done using
a traceability matrix. Written test cases should include a description of the functionality to be tested, and the preparation
required to ensure that the test can be conducted.
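As a rough illustration, a traceability matrix can be kept as a simple mapping from requirement IDs to test case IDs. In this Python sketch, all IDs and the "-pos"/"-neg" naming convention are invented:

    # Minimal sketch of a traceability matrix: requirement IDs mapped to
    # the IDs of the test cases that cover them (all IDs hypothetical).
    traceability = {
        "REQ-001": ["TC-001-pos", "TC-001-neg"],
        "REQ-002": ["TC-002-pos"],  # missing its negative test
    }

    # Enforce the rule above: every requirement needs at least one
    # positive and one negative test case.
    for req, tests in traceability.items():
        has_pos = any(t.endswith("-pos") for t in tests)
        has_neg = any(t.endswith("-neg") for t in tests)
        if not (has_pos and has_neg):
            print(req, "lacks a positive or a negative test case")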
A formal written test case is characterized by a known input and by an expected output, which is worked out before the test is executed. The known input should test a precondition and the expected output should test a postcondition.
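For example, a formal test case for a hypothetical discount function fixes the known input and the expected output before the test runs. This sketch uses Python's unittest; the function under test is invented for illustration:

    import unittest

    def apply_discount(price, percent):
        # Hypothetical function under test.
        return round(price * (1 - percent / 100), 2)

    class TestApplyDiscount(unittest.TestCase):
        def test_ten_percent_discount(self):
            # Known input (precondition) and expected output
            # (postcondition) were fixed before execution.
            self.assertEqual(apply_discount(100.00, 10), 90.00)

    if __name__ == "__main__":
        unittest.main()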
Informal test cases
For applications or systems without formal requirements, test cases can be written based on the accepted normal operation of programs of a similar class. In some schools of testing, test cases are not written at all; instead, the activities and results are reported after the tests have been run.
In scenario testing, hypothetical stories are used to help the tester think through a complex problem or system. These
scenarios are usually not written down in any detail. They can be as simple as a diagram for a testing environment or they
could be a description written in prose. The ideal scenario test is a story that is motivating, credible, complex, and easy to
evaluate. They are usually different from test cases in that test cases are single steps, while scenarios cover a number of steps.
Typical written test case format
A test case is usually a single step, or occasionally a sequence of steps, to test the correct behaviour, functionality, or features of an application. An expected result or expected outcome is usually given.
Additional information that may be included:
- test case ID
- test case description
- test step or order of execution number
- related requirement(s)
- depth
- test category
- author
- check boxes for whether the test can be or has been automated
- pass/fail
- remarks
Larger test cases may also contain prerequisite states or steps, and descriptions.
A written test case should also contain a place for the actual result.
These steps can be stored in a word processor document, spreadsheet, database or other common repository.
In a database system, you may also be able to see past test results and who generated the results and the system
configuration used to generate those results. These past results would usually be stored in a separate table.
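One way to picture such a record is as a small data structure. The following Python sketch simply mirrors the fields listed above; the storage layer (document, spreadsheet, or database) is left out:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class TestCaseRecord:
        # Fields mirroring the typical written test case format above.
        case_id: str
        description: str
        steps: list = field(default_factory=list)
        related_requirements: list = field(default_factory=list)
        expected_result: str = ""
        actual_result: str = ""        # filled in after execution
        automated: bool = False
        passed: Optional[bool] = None  # None until the test is run
        remarks: str = ""

    record = TestCaseRecord(
        case_id="TC-042",
        description="Login with valid credentials",
        steps=["Open login page", "Enter a valid user/password", "Submit"],
        related_requirements=["REQ-007"],
        expected_result="User lands on the dashboard",
    )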
Test suites often also contain:
- Test summary
- Configuration
Besides writing a description of the functionality to be tested and the preparation required to ensure that the test can be conducted, the most time-consuming part of test-case work is creating the tests and modifying them when the system changes.
Under special circumstances, there may be a need to run the test, produce the results, and then have a team of experts evaluate whether the results can be considered a pass. This happens often when determining performance numbers for a new product: the first test run is taken as the baseline for subsequent test and product release cycles.
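A minimal sketch of that baseline idea, with invented numbers and a hypothetical 10% tolerance: record the first run's performance number and flag later runs that regress beyond it:

    # First test run establishes the baseline performance number.
    baseline_ms = 120.0   # hypothetical response time from release 1
    tolerance = 0.10      # assumed team-agreed 10% regression budget

    def within_baseline(current_ms):
        # True if the current run is within tolerance of the baseline.
        return current_ms <= baseline_ms * (1 + tolerance)

    print(within_baseline(125.0))  # True: acceptable drift
    print(within_baseline(140.0))  # False: flag for expert review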
Acceptance tests, which use a variation of a written test case, are commonly performed by a group of end-users or clients of the system to ensure the developed system meets the specified requirements or the contract. User acceptance tests are distinguished by the inclusion of happy-path or positive test cases to the almost complete exclusion of negative test cases.
Testing Techniques
It is not possible to check every possible condition in a software application. Testing techniques help you select the few test cases with the greatest likelihood of finding a defect.
Boundary Value Analysis (BVA): As the name suggests, this technique tests the boundaries of a specified range of values.
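For instance, assuming an input field that accepts values from 1 to 100, BVA picks values at and just around each edge. A minimal sketch:

    def boundary_values(lo, hi):
        # Classic BVA picks: just below, at, and just above each boundary.
        return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

    # For a hypothetical field accepting 1..100:
    print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]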
Equivalence Partitioning (EP): This technique partitions the input range into parts/groups that tend to exhibit the same behavior.
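Continuing the hypothetical 1-to-100 field, EP splits the inputs into classes expected to behave alike and tests one representative from each:

    # Three assumed equivalence classes for a field accepting 1..100:
    # below the range, inside it, and above it.
    partitions = {
        "below range (invalid)": -5,
        "inside range (valid)": 50,
        "above range (invalid)": 150,
    }

    def is_valid(value):
        return 1 <= value <= 100  # hypothetical validation rule

    for name, representative in partitions.items():
        print(name, "->", is_valid(representative))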
State Transition Technique: This method is used when software behavior changes from one state to another following a particular action.
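A sketch of a state transition test, assuming a hypothetical login lockout in which repeated failed attempts move the system from "active" through "warned" to "locked":

    # Hypothetical state machine: (state, action) -> next state.
    transitions = {
        ("active", "login_ok"): "logged_in",
        ("active", "login_fail"): "warned",
        ("warned", "login_fail"): "locked",
        ("locked", "login_fail"): "locked",
    }

    def run(actions, state="active"):
        for action in actions:
            state = transitions.get((state, action), state)
        return state

    # The test case checks that repeated failures end in "locked".
    assert run(["login_fail", "login_fail", "login_fail"]) == "locked"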
Error Guessing Technique: This is guessing or anticipating errors that may arise while testing. It is not a formal method and takes advantage of a tester's experience with the application.
Test Management Tools
Test management tools are automation tools that help manage and maintain test cases. Their main features are:
- Documenting test cases: Tools can expedite test case creation with the use of templates.
- Executing test cases and recording results: Test cases can be executed through the tools, and the results obtained can be easily recorded.
- Automating defect tracking: Failed tests are automatically linked to the bug tracker, which in turn can be assigned to developers and tracked via email notifications.
- Traceability: Requirements, test cases, and test case executions are all interlinked through the tools, and each can be traced against the others to check test coverage.
How to decide the priority of execution of Test Cases

After building and validating the testing models, several test cases are generated. The next big task is to decide the priority of executing them using some systematic procedure.

The process begins with identification of "static test cases" and "dynamic test runs", a brief introduction to which follows.

Test case: a collection of several items and corresponding information, which enables a test to be executed or a test run to be performed.
Test run: the dynamic part of the specific testing activities in the overall sequence of testing on some specific test object.
Every time we invoke a static test case, we in turn perform an individual dynamic test run. Hence we can say that every test case can correspond to several test runs.

Why & how do we prioritize?
Out of the large cluster of test cases in hand, we need to decide their priorities of execution scientifically, based upon some rational, non-arbitrary criteria. We carry out the prioritization activity with the objective of reducing the overall number of test cases in the total testing effort.
There are a couple of risks associated with our prioritization activities. For example, we may run the risk that some application features do not undergo testing at all.
During prioritization we work out plans addressing the following two key concepts:

Concept – 1: Identify the essential features that must be tested in any case.
Concept – 2: Identify the risk or consequences of not testing some of the features.

The decision making in selecting the test cases is largely based upon assessing the risks first.

The objective of the test case prioritization exercise is to build confidence among the testers and the
project leaders that the tests identified for execution are adequate from different angles.

The list of test cases decided for execution can be subjected to any number of reviews in case of doubts or risks associated with any of the omitted tests.
The following four schemes are quite common for prioritizing test cases.

All these methods are independent of each other and are aimed at optimizing the number of test cases. It is difficult to brand any one method as better than the others. We can use one method as a standalone scheme or in conjunction with another. When different prioritization schemes give similar results, the level of confidence increases.

Scheme – 1: Categorization of Priority.

Scheme – 2: Risk analysis.
Scheme – 3: Brainstorming to dig out the problematic areas.

Scheme – 4: Combination of different schemes.

Let us discuss the priority categorization scheme in greater detail here.
The easiest of all methods for categorizing our tests is to assign a priority code directly to every test description. This involves assigning a unique number to each and every test description.
A popular three-level priority categorization scheme is described below:

Priority-1: Allocated to all tests that must be executed in any case.
Priority-2: Allocated to tests which can be executed only when time permits.
Priority-3: Allocated to tests which, even if not executed, will not cause big upsets.
After assigning priority codes, the tester estimates the amount of time required to execute the tests selected in each category. If the estimated time lies within the allotted schedule, the tests have been identified successfully and the partitioning exercise is complete. If the time plans deviate, the partitioning exercise is carried out further.
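A minimal sketch of that time check, with invented test names, priority codes, and estimates: execute priority levels in order and defer whatever exceeds the allotted schedule:

    # Hypothetical tests: (name, priority code, estimated hours).
    tests = [
        ("login", 1, 2.0), ("checkout", 1, 3.0),
        ("report export", 2, 4.0), ("tooltip text", 3, 2.0),
    ]
    allotted_hours = 6.0

    # Fill the schedule in priority order; defer whatever does not fit.
    spent = 0.0
    for name, priority, hours in sorted(tests, key=lambda t: t[1]):
        if spent + hours <= allotted_hours:
            spent += hours
            print(f"run {name} (priority {priority})")
        else:
            print(f"defer {name} (priority {priority})")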
There is another extension to the above scheme: a five-level scale with which we can classify the test priorities further.

The five-level priority scheme is as follows:
Priority-1a: Allocated to the tests which must pass, otherwise the delivery date will be affected.

Priority-2a: Allocated to the tests which must be executed before the final delivery.

Priority-3a: Allocated to the tests which can be executed only when time permits.

Priority-4a: Allocated to the tests which can wait and can be executed even after the delivery date.

Priority-5a: Allocated to the tests which have only a remote probability of ever being executed.
Testers then divide the tests among the various categories; for instance, tests from priority 2 may be further divided among levels 3a, 4a, and 5a. Likewise, any test can be downgraded or upgraded.

Other considerations used while prioritizing or sequencing the test cases
a) Relative dependencies: Some test cases can run only after others, because one is used to set up the other. This applies especially to continuously operating systems, where a test run must start from a state created by the previous one; see the first sketch after this list.
b) Timing of defect detection: Applies to cases wherein problems can be detected only after many other problems have been found and fixed. For example, this applies to integration testing involving many components, each having its own problems at the individual component level.

c) Damage or accidents: Applies to cases wherein acute problems, or even severe damage, can occur during testing unless certain critical areas have been checked before the present test run. For example, this applies to embedded software involving safety-critical systems, where testers prefer not to start testing the safety features before first testing the other related functions.
d) Difficulty levels: One of the most natural and commonly used sequences is to move from simple and easy test cases to difficult and complicated ones. This applies to scenarios where complicated problems can be expected; testers prefer to execute comparatively simpler test cases first to narrow down the problematic areas.
e) Combining the test cases: Applies to the majority of large-scale software testing exercises, which involve interleaving and parallel testing to accelerate the testing process; see the second sketch below.
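Returning to consideration a), one common way to honor relative dependencies is a topological sort over a "runs after" relation. A sketch with hypothetical test names, using Python's standard graphlib:

    from graphlib import TopologicalSorter

    # Hypothetical dependencies: each test maps to the tests that must
    # run before it (e.g. "create_user" sets up state for "login").
    runs_after = {
        "create_user": set(),
        "login": {"create_user"},
        "place_order": {"login"},
    }

    order = list(TopologicalSorter(runs_after).static_order())
    print(order)  # ['create_user', 'login', 'place_order']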
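And for consideration e), a sketch of the parallel part: a thread pool interleaves independent test cases (the test functions here are invented stand-ins):

    from concurrent.futures import ThreadPoolExecutor

    def run_test(name):
        # Stand-in for executing one independent test case.
        return name + ": pass"

    independent_tests = ["ui smoke", "api smoke", "db smoke"]

    # Independent tests can be interleaved across worker threads; tests
    # with dependencies still need the ordering sketched above.
    with ThreadPoolExecutor(max_workers=3) as pool:
        for result in pool.map(run_test, independent_tests):
            print(result)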