
Software Testing and Automation

UNIT-1

INTRODUCTION TO TESTING

 Humans, Errors, and Testing

Errors are a part of our daily life. Humans make errors in their thoughts, in their actions, and in
the products that might result from their actions. Errors occur almost everywhere. Humans can
make errors in any field, for example in observation, in speech, in medical prescription, in
surgery, in driving, in sports, in love, and similarly even in software development. An error
might be insignificant in that it leads to a gentle, friendly smile, such as when a slip of the tongue
occurs. Or, an error may lead to a catastrophe, such as when an operator fails to recognize that
a relief valve on the pressurizer was stuck open and this resulted in a disastrous radiation leak.

To determine whether there are any errors in our thought, actions, and the products generated,
we resort to the process of testing. The primary goal of testing is to determine if the thoughts,
actions, and products are as desired, that is they conform to the requirements. Testing of
thoughts is usually designed to determine if a concept or method has been understood
satisfactorily. Testing of actions is designed to check if a skill that results in the actions has
been acquired satisfactorily. Testing of a product is designed to check if the product behaves as
desired. Note that both syntax and semantic errors arise during programming. Given that most
modern compilers are able to detect syntactic errors, testing focuses on semantic errors, also
known as faults, which cause the program under test to behave incorrectly.

Errors, Faults, and Failures

There is no widely accepted and precise definition of the term error. Figure 1.1 illustrates one
class of meanings for the terms error, fault, and failure. A programmer writes a program. An
error occurs in the process of writing a program. A fault is the manifestation of one or more
errors. A failure occurs when a faulty piece of code is executed leading to an incorrect state that
propagates to the program's output. The programmer might misinterpret the requirements and
consequently write incorrect code. Upon execution, the program might display behavior that
does not match with the expected behavior, implying thereby that a failure has occurred. A
fault in the program is also commonly referred to as a bug or a defect. The terms error and bug
are by far the most common ways of referring to something wrong in the program text that
might lead to a failure. In this text we often use the terms error and fault as synonyms. Faults
are sometimes referred to as defects.
Fig. 1. Errors, faults, and failures in the process of programming and testing.

In Figure 1, notice the separation of observable from observed behavior. This separation is
important because it is the observed behavior that might lead one to conclude that a program
has failed. Certainly, as explained earlier, this conclusion might be incorrect due to one or
more reasons.
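To make the error-fault-failure chain concrete, the following is a small illustrative sketch in Python; the function and the inputs are invented for this purpose. The programmer's error (writing the wrong boundary in the condition) produces a fault in the code, and a failure is observed only when that faulty code is executed on an input that exposes it.

```python
def absolute(x):
    """Intended behavior: return |x| for any integer x."""
    if x < -1:          # fault: the programmer's error was writing -1 instead of 0
        return -x
    return x

print(absolute(5))      # 5, as expected: the fault is not exercised by this input
print(absolute(-7))     # 7, as expected: the condition is wrong but the output is still correct
print(absolute(-1))     # observed -1, expected 1: the fault executes and a failure is seen
```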

 Software Quality

We all want high-quality software. There exist several definitions of software quality. Also,
one quality attribute might be more important to a user than another quality attribute. In any
case, software quality is a multidimensional quantity and is measurable. So, let us look at
what defines quality of software.
Quality Attributes

There exist several measures of software quality. These can be divided into static and
dynamic quality attributes. Static quality attributes refer to the actual code and related
documentation. Dynamic quality attributes relate to the behavior of the application while in
use.

Static quality attributes include structured, maintainable, and testable code as well as the
availability of correct and complete documentation. You might have come across complaints
such as "Product X is excellent, I like the features it offers, but its user manual stinks!" In this
case, the user manual brings down the overall product quality. If you are a maintenance
engineer and have been assigned the task of doing corrective maintenance on an application
code, you will most likely need to understand portions of the code before you make any
changes to it. This is where attributes related to items such as code
documentation, understandability, and structure come into play. A poorly documented piece
of code will be harder to understand and hence difficult to modify. Further, poorly structured
code might be harder to modify and difficult to test.

Dynamic quality attributes include software reliability, correctness, completeness,
consistency, usability, and performance. Reliability refers to the probability of failure-free
operation and is considered in the following section. Correctness refers to the correct
operation of an application and is always with reference to some artifact. For a tester,
correctness is with respect to the requirements; for a user, it is often with respect to a user
manual.

Completeness refers to the availability of all the features listed in the requirements or in the
user manual. An incomplete software application is one that does not fully implement all the features
required. Of course, one usually encounters additional functionality with each new version of
an application. This does not mean that a given version is incomplete because its next version
has a few new features. Completeness is defined with respect to a set of features that might
themselves be a subset of a larger set of features that are to be implemented in some future
version of the application. One can easily argue that every piece of software that is correct is
also complete with respect to some feature set.

Consistency refers to adherence to a common set of conventions and assumptions. For
example, all buttons in the user interface might follow a common color-coding convention.
An example of inconsistency arises when a database application displays the date of birth of
a person in different formats, without regard for the user's preferences, depending on which
feature of the database is used.

Usability refers to the ease with which an application can be used. This is an area in itself and
there exist techniques for usability testing. Psychology plays an important role in the design
of techniques for usability testing. Usability testing also refers to testing of a product by its
potential users. The development organization invites a selected set of potential users and
asks them to test the product. Users in turn test for ease of use, functionality as expected,
performance, safety, and security. Users thus serve as an important source of tests that
developers or testers within the organization might not have conceived. Usability testing is
sometimes referred to as user-centric testing.

Performance refers to the time the application takes to perform a requested task. Performance
is considered as a nonfunctional requirement. It is specified in terms such as "This task must
be performed at the rate of X units of activity in one second on a machine running at speed Y,
having Z gigabytes of memory." For example, the performance requirement for a compiler
might be stated in terms of the minimum average time to compilation of a set of numerical
applications.
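As an illustration only, the sketch below measures performance in the form just described, "X units of activity in one second". Here task() is a hypothetical stand-in for one unit of activity and the threshold of 1000 units per second is an invented requirement, not one taken from this text.

```python
import time

def task():
    """Hypothetical stand-in for one 'unit of activity'."""
    sum(i * i for i in range(1000))

def units_per_second(duration=1.0):
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        task()
        count += 1
    return count / duration

if __name__ == "__main__":
    rate = units_per_second()
    print(f"measured rate: {rate:.0f} units/s; requirement (>= 1000) met: {rate >= 1000}")
```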
Reliability

People want software that functions correctly every time it is used. However, this happens
rarely, if ever. Most software that is used today contains faults that cause it to fail on some
combination of inputs. Thus, the notion of total correctness of a program is an ideal and
applies to only the most academic and textbook programs.

Given that most software applications are defective, one would like to know how often a
given piece of software might fail. This question can be answered, often with dubious
accuracy, with the help of software reliability, hereafter referred to as reliability. There are
several definitions of software reliability; a few of them are examined below.
ANSI/IEEE STD 729-1983: Reliability

Software reliability is the probability of failure-free operation of software over a given time
interval and under given conditions.

The probability referred to in this definition depends on the distribution of the inputs to the
program. Such input distribution is often referred to as the operational profile. According to
this definition, software reliability could vary from one operational profile to another. An
implication is that one user might say "this program is lousy" while another might sing
praises for the same program. The following is an alternate definition of reliability.
Reliability

Software reliability is the probability of failure-free operation of software in its intended
environment.

This definition is independent of "who uses what features of the software and how often".
Instead, it depends exclusively on the correctness of its features. As there is no notion of
operational profile, the entire input domain is considered as uniformly distributed. The
term environment refers to the software and hardware elements needed to execute the
application. These elements include the operating system (OS), hardware requirements, and
any other applications needed for communication.
Both definitions have their pros and cons. The first of the two definitions above requires
knowledge of the profile of its users, which might be difficult or impossible to estimate
accurately. However, if an operational profile can be estimated for a given class of users, then
an accurate estimate of the reliability can be found for this class of users. The second
definition is attractive in that one needs only a single number to denote reliability of a
software application that is applicable to all its users. However, such estimates are difficult to
arrive at.
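The difference between the two definitions can be pictured with a small simulation. The sketch below is illustrative only, with an invented program and invented input distributions: reliability is estimated as the fraction of failure-free runs, once under a hypothetical operational profile and once over the whole input domain treated as uniform.

```python
import random

def program(x):
    """Hypothetical application: fails only for inputs between 90 and 99."""
    if 90 <= x <= 99:
        raise RuntimeError("failure")
    return x * 2

def estimate_reliability(draw_input, runs=100_000):
    ok = 0
    for _ in range(runs):
        try:
            program(draw_input())
            ok += 1
        except RuntimeError:
            pass
    return ok / runs

operational_profile = lambda: random.randint(0, 49)   # this class of users never uses inputs 90-99
uniform_domain = lambda: random.randint(0, 99)        # whole input domain, uniformly distributed

print("reliability for this operational profile:", estimate_reliability(operational_profile))  # about 1.0
print("reliability over the uniform input domain:", estimate_reliability(uniform_domain))      # about 0.9
```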
 Fundamentals of Testing

1. Why is Testing necessary?


1.1 Introduction

Testing is necessary because we all make mistakes. Some of those mistakes are unimportant, but
some of them are expensive or dangerous. We need to check everything and anything we produce
because things can always go wrong - humans make mistakes all the time - it is what we do best!
Because we should assume our work contains mistakes, we all need to check our own work.
However, some mistakes come from bad assumptions and blind spots, so we might make the
same mistakes when we check our own work as we made when we did it. So we may not notice
the flaws in what we have done. Ideally, we should get someone else to check our work - another
person is more likely to spot the flaws.
We know that in ordinary life, some of our mistakes do not matter, and some are very important.
It is the same with software systems. We need to know whether a particular error is likely to cause
problems. To help us think about this, we need to consider the context within which we use
different software systems.

1.2 Software systems context

Testing Principle - Testing is context dependent


Testing is done differently in different contexts. For example, safety-critical software is
tested differently from an e-commerce site.

These days, almost everyone is aware of software systems. We encounter them in our homes, at
work, while shopping, and because of mass-communication systems. More and more, they are
part of our lives. We use software in day-to-day business applications such as banking and in
consumer products such as cars and washing machines. However, most people have had an
experience with software that did not work as expected: an error on a bill, a delay when waiting
for a credit card to process and a website that did not load correctly are common examples of
problems that may happen because of software problems. Not all software systems carry the same
level of risk and not all problems have the same impact when they occur. A risk is something that
has not happened yet and it may never happen; it is a potential problem. We are concerned about
these potential problems because, if one of them did happen, we'd feel a negative impact. When
we discuss risks, we need to consider how likely it is that the problem would occur and the impact
if it happens. For example, whenever we cross the road, there is some risk that we'll be injured by
a car. The likelihood depends on factors such as how much traffic is on the road, whether there is
a safe crossing place, how well we can see, and how fast we can cross. The impact depends on
how fast the car is going, whether we are wearing protective gear, our age and our health. The risk
for a particular person can therefore be worked out, and so can the best road-crossing strategy.

Some of the problems we encounter when using software are quite trivial, but others can be costly
and damaging - with loss of money, time or business reputation - and even may result in injury or
death. For example, suppose a user interface has typographical defects. Does this matter? It may
be trivial, but it could have a significant effect, depending on the website and the defect:
• If my personal family-tree website has my maternal grandmother's maiden name spelt wrong,
my mother gets annoyed and I have to put up with some family teasing, but I can fix it easily and
only the family see it (probably).
• If the company website has some spelling mistakes in the text, potential customers may be put
off the company as it looks unprofessional.
• If a software program miscalculates pesticide application quantities, the effect could be very
significant: suppose a decimal point is wrongly placed so that the application rate is 10 times too
large. The farmer or gardener uses more pesticide than needed, which raises his costs, has
environmental impacts on wildlife and water supplies and has health and safety impact for the
farmer, gardener, family and workforce, livestock and pets. There may also be consequent loss of
trust in and business for the company and possible legal costs and fines for causing the
environmental and health problems.
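The misplaced decimal point in the last example above can be pictured with a tiny hypothetical sketch (the function, units, and numbers are invented): dividing by 100 instead of 1000 makes the computed rate ten times too large.

```python
def spray_rate_l_per_ha(label_rate_ml_per_ha):
    """Convert the label rate from millilitres to litres per hectare."""
    return label_rate_ml_per_ha / 100     # fault: should divide by 1000

print(spray_rate_l_per_ha(2500))          # 25.0 l/ha instead of the intended 2.5 l/ha
```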

1.3 Causes of software defects

Why is it that software systems sometimes don't work correctly? We know that people
make mistakes - we are fallible.
If someone makes an error or mistake in using the software, this may lead directly to a
problem - the software is used incorrectly and so does not behave as we expected.
However, people also design and build the software and they can make mistakes during
the design and build. These mistakes mean that there are flaws in the software itself.
These are called defects or sometimes bugs or faults. Remember, the software is not just
the code; check the definition of software again to remind yourself.
When the software code has been built, it is executed and then any defects may cause the
system to fail to do what it should do (or do something it shouldn't), causing a failure. Not
all defects result in failures; some stay dormant in the code and we may never notice
them.

Do our mistakes matter?


Let's think about the consequences of mistakes. We agree that any human being, programmers
and testers included, can make an error. These errors may produce defects in the software code or
system, or in a document. If a defect in code is executed, the system may experience a failure. So
the mistakes we make matter partly because they have consequences for the products for which
we are responsible.
Our mistakes are also important because software systems and projects are complicated. Many
interim and final products are built during a project, and people will almost certainly make
mistakes and errors in all the activities of the build. Some of these are found and removed by the
authors of the work, but it is difficult for people to find their own mistakes while building a
product. Defects in software, systems or documents may result in failures, but not all defects
do cause failures. We could argue that if a mistake does not lead to a defect or a defect does not
lead to a failure, then it is not of any importance - we may not even know we've made an error.
Our fallibility is compounded when we lack experience, don't have the right information,
misunderstand, or if we are careless, tired or under time pressure. All these factors affect our
ability to make sensible decisions - our brains either don't have the information or cannot process
it quickly enough.
Additionally, we are more likely to make errors when dealing with perplexing technical or
business problems, complex business processes, code or infrastructure, changing technologies, or
many system interactions. This is because our brains can only deal with a reasonable amount of
complexity or change - when asked to deal with more, our brains may not process the information
we have correctly.
It is not just defects that give rise to failure. Failures can also be caused by environmental
conditions: for example, a radiation burst, a strong magnetic field, electronic fields, or
pollution could cause faults in hardware or firmware. Those faults might prevent or change the
execution of software. Failures may also arise because of human error in interacting with the
software, perhaps a wrong input value being entered or an output being misinterpreted. Finally,
failures may also be caused by someone deliberately trying to cause a failure in a system -
malicious damage.
When we think about what might go wrong we have to consider defects and failures
arising from:
• errors in the specification, design and implementation of the software and system;
• errors in use of the system;
• environmental conditions;
• intentional damage;
• potential consequences of earlier errors, intentional damage, defects and failures.

When do defects arise?

In Figure 2, we can see how defects may arise in four requirements for a product.

We can see that requirement 1 is implemented correctly - we understood the customer's requirement,
designed correctly to meet that requirement, built correctly to meet the design, and so deliver that
requirement with the right attributes: functionally, it does what it is supposed to do and it also has the
right non-functional attributes, so it is fast enough, easy to understand and so on.

With the other requirements, errors have been made at different stages. Requirement 2 is fine until the
software is coded, when we make some mistakes and introduce defects. Probably, these are easily
spotted and corrected during testing, because we can see the product does not meet its design
specification.

The defects introduced in requirement 3 are harder to deal with; we built exactly what we were told to
but unfortunately the designer made some mistakes so there are defects in the design. Unless we
check against the requirements definition, we will not spot those defects during testing. When we do
notice them they will be hard to fix because design changes will be required.
Figure 2: Types of error and defect

The defects in requirement 4 were introduced during the definition of the requirements; the product
has been designed and built to meet that flawed requirements definition. If we test that the product meets
its requirements and design, it will pass its tests but may be rejected by the user or customer. Defects
reported by the customer in acceptance test or live use can be very costly. Unfortunately, requirements
and design defects are not rare; assessments of thousands of projects have shown that defects
introduced during requirements and design make up close to half of the total number of defects.

What is the cost of defects?

As well as considering the impact of failures arising from defects we have not found, we need to
consider the impact of when we find those defects. The cost of finding and fixing defects rises
considerably across the life cycle.

If an error is made and the consequent defect is detected in the requirements at the specification stage,
then it is relatively cheap to find and fix. The specification can be corrected and re-issued. Similarly if
an error is made and the consequent defect detected in the design at the design stage then the design
can be corrected and re-issued with relatively little expense. The same applies for construction. If
however a defect is introduced in the requirement specification and it is not detected until acceptance
testing or even once the system has been implemented then it will be much more expensive to fix.
This is because rework will be needed in the specification and design before changes can be made in
construction; because one defect in the requirements may well propagate into several places in the
design and code; and because all the testing work done to that point will need to be repeated in order
to reach the confidence level in the software that we require. It is quite often the case that defects
detected at a very late stage, depending on how serious they are, are not corrected because the cost of
doing so is too expensive. Also, if the software is delivered and meets an agreed specification, it
sometimes still won't be accepted if the specification was wrong. The project team may have
delivered exactly what they were asked to deliver, but it is not what the users wanted. This can lead to
users being unhappy with the system that is finally delivered. In some cases, where the defect is too
serious, the system may have to be de-installed completely.

Figure 3: Cost of defects

2. What is Software Testing?

Software testing is a process used to evaluate the functionality of a software application, with the
intent of checking whether the developed software meets the specified requirements and of
identifying defects, in order to produce a quality product.

Definition:
Software Testing Definition according to ANSI/IEEE 1059 standard – A process of analyzing
a software item to detect the differences between existing and required conditions (i.e.,
defects) and to evaluate the features of the software item.
First, let's look at testing as a process:

Process - Testing is a process rather than a single activity - there are a series of activities
involved.

 All life cycle activities - Testing is a process that takes place throughout the software
development life cycle. We saw earlier that the later in the life cycle we find bugs, the more
expensive they are to fix. If we can find and fix requirements defects at the requirements
stage, that must make commercial sense. We'll build the right software, correctly and at a
lower cost overall. So, the thought process of designing tests early in the life cycle can help to
prevent defects from being introduced into code. We sometimes refer to this as 'verifying the
test basis via the test design'. The test basis includes documents such as the requirements and
design specifications.

 Planning - Activities take place before and after test execution. We need to manage the
testing; for example, we plan what we want to do; we control the test activities; we report on
testing progress and the status of the software under test; and we finalize or close testing
when a phase completes.

 Preparation - We need to choose what testing we'll do, by selecting test conditions and
designing test cases.

 Evaluation - As well as executing the tests, we must check the results and evaluate the
software under test and the completion criteria, which help us decide whether we have
finished testing and whether the software product has passed the tests.

 Software products and related work products - We don't just test code. We test the
requirements and design specifications, and we test related documents such as operation, user
and training material.

3. Fundamental Test process

Introduction:

We will describe the fundamental test process and activities. These start with test planning
and continue through to test closure.
Although executing tests is important, we also need a plan of action and a report on the
outcome of testing. Project and test plans should include time to be spent on planning the
tests, designing test cases, preparing for execution and evaluating status.
We can divide the activities within the fundamental test process into the following basic
steps:
a) Planning and control;
b) Analysis and design;
c) Implementation and execution;
d) Evaluating exit criteria and reporting;
e) Test closure activities.

These activities are logically sequential, but, in a particular project, may overlap, take place
concurrently and even be repeated.

a) Test planning and control:


During test planning, we make sure we understand the goals and objectives of the
customers, stakeholders, and the project, and the risks which testing is intended to
address. This will give us what is sometimes called the mission of testing or the test
assignment. Based on this understanding, we set the goals and objectives for the testing
itself, and derive an approach and plan for the tests, including specification of test
activities. To help us we may have organization or program test policies and a test
strategy. Test planning has the following major tasks, given approximately in order,
which help us build a test plan:
 Determine the scope and risks and identify the objectives of testing
 Determine the test approach (techniques, test items, coverage, identifying and
interfacing with the teams involved in testing)
 Implement the test policy and/or the test strategy
 Determine the required test resources (e.g. people, test environment, PCs)
 Schedule test analysis and design tasks, test implementation, execution and
evaluation
 Determine the exit criteria

Management of any activity does not stop with planning it. We need to control and
measure progress against the plan. So, test control is an ongoing activity. We need to
compare actual progress against the planned progress, and report to the project manager
and customer on the current status of testing, including any changes or deviations from
the plan. We'll need to take actions where necessary to meet the objectives of the project.
Such actions may entail changing our original plan, which often happens. When different
groups perform different review and test activities within the project, the planning and
control needs to happen within each of those groups but also across the groups to
coordinate between them, allowing smooth hand-offs between each stage of testing. Test
planning takes into account the feedback from monitoring and control activities which
take place throughout the project.
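Purely as an illustration of the planning outputs listed above, the major decisions of a test plan can be captured in a simple structure such as the following; the field names and values are invented and are not taken from any standard template.

```python
# Illustrative only: an invented, minimal representation of a test plan.
test_plan = {
    "scope": ["login", "payments"],
    "risks": ["incorrect payment amounts", "data loss on timeout"],
    "objectives": ["verify payment calculations", "confirm recovery after a timeout"],
    "approach": {"techniques": ["black-box", "exploratory"], "levels": ["system", "acceptance"]},
    "resources": {"testers": 3, "environments": ["staging"], "tools": ["a test management tool"]},
    "schedule": {"analysis_and_design": "weeks 1-2", "implementation_and_execution": "weeks 3-5"},
    "exit_criteria": {"requirements_covered": 1.0, "open_critical_defects": 0},
}
```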

b) Test analysis and design

During test analysis and design, we take general testing objectives identified during
planning and build test designs and test procedures (scripts). Test analysis and design has
the following major tasks, in approximately the following order:

 Review the test basis (such as the product risk analysis, requirements, architecture, design
specifications, and interfaces), examining the specifications for the software we are
testing. We use the test basis to help us build our tests. We can start designing certain
kinds of tests (called black-box tests) before the code exists, as we can use the test basis
documents to understand what the system should do once built. As we study the test
basis, we often identify gaps and ambiguities in the specifications, because we are trying
to identify precisely what happens at each point in the system, and this also prevents
defects from appearing in the code.

 Identify test conditions based on analysis of test items, their specifications, and what we
know about their behavior and structure. This gives us a high-level list of what we are
interested in testing. If we consider the driving test example, the examiner might have a
list of test conditions including 'behavior at road junctions', 'use of indicators', 'ability to
maneuver the car', and so on.

 Design the tests using techniques to help select representative tests that relate to particular
aspects of the software which carry risks or which are of particular interest, based on the
test conditions and going into more detail. For example, the driving examiner might look
at a list of test conditions and decide that junctions need to include T-junctions, crossroads
and so on. In testing, we'll define the test cases and test procedures.
 Evaluate testability of the requirements and system. The requirements may be written in
a way that allows a tester to design tests; for example, if the performance of the software
is important, that should be specified in a testable way. If the requirements just say 'the
software needs to respond quickly enough' that is not testable, because 'quick enough'
may mean different things to different people. A more testable requirement would be 'the
software needs to respond in 5 seconds with 20 people logged on'. The testability of the
system depends on aspects such as whether it is possible to set up the system in an
environment that matches the operational environment and whether all the ways the
system can be configured or used can be understood and tested. For example, if we test a
website, it may not be possible to identify and recreate all the configurations of
hardware, operating system, browser, connection, firewall and other factors that the
website might encounter.

 Design the test environment set-up and identify any required infrastructure and tools.
This includes testing tools and support tools such as spreadsheets, word processors,
project planning tools, and non-IT tools and equipment - everything we need to carry out
our work.
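Returning to the testability example above ("the software needs to respond in 5 seconds with 20 people logged on"), the sketch below shows how such a requirement could be turned into an automated check. It is illustrative only: handle_request() is a hypothetical stand-in for the system under test, and a real test would run against the actual application in a representative environment.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Hypothetical stand-in for one logged-on user's request to the system under test."""
    time.sleep(0.2)
    return "ok"

def slowest_response_with(users=20):
    def timed(user_id):
        start = time.perf_counter()
        handle_request(user_id)
        return time.perf_counter() - start
    with ThreadPoolExecutor(max_workers=users) as pool:
        return max(pool.map(timed, range(users)))

if __name__ == "__main__":
    worst = slowest_response_with(20)
    print(f"slowest response: {worst:.2f}s; requirement (<= 5s) met: {worst <= 5.0}")
```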

c) Test implementation and execution

During test implementation and execution, we take the test conditions and make them into
test cases and testware and set up the test environment. We transform our test conditions
into test cases and procedures, and other testware such as scripts for automation. We also need
to set up an environment where we will run the tests and build our test data. Setting up
environments and data often involves significant time and effort, so you should plan
and monitor this work carefully. Test implementation and execution have the following
major tasks, in approximately the following order:
 Implementation:
o Develop and prioritize our test cases, and create test data for those tests.
o Create test suites from the test cases for efficient test execution. A test suite
is a logical collection of test cases which naturally work together. Test suites
often share data and a common high-level set of objectives. We'll also set up
a test execution schedule.
o Implement and verify the environment. We make sure the test environment
has been set up correctly, possibly even running specific tests on it.
 Execution:
o Execute the test suites and individual test cases, following our test
procedures. We might do this manually or by using test execution tools,
according to the planned sequence.
o Log the outcome of test execution and record the identities and versions of
the software under test, test tools and testware.
o Compare actual results (what happened when we ran the tests) with expected
results (what we anticipated would happen).
o Where there are differences between actual and expected results, report
discrepancies as incidents. We analyze them to gather further details about
the defect, report additional information on the problem, identify the
causes of the defect, and differentiate between problems in the software or
other products under test and defects in the test data, in the test
documents, or mistakes in the way we executed the test.
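The execution tasks above can be pictured with a small sketch: run each case, compare the actual result with the expected result, log the outcome, and record mismatches as incidents. The function under test (add) and the test cases are invented placeholders.

```python
def add(a, b):
    """Hypothetical function under test."""
    return a + b

test_suite = [
    {"id": "TC-01", "inputs": (2, 3), "expected": 5},
    {"id": "TC-02", "inputs": (-1, 1), "expected": 0},
]

incidents = []
for case in test_suite:
    actual = add(*case["inputs"])                    # execute the test case
    passed = actual == case["expected"]              # compare actual with expected results
    print(f'{case["id"]}: {"PASS" if passed else "FAIL"} (actual={actual})')   # log the outcome
    if not passed:                                   # report discrepancies as incidents
        incidents.append({"case": case["id"], "expected": case["expected"], "actual": actual})

print("incidents raised:", incidents)
```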
d) Evaluating exit criteria and reporting

Evaluating exit criteria is the activity where test execution is assessed against the defined
objectives. This should be done for each test level, as for each we need to know whether
we have done enough testing. Based on our risk assessment, we'll have set criteria against
which we'll measure 'enough'. These criteria vary for each project and are known as exit
criteria. They tell us whether we can declare a given testing activity or level complete.
Evaluating exit criteria has the following major tasks:
 Check test logs against the exit criteria specified in test planning: We look to see what
evidence we have for which tests have been executed and checked, and what defects
have been raised, fixed, confirmation tested, or are outstanding.
 Assess if more tests are needed or if the exit criteria specified should be changed: We
may need to run more tests if we have not run all the tests we designed, or if we
realize we have not reached the coverage we expected, or if the risks have increased
for the project.
 Write a test summary report for stakeholders: It is not enough that the testers know
the outcome of the test. All the stakeholders need to know what testing has been done
and the outcome of the testing, in order to make informed decisions about the
software.
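As an illustration only, checking the test logs against the exit criteria can be expressed as a few simple comparisons; the figures and threshold values below are invented.

```python
test_log = {"designed": 120, "executed": 118, "passed": 110,
            "open_critical_defects": 1, "requirement_coverage": 0.96}

exit_criteria = {"executed_ratio": 1.0, "pass_ratio": 0.95,
                 "open_critical_defects": 0, "requirement_coverage": 0.95}

checks = {
    "all designed tests executed":
        test_log["executed"] / test_log["designed"] >= exit_criteria["executed_ratio"],
    "pass ratio reached":
        test_log["passed"] / test_log["executed"] >= exit_criteria["pass_ratio"],
    "no open critical defects":
        test_log["open_critical_defects"] <= exit_criteria["open_critical_defects"],
    "requirement coverage reached":
        test_log["requirement_coverage"] >= exit_criteria["requirement_coverage"],
}

for name, met in checks.items():
    print(f"{name}: {'met' if met else 'NOT met'}")
print("this test level can be declared complete:", all(checks.values()))
```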

e) Test closure activities

Test closure activities include the following major tasks:


 Check which planned deliverables we actually delivered and ensure all incident
reports have been resolved through defect repair or deferral.
 Finalize and archive testware, such as scripts, the test environment, and any other
test infrastructure, for later reuse. It is important to reuse whatever we can of
testware; we will inevitably carry out maintenance testing, and it saves time and effort
if our testware can be pulled out from a library of existing tests. It also allows us to
compare the results of testing between software versions.
 Hand over testware to the maintenance organization who will support the software
and make any bug fixes or maintenance changes, for use in confirmation testing and
regression testing. This group may be a separate group from the people who build and
test the software; the maintenance testers are one of the customers of the development
testers; they will use the library of tests.
 Evaluate how the testing went and analyze lessons learned for future releases and
projects. This might include process improvements for the software development life
cycle as a whole and also improvement of the test processes.
Basic Definitions

Error—People make errors. A good synonym is mistake. When people make mistakes
while coding, we call these mistakes bugs. Errors tend to propagate; a requirements error
may be magnified during design and amplified still more during coding.

Fault—A fault is the result of an error. It is more precise to say that a fault is the
representation of an error.

Failure—A failure occurs when a fault executes, that is, when the faulty code is run and the
resulting incorrect state propagates to the program's output.

Incident—When a failure occurs, it may or may not be readily apparent to the user (or
customer or tester). An incident is the symptom associated with a failure that alerts the user
to the occurrence of a failure.

Test—Testing is obviously concerned with errors, faults, failures, and incidents. A test is
the act of exercising software with test cases. A test has two distinct goals: to find failures
or to demonstrate correct execution.

Test case—A test case has an identity and is associated with a program behavior. It also has
a set of inputs and expected outputs.
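The definition above can be pictured as a small data structure: a test case with an identity, the behavior it is associated with, a set of inputs, and the expected output. The field names and example values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    identity: str
    behavior: str
    inputs: tuple
    expected_output: object

tc = TestCase(identity="TC-07",
              behavior="withdrawal is rejected when the balance is insufficient",
              inputs=(100, 250),              # (current balance, requested amount)
              expected_output="REJECTED")
print(tc)
```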

Software Test Life Cycle

A testing life cycle

The above figure portrays a life cycle model for testing. Notice that, in the development
phases,
three opportunities arise for errors to be made, resulting in faults that may propagate through
the remainder of the development process. The fault resolution step is another opportunity for
errors (and new faults). When a fix causes formerly correct software to misbehave, the fix is
deficient. From this sequence of terms, we see that test cases occupy a central position in
testing. The process of testing can be subdivided into separate steps: test planning, test case
development, running test cases, and evaluating test results.

Test Cases

A TEST CASE is a set of actions executed to verify a particular feature or functionality of
your software application. The test case includes specific variables or conditions, using which
a testing engineer can compare expected and actual results to determine whether a software
product is functioning as per the requirements of the customer.

Example of Test Cases:

Test cases for the Test Scenario: "Check the Login Functionality" would be

 Check system behavior when a valid email id and a valid password are entered.
 Check system behavior when an invalid email id and a valid password are entered.
 Check system behavior when a valid email id and an invalid password are entered.
 Check system behavior when an invalid email id and an invalid password are entered.
 Check system behavior when the email id and password are left blank and Sign in is pressed.
 Check that "Forgot your password" works as expected.
 Check system behavior when a valid/invalid phone number and password are entered.
 Check system behavior when "Keep me signed" is checked.

Identifying test cases

Two fundamental approaches are used to identify test cases: black box and white box
testing.

 Black box testing:

Black Box Testing is also known as behavioral or specification-based testing.

It is a Software Testing method that analyses the functionality of a software/application
without knowing much about the internal structure/design of the item that is being tested
and compares the input value with the output value. The main focus in Black Box Testing is
on the functionality of the system as a whole.
In Black box testing, the content (implementation) of the black box is not known, and the
function of the black box is understood completely in terms of its inputs and outputs.

With the specification-based approach to test case identification, the only information used
is the specification of the software. Therefore, the test cases have two distinct advantages:
(1) they are independent of how the software is implemented, so if the implementation
changes, the test cases are still useful; and
(2) test case development can occur in parallel with the implementation, thereby reducing
the overall project development interval.
On the negative side, specification-based test cases frequently suffer from two problems:
significant redundancies may exist among test cases, compounded by the possibility of gaps
of untested software.
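Advantage (1) can be illustrated with a short sketch: the same specification-based tests apply, unchanged, to two different implementations of "return the largest element of a non-empty list". Both implementations are invented for illustration.

```python
def largest_v1(values):
    """First implementation: sort and take the last element."""
    return sorted(values)[-1]

def largest_v2(values):
    """A later re-implementation that avoids sorting; the tests do not change."""
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

# specification-based tests: derived only from the stated behavior, not from the code
spec_tests = [(([3, 1, 2],), 3), (([-5, -9],), -5), (([7],), 7)]

for implementation in (largest_v1, largest_v2):
    assert all(implementation(*args) == expected for args, expected in spec_tests)
print("both implementations pass the same specification-based tests")
```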

 White box testing:

White box testing, also known as structural, code-based, or clear-box testing, is a testing
technique that evaluates the code and the internal structure of a program. Code-based
testing is the other fundamental approach to test case identification.
White box testing involves looking at the structure of the code. When you know the internal
structure of a product, tests can be conducted to ensure that the internal operations are
performed according to the specification and that all internal components have been adequately exercised.
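A small sketch of code-based test selection follows; the grade() function is invented. Here the three test inputs are chosen by looking at the structure of the code so that every branch is executed at least once.

```python
def grade(score):
    """Hypothetical function under test."""
    if score >= 75:
        return "distinction"
    elif score >= 40:
        return "pass"
    else:
        return "fail"

# one test per branch, chosen from the code's structure rather than from the specification alone
assert grade(80) == "distinction"   # exercises the first branch
assert grade(50) == "pass"          # exercises the second branch
assert grade(10) == "fail"          # exercises the third branch
print("all branches of grade() have been exercised")
```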

Levels of Testing

Levels of testing echo the levels of abstraction found in the waterfall model of the software
development life cycle. Although this model has its drawbacks, it is useful for testing as a
means of identifying distinct levels of testing and for clarifying the objectives that pertain to
each level.

There are many different testing levels that help to check the behavior and performance of the
software under test. These testing levels are designed to identify missing areas and to ensure
reconciliation between the states of the development life cycle. SDLC models have
characteristic phases such as requirement gathering, analysis, design, coding or execution,
testing, and deployment.

All these phases go through the process of software testing at different levels. There are mainly
four testing levels:

 Unit Testing – Test Individual Component


 Integration Testing - Test Integrated Component
 System Testing - Test the entire system
 Acceptance Testing - Test the final system

Each of these testing levels has a specific purpose. These testing levels provide value to the
software development life cycle.

Unit testing: A unit is the smallest testable portion of a system or application that can be
compiled, loaded, and executed. This kind of testing helps to test each module separately.
The aim is to test each part of the software in isolation and to check whether each component
fulfils its intended functionality. This kind of testing is performed by developers.

Integration testing: Integration means combining. In this testing phase, different software
modules are combined and tested as a group to make sure that the integrated system is ready
for system testing. Integration testing checks the data flow from one module to other modules.
This kind of testing is performed by testers.

System testing: System testing is performed on a complete, integrated system. It allows
checking the system's compliance with the requirements. It tests the overall interaction of
components and involves load, performance, reliability, and security testing. System testing is
most often the final test to verify that the system meets the specification. It evaluates both
functional and non-functional requirements.

Acceptance testing: Acceptance testing is a test conducted to determine whether the
requirements of a specification or contract are met at the time of delivery. Acceptance testing
is basically done by the user or customer. However, other stakeholders can be involved in this
process.
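The first two levels can be contrasted with a small, invented example: a unit test exercises discount() on its own, while an integration test checks that checkout() and discount() work together and that data flows correctly between them.

```python
def discount(total):
    """Apply a 10% discount to orders over 100."""
    return total * 0.9 if total > 100 else total

def checkout(prices):
    """Combine the items of an order and apply the discount."""
    return discount(sum(prices))

# unit tests: the individual component, tested separately
assert discount(200) == 180
assert discount(50) == 50

# integration test: the combined modules and the data flow between them
assert checkout([60, 60]) == 108
print("unit and integration checks passed")
```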

Defect Management

Defect management is an integral part of a development and test process in many software
development organizations. It is a subprocess of the development process. It entails the
following: defect prevention, discovery, recording and reporting, classification, resolution,
and prediction.

Defect prevention is achieved through a variety of processes and tools. For example, good
coding techniques, unit test plans, and code inspections are all important elements of any
defect prevention process. Defect discovery is the identification of defects in response to
failures observed during dynamic testing or found during static testing. Discovering a defect
often involves debugging the code under test.

Defects found are classified and recorded in a database. Classification becomes important in
dealing with the defects. For example, defects classified as high severity are likely to be
attended to by the developers before those classified as low severity. A variety of defect
classification schemes exist. Orthogonal defect classification, popularly known as ODC, is
one such scheme. Defect classification assists an organization in measuring statistics such as
the types of defects, their frequency, and their location in the development phase and
document. These statistics are then input to the organization's process improvement team that
analyzes the data, identifies areas of improvement in the development process, and
recommends appropriate actions to higher management.

Each defect, when recorded, is marked as open indicating that it needs to be resolved. One or
more developers are assigned to resolve the defect. Resolution requires careful scrutiny of the
defect, identifying a fix if needed, implementing the fix, testing the fix, and finally closing
the defect indicating that it has been resolved. It is not necessary that every recorded defect
be resolved prior to release. Only defects that are considered critical to the company's
business goals, which include quality goals, are resolved; others are left unresolved until later.

Defect prediction is another important aspect of defect management. Organizations often do
source code analysis to predict how many defects an application might contain before it
enters the testing phase. Despite the imprecise nature of such early predictions, they are used
to plan for testing resources and release dates. Advanced statistical techniques are used to
predict defects during the test process. The predictions tend to be more accurate than early
predictions due to the availability of defect data and the use of sophisticated models. The
defect discovery data, including time of discovery and type of defect, is used to predict the
count of remaining defects. Once again this information is often imprecise, though
nevertheless used in planning.

Several tools exist for recording defects, and computing and reporting defect-related
statistics. Bugzilla (open source) and FogBugz (commercially available) are two such tools.
They provide several features for defect management, including defect recording,
classification, and tracking. Several tools that compute complexity metrics also predict
defects using code complexity.

Defect Life Cycle

A defect life cycle, also known as a bug life cycle, is the sequence of states that a defect goes
through during its entire life. It starts as soon as a new defect is found by a tester and comes to
an end when the tester closes that defect, assured that it won't be reproduced again.
NEW: Tester finds a defect and posts it with the status NEW. This defect is yet to be
studied/approved.

ASSIGNED / OPEN: Test / Development / Project lead studies the NEW defect and if it is
found to be valid it is assigned to a member of the Development Team. The assigned
Developer's responsibility is now to fix the defect and have it COMPLETED.

DEFERRED: If it is decided that a valid NEW or ASSIGNED defect will be fixed in an upcoming
release instead of the current release, it is DEFERRED. This defect is ASSIGNED when the
time comes.

DROPPED / REJECTED: Test / Development/ Project lead studies the NEW defect and if it
is found to be invalid, it is DROPPED / REJECTED. Note that the specific reason for this
action needs to be given.

COMPLETED / FIXED / RESOLVED / TEST: The Developer 'fixes' the defect that is
ASSIGNED to him or her. Now, the 'fixed' defect needs to be verified by the Test Team and
the Development Team 'assigns' the defect back to the Test Team. A COMPLETED defect is
either CLOSED, if fine, or REASSIGNED, if still not fine.

REASSIGNED / REOPENED: If the Tester finds that the 'fixed' defect is in fact not fixed or
only partially fixed, it is reassigned to the Developer who 'fixed' it. A REASSIGNED defect
needs to be COMPLETED again.

CLOSED / VERIFIED: If the Tester / Test Lead finds that the defect is indeed fixed and is no
more of any concern, it is CLOSED / VERIFIED.
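The life cycle above can be summarized as a table of allowed state transitions. The sketch below is illustrative only; the exact state names and transitions vary between organizations.

```python
ALLOWED_TRANSITIONS = {
    "NEW":        {"ASSIGNED", "DEFERRED", "REJECTED"},
    "ASSIGNED":   {"COMPLETED", "DEFERRED"},
    "DEFERRED":   {"ASSIGNED"},
    "COMPLETED":  {"CLOSED", "REASSIGNED"},
    "REASSIGNED": {"COMPLETED"},
    "REJECTED":   set(),
    "CLOSED":     set(),
}

def move(defect, new_state):
    """Change a defect's state only if the transition is allowed."""
    if new_state not in ALLOWED_TRANSITIONS[defect["state"]]:
        raise ValueError(f'{defect["state"]} -> {new_state} is not an allowed transition')
    defect["state"] = new_state

defect = {"id": "BUG-42", "state": "NEW"}
for state in ("ASSIGNED", "COMPLETED", "REASSIGNED", "COMPLETED", "CLOSED"):
    move(defect, state)
print(defect)   # {'id': 'BUG-42', 'state': 'CLOSED'}
```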
Types of Testing

Static Testing

Static testing is carried out without executing the application under test. This is in contrast to
dynamic testing that requires one or more executions of the application under test. Static
testing is useful in that it may lead to the discovery of faults in the application, as well as
ambiguities and errors in requirements and other application-related documents, at a
relatively low cost. This is especially so when dynamic testing is expensive. Nevertheless,
static testing is complementary to dynamic testing. Organizations often sacrifice static testing
in favor of dynamic testing though this is not considered a good practice.

Static testing is best carried out by an individual who did not write the code, or by a team of
individuals. A sample process of static testing is illustrated in the figure below. The test team
responsible for static testing has access to requirements document, application, and all
associated documents such as design document and user manuals. The team also has access
to one or more static testing tools. A static testing tool takes the application code as input and
generates a variety of data useful in the test process.

Walkthroughs

Walkthroughs and inspections are an integral part of static testing. A walkthrough is an
informal process to review any application-related document. For example, requirements are
reviewed using a process termed requirements walkthrough. Code is reviewed using code
walkthrough, also known as peer code review.

A walkthrough begins with a review plan agreed upon by all members of the team. Each item
of the document, for example a source code module, is reviewed with clearly stated
objectives in view. A detailed report is generated that lists items of concern regarding the
document reviewed.

In requirements walkthrough, the test team must review the requirements document to ensure
that the requirements match user needs, and are free from ambiguities and inconsistencies.
Review of requirements also improves the understanding of the test team regarding what is
desired of the application. Both functional and nonfunctional requirements are reviewed. A
detailed report is generated that lists items of concern regarding the requirements.
Inspections

Inspection is a more formally defined process than a walkthrough. This term is usually
associated with code. Several organizations consider formal code inspections as a tool to
improve code quality at a lower cost than incurred when dynamic testing is used.
Organizations have reported significant increases in productivity and software quality due to
the use of code inspections.

Code inspection is carried out by a team. The team works according to an inspection plan that
consists of the following elements: (a) statement of purpose; (b) work product to be
inspected, this includes code and associated documents needed for inspection; (c) team
formation, roles, and tasks to be performed; (d) rate at which the inspection task is to be
completed; and (e) data collection forms where the team will record its findings such as
defects discovered, coding standard violations, and time spent in each task.

Members of the inspection team are assigned roles of moderator, reader, recorder, and author.
The moderator is in charge of the process and leads the review. Actual code is read by the
reader, perhaps with the help of a code browser and with large monitors for all in the team to
view the code. The recorder records any errors discovered or issues to be looked into. The
author is the actual developer whose primary task is to help others understand code. It is
important that the inspection process be friendly and non-confrontational.

Use of Static Code Analysis Tools in Static Testing

Static code analysis tools can provide control-flow and data-flow information. Several
commercial as well as open source tools are available. Purify from IBM Rational and
Klockwork from Klockwork, Inc. are two of the many commercially available tools for static
analysis of C and Java programs. Lightweight analysis for program security in Eclipse
(LAPSE) is an open source tool for the analysis of Java programs.
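As a small, invented example of the kind of data-flow information such tools report, the function below reads a variable that is never assigned on one of its control-flow paths; a static analyzer can flag this without ever running the program.

```python
def shipping_cost(weight, express):
    if express:
        rate = 12.5
    # data-flow issue: no assignment to "rate" on the non-express path
    return weight * rate   # typically flagged as "possibly used before assignment"
```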
