
SOFTWARE TESTING METHODOLOGIES

UNIT 1

SYLLABUS:
Software Testing: Introduction, Evolution, Myths & Facts, Goals, Psychology, Definition, Model
for testing, Effective Vs Exhaustive Software Testing. Software Testing Terminology and
Methodology: Software Testing Terminology, Software Testing Life Cycle, Relating Test Life Cycle
to Development Life Cycle, Software Testing Methodology.

1.1. INTRODUCTION
Software Testing Definition:
IEEE definition states “Testing is the process of exercising or evaluating a system or system
component by manual or automated means to verify that it satisfies specified requirements”.
1.2 (A). EVOLUTION OF SOFTWARE TESTING:
 In the early days of software development, Software Testing was considered only as a debugging
process for removing the errors after the development of software.
 By 1970, the term software engineering was in common use, but software testing was only beginning
at that time.
 In 1978, G.J. Myers realized the need to discuss the techniques of software testing in a separate
subject. He wrote the book “The Art of Software Testing” which is a classic work on software
testing.
 Myers discussed the psychology of testing and emphasized that testing should be done with the
mind-set of finding errors, not of demonstrating that errors are not present.
 By 1980, software professionals and organizations started talking about quality in software.
Organizations formed quality assurance teams for projects, which take care of all the testing
activities for a project right from the beginning.
 In the 1990s, testing tools finally came into their own. There was a flood of tools, which are
vital to adequate testing of software systems. However, tools do not solve all the problems and
cannot replace a testing process.
 Gelperin and Hetzel have characterized the growth of software testing with time. Based on this, we
can divide the evolution of software testing into the following phases:
o Debugging-oriented phase (before 1957)
o Demonstration-oriented phase (1957-78)
o Destruction-oriented phase (1979-82)
o Evolution-oriented phase (1983-87)
o Prevention-oriented phase (1988-95)
o Process-oriented phase (1996 onwards)

Debugging-oriented phase (before 1957):


This phase is the early period of testing. At that time, the basics of testing were unknown. Programs were
written and then tested by the programmers until they were sure that all the bugs were removed. The term
used for testing was "checkout", which focused on getting the system to run. Debugging was a more
general term at that time.
Demonstration-oriented phase (1957-78):
The term "debugging" continued in this phase. However, in 1957, Charles Baker pointed out that the
purpose of checkout is not only to run the software but also to demonstrate its correctness according
to the stated requirements. Thus, the scope of checkout increased from program runs to program
correctness. Still, the purpose of checkout was to show the absence of errors, and there was
no stress on test case design. In this phase, there was a misconception that the software could be tested
exhaustively.
Destruction-oriented phase (1979-82):
This phase can be described as a revolutionary turning point in the history of software testing. Myers
changed the view of testing from "testing is to show the absence of errors" to "testing is to find more and
more errors". He separated debugging from testing and stressed the value of good test cases. This phase
gave more importance to effective testing than to exhaustive testing. The importance of early
testing was also realized in this phase.
Evolution-oriented phase (1983-87):
With the emphasis on early testing, it was realized that bugs identified at an early stage of
development are cheaper to debug than bugs found in the implementation or post-implementation
phases. This phase stressed the quality of software products. In 1983, the National Bureau of Standards
released guidelines for choosing a set of verification and validation techniques and evaluating the
software at each step of software development.
Prevention-oriented phase (1988-95):
This phase stressed the concept of bug prevention, as compared to the earlier concept of bug detection,
the idea being that early detection of bugs can prevent them from propagating into implementation
or later phases. In the prevention model, test planning, test analysis, and test design activities
play a major role.
Process-oriented phase (1996 onwards):
In this phase, testing was established as a complete process rather than a single phase in the software
development life cycle. The testing process starts as soon as the requirements for a project are specified
and runs parallel to the SDLC. The emphasis in this phase is also on the quantification of various
parameters which decide the performance of a testing process.

1.2 (B). Evolution of Software Testing:


The evolution of software testing was also discussed by Hung Q. Nguyen and Rob Pirozzi, who described
three phases, namely Software Testing 1.0, Software Testing 2.0, and Software Testing 3.0.
Software Testing 1.0:
In this phase, software testing was considered to be just a single phase performed after the coding of the
software in the SDLC. There was no test organization. A few testing tools were present, but their use was
limited due to their high cost. Management was not concerned with testing, as there was no quality goal.
Software Testing 2.0:
In this phase, software testing gained importance in the SDLC and the concept of early testing was
introduced. Testing evolved in the direction of planning test resources. Many testing tools were also
available in this phase.
Software Testing 3.0:
In this phase, software testing has evolved into a process based on strategic effort. This means there
should be a process which gives us a road map of the overall testing effort. Moreover, it should be driven
by quality goals, so that all controlling and monitoring activities can be performed by the managers.
Thus, management is actively involved in this phase.
1.3 SOFTWARE TESTING MYTHS:

Myth: "Testing is a single phase in SDLC"


Truth: It is a myth that software testing is just a phase in the SDLC and that we perform testing only
when the running code of a module is ready. In reality, testing starts as soon as we get the
requirement specifications for the software and continues even after implementation of the
software.
Myth: "Testing is easy"
Truth: The general perception is that software testing is an easy job in which test cases are
executed with testing tools. In reality, tools automate tasks but cannot carry out all testing
activities. A tester's job is not easy: testers have to plan and develop test cases manually, and this
requires a thorough understanding of the project being developed and of its overall design.
Myth: "Software development is worth more than testing"
Truth: This myth prevails in the minds of every team member, even in freshers seeking jobs.
Freshers dream of a job as a developer, get into an organization as developers, and feel superior
to other team members. But testing has now become an established path for job seekers.
Testing is a complete process like development, so the testing team enjoys equal status
and importance with the development team.

Myth: "Complete testing is possible"


Truth: On the surface, complete testing assumes that if we give all possible inputs to the
software, then it is tested for all of them. In reality, it is not possible to provide all the
possible inputs to test the software, as the input domain of even a small program is too large to
test. This is the reason why the term 'complete testing' has been replaced with 'effective testing'.

Myth: "Testing starts after program development"


Truth: Most team members who are not aware of testing as a process still feel that testing
cannot commence before coding. This is not true: the work of the tester begins as soon as
we get the specifications. The tester performs testing at the end of every phase of the SDLC, in the
form of verification and validation.

Myth: "The purpose of testing is to check the functionality of the software"
Truth: Today, all testing activities are driven by quality goals. Ultimately, the goal of testing
is to ensure the quality of the software. There are various aspects of software quality, beyond
functionality alone, for which test cases must be executed.

Myth: "Anyone can be a tester"


Truth: As an established process, software testing as a career also needs training, for purposes
such as understanding 1) the various phases of the SDLC, 2) recent techniques to design test
cases, and 3) various tools and how to work with them.

1.4 GOALS OF SOFTWARE TESTING

Short-term or Immediate goals:


 These goals are the immediate results of performing testing.
BUG DISCOVERY:

 The immediate goal of testing is to find errors at any stage of software development.
 The more bugs discovered at an early stage, the better the success rate of software testing.
BUG PREVENTION:

 It is the consequent action of bug discovery.


 From the behaviour and interpretation of the bugs discovered, everyone in the software development
team learns how to code safely, so that the bugs discovered are not repeated in later stages or future
projects.
 Though errors cannot be reduced to zero, they can be minimized. In this sense, bug prevention is
a superior goal of testing.
Long-term goals:
 These goals affect the product quality in the long run, when one cycle of the SDLC is over.
RELIABILITY AND QUALITY:
Since software is also a product, its quality is primary from the users’ point of view. Thorough testing
ensures superior quality. Therefore, the first goal of understanding and performing the testing process is to
enhance the quality of the software product. Though quality depends on various factors, such as correctness,
integrity, efficiency, etc., reliability is the major factor to achieve quality. The software should be passed
through a rigorous reliability analysis to attain high quality standards. Reliability is a matter of confidence
that the software will not fail, and this level of confidence increases with rigorous testing. The confidence
in reliability, in turn, increases the quality, as shown in Fig.

Figure: testing produces reliability and quality

CUSTOMER SATISFACTION:
From the user’s perspective, the prime concern of testing is customer satisfaction only. If we want the
customer to be satisfied with the software product, then testing should be complete and thorough. Testing
should be complete in the sense that it must satisfy the user for all the specified requirements mentioned in
the user manual, as well as for the unspecified requirements which are otherwise understood. A complete
testing process achieves reliability, reliability enhances the quality, and quality in turn, increases the
customer satisfaction, as shown in Fig.

Figure: quality leads to customer satisfaction


RISK MANAGEMENT:
Risk is the probability that undesirable events will occur in a system. These undesirable events will
prevent the organization from successfully implementing its business initiatives. Risks must be
controlled so that they can be managed with ease. Software testing may act as a control, which can
help in eliminating or minimizing risks. For example, testing may indicate that the software being
developed cannot be delivered on time, or that there is a probability that high-priority bugs will not be
resolved by the specified time. With this advance information, decisions can be made to minimize
the risk.
Hence, it is the tester's responsibility to evaluate business risks (such as cost, time, resources, and
critical features of the system being developed) and make these a basis for testing choices.
Testers should also categorize the levels of risks after their assessment (like high-risk, moderate-
risk, low-risk) and this analysis becomes the basis for testing activities. Thus, risk management
becomes the long-term goal for software testing.

Fig. Testing controlled by risk factors

Post-implementation goals:
 These goals are important after the product is released.
REDUCED MAINTENANCE COST:
The maintenance cost of any software product is not its physical cost, as the software does not wear out.
The only maintenance cost in a software product is its failure due to errors. Post-release errors are costlier
to fix, as they are difficult to detect. Thus, if testing has been done rigorously and effectively, then the
chances of failure are minimized and in turn, the maintenance cost is reduced.
IMPROVED SOFTWARE TESTING PROCESS:
A testing process for one project may not be successful and there may be scope for improvement. Therefore,
the bug history and post-implementation results can be analyzed to find out obstacles in the present testing
process, which can be rectified in future projects. Thus, the long-term post-implementation goal is to
improve the testing process for future projects.
1.5 PSYCHOLOGY FOR SOFTWARE TESTING:
Software testing is directly related to human psychology. Though software testing has no universally
agreed definition, it is most frequently defined as:
“Testing is the process of demonstrating that errors are not present.”
By this definition, the purpose of testing is to show that software performs its intended functions
correctly. This definition is only partially correct. If testing is performed with this goal in mind, we
cannot achieve the desired goals, as we will not test the software as a whole; such testing may hide
some bugs. If, instead, our goal is to demonstrate that a program has errors, we will design test cases
that have a higher probability of uncovering bugs, rather than test cases that merely show that the
software works. This leads to the better definition:
“Testing is the process of executing a program with the intent of finding errors.”
Human beings tend to feel guilty about the errors they make. This psychological factor brings in the
idea that we should concentrate on discovering and preventing errors, and not feel guilty about
them. Therefore, testing cannot be a joyous event unless we cast out our guilt.
1.6. SOFTWARE TESTING DEFINITIONS:
“Testing is the process of executing a program with the intent of finding errors.”- Myers
“A successful test is one that uncovers an as-yet-undiscovered error.” - Myers
“Testing can show the presence of bugs but never their absence.”- E. W. Dijkstra
“Program testing is a rapidly maturing area within software engineering that is receiving
increasing notice both by computer science theoreticians and practitioners. Its general aim is to
affirm the quality of software systems by systematically exercising the software in carefully
controlled circumstances”. - E. Miller
“Testing is a support function that helps developers look good by finding their mistakes before
anyone else does.” -James Bach
“Software testing is an empirical investigation conducted to provide stakeholders with
information about the quality of the product or service under test, with respect to the context in
which it is intended to operate.” -Cem Kaner
“The underlying motivation of program testing is to affirm software quality with methods that
can be economically and effectively applied to both large-scale and small-scale systems.” -Miller
“Testing is a concurrent lifecycle process of engineering, using and maintaining testware (i.e.
testing artifacts) in order to measure and improve the quality of the software being tested.” -
Craig
Thus, software testing can be defined as,

“Software testing is a process that detects important bugs with the objective of having
better quality software.”
1.7. MODEL FOR SOFTWARE TESTING:

Figure. Software testing model


The software is basically a part of a system for which it is being developed. Systems consist of
hardware and software to make the product run. The developer develops the software in the
prescribed system environment, considering the testability of the software. Testability is a major
issue for the developer while developing the software, as badly written software may be difficult
to test. Testers are supposed to get on with their tasks as soon as the requirements are specified.
Testers work on the basis of a bug model which classifies the bugs based on the criticality or the
SDLC phase in which the testing is to be performed. Based on the software type and the bug model,
testers decide a testing methodology which guides how the testing will be performed.
With suitable testing techniques decided in the testing methodology, testing is performed on the
software with a particular goal. If the testing results are in line with the desired goals, then the
testing is successful; otherwise, the software, the bug model, or the testing methodology has to be
modified so that the desired results are achieved. The following paragraphs describe the components
of the testing model.
Software and Software Model:
Software is built after analyzing the system in its environment. It is a complex entity which deals
with the environment, logic, programmer psychology, etc., and complex software is very difficult
to test. Since, in this model of testing, our aim is to concentrate on the testing process, the software
under consideration should not be so complex that it cannot be tested. In fact, this is the point of
consideration for the developers who design the software: they should design and code the software
such that it is testable at every point. Thus, the software to be tested should be modeled such that it
is testable, avoiding unnecessary complexities.
Bug Model:
Bug model provides a perception of the kind of bugs expected. Considering the nature of all types
of bugs, a bug model can be prepared that may help in deciding a testing strategy. However, every
type of bug cannot be predicted. Therefore, if we get incorrect results, the bug model needs to be
modified.
Testing methodology and Testing:
Based on the inputs from the software model and the bug model, testers can develop a testing methodology
that incorporates both testing strategy and testing tactics. Testing strategy is the roadmap that gives us well-
defined steps for the overall testing process. It prepares the planned steps based on the risk factors and the
testing phase. Once the planned steps of the testing process are prepared, software testing techniques and
testing tools can be applied within these steps. Thus, testing is performed according to this methodology. However,
if we don’t get the required results, the testing plans must be checked and modified accordingly.
1.8. EFFECTIVE SOFTWARE TESTING VS. EXHAUSTIVE SOFTWARE TESTING:

Exhaustive or complete software testing means that every statement in the program, and every
possible path combination with every possible combination of data, must be executed. But we soon
realize that exhaustive testing is beyond reach.
The testing process can be understood as a domain of possible tests, of which we can run only
subsets. The domain of possible tests is practically infinite, as we cannot test every possible
combination.
Complete testing would require the organization to invest a very long time, which is not cost-effective.
Therefore, testing must be performed on selected subsets that can be executed within the
constrained resources. This selected group of subsets, and not the whole domain of testing, makes
up effective testing.
Now let us see in detail why complete testing is not possible:

The domain of possible inputs to the Software is too large to test:


If we consider the input data as the only part of the domain of testing, even then we are unable to
test all combinations of input data. The domain of input data has four sub-parts:

Fig: Input Domain for Testing


(a) Valid Inputs: It seems that we can test every valid input to the software. But consider a very
simple example of adding two two-digit numbers. Each number ranges from -99 to 99 (199 values
in all), so the total number of test case combinations will be 199 * 199 = 39601 (see the sketch
after this list).
(b) Invalid Inputs: The important thing in this case is the behaviour of the program, i.e. how it
responds when a user feeds it invalid inputs. If we consider the example of adding two numbers,
the following invalid-input possibilities may occur:
-- numbers out of range,
-- a combination of alphabets and digits,
-- a combination of all alphabets,
-- a combination of control characters,
-- a combination of any other keys on the keyboard.
(c) Edited Inputs: If inputs can be edited while they are being provided to the program, then many
unexpected input events may occur. For example, many spaces can be added to the input, which
may not be visible to the user.
(d) Race Condition Inputs: The timing variation between two or more inputs is also one of the
issues that limit testing. For example, suppose there are two input events, A and B. According to
the design, A precedes B in most cases, but B can also come first under rare and restricted
conditions. There is a race condition whenever B precedes A. Race conditions are among the least
tested situations.
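To make the size of even this tiny valid-input domain concrete, the following C sketch (a hypothetical
illustration, not part of any testing tool) counts the test cases needed to cover every valid pair for the
two-digit addition example; the invalid, edited, and race condition inputs above would multiply this
count much further.

#include <stdio.h>

/* Hypothetical illustration: count the test cases needed to cover
   every valid input pair for a program that adds two two-digit
   numbers (each operand ranges from -99 to 99, i.e. 199 values). */
int main(void)
{
    long count = 0;
    for (int a = -99; a <= 99; a++)
        for (int b = -99; b <= 99; b++)
            count++;                 /* one test case per (a, b) pair */

    /* Prints 39601 (= 199 * 199): already unmanageable by hand, and
       this covers only the valid part of the input domain. */
    printf("Valid-input test cases: %ld\n", count);
    return 0;
}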

There are too many paths through the Program to Test:


A program path can be traced through the code from the start of a program to its termination.
A tester may think that if all the possible paths of control flow through the program are
executed, then the program can be said to be completely tested. However, there are two flaws
in this statement:
(i) Consider the following segment:
for (i = 0; i < n; i++)
{
    if (m >= 0)
        x[i] = x[i] + 10;
    else
        x[i] = x[i] - 2;
}
In our example, there are two paths through each iteration. The total number of paths will be
2^n + 1, where n is the number of times the loop is carried out, and 1 is added for the case in which
the for loop terminates immediately. Thus, if n is 20, the number of paths will be 2^20 + 1 = 1048577
(the sketch after flaw (ii) below tabulates this growth).

(ii) Complete path testing, even if performed somehow, does not guarantee that there will be no
errors. For example, if a programmer develops a descending-order program in place of an
ascending-order program, then exhaustive path testing is of no use.
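To see how fast the 2^n + 1 path count of flaw (i) grows, the following C sketch (hypothetical, for
illustration only) tabulates it for increasing loop bounds:

#include <stdio.h>

/* Hypothetical illustration: the loop segment above has 2 paths per
   iteration, so n iterations give 2^n paths, plus 1 for the case in
   which the loop terminates immediately. */
int main(void)
{
    for (int n = 1; n <= 20; n++) {
        unsigned long paths = (1UL << n) + 1;   /* 2^n + 1 */
        printf("n = %2d -> %lu paths\n", n, paths);
    }
    /* For n = 20 this prints 1048577: executing every path is already
       infeasible for a five-line program fragment. */
    return 0;
}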
Every Design Error cannot be Found:
How do we know that the specifications are achievable? Their consistency and completeness must be
proved, and in general that is a provably unsolvable problem.

1.9. SOFTWARE TESTING TERMINOLOGY AND METHODOLOGY:


1.9.1 Definitions:
Failure: The inability of a system or component to perform a required function according to its
specification.
Fault / Defect / Bug: A fault is the condition that actually causes a system to produce a failure. It can
be said that failures are manifestations of bugs.

Figure: Testing Terminology


Error: Whenever a member of the development team makes a mistake in any phase of the SDLC, errors
are produced. It might be a typographical error, a misreading of a specification, a misunderstanding of
what a subroutine does, and so on. Thus, error is a very general term for human mistakes.

“A mistake in coding is called an error; an error found by a tester is called a defect; a defect accepted
by the development team is called a bug; and a build that does not meet the requirements is a failure.”

Example:
#include <stdio.h>
int main()
{
    int value1, value2, ans;
    value1 = 5;
    value2 = 3;
    ans = value1 - value2;
    printf("The addition of 5 + 3 = %d.", ans);
    return 0;
}
When you compile and run this program, you see the output: "The addition of 5 + 3 = 2."
After compiling and running the program, we realize that it has failed to do what it was
supposed to do. The program was supposed to add two numbers, but it certainly did not add 5 and
3: 5 + 3 should be 8, but the result is 2. There could be various reasons why the program
displays the answer 2 instead of 8. For now, we have detected a failure; as the failure has been
detected, a defect can be raised.
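Assuming the intent was addition, as the printf message suggests, the defect can be resolved by
correcting the operator. A minimal sketch of the fix:

#include <stdio.h>
int main()
{
    int value1, value2, ans;
    value1 = 5;
    value2 = 3;
    ans = value1 + value2;   /* the error was '-' coded where '+' was intended */
    printf("The addition of 5 + 3 = %d.", ans);
    return 0;
}

This also illustrates the terminology above: the mistyped operator is the error, the faulty line of code
is the defect, and the output of 2 instead of 8 is the failure.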
TEST CASE:
A test case is a well-documented procedure designed to test the functionality of a feature in the system.
The primary purpose of designing test cases is to find errors in the system. Designing a test case requires
providing a set of inputs and their corresponding expected outputs.

Figure: test case template


 Test case ID is the identification number given to each test case.
 Purpose defines why the case is being designed.
 Preconditions for running the inputs in a system can be defined, if required, in a test case.
 Inputs should not be hypothetical. Actual inputs must be provided, instead of general inputs.
 Expected outputs are the outputs which should be produced when there is no failure.
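As a hypothetical illustration of this template (the values are invented, based on the addition example
used earlier), a filled-in test case might look like:

Test case ID:     TC_ADD_01
Purpose:          Verify that the program correctly adds two valid two-digit numbers.
Preconditions:    The program is compiled and running, and inputs are accepted from the keyboard.
Inputs:           value1 = 5, value2 = 3
Expected output:  The addition of 5 + 3 = 8.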

Testware: The documents created during testing activities are known as testware. It includes test plans,
test specification, test case design, test reports etc.
Incident: An incident is the symptom associated with a failure that alerts the user about the occurrence of
a failure.
Test oracle: An oracle is the means to judge the success or failure of a test i.e. to judge the correctness of
the system for some test. The simplest oracle is comparing actual results with expected results by hand.
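A minimal C sketch of an automated oracle, assuming the addition example above is refactored into a
callable add() function (a hypothetical name introduced only for this illustration):

#include <stdio.h>

/* Hypothetical function under test. */
static int add(int a, int b) { return a + b; }

int main(void)
{
    /* The oracle: compare the actual result against the expected
       result and report pass or fail. */
    int actual = add(5, 3);
    int expected = 8;

    if (actual == expected)
        printf("PASS: add(5, 3) = %d\n", actual);
    else
        printf("FAIL: add(5, 3) = %d, expected %d\n", actual, expected);
    return 0;
}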

1.9.2 LIFE CYCLE OF A BUG


A developer can make an error in any phase of the SDLC. If an error is produced in one phase and not
detected in the same phase, it results in a bug in the next phase. The whole life cycle of a bug can be
classified into two phases: (i) the bugs-in phase and (ii) the bugs-out phase.

Figure: life cycle of a bug


Bugs-in Phase:
This phase is where errors and bugs are introduced into the software. Whenever we commit a mistake, it
creates an error at a specific location in the software; when this error goes unnoticed, it causes some
condition to fail, leading to a bug in the software. These bugs are carried into the subsequent phases of
the SDLC if they are not detected.
Bugs-out Phase:
If failures occur while testing a software product, we conclude that it is affected by bugs. In
this phase, we observe failures, and the following activities are performed to get rid of the bugs.
Bug Classification: Here, we observe the failure and classify the bugs according to their nature.
Categorizing bugs helps in handling high-criticality bugs first and considering the more trivial bugs
later.
Bug isolation: Bug isolation is the activity by which we locate the module in which the bug appears.
The incidents observed in failures help in this activity.
Bug resolution: After the bug is isolated, we back-trace the design to pinpoint the location of the error. A
bug is resolved when we have found the exact location of its occurrence.
1.9.3. STATES OF A BUG:

Figure: states of a bug


New: The state is new when the bug is reported for the first time by a tester.
Open: When the test leader approves that the bug is genuine, its state becomes open.
Assign: If a bug is valid, a developer is assigned the job of fixing it, and the state of the bug is now 'ASSIGN'.
Deferred: If the priority of the reported bug is not high, or there is not sufficient time to fix it, or the bug
does not have any adverse effect on the software, then the bug is moved to the deferred state, which
implies that the bug is expected to be fixed in a subsequent release.
Rejected: When the developer checks the validity of the bug and rejects it as not genuine, it is in the
rejected state.
Test: After fixing a valid bug, the developer changes the bug's state to 'TEST' and sends it back to the
testing team for the next round of checking.
Verified/fixed: The tester retests the software and verifies whether the reported bug is fixed; if it is, its
status is changed to 'VERIFIED'.
Reopened: If the bug is still there even after fixing it, the tester changes its status to ‘REOPENED’.
Closed: Once the tester and the other team members have confirmed that the bug is completely
eliminated, they change its status to 'CLOSED'.
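Defect-tracking tools typically record these states as an enumeration. A minimal C sketch (hypothetical
names, not any particular tool's schema) models the states and prints the typical path of a valid bug:

#include <stdio.h>

/* Hypothetical model of the bug states described above. */
enum bug_state {
    NEW,        /* reported for the first time by a tester   */
    OPEN,       /* approved as genuine by the test leader    */
    ASSIGN,     /* assigned to a developer for fixing        */
    DEFERRED,   /* fix postponed to a later release          */
    REJECTED,   /* judged not to be a genuine bug            */
    TEST,       /* fixed; sent back to the testing team      */
    VERIFIED,   /* fix confirmed by the tester               */
    REOPENED,   /* bug found to persist after the fix        */
    CLOSED      /* confirmed as completely eliminated        */
};

int main(void)
{
    const char *names[] = { "NEW", "OPEN", "ASSIGN", "DEFERRED", "REJECTED",
                            "TEST", "VERIFIED", "REOPENED", "CLOSED" };
    /* Typical life cycle of a valid bug that is fixed on the first try. */
    enum bug_state history[] = { NEW, OPEN, ASSIGN, TEST, VERIFIED, CLOSED };

    for (int i = 0; i < 6; i++)
        printf("%s%s", names[history[i]], i < 5 ? " -> " : "\n");
    return 0;
}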
1.9.4. Why do bugs occur:
A software bug may be defined as a coding error that causes an unexpected defect, fault, flaw, or
imperfection in a computer program. In other words, if a program does not perform as intended, it is
most likely due to a bug. Bugs arise in software due to unclear or constantly changing requirements,
software complexity, programming errors, tight timelines, errors in bug tracking, communication gaps,
documentation errors, deviation from standards, etc.
• To err is human: errors are produced when developers commit mistakes.
• On many occasions, the customer may not be completely clear as to how the product should
ultimately function. Such cases usually lead to a lot of misinterpretation on either or both sides.
• Constantly changing software requirements cause a lot of confusion and pressure both on the
development and testing teams. Often, a new feature added or existing feature removed can be
linked to the other modules or components in the software. Overlooking such issues causes bugs.
• Also, fixing a bug in one part or component of the software might give rise to another bug in the
same or a different component.
• Designing and re-designing, UI interfaces, integration of modules, and database management all
add to the complexity of the software and of the system as a whole.
• Rescheduling of resources, re-doing or discarding already completed work, and changes in
hardware/software requirements can affect the software too. Assigning a new developer to the
project midway can cause bugs if proper coding standards have not been followed, code
documentation is improper, knowledge transfer is ineffective, etc. Discarding a portion of the
existing code might leave its trail behind in other parts of the software; overlooking or failing to
eliminate such code can cause bugs. Serious bugs occur especially in larger projects, as it gets
tougher to identify the problem area.
• Programmers usually tend to rush as the deadline approaches. This is the time when most bugs
occur, and bugs of all types and severities may be spotted at this stage.
• The complexity of keeping track of all the bugs can itself cause bugs. This gets harder when
a bug has a very complex life cycle, i.e. when the number of times it has been closed, reopened,
not accepted, or ignored goes on increasing.
1.9.5. Bugs Affect Economics of Software Testing:
Studies have demonstrated that testing prior to coding is 50% effective in detecting errors, while testing
after coding is 80% effective. Moreover, it is at least 10 times as costly to correct an error after coding
as before, and 100 times as costly to correct a post-release error. This is how bugs affect the economics
of testing. A bug found and fixed in the early stages, when the specification is being written, costs very
little. The same bug, if not found until the software has been coded and tested, might cost ten times as
much; by the same ratios, a bug that escapes into the released product can cost a hundred times as much
to correct.

Figure: cost of debugging increases if bug propagates

1.9.6. BUG CLASSIFICATION:


1.9.6.1 BUG CLASSIFICATION BASED ON CRITICALITY:
Bugs can be classified based on the impact they have on the software under test. Based on their
criticality, bugs fall into the following categories:
• Critical bugs: This type of bug has the worst effect: it stops or hangs the normal
functioning of the software.
• Major bugs: This type of bug does not stop the functioning of the software, but it causes a
functionality to fail to meet its requirements. For example, in a sorting program, an
output is displayed, but it is not the correct one.
• Medium bugs: Medium bugs are less critical. If the outputs are not according to the standards (e.g.
redundant or truncated output), then the bug is a medium bug.
• Minor bugs: These bugs do not affect the functionality of the software. They are mild
bugs with no effect on the expected behaviour of the software, e.g. a misaligned printout.
1.9.6.2 Bug Classification based on SDLC

 Requirements and specifications bugs:


The first type of bug in the SDLC arises in the requirement gathering and specification phase.
The requirements and specifications developed can be incomplete, ambiguous, or self-contradictory.
They can be misunderstood or impossible to understand. Even specifications that have no flaws
may change while the design is in progress, as features are added, modified, and
deleted.
 Design Bugs: Design bugs may be carried over from the previous phase, in addition to those errors
which are introduced in the present phase. The following are design errors:
– Control flow bugs: The errors in control flow of a program are control flow bugs.
– Logic bugs: Any type of logical mistake made in the design is a logic bug, e.g. improper
layout of cases, missing cases, etc.
– Processing bugs: Any type of computation mistake results in a processing bug, e.g.
arithmetic errors, improper use of logical operators, ignoring overflow, etc.
– Data flow bugs: Data flow errors such as uninitialized data, data initialized but not used, and
data used but not initialized are data flow bugs. These bugs can be identified by using data
flow analysis.
– Error handling bugs: There are situations in which the system must adopt exception-handling
mechanisms. If the system fails, there must be an error message, and the error
should be handled in an appropriate way. If this is forgotten, error handling bugs
appear.
– Race Condition bugs: Race Conditions (input timings) also lead to bugs.
– Boundary related bugs: When the software fails at the boundary values, then these are
known as boundary related bugs.
– User Interface bugs: If the user does not feel comfortable while using the software, there
are user interface bugs, e.g. inappropriate error messages, missing information, misleading
information, etc.
• Coding Bugs: Programmers are generally aware of coding bugs, for example: undeclared data,
undeclared routines, dangling code, typographical errors, etc.
• Interface and Integration Bugs: External interface bugs include invalid timing of external
signals, misunderstanding of external input and output formats, and user interface bugs. Internal
interface bugs include input and output format bugs, inadequate protection against corrupted data,
wrong subroutine calls, etc. Integration bugs result from inconsistencies or incompatibilities
between modules, data transfer bugs, data sharing bugs, etc.
• System Bugs: There may be bugs found while testing the whole system, such as bugs in
performance, stress, compatibility, etc.
• Testing Bugs: Bugs are present in the testing phase also. Testing mistakes include failure to
notice or report a problem, failure to use the most promising test cases, failure to check for
unresolved problems, failure to verify fixes, and failure to provide a summary report.
1.9.7 TESTING PRINCIPLES:

Principles are guidelines for the tester. These are:
1. Effective testing, not exhaustive testing
The tester's approach should be effective testing, not exhaustive testing: a representative
subset of the input domain is selected such that all program logic and conditions are covered.
2. Testing is not a single phase performed in SDLC
Testing runs parallel to the development phases; it is not a single phase after coding.
3. Destructive approach for constructive testing
The tester's mind should always try to find more and more bugs in the program; through
this destructive approach, the testing becomes constructive.
4. Early testing is the best policy
Verification is needed in the early phases of software development so that bugs from
previous phases do not propagate to the next phases and cost is not wasted. Testing
should therefore start as early as the requirements phase.
5. Probability of existence of an error in a section of a program is proportional to the
number of errors already found in that section
If modules are interconnected with other modules, the tester should first concentrate on
the module in which the most bugs have been found; after completely handling that
module, move on to the module with the next highest number of bugs, and finally
concentrate on the module with the fewest bugs.
6. Testing strategy should start at the smallest module level and expand towards the
whole program
Testing must start at the unit level first, then proceed to integration and then to the system level.
7. Testing should also be performed by an independent testing team
If the developers of a program are asked to test it, they tend to feel that the code they
developed cannot contain bugs, so effective testing is not done. Programmers take a
constructive approach and have positive feelings towards what they have developed,
whereas testers need a destructive, critical approach. Hence, a separate, independent
testing team is needed.
8. Everything must be recorded in software testing
Recording everything helps junior testers when they work on similar kinds of projects: if
a similar project comes in for testing, it becomes easy for the tester to classify which bugs
have high priority and which modules are likely to contain more bugs.
9. Invalid inputs and unexpected behavior have a high probability of finding an error
Test the functionality with invalid inputs as well, since invalid inputs and unexpected
usage have a high probability of revealing errors.
10. Tester must participate in specification and design review
If testers participate in specification and design reviews, they can easily grasp where
effective testing is needed.
1.10. SOFTWARE TESTING LIFE CYCLE(STLC):
The testing process is divided into a well-defined sequence of steps, termed the software testing life
cycle (STLC). The major contribution of the STLC is to involve the testers at the early stages of
development.

Figure: software testing life cycle


Test Planning:
The goal of test planning is to take into account, as a roadmap, the important issues of the testing
strategy, such as resources, schedules, responsibilities, risks, and priorities. The following are the
activities during test planning:

 Defining the Test Strategy


 Estimate of the number of test cases, their duration and cost.
 Plan the resources like the manpower to test, tools required, documents required.
 Identifying areas of risks.
 Defining the test completion criteria.
 Identification of methodologies, techniques and tools for various test cases.
 Identifying reporting procedures, bug classification, databases for testing, bug severity levels, and
project metrics.
The major outputs of test planning are:

 Develop a test case format


 Develop test case plans according to every phase of SDLC
 Identify test cases to be automated
 Prioritize the test cases according to their importance and criticality
 Plan test cycles required for regression testing
Test Design:
Designing test cases is the major activity in testing. It includes the following critical activities:

 Determining the test objectives and their prioritization
From the requirements specification and design documents, identify the testing objectives;
then prioritize them depending on their scope and risk.
 Preparing a list of items to be tested
The test objectives are converted into a list of items to be tested.
 Mapping items to test cases
A matrix is created to record which item will be covered by which test case.
The matrix helps in
o identifying the major test scenarios,
o reducing the redundant test cases,
o identifying the absence of a test case for a particular objective and, as a result,
creating it.

The tester who designs the test cases must also understand the cause-and-effect
connections in the system.
Some attributes of good test cases are:
a) test cases for critical and high-risk requirements are given the highest priority;
b) a good test case has a high probability of finding an error;
c) test cases should not overlap or be redundant;
d) a good test case follows a modular approach;
e) a good test case is able to find an as-yet-undiscovered error.
 Selection of test case design techniques
Basically, there are two categories of testing techniques: black-box and white-box.
Black-box techniques generate test cases without knowledge of the internal working of the
system, whereas white-box techniques derive test cases from the internal structure of the code.
 Creating test cases and test data
Test cases are created based on the test objectives; test data are the input data given to the
test cases.
 Setting up the test environment and supporting tools
The environment includes hardware configurations, testers, tools, interfaces, and manuals.
Examples of tools are QTP and LoadRunner.
 Creating test procedure specifications
This is the sequence of steps to be followed by the tester at the time of testing.

Figure: test case design steps

Test Execution:
In this phase, all test cases are executed and the test results are documented in test incident reports,
test logs, testing status reports, and test summary reports.
The responsibilities at various levels for executing the test cases are outlined in the following table.
Test Execution Level    Person Responsible
Unit                    Developer of the module
Integration             Testers and developers
System                  Testers, developers, end users
Acceptance              Testers, end users

Post-Execution / Test Review:

After test execution, bugs are reported to the developers. The review then involves the following
activities:
Understanding the bug: after receiving the test execution report, the developer has to understand the
bug's whereabouts.
Reproducing the bug: the failing test case is executed again with the same inputs to confirm that the
bug is reproducible.
Analyzing the nature and cause of the bug: based on the incidents reported, the nature of the bug is
analyzed.
The review also includes the following analyses:
i. Reliability analysis: determines whether the predefined reliability goals have been met.
ii. Coverage analysis: an alternative criterion for deciding when to stop testing.
iii. Overall defect analysis: identifies risk areas and opportunities for quality improvement.
1.11. SOFTWARE TESTING METHODOLOGY:
Software testing methodology is the organization of software testing by means of which the test strategy
and test tactics are achieved.

Figure: Testing methodology

Software Testing Strategy:


Testing strategy is the planning of the whole testing process as a well-defined series of steps. The
components on which the testing strategy is based are discussed below:
• Test factors: Test factors are risk factors or issues related to the system under development. The
testing process should reduce these test factors to a prescribed level.
• Test phase: This refers to the phases of the SDLC in which testing will be performed. The testing
strategy will differ for different SDLC models.
Test Strategy Matrix:
• Select and Rank Test Factors: Based on the list of test factors, the factors relevant to the specific
system are selected and ranked from the most significant to the least. These form the rows of a matrix.
• Identify the System Development Phases: The phases of the adopted development model are
listed as the columns of the matrix.
• Identify the Risks associated with the System under Development: The purpose is to identify the
concerns that need to be addressed in each test phase.
Example: Suppose a new operating system has to be designed, which needs a test strategy. The
following is an example test strategy matrix.

Table: example for strategy matrix
