
UNIT I FOUNDATIONS OF SOFTWARE TESTING

Why do we test Software?, Black-Box Testing and White-Box Testing, Software Testing
Life Cycle, V-model of Software Testing, Program Correctness and Verification,
Reliability versus Safety, Failures, Errors and Faults (Defects), Software Testing
Principles, Program Inspections, Stages of Testing: Unit Testing, Integration Testing,
System Testing

1. Introduction:

1.1 Software testing

Software testing is the process of executing a program with the intent of finding errors. To make our software perform well, it should be as error-free as possible; successful testing reveals errors so that they can be removed.

Software testing is also a process of evaluating the software against the requirements gathered from system specifications and users. Testing is done at every phase of the software development life cycle (SDLC).

1.1.1 TESTING AS A PROCESS

Testing is an integral part of the software development process. Testing itself is related to two other processes called verification and validation, as shown in Figure 1.3. Testing is not validation or verification.

Validation is the process of evaluating a software system or component during, or at the end of, the development cycle in order to determine whether it satisfies specified requirements.

Verification is the process of evaluating a software system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. Verification is usually associated with activities such as inspections and reviews of software deliverables.

Testing is generally described as a group of procedures carried out to evaluate some
aspect of a piece of software.

Testing can be described as a process used for revealing defects in software, and for
establishing that the software has attained a specified degree of quality with respect to
selected attributes.

Testing is defined as a set of activities which are well planned in advance and also
conducted systematically. It is a process of checking the software product in a
predefined way to know if the behavior is same as the expected one.

Note that testing and debugging, or fault localization, are two very different activities.
The debugging process begins after testing has been carried out and the tester has
noted that the software is not behaving as specified.

Debugging, or fault localization is the process of (1) locating the fault or defect, (2)
repairing the code, and (3) retesting the code.

Testing is a process, and as a process it must be managed. Minimally, that means that an organizational policy for testing must be defined and documented, and testing procedures and steps must be defined and documented.

Testing must be planned, testers should be trained, and the process should have associated quantifiable goals that can be measured and monitored. Testing as a process should be able to evolve to a level where there are mechanisms in place for making continuous improvements.

1.2 Black-Box Testing
Black-box testing (also known as data-driven or input/output-driven testing) treats the program as a black box: the goal is to be completely unconcerned about the internal behavior and structure of the program and instead to concentrate on finding circumstances in which the program does not behave according to its specifications.

1.2.1 Equivalence Partitioning

A good test case is one that has a reasonable probability of finding an error; recall also that an exhaustive input test of a program is impossible. When testing a program, you are limited to a small subset of all possible inputs. You therefore want to select the ‘‘right’’ subset, that is, the subset with the highest probability of finding the most errors.

One way of locating this subset is to realize that a well-selected test case also
should have two other properties:

1. It reduces, by more than a count of one, the number of other test cases that
must be developed to achieve some predefined goal of ‘‘reasonable’’ testing.

2. It covers a large set of other possible test cases. That is, it tells us something
about the presence or absence of errors over and above this specific set of input
values.

These properties, although they appear to be similar, describe two distinct considerations. The first implies that each test case should invoke as many different input considerations as possible to minimize the total number of test cases necessary. The second implies that you should try to partition the input domain of a program into a finite number of equivalence classes such that you can reasonably assume that a test of a representative value of each class is equivalent to a test of any other value. That is, if one test case in an equivalence class detects an error, all other test cases in the equivalence class would be expected to find the same error. Conversely, if a test case did not detect an error, we would expect that no other test case in the equivalence class would find an error (unless a subset of the equivalence class falls within another equivalence class, since equivalence classes may overlap one another). These two considerations form a black-box methodology known as equivalence partitioning.

Test-case design by equivalence partitioning proceeds in two steps:

(1) identifying the equivalence classes, and
(2) defining the test cases.

Identifying the Equivalence Classes:


The equivalence classes are identified by taking each input condition (usually a
sentence or phrase in the specification) and partitioning it into two or more groups.
Notice that two types of equivalence classes are identified: valid equivalence classes
represent valid inputs to the program, and invalid equivalence classes represent all
other possible states of the condition.
Given an input or external condition, identifying the equivalence classes is
largely a heuristic process. Follow these guidelines:

1. If an input condition specifies a range of values (e.g., ‘‘the item count can be from 1 to 999’’), identify one valid equivalence class (1 to 999) and two invalid equivalence classes (less than 1 and more than 999).

2. If an input condition specifies the number of values (e.g., ‘‘one through six
owners can be listed for the automobile’’), identify one valid equivalence class
and two invalid equivalence classes (no owners and more than six owners).

3. If an input condition specifies a set of input values, and there is reason to believe that the program handles each differently (‘‘type of vehicle must be BUS, TRUCK, TAXICAB, PASSENGER, or MOTORCYCLE’’), identify a valid equivalence class for each and one invalid equivalence class (‘‘TRAILER,’’ for example).

4. If an input condition specifies a ‘‘must-be’’ situation, such as ‘‘first character of the identifier must be a letter,’’ identify one valid equivalence class (it is a letter) and one invalid equivalence class (it is not a letter).

If there is any reason to believe that the program does not handle elements in an
equivalence class identically, split the equivalence class into smaller equivalence
classes.

Identifying the Test Cases:


The second step is the use of equivalence classes to identify the test cases. The
process is as follows:
1. Assign a unique number to each equivalence class.

2. Until all valid equivalence classes have been covered by (incorporated into)
test cases, write a new test case covering as many of the uncovered valid
equivalence classes as possible.
3. Until your test cases have covered all invalid equivalence classes, write a
test case that covers one, and only one, of the uncovered invalid equivalence
classes.
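The two steps above can be sketched in code. The following Python fragment is a hypothetical sketch (the item-count example comes from guideline 1; all function and variable names are invented): it identifies which equivalence class an input falls into and picks one representative test value per class.

```python
# Hypothetical sketch: equivalence classes for an "item count" input
# whose valid range is 1 to 999 (guideline 1 above).

def classify_item_count(count):
    """Return the equivalence class that an input value falls into."""
    if count < 1:
        return "invalid_below"   # invalid class: fewer than 1
    if count > 999:
        return "invalid_above"   # invalid class: more than 999
    return "valid"               # valid class: 1 to 999

# One representative value per class stands in for the whole class.
representatives = {"valid": 500, "invalid_below": 0, "invalid_above": 1000}
for cls, value in representatives.items():
    assert classify_item_count(value) == cls
```

Each invalid class gets its own test case, per step 3 of the procedure above.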

1.2.2 Boundary Value Analysis


Experience shows that test cases exploring boundary conditions have a higher payoff than test cases that do not. Boundary conditions are those situations directly on, above, and beneath the edges of input equivalence classes and output equivalence classes.
Boundary value analysis differs from equivalence partitioning in two respects:
1. Rather than selecting any element in an equivalence class as being
representative, boundary value analysis requires that one or more elements
be selected such that each edge of the equivalence class is the subject of a
test.
2. Rather than just focusing attention on the input conditions (input space), test
cases are also derived by considering the result space (output equivalence
classes).

A few general guidelines are in order:


1. If an input condition specifies a range of values, write test cases for the ends
of the range, and invalid-input test cases for situations just beyond the ends.
For instance, if the valid domain of an input value is –1.0 to 1.0, write test
cases for the situations –1.0, 1.0, –1.001, and 1.001.
2. If an input condition specifies a number of values, write test cases for the
minimum and maximum number of values and one beneath and beyond
these values. For instance, if an input file can contain 1–255 records, write
test cases for 0, 1, 255, and 256 records.
3. If the input or output of a program is an ordered set (a sequential file, for
example, or a linear list or a table), focus attention on the first and last
elements of the set.
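Guidelines 1 and 2 can be captured in a small helper. The following Python sketch is hypothetical (the function name is invented); it generates the boundary test values for a numeric range.

```python
def range_boundary_values(low, high, step):
    """Boundary-value inputs for a numeric range [low, high]: the two
    ends plus a value just beyond each end (guideline 1 above)."""
    return [low, high, low - step, high + step]

# Valid domain -1.0 to 1.0, probing 0.001 beyond each end:
assert range_boundary_values(-1.0, 1.0, 0.001) == [-1.0, 1.0, -1.001, 1.001]
# Record counts 1 to 255: test 1, 255, 0, and 256 records (guideline 2):
assert range_boundary_values(1, 255, 1) == [1, 255, 0, 256]
```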

Example of boundary value analysis: MTEST is a program that grades multiple-choice examinations. The input is a data file named OCR, with multiple records that are 80 characters long. Per the file specification, the first record is a title used as a title on each output report. The next set of records
describes the correct answers on the exam. These records contain a ‘‘2’’ as the
last character in column 80. In the first record of this set, the number of
questions is listed in columns 1–3 (a value of 1–999). Columns 10–59 contain
the correct answers for questions 1–50 (any character is valid as an answer).
Subsequent records contain, in columns 10–59, the correct answers for
questions 51–100, 101–150, and so on. The third set of records describes the
answers of each student; each of these records contains a ‘‘3’’ in column 80.
For each student, the first record contains the student’s name or number in
columns 1– 9 (any characters); columns 10–59 contain the student’s answers
for questions 1–50. If the test has more than 50 questions, subsequent records
for the student contain answers 51–100, 101–150, and so on, in columns 10–59.
The maximum number of students is 200. The input data are illustrated in
Figure 4.4.

The four output reports are:

1. A report, sorted by student identifier, showing each student’s grade (percentage of answers correct) and rank.
2. A similar report, but sorted by grade.
3. A report indicating the mean, median, and standard deviation of the grades.
4. A report, ordered by question number, showing the percentage of students
answering each question correctly
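Applying boundary value analysis to the stated limits (1–999 questions, a maximum of 200 students), a few candidate test cases can be sketched as data. This Python fragment is illustrative only; note that treating zero students as invalid is an assumption, since the specification states only a maximum.

```python
# Candidate boundary-value cases for MTEST's stated input limits.
mtest_boundary_cases = [
    {"questions": 1,    "students": 1,   "valid": True},   # minimums
    {"questions": 999,  "students": 200, "valid": True},   # maximums
    {"questions": 0,    "students": 1,   "valid": False},  # below minimum questions
    {"questions": 1000, "students": 1,   "valid": False},  # above maximum questions
    {"questions": 50,   "students": 0,   "valid": False},  # no students (assumed invalid)
    {"questions": 50,   "students": 201, "valid": False},  # too many students
]

def mtest_input_valid(case):
    """Check a case against the assumed limits (1-999 questions, 1-200 students)."""
    return 1 <= case["questions"] <= 999 and 1 <= case["students"] <= 200

assert all(mtest_input_valid(c) == c["valid"] for c in mtest_boundary_cases)
```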

1.2.3 Cause-Effect Graphing


One weakness of boundary value analysis and equivalence partitioning is that
they do not explore combinations of input circumstances. The testing of input
combinations is not a simple task because even if you equivalence-partition the input
conditions, the number of combinations usually is astronomical.
Cause-effect graphing aids in selecting, in a systematic way, a high-yield set of
test cases. It has a beneficial side effect in pointing out incompleteness and
ambiguities in the specification.
A cause-effect graph is a formal language into which a natural-language
specification is translated. The graph actually is a digital logic circuit (a combinatorial
logic network), but instead of standard electronics notation, a somewhat simpler
notation is used.

The following process is used to derive test cases:


1. The specification is divided into workable pieces. This is necessary
because cause-effect graphing becomes unwieldy when used on large
specifications.
2. The causes and effects in the specification are identified. A cause is a
distinct input condition or an equivalence class of input conditions. An
effect is an output condition or a system transformation.
3. The semantic content of the specification is analyzed and transformed
into a Boolean graph linking the causes and effects. This is the cause-
effect graph.
4. The graph is annotated with constraints describing combinations of causes and/or effects that are impossible because of syntactic or environmental constraints.
5. By methodically tracing state conditions in the graph, you convert the graph into a limited-entry decision table. Each column in the table represents a test case.
6. The columns in the decision table are converted into test cases.
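Steps 3–5 can be illustrated with a toy example. The following Python sketch is hypothetical (the causes, effects, and their logic are invented, not from any original specification): it traces every combination of causes into a limited-entry decision table.

```python
from itertools import product

# Invented specification: effect E1 occurs when causes C1 AND C2 hold;
# effect E2 occurs when C1 does not hold.
effects = {
    "E1": lambda c1, c2: c1 and c2,
    "E2": lambda c1, c2: not c1,
}

# Trace every combination of causes (step 5); each entry of the table
# corresponds to one column of a limited-entry decision table, i.e.
# one candidate test case.
decision_table = []
for c1, c2 in product([False, True], repeat=2):
    decision_table.append({
        "C1": c1, "C2": c2,
        "E1": effects["E1"](c1, c2),
        "E2": effects["E2"](c1, c2),
    })
```

In practice the annotated constraints of step 4 would remove impossible cause combinations before the table is converted into test cases.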

The basic notation for the graph is shown in Figure 4.5. Think of each node as having the value 0 or 1; 0 represents the ‘‘absent’’ state and 1 represents the ‘‘present’’ state.

1.2.4 Error Guessing


It has often been noted that some people seem to be naturally adept at program testing. Without using any particular methodology such as boundary value analysis or cause-effect graphing, these people seem to have a knack for sniffing out errors.
One explanation for this is that these people are practicing—subconsciously
more often than not—a test-case design technique that could be termed error
guessing. Given a particular program, they surmise—both by intuition and experience
—certain probable types of errors and then write test cases to expose those errors.
It is difficult to give a procedure for the error-guessing technique since it is
largely an intuitive and ad hoc process. The basic idea is to enumerate a list of
possible errors or error-prone situations and then write test cases based on the list. For
instance, the presence of the value 0 in a program’s input is an error-prone situation.
Therefore, you might write test cases for which particular input values have a 0 value
and for which particular output values are forced to 0.

Consider the MTEST program in the section on boundary value analysis. The following additional tests come to mind when using the error-guessing technique:
 Does the program accept ‘‘blank’’ as an answer?
 A type-2 (answer) record appears in the set of type-3 (student) records.
 A record without a 2 or 3 in the last column appears as other than the initial (title) record.
 Two students have the same name or number.
 The number-of-questions field has a negative value.

1.3 White-box Testing


White-box (or logic-driven) testing permits you to examine the internal structure of the program. This strategy derives test data from an examination of the program’s logic (and often, unfortunately, at the neglect of the specification). The goal at this point is to establish, for this strategy, the analog to exhaustive input testing in the black-box approach.

1.3.1 Logic Coverage Testing:


If you back completely away from path testing, it may seem that a worthy goal
would be to execute every statement in the program at least once. Unfortunately, this
is a weak criterion for a reasonable white-box test. This concept is illustrated in Figure
4.1. Assume that this figure represents a small program to be tested.

A stronger logic coverage criterion is known as decision coverage or branch
coverage. This criterion states that you must write enough test cases that each
decision has a true and a false outcome at least once. In other words, each branch
direction must be traversed at least once. Examples of branch or decision statements
are switch-case, do-while, and if-else statements. Multipath GOTO statements qualify
in some programming languages such as Fortran.
Decision coverage usually can satisfy statement coverage. Since every statement is on some subpath emanating either from a branch statement or from the entry point of the program, every statement must be executed if every branch direction is executed. There are, however, at least three exceptions:
 Programs with no decisions.
 Programs or subroutines/methods with multiple entry points. A given statement
might be executed only if the program is entered at a particular entry point.
 Statements within ON-units. Traversing every branch direction will not
necessarily cause all ON-units to be executed.
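The gap between statement coverage and decision coverage can be seen in a small example. The following Python sketch is hypothetical (the function is invented for illustration):

```python
def classify(n):
    """One decision with no else branch."""
    label = "small"
    if n > 100:
        label = "large"
    return label

# The single test classify(500) executes every statement (statement
# coverage) but never takes the false direction of the decision.
# Decision (branch) coverage additionally requires a test such as
# classify(5), which leaves the if-body unexecuted.
assert classify(500) == "large"
assert classify(5) == "small"
```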

1.3.2 Condition Coverage


A criterion that is sometimes stronger than decision coverage is condition coverage. In this case, you write enough test cases to ensure that each condition in a decision takes on all possible outcomes at least once. But, as with decision coverage, this does not always lead to the execution of each statement, so an addition to the criterion is that each point of entry to the program or subroutine, as well as ON-units, be invoked at least once.

Errors in logical expressions are not necessarily revealed by the condition coverage and decision/condition coverage criteria. A criterion that covers this problem, and then some, is multiple-condition coverage.

1.3.3 Multiple-condition coverage


This criterion requires that you write sufficient test cases such that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once.
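As the original pseudo-code is not reproduced in these notes, the following hypothetical Python sketch shows the multiple-condition criterion applied to a decision with two conditions (2² = 4 combinations):

```python
from itertools import product

def alarm(temp_high, pressure_high):
    if temp_high and pressure_high:   # decision with two conditions
        return "shutdown"
    return "ok"

# Multiple-condition coverage: exercise all 2**2 = 4 combinations
# of condition outcomes for the decision.
all_combos = list(product([False, True], repeat=2))
results = [alarm(t, p) for t, p in all_combos]
```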

It should be easy to see that a set of test cases satisfying the multiple condition
criterion also satisfies the decision coverage, condition coverage, and
decision/condition coverage criteria.

1.4 Software Testing Life Cycle (STLC)

The Software Testing Life Cycle (STLC) is the testing process executed in a systematic and planned manner. In the STLC, different activities are carried out to improve the quality of the product. Let us quickly see what stages are involved in a typical Software Testing Life Cycle.

The following steps are involved in the Software Testing Life Cycle (STLC). Each step has its own entry criteria and deliverables:
 Requirement Analysis
 Test Planning
 Test Case Development
 Environment Setup
 Test Execution
 Test Cycle Closure
Requirement Analysis is the very first step in the Software Testing Life Cycle (STLC). In this step the Quality Assurance (QA) team studies the requirements in terms of what will be tested and identifies the testable requirements. If any requirement is conflicting, missing, or not understood, the QA team follows up with the various stakeholders, such as the Business Analyst, System Architect, Client, or Technical Manager/Lead, to gain detailed knowledge of the requirement.

At the end of this stage, the testing team should have a clear understanding of the software requirements and should have identified any potential issues that may impact the testing process.

Test Planning
Test Planning is the most important phase of the software testing life cycle, in which the overall testing strategy is defined. This phase is also called the Test Strategy phase. Typically the Test Manager (or Test Lead, depending on the company) is involved in determining the effort and cost estimates for the entire project. This phase is kicked off once the requirement-gathering phase is completed, and the test plan is prepared based on the requirement analysis. The deliverables of the Test Planning phase are the Test Plan or Test Strategy document and the testing effort estimates. Once the test planning phase is completed, the QA team can start with the test case development activity.

13
Test Case Development
The test case development activity starts once test planning is finished. This is the phase of the STLC in which the testing team writes detailed test cases. Along with the test cases, the testing team also prepares any test data required for testing. Once the test cases are ready, they are reviewed by peer members or the QA lead.

Test Environment Setup: 


Setting up the test environment is a vital part of the STLC; the test environment determines the conditions under which the software is tested. This is an independent activity and can be started in parallel with test case development. The test team is usually not involved in setting up the environment; depending on the company, the developers or the customer create it. Meanwhile, the testing team should prepare smoke test cases to check the readiness of the test environment.

Test Execution
Once test case development and test environment setup are completed, the test execution phase can be kicked off. In this phase the testing team starts executing the test cases based on the test plan and the test cases prepared in the prior step.

Test Closure:
A testing team meeting is called to evaluate the cycle-completion criteria based on test coverage, quality, cost, time, critical business objectives, and the software itself. Test cases and bug reports are analyzed to find the defect distribution by type and severity. Once the test cycle is complete, the test closure report and test metrics are prepared.

1.5 THE V SHAPED SOFTWARE LIFE CYCLE MODEL
The V shaped model is the modified form of the waterfall model with a special
focus on testing activities. The waterfall model allows us to start testing activities
after the completion of the implementation phase. This was popular when testing was
primarily validation oriented. Now, there is a shift in testing activities from validation
to verification where we want to review / inspect every activity of the software
development life cycle. We want to involve the testing persons from the requirement
analysis and specification phase itself. They will review the SRS document and
identify the weak areas, critical areas, ambiguous areas and misrepresented areas. This
will improve the quality of the SRS document and may further minimize the errors.

These verification activities are treated as error preventive exercises and are
applied at requirements analysis and specification phase, high level design phase,
detailed design phase and implementation phase. We not only want to improve the
quality of the end products at all phases by reviews, inspections and walkthroughs, but
also want to design test cases and test plans during these phases. The designing of test
cases after requirement analysis and specification phase, high level design phase,
detailed design phase and implementation phase may help us to improve the quality of
the final product and also reduce the cost and development time.

Graphical Representation:

The shape of the model is like the English letter ‘V’ and it emphasizes testing
activities in every phase. There are two parts of the software development life cycle in
this model i.e. development and testing and are shown in Figure 1.10. We want to
carry out development and testing activities in parallel and this model helps us to do
the same in order to reduce time and cost.

Relationship of Development and Testing Parts:


The development part consists of the first four phases (i.e. requirements
analysis and specification, high level design, detailed design and implementation)
whereas the testing part has three phases (i.e. unit and integration testing, system
testing and acceptance testing). The model establishes the relationship between the
development and testing parts. The acceptance test case design and planning activities
should be conducted along with the software requirements and specifications phase.
Similarly the system test case design and planning activities should be carried out
along with high level design phase.
Unit and integration test case design and planning activities should be carried
out along with the detailed design phase. The development work is to be done by the
development team and testing is to be done by the testing team simultaneously. After
the completion of implementation, we will have the required test cases for every
phase of testing. The only remaining work is to execute these test cases and observe
the responses of the outcome of the execution. This model brings the quality into the
development of our products. The encouragement of writing test cases and test plans
in the earlier phases of the software development life cycle is the real strength of this
model. We require more resources to implement this model as compared to the
waterfall model. This model also suffers from many disadvantages of the waterfall model, such as non-availability of a working version of the product until late in the life cycle and difficulty in accommodating change. It also has limited applicability in today’s interactive software processes.
1.6 Errors, Faults(Defects) and Failures
Error
An error is a situation that happens when the development team or a developer fails to understand a requirement definition, and that misunderstanding gets translated into buggy code. The term is mainly used by developers.

 Errors are introduced through wrong logic, syntax, or loops and can impact the end-user experience.
 An error is detected by comparing the expected results with the actual results.
 Errors arise for several reasons, such as design issues, coding issues, or system-specification issues, and lead to problems in the application.

Fault/Defects
Sometimes, due to factors such as a lack of resources or not following proper steps, a fault occurs in software, meaning that the logic needed to handle an error condition was not incorporated into the application. This is an undesirable situation, and it mainly happens due to invalid documented steps or a lack of data definitions.
 A fault is unintended behavior in an application program.
 It causes a warning in the program.
 If a fault is left untreated, it may lead to failure of the deployed code.
 A minor fault may, in some cases, lead to a severe error.
 There are several ways to prevent faults, such as sound programming techniques, development methodologies, peer review, and code analysis.

Failure
Failure is the accumulation of several defects that ultimately leads to software failure, resulting in the loss of information in critical modules and making the system unresponsive. Such situations happen rarely, because before releasing a product all possible scenarios and test cases for the code are simulated. Failures are detected by end users when they face a particular issue in the software.
 Failure can happen due to human error or can be caused intentionally by an individual.
 It is a term that applies after the production stage of the software.
 It can be identified in the application when the defective part is executed.
A simple diagram depicts the relationship between Bug, Defect, Fault, and Failure.
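The error → fault → failure chain can also be illustrated in code. This is a hypothetical Python sketch (the requirement and function are invented): the developer's error introduces a fault that stays latent until a particular input triggers a failure.

```python
# Invented requirement: a person aged 18 or older is an adult.
# The developer misreads "18 or older" (the error) and writes:
def is_adult(age):
    return age > 18          # the fault: should be age >= 18

# The fault stays latent for most inputs; a failure is observed only
# when the faulty code runs with the boundary input age == 18:
assert is_adult(30) is True     # no failure observed
assert is_adult(10) is False    # no failure observed
assert is_adult(18) is False    # failure: the requirement expects True
```

Note that this is also a boundary-value fault: only a test at the edge of the valid range exposes it.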

1.7 SOFTWARE TESTING PRINCIPLES

Continuing with the major premise of this chapter, that the most important considerations
in software testing are issues of psychology, we can identify a set of vital testing
principles or guidelines.

Principle 1: A necessary part of a test case is a definition of the expected output or result.
This principle, though obvious, when overlooked is the cause of one of the most
frequent mistakes in program testing. Again, it is something that is based on human
psychology. If the expected result of a test case has not been predefined, chances are that
a plausible, but erroneous, result will be interpreted as a correct result because of the
phenomenon of ‘‘the eye seeing what it wants to see.’’ In other words, in spite of the
proper destructive definition of testing, there is still a subconscious desire to see the correct result. One way of combating this is to encourage a detailed examination of all output by precisely spelling out, in advance, the expected output of the program.
Therefore, a test case must consist of two components:
1. A description of the input data to the program.
2. A precise description of the correct output of the program for that set of input
data.
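This principle can be expressed directly in code: each test case carries its expected output, defined before execution. The following is a hypothetical Python sketch, using the built-in abs as a stand-in for the program under test:

```python
# Each test case pairs an input with its expected output, written
# down before the test is run (Principle 1).
test_cases = [
    {"input": -5, "expected": 5},
    {"input": 0,  "expected": 0},
    {"input": 7,  "expected": 7},
]

def run_tests(fn, cases):
    """Compare actual output against the predefined expected output."""
    return [fn(c["input"]) == c["expected"] for c in cases]

# The built-in abs stands in for the program under test.
results = run_tests(abs, test_cases)
```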

Principle 2: A programmer should avoid attempting to test his or her own program.

Any writer knows—or should know—that it’s a bad idea to attempt to edit or proofread
his or her own work. They know what the piece is supposed to say, hence may not
recognize when it says otherwise. And they really don’t want to find errors in their own
work. The same applies to software authors.

Principle 3: A programming organization should not test its own programs.


The argument here is similar to that made in the previous principle. A project or
programming organization is, in many senses, a living organization with psychological
problems similar to those of individual programmers.

Principle 4: Any testing process should include a thorough inspection of the results
of each test.
This is probably the most obvious principle, but again it is something that is often
overlooked. We’ve seen numerous experiments that show many subjects failed to detect
certain errors, even when symptoms of those errors were clearly observable on the output
listings. Put another way, errors that are found in later tests were often missed in the
results from earlier tests.

Principle 5: Test cases must be written for input conditions that are invalid and
unexpected, as well as for those that are valid and expected.
There is a natural tendency when testing a program to concentrate on the valid and
expected input conditions, to the neglect of the invalid and unexpected conditions.

Principle 6: Examining a program to see if it does not do what it is supposed to do is only half the battle; the other half is seeing whether the program does what it is not supposed to do.
This is a corollary to the previous principle. Programs must be examined for unwanted side effects. For instance, a payroll program that produces the correct paychecks is still an erroneous program if it also produces extra checks for nonexistent employees, or if it overwrites the first record of the personnel file.

Principle 7: Avoid throwaway test cases unless the program is truly a throwaway
program.
This problem is seen most often with the use of interactive systems to test programs. A
common practice is to sit at a terminal and invent test cases on the fly, and then send
these test cases through the program. The major issue is that test cases represent a
valuable investment that, in this environment, disappears after the testing has been
completed. Whenever the program has to be tested again (e.g., after correcting an error or
making an improvement), the test cases must be reinvented. Saving test cases and
running them again after changes to other components of the program is known as
regression testing.
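Saved test cases make regression testing mechanical. The following hypothetical Python sketch (the function and cases are invented) re-runs a saved suite and reports any case that no longer passes:

```python
# A saved suite: each entry is (arguments, expected result).
saved_cases = [((200, 0.5), 100.0), ((80, 0.25), 60.0), ((50, 0.0), 50.0)]

def price_after_discount(price, rate):
    return price * (1 - rate)

def regression_run(fn, cases):
    """Re-execute every saved case; return those that now fail."""
    return [(args, want) for args, want in cases if fn(*args) != want]

# After any change to price_after_discount, rerun the saved suite:
failures = regression_run(price_after_discount, saved_cases)
```

Had the cases been invented at a terminal and discarded, each retest would mean reinventing them from scratch.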

Principle 8: Do not plan a testing effort under the tacit assumption that no errors
will be found.
This is a mistake project managers often make and is a sign of the use of the
incorrect definition of testing—that is, the assumption that testing is the process of
showing that the program functions correctly. Once again, the definition of testing is the
process of executing a program with the intent of finding errors.

Principle 9: The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section.
At first glance this concept may seem nonsensical, but it is a phenomenon present
in many programs. For instance, if a program consists of two modules, classes, or
subroutines, A and B, and five errors have been found in module A, and only one error
has been found in module B, and if module A has not been purposely subjected to a more
rigorous test, then this principle tells us that the likelihood of more errors in module A is
greater than the likelihood of more errors in module B.

Principle 10: Testing is an extremely creative and intellectually challenging task.


It is probably true that the creativity required in testing a large program exceeds
the creativity required in designing that program. We already have seen that it is
impossible to test a program sufficiently to guarantee the absence of all errors.
Methodologies discussed later in this book help you develop a reasonable set of test cases
for a program, but these methodologies still require a significant amount of creativity.

1.8 Program Inspections


A code inspection is a set of procedures and error-detection techniques for
group code reading. Most discussions of code inspections focus on the procedures,
forms to be filled out, and so on. Here, after a short summary of the general
procedure, we will focus on the actual error-detection techniques.
Inspection Team: An inspection team usually consists of four people. The first
of the four plays the role of moderator, which in this context is tantamount to that of
a quality-control engineer.

Moderator duties include:

 Distributing materials for, and scheduling, the inspection session.
 Leading the session.
 Recording all errors found.
 Ensuring that the errors are subsequently corrected.
The second team member is the programmer. The remaining team members
usually are the program’s designer (if different from the programmer) and a test
specialist. The specialist should be well versed in software testing and familiar with
the most common programming errors.
Several days in advance of the inspection session, the moderator distributes the
program’s listing and design specification to the other participants. The participants
are expected to familiarize themselves with the material prior to the session.

During the session, two activities occur:


1. The programmer narrates, statement by statement, the logic of the program.
During the discourse, other participants should raise questions, which should be
pursued to determine whether errors exist. It is likely that the programmer, rather
than the other team members, will find many of the errors identified during this
narration. In other words, the simple act of reading aloud a program to an
audience seems to be a remarkably effective error-detection technique.
2. The program is analyzed with respect to checklists of historically common
programming errors.

Benefits of the Inspection Process


The inspection process has several beneficial side effects, in addition to its
main effect of finding errors. For one, the programmer usually receives valuable
feedback concerning programming style, choice of algorithms, and programming
techniques. The other participants gain in a similar way by being exposed to another
programmer’s errors and programming style. In general, this type of software testing
helps reinforce a team approach to this particular project and to projects that involve
these participants in general. Reducing the potential for the evolution of an adversarial
relationship, in favor of a cooperative, team approach to projects, can lead to more
efficient and reliable program development.
Finally, the inspection process is a way of identifying early the most error-prone sections of the program, helping to focus attention more directly on these sections during the computer-based testing processes.

1.9 Stages of Software Testing:


Software testing is generally carried out at different levels. There are four such levels, namely unit testing, integration testing, system testing and acceptance testing. The first three levels of testing are done by the testers, and the last level (acceptance testing) is done by the customer(s)/user(s). Each level has specific testing objectives.

1.9.1 Unit Testing:


A unit is the smallest testable piece of software, which may consist of hundreds or even just a few lines of source code, and generally represents the result of the work of one or a few developers. The purpose of unit test cases is to ensure that the unit satisfies its functional specification and/or that its implemented structure matches the intended design structure.
A unit may not be completely independent: it may call a few units and also be called by one or more units. We may have to write additional source code to execute a unit in isolation. For example, a unit X may call a unit Y, and the unit Y may in turn call units A and B, as shown in Figure 8.2(a). To execute the unit Y independently, we must write additional source code that takes over the roles of X, A and B. The additional source code that handles the activities of the calling unit X is called a 'driver', and the additional source code that handles the activities of the called units A and B is called a 'stub'. The complete additional source code written for the design of stubs and drivers is called scaffolding.
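A minimal sketch of this idea in Python, assuming hypothetical units Y, A and B: the `driver` function plays the role of the calling unit X, and stubbed versions of A and B replace the real called units.

```python
# Unit under test: Y depends on two called units, A and B,
# which are passed in so they can be replaced by stubs.
def unit_y(value, unit_a, unit_b):
    """Combine the results of the called units A and B."""
    return unit_a(value) + unit_b(value)

# Stubs: minimal stand-ins for the real called units A and B.
def stub_a(value):
    return value * 2          # fixed, predictable behaviour

def stub_b(value):
    return value + 1

# Driver: plays the role of the calling unit X, invoking Y
# with the stubs and checking the result.
def driver():
    result = unit_y(10, stub_a, stub_b)
    assert result == 31       # 10*2 + (10+1)
    return result

print(driver())
```

Together the driver and stubs form the scaffolding that lets Y be exercised before X, A and B exist.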

1.9.2 Integration Testing


A software program may have many units. We test units independently during unit testing after writing the required stubs and drivers. When we combine two units, we need to test the interfaces between these units. We combine two or more units because they share some relationship. This relationship is represented by an interface and is known as coupling.
Coupling is a measure of the degree of interdependence between units.
Two units with high coupling are strongly connected and thus, dependent on each
other. Two units with low coupling are weakly connected and thus have low
dependency on each other. Hence, highly coupled units are heavily dependent on
other units and loosely coupled units are comparatively less dependent on other units

as shown in Figure 8.3

24
.

A design with high coupling may have more errors. Loose coupling minimizes
the interdependence, and some of the steps to minimize coupling are given as:

(i) Pass only data, not control information.
(ii) Avoid passing undesired data.
(iii) Minimize parent/child relationships between calling and called units.
(iv) Minimize the number of parameters to be passed between two units.
(v) Avoid passing complete data structures.
(vi) Do not declare global variables.
(vii) Minimize the scope of variables.
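Steps (i), (iv) and (v) can be illustrated with a small sketch; both functions here are hypothetical. The first version passes a control flag and a whole record, the second passes only the data actually needed.

```python
# Higher coupling: the called unit receives a control flag and a
# whole record, so it must know about the caller's internals.
def format_name_coupled(record, uppercase_flag):
    name = record["first"] + " " + record["last"]
    return name.upper() if uppercase_flag else name

# Lower coupling: pass only the data actually needed, with no
# control information; the caller decides any post-processing.
def format_name(first, last):
    return first + " " + last

result = format_name("Ada", "Lovelace")
print(result)          # the caller applies .upper() itself if needed
```

The second interface is easier to test in isolation precisely because it depends on nothing but its two parameters.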

A good design should have low coupling, and thus interfaces become very important. When interfaces are important, their testing also becomes important. In integration testing, we focus on the issues related to the interfaces amongst units. There are several integration strategies, which have little basis in any rational methodology, and they are given in Figure 8.4.
Top-down integration starts from the main unit and keeps adding all called units of the next level. This portion should be tested thoroughly, focusing on interface issues. After completing integration testing at this level, we add the next level of units, and so on, until we reach the lowest-level units (leaf units). No drivers are required; only stubs need to be designed.

In bottom-up integration, we start from the bottom (i.e. from the leaf units) and keep adding upper-level units until we reach the top (i.e. the root unit). No stubs are needed, but drivers must be written for the units under test.

A sandwich strategy runs from top and bottom concurrently, depending upon
the availability of units and may meet somewhere in the middle.
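Whatever the strategy, an integration test ultimately exercises the real interface between combined units rather than a stubbed one. A minimal sketch, with both units hypothetical:

```python
# Lower-level (leaf) unit: parses a "key=value" string.
def parse_pair(text):
    key, _, value = text.partition("=")
    return key.strip(), value.strip()

# Upper-level unit: builds a dictionary by calling parse_pair.
def parse_config(lines):
    return dict(parse_pair(line) for line in lines)

# Bottom-up integration: the leaf unit is exercised first, then the
# combined pair is tested through the real interface (no stubs).
assert parse_pair("host = localhost") == ("host", "localhost")
config = parse_config(["host = localhost", "port = 8080"])
assert config == {"host": "localhost", "port": "8080"}
print(config)
```

A defect in how `parse_config` passes data to `parse_pair` would surface here even if each unit passed its own unit tests.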

1.9.3 System Testing
We perform system testing after the completion of unit and integration testing. We test the complete software along with its expected environment. We generally use functional testing techniques, although a few structural testing techniques may also be used. A system is defined as a combination of the software, hardware and other associated parts that together provide product features and solutions. System testing ensures that each system function works as expected, and it also tests for non-functional requirements like performance, security, reliability, stress, load, etc. This is the only phase of testing which tests both functional and non-functional requirements of the system.
A team of testers performs system testing under the supervision of a test team leader. We also review all associated documents and manuals of the software. This verification activity is equally important and may improve the quality of the final product. Utmost care should be taken with the defects found during the system testing phase. Progress in system testing also builds confidence in the development team, as this is the first phase in which the complete product is tested with a specific focus on the customer's expectations. After the completion of this phase, customers are invited to test the software.
The differences between functional and non-functional testing are summarized below:

Testing aspect            Functional testing                      Non-functional testing
Involves                  Product features and functionality      Quality factors
Tests                     Product behavior                        Behavior and experience
Result conclusion         Simple steps written to check           Huge data collected and analyzed
                          expected results
Results vary due to       Product implementation                  Product implementation, resources
                                                                  and configurations
Testing focus             Defect detection                        Qualification of product
Knowledge required        Product and domain                      Product, domain, design,
                                                                  architecture, statistical skills
Failures normally due to  Code                                    Architecture, design and code
Testing phase             Unit, component, integration, system    System
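As a small illustration of this difference, a functional check asserts on specified behaviour, while a non-functional (performance) check measures the behaviour and compares it against a quality budget. The function and the 50 ms limit here are hypothetical:

```python
import time

# Hypothetical unit: sum of squares of integers below n.
def sum_of_squares(n):
    return sum(i * i for i in range(n))

# Functional check: does the function behave as specified?
assert sum_of_squares(4) == 14          # 0 + 1 + 4 + 9

# Non-functional (performance) check: measure the behaviour and
# compare it against a quality budget, here an assumed 50 ms limit.
start = time.perf_counter()
sum_of_squares(100_000)
elapsed = time.perf_counter() - start
assert elapsed < 0.05, f"too slow: {elapsed:.4f}s"
print("functional and performance checks passed")
```

Note how the non-functional check needs measured data and a judgement threshold, matching the "huge data collected and analyzed" row above.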

