
TESTING

TESTING:
"... the process of exercising or evaluating a system or system component by
manual or automated means to verify that it satisfies specified requirements or to identify
differences between expected and actual results ..."

Basic Methods
1.White Box Testing:
White box testing is performed to reveal problems with the internal structure of a
program. This requires the tester to have detailed knowledge of the internal structure. A
common goal of white-box testing is to ensure a test case exercises every path through a
program.
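The goal of exercising every path can be sketched with a small hypothetical function; the function and its tests below are invented for illustration. With two independent branches there are four paths, so four test cases achieve full path coverage:

```python
# A minimal sketch of white-box path testing. The function under test
# is hypothetical; its two independent branches yield four paths.

def classify(n):
    """Return a label based on the sign and parity of n."""
    if n < 0:                      # branch 1
        sign = "negative"
    else:
        sign = "non-negative"
    if n % 2 == 0:                 # branch 2
        parity = "even"
    else:
        parity = "odd"
    return f"{sign} {parity}"

# Four test cases, chosen by inspecting the code, exercise all four paths.
assert classify(-2) == "negative even"
assert classify(-3) == "negative odd"
assert classify(4) == "non-negative even"
assert classify(5) == "non-negative odd"
```

Note that choosing these inputs required reading the implementation, which is exactly what makes this a white-box technique.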

2.Black Box Testing:
Black box tests are performed to assess how well a program meets its
requirements, looking for missing or incorrect functionality. Functional tests typically
exercise code with valid or nearly valid input for which the expected output is known.
This includes concepts such as 'boundary values'.
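Boundary-value testing can be illustrated with a hypothetical specification: scores from 0 to 100 are valid, and 60 or above is a pass. The tests are derived from the specification alone, probing each boundary with valid and nearly valid input:

```python
# A sketch of black-box boundary-value testing. The grading function
# and its specification (0-100 valid, >= 60 passes) are invented.

def passes(score):
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return score >= 60

# Probe each boundary of the specification.
assert passes(60) is True      # lower boundary of the pass band
assert passes(59) is False     # just below the pass band
assert passes(0) is False      # lower limit of valid input
assert passes(100) is True     # upper limit of valid input
for invalid in (-1, 101):      # nearly valid input just outside the range
    try:
        passes(invalid)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

No knowledge of the implementation is used; only the stated requirements determine the expected outputs.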

3.Unit Testing:
Unit testing exercises a unit in isolation from the rest of the system. A unit is
typically a function or small collection of functions (libraries, classes), implemented by a
single developer.
The main characteristic that distinguishes a unit is that it is small enough to test
thoroughly, if not exhaustively. Developers are normally responsible for the testing of
their own units and these are normally white box tests. The small size of units allows a
high level of code coverage. It is also easier to locate and remove bugs at this level of
testing.
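A unit test in this sense might look like the following sketch, using Python's standard unittest framework; the unit under test is a hypothetical single function:

```python
# A sketch of unit testing: one small function exercised in isolation
# with Python's unittest framework. The unit itself is hypothetical.
import unittest

def word_count(text):
    """The unit under test: count whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_empty(self):
        self.assertEqual(word_count(""), 0)

    def test_multiple_words(self):
        self.assertEqual(word_count("unit tests are small"), 4)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)

# Run the suite programmatically so the interpreter stays alive.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the unit is so small, every statement is covered by three cases, illustrating the high coverage the text describes.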

4.Integration Testing:
One of the most difficult aspects of software development is the integration
and testing of large, untested sub-systems. The integrated system frequently fails in
significant and mysterious ways, and it is difficult to fix.

Integration testing exercises several units that have been combined to form a module,
subsystem, or system. Integration testing focuses on the interfaces between units, to make
sure the units work together. The nature of this phase is certainly 'white box', as we must
have a certain knowledge of the units to recognize if we have been successful in fusing
them together in the module.

There are two types of Integration Testing:


1.Non-incremental Integration:
This uses the “Big bang” approach. All modules are combined in advance, and the
entire program is tested as a whole.
2.Incremental Integration:
This is the antithesis of the “Big bang” approach. The program is constructed
and tested in small segments, where errors are easier to isolate and correct.

There are three main approaches to integration testing: top-down, bottom-up and 'big
bang'. Top-down combines, tests, and debugs top-level routines that become the test
'harness' or 'scaffolding' for lower-level units. Bottom-up combines and tests low-level
units into progressively larger modules and subsystems. 'Big bang' testing is,
unfortunately, the prevalent integration test 'method'. This is waiting for all the module
units to be complete before trying them out together.

(From [1])

Bottom-up
Major features:
• Allows early testing aimed at proving feasibility and practicality of
particular modules.
• Modules can be integrated in various clusters as desired.
• Major emphasis is on module functionality and performance.
Advantages:
• No test stubs are needed.
• It is easier to adjust manpower needs.
• Errors in critical modules are found early.
Disadvantages:
• Test drivers are needed.
• Many modules must be integrated before a working program is available.
• Interface errors are discovered late.
Comments: At any given point, more code has been written and tested than with
top-down testing. Some people feel that bottom-up is a more intuitive test
philosophy.

Top-down
Major features:
• The control program is tested first.
• Modules are integrated one at a time.
• Major emphasis is on interface testing.
Advantages:
• No test drivers are needed.
• The control program plus a few modules forms a basic early prototype.
• Interface errors are discovered early.
• Modular features aid debugging.
Disadvantages:
• Test stubs are needed.
• The extended early phases dictate a slow manpower buildup.
• Errors in critical modules at low levels are found late.
Comments: An early working program raises morale and helps convince management
that progress is being made. It is hard to maintain a pure top-down strategy in
practice.

Integration tests can rely heavily on stubs or drivers. Stubs stand in for unfinished
subroutines or subsystems. A stub might consist of a function header with no body, or it
may read and return test data from a file, return hard-coded values, or obtain data from
the tester. Stub creation can be a time-consuming part of testing.
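The idea of a hard-coded-value stub can be sketched as follows; the tax service, region codes, and function names are all invented for illustration:

```python
# A sketch of a stub standing in for an unfinished subsystem, so the
# calling module can be integration-tested now. All names are hypothetical.

def tax_rate_stub(region):
    """Stub for a tax-lookup subsystem that is not finished yet:
    returns hard-coded values instead of querying the real service."""
    rates = {"EU": 0.20, "US": 0.07}
    return rates.get(region, 0.0)

def price_with_tax(net, region, rate_lookup):
    """Module under test: depends on the tax subsystem only through
    the rate_lookup interface."""
    return round(net * (1 + rate_lookup(region)), 2)

# The integration test exercises the interface between the two units,
# with the stub playing the role of the missing subsystem.
assert price_with_tax(100.0, "EU", tax_rate_stub) == 120.0
assert price_with_tax(100.0, "XX", tax_rate_stub) == 100.0
```

When the real tax subsystem is finished, it replaces the stub behind the same interface and the tests are rerun.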

The cost of drivers and stubs in the top-down and bottom-up testing methods is what
drives the use of 'big bang' testing. This approach waits for all the modules to be
constructed and tested independently, and when they are finished, they are integrated all
at once. While this approach is very quick, it frequently reveals more defects than the
other methods. These errors have to be fixed and as we have seen, errors that are found

'later' take longer to fix. In addition, like bottom up, there is really nothing that can be
demonstrated until later in the process.

5.Alpha or External Function Testing:
The 'external function test' is a black box test to verify that the system
correctly implements specified functions. This phase is sometimes known as an alpha
test. Testers will run tests that they believe reflect the end use of the system.

6.System Testing:
The 'system test' is a more robust version of the external function test, and is
also sometimes called an alpha test. The essential difference between 'system' and
'external function' testing is the test platform. In system testing, the platform must be as
close as possible to production use in the customers’ environment, including factors such
as hardware setup and database size and complexity. By replicating the target
environment, we can more accurately test
'softer' system features (performance, security and fault-tolerance).

Because of the similarities between the test suites in the external function and system test
phases, a project may leave one of them out. It may be too expensive to replicate the user
environment for the system test, or we may not have enough time to run both.

7.Acceptance or Beta Testing:
An acceptance (or beta) test is an exercise of a completed system by a
group of end users to determine whether the system is ready for deployment. Here the
system will receive more realistic testing than in the 'system test' phase, as the users have
a better idea how the system will be used than the system testers.

8.Regression Testing:
Regression testing is an expensive but necessary activity performed on
modified software to provide confidence that changes are correct and do not adversely
affect other system components.

It can be difficult to determine how much re-testing is needed, especially near the end of
the development cycle. Most industrial testing is done via test suites: automated sets of
procedures designed to exercise all parts of a program and to show defects. While the
original suite could be used to test the modified software, this might be very time-
consuming. A regression test selection technique chooses, from an existing test set, the
tests that are deemed necessary to validate modified software.

There are three main groups of test selection approaches in use:

• Minimization approaches seek to satisfy structural coverage criteria by identifying
a minimal set of tests that must be rerun.

• Coverage approaches are also based on coverage criteria, but do not require
minimization of the test set. Instead, they seek to select all tests that exercise
changed or affected program components.
• Safe approaches instead attempt to select every test that may cause the modified
program to produce different output than the original program.
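A coverage-based selection can be sketched with a simple data structure; the test names, function names, and coverage map below are invented for illustration:

```python
# A minimal sketch of coverage-based regression test selection: given a
# record of which functions each test exercises, rerun only the tests
# that touch a changed function. All names and data are hypothetical.

coverage = {
    "test_login":    {"authenticate", "hash_password"},
    "test_report":   {"render_report", "format_date"},
    "test_password": {"hash_password"},
}

def select_tests(changed, coverage):
    """Return the tests whose covered functions intersect the change set."""
    return sorted(t for t, funcs in coverage.items() if funcs & changed)

# Only the tests that exercise the modified function are selected for rerun.
assert select_tests({"hash_password"}, coverage) == ["test_login", "test_password"]
```

Real selection tools work at a finer granularity (statements, branches, or dataflow), but the principle of intersecting coverage with the change set is the same.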

A frequently asked question about regression testing is 'The developer says this problem
is fixed. Why do I need to re-test?', to which the answer is 'The same person probably told
you it worked in the first place'.

Comparison between White box & Black box testing :

White box testing is concerned only with testing the software product; it cannot guarantee
that the complete specification has been implemented. Black box testing is concerned
only with testing the specification; it cannot guarantee that all parts of the implementation
have been tested. Thus black box testing is testing against the specification and will
discover faults of omission, indicating that part of the specification has not been fulfilled.
White box testing is testing against the implementation and will discover
faults of commission, indicating that part of the implementation is faulty. In order to fully
test a software product both black and white box testing are required.

White box testing is much more expensive than black box testing. It requires the source
code to be produced before the tests can be planned and is much more laborious in the
determination of suitable input data and in determining whether the software is or is not
correct. The advice given is to start test planning with a black box test approach as soon
as the specification is available. White box planning should commence as soon as all
black box tests have been successfully passed, with the production of flowgraphs and
determination of paths. The paths should then be checked against the black box test plan
and any additional required test runs determined and applied.

The consequences of test failure at this stage may be very expensive. A failure of a white
box test may result in a change which requires all black box testing to be repeated and the
re-determination of the white box paths. The cheaper option is to regard the process of
testing as one of quality assurance rather than quality control.