
Software Engg & Case Tools

PGIT104
Module III
Testing
Why Testing?
• Testing finds errors.
• An early agreement upon the test plan at the
requirement stage is important.
• Specification -> Test plan -> modify test plan ->
Specification.
Test case
• A test case is described through a choice of
inputs for the unit to be tested.
• For example :
What are the test cases for a given function
“ int f(int x, int y)” that compares two numbers
and reports the maximum?
What criteria should be used?
1. Either the code should be tested based on
what is required from the external point of
view. (Black Box Testing)
2. Or, it should be tested from an insider’s view of
the given artifact, i.e. the code is examined to
design the test cases. (White Box Testing)
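For the maximum function above, a black-box test suite can be derived purely from the specification. A minimal sketch in Python; the function body shown is only a plausible implementation assumed for illustration, since black-box cases are chosen without looking at it:

```python
def f(x, y):
    """Unit under test: report the maximum of two numbers."""
    return x if x >= y else y

# Black-box test cases: chosen purely from the specification
# ("report the maximum"), not from the code.
black_box_cases = [
    ((3, 5), 5),    # x < y
    ((5, 3), 5),    # x > y
    ((4, 4), 4),    # x == y
    ((-2, -7), -2), # negative inputs
]

for (x, y), expected in black_box_cases:
    assert f(x, y) == expected
print("all black-box cases passed")
```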
Black Box Testing
• Black box testing is also called functional
testing, because it mainly examines the
functionality of the unit, i.e. how it behaves
when used as a black box.
• Tests the artifact from an external point of view.
• Specifications are used to generate test data.
• It checks for performance and behavioural errors.
• It is applied at the final stage of testing.
Black Box Testing
• Different methods of Black box Testing-
1. Graph based Testing
2. Equivalence Partitioning
3. Boundary Value Analysis
Boundary Value Analysis
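Equivalence partitioning and boundary value analysis can be sketched together on a hypothetical validation function whose legal input range is 0 to 100; the function name and the range are assumptions for illustration, not from the slides:

```python
def is_valid_marks(m):
    """Hypothetical unit under test: accepts marks in the range 0..100."""
    return 0 <= m <= 100

# Equivalence partitioning: one representative value per class.
assert not is_valid_marks(-10)   # class: below range (invalid)
assert is_valid_marks(50)        # class: in range (valid)
assert not is_valid_marks(500)   # class: above range (invalid)

# Boundary value analysis: test at and just around each boundary.
boundary_cases = [(-1, False), (0, True), (1, True),
                  (99, True), (100, True), (101, False)]
for m, expected in boundary_cases:
    assert is_valid_marks(m) == expected
print("all partition and boundary cases passed")
```

Errors cluster at boundaries (e.g. an off-by-one in `0 <= m`), which is why both edges of every partition are probed.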
White Box Testing
• White box testing is also called structural testing.
• The artifact is tested from an internal or
implementation point of view.
• Cannot detect missing features.
• Coverage measures are used-
1. Statement coverage.
– Each statement is covered in testing
2. Branch coverage:
– Each branch is covered. (For example in “if then else”)
3. Path oriented testing.
– Test data are selected such that chosen paths
through the program are covered.
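The coverage measures can be made concrete on the maximum function from earlier. A sketch, assuming the if/else implementation shown below; for this small function, the two test cases happen to achieve both statement and branch coverage:

```python
def f(x, y):
    """Unit under test: the "if" below has two branches."""
    if x >= y:
        m = x   # reached only when x >= y
    else:
        m = y   # reached only when x < y
    return m

# Statement coverage requires every statement (both assignments)
# to execute at least once; branch coverage requires both
# outcomes of the "if" to be taken.
assert f(7, 2) == 7  # takes the "then" branch (x >= y)
assert f(2, 7) == 7  # takes the "else" branch (x < y)
print("statement and branch coverage achieved")
```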
White Box Testing
• Two different methods of white box testing
are-
1. Basis Path Testing.
2. Control Structure Testing.
What should be tested?
• Entire System at once?
• Or, every component or unit before the
system is integrated?
Levels of Testing
1. Module testing (Unit Testing)
• Each module is individually tested
2. Integration level (Incremental Testing)
• Modules that depend on each other are tested
collectively
3. System testing (Evaluation testing)
• The entire system is tested.
4. Acceptance test (Live Test)
• For acceptance of software and subsequent payments.
Unit Testing
• It is a level of software testing where individual
units/components of the software are tested
independently of the rest of the software.
• The actual results are compared with the
results defined in the specification and design
documents.
• Fault isolation and debugging become easier
in unit testing.
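A minimal unit-test sketch using Python’s unittest module, again taking the maximum function as the assumed unit under test; each test compares the actual result with the expected result from the specification:

```python
import unittest

def f(x, y):
    """Unit under test: return the maximum of x and y."""
    return x if x >= y else y

class TestMaximum(unittest.TestCase):
    # Actual results are compared with the results
    # defined in the specification ("report the maximum").
    def test_first_larger(self):
        self.assertEqual(f(9, 4), 9)

    def test_second_larger(self):
        self.assertEqual(f(4, 9), 9)

    def test_equal_inputs(self):
        self.assertEqual(f(5, 5), 5)

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMaximum)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all unit tests passed:", result.wasSuccessful())
```

A failing test pinpoints the fault to this one unit, which is why fault isolation is easier at this level.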
What to test during unit testing
• Module interface
– Tested to check the correct flow of information
into and out of the module.
• Independent/basic paths
– Tested to ensure that all statements in the module
are executed at least once during testing.
• Boundary conditions
– Output/computation at boundary value is correct.
• Error handling paths.
Role of Driver and Stub in Unit Testing
• Driver
– Main program that accepts test case data, passes data
to the components to be tested and prints relevant
results.
• Stub
– Subordinate modules that are called by the module to
be tested.
– It is a dummy subprogram that does minimal data
manipulation, provides verification of entry, and
returns control to the module under test.
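The driver/stub arrangement can be sketched as follows; `compute_result`, the pass mark of 40, and the stub’s canned marks are all hypothetical names and values invented for illustration:

```python
# Module under test: computes a student's result by calling a
# subordinate module to fetch marks (not yet available, so stubbed).
def compute_result(roll_no, fetch_marks):
    marks = fetch_marks(roll_no)  # call into the subordinate module
    return "PASS" if marks >= 40 else "FAIL"

# Stub: dummy subordinate that does minimal data manipulation
# and returns control with a canned value.
def fetch_marks_stub(roll_no):
    return 75 if roll_no == 1 else 20

# Driver: main program that feeds test-case data to the module
# under test, checks the results, and reports them.
def driver():
    results = {}
    for roll_no, expected in [(1, "PASS"), (2, "FAIL")]:
        actual = compute_result(roll_no, fetch_marks_stub)
        assert actual == expected
        results[roll_no] = actual
    return results

print(driver())  # → {1: 'PASS', 2: 'FAIL'}
```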
Integration testing
• It is a level of software testing where individual
units are combined and tested as a group.
• Here, unit-tested modules are taken one by one
and integrated incrementally.
• It is performed to expose defects in the
interfaces and in the interactions between
integrated components or systems. 
• Test drivers and test stubs are used to assist in
Integration Testing.
Why is Integration testing required?
• Data may be lost during interfacing.
• Global data may also cause problems.
• Sub-functions may not work properly when
combined.
Types of Integration Testing
• Big Bang testing
• Top Down testing
• Bottom Up testing
• Sandwich/Hybrid testing
Big Bang testing
• Big Bang is an approach to Integration Testing where all or
most of the units are combined together and tested at one go.
• This approach is taken when the testing team receives the
entire software in a bundle.
• The main problem with this approach is that once an error is
found during the integration testing, it is very difficult to
localize the error as the error may potentially belong to any of
the modules being integrated.
• So what is the difference between Big Bang Integration
Testing and System Testing?
– The former tests only the interactions between the units while the
latter tests the entire system.
Top Down Integration testing
• It is an approach to Integration Testing where
top-level units are tested first and lower level
units are tested step by step after that.
• This approach is taken when a top-down
development approach is followed.
• The two traversal approaches for this are DFS
(Depth First Search) and BFS (Breadth First Search).
Steps of Top Down Integration testing
1. Starting with the main module, stubs are
substituted for all the components subordinate to
it.
2. Depending on the type of integration stubs are
replaced one at a time with the actual components.
3. Tests are conducted as each component is
integrated.
4. On completion another stub is replaced with actual
component.
5. Regression testing ensures no new errors.
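Steps 1–5 above can be sketched as follows: the top-level module is first tested with a stub, then the stub is replaced by the actual component and the earlier test is re-run as a regression test. The function names and the discount rule are assumptions for illustration:

```python
def checkout(total, discount):
    """Top-level (main) module under test; `discount` is subordinate."""
    return round(discount(total), 2)

def discount_stub(total):
    # Step 1: stub substituted for the subordinate component.
    return total

def real_discount(total):
    # Actual subordinate component: 10% off orders above 100.
    return total * 0.9 if total > 100 else total

# Step 1: test the main module with the stub in place.
assert checkout(50, discount_stub) == 50

# Steps 2-4: replace the stub with the actual component and test.
assert checkout(200, real_discount) == 180.0

# Step 5: re-run the earlier case as a regression test to
# confirm the replacement introduced no new errors.
assert checkout(50, real_discount) == 50
print("top-down integration steps passed")
```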
Bottom Up Integration Testing
• It is an approach to Integration Testing where
bottom level units are tested first and upper-
level units step by step after that.
• This approach is taken when a bottom-up
development approach is followed.
• Test Drivers are needed to simulate higher
level units which may not be available during
the initial phases.
Steps of Bottom Up Integration Testing

1. Low-level components are combined into
clusters/builds that perform specific software
functions.
2. A driver coordinates the test case input/output.
3. The cluster is tested.
4. Driver is removed and clusters are combined
with new module/other clusters moving up in
the hierarchy.
Sandwich/Hybrid Integration Testing
• In top-down approach, testing can start only after the
top-level modules have been coded and unit tested.
Similarly, bottom-up testing can start only after the
bottom level modules are ready.
• The Sandwich approach overcomes this shortcoming
of the top-down and bottom-up approaches.
• In the Sandwich testing approaches, testing can start
as and when modules become available. Therefore,
this is one of the most commonly used integration
testing approaches.
System Testing
• It is a level of software testing where a
complete and integrated software is tested.
The purpose of this test is to evaluate the
system’s compliance with the specified
requirements
Types of System Testing
1. Recovery Testing
2. Security Testing
3. Stress Testing
4. Sensitivity Testing
5. Performance Testing
6. Deployment Testing
Acceptance Testing
• It is a level of software testing where a system
is tested for acceptability.
• It is conducted when the software is
developed for a specific customer and not for
the general public.
• The purpose of this test is to allow the customer
to validate (check w.r.t. the SRS) all requirements.
Alpha Testing
• Alpha Testing (also known as Internal
Acceptance Testing) is performed by
members of the organization that developed
the software but who are not directly
involved in the project (Development or
Testing). Usually, it is the members of Product
Management, Sales and/or Customer Support.
Beta Testing
• Beta Testing (also known as User Acceptance
Testing) is performed by the end users of the
software. They can be the customers
themselves or the customers’ customers.
External Acceptance Testing
• External Acceptance Testing is performed by people who
are not employees of the organization that developed the
software.
• It is of two types-
1. Customer Acceptance Testing is performed by the
customers of the organization that developed the
software. They are the ones who asked the organization to
develop the software. [This is in the case of the software
not being owned by the organization that developed it.]
2. Beta Testing.
Alpha Testing Vs. Beta testing

ALPHA TESTING
1. Done by the customer at the developer’s site.
2. Conducted in a controlled environment.
3. Developer is present.
4. Carried out before the release of the product to the customer.
5. Failures/errors are recorded.
6. Both black box and white box testing.

BETA TESTING
1. Done by the customer at the user’s/customer’s site.
2. Conducted in a real-time environment, not under the developer’s control.
3. Developer is not present.
4. Carried out after the release of the product to the customer.
5. Failures are reported.
6. Black box testing only.
What is Verification?
• Definition: The process of evaluating software to determine whether
the products of a given development phase satisfy the conditions
imposed at the start of that phase.
• Verification is a static practice of verifying documents, design, code and
program. It includes all the activities associated with producing high
quality software: inspection, design analysis and specification analysis.
It is a relatively objective process.
• Methods of Verification:
• Static Testing
• Walkthrough
• Inspection
• Review
What is Validation?

• Definition: The process of evaluating software during or at the end of
the development process to determine whether it satisfies specified
requirements.
• Validation is the process of evaluating the final product to check
whether the software meets the customer expectations and requirements.
It is a dynamic mechanism of validating and testing the actual product.
• Methods of Validation:
• Dynamic Testing
• Testing
• End Users
Difference between Verification
and Validation

Verification
1. Verification is a static practice of verifying documents, design, code and program.
2. It does not involve executing the code.
3. It is human-based checking of documents and files.
4. Verification uses methods like inspections, reviews, walkthroughs, and desk-checking.
5. Verification is to check whether the software conforms to specifications.
6. Verification is done by the QA team to ensure that the software is as per the specifications in the SRS document.

Validation
1. Validation is a dynamic mechanism of validating and testing the actual product.
2. It always involves executing the code.
3. It is computer-based execution of the program.
4. Validation uses methods like black box (functional) testing, gray box testing, and white box (structural) testing.
5. Validation is to check whether the software meets the customer expectations and requirements.
6. Validation is carried out with the involvement of the testing team.
