
CS1SE16 SOFTWARE TESTING

Software testing process


Testing is intended to show that a program does what it is supposed to do and discover defects before
it's put to use. The testing process has 2 goals:
 To demonstrate that software meets the requirements
 To identify situations where something about the software is incorrect or undesirable
The first goal leads to validation testing; the second leads to defect testing. The test cases in validation
testing reflect the system's expected use. Test cases in defect testing are designed to expose defects.
They may be deliberately obscure and don't need to reflect normal use.

Testing cannot demonstrate that the software is free of defects or that it will behave as expected in
every circumstance.
Edsger Dijkstra: "Testing can only show the presence of errors, not their absence."

Testing is part of the broader process of validation and verification.


Validation and verification processes are concerned with checking that the software being developed
meets its specifications and delivers the expected functionality.
 Validation – the key question to start with – are we building the right product?
o It's important to talk to potential users of the software, not just the customers –
sometimes the person paying for the software is not the one that is going to use it.
o The aim of validation is to ensure the software meets customers' needs
 Verification – are we building the product correctly? Assuming the specification is correct, does
the system meet it?
o The aim of verification is to check that the system meets its stated functional and non-
functional requirements.
The ultimate goal of both is to establish confidence that the system is fit for purpose – good
enough for its intended use.
The level of confidence required depends on:
 Software purpose – the more critical the software, the more important reliability is
 Users' expectations
 Marketing environment – competing programs, price, delivery schedule – if a product is very
cheap, users may be willing to tolerate lower reliability.

The validation and verification process may also involve software inspections and reviews – static
techniques that focus mainly on the source code and do not require the program to be executed. In
fact, any readable representation of the software can be inspected.

Advantages of inspection over testing


 During testing errors can mask other errors – whereas during inspection there is no interaction
between errors
 Incomplete versions of a system can be inspected without additional cost – testing an incomplete
program requires special test harnesses to be developed
 Inspections can also consider broader quality attributes, for example compliance with standards,
maintainability

Inspections can't replace software testing. They cannot discover:


 Defects caused by unexpected interactions between parts of the program
 Timing problems
 Problems with system performance

A model of the software testing process

Test cases are specifications of input, expected output and a statement of what is being tested.
Test data – input devised to test a system.
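
As a minimal, hypothetical illustration (not part of the original notes), a test case can be recorded as the
test data, the expected output and a statement of what is being tested – here for an assumed leap-year
function:

# Hypothetical function under test.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Each test case: (test data, expected output, what is being tested).
test_cases = [
    (2024, True,  "ordinary year divisible by 4"),
    (1900, False, "century year not divisible by 400"),
    (2000, True,  "century year divisible by 400"),
]

for data, expected, purpose in test_cases:
    assert leap_year(data) == expected, purpose
print("all test cases passed")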

A commercial program usually goes through 3 stages of testing:


 Development testing – system tested during program development to discover bugs and
defects
 Release testing – when the program is finished but before it is released – this is done by a
separate testing team
 User testing – users testing the product in their own environment
o Note: it may be worth watching users test the program without them knowing they are
being observed – people tend to use a program differently when they know they are
being watched.

The testing process usually involves a mixture of manual and automated testing.

Development testing – all testing activities are carried out by the team developing the system – the
tester is usually the programmer who developed the software.
There are 3 levels of development testing:
 Unit testing – testing individual program units or object classes
Unit testing tests individual program components, e.g. individual functions. Tests should be
designed to provide coverage for all features of an object. In particular they should:
o Test all operations associated with an object
o Set and check the value of all attributes associated with an object
o Put an object into all possible states and simulate all events that cause a state change
Unit testing should be automated whenever possible.
Unit test cases must be effective. This means they should:
o Show that the component does what it's expected to do if used as expected
o Reveal defects if there are any
There are 2 strategies for choosing test cases:
 Partition testing – identify groups of inputs that have common characteristics –
tests from within each group should be included
The input data and output results often fall into a number of different classes with
common characteristics. Programs normally behave in a comparable way for all
members of a class. These classes are called equivalence partitions.
Once a set of partitions has been identified, test cases from each should be
chosen. A good way is to choose test cases on the boundaries of the partitions
and close to the midpoint – boundary values are often atypical and may have
been overlooked by the developers. Using the specification of a system to identify
equivalence partitions is called black-box testing. In black-box testing, knowledge
of how the system works internally is not needed.

There is also white-box testing – examining the code to find possible tests – for
example, the code may include exception handling for incorrect inputs that should be
exercised.

Equivalence partitioning is an effective approach to testing as it helps identify
errors that often arise at the edges of the partitions (a small unit-test sketch
follows at the end of this list).

 Guideline-based testing – use guidelines that reflect previous experience of the
errors that are often made.
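
A small sketch of partition testing using Python's unittest framework, assuming a hypothetical
is_valid_age function whose valid partition is 0–120 (the function and the partitions are illustrative,
not from the notes); boundary values and a value near the midpoint are chosen from each partition:

import unittest

def is_valid_age(age):
    # Hypothetical unit under test: ages 0-120 are accepted.
    return 0 <= age <= 120

class AgePartitionTests(unittest.TestCase):
    def test_valid_partition(self):
        # Boundaries of the valid partition plus a value near its midpoint.
        for age in (0, 60, 120):
            self.assertTrue(is_valid_age(age))

    def test_invalid_partitions(self):
        # Values just outside each boundary, plus typical invalid values.
        for age in (-1, -50, 121, 200):
            self.assertFalse(is_valid_age(age))

if __name__ == "__main__":
    unittest.main()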
 Component testing – testing several integrated units – focuses on showing that the
component interfaces behave according to their specifications. It is assumed that unit
testing within the component has already been completed.
There are different types of interfaces between components:
o Parameter interfaces – data passes from one component to another
o Shared memory interfaces – block of memory shared between components
o Procedural interfaces – one component encapsulates a set of procedures that can be
called by other components
o Message passing interfaces – components request services and return results by
passing messages

Interface errors are among the most common errors in complex systems. They fall into 3
categories:
o Interface misuse – a calling component makes an error in its use of another component's
interface, e.g. passing parameters in the wrong order – common in parameter interfaces
o Interface misunderstanding – a calling component misunderstands the specification of
the called component and makes an assumption about its behavior that turns out to be
wrong
o Timing errors – the producer and the consumer of data operate at different speeds, so the
consumer may access out-of-date information

Testing for interface defects is difficult as some of them may only become visible under unusual
conditions.
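
As an illustrative sketch (the component and its specification are hypothetical, not from the notes), a
test of a parameter/procedural interface can probe the called component with empty and boundary
arguments, where interface misuse and misunderstanding most often show up:

import unittest

def binary_search(items, key):
    # Hypothetical called component: expects a sorted list and returns
    # the index of key, or -1 if it is absent.
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == key:
            return mid
        if items[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

class ParameterInterfaceTests(unittest.TestCase):
    def test_empty_list(self):
        # A calling component might pass an empty list; the assumed
        # specification says -1 is returned rather than raising an error.
        self.assertEqual(binary_search([], 5), -1)

    def test_missing_key(self):
        self.assertEqual(binary_search([1, 3, 7], 5), -1)

    def test_present_key(self):
        self.assertEqual(binary_search([1, 3, 7], 7), 2)

if __name__ == "__main__":
    unittest.main()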
 System testing – testing the system as a whole – integrating components to create a version of
the system and testing it. It overlaps with component testing, but there are differences:
 During system testing reusable components are integrated with the newly developed ones and
the complete system is tested
 Components developed by different teams are integrated at this stage

When you put the pieces together, you get emergent behavior – some features only become
visible at this stage. Sometimes the emergent behavior is unplanned and unwanted. The tests
must check that the system only does what it is supposed to do – system testing should focus on
interactions within the system.
Development testing is interleaved with debugging – locating problems in the code.

Release testing – testing a release of the system that is intended for use outside the development team.
Release testing is a form of system testing. The differences are:
 A separate team should be responsible for release testing.
 Release testing is validation testing rather than defect testing.
The purpose is to check that the system meets the requirements and is good enough for external use
(validation testing).
Release testing is usually black-box (functional) testing – the tester is only concerned with the
functionality, not the implementation.
The following may be parts of release testing:
 Requirements based testing
o Requirements should be testable – meaning that every requirement should be written
so that a test can be designed for it
o Validation rather than defect testing
o Usually multiple tests have to be written to test one requirement
 Scenario testing
o Devise typical scenarios of use and use them to develop test cases
o Scenarios should be realistic and fairly complex
o Several requirements are tested within the same scenario
o Intended to check that combinations of requirements don't cause problems
 Performance testing
o Designed to ensure the system can process the intended load
o Stress testing – gradually increasing the load until system failure
o Tests the failure behavior – should not cause data corruption
o May expose defects that wouldn’t normally be discovered
o Particularly relevant to distributed systems based on a network of processors – they
often exhibit severe degradation when heavily loaded
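
A minimal sketch of the stress-testing idea, assuming a hypothetical handle_request operation (the
names and loads are illustrative); the load is increased step by step while checking that no data is
corrupted as the system becomes more heavily loaded:

import time

def handle_request(store, key, value):
    # Hypothetical operation under test: store a value and read it back.
    store[key] = value
    return store[key]

def stress_test(max_load=10_000, step=1_000):
    store = {}
    load = step
    while load <= max_load:
        start = time.perf_counter()
        for i in range(load):
            # Failure behaviour check: the stored data must stay consistent.
            assert handle_request(store, i, i * 2) == i * 2, "data corrupted"
        elapsed = time.perf_counter() - start
        print(f"load={load:6d} requests, time={elapsed:.3f}s")
        load += step

if __name__ == "__main__":
    stress_test()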

User testing - a stage in the testing process in which users or customers provide input and advice on
system testing. User testing is essential, even when comprehensive system and release testing have
been carried out. Types of user testing are:
 Alpha testing – users work with the developers at the developers' site – users and developers
work together to test the system that is being developed.
 Beta testing – a release of the system is made available for users to experiment with and report
problems – used mostly for products that run in many different environments; it is also a
form of marketing.
 Acceptance testing – customers test a system and decide whether it is ready to be accepted and
used - testing takes place after release testing. It involves a customer formally testing the
system. There are 6 stages of acceptance testing:
o Define acceptance criteria
o Plan acceptance testing
o Derive acceptance tests
o Run acceptance tests
o Negotiate test results
o Reject/accept the system
Test-driven development
TDD is an approach to program development in which testing and code development are interleaved.
The code is developed incrementally, along with a test for that increment.
The developers don't move on to the next increment until the code passes its test.
Test-driven development was introduced as part of agile methods.

Test driven development helps programmers to clarify their idea of what a piece of code is supposed to
do – to write a test you need to understand what is intended.
It also has other advantages:
 Code coverage – every code segment has an associated test – defects are discovered early
 Regression testing
 Simplified debugging
 System documentation
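
A hedged illustration of the test-first cycle using a hypothetical word_count increment (not from the
notes): the tests are written first and initially fail, then just enough code is added to make them pass
before moving on to the next increment:

import unittest

# Step 1: write the tests for the next small increment (they fail at first,
# because word_count does not yet exist or returns the wrong answer).
class WordCountTest(unittest.TestCase):
    def test_counts_whitespace_separated_words(self):
        self.assertEqual(word_count("to be or not to be"), 6)

    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

# Step 2: write just enough code to make the tests pass, then refactor.
def word_count(text):
    # Hypothetical increment developed to satisfy the tests above.
    return len(text.split())

if __name__ == "__main__":
    unittest.main()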

Summary
 Testing can only show the presence of errors. It can't demonstrate that there are no remaining
faults.
 Development testing is the responsibility of the software development team. A separate team
should be responsible for testing a system before it is released to customers. In the user testing
process, customers or users provide test data and check that tests are successful.
 Development testing includes unit testing, in which you test individual objects and methods,
component testing, in which you test related groups of objects, and system testing, in which you
test a partial or complete system.
 When testing software, you should try to break it by using experience and guidelines to choose
types of test cases that have been effective in discovering defects in other systems.
 Whenever possible, you should write automated tests – embedded in a program that can be run
every time a change is made.
 Test-first development is an approach where tests are written before the code to be tested.
Small code changes are made and the code is refactored until all tests execute successfully.
 Scenario testing is useful because it replicates the practical use of the system. It involves
inventing a typical usage scenario and using it to derive test cases.
 Acceptance testing is a user testing process where the aim is to decide if the software is good
enough to be deployed and used.
