Testing cannot demonstrate that the software is free of defects or that it will behave as expected in
every circumstance.
Edsger Dijkstra: "Testing can only show the presence of errors, not their absence."
The validation and verification process may also involve software inspections and reviews – static
techniques that focus mainly on the source code and do not require the program to be executed. In
fact, any readable representation of the software can be inspected.
Test cases are specifications of input, expected output and a statement of what is being tested.
Test data – input devised to test a system.
Development testing – all testing activities are carried out by the team developing the system; the
tester is usually the programmer who wrote the software.
There are 3 levels of development testing:
Unit testing – testing individual program units or object classes
Unit testing tests individual program components, e.g. individual functions. Tests should be
designed to provide coverage for all features of an object. In particular they should:
o Test all operations associated with an object
o Set and check the value of all attributes associated with an object
o Put an object into all possible states and simulate all events that cause a state change
Unit testing should be automated whenever possible.
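As a concrete sketch of an automated unit test, assume a hypothetical Counter class (the names here are illustrative, not from the text); the tests exercise its operations, check its attribute, and simulate events that change its state, using Python's unittest framework:

```python
import unittest

# Hypothetical component under test: a simple counter object.
class Counter:
    def __init__(self):
        self.value = 0          # the attribute to set and check

    def increment(self):
        self.value += 1

    def reset(self):
        self.value = 0

class CounterTest(unittest.TestCase):
    def test_increment_updates_value(self):
        c = Counter()
        c.increment()
        self.assertEqual(c.value, 1)   # check the attribute after the operation

    def test_reset_returns_to_initial_state(self):
        c = Counter()
        c.increment()
        c.reset()                      # simulate an event that causes a state change
        self.assertEqual(c.value, 0)

if __name__ == "__main__":
    unittest.main()
```

Because the tests are embedded in a program, the whole suite can be re-run automatically after every change.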
Unit test cases must be effective. This means they should:
o Show that the component does what it's expected to do if used as expected
o Reveal defects if there are any
There are 2 strategies for choosing test cases:
Partition testing – identify groups of inputs that have common characteristics –
tests from within each group should be included
Input and output data often fall into a number of different classes with
common characteristics, and a program normally behaves in a comparable way
for all members of a class. These classes are called equivalence partitions.
Once a set of partitions has been identified, test cases from each should be
chosen. A good approach is to choose test cases on the boundaries of the
partitions and close to the midpoint – boundary values are often atypical and
may have been overlooked by the developers. Using the specification of a
system to identify equivalence partitions is called black-box testing; no
knowledge of how the system works internally is needed.
There is also white-box testing – examining the code to find possible tests –
for example, the code may include exception handlers for incorrect inputs that
tests should exercise.
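A minimal sketch of partition testing, assuming a hypothetical is_valid_month function; the partitions and boundary values are chosen from the specification alone (black-box), without looking at the implementation:

```python
# Hypothetical function under test: accepts month numbers 1..12.
def is_valid_month(m: int) -> bool:
    return 1 <= m <= 12

# Three equivalence partitions: below range, in range, above range.
# Pick boundary values and a midpoint rather than only "typical" inputs.
test_cases = {
    0: False,    # boundary just below the valid partition
    1: True,     # lower boundary of the valid partition
    6: True,     # midpoint of the valid partition
    12: True,    # upper boundary of the valid partition
    13: False,   # boundary just above the valid partition
    -5: False,   # typical member of the "below range" partition
}

for value, expected in test_cases.items():
    assert is_valid_month(value) == expected
```

One test per partition plus the boundary values gives good coverage with few cases, because all members of a partition are expected to behave alike.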
Interface errors are among the most common errors in complex systems. They fall into three
categories:
o Interface misuse – common in parameter interfaces
o Interface misunderstanding – a calling component misunderstands the specification of
the called component and makes an assumption about its behavior that turns out to be
wrong
o Timing errors – the producer and consumer of data operate at different speeds
Testing for interface defects is difficult as some of them may only become visible under unusual
conditions.
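To illustrate interface misuse in a parameter interface, here is a sketch with a hypothetical transfer function (all names invented); keyword-only parameters are one defence against a caller passing arguments in the wrong order:

```python
# Hypothetical parameter interface: the caller must pass (amount, account_id).
def transfer(amount: float, account_id: str) -> str:
    return f"transfer {amount:.2f} to {account_id}"

# Interface misuse: a caller swaps the parameters. Python accepts the call,
# and the defect only shows up later, possibly under unusual conditions:
# transfer("ACC-1", 100.0)

# One defence: keyword-only parameters make every call site explicit,
# so a swapped call fails immediately at the interface.
def transfer_safe(*, amount: float, account_id: str) -> str:
    return f"transfer {amount:.2f} to {account_id}"

print(transfer_safe(amount=100.0, account_id="ACC-1"))
```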
System testing – testing the system as a whole: integrating components to create a version of
the system and testing it. It overlaps with component testing, but there are differences:
During system testing reusable components are integrated with the newly developed ones and
the complete system is tested
Components developed by different teams are integrated at this stage
When you put the pieces together, you get emergent behavior – some features only become
visible at this stage. Sometimes the emergent behavior is unplanned and unwanted. The tests
must check that the system only does what it is supposed to do – system testing should focus on
interactions within the system.
Development testing is interleaved with debugging – locating problems in the code.
Release testing – testing a release of the system that is intended for use outside the development team.
Release testing is a form of system testing. The differences are:
A separate team should be responsible for release testing.
Release testing is validation testing rather than defect testing.
The purpose is to check that the system meets the requirements and is good enough for external use
(validation testing).
Release testing is usually black-box (functional) testing – the tester is concerned only with the
functionality, not the implementation.
The following may be parts of release testing:
Requirements-based testing
o Requirements should be testable – meaning that every requirement should be written
so that a test can be designed for it
o Validation rather than defect testing
o Usually multiple tests have to be written to test one requirement
Scenario testing
o Devise typical scenarios of use and use them to develop test cases
o Scenarios should be realistic and fairly complex
o Several requirements are tested within the same scenario
o Intended to check that combinations of requirements don't cause problems
Performance testing
o Designed to ensure the system can process the intended load
o Stress testing – gradually increasing the load until system failure
o Tests the failure behavior – should not cause data corruption
o May expose defects that wouldn’t normally be discovered
o Particularly relevant to distributed systems based on a network of processors – these
often exhibit severe degradation when heavily loaded
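Returning to requirements-based testing: one requirement usually needs several test cases. A sketch, assuming a hypothetical requirement "lock an account after three consecutive failed login attempts" and an illustrative Account class (neither is from the text):

```python
# Hypothetical requirement: "The system shall lock an account after three
# consecutive failed login attempts."
class Account:
    def __init__(self, password: str):
        self._password = password
        self._failures = 0
        self.locked = False

    def login(self, password: str) -> bool:
        if self.locked:
            return False
        if password == self._password:
            self._failures = 0          # a success resets the failure count
            return True
        self._failures += 1
        if self._failures >= 3:
            self.locked = True
        return False

# Test 1: exactly three consecutive failures lock the account.
a = Account("secret")
for _ in range(3):
    a.login("wrong")
assert a.locked

# Test 2: two failures must NOT lock it.
b = Account("secret")
b.login("wrong"); b.login("wrong")
assert not b.locked

# Test 3: a success between failures resets the count ("consecutive").
c = Account("secret")
c.login("wrong"); c.login("secret"); c.login("wrong"); c.login("wrong")
assert not c.locked
```

Each test checks a different consequence of the same requirement; passing only one of them would not validate it.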
User testing - a stage in the testing process in which users or customers provide input and advice on
system testing. User testing is essential, even when comprehensive system and release testing have
been carried out. Types of user testing are:
Alpha testing – users work with the developers at the developers' site; users and developers
together test the system as it is being developed.
Beta testing – a release of the system is made available for users to experiment with and report
problems. It is used mostly for products deployed in many different environments and is also a
form of marketing.
Acceptance testing – customers test a system and decide whether it is ready to be accepted and
used - testing takes place after release testing. It involves a customer formally testing the
system. There are 6 stages of acceptance testing:
o Define acceptance criteria
o Plan acceptance testing
o Derive acceptance tests
o Run acceptance tests
o Negotiate test results
o Reject/accept the system
Test-driven development
TDD is an approach to program development where testing and code development are interleaved.
The code is developed incrementally, along with a test for each increment, and the developers do
not move on to the next increment until the code passes its test.
Test-driven development was introduced as part of agile methods. It helps programmers clarify
their ideas of what a piece of code is supposed to do – to write a test, you need to understand
what is intended.
It also has other advantages:
Code coverage – every code segment has an associated test – defects are discovered early
Regression testing
Simplified debugging
System documentation
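A minimal test-first sketch (the slugify function is an invented example, not from the text): the test is written before the code it exercises, and the increment is finished only when the test passes:

```python
# Step 1: write the test first. It defines what the increment must do
# and fails until the code exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim me  ") == "trim-me"

# Step 2: write the minimal code that makes the test pass.
# The next increment is not started until this one is green.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

test_slugify()  # kept in the suite: every later change re-runs it (regression testing)
```

Because each increment keeps its test, the suite doubles as regression tests and as executable documentation of what the code is supposed to do.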
Summary
Testing can only show the presence of errors. It can't demonstrate that there are no remaining
faults.
Development testing is the responsibility of the software development team. A separate team
should be responsible for testing a system before it is released to customers. In the user testing
process, customers or users provide test data and check that tests are successful.
Development testing includes unit testing, in which you test individual objects and methods;
component testing, in which you test related groups of objects; and system testing, in which
you test a partial or complete system.
When testing software, you should try to break it by using experience and guidelines to choose
types of test cases that have been effective in discovering defects in other systems.
Whenever possible, you should write automated tests – embedded in a program that can be run
every time a change is made.
Test-first development is an approach where tests are written before the code to be tested.
Small code changes are made and the code is refactored until all tests execute successfully.
Scenario testing is useful because it replicates the practical use of the system. It involves
inventing a typical usage scenario and using it to derive test cases.
Acceptance testing is a user testing process where the aim is to decide if the software is good
enough to be deployed and used.