Software Testing
ISTQB / ISEB Foundation Exam Practice
Lifecycle
Contents
Software Models
Test levels
Test types
Maintenance testing
Waterfall model
Cost of fixing faults
[chart: cost of fixing a fault rises roughly tenfold per phase, from 10 to 100 to 1000, the later it is found between designing tests and running tests]
V&V activities in testing
Verification
• the process of evaluating a system or component to
determine whether the products of the given
development phase satisfy the conditions imposed
at the start of that phase [BS 7925-1]
Validation
• determination of the correctness of the products of
software development with respect to the user
needs and requirements [BS 7925-1]
Early test design
Contents
Models for testing, economics of testing
Test levels
Test types
Maintenance testing
Test levels
[V-model diagram: tests are designed early, against each development level, and run later against the corresponding build]
Component testing
Lowest level
Tested in isolation
Most thorough look at detail
Usually done by programmer
Also known as unit, module, program testing
Stubs & Drivers (Test Doubles)
Stub (mock): stands in for something that is called
by the component to be tested
Driver: stands in for something that calls the
component to be tested
Component testing: objectives
Reduce risk
Verify functional and non-functional behaviour
Build confidence in components
Find defects
Prevent defects from escaping to higher levels
Component testing: test basis
Detailed design
Code
Data models
Component specifications
Component testing: typical defects and failures
Wrong functionality
Data flow problems
Incorrect code/logic
Integration testing: objectives
Reduce risk
Verify functional and non-functional behaviour of the interfaces
Build confidence in interfaces
Find defects
Prevent defects from escaping to higher levels
Integration testing: test basis
Software/system design
Sequence diagrams
Interface and communication protocol specs
Use cases
Architecture (component or system)
Workflows
External interface definitions
Integration testing: test objects
Subsystems
Databases
Infrastructure
Interfaces
APIs
Microservices
Integration testing: typical defects
and failures
Data problems
Inconsistent message structure (SIT)
Timing problems
Interface mismatch
Communication failures
Incorrect assumptions
Not complying with regulations (SIT)
Integration testing: specific approaches
and responsibilities
The greater the scope of integration, the more difficult it
becomes to isolate failures → many approaches to integration testing
One extreme: all components or systems are
integrated simultaneously, then everything is tested
as a whole → big-bang integration
Another extreme: all programs are integrated one
by one, then tests are carried out after each step →
incremental integration
Big-Bang Integration
Advantages:
- as everything is finished before integration testing
starts, there is no need to simulate unfinished parts
- only appropriate when you are optimistic and expect
to find few problems
Disadvantages:
- Difficult to trace the cause of failures with late
integration
- Time-consuming
Incremental Integration
Advantages:
- defects are found earlier
- faults are easier to locate and fix
Disadvantages:
- Time-consuming since stubs (mock objects) and
drivers may need to be developed and used in the
test
Some variants of incremental integration:
- Top-down
- Bottom-up
- Functional incremental
Top-down Integration
Following control flow or architectural structure,
example starting from GUI/main-menu
Baselines:
- baseline 0: component a
- baseline 1: a + b
- baseline 2: a + b + c
- baseline 3: a + b + c + d
- etc.
Needs stubs/mocks to stand in for lower-level
components not yet integrated
[diagram: component tree with a at the top; b, c below it; then d, e, f, g; then h, i, j, k, l, m; then n, o]
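The top-down baselines above follow the control structure level by level, which is a breadth-first walk of the call tree. A minimal sketch, assuming the call relationships shown in the slide's diagram (the `CALLS` table and function name are illustrative, not part of any standard):

```python
# Hypothetical sketch: deriving a top-down integration order for the
# component tree on the slide (a at the top, n and o at the bottom).
from collections import deque

CALLS = {                      # parent -> components it calls
    "a": ["b", "c"],
    "b": ["d", "e"], "c": ["f", "g"],
    "d": ["h", "i"], "e": ["j"], "f": ["k", "l"], "g": ["m"],
    "i": ["n", "o"],
}

def top_down_order(root):
    """Breadth-first walk: integrate each level before the one below it."""
    order, queue = [], deque([root])
    while queue:
        comp = queue.popleft()
        order.append(comp)
        queue.extend(CALLS.get(comp, []))
    return order

# Each prefix of this order is one baseline (a; a+b; a+b+c; ...).
# Callees not yet integrated are replaced by stubs.
print(top_down_order("a"))
```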
Pros & cons of top-down approach
Advantages:
- critical control structure tested first and most often
- can demonstrate system early (show working
menus)
Disadvantages:
- needs stubs
- detail left until last
- may be difficult to "see" detailed output (but should
have been tested in component test)
- may look more finished than it is
Bottom-up Integration
Baselines:
- baseline 0: component n
- baseline 1: n + i
- baseline 2: n + i + o
- baseline 3: n + i + o + d
- etc.
Needs drivers to call the baseline configuration
Also needs stubs/mocks for some baselines
[diagram: same component tree, integrated from the leaves n and o upwards]
Pros & cons of bottom-up approach
Advantages:
- lowest levels tested first and most thoroughly (but
should have been tested in unit testing)
- good for testing interfaces to external environment
(hardware, network)
- visibility of detail
Disadvantages
- no working system until last baseline
- needs both drivers and stubs
- major control problems found last
Stubs & Drivers
Stub → keep it simple
- print/display name (I have been called)
- reply to calling module (single value)
- computed reply (variety of values)
- prompt for reply from tester
- search list of replies
- provide timing delay
Driver: specially written or general purpose
(commercial tools)
- invoke baseline
- send any data baseline expects
- receive any data baseline produces (print)
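The stub variants listed above, from "keep it simple" up to a scripted list of replies, can be sketched as follows. This is a minimal illustration, not a real library; all class names are hypothetical:

```python
# Hypothetical sketch of stub variants, simplest first.

class SimpleStub:
    """Announce the call and return a single fixed value."""
    def handle(self, request):
        print("SimpleStub: I have been called")  # print/display name
        return 0                                  # single-value reply

class ComputedStub:
    """Computed reply: a variety of values derived from the request."""
    def handle(self, request):
        return len(str(request)) * 10

class ScriptedStub:
    """Step through a prepared list of replies, one per call."""
    def __init__(self, replies):
        self.replies = iter(replies)
    def handle(self, request):
        return next(self.replies)

stub = ScriptedStub([200, 404, 500])
print([stub.handle(r) for r in ("a", "b", "c")])
```

A driver plays the opposite role: it invokes the baseline, feeds it the data it expects, and prints whatever the baseline produces.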
Minimum Capability Integration
(also called Functional)
Baselines:
- baseline 0: component a
- baseline 1: a + b
- baseline 2: a + b + d
- baseline 3: a + b + d + i
- etc.
Needs stubs
Shouldn't need drivers (if top-down)
[diagram: same component tree; the path a → b → d → i is integrated first]
Pros & cons of Minimum Capability
Advantages:
- control level tested first and most often
- visibility of detail
- real working partial system earliest
Disadvantages
- needs stubs
Thread Integration
(also called Functional)
The order of processing some event (e.g. an
interrupt or a user transaction) determines the
integration order: minimum capability in time
Advantages:
- critical processing first
- early warning of performance problems
Disadvantages:
- may need complex drivers and stubs
[diagram: same component tree; the components along one processing thread are integrated together]
System testing: objectives
Reduce risk
Verify functional and non-functional behaviour of the system
Validate that the system is complete and will work as expected
Build confidence in the quality of the system as a whole
Find defects
Prevent defects from escaping to higher test levels or production
System testing: test objects
Applications
Hardware/software
Operating systems
System under test (SUT)
System configuration and data
System testing: typical
defects and failures
Incorrect calculations
Incorrect or unexpected behaviour
Incorrect data/control flows
Cannot complete end-to-end tasks
Does not work in production environments
Not as described in manuals/documentation
System testing: specific approaches
and responsibilities
Most often the final test on behalf of development
Most often carried out by specialist independent
testers (and sometimes by a third-party team)
End-to-end behaviour of both functional and non-
functional aspects
Acceptance testing: test basis
Business processes
User, business, system requirements
Regulations, legal contracts and standards
Use cases
Documentation
Installation procedures
Risk analysis
Security Testing
passwords
encryption
hardware permission devices
levels of access to information
authorisation
covert channels
physical security
Configuration and Installation
Configuration Tests
- different hardware or software environment
- configuration of the system itself
- upgrade paths - may conflict
Installation Tests
- distribution (CD, network, etc.) and timings
- physical aspects: electromagnetic fields, heat,
humidity, motion, chemicals, power supplies
- uninstall (removing installation)
Reliability / Qualities
Reliability
- "system will be reliable" - how to test this?
- "2 failures per year over ten years"
- Mean Time Between Failures (MTBF)
- reliability growth models
Other Qualities
- maintainability, portability, adaptability, etc.
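Turning a vague requirement like "the system will be reliable" into something testable means stating a measurable figure, as the slide suggests with "2 failures per year" and MTBF. A minimal sketch of the arithmetic (the function name is illustrative):

```python
# Hypothetical sketch: Mean Time Between Failures as a testable
# reliability figure.

def mtbf(operating_hours, failures):
    """MTBF = total operating time / number of observed failures."""
    return operating_hours / failures

# "2 failures per year" observed over one year of operation:
hours_per_year = 365 * 24          # 8760 hours
print(mtbf(hours_per_year, 2))     # mean hours between failures
```

The testable claim is then "observed MTBF is at least 4380 hours", which can be checked against operational data or a reliability growth model.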
Back-up and Recovery
Back-ups
- computer functions
- manual procedures (where are tapes stored)
Recovery
- real test of back-up
- manual procedures unfamiliar
- should be regularly rehearsed
- documentation should be detailed, clear and
thorough
Documentation Testing
Documentation review
- check for accuracy against other documents
- gain consensus about content
- documentation exists, in right format
Documentation tests
- is it usable? does it work?
- user manual
- maintenance documentation
Lifecycle
Contents
Models for testing
Test levels
Test types
Maintenance testing
Test types
Test type: a group of test activities based on
specific test objectives, aimed at specific
characteristics of a component or system
Functional testing: evaluates compliance
with functional requirements
Non-functional testing: evaluates compliance
with non-functional requirements:
- performance, reliability, usability, efficiency,
maintainability, portability, security…
Test types
Structural testing (white-box, clear-box, code-
based, glass-box, logic coverage, logic-
driven, structure-based): testing of the software
structure/architecture
Confirmation testing (re-testing): dynamic
testing conducted after fixing defects to
confirm failures do not occur anymore
Regression testing: testing of previously
tested component/system to ensure defects
are not introduced in unchanged areas of
software as a result of the changes made
Each test type is applicable at every test level
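The difference between confirmation and regression testing can be shown with one fixed defect. A minimal sketch, assuming a hypothetical `discount` function whose rounding was just corrected:

```python
# Hypothetical sketch: confirmation vs. regression testing after a fix.

def discount(price, percent):
    """Price after discount (the version containing the fix)."""
    return round(price * (100 - percent) / 100, 2)

def test_confirmation():
    # Confirmation (re-test): this exact case failed before the fix;
    # re-run it to confirm the failure no longer occurs.
    assert discount(100.0, 15) == 85.0

def test_regression():
    # Regression: unchanged behaviour must still work after the change.
    assert discount(100.0, 0) == 100.0

test_confirmation()
test_regression()
print("confirmation and regression tests passed")
```

In practice the regression suite is the previously passing tests for the unchanged areas, re-run after every change.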
Lifecycle
Contents
Models for testing
Test levels
Test types
Maintenance testing
Maintenance testing
Testing the changes (including regression tests) to
an operational system or the impact of a changed
environment to an operational system:
• Modification: enhancement changes, corrective and
emergency changes, operating system or database
upgrades, patches to correct the operating system, hardware
devices changed
• Migration: moving to another platform or adding a new
supported platform (new environment, changed software,
data migration)
• Retirement: data migration or archiving, also restore after
archiving
Impact analysis and regression testing
Impact analysis: identify all work products affected
by a change, including an estimate of resources to
complete the change.
Factors that can make impact analysis difficult:
• Specifications are out of date or missing
• Test cases are not documented or are out of date
• Bi-directional traceability between tests and test basis is
not maintained
• Involved people don’t have domain knowledge
• …
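The bi-directional traceability the last bullet points at is what makes impact analysis mechanical: with requirement-to-test links recorded, the affected tests fall out of a lookup. A minimal sketch with made-up requirement and test-case identifiers:

```python
# Hypothetical sketch: impact analysis via a traceability table.
# Requirement and test-case IDs are invented for illustration.

TRACE = {
    "REQ-12": ["TC-101", "TC-102"],
    "REQ-13": ["TC-102", "TC-205"],
    "REQ-20": ["TC-300"],
}

def impacted_tests(changed_requirements):
    """Union of all test cases linked to any changed requirement."""
    tests = set()
    for req in changed_requirements:
        tests.update(TRACE.get(req, []))
    return sorted(tests)

print(impacted_tests(["REQ-12", "REQ-13"]))
```

Without maintained links (or with out-of-date specifications), this lookup degrades into guesswork, which is exactly the risk the bullets above describe.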