
Chapter 2

Software Testing
ISTQB / ISEB Foundation Exam Practice

Testing Throughout the Software Life Cycle

1 Principles
2 Lifecycle
3 Static testing
4 Test design techniques
5 Management
6 Tools
Contents
Software Models
Test levels
Test types
Maintenance testing
Waterfall model
Cost of fixing faults
(chart: the relative cost of fixing a fault rises roughly tenfold per phase,
from about 1 when found at requirements to 10 in design, 100 in test and
1000 in use; axis: Req, Des, Test, Use)
V-Model
(diagram: tests are designed on the left-hand, development side of the V
and run on the right-hand, testing side)
V&V activities in testing

Verification
• the process of evaluating a system or component to
determine whether the products of the given
development phase satisfy the conditions imposed
at the start of that phase [BS 7925-1]
Validation
• determination of the correctness of the products of
software development with respect to the user
needs and requirements [BS 7925-1]
Early test design

test design finds faults


faults found early are cheaper to fix
most significant faults found first
faults prevented, not built in
no additional effort, re-schedule test design
changing requirements caused by test design

Early test design helps to build quality and
stops fault multiplication
Iterative & Incremental models
Agile development
It promotes the generation of business stories to define
the functionality.
It demands an on-site customer for continual feedback
and to define and carry out functional acceptance testing.
It promotes pair programming and shared code
ownership amongst the developers.
It states that component test scripts shall be written
before the code is written and that those tests should be
automated.
It states that integration and testing of the code shall
happen several times a day.
It states that we always implement the simplest solution
to meet today's problems.
Scrum Framework
Test levels
(V-model diagram: tests for each level are designed on the way down
and run on the way up)
Component testing

Lowest level
Tested in isolation
Most thorough look at detail
Usually done by programmer
Also known as unit, module, program testing
Stubs & Drivers (Test Doubles)
Stub (mock): stands in for something that is
called by the component under test
Driver: calls the component under test
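As a minimal sketch of these two roles in Python (the `calculate_invoice` component and the tax service are hypothetical names invented for illustration, not taken from the slides):

```python
def tax_service_stub(amount):
    """Stub: stands in for the real tax service the component would call."""
    return round(amount * 0.2, 2)  # fixed, predictable reply

def calculate_invoice(amount, tax_lookup):
    """Component under test; depends on a lower-level service."""
    return amount + tax_lookup(amount)

def driver():
    """Driver: calls the component under test and checks its result."""
    total = calculate_invoice(100.0, tax_lookup=tax_service_stub)
    assert total == 120.0
    return total
```

The stub sits *below* the component and answers its calls; the driver sits *above* it and makes the calls.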
Component testing: objectives

Reduce risk
Verify functional and non-functional behaviour
Build confidence in components
Find defects
Prevent defects escaping to higher levels
Component testing: test basis

Detailed design
Code
Data models
Component specifications
Component testing: test objects

Components, units, modules


Code
Data structures
Classes
Database models
Component testing: typical defects
and failures

Wrong functionality
Data flow problems
Incorrect code/logic
Component testing: specific approaches
and responsibilities

Access to the code under test, supported by
a development environment (unit test
framework & debugging tools)
Often done by developer who wrote the code
Test-first (test-driven development) approach:
prepare test cases before coding, then code
until all test cases pass
Try to automate the testing
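The test-first approach might look like this minimal sketch, assuming a hypothetical `leap_year` component (the tests are written first; the implementation follows until they all pass):

```python
import unittest

class LeapYearTest(unittest.TestCase):
    """Tests written before the code they exercise."""

    def test_divisible_by_4(self):
        self.assertTrue(leap_year(2024))

    def test_century_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_400_is_leap(self):
        self.assertTrue(leap_year(2000))

# Coding continues until every test above passes:
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```

Because the tests live in a unit-test framework, they are automated from the start and can be re-run on every change.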
Integration testing
Interfaces between components (component
integration testing, following component testing)
Interactions between different parts of system
(system integration testing, following system
test)
Non-functional aspects if possible
Integration strategy: big-bang vs incremental
(top-down, bottom-up, functional)
Integration testing: objectives

Reduce risk
Verify functional and non-functional behaviour
Build confidence in interfaces
Find defects
Prevent defects escaping to higher levels
Integration testing: test basis

Software/system design
Sequence diagrams
Interface and communication protocol specs
Use cases
Architecture (component or system)
Workflows
External interface definitions
Integration testing: test objects

Subsystems
Databases
Infrastructure
Interfaces
APIs
Microservices
Integration testing: typical defects
and failures
Data problems
Inconsistent message structure (SIT)
Timing problems
Interface mismatch
Communication failures
Incorrect assumptions
Not complying with regulations (SIT)
Integration testing: specific approaches
and responsibilities
The greater the scope of integration, the more
difficult it becomes → many approaches to integration testing
One extreme: all components or systems are
integrated simultaneously, then everything is tested
as a whole → big-bang integration
Another extreme: all programs are integrated one
by one, then tests are carried out after each step →
incremental integration
Big-Bang Integration
Advantages:
- As everything is finished before integration testing
starts → no need to simulate unfinished parts
- Used when optimistic, expecting to find
no problems
Disadvantages:
- Difficult to trace the cause of failures with late
integration
- Time-consuming
Incremental Integration
Advantages:
- Earlier defects found
- Easier fault location and fix
Disadvantages:
- Time-consuming since stubs (mock objects) and
drivers may need to be developed and used in the
test
Some variants of incremental integration:
- Top-down
- Bottom-up
- Functional incremental
Top-down Integration
Following control flow or architectural structure,
e.g. starting from the GUI / main menu
(component tree: a at the top; b, c below it; then d, e, f, g;
then h, i, j, k, l, m; n and o at the bottom)
Baselines:
- baseline 0: component a
- baseline 1: a + b
- baseline 2: a + b + c
- baseline 3: a + b + c + d
- etc.
Needs stubs/mocks for calls to lower-level
components not yet integrated
Pros & cons of top-down approach
Advantages:
- critical control structure tested first and most often
- can demonstrate system early (show working
menus)
Disadvantages:
- needs stubs
- detail left until last
- may be difficult to "see" detailed output (but should
have been tested in component test)
- may look more finished than it is
Bottom-up Integration
(same component tree as before)
Baselines:
- baseline 0: component n
- baseline 1: n + i
- baseline 2: n + i + o
- baseline 3: n + i + o + d
- etc.
Needs drivers to call the baseline configuration
Also needs stubs/mocks/drivers for some baselines
Pros & cons of bottom-up approach
Advantages:
- lowest levels tested first and most thoroughly (but
should have been tested in unit testing)
- good for testing interfaces to external environment
(hardware, network)
- visibility of detail
Disadvantages
- no working system until last baseline
- needs both drivers and stubs
- major control problems found last
Stubs & Drivers
Stub → keep it simple
- print/display name (I have been called)
- reply to calling module (single value)
- computed reply (variety of values)
- prompt for reply from tester
- search list of replies
- provide timing delay
Driver: specially written or general purpose
(commercial tools)
- invoke baseline
- send any data baseline expects
- receive any data baseline produces (print)
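Several of the stub behaviours listed above (announcing that it was called, replying from a searched list of replies, providing a timing delay) can be combined in one small stand-in; the class and its reply table are illustrative assumptions, not from the slides:

```python
import time

class Stub:
    """Minimal configurable stub with a few of the behaviours listed above."""

    def __init__(self, replies=None, delay=0.0):
        self.replies = replies or {}  # searched list of canned replies
        self.delay = delay            # optional timing delay
        self.calls = []               # record of "I have been called"

    def __call__(self, request):
        self.calls.append(request)
        time.sleep(self.delay)
        return self.replies.get(request, 0)  # default single-value reply

# Usage: the baseline calls the stub in place of an unintegrated component.
lookup = Stub(replies={"widget": 9.99})
assert lookup("widget") == 9.99
assert lookup("unknown") == 0
assert lookup.calls == ["widget", "unknown"]
```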
Minimum Capability Integration
(also called Functional)
(same component tree as before)
Baselines:
- baseline 0: component a
- baseline 1: a + b
- baseline 2: a + b + d
- baseline 3: a + b + d + i
- etc.
Needs stubs
Shouldn't need drivers (if top-down)
Pros & cons of Minimum Capability

Advantages:
- control level tested first and most often
- visibility of detail
- real working partial system earliest
Disadvantages
- needs stubs
Thread Integration
(also called Functional)
The order of processing some event (interrupt,
user transaction) determines the integration
order: minimum capability in time
Advantages:
- critical processing first
- early warning of performance problems
Disadvantages:
- may need complex drivers and stubs
Integration Guidelines

minimise support software needed


integrate each component only once
each baseline should produce an easily
verifiable result
integrate small numbers of components at
once
- one at a time for critical or fault-prone components
- combine simple related components
Integration Planning

integration should be planned in the


architectural design phase
the integration order then determines the
build order
- components completed in time for their baseline
- component development and integration testing can
be done in parallel - saves time
System testing

Concerned with the behaviour of the whole
system; focus on end-to-end tasks that the
system should perform
Non-functionals
- as important as functional requirements
- often poorly specified
- must be tested
Test environment should be as similar as
possible to the production environment
System testing: objectives

Reduce risk
Verify functional and non-functional behaviour
Validate completeness, works as expected
Build confidence in interfaces
Find defects
Prevent defects escaping to higher levels
System testing: test basis

Requirement specs (functional and non-


functional)
Risk analysis reports
Use cases
Epics and user stories
Models of system behaviour
State diagrams
System and user manuals
System testing: test objects

Applications
Hardware/software
Operating systems
System under test (SUT)
System configuration and data
System testing: typical
defects and failures
Incorrect calculations
Incorrect or unexpected behaviour
Incorrect data/control flows
Cannot complete end-to-end tasks
Does not work in production environments
Not as described in manuals/documentation
System testing: specific approaches
and responsibilities
Most often the final test on behalf of development
Most often carried out by specialist independent
testers (and sometimes by a third-party team)
End-to-end behaviour of both functional and non-
functional aspects
Acceptance testing

Final stage of testing


The focus is on validation (but verification is
still an objective)
Acceptance testing: objectives
Establish confidence in whole system and its
use
Validate completeness, works as expected
Verify functional and non-functional behaviour
Different forms:
- User Acceptance Testing (UAT)
- Operational Acceptance Testing (OAT)
- Contractual and regulatory acceptance testing
- Alpha and Beta testing
Acceptance testing: test basis

Business processes
User, business, system requirements
Regulations, legal contracts and standards
Use cases
Documentation
Installation procedures
Risk analysis
Acceptance testing: test objects

System under test (SUT)


System configuration and data
Business processes
Recovery systems
Operation and maintenance processes
Forms
Reports
Existing and converted production data
Acceptance testing: typical defects
and failures
System workflows do not meet business or
user needs
Business rules not correct
Contractual or regulatory problems
Non-functional failures (performance,
security…)
Acceptance testing: specific
approaches and responsibilities
Done by user or customer (and other
stakeholders)
Test environment should be the production
environment, or as close to it as possible

If you don't have the patience to test the system,
the system will surely test your patience.


Test level characteristics
Non-functional system testing

different types of non-functional system tests:
- usability
- security
- documentation
- storage
- volume
- configuration / installation
- reliability / qualities
- back-up / recovery
- performance, load, stress
Performance Tests
Timing Tests
- response and service times
- database back-up times
Capacity & Volume Tests
- maximum amount or processing rate
- number of records on the system
- graceful degradation
Endurance Tests (24-hr operation?)
- robustness of the system
- memory allocation
Multi-User Tests
Concurrency Tests
- small numbers, large benefits
- detect record locking problems
Load Tests
- the measurement of system behaviour under
realistic multi-user load
Stress Tests
- go beyond limits for the system - know what will
happen
- particular relevance for e-commerce
Source: Sue Atkins, Magic Performance Management
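A timing test of the kind listed above can be sketched as a simple assertion on elapsed time; the operation being timed and the one-second budget are illustrative assumptions, not figures from the slides:

```python
import time

def respond(payload):
    """Hypothetical operation whose service time we want to bound."""
    return sorted(payload)

def timing_test(max_seconds=1.0):
    """Measure one response time and fail if it exceeds the budget."""
    data = list(range(100_000, 0, -1))
    start = time.perf_counter()
    respond(data)
    elapsed = time.perf_counter() - start
    assert elapsed < max_seconds, f"too slow: {elapsed:.3f}s"
    return elapsed
```

Real performance tests would repeat the measurement under realistic multi-user load and report a distribution, not a single sample.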
Usability Tests

messages tailored and meaningful to (real)


users?
coherent and consistent interface?
sufficient redundancy of critical information?
within the "human envelope"? (7±2 choices)
feedback (wait messages)?
clear mappings (how to escape)?

Who should design / perform these tests?


Security Tests

passwords
encryption
hardware permission devices
levels of access to information
authorisation
covert channels
physical security
Configuration and Installation

Configuration Tests
- different hardware or software environment
- configuration of the system itself
- upgrade paths - may conflict
Installation Tests
- distribution (CD, network, etc.) and timings
- physical aspects: electromagnetic fields, heat,
humidity, motion, chemicals, power supplies
- uninstall (removing installation)
Reliability / Qualities

Reliability
- "system will be reliable" - how to test this?
- "2 failures per year over ten years"
- Mean Time Between Failures (MTBF)
- reliability growth models
Other Qualities
- maintainability, portability, adaptability, etc.
Back-up and Recovery

Back-ups
- computer functions
- manual procedures (where are tapes stored)
Recovery
- real test of back-up
- manual procedures unfamiliar
- should be regularly rehearsed
- documentation should be detailed, clear and
thorough
Documentation Testing

Documentation review
- check for accuracy against other documents
- gain consensus about content
- documentation exists, in right format
Documentation tests
- is it usable? does it work?
- user manual
- maintenance documentation
Test types
Test type: a group of test activities based on
specific test objectives, aimed at specific
characteristics of a component or system
Functional testing: evaluates compliance
with functional requirements
Non-functional testing: evaluates compliance
with non-functional requirements:
- Performance, reliability, usability, efficiency,
maintainability, portability, security…
Test types
Structural testing (white-box, clear-box, code-
based, glass-box, logic coverage, logic-
driven, structure-based): testing of software
structure/architecture
Confirmation testing (re-testing): dynamic
testing conducted after fixing defects to
confirm failures do not occur anymore
Regression testing: testing of previously
tested component/system to ensure defects
are not introduced in unchanged areas of
software as a result of the changes made
Each test type is applicable at every test level
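The difference between confirmation and regression testing can be sketched around a hypothetical `discount` function that was just fixed (the function and its defect are invented for illustration):

```python
def discount(quantity):
    """Hypothetical function: used to fail on quantity 0; now fixed."""
    if quantity <= 0:
        return 0.0  # the fix for the reported defect
    return 0.1 if quantity >= 10 else 0.05

def confirmation_test():
    """Re-test: re-run the exact case that exposed the defect."""
    assert discount(0) == 0.0  # the failure must no longer occur

def regression_tests():
    """Regression: re-run tests for unchanged behaviour."""
    assert discount(5) == 0.05   # still correct after the change
    assert discount(10) == 0.1
```

Confirmation testing targets the fixed failure itself; regression testing checks that the fix did not break anything elsewhere.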
Maintenance testing
Testing the changes (including regression tests) to
an operational system or the impact of a changed
environment to an operational system:
• Modification: enhancement changes, corrective and
emergency changes, operating system or database
upgrades, patches to correct the operating system, hardware
devices changed
• Migration: moving to another platform or adding a new
supported platform (new environment, changed software,
data migration)
• Retirement: data migration or archiving, also restore after
archiving
Impact analysis and regression testing
Impact analysis: identify all work products affected
by a change, including an estimate of resources to
complete the change.
Factors that influence impact analysis:
• Specifications are out of date or missing
• Test cases are not documented or are out of date
• Bi-directional traceability between tests and test basis is
not maintained
• People involved lack domain knowledge
• …

Summary: Key Points


V-model shows test levels, early test design
High level test planning
Component testing using the standard
Integration testing in the small: strategies
System testing (non-functional and functional)
Integration testing in the large
Acceptance testing: user responsibility
Maintenance testing to preserve quality
Review questions