Table of Contents
Static Techniques
Review process
Static analysis by tools
Dynamic Techniques
Specification-based (black-box) testing techniques
Structure-based (white-box) testing techniques
Experience-based testing techniques
Static and Dynamic Techniques (1)
Static testing
Software work products are examined manually (Reviews), or
with a set of tools (Static Analysis), but not executed
Detect deviations from standards, missing requirements,
design defects, non-maintainable code and inconsistent
interface specifications
Static testing - Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.
Static and Dynamic Techniques (2)
Dynamic testing
Software is executed using a set of input values and its
output is then examined and compared to what is expected
Can only be applied to software code
Detects defects and determines quality attributes of the code
Dynamic testing - Testing that involves the execution of the software of a component or system.
Dynamic testing and static testing are complementary methods, as they tend to find different types of defects.
Static Techniques
Static techniques
Reviews: Informal Review, Walkthrough, Technical Review, Inspection
Static Analysis: Control flow, Data flow
The benefits of reviews
Detect and correct defects at an early stage
Static testing can start early in the life cycle
Rework costs are most often relatively low
Development productivity improvement
Rework effort is substantially reduced
Improved communication
Static tests contribute to an increased awareness of
quality issues
Types of review
Informal: Informal Review, Walkthrough
Formal: Technical Review, Inspection
Peer review - A review of a software work product by colleagues of the producer of the product for the
purpose of identifying defects and improvements. Examples are inspection, technical review and
walkthrough.
Review process
Phases of a formal review:
1. Planning
2. Kick-off
3. Preparation
4. Review meeting
5. Rework
6. Follow-up
Planning
Defining the review criteria
Selecting the personnel
Defining the entry and exit criteria for more formal
review types (inspections)
Selecting which part of documents to review
Checking entry criteria for more formal review types
Kick-off
The result of the entry check and defined exit criteria
are discussed in case of a more formal review
Explaining the objectives, process and documents to
the participants
Distributing documents
Individual Preparation
Preparing for the review meeting by reviewing the
document(s)
Noting potential defects, questions and comments
All issues are recorded
Review meeting
Discussing or logging, with documented results or
minutes (for more formal review types)
The focus is on logging as many defects as possible within a
certain timeframe
Noting defects, making recommendations regarding
handling the defects, making decisions about the defects
Examining/evaluating and recording during any physical
meetings or tracking any group electronic
communications
Rework and Follow-up
Rework
Fixing defects found (typically done by the author)
Recording updated status of defects (in formal reviews)
Follow-up
Checking that defects have been addressed
Gathering metrics
Checking on exit criteria (for more formal review types)
Roles and responsibilities
Manager - decides on the execution of a review, allocates time in project schedules
and determines if the review objectives have been met
Moderator – the person who leads the review process. If necessary, the moderator
may mediate between the various points of view
Author – the writer or person with chief responsibility for the document(s) to be
reviewed
Reviewers – individuals with specific technical or business background (also called
checkers or inspectors) who, after the necessary preparation, identify and describe
findings in the product under review
Scribe (or recorder) – documents all the issues, problems and open points that were
identified during the meeting
Informal Review
No formal process
May take the form of pair programming or a technical
lead reviewing design and code
Results may not be documented
Varies in usefulness depending on the reviewers
Main purpose - inexpensive way to get some benefit
Walkthrough
Meeting led by the author of the document
May take the form of scenarios, dry runs, peer group participation
Open-ended sessions
Preparation of reviewers and a review report are optional
Main purpose – learning, gaining understanding, finding defects
Technical review
Documented, defined defect-detection process that includes peers and
technical experts with optional management participation
May be performed as a peer review without management participation
Ideally it is led by a trained moderator
Pre-meeting preparations by reviewers
Preparation of a review report which includes the list of findings, the verdict
whether the software product meets its requirements and recommendations
related to findings
May vary in practice from quite informal to very formal
Main purpose – discussing, making decisions, evaluating alternatives,
finding defects, solving technical problems and checking conformance to
specifications, plans, regulation and standards
Inspection
Led by a trained moderator (not by the author)
Usually conducted as a peer examination
Defined roles
Includes metrics gathering
Formal process based on rules and checklists
Specified entry and exit criteria for acceptance of the software product
Pre-meeting preparation
Inspection report including list of findings
Formal follow-up process
With optional process improvement components
Main purpose – finding defects
Success factors for reviews
Each review has a clear predefined objective
Review techniques are applied that are suitable to the type and level of
software work products and reviewers
The right people for the review objectives are involved
Testers are valued reviewers who contribute to the review and also learn
about the product
Defects found are welcomed, and expressed objectively
People issues and psychological aspects are dealt with
Success factors for reviews
Checklists or roles are used if appropriate to increase effectiveness of
defect identification
Training is given in review techniques, especially the more formal
techniques such as inspection
Management supports a good review process (e.g., by incorporating
adequate time for review activities in project schedules)
There is an emphasis on learning and process improvement
Static analysis by tools (1)
Static analysis - Analysis of software artifacts, e.g. requirements or code, carried out without
execution of these software artifacts.
Static analysis by tools (2)
Ideally performed before formal reviews
Unrelated to dynamic properties of the requirements, design
and code, such as test coverage
Finds defects rather than failures
Typically used by developers and designers
The compiler can be considered a static analysis tool
points out incorrect usage
checks for non-compliance to coding language conventions (syntax)
Compiler - A software tool that translates programs expressed in a high order language into their machine
language equivalents. [IEEE 610]
Static analysis by tools (3)
Typical defects discovered by static analysis tools
Referencing a variable with an undefined value
Inconsistent interface between modules and components
Variables that are not used or are improperly declared
Unreachable (dead) code
Missing and erroneous logic (potentially infinite loops)
Overly complicated constructions
Programming standards violations
Security vulnerabilities
Syntax violations of code and software models
Control Flow and Data Flow
Control Flow Analysis
Refers to the order in which the individual statements, instructions or function calls are
executed or evaluated
Can identify
○ Infinite loops
○ Unreachable code
○ Number of nested levels
○ Cyclomatic complexity
Data Flow Analysis
Looking for possible anomalies in the way the code handles the data items
○ Referencing a variable with an undefined value
○ Variables that are never used
Control flow analysis - A form of static analysis based on a representation of sequences of events (paths) in the execution through a
component or system.
Data flow - An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is
any of: creation, usage, or destruction.
Cyclomatic complexity
Cyclomatic complexity - The number of independent paths through a program. Cyclomatic
complexity is defined as: L – N + 2P, where:
○ L = the number of edges/links in a graph
○ N = the number of nodes in a graph
○ P = the number of disconnected parts of the graph
IF A = 5
THEN IF B > C
THEN A = B
ELSE A = C
ENDIF
ENDIF
Print A
Cyclomatic complexity = 8 – 7 + 2*1 =3
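The formula above can be sketched in a few lines: cyclomatic complexity is computed directly from an edge list representing the control-flow graph. The node numbering below is a hypothetical encoding of the nested-IF fragment, not taken from the slides.

```python
def cyclomatic_complexity(edges, parts=1):
    """Return L - N + 2P for a graph given as (from, to) edge pairs."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2 * parts

# Control-flow graph of the nested-IF fragment: 7 nodes, 8 edges.
example_edges = [
    (1, 2), (1, 6),  # IF A = 5 -> inner IF, or skip to outer ENDIF
    (2, 3), (2, 4),  # IF B > C -> THEN branch or ELSE branch
    (3, 5), (4, 5),  # both branches meet at the inner ENDIF
    (5, 6),          # inner ENDIF -> outer ENDIF
    (6, 7),          # outer ENDIF -> Print A
]
print(cyclomatic_complexity(example_edges))  # 8 - 7 + 2*1 = 3
```

With P = 1 (one connected graph), the result matches the slide's calculation of 3 independent paths.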
Test design techniques
Test design techniques (1)
The purpose of a test design technique is to identify
test conditions, test cases and test data
Dynamic techniques are subdivided into
Specification-based test design techniques as black-box
techniques
Structure-based test design techniques as white-box
techniques
Experience-based test design techniques
Test design techniques (2)
Common characteristics of specification-based test design
techniques
Models, either formal or informal, are used for the specification of the
problem to be solved, the software or its components
Test cases can be derived systematically from these models
Common characteristics of structure-based test design techniques
Information about how the software is constructed is used to derive the
test cases
The extent of coverage of the software can be measured for existing test
cases, and further test cases can be derived systematically to increase
coverage
Test design techniques (3)
Common characteristics of experience-based
techniques
The knowledge and experience of people are used to derive
the test cases
The knowledge of testers, developers, users and other
stakeholders about the software, its usage and its
environment is one source of information
Knowledge about likely defects and their distribution is
another source of information
Dynamic Techniques
Dynamic techniques
Specification-based: e.g. Equivalence Partitioning, Use Case Testing
Structure-based: e.g. Statement Testing
Experience-based: e.g. Error Guessing
Specification-based (Black-Box) testing
techniques
Black-box testing
Black-box testing - Testing, either functional or non-functional, without reference to the internal
structure of the component or system.
The black-box domain includes the following techniques
Equivalence Partitioning
Boundary Value Analysis
Decision Table Testing
State Transition Testing
Use Case Testing
Pairwise Testing
Classification Trees Testing
Equivalence Partitioning
Equivalence partitioning - A black box test design technique in which test cases are designed to execute representatives
from equivalence partitions. In principle test cases are designed to cover each partition at least once.
Equivalence partition - A portion of an input or output domain for which the behavior of a component or system is
assumed to be the same, based on the specification.
Equivalence partitioning is about testing various groups that we
expect the system to handle the same way
Exhibiting similar behavior for every single member of an equivalence
partition
Test cases are designed to cover each partition at least once
Equivalence partitioning aims to reduce the total number of test
cases to a feasible count
Excessive testing of all possible input / output values (or conditions) is
usually impossible
Equivalence Partitioning
Visualizing equivalence partitioning: the input set is partitioned into
subsets (e.g. Subset A and Subset B), and test cases are selected by
choosing a member of each partition
Equivalence Partitioning
Valid equivalence classes
Describe valid situations
The system should handle them normally
Invalid equivalence classes
Describe invalid situations
The system should reject them
○ Or at least escalate to the user for correction or exception handling
The coverage criterion
Every class, both valid and invalid, must be represented
in at least one test case
Example
A program calculates Christmas bonuses for
employees depending on their length of service with the
company (the company has existed for 15 years):
More than 3 years = 50% bonus
More than 5 years = 80% bonus
More than 8 years = 100% bonus
Answer of the Example
One possible partitioning – valid classes: 0 to 3 years (no bonus),
over 3 up to 5 years (50%), over 5 up to 8 years (80%),
over 8 up to 15 years (100%); invalid classes: less than 0 or
more than 15 years
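One way to exercise the partitions from the bonus example is sketched below. It assumes "more than N years" means strictly greater than N, that 0 to 3 years earns no bonus, and that valid service ranges from 0 to 15 years (the company's age); these assumptions are choices for illustration, not stated in the example.

```python
def christmas_bonus(years_of_service):
    """Bonus percentage for a given length of service (sketch)."""
    if years_of_service < 0 or years_of_service > 15:
        raise ValueError("invalid years of service")  # invalid partitions
    if years_of_service > 8:
        return 100
    if years_of_service > 5:
        return 80
    if years_of_service > 3:
        return 50
    return 0  # assumed: 0-3 years earns no bonus

# One representative value per valid equivalence partition:
for years in (2, 4, 7, 10):
    print(years, christmas_bonus(years))
```

Four tests, one per valid partition, satisfy the coverage criterion; two more (e.g. -1 and 20) would cover the invalid partitions.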
Boundary value analysis (BVA)
If the boundary values are members of a valid
equivalence class, they are valid boundary values
If they are members of an invalid equivalence class,
they are invalid boundary values
Not all equivalence classes have boundary values!
Boundary value analysis is an extension of equivalence
partitioning
Applies only when the members of an equivalence class are
ordered
Boundary value analysis
Example: The user must order a quantity greater than 0 and less than 100,
i.e. 1 <= x <= 99
Two-value boundary value approach – 0, 1, 99, 100
Three-value boundary value approach – 0, 1, 2, 98, 99, 100
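The boundary values for an ordered range, as in the 1 <= x <= 99 example above, can be derived mechanically. This is a small helper sketch; the function name and parameters are illustrative.

```python
def boundary_values(low, high, three_value=False):
    """Boundary test values for the valid range [low, high].

    Two-value approach: each boundary plus its nearest invalid neighbour.
    Three-value approach: additionally the nearest valid inner neighbours.
    """
    values = {low - 1, low, high, high + 1}
    if three_value:
        values |= {low + 1, high - 1}
    return sorted(values)

print(boundary_values(1, 99))                    # [0, 1, 99, 100]
print(boundary_values(1, 99, three_value=True))  # [0, 1, 2, 98, 99, 100]
```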
Decision Table Testing
Decision tables (cause-effect tables) are a method for testing the business
logic that lies underneath the user interface
Contains the triggering conditions, often combinations of true and false for all
input conditions, and the resulting actions for each combination of conditions
Each column of the table corresponds to a business rule that defines a unique
combination of conditions, which result in the execution of the actions
associated with that rule
The coverage standard commonly used with Decision Table Testing is to have
at least one test per column, which typically involves covering all
combinations of triggering conditions
The strength of Decision Table Testing is that it creates combinations of
conditions that might not otherwise have been exercised during testing
Decision Table Testing (2)
Each column in a decision table contains a single business rule
One test per column in the decision table has to be derived

Conditions   Rule1 R2 R3 R4 R5 R6 R7 R8
Condition A  Y     Y  Y  Y  N  N  N  N
Condition B  Y     Y  N  N  Y  Y  N  N
Condition C  Y     N  Y  N  Y  N  Y  N
Actions
Action A     Y     Y  N  N  Y  N  N  N
Action B     Y     N  Y  Y  N  Y  N  N
Decision Table Testing (3)
Conditions Population Pattern
Half of the first row is filled with "Yes", the other half – with
"No"
The second row is filled: first quarter "Yes", second quarter
"No"
The last row is filled: one cell "Yes", one cell "No" …
Conditions 1 2 3 4 5 6 7 8
Condition A Y Y Y Y N N N N
Condition B Y Y N N Y Y N N
Condition C Y N Y N Y N Y N
Actions
…
Decision Table Testing (4)
Collapsing columns in a decision table - not all columns in a decision table are
actually needed
We can sometimes collapse the decision table, combining columns, to achieve a more
concise decision table
Performed when the value of one or more particular conditions can't affect the actions for
two or more combinations of conditions
Conditions Rule1 R2 R3 R4 R5 R6 R7 R8
Condition A Y Y Y Y N N N N
Condition B Y Y N N Y Y N N
Condition C Y N Y N Y N Y N
Actions
Action A Y Y N N Y N N N
Action B Y N Y Y Y Y N N
Action C Y Y Y Y N N N N
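The population pattern for the condition rows can be generated rather than filled in by hand: a full decision table has one column per combination of Y/N conditions. This sketch enumerates the columns of the 3-condition table above; deriving the actions for each rule is left to the business logic.

```python
from itertools import product

def derive_rules(n_conditions):
    """Full decision table: one column per Y/N combination of conditions.

    Returns the columns in the standard population pattern: the first
    condition row is half True then half False, the second alternates in
    quarters, and the last row alternates cell by cell.
    """
    return list(product([True, False], repeat=n_conditions))

rules = derive_rules(3)
print(len(rules))  # 8 columns -> at least 8 tests for full coverage
print(rules[0])    # (True, True, True): Rule 1, all conditions hold
```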
State Transition Testing
State transition testing - A black box test design technique in which test cases are designed to
execute valid and invalid state transitions.
State transition - A transition between two states of a component or system.
State Transition Testing (2)
The states of the system or object under test are separate,
identifiable and finite in number
A state table shows the relationship between the states and
inputs, and can highlight possible transitions that are invalid
Tests can be designed to
cover a typical sequence of states
cover every state
exercise specific sequences of transitions
test invalid transitions
State Transition Testing (3)
A state transition model has four basic parts
The states that the software may occupy
The transitions from one state to another
○ Not all transitions are allowed
The events that cause a transition
The actions that result from a transition
○ The response of the system during the transition
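The four parts above (states, transitions, events, actions) can be captured in a transition table and used to test both valid and invalid transitions. The states and events below (a simple door) are a hypothetical example, not from the slides.

```python
# Transition table: (current state, event) -> next state.
# Pairs absent from the table are invalid transitions.
TRANSITIONS = {
    ("closed", "open"):   "opened",
    ("opened", "close"):  "closed",
    ("closed", "lock"):   "locked",
    ("locked", "unlock"): "closed",
}

def next_state(state, event):
    """Return the new state, or None if the transition is invalid."""
    return TRANSITIONS.get((state, event))

print(next_state("closed", "open"))  # opened (valid transition)
print(next_state("locked", "open"))  # None   (invalid transition)
```

Covering every state means visiting each of the three states; testing invalid transitions means feeding events the current state should reject, such as opening a locked door.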
State Transition Testing - Example: state diagram
Use Case Testing
Tests can be derived from use cases
A use case describes interactions between actors (users or
system), which produce a result of value to the customer
Use cases describe the “process flows” through a system
based on its actual likely use, so the test cases derived
from use cases are most useful in uncovering defects in
the process flows during real-world use of the system
Use case testing - A black box test design technique in which test cases are designed to execute user
scenarios
Use case - A sequence of transactions in a dialogue between a user and the system with a tangible
result.
Use Case Testing
Use case testing is a way to ensure that we have tested
typical and exceptional workflows and scenarios for the
system
Describes interactions between actors, including users and
the system, which produce a result of value to a system user
Use cases describe what the user does and what the user
sees rather than what inputs the software system expects
and what the system outputs
Use cases use the business language rather than technical
terms
Use Case Testing
Each use case has preconditions and postconditions
Use Cases represent two basic scenarios
Normal workflow
○ Shows the typical, normal processing
○ Also called: the primary scenario, the normal course, the basic
course, the main course, the happy path, etc.
Exceptions
○ Shows abnormal processing
○ Also called: exceptions, exceptional processing, or alternative
courses
Use case Example
A: Actor, S: System

Main Success Scenario:
Step 1 – A: Inserts card
Step 2 – S: Validates card and asks for PIN
Step 3 – A: Enters PIN
Step 4 – S: Validates PIN
Step 5 – S: Allows access to account

Exceptions:
Step 2a – Card not valid: S: Displays message and rejects card
Step 4a – PIN not valid: S: Displays message and asks for re-try (twice)
Step 4b – PIN invalid 3 times: S: Eats card and exits
Classification Trees Testing
Structure-based techniques
White-box testing
White-box testing - Testing based on an analysis of the internal structure of the component or system.
The white-box domain includes the following techniques
Statement Testing
Decision/Branch Testing
Data Flow Testing
Branch Condition Testing
Branch Condition Combination Testing
Modified Condition Decision Testing
LCSAJ Testing
Structure-based techniques
Structure-based techniques serve two purposes
Test coverage measurement
Structural test case design
What is test coverage?
The amount of testing performed by a set of tests
100% coverage does not mean 100% tested!
Statement coverage and statement testing
Statement - An entity in a programming language, which is typically the smallest indivisible unit of execution.
Statement coverage - The percentage of executable statements that have been exercised by a test suite.
Statement testing - A white box test design technique in which test cases are designed to execute statements.
Statement coverage and statement testing

Statement coverage = (Number of statements exercised / Total number of statements) x 100%

READ A
READ B
IF A > B
THEN C = 0
ENDIF

To achieve 100% statement coverage one test case is required,
e.g. A = 18 and B = 7
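The coverage formula can be traced on the pseudocode example by instrumenting each executable statement. This is a minimal sketch, assuming the fragment has four statements and that we simply record which ones a test executes.

```python
def run_program(a, b):
    """Simulate the pseudocode fragment, recording executed statements."""
    executed = {"READ A", "READ B", "IF A>B"}  # always executed
    if a > b:
        executed.add("C = 0")                  # only on the True branch
    return executed

ALL_STATEMENTS = {"READ A", "READ B", "IF A>B", "C = 0"}

# Statement coverage = exercised / total * 100%
coverage = 100 * len(run_program(18, 7)) / len(ALL_STATEMENTS)
print(coverage)  # 100.0 -- the single test A=18, B=7 reaches every statement
```

A test with A <= B (e.g. A = 5, B = 9) would reach only three of the four statements, giving 75% statement coverage.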
Decision(Branch) coverage and decision testing
Decision testing - A white box test design technique in which test cases are designed to execute
decision outcomes.
Decision outcome - The result of a decision (which therefore determines the branches to be taken).
Decision coverage - The percentage of decision outcomes that have been exercised by a test suite.
100% decision coverage implies both 100% branch coverage and 100% statement coverage.
Decision coverage is the assessment of the percentage of decision
outcomes (e.g. the True and False options of an IF statement) that have
been exercised by a test case suite
The decision testing technique derives test cases to execute specific
decision outcomes
Black-box testing may actually achieve only 40% to 60% decision coverage
Decision (Branch) coverage and decision testing

Decision coverage = (Number of decision outcomes exercised / Total number of decision outcomes) x 100%

READ A
READ B
IF A > B
THEN C = 0
ENDIF

The single test case A = 18 and B = 7 achieves 100% statement coverage
but only 50% decision coverage; a second test case with A <= B
(e.g. A = 5 and B = 9) is needed for 100% decision coverage
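Decision coverage on the same fragment can be traced by recording which outcomes of the IF the tests exercise. A sketch, with the two outcomes of `A > B` as the only decision outcomes:

```python
def decision_outcomes(test_inputs):
    """Record which outcomes of the IF A > B decision the tests exercise."""
    outcomes = set()
    for a, b in test_inputs:
        outcomes.add(a > b)  # True outcome or False outcome
    return outcomes

# One test exercises only the True outcome:
print(100 * len(decision_outcomes([(18, 7)])) / 2)          # 50.0
# Adding a test where A <= B exercises both outcomes:
print(100 * len(decision_outcomes([(18, 7), (5, 9)])) / 2)  # 100.0
```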
Other structure-based techniques
Linear code sequence and jump (LCSAJ)
100% LCSAJ coverage implies 100% decision coverage
Branch Condition coverage, multiple condition coverage (also
known as condition combination coverage)
Condition determination coverage (also known as multiple condition
decision coverage or modified condition decision coverage, MCDC)
Path coverage
100% path coverage implies 100% LCSAJ coverage
Test case design techniques and test measurement techniques are
described in BS7925-2
BS 7925 – 2 Standard for Software Component Testing
Experience-based techniques
Experience-based Techniques
Tests are derived from the tester’s skill and intuition
and their experience with similar applications and
technologies
Can be useful in identifying special tests not easily
captured by formal techniques
Especially when applied after more formal approaches
These techniques may yield widely varying degrees of
effectiveness, depending on the testers’ experience
Error guessing
Error guessing - A test design technique where the experience of the tester is used to anticipate what defects might be present in
the component or system under test as a result of errors made, and to design tests specifically to expose them
Error guessing is a technique that should always be used as a
complement to other more formal techniques
There are no rules for error guessing
A structured approach to the error-guessing technique is to list possible
defects and failures and to design tests that attempt to produce them
Called Fault attack
These defect and failure lists can be built based on
Experience
Available defect and failure data
Common knowledge about why software fails
Exploratory testing
Exploratory testing - An informal test design technique where the tester actively controls the design of the tests as those tests are
performed and uses information gained while testing to design new and better tests.
Hands-on approach in which testers are involved in minimum planning and
maximum test execution
Time-boxed test effort
The test design and test execution activities are performed in parallel
typically without formally documenting the test conditions, test cases or test
scripts
Most useful when:
There are no or poor specifications
Time is severely limited
It can serve as a check on the test process, to help ensure that the most
serious defects are found
Choosing a test technique (1)
The choice of which test technique to use depends on a number of factors:
The type of system
Regulatory standards
Customer or contractual requirements
Level of risk
Type of risk
Test objective
Documentation available
Knowledge of the testers
Time and budget
Development life cycle
Previous experience of types of defects found
Choosing a test technique (2)
Some techniques are more applicable to certain
situations and test levels
Others are applicable to all test levels
There is no single best testing technique
Testers generally use a combination of test techniques
to ensure adequate coverage of the object under test
Equivalence partitioning example
For example, a savings account in a bank has a different
rate of interest depending on the balance in the account.
In order to test the software that calculates the interest
due, we can identify the ranges of balance values that
earn the different rates of interest. For example, 3% rate
of interest is given if the balance in the account is in the
range of $0 to $100, 5% rate of interest is given if the
balance in the account is in the range of $100 to $1000,
and 7% rate of interest is given if the balance in the
account is $1000 and above
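The interest-rate partitions from the bank example can be sketched directly. The slide's wording overlaps at $100 and $1000, so the half-open ranges [0, 100), [100, 1000) and [1000, infinity) used below are one reasonable reading, not a given.

```python
def interest_rate(balance):
    """Interest rate (%) for a savings account balance (sketch)."""
    if balance < 0:
        raise ValueError("negative balance")  # invalid partition
    if balance < 100:
        return 3
    if balance < 1000:
        return 5
    return 7

# One representative value per valid equivalence partition:
for balance in (50, 500, 5000):
    print(balance, interest_rate(balance))
```

Three tests cover the valid partitions; a fourth with a negative balance covers the invalid one.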
Equivalence partitioning answer
One possible partitioning – valid classes: $0 up to $100 (3%),
over $100 up to $1000 (5%), $1000 and above (7%);
invalid class: negative balance

Decision coverage exercise
READ A
READ B
C = A – 2 * B
IF C < 0 THEN
PRINT “C negative”
ENDIF
Decision coverage answer
Test 1: A = 20, B = 15
Test 2: A = 10, B = 2
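The two answer tests can be checked quickly: together they exercise both outcomes of the IF C < 0 decision in the fragment above. A minimal sketch:

```python
def c_negative(a, b):
    """The decision of the fragment: is C = A - 2*B negative?"""
    c = a - 2 * b
    return c < 0

# Test 1: C = 20 - 30 = -10 -> True; Test 2: C = 10 - 4 = 6 -> False.
outcomes = {c_negative(20, 15), c_negative(10, 2)}
print(outcomes)  # both decision outcomes -> 100% decision coverage
```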