Confidential

ISTQB Foundation Level

Nataliia Shevchuk
(Senior Test Engineer)


Table of Contents:

1. Fundamentals of testing
2. Testing throughout the software life cycle
3. Static techniques
4. Test design techniques
5. Test management
6. Tool support for testing


Fundamentals of Testing


1. Fundamentals of Testing
1.1 Why is Testing necessary? (K2)

1.2 What is Testing? (K2)

1.3 Seven Testing principles (K2)

1.4 Fundamental Test Process (K1)

1.5 The Psychology of Testing (K2)

1.6 Code of Ethics

Terms: bug, defect, error, failure, fault, mistake, quality, risk.


1.1 Why is testing necessary?

➢ Software defects or bugs can cause problems for people, the environment
or a company.
➢ For a tester, it is important to draw distinctions between defects, their root
causes and their effects.
➢ Testing helps to find these defects and trace their effects, thereby promoting
quality and becoming part of the quality assurance process.


Terms to use:
• Requirement is a condition or capability needed by a user to solve a problem or achieve an objective.

• Quality is the degree to which a component, system or process meets specified requirements and/or
user/customer needs and expectations.

• Test basis is all documents from which the requirements of a component or a system can be inferred.

• Test objective is a reason or purpose for designing and executing a test.

• Deliverable is any (work) product that must be delivered to someone other than the (work) product
creator.

• Testware is artifacts produced during the test process, produced by both verification and validation
testing methods.


1.1.1 Software systems context

Context dependency is connected to domain and technology.

- COTS (Commercial Off-The-Shelf software)
- Safety-critical systems

Role of testing – to reduce the risk of problems occurring during software creation and to
contribute to the quality of the software system, provided that the defects found are corrected
before the system is released to operational use.


1.1.2 Causes of software defects


Terms to use:

• Mistakes (errors) are made by humans.
• Faults, bugs and defects are flaws in a component or system.
• Failures are deviations of the component or system from its expected delivery, service or result.


Relative cost per defect

(Chart: the relative cost of fixing a defect rises steeply the later in the life cycle it is found.)

Failures can be caused by:

• Errors in the specification, design and implementation of the software and system
• Errors in use of the system
• Environmental conditions: radiation, magnetism, electric fields etc.
• Intentional damage
• Potential consequences of earlier errors, intentional damage, defects and failures


1.1.3 Role of testing in software development, maintenance and operations

➢ Testing helps to measure risks and to reduce failures. Testing may also be required to
meet contractual or legal requirements, or industry-specific standards.

➢ Ways of measuring the system’s quality:


- Rate of defect discovery
- Number of known defects
- Extent of test coverage
- Percentage of tests that have passed


1.1.4 Testing and Quality*

➢ Testing ensures that key functional and non-functional requirements are met.
➢ Testing measures the quality of software in terms of the number of defects found,
the tests run, and the system covered by the test.
➢ Testing reduces the overall level of risk in a system.
➢ Testing gives confidence in the quality of the software, if no defects are found.
➢ Testing helps to improve quality when defects are found and fixed.
➢ Testing is one of the quality assurance activities.

When lessons are learned and root causes are analyzed, processes can be
enhanced and, as a consequence, quality is improved.

* Software Engineering – Software Product Quality (ISO 9126)


Software product Quality (ISO 9126)

Functionality:
1. Suitability
2. Accuracy
3. Interoperability
4. Security
5. Functional Compliance

Usability:
1. Understandability
2. Learnability
3. Operability
4. Attractiveness
5. Usability Compliance

Reliability:
1. Maturity
2. Fault Tolerance
3. Recoverability
4. Reliability Compliance

Maintainability:
1. Analyzability
2. Changeability
3. Stability
4. Testability
5. Maintainability Compliance

Efficiency:
1. Time Behavior
2. Resource Utilization
3. Efficiency Compliance

Portability:
1. Adaptability
2. Installability
3. Co-Existence
4. Replaceability
5. Portability Compliance


1.1.5 How much Testing is enough?

Tools to give information for decision-making:


➢ Measure residual level of risk including technical, safety, and business risks.
➢ Balance project constraints: time and budget.


1.2 What is testing?

• These objectives are valid for both dynamic and static testing and help to
improve the system under test, the development process and the testing process.

• Early test design, review of documents, and identification and resolution of
issues help to prevent defects from being introduced into the code.


Development Testing:

• The main goal of development testing (component, integration, system) is to
cause as many failures as possible, so its objectives are preventing defects,
finding defects and assessing quality.

• System testing can have the objective 'gaining confidence about the level of
quality' if no acceptance testing is performed.


Acceptance/ maintenance/ operational testing:

Acceptance testing is used to gain confidence about the level of quality and to
provide information for decision-making.

Maintenance testing assures that new defects are not added to an existing
system, so its objectives are gaining confidence and assessing quality.

Operational testing is used to assess availability or reliability. Objectives:
prevent and find defects, assess quality.


Debugging vs Testing

Debugging is often equated with testing, but they are entirely different activities:

Testing is the process consisting of all lifecycle activities (both dynamic and
static) concerned with planning, preparation and evaluation of software products
and related work products, to determine that they satisfy specified requirements,
to demonstrate that they are fit for purpose and to detect defects.

Debugging is the process of finding, analyzing and removing the causes of
failures in software.


1.3 Seven testing principles


1.4 Fundamental Test Process


1.4.1 Test Planning and Control


1.4.2 Test Analysis and Design


Test Analysis and Design

- Analyze the test basis and the test objectives defined at the planning stage to ensure
they are testable and unambiguous. Update them if needed.

Major tasks:
➢ Create and prioritize test conditions and expected results.
➢ Create prioritized high level test cases.
➢ Verify bi-directional traceability between test basis and test cases.
➢ Prepare needed test data and environment.


1.4.3 Test Implementation and Execution


Test Implementation and Execution

Major tasks:

➢ Finalizing, implementing and prioritizing test cases.
➢ Developing and prioritizing test procedures, creating test data, preparing test
harnesses and writing automated test scripts.
➢ Creating test suites for efficient test execution.
➢ Verifying that the test environments have been setup correctly.
➢ Verifying and updating bi-directional traceability.
➢ Executing test procedures manually or by using execution tools.
➢ Logging the outcome of test execution.
➢ Comparing actual results with expected results.
➢ Reporting and analyzing discrepancies.
➢ Re-execution of a test that previously failed.


1.4.4 Evaluating Exit criteria and Reporting


Evaluating Exit criteria and Reporting

If the defined objectives are reached, we start to evaluate exit criteria.

Major tasks:

➢ Writing a test summary report for stakeholders.
➢ Checking test logs against the exit criteria specified in test planning.
➢ Assessing if more tests are needed or if the exit criteria specified should be changed.


1.4.5 Test Closure activities


Closure activities

Major tasks:

➢ Checking which planned deliverables have been delivered.
➢ Closing incident reports or raising change records for any that remain open.
➢ Analyzing lessons learned for future releases.
➢ Finalizing and archiving testware, test environment, test infrastructure.
➢ Documenting the acceptance of the system.
➢ Handing over the testware to the maintenance organization.
➢ Using the information gathered to improve test maturity.


1.5 The Psychology of Testing


Psychology of Testing

Make sure that the team does not see testing as a destructive activity, but as a very
constructive one in the management of product risk.

Communication skills:

➢ Avoid criticism.
➢ Use neutral, fact-focused way to communicate without criticizing.
➢ Try to understand how the other person feels and why they react as they do.
➢ Confirm that the other person has understood what you have said and vice versa.


Independence of Testing

Independence of Testing is a separation of responsibilities that encourages
the accomplishment of objective testing.

Levels of independence, from low to high:
• Tests designed by the person who wrote the software.
• Tests designed by another person.
• Tests designed by a person from a different organizational group.
• Tests designed by a person from a different company.


1.6 Code of ethics

Public – software engineers (SEs) should act consistently with the public interest.

Client/Employer – SEs should act in a manner that is in the best interest of their
client and employer, consistent with the public interest.

Product – SEs should ensure that their products and related modifications meet the
highest professional standards possible.

Judgment – SEs should maintain integrity and independence in their professional
judgment.

Management – SE managers and leaders should subscribe to and promote an ethical
approach to the management of software development and maintenance.

Profession – SEs should advance the integrity and reputation of the profession
consistent with the public interest.

Colleagues – SEs should be fair to and supportive of their colleagues.

Self – SEs should participate in lifelong learning regarding the practice of their
profession and should promote an ethical approach to the practice of the profession.

Testing throughout the software life cycle


2.Testing throughout the software life cycle


2.1 Software Development models (K2)
2.2 Test levels (K2)
2.3 Test Types: the Targets of Testing (K2)
2.4 Maintenance Testing (K2)


2.1 Software development models


Terms: Commercial Off-The-Shelf (COTS), iterative-incremental development model, validation,
verification, V-model


2.1.1 V-model (Sequential Development Model)

4 levels:
• Component (unit) testing
• Integration testing
• System testing
• Acceptance testing


2.1.2 Iterative-incremental Development models

Examples:
1. Prototyping
2. Rapid Application Development
(RAD)
3. Rational Unified Process (RUP)
4. Agile development model

A system may be tested at several test levels during each iteration.
Verification and validation can be carried out on each increment.


2.1.3 Testing within a Life Cycle Model

Characteristics of good testing:

➢For every development activity there is a corresponding testing activity.


➢Each test level has test objectives specific to that level
➢The analysis and design of tests for a given test level should begin during the
corresponding development activity.
➢Testers should be involved in reviewing documents and code as soon as drafts are
available in the development life cycle.


2.2 Test Levels


Terms: Alpha testing, beta testing, component testing, driver, field testing, functional requirements,
integration, integration testing, non-functional requirement, robustness testing, stub, system testing, test
environment, test level, test-driven development, user acceptance testing, “big-bang”.


2.2.1 Component testing

Terms: unit, module, program testing, driver, stub.

➢ Lowest level of testing.
➢ Also known as unit, module or program testing.
➢ Unit testing is intended to ensure that the code
written for the unit meets its specification, prior
to its integration with other units.
➢ Usually done by the programmer.

Notes:

1. Defects found and fixed during unit testing are
often not recorded but fixed immediately.
2. Usually each unit is tested in isolation; stubs,
drivers and simulators may be used.
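The stub/driver idea above can be sketched in code. This is a minimal, hypothetical example (the names `apply_discount` and `PricingServiceStub` are invented for illustration): the unit under test normally depends on an external service, so a stub stands in for that dependency and a driver invokes the unit.

```python
# Sketch: testing a unit in isolation with a stub and a driver.
# All names here are hypothetical, for illustration only.

def apply_discount(order_total, pricing_service):
    """Unit under test: applies whatever discount rate the service returns."""
    rate = pricing_service.discount_rate(order_total)
    return round(order_total * (1 - rate), 2)

class PricingServiceStub:
    """Stub: returns a fixed, predictable rate instead of calling a real service."""
    def discount_rate(self, order_total):
        return 0.10 if order_total >= 100 else 0.0

def run_component_test():
    # Driver: the code that invokes the unit and checks its results.
    assert apply_discount(200.0, PricingServiceStub()) == 180.0
    assert apply_discount(50.0, PricingServiceStub()) == 50.0
    return "pass"
```

Because the stub is deterministic, a failure here points at the unit itself, not at its environment.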


2.2.2 Integration testing


Terms: component and system integration testing.

➢ Tests more than one (tested) component.
➢ Tests interfaces and communication between components.
➢ Done by designers, analysts, or specialist integration testers.

Notes:

1. Any test type may apply (functional, non-functional).
2. Good practice – isolate defects.
3. Use an incremental approach rather than 'big bang'; plan in advance
(before components are done).


Incremental integration


2.2.3 System testing


Terms: master test plan.

➢ System testing is concerned with the behavior of a whole system/product.
It should be described in the master test plan.
➢ The test environment should correspond to the final target or production
environment as much as possible, in order to minimize the risk of
environment-specific failures not being found in testing.
➢ Done by an independent test group.

Notes:
1. Any test type may apply (functional, non-functional).
2. It is most appropriate to use specification-based techniques, but menu
structures and web page navigation can be tested with white-box techniques.


2.2.4 Acceptance testing


Terms: stakeholders, reports.

➢ Acceptance testing is often the responsibility of the customers or users of a system.
➢ The goal of acceptance testing is to establish confidence in the system.
➢ Finding defects is not the main focus.
➢ Acceptance testing may occur at various times in the life cycle.

Test basis:
• User requirements
• System requirements
• Use cases
• Business processes
• Risk analysis reports

Test objects:
• Business processes on a fully integrated system
• Operational and maintenance processes
• User procedures
• Forms
• Reports
• Configuration data

Notes:
1. Any test type may apply (functional, non-functional).
2. A COTS software product may be acceptance tested when it is installed or integrated.
3. Acceptance testing of a new functional enhancement may come before system testing.

User Acceptance testing


2.3 Test Types


Terms: code coverage, black-box testing, interoperability testing, maintainability testing, portability
testing, load testing, stress testing.
➢ Functional Testing = black-box testing = specification-based testing

➢ Non-functional Testing (ISO 9126)

➢ Structural Testing = white-box testing = glass-box testing

➢ Re-testing and Regression testing = testing related to changes


2.3.1 Functional testing

Notes:
• Functional tests are based on functions and features and may be performed at all test levels.
• Specification-based (black-box) techniques are used because they consider the external
behavior of the software.


2.3.2 Non-functional testing

Notes:
• Non-functional testing may be performed at all test levels.
• Quality model defined in ‘Software Engineering – Software Product Quality’ (ISO-9126).


2.3.3 Structural testing (Testing of Software Architecture)

➢ Structural (white-box) testing may be performed at all test levels, but especially
in component testing and component integration testing; tools are often used.

➢ Here we measure structural aspects of the system.

➢ A common measure is to look at how much of the actual code that has been
written has been tested.

Notes:
• Best used after specification-based techniques.
• Coverage shows how much was tested.


2.3.4 Testing related to changes: Re-testing and Regression testing

Notes:
• May be performed at all test levels, and includes functional,
non-functional and structural testing.
• Regression test suites are strong candidates for automation.
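A regression suite of this kind can be sketched as plain data plus a runner. This is a minimal illustration with invented names (`slugify` and its test list are hypothetical): each named check is re-run after every change, and any previously passing check that now fails signals a regression.

```python
# Sketch: a tiny automated regression suite. All names are hypothetical.

def slugify(title):
    # Unit under regression test: turns a title into a URL slug.
    return "-".join(title.lower().split())

# Each entry is (test name, zero-argument check returning True on pass).
REGRESSION_SUITE = [
    ("simple title", lambda: slugify("Hello World") == "hello-world"),
    ("extra spaces", lambda: slugify("  ISTQB   Foundation ") == "istqb-foundation"),
]

def run_regression_suite():
    # Returns the names of failing tests; an empty list means no regression found.
    return [name for name, check in REGRESSION_SUITE if not check()]
```

Keeping the suite as data makes it cheap to re-execute on every build, which is exactly why regression suites are strong automation candidates.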


Test levels vs Test types

All test types can be performed at any test level.


2.4 Maintenance Testing


Terms: Impact analysis, maintenance testing.
➢ Maintenance testing is done on an existing operational system, and is triggered
by modifications, migration, or retirement of the software system.

➢ Maintenance testing = new tests + regression of old ones.

➢ Determining how the existing system may be affected by changes is called
impact analysis, and is used to help decide how much regression testing to do.

➢ Migration testing (conversion testing) happens when migrating from one platform
to another, or when data from another application will be migrated into the
system being maintained.

Static techniques


3. Static techniques
3.1 Static techniques and the Test Process (K2)
3.2 Review process (K2)
3.3 Static analysis by tools (K2)

Terms: Manager, Moderator (review leader, mediator), Author, Reviewers (checkers, inspectors),
Scribe (recorder)


3.1 Static Techniques and the Test Process

Success factors for reviews:

➢ Clear objectives; the right people are involved (testers!); defects found are
welcomed; an atmosphere of trust; review techniques are applied; checklists
or roles are used; management support; training; emphasis on learning and
process improvement.

➢ A review could be done as a manual activity, but there is also tool support.

➢ A review is a systematic examination of a document by one or more people,
with the aim of finding and removing errors.

Notes:
Reviews can be used to test anything that is written or typed; this can include
documents such as requirements specifications, system designs, code, test plans
and test cases.


Benefits of Reviews


3.2 Review Process


Terms: entry criteria, formal review, informal review, inspection, metric, moderator, peer review, reviewer,
scribe, technical review, walkthrough
Reviews vary from informal, characterized by no written
instructions, to systematic, characterized by team
participation, documented result of the review, and
documented procedures for conducting the review.

The formality of the review process is related to factors


such as the maturity of the development process, any legal
or regulatory requirements or the need for an audit trail.


3.2.1 Activities of a Formal Review

Planning:
• Defining the review criteria
• Selecting the personnel
• Allocating roles
• Defining the entry and exit criteria for more formal review types (inspections)
• Selecting which parts of documents to review
• Checking entry criteria (for more formal review types)

Kick-off:
• Distributing documents
• Explaining the objectives, process and documents to the participants

Individual preparation:
• Preparing for the review meeting by reviewing the document(s)
• Noting potential defects, questions and comments

Examination/evaluation/recording of results:
• Discussing or logging, with documented results or minutes (for more formal review types)
• Noting defects, making recommendations regarding handling the defects, making
decisions about the defects
• Examining/evaluating and recording issues during any physical meetings or tracking
any group electronic communications

Rework:
• Fixing defects found (typically done by the author)
• Recording updated status of defects (in formal reviews)

Follow-up:
• Checking that defects have been addressed
• Gathering metrics
• Checking on exit criteria (for more formal review types)


3.2.2 Roles and responsibilities

Manager - decides on the execution of reviews, allocates time in project schedules and determines if the
review objectives have been met.

Moderator - leads the review, including planning the review, running the meeting, and following-up after
the meeting. Moderator can mediate between the various points of view and is often the person upon
whom the success of the review rests.

Author - the writer or person with chief responsibility for the document(s) to be reviewed.

Reviewers - individuals with a specific technical or business background who, after the necessary
preparation, identify and describe findings in the product under review.

Scribe – documents all the issues, problems and open points that were identified during the meeting.


3.2.3 Types of Reviews


3.2.4 Success Factors for Reviews


3.3 Static Analysis by Tools


Terms: compiler, complexity, control flow, data flow, static analysis

Control flow: a sequence of events (paths) in the execution through a
component or system.

Data flow: an abstract representation of the sequence and possible changes
of the state of data objects, where the state of an object is any of:
• creation
• usage
• destruction
Notes:
Compilers may offer some support for static analysis,
including the calculation of metrics.


Static Analysis by Tools


Code metrics
When performing static code analysis, information is calculated about structural attributes of the code such as:
• comment frequency
• depth of nesting
• cyclomatic number
• number of lines of code

Complexity metrics identify high risk and complex areas.


Experienced programmers know that 20% of the code will cause 80% of the problems, and complexity analysis helps to
find that all-important 20%.

Cyclomatic complexity
Cyclomatic complexity is the maximum number of linear, independent paths through a program.

The easiest way is to sum the number of binary decision statements (e.g. if, while, for, etc.) and add 1 to it.
More formally, cyclomatic complexity may be computed as: L - N + 2P
where
L = the number of edges/links in the graph,
N = the number of nodes in the graph,
P = the number of connected components (P = 1 for a single graph with no called subroutines).


Cyclomatic complexity example

IF A = 354
  THEN IF B > C
    THEN A = B
    ELSE A = C
  ENDIF
ENDIF
Print A

The control flow graph has seven nodes (shapes) and eight edges (lines); using the formal
formula, the cyclomatic complexity is 8 - 7 + 2 = 3. In this case there are no called graphs
or subroutines.

Alternatively, one may calculate the cyclomatic complexity using the decision-points rule.
Since there are two decision points, the cyclomatic complexity is 2 + 1 = 3.
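Both calculation rules from the example can be captured as one-line functions. This is a minimal sketch (the function names are invented); it simply encodes the slide's two formulas and checks them against the example's numbers.

```python
# Sketch: computing cyclomatic complexity two ways, matching the slide's
# example (7 nodes, 8 edges, 2 binary decisions). Function names are illustrative.

def complexity_from_graph(edges, nodes, connected_components=1):
    # Formal formula: L - N + 2P (P = 1 for a single control-flow graph).
    return edges - nodes + 2 * connected_components

def complexity_from_decisions(binary_decisions):
    # Decision-points rule: number of binary decision statements plus one.
    return binary_decisions + 1

# The example program has 8 edges, 7 nodes and 2 decisions; both rules give 3.
assert complexity_from_graph(edges=8, nodes=7) == 3
assert complexity_from_decisions(2) == 3
```

The agreement of the two results is not a coincidence: for a single connected control-flow graph built from binary decisions, both formulas count the same independent paths.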


Test design techniques


4. Test design techniques


4.1 The Test Development Process (K3)
4.2 Categories of Test Design techniques (K2)
4.3 Specification-based or Black-box techniques (K3)
4.4 Structure-based or White-box techniques (K4)
4.5 Experience-based techniques (K2)
4.6 Choosing Test Techniques (K2)

Terms: Test case specification, test design, test execution schedule, test procedure specification,
test script, traceability.


4.1 The Test Development Process


Test case specification: a document specifying a set of test cases (objective, inputs, test actions,
expected results, and execution preconditions) for a test item.

Test script: commonly used to refer to a test procedure specification, especially an automated one.


Traceability
Traceability: the ability to identify related items in documentation and software, such as
requirements with associated tests. Can be horizontal and vertical.

Vertical traceability matrix: a high-level document which maps the requirements
to all phases of the software development cycle, i.e. unit testing, component
integration testing, system integration testing, smoke/sanity testing, system
testing, acceptance testing, etc.

Horizontal traceability matrix: used for coverage analysis; when a requirement
changes, it is used to identify the test cases prepared for that requirement.
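The coverage-analysis use of a horizontal traceability matrix can be sketched as a lookup table. This is a minimal illustration with hypothetical requirement and test-case IDs: given a set of changed requirements, it returns the test cases that must be re-run.

```python
# Sketch: horizontal traceability as a requirement -> test-case mapping.
# REQ-* and TC-* identifiers are hypothetical.

traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
}

def impacted_test_cases(changed_requirements):
    # Coverage analysis: collect every test case tracing to a changed requirement.
    impacted = []
    for req in changed_requirements:
        impacted.extend(traceability.get(req, []))
    return sorted(set(impacted))
```

In practice the same bi-directional mapping also answers the reverse question (which requirement a failing test case traces back to), which supports impact analysis during maintenance testing.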


Test condition – an item or event of a component or system that could be
verified by one or more test cases: a function, transaction, feature,
quality attribute or structural element.
• Located in the test design specification document.
• Needs to be prioritized.

Test case – a set of input values, preconditions, expected results and
post-conditions, developed for a particular objective(s) or test condition(s).
• Can cover more than one test condition.
• Located in the test case specification document.
• Needs to be prioritized.

Test procedure – a sequence of actions for the execution of a test.
• Located in the test procedure specification document.
• Needs to be prioritized.


4.2 Categories of Test Design techniques


Characteristics of specification-based test design techniques (black-box testing):

➢ Models, either formal or informal, are used for the specification, and test cases
can be derived systematically from these models.

Characteristics of structure-based test design techniques (white-box testing):

➢ Information about how the software is constructed is used to derive the test cases.
➢ The extent of coverage of the software can be measured for existing test cases,
and can grow.

Characteristics of experience-based test design techniques:

➢ The knowledge and experience of people are used to derive the test cases.
➢ The knowledge of testers, developers, users and other stakeholders about the
software, its usage and its environment is one source of information.
➢ Knowledge about likely defects and their distribution is another.


4.3 Specification-based or Black-box techniques


Terms: BVA, EP, decision table, state transition, use case testing


4.3.1 Equivalence Partitioning (EP)

✓ A black-box test design technique in which test cases are designed to execute
representatives from equivalence partitions.

✓ Inputs are divided into partitions that are expected to exhibit similar behavior.

✓ In principle, test cases are designed to cover each partition at least once.

✓ Equivalence partitions (or classes) can be found for both valid data and invalid data.
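The technique can be shown on a concrete rule. This is a minimal sketch assuming a hypothetical age field valid from 18 to 65: three partitions, one representative value tested per partition.

```python
# Sketch: equivalence partitioning for a hypothetical age field valid in 18..65.

def is_valid_age(age):
    # Rule under test (hypothetical): ages 18..65 inclusive are accepted.
    return 18 <= age <= 65

# Three partitions: invalid-low (<18), valid (18..65), invalid-high (>65),
# each represented by a single typical value.
representatives = {"invalid_low": 10, "valid": 30, "invalid_high": 70}

def run_ep_tests():
    # One test per partition; any value in a partition is expected to behave
    # the same as its representative.
    return {name: is_valid_age(value) for name, value in representatives.items()}
```

Because every value in a partition is expected to behave alike, three test cases stand in for the whole input range instead of testing every age individually.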



4.3.2 Boundary Value Analysis (BVA)

✓ A black-box test design technique in which test cases are designed based on
boundary values.

✓ Boundaries are an area where testing is likely to find defects.

✓ Test cases can be designed to cover both valid and invalid boundary values.
When designing test cases, a test for each boundary value is chosen.

✓ This technique is often considered an extension of equivalence partitioning.

✓ Boundary value analysis can be applied at all test levels.
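Continuing the same hypothetical 18..65 age rule, boundary value analysis picks the values on and immediately outside each boundary. A minimal sketch:

```python
# Sketch: boundary value analysis for a hypothetical age field valid in 18..65.

def is_valid_age(age):
    # Rule under test (hypothetical): ages 18..65 inclusive are accepted.
    return 18 <= age <= 65

def boundary_values(low, high):
    # Two-value BVA: each boundary plus its nearest invalid neighbour.
    return [low - 1, low, high, high + 1]

# For 18..65 this yields tests at 17, 18, 65 and 66.
cases = [(value, is_valid_age(value)) for value in boundary_values(18, 65)]
```

A common defect this catches is an off-by-one comparison (e.g. `<` written instead of `<=`), which equivalence partitioning's mid-partition representatives would miss.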



4.3.3 Decision Table Testing

✓ A black-box test design technique in which test cases are designed to
execute the combinations of inputs and/or stimuli (causes) shown in a
decision table.
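A decision table can be executed directly as data. This is a minimal sketch for a hypothetical loan-approval rule (the conditions and action are invented): each column of the table becomes one entry, and one test case per column covers every rule once.

```python
# Sketch: a decision table for a hypothetical loan rule, executed as data.
# Conditions: has_account, good_credit; action: approve the loan?

decision_table = {
    # (has_account, good_credit): approve
    (True, True): True,
    (True, False): False,
    (False, True): False,
    (False, False): False,
}

def approve_loan(has_account, good_credit):
    # Look up the action for this combination of conditions.
    return decision_table[(has_account, good_credit)]

# One test case per column of the table covers every rule exactly once.
assert approve_loan(True, True) is True
assert approve_loan(False, True) is False
```

Writing the table out exhaustively like this makes missing combinations obvious: with two binary conditions there must be exactly four columns.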


4.3.4 State Transition Testing


Terms: state transition table.

✓ A black-box test design technique in which test cases are designed to execute
valid and invalid state transitions.

✓ The state transition technique is concerned with systems that may exhibit a
different response depending on current conditions or previous history (their state).

✓ It allows the tester to view the software in terms of its states, transitions between
states, the inputs or events that trigger state changes (transitions), and the actions
which may result from those transitions.

• Tests can be designed to cover a typical sequence of states, to cover every state,
to exercise specific sequences of transitions, or to test invalid transitions.

• Creating a state transition table often shows combinations that were not identified
in the requirements. It is highly beneficial to discover these defects before coding begins.



4.3.5 Use Case Testing


✓ A black-box test design technique in which test cases are designed to execute scenarios of use cases.
✓ Use case describes interactions between actors (users or external system) and the system, which
produce a result of value to a system user or the customer.
✓ Test cases derived from use cases are most useful in uncovering defects in the process flows during
real-world use of the system.
✓ A use case usually has a most likely scenario and alternative paths, preconditions, postconditions and
final state of the system after the use case has been completed.
✓ Use cases are very useful for designing acceptance tests with customer/user participation.
✓ An excellent basis for system level testing.
✓ They also help uncover integration defects caused by the interaction and interference of different
components.
✓ Designing test cases from use cases may be combined with other specification-based test techniques.



4.4 Structure-based or White-box techniques


Terms: code coverage, decision coverage, statement coverage.
Levels:
➢ Structure-based or white-box testing is based on an ✓ Component level: the structure of a software
identified structure of the software or the system:
component, i.e., statements, decisions,
- Statement testing and coverage
- Decision testing and coverage branches or even distinct paths.
- Other structure-based techniques
✓ Integration level: the structure may be a call
➢ Structure-based techniques serve two purposes: tree (a diagram in which modules call other
- test coverage measurement
modules).
- structural test case design
✓ System level: the structure may be a menu
➢ They are often uses to assess the amount of testing
structure, business process or web page
performed by tests derived from specification-based
techniques, i.e., to assess coverage. structure.
✓ Condition coverage and multiple condition
➢ They are then used to design additional tests with
the aim of increasing the test coverage. coverage.

Understanding Software code


4.4.1 Statement testing and coverage


✓ Statement coverage does not ensure coverage of all functionality – it is the
weakest white-box criterion.
✓ If we test every 'executable' statement, we call this full or 100 per cent
statement coverage.
✓ 100% decision coverage guarantees 100% statement coverage, but not vice versa.
✓ Percentage statement coverage is calculated as:
(number of statements exercised / total number of statements) x 100%.


4.4.2 Decision testing and coverage


✓ Decision testing derives test cases to execute specific decision outcomes,
normally to increase decision coverage.
✓ Decision coverage, related to branch testing, is the assessment of the
percentage of decision outcomes (e.g. the True and False options of an IF
statement) that have been exercised by a test case suite.
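The asymmetry between the two measures can be seen on a five-line function. This is a minimal sketch (the function is invented): a single test executes every statement, yet exercises only the True outcome of the decision, so statement coverage is 100% while decision coverage is 50%.

```python
# Sketch: 100% statement coverage does not imply 100% decision coverage.

def halve_if_big(x):
    result = x
    if x > 5:            # decision: its True AND False outcomes both need a test
        result = x // 2  # only reached via the True outcome
    return result

# Test 1: x=10 executes every statement (100% statement coverage) ...
assert halve_if_big(10) == 5
# ... but only the True outcome of the decision. Adding x=3 exercises the
# False outcome, reaching 100% decision coverage as well.
assert halve_if_big(3) == 3
```

The reverse implication does hold: any test set that exercises both outcomes of every decision necessarily reaches every statement, which is why 100% decision coverage guarantees 100% statement coverage.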


How coverage measures relate

Memorize these:

➢100% LCSAJ coverage will imply 100% branch/decision coverage

➢100% path coverage will imply 100% statement coverage

➢100% branch/decision coverage will imply 100% statement coverage

➢100% path coverage will imply 100% branch/decision coverage

➢Branch coverage and decision coverage are the same

*LCSAJ = linear code sequence and jump.


4.4.3 Other Structure-based Techniques


4.5 Experience-based Techniques


Terms: exploratory testing, fault attack.

A commonly used experience-based technique is error guessing or fault attack: enumerate a list of
possible defects and design tests that attack these defects.

Fault attack: directed and focused attempt to evaluate the quality, especially reliability, of a test
object by attempting to force specific failures to occur.
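An error-guessing attack list can be turned into a runnable check. This is a minimal sketch with invented names (`parse_amount` and its attack list are hypothetical): the tester enumerates inputs that commonly expose defects (None, empty strings, non-numeric text) and verifies the unit fails cleanly on each.

```python
# Sketch: error guessing as a fault-attack list against a hypothetical parser.

def parse_amount(text):
    # Unit under attack: parses a money amount, rejecting empty input.
    if text is None or not text.strip():
        raise ValueError("empty amount")
    return round(float(text), 2)

# Attack list built from experience of likely defects: missing values,
# whitespace-only input, non-numeric text, wrong decimal separator.
ATTACKS = [None, "", "   ", "abc", "12,5"]

def run_fault_attack():
    # Count how many attack inputs the function survives by raising cleanly
    # (rather than crashing with an unexpected error or returning garbage).
    survived = 0
    for bad in ATTACKS:
        try:
            parse_amount(bad)
        except (ValueError, TypeError):
            survived += 1
    return survived
```

Each entry in the attack list encodes a guess about a likely defect; a new defect class found in production typically earns a new entry.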


Experience-based Techniques


4.6 Choosing Test Techniques


Test management


5. Test management
5.1 Test Organization(K2)
5.2 Test Planning and Estimation (K3)
5.3 Test Progress Monitoring and Control (K2)
5.4 Configuration Management (K2)
5.5 Risk and Testing (K2)
5.6 Incident Management (K3)

5.1 Test Organization


Terms: Tester, test leader, test manager, independence.

5.1.1 Test Organization and Independence

For large, complex or safety critical projects, it is usually best to have multiple levels of testing, with
some or all of the levels done by independent testers.

5.1.2 Tasks of the Test Leader and Tester


Test leader = test manager = test coordinator

✓ Plan the tests: select tools and test approaches, estimate the time, effort and cost of testing, acquire resources, define test levels and cycles, and plan incident management.
✓ Decide if automation is needed.
✓ Decide what test environments are needed.
✓ Introduce and analyze metrics from monitoring.
✓ Adapt planning based on monitoring.
✓ Write test summary reports.

Tester – does the hands-on testing work.


✓ Create test specification and review tests.
✓ Use test administration or management tools
and test monitoring tools.
✓ Prepare and acquire test data.
✓ Implement tests on all test levels.
✓ Run, automate tests.
✓ Review and contribute to test plans.

Required skills:
• knowledge of the business domain
• knowledge of different testing types
• understanding of the opportunities and limitations of the different technologies

5.2 Test Planning and Estimation


Terms: test approach, test strategy.

5.2.1 Test Planning

➢ Planning is influenced by the test policy of the organization, the scope of testing, objectives, risks,
constraints, criticality, and the availability of resources.

➢ Test planning is a continuous activity and is performed in all life cycle processes and activities.
Feedback from test activities is used to recognize changing risks so that planning can be adjusted.

5.2.2 Test Planning Activities


Test approach and test procedures, including definition of the test levels and entry and exit criteria.

Risks and objectives of testing.

What to test, what roles will perform the test activities, what resources needed and who is going to use
them.

How the test activities should be done, how the test results will be evaluated.

Metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues.

Scheduling test analysis and design activities, test implementation, execution and evaluation.

Test documentation: amount, level of detail, structure and templates.

Integrating and coordinating the testing activities into the software life cycle activities.

Test plan

1. Test Plan Identifier


2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be tested
7. Features not to be tested
8. Approach
9. Item Pass/Fail Criteria
10. Suspension Criteria and Resumption Requirements
11. Test Deliverables
12. Remaining Test Tasks
13. Environmental Needs
14. Staffing and Training Needs
15. Responsibilities
16. Schedule
17. Planning Risks and Contingencies
18. Approvals
19. Glossary

5.2.3 Entry Criteria-5.2.4 Exit Criteria

5.2.5 Test Estimation

5.2.6 Test Strategy, Test Approach

➢ The test approach is the implementation of the test strategy for a specific project, based on the project’s goals and objectives as well as the risk assessment.

➢ It forms the starting point for test planning and for selecting the test design techniques and test types to be employed.

➢ It should also define the entry and exit criteria for testing.

➢ Different approaches may be combined.

5.3 Test Progress Monitoring and Control


Terms: defect density, failure rate, test control, test monitoring, test summary report

Defect density – the number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines of code, number of classes or function points).

Failure rate – failures per unit of time, failures per number of transactions, or failures per number of computer runs.
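Both measures are simple ratios; a sketch with invented numbers:

```python
# Defect density: defects found divided by component size (here per KLOC).
defects_found = 12
lines_of_code = 4000
defect_density = defects_found / (lines_of_code / 1000)
print(f"Defect density: {defect_density:.1f} defects/KLOC")  # prints 3.0

# Failure rate: failures per unit of use, here per 1000 transactions.
failures = 5
transactions = 20000
failure_rate = failures / transactions * 1000
print(f"Failure rate: {failure_rate:.2f} per 1000 transactions")  # prints 0.25
```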

5.3.1 Test Progress Monitoring

Test monitoring can serve various purposes during the project, including:
• Give feedback on how the testing work is going.
• Provide the project team with visibility about the test results.
• Measure the status of the testing.
• Gather data for use in estimating future test efforts.
• Prove that the plan itself is correct.

Information to be monitored may be collected manually or automatically and may be used to measure exit criteria, such as coverage.
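A sketch of how a few of these status metrics are derived (the counts are invented):

```python
# Test execution status gathered during monitoring (invented counts).
results = {"passed": 180, "failed": 12, "blocked": 8}
planned = 250

executed = sum(results.values())
print(f"Executed: {executed}/{planned} ({executed / planned:.0%})")
print(f"Pass rate of executed tests: {results['passed'] / executed:.0%}")
```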

5.3.2 Test Reporting

The outline of a test summary report is given in the ‘Standard for Software Test Documentation’ (IEEE Std 829-1998).

A test report analyzes the information and metrics needed to support recommendations and decisions about future actions, such as:
• an assessment of defects remaining;
• outstanding risks;
• the economic benefit of continued testing;
• the level of confidence in the tested software.

Metrics are collected during and at the end of a test level in order to assess:
• the adequacy of the test objectives for that test level;
• the adequacy of the test approach taken;
• the effectiveness of the testing with respect to the objectives.

5.3.3 Test Control

5.4 Configuration Management


Terms: configuration management, version control

The purpose is to establish and maintain the integrity of the products of the software or system through the project and product life cycle.

Configuration management ensures all items of testware are identified, version controlled, and tracked for changes, so that traceability can be maintained.

All identified documents and software items are referenced unambiguously in test documentation.

Configuration management helps to:
• uniquely identify (and reproduce) the tested item, test documents and the test harness
• control changes to those characteristics
• record and report change processing and implementation status
• verify compliance with specified requirements

5.5 Risk and Testing


Terms: product risk, project risk, risk-based testing

Risk can be defined as the chance of an event, hazard, threat or situation occurring and resulting in future negative consequences or a potential problem. Risks are used to decide where to start testing and where to test more.

Risk management activities:
✓ Risk Identification
✓ Risk Analysis
✓ Risk Response

Two types of risks are used for analysis:
➢ Project Risks
➢ Product Risks

Project Risks vs Product Risks

Project risks are the risks that surround the project’s capability to deliver its objectives.

Potential failure areas in the software or system are known as product risks.

Risks Analysis

Risk-based approach

➢ Determine the test techniques to be employed
➢ Determine the extent of testing to be carried out
➢ Prioritize testing in an attempt to find the critical defects as early as possible
➢ Determine whether any non-testing activities could be employed to reduce risk (e.g. providing training to inexperienced designers)
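A common way to apply this is to rank product areas by risk exposure = likelihood × impact and test the highest-exposure areas first. A minimal sketch with invented areas and scores:

```python
# Risk-based prioritization: risk exposure = likelihood x impact
# (scored 1-5 each; the product areas and scores are invented examples).
areas = [
    ("payment processing", 4, 5),
    ("report export",      2, 2),
    ("user login",         3, 4),
]

# Test the highest-exposure areas first and most thoroughly.
prioritized = sorted(areas, key=lambda a: a[1] * a[2], reverse=True)
for name, likelihood, impact in prioritized:
    print(f"{name}: exposure {likelihood * impact}")
```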

5.6 Incident Management


Terms: incident logging, incident management, incident report

Test incident report Example:

Tool support for testing

6. Tool support for testing


6.1 Types of Test Tools (K2)
6.2 Effective use of Tools: potential benefits and risks (K3)
6.3 Introducing a Tool into an Organization(K2)

6.1 Types of Test Tools


Terms: configuration management tool, coverage tool, debugging tool, dynamic analysis tool, incident
management tool, load testing tool, modeling tool, performance testing tool, probe effect, requirements
management tool, review tool, security tool, static analysis tool, stress testing tool, test comparator, test
data preparation tool, test design tool, test harness, test execution tool, test management tool, unit test
framework tool.

The term ‘test frameworks’:


➢ Reusable and extensible testing libraries that can be used to build testing tools (called test
harnesses as well)
➢ A type of design of test automation (e.g. data-driven, keyword-driven)
➢ Overall process of execution of testing

6.1.1 Tool Support for Testing

Types of tools:
➢ Tools that are directly used in testing
➢ Tools that help in managing the testing process
➢ Tools that are used in exploration
➢ Any tool that aids in testing

Main purposes:
➢ Improve the efficiency of test activities by automating repetitive tasks or supporting manual test activities like test planning, test design, test reporting and monitoring.
➢ Automate activities that require significant resources when done manually (e.g. static testing).
➢ Automate activities that cannot be executed manually (e.g. large-scale performance testing of client-server applications).
➢ Increase reliability of testing (e.g. by automating large data comparisons or simulating behavior).

6.1.2-6.1.8 Test Tool Classification

Tool support also exists for specific testing needs, e.g. data quality assessment.

6.2 Effective use of Tools: Potential Benefits and Risks


Potential benefits of using tools include:
❖ Repetitive work is reduced (e.g. running regression tests, re-entering the same test data, and checking against coding standards).
❖ Greater consistency and repeatability (e.g. tests executed by a tool, and tests derived from requirements).
❖ Objective assessment (e.g. static measures, coverage and system behavior).
❖ Ease of access to information about tests or testing (e.g. statistics and graphs about test progress, incident rates and performance).

Risks of using tools include:
❖ Unrealistic expectations for the tool (including functionality and ease of use).
❖ Underestimating the time, cost and effort for the initial introduction of a tool (including training and external expertise).
❖ Underestimating the time and effort needed to achieve significant and continuing benefits from the tool (including the need for changes in the testing process and continuous improvement of the way the tool is used).
❖ Underestimating the effort required to maintain the test assets generated by the tool.
❖ Over-reliance on the tool (as a replacement for test design, or where manual testing would be better).
Special considerations for some type of tools


Test execution tools execute test objects using automated test scripts or capturing tests by recording.
A data-driven testing approach separates out the test inputs (the data), usually into a spreadsheet, and
uses a more generic test script that can read the input data and execute the same test script with
different data.
In a keyword-driven testing approach, the spreadsheet contains keywords describing the actions to be
taken (also called action words), and test data.
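A minimal sketch of both approaches; the login scenario, the `attempt_login` stand-in and the data rows are invented for illustration:

```python
# Data-driven testing: one generic script, many rows of input/expected data.
login_data = [  # in practice these rows come from a spreadsheet or CSV file
    ("alice", "s3cret", True),
    ("alice", "wrong",  False),
    ("",      "s3cret", False),
]

def attempt_login(user, password):
    # Stand-in for driving the real application (assumption for illustration).
    return (user, password) == ("alice", "s3cret")

# The same generic script runs once per data row.
for user, password, expected in login_data:
    assert attempt_login(user, password) == expected

# Keyword-driven testing: the "spreadsheet" also names the action to perform.
keywords = {"login": attempt_login}
test_steps = [("login", ("alice", "s3cret"))]
for action, args in test_steps:
    keywords[action](*args)  # dispatch each keyword to its implementation
```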

6.3 Introducing a Tool into an Organization


Introducing the selected tool into an organization starts with a pilot project.

Success factors for the using tools within the organization:

➢Rolling out the tool to the rest of the organization incrementally

➢Adapting and improving processes to fit with the use of the tool

➢Providing training and coaching/mentoring for new users

➢Defining usage guidelines

➢Gathering usage information from actual use

➢Monitoring tool use and benefits

➢Providing support for the test team for a given tool

➢Gathering lessons learned from all teams

Standards

ISO 9126 Software Engineering - Software Product Quality


ISO 9000 - Quality Management
ISO/IEC 12207 - Software life-cycle processes

IEEE 610 - Software Engineering Terminology


IEEE 829-1998 - Standard for Software Test Documentation
IEEE 1008 - Software Unit Testing
IEEE 1028 - Software Reviews and Audit

ANSI/IEEE 729 - Software Terms

BS 7925-1 - British Standard Glossary of Software Testing Terms


BS 7925-2 - Software Component Testing (defines techniques such as boundary value analysis and equivalence partitioning)

Q&A

Thanks!
