Chapter 1
Introduction to Effective Software Testing
Objectives
Evolution of Software Testing
• In the 1990s, testing tools finally came into their own. There was a
flood of various tools, which are absolutely vital to adequate testing
of software systems. However, they do not solve all the problems and
cannot replace a testing process.
• Gelperin and Hetzel [79] have characterized the growth of software
testing over time. Based on this, we can divide the evolution of
software testing into the following phases:
Software Testing Myths
• Testing is a single phase in the SDLC, performed after coding.
• Testing is easy.
• Software development is worth more than testing.
• Complete testing is possible.
• Testing starts after program development.
• The purpose of testing is to check the functionality of the software.
• Anyone can be a tester.
Software Testing Goals
Testing produces Reliability and Quality
Quality leads to customer satisfaction
Testing controlled by Risk factors
Psychology for Software Testing
Software Testing Definitions
Model for Software Testing
Effective Software Testing vs Exhaustive Software Testing
• Valid Inputs
• Invalid Inputs
• Edited Inputs
• Race Conditions
Software Testing as a Process
Schools of Software Testing
The idea of schools of software testing was given by Bret Pettichord [82].
He proposed the following schools:
Software Testing
Naresh Chauhan, Assistant Professor in the
Department of Computer Engineering at the
YMCA University of Science and Technology,
Faridabad
Chapter 2
Software Testing Terminology and Methodology
Objectives
Software Testing Terminology
• Failure
The inability of a system or component to perform a required
function according to its specification.
• Error
Whenever a member of the development team makes a mistake in any
phase of the SDLC, errors are produced. It might be a typographical
error, a misreading of a specification, a misunderstanding of what a
subroutine does, and so on. Thus, 'error' is a very general term for
human mistakes.
• Testware
The documents created during the testing activities are known as
testware.
• Incident
The symptom(s) associated with a failure that alert the user to its
occurrence.
• Test Oracle
The mechanism used to judge the success or failure of a test, i.e., to
decide whether the program passed or failed a test case.
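A test oracle can be as simple as a trusted reference implementation. Below is a minimal sketch in Python, assuming a hypothetical function under test, sort_numbers; the built-in sorted() serves as the oracle that delivers the pass/fail verdict.

```python
# Minimal test oracle sketch: sorted() is the trusted reference that
# judges whether the (hypothetical) function under test behaves correctly.

def sort_numbers(values):           # function under test (stand-in)
    result = list(values)
    result.sort()
    return result

def run_test(test_input):
    expected = sorted(test_input)   # oracle: reference behaviour
    actual = sort_numbers(test_input)
    return actual == expected       # verdict: True = pass, False = fail

print(run_test([3, 1, 2]))          # True -> test passes
```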
Life Cycle of a Bug
States of a Bug
Bug affects Economics of Software Testing
Bug Classification based on Criticality
• Critical Bugs
This type of bug has the worst effect on the functioning of the
software, such that it stops or hangs its normal functioning.
• Major Bugs
This type of bug does not stop the functioning of the software, but it
causes a functionality to fail to meet its requirements as expected.
• Medium Bugs
Medium bugs are less critical in nature as compared to critical and
major bugs.
• Minor Bugs
Minor bugs have the least effect on the functioning of the software.
Bug Classification based on SDLC
Testing Principles
Software Testing Life Cycle (STLC)
Test Planning
Test Design
Test Execution
Post-Execution / Test Review
Software Testing Methodology
Test Strategy Matrix
Development of Test Strategy
V Testing Life Cycle Model
Validation Activities
• Unit Testing
• Integration Testing
• Function Testing
• System Testing
• Acceptance Testing
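As a brief illustration of the first of these levels, here is a minimal unit test sketch using Python's standard unittest module; the trivial add() function is a hypothetical stand-in for any unit under test.

```python
import unittest

def add(a, b):
    """Hypothetical unit under test."""
    return a + b

class TestAdd(unittest.TestCase):
    # Unit testing validates a single module in isolation,
    # before integration with other modules.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```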
Chapter 3
Verification and Validation
Objectives
• The V diagram provides the basis for every type of software testing.
• What are various verification activities?
• What are various validation activities?
• How to perform verification at each stage of SDLC?
• What are various validation test plans?
Verification and Validation (V & V) Activities
Verification of Requirements
• Correctness
• Unambiguous
• Consistent
• Completeness
• Updation
• Traceability
  – Backward traceability
  – Forward traceability
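Traceability is commonly checked with a traceability matrix that maps each requirement forward to the design elements or test cases covering it, and backward from each test case to its requirement. A minimal sketch of a forward check, with hypothetical requirement and test-case identifiers:

```python
# Forward traceability check: every requirement should map to at least
# one test case (all identifiers here are hypothetical).
traceability_matrix = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],   # no coverage yet -> reported below
}

uncovered = [req for req, tests in traceability_matrix.items() if not tests]
print("Requirements without forward traceability:", uncovered)
# Output: Requirements without forward traceability: ['REQ-3']
```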
Verification of High Level Design
• Check that the response time for each interface is within the
required range.
• Check the help facility.
Verify Low Level Design
• Verify the SRS of each module.
How to Verify Code
• Check that every design specification of the HLD and LLD has been
coded, using the traceability matrix.
• Examine the code against a language specification checklist.
• Verify every statement, control structure, loop, and piece of logic
for typical bugs such as the following (a few are illustrated in the
sketch after this list):
  – Misunderstood or incorrect arithmetic precedence
  – Mixed-mode operations
  – Incorrect initialization
  – Precision inaccuracy
  – Incorrect symbolic representation of an expression
  – Different data types
  – Improper or nonexistent loop termination
  – Failure to exit
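A few of the typical bugs above, sketched as small Python fragments (all values hypothetical), each paired with its corrected form:

```python
# Misunderstood arithmetic precedence: computing the average of a and b.
a, b = 4.0, 8.0
wrong_avg = a + b / 2    # parsed as a + (b / 2) -> 8.0
right_avg = (a + b) / 2  # intended meaning      -> 6.0

# Incorrect initialization: an accumulator must start at 0.
values = [10, 20, 30]
total = 0                # starting at, say, 1 would inflate every sum
for v in values:
    total += v           # total == 60

# Improper loop termination: an off-by-one bound skips the last item.
n = len(values)
for i in range(n):       # range(n - 1) would silently miss values[-1]
    print(values[i])
```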
Validation
• The bugs still existing in the software after coding need to be
uncovered.
Validation Activities
Chapter 4
Dynamic Testing: Black Box Testing Techniques
Objectives
Black Box Testing
• To test the modules independently.
Boundary Value Analysis (BVA)
Boundary Value Checking
For two input variables A and B, boundary value checking holds one
variable at its nominal value while the other takes its boundary values:
• A_nom, B_min
• A_nom, B_min+
• A_nom, B_max
• A_nom, B_max-
• A_min, B_nom
• A_min+, B_nom
• A_max, B_nom
• A_max-, B_nom
• A_nom, B_nom
8
© Oxford University Press 2011. All rights reserved.
Robustness Testing Method
In addition to the nine boundary value cases above, robustness testing
adds four test cases just outside the boundaries (see the sketch after
this list):
• A_max+, B_nom
• A_min-, B_nom
• A_nom, B_max+
• A_nom, B_min-
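A minimal sketch that generates both sets of test cases, assuming hypothetical integer ranges of 1 to 100 for A and B:

```python
def five_values(lo, hi):
    """Return min, min+, nom, max- and max for an integer range [lo, hi]."""
    return {"min": lo, "min+": lo + 1, "nom": (lo + hi) // 2,
            "max-": hi - 1, "max": hi}

# Hypothetical input ranges for the two variables.
A = five_values(1, 100)
B = five_values(1, 100)

# Boundary value checking: hold one variable at nominal, vary the other.
bvc = ([(A["nom"], B[k]) for k in ("min", "min+", "max", "max-")]
       + [(A[k], B["nom"]) for k in ("min", "min+", "max", "max-")]
       + [(A["nom"], B["nom"])])                 # nine cases in all

# Robustness testing: four extra cases just beyond the boundaries.
robust = bvc + [(A["max"] + 1, B["nom"]), (A["min"] - 1, B["nom"]),
                (A["nom"], B["max"] + 1), (A["nom"], B["min"] - 1)]

print(len(bvc), len(robust))                     # 9 13
```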
9
© Oxford University Press 2011. All rights reserved.
Worst Case Testing Method
• When more than one variable is at an extreme value, i.e., when more
than one variable is on the boundary, it is called the worst case
testing method.
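A short sketch of worst case test case generation: rather than holding one variable at its nominal value, it takes the Cartesian product of the five boundary values of every variable, giving 5^n cases for n variables (the range below is hypothetical):

```python
from itertools import product

# Five boundary values per variable, for a hypothetical range 1..100:
# min, min+, nom, max- and max.
values = [1, 2, 50, 99, 100]

# Worst case testing: every combination of boundary values of A and B.
worst_case = list(product(values, values))
print(len(worst_case))   # 25 test cases for two variables (5^2)
```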
Equivalence Class Testing
State Table Based Testing
• State Table
Decision Table Based Testing
Example
• A program calculates the total salary of an employee under the
following conditions: if the working hours are less than or equal to 48,
the normal salary is paid. Hours over 48 on normal working days are paid
at 1.25 times the normal rate. However, hours worked on holidays or
Sundays are paid at 2.00 times the normal rate. Design the test cases
using decision table testing (a sketch of the rule follows).
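A minimal sketch of the rule under test, assuming a hypothetical hourly rate and the interpretation that every hour worked on a holiday or Sunday is paid at 2.00 times the rate; each decision table rule then becomes one test case:

```python
RATE = 100  # hypothetical salary per hour

def total_salary(hours, holiday_or_sunday=False):
    if holiday_or_sunday:
        return hours * RATE * 2.00             # holiday/Sunday: 2.00x rate
    if hours <= 48:
        return hours * RATE                    # normal salary
    overtime = hours - 48
    return 48 * RATE + overtime * RATE * 1.25  # overtime: 1.25x rate

# One test case per decision table rule.
assert total_salary(40) == 40 * RATE                     # <= 48, normal day
assert total_salary(50) == 48 * RATE + 2 * RATE * 1.25   # overtime, normal day
assert total_salary(10, holiday_or_sunday=True) == 2000  # holiday/Sunday
print("all decision table rules pass")
```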
Cause-Effect Graph Based Testing
Example: A program accepts two characters as input. The first character
must be an 'A' or a 'B', and the second character must be a digit. If
the input is valid, the file is updated (E1); if the first character is
incorrect, message X is printed (E2); if the second character is
incorrect, message Y is printed (E3).
Solution:
The causes in this situation are:
C1 – the first character is A
C2 – the first character is B
C3 – the second character is a digit
Effect E2:
E2 states 'print message X'. Message X will be printed when the first
character is neither A nor B, i.e., when neither C1 nor C2 holds. Thus,
effect E2 is true for NOT (C1 OR C2), and the corresponding edges of the
graph are drawn accordingly (shown as the blue line in the figure).
The decision table derived from the cause-effect graph is given below
(– and T/F denote don't-care entries):

Cause / Effect   Test 1   Test 2   Test 3   Test 4
C1               T        –        F        T/F
C2               –        T        F        T/F
C3               T        T        T/F      F
E1               X        X
E2                                 X
E3                                          X
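A small sketch that encodes the cause-effect graph as Boolean functions and reproduces the verdict of each decision table column for concrete sample inputs (the sample characters are hypothetical):

```python
# Encode the causes and effects of the graph as Boolean functions.
def effects(first, second):
    c1 = first == "A"         # C1: first character is A
    c2 = first == "B"         # C2: first character is B
    c3 = second.isdigit()     # C3: second character is a digit
    e1 = (c1 or c2) and c3    # E1: update the file
    e2 = not (c1 or c2)       # E2: print message X
    e3 = not c3               # E3: print message Y
    return e1, e2, e3

print(effects("A", "5"))  # (True, False, False)  -> test 1: E1
print(effects("B", "5"))  # (True, False, False)  -> test 2: E1
print(effects("C", "5"))  # (False, True, False)  -> test 3: E2
print(effects("A", "x"))  # (False, False, True)  -> test 4: E3
```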
Error Guessing
• Error guessing is the method used when all other methods fail, or the
method for some special cases that need to be tested.
• It means that errors or bugs can be guessed that do not fit into any
of the earlier defined situations, so test cases are generated for
these special cases.