
SENG 607.22
Advanced Software Testing

Module 1:
Introduction to Software Testing

Winter 2008

Dr. Vahid Garousi


Software Quality Engineering Research Group (SoftQual)
Department of Electrical and Computer Engineering
http://www.enel.ucalgary.ca/~vgarousi



Course Outline

0 Course Introduction & Overview
1 Introduction to Software Testing
2 White-box Testing (Functional Testing)
3 Black-box Testing (Functional Testing)
4 Testing Object-Oriented Systems
5 Non-functional Testing
6 Testability
7 Special topic (if we have time): Model-based Testing
Student Project Presentations
Student Paper Presentations

Note: This is a tentative outline that may vary depending on the pace at which we cover the material.


Introduction to Software Testing
Outline

• Quick Definitions
• Why is SW testing important?
• Nature of Software Development
• Examples of SE Failures
• Consequences of Poor Quality
• Ariane 5 Disaster
• NIST Study on SE Failures
• Key QA Capabilities
• Dealing with SW Faults
• Testing Definitions & Objectives
• Types of Testing
• Goals of Testing
• Reality Check on Testing
• Testing Process Overview
• Qualities of Testing
• Continuity Property
• Software Dependability
• Fundamental Principles in Testing
• Theoretical Foundations of Testing
• Practical Aspects of Testing
• Test Organization
• Testing Activities
Quick Definitions I: Software Engineering

• Software engineering (SE) is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software.
• The discipline of software engineering encompasses knowledge, tools, and methods for defining software requirements and for performing software design, software construction, software testing, and software maintenance tasks.
• Software engineering draws on knowledge from fields such as computer engineering, computer science, management, mathematics, project management, quality management, software ergonomics, and systems engineering.
• The term software engineering was popularized during the 1968 NATO Software Engineering Conference (held in Garmisch, Germany).
Quick Definitions II

• SW Management: The discipline of managing projects to achieve quality within time and budget constraints.
• SW Quality Engineering: The discipline of specifying, assuring, monitoring, and controlling the quality of software products.
• SW Quality Assurance (SQA): Monitoring the software engineering processes and methods used, to ensure SW quality.
• SW Quality Control (SQC): (also known as Verification and Validation) Controlling the quality of SE products. Includes (1) inspections and (2) testing.
• SQA <> SQC: SQA is a control of processes, whereas SQC is a control of products.
• SW Verification: The goal is to find as many latent defects as possible before delivery.
• SW Validation: The goal is to gain confidence in the software's fitness for its intended use.
Quick Definitions III

• SW Testing: Techniques to execute programs with the intent of finding as many defects as possible and/or gaining sufficient confidence in the software system under test.
  - "Program testing can show the presence of bugs, never their absence." (Dijkstra)
• SW Inspections: Techniques aimed at systematically verifying non-executable software artifacts, with the intent of finding as many defects as possible, as early as possible.
Basic Testing Definitions

• Error: People commit errors.
• Fault: A fault is the result of an error in the software documentation, code, etc.
• Failure: A failure occurs when a fault executes.
  - (Many people use the above three terms interchangeably. We should not do so!)
• Incident: The consequence of a failure. A failure occurrence may or may not be apparent to the user.

• The fundamental chain of SW dependability threats:

  Error -(causation)-> Fault -(propagation)-> Failure -(results in)-> Incident ...
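To make the chain concrete, here is a minimal, hypothetical Python sketch: the programmer's error (misreading the spec) introduces a fault in the code, and a failure is observed only when the faulty statement executes.

    def total_price(price: float, quantity: int) -> float:
        """Spec: orders of 10 or more items get a 5% discount."""
        total = price * quantity
        if quantity > 10:   # FAULT: should be >= 10; the ERROR was misreading the spec
            total *= 0.95
        return total

    print(total_price(2.0, 5))    # 10.0 as expected; the fault stays dormant
    print(total_price(2.0, 10))   # 20.0, but the spec says 19.0: a FAILURE is observed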


Why is SW testing important?

• According to some estimates, testing accounts for ~50% of development costs.
• A study by the (American) NIST in 2002 found that the annual national cost of inadequate testing is as much as $59 billion US!
  - The report is titled "The Economic Impacts of Inadequate Infrastructure for Software Testing".


There are lots of Jobs!

Keyword search: (software test) or (software tester) or (software quality assurance) or (software QA) or (software quality engineer)


There are lots of Jobs!

(Calgary alone, posted during only about two months)




Nature of Software Development

• Development, not production
• Human intensive
• Engineering, but also a social process
• More and more complex software systems
• Pervasive in an increasing number of industries


Required Qualities of Software Products

• Correctness
• Reliability
• Robustness
• Performance
• User Friendliness
• Verifiability
• Maintainability
• Repairability
• Evolvability
• Reusability
• Portability
• Understandability
• Interoperability


Pervasive Problems in SE

• Software is commonly delivered late, way over budget, and of unsatisfactory quality.
• Software validation and verification are rarely systematic and are usually not based on sound, well-defined techniques.
• Software development processes are commonly unstable and uncontrolled.
• Software quality is poorly measured, monitored, and controlled.
• Software failure examples: Therac-25, AT&T, Ariane 5, Deutsche Telekom, and many others (discussed next…)




Examples of SE Failures

• Communications: Loss or corruption of communication media, non-delivery of data.
• Space Applications: Lost lives, launch delays, e.g., the European Ariane 5 launcher, 1996:
  - From the official disaster report: "Due to a malfunction in the control software, the rocket veered off its flight path 37 seconds after launch."
• Defense and Warfare: Misidentification of friend or foe.
• Transportation: Deaths, delays, sudden acceleration, inability to brake.
• Electric Power: Deaths, injuries, power outages, long-term health hazards (radiation).
Examples of SE Failures (cont.)

• Money Management: Fraud, violation of privacy, shutdown of stock exchanges and banks, negative interest rates.
• Control of Elections: Wrong results (intentional or unintentional).
• Control of Jails: Technology-aided escape attempts and successes, failures in software-controlled locks.
• Law Enforcement: False arrests and imprisonments.




Ariane 5 – European Space Agency

• On June 4, 1996, the flight of the Ariane 5 launcher ended in a failure.
• Only about 40 seconds after initiation of the flight sequence, at an altitude of about 3,700 m, the launcher veered off its flight path, broke up, and exploded.


Ariane 5 – Root Cause

• Source: ARIANE 5 Flight 501 Failure, Report by the Inquiry Board

• A program segment for converting a floating-point number to a signed 16-bit integer was executed with an input data value outside the range representable by a signed 16-bit integer.
• This run-time error (out of range, overflow), which arose in both the active and the backup computers at about the same time, was detected, and both computers shut themselves down.
• This resulted in the total loss of attitude control. The Ariane 5 turned uncontrollably, and aerodynamic forces broke the vehicle apart.
• The breakup was detected by an on-board monitor, which ignited the explosive charges to destroy the vehicle in the air. Ironically, the result of this format conversion was no longer needed after lift-off.
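The faulty conversion is easy to sketch. The actual flight software was written in Ada; the Python below is only an illustrative analogue with hypothetical names, contrasting an unchecked conversion with a range-checked one.

    INT16_MIN, INT16_MAX = -32768, 32767

    def to_int16_checked(x: float) -> int:
        """Defensive float-to-signed-16-bit conversion: validate the range first."""
        if not (INT16_MIN <= x <= INT16_MAX):
            raise OverflowError(f"{x} is outside the signed 16-bit range")
        return int(x)

    # An unprotected conversion (just int(x)) silently yields a value that does
    # not fit in 16 bits; the checked version fails in a controlled, testable way:
    try:
        to_int16_checked(65535.0)
    except OverflowError as e:
        print("detected:", e)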
Ariane 5 – Lessons Learned

• Rigorous reuse procedures, including usage-based testing (based on operational profiles)
• Adequate exception handling and redundancy strategies (what is the real function of a backup system? degraded modes?)
• Clear, complete, documented specifications (e.g., preconditions, postconditions)
• Note: this was not a complex computing problem, but a deficiency of the software engineering practices in place…


Another Example of SE Failure: F-18 Crash

• An F-18 crashed because of a missing exception condition: an if … then … block without the else clause, for a case that was thought could not possibly arise.
• In simulation, an F-16 program bug caused the virtual plane to flip over whenever it crossed the equator, as a result of a missing minus sign to indicate south latitude.
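A hypothetical Python sketch of the pattern behind both incidents: handle the case "that cannot possibly arise" explicitly instead of omitting the else branch.

    def latitude_sign(hemisphere: str) -> int:
        """Return +1 for northern latitudes and -1 for southern ones."""
        if hemisphere == "N":
            return 1
        elif hemisphere == "S":
            return -1
        else:
            # The 'impossible' case: fail loudly rather than continue silently.
            raise ValueError(f"unexpected hemisphere: {hemisphere!r}")

    print(latitude_sign("S"))   # -1; an unexpected input raises instead of flipping signs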


Another Example of SE Failure: Fatal Therac-25 Radiation

• In 1986, a man in Texas received an estimated 16,500–25,000 rads of radiation in less than 10 seconds, over an area of about 1 cm.
• He lost his left arm, and died of complications 5 months later.




Consequences of Poor SW Quality

• The Standish Group surveyed 350 companies, covering over 8,000 projects, in 1994:
  - 31% of projects were cancelled before completion
  - Only 9–16% were delivered within cost and budget
• US study (1995): $81 billion USD is spent per year on failing software development projects
• NIST study (2002): SW failures cost $59.5 billion a year; earlier detection could save $22 billion


NIST Study

• Title: The Economic Impacts of Inadequate Infrastructure for Software Testing
• http://www.nist.gov/public_affairs/releases/n02-10.htm

• According to this study by NIST (May 2002), software failures and glitches cost the U.S. economy about $59.5 billion a year.
• The study also found that better testing could expose SW failures and remove them at the early development stages, reducing the cost by about $22.2 billion.




Key SW Quality Control (SQC) Capabilities

• Recall: SQC is about controlling the quality of SE products. It includes (1) inspections and (2) testing.

• To achieve effective SQC, we should be able to:
  - Uncover faults in the SW artifacts (e.g., requirements, design, code) where they are introduced, in a systematic way, in order to avoid ripple effects.
  - Derive, in a systematic way, effective test cases to uncover faults.
  - (Systematic here is the opposite of ad hoc.)

(Figure: faults can be introduced at the requirements, design, and implementation levels.)
Key SW Quality Assurance (SQA) Capabilities

• Recall: SQA is about the process, not the product.
• We should:
  - Automate testing and inspection activities, to the maximum extent possible
  - Monitor and control quality (e.g., reliability, maintainability, safety) across all project phases and activities
• All this implies the measurement of SW products and processes, and the empirical evaluation of testing and inspection technologies




Dealing with SW Faults

• Recall:
  - Errors: People commit errors.
  - Fault: A fault is the result of an error in the software documentation, code, etc.

Fault Handling (a taxonomy):
• Fault Avoidance: design methodology, configuration management, verification
• Fault Detection: inspections, testing, debugging
  - Testing (our focus in this course): unit, integration, system, and non-functional testing
  - Debugging (comes after testing): functional and non-functional debugging
• Fault Tolerance: atomic transactions, modular redundancy




Testing Definitions: Test Stubs and Drivers

• Test Stub: A partial implementation of a component on which a unit under test depends.
  - (Diagram: Component a [under test] depends on Component b, which is replaced by a test stub.)
• Test Driver: A partial implementation of a component that depends on a unit under test.
  - (Diagram: a test driver, Component j, depends on Component k [under test].)
• Test stubs and drivers enable components to be isolated from the rest of the system for testing; a sketch of both follows.
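A minimal Python sketch of both roles (all names hypothetical): the driver calls the unit under test, and the stub stands in for a component the unit depends on.

    class PaymentGatewayStub:
        """Test stub: partial implementation of a component the unit depends on."""
        def charge(self, amount: float) -> bool:
            return True   # canned answer, no network call

    def checkout(cart_total: float, gateway) -> str:
        """The unit under test: depends on a payment gateway."""
        return "paid" if gateway.charge(cart_total) else "declined"

    def test_checkout() -> None:
        """Test driver: code that depends on (calls) the unit under test."""
        assert checkout(9.99, PaymentGatewayStub()) == "paid"

    test_checkout()
    print("checkout test passed")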


Summary of Testing Definitions

(UML-style summary of the relationships between the terms:)
• A test suite consists of test cases; a test case exercises a unit.
• A test case may use test stubs and test drivers.
• A test case finds failures; a failure is caused by a fault; a fault is caused by an error.
• A fault is repaired by a correction (1…n); a unit is revised by corrections.


Types of Testing

• Functional Testing: testing software against its functional requirements.
  - Functional requirements specify specific behaviors or functions of the software.
  - Functional testing thus checks the correct functionality of a system.

• Non-functional Testing: testing software against its non-functional requirements.
  - Non-functional requirements:
    • Specify criteria that can be used to judge the operation of a system, rather than specific behaviors.
    • Typical non-functional requirements are performance, reliability, scalability, and cost.
    • Non-functional requirements are often called the -ilities of a system.
    • Another term for non-functional requirements is "quality attributes".
(Example) Types of Non-functional SW Requirements

• Accessibility
• Availability
• Efficiency (resource consumption for given load)
• Effectiveness (resulting performance in relation to effort)
• Extensibility
• Maintainability
• Performance / response time
• Scalability
• Platform compatibility
• Quality (e.g., fault density, # of undetected faults delivered)
• Reliability (e.g., Mean Time Between Failures – MTBF)
• Resource constraints (required processor speed, memory, disk space, network bandwidth, etc.)
• Robustness
• Safety
• Security


Types of Testing in the Course Outline

0 Course Introduction & Overview
1 Introduction to Software Testing
2 White-box Testing (Functional Testing)
3 Black-box Testing (Functional Testing)
4 Testing Object-Oriented Systems
5 Non-functional Testing
6 Testability
7 Special topic (if we have time): Model-based Testing
Student Project Presentations
Student Paper Presentations




Goals and Limitations of Testing

• Dijkstra, 1972: "Program testing can be used to show the presence of bugs, but never to show their absence."
• Thus:
  - No absolute certainty can be gained from testing.
  - Testing should be integrated with other verification activities, e.g., code reviews and inspections.
• Main goal of testing: demonstrate that the software can be depended upon, i.e., has sufficient dependability.
• We can only gain sufficient confidence in the System Under Test (SUT), not full confidence!
Goals and Limitations of Testing (cont.)

• No matter how rigorous we are, software is going to be faulty.
• Testing represents a substantial percentage of software development costs (~50%) and time to market.
• It is impossible to test under all operating conditions (i.e., exhaustive testing).
• Thus, based on incomplete testing, we must gain confidence that the system has the desired behavior.
• Testing large systems is complex (it requires strategy and technology) and is often done inefficiently in practice.




Testing Process Overview

• Test cases are derived from a SW representation (e.g., models, requirements), and the expected results are estimated to form the test oracle.
• The test cases are executed against the SW code, and the test results are collected.
• Each test result is compared with the oracle: either [Test Result == Oracle] or [Test Result != Oracle].
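A minimal sketch of this loop in Python, with a hypothetical function under test:

    def square(x: int) -> int:          # the SW code under test (hypothetical)
        return x * x

    # Test cases derived from the representation, with expected results (the oracle):
    test_cases = [(0, 0), (3, 9), (-4, 16)]

    for test_input, oracle in test_cases:
        result = square(test_input)                        # execute the test case
        verdict = "pass" if result == oracle else "fail"   # compare result to oracle
        print(f"square({test_input}) = {result}: {verdict}")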




Qualities of Testing

• A good test should:
  - Be effective at uncovering faults
  - Help locate faults for debugging
  - Be repeatable, so that a precise understanding of the fault can be gained
  - Be automatable, so as to lower the cost and timescale
  - Be systematic, so as to be predictable




The Continuity Property, and Why It Does Not Apply to SW Testing

• Problem: test a bridge's ability to sustain a certain weight.
• Continuity property: if a bridge can sustain a weight equal to W1, then it will sustain any weight W2 <= W1.
• Essentially, the continuity property says that small differences in operating conditions should not result in dramatically different behavior.
• BUT the same property cannot be applied when testing software. Why?
  - In software, small differences in operating conditions can result in dramatically different behavior (e.g., at value boundaries).
• Thus, the continuity property is not applicable to software, as the sketch below illustrates.
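A one-function illustration (hypothetical): an input change of 1 crosses a boundary and produces completely different behavior, which is exactly why the continuity argument fails for software.

    def withdraw(balance: int, amount: int) -> int:
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    print(withdraw(100, 100))      # 0: succeeds
    try:
        withdraw(100, 101)         # a difference of 1 in the input...
    except ValueError as e:
        print("raises:", e)        # ...and the behavior changes dramatically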


Software Dependability

• Dependability of software has several aspects:
  - Correctness: providing correct service
  - Reliability: continuity of correct service
  - Availability: readiness for correct service
  - Safety: absence of catastrophic consequences for the users and the environment
  - Robustness: ability to continue to operate despite abnormalities in input, calculations, etc.
  - …
• Different combinations:
  - Correct but not safe or robust: the specification is inadequate
  - Safe but not correct: non-dangerous failures may happen
  - Robust but not safe: catastrophic failures are possible (even if their probabilities are very low)
  - …
• An example next…


Software Dependability
An Example – Traffic Light Controller System

• Correctness, Reliability: The system should let traffic pass according to the correct pattern and central scheduling, on a continuous basis.
• Robustness: The system should provide degraded functionality in the presence of abnormalities.
• Safety: It should never signal conflicting greens.

• An example degraded function: the line to central control is cut off, and a default pattern is then used by the local controller.
SW Dependability Needs Vary

• Safety-critical applications:
  - Flight control systems have strict safety requirements
  - Telecommunication systems have strict robustness requirements

• Mass-market products:
  - Dependability is often less important than time to market ($)

• Dependability needs can even vary within the same class of products:
  - Reliability and robustness are key issues for multi-user operating systems (e.g., UNIX) and less important for single-user operating systems (e.g., Windows)




Fundamental Principles in Testing

• Exhaustive (Complete) Testing
• Test Coverage and Types of Testing
• Black- vs. White-Box Testing
• Complete Coverage


Exhaustive (Complete) Testing

• Exhaustive testing, i.e., testing a software system using all possible inputs, is most of the time impossible.
• Examples:
  - A program that computes the factorial function (n! = n·(n-1)·(n-2)·…·1)
    Exhaustive testing = running the program with 0, 1, 2, …, 100, … as input!
  - A compiler (e.g., javac)
    Exhaustive testing = running the (Java) compiler with every possible (Java) program (i.e., source code)
• Technique used to reduce the number of inputs:
  - Testing criteria group input elements into (equivalence) classes
  - One input is selected from each class (the notion of test data coverage)




Test Coverage and Types of Testing

• Test cases are derived from a software representation (e.g., a model, the code), which has associated coverage criteria: "test cases must cover all the … in the model".
• The type of testing depends on which representation the testing is based on:
  - Specification → Black-Box Testing
  - Implementation → White (glass)-Box Testing


Black- vs. White (glass)-Box Testing

• A system's specification and its implementation do not coincide exactly:
  - Missing functionality (specified but not implemented): cannot be revealed by white-box techniques.
  - Unexpected functionality (implemented but not specified): cannot be revealed by black-box techniques.


White-box vs. Black-box Testing: Pros and Cons

• Black-box:
  - Checks conformance with specifications
  - Scales up (different techniques at different granularity levels)
  - Depends on the specification notation and degree of detail
  - Does not 'exactly' know how much of the system is being tested
  - Does not know what to do if the software performs some unspecified task

• White-box:
  - Allows you to be confident about test coverage
  - Is based on control-flow or data-flow coverage
  - Does not scale up (mostly applicable at unit and integration testing levels)
  - Unlike black-box techniques, cannot reveal missing functionality (parts of the specification that are not implemented)




Complete Coverage: In White-Box Testing

• An example first:

    if x > y then
       Max := x;
    else
       Max := x;  -- fault! Should have been y
    end if;

• Let's see which test sets can "cover" the above program:
  - Test set (suite) {(x=3, y=2), (x=2, y=3)} can detect the fault, and has more "coverage".
  - Test set (suite) {(x=3, y=2), (x=4, y=3), (x=5, y=1)} is larger but cannot detect it.
• Test coverage criteria group input-domain elements into (equivalence) classes (control-flow paths here).
• Complete coverage attempts to run test cases from each and every one of those classes.
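The same example as runnable Python (a sketch): the smaller suite that covers both control-flow paths exposes the fault, while the larger single-path suite does not.

    def buggy_max(x: int, y: int) -> int:
        if x > y:
            return x
        else:
            return x   # fault! should have been y

    # Covers both paths; the second case executes the faulty branch:
    covering_suite = [((3, 2), 3), ((2, 3), 3)]
    # Larger, but every case takes the same (correct) branch, so the fault hides:
    larger_suite = [((3, 2), 3), ((4, 3), 4), ((5, 1), 5)]

    for name, suite in (("covering", covering_suite), ("larger", larger_suite)):
        failures = [args for args, expected in suite if buggy_max(*args) != expected]
        print(f"{name} suite detects failures at: {failures}")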
Complete Coverage: In Black-Box Testing

• Consider the specification of a Compute Factorial Number (n!) function:
  - If the input value n is < 0, an appropriate error message must be printed.
  - If 0 <= n < 20, the exact value of n! must be printed.
  - If 20 <= n < 200, an approximate value of n! must be printed in floating-point format, e.g., using some approximate method of numerical calculus. The error threshold for this case is 0.1% of the exact value.
  - Finally, if n >= 200, the input can be rejected by printing an appropriate error message.
• Because of the expected variations in behavior, it is quite natural to divide the input domain into four classes: {n < 0}, {0 <= n < 20}, {20 <= n < 200}, {n >= 200}.
• We can use one or more test cases from each class in each test set, as sketched below.
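A sketch of one representative test input per equivalence class, against a hypothetical implementation of the specified behavior (the float formatting here is only illustrative; it would overflow for n above ~170, so a real implementation would approximate differently):

    import math

    def factorial_ui(n: int) -> str:   # hypothetical implementation under test
        if n < 0 or n >= 200:
            return "error"
        if n < 20:
            return str(math.factorial(n))          # exact value
        return f"{float(math.factorial(n)):e}"     # approximate, floating point

    # One input from each of the four classes: {n<0}, {0<=n<20}, {20<=n<200}, {n>=200}
    for n in (-1, 5, 50, 300):
        print(n, "->", factorial_ui(n))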


Theoretical Foundations of Testing

• Formal Definitions
• Adequacy Criterion
• Ideal Test Set
• Test Consistency and Completeness
• Empirical Testing Principle




Theoretical Foundations of Testing: Formal Definitions

• Let P be a program.
• Let D and R denote its input domain and range, i.e., P is a function: D → R.
• Let OR denote the expected output values (the ORacle).
• P is said to be correct for d ∈ D if P(d) satisfies OR; if not, we have a failure.
• A test case is an element d of D together with the expected value of P in OR given d.
• A test set (a.k.a. suite) T is a finite set of test cases.


Theoretical Foundations of Testing: Test Adequacy Criterion

• A test adequacy criterion C is a subset of PD, where PD is the set of all finite subsets of D, that we should target when devising test sets (note the two levels of subsets). (Recall: D denotes the input domain of P.)
• In simple words: how "much" of P we should target with our test set.
• Example: defining a criterion C for a program model M:
  - M: the Control Flow Graph (CFG) of a function
  - C: the set of all the edges in the CFG
• The coverage ratio of a test set T is the proportion of the elements in M, defined by C, covered by the given test set T.
• A test set T is said to be adequate for C, or simply C-adequate, when the coverage ratio reaches 100% for criterion C.
• We will see examples soon…
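A sketch of the coverage-ratio computation for an all-edges criterion, with a hypothetical CFG and a hypothetical execution trace:

    # C: all edges of a small function's control-flow graph (hypothetical).
    cfg_edges = {("entry", "cond"), ("cond", "then"), ("cond", "else"),
                 ("then", "exit"), ("else", "exit")}

    # Edges actually traversed while executing a test set T (hypothetical trace).
    covered = {("entry", "cond"), ("cond", "then"), ("then", "exit")}

    ratio = len(covered & cfg_edges) / len(cfg_edges)
    print(f"edge coverage of T: {ratio:.0%}")        # 60%, so T is not C-adequate
    print("T is C-adequate:", covered >= cfg_edges)  # True only at 100% coverage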
Theoretical Foundations of Testing: Hierarchy of Adequacy Criteria

• Let us define a subsumption relationship between different test criteria associated with a program model.
• Given a model M, and two criteria C1 and C2 for that model: C1 subsumes C2 if any C1-adequate test set is also C2-adequate.
• Example: consider the criteria all-transitions and all-paths for finite state machines:
  - all-paths subsumes all-transitions
• If C1 subsumes C2, we assume:
  - Satisfying C1 is more "expensive" (e.g., # of test cases) than satisfying C2
  - C1 allows the detection of more faults


Theoretical Foundations of Testing: Ideal Test Set

• A test set T is said to be ideal if, whenever P is incorrect, there exists a d ∈ T such that P is incorrect for d, i.e., P(d) does not satisfy OR.

• If T is an ideal test set and T is successful for P, then P is correct.

• T satisfies a test adequacy criterion C if its input values belong to C.




Theoretical Foundations of Testing: Test Consistency and Completeness

• A test adequacy criterion C is consistent if, for any pair of test sets T1 and T2 both satisfying C, T1 is successful if and only if T2 is.
• A test adequacy criterion C is complete if, whenever P is incorrect, there is an unsuccessful test set that satisfies C.
• If C is consistent and complete, any test set T satisfying C is ideal and could be used to decide P's correctness.
• The problem is that, in general, it is not possible to derive algorithms that determine whether a criterion, a test set, or a program has any of the above-mentioned properties: sadly, they are undecidable problems.


Theoretical Foundations of Testing: Empirical Testing Principle

• As we discussed, it is impossible, from the theoretical standpoint, to determine (find) consistent and complete test criteria.
• Also, exhaustive testing cannot be performed in practice.
• Therefore, we need test strategies that have been empirically investigated.
• A significant test case is a test case with high error-detection potential: its execution increases our confidence in the program's correctness.
• The goal is to run a sufficient number of significant test cases, and that number should be as small as possible (to save time and $$).


Practical Aspects of Testing: Many Causes of Failures

• The specification may be wrong or have a missing requirement (caused by an analyst error)
• The system design may contain a fault (caused by a designer error)
• The program code may be wrong (caused by a programmer error)
• The specification may contain a requirement that is impossible to implement given the prescribed software and hardware




Test Organization

• Since we can have defects at different levels and stages of the SW process, we need testing at different levels and stages:
  - Unit testing (also called module or component testing)
  - Integration testing
  - System testing
  - Non-functional (e.g., performance) testing
  - Acceptance testing
  - Installation testing
  - …

• One typical test process comes next…
Test Organization (just one typical test process)

• Unit testing: each unit's code is tested in isolation, yielding tested units.
• Integration testing (guided by the design descriptions): tested units are combined into integrated modules.
• System functional testing (against the system functional specifications): yields a functioning system.
• Non-functional testing (against the non-functional specifications): yields non-functionally tested software.
• Acceptance testing (against the customer requirements): yields an accepted system.
• Installation testing (in the user environment): the SYSTEM is IN USE!

(Pfleeger, 1998)
Testing Activities at Different Stages

• Unit Testing (white-box): derived from module specifications; visibility of code details; complex stubbing; targets the behavior of single modules.
• Integration Testing (gray-box): derived from interface specifications; visibility of integration structures; some stubbing; targets the interactions among modules.
• System Testing (black-box): derived from requirements specs; no visibility of code; no drivers/stubs; targets system functionalities.

(Pezze and Young, 1998)
Test Organization

• Unit testing
• Integration testing
• System testing
• Acceptance testing


Unit Testing

• (Usually) performed by each developer.
• Scope: ensure that each module (i.e., class, subprogram) has been implemented correctly.
• Based on source code: thus 'white-box' testing.

• A unit is the smallest testable part of an application.
  - In procedural programming, a unit may be an individual program, function, procedure, etc.
  - In object-oriented programming, the smallest unit is a method, which may belong to a base/super class, abstract class, or derived/child class.


Deriving Unit Test Cases

(Figure: test cases for a unit under test exercise its interface, local data structures, boundary conditions, and error-handling paths.)
Unit Testing (cont.)

• Need an effective adequacy criterion.
• Work with source-code analysis tools appropriate for the criterion and development environment.
• May need to develop stubs and domain-specific automation tools.
• Although a critical testing phase, it is often performed poorly in industry…
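A minimal unit-test sketch using Python's unittest, for a hypothetical unit, exercising one nominal case and one error-handling path:

    import unittest

    def safe_divide(a: float, b: float) -> float:   # hypothetical unit under test
        if b == 0:
            raise ZeroDivisionError("b must be non-zero")
        return a / b

    class SafeDivideTest(unittest.TestCase):
        def test_nominal_case(self):
            self.assertEqual(safe_divide(6, 3), 2)

        def test_error_handling_path(self):
            with self.assertRaises(ZeroDivisionError):
                safe_divide(1, 0)

    if __name__ == "__main__":
        unittest.main()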




Integration Testing

• Performed by a small team.
• Scope: ensure that the interfaces between components (which individual developers could not test) have been implemented correctly.
• Test cases have to be planned, documented, and reviewed.
• Performed in a relatively small time-frame.


Integration Testing: Sources of Failures

• Integration of well-tested units may lead to failure due to:
  - Bad use of the interfaces (bad interface specification / implementation)
  - Wrong hypotheses about the behavior/state of related units (bad functional specification / implementation), e.g., a wrong assumption about a return value
  - Use of poor drivers/stubs: a unit may behave correctly with (simple) drivers/stubs, but result in failures when integrated with the actual (complex) units


Integration Testing: Approaches

• System integration tree: the components form a dependency tree (e.g., A depends on B, F, and G; B depends on C, D, and E).

• "Big-bang": integrate everything at once.
• Incremental integration: top-down or bottom-up.
Top-Down Integration Testing

• The top module is tested with stubs for the modules it depends on.
• Stubs are replaced one at a time, "depth first".
• As new modules are integrated, some subset of the tests is re-run.

(Reminder: a test stub is a partial implementation of a component on which the component under test depends.)
Bottom-Up Integration Testing

• Lower-level (worker) modules are grouped into builds (clusters) and integrated.
• Drivers are replaced one at a time, "depth first".

(Reminder: a test stub is a partial implementation of a component the unit under test depends on; a test driver is a partial implementation of a component that depends on the unit under test.)


Incremental vs. "Big-Bang" Integration

• Big-bang – advantages:
  - The whole system is available, so general system-wide problems can be found soon
• Big-bang – disadvantages:
  - Focus is not on a specific unit
  - Harder to locate errors

• Incremental – advantages:
  - Focus is on each module → better testing?
  - Easier to locate errors (fault localization, etc.)
• Incremental – disadvantages:
  - Need to develop special code (stubs and/or drivers)




System Testing

• Performed by a separate group within the organization (most of the time).
• Scope: pretend we are the end-users of the product.
• Focus is on functionality, but many other types of tests may also be performed (e.g., recovery, performance).

• A black-box form of testing.
• Test case specification is driven by the system's use cases.
System Testing (cont.)

• The whole effort has to be planned (system test plan).
• Test cases have to be designed, documented, and reviewed.
• Adequacy is based on requirements coverage,
  - but one must also think beyond the stated requirements.
• Support tools have to be developed/acquired and used for preparing data, executing the test cases, and analyzing the results.
• Group members must develop expertise on specific system features / capabilities.
• Often, the System Test group gets the initial blame for "not seeing the problem before the customer did".




System vs. Acceptance Testing

• System testing:
  - The software is compared with the requirements specifications (verification)
  - Usually performed by the developers, who know the system
• Acceptance testing:
  - The software is compared with the end-user requirements (validation)
  - Usually performed by the customer (buyer), who knows the environment where the system is to be used
  - For general-purpose products, a distinction is sometimes made between alpha- and beta-testing:
    • Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site.
    • Beta testing comes after alpha testing: versions of the software (beta versions) are released to a limited audience of users outside the company.
More on Testing vs. Verification and Validation

• Verification ensures that the product satisfies or matches the original design (low-level checking), i.e., you built the product right. This is done through (white-box) testing.
• Validation checks that the product design satisfies or fits the intended usage (high-level checking), i.e., you built the right product. This is done through (black-box) testing.
• Two fundamental approaches to verification:
  - Dynamic verification (also known as testing)
  - Static verification (also known as analysis, e.g., inspections)

• Strong recommendation: read more…
  - http://en.wikipedia.org/wiki/Verification_and_Validation_(software)
  - http://en.wikipedia.org/wiki/Software_verification
Testing throughout the Lifecycle

• Many of the life-cycle development artifacts provide a rich source of test data.
• Identifying test requirements and test cases early helps shorten the development time:
  - They may help reveal faults.
  - They may also help identify, early on, specifications or designs with low testability.

(Figure: in each phase – analysis, design, implementation – preparation for test proceeds in parallel, leading into the testing phase.)


Lifecycle Mapping of Testing Stages: The Famous "V" Model

(Figure: the V model, mapping each development stage on the left-hand side of the V to a corresponding testing stage on the right-hand side.)


Example: Testing Based on System Requirements

• Errors at this stage will have devastating effects, as every other activity depends on the requirements.
• Natural language for specifying requirements is flexible but ambiguous, and can cause low testability.
  - Example non-testable requirements: "the system should be user-friendly"; "the response time should be reasonable".
• Devising acceptance tests early on from the requirements allows us to assess whether they are testable, and shortens timescales.
  - Example testable requirement: "the response time is less than 1.5 seconds for 95% of the time under average system loading" (a check of this requirement is sketched below).
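A sketch of how such a testable requirement can be checked automatically (the response-time samples here are hypothetical):

    # Hypothetical response times (seconds) measured under average system load.
    samples = [0.8, 1.2, 0.9, 1.4, 1.3, 1.1, 0.7, 1.3, 1.0, 1.2]

    fraction_ok = sum(1 for t in samples if t < 1.5) / len(samples)
    print(f"{fraction_ok:.0%} of samples under 1.5 s")
    # The requirement is testable precisely because this check is mechanical:
    assert fraction_ok >= 0.95, "requirement violated"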


Testing Activities (in order)

• Establish the test objectives
• Design the test cases
• Write the test cases
• Verify (not execute!) the test cases
• Execute the tests
• Evaluate the test results
• Understand the causes of failures


Testing Activities BEFORE Coding

• Testing is a time-consuming activity.
• Devising a test strategy and identifying the test requirements represent a substantial part of it.
• Test planning is essential.
• Testing activities come under huge pressure, as testing is most often performed towards the end of the project.
• In order to shorten time-to-market and ensure a certain level of quality, many QA- and QC-related activities (including testing) must take place early in the development life cycle.


Testing Takes Creativity

• Testing is often viewed as dirty work in industry (though less and less so).
• To develop an effective test, one must have:
  - A detailed understanding of the system
  - Knowledge of the testing techniques
  - The skills to apply these techniques in an effective and efficient manner
• Testing is done best by independent testers:
  - Programmers often stick to the data set that makes the program work
  - A program often does not work when tried by somebody else


Introduction to Software Testing
We just finished Module 1!

Questions?


References

• Dr. Lionel C. Briand, course notes for "Software Quality Engineering and Management", Department of Systems and Computer Engineering, Carleton University, http://squall.sce.carleton.ca/people/briand/teaching.html, used by permission, 2007.
