Introduction
What is Software Testing
• Software testing is the act of executing software with a suite of test cases, either to
find faults in the program or to demonstrate that the program behaves correctly.
• Each test case is associated with a specific program behaviour. A test case contains
a list of test inputs and a list of corresponding expected outputs.
• It is difficult to design a suite of test cases that can prove a program is correct.
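To make the notion of a test case concrete, the sketch below pairs test inputs with expected outputs for a hypothetical `absolute_value` function (the function and its suite are illustrative, not from any particular system):

```python
# Sketch: a suite of test cases for a hypothetical absolute_value
# function, each pairing a test input with its expected output.

def absolute_value(x):
    """Program under test: return |x|."""
    return x if x >= 0 else -x

test_suite = [   # (test input, expected output)
    (5, 5),
    (-3, 3),
    (0, 0),
]

for test_input, expected in test_suite:
    actual = absolute_value(test_input)
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"input={test_input:>3} expected={expected} actual={actual} -> {verdict}")
```

A fully passing suite raises confidence, but it does not prove the program correct, since no finite suite covers every possible input.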
Errors
• People make errors; a good synonym is mistake. When people make mistakes
while coding, the resulting defects are commonly called bugs.
• Errors tend to propagate; a requirements error may be magnified during design
and amplified still more during coding.
• A programmer writes a program. An error occurs in the process of writing a
program.
• Errors may be caused by:
o Typographical errors
o Misreading of a specification
o Misunderstanding of a functionality of a module
Faults
• A fault is the result of an error. It is more precise to say that a fault is the
representation of an error, where representation is the mode of expression, like:
o narrative text,
o Unified Modelling Language diagrams,
o hierarchy charts, and
o source code.
• A fault is the manifestation of one or more errors. A fault in the program is also
commonly referred to as a bug or a defect.
• Types of faults:
o Fault of Commission
o Fault of Omission
• Fault of Commission:
o Occurs when we enter something into a representation that is incorrect.
o Example: Entering character data or negative numbers for the input ‘age’
o Usually, some information in the SRS (Software Requirements Specification)
contradicts other information in the same document, in the domain
knowledge, or in preceding documents.
• Faults of Omission:
o Occur when we fail to enter correct information. Of these two types, faults of
omission are more difficult to detect and resolve.
o Example: not providing/omitting the input for ‘age’
o Usually, information necessary to the problem being solved by the
software has been omitted from the requirements document, or the
requirements are incomplete.
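The two fault types can be sketched for the 'age' example above; both functions and the 0–150 valid range are hypothetical assumptions, not from any real specification:

```python
# Hypothetical sketch of the two fault types for an 'age' input.
# The 0-150 valid range is an assumed specification.

def validate_age_commission(age):
    # Fault of commission: incorrect information was entered into the
    # code -- the lower bound is wrong, so negative ages are accepted.
    return -150 <= age <= 150   # should have been 0 <= age <= 150

def validate_age_omission(age):
    # Fault of omission: correct information was never entered -- the
    # upper-bound check (age <= 150) required by the specification is
    # simply missing, so an age of 999 is silently accepted.
    return age >= 0

print(validate_age_commission(-5))  # True: the commission fault accepts -5
print(validate_age_omission(999))   # True: the omission fault accepts 999
```

The omission fault is the harder of the two to spot in review: nothing in the code looks wrong; something required is simply absent.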
Failure
• A failure occurs when the code corresponding to a fault executes.
• A particular fault may cause different failures, depending upon how it has been
exercised.
• A failure occurs when a faulty piece of code is executed leading to an incorrect
state that propagates to the program’s output.
• The programmer might misinterpret the requirements and consequently write
incorrect code.
• Upon execution, the program might display behavior that does not match with the
expected behavior implying thereby that a failure has occurred.
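The fault/failure distinction can be illustrated with a small sketch (the grading function and its specification are hypothetical): the faulty comparison below executes for many inputs, but only one boundary value turns the fault into a failure.

```python
# Hypothetical spec: scores >= 90 earn an "A", scores >= 80 earn a "B",
# everything else a "C".

def classify(score):
    if score >= 90:
        return "A"
    elif score > 80:   # FAULT: the spec requires >=, not >
        return "B"
    return "C"

print(classify(95))  # "A" -- the faulty line is never exercised
print(classify(85))  # "B" -- the faulty line executes, output still correct
print(classify(80))  # "C" -- FAILURE: the spec expects "B"
```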
Incident
• When a failure occurs, it may or may not be readily apparent to the user (or
customer or tester).
• An incident is the symptom associated with a failure that alerts the user to the
occurrence of a failure.
Test
• Testing is obviously concerned with errors, faults, failures, and incidents.
• A test is the act of exercising software with test cases.
• A test has two distinct goals: to find failures or to demonstrate correct execution.
Summary
• Error (mistake): a mistake made, e.g., while coding
• Fault (defect, bug): the representation of an error
o Fault of omission
o Fault of commission
• Failure: A failure occurs when a Fault executes.
• Incident: alerts the user to the occurrence of a Failure
• To identify these four, we perform Testing.
• To do testing, we use a methodology that comprises Test Cases.
Principles of Testing
Software testing is the process of executing software or an application to
identify defects or bugs. To test an application or software product, we follow
a set of principles that help make the product defect-free and help test
engineers use their effort and time effectively.
The seven different testing principles:
• Testing shows the presence of defects
• Exhaustive Testing is not possible
• Early Testing
• Defect Clustering
• Pesticide Paradox
• Testing is context-dependent
• Absence of errors fallacy
1. Testing shows the presence of defects
• Testing can show that defects are present, but it cannot prove that the
application or software is defect-free.
• While testing, we can only identify errors that exist in the application; we
cannot demonstrate their absence.
• The primary purpose of testing is to uncover as many yet-unknown bugs as
possible with the help of various testing methods and techniques.
• The entire process of testing should also be traceable to the customer’s
requirements so as to find any defects that may cause the product to fail/not
satisfy the customer’s requirements.
2. Exhaustive Testing is not possible
• Testing all possible combinations of inputs and preconditions is not feasible,
except for trivial programs.
• Instead of exhaustive testing, risk analysis and priorities are used to focus
the testing effort on the most important areas.
3. Early Testing
• Testing activities should start in the early stages of the Software
Development Life Cycle (SDLC), from the requirement-analysis stage
onward, because a bug found at an early stage can be fixed immediately,
at far lower cost than one identified in later phases of the testing process.
• To perform early testing, we require the Software Requirements
Specification (SRS) document. Hence, if the requirements are defined
correctly, any bugs can be fixed directly rather than in a later stage.
• Adopting early testing greatly contributes to the successful delivery of a
quality product.
4. Defect clustering
• Defect clustering means that a majority of the bugs detected in the testing
process are concentrated in a small number of modules.
• This may be due to a variety of reasons like only a few modules are complex.
• This is the application of the Pareto Principle to software testing:
approximately 80% of the problems are found in 20% of the modules.
• But this approach has its own problems. If the same tests are repeated over
and over again, eventually the same test cases will no longer find new bugs.
5. Pesticide paradox
• Repetitive use of the same pesticide to eradicate insects will over time lead
to the insects developing resistance to the pesticide, thus making it
ineffective on insects.
• The same applies to software testing. If the same set of tests is run
repeatedly, the testing becomes redundant and will not discover new bugs.
• To overcome this, the test cases need to be regularly reviewed & revised,
adding new & different test cases to help find more defects.
6. Testing is context-dependent
• The testing approach depends on the context of the software being
developed and tested.
• Different types of software require different testing techniques
and methodologies, depending upon the application.
• For example, the testing of the e-commerce site would be different from the
testing of an Android application.
7. Absence of errors fallacy
• Even if the software is 99% bug-free, it can still be unusable if it does not
meet the users' requirements.
• Finding and fixing defects does not help if the system built is the wrong
system, i.e., one that does not fulfil the users' needs and requirements.
5. Assessing the Correctness of Program Behaviour
• Here, the tester determines if the observed behavior of the program under
test is correct or not.
• This step can be further divided into two smaller steps. In the first step, one
observes the behavior and in the second, one analyzes the observed behavior
to check if it is correct or not.
• The entity that performs the task of checking the correctness of the
observed behavior is known as an oracle.
• A tester often assumes the role of an oracle and thus serves as a human
oracle. Oracles can also be programs designed to check the behavior of other
programs.
6. Construction of Oracles
• For trivial programs, such as a sort routine, the construction of an automated
oracle is easy; in general, however, it is a complex undertaking. Example:
• Consider a program named HVideo that allows one to keep track of home
videos. The program operates in two modes:
1. Data entry - to enter details of a DVD and store it in the database
2. Search - to search for DVDs matching a search criterion
• To test HVideo we need to create an oracle that checks whether the program
functions correctly in data entry and search modes.
• In addition, an input generator needs to be created; the input generator
generates inputs for HVideo.
• To test the data entry operation of HVideo, the input generator generates a
data entry request.
• The input generator now requests the oracle to test if HVideo performed its
task correctly on the input given for data entry.
• The oracle uses the input to check if the information to be entered into the
database has been entered correctly or not. The oracle returns a Pass or No
Pass to the input generator.
• To test if HVideo correctly performs the search operation, the input
generator formulates a search request with the search data, the same as the
input given previously.
• This input is then passed on to HVideo that performs the search and returns
the results of the search to the input generator. The input generator passes
these results to the oracle to check for their correctness.
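The data-entry/search loop described above can be sketched in miniature. HVideo, its methods, and the oracle below are simplified stand-ins for illustration, not the actual program:

```python
import random

class HVideo:
    """Simplified stand-in for the program under test."""
    def __init__(self):
        self.db = []

    def add_dvd(self, title):    # data entry mode
        self.db.append(title)

    def search(self, title):     # search mode
        return [t for t in self.db if t == title]

def input_generator():
    """Generates a data-entry request (here, just a DVD title)."""
    return random.choice(["Birthday 2001", "Beach Trip", "Graduation"])

def oracle(hvideo, title):
    """Checks the observed behaviour: the entered title must be found by search."""
    return "Pass" if hvideo.search(title) == [title] else "No Pass"

hv = HVideo()
request = input_generator()   # the generator produces a data-entry request
hv.add_dvd(request)           # HVideo performs the data-entry operation
print(oracle(hv, request))    # the oracle returns Pass or No Pass -- prints "Pass"
```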
Test Metrics
• The term metric refers to a standard of measurement. In software testing, there
exist a variety of metrics.
• Metrics can be computed at the organizational, process, project, and product
levels.
• Regardless of the level at which metrics are defined and collected, there exist four
general core areas that assist in the design of metrics. These are:
o Schedule-related metrics: measure actual completion times of various
activities and compare them with the estimated completion times.
o Quality-related metrics: measures quality of a product or a process.
o Resource-related metrics: measures items such as cost in dollars, manpower,
and tests executed.
o Size-related metrics: measures size of various objects such as the source code
and number of tests in a test suite.
1. ORGANIZATIONAL METRICS
• Metrics at the level of an organization are useful in overall project planning
and management.
• Ex: the number of defects reported after product release, averaged over a set
of products developed and marketed by an organization, is a useful metric of
product quality at the organizational level.
• It allows senior management to monitor the overall strength of the
organization and points to areas of weakness.
• Thus, these metrics help senior management in setting new goals and plan
for resources needed to realize these goals.
2. PROJECT METRICS
• Project metrics relate to a specific project, for example the I/O device testing
project or a compiler project.
• These are useful in the monitoring and control of a specific project. Examples:
i. The ratio of actual-to-planned system test effort is one project metric.
ii. Test effort could be measured in terms of the tester-man-months. The
project manager tracks this ratio to allocate testing resources.
iii. The ratio of the number of successful tests to the total number of tests
in the system test phase.
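With illustrative numbers (assumed, not from any real project), these two ratios are straightforward to compute:

```python
# Sketch: computing the two project metrics above with assumed numbers.

planned_effort = 10.0    # tester-man-months planned for system test
actual_effort = 12.5     # tester-man-months actually spent
effort_ratio = actual_effort / planned_effort   # > 1.0 means over plan

total_tests = 200        # tests in the system test phase
successful_tests = 184
success_ratio = successful_tests / total_tests

print(f"actual-to-planned effort ratio: {effort_ratio:.2f}")   # 1.25
print(f"test success ratio: {success_ratio:.0%}")              # 92%
```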
3. PROCESS METRICS
• Every project uses some test process. The big-bang approach to integration
testing is one process sometimes used in relatively small, single-person
projects. Several other well-organized processes exist.
• The goal of a process metric is to assess the goodness of the process.
• When a test process consists of several phases, for example unit test,
integration test, and system test, one can measure how many defects were
found in each phase.
• It is well known that the later a defect is found, the costlier it is to fix.
• Hence, a metric that classifies defects according to the phase in which they
are found assists in evaluating the process itself.
4. PRODUCT METRICS
• Product metrics relate to a specific product such as a compiler for a
programming language.
• These are useful in making decisions related to the product, for example
“Should this product be released for use by the customer?”
• Two types of Product metrics:
i. Cyclomatic complexity metrics
ii. Halstead metrics
• Cyclomatic complexity (CYC) is a software metric used to determine the
complexity of a program. It is closely tied to the number of decisions in the
source code: the higher the count, the more complex the code. The
cyclomatic complexity, V(G), is given by:
V(G) = E - N + 2P, where
G ⟶ Control Flow Graph of a program
N ⟶ number of Nodes in G
E ⟶ number of Edges in G
P ⟶ number of connected components of G
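As a sketch, consider a hypothetical function whose hand-drawn control-flow graph has 7 nodes and 8 edges in a single connected component (two decision points); the counts are assumptions for illustration:

```python
# Sketch: computing V(G) = E - N + 2P for an assumed control-flow graph.

def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components

print(cyclomatic_complexity(edges=8, nodes=7))  # 8 - 7 + 2*1 = 3
```

For a single-entry, single-exit program this agrees with the rule of thumb "number of decisions + 1": two decision points give V(G) = 3.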
• Halstead's metrics depend upon the actual implementation of the program;
its measures are computed directly from the operators and operands in the
source code, in a static manner.
Halstead measures of program complexity and effort
(η1 = number of distinct operators, η2 = number of distinct operands,
N1 = total occurrences of operators, N2 = total occurrences of operands)

Measure              Notation   Definition
Program vocabulary   η          η1 + η2
Program size         N          N1 + N2
Volume               V          N × log2(η)
Difficulty           D          (η1/2) × (N2/η2)
Effort               E          D × V
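The Halstead measures can be turned into a small calculation; the operator/operand counts below are assumed for an imaginary snippet of code:

```python
import math

# Assumed counts for an imaginary code snippet:
eta1, eta2 = 4, 5   # distinct operators, distinct operands
N1, N2 = 7, 9       # total operator and operand occurrences

vocabulary = eta1 + eta2                # η = η1 + η2
size = N1 + N2                          # N = N1 + N2
volume = size * math.log2(vocabulary)   # V = N × log2(η)
difficulty = (eta1 / 2) * (N2 / eta2)   # D = (η1/2) × (N2/η2)
effort = difficulty * volume            # E = D × V

print(vocabulary, size, difficulty)     # 9 16 3.6
print(round(volume, 1), round(effort, 1))
```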
Verification vs Validation
• Verification: Are we building the system right? The objective is to ensure that
the product conforms to its requirement and design specifications. Includes
activities like reviews, meetings, and inspections. Items evaluated: plans, SRS,
code, and test cases.
• Validation: Are we building the right system? The objective is to ensure that
the product meets the users' requirements. Includes activities like black-box
testing, white-box testing, and grey-box testing. Items evaluated: the actual
product or software under test.
Types of Testing
• Dynamic testing requires the execution of the program under test.
• Static testing consists of techniques for the review and analysis of the program.
• The five classifiers that serve to classify testing techniques that fall under the
dynamic testing category are:
1. C1: Source of test generation
2. C2: Life cycle phase in which testing takes place
3. C3: Goal of a specific testing activity
4. C4: Characteristics of the artifact under test
5. C5: Test process
• Interface testing
o Tests are often generated using a component’s interface.
o It is done to evaluate whether systems or components pass data and control
correctly to one another. It is usually performed by both testing and
development teams.
• Ad-hoc testing (Ad-hoc means - as and when required)
o In ad-hoc testing, a tester generates tests from requirements but without the
use of any systematic method.
o Testing is performed without planning and documentation – the tester tries
to ‘break’ the system by randomly testing the system’s functionality.
• Random testing:
o Random testing uses a systematic method to generate tests.
o Generating tests with random testing requires modelling the input space
and then sampling data from that space randomly.
o These activities begin from the start and continue until the end of the life cycle.
o The testing activities are carried out in parallel with the development activities.
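Random test generation as described under random testing can be sketched as follows; the sort routine under test and the input-space model (short lists of small integers) are assumptions for illustration:

```python
import random

def program_under_test(values):
    # Stand-in for the implementation being tested.
    return sorted(values)

random.seed(7)  # fixed seed so the run is reproducible
for _ in range(100):
    # Model of the input space: lists of 0-10 integers in [-100, 100],
    # sampled randomly.
    n = random.randint(0, 10)
    test_input = [random.randint(-100, 100) for _ in range(n)]
    output = program_under_test(test_input)
    # Property checked in place of hand-written expected outputs:
    # the output must equal the sorted input.
    assert output == sorted(test_input)
print("100 random tests passed")
```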
3. Spiral testing
o Spiral testing refers to a test strategy that can be applied to any incremental
software development process, especially where a prototype evolves into an
application.
o In spiral testing, the sophistication of test activities increases with the stages of
an evolving prototype.
o In the early stages of development, initial test activities are carried out to
perform Test Planning which determines how testing will be performed in the
remainder of the project.
o As the prototype is refined through successive iterations, unit and integration
tests are performed.
o In the final stage, when the requirements are well defined, testers focus on
system and acceptance testing.
4. Agile testing
o Agile testing is the name given to a testing practice that follows the
principles of agile software development.
o Agile testing promotes the following ideas:
i. include testing related activities throughout a development project
starting from the requirements phase
ii. work collaboratively with the customer who specifies requirements in
terms of tests
iii. testers and developers must collaborate with each other rather than
serve as adversaries
iv. test often and in small chunks.
5. Test-Driven Development
o It focuses on creating unit test cases before developing the actual code.
o It is an iterative approach that combines programming, the creation of unit
tests, and refactoring.
o TDD starts with designing and developing tests for every small functionality of
an application.
o Test cases for each functionality are created and run first; if a test fails,
new code is written to make it pass, keeping the code simple and
bug-free.
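One TDD micro-cycle might look like the sketch below; `describe_number` and its "fizz" rule are a hypothetical piece of functionality. The test is written first, fails until the code exists, and passes once just enough code is written.

```python
import unittest

# Step 1: write the unit test for the new functionality FIRST.
class TestDescribeNumber(unittest.TestCase):
    def test_multiple_of_three(self):
        self.assertEqual(describe_number(9), "fizz")

    def test_other_numbers(self):
        self.assertEqual(describe_number(7), "7")

# Step 2: running the test now would fail (describe_number does not
# exist yet), so write just enough code to make it pass.
def describe_number(n):
    return "fizz" if n % 3 == 0 else str(n)

# Step 3: run the tests; once they pass, refactor and repeat the cycle.
suite = unittest.TestLoader().loadTestsFromTestCase(TestDescribeNumber)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
print("all green" if outcome.wasSuccessful() else "red -- keep coding")
```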