
ST - Module 1

Introduction
What is Software Testing
• Software testing is the act of executing software with a suite of test cases so that it
can either find faults in the program or demonstrate the program is correct.
• Each test case is associated with a specific program behaviour. A test case contains
a list of test inputs and a list of corresponding expected outputs.
• It is difficult to design a suite of test cases that can prove a program is correct.
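As a concrete illustration, here is a minimal sketch in Python of a small test suite; the function max_of_two and the chosen values are illustrative, not from any specific program:

```python
# A minimal sketch of a test suite: each test case pairs test inputs
# with the corresponding expected output. The function under test,
# max_of_two, is hypothetical.

def max_of_two(a: int, b: int) -> int:
    return a if a >= b else b

test_cases = [
    # (test inputs, expected output)
    ((13, 19), 19),
    ((7, 3), 7),
    ((-5, -5), -5),
]

for inputs, expected in test_cases:
    actual = max_of_two(*inputs)
    assert actual == expected, f"inputs={inputs}: expected {expected}, got {actual}"
print("all test cases passed")
```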

Errors
• People make errors. A good synonym is mistake. When people make mistakes
while coding, we call these mistakes bugs.
• Errors tend to propagate; a requirements error may be magnified during design
and amplified still more during coding.
• A programmer writes a program. An error occurs in the process of writing a
program.
• Errors may be caused by:
o Typographical errors
o Misreading of a specification
o Misunderstanding of a functionality of a module

Faults
• A fault is the result of an error. It is more precise to say that a fault is the
representation of an error, where representation is the mode of expression, like:
o narrative text,
o Unified Modelling Language diagrams,
o hierarchy charts, and
o source code.
• A fault is the manifestation of one or more errors. A fault in the program is also
commonly referred to as a bug or a defect.
• Types of faults:
o Fault of Commission
o Fault of Omission
• Fault of Commission:
o Occurs when we enter something into a representation that is incorrect.
o Example: Entering character data or negative numbers for the input ‘age’
o Usually, some information in the SRS (Software Requirements Specification) contradicts other information about the same item in the domain knowledge, or conflicts with preceding documents.
• Faults of Omission:
o Occur when we fail to enter correct information. Of these two types, faults of
omission are more difficult to detect and resolve.
o Example: not providing/omitting the input for ‘age’
o Usually, necessary information related to the problem being solved by the software has been omitted from the requirements document, or the requirements are incomplete.

Failure
• A failure occurs when the code corresponding to a fault executes.
• A particular fault may cause different failures, depending upon how it has been
exercised.
• A failure occurs when a faulty piece of code is executed leading to an incorrect
state that propagates to the program’s output.
• The programmer might misinterpret the requirements and consequently write
incorrect code.
• Upon execution, the program might display behavior that does not match with the
expected behavior implying thereby that a failure has occurred.

Incident
• When a failure occurs, it may or may not be readily apparent to the user (or
customer or tester).
• An incident is the symptom associated with a failure that alerts the user to the
occurrence of a failure.

Test
• Testing is obviously concerned with errors, faults, failures, and incidents.
• A test is the act of exercising software with test cases.
• A test has two distinct goals: to find failures or to demonstrate correct execution

Summary
• Error (mistake): mistake while coding, aka bug
• Fault (defect): Result of an error
o Fault of omission
o Fault of commission
• Failure: A failure occurs when a Fault executes.
• Incident: Alerts user occurrence of a Failure
• To identify these four, we do Testing.
• To do testing we use a methodology that comprises of Test Cases.

Relationship between Errors, Faults and Failures


Software Testing Life Cycle

1. Requirement Specification: Business Analyst gathers the requirements from the client and prepares the Customer Requirement Specification (CRS) document.
2. Design: The developer then comprehends the CRS to prepare the High-Level
Design (HLD) and Low-Level Design (LLD).
3. Coding: Based on the CRS, HLD and LLD, the developer then develops the code
using a suitable programming language.
4. Testing: The software tester then designs the Test Cases after analysing the CRS,
HLD and LLD. The Test Cases are then executed with valid and invalid data as input.
5. Fault Classification: While testing, the tester observes failures in the program; these failures are caused by underlying faults (bugs). The tester then classifies these faults with respect to their severity (critical, major or minor) for the developer to rectify them.
6. Fault Isolation: The tester then isolates these faults to understand where, when
and why these faults occurred.
7. Fault Resolution: Developers then fix these errors based on the information given
by the testers.
In the development phases (1-3), three opportunities arise for errors to be made,
resulting in faults that may propagate through the remainder of the development
process. Bugs are introduced in steps 1-3 and found in steps 4-6.
Objectives of Software Testing
• The essence of software testing is to determine a set of test cases for the item to
be tested.
• A test case is a recognized work product. A complete test case will contain a
o test case identifier,
o a brief statement of purpose (e.g., a business rule),
o a description of preconditions,
o the actual test case inputs,
o the expected outputs,
o a description of expected postconditions, and
o an execution history.
• The execution history is primarily for test management use—it may contain the
date when the test was run, the person who ran it, the version on which it was run,
and the pass/fail result.
• Test case execution entails establishing the necessary preconditions, providing the
test case inputs, observing the outputs, comparing these with the expected
outputs, and then ensuring that the expected postconditions exist to determine
whether the test passed.
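The fields listed above can be pictured as a simple record. The following Python dataclass is an illustrative sketch of such a work product; the field names are assumptions for illustration, not a prescribed standard:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Illustrative structure for a complete test case work product.
    test_id: str                 # test case identifier
    purpose: str                 # brief statement of purpose, e.g. a business rule
    preconditions: list[str]     # conditions that must hold before execution
    inputs: dict                 # actual test case inputs
    expected_outputs: dict       # expected outputs
    postconditions: list[str]    # expected postconditions
    execution_history: list[dict] = field(default_factory=list)  # for test management

tc = TestCase(
    test_id="TC-001",
    purpose="Maximum of two integers (business rule: return the larger value)",
    preconditions=["program max is installed"],
    inputs={"a": 13, "b": 19},
    expected_outputs={"result": 19},
    postconditions=["no state change"],
)
# The execution history records date, tester, version and pass/fail result.
tc.execution_history.append(
    {"date": "2024-01-10", "tester": "QA", "version": "1.0", "result": "pass"}
)
```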

Principles of Testing
Software testing is the process of executing software or an application to identify defects or bugs. When testing an application or software, we follow a set of principles that help make the product defect-free and help test engineers make good use of their testing effort and time.
The seven different testing principles:
• Testing shows the presence of defects
• Exhaustive Testing is not possible
• Early Testing
• Defect Clustering
• Pesticide Paradox
• Testing is context-dependent
• Absence of errors fallacy
1. Testing shows the presence of defects
• Testing an application can show that defects are present, but it cannot prove that the application is bug- or defect-free.
• While testing, we can only identify whether the application or software has any errors.
• The primary purpose of testing is to uncover as many unknown bugs as possible with the help of various testing methods and techniques.
• The entire process of testing should also be traceable to the customer’s
requirements so as to find any defects that may cause the product to fail/not
satisfy the customer’s requirements.

2. Exhaustive Testing is not possible


• The process of testing the functionality of the software for all possible
inputs (valid or invalid) and pre-conditions is known as exhaustive testing.
• Hence, performing exhaustive testing is not practical, as it demands unbounded effort and most of that effort is wasted.
• Also, the product timelines will not permit an exhaustive testing approach.
Therefore, we can only test the software for some test cases and not all
scenarios/combinations of input data and preconditions.
• Example: for a password field that accepts 4 characters, even considering just the 26 letters as valid characters, there would still be 26^4 = 456,976 combinations of test cases, which is impractical to test.
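A quick check of the arithmetic in this example (assuming 26 lowercase letters and exactly 4 character positions):

```python
from itertools import product
from string import ascii_lowercase

print(26 ** 4)  # 456976 candidate passwords for 4 letter-only characters

# Materializing every combination is already sizable for 4 characters...
four_char_inputs = list(product(ascii_lowercase, repeat=4))
print(len(four_char_inputs))  # 456976

# ...and the count grows exponentially with length: 26**8 is about 2.1e11.
```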

3. Early Testing
• All testing activities should start in the early stages of the Software Development Life Cycle (SDLC), as early as the requirement analysis stage. Bugs found at an early stage can be fixed immediately, which costs far less than fixing bugs identified in later phases of the process.
• To perform early testing, we will require the Requirement Specification
document (SRS). Hence, if the requirements are defined correctly, then any
bugs can be fixed directly rather than fixing them in a later stage.
• Adopting early testing contributes greatly to the successful delivery of a quality product.
4. Defect clustering
• Defect clustering means that a majority of the bugs detected in the testing process are concentrated in a small number of modules.
• This may be due to a variety of reasons, for example, a few modules being far more complex than the rest.
• This is the application of the Pareto Principle to software testing:
approximately 80% of the problems are found in 20% of the modules.
• But this approach has its own problems. If the same tests are repeated over
and over again, eventually the same test cases will no longer find new bugs.

5. Pesticide paradox
• Repetitive use of the same pesticide to eradicate insects will over time lead
to the insects developing resistance to the pesticide, thus making it
ineffective on insects.
• The same applies to software testing. If the same set of repetitive tests are
conducted, the testing will be redundant and will not discover new bugs.
• To overcome this, the test cases need to be regularly reviewed & revised,
adding new & different test cases to help find more defects.

6. Testing is context-dependent
• The testing approach depends on the context of the software being
developed and tested.
• Different types of software require different testing techniques and methodologies, depending upon the application.
• For example, the testing of the e-commerce site would be different from the
testing of an Android application.

7. Absence of Errors - fallacy (Absence of Errors doesn’t guarantee proper working)


• If the software built is 99% bug-free but does not meet the user's requirements, it is unusable. This can be the case if the system is tested thoroughly against the wrong requirements.
• Software testing is not just merely finding defects/bugs, but also a
mechanism to verify whether the software addresses the business needs.
• Hence, absence of error is a fallacy i.e., finding and fixing defects does not
help if the system build is unusable and does not fulfil the user’s needs &
requirements.
Requirements, Behaviour and Correctness
• Software products are developed based on the Software Requirement
Specification (SRS) document.
• The expected behavior of the product is determined by the tester’s understanding
of the requirements during testing. Example:
Requirement 1: It is required to write a program that inputs two integers and
outputs the maximum of these.
Requirement 2: It is required to write a program that inputs a sequence of integers
and outputs the sorted version of this sequence.
• Suppose that program max is developed to satisfy Requirement 1 above. The
expected output of max when the input integers are 13 and 19 can be easily
determined to be 19. Suppose now that the tester wants to know if the two
integers are to be input to the program on one line followed by a carriage return,
or on two separate lines with a carriage return typed in after each number. The
requirement as stated above fails to provide an answer to this question. This
example illustrates the incompleteness in Requirement 1
• Requirement 2 in the above example is ambiguous. It is not clear from this
requirement whether the input sequence is to be sorted in ascending or
descending order. The behavior of sort program, written to satisfy this
requirement, will depend on the decision taken by the programmer while writing
sort.
INPUT DOMAIN (INPUT SPACE) AND CORRECTNESS
• INPUT DOMAIN: The set of all possible inputs to a program P is known as the input
domain or input space, of P.
• Using Requirement 1 above, we find the input domain of max to be the set of all pairs of integers where each integer in the pair is in the range -32,768 to 32,767.
• Using Requirement 2 it is not possible to find the input domain for the sort
program. Hence, we modify Requirement 2 as:
• Modified Requirement 2: It is required to write a program that inputs a sequence
of integers and outputs the integers in this sequence sorted in either ascending or
descending order. The order of the output sequence is determined by an input
request character which should be A when an ascending sequence is desired, and
D otherwise. While providing input to the program, the request character is
entered first followed by the sequence of integers to be sorted; the sequence is
terminated with a period.
• Based on the above modified requirement, the input domain for sort is a set of
pairs. The first element of the pair is a character. The second element of the pair is
a sequence of zero or more integers ending with a period.
• For example, following are three elements in the input domain of sort:
<A -3 15 6 1 .>
<D 5 9 1 7 -2 .>
<A .>
• CORRECTNESS: A program is considered correct if it behaves as expected on each
element of its input domain.
VALID AND INVALID INPUTS
• The modified requirement for sort mentions that the request characters can be A
or D, but what if the user types a different character?
• Any character other than A or D is considered as an invalid input to sort. The
requirement for sort does not specify what action it should take when an invalid
input is encountered.
• Identifying the set of invalid inputs and testing the program against these inputs
are important parts of the testing activity. Testing a program against invalid inputs
might reveal errors in the program.
• In cases where the input to a program is not guaranteed to be correct, it is
convenient to partition the input domain into two subdomains.
• One subdomain consists of inputs that are valid and the other consists of inputs
that are invalid.
• A tester can then test the program on selected inputs from each subdomain.
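A minimal sketch of such a partition for the request character of sort, where {A, D} forms the valid subdomain and everything else is invalid (the classifier function is illustrative):

```python
# Partition the request-character input domain of sort into a valid
# subdomain {A, D} and an invalid subdomain (every other character).
VALID_REQUEST_CHARS = {"A", "D"}

def classify(ch: str) -> str:
    return "valid" if ch in VALID_REQUEST_CHARS else "invalid"

# Select test inputs from each subdomain.
for ch in ["A", "D", "R", "a", "1"]:
    print(ch, classify(ch))
# The tester would then check how sort behaves on the invalid inputs R, a, 1.
```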

Testing and Debugging


• Testing is the process of determining if a program behaves as expected. In the
process one may discover errors in the program under test.
• However, when testing reveals an error, the process used to determine the cause
of this error and to remove it is known as debugging.
• Testing and debugging are often used as two related activities in a cyclic manner.
The following are the steps involved:
1. Preparing a test plan
2. Constructing test data
3. Executing the program
4. Specifying program behaviour
5. Assessing the correctness of program behavior
6. Construction of oracle

The Test-Debug Cycle
1. Preparing a Test Plan
• A test cycle is often guided by a test plan. A sample test plan for testing the
sort program is as follows:
1. Execute the program on at least two input sequences, one with A and the
other with D as request characters.
2. Execute the program on an empty input sequence.
3. Test the program for robustness against erroneous inputs such as R typed
in as the request character.
4. All failures of the test program should be recorded in a suitable file using
the Company Failure Report Form.
2. Constructing Test Data
• A test case is a pair consisting of test data to be input to the program and the
expected output.
• The test data is a set of values, one for each input variable.
• A test set is a collection of zero or more test cases. Example for sort:
Test Case 1: Test data: <A 12 -29 32 .>; Expected output: -29 12 32
Test Case 2: Test data: <D 12 -29 32 .>; Expected output: 32 12 -29
Test Case 3: Test data: <D .>; Expected output: No input
Test Case 4: Test data: <R .>; Expected output: Invalid request character
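These four test cases could be scripted as shown below; a sketch assuming a hypothetical Python implementation sort_sequence(request_char, numbers) that raises an error on an invalid request character:

```python
# Sketch: the four test cases above, scripted against a hypothetical
# sort_sequence(request_char, numbers) implementation of sort.

def sort_sequence(request_char, numbers):
    if request_char not in ("A", "D"):
        raise ValueError("Invalid request character")
    return sorted(numbers, reverse=(request_char == "D"))

assert sort_sequence("A", [12, -29, 32]) == [-29, 12, 32]   # Test Case 1
assert sort_sequence("D", [12, -29, 32]) == [32, 12, -29]   # Test Case 2
assert sort_sequence("D", []) == []                          # Test Case 3: empty input
try:
    sort_sequence("R", [])                                   # Test Case 4: invalid request
except ValueError as e:
    assert "Invalid request character" in str(e)
```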
3. Executing the Program
• Execution of a program under test is the next significant step in testing.
• Execution of this step for the sort program is most likely a trivial exercise.
• However, this may not be so for large and complex programs.
• The complexity of the actual program execution is dependent on the program
itself.
• Often a tester might be able to construct a test harness to aid in program
execution.
• The harness initializes any global variables, inputs a test case, and executes
the program. The output generated by the program may be saved in a file for
subsequent examination by a tester.
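A minimal harness along these lines might look as follows; the function and file names are illustrative assumptions:

```python
import json

# Sketch of a test harness: feed each test case to the program under
# test and save the observed output to a file for later examination
# by a tester.

def run_harness(program, test_set, log_path="observed_outputs.json"):
    results = []
    for tc in test_set:
        try:
            observed = program(*tc["inputs"])
        except Exception as e:          # failures on invalid input are also recorded
            observed = f"error: {e}"
        results.append({"inputs": tc["inputs"],
                        "expected": tc["expected"],
                        "observed": observed})
    with open(log_path, "w") as f:
        json.dump(results, f, indent=2, default=str)
    return results
```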
4. Specifying Program Behavior
• Program behavior can be specified in several ways: plain natural language, a
state diagram, formal mathematical specification, etc.
• Ex: state diagram specifies program states and how the program changes its
state on an input sequence.
• Example: consider a menu-driven application.

• The state diagram for the program execution can be given as:
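Since the figure is not reproduced here, the same idea can be sketched as a state-transition table; the states and inputs below are illustrative assumptions for a generic menu-driven application:

```python
# Illustrative state-transition table for a hypothetical menu-driven
# application: (current_state, input) -> next_state.
transitions = {
    ("MENU", "1"): "DATA_ENTRY",
    ("MENU", "2"): "SEARCH",
    ("MENU", "q"): "EXIT",
    ("DATA_ENTRY", "done"): "MENU",
    ("SEARCH", "done"): "MENU",
}

def next_state(state, user_input):
    # Stay in the current state on an undefined input.
    return transitions.get((state, user_input), state)

assert next_state("MENU", "1") == "DATA_ENTRY"
assert next_state("DATA_ENTRY", "done") == "MENU"
```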
5. Assessing the Correctness of Program Behaviour
• Here, the tester determines if the observed behavior of the program under
test is correct or not.
• This step can be further divided into two smaller steps. In the first step, one
observes the behavior and in the second, one analyzes the observed behavior
to check if it is correct or not.
• The entity that performs the task of checking the correctness of the
observed behavior is known as an oracle.

Relationship between the program under test and the oracle

• A tester often assumes the role of an oracle and thus serves as a human
oracle. Oracles can also be programs designed to check the behavior of other
programs
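For the sort example, a programmatic oracle could recompute the expected output with a trusted reference implementation and compare it against the observed output; a minimal sketch, assuming Python's built-in sorted as the reference:

```python
def oracle_for_sort(request_char, numbers, observed_output):
    # Oracle: recompute the expected behavior with a trusted reference
    # (Python's built-in sorted) and compare with the observed output.
    expected = sorted(numbers, reverse=(request_char == "D"))
    return "Pass" if observed_output == expected else "No Pass"

print(oracle_for_sort("A", [12, -29, 32], [-29, 12, 32]))  # Pass
print(oracle_for_sort("D", [12, -29, 32], [-29, 12, 32]))  # No Pass
```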
6. Construction of Oracles
• For trivial programs like sort, the construction of an automated oracle is easy; in general, however, it is a complex undertaking. Example:
• Consider a program named HVideo that allows one to keep track of home
videos. The program operates in two modes:
1. Data entry - to enter details of a DVD and store it in the database
2. Search - to search for DVDs matching a search criterion
• To test HVideo we need to create an oracle that checks whether the program
functions correctly in data entry and search modes.
• In addition, an input generator needs to be created. As shown in the below
figure, the input generator generates inputs for HVideo.

• To test the data entry operation of HVideo, the input generator generates a
data entry request.
• The input generator now requests the oracle to test if HVideo performed its
task correctly on the input given for data entry.
• The oracle uses the input to check if the information to be entered into the
database has been entered correctly or not. The oracle returns a Pass or No
Pass to the input generator.
• To test if HVideo correctly performs the search operation, the input
generator formulates a search request with the search data, the same as the
input given previously.
• This input is then passed on to HVideo that performs the search and returns
the results of the search to the input generator. The input generator passes
these results to the oracle to check for their correctness.

Test Metrics
• The term metric refers to a standard of measurement. In software testing, there
exist a variety of metrics.
• Metrics can be computed at the organizational, process, project, and product
levels.
• Regardless of the level at which metrics are defined and collected, there exist four
general core areas that assist in the design of metrics. These are:
o Schedule-related metrics: measures actual completion times of various activities and compares these with estimated times to completion.
o Quality-related metrics: measures quality of a product or a process.
o Resource-related metrics: measures items such as cost in dollars, manpower,
and tests executed.
o Size-related metrics: measures size of various objects such as the source code
and number of tests in a test suite.

1. ORGANIZATIONAL METRICS
• Metrics at the level of an organization are useful in overall project planning
and management.
• Ex: the number of defects reported after product release, averaged over a set
of products developed and marketed by an organization, is a useful metric of
product quality at the organizational level.
• It allows senior management to monitor the overall strength of the
organization and points to areas of weakness.
• Thus, these metrics help senior management in setting new goals and plan
for resources needed to realize these goals.

2. PROJECT METRICS
• Project metrics relate to a specific project, for example the I/O device testing
project or a compiler project.
• These are useful in the monitoring and control of a specific project. Examples:
i. The ratio of actual-to-planned system test effort is one project metric.
ii. Test effort could be measured in terms of the tester-man-months. The
project manager tracks this ratio to allocate testing resources.
iii. The ratio of the number of successful tests to the total number of tests
in the system test phase.

3. PROCESS METRICS
• Every project uses some test process. The big-bang approach (Integration
Testing) is one process sometimes used in relatively small single-person
projects. Several other well-organized processes exist.
• The goal of a process metric is to assess the goodness of the process.
• When a test process consists of several phases, for example unit test,
integration test, and system test, one can measure how many defects were
found in each phase.
• It is well known that the later a defect is found, the costlier it is to fix.
• Hence, a metric that classifies defects according to the phase in which they
are found assists in evaluating the process itself.

4. PRODUCT METRICS
• Product metrics relate to a specific product such as a compiler for a
programming language.
• These are useful in making decisions related to the product, for example
“Should this product be released for use by the customer?”
• Two types of Product metrics:
i. Cyclomatic complexity metrics
ii. Halstead metrics
• Cyclomatic complexity (CYC) is a software metric used to determine the
complexity of a program. It is a count of the number of decisions in the
source code. The higher the count, the more complex the code. The
cyclomatic complexity, V(G) is given by:
V(G) = E - N + 2P, where
G ⟶ the control flow graph of the program
E ⟶ number of edges in G
N ⟶ number of nodes in G
P ⟶ number of connected components of G
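A small worked example of the formula, using an assumed function with a single if-else decision:

```python
def sign(x):
    if x > 0:      # the only decision in the function
        return 1
    else:
        return -1

# Control flow graph G of sign:
#   N = 4 nodes (decision, then-branch, else-branch, exit)
#   E = 4 edges (decision->then, decision->else, then->exit, else->exit)
#   P = 1 connected component
# V(G) = E - N + 2P = 4 - 4 + 2*1 = 2,
# which agrees with "number of decisions + 1" = 1 + 1 = 2.
```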
• Halstead's metrics depend upon the actual implementation of a program; the measures are computed statically, directly from the operators and operands in the source code.
Halstead measures of program complexity and effort:
• Operator count, N1: number of operators in a program
• Operand count, N2: number of operands in a program
• Unique operators, η1: number of unique operators in a program
• Unique operands, η2: number of unique operands in a program
• Program vocabulary: η = η1 + η2
• Program size: N = N1 + N2
• Program volume: V = N × log2(η)
• Difficulty: D = (η1 / 2) × (N2 / η2)
• Effort: E = D × V
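A small worked example of these measures for a single assignment statement; the counting below follows one common convention (conventions vary, e.g., on whether = is counted as an operator):

```python
import math

# Worked Halstead example for the single statement: a = b + c * 2
# Operators: =, +, *          Operands: a, b, c, 2
N1, N2 = 3, 4          # total operator / operand counts
eta1, eta2 = 3, 4      # unique operator / operand counts

eta = eta1 + eta2              # program vocabulary: 7
N = N1 + N2                    # program size: 7
V = N * math.log2(eta)         # program volume: ~19.65
D = (eta1 / 2) * (N2 / eta2)   # difficulty: 1.5
E = D * V                      # effort: ~29.48
print(eta, N, round(V, 2), D, round(E, 2))
```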

Verification and Validation

• Verification asks, "Are we implementing the system right?"; validation asks, "Are we implementing the right system?"
• Verification is static testing; validation is dynamic testing.
• Verification evaluates the products of a development phase; validation evaluates the product at the close of a development phase.
• The objective of verification is to ensure that the product conforms to the requirement and design specifications; the objective of validation is to ensure that the product meets the users' requirements.
• Verification includes activities like reviews, meetings and inspections; validation includes activities like black-box, white-box and grey-box testing.
• Verification checks whether the outputs of a phase are consistent with its inputs; validation checks whether the developed software is acceptable to the user.
• Items evaluated in verification: plans, SRS, code and test cases; items evaluated in validation: the actual product or software under test.
• Verification involves manual checking of files and documents; validation involves exercising the software built from those files and documents.
• Verification consists of checking documents/files and is performed by humans; validation consists of executing the program and is performed on a computer.

Types of Testing
• Dynamic testing requires the execution of the program under test.
• Static testing consists of techniques for the review and analysis of the program.
• The five classifiers that serve to classify testing techniques that fall under the
dynamic testing category are:
1. C1: Source of test generation
2. C2: Life cycle phase in which testing takes place
3. C3: Goal of a specific testing activity
4. C4: Characteristics of the artifact under test
5. C5: Test process

CLASSIFIER C1: SOURCE OF TEST GENERATION


• Black-box testing:
o It is a method of software testing that verifies the functionality of an
application without having specific knowledge of the application’s
code/internal structure, implementation details and internal paths.
o Tests are based on requirements and functionality. It is performed by QA
teams.
o Black Box Testing mainly focuses on input and output of software
applications.
o It is also known as Behavioural Testing.

o Types of Black-Box testing:


▪ Functional testing – This black box testing type is related to the
functional requirements of a system; it is done by software testers
▪ Non-functional testing – This type of black box testing is not related to testing specific functionality, but to non-functional requirements such as performance, scalability and usability.
▪ Regression testing – It is done after code fixes, upgrades or any other system maintenance to check that the new code has not affected the existing code.
o Types of Black-box testing techniques:
▪ Boundary Value Testing: Boundary value testing focuses on the values at boundaries. This technique determines whether a certain range of values is acceptable to the system or not, and it is very useful in reducing the number of test cases (see the sketch after this list).
▪ Equivalence Class Testing: It is used to minimize the number of possible
test cases to an optimum level while maintaining reasonable test
coverage.
▪ Decision Table Testing: A decision table puts causes and their effects in
a matrix. There is a unique combination in each column.
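A sketch of boundary value and equivalence class selection for a field accepting an integer age from 18 to 60; the field and its range are illustrative assumptions:

```python
# Boundary value and equivalence class selection for an input field
# that accepts an integer age in the range 18..60 (range assumed).
LO, HI = 18, 60

# Boundary value testing: values at and around each boundary.
boundary_values = [LO - 1, LO, LO + 1, HI - 1, HI, HI + 1]  # 17, 18, 19, 59, 60, 61

# Equivalence class testing: one representative value per class.
equivalence_classes = {
    "invalid_below": 10,   # stands for every value < 18
    "valid": 35,           # stands for every value in 18..60
    "invalid_above": 75,   # stands for every value > 60
}
```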
• White-box testing
o In this approach, internal structure, design and coding of software are tested
to verify flow of input-output and to improve design, usability and security.
o Code could be used directly or indirectly for test generation. In the direct
case, a tool, or a human tester, examines the code and focuses on a given
path to be covered. A test is generated to cover this path.
o In the indirect case, tests generated using some black-box technique are
assessed against some code-based coverage criterion. Additional tests are
then generated to cover the uncovered portions of the code by analyzing
which parts of the code are feasible.
• Model-based or specification-based testing
o Model-based or specification-based testing occurs when the requirements
are formally specified.
o Tests are generated using formal specifications like:
▪ Data flow
▪ Control flow
▪ Dependency graphs
▪ Decision tables
▪ State transition machines

• Interface testing
o Tests are often generated using a component’s interface.
o It is done to evaluate whether systems or components pass data and control
correctly to one another. It is usually performed by both testing and
development teams.
• Ad-hoc testing (Ad-hoc means - as and when required)
o In ad-hoc testing, a tester generates tests from requirements but without the
use of any systematic method.
o Testing is performed without planning and documentation – the tester tries
to ‘break’ the system by randomly testing the system’s functionality.
• Random testing :
o Random testing uses a systematic method to generate tests.
o Generation of tests using random testing requires modelling the input space
and then sampling data from the input space randomly.
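A minimal sketch of random test generation for the sort example; sort_under_test stands in for the hypothetical implementation being tested:

```python
import random

# Random testing sketch: model the input space of sort (a request
# character plus a short integer sequence), then sample from it.
def sort_under_test(request_char, numbers):
    return sorted(numbers, reverse=(request_char == "D"))

random.seed(42)  # fixed seed so failing runs can be reproduced

for _ in range(100):
    request_char = random.choice(["A", "D"])
    numbers = [random.randint(-100, 100) for _ in range(random.randint(0, 10))]
    expected = sorted(numbers, reverse=(request_char == "D"))
    assert sort_under_test(request_char, numbers) == expected
```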

CLASSIFIER C2: LIFE CYCLE PHASE


• Testing activities take place throughout the software life cycle. Each artifact
produced is often subject to testing at different levels of rigor and using different
testing techniques.
• Testing is often categorized based on the phase in which it occurs, as given below:

• Unit testing (Coding Phase)


o Programmers write code during the early coding phase. They test their code
before it is integrated with other system components. This type of testing is
referred to as unit testing.
o The programmer focuses on the unit or a small component that has been
developed. The goal is to ensure that the unit functions correctly in isolation.
• Integration testing (Integration Phase)
o When units are integrated and a large component or a subsystem formed,
one does integration testing of the subsystem.
o The goal is to ensure that a collection of components function as desired.
Integration errors are often discovered at this stage.
• System testing (System Integration Phase)
o Eventually, when the entire system has been built, its testing is referred to as
system testing.
o The goal of system testing is to ensure that all the desired functionality is in
the system and works as per its requirements.
• Alpha and Beta Testing (Pre and Post Release)
o Alpha Testing is a type of software testing performed to identify bugs before
releasing the product to real users or to the public.
o In Beta Testing, a carefully selected set of customers are asked to test a
system before release.
• Regression Testing (Maintenance Phase)
o After release, errors are reported by users, thus resulting in changes being
required to the application.
o Often, these changes are much smaller in size compared to the entire
application, thus, eliminating the need for a complete system test.
o In such situations, one performs a regression test.
o The goal of regression testing is to ensure that the modified system functions
per its specifications.
o Test cases selected for regression testing must include those designed to test
the modified code and any other code that might be affected by the
modifications.

CLASSIFIER: C3: GOAL-DIRECTED TESTING


• Goal-oriented testing looks for specific types of failures. Types of goal-directed
testing:
• Robustness testing: Robustness testing refers to the task of testing an application
for robustness against unintended inputs.
• Stress testing: In stress testing one checks for the behavior of an application under
stress. Handling of overflow of data storage, for example buffers, can be checked
with the help of stress testing.
• Performance testing:
o The term performance testing refers to that phase of testing where an
application is tested specifically with performance requirements in view.
o Ex: a compiler might be tested to check if it meets the performance
requirements stated in terms of number of lines of code compiled per second
• Load testing:
o It refers to that phase of testing in which an application is loaded with
respect to one or more operations.
o The goal is to determine if the application continues to perform as required
under various load conditions.
o Ex: a database server can be loaded with requests from a large number of
simulated users.
o While the server might work correctly when one or two users use it, it might
fail in various ways when the number of users exceeds a threshold.

CLASSIFIER C4: ARTIFACT UNDER TEST


• Testers often say “We do X-testing” where X corresponds to an artifact under test.
• Many testing techniques are named after the artifact that is being tested; two examples follow.

• OO-testing refers to the testing of programs that are written in an object-oriented


language such as C++ or Java.
• During the design phase one might generate a design using the SDL notation. This
design can be tested before it is committed to code. This form of testing is known
as design testing.

CLASSIFIER C5: TEST PROCESS MODELS


• Software testing can be integrated into the software development life cycle in a
variety of ways. This leads to various models for the test process.
1. Testing in the waterfall model
o The waterfall model is one of the earliest, and least used, software life cycle
models.
o The below figure shows the different phases in a development process based on
the waterfall model.
o The waterfall model requires adherence to an inherently sequential process; hence, defects introduced in the early phases and discovered in later phases can be costly to correct.
o There is very little iterative or incremental development when using the
waterfall model.
o Thus, customers must be concise and clear on requirements before
development begins.
o Once the project begins, changes cannot be made or are very costly to
implement.
2. Testing in the V-model
o The V-model, as shown in the figure, explicitly specifies testing activities associated with each phase of the development cycle.

o These activities begin from the start and continue until the end of the life cycle.
o The testing activities are carried out in parallel with the development activities.
3. Spiral testing
o Spiral testing refers to a test strategy that can be applied to any incremental
software development process, especially where a prototype evolves into an
application.
o In spiral testing, the sophistication of test activities increases with the stages of
an evolving prototype.
o In the early stages of development, initial test activities are carried out to
perform Test Planning which determines how testing will be performed in the
remainder of the project.
o As the prototype is refined through successive iterations, unit and integration
tests are performed.
o In the final stage, when the requirements are well defined, testers focus on
system and acceptance testing.
4. Agile testing
o This is a name given to a test process that is well defined.
o Agile testing promotes the following ideas:
i. include testing related activities throughout a development project
starting from the requirements phase
ii. work collaboratively with the customer who specifies requirements in
terms of tests
iii. testers and developers must collaborate with each other rather than
serve as adversaries
iv. test often and in small chunks.

5. Test-Driven Development
o It focuses on creating unit test cases before developing the actual code.
o It is an iterative approach that combines programming, the creation of unit
tests, and refactoring.
o TDD starts with designing and developing tests for every small functionality of
an application.
o Test cases for each functionality are created and run first; if a test fails, new code is written to make the test pass, while keeping the code simple and bug-free.
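A minimal red-green sketch of this cycle, using a hypothetical slugify function; the test is written first and fails until the minimal code below it is added:

```python
import unittest

# TDD sketch: the test below is written FIRST and fails (red) because
# slugify does not exist yet; the minimal implementation that follows
# makes it pass (green), after which the code can be refactored.

class TestSlugify(unittest.TestCase):
    def test_replaces_spaces_and_lowercases(self):
        self.assertEqual(slugify("Software Testing"), "software-testing")

# Minimal code written only to satisfy the failing test:
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")

if __name__ == "__main__":
    unittest.main()
```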

Software Quality and Reliability


• Software quality assurance (SQA) is a process which ensures that all software engineering processes, methods, activities and work items are monitored and comply with a set of defined standards.
• Software testing is one of the methods to assure software quality. Software quality
assurance involves verification and validation of software.
Software Verification
Software can be verified statically or dynamically.
• Static Verification
o Code review
o Code inspection
o Walkthroughs (demo of a task)
o Analysis of structured, maintainable and testable code
o Verifying correct and complete documentation
• Example: A poorly documented piece of code (without proper commenting) will be
harder to understand and hence difficult to modify for future changes.
• Dynamic Verification (R - CCC - UP)
o Reliability: Software reliability is the probability of failure free operation of
software over a given time interval & under given conditions.
o Correctness:
▪ It refers to a correct operation and is always with reference to an
artifact.
▪ Ex: For a tester, correctness is with respect to the requirements, while for a user, correctness is with respect to the user manual.
o Completeness:
▪ Refers to the availability of all the features listed in the requirements or
in the user manual.
▪ An incomplete software is one that does not fully implement all
features required.
o Consistency:
▪ It is defined as the requirement that a series of measurements of the
same project carried out by different raters using the same method
should produce similar results
▪ Refers to adherence to a common set of conventions and assumptions.
▪ Ex: All buttons in the user interface might follow a common-color
coding convention.
o Usability:
▪ The development organization invites a selected set of potential users
and asks them to test the product.
▪ Users in turn test for ease of use, functionality as expected,
performance, safety and security.
▪ Users thus serve as an important source of tests.
o Performance:
▪ Refers to the time the application takes to perform a requested task.
Performance is considered as a non-functional requirement.
▪ Ex: A task must be performed at the rate of X units of activity in one
second on a machine running at speed Y, having Z gigabytes of memory.
Defect Tracking
• Defect tracking is the process of logging and monitoring bugs or errors during software testing. It is also referred to as bug tracking or issue tracking.
• Large systems may have hundreds or thousands of defects. Each needs to be
evaluated, monitored and prioritized for debugging.
• In some cases, bugs may need to be tracked over a long period of time.
• During its lifetime, a single defect may go through several stages or states. They
include:
o Active: Investigation is underway
o Test: Fixed and ready for testing
o Verified: Retested and verified by quality assurance (QA)
o Closed: Can be closed after QA retesting or if it is not considered to be a
defect
o Reopened: Not fixed and reactivated
• Bugs are managed based on priority and severity. Severity levels help to identify
the relative impact of a problem.
o Catastrophic:
▪ Causes total failure of the software or unrecoverable data loss.
▪ There is no workaround and the product can’t be released.
o Impaired functionality:
▪ A workaround may exist, but it is unsatisfactory.
▪ The software can’t be released.
o Failure of non-critical systems:
▪ A reasonably satisfactory workaround exists.
▪ The product may be released, if the bug is documented.
o Very minor:
▪ There is a workaround, or the issue can be ignored.
▪ It does not impact a product release.
• Several tools exist for recording defects, and computing and reporting defect-
related statistics. Ex: Bugzilla
• They provide several features for defect management including defect recording,
classification, and tracking.
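The defect life cycle described above can be pictured as a small state machine. The sketch below uses the states listed earlier; which transitions are legal is an illustrative assumption, not a rule from any particular tool:

```python
# Sketch of the defect life cycle as a state machine. States follow the
# list above; the set of allowed transitions is assumed for illustration.
ALLOWED = {
    "Active":   {"Test", "Closed"},      # fixed, or judged not a defect
    "Test":     {"Verified", "Reopened"},
    "Verified": {"Closed", "Reopened"},
    "Closed":   {"Reopened"},            # not fixed after all
    "Reopened": {"Active"},              # reactivated for investigation
}

def transition(current: str, new: str) -> str:
    if new not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

state = "Active"
state = transition(state, "Test")
state = transition(state, "Verified")
state = transition(state, "Closed")
```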
