
What is Software Testing?

Software testing is defined as an activity to check whether the actual results match the expected results and to
ensure that the software system is defect-free. It involves the execution of a software or system component
to evaluate one or more properties of interest.

Software testing also helps to identify errors, gaps or missing requirements contrary to the actual requirements. It
can be done either manually or using automated tools. Software testing is also often described in terms of White
Box and Black Box Testing.

In simple terms, Software Testing means Verification of Application Under Test (AUT).

Consequences of S/W failure:

Software systems have become such an essential part of our economy that whenever they fail, there are economic
consequences. A research study done by software testing company Tricentis revealed that in the year 2017 software
failure affected 3.6 billion people and caused $1.7 trillion in financial losses.

Olympic hammer throw scoring system

Failure: During the 2012 London Olympics, the software used for calculating scores in the hammer throw event was not
ready to accept the freak occurrence of successive throws of exactly the same distance. During overall score
calculation, it wiped out German athlete Heidler's throw of 77.12m, which came after Russian athlete Lysenko's throw
of 77.12m. Eventually the glitch was detected and Heidler won the bronze medal. However, this was not before
the results had incorrectly announced China's Zhang Wenxiu as the bronze winner and she had taken her victory lap around
the ground.

Loss: Loss of reputation for the International Olympic Committee.

Vancouver Stock Exchange crash

Failure: During 1982-83, the Vancouver Stock Exchange (VSE) stock index kept falling in an
unexplainable way, causing panic among the stock brokers about an imminent crash. Eventually the root cause was
traced to the new index calculation software, installed in January 1982. While the stocks were traded using 4 decimal
places for prices, the software, which used 3 decimal places, truncated the price values instead of rounding them. The
VSE Index was losing 0 to 0.001 points on every transaction. With thousands of transactions per day, it was losing
almost 1 point every day. When the error was discovered in November 1983, the index had already lost 520 points.

Loss: Serious financial loss for the brokers and investors, along with reputation loss for VSE.
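The truncation bug can be illustrated with a short simulation. This is a hedged sketch, not the VSE's actual code: the per-transaction adjustments, starting value and transaction count are made-up illustrative numbers; only the contrast between truncating and rounding to 3 decimal places reflects the incident.

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP
from itertools import cycle

RESOLUTION = Decimal("0.001")  # the index was kept to 3 decimal places

trunc_index = Decimal("1000.000")  # index updated with truncation (the bug)
round_index = Decimal("1000.000")  # index updated with correct rounding

# Hypothetical stream of tiny per-transaction gains, finer than the index's
# own resolution (prices traded at 4 decimals, the index kept only 3).
deltas = cycle([Decimal("0.0006"), Decimal("0.0004")])

for _ in range(2000):  # an illustrative day's worth of transactions
    d = next(deltas)
    # Bug: truncation discards every sub-resolution gain.
    trunc_index = (trunc_index + d).quantize(RESOLUTION, rounding=ROUND_DOWN)
    # Fix: rounding lets the gains accumulate correctly.
    round_index = (round_index + d).quantize(RESOLUTION, rounding=ROUND_HALF_UP)

print(trunc_index)  # 1000.000 -- the truncated index never gains a point
print(round_index)  # 1001.000 -- the rounded index tracks the true drift
```

Truncation always discards value in the same direction, so the error is a systematic downward bias rather than random noise that cancels out; rounding is unbiased and stays close to the true value.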

Early Warning Missile Defense System

Failure: On 26th September 1983, during the height of the Cold War, the Soviet early warning systems in Moscow
reported a launch of 5 nuclear missiles from the USA towards the USSR. The USSR's retaliation protocol, in the event of a
nuclear attack, was to ensure 'Mutually Assured Destruction'. Lt. Col. Stanislav Petrov, the officer on duty at the
command center, knew that a US nuclear first strike would be a rain of missiles and not just 5. Based on his gut
feeling he reported to his seniors that it must be a false alarm (which turned out to be true) and advised not to launch
a counter strike, thus averting a nuclear war. It was later found that the satellite's analyzing software had incorrectly
reported the sun's rays reflected off certain clouds as missile launches.

Loss: Potential large scale loss of life and property.

Failures due to defective software can manifest as loss of reputation, money and, in extreme cases, even life. It is
important that their causes are detected and corrected before they can manifest.
Definition of Software Quality

Software quality is the degree to which a process, component or system conforms to the specified requirements and
user expectations.

Characteristics of Software Quality

As per the ISO standard model (ISO/IEC 9126), the quality of software can be evaluated based on six
characteristics: functionality, reliability, usability, efficiency, maintainability and portability.

DEFECT:
A defect can be simply defined as a variance between expected and actual behaviour. A defect is an error found AFTER the application
goes into production. It commonly refers to various problems with the software product, with its external behavior or
with its internal features. In other words, a defect is the difference between the expected and actual result in the context of
testing; it is a deviation from the customer requirement.

Defect can be categorized into the following:


Wrong: When requirements are implemented incorrectly. This defect is a variance from the given
specification.
Missing: A requirement of the customer that was not fulfilled. This is a variance from the specification, an indication
that a specification was not implemented, or that a requirement of the customer was not noted correctly.
Extra: A requirement incorporated into the product that was not given by the end customer. This is always a variance
from the specification, but may be an attribute desired by the user of the product. However, it is considered a defect
because it's a variance from the existing requirements.
ERROR: An error is a mistake, misconception, or misunderstanding on the part of a software developer. In the
category of developer we include software engineers, programmers, analysts, and testers. For example, a developer
may misunderstand a design notation, or a programmer might type a variable name incorrectly – this leads to an error.
An error is generated because of wrong logic, a wrong loop or a syntax mistake. An error normally arises in software
and changes the functionality of the program.
BUG: A bug is the result of a coding error: an error found in the development environment before the product is
shipped to the customer. It is a programming error that causes a program to work poorly, produce incorrect results or
crash; an error in software or hardware that causes a program to malfunction. 'Bug' is the tester's terminology.
FAILURE: A failure is the inability of a software system or component to perform its required functions within
specified performance requirements. When a defect reaches the end customer it is called a failure. During
development, failures are usually observed by testers.
FAULT: An incorrect step, process or data definition in a computer program which causes the program to perform in
an unintended or unanticipated manner. A fault is introduced into the software as the result of an error. It is an
anomaly in the software that may cause it to behave incorrectly, and not according to its specification.
The software industry still cannot agree on the definitions for all of the above. In essence, if you use a term to mean
one specific thing, it may not be understood to be that thing by your audience.

Definition of a defect:

A defect is a fault in the system, component or process due to which the expected outcome and the actual outcome
of a test activity do not match.
The other terms used to refer to a defect are bug, error, anomaly, etc.

The objective of software testing is to eliminate defects in software by

1. Preventing defects from entering the software


2. Identifying and reporting the defects existing in the software

Types of S/W testing:

Formal testing

Informal testing

Formal Testing: Testing performed with a plan, a documented set of test cases, etc. that outline the methodology and
test objectives. Test documentation can be developed from requirements, design, equivalence partitioning, domain
coverage, error guessing, etc. The level of formality and thoroughness of the test cases will depend upon the needs of
the project. Some projects can have rather informal 'formal test cases', while others will require a highly refined test
process. Some projects will require light testing of nominal paths while others will need rigorous testing of exceptional
cases.

Informal Testing: Ad hoc testing performed without a documented set of objectives or plans. Informal testing relies
on the intuition and skills of the individual performing the testing. Experienced engineers can be productive in this
mode by mentally performing test cases for the scenarios being exercised.

What is Static Testing?


Static Testing is a type of testing in which the code is not executed. It can be done manually or with a set of tools. This
type of testing checks the code, requirement documents and design documents and records review comments on the work
documents. Since the software is non-operational and inactive, static testing analyses it in a
non-runtime environment. With static testing, we try to find errors, code flaws and potentially malicious code
in the software application. It starts earlier in the development life cycle and hence is also called verification testing.
Static testing can be done on work documents like requirement specifications, design documents, source code, test
plans, test scripts, test cases and web page content.

The Static test techniques include:

 Inspection: Here the main purpose is to find defects. An inspection is conducted by a trained moderator. It is a
formal type of review where a checklist is prepared to review the work documents.
 Walkthrough: In this technique a meeting is led by the author to explain the product. Participants can ask
questions, and a scribe is assigned to make notes.
 Technical reviews/formal review: In this type of static testing a technical round of review is
conducted to check whether the code conforms to technical specifications and standards. Generally the test
plans, test strategy and test scripts are reviewed here.
 Informal reviews/static analysis: A static testing technique in which the document is reviewed informally and
informal comments are provided.

What is Dynamic Testing?


Dynamic testing is done when the code is in operation. It is performed in a runtime environment.
When the code being executed is given an input value, the result or output of the code is checked and compared with
the expected output. With this we can observe the functional behaviour of the software and monitor the system memory,
CPU response time and performance of the system. Dynamic testing is also known as validation testing, since it evaluates the
finished product. Dynamic testing is of two types: Functional Testing and Non-functional Testing.
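Dynamic testing reduces to this pattern: execute the code with an input, capture the actual output, and compare it with the expected output. A minimal sketch in Python, where `calculate_discount` is a hypothetical function under test:

```python
def calculate_discount(amount):
    """Hypothetical function under test: 10% off orders of 1000 or more."""
    if amount >= 1000:
        return amount - amount // 10
    return amount

# Dynamic test: the code is actually executed with a concrete input value
# and the actual output is compared with the expected output.
actual = calculate_discount(2000)
expected = 1800
assert actual == expected, f"expected {expected}, got {actual}"
```

If the actual and expected values differ, the assertion fails and a defect is logged; this execute-and-compare step is exactly what distinguishes dynamic testing from static review.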

Types of Dynamic Testing techniques are as follows:


1. Black Box Testing is a software testing method in which the internal structure/design/implementation of the
item being tested is not known to the tester.
2. White Box Testing is a software testing method in which the internal structure/design/implementation of the
item being tested is known to the tester.
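The contrast can be illustrated with a hypothetical `is_leap_year` function: a black-box test is derived purely from the specification, while a white-box test is chosen by looking at the code's internal boolean structure.

```python
def is_leap_year(year):
    """Leap years: divisible by 4, except century years not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Black-box tests: derived from the specification alone, with no knowledge
# of the implementation (the tester treats the function as a closed box).
assert is_leap_year(2000) is True    # spec: 2000 was a leap year
assert is_leap_year(1900) is False   # spec: 1900 was not

# White-box tests: derived from the internal structure, chosen to drive
# each sub-condition of the boolean expression both true and false.
assert is_leap_year(2024) is True    # year % 4 == 0 and year % 100 != 0
assert is_leap_year(2023) is False   # year % 4 != 0
```

The same function is exercised either way; what differs is the source of the test cases — the specification versus the code.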

Software Quality Assurance

SOFTWARE QUALITY ASSURANCE (SQA) is a set of activities for ensuring quality in software
engineering processes, which ultimately results in, or at least gives confidence in, the quality of software
products.

SQA Activities

SQA includes the following activities:

 Process definition
 Process training
 Process implementation
 Process audit

SQA Processes

SQA includes the following processes:

 Project Management
 Project Estimation
 Configuration Management
 Requirements Management
 Software Design
 Software Development [Refer to SDLC]
 Software Testing [Refer to STLC]
 Software Deployment
 Software Maintenance
 etc.

Software Quality Assurance encompasses the entire software development life cycle and the goal is to
ensure that the development and maintenance processes are continuously improved to produce
products that meet specifications. Note that the scope of Quality is NOT limited to just Software
Testing. For example, how well the requirements are stated and managed matters a lot!

Once the processes have been defined and implemented, Quality Assurance has the responsibility of
identifying weaknesses in the processes and correcting those weaknesses to continually improve the
processes.

SOFTWARE QUALITY CONTROL (SQC) is a set of activities for ensuring quality in software
products. Software Quality Control is limited to the Review/Testing phases of the Software
Development Life Cycle and the goal is to ensure that the products meet specifications/requirements.

SQC Activities

It includes the following activities:

 Reviews
o Requirement Review
o Design Review
o Code Review
o Deployment Plan Review
o Test Plan Review
o Test Cases Review
 Testing
o Unit Testing
o Integration Testing
o System Testing
o Acceptance Testing

The process of Software Quality Control (SQC) is governed by Software Quality Assurance (SQA).
While SQA is oriented towards prevention, SQC is oriented towards detection.
V-model:
The V-model is a type of SDLC model where the process executes in a sequential manner in a V-shape. It is also known
as the Verification and Validation model. It is based on the association of a testing phase with each corresponding
development stage. Each development stage is directly associated with a testing phase. The next phase starts only
after completion of the previous phase, i.e. for each development activity, there is a corresponding testing activity.

Verification: It involves a static analysis technique (review) done without executing code. It is the process of evaluating
each product development phase to find whether the specified requirements are met.
Validation: It involves dynamic analysis techniques (functional, non-functional), i.e. testing done by executing code.
Validation is the process of evaluating the software after the completion of the development phase to determine
whether it meets the customer expectations and requirements.
So the V-Model contains Verification phases on one side and Validation phases on the other side. The Verification and
Validation phases are joined by the coding phase in a V-shape. Thus it is called the V-Model.
Design Phase:
 Requirement Analysis: This phase contains detailed communication with the customer to understand their
requirements and expectations. This stage is known as Requirement Gathering.
 System Design: This phase contains the system design and the complete hardware and communication setup
for developing the product.
 Architectural Design: The system design is broken down further into modules taking up different functionalities.
The data transfer and communication between the internal modules and with the outside world (other systems)
is clearly understood.
 Module Design: In this phase the system is broken down into small modules. The detailed design of the modules is
specified; this is also known as Low-Level Design (LLD).
Testing Phases:
 Unit Testing: Unit Test Plans are developed during module design phase. These Unit Test Plans are executed
to eliminate bugs at code or unit level.
 Integration testing: After completion of unit testing Integration testing is performed. In integration testing, the
modules are integrated and the system is tested. Integration testing is performed on the Architecture design
phase. This test verifies the communication of modules among themselves.
 System Testing: System testing tests the complete application with its functionality, interdependency and
communication. It tests the functional and non-functional requirements of the developed application.
 User Acceptance Testing (UAT): UAT is performed in a user environment that resembles the production
environment. UAT verifies that the delivered system meets the user's requirements and is ready for use in the
real world.
Industrial Challenge: As the industry has evolved, the technologies have become more complex, increasingly faster,
and forever changing; however, there remains a set of basic principles and concepts that are as applicable today as
when IT was in its infancy.
 Accurately define and refine user requirements.
 Design and build an application according to the authorized user requirements.
 Validate that the application built adheres to the authorized business requirements.
Principles of V-Model:
 Large to Small: In the V-Model, testing is done from a hierarchical perspective. For example, the requirements
identified by the project team drive the High-Level Design and Detailed Design phases of the project. As each of these
phases is completed, the requirements they define become more and more refined and detailed.
 Data/Process Integrity: This principle states that the successful design of any project requires the
incorporation and cohesion of both data and processes. Process elements must be identified for each and every
requirement.
 Scalability: This principle states that the V-Model concept has the flexibility to accommodate any IT project
irrespective of its size, complexity or duration.
 Cross Referencing: Direct correlation between requirements and corresponding testing activity is known as
cross-referencing.
 Tangible Documentation: This principle states that every project needs to create documentation. This
documentation is required and applied by both the project development team and the support team.
Documentation is used to maintain the application once it is available in a production environment.
Why preferred?
 It is easy to manage due to the rigidity of the model. Each phase of the V-Model has specific deliverables and a
review process.
 Proactive defect tracking – that is, defects are found at an early stage.
When to use?
 Where requirements are clearly defined and fixed.
 The V-Model is used when ample technical resources are available with technical expertise.
Advantages:
 This is a highly disciplined model and Phases are completed one at a time.
 V-Model is used for small projects where project requirements are clear.
 Simple and easy to understand and use.
 This model focuses on verification and validation activities early in the life cycle thereby enhancing the
probability of building an error-free and good quality product.
 It enables project management to track progress accurately.
Disadvantages:
 High risk and uncertainty.
 It is not a good model for complex and object-oriented projects.
 It is not suitable for projects where requirements are not clear and contains high risk of changing.
 This model does not support iteration of phases.
 It does not easily handle concurrent events.

The V-Model - Quiz 1:


Q1 of 9

'Alpha Testing' is a term most appropriate for which of these testing activities?

Post-release testing by end users or their representatives at the developer's site.


The first testing done by end users or their representatives.
Pre-release testing by end user or their representatives using the developers' infrastructure.
Pre-release testing by end user or their representatives using their own organization's infrastructure.
Q2 of 9

Which of the following entities is the best source of Expected Outcomes for User Acceptance Tests?

Code
Software Design
Business Requirement Specifications
System Requirement specifications

Q3 of 9

What is the primary reason behind splitting testing into different stages?

Every test stage serves a distinct purpose.


Managing testing in stages is easier.
Diverse tests can be run in distinct environments.
Testing is better when there are many stages.
Q4 of 9

Which of these is a significant advantage of code inspection?

Enables code testing before the execution environment is prepared.


The individual who developed the code can perform it.
Inexperienced staff can perform it.
Identifies more defects as compared to any other testing technique.
Q5 of 9

Which of these is the major difference between walk-through and inspection?

Inspection: Headed by the author, Walk-through: Headed by a trained moderator


Inspection: Trained leader is present, Walk-through: There is no leader
Inspection: Authors are not present, Walk-through: Authors are present.
Inspection: Headed by a trained moderator, Walk-through: Headed by the author

Q6 of 9

Which level of testing has the objective of verifying if software components are operating correctly and finding
defects?

Integration testing
Acceptance testing
Unit testing
System Testing

Q7 of 9

Which of these is not a method of dynamic testing?

1. System testing

2. User Acceptance Testing

3. Inspection

4. Unit Testing

5. Walk through

6. Technical review

1,2,4
3,5,6
2,3,5,6
All of these.
Q8 of 9

System testing is a __________.

Grey box testing


White box testing
Black box testing
Both black box and white box
Q9 of 9

Which testing is concerned with behaviour of whole product as per specified requirements?

Acceptance testing
Smoke testing
System testing
Integration testing

What is Lifecycle?
A lifecycle, in simple terms, refers to the sequence of changes from one form to another. These changes can happen to any
tangible or intangible thing. Every entity has a lifecycle from its inception to its retirement/demise.

In a similar fashion, Software is also an entity. Just like developing software involves a sequence of steps, testing also has steps
which should be executed in a definite sequence.

This phenomenon of executing the testing activities in a systematic and planned way is called testing life cycle.

What Is Software Testing Life Cycle (STLC)


Software Testing Life Cycle refers to a testing process which has specific steps to be executed in a definite sequence to ensure
that the quality goals have been met. In the STLC process, each activity is carried out in a planned and systematic way. Each
phase has different goals and deliverables. Different organizations have different phases in STLC; however, the basis remains the
same.

Below are the phases of STLC:


1. Requirements phase
2. Planning Phase
3. Analysis phase
4. Design Phase
5. Implementation Phase
6. Execution Phase
7. Conclusion Phase
8. Closure Phase
#1. Requirement Phase:
During this phase of STLC, analyze and study the requirements. Have brainstorming sessions with other teams and try to find out
whether the requirements are testable or not. This phase helps to identify the scope of the testing. If any feature is not testable,
communicate it during this phase so that the mitigation strategy can be planned.

#2. Planning Phase:


In practical scenarios, test planning is the first step of the testing process. In this phase, we identify the activities and resources
which would help meet the testing objectives. During planning we also try to identify the metrics, as well as the method of gathering and
tracking those metrics.

On what basis is the planning done? Only requirements?

The answer is NO. Requirements do form one of the bases but there are 2 other very important factors which influence test
planning. These are:

– Test strategy of the organization.


– Risk analysis / Risk Management and mitigation.

#3. Analysis Phase:


This STLC phase defines “WHAT” to be tested. We basically identify the test conditions through the requirements document,
product risks, and other test bases. The test condition should be traceable back to the requirement.

There are various factors which affect the identification of test conditions:
– Levels and depth of testing
– The complexity of the product
– Product and project risks
– Software development life cycle involved.
– Test management
– Skills and knowledge of the team.
– Availability of the stakeholders.

We should try to write down the test conditions in a detailed way. For example, for an e-commerce web application, you can
have a test condition as “User should be able to make a payment”. Or you can detail it out by saying “User should be able to
make payment through NEFT, debit card, and credit card”.

The most important advantage of writing detailed test conditions is that doing so increases the test coverage: since the test cases will
be written on the basis of the test conditions, these details will trigger more detailed test cases, which will eventually
increase the coverage.

Also, identify the exit criteria of the testing, i.e. determine the conditions under which you will stop the testing.

#4. Design Phase:


This phase defines “HOW” to test. This phase involves the following tasks:

– Detail the test condition. Break down the test conditions into multiple sub-conditions to increase coverage.
– Identify and get the test data
– Identify and set up the test environment.
– Create the requirement traceability matrix
– Create test coverage metrics.

#5. Implementation Phase:


The major task in this STLC phase is the creation of detailed test cases. Prioritize the test cases and also identify which test cases
will become part of the regression suite. Before finalizing a test case, it is important to carry out a review to ensure the
correctness of the test cases. Also, don't forget to take the sign-off of the test cases before actual execution starts.

If your project involves automation, identify the candidate test cases for automation and proceed for scripting the test cases.
Don’t forget to review them!

#6. Execution Phase:


As the name suggests, this is the Software Testing Life Cycle phase where the actual execution takes place. But before you start
your execution, make sure that your entry criterion is met. Execute the test cases and log defects in case of any discrepancy.
Simultaneously fill in your traceability matrix to track your progress.

#7. Conclusion Phase:


This STLC phase concentrates on the exit criteria and reporting. Depending on your project and your stakeholders' choice, you can
decide on reporting: whether you want to send out a daily report or a weekly report, etc.

There are different types of reports ( DSR – Daily status report, WSR – Weekly status reports) which you can send, but the
important point is, the content of the report changes and depends upon whom you are sending your reports.

If the project managers come from a testing background, then they are more interested in the technical aspects of the project, so include
the technical details in your report (number of test cases passed, failed, defects raised, severity 1 defects, etc.).

But if you are reporting to upper stakeholders, they might not be interested in the technical things so report them about the risks
that have been mitigated through the testing.

#8. Closure Phase:

Tasks for the closure activities include the following:

– Check for the completion of the test: whether all the test cases have been executed or deliberately mitigated. Check that there are no
severity 1 defects left open.
– Conduct a lessons-learned meeting and create a lessons-learned document. (Include what went well, where the scope for
improvement lies and what can be improved.)

Characteristics of Good Requirements - Quiz 1:

Q1 of 3

Which of the following is a non-functional requirement for an e-commerce website?

A page for selecting items to buy.


A button in the Order page to return faulty items
Number of items that can be listed in the catalogue
Number of users who can use the website simultaneously.
Q2 of 3

Which will be functional requirement for on-line railway ticket reservation system?

Response time for listing the available tickets.


Number of users who can book tickets simultaneously
A normal user being able to book a ticket from Mumbai to Goa .
An elderly person with weak eyesight being able to book a ticket from Mumbai to Goa
Q3 of 3

Requirement Statement: "The number auto-generated in the order number field must be 20 characters long. It can
contain both alphabetic and numeric characters. It must be random, unique and never repeated for any number of
generations."

Which of the following characteristic does the above requirement most likely cannot satisfy?

Complete
Testable
Consistent
Unambiguous

Statement Coverage:
In this technique, test cases are executed in such a way that every statement of the code is executed at least once.
Branch/Decision Coverage:
Test coverage criteria requires enough test cases such that each condition in a decision takes on all possible outcomes
at least once, and each point of entry to a program or subroutine is invoked at least once. That is, every branch
(decision) taken each way, true and false. It helps in validating all the branches in the code making sure that
no branch leads to abnormal behavior of the application.

Statement Coverage and Decision Coverage

Problem Statement:
Read Q
If Q > 50
X = Q * 0.08
Else
If Q < 0
Print "Error"
Else
X=Q*2
End If
End If
Print X
Create the control flow diagram for the above pseudo code.
With the help of the flow diagram determine the number of tests required for statement coverage.
With the help of the flow diagram determine the number of tests required for decision coverage.

Solution
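A possible solution, sketched in Python (the function name `compute` is ours, and the pseudocode's final `Print X` is transcribed as printing `None` on the error path): three tests are required, and the same three inputs achieve both 100% statement coverage and 100% decision coverage.

```python
def compute(q):
    """Python transcription of the pseudocode above; returns X (None on error)."""
    x = None
    if q > 50:                  # decision 1
        x = q * 0.08
    else:
        if q < 0:               # decision 2
            print("Error")
        else:
            x = q * 2
    print(x)
    return x

# Three inputs cover every statement and every decision outcome:
#   q = 60 -> decision 1 true                      (X = Q * 0.08)
#   q = -5 -> decision 1 false, decision 2 true    (Print "Error")
#   q = 10 -> decision 1 false, decision 2 false   (X = Q * 2)
assert abs(compute(60) - 4.8) < 1e-9
assert compute(-5) is None
assert compute(10) == 20
```

Because decision 2 is only reached when decision 1 is false, no two tests can cover both outcomes of both decisions, which is why the minimum is three for each coverage type here.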

Statement Coverage and Decision Coverage in a While Loop Exercise:


Read NumberOfExams
While NumberOfExams != 0
Read Mark
TotalMarks = TotalMarks + Mark
End While
Print TotalMarks

For the above pseudo code, find the number of tests necessary to attain

1. 100% statement coverage?


2. 100% decision coverage?

Solution:
The number of tests required to achieve

1. 100% statement coverage

Answer: 1

2. 100% decision coverage

Answer: 1
While Loop with If Condition inside Exercise
In the above example, how many tests are necessary to attain

1. 100% statement coverage?


2. 100% decision coverage?

Solution:
The number of tests required to achieve

1. 100% statement coverage

Answer: 1

2. 100% decision coverage

Answer: 1

Note: For while loops, you can achieve 100% statement coverage and 100% decision coverage using
only 1 value.
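The note above can be checked with a runnable sketch (the countdown of NumberOfExams, left implicit in the pseudocode, is made explicit here, and the marks to `Read` are supplied as a list):

```python
def total_marks(marks):
    """Python version of the while-loop pseudocode above."""
    number_of_exams = len(marks)
    total = 0
    i = 0
    while number_of_exams != 0:   # loop decision
        total += marks[i]         # loop-body statement (Read Mark; add it)
        i += 1
        number_of_exams -= 1      # implicit countdown made explicit
    return total

# A single test in which the loop body runs at least once drives the loop
# condition both true (on entry) and false (on exit), so one test case
# achieves 100% statement coverage AND 100% decision coverage.
assert total_marks([40, 35]) == 75
```

This is the key difference from an if/else: a loop that executes and then terminates exercises both outcomes of its condition within one test.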

Path coverage testing:

What is Path Testing?

Path testing is a structural testing method that involves using the source code of a program in order to find every
possible executable path. It helps to determine all faults lying within a piece of code. This method is designed to
execute all or selected paths through a computer program.

Any software program includes multiple entry and exit points. Testing each of these points is challenging as well as
time-consuming. In order to reduce redundant tests and to achieve maximum test coverage, basis path testing is
used.
Condition Coverage path Technique:

Condition coverage or expression coverage reveals how the variables or subexpressions in a conditional
statement are evaluated. In this coverage, only expressions with logical operands are considered.

For example, an expression may combine Boolean operations like AND, OR and XOR, which together determine the total number of possibilities.

Condition coverage offers better sensitivity to the control flow than decision coverage. However, condition coverage does not
guarantee full decision coverage.

Complex software, like a life insurance underwriting system, can either accept or reject a policy enrollment request
from a customer based on a combination of a number of inputs – income, age, already existing policies, number of
claims in previous policies, existing health conditions, etc. The customer is evaluated not only on each
individual data point collected, but also on conditional combinations of such inputs.

A decision statement of such a computer program can involve complex boolean expressions which have
 Multiple input values


 Multiple relational evaluations between the inputs

Such expressions are called decision predicates.

Here's an example of a decision predicate.

b_out = ((x>y+z) AND (y<-3)) OR ((z²+x²<4) AND (z≤y))


Even though there are multiple inputs and multiple evaluations, the final result can branch the program flow only in
two ways (b_out = true, b_out = false). Here, testing all possible decisions (2) is not enough. We need to test all
possible ways in which the inputs arrive at either of the decisions.

For scenarios like these, we use condition coverage techniques. In this technique, the goal is to cover all possible
conditional evaluations of the boolean expressions.

Let's see how we can derive tests from the decision predicate given in the example above to achieve 100%
condition coverage.

"Condition testing coverage" is attained by executing test cases until every condition in the decision has produced true at
least once and false at least once.

Each of the condition outputs (b0, b1, b2, b3) gives true at least once and false at least once. In a few test cases
we are not concerned about a given condition's value, because it does not influence the outcome being exercised;
such values have been marked as "NA".

Incidentally, the table also demonstrates that 100% decision coverage has been achieved for this particular
decision, since b_out gives true at least once and false at least once; hence both branches of the code will be
covered.
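As a sketch, the decision predicate can be coded and exercised so that every condition yields true and false at least once. The input values below are chosen here for illustration only; the course's own table may use different ones.

```python
def decide(x, y, z):
    """Evaluate the decision predicate and return each condition plus the outcome."""
    b0 = x > y + z          # condition 1
    b1 = y < -3             # condition 2
    b2 = z**2 + x**2 < 4    # condition 3
    b3 = z <= y             # condition 4
    b_out = (b0 and b1) or (b2 and b3)
    return b0, b1, b2, b3, b_out

# Illustrative inputs (assumed values, not from the course table):
# (10, -5,  0) -> b0=T, b1=T, b2=F, b3=F, b_out=True
# ( 0,  1,  1) -> b0=F, b1=F, b2=T, b3=T, b_out=True
# ( 0,  5, 10) -> b0=F, b1=F, b2=F, b3=F, b_out=False
# Across the three runs every condition is true once and false once
# (100% condition coverage), and b_out takes both values (decision coverage).
results = [decide(10, -5, 0), decide(0, 1, 1), decide(0, 5, 10)]
```

Note that the first two runs alone would already achieve 100% condition coverage, but b_out would be true in both, so the third run is needed for decision coverage.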

Condition Coverage Technique - Quiz 1

Q1 of 3

What is the minimum number of test cases for the pseudocode below?

Switch on the PC

Start “outlook”

IF "outlook" has appeared THEN

Send email

Close "outlook"

Statement coverage: 1, Decision coverage: 1


Statement coverage: 1, Decision coverage: 2
Statement coverage: 1, Decision coverage: 3
Statement coverage: 2, Decision coverage: 2
Statement coverage: 2, Decision coverage: 3

Q2 of 3

What is the minimum number of test cases necessary for full decision and path coverage?

Read A

Read B

IF A+B > 100 THEN

Print “Large”

ENDIF
If A > 50 THEN

Print “A Large”

ENDIF

Decision coverage: 4, Path coverage: 4


Decision coverage: 2, Path coverage: 4
Decision coverage: 1, Path coverage: 1
Decision coverage: 2, Path coverage: 3
Decision coverage: 2, Path coverage: 2

Q3 of 3

Examine this simplified procedure:

Ask: “What type of ticket do you require, single or return?”

IF customer needs "return"

Ask: “What rate, Standard or Cheap-day?”

IF customer responds "Cheap-day"

Say: “That will be £11:20”

ELSE

Say: “That will be £19:50”

END IF

ELSE

Say: “That will be £9:75”

END IF
Find the minimum number of tests required to guarantee that all the questions have been asked, all combinations
have been covered and all answers given.

3
4
5
6

Design techniques:

Specification based test design optimization techniques are used by the testing team for deriving black box test
cases from requirement specifications. Test scenarios are derived from the requirement specification, and test cases
are then developed from those scenarios.

Each requirement specification is analyzed for functions/features of the software component in terms of all possible
inputs and their expected outcomes. The results of the analysis are utilized to derive the minimum number of test
cases required to validate concerned requirement specification completely.

Given below are the major types of specification based test design techniques:

 Equivalence partitioning
 Boundary value analysis (BVA)
 Decision Table
 State transition testing
 Orthogonal Array
 Use case testing

Equivalence partitioning:

 The whole range of input conditions, for a particular requirement specification of a software component, are
grouped into sets called equivalence classes.
 Equivalence classes are determined in such a way that it is safe to assume that all input conditions in a
particular class, when processed by the software, produce similar (equivalent) output response.
 Instead of conducting an exhaustive test using all input conditions, it would suffice to test any one random
input condition from a particular equivalence class thereby saving time and effort.

Example 1

To test a software component whose requirement specification says that acceptable input values must be >=1000
AND <=2000, the equivalence classes of input values would be
 Less than 1000 (Negative test. Expected outcome is that the input should be rejected)
 Between 1000 and 2000 (Positive test. Expected outcome is that the input should be accepted)
 Greater than 2000 (Negative test. Expected outcome is that the input should be rejected)

Thus this requirement can be tested using only 3 test cases by picking 1 input value from each equivalence class for
each test case.
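The three equivalence classes can be exercised with one representative value each. The `accept` function below is a hypothetical implementation of the requirement, written only to illustrate the technique:

```python
def accept(value):
    # Hypothetical implementation of the requirement:
    # acceptable input values must be >= 1000 AND <= 2000
    return 1000 <= value <= 2000

# One arbitrary representative per equivalence class:
cases = [
    (500,  False),   # class: less than 1000    -> reject (negative test)
    (1500, True),    # class: 1000 to 2000      -> accept (positive test)
    (2500, False),   # class: greater than 2000 -> reject (negative test)
]
for value, expected in cases:
    assert accept(value) == expected
```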

Example 2

To test a software component whose requirement specification says that the field must accept only alphabets, the
equivalence classes of input values would be

 Alphabets
 Numbers
 Special Characters

Again, this requirement can be tested using only 3 test cases.

Limitation of equivalence partitioning

 When testing a requirement specification for a software component which says the valid input values must
be >=1000 AND <=2000, the exhaustive list of input values to be tested forms a continuous range (from below
1000 to above 2000).
 Since program code is written in terms of the boundaries separating these equivalence classes, the
probability of a defect being present is much higher at the boundaries than anywhere else in that range. For
example, using '<' and '>' operators instead of '<=' and '>=' (or the other way round).
 Choosing random input values to test from the equivalence classes carries a risk of such defects not being
detected.

Boundary Value Analysis (BVA)

This is a technique used for developing test cases to specifically test the boundaries separating a continuous range of
inputs. In this technique, the boundaries are identified and the tests are derived for the following three scenarios in
each boundary.

Lower boundary cases: Using values which are just below the boundaries
Upper boundary cases: Using values which are just above the boundaries
On boundary cases: Using values which are lying on the boundaries

Example

To test a software component whose requirement specification says that acceptable input values must be >=1000
AND <=2000, the boundaries on the continuous range of input values lie on 1000 and 2000 respectively.
The test cases to test this requirement specification using BVA technique would be

Lower boundary cases

999 (Expected outcome : Should be rejected)

1999 (Expected outcome : Should be accepted)

Upper boundary cases

1001 (Expected outcome : Should be accepted)

2001 (Expected outcome : Should be rejected)

On the boundary cases

1000 (Expected outcome : Should be accepted)

2000 (Expected outcome : Should be accepted)

Note

 When using the BVA technique, the cases for equivalence partitions are automatically covered. It's just that the
values chosen are not random.
 Equivalence partitioning and boundary value analysis are closely related and can be utilized together to give
more confidence in testing as this will cover all the error prone boundaries and the ranges.
 If there is a time constraint, only BVA might be used as it also covers all the partitions.
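The six boundary values from the example can be checked directly. Again, `accept` is a hypothetical implementation of the >=1000 AND <=2000 rule, used only to demonstrate the technique:

```python
def accept(value):
    # Hypothetical implementation of the requirement (>= 1000 AND <= 2000)
    return 1000 <= value <= 2000

# BVA test cases around the boundaries 1000 and 2000:
bva_cases = {
    999:  False,  # just below the lower boundary -> rejected
    1999: True,   # just below the upper boundary -> accepted
    1001: True,   # just above the lower boundary -> accepted
    2001: False,  # just above the upper boundary -> rejected
    1000: True,   # on the lower boundary -> accepted
    2000: True,   # on the upper boundary -> accepted
}
for value, expected in bva_cases.items():
    assert accept(value) == expected
```

A defect such as writing `1000 < value < 2000` instead of `<=` would be caught by the on-boundary cases (1000 and 2000), which is exactly the class of defect BVA targets.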

Boundary Value Analysis Technique - Quiz 1

Q1 of 2

A system is intended to calculate the tax amount to be paid:

Rs. 4000 of an employee's salary is tax free. The next Rs. 1500 is taxed at 10%.

The next Rs. 8000 is taxed at 22%.

Any higher amount is taxed at 40%.

Which of the following belong to the same equivalence class?

Rs. 4800; Rs. 14000; Rs. 28000


Rs. 5200; Rs. 5500; Rs. 28000
Rs. 28001; Rs. 32000; Rs. 35000
Rs. 5800; Rs. 28000; Rs. 32000
Q2 of 2

A bank's saving account earns an additional 2% interest when the balance of the account is greater than $10,000.

Using 3-BVA, what account balance inputs should be tested?

10,000, 10,001, 10,002


9,999, 10,000, 10,001
9,998, 9,999, 10,000
9,999, 10,000, 10,002

Using BVA and EP Techniques – Exercise

Problem Statement:

Let's derive the equivalence classes for an application that lets you apply for your driver's license

 In Victoria, Australia you must be at least 18 yrs. old to apply for your probationary license
 Let's say theoretically you must stop driving when you are 90 yrs. old

Using EP and BVA, derive test cases with specific test data.

Solution

Equivalence partitions for the applicant's age: less than 18 (invalid - application rejected), 18 to 90 (valid -
application accepted), greater than 90 (invalid - application rejected).

Applying BVA on the boundaries 18 and 90 gives the test data: 17 (reject), 18 (accept), 19 (accept), 89 (accept),
90 (accept), 91 (reject).
Decision Tables Technique:

Equivalence partitioning and boundary value analysis techniques are frequently applied to individual inputs or
circumstances. But if diverse combinations of inputs lead to different actions being taken, this may be more difficult
to show using those techniques, which tend to concentrate on the user interface.

How can decision tables be used for test design?

The combinations of conditions and inputs are recorded as a table.

 The top part of the table contains combinations on input conditions.


 The bottom part contains the resultant actions as true or false
 Each column can be one test case.
 In a decision table, the target would be at least one test for each column, to trigger all combinations of
conditions. However, testing every column may not be possible.
 The decision table helps in choosing the columns (rules) which should be converted to test cases.
 First, we should eliminate rules that are not feasible.
 Then, if the outcomes of multiple rules are the same, we can choose one of those rules for testing.
 Be careful with assumptions made when rationalizing. You might accidentally remove valid or important
combinations

Example - Shipping an online order

Online order system rules

 There must be items in the cart


 Shipping address must not be a P.O. Box
 Orders can be placed from accounts with a credit balance

Conditions

 Cart has something in it


 Shipping address is a PO box
 Account balance is in credit

Actions

 Item is shipped
 Account is debited

In the following page, let's explore how the decision table is drawn and test cases are derived.
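The three shipping rules can be sketched as code (the function name and signature are assumptions made here for illustration); each of the 2^3 condition combinations corresponds to one column of the decision table:

```python
from itertools import product

def process_order(cart_has_items, po_box_address, account_in_credit):
    # Hypothetical encoding of the three rules above:
    # ship only if the cart has items, the address is not a P.O. Box,
    # and the account balance is in credit; debit only when shipping.
    ship = cart_has_items and not po_box_address and account_in_credit
    return {"item_shipped": ship, "account_debited": ship}

# Enumerate all 8 columns (rules) of the decision table:
table = {combo: process_order(*combo) for combo in product([True, False], repeat=3)}
```

Since seven of the eight columns lead to the same outcome (no shipment, no debit), rationalizing the table would let a tester pick the single shipping column plus a representative non-shipping column, rather than testing all eight.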

Benefits of Decision Tables:

 Decision table is a good method of dealing with combinations of inputs.


 Decision tables support organized choosing of effective test cases and may have a positive consequence of
discovering problems and ambiguities in the specification.
 It is a method which works fine in combination with equivalence partitioning.
 The combination of conditions discovered can be combinations of equivalence partitions.
 Decision table also helps in capturing and validating the completeness of requirement.
 It is valuable for developers and testers.

State Transition Diagram Technique

Why go for state transition diagrams?

Sometimes, test structures are not as simple as input-output pairs. There might be a number of inputs and outputs
staggered between the initial input and final output.

The output of such systems is dependent not only on the current input, but also on the history of previous inputs.

A state transition diagram allows a tester to visualize the software in terms of its states, transitions between states,
inputs or events that trigger state changes (transitions) and resulting actions.

In the next page, let us see an example of such system and how to design test cases for it, using state transition
diagrams.

State Transition in a Vending Machine:

 A system might display a different response based on present conditions or earlier history.
 A state transition diagram has to be created by an experienced tester or a business analyst from the
requirements specification.
 It allows a tester to understand the software in terms of its states, transitions between states, inputs or
events that trigger state changes (transitions) and resulting outcomes.
 In the next page, let us see how test cases can be derived once we have the state transition diagram.

While designing tests from a state transition diagram, the following should be your objectives.

1. To cover every state – cover the circles


2. To cover every transition – cover every line
Steps in designing the test cases:

Test Case 1. Test the typical transaction flow

Insert the money and make a correct selection.

Expected result: the selected item is delivered

Now ask yourself (after deriving every test case)

Have we covered all the lines and states yet?

 State – Yes
 Lines – No. What additional test cases are needed to achieve that?

Test Case 2.

Money is inserted and then canceled.

Expected result: Money is returned


Test Case 3.

Money is inserted – Invalid selection is made – Valid selection is made but the cost was more than the money
provided earlier.

Expected result: Ask for more money and wait for money

Have we covered all the lines and states Yet?

 State – Yes
 Lines – Yes

Based on the example in the previous page, different paths that can be tested are shown in the below diagram.

Q1. Which sequence would be testing a typical transaction flow?

Q2. What is the least number of tests needed for testing each state?

Answer: 1. Cross every circle

Q3. How many tests are needed to test every transition?


Answer: 3. Cover every line.

This is also known as Chow’s 0-switch Coverage

A state transition can also be shown as a table. This will help in finding invalid transitions.
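A minimal sketch of such a table in code follows. The state and event names are assumptions based on the vending machine description above, not the course's exact diagram; looking up a missing key exposes an invalid transition:

```python
# Valid transitions of the simplified vending machine:
TRANSITIONS = {
    ("idle",       "insert_money"):      "has_money",
    ("has_money",  "cancel"):            "idle",        # money is returned
    ("has_money",  "invalid_selection"): "has_money",   # wait for a new choice
    ("has_money",  "valid_selection"):   "dispensing",
    ("dispensing", "item_delivered"):    "idle",
}

def run(events, state="idle"):
    for event in events:
        key = (state, event)
        if key not in TRANSITIONS:          # invalid transition detected
            raise ValueError(f"invalid transition: {key}")
        state = TRANSITIONS[key]
    return state

# Test case 1 (typical flow): insert money, correct selection, item delivered.
final = run(["insert_money", "valid_selection", "item_delivered"])
```

Covering every key of `TRANSITIONS` corresponds to covering every line of the diagram (Chow's 0-switch coverage), while any (state, event) pair absent from the table is an invalid transition the tester may also want to probe.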

State Transition Diagram Technique - Quiz 1

Q1 of 2

Which of these could be an example for decision-table testing of a monetary application applied at the level of
system-testing?

Table having rules for combinations of inputs of two fields on a screen.


Table having rules for interfaces between components.
Table having rules for mortgage applications.
Table having rules for chess
Q2 of 2

For the state diagram given, which test case gives the minimum series of valid transitions covering each state?

SS-S1-S2-S4-S1-S3-ES
SS-S1-S2-S3-S4-ES
SS-S1-S2-S4-S1-S3-S4-S1-S3-ES
SS-S1-S4-S2-S1-S3-ES

Orthogonal Array Technique

Why Orthogonal Array Testing Strategy (OATS)

 In the previous techniques we saw examples with only one input parameter (variable). When there are
multiple input parameters available for a single functionality (for example, a login functionality which needs
domain name, user id and password as inputs), the exhaustive number of input conditions to test, based on
all possible combinations, increases dramatically.
 For systems with combinatorial inputs, equivalence partitioning and boundary value analysis cannot optimize
tests efficiently and effectively.
 The probability of presence of defects, in case of multiple input parameters, is higher at their interaction
points within the software code. To increase test efficiency, the test cases should be based on
these interactions rather than on the number of permutations and combinations of input values.

What is Orthogonal Array Testing Strategy

The input parameters of a software component are considered to exhibit orthogonality if each of them

 Has more than one possible input value.


 Can accept an input independent of other parameters.

At the core of OATS is the statistical assumption that, in an orthogonal system, if all possible input value
combinations of all possible pairs of input parameters are tested, then it effectively tests every possible interaction
between all the input parameters.

OATS is a statistically proven method to derive the minimum number of test cases (input combinations) needed to
achieve maximum functional test coverage for testing orthogonal systems.

The input combinations which must be tested are derived using a two-dimensional table of numbers
called an orthogonal array. A collection of these is normally provided as a reference to the testers.

How to use orthogonal arrays

Here is a sample orthogonal array which can be used for designing tests for a software component with 4 possible
input parameters and each parameter capable of accepting 3 possible values.
Problem Statement:

Objective

To practice developing test cases by means of orthogonal array technique

Problem Statement

Derive the minimum number of test cases needed to test the below requirement for login module using Orthogonal
Array Test Strategy.

1. The login page must have 3 text input fields - Domain, User Name and Password.
2. There should be a 'Login' button.
3. The user should be allowed to login only when all the fields are provided with valid values and the user
clicks on the submit button.
4. If any of the field is empty or having an invalid value, an error message "Incorrect login details" should be
displayed on the top left corner of the login page.

Solution

Step 1: Identify the factors. Since there is only one possible input for the 'Submit' button - clicking it - for the system to work, it
does not qualify as an orthogonal input parameter. Hence there are 3 orthogonal input fields - Domain, User Name and Password.
(Factors = 3)

Step 2: Identify the levels for each factor. Applying equivalence class partitioning to each of the orthogonal input
fields, there are three possible classes of inputs for each field. These can be grouped together and assigned level
numbers, starting with the success scenario.

 Valid = 1

 Invalid = 2

 Empty = 3

Step 3: Identify the suitable orthogonal array. Start the search for a suitable orthogonal array based on the number of levels. In the
set of resulting orthogonal arrays, look for a matching orthogonal array based on the number of factors. If no array can be found
with a matching number of factors, then select the nearest orthogonal array with a higher number of factors and remove the extra
factor columns (based on our factor count) starting from the right end. For our scenario, we can choose the orthogonal array
L9(3^4) and convert it to L9(3^3) by removing the last factor column as shown below.
Step 4: Derive the test cases. Substitute the factors with input field names, level numbers with corresponding input values, and
mark the expected result against each row/run.
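The derivation can be sketched in code using the standard L9(3^3) array. The array rows below are the textbook L9 layout, and the final loop verifies the pairwise-coverage (orthogonality) property the technique relies on:

```python
from itertools import combinations, product

# Standard L9(3^3) orthogonal array: 9 runs x 3 factors, levels 1..3
L9 = [
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 2), (2, 2, 3), (2, 3, 1),
    (3, 1, 3), (3, 2, 1), (3, 3, 2),
]

FACTORS = ("Domain", "User Name", "Password")
LEVELS = {1: "valid value", 2: "invalid value", 3: "empty"}   # from Step 2

# Step 4: one test case per run; only all-valid inputs should log in.
test_cases = [
    {
        "inputs": dict(zip(FACTORS, (LEVELS[lvl] for lvl in run))),
        "expected": "login" if all(lvl == 1 for lvl in run)
                    else 'error "Incorrect login details"',
    }
    for run in L9
]

# Orthogonality check: every pair of factors covers all 9 level combinations.
for c1, c2 in combinations(range(3), 2):
    pairs = {(run[c1], run[c2]) for run in L9}
    assert pairs == set(product((1, 2, 3), repeat=2))
```

Exhaustive testing would need 3^3 = 27 combinations; the L9 array reduces this to 9 runs while still exercising every pairwise interaction between the three fields.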

Use Cases

Use cases describe all possible 'real business process' flows through a system. It is usually prepared by the
requirement author/owner. It includes

 Mainstream (most likely) and alternate (exceptional) flows in terms of interactions between actors (users)
and the system to accomplish a business process.
 Description of the actor for each use case.
 Pre-conditions and post-conditions for each use case

Use cases are used extensively for

 Designing user acceptance tests


 Finding defects in integration between components (integration tests)

Below is the template for documenting use cases.


Example:
Use Cases - Quiz 1

Q1 of 1

For which of these is use case testing useful?

a. Designing acceptance tests with customers or users.

b. Ensuring that mainstream business processes are tested.

c. Discovering defects in inter component interaction.

d. Finding minimum and maximum values for each input field.

e. Finding the percentage of statements covered by a set of tests.

a, b and c
b, d and e
a, b and d
c, d and e

Exploratory Testing

Exploratory testing is a method utilized in situations where application functionalities must be learnt by trial and
error and tested in parallel. Normally, exploratory testing is done when

 There is no prior knowledge of the application.


 The requirements are not documented elaborately.
 Insufficient time available to follow a proper STLC.

Degrees of effectiveness can vary depending on the tester's skill and instinct.

It is advantageous to conduct usability testing (whether the system is simple enough to intuitively understand and
navigate through it) by applying exploratory technique using inexperienced testers (like randomly chosen end-users).

Error Guessing

Unlike exploratory testing, error guessing is a technique used by highly experienced testers.

Error guessing is the process of deriving test cases by anticipating and guessing the probability of presence of
defects based on

 Experience and knowledge of the tester in that specific software or similar softwares.
 History of past defects and failures in the software.
 Complexity of specific modules in the software.

This approach is also called Fault Attack

Examples of guessing errors

 Guessing 'Division by zero' scenarios if a mathematical formula is used in the code.


 Checking whether blank values are accepted in text fields
 Checking whether errors are handled if required files are missing in a database.
 Checking for leap dates (Feb 29, 30) in date fields.
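Two of these guesses can be sketched as quick probes. The `average` function is a made-up example of code under test, used only to show how a guessed error surfaces:

```python
from datetime import date

def average(values):
    # Made-up code under test: it fails the "empty input" error guess.
    return sum(values) / len(values)

# Guess 1: division by zero on empty input.
try:
    average([])
    empty_input_handled = True
except ZeroDivisionError:
    empty_input_handled = False   # defect candidate: empty input crashes

# Guess 2: leap-date handling - Feb 29 in a non-leap year must be rejected.
try:
    date(2021, 2, 29)
    leap_date_rejected = False
except ValueError:
    leap_date_rejected = True     # Python's date type rejects the invalid date
```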

Exploratory Testing and Error Guessing - Quiz 1


Q1 of 2

In which of these scenarios is error guessing most suitable?

As the first method to derive test cases


After more formal methods have been applied
By inexperienced testers
Once the system has become live
By end users only
Q2 of 2

Which method of defining test procedures will be most efficient and effective for a project that is under severe time
pressure, considering the presence of an extremely experienced tester with a good business background?

High-level outline of test conditions and general steps to be taken


Each step in the test explained in detail
High-level outline of test conditions with steps to be taken, deliberated in detail with another experienced tester
Comprehensive documentation of all test cases and careful records of every step taken in the testing

Test Data Preparation

What is test data?

 Any set of data which is developed specifically to test the adequacy of the system
 The data can be actual data that has been taken from previous operations or artificial data created for the
purpose
 The test data can be prepared manually by the tester or dynamically generated with the help of automation
tools
 Test data can be recorded for reuse or one time use depending on the application/system under test

How is test data used?

 Some test data are used in the confirmatory way, normally to validate that a given set of input to a given
function gives the expected results

 Others are used so as to challenge the capability of the program to respond to uncommon, extreme,
exceptional or unpredicted input to the application/system
Test Case Document

Test case design is performed much before the actual execution, depending on the type of testing (as per the V-
Model). Sometimes the gap between test case preparation and test execution can even be several months.

Once the test cases are prepared using the test case design optimization techniques described earlier, they are
supposed to be documented in a formal and elaborate document. This helps in

 Not missing out any tests during execution


 Reducing knowledge and communication related rework if tests are going to be designed by one tester and
executed by another.
 Providing a base document for reference, reviews, planning and delegation of testing related SQC activities.

Apart from documenting the steps for executing the tests and the expected results, certain additional information is
also documented for effectively planning and executing the tests. In the next page, let's look at all the information
that can be documented.

Given below is the information that is usually captured in the Test Case Document. The number of fields and the
format of documenting may vary from one project to another. However, it is still important to understand the
significance of each and every field.

Requirement ID : This is an alpha-numeric code used to uniquely identify the corresponding requirement(s) for which
the test case is being written. It may be directly taken from the requirement document, if available, or formatted as
per the conventions followed in the team. This helps, during reviews and checks, to ensure that every requirement has
been addressed adequately.

Requirement Description : A brief description of the requirement so that anybody who is going to review or execute
the test case need not refer to the requirement document frequently to understand it.

Test Case ID : This is an alpha-numeric code used to uniquely identify the test case. This helps, along with the
'Requirement ID', to simplify the representation of relationship between the test cases and the requirements. Again, it
helps during reviews, checks and test execution.

Test Case Description : One requirement might have to be tested with multiple test cases. It is imperative to briefly
describe the objective of every test case. This helps the test executor in understanding the necessity of every
test case.

Precondition : This field is used to describe the starting state in which the application should be in, before the test
case can be executed, along with the steps/actions required to achieve that state.

Test Data : This field states any specific input data that needs to be provided to the application for executing the test
case successfully.

Type of Check : This field is used if there are any internal guidelines in the project to categorize the test cases for
management, delegation, tracking, application classification or any such purposes. E.g. types of tests, types of UI
checks, types of performance checks, etc.

Test Step Number : Each test case is recorded as a sequence of actions to be performed by the tester. Each such
action is called a test step. The test step number is used to sequence the test steps of a test case. For an
experienced tester/lead/reviewer it is possible to deduce the size or complexity (or both) of a test case based on the
number of test steps. This in turn helps in effort estimation, planning and delegation, whenever required.

Test Step Description : This field is used to describe the action that needs to be performed to execute the
corresponding test step.
Expected Result : This field is used to describe the expected outcome of the corresponding test step. If, during
execution, the actual outcome does not match the expected result stated here then a defect is logged accordingly.

Actual Result : This field is used to record the actual outcome of a test step during execution. During test preparation
it is left empty. The information captured might be recorded as a simple PASS or FAIL status or an elaborated
description of the system behavior, especially if the step is failing.

Tester Name : To capture the name of the author/tester of a test case. If the test author and actual tester are going to
be two different persons, then two fields can be used. This is used for delegation, tracking and audit purposes.

Creation Date : Date of preparation of the test case in question for tracking and audit purposes.

Requirement Traceability Matrix

Requirements Traceability Matrix (RTM) is a document, usually in form of a table that is prepared after designing
the test cases, to correlate the relationship between test cases and requirement specifications.

It simplifies and saves effort involved in a lot of QC tasks, like reviews and audits, during the course of the software
testing life cycle.

Significance of RTM in test planning and design phase

 Helps to audit and guarantee that all requirement specifications are covered by the test cases. If any
requirement is uncovered, then design additional test cases.
 Helps to perform test focus analysis to ascertain which requirements are tested exhaustively and which
are tested feebly, based on the number of test cases, and then make necessary modifications
according to the significance of the requirement.
 In case there is a change/correction in one of the requirement specifications, the RTM helps in ascertaining
which test cases have to be reviewed and, if required, modified.
 It helps in risk management by helping to ascertain
o Which requirements are at a risk of being untested if the required resources for a particular test are
unavailable.
o Which requirements are very critical for the testing process based on number of test cases that
they impact.

Significance of RTM in test execution phase

 If all requirements are not completely developed then RTM helps to decide if the test execution phase can
be started depending on the number of test cases impacted.
 If a defect is found by a test case then RTM helps in
o Identifying the requirements impacted by the defect
o Ascertaining the severity of the defect based on number of test cases impacted
o Deciding which untested test cases will be blocked due to the defect
o Deciding which test cases need testing again to ensure that they are still working as expected,
once the defect is fixed.

Significance of RTM in test closure phase

 Helps in identifying areas of focus for future tests


o Based on the defect distribution across the requirements
o Test cases and strategies that need to be improved by identifying the test cases used to test
requirements where there were defect slippages.
Types of RTM

Based on the direction of traceability, RTMs can be classified as

1. Forward Traceability RTM : Enable tracing test cases from requirements. Usually implemented by
documenting related test case IDs in the requirement document.
2. Backward Traceability RTM : Enable tracing requirements from test cases. Usually implemented by
documenting requirement IDs in the test case document.
3. Bidirectional RTM : Requirements can be traced from test cases or vice versa. Usually implemented as a
standalone document like the sample RTM below.
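A bidirectional RTM can be sketched as two mappings derived from each other. The requirement and test case IDs below are invented for illustration:

```python
# Forward traceability: requirement -> test cases (IDs are made up)
forward = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],                 # uncovered requirement
}

# Backward traceability: test case -> requirements, derived from `forward`
backward = {}
for req, tcs in forward.items():
    for tc in tcs:
        backward.setdefault(tc, []).append(req)

# Coverage audit: flag requirements with no test cases (design more tests).
uncovered = [req for req, tcs in forward.items() if not tcs]

# Impact analysis: if TC-001 fails, which requirements are affected?
impacted = backward.get("TC-001", [])
```

The same two lookups support the execution-phase uses described above: `forward` answers "which tests must be re-run when a requirement changes", and `backward` answers "which requirements a failing test case puts at risk".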

Deliverables of Test Execution Phase

The test execution phase is at the heart of the STLC, around which all other phases are defined.

The testing team starts the system test execution phase after

 The test cases are completed and approved by the requirement owners.
 The development team has coded, unit-tested and deployed the software build in the test environment.

Deliverables of test execution phase

 Test execution results : Actual results of test cases documented in the test case document
 Defects report : List of defects found during test execution.

Types of Tests Executed

Smoke tests

 Smoke tests are the first tests to be carried out during the test execution phase, immediately after the first
code builds are delivered to the testing team.
 Their objective is to ascertain the software's test readiness - if the software is ready for the full blown testing
that is about to follow.
 Smoke tests are performed on basic and critical components of the software. They are selected based
on how quickly the tests can be carried out. For example
o Whether the application is launching without any issues
o Whether all the GUI components are available
o Whether the response times of the application are within acceptable limits.
 Smoke tests can be informal (without documenting test cases and their execution results) or formal.

Functional tests

 Functional tests are the tests carried out to validate the functional requirements.
 In case of a newly created software (development project), these tests validate the entire software.
 In case of an existing software (maintenance project), the scope of these tests is restricted to the newly
added, removed or modified components.

Non-Functional Tests

 Non functional tests are carried out to validate the non-functional requirements.
 They are normally carried out by a different group of testers (separate from regular black-box functional
testers) as these tests might require specialized skills in terms of
o Non-functional requirement analysis
o Using white-box test techniques
o Using specialized tools
 Performance tests are one of the most commonly conducted non-functional tests. They mostly consist of
o Load tests - whether the system's response times are within acceptable limits when it is operated
under regular load (concurrent number of users/transactions) for a specified duration.
o Stress tests - whether the system's response times are within acceptable limits when it is operated
under peak load for a specified duration.
o Spike tests - whether the system's response times are within acceptable limits when the load
suddenly increases or decreases by a huge difference.

Sanity tests

 Sanity tests are similar to smoke tests in terms of their objectives (validate test readiness) and their test
selection (how quickly they can be executed).
 They differ from smoke tests in terms of their scope and time of execution.
 While smoke tests validate the entire software, sanity tests are limited to small components and defect fixes.
 While smoke tests are done on the initial large builds, sanity tests are carried out on smaller builds delivered
in the later stages of test execution. If required, they can additionally test that no other critical functionality
is broken by the defect fixes.
 Sanity tests are usually informal.

Retests

 Retests consist of re-running the failed test cases, once their corresponding defect fixes are delivered, to
check if they are passing.

Regression tests

 While retests check whether failed test cases are passing after code changes, regression tests check
whether passed test cases are still passing after code changes.
 The objective of regression tests is to make sure that none of the software components, which were working
fine before the code change, are broken unintentionally.
 Informal and small regression tests are carried out as a part of sanity tests.
 Usually a formal regression test is carried out on the entire software once all the defects have been fixed,
before the build is promoted to the next stage.
 In case of development projects, formal regression tests are limited based on the impact analysis of the
defect fixes.
 In case of maintenance projects, formal regression tests are extensive and are carried out on all parts of the
software which are not supposed to be impacted by the current software release.
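The distinction between retests and regression tests can be sketched as a simple selection over the previous cycle's results. The test case names and results below are made up for illustration:

```python
# Previous cycle's results for a hypothetical test suite.
previous_results = {
    "TC01_login": "passed",
    "TC02_search": "failed",
    "TC03_checkout": "passed",
    "TC04_logout": "failed",
}

# Retest: re-run only the tests that failed, once their defect fixes arrive.
retest_set = {tc for tc, result in previous_results.items() if result == "failed"}

# Regression: re-run the tests that passed, to confirm the fixes broke nothing.
regression_set = {tc for tc, result in previous_results.items() if result == "passed"}

print(sorted(retest_set))      # ['TC02_search', 'TC04_logout']
print(sorted(regression_set))  # ['TC01_login', 'TC03_checkout']
```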
Defect Management

Defect Life Cycle:


During test execution, whenever a test case fails due to a mismatch between the actual results and expected results,
a defect MUST be logged.

Defect logging is the process of documenting all relevant information about the defect that needs to be passed on to
the developer so that he/she can fix it.

The following field/column names describe the information that is captured about the defect in a systematic and
organized way into a defect repository table or a defect management tool.

Defect ID

A numeric or alpha-numeric code that needs to be assigned to a defect so that the defect can be identified uniquely.

Defect Title

A meaningful one-line title stating what the issue is. In situations where stating just the Defect ID does not help to
recognize the defect due to its cryptic nature, this defect title can be used (e.g. status calls, defect triage meetings,
project checkpoints, etc.).

Detected By/Opened By

The name or identification of the person, usually the tester, who identified and opened the defect, so that any
stakeholder can contact that person in case any additional information about the defect is needed.

Opened Date

The date on which the defect was identified. It can be used to check if the defects are addressed within the stipulated
time as proposed in the test plan. A project risk may have to be raised if defect fixes are unavailable within the
required time limits.

Status

A predefined value which states how much progress has been made towards getting the defect fixed. This
information is constantly updated by the developer or tester to whom the defect is currently assigned.

Assigned To

The name or identification of the person who currently owns the defect based on its status and his/her
role in the project. This contact information can be used by the project management to check the progress made on a
particular defect on an ad-hoc basis at any point in time.

Priority

This field is used to indicate how quickly the defect needs to be fixed. It is estimated based on the degree of risk the
prolonged existence of the defect poses for

 Project schedule in terms of number of un-executed tests it impacts/blocks.
 Amount of cycle time left in SDLC/STLC
 Brand value and quality perception of the product.

Priority is normally graded as

1. High
2. Medium
3. Low

Severity

This field is used to indicate the real world impact of the defect in terms of

 Number of product functionalities that would be rendered unusable
 Amount of financial loss it might result in

Severity is normally graded as

1. Critical/Showstopper
2. Major
3. Moderate
4. Minor
5. Cosmetic
Defect Description

This is the most descriptive field of a defect. It is filled with information pertaining to most, if not all, of the
following details.

 Steps to reproduce the defect
 Test data which can be used to reproduce the defect
 Expected behavior and actual behavior
 Justification for the chosen priority and severity
 Any additional information that would help the developers in their analysis.

Test Case ID

This field is used for providing the test case ID of the test case which was used to identify the defect. It simplifies test
management in circumstances where the initial test and defect retest might need to be executed by different testers.
It also helps in post-testing analysis in terms of analyzing the test case effectiveness, requirement traceability, etc.
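The fields above can be modeled, purely for illustration, as a simple record. The sample defect below is hypothetical, and a real project would capture this in a defect management tool rather than code:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Defect:
    """One row in a defect repository, mirroring the fields described above."""
    defect_id: str      # unique numeric or alpha-numeric code
    title: str          # meaningful one-line summary
    detected_by: str    # who identified and opened the defect
    opened_date: date   # when the defect was identified
    status: str         # e.g. New, Assigned, Fixed, Closed
    assigned_to: str    # current owner of the defect
    priority: str       # High / Medium / Low
    severity: str       # Critical / Major / Moderate / Minor / Cosmetic
    description: str    # steps to reproduce, test data, expected vs actual
    test_case_id: str   # links back to the failing test case

d = Defect(
    defect_id="DEF-1024",
    title="Checkout total ignores discount code",
    detected_by="A. Tester",
    opened_date=date(2024, 3, 1),
    status="New",
    assigned_to="B. Developer",
    priority="High",
    severity="Major",
    description="Steps: add item, apply code SAVE10, observe total unchanged. "
                "Expected: 10% discount applied to total.",
    test_case_id="TC-208",
)
print(d.defect_id, d.status)  # DEF-1024 New
```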

Defect Management - Quiz 1


Q1 of 3

If a defect has been fixed and the tester tests some functionality, other than where the defect was found, to make
sure that functionality is not broken then the most appropriate name for this kind of test would be

Simple Retest
Defect Retest
Regression Test
Functional Test
Q2 of 3

In the middle of conducting functional tests, if a tester identifies some new scenario to be tested that was not already
planned and he documents that test case and its test results, then the most appropriate name for this kind of test
would be

Sanity Test
Informal Test
Ad-hoc Test
Smoke Test

Q3 of 3

While analyzing a defect, if a developer finds that the lack of functionality described in the defect is not explicitly
documented in the requirements but it is a really nice feature to have and it would take only 1 minute to fix it, then
what is the most appropriate status that he/she can assign to the defect?
Mark it 'In Progress' and fix the defect
Mark it 'Rejected' and send the defect back to the tester
Mark it 'Deferred' and let the requirement owners decide if they want that fix and if yes, when
Mark it 'Fixed' and ask the tester to close it saying it's not mentioned in the requirement

Deliverables of Test Closure Phase

The test closure phase is the final stage of STLC.

Before closing the test project

 Evaluate the test exit criteria based on
o Requirement coverage
o Test completion
o Status of defects

After Closing the test project

 Archive all test artifacts and proof of testing for reuse/future reference.
 Analyze the defects' root cause and distribution.
 Analyze and document the metrics collected on test activities to identify areas to be improved.
 Document lessons learnt for enabling the success of future testing projects.

Deliverables of test closure phase

 Approval/rejection for code promotion to the next phase based on exit criteria checklist.
 Metrics report
 Project Lessons and future recommendations (based on situations)
 Case studies (optional)

In the next page let us look at some of the test metrics that are collected and documented.

Test Metrics

Though quality might seem a very subjective term, the truth about quality is that "if you cannot measure it,
then you cannot control it."

A Metric is a quantitative measure of the degree to which a system, component of a system, or process possesses a
given attribute.

To track and ensure the overall quality of the process of software testing itself, a number of metrics are collected and
reviewed. These metrics can be broadly classified into two categories.

Base Metrics

These are metrics that can be directly derived (without any calculation). Base metrics are usually collected from the
testers. Some of the most common base metrics collected are.

1. Total number of requirements
2. Total number of test cases generated
3. Total number of test cases executed
4. Total number of test cases passed/failed/blocked
5. Total number of defects
6. Total number of defects (based on severity/priority)
7. Total amount of effort spent on test case preparation
8. Total amount of effort spent on test execution

Calculated Metrics

These are metrics that are calculated from the base metrics already collected. The calculations are normally performed
by test leads and managers to monitor and report the health of the project. Some of the most commonly calculated
metrics are
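The course's own list of calculated metrics is not reproduced here. As an illustration only (the metric names and formulas below are typical industry examples, not taken from this text), a few such metrics can be derived from the base metrics like this:

```python
# Base metrics, as collected from the testers (illustrative numbers).
base = {
    "requirements": 40,
    "test_cases_generated": 200,
    "test_cases_executed": 180,
    "test_cases_passed": 150,
    "defects": 44,
}

# Calculated metrics derived from the base metrics.
execution_pct = 100 * base["test_cases_executed"] / base["test_cases_generated"]
pass_pct = 100 * base["test_cases_passed"] / base["test_cases_executed"]
defect_density = base["defects"] / base["requirements"]  # defects per requirement

print(f"Execution: {execution_pct:.1f}%")       # Execution: 90.0%
print(f"Pass rate: {pass_pct:.1f}%")            # Pass rate: 83.3%
print(f"Defect density: {defect_density:.2f}")  # Defect density: 1.10
```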

Test Metrics – Q1
Q1 of 1

The testing performed is enough when:

Time runs out.
The required confidence level has been achieved.
No other faults are found.
The users would not discover any serious faults.
The 7 Principles of Software Testing

Testing only proves the presence of defects

Tests can only detect the presence of the types of defects they are designed to identify. It is
impossible for tests to prove, with absolute certainty, that no other defects exist in the system or component
being tested. Test strategies and designs always need to be improved to catch more defects.

Exhaustive testing is impossible

It is impossible to test all the possible combinations of states and data used by a software system because of the
inherent complexity of a large code base, the numerous environmental and resource parameters that need to be
considered, and the limited availability of time and resources for testing. Hence test strategies need to be devised
based on the criticality and probability of usage of the individual features and components of the software.
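A quick back-of-the-envelope calculation illustrates why exhaustive testing is infeasible even for a tiny input form; the fields and value counts below are hypothetical:

```python
# Possible values for each input of a small hypothetical form.
fields = {
    "age": 2**32,       # one 32-bit numeric field
    "country": 250,     # one dropdown
    "newsletter": 2,    # one checkbox
}

# Exhaustive testing requires the product of all value counts.
total = 1
for values in fields.values():
    total *= values

print(f"{total:,} combinations")  # 2,147,483,648,000 combinations
```

Even at one test per millisecond, covering every combination would take decades, which is why tests are prioritized by risk and usage instead.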

Early testing improves the possibility of fixing defects

A defect identified in the requirements stage would just involve a correction to the requirement specification
document. If the same defect is identified after the software is built, it requires a lot of time, effort and money to be
spent on analysis, troubleshooting and rework in terms of modifications required in the code, design, specification etc
and retesting each of those changes. Hence the earlier in the SDLC a defect is identified, the less time,
effort and money is required to fix it.

Defects tend to cluster

If a particular part of the code has a defect, it tends to manifest as a defect in all other parts of code that are
associated with that part in terms of reuse, function calls, inputs received and outputs processed. Statistically 80% of
the defects identified tend to cluster around 20% of the code (Pareto Principle). Hence whenever a tester finds
relatively more defects in a particular module of the software, it is important to run different types of tests on that
module to flush out more defects.

Testing is context dependent

Software used for spacecraft cannot be tested with the same approach as software used for playing videos. The
parameters which form the context of usage, like accuracy, fault tolerance, UI attractiveness, resources consumed,
financial criticality, user base, etc., tend to differ for each software product. The test types, test designs and test
approaches also have to differ according to the context.

The Pesticide Paradox

Pesticides, used in agriculture, seem to be effective for a time period as they would kill and eradicate a lot of different
types of pests. But eventually the types of pests, which those pesticides are not effective against, will start to flourish
in large numbers and the same pesticides become useless. Similarly, the same test case sets, when used for a long
time, will become ineffective because the developers would be able to predict and avoid the types of defects that
could be caught. However, they might be unintentionally injecting new defects into the code which the existing set of test cases
cannot catch. Test cases and their design have to be re-evaluated whenever a drop in defect count is noticed.

The absence of error fallacy

Zero defects caught during the testing phase doesn't necessarily mean the software is defect free. It can also mean
that the tests were not effective enough to catch the defects present, resulting in unusable software.
TBTM-Assessment

Q1 of 10

'Alpha Testing' is done on

Developer's environment
Customer's environment
Public network
Q2 of 10

Find the odd one out.

Statement Coverage
Branch Coverage
Decision Coverage
Equivalence partitioning
Q3 of 10

John is testing an age field which accepts ages from 20 to 30, both inclusive. Which among the options is the best
set of test cases for applying the equivalence partitioning concept?

18,21,33
18,19,20
30,31,32
20,25,30
Q4 of 10

John raised a defect and the defect is currently being worked on by the developer. What will be the current status of
the bug?

New
Open
Assigned
Fixed
Q5 of 10

Rahul, a developer, closed a raised bug and the tester is testing for the existence of the bug. What will be the current
status of the bug?
New
Open
Retest
Closed
Q6 of 10

Minimum how many test cases are needed for the below program to ensure 100% statement coverage?

Read a,b
if a>b
  print "welcome Infosys"
endif
if a=10
  print "bye Infosys"
endif
if b=6
  print "welcome Infosys"
endif

1
2
3
4

Q7 of 10

Minimum how many test cases are needed for the below program to ensure 100% branch/decision coverage?

Read a,b
if a>b
  print "welcome Infosys"
endif
if a=10
  print "bye Infosys"
endif
if b=6
  print "welcome Infosys"
endif

1
2
3
4
Q8 of 10

Rahul raised a defect which is currently being checked by the developer. What will be the bug status?

Fixed
Open
New
Closed
Q9 of 10

Rachel identified a bug and the management decided to fix the bug in the next version. What will be the bug status?

New
Deferred
Closed
Assigned
Q10 of 10

Which among the statements is true?

Retest is done after Regression test
Regression test is done after Retest
Smoke test is done after Retest
