
Lecture 2: Understanding Software Defects and Defect Management in Software Quality Management
2.1 What is a defect?
A defect is a failure of an application to conform to its requirement specification. It can also be
defined as a variance between expected and actual results, and commonly refers to an error found
AFTER the application goes into production. The term covers a range of troubles with a software
product, whether in its external behavior or in its internal features. In other words, in the context of
testing, a defect is the difference between the expected and the actual result: a deviation from the
customer requirement.
A Defect in Software Testing is a variation or deviation of the software application from the end user's
requirements or the original business requirements. A software defect is an error in coding which causes
incorrect or unexpected results from a software program and which does not meet the actual requirements.
Testers might come across such defects while executing test cases. A bug, on the other hand, is
the consequence or outcome of a coding fault.
There is a very thin line of difference between these two terms; in industry both are faults that need
to be fixed, and so some testing teams use them interchangeably.

When testers execute test cases, they might come across test results which contradict the expected
results. This variation in test results is referred to as a Software Defect. These defects or
variations are referred to by different names in different organizations, such as issues, problems, bugs,
or incidents.
Testing is the process of identifying defects, where a defect is any variance between actual and
expected results. "A mistake in coding is called an Error; an error found by a tester is called a Defect;
a defect accepted by the development team is called a Bug; and if the product does not meet the
requirements, it is a Failure."

Difference between Defect, Error, Bug, Failure and Fault!
 A bug is a fault in a program which causes it to behave abnormally.
Bugs are usually found either during unit testing done by developers or
during module testing by testers.
 A defect is found when the application does not conform to the
requirement specification. A defect can also be found when the client
or user is testing. (A bug is a type of defect.)
Defects can be categorized as follows:
Wrong: The requirements were implemented incorrectly. This defect is a
variance from the given specification.
Missing: A requirement of the customer was not fulfilled. This is a variance from
the specification: an indication that a specification was not implemented, or that a
requirement of the customer was not noted correctly.
Extra: A requirement incorporated into the product that was not given by the end
customer. This is always a variance from the specification, but may be an attribute
desired by the user of the product. However, it is considered a defect because it is a
variance from the existing requirements.
ERROR: An error is a mistake, misconception, or misunderstanding on the part of a
software developer. In the category of developer we include software engineers,
programmers, analysts, and testers. For example, a developer may misunderstand a
design notation, or a programmer might type a variable name incorrectly – this leads to
an Error. Errors are generated by wrong logic, loops, or syntax; an error in the
software changes the functionality of the program.
BUG: A bug is the result of a coding error: an error found in the development
environment before the product is shipped to the customer. It is a programming error
that causes a program to work poorly, produce incorrect results, or crash; an error in
software or hardware that causes a program to malfunction. "Bug" is the tester's
terminology.
FAILURE: A failure is the inability of a software system or component to perform its
required functions within specified performance requirements. When a defect reaches
the end customer, it is called a Failure. During development, failures are usually
observed by testers.
FAULT: An incorrect step, process, or data definition in a computer program which
causes the program to perform in an unintended or unanticipated manner. A fault is
introduced into the software as the result of an error: it is an anomaly in the software
that may cause it to behave incorrectly, rather than according to its specification.
The software industry still cannot agree on the definitions of all the terms above. In
essence, if you use a term to mean one specific thing, your audience may not
understand it to mean that thing.

2.2 What is defect leakage?
Defect leakage refers to defects which bypass the testing efforts of the development team and end up
in the final product, where they can impact users.

Defect Leakage is the metric used to identify the efficiency of QA testing,
i.e., how many defects are missed or slipped during QA testing.

It is the ratio of the number of defects attributed to a stage but captured only in subsequent stages, to
the sum of the total number of defects captured in that particular stage and the number of defects
attributed to the stage but captured only in subsequent stages. Other characteristics of defect leakage:

 It occurs at the customer or end-user side, after the application is delivered.
 It is used to determine the percentage of defects leaked to subsequent stages.
 It is calculated at the overall project level, at the stage level, or both.
 It is measured as a percentage.

In short, defect leakage is a metric that measures the percentage of defects leaked from the current
testing stage to the subsequent stage, and it reflects the effectiveness of the testing executed by software
testers. The testing team's work is validated only when the percentage of defect leakage is
minimal or non-existent.

Defect Leakage = (No. of Defects found in UAT / No. of Defects found in QA testing) x 100
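
To make the formula concrete, here is a minimal Python sketch with made-up counts (the function name and figures are illustrative, not from any standard library):

```python
def defect_leakage(defects_in_uat: int, defects_in_qa: int) -> float:
    """Percentage of defects that slipped past QA and surfaced in UAT."""
    if defects_in_qa == 0:
        raise ValueError("QA defect count must be non-zero")
    return defects_in_uat / defects_in_qa * 100

# Hypothetical figures: QA found 90 defects; UAT later surfaced 9 more.
print(f"Defect leakage: {defect_leakage(9, 90):.1f}%")  # -> 10.0%
```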

Defects happen. That is a fact of life, and as software developers we are in a constant war that we will
never fully win. We may even introduce defects deliberately, when requirements or
timelines force a decision that carries necessary risk. But how can we eliminate the
unwanted defects that make our software difficult to use and tarnish our reputation?

Good test logging, regular reporting, customer involvement, and transparency in your product can
go a long way toward mitigating defects. You can have the best logging in the world, but it is largely
worthless if you do not address the defects it records. The more important the project, the more time
should be dedicated to handling errors from the production site. These reports help everyone on the
team understand what problems customers are facing.
Depending on how the team best responds, these are some ways to share feedback with the team.
1. To get a quick sense of the overall health of a mission critical application, develop an
understandable, clear report which can be used in communications with and across the
business and company leadership.
2. Automatically log defects from production. This approach can get challenging quickly, so
have a separate place to log defects, and then pull in relevant issues.
3. If your team is large enough, or the project critical enough, create a small SWAT team, or
rapid response team that can react to critical issues quickly and cure problems.
4. And, as a general practice, every developer should be aware of what is happening with their
software and be actively engaged and responsible for their code, even when in production.
Keeping track of defects found and repaired prior to release is an indicator of good software
development health and maintains a reasonable defect removal efficiency. Equally important is to
keep records of all defects found after release and bring those back to the product, development and
quality teams so that test cases can be updated and processes adjusted when necessary. Transparency
with software defects is just as important as identification and resolution, because your customers
want to know that you own the problems and are working to resolve them.

In the Software Testing Life Cycle (STLC) there are numerous testing methodologies and techniques
which are proficient at detecting the majority of defects and bugs. However, even the most prominent
and effective testing methodologies are unable to detect all the bugs, defects, and errors in
the system, as some are hidden at the most internal levels of the software. These bugs and
errors are uncovered during the later stages of the STLC and are known as
leakage. Therefore, to account for undetected defects and errors, competent software
engineers follow an approach known as defect leakage, which helps them estimate the total defects
present in a software system and aids them in validating their testing efforts. (A detailed
discussion of software testing will be covered in Lecture 4.)

2.2.1 Other Software Testing Metrics


Software testing metrics are the best way of measuring and monitoring the various testing activities
performed by the team of testers during the software testing life cycle. Moreover, they help convey
predictions based on the collected data. The various software testing metrics
that provide key performance indicators to help measure testing efforts and the testing
process include:
a) Defect Density
Defect Density is the number of defects confirmed in a software/module during a specific period
of operation or development, divided by the size of the software/module. It enables one to decide if
a piece of software is ready to be released.
Defect density is counted per thousand lines of code, also known as KLOC.

How to calculate Defect Density


A formula to measure Defect Density:
 Defect Density = Defect count/size of the release
The size of the release can be measured in terms of lines of code (LOC).
Defect Density Example
Suppose you have 3 modules integrated into your software product. Each module has the following
number of bugs discovered:
 Module 1 = 10 bugs
 Module 2 = 20 bugs
 Module 3 = 10 bugs
Total bugs = 10 + 20 + 10 = 40
The total lines of code for each module are:

 Module 1 = 1000 LOC


 Module 2 = 1500 LOC
 Module 3 = 500 LOC

Total Line of Code = 1000+1500+500 = 3000

Defect Density is calculated as:


Defect Density = 40/3000 = 0.013333 defects/LOC = 13.333 defects/KLOC
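
As a quick check of the arithmetic, here is a minimal Python sketch using the module counts from the example above (the dictionary layout is just for illustration):

```python
# Bug counts and sizes (LOC) from the worked example above.
modules = {"Module 1": (10, 1000), "Module 2": (20, 1500), "Module 3": (10, 500)}

total_bugs = sum(bugs for bugs, _ in modules.values())
total_loc = sum(loc for _, loc in modules.values())

density = total_bugs / total_loc
print(f"{density:.6f} defects/LOC")         # -> 0.013333
print(f"{density * 1000:.3f} defects/KLOC") # -> 13.333
```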

A standard for defect density


There is no fixed standard for bug density; however, studies suggest that one defect per thousand
lines of code is generally considered a sign of good project quality.

Factors that affect the defect density metrics


 Code complexity
 The type of defects taken into account for the calculation
 Time duration which is considered for Defect density calculation
 Developer or Tester skills

Advantages of defect density


 It helps to measure the testing effectiveness
 It helps to differentiate defects in components/software modules
 It is useful in identifying the areas for correction or improvement
 It is useful in pointing towards high-risk components
 It helps in identifying the training needs of various resources
 It can be helpful in estimating the testing and rework due to bugs
 It can estimate the remaining defects in the software
 Before the release, we can determine whether our testing is sufficient
 We can maintain a database of standard defect density values for comparison

b) Defect Removal Efficiency: Defect removal efficiency (DRE) provides a measure of the
development team’s ability to remove various defects from the software, prior to its release
or implementation. Calculated during and across test phases, DRE is measured per test
type and indicates the efficiency of the numerous defect removal methods adopted by the
test team. Also, it is an indirect measurement of the quality as well as the performance of
the software. Therefore, the formula for calculating Defect Removal Efficiency is:
DRE = Number of defects resolved by the development team/ (Total number of
defects at the moment of measurement)
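
A minimal sketch of the DRE calculation, with hypothetical counts (nothing here comes from a specific tool):

```python
def defect_removal_efficiency(defects_resolved: int, total_defects: int) -> float:
    """Fraction of known defects removed by the development team, as a percentage."""
    return defects_resolved / total_defects * 100

# Hypothetical: 90 of 100 defects known at measurement time were resolved.
print(f"DRE: {defect_removal_efficiency(90, 100):.0f}%")  # -> 90%
```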

c) Defect Category: This is a crucial type of metric evaluated during the process of the
software development life cycle (SDLC). Defect category metric offers an insight into the
different quality attributes of the software, such as its usability, performance, functionality,
stability, reliability, and more. In short, the defect category is an attribute of the defects in
relation to the quality attributes of the software product and is measured with the
assistance of the following formula:
Defect Category = Defects belonging to a particular category/ Total number of defects.

d) Defect Severity Index: It is the degree of impact a defect has on the development of an
operation or a component of a software application being tested. Defect severity index
(DSI) offers an insight into the quality of the product under test and helps gauge the quality
of the test team’s efforts. Additionally, with the assistance of this metric, the team can
evaluate the degree of negative impact on the quality as well as the performance of the
software. The following formula is used to measure the defect severity index:
Defect Severity Index (DSI) = Sum of (Number of defects at each severity x Severity
Level) / Total number of defects
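
A minimal sketch of a DSI calculation, assuming numeric severity weights (e.g. Critical = 4 down to Low = 1; the weights and counts are illustrative):

```python
# severity weight -> number of defects at that severity (hypothetical data)
defect_counts = {4: 2, 3: 5, 2: 10, 1: 3}

total_defects = sum(defect_counts.values())
dsi = sum(weight * count for weight, count in defect_counts.items()) / total_defects
print(f"Defect Severity Index: {dsi:.2f}")  # -> 2.30
```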

e) Review Efficiency: Review efficiency is a metric used to reduce pre-delivery
defects in the software. Review defects can be found in documents as well as in code.
By implementing this metric, one reduces the cost as well as the effort spent
rectifying or resolving errors. Moreover, it helps decrease the probability of defect
leakage in subsequent stages of testing and validates test case effectiveness. The
formula for calculating review efficiency is:
Review Efficiency (RE) = Total number of review defects / (Total number of review
defects + Total number of testing defects) x 100

f) Test Case Effectiveness: The objective of this metric is to measure the efficiency of the test
cases executed by the team of testers during every testing phase. It helps in
determining the quality of the test cases.
Test Case Effectiveness = (Number of defects detected / Number of test cases run) x
100
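
A small sketch of the two ratios above, review efficiency and test case effectiveness, with illustrative counts:

```python
def review_efficiency(review_defects: int, testing_defects: int) -> float:
    """Share of defects caught during reviews rather than test execution."""
    return review_defects / (review_defects + testing_defects) * 100

def test_case_effectiveness(defects_detected: int, test_cases_run: int) -> float:
    """Defects detected per executed test case, as a percentage."""
    return defects_detected / test_cases_run * 100

print(f"RE:  {review_efficiency(30, 70):.0f}%")         # -> 30%
print(f"TCE: {test_case_effectiveness(40, 200):.0f}%")  # -> 20%
```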

g) Test Case Productivity: This metric is used to measure and calculate the number of test
cases prepared by the team of testers and the efforts invested by them in the process. It is
used to determine the test case design productivity and is used as an input for future
measurement and estimation. This is usually measured with the assistance of the following
formula:
Test Case Productivity = (Number of Test Cases / Efforts Spent for Test Case
Preparation)

h) Test Coverage: Test coverage is another important metric that defines the extent to which
the software product’s complete functionality is covered. It indicates the completion of
testing activities and can be used as criteria for concluding testing. It can be measured by
implementing the following formula:
Test Coverage = Number of detected faults/number of predicted defects.
Another important formula that is used while calculating this metric is:
Requirement Coverage = (Number of requirements covered / Total number of
requirements) x 100

i) Test Design Coverage: Similar to test coverage, test design coverage measures the
percentage of requirements covered by test cases. This metric helps
evaluate the functional coverage of the test cases designed and improves the test coverage. This
is mainly calculated by the team during the stage of test design and is measured in
percentage. The formula used for test design coverage is:
Test Design Coverage = (Total number of requirements mapped to test cases / Total
number of requirements) x 100

j) Test Execution Coverage: It helps us get an idea about the total number of test cases
executed as well as the number of test cases left pending. This metric determines the
coverage of testing and is measured during test execution, with the assistance of the
following formula:
Test Execution Coverage = (Total number of executed test cases or scripts / Total
number of test cases or scripts planned to be executed) x 100
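
Since the coverage metrics above share the same shape, a single helper illustrates them; the project figures below are hypothetical:

```python
def coverage_pct(covered: int, total: int) -> float:
    """Generic percentage used by the coverage metrics above."""
    return covered / total * 100

# Hypothetical project: 50 requirements, 45 mapped to test cases;
# 120 test cases planned, 96 executed so far.
print(f"Test design coverage:    {coverage_pct(45, 50):.0f}%")   # -> 90%
print(f"Test execution coverage: {coverage_pct(96, 120):.0f}%")  # -> 80%
```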

k) Test Tracking & Efficiency: Test efficiency is an important component that needs to be
evaluated thoroughly. It is a quality attribute of the testing team that is measured to ensure
all testing activities are carried out in an efficient manner. The various metrics that assist
in test tracking and efficiency are as follows:
i Passed Test Cases Coverage: It measures the percentage of passed test cases.
(Number of passed tests / Total number of tests executed) x 100
ii Failed Test Case Coverage: It measures the percentage of all the failed test
cases.
(Number of failed tests / Total number of tests executed) x 100
iii Test Cases Blocked: Determines the percentage of test cases blocked, during
the software testing process.
(Number of blocked tests / Total number of tests executed) x 100
iv Fixed Defects Percentage: With the assistance of this metric, the team is able
to identify the percentage of defects fixed.
(Defect fixed / Total number of defects reported) x 100
v Accepted Defects Percentage: The focus here is to define the total number
of defects accepted by the development team. These are also measured in
percentage.
(Defects accepted as valid / Total defects reported) x 100
vi Defects Rejected Percentage: Another important metric considered under
test track and efficiency is the percentage of defects rejected by the
development team.
(Number of defects rejected by the development team / Total defects reported) x 100
vii Defects Deferred Percentage: It determines the percentage of defects
deferred by the team for future releases.
(Defects deferred for future releases / Total defects reported) x 100
viii Critical Defects Percentage: Measures the percentage of critical defects in the
software.
(Critical defects / Total defects reported) x 100
ix Average Time Taken to Rectify Defects: With the assistance of this formula,
the team members are able to determine the average time taken by the
development and testing team to rectify the defects.
(Total time taken for bug fixes / Number of bugs)
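
The tracking ratios above are simple proportions; here is a minimal sketch over one hypothetical test cycle:

```python
# Hypothetical test-cycle figures.
executed, passed, failed, blocked = 200, 150, 40, 10
reported, fixed, rejected, deferred = 60, 45, 8, 4

print(f"Passed test cases:  {passed / executed * 100:.0f}%")    # -> 75%
print(f"Failed test cases:  {failed / executed * 100:.0f}%")    # -> 20%
print(f"Blocked test cases: {blocked / executed * 100:.0f}%")   # -> 5%
print(f"Fixed defects:      {fixed / reported * 100:.0f}%")     # -> 75%
print(f"Rejected defects:   {rejected / reported * 100:.0f}%")  # -> 13%
print(f"Deferred defects:   {deferred / reported * 100:.0f}%")  # -> 7%
```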

l) Test Effort Percentage: An important testing metric, the test effort percentage offers an
evaluation of what was estimated before the commencement of the testing process versus the
actual effort invested by the team of testers. It helps in understanding any variances in
the testing and is extremely helpful in estimating similar projects in the future. Similar to
test efficiency, test effort is also evaluated with the assistance of various metrics:
 Number of Test Run Per Time Period: Here, the team measures the number
of tests executed in a particular time frame.
(Number of tests run / Total time)
 Test Design Efficiency: The objective of this metric is to evaluate the design
efficiency of the tests.
(Number of tests designed / Total time)
 Bug Find Rate: One of the most important metrics used during the test effort
percentage is bug find rate. It measures the number of defects/bugs found by
the team during the process of testing.
(Total number of defects / Total number of test hours)
 Number of Bugs Per Test: As suggested by the name, the focus here is to measure the number
of defects found during every testing stage.
(Total number of defects / Total number of tests)
 Average Time to Test a Bug Fix: After evaluating the above metrics, the team
finally identifies the time taken to test a bug fix.
(Total time between defect fix & retest for all defects / Total number of defects)
m) Test Effectiveness: In contrast to test efficiency, test effectiveness measures the
bug-finding ability and the quality of a test set: how well the tests find defects and isolate
them from the software product and its deliverables. It expresses the percentage of all
known defects that were found by the testing effort and is commonly calculated with the
following formula:
Test Effectiveness (TEF) = (Total number of defects found by testing / (Total number of
defects found by testing + Total number of defects that escaped)) x 100
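
A minimal sketch of this form of TEF, with hypothetical counts:

```python
def test_effectiveness(found_in_testing: int, escaped: int) -> float:
    """Share of all known defects that testing caught before release."""
    return found_in_testing / (found_in_testing + escaped) * 100

# Hypothetical: testing found 55 defects; 5 escaped to production.
print(f"TEF: {test_effectiveness(55, 5):.1f}%")  # -> 91.7%
```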

n) Test Economic Metrics: While testing the software product, various components
contribute to the cost of testing, like people involved, resources, tools, and infrastructure.
Hence, it is vital for the team to evaluate the estimated amount of testing, with the actual
expenditure of money during the process of testing. This is achieved by evaluating the
following aspects:
 The total allocated cost of testing.
 The actual cost of testing.
 Variance from the estimated budget.
 Variance from the schedule.
 Cost per bug fix.
 The cost of not testing.
o) Test Team Metrics: Finally, the test team metrics are defined by the team. This metric is
used to understand if the work allocated to various test team members is distributed
uniformly and to verify if any team member requires more information or clarification
about the test process or the project. This metric is immensely helpful as it promotes
knowledge transfer among team members and allows them to share necessary details
regarding the project, without pointing at or blaming an individual for certain irregularities
and defects. Represented in the form of graphs and charts, this is fulfilled with the
assistance of the following aspects:
 Returned defects are distributed per test team member, along with other important
details, like defects reported, accepted, and rejected.
 Open defects are distributed for retesting per test team member.
 Test cases allocated to each test team member.
 The number of test cases executed by each test team member

2.3 Defect Management Process in Software Testing (Bug Report Template)
Bug Report in Software Testing
A Bug Report in Software Testing is a detailed document about bugs found in the software
application. A bug report contains every detail about a bug, such as its description, the date when it
was found, the name of the tester who found it, and the name of the developer who fixed it. Bug
reports help identify similar bugs in the future so they can be avoided.
While reporting a bug to the developer, your Bug Report should contain the following information:
 Defect_ID – Unique identification number for the defect.
 Defect Description – Detailed description of the Defect including information about the
module in which Defect was found.
 Version – Version of the application in which defect was found.
 Steps – Detailed steps along with screenshots with which the developer can reproduce the
defects.
 Date Raised – Date when the defect is raised
 Reference – where you provide references to documents like requirements, design,
architecture, or even screenshots of the error, to help understand the defect
 Detected By – Name/ID of the tester who raised the defect
 Status – Status of the defect; more on this later
 Fixed by – Name/ID of the developer who fixed it
 Date Closed – Date when the defect is closed
 Severity – describes the impact of the defect on the application
 Priority – relates to the urgency of fixing the defect. Both severity and priority can be
High/Medium/Low, based on the impact of the defect and the urgency with which it
should be fixed, respectively
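
To make the template concrete, here is a minimal sketch of a bug record as a Python dataclass; the field names simply mirror the template above and are not taken from any particular tool:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BugReport:
    defect_id: str              # unique identifier, e.g. "G99-0001"
    description: str            # what is wrong, and in which module
    version: str                # application version where found
    steps: List[str]            # steps to reproduce the defect
    date_raised: str
    detected_by: str
    severity: str = "Medium"    # impact of the defect on the application
    priority: str = "Medium"    # urgency with which it should be fixed
    status: str = "New"
    fixed_by: str = ""
    date_closed: str = ""
    references: List[str] = field(default_factory=list)

report = BugReport(
    "G99-0001", "Login fails with valid credentials", "1.2.0",
    ["Open login page", "Enter a valid user/password", "Click Login"],
    "2024-01-15", "tester_a", severity="High", priority="High",
)
print(report.status)  # -> New
```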
Consider the following scenario as a Test Manager:
Your team found bugs while testing the Guru99 Banking project and reported them to the developers
verbally. A week later the developer responds, the week after that the tester replies, and the
back-and-forth continues.

As in the above case, if the defect communication is done verbally, things soon become very
complicated. To control and effectively manage bugs you need a defect lifecycle.

What is Defect Management Process?


Defect Management is a systematic process to identify and fix bugs. A defect management cycle
contains the following stages:
1) Discovery of Defect
2) Defect Categorization
3) Fixing of Defect by developers
4) Verification by Testers
5) Defect Closure
6) Defect Reports at the end of the project
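
The stages map naturally onto a set of defect statuses; here is an illustrative sketch (the state names are assumptions, not an industry standard):

```python
from enum import Enum

class DefectStatus(Enum):
    """Illustrative lifecycle states matching the stages above."""
    NEW = "Discovered by testers"
    ACCEPTED = "Acknowledged and categorized"
    IN_PROGRESS = "Being fixed by developers"
    FIXED = "Awaiting verification by testers"
    CLOSED = "Verified and closed"

# A defect typically moves NEW -> ACCEPTED -> IN_PROGRESS -> FIXED -> CLOSED,
# returning to IN_PROGRESS if verification fails.
print([s.name for s in DefectStatus])
```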

This topic will guide you on how to apply the defect management process to the project Guru99
Bank website. You can follow the below steps to manage defects.

Discovery
In the discovery phase, the project team has to discover as many defects as possible before the
end customer can discover them. A defect is said to be discovered, and changes to status Accepted,
when it is acknowledged and accepted by the developers.
In the above scenario, the testers discovered 84 defects in the Guru99 website.

Let's look at the following scenario: your testing team discovered some issues in the Guru99
Bank website. They consider them defects and reported them to the development team, but the
developers dispute whether the reported problems are defects at all.
In such a case, as a Test Manager, what will you do?
A) Agree with the test team that it is a defect
B) Take the role of judge and decide whether the problem is a defect or not
C) Agree with the development team that it is not a defect

In such a case, a resolution process should be applied to solve the conflict: you take the role of judge
to decide whether the website problem is a defect or not.

Categorization
Defect categorization helps the software developers prioritize their tasks: this kind of
priority helps the developers fix the most crucial defects first.

Defects are usually categorized by the Test Manager.

Let's do a small exercise:


Prioritize the following defects either as Critical, High, Medium or Low

1) The website performance is too slow


2) The login function of the website does not work properly
3) The GUI of the website does not display correctly on Mobile devices
4) The website could not remember the user login session
5) Some links don't work

Here are the recommended answers

No. | Description | Priority | Explanation
1 | The website performance is too slow | High | The performance bug can cause huge inconvenience to users.
2 | The login function of the website does not work properly | Critical | Login is one of the main functions of the banking website; if this feature does not work, it is a serious bug.
3 | The GUI of the website does not display correctly on mobile devices | Medium | The defect affects users who view the website on a smartphone.
4 | The website could not remember the user login session | High | This is a serious issue, since the user will be able to log in but not be able to perform any further transactions.
5 | Some links don't work | Low | This is an easy fix for the development team, and the user can still access the site without these links.
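
The categorization above can also be kept as data so that fixes are scheduled by priority; a small illustrative sketch (the structure and ordering are assumptions):

```python
# Exercise defects tagged with the agreed priorities.
defects = [
    ("Website performance is too slow", "High"),
    ("Login function does not work properly", "Critical"),
    ("GUI does not display correctly on mobile devices", "Medium"),
    ("User login session is not remembered", "High"),
    ("Some links don't work", "Low"),
]

order = ["Critical", "High", "Medium", "Low"]  # fix the most crucial first
for desc, prio in sorted(defects, key=lambda d: order.index(d[1])):
    print(f"{prio:<8} {desc}")
```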
Defect Resolution
Defect Resolution in software testing is a step-by-step process of fixing defects. The
process starts with assigning defects to developers; the developers then schedule the fixes
according to priority; the defects are fixed; and finally the developers send a resolution report to
the test manager. This process helps to fix and track defects easily.
You can follow these steps to fix a defect:
 Assignment: The defect is assigned to a developer or other technician to fix, and its status
changes to Responding.
 Schedule fixing: The development side takes charge in this phase. They create a schedule to
fix the defects, depending on defect priority.
 Fix the defect: While the development team is fixing the defects, the Test Manager tracks
the progress of the fixes against the above schedule.
 Report the resolution: Get a report of the resolution from the developers when the defects
are fixed.
Verification
After the development team has fixed and reported the defects, the testing team verifies that the
defects are actually resolved.
For example, in the above scenario, when the development team reported that they had already fixed
61 defects, your team would test again to verify whether these defects were actually fixed.
Closure
Once a defect has been resolved and verified, its status is changed to Closed. If not, you have to
send a notice to the development team to check the defect again.
Defect Reporting
Defect Reporting in software testing is a process in which test managers prepare and send a defect
report to the management team for feedback on the defect management process and the status of
defects. The management team then checks the defect report and sends feedback or provides further
support if needed. Defect reporting helps to better communicate, track, and explain defects in detail.
The management board has the right to know the defect status. They must understand the defect
management process to support you in this project. Therefore, you must report the current
defect situation to them and get their feedback.
Important Defect Metrics
Back to the above scenario: the developer and test teams reviewed the reported defects. Of the 84
defects your team reported, the development team rejected 20 as not being valid defects.

How do you measure and evaluate the quality of the test execution?

This is a question which every Test Manager wants answered. There are two parameters you can
consider: the defect rejection ratio (DRR) and the defect leakage ratio (DLR).

In the above scenario, the defect rejection ratio (DRR) is 20/84 = 0.238 (23.8%).

As another example, suppose the Guru99 Bank website has 64 defects in total, but your testing team
detected only 44 of them, i.e., they missed 20 defects. Therefore, the defect leakage
ratio (DLR) is 20/64 = 0.312 (31.2%).
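
Both ratios are simple proportions; a minimal sketch using the figures from the scenario above:

```python
def ratio_pct(part: int, whole: int) -> float:
    return part / whole * 100

drr = ratio_pct(20, 84)  # 20 of 84 reported defects were rejected
dlr = ratio_pct(20, 64)  # 20 of 64 total defects were missed by testing
print(f"DRR: {drr:.1f}%")  # -> 23.8%
print(f"DLR: {dlr:.1f}%")  # -> 31.2%
```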

In conclusion, the quality of test execution is evaluated via these two parameters: the smaller the
DRR and DLR values, the better the quality of test execution. What ratio range is acceptable? The
range can be defined and accepted based on the project target, or you may refer to the metrics of
similar projects.

In this project, the recommended acceptable ratio is 5~10%. The values above exceed that range,
which means the quality of test execution is low. You should find countermeasures to reduce these
ratios, such as:

 Improve the testing skills of the team members.
 Spend more time on test execution, especially on reviewing the test execution results.
