
Software Testing

22518
Chapter 5
Testing tools and measurements
(R-02, U-04, A-06 =12 Marks)

5.1 Manual Testing and Need for Automated Testing Tools


5.2 Advantages and Disadvantages of Using Tools
5.3 Selecting a Testing Tool
5.4 When to Use Automated Test Tools, Testing Using Automated Tools.
5.5, 5.6 Metrics and Measurement: Types of Metrics, Product Metrics and Process Metrics, Object-oriented metrics in testing.
Mrs. Shubhangi Chintawar
Lecturer
VESP
Software Testing (22518) CO5I

Manual Testing
Manual testing is a software testing process in which test cases are executed manually, without using any automated tool. The tester executes all test cases from the end user's perspective. It checks whether the application works as specified in the requirement document. Test cases are planned and implemented to cover almost 100 percent of the software application. Test case reports are also generated manually.
Manual testing is one of the most fundamental testing processes, as it can find both visible and hidden defects in the software.
A defect is the difference between the expected output and the actual output produced by the software. The developer fixes the defects and hands the software back to the tester for retesting.
Manual testing is mandatory for every newly developed software product before automated testing. It requires great effort and time, but it gives confidence that the software is largely free of bugs.
Manual testing requires knowledge of manual testing techniques, but not of any automated testing tool.

Need of Manual Testing


If the test engineer performs manual testing, he or she can test the application from an end user's perspective and become more familiar with the product, which helps in writing correct test cases for the application and giving quick feedback on it.

Types of Manual Testing


There are various methods used for manual testing. Each technique is used according to its
testing criteria. Types of manual testing are given below: White Box Testing, Black Box Testing

How to perform Manual Testing


First, the tester studies all documents related to the software to select the areas to be tested.
The tester analyses the requirement documents to cover all requirements stated by the customer.
The tester develops test cases according to the requirement document.
All test cases are executed manually using black box testing and white box testing.
If bugs occur, the testing team informs the development team.
The development team fixes the bugs and hands the software back to the testing team for a retest.

Advantages of Manual Testing


It does not require programming knowledge when using the black box method.
It can be used to test dynamically changing GUI designs.
Testers interact with the software as real users, so they can discover usability and user interface issues.
It helps ensure that the software is largely free of bugs.
It is cost-effective.
It is easy for new testers to learn.


Disadvantages of Manual Testing


It requires a large number of human resources.
It is very time-consuming.
Testers develop test cases based on their skills and experience; there is no guarantee that all functions have been covered.
Test cases cannot be reused; separate test cases need to be developed for each new software product.
It does not cover all aspects of testing.
Since two teams work together, they sometimes find it difficult to understand each other's intent, which can mislead the process.

Manual testing is an activity where the tester needs to be very patient, creative and open-minded. Manual testing is a vital part of user-friendly software development because humans are involved in testing software applications, and end users are also humans. Testers need to think and act from an end user's perspective. Testing can be extremely challenging, and covering the possible use cases of an application with a minimum number of test cases requires high analytical skills.

Automation Testing
Automation testing is a Software testing technique to test and compare the actual outcome with
the expected outcome. This can be achieved by writing test scripts or using any automation
testing tool. Test automation is used to automate repetitive tasks and other testing tasks which
are difficult to perform manually.
Manual testing is performed by a human sitting in front of a computer, carefully executing the test steps. Successive development cycles require repeated execution of the same test suite. Using a test automation tool, it is possible to record this test suite and replay it as required. Once the test suite is automated, no human intervention is required. The goal of automation is to reduce the number of test cases to be run manually, not to eliminate manual testing altogether.
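The "compare the actual outcome with the expected outcome" idea can be sketched in a few lines of Python. The function under test (`add`) and the test data are hypothetical stand-ins for a real application:

```python
# A minimal automated check: run the unit under test for each input and
# compare the actual outcome with the expected outcome.
# The function under test (add) and the test data are hypothetical examples.

def add(a, b):
    return a + b

def run_tests(cases):
    """Execute each ((inputs), expected) pair and record pass/fail."""
    results = []
    for (a, b), expected in cases:
        actual = add(a, b)
        results.append(((a, b), expected, actual, actual == expected))
    return results

cases = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
for args, expected, actual, passed in run_tests(cases):
    print(f"add{args} = {actual}, expected {expected}: {'PASS' if passed else 'FAIL'}")
```

In a real automation tool the loop would be recorded once and replayed for every build, which is exactly what makes repeated regression runs cheap.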

Test Automation
Test automation is the best way to increase the effectiveness, test coverage, and execution speed in software testing. Manual testing of all workflows, all fields, and all negative scenarios consumes time and money, and it is difficult to test multilingual sites manually.
Test automation does not require human intervention; automated tests can run unattended (overnight). Test automation increases the speed of test execution and helps increase test coverage. Manual testing can become boring and hence error-prone.

Which Test Cases to Automate?


Test cases to be automated can be selected using the following criteria:
High Risk - Business Critical test cases.
Test cases that are repeatedly executed.
Test Cases that are very tedious or difficult to perform manually.
Test Cases which are time-consuming.


The following category of test cases are not suitable for automation:
Test Cases that are newly designed and not executed manually at least once
Test Cases for which the requirements are frequently changing
Test cases which are executed on an ad-hoc basis.

Process of Automation Testing

Test Automation Feasibility Analysis −


The first step is to check whether the application can be automated. Not every application can be automated, owing to its technology and design limitations.

Appropriate Tool Selection −


The next most important step is the selection of tools. It depends on the technology in which
the application is built, its features and usage.

Evaluate the suitable framework −


Upon selecting the tool, the next activity is to select a suitable framework.
There are various kinds of frameworks and each framework has its own significance.

Build Proof of Concept −


Proof of Concept (POC) is developed with an end-to-end scenario to evaluate if the tool can
support the automation of the application.
It is performed with an end-to-end scenario, which ensures that the major functionalities can be
automated.

Develop Automation Framework −


After building the POC, framework development is carried out, which is a crucial step for the
success of any test automation project.
The framework should be built after diligent analysis of the technology used by the application and also its key features.


Develop Test Script, Execute, and Analyze −


Once Script development is completed, the scripts are executed, results are analyzed and defects
are logged, if any.
The Test Scripts are usually version controlled.

What to Automate
• Repetitive tests that run for multiple builds.
• Tests that tend to cause human error.
• Tests that require multiple data sets.
• Frequently used functionality that introduces high risk conditions.
• Tests that are impossible to perform manually.
• Tests that run on several different hardware or software platforms and configurations.
• Tests that take a lot of effort and time when performed manually.

Advantages of Automated Testing:


• Automated testing improves the coverage of testing, as automated execution of test cases
is faster than manual execution.
• Automated testing reduces the dependence of testing on the availability of the test
engineers.
• Automated testing provides round-the-clock coverage, as automated tests can be run at
any time in a 24x7 environment.
• Automated testing takes far fewer resources in execution as compared to manual testing.
• It helps in testing that is not possible without automation, such as reliability testing,
stress testing, and load and performance testing.
• It includes all other activities like selecting the right product build, generating the right
test data, and analyzing the results.
• It acts as a test data generator and produces maximum test data to cover a large number
of inputs and expected outputs for result comparison.
• Automated testing has fewer chances of error and is hence more reliable.
• With automated testing, test engineers have free time and can focus on other creative
tasks.

Disadvantages of Automated Testing:


• Automated testing is much more expensive than manual testing.
• It can also be inconvenient and burdensome to decide who will do the automation and
who will be trained.
• Its use is limited to some organizations, as many organizations do not prefer test
automation.
• Automated testing requires additionally trained and skilled people.
• Automated testing only removes the mechanical execution of the testing process; the
creation of test cases still requires testing professionals.


Benefits of Automation Testing


1. 70% faster than manual testing
2. Wider test coverage of application features
3. Reliable in results
4. Ensure Consistency
5. Saves Time and Cost
6. Improves accuracy
7. Human Intervention is not required while execution
8. Increases Efficiency
9. Reusable test scripts
10. Test Frequently and thoroughly
11. More cycle of execution can be achieved through automation
12. Early time to market

Advantages of using testing tools:


1. Speed.
Automation tools exercise the software under test much faster than a human tester can.
2. Efficiency.
While testers are busy running test cases manually, they cannot do anything else.
If the tester has a test tool that reduces the time it takes for him to run his tests, he has more
time for test planning and thinking up new tests.
3. Accuracy and Precision.
After trying a few hundred cases, tester’s attention span will wane and he may start to make
mistakes. A test tool will perform the same test and check the results perfectly, each and every
time.
4. Resource Reduction.
Impossible to perform a certain test case. The number of people or the amount of equipment
required to create the test
condition could be prohibitive. A test tool can be used to simulate the real world and greatly
reduce the physical resources necessary to perform the testing.
5. Simulation and Emulation.
Test tools are often used to replace hardware or software that would normally interface to your
product. This "fake" device or application can then be used to drive or respond to your software
in ways that you choose and ways that might otherwise be difficult to achieve.
6. Relentlessness.
Test tools and automation never tire or give up; they can keep going on and on without any problem, whereas a human tester gets tired of testing again and again.

Disadvantages of using testing tools:


• It is more expensive to automate; initial investments are bigger than for manual testing, even though manual tests themselves can be very time-consuming.
• You cannot automate everything; some tests still have to be done manually.


• Testing tools cannot always be relied upon.

Selecting a tool:
1. Free tools are not well supported and get phased out soon.
2. Developing in-house tools takes time.
3. Test tools sold by vendors are expensive.
4. Test tools require strong training.
5. Test tools generally do not meet all the requirements for automation.
6. Not all test tools run on all platforms.

Criteria for Selecting Test Tools:


1. Meeting requirements;
2. Technology expectations;
3. Training/skills;
4. Management aspects.

1. Meeting requirements-
There are plenty of tools available in the market but rarely do they meet all the requirements of
a given product or a given organization. Evaluating different tools for different requirements
involve significant effort, money, and time. Given of the too much of choice available, huge delay
is involved in selecting and implementing test tools.

2. Technology expectations-
Test tools in general may not allow test developers to extend or modify the functionality of the framework. So, extending the functionality requires going back to the tool vendor and involves additional cost and effort. A good number of test tools require their libraries to be linked with product binaries.

3. Training/skills-
While test tools require plenty of training, very few vendors provide training to the required level. Organization-level training is needed to deploy the test tools, as the users of the test suite include not only the test team but also the development team and other groups such as configuration management.

4. Management aspects-
A test tool increases the system requirements and may require the hardware and software to be upgraded. This increases the cost of the already-expensive test tool.


What is a Test Framework?


A testing framework is a set of guidelines or rules used for creating and designing test cases.
A framework is comprised of a combination of practices and tools that are designed to help QA
professionals test more efficiently.
These guidelines could include coding standards, test-data handling methods, object
repositories, processes for storing test results, or information on how to access external
resources.
Benefits of a Test Automation Framework
Utilizing a framework for automated testing will increase a team’s test speed and efficiency,
improve test accuracy, and will reduce test maintenance costs as well as lower risks.
They are essential to an efficient automated testing process for a few key reasons:
Improved test efficiency
Lower maintenance costs
Minimal manual intervention
Maximum test coverage
Reusability of code
Types of Automated Testing Frameworks
1. Modular Based Testing Framework
2. Data-Driven Framework
3. Keyword-Driven Framework
4. Hybrid Testing Framework

Modular Based Testing Framework


Implementing a modular framework will require testers to divide the application under test into
separate units, functions, or sections, each of which will be tested in isolation. After breaking
down the application into individual modules, a test script is created for each part and then
combined to build larger tests in a hierarchical fashion. These larger sets of tests will begin to
represent various test cases.
A key strategy in using the modular framework is to build an abstraction layer, so that any
changes made in individual sections won’t affect the overarching module.
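The modular idea can be sketched roughly as follows. The module names and the trivially passing test bodies are hypothetical placeholders; a real script would exercise the actual application sections:

```python
# Sketch of a modular framework: each section of the application under test
# gets its own test script, and larger test cases are built by combining them.
# Module names and the trivially passing bodies are hypothetical placeholders.

def test_login_module():
    # ...exercise the login section in isolation...
    return True

def test_search_module():
    # ...exercise the search section in isolation...
    return True

def test_checkout_module():
    # ...exercise the checkout section in isolation...
    return True

def purchase_flow_test():
    """A larger test case composed hierarchically from module-level scripts."""
    return all([test_login_module(), test_search_module(), test_checkout_module()])

print("purchase flow:", "PASS" if purchase_flow_test() else "FAIL")
```

The composition function acts as the abstraction layer: a change inside one module-level script does not affect how the larger test is assembled.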

Data-Driven Framework
Using a data-driven framework separates the test data from script logic, meaning testers can
store data externally. Very frequently, testers find themselves in a situation where they need to
test the same feature or function of an application multiple times with different sets of data. In
these instances, it’s critical that the test data not be hard-coded in the script itself, which is what
happens with a Linear or Modular-based testing framework.
Setting up a data-driven test framework will allow the tester to store and pass the input/output parameters to test scripts from an external data source, such as Excel spreadsheets, text files, CSV files, SQL tables, or ODBC repositories.
The test scripts are connected to the external data source and told to read and populate the
necessary data when needed.
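A minimal data-driven sketch: the script logic stays fixed while the input/expected pairs come from CSV data. The `discount_price` function and the data values are hypothetical, and an in-memory `StringIO` stands in for an external CSV file:

```python
# Data-driven test sketch: logic is fixed, data comes from an external source.
# discount_price and the data rows are hypothetical; StringIO stands in for a CSV file.
import csv
import io

def discount_price(price, percent):
    return round(price * (1 - percent / 100), 2)

# Stand-in for an external CSV data source with columns price, percent, expected.
csv_data = io.StringIO("price,percent,expected\n100,10,90.0\n250,20,200.0\n")

def run_data_driven(source):
    """Run the same check once per data row; return the failing rows."""
    failures = []
    for row in csv.DictReader(source):
        actual = discount_price(float(row["price"]), float(row["percent"]))
        if actual != float(row["expected"]):
            failures.append(row)
    return failures

print("failures:", run_data_driven(csv_data))
```

Adding a new test case is then a matter of adding a row to the data source, with no change to the script itself.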


Keyword-Driven Framework
In a keyword-driven framework, each function of the application under test is laid out in a table
with a series of instructions in consecutive order for each test that needs to be run. In a similar
fashion to the data-driven framework, the test data and script logic are separated in a keyword-
driven framework, but this approach takes it a step further.
With this approach, keywords are also stored in an external data table (hence the name), making
them independent from the automated testing tool being used to execute the tests. Keywords
are the part of a script representing the various actions being performed to test the GUI of an
application. These can be labeled as simply as ‘click,’ or ‘login,’ or with complex labels like
‘clicklink,’ or ‘verifylink.’
In the table, keywords are stored in a step-by-step fashion with an associated object, or the part
of the UI that the action is being performed on. For this approach to work properly, a shared
object repository is needed to map the objects to their associated actions.
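A minimal keyword-driven sketch: a table of (keyword, object, value) steps and a small interpreter mapping keywords to actions. All UI object names and actions here are hypothetical, and a real framework would drive an actual GUI instead of logging strings:

```python
# Keyword-driven test sketch: test steps live in a table, and a dispatcher
# maps each keyword to an action. UI objects and actions are hypothetical;
# a real tool would drive the GUI instead of appending to a log.

actions_log = []

def click(target, _value=None):
    actions_log.append(f"click {target}")

def typetext(target, value):
    actions_log.append(f"type '{value}' into {target}")

def verify(target, value):
    actions_log.append(f"verify {target} == {value}")

# The shared mapping from keywords to actions (the "object repository" would
# additionally map names like 'login_button' to real UI elements).
KEYWORDS = {"click": click, "typetext": typetext, "verify": verify}

# The external keyword table: step-by-step (keyword, object, value) rows.
test_table = [
    ("typetext", "username_box", "alice"),
    ("typetext", "password_box", "secret"),
    ("click", "login_button", None),
    ("verify", "welcome_label", "Welcome, alice"),
]

for keyword, target, value in test_table:
    KEYWORDS[keyword](target, value)

print("\n".join(actions_log))
```

Because the table is plain data, it can live outside the automation tool, which is what makes the keywords independent of the tool executing them.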

Hybrid Test Automation Framework


Hybrid framework is a combination of any of the previously mentioned frameworks set up to
leverage the advantages of some and mitigate the weaknesses of others.
Every application is different, and so the processes used to test them should be too. As more teams move to an agile model, setting up a flexible framework for automated testing is crucial.
A hybrid framework can be more easily adapted to get the best test results.

Metrics and measurement:


A Metric is a measurement of the degree that any attribute belongs to a system, product or
process. For example, the number of errors per person hours would be a metric. Thus, software
measurement gives rise to software metrics.
A measurement is an indication of the size, quantity, amount or dimension of a particular
attribute of a product or process. For example, the number of errors in a system is a
measurement. A Metric is a quantitative measure of the degree to which a system, system
component, or process possesses a given attribute. Metrics can be defined as “STANDARDS OF
MEASUREMENT”. Software Metrics are used to measure the quality of the project. Metric is a
scale for measurement. Suppose, in general, “Kilogram” is a metric for measuring the attribute
“Weight”. Similarly, in software, consider “How many issues are found in a thousand lines of code?”: here, the number of issues is one measurement and the number of lines of code is another. The metric is derived from these two measurements.
Test metrics example:
How many defects exist within the module?
How many test cases are executed per person?
What is Test coverage %?
Software Test Measurement
Measurement is the quantitative indication of extent, amount, dimension, capacity, or size of
some attribute of a product or process.


Need of Software measurement:


1. To establish the quality of the current product or process.
2. To predict future qualities of the product or process.
3. To improve the quality of a product or process.
4. To determine the state of the project in relation to budget and schedule.

Collecting and analyzing metrics involves effort and several steps.

Step 1:
The first step involved in a metrics program is to decide what measurements are important and
collect data accordingly. The effort spent on testing, number of defects, and number of test cases,
are some examples of measurements.
Depending on what the data is used for, the granularity of measurement will vary.
For testing functions, we would obviously be interested in the effort spent on testing, number of
test cases, number of defects reported from test cases, and so on.


If there are too many overheads in making the measurements or if the measurements do not
follow naturally from the actual work being done, then the people who supply the data may resist
giving the measurement data (or even give wrong data).

While deciding what to measure, the following aspects need to be kept in mind.
1. What is measured should be of relevance to what we are trying to achieve.
2. The entities measured should be natural and should not involve too many overheads for
measurements.
3. What is measured should be at the right level of granularity to satisfy the objective for
which the measurement is being made.
The different people who use the measurements may want to make inferences on different
dimensions. The level of granularity of data obtained depends on the level of detail required by
a specific audience. Hence the measurements—and the metrics derived from them—will have
to be at different levels for different people. An approach involved in getting the granular detail
is called data drilling.

Step 2:
The second step involved in metrics collection is defining how to combine data points or measurements to provide meaningful metrics. A particular metric can use one or more measurements.

Step 3:
The third step in the metrics program—deciding the operational requirement for
measurements. The operational requirement for a metrics plan should lay down not only the
periodicity but also other operational issues such as who should collect measurements, who
should receive the analysis, and so on.
This step helps to decide on the appropriate periodicity for the measurements as well as
assign operational responsibility for collecting, recording, and reporting the measurements
and dissemination of the metrics information. Some measurements need to be made on a
daily basis.

Step 4:
The fourth step involved in a metrics program is to analyze the metrics to identify both positive areas and improvement areas of product quality. Often, only the improvement aspects pointed to by the metrics are analyzed and focused on; it is important to also highlight and sustain the positive areas of the product. This will ensure that the best practices get institutionalized and also motivate the team better.

Step 5:
The final step involved in a metrics plan is to take necessary action and follow up on the
action. The purpose of a metrics program will be defeated if the action items are not followed
through to completion.


METRICS IN TESTING
Since testing is the last phase before product release, it is essential to measure the progress
of testing and product quality. Tracking test progress and product quality can give a good idea
about the release—whether it will be met on time with known quality. Measuring and
producing metrics to determine the progress of testing is very important.
To judge the remaining days needed for testing, two data points are needed—remaining test
cases yet to be executed and how many test cases can be executed per elapsed day.
The test cases that can be executed per person day are calculated based on a measure called
test case execution productivity. This productivity number is derived from the previous test
cycles. Thus, metrics are needed to know test case execution productivity and to estimate
test completion date.
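The estimate described above can be sketched as a small calculation. The numbers and the simple productivity model are illustrative assumptions, not taken from the text:

```python
# Sketch of the test completion estimate: remaining days are derived from
# the remaining test cases and the test case execution productivity
# (test cases per person-day, taken from previous cycles). Numbers are illustrative.
import math

def days_to_complete(remaining_cases, cases_per_person_day, testers):
    """Whole elapsed days needed to execute the remaining test cases."""
    executed_per_day = cases_per_person_day * testers
    return math.ceil(remaining_cases / executed_per_day)

print(days_to_complete(240, 8, 3))  # 10 elapsed days
```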

The number of days needed to fix all outstanding defects is another crucial data point.
The number of days needed for defects fixes needs to take into account the “outstanding
defects waiting to be fixed” and a projection of “how many more defects that will be
unearthed from testing in future cycles.” Hence, metrics helps in predicting the number of
defects that can be found in future test cycles.

The defect-fixing trend collected over a period of time gives another estimate of the defect-
fixing capability of the team. Combining defect prediction with defect-fixing capability
produces an estimate of the days needed for the release. Hence, metrics helps in estimating
the total days needed for fixing defects. Once the time needed for testing and the time for
defect fixing are known, the release date can be estimated. Although testing and defect fixing are activities that can be executed simultaneously, some defect fixes may arrive after the regular test cycles are completed. These defect fixes will have to be verified by regression testing before the product can be released.

Metrics are not only used for reactive activities. Metrics and their analysis help in preventing
the defects proactively, thereby saving cost and effort. Metrics help in identifying these
opportunities. For example, if there is a type of defect (say, coding defects) that is reported
in large numbers, it is advisable to perform a code review and prevent those defects, rather
than finding them one by one and fixing them in the code.
Metrics can be classified as


Product metrics and Process metrics.

Product metrics: these have more meaning from the perspective of the software product being developed, e.g., the quality of the developed product.
Process metrics: these can be used to improve the process efficiency of the SDLC (Software Development Life Cycle).

Product Metrics: -
Project metrics
Progress metrics
Productivity Metrics

Project metrics: These can be used to measure the efficiency of a project team or of any testing tools being used by the team members. Project metrics describe the project characteristics and execution process.
Number of software developer
Staffing pattern over the life cycle of software
Cost and schedule
Productivity

Effort Variance: The difference between the planned effort and the effort actually required to undertake the task is called effort variance.
Effort variance = [(Actual Effort – Planned Effort) / Planned Effort] x 100.


Schedule Variance: Any difference between the scheduled completion of an activity and the
actual completion is known as Schedule Variance.
Schedule variance = [(Actual calendar days – Planned calendar days) / Planned calendar days] x 100.

Size Variance: Difference between the estimated size of the project and the actual size of the
project (normally in KLOC or FP).
Size variance = [(Actual size – Estimated size) / Estimated size] x 100.

Cost Variance (CV): The difference between the estimated cost of the project and the actual cost of the project. This metric is represented as a percentage.
Cost variance = [(Actual cost – Estimated cost) / Estimated cost] x 100.
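All four variance metrics above share the same shape, (actual – planned) / planned x 100, so one small helper covers them all. The sample values below are made up:

```python
# One helper for the four variance metrics above:
# (actual - planned) * 100 / planned, expressed as a percentage.
# All sample values are made-up illustrations.

def variance_percent(actual, planned):
    return (actual - planned) * 100 / planned

effort_var   = variance_percent(actual=120, planned=100)      # person-days
schedule_var = variance_percent(actual=55, planned=50)        # calendar days
size_var     = variance_percent(actual=12.5, planned=10)      # KLOC
cost_var     = variance_percent(actual=90000, planned=100000) # currency units

print(effort_var, schedule_var, size_var, cost_var)  # 20.0 10.0 25.0 -10.0
```

A positive value means the project overshot the plan; a negative value (as in the cost example) means it came in under the estimate.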

Progress Metrics
Automation progress refers to the number of tests that have been automated as a percentage
of all automatable test cases.
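As a one-line sketch of the definition above (the sample counts are hypothetical):

```python
# Automation progress: automated tests as a percentage of all automatable
# test cases. The counts are made-up illustrations.
def automation_progress(automated, automatable):
    return automated * 100 / automatable

print(automation_progress(150, 200))  # 75.0
```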
Any project needs to be tracked from two angles as given below:
1. How the project is doing with respect to effort and schedule.
2. How well the product is meeting the quality requirements for the release.
One of the main objectives of testing is to find as many defects as possible before any customer finds them. The number of defects found in the product is one of the main indicators of quality.
Defect metrics are further classified into test defect metrics, which help the testing team in the analysis of product quality and testing, and development defect metrics, which help the development team in the analysis of development activities.
How many defects have already been found and how many more defects may get discovered are the two parameters that determine product quality and its assessment.

Progress metrics

Test defect metrics:
1. Defect find rate
2. Defect fix rate
3. Outstanding defect rate
4. Priority outstanding rate
5. Defect trend
6. Defect classification trend
7. Weighted defect trend
8. Defect cause distribution

Development defect metrics:
1. Component-wise defect distribution
2. Defect density and defect removal rate
3. Age analysis of outstanding defects
4. Introduced and reopened defect trends


Test Defect Metrics


The test progress metrics discussed in the previous section capture the progress of defects
found with time. The next set of metrics help us understand
how the defects that are found can be used to improve testing and product quality.
Some organizations classify defects by assigning a defect priority (for example, P1, P2, P3, and
so on). The priority of a defect provides a management perspective for the order of defect
fixes.

Some organizations use defect severity levels (for example, S1, S2, S3, and so on). The severity
of defects provides the test team a perspective of the impact of that defect in product
functionality.

The priority of a defect can change dynamically once assigned. Severity is absolute and does not change often, as it reflects the state and quality of the product. Some organizations use a combination of priority and severity to classify defects.

Since different organizations use different methods of defining priorities and severities, a common set of defect definitions and classifications, covering both priority and severity levels, is provided in the table below.


Defect classification and what it means:

Extreme - Product crashes or is unusable. Needs to be fixed immediately.
Critical - Basic functionality of the product is not working. Needs to be fixed before the next test cycle starts.
Important - Extended functionality of the product is not working. Does not affect the progress of testing. Fix it before the release.
Minor - Product behaves differently. No impact on the test team or customers. Fix it when time permits.
Cosmetic - Minor irritant. Need not be fixed for this release.

Defect find rate


The idea of testing is to find as many defects as possible early in the cycle. However, this may
not be possible for two reasons.
First, not all features of a product may become available early; because of scheduling of
resources, the features of a product arrive in a particular sequence.
Second, some of the test cases may be blocked because of some show stopper defects.
Once a majority of the modules become available and the defects that are blocking the tests
are fixed, the defect arrival rate increases.
After a certain period of defect fixing and testing, the arrival of defects tends to slow down
and a continuation of that trend enables product release.

Defect fix rate


If the goal of testing is to find defects as early as possible, it is natural to expect that the goal of development should be to fix defects as soon as they arrive. The defect fixing rate should ideally match the defect arrival rate. If more defects are fixed later in the cycle, they may not get tested properly for all possible side-effects, and defect fixes themselves can introduce new defects. Hence, it is a good idea to fix the defects early and test those defect fixes thoroughly to find all introduced defects. If this principle is not followed, defects introduced by the defect fixes may come up for testing just before the release and end up in the surfacing of new defects or the resurfacing of old defects. This may delay the product release.

Outstanding defects rate


The number of defects outstanding in the product is calculated by subtracting the total defects
fixed from the total defects found in the product.
When testing is in progress, the outstanding defects should be kept very close to zero so that the
development team's bandwidth is available to analyze and fix the issues soon after they arrive.
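The calculation above is simple subtraction; a small sketch with made-up weekly cumulative counts:

```python
# Outstanding defects = total defects found - total defects fixed.
# The cumulative weekly counts below are made-up illustrations.
def outstanding(total_found, total_fixed):
    return total_found - total_fixed

weekly_found = [10, 25, 45, 60]   # cumulative defects found, week by week
weekly_fixed = [6, 20, 42, 59]    # cumulative defects fixed, week by week
print([outstanding(f, x) for f, x in zip(weekly_found, weekly_fixed)])  # [4, 5, 3, 1]
```

Keeping this number close to zero week after week is the signal the text describes: the development team is absorbing defects as fast as testing finds them.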


Priority outstanding rate


Not all defects are equal in impact or severity.
Sometimes the defects that are coming out of testing may be very critical and may take enormous
effort to fix and to test. Hence, it is important to look at how many serious issues are being
uncovered in the product. The modification to the outstanding defects rate curve by plotting only
the high-priority defects and filtering out the low-priority defects is called priority outstanding
defects. This is an important method because closer to product release, the product team would
not want to fix the low-priority defects lest the fixes should cause undesirable side-effects.
Normally only high-priority defects are tracked during the period closer to release. The priority
outstanding defects correspond to extreme and critical classification of defects. Some
organizations include important defects also in priority outstanding defects.
Some high-priority defects may require a change in design or architecture. If they are found late
in the cycle, the release may get delayed to address the defect. But if a low-priority defect found
is close to the release date and it requires a design change, a likely decision of the management
would be not to fix that defect.

Defect trend
The effectiveness of analysis increases when several perspectives of find rate, fix rate,
outstanding, and priority outstanding defects are combined.
Having discussed individual measures of defects, it is time for the trend chart to consolidate all
of the above into one chart.

Defect classification trend


Defects are classified as extreme, critical, important, minor, and cosmetic. When talking about
the total number of outstanding defects, some of the questions that can be asked are
How many of them are extreme defects?
How many are critical?
How many are important?
These questions require the charts to be plotted separately based on defect classification. The
sum of extreme, critical, important, minor, and cosmetic defects is equal to the total number of
defects.


Weighted defects trend


In the simple defect counts discussed so far, all defects are counted on par; for example, both
a critical defect and a cosmetic defect are treated equally and counted as one defect. Counting
the defects the same way takes away the seriousness of extreme or critical defects. To solve this
problem, a metric called weighted defects is introduced. This concept helps in quick analysis of
defects without worrying about their classification. In this approach, not all the
defects are counted the same way. More serious defects are given a higher weightage than less
serious ones. For example, the total weighted defect count can be arrived at using a formula like
the one given below.
Weighted defects = (Extreme * 5) + (Critical * 4) + (Important * 3) + (Minor * 2) + (Cosmetic * 1)
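The formula above can be sketched as a small helper function; the defect counts used below are made-up illustrative values, not from any real project:

```python
# Weights per the formula above: Extreme=5, Critical=4, Important=3, Minor=2, Cosmetic=1
WEIGHTS = {"extreme": 5, "critical": 4, "important": 3, "minor": 2, "cosmetic": 1}

def weighted_defects(counts):
    """Return the weighted defect total for a dict of {classification: count}."""
    return sum(WEIGHTS[cls] * n for cls, n in counts.items())

# Illustrative counts only
counts = {"extreme": 2, "critical": 3, "important": 5, "minor": 8, "cosmetic": 10}
print(weighted_defects(counts))  # 2*5 + 3*4 + 5*3 + 8*2 + 10*1 = 63
```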

Defect cause distribution


All the metrics discussed above help in analyzing defects and their impact. The next logical
questions that would arise are
Why are those defects occurring and what are the root causes?
What areas must be focused on to get more defects out of testing?
Finding the root causes of the defects helps in identifying more defects and sometimes even in
preventing them. For example, if root cause analysis of defects suggests that code-level
issues are causing the maximum defects, then the emphasis may have to be on white box
testing approaches and code reviews to identify and prevent more defects.

Development Defect Metrics


The defect metrics that directly help in improving development activities are discussed in this
section and are termed as development defect metrics.
While defect metrics focus on the number of defects, development defect metrics try
to map those defects to different components of the product and to some of the parameters of
development such as lines of code.

Component-wise defect distribution


Once defects are found, it is important to map them to different components of the product so that they
can be assigned to the appropriate developer to fix those defects. The project manager in charge
of development maintains a module ownership list where all product modules and owners are
listed. Based on the number of defects existing in each of the modules, the effort needed to fix
them, and the availability of skill sets for each of the modules, the project manager assigns
resources accordingly.

If there is an independent component which is producing a large number of defects, and if all
other components are stable, then the scope of the release can be reduced to remove the
component producing the defects and release other stable components thereby meeting the
release date and release quality, provided the functionality provided by that component is not
critical to the release.


Defect density and defect removal rate


The lifetime of the product depends on its quality, over the different releases it goes through.
For a given release, reasonable measures of the quality of the product are the number of defects
found in testing and the number of defects found after the product is released. When the trend
of these metrics is traced over different releases, it gives an idea of how well the product is
improving (or not) with the releases. The objective of collecting and analyzing this kind of data is
to improve the product quality release by release. The expectations of customers only go up with
time and this metric is thus very important.

One of the metrics that correlates source code and defects is defect density. This metric maps
the defects in the product with the volume of code that is produced for the product. There are
several standard formulae for calculating defect density. Of these, defects per KLOC is the most
practical and easy metric to calculate and plot. KLOC stands for kilo lines of code. Every 1000 lines
of executable statements in the product is counted as one KLOC.
Defects per KLOC = (Total defects found in the product)/(Total executable lines of code in KLOC)

The defect removal rate (or percentage) is used to measure how many defects are removed by verification activities and unit testing before the product reaches the test teams.
The formula for calculating the defect removal rate is
(Defects found by verification activities + Defects found in unit testing)/(Defects found by test
teams)* 100
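As a sketch, both metrics can be computed directly from the formulas above; the counts used here are made-up illustrative values:

```python
def defects_per_kloc(total_defects, executable_loc):
    """Defect density: defects per 1000 executable lines of code (KLOC)."""
    return total_defects / (executable_loc / 1000)

def defect_removal_rate(verification_defects, unit_test_defects, test_team_defects):
    """Percentage of defects removed before the product reaches the test teams."""
    return (verification_defects + unit_test_defects) / test_team_defects * 100

# Illustrative: 50 defects in 25,000 executable lines -> 2.0 defects/KLOC
print(defects_per_kloc(50, 25000))       # 2.0
# Illustrative: 30 + 20 defects found early vs 100 found by test teams -> 50%
print(defect_removal_rate(30, 20, 100))  # 50.0
```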

Age analysis of outstanding defects


Age here refers to the length of time a defect has been waiting to be fixed. Some defects
that are difficult to fix or require significant effort may get postponed for a longer duration.
Hence, the age of a defect in a way represents the complexity of the defect fix needed. Given the
complexity and time involved in fixing those defects, they need to be tracked closely; else they
may get postponed until close to release, which may even delay the release. A method to track such
defects is called age analysis of outstanding defects.

To perform this analysis, the time duration from the filing of outstanding defects to the current
period is calculated and plotted every week for each criticality of defects in stacked area graph.
This graph is useful in finding out whether the defects are fixed as soon as they arrive and to
ensure that long pending defects are given adequate priority. The defect fixing rate discussed
earlier talks only about numbers, but age analysis talks about their age. The purpose of this metric
and the corresponding chart is to identify those defects—especially the high-priority ones—that
are waiting for a long time to be fixed.


Introduced and reopened defects trend


When adding new code or modifying the code to provide a defect fix, something that was working
earlier may stop working. This is called an introduced defect. These defects are those injected into
the code while fixing defects or while trying to provide an enhancement to the product.
This means that those defects were not in the code before and the corresponding functionalities
were working fine. Sometimes, a fix that is provided in the code may not have fixed the
problem completely, or some other modification may have caused a previously fixed defect to
reappear. This is called a reopened defect. Hence, reopened defects are defects for which the
fixes provided do not work properly, or previously fixed defects that reappear. A
reopened defect can also occur if the defect fix was provided without understanding all the
reasons that produced the defect in the first place. This is called a partial fix of a defect.

PRODUCTIVITY METRICS
Productivity metrics combine several measurements and parameters with effort spent on the
product. They help in finding out the capability of the team and serve other purposes, such as:
Estimating for the new release.
Finding out how well the team is progressing and understanding the reasons for (both positive and
negative) variations in results.
Estimating the number of defects that can be found.
Estimating the release date and quality.
Estimating the cost involved in the release.

Defects per 100 Hours of Testing


The metric defects per 100 hours of testing addresses the estimation of defects that can be found;
it normalizes the number of defects found in the product with respect to the effort spent.
Defects per 100 hours of testing =
(Total defects found in the product for a period/Total hours spent to get those defects) * 100
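As a sketch of the formula (the figures are made-up illustrative values):

```python
def defects_per_100_hours(total_defects, hours_spent):
    """Normalize defects found against testing effort, per 100 hours of testing."""
    return total_defects / hours_spent * 100

# Illustrative: 45 defects found over 300 hours of testing
print(defects_per_100_hours(45, 300))  # 15.0 defects per 100 hours
```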

Test Cases Executed per 100 Hours of Testing


The number of test cases executed by the test team for a particular duration depends on team
productivity and quality of product.
The team productivity has to be calculated accurately so that it can be tracked for the current
release and be used to estimate the next release of the product.


Test cases executed per 100 hours of testing =
(Total test cases executed for a period/Total hours spent in test execution) * 100

Test Cases Developed per 100 Hours of Testing


Both manual execution of test cases and automating test cases require estimating and tracking
of productivity numbers.
In a product scenario, not all test cases are written afresh for every release.
New test cases are added to address new functionality and for testing features that were not
tested earlier.

Existing test cases are modified to reflect changes in the product.

Some test cases are deleted if they are no longer useful or if corresponding features are removed
from the product.
Hence the formula for test cases developed uses the count corresponding to added/modified
and deleted test cases.
Test cases developed per 100 hours of testing =
(Total test cases developed for a period/Total hours spent in test case development) * 100

Defects per 100 Test Cases


Since the goal of testing is to find out as many defects as possible, it is appropriate to measure
the “defect yield” of tests, that is, how many defects get uncovered during testing.
This is a function of two parameters:
The effectiveness of the tests in uncovering defects
The effectiveness of choosing tests that are capable of uncovering defects.
The ability of a test case to uncover defects depends on how well the test cases are designed
and developed.
Defects per 100 test cases = (Total defects found for a period/Total test cases executed for the
same period) * 100

Defects per 100 Failed Test Cases


Defects per 100 failed test cases is a good measure to find out how granular the test cases are. It
indicates:
how many test cases need to be executed when a defect is fixed;
what defects need to be fixed so that an acceptable number of test cases reach the pass state; and
how the fail rate of test cases and defects affect each other for release readiness analysis.
Defects per 100 failed test cases = (Total defects found for a period/Total test cases failed due to
those defects) * 100
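A minimal sketch of both per-100 metrics, using made-up counts:

```python
def per_100_executed(defects, test_cases_executed):
    """Defects per 100 test cases executed (defect yield of testing)."""
    return defects / test_cases_executed * 100

def per_100_failed(defects, test_cases_failed):
    """Defects per 100 failed test cases (granularity of the test cases)."""
    return defects / test_cases_failed * 100

# Illustrative: 30 defects from 500 executed test cases, 60 of which failed
print(per_100_executed(30, 500))  # 6.0
print(per_100_failed(30, 60))     # 50.0
```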


Closed Defect Distribution


The testing team also has the objective to ensure that all defects found through testing are fixed
so that the customer gets the benefit of testing and the product quality improves.
To ensure that most of the defects are fixed, the testing team has to track the defects and analyze
how they are closed.
The closed defect distribution helps in this analysis.

Process Metrics
These software test metrics are used in the test preparation and test execution phases of the STLC (Software Testing Life Cycle).
1. Test case preparation productivity
Test case preparation productivity = No. of test cases / Effort spent for test case preparation
e.g. No. of test cases = 240
Effort spent for test case preparation (in hours) = 10
Test case preparation productivity = 240/10 = 24 test cases/hr

2. Test design Coverage


It helps to measure the percentage of test case coverage against the number of requirements.
Test design coverage= [Total no of requirements mapped to test cases/total number of
requirements] *100
e.g
Total number of requirements=100
Total no of requirements mapped to test cases=98
Test design coverage=[98/100]*100=98%

3. Test Execution Productivity


It determines the number of test cases that can be executed per hour.
Test execution productivity =
No. of test cases executed/Effort spent for execution of test cases
e.g
no of test cases executed=180
Efforts spent for execution of test cases= 10
Test execution Productivity=180/10=18 test cases/hr

4. Test Execution Coverage


It measures the number of test cases executed against the number of test cases planned.

Test Execution Coverage= [Total no of test cases executed/ Total no of test cases planned to
execute] *100
e.g.: Total no of test cases planned to execute = 240
Total no of test cases executed = 180
Test Execution Coverage = [180/240] * 100 = 75%

5. Test cases Passed


It measures the percentage of test cases passed.
Test cases passed = [Total no of test cases passed/total no of test cases executed] * 100
e.g.: Test cases passed = [80/90] * 100 = 88.9%

6. Test cases Failed


It measures the percentage of test cases failed.
Test cases failed = [Total no of test cases failed/total no of test cases executed] * 100
e.g.: Test cases failed = [10/90] * 100 = 11.1%

7. Test cases Blocked


It measures the percentage of test cases blocked.
Test cases blocked = [Total no of test cases blocked/total no of test cases executed] * 100
e.g.: Test cases blocked = [5/90] * 100 = 5.6%
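The process metrics above can be reproduced with a short script; the figures are the same ones used in the worked examples in this section (with percentages rounded):

```python
def percentage(part, whole):
    """Generic percentage helper used by the coverage and pass/fail metrics."""
    return part / whole * 100

# 1. Test case preparation productivity: 240 test cases prepared in 10 hours
print(240 / 10)                       # 24.0 test cases/hr

# 2. Test design coverage: 98 of 100 requirements mapped to test cases
print(percentage(98, 100))            # 98.0

# 3. Test execution productivity: 180 test cases executed in 10 hours
print(180 / 10)                       # 18.0 test cases/hr

# 4. Test execution coverage: 180 executed out of 240 planned
print(percentage(180, 240))           # 75.0

# 5-7. Passed / failed / blocked, out of 90 executed test cases
print(round(percentage(80, 90), 1))   # 88.9
print(round(percentage(10, 90), 1))   # 11.1
print(round(percentage(5, 90), 1))    # 5.6
```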

Object Oriented Metrics in testing


Object oriented metrics focus on the combination of function and data as an integrated object.
1. Method
1. Cyclomatic complexity (CC):
CC is used to evaluate the complexity of an algorithm in a method.
Low CC is better.
CC cannot be used to measure the complexity of a class because of inheritance.
CC of individual methods can be combined with other measures to evaluate the complexity of
the class.
2. Size:
The size of a method is used to evaluate the ease of understandability of the code by developers and
maintainers. Size can be measured by counting all lines of code (LOC), the number of statements,
and the number of blank lines.
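As a rough sketch, the cyclomatic complexity of a single method can be approximated by counting decision points in its abstract syntax tree using Python's standard ast module. The set of decision node types below is a simplification, not a complete CC tool:

```python
import ast

# Simplified set of decision-point node types; CC = decision points + 1
DECISIONS = (ast.If, ast.For, ast.While, ast.And, ast.Or,
             ast.ExceptHandler, ast.IfExp, ast.comprehension)

def cyclomatic_complexity(source):
    """Approximate cyclomatic complexity of a function given as source code."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

src = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(src))  # 3: one base path + two decision points
```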

2. Class
A class is a template from which objects can be created.
Three class metrics are described to measure the complexity of a class using the class methods,
messages, and cohesion.
1. Method:
A method is an operation upon an object and is defined in the class declaration.


Weighted Methods per Class: the count of the methods implemented within a class,
or the sum of the complexities of the methods.
2. Message: A message is a request that an object makes of another object to perform an
operation. The operation executed as a result of receiving a message is called a method.
Response for a class: The response for a class is the set of all methods that can be invoked in
response to a message to an object of the class or by some method in the class.
This metric combines the complexity of a class (through its number of methods) with the amount
of communication with other classes.
3. Cohesion: the degree to which methods within a class are related to one another and
work together to provide well-bounded behavior.
Lack of Cohesion of Methods (LCOM): measures the degree of similarity of methods by data input
variables or attributes.
It can be computed in two ways:
1. Calculate for each data field in a class what percentage of the methods use that data
field. Average the percentages, then subtract from 100%. Lower percentages mean greater
cohesion of data and methods in the class.
2. Methods are more similar if they operate on the same attributes. Count the number of
disjoint sets produced from the intersection of the sets of attributes used by the methods.
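The second approach can be sketched by merging methods whose attribute sets intersect and counting the resulting disjoint groups (more groups suggest less cohesion). The class and attribute names below are hypothetical:

```python
def lcom_disjoint_sets(method_attrs):
    """Count disjoint groups of methods linked by shared attributes.

    method_attrs: dict of {method_name: set of attributes the method uses}.
    """
    groups = []
    for attrs in method_attrs.values():
        merged = set(attrs)
        rest = []
        for g in groups:
            if g & merged:      # shares an attribute -> merge into one group
                merged |= g
            else:
                rest.append(g)
        groups = rest + [merged]
    return len(groups)

# Illustrative class: methods a and b share 'x'; method c touches only 'z'
methods = {"a": {"x", "y"}, "b": {"x"}, "c": {"z"}}
print(lcom_disjoint_sets(methods))  # 2 disjoint sets -> some lack of cohesion
```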
4. Coupling:
Coupling is a measure of the strength of association established by a connection from one entity
to another. Classes (objects) are coupled in three ways, as explained below:
When a message is passed between objects, the objects are said to be coupled.
Classes are coupled when methods declared in one class use methods or attributes of the other
classes.
Inheritance introduces significant tight coupling between superclasses and their subclasses.

3. Inheritance
Inheritance decreases complexity by reducing the number of operations and operators, but this
abstraction of objects can make maintenance and design difficult.
1. Depth of inheritance tree
The depth of a class within the inheritance hierarchy is the maximum length from the
class node to the root of the tree and is measured by the number of ancestor classes. The
deeper a class is within the hierarchy, the greater the number of methods it is likely to inherit,
making it more complex to predict its behavior.
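For a single-inheritance Python hierarchy, the depth of inheritance tree can be sketched by reading a class's method resolution order; this is a rough illustration, not a full metrics tool:

```python
def depth_of_inheritance(cls):
    """Number of ancestor classes, excluding the class itself and 'object'."""
    return len(cls.__mro__) - 2

class A: pass          # depth 0: no ancestors besides object
class B(A): pass       # depth 1
class C(B): pass       # depth 2: inherits from B, which inherits from A

print(depth_of_inheritance(A))  # 0
print(depth_of_inheritance(C))  # 2
```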

2. Number of Children
The number of children is the number of immediate subclasses subordinate to a class in
the hierarchy. It is an indicator of the potential influence a class can have on the design
and on the system.
