Chapter 5
Testing tools and measurements
(R-02, U-04, A-06 = 12 Marks)
Manual Testing
Manual testing is a software testing process in which test cases are executed manually, without using any automation tool. All test cases are executed manually by the tester from the end user's perspective. It verifies whether the application works as specified in the requirement document. Test cases are planned and implemented to cover almost 100 percent of the software application, and test case reports are also generated manually.
Manual testing is one of the most fundamental testing processes, as it can find both visible and hidden defects in the software.
The difference between the expected output and the actual output produced by the software is defined as a defect. The developer fixes the defects and hands the build back to the tester for retesting.
Manual testing is mandatory for every newly developed piece of software before automated testing. It requires great effort and time, but it gives greater assurance of bug-free software.
Manual testing requires knowledge of manual testing techniques, but not of any automated testing tool.
Software Testing (22518) CO5I
Manual testing is an activity in which the tester needs to be very patient, creative, and open-minded. It is a vital part of user-friendly software development because humans are involved in testing the application, and the end users are also humans; testers need to think and act from an end user's perspective. Testing can be extremely challenging: covering an application's possible use cases with a minimum number of test cases requires strong analytical skills.
Automation Testing
Automation testing is a software testing technique that tests and compares the actual outcome with the expected outcome. This can be achieved by writing test scripts or using an automation testing tool. Test automation is used to automate repetitive tasks and other testing tasks that are difficult to perform manually.
Manual testing is performed by a human sitting in front of a computer, carefully executing the test steps. Successive development cycles require the same test suite to be executed repeatedly. Using a test automation tool, it is possible to record this test suite and replay it as required. Once the test suite is automated, no human intervention is required. The goal of automation is to reduce the number of test cases to be run manually, not to eliminate manual testing altogether.
Test Automation
Test automation is the best way to increase the effectiveness, test coverage, and execution speed of software testing. Manually testing all workflows, all fields, and all negative scenarios consumes significant time and money, and multilingual sites are difficult to test manually.
Test automation does not require human intervention: automated tests can run unattended (for example, overnight). Test automation increases the speed of test execution and helps increase test coverage. Manual testing can become boring and hence error-prone.
The following categories of test cases are not suitable for automation:
• Test cases that are newly designed and have not been executed manually at least once.
• Test cases for which the requirements change frequently.
• Test cases that are executed on an ad-hoc basis.
What to Automate
• Repetitive tests that run for multiple builds.
• Tests that tend to cause human error.
• Tests that require multiple data sets.
• Frequently used functionality that introduces high risk conditions.
• Tests that are impossible to perform manually.
• Tests that run on several different hardware or software platforms and configurations.
• Tests that take a lot of effort and time when performed manually.
Selecting a tool:
1. Free tools are not well supported and may be phased out soon.
2. Developing in-house tools takes time.
3. Test tools sold by vendors are expensive.
4. Test tools require extensive training.
5. Test tools generally do not meet all the requirements for automation.
6. Not all test tools run on all platforms.
1. Meeting requirements-
There are plenty of tools available in the market, but they rarely meet all the requirements of a given product or organization. Evaluating different tools for different requirements involves significant effort, money, and time. Given the wide choice available, considerable delay is involved in selecting and implementing test tools.
2. Technology expectations-
Test tools in general may not allow test developers to extend or modify the functionality of the framework, so extending the functionality requires going back to the tool vendor and involves additional cost and effort. A good number of test tools require their libraries to be linked with product binaries.
3. Training/skills-
While test tools require plenty of training, very few vendors provide training to the required level. Organization-level training is needed to deploy the test tools, as the users of the test suite are not only the test team but also the development team and other areas such as configuration management.
4. Management aspects-
A test tool increases the system requirements and may require the hardware and software to be upgraded. This increases the cost of the already expensive test tool.
Data-Driven Framework
Using a data-driven framework separates the test data from the script logic, meaning testers can store data externally. Very frequently, testers find themselves in a situation where they need to test the same feature or function of an application multiple times with different sets of data. In these instances, it is critical that the test data not be hard-coded in the script itself, which is what happens with a linear or modular-based testing framework.
Setting up a data-driven test framework allows the tester to store the input/output parameters and pass them to the test scripts from an external data source, such as Excel spreadsheets, text files, CSV files, SQL tables, or ODBC repositories.
The test scripts are connected to the external data source and told to read and populate the necessary data when needed.
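A minimal sketch of the idea in Python, using the standard csv module; the login function, field names, and data values below are illustrative assumptions, and an in-memory string stands in for the external data file:

```python
import csv
import io

# Hypothetical system under test: a trivial login check.
# In a real data-driven suite this would call the application.
def login(username, password):
    return username == "admin" and password == "secret"

# Test data kept outside the script logic; an in-memory CSV
# stands in for an external spreadsheet or file.
TEST_DATA = """username,password,expected
admin,secret,pass
admin,wrong,fail
guest,secret,fail
"""

def run_data_driven_tests(data_source):
    # One script, many data rows: the framework reads each row
    # and compares actual behavior with the expected column.
    results = []
    for row in csv.DictReader(data_source):
        actual = "pass" if login(row["username"], row["password"]) else "fail"
        results.append((row["username"], actual == row["expected"]))
    return results

print(run_data_driven_tests(io.StringIO(TEST_DATA)))
```

Adding a new test case is then a matter of adding a row to the data source, with no change to the script logic.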
Keyword-Driven Framework
In a keyword-driven framework, each function of the application under test is laid out in a table
with a series of instructions in consecutive order for each test that needs to be run. In a similar
fashion to the data-driven framework, the test data and script logic are separated in a keyword-
driven framework, but this approach takes it a step further.
With this approach, keywords are also stored in an external data table (hence the name), making them independent of the automated testing tool used to execute the tests. Keywords are the parts of a script that represent the various actions performed to test the GUI of an application. These can be labeled as simply as 'click' or 'login,' or with more complex labels like 'clicklink' or 'verifylink.'
In the table, keywords are stored in a step-by-step fashion with an associated object, or the part
of the UI that the action is being performed on. For this approach to work properly, a shared
object repository is needed to map the objects to their associated actions.
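A minimal keyword-driven executor can be sketched in Python; the keyword names and the stub actions below are illustrative assumptions (a real framework would dispatch to a GUI automation tool and a shared object repository):

```python
# Stub actions standing in for real GUI operations.
def do_click(target):
    return f"clicked {target}"

def do_login(target):
    return f"logged in as {target}"

def do_verifylink(target):
    return f"verified {target}"

# Shared mapping of keywords to actions; this plays the role of
# the object/action repository described above.
KEYWORDS = {"click": do_click, "login": do_login, "verifylink": do_verifylink}

# The test itself is just data: (keyword, object) rows in order,
# as they would appear in the external table.
TEST_TABLE = [
    ("login", "admin"),
    ("click", "settings_button"),
    ("verifylink", "logout_link"),
]

def run(table):
    # Execute each step by looking up its keyword and applying it
    # to the associated UI object.
    return [KEYWORDS[kw](obj) for kw, obj in table]

print(run(TEST_TABLE))
```

Because the table only names keywords and objects, the same test rows could be replayed by a different tool as long as it implements the same keyword set.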
Step 1:
The first step involved in a metrics program is to decide what measurements are important and
collect data accordingly. The effort spent on testing, the number of defects, and the number of test cases are some examples of measurements.
Depending on what the data is used for, the granularity of measurement will vary.
For testing functions, we would obviously be interested in the effort spent on testing, number of
test cases, number of defects reported from test cases, and so on.
If there are too many overheads in making the measurements or if the measurements do not
follow naturally from the actual work being done, then the people who supply the data may resist
giving the measurement data (or even give wrong data).
While deciding what to measure, the following aspects need to be kept in mind.
1. What is measured should be of relevance to what we are trying to achieve.
2. The entities measured should be natural and should not involve too many overheads for
measurements.
3. What is measured should be at the right level of granularity to satisfy the objective for
which the measurement is being made.
The different people who use the measurements may want to make inferences on different
dimensions. The level of granularity of data obtained depends on the level of detail required by
a specific audience. Hence the measurements—and the metrics derived from them—will have
to be at different levels for different people. An approach involved in getting the granular detail
is called data drilling.
Step 2:
The second step involved in metrics collection is defining how to combine data points or
measurements to provide meaningful metrics. A particular metric can use one or more
measurements.
Step 3:
The third step in the metrics program is deciding the operational requirements for measurements. The operational requirements for a metrics plan should lay down not only the periodicity but also other operational issues, such as who should collect the measurements, who should receive the analysis, and so on.
This step helps to decide on the appropriate periodicity for the measurements as well as
assign operational responsibility for collecting, recording, and reporting the measurements
and dissemination of the metrics information. Some measurements need to be made on a
daily basis.
Step 4:
The fourth step involved in a metrics program is to analyze the metrics to identify both positive areas and improvement areas of product quality. Often, only the improvement aspects pointed to by the metrics are analyzed and focused on; it is important to also highlight and sustain the positive areas of the product. This ensures that the best practices get institutionalized and also motivates the team better.
Step 5:
The final step involved in a metrics plan is to take necessary action and follow up on the
action. The purpose of a metrics program will be defeated if the action items are not followed
through to completion.
METRICS IN TESTING
Since testing is the last phase before product release, it is essential to measure the progress of testing and product quality. Tracking test progress and product quality gives a good idea about the release: whether it will happen on time and with known quality. Measuring and producing metrics to determine the progress of testing is therefore very important.
To judge the remaining days needed for testing, two data points are needed—remaining test
cases yet to be executed and how many test cases can be executed per elapsed day.
The test cases that can be executed per person day are calculated based on a measure called
test case execution productivity. This productivity number is derived from the previous test
cycles. Thus, metrics are needed to know test case execution productivity and to estimate
test completion date.
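The estimate described above can be sketched as a one-line calculation; the numbers below are illustrative:

```python
# Remaining test cases divided by execution productivity (cases per
# day, derived from previous test cycles) gives the days of testing
# left to estimate the completion date.
def days_to_complete(remaining_test_cases, cases_per_day):
    # Round up: a partial day still consumes a calendar day.
    return -(-remaining_test_cases // cases_per_day)

print(days_to_complete(120, 25))  # 120 cases at 25/day -> 5 days
```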
The number of days needed to fix all outstanding defects is another crucial data point.
The number of days needed for defect fixes must take into account the "outstanding defects waiting to be fixed" and a projection of "how many more defects will be unearthed from testing in future cycles." Hence, metrics help in predicting the number of defects that can be found in future test cycles.
The defect-fixing trend collected over a period of time gives another estimate of the defect-fixing capability of the team. Combining defect prediction with defect-fixing capability produces an estimate of the days needed for the release. Hence, metrics help in estimating the total days needed for fixing defects. Once the time needed for testing and the time for defect fixing are known, the release date can be estimated. Testing and defect fixing are activities that can be executed simultaneously; however, defect fixes may arrive after the regular test cycles are completed. These defect fixes will have to be verified by regression testing before the product can be released.
Metrics are not only used for reactive activities. Metrics and their analysis help in preventing
the defects proactively, thereby saving cost and effort. Metrics help in identifying these
opportunities. For example, if there is a type of defect (say, coding defects) that is reported
in large numbers, it is advisable to perform a code review and prevent those defects, rather
than finding them one by one and fixing them in the code.
Metrics can be classified as
Product Metrics - these have more meaning from the perspective of the software product being developed, e.g., the quality of the developed product.
Process Metrics - these can be used to improve the process efficiency of the SDLC (Software Development Life Cycle).
Product Metrics include:
Project metrics
Progress metrics
Productivity Metrics
Project Metrics: These can be used to measure the efficiency of a project team or of any testing tools being used by the team members. Project metrics describe the project characteristics and execution process, for example:
Number of software developers
Staffing pattern over the life cycle of the software
Cost and schedule
Productivity
Effort Variance: The difference between the planned effort and the effort actually required to undertake the task is called effort variance.
Effort variance = [(Actual Effort – Planned Effort)/ Planned Effort] x 100.
Schedule Variance: Any difference between the scheduled completion of an activity and the
actual completion is known as Schedule Variance.
Schedule variance = [(Actual calendar days – Planned calendar days)/ Planned calendar days] x 100.
Size Variance: The difference between the estimated size of the project and the actual size of the project (normally in KLOC or FP).
Size variance = [(Actual size – Estimated size)/ Estimated size] x 100.
Cost Variance (CV): The difference between the estimated cost of the project and the actual cost of the project. This metric is represented as a percentage.
Cost variance = [(Actual cost – Estimated cost)/ Estimated cost] x 100.
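All four variance formulas above share the same shape, (actual − planned)/planned × 100, so they can be computed by one helper; the sample values below are illustrative:

```python
# Generic project variance as a percentage of the planned/estimated
# value; positive means overrun, negative means under plan.
def variance_pct(actual, planned):
    return (actual - planned) / planned * 100

print(variance_pct(120, 100))  # effort: planned 100 h, actual 120 h -> 20.0
print(variance_pct(55, 50))    # schedule: planned 50 days, actual 55 -> 10.0
print(variance_pct(12, 10))    # size: estimated 10 KLOC, actual 12 -> 20.0
```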
Progress Metrics
Automation progress refers to the number of tests that have been automated as a percentage
of all automatable test cases.
Any project needs to be tracked from two angles, as given below:
1. How the project is doing with respect to effort and schedule.
2. How well the product is meeting the quality requirements for the release.
One of the main objectives of testing is to find as many defects as possible before any customer finds them. The number of defects found in the product is one of the main indicators of quality.
Defect metrics are further classified into test defect metrics, which help the testing team analyze product quality and testing, and development defect metrics, which help the development team analyze its development activities.
How many defects have already been found and how many more defects may get discovered are two parameters that determine product quality and its assessment.
Some organizations use defect severity levels (for example, S1, S2, S3, and so on). The severity of defects gives the test team a perspective on the impact of each defect on product functionality.
The priority of a defect can change dynamically once assigned, whereas severity is absolute and does not change often, as it reflects the state and quality of the product. Some organizations use a combination of priority and severity to classify defects.
Since different organizations use different methods of defining priorities and severities, a common set of defect definitions and classifications is provided in the table to take care of both priority and severity levels.
Defect trend
The effectiveness of analysis increases when several perspectives of find rate, fix rate,
outstanding, and priority outstanding defects are combined.
Having discussed individual measures of defects, the trend chart consolidates all of the above into one chart.
If one independent component is producing a large number of defects while all other components are stable, the scope of the release can be reduced: remove the defect-producing component and release the stable components, thereby meeting the release date and the release quality, provided the functionality of that component is not critical to the release.
One of the metrics that correlates source code and defects is defect density. This metric maps the defects in the product to the volume of code produced for the product. There are several standard formulae for calculating defect density. Of these, defects per KLOC is the most practical and easy metric to calculate and plot. KLOC stands for kilo lines of code; every 1000 lines of executable statements in the product are counted as one KLOC.
Defects per KLOC = (Total defects found in the product)/(Total executable lines of code in KLOC)
The defect removal rate (or percentage) measures how many defects are removed by early activities relative to those found by the test teams. The formula for calculating the defect removal rate is
Defect removal rate = [(Defects found by verification activities + Defects found in unit testing)/(Defects found by test teams)] x 100
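Both formulas can be sketched directly in Python; the defect counts and code size below are illustrative:

```python
# Defect density: defects per thousand executable lines of code.
def defects_per_kloc(total_defects, executable_loc):
    return total_defects / (executable_loc / 1000)

# Defect removal rate: defects caught before the test phase
# (verification + unit testing) relative to those the test teams found.
def defect_removal_rate(verification_defects, unit_test_defects, test_team_defects):
    return (verification_defects + unit_test_defects) / test_team_defects * 100

print(defects_per_kloc(45, 30000))      # 45 defects in 30 KLOC -> 1.5
print(defect_removal_rate(30, 20, 25))  # (30+20)/25 * 100 -> 200.0
```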
To perform this analysis, the time duration from the filing of outstanding defects to the current period is calculated and plotted every week, for each criticality of defects, in a stacked area graph.
This graph is useful in finding out whether the defects are fixed as soon as they arrive and to
ensure that long pending defects are given adequate priority. The defect fixing rate discussed
earlier talks only about numbers, but age analysis talks about their age. The purpose of this metric
and the corresponding chart is to identify those defects—especially the high-priority ones—that
are waiting for a long time to be fixed.
PRODUCTIVITY METRICS
Productivity metrics combine several measurements and parameters with the effort spent on the product. They help in finding out the capability of the team, and also serve other purposes, such as:
Estimating for the new release.
Finding out how well the team is progressing, and understanding the reasons for variations (both positive and negative) in results.
Estimating the number of defects that can be found.
Estimating the release date and quality.
Estimating the cost involved in the release.
Some test cases are deleted if they are no longer useful or if corresponding features are removed
from the product.
Hence the formula for test cases developed uses the count corresponding to added/modified
and deleted test cases.
Test cases developed per 100 hours of testing =
(Total test cases developed for a period/Total hours spent in test case development) * 100
Process Metrics
Software test metrics are used in the test preparation and test execution phases of the STLC.
1. Test case preparation productivity
Test case preparation productivity = No. of test cases / Effort spent on test case preparation
E.g., No. of test cases = 240
Effort spent on test case preparation, in hours = 10
Test case preparation productivity = 240/10 = 24 test cases/hr
Test execution coverage = [Total no. of test cases executed / Total no. of test cases planned to execute] * 100
e.g.: Total no. of test cases planned to execute = 240
Total no. of test cases executed = 180
Test execution coverage = [180/240] * 100 = 75%
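The two worked examples above can be reproduced in Python, using the same numbers as the text:

```python
# Test cases prepared per hour of preparation effort.
def preparation_productivity(test_cases, prep_hours):
    return test_cases / prep_hours          # test cases per hour

# Percentage of planned test cases actually executed.
def execution_coverage(executed, planned):
    return executed / planned * 100         # percent

print(preparation_productivity(240, 10))  # -> 24.0
print(execution_coverage(180, 240))       # -> 75.0
```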
2. Class
A class is a template from which objects can be created.
Three class metrics are described below, measuring the complexity of a class using the class's methods, messages, and cohesion.
1. Method:
A method is an operation upon an object and is defined in the class declaration.
Weighted methods per class: the count of the methods implemented within a class, or the sum of the complexities of those methods.
2. Message: A message is a request that an object makes of another object to perform an
operation. The operation executed as a result of receiving a message is called a method.
Response for a class: The response for a class is the set of all methods that can be invoked in response to a message to an object of the class, or by some method in the class.
This metric combines the complexity of a class through the number of methods and the amount of communication with other classes.
3. Cohesion: Cohesion is the degree to which methods within a class are related to one another and work together to provide well-bounded behavior.
Lack of cohesion of methods: Measures the degree of similarity of methods by their data input variables or attributes.
It can be calculated in two ways:
1. Calculate, for each data field in a class, what percentage of the methods use that data field. Average the percentages, then subtract from 100%. Lower percentages mean greater cohesion of data and methods in the class.
2. Methods are more similar if they operate on the same attributes. Count the number of disjoint sets produced from the intersection of the sets of attributes used by the methods.
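The first calculation can be sketched in Python; the method-to-attribute mapping below is an illustrative assumption (in practice it would come from analyzing the class's source):

```python
# Lack of cohesion, first variant: for each attribute, the percentage
# of methods that use it, averaged and subtracted from 100%.
# Lower results mean greater cohesion.
def lack_of_cohesion(methods_to_attrs, all_attrs):
    """methods_to_attrs maps method name -> set of attributes it uses."""
    n_methods = len(methods_to_attrs)
    usage = [
        sum(1 for attrs in methods_to_attrs.values() if a in attrs) / n_methods
        for a in all_attrs
    ]
    return 100 - (sum(usage) / len(usage)) * 100

# Two methods that both touch both attributes: fully cohesive.
print(lack_of_cohesion({"m1": {"x", "y"}, "m2": {"x", "y"}}, ["x", "y"]))  # -> 0.0
# Two methods on disjoint attributes: less cohesive.
print(lack_of_cohesion({"m1": {"x"}, "m2": {"y"}}, ["x", "y"]))  # -> 50.0
```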
4. Coupling:
Coupling is a measure of the strength of association established by a connection from one entity to another. Classes (objects) are coupled in three ways, as explained below:
When a message is passed between objects, the objects are said to be coupled.
Classes are coupled when methods declared in one class use methods or attributes of other classes.
Inheritance introduces significant tight coupling between superclasses and their subclasses.
3. Inheritance
Inheritance decreases complexity by reducing the number of operations and operators, but this abstraction of objects can make maintenance and design more difficult.
1. Depth of inheritance tree
The depth of a class within the inheritance hierarchy is the maximum length from the class node to the root of the tree, measured by the number of ancestor classes. The deeper a class is within the hierarchy, the greater the number of methods it is likely to inherit, making it more complex to predict its behavior.
2. Number of Children
The number of children is the number of immediate subclasses subordinate to a class in
the hierarchy. It is an indicator of the potential influence a class can have on the design
and on the system.
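In Python, both metrics can be read from the class machinery itself; the small hierarchy below is illustrative:

```python
# Illustrative inheritance hierarchy: Base <- Middle <- (LeafA, LeafB).
class Base:
    pass

class Middle(Base):
    pass

class LeafA(Middle):
    pass

class LeafB(Middle):
    pass

def depth_of_inheritance(cls):
    # Number of ancestor classes: the MRO minus the class itself
    # and the implicit `object` root.
    return len(cls.__mro__) - 2

def number_of_children(cls):
    # Immediate subclasses subordinate to the class.
    return len(cls.__subclasses__())

print(depth_of_inheritance(LeafA))  # ancestors Middle and Base -> 2
print(number_of_children(Middle))   # LeafA and LeafB -> 2
```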