Maturity Level of Software Test Automation

[Figure: the test automation maturity pyramid]
– Level 1: Initial – Systematic Test Information Management
– Level 2: Repeatable – Systematic Test Execution Control
– Level 3: Automatic – Systematic Test Generation
– Level 4: Optimal – Systematic Test Measurement & Optimization
Level 1: Initial
– A software test process at this level provides engineers with systematic
solutions and tools to create, update, and manage all types of software test
information, including test requirements, test cases, test data, test
procedures, test results, test scripts, and problem reports. However, no
systematic solutions or tools are available to support engineers in test
design, test generation, and test execution.
Level 2: Repeatable
– A software test process at this level not only provides engineers with tools
to manage diverse software testing information, but also provides
systematic solutions to execute software tests in a systematic manner.
These solutions allow engineers to use a systematic approach to run tests
and validate test results. However, no systematic solutions or tools are
available to assist test engineers in test design, test generation, and test
coverage measurement.
Level 3: Automatic
– Besides the test management and test execution tools, a software test
process at this level is supported with additional solutions to generate
software tests using systematic methods. These methods can be used to generate
black-box or white-box software tests. However, no systematic solutions are
available to measure the test coverage of the test process.
Level 4: Optimal
– This is the optimal level of test automation. At this level, systematic
solutions are available to manage test information, execute tests, generate
tests, and measure test coverage. The primary benefit of achieving this level
is that it helps engineers understand the current coverage of a test process
and identify test coverage gaps.
Skills needed for automation
Essential Needs of Software Test Automation
Organization issues
– This is the initial step in software test automation. The major task here is to
come up with a plan that specifies the identified test automation focus areas,
objectives, strategies, requirements, schedule, and budget.
– The primary objective of this step is to draw out the detailed test automation
solutions to achieve the major objectives and meet the given requirements in a
test automation plan.
– At this step, the designed test automation solutions are developed and tested as
quality tools and facilities. The key in this step is to ensure that the developed
tools are reliable and reusable, with good documentation.
The other steps in a test automation process:
A test case is a set of sequential steps to execute a test operating on a set of predefined inputs to produce
certain expected outputs.
There are two types of test cases, namely automated and manual. "Test case" in this
chapter refers to automated test cases.
A test case can be documented as a set of simple steps, or it could be an assertion statement or a set of
assertions.
An example of an assertion is "Opening a file that is already opened should fail." The
following table describes some test cases for the login example, showing how the login
can be tested under different types of testing.
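As a sketch of how such test cases look when automated as assertions, the code below exercises a hypothetical login() function; the function name, credentials, and test names are illustrative, not part of the original login example:

```python
# A minimal sketch of automated test cases written as assertions.
# login() is a hypothetical stand-in for the product's login feature.

def login(username, password):
    """Toy login: succeeds only for one known credential pair."""
    return username == "admin" and password == "secret"

def test_valid_login():
    assert login("admin", "secret")      # correct credentials succeed

def test_wrong_password():
    assert not login("admin", "guess")   # wrong password fails

def test_empty_username():
    assert not login("", "secret")       # missing username fails

# Execute the test cases sequentially, as an automated framework would.
for test in (test_valid_login, test_wrong_password, test_empty_username):
    test()
```

Each function is one test case; grouping such cases and associating them with scenarios (valid vs. invalid credentials, for example) is what forms a test suite.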
In the above table, the "how" portion of a test case is called a scenario. What an
operation has to do is a product-specific feature; how it is to be run is a
framework-specific requirement. When a set of test cases is combined and associated
with a set of scenarios, they are called a "test suite".
IT6004 – SOFTWARE TESTING [UNIT – V SNS COLLEGE OF ENGINEERING]
User interfaces normally go through significant changes during a project. To avoid rework
on automated test cases, proper analysis has to be done to find the areas of change in the
user interfaces, and to automate only those areas that will go through relatively little
change. The non-user-interface portions of the product can be automated first. This enables
the non-GUI portions of the automation to be reused even when the GUI changes.
In integration testing, both internal interfaces and external interfaces have to be
captured by design and architecture. The architecture for test automation involves two
major parts: a test infrastructure, which covers a test case database (TCDB) and a defect
database (defect repository), and a test framework that, using this infrastructure,
provides the backbone tying together the selection and execution of test cases.
External modules
There are two modules that are external to the automation framework – the TCDB and the
defect DB. Manual test cases do not need any interaction between the framework and the
TCDB, and test engineers submit the defects for manual test cases themselves. For
automated test cases, the framework can automatically submit defects to the defect DB
during execution. These external modules can be accessed by any module in the
automation framework.
Scenarios are information on how to execute a particular test case. A configuration file
contains a set of variables that are used in automation. A configuration file is important
for running the test cases under various execution conditions and under various input,
output, and state conditions. The values of the variables in this configuration file can be
changed dynamically to achieve different execution, input, output, and state conditions.
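As an illustration, the sketch below uses Python's standard configparser module with made-up section and variable names (browser, timeout, and so on); the actual variables depend on the product under test:

```python
# Sketch of an automation configuration file and of changing its
# variables dynamically. Section and key names are illustrative only.
import configparser

CONFIG_TEXT = """
[execution]
browser = firefox
timeout = 30
retries = 2

[data]
input_dir = ./inputs
output_dir = ./results
"""

config = configparser.ConfigParser()
config.read_string(CONFIG_TEXT)

# Change a variable dynamically to run the same tests under a
# different execution condition.
config["execution"]["browser"] = "chrome"

print(config["execution"]["browser"])        # chrome
print(config["execution"].getint("timeout")) # 30
```

In a real run the file would live on disk and be loaded with config.read(), so the same test cases can be pointed at different conditions without code changes.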
A test case is an object of execution for the other modules in the architecture and does
not represent any interaction by itself. A test framework is the module that combines what
to execute and how it has to be executed; it is considered the core of automation design.
It can be developed in-house by the organization or bought from a vendor.
Tools and results modules
When a test framework performs its operations, a set of tools may be required. For example,
when test cases are stored as source code files in the TCDB, they need to be extracted and
compiled by build tools. To run the compiled code, certain runtime tools and utilities may
be required.
The results that come out of the tests must be stored for future analysis. The history of
all previous test runs should be recorded and kept as archives. These results help the test
engineer compare the current test run with previous runs. An audit of all tests that are
run, together with the related information, is stored in the results module of the
automation framework. This can also help in selecting test cases for regression runs.
Report generator and reports/metrics modules
Once the results of a test run are available, the next step is to prepare the test reports
and metrics. Preparing reports is complex work and hence it should be part of the automation
design. The periodicity of the reports is different, such as daily, weekly, monthly, and
milestone reports. Having reports of different levels of detail can address the needs of
multiple constituents and thus provide significant returns. The module that takes the
necessary inputs and prepares a formatted report is called a report generator. Once the
results are available, the report generator can generate metrics. All the reports and
metrics that are generated are stored in the reports/metrics module of automation for
future use and analysis.
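A report generator of this kind can be sketched as follows; the input format (a list of test-name/status pairs) is an assumption made for illustration, not a prescribed interface:

```python
# Minimal sketch of a report generator: it formats raw test results
# and derives a pass-rate metric from them.

def generate_report(results):
    """results: list of (test_name, status) pairs, status 'pass' or 'fail'."""
    total = len(results)
    passed = sum(1 for _, status in results if status == "pass")
    pass_rate = 100.0 * passed / total if total else 0.0
    lines = [f"{name}: {status}" for name, status in results]
    lines.append(f"Summary: {passed}/{total} passed ({pass_rate:.0f}%)")
    return "\n".join(lines), pass_rate

report, rate = generate_report([("t1", "pass"), ("t2", "fail"), ("t3", "pass")])
print(report)
```

In practice the generator would read its inputs from the results module and emit daily, weekly, monthly, and milestone variants of the same report at different levels of detail.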
REQUIREMENTS FOR A TEST TOOL
The requirements for a test tool fall under four broad areas:
1. Meeting requirements
2. Technology expectations
3. Training/skills
4. Management aspects
A test tool should ideally be independent of languages; note also that free tools are often
not well supported and get phased out soon.
Meeting requirements
Firstly, there are plenty of tools available in the market, but they do not meet all the requirements of a
given product. Evaluating different tools for different requirements involves significant effort, money
and time.
Secondly, test tools are usually one generation behind and may not provide backward or forward
compatibility. Thirdly, test tools may not go through the same amount of evaluation for new
requirements. Finally, a number of test tools cannot differentiate between a product
failure and a test failure. The test tool must therefore have some intelligence to
proactively find out the changes that happened in the product and analyze the results
accordingly.
Technology expectations
– Test tools are not 100% cross-platform. Also, when the impact of the product on the
network is analyzed, the first suspect is the test tool, and it is uninstalled when such
analysis starts.
Training/skills
While test tools require plenty of training, very few vendors provide training to the
required level. Test tools expect users to learn new languages/scripts and may not use
standard languages/scripts. This increases the skill requirements for automation and
steepens the learning curve inside the organization.
Management aspects
2. Make sure the experiences discussed in the previous sections are taken care of.
3. Collect the experiences of other organizations that have used similar test tools.
6. Evaluate and shortlist one tool (or a set of tools) and train all test developers on the tool.
7. Deploy the tool across the teams after training all potential users of the tool.
5.6 Challenges in Automation
• The most important challenge of automation is the management commitment.
• Automation takes time and effort and pays off in the long run.
• Management should have patience and persist with automation.
– Effort is the actual time that is spent on a particular activity or phase. "Elapsed
days" is the time between the start of an activity and its completion.
– A measurement is a unit used by metrics (e.g., effort, elapsed days, number of
defects). A metric typically uses one or more measurements.
Why Metrics?
1. How do you determine quality and progress of testing?
2. How much testing is completed?
3. How much more time is needed for release?
4. How much time is needed to fix defects?
5. How many days are needed for release?
6. How many defects will be reported by customers?
7. Do you know how to prevent defects rather than finding and fixing them?
Do you have answers?
Why Metrics for QA?
1. Testing is the penultimate phase of the product release cycle, so determining the quality
and progress of testing is very important
2. How much testing is completed can be measured if you know how much total testing is needed
3. How much more time is needed for release? For example: days needed to complete testing =
total test cases yet to be executed / test case execution productivity
4. How much time is needed to fix defects? The defect trend gives a rough estimate of the
defects that will come in the future, and metrics help in predicting the number of defects
that can be found in future test cycles. For example: total days needed for defect fixes =
(outstanding defects yet to be fixed + defects that can be found in future test cycles) /
defect fixing capability
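The two estimation formulas above can be sketched directly in code; all input numbers below are invented for illustration:

```python
# Sketch of the release-estimation formulas. Inputs are counts and
# per-day rates made up for the example.

def days_to_complete_testing(tests_remaining, exec_productivity_per_day):
    # total test cases yet to be executed / test case execution productivity
    return tests_remaining / exec_productivity_per_day

def days_for_defect_fixes(outstanding, predicted_future, fix_capability_per_day):
    # (outstanding defects + defects predicted in future cycles) / fixing capability
    return (outstanding + predicted_future) / fix_capability_per_day

print(days_to_complete_testing(200, 25))  # 8.0 days of testing left
print(days_for_defect_fixes(30, 10, 5))   # 8.0 days of defect fixing left
```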
Steps for metrics
Tester: We found 100 more defects in this test pass compared to the previous one
Manager: What aspect of the product testing produced more defects?
Tester: Functionality aspect produced 60 defects out of 100
Manager: Good, what are the components in the product that produced more functional defects?
Tester: “Installation” component produced 40 out of those 60
Manager: What particular feature produced that many defects?
Tester: The data migration involving different schema produced 35 out of those 40 defects…….
•Step 3: Decide on periodicity of metrics
•Step 4: Analyze metrics and take action items for both positives and
improvement areas
•Step 5-n: Track action items from metrics
Types of metrics
Project metrics: The set of metrics which indicate how the project is planned and executed
Progress metrics: The set of metrics to indicate how different activities of the project are progressing. The
activities include both development and testing activities. Since the focus of this training is testing, only
those metrics applicable to testing are discussed.
Productivity metrics: The set of metrics that takes into account various productivity numbers that can be
collected and used for planning and tracking the testing activities.
Overview

[Figure: classification of metrics]
Metrics are broadly classified into process metrics and product metrics; within these, this
unit discusses project metrics, progress metrics, and productivity metrics.

Project metrics:
– Effort variance
– Schedule variance
– Effort distribution

Progress metrics (test defect metrics):
– Defect find rate
– Defect fix rate
– Outstanding defects rate
– Priority outstanding rate
– Defects trend
– Defect classification trend
– Weighted defects trend
– Defect cause distribution

Progress metrics (development defect metrics):
– Component-wise defect distribution
– Defect density and defect removal rate
– Age analysis of outstanding defects
– Introduced and reopened defects rate

Productivity metrics:
– Defects per 100 hours of testing
– Test cases executed per 100 hours of testing
– Test cases developed per 100 hours
– Defects per 100 test cases
– Defects per 100 failed test cases
– Test phase effectiveness
– Closed defects distribution
5.6.1 Project Metrics example
5.6.2.Progress Metrics example
5.6.3.Productivity Metrics example
5.7 Test Metrics and Measurement
“STANDARDS OF MEASUREMENT”
A metric is a quantitative measure of the degree to which a system, system component, or
process possesses a given attribute.
"Software testing metrics - Improves the efficiency and effectiveness of a software testing process."
What is software test measurement?
A quantitative indication of the extent, capacity, dimension, amount, or size of some
attribute of a process or product.
•Process Metrics: It can be used to improve the process efficiency of the SDLC ( Software Development Life Cycle)
•Product Metrics: It deals with the quality of the software product
•Project Metrics: It can be used to measure the efficiency of a project team or any testing tools being used by the team
members
IDENTIFICATION OF TEST METRICS
Fix the target audience for the metrics.
Define the goal of the metrics.
Introduce all the relevant metrics based on project needs.
Analyze the cost-benefit aspect of each metric and the project life cycle phase in which it
yields the maximum output.
5.7.1.1 Manual Test Metrics
BASE METRICS
Raw data collected by the test analyst during test case development and execution (e.g.,
number of test cases executed, number of test cases written).
CALCULATED METRICS
Derived from the data collected in the base metrics.
Used by the test manager for test reporting purposes (e.g., % complete, % test coverage).
OTHER IMPORTANT METRICS:
Test case execution productivity metrics
Test case preparation productivity metrics
Defect metrics
Defects by priority
Defects by severity
Defect slippage ratio
DATA RETRIEVED FROM TEST ANALYST
5.7.2 How to calculate test metrics
Percentage test cases executed= (No of test cases executed/ Total no of test cases written) X 100
(65/100)*100=65%
Percentage test cases not executed= (No of test cases not executed/ Total no of test cases written) X 100
(35/100)*100=35%
Percentage test cases passed= (No of test cases passed / Total no of test cases executed) X 100
(30/65)*100=46%
Percentage test cases failed = (No of test cases failed / Total no of test cases executed) X 100
(26/65)*100=40%
Percentage test cases blocked= (No of test cases blocked / Total no of test cases executed) X 100
(9/65)*100=14%
Defect density = No of defects identified / size
30/5 = 6 defects per unit size (note: defect density is a ratio, not a percentage)
Defect removal efficiency (DRE) = (No of defects found during QA testing / (No of defects
found during QA testing + No of defects found by the end user)) X 100
DRE = [100/(100+40)] X 100 = 71%
Defect leakage = (No of defects found in UAT / No of defects found in QA testing) X 100
(40/100) X 100 = 40%
% of critical defects = (No of critical defects identified / total no of defects identified) X 100
6/30 X 100 = 20%
% of high defects = (No of high defects identified / total no of defects identified) X 100
10/30 X 100 = 33.33%
% of medium defects = (No of medium defects identified / total no of defects identified) X 100
6/30 X 100 = 20%
% of low defects = (No of low defects identified / total no of defects identified) X 100
8/30 X 100 = 27%
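The worked examples above can be reproduced with a small helper function; the raw counts below are the same ones used in the calculations:

```python
# Sketch reproducing the worked metric examples from the same raw counts.

def pct(part, whole):
    """Percentage, rounded to the nearest whole number."""
    return round(100.0 * part / whole)

written, executed = 100, 65
passed, failed, blocked = 30, 26, 9
qa_defects, uat_defects = 100, 40

print(pct(executed, written))                     # 65 (% executed)
print(pct(written - executed, written))           # 35 (% not executed)
print(pct(passed, executed))                      # 46 (% passed)
print(pct(failed, executed))                      # 40 (% failed)
print(pct(blocked, executed))                     # 14 (% blocked)
print(pct(qa_defects, qa_defects + uat_defects))  # 71 (DRE)
print(pct(uat_defects, qa_defects))               # 40 (defect leakage)
```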