TEST AUTOMATION
Software test automation
What is Software Test Automation?
Software test automation refers to the activities and efforts that intend to
automate engineering tasks and operations in a software test process using well-
defined strategies and systematic solutions.
[Figure: test automation maturity levels, showing systematic test information management and systematic test execution control]
Level 1: Initial
A software test process at this level provides engineers with systematic solutions and
tools to create, update, and manage all types of software test information, including test
requirements, test cases, test data, test procedures, test results, test scripts, and problem
reports.
Level 2: Repeatable
A software test process at this level not only provides engineers with tools to
manage diverse software testing information, but also provides systematic
solutions to execute software tests.
Level 3: Automatic
Besides the test management and test execution tools, a software test process
at this level is supported with additional solutions to generate software tests
using systematic methods.
Level 4: Optimal
This is the optimal level of test automation. At this level, systematic solutions
are available to manage test information, execute tests, generate tests, and
measure test coverage.
SKILLS NEEDED FOR AUTOMATION
Review and Evaluate Software Test Automation
Step #1: Test automation planning
This is the initial step in software test automation. The major task here is to
come up with a plan that specifies the identified test automation focus areas,
objectives, strategies, requirements, schedule, and budget.
Step #2: Test automation design
The primary objective of this step is to work out the detailed test automation
solutions that achieve the major objectives and meet the requirements given in
the test automation plan.
Step #3: Test tool development
At this step, the designed test automation solutions are developed and tested as
quality tools and facilities. The key in this step is to make sure that the
developed tools are reliable and reusable with good documentation.
Step #4: Test tool deployment
Similar to commercial tools, the developed test tools and facilities must be
introduced and deployed into a project or onto a product line. At this step,
basic user training is essential, and proper user support is necessary.
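As a rough illustration only (the field names below are assumptions, not part of any prescribed template), the key elements of a test automation plan from Step #1 could be captured like this:

```python
# Illustrative sketch only: the field names are assumptions, but they mirror
# the plan elements listed in Step #1 (focus, objectives, strategies,
# requirements, schedule, and budget).
automation_plan = {
    "focus_areas": ["regression", "smoke", "installation"],
    "objectives": ["cut regression cycle from 5 days to 1 day"],
    "strategies": ["keyword-driven framework", "nightly unattended runs"],
    "requirements": ["tests must run on Windows and Linux"],
    "schedule": {"design": "Q1", "tool_development": "Q2", "deployment": "Q3"},
    "budget_person_months": 6,
}

if __name__ == "__main__":
    for key, value in automation_plan.items():
        print(f"{key}: {value}")
```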
•Functional automated testing has emerged as a key area in most of the testing
processes.
•The main area where functional testing tools are used is regression test case
execution (a small sketch follows below).
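A minimal sketch of such an automated functional regression check, written here with pytest; the apply_discount function is a made-up stand-in for application code:

```python
# Hypothetical regression check written with pytest; apply_discount is an
# assumed application function used only to illustrate automated execution.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Stand-in for the application code under test."""
    return round(price * (1 - percent / 100), 2)

@pytest.mark.parametrize(
    "price, percent, expected",
    [(100.0, 0, 100.0), (100.0, 10, 90.0), (59.99, 25, 44.99)],
)
def test_apply_discount_regression(price, percent, expected):
    # Each parameter row is one regression test case executed automatically.
    assert apply_discount(price, percent) == expected
```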
In integration testing, both internal and external interfaces have to be captured by
design and architecture. The architecture for test automation involves two major parts: a test
infrastructure, covering a test case database (TCDB) and a defect database (defect repository),
and a test framework that, using this infrastructure, provides the backbone tying together the
selection and execution of test cases.
External modules
There are two modules that are external to the automation framework: the TCDB and the defect DB.
Manual test cases do not need any interaction between the framework and the TCDB; test
engineers submit the defects for manual test cases themselves. For automated test cases, the
framework can automatically submit the defects to the defect DB during execution. These external
modules can be accessed by any module in the automation framework.
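A minimal sketch of this behaviour, with made-up class and field names, where the framework files a defect into the defect DB automatically when an automated case fails:

```python
# Minimal sketch (all class and field names are assumptions) of how an
# automation framework might file defects automatically for automated test
# cases, while manual test cases are left to the test engineer.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Defect:
    test_case_id: str
    summary: str

@dataclass
class DefectDB:
    defects: List[Defect] = field(default_factory=list)

    def submit(self, defect: Defect) -> None:
        self.defects.append(defect)

def run_automated_case(case_id: str, passed: bool, defect_db: DefectDB) -> None:
    # For automated cases the framework itself raises the defect on failure.
    if not passed:
        defect_db.submit(Defect(case_id, f"Automated case {case_id} failed"))

db = DefectDB()
run_automated_case("TC-101", passed=False, defect_db=db)
print(db.defects)
```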
A test case is an object for execution by the other modules in the architecture and
does not represent any interaction by itself. A test framework is a module that
combines what to execute and how it has to be executed. The test framework is
considered the core of automation design. It can be developed by the organization
internally or bought from a vendor.
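A small sketch of that idea, with all names invented for illustration: the framework pairs what to execute (test case ids, as they might come from the TCDB) with how to execute them (one runner per kind of test case):

```python
# Sketch of the framework "core" idea only; every name here is an
# illustrative assumption, not a real tool or API.
from typing import Callable, Dict, List, Tuple

def run_gui_case(case_id: str) -> str:
    return f"{case_id}: executed via GUI driver"

def run_api_case(case_id: str) -> str:
    return f"{case_id}: executed via API client"

# "How": one runner per kind of test case.
runners: Dict[str, Callable[[str], str]] = {"gui": run_gui_case, "api": run_api_case}

# "What": the selection of test cases, as it might come from the TCDB.
selected_cases: List[Tuple[str, str]] = [("TC-001", "gui"), ("TC-002", "api")]

for case_id, kind in selected_cases:
    print(runners[kind](case_id))
```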
Tools and results modules
When a test framework performs its operations, a set of tools may be
required. For example, when test cases are stored as source code files in the
TCDB, they need to be extracted and compiled by build tools. In order to run the
compiled code, certain runtime tools and utilities may be required.
The results that come out of the tests must be stored for future analysis. The
history of all previous test runs should be recorded and kept as archives. These
results help the test engineer compare the current test run with previous runs.
The audit of all tests that are run and the related information are stored in the
results module of the automation framework. This can also help in selecting test
cases for regression runs.
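One possible sketch of such a results archive (names and structure are assumptions), keeping the history of runs so that the current run can be compared with the previous one and new failures picked out for regression:

```python
# Illustrative results archive: each run's outcomes are kept so the current
# run can be compared with the previous one. Names are assumptions.
from typing import Dict, List

class ResultsArchive:
    def __init__(self) -> None:
        self.runs: List[Dict[str, str]] = []  # one {case_id: status} dict per run

    def record(self, results: Dict[str, str]) -> None:
        self.runs.append(results)

    def new_failures(self) -> List[str]:
        # Cases failing now that did not fail in the previous run.
        if not self.runs:
            return []
        if len(self.runs) < 2:
            return [c for c, s in self.runs[-1].items() if s == "fail"]
        prev, curr = self.runs[-2], self.runs[-1]
        return [c for c, s in curr.items() if s == "fail" and prev.get(c) != "fail"]

archive = ResultsArchive()
archive.record({"TC-001": "pass", "TC-002": "pass"})
archive.record({"TC-001": "pass", "TC-002": "fail"})
print(archive.new_failures())  # -> ['TC-002']
```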
Once the results of a test run are available, the next step is to prepare the test reports
and metrics. Preparing reports is complex work and hence it should be part of the
automation design. The periodicity of the reports differs, such as daily, weekly,
monthly, and milestone reports. Having reports at different levels of detail can
address the needs of multiple constituents and thus provide significant returns. The
module that takes the necessary inputs and prepares a formatted report is called a
report generator. Once the results are available, the report generator can also
generate metrics. All the reports and metrics that are generated are stored for
future reference.
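A minimal report-generator sketch, with an assumed report layout, that takes run results as input and emits a formatted summary and a couple of simple metrics:

```python
# Minimal report-generator sketch; the layout and field names are assumptions
# used only to illustrate turning raw results into a formatted report.
from collections import Counter
from typing import Dict

def generate_report(results: Dict[str, str], period: str = "daily") -> str:
    counts = Counter(results.values())
    total = len(results)
    pass_rate = 100.0 * counts.get("pass", 0) / total if total else 0.0
    lines = [
        f"{period.title()} test report",
        f"Total executed : {total}",
        f"Passed         : {counts.get('pass', 0)}",
        f"Failed         : {counts.get('fail', 0)}",
        f"Pass rate      : {pass_rate:.1f}%",
    ]
    return "\n".join(lines)

print(generate_report({"TC-001": "pass", "TC-002": "fail", "TC-003": "pass"}))
```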
•Automation takes time and effort and pays off in the long run.
Metrics enable a manager to drill down from an overall observation to its root cause, as in the following example:
•Tester: We found 100 more defects in this test pass compared to the previous one
•Manager: What aspect of the product testing produced more defects?
•Tester: Functionality aspect produced 60 defects out of 100
•Manager: Good, what are the components in the product that produced more
functional defects?
•Tester: “Installation” component produced 40 out of those 60
•Manager: What particular feature produced that many defects?
•Tester: The data migration involving different schema produced 35 out of those
40 defects
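The same drill-down can be reproduced mechanically from defect records; the sketch below uses made-up records that match the numbers in the dialogue:

```python
# Sketch of the drill-down above: defects are grouped first by aspect, then
# by component, then by feature. The records are made-up counts matching the
# example dialogue, not real project data.
from collections import Counter

defects = (
    [{"aspect": "functionality", "component": "installation", "feature": "data migration"}] * 35
    + [{"aspect": "functionality", "component": "installation", "feature": "other"}] * 5
    + [{"aspect": "functionality", "component": "other", "feature": "other"}] * 20
    + [{"aspect": "other", "component": "other", "feature": "other"}] * 40
)

for level in ("aspect", "component", "feature"):
    print(level, dict(Counter(d[level] for d in defects)))
```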
Step 3: Decide on periodicity of metrics
Step 4: Analyze metrics and take action items for both positives
and improvement areas
Step 5: Track action items from metrics
Types of Metrics
•Project Metrics: The set of metrics which indicate how the project is
planned and executed
•Progress Metrics: The set of metrics to indicate how different activities
of the project are progressing. The activities include both development
and testing activities. Since the focus of this training is testing, only
those metrics applicable to testing are discussed.
•Productivity Metrics: The set of metrics that takes into account various
productivity numbers that can be collected and used for planning and
tracking the testing activities.
[Figure: metrics overview of process metrics and product metrics]

Project Metrics
• Effort variance
• Schedule variance
• Effort distribution

Progress Metrics
• Defect find rate
• Defect fix rate
• Outstanding defects rate
• Priority outstanding rate
• Defects trend
• Defect classification trend
• Weighted defects trend
• Defect cause distribution
• Development defect metrics: component-wise defect distribution, defect density and defect removal rate, age analysis of outstanding defects, introduced and reopened defects

Productivity Metrics
• Defects per 100 hours of testing
• Test cases executed per 100 hours of testing
• Test cases developed per 100 hours
• Defects per 100 test cases
• Defects per 100 failed test cases
• Test phase effectiveness
• Closed defects distribution
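Several of the productivity metrics above are simple ratios; the sketch below shows the usual "per 100 hours" and "per 100 test cases" arithmetic with placeholder numbers:

```python
# Hedged sketch of the productivity-metric arithmetic listed above;
# the input numbers are placeholders, not real project data.
def per_100_hours(count: int, hours: float) -> float:
    return 100.0 * count / hours

defects_found = 45
test_cases_executed = 900
testing_hours = 300.0

print("Defects per 100 hours of testing:",
      per_100_hours(defects_found, testing_hours))        # 15.0
print("Test cases executed per 100 hours of testing:",
      per_100_hours(test_cases_executed, testing_hours))  # 300.0
print("Defects per 100 test cases:",
      100.0 * defects_found / test_cases_executed)        # 5.0
```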
5.6.1 Project metrics example
5.6.3 Productivity metrics example
What is software test measurement?
Quantitative indication of extent, capacity, dimension, amount or size of some
attribute of a process or product.
•Process Metrics: It can be used to improve the process efficiency of the SDLC
( Software Development Life Cycle)
•Product Metrics: It deals with the quality of the software product
•Project Metrics: It can be used to measure the efficiency of a project team or any
testing tools being used by the team members
IDENTIFICATION OF TEST METRICS
• Fix the target audience for the metric preparation and define the goal for the metrics.
• Introduce all the relevant metrics based on project needs.
• Analyze the cost-benefit aspect of each metric and the project lifecycle phase in
which it yields the maximum output.
OTHER IMPORTANT METRICS:
• Test case execution productivity metrics
• Test case preparation productivity metrics
• Defect metrics
• Defects by priority
• Defects by severity
• Defect slippage ratio
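Definitions of defect slippage ratio vary between organisations; the sketch below assumes one common form, defects missed by testing divided by total defects (found plus missed):

```python
# One common formulation (an assumption; definitions of slippage vary):
# defects missed by the test team, divided by the total of defects found
# by the test team plus those missed.
def defect_slippage_ratio(missed_after_release: int, found_in_testing: int) -> float:
    total = found_in_testing + missed_after_release
    return missed_after_release / total if total else 0.0

print(defect_slippage_ratio(missed_after_release=5, found_in_testing=95))  # 0.05
```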