
Unit V

TEST AUTOMATION
Software test automation
What is Software Test Automation?
Software test automation refers to the activities and efforts that intend to
automate engineering tasks and operations in a software test process using well-
defined strategies and systematic solutions.

Major objectives of software test automation:


• To free engineers from tedious and redundant manual testing operations
• To speed up a software testing process, and to reduce software testing cost
and time during a software life cycle
Software test automation activities could be performed in three different scopes:

- Enterprise-oriented test automation, where the major focus of test automation efforts is to automate an enterprise-oriented test process so that it can be used and reused to support different product lines and projects in an organization.

- Product-oriented test automation, where test automation activities focus on a specific software product line to support its related testing activities.

- Project-oriented test automation, where the test automation effort is aimed at a specific project and its test process.
Different Maturity Levels of Software Test Automation

• Level 1: Initial – systematic test information management
• Level 2: Repeatable – systematic test execution control
• Level 3: Automatic – systematic test generation
• Level 4: Optimal – systematic test measurement and optimization
Level 1: Initial
A software test process at this level provides engineers with systematic solutions and tools to create, update, and manage all types of software test information, including test requirements, test cases, test data, test procedures, test results, test scripts, and problem reports.

Level 2: Repeatable
A software test process at this level not only provides engineers with tools to
manage diverse software testing information, but also provides systematic
solutions to execute software tests in a systematic manner.
Level 3: Automatic
Besides the test management and test execution tools, a software test process
at this level is supported with additional solutions to generate software tests
using systematic methods.

Level 4: Optimal
This is the optimal level of test automation. At this level, systematic solutions are available to manage test information, execute tests, generate tests, and measure test coverage.
SKILLS NEEDED FOR AUTOMATION

Essential Needs of Software Test Automation


•A dedicated work force for test automation
•The commitment from senior managers and engineers
•The dedicated budget and project schedule
•A well-defined plan and strategy
•Talented engineers and cost-effective testing tools
•Maintenance of automated software tests and tools
Basic Issues of Software Test Automation

•A poorly performed manual software test process


•Late engagement of software test automation in a software
product life cycle
•Unrealistic goals and unreasonable expectations
•Organizational issues
•Lack of good understanding and experience of software test automation
Essential Benefits of Software Test Automation

There are a number of essential benefits from test automation.


They are listed below.
•Reduce manual software testing operations and eliminate redundant testing
efforts.
•Produce more systematic repeatable software tests, and generate more
consistent testing results.
•Execute many more software tests and achieve better testing coverage within a very limited schedule.
A Software Test Automation Process

• Plan software test automation
• Design test automation strategies and solutions
• Select and evaluate available software testing tools
• Develop test solutions and implement automation
• Introduce and deploy test automation solutions
• Review and evaluate software test automation
Step #1: Test automation planning
This is the initial step in software test automation. The major task here is to come up with a plan that specifies the identified test automation focuses, objectives, strategies, requirements, schedule, and budget.
Step #2: Test automation design
The primary objective of this step is to draw out the detailed test automation
solutions to achieve the major objectives and meet the given requirements in a
test automation plan.
Step #3: Test tool development
At this step, the designed test automation solutions are developed and tested as
quality tools and facilities. The key in this step is to make sure that the
developed tools are reliable and reusable with good documentation.
Step #4: Test tool deployment
Similar to commercial tools, the developed test tools and facilities must be
introduced and deployed into a project or onto a product line. At this step,
basic user training is essential, and proper user support is necessary.

Step #5: Review and evaluation


Whenever a new tool is deployed, a review should be conducted to identify its issues and limitations, and evaluate its provided features. The review results will provide valuable feedback to the test automation group for further improvements and enhancements.
SCOPE OF AUTOMATION IN TESTING

•Automation is the process of evaluating the application under test (AUT) against the specification with the help of a tool. In this section we are going to discuss the scope of automation in testing.
•Depending on the nature of testing, there are two main branches under automation:
•Functional testing with automation.
•Performance testing with automation.
Functional testing with automation.

•Functional automated testing has emerged as a key area in most testing processes.

•The main area where functional testing tools are used is regression test case execution.

•Especially in the agile Scrum methodology, where frequent releases happen, it is almost impossible to execute all the regression test cases manually within a short span of time.

•Automation gives a high ROI (Return on Investment) in this area, since generating the scripts is a one-time effort.
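As a sketch of this one-time scripting effort, the hypothetical `login` function below stands in for the application under test; each regression check is a plain Python test function that can be re-run on every release build (for example by a runner such as pytest):

```python
# Minimal regression-suite sketch. `login` is a hypothetical stand-in
# for the real application under test, not a real API.

def login(username, password):
    """Toy application function standing in for the feature under test."""
    valid = {"alice": "s3cret"}
    return valid.get(username) == password

# Each regression test is a plain function; a test runner can discover
# and execute all of them automatically on every release.
def test_valid_login():
    assert login("alice", "s3cret") is True

def test_invalid_password():
    assert login("alice", "wrong") is False

if __name__ == "__main__":
    test_valid_login()
    test_invalid_password()
    print("regression checks passed")
```

Once written, the same script runs unchanged against every build, which is where the return on the one-time scripting investment comes from.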
Performance testing with automation.

•Performance testing is the process of evaluating application performance, which is a critical requirement of almost every application nowadays.
•Performance testing is almost impossible by manual means.
•There are different tools used across organizations for evaluating application performance.
•There are different divisions under performance testing based on the nature of testing.
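Performance measurement is tool-driven in practice, but the core idea can be sketched in a few lines of Python: time many calls of an operation and report the average response time per call. The `sample_operation` here is an illustrative stand-in, not a real application request:

```python
import time

def measure(func, iterations=1000):
    """Return the average elapsed time per call, in seconds."""
    start = time.perf_counter()
    for _ in range(iterations):
        func()
    elapsed = time.perf_counter() - start
    return elapsed / iterations

def sample_operation():
    # Stand-in for a request to the application under test.
    sum(range(100))

avg = measure(sample_operation)
```

Real performance tools add concurrency, ramp-up profiles, and percentile reporting on top of this basic timing loop.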
Design And Architecture For Automation

•A test case is a set of sequential steps to execute a test, operating on a set of predefined inputs to produce certain expected outputs.

•There are two types of test cases, namely automated and manual.

•A test case can be documented as a set of simple steps, or it could be an assertion statement or a set of assertions.

•An example of an assertion is "Opening a file which is already opened should fail." The following table describes some test cases for the login example, on how the login can be tested for different types of testing.
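The definition above can be illustrated as code: a test case held as data (sequential steps, predefined inputs, expected outputs) plus a small executor. The `run_step` driver and its canned responses are purely hypothetical stand-ins for a real application interface:

```python
# A test case modeled as data: ordered steps with inputs and expected outputs.
test_case = {
    "name": "TC-01 valid login",
    "steps": [
        {"action": "enter_username", "input": "alice", "expected": "ok"},
        {"action": "enter_password", "input": "s3cret", "expected": "ok"},
        {"action": "submit", "input": None, "expected": "home_page"},
    ],
}

def run_step(step):
    # Hypothetical driver: a real one would drive the application UI or API.
    responses = {"enter_username": "ok", "enter_password": "ok",
                 "submit": "home_page"}
    return responses[step["action"]]

def run_test_case(tc):
    # Execute each step in order and compare actual with expected output.
    for step in tc["steps"]:
        actual = run_step(step)
        assert actual == step["expected"], f"{tc['name']} failed at {step['action']}"
    return "PASS"

result = run_test_case(test_case)
```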
[UNIT – V SNS COLLEGE
OF ENGINEERING]

Skills Needed for Automation

The automation of testing is broadly classified into three generations.

First generation – record and playback


Record and playback avoids the repetitive nature of executing tests. Almost all test tools available in the market have a record and playback feature. A test engineer records the sequence of actions, such as keystrokes and mouse clicks, and those recorded scripts are played back later, in the same order as they were recorded. When the product changes frequently, the record and playback generation of test automation tools may not be very effective.

Second generation – data – driven


This method helps in developing test scripts that generate the set of input conditions and corresponding expected outputs. This enables the tests to be repeated for different input and output conditions. This generation of automation focuses on input and output conditions using the black box testing approach.
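A minimal data-driven sketch in Python: one script body runs over a table of input conditions and expected outputs. The `absolute` function is an assumed stand-in for the feature under test:

```python
# Data-driven testing: the same script body is repeated over a table of
# input conditions and expected outputs (black box style).

def absolute(x):
    """Stand-in for the function under test."""
    return x if x >= 0 else -x

# The data table: (input, expected output) rows.
test_data = [
    (5, 5),
    (-3, 3),
    (0, 0),
]

def run_data_driven(func, rows):
    """Run every row; return the failing rows as (input, expected, actual)."""
    return [(inp, exp, func(inp)) for inp, exp in rows if func(inp) != exp]

failures = run_data_driven(absolute, test_data)
```

Adding a new test condition means adding a row to the table, not writing a new script, which is the central appeal of this generation of tools.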

Third generation – action driven

This technique enables a layman to create automated tests; no input and expected output conditions are required for running the tests. All actions that appear on the application are automatically tested based on a generic set of controls defined for automation. The input and output conditions are automatically generated and used. The scenarios for test execution can be dynamically changed using the test framework available in this approach of automation. Hence, automation in the third generation involves two major aspects: "test case automation" and "framework design".
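The action-driven idea can be sketched as a keyword table: each generic action name maps to a handler, and a scenario is just a list of actions that the framework executes. All names here are illustrative assumptions, not a real tool's API:

```python
# Action-driven (keyword-driven) sketch: tests are written as generic
# actions; the framework maps each action name to an implementation, so
# the test author writes no per-test code.

def do_open(target):
    return f"opened {target}"

def do_click(target):
    return f"clicked {target}"

def do_close(target):
    return f"closed {target}"

# The generic set of controls defined for automation.
ACTIONS = {"open": do_open, "click": do_click, "close": do_close}

# A scenario is plain data and can be changed dynamically.
scenario = [("open", "login page"), ("click", "submit"), ("close", "login page")]

def execute(steps):
    """Run each (action, target) pair through the keyword table."""
    return [ACTIONS[name](target) for name, target in steps]

log = execute(scenario)
```

This split mirrors the two aspects named above: the scenario data is the "test case automation" part, and the `ACTIONS` dispatch is the "framework design" part.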

What to Automate, Scope of Automation


The specific requirements can vary from product to product, from situation to situation, and from time to time. The following gives some generic tips for identifying the scope of automation.
Design and Architecture for Automation
Design and architecture is an important aspect of automation. As in product development,
the design has to represent all requirements in modules and in the interactions between
modules.

In integration testing, both internal interfaces and external interfaces have to be captured by design and architecture. Architecture for test automation involves two major heads: a test infrastructure that covers a test case database and a defect database or defect repository. Using this infrastructure, the test framework provides a backbone that ties together the selection and execution of test cases.
External modules

There are two modules that are external to automation – the test case database (TCDB) and the defect database (defect DB). Manual test cases do not need any interaction between the framework and TCDB. Test engineers submit the defects for manual test cases. For automated test cases, the framework can automatically submit the defects to the defect DB during execution. These external modules can be accessed by any module in the automation framework.

Scenario and configuration file modules

Scenarios are information on "how to execute a particular test case." A configuration file contains a set of variables that are used in automation. A configuration file is important for running the test cases for various execution conditions and for running the tests for various input and output conditions and states. The values of variables in this configuration file can be changed dynamically to achieve different execution input, output, and state conditions.
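As a sketch using Python's standard `configparser`, a configuration file can supply execution variables that the framework reads at run time; changing the file switches execution conditions without editing any test code. The section and variable names here are assumptions for illustration:

```python
import configparser

# A configuration file holds variables the framework reads at run time;
# editing these values changes input/output/state conditions without
# touching the test cases themselves.
CONFIG_TEXT = """
[execution]
platform = linux
timeout_seconds = 30
repeat_count = 2
"""

config = configparser.ConfigParser()
config.read_string(CONFIG_TEXT)  # a real framework would read a file on disk

timeout = config.getint("execution", "timeout_seconds")
repeats = config.getint("execution", "repeat_count")
platform = config.get("execution", "platform")
```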

A test case is an object for execution for other modules in the architecture and does not represent any interaction by itself. A test framework is a module that combines "what to execute" and "how it has to be executed." The test framework is considered the core of automation design. It can be developed by the organization internally or bought from a vendor.
Tools and results modules
When a test framework performs its operations, there are a set of tools that may
be required. For example, when test cases are stored as source code files in
TCDB, they need to be extracted and compiled by build tools. In order to run the
compiled code, certain runtime tools and utilities may be required.

The results that come out of the tests must be stored for future analysis. The history of all previous test runs should be recorded and kept as archives. These results help the test engineer compare the current test run with previous runs. The audit of all tests that are run and the related information are stored in the results module of automation. This can also help in selecting test cases for regression runs.
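A minimal sketch of such a results archive: each run's outcomes are recorded, and comparing the latest run with the previous one surfaces regressions and candidate test cases for rerun. The in-memory list is an assumed stand-in for a real results database:

```python
# Results-module sketch: archive each run, then compare the latest run
# against the previous one to spot regressions.
archive = []  # in practice a database or result files on disk

def record_run(results):
    """Store one run's outcomes, e.g. {"TC-01": "PASS", ...}."""
    archive.append(results)

def regressions():
    """Test cases that passed in the previous run but fail in the latest."""
    if len(archive) < 2:
        return []
    prev, curr = archive[-2], archive[-1]
    return [tc for tc, status in curr.items()
            if status == "FAIL" and prev.get(tc) == "PASS"]

record_run({"TC-01": "PASS", "TC-02": "PASS"})
record_run({"TC-01": "PASS", "TC-02": "FAIL"})
candidates = regressions()
```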

Report generator and reports/metrics modules

Once the results of a test run are available, the next step is to prepare the test reports and metrics. Preparing reports is complex work and hence it should be part of the automation design. The periodicity of the reports differs: daily, weekly, monthly, and milestone reports. Having reports at different levels of detail can address the needs of multiple constituents and thus provide significant returns. The module that takes the necessary inputs and prepares a formatted report is called a report generator. Once the results are available, the report generator can generate metrics. All the reports and metrics that are generated are stored in the reports/metrics module of automation for future use and analysis.


REQUIREMENTS FOR A TEST TOOL
Generic Requirements for Test Tool/Framework

No hard coding in the test suite.

Test case/suite expandability.

Reuse of code for different types of testing and test cases.

Automatic setup and cleanup.

Independent test cases.

Test case dependency.

Process Model for Automation
The work on automation can go simultaneously with product development and can
overlap with multiple releases of the product. One specific requirement for
automation is that the delivery of the automated tests should be done before the test
execution phase so that the deliverables from automation effort can be utilized for the
current release of the product.
Selecting a test tool:
Selecting the test tool is an important aspect of test automation for several reasons, given below:
1. Free tools are not well supported and get phased out soon.
2. Developing in-house tools takes time.
3. Test tools sold by vendors are expensive.
4. Test tools require strong training.
Criteria for selecting test tools
This will change according to context and are different for different companies
and products.
•Meeting requirements
•Technology expectations
•Training/skills and
•Management aspects.
Meeting requirements
•There are plenty of tools available in the market, but they do not meet all the
requirements of a given product.
•Test tools are usually one generation behind and may not provide backward
or forward compatibility.
•Test tools may not go through the same amount of evaluation for new
requirements.
•Number of test tools cannot differentiate between a product failure and a test
failure.
So the test tool must have some intelligence to proactively find out the
changes that happened in the product and accordingly analyze the results.
Training/skills
While test tools require plenty of training, very few vendors provide training to the required level. Test tools expect users to learn new languages/scripts and may not use standard languages/scripts. This increases the skill requirements for automation and the learning curve inside the organization.
Management aspects

•Test tools require system upgrades.

•Migration to other test tools is difficult.

•Deploying a tool requires huge planning and effort.


Steps for tool selection and deployment
1. Identify your test suite requirements among the generic requirements discussed. Add other requirements, if any.
2. Make sure the experiences discussed in previous sections are taken care of.
3. Collect the experiences of other organizations which used similar test tools.
4. Keep a checklist of questions to be asked of the vendors on cost/effort/support.
5. Identify a list of tools that meet the above requirements.
6. Evaluate and shortlist one tool or a set of tools, and train all test developers on the tool.
7. Deploy the tool across the teams after training all potential users of the tool.
CHALLENGES IN AUTOMATION

•The most important challenge in automation is management commitment.

•Automation takes time and effort and pays off in the long run.

•Management should have patience and persist with automation.


Metrics & Measurements

•A set of data is called information, and a set of information combined to provide a perspective is called a metric.

•A metric is a quantitative measure of the degree to which an attribute of testing, product quality, or process has performed.

•Effort is the actual time that is spent on a particular activity or phase. "Elapsed days" is the difference between the start of an activity and its completion.

•A measurement is a unit used by metrics (e.g., effort, elapsed days, number of defects, etc.). A metric typically uses one or more measurements.


Steps for metrics

Step 1: Identify what measurements are important.

Step 2: Define the granularity of measurements; granularity depends on data drilling. Example:
•Tester: We found 100 more defects in this test pass compared to the previous one
•Manager: What aspect of the product testing produced more defects?
•Tester: Functionality aspect produced 60 defects out of 100
•Manager: Good, what are the components in the product that produced more
functional defects?
•Tester: “Installation” component produced 40 out of those 60
•Manager: What particular feature produced that many defects?
•Tester: The data migration involving different schema produced 35 out of those
40 defects
Step 3: Decide on periodicity of metrics
Step 4: Analyze metrics and take action items for both positives
and improvement areas
Step 5: Track action items from metrics
Types of Metrics

•Project Metrics: The set of metrics which indicate how the project is planned and executed.
•Progress Metrics: The set of metrics that indicate how different activities of the project are progressing. The activities include both development and testing activities. Since the focus here is testing, only those metrics applicable to testing are discussed.
•Productivity Metrics: The set of metrics that take into account various productivity numbers that can be collected and used for planning and tracking the testing activities.

Metrics Overview

Project Metrics:
• Effort variance
• Schedule variance
• Effort distribution

Progress Metrics – test defect metrics:
• Defect find rate
• Defect fix rate
• Outstanding defects rate
• Priority outstanding rate
• Defect trend
• Defect classification trend
• Weighted defects trend
• Defect cause distribution
• Closed defects distribution

Progress Metrics – development defect metrics:
• Component-wise defect distribution
• Defect density and defect removal rate
• Age analysis of outstanding defects
• Introduced and reopened defects

Productivity Metrics:
• Defects per 100 hours of testing
• Test cases executed per 100 hours of testing
• Test cases developed per 100 hours
• Defects per 100 test cases
• Defects per 100 failed test cases
• Test phase effectiveness

5.6.1 Project metrics example
5.6.3 Productivity metrics example
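The productivity metrics listed above are simple normalizations; a sketch of the arithmetic, with made-up numbers:

```python
# Productivity metrics normalize counts to a common base of 100.
def per_100(count, denominator):
    """Generic 'X per 100 Y' productivity metric."""
    return round(count * 100 / denominator, 2)

# Illustrative figures: 45 defects found in 600 hours of testing,
# during which 900 test cases were executed.
defects_per_100_hours = per_100(45, 600)
tests_per_100_hours = per_100(900, 600)
defects_per_100_cases = per_100(45, 900)
```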
What is software test measurement?
Quantitative indication of extent, capacity, dimension, amount or size of some
attribute of a process or product.

Why do Test Metrics?

•We cannot improve what we cannot measure.

•Take decisions for the next phase of activities.
•Provide evidence for a claim or prediction.
•Understand the type of improvement required.
•Take decisions on process or technology change.
TEST METRICS LIFE CYCLE

5.7.1 TYPES OF METRICS

•Process Metrics: It can be used to improve the process efficiency of the SDLC
( Software Development Life Cycle)
•Product Metrics: It deals with the quality of the software product
•Project Metrics: It can be used to measure the efficiency of a project team or any
testing tools being used by the team members
IDENTIFICATION OF TEST METRICS
Fix the target audience for the metric preparation. Define the goal for the metrics. Introduce all the relevant metrics based on project needs.

Analyze the cost-benefit aspect of each metric and the project life cycle phase in which it yields the maximum output.
OTHER IMPORTANT METRICS:
• Test case execution productivity metrics
• Test case preparation productivity metrics

• Defect metrics
• Defects by priority

• Defects by severity
• Defect slippage ratio
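A sketch of how defects-by-priority and a slippage ratio might be computed from a defect list. Note that the defect slippage ratio is computed here with one common definition (defects that slipped past testing divided by defects caught in testing); exact definitions vary by organization:

```python
from collections import Counter

# Illustrative defect records, not real project data.
defects = [
    {"id": 1, "priority": "high",   "found_in": "testing"},
    {"id": 2, "priority": "low",    "found_in": "testing"},
    {"id": 3, "priority": "high",   "found_in": "production"},
    {"id": 4, "priority": "medium", "found_in": "testing"},
]

# Defects by priority: a simple distribution count.
by_priority = Counter(d["priority"] for d in defects)

# One common slippage definition: post-release defects / defects
# caught during testing.
slipped = sum(d["found_in"] == "production" for d in defects)
caught = sum(d["found_in"] == "testing" for d in defects)
slippage_ratio = slipped / caught
```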

You might also like