
White Paper


Functional Testing Challenges & Best Practices

The ever-increasing complexity of today's software products, combined with greater competitive pressures and the skyrocketing costs of software failure, has pushed the need for testing to new heights. While the pressure to deliver high-quality software products continues to grow, shrinking development and deployment schedules, geographically dispersed organizations, limited resources, and soaring turnover rates for skilled engineers make delivering quality products the greatest challenge. Faced with the reality of having to do more with less while managing multiple projects and distributed project teams, many organizations struggle to manage the quality programs for their products.
In addition, teams are under continuing pressure to enhance their operational capabilities so as to produce more with a shrinking investment budget.
This paper illustrates the challenges faced during the different stages of the product functional testing life cycle: test requirements gathering and management, test planning, test strategizing, test execution and test results reporting. It also describes the best practices that were institutionalized to cope with those challenges, resulting in an effective and efficient testing process and a highly satisfied customer.
The paper also illustrates a comprehensive measurement model that was adopted to drive improvement on a continuous basis.

Quality is an increasingly critical factor for software products as customers become more sophisticated, technology becomes more complex, and the software business becomes extremely competitive. Software quality may look like a simple concept, at least in the literature, but it is not so straightforward in practice, where requirements change rapidly and projects are continually understaffed and behind schedule.

One of the most important criteria for the success of any product is releasing the right product at the right time.

As Ed Kit has rightly said of testing, "It is fundamental to delivering quality software on time and within budget."
The quality of a software system is mainly determined by the quality of the software process that produced it. Similarly, the quality and effectiveness of software testing are largely determined by the quality of the test processes used.

We have to admit that it may not be practicable to test a product fully. The test coverage provided to a product is limited by the size of your test bank, supplemented by some amount of ad-hoc testing. But due to the increased product complexities of today's world, test bank sizes have become huge, along with an extensive list of supported hardware-software configurations. The challenge of providing coverage for the product's supported configuration matrix has become as important as providing functional coverage. Having said this, we have to live with the fact that we have a limited amount of resources/time available for providing this coverage. Hence, the challenge transforms into providing an optimum level of test coverage for the product. That is where effective test management comes into the picture.

There are a number of essential questions for testing: questions about product quality, risk management, release criteria, the effectiveness of the testing process and when to stop testing. Measurement provides the answers to these questions. But once we start to think about what can be measured, it's easy to be overwhelmed by the fact that we could measure almost anything. This is not practical, however, and we have to set priorities for measurement based on which measures are critical and will actually be used once we have them.

As Albert Einstein stated, "Not everything that counts can be counted, and not everything that can be counted counts."
Quite often in the world of software development, testing remains a low-focus area until software implementation is almost complete. Obviously, this approach to testing is inadequate in light of the increasingly high demands for software quality and shorter release cycles. As a result, the place of testing in the software lifecycle has expanded.

This paper explores the challenges faced during functional testing for a series of products from a world-leading Identity and Access Management solution provider, along with the practices that were adopted to cope with those challenges.
The paper also provides insight into a comprehensive measurement program established for the project, leading the team towards continuing operational excellence.

02 | Infosys

The Need for Functional Testing

Functional testing is a means of ensuring that software applications/products work as they should - that they do what users expect
them to do. Functional tests capture user requirements in a constructive way, provide both users and developers confidence that the
application/product meets those requirements, and enable QA teams to confirm that the software is ready for release.
Functional Testing is an important step for any software development process, whose importance only grows with the complexity of the
system being deployed.

Functional Testing - Effectiveness & Efficiency

Similar to the development process, testing requires a systematic approach, which includes requirements definition, test planning, test
design, test execution and analysis - to ensure optimum coverage, consistency and reusability of testing assets.
It begins with gathering the testing requirements and continues through designing and developing tests, executing those tests and analyzing product defects. The testing process is not linear, and it obviously differs depending on each organization's practices and methodologies. The fundamental principles of every testing process, however, remain the same.
The fundamental aim of any test manager is to have an effective and efficient method for organizing, prioritizing and analyzing an organization's entire testing effort while ensuring effective planning and execution across the various stages of the functional testing life cycle:

Test Requirements Gathering: Define clear, complete requirements that are testable. Requirements management plays a
critical role in the testing of software.

Test Planning: Identify test-creation standards and guidelines, identify hardware/software for the test environment, assign roles and responsibilities, define the test schedule and set procedures for executing, controlling and measuring the testing effort.

Test Strategizing: Devise plans for best possible utilization of the resources allocated for the test cycles ensuring optimum
test coverage.

Test Execution: Devise an efficient test execution flow/mechanism with the institutionalization of various tools and reusable test environments.
Defect Management: Associated with test execution is effective defect management. As today's systems become more complex, so does the severity of the defects. A well-defined method for defect management will benefit more than just the testing team.

Test Results Reporting: With increased application complexity and significance, more and more people are interested in the quality of a given product/application. By providing visibility into a product's health, large sets of stakeholders are able to satisfy themselves as to the expected quality of the product. In addition, senior management and executives are able to easily grasp and act upon critical quality information, acting on issues/exceptions before they turn into real problems. This visibility is only useful if it is easy to find, easy to comprehend and personalized for the individual.

Test Metrics Collection, Analysis and Improvement: Institutionalize an effective metrics model to gauge the testing process's health and take improvement actions on a continuous basis.


Functional Testing Life Cycle Challenges & Best Practices

While ensuring the functional quality of a product is the ultimate objective, the overall functional testing life cycle is constrained by a number of challenges and operational limitations. As the test manager for such a project, I faced many of these challenges, which over time led me to formulate a number of strategies to deal with them while instilling a culture of continuous improvement in the project.
Let us take a deeper look at those challenges and the associated best practices, starting with the first phase:

A) Test Requirements Gathering

Since the PRS (Product Requirements Specification) and the functional specifications for the various
product features were the only inputs to the test requirements gathering process, the following
were the major challenges for the testing team during this phase:


Define clear and complete test requirements

Manage changes to requirements

Best Practices:

Arrange for a PRS presentation by the product management team

Arrange for product feature presentations from development team

Prepare traceability matrices at 2 levels:

o A high level traceability matrix establishing traceability from requirements mentioned in the PRS to features and vice-versa (Refer figure A-1
below for a snapshot of traceability matrix at level-1)
o At the second level, prepare a traceability matrix for each feature, which establishes the traceability between detailed feature functional
requirements to test cases and vice-versa (Refer figure A-2 below for a snapshot of traceability matrix at level-2)

Get the traceability matrices reviewed by development team for completeness and clarity

In case of any requirement changes, make corresponding modifications to traceability matrices at both levels

Traceability matrices serve as a starting point for new testers deployed for a feature's testing

In summary, collaborate with the development team throughout the testing life cycle, from the test requirements gathering stage until product release (Refer figure A-3 below for the collaborative model, which was followed by the project)
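The two-level traceability described above can be sketched in code. The following Python snippet (all names and data hypothetical) models the level-1 and level-2 mappings and flags requirements whose features have no test cases yet, i.e. coverage gaps:

```python
# Minimal sketch of a two-level traceability check (all data hypothetical).
# Level 1 maps PRS requirement IDs to features; level 2 maps features to test cases.
req_to_features = {"REQ-1": ["Login"], "REQ-2": ["Audit"]}
feature_to_tests = {"Login": ["TC-001", "TC-002"], "Audit": []}

def uncovered_requirements(req_map, test_map):
    """Requirements whose features have no test cases yet (a coverage gap)."""
    gaps = []
    for req, features in req_map.items():
        if not any(test_map.get(f) for f in features):
            gaps.append(req)
    return gaps

print(uncovered_requirements(req_to_features, feature_to_tests))  # ['REQ-2']
```

A check like this is what the development-team review of the matrices accomplishes manually: every requirement must trace forward to test cases, and gaps surface before test planning begins.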

[Table: level-1 traceability matrix listing, for each product, the requirement ID (as per the PRS) against the corresponding traceability matrix name and test plan name, including versions, e.g. <First Req> mapped to First ReqTraceabilityMatrix-1.2 and First ReqTestPlan-1.2]

Figure A-1 Snapshot of Traceability Matrix at Level-1


[Table: level-2 traceability matrix listing, for each feature, its functional and sub-functional areas against the corresponding functional test case IDs and the strategy for testing each area]

Figure A-2 Snapshot of Traceability Matrix at Level-2


Clear understanding of product requirements achieved through presentations by development

Institutionalization of traceability matrices resulted in requirements completeness

Gaps, if any in the traceability matrices closed after review by development team

Reduction in learning time for each feature by the testers as the traceability matrices are quick and easy to go through

Effective management of changes to requirements through up-to-date traceability matrices

The collaborative approach to QA indicated in figure A-3 below proved to be really beneficial, resulting in requirements clarity and completeness, gap minimization and early gap closure. In reality, the collaborative approach, with QA and development teams working in partnership, has been found to be the mantra for successful execution of testing projects.

Figure A-3 A Collaborative Approach for QA


B) Test Planning

As per the earlier practice, the preparation of test plans was carried out directly on
the basis of feature functional specifications, which led to the following challenges for the
testing team during this phase:


Functional gaps in the test plans

Difficulty in review by the development team for large test plans

Best Practices:

Have the reviewed/approved traceability matrices as indicated in figure A-2 above serve as the basis for subsequent test plan generation


Due to completeness of traceability matrices, functional gaps in the test plans are reduced to a minimum

Reduction in the time required for test plan creation, since plans are now created on the basis of finalized traceability matrices
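Deriving test plans from finalized traceability matrices can be sketched as follows; the matrix content and the test-case ID scheme are hypothetical, but the idea is that every functional requirement in the matrix yields at least a placeholder test case, so functional gaps cannot silently creep in:

```python
# Hypothetical sketch: derive a test plan skeleton from a reviewed
# level-2 traceability matrix so every requirement carries a test case.
matrix = {
    "Password Reset": ["expired token rejected", "new password enforced"],
}

def plan_skeleton(feature_matrix):
    """One placeholder test case per detailed functional requirement."""
    plan = []
    for feature, requirements in feature_matrix.items():
        for i, req in enumerate(requirements, start=1):
            plan.append({"id": f"{feature}-TC-{i:03d}", "verifies": req})
    return plan

skeleton = plan_skeleton(matrix)
```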

C) Test Strategizing

The product to be functionally tested was a complex one, with a large feature set and a
number of supported hardware-software configurations. On the other hand, the time/effort
available for testing was limited.

As James Bach rightly said, "The test manager who refuses to face the fact that exhaustive testing is impossible chooses instead to seek an impossible level of testing."
Hence, we as test managers ought to understand that an optimum level of functional test coverage needs to be provided to the product
within available resources.
Following were the major challenges for the test team during this phase:


How to utilize the available limited resources/time to provide optimum test coverage to the product functionality while covering various
supported configurations

To monitor the exact hardware-software configurations for the test environment

Figure C-1 Snapshot of a Consolidated Functional Coverage Matrix


Best Practices:

Prioritization of test cases in test plans into levels 1, 2, 3 and 4

Preparation of a consolidated functional test coverage matrix for various test cycles to be executed (Refer figure C-1 below for a snapshot
of consolidated functional coverage matrix)

Preparation of a consolidated configurations coverage matrix for various test cycles (Refer figure C-2 below for a snapshot of consolidated
configurations coverage matrix)

Utilize an extensive decision model while deciding on the test matrix for each cycle (Refer figure C-3 below for a snapshot of the QA
decision model in place)

Institutionalize Test Automation:

o Start early on automation: evaluate and finalize on the test automation framework in the test planning stage itself as indicated in figure
A-3 above
o Start performing the automation feasibility analysis for various features along with test plan preparation itself
o Initiate tools search and evaluation along-side test planning
o While the product is under implementation, work rigorously towards getting the automation suite up and ready
o Plan to institutionalize daily test harness execution (after creation of a new product build) when the product implementation is mid-way
o The test automation should be portable to be executed in different hardware-software environments
o Try to make maximum utilization of automated test suites, while deciding on your test matrix

Prepare a detailed test matrix for each test cycle to monitor the details for test environment to a minute level and get the same reviewed
by the development team before proceeding with the testing (Refer figure C-4 below for a snapshot of detailed component and
functional test matrix for a test cycle)
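The priority-driven selection of test cases for a cycle might look like the following sketch, assuming hypothetical effort estimates and a per-cycle effort budget; level-1 cases are always considered first, so high-risk functionality gets depth-oriented coverage before lower levels fill the remaining budget:

```python
# Sketch (all numbers hypothetical): pick test cases for a cycle by
# priority level until the cycle's effort budget is exhausted.
test_cases = [
    {"id": "TC-1", "level": 1, "effort_hrs": 2},
    {"id": "TC-2", "level": 1, "effort_hrs": 3},
    {"id": "TC-3", "level": 2, "effort_hrs": 4},
    {"id": "TC-4", "level": 3, "effort_hrs": 5},
]

def select_for_cycle(cases, budget_hrs):
    """Greedy selection: lower level number means higher priority."""
    chosen = []
    for case in sorted(cases, key=lambda c: c["level"]):
        if case["effort_hrs"] <= budget_hrs:
            chosen.append(case["id"])
            budget_hrs -= case["effort_hrs"]
    return chosen

print(select_for_cycle(test_cases, 9))  # ['TC-1', 'TC-2', 'TC-3']
```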

Figure C-2 Snapshot of a Consolidated Configurations Coverage Matrix



Test case prioritization in the test plans enabled the test team to plan breadth-oriented testing for low-risk features while focusing in depth on high-risk features

Creation of consolidated functional and configuration coverage matrices enabled the team to plan the whole test program at a time and evaluate test coverage at a glance. This practice resulted in optimal utilization of the effort available for the various test cycles while evaluating the pros and cons of including or excluding various features from the test matrix

Institutionalization of the QA decision model enabled the test team to plan effectively for upcoming test cycles on the basis of results seen in earlier test cycles

Figure C-3 Snapshot of Test Strategizing Decision Model

Starting early with test automation helped the team catch defects early in the product implementation phase

Institutionalization of the daily test harness enabled quick detection of defects and regressions caused by recent code check-ins

Portability of the test harness enabled its execution on various supported hardware-software configurations with little or no effort, thereby resulting in maximum utilization of the test harness

Inclusion of automated test runs into the test matrix enabled the test team to reduce the testing effort radically

The practice of having a detailed component and functional test matrix, and its subsequent review by the development team, prevented ambiguities in setting up the test environment and established an agreement between the development and test teams before going into actual testing

Figure C-4 Snapshots of Detailed Component and Functional Test Matrices


D) Test Execution

This has been one of the most critical phases of the functional testing life cycle. Once the overall plan and strategy for test execution have been decided, this is the phase when the test team really comes into action and makes optimum utilization of the time and resources allocated for testing. In spite of detailed and effective planning for the test cycles, test teams sometimes fail to accomplish the planned amount of testing during this phase while juggling various issues related to test environment setup, test case understanding and, moreover, product areas that are not testable due to blocking functional issues.

The following were the challenges the test team had to cope with during this phase:


Testers struggling to learn the product functionality while using the actual product build for the first time, and due to mismatches between test plans and actual functionality

A large amount of time going towards test environment setup due to product complexity and the extensive set of supported configurations that need to be set up

Recurring issues related to test environment setup and test execution, which take enormous time to resolve for most testers, especially new entrants to the test team

Blocking issues found in product functional areas leading to extensive re-planning, putting the initial overall plan at stake

Best Practices:

Set up a process to get the intermediate product builds well before the formal QA hand-off. This process is also known as in-process
testing (as indicated in figure A-3 above)

Institutionalize usage of reusable test environments through tools such as VMware images, Ghost images, etc.

Establish a knowledge repository of the various problems encountered during test environment setup/test execution and their possible solutions. This repository can be searched quickly, especially by novice testers, instead of re-inventing the wheel. Whenever anyone in the team solves a particular problem, they log the problem and the corresponding solution in the repository; an email containing the problem and its solution is sent to the whole team at the same time
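A minimal sketch of such a problems/solutions repository, with a pluggable notification hook standing in for the team-wide email, could look like this (the storage and notification mechanisms are simplified assumptions, not the project's actual tooling):

```python
# Minimal sketch of the problems/solutions knowledge repository described
# above (storage and notification are hypothetical simplifications).
repository = []

def log_solution(problem, solution, notify):
    """Record a solved problem and notify the team (e.g. via email)."""
    repository.append({"problem": problem, "solution": solution})
    notify(f"New entry: {problem} -> {solution}")

def search(keyword):
    """Keyword search testers run before debugging an issue from scratch."""
    keyword = keyword.lower()
    return [e for e in repository if keyword in e["problem"].lower()]

sent = []  # stands in for an outgoing mail queue
log_solution("LDAP bind fails on port 636", "Import the CA certificate", sent.append)
```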


Starting with in-process QA well before the start of formal test cycles enabled the test team to learn the functionality by playing around with the product

Functional gaps between the actual product and the test plan were closed early, before going into formal functional testing

The test team was able to identify and report defects for the blocking functional areas well before going into formal functional test cycles, resulting in effective utilization of the time allocated for formal test cycles and well-planned testing

An overall reduction of approx. 25% was realized in the time going towards test environment setup due to the reusability achieved with
the usage of VMware/Ghost images. The test environment, setup by one tester can now be reused by multiple other testers. Moreover,
the setups for third party servers/components can be preserved as VMware images for usage during subsequent test runs

The institutionalization of the problems/solutions knowledge repository resulted in an approx. 30% reduction in the time spent on issue resolution. Any tester facing a problem now searches the repository for a solution before spending time on resolving it afresh


E) Defect Management

Effective defect tracking, analysis and reporting are critical steps in the testing process. A
well-defined method for defect management will benefit more than just the testing team.

The defect statistics are a key parameter for evaluating the health of the product at any stage during test execution.
Extra time spent on preparing a defect profile and its history often pays off through easier analysis, shorter resolution times, and better product quality. This includes not only the new defects reported against the product, but also the defects reported during earlier test cycles and the defects to be verified.
Following were the challenges, which were faced by the test team while managing the defects for the product:


Incomplete and ambiguous defect reporting, resulting in too many defects coming back marked "Needs More Information" by development

Inconsistency in the structure of defects reported by various test team members

Testers assigning improper Severity/Priority to defects

Testers opening new defects for problems already reported by other testers during previous test cycles

Too many defects being marked as Not-A-Defect by development due to Operator errors

Inappropriate verification procedure for the fixed defects, which come for verification

Inadequate tracking of the various defects logged to a logical closure

Inappropriate tracking of defect trend resulting in lack of insight into current health of the product

Best Practices:

Institutionalize standard templates for defect profile preparation and defect verification profile preparation having comprehensive
details for each defect (Refer figures E-1 (a) and E-1 (b) below for snapshots of these templates)

Establish clear definitions for Defect Severity and Priority. Train test team members on the same and have all of the defects go through
a thorough review by the test lead before reporting

Maintain an updated list of the various defects logged against the product till date containing appropriate details for these defects,
which testers can refer to as and when they encounter a defect (Refer figure E-2 for snapshot of sample defects list for a product)

Coordinate and discuss with development team regarding the suspicious defects before reporting

Establish a process, wherein the development team specifies the procedure of verification for all the fixed defects in the defect tracking
system. Also, attach the verification profile for all the verified defects in the defect tracking system

Institutionalize various defect metrics, which are tracked, analyzed and reported on a continuous basis to various stakeholders (Refer
figure E-3 for snapshots of a few defect metrics, which were tracked)
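The standard defect profile and the duplicate check against the running defect list can be sketched as follows; the field set and the crude summary comparison are illustrative assumptions, not the project's actual templates:

```python
# Hypothetical sketch of a standard defect profile and a duplicate check
# against the running list of defects logged to date.
from dataclasses import dataclass

@dataclass
class Defect:
    defect_id: str
    summary: str
    severity: int   # 1 (critical) .. 4 (cosmetic), per agreed definitions
    priority: int   # 1 (fix first) .. 4

logged = [Defect("D-101", "Crash on empty password", severity=1, priority=1)]

def looks_duplicate(summary, known):
    """Crude check a tester can run before filing a new defect."""
    return any(summary.lower() == d.summary.lower() for d in known)

print(looks_duplicate("crash on empty password", logged))  # True
```

A real defect tracker would use fuzzier matching, but even an exact-match pass over the running list catches the most common duplicates.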

Figure E-1 (a) Defect Profile Template: A Snapshot


Figure E-1 (b) Defect Verification Profile Template: A Snapshot

Figure E-2 List of Defects Logged against the Product: A Running List


Standard templates for defect profile and defect verification profile resulted in consistent and comprehensive defects being logged by
various test team members

Assignment of appropriate Severity and Priority to various defects logged enabled development team to focus on these defects in a
well-organized manner

Referring to the list of the defects logged against the product till date enabled the test team to avoid any Duplicate defects as well as
to track every defect to a logical closure

Coordination with development team on suspicious defects brought about a considerable reduction in the number of defects being
marked as Operator Errors

Specifying the verification procedure in the defect tracking system enabled the development team to validate the verification procedure
followed by QA for any defect resulting in an increased consistency in the verification procedure

Institutionalization of various defect metrics resulted in accurate and quick insight into the product's health and helped focus efforts on the right product areas

The severity distribution of defects provided a quick insight into product quality.

The priority distribution of defects, in conjunction with the severity distribution, provided an assessment of release readiness for the product.
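Computing the two distributions is straightforward; the sketch below (hypothetical data) also shows one possible release gate combining them, which is an illustrative assumption rather than the project's actual criterion:

```python
# Sketch (hypothetical data): severity and priority distributions of
# open defects, with the two views combined into a release-readiness check.
from collections import Counter

defects = [
    {"id": "D-1", "severity": 1, "priority": 1},
    {"id": "D-2", "severity": 2, "priority": 1},
    {"id": "D-3", "severity": 3, "priority": 2},
]

severity_dist = Counter(d["severity"] for d in defects)
priority_dist = Counter(d["priority"] for d in defects)

# Simple illustrative gate: no release while severity-1 or priority-1
# defects remain open.
release_ready = severity_dist[1] == 0 and priority_dist[1] == 0
print(release_ready)  # False
```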



The functional defect trend for the product enabled various stakeholders to gain a quick insight into product health and to prioritize development efforts

This metric is used to track the various defects logged against the product to a logical closure. The defects that are still Open are the primary area of concern, apart from the ones marked as Not A Bug

This metric serves as an indicator of the effectiveness of the testing process and test execution. If a good percentage of defects is marked as Operator Error, it indicates that the test engineer's understanding of the functionality is low or that there have been gaps in the requirements document

Figure E-3 - A Few Defect Metrics


F) Test Results Reporting

Once the test team is done with a test cycle, or even at intermediate stages, it is desirable to have visibility into the quality of the product. With the increase in both product complexity and significance, more and more people are interested in the quality of a given software product. By providing this visibility, large sets of stakeholders are able to assure themselves as to the anticipated quality of an application. In addition, senior management and executives are able to easily grasp and act upon critical quality information, acting on exceptions before they turn into real problems.
Following were the challenges, which were faced by the test team during the test results reporting phase:


The test results report should be easy to comprehend and useful for all stakeholders, from the testers in the team to the senior-most executive with an interest in the product

It should provide the feature-wise health status of the product along with feature-wise defect data for easy Go/No-Go decision-making for the product

Best Practices:

Simple metrics, charts and graphs are often preferable to text, as they are easy to comprehend, and because they highlight exceptions.
A comprehensive test results report template was prepared in a spreadsheet format, which was then reviewed and approved by the
customer containing the following details:
o Component Version Details: This sheet provides the details of versions for various hardware-software used for functional testing
o Test Matrix: This sheet provides the actual test matrix used for testing
(Refer figure C-4 above for a snapshot of detailed component and functional test matrices for the test cycle)
o Result Summary report: This sheet provides the overall summary of the test cases executed and the effort spent during testing for each feature
o New Defect Details: This sheet provides the details for the new defects found during the current test cycle
o Old Defect Details: This sheet provides the details for the defects, which were filed during some earlier test cycles for the product and have
also been observed during the current test cycle
o Feature-Risk Analysis: This sheet provides the risk associated with each feature on a scale of High (red), Medium (yellow) and Low (green),
based on the test cases that have failed, are blocked or were not executed for the feature
o Feature-Test Case Percentage-Chart-<Platform>: This chart provides the distribution of percentage of test cases passed, failed and not
executed for each feature on <Platform> platform. There will be one such sheet for each platform tested
o Feature-Test Case Count-Chart-<Platform>: This chart provides the distribution of number of test cases passed, failed and not executed for
each feature on <Platform> platform. There will be one such sheet for each platform tested
o Defect Severity Distribution: This chart provides the distribution of defects based on Severity
o Defect Priority Distribution: This chart provides the distribution of defects based on Priority
(Refer figure E-3 above for snapshots of defect severity/priority distribution)
o Effort-Feature-Chart-<Platform>: This chart provides the effort distribution per feature basis executed on <Platform> platform. There will
be one such sheet for each platform tested

(Refer figure F-1 below for snapshots of sample graphs/charts included in the detailed test summary report)
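The feature-risk analysis sheet described above can be approximated as follows; the 10% and 30% thresholds are illustrative assumptions, not figures from the project:

```python
# Sketch of a feature-risk rating based on failed, blocked and
# not-executed test cases (thresholds are hypothetical).
def feature_risk(passed, failed, blocked, not_executed):
    """Rate a feature High/Medium/Low by its unverified fraction."""
    total = passed + failed + blocked + not_executed
    uncovered = (failed + blocked + not_executed) / total
    if uncovered > 0.30:
        return "High"    # red
    if uncovered > 0.10:
        return "Medium"  # yellow
    return "Low"         # green

print(feature_risk(passed=95, failed=2, blocked=0, not_executed=3))  # Low
```

A per-feature row of such ratings gives the Go/No-Go audience the color-coded view described above without their having to read test-case detail.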


Provides a risk assessment for each feature from a release-readiness point of view

Provides a high-level feature-wise quality picture for the product

The comprehensive test summary report provided clear visibility into product quality while offering personalized views for different stakeholders. Along with the metrics that provided detailed test case, defect and effort counts, there were metrics like Feature Risk Assessment and Feature-wise Percentage Pass/Fail rates, which gave a high-level snapshot of the product

The comprehensive test summary report received tremendous appreciation from the customer

Provides a more detailed feature-wise summary for the stakeholders interested in the same

Provides insight into the features that have taken more execution time; helps devise future strategies to reduce the effort for such features

Figure F-1 Components of A Detailed Test Summary Report



G) Test Metrics Collection, Analysis & Improvement

Test metrics are an important indicator of the effectiveness of a software testing process. Areas for process improvement can be identified based on the analysis of the defined metrics, and subsequent improvements can be targeted.
Hence, Test Metrics Collection, Analysis and Improvement is not just a single phase in the testing life cycle; rather, it acts as an umbrella of continuous improvement over the whole testing life cycle.


The need for a mechanism for measuring the efficiency, effectiveness and quality of the testing process so as to identify the areas of improvement

The need for measuring the program/product health objectively

Best Practices:

Identify a set of process/product metrics to be tracked on a continuous basis. Refer Figure G-1 below for a summary of the various
metrics institutionalized:

Refer Figures G-2, G-3 and G-4 below for snapshots of test metrics adopted for test process efficiency, effectiveness and quality respectively.
Refer Figures G-5 and G-6 below for snapshots of test metrics institutionalized for Product/Program Health and Defect Tracking respectively.

Develop dashboards for these metrics and share these dashboards with the customer at 2 levels: at test program level, at regular intervals

Analyze these metrics regularly and take improvement actions
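As an example of one such metric, test execution productivity against project control limits can be sketched like this (all numbers hypothetical):

```python
# Sketch of the test execution productivity metric with project
# control limits (numbers hypothetical).
LOWER, UPPER = 8.0, 14.0  # test cases executed per person-day

def productivity(cases_executed, person_days):
    """Test cases executed per person-day for a cycle."""
    return cases_executed / person_days

def within_control(value):
    """Flag cycles whose productivity drifts outside the control band."""
    return LOWER <= value <= UPPER

cycle = productivity(cases_executed=120, person_days=10)  # 12.0
print(within_control(cycle))  # True
```

Plotting this value per cycle against the two limits yields exactly the kind of trend chart the dashboards above are built from: out-of-band points trigger analysis and improvement actions.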

Figure G-1 Summary of the Test Metrics Institutionalized


The dashboards enabled integrated, accurate and actual reporting, with a holistic view of metrics and graphical charts to facilitate
decision making

Analysis of the metrics provides an insight in to the process maturity and the areas, where improvements can be targeted

Data in the graphical form is easily interpretable

Customer gets a formalized mechanism to assess the effectiveness and efficiency of the QA process

A useful way to educate the team on the importance of various process parameters and to eliminate problem areas in a planned manner

An effective way to present to Senior Management


Test Process Efficiency Metrics in Practice (Figure G-2):


Test Execution Productivity

Shows manual test execution productivity with respect to the upper and lower control limits set for the project. Depicts execution efficiency and helps identify problematic areas to improve, where feasible.
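As an illustration, the computation behind such a productivity chart can be sketched in a few lines of Python. The cycle data and control limits below are purely illustrative, not drawn from the program described in this paper:

```python
# Illustrative sketch: manual test execution productivity per cycle,
# checked against project-specific control limits (all values invented).
def execution_productivity(tests_executed, person_hours):
    """Test cases executed per person-hour."""
    return tests_executed / person_hours

def within_control_limits(value, lcl, ucl):
    """True if the productivity value falls inside the control band."""
    return lcl <= value <= ucl

# One record per test cycle: (cycle, tests executed, person-hours spent).
cycles = [("C1", 480, 60), ("C2", 300, 60), ("C3", 510, 60)]
LCL, UCL = 6.0, 9.0  # assumed control limits in tests/person-hour

for name, executed, hours in cycles:
    p = execution_productivity(executed, hours)
    flag = "" if within_control_limits(p, LCL, UCL) else "  <-- investigate"
    print(f"{name}: {p:.1f} tests/person-hour{flag}")
```

Cycles falling outside the control band are the ones flagged for root-cause analysis.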


Shows combined (manual + automated) test execution productivity. Depicts execution efficiency and helps identify problematic areas to improve, where feasible.

Cost of Testing

Shows phase-wise effort distribution. Depicts effort-intensive areas to focus on for improvement.
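A phase-wise effort distribution of this kind reduces to simple percentages of total effort; a minimal sketch, with invented phase names and hours, might look like this:

```python
# Illustrative sketch: phase-wise effort distribution as a percentage of
# total testing effort (phase names and hours are invented).
effort_hours = {
    "Test planning": 120,
    "Test design": 260,
    "Test execution": 540,
    "Defect reporting": 80,
}

total = sum(effort_hours.values())
distribution = {phase: 100.0 * h / total for phase, h in effort_hours.items()}

# The phase consuming the most effort is the first candidate for improvement.
focus = max(distribution, key=distribution.get)
for phase, pct in distribution.items():
    print(f"{phase:>16}: {pct:.1f}%")
print("Focus area:", focus)
```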



Average Defect Turnaround Time

Depicts the average verification time taken for defects of Priority-1. Indicates the operational efficiency of the test team and helps in identification of areas for improvement.
(Similar trends are tracked for defects of other priority levels.)
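The underlying turnaround computation is simply the mean interval between a defect being marked fixed and its verification; a sketch with invented dates:

```python
# Illustrative sketch: average verification turnaround for Priority-1
# defects, i.e. mean time from "fixed" to "verified" (dates invented).
from datetime import datetime

def avg_turnaround_days(defects):
    """Mean (verified - fixed) interval in days for a list of defects."""
    spans = [(verified - fixed).days for fixed, verified in defects]
    return sum(spans) / len(spans)

p1_defects = [  # (fixed_on, verified_on)
    (datetime(2013, 3, 1), datetime(2013, 3, 2)),
    (datetime(2013, 3, 4), datetime(2013, 3, 7)),
    (datetime(2013, 3, 5), datetime(2013, 3, 7)),
]
print(f"Avg P1 turnaround: {avg_turnaround_days(p1_defects):.1f} days")
```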


Test Automation Productivity Trends

Shows trends in the productivity of test case automation. Depicts changes in the performance levels of the automation team and helps identify problems, if any.

Average Defect Response Time

Depicts the average response time taken for defects of Priority-1 when a defect is set to "Needs More Info", asking for more information from the test team. Indicates the operational efficiency of the test team and helps in identification of areas for improvement.
(Similar trends are tracked for defects of other priority levels.)


Test Case Automation Trends

Shows trends in the amount of work done by the automation team. Helps identify time intervals having lower automation and take remedial actions.

Figure G-2 Test Process Efficiency Metrics in Practice


Test Process Effectiveness Metrics in Practice (Figure G-3):


Functional Test Coverage (Feature-wise)

Shows feature-wise and priority-wise % test execution. Depicts detailed test coverage at a glance and provides a mechanism to validate the test strategy.


Functional Test Coverage (Overall)

Shows overall % test execution. Depicts overall functional test coverage at a glance.
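Both coverage views derive from the same ratio of executed to planned test cases; a minimal sketch, with invented features and counts, could be:

```python
# Illustrative sketch: feature-wise and overall functional test coverage,
# expressed as % of planned test cases actually executed (data invented).
planned = {"Login": 40, "Search": 60, "Checkout": 100}
executed = {"Login": 40, "Search": 45, "Checkout": 90}

def coverage_pct(done, total):
    """Percentage of planned test cases that were executed."""
    return 100.0 * done / total

feature_cov = {f: coverage_pct(executed[f], planned[f]) for f in planned}
overall_cov = coverage_pct(sum(executed.values()), sum(planned.values()))

for feature, pct in feature_cov.items():
    print(f"{feature}: {pct:.0f}% executed")
print(f"Overall: {overall_cov:.1f}%")
```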

Defects Automated/Added to Test Plans

Depicts the number of defects verified, automated and added to test plans per test cycle. Serves as an operational effectiveness parameter for the test team.


Failed Test Cases/Hr, Failed Test Cases/Total Test Cases Executed

Depicts the effectiveness of testing as well as the cost of catching defects.
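These two ratios can be computed directly from cycle totals; a sketch with invented figures:

```python
# Illustrative sketch: two effectiveness ratios per cycle - failed test
# cases per hour of execution, and failed per total executed (data invented).
def failure_metrics(failed, executed, hours):
    """Return (failed per hour, failed per executed test case)."""
    return failed / hours, failed / executed

per_hour, per_executed = failure_metrics(failed=30, executed=500, hours=60)
print(f"Failed/hour: {per_hour:.2f}")          # defect-finding rate
print(f"Failed/executed: {per_executed:.2%}")  # yield of the test set
```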



Test Automation Coverage

Shows the trends in automation coverage quarter-by-quarter. Depicts current automation coverage and helps focus efforts towards further automation.


Effort Savings through Test Automation

Shows the feature-wise savings in execution time achieved through test case automation. Helps evaluate the benefits (ROI) of test automation, share them with the customer and senior management, and decide on the future roadmap.
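One common way to quantify such savings is the manual-versus-automated run-time difference over the number of runs, net of a one-time scripting cost. The function and figures below are an invented sketch, not the paper's own ROI model:

```python
# Illustrative sketch: net effort savings from automation per feature.
# savings = (manual run time - automated run time) x runs - scripting cost
# (all figures invented, in hours).
def automation_savings(manual_hrs, auto_hrs, runs, scripting_cost):
    """Net hours saved over `runs` executions of an automated suite."""
    return (manual_hrs - auto_hrs) * runs - scripting_cost

features = {
    # feature: (manual hrs/run, automated hrs/run, runs so far, scripting hrs)
    "Login": (8.0, 0.5, 20, 40),
    "Search": (12.0, 1.0, 20, 80),
}
for name, args in features.items():
    print(f"{name}: net savings {automation_savings(*args):.0f} hours")
```

A negative result would indicate the suite has not yet repaid its scripting cost.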

Figure G-3 Test Process Effectiveness Metrics in Practice

Test Process Quality Metrics in Practice (Figure G-4):


Percentage of Defects Marked as Operator Errors

Depicts the quality of the functional test team's work. Having more operator errors is an area of concern and needs to be looked into.

Figure G-4 Test Process Quality Metrics in Practice


Product/Program Health Metrics in Practice (Figure G-5):


Feature Sensitivity

Shows the % of failed test cases for various product features over multiple test cycles. Depicts sensitive features.


Feature Sensitivity

Shows feature-wise sensitivity. Depicts defect-prone feature(s).
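Feature sensitivity reduces to the failure percentage per feature in a cycle; a minimal sketch with invented results:

```python
# Illustrative sketch: feature sensitivity = % of a feature's test cases
# that failed in a cycle; high values flag defect-prone features (data invented).
def sensitivity(failed, executed):
    """Failure percentage for one feature in one test cycle."""
    return 100.0 * failed / executed

cycle_results = {  # feature: (failed, executed) in the latest test cycle
    "Login": (2, 40),
    "Search": (12, 60),
    "Checkout": (5, 100),
}
scores = {f: sensitivity(*r) for f, r in cycle_results.items()}
most_sensitive = max(scores, key=scores.get)
print("Most defect-prone feature:", most_sensitive)  # highest failure %
```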

Figure G-5 Product/Program Health Metrics in Practice


Defect Metrics in Practice (Figure G-6):


Test Cycle-wise Defects

Shows new defects logged during various test cycles. Depicts how the product quality fared over multiple test cycles.

Depicts (new + old) defects logged during various test cycles. The old defects are those that were reported during an earlier test cycle and are detected again during the current test cycle.
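Splitting each cycle's defects into new and carried-over counts only requires remembering which defect IDs were seen in earlier cycles; a sketch with invented defect IDs:

```python
# Illustrative sketch: splitting each cycle's logged defects into new ones
# and carry-overs first reported in an earlier cycle (defect IDs invented).
def split_defects(cycle_defects):
    """Yield (cycle, new_count, old_count) given defect IDs per cycle."""
    seen = set()
    for cycle, ids in cycle_defects:
        new = [d for d in ids if d not in seen]
        old = [d for d in ids if d in seen]
        seen.update(ids)
        yield cycle, len(new), len(old)

history = [("C1", {1, 2, 3}), ("C2", {2, 4, 5}), ("C3", {5, 6})]
for cycle, new, old in split_defects(history):
    print(f"{cycle}: {new} new, {old} carried over")
```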

Figure G-6 Defect Metrics in Practice


Continuous improvement is the key to the success of any process. With the metrics model illustrated above in place, the model itself must be continuously enhanced to drive further improvement.

As H. James Harrington said, "The journey towards excellence is a never-ending job."

References:

Software Engineering: A Practitioner's Approach, 6/e, by Roger S. Pressman, R.S. Pressman and Associates, 2005, ISBN 0072853182
A Framework for Good Enough Testing, by James Bach, Reliable Software Technologies
Software Testing in the Real World: Improving the Process, by Edward Kit


About the Author

Mandeep Walia
is a Group Project Manager with Infosys Technologies Ltd. He has over 13 years of IT experience encompassing Software
Development, Maintenance, Testing and Professional Services. He is certified as a Project Management Professional (PMP)
by PMI and as a Certified Software Quality Analyst (CSQA) by QAI, USA. During his career at Infosys, Mandeep has managed
multiple large and complex software programs for Fortune 500 companies.

Common Terms Used in This Document:

QA: Quality Assurance

IT: Information Technology


About Infosys
Infosys partners with global enterprises to drive their innovation-led growth.
That's why Forbes ranked Infosys 19th among the top 100 most innovative
companies. As a leading provider of next-generation consulting, technology
and outsourcing solutions, Infosys helps clients in more than 30 countries
realize their goals. Visit and see how Infosys (NYSE: INFY),
with its 150,000+ people, is Building Tomorrow's Enterprise today.

For more information, contact

2013 Infosys Limited, Bangalore, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice.
Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted,
neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or
otherwise, without the prior permission of Infosys Limited and/or any named intellectual property rights holders under this document.