
Cognizant White Paper

Eight Steps to Better Test Automation

Automation doesn't automatically make software testing faster, more reliable and less expensive. Because the up-front costs of automation tools and setup can be high, automated testing pays off only if the long-term savings offset those initial expenses. In addition, not all automation tools and methodologies have the same features, functions and capabilities, and each project may have different requirements that affect its costs and benefits. To help you get the maximum benefit from automated testing, we offer eight tips based on our experience with more than 50 enterprise-level automated testing projects worldwide. Following these tips can help increase the ROI of test automation and improve software quality.

Top eight ways to achieve efficient test automation.


One: Choose What to Automate

Not all software projects lend themselves equally to test automation. Understanding which factors increase the difficulty of automating a specific test project is essential to balancing the costs against the benefits. Good candidates for automated testing include code that:

- Plays an important role in an application.
- Processes large amounts of data.
- Executes common processes.
- Can be used across applications.

Organizations should also look for nontraditional, and even unplanned, areas over which they can spread their automation investment. These include automating the testing of installation routines for patches and defect fixes, automating test management, and automating the creation of test reports.

Less appropriate candidates for automated testing include:

- Code that makes extensive use of non-standard software controls (the components that define the user interface). Automated test tools can typically assess the quality of standard controls within operating systems such as Windows and Linux, but often cannot do the same for controls created by development tools vendors such as Infragistics and application vendors such as Oracle and Siebel.
- Code or test cases for which the production or infrastructure teams have not supplied the right type, or the proper amount, of data to support a full-scale automated test program.
- Code that changes often, since such changes may also require time-consuming manual revisions to either the test script or the test data.

We have developed a nine-point decision tree (see Figure 1) that helps clients select automation candidates based on criteria including:

- Technical feasibility.
- The frequency of test execution.
- The extent to which test components can be reused.
- Total resource requirements.
- The complexity of the test cases.
- The ability to use the same test cases in multiple browsers or environments.
- The time required to execute the tests.

Nine-Point Test Decision Tree

Figure 1

Two: Choose Your Test Tools

Because companies must amortize their investment in automation, they should select automated test tools that will meet their needs for years. Proper evaluation criteria include:

- Support for various types of automated testing, including functional, test management, mobile, SOA and IVR (interactive voice response) software.
- Support for multiple testing frameworks (the methodologies that enable the reusability and efficiency of automation).
- The ability to recognize objects created in a wide variety of programming languages.
- Stable setup and operation on any environment or platform.
- Ease of debugging the automation scripts.
- Efficient test execution with a minimum of manual effort.
- Automatic recovery from application failures, to prevent test interruptions.
- A powerful scripting language that makes it easy to develop scripts (the instructions carried out in a specific test) that can be reused across platforms and test types.
- The ability to report results to, and be controlled by, a wide variety of test management platforms.
- Ease of use, to minimize training costs and delays.
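One common way to apply evaluation criteria like these is a weighted scorecard. The sketch below is illustrative only; the criteria keys, weights and ratings are invented, not taken from the paper:

```python
# Hypothetical weighted scorecard for comparing automated test tools.
# Criteria names, weights and the 0-5 ratings are all illustrative.

CRITERIA_WEIGHTS = {
    "test_type_support": 0.25,    # functional, mobile, SOA, IVR ...
    "framework_support": 0.15,
    "object_recognition": 0.15,
    "platform_stability": 0.15,
    "script_debugging": 0.10,
    "unattended_execution": 0.10,
    "ease_of_use": 0.10,
}

def score_tool(ratings: dict) -> float:
    """Combine per-criterion ratings (0-5) into a single weighted score."""
    return sum(CRITERIA_WEIGHTS[c] * ratings.get(c, 0) for c in CRITERIA_WEIGHTS)

tool_a = {"test_type_support": 4, "framework_support": 5, "object_recognition": 3,
          "platform_stability": 4, "script_debugging": 4, "unattended_execution": 3,
          "ease_of_use": 5}
print(round(score_tool(tool_a), 2))  # → 4.0
```

Scoring each candidate tool the same way makes the trade-offs between, say, broad test-type support and ease of use explicit rather than anecdotal.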

There are several commercial tools that provide these capabilities, but they are expensive, so the decision to procure them should weigh the proposed investment against future requirements, as noted above. Customizing open-source tools to meet specific testing needs can be an effective way of reducing costs. Among the open-source tools we have customized are the Selenium suite for automated testing of Web applications and the Bugzilla defect-tracking tool. We have also created a framework based on the WATIR (Web Application Testing in Ruby) toolkit to automate browser-based tests during Web application development.

Another way to speed ROI from automated testing is to use inexpensive software tools that, while not designed for testing, can be useful in automating certain scenarios. For example:

- The macro-recording capabilities that ship with most operating systems can automate processes such as loading test data, scheduling, and executing tests.
- Free or low-cost file-comparison utilities can determine whether actual test output matches the expected results, at far lower cost than commercial off-the-shelf test automation tools.

We have developed a methodology to help customers choose the test automation software most appropriate to their needs, and to make the most effective use of both new and existing automated test tools (see Figure 2). It begins with defining the objectives for the tools and specifying the tests to be automated, such as functional testing or back-end validation. The next steps are to define the requirements, build an evaluation scorecard, perform a proof of concept, and finally prepare the tools for deployment. The methodology also helps customers optimize their use of automated test tools by identifying all testing needs across the organization, creating an inventory of tools already available, and reviewing existing licensing agreements to ensure only the required licenses are purchased.

Tool Selection

Figure 2
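The file-comparison tactic mentioned above can be as simple as diffing expected against actual output. A minimal sketch using Python's standard difflib; the sample output lines are hypothetical:

```python
import difflib

def compare_output(actual_lines, expected_lines):
    """Return a unified diff; an empty list means the outputs match."""
    return list(difflib.unified_diff(expected_lines, actual_lines,
                                     fromfile="expected", tofile="actual",
                                     lineterm=""))

# Hypothetical expected vs. actual output from a test run
expected = ["balance=100", "status=OK"]
actual = ["balance=100", "status=ERROR"]

diff = compare_output(actual, expected)
print("PASS" if not diff else "\n".join(diff))
```

A script like this, run after each test, flags mismatches automatically without any commercial tooling.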

Three: Refine Your Test Process

In many organizations, the lack of centralized, standardized automation processes has led to an elongated testing lifecycle that is prohibitively expensive and fails to detect the maximum number of defects. Improving such processes requires the following (see Figure 3):

- Describing the risks of current methods, and showing how testing can be done at lower cost, more quickly and/or more effectively.
- Management commitment to provide the funding, and to support the workflow changes, required to improve testing processes.
- Securing support from the test staff for meeting improvement goals.
- Training to provide test managers, specialists and testing engineers with specialized skills in testing methodologies.
- Prioritizing process improvements based on business goals.
- Ongoing measurement of test processes to achieve a higher return on investment.


A Standardized Test Automation Framework

Figure 3

A Hybrid Approach to Test Automation

Four: Choose a Framework

Just as with any tool, automation test solutions must be used correctly to be effective. Choosing an appropriate framework can help boost long-term reusability and efficiency. A framework is not a replacement for an automation tool, but a roadmap for its effective use. It should allow for parameterization (separate storage) of test scripts and test data, to maximize reuse and ease maintenance of both. In this way, when changes are made to a test case, only the test case file needs to be updated, while the driver and start-up scripts remain the same. Similarly, the test data can be changed as needed without requiring changes to the test scripts. Among the popular frameworks are those that are data-driven, in which the test data is stored separately from the automation tool. This approach provides significant ease of use, simplifies report customization and data maintenance, and allows multiple test cases to be run with multiple sets of input data. However, the initial cost and maintenance overhead can be considerable. Another approach is keyword-driven, in which data tables and keywords are maintained independently of the automation tool and the test scripts that drive the tests. But this is somewhat harder to use than the data-driven approach, which increases costs and delays. We have developed a hybrid framework (see Figure 4) that combines the best elements of both the keyword- and the data-driven approaches. It stores the test data apart from the automation tool (usually in an Excel spreadsheet), making it very easy to maintain and reuse the scripts.

Figure 4
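The keyword-driven mechanics at the heart of such a framework can be sketched in a few lines. Everything here is hypothetical (the login/search actions, the keyword names and the table rows); in the full framework the table would live in a spreadsheet and the driver would stay unchanged as rows are added:

```python
# Minimal sketch of a keyword-driven runner: keywords and test data live
# in a table apart from the driver script. Actions and rows are invented.

def login(user, password):
    return user == "alice" and password == "secret"

def search(term):
    return "results for " + term

KEYWORDS = {"login": login, "search": search}

# Test data kept apart from the driver: (keyword, arguments, expected result).
test_table = [
    ("login", ("alice", "secret"), True),
    ("search", ("automation",), "results for automation"),
]

def run(table):
    """Dispatch each row's keyword and compare the result to the expectation."""
    return [(kw, KEYWORDS[kw](*args) == expected) for kw, args, expected in table]

for keyword, passed in run(test_table):
    print(keyword, "PASS" if passed else "FAIL")
```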

It allows multiple test cases to be driven with multiple sets of data, and parameterizes the data to ease maintenance and modification. It drives efficiency by allowing business users to design and change tests without learning complex scripting languages. The use of keywords also makes it easy to measure how much of the code required to perform common business functions, such as validating an applicant's age, is reused over time.

Five: Don't Underestimate the Manual Effort

The word automation implies that machines do the work, not humans. But the amount of manual effort required in automated testing is one of the least understood aspects of software testing. Humans must set up the test machines, create the scripts, select the tests, analyze the test results, log the defects and clean up the test machines. An accurate estimate of these costs is important not only for budgeting and planning, but also for building an accurate return-on-investment calculation. In our client work, we have identified the factors IT organizations should take into account when estimating the manual effort required for test automation. These include the complexity of the language used to create test scripts, and the amount of work required to plan, generate, execute and maintain the scripts. Another aid to estimating effort is classifying test cases as simple, medium or complex, based on the number of transactions and the number of steps within the scripts required by each case.
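A classification scheme of this kind can be sketched as follows. The thresholds, the step/transaction weighting and the per-class effort figures are all assumptions for illustration, not figures from the paper:

```python
# Illustrative effort estimation: classify each test case by size, then
# sum assumed per-class scripting effort (person-hours).

EFFORT_HOURS = {"simple": 2, "medium": 6, "complex": 16}  # assumed figures

def classify(steps: int, transactions: int) -> str:
    """Bucket a test case by a simple weighted size score (assumed weights)."""
    score = steps + 2 * transactions
    if score <= 10:
        return "simple"
    if score <= 30:
        return "medium"
    return "complex"

# Hypothetical test cases as (steps, transactions) pairs
cases = [(5, 1), (12, 5), (40, 10)]
total = sum(EFFORT_HOURS[classify(s, t)] for s, t in cases)
print(total)  # → 24 person-hours
```

Even a rough model like this makes the manual component of "automated" testing visible in the budget instead of surfacing later as overruns.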


Six: Watch Those Scripts

Scripting standards are particularly important in the following areas (see Figure 5):

- Exception handling, which determines whether, and how well, a test script can recover from the failure of the application being tested, or from unexpected behavior such as the appearance of a pop-up. Standards help ensure all scripts continue to run in the event of unexpected conditions, reducing the need for manual intervention and speeding test execution.
- Error logging, where standards make it easier for multiple developers and testers to analyze and act on the test results.
- Documentation standards, in areas such as comments and indentation, which help developers create uniform scripts and understand script code created by others.
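A sketch of what such exception-handling and logging standards look like in practice, using Python's standard logging module; the step names and actions are hypothetical:

```python
import logging

# Uniform log format so multiple testers can read results the same way.
logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("suite")

def run_step(name, action):
    """Run one test step; log a failure and let the suite keep going."""
    try:
        action()
        log.info("step %s passed", name)
        return True
    except Exception as exc:          # recover instead of aborting the run
        log.error("step %s failed: %s", name, exc)
        return False

def unexpected_popup():
    raise RuntimeError("unexpected pop-up")

steps = [
    ("open_app", lambda: None),
    ("dismiss_popup", unexpected_popup),
    ("validate_data", lambda: None),
]
results = [run_step(name, action) for name, action in steps]
print(results)  # → [True, False, True]
```

Because every script wraps its steps the same way, one unexpected pop-up produces a logged failure rather than a stalled overnight run.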

Seven: Identify Who Will Execute the Tests

Automation raises the expectation among managers that testing can be achieved with little or no manual effort. Thus, they do not allocate the staff required to perform the manual steps involved in automated testing, such as analyzing test results and creating and cleaning up test machines. Many of the likely candidates for such work are either unavailable or lack the proper skills. For example:

- Functional testers already have their time allocated to manual testing.
- Automation engineers, while qualified to run the tests, typically don't understand enough of the functional requirements to analyze the failed scenarios. They are also too busy developing and maintaining tests to execute the tests and track defects.

Without planning for the proper number of skilled staff, an automated testing program will fall victim to unexpected outages, delays and cost overruns. The organization will not be able to run enough tests, frequently enough, to justify its investment in test automation. That's why it is important to determine, at the beginning of the process, who will own and execute all the processes around the actual automated testing. As an experienced test automation consulting firm, we can help customers make sure they have the right amount of skilled staff to cost-effectively ensure software quality. This includes accurately assessing the manual requirements at all stages of the automated testing process, and making better use of existing staff. Functional testers can be retrained, for example, to both execute and maintain scripts (see Figure 6).

Scripting for Success

Figure 5

Along with the development and use of scripting standards, we have developed consolidated toolsets for script creation that have saved customers 10% to 15% in scripting costs by allowing greater script reuse.


Resourcing the Test Automation Function


Figure 6

Eight: Measure Your Success -- Accurately

Tools such as the Automated Execution Efficiency Ratio (AEER) can determine how effectively you are executing automated tests by calculating human effort as a percentage of the total effort needed to execute the automated tests. But don't just compare the number of person-hours required for manual vs. automated tests for a given size of code. Be sure to include other benefits, such as the following (see Figure 7):

- A higher percentage of defects found.
- Reductions in the time needed for testing.
- Reduced time to market.
- Improvements in customer satisfaction.
- Increased productivity due to improved application quality.

All these are real benefits that must be balanced against staffing and tool licensing costs to determine an accurate return on investment.
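The AEER idea reduces to a one-line calculation; the person-hour figures below are invented for illustration:

```python
# Sketch of the Automated Execution Efficiency Ratio (AEER): manual
# effort as a share of total automated-test execution effort.

def aeer(manual_hours: float, total_hours: float) -> float:
    """Manual effort as a percentage of total execution effort (lower is better)."""
    return 100.0 * manual_hours / total_hours

# e.g. 8 person-hours of setup and result analysis within a 40-hour run
print(aeer(8, 40))  # → 20.0
```

Tracking this percentage over successive releases shows whether the manual overhead of "automated" testing is actually shrinking.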
Measuring What Matters

Figure 7

Real-World Example

A global financial services company turned to Cognizant to speed its testing of business-critical code, and to improve the return on its automation investment through the reuse of test components. Cognizant helped the client establish a central function to perform automated functional and regression testing of key modules such as trading, payments, accounts, safekeeping and wealth management, and to implement an automation architecture. By creating a test quality center and developing a framework to maximize component reuse and the ROI of automated testing, Cognizant reduced the overall time required for testing by 72%, and the effort required to design and execute multilingual tests by 70%. The client has automated 43% of its regression testing, reuses 60% of its automation scripts, and has extended the benefits of this work by implementing the automation framework on multiple projects. Cognizant also developed dashboards that give business and technical managers measurable performance metrics, showing weekly progress on key performance indicators and service level agreements.

Conclusion:
Automating software testing is not as simple or as quick as the term implies. Software test tools can be expensive, and setting up tests, executing them and analyzing the results requires extensive manual work. But by using the proper tools, including open-source software, along with the proper processes and automation frameworks, organizations can realize the cost savings and quality benefits of automated software testing.

About Cognizant: Cognizant (NASDAQ: CTSH) is a leading provider of information technology, consulting, and business process outsourcing services. Cognizant's single-minded passion is to dedicate our global technology and innovation know-how, our industry expertise and worldwide resources to working together with clients to make their businesses stronger. With over 50 global delivery centers and more than 68,000 employees as of September 30, 2009, we combine a unique onsite/offshore delivery model infused by a distinct culture of customer satisfaction. A member of the NASDAQ-100 Index and S&P 500 Index, Cognizant is a Forbes Global 2000 company, a member of the Fortune 1000, and is ranked among the top information technology companies in BusinessWeek's Hot Growth and Top 50 Performers listings. Visit us online at www.cognizant.com.