Training - Automation Architect

Test Automation Strategy and Design

Agenda – Day I
• Assurance Services
  – Automation Strategy, Architecture, Automation Assessment and Consulting
• Test Automation Strategy Overview
  – Objectives
  – Automation Feasibility Analysis, Automation Prioritization
• Feasibility Analysis
  – Feasibility Study Methodology
  – Technical Feasibility
  – Automation Quotient
• Automation Scope Definition
  – Analysis
  – Scope Definition
• Design
  – Data Driven Framework
  – Keyword Driven Framework
  – Build Verification Test / Deployment Verification Test
• Pilot
  – Activities to be done in the Pilot
  – Pilot Reporting

Introduction to Automation & Test Automation Strategy Overview

Introduction from a Consulting Firm


Automation Strategy
Automation Objectives
• Faster time to market
• Increase in test effectiveness
• Effort/cost savings
• Decrease in tester dependency

Automation Strategy Guidelines
• Test automation is a full-time effort, not a sideline.
• The test design and the test framework are totally separate entities.
• The test framework should be application-independent.
• The test framework must be easy to expand and maintain.
• The test strategy/design should be framework-independent.
• The test strategy/design should shield most testers from the complexities of the test framework.


Test Automation Strategy and Design

Scope
• Rationale behind the scope (Feasibility Analysis)
• Test types/activities
  – Shakedown testing
  – Regression
  – Business process tests
  – Data creation/conditioning
• Other considerations
  – Test case consolidation and prioritization

Framework
• Data driven
• Keyword/table driven
• Hybrid

Design
• Script design
• Data sheet design
• Integration design
• Component-based model in BPT
• Execution design

Others
• Estimation
• ROI
• Maintenance

Feasibility Analysis

Feasibility Analysis

• Automation need evaluation
• Type of automation
• Activities/types of tests to be automated
  – Shakedown (tests to determine test-readiness)
  – Regression tests
  – Data creation/conditioning activity
  – Tests to be run on multiple builds/releases
• IT strategy (choice of toolsets, application strategy)
• Future upgrades
  – Is there any major upgrade in the offing? (If yes, check whether automation would need a major overhaul and whether the proposed tool will support the technology of the proposed upgrade.)
• Activity expected in future
  – Is the application likely to be retired/replaced and therefore to see very little activity in the near future?

Automation Feasibility Study Methodology

Study sources: questionnaire, demo, discussions, documents

Automation quotient
• Availability of automated scripts
• Test coverage level
• Completeness of documents
• % of data-variant test conditions
• Control over input & output
• Repetitiveness in test conditions
• Effort distribution across test phases
• Frequency of releases
• Potential impact of patches/small fixes
• Effort and time for automation maintenance
• Stability of GUI
• Churn rate of the regression test bed
• SME bottlenecks
• Type of defects found – explorative, attribute validation, functional, GUI

Technical quotient
• Bounded tests
• Control over input & output
• Manual interventions
• Consistency of object properties
• Object identification
• Validation complexity – business rules, static or predictable results
• Effort required for workarounds

Other factors
• Application enhancement roadmap
• Business criticality

Discussions
• Discussion with customer champions on findings & recommendations

Recommendations
• Solution & implementation approach
• Timeline & effort
• Estimated benefits
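The automation quotient and technical quotient above can be rolled up into a single feasibility score by rating each factor and weighting it. The sketch below is a minimal, hypothetical VBScript illustration of that idea; the factor names, ratings and weights are examples, not values from the study.

```vbscript
' Minimal sketch: roll factor ratings (1 = poor ... 5 = good) into a weighted score.
' Factor names, ratings and weights are hypothetical examples.
Option Explicit

Dim ratings, weights, factor, score, maxScore
Set ratings = CreateObject("Scripting.Dictionary")
Set weights = CreateObject("Scripting.Dictionary")

ratings.Add "Repetitiveness in test conditions", 4 : weights.Add "Repetitiveness in test conditions", 3
ratings.Add "Stability of GUI", 3                  : weights.Add "Stability of GUI", 2
ratings.Add "Control over input & output", 5       : weights.Add "Control over input & output", 2
ratings.Add "Completeness of documents", 2         : weights.Add "Completeness of documents", 1

score = 0 : maxScore = 0
For Each factor In ratings.Keys
    score = score + ratings(factor) * weights(factor)
    maxScore = maxScore + 5 * weights(factor)      ' 5 is the maximum rating
Next

WScript.Echo "Automation quotient: " & FormatPercent(score / maxScore, 0)
```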

Sample – Feasibility Analysis Document
• Sample document

Tool Evaluation – Factors to evaluate a test tool

Technology stream
• Support for the current technology stream
• Support for the new technologies being planned as per IT strategy (if any)

Total cost of the tool
• Cost of acquiring the license
• Cost of AMC
• Cost of add-ins required as per the technology stream
• Cost of any additional integration exercise
• Comparison to be made against similar license types (e.g. a global license if offshore is considered)

Exit cost of leaving existing tools
• Sunk investment in direct assets (code etc.) that cannot be reused
• Cost of changes invested in other tools (any custom integration with the test management tool etc.)
• Cost of exiting resources whose skill set is no longer required

Tool Evaluation – Factors to evaluate a test tool (contd.)

The entry barrier
• Ease of training new resources
• Ease of finding skilled resources in the market
• Availability of functional libraries
• Ease of integrating with other tools

Strategic vendor rating
• Use Gartner's Magic Quadrant to see if the strategic direction of the vendor is in sync with company needs

User comfort level
• Technical proficiency required to script
• Any special skill required to work on the tool

Technical support level
• How much support is required from the vendor?
• What kind of support is required and how often?

Potential to resolve current issues
• If a current tool is being used, what issues are faced? Can the new tool handle them?

Comparison of market-leading test tools' capabilities

Tool Landscape

Automation Scope Definition

Scope Definition

Test case analysis
• Type of test case
  – Regression testing
  – Shakedown testing / Build Verification Test / smoke test
  – Business process tests
  – Data creation/conditioning
• Number of data-variant test cases
• Number of unique validations

Effort analysis
• Test design
• Test data setup
• Test execution
• Test reporting

Defect analysis
• % of user interface defects
• % of withdrawn defects due to test data issues
• % of regression defects
• % of defect slippage
• % of functional defects

Automatability Analysis

• Function-level analysis – prioritization analysis; high-priority functions are evaluated further
• Test case analysis – completeness analysis and facilitation analysis; complete and business-critical test cases are taken up for automation analysis, incomplete ones go to gap analysis and completion or are ignored
• Automation analysis – automatability and business criticality decide the outcome for each test case: Ready to Automate, Alternate methods/Selective Automation, or Ignore
• Phasing based on business context and availability

Facilitation Analysis: analysis of test case completeness, test case review and tool availability.
Automatability Analysis: analysis of data variance, control of input and output, complexity of the case and setup requirements, potential degree of automation, accessibility and offshoreability.
Business Criticality Analysis: analysis of the Day-in-Life (DIL) factor, system criticality, potential legislative impact, financial and consumer impact, potential reusability across phases, potential savings/benefits.

Design

Framework

Data-Driven
• Test input/output values are read from data files
• Object and action are stored in the script
• Easiest and quickest to implement
• Low potential for long-term success
• Application build is a pre-condition

Keyword or Table Driven
• Object, Action, Input Data and Expected Result are all typically externalized, in one record
• Hardest and most time-consuming data-driven approach to implement
• Greatest potential for long-term success
• Table creation can start before the application is built

Hybrid
• Data-driven with externalization of more than data (objects/part of the actions)
• Keyword-driven with business keywords only (step tables/test tables)
• Combination of data-driven and keyword-driven (due to knowledge constraints)

HP-Mercury QC BPT (Business Components model)

Data Driven Script Design
• Driver Script – initializes the setup required for the run and calls the scripts in the desired order
• Main Script – calls the functional scripts in order and executes the test case
• Functional Script – common functionality or module used in many business functions
• Recovery Scenario Script – run-time exception handling scripts
• Report Script – script execution report generation scripts
• Utility Script – utility function scripts (can also be a VBS file)
• Data Sheet – Excel sheet or any other data source

Architecture (from the diagram): the Driver Script invokes the Main Script, which uses the Script Library (Functional, Recovery Scenario, Report and Utility scripts) and the Data Grid to drive the Application under Test.
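To make the script structure concrete, here is a minimal, hypothetical driver-script sketch in VBScript: it reads one row per test case from an Excel data sheet and calls a functional script. The workbook path, sheet layout and the CreateOrder routine are illustrative assumptions, not part of the original framework.

```vbscript
' Driver-script sketch (hypothetical workbook path, sheet layout and function names).
' Reads one row per test case from the data sheet and calls a functional script.
Option Explicit

Const DATA_SHEET = "C:\Automation\DataSheets\Orders.xls"   ' assumed location

Dim excel, book, sheet, row, lastRow, result
Set excel = CreateObject("Excel.Application")
Set book  = excel.Workbooks.Open(DATA_SHEET)
Set sheet = book.Worksheets("Input")

lastRow = sheet.UsedRange.Rows.Count
For row = 2 To lastRow                                 ' row 1 holds the column headings
    If UCase(sheet.Cells(row, 1).Value) = "Y" Then     ' Run flag
        ' Functional script: common business function kept in the script library
        result = CreateOrder(sheet.Cells(row, 2).Value, sheet.Cells(row, 3).Value)
        sheet.Cells(row, 4).Value = result             ' write the result back
    End If
Next

book.Save
book.Close
excel.Quit

' Functional script (illustrative stub): would drive the application under test
Function CreateOrder(customer, product)
    CreateOrder = "Pass"
End Function
```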

Data Sheet Design Input  Row heading – for the screen  Mandatory and Optional with different colors  Field values in the list box  Data Grid fields are directly mapped to the application objects Expected  Separate section/Sheet  Protect the cells from the user input  Formulae to Calculate the expected Comparison  Script/Data Sheet can be used 13 July 2009 .

Keyword/Table Driven Framework

• Suite driver sheet and script – flags which tests to run (e.g. Run T1 – Y, Run T2 – N, …)
• Datasheet per test (T1, T2, …) – keywords for component, action, utility and results
• Keyword interpreter
• Functional library – business functions, component functions, utility functions, result functions
• Other elements from the diagram: test driver, object map, manual testing, test tool, components (1–3) of the applications under test (AUT), and results ported to HP QC (Quality Center) through the QC API
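A minimal, hypothetical keyword-interpreter sketch: it reads component/action/data keywords (here hard-coded, in practice read from the step/test tables) and dispatches them to component functions. The keyword names and functions are illustrative only.

```vbscript
' Keyword-interpreter sketch: dispatches component/action keywords to functions.
' Keywords, steps and component functions are hypothetical examples.
Option Explicit

Dim steps, i, parts

' Each step: "Component;Action;Data" (in practice read from the test table)
steps = Array("Login;EnterUser;jsmith", _
              "Login;EnterPassword;secret", _
              "Order;Create;Widget,2")

For i = 0 To UBound(steps)
    parts = Split(steps(i), ";")
    ExecuteKeyword parts(0), parts(1), parts(2)
Next

Sub ExecuteKeyword(component, action, data)
    Select Case component & "." & action
        Case "Login.EnterUser":     WScript.Echo "Typing user: " & data
        Case "Login.EnterPassword": WScript.Echo "Typing password"
        Case "Order.Create":        WScript.Echo "Creating order: " & data
        Case Else:                  WScript.Echo "Unknown keyword: " & component & "." & action
    End Select
End Sub
```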

BVT Automation Framework – Key Pointers
• Uses the descriptive programming method
• XML and Excel based
• Up to 40%+ faster testing on builds
• Extendible – easy maintenance on UI changes or test requirement changes
• Catches defects early, saves rework
• Object checks parameterized – reports, checks, additional checks
• Automatic generation of expected results for UI validation
• Ready-to-use scripts – customize and use
• Guidelines
• Tool-independent, XML-based repository
• Optimized property list for objects
• Define the granularity and format of the report

Framework elements (from the diagram): BVT automation framework, application under test, object property list repository, expected object property result sheet, data flow sheet, shakedown execution, customized reports.
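To show what the descriptive-programming checks look like, here is a minimal sketch of QTP code of the kind such a framework could generate. It assumes a QTP run session; the page, object description and expected property values are hypothetical and would normally come from the XML/Excel property repository.

```vbscript
' Descriptive-programming object check (QTP run session assumed).
' Object descriptions and expected values are hypothetical; in the framework
' they are read from the XML/Excel object property repository.
Dim expected, prop, actual, userField
Set expected = CreateObject("Scripting.Dictionary")
expected.Add "disabled", 0
expected.Add "visible", True

Set userField = Browser("title:=.*Login.*").Page("title:=.*Login.*") _
                    .WebEdit("name:=userName")        ' descriptive, no repository needed

For Each prop In expected.Keys
    actual = userField.GetROProperty(prop)
    If CStr(actual) = CStr(expected(prop)) Then
        Reporter.ReportEvent micPass, "Check " & prop, "Value: " & actual
    Else
        Reporter.ReportEvent micFail, "Check " & prop, _
            "Expected: " & expected(prop) & ", actual: " & actual
    End If
Next
```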

Tool Integration
• Integration of the version control tool with the test management tool
• Integration of the automation tool and the test management tool
  – Execution of scripts from the test management tool
  – Reporting mechanism

Executing cases from QC
• Pros: fast turnaround times; integrated view of results (manual and automated) possible; easier to maintain the scripts; ease of execution by a non-technical professional; selective runs can be scheduled through QC
• Cons: the QC link can become a bottleneck; latency issues; uploading scripts into QC takes longer
• Other considerations: version management; potential space availability; maintainability of scripts

Executing cases outside QC and porting results as a batch
• Pros: integrated view of results (manual and automated) is possible; good medium for sharing between onsite and offshore
• Cons: tests cannot be scheduled from QC
• Other considerations: building a controller script which interfaces with QTP; the results and the data grid with the updated actuals are attached to the test set
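For the "execute outside QC and port results as a batch" option, results are typically pushed through the Quality Center Open Test Architecture (OTA) COM API. The sketch below is a simplified, hypothetical illustration: the server URL, credentials, domain/project, test-set name and run status are placeholders, and the object model should be verified against the installed QC/OTA version.

```vbscript
' Sketch: port batch results into Quality Center through the OTA COM API.
' Server, credentials, domain/project, test-set name and statuses are placeholders;
' verify the OTA object model against the installed QC version.
Option Explicit

Dim tdc, testSets, ts, tsTests, tsTest, run
Set tdc = CreateObject("TDApiOle80.TDConnection")
tdc.InitConnectionEx "http://qcserver/qcbin"          ' assumed QC server URL
tdc.Login "automation_user", "password"               ' assumed credentials
tdc.Connect "DEFAULT", "BankingApp"                   ' assumed domain and project

Set testSets = tdc.TestSetFactory.NewList("")         ' all test sets (no filter)
For Each ts In testSets
    If ts.Name = "Regression_Automated" Then          ' assumed test-set name
        Set tsTests = ts.TSTestFactory.NewList("")
        For Each tsTest In tsTests
            Set run = tsTest.RunFactory.AddItem("Run_" & Year(Now) & Month(Now) & Day(Now))
            run.Status = "Passed"                     ' in practice, read from the results workbook
            run.Post
        Next
    End If
Next

tdc.Disconnect
tdc.Logout
tdc.ReleaseConnection
```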

Pilot

Pilot – Objectives and Considerations

Aims of the pilot:
• Define the automation framework for the main phase
• Define guidelines for selecting test scenarios amenable to automation
• Identify the scenarios/functions to be automated based on the ROI indicators measured during the pilot phase of the project
• Work out a detailed plan to automate the rest of the test cases (those that can be automated)
• Investigate potential roadblocks for offshoring the automation project and come up with alternatives

While scoping a pilot, the following are key considerations:
• Select a representative set of business scenarios using a systematic approach
  – Involving multiple human interaction points
  – Multiple platform / online-batch jumps / interfaces
  – Account for the frequency of operation of a business case and the complexity of the test case

Pilot – Activities to be done in the Pilot

Functionality identification
• Factors: complexity, coverage, priority

Test case identification
• Coverage – including positive and negative cases

Test data preparation
• Prepare test data for each test case
• Precondition the data

Data sheet preparation
• Identify the fields needed for the functionality under test
• Prepare the data sheet with the identified fields as per the data sheet design
• Make an entry for each test case in the data sheet
• Key in the test data for each test case in the corresponding fields

Pilot – Activities to be done in the Pilot (contd.)

Automating the functionality under test
• Define every feature of the functionality
• Modularize the units of functionality under test
• Create all related steps of the functionality
• Make the necessary modifications to the script to make it data driven
• Unit testing

Execution
• Run the script with the data available in the data sheet
• Report the execution results as per the defined format

Metrics to management
• Benefits from automation
• Possible risks and mitigation

Estimation

Effort Estimation Criteria
• Categorization of complexity
  – No. of objects
  – Type of objects
  – Type of action
  – Number of steps
  – Number of verifications/syncs/outputs
• Degree of reusability (unique vs. reused)
  – Time for understanding
  – Time for coding (including visiting existing libraries for reusability)
  – Time for testing the scripts
  – Time for user documentation
  – Time for review
  – Time for fixes
• Framework setup
• Dependency

Effort Estimation Model
• Sample test-case-based model – a table with the following columns: type of test case (simple / medium / complex), unit effort, number of test cases, familiarization, script capture and construction, script data externalization, script debugging, test execution (data setup + execution + analysis + reporting), integration with the test repository tool, and automation effort (person-days), with a final row giving the test automation total effort.
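A minimal sketch of how such a model can be evaluated: test cases are bucketed by complexity, each bucket gets a unit effort, and the framework setup effort is added on top. All figures below are hypothetical placeholders, not the numbers from the sample model.

```vbscript
' Effort-estimation sketch: sum of (count * unit effort) per bucket + framework setup.
' Unit efforts, counts and setup effort are hypothetical placeholders.
Option Explicit

Dim unitEffort, counts, bucket, total
Set unitEffort = CreateObject("Scripting.Dictionary")   ' person-days per test case
unitEffort.Add "Simple", 0.25
unitEffort.Add "Medium", 0.5
unitEffort.Add "Complex", 1.0

Set counts = CreateObject("Scripting.Dictionary")       ' number of test cases per bucket
counts.Add "Simple", 50
counts.Add "Medium", 30
counts.Add "Complex", 20

total = 5                                               ' framework setup (person-days)
For Each bucket In unitEffort.Keys
    total = total + unitEffort(bucket) * counts(bucket)
Next

WScript.Echo "Estimated automation effort: " & total & " person-days"
```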

ROI Calculation

Payback from Automation

Payback = (Savings due to increased efficiency + additional productivity benefits) / (Tool cost (or AMC) + Maintenance cost + Build cost)

Note: the tool cost can be ignored if the tool has already been procured and is unutilized.

Other popular metrics include:
• % of strategic applications automated
• Test execution savings due to automation (incl. reduction in TTM)
• % of errors due to testing

Other benefits include:
• Optimal utilization of tester time
• Increased motivation
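A tiny sketch of the payback formula above, with entirely hypothetical cost and savings figures:

```vbscript
' Payback sketch using the formula above; all figures are hypothetical.
Option Explicit

Dim savings, productivityBenefits, toolCost, maintenanceCost, buildCost, payback
savings              = 120000    ' execution-effort savings
productivityBenefits = 20000     ' additional productivity benefits
toolCost             = 50000     ' ignore if the tool is already procured and unutilized
maintenanceCost      = 15000
buildCost            = 40000

payback = (savings + productivityBenefits) / (toolCost + maintenanceCost + buildCost)
WScript.Echo "Payback ratio: " & FormatNumber(payback, 2)
```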

Metrics – Design Phase

Metric: Automation quotient per test case (depth)
• Objective: measures the depth of automation within a test case
• What to collect: number of test steps automated in a test case; total number of test steps in the test case
• Automation quotient = (number of test steps automated in a test case) / (total number of test steps in the test case)
• When to collect: at the end of the design phase
• Using the metric: a low value indicates shallow automation; the selection of applications/modules for test automation should be revisited

Metric: % of reusable scripts for new enhancements or projects
• Objective: measure of the reusability achieved
• What to collect: number of reusable scripts used; total number of components created new
• When to collect: at the end of the design phase
• Using the metric: a high % indicates quicker time-to-build; automation can be used earlier in the Test Development Life Cycle (TDLC)

Sample Standards and Guidelines

Standards and Guidelines Need of Standards for Automation Standards are needed to achieve the following  Readability  Maintainability  Clarity  Uniformity Various Automation Standards Hardware Standards Software Standards Tool Setting Standards  General Options and Test Settings  Object Identification Settings  Options Settings  Editor Settings 13 July 2009 .

Standards and Guidelines (contd.)

Recording standards
• Recording standards allow the test to be recorded as per the configured settings; these settings are tool-specific. For example:
  – Mercury QTP and WinRunner – recording settings can be set to keep/remove specific properties of the objects before starting the recording
  – Rational Functional Tester – once the object is recorded, its properties can be edited

Coding standards
• Test script – function name
• Version – ensure that versioning is maintained for the scripts
• Author – ensure that the author's name is mentioned in the header of the script
• Comments – comments about the script and each unit of the script
• Descriptions/definitions – ensure that all variables are defined and descriptions are provided for functions
• Parameters used – ensure that all parameters indicate their nature (in / out / in-out) and have appropriate comments where needed
• Modularization – ensure that scripts are modularized based on functional units
• Length of the script – ensure that the script is not too long
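As an illustration of these coding standards, the following is a hypothetical header and function skeleton of the kind a team might standardize on; the script name, version fields and parameters are examples only.

```vbscript
'=====================================================================
' Test Script   : CreateOrder_Functional (hypothetical example)
' Author        : <author name>
' Version       : 1.2 - see the version history in the script repository
' Description   : Creates an order in the AUT; data driven via the Orders data sheet
'=====================================================================
Option Explicit

'---------------------------------------------------------------------
' Function    : CreateOrder
' Description : Enters an order for the given customer and product
' Parameters  : customer (in)  - customer identifier
'               product  (in)  - product code
'               orderId  (out) - order number returned by the AUT
' Returns     : "Pass" / "Fail"
'---------------------------------------------------------------------
Function CreateOrder(ByVal customer, ByVal product, ByRef orderId)
    orderId = ""
    ' ... drive the application under test here ...
    CreateOrder = "Pass"
End Function
```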

Standards and Guidelines (contd.)
• Path hard coding – eliminate hard-coded paths
• Indentation – code is to be indented for easy readability and debugging
• Defining array bounds – arrays defined as utility functions need to be dynamic in nature; bounds should not be fixed, allowing them to process data of any size
• Defining functions – anything abstracted at unit level needs to be classified as a function and stored accordingly in the utility repository
• Creating functions afresh – to increase efficiency and remove redundancy, a traceability matrix should be maintained to check whether the required functions are already available before scripting afresh
• Windows flow control – wherever possible, window flows are to be defined using keywords; this enables maximum reusability (new scenarios can be added on the fly) and encourages reuse
• Nested loops – keep nested loops to a minimum wherever possible
• Synchronization – ensure that event loops are used for flow between screens
• Reserved words – ensure that no reserved words are used as identifiers
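A small sketch of the "dynamic array bounds" guideline: the array grows with the data instead of being declared with a fixed size (the data source here is a hypothetical stand-in for a data-sheet read).

```vbscript
' Dynamic array bounds: grow the array as data arrives instead of fixing its size.
Option Explicit

Dim values(), count, item
count = 0
ReDim values(10)                                   ' initial capacity only, not a hard bound

For Each item In Array("TC01", "TC02", "TC03")     ' stand-in for a data-sheet read
    If count > UBound(values) Then
        ReDim Preserve values(UBound(values) * 2)  ' double the capacity when needed
    End If
    values(count) = item
    count = count + 1
Next

ReDim Preserve values(count - 1)                   ' trim to the actual size
WScript.Echo "Loaded " & (UBound(values) + 1) & " test cases"
```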

Standards and Guidelines (contd.)

Execution standards
• Ensure that the same environment is set up on all the machines to be used for execution

Debugging checklist
• A debugging checklist should be maintained. The checkpoints to be taken care of are:
  – Did the code work before?
  – If no – follow standard debugging
  – If yes – check whether an identical OS is used
  – If the OS differs – it is an OS-specific problem
  – If the OS is identical – check whether the service packs installed are of the same version

Standards and Guidelines (contd.) – Data Sheet

Usability
• Ensure that guidelines are provided for each field in the data sheet, which will make the tester independent
• Ensure that the possible values to select from are provided in the data sheet
• Define each section clearly as Input, Output and Final Results
• If possible, allow the tester to edit only the Input section

Readability
• Conventions to be followed for each section of the data sheet
• Names/color codes for the variables


Thank You

Mercury Quick Test Pro – Best practices

Usability:
• Data grid design highlights
  – Row heading – for the screen
  – Mandatory and optional fields with different colors
  – Field values in the list box
  – Data grid fields are directly mapped to the application objects
• Documentation
  – Execution manual – describes the activities to execute the scripts
• Reporting
  – Readable QTP results through the Reporter event
  – Report written to a separate results workbook to port into MQC
  – Error messages written into the data grid used
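A minimal sketch of the reporting practice: each step result goes to the QTP run results through Reporter.ReportEvent and is also appended to a separate results workbook for later porting into Quality Center. A QTP run session is assumed; the workbook path and column layout are hypothetical.

```vbscript
' Reporting sketch (QTP run session assumed): report to the QTP results and
' append to a separate results workbook; path and layout are hypothetical.
Sub ReportStep(stepName, passed, details)
    Dim excel, book, sheet, nextRow

    If passed Then
        Reporter.ReportEvent micPass, stepName, details
    Else
        Reporter.ReportEvent micFail, stepName, details
    End If

    Set excel = CreateObject("Excel.Application")
    Set book  = excel.Workbooks.Open("C:\Automation\Results\RunResults.xls")
    Set sheet = book.Worksheets("Results")

    nextRow = sheet.UsedRange.Rows.Count + 1
    sheet.Cells(nextRow, 1).Value = stepName
    sheet.Cells(nextRow, 2).Value = StatusText(passed)
    sheet.Cells(nextRow, 3).Value = details
    sheet.Cells(nextRow, 4).Value = Now

    book.Save
    book.Close
    excel.Quit
End Sub

Function StatusText(passed)
    If passed Then StatusText = "Pass" Else StatusText = "Fail"
End Function

' Example call: ReportStep "Login", True, "User logged in successfully"
```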

Mercury Quick Test Pro – Best practices

Maintainability:
• Object handling – minimize the maintenance effort
  – Descriptive objects: the test data and the application object properties are used as input to define dynamic objects; descriptive objects are used to automate dynamic screens (example: Account – Networks, Benefit Option – Network selection); retrofitting is done in the code
  – Static objects: these objects are used to handle static screens/input (example: Client Screen, Doc Gen, Tie Contacts); they are captured and stored in the test object repository; retrofitting is done in the object repository

Mercury Quick Test Pro – Best practices

Maintainability (contd.):
• Coding standard – a coding standards document which makes the entire development cogent and comprehensible
  – Descriptions (version, parameters used & global variables)
  – Variable definition
  – Content & length of the script
  – Comments
• Data grid standard – a document which makes the entire development cogent and comprehensible
• Script matrix – links together the various scripts used, per type
• Traceability matrix – traceability of test cases to automation scripts

Mercury Quick Test Pro – Best practices

Integration:
• Results porting tool
  – Ports the executed results to the regression test set in MQC
  – API built using the QC Open Test Architecture
  – Saves manual effort
• Reporting
  – Report written to a separate results workbook to port into MQC

Mercury Quick Test Pro – Best practices

Performance:
• Unmonitored runs
  – Recovery scenarios
• Execution speed
  – Coding standards
  – Combination of descriptive and static objects
  – Logical synchronization (the Wait statement is not used)
  – Scripts are stored locally and a QC integration mechanism is built
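A short sketch of logical synchronization in QTP: instead of a hard-coded Wait, the script polls an object property or the object's existence with a timeout. A QTP run session is assumed; the browser/page/object descriptions are hypothetical.

```vbscript
' Logical synchronization sketch (QTP run session assumed, hypothetical objects):
' poll an object state with a timeout instead of using a hard-coded Wait.

' Wait up to 10 seconds (10000 ms) for the Submit button to become enabled
If Browser("title:=.*Orders.*").Page("title:=.*Orders.*") _
        .WebButton("name:=Submit").WaitProperty("disabled", 0, 10000) Then
    Browser("title:=.*Orders.*").Page("title:=.*Orders.*").WebButton("name:=Submit").Click
Else
    Reporter.ReportEvent micFail, "Submit button", "Button did not become enabled within 10 s"
End If

' Wait up to 20 seconds for the confirmation element to exist
If Browser("title:=.*Orders.*").Page("title:=.*Orders.*") _
        .WebElement("innertext:=Order created.*").Exist(20) Then
    Reporter.ReportEvent micPass, "Order confirmation", "Confirmation message displayed"
End If
```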

Mercury Quick Test Pro – Best practices Data Grid:

Mercury Quick Test Pro – Best practices Reporting:

Mercury Quick Test Pro – Best practices Descriptive objects:

Mercury Quick Test Pro – Best practices Static object repository:

Mercury Quick Test Pro – Best practices Script matrix:

Mercury Quick Test Pro – Best practices Traceability matrix:

Mercury Quick Test Pro – Best practices Integration:
• Executing cases from QC vs. executing cases outside QC and porting results as a batch – see the pros/cons comparison under Tool Integration earlier in the deck.

Mercury Quick Test Pro – Best practices Integration (contd.):
