
Testing Tools
Software Quality:

Technical:
- Meeting customer requirements
- Meeting customer expectations (user-friendliness, performance, privacy)

Non-Technical:
- Cost of product
- Time to market

Software Quality Assurance:
To monitor and measure the strength of the development process, organisations follow SQA concepts.

Software Project:
A software project is a software-related problem solved by software engineers through a software engineering process.

Life Cycle Development (LCD):
Information Gathering → Analysis → Design → Coding → Testing → Maintenance

Testing:
Verification & Validation of software is called testing.

Fish Model of Software Development:
The fish model maps every development stage (LCD) to a corresponding testing stage (LCT):

- Information Gathering (BRS) and Analysis (S/W RS = FRS + SRS) → Reviews
- Design (HLD, LLD's, Prototype) → Reviews
- Coding (Programs) → White Box Testing
- System Testing (Build) → Black Box Testing
- Maintenance → Test S/W Changes

Reviews and White Box Testing come under Verification; Black Box Testing and testing of software changes come under Validation.

Business Requirement Specification (BRS-Information Gathering):
The BRS defines the requirements of the customer to be developed as software. This type of document is prepared by business-analyst category people.

Software Requirement Specification (S/W RS):
This document is derived from the BRS. It consists of the Functional Requirements to develop (FRS) and the System Requirements to use (SRS). This document is also prepared by business analysts.

Reviews:
It is a static testing technique to estimate completeness and correctness of a document.

Design High Level Design Document (HLD):
This document is also known as the external design. It defines the hierarchy of all possible functionalities as modules.

Low Level Design Documents (LLD’s):
These documents are also known as the internal design. They define the structural logic of every sub-module.

Example: DFD-Data Flow Diagram, E-R Diagram, Class Diagram, Object Diagram.

Prototype:
A sample model of an application without functionality is called a prototype. Ex: a PowerPoint slide show.

Coding: White Box Testing:
It is a coding-level testing technique. During this test, test engineers verify the completeness and correctness of every program. This testing is also known as Glass Box Testing or Clear Box Testing.

System Testing: Black Box Testing:
It is a build-level testing technique. During this test, the testing team validates internal functionality depending on the external interface.

V – Model of S/W Development:
V stands for Verification & Validation. This model defines the mapping between development stages and testing stages:

- Information gathering & analysis (BRS / URS / CRS, S/W RS) → analysis of the development plan, test plan preparation, requirements phase testing (reviews)
- Design (HLD, LLD's) → design phase testing (reviews)
- Coding → program phase testing (White Box), unit testing
- Install build → functional & system testing (Black Box), user acceptance testing, test documentation, port testing
- Maintenance (S/W changes) → test software changes, test efficiency assessment

Defect Removal Efficiency (DRE):

DRE = A / (A + B)

Where A = number of defects found by the testing team during the testing process, and B = number of defects found by the customer during maintenance. DRE is also known as Defect Deficiency, and it is used to assess test efficiency.

Refinement Form of V – Model:
For medium-scale and small-scale organisations, the V-Model is expensive to follow. For this type of organisation, some refinements are required in the V-Model to develop quality software.

In the refinement form of the V-Model, small and medium-scale organisations maintain a separate testing team only for the functional & system testing stage, to decrease the cost of testing.
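The DRE formula above is simple enough to state directly in code. This is an illustrative Python sketch; the function name and the sample counts are invented, not from the source.

```python
def defect_removal_efficiency(found_in_testing: int, found_in_maintenance: int) -> float:
    """DRE = A / (A + B), where A = defects found by the testing team
    and B = defects found by the customer during maintenance."""
    a, b = found_in_testing, found_in_maintenance
    if a + b == 0:
        raise ValueError("no defects recorded")
    return a / (a + b)

# Illustrative numbers: 90 defects caught in testing, 10 escaped to maintenance.
print(defect_removal_efficiency(90, 10))  # → 0.9
```

A DRE close to 1.0 means the testing team caught almost all defects before release.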

I) Reviews During Analysis:
In general, the software development process starts with information gathering and analysis. In this stage, business-analyst category people develop documents such as the BRS and S/W RS. The BRS defines the requirements of the customer, and the S/W RS defines the functional requirements to be developed and the system requirements to be used. After preparing these documents, the analysts conduct reviews on them for completeness and correctness, using the factors below:
- Are they complete?
- Do they meet the requirements?
- Are they achievable? (w.r.t. technology)
- Are they reasonable? (w.r.t. time & cost)
- Are they testable?

II) Reviews During Design:
After completion of analysis and its reviews, project-level design starts: the logical design of the application in terms of external and internal design (HLD, LLD's). In this stage, reviews are conducted for the completeness and correctness of the design documents, using the factors below:
- Are they understandable?
- Do they meet the right requirements?
- Are they complete?
- Are they followable?
- Do they handle errors?

III) UNIT TESTING:
After completion of design and its reviews, programmers start coding to physically construct the software. During this coding stage, programmers conduct unit testing through a set of White Box testing techniques.

This unit testing is also known as Module Testing, Component Testing, Program Testing or Micro Testing. There are three possible White Box techniques:

1. Execution Testing:
   - Basis paths coverage (execution of all possible blocks in a program)
   - Loops coverage (termination of loop statements)
   - Program technique coverage (fewer memory cycles and CPU cycles)

2. Operations Testing:
   - Run on customer-expected platforms (OS, browser, compiler, etc.)

3. Mutation Testing: mutation means a change in a program. White Box testers perform this change in the program to estimate the test coverage of the program:

   Tests → Change → Retests: Pass → Fail   (complete testing: the tests detect the change)
   Tests → Change → Retests: Pass → Pass   (incomplete testing: the tests miss the change)
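The pass/fail table above can be made concrete with a toy mutation test in Python. All function names and values here are invented for illustration; the point is only that a seeded change which makes the suite fail indicates the tests cover that code.

```python
def price_with_tax(amount: float) -> float:
    # Original program under test: adds 10% tax.
    return amount * 1.10

def price_with_tax_mutant(amount: float) -> float:
    # Mutant: a seeded change ('*' became '+') used to estimate coverage.
    return amount + 1.10

def suite_passes(fn) -> bool:
    # The existing test suite, re-run against both original and mutant.
    return abs(fn(100.0) - 110.0) < 1e-9

# Tests pass on the original; retests FAIL on the mutant, so the suite
# detects the change - the document's "complete testing" case.
print(suite_passes(price_with_tax), suite_passes(price_with_tax_mutant))
```

If the mutant had also passed, the suite would not cover the mutated statement (the "incomplete testing" case).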

IV) Integration Testing:
After completion of dependent modules' development and testing, programmers combine them to form a system. During this integration, they conduct integration testing on the coupled modules w.r.t. the HLD. There are three approaches to conducting integration testing:

1. Top-Down Approach: conducting testing on the main module without some of the sub-modules. A stub is a temporary program used in place of an under-construction sub-module; it is also known as the called program.

2. Bottom-Up Approach: conducting testing on sub-modules without the main module. A driver is a temporary program used in place of the main module; it is also known as the calling program.

3. Sandwich Approach: the combination of the Top-Down and Bottom-Up approaches, using both stubs and drivers.

BUILD: A finally integrated set of all modules in .EXE form is called a build.

V) Functional & System Testing:
After completion of the final integration of modules as a system, test engineers plan to conduct functional & system testing through Black Box testing techniques. These techniques are classified into four categories: Usability Testing and Functionality Testing (core level), and Performance Testing and Security Testing (advanced level).
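The stub/driver idea above can be sketched in plain Python; the module names and values are invented for illustration, not taken from the source.

```python
def tax_stub(amount: float) -> float:
    # Stub: temporary stand-in (the "called program") for an
    # under-construction tax sub-module; returns a fixed value.
    return 0.0

def main_module(amount: float, tax_fn=tax_stub) -> float:
    # Main module, testable top-down before the real sub-module exists.
    return amount + tax_fn(amount)

def driver(tax_fn) -> float:
    # Driver: temporary stand-in (the "calling program") for the main
    # module, used bottom-up to exercise a finished sub-module.
    return tax_fn(200.0)

print(main_module(100.0))                   # top-down, with the stub
print(driver(lambda amount: amount + 1.0))  # bottom-up, via the driver
```

The sandwich approach would simply use both `tax_stub` and `driver` at the same time on different layers.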

During Usability Testing, the testing team validates the user-friendliness of the screens of the build. During Functionality Testing, the team validates the correctness of customer requirements. During Performance Testing, the team estimates the speed of processing. During Security Testing, the team validates the privacy of user operations.

1) Usability Testing: In general, the testing team starts test execution with usability testing, applying two sub-tests:

a) User Interface (UI) Test:
   - Ease of use (understandable screens)
   - Look & feel (attractive or pleasant)
   - Speed interface (fewer events to complete a task)

b) Manuals Support Testing:
   - Context sensitiveness of user manuals

Order: receive build from developers → UI testing → usability testing → remaining system tests → manuals support testing.

2) Functional Testing: A major part of Black Box testing is functional testing. During this test, the testing team concentrates on meeting customer requirements. Functional testing is classified into the tests below.

a) Functionality or Requirements Testing: during this test, test engineers validate the correctness of every functionality in terms of the coverages below:
   - Behavioural coverage (changes in object properties)
   - Input (i/p) domain coverage (size and type of every input object)
   - Error-handling coverage (preventing negative navigations)
   - Calculations coverage (correctness of outputs)
   - Backend coverage (impact of front-end operations on backend table contents)
   - Service level coverage (order of functionalities)

b) Input Domain Testing: a part of functionality testing. Test engineers maintain special structures to define the size and type of every input object.

Boundary Value Analysis (BVA) (range / size):
   Min     → Pass
   Min - 1 → Fail
   Min + 1 → Pass
   Max     → Pass
   Max - 1 → Pass
   Max + 1 → Fail

Equivalence Class Partitions (ECP) (type): valid classes are expected to pass, invalid classes to fail.

Example 1: A login process allows a User ID and Password to validate users. The User ID allows alphanumerics in lower case, 4 to 16 characters long. The Password allows alphabets in lower case, 4 to 8 characters long. Prepare BVA and ECP for User ID and Password.

User ID BVA: 4 – pass, 3 – fail, 5 – pass, 16 – pass, 15 – pass, 17 – fail.
User ID ECP: valid – a to z, 0 to 9; invalid – A to Z, special characters, blank.

Password BVA: 4 – pass, 3 – fail, 5 – pass, 8 – pass, 7 – pass, 9 – fail.
Password ECP: valid – a to z; invalid – A to Z, 0 to 9, special characters, blank.

Example 2: Prepare BVA & ECP for the following text box: a text box that allows 12-digit numbers along with * as mandatory, and sometimes allows – as well.

Example 2 answer:
BVA: Min = Max = 12 digits: 12 – pass, 11 – fail, 13 – fail.
ECP: valid – 0 to 9 with *, 0 to 9 with * and –; invalid – A to Z, a to z, special characters other than * and –, blank.

c) Recovery Testing: It is also known as reliability testing. During this test, test engineers validate whether the application changes from an abnormal state back to a normal state (e.g., using backup & recovery).

d) Compatibility Testing: It is also known as portability testing. During this test, the testing team validates whether our application build runs on customer-expected platforms (OS, compiler, browser and other system software) or not. There are two directions: forward compatibility and backward compatibility.

Note: During testing, test engineers find backward compatibility defects most often.
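The BVA table for Example 1's User ID field (4 to 16 characters) can be generated mechanically. This Python sketch is illustrative; the helper names are invented.

```python
def bva_lengths(min_len: int, max_len: int):
    # Boundary Value Analysis test lengths for a size range:
    # min, min-1, min+1, max, max-1, max+1 with expected pass/fail.
    return {
        min_len: "pass", min_len - 1: "fail", min_len + 1: "pass",
        max_len: "pass", max_len - 1: "pass", max_len + 1: "fail",
    }

def user_id_valid(value: str) -> bool:
    # ECP for the User ID: lower-case ASCII alphanumerics, 4-16 chars.
    return (4 <= len(value) <= 16 and value.isascii()
            and all(c.islower() or c.isdigit() for c in value))

# Check every BVA length against the validator.
for length, expected in bva_lengths(4, 16).items():
    actual = "pass" if user_id_valid("a" * length) else "fail"
    assert actual == expected, (length, expected, actual)
```

The same two functions, with different limits and character classes, cover the Password field.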

e) Configuration Testing: It is also known as hardware compatibility testing. During this test, the testing team validates whether our application build supports different-technology hardware devices or not. Ex: different types of LANs, different topologies, different-technology printers, etc.

f) Inter-System Testing: It is also known as end-to-end testing. During this test, the testing team validates whether our application build co-exists with other existing software or not, e.g., to share resources such as a local DB server between an existing application and a new server / new application.

g) Installation Testing: During this test, the testing team validates the installation of our application build, along with supporting software, onto customer-site-like configured systems. The team observes the factors below:
   - Setup program execution to start the installation
   - Easy interface during installation
   - Occupied disk space after installation

h) Parallel Testing: It is also known as comparative testing and is applicable to software products only. During this test, the testing team compares our application build with competitors' products in the market.

i) Sanitation Testing: It is also known as garbage testing. During this test, the testing team tries to find extra features in our application build w.r.t. the customer requirements.

Defect: During testing, the testing team reports defects to developers in terms of the categories below. Sometimes defects are known as issues; when defects are accepted by the development team to solve, they are called bugs. Defects arise in an application due to errors in coding.
   - Mismatch between expected and actual behaviour
   - Missing functionality
   - Extra functionality w.r.t. CRS

3) Performance Testing: It is an advanced testing technique and expensive to apply, because the testing team has to create a huge environment to conduct it. During performance testing, the team applies the sub-tests below:

a) Load Testing: the execution of our application under the customer-expected configuration and the customer-expected load, to estimate performance.

b) Stress Testing: the execution of our application under the customer-expected configuration and continuous, beyond-expected loads, to estimate performance.

c) Storage Testing: the execution of the application under huge amounts of resources, to estimate storage limitations. Ex: MS-Access supports a 2 GB database as maximum. In a break-even analysis, performance degrades past a threshold point as resource usage grows.

d) Data Volume Testing: the execution of our application under the customer-expected configuration, to estimate the peak limits of data.

4) Security Testing: It is also an advanced testing technique and complex to conduct. During security testing, the testing team conducts the sub-tests below:

a) Authorization (whether the user is authorised or not)
b) Access Control (whether a valid user has permission for a specific service or not)
c) Encryption/Decryption (data conversion between the client process and the server process: each side encrypts what it sends and decrypts what it receives)
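At its core, load testing measures response time under an expected number of concurrent users. This is a toy, in-process sketch (no real server; the request handler and numbers are invented) showing the shape of such a measurement.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n: int) -> int:
    # Stand-in for one server operation whose speed we want to estimate.
    return sum(range(n))

def timed_load(users: int, requests_per_user: int) -> float:
    # Run the customer-expected load concurrently and return elapsed seconds.
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        jobs = [pool.submit(handle_request, 10_000)
                for _ in range(users * requests_per_user)]
        for job in jobs:
            job.result()
    return time.perf_counter() - start

elapsed = timed_load(users=10, requests_per_user=5)
print(f"50 requests under a 10-user load took {elapsed:.3f}s")
```

A stress test would raise `users` well past the expected figure and keep the load running continuously; storage and data-volume tests instead grow the data until a limit is hit.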

Note: In small and medium-scale organisations, test engineers cover Authorization and Access Control during functional testing; the encryption and decryption process is covered by development people.

VI) User Acceptance Testing (UAT): After completion of functional & system testing, the organisation invites customer-site people to collect feedback. There are two methods to conduct UAT, the α test and the β test:

α test: for software applications; by real customers; in the development site.
β test: for software products; by customer-site-like people; in customer-site-like environments.

VII) Testing During Maintenance: After completion of user acceptance testing and the resulting modifications, management concentrates on release team formation. This team consists of a few developers and a few testing & hardware engineers. The release team conducts Port Testing in the customer site, validating the factors below:
   - Compact installation
   - Overall functionality
   - Input device handling
   - Output device handling
   - OS error handling
   - Secondary storage handling
   - Co-existence with other software

After completion of port testing, the release team provides training sessions to the customer-site people and comes back. During software maintenance, customer-site people send Change Requests (CRs) to the organisation. Each CR is handled as one of two kinds:
   - Enhancement: impact analysis by the CCB (Change Control Board) → perform the change → test the software change.
   - Missed defect: impact analysis → perform the change → change the test process.

Testing Terminology:

1. Monkey Testing / Chimpanzee Testing: a tester conducts any test on the basic functionalities of the application.

2. Exploratory Testing: level-by-level coverage of functionalities.

3. Sanity Testing: also known as Tester Acceptance Testing (TAT) or Build Verification Test (BVT). After receiving a build from the development team, the testing team estimates the stability of that build before starting testing.

4. Smoke Testing: an extra shakeup in the sanity process; the tester tries to troubleshoot when the build is not working, before starting testing.

5. Big Bang Testing (informal testing, single stage): a testing team conducts testing in a single stage after completion of the entire system development, instead of in multiple stages.

6. Incremental Testing: a multi-stage testing process from unit level to system level. It is also known as formal testing.

7. Manual vs. Automation: conducting a test on the application build without any testing tool/software is manual testing; conducting it with the help of a testing tool/software is automation testing. In the common testing process, test engineers apply test automation w.r.t. test impact and criticality: impact means test repetition, and criticality means the test is complex to apply manually. For these two reasons, testing people use test automation.

8. Re-Testing: the re-execution of a test with multiple test data to validate a function. Ex: to validate multiplication, test engineers use different combinations of inputs in terms of minimum, maximum, integer, float, +ve and -ve, etc.

9. Regression Testing: the re-execution of tests on a modified build to ensure that the bug fix works and that no side effects occur (the previously failed test and the previously related passed tests).

10. Error, Defect and Bug: a mistake in code is called an error. Due to errors in coding, test engineers get mismatches in the application, called defects. If a defect is accepted by development to solve, it is called a bug.

Note:
1) Re-testing is done on the same build and regression testing on a modified build, but both indicate re-execution.
2) From the definitions of re-testing and regression testing, test repetition is mandatory in a test engineer's job. For this reason, test engineers concentrate on test automation.
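The distinction between the previously failed test and the previously related passed tests can be illustrated with a toy example (the discount rule and all names are invented): after a bug fix, the whole small suite is re-run on the modified build.

```python
def discount_v1(price: float) -> float:
    # Original build, with a bug: 50% discount applied at any price,
    # though the intended rule applies it only from 100.00 upward.
    return price * 0.5

def discount_v2(price: float) -> float:
    # Modified build: the bug fix under regression test.
    return price * 0.5 if price >= 100.0 else price

def regression_suite(discount):
    # The previously failed test plus previously passed, related tests.
    return {
        "below_threshold_keeps_price": discount(50.0) == 50.0,   # failed on v1
        "at_threshold_discounted": discount(100.0) == 50.0,      # passed on v1
        "above_threshold_discounted": discount(200.0) == 100.0,  # passed on v1
    }

print(regression_suite(discount_v1))  # shows the original failure
print(regression_suite(discount_v2))  # fix verified, no side effects
```

Re-running only the first check would be re-testing; re-running all three on the modified build is regression testing.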

WINRUNNER 7.0
- Developed by Mercury Interactive
- Functionality testing tool
- Supports client/server and web applications (VB, VC++, Java, Power Builder, D2K, Delphi, HTML and Siebel)
- WinRunner 7.0 supports auto-learning
- For .NET, SAP, PeopleSoft, Oracle applications and multimedia, we can use Quick Test Professional (QTP)

TEST PROCESS: Learning → Record Script → Edit Script → Run Script → Analyze Results

1. Learning: the recognition of objects and windows in the application by WinRunner is called learning.
2. Record Script: the test engineer creates an automated test script by recording our business operations. WinRunner records manual test operations in TSL (Test Script Language), a C-like language.
3. Edit Script: test engineers insert the required checkpoints into the recorded script.
4. Run Script: during test execution, test engineers run the script instead of testing manually.
5. Analyze Results: during automation script execution on the application build, WinRunner returns results in terms of passed and failed. Depending on those results, test engineers concentrate on defect tracking.

Note: WinRunner runs only on Windows-family operating systems. To conduct functionality testing on an application build on a Unix or Linux platform, we can use XRunner.

Test Script: an automated manual test program is called a test script. A test script consists of two types of statements: navigational statements to operate the project, and checkpoints to conduct testing.

Add-In Manager (window): lists all the technologies supported by WinRunner to conduct testing.

CASE STUDY: a Login window with UID and PWD fields and an OK button.
Expected (manual): the OK button becomes enabled only after filling both UID and PWD.

Process:
   Focus to login → OK disabled
   Enter UID → OK disabled
   Enter PWD → OK enabled

Automation process (TSL):

set_window("Login", 5);
button_check_info("OK", "enabled", 0);
edit_set("UID", "xxxx");
button_check_info("OK", "enabled", 0);
password_edit_set("PWD", "encrypted PWD");
button_check_info("OK", "enabled", 1);
button_press("OK");
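The rule the case study verifies can be modelled apart from any GUI tool. This is plain Python, not WinRunner/TSL code, and the class is invented purely to restate the expected enable/disable behaviour.

```python
class LoginWindow:
    # Toy model of the case-study rule: OK is enabled
    # only when both UID and PWD are filled.
    def __init__(self):
        self.uid = ""
        self.pwd = ""

    @property
    def ok_enabled(self) -> bool:
        return bool(self.uid) and bool(self.pwd)

win = LoginWindow()
assert not win.ok_enabled   # focus to login -> OK disabled
win.uid = "xxxx"
assert not win.ok_enabled   # enter UID -> OK still disabled
win.pwd = "secret"
assert win.ok_enabled       # enter PWD -> OK enabled
```

Each `assert` plays the role of a `button_check_info("OK", "enabled", ...)` checkpoint in the TSL script above.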

Recording Modes: WinRunner records manual operations in two modes: Context Sensitive mode and Analog mode.

a) Context Sensitive Mode: In this mode, WinRunner records mouse and keyboard operations w.r.t. objects and windows in the application build. It is the default mode in WinRunner. Common statements:

   Focus to window → set_window("window name", time to focus);
   Click push button → button_press("button name");
   Fill edit box → edit_set("text box", "typed text");
   Fill password → password_edit_set("password", "encrypted password");
   Select item in list → list_select_item("list box name", "item");
   Select option in menu → menu_select_item("menu name; option name");
   Radio button → button_set("radio button name", ON/OFF);
   Check box → button_set("check box name", ON/OFF);

Note: TSL is a case-sensitive language; it allows entire scripting in lower case but maintains flags (ON/OFF) in upper case.

b) Analog Mode: used to record mouse pointer movements w.r.t. desktop coordinates, e.g., digital signatures, drawing graphs, and image movements. To select analog-mode recording in WinRunner, either click Start Recording twice, or use Create menu → Record – Analog.

WinRunner icons: Start Recording, Run from Top, Run from Point, Pause.

Note: WinRunner maintains F2 as a shortcut key to switch from one recording mode to the other. In analog mode, WinRunner records mouse pointer movements on the desktop w.r.t. desktop coordinates; for this reason, the test engineer keeps the corresponding window in its default position during recording and running, and also keeps the monitor resolution constant.

Analog Recording: in analog mode, WinRunner maintains the TSL statements below.

1. move_locator_track( ): records mouse pointer movements on the desktop in one unit (one second) of time.
   Syntax: move_locator_track(track no);   (by default, track numbers start with 1)

2. mtype( ): records mouse button operations on the desktop.
   Syntax: mtype("<T track no> <kLeft/kRight> +/-");   (+/- means release/hold of the button)

3. type( ): records keyboard operations in analog mode.
   Syntax: type("typed text" / "ASCII notation");

CHECK POINTS: After recording the required navigation, test engineers insert checkpoints into the script to cover the sub-tests below:

1. Behavioural coverage
2. Input domain coverage
3. Error-handling coverage
4. Calculation coverage
5. Backend coverage
6. Service levels coverage

To automate these sub-tests, we can use four types of checkpoints in WinRunner:

1. GUI checkpoint
2. Bitmap checkpoint
3. Database checkpoint
4. Text checkpoint

a) GUI Check Point: to test the properties of objects, we use this checkpoint in WinRunner. It consists of three sub-options:
   a) For Single Property
   b) For Object / Window
   c) For Multiple Objects

a) For Single Property: to verify one property of one object, we use this option.

Example: object Update Order.
   Focus to window → disabled
   Open a record → disabled
   Perform a change → enabled

Navigation: select position in script → Create menu → GUI Check Point → For Single Property → select testable object → select required property with expected value → click Paste.

Test script:

set_window ("Flight Reservation", 4);
button_check_info ("Update Order", "enabled", 0);
menu_select_item ("File;Open Order...");
set_window ("Open Order", 1);
button_set ("Order No.", ON);
edit_set ("Edit", "1");
button_press ("OK");
set_window ("Flight Reservation", 7);
button_check_info ("Update Order", "enabled", 0);
button_set ("Business", ON);
button_check_info ("Update Order", "enabled", 1);
button_press ("Update Order");

Example 2: a Sample window with an input field and an OK button.
Expected: focus to window → input is focused, OK disabled; fill input → OK enabled.

Script:

set_window ("Sample", 5);
edit_check_info ("input", "focused", 1);
button_check_info ("OK", "enabled", 0);
edit_set ("input", "xxxx");
button_check_info ("OK", "enabled", 1);
button_press ("OK");

Example 3: a Student window with Roll No and Name fields and an OK button.
Expected: focus to window → Roll No focused, OK disabled; select Roll No → Name focused, OK disabled; enter Name → OK enabled.

Script:

set_window ("Student", 4);
edit_check_info ("Roll No", "focused", 1);
button_check_info ("OK", "enabled", 0);
list_select_item ("Roll No", "xxxx");
edit_check_info ("Name", "focused", 1);
button_check_info ("OK", "enabled", 0);
edit_set ("Name", "xxxx");
button_check_info ("OK", "enabled", 1);
button_press ("OK");

Case Study: testable properties by object type:

   Push Button → Enabled, Focused
   Radio Button → Enabled, Status
   Check Box → Enabled, Status
   List / Combo Box → Enabled, Focused, Count, Value
   Menu → Enabled, Count
   Table Grid → Rows Count, Column Count, Table Content
   Edit Box / Text Box → Enabled, Focused, Value, Range, Regular Expression, Date Format, Time Format

Example 4: a Journey window with Fly From and Fly To combo boxes.
Expected: when you select one item in Fly From, the number of items in Fly To equals the number of items in Fly From minus 1.

set_window ("Journey", 5);
list_select_item ("Fly From", "xxxx");
list_get_info ("Fly From", "count", n);
list_check_info ("Fly To", "count", n-1);

Example 5: a Sample 1 window with a list box and a Display button, and a Sample 2 window with a text box and an OK button.

Expected: the selected item in the list box equals the text box value when you click Display.

set_window ("Sample 1", 5);
list_select_item ("Item", "xxxx");
list_get_info ("Item", "value", x);
button_press ("Display");
set_window ("Sample 2", 5);
edit_check_info ("Text", "value", x);
button_press ("OK");

Example 6: a Student window with a Roll No combo box, an OK button, and Percentage and Grade fields.
Expected: if % >= 80 → grade A; if 70 <= % < 80 → grade B; if 60 <= % < 70 → grade C; otherwise grade D.

set_window ("Student", 5);
list_select_item ("Roll No", "xxxx");
button_press ("OK");
edit_get_info ("Percentage", "value", P);
if (P >= 80)
    edit_check_info ("Grade", "value", "A");
else if (P < 80 && P >= 70)
    edit_check_info ("Grade", "value", "B");
else if (P < 70 && P >= 60)
    edit_check_info ("Grade", "value", "C");
else
    edit_check_info ("Grade", "value", "D");

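Example 6's grading rule is itself a natural boundary-value target. This Python restatement (function name invented) makes the expected value at every threshold explicit.

```python
def expected_grade(percentage: float) -> str:
    # Grade rule from Example 6: >= 80 -> A, 70-79 -> B, 60-69 -> C, else D.
    if percentage >= 80:
        return "A"
    if percentage >= 70:
        return "B"
    if percentage >= 60:
        return "C"
    return "D"

# Boundary checks at every grade threshold.
for pct, grade in [(80, "A"), (79.9, "B"), (70, "B"),
                   (69.9, "C"), (60, "C"), (59.9, "D")]:
    assert expected_grade(pct) == grade, (pct, grade)
```

Each assertion corresponds to one `edit_check_info ("Grade", "value", ...)` checkpoint run with a different Roll No.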
Example 7: an Insurance window with Type, Age, Gender and Qualification objects.
Expected: if Type is "A" → Age is focused; if Type is "B" → Gender is focused; for any other type → Qualification is focused.

set_window ("Insurance", 5);
list_select_item ("Type", "xxxx");
list_get_info ("Type", "value", x);
if (x == "A")
    edit_check_info ("Age", "focused", 1);
else if (x == "B")
    list_check_info ("Gender", "focused", 1);
else
    list_check_info ("Qualification", "focused", 1);

b) For Object / Window: to test more than one property of a single object, we use this option.

Syntax: obj_check_gui("object name", "checklist file.ckl", "expected values file", time to create);

In this syntax, the checklist file specifies the list of properties to be tested, and the expected values file specifies the expected values for those properties. Both files are created by WinRunner during checkpoint creation.

Example 8: object Update Order (focus to window → open record → perform change).
Navigation: select position in script → Create menu → GUI Check Point → For Object or Window → select testable object (double click) → select required properties with expected values → click OK.

Example 8 script:

set_window ("Flight Reservation", 2);
obj_check_gui ("Update Order", "list2.ckl", "gui2", 1);
menu_select_item ("File;Open Order...");
set_window ("Open Order", 1);
button_set ("Order No.", ON);
edit_set ("Edit", "1");
button_press ("OK");
set_window ("Flight Reservation", 2);
obj_check_gui ("Update Order", "list4.ckl", "gui4", 1);
button_set ("Business", ON);
obj_check_gui ("Update Order", "list5.ckl", "gui5", 1);
button_press ("Update Order");

c) For Multiple Objects: to verify more than one property of more than one object, we use this checkpoint in WinRunner.

Syntax: win_check_gui("window", "checklist file.ckl", "expected values file", time to create);

Example 9: objects Insert Order, Update Order and Delete Order.
   Focus to window → Insert Order disabled, Update Order disabled, Delete Order disabled
   Open a record → Insert Order disabled, Update Order disabled, Delete Order enabled
   Perform change → Insert Order disabled, Update Order enabled (focused), Delete Order enabled

Navigation: select position in script → Create menu → GUI Check Point → For Multiple Objects → click Add → select testable objects → right-click to quit → select required properties with expected values for every object → click OK.

Script:

set_window ("Flight Reservation", 1);
win_check_gui ("Flight Reservation", "list1.ckl", "gui1", 1);
menu_select_item ("File;Open Order...");
set_window ("Open Order", 1);
button_set ("Order No.", ON);
edit_set ("Edit", "1");
button_press ("OK");
set_window ("Flight Reservation", 2);
win_check_gui ("Flight Reservation", "list3.ckl", "gui3", 1);
button_set ("First", ON);
win_check_gui ("Flight Reservation", "list4.ckl", "gui4", 1);
button_press ("Update Order");

Example 10: a Sample window with an Age field.
Expected: range 16 to 80 years.
Navigation: Create menu → GUI Check Point → For Object or Window → select Age object → select Range property → enter From & To values → click OK.

set_window ("Sample", 5);
obj_check_gui ("Age", "list1.ckl", "gui1", 1);

Example 11: a Sample window with a Name field.
Expected: alphabets in lower case.
Navigation: Create menu → GUI Check Point → For Object or Window → select Name object → select Regular Expression → enter the expected expression ([a-z]*) → click OK.

set_window ("Sample", 5);
obj_check_gui ("Name", "list1.ckl", "gui1", 1);

Example 12: a Name object that takes alphabets only.
Regular expression → [a-zA-Z]*

Example 13: a Name object that takes alphanumerics, but the first character is an alphabet.
Regular expression → [a-zA-Z][a-zA-Z0-9]*

Example 14: a Name object that takes alphabets only but allows "_" in the middle.
Regular expression → [a-zA-Z][a-zA-Z_]*[a-zA-Z]

Example 15: prepare a regular expression for a Yahoo user ID.

Example 16: a Name object that allows alphabets in lower case, starting with R and ending with O.
Regular expression → [R][a-z]*[O]

Example 17: prepare a regular expression for the following text box: a text box that allows 12-digit numbers along with * as mandatory, and sometimes allows – also.
Regular expression (as given) → [[0-9][*]]*

Editing Check Points: During test execution, test engineers get test results in terms of passed and failed. These results are analysed by the test engineers before concentrating on defect tracking along with developers. In this review, test engineers perform changes in checkpoints due to their own mistakes or due to changes in project requirements.

a) Changes in expected values: due to a test engineer's mistake or a requirement change, test engineers change expected values through the navigation below.
Navigation: run script → open result → change expected value → re-execute the test to get correct results.

b) Add extra properties: sometimes test engineers add extra properties to existing checkpoints, due to a tester's mistake or requirement enhancements.
Navigation: Create menu → Edit GUI Checklist → select checklist file name → click OK → select new properties to test → click OK → click OK to overwrite → click OK after reading the suggestion → change run mode to Update → click Run → run in Verify mode to get results → open the result → analyse the result and perform the changes required.

2. Bitmap Check Point:

To validate static images in our application build, test engineers use this checkpoint. Ex: logo testing, graph comparison, signature comparison, etc. It consists of two sub-options:
   a) For Object or Window
   b) For Screen Area

a) For Object or Window: to compare our expected image with an actual image in the application build, we use this option.
Syntax: obj_check_bitmap("image object name", "image file name.bmp", time to create);
Navigation: Create menu → Bitmap Checkpoint → For Object or Window → select expected image (double click).

Example 1 (static image, e.g., a label showing "No of items = 10000"): if the actual image equals the expected image, the check passes; if the actual shows "No of items = 10005", it fails.
Example 2 (dynamic image, e.g., a graph drawn from changing data such as 5000/10000): the actual image differs from the expected image on every run, so the comparison is not meaningful; bitmap checkpoints suit static images only.

b) For Screen Area: to compare an expected image area with the actual, we use this option.
Syntax: obj_check_bitmap("image object name", "image file name.bmp", time to create, x, y, width, height);
Navigation: Create menu → Bitmap Checkpoint → For Screen Area → select the required image region → right-click to release.
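The regular expressions in Examples 12 to 16 above can be exercised directly with Python's re module; here re.fullmatch plays the role of the GUI checkpoint's Regular Expression property. The dictionary keys are invented labels.

```python
import re

patterns = {
    "alpha": r"[a-zA-Z]*",                            # Example 12
    "alnum_first_alpha": r"[a-zA-Z][a-zA-Z0-9]*",     # Example 13
    "underscore_mid": r"[a-zA-Z][a-zA-Z_]*[a-zA-Z]",  # Example 14
    "starts_R_ends_O": r"[R][a-z]*[O]",               # Example 16
}

def matches(name: str, value: str) -> bool:
    # The checkpoint passes only if the whole value fits the pattern.
    return re.fullmatch(patterns[name], value) is not None

assert matches("alnum_first_alpha", "user99")
assert not matches("alnum_first_alpha", "9user")
assert matches("underscore_mid", "first_last")
assert not matches("underscore_mid", "_first")
assert matches("starts_R_ends_O", "RajO")
assert not matches("starts_R_ends_O", "Raj")
```

Trying candidate values against the pattern this way is a quick desk-check before entering the expression into the checkpoint dialog.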

Note:
1) TSL functions support a variable number of parameters in calls, like "C" language functions (no function overloading). ARITY = the number of arguments in a function.
2) In functionality test automation, the GUI checkpoint is mandatory, but the bitmap checkpoint is optional, because not all applications have images as contents.

3) Database Check Point:
Backend testing is a part of functionality testing; it is also known as database testing. During testing, test engineers validate the impact of front-end operations on backend table contents in terms of data validation and data integrity. Data validation means checking whether the front-end values are correctly stored in the backend tables. Data integrity means checking whether the impact of front-end operations (updating / deletion) works on the backend table contents.

To automate backend testing, test engineers use the database checkpoint in the Create menu. For this automation, test engineers collect the following information from the development team:
   - DSN (Data Source Name)
   - Table definitions
   - Forms vs. Tables mapping (DDD - Database Design Document)

The database checkpoint wizard works in three steps:
   Step 1: connect to the database (via the DSN)
   Step 2: execute the SELECT statement
   Step 3: provide the results in an Excel sheet to analyse

The database checkpoint consists of three sub-options:
   a) Default Check
   b) Custom Check
   c) Run Time Record Check

a) Default Check: test engineers conduct backend testing depending on database table contents using this checkpoint.

Process (default check):
→ Create a database checkpoint (current content of tables captured as expected)
→ Perform Insert / Delete / Update operations through the front end
→ Execute the database checkpoint (current content of tables captured as actual)
Expected == actual → Fail; expected != actual → Pass (may be).

Navigation: Create menu → database check point → default check → specify connection to database using ODBC (local database) / Data Junction (for remote or distributed database) → select specify SQL statement (C:\Program Files\Mercury Interactive\WinRunner\Temp\test name\msqr1.sql) → click next → click create to select DSN (ex: Machine data source → Flight32) → write select statement (ex: Select * from orders) → click finish.

Syntax: db_check("Check list file name.cdl", "Query result file name.xls");
In the above syntax, the checklist file specifies that content is the property, and the query result file specifies the results of the query in terms of content.

b) Custom Check: Using this option test engineers can conduct back-end testing depending on the number of rows, column contents, and content of database tables. In practice test engineers do not use this option, because the default check content also shows the number of rows and the column names. It is an optional checkpoint in the tester's job.

c) Run Time Record Check: To find the mapping between front-end objects and back-end columns, test engineers use this option, typically when they get a mismatch between front-end objects and back-end columns (ex: front-end object X is expected to map to back-end column a, and Y to column b). To automate this kind of mapping testing, test engineers use the Run Time Record Checkpoint in WinRunner.

Navigation: Create menu → database checkpoint → runtime record check → click next → click create to select DSN → write a select statement with the doubtful columns (ex: select orders.order_number, orders.customer_name from orders) → click next → select the doubtful front-end objects for those columns → click next → select any one of three options (exactly one matching record, one or more matching records, no matching records) → click finish.

Syntax: db_record_check("check list file name.cvr", DVR_ONE_MATCH / DVR_ONE_OR_MORE_MATCH / DVR_NO_MATCH, variable);
→ In the above syntax the checklist file specifies the expected mapping between back-end columns and front-end objects.
→ The flag specifies the type of matching.
→ The variable receives the number of records matched.

Example:
for(i=1; i<=5; i++)
{
  set_window("Flight Reservation", 4);
  menu_select_item("File;Open Order...");
  set_window("Open Order", 1);
  button_set("Order No.", ON);
  edit_set("Edit", "1");
  button_press("OK");
  db_record_check("list1.cvr", DVR_ONE_OR_MORE_MATCH, record_num);
}

Note: the Run Time Record Checkpoint does not allow ";" at the end of the select statement.

4) Text Check Point:
To conduct calculations and other text based tests, we can use the get text option in WinRunner. It is a new concept in WinRunner 7.0. This option consists of two sub options: a) From Object or Window b) From Screen Area.

a) From Object or Window: To capture object values into a variable, we use this option.
Syntax: obj_get_text("name of the object", variable);
Note: the above function is the same as edit_get_info("edit box name", "value", variable);
Navigation: Create menu → Get text → From Object / Window → select object (double click).

Example: a Sample window with an Input field and an Output field; expected: Output = Input * 100.

set_window("sample", 5);
obj_get_text("input", x);
obj_get_text("output", y);
if (y == x * 100)
  printf("test is pass");
else
  printf("test is fail");

b) From Screen Area: To capture static text from a screen area, we can use this option.
Syntax: obj_get_text("object name", variable, x1, y1, x2, y2);
Navigation: Create menu → get text → from screen area → select required region → right click to release.

Example 1: Getting text from an object / window and using substrings to cut some part of the string (Flight Reservation: Total = Tickets * Price, where Price and Total appear with a leading "$"):

set_window("flight reservation", 5);
obj_get_text("tickets", t);
obj_get_text("price", p);
obj_get_text("total", tot);
p = substr(p, 2, length(p) - 1);
tot = substr(tot, 2, length(tot) - 1);
if (tot == t * p)
  printf("test is pass");
else
  printf("test is fail");

Example 2: a Shopping window with QTY, Price (Rs:xxx/-) and Total (Rs:xxx/-); expected: Total = Price * QTY.

set_window("shopping", 5);
obj_get_text("QTY", q);
obj_get_text("price", p);
obj_get_text("Total", tot);
p = substr(p, 4, length(p) - 5);
tot = substr(tot, 4, length(tot) - 5);
if (tot == q * p)
  printf("test is pass");
else
  printf("test is fail");

tl_step( ): To create our own pass / fail result in the result window, we can use this statement. In the syntax below, "tl" stands for test log (test results).
Syntax: tl_step("step name", 0 / 1, "description");
0 → pass, 1 (any non-zero value) → fail.

Data Driven Test (DDT):
A DDT is nothing but a retest: executing one test more than one time on the same application build with multiple test data. There are four types of DDT to validate functionality:
a) Dynamic test data submission (keyboard test data into the build)
b) Through flat files (.txt)
c) From front-end grids (list, menu, table, ActiveX and data window objects)
d) Through Excel sheet

a) Dynamic Test Data Submission: Sometimes test engineers conduct re-testing depending on multiple test data submitted manually through the keyboard.

To capture a value from the keyboard during test execution, we can use the below TSL statement.
Syntax: create_input_dialog("message");

Example 1:
for(i=1; i<=5; i++)
{
  x = create_input_dialog("Enter order No");
  set_window("Flight Reservation", 3);
  menu_select_item("File;Open Order...");
  set_window("Open Order", 1);
  button_set("Order No.", ON);
  edit_set("Edit_1", x);
  button_press("OK");
}

Example 2: a Multiply window with Input 1, Input 2, OK and Result; expected: Result = Input 1 * Input 2. Test data in paper: 10 pairs of inputs.

for(i=1; i<=10; i++)
{
  x = create_input_dialog("Enter Input 1");
  y = create_input_dialog("Enter Input 2");
  set_window("Multiply", 3);
  edit_set("Input 1", x);
  edit_set("Input 2", y);
  button_press("OK");
  obj_get_text("result", temp);
  if(temp == x * y)
    tl_step("step", 0, "Pass");
  else
    tl_step("step", 1, "Fail");
}

Example 3: a Shopping window with Item No, QTY, OK, Price ($) and Total ($); expected: Total = Price * QTY. Test data in paper: 10 pairs of item no and QTY.

for(i=1; i<=10; i++)
{
  x = create_input_dialog("Enter Item No");
  y = create_input_dialog("Enter QTY");
  set_window("Shopping", 5);
  edit_set("Item No", x);
  edit_set("QTY", y);
  button_press("OK");
  obj_get_text("Price", p);
  obj_get_text("Total", tot);
  p = substr(p, 2, length(p) - 1);
  tot = substr(tot, 2, length(tot) - 1);
  if (tot == p * y)
    tl_step("step1", 0, "Test is pass");
  else
    tl_step("step1", 1, "Test is fail");
}

Example 4: a Login window with User ID, Pwd, OK and Next; expected: if Next is enabled the user is authorised, if Next is disabled the user is unauthorised. Test data in paper: 10 pairs of user IDs & passwords.

for(i=1; i<=10; i++)
{
  x = create_input_dialog("Enter User ID");
  y = create_input_dialog("Enter Pwd");
  set_window("Login", 5);
  edit_set("User ID", x);
  password_edit_set("Pwd", password_encrypt(y));
  button_press("OK");
  button_get_info("Next", "enabled", n);
  if (n == 1)
    tl_step("step1", 0, "User is Authorised");
  else
    tl_step("step1", 1, "User is Unauthorised");
}

b) Through Flat Files: Sometimes test engineers conduct re-testing depending on multiple test data read from a flat file into the test screen.

To prepare the above model of automated test script, test engineers use a few file functions in WinRunner:

1. file_open( ): We can use this function to open a file into RAM with the required permissions.
Syntax: file_open("file path", FO_MODE_READ / FO_MODE_WRITE / FO_MODE_APPEND);

2. file_getline( ): We can use this function to read a line from a file opened in READ mode.
Syntax: file_getline("path of file", variable);
Note: in TSL the file pointer is incremented automatically up to the end of the file.

3. file_close( ): We can use this function to sweep out an opened file from RAM.
Syntax: file_close("path of file");

Example 1:
f = "c:\\My Documents\\data.txt";
file_open(f, FO_MODE_READ);
while(file_getline(f, s) != E_FILE_EOF)
{
  set_window("Flight Reservation", 5);
  menu_select_item("File;Open Order...");
  set_window("Open Order", 1);
  button_set("Order No.", ON);
  edit_set("Edit", s);
  button_press("OK");
}
file_close(f);

Example 2: a Multiply window; expected: Result = Input 1 * Input 2. Test data in file c:\My Documents\data.txt, two values per line (ex: "xx xxx").

f = "c:\\My Documents\\data.txt";
file_open(f, FO_MODE_READ);
while(file_getline(f, s) != E_FILE_EOF)
{
  split(s, x, " ");
  set_window("Multiply", 3);
  edit_set("Input 1", x[1]);
  edit_set("Input 2", x[2]);
  button_press("OK");
  obj_get_text("result", temp);
  if(temp == x[1] * x[2])
    tl_step("step", 0, "Pass");
  else
    tl_step("step", 1, "Fail");
}
file_close(f);

Example 3: a Shopping window with Item No, QTY, OK, Price ($) and Total ($); expected: Total = Price * QTY. Test data in file c:\My Documents\data.txt as sentences, ex: "Ram purchase 101 items as 10 pieces".

f = "c:\\My Documents\\data.txt";
file_open(f, FO_MODE_READ);
while(file_getline(f, s) != E_FILE_EOF)
{
  split(s, x, " ");
  set_window("Shopping", 5);
  edit_set("Item No", x[3]);
  edit_set("QTY", x[6]);
  button_press("OK");
  obj_get_text("Price", p);
  obj_get_text("Total", tot);
  p = substr(p, 2, length(p) - 1);
  tot = substr(tot, 2, length(tot) - 1);
  if (tot == p * x[6])
    tl_step("step1", 0, "Test is pass");
  else
    tl_step("step1", 1, "Test is fail");
}
file_close(f);

Example 4: a Login window; expected: if Next is enabled the user is authorised, if Next is disabled the user is unauthorised. Test data in file c:\My Documents\data.txt as "xxxx@xxx xx" (user ID and password).

f = "c:\\My Documents\\data.txt";
file_open(f, FO_MODE_READ);
while(file_getline(f, s) != E_FILE_EOF)
{
  split(s, x, "@");
  split(x[2], y, " ");
  set_window("Login", 5);
  edit_set("User ID", x[1] & "@" & y[1]);
  password_edit_set("Pwd", password_encrypt(y[2]));
  button_press("OK");
  button_get_info("Next", "enabled", n);
  if (n == 1)
    tl_step("step1", 0, "User is Authorised");
  else
    tl_step("step1", 1, "User is Unauthorised");
}
file_close(f);

4. file_printf( ): We can use this function to print specified text into a file, if the file is opened in WRITE / APPEND mode.
Syntax: file_printf("path of file", "format", values / variables);
Ex: with a = xx and b = xx, file_printf(f, "a = %d and b = %d", a, b); prints "a = xx and b = xx" into the file.

5. file_compare( ): We can use this function to compare the contents of two files.
Syntax: file_compare("path of file1", "path of file2", "path of file3");
In the above syntax the third argument is optional; it specifies a file that receives the concatenated content of both compared files.

c) From Front-End Grids: Sometimes test engineers conduct re-testing depending on multiple data objects in the build, such as list, menu, table, ActiveX and data window objects.

Example 1: a Journey window with Fly From and Fly To list boxes and a Display button; expected: the item selected in Fly From is not available in Fly To.

set_window("Journey", 5);
list_get_info("Fly From", "count", n);
for(i=0; i<n; i++)
{
  list_get_item("Fly From", i, x);
  list_select_item("Fly From", x);
  button_press("Display");
  if (list_select_item("Fly To", x) != E_OK)
    tl_step("step", 0, "Does not appear");
  else
    tl_step("step", 1, "Appears and test is fail");
}
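As a small sketch of the two file functions above (the file names here are illustrative, not taken from the build):

# print two variables into a result file (the file must be opened in WRITE / APPEND mode)
f = "c:\\My Documents\\results.txt";
file_open(f, FO_MODE_WRITE);
a = 10;
b = 20;
file_printf(f, "a = %d and b = %d", a, b);
file_close(f);

# compare today's results with a previously saved baseline file;
# the optional third file receives the combined content of both compared files
file_compare("c:\\My Documents\\results.txt", "c:\\My Documents\\baseline.txt", "c:\\My Documents\\combined.txt");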

In WinRunner, every TSL statement returns E_OK when the statement executes successfully on our build.

Example 2: a Sample 1 window with a Name list box and an OK button, and a Sample 2 window with a Text field. Expected: the item selected in the list box appears in the text box as "My Name is XXXXX".

set_window("Sample 1", 5);
list_get_info("Name", "count", n);
for(i=0; i<n; i++)
{
  set_window("Sample 1", 5);
  list_get_item("Name", i, x);
  list_select_item("Name", x);
  button_press("OK");
  set_window("Sample 2", 5);
  obj_get_text("Text", temp);
  if (temp == "My Name is " & x)
    tl_step("step", 0, "Test is Pass");
  else
    tl_step("step", 1, "Test is fail");
}

Note: In TSL, & concatenates (adds) two strings.

Example 3: an Employee window with an EMP No list box, an OK button, and bsal / gsal fields. Expected:
→ If bsal >= 15000 then gsal = bsal + 10% of bsal
→ If bsal < 15000 and >= 8000 then gsal = bsal + 5% of bsal
→ If bsal < 8000 then gsal = bsal + 200

set_window("Employee", 5);
list_get_info("EMP No", "count", n);
for (i = 0; i < n; i++)
{
  list_get_item("EMP No", i, x);
  list_select_item("EMP No", x);
  button_press("OK");
  obj_get_text("bsal", b);
  obj_get_text("gsal", g);
  if (b >= 15000 && g == b + b*10/100)
    tl_step("step1", 0, "Calculation is Pass");
  else if (b < 15000 && b >= 8000 && g == b + b*5/100)
    tl_step("step1", 0, "Calculation is Pass");
  else if (b < 8000 && g == b + 200)
    tl_step("step1", 0, "Calculation is Pass");
  else
    tl_step("step1", 1, "Calculation is fail");
}

Example 4: an Insurance window with Type, Age, Gender and Qualification fields. Expected:
→ If Type is "A", Age is focused
→ If Type is "B", Gender is focused
→ For any other type, Qualification is focused

set_window("Insurance", 5);
list_get_info("Type", "count", n);

list_select_item(“Type”.list_get_item(“Type”. “ focused ”. 1). } . “ focused ”. else list_check_info(“Qualification”. if (x == A) edit_check_info(“Age”. else if (x == B) list_check_info(“Gender”. i. “ focused ”. 1).x). x). 1).

Example 5: an AUDIT window with a file_store table grid (columns: S.No, File Path, Type, Size; ex: 1 XX 10kb ... 5 xx 50kb) and a Total field (xxx kb). Expected: Total = sum of the Size column.

sum = 0;
set_window("AUDIT", 5);
tbl_get_rows_count("file_store", n);
for (i=1; i<n; i++)
{
  tbl_get_cell_data("file_store", "#"&i, "#3", s);
  s = substr(s, 1, length(s) - 2);
  sum = sum + s;
}
obj_get_text("Total", tot);
tot = substr(tot, 1, length(tot) - 2);
if (tot == sum)
  tl_step("step1", 0, "calculation is pass");
else
  tl_step("step1", 1, "calculation is fail");

6. list_get_item( ): We can use this function to capture a specified list item through its item number. Item numbers start with "0".
Syntax: list_get_item("list box name", item no, variable);

7. tbl_get_rows_count( ): We can use this function to find the number of rows in a table grid.
Syntax: tbl_get_rows_count("table grid name", variable);

8. tbl_get_cell_data( ): We can use this function to capture a specified cell value into a variable through the row no & column no.

Syntax: tbl_get_cell_data("table grid name", "#row no", "#column no", variable);

d) Through Excel Sheet: In general, test engineers conduct data driven tests using Excel sheet data. This is the default method in data driven testing. To create this type of automated script, WinRunner provides a special navigation:

Navigation: Create the test for one input → tools menu → data driven wizard → click next → browse the path of the excel sheet (c:\PF\MI\WR\Temp\testname\default.xls) → specify a variable name to assign the path (by default, table) → select import data from database → click next → select the type of database connection (ODBC or Data Junction) → select specify SQL statement (c:\PF\MI\WR\Temp\testname\msqrl.sql) → click next → click create to select the data source name → write the SQL statement (select order_number from orders) → click next → select excel sheet column names into the required places of the test script → select show data table now → click finish → click run → analyse the results manually.

Example 1:
table = "default.xls";
rc = ddt_open(table, DDT_MODE_READWRITE);
if (rc != E_OK && rc != E_FILE_OPEN)
  pause("Cannot open table.");
ddt_update_from_db(table, "msqr1.sql", count);
ddt_save(table);
ddt_get_row_count(table, n);
for(i = 1; i <= n; i++)
{
  ddt_set_row(table, i);
  set_window("Flight Reservation", 6);
  menu_select_item("File;Open Order...");
  set_window("Open Order", 1);
  button_set("Order No.", ON);
  edit_set("Edit", ddt_val(table, "order_number"));
  button_press("OK");
}
ddt_close(table);

Excel sheet (DDT) functions:

1. ddt_open( ): We can use this function to open a test data excel sheet into RAM with the specified permissions.
Syntax: ddt_open("path of excel file", DDT_MODE_READ / DDT_MODE_READWRITE);

2. ddt_close( ): We can use this function to swap out an opened excel sheet from RAM.
Syntax: ddt_close("path of excel sheet");

3. ddt_get_row_count( ): We can use this function to find the total number of rows in the excel sheet.
Syntax: ddt_get_row_count("path of excel sheet", variable);

4. ddt_set_row( ): We can use this function to point to a row in the excel sheet.
Syntax: ddt_set_row("path of excel sheet", row no);

5. ddt_val( ): We can use this function to capture a specified column value from the pointed row.
Syntax: ddt_val("path of excel sheet", "column name");

6. ddt_set_val( ): We can use this function to write a value into an excel sheet column.
Syntax: ddt_set_val("path of excel sheet", "column name", value / variable);

7. ddt_save( ): We can use this function to save excel sheet modifications permanently.
Syntax: ddt_save("path of excel sheet");

8. ddt_update_from_db( ): We can use this function to extend the excel sheet data w.r.t changes in the database.
Syntax: ddt_update_from_db("path of excel sheet", "path of query file", variable);

Example 2: Prepare a data driven test script for the below scenario. default.xls has an Input column; expected: the factorial of the input is written into the Result column.

table = "default.xls";
rc = ddt_open(table, DDT_MODE_READWRITE);
if (rc != E_OK && rc != E_FILE_OPEN)
  pause("Cannot open table.");
ddt_get_row_count(table, n);
for(i = 1; i <= n; i++)
{
  ddt_set_row(table, i);
  x = ddt_val(table, "input");
  fact = 1;
  for(j = x; j >= 1; j--)
    fact = fact * j;
  ddt_set_val(table, "result", fact);
}
ddt_save(table);
ddt_close(table);

Example 3: Prepare a test script for the below scenario. default.xls has Input1, Input2 and Result columns; expected: Result (Input1 + Input2) should be written into the excel sheet.

table = "default.xls";
rc = ddt_open(table, DDT_MODE_READWRITE);
if (rc != E_OK && rc != E_FILE_OPEN)
  pause("Cannot open table.");
ddt_get_row_count(table, n);
for(i = 1; i <= n; i++)
{
  ddt_set_row(table, i);
  a = ddt_val(table, "Input1");
  b = ddt_val(table, "Input2");
  c = a + b;
  ddt_set_val(table, "result", c);
}
ddt_save(table);
ddt_close(table);

txt". i<n. . i <= n. j >= 1.FO_MODE_WRITE).FO_MODE_WRITE). 10).i. list_get_info("Fly From:". Example4: Prepare test script to print a list box values into a excel sheet one by one."result". } file_close(f). n). for(i = 1.n).xls"."%s\n".ddt_get_row_count(table.i."%s\r\n". file_open(f. x=ddt_val(table.x). } file_close(f). list_get_info("Fly From:".fact). f="c:\My Documents\sm. set_window ("Flight Reservation". ddt_save(table). Example4: Prepare test script to print a list box values into a flat file one by one. "count".j--) fact=fact*j ddt_set_val(table. f="c:\My Documents\sm. for(j = x. file_printf(f."input").x). for(i=0. } ddt_close(table). set_window ("Flight Reservation".x).10).x). we can use this concepts. for(i=0. Synchronization Point: To maintain time mapping between testing tool and application build during test execution. 13). i++) { list_get_item("Fly From:". i++) { ddt_set_row(table. set_window ("Flight Reservation".n). i++) { list_get_item ("Fly From:".i). file_printf(f. fact=1. "count". i<n. file_open(f.

1. For Object / Window Property: WinRunner waits until the specified object property is equal to our expected value.
Syntax: obj_wait_info("object name", "property", expected value, maximum time to wait);
Navigation: Select position in script → create menu → synchronization point → for object / window property → select indicator object (ex: a status or progress bar) → select required property with expected value (100% → enabled, <100% → disabled) → specify maximum time to wait → click paste.

2. For Object / Window Bitmap: Sometimes test engineers define the time mapping between the tool and the application depending on images.
Syntax: obj_wait_bitmap("image object name", "image file name.bmp", maximum time to wait);
Navigation: Select position in script → create menu → synchronization point → for object / window bitmap → select indicator image (double click).

3. For Screen Area Bitmap: Sometimes test engineers define the time mapping between the testing tool and the application depending on part of an image.
Syntax: obj_wait_bitmap("image object name", "image file name.bmp", maximum time to wait, x, y, width, height);
Navigation: Select position in script → create menu → synchronization point → for screen area bitmap → select required image region → right click to release.

4. wait( ): This function defines a fixed waiting time during test execution.
Syntax: wait(time in seconds);
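For instance, a sketch of a property-based synchronization point; the window, button and indicator object names here are assumed, in the style of the Flight Reservation sample build:

set_window("Flight Reservation", 5);
button_press("Insert Order");
# wait up to 10 seconds until the "Insert Done..." indicator becomes enabled,
# instead of freezing the script with a fixed wait(10)
obj_wait_info("Insert Done...", "enabled", 1, 10);
button_press("Delete Order");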

5. Change Runtime Settings: During test script execution, WinRunner depends on two runtime parameters:
→ Delay for window synchronization - 1000 msec (default)
→ Timeout for executing context sensitive statements and checkpoints - 10000 msec (default)
During running, the recording-time values are not useful, so test engineers change these parameters if required.
Navigation: Settings menu → general options → change the delay and timeout depending on requirements → click apply → click ok.

BATCH TESTING
The sequential execution of more than one test to validate functionality is called batch testing. A test batch is also known as a test suite or test set. Every test batch consists of a set of multiple dependent tests; in every test batch, the end state of one test is the base state of the next test. Batch testing is a suitable technique to increase the intensity of bug finding during test execution.

Example 1: Test case 1 → successful order open; Test case 2 → successful updating.
Example 2: Test case 1 → successful new user registration; Test case 2 → successful login; Test case 3 → successful mail open; Test case 4 → successful mail reply.
Example 3: Test case 1 → successful order open; Test case 2 → successful calculation.

To create this type of batch in WinRunner, we can use the below statements:
a) call testname( );
b) call "path of test"( );
We can use the first syntax when the calling & called tests are both in the same folder, and the second syntax when the calling & called tests are in different folders.
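A sketch of such a batch driver for Example 2 above (the test names and the path are hypothetical):

# main batch test: each called test ends in the base state of the next one
call "c:\\qtests\\new_user_registration"( );   # different folder, so the path form is used
call login( );                                 # same folder, so the test name form is used
call mail_open( );
call mail_reply( );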

DATA DRIVEN BATCH:

Main test / calling test:
table = "default.xls";
rc = ddt_open(table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
  pause("Cannot open table.");
ddt_get_row_count(table, n);
for(i = 1; i <= n; i++)
{
  ddt_set_row(table, i);
  temp = ddt_val(table, "input");
  call subsri(temp);
}
ddt_close(table);

Sub test / called test (receives the value through parameter x):
set_window("Flight Reservation", 1);
edit_set("edit", x);
obj_get_text("Tickets:", t);
obj_get_text("Price:", p);
obj_get_text("Total:", tot);
p = substr(p, 2, length(p) - 1);
tot = substr(tot, 2, length(tot) - 1);
if(tot == p * t)
  tl_step("s1", 0, "test is pass");
else
  tl_step("s1", 1, "test is fail");

Parameter Passing: To transmit values from one test to another, we can use the parameter passing concept in batch testing. In the above model, the sub test maintains a parameter to receive values from the main test. To create this type of parameter, we can follow the below navigation:
Navigation: Open the sub test → file menu → test properties → parameters tab → click add to create a new parameter → enter the parameter name with description → click ok → click add to create more parameters if required → click ok → use the parameter in the required places of the test script.

treturn( ): We can use this function to return a value from a sub test to the main test. Note: it allows only one value to return.
Syntax: treturn(value / variable);

Example. Main test / calling test:
t = call subtest(xx);
if(t == 0)
  pause("cannot open record");

Sub test / called test:
set_window("Flight Reservation", 1);
menu_select_item("File;Open Order...");
set_window("Open Order", 2);
button_set("Order No.", ON);
edit_set("Edit", x);
button_press("OK");
set_window("Flight Reservation", 1);
obj_get_text("Name:", x);
if(x != " ")
  treturn(1);
else
  treturn(0);

Public Variables: To access a single variable in more than one test of a batch, we can declare it public.
Syntax: public variable;
Note: By default, variables are local in TSL scripts.

Silent Mode: WinRunner allows you to continue test execution even when a checkpoint fails. To define this type of situation, we can follow the below navigation:
Navigation: Settings menu → general options → run tab → select run in batch mode → click apply → click ok.
Note: When WinRunner is in silent mode, tester interactive statements are not working. Ex: create_input_dialog("xxxxx");
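A minimal sketch of a public variable shared across the tests of a batch (the test and variable names are assumed):

# in the first (calling) test of the batch:
public order_no;                               # visible in every test of the batch
order_no = create_input_dialog("Enter order No");
call subtest( );

# in subtest, the same variable is read directly, with no parameter passing:
set_window("Open Order", 1);
edit_set("Edit", order_no);
button_press("OK");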

FUNCTION GENERATOR:
The function generator is a library of TSL functions. In this library, TSL functions are classified category wise. To search for a required TSL function, we can follow the below navigation:
Navigation: Create menu → insert function → from function generator → select required category → select required function based on its description → fill the arguments → click paste.

Example 1: Clipboard Testing. A tester conducts a test on the selected part of an object.
Syntax: edit_get_selection("name of edit box", variable);

set_window("login", 5);
edit_get_selection("Agent Name", v);
printf(v);

Example 2: Window Existence Test: tests whether a specified window is available on the desktop or not. This function returns E_OK if the window exists, else E_NOT_FOUND. In the syntax, time specifies the delay before testing the existence of the window.
Syntax: win_exists("window name", time);

Case study: test 2 runs when the "sample" window exists (pass path), test 3 runs otherwise (fail path):
call test1( );
if(win_exists("sample", 0) == E_OK)
  call test2( );
else
  call test3( );
call test4( );

Example 3: Open Project: WinRunner allows you to open another application during test execution.
Syntax: invoke_application("path of .exe", "command", "working directory", SW_SHOW / SW_SHOWMINIMIZED / SW_SHOWMAXIMIZED);

Example 4: Search for the TSL function to print the system date (system category).

Example 5: Search for the TSL function to print the timeout:
X = getvar("timeout_msec");
printf(X);
Syntax: getvar("timeout_msec");

Example 6: Search for the TSL function to change the timeout without using the settings menu.
Syntax: setvar("timeout", time in sec);

Example 7: Search for the TSL function to print the parent path of WinRunner.

Example 8: Search for the TSL function to print the path of the current test.

Example 9: Execute Prepared Query: WinRunner allows you to use variables in a query. A prepared query consists of the structure of that query; this query is also known as a dynamic query. A TSL script executes the prepared "select" statement on the back end through the DSN and can write the result into an Excel / flat file, using db_connect( ), db_execute_query( ) and db_write_records( ).
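For example, a hedged sketch of invoke_application; the path shown is the usual location of the WinRunner Flight Reservation sample build and should be adjusted to the actual installation:

# launch the sample build in a normal window before the test starts
invoke_application("c:\\Program Files\\Mercury Interactive\\WinRunner\\samples\\flight\\app\\flight4a.exe",
                   "", "c:\\Program Files\\Mercury Interactive\\WinRunner\\samples\\flight\\app", SW_SHOW);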

db_connect( ): We can use this function to connect to a database using an existing DSN. In the syntax, the session name indicates the resources allocated to the user when he is connected to the database.
Syntax: db_connect("session name", "DSN=****");

db_execute_query( ): We can use this function to execute a specified select statement on the connected database. In the syntax, the variable receives the number of rows selected after the execution of that statement.
Syntax: db_execute_query("session name", "select statement", variable);

db_write_records( ): We can use this function to copy a query result into a specified file. In the syntax, TRUE indicates the query result with a header and FALSE indicates the query result without a header; NO_LIMIT specifies that there are no restrictions on the query result size.
Syntax: db_write_records("session name", "destination file path", TRUE / FALSE, NO_LIMIT);

Note: These three functions are available in the database category.

Example:
x = create_input_dialog("enter limit");
db_connect("query1", "DSN=Flight32");
db_execute_query("query1", "select * from orders where order_number<=" & x, num);

db_write_records("query1", "default.xls", FALSE, NO_LIMIT);

USER DEFINED FUNCTIONS (UDF):
Like a programming language, TSL allows you to create user-defined functions. Every user-defined function indicates a repeatable navigation in your build w.r.t testing. For example, a login-only navigation can be kept in a UDF and reused by the Mail Open, Mail Compose, Mail Reply and Mail Forward test scripts (which contain the checkpoints).

Syntax:
public / static function function_name(in / out / inout argument name)
{
  # repeatable test script
  return( );
}

In the above syntax:
→ An "in" parameter works as a general argument.
→ An "out" parameter works as a return value.
→ An "inout" parameter works as both in and out.

A public function can be invoked in any test. A static function maintains constant locations for its variables during execution (ex: i = 10 and i = 100 keep their locations between calls). Note: We can use a static function to maintain the output of one execution as the input of the next execution. User-defined functions also allow return statements to return one value.

Example 1 (out parameter):
public function add(in x, in y, out z)
{

  z = x + y;
}
Calling test:
a = 10;
b = 20;
add(a, b, c);
printf( c );

Example 2 (return value):
public function add(in x, in y)
{
  auto z;
  z = x + y;
  return(z);
}
Calling test:
a = 10;
b = 20;
c = add(a, b);
printf( c );

Example 3 (inout parameter):
public function add(in x, inout y)
{
  y = x + y;
}
Calling test:
a = 10;
b = 20;
add(a, b);
printf( b );

Example 4:
public function open(in x)
{

  set_window("Flight Reservation", 2);
  menu_select_item("File;Open Order...");
  set_window("Open Order", 1);
  button_set("Order No.", ON);
  edit_set("Edit", x);
  button_press("OK");
}

To call user-defined functions in required test scripts, we can make the user-defined functions available as compiled (.EXE) copies. To do this task, test engineers follow the below navigation:
Navigation: Open WinRunner → click new → record the repeatable navigations as UDFs → save the module in the dat folder → file menu → test properties → general tab → change the test type to Compiled Module → click apply → click OK → execute once (a permanent compiled copy is created for those user-defined functions on the hard disk) → write a load statement in the startup script of WinRunner (c:\Program Files\Mercury Interactive\WinRunner\dat\myinit).

load( ): We can use this statement to load a user-defined compiled module from hard disk to RAM. We can use this statement in our test scripts if required.
Syntax: load("compiled module name", 0 / 1, 0 / 1);
In the above syntax:
→ the first 0 / 1 defines user defined / system defined;
→ the second 0 / 1 defines whether the path appears in the WinRunner window menu or not.
Note: We can use this load statement in the startup script of WinRunner.

unload( ): We can use this function to unload unwanted functions from RAM.
Syntax: unload("path of compiled module", "unwanted function name");

reload( ): We can use this function to reload unloaded functions once again.
Syntax: reload("path of compiled module", 0 / 1, 0 / 1); → loads all functions
OR reload("path of compiled module", 0 / 1, 0 / 1, "unloaded function name");

LEARNING
In general, the test automation process starts with learning to recognize the objects and windows in your application build. WinRunner 7.0 supports auto learning and pre learning.
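Putting load / unload together with the open(in x) function above, a sketch (the module name is assumed) of how a compiled module is used from a test:

# load the compiled module of UDFs (first 1 → user defined;
# the second flag controls whether the module path appears in the WinRunner window menu)
load("open_order_module", 1, 1);
# call the UDF recorded in the module
open("25");
# remove only this function from RAM when it is no longer needed
unload("open_order_module", "open");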

1. Auto Learning: During recording, WinRunner recognizes all the objects and windows that you operate:
Step 1: Start recording (during recording)
Step 2: Recognize the object (during recording)
Step 3: Script generation (during recording)
Step 4: Catch the entry (during running)
Step 5: Catch the object (during running)

For example, pressing an OK button generates button_press("OK"); in the script and the below entry in the GUI map:
Logical Name: OK
{
  class: push_button,
  label: "OK"
}

To maintain these recognized entries, WinRunner uses the GUI MAP. To edit the entries, we can follow: Tools menu → GUI map editor. To maintain these entries, test engineers follow two types of modes: a) Global GUI Map File b) Per Test Mode.

a) Global GUI Map File: In this mode, WinRunner maintains common entries for the objects and windows of all tests in a single file (.gui), which test engineers save and open explicitly.

b) Per Test Mode: In this mode, WinRunner implicitly maintains the entries for objects & windows per every test (a .gui file per test). In general, test engineers use the global GUI map file mode. If we have to change to per test mode, we can follow the below navigation:
Navigation: Settings menu → general options → environment tab → select GUI Map File Per Test → click apply → click ok.

If test engineers forget to save the entries, WinRunner maintains the unsaved entries in a default buffer (10 KB). To open this buffer:
Tools → GUI Map editor → view menu → GUI Files → L0 <temporary>.
To save / open GUI map entries, test engineers use the file menu options in the GUI Map editor.

2. Pre Learning: Because auto learning is a new concept in WinRunner 7.0, in general the test engineer's job starts with learning in lower versions of WinRunner (ex: 6.5). To conduct this pre learning before starting recording, we can use the Rapid Test Script Wizard (RTSW):
Navigation: Open the build & WinRunner → create menu → Rapid Test Script Wizard → click next → show the application main window → click next → select "no test" → click next → enter the sub menu symbols (…, >>, →) → click next → select the pre learning mode (express, comprehensive) → learn → say yes / no to open the project automatically during WinRunner launching → click next → remember the paths of the startup script and GUI map file → click next → click ok.

Sometimes test engineers perform changes in the entries w.r.t test requirements:

Situation 1: (Wild Card Characters) Sometimes the labels of our application objects / windows vary depending on multiple input values. To create a data driven test on this type of object / window, we can change the corresponding object / window entry with wild card characters.

Original Entry:
Logical name: fax order no1
{
  class: window,
  label: "fax order no. 1"
}
Modified Entry:
Logical name: fax order no1
{
  class: window,
  label: "fax order no. *"
}

Example (the wildcard in the modified label is a reconstruction of the garbled original):

Original entry: Logical name: fax order no1 { class: window, label: "fax order no. 1" }
Modified entry: Logical name: fax order no1 { class: window, label: "fax order no. *" }

To perform the above change, we can follow the below navigation:

Tools → GUI Map Editor → select the corresponding entry → click Modify → insert wild card changes as in the example above → click OK.

Situation 2 (Regular Expression): Sometimes the labels of our application build's objects / windows vary depending on events, e.g. a button whose label toggles between "Start" and "Stop". To create a data-driven test on this type of object / window, we can perform the change in the entry using a regular expression (the "!" prefix marks the label as a regular expression):

Original entry: Logical name: start { class: push_button, label: "start" }
Modified entry: Logical name: start { class: push_button, label: "![s][to][a-z]*" }

Situation 3 (Virtual Object Wizard): Sometimes WinRunner is not able to recognize advanced-technology objects in our application build. To forcibly recognize such non-recognized objects, we can use the Virtual Object Wizard:

Navigation: Tools menu → Virtual Object Wizard → click Next → select expected type → click Next → click Mark Object to select the non-recognized area → right-click to release → click Next → enter a logical name for that entry → click Next → say Yes / No to create more virtual objects → click Finish.
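The Start/Stop regular-expression entry above can be illustrated outside WinRunner. This is a minimal Python sketch (not TSL) showing why the pattern `[s][to][a-z]*` matches both labels of the toggling button:

```python
import re

# In WinRunner a leading "!" marks a label as a regular expression;
# the pattern itself is what follows: [s][to][a-z]*
pattern = re.compile(r"[s][to][a-z]*")

# "start" = s + t + "art"; "stop" = s + t + "op" - both match one entry.
assert pattern.fullmatch("start")
assert pattern.fullmatch("stop")
assert not pattern.fullmatch("cancel")   # an unrelated label does not match
```

Note that `[to]` matches a single character ('t' or 'o'), and `[a-z]*` absorbs the rest of the label.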

Situation 4 (Mapped to Standard Class): Sometimes WinRunner is not able to return all available properties of a recognized object. To get all testable properties for that object, we can follow the below navigation:

Navigation: Tools menu → GUI Map Configuration → click Add → show the non-testable object → click OK → click Configure → select Mapped to Standard Class → click OK.

Situation 5 (GUI Map Configuration): Sometimes more than one object in a single window has the same physical description w.r.t. the WinRunner defaults (class & label), e.g. two OK buttons in one window. Because every object has a unique MSW_id, the entries can be distinguished:

Logical Name: OK   { class: push_button, label: "OK", MSW_id: XXXX }
Logical Name: OK_1 { class: push_button, label: "OK", MSW_id: XXXX }

Navigation: Tools menu → GUI Map Configuration → select the object type → click Configure → move the distinguishing properties into the obligatory and optional lists → click OK.

Note: In general test engineers maintain MSW_id as optional for every object type, because every object has a unique MSW_id.

Situation 6 (Selective Recording): WinRunner allows you to perform recording on specified applications only.

Navigation: Settings → General Options → Record tab → click Selective Recording → select Record only on selected applications → select Record on Start menu & Windows Explorer if required → browse the required project path → click OK.

a) USER INTERFACE TESTING:

WinRunner is a functionality testing tool, but it also provides a facility to conduct user interface testing. In this testing WinRunner applies Microsoft's six rules to our application interface:

→ Controls are initcap (start with upper case)
→ OK / Cancel existence
→ System menu existence
→ Controls must be visible
→ Controls are not overlapped
→ Controls are aligned

To apply the above six rules to our application build, WinRunner uses the below TSL functions.

a) load_os_api(): We can use this function to load the application program interface (API) system calls into RAM.
Syntax: load_os_api();
Note: Without loading the API system calls into RAM, we are not able to conduct user interface testing.

b) configure_chkui(): We can use this function to customize which of Microsoft's six rules are to be applied to our application build.
Syntax: configure_chkui(TRUE / FALSE, …);

c) check_ui(): We can use this function to apply the above customized rules to a specified window.
Syntax: check_ui("Window Name");

To create a user interface test script, test engineers follow the below navigation:

Open WinRunner / build → Create menu → RTSW → click Next → show application main window → click Next → select User Interface Test → click Next → specify sub-menu symbol → click Next → select learning mode → click Learn → say Yes / No to open your application automatically during WinRunner launch → remember the paths of the startup script & GUI Map file → remember the path of the user interface test script → click OK → click Run → analyze the results manually.

Note: Sometimes RTSW does not appear in the Create menu:
a) if you selected the Web Test option in the Add-In Manager, or
b) if you are in Per Test Mode.

b) REGRESSION TESTING: In general, test engineers follow the below approach after receiving a modified build from the developers:

Receive modified build

↓ GUI regression
↓ Bit-map regression
↓ Functionality regression (screen-level differences)

WinRunner provides a facility to automate GUI regression & bit-map regression.

GUI Regression Test: We can use this option to find object-property-level differences between the old build and the new build (a GUI checkpoint comparing the old build against the modified build).

Navigation: Open WinRunner / build → Create menu → RTSW → click Next → show application main window → click Next → select Use Existing Information → click Next → select GUI Regression Test script → click Next → remember the path of the GUI regression test script → click OK → open the modified build and close the old build → click Run → analyze the results manually.

Bit-Map Regression Test: We can use this option to find image-level differences between the old build and the modified build. This regression is optional, because not all screens consist of images.

Navigation: Open WinRunner / build → Create menu → RTSW → click Next → show application main window → click Next → select Use Existing Information → click Next → select Bit-Map Regression Test script → click Next → remember the path of the bit-map regression test script → click OK → open the modified build and close the old build → click Run → analyze the results manually.

Exceptional Handling: An exception is nothing but a runtime error. To handle test execution errors in WinRunner, we can use three types of exceptions:

a) TSL exceptions
b) Object exceptions
c) Pop-up exceptions

i. TSL Exceptions: We can use these exceptions to handle runtime errors depending on the return code of TSL statements.

Navigation: Tools → Exception Handling → select exception type as TSL → click New → enter exception name → enter TSL function name → specify return code → enter handler function name → click OK → click Paste → click OK after reading the suggestion → click Close → record the navigation required to recover the situation → make it a compiled module → write a load statement for it in the WinRunner startup script.

Example: when set_window("X", 5) returns E_NOT_FOUND, the handler reports the failure and then re-opens window X:

public function mindq(in rc, in func)
{
    printf(func & " returns " & rc);
    # ... how to open window X ...
}

ii. Object Exceptions: These exceptions are raised when a specified object property equals our expected value, e.g. a button in the build toggling from Enable to Disable indicates that the connection to the server went down; the handler re-establishes the connection.

Navigation: Tools → Exception Handling → select exception type as Object → click New → enter exception name → select the traceable object → select the property and expected value that determine the situation → enter handler function name → click OK → click Paste → click OK after reading the suggestion → click Close →

record the navigation required to recover the situation → make it a compiled module → write a load statement for it in the WinRunner startup script.

Example (object exception handler):

public function mindq(in win, in obj, in attr, in val)
{
    printf(obj & " enabled");
    # ... re-establish the connection to the server ...
}

iii. Pop-Up Exceptions: These exceptions are raised when a specified window comes into focus. We can use them to skip unwanted windows in our application build during test execution.

Navigation: Tools → Exception Handling → select exception type as Pop-Up → click New → enter exception name → show the unwanted window that rises during testing → select the handler action ("press Enter", "click Cancel", "click OK", or a user-defined function name) → click OK → click Close.

Note: By default, exceptions are in the "ON" position. To administer exceptions during test execution, test engineers use the below statements:

a. exception_off(): We can use this function to disable a specific exception only.
Syntax: exception_off("exception name");

b. exception_off_all(): We can use this function to disable all types of exceptions in your test.
Syntax: exception_off_all();

c. exception_on(): We can use this function to enable a specified exception only.
Syntax: exception_on("exception name");
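The return-code-driven exception mechanism above (register a handler, switch it on/off) can be mimicked for illustration. This is a Python sketch of the idea, not WinRunner code; the names `register`, `check`, and the return code value are invented for the sketch:

```python
# Sketch of TSL-style exception handling: a handler fires when a
# named statement returns a registered error code (illustration only).
E_NOT_FOUND = -10061          # hypothetical return code

handlers = {}                 # (function name, return code) -> (exc name, handler)
enabled = set()               # exceptions currently in the "ON" position

def register(name, func, rc, handler):
    handlers[(func, rc)] = (name, handler)
    enabled.add(name)         # by default exceptions are ON

def exception_off(name): enabled.discard(name)
def exception_on(name):  enabled.add(name)

def check(func, rc):
    """Consult the registry after a statement returns code rc."""
    entry = handlers.get((func, rc))
    if entry and entry[0] in enabled:
        return entry[1](func, rc)
    return rc

def mindq(func, rc):          # handler: report, then "recover"
    print(func, "returns", rc)
    return 0

register("win_lost", "set_window", E_NOT_FOUND, mindq)
print(check("set_window", E_NOT_FOUND))   # handled, recovers to 0
exception_off("win_lost")
print(check("set_window", E_NOT_FOUND))   # disabled, error propagates
```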

WEB TESTING

WinRunner also allows you to automate functionality testing on web interfaces (HTML). Note: WinRunner does not support XML objects.

Client / Server vs Web:

Two-tier application (client/server): a fat client (front end: VB, VC++, C, C++, D2K, PB, …) connects through a DSN to the back end (Oracle, Sybase, SQL Server, MS Access, MySQL, Informix). The client handles monitoring and manipulation; the server handles the data store.

Three-tier application (web): a thin client (HTML, DHTML, XML, JavaScript, VBScript, …) connects over TCP/IP to a web server (ASP, JSP, Java servlets, …), which connects through a DSN to the database server. The client handles monitoring, the middle tier manipulation, and the database server data storage.

In this test automation, test engineers apply the below coverages on web interfaces:

1. Behavioral coverage
2. Input domain coverage
3. Error-handling coverage (client & server validation)
4. Calculations coverage
5. Back-end coverage
6. URL (Uniform Resource Locator) coverage
7. Static text testing

Among the above coverages, URL testing and static text testing are new coverages for web application functionality testing.

I. URL's Testing:

URL testing is an extra coverage in web application testing. During this test, test engineers validate link execution and link existence. Link execution means checking whether the link brings up the right page when you click it; link existence means checking whether the corresponding link is in the right place.

Before this web functionality testing, developers create two types of offline environments:

Browser → Local host (http://localhost/vdir/page.htm)
Browser → TCP/IP → Local server (http://localserver/vdir/page.htm) → DSN → Database server

To automate this testing using WinRunner, test engineers collect information like the below from the development team, for every link its name and its offline URL. This document is also known as the site map document.

To create the checkpoints, select the Web Test option in the Add-In Manager during WinRunner launch. We can then use the GUI checkpoint concept to automate URL testing. In this automation, test engineers create checkpoints on text links, image links, cells, tables and frames.

a. Text Link: It is a non-standard object and it consists of a set of non-standard properties such as:

1. Broken link (valid / not valid)
2. Background colour (hexadecimal number of the colour)
3. Colour (hexadecimal number of the expected colour)
4. Font (style of text)
5. Text (expected link text)
6. URL (expected path of the next page)

Syntax: obj_check_gui("check list", "checklist file.ckl", "expected value file.txt", time to create);

b. Image Link: It is also a non-standard object and it consists of a set of non-standard properties such as:

1. Broken link (valid / not valid)
2. Image content (.bmp of the image)
3. Source (path of the image)
4. Type (plain image, dynamic image, image button, banner)
5. URL (path of the next page)

Syntax: obj_check_gui("image file name", "checklist file.ckl", "expected value file.txt", time to create);
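A GUI checkpoint on a link essentially compares captured properties against the expected-value file. A rough Python illustration of that comparison (the property names and data here are invented for the sketch; WinRunner keeps them in .ckl / expected-value files):

```python
# Expected properties of a text link, as a checkpoint would store them.
expected = {
    "broken_link": "valid",
    "text": "Contact Us",
    "url": "http://localhost/vdir/contact.htm",
}

# Properties captured from the page under test.
actual = {
    "broken_link": "valid",
    "text": "Contact Us",
    "url": "http://localhost/vdir/home.htm",   # link leads to the wrong page
}

# The checkpoint fails on exactly the properties that differ.
mismatches = {k: (expected[k], actual.get(k))
              for k in expected if actual.get(k) != expected[k]}
print(mismatches)
```

Here the link exists and is not broken, but link execution fails because the URL property differs.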

c. Cell: A cell indicates an area of a web page. It contains a set of text links & image links. To cover all these links through a single checkpoint, we can use cell properties:

1. Background colour (hexadecimal number of the colour)
2. Broken link (valid / not valid)
3. Cell content (image file paths and static text in that cell area)
4. Formats (hierarchy of internal links)
5. Images (image file name, type, width, height)
6. Links (link names, expected offline URLs)

Syntax: win_check_gui("cell logical name", "checklist file.ckl", "expected value file.txt", time to create);

To get cell properties, test engineers select an object first and then change their selection from the object to its parent cell.

d. Table: It is also a non-standard object and it consists of a set of non-standard properties (columns, rows & table content). These properties are not suitable for URL testing, so test engineers use them for cell coverage during testing, and use the non-standard link properties only for URL testing.

e. Frame: It is also a non-standard object and it consists of a set of standard and non-standard properties:

1. Broken links (link names, valid / not valid)
2. Count objects (number of standard & non-standard objects in that frame)
3. Format (hierarchy of internal links)
4. Frame content (static text in the web page)
5. Images (image file name, type, width, height)
6. Links (link names, expected offline URLs)

Syntax: win_check_gui("frame logical name", "checklist file.ckl", "expected value file.txt", time to create);

Note: In general test engineers conduct URL testing at frame level. If a frame consists of a huge number of links, they conduct it at cell level.

II. Static Text Testing: To conduct calculations & other text-based tests, we can use the Get Text option in the Create menu. This option consists of 4 sub-options when you select the Web Test option in the Add-In Manager.

a. From Object / Window: To capture a web object's value into a variable, we can use this option.

Syntax: web_obj_get_text("web object name", "#row no", "#column no", variable, "text before", "text after", time to create);

Example (Rediff mail box): the inbox table lists mails with a Size column (10kb, 2kb, …) and a total (xxx kb) below it. Expected: Total = sum of all received mail sizes.

sum = 0;
set_window("rediff", 5);
tbl_get_row_count("mail box", n);
for (i = 1; i < n; i++)
{
    tbl_get_cell_data("mail box", "#" & i, "#3", s);
    s = substr(s, 1, length(s) - 2);   # drop the trailing "kb"
    sum = sum + s;
}
web_obj_get_text("total obj", "#0", "#0", tot, "text before", "text after", 2);
if (tot == sum)
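The TSL loop above strips the trailing "kb" from each size cell and totals the column. The same logic in Python (the table data here is invented for the sketch):

```python
# Size column of the mailbox table, as tbl_get_cell_data would return it.
sizes = ["10kb", "2kb", "5kb"]

total = 0
for s in sizes:
    # equivalent of TSL substr(s, 1, length(s) - 2): drop the "kb" suffix
    total += int(s[:-2])

displayed_total = "17kb"    # the value read with web_obj_get_text
assert total == int(displayed_total[:-2])   # the calculation check passes
print(total)
```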

    tl_step("s1", 0, "calculation is pass");
else
    tl_step("s1", 1, "calculation is fail");

b. From Screen Area: This option does not support web pages.

c. From Selection: To capture static text from web pages, we can use this option.

Syntax: web_frame_get_text("frame logical name", variable, "text before", "text after", time to create);

Navigation: Create menu → Get Text → From Selection → select the required text → right-click to release → select text before & text after → click OK.

Example (Shopping): a page lists prices, e.g. "American $ xxxx as…", "Australian $ xxxx as…", "Indian Rs xxx as…".

Expected: Indian Rs = American $ value × 45 + Australian $ value × 35

web_frame_get_text("shopping", x, "American $", "as", 1);
web_frame_get_text("shopping", y, "Australian $", "as", 1);
web_frame_get_text("shopping", z, "Indian Rs", "as", 1);
if (z == x * 45 + y * 35)
    tl_step("s1", 0, "Test is pass");
else
    tl_step("s1", 1, "Test is fail");

d. Web Text Check Point: To verify the existence of text in a web page at a specified position (through text before and text after), we can use this option.
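The shopping-page check above captures three values and verifies the conversion formula. The same arithmetic in Python (the captured values are invented for the sketch):

```python
# Values captured from the page, as web_frame_get_text would return them.
american = 10                      # American $ price
australian = 4                     # Australian $ price
indian = 10 * 45 + 4 * 35          # what the page displays (correct here)

# Expected: Indian Rs = American $ value x 45 + Australian $ value x 35
if indian == american * 45 + australian * 35:
    result = "calculation is pass"
else:
    result = "calculation is fail"
print(result)
```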

Example: obj_get_text("edit", x);

Web Functions:

1. web_browser_invoke(): WinRunner uses this function to open a web application through the test script.
Syntax: web_browser_invoke(IE / NETSCAPE, "URL");

2. web_link_click(): WinRunner uses this function to record a text link operation.
Syntax: web_link_click("link text");

3. web_image_click(): WinRunner uses this function to record an image link operation.
Syntax: web_image_click("image file name", x, y);

WinRunner 6.0 vs WinRunner 7.0: WinRunner 7.0 provides the below facilities as extra:

→ Auto learning
→ Per test mode
→ Selective recording
→ Run-time record check
→ Web testing concepts
→ GUI Spy (to identify whether an object is recognizable or not)

Note: To stop spying we can use Left Ctrl + F3.

TESTING MANAGEMENT

Testing Documents:

Company level:
  QC → Test Policy
  Quality Analyst / Project Manager → Test Strategy
Project level:
  Test Lead → Test Methodology → Test Plan
  Test Engineer → Test Cases → Test Procedures → Test Scripts → Defect Reports
  Test Lead → Final Test Summary Report

I. TEST POLICY: This document is developed by Quality Control people (almost always management). In this document QC defines the "testing objectives":

  Testing definition   : Verification & validation of software
  Testing process      : Proper planning before testing starts
  Testing standard     : 1 defect per 250 lines of code / 1 defect per 10 function points
  Testing measurements : Quality Assessment Measurement, Testing Management Measurement, Process Capability Measurement

  XXXXXXX (C.E.O)

II. TEST STRATEGY:

It is a company-level document developed by Quality Analyst / Project Manager category people. This document defines the testing approach.

Components:

1. Scope & Objective: Definition & purpose of testing in the organization.
2. Business Issues: Budget control for testing (e.g. of 100% of project cost: 64% development & maintenance, 36% testing).
3. Test Approach: Mapping between development stages and testing issues. This matrix is known as the "Test Responsibility Matrix" (TRM). Example rows: "ease of use" and "authorization" marked as applicable or not per stage (information gathering & analysis, design, coding, system testing), with maintenance-stage applicability depending on change requests.
4. Test Deliverables: Required testing documents to be prepared.
5. Roles & Responsibilities: Names of jobs in the testing team and their responsibilities.
6. Communication & Status Reporting: Required negotiations between two consecutive jobs in the testing team.
7. Automation & Testing Tools: Purpose of automation and possibilities to go for test automation.
8. Defect Reporting & Tracking: Required negotiations between the testing team and the development team during test execution.
9. Testing Measurements & Metrics: QAM, TMM, PCM.
10. Risks & Mitigations: What possible problems will come up in testing and the solutions to overcome them.
11. Change & Configuration Management: How to handle change requests during testing.
12. Training Plan: Required training sessions for the testing team before the testing process starts.

Testing Issues: To define quality software, organizations use at most 15 testing issues:

  Quality (QC) → Test Factor (QA/PM) → Testing Technique (TL) → Test Cases (TE)

From the above model, a quality software testing process is formed with the below 15 testing issues (test factors):

1. Authorization: Whether a user is valid or not to connect to the application.
2. Access Control: Whether a valid user has permission to use a specific service or not.
3. Audit Trail: Maintains metadata about user operations in our application.
4. Continuity of Processing: Inter-process communication (module to module).
5. Corrections: Meets customer requirements in terms of functionality.
6. Coupling: Co-existence with other existing software to share resources.
7. Ease of Use: User-friendliness of the screens.
8. Ease of Operate: Installation, un-installation, dumping, downloading, uploading etc.
9. File Integrity: Creation of backups.
10. Reliability: Recovery from abnormal states.
11. Portable: Runs on different platforms.
12. Performance: Speed of processing.
13. Service Levels: Order of functionalities.
14. Methodology: Whether our testers follow standards or not during testing.
15. Maintainable: Whether our application build is long-term serviceable for customer-site people or not.

Test Factors vs Black-Box Testing Techniques:

1. Authorization → Security testing → Functionality testing
2. Access Control → Security testing → Functionality testing
3. Audit Trail → Error-handling testing → Functionality testing
4. Continuity of Processing → Execution testing (white box)
5. Corrections → Requirements testing → Functionality testing
6. Coupling → Inter-systems testing
7. Ease of Use → User interface testing → Manuals support testing

8. Ease of Operate → Installation testing → Operation testing
9. File Integrity → Functionality testing → Recovery testing
10. Reliability → Recovery testing (1 user) → Stress testing (peak hours)
11. Portable → Compatibility testing → Configuration testing
12. Performance → Load testing → Stress testing → Storage testing → Data volume testing
13. Service Levels → Functionality testing
14. Methodology → Compliance testing
15. Maintainable → Compliance testing

III. TEST METHODOLOGY: It is a project-level document. The methodology provides the required testing approach to be followed for the current project. At this level the QA / PM selects possible approaches for the corresponding project's testing through the below procedure:

Step 1: Acquire the test strategy.
Step 2: Determine the project type.

  Project type | Info gathering & analysis | Design | Coding | System testing | Maintenance
  Traditional  | yes | yes | yes | yes | yes
  Off-the-shelf| no  | no  | no  | yes | yes
  Maintenance  | no  | no  | yes | yes | yes

Note: Depending on the project type, QA/PM decreases the number of columns in the TRM.

Step 3: Determine the project requirements. Note: Depending on the project requirements, QA/PM decreases the number of rows in the TRM.
Step 4: Identify the scope of the application. Note: Depending on expected future enhancements, QA/PM adds back some of the previously deleted rows and columns.
Step 5: Identify tactical risks. Note: Depending on the analyzed risks, QA/PM decreases the number of selected issues (rows) in the TRM.
Step 6: Finalize the TRM for the current project.
Step 7: Prepare the system test plan.
Step 8: Prepare module test plans if required.

Testing Process:

Test Initiation → Test Planning → Test Design → (receive build) → Test Execution including Regression → Defect Reporting → Test Closure

PET Process (Process Expert Tools and Techniques): It is a refined form of the V-model. It defines the mapping between development stages and testing stages. This model was developed in HCL and recognized by the QA Forum of India. Following this model, organizations maintain a separate team for functionality and system testing; the remaining stages of testing are done by development people.

Information Gathering (BRS)
↓ Analysis (S/W RS)
↓ Design
↓ Coding → Unit & Integration Testing
(in parallel) Test Initiation ↓ Test Planning & Training ↓ Test Design ↓ Test case selection closure
Initial build ↓ Sanity / Smoke / TAT / BVT (Level 0) ↓ Test Automation ↓

↓ Create test scripts / test batches / test suites (independent batches)
↓ Select a batch and start execution (Level 1)
↓ If a test engineer gets a mismatch → suspend that batch → defect reporting to developers → defect fixing → modified build → regression (Level 2) → resolve and resume; otherwise take the next batch
↓ Test Closure
↓ Final Regression / Release Testing / Pre-Acceptance / Post-Mortem (Level 3)
↓ User Acceptance Testing
↓ Sign-Off

IV. TEST PLANNING: After finalization of the possible tests to be applied for the corresponding project, test-lead category people concentrate on test plan document preparation to define the work allocation in terms of "what to test?", "who to test?", "when to test?" and "how to test?". To prepare the test plan documents, the test plan author follows the below approach:

Development documents + finalized TRM → Team formation → Identify tactical risks → Prepare test plan → Review test plan → System test plan

1. Team Formation: In general, test planning starts with testing team formation. To define a testing team, the test plan author depends on the below factors:

i. Availability of testers
ii. Test duration
iii. Availability of test environment resources

Case Study (test duration):
- Client/Server or Web or ERP: 3 to 5 months of functional & system testing
- System software: 7 to 9 months of functional & system testing
- Machine-critical software (robots, satellites etc.): 12 to 15 months of functional & system testing
- Team size: 3:1 (developers : testers)

2. Identify Tactical Risks: After completion of testing team formation, the test plan author analyses possible risks and mitigations.

Example:
Risk 1: Lack of knowledge of the test engineers on that domain.
Risk 2: Lack of resources.
Risk 3: Lack of budget (time).
Risk 4: Lack of test data (sometimes test engineers conduct ad-hoc testing depending on past experience).
Risk 5: Lack of development process rigor (seriousness).
Risk 6: Delays in delivery.
Risk 7: Lack of communication (between the testing team & test lead / developers / within the testing team).

3. Prepare Test Plan: After completion of testing team formation and risk analysis, the test plan author concentrates on test plan documentation in the "IEEE" format:

Format:
1. Test Plan ID: Unique number / name.
2. Introduction: About the project.
3. Test Items: Modules / functions / services / features.
4. Features to be Tested: Modules responsible for test design.
5. Features not to be Tested: Which ones and why not.
Note: 3-5 — what to test?
6. Approach: List of selected techniques to be applied on the above specified modules (from the finalized TRM).
7. Feature Pass/Fail Criteria: When a feature is pass and when a feature is fail.
8. Suspension Criteria: Possible abnormal situations that arise during the above features' testing.
9. Test Environment: Required hardware & software to conduct testing on the above features.
10. Test Deliverables: Required testing documents to be prepared during testing.
11. Test Tasks: Necessary tasks to do before starting every feature's testing.
Note: 6-11 — how to test?
12. Staff & Training Needs: Names of the selected test engineers and their training requirements.
13. Responsibilities: Work allocation to the above selected staff members.
Note: 12 & 13 — who to test?
14. Schedule: Dates & times.
Note: 14 — when to test?
15. Risks & Mitigations: Possible testing-level risks and the solutions to overcome them.
16. Approvals: Signatures of the test plan author and PM / QA.

4. Review Test Plan: After completion of plan document preparation, the test plan author conducts a review for completeness and correctness. In this review the plan author follows "coverage analysis":

⇒ BR-based coverage (what to test? review)
⇒ Risk-based coverage (when and who to test? review)
⇒ TRM-based coverage (how to test? review)

Case Study:

Deliverable | Responsibility | Completion time
Test case selection | Test engineer | 30 to 40 days
Test case review | Test lead / engineer | 4 to 5 days
Requirements traceability matrix | Test lead | 1 to 2 days
Test automation (including sanity testing) | Test engineer | 10 to 20 days
Test execution including regression testing | Test engineer | 40 to 60 days
Defect reporting | Test engineer / everyone | Ongoing
Communication & status reporting | Test lead | Weekly twice
Test closure & final regression | Test lead / test engineer | 4 to 5 days
User acceptance testing | Customer-site people / involvement of testing team | 4 to 5 days
Sign-off | Test lead | 1 to 2 days

V. TEST DESIGN: After completion of test planning and the required training for the testing team, the corresponding testing team members prepare lists of test cases for their responsible modules. There are three types of test case design methods to cover core-level testing (usability & functionality testing):

1. Business-logic-based test case design
2. Input-domain-based test case design
3. User-interface-based test case design

1. Business-logic-based test case design: In general, test engineers write a set of test cases depending on the use cases in the S/W RS. Every use case describes functionality in terms of input, process and output. Depending on these use cases, test engineers write test cases to validate that functionality.

BRS
 ↓ Use Cases / Functional Specs → Test Cases
 ↓ HLD
 ↓ LLD's
 ↓ Coding (.EXE)

From the above model, test engineers prepare test cases depending on the corresponding use cases, and every test case defines a test condition to be applied. To prepare test cases, test engineers study the use cases using the below approach:

Step 1: Collect the use cases of your responsible modules.
Step 2: Select a use case and its dependencies (determinant and dependent use cases) from that list.

Step 2.1: Identify the entry condition (base state)
Step 2.2: Identify the input required (test data)
Step 2.3: Identify the exit condition (end state)
Step 2.4: Identify the output and outcome (expected)

(Figure: a use case shown between its determinant and dependent use cases. Example screens: a Login operation — UID, PWD, OK — whose output is the inbox; a Multiply screen — Input1, Input2, OK — whose displayed result is the output, its correctness the outcome.)
Step 2.5: Identify the normal flow (navigation)
Step 2.6: Identify alternative flows and exceptions (protocols)
Step 3: Write test cases depending on the above information.
Step 4: Review the test cases for completeness and correctness.
Step 5: Go to Step 2 until completion of all use cases.

Use Case 1: A login process allows UID & PWD to validate users. During this validation, the login process allows a UID that is alphanumeric, 4 to 16 characters long, and a PWD that is lower-case alphabets, 4 to 8 characters long.

Test Case 1: Successful entry of UID.

BVA (size):
  Min = 4 → pass        Max = 16 → pass
  Min-1 = 3 → fail      Min+1 = 5 → pass
  Max-1 = 15 → pass     Max+1 = 17 → fail

ECP (type):
  Valid: a-z, A-Z, 0-9
  Invalid: special characters, blank

Test Case 2: Successful entry of PWD.

BVA (size):
  Min = 4 → pass        Max = 8 → pass
  Min-1 = 3 → fail      Min+1 = 5 → pass
  Max-1 = 7 → pass      Max+1 = 9 → fail

ECP (type):
  Valid: a-z
  Invalid: A-Z, 0-9, special characters, blank

Test Case 3: Successful login operation.

  UID     | PWD     | Criteria
  Valid   | Valid   | Pass
  Valid   | Invalid | Fail
  Invalid | Valid   | Fail
  Value   | Blank   | Fail
  Blank   | Value   | Fail

Use Case 2: In a shopping application a user can apply for different purchase orders. Every purchase order allows selection of an item number and entry of a quantity up to 10. The system returns one item's price and the total amount depending on the given quantity.

Test Case 1: Successful selection of item number.
Test Case 2: Successful entry of QTY.

BVA (range):
  Min = 1 → pass        Max = 10 → pass
  Min-1 = 0 → fail      Min+1 = 2 → pass
  Max-1 = 9 → pass      Max+1 = 11 → fail

ECP (type):
  Valid: 0-9
  Invalid: A-Z, a-z, special characters, blank

Test Case 3: Successful calculation, Total = Price × QTY.

Use Case 3: In an insurance application, a user can apply for different types of insurance policies. When they select insurance type B, the system asks for the age of the customer. The age should be > 18 years and < 60 years.

Test Case 1: Successful selection of type B insurance.
Test Case 2: Successful focus to age.
Test Case 3: Successful entry of age.

BVA (range):
  Min = 19 → pass       Max = 59 → pass
  Min-1 = 18 → fail     Min+1 = 20 → pass
  Max-1 = 58 → pass     Max+1 = 60 → fail

ECP (type):
  Valid: 0-9
  Invalid: A-Z, a-z, special characters, blank
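The BVA tables above always probe the same six values around the limits. That generation step can be sketched as a small Python helper (an illustration of the analysis technique, not a WinRunner feature):

```python
def bva_values(lo, hi):
    """Boundary value analysis for a valid range [lo, hi]:
    min, max, min-1, min+1, max-1, max+1 with expected pass/fail."""
    return [
        (lo,     "pass"), (hi,     "pass"),
        (lo - 1, "fail"), (lo + 1, "pass"),
        (hi - 1, "pass"), (hi + 1, "fail"),
    ]

# Age field of the type-B insurance use case: valid range is 19..59.
for value, expected in bva_values(19, 59):
    print(value, expected)
```

Running it reproduces the age table: 19 pass, 59 pass, 18 fail, 20 pass, 58 pass, 60 fail.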

Use Case 4: A door opens when a person comes in front of it, and closes after the person has come in.

Test Case 1: Successful door opening when a person comes in front of the door.
Test Case 2: Unsuccessful door opening due to absence of a person in front of the door.
Test Case 3: Successful door closing after the person gets in.
Test Case 4: Unsuccessful door closing due to the person standing in the doorway.

Use Case 5: Prepare test cases for a washing machine operation.

Test Case 1: Successful power supply.
Test Case 2: Successful door open.
Test Case 3: Successful filling of water.
Test Case 4: Successful dropping of detergent.
Test Case 5: Successful filling of clothes.
Test Case 6: Successful door closing.
Test Case 7: Unsuccessful door closing due to overflow of clothes.
Test Case 8: Successful selection of washing settings.
Test Case 9: Successful washing operation.
Test Case 10: Unsuccessful washing due to wrong settings.
Test Case 11: Unsuccessful washing due to lack of power.
Test Case 12: Unsuccessful washing due to lack of water.
Test Case 13: Unsuccessful washing due to water leakage.
Test Case 14: Unsuccessful washing due to the door opening in the middle of the process.
Test Case 15: Unsuccessful washing due to a machinery problem.
Test Case 16: Successful drying of clothes.

Use Case 6: Prepare test cases for money withdrawal from an ATM.

Test Case 1: Successful insertion of card.
Test Case 2: Unsuccessful operation due to wrong angle of card insertion.
Test Case 3: Unsuccessful operation due to invalid card.
Test Case 4: Successful entry of PIN number.
Test Case 5: Unsuccessful operation due to entry of wrong PIN number three times.
Test Case 6: Successful selection of language.
Test Case 7: Successful selection of account type.
Test Case 8: Unsuccessful operation due to invalid account type selection.
Test Case 9: Successful selection of withdrawal option.
Test Case 10: Successful entry of amount.
Test Case 11: Unsuccessful operation due to wrong denominations.
Test Case 12: Successful withdrawal (correct amount, right receipt and card comes back).
Test Case 13: Unsuccessful withdrawal due to amount > possible balance.
Test Case 14: Unsuccessful withdrawal due to amount > day limit (including multiple transactions).
Test Case 15: Unsuccessful transaction due to lack of amount in the ATM.
Test Case 16: Unsuccessful due to server failure.
Test Case 17: Unsuccessful due to clicking Cancel after inserting the card.
Test Case 18: Unsuccessful due to clicking Cancel after inserting the card & PIN.
Test Case 19: Unsuccessful due to clicking Cancel after inserting the card, PIN & language selection.
Test Case 20: Unsuccessful due to clicking Cancel after inserting the card, PIN, language & account type selection.
Test Case 21: Unsuccessful due to clicking Cancel after inserting the card, PIN, language, account type & amount selection.

Use Case 7: In an e-banking application users can connect to the bank server using their personal computers. In this login process a user can use the below fields.

Test Case 1: Successful entry of password.

BVA (size): Min = Max = 6 → pass; 5 → fail; 7 → fail
ECP (type): Valid: 0-9. Invalid: A-Z, a-z, special characters, blank.
Password  → 6-digit number
Area code → 3-digit number, allows blank
Prefix    → 3-digit number, does not begin with 0 or 1
Suffix    → 6-digit alphanumeric
Commands  → Check deposit, Money transfer, Bill pay and Mini statement

Test Case 1: Successful entry of password.

BVA (Size)                ECP (Type)
Min = Max = 6 → Pass      Valid   : 0-9
Min-1 = 5     → Fail      Invalid : A-Z, a-z, special characters, blank
Min+1 = 7     → Fail

Test Case 2: Successful entry of area code.

BVA (Size)                ECP (Type)
Min = Max = 3 → Pass      Valid   : 0-9, blank
Min-1 = 2     → Fail      Invalid : A-Z, a-z, special characters
Min+1 = 4     → Fail

Test Case 3: Successful entry of prefix.

BVA (Range)               ECP (Type)
Min = 200    → Pass       Valid   : 0-9
Max = 999    → Pass       Invalid : A-Z, a-z, special characters, blank
Min-1 = 199  → Fail
Min+1 = 201  → Pass
Max-1 = 998  → Pass
Max+1 = 1000 → Fail

Test Case 4: Successful entry of suffix.

BVA (Size)                ECP (Type)
Min = Max = 6 → Pass      Valid   : 0-9, A-Z, a-z
Min-1 = 5     → Fail      Invalid : blank, special characters
Min+1 = 7     → Fail

Test Case 5: Successful selection of commands such as check deposit, money transfer, bill pay and mini statement.
Test Case 6: Successful connection to the bank server with all valid values.
Test Case 7: Successful connection to the bank server without filling the area code.
Test Case 8: Unsuccessful operation due to not filling all fields (except area code).
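Where automation is available, the BVA and ECP tables above translate directly into small executable checks. The sketch below is illustrative (the helper names `bva_size_cases` and `ecp_type_check` are my own, not from any standard tool); it derives the pass/fail verdicts for the password and area-code fields under the stated rules.

```python
import string

def bva_size_cases(min_len, max_len):
    """Boundary Value Analysis on field size: the boundaries themselves plus
    one position on either side of each boundary, mapped to pass/fail."""
    sizes = {min_len - 1, min_len, min_len + 1, max_len - 1, max_len, max_len + 1}
    return {n: min_len <= n <= max_len for n in sizes if n >= 0}

def ecp_type_check(value, valid_chars, allow_blank=False):
    """Equivalence Class Partitioning on field type: a value is valid only
    if every character belongs to the valid class (blank is its own class)."""
    if value == "":
        return allow_blank
    return all(ch in valid_chars for ch in value)

# Password: 6-digit number (size exactly 6, digits only, blank not allowed)
assert bva_size_cases(6, 6) == {5: False, 6: True, 7: False}
assert ecp_type_check("123456", string.digits) is True
assert ecp_type_check("12a456", string.digits) is False
assert ecp_type_check("", string.digits) is False

# Area code: 3-digit number, blank allowed
assert bva_size_cases(3, 3) == {2: False, 3: True, 4: False}
assert ecp_type_check("", string.digits, allow_blank=True) is True
```

The same two helpers cover the prefix and suffix fields by swapping in the range check or the alphanumeric character class.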

Test Case Format:
During test design, test engineers write the list of test cases in IEEE format.

1. Test Case ID            : Unique number or name.
2. Test Case Name          : The name of the test condition to be tested.
3. Features to be Tested   : Module / function / feature.
4. Test Suite ID           : Batch ID in which this case is a member.
5. Priority                : Importance of the test case.
      P0 : Basic functionality
      P1 : General functionality (input domain, error handling, compatibility, inter-systems, etc.)
      P2 : Cosmetic (user interface)
6. Test Environment        : Required hardware and software to execute this test case.
7. Test Effort (person/hr) : Time to execute this test case (e.g. 20 minutes max).
8. Test Duration           : Date & time.
9. Test Setup              : Required testing tasks to do before starting this case's execution.
10. Test Procedure         : Step-by-step procedure to execute this test case.

Format:
Step No | Action | I/P Required | Expected | Actual | Result | Comments
(the first four columns are filled during test design, the last three during test execution)

11. Test Case Pass/Fail Criteria : When this case is passed and when it is failed.

Note: In general, test engineers write the list of test cases along with the step-by-step procedure only.

Example 1: Prepare a test procedure for the test case below: successful file save in Notepad.

Step No | Action                       | I/P Required     | Expected
1       | Open Notepad                 | -                | Empty editor
2       | Fill with text               | Text             | Save icon enabled
3       | Click save icon              | -                | Save window appears
4       | Enter file name & click save | Unique file name | File name appears in the title bar of the editor

Example 2: Prepare a test scenario with expected results for the test case below: "Successful mail reply" in Yahoo.

Step No | Action | I/P Required | Expected

1       | Login to site                   | Valid UID, valid PWD | Inbox appears
2       | Click Inbox                     | -                    | Mail box appears
3       | Click mail subject              | -                    | Mail message appears
4       | Click Reply                     | -                    | Compose window appears with To: received mail ID, Sub: received mail subject, CC: off, BCC: off, MSG: received message with comments
5       | Type new message and click send | Message              | Acknowledgement from the web server

2. Input Domain Based Test Case Design:
In general, test engineers write most test cases depending on the use cases / functional specifications in the S/W RS. These functional specifications provide functional descriptions with inputs, outputs and process, but they do not provide information about the size and type of the input objects. To collect this type of information, test engineers study the "data model" of the responsible modules (E-R diagrams in the LLDs). During data model study, a test engineer follows the approach below.

Step 1: Collect the data model of the responsible modules.
Step 2: Study every input attribute in terms of size, type and constraints.
Step 3: Identify the critical attributes in that list - those which participate in manipulations and retrievals.
Step 4: Identify the non-critical attributes - those which are just input/output.

Example:
Critical     : A/C No, A/C Name, Balance
Non-critical : A/C orders

Step 5: Prepare BVA and ECP for every input object.

I/P Attribute | ECP Valid | ECP Invalid | BVA (Size / Range) Min | Max

Note: In general, test engineers prepare step-by-step procedure-based test cases for functionality testing. They prepare valid/invalid table-based test cases for input-domain (object-level) testing.

Case Study: Prepare test cases with the required documentation for the scenario below.

In a bank automation software, "fixed deposit" is one functionality. A bank employee operates this functionality with the inputs below.

 Customer Name → Alphabets in lower case
 Amount        → Rs 1500 to 100000.00
 Tenure        → Up to 12 months
 Interest      → Numeric, with decimals

From the functional specification (use cases): if the tenure is > 10 months, the interest must be > 10%.

Test Case 1:
Test Case ID   : TC_FD_1
Test Case Name : Successful entry of customer name
Data Matrix:
I/P Attribute : Customer Name
  ECP valid   : a to z
  ECP invalid : A to Z, 0 to 9, special characters & blank
  BVA (size)  : Min 1 character, Max 256 characters

Test Case 2:
Test Case ID   : TC_FD_2
Test Case Name : Successful entry of amount
Data Matrix:
I/P Attribute : Amount
  ECP valid   : 0-9
  ECP invalid : A to Z, a to z, special characters & blank
  BVA (range) : Min 1500, Max 100000

Test Case 3:
Test Case ID   : TC_FD_3
Test Case Name : Successful entry of tenure
Data Matrix:
I/P Attribute : Tenure
  ECP valid   : 0-9
  ECP invalid : A to Z, a to z, special characters & blank
  BVA (range) : Min 1, Max 12
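The cross-field rule from the functional specification (tenure > 10 months requires interest > 10%) combines with the per-field constraints above into a single validity check. A minimal sketch follows; the function `fd_inputs_valid` and its parameter names are illustrative, not from any real banking system.

```python
def fd_inputs_valid(name: str, amount: float, tenure: int, interest: float) -> bool:
    """Check the fixed-deposit inputs against the case-study rules above.
    (Illustrative only - not a real system's validation routine.)"""
    if not (name and name.isalpha() and name.islower()):
        return False                      # Customer Name: lower-case alphabets only
    if not (1500 <= amount <= 100000):
        return False                      # Amount: Rs 1500 to 100000
    if not (1 <= tenure <= 12):
        return False                      # Tenure: up to 12 months
    if tenure > 10 and not interest > 10:
        return False                      # Rule: tenure > 10 months needs interest > 10%
    return True

assert fd_inputs_valid("kiran", 5000, 12, 10.5) is True
assert fd_inputs_valid("kiran", 5000, 12, 9.0) is False   # tenure > 10 but interest <= 10
assert fd_inputs_valid("Kiran", 5000, 6, 8.0) is False    # upper-case letter in name
```

Note how test cases TC_FD_5 through TC_FD_7 below each exercise a different branch of this check.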

Test Case 4:
Test Case ID   : TC_FD_4
Test Case Name : Successful entry of interest
Data Matrix:
I/P Attribute : Interest
  ECP valid   : 0-9, with decimals
  ECP invalid : A to Z, a to z, special characters & blank
  BVA (range) : Min 1, Max 100

Test Case 5:
Test Case ID   : TC_FD_5
Test Case Name : Successful fixed deposit operation
Test Procedure:
Step No | Action                       | I/P Required | Expected
1       | Login to bank software       | Valid ID     | Menu appears
2       | Select Fixed Deposit         | -            | FD form appears
3       | Fill all fields and click OK | All valid    | Acknowledgement from the bank server
                                         Any invalid  | Error message from the bank server

Test Case 6:
Test Case ID   : TC_FD_6
Test Case Name : Unsuccessful fixed deposit operation due to tenure > 10 months & interest < 10%
Test Procedure:
Step No | Action                       | I/P Required                                                     | Expected
1       | Login to bank software       | Valid ID                                                         | Menu appears
2       | Select Fixed Deposit         | -                                                                | FD form appears
3       | Fill all fields and click OK | Valid customer name, amount, and tenure > 10 with interest > 10 | Acknowledgement from the bank server
                                         Valid customer name, amount, and tenure > 10 with interest < 10 | Error message from the bank server

Test Case 7:
Test Case ID   : TC_FD_7
Test Case Name : Unsuccessful fixed deposit operation due to not filling all fields
Test Procedure:
Step No | Action                       | I/P Required                                                           | Expected
1       | Login to bank software       | Valid ID                                                               | Menu appears
2       | Select Fixed Deposit         | -                                                                      | FD form appears
3       | Fill all fields and click OK | Valid customer name, amount, tenure and interest, but some left blank | Error message from the bank server

Test cases 1-4 → input domain
Test cases 5-6 → functionality
Test case 7    → error handling

3. User Interface Based Test Case Design:
To conduct usability testing, test engineers write a list of test cases depending on the organisation's user-interface conventions, global interface rules and the interests of the customer-site people.

Examples:
Test Case 1: Spell check.
Test Case 2: Graphics check (screen-level alignment, font, style, colour, size (object width and height) and the Microsoft 6 rules).
Test Case 3: Meaningful error messages.
Test Case 4: Accuracy of data displayed.
    Ex: Amount shown with its currency symbol ($); DOB shown as --/--/-- (DD/MM/YY).
Test Case 5: Accuracy of data in the database as a result of user inputs.
    Ex: form → database table: 10.768 vs 10.77 - the stored value must equal the entered value.
Test Case 6: Accuracy of data in the database as a result of external factors.
    Ex: file attachments; greetings; one-year report.

Test Case 7: Meaningful Help menus (manual support testing).

IV. TEST EXECUTION:
After completion of test case selection & review, the testing team concentrates on the build release from development and on test execution on that build.

1. Review Test Cases:
After writing all possible test cases for the responsible modules, the testing team reviews them for completeness and correctness. In this review the testing team applies coverage analysis:

→ BR-based coverage
→ Use-case-based coverage
→ Data-model-based coverage
→ User-interface-based coverage
→ Test-responsibility-based coverage

At the end of this review, the test lead prepares the "Requirements Traceability Matrix" (also called the "Requirements Validation Matrix"):

Business Requirements      Sources (use cases, data model, etc.)   Test Cases
XXXXXXX (Login)            XXXXXXXX                                XXXXXXXX
XXXXXXXX (Mail Open)       XXXXXXXXX                               XXXXXXXXX
XXXXXXXXX (Mail Compose)   XXXXXXXXX                               XXXXXXX
XXXXXXXXX (Mail Reply)     XXXXXXXX                                XXXXXXX

From the above model, the traceability matrix defines the mapping between customer requirements and the test cases prepared to validate those requirements.

Test Execution Levels / Phases:

Development                        Testing
Stable build            →          Level 0 (Sanity / TAT / BVT)
Defect fixing           ←          Defect reporting
                                   Test automation
                                   Level 1 (Comprehensive)
Defect resolving        ←          Defect reporting
Modified build          →          Level 2 (Regression)
                                   Level 3 (Final Regression)
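The requirements traceability matrix above can be kept as a plain mapping from each business requirement to the test cases that validate it; a sketch with illustrative placeholder names:

```python
# Requirements traceability matrix as a simple dictionary.
# (Requirement and test-case identifiers here are illustrative placeholders.)
traceability = {
    "BR_LOGIN":        ["TC_LOGIN_1", "TC_LOGIN_2"],
    "BR_MAIL_OPEN":    ["TC_OPEN_1"],
    "BR_MAIL_COMPOSE": ["TC_COMPOSE_1", "TC_COMPOSE_2"],
    "BR_MAIL_REPLY":   ["TC_REPLY_1"],
}

# Coverage analysis: every requirement must map to at least one test case.
uncovered = [req for req, cases in traceability.items() if not cases]
assert uncovered == []
```

A requirement with an empty test-case list would be flagged during the review described above.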

Test Execution Levels Vs Test Cases: Level 0 → P0 test cases Level 1 → All P0. if testers decided stability. Build Version Control: In general test engineers are receiving build from development in below modes. . developers are using version control tools also. Soft Base means that collections of software’s. 3. to estimate satiability for complete testing. Level 0: ( Sanity / TAT / BVT) After receiving initial build test engineers concentrate on Basic functionality of that build. In this sanity testing test engineers try to execute all P0 test cases to cover basic functionality. P1 and P2 test cases as batches Level 2 → Selected P0.r.r.(Ex: VSS(Visual Source safe) 4.↑ Defect Resolving Modified Build Level 2 (Regression) ↓ Level 3 (Final Regression) 2. For this version controlling. P1 and P2 test cases w. If functionality not working or functionality is missing testing team reject that build. they concentrate on test execution of all test cases to detect defects. To distinguish old builds & new build. development team gives unique version no in system. which is understandable to testers. Build → Server Soft Base ↓ FTP(File Transfer Protocol) Test Environment Testers From the above approach test engineers are dumping application build from server to local host through FTP.t modifications Level 3 → Selected P0. During test execution test engineers are receiving modified builds from soft base. P1 and P2 test cases w.t critical areas in the master build.

During this sanity testing, once testers have decided on stability, test engineers observe the factors below on the build:

→ Understandable
→ Operatable
→ Consistent
→ Controllable
→ Simple
→ Maintainable
→ Automatable

From the above 8 testable issues (stability plus these seven), sanity testing is also known as Testability Testing or Octangle Testing.

5. Test Automation:
If test automation is possible, the testing team then concentrates on test script creation using the corresponding testing tool. Every test script consists of navigational statements along with checkpoints.

Stable build → Test automation (selective automation: all P0 and carefully selected P1 test cases)

6. Level 1 (Comprehensive Testing):
After completion of sanity testing and possible test automation, the testing team concentrates on forming test batches of dependent test cases. A test batch is also known as a test suite or test set. The team then executes all test cases to detect defects, and test engineers report mismatches as defects to developers. During comprehensive test execution, each test engineer prepares a test log document, which consists of three types of entries:

→ Passed  - all expected = actual
→ Failed  - any one expected != actual
→ Blocked - corresponding parent functionality failed

During the comprehensive test cycles a case moves through the states: In queue → In progress → Skip / Passed / Failed / Partial pass-fail / Blocked → Closed.

7. Level 2 (Regression Testing):
During comprehensive test execution, test engineers report mismatches as defects to developers. After receiving a modified build from them, test engineers concentrate on regression testing, to confirm the bug-fixing work and to check for side effects.
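The three test-log verdicts above reduce to a small classifier; the sketch below is a simplification (real test logs record per-step expected/actual detail, and the function name is my own):

```python
def log_entry(expected: list, actual: list, parent_failed: bool = False) -> str:
    """Classify one test-log entry by the three rules above."""
    if parent_failed:
        return "Blocked"    # corresponding parent functionality failed
    if expected == actual:
        return "Passed"     # all expected = actual
    return "Failed"         # any one expected != actual

assert log_entry(["Menu appears"], ["Menu appears"]) == "Passed"
assert log_entry(["Menu appears"], ["Error"]) == "Failed"
assert log_entry(["Menu appears"], [], parent_failed=True) == "Blocked"
```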

Resolved Bug Severity (regression selection on the modified build):

High   : All P0, all P1, carefully selected P2
Medium : All P0, carefully selected P1, carefully selected P2
Low    : Some P0, some P1, some P2

Case 1: If the development team resolved a bug whose impact (severity) is high, test engineers re-execute all P0, all P1 and carefully selected P2 test cases on that modified build.
Case 2: If the development team resolved a bug whose impact (severity) is medium, test engineers re-execute all P0, carefully selected P1 and carefully selected P2 test cases on that modified build.
Case 3: If the development team resolved a bug whose impact (severity) is low, test engineers re-execute some P0, some P1 and some P2 test cases on that modified build.
Case 4: If the development team released a modified build due to sudden changes in the project requirements, test engineers re-execute selected P0, P1 and P2 test cases w.r.t. those requirement modifications.

VII. TEST REPORTING:
During comprehensive testing, test engineers report mismatches as defects to developers through the IEEE format.

1. Defect ID      : Unique number / name.
2. Description    : Summary of the defect.
3. Feature        : Module / function / service in which the test engineer found this defect.
4. Test Case Name : The corresponding failed test condition.
5. Reproducible   : Yes / No (Yes - the defect appears every time; No - the defect appears rarely).
      If Yes → attach the test procedure.
      If No  → attach a snapshot and strong reasons.
6. Status         : New / Reopen (New - the defect appears for the first time; Reopen - reappearance of a defect once closed).
7. Severity       : Seriousness of the defect w.r.t. functionality.
      High   → Without resolving this defect the test engineer is not able to continue testing (show stopper).
      Medium → Able to continue testing, but mandatory to resolve.

      Low    → May or may not be resolved.
10. Priority         : Importance of the defect w.r.t. the customer (high, medium, low).
11. Build Version ID : Version of the build in which the test engineer found this defect.
12. Reported By      : Name of the test engineer.
13. Reported On      : Date of submission.
14. Assigned To      : Name of the responsible person on the development side (PM).
15. Suggested Fix    : The tester tries to offer suggestions to solve this defect (optional).
_ _ _ _ _ _ _ _ _ _ filled by developers _ _ _ _ _ _ _ _ _ _
16. Fixed By         : PM / team lead.
17. Resolved By      : Programmer's name.
18. Resolved On      : Date of resolving.
19. Resolution Type  : Reply sent by the developer (see Defect Resolution Type).
20. Approved By      : Sign of the PM.

Defect Age: The time gap between "Reported On" and "Resolved On".

Defect Submission Process:

Large-scale organisations:
  Test Engineer → Test Lead → Test Manager (QA) -- transmittal reports --> Project Manager → Team Lead → Developer
  (If a high-severity defect is rejected, it is escalated back up through the same chain.)

Medium & small-scale organisations:
  Test Engineer → Test Lead -- transmittal reports --> Project Manager → Team Lead → Developer

Defect Status Cycle:
New
 ↓

Open / Rejected / Deferred (defect accepted, but not to be resolved in this version)
 ↓
Closed
 ↓
Reopen

Defect Life Cycle / Bug Life Cycle:
Detect defect → Reproduce defect → Report defect → Fix defect → Resolve defect → Close defect

Defect Resolution Type:
After receiving defect reports from testers, developers review each defect and send a resolution type to the testers as a reply.

1. Fixed: The developer accepted it as to be resolved.
2. Duplicate: Rejected, because this defect is the same as a previously reported defect.
3. Enhancement: Rejected, because this defect relates to a future requirement of the customer.
4. Function as Designed: Rejected, because the coding is correct w.r.t. the design documents.
5. Not Reproducible: Neither accepted nor rejected; the developer requires the correct procedure to reproduce the defect.
6. Need More Information: Neither accepted nor rejected; the developer requires extra information to understand the defect.
7. Not Applicable: Rejected, because the defect has no proper meaning.
8. No Plan to Fix It: Neither accepted nor rejected; the developers want extra time to fix it.
9. Fixed Indirectly: Accepted, but not to be resolved in this version (deferred).
10. Software Limitation: Rejected, because the defect arises from limitations of the software technologies.
11. Hardware Limitation: Rejected, because the defect arises from limitations of the hardware devices.
12. User Misunderstanding: Extra negotiation between the testing and development teams.
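Defect age, defined earlier as the time gap between "Reported On" and "Resolved On", is simple date arithmetic; a sketch (the example dates are illustrative):

```python
from datetime import date

def defect_age(reported_on: date, resolved_on: date) -> int:
    """Defect age: the gap between 'Reported On' and 'Resolved On', in days."""
    return (resolved_on - reported_on).days

# Illustrative dates, not from the text
assert defect_age(date(2024, 3, 1), date(2024, 3, 8)) == 7
```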

Types of Defects:
1. User Interface Bugs (low severity):
   Ex 1: Spelling mistake → high priority
   Ex 2: Improper alignment → low priority
2. Boundary Related Bugs (medium severity):
   Ex 1: Does not allow a valid type → high priority
   Ex 2: Allows an invalid type also → low priority
3. Error Handling Bugs (medium severity):
   Ex 1: Does not provide an error-message window → high priority
   Ex 2: Improper meaning of error messages → low priority
4. Calculation Bugs (high severity):
   Ex 1: Final output is wrong → low priority
   Ex 2: Dependent results are wrong → high priority
5. Race Condition Bugs (high severity):
   Ex 1: Deadlock → high priority
   Ex 2: Improper order of services → low priority
6. Load Condition Bugs (high severity):
   Ex 1: Does not allow multiple users to operate → high priority
   Ex 2: Does not allow the customer-expected load → low priority
7. Hardware Bugs (high severity):
   Ex 1: Does not handle a device → high priority
   Ex 2: Wrong output from a device → low priority
8. ID Control Bugs (medium severity):
   Ex: Logo missing, wrong logo, version number mistake, copyright window missing, developer names missing, tester names missing.
9. Version Control Bugs (medium severity):
   Ex: Differences between two consecutive build versions.
10. Source Bugs (medium severity):
   Ex: Mistakes in help documents.

VIII. Test Closure:

After completion of all possible test cycle executions, the test lead conducts a review to estimate the completeness & correctness of the testing. In this review the test lead, along with the test engineers, considers the factors below.

1. Coverage Analysis:
→ BR-based coverage
→ Use-case-based coverage
→ Data-model-based coverage
→ UI-based coverage
→ TRM-based coverage

2. Bug Density:
Ex: Module A → 20%, Module B → 20%, Module C → 40% ← final regression, Module D → 20%

3. Analysis of Deferred Bugs: Whether the deferred bugs are really deferrable or not.

At the end of this review, the testing team concentrates on final regression testing of the high-bug-density modules, if time is available.

Level 3 (Final Regression / Pre-Acceptance Testing):
Gather regression requirements → Effort estimation → Plan final regression → Regression → Test reporting

IX. User Acceptance Testing:
After completion of the final regression cycles, the organisation's management concentrates on user acceptance testing to collect feedback. There are two approaches to conduct this testing: α (alpha) testing and β (beta) testing.

X. Sign Off:
After completion of user acceptance testing and the resulting modifications, the test lead concentrates on creating the final test summary report. It is part of the software release note. This final test summary report consists of the documents below.

→ Test Strategy / Methodology (TRM)
→ System Test Plan
→ Requirements Traceability Matrix
→ Automated Test Scripts
→ Bugs Summary Report:

Bug Description | Feature | Found By | Severity | Status (Closed / Deferred) | Comments
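The bugs summary report above is tabular, so roll-ups such as a per-severity count fall out directly; a sketch with illustrative data (the field names follow the report columns, the bug entries are invented):

```python
from collections import Counter

# Illustrative bugs-summary rows, keyed by the report columns above.
bugs = [
    {"description": "Spelling mistake on login page", "feature": "Login",
     "found_by": "tester1", "severity": "Low", "status": "Closed"},
    {"description": "Deadlock on concurrent save", "feature": "Editor",
     "found_by": "tester2", "severity": "High", "status": "Deferred"},
]

# Roll-up: how many bugs per severity level.
by_severity = Counter(b["severity"] for b in bugs)
assert by_severity == {"Low": 1, "High": 1}
```

The same pattern gives the closed-vs-deferred split that the sign-off review needs.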

Auditing:
To audit the testing process, quality people use three types of measurements & metrics.

1. QAM (Quality Assessment Measurements):
These measurements are used by the quality analysts / PM during the testing process (once monthly).

Stability:
→ Defect arrival rate plotted against time (number of defects vs. time).
  Typically, 20% of the testing finds 80% of the defects; the remaining 80% of the testing finds 20% of the defects.

Sufficiency:
→ Requirements coverage
→ Type-trigger analysis

Defect Severity Distribution:
→ Organisation trend-limit check

2. TMM (Test Management Measurements):
These measurements are used by the test lead during the testing process (twice weekly).

Test Status:
→ Completed
→ In progress
→ Yet to execute

Delays in Delivery:
→ Defect arrival rate
→ Defect resolution rate
→ Defect age

Test Efficiency:
→ Cost to find a defect (number of defects / person-day)
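The test-efficiency metric above (cost to find a defect, expressed here as defects per person-day) is a direct ratio; a sketch with illustrative figures:

```python
def cost_to_find_a_defect(defects_found: int, person_days: float) -> float:
    """Test efficiency as given above: defects found per person-day.
    (Illustrative; some organisations invert this to effort per defect.)"""
    return defects_found / person_days

# e.g. 48 defects found by 3 testers over 4 days = 12 person-days
assert cost_to_find_a_defect(48, 12) == 4.0
```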

3. PCM (Process Capability Measurements):
These measurements are used by project management to improve the capability of the testing process, depending on customer feedback from existing software under maintenance.

Test Effectiveness:
→ Requirements coverage
→ Type-trigger analysis

Defect Escapes (missed defects):
→ Type-phase analysis

Test Efficiency:
→ Cost to find a defect (number of defects / person-day)