Published by Eshwar Chaitanya on Feb 20, 2011
Copyright: Attribution Non-Commercial





Software Testing Material

Software Testing: Testing is a process of executing a program with the intent of finding errors.

Software Engineering: Software engineering is the establishment and use of sound engineering principles in order to obtain, economically, software that is reliable and works efficiently on real machines. Software engineering draws on computer science, management science, economics, communication skills and an engineering approach.

What should be done during testing? Confirming the product as:
• a product that has been developed according to specifications
• working perfectly
• satisfying customer requirements

Why should we do testing?
• Error-free, superior product
• Quality assurance to the client
• Competitive advantage
• Cut down costs

How to test? Testing can be done in the following ways:
• Manually
• Automation (by using tools such as WinRunner, LoadRunner, TestDirector, …)
• A combination of manual and automation

Software Project: A problem solved by some people through a process is called a project. Information gathering, requirements analysis, design, coding, testing and maintenance together are called a software project.




Software Development Phases:

Information Gathering: Encompasses requirements gathering at the strategic business level.

Planning: Provides a framework that enables management to make reasonable estimates of:
• Resources
• Cost
• Schedules
• Size

Requirements Analysis: Data, functional and behavioral requirements are identified.
• Data Modeling: defines data objects, attributes and relationships.
• Functional Modeling: indicates how data are transformed in the system.
• Behavioral Modeling: depicts the impact of events.

Design: Design is the engineering representation of the product that is to be built.
• Data Design: transforms the information domain model into the data structures required to implement the software.
• Architectural Design: the relationships between the major structural elements of the software; represents the structure of data and program components required to build a computer-based system.
• Interface Design: creates an effective communication medium between a human and a computer.
• Component-Level Design: transforms structural elements of the software architecture into a procedural description of software components.

Coding: Translation of the design into source code (machine-readable form).

Testing: Testing is a process of executing a program with the intent of finding errors.
• Unit Testing: concentrates on each unit (module, component, …) of the software as implemented in source code.
• Integration Testing: putting the modules together and constructing the software architecture.
• System and Functional Testing: the product is validated together with the other system elements and tested as a whole.
• User Acceptance Testing: testing by the user to collect feedback.
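The unit-testing level described above can be illustrated with a small, hypothetical example. Both the `apply_discount` function and its test case are invented for illustration; Python's standard `unittest` module is assumed as the test tool.

```python
import unittest

def apply_discount(price, percent):
    # Hypothetical unit under test: returns the price after a
    # percentage discount, rejecting out-of-range percentages.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    # Unit testing: exercise one unit in isolation with the intent of
    # finding errors, covering both valid and invalid inputs.
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)
```

Run with `python -m unittest <file>`; integration testing would then combine such tested units and exercise them together.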

Maintenance: Change associated with error correction, adaptation and enhancement.
• Correction: changes the software to correct defects.
• Adaptation: modifies the software to accommodate changes in its external environment.
• Enhancement: extends the software beyond its original functional requirements.
• Prevention: changes the software so that it can be more easily corrected, adapted and enhanced.

Business Requirements Specification (BRS): Consists of definitions of the customer requirements. Also called the CRS/URS (Customer Requirements Specification / User Requirements Specification).


Software Requirements Specification (S/wRS): Consists of the functional requirements to develop and the system requirements (s/w & h/w) to use.

Review: A verification method to estimate the completeness and correctness of documents.

High-Level Design Document (HLDD): Consists of the overall hierarchy of the system in terms of modules.

Low-Level Design Document (LLDD): Consists of every sub-module in terms of structural logic (ERD) and backend logic (DFD).

Prototype: A sample model of an application without functionality (screens).

White Box Testing: A coding-level testing technique to verify the completeness and correctness of the programs with respect to the design. Also called glass box testing or clear box testing.

Black Box Testing: An executable-level (.exe) testing technique to validate the functionality of an application with respect to the customer requirements. During this test the engineer validates internal processing based on the external interface.

Grey Box Testing: A combination of white box and black box testing.

Build: A .exe form of an integrated module set is called a build.

Verification: Is the system being built right?
Validation: Is it the right system being built?

Software Quality Assurance (SQA): SQA concepts involve monitoring and measuring the strength of the development process. Ex: LCT (Life Cycle Testing).

Quality:
• Meets customer requirements
• Meets customer expectations (cost to use, speed of processing or performance, security)
• At the lowest possible cost
• Within time to market

Developing quality software requires LCD and LCT.
LCD (Life Cycle Development): Development proceeds in multiple stages, and every stage is verified for completeness.

V-Model: When coding-level testing is over and the modules are completely integration-tested, the result is called a build (.exe). A build is produced after integration testing.


In the V-model, UAT and the test management process involve independent testers or a separate testing team.

Fig: V-model phases — Assessment of Development Plan; Prepare Test Plan; Requirements Phase Testing (Information Gathering & Analysis); Design Phase Testing (Design and Coding); Program Phase Testing (WBT); Functional & System Testing; User Acceptance Testing; Test Environment Process (Install Build); Port Testing; Maintenance (Test Software Changes, Test Efficiency).

Port Testing: This tests the installation process.
Test Management: Testers maintain documents related to every project and refer to them for future modifications.
Change Request: A request made by the customer to modify the software.
Defect Removal Efficiency: DRE = a / (a + b), where a = total number of defects found by testers during testing and b = total number of defects found by the customer during maintenance. DRE is also called DD (Defect Deficiency).

Refinement Form of V-Model: From a cost and time point of view, the full V-model is not applicable to small and medium scale companies. These organizations maintain a refinement form of the V-model.
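The DRE formula (DRE = a/(a+b)) can be checked with a short sketch; the defect counts below are illustrative only.

```python
def defect_removal_efficiency(found_in_testing, found_by_customer):
    # DRE = a / (a + b), where a = defects found by testers during
    # testing and b = defects found by the customer in maintenance.
    total = found_in_testing + found_by_customer
    if total == 0:
        raise ValueError("no defects recorded")
    return found_in_testing / total

# Illustrative data: testers found 90 defects, the customer found 10,
# so DRE = 90 / (90 + 10) = 0.9, i.e. 90% of defects removed in-house.
```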

Fig: Refinement Form of V-Model — BRS/URS/CRS ↔ User Acceptance Testing; S/wRS ↔ Functional & System Testing; HLDD ↔ Integration Testing; LLDD ↔ Unit Testing; Code.

Development starts with information gathering. After requirements gathering, the BRS/CRS/URS is prepared by the Business Analyst. During requirements analysis all the requirements are analyzed; at the end of this phase the S/wRS is prepared by the system analyst, consisting of the functional (customer) requirements plus the system requirements (h/w + s/w). During the design phase two types of designs are done: HLDD and LLDD. During the coding phase, programs are developed by programmers. During unit testing, they conduct program-level testing with the help of WBT techniques. During integration testing, the testers and programmers (or test programmers) integrate the modules and test them with respect to the HLDD. During system and functional testing, the actual testers are involved and conduct tests based on the S/wRS. During UAT, customer-site people are also involved, and they perform tests based on the BRS.

From the above model, small and medium scale organizations also conduct life cycle testing, but they maintain a separate team only for functional and system testing.

Reviews during Analysis: After completion of information gathering and analysis, a review meeting is conducted (Tech Leads will be involved) in which the Quality Analyst decides on the following five factors,

such as:
1. Are they complete?
2. Are they correct? (Are they the right requirements?)
3. Are they achievable?
4. Are they reasonable? (with respect to cost & time)
5. Are they testable?

Reviews during Design: After completion of the analysis of customer requirements and its reviews, technical support people (Tech Leads) concentrate on the logical design of the system. In this stage they develop the HLDD and LLDD. After completion of these design documents, the Tech Leads review them for correctness and completeness. In this review they can apply the factors below:
• Is the design good? (understandable or easy to refer to)
• Are they complete? (are all the customer requirements satisfied or not)
• Are they correct? Are they the right requirements? (is the design flow correct or not)
• Are they followable? (is the design logic correct or not)
• Do they handle error handling? (the design should specify the negative flow as well as the positive flow)

Fig: Login design flow — User Login → (valid) Inbox, User Information; (invalid) Invalid User.

Unit Testing: After completion of design and the design reviews, programmers concentrate on coding. During this stage they conduct program-level testing with the help of WBT techniques. WBT is applied at the module level and is based on the code; it is also known as glass box testing or clear box testing. The senior programmers conduct this testing on the programs. There are two types of WBT techniques:
1. Execution Testing
 - Basis path coverage (correctness of every statement's execution)
 - Loop coverage (correctness of loop termination)
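The execution-testing coverages above (basis paths and loop termination) can be made concrete with a hypothetical function; the account-lockout logic below is invented purely to give several independent paths and loop counts to cover.

```python
def login_attempt_status(attempts):
    # Hypothetical module under white box test. Each decision point
    # adds an independent basis path; the loop invites 0-, 1- and
    # many-iteration test cases (loop coverage).
    if attempts < 0:
        raise ValueError("attempts cannot be negative")
    if attempts == 0:
        return "no attempts"
    locked = False
    for i in range(attempts):   # loop coverage: 0, 1 and many passes
        if i >= 3:              # lock once a fourth attempt occurs
            locked = True
    return "locked" if locked else "ok"

# Basis-path coverage needs one test per independent path:
# negative input, zero attempts, a few attempts, many attempts.
```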

 - Program technique coverage (fewer memory cycles and CPU cycles during execution)
2. Operations Testing: whether the software runs under the customer-expected environment platforms (system software such as the OS, compilers, browsers, etc.)

Integration Testing: After completion of unit testing, development people concentrate on integration testing, once the dependent modules have completed unit testing. During this test, programmers verify the integration of the modules with respect to the HLDD (which contains the hierarchy of modules). There are two approaches to conducting integration testing:
• Top-down approach
• Bottom-up approach

Stub: A called program. It sends control back to the main module instead of to a sub-module.
Driver: A calling program. It invokes a sub-module instead of the main module.

Top-down: This approach starts testing from the root (main module); stubs temporarily stand in for sub-modules that are not yet ready.
Fig: Main → Sub Module1, Stub (in place of Sub Module2).

Bottom-up: This approach starts testing from the lower-level modules; drivers are used to connect the sub-modules. (Ex: for login, create a driver that supplies a default user id and password.)
Fig: Driver (in place of Main) → Sub Module1 → Sub Module2.
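A minimal sketch of a stub and a driver; the module and function names are hypothetical, and in Python a stub can simply be passed in place of the real sub-module.

```python
def fetch_balance_stub(account_id):
    # Stub: a called program standing in for an unfinished sub-module;
    # it returns a canned value and hands control straight back.
    return 100.0

def report_balance(account_id, fetch_balance=fetch_balance_stub):
    # Main module under top-down integration test, wired to the stub
    # until the real database sub-module is ready.
    return f"Account {account_id}: {fetch_balance(account_id):.2f}"

def driver():
    # Driver: a throwaway calling program used in bottom-up testing
    # to invoke a completed lower-level module before the real main
    # module exists.
    return report_balance("A-1")
```

Swapping `fetch_balance_stub` for the real implementation later completes the integration without changing `report_balance`.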

Sandwich: This approach combines the top-down and bottom-up approaches of integration testing; the middle-level modules are tested using both drivers and stubs.
Fig: Driver → Main; Sub Module1; Stub → Sub Module2, Sub Module3.

System Testing:
• Conducted by a separate testing team
• Follows black box testing techniques
• Depends on the S/wRS
• Build-level testing to validate internal processing based on the external interface
• This phase is divided into 4 divisions

After completion of coding and the tests at that level (unit & integration), the development team releases a finally integrated set of all modules as a build. After receiving a stable build from the development team, the separate testing team concentrates on functional and system testing with the help of BBT. This testing is classified into 4 divisions:
• Usability Testing (is it easy to use or not; low priority in testing)
• Functional Testing (is the functionality correct or not; high priority in testing)
• Performance Testing (speed of processing; medium priority in testing)
• Security Testing (trying to break the security of the system; medium priority in testing)

Usability and functional testing are called core testing; the performance and security testing techniques are called advanced testing. From the testers' point of view, the functional and usability tests are the most important.

Usability Testing: Checks the user-friendliness of the application or build (WYSIWYG). Usability testing is a static test; functional testing is a dynamic test. Usability testing consists of the following subtests.

Subtests of usability testing (performed once the development team releases a build):
• User Interface Testing
• Manual Support Testing

User Interface Testing:
• Ease of use (understandable for end users to operate)
• Look & feel (pleasantness or attractiveness of the screens)
• Speed of interface (fewer events needed to complete a task)

Manual Support Testing: In general, technical writers prepare the user manuals after execution of all possible tests and the resulting modifications. Nowadays, help documentation (also called the user manual) is released along with the main application; but user manuals are actually prepared only after completion of all the other system test techniques and resolution of all the bugs.

Functional Testing: During this stage of testing, the testing team concentrates on "meet customer requirements". Most of the testing tools available in the market support this type of testing. For every project, functionality testing is the most important: if the team has too little time for full system testing, they will do functionality testing only, dropping the remaining system testing techniques such as the performance and security tests. Roughly 80% of system testing is functional testing, and 80% of functional testing is functionality/requirements testing.

Functionality or Requirements Testing: During this subtest, the test engineer validates the correctness of every functionality in the application build, i.e. whether the system as developed meets the required functionality, through the coverage areas below.

Functionality or Requirements Testing has the following coverages:
• Behavioral Coverage (object properties checking)
• Error Handling Coverage (preventing negative navigation)
• Input Domain Coverage (correctness of the size and type of every input object)
• Calculations Coverage (correctness of output values)
• Backend Coverage (data validation & data integrity of database tables)
• URLs Coverage (link execution in web pages)
• Service Levels (order of functionality or services)
• Successful Functionality (combination of all the above)
All the above coverages are mandatory.

Input Domain Testing: During this test, the test engineer validates the size and type of every input object, preparing boundary values and equivalence classes for each one.
• Boundary Value Analysis: boundary values are used for testing the size and range of an object.
• Equivalence Class Partitions: equivalence classes are used for testing the type of the object.
Ex: A login process accepts a user id and a password; the user id allows alphanumerics 4-16 characters long, and the password allows alphabetics 4-8 characters long.

Recovery Testing: Also known as reliability testing. During this test, the test engineer validates whether the application build can recover from abnormal situations or not. Ex: power failure during processing, network disconnect, server down, database disconnected, etc. (Abnormal → backup & recovery procedures → normal.) Recovery testing is an extension of error handling testing.
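The user id rule above (alphanumeric, 4-16 characters) can be turned into test data. A sketch, assuming invented helper names; only the 4-16 alphanumeric rule comes from the example.

```python
def boundary_values(lo, hi):
    # Boundary value analysis: test just below, on and just above
    # each end of the valid size range.
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def is_valid_user_id(user_id):
    # Equivalence classes for the user id: alphanumeric (valid type)
    # vs anything else, and length 4-16 (valid size) vs outside it.
    return user_id.isalnum() and 4 <= len(user_id) <= 16

# Length boundaries for the user id field: 3, 4, 5, 15, 16, 17.
```

A test engineer would feed each boundary length and one representative of each equivalence class through the login form.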

Compatibility Testing: Also known as portability testing. During this test, the test engineer validates the continuity of the application's execution on the customer-expected platforms (system software such as the OS, compilers, browsers, etc.). Two types of problems arise during this compatibility:
1. Forward compatibility: the application as developed is ready to run, but the target technology or environment (e.g. the OS) does not support running it.
2. Backward compatibility: the application is not ready to run on the (older) technology or environment.

Configuration Testing: Also known as hardware compatibility testing. During this test, the test engineer validates whether the application build supports different hardware devices or not.

Inter Systems Testing: Also known as end-to-end testing. During this test, the test engineer validates whether the application build coexists with other existing software at the customer site in order to share resources (h/w or s/w).
Ex (local e-Seva center): Water Bill Automation (WBAS), Electricity Bill Automation (EBAS), Telephone Bill Automation (TPBAS) and a newly added Income Tax Bill Automation (ITBAS) component share a local database server (the sharable resource) and connect to remote servers.

Ex: A Banking Information System and Bank Loans. In the first example (e-Seva) one system is our application and the other resource is sharable; in the second example it is the same system but different components.

Summary of levels:
• System software level: Compatibility Testing
• Hardware level: Configuration Testing
• Application software level: Inter Systems Testing

Installation Testing: Testing the application's installation process in the customer-specified environment and conditions. (The build plus the required s/w components to run the application are taken from the build server and installed on the test engineer's systems in a customer-site-like environment.) The following conditions are tested during this installation process:
1. Setup Program: does the setup start or not?
2. Easy Interface: does the installation provide an easy interface or not?
3. Occupied Disk Space: how much disk space is occupied after the installation?

Sanitation Testing: Also known as garbage testing. During this test, the test engineer finds extra features in the application build with respect to the S/wRS (most testers may not come across this type of problem).
Ex: A login screen (user id, password) that also offers an unspecified "Forgot Password" feature.

Parallel or Comparative Testing: During this test, the test engineer compares the application build with similar applications, or with old versions of the same application, to find competitiveness. This comparative testing can be done from two views:
• similar types of applications in the market;
• an upgraded version of the application versus its older versions.

Performance Testing: An advanced testing technique, and expensive to apply. During this test, the testing team concentrates on the speed of processing. It can be done in two ways: 1. manually; 2. by using a tool (e.g. LoadRunner). This performance test is classified into the subtests below:
1. Load Testing
2. Stress Testing
3. Data Volume Testing
4. Storage Testing

Load Testing: Also known as scalability testing. During this test, the test engineer executes the application under the customer-expected configuration and load to estimate performance. (Load: the number of users trying to access the system at a time.)

Stress Testing: During this test, the test engineer executes the application build under the customer-expected configuration and peak load to estimate performance.

Data Volume Testing: A tester conducts this test to find the maximum size of data allowable or maintainable by the application build.
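A toy load-test harness in Python, assuming a hypothetical `handle_request` stands in for the server under test; a real load test would drive the deployed build with a tool such as LoadRunner.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload):
    # Hypothetical unit of server work; the sleep simulates processing.
    time.sleep(0.01)
    return len(payload)

def load_test(n_users, payload="ping"):
    # Fire n_users concurrent "users" at once and measure the elapsed
    # wall-clock time to estimate performance under that load.
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        results = list(pool.map(handle_request, [payload] * n_users))
    elapsed = time.perf_counter() - start
    return results, elapsed
```

Raising `n_users` toward the customer-expected peak turns this load test into a stress test.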

Storage Testing: Execution of the application with huge amounts of resources to estimate the storage limitations to be handled by the application.
Fig: performance versus resources graph — beyond a point, adding resources degrades performance (thrashing).

Security Testing: Also an advanced testing technique, and complex to apply; it is conducted by highly skilled persons with security domain knowledge. This test is divided into three subtests:
• Authorization: verifies a user's identity to check whether he is an authorized user or not.
• Access Control: also called privileges testing; verifies the rights given to a user to perform a system task.
• Encryption / Decryption: encryption converts the actual data into a secret code that may not be understandable to others; decryption converts the secret data back into the actual data. (Client: source → encryption; server: decryption → destination, and vice versa.)

User Acceptance Testing: After execution of all possible system tests, the organization concentrates on user acceptance testing to collect feedback. To conduct user acceptance tests, two approaches are followed: the Alpha (α) test and the Beta (β) test.

Note: Software development efforts are of two types based on the product: a software application (also called a project) and a product. Software Application (Project): requirements are obtained from the client and the project is developed for only that one company; it has a specific customer, so an alpha test is done.
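The encryption/decryption round trip can be sketched with a toy XOR cipher; this is for illustration only, and real systems use vetted ciphers such as AES.

```python
def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Toy cipher: XOR each byte with the repeating key. The output is
    # a secret code that is unreadable without the key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

# XOR is symmetric, so applying the same key again decrypts.
xor_decrypt = xor_encrypt
```

A security test would verify that data in transit is never readable without the key and that decryption restores the original data exactly.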

Product: Requirements are gathered from the market and the software is developed without a specific customer; it may be used by more than one company. For this, a β-version or trial version is released to the market for the beta test.

Alpha Testing vs Beta Testing:
• For what: software applications with a specific customer / software products
• By whom: the real customer / customer-site-like people
• Where: at the development site, in a virtual environment / in a customer-site-like, real environment
• Purpose: collect feedback / collect feedback

Testing during Maintenance: After completion of UA testing, the organization concentrates on forming a Release Team (RT). This team conducts port testing at the customer site to estimate the completeness and correctness of the application's installation. During this port testing, the release team validates the factors below at the customer site:
• Compact installation (fully and correctly installed or not)
• On-screen displays
• Overall functionality
• Input device handling
• Output device handling
• Secondary storage handling
• OS error handling
• Co-existence with other software

After completion of the above tests, the release team gives training and application support at the customer site for a period. While using the application, customer-site people send Change Requests (CRs) to the company. When a CR is received, the following steps are done. Based on the type of CR there are two cases:
1. Enhancement
2. Missed Defect

Change Request handling:
• Enhancement: impact analysis → perform the change → test that software change (handled by the CCB and developers).
• Missed Defect: impact analysis → perform the change → review the old test process capability to improve it → test that software change (handled by the tester).

Change Control Board (CCB): The team that handles customer requests for enhancement changes.

Testing Stages vs Roles:
• Reviews in Analysis – Business Analyst / Functional Lead
• Reviews in Design – Technical Support / Technical Lead
• Unit Testing – Senior Programmer
• Integration Testing – Developer / Test Engineer
• Functional & System Testing – Test Engineer
• User Acceptance Testing – customer-site people, with involvement of the testing team
• Port Testing – Release Team
• Testing during Maintenance / Test Software Changes – Change Control Board

Testing Team: Following the refinement form of the V-model, small and medium scale companies maintain a separate testing team for some of the stages of LCT. Within these teams, the organisation maintains the roles below:
• Quality Control: defines the objectives of testing
• Quality Assurance: defines the approach (done by the Test Manager)

• Test Manager: schedules and plans that approach
• Test Lead: applies it, maintaining the testing team with respect to the test plan
• Test Engineer: follows it, conducting testing to find defects

Fig: hierarchy — Quality Control / Quality Assurance; Project Manager ↔ Test Manager; Project Lead ↔ Test Lead; Programmers ↔ Test Engineer / QA Engineer.

Testing Terminology:
Monkey / Chimpanzee Testing: Covering only the main activities of the application during testing (takes less time).
Exploratory Testing: Level-by-level coverage of the activities in the application (covering the main activities first and the other activities next).
Guerrilla Testing: Covering a single functionality with multiple possibilities to test (no fixed rules or regulations for testing an issue); also called a guerrilla ride.
Sanity Testing: Also known as the Tester Acceptance Test (TAT). Testers check whether the build released by the development team is stable enough for complete testing.
Flow: development team releases build → sanity test / TAT → functional & system testing.

Smoke Testing: An extra shakeup of sanity testing; if the build fails, the testing team rejects it back to the development team with reasons before testing starts.
Bebugging: The development team releases a build with known bugs to the testing team.
Bigbang Testing: A single stage of testing after completion of all modules' development; also known as informal testing.
Incremental Testing: A multi-stage testing process; also known as formal testing.
Static Testing: Conducting a test without running the application. Ex: user interface testing.
Dynamic Testing: Conducting a test by running the application. Ex: functional testing, load testing, compatibility testing.

Manual vs Automation: A tester conducting a test on the application without using any third-party testing tool is doing manual testing. A tester conducting a test with the help of a software testing tool is doing automation (typically 40%-60% of tests, chosen by impact & criticality).

Need for Automation: When tools are not available, teams do manual testing only; if the company already has testing tools, they may follow automation. To verify the need for automation, they consider the following two factors:
• Impact of the test: indicates test repetition. Ex: a multiply operation taking No1 and No2 and producing a Result, re-run over many data sets.

• Criticality: indicates a test that is complex to apply manually. Ex: load testing for 1000 users.

Retesting: Re-executing the application to conduct the same test with multiple test data.
Regression Testing: Re-executing our tests on a modified build to ensure that the bug fix works and to catch side effects; any dependent modules may also cause side effects.
Fig: a build with 11 tests — 1 test fails, 10 tests pass → development fixes the defect → modified build → re-run the failed test plus the impacted passed tests.

Selection of Automation: Before a separate testing team starts project-level testing, the corresponding project manager, test manager or quality analyst defines the need for test automation for that project based on the factors below:
• Type of external interface: GUI – automation; CUI – manual.
• Size of external interface: large – automation; small – manual.
• Expected no. of releases: several releases – automation; fewer releases – manual.
• Maturity between expected releases: more maturity – manual; less maturity – automation.
• Tester efficiency: test engineers with knowledge of automation tools – automation; without such knowledge – manual.
• Support from senior management: management accepts – automation; management rejects – manual.
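The regression picture above (the failed test plus the impacted passed tests re-run on the modified build) can be sketched as a selection rule; the test names are invented.

```python
def select_regression_tests(failed, passed, impacted):
    # Re-run every previously failed test (retest the bug fix) plus
    # the passed tests whose features the change may have impacted
    # (to catch side effects in dependent modules).
    return sorted(set(failed) | (set(passed) & set(impacted)))

# e.g. a build had 11 tests: T11 failed, T1-T10 passed, and the fix
# impacts the features covered by T2 and T5.
```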

Testing Policy: A company-level document, developed by the QC people and signed by the C.E.O., that defines the testing objectives needed to develop quality software. It addresses:
• Testing Definition: verification & validation of s/w
• Testing Process: proper test planning before starting testing
• Testing Standard: 1 defect per 250 LOC / 1 defect per 10 FP
• Testing Measurements: QAM, TMM, PCM

Test documents, from company level to project level:
• Testing Policy – company level (C.E.O.)
• Test Strategy – Test Manager / QA / PM
• Test Methodology – Test Lead
• Test Plan – Test Lead
• Test Cases / Test Procedure / Test Script – Test Lead, Test Engineer
• Test Log / Defect Report – Test Engineer
• Test Summary Report – Test Lead

QAM: Quality Assessment Measurements

TMM: Test Management Measurements
PCM: Process Capability Measurements
Note: The test policy document indicates the trend of the organization.

Test Factor: A test factor defines a testing issue. There are 15 common test factors in s/w testing.

Test Strategy:
1. Scope & Objective: the definition, need and purpose of testing in your organization.
2. Business Issues: budget control for testing.
3. Test Approach: defines the testing approach between the development stages and the testing factors. TRM (Test Responsibility Matrix, or Test Matrix): defines the mapping between test factors and development stages.
4. Test Environment Specifications: the test documents to be developed by the testing team during testing.
5. Roles and Responsibilities: defines the names of the jobs in the testing team, with the required responsibilities.
6. Communication & Status Reporting: the required negotiation between two consecutive roles in testing.
7. Testing Measurements and Metrics: to estimate work completion in terms of quality assessment and test management process capability.
8. Test Automation: the possibility of test automation with respect to the project requirements and the testing facilities/tools available (either complete or selective automation).
9. Defect Tracking System: the required negotiation between the development and testing teams to fix and resolve defects.
10. Change and Configuration Management: the strategies required to handle change requests from the customer site.
11. Risk Analysis and Mitigations: analysis of common future problems that appear during testing, with possible solutions for recovery.
12. Training Plan: the need for training before the testing team starts/conducts/applies testing.

Ex (how a factor flows down the roles; QC – quality):
• PM/QA/TM – test factor → TL – testing techniques → TE – test cases.
• PM/QA/TM – ease of use → TL – UI testing → TE – MS 6 rules.
• PM/QA/TM – portability → TL – compatibility testing → TE – run on different OSes.

Test Factors:
1. Authorization: validation of users connecting to the application – Security Testing; Functionality/Requirements Testing
2. Access Control: permission for a valid user to use a specific service – Security Testing; Functionality/Requirements Testing
3. Audit Trail: maintains metadata about operations – Error Handling Testing; Functionality/Requirements Testing
4. Correctness: meets customer requirements in terms of functionality – all black box testing techniques
5. Continuity in Processing: inter-process communication – Execution Testing; Operations Testing
6. Coupling: co-existence with other applications at the customer site – Inter Systems Testing
7. Ease of Use: user friendliness – User Interface Testing; Manual Support Testing
8. Ease of Operation: ease in operations – Installation Testing
9. File Integrity: creation of internal files or backup files – Recovery Testing; Functionality/Requirements Testing
10. Reliability: whether the system recovers from abnormal situations or not, and whether backup files are used or not – Recovery Testing; Stress Testing
11. Portability: runs on the customer-expected platforms – Compatibility Testing; Configuration Testing
12. Performance: speed of processing – Load Testing; Stress Testing; Data Volume Testing; Storage Testing
13. Service Levels: order of functionalities – Stress Testing; Functionality/Requirements Testing
14. Methodology: follows a standard methodology during testing – Compliance Testing
15. Maintainability: whether the application is serviceable to customers over the long term or not – Compliance Testing (mapping between quality and testing)

Quality Gap: A conceptual gap between the quality factors and the testing process is called a quality gap.

Test Methodology: The test strategy defines the overall approach. To convert the overall approach into the corresponding project-level approach, the quality analyst / PM defines a test methodology:
Step 1: Collect the test strategy.

Step 2: Determine the project type. Depending on the project type, the QA decreases the number of columns (development stages) in the TRM:

Project Type    | Info Gathering & Analysis | Design | Coding | System Testing | Maintenance
Traditional     | Y                         | Y      | Y      | Y              | Y
Off-the-Shelf   | X                         | X      | X      | Y              | X
Maintenance     | X                         | X      | X      | X              | Y

Step 3: Determine the application type. Depending on the application type and its requirements, the QA decreases the number of factors (rows) in the TRM.
Step 4: Identify risks. Depending on the tactical risks, the QA further decreases the number of rows in the TRM.
Step 5: Determine the scope of the application. Depending on future requirements and enhancements, the QA tries to add some of the deleted factors once again.
Step 6: Finalize the TRM for the current project.
Step 7: Prepare the test plan for work allocation.

Testing Process: Test Initiation → Test Planning → Test Design → Test Execution → Defect Reporting and Regression Testing → Test Closure.

PET (Process Experts Tools and Technology): An advanced testing process developed by HCL Technologies, Chennai, and approved by the QA Forum of India. It is a refined form of the V-Model.
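The TRM-tailoring steps (drop development-stage columns by project type, drop factor rows for the application type and its risks, then add some factors back for future scope) can be sketched as a small filter. This is only an illustrative sketch under assumed names: the factor and stage labels, and the choice of what to drop, are hypothetical, not prescribed by the material.

```python
# Sketch of TRM tailoring: the TRM maps test factors (rows) to development
# stages (columns). Tailoring removes stages by project type, removes factors
# for the application type / risks, and may re-add factors for future scope.

def tailor_trm(trm, drop_stages=(), drop_factors=(), re_add=()):
    """Drop stages (columns) and factors (rows); keep re-added factors."""
    return {factor: [s for s in stages if s not in drop_stages]
            for factor, stages in trm.items()
            if factor not in drop_factors or factor in re_add}

# Illustrative TRM with three of the fifteen factors:
trm = {
    "correctness": ["analysis", "design", "coding", "system testing"],
    "portability": ["design", "system testing"],
    "performance": ["design", "system testing"],
}

# Hypothetical off-the-shelf project: only system testing remains; "performance"
# was cut during risk analysis but re-added after reviewing future enhancements.
final = tailor_trm(trm,
                   drop_stages=("analysis", "design", "coding"),
                   drop_factors=("portability", "performance"),
                   re_add=("performance",))
print(final)  # {'correctness': ['system testing'], 'performance': ['system testing']}
```

The point of the sketch is that tailoring only ever shrinks or restores the strategy-level matrix; it never invents new factors at project level.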

Fig: Refined V-Model (PET). Development track: Information Gathering (BRS) → Analysis (S/wRS) → Design (HLDD & LLDD) → Coding → Unit Testing + Integration Testing. Testing track (PM/QA, Test Lead, Test Engineers): Test Initiation → Test Planning → study of the S/wRS and design documents → Test Design → initial build → Level-0 (Sanity / Smoke / TAT) → Test Automation → test batch creation → select a batch and start execution (Level-1, on the modified build thereafter) → Defect Report → defect fixing and bug resolving → regression (Level-2; if a mismatch is found, that batch is suspended, otherwise execution continues) → Test Closure → Final Regression / Pre-Acceptance / Release / Post-Mortem / Level-3 Testing → User Acceptance Testing → Sign Off.

Test Planning: After completion of test initiation, the test plan author concentrates on the test plan: what to test, how to test, when to test, and who will test. Inputs: the development plan, the S/wRS, and the design documents.

Fig: Development plan & S/wRS & design documents → Team Formation → Identify tactical risks → Prepare test plan → Review test plan → Test Plan + TRM.

1. Team Formation: In general, the test planning process starts with testing team formation. It depends on the factors below, which are interdependent:
• Availability of testers
• Test duration
• Availability of test environment resources

Test Duration: Common market test durations for various types of projects:
• Client/Server, Web, and ERP projects (VB, Java, SAP): small; 3-5 months
• System software projects (C, C++; e.g. network software, embedded software, compilers): medium; 7-9 months
• Machine-critical software (Prolog, LISP; e.g. robotics, satellite, air traffic control, games): big; 12-15 months

2. Identify Tactical Risks: After completion of team formation, the test plan author concentrates on risk analysis and mitigations:
1) Lack of knowledge of the domain
2) Lack of budget
3) Lack of resources (hardware or tools)
4) Lack of test data (amount)
5) Delays in deliveries (server down)
6) Lack of development process rigor
7) Lack of communication (ego problems)

3. Prepare Test Plan.

Test plan format:
1) Test Plan Id: Unique number or name
2) Introduction: About the project
3) Test Items: Modules
4) Features to be tested: Responsible modules to test
5) Features not to be tested: Which ones, and why not
6) Feature pass/fail criteria: When is each feature considered pass or fail?
7) Suspension criteria: Abnormal situations during testing of the above features
8) Test environment specifications: Required documents to prepare during testing
9) Test environment: Required hardware and software
10) Testing tasks: Necessary tasks to perform before starting testing
11) Approach: List of testing techniques to apply
12) Staff and training needs: Names of the selected testing team
13) Responsibilities: Work allocation to the selected members
14) Schedule: Dates and timings
15) Risks and mitigations: Common non-technical problems
16) Approvals: Signatures of the PM/QA and the test plan author

4. Review Test Plan: After completion of test plan writing, the test plan author concentrates on reviewing the document for completeness and correctness. In this review meeting, the selected testers are also involved to give feedback, and the testing team conducts coverage analysis:
• S/wRS based coverage (what to test)
• Risks based coverage (from the risk-analysis point of view)
• TRM based coverage (whether the plan covers all tests given in the TRM)

Test Design: After completion of the test plan and the required training days, every selected test engineer concentrates on test design for their responsible modules. In this phase the test engineer prepares a list of testcases to conduct the defined testing on the responsible modules. There are three basic methods to prepare testcases for core-level testing:
• Business logic based testcase design
• Input domain based testcase design
• User interface based testcase design

Business Logic based Testcase Design: In general, test engineers write the list of testcases based on the usecases / functional specifications in the S/wRS. A usecase in the S/wRS defines how a user can use a specific functionality of the application.

Fig: BRS → S/wRS (usecases + functional specifications) → HLDD → LLDD → Coding (.exe); testcases are derived from the usecases and functional specifications.

To prepare testcases from usecases, we can follow the approach below:
Step 1: Collect the usecases of the responsible modules.
Step 2: Select a usecase and its dependencies (dependent and determinant usecases).
  Step 2-1: Identify the entry condition.
  Step 2-2: Identify the required input.
  Step 2-3: Identify the exit condition.
  Step 2-4: Identify the output / outcome.
  Step 2-5: Study the normal flow.
  Step 2-6: Study the alternative flows and exceptions.
Step 3: Prepare a list of testcases based on the above study.
Step 4: Review the testcases for completeness and correctness.

TestCase Format: After completing testcase selection for the responsible modules, the test engineer prepares an IEEE-format record for every test condition:
• TestCase Id: Unique number or name
• TestCase Name: Name of the test condition
• Feature to be tested: Module / feature / service
• TestSuite Id: Id of the parent batch in which this case participates as a member
• Priority: Importance of the testcase — P0: basic functionality; P1: general functionality (input domain, error handling, ...); P2: cosmetic testcases (Ex: P1 – behaviour on different operating systems; P2 – look & feel)
• Test Environment: Required hardware and software to execute the testcase
• Test Effort: Person-hours needed to execute this testcase (e.g. 20 minutes)
• Test Duration: Date of execution
• Test Setup: Necessary tasks to perform before starting execution
• Test Procedure: Step-by-step procedure to execute this testcase

Test Procedure table:

Step No. | Action | I/p Required | Expected Result  (filled during test design)
Defect Id | Comments                                (filled during test execution)

TestCase Pass/Fail Criteria: The testcase passes when all expected values are equal to the actual values; it fails when any expected value deviates from the actual value.

Input Domain based TestCase Design: To prepare functionality and error-handling testcases, test engineers use the usecases or functional specifications in the S/wRS (functionality testcases source: S/wRS, LLDD, etc.; input domain testcases source: LLDD). To prepare input domain testcases, test engineers depend on the data model of the project (ERD & LLDD):
Step 1: Identify the input attributes in terms of size, type, and constraints (e.g. type – int, float; constraint – primary key).
Step 2: Identify the critical attributes in that list, i.e. those participating in data retrievals and manipulations.
Step 3: Identify the non-critical attributes, i.e. those that are input-only.
Step 4: Prepare BVA and ECP entries for every attribute.

Fig: Data Matrix — for each input attribute: ECP (type): valid / invalid values; BVA (size/range): minimum / maximum values.

User Interface based TestCase Design: To conduct UI testing, test engineers write a list of testcases based on organization-level UI rules and global UI conventions; for these UI testcases they do not study the S/wRS. These testcases are applicable to all projects:
Testcase 1: Spelling checking.
Testcase 2: Graphics checking (alignment, font, size, style, text, Microsoft 6 rules).
Testcase 3: Meaningful error messages or not. (Error-handling testing checks whether the related message appears at all; here they are testing whether the message is easy to understand.)
Testcase 4: Accuracy of the data displayed (WYSIWYG), e.g. amounts and dates of birth.
Testcase 5: Accuracy of the data in the database as a result of user input. (TC4 works at screen level, TC5 at database level; e.g. form DSN, field Bal: the user enters 66.666 but the table stores 66.7.)
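The data-matrix idea (BVA and ECP values per input attribute) can be sketched as a small generator. The attribute here — an integer field with range 1 to 99 — is a hypothetical example; the invalid-type representatives chosen for ECP are likewise an assumption for illustration.

```python
# Sketch: deriving BVA and ECP test data for one numeric input attribute.
# The attribute and its range (1..99) are hypothetical.

def bva(lo, hi):
    """Boundary value analysis: values at and just inside/outside the range."""
    return {"valid": [lo, lo + 1, hi - 1, hi], "invalid": [lo - 1, hi + 1]}

def ecp(lo, hi):
    """Equivalence class partitioning: one representative per class."""
    mid = (lo + hi) // 2
    return {"valid": [mid], "invalid": ["abc", 3.5]}  # wrong-type representatives

matrix_row = {"attribute": "age", "ecp": ecp(1, 99), "bva": bva(1, 99)}
print(matrix_row["bva"]["valid"])    # [1, 2, 98, 99]
print(matrix_row["bva"]["invalid"])  # [0, 100]
```

One row like this is filled in for every critical input attribute identified from the data model.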

Testcase 6: Accuracy of the data in the database as a result of external factors. (Ex: a mail with a .gif attachment passes through a mail server that compresses the image; on import, the image is decompressed — the stored data must still be accurate.)
Testcase 7: Meaningful help messages or not. (The first six testcases belong to UI testing; the seventh belongs to manual support testing.)

Review Testcases: After completion of testcase design, with the required IEEE documentation for the responsible modules, the testing team along with the test lead concentrates on reviewing the testcases for completeness and correctness. In this review the testing team conducts coverage analysis:
1. Business requirements based coverage
2. UseCases based coverage
3. Data model based coverage
4. User interface based coverage
5. TRM based coverage

Fig: Requirements Validation / Traceability Matrix — a grid with business requirements and their sources (usecases, data model, ...) on one axis and testcases on the other, marking which testcases cover which requirements.
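The traceability-matrix figure can be sketched minimally as a mapping from requirements to covering testcases, which makes coverage gaps mechanical to find. The requirement and testcase ids are hypothetical.

```python
# Sketch of a Requirements Traceability Matrix: each business requirement maps
# to the testcases that cover it. Ids are hypothetical.

rtm = {
    "BR-01": ["TC-01", "TC-02"],   # covered by two testcases
    "BR-02": ["TC-03"],
    "BR-03": [],                   # not yet covered: a coverage gap
}

uncovered = [req for req, tcs in rtm.items() if not tcs]
print(uncovered)  # ['BR-03']
```

During the review, every entry in the uncovered list must either gain a testcase or be explicitly justified.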

Test Execution:

Fig: Test execution flow. Development site: initial build → testing site: Level-0 (Sanity / Smoke / TAT) → stable build → test automation → Level-1 (Comprehensive) → defect report → defect fixing (typically 8-9 cycles) → bug resolving → modified build → Level-2 (Regression) → Level-3 (Final Regression).

Test Execution Levels vs Test Cases:
• Level 0: P0 testcases
• Level 1: P0, P1 and P2 testcases, executed as batches
• Level 2: selected P0, P1 and P2 testcases, with respect to the modifications
• Level 3: selected P0, P1 and P2 testcases on the final build

Test Harness = Test Environment + Test Bed.

Build Version Control: A unique numbering system for builds. The development team transfers builds to the test environment through the server (FTP or SMTP) from the softbase. After defect reporting, the testing team may receive:
• a modified build, or
• modified programs

To maintain the original and modified builds, the development team uses version control software.

Fig: The server sends (1) a modified build or (2) modified programs to the test environment; modified programs are embedded into the old build.

Level 0 (Sanity / Smoke / TAT): After receiving an initial build from the development team, the testing team installs it into the test environment. After completion of this dumping/installation, the testing team checks the basic functionality of the build to decide whether test execution can proceed completely and correctly. During this testing, the testing team observes the factors below on the initial build. This Level-0 testing is also called Testability or Octangle Testing (because it is based on 8 factors):
1. Understandable: The functionality is understandable to the test engineer.
2. Operable: The build works without runtime errors in the test environment.
3. Observable: The tester can estimate process completion and continuation in the build.
4. Controllable: Processes can be started and stopped explicitly.
5. Consistent: Stable navigations.
6. Maintainable: No need for reinstallations.
7. Simplicity: Short navigations to complete a task.
8. Automatable: The interfaces support automated test script creation.

Test Automation: After receiving a stable build from the development team, the testing team concentrates on test automation. Test automation is of two types: complete and selective (all P0 and carefully selected P1 testcases).
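The Level-0 go/no-go decision over the eight testability factors can be sketched as a checklist aggregator. The decision rule (every factor must hold for the build to be accepted) is an assumption for illustration; in practice the team may weigh factors differently.

```python
# Sketch: Level-0 (sanity) verdict over the 8 testability (octangle) factors.
# The all-factors-must-hold rule is an illustrative assumption.

OCTANGLE = ["understandable", "operable", "observable", "controllable",
            "consistent", "maintainable", "simple", "automatable"]

def sanity_verdict(observations):
    """observations: factor -> bool. Returns (verdict, failed factors)."""
    missing = [f for f in OCTANGLE if not observations.get(f, False)]
    verdict = "stable build" if not missing else "return to development"
    return verdict, missing

verdict, missing = sanity_verdict({f: True for f in OCTANGLE})
print(verdict)  # stable build
```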

Level-1 (Comprehensive Testing): After receiving a stable build from the development team and completing automation, the testing team starts executing its testcases as batches. A test batch is also known as a test suite or test set. In every batch, the base state of one testcase is the end state of the previous testcase. During test batch execution, test engineers prepare a test log with three types of entries:
1. Passed: All expected values are equal to the actual values.
2. Failed: Some expected value deviates from the actual value.
3. Blocked: A testcase this one depends on has failed, so it cannot be executed.

Testcase status flow: Skip / In Queue → In Progress → Passed / Failed / Partial Pass-Fail / Blocked → Closed.

During comprehensive test execution, the testing team reports mismatches to the development team as defects. After receiving a defect, the development team modifies the code to resolve the accepted defects. When they release the modified build, the testing team concentrates on regression testing before continuing the remaining comprehensive testing.

Severity: The seriousness of the defect, defined by the tester through its impact and criticality; it determines the importance of regression testing. Organizations commonly use three severity levels:
• High: The tester cannot continue the remaining testing without this mismatch being resolved (show stopper). Ex: the database does not connect.
• Medium: The tester is able to continue testing, but the defect must still be resolved. Ex: wrong input-domain behaviour (accepting wrong values).
• Low: May or may not be resolved. Ex: a spelling mistake.

Ex: x, y, z are three dependent modules and a bug is found in z. For a high-severity bug, regression is done on z and its colleague modules; for medium severity, on the full z module; for low severity, on part of the z module.

Level-2 Regression Testing: This regression testing is actually part of Level-1 testing.
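The test-log entries (Passed / Failed / Blocked) for a batch, where each testcase's base state is the end state of the previous one, can be sketched as a short runner. The dependency rule used here — every failure blocks all later testcases in the batch — is a simplifying assumption; real batches may have finer-grained dependencies. Ids and values are hypothetical.

```python
# Sketch: deriving test-log entries from a batch run. In a batch the base state
# of a testcase is the end state of the previous one, so a failure blocks the
# remaining testcases (simplified: all later cases are blocked).

def run_batch(cases):
    """cases: list of (id, expected, actual). Returns (id, status) log entries."""
    log, blocked = [], False
    for case_id, expected, actual in cases:
        if blocked:
            log.append((case_id, "Blocked"))
        elif expected == actual:
            log.append((case_id, "Passed"))
        else:
            log.append((case_id, "Failed"))
            blocked = True
    return log

print(run_batch([("TC1", 5, 5), ("TC2", "ok", "error"), ("TC3", 1, 1)]))
# [('TC1', 'Passed'), ('TC2', 'Failed'), ('TC3', 'Blocked')]
```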

Possible ways to do regression testing (on the modified build, to ensure the bug was resolved):

Resolved bug severity | P0       | P1       | P2
High                  | All      | All      | Selected
Medium                | All      | Selected | Some
Low                   | Some     | Some     | Some

Case 1: If the development team resolved a bug of high severity, the testing team re-executes all P0, all P1, and carefully selected P2 testcases with respect to that modification.
Case 2: If the development team resolved a bug of medium severity, the testing team re-executes all P0, selected P1 [80-90%], and some P2 testcases with respect to that modification.
Case 3: If the development team resolved a bug of low severity, the testing team re-executes some of the P0, P1, and P2 testcases with respect to that modification.
Case 4: If the development team performs modifications due to project requirement changes, the testing team re-executes all P0 and selected P1 and P2 testcases with respect to that modification.

Severity vs Priority:
• Severity: The seriousness of the defect, from the project functionality point of view.
• Priority: The importance of the defect, from the customer point of view.
• Not all defects have the same severity, and not all defects have the same priority.
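The regression-selection table can be sketched as a lookup. The concrete percentages for "selected" and "some" are assumptions for illustration only — the material specifies all/selected/some, not numbers.

```python
# Sketch of the regression-selection rule: what share of P0/P1/P2 testcases is
# re-executed for a resolved bug of a given severity. The fractions standing in
# for "selected" and "some" are illustrative assumptions.

SCOPE = {
    "high":   {"P0": 1.0,  "P1": 1.0,  "P2": 0.5},
    "medium": {"P0": 1.0,  "P1": 0.85, "P2": 0.25},
    "low":    {"P0": 0.25, "P1": 0.25, "P2": 0.25},
}

def regression_counts(severity, totals):
    """totals: priority -> number of testcases. Returns counts to re-execute."""
    share = SCOPE[severity]
    return {p: int(n * share[p]) for p, n in totals.items()}

print(regression_counts("high", {"P0": 40, "P1": 100, "P2": 20}))
# {'P0': 40, 'P1': 100, 'P2': 10}
```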

Defect Reporting and Tracking: During comprehensive test execution, test engineers report mismatches to the development team as defect reports in IEEE format:
1. Defect Id: A unique number or name.
2. Defect Description: Summary of the defect.
3. Build Version Id: Version number of the parent build.
4. Feature: Module / functionality.
5. Testcase Name and Description: Name of the failed testcase, with its description.
6. Reproducible: (Yes / No).
7. If yes: attach the test procedure.
8. If no: attach snapshots and strong reasons.
9. Severity: High / Medium / Low.
10. Priority.
11. Status: New / Reopen (after three reopens, new programs are written).
12. Reported by: Name of the test engineer.
13. Reported on: Date of submission.
14. Suggested fix: Optional.
15. Assigned to: Name of the PM.
16. Fixed by: PM or team lead.
17. Resolved by: Name of the developer.
18. Resolved on: Date of solving.
19. Resolution type.
20. Approved by: Signature of the PM.

Defect Age: The time gap between "resolved on" and "reported on".

Defect Submission (large-scale organizations): Test Engineer → Test Lead → Test Manager → QA, with transmittal reports going to the development side: Project Manager → Team Lead → Developers.
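The defect-report format and the defect-age definition above can be sketched as a small record. Only a subset of the twenty fields is modeled, and the example values are hypothetical.

```python
# Sketch: a defect-report record in the spirit of the IEEE-style format above,
# with defect age = the gap between "resolved on" and "reported on".
# Field subset and all values are hypothetical.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DefectReport:
    defect_id: str
    description: str
    severity: str                       # High / Medium / Low
    status: str                         # New / Reopen / ...
    reported_on: date
    resolved_on: Optional[date] = None

    def defect_age(self):
        """Days between resolution and submission (None while unresolved)."""
        if self.resolved_on is None:
            return None
        return (self.resolved_on - self.reported_on).days

d = DefectReport("D-101", "Login fails on empty password", "High", "New",
                 date(2011, 2, 1), date(2011, 2, 4))
print(d.defect_age())  # 3
```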

Defect Submission (small-scale organizations): Test Engineer → Test Lead → Project Manager, with transmittal reports going to the Team Lead → Developers.

Defect Status Cycle: New → Fixed (Open / Reject / Deferred) → Closed / Reopen.

Bug Life Cycle:

Detect Defect → Reproduce Defect → Report Defect → Fix Bug → Resolve Bug → Close Bug.

Resolution Type: The testing team's defect report receives a resolution type from the development team. There are 12 resolution types:
1. Duplicate: Rejected because the defect is the same as a previously reported defect.
2. Enhancement: Rejected because the defect relates to a future requirement of the customer.
3. H/w Limitation: Rejected because the issue arises from limitations of the hardware.
4. S/w Limitation: Rejected due to a limitation of the software technology.
5. Functions as Designed: Rejected because the coding is correct with respect to the design documents.
6. Not Applicable: Rejected due to lack of correctness in the defect.
7. No Plan to Fix It: Postponed for the time being (neither accepted nor rejected).
8. Need More Information: The developers want more information in order to fix it (neither accepted nor rejected).
9. Not Reproducible: The developer wants more information because the problem is not reproducible (neither accepted nor rejected).
10. User Misunderstanding: Each side argues the other is thinking wrongly; extra negotiation between tester and developer.
11. Fixed: A bug was opened and resolved (accepted).
12. Fixed Indirectly: Deferred for resolution (accepted).

Types of Bugs:

• UI bugs (low severity). Ex: spelling mistake (high priority); wrong alignment (low priority).
• Input domain bugs (medium severity). Ex: object does not take expected values (high priority); object takes unexpected values (low priority).
• Error handling bugs (medium severity). Ex: error message does not appear (high priority); error message appears but is not understandable (low priority).
• Calculation bugs (high severity). Ex: intermediate results failure (high priority); final outputs wrong (low priority).
• Service level bugs (high severity). Ex: deadlock (high priority); improper order of services (low priority).
• Load condition bugs (high severity). Ex: memory leakage under load (high priority); does not allow the customer-expected load (low priority).
• Hardware bugs (high severity). Ex: printer not connecting (high priority); invalid printout (low priority).
• Boundary-related bugs (medium severity).
• Id control bugs (medium severity). Ex: wrong version number or logo.
• Version control bugs (medium severity). Ex: differences between two consecutive versions.
• Source bugs (medium severity). Ex: mismatches in the help documents.

Test Closure: After completion of all possible testcase execution and the corresponding defect reporting and tracking, the test lead conducts a test execution closure review along with the test engineers. In this review the test lead depends on coverage analysis:
• BRS based coverage
• UseCases based coverage (modules)
• Data model based coverage (inputs and outputs)
• UI based coverage (rules and regulations)
• TRM based coverage (whether the PM-specified tests are covered or not)

Analysis of the deferred bugs: whether the deferred bugs are genuinely postponable or not.

Final Regression Testing: The testing team tries to execute the high-priority testcases once again to confirm the correctness of the master build.
Final regression process: Gather requirements → Effort estimation (person-hours) → Plan regression → Execute regression → Report regression.

User Acceptance Testing: After completion of the test execution closure review and the final regression, the organization concentrates on UAT to collect feedback from the customer / customer-site people. There are two approaches:
1. Alpha testing
2. Beta testing

Sign Off: After completion of UAT and the resulting modifications, the test lead creates a Test Summary Report (TSR). It is part of the software release note. The TSR consists of:
1. Test Strategy / Methodology (which tests were applied)
2. System Test Plan (schedule)
3. Traceability Matrix (mapping between requirements and testcases)
4. Automated Test Scripts (TSL + GUI map entries)
5. Final Bug Summary Report

Final Bug Summary Report format: Bug Id | Description | Found By | Status (Closed / Deferred) | Severity | Module / Functionality | Comments.

Case Study (schedule for 5 months):

Deliverable                                   | Responsibility                                 | Completion Time
TestCase selection                            | Test Engineer                                  | 20-30 days
TestCase review                               | Test Lead, Test Engineer                       | 4-5 days
RVM / RTM                                     | Test Lead                                      | 1 day
Sanity & test automation                      | Test Engineer                                  | 20-30 days
Test execution as batches                     | Test Engineer                                  | 40-60 days
Test reporting                                | Test Engineer                                  | Ongoing during test execution
Communication and status reporting            | Everyone in the testing team                   | Weekly, twice
Final regression testing & closure review     | Test Engineer and Test Lead                    | 4-5 days
User acceptance testing                       | Customer-site people (testing team involved)   | 5-10 days
Test summary report (sign off)                | Test Lead                                      | 1-2 days

Recommended reading:
• Testing Computer Software – Cem Kaner
• Effective Methods for Software Testing – William E. Perry
• Software Testing Tools – Dr. K.V.K.K. Prasad

Contact: Nagaraju_testing@yahoo.com

Common interview questions:
• What are you doing? What type of testing process is in place in your company?
• What type of test documentation does your organization prepare? What test documentation do you prepare, and what is your involvement in it?
• What are the key components of your company's test plan?
• What format do you use for test cases?
• How does your PM select which types of tests your project needs?
• When do you go for automation?
• What is regression testing? When do you do it?
• How do you report defects to the development team? How do you know whether a defect was accepted or rejected? What do you do when your defect is rejected?
• How do you learn a project without documentation? How do you test without documents?
• What is the difference between defect age and build interval period?
• What do you mean by green box testing?
• Experience with WinRunner; exposure to TestDirector (self-rating: WinRunner 8/10, LoadRunner 7/10).

Auditing: During testing and maintenance, the testing team conducts audit meetings to estimate status and required improvements. In this auditing process they can use three types of measurements and metrics.

Quality Measurement Metrics: Used by the QA or PM to estimate the achievement of quality in the current project's testing [monthly once].
• Product stability: number of bugs over duration — typically the first 20% of testing finds 80% of the bugs, and the remaining 80% of testing finds the last 20%.
• Sufficiency: requirements coverage; type-trigger analysis (the mapping between covered requirements and applied tests).
• Defect severity distribution: checked against the organization's trend limits.

Test Management Measurements: Used by the test lead during test execution of the current project [weekly, twice].
• Test status: executed tests, tests in progress, tests yet to execute.
• Delays in delivery: defect arrival rate, defect resolution rate, defect aging.
• Test effort: cost of finding a defect (ex: 4 defects per person-day).
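Two of the test-management measurements mentioned above — the defect arrival rate and the cost of finding a defect — reduce to simple ratios. The week of data below is hypothetical.

```python
# Sketch: computing two test-management measurements from hypothetical data.

def defect_arrival_rate(defects_reported, days):
    """Defects reported per day over the period."""
    return defects_reported / days

def cost_of_finding_a_defect(person_days, defects_found):
    """Effort spent per defect found (the inverse of defects per person-day)."""
    return person_days / defects_found

print(defect_arrival_rate(28, 7))        # 4.0 defects per day
print(cost_of_finding_a_defect(14, 56))  # 0.25 person-days per defect
```

Tracked weekly, a falling arrival rate together with a rising cost per defect is one signal that the current test cycle is approaching sufficiency.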

Process Capability Measurements: These measurements are used by the quality analyst and test management to improve the capability of the testing process for upcoming projects.
• Test efficiency: type-trigger analysis; requirements coverage. (This depends on maintenance-level feedback from old projects.)
• Defect escapes: type-phase analysis (what types of defects the testing team missed, and in which phase of testing).
• Test effort: cost of finding a defect (ex: 4 defects per person-day).

This topic looks at static testing techniques. These techniques are referred to as "static" because the software is not executed; rather, the specifications, documentation and source code that comprise the software are examined in varying degrees of detail. There are two basic types of static testing: one is people-based and the other is tool-based. People-based techniques are generally known as "reviews", but there are a variety of different ways in which reviews can be performed. The tool-based techniques examine source code and are known as "static analysis". Both of these basic types are described in separate sections below.

What are Reviews?
"Reviews" is the generic name given to people-based static techniques. More or less any activity that involves one or more people examining something could be called a review. The chances are that you have been involved in reviews of one form or another. There are a variety of different ways in which reviews are carried out across different organisations, and in many cases within a single organisation. Reviews can vary a lot from very informal to highly formal, as will be discussed in more detail shortly.

Review techniques for individuals
Desk checking and proof reading are two techniques that can be used by individuals to review a document such as a specification or a piece of source code. They are basically the same process: the reviewer double-checks the document or source code on their own. Data stepping is a slightly different process for reviewing source code: the reviewer follows a set of data values through the source code to ensure that the values are correct at each step of the processing.

Review techniques for groups
The static techniques that involve groups of people are generically referred to as reviews. One person can perform a review of his or her own work or of someone else's work. However, it is generally recognised that reviews performed by only one person are not as effective as reviews conducted by a group of people all examining the same document (or whatever it is that is being reviewed). Some reviews are very formal, some are very informal, and many lie somewhere between the two. Two examples of types of review are walkthroughs and Inspection.

A walkthrough is a form of review that is typically used to educate a group of people about a technical document. Typically the author "walks" the group through the ideas to explain them, so that the attendees understand the content. Inspection is the most formal of all the formal review techniques. Its main focus during the process is to find faults, and it is the most effective review technique in finding them (although the other types of review also find some faults). Inspection excludes discussion and solution optimising, but these activities are often very important; any type of review that tries to combine more than one objective tends not to work as well as those with a single focus. Inspection is discussed in more detail below.

What can be Inspected? Anything written down can be Inspected. Many people have the impression that Inspection applies mainly to code (probably because Fagan's original article was on "Design and code inspection"). However, although Inspection can be performed on code, it gives more value if it is performed on more "upstream" documents in the software development process, such as requirements, feasibility studies and designs. Inspection also applies to all types of system development documentation, including user manuals, procedures and training material. It is also very appropriate to apply to all types of test documentation, such as test plans, test designs and test cases. In fact, even with Fagan's original method, it was found to be very effective applied to testware.

What can be reviewed? Anything that can be Inspected can also be reviewed, but reviews can apply to more things than just those ideas that are written down. Reviews can be done on visions, strategic plans and "big picture" ideas. They can be applied to contracts, budgets, business plans, strategies, and even marketing material. A review is also the place where major decisions may be made, for example about whether or not to develop a given feature. Project progress can be reviewed to assess whether work is proceeding according to the plans.

Reviews and the test process
Benefits of reviews: There are many benefits from reviews in general. They can improve software development productivity and reduce development timescales. They can also reduce testing time and cost. They can lead to lifetime cost reductions throughout the maintenance of a system over its useful life. Reviews generally reduce fault levels and lead to increased quality, which can also result in improved customer relations. All this is achieved (where it is achieved) by finding and fixing faults in the products of development phases before they are used in subsequent phases. In other words, reviews find faults in specifications and other documents (including source code) which can then be fixed before those specifications are used in the next phase of development.

Reviews are cost-effective: There are a number of published figures to substantiate the cost-effectiveness of reviews. Gilb and Graham give a number of documented benefits for software Inspection, including a 25% reduction in schedules, a 28 times reduction in maintenance cost, and finding 80% of defects in a single pass (with a mature Inspection process) and 95% in multiple passes. Freedman and Weinberg quote a ten times reduction in faults that come into testing, with a 50% to 80% reduction in testing cost. Yourdon, in his book on Structured Walkthroughs, found that faults were reduced by a factor of ten. Reviews and Inspections are complementary.

It works better to use Inspection to find faults, and to use reviews to discuss, come to a consensus and make decisions.

What to review / Inspect? Looking at the 'V' life cycle diagram that was discussed in Session 2, reviews and Inspections apply to everything on the left-hand side of the V-model. Note that the reviews apply not only to the products of development but also to the test documentation that is produced early in the life cycle. We have found that reviewing the business needs alongside the Acceptance Tests works really well; it clarifies issues that might otherwise have been overlooked. This is yet another way to find faults as early as possible in the life cycle so that they can be removed at the least cost. Testers should be involved in reviewing the development documents that tests are based on, and should also review their own test documentation.

Costs of reviews: You cannot gain the benefits of reviews without investing in doing them, and this does have a cost. The costs of reviews are mainly in people's time, i.e. it is an effort cost, but the cost varies depending on the type of review. The leader or moderator of the review may need to spend time in planning the review (this would not be done for an informal review, but is required for Inspection). The studying of the documents to be reviewed by each participant on their own is normally the main cost (although in practice this may not be done as thoroughly as it should). If a meeting is held, the cost is the length of the meeting times the number of people present. The fixing of any faults found, or the resolution of issues found, may or may not be followed up by the leader. In the more formal review techniques, metrics or statistics are recorded and analysed to ensure the continued effectiveness and efficiency of the review process. Process improvement should also be a part of any review process, so that lessons learned in a review can be folded back into development and testing processes. (Inspection formally includes process improvement; most other forms of review do not.)

As a rough guide, something between 5% and 15% of project effort would typically be spent on reviews. Note that 10% is half a day a week. If Inspections are being introduced into an organisation, then 15% is a recommended guideline. Once the Inspection process is mature, this may go down to around 5%. Remember that the cost of reviews always needs to be balanced against the cost of not doing them, i.e. finding the faults (which are already there) much later, when it will be much more expensive to fix them.

Types of review
We have now established that reviews are an important part of software testing. In this section, we will look at different types of reviews, and at the activities that are done to a greater or lesser extent in all of them. We will also look at the Inspection process in a bit more detail, as it is the most effective of all review types.

Characteristics of different review types
Informal review: As its name implies, this is very much an ad hoc process. Normally it simply consists of someone giving their document to someone else and asking them to look it over. A document may be distributed to a number of people, and the author of the document would hope to receive back some helpful comments. It is a very cheap form of review because there is no monitoring of metrics, no meeting and no follow-up. It is generally perceived to be useful.

Technical review or Peer review
A technical review may have varying degrees of formality. It is often a peer group technique; a peer review would exclude managers from the review. This type of review does focus on technical issues and technical documents. It can find important faults, but can also be used to resolve difficult technical problems. The success of this type of review typically depends on the individuals involved: they can be very effective and useful, but sometimes they are very wasteful (especially if the meetings are not well disciplined), and can be rather subjective. Often this level of review will have some documentation, even if just a list of issues raised. Sometimes metrics will be kept.

Decision-making review
This type of review is closely related to the previous one (in fact the syllabus does not distinguish them). Here the focus is on discussing the issues, coming to a consensus and making decisions, which may be technical or managerial: for example, about whether a given feature should be included in the next release or not, or deciding on the best way to implement a design.

Walkthrough
A walkthrough is typically led by the author of a document, for the purpose of educating the participants about the content so that everyone understands the same thing. A walkthrough may include "dry runs" of business scenarios to show how the system would handle certain specific situations.

Inspection
An Inspection is the most formal of the formal review techniques. It is led by a trained Leader or moderator (not the author). There are defined roles for searching for faults based on defined rules and checklists. Metrics are a required part of the process, and there are strict entry and exit criteria to the Inspection process.

Characteristics of reviews in general
Objectives and goals
The objectives and goals of reviews in general normally include the verification and validation of documents against specifications and standards. Some types of review also have an objective of achieving a consensus among the attendees (but not Inspection). Some types of review have process improvement as a goal (this is formally included in Inspection).

Activities
There are a number of activities that may take place for any review. The planning stage is part of all except informal reviews. In some reviews (Inspection, and possibly others), an overview or kickoff meeting is held to put everyone "in the picture" about what is to be reviewed and how the review is to be conducted. This pre-meeting may be a walkthrough in its own right.

Next comes preparation or individual checking: each person spends time on the review document (and related documents), becoming familiar with it and/or looking for faults. This is usually where the greatest value is gained from a review process. In Inspection it is required; in some other types of review this part of the process is optional (at least in practice).

Most reviews include a meeting of the reviewers. Sometimes the meeting time is the only time people actually look at the document. Inspection does not hold a meeting if it would not add economic value to the process.

The best reviews (of any level of formality) ensure that value is gained from the meeting; sometimes, though, the meetings run on for hours and discuss trivial issues. The more formal review techniques include follow-up of the faults or issues found, to ensure that action has been taken on everything raised (Inspection does, as do some forms of technical or peer review; informal reviews probably do not). The more formal review techniques also collect metrics on cost (time spent) and benefits achieved.

Roles and responsibilities
For any of the formal reviews (i.e. not informal reviews), there is someone responsible for the review of a document (the individual review cycle). This may be the author of the document (walkthrough) or an independent Leader or moderator (formal reviews and Inspection). The responsibility of the Leader is to ensure that the review process works, and this is more critical the more formal the review. He or she may distribute documents, choose reviewers, call and lead the meeting, perform follow-up and record relevant metrics.

The reviewers or Inspectors are the people who bring the added value to the process by helping the author to improve his or her document. For Inspection (and possibly other types of review), individual checkers are given specific types of fault to look for, to make the process more effective.

The author of the document being reviewed or Inspected is generally included in the review, although there are some variants that exclude the author. Even so, the author actually has the most to gain from the review in terms of learning how to do their work better (if the review is conducted in the right spirit!).

Managers have an important role to play in reviews. They need to ensure that the reviews are done properly, i.e. that adequate time is allowed for reviews in project schedules. They also need to understand the economics of reviews and the value that they bring. Even if they are excluded from some types of peer review, they can (and should) review management-level documents with their peers.

There may be other roles in addition to these: for example, an organisation-wide co-ordinator who would keep and monitor metrics, or someone to "own" the review process itself. This person would be responsible for updating forms, checklists, etc., and for mentoring the reviewers.

Deliverables
The main deliverable from a review is the changes to the document that was reviewed. In some types of review, the reviewers suggest improvements to the document itself; the author of the document normally edits these, and generally the author can either accept or reject the changes suggested. In other types of review, the changes would be limited to faults found as violations of accepted rules.

If the author does not have the authority to change a related document (e.g. if the review found that a correct design conflicted with an incorrect requirement specification), then a change request may be raised to change the other document(s).

For Inspection, and possibly other types of review, process improvement suggestions are a deliverable. This includes improvements to the review or Inspection process itself and also improvements to the development process that produced the document just reviewed. (Note that these are improvements to processes, not to reviewed documents.) The final deliverable, for the more formal types of review including Inspection, is the metrics about the costs, faults found, and benefits achieved by the review or Inspection process.

Pitfalls
Reviews are not always successful. They are sometimes not very effective, so faults that could have been found slip through the net. They are sometimes very inefficient, so that people feel that they are wasting their time. One of the most common causes for poor quality in the review process is lack of training.

Another problem with reviews is having to deal with documents that are of poor quality. Entry criteria to the review or Inspection process can ensure that reviewers' time is not wasted on documents that are not worthy of the review effort.

A lack of management support is a frequent problem. If managers say that they want reviews to take place but don't allow any time in the schedules for them, this is only "lip service", not a commitment to quality.

Often insufficient thought has gone into the definition of the review process itself: it just evolves over time. Many people keep on doing reviews even though they don't know whether they are worthwhile or not; the return on investment is usually not known, because no one keeps track even of their cost. Long-term, it can be disheartening to become expert at detecting faults if the same faults keep on being injected into all newly written documents. Process improvements are the key to long-term effectiveness and efficiency.

Inspection
Typical reviews versus Inspection
There are a number of differences between the way most people practice reviews and the Inspection process as described in Software Inspection by Gilb and Graham (Addison-Wesley, 1993).

In a typical review, the document is given out in advance, and there are typically dozens of pages to review. In Inspection, it is not just the document under review that is given out in advance, but also source or predecessor documents. The number of pages to focus the Inspection on is closely controlled, so that Inspectors (checkers) check a limited area in depth: a chunk or sample of the whole document.

In a typical review, the instructions are simply "Please review this." In Inspection, the instructions given to checkers are designed so that each individual checker will find the maximum number of unique faults. Special defect-hunting roles are defined, and Inspectors are trained in how to be most effective at finding faults.

In a typical review, comments are often mainly subjective, along the lines of "I don't like the way you did this" or "Why didn't you do it this way?" In Inspection, the process is objective: the only thing that is permissible to raise as an issue is a potential violation of an agreed Rule (the Rulesets are what the document should conform to).

In typical reviews, sometimes the reviewers have time to look through the document before the meeting, and some do not. In Inspection, it is an entry criterion to the meeting that each checker has done the individual checking.

The typical review meeting is often difficult to arrange and may last for hours, with a lot of discussion, some about technical issues but much about trivia. The Leader's role is very important in keeping the meeting on track and focused, and in pulling people away from trivia and pointless discussion. The Inspection meeting, by contrast, is highly focused and efficient, and it is limited to two hours. Discussion is severely curtailed in an Inspection meeting, or postponed until the end. If a meeting would not be economic, it may not be held at all: every activity in the Inspection process is done only if its economic value is continuously proven.

Inspection is more
Inspection contains many mechanisms that are additional to those found in other formal reviews. These include the following:
• Entry criteria, to ensure that we don't waste time Inspecting an unworthy document.
• Prioritising the words: Inspect the most important documents and their most important parts.
• An optimum checking rate, to get the greatest value out of the time spent by looking deep.
• Standards used in the Inspection process.
• Special defect-hunting roles.
• Training for maximum effectiveness and efficiency.
• Built-in process improvement.

Inspection is better
Typical reviews are probably only 10% to 20% effective at detecting existing faults. When Inspection is still being learned, its effectiveness is around 30% to 40% (this is demonstrated in Inspection training courses). Once Inspection is well established and mature, this process can find up to 80% of faults in a single pass, and 95% in multiple passes. It is not economic to be 100% effective in Inspection. The return on investment ranges from 6 hours to 30 hours for every hour spent.

The Inspection process
The diagram shows a product document infected with faults. The document must pass through the entry gate before it is allowed to start the Inspection process. The Inspection Leader performs the planning activities. A Kickoff meeting is held to "set the scene" about the documents and the process. The Individual Checking is where most of the benefits are gained: 80% or more of the faults found will be found in this stage. A meeting is held (if economic). The editing of the document is done by the author or the person now responsible for the document; this involves redoing some of the activities that produced the document initially, and it may also require Change Requests to documents not under the control of the editor. Process improvement is built in to the Inspection process: process improvement suggestions may be raised at any time, for improvements either to the Inspection process or to the development process.

The document must pass through the Exit gate before it is allowed to leave the Inspection process. Exit criteria ensure that the document is worthy and that the Inspection process was carried out correctly. There are two aspects to investigate here: is the product document now ready (e.g. has some action been taken on all issues logged), and was the Inspection process carried out properly? For example, if the checking rate was too fast, then the checking has not been done properly. One of the most powerful exit criteria is the quantified estimate of the remaining defects per page. This may be, say, 3 per page initially, but can be brought down by orders of magnitude over time. A gleaming new improved document is the result of the process, but there is still a "blob" on it: some faults remain. At least with Inspection you consciously predict the levels of remaining faults, rather than fallaciously assuming that we have found them all!

How the checking rate enables deep checking in Inspection
There is a dramatic difference between Inspection and normal reviews, and that is in the depth of checking. Typically in reviews, the time available and the size of the document determine the checking rate. So, for example, if you have 2 hours available for a review and the document is 100 pages long, then the checking rate will be 50 pages per hour. This is equivalent to "skimming the surface" of the document.

In Inspection, it is the optimum rate for the type of document that is used to determine the size of the document that will be checked in detail. So if the optimum rate is one page per hour and we have two hours, then the size of the sample or chunk will be 2 pages. (Any two of these three factors determine the third.) The rate is based on prioritised words (a logical page rather than a physical page). Note that the optimum rate needs to be established over time for different types of document, and will depend on a number of factors.

Of course it doesn't take an hour just to read a single page. But the checking done in Inspection includes comparing each paragraph or sentence on the target page with all source documents, working through checklists for different role assignments, and checking each paragraph or phrase against relevant rule sets, as well as the time to read around the target page to set the context. If checking is done to this level of thoroughness, it is not at all difficult to spend an hour on one page!

How does this depth-oriented approach affect the faults found? This is illustrated by the picture of a document: initially there are no faults visible. A surface-skimming review will find some faults; in this example we have found one major and two minor faults. Our typical reaction is now to think: "This review was worthwhile, wasn't it: it found a major fault. Now we can fix that and the two other minor faults, and the document will now be OK." Think: are we missing anything here? There is no guarantee that the most dangerous faults are lying near the surface!

Inspection is different. On the picture, we have gone deep in the Inspection on a limited number of pages. We do not take any more time. We have found the major fault found in the other review plus two (other) minors, but we have also found a deep-seated major fault, which we would never have seen or even suspected if we had not spent the time to go deep. When the author comes to fix this deep-seated fault, he or she can look through the rest of the document for similar faults, and all of them can then be corrected. This gives tremendous leverage to the Inspection process: you can fix faults you didn't find! So in this example we will have corrected 5 major faults instead of one. The value to be gained by depth gives far greater long-term gains than surface-skimming reviews that miss major deep-seated problems.

Inspection surprises
To summarise the Inspection process, there are a number of things about Inspection which surprise people. The slow checking rates are surprising. The logging rates are much faster than in typical reviews (30 to 60 seconds per item, where typical reviews log one thing in 3 to 10 minutes); this ensures that the meeting is very efficient. The fundamental importance of the Rules is what makes Inspection objective rather than a subjective review. The Rules are democratically agreed as applying (this helps to defuse author defensiveness), and by definition a fault is a Rule violation. The strict entry and exit criteria help to ensure that Inspection gives value for money.

One reason this works is that the final responsibility for all changes is fully given to the author, who has total responsibility for the final classification of faults as well as the content of all fixes. More information on Inspection can be found in the book Software Inspection, Tom Gilb and Dorothy Graham, Addison-Wesley, 1993, ISBN 0-201-63181-4.

Static analysis
What can static analysis do?
Static analysis is a form of automated testing. Like a compiler, the static analysis tool analyses the code without executing it, and can alert the developer to various things such as unreachable code, undeclared variables, and many other suspicious aspects. It can check for violations of standards and can find things that may or may not be faults. Static analysis is descended from compiler technology; in fact, many compilers may have static analysis facilities available for developers to use if they wish. There are also a number of stand-alone static analysis tools for various different computer programming languages. The significance of this is that static analysis tools can perform a number of simple checks, working from the code itself.

Data flow analysis
Data flow analysis is the study of program variables. A variable is basically a location in the computer's memory that has a name, so that the programmer can refer to it more conveniently in the source code. When a value is put into this location, we say that the variable is "defined". When that value is accessed, we say that it is "used". For example, in the statement "x = y + z", the variables y and z are used, because the values that they contain are being accessed and added together. The result of this addition is then put into the memory location called "x", so x is defined.

One of the checks that a static analysis tool can make is to ensure that every variable is defined before it is used. If a variable is not defined before it is used, the value that it contains may be different every time the program is executed, and in any case is unlikely to contain the correct value. Another check is to ensure that every time a variable is defined, it is used somewhere later on in the program. If it isn't, then why was it defined in the first place? This is known as a data flow anomaly, and although it can be a perfectly harmless fault, it can also indicate that something more serious is at fault.

Control flow analysis
Control flow analysis can find infinite loops and inaccessible code. However, defensive programming may result in code that is technically unreachable, so not all of the things found are necessarily faults.

Cyclomatic complexity
Static analysis tools can also compute various metrics about code, such as cyclomatic complexity. Cyclomatic complexity is related to the number of decisions in a program or control flow graph. The easiest way to compute it is to count the number of decisions (diamond-shaped boxes) on a control flow graph and add 1. Working from code, count the total number of IF's and any loop constructs (DO, WHILE, FOR, REPEAT) and add 1. The cyclomatic complexity does reflect to some extent how complex a code fragment is, but it is not the whole story.

Other static metrics
Lines of code (LOC, or KLOC for 1000's of LOC) is a measure of the size of a code module. Operands and operators is a very detailed measurement devised by Halstead, but not much used now. Fan-in is related to the number of modules that call (in to) a given module. Modules with high fan-in are found at the bottom of hierarchies, or in libraries where they are frequently called.

Modules with high fan-out are typically at the top of hierarchies, because they call out to many modules (e.g. the main menu). Any module with both high fan-in and high fan-out probably needs re-designing. Nesting levels relate to how deeply nested statements are within other IF statements. This is a good metric to have in addition to cyclomatic complexity, since highly nested code is harder to understand than linear code, but cyclomatic complexity does not distinguish them. Other metrics include the number of function calls, and a number of metrics specific to object-oriented code.

Limitations and advantages
Static analysis has its limitations. Static analysis tools do not execute the code, so they are not a substitute for dynamic testing, and they are not related to real operating conditions. Static analysis cannot distinguish "fail-safe" code from real faults or anomalies, and may create a lot of spurious failure messages. However, static analysis tools can find faults that are difficult to see, and they give objective quality information about the code. We feel that all developers should use static analysis tools, since the information they can give can find faults very early, when they are very cheap to fix.

WinRunner 7.0
• Developed by Mercury Interactive.
• Functionality testing tool (not suitable for Performance, Usability and Security Testing).
• Supports client/server and web technologies (VB, VC++, Java, Power Builder, D2K, HTML etc.).
• WinRunner won't support .Net, XML, SAP, PeopleSoft, Oracle Applications, Delphi, Flash, Maya etc.
• To support .Net, XML, SAP, PeopleSoft, Oracle Applications etc. we can use QTP (Quick Test Professional).
• QTP is an extension of WinRunner.

WinRunner Recording Process:

Learning – Recording – Edit Script – Run Script – Analyze Test Results

Learning: Recognition of the objects and windows in your application by the testing tool is called Learning. (Note: WinRunner 7.0 provides an auto-learning facility to recognize objects and windows in your project without your interaction.)
Recording: A test engineer records the manual process in WinRunner in order to automate it.
Edit Script: The test engineer inserts the required check points into the recorded test script.
Run Script: The test engineer executes the automated test script to get results.
Analyze Results: The test engineer analyzes the test results to concentrate on defect tracking.

Test Script: A test script consists of Navigational Statements & Check Points. The WinRunner scripting language is called TSL (Test Script Language); it is like C, and every statement ends with a semicolon (;).

Explain Icons in WinRunner

Example login window:
User Id: *****
Password: *****
[Ok]
Exp: Ok is enabled only after entering both the user id and password.

Add-in Manager: This window provides a list of WinRunner-supported technologies with respect to our purchased license. Note: if all options in the Add-in Manager are off, by default WinRunner supports the VB and VC++ interfaces (Win32 API).

Recording Modes: To record our business operations (navigations) in WinRunner, we can use 2 types of recording modes:
1. Context Sensitive mode (the default mode)
2. Analog mode

Analog Mode: To record mouse pointer movements on the desktop, we can use this mode. In analog mode, WinRunner records mouse pointer movements with respect to desktop co-ordinates. Application areas: digital signatures, graphs drawing, image movements. If you want to use Analog mode for recording, you must maintain a constant monitor resolution and application position during recording and running.

move_locator_track(): WinRunner uses this function to record mouse pointer movements on the desktop in one unit of time. It stores the mouse co-ordinates; the track is actually a memory location holding desktop co-ordinates in which you operate the mouse. By default it starts with 1.
Syntax: move_locator_track(track number);

mtype(): WinRunner uses this function to record mouse button operations on the desktop.
Syntax: mtype("<T Track Number><K key on the mouse used>+/-");
Ex: mtype("<T20><kLeft>+");

type(): We can use this function to record keyboard operations in analog mode.
Syntax: type("Typed characters" / "ASCII notation");

Context Sensitive Mode: To record mouse and keyboard operations on our application build, we can use this mode. It is the default mode. In this mode WinRunner records our application operations with respect to objects and windows; it is based on the operation performed, not on time. In general, a functionality test engineer creates automation test scripts in Context Sensitive mode with the required check points. The test engineer maintains the corresponding context-sensitive window in its default position during recording and running.

Functionality Testing Techniques: WinRunner is a functionality testing tool; it provides a set of facilities to cover the sub tests below:
• Behavioral Coverage (object properties checking)
• Input Domain Coverage (correctness of size and type of every input object)
• Error Handling Coverage (preventing negative navigation)
• Calculations Coverage (correctness of output values)
• Backend Coverage (data validation & data integrity of database tables)
• Service Levels (order of functionality or services)
• Successful Functionality (combination of all the above)

Check points: To automate the above sub tests, we can use 4 check points in WinRunner:
1. GUI check points
2. Bitmap check points
3. Data Base check points
4. Text check points

Base State: An application state in which a test starts is called the Base State.
Call State: An intermediate state of an application, between the base state and the end state, is a Call State.
End State: An application state in which a test stops is called the End State.

Navigational statements (examples):
Focus to Window: set_window("Window Name", time);
Push Button: button_press("Button Name");
Check Box: button_set("Button Name", ON); button_set("Button Name", OFF);
Radio Button: button_set("Button Name", ON);
List/Combo Box: list_select_item("List1", "Selected Item");
Menu: menu_select_item("Menu Name; Option Name");
Text Box: edit_set("Edit Name", "Typed Characters");
Password text box: password_edit_set("Pwd Object", "Encrypted Pwd");

GUI Check point: To automate the behavior of objects, we can use this check point. It consists of three sub options:
1. For Single Property
2. For Object/Window
3. For Multiple Objects

For Single Property: To test a single property of an object, we can use this option.
Navigation: select a position in the script; Create menu; GUI check point; for single property; select the testable object (double click); select the required property with its expected value; click paste.

Ex: Update Object

Focus to Window: Update Order disabled. Open a Record: Update Order disabled. Perform Change: Update Order enabled.

Syntax: object_check_info("Object Name", "Property", Expected value);
Ex: button_check_info("Update Order", "enabled", 0);
Note: if the checkpoint is for a numeric value, there is no need for double quotes around the expected value; if it is for a string value, place the value in double quotes. By default WinRunner takes any value as a string with double quotes.

Problem (NagaRaju Journey window: Fly From, Fly To, Ok): if you select an item in a list box, the number of items in the next list box is decreased by 1. When you select an item in Fly From, the expected number of items in Fly To is equal to the number of items in Fly From minus 1.

Problem (NagaRaju Shopping window: Item No, Quantity, Ok): Focus to Window, then Item No should be focused; Ok is enabled only after filling Item No & Quantity.

Problem: Focus to Window – Ok should be Disabled

Enter Roll No – Ok should be disabled
Enter Name – Ok should be disabled
Enter Class – Ok should be disabled

(Windows: NagaRaju Shopping with Item No, Quantity, Ok; NagaRaju Journey with List1, List2, List3, Ok.)

Problem: If type is A, Age should be focused. If type is B, Gender should be focused. If type is C, Qualification should be focused. Else, Others should be focused. (Use a switch statement.)

Fields: Type, Age, Gender, Qualification, Others

/* The original shows only case "A"; the remaining cases follow the
   same pattern, assuming all four fields are edit objects. */
switch(x)
{
    case "A": edit_check_info("Age", "focused", 1);           break;
    case "B": edit_check_info("Gender", "focused", 1);        break;
    case "C": edit_check_info("Qualification", "focused", 1); break;
    default:  edit_check_info("Others", "focused", 1);        break;
}

Exp (Sample1 window: List1, Text, Ok): the selected item in the list box appears in the text box after clicking the Ok button.

Exp (Sample2 window: List, Text, Display): the selected item in the list box appears in the Sample2 text object after clicking the Display button.

Problem (NagaRaju Employee window: Emp No, Dept No, BSal, Comm, Ok):
If basic salary >= 10000 then commission = 10% of basic salary.
Else if basic salary is between 5000 & 10000 then commission = 5% of basic salary.
Else if basic salary < 5000 then commission = 200 Rs.

Problem (Roll No, Grade, Ok):
If Total >= 800 then Grade = A.
Else if Total is between 700 & 800 then Grade = B.
Else Grade = C.

For Object/Window: To test more than one property of a single object, we can use this option.
Ex: Update Object
Focus to Window: Update Order disabled. Open a Record: disabled. Perform Change: enabled & focused.

Syntax: obj_check_gui("Object Name", "Check List File.ckl", "Expected Values File.txt", time to create);
Ex: obj_check_gui("Update Order", "list1.ckl", "gui1", 1);
In the above syntax, the check list file specifies the list of properties to test for a single object; its extension is .ckl. The expected values file specifies the list of expected values for the selected (testable) properties; its extension is .txt.

For Multiple Objects: To test more than one property of more than one object in a single checkpoint, we can use this option. To create this checkpoint the tester selects multiple objects in a single window and specifies expected values for the required properties of every selected object.

Ex:
Focus to Window: Insert Order disabled, Update Order disabled, Delete Order disabled.
Open a Record: Insert Order disabled, Update Order disabled, Delete Order enabled.
Perform Change: Insert Order disabled, Update Order enabled & focused, Delete Order enabled.

Navigation: select a position in the script; create menu; GUI checkpoint; for multiple objects; click add; select the testable objects; right click to release; specify expected values for the required properties of every selected object; click ok.

Syntax: win_check_gui("Window Name", "Check List File.ckl", "Expected Values File.txt", time to create);
Ex: win_check_gui("Flight Reservation", "list3.ckl", "gui3", 1);

Case Study: What type of properties do you check for which objects?
Object Type – Properties
Push Button – Enabled, Focused
Radio Button – Status (On, Off)
Check Box – Status (On, Off)

List Box – Count (number of items in the list box), Value (current selected value)
Table Grid – Rows, Columns, Table Content
Text / Edit Box – Enabled, Focused, Value, Range, Regular Expression, Date Format, Time Format

Changing Check Points: WinRunner allows us to perform changes in existing check points. There are 2 types of changes, made due to sudden project changes or tester mistakes:
1. Change expected values
2. Add new properties to test

Change expected values: WinRunner allows you to change the expected values in existing checkpoints.
Navigation: execute the test script; click results; perform changes in the expected values in the results window if required; change the run mode to update; click run to overwrite; click run in verify mode to get results.

Add new properties to test: Sometimes the test engineer adds extra properties to an existing checkpoint, due to incompleteness of the test, through the navigation below.
Navigation: Create menu; edit gui check list; select the check list file name; click ok; select the new properties to test; click ok; click run executed (default values are selected as expected values); perform changes in the result if required; re-execute the test script to get the right result.
(Example properties: Enabled, Focused, Value ON/OFF, Default Value.)

Running Modes in WinRunner:
Verify mode: in this mode WinRunner compares our expected values with the actual values.
Debug mode: to run our test scripts line by line.
Update mode: in this run mode, the default values are selected as expected values.

During GUI check point creation, WinRunner creates checklist files and expected values files on the hard disk. WinRunner maintains the test scripts by default in the tmp folder:
Script: c:\program files\mi\wr\tmp\testname\script
Checklists: c:\program files\mi\wr\tmp\testname\chklist\list1.ckl
Exp values: c:\program files\mi\wr\tmp\testname\exp\gui1

Input Domain Coverage: Range and Size

Input Domain Coverage: Range and Size
Navigation: Create Menu, GUI Check Point, For Object/Window, select object, select the Range property, enter the From & To values, click OK.
Syntax: obj_check_gui("obj name", "Check List File.ckl", "expected values file", time to create);
In the above syntax the checklist file specifies the list of properties to test ("What Property you are checking"; extension .ckl) and the expected values file specifies the expected values ("Range of Values From & To").
Ex: obj_check_gui("Update Order", "list1.ckl", "gui1", 1);

Sample: Age (NagaRaju)

Input Domain Coverage: Valid and Invalid Classes
Navigation: Create Menu, GUI Check Point, For Object/Window, select object, select the Regular Expression property, enter the Expected Expression (Ex: [a-z]*), click OK.

Problem: The Name text box should allow only lower case characters. Sample cases (Name – NagaRaju):
1. Alphabets in lower case, and initial cap only
2. Alphanumeric, starting and ending with alphabets only
3. Alphabets in lower case, but starting with R and ending with o only
4. Alphabets in lower case with an underscore in the middle
5. Alphabets in lower case with a space and an underscore in the middle

Bitmap Check Point: It is an optional checkpoint in a functionality testing tool. The tester can use this checkpoint to compare images, logos, graphs and other graphical objects (like signatures). This checkpoint consists of two sub types:
1. For Object/Window (entire image testing)
2. For Screen Area (part of image testing)
These options support testing on static images only; WinRunner doesn't support dynamic images developed using Flash, Maya…

For Object/Window: To compare our expected image with the actual image in your application build, we can use this option. Run on different versions:
Expected – record time; Actual – run time; Differences – what the differences are.
Navigation: select a position in script, Create menu, Bitmap Checkpoint, For Object/Window, select the image object.
Syntax: obj_check_bitmap("Image Object Name", "Image File.bmp", time to create the checkpoint);
Ex: win_check_bitmap("About Flight Reservation System", "Img1", 1);

For Screen Area (Part of Image Testing): To compare our expected image part with the actual image in your application build, we can use this option. Run on different versions:
Expected – record time; Actual – run time; Differences – what the differences are.
Navigation: select a position in script, Create menu, Bitmap Checkpoint, For Screen Area, select the required region in the testable image, right click to release.
Syntax: obj_check_bitmap("Image Object Name", "Expected Image File.bmp", time to create the checkpoint, x, y, width, height);
Ex: win_check_bitmap("About Flight Reservation System", "Img2", 1, 29, 122, 191, 71);

Note: TSL supports a variable number of parameters, like function overloading. For every project's functionality testing the GUI checkpoint is obligatory to use; whether the bitmap checkpoint is used by the tester depends on requirements.

Database Check Point: To conduct backend testing using WinRunner we can use this option.

Back End Testing: Validating the completeness and correctness of the impact of front end operations on the backend tables. This process is also known as database testing; in general, backend testing is also known as validation of data and integrity of data.

Database checkpoint provides three sub options:
1. Default Check (depends on content)
2. Custom Check (depends on rows count, columns count and content)
3. Runtime Record Check (new option in WinRunner 7.0)

DSN: Data Source Name. It is a connection string between the front end and the back end; it maintains the connection process.

Default Check: To check data validation and data integrity in the database, we can use this option. To automate this test:
Steps:
1. Connect to the database
2. Execute the select statement
3. Return the results in an Excel sheet
4. Analyze the results manually

[Diagram: the database checkpoint wizard connects the application's front end through a DSN to the back end and runs a Select on the application database.]

In bitmap checking we test between two versions of images.

In GUI checking we test the same application against its expected behavior; in database checking we test twice on the original data. To conduct this testing, the test engineer collects some information from the development team:
• Connection String or DSN
• Table definitions or Data Dictionary
• Mapping between front end forms and backend tables

Database Testing Process:
1. Create a database checkpoint (the current content of the database is selected as Expected).
2. Perform an Insert / Delete / Update operation through the front end.
3. Execute the database checkpoint (the current content of the database is selected as Actual).

Note: As in GUI & Bitmap checkpoints, we start by selecting the position in the script.
Navigation: Create Menu, Database Checkpoint, Default Checkpoint, specify the connection to the database (ODBC / Data Junction), specify the SQL statement (c:\PF\MI\WR\temp\testname\msqr1.sql), click Next, click Create to select the DSN, write the select statement (select * from orders), click Finish.

Syntax: db_check("Check List File.cdl", "Query Result File.xls");
Ex: db_check("list5.cdl", "dbvf5");

Criteria: expected difference – Pass; wrong difference – Fail.
What was updated: data validation. Who and when updated: data integrity.
New record – green color. Modified record – yellow color.

Custom Check: The test engineer uses this option to conduct backend testing depending on rows count, columns count, table content, or a combination of the above three properties.
Default checkpoint: Content is the property & content is the expected value.
Custom checkpoint: Rows Count is the property & number of rows is the expected value.
During custom checkpoint creation WinRunner provides a facility to select these properties. In general test engineers use the default check option most of the time, because content is also suitable to find the number of rows and columns.
Syntax: db_check("Check List File.cdl", "Query Result File.xls");
Ex: db_check("list11.cdl", "dbvf8");
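Put together, the default-check flow above can be sketched in TSL. This is a minimal sketch, not the tool's generated script: the window, menu and field names are illustrative, and the checkpoint files (list5.cdl / dbvf5) are assumed to have been created earlier through Create > Database Checkpoint > Default Check.

```
# Sketch: back-end test of an insert operation (names illustrative).
set_window ("Flight Reservation", 10);
menu_select_item ("File;New Order");     # front-end operation under test
edit_set ("Name:", "NagaRaju");
button_press ("Insert Order");
# Current table content becomes Actual and is compared with Expected:
db_check ("list5.cdl", "dbvf5");
```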




[Diagram: expected vs. actual database records – a new record is shown in green, a modified record in yellow.]

Front End – Programmers (Programming Division); Back End – Database Administrators (DB Division). The front end object names should be understandable to the end user (WYSIWYG).

Runtime Record Checkpoint: Sometimes the test engineer uses this optional checkpoint to find the mapping between front end objects and backend columns.

Navigation: Create Menu, Database Checkpoint, Runtime Record Check, specify the SQL statement, click Next, click Create to select the DSN, write a select statement with the doubtful columns (select orders.order_number, orders.customername from orders), select the doubtful front end objects for those columns, click Next, select any of the below options:
• Exactly one match
• One or more match
• No match record
Click Finish.

Note: For custom and default checkpoints you have to give ; at the end of the SQL statement, but in a runtime record checkpoint you need not give it.

Syntax: db_record_check("Check List File Name.cvr", DVR_ONE_MATCH / DVR_ONE_MORE_MATCH / DVR_NO_MATCH, Variable);
Ex: db_record_check("list1.cvr", DVR_ONE_MATCH, record_num);

In the above syntax the checklist specifies the expected mapping to test and the variable stores the number of records matched. If the mapping is correct, the matching values will be presented. A runtime record checkpoint allows you to perform changes in the existing mapping, through the below navigation.


Navigation: Create menu, Edit Runtime Record Checklist, select the checklist file name, click Next, change the query (if you want to test on new columns), click Next, change the object selection for new-object testing, click Finish.

Synchronization: To define the time mapping between the testing tool and the application, we can use synchronization point concepts.

wait(): To define a fixed waiting time during test execution, the test engineer uses this function.
Syntax: wait(time in seconds);
Ex: wait(10);
Drawback: This function defines a fixed waiting time, but our applications take variable times to complete, depending on the test environment.

Change Runtime Settings: During test script execution WinRunner does not depend on the recording-time parameters. To maintain a waiting state in WinRunner we can use the wait() function or change the runtime settings, which maintain mainly the following information:

Delay: the time to wait between window focusing (for window synchronization).
Timeout: how much time the application should wait for context sensitive statements and checkpoints.

Window based statements can wait up to Delay + Timeout; object based statements can wait up to Timeout.

Navigation: Settings, General Options, Run tab, change Delay & Timeout depending on the requirement, click Apply, click OK.

Ex (Delay = 1 sec to focus, Timeout = 10 sec):
1. set_window("", 6); – can wait up to 11 sec (delay + timeout)
2. button_press("OK"); – can wait up to 10 sec
3. button_check_info("OK", "enabled", 1); – can wait up to 10 sec


Drawbacks in Change Runtime Settings: If you change the settings once, they are applied to each and every test, without user specification. Due to this, most of the time test engineers do not use the change-runtime-settings option; instead they use synchronization points.

Synchronization Point: We can use this option / concept in WinRunner to define the time mapping between tool and project. It consists of three sub options; nowadays most test engineers use the For Object/Window Property option, to avoid time mismatch problems.

For Object/Window Property:
Navigation: select a position in script, Create menu, Synchronization Point, For Object/Window Property, select the object, specify the property with its expected value (Ex: a status / progress bar showing 100% completed and enabled…), specify the maximum time to wait, click OK.
Syntax: obj_wait_info("Object Name", "Property", Expected Value, Maximum Time to Wait);
Ex: obj_wait_info("Insert Done...", "enabled", 1, 10);

For Object/Window Bitmap: Sometimes the test engineer defines the time mapping between tool and project depending on an image in the application.
Navigation: select a position in script, Create menu, Synchronization Point, For Object/Window Bitmap, select the required image.
Syntax: obj_wait_bitmap("Object Name", "image1.bmp", Maximum Time to Wait);

For Screen Area Bitmap: Sometimes the test engineer defines the time mapping between tool and project depending on an image area in the application.
Navigation: select a position in script, Create menu, Synchronization Point, For Screen Area Bitmap, select the required image region, right click to release.
Syntax: obj_wait_bitmap("Object Name", "image1.bmp", Maximum Time to Wait, x, y, width, height);

Text Check Point: To cover calculations and other text based tests, we can use the "Get Text" option from the Create menu. This option consists of two sub options:
1. From Object / Window
2. From Screen Area

Page 65 of 132

From Object / Window: To capture object values into variables we can use this option.
Navigation: Create Menu, Get Text, From Object / Window, select the required object (double click).
Syntax: obj_get_text("Object Name", Variable);
Ex: obj_get_text("Flight No:", text);

Related: obj_get_info("Object Name", "Property", Variable);
Ex: obj_get_info("ThunderTextBox_3", "value", v1);

From Screen Area: To capture static text on an application build screen we can use this option.
Navigation: Create Menu, Get Text, From Screen Area, select the required region to capture the value [+ sign], right click to release.
Syntax: obj_get_text("Object Name", Variable, x1, y1, x2, y2);
Ex: obj_get_text("Flight No:", text, 2, 2, 50, 60);

Sample (NagaRaju): Input – Item No, Quantity; click OK; Output – Price $, Total.

Retesting: Re-execution of our test on the same application build, with multiple test data, is called retesting. Data is driving, or changing, to test the application; so in WinRunner retesting is also called Data Driven Testing (DDT).

[Diagram: keyboard test data driving a test script – a Multiply screen (No1, No2 → Result; expected res = no1 * no2) and an Item No / Quantity → Price $ / Total screen.]

Problem: First enter the EmpNo and click the OK button; it then displays bsal, comm and gsal.
Exp: gsal = bsal + comm.
If bsal >= 15000 then comm is 15%;
if bsal is between 8000 and 15000 then comm is 5%;
if bsal < 8000 then comm is 200.

tl_step(): tl stands for test log; the test log is the test result. We can use this function to define a user defined pass or fail message.
Pass – green – 0; Fail – red – 1.

password_edit_set("pwd", password_encrypt(y)); – types an encrypted password into a password field.

[Sample screens: User Id / Password → Login → Next; Text1 → OK → Display Text2.]
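A minimal sketch of tl_step() in use. The object name "Total:" and the expected value are illustrative; tl_step takes a step name, a status (0 = pass, non-zero = fail) and a description.

```
# Sketch: user-defined pass/fail reporting (names illustrative).
obj_get_text ("Total:", total);
if (total == 100)
    tl_step ("total check", 0, "Total is correct");   # 0 = pass (green)
else
    tl_step ("total check", 1, "Total is wrong");     # 1 = fail (red)
```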

Through Flat File (Notepad): Sometimes the test engineer conducts data driven testing depending on multiple test data kept in flat files (like Notepad .txt files). To manipulate file data for testing, the test engineer uses the below TSL functions.

file_open(): To load a required flat file into RAM, with specified permissions.
Syntax: file_open("Path of the File", FO_MODE_READ / FO_MODE_WRITE / FO_MODE_APPEND);

file_getline(): We can use this function to read a line from an opened file. Like a file pointer in C, the pointer is incremented automatically.
Syntax: file_getline("Path of the File", Variable);

file_close(): We can use this function to swap an opened file out of RAM.
Syntax: file_close("Path of the File");

file_printf(): We can use this function to write specified text into an opened file in WRITE or APPEND mode.
Syntax: file_printf("Path of the File", "Format", what values you want to write or which variable values you want to write);
Format codes: %d – integer, %f – floating point, %s – string, \n – new line, \t – tab, \r – carriage return.

file_compare(): To compare the contents of two files, we can use this function. File3 is optional; it holds the concatenated content of both files.
Syntax: file_compare("Path of File1", "Path of File2", "Path of File3");

split(): We can use this function to divide a string into fields. The separator must be a single character.
Syntax: split(main string, array name, separator);

substr(): We can use this function to separate a substring from a given string.
Syntax: substr(main string, start position, length of substring);
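The file functions above combine into the usual flat-file data-driven loop. A minimal sketch, assuming a comma-separated data file and a Multiply screen; the path, window and field names are illustrative, and the E_OK return-code check on file_getline is an assumption.

```
# Sketch: data-driven multiply test fed from a flat file (names illustrative).
f = "c:\\testdata.txt";                  # lines like: 10,20
file_open (f, FO_MODE_READ);
while (file_getline (f, line) == E_OK)   # assumed success code
{
    split (line, fields, ",");           # fields[1] = no1, fields[2] = no2
    set_window ("Multiply", 5);
    edit_set ("No1", fields[1]);
    edit_set ("No2", fields[2]);
    button_press ("Multiply");
    obj_get_text ("Result", res);
    if (res == fields[1] * fields[2])
        tl_step ("multiply", 0, "pass");
    else
        tl_step ("multiply", 1, "fail");
}
file_close (f);
```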

[Diagram: flat-file test data (.txt) driving a test script – the Multiply screen (No1, No2 → Result; expected res = no1 * no2), an Item No / Quantity → Price $ / Total screen, and a User Id / Password → Login → Next screen.]

From Front End Grids (List Box): Sometimes the test engineer conducts retesting depending on multiple test data objects (like a list box). To manipulate list box data for testing, the test engineer uses the below TSL functions.

list_get_item(): We can use this function to capture a specified list box item, through its item number, into a given variable.
Syntax: list_get_item("ListBox Name", Item No, Variable);

list_get_info(): We can use this function to get information about a specified property (like enabled, focused, count) of a list box into a given variable.
Syntax: list_get_info("ListBox Name", Property, Variable);

list_select_item(): We can use this function to select a specified list box item through a given variable.
Syntax: list_select_item("ListBox Name", Variable);

Sample (NagaRaju): Journey – Fly From, Fly To, OK.
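The three list functions are typically combined to walk every item in a grid. A minimal sketch; the list box name is illustrative.

```
# Sketch: drive the test from every item of the "Fly From:" list box.
list_get_info ("Fly From:", "count", n);    # number of items
for (i = 0; i < n; i++)
{
    list_get_item ("Fly From:", i, item);   # capture item i
    list_select_item ("Fly From:", item);   # select it in the application
}
```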

[Diagram: list box data driving a test script – sample screens with List1 → Display Text, and Type / Age / Gender / Qualification / Others fields.]

Data Driven Testing: In general, test engineers create data driven tests depending on Excel sheet data.

From Excel Sheet: In general test engineers create retest scripts depending on multiple test data in an Excel sheet. In this type of retesting the test engineer fills the excel sheet with test data in two ways:
1. From database tables, using a select statement (back end)
2. Our own test data

To generate this type of script, the test engineer uses the Data Driven Test wizard.

Navigation: create the test script for one input; Tools menu, Data Driven Wizard, click Next, browse the path of the excel sheet, specify the variable name to assign the path of the excel sheet (by default, table is the variable), select "add statements to create DDT", select "import data from database", click Next, specify the database connection (ODBC / Data Junction), select "specify SQL statement" (mssql1.sql), click Next, click Create to select the DSN (machine data source – flight32), write the select statement to capture the database content for testing into the excel sheet, click Next, specify the positions at which to replace excel sheet columns in your test script (1. line by line, 2. optimized text), select "show data table now", click Finish.

Problems:
1. Prepare a data driven program to find the factorial of a given number and write the result into the same excel sheet automatically.
2. Prepare a TSL script to write list box items into an excel sheet one by one.

[Diagram: excel sheet columns Col1, Col2, Col3 driving a test script; expected C3 = C1 + C2.]

ddt_open(): We can use this function to open an excel sheet into RAM, in a specified mode.
Syntax: ddt_open("path of excel file", DDT_MODE_READ / DDT_MODE_READWRITE);
This function returns E_FILE_OPEN when the file is opened into RAM; else it returns E_FILE_NOT_OPEN.

ddt_get_row_count(): To find the number of rows in the excel sheet.
Syntax: ddt_get_row_count("path of excel sheet", variable); – the variable stores the number of rows in the sheet.

ddt_set_row(): To point to a row in the excel sheet.
Syntax: ddt_set_row("path of excel file", row no);

ddt_val(): To read a value from a specified column at the pointed row.
Syntax: ddt_val("path of excel file", "col name");

ddt_set_val(): To write a value into a specified column at the pointed row.
Syntax: ddt_set_val("path of excel file", "col name", value or variable);

ddt_save(): To save recent modifications in the excel sheet.
Syntax: ddt_save("path of excel sheet");

ddt_close(): To swap the excel sheet out of RAM.
Syntax: ddt_close("path of excel file");

ddt_update_from_db(): To extend the excel sheet data depending on dynamic changes in the database (Insert, Delete, Update).
Syntax: ddt_update_from_db("path of excel sheet", "path of query file", variable); – the variable specifies how many rows were newly altered.

Problem: Write a program to write list box items into an excel sheet one by one.

Batch Testing: In general test engineers execute their scripts as batches. Every batch consists of a set of dependent tests: in every batch the end state of one test is the base state for the next test, and the output of one test is used as input to other tests. When you execute tests as batches you get a chance to increase the probability of defect detection.

Test Suite / Test Batch: Arranging all tests in one proper order based on their functionality.

Syntax: call "test name"();
        call "path of the test"();
We can use the first syntax when the calling and called tests are both in the same folder, and the second when they are in different folders.
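The ddt functions above form the standard data-driven loop that the wizard generates. A minimal sketch; the table path and column names are illustrative.

```
# Sketch: canonical DDT loop over an Excel data table (names illustrative).
table = "default.xls";
ddt_open (table, DDT_MODE_READWRITE);
ddt_get_row_count (table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i);                     # point to row i
    edit_set ("No1", ddt_val (table, "No1"));   # read values from the row
    edit_set ("No2", ddt_val (table, "No2"));
    button_press ("Multiply");
    obj_get_text ("Result", res);
    ddt_set_val (table, "Result", res);         # write the result back
}
ddt_save (table);
ddt_close (table);
```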

Parameter Passing: WinRunner allows you to pass arguments between a calling test and a called test, or a main test and a subtest. The main test passes values to the subtest; to receive those values, the subtest maintains a set of parameter variables, created through the below navigation, and uses those parameters in the required places of its test script.

Navigation: open the subtest, File menu, Test Properties, select the Parameters tab, click Add to create the parameters, click Apply, click OK.

Ex: call TestName(n); – the subtest receives n = 10 as a parameter.

treturn(): We can use this statement to return a value from a called test to a calling test.
Syntax: treturn(value or variable);
Ex: treturn(10);

texit(): Sometimes test engineers use this statement to stop test execution in the middle of the process.
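A hedged sketch of parameter passing and treturn() between a main test and a subtest. "subtest" is an illustrative saved-test name in the same folder; its parameter n is assumed to be declared through Test Properties.

```
# Main test: pass a value, check the returned status (names illustrative).
n = 10;
temp = call "subtest"(n);
if (temp == 0)
    printf ("subtest passed");
else
    printf ("subtest failed");

# Inside "subtest" (parameter n declared via Test Properties):
#     if (n > 5)
#         treturn (0);
#     else
#         treturn (1);
```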

Silent Mode: In general WinRunner returns a pause message when any standard checkpoint fails during test execution. If you want to execute our test scripts without any interruption when a checkpoint fails, we can follow the below navigation to define silent mode.
Navigation: Settings, General Options, Run tab, select the "Run in batch mode" option, click Apply, click OK.

Ex (main test branching on the value returned by a subtest):
Main test:
n = 10;
temp = call TestName(n);
if (temp == 1)
    printf("fail");
else
    printf("pass");
Subtest:
edit_set("", n);
if (condition)
    treturn(0);
else
    treturn(1);

[Diagram: a batch Test1–Test4; when Test1 fails (e.g. the Sample window appears), execution continues with the next test in silent mode.]

Ex: if (win_exists("Sample") == E_OK) – the window exists.
win_exists(): We can use this function to find the existence of a window on the desktop, in minimized, maximized or hidden position. Time is optional.
Syntax: win_exists("window name", time);

Test engineers call this login process the base state. In the above example, the test engineer creates four automation test scripts to test four different functionalities, depending on functionality dependency: the output of one test execution is input to the other.

Homework: Login after 5 secs; if Next is enabled, go to the next window, else try another user.
Shopping: Prepare the above batch test for ten users whose information is available in an excel sheet; during this batch execution the tester passes Item No & Quantity as parameters.

User Defined Functions: Like programming languages, WinRunner also provides a facility to create user defined functions. In TSL, user defined functions are created by the test engineer to automate repeatable navigation.

public / static function function_name(in/out/inout argument names, …)
{
    repeatable navigation
    return (value or variable);
}

If you want to create a user defined function in which the end state of one execution is the base state for the next execution, we can use static functions: static maintains constant locations for its internal variables during the current test execution.

Ex:
public function add(in a, in b, out c)
{
    c = a + b;
}

Calling test:
x = 6;
y = 6;
add(x, y, z);
printf(z);

[Diagram: a public function's internal variable is re-initialized on every call (a = 100 each time); a static variable keeps its value between calls (static a = 0, then a = 100).]

Note 1: User defined functions allow only context sensitive statements and control statements; they do not allow checkpoints and analog statements.
Note 2: In batch testing one test calls another test through its saved test name; one test invokes a function through the function name. To call a function in a test, the compiled module containing that function must reside in RAM.

Further variants:
public function add(in a, inout b)
{
    b = a + b;
}
Calling test: x = 6; y = 6; add(x, y); printf(y);

public function add(in a, in b)
{
    c = a + b;
    return c;
}
Calling test: x = 6; y = 6; z = add(x, y); printf(z);

in – general arguments; out – return values; inout – both.
return: to return one value.
Note: UDFs allow only context sensitive statements and control statements; they do not allow checkpoints and analog statements.

Compiled Module: Open WinRunner, click New, File menu, Test Properties, General tab, change the test type to Compiled Module, record repeatable navigations as user defined functions, click Apply, click OK, save that test in the dat folder.

Note: WinRunner maintains a default program as a startup script. This script executes automatically when you launch WinRunner. In this script we can write a load() statement to load your compiled module of functions.
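The startup-script idea can be sketched as follows. The compiled-module path is illustrative; after load(), the module's functions can be called in any test like built-ins.

```
# Sketch: startup script loading a compiled module (path illustrative).
load ("c:\\qa\\my_funcs", 0, 0);   # 0 = user defined, 0 = show in Window menu

# Any test can now call its functions directly:
add (6, 6, z);
printf (z);
```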

Syntax: load("Name / Path of the Compiled Module", 0/1, 0/1);
First flag: 0 – user defined compiled module; 1 – system defined compiled module.
Second flag: 0 – the path appears in the WinRunner Window menu; 1 – hides the path.

unload(): We can use this function to unload unwanted functions from RAM.
Syntax: unload("Path of the Compiled Module", "Unwanted Function Name");

reload(): We can use this function to reload unloaded functions again.
Syntax: reload("Path of the Compiled Module", 0/1, 0/1); – flags as for load().

Predefined Functions: These functions are also known as built-in functions or system defined functions. WinRunner provides a facility to search for a required TSL function in a library called the Function Generator, through the below navigation:
Navigation: Create menu, Insert Function, From Function Generator, select the required category, select the required function depending on its description, enter the arguments, click Paste.

invoke_application(): WinRunner allows you to open a project automatically.
Syntax: invoke_application("Path of .exe", "commands", "working directory", SW_SHOW / SW_HIDE / SW_MINIMIZE / SW_RESTORE / SW_SHOWMAXIMIZED / SW_SHOWMINIMIZED / SW_SHOWMINNOACTIVE / SW_SHOWNOACTIVE);
Commands – used in XRunner for Unix OS. Working directory – where temporary files are stored at run time; if you don't specify any directory, by default it takes the c:\windows\temp folder.

Executing a Prepared Query:
db_connect(): We can use this function to connect to a database using an existing DSN or connection string.
Syntax: db_connect("Session Name", "DSN=*******");
Ex: db_connect("Query1", "DSN=Flight32");

db_execute_query(): We can use this function to execute a required "select" statement on the connected database.
Syntax: db_execute_query("Session Name", "Select Statement", Variable); – the variable stores the number of result rows.
Ex: db_execute_query("Query1", "select * from orders where order_number <= "&x, rno);

db_write_records(): We can use this function to write query results into a specified file.
Syntax: db_write_records("Session Name", "File Path", TRUE/FALSE, NO_LIMIT);
Ex: db_write_records("Query1", "nrdbc1.xls", TRUE, NO_LIMIT);

Problems: Prepare TSL to
1. execute the below prepared query: select * from orders where order_number <= x and order_number >= y
2. change the timeout without using Settings.

Extra Functions in WinRunner: Sometimes test engineers add user defined function names to the Function Generator, to maintain user defined functions for future reference. To do this task we can use the below statements (we can write all three in the startup script of WinRunner):

generator_add_category(): We can use this function to create a new category in the generator.
Syntax: generator_add_category("Category Name");
Ex: generator_add_category("NagaRaju");

generator_add_function(): We can use this function to add a user defined function name to the All Functions category.
Syntax: generator_add_function("Function Name", "Description", arity, "Argument Name", "argument type", "default value", …);
Argument types: "point_window()" – for window names; "point_object" – for object types; "browse()" – for file paths; "select_list(0 1 2 3 4 5)" – for selecting from a list of items; "type_edit" – a plain edit field (used by default, if we have no need of the other types).
Ex: generator_add_function("add", "description", 5, "a", "type_edit", "", "b", "type_edit", "", "c", "type_edit", "", "d", "type_edit", "", "e", "type_edit", "");

generator_add_function_to_category():
Syntax: generator_add_function_to_category("category name", "function name");
Note: We can execute the third function only after completion of the second function's execution.
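A hedged sketch answering problem 1 above. The values of x and y and the result file name are illustrative; Flight32 is the demo DSN used earlier in this material.

```
# Sketch: executing the prepared query with two bounds (values illustrative).
x = 10;
y = 5;
db_connect ("Query1", "DSN=Flight32");
db_execute_query ("Query1",
    "select * from orders where order_number <= "&x&" and order_number >= "&y,
    rno);
printf ("rows returned: " & rno);
db_write_records ("Query1", "nrdbc1.xls", TRUE, NO_LIMIT);
db_disconnect ("Query1");
```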

Ex (change the timeout without using Settings): setvar("timeout", time);

System Category Functions (there are 8 functions; they come up often in interviews):
1. invoke_application(): To open an application using its .exe path.
2. system(): To open an application using the title of the software. One .exe path is enough for invoke_application(); system() works through the title.
3. dos_system(): To execute DOS commands.
4. getenv(): To capture environment information. Ex: getenv("M_ROOT"), getenv("M_HOME") – find the parent directory of WinRunner (where WinRunner is installed on your computer).
5. getvar(): To capture system variable values (ex: timeout, delay).
6. setvar(): To change system variable values.
7. get_time(): To capture the system time value (only time, not date).
8. time_str(): To capture the system date with time.

win_exists(): We can use this function to find the existence of a window on the desktop, in minimized, maximized or hidden position. Time is optional.
Syntax: win_exists("window name", time);

The test engineer selects the required function depending on requirements (on the automation needed) through the below navigation:
Navigation: Create menu, Insert Function, From Function Generator, select the category, select the function name with arguments, click Paste.

Clip Board Testing: A test conducted on the selected content of an object is called clipboard testing; a test on the entire application is called general testing.
Difference between edit_get_selection() and obj_get_text(): edit_get_selection("obj name", var) captures only the selected content of the specified object, while obj_get_text() captures the whole text.
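Problem 2 above (change the timeout without using Settings) can be sketched with getvar()/setvar(); the button name is illustrative.

```
# Sketch: change the timeout from script instead of the Settings dialog.
old = getvar ("timeout");        # capture the current timeout
setvar ("timeout", 30);          # raise it for a slow operation
button_press ("Insert Order");   # illustrative slow step
setvar ("timeout", old);         # restore the original value
```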

Open Application: WinRunner provides a facility to open your project automatically through invoke_application() (a system category function), described above. SW_SHOW focuses the window (like set_window).

db_disconnect(): We can use this function to remove the database connection establishment.
Syntax: db_disconnect("Session Name");
Ex: db_disconnect("Query1");

In db_write_records, TRUE means with header and FALSE means without header.
Ex: db_write_records("Query1", "Nr.txt", TRUE, NO_LIMIT);

The prepared-query flow is: connect to the database (db_connect), open the application, execute the prepared query, then disconnect (db_disconnect).

Learning: In general, a test automation process starts with learning. Learning means the recognition of the objects and windows in your application by the testing tool.

WinRunner 7.0 supports auto learning and pre learning.

Auto Learning: During recording, WinRunner recognizes objects and windows with respect to the tester's operations. For every recognized object or window it creates an entry in the GUI Map; every entry consists of a logical name and a physical description.

Steps: start recording – recognize object – catch entry into the GUI Map – script generation.

Ex: for the Ok button, WinRunner generates button_press("Ok"); in the script and builds the entry below in the GUI Map:

Ok
{
class: push_button,
label: "Ok"
}

This type of auto recognition is called auto learning. A disadvantage of WinRunner is that without these entries our existing test scripts are not able to execute. So, to maintain the entries long-term along with our test scripts, before closing WinRunner we have to do two tasks: save the script and save the GUI Map. To maintain the entries we can follow two possible administrations: a global GUI Map file or per test mode. By default WinRunner allows you to create a global GUI Map file.

Global GUI Map File: In this model the test engineer creates a global GUI Map file and maintains it explicitly on the hard disk.

[Model: Test1, Test2 and Test3 share one GUI Map, which is saved to and opened from a single .gui file on the hard disk explicitly, using the File menu in the GUI Map Editor.]
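In the global model the shared .gui file can also be loaded from a startup script instead of through the File menu. The sketch below uses the standard TSL GUI_load call; the file path is an assumed example:

```tsl
# Hedged sketch: load a shared GUI Map file before running the tests.
rc = GUI_load ("C:\\qa\\maps\\flight.gui");   # assumed path to the shared map
if (rc != E_OK)
    report_msg ("could not load GUI map; tests that use its entries will fail");
```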

Per Test Mode: It is a new option in WinRunner 7.0. In this mode WinRunner implicitly handles the entries in the GUI Map: it maintains an automatic process for saving and opening the entries with respect to each test. Due to this, per test mode increases entry redundancy (repetition) when an object or window participates in more than one test. By default WinRunner follows the global GUI Map; if you want to change to per test mode, we can follow the navigation below.

Navigation: Settings, General Options, Environment tab, select "GUI map file per test", click Apply, click OK.

[Model: in per test mode, each of Test 1, Test 2 and Test 3 saves and opens its own .gui file automatically.]


Pre Learning: Sometimes WinRunner 7.0 testers also follow the pre learning concept, recognizing the objects and windows before starting recording. Because the entries are created in advance and shared across tests, pre learning is only suitable for a global GUI Map.


Navigation (pre learning through the Rapid Test Script Wizard): open project, Create menu in WinRunner, Rapid Test Script Wizard, click Next, show application main window, click Next, (select No Tests), click Next, specify sub menu symbols (.., >>, ->), click Next, specify learning mode (Express or Comprehensive), click Learn, (after learning) say Yes or No to open the project automatically, click Next, remember the paths of the startup script and GUI map file, click OK.

In general, test engineers follow the auto learning concept with a global GUI Map file. They do not regularly use auto learning with per test mode, or pre learning.

Difference between Auto Learning and Pre Learning:

• Auto Learning: happens during recording; needs no extra navigation; works with a global GUI Map file or per test mode.
• Pre Learning: happens before recording; uses the Rapid Test Script Wizard; suitable for a global GUI Map file only.

Depending on test requirements, WinRunner test engineers perform changes in the recognition entries of the corresponding objects or windows. There are six situations in which to change the entries of the GUI Map:

1. Wild Card Characters
2. Regular Expressions
3. Virtual Object Wizard
4. Mapped to Standard Class
5. GUI Map Configuration
6. Selective Recording

Wild Card Characters: Sometimes window or object labels vary with respect to the inputs in your application. To create a data driven test on this type of windows and objects, we can change the corresponding entries in the GUI Map using the wild card characters ! and *.

Fax Order No. 6
{
class: window,
label: "Fax Order No.6",
MSW_class: "#32770"
}

becomes

Fax Order No. 6
{
class: window,
label: "!Fax Order No.*",
MSW_class: "#32770"
}


Regular Expressions: Sometimes in your application build, object or window labels vary depending on events. In that case we change the label in the entry into a regular expression; at run time WinRunner matches the entry against the label present at that point. Ex: a button that toggles between Start and Stop:

Start
{
class: push_button,
label: "![S][t][ao][a-z]*"
}

for(i=1;i<=5;i++)
{
set_window ("Personal Web Manager", 3);
button_press ("Start");
printf(" Button Pressed is : "&i);
}

For number changes: wild card characters. For toggling characters: regular expressions.

GUI Map Configuration: Sometimes in your application more than one object has the same physical description with respect to the WinRunner defaults (class and label). To recognize these objects individually we can perform changes in the GUI map configuration; WinRunner then distinguishes the objects using this feature. If class and label are the same, we select MSW_id (Microsoft Window ID) as an optional property.

Navigation: Tools, GUI Map Configuration, select object type, click Configure, select distinguishable properties into obligatory and optional (in general test engineers maintain MSW_id as optional), click OK.

Command1
{
class: push_button,
label: Command1,
MSW_id: 1
}

Note: Here we can maintain MSW_id as an assistive property, because every two objects have different MSW_ids.

Mapped to Standard Class: Sometimes test engineers do not get the required properties of an object. This is used when an object is recognized but the required properties are not coming for that object; then we map the object to a matching standard class and get the required properties.

Navigation: Tools, GUI Map Configuration, select the non-testable object, click Configure, select the mapped-to standard class, click OK.

Virtual Object Wizard: To forcibly recognize non-recognized objects we can use this option.

Navigation: Tools, Virtual Object Wizard, click Next, select the expected type depending on the nature of the object, click Next, mark the non-recognized object area (right click to release), click Next, enter a logical name for the new entry, click Next, say Yes/No to create more, click Finish.

Selective Recording: This is a new concept in WinRunner 7.0. If you have more than one application open on the desktop at the time of recording, WinRunner may also record details of the unnecessary applications into the TSL script if you did not specify exactly which application you need. For these situations we specify it explicitly using this path:

Navigation: Settings, General Options, Record tab, click Selective Recording, select "record only on selected applications" (off by default; options to record on the Start Menu and Windows Explorer are also available), browse the required application path, click OK.

Note: Selective recording is not applicable to analog mode, because in analog mode WinRunner records operations with respect to desktop co-ordinates.

User Interface Testing: WinRunner is a functionality testing tool, but it also provides a facility to conduct user interface testing. In this user interface automation testing, WinRunner depends on the Microsoft 6 rules:

Microsoft 6 Rules:
1. Controls are init cap (labels on controls start with a capital letter)
2. Ok/Cancel existence
3. System menu existence
4. Controls are visible
5. Controls are not overlapped
6. Controls are aligned

To apply the above six rules on your application build, WinRunner uses the TSL functions below.

load_os_api(): WinRunner uses this function to maintain a path between Windows OS system calls and the application programming interface, to apply the six rules.
Syntax: load_os_api();

configure_chkui(): To specify which of the six rules to test, depending on the interest of the tester.
Syntax: configure_chkui(TRUE/FALSE, TRUE/FALSE, TRUE/FALSE, TRUE/FALSE, TRUE/FALSE, TRUE/FALSE);
Ex: ok_can_chk=TRUE, sys_chk=TRUE, lbl_chk=TRUE, text_chk=TRUE, align_chk=FALSE, overlap_chk=FALSE, where:
• ok_can_chk – checks the existence of OK/Cancel buttons
• sys_chk – checks the existence of the system menu
• lbl_chk – checks the capital letters of labels on controls
• text_chk – checks that all text of controls is visible
• align_chk – checks the alignment of controls
• overlap_chk – checks that controls do not overlap

Note: The order of the rule arguments is mandatory.
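Putting the calls together, a user-interface check on one window might look like the sketch below. The window name is an assumed example; the TRUE/FALSE flags follow the mandatory order listed above:

```tsl
# Hedged sketch: apply four of the Microsoft 6 rules to one window.
load_os_api ();                                          # hook the OS-level API calls
configure_chkui (TRUE, TRUE, TRUE, TRUE, FALSE, FALSE);  # ok/cancel, sys menu, labels, text
set_window ("Personal Web Manager", 5);                  # assumed window name
check_ui ("Personal Web Manager");                       # apply the configured rules
```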
check_ui(): WinRunner uses this function to apply the configured rules on a specified window.
Syntax: check_ui("window name");

Note: The above three functions are not built-in functions. They were developed by Mercury Interactive as a system defined compiled module. A compiled module is a permanent, loadable form of user defined functions.

To perform this preliminary level of verification we can also use the Rapid Test Script Wizard (RTSW):

Navigation: open application build on desktop, Create menu, Rapid Test Script Wizard, click Next, select "user interface test", show application main window, click Next, specify sub menu symbols (>>, <<, …), click Next, specify learning mode (Express / Comprehensive), click Learn, after learning say Yes/No to open your application during WinRunner launching, click Next, remember the paths of the startup script and GUI map file, click Next, remember the path of the UI test script, click OK. Then click Run and analyze the results manually.

Regression Testing: After receiving a modified build from the development team, the test engineer performs GUI regression and bitmap regression before performing functionality-level regression, to find screen-level differences between the old and new builds and to ensure that the modifications (bug fixing and resolving) took effect.

[Model: development team releases modified build → GUI regression and bitmap regression find screen-level differences between old and new builds → functionality regression ensures the modification.]

GUI Regression Testing: To find differences in object properties between an old build and a new build, we can use this option in the RTSW; it creates GUI checkpoints on the old build and replays them on the new build.

Navigation: open old build on the desktop, Create menu, Rapid Test Script Wizard, select "GUI regression test", click Next, show application main window, click Next, select "use existing information", click Next, remember the path of the test script, click OK. Close the old build, open the new build, click Run, and analyze the results manually.
Bitmap Regression Testing: To find differences at the level of image objects between an old build and a new build, we can use this option in the RTSW; it creates bitmap checkpoints on the old build and replays them on the new build.

Navigation: open old build on the desktop, Create menu, Rapid Test Script Wizard, select "bitmap regression test", click Next, show application main window, click Next, select "use existing information", click Next, remember the path of the test script, click OK. Close the old build, open the new build, click Run, and analyze the results manually.

Note: After receiving a modified build, the testing team plans functionality regression after completion of GUI regression and bitmap regression. In this scenario GUI regression is mandatory and bitmap regression is optional.

Exception Handling: A non-modifiable runtime error is called an exception. To handle testing exceptions WinRunner provides three types of handlers:

• TSL Exceptions
• Object Exceptions
• Pop-up Exceptions

TSL Exceptions: These exceptions are raised when a specified TSL statement returns a specified error code. To create TSL exceptions we can follow the navigation below.

Navigation: Tools, Exception Handling, select exception type as TSL, click New, enter exception name, select the expected TSL function, select the expected return code, enter a handler function name, click OK after reading the suggestion, click Close. Then record the required navigation to recover from the expected situation as the function body, make it a compiled module, and write a load statement in the WinRunner startup script.

Public function nagaraju(in rc, in func)
{
printf(func & " returns " & rc);
# record the required recovery navigation here
}
Note: When you create an exception, by default the exception is ON.

Object Exceptions: TSL exceptions depend on a corresponding TSL statement and return code, but not all negative situations are suitable to define that way. Some negative situations are defined by the tester with respect to object properties: object exceptions are raised when a specified object property is equal to our expected value (ex: a button becomes disabled while the build is down).

[Model: build goes down → object becomes disabled → test script triggers the handler.]

To create this type of exception, we can follow the navigation below.

Navigation: Tools, Exception Handling, select exception type as Object, click New, enter exception name, select the traceable object, select the property with its expected value, enter a handler function name, click OK after reading the suggestion, click Close. Then record the recoverable navigation as the function body, make it a compiled module, and write a load statement in the WinRunner startup script.

Public function nagaraju (in win, in obj, in attr, in val)
{
printf(" Enabled ");
# record the recoverable navigation here
}

Pop-up Exceptions: These exceptions are raised when a specified window comes into focus during test execution. We can use this type of exception to skip unwanted windows during test execution.

Navigation: Tools, Exception Handling, select exception type as Pop Up, click New, enter exception name, show that unwanted window, specify the handler action, click OK, click Close.

To administrate exceptions WinRunner provides the TSL functions below:

exception_off(" Exception Name ");
exception_on(" Exception Name ");
exception_off_all();

Rational Robot
• Developed by Rational
• Also known as SQA Robot
• Functionality testing tool, like WinRunner
• Supports C/S and web technologies
• Records our business operations in Rational Basic (RB); RB is like VB
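The three administration functions above can wrap a step that is expected to trigger an exception. A hedged sketch follows; the exception name "srv_down", the window and the button label are assumed examples:

```tsl
# Hedged sketch: temporarily disable one exception around a risky step.
exception_off ("srv_down");        # assumed exception name
set_window ("Flight Reservation", 5);
button_press ("Insert Order");     # step that would otherwise raise the exception
exception_on ("srv_down");         # re-enable; exceptions are ON by default
```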
Difference between WinRunner and Rational Robot:

1. Developed by — WinRunner: Mercury Interactive. Robot: Rational.
2. Records operations in — WinRunner: Test Script Language (TSL). Robot: Rational Basic (RB).
3. Recording language like — WinRunner: C. Robot: VB.
4. Learning — WinRunner: auto learning and pre learning. Robot: implicit learning (recognizes objects based on MSW_id).
5. Recording — WinRunner: Context Sensitive and Analog modes. Robot: Object Orientation and Low Level modes (Record menu, Turn to Other Mode, continue recording). Note: in low level recording Robot records the mouse pointer movements along with time.
6. GUI checkpoint — WinRunner: for single property, for object/window, for multiple objects. Robot: Insert, TestCase (checkpoint), Object Properties, select testable object, specify expected values for the required properties, click OK, save checkpoint. Robot allows one checkpoint for one object.
7. Bitmap checkpoint — WinRunner: for object/window, for screen area. Robot: Insert, TestCase, Window/Region Image (for a screen area, mark the region), save checkpoint.
8. Database checkpoint — WinRunner: default, custom, runtime record checkpoint. Robot: not applicable.
9. Text checkpoint — WinRunner: Get Text from object/window, from screen area. Robot: Insert, TestCase, Object Data (List, Table, Data Window and ActiveX), with test types such as Alpha (textbox), Numeric (textbox) and Clipboard (copied content), save checkpoint.
10. Window existence — WinRunner: win_exists("Window Name", time); Robot: Insert, TestCase, Window Existence, select testable window, click OK.
11. File comparison — WinRunner: file_compare("path of file1", "path of file2", "path of file3 to save the comparison"); Robot: Insert, TestCase, File Comparison, browse file1 & file2, click OK.
12. File existence — WinRunner: file_open("path of file1", file mode); Robot: Insert, TestCase, File Existence, browse the testable file, click OK.
13. User defined pass/fail — WinRunner: tl_step(), printf(). Robot: Insert, TestCase, Custom, specify result type (Pass, Fail, Warning, None).
14. Batch testing — WinRunner: call "TestName"(); / call "Path of Test"();. Robot: Insert, Call Test Procedure, select the required subtest. Note: Robot doesn't allow parameter passing.
15. Open project — WinRunner: invoke_application(). Robot: Start Application (for .exe files), browse the application path.
16. Synchronization — WinRunner: wait for object/window property; the wait status can be changed through runtime settings. Robot: wait(), delayfor(); no runtime settings, but 10 seconds by default.
17. Login — WinRunner: no login window. Robot: one login window.
18. Saves test names as — WinRunner: Noname1, Noname2, …. Robot: Test1, Test2, ….

Silk Test 5.0
• Developed by Segue
• Functionality testing tool like WinRunner, Rational Robot and QTP
• Supports C/S and web technologies
• Records our business operations in 4TL (Four Test Language), which is like Java
• Follows a single thread of process: learning, recording, checkpoints and edit script are not separate — all are done at a time

Navigation: Start, Programs, Silk Test, File menu, click New, select a new test frame or an existing test frame, click OK, browse the application path, click OK, read the suggestions for recording, click OK, record our business operations (set the mouse pointer on a required object and press Ctrl + Alt to create a checkpoint — Property or Bitmap), click Done to stop recording, set the application base state, open the required windows manually if needed, click Run Test, analyze the results manually (red color – not working, black color – working).

URLs testing: Enter the base URL and specify the depth to walk; Silk Test then walks and tests the links up to that depth.

QTP (Quick Test Professional): Some of the professionals also call it Quick Test Pro. The present version is QTP 6.5. QTP supports the ERP and .Net technologies, which WinRunner does not support.
In WinRunner, entries are maintained in two ways: a global GUI Map file and per test mode.

• Global GUI Map file — Advantage: the entries can be used in more than one test. Drawback: it does not provide auto save and open.
• Per Test Mode — Advantage: it provides auto save and open. Disadvantage: the entries cannot be used in more than one test.

QTP
• Developed by Mercury Interactive; derived from WinRunner.
• Records our business operations in VBScript.
• Supports Client/Server, Web applications, ERP and multimedia technologies (Maya, Flash … dynamic images) for functionality testing.

Learning: Automation starts with learning. During recording QTP creates recognition entries for objects and windows; like WinRunner, QTP supports auto learning only. QTP maintains the entries in the Object Repository (Path: Tools -> Object Repository). [A repository is a folder or directory; it is created by the user and saved by the system.] This repository maintains auto save and open while the entries can still be used in more than one test — in WinRunner this combination takes more time, whereas in QTP it is the same process with small navigations. Every entry consists of a logical name and a physical description.

Like WinRunner, you can change the entries in six ways:

1. Wild Card Characters
2. Regular Expressions
3. Virtual Object Wizard
4. Mapped to Standard Class
5. GUI Map Configuration (Object Identification)
6. Selective Recording

1. Wild Card Characters: can be used to organize entries in QTP, like WinRunner, using ! and *.
2. Regular Expressions: QTP supports regular expressions, like WinRunner.
3. Virtual Object Wizard: used when an object is not recognized by the tool.
4. Mapped to Standard Class: used when an object is recognized but the required properties are not coming for that object; map the object to a matching standard class and get the required properties.
5. GUI Map Configuration (Object Identification): Sometimes two objects have the same logical and physical names. To differentiate one object from the other, WinRunner internally uses the MSW_id; in QTP we have to follow the path below.

Navigation: Tools -> Object Identification -> select object type -> select distinguishable properties into mandatory and assistive -> click OK.

6. Selective Recording: In WinRunner, if you have more than one application on the desktop at the time of recording, it may also record the details of the unnecessary applications into the TSL script if you did not specify exactly which application you need; for these situations you specify it explicitly under Settings -> General Options. In QTP, when you click Start Recording it asks whether you want to do selective recording or not (the option appears when you click Record). If you choose selective recording, it displays a window in which you have to choose the application and the working directories.

Recording: QTP records our business operations in VBScript. In WinRunner two modes are available, Context Sensitive and Analog; in QTP three modes are available: General, Analog and Low Level. By default the tool starts recording in General mode. If you want to record mouse pointer movements, we can use Test menu -> Analog / Low Level Recording.

Checkpoints: To conduct functionality testing on different technology applications, QTP provides the checkpoints below.

1. Standard Checkpoint: To test the behavior and input domains of objects we can use this checkpoint. It allows one object at a time. In QTP, for one property you can give two kinds of expected values: a constant expected value or a parameter expected value.

Navigation: select position in script -> Insert menu -> Checkpoint -> Standard Checkpoint -> select testable object -> click OK after confirmation -> select required properties with expected values -> click OK.

2. Bitmap Checkpoint: QTP supports static and dynamic images to compare. The maximum timeout for picture elements is 10 seconds.

3. Database Checkpoint:
QTP provides a backend testing facility through the database checkpoint, like the WinRunner default check.

Navigation: Insert -> Checkpoint -> Database Checkpoint -> specify SQL statement -> click Create to select the DSN -> write the select statement -> click Finish.

4. Text Checkpoint: To capture object values into variables we can use this option.

5. TextArea Checkpoint: To capture static text from screens we can use this option.

Data Driven Testing: Like WinRunner, QTP supports retesting with multiple test data. There are three possibilities: dynamic test data, front-end grids, and Excel sheets.

1. Dynamic Test Data: In WinRunner we use create_input_dialog("Dialog Message : ") for dynamic test data; in QTP we use Var = inputbox("Message"). VBScript supports variable declaration and the inputbox() function to read data from the user.

2. From Front-End Grids: Depends on listbox, menu, table, data window and ActiveX objects.
Navigation: Insert -> Step -> Method -> select required object -> click OK after confirmation -> click Next -> enter arguments -> click Next.

3. Excel Sheet: Create a test script for one input -> insert test data into Excel sheet columns -> Tools menu -> Data Driver -> select the position to use or replace with Excel sheet columns -> click Parameterize -> click Next -> select the required column name -> click Finish.

Batch Testing: Like WinRunner, QTP also allows batch testing. To form batches QTP supports WinRunner tests also. Batch testing can be done in two ways:

1. QTP Test to QTP Test: Insert -> Call to Action -> browse subtest -> specify parameter data using Excel sheet columns -> click OK.
2. QTP Test to WinRunner Test: Insert -> Call to WinRunner Test -> browse the path of the test -> click OK.

Note: QTP supports WinRunner 7.0 and higher versions only, because QTP supports auto learning and auto learning is possible from WinRunner 7.0 onwards.
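On the WinRunner side, the dynamic-test-data idea looks like the sketch below in TSL. The window and field names are assumed examples:

```tsl
# Hedged sketch: dynamic test data through an input dialog.
x = create_input_dialog ("Enter an order number");  # prompts the tester at run time
set_window ("Flight Reservation", 5);               # assumed window name
edit_set ("Order No:", x);                          # assumed edit-field label
```

The same script re-run with different dialog inputs gives the retesting-with-multiple-testdata effect without editing the script.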
Synchronization Points: To define the time mapping between QTP and the project we can follow the navigation below.

Navigation: Insert -> Step -> Synchronization Point [this is exactly equal to "wait for object/window property" in WinRunner] -> select indicator object -> click OK after confirmation -> specify the expected property with its value -> specify the maximum time to wait -> click OK.

Recovery Scenario Manager: This concept is equal to exception handling in WinRunner. Through this concept, QTP recovers from expected scenarios with the required handler.

Navigation: Tools -> Recovery Scenario Manager -> click New -> click Next -> select trigger type (pop up, object state, test run error, application crash) -> define the situation with the handler -> click OK.

Extra Features in QTP:
• Faster than WinRunner to create a test.
• Supports .Net, SAP, PeopleSoft, Oracle Applications, Multimedia and XML, as extras over WinRunner.
• Records business operations in VBScript.

Test Director 6.0
• Developed by Mercury Interactive
• Test management tool
• Works as a client/server application, with a Project Administrator part and a Test Director part
• Databases supported: MS-Access, SQL Server, Oracle

Project Administrator: This part is used by the test lead to create new database areas, to store new projects' testing documents, and to estimate the test status of an ongoing project. For each project's database, Test Director maintains tables and views.

Create Database: Start, Programs, TD 6.0, Project Administrator, login by test lead, New Project, specify the location of the database (Private, Common), click Create, click OK.

Estimate Test Status: Start, Programs, TD 6.0, Project Administrator, login by test lead, select the project name in the list, click the extension symbol in front of the project name, click Connect, select the required table in the list, extend the query if required, click Run SQL, and analyze the results manually to estimate the test status.
Test Director: This part is used by the test engineer to store the corresponding test documents into the corresponding database. The test engineer logs in and works with three parts:

• Plan Tests
• Run Tests
• Track Defects

Plan Tests: While writing test cases for the responsible modules, test engineers use this part to store their test cases into the database for future reference.

Create Subject: Plan Tests, click Folder New, enter the subject name (enter the responsible module name as the subject), click OK.
Create Sub Subject: Plan Tests, select the subject name, click Folder New, enter the sub subject name, click OK.
Create Test Case: Plan Tests, select the subject / sub subject name, click Test New, select the test type, enter the test name, click OK.

Details: After creation of the test case, the test engineer maintains the details below for it: Test Suite ID, Test Case ID, Test Environment, Test Setup, Test Duration, Test Effort, Priority and Test Case Pass/Fail Criteria.

Design Steps: After typing the required details for the test case, we can prepare a step-by-step procedure to execute it: click New, enter the step description with the expected result, click New to create more steps, click Close.

Attachments: To maintain extra information for test cases the test engineer uses this part; it is optional. Click File/Web, browse the required file path to attach.

Test Script: For automation test scripts, Test Director provides a Launch button to open WinRunner: click Launch, set the application base state for that test, record the required navigation, insert the required checkpoints, click Stop Recording, click Save.

Run Tests: After receiving a stable build from the development team, the test engineer concentrates on test execution. Test Director provides a facility to create an automated test log during test case execution.

Create Batch: Run Tests, click Test Set Builder, enter the suite ID, click OK, select the required tests and add them into the batch, click Close.
Execute Automated Test: Select the automated test in the batch, click Run, set the application in its base state, click OK. After execution: Tools menu, Test Results, browse the executed test, analyze the results manually, change the test status to Passed / Failed depending on the results analysis, close WinRunner.

Manual Test Execution: Select the manual testing batch, click Run, set the application in its base state as per that test, run every step manually, specify the status for every step, click Close after execution of the last step.

Track Defects: During test execution, the test engineer uses this part to report defects to the development team: click Add, fill the fields in the defect report, click Create. To send it: File menu, Mail, enter the To mail ID, click OK.

Test Director Icons:

Filter: To select required tests or defects from an existing list we can use the filters concept. Navigation: click the Filter icon, select the required field, specify the filter condition, click OK.

Sort: To arrange tests or defects in a specified order in a list we can use this icon. Navigation: click the Sort icon, specify the sort direction (Ascending / Descending), click OK.

Report: To create printouts (hard copies for defects) we can use this icon. Navigation: click the Report icon, specify the report type, specify the printout type (info or table), click "print per every page", click OK.

Columns: We can use this icon to select specific columns in the display list. Navigation: click the Columns icon, select the required columns into the visible list, click OK.

Test Grid: This option provides a list of all test cases, under all subjects and sub subjects, in a single window.

Quick Test Professional
• Developed by Mercury Interactive
• Also known as Quick Test Pro
• Functionality testing tool like WinRunner; an extension of WinRunner
• Supports C/S and web technologies
• Supports Client/Server, Web applications, ERP and multimedia technologies (Maya, Flash … dynamic images) for functionality testing
• Records our business operations in VBScript
• Supports launching of WinRunner to execute TSL scripts

Difference between WinRunner and Quick Test Professional:

1. Developed by — WinRunner: Mercury Interactive. QTP: Mercury Interactive.
2. Records operations in — WinRunner: Test Script Language (TSL). QTP: VBScript in expert view, and hierarchical steps in tree view.
3. Recording language like — WinRunner: C. QTP: VB.
4. Learning — WinRunner: supports auto learning and pre learning to recognize the objects and windows in your application. QTP: supports only auto learning.
5. Entry maintenance location — WinRunner: maintains the recognized entries in the GUI Map; to edit the entries follow Tools, GUI Map Editor. QTP: maintains the entries in the Object Repository; to edit them follow Tools, Object Repository.
6. Types of entry maintenance — WinRunner: a global GUI Map file or per test mode, to maintain the entries long-term. QTP: global entries with auto save and auto open in the Object Repository.
7. Wild Card Characters — WinRunner: uses ! and * in recognition entries when window labels vary with respect to the input. QTP: the same.
8. Regular Expressions — WinRunner: uses regular expression entries when object labels vary. QTP: the same.
9. GUI Map Configuration — WinRunner: when more than one object has the same physical description (Tools, GUI Map Configuration, select object type, click Configure, specify MSW_id as optional, click OK). QTP: Tools, Object Identification, select object type, specify MSW_id as an assistive property, click OK.
10. Mapped to Standard Class — WinRunner: when WinRunner does not return all testable properties of an object (Tools, GUI Map Configuration, select the non-testable object, click Configure, select the mapped-to class, click OK). QTP: Tools, Object Identification, select the non-testable object, map it to a standard class, click OK.
11. Virtual Object Wizard — WinRunner: Tools, Virtual Object Wizard, when an object is not recognized by WinRunner. QTP: Tools, Virtual Objects, New Virtual Object, when an object is not recognized by QTP.

Selective Recording
WinRunner: Settings > General Options > Record tab > click Selective Recording. Used when we want to record our business operations on specific applications only.
QTP: File menu > New Test > click Start Recording; the selective recording (Record and Run Settings) window appears.

Recording Modes
WinRunner: allows two types of modes, Context Sensitive and Analog. Context Sensitive is the default mode, and F2 is the shortcut key to change from one mode to the other.
QTP: allows three types of modes, General, Analog and Low-Level. General mode is the default. In low-level recording QTP records mouse-pointer movements on the desktop, along with timing, as extra information. Shortcuts: Start Recording F3, Low-Level Recording Ctrl+Shift+F3, Analog Recording Ctrl+Shift+F4.

GUI / Standard Checkpoint
WinRunner: GUI checkpoints for a single property, for an object/window, or for multiple objects. Example TSL:
  x = create_input_dialog("xx");
  button_check_info("OK", "enabled", x);
QTP: select the position in the script > Insert > Checkpoint > Standard Checkpoint > select the testable object > click OK after confirmation > select the required properties with their expected values > click OK.
Note 1: WinRunner checkpoints allow only constant values as expected values, but QTP checkpoints allow constant and parameterized values as expected values (for example, expected values taken from an Excel column).
Note 2: a QTP standard checkpoint allows one object at a time to test.

Bitmap Checkpoint
WinRunner: for an object/window or for a screen area; WinRunner supports static images only.
QTP: Insert > Checkpoint > Bitmap Checkpoint > select the testable image (static or dynamic) > click OK after confirmation > click Select Area if required > click OK.
Note 1: QTP supports static and dynamic images for comparison when you select the Multimedia option in the Add-in Manager.
Note 2: QTP supports dynamic-image playback of up to 10 seconds as a maximum.

Database Checkpoint
WinRunner: Default, Custom and Runtime Record checkpoints.
QTP: Insert > Checkpoint > Database Checkpoint (like the WinRunner default checkpoint) > specify the SQL statement > click Create to select the DSN > write the SELECT statement > click Finish.
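Whichever tool drives it, a database checkpoint boils down to running the recorded SELECT statement and comparing the result set with the expected content. A minimal sketch with Python's sqlite3 (the table and data are invented for illustration):

```python
import sqlite3

# Hypothetical application database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "OPEN"), (2, "SHIPPED")])

def database_checkpoint(connection, sql, expected_rows):
    """Run the checkpoint's SELECT and compare against expected content."""
    actual = connection.execute(sql).fetchall()
    return actual == expected_rows

print(database_checkpoint(conn,
                          "SELECT id, status FROM orders ORDER BY id",
                          [(1, "OPEN"), (2, "SHIPPED")]))  # True
```

The tools capture the expected rows at record time; at run time the same query is re-executed and compared, exactly as this sketch does.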

Note: QTP supports database testing with respect to database content.

Text Checkpoint
WinRunner: Get Text from an object/window, from a screen area, or from a selection (web only). Functions generated by a text checkpoint:
  obj_get_text("object name", variable);
  obj_get_text("object name", variable, x1, y1, x2, y2);
  web_obj_get_text("object name", "#row no", "#column no", variable, "text before", "text after", time);
  web_frame_get_text("frame name", variable, "text before", "text after", time);
QTP: Insert > Checkpoint > Text Checkpoint and Text Area Checkpoint. Example VBScript:
  Option Explicit
  Dim vname
  vname = Window("window name").WinEdit("object name").GetVisibleText()
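The "text before" / "text after" arguments of web_obj_get_text and web_frame_get_text describe a simple extraction rule: return whatever sits between two anchor strings. A minimal Python equivalent of that rule (the page text is invented for illustration):

```python
def get_text_between(source, before, after):
    """Return the text found between `before` and `after`, or None."""
    start = source.find(before)
    if start == -1:
        return None
    start += len(before)
    end = source.find(after, start)
    if end == -1:
        return None
    return source[start:end]

page = "Welcome back, Eshwar! You have 3 new messages."
print(get_text_between(page, "Welcome back, ", "!"))  # Eshwar
```

Anchoring on surrounding text keeps the check stable even when the extracted value itself changes between runs.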


Data-Driven Testing (DDT) Methods
WinRunner: DDT/retesting can be done in 4 ways: dynamic test-data submission, through a flat file (Notepad), from front-end grids (list boxes), and through an Excel sheet.
QTP: DDT/retesting can be done in 3 ways: dynamic test-data submission, from front-end grids (list boxes), and through an Excel sheet. Flat-file-driven testing is not applicable in QTP.

Dynamic test-data submission
WinRunner (TSL):
  n = create_input_dialog("Message");
  for (i = 1; i <= n; i++)
  {
  }
QTP (VBScript):
  Option Explicit
  Dim vname
  vname = InputBox("Message")
  For i = 1 To n Step 1
  Next

Through a flat file
WinRunner: uses the file functions file_open(), file_getline(), file_compare(), file_printf() and file_close().
QTP: not applicable.

From front-end grids
Both tools: list, menu, ActiveX, label and data-window objects.

Through an Excel sheet
WinRunner: Tools > Data Driver Wizard. Create the test script for one input > insert the test data into Excel-sheet columns > Tools menu > Data Driver > select the positions to use or replace with Excel-sheet columns > click Parameterize > click Next > select the required column name > click Finish.
QTP: test data is maintained in the built-in Data Table (an Excel-style sheet) whose columns can be used as parameters.

Searching for required functions
WinRunner: Create menu > Function Generator.
QTP: if you want to search for any VBScript function, follow: Insert > Step > select the required object > click OK after confirmation > click Next > enter the arguments > click Next.

Batch Testing
WinRunner:
  call "TestName"();
or
  call "Path of Test"();
QTP: to form batches, QTP supports WinRunner tests also. Batch testing can be done in 2 ways:
1. QTP test to QTP test: Insert > Call to Action > browse the sub-test > specify the parameter data using Excel-sheet columns > click OK.
2. QTP test to WinRunner test: Insert > Call to WinRunner Test > browse the path of the test > click OK.
Note: QTP supports WinRunner 7.0 and higher versions only, because QTP requires auto learning, and auto learning is possible from WinRunner 7.0 onwards.

User-Defined Functions / Actions
WinRunner: repeatable navigations in the application are recorded as functions; to make them permanent we can use the compiled-module concept.
QTP: repeatable navigations in the application are recorded as actions, to create one reusable action. Navigation: Insert > New Action > enter the action name with a description > select Reusable Action > click OK > record the repeatable navigation in your application.
Note: to call that reusable action in a required test, we can use Insert > Call to Action.

Synchronization Point
WinRunner: wait statements, changing the runtime settings, or synchronization for an object/window property, for an object/window bitmap, or for a screen area.
QTP: Insert > Step > Synchronization Point (this is exactly equal to "for object/window property" in WinRunner) > select the indicator object > click OK after confirmation > specify the expected property with its value > specify the maximum time to wait > click OK.

Exception Handling
WinRunner: TSL, Pop-up, Object and Web (web only) exception handling.
QTP: Recovery Scenarios. Tools > Recovery Scenario Manager > click New > click Next > select the trigger type (pop-up window, object state, test run error, application crash) > define the situation with a handler > browse the reusable action for recovery > click Finish.

Technology Supported
WinRunner: does not support .NET, XML, SAP, PeopleSoft, Oracle Applications or multimedia objects for testing.
QTP: supports .NET, XML, SAP, PeopleSoft, Oracle Applications and multimedia objects for testing.
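The data-driven methods above, whichever tool drives them, share one pattern: read one row of test data at a time and run the same steps with it. A small Python sketch of that loop using flat CSV data (the field names, values and login function are invented for illustration):

```python
import csv
import io

# Hypothetical test data, as it might sit in an Excel sheet or flat file.
TEST_DATA = """username,password,expected
valid_user,secret123,login_ok
valid_user,wrong_pw,login_failed
,secret123,login_failed
"""

def login(username, password):
    """Stand-in for the business operation under test."""
    if username == "valid_user" and password == "secret123":
        return "login_ok"
    return "login_failed"

def run_data_driven():
    """Drive the same test steps once per data row."""
    results = []
    for row in csv.DictReader(io.StringIO(TEST_DATA)):
        outcome = login(row["username"], row["password"])
        results.append(outcome == row["expected"])
    return results

print(run_data_driven())  # [True, True, True]
```

Adding a new test condition then means adding a data row, not another script.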

Quality Assurance vs. Quality Control

Quality Assurance (QA)
Mainly responsible for the prevention of defects. Identifies efficient life-cycle models, processes, methodologies etc. according to quality standards. Reviews the reports and documents that are prepared by the QC team or the whole project team. The major concern is the process being implemented: are we following the right method for developing or not? Corresponds to verification.

Quality Control (QC)
Responsible for the detection of defects. Responsible for implementation of the life cycles, documents, methodologies etc. for the testing of the application, according to the standards or guidelines given by the QA team. Prepares the reports. The major concern is the product being developed: is the product properly done or not? Corresponds to validation.

ISO (International Organization for Standardization)
ISO certification is given to all kinds of companies; CMM is given only to software companies, and 6-Sigma is for all companies. Whether it is a hotel or a software company, it can get ISO 9001; only by verifying the scope of the certificate can we confirm what type of company it is. The name ISO comes from the Greek word "isos", meaning equal: it is equal for everyone in the world. In the year 1947, non-governmental organizations joined together and formed ISO. There are 145 member countries in ISO; India is among them, along with the USA and others. If you implement the 20 clauses (8 sections), you will get ISO certification.

Whenever you want to get certifications, first you have to follow certain guidelines. The ISO 9000 family:
  9000 – Guidelines.
  9001, 9002, 9003, 9004 – Certifications:
    9001 – For companies doing design, development, testing and inspection.
    9002 – All remaining activities except design (such companies are called production companies).
    9003 – Testing and inspection only.
    9004 – Continuous improvement.
Nowadays there are no 9002 and 9003; only 9000, 9001 and 9004 are given. The standard is written as 9001:2000, where 2000 is the year or version. A new version is released roughly every six years; the latest version is 2000, and we can expect the next version in 2007.

How to get certification:
  BVQI – Bureau Veritas Quality International (USA-based company, branch in Hyderabad)
  ICL – International Certification Limited (USA-based company, branch in Secunderabad)
  STQC – Standardisation Testing and Quality Certification

If you want to get certification, first approach any one of the above companies; they will tell you to implement the 20 clauses. If you do not know how to implement the 20 clauses, they conduct training: a 3-month External Auditor course through the company, or an Internal Auditor course for about Rs 25,000, which they conduct within 4-5 days. Next they will come to audit, and finally they certify.

Whatever work you are doing, you have to prepare documents:
  Procedure Manual – prepare the procedures and distribute them to all departments, informing them to implement the procedures to get the certification.
  Check List – what the requirements are.
  Procedure – work based on the 20 clauses.
  Format – the structure in which the work is recorded; the auditors visit all the departments and prepare this.
  NCR – Non-Conformance Report.

The reasons for documentation are:
  1. Future reference.
  2. Employees may leave the organization.

The difference between an External (Lead) Auditor and an Internal Auditor is that the former can work in two or three companies in a day, while the latter works in only one company. Generally an auditor should have 10+ years of experience and 5 cycles of implementation.

Types of certification audits:
  1. External Audit
  2. Surveillance Audit
  3. Recertification

1. External Audit: for renewals, every 3 years.
2. Surveillance Audit: every 6 months they come and check, but they inform you before coming. They will issue one NCR if you did not follow the procedures, and they audit the same issue again after 3 months. If they give 3 or 4 NCRs, they finally cancel the certification.
3. Recertification: if they cancel the certification, you go for recertification.

SEI-CMM (Software Engineering Institute – Capability Maturity Model)
This is given to software companies only. In the year 1987, Mark Paulk and Bill Curtis, working as faculty at Carnegie Mellon University (Pittsburgh, USA), observed that under ISO software organizations do not get any special treatment, so they formed the SEI and released the CMM; they released CMM version 1.0 from the SEI. Appraisals are also conducted by firms such as KPMG.

There are five levels in CMM (levels 1 to 5). Each level has a number of processes, and each process is called a KPA (Key Process Area); for example, level 2 has processes such as project management. If an organization implements all the KPAs of a level, then based on them it is given that level. The levels:
  1. Initial (Ad hoc)
  2. Repeatable (Project Management)
  3. Defined (Software Change Management)
  4. Managed (Quality Management)
  5. Optimized (Hi-tech Change)
The progression runs from ad hoc, to disciplined, to change-managed, to predictable, to continuously improving. There are two types of companies: disciplined/matured companies and indisciplined/immature companies.

In CMM, auditors are called assessors. Anybody can become an assessor, but you have to attend training classes in Chennai or Mumbai; institutes conduct this course. There are different CMMs, such as SEI-CMM (also called the Software CMM), PCMM and CMMI (CMM for Integration). Infosys was assessed at level 4 in Dec 1997 and at level 5 in Dec 1999.

PCMM (People CMM)
This mainly deals with HR principles; for selecting and recruiting people it provides a structure. It also has 5 levels. This is for software organizations.

CMMI (CMM for Integration)
CMMI is the latest model and most companies are trying to get it. It integrates the Software CMM, systems-engineering principles and the IPD-CMM (Integrated Product Development CMM). A small company can get up to ISO and CMM level 3, PCMM level 3 and CMMI.

6 σ (Six Sigma)
This is given to all companies. The name is derived from the Greek letter σ, which means standard deviation. 6 σ is a metric that expresses process variation in standard deviations: the greater the number before σ, the fewer the defects, and the more the quality and customer satisfaction. If it is 5 σ, the errors may be 265 in 1 million LOC; if it is 6 σ, the errors may be about 3 in 1 million LOC.

  DMAIC – Define, Measure, Analyze, Improve, Control.
  DFSS – Design for Six Sigma.
  PPM – Parts Per Million (defects).
Generally a company first does DMAIC and next goes for 6 σ.

In 6 σ you will be given belts: Champion (the owner of the company), Master Black Belt, Black Belt, Green Belt, Orange Belt and White Belt. A Black Belt holder will train the Green Belt holders. 6 σ companies include Motorola, Satyam, Wipro, TCS etc.; the first company in Hyderabad to get it was GE. ISO, CMM and 6 σ are all for customer satisfaction.
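The sigma figures above are defect rates per million opportunities (the classic Six Sigma tables, which include a 1.5-sigma shift, quote 3.4 per million at 6 σ). Taking the text's own numbers at face value, scaling them to a code base is simple arithmetic:

```python
# Defects per million LOC as quoted in the text above (6 sigma is
# conventionally 3.4 per million; the text rounds it to 3).
DPMO = {5: 265, 6: 3}   # sigma level -> defects per 1,000,000 LOC

def expected_defects(sigma_level, lines_of_code):
    """Scale the quoted per-million defect rate to a given code size."""
    return DPMO[sigma_level] * lines_of_code / 1_000_000

# A 10-million-line system at 5 sigma vs. 6 sigma:
print(expected_defects(5, 10_000_000))  # 2650.0
print(expected_defects(6, 10_000_000))  # 30.0
```

The point of the metric is exactly this comparison: one extra sigma cuts the expected defect count by roughly two orders of magnitude.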

CMM Levels
What is CMM: CMM defines how software organizations mature, or improve in their ability to develop software. This model was developed by the SEI of Carnegie Mellon University in the late 80s.
Why CMM: CMM is a software-specific model. It describes how software organizations can take the path of continuing improvement, which is so necessary in this highly competitive world. "Keep improving" is the CMM mantra. As we move from level 1 to level 5, project risk decreases and quality and productivity increase.

Level 1: Initial or Ad hoc. The software process is characterized as ad hoc, and occasionally even chaotic. Few processes are defined, and success depends on individual effort. Level 1 is the immature state; there are no KPAs in this level.
Highlights of this level:
  The processes within this level are highly unstable and unpredictable.
  The projects are purely person dependent: performance depends on the capabilities of the individuals rather than on organizational capability, and when the persons involved leave the project or the company, things come to a halt.
  There is no objective basis for judging product quality or for solving product or process problems; therefore product quality is difficult to predict.
  Activities intended to enhance quality, such as reviews and testing, are often curtailed or eliminated when projects fall behind schedule.

Level 2: Repeatable. "Repeatable", as the word reveals, means that the processes employed in the project are repeatable: the necessary process discipline is in place to repeat earlier success on projects with similar applications, using best practices from past projects. Basic project management principles are established to track cost, schedule and functionality. There are 6 KPAs in this level; the KPAs at this level look at project planning and execution. (A KPA can be compared to a clause in the ISO standards.)
Highlights of this level:
  Realistic project commitments are based on the results observed on previous projects and on the requirements of the current project.
  Projects in these organizations have installed basic software management controls, following realistic plans based on the performance of previous projects.
  The project managers track software costs, schedules and functionality; problems in meeting commitments are identified when they arise.
  The project's process is under the effective control of a project management system.

Level 3: Defined. There are 7 KPAs in this level; organizational process is the focus area.
Level 4: Managed. There are 3 KPAs in this level; the focus is understanding of data.
Level 5: Optimizing. There are 2 KPAs in this level; the focus is continual improvement.

The KPAs of level 2:

Requirements Management: to establish a common understanding between the customer and the project team.

It involves establishing and maintaining an agreement with the customer on the requirements for the software project. Goal: software plans, commitments and activities are kept consistent with the system requirements allocated to software.

Software Project Planning: this involves establishing reasonable plans for performing the software engineering and for managing the software project. Software project planning involves developing estimates for the work to be performed, establishing the necessary commitments, and defining the plan to perform the work. Goal: software estimates are documented for use in planning and tracking the software project.

Software Project Tracking: to provide adequate visibility into actual progress so that management can take effective actions when the software project's performance deviates significantly from the software plans. Software project tracking and oversight involves tracking and reviewing the software accomplishments and results against documented estimates, commitments and plans, and adjusting these plans based on the actual accomplishments and results. A documented project plan is used for tracking. Goal: actual results and performances are tracked against the software plans.

Software Subcontract Management: the purpose of software subcontract management is to select qualified software subcontractors and manage them effectively.

Software Quality Assurance: the purpose of Software Quality Assurance is to provide management with appropriate visibility into the process being used by the software project and of the products being built. Software Quality Assurance involves reviewing and auditing the software products and activities to verify that they comply with the applicable procedures and standards, and providing the software project and other appropriate managers with the results of these reviews and audits. Goal: Software Quality Assurance activities are planned.

Software Configuration Management: the purpose of Software Configuration Management is to establish and maintain the integrity of the products of the software project throughout the project's software life cycle. A software baseline library is established containing the software baselines as they are developed. Changes to baselines, and the release of software products built from the software baseline library, are systematically controlled via the change-control and configuration-auditing functions of Software Configuration Management. Goal: Software Configuration Management activities are planned; selected work products are identified and controlled; changes to work products are controlled.

Level 3: Defined. Level 2 concentrated on project-level processes; level 3 looks from the organizational viewpoint. There are 7 KPAs in this level. The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization (e.g., the Software Configuration Management process). All projects use approved and tailored versions of the organization's standard software process for developing and maintaining software.

Organization Process Focus: the purpose is to establish the organizational responsibility for software process activities that improve the organization's overall software process capability. To do an effective job of identifying and using the best practices, organizations must establish a group with that responsibility and build a plan for how the organization will improve its process; such a plan should include periodic assessments of the organization's process maturity, leading to plans for improvement in capability. Information related to the use of the process by projects is collected and reviewed. The important goal of this KPA is that software process development and improvement activities are coordinated across the organization.

Organization Process Definition: the purpose of this KPA is to provide a usable set of software process assets that improve process performance across projects. This involves developing and maintaining the organization's standard software process, along with related process assets; one of the goals of the KPA is to have a standard software process for the organization. Descriptions of software life cycles that are approved for use by the projects are documented and maintained. The organization's software process database is established and maintained: data and information from projects is regularly and systematically collected and organized so that it can be reused by other projects. This process engineering is done by the SEPG, which looks out for the interests of every project in the organization.

Training Program: the purpose of this KPA is to develop the skills and knowledge of individuals so they can perform their roles effectively and efficiently. The Training Program involves first identifying the training needed by the organization, projects and individuals, then developing or procuring training to address the identified needs. Each software project evaluates its current and future skill needs and determines how these skills will be obtained. Some skills are effectively and efficiently imparted through informal methods, whereas other skills need more formal training methods to be effectively and efficiently imparted.

Integrated Software Management: the purpose of Integrated Software Management is to integrate the software engineering and management activities into a coherent, defined software process that is tailored from the organization's standard software process.

Software Product Engineering: the purpose of Software Product Engineering is to consistently perform a well-defined engineering process that integrates all the software engineering activities to produce correct, consistent software products effectively and efficiently. Software Product Engineering involves performing the engineering tasks to build and maintain the software using the project's defined software process and appropriate methods and tools.

Level 4: Managed. There are 3 KPAs in this level; the focus is understanding of data.
Level 5: Optimizing. There are 2 KPAs in this level; the focus here is continual improvement.

Software Testing: 10 Rules
1. Understand the business reason behind the application. You'll write a better application and better testing scripts.
2. Integrate the application development and testing life cycles. You'll get better results and you won't have to mediate between two armed camps in your IT shop.
3. Formalize a testing methodology; you'll test everything the same way and you'll get uniform results.
4. Develop a comprehensive test plan; it forms the basis for the testing methodology.
5. Define your expected results.
6. Test early and test often.
7. Use both static and dynamic testing.
8. Use multiple levels and types of testing (regression, systems, integration, stress and load).
9. Review and inspect the work; it will lower costs.
10. Don't let your programmers check their own work; they'll miss their own errors.
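Rule 5, define your expected results, is the one most directly visible in test code: state the expected output before execution, then compare. A minimal sketch, with an invented unit under test:

```python
def discount(order_total):
    """Example unit under test: 10% discount on orders of 100 or more."""
    return order_total * 9 // 10 if order_total >= 100 else order_total

# Expected results are defined up front, before the code runs; the
# boundary values 99 and 100 exercise both sides of the cutoff.
CASES = [(99, 99), (100, 90), (150, 135)]

for amount, expected in CASES:
    assert discount(amount) == expected, (amount, expected)
print("all expected results matched")
```

Without the expected column, a test can only show that the code ran, not that it ran correctly.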

Configuration Management

What is configuration management?
Our systems are made up of a number of items (or things). During the lifetime of the system many of these items will change, for a number of reasons: fault fixes, new features, environment changes, etc. We may need different modules depending on the environments they run under (such as Windows NT and Windows 2000), such that version A contains modules 1, 2, 3, 4 and 5 and version B contains modules 1, 2, 3, 6 and 7. We might also have different items for different customers. Configuration Management is all about effective and efficient management and control of these items. An indication of a good configuration management system is to ask ourselves whether we can go back two releases of our software and perform some specific tests with relative ease.

Problems resulting from poor configuration management
Often organisations do not appreciate the need for good configuration management until they experience one or more of the problems that can occur without it. Some problems that commonly occur as a result of poor configuration management systems include:
  the inability to reproduce a fault reported by a customer;
  faults that have been fixed reappearing in a later release;
  two programmers having the same module out for update, one overwriting the other's change;
  not knowing which fixes belong to which versions of the software;
  being unable to match object code with source code;
  a fault fix to an old version needing testing urgently, but the tests have been updated.

Definition of configuration management
A good definition of configuration management is given in the ANSI/IEEE Standard 729-1983, Software Engineering Terminology. This says that configuration management is:
  "the process of identifying and defining Configuration Items in a system,
  controlling the release and change of these items throughout the system life cycle,
  recording and reporting the status of configuration items and change requests,
  and verifying the completeness and correctness of configuration items."
This definition neatly breaks down configuration management into four key areas: configuration identification, configuration control, configuration status accounting, and configuration audit.

Configuration identification
Configuration identification is the process of identifying and defining Configuration Items in a system. Configuration Items are those items that have their own version number, such that when an item is changed, a new version is created with a different version number. So configuration identification is about identifying what are to be the configuration items in a system, how these will be structured (where they will be stored in relation to each other), the

version numbering system, naming conventions, selection criteria, and baselines. A baseline is a set of different configuration items (one version of each) that has a version number itself. For example, if program X comprises modules A and B, we could define a baseline for version 1.1 of program X that comprises version 1.1 of module A and version 1.1 of module B. If module B changes, a new version (say 1.2) of module B is created. We may then have a new version of program X, say baseline 2, that comprises version 1.1 of module A and version 1.2 of module B.

Configuration control
Configuration control is about the provision and management of a controlled library containing all the configuration items. This will govern how new and updated configuration items can be submitted into and copied out of the library, who has them and for what purpose. Configuration control also determines how fault reporting and change control are handled (since fault fixes usually involve new versions of configuration items being created).

Configuration status accounting
A database holds all the information relating to the current and past states of all configuration items. For example, this would be able to tell us which configuration items are being updated, who has them and for what purpose. Status accounting enables traceability and impact analysis.

Configuration audit
Configuration auditing is the process of ensuring that all configuration management procedures have been followed, and of verifying that the current state of any and all configuration items is as it is supposed to be. We should be able to ensure that a delivered system is a complete system (i.e., all necessary configuration items have been included and extraneous items have not been included).

Configuration management in testing
Just about everything used in testing can reasonably be placed under the control of a configuration management system. That is not to say that everything should: actual test results, for example, may not be, though in some industries (e.g. pharmaceutical) it can be a legal requirement to do so.
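The program X example above can be made concrete: a baseline is just a named mapping from configuration items to versions, and a change to one module creates a new baseline rather than altering the old one. A minimal sketch:

```python
# Baseline 1 of program X records exactly one version per module.
baseline_1 = {"module_A": "1.1", "module_B": "1.1"}

# Module B is changed: a NEW version is created; the old one is kept,
# so the earlier release can still be rebuilt and retested.
baseline_2 = dict(baseline_1, module_B="1.2")

print(baseline_1)  # {'module_A': '1.1', 'module_B': '1.1'}
print(baseline_2)  # {'module_A': '1.1', 'module_B': '1.2'}
```

Because baseline_1 is untouched, "go back two releases and rerun some specific tests" stays cheap, which is exactly the health check suggested earlier in this section.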

VERIFICATION AND VALIDATION (V&V)
Verification: are we building the product right?
Validation: are we building the right product?
Verification and validation is the difference between 'how' and 'what'.

Two types of V&V:
1. Static V&V
2. Dynamic V&V

Static V&V:
1. Technical Review
2. Inspection
3. Code Walkthrough
This includes all quality reviews. Static verification corresponds to verification and validation of the products of each phase (their structure, composition, size and shape etc.) while they are static: we are doing V&V on documents, which are on paper, and each phase of the SDLC ends with a V&V activity such as a technical review. That is why it is called Static V&V. But most V&V (review) is based on human evaluation and cannot detect all errors.

Dynamic V&V: in Dynamic V&V we are testing the application in real time, with executables. That is why it is called Dynamic V&V.

SOFTWARE TESTING
Definition 1: Software testing is the process of executing a program with the intent of finding bugs.
Definition 2: Testing is a process of exercising or evaluating a system component, by manual or automated means, to verify that it satisfies a specified requirement.

The basic goal of the software development process is to produce software that has no errors. As testing is the last phase in the SDLC (Software Development Life Cycle) before the final software is delivered, it has the enormous responsibility of detecting any type of error.

Two basic approaches for software testing:
1. White Box Testing, also called structural testing or glass box testing.
2. Black Box Testing, also called functional testing.
The combination of white box and black box testing is called gray box testing.

White Box Testing
White box testing is done by the developers. Developers have to do:
1. Path testing
2. Condition testing
3. Data flow testing
4. Loop testing
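Two of the techniques just listed can be seen in one small function: condition testing exercises each decision on its true and false sides, and loop testing runs the loop zero, one and many times. A sketch, with the function invented for illustration:

```python
def total_above(values, threshold):
    """Sum the values strictly above threshold (unit under white-box test)."""
    total = 0
    for v in values:        # loop testing: 0, 1 and many iterations
        if v > threshold:   # condition testing: true and false sides
            total += v
    return total

# Loop executed zero times, once, and many times:
print(total_above([], 10))           # 0
print(total_above([15], 10))         # 15
print(total_above([5, 15, 25], 10))  # 40
```

These three inputs already cover the loop boundaries and both branches of the condition, which is the structural coverage white-box testing asks for.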

Using white box testing methods, software engineers can derive test cases that:
1. Guarantee that all independent paths within a module have been exercised at least once.
2. Exercise all logical decisions on their true and false sides.
3. Execute all loops at their boundaries and within their operational bounds.
4. Exercise internal data structures to assure their validity.

We must go for white box testing because logical errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed, and typographical errors are random. For better customer satisfaction, we have to do white box testing first, then conduct black box testing.

BLACK BOX TESTING
Black box testing focuses on the functional requirements of the software. In black box testing we check the functionality of the application; the structure of the program is not considered, and no consideration is taken of the detailed processing logic. Usually all organizations go for black box testing. In an effort to detect errors, testers attempt to find errors in the following categories:
1. Incorrect or missing functions
2. Interface errors
3. Errors in data structures
4. Performance errors
5. Initialization and termination errors

Levels of Black Box Testing
Faults occur during any phase of the SDLC. Verification is performed on the output of each phase, but some faults are likely to remain undetected by these methods; these faults reflect in the code, in addition to the faults introduced in the coding phase. Testing is usually relied on to detect these faults, and due to this, different levels of testing are used in the testing process. We will discuss black box testing in a detailed manner.

  Client's Needs         <---------> Acceptance Testing
  Requirements           <---------> System Testing

  Architecture & Design  <---------> Integration Testing
  Coding                 <---------> Unit Testing

From the service provider's point of view, the following are to be done:
1. Unit Testing
2. Integration Testing
3. System Testing

UNIT TESTING
Unit testing is the lowest level of testing: individual units of the software are tested in isolation from the other parts of the program. Different modules are tested against the specifications produced during design for those modules. Unit testing is essentially for verification of the code produced during the coding phase, and hence the goal is to test the internal logic of the modules. The module interface is tested to ensure that information properly flows into and out of the program unit under test.

In unit testing, we have to do the following checks:
1. Field-level checks
2. Field-level validation
3. User-interface checks
4. Functionality checks

Field-Level Checks
In field-level checks we check a particular field in a screen or module. Here we have to do 7 types of checks, to verify whether the field accepts:
1. Null characters
2. Unique characters
3. Length
4. Number
5. Date
6. Negative values
7. Default values
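Each of the field-level checks above can be expressed as a small predicate. A hedged Python sketch of a few of them (the exact rules, such as the date format, are invented for illustration):

```python
from datetime import datetime

def check_null(value):
    """Null check: field must not be empty or blanks only."""
    return bool(value and value.strip())

def check_length(value, max_len):
    """Length check: field must not exceed its maximum length."""
    return len(value) <= max_len

def check_number(value):
    """Number check: field must contain digits only."""
    return value.isdigit()

def check_date(value, fmt="%d/%m/%Y"):
    """Date check: field must parse in the expected format."""
    try:
        datetime.strptime(value, fmt)
        return True
    except ValueError:
        return False

print(check_null("   "))           # False
print(check_number("9848012345"))  # True
print(check_date("31/02/2011"))    # False (no 31st of February)
```

A test case for a field then reduces to feeding it a value that should fail one predicate and confirming the screen rejects it.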

Funlty chk. - Functionality check; we have to test the functionality of the screen.

For example, consider a Course Registration form that contains the following fields:

COURSE REGISTRATION FORM SCREEN
Option (Add/Modify/Delete)       Drop down combo box     Funlty chk.
Type of Course                   Drop down combo box     Funlty chk.
Registration Number              Automatic generation    Funlty chk.
Student Name                     Text field
Address                          Text field
Phone Number                     Number field
Date                             Date field
Time (Part/Full time)            Drop down combo box     Funlty chk.
Timing (7-9am/9-11am/7pm-9pm)    Drop down combo box     Funlty chk.
Student ID                       Automatic generation    Funlty chk.
Batch Code                                               Funlty chk.
Save Button                      Push button             Funlty chk.
Exit Button                      Push button             Funlty chk.

Internal Test Plan
Based on the above screen, we have to prepare an internal test plan; based on the internal test plan, we can prepare test cases. The internal test plan is mainly to reduce the number of test cases.

FC --> Functionality check.
Y  --> Have to write test cases.
N  --> Not necessary to write a test case.

Field Name       Null  Unique  Length  Number  Date  -ve  Default  Remarks
Option                                                             FC
Type of course                                                     FC
Student name     Y     N       Y       Y       N     N    N
Address          Y     N       Y       N       N     N    N
Phone number     N     N       Y       Y       N     Y    N
Date             Y     N       Y       Y       Y     Y    N
Time                                                               FC
Timing                                                             FC
Student ID                                                         FC
Batch code                                                         FC
Save button                                                        FC
Exit button                                                        FC

We have to write test cases only for the 'Y' options; it is not necessary to write test cases for the 'N' options. Based on the internal test plan, we have unit test cases for each field, as indicated below.
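The internal test plan can itself be held as data, so that the 'Y' entries which need written test cases are listed mechanically. A sketch, with the matrix values transcribed as read from the plan above (treat the exact cells as illustrative):

```python
# Internal test plan as a field -> {check: flag} matrix.
# 'Y' = write a test case, 'N' = skip (FC-only rows are omitted).
TEST_PLAN = {
    "Student name": {"Null": "Y", "Unique": "N", "Length": "Y", "Number": "Y",
                     "Date": "N", "-ve": "N", "Default": "N"},
    "Address":      {"Null": "Y", "Unique": "N", "Length": "Y", "Number": "N",
                     "Date": "N", "-ve": "N", "Default": "N"},
    "Phone number": {"Null": "N", "Unique": "N", "Length": "Y", "Number": "Y",
                     "Date": "N", "-ve": "Y", "Default": "N"},
    "Date":         {"Null": "Y", "Unique": "N", "Length": "Y", "Number": "Y",
                     "Date": "Y", "-ve": "Y", "Default": "N"},
}


def checks_to_write(plan):
    """Return the (field, check) pairs flagged 'Y' -- the only combinations
    for which test cases have to be written, per the internal test plan."""
    return [(field, check)
            for field, checks in plan.items()
            for check, flag in checks.items()
            if flag == "Y"]
```

Listing only the 'Y' pairs shows directly how the plan cuts down the number of test cases.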

Unit test cases for the 'Student name' field (the student name is a text field):

Sl no.   Test case                                                Expected result
UTC/001  Enter blank space and proceed (null check)               Should display an error message and set focus back to the student name field (because it should not accept blank).
UTC/002  Skip the field and proceed (null check)                  Should display an error message and set focus back to the student name field (because it should not accept null or blank space).
UTC/003  Enter numbers '12345' in the name field (number check)   Should display an error message and set focus back to the field (because a text field should not accept numbers).
UTC/004  Enter a name of 20 characters (length check)             Should accept and proceed (we assume 20 as the maximum limit of the student name field).
UTC/005  Enter a name of 21 characters (length check)             Should display an error message (because the maximum limit is 20 characters).

The above test cases are written based on the internal test plan: test cases are written only for the 'Y' options, not for the 'N' options. This is to reduce the number of test cases.

Field Level Validation
Here we have to check:
1. Date range check
2. Boundary value check

Date range check: If we have a date field in a screen, we have to write test cases to check it as a date field. In the date range check, we have to check whether the application accepts a date greater than the system date or not. The format is DD/MM/YYYY.

Sl no.   Test case                                                Expected result
UTC/001  Enter blank space or skip the field (null check)         Should display an error message and set focus back to the date field (because the date field should not accept blank space).
UTC/002  Enter a date in DD/MM/YYYY format (date check)           Should accept and proceed.
UTC/003  Enter a date in MM/DD/YYYY format (date check)           Should display an error message (because the format should be DD/MM/YYYY).
UTC/004  Enter a date greater than the system date (date check)   Should display an error message (because it should not accept more than the system date).
UTC/005  Enter '-23232324' and proceed (-ve check)                Should display an error message (because it should not accept negative numbers).
UTC/006  Enter a number, e.g. '1234567' (number check)            Should display an error message (because it should not accept just numbers).

Boundary value check: Here we have to check whether a particular field withstands its boundaries. For example, if a number field has a range of 0 to 99, we have to check whether the field accepts -1, 0 and 1 (i.e. <, = and > the lower boundary) and 98, 99 and 100 (i.e. <, = and > the upper boundary).

User Interface Check
In the user interface check, we have to check:
1. Short cut keys
2. Help check
3. Tab movement check
4. Arrow key check
5. Tool tip validations
6. Message box check
7. Readability of controls
8. Consistency of the user interface across the product

Here we are checking how user friendly the application is. For the user interface check, we have to write test cases such as:

Sl no.   Test case                            Expected result
UTC/001  Tab related checks                   Should move across all the fields in the screen in a sequence.
UTC/002  Press the short cut keys (Alt + K)   Should open the corresponding screen.
UTC/003  Tool tip check                       Should display the tool tip based on the selection.
UTC/004  Screen title check                   Should be visible to the user.
UTC/005  Dialog box content check             Should be clear to the user.
UTC/006  Press the arrow keys                 Should move across the fields in a sequence.
UTC/007  Scroll bar checks                    Should scroll smoothly.

Functionality Checks
Here we have to check:
1. Screen functionality, e.g. whether the combo box drop-down appears on selection, whether the details are saved when clicking the 'Save' button after entering details, and whether the current window closes when clicking the 'Exit' button.
2. Automatically generated results and computations, e.g. when entering the date of birth, the system should automatically generate the age based on the system date.
3. Field dependencies.
4. Functionality of buttons, i.e. whether we are able to ADD, MODIFY, DELETE, VIEW, SAVE, EXIT and perform the other main functions of a screen.

Let us see a sample set of test cases for functionality checks:

Sl no.   Test case                                      Expected result
UTC/001  Select the 'Add' option of the combo box      Should open a new registration form to enter the new student details.
UTC/002  Select the 'Modify' option of the combo box   Should allow the user to do the modification.
UTC/003  Select the 'Delete' option of the combo box   Should delete the current student details.
UTC/004  Select the 'View' option of the combo box     Should display the selected student details.
UTC/005  Click 'Save' and proceed                      Should save the entered details and update the data.
UTC/006  Click 'Exit' and proceed                      Should close the screen.

INTEGRATION TESTING
Many unit-tested modules are combined into subsystems, which are then tested. The goal is to see whether the modules can be integrated properly; this testing activity can be considered as testing the design. Integration testing refers to the testing in which the software units of an application are combined and tested to evaluate the interaction between them.

In integration testing we have to check the integration between the modules. Mainly we have to check:
• Data dependency between the modules
• Data transfer between the modules

Types of approaches for integration testing:
1. Big Bang approach
2. Top Down approach
3. Bottom Up approach

BIG BANG APPROACH
A type of integration testing in which the software components of an application are combined all at once into an overall system. The big bang approach is called a "non-incremental approach": all the modules are combined and integrated in advance, and the entire program is tested as a whole.
Disadvantages: Tracing down a defect is not easy. If a set of bugs is encountered, correction is difficult; if one error is corrected a new bug may appear, and the process continues.

TOP DOWN APPROACH
The program is merged and tested from top to bottom. According to this approach, every module is first unit tested in isolation from every other module. Modules are then integrated by moving downward through the control hierarchy, beginning with the main control module. After each module is tested, the modules subordinate to the main control module are incorporated into the structure, in either a depth-first or breadth-first method.

Here we have to create a 'stub': a dummy routine that simulates the behavior of a subordinate module. If a particular module is not completed or not started, we can simulate it just by developing a stub. Stubs are functionally simpler than drivers, and therefore a stub can be written with less time and labor.
Advantage: Testing is done in an environment that closely resembles reality, so the tested product is more reliable.
Disadvantage: Unit testing of the lower modules can be complicated by the complexity of the upper modules.

BOTTOM UP APPROACH
Begins construction and testing with atomic modules (i.e. the modules at the lowest levels in the program structure). The program is merged and tested from bottom to top. The terminal modules are tested in isolation first, and then the next set of higher-level modules is tested with the previously tested lower-level modules. Every module, except the top controlling module, is unit tested in isolation.

Here we have to write 'drivers'. A driver is nothing more than a program that accepts test case data, passes such data to the module to be tested, and prints the relevant results.
Advantage: Unit testing of each module can be done very thoroughly.
Disadvantage: Test drivers have to be generated for modules at all levels.

SYSTEM TESTING
Here testing is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. A complete software build is made and tested to show that all requirements are met.

TYPES OF SYSTEM TESTING

VOLUME TESTING: To find the weaknesses in the system with respect to its handling of large amounts of data (the focus is the amount of data).

STRESS TESTING: The purpose of stress testing is to test the system's capacity, i.e. whether it handles a large number of processing transactions during peak periods, within a short time period.
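A stub and a driver, as described in the top down and bottom up approaches above, can be sketched as follows. The module names and the flat 10% tax rule are invented purely for illustration:

```python
# Top down: a stub stands in for an unfinished subordinate module.
def tax_service_stub(amount):
    """Stub: dummy routine simulating the (not yet built) tax module
    with canned behaviour -- an assumed flat 10% rate."""
    return round(amount * 0.10, 2)


def billing(amount, tax_service=tax_service_stub):
    """Module under test; it calls its subordinate through an
    injectable hook, so a stub can replace the real service."""
    return amount + tax_service(amount)


# Bottom up: a driver feeds test data to a lower-level module and
# collects the results, standing in for the missing caller.
def driver():
    cases = [100.0, 250.0]
    return [(amount, billing(amount)) for amount in cases]
```

The stub needs only canned behaviour, while the driver must set up data and report results, which is why stubs are the cheaper of the two to write.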

CONCURRENCY TESTING: It is similar to stress testing; here we are checking the system's capacity to handle a large number of processing transactions in an instant (at one moment, rather than over a period).

PERFORMANCE TESTING: Performance testing can be accomplished in parallel with volume and stress testing, because system performance is assessed under all conditions. System performance is generally assessed in terms of response time and throughput rates, under different processing and configuration conditions.

SECURITY TESTING: Attempts to verify that the protection mechanisms built into a system will in fact protect it from improper penetration; the system should be protected in accordance with its importance to the organization. Here we have to check the PAIN (an e-business concept):
P - Privacy
A - Authentication of parties
I - Integrity of transactions
N - Non-repudiation

RECOVERY TESTING: Forcing the system to fail in different ways and checking how fast it recovers from the failure.

COMPATIBILITY TESTING: Checking whether the system is functionally consistent across all platforms.

REGRESSION TESTING: The re-execution of subsets of test cases that have already been executed, to ensure that changes (after a defect fix) have not propagated unintended side effects. Regression testing is the activity that helps to ensure that changes do not introduce unintended behavior or additional bugs.

SERVER TESTING: Here we have to check volume, stress and performance with respect to security levels, error trapping, data security, backup and restore testing, and data recovery testing.

WEB TESTING: In web testing we have to do compatibility testing (browser compatibility), video testing (pixel checks, testing of fonts and alignment), modem speed checks, web security testing and directory set-up. This is real-time and highly tedious testing; an automated tool is a must for web testing.

ALPHA TESTING: Alpha testing is conducted at the developer's place, by the customer, in a controlled environment. The software is tested in a natural setting, with the developer 'looking over the shoulder' of the user (i.e. the customer) and recording errors and usage problems.

ACCEPTANCE TESTING: Performed with realistic data of the client to demonstrate that the software is working satisfactorily. Testing here focuses on the external behavior of the system as a whole.

BETA TESTING: Beta testing is conducted at one or more customer sites by the end users of the software. Here the developer is not present during testing. The client tests the software or system in his place, recording defects and sending his comments to the development team.

So the above is the detailed description of system testing.

TEST PLAN
A test plan is a general document for the entire project that defines the scope, the approach to be taken, and the schedule of the intended testing activities. It identifies the test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Test planning can be done well before the actual testing commences, in parallel with the coding and design phases.

The inputs for forming the test plan are:
1. Project plan
2. Requirements specification document
3. Architecture and design document

The requirements document and the design document are the basic documents used for selecting the test units and deciding the approaches to be used during testing.

A test plan should contain:
Test unit specifications
Features to be tested
Approaches for testing
Test deliverables
Schedule
Personnel allocation

Test Unit: A test unit is a set of one or more modules, together with associated data, that are from a single computer program and that are the object of testing. A test unit may be a module, a few modules, or a complete system.

Features to be Tested: Include all software features and combinations of features that should be tested. A software feature is a software characteristic specified or implied by the requirements or design documents.

Approach for Testing: Specifies the overall approach to be followed in the current project. The technique that will be used to judge the testing effort should also be specified.

Test Deliverables: Should be specified in the test plan before the actual testing begins. Deliverables could be the test cases that were used, the detailed results of testing, and the test summary report. In general, the test case specification report, the test summary report and the test log report should be specified as deliverables.

Schedule: Specifies the amount of time and effort to be spent on the different activities of testing, and on testing the different units that have been identified.

Personnel Allocation: Identifies the persons responsible for performing the different activities.

Test Case Execution and Analysis: The steps to be performed to execute the test cases are specified in a separate document called the 'test procedure specification'. This document specifies any special requirements that exist for setting up the test environment, and describes the methods and formats for reporting the results of testing.

The outputs of test case execution are the test log report, the test summary report and the bug report:
Test Log Report: Provides a chronological record of relevant details about the execution of the test cases; it describes the details of testing.
Test Summary Report: Defines the items tested, the environment in which testing was done, and any variations from the specification observed during testing. It gives the total number of test cases executed, the number and nature of bugs found, and a summary of any metrics data.
Bug Report: Gives the summary of all errors found.

DEFECT CATEGORIES
Defects are mainly classified into two categories.

Defect Category I - Defects in capturing user requirements: A variance is something that the user wanted that is not in the built product. This category is again classified into:
1. Missing: i.e. a user requirement is not built into the product.
2. Wrong: i.e. incorrect implementation.

Defect Category II - Here defects are in 3 categories:
1. Defects from specifications: the product built varies from the product specified.
2. Variance from what the user wanted: something the user wanted, but that was also not specified in the product.
3. Extra: an unwanted requirement built into the product.

Techniques to Reduce the Number of Test Cases
Writing test cases for all possible checks is irrelevant, so we can reduce the number of test cases by avoiding some unwanted checks. To reduce the number of test cases, there are three methods to be followed:
1. Equivalence Class Partitioning (ECP)
2. Boundary Value Analysis (BVA)
3. Cause Effect Graphing (CEG)

Equivalence Class Partitioning (ECP): ECP is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived. It uncovers classes of errors, thereby reducing the total number of test cases that must be developed.

A group of tests forms an equivalence class if:
* They all test the same thing.
* If one test finds a defect, the others will.
* If one test does not find a defect, the others will not.

Tests are grouped into one equivalence class when:
* They involve the same input variables.
* They result in similar operations in the program.
* They affect the same output variables.

The process of finding equivalence classes is:
* Identify all inputs.
* Identify all outputs.
* Identify equivalence classes for each input and output.
* Ensure that the test cases test each input and output equivalence class at least once.

Guidelines for finding equivalence classes:
* Look for ranges of numbers.
* Look for membership in a group.
* Look for equivalent output events.
* Look for equivalent operating environments.

Boundary Value Analysis (BVA): BVA is a test case design technique that complements equivalence partitioning. Rather than selecting arbitrary elements of an equivalence class, BVA leads to the selection of test cases at the 'edges' of the class, i.e. test cases that exercise the bounding values.
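Both reduction techniques can be sketched in a few lines. The 'age' field, its classes and the 0-120 range are assumptions chosen purely for illustration:

```python
def age_class(age):
    """ECP: partition the input domain of an 'age' field into classes.
    One representative test per class stands in for every value in it."""
    if age < 0 or age > 120:
        return "invalid"
    return "minor" if age < 18 else "adult"


def bva_probes(low, high):
    """BVA: probe just below, on, and just above each edge of a range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]
```

Instead of testing every possible age, ECP reduces the work to one test per class, and BVA adds the six probes around the range edges, where errors tend to cluster.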

e. This is the reason 'why testing is expensive. This may increase the change of module's error undetected. In other words. 2. Bug: Non Functionality to a functionality Presence of an error implies that a failure must have occurred. Values just above and just below the maximum and minimum should be tested. just above and just below a & b. and the observance of a failure implies that a fault must be present in the system. it will be extremely difficult to pin point the source of error. for identifying faults after testing has revealed the presence of faults the expensive task of debugging has to be performed.e. SDLC) must be detected.Software Testing Material Guidelines for BVA: 1. Failure: Is the inability of the system or component to perform a required function according to its specifications. i. If input condition is a range bounded by values 'a' and 'b'. SOME IMPORTANT TESTING HINTS Testing is the phase where the errors remaining from all the previous phases (i. test case should be developed that exercises the minimum and maximum numbers. Fault is a condition that causes a system to fail in performing its required function. A Software failure occurs if the behavior of the software is different from the specified behavior. Apply the above guidelines for output conditions also. What is the Difference between Error. During the testing process only failures are observed by which presence of fault is deduced. Fault: Fault is the basic reason for software malfunction. Success of testing in revealing errors depends critically on test cases.' Reason for Testing System separately( Unit. Integration and System Testing): Reason for testing parts separately is that if a test case detects an error in a large program. Fault Failure and Bug ? Error: It refers to the discrepancy between computed or measured value and theoretically correct value. Page 129 of 132 . It is difficult to construct test cases so that all the modules will be executed. i. 
Hence testing performs a very critical role for quality assurance and for ensuring the reliability of software. The actual faults are identified by separate activities commonly referred to us 'debugging'.e. Difference between actual output and correct output of the software. Test case should be designed with values 'a' and 'b'. If input condition specifies a number of values.

Poorly documented code 5. Testing should begin 'in the small' and process towards testing 'in the large' 4. To be most effective. When to Stop Testing? We can Stop Testing when  Full execution of all test cases with internal acceptance and customer acceptance  When Beta or Alpha Testing period ends  Bug rate falls below certain level  Test budget depleted  Test cases completed with certain % passed What is Error Seeding? Page 130 of 132 . A good test is not redundant. but independent testing may succeed in finding them.Software Testing Material What is the need for independent testing/ third party testing:?  Sometimes error occurs because the programmer did not understand the specification clearly. 2. 3. A good test has a high probability of finding an error 2. All the test cases should be traceable to the customer requirements. Testing of a program by its programmer will not detect such errors. A good test should be neither too simple nor too complex. testing should be conducted by an independent third party What is the life time of a bug? Once you find the defect. time spent to fix the defect is called life time of the bug. Programming errors 3.  Time concern  If the customer want the third party testing  Non-availability of testing resources  It is not easy for some one to test their own program with proper frame of mind for testing What is the Testing Principles? 1. Miscommunication between the inter group 6. Software development tools or OS may introduce their own bugs. Why Software has bugs? Due to 1. Software complexity 2. Changing in requirement 4. 3. Testing should be planned long before testing begins. A good test should be 'best of breed' 4. Attributes of a Good test: 1.

Efficient tester will find the 'inserted bugs'. 4. In addition to the defect severity level defined above. The system cannot be used until the repair has been effected Give High Attention: The defect must be resolved as soon as possible because it is impairing development / and or testing activities. The levels are: Resolve Immediately: Further development and /or testing cannot occur until the defect has been repaired. It can be resolved in a future major system revision or not resolved at all. or is a request for an enhancement. we have to 'insert certain number of bugs' in project in various points and give it to tester to test. does not impair usability. is related to the aesthetics of the system. or inconsistent results. or of a software unit (program or module) within the system. Critical: The defect results in the failure of the complete software system. We have to check the efficiency of the tester once the software is 100% bug free. There is no way to make the failed components. System use will be severely affected until the defect is fixed. but causes the system to produce incorrect. 5. DEFECT CLASSIFICATION As per ANSI/IEEE standard 729 the following are the five level of defect classification are 1. It can wait unit a new build or version is created. 2.Software Testing Material Once the software is 100% bug free. Cosmetic: The defect is the result of non-conformance to a standard. Normal Queue: The defect should be resolved in the normal course of development activities. Major: The defect results in the failure of the complete software system of a subsystem. however. A five repair priority scale has also be used in common testing practice. Just to check the efficiency of Tester. of a subsystem. or of a software unit (program or module) with the system. there are acceptable processing alternatives which will yield the desired result. or the defect impairs the systems usability. Minor: The defect does not cause failure. 
4. Low Priority: The defect is an irritant which should be repaired, but which can be repaired after more serious defects have been fixed.
5. Defer: The defect repair can be put off indefinitely; it can be resolved in a future major system revision or not resolved at all.

Cost of a defect = Total effort spent in testing / Total no. of defects

Defect density = No. of defects / KLOC (or FP)
(KLOC - Kilo Lines Of Code; FP - Function Point analysis)

Testing efficiency = No. of test cases / No. of defects

Defect closure rate = how much time it takes to close the defect.

Software Testing Related Web Sites:
www.mmsindia.com
www.stqemagazine.com
www.aptest.com
www.ftech.com
www.testworks.com
www.io.com
www.testing.com
www.badsoftware.com
www.softwareqatest.com
www.soft.com
www.stickyminds.com
www.sqe.com
www.kaner.com
www.jrothman.com
www.rstcorp.com
www.autotestco.com
www.webservepro.com
www.geocities.com
www.model-based-testing.com
www.facilita.co.uk
www.testingstuff.com

