Software Testing – made easy

Software Testing
- made easy

Prepared By

K. Muthuvel, B.Com., M.C.A.
E.P.G.D.S.T * (Software Testing)

K. Muthuvel

Page 1 of 127


History
Version: 1.0
Description / Changes: Baseline version
Author: K. Muthuvel
Approver:
Effective Date: 10th Aug. 2005

For “Maveric Systems” Internal Use Only. No part of this volume may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or by any information storage or retrieval system, except as may be expressly permitted.



This book is dedicated to

Lord Vignesh


Table of Contents

1. Testing Fundamentals ............................................................ 9
2. Quality Assurance, Quality Control, Verification & Validation .................. 10
3. SDLC & STLC .................................................................... 11
4. Testing Standards .............................................................. 15
5. Testing Techniques ............................................................. 19
6. Difference Tables .............................................................. 33
7. Levels of Testing .............................................................. 36
8. Types of Testing ............................................................... 49
9. Test Preparation & Design Process .............................................. 52
10. Test Plan ..................................................................... 54
11. Test Execution Process ........................................................ 63
12. Defect Management ............................................................. 65
13. Test Closure Process .......................................................... 69
14. Maveric Systems Limited ....................................................... 71
15. Testing Activities & Deliverables ............................................. 73
16. Q & A ......................................................................... 95
17. Glossary ...................................................................... 119

1. Testing Fundamentals

1.1. Definition

“The process of exercising software to verify that it satisfies specified requirements and to detect errors.”
…BS7925-1

“Testing is the process of executing a program with the intent of finding errors.”
…Glen Myers

Testing identifies faults whose removal increases the software quality by increasing the software’s potential reliability. Testing is the measurement of software quality. We measure how closely we have achieved quality by testing the relevant factors such as correctness, reliability, usability, maintainability, reusability and testability.

1.2. Objective

· Testing is a process of executing a program with the intent of finding an error.
· A good test is one that has a high probability of finding an as-yet-undiscovered error.
· A successful test is one that uncovers an as-yet-undiscovered error.
· The objective is to design tests that systematically uncover different classes of errors and do so with a minimum amount of time and effort.
· Demonstrating that the software application appears to be working as required by the specification.
· Meeting performance requirements.
· Estimating software reliability and software quality based on the data collected during testing.
· Testing should also aim at suggesting changes or modifications if required, thus adding value to the entire process.

1.3. Benefits of Testing

· Increased accountability and control
· Cost reduction
· Time reduction
· Defect reduction
· Increased productivity of the software developers
· Quantitative management of software delivery
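Glen Myers' definition above can be illustrated with a small, purely hypothetical example; the function, its specification, and the defect below are invented for illustration. A "good" test, in the sense defined here, is the one with a high probability of exposing the as-yet-undiscovered boundary error:

```python
def is_adult(age):
    """Intended specification: a person aged 18 or over is an adult."""
    return age > 18  # Defect: the comparison should be 'age >= 18'

# A weak test exercises only a typical value and passes despite the defect.
weak_test_passes = is_adult(30) is True

# A good test probes the boundary: the specification says is_adult(18)
# must be True, but the defective code returns False. This test has a
# high probability of uncovering the as-yet-undiscovered error.
good_test_found_defect = is_adult(18) is False
```

Note that the weak test "succeeds" without adding information, while the boundary test is the successful one by Myers' criterion: it actually uncovers the error.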

2. Quality Assurance, Quality Control, Verification & Validation

2.1. Quality Assurance

“A planned and systematic pattern for all actions necessary to provide adequate confidence that the item or product conforms to established technical requirements.”

2.2. Quality Control

“QC is a process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected.”

“Quality Control is defined as a set of activities or techniques whose purpose is to ensure that all quality requirements are being met. In order to achieve this purpose, processes are monitored and performance problems are solved.”
…[IEEE]

2.3. Verification

“The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase.”

2.4. Validation

“Determination of the correctness of the products of software development with respect to the user needs and requirements.”
…BS7925-1

Difference Table:

Quality Assurance: Study of the process followed in project development.
Quality Control: Study of the project for its function and specification.

Verification: Process of determining whether the output of one phase of development conforms to its previous phase. Verification is concerned with phase containment of errors.
Validation: Process of determining whether a fully developed system conforms to its SRS document. Validation is concerned with the final product being error free.
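The verification/validation split in the table can be sketched with a hypothetical example; the `discount` function, its design document, and the user requirement quoted in the comments are all invented for illustration. Verification checks the output of one phase against the previous phase, while validation checks the finished behaviour against the user's actual need:

```python
def discount(total):
    # Built exactly as the (hypothetical) design document from the
    # previous phase describes: "apply a flat 10% discount to the total".
    return total * 0.10

# Verification: does the output of this phase conform to its previous
# phase (the design document)? Yes - 10% of 200 is 20, so phase
# containment of errors holds.
verified = discount(200) == 20.0

# Validation: does the fully developed behaviour conform to the user's
# actual requirement ("discount only on orders of 100 or more")? No -
# orders under 100 are still discounted, so the product is wrong even
# though it matches its design. This is exactly what validation catches.
validated = discount(50) == 0.0
```

Here `verified` is True while `validated` is False: the code conforms to its design but not to the user's requirement, which is why both checks are needed.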

3. SDLC & STLC

3.1. STLC – Software Testing Life Cycle

· Preparation of the Testing Project Plan, which includes the Test Strategy.
· Preparation of Test Scripts, which contain Test Scenarios.
· Preparation of the Testing Bed, i.e. setting up the Test Environment.
· Executing the Test Scripts (automated as well as manual tests).
· Defect tracking with a bug-tracking tool.
· Preparation of Test Metrics for continuous process improvement.
· Preparation of the Test Completion Report and Test Incident Report.

3.2. Models of SDLC & STLC

There are a number of different models for the software development life cycle. One thing all models have in common is that at some point in the life cycle, the software has to be tested. This book outlines some of the more commonly used software development life cycle models, with particular emphasis on the testing activities in each model.

3.2.1. V-Model

The figure shows a brief description of the V-Model kind of testing. Every phase of the STLC in this model corresponds to some activity in the SDLC. The Requirement Analysis would correspondingly have an acceptance testing activity at the end; the design has Integration Testing (IT) and System Integration Testing (SIT); and so on.

The V-model is the classic software development model. It encapsulates the steps in the Verification and Validation phases for each step in the SDLC: the V-model shows the development cycle stages and maps them to testing cycles.

Key points about the V-model:
· Testing is done in parallel with development. Both the development activity and the testing activities start almost at the same time with the same information in their hands: the left side of the V reflects development, and each development work product is input for the corresponding testing activity on the right side.
· For each phase, the subsequent phase becomes the verification (QA) phase, and the corresponding testing phase in the other arm of the V becomes the validation (testing) phase. The development team applies "do" procedures to achieve the goals, and the testing team applies "check" procedures to verify them.
· Testing runs from the unit level to the business level. After coding is complete, the tester tests the code against the design-phase documents to check that all the modules have been integrated; after that, the system is verified against the requirements; and finally business scenarios are validated by the customer through alpha and beta testing, arriving at a complete, stable product with almost no bugs or errors.
· It is a parallel activity which gives the tester domain knowledge and enables more value-added, high-quality testing with greater efficiency. It also reduces time, since the test plans, test cases and test strategy are prepared during the development stage itself.

However, the V-model fails to address how to start all these test levels in parallel with development.

3.2.2. W-Model
From the view of testing, all of the models presented previously are deficient in various ways: the test activities first start after the implementation, the connection between the various test stages and the basis for the test is not clear, and the tight link between test, debug and change tasks during the test phase is not clear. In the following, the W-model is presented. It is based on the general V-model, with the disadvantages previously mentioned removed.

3.2.3. Waterfall Model
One of the first models for software development is the so-called waterfall model by B. Boehm. The individual phases, i.e. the activities defined here, are to be found in nearly all models proposed since. In this model it was set out that each of the activities in software development must be completed before the next phase begins; a return in the development process was only possible to the immediately previous phase.

In the waterfall model, testing directly follows the implementation. The model suggested that activities for testing could first be started after the implementation, and preparatory tasks for testing were not clear. Also, the expense of the removal of faults and defects found is only recognizable through a return to the implementation phase. A further disadvantage is that testing, as the last activity before release, could relatively easily be shortened or omitted altogether. This, in practice, is unfortunately all too common.

3.2.4. Extreme Programming Model

3.2.5. Spiral Model
In the spiral model a cyclical and prototyping view of software development was shown. Tests were explicitly mentioned (risk analysis, validation of requirements and of the development), and the test phase was divided into stages. The test activities included module, integration and acceptance tests. However, in this model the testing also follows the coding; the exception to this is that the test plan should be constructed after the design of the system. The spiral model also identifies no activities associated with the removal of defects.

4. Testing Standards

Testing of software is defined very differently by different people and different corporations. There are process standards bodies, like ISO, that attempt to impose a process on whatever types of development projects you do (be it hardware, software, embedded systems, etc.), and some of that will, by proxy, speak to testing. However, these are more there to guide the process and not the testing. ISO is the standard for international projects, and yet it, such as it is, does not really force or even advocate a certain "testing standard."

You also have standards bodies, like IEEE, that may help you out. IEEE will give you ideas for templates for such things as test case specifications, test plans, etc. However, those IEEE templates tell you nothing about actually testing the product itself; they basically just show you how to document that you are testing the product. The same thing pretty much applies with ISO. You also have other process- and project-oriented concepts out there, like the Capability Maturity Model (CMM).

Some of the organizations that define testing standards are:
· BS – British Standards
· ISO – International Organization for Standardization
· IEEE – Institute of Electrical and Electronics Engineers
· CMM – Capability Maturity Model
· SPICE – Software Process Improvement and Capability Determination
· NIST – National Institute of Standards and Technology
· DoD – Department of Defense

4.1. SW-CMM: SEI (Software Engineering Institute), Carnegie Mellon University

Software Process
A software process can be defined as a set of activities, methods, practices, and transformations that people use to develop and maintain software and the associated products.

Software Process Capability
Software Process Capability describes the range of expected results that can be achieved by following a software process. The software process capability of an organization provides one means of predicting the most likely outcomes to be expected from the next software project the organization undertakes.

Software Process Maturity
Software Process Maturity is the extent to which a specific process is explicitly defined, managed, measured, controlled, and effective. Maturity implies a potential growth in capability and indicates both the richness of an organization's software process and the consistency with which it is applied in projects throughout the organization.

The five levels of SW-CMM:
Level 1: Initial
Level 2: Repeatable
Level 3: Defined
Level 4: Managed
Level 5: Optimizing

4.2. SW-TMM
SW-TMM is a testing process improvement tool that can be used either in conjunction with the SW-CMM or as a stand-alone tool.

Levels of SW-TMM

4.2.1. Level 1: Initial
· A chaotic process
· Not distinguished from debugging and ill defined
· The tests are developed ad hoc after coding is complete
· Usually lacks a trained professional testing staff and testing tools
· The objective of testing is to show that the system and software work

4.2.2. Level 2: Phase Definition
· Identify testing as a separate function from debugging
· Testing becomes a defined phase following coding
· Standardize the process to the point where basic testing techniques and methods are in place
· The objective of testing is to show that the system and software meet specifications

4.2.3. Level 3: Integration
· Integrate testing into the entire life cycle
· Establish a formal testing organization
· Establish formal testing technical training
· Control and monitor the testing process
· Begin to consider using automated test tools
· The objective of testing is based on system requirements
· Major milestone reached at this level: management recognizes testing as a professional activity

4.2.4. Level 4: Management and Measurement
· Testing is a measured and quantified process
· Development products are now tested for quality attributes such as reliability, usability and maintainability
· Test cases are collected and recorded in a test database for reuse and regression testing
· Defects found during testing are now logged, given a severity level, and assigned a priority for correction

4.2.5. Level 5: Optimization / Defect Prevention and Quality Control
· Testing is institutionalized within the organization

· The testing process is well defined and managed
· Testing costs and effectiveness are monitored
· Automated tools are a primary part of the testing process
· There is an established procedure for selecting and evaluating testing tools

4.2.6. Need to use SW-TMM
· Easy to understand and use
· Provides a methodology to baseline the current test process maturity
· Designed to guide organizations in selecting process improvement strategies and identifying critical issues to test process maturity
· Provides a road map for continuous test process improvement
· Provides a method for measuring progress
· Allows organizations to perform their own assessment

Organizations that are using SW-CMM:
· SW-TMM fulfills the design objective of being an excellent companion to SW-CMM
· SW-TMM is just another assessment tool and is easily incorporated into the software process assessment

Organizations that are not using SW-CMM:
· Provides an unbiased assessment of the current testing process
· Provides a road map for incremental improvements
· Saves testing cost as the testing process moves up the maturity levels

4.2.7. SW-TMM Assessment Process
· Prepare for the assessment
  - choose team leader and members
  - choose evaluation tools (e.g. questionnaire)
  - training and briefing
· Conduct the assessment
· Document the findings
· Analyze the findings
· Develop the action plan
· Write the final report
· Implement the improvements
  - best to implement the improvements either in a pilot project or in phases
  - track progress and achievements prior to expanding organization-wide
  - a limited application also makes it easier to fine-tune the new process prior to expanded implementation

4.2.8. SW-TMM Summary
· Baseline the current testing process level of maturity

· Identify areas that can be improved
· Identify testing processes that can be adopted organization-wide
· Provide a road map for implementing the improvements
· Provide a method for measuring the improvement results
· Provide a companion tool to be used in conjunction with the SW-CMM

4.3. ANSI / IEEE Standards
ANSI – 'American National Standards Institute'
IEEE Standards: Institute of Electrical and Electronics Engineers (founded in 1884). The IEEE has an entire set of standards devoted to software, and testers should be familiar with all the standards mentioned in IEEE:
1. 610.12-1990 IEEE Standard Glossary of Software Engineering Terminology
2. 730-1998 IEEE Standard for Software Quality Assurance Plans
3. 828-1998 IEEE Standard for Software Configuration Management
4. 829-1998 IEEE Standard for Software Test Documentation
5. 830-1998 IEEE Recommended Practice for Software Requirement Specifications
6. 1008-1987 (R1993) IEEE Standard for Software Unit Testing
7. 1012-1998 IEEE Standard for Software Verification and Validation
8. 1012a-1998 IEEE Standard for Software Verification and Validation – Supplement to 1012-1998 Content
9. 1016-1998 IEEE Recommended Practice for Software Design Description
10. 1028-1997 IEEE Standard for Software Reviews
11. 1044-1993 IEEE Standard Classification for Software Anomalies
12. 1045-1992 IEEE Standard for Software Productivity Metrics
13. 1058-1998 IEEE Standard for Software Project Management Plans
14. 1058.1-1987 IEEE Standard for Software Management
15. 1061-1998.1 IEEE Standard for Software Quality Metrics Methodology

4.4. ISO: International Organisation for Standardisation
· Q9001 – 2000 – Quality Management System: Requirements
· Q9000 – 2000 – Quality Management System: Fundamentals and Vocabulary
· Q9004 – 2000 – Quality Management System: Guidelines for performance improvements

4.5. BCS – SIGIST
A meeting of the Specialist Interest Group on Software Testing (SIGIST) was held in January 1989 (this group was later to affiliate with the British Computer Society). This meeting agreed that existing testing standards are generally good standards within the scope which they cover, and that they describe the importance of good test case selection without being specific about how to choose and develop test cases. The SIG formed a subgroup to develop a standard which addresses the quality of testing performed.

Draft 1.2 was completed by November 1990, and this was made a semi-public release for comment. A few members of the subgroup trialed this draft of the standard within their own organisations. Draft 1.3 was circulated in July 1992 (it contained only the main clauses) to about 20 reviewers outside of the subgroup. Much of the feedback from this review suggested that the approach to the standard needed re-consideration.

5. Testing Techniques

5.1. Static Testing Techniques
"Analysis of a program carried out without executing the program." … BS 7925-1

5.1.1. Review

5.1.1.1. Definition
Review is a process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. [IEEE]

5.1.1.2. Types of Reviews
There are three general classes of reviews:
· Informal / peer reviews
· Semiformal / walkthroughs
· Formal / inspections

5.1.1.3. Walkthrough
"A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review." [BS 7925-1]

A 'walkthrough' is an informal meeting for evaluation or informational purposes. These are led by the author of the document, and the purpose is to find problems and see what is missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; little or no other preparation is usually required. Typically walkthroughs entail dry runs of designs, code and scenarios/test cases, and they are educational in nature. Communication is therefore predominantly one-way in nature.

5.1.1.4. Inspection
"A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection)." [BS 7925-1]

An inspection is more formalized than a 'walkthrough', typically with 3 to 8 people including a moderator, a reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements specification or a test plan. Thorough preparation for inspections is difficult, painstaking work, and most problems will be found during this preparation; inspection is nonetheless one of the most cost-effective methods of ensuring quality. The result of the inspection meeting should be a written report.

An inspection is led by a trained moderator (not the author), has defined roles, and includes metrics and a formal process based on rules and checklists with entry and exit criteria.

5.1.1.5. Informal Review
· Unplanned and undocumented
· Useful, cheap and widely used
· Contrast with walkthroughs: communication is very much two-way in nature

5.1.1.6. Technical Review
Technical reviews are also known as peer reviews, as it is vital that participants are made up from the 'peer group', rather than including managers.
· Documented
· Defined fault-detection process
· Includes peers and technical experts
· No management participant

Comparison of review types:

Review type       Primary purpose                         Led by        Participants                Degree of formality
Walkthrough       Education                               Author        Peers                       Presentational
Inspection        Finding faults and process improvement  Moderator     Reader, Author, Inspector   Formal, defined inspection process
Informal review   Find problems quickly and cheaply       Not defined   Not defined                 Largely unplanned and undocumented
Technical review  Finding faults                          Chairperson   Peers, technical experts    Formal fault-detection process

5.1.2. Activities performed during review
Activities in review: planning, overview meeting, review meeting and follow-up.
Deliverables in review: product changes, source document changes and improvements.
Factors for pitfall of review: lack of training, documentation and management support.

5.1.2.1. Review of the Requirements / Planning and Preparing Acceptance Test
At the beginning of the project the test activities must start. These first activities are:
· Fixing of test strategy and test concept
· Risk analysis
  - determine criticality
  - expense of testing
  - test intensity
· Draw up the test plan
· Organize the test team
· Training of the test team, if necessary
· Establish monitoring and reporting

· Provide required hardware resources (PC, …)
· Provide required software resources (software version, database, test tools, …)
These activities include the foundations for a manageable and high-quality test process: a test strategy is determined after a risk evaluation, a cost estimate and test plan are developed, and progress monitoring and reporting are established.

The review of the requirements document answers questions like: Are all customers' requirements fulfilled? Are the requirements complete and consistent? And so on. But just as important is a look forward. Ask questions like: Are the requirements testable? Are they testable with defensible expenditure? If the answer is no, then there will be problems implementing these requirements; if you have no idea how to test some requirements, it is likely that you have no idea how to implement them.

At this stage of the development process all the knowledge for the acceptance tests is available and to hand, so this is the best place for doing all the planning and preparing for acceptance testing. For example, one can:
· Establish priorities of the tests depending on criticality
· Specify (functional and non-functional) test cases
· Specify and, if possible, provide the required infrastructure
At this stage all of the acceptance test preparation is finished and can be achieved. During the development process all plans must be updated and completed, and all decisions must be checked for validity.

5.1.2.2. Review of the Specification / Planning and Preparing System Test
In the review meeting of the specification documents, ask questions like: Is the specification testable? Is it testable with defensible expenditure? Only these kinds of specifications can be realistically implemented and used for the next steps in the development process. There must be a re-work of the specifications if the answers to the questions are no.

Here all the knowledge for the system tests is available and to hand, and as with the acceptance test preparation, all of the system test preparation is finished at this early development stage. Tasks in planning and preparing for system testing include:
· Establishing priorities of the tests depending on criticality
· Specifying (functional / non-functional) system test cases
· Defining and establishing the required infrastructure

5.1.2.3. Review of the Architectural Design and Detailed Design / Planning and Preparing Integration and Unit Tests
In a mature development process, reviews and inspections are carried out through the whole process. A review is a look back to fix problems before going on in development, but also a look forward. During the review of the architectural design one can ask questions like: What about the testability of the design? Are the components and interfaces testable? Are they testable with defensible expenditure?

If the components are too expensive to test, a re-work of the architectural design has to be done before going further in the development process. At this stage all the knowledge for integration testing is available, and preparation can be done here, like specifying control-flow and data-flow integration test cases. All corresponding activities of the review of the architectural design and the integration tests can be done here at the level of unit tests as well.

5.1.3. Roles and Responsibilities
In order to conduct an effective review, everyone has a role to play. More specifically, there are certain roles that must be played, and reviewers cannot switch roles easily. The basic roles in a review are:
· The moderator
· The recorder
· The presenter
· Reviewers

Moderator: The moderator makes sure that the review follows its agenda and stays focused on the topic at hand, ensures that side-discussions do not derail the review, and that all reviewers participate equally.

Recorder: The recorder is an often overlooked but essential part of the review team. Keeping track of what was discussed and documenting actions to be taken is a full-time task; assigning this task to one of the reviewers essentially keeps them out of the discussion. Make sure to have a recorder, and make sure that this is the only role the person plays. Worse yet, failing to document what was decided will likely lead to the issue coming up again in the future.

Presenter: The presenter is often the author of the artifact under review. The presenter explains the artifact and any background information needed to understand it (although if the artifact was not self-explanatory, it probably needs some work). The presenter is there to kick off the discussion, to answer questions and to offer clarification. It is important that reviews not become "trials": the focus should be on the artifact, not on the presenter. It is the moderator's role to make sure that participants (including the presenter) keep this in mind.

Reviewer: Reviewers raise issues. It is important to keep focused on this: raise the issue, and do not get drawn into side discussions of how to address it. Focus on results, not the means.

5.2. Dynamic Testing Techniques
"The process of evaluating a system or component based upon its behaviour during execution." … [IEEE]

5.2.1. Black Box Testing
"Test case selection that is based on an analysis of the specification of the component without reference to its internal workings." … BS 7925-1

Black box testing is testing based on an analysis of the specification of a piece of software without reference to its internal workings. It focuses on testing the function of the program or application against its specifications. Specifically, this technique determines whether combinations of inputs and operations produce expected results. Black box testing is based solely on the knowledge of the system requirements; the goal is to test how well the component conforms to the published requirements for the component.

It attempts to find:
· Incorrect or missing functions
· Interface errors
· Errors in data structures or external database access
· Performance errors
· Initialization and termination errors

Black-box test design treats the system as a "black box", so it does not explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on testing functional requirements. In comparison, white-box testing allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data.

Test case design techniques under black box testing:
· Equivalence class partitioning
· Boundary value analysis
· Comparison testing
· Orthogonal array testing
· Decision table based testing
· Cause-effect graph

5.2.1.1. Equivalence Class Partitioning
Equivalence class: "A portion of the component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification." … BS 7925-1

Equivalence partition testing: "A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes." … BS 7925-1

Determination of equivalence classes:
· Examine the input data. A few general guidelines for determining the equivalence classes can be given.
· If the input data to the program is specified by a range of values (e.g. numbers between 1 and 5000): one valid and two invalid equivalence classes are defined.
· If the input is an enumerated set of values (e.g. {a, b, c}): one equivalence class for valid input values and another equivalence class for invalid input values should be defined.

Example:
· A program reads an input value in the range of 1 and 5000 and computes the square root of the input number.
· There are three equivalence classes:
  o the set of negative integers
  o the set of integers in the range of 1 and 5000
  o integers larger than 5000
· The test suite must include representatives from each of the three equivalence classes. A possible test suite can be: {-5, 500, 6000}.

5.2.1.2. Boundary Value Analysis
Boundary value: "An input value or output value which is on the boundary between equivalence classes, or an incremental distance either side of the boundary." … BS 7925-1

Boundary value analysis: "A test case design technique for a component in which test cases are designed which include representatives of boundary values." … BS 7925-1

Example:
· For a function that computes the square root of an integer in the range of 1 and 5000, test cases must include the values: {0, 1, 5000, 5001}.
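The two techniques above can be sketched as runnable code. The function name `int_sqrt`, its error-handling behaviour for out-of-range inputs, and the helper `expect_error` are illustrative assumptions, not from the original text; only the ranges and test values come from the examples above.

```python
import math

def int_sqrt(n: int) -> int:
    """Integer square root for the valid range 1..5000.
    Inputs in either invalid class raise ValueError (an assumed behaviour)."""
    if n < 1 or n > 5000:
        raise ValueError("input out of range")
    return math.isqrt(n)

def expect_error(value) -> bool:
    """True if int_sqrt rejects the value."""
    try:
        int_sqrt(value)
        return False
    except ValueError:
        return True

# Equivalence partitioning: one representative per class, {-5, 500, 6000}.
assert expect_error(-5)       # invalid class: negative integers
assert int_sqrt(500) == 22    # valid class: 1..5000
assert expect_error(6000)     # invalid class: integers larger than 5000

# Boundary value analysis: the values just inside and outside each boundary.
for outside in (0, 5001):
    assert expect_error(outside)
for inside in (1, 5000):
    assert int_sqrt(inside) >= 1
```

Note how the two techniques compose: partitioning picks one value per class, then boundary value analysis adds {0, 1, 5000, 5001} around the class edges, where off-by-one defects typically hide.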

5.2.1.3. Cause and Effect Graphs
"A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases." … BS 7925-1

Cause-effect graphing attempts to provide a concise representation of logical combinations and corresponding actions:
1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.

A cause-and-effect diagram is a tool that helps identify, sort, and display possible causes of a specific problem or quality characteristic. It graphically illustrates the relationship between a given outcome and all the factors that influence the outcome. The C&E diagram is also known as the Fishbone/Ishikawa diagram because it is drawn to resemble the skeleton of a fish, with the main causal categories drawn as "bones" attached to the spine of the fish, as shown below.

Example C&E diagram for a server crash issue:
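Steps 3 and 4 above (graph to decision table, decision table to test cases) can be sketched in code. The login module, its causes and its effects are invented here purely for illustration; they do not appear in the original text.

```python
from itertools import product

# Hypothetical module with two causes and one observable effect:
#   cause C1: user id is valid
#   cause C2: password is valid
#   effect:   "grant" access, or "error"
def login(valid_user: bool, valid_password: bool) -> str:
    return "grant" if (valid_user and valid_password) else "error"

# Decision table: each rule maps a combination of causes to its expected effect.
decision_table = {
    (True, True):   "grant",
    (True, False):  "error",
    (False, True):  "error",
    (False, False): "error",
}

# Each rule of the decision table becomes one test case.
for causes in product([True, False], repeat=2):
    assert login(*causes) == decision_table[causes]
```

In practice the cause-effect graph lets you prune the table before this step: combinations that the graph proves equivalent need only one representative rule.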

Advantages of cause-and-effect diagrams:
· Help determine root causes
· Encourage group participation
· Indicate possible causes of variation
· Increase process knowledge
· Identify areas for collecting data

5.2.1.4. Comparison Testing
· In some applications the reliability is critical, and redundant hardware and software may be used.
· For redundant software, use separate teams to develop independent versions of the software.
· Test each version with the same test data to ensure all provide identical output.
· Run all versions in parallel with a real-time comparison of results.
· Even if only one version will run in the final system, for some critical applications one can develop independent versions and use comparison testing or back-to-back testing.
· When the outputs of the versions differ, each is investigated to determine if there is a defect.
· The method does not catch errors in the specification.

Exercise on live application
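A minimal sketch of back-to-back testing as described above: two independently written versions of the same specification are fed identical test data, and any input on which their outputs differ is flagged for investigation. The median specification and both implementations are invented for illustration.

```python
import random

# Version 1: index into the sorted list directly.
def median_v1(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 == 1 else (s[mid - 1] + s[mid]) / 2

# Version 2: average the two middle positions (they coincide for odd n).
def median_v2(values):
    s = sorted(values)
    n = len(s)
    return (s[(n - 1) // 2] + s[n // 2]) / 2

# Back-to-back comparison over shared randomized test data.
random.seed(0)
for _ in range(1000):
    data = [random.randint(-100, 100) for _ in range(random.randint(1, 20))]
    assert median_v1(data) == median_v2(data), f"versions disagree on {data}"
```

As the text warns, agreement between versions proves nothing about the specification itself: if both teams misread it the same way, comparison testing stays silent.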

5.2.2. White Box Testing
"Test case selection that is based on an analysis of the internal structure of the component." … BS 7925-1

Testing based on an analysis of the internal workings and structure of a piece of software. Also known as structural testing / glass box testing / clear box testing.
· Aims to establish that the code works as designed
· Examines the internal structure and implementation of the program
· Targets specific paths through the program
· Needs accurate knowledge of the design, implementation and code

Tests are based on coverage of code statements, branches, paths and conditions.

Test case design techniques under white box testing:
· Statement coverage
· Branch coverage
· Condition coverage
· Path coverage
· Data flow-based testing
· Mutation testing

5.2.2.1. Statement Coverage
"A test case design technique for a component in which test cases are designed to execute statements." … BS 7925-1

Design test cases so that every statement in a program is executed at least once. Unless a statement is executed, we have no way of knowing if an error exists in that statement.

Example: Euclid's GCD computation algorithm:

int f1(int x, int y){
    while (x != y){
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}

By choosing the test set {(x=3, y=3), (x=4, y=3), (x=3, y=4)} all statements are executed at least once.

5.2.2.2. Branch Coverage
Branch: "A conditional transfer of control from any statement to any other statement in a component; or an unconditional transfer of control from any statement to any other statement in the component except the next statement; or, when a component has more than one entry point, a transfer of control to an entry point of the component." … BS 7925-1

Branch testing: "A test case design technique for a component in which test cases are designed to execute branch outcomes." … BS 7925-1

Branch testing guarantees statement coverage; branch testing is stronger than statement coverage testing.

Example: test cases for branch coverage of the GCD example can be: {(x=3, y=3), (x=3, y=4), (x=4, y=3)}

5.2.2.3. Condition Coverage
Condition: "A Boolean expression containing no Boolean operators. For instance, A<B is a condition but A and B is not." … BS 7925-1

Test cases are designed such that each component of a composite conditional expression is given both true and false values. Condition testing is stronger than branch testing.

Example: consider the conditional expression ((c1 .and. c2) .or. c3). Each of c1, c2 and c3 is exercised at least once, i.e. given both true and false values.
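The condition-coverage example can be sketched as follows. The `seen` bookkeeping is our own instrumentation, assumed for illustration: it records every truth value each atomic condition takes, so the final assertion checks exactly the criterion stated above (each of c1, c2, c3 observed both true and false).

```python
seen = {"c1": set(), "c2": set(), "c3": set()}

def predicate(c1: bool, c2: bool, c3: bool) -> bool:
    """The composite expression from the text: ((c1 .and. c2) .or. c3)."""
    seen["c1"].add(c1)
    seen["c2"].add(c2)
    seen["c3"].add(c3)
    return (c1 and c2) or c3

# Two test cases suffice for condition coverage of this expression:
predicate(True, True, True)
predicate(False, False, False)

# Every atomic condition has now been given both truth values.
assert all(values == {True, False} for values in seen.values())
```

The two chosen inputs also happen to drive the overall predicate both true and false; condition coverage by itself does not guarantee that, which is why stronger criteria (such as condition/decision coverage) combine both requirements.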

5.2.2.4. Path Coverage
Path: "A sequence of executable statements of a component, from an entry point to an exit point." … BS 7925-1

Path testing: "A test case design technique in which test cases are designed to execute paths of a component." … BS 7925-1

A testing mechanism proposed by McCabe. The aim is to derive a logical complexity measure of a procedural design and use it as a guide for defining a basis set of execution paths. Test cases which exercise the basis set will execute every statement at least once.

Flow Graph Notation
A notation for representing control flow, with constructs for sequence, if, while, until and case. On a flow graph:
· Arrows called edges represent flow of control
· Circles called nodes represent one or more actions
· Areas bounded by edges and nodes are called regions
· A predicate node is a node containing a condition
Any procedural design can be translated into a flow graph. Note that compound Boolean expressions at tests generate at least two predicate nodes and additional arcs.

Cyclomatic Complexity
The cyclomatic complexity gives a quantitative measure of the logical complexity: it measures the number of linearly independent paths through a program module. This value gives the number of independent paths in the basis set, and an upper bound for the number of tests needed to ensure that each statement is executed at least once. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition (i.e., a new edge).

Introduced by Thomas McCabe in 1976, cyclomatic complexity is often referred to simply as program complexity, or as McCabe's complexity. This measure provides a single ordinal number that can be compared to the complexity of other programs.

Deriving Test Cases:
1. Using the design or code, draw the corresponding flow graph.
2. Determine the cyclomatic complexity of the flow graph.
3. Determine a basis set of independent paths.
4. Prepare test cases that will force execution of each path in the basis set.
Note: Some paths may only be able to be executed as part of another test.

Cyclomatic complexity (CC) = E - N + 2P
Where E = the number of edges of the graph
N = the number of nodes of the graph
P = the number of connected components

[Flow graph figure: nodes 1, 2, 3, 4, 5, 6, 7a, 7b and 8]
The example flow graph has a cyclomatic complexity of 4, so a basis set contains four independent paths, each starting at node 1 and ending at node 8 (the shortest being 1, 8).

Cyclomatic complexity provides an upper bound for the number of tests required to guarantee coverage of all program statements.
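The formula CC = E - N + 2P is mechanical enough to compute directly. The following sketch uses an illustrative edge list (an assumption, not the book's figure) that happens to yield a complexity of 4:

```python
# Illustrative flow graph given as a list of directed edges between
# numbered nodes; this graph is an assumption for demonstration only.
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 2),
         (2, 6), (6, 7), (7, 8), (6, 8), (3, 8)]
nodes = {n for edge in edges for n in edge}   # 8 distinct nodes

def cyclomatic_complexity(edges, nodes, components=1):
    # V(G) = E - N + 2P for a flow graph with P connected components
    return len(edges) - len(nodes) + 2 * components

print(cyclomatic_complexity(edges, nodes))    # upper bound on basis-set size
```

With 10 edges and 8 nodes in one connected component, the result is 4: four independent paths are enough to cover the basis set.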

5.2.2.5. Data Flow-Based Testing:

"Testing in which test cases are designed based on variable usage within the code." … BS7925-1

Data flow-based testing selects test paths of a program according to the locations of definitions and uses of different variables in the program. It is very useful for selecting test paths of a program containing nested if and loop statements.

For a statement numbered S:
DEF(S) = {X | statement S contains a definition of X}
USES(S) = {X | statement S contains a use of X}

Example: 1: a=b; here DEF(1) = {a} and USES(1) = {b}.
Example: 2: a=a+b; here DEF(2) = {a} and USES(2) = {a, b}.

A variable X is said to be live at statement S1 if X is defined at a statement S and there exists a path from S to S1 not containing any definition of X.

Definition-use chain (DU chain): [X, S, S1], where X is in DEF(S), X is in USES(S1), and the definition of X in statement S is live at statement S1.

DU Chain Example:
1 X(){
2   a=5;        /* Defines variable a */
3   While(C1) {
4     if (C2)
5       b=a*a;  /* Uses variable a */
6     a=a-1;    /* Defines variable a */
7   }
8   print(a);   /* Uses variable a */
9 }
[a, 2, 5] is a DU chain: a is defined at statement 2 and that definition reaches the use at statement 5.

A test suite achieves all-DU-chain coverage when every DU chain in the program is covered at least once.

Example:
1 X(){
2   B1;
3   While(C1) {
4     if (C2)
5       if (C4) B4;
6       else B5;
7     else if (C3) B2;
8     else B3;
9   }
10  B6;
}
Assume DEF(X) = {B1, B2, B3, B4, B5} and USED(X) = {B2, B3, B4, B5, B6}. There are 25 DU chains; however, only 5 paths are needed to cover these chains.
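DU-chain extraction can be sketched in a few lines. The helper below is hypothetical and deliberately simplified: it treats the statements as straight-line code (ignoring the loop's back edge), with each statement given as (line number, defined variables, used variables), mirroring the a=5 / b=a*a / a=a-1 / print(a) example:

```python
def du_chains(stmts):
    # stmts: list of (line_no, set_of_defined_vars, set_of_used_vars),
    # in straight-line order (loop back edges are ignored in this sketch).
    chains = []
    for i, (s, defs, _) in enumerate(stmts):
        for x in defs:
            # walk forward until x is redefined; every use on the way
            # forms a DU chain [x, s, s1]
            for s1, d1, u1 in stmts[i + 1:]:
                if x in u1:
                    chains.append((x, s, s1))
                if x in d1:
                    break
    return chains

example = [
    (2, {'a'}, set()),   # a = 5       /* defines a */
    (5, {'b'}, {'a'}),   # b = a * a   /* uses a */
    (6, {'a'}, {'a'}),   # a = a - 1   /* uses, then redefines a */
    (8, set(), {'a'}),   # print(a)    /* uses a */
]
print(du_chains(example))
```

The chain ('a', 2, 5) in the output corresponds to the [a, 2, 5] DU chain discussed above.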

5.2.2.6. Mutation Testing:

The software is first tested using an initial testing method based on the white-box strategies already discussed. After the initial testing is complete, mutation testing is taken up. The idea behind mutation testing is to make a few arbitrary small changes to a program at a time. Each time the program is changed, it is called a mutated program, and the change is called a mutant. The primitive changes can be:
· Altering an arithmetic operator
· Changing the value of a constant
· Changing a data type, etc.

A mutated program is tested against the full test suite of the program. If there is at least one test case in the test suite for which a mutant gives an incorrect result, then the mutant is said to be dead. If a mutant remains alive even after all test cases have been exhausted, the test suite is enhanced to kill the mutant. The process of generation and killing of mutants can be automated by predefining a set of primitive changes that can be applied to the program.

Major disadvantages of mutation testing:
· It is computationally very expensive.
· A large number of possible mutants can be generated.

5.3. Grey Box Testing

· Grey box testing is a newer term, which evolved due to the different architectural usage of systems. It is a combination of both black box and white box testing.
· The tester should have knowledge of both the internals and externals of the function: good knowledge of white box testing and complete knowledge of black box testing.
· Grey box testing is especially important with Web and Internet applications, because the Internet is built around loosely integrated components that connect via relatively well-defined interfaces.
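The kill/alive vocabulary can be made concrete with one hand-written mutant. The max_of() program, its operator mutant, and the test suite below are all hypothetical:

```python
# Original program under test (hypothetical).
def max_of(a, b):
    return a if a > b else b

# Mutant: the arithmetic/relational operator '>' altered to '<'.
def max_of_mutant(a, b):
    return a if a < b else b

# Test suite: (arguments, expected result) pairs.
test_suite = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

def killed(candidate):
    # A mutant is "dead" if at least one test case gives an incorrect result.
    return any(candidate(*args) != expected for args, expected in test_suite)

assert killed(max_of) is False        # original passes every test
assert killed(max_of_mutant) is True  # the mutant is killed by (3, 5)
```

If killed(max_of_mutant) had been False, the mutant would be alive and the suite would need a new test case to kill it.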

6. Difference Tables

6.1. Quality Assurance Vs Quality Control

Quality Assurance: study of the process followed in project development.
· QA is a planned and systematic set of activities necessary to provide adequate confidence that requirements are properly established and that products or services conform to specified requirements.
· It is an activity that establishes and evaluates the processes used to produce the products, preventing the introduction of issues or defects: it helps establish processes, sets up measurement programs to evaluate processes, and identifies weaknesses in processes and improves them.
· QA is the determination of correctness of the final software product by a development project with respect to the user needs and requirements, and it is the responsibility of the entire team.
· QA improves the processes that are applied to all products that will ever be produced by a process.
· It is performed during development on key artifacts, through walkthroughs, reviews, inspections, mentor feedback, training, checklists and standards.

Quality Control: study of the project for its function and specification.
· QC is a process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected.
· It is an activity which verifies that the product meets pre-defined standards.
· QC is a demonstration of consistency, completeness, and correctness of the software at each stage and between each stage of the development life cycle, and it is the responsibility of the tester.
· QC improves the development of a specific product or service by identifying the defects, reporting them, and correcting the defects.
· It is performed after a work product is produced, against established criteria, ensuring that the product integrates correctly into the environment.

6.2. Quality Vs Testing

Quality: "Quality is giving more cushions for the user to use the system with all its expected characteristics." It is usually described as a journey towards excellence.
Testing: Testing is an activity done to achieve the expected quality.

6.3. Testing Vs Debugging

Testing: Testing is done to find bugs.
Debugging: Debugging is the art of fixing bugs.

6.4. Verification & Validation

Verification: the process of determining whether the output of one phase of development conforms to its previous phase. Verification is concerned with phase containment of errors.
Validation: the process of determining whether a fully developed system conforms to its SRS document. Validation is concerned with the final product being error-free.

6.5. IST & UAT

Particulars:          IST                        UAT
Base line document:   Functional Specification   Business Requirement
Data:                 Simulated                  Live data
Environment:          Controlled                 Simulated live
Orientation:          Component                  Business
Tester composition:   Testing firm               Testing firm / users
Purpose:              Verification               Validation

6.6. Black Box Testing & White Box Testing

Black Box / Functional: test case selection based on an analysis of the specification of the component, without reference to its internal workings. It focuses on global issues of workflows, configuration, performance, and so forth. It attempts to find errors in the external behavior of the code in the following categories:
· incorrect or missing functionality
· interface errors
· errors in data structures used by interfaces
· behavior or performance errors
· initialization and termination errors
Skilled manual testers know how to follow a trail of bugs, and a good manual tester also applies on-the-spot judgment to observed results in a way that an automated tool cannot.

White Box / Structural: test case selection based on an analysis of the internal structure of the component. It is based on how the system is built and is applied to individual components and interfaces, being particularly effective at discovering localized errors in control and data flows. It involves insightful test planning, careful design, and meticulous result checking. It involves the creation of custom test data, and we can reuse such test data for other kinds of tests. No matter who does the structural testing, they will need to understand some fundamental test design techniques to do a good job.

6.7. Alpha Testing & Beta Testing

Component:                 Alpha testing              Beta testing
Test data:                 Simulated                  Live
Test environment:          Controlled                 Uncontrolled
To achieve:                Functionality              User needs
Tested by:                 Only testers               Testers and end-users
Supporting document used:  Functional Specification   Customer Requirement Specification

6.8. Test Bed and Test Environment

Test Bed: holds only the testing documents which support testing, including test data, data guidelines, etc.
Test Environment: includes all supportive elements, namely hardware, software, tools, browsers, servers, etc.

6.9. SIT & IST

SIT: can be done while the system is still in the process of integration.
IST: needs an integrated system of various unit levels of independent functionality; it checks the system's workability after integration and compares it with its behaviour before integration.

6.10. Re-testing and Regression Testing

Re-testing: to check a particular bug and its dependencies after it is said to be fixed.
Regression Testing: to check the effect of added or new functionality on the existing system.

7. Levels of Testing

7.1. Unit Testing

"The testing of individual software components." … BS7925-1

· Individual testing of separate units - methods and classes.
· Write tests before you write the code. Write many short tests (in code) that span the extents of the requirements for the module you wish to test.
· You are done coding once your code can pass all the tests; all debugging is separated from the code.
· Tests can be designed to ensure that the code fulfills the requirements.

7.1.1. Benefits of Unit Testing

· Assurance of working components before integration.
· Tests are repeatable - every time you change something, you can rerun your suite of tests to verify that the unit still works.

7.1.2. Pre-requisites

Before component testing may begin, the component test strategy (2.1.1) and project component test plan (2.1.2) shall be specified.

Component test strategy:
The component test strategy shall specify the techniques to be employed in the design of test cases and the rationale for their choice. Selection of techniques shall be according to clause 3. If techniques not described explicitly in this clause are used, they shall comply with the 'Other Testing Techniques' clause (3.13).
The component test strategy shall specify criteria for test completion and the rationale for their choice. These test completion criteria should be test coverage levels whose measurement shall be achieved by using the test measurement techniques defined in clause 4. If measures not described explicitly in this clause are used, they shall comply with the 'Other Test Measurement Techniques' clause (4.13).
The component test strategy shall document the degree of independence required of personnel designing test cases from the design process, such as:
a) the test cases are designed by the person(s) who writes the component under test;
b) the test cases are designed by another person(s);
c) the test cases are designed by a person(s) from a different section;
d) the test cases are designed by a person(s) from a different organisation;
e) the test cases are not chosen by a person.
The component test strategy shall document whether the component testing is carried out using isolation, bottom-up or top-down approaches, or some mixture of these.
The component test strategy shall document the environment in which component tests will be executed. This shall include a description of the hardware and software environment in which all component tests will be run.
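The "many short tests that span the requirements" idea looks like this in practice. The divide() unit and its two requirements are hypothetical:

```python
# Hypothetical unit under test.
def divide(a, b):
    if b == 0:
        raise ValueError("division by zero")
    return a / b

# Short, repeatable tests spanning the unit's requirements:
def test_quotient():
    assert divide(10, 2) == 5

def test_zero_divisor_rejected():
    try:
        divide(1, 0)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_quotient()
test_zero_divisor_rejected()
```

Because the tests are code, the whole suite can be rerun after every change to confirm the unit still works.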
7.Every time you change something you can rerun your suite of tests to verify that the unit still works.” · · · … BS795-1 Individual testing of separate units . The component test strategy shall document the degree of independence required of personnel designing test cases from the design process.Software Testing – made easy 7. e) the test cases are not chosen by a person.1. or some mixture of these. Levels of Testing 7. These test completion criteria should be test coverage levels whose measurement shall be achieved by using the test measurement techniques defined in clause 4.13). Write tests before you write the code.13).1.methods and classes.

The component test strategy shall document the test process that shall be used for component testing. The test process documentation shall define the testing activities to be performed and the inputs and outputs of each activity. For any given test case, the test process documentation shall require that the following activities occur in the following sequence:
a) Component Test Planning;
b) Component Test Specification;
c) Component Test Execution;
d) Component Test Recording;
e) Checking for Component Test Completion.
Component Test Planning shall begin the test process and Checking for Component Test Completion shall end it; these activities are carried out for the whole component. Later activities for one test case may occur before earlier activities for another. Component Test Specification, Component Test Execution, and Component Test Recording may, however, on any one iteration, be carried out for a subset of the test cases associated with a component. Whenever an error is corrected by making a change or changes to test materials or the component under test, the affected activities shall be repeated.
[Figure: the generic test process described in clause 2.]
… Standard for Software Component Testing, Working Draft 3.4 (27-Apr-01), © British Computer Society, SIGIST, 2001.

7.2. Integration Testing

"Testing performed to expose faults in the interfaces and in the interaction between integrated components." … BS7925-1

Testing of combined parts of an application to determine that they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Objective: The typical objectives of software integration testing are to:
· Cause failures involving the interactions of the integrated software components when running on a single platform.
· Report these failures to the software development team so that the underlying defects can be identified and fixed.
· Help the software development team to stabilize the software so that it can be successfully distributed prior to system testing.
· Minimize the number of low-level defects that will prevent effective system and launch testing.

Entry criteria:
· The integration team is adequately staffed and trained in software integration testing.
· The integration environment is ready.
· The first two software components have passed unit testing, been ported to the integration environment, and been integrated.
· Adequate program or component documentation is available.
· Verification that the correct version of the unit has been turned over for integration.
· Documented evidence that the component has successfully completed unit test.

Exit criteria:
· A test suite of test cases exists for each interface between software components.
· All software integration test suites successfully execute (i.e., the tests completely execute and the actual test results match the expected test results).
· Successful execution of the integration test plan.
· No open severity 1 or 2 defects.
· Component stability.

Guidelines:
· The iterative and incremental development cycle implies that software integration testing is regularly performed in an iterative and incremental manner.
· Software integration testing must be automated if adequate regression testing is to occur.
· Software integration testing can elicit failures produced by defects that are difficult to detect during system or launch testing once the system has been completely integrated.
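A test suite "for each interface between software components" targets the point where one component's output becomes another's input. The two components below are hypothetical stand-ins:

```python
# Hypothetical component A: parses a money amount from text.
def parse_amount(text):
    return round(float(text), 2)

# Hypothetical component B: applies a tax rate to an amount.
def apply_tax(amount, rate=0.1):
    return round(amount * (1 + rate), 2)

def test_interface_a_to_b():
    # Integration test of the A -> B interface: A's output feeds B's input.
    assert apply_tax(parse_amount("100.00")) == 110.0

test_interface_a_to_b()
```

Each such interface test exercises the interaction rather than either component in isolation, which is where integration defects hide.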

7.2.1. Incremental Integration Testing

"Integration testing where system components are integrated into the system one at a time until the entire system is integrated." … BS7925-1

Continuous testing of an application as new functionality is added, done by programmers or by testers. It requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed.

7.2.1.1. Top Down Integration

"An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested." … BS7925-1

· Modules are integrated by moving down the program design hierarchy.
· Can use depth first or breadth first top down integration.
· Verifies major control and decision points early in the design process.

[Figure: testing sequence - a Level 1 module tested with Level 2 stubs, then Level 2 modules tested with Level 3 stubs.]

Steps:
· Main control module used as the test driver, with stubs for all subordinate modules.
· Replace stubs either depth first or breadth first, one at a time.
· Test after each module is integrated.
· Use regression testing (conducting all or some of the previous tests) to ensure new errors are not introduced.

7.2.1.2. Bottom Up Integration

"An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested." … BS7925-1

· Begin construction and testing with atomic modules (lowest level modules).
· Driver programs are developed to test them, moving upwards in the program structure.

[Figure: testing sequence - test drivers exercising Level N clusters, then Level N-1 modules.]

Steps:
· Low level modules are combined in clusters (builds) that perform specific software sub-functions.
· A driver program is developed to test a cluster; the cluster is tested.
· Driver programs are removed and clusters are combined, moving upwards in the program structure.

Major features:
Top-down:
· The control program is tested first.
· Modules are integrated one at a time.
· Major emphasis is on interface testing.
Bottom-up:
· Allows early testing aimed at proving feasibility and practicality of particular modules.
· Modules can be integrated in various clusters as desired.
· Major emphasis is on module functionality and performance.

Advantages:
Top-down:
· No test drivers are needed.
· The control program plus a few modules forms a basic early prototype.
· Interface errors are discovered early.
· Modular features aid debugging.
Bottom-up:
· No test stubs are needed.
· It is easier to adjust manpower needs.
· Errors in critical modules are found early.

Disadvantages:
Top-down:
· Test stubs are needed.
· The extended early phases dictate a slow manpower buildup.
· Errors in critical modules at low levels are found late.
Bottom-up:
· Test drivers are needed.
· Many modules must be integrated before a working program is available.
· Interface errors are discovered late.

Comments:
Top-down: An early working program raises morale and helps convince management that progress is being made. It is hard to maintain a pure top-down strategy in practice.
Bottom-up: At any given point, more code has been written and tested than with top-down testing. Some people feel that bottom-up is a more intuitive test philosophy.

7.2.1.3. Stubs and Drivers

Stubs: Stubs are program units that are stand-ins for the other (more complex) program units that are directly referenced by the unit being tested. A stub is usually expected to provide the following:
· an interface that is identical to the interface that will be provided by the actual program unit, and
· the minimum acceptable behavior expected of the actual program unit (this can be as simple as a return statement).

Drivers: Drivers are programs or tools that allow a tester to exercise/examine, in a controlling manner, the unit of software being tested. A driver is usually expected to provide the following:
· a means of defining, declaring, or otherwise creating any variables, constants, or other items needed in the testing of the unit,
· a means of monitoring the states of these items, and
· any input and output mechanisms needed in the testing of the unit.

Sandwich Testing: Combines bottom-up and top-down testing, working from both ends towards a target layer in the middle.
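A minimal sketch of both roles; the unit, the stubbed dependency, and the driver below are all hypothetical:

```python
# Hypothetical unit under test: depends on a lower-level component
# (passed in as fetch_rate) that is not yet available.
def unit_under_test(fetch_rate):
    return 100 * fetch_rate()

def fetch_rate_stub():
    # Stub: identical interface to the real component,
    # minimum acceptable behavior (a bare return of a fixed value).
    return 1.5

def driver():
    # Driver: supplies the inputs, exercises the unit in a controlled
    # way, and monitors the result.
    result = unit_under_test(fetch_rate_stub)
    print("result:", result)
    return result

assert driver() == 150.0
```

The stub replaces a component *below* the unit; the driver replaces the caller *above* it, which is exactly why top-down integration needs stubs and bottom-up integration needs drivers.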

7.2.2. Non-Incremental Testing

7.2.2.1. Big Bang Integration

"Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system." … BS7925-1

7.2.2.2. Validation Testing

Validation testing aims to demonstrate that the software functions in a manner that can be reasonably expected by the customer. It tests conformance of the software to the Software Requirements Specification, which should contain a section "Validation criteria" that is used to develop the validation tests.

Validation test criteria: a set of black box tests to demonstrate conformance with requirements. These check that:
· all functional requirements are satisfied,
· all performance requirements are achieved,
· documentation is correct and 'human-engineered', and
· other requirements are met (e.g. compatibility, error recovery, maintainability).

When validation tests fail, it may be too late to correct the error prior to scheduled delivery, so a method of resolving deficiencies needs to be negotiated with the customer.

7.2.2.3. Configuration Review

An audit to ensure that all elements of the software configuration are properly developed and catalogued, and have the necessary detail to support maintenance.

7.3. System Testing

"System testing is the process of testing an integrated system to verify that it meets specified requirements." … BS7925-1

It is further sub-divided into:
· Functional system testing
· Non-functional system testing

System test entrance criteria:
· Successful execution of the integration test cases.
· No open severity 1 or 2 defects.
· 75-80% of total system functionality and 90% of major functionality delivered.
· System stability for 48-72 hours to start the test.

System test exit criteria:
· Successful execution of the system test cases, and documentation that shows coverage of requirements and high-risk system components.
· System meets pre-defined quality goals.
· 100% of total system functionality delivered.

7.3.1. Functional Testing

7.3.1.1. Requirement based Testing

"Designing tests based on objectives derived from requirements for the software component (e.g., tests that exercise specific functions or probe the non-functional constraints such as performance or security)." … BS7925-1

Requirements testing must verify that the system can perform its function correctly and that the correctness can be sustained over a continuous period of time. Unless the system can function correctly over an extended period of time, management will not be able to rely upon the system. The system can be tested for correctness throughout the lifecycle, but it is difficult to test the reliability until the program becomes operational.

Objectives: Successfully implementing user requirements is only one aspect of requirements testing. The objectives that need to be addressed in requirements testing are:
· User requirements are implemented.
· Correctness is maintained over extended processing periods.
· Application processing complies with the organization's policies and procedures.

When to use Requirements Testing: Every application should be requirements tested. The process should begin in the requirements phase and continue through every phase of the life cycle into operations and maintenance. It is not a question as to whether requirements must be tested but, rather, the extent and methods used in requirements testing. The responsible user is normally only one of many groups having an interest in the application system.

7.3.1.2. Business-Process based Testing

7.3.2. Non-Functional Testing

"Testing of those requirements that do not relate to functionality, i.e. performance, usability, etc." … BS7925-1

Non-functional testing types: configuration, compatibility, conversion, disaster recovery, interoperability, installability, memory management, maintainability, portability, performance, procedure, reliability, recovery, stress, security, usability.

7.3.2.1. Recovery testing

"Testing aimed at verifying the system's ability to recover from varying degrees of failure." … BS7925-1

Recovery is the ability to restart operations after the integrity of the application has been lost. The process normally involves reverting to a point where the integrity of the system is known, and then reprocessing transactions up until the point of failure. The importance of recovery will vary from application to application.

Objectives: Recovery testing is used to ensure that operations can be continued after a disaster. It verifies not only the recovery process, but also the effectiveness of the component parts of that process. Specific objectives of recovery testing include:
· Adequate backup data is preserved.
· Backup data is stored in a secure location.
· Recovery procedures are documented.
· Recovery personnel have been assigned and trained.
· Recovery tools have been developed and are available.

When to use Recovery Testing: Recovery testing should be performed whenever the user of the application states that the continuity of operation of the application is essential to the proper functioning of the user area. The user should estimate the potential loss associated with inability to recover operations over various time spans. The amount of the potential loss should determine both the amount of resource put into disaster planning and the amount put into recovery testing.

7.3.2.2. Security testing

"Testing whether the system meets its specified security objectives." … BS7925-1

Security is a protection system that is needed both to secure confidential information and, for competitive purposes, to assure third parties that their data will be protected. Protecting the confidentiality of the information is designed to protect the resources of the organization. Security testing is designed to evaluate the adequacy of the protective procedures and countermeasures.

Objectives: Security defects do not become as obvious as other types of defects. Even failures in the security system operation may not be detected, resulting in a loss or compromise of information without the knowledge of that loss. Therefore, the objectives of security testing are to identify defects that are very difficult to identify. The security testing objectives include:
· Determining that adequate attention has been devoted to identifying security risks.
· Determining that a realistic definition and enforcement of access to the system has been implemented.
· Determining that sufficient expertise exists to perform adequate security testing.
· Conducting reasonable tests to ensure that the implemented security measures function properly.

When to use Security Testing: Security testing should be used when the information and/or assets protected by the application system are of significant value to the organization. The testing should be performed both before the system goes into operational status and after the system is placed into operational status. The extent of testing should depend on the security risks, and the individual assigned to conduct the test should be selected based on the estimated sophistication that might be used to penetrate security.

7.3.2.3. Stress testing

"Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements." … BS7925-1

Stress testing is designed to test the software with abnormal situations. It attempts to find the limits at which the system will fail, through abnormal quantity or frequency of inputs, for example:
· Higher rates of interrupts.
· Data rates an order of magnitude above 'normal'.
· Test cases that require maximum memory or other resources.
· Test cases that cause 'thrashing' in a virtual operating system.
· Test cases that cause excessive 'hunting' for data on disk systems.

K. Alpha testing is conducted at the developer's site by a customer. The benefit of this type of acceptance testing is that it will bring out operational issues from potential customers prepared to comment on the software before it is officially released. The customer records and reports difficulties and errors at regular intervals. That is. External instrumentation can monitor intervals. 7.” … IEEE Performance testing is designed to test run time performance of software within the context of an integrated system.Software Testing – made easy 7. Beta testing is conducted at one or more customer sites by end users. The customer uses the software with the developer 'looking over the shoulder' and recording errors and usage problems. log events.” … BS7925-1 This is testing of an operational nature once the software seems stable. Alpha testing conducted in a controlled environment.Alpha and Beta testing “Alpha testing: Simulated or actual operational testing at an in-house site not otherwise involved with the software developers. It should be conducted by people who represent the software vendor's market. it is necessary to measure resource utilization in an exacting fashion. It is not until all systems elements are fully integrated and certified as free of defects the true performance of a system can be ascertained.3.” … BS7925-1 “Beta testing: Operational testing at a site not otherwise involved with the software developers. Muthuvel Page 46 of 127 . It is 'live' testing in an environment not controlled by the developer.3.2.4. and who will use the product in the same way as the final version once it is released. By instrument the system. the tester can uncover situations that lead to degradation and possible system failure. Performance tests are often coupled with stress testing and often require both hardware and software infrastructure.3.Performance testing “Testing conducted to evaluate the compliance of a system or component with specified performance requirements.

Entry Criteria · · · · SIT must be completed. compatibility.User Acceptance Testing “Acceptance testing: Formal testing conducted to enable a user. The client team should sign off the ‘Deferred’ defects.Exit Criteria · · · All Test Scenarios/conditions would be executed and reasons will be provided for untested conditions arising out of the following situations Non -Availability of the Functionality Deferred to the Future Release All Defects Reported are in the ‘Closed’ or ‘Deferred’ status. are satisfied. error recovery etc.2. Availability of stable Test Environment with the latest version of the Application. Acceptance criteria specified by the user is met. 7. All User IDs requested by the testing team to be created and made available to the testing team one week prior to start of testing. customer. UAT focuses on the following aspects: · · · · All functional requirements are satisfied All performance requirements are achieved Other requirements like transportability. Test Cases prepared by the testing team to be reviewed and signed-off by the Project coordinator (AGM-Male). or other authorized entity to determine whether to accept a system or component” … BS7925-1 User Acceptance Testing (UAT) is performed by Users or on behalf of the users to ensure that the Software functions in accordance with the Business Requirement Document.1.4.4. Muthuvel Page 47 of 127 . 7. · K.Software Testing – made easy 7.4.

Factors favour Automation of Regression Testing · · · · · · Ensure consistency Speed up testing to accelerate releases Allow testing to happen more frequently Reduce costs of testing by reducing manual labor Improve the reliability of testing Define the testing process and reduce dependence on the few who know it 7.1. Instead.5.Tools used in Regression testing · · · · · · WinRunner from Mercury e-tester from Empirix WebFT from Radview Silktest from Radview Rational Robot from Rational QA Run from Compuware K.5.” “When making improvements on software. they seek to select all tests that exercise changed or affected program components. · Safe attempt instead to select every test that will cause the modified program to produce different output than original program.” … BS7925-1 “Regression Testing is the process of testing the changes to computer programs to make sure that the older programs still work with the new changes. but do not require minimization of the test set. There are three main groups of test selection approaches in use: · Minimization approaches seek to satisfy structural coverage criteria by identifying a minimal set of tests that must be rerun. Four things can happen when a developer attempts to fix a bug.Regression Testing and Re-testing “Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.” Regression testing is an expensive but necessary activity performed on modified software to provide confidence that changes are correct and do not adversely affects other system components. Muthuvel Page 48 of 127 .Software Testing – made easy 7. the tests that are deemed necessary to validate modified software. it is necessary to do regression testing. Three of these things are bad. retesting previously tested functions to make sure adding new features has not introduced new problems.5.2. from an existing test set. 
· Coverage approaches are also based on coverage criteria. 7. A regression test selection technique chooses. and one is good: New Bug No New Bug Bad Good Successful Change Bad Unsuccessful Change Bad Because of the high probability that one of the bad outcomes will result from a change to the system.
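The coverage approach above can be sketched in a few lines: given a record of which components each test exercises, select every test that touches a changed component. This is a minimal sketch; the test and component names are invented for illustration, not taken from any real project.

```python
# Sketch of coverage-based regression test selection: rerun every test
# that exercises a changed or affected component. All names are invented
# for illustration.

def select_regression_tests(test_coverage, changed_components):
    """Return the tests that exercise at least one changed component."""
    changed = set(changed_components)
    return sorted(
        test for test, components in test_coverage.items()
        if changed & set(components)
    )

# Which components each existing test exercises (e.g. from instrumentation).
coverage_map = {
    "test_login": ["auth", "session"],
    "test_deposit": ["accounts", "interest"],
    "test_report": ["reports"],
}

# A change to the interest module selects only the test that touches it.
print(select_regression_tests(coverage_map, ["interest"]))
# → ['test_deposit']
```

A safe approach would additionally select any test whose behaviour could differ on the modified program; the coverage map alone cannot guarantee that.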

8. Types of Testing

8.1.Compliance Testing
Involves test cases designed to verify that an application meets specific criteria, such as processing four-digit year dates, properly handling special data boundaries and other business requirements.

8.2.Intersystem Testing / Interface Testing
“Integration testing where the interfaces between system components are tested” … BS7925-1
The intersystem test is designed to check and verify that the interconnections between applications function correctly. Applications are frequently interconnected to other systems; the interconnection may be data coming into the system from another application, or leaving for another application, frequently in multiple cycles. The intersystem test involves the operation of multiple systems in test, where multiple systems are integrated in cycles, commonly done with modules like Payroll. The basic need for an intersystem test arises whenever there is a change in parameters between application systems.

8.3.Database Testing
The database component is a critical piece of any data-enabled application. Today’s intricate mix of client-server and Web-enabled database applications is extremely difficult to test productively. Database testing includes the process of validation of database stored procedures, database APIs, database triggers, recovery, backup, security and database conversion. Testing at the data access layer is the point at which your application communicates with the database. Tests at this level are vital to improve not only your overall test strategy, but also your product’s quality.

8.4.Parallel Testing
· The process of comparing test results of processing production data concurrently in both the old and new systems; the other system is considered as the standard of comparison.
· Testing a new or an alternate data processing system with the same source data that is used in another system.
· A process in which both the old and new modules run at the same time, so that performance and outcomes can be compared and corrected prior to deployment.
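Parallel testing, as described above, amounts to feeding the same source data through both systems and comparing the outputs record by record. A minimal sketch follows; the two "systems" are hypothetical stand-in functions, not part of any real application.

```python
# Parallel testing sketch: run the same production data through the old
# and new implementations and report any mismatches. The old system is
# the standard of comparison. Both functions are invented stand-ins.

def old_system(amount):
    return round(amount * 1.05, 2)      # existing interest calculation

def new_system(amount):
    return round(amount * 1.05, 2)      # reimplementation under test

def run_parallel(records):
    """Compare old and new outputs for each record; return mismatches."""
    mismatches = []
    for record in records:
        expected, actual = old_system(record), new_system(record)
        if expected != actual:
            mismatches.append((record, expected, actual))
    return mismatches

source_data = [100.0, 250.50, 9999.99]
print(run_parallel(source_data))   # an empty list means the systems agree
```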

8.5.Ad-hoc Testing
“Testing carried out using no recognised test case design technique.” … BS7925-1
Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Sometimes ad-hoc testing is referred to as exploratory testing.

8.6.Manual Support Testing
Manual support testing involves all functions performed by the people in preparing data for, and using data from, the automated system. The method of testing may vary, but the objective remains the same:
· Verify that the manual-support procedures are documented and complete
· Determine that the manual-support responsibilities have been assigned
· Determine that manual-support people are adequately trained
Manual support testing involves, first, the evaluation of the adequacy of the process and, second, the execution of the process.

8.7.Configuration Testing
Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations, as well as on different operating systems and software.

8.8.Pilot Testing
Testing that involves the users just before actual release, to ensure that users become familiar with the release contents and ultimately accept it. It typically involves many users, is conducted over a short period of time and is tightly controlled. It is often considered a Move-to-Production activity for ERP releases, or a beta test for commercial products.

8.9.Automated Testing
Software testing that utilizes a variety of tools to automate the testing process, where the importance of having a person manually testing is diminished. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tool and the software being tested to set up the tests.
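An automated test in this sense is simply a scripted, repeatable check that a tool can rerun on every build without human intervention. A minimal sketch using Python's built-in `unittest` framework; the deposit-interest function is a made-up example, not from any real system.

```python
# Minimal automated test: a scripted check that a tool can rerun on
# every build. The function under test is invented for illustration.
import unittest

def deposit_interest(principal, rate):
    """Return the interest payable on a deposit, rounded to cents."""
    if principal < 0:
        raise ValueError("principal must be non-negative")
    return round(principal * rate, 2)

class DepositInterestTest(unittest.TestCase):
    def test_typical_deposit(self):
        self.assertEqual(deposit_interest(1000, 0.08), 80.0)

    def test_negative_principal_rejected(self):
        with self.assertRaises(ValueError):
            deposit_interest(-1, 0.08)

if __name__ == "__main__":
    # exit=False lets the script continue after the test run.
    unittest.main(exit=False, argv=["deposit_test"])
```

Once scripted, the same checks run identically every time, which is the consistency and repeatability argument made for automation above.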

8.10.Usability Testing
“Testing the ease with which users can learn and use a product.” … BS7925-1
All aspects of user interfaces are tested:
· Display screens
· Messages
· Report formats
· Navigation and selection problems

8.11.Stress and Volume Testing
“Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.” … BS7925-1
“Volume Testing: Testing where the system is subjected to large volumes of data.” … BS7925-1
Volume testing, as its name implies, purposely subjects a system (both hardware and software) to a series of tests where the volume of data being processed is the subject of the test. Such systems can be transaction processing systems capturing real-time sales, or could be database updates and/or data retrieval.

8.12.Environmental Testing
These tests check the system’s ability to perform at the installation site. Requirements might include tolerance for:
· Heat
· Humidity
· Chemical presence
· Portability
· Electrical or magnetic fields
· Disruption of power

8.13.Load Testing
Load testing involves stress testing applications under real-world conditions to predict system behavior and performance, and to identify and isolate problems. It is testing with the intent of determining how well a product performs when a load is placed on system resources that nears and then exceeds capacity. Load testing applications can emulate the workload of hundreds or even thousands of users, so that you can predict how an application will work under different user loads and determine the maximum number of concurrent users accessing the site at the same time.
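The load-testing idea above can be sketched with a thread pool that emulates concurrent users and measures elapsed time. This is a toy illustration under stated assumptions: the `transaction` function is a stand-in for the application under test, and a real load test would issue requests against the deployed system with a dedicated tool.

```python
# Load testing sketch: emulate many concurrent "users" exercising a
# transaction and measure how long the batch takes. The transaction is
# an invented stand-in for the application under test.
import time
from concurrent.futures import ThreadPoolExecutor

def transaction(user_id):
    time.sleep(0.01)               # simulate server-side processing time
    return f"ok:{user_id}"

def run_load(concurrent_users):
    """Run one transaction per simulated user; return (elapsed, failures)."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(transaction, range(concurrent_users)))
    elapsed = time.perf_counter() - start
    failures = sum(1 for r in results if not r.startswith("ok"))
    return elapsed, failures

elapsed, failures = run_load(50)
print(f"50 users served in {elapsed:.2f}s with {failures} failures")
```

Repeating the run with increasing user counts, and watching where elapsed time or failures climb sharply, approximates the "nears and then exceeds capacity" measurement described above.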

9. Roles & Responsibilities

9.1.Test Associate
Reporting To: Team Lead of a project
Responsibilities:
· Design and develop test conditions and cases with associated test data, based upon requirements
· Design test scripts
· Execute the test ware (conditions, cases, test scripts etc.) with the test data generated
· Review test ware, record defects, retest and close defects
· Preparation of reports on test progress

9.2.Test Engineer
Reporting To: Team Lead of a project
Responsibilities:
· Design and develop test conditions and cases with associated test data, based upon requirements
· Design test scripts
· Execute the test ware (conditions, cases, test scripts etc.) with the test data generated
· Review test ware, record defects, retest and close defects
· Preparation of reports on test progress

9.3.Senior Test Engineer
Reporting To: Team Lead of a project
Responsibilities:
· Responsible for collection of requirements from the users, evaluating the same and sending out for team discussion
· Preparation of the high level design document, incorporating the feedback received on the high level design document, and initiating the low level design document
· Assist in the preparation of the test strategy document and drawing up the test plan
· Preparation of business scenarios; supervision of test case preparation based on the business scenarios
· Maintaining the run details of the test execution; review of test conditions/cases and test scripts
· Defect management
· Preparation of test deliverable documents and defect metrics analysis report

9.4.Test Lead
Reporting To: Test Manager
Responsibilities:
· Technical leadership of the test project, including test approach and tools to be used
· Preparation of test strategy
· Ensure entrance criteria prior to test start-off
· Ensure exit criteria prior to completion sign-off
· Test planning, including automation decisions
· Review of design documents (test cases, conditions, scripts)
· Preparation of test scenarios, configuration management and quality plan
· Manage test cycles
· Assist in recruitment
· Supervise test team
· Resolve team queries/problems
· Report and follow up test system outages/problems
· Client interface
· Project progress reporting
· Defect management
· Staying current on latest test approaches and tools, and transferring this knowledge to the test team
· Ensure test project documentation

9.5.Test Manager
Reporting To: Management
Responsibilities:
· Liaison for interdepartmental interactions; representative of the testing team
· Client interaction
· Recruiting, staff supervision, and staff training
· Test budgeting and scheduling, including test-effort estimations
· Test planning, including development of testing goals and strategy
· Test process definition; test tool selection and introduction
· Coordinating pre- and post-test meetings
· Test program oversight and progress tracking
· Use of metrics to support continual test process improvement
· Test environment and test product configuration management
· Nomination for training; managing the process for training needs
· Cohesive integration of test and development activities
· Review of the proposal, if required

10. Test Preparation & Design Process

10.1.Baseline Documents
Construction of an application and testing are done using certain documents. These documents are written in sequence, each of them derived from the previous document.

10.1.1.Business Requirement
It describes the user’s needs for the application, and specifies the client’s business needs. This document is prepared over a period of time, going through various levels of requirements. As this contains user-perspective requirements, user acceptance testing is based on this document.

10.1.2.Functional Specification
“The document that describes in detail the characteristics of the product with regard to its intended capability.” … BS7925-1
The Functional Specification document describes the functional needs of the application. It is primarily derived from the Business Requirement document. It should also portray functionalities that are technically feasible within the stipulated time frames for delivery of the application. The proposed application should adhere to the specifications specified in this document. This is used henceforth to develop further documents for software construction, validation and verification of the software.

10.1.3.Design Specification
The Design Specification document is prepared based on the Functional Specification. It contains the system architecture, table structures and program specifications, covering the design of the flow and user-maintained parameters. This is ideally prepared and used by the construction team. The test team should also have a detailed understanding of the design specification in order to understand the system architecture.

10.1.4.System Specification
The System Specification document is a combination of the Functional Specification and the Design Specification. This is used in the case of a small application or an enhancement to an application.

10.1.5.Case Study on each document and reverse presentation

10.2.Traceability

10.2.1.BR and FS
The requirements specified by the users in the Business Requirement document may not be exactly translated into the Functional Specification. Therefore, a trace on specifications between the Functional Specification and the Business Requirements is done on a one-to-one basis.

This helps in finding the gaps between the two documents. These gaps are then closed by the author of the FS, or deferred after discussions. Testers should understand these gaps and use them as an addendum to the FS, after getting this signed off from the author of the FS. The final FS form may therefore vary from the original.

10.2.2.FS and Test conditions
Test conditions built by the tester are traced with the FS to ensure full coverage of the baseline document. While building these conditions, testers must keep in mind the rules specified in test condition writing. As explained, tracing is done from the Business Requirement to the FS, and from the FS to the test conditions. Simplifying the above:

A = Business Requirement
B = Functional Specification
C = Test conditions

Mathematically, A = B and B = C, therefore A = C. As Business Requirements and Test conditions are matched, it becomes evident that the business requirements, which are the user’s needs, are tested, thereby giving the customer an application which will satisfy their needs. Another way of looking at this process is that it eliminates as many mismatches as possible at every stage: there is a direct translation of specifications from the Business Requirement to the Test conditions, leaving a lesser amount of understandability loss.

10.3.Gap Analysis
This is the terminology used for finding the difference between “what it should be” and “what it is”. It is done on the Business Requirement to FS, and on the FS to test conditions. If gaps between these are obtained, the tester must then build conditions for the gaps, as deferring or taking in a gap may have a ripple effect on the application. Sometimes these ripple effects may not be reflected in the FS; addendums may sometimes affect the entire system and the test case development.

10.4.Choosing Testing Techniques
· The testing technique varies based on the project and the risks involved in the project.
· It is determined by the criticality of, and the risks involved with, the Application Under Test (AUT).
· The technique used for testing will be chosen based on the organizational need, the end user, and the critical risk factors or test factors that impact the system.
· The technique adopted will also depend on the phase of testing, and on the time and money spent on testing.
· The two factors that determine the test technique are:
o Test factors: the risks that need to be addressed in testing
o Test phases: the phase of the systems development life cycle in which testing will occur
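The traceability argument above (A = B and B = C, therefore A = C) is just transitivity over mappings between specification items, and a gap is any item left unmapped. A small sketch of such a gap check follows; all requirement and condition IDs are invented for illustration.

```python
# Traceability sketch: BR -> FS -> test conditions. If both mappings are
# gap-free, test conditions cover the business requirements (A = B and
# B = C, therefore A = C). All IDs are invented for illustration.

business_reqs = {"BR1", "BR2", "BR3"}
br_to_fs = {"BR1": "FS1", "BR2": "FS2", "BR3": "FS3"}
fs_to_conditions = {"FS1": ["TC1", "TC2"], "FS2": ["TC3"], "FS3": []}

def find_gaps():
    """Return BRs with no FS item, and FS items with no test conditions."""
    missing_fs = business_reqs - br_to_fs.keys()
    untested_fs = {fs for fs, conds in fs_to_conditions.items() if not conds}
    return missing_fs, untested_fs

missing_fs, untested_fs = find_gaps()
print("BRs without FS:", missing_fs)          # set(): every BR is specified
print("FS without conditions:", untested_fs)  # {'FS3'}: a coverage gap
```

Each gap reported this way is exactly the kind of mismatch that, per the gap-analysis section, must be closed by the author of the FS or deferred after discussion.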

10.5.Error Guessing
“A test case design technique where the experience of the tester is used to postulate what faults might occur, and to design tests specifically to expose them.” … BS7925-1

10.6.Error Seeding
“The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program.” … [IEEE]

10.7.Test Plan
This is a summary of the ANSI/IEEE Standard 829-1983. It describes a test plan as:
“A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.” … (ANSI/IEEE Standard 829-1983)
This standard specifies the following test plan outline:

10.7.1.Test Plan Identifier
§ A unique identifier

10.7.2.Introduction
§ Summary of the items and features to be tested
§ Need for and history of each item (optional)
§ References to related documents such as project authorization, project plan, QA plan, configuration management plan, relevant policies, relevant standards
§ References to lower level test plans

10.7.3.Test Items
§ Test items and their version
§ Characteristics of their transmittal media
§ References to related documents such as requirements specification, design specification, users guide, operations guide, installation guide
§ References to bug reports related to test items
§ Items which are specifically not going to be tested (optional)

10.7.4.Features to be Tested
§ All software features and combinations of features to be tested
§ References to test-design specifications associated with each feature and combination of features

10.7.5.Features Not to Be Tested
§ All features and significant combinations of features which will not be tested
§ The reasons these features won’t be tested

10.7.6.Approach
§ Overall approach to testing
§ For each major group of features or combinations of features, specify the approach
§ Specify major activities, techniques, and tools which are to be used to test the groups
§ Specify a minimum degree of comprehensiveness required
§ Identify which techniques will be used to judge comprehensiveness
§ Specify any additional completion criteria
§ Specify techniques which are to be used to trace requirements
§ Identify significant constraints on testing, such as test-item availability, testing-resource availability, and deadline

10.7.7.Item Pass/Fail Criteria
§ Specify the criteria to be used to determine whether each test item has passed or failed testing

10.7.8.Suspension Criteria and Resumption Requirements
§ Specify criteria to be used to suspend the testing activity
§ Specify testing activities which must be redone when testing is resumed

10.7.9.Test Deliverables
§ Identify the deliverable documents: test plan, test design specifications, test case specifications, test procedure specifications, test item transmittal reports, test logs, test incident reports, test summary reports
§ Identify test input and output data
§ Identify test tools (optional)

10.7.10.Environmental Needs
§ Specify the level of security required
§ Identify special test tools needed
§ Specify necessary and desired properties of the test environment: physical characteristics of the facilities including hardware, communications and system software, the mode of usage (i.e. stand-alone), and any other software or supplies needed
§ Identify any other testing needs
§ Identify the source for all needs which are not currently available

10.7.11.Testing Tasks
§ Identify tasks necessary to prepare for and perform testing
§ Identify all task interdependencies
§ Identify any special skills required

10.7.12.Responsibilities
§ Identify groups responsible for managing, designing, preparing, executing, witnessing, checking and resolving
§ Identify groups responsible for providing the test items identified in the Test Items section
§ Identify groups responsible for providing the environmental needs identified in the Environmental Needs section

10.7.13.Staffing and Training Needs
§ Specify staffing needs by skill level
§ Identify training options for providing necessary skills

10.7.14.Schedule
§ Specify test milestones
§ Specify all item transmittal events
§ Estimate time required to do each testing task
§ Schedule all testing tasks and test milestones
§ For each testing resource, specify its periods of use

10.7.15.Risks and Contingencies
§ Identify the high-risk assumptions of the test plan
§ Specify contingency plans for each

10.7.16.Approvals
§ Specify the names and titles of all persons who must approve the plan
§ Provide space for signatures and dates

10.8.High Level Test Conditions / Scenario
It represents the possible values that can be attributed to a particular specification. The importance of determining the conditions lies in:
· Deciding the architecture of the testing approach
· Evolving the design of the test scripts
· Ensuring coverage
· Understanding the maximum conditions for a specification
At this point the tester will have a fair understanding of the application and his module. The functionality can be broken into:
· Field level rules
· Module level rules
· Business rules
· Integration rules
· Processing logic
It may not be possible to segment the specifications into the above categories in all applications. It is left to the test team to decide on the application segmentation. For the segments identified by the test team, the possible condition types that can be built are:
· Positive condition - polarity of the value given for test is to comply with the condition existence.
· Negative condition - polarity of the value given for test is not to comply with the condition existence.
· Boundary condition - polarity of the value given for test is to assess the extreme values of the condition.
· User perspective condition - polarity of the value given for test is to analyze the practical usage of the condition.
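For a concrete field-level rule, the condition types above translate directly into test values. A small sketch follows, assuming a hypothetical rule that a deposit tenor must be between 1 and 60 months inclusive; the rule and all values are invented for illustration.

```python
# Condition types sketched against a made-up field-level rule:
# tenor must be between 1 and 60 months inclusive.

def tenor_is_valid(months):
    return 1 <= months <= 60

conditions = {
    "positive": [12],             # complies with the rule
    "negative": [0, 61],          # does not comply with the rule
    "boundary": [1, 60],          # extreme values of the rule
    "user_perspective": [3, 6],   # tenors customers typically request
}

for kind, values in conditions.items():
    for v in values:
        verdict = "valid" if tenor_is_valid(v) else "invalid"
        print(kind, v, verdict)
```

Enumerating values this way, one set per condition type, is what ensures the coverage that the section above asks the tester to demonstrate.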

10.8.2.Data Definition
In order to test the conditions and values that are to be tested, the application should be populated with data. Application may have its own hierarchy of data structure which is interconnected. There are two ways of populating the data into the tables of the application:
· Unintelligent: Data is populated in mass, corresponding to the table structures. Its values are chosen at random and not with reference to the conditions derived. It will be difficult for the tester to identify his requirements from the mass data. This type of population can be used for testing the performance of the application and its behavior to random data.
· Intelligent: Data is tailor-made for every condition and value, having reference to its condition. By constructing such intelligent data, few data records will suffice for the testing process.
Example: Business rule - if the interest to be paid is more than 8 % and the tenor of the deposit exceeds one month, then the system should give a warning. In mass data, finding a suitable record with interest exceeding 8 % and a tenor of more than two months is difficult. With intelligent data, to populate the interest-to-be-paid field of a deposit we can give 9.5478 and make the tenor two months for a particular deposit. This will trigger the warning in the application.
Having now understood the difference between intelligent and unintelligent data, and at this point having a good idea of the application, the tester should be able to design intelligent data for his test conditions, using:
· Data Sheet format (ISO template)
· Exercise with the live application
· Test Case

10.8.3.Feeds Analysis
Most applications are fed with inputs at periodic intervals, like end of day or every hour etc. Some applications may be stand-alone, i.e. all processes will happen within their database and no external inputs of processed data are required. Usually, feeds received from other machines are sent in a format corresponding to the table structures, and a document is published in this regard. These feeds, at the application end, will be processed by local programs and populated in the respective tables. These will aid in triggering certain actions by the application. It is therefore essential for testers to understand the data mapping between the feeds and the database tables of the application. In the case of applications having feeds, the high level data designed previously should be translated into the feed formats.

10.9.Test Case
“A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.” … BS7925-1

Test cases are written based on the test conditions. A test case is the phrased form of a test condition, which becomes readable and understandable by all. While writing a test case, the tester should include the following:
· Reference to the rules and specifications under test, in words with minimal technical jargon
· Checks on data shown by the application should refer to the table names if possible
· Names of the fields and screens should also be explicit
Pre-requirements for the test to be executed should also be clearly mentioned.

There are three headings under which a test case is written:
· Description: Here the details of the test on a specification or a condition are written.
· Data and Pre-requirements: Here either the data for the test or the specification is mentioned.
· Expected results: The expected result on the execution of the instruction in the description is mentioned.

10.9.1.Expected Results
The outcome of executing an instruction would have a single or multiple impacts on the application. The resultant behavior of the application after execution is the expected result. In general, to make the test case explicit, it should reflect in detail the result of the test execution. The results expressed should be clear and have only one interpretation possible. Language used in the expected results should not have ambiguity. It is advisable to use the term “should” in the expected results.

10.9.1.1.Single Expected Result
It has a single impact on the instruction executed.
Example:
Test Case Description: Click on the hyperlink “New deposit” at the top left hand corner of the main menu screen.
Expected result: New time deposit screen should be displayed.

10.9.1.2.Multiple Expected Results
It has multiple impacts on executing the instructions.
Example:
Test Case Description: Click on the hyperlink “New deposit” at the top left hand corner of the main menu screen.
Expected result: New time deposit screen should be displayed & Customer contact date should be pre-filled with the system date.
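Written as an automated check, the three headings of a test case map naturally onto code: the description becomes the test name, the data and pre-requirements become the setup, and the expected result becomes an assertion. A sketch of the single-expected-result example above; the `MainMenu` object is a hypothetical stand-in for the application under test.

```python
# A test case rendered as code: description -> test name, data and
# pre-requirements -> setup, expected result -> assertion. MainMenu is
# an invented stand-in for the application under test.

class MainMenu:
    def click_link(self, name):
        # Stand-in behaviour: "New deposit" opens the new time deposit
        # screen; any other link stays on the main menu.
        return "New Time Deposit" if name == "New deposit" else "Main Menu"

def test_new_deposit_link_opens_deposit_screen():
    # Description: click the "New deposit" hyperlink on the main menu.
    screen = MainMenu().click_link("New deposit")
    # Expected result: new time deposit screen should be displayed.
    assert screen == "New Time Deposit"

test_new_deposit_link_opens_deposit_screen()
print("test passed")
```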

10.9.2.Data definition
Data for executing the test cases should be clearly defined in the test cases. It should indicate the values that will be entered into the fields, and also indicate the default values of the fields.
Example:
Description: Enter client’s name
Data: John Smith
(OR)
Description: Check the default value of the interest for the deposit
Data: $ 400
In the case of calculations involved, the test cases should indicate the calculated value in the expected results of the test case. This value ($ 400) should be calculated using the formula specified, well in advance, while designing the data.

10.9.3.Pre-requirements
Test cases cannot normally be executed with the normal state of the application. Below is the list of possible pre-requirements that could be attached to a test case:
· Enable or disable external interfaces
o Example: Reuters, the foreign exchange rate information service organization’s server, to be connected to the application.
· Dates that are to be maintained (pre-date or post-date) in the database before testing, as it is sometimes not possible to predict the dates of testing, and to populate certain date fields when they are to trigger certain actions in the application.
o Example: Maturity date of a deposit should be the date of test.
· Change values if required to trigger an action by the application
o Example: Change the value of the interest for a deposit so as to trigger a warning by the application.
· Deletion of certain records to trigger an action by the application
o Example: A document availability indicator field to be made null, in order to trigger a warning.
· Time at which the test case is to be executed
o Example: Test to be executed after 2.30 p.m.

11. Test Execution Process

The preparation to test the application is now over. The test team should next plan the execution of the test on the application. The details of the execution are documented in the test plan:
o Test Plan – Internal
o Test Execution Sequence
Test cases can either be executed in a random format or in a sequential fashion. Some applications have concepts that would require sequencing of the test cases before actual execution, as one module would populate or formulate information required for another. Sequencing can also be done on the modules of the application.

11.1.Pre-Requirements

11.1.1.Unit testing sign off
· To begin an integrated test on the application, the development team should have completed tests on the software at unit level. Unit testing focuses verification effort on the smallest unit of software design. Using the design specification as a guide, important control paths and field validations are tested.
· Clients and the development team must sign off this stage, and hand over the signed-off application with the defect report to the testing team.

11.1.2.Version Identification Values
The application would contain several program files for it to function. The version of these files, and a unique checksum number for each file, is a must for change management. These numbers will be generated for every program file on transfer from the development machine to the test environment. The number attributed to each program file is unique, and if any change is made to a program file between the time it is transferred to the test environment and the time it is transferred back to development for correction, it can be detected by using these numbers. This helps in identifying unauthorized transfers or usage of application files by both parties involved. These identification methods vary from one client to another. These values have to be obtained from the development team by the test team; the responsibilities of acquiring, comparing and tracking them before and after each software transfer lie with the test team.

11.1.3.Interfaces for the application
In some applications, external interfaces may have to be connected or disconnected. Actual navigation to and from an interface may not be covered in black box testing. In both cases the development team should certify that the application would function in an integrated fashion.
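The version-identification scheme described above can be implemented with an ordinary cryptographic checksum: record a hash for each program file on transfer to the test environment, and recompute it when the file comes back. A sketch using Python's standard `hashlib`; the file name and contents are invented for illustration.

```python
# Version identification sketch: record a checksum for each program file
# on transfer to the test environment; any later change to the file
# changes the checksum and is therefore detectable.
import hashlib

def checksum(data: bytes) -> str:
    """Return a unique-in-practice fingerprint of the file contents."""
    return hashlib.sha256(data).hexdigest()

# On transfer from development to test, record each file's checksum.
transferred = {"deposit.prg": checksum(b"PROC calc_interest ...")}

# When the file is returned for correction, recompute and compare.
returned = b"PROC calc_interest ... UNAUTHORISED EDIT"
changed = checksum(returned) != transferred["deposit.prg"]
print("deposit.prg modified:", changed)   # → deposit.prg modified: True
```

In practice the same comparison would run over every program file in the transfer, flagging exactly the unauthorized changes the section describes.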

11.1.4.Test Case Allocation
The test team should decide on the resources that would execute the test cases. Ideally, the tester who designed the test cases for a module executes the test. In some cases, due to time or resource constraints, additional test cases might have to be executed by other members of the team. Clear documentation of responsibilities should be included in the test plan. Test cases are allocated among the team and across different phases. All test cases may not be executed in the first pass. Some of the reasons for this could be:
· Functionality may sometimes be introduced at a later stage and the application may not support it, or the test team may not be ready with the preparation
· External interfaces to the application may not be ready
· The client might choose to deliver some part of the application for testing, and the rest may be delivered during other passes

Targets for completion of Phases
Time frames for the passes have to be decided and committed to the clients well in advance of the start of the test. Some of the factors considered for doing so are:
· Number of cases/scripts: Depending on the number of test scripts and the resources available, completion dates are prepared.
· Complexity of testing: In some cases the number of test cases may be small, but the complexity of the test may be a factor. The testing may involve time-consuming calculations or responses from external interfaces etc.
· Number of errors: This is done very exceptionally. Pre-IST testing is done to check the health of the application soon after the preparations are done. The number of errors that were reported should be taken as a benchmark.

11.2.Stages of Testing

11.2.1.Comprehensive Testing - Round I
All the test scripts developed for testing are executed. In some cases the application may not have certain module(s) ready for test; these will be covered comprehensively in the next pass. The testing here should cover not only all the test cases but also the business cycles as defined in the application.

11.2.2.Discrepancy Testing - Round II
All the test cases that resulted in a defect during the comprehensive pass should be executed again; in other words, all defects that have been fixed should be retested. Function points that may be affected by the defect should also be taken up for testing. This type of testing is called regression testing. Test cases for defects that are not yet fixed are executed only after the fixes are delivered.
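The selection rule described above (re-run every test case that raised a now-fixed defect, plus the test cases covering function points the fix may affect) can be sketched as a small selection step. All identifiers and data structures below are invented for illustration; they are not part of any tool mentioned in this book.

```python
# Sketch: selecting test cases for the discrepancy (regression) round.
# The data shapes and ids here are illustrative assumptions.

def select_round2_cases(defects, case_to_function_points, impact_map):
    """Return the set of test case ids to execute in Round II."""
    selected = set()
    for defect in defects:
        if defect["status"] != "Fixed":
            continue  # unfixed defects are retested only after a fix arrives
        selected.add(defect["test_case_id"])
        # Also pick up cases covering function points the fix may affect.
        for fp in impact_map.get(defect["test_case_id"], []):
            for case_id, fps in case_to_function_points.items():
                if fp in fps:
                    selected.add(case_id)
    return selected

defects = [
    {"id": "D1", "test_case_id": "TC3", "status": "Fixed"},
    {"id": "D2", "test_case_id": "TC7", "status": "Raised"},
]
case_to_function_points = {"TC3": {"FP1"}, "TC4": {"FP2"}, "TC5": {"FP1"}}
impact_map = {"TC3": ["FP1"]}  # fixing D1 touches function point FP1

print(sorted(select_round2_cases(defects, case_to_function_points, impact_map)))
# ['TC3', 'TC5'] -- TC3 raised the defect, TC5 shares FP1; TC7's defect is unfixed
```

In practice the impact mapping would come from the traceability matrix rather than a hand-built dictionary.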

11.2.3.Sanity Testing - Round III
This is the final round in the test process. It is done either at the client's site or at Maveric, depending on the strategy adopted, in order to check whether the system is sane enough for the next stage (UAT or production, as the case may be) under an isolated environment. Ideally, the defects fixed in the previous phases are re-verified, and testing is done to ensure the integrity of the application.

12.Defect Management
12.1.Defect – Definition

"Error: A human action that produces an incorrect result." … [IEEE]
"Fault: A manifestation of an error in software. A fault, if encountered, may cause a failure." … [BS7925-1]
"Failure: Deviation of the software from its expected delivery or service." … [BS7925-1]

A deviation from expectation that is to be tracked and resolved is termed a defect. An evaluation of the defects discovered during testing provides the best indication of software quality. Quality is the indication of how well the system meets the requirements, so in this context defects are identified as any failure to meet the system requirements.

Error: An undesirable deviation from requirements. Any problem, or cause of many problems, that stops the system from performing its functionality is referred to as an error.

Bug: An error found BEFORE the application goes into production. Any missing functionality, or any action performed by the system that is not supposed to be performed, is a bug. Any of the following may be the reason for the birth of a bug:
1. Wrong functionality
2. Missing functionality
3. Extra or unwanted functionality

Defect: An error found AFTER the application goes into production. A defect is a variance from the desired attribute of a system or application, commonly categorized into two types:
1. Deviation from product specification
2. Variance from customer/user expectation

Failure: The absence of an expected response to a request; any expected action that does not happen can be referred to as a failure.

Fault: Generally used in hardware terminology; a problem that causes the system not to perform its task or objective.

12.2.Types of Defects

Defects detected by the tester are classified into categories by the nature of the defect:
· Showstopper: A defect which may be very critical in terms of affecting the schedule, or which stops the user from using the system further.
· Major: A defect where functionality or data is affected significantly, but which does not cause a showstopping condition or block the test process cycle.
· Minor: A defect which is isolated or does not stop the user from proceeding, but causes inconvenience. Cosmetic errors also feature in this category.

12.3.Defect Reporting
Defects or bugs detected in the application by the tester must be duly reported through an automated tool. The particulars to be filled in by the tester are:
· Defect ID: Number associated with a particular defect, henceforth referred to by its ID
· Date of execution: The date on which the test case that resulted in the defect was executed
· Defect category: Explained in the next section; ideally decided by the test leader
· Severity: As explained above, this can be Show-stopper, Major or Minor
· Module ID: Module in which the defect occurred
· Status: Raised, Authorized, Deferred, Fixed, Re-raised, or Closed
· Defect description: How the defect was found, the exact steps that should be taken to simulate the defect, and other notes and attachments if any
· Test case reference no.: The test case and script combination which resulted in the defect
· Owner: The name of the tester who executed the test case
· Test case description: The instructions in the test case for the step in which the error occurred
· Expected result: The expected result after executing the instructions in the test case description
· Attachments: The screen shot showing the defect should be captured and attached
· Responsibility: The member of the development team identified for fixing the defect
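The particulars above map naturally onto a structured record. A minimal sketch in Python follows, with field names taken from the list; the class itself is illustrative and not part of any tool described in this book.

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    SHOW_STOPPER = "Show-stopper"
    MAJOR = "Major"
    MINOR = "Minor"

class Status(Enum):
    RAISED = "Raised"
    AUTHORIZED = "Authorized"
    DEFERRED = "Deferred"
    FIXED = "Fixed"
    RERAISED = "Re-raised"
    CLOSED = "Closed"

@dataclass
class DefectRecord:
    """One reported defect; fields mirror the reporting particulars above."""
    defect_id: str
    date_of_execution: str
    defect_category: str
    severity: Severity
    module_id: str
    description: str
    test_case_ref: str
    owner: str
    test_case_description: str
    expected_result: str
    responsibility: str
    status: Status = Status.RAISED        # a new defect starts as Raised
    attachments: list = field(default_factory=list)

d = DefectRecord(
    defect_id="D-104", date_of_execution="2005-08-10",
    defect_category="Functional", severity=Severity.MAJOR,
    module_id="LOGIN", description="Login accepts an expired password",
    test_case_ref="TC-LOGIN-07", owner="tester1",
    test_case_description="Attempt login with an expired password",
    expected_result="Login rejected with a renewal prompt",
    responsibility="dev-lead",
)
print(d.defect_id, d.severity.value, d.status.value)  # D-104 Major Raised
```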

12.4.Tools Used
The tools used to track and report defects are:

12.4.1.ClearQuest (CQ)
ClearQuest belongs to the Rational Test Suite and is an effective tool for defect management. CQ functions on a native Access database and maintains a common database of defects. With CQ the entire defect process can be customized: for example, a process can be designed in such a manner that a defect, once raised, must be authorized and then fixed before it can attain the status of retesting. Such a systematic defect flow process can be established, and the history for it can be maintained. Graphs and reports can be customized, and metrics can be derived from the maintained defect repository.

12.4.2.TestDirector (TD)
TestDirector is an automated test management tool developed by Mercury Interactive that helps to organize and manage all phases of the software testing process, including planning, creating tests, executing tests, and tracking defects. Apart from manual test execution, the WinRunner automated test scripts of the project can also be executed directly from TestDirector: TestDirector activates WinRunner, runs the tests, and displays the results. TestDirector also enables us to manage user access to a project by creating a list of authorized users and assigning each user a password and a user group, such that perfect control can be exercised over the kinds of additions and modifications a user can make to the project. It is used:
· To report defects detected in the software
· As a sophisticated system for tracking software defects
· To monitor defects closely from initial detection until resolution
· To analyze the testing process by means of various graphs and reports

12.4.3.Defect Tracker
Defect Tracker is a tool developed by Maveric Systems Ltd., an independent software testing company in Chennai, for defect management. This tool is used by the testing team to manage, track and report defects effectively.

12.5.Defects Publishing
Defects that are authorized are published in a mutually accepted medium, such as the Internet, or by sending the issue by email. The reports that are published are:
· Daily status report
· Summarized defect report for the individual domain / product, if any
· Final defect report

12.6.Defects Meetings
Meetings are conducted at the end of every day between the test team and the development team to discuss test execution and defects; here, defect categorizations are done. Before meetings with the development team, the test team should have internal discussions with the test lead on the defects reported. The process ensures that all defects are accurate and authentic to the best knowledge of the test team.

Test down times
During the execution of the test, schedules prepared earlier may slip based on certain factors; time lost due to these should be duly recorded by the test team.
· Server problems: the test team may come across problems with the server on which the application is deployed. Possible causes are:
  o The main server on which the application runs may have problems with the number of instances on it, slowing down the system
  o Networking to the main server, or the internal network, may go down
· Software compatibility with the application and middleware, if any, may cause concerns delaying the test start
· New versions of databases or middleware may not be fully compatible with the applications
· Improper installation of system applications may cause delays
· Interfaces with applications may not be compatible with the existing hardware setup
· Problems on the testing side / development side: delays can also come from the test or development teams
· Data designed may not be sufficient or compatible with the application (missing some parameters of the data)
· Maintenance of the parameters may not be sufficient for the application to function
· The version transferred for testing may not be the right one

12.7.Defect Life Cycle
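The defect statuses listed under Defect Reporting (Raised, Authorized, Deferred, Fixed, Re-raised, Closed) imply a life cycle. The transition map below is an illustrative assumption, not a reproduction of the original diagram for this section.

```python
# Illustrative defect life cycle; the allowed transitions are an assumption,
# not taken from the original diagram.
TRANSITIONS = {
    "Raised":     {"Authorized", "Deferred"},
    "Authorized": {"Fixed", "Deferred"},
    "Deferred":   {"Authorized"},
    "Fixed":      {"Closed", "Re-raised"},  # retest passes -> Closed, fails -> Re-raised
    "Re-raised":  {"Fixed"},
    "Closed":     set(),                    # terminal state
}

def move(status, new_status):
    """Apply one life-cycle transition, rejecting illegal ones."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

s = "Raised"
for nxt in ("Authorized", "Fixed", "Re-raised", "Fixed", "Closed"):
    s = move(s, nxt)
print(s)  # Closed
```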

13.Test Closure Process

13.1.Sign Off
Sign-off criteria: in order to acknowledge the completion of the test process and certify the application, the following have to be completed:
· All passes have been completed
· All test cases have been executed
· All defects raised during test execution have either been closed or deferred

13.2.Authorities
The following personnel have the authority to sign off the test execution process:
· Client: The owners of the application under test
· Project Manager: Maveric personnel who managed the project
· Project Lead: Maveric personnel who managed the test process

13.3.Deliverables
The following are the deliverables to the clients:
· Test Strategy
· High Level Test Conditions or Scenarios and Test Conditions document
· Consolidated Defect Report
· Weekly Status Report
· Traceability Matrix
· Test Acceptance/Summary Report

13.4.Metrics

13.4.1.Defect Metrics
Analysis of the defect report is done for management and client information. The analysis of defects can be based on the severity, occurrence and category of the defects, and further metrics can be derived from the various components of defect management. As an example, defect density is a metric which gives the ratio of defects in a specific module to the total defects in the application; this gives a fair idea of the defect set to be included for a smoke test during regression.
Defect age: Defect age is the time duration between the point of introduction of a defect and the point of closure of the defect.

13.4.2.Test Management Metrics
Analysis of test management is done for management and client information. These are categorized as:
· Schedule: Schedule variance is a metric determined by the ratio of the planned duration to the actual duration of the project.
· Effort: Effort variance is a metric determined by the ratio of the planned effort to the actual effort exercised for the project.

13.5.Debriefs with Test Team
Completion of a project gives knowledge enrichment to the team members. Both polarities of the knowledge, positive and negative, should be shared with the management and peer groups.
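The metrics defined in this section are simple ratios and durations. A sketch, reading the formulas literally from the definitions above (the sample numbers are invented for illustration):

```python
from datetime import date

def defect_density(module_defects, total_defects):
    """Ratio of defects in a specific module to total defects in the application."""
    return module_defects / total_defects

def defect_age(introduced, closed):
    """Days from the introduction of a defect to its closure."""
    return (closed - introduced).days

def schedule_variance(planned_duration, actual_duration):
    """Ratio of planned duration to actual duration of the project."""
    return planned_duration / actual_duration

def effort_variance(planned_effort, actual_effort):
    """Ratio of planned effort to actual effort exercised for the project."""
    return planned_effort / actual_effort

print(defect_density(12, 48))                           # 0.25
print(defect_age(date(2005, 7, 1), date(2005, 7, 15)))  # 14
print(schedule_variance(30, 40))                        # 0.75
print(effort_variance(100, 125))                        # 0.8
```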

14.Testing Activities & Deliverables

The process of testing involves a sequence of phases with several activities in each phase. Each phase of testing has various documents to be maintained that track the progress of testing activities and help for future reference. The testing deliverables of different phases are significant for monitoring the testing process and for process improvement. They play a significant role in:
· Identifying and prioritizing improvement areas
· Analyzing the results, the variability, and current strengths and weaknesses, and indicating improvement areas
· Listing improvement areas
· Analyzing effectiveness measurements

14.1.Test Initiation Phase
The various activities happening during the Test Initiation phase are:
· Functional Point Analysis
· Risk Analysis
· Effort Estimation
· Proposal preparation and submission
· Discussion with Client
· Contract sign-off

14.2.Test Planning Phase
The various activities happening during the Test Planning phase are:
· Team formation and task allocation
· Application understanding
· Preparation of Clarification document
· Internal presentation
· Client presentation
· Assessing and prioritizing risk
· Preparation of Test Schedule (effort estimation)
· Preparation of Test Strategy document (using the Maveric template)
· Identification of test phases, test activities and dependencies

14.3.Test Design Phase
The various activities happening during the Test Design phase are:
· Environment set-up for testing by the IT department
· Preparation of Test Conditions, Test Scripts and Test Data
· Preparation of Traceability Matrix
· Preparation of Daily and Weekly status reports
· Approval of design documents by the Client

14.4.Test Execution & Defect Management Phase
The various activities happening during Test Execution and Defect Management are:
· Environment check-up
· Test data population
· Execution and defect management:
  o Comprehensive (Round 1)
  o Discrepancy (Round 2)
  o Sanity (Round 3)
· Preparation of Daily and Weekly status reports
· Defect Analysis
· Preparation of Consolidated Defect Report

14.5.Test Closure Phase
The various activities happening during Test Closure are:
· Final Testing Checklist
· Preparation of Final Test Summary Report
· Test Deliverables
· Project De-brief
· Project Analysis Report

15.Maveric Systems Limited

15.1.Overview
Maveric Systems is an independent, specialist software testing service provider. Our five founding Directors took to entrepreneurship while they were at the prime of their careers in business and technology consulting. A core group of anchor clients have propelled us to become a key independent software testing firm in India within a short span of three years. A Maveric spirit and collective vision is nurturing our unique culture and driving us to relentlessly deliver value to our clients.
We bring a fresh perspective and rigor to testing engagements by virtue of our dedicated focus. Domain expertise in chosen areas enables us to understand client requirements quickly and completely, and to offer a value-added testing service. Complementing our core service offering in testing is a strong capability in strategic technical writing and risk management (verification) services. Our forte lies in the banking, financial services and insurance verticals; we have also delivered projects in the telecom and healthcare domains.
· Maveric Systems is an independent software testing company
· Delivery hubs in India and UK
· 185 professionals on projects across Chennai, Bangalore, Delhi, Mumbai, Hyderabad, London, Dallas, Chicago, Singapore, Melbourne and Tokyo
· Primary Focus - Banking, Financial Services and Insurance
· New Focus Areas - Telecom and Manufacturing sectors

15.2.Quality Policy
"We will significantly enhance the functionality, usability, and performance of IT solutions deployed by our clients."

15.3.Leadership Team
Exceptional management bandwidth is a unique strength that has fuelled Maveric's aggressive growth.
· Ranga Reddy, CEO / P K Bandyopadhyay, Manager - Testing
· Mahesh VN, Director / Rosario Regis, Manager - Technical Writing
· Kannan Sethuraman, Principal / Sajan CK, Manager - Testing
· Venkatesh P, Director / Hari Narayana, Manager - Testing
· Subramanian NN, Director / AP Narayanan, Manager - Fulfillment

15.4.Testing Process / Methodology
Each phase below is described in terms of its Input, Procedure, Output and Key Signoff.
15.4.1.Test Initiation Phase

Input
· Signed proposal

Procedure
· Arrange an internal kick-off meeting among the team members, Test Manager, Lead - Commercial and Lead - Operations
· Distribute the baseline documents based on the individual roles defined
· Prepare a micro-level schedule indicating the roles allocated for all team members, with timelines, in MPP
· Fill in the Project Details form and the Top Level Project Checklist

Output
· Minutes of meeting
· MPP to be attached to Test Strategy (Test Planning process)
· Project Details form
· Top Level Project Checklist to Test Delivery Management

15.4.2.Test Planning Phase


Input
· Baseline documents
· MPP

Procedure
· Team members understand the functionality from the baseline documents
· Raise and obtain resolution of clarifications
· Internal presentation and reverse presentation to the client
· TL defines the test environment and requests the client for the same
· TL identifies the risks associated and monitors the project-level risks throughout the project; the risks at the project delivery level are maintained by the Lead - Ops
· TL prepares the Test Strategy; review is by Lead - Commercial, Lead - Ops and Test Manager
· AM revises commercials if there is a marked difference between the Test Strategy and the Proposal
· TL prepares the Configuration Management and Quality Plan; review is by Lead - Ops and Test Manager

Output
· Clarification document
· Test Environment Request to client
· Risk Analysis document to Test Delivery Management

15.4.3.Test Design Phase

Input
· Test Strategy

Procedure
· Team members prepare the Test Conditions, Test Cases and Test Scripts
· TL prepares the Test Scenarios (free form)
· Review of the above by the Lead - Ops and Test Manager; client review records are also maintained; Lead - Ops is responsible for sign-off
· Team members prepare the Traceability Matrix if agreed in the Test Strategy, updated during Test Execution with defect IDs
· Team members prepare Data Guidelines whenever required
· TL sends daily status reports to clients and the Test Delivery Management team
· TL sends weekly status reports to clients, Test Manager, the delivery management team and the Account Manager
· TL escalates any changes in baseline documents to the Delivery Management team

Output
· Test Condition/Test Case document, Test Script, Test Scenarios (free form)
· Traceability Matrix to the client
· Daily and Weekly status reports to client, Test Delivery Management and Account Management
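The Traceability Matrix delivered in this phase links each baseline requirement to the conditions and cases that cover it, and is later updated with defect IDs during execution. A minimal sketch follows; the identifiers are invented for illustration.

```python
# Illustrative traceability matrix: requirement -> covering test cases -> defect ids.
matrix = {
    "REQ-01": {"cases": ["TC-01", "TC-02"], "defects": []},
    "REQ-02": {"cases": ["TC-03"],          "defects": ["D-104"]},
    "REQ-03": {"cases": [],                 "defects": []},  # coverage gap
}

# Coverage check: requirements with no covering test case need attention
# before execution starts.
uncovered = [req for req, row in matrix.items() if not row["cases"]]
print(uncovered)  # ['REQ-03']
```

Keeping the defect IDs alongside each requirement is what lets the closure phase confirm that every requirement touched by a defect was re-verified.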

15.4.4.Execution and Defect Management Phase

15.4.4.1.Test Execution Process

15.4.4.2.Defect Management Process

Input
· Test Conditions/Test Cases document
· Test Scenarios document
· Traceability Matrix

Procedure
· Validate the test environment and deal with any issues
· Execute the first rounds of testing
· Update the Test Condition/Test Case document (and the Test Scripts, if prepared) with actual results and test status
· Log defects in the Defect Report; consolidate all defects and send to the client, Test Manager and the delivery management team
· Conduct defect review meetings with the client (as specified in the Test Strategy)
· Consolidate the Test Conditions/Test Cases to be executed in the subsequent round in a separate document; no review or sign-off is required
· Carry out the test on the new version of the application
· Escalate changes to baseline or scope of work to Lead - Ops
· Complete the rounds/stages of testing as agreed in the Test Strategy
· Send daily and weekly status reports to clients and the Test Delivery Management team
· Escalate any changes in baseline documents to the Delivery Management team

Output
· Defect Report
· Daily and Weekly status reports to the client, Test Delivery Management and Account Management

15.4.5.Test Closure Phase

Input
· Consolidated Defect Report

Procedure
· The Team Lead, in consultation with Lead - Ops, decides about closure of a project (both complete and incomplete)
· In case of incomplete testing, decisions whether to close and release deliverables are taken by the delivery management team
· The team prepares the Quantitative Measurements
· TL prepares the Final Testing Checklist and Lead - Ops approves the same
· TL prepares the Final Test Report; Lead - Ops and Test Manager review the same
· Test Manager, Account Manager, Lead - Ops, Lead - Comm., TL and team members carry out the de-brief meeting; effort variances and % compliance to schedule are documented in the Project De-brief form; if required, inputs are given to the Quality Department for process improvements
· Internal review records and review records of clients are also stored

Output
· Final Testing Checklist and Final Test Report to the client
· Project De-brief to Quality Department

15.5.Test Deliverables Template

15.5.1.Project Details Form
Name of Project:
Client Location:
Contact at Client Location: Name / Designation / Email / Phone / Mobile
Project In-charge (Testing): Name / Designation / Email / Phone / Mobile
Project In-charge (Development): Name / Designation / Email / Phone / Mobile
Domain of the Project:
Duration of Testing: From / To
Level of Testing: White Box Testing / Black Box Testing / Manual Testing / Automation Testing
Type of Testing: Functional Testing / Regression Testing / Performance Testing
Automation tools, if any:
Defect tracking / project management tools used, if any:
Test Summary:

Application Summary
· Application Overview
· OS with version
· Database / Backend
· Middleware
· Front-end
· Software languages used

Module Summary
· Module Name
· Description

Testers Summary

15.5.2.Minutes of Meeting
Meeting Topic:
Host of the Meeting:
Participants:
Absentees:
Previous Meeting Follow-up:
Meeting Time:
Date and Venue of Meeting:
Minutes of the Meeting (detailed; attach additional sheets if required):
Corrective and Preventive Actions with Target Date:
Prepared By: / Date:
Approved By: / Date:

15.5.3.Top Level Project Checklist

15.5.4.Test Strategy Document

15.5.5.Configuration Management and Quality Plan

15.5.6.Test Environment Request
Project Code: / Project Name:
Application Version No:
Type of Testing:
Prepared By: / Date:
Approved By: / Date:

Client side
· Hardware details (RAM, hard disk capacity, etc.)
· Operating System
· Client software to be installed (front-end language, tools)
· Browser support
· Internet connection
· Automation tools to be installed
· Utility software (e.g. Toad, MS Project)

Server side – Middle Tier
· Hardware platform, no. of CPUs, RAM, hard disk capacity
· Operating System
· Software
· Number of servers
· Location of servers

Server side – Database
· Hardware platform, no. of CPUs, RAM, hard disk capacity
· Operating System
· Software, with version
· Number of database servers
· Location of database servers

15.5.7.Test Condition / Test Case Document

15.5.8.Test Script Document

15.5.9.Risk Analysis Document

15.5.10.Clarification Document

15.5.11.Traceability Matrix

15.5.12.Daily Status Report
Project Code: / Project Name:
Phase of the Testing Life Cycle: / Application Version No: / Round: / Report Date:
Highlights of the Day:
A1. Design Phase: Module | No. of Test Conditions designed | No. of Test Cases designed | Remarks
    WinRunner Scripting Progress: Sl. No. | SRs/Transaction | Status
A2. Execution Phase: Module | Total no. of conditions in the module | Executed during the day (Planned / Actual) | Total executed till date (Planned / Actual) | Remarks
B. Defect Distribution: for each severity (Show-stopper, Critical, Major, Minor, Total) — Raised Today (A) | Raised Till Yesterday (B) | Total (A + B) | Defects Closed (C) | Balance Open (A + B - C) | Fixed but to be retested | Rejected
C. Environment Non-availability: From Time | To Time | Time Lost | Man-hours Lost | Error encountered & Root Cause (if identified) | Total
D. Other Issues / Concerns: Description of the Issue / Concern | Action Proposed | Proposed Target Date | Responsibility | Remarks
E. General Remarks:
Prepared By: / Date: Approved By: / Date:

15.5.13.Weekly Status Report
Project Code: / Project Name:
Phase of the Testing Life Cycle: / Application Version No: / Report Date: / Report for Week:
A. Life Cycle/Process: Planned (Start Date, End Date, Man-months) | Revised (Start Date, End Date) | Actual (Start Date, Man-months utilised till date) | Projected (Man-months till closure, End Date) | Reasons
Highlights of the Week:
B1. Design Phase: Module | No. of Test Conditions designed | No. of Test Cases designed | Remarks
    WinRunner Scripting Progress: Sl. No. | SRs/Transaction | Status
B2. Execution Phase: Module | Total no. of conditions in the module | Conditions executed during the week (Planned / Actual) | Total executed till date (Planned / Actual) | Remarks
C. Defect Distribution: for each severity (Show-stopper, Critical, Major, Minor, Total) — Open Defects | Break-up of open defects (Pending Clarifications, Fixed but to be re-tested, Re-raised, Being Fixed, Rejected)
D. Environment Non-availability: Total man-hours lost during the week
E. Other Issues / Concerns: Description of the Issue / Concern | Action Proposed | Proposed Target Date | Responsibility | Remarks
F. General Remarks:
Prepared By: / Date: Approved By: / Date:

15.5.14.Defect Report
The Round 2 and Round 3 formats are the same as Round 1.

15.5.15.Final Test Checklist
Project Code: / Project Name:
Prepared By: / Date: Approved By: / Date:

Check (Status Y/N, Remarks):
· Have all modules been tested?
· Have all conditions been tested?
· Have all rounds been completed?
· Are all deliverables named according to the naming convention?
· Are all increases in scope and timelines tracked in the Top Level Project Checklist, Test Strategy and design documents?
· Have all agreed-upon changes been carried out (change in scope, client review comments, etc.)?
· Have all defects been re-tested?
· Have all defects been given closed or deferred status?
· Are all deliverables ready for delivery?
· Are all deliverables taken from the Configuration Management tool?
· Are all soft-copy deliverables checked to be virus free?
Comments and Observations:
Final inspection result: Approved / Rejected

15.5.16.Final Test Report

15.5.17.Project De-brief Form
Project Code: / Project Name:
Prepared By: / Date: Approved By: / Date:
Overview of the Application:
Key Challenges faced during Design or Execution:
Lessons learnt:
Suggested Corrective Actions:

16.Q&A

16.1.General

Q1: Why does software have bugs?
Ans:
· Miscommunication or no communication: failure to understand the application's requirements.
· Software complexity: the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development.
· Programming errors: programmers can make mistakes.
· Changing requirements: a redesign, rescheduling of engineers, effects on other projects, known and unknown dependencies among parts of the project, and the complexity of keeping track of changes may all result in errors.
· Time pressures: scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
· Poorly documented code: it is tough to maintain and modify code that is badly written or poorly documented, which results in bugs.
· Software development tools: various tools often introduce their own bugs or are poorly documented, resulting in added bugs.

Q2: What does "finding a bug" consist of?
Ans: Finding a bug consists of a number of steps:
· Searching for and locating a bug
· Analyzing the exact circumstances under which the bug occurs
· Documenting the bug found
· Reporting the bug and, if necessary, simulating the error
· Testing the fixed code to verify that the bug is really fixed

Q3: What will happen about bugs that are already known?
Ans: When a program is sent for testing (or a website given), a list of any known bugs should accompany the program. If a bug is found, the list will be checked to ensure that it is not a duplicate. Any bugs not found on the list will be assumed to be new.

Q4: What's the big deal about 'requirements'?
Ans: Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). Without documented requirements, there will be no clear-cut way to determine whether a software application is performing correctly.

Q5: What can be done if requirements are changing continuously?
Ans: It is helpful if the application's initial design allows for some adaptability, so that changes done later do not require redoing the application from scratch. To make changes easier for the developers, the code should be well commented and well documented. Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes. Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans).

Q6: When to stop testing?
Ans: This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be performed. Common factors in deciding when to stop testing are:
· Deadlines achieved (release deadlines, testing deadlines, etc.)
· Test cases completed with a certain percentage passed
· Test budget depleted
· Coverage of code/functionality/requirements reaches a specified point
· Defect rate falls below a certain level
· Beta or alpha testing period ends

Q7: How does a client/server environment affect testing?
Ans: Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers; thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities.

Q8: Does it matter how much the software has been tested already?
Ans: No. It is up to the tester to decide how much to test it before it is tested. An initial assessment of the software is made, and it will be classified into one of three possible stability levels:
· Low stability (bugs are expected to be easy to find, indicating that the program has not been tested or has only been very lightly tested)
· Normal stability (a normal level of bugs, indicating a normal amount of programmer testing)
· High stability (bugs are expected to be difficult to find, indicating the program is already well tested)

Q9: How is testing affected by object-oriented designs?
Ans: Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well designed, this can simplify test design.
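The "common factors" for stopping can be combined into a simple exit-criteria check. The thresholds below are invented for illustration; a real project would take them from the test strategy.

```python
def ready_to_stop(pass_rate, coverage, defect_rate, budget_left, deadline_reached,
                  min_pass=0.95, min_coverage=0.9, max_defect_rate=0.02):
    """True when any agreed stopping condition holds (thresholds illustrative)."""
    return (deadline_reached
            or budget_left <= 0
            or (pass_rate >= min_pass
                and coverage >= min_coverage
                and defect_rate <= max_defect_rate))

print(ready_to_stop(pass_rate=0.97, coverage=0.92, defect_rate=0.01,
                    budget_left=5, deadline_reached=False))   # True
print(ready_to_stop(pass_rate=0.80, coverage=0.92, defect_rate=0.01,
                    budget_left=5, deadline_reached=False))   # False
```

In practice these checks inform, rather than replace, the sign-off judgement described in the Test Closure chapter.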

change processes. etc. Q10: Will automated testing tools make testing easier? Ans: A tool set that allows controlled access to all test assets promoted better communication between all the team members.Testing can go in parallel with the software development life cycle to minimize the time needed to develop the software.) Identify application's higher-risk aspects. those responsible for tasks. Testing a software application now involves a variety of skills. and services needed to efficiently develop software. software. tools. and will ultimately break down the walls that have traditionally existed between various groups. Determine test environment requirements (hardware.) Determine test-ware requirements (record/playback tools. etc.) Determine test input data requirements Identify tasks. integration. for different clients. communications. The complete solution is based on providing the user with principles. · · Focus . coverage analyzers. · · Q12: What steps are needed to develop and run software tests? Ans: The following are some of the steps needed to develop and run software tests: · · · Obtain requirements. on multiple platforms and across different domain areas. functional. system. Automated testing tools are only one part of a unique solution to achieving customer success. Muthuvel Page 97 of 127 . reporting requirements. and internal design specifications and other necessary documents Obtain budget and schedule requirements Determine project-related personnel and their responsibilities. required standards and processes (such as release processes. set priorities. functional design.Independent test team looks afresh at each test project while bringing with them the experience of earlier test assignments.Software Testing – made easy understanding of the internal design of the application is unnecessary). If the application was well designed this can simplify test design. and determine scope and limitations of tests Determine test approaches and methods . 
usability tests. Q11: Why outsource testing? Ans: Skill and Expertise Developing and maintaining a team that has the expertise to thoroughly test complex and large applications is expensive and effort intensive. and labor requirements · · · · · K. in their domain areas.Using a dedicated and expert test team frees the development team to focus on sharpening their core skills in design and development. only when needed.Outsourcing testing offers the flexibility of having a large test team. test tracking. etc. load. problem/bug tracking. Reduce Cost . etc. Save time .unit. white-box testing can be oriented to the application's objects. Independent assessment . This reduces the carrying costs and at the same time reduces the ramp up time and costs associated with hiring and training temporary personnel.

which efficiently meets the needs of an organization. what sequence they are to be tested in. this means the Requirements Specification. milestones Determine input equivalence classes. identifying what levels of testing are to be applied and the methods. set up or obtain test input data Obtain and install software releases Perform tests Evaluate and report results Track problems/bugs and fixes Retest as needed Maintain and update test plans. K. being applicable to all of organizations software developments. The application of a test strategy to a software development project should be detailed in the projects software quality plan.Software Testing – made easy · · · · · Set schedule estimates. boundary value analyses. These may form part of the Detailed Design Specifications. This would also usually be published as a separate document. error classes Prepare test plan document and have needed reviews/approvals Write test cases Have needed reviews/inspections/approvals of test cases Prepare test environment and test ware. by testing the software. In the case of acceptance testing and system testing. describing the plan for acceptance testing of the software. and describes the test environment. techniques and tools to be used. Muthuvel Page 98 of 127 . A System Test Plan. This may form part of the Architectural Design Specification. A test strategy should ideally be organization wide. A test plan may be project wide. at what level they will be tested. Unit Test Plan(s). how the test strategy will be applied to the testing of each item. describing the plans for testing of individual units of software. A Software Integration Test Plan. which is the first stage within a software development project. A test plan states what the items to be tested are. This would usually be published as a separate document. and test ware through life cycle · Q13: What is a Test Strategy and Test Plan? Ans: A test strategy is a statement of the overall approach to testing. 
or may in fact be a hierarchy of plans relating to the various levels of specification and testing: · An Acceptance Test Plan. describing the plan for integration of testes software components. obtain needed user manuals/reference documents/configuration guides/installation guides. Developing a test strategy. but might be published with the system test plan as a single document. timelines. The next stage of test design. that the software produced fulfils the requirements or design statements of the appropriate software specification. test cases. describing the plan for system integration and testing. set up test tracking processes. but might be published with the acceptance test plan. is critical to the success of software development within the organization. set up logging and archiving processes. is the development of a test plan. test environment. · · · · · · · · · The objective of each test plan is to provide a plan for verification.
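The stop-testing factors from Q6 can be combined into a simple release-gate check. A minimal sketch; the function name and the threshold values are illustrative assumptions, not from the text:

```python
def should_stop_testing(coverage_pct, pass_pct, defects_per_day, budget_left):
    """Combine common stop-testing factors into one go/no-go decision."""
    if budget_left <= 0:              # test budget depleted forces a stop
        return True
    return (coverage_pct >= 90        # coverage reached a specified point
            and pass_pct >= 95        # required percentage of cases passed
            and defects_per_day < 1)  # defect rate fell below the bar

print(should_stop_testing(92, 97, 0.4, budget_left=500))   # True
print(should_stop_testing(70, 97, 0.4, budget_left=500))   # False
```

In practice each project picks its own thresholds in the test plan; the point is that the decision is a conjunction of measurable factors, with budget exhaustion as an overriding stop.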

Interview

1. What is Software Testing?
"The process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results."

2. What is the Purpose of Testing?
· To uncover hidden errors
· To achieve the maximum usability of the system
· To demonstrate expected performance of the system

3. What types of testing do testers perform?
Black box testing and white box testing are the basic types of testing testers perform. Apart from that they also perform a lot of tests like:
· Ad-Hoc testing
· Cookie testing
· CET (Customer Experience Test)
· Client-Server Test
· Configuration Tests
· Compatibility testing
· Conformance Testing
· Depth Test
· Error Test
· Event-Driven
· Full Test
· Negative Test
· Parallel Testing
· Performance Testing
· Recovery testing
· Sanity Test
· Security Testing
· Smoke testing
· Web Testing

4. What is the Outcome of Testing?
A stable application, performing its task as expected.

5. What is the need for testing?
The primary need is to ensure the requirements get satisfied by the functionality, and also to answer two questions:
A. Whether the system is doing what it is supposed to do?
B. Whether the system is not performing what it is not supposed to do?

6. What are the entry criteria for Functionality and Performance testing?
Functional testing: Functional Spec./BRS (CRS)/User Manual; an integrated application, stable for testing.
Performance testing: the same baseline documents mentioned above, with a good and healthy application that can support drastic performance testing.

7. What is test metrics?
After doing the actual testing, an evaluation is done on the testing to extract some information about the application's health using the outputs of testing. Software metrics is any type of measurement which relates to a software system, process or related documentation. E.g.: size of code and bugs found for that size; number of bugs reported per day; number of conditions/cases tested per day. It can be:
· Test efficiency
· Total number of tests executed

8. Why do you go for White box testing, when Black box testing is available?
A benchmark that certifies commercial (business) aspects and also functional (technical) aspects is the objective of black box testing. Loops, conditions, arrays, files, structures, etc. are very micro level, but they are the basement for any application. White box testing takes these things up at macro level and tests them.

9. What are the entry criteria for Automation testing?
The application should be stable. A clear design and flow of the application is needed.

10. When to start and stop testing?
If we follow the 'Waterfall' model, then testing can be started after coding. If the 'V' model is followed, then testing can be started at the design phase itself. Regardless of the model, the following criteria should be considered:
To start: when the test environment is supportive enough for testing; when the application study gives enough confidence.
To stop: after full coverage of the scope of testing; after getting enough confidence in the health of the application.

11. What is Quality?
"Fitness to use." "A journey towards excellence."

12. What is a Baseline document? Can you say any two?
A baseline document is one which starts the understanding of the application before the tester starts actual testing. E.g.:
· Functional Specification
· Business Requirement Document

13. What is verification?
A tester uses the verification method to ensure the system complies with an organization's standards and processes, relying on review or non-executable methods (such as software, hardware, documentation and personnel). "Are we building the product right?"

14. What is validation?
Validation physically ensures that the system operates according to plan, by executing the system functions through a series of tests that can be observed or evaluated. "Are we building the right product?"

15. What is quality control?
Quality control is defined as a set of activities or techniques whose purpose is to ensure that all quality requirements are being met. In order to achieve this purpose, processes are monitored and performance problems are solved.

16. What is quality assurance?
A planned and systematic pattern for all actions necessary to provide adequate confidence that the item or product conforms to established technical requirements.

17. What are SDLC and TDLC?
The flow and explanation process which clearly pictures how software development and testing should be done, explained in SDLC and TDLC respectively (Software Development Life Cycle and Testing Development Life Cycle). TDLC is an informal concept and is also referred to as TLC.

18. What are the Qualities of a Tester?
· Should be perfectionist
· Should be tactful and diplomatic
· Should be innovative and creative
· Should be relentless
· Should possess negative thinking with good judgment skills
· Should possess the attitude to break the system

19. What are the various levels of testing?
· Unit Testing
· Integration testing
· System Testing
· User Acceptance Testing

20. Tell names of some testing types which you learnt or experienced?
Any 5 or 6 types which are related to the company's profile are good to say in the interview:

· Ad-Hoc testing
· Cookie Testing
· CET (Customer Experience Test)
· Client-Server Test
· Configuration Tests
· Compatibility testing
· Conformance Testing
· Depth Test
· Error Test
· Event-Driven
· Full Test
· Negative Test
· Parallel Testing
· Performance Testing
· Recovery testing
· Sanity Test
· Security Testing
· Smoke testing
· Web Testing

21. What exactly is the Heuristic checklist approach for unit testing?
It is a method in which the most appropriate of several solutions found by alternative methods is selected at successive stages of testing. The checklist prepared to proceed this way is called a heuristic checklist.

22. After completing testing, what would you deliver to the client?
Test deliverables, namely:
· Test plan
· Test data
· Test design documents (conditions/cases)
· Defect reports
· Test closure documents
· Test metrics
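The "test metrics" deliverable above can be computed directly from the raw outputs of a test cycle, as Q7 described. A minimal sketch; the figures are made-up sample data, not from the text:

```python
from collections import Counter

# Sample outputs of a test cycle (illustrative numbers only).
bugs_reported = ["2005-08-10", "2005-08-10", "2005-08-11"]  # one date per bug
size_kloc = 12.5            # size of the code under test, in KLOC
tests_executed = 180        # total number of tests executed
tests_passed = 171

defect_density = len(bugs_reported) / size_kloc        # bugs found per KLOC
bugs_per_day = Counter(bugs_reported)                  # bugs reported per day
test_efficiency = tests_passed / tests_executed * 100  # percentage passed

print(round(defect_density, 2))    # 0.24
print(bugs_per_day["2005-08-10"])  # 2
print(round(test_efficiency, 1))   # 95.0
```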

23. What is a Test Bed?
The elements which support the testing activity before the actual testing starts, such as test data and data guidelines, are collectively called the test bed.

24. What is a Data Guideline?
Data guidelines are used to specify the data required to populate the test bed and prepare test scripts. They include all data parameters that are required to test the conditions derived from the requirement/specification. The documents which support preparing test data are called data guidelines.

25. Why do you go for a Test Bed?
When a test condition is executed, its result should be compared to the expected test result, and test data is needed for this. Here comes the role of the test bed, where the test data is made ready.

26. What is Severity and Priority, and who will decide what?
Severity: how much the bug found is supposed to affect the system's function/performance. Usually we divide it as Emergency, High, Medium, and Low.
Priority: which bug should be solved first, in order of benefit to the system's health. Normally it starts with Emergency getting first priority, down to Low as last priority.

27. Can Automation testing replace manual testing? If so, how?
Automated testing can never replace manual testing.
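The severity/priority scales in Q26 suggest a natural fix order for a defect report. A small sketch; the bug records are invented for illustration:

```python
RANK = {"Emergency": 0, "High": 1, "Medium": 2, "Low": 3}

bugs = [
    {"id": 101, "severity": "High",   "priority": "Emergency"},
    {"id": 102, "severity": "Low",    "priority": "Low"},
    {"id": 103, "severity": "Medium", "priority": "High"},
]

# Priority decides who is fixed first (benefit to system health);
# severity breaks ties between bugs of equal priority.
fix_order = sorted(bugs, key=lambda b: (RANK[b["priority"]], RANK[b["severity"]]))
print([b["id"] for b in fix_order])   # [101, 103, 102]
```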

This is because such tools follow the GIGO (garbage in, garbage out) principle and lack creativity and innovative thinking. But automation:
1. Speeds up the process.
2. Follows a clear process, which can be reviewed easily.
3. Is better suited for regression testing of a manually tested application and for performance testing.

28. What is a test case?
A test case gives values/qualifiers to the attributes that the test condition can have. Test cases, typically, are dependent on data/standards. A test case is the end state of a test condition, i.e., it cannot be decomposed or broken down further.
Test case design techniques for black box testing:
· Decision table
· Equivalence Partitioning Method
· Boundary Value Analysis
· Cause Effect Graphing
· State Transition Testing
· Syntax Testing
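Two of the black box design techniques listed above, Equivalence Partitioning and Boundary Value Analysis, can be sketched for a numeric input field. The field range and helper names are illustrative assumptions:

```python
def boundary_values(lo, hi):
    """Boundary Value Analysis: probe just below, on, and just above each edge."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_partitions(lo, hi):
    """Equivalence Partitioning: one representative value per class of input."""
    return {"invalid_low": lo - 10, "valid": (lo + hi) // 2, "invalid_high": hi + 10}

# An age field specified to accept 18..60:
print(boundary_values(18, 60))         # [17, 18, 19, 59, 60, 61]
print(equivalence_partitions(18, 60))  # {'invalid_low': 8, 'valid': 39, 'invalid_high': 70}
```

Partitioning keeps the case count small (one value stands for its whole class), while boundary analysis targets the edges where off-by-one defects cluster.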

29. What is a test condition?
A test condition is derived from a requirement or specification. It includes all possible combinations and validations that can be attributed to that requirement/specification.

30. What is a test script?
A test script contains the navigation steps, instructions, data and expected results required to execute the test case(s). Any test script should say how to drive or swim through the application, even for a new user.

31. What is test data?
The values which are given at expected places (fields) in a system to verify its functionality, made ready in a piece of document called test data.

32. What is an Inconsistent bug?
A bug which does not occur in a definable format, or which cannot be caught even if a process is followed. It may or may not occur when tested with the same scenario.

33. What is the difference between Re-testing and Regression testing?
Retest: to check a particular bug and its dependencies after it is said to be fixed.
Regression testing: to check the added or new functionality's effect on the existing system.

34. What are the different types of testing techniques?
· White box
· Black box
· Gray box
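The retest/regression distinction in Q33 can be shown on a toy unit; the `login` function and its fixed bug are invented for illustration:

```python
def login(user, password):
    """Toy unit after a reported bug ('admin could not log in') was fixed."""
    return user == "admin" and password == "secret"

# Re-test: aim directly at the fixed bug and its dependencies.
assert login("admin", "secret")           # the originally failing scenario

# Regression: re-run the wider checks to see that the fix did not
# disturb the existing behaviour of the system.
assert not login("admin", "wrong")
assert not login("guest", "secret")
print("retest and regression checks passed")
```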

35. What are the different types of test case techniques?


Test case design techniques for black box testing:
· Decision table
· Equivalence Partitioning Method
· Boundary Value Analysis
· Cause Effect Graphing
· State Transition Testing
· Syntax Testing

36. What are the risks involved in testing?
· Resource risk (A. human resource, B. hardware resource, C. software resource)
· Technical risk
· Commercial risk

37. Differentiate Test Bed and Test Environment?
A test bed holds only the testing documents which support testing, which includes test data, data guidelines, etc. A test environment includes all supportive elements, namely hardware, software, tools, browsers, servers, etc.

38. What is the difference between defect, error, bug, failure and fault?
Error: an undesirable deviation from requirements. Any problem, or the cause of many problems, which stops the system from performing its functionality is referred to as an error.
Bug: any missing functionality, or any action performed by the system which it is not supposed to perform; an error found BEFORE the application goes into production. Any of the following may be the reason for the birth of a bug:
1. Wrong functionality
2. Missing functionality
3. Extra or unwanted functionality
Defect: a variance from the desired attribute of a system or application; an error found AFTER the application goes into production. Defects are commonly categorized into two types:
1. Defect from product specification
2. Variance from customer/user expectation
Failure: the absence of an expected action that is supposed to happen; the absence of the expected response to any request.
Fault: generally referred to in hardware terminology; a problem which causes the system not to perform its task or objective.

39. What is the difference between quality and testing?
"Quality is giving more cushion for the user to use the system with all its expected characteristics." It is usually said to be a journey towards excellence. Testing is an activity done to achieve quality.


40. What is the difference between Testing and debugging?
Testing is done to find bugs; debugging is the art of fixing bugs. Both are done to achieve quality.

41. What is the difference between Quality Assurance and Quality Control?
QA: a study of the process followed in project development.
QC: a study of the project for its function and specification.

42. What is the difference between White & Black Box Testing?
White box: structural tests verify the structure of the software itself and require complete access to the object's source code. This is known as 'white box' testing because you see into the internal workings of the code.
Black box: functional tests examine the observable behavior of software as evidenced by its outputs, without reference to internal functions; hence 'black box' testing. If the program consistently provides the desired features with acceptable performance, then the specific source code features are irrelevant. It's a pragmatic and down-to-earth assessment of software.

43. What is the difference between bug and defect?
Bug: any missing functionality, or any action performed by the system which it is not supposed to perform; an error found BEFORE the application goes into production. Any of the following may be the reason for the birth of a bug:
1. Wrong functionality
2. Missing functionality
3. Extra or unwanted functionality
Defect: a variance from the desired attribute of a system or application; an error found AFTER the application goes into production. Defects are commonly categorized into two types:
1. Defect from product specification
2. Variance from customer/user expectation

44. What is the difference between verification and validation?
Verification: the process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing, relying on review or non-executable methods (such as software, hardware, documentation and personnel). In other words, verification asks "Are we building the product right?"
Validation: the process of evaluating software at the end of the software development process to ensure compliance with the software requirements. The technique for validation is testing: validation physically ensures that the system operates according to plan by executing the system functions through a series of tests that can be observed or evaluated. In other words, validation asks "Are we building the right product?"

45. What is the difference between a Functional Specification and a Business Requirement Specification?
A functional specification will be more technical. It holds the properties of a field and its functionality dependencies, e.g. size, type of data (whether numeric or alphabetic), etc. A Business Requirement Specification will be more business oriented, throwing more light on the needs or requirements.

46. What is the difference between unit testing and integration testing?
Unit testing: testing of a single unit of code, module or program, usually done by the developer of the unit. It validates that the software performs as designed. The deliverable of unit testing is a software unit ready for testing with other system components.
Integration testing: testing of related programs, modules or units of code. It validates that multiple parts of the system perform as per specification. The deliverable of integration testing is parts of the system ready for testing with other portions of the system.

47. What is the difference between Volume & Stress?
Volume testing is increasing the volume of data to the maximum withstand capacity of the system. Stress is the combination of both volume and load, so there is no need to increase the volume alone; the users can also be increased. The objective here is to check up to which extent the system can bear the increasing load and volume.
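The white box / black box distinction in Q42 can be seen on a toy unit; the `grade` function is invented for illustration:

```python
def grade(score):
    """Unit under test with three visible branches."""
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Black box: exercise observable behaviour only (inputs -> outputs),
# with no reference to the internals.
assert grade(75) == "pass"
assert grade(49) == "fail"

# White box: with the source in view, choose inputs that drive every
# branch, including the out-of-range guard clause.
try:
    grade(101)
    reached_guard = False
except ValueError:
    reached_guard = True
assert reached_guard
print("all branches exercised")
```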

48. What is the difference between Volume & Load?

Testing Type | Data                                      | User
Volume       | Increase till saturation point is reached | Constant
Load         | Constant                                  | Increase till saturation point is reached
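The volume/load/stress relationship above can be expressed as the work a server performs. A deliberately simplified cost model, with invented numbers, where each user request touches every data item once:

```python
def work(users, data_items):
    """Simplified cost model: each user request touches every data item."""
    return users * data_items

volume_work = work(users=10,    data_items=100_000)  # data grows, users constant
load_work   = work(users=1_000, data_items=1_000)    # users grow, data constant
stress_work = work(users=1_000, data_items=100_000)  # both grow together

# Stress combines both pressures, so it dominates either test alone.
print(stress_work > volume_work and stress_work > load_work)  # True
```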

49. What is the difference between Stress & Load Testing?
Load testing is increasing the number of users to the maximum withstand capacity of the system. Stress is the combination of both volume and load, so there is no need to increase users alone; the data volume can also be increased. The objective is to check up to which extent the system can bear the increasing load and volume.

50. What is the Difference between SIT & IST?
· SIT can be done while the system is in the process of integration.
· IST needs an integrated system of various unit levels of independent functionality; it checks the system's workability after integration and compares it with before integration.

51. What is the difference between Client Server & Web Based Testing?
Client server testing needs a client server environment, that is, a system to request and another to respond to its request. Web based testing normally goes with WWW sites testing, done to check stability and functionality when the site goes online.

52. What is the difference between Integration & System Testing?
Integration testing validates that multiple related units and modules perform together as per specification, while system testing begins once modules are integrated enough to perform tests in a whole system environment.

53. What is the Difference between Code Walkthrough & Code Review?
Both are almost the same, except in one issue: a walkthrough need not be done by people inside the team or by those who have more knowledge about the system. A review is highly recommended to be done by people of a higher level in the team, or who have good knowledge about the application.

54. What is the Difference between static and dynamic testing?
· Static testing: testing performed without expecting any response for a specific request placed at that time; it is done based on structures, algorithms, logic, etc., and more than all that, this testing is done without executing the application.
· Dynamic testing: testing performed on the system that responds to any specific request, by executing it.

55. What is the difference between walkthrough and inspection?
Walkthrough: in a walkthrough session, the material being examined is presented by a reviewee and evaluated by a team of reviewers. A walkthrough is generally carried out without any plan or preparation.
Inspection: design and code inspection was first described by Fagan. There are three separate inspections performed:
· Following design, but prior to implementation.
· Following implementation, but prior to unit testing.
· Finally, inspecting unit testing; this last was not considered to be cost-effective in discovering errors. The aim of this review is to enhance the process carried out in the production environment.

56. What is the difference between alpha testing and beta testing?

Component           | Alpha testing            | Beta testing
Test data           | Simulated                | Live
Test Environment    | Controlled               | Uncontrolled
To Achieve          | Functionality            | End-user needs
Tested by           | Only testers             | Testers and Users
Supporting Document | Functional Specification | Customer Requirement Specification

57. What are the Minimum requirements to start testing?
· Baseline documents
· A stable application
· Enough hardware and software support, e.g. servers, browsers, etc.
· Optimum maintenance of resources (hardware, software and tools)

58. What is Smoke Testing & when will it be done?
A quick-and-dirty test that the major functions of a piece of software work, without bothering with finer details. The term originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

59. What is Ad hoc testing? When can it be done?
Appropriate, and very often syndicated, when the tester wants to become familiar with the product, or in an environment where technical/testing materials are not 100% complete. It can be performed even with the non-availability of baseline documents. It is also largely based on general software product functionality/testing understanding and normal 'human common sense'.

60. What is security testing?
To test how the system can defend itself from external attacks, and how much it can withstand before breaking and failing to perform its assigned task. Many critical software applications and services have integrated security measures against malicious attacks. The purpose of security testing of these systems includes identifying and removing software flaws that may potentially lead to security violations, and validating the effectiveness of security measures. Simulated security attacks can be performed to find vulnerabilities.

61. What is cookie testing?
A cookie is a text file normally written by web applications to store your login-id, password validation and details about your session. Cookies get stored on our machines (the client). Cookie testing is mainly to verify whether cookies are being written correctly.
Importance of cookie testing:
· To evaluate the performance of a web application
· To assure the health of a WWW application where more cookies are involved
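The cookie checks described in Q61 can be automated with Python's standard `http.cookies` module; the Set-Cookie header below is an invented sample:

```python
from http.cookies import SimpleCookie

# A Set-Cookie header as a web application might emit it for a session.
raw = "session_id=abc123; Path=/; Max-Age=1800; Secure; HttpOnly"
cookie = SimpleCookie()
cookie.load(raw)
morsel = cookie["session_id"]

assert morsel.value == "abc123"      # cookie was written correctly
assert morsel["path"] == "/"
assert morsel["max-age"] == "1800"   # session expires after 30 minutes
assert morsel["secure"]              # never sent over plain HTTP
assert morsel["httponly"]            # hidden from page scripts
print("cookie checks passed")
```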

62. What is database testing?
To demonstrate the backend response for front end requests: the backend, which stores and retrieves the data and supports the front end when in need, is what database testing verifies.

63. What is the relationship between Quality & Testing?
Quality is a journey towards excellence. Testing is the way of achieving quality.

64. How do you determine what is to be tested?
The scope of testing should be created based on the requirements or needs given by the end user or client; based on these, the testing scope should be decided.

65. How do you go about testing a project?
· System study
· Understanding the application
· Test environment setup
· Data guidelines preparation and test design, which is finally executed

66. What is the use of a Functional Specification?
A functional specification is a baseline document prepared from a technical perspective; it says how the system should behave in an ideal scenario, telling everything right from syntax to functionality and dependencies. E.g.: for password and user-id fields, it should accept <n> number of characters of <Type> of data, getting input from <x> and giving output to <y>.

67. What is the Initial Stage of testing?
Testing starts right from understanding the application and clarifying the ambiguities in the application, and continues into test initiation, which encloses the test process.

68. Why do we prepare test conditions, test cases and test scripts (before starting testing)?
These are test design documents, which are used to execute the actual testing; without them, execution of testing is impossible. Finally, this execution is going to find the bugs to be fixed, so we have to prepare these documents.

69. Is it not a waste of time preparing the test conditions, test cases & test scripts?
No document prepared in any process is a waste of time. Test design documents, which play a vital role in test execution, can never be called a waste of time, as without them proper testing cannot be done.

70. What is Client Server Application Testing?
Client server testing needs a client server environment, that is, a system to request and another to respond to its request. Check the compatibility, verify the individual behavior, and then compare as client and server.

71. What is Web Based Application Testing?
Web based testing normally goes with WWW sites testing, done to check stability and functionality when the site goes online.
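The database testing idea from Q62, proving that the back end answers the front end's requests correctly, can be sketched with an in-memory SQLite database (the table and data are invented for illustration):

```python
import sqlite3

# Back end: an in-memory database standing in for the application's store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("muthuvel",))
conn.commit()

def find_user(name):
    """Stand-in for the front end's lookup request to the back end."""
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchone()

# The back end stores, retrieves and supports the front end when in need:
assert find_user("muthuvel") == (1, "muthuvel")  # correct row returned
assert find_user("nobody") is None               # no phantom data
print("database checks passed")
```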

72. How do you go about testing of a Web Application?
To approach a web application for testing, the first attack on the application should be on its performance behavior, as that is very important for a web application, and then on the transfer of data between the web server, front end server, security server and back end server.

73. How do you go about testing of a Client Server Application?
To approach a client server environment, we can track back the data transfer: check the compatibility, verify the individual behavior, and then compare as client and server.

74. What is meant by Static Testing?
Analysis of a program carried out without executing it: the structure of a program, program logic, conditions, code coverage, condition coverage, etc. can be tested. It can be done regardless of the type of application, but depends on the application's individual structure and behavior.

75. Can test conditions, test cases & test scripts help you in performing static testing?
Static testing will be done based on functions, conditions, loops, arrays and structures, so these documents are hardly needed.

76. In Static Testing, what all can be tested?
· Functions
· Conditions
· Loops
· Arrays
· Structures

77. What does dynamic testing mean?
Testing any dynamic application, i.e. a system that responds to a user's request, by executing it, is called dynamic testing.

78. Is Static testing a functional testing?
Yes. Regardless of static or dynamic, if the application's functionalities are attacked keeping the need to be achieved in mind, then it comes under functional testing.

79. Is dynamic testing a functional testing?
Yes, for the same reason.

80. Can static testing be done for both Web & Client Server Applications?
Yes; by keeping the application's structure and behavior in view, static testing can be done regardless of the type of application.

81. What is the functional testing you perform?
I have done conformance testing, workability testing, regression testing, function validation and field level validation testing.

82. What is meant by Alpha Testing?

Alpha testing is testing of a product or system at the developer's site by the customer. In this testing, usually users or testers will be involved in performing the tests.

83. What kind of document do you need for going for Functional testing?
The functional specification is the ultimate document, which expresses all the functionalities of the application; other documents like the user manual and BRS are also needed for functional testing. A gap analysis document will add value in understanding the expected and existing systems.

84. What is meant by Beta Testing?
User acceptance testing, done with the objective of achieving all users' needs. E.g.: a product, after completion, is given to customers for trial as a beta version, and feedback from users and important suggestions which will add quality are taken in before release.

85. Who can perform the Unit Testing?
Both developers and testers can perform unit level testing.

86. At what stage does the unit testing have to be done?
After completing the coding of individual functionalities, unit testing can be done. Say, for e.g., if an application has 5 functionalities to work together, and they have been developed individually, then unit testing can be carried out before their integration is done.

When will the Verification & Validation be done?
Verification and validation are used across the software development phases:
· Requirements gathering - Verification: verify completeness of requirements; verify vendor capability, if applicable. Validation: not usable.
· Project planning - Verification: verify completeness of the project test plan. Validation: not usable.

Software Development Phase: Project Implementation
· How to use Verification: verify correctness and completeness of interim deliverables; verify the contingency plan
· How to use Validation: validate correctness of changes; validate regression; validate that user acceptance criteria are met; validate that the supplier's software processes correctly; validate software interfaces

87. What are the things you prefer & prepare before starting testing?

· Study the application
· Understand the application's expected functionalities
· Prepare the test plan, ambiguity/clarification document and test design documents

88. When do you go for Integration Testing?

When every separate unit is assured in unit-level testing to perform well, the application is recommended for integration; an application whose individual functionalities have been assured can then undergo integration testing.

89. What is meant by SIT?

System Integration Testing. It is done after the completion of unit-level testing, once the unit-tested pieces are integrated together.

90. What is Integration Testing?

Integration testing exercises several units that have been combined to form a module, subsystem, or system. It focuses on the interfaces between units, to make sure the units work together. The nature of this phase is certainly 'white box', as we must have certain knowledge of the units to recognize whether we have been successful in fusing them together in the module, especially with the top-down method.

91. What is Incremental Integration Testing?

Incremental integration testing is an approach in which we integrate the modules top to bottom, or on an incrementing scale of intensity.

92. What is meant by System Testing?

The system test phase begins once modules are integrated enough to perform tests in a whole-system environment. System testing can occur in parallel with integration testing.

93. What is the testing that a tester performs at the end of Unit Testing?
Integration testing will be performed after unit testing to ensure that unit tested modules get integrated correctly.
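Unit testing of an individual functionality, as discussed above, can be sketched as follows. The `add_interest` function and its rules are hypothetical examples, not from the original text; they stand in for one independently coded functionality being tested in isolation.

```python
import unittest

def add_interest(balance: float, rate: float) -> float:
    """Unit under test: a single, independently coded functionality."""
    if balance < 0:
        raise ValueError("balance cannot be negative")
    return round(balance * (1 + rate), 2)

class AddInterestTest(unittest.TestCase):
    def test_normal_balance(self):
        # Valid input: interest is applied and rounded to 2 decimals.
        self.assertEqual(add_interest(100.0, 0.05), 105.0)

    def test_negative_balance_rejected(self):
        # Invalid input: the unit must refuse a negative balance.
        with self.assertRaises(ValueError):
            add_interest(-1.0, 0.05)

# Run the unit tests programmatically.
suite = unittest.TestLoader().loadTestsFromTestCase(AddInterestTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Only after each such unit passes its own tests is the module recommended for integration testing.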

94. Can System Testing be done at any stage?

No. The system as a whole can be tested only if all modules are integrated and all modules work correctly. System testing should be done before UAT (User Acceptance Testing) and after unit and integration testing.

95. What is meant by GUI Testing?

Testing the front-end user interfaces of applications which use GUI support systems and standards, such as MS Windows.

96. What is the concept of Up-Down & Down-Up in integration testing?

There are two approaches in testing an application: if the functionality sequence is mapped and tracked from top to bottom, it is called the top-down method, and vice versa for the bottom-up method. If that is done for integration testing, it is top-down integration testing, and the reverse is the bottom-up model.

97. What are the features you take care of in Prototype testing?

Prototype testing is carrying out testing by the same method repeatedly to understand the system behaviour. Full coverage of functionality should be taken care of, with the same process followed as for prototype testing.

98. What is meant by Back-End Testing?

Database testing is also called back-end testing: checking whether the database elements are accessed by the front end whenever required, as desired.

99. What are stubs & drivers?

Driver programs provide emerging low-level modules with simulated inputs and the necessary resources to function. Drivers are important for bottom-up testing, where you have a complete low-level module but nothing to test it with. Stubs simulate sub-programs or modules while testing higher-level routines.

100. What is the outcome of Integration Testing?

At the completion of integration testing, all the unit-level functionalities or sub-modules are integrated together, and finally the result should work as a whole system, as expected.

101. Where in the SDLC does testing start?

It depends upon the software model we follow. If we follow the V-model, then testing can be started at the design phase itself. If we use the waterfall model, then testing comes into the picture only after coding is done. UAT test cases can be written from the URS/BRS, and system test cases can be written from the SRS.

102. What is the final stage of Integration Testing?
All the individual units are integrated together to perform a task as a system, or as part of the system, as expected.
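The stubs and drivers described above can be sketched in code. The shipping-cost functions here are hypothetical examples, not from the original text: the driver exercises a finished low-level unit (bottom-up), while the stub stands in for an unfinished low-level unit so a higher-level routine can be tested (top-down).

```python
# Unit under test (low-level module): computes shipping cost.
def shipping_cost(weight_kg: float, rate_per_kg: float) -> float:
    return round(weight_kg * rate_per_kg, 2)

# Driver (bottom-up): exercises the low-level unit directly,
# supplying simulated inputs because no real caller exists yet.
def driver() -> bool:
    cases = [((2.0, 5.0), 10.0), ((0.5, 4.0), 2.0)]
    return all(shipping_cost(*args) == expected for args, expected in cases)

# Stub (top-down): stands in for an unfinished low-level module
# while the higher-level routine is tested.
def shipping_cost_stub(weight_kg: float, rate_per_kg: float) -> float:
    return 10.0  # canned answer, no real logic

# Higher-level routine under test, wired to the stub by default.
def order_total(item_total: float, cost_fn=shipping_cost_stub) -> float:
    return item_total + cost_fn(2.0, 5.0)
```

Once the real `shipping_cost` is ready, it replaces the stub via the `cost_fn` parameter and integration proceeds.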

103. What is Mutation testing & when can it be done?

Mutation testing is a powerful fault-based testing technique for unit-level testing. Since it is a fault-based technique, it is aimed at uncovering specific kinds of faults, namely simple syntactic changes to a program. Mutation testing is based on two assumptions: the competent programmer hypothesis and the coupling effect. The competent programmer hypothesis assumes that competent programmers tend to write nearly "correct" programs. The coupling effect states that a set of test data that can uncover all simple faults in a program is also capable of detecting more complex faults. Mutation testing injects faults into code to determine optimal test inputs.

104. What is meant by Regression Testing?

Regression testing is an expensive but necessary activity performed on modified software to provide confidence that changes are correct and do not adversely affect other system components. Four things can happen when a developer attempts to fix a bug; three of them are bad, and one is good:

                        New Bug   No New Bug
Change successful       Bad       Good
Change unsuccessful     Bad       Bad

Because of the high probability that one of the bad outcomes will result from a change to the system, it is necessary to do regression testing.

105. When do we prefer Regression Testing & what are the stages where we go for it?

We prefer regression testing to provide confidence that changes are correct and have not affected the flow or functionality of an application which was modified or had bugs fixed in it.

106. What is Compatibility Testing?

Testing to ensure compatibility of an application with different browsers, operating systems, and hardware platforms. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite.

107. What is Usability Testing?

Usability testing is a core skill because it is the principal means of finding out whether a system meets its intended purpose. It is synonymous with "ease of use": a process of testing the effectiveness, efficiency, and satisfaction with which specified users can achieve specified goals in the application. All other skills that we deploy or cultivate aim to make usability (and, ultimately, use) successful.

108. What is the importance of testing?

Software testing is oriented to detecting defects, and is often equated to finding bugs. Testing is mainly done to make things go wrong: to determine whether things happen when they shouldn't, or don't happen when they should. Testing demonstrates that the product performs each function intended, but it cannot show that the product is entirely free from defects.
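The mutation-testing idea above — inject a simple syntactic change and check that the test suite catches it — can be sketched as follows. The `is_adult` function and its mutant are hypothetical examples, not from the original text.

```python
# Unit under test and its test suite.
def is_adult(age: int) -> bool:
    return age >= 18

# (input, expected output) pairs; note the boundary input 18.
tests = [(17, False), (18, True), (30, True)]

def run_suite(fn) -> bool:
    """Return True if fn passes every test in the suite."""
    return all(fn(age) == expected for age, expected in tests)

# One mutant: a simple syntactic change (>= mutated to >).
def is_adult_mutant(age: int) -> bool:
    return age > 18

original_passes = run_suite(is_adult)
# The suite "kills" the mutant if at least one test now fails.
mutant_killed = not run_suite(is_adult_mutant)
```

Here the boundary input 18 is what kills the mutant; a suite without it would let the mutant survive, which is exactly how mutation testing points at the optimal test inputs.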

Stages where we go for regression testing:

· Minimization approaches seek to satisfy structural coverage criteria by identifying a minimal set of tests that must be re-run.
· Coverage approaches are also based on coverage criteria, but do not require minimization of the test set. Instead, they seek to select all tests that exercise changed or affected program components.
· Safe approaches attempt instead to select every test that will cause the modified program to produce different output than the original program.

109. What is performance testing?

An important phase of the system test, often called the load, volume or performance test. Performance testing can be applied to understand your application or WWW site's scalability, or to benchmark the performance in an environment of third-party products, such as servers and middleware, for potential purchase. This sort of testing is particularly useful for identifying performance bottlenecks in high-use applications. Performance testing generally involves an automated test suite, as this allows easy simulation of a variety of normal, peak, and exceptional load conditions.

A drawback of performance testing is that it can easily confirm that the system can handle heavy loads, but cannot so easily determine whether the system is producing the correct information. Processing incorrect transactions at high speed can cause much more damage and liability than simply stopping or slowing the processing of correct transactions.

Stress testing is the combination of both load and volume. Load, volume and stress tests try to determine the failure point of a system under extreme pressure, as the users would experience it. Stress tests are most useful when systems are being scaled up to larger environments or being implemented for the first time. Web sites, like any other large-scale system that requires multiple accesses and processing, contain vulnerable nodes that should be tested before deployment. Unfortunately, most stress testing can only simulate loads on various points of the system and cannot truly stress the entire network. Fortunately, once stress and load factors have been successfully overcome, it is only necessary to stress test again if major changes take place.

110. What is Volume, Stress & Load Testing?

· Volume testing: testing the application under varying loads, keeping the number of users constant, and thereby finding the response time and the system's withstanding capability, varying the load till the saturation point is reached.
· Load testing: testing the application under a constant load, varying the number of users, and thereby finding the response time and the system's withstanding capability, varying the users till the saturation point is reached.
· Stress testing: testing the application under varying loads while simultaneously varying the number of users, till the saturation point is reached.

111. Which performance tests can be done manually, and which automatically?

· Manually: load, stress & volume testing can be done manually, for identifying whether the application works fine.
· Automated: load, stress & volume are the types of testing which are done automatically, by using automation skills.
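A minimal load-test sketch of the idea above — fire concurrent simulated users, measure response times, and check correctness as well as speed. The `handle_request` function is a hypothetical stand-in for a server call, not from the original text; real load tests use dedicated tools against the deployed system.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload: int) -> int:
    """Stand-in for a server call; sleeps briefly to mimic work."""
    time.sleep(0.01)
    return payload * 2

def run_load(users: int, requests_per_user: int):
    """Fire concurrent requests and record per-request response times."""
    timings = []

    def one_user(uid: int) -> None:
        for i in range(requests_per_user):
            start = time.perf_counter()
            # Check correctness too: a fast but wrong answer is worse.
            assert handle_request(i) == i * 2
            timings.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=users) as pool:
        list(pool.map(one_user, range(users)))
    return max(timings), sum(timings) / len(timings)

worst, average = run_load(users=5, requests_per_user=4)
```

Increasing `users` while watching `worst` and `average` response times is the load-testing pattern; holding users constant and growing the payload would correspond to volume testing.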

Testing Type   Data                               User Load
Volume         Increase till saturation point     Constant
Load           Constant                           Increase till saturation point
Stress         Increase till saturation point     Increase till saturation point

112. What is a Bug?

Bug: an error found BEFORE the application goes into production.

113. What is a Defect?

Defect: an error found AFTER the application goes into production.

114. What is the Defect Life Cycle?

· The test team raises the defect (status: Open)
· The test lead authorizes the bugs found (status: Open)
· The development team reviews the defect (status: Open)
· The defect can be authorized or unauthorized by the development team (status: Open for authorized defects, Reject for unauthorized defects)
· The development team fixes the defect: the authorized bugs get fixed or deferred (status: Fixed, or Deferred for the bugs which got deferred)
· The fixed bugs are re-tested by the test team: based on the closure of the bug, the status is made Closed; if the defect still remains, it is re-raised, and even new bugs are sent to the development team with status Open

The above-mentioned cycle flows on continuously until all the bugs get fixed in the application.

115. What is the Priority in fixing the bugs?

Priority is the value given to a bug, indicating how much importance should be given to it by the developer: the critical bugs should be solved first, and then the major bugs can be taken care of. It is decided by both testers and developers, but mostly the development team takes care of this.

116. Explain the Severity you rate for the bugs found?
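The defect life cycle above is essentially a small state machine. This sketch encodes one plausible reading of the transitions; the status names (in particular "Reopened" for a re-raised defect) are labels chosen for the example, not terms fixed by the original text.

```python
# Allowed status transitions in the defect life cycle described above.
TRANSITIONS = {
    "Open":     {"Reject", "Fixed", "Deferred"},
    "Fixed":    {"Closed", "Reopened"},   # retest passes -> Closed, fails -> re-raised
    "Reopened": {"Fixed", "Deferred"},
    "Deferred": {"Fixed"},
    "Reject":   set(),                    # terminal states
    "Closed":   set(),
}

def advance(status: str, new_status: str) -> str:
    """Move a defect to a new status, rejecting illegal transitions."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

# A bug that fails retesting is re-raised, then fixed again and closed.
status = "Open"
for step in ["Fixed", "Reopened", "Fixed", "Closed"]:
    status = advance(status, step)
```

Defect-tracking tools enforce exactly this kind of transition table, which is why a Closed or Rejected defect cannot silently change state.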

Severity is rated by the testers, based on the nature of the defect found in the application:

· Emergency
· High (Very High & High)
· Medium
· Low (Very Low & Low)

Severity can also be rated as Critical, Major or Minor:

· E.g. when the user is not able to proceed, or the system crashes, so that the tester cannot proceed with further testing, such bugs are rated as Critical.
· E.g. when the user adds a record, then tries to view the same record, and the details displayed in the fields are not the same values the user provided, such bugs are rated as Major.
· E.g. mostly the field-level validation (FLV) bugs and some functional bugs (related to value display etc.) are rated as Minor.

117. What is meant by UAT?

Traditionally, this is where the users 'get their first crack' at the software. Unfortunately, by this time, it's usually too late. If the users have not seen prototypes, been involved with the design, and understood the evolution of the system, they are inevitably going to be unhappy with the result. If you can perform every test as user acceptance tests, you have a much better chance of a successful project.

User Acceptance testing is done to achieve the following:
· User-specified requirements have been satisfied
· Functionality works as per the supporting documents
· Expected performance has been achieved
· The end user is comfortable using the application

118. Difference between UAT & IST?

UAT:
1. Done using the BRD
2. Done with live data
3. Testing is done in user style
4. Testing is done at the client's place
5. Testing is done by the real users or some third-party testers

IST:
1. Done using the FS
2. Done with simulated data
3. Testing is done in a controlled way
4. Testing is done offsite
5. Testing is done in the testers' company

119. What all are the requirements needed for UAT?

· The Business Requirement Document is the document required for performing UAT by the testers.
· The application should be stable (meaning all the modules should have been tested at least once after integrating the modules).

120. What is risk analysis?

Risk analysis is a series of steps that helps the software or testing team to understand and manage uncertainty. It is a process of evaluating risks, threats, controls & vulnerabilities.

· Threat: something which is capable of exploiting a vulnerability in the security of a computer system or application.
· Vulnerability: a design, implementation, or operations flaw that may be exploited by a threat.
· Control: anything that tends to cause the reduction of risk.

121. How to do risk management?

Risk management is done by identifying the risks involved in the project and finding mitigation for each risk found. Risk mitigation is a solution for the risk identified.

122. What ways can be followed for defect management?

· Reporting the bugs through the defect report (Excel template)
· Any in-house tool built within the company may also be used
· Commonly available tools like Test Director can also be employed

123. What are test closure documents?

· Test Conditions
· Test Cases
· Test Plan
· Test Strategy
· Traceability Matrix
· Defect Reports
· Test Closure Document
· Test Data

(The above-mentioned deliverables are based on the deliverables accepted by the testing team and mentioned in the Test Strategy.)

124. What is Traceability Matrix?

Throughout the testing life cycle of the project, the traceability matrix is maintained to ensure that the verification & validation of the testing is complete.

125. What are the docs required for Performance Testing?

The benchmark is the basic document required for performance testing. This document contains details about the response time, transaction time, data transfer time, and the virtual memory in which the application should work.

17. Glossary

Testing
"The process of exercising software to verify that it satisfies specified requirements and to detect errors."

Quality Assurance
"A planned and systematic pattern for all actions necessary to provide adequate confidence that the item or product conforms to established technical requirements."

Quality Control
"QC is a process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected."

Verification
"The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase."

Validation
Determination of the correctness of the products of software development with respect to the user needs and requirements.

Review – Definition
Review is a process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval.

Walkthrough
"A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review."

Inspection
A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).

Static Testing Techniques
"Analysis of a program carried out without executing the program."

Dynamic Testing Techniques
"The process of evaluating a system or component based upon its behaviour during execution."

Black Box Testing
"Test case selection that is based on an analysis of the specification of the component without reference to its internal workings."

Equivalence Partition Testing
Equivalence class: A portion of the component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.
Equivalence partition testing: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.

Boundary Value Analysis
Boundary value: An input value or output value which is on the boundary between equivalence classes, or an incremental distance either side of the boundary.
Boundary value analysis: A test case design technique for a component in which test cases are designed which include representatives of boundary values.

Cause and Effect Graphs
"A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases."

White-Box Testing
"Test case selection that is based on an analysis of the internal structure of the component."

Statement Coverage
"A test case design technique for a component in which test cases are designed to execute statements."

Branch Testing
A test case design technique for a component in which test cases are designed to execute branch outcomes.
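Equivalence partitioning and boundary value analysis, as defined above, can be sketched together. The age-range component and the chosen representatives are hypothetical examples, not from the original text.

```python
def derive_test_values(lo: int, hi: int):
    """For a valid range [lo, hi]: boundary values (each boundary plus
    an incremental distance either side) and one representative per
    equivalence class (below / valid / above)."""
    boundaries = [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]
    representatives = {"below": lo - 10, "valid": (lo + hi) // 2, "above": hi + 10}
    return boundaries, representatives

def accepts_age(age: int) -> bool:
    """Component under test: valid ages are 18..65 inclusive."""
    return 18 <= age <= 65

boundaries, reps = derive_test_values(18, 65)
results = {v: accepts_age(v) for v in boundaries}
```

Six boundary tests plus three class representatives give far better fault detection per test than many arbitrary mid-range values, which is the point of both techniques.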

Branch: A conditional transfer of control from any statement to any other statement in a component, or an unconditional transfer of control from any statement to any other statement in the component except the next statement; or, when a component has more than one entry point, a transfer of control to an entry point of the component.

Path Testing
Path: A sequence of executable statements of a component, from an entry point to an exit point.
Path testing: A test case design technique in which test cases are designed to execute paths of a component.

Data Flow-Based Testing
"Testing in which test cases are designed based on variable usage within the code."

Unit Testing
"The testing of individual software components."

Integration Testing
"Testing performed to expose faults in the interfaces and in the interaction between integrated components."

Incremental Integration Testing
"Integration testing where system components are integrated into the system one at a time until the entire system is integrated."

Top Down Integration
"An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested."

Bottom Up Integration
"An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested."
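Branch testing, as defined above, means choosing inputs so that every branch outcome executes at least once. The `classify` function is a hypothetical component built for the illustration, not from the original text.

```python
def classify(n: int) -> str:
    """Component with two decisions, i.e. four branch outcomes."""
    if n < 0:                 # decision 1: true / false outcomes
        sign = "negative"
    else:
        sign = "non-negative"
    if n % 2 == 0:            # decision 2: true / false outcomes
        parity = "even"
    else:
        parity = "odd"
    return f"{sign} {parity}"

# Branch testing: these four inputs cover every branch outcome
# (and, here, every path through the two decisions as well).
branch_cases = [-3, -2, 2, 3]
outcomes = [classify(n) for n in branch_cases]
```

With more decisions the number of paths grows multiplicatively while the number of branch outcomes grows only additively, which is why branch coverage is usually the practical target and full path coverage the stronger, costlier one.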

Stubs: Stubs are program units that are stand-ins for the other (more complex) program units that are directly referenced by the unit being tested.

Drivers: Drivers are programs or tools that allow a tester to exercise/examine, in a controlling manner, the unit of software being tested.

Big Bang Integration
"Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system."

System Testing
"System testing is the process of testing an integrated system to verify that it meets specified requirements."

Requirement based Testing
"Designing tests based on objectives derived from requirements for the software component (e.g. tests that exercise specific functions or probe the non-functional constraints such as performance or security)."

Business-Process based Testing

Non-Functional Testing
Testing of those requirements that do not relate to functionality, i.e. performance, usability, etc.

Validation Testing
Validation testing aims to demonstrate that the software functions in a manner that can be reasonably expected by the customer.

Recovery Testing
"Testing aimed at verifying the system's ability to recover from varying degrees of failure."

Configuration Review
An audit to ensure that all elements of the software configuration are properly developed, catalogued, and have the necessary detail to support maintenance.

Security Testing
"Testing whether the system meets its specified security objectives."

Performance Testing
"Testing conducted to evaluate the compliance of a system or component with specified performance requirements."

Load Testing
Load testing involves stress testing applications under real-world conditions to predict system behavior and performance and to identify and isolate problems. Load testing applications can emulate the workload of hundreds or even thousands of users, so that you can predict how an application will work under different user loads and determine the maximum number of concurrent users accessing the site at the same time.

Stress Testing
"Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements."

Alpha and Beta Testing
"Alpha testing: Simulated or actual operational testing at an in-house site not otherwise involved with the software developers."
"Beta testing: Operational testing at a site not otherwise involved with the software developers."

User Acceptance Testing
"Acceptance testing: Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component."

Regression Testing and Re-testing
"Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made."

Ad-hoc Testing
"Testing carried out using no recognised test case design technique."

Stress and Volume Testing
"Stress testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements."
"Volume testing: Testing where the system is subjected to large volumes of data."

Usability Testing
"Testing the ease with which users can learn and use a product."

Error Guessing
"A test case design technique where the experience of the tester is used to postulate what faults might occur, and to design tests specifically to expose them."

Error Seeding
"The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program."

Environmental Testing
These tests check the system's ability to perform at the installation site.

Functional Specification
"The document that describes in detail the characteristics of the product with regard to its intended capability."

Business Requirement
It describes the user's needs for the application.

Design Specification
The Design Specification document is prepared based on the functional specification. It contains the system architecture, table structures and program specifications.

System Specification
The System Specification document is a combination of the functional specification and the design specification.
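The estimation step in the error-seeding definition above can be shown with a small worked calculation. The numbers are invented for the illustration; the assumption (standard for this technique, though not spelled out in the original text) is that seeded and real faults are detected at the same rate.

```python
def estimate_remaining(seeded_total: int, seeded_found: int, real_found: int) -> int:
    """Seeding estimate: if the suite found seeded_found of seeded_total
    planted faults, assume the same detection rate for real faults and
    estimate how many real faults remain undetected."""
    detection_rate = seeded_found / seeded_total        # e.g. 8/10 = 0.8
    estimated_real_total = real_found / detection_rate  # e.g. 40/0.8 = 50
    return round(estimated_real_total - real_found)     # e.g. 50 - 40 = 10

# 10 faults seeded, 8 of them found, alongside 40 real faults found.
remaining = estimate_remaining(seeded_total=10, seeded_found=8, real_found=40)
```

So finding 80% of the seeded faults suggests roughly 10 real faults are still in the program, which is the "estimating the number of faults remaining" part of the definition.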

Test Plan
"A record of the test planning process detailing the degree of tester independence, the test environment, the test case design techniques and test measurement techniques to be used, and the rationale for their choice, and any risks requiring contingency planning." – BS

"A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning." – IEEE

Test Case
"A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement."

Comprehensive Testing – Round I
All the test scripts developed for testing are executed. In some cases the application may not have certain module(s) ready for test; they will be covered comprehensively in the next pass.

Discrepancy Testing – Round II
All the test cases that resulted in a defect during the comprehensive pass should be executed; in other words, all defects that have been fixed should be retested. Function points that may be affected by the defect should also be taken up for testing. This type of testing is called regression testing. Test cases for defects that are not yet fixed will be executed only after they are fixed.

Sanity Testing – Round III
This is the final round in the test process. Ideally, the defects fixed in the previous phases are checked, and freedom testing is done to ensure integrity. The testing here should cover not only all the test cases but also the business cycles as defined in the application. This is done in order to check whether the system is sane enough for the next stage, i.e. UAT or production as the case may be, under an isolated environment. This is done either at the client's site or at Maveric, depending on the strategy adopted.
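The test case definition above — inputs, execution preconditions, and expected outcomes for a particular objective — maps naturally onto a small data structure. The `transfer` function and the case contents are hypothetical examples, not from the original text.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    objective: str       # why this case exists
    preconditions: list  # what must hold before execution
    inputs: dict         # the set of inputs
    expected: object     # the expected outcome

def transfer(balance: float, amount: float) -> float:
    """Component under test: debit an amount from a balance."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

tc = TestCase(
    objective="verify a valid transfer reduces the balance",
    preconditions=["account exists", "balance is 100.0"],
    inputs={"balance": 100.0, "amount": 30.0},
    expected=70.0,
)

actual = transfer(**tc.inputs)
passed = actual == tc.expected
```

Writing cases in this shape keeps the objective and preconditions attached to the inputs and expected result, which is exactly what the IEEE wording asks a test case to record.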

Defect – Definition
"Error: A human action that produces an incorrect result."
"Fault: A manifestation of an error in software. A fault, if encountered, may cause a failure."
"Failure: Deviation of the software from its expected delivery or service."
"A deviation from expectation that is to be tracked and resolved is termed as a defect."

Defects Classification

Showstopper: A defect which may be very critical in terms of affecting the schedule, or it may be a show stopper — that is, it stops the user from using the system further.

Major: A defect where a functionality or data is affected significantly, but which does not cause a showstopping condition or a block in the test process cycles.

Minor: A defect which is isolated or does not stop the user from proceeding, but causes inconvenience. Cosmetic errors would also feature in this category.

Severity: How much the bug found affects the system's function or performance. Usually we divide severity as Emergency, High, Medium, and Low.

Priority: Which bug should be solved first, in order of benefit to the system's health. Normally it starts from Emergency as first priority down to Low as last priority.

Test Bed
Before starting the actual testing, the elements which support the testing activity, such as test data and data guidelines, are collectively called the test bed.

Data Guidelines
Data guidelines are used to specify the data required to populate the test bed and prepare test scripts. The document which supports preparing the test data is called the data guidelines. It includes all data parameters that are required to test the conditions derived from the requirement / specification.

Test Script
A test script contains the navigation steps, instructions, data and expected results required to execute the test case(s). Any test script should say how to drive or swim through the application, even for a new user.

Test Data
The values which are given at expected places (fields) in a system to verify its functionality are made ready in a piece of document called test data.

Test Environment
A description of the hardware and software environment in which the tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.

Traceability Matrix
Throughout the testing life cycle of the project, the traceability matrix is maintained to ensure that the verification & validation of the testing is complete.
