Software Testing

Tutorial for Beginners

Collected by: Eng. Moataz Abd Elkarim

Software Testing Tutorials for Beginners
This topic is especially for beginners. It bridges the gap between theoretical knowledge and real-world implementation. This article helps you gain an insight into Software Testing, understand its technical aspects, and learn the processes followed in a real working environment. Who is this tutorial for?
• Fresh graduates who want to start a career in SQA & Testing
• Software engineers who want to switch to SQA & Testing
• Testers who are new to the Software Testing field
• Testers who want to prepare for interviews

What is Software Testing?
Definitions of Software Testing
It is the process of creating, implementing and evaluating tests.
• Testing measures software quality.
• Testing can find faults; when they are removed, software quality is improved.
• Testing is executing a program with the intent of finding an Error, Fault or Failure.
• IEEE Terminology: an examination of the behavior of a program by executing it on sample data sets.

Why is Software Testing Important?

1. To discover defects.
2. To avoid users detecting problems.
3. To prove that the software has no faults.
4. To learn about the reliability of the software.
5. To avoid being sued by customers.
6. To ensure that the product works as the user expects.
7. To stay in business.
8. To detect defects early, which helps in reducing the cost of defect fixing.

Why start testing early?
Introduction: You have probably heard, and read in blogs, that "testing should start early in the life cycle of development". In this chapter, we will discuss why to start testing early in very practical terms.

Fact One
Let’s start with the regular software development life cycle:

When the project is planned:

First we have a planning phase: needs are expressed, people are contacted, meetings are booked. Then the decision is made: we are going to do this project.
• After that, analysis is done, followed by the code build.
• Now it’s your turn: you can start testing.
Do you think this is what is going to happen? Dream on. This is what's going to happen:

This is what actually happens when the project executes:

Planning, analysis and code build will take more time than planned.
• That would not be a problem if the total project time were extended accordingly. Forget it; it is most likely that you will have to perform the tests in a few days.
• The deadline is not going to be moved at all: promises have been made to customers, and project managers are going to lose their bonuses if they deliver past the deadline.

Fact Two
The earlier you find a bug, the cheaper it is to fix it.

Price of Buggy Code
If you are able to find a bug during requirements determination, it is going to be 50 times cheaper (!!) than finding the same bug in testing, and even 100 times cheaper (!!) than finding the bug after going live.

Easy to understand: if you find the bug in the requirements definitions, all you have to do is change the text of the requirements. If you find the same bug in final testing, analysis and code build have already taken place, and much more effort has been spent building something that nobody wanted. Conclusion: start testing early! This is what you should do:

Testing should be planned for each phase:
• Make testing part of each phase of the software life cycle
• Start test planning the moment the project starts
• Start finding bugs the moment the requirements are defined
• Keep on doing that during the analysis and design phases
• Make sure testing becomes part of the development process
• And make sure all test preparation is done before you start final testing. If you have to start it then, your testing is going to be crap!
Want to know how to do this? Go to the Functional testing step by step page. (will be added later)

Test Design Techniques
• Black Box Testing
• White Box Testing (including its approaches)
• Gray Box Testing

Black Box Testing
What is Black Box Testing?
• Tests the correctness of the functionality with the help of inputs and outputs.
• Black box testing is also called Functionality Testing.
• Test cases are generated against the specification.
• The user doesn’t require knowledge of the software code.
• It attempts to find errors in the following categories:
  • Incorrect or missing functions
  • Interface errors
  • Errors in data structures or external database access
  • Behavior or performance based errors
  • Initialization or termination errors

Approaches used in Black Box Testing
The following basic techniques are employed during black box testing:
• Equivalence Class
• Boundary Value Analysis
• Error Guessing

Equivalence Class:
• For each piece of the specification, generate one or more equivalence classes.
• Label the classes as “Valid” or “Invalid”.
• Generate one test case for each Invalid equivalence class.
• Generate a test case that covers as many Valid equivalence classes as possible.

An input condition for an Equivalence Class can be:
• A specific numeric value
• A range of values
• A set of related values
• A Boolean condition

Equivalence classes can be defined using the following guidelines:
• If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
• If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
• If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
• If an input condition is Boolean, one valid and one invalid class are defined.

Boundary Value Analysis
• Generate test cases for the boundary values: Minimum Value, Minimum Value - 1, Minimum Value + 1, Maximum Value, Maximum Value - 1, Maximum Value + 1.

Error Guessing
• Test cases are derived from the tester’s experience of where errors are likely to occur (see the Unscripted Testing Approaches later in this tutorial).
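
To make these two techniques concrete, here is a small sketch in Python. The field and its 1–999 range are hypothetical, chosen only to illustrate how equivalence classes and boundary values can be derived mechanically.

# Illustrative sketch: deriving equivalence-partitioning and boundary-value
# test inputs for a numeric field that must lie in the range 1..999 (assumed).
MIN_VALUE = 1
MAX_VALUE = 999

def equivalence_class_inputs(minimum, maximum):
    # One representative per class: below the range (invalid),
    # inside the range (valid), above the range (invalid).
    return {
        "invalid_low": minimum - 5,
        "valid": (minimum + maximum) // 2,
        "invalid_high": maximum + 5,
    }

def boundary_value_inputs(minimum, maximum):
    # The six classic boundary values: min-1, min, min+1, max-1, max, max+1.
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

print(equivalence_class_inputs(MIN_VALUE, MAX_VALUE))
print(boundary_value_inputs(MIN_VALUE, MAX_VALUE))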

White Box Testing
• Tests the internal program logic.
• White box testing is also called Structural Testing.
• The tester does require knowledge of the software code.
• Logic errors and incorrect assumptions are most likely to be made while coding for “special cases”; we need to ensure these execution paths are tested.

Purpose
• Testing all loops
• Testing basis paths
• Testing conditional statements
• Testing data structures
• Testing logic errors
• Testing incorrect assumptions

Approaches / Methods / Techniques for White Box Testing
Basis Path Testing (Cyclomatic Complexity, McCabe Method)
• Measures the logical complexity of a procedural design.
• Provides flow-graph notation to identify independent paths of processing. Structure = 1 Entry + 1 Exit, with certain constraints on conditions and loops.
• Once paths are identified, tests can be developed for loops and conditions.
• The process guarantees that every statement will get executed at least once.
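
As a quick illustration (a sketch built around a made-up function, not taken from the text above): cyclomatic complexity can be counted as the number of decision points plus one, or as E - N + 2 for a single-entry, single-exit flow graph. The function below has two decisions, so V(G) = 3 and a basis set needs three test cases.

# Hypothetical function used only to illustrate basis path testing.
def count_above(values, threshold):
    count = 0
    i = 0
    while i < len(values):          # decision 1
        if values[i] > threshold:   # decision 2
            count += 1
        i += 1
    return count

# Cyclomatic complexity V(G) = decisions + 1 = 2 + 1 = 3,
# so at least three independent paths should be exercised:
assert count_above([], 10) == 0      # path 1: loop body never runs
assert count_above([5], 10) == 0     # path 2: loop runs, if-branch is False
assert count_above([5], 1) == 1      # path 3: loop runs, if-branch is True
print("basis path checks passed")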

Structure Testing:
• Condition Testing – all logical conditions contained in the program module should be tested.
• Data Flow Testing – selects test paths according to the location of definitions and use of variables.
• Loop Testing:
  • Simple Loops
  • Nested Loops
  • Concatenated Loops
  • Unstructured Loops

Gray Box Testing:
It is a combination of both Black box and White box testing.
• The tester should have knowledge of both the internals and externals of the function.
• If you know something about how the product works on the inside, you can test it better from the outside.
• Functionality and behavioral parts are tested.
• It is platform independent and language independent.
• It is used to test embedded systems.
• Gray box testing is especially important with Web and Internet applications, because the Internet is built around loosely integrated components that connect via relatively well-defined interfaces. Unless you understand the architecture of the Net, your testing will be skin deep.

Equivalence Class Partitioning Simplified

WHAT IS EQUIVALENCE PARTITIONING?
Concepts: Equivalence partitioning is a method for deriving test cases. In this method, classes of input conditions called equivalence classes are identified such that each member of the class causes the same kind of processing and output to occur. A class is a set of input conditions that is likely to be handled the same way by the system. If the system were to handle one case in the class erroneously, it would handle all cases erroneously. In this method, the tester identifies the various equivalence classes for partitioning.

WHY LEARN EQUIVALENCE PARTITIONING?
Equivalence partitioning significantly reduces the number of test cases required to test a system reasonably. It is an attempt to get a good 'hit rate', to find the most errors with the smallest number of test cases.

DESIGNING TEST CASES USING EQUIVALENCE PARTITIONING
To use equivalence partitioning, you will need to perform two steps:
1. Identify the equivalence classes
2. Design test cases

STEP 1: IDENTIFY EQUIVALENCE CLASSES
Take each input condition described in the specification and derive at least two equivalence classes for it. One class represents the set of cases which satisfy the condition (the valid class) and one represents cases which do not (the invalid class). Following are some general guidelines for identifying equivalence classes:

a) If the requirements state that a numeric value is input to the system and must be within a range of values, identify one valid class (inputs which are within the valid range) and two invalid equivalence classes (inputs which are too low and inputs which are too high). For example, if an item in inventory (numeric field) can have a quantity of +1 to +999, identify the following classes:
1. One valid class: QTY is greater than or equal to +1 and less than or equal to +999. This is written as (1 <= QTY <= 999), i.e. 1, 2, 3, 4, 5 … up to 999.
2. The invalid class: QTY is less than 1, also written as (QTY < 1), i.e. 0, -1, -2, -3, -4 … and so on.
3. The invalid class: QTY is greater than 999, also written as (QTY > 999), i.e. 1000, 1001, 1002, 1003, 1004 … and so on.

b) If the requirements state that the number of items input by the system at some point must lie within a certain range, specify one valid class where the number of inputs is within the valid range, one invalid class where there are too few inputs, and one invalid class where there are too many inputs. For example, if the specifications state that a maximum of 4 purchase orders can be registered against any one product, the equivalence classes are: the valid class (1 <= no. of purchase orders <= 4), the invalid class (no. of purchase orders > 4), and the invalid class (no. of purchase orders < 1).

c) If the requirements state that a particular input item must match one of a set of values, and each case will be dealt with the same way, identify a valid class for values in the set and one invalid class representing values outside of the set.

Worked example – the specification says that the code accepts between 4 and 24 inputs, each a 3-digit integer:
• One partition: the number of inputs
• Classes: “x<4”, “4<=x<=24”, “24<x”
• Chosen values: 3, 4, 5, 14, 23, 24, 25
These chosen values are turned into executable checks in the sketch below.
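
A minimal sketch of the worked example above; accept_inputs is a hypothetical stand-in for the module under test, assumed to accept 4 to 24 inputs that are each 3-digit integers, so that every chosen partition and boundary value becomes one check.

# Hypothetical function under test for the example above.
def accept_inputs(inputs):
    if not (4 <= len(inputs) <= 24):
        return False
    return all(100 <= value <= 999 for value in inputs)

def make_inputs(count):
    # Build 'count' valid 3-digit integers so only the number of inputs varies.
    return [100 + i for i in range(count)]

# Chosen values: 3 (invalid, x<4), 4, 5, 14, 23, 24 (valid), 25 (invalid, 24<x)
for count, expected in [(3, False), (4, True), (5, True), (14, True),
                        (23, True), (24, True), (25, False)]:
    assert accept_inputs(make_inputs(count)) is expected, count
print("all chosen values behave as expected")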


Test Design Techniques:
The purpose of test design techniques is to identify test conditions and test scenarios through which effective and efficient test cases can be written. Test design techniques help in achieving high test coverage. Using test design techniques is a better approach than picking test cases out of the air. In this post, we will discuss the following:
1. Black Box Test Design Techniques
   • Specification Based
   • Experience Based
2. White-box or Structural Test Design Techniques

1. Black-box testing techniques
These include specification-based and experience-based techniques. They use external descriptions of the software, including specifications, requirements and design, to derive test cases. These tests can be functional or non-functional, though usually functional. The tester does not need any knowledge of the internal structure or code of the software under test.

Specification-based techniques:
• Equivalence partitioning (mentioned before)
• Boundary value analysis
• Use case testing
• Decision tables
• Cause-effect graph
• State transition testing
• Classification tree method
• Pair-wise testing
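
To show how one of these techniques drives test cases, here is a small decision-table sketch in Python. The login rules and the login function are invented purely for illustration; each row of the table becomes one test case.

# Hypothetical decision table for a login feature: conditions -> expected action.
decision_table = [
    # (valid_user, valid_password, account_locked, expected_outcome)
    (True,  True,  False, "login_ok"),
    (True,  True,  True,  "show_locked_message"),
    (True,  False, False, "show_error"),
    (False, True,  False, "show_error"),
    (False, False, False, "show_error"),
]

def login(valid_user, valid_password, account_locked):
    # Toy implementation under test (an assumption made for this example).
    if not (valid_user and valid_password):
        return "show_error"
    if account_locked:
        return "show_locked_message"
    return "login_ok"

for valid_user, valid_password, locked, expected in decision_table:
    actual = login(valid_user, valid_password, locked)
    assert actual == expected, (valid_user, valid_password, locked, actual)
print("all decision-table cases passed")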

From ISTQB Syllabus: Common features of specification-based techniques:
• Models, either formal or informal, are used for the specification of the problem to be solved, the software or its components.
• From these models test cases can be derived systematically.

Experience-based techniques:
• Error Guessing
• Exploratory Testing
Read the Unscripted Testing Approaches below for the above.

Unscripted Testing Approaches

Error Guessing
Why can one Tester find more errors than another Tester in the same piece of software? More often than not this is down to a technique called ‘Error Guessing’. To be successful at Error Guessing, a certain level of knowledge and experience is required. A Tester can then make an educated guess at where potential problems may arise. This could be based on the Tester’s experience with a previous iteration of the software, or just a level of knowledge in that area of technology.

It is often used by creating a list of potential problem areas/scenarios and then producing a set of test cases from it. This approach can often find errors that would otherwise be missed by a more structured testing approach, and it can be very effective at pin-pointing potential problem areas in software. An example of how to use the ‘Error Guessing’ method would be to imagine you had a software program that accepted a ten-digit customer code. The software was designed to only accept numerical data.

Here are some example test case ideas that could be considered as Error Guessing (a small executable sketch of these ideas appears after the Ad-hoc Testing notes below):
1. Input of a blank entry
2. Input of greater than ten digits
3. Input of a mixture of numbers and letters
4. Input of identical customer codes
What we are effectively trying to do when designing Error Guessing test cases is to think about what could have been missed during the software design.

Exploratory Testing
This type of testing is normally governed by time. It consists of using tests based on a test charter that contains test objectives. It is most effective when there are little or no specifications available. It can basically ensure that major functionality is working as expected without fully testing it. If you have a very small window in which to test something, then the following are points to consider:
1. Take some time to think about what you want to achieve
2. Prioritize functional areas to test if under a strict amount of testing time
3. Allocate time to each functional area when you want to test the whole item
4. Log as much detail as possible about the item under test and its environment
5. Log as much as possible about the tests and the results

Ad-hoc Testing
This type of testing is considered to be the most informal, and by many it is considered to be the least effective. Ad-hoc testing is simply making up the tests as you go along. Often, it is used when there is only a very small amount of time to test something. A common mistake with Ad-hoc testing is not documenting the tests performed and the test results; even if this information is included, more often than not additional information is not logged, such as software versions, dates, test environment details, etc. Ad-hoc testing should only be used as a last resort, but if careful consideration is given to its usage then it can prove to be beneficial. It should only really be used to assist with, or complement, a more formal approach, as it cannot be considered a complete form of testing software.
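
Tying this back to the Error Guessing example: the ten-digit customer code ideas above can be written as quick automated checks. validate_customer_code is a hypothetical stand-in for the real validation routine, assumed to accept exactly ten numeric digits.

# Hypothetical ten-digit customer code validator (assumed rule: ten digits only).
def validate_customer_code(code: str) -> bool:
    return len(code) == 10 and code.isdigit()

assert validate_customer_code("") is False             # 1. blank entry
assert validate_customer_code("12345678901") is False  # 2. more than ten digits
assert validate_customer_code("12345abc90") is False   # 3. numbers and letters mixed
assert validate_customer_code("1234567890") is True    # sanity check: a valid code
# 4. identical customer codes need system state (e.g. a registration step),
#    so that idea fits better in a scripted end-to-end scenario.
print("error-guessing checks passed")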

Random Testing
A Tester normally selects test input data from what is termed an ‘input domain’ in a structured manner. Random Testing is simply when the Tester selects data from the input domain ‘randomly’. In order for random testing to be effective, there are some important open questions to be considered:
1. Is random data sufficient to prove the module meets its specification when tested?
2. Should random data only come from within the ‘input domain’?
3. How many values should be tested?
As you can tell, there is little structure involved in ‘Random Testing’. In order to avoid dealing with the above questions, a more structured Black-box Test Design could be implemented instead. However, using a random approach could save valuable time and resources if used in the right circumstances. There has been much debate over the effectiveness of using random testing techniques over some of the more structured techniques, and most experts agree that using random test data provides little chance of producing an effective test. You often find in the real world that ‘Random Testing’ is used in association with other structured techniques to provide a compromise between targeted testing and testing everything. There are many tools available today that are capable of selecting random test data from a specified data value range. This approach is especially useful when it comes to tests at the system level.

From the ISTQB* Syllabus (* International Software Testing Qualifications Board): Common features of experience-based techniques:
• The knowledge and experience of people are used to derive the test cases.
• Knowledge of testers, developers, users and other stakeholders about the software, its usage and its environment.
• Knowledge about likely defects and their distribution.
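
Returning to Random Testing for a moment, here is a minimal tool-free sketch in Python. The clamp_quantity function and the 1–999 input domain are assumptions reused from the earlier example; the seed is fixed so any failure can be reproduced.

import random

# Hypothetical function under test: clamps a quantity into the range 1..999.
def clamp_quantity(qty: int) -> int:
    return max(1, min(qty, 999))

random.seed(42)  # fixed seed so a failing run can be repeated exactly

for _ in range(100):
    qty = random.randint(-100, 1100)   # draw from (and slightly outside) the domain
    result = clamp_quantity(qty)
    # Property checked for every random input: the result stays inside the domain.
    assert 1 <= result <= 999, f"clamp_quantity({qty}) returned {result}"
print("100 random inputs checked")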

2. White-box techniques
Also referred to as structure-based techniques. These are based on the internal structure of the component, for example its code and design. The tester must have knowledge of the internal structure or code of the software under test.

Structural or structure-based techniques include:
• Statement testing
• Condition testing
• LCSAJ (loop testing)
• Path testing
• Decision testing/branch testing

From ISTQB Syllabus: Common features of structure-based techniques:
• Information about how the software is constructed is used to derive the test cases.
• The extent of coverage of the software can be measured for existing test cases, and further test cases can be derived systematically to increase coverage.
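
The difference between statement coverage and decision/branch coverage is easiest to see on a tiny example. The function and its discount rule are made up for illustration only: one test can execute every statement while still leaving a branch untested.

# Hypothetical function used to illustrate statement vs. branch coverage.
def apply_discount(price, is_member):
    if is_member:            # the decision under test
        price = price - 10   # assumed flat member discount
    return price

# Test A alone gives 100% statement coverage: every line runs at least once.
assert apply_discount(100, True) == 90
# But the False branch of the decision is only exercised once Test B is added,
# so decision/branch coverage stays at 50% without it.
assert apply_discount(100, False) == 100
print("coverage example checks passed")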

Art of Test Case Writing

Objective and Importance of a Test Case
• The basic objective of writing test cases is to ensure complete test coverage of the application. The most extensive effort in preparing to test software is writing test cases.
• Documenting the test cases prior to test execution ensures that the tester does the ‘homework’ and is prepared for the ‘attack’ on the Application Under Test.
• Breaking down the Test Requirements into Test Scenarios and Test Cases helps the testers avoid missing out certain test conditions.
• Gives better reliability in estimating the test effort.
• Improves productivity during test execution by reducing the “understanding” time during execution.
• Writing effective test cases is a skill that can be achieved by experience and in-depth study of the application on which the test cases are being written.

What is a Test Case?
It is the smallest unit of testing.
• A test case is a detailed procedure that fully tests a feature or an aspect of a feature, to determine if a feature of an application is working correctly.
• Whereas the test plan describes what to test, a test case describes how to perform a particular test.
• A test case has components that describe an input, action or event and an expected response.
• Test cases must be written by a team member who thoroughly understands the function being tested.

Elements of a Test Case
Every test case must have the following details (Anatomy of a Test Case):
Test Case ID
Requirement # / Section

Objective: [What is to be verified?]
Assumptions & Prerequisites
Steps to be executed
Test data (if any): [Variables and their values]
Expected result
Status: [Pass or Fail, with details on the Defect ID and proofs (output files, screenshots – optional)]
Comments

Any CMMi company would have defined templates and standards to be adhered to while writing test cases.

Language to be used in Test Cases:
1. Use simple and easy-to-understand language.
2. Use words like “Verify” / “Validate” to start any sentence in the Test Case description (especially for checking the GUI). For example:
- Validate the fields available in the _________ screen/tab.
3. Use the active voice while writing test cases. For example:
- Navigate to the Account Summary page.
- Enter the data in screen 1.
- Choose option 1.
- Click on the OK button.
4. Use words like “is/are” and use the present tense for Expected Results. For example:
- The application displays the account information screen.
- An error message is displayed on entering special characters.
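
As a sketch, here is how one such test case could be captured in a structured, machine-readable form. The IDs, steps and expected result are invented for illustration and simply follow the anatomy described above.

# Hypothetical test case recorded against the anatomy described above.
test_case = {
    "test_case_id": "TC_001",                     # example ID
    "requirement": "SR-1.1",                      # example requirement reference
    "objective": "Verify that the Account Summary page shows account information",
    "assumptions_prerequisites": ["User is logged in with a valid account"],
    "steps_to_execute": [
        "Navigate to the Account Summary page",
        "Choose option 1",
        "Click on the OK button",
    ],
    "test_data": {"account_number": "1234567890"},
    "expected_result": "The application displays the account information screen",
    "status": None,       # set to "Pass" or "Fail" (with a Defect ID) after execution
    "comments": "",
}
print(test_case["test_case_id"], "-", test_case["objective"])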

Fault, Error and Failure
Fault: It is a condition that causes the software to fail to perform its required function.
Error: Refers to the difference between the Actual Output and the Expected Output.
Failure: It is the inability of a system or component to perform a required function according to its specification.

IEEE Definitions
• Failure: External behavior is incorrect.
• Fault: Discrepancy in code that causes a failure.
• Error: Human mistake that caused the fault.

Note:
• Error is the terminology of the Developer.
• Bug is the terminology of the Tester.
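
A tiny sketch (the function and its specification are invented) that ties the IEEE terms together: the developer's mistake is the error, the wrong line of code is the fault, and the wrong result observed when the program runs is the failure.

# Error: the developer mistakenly believed the discount should be added.
# Fault: the '+' below should be '-', a discrepancy in the code.
def discounted_price(price, discount):
    return price + discount   # faulty line

# Failure: executing the faulty code produces incorrect external behavior.
expected = 90
actual = discounted_price(100, 10)
print(f"expected {expected}, got {actual}")   # got 110 instead of 90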


V-model is the basis of structured testing
You will find out this is a great model! The left side shows the classic software life cycle, and the right side shows the verification and validation for each phase.

Note: In the classic waterfall software life cycle, testing would be at the end of the life cycle. The V-model is a little different: we have already added some testing review to it.

Let's discuss the left side of the V-Model:

- Analyze User requirements
End users express their wish for a solution for one or more problems they have. In testing you have to start preparation of your user tests at this moment! You should do test preparation sessions with your acceptance testers. Ask them what cases they want to test. Ask them about the difficulties they meet in their everyday work now. It might help you to find good test cases if you interview end users about the everyday cases they work on. Give feedback about the results of this preparation (hand the list of real-life cases and the questions) to the analyst team. Or even better, invite the analyst team to the test preparation sessions. They will learn a lot!

- System requirements
One or more analysts interview end users, third parties and other interested parties to find out what is really wanted. They write down what they found out, and usually this is reviewed by the Development/Technical Team. In testing you can start now by breaking the analyses down into 'features to test'. One analysis document will have a number of features to test. One 'feature to test' can only have 2 answers: 'pass' or 'fail'. Later this will be extremely useful in your quality reporting! Look for inconsistencies and things you don't understand in the analysis documents. There's a good chance that if you don't understand it, neither will the developers. Give feedback with your questions and remarks to the analyst team. This is a second review delivered by testing, in order to find the bug as early as possible!

- Global and detailed design
Development translates the analysis documents into a technical design.

- Code / Build
Developers program the application and build the application.

The right side shows the different testing levels:

- Component testing (unit testing)
Every time a developer finishes a part of the application, he should test it to see if it works properly.

- Component integration testing
Once a set of application parts is finished, a member of the Development team should test to verify whether the different parts do what they have to do. Component testing and component integration testing are the tests development performs to make sure that all the issues of the technical and functional analysis are implemented properly. Once these tests pass successfully, system testing can start.

- System and System integration testing
In this testing level we are going to check whether the features to test, distilled from the analysis documents, are realised properly. Best results will be achieved when these tests are performed by professional testers.

- System testing
In this testing level each part (use case, screen description) is tested separately. Typical stuff to test: navigation between different screens, consistency in the GUI, background processes started in one screen, updating a database, giving a certain output (PDF, ...).

- System integration testing
Different parts of the application are now tested together to examine the quality of the application. System integration testing also involves testing the interfacing with other systems. E.g., if you have a web shop, you probably will have to test whether the integrated online payment service works. This is an important (but sometimes difficult) step, because you will have to make arrangements with parties outside the project group. These interface tests are usually not easy to realise.

- Acceptance testing
Here real users (= the people who will have to work with it) validate whether this application is what they really wanted. This comic explains why end users need to accept the application: this is what the client actually needs :-( During the project a lot of interpretation has to be done. The analyst team has to translate the wishes of the customer into text. Development has to translate these into program code. Testers have to interpret the analysis to make the features-to-test list.

Tell somebody a phrase. Make him tell this phrase to another person, and this person to another one. Do this 20 times. You'll be surprised how much the phrase has changed! This is exactly the same phenomenon you see in software development.

Let the end users test the application with the real cases you listed in the test preparation sessions. Ask them to use real-life cases! And, instead of getting angry, listen when they tell you that the application is not doing what it should do. They are the people who will suffer the application's shortcomings for the next couple of years. They are your customer!

V Model to W Model | W Model in SDLC Simplified
We already discussed that the V-model is the basis of structured testing. The V Model represents a one-to-one relationship between the documents on the left hand side and the test activities on the right. However, there are a few problems with the V Model. This is not always correct: System testing not only depends on the functional requirements but also on the technical design and architecture. A couple of testing activities are not explained in the V Model either. This is a major exception, and the V-Model does not support the broader view of testing as a continuously major activity throughout the software development lifecycle.

Paul Herzlich introduced the W-Model. The W-Model presents a standard development lifecycle with every development stage mirrored by a test activity. On the left hand side, typically, the deliverables of a development activity (for example, writing requirements) are accompanied by a test activity ("test the requirements") and so on. If you see the below picture, every activity is shadowed by a test activity. The purpose of the test activity specifically is to determine whether the objectives of that activity have been met and the deliverable meets its requirements. The 'W' model illustrates that testing starts from day one of project initiation. In the W Model, those testing activities are covered which are skipped in the V Model. The 1st "V" shows all the phases of the SDLC and the 2nd "V" validates each phase.

Fig 1: W Model

Fig 2: Each phase is verified/validated. In the above figure, the dotted arrow shows that every phase in brown is validated/tested through every phase in sky blue.
• Point 1 refers to – Build Test Plan & Test Strategy.
• Point 2 refers to – Scenario Identification.
• Points 3 and 4 refer to – Test case preparation from the specification document and design documents.
• Point 5 refers to – Review of test cases and updating them as per the review comments.
Now, the above 5 points cover static testing.
• Point 6 refers to – Various testing methodologies (i.e. equivalence partitioning, boundary value, specification-based testing, path testing, unit/integration testing, security testing, usability testing, performance testing).
• After this, there are regression test cycles and then User Acceptance Testing.

Conclusion
So, as you can see, the connection between the various test stages and the basis for the test is clear with the W Model (which is not clear in the V Model). The V Model only shows dynamic test cycles, but the W Model gives a broader view of testing.

More comparison of the W Model with other SDLC models >> Document PDF.

The Testing Mindset
A professional tester approaches a product with the mindset that the product is already broken: it has bugs, and it is their job to find them. They suppose the application under test is inherently defective and that it is their job to ‘illuminate’ the defects. This methodology/approach is required in testing. Identifying the need for the Testing Mindset is the first step towards a successful test approach and strategy.

Designers and developers approach software with an optimism based on the guess/assumption that the changes they make are the accurate solution to a particular problem. But they are just that – assumptions. Without being proved, they are no more correct than guesses. Developers often neglect primary ambiguities in specification documents in order to complete the project, or they fail to identify them when they see them. Those ambiguities are then built into the code and represent a bug when compared to the end-user's needs. By taking a skeptical approach, the tester offers a balance.

Sometimes this attitude can bring arguments with the Development Team. But the development team can be testers too! If they can accept and adopt this state of mind for a certain portion of the project, they can offer excellent quality in the project and reduce the cost of the project.

A good professional tester:
• Takes nothing at face value.
• Always asks the question “why?”
• Seeks to drive out certainty where there is none.
• Seeks to illuminate the darker parts of the project with the light of inquiry.

Concept of Complete Testing | Exhaustive testing is impossible
It is not unusual to find people making claims such as “I have exhaustively tested the program.” Complete, or exhaustive, testing means there are no undiscovered faults at the end of the test phase; all problems must be known at the end of complete testing. For most systems, complete testing is near impossible because of the following reasons:
• The domain of possible inputs of a program is too large to be completely used in testing a system. There are both valid inputs and invalid inputs. The program may have a large number of states. There may be timing constraints on the inputs, that is, an input may be valid at a certain time and invalid at other times. An input value which is valid but is not properly timed is called an inopportune input.
• The design issues may be too complex to completely test. The design may have included implicit design decisions and assumptions. For example, a programmer may use a global variable or a static variable to control program execution.
• It may not be possible to create all possible execution environments of the system. This becomes more significant when the behaviour of the software system depends on the real, outside world, such as weather, temperature, altitude, pressure, and so on.
[From the book: Software Testing and Quality Assurance: Theory and Practice, by Kshirasagar Naik and Priyadarshi Tripathy]
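
A quick back-of-the-envelope illustration of the first point (the numbers are hypothetical and only meant to show the scale): a single 32-bit integer input already has about 4.3 billion possible values.

# Rough illustration of why exhaustive input coverage is impractical.
values_per_32bit_int = 2 ** 32      # 4,294,967,296 possible values
tests_per_second = 1_000            # optimistic automated test rate (assumed)

seconds = values_per_32bit_int / tests_per_second
days = seconds / (60 * 60 * 24)
print(f"{values_per_32bit_int:,} inputs at {tests_per_second}/s is about {days:,.0f} days")
# Roughly 50 days for one 32-bit input; two such inputs give 2**64 combinations,
# which would take on the order of 600 million years at the same rate.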

Must Read: Testing Limitations
• You cannot test a program completely.
• We can only test against the system requirements, and may not detect errors in the requirements. Incomplete or ambiguous requirements may lead to inadequate or incorrect testing.
• Exhaustive (total) testing is impossible in the present scenario.
• Time and budget constraints normally require very careful planning of the testing effort, and a compromise between thoroughness and budget.
• Test results are used to make business decisions for release dates.
• Even if you do find the last bug, you'll never know it.
• You will run out of time before you run out of test cases.
• You cannot test every path.
• You cannot test every valid input.
• You cannot test every invalid input.

See also: The Impossibility of Complete Testing by Dr. Cem Kaner (PDF document).

How and When Testing Starts
For the betterment of the reliability and performance of an Information System, it is always better to involve the Testing team right from the beginning of the Requirement Analysis phase. The active involvement of the testing team will give the testers a clear vision of the functionality of the system, by which we can expect a better-quality, error-free product. Generally, organizations follow the V-Model for their development and testing.

Once the Development Team-lead analyzes the requirements, he will prepare the System Requirement Specification. After that he will schedule a meeting with the Testing Team (the Test Lead and the Testers chosen for that project). The Development Team-lead will explain the project, the total schedule of modules, deliverables and versions. The involvement of the Testing team starts from here. After analyzing the requirements, the Test Lead will prepare the Test Strategy and the Test Plan, which is the scheduler for the entire testing process. Here he will plan when each phase of testing, such as Unit Testing, Integration Testing, System Testing and User Acceptance Testing, will take place. Meanwhile the Development Team prepares some important documents, like the Detailed Design Document, Software Project Plan, Software Configuration Management Plan, Software Measurements/Metrics Plan, Requirement Traceability Matrix and Software Quality Assurance Plan, and moves to the next phase of the Software Life Cycle, i.e. Design. Other documents produced over the course of the project include the System Test Plan Document, Integration Test Cases Document, Unit Test Cases Document (which is prepared by the Developers if there are no separate white-box testers), the Updated Requirement Traceability Matrix, and Review and SQA audit Reports for all Test Cases.

After preparation of the Test Plan, the Test Lead distributes the work to the individual testers (white-box testers and black-box testers). While the testing team works on the Test Strategy, Test Plan and Test Cases, the Development team works on their individual modules. Based on the Software Requirement Specification/Functional Requirement Document, the testers prepare Test Cases using a standard template or an automation tool. After that they send them for review to the Test Lead. Once the Test Lead approves the Test Cases, the testers prepare the Test Environment/Test Bed, which is used specifically for testing. Typically the Test Environment replicates the client-side system setup. Sometimes the testers also participate in the Code Reviews, which is static testing: they check the code against a checklist of historical logical errors, indentation and proper commenting.

Three or four days before the First Release, the development team gives an interim release to the Testing Team. Of course it may still have some minor bugs. We are ready for testing, and the testers' execution work starts from this stage: they deploy that software on the Test Machine and the actual testing starts. The Testing team tests against the Test Cases which are already prepared, and reports bugs in a Bug Report template or an automation tool (based on the organization). Once Cycle #1 testing is done, they submit the Bug Report to the Test Lead, who discusses these issues with the Development Team-lead, after which the developers work on those bugs and fix them. After all the bugs are fixed, they release the next build.

Cycle #2 testing starts at this stage: now we have to run all the Test Cases again and check whether all the bugs reported in Cycle #1 are fixed or not. Here we also do regression testing, meaning we check whether the changes in the code have any side effects on the already tested code. Again we repeat the same process till the Delivery Date. Generally we document the information of 4 cycles in the Test Case Document. At the time of Release there should not be any high-severity, high-priority bugs; remaining minor issues are planned to be fixed in the next iteration or release (generally called Deferred bugs). The testers track the bugs by changing the status of each bug at every stage. The Testing Team handles configuration management of builds and is also responsible for keeping track of change management, to give a qualitative and bug-free product. And at the end of the Delivery, the Test Lead and the individual testers prepare their reports.

Requirement Specification Document Review Guidelines and Checklists
To prepare effective test cases, testers and QA engineers should review the software specification documents carefully and raise as many queries as they can. The purpose of the Software Requirement Specification review is to uncover problems that are hidden within the specification document; these problems always lead the software to incorrect implementation. This is a part of defect prevention. So the following guidelines for a detailed specification review are suggested:
1. Always review the specification document with the entire testing team. Discuss each point with team members.
2. When you are doing the spec review, look carefully for vague/fuzzy terms like “ordinarily, most, mostly, often, some, sometimes, and usually” and ask for clarification. These terms can be interpreted in many ways.
3. Also take care of vague/fuzzy terms like “skipped, eliminated, rejected, handled, processed” and so forth.
4. Look for terms like "etc." and "such as", and be sure all the items/list values are understood. Many times it happens that list values are given but not completed.
5. In the specification document, make sure stated ranges don't contain unstated/implicit assumptions. For example: “The range of the Number field is from 10 to 100.” But is it decimal? Ask for clarification.
6. Take care of unclear pronouns, like “The ABC module communicates with the XYZ module and its value is changed to 1.” But whose value (of the ABC Module or the XYZ Module)?
7. Whenever a scenario/condition is defined in a paragraph, draw a picture of it in order to understand it and try to find the expected result. If a scenario is described which holds calculations, then work through its calculations with a minimum of two examples.
8. If a paragraph is too long, break it into multiple steps. It will be easy to understand.

9. If any mentioned scenario is complex, then try to break it into points.
10. If there is any open issue (under discussion) in the specs (sometimes to be resolved by the client), then keep track of those issues.
11. If any point of the specs is not clear, then get your queries resolved by the Business Analyst or Product Manager as soon as possible.
12. Always go through the revision history carefully. After the specs are signed off and finalized, if any change comes, then check the impacted areas.
13. Studying the specification document is an art. While studying specification documents, testers encounter various queries, and many times it happens that with those queries the requirement document gets changed/updated.

Role of a tester in defect prevention and defect detection
Some Testers (especially beginners) often get confused by this question: “What is the role of a tester in Defect Prevention and Defect Detection?” In this post we will discuss the role of a tester in these phases: how testers can prevent more defects in the Defect Prevention phase and how testers can detect more bugs in the Defect Detection phase.

Defect prevention – In defect prevention, developers play an important role. In this phase developers do activities like code reviews/static code analysis, unit testing, etc. Developers often neglect primary ambiguities in specification documents in order to complete the project, or they fail to identify them when they see them. Those ambiguities are then built into the code and represent a bug when compared to the end-user's needs. Testers are also involved in defect prevention by reviewing specification documents, as discussed in “How to review the specification document?” above. This is how testers help in defect prevention.

Defect Detection – In defect detection, the role of a tester includes implementing the most appropriate approach/strategy for testing, preparing and executing effective test cases, and conducting the necessary tests like functional testing, exploratory testing, etc. To increase the defect detection rate, the tester should have a complete understanding of the application. Ad-hoc/exploratory testing should go in parallel with the test case execution, as a lot of bugs can be found through that means.


Traceability Matrix from a Software Testing perspective
A Traceability Matrix is used in all software development life cycle phases:
1. Risk Analysis phase
2. Requirements Analysis and Specification phase
3. Design Analysis and Specification phase
4. Source Code Analysis, Unit Testing & Integration Testing phase
5. Validation – System Testing and Functional Testing phase

In this topic we will discuss:
• What is a Traceability Matrix from a Software Testing perspective? (Point 5)
• Types of Traceability Matrix
• Disadvantages of not using a Traceability Matrix
• Benefits of using a Traceability Matrix in testing
• Step-by-step process of creating an effective Traceability Matrix from requirements

In simple words – a requirements traceability matrix is a document that traces and maps user requirements [requirement IDs from the requirement specification document] to the test case IDs. The purpose is to make sure that all the requirements are covered in test cases, so that no functionality can be missed while testing. This document is prepared to satisfy the clients that the coverage done is complete, end to end. The document consists of the Requirement/Baseline doc Ref No., Test case/Condition, and Defects/Bug ID. Using this document a person can track the Requirement based on the Defect ID.
Note – We can make it a “Test case Coverage checklist” document by adding a few more columns. Sample formats of the Traceability Matrix, from a basic version to an advanced version, will be discussed in later posts.

Types of Traceability Matrix:
• Forward Traceability – Mapping of Requirements to Test cases
• Backward Traceability – Mapping of Test Cases to Requirements
• Bi-Directional Traceability – A good Traceability Matrix has references from test cases to the basis documentation and vice versa.

Why is Bi-Directional Traceability required?
Bi-Directional Traceability contains both Forward and Backward Traceability. The traceability matrix answers the following questions of any software project, for each phase of the SDLC:
• How is it feasible to ensure that I have correctly accounted for all the customer's needs?
• How can I certify that the final software product meets the customer's needs?
We can only make sure that requirements are captured in the test cases by using a traceability matrix.

Through Forward Traceability we can check in which test cases each requirement is covered, i.e. whether the requirements are covered in the test cases or not. The Forward Traceability Matrix ensures that we are building the Right Product.

Through the Backward Traceability Matrix, we can see which requirements each test case is mapped to. This will help us in identifying if there are test cases that do not trace to any coverage item – in which case the test case is not required and should be removed (or maybe a specification like a requirement or two should be added!). This “backward” traceability is also very helpful if you want to identify how many requirements a particular test case covers. The Backward Traceability Matrix ensures that we are Building the Product Right.

Disadvantages of not using a Traceability Matrix [some possible (seen) impact]:

No traceability or incomplete traceability results in:
1. Poor or unknown test coverage, and more defects found in production.
2. Missing some bugs in earlier test cycles which may arise in later test cycles, and then a lot of discussions and arguments with other teams and managers before release.
3. Difficult project planning and tracking, misunderstandings between different teams over project dependencies, delays, etc., resulting in wastage of manpower, time and effort.

Benefits of using a Traceability Matrix
• Makes it obvious to the client that the software is being developed as per the requirements.
• To make sure that all requirements are included in the test cases.
• To make sure that developers are not creating features that no one has requested.
• Easy to identify the missing functionalities.
• Easy to identify any “extra” functionality in the completed system that may not have been specified in the design specification.
• If there is a change request for a requirement, then we can easily find out which test cases need to be updated.

Steps to create a Traceability Matrix:
1. Make use of Excel to create the Traceability Matrix.
2. Define the following columns: Base Specification/Requirement ID (if any), Requirement ID, Requirement description, TC 001, TC 002, TC 003, and so on.
3. Identify all the testable requirements at a granular level from the requirement document. Typical requirements you need to capture are: use cases (all the flows are captured), error messages, business rules, functional rules, SRS, FRS, and so on.
4. Identify all the test scenarios and test flows.
5. Map the Requirement IDs to the test cases. Assume (as per the table below) that test case "TC 001" is one of your flows/scenarios. In this scenario, requirements SR-1.1 and SR-1.2 are covered, so mark "x" for these requirements.

From the table below you can conclude:
• Requirement SR-1.1 is covered in TC 001
• Requirement SR-1.2 is covered in TC 001
• Requirement SR-1.5 is covered in TC 001 and TC 003
• and so on.
[Now it is easy to identify which test cases need to be updated if there is any change request.]

TC 001 covers SR-1.1 and SR-1.2 [we can easily identify which requirements each test case covers]. TC 002 covers SR-1.3, and so on.

Requirement ID | Requirement description                               | TC 001 | TC 002 | TC 003
SR-1.1         | User should be able to do this                        |   x    |        |
SR-1.2         | User should be able to do that                        |   x    |        |
SR-1.3         | On clicking this, the following message should appear |        |   x    |
SR-1.4         | ...                                                   |        |        |
SR-1.5         | ...                                                   |   x    |        |   x
SR-1.6         | ...                                                   |        |        |
SR-1.7         | ...                                                   |        |        |

This is a very basic traceability matrix format. You can add the following columns to make it more effective: ID, Assoc ID, Technical Assumption(s) and/or Customer Need(s), Functional Requirement, Status, Architectural/Design Document, Technical Specification, System Component(s), Software Module(s), Test Case Number, Tested In, Implemented In, Verification, Additional Comments. Check the Excel worksheet.
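To make the forward and backward lookups concrete, here is a minimal Python sketch (an illustration added here, not part of the original article) that represents the example matrix above as a simple mapping; the requirement and test case IDs are the hypothetical ones from the table.

# Minimal sketch: the example traceability matrix as a Python dict.
# Keys are requirement IDs; values are the test cases marked "x" for them.
matrix = {
    "SR-1.1": ["TC 001"],
    "SR-1.2": ["TC 001"],
    "SR-1.3": ["TC 002"],
    "SR-1.5": ["TC 001", "TC 003"],
}

all_requirements = ["SR-1.1", "SR-1.2", "SR-1.3", "SR-1.4", "SR-1.5", "SR-1.6", "SR-1.7"]

def test_cases_for(requirement):
    # Forward traceability: which test cases cover this requirement?
    return matrix.get(requirement, [])

def requirements_for(test_case):
    # Backward traceability: which requirements does this test case cover?
    return [req for req, tcs in matrix.items() if test_case in tcs]

# Coverage check: requirements that no test case covers yet.
uncovered = [req for req in all_requirements if not matrix.get(req)]

print(test_cases_for("SR-1.5"))    # ['TC 001', 'TC 003']
print(requirements_for("TC 001"))  # ['SR-1.1', 'SR-1.2', 'SR-1.5']
print(uncovered)                   # ['SR-1.4', 'SR-1.6', 'SR-1.7']

The same idea scales up in Excel: a change request against SR-1.5 immediately tells you that TC 001 and TC 003 need to be revisited.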

Functional Requirements and Use Cases

Functional Requirements
Functional requirements capture the intended behavior of the system. This behavior may be expressed as services, tasks or functions the system is required to perform.

In product development, it is useful to distinguish between the baseline functionality necessary for any system to compete in that product domain, and features that differentiate the system from competitors' products, and from variants in your company's own product line/family. Features may be additional functionality, or differ from the basic functionality along some quality attribute (such as performance or memory utilization); the latter are expressed as non-functional requirements.

One strategy for quickly penetrating a market is to produce the core, or stripped down, basic product, adding features to variants of the product to be released shortly thereafter. This release strategy is obviously also beneficial in information systems development, staging core functionality for early releases and adding features over the course of several subsequent releases.

In many industries, companies produce product lines with different cost/feature variations per product in the line, and product families that include a number of product lines targeted at somewhat different markets or usage situations. What makes these product lines part of a family are some common elements of functionality and identity. A platform-based development approach leverages this commonality, utilizing a set of reusable assets across the family.

These strategies have important implications for software architecture. In particular, it is not just the functional requirements of the first product or release that must be supported by the architecture. The functional requirements of early (nearly concurrent) releases need to be explicitly taken into account, and later releases are accommodated through architectural qualities such as extensibility, flexibility, etc.

Use cases have quickly become a widespread practice for capturing functional requirements. This is especially true in the object-oriented community where they originated, but their applicability is not limited to object-oriented systems.

Use Cases
A use case defines a goal-oriented set of interactions between external actors and the system under consideration. Actors are parties outside the system that interact with the system (UML 1999, pp. 2.113-2.123). An actor may be a class of users, roles users can play, or other systems. Cockburn (1997) distinguishes between primary and secondary actors. A primary actor is one having a goal requiring the assistance of the system. A secondary actor is one from which the system needs assistance.

A use case is initiated by a user with a particular goal in mind, and completes successfully when that goal is satisfied. It describes the sequence of interactions between actors and the system necessary to deliver the service that satisfies the goal. It also includes possible variants of this sequence, e.g., alternative sequences that may also satisfy the goal, as well as sequences that may lead to failure to complete the service because of exceptional behavior, error handling, etc. The system is treated as a "black box", and the interactions with the system, including system responses, are as perceived from outside the system.

Thus, use cases capture who (actor) does what (interaction) with the system, for what purpose (goal), without dealing with system internals. A complete set of use cases specifies all the different ways to use the system, and therefore defines all behavior required of the system, bounding the scope of the system.

Generally, use case steps are written in an easy-to-understand structured narrative using the vocabulary of the domain. This is engaging for users, who can easily follow and validate the use cases, and the accessibility encourages users to be actively involved in defining the requirements.

Scenarios
A scenario is an instance of a use case, and represents a single path through the use case. Thus, one may construct a scenario for the main flow through the use case, and other scenarios for each possible variation of flow through the use case (e.g., triggered by options, error conditions, security breaches, etc.). Scenarios may be depicted using sequence diagrams.

Role of Use Cases in Architecting
See the "Functional Requirements and Use Cases" white paper by Ruth Malan and Dana Bredemeyer for the role of use cases in the architecting process. That paper also contains an updated version of Derek Coleman's use case template (Coleman, 1998) and a full use case bibliography.
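For illustration only, here is a small, hypothetical use case written in the structured-narrative style described above (the actor, goal and steps are invented for this example and loosely follow a Cockburn/Coleman-style template):

Use case: Withdraw Cash (hypothetical example)
Primary actor: Bank customer
Goal: Obtain cash from the customer's own account
Main success scenario:
1. Customer inserts the card and enters the PIN.
2. System validates the card and PIN.
3. Customer selects "Withdraw" and enters an amount.
4. System checks the balance, dispenses the cash and prints a receipt.
Extensions (variants):
2a. Invalid PIN: system asks for re-entry; after three failures the card is retained.
4a. Insufficient funds: system shows an error message and returns to step 3.

Each path through the use case (the main scenario, the "invalid PIN" path, the "insufficient funds" path) is a separate scenario and a natural starting point for a test case.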

Recommended Reading on Use Cases
• Booch, G., I. Jacobson and J. Rumbaugh, The Unified Modeling Language User Guide, Addison-Wesley, 1999, pp. 219-241.
• Christerson, Magnus, "From Use Cases to Components", Rose Architect, 5/99. http://www.rosearchitect.com/cgi-bin/viewprint.pl
• Cockburn, Alistair, "Structuring Use Cases with Goals", Journal of Object-Oriented Programming, Sep-Oct 1997 and Nov-Dec 1997. Available on http://members.aol.com/acockburn/papers/usecases.htm
• Cockburn, Alistair, "Basic Use Case Template", Oct. 1998. Available on http://members.aol.com/acockburn/papers/uctempla.htm
• Coleman, Derek, "A Use Case Template: Draft for discussion", Fusion Newsletter, April 1998. (This used to be available at http://www.hpl.hp.com/fusion/md_newsletters.html.)
• Constantine, Larry, "What Do Users Want? Engineering usability into software". http://www.foruse.com/articles/usecase.htm
• Malan, R. and D. Bredemeyer, "Functional Requirements and Use Cases" (functreq.pdf, 39k), June 1999.
• Malan, R. and D. Bredemeyer, "Use Case Action Guide" (Use_Case_Template.pdf, 25kb), April 2000.
• Pols, Andy, "Use Case Rules of Thumb: Guidelines and lessons learned", Fusion Newsletter, Feb. 1997. http://www.hpl.hp.com/fusion/md_newsletters.html
• Sehlhorst, Scott, "The Impact of Change on Use Cases", July 24, 2006. Scott's Tyner Blain blog covers requirements-related topics.
• UML Specification, 1999. We have referenced V1.3 Alpha R5 in this paper. http://www.rational.com/uml/index.html
• Wiegers, Karl, "Listening to the Customer's Voice". http://www.processimpact.com

Practical interview questions on Software Testing Part 1

1. How can we design the test cases from requirements? Do the requirements represent the exact functionality of the AUT?
Of course, the requirements should represent the exact functionality of the AUT (Application Under Test). First of all you have to analyze the requirements very thoroughly in terms of functionality. Then you have to think about suitable test case design techniques for writing the test cases [Black Box design techniques like specification-based/functional test cases, Boundary Value Analysis (BVA), Equivalence Class Partitioning (ECP), Error Guessing and Cause-Effect Graphing]. By these concepts you should design the test cases (a small illustrative sketch of ECP and BVA follows question 4 below).

2. On which basis do we give priority and severity for a bug? Give one example each of high priority with low severity and high severity with low priority.
The priority is always given by the team leader or Business Analyst; the severity is given by the reporter of the bug.
High severity: hardware bugs, application crashes, calculation bugs, etc.
Low severity: user interface bugs.
High priority: an error message does not come on time; for example, you click a button and the corresponding action does not happen.
Low priority: wrong alignment, etc.

3. What is the responsibility of a tester when a bug arrives at the time of testing? Explain.
First check the status of the bug, then check whether the bug is valid or not, then forward the bug to the team leader, and after confirmation forward it to the concerned developer.

4. What do you mean by reproducing the bug? If the bug is not reproducible, what is the next step?
If you find a defect (for example, you click a button and the corresponding action does not happen), it is a bug. If the developer is unable to find this behaviour, he will ask us to reproduce the bug. In another scenario, if the client complains about a defect in production, we will have to reproduce it in the test environment. Sometimes bugs are inconsistent and not reproducible; in that case we do further testing around them, and if we still cannot see them we close them. If the bug is not reproducible by the developer, the bug is assigned back to the reporter, or a GoToMeeting session or an informal meeting (like a walkthrough) is arranged in order to reproduce the bug. If we cannot reproduce it, we can mark the bug as inconsistent and temporarily close it with the status "working fine now", and just hope it never comes back again.
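To make the design techniques mentioned in question 1 concrete, here is a minimal Python/pytest sketch (an illustration added here, not part of the original answer); the requirement "age must be between 18 and 60 inclusive" and the function validate_age are hypothetical stand-ins for the AUT.

import pytest

def validate_age(age):
    # Hypothetical behaviour under test: valid range is 18..60 inclusive.
    return 18 <= age <= 60

# Equivalence Class Partitioning: one representative value per class.
@pytest.mark.parametrize("age, expected", [
    (10, False),   # invalid class: below the valid range
    (35, True),    # valid class: inside the range
    (75, False),   # invalid class: above the valid range
])
def test_equivalence_classes(age, expected):
    assert validate_age(age) == expected

# Boundary Value Analysis: values on and around each boundary.
@pytest.mark.parametrize("age, expected", [
    (17, False), (18, True), (19, True),   # lower boundary
    (59, True), (60, True), (61, False),   # upper boundary
])
def test_boundary_values(age, expected):
    assert validate_age(age) == expected

Run with pytest; together the two parametrized tests cover the equivalence classes and the boundary values for this single requirement.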

7.Test case (ID) . Mapping between high level design(Design Doc) .Failed test cases. Usually the traceability matrix is mapping between the requirements. For automation. Test cases: It is prepared by test engineer based on the use cases from FRS to check the functionality of an application thoroughly Test Plan: Team lead prepares test plan.test case (ID) . in test plan. What is the difference between use case. scheduling.By these concepts you should design a test case. in it he represents the scope of the test. test plan? Use Case: It is prepared by Business analyst in the Functional Requirement Specification(FRS). what to test using automation etc. The test cases are sorted in test plan tab or more precisely in the test director. lets say quality centers database. test case. 5.Failed test cases. 6. 1. .test cases (ID) . Mapping the functional requirement scenarios(FS Doc) . 3. Mapping between business requirements (BR Doc) . mapping between test plan (TP Doc) . which are nothing but a steps which are given by the customer. what to test and what not to test. create a new automated test and launch the tool and create the script and save it and you can run from the test lab the same way as you did for the manual test cases. client requirements.failed test cases. Read: Art of Test case writing 5.test cases (ID) Failed test cases(Bugs) 2.Failed test cases. test director is now referred to as quality center. How to launch the test cases in Quality Centre (Test Director) and where it is saved? You create the test cases in the test plan tab and link them to the requirements in the requirement tab. 4. test plan and test cases. Once the test cases are ready we change the status to ready and go to the “Test Lab” Tab and create a test set and add the test cases to the test set and you can run from there. How is traceability of bug follow? The traceability of bug can be followed in so many ways. function specification.test cases (ID) . Mapping between requirements(RS Doc) . which should have the capability of finding the absence of defects.

Mercury Quality Center Interview Questions

1. Can we upload test cases from an Excel sheet into Quality Centre?
Yes. Go to the Add-In menu in Quality Centre, find the Excel Add-In and install it on your machine. Now open Excel; you will find the new menu option "Export to Quality Centre". The rest of the procedure is self-explanatory.

2. What is meant by Test Lab in Quality Centre?
Test Lab is the part of Quality Centre where we can execute our tests in different cycles, creating a test tree for each one of them. We need to add tests to these test trees from the tests which are placed under Test Plan in the project. Internally, Quality Centre will refer to these tests while running them in the Test Lab.

3. Can you map the defects directly to the requirements (not through the test cases) in Quality Centre?
As far as I know, it is not possible to map defects directly to requirements: the database structure in Quality Centre maps test cases to defects, and only if you have created the bug from the test case run. Maybe we can update the mapping by using some code in the bug script module (from the Customize Project function). The following method is most likely to be used in this case:
• Create your requirements (Req.) structure
• Create the test case structure and the test cases
• Map the test cases to the application requirements (App. Req)
• Run and report bugs from your test cases in the Test Lab module

4. How do you run reports from Quality Centre?
This is how you do it:
1. Open the Quality Centre project.
2. Display the Requirements module.
3. Choose Analysis > Reports > Standard Requirements Report.

5. How many types of tabs are there in Quality Centre? Explain.
There are four types of tabs available:
1. Requirements: to track the customer requirements.
2. Test Plan: to design the test cases and to store the test scripts.
3. Test Lab: to execute the test cases and track the results.
4. Defects: to log a defect and to track the logged defects.

6. How do you use Quality Centre in a real-time project?
Once the preparation of the test cases is completed:
1. Export the test cases into Quality Centre (the export takes a total of 8 steps).
2. The test cases will be loaded in the Test Plan module.
3. Move the test cases from the Test Plan tab to the Test Lab module.
4. Once the execution is started, we execute the test cases in the Test Lab and mark each one as Pass, Fail or Incomplete.
5. If we get any defects, we raise them in the Defects module, attaching a screenshot when raising the defect.
6. We generate graphs in the Test Lab for the daily report and send them to the onsite team (or wherever you want to deliver them).

7. How do you map the requirements with test cases in Quality Centre?
1. In the Requirements tab, select Coverage View.
2. Select a requirement by clicking on the parent, child or grandchild.
3. On the right-hand side (in the Coverage View window) another window will appear. It has two tabs: a) Tests Coverage and b) Details. The Tests Coverage tab is selected by default, or you can click on it.
4. Click on the Select Tests button; a new window will appear on the right-hand side and you will see a list of all tests. You can select any test case you want to map to your requirement.

8. Can we export files from Quality Centre to an Excel sheet? If yes, then how?
Requirements tab: right-click on the main requirement, click Export and save as Word, Excel or another template; choose the All or Selected option. This will save all the child requirements as well.
Test Plan tab: only individual tests can be exported; no parent-child export is possible. Select a test script, click on the Design Steps tab, right-click anywhere on the open window, then click Export and Save As.
Test Lab tab: select a child group, click on the Execution Grid if it is not selected, right-click anywhere, then click Export and Save As. The default save option is Excel, but it can also be saved in documents and other formats.
Defects tab: right-click anywhere on the window, export all or selected defects, and save as an Excel sheet or document.

9. How can we add requirements to test cases in Quality Centre?
You can simply use the "Add Requirements" option. Two kinds of requirements are available in TD (Test Director):
1. Parent Requirement: nothing but the title of the requirement; it covers the high-level functions of the requirements.
2. Child Requirement: nothing but the sub-title of the requirement; it covers the low-level functions of the requirements.

10. What is the difference between WebInspect and QA Inspect?
QA Inspect finds and prioritizes security vulnerabilities in an entire web application, or in specific usage scenarios during testing, and presents detailed information and remediation advice about each vulnerability.
WebInspect enables users to perform security assessments for any web application or web service, including the industry-leading application platforms. With WebInspect, auditors, compliance officers and security experts can perform security assessments on a web-enabled application. WebInspect ensures the security of your most critical information by identifying known and unknown vulnerabilities within the web application.
