
Q. What is White Box Testing?

White box testing is a method of testing the functionality of code by exercising it with parameters passed within the code itself. White box testing involves looking at the structure of the code: when you know the internal structure of a product, tests can be conducted to ensure that the internal operations perform according to the specification and that all internal components have been adequately exercised. White box testing is also called structural testing or glass box testing.

Q. Why is White Box Testing needed?

White box testing is needed for the following reasons:
1. Logical errors tend to creep into our work when we design and implement functions, conditions, or controls that lie outside the program's mainstream flow.
2. Design errors arise from differences between the logical flow of the program and the actual implementation.
3. Typographical errors and syntax mistakes need to be caught.

Q. What are the different techniques used in white box testing?
Code Coverage (Statement Coverage): Ensure that each code statement is executed at least once.
Branch Coverage or Node Testing: Cover each branch in the code in all possible ways.
Compound Condition Coverage: For multiple conditions, test each condition with multiple paths and combinations of the different paths that reach it.
Basis Path Testing: Each linearly independent path through the code is taken for testing.
Data Flow Testing: In this approach you track specific variables through each possible calculation, thus defining the set of intermediate paths through the code. Data flow testing reflects dependencies, mainly through sequences of data manipulation. In short, each data variable is tracked and its use is verified. This approach tends to uncover bugs such as variables used but not initialized, or declared but not used.
Path Testing: All possible paths through the code are defined and covered. This is a time-consuming task.
Loop Testing: These strategies relate to testing single loops, concatenated loops, and nested loops. Independent and dependent code loops and values are tested by this approach.
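As a small language-neutral sketch of branch coverage (the `grade` function and its thresholds are hypothetical, invented only for illustration), each `if` needs at least one test that makes its condition true and one that makes it false:

```python
# Hypothetical function under test: every branch outcome must be exercised.
def grade(score):
    if score >= 90:        # branch 1: true / false
        return "A"
    if score >= 60:        # branch 2: true / false
        return "Pass"
    return "Fail"

# One test case per branch outcome gives full branch coverage:
assert grade(95) == "A"      # branch 1 true
assert grade(75) == "Pass"   # branch 1 false, branch 2 true
assert grade(30) == "Fail"   # both branches false
```

Note that three inputs suffice here even though the function accepts any integer; branch coverage, unlike exhaustive testing, only requires each decision outcome to occur once.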

Q. What are the limitations of white box testing?
It is not possible to test each and every path of the loops in a program, which means exhaustive testing is impossible for large systems. This does not mean that white box testing is not effective: selecting important logical paths and data structures for testing is practically possible and effective.

Q. What are the levels of testing in which white box testing can be present?
There are three levels at which white box testing can be present:
1. Unit Testing
2. Integration Testing
3. Regression Testing

Q. What are Dirty Test Cases?
Dirty test cases are those that check functionality for negative scenarios. In dirty test cases we provide negative or invalid inputs and verify that the application behaves correctly. Here the application is tested with the maximum amount of invalid input. Example: consider a banking application that allows fund transfers of at most $10,000 US per day. A dirty test case would try to transfer more than that value, say $10,001 US. Here the application should cleanly report that the maximum allowed transfer amount is $10,000 US; it should not perform any transaction.

Q. What are the impacts caused by failure in white box testing?
A failure found in white box testing may result in a change that requires all black box testing to be repeated and white box testing paths to be reviewed and possibly changed.

Q. Why is white box testing called glass box testing?
Everything is visible inside a glass box. When you perform white box testing, you can see the code and test it. In black box testing the tester does not know what is happening inside the code when he gives a particular input; he only sees the output, but does not get to know how that output was produced. In white box testing the tester knows exactly what is happening inside the code and why a particular output is produced for a particular input. That is why it is called glass box testing.

Q. Explain the relation between white box testing and equivalence partitioning.
White box testing is a type of testing used primarily for checking the code of an application, whereas equivalence partitioning is a strategy used in both white box and black box testing to decide the input values for testing. In equivalence partitioning we come up with multiple inputs, each of which represents a set of inputs with similar behavior.

Q. What is meant by API Testing? Explain the API Testing process.
API testing is done to make sure that the basic units of the software application function properly as desired. We perform API testing from the initial stages of the product cycle through the final phase, ensuring that the product released to the market is error-free and worth every penny invested. The API testing process involves testing the methods of .NET, Java, and J2EE APIs with valid and invalid inputs, plus testing the APIs on application servers. API testing targets the code level and can be done by testers as well as developers.

Q. Give one example where you did not find the bug in black box testing but you found the bug in white box testing?
Let's say you have an entity that is stored across multiple tables and the test case is to delete the entity. In black box testing, once the entity disappears from the GUI after deletion, the test case is considered passed. But with white box testing, you would check whether all related rows are deleted from the tables. If the deletion removes only the parent record and leaves behind orphan rows, the test case is considered failed.

Q. Given the specification: if (a > 4) then b = a - 1; else b = a + 1, where a and b are integer variables, list test cases that can detect the bug in the following implementation: if (a >= 8) b = a - 1; else b = a + 1;
Test with the following data:
1. a = 0, then the result should be b = 1
2. a = -1, then the result should be b = 0
3. a = 2, then the result should be b = 3
4. a = 4, then the result should be b = 5
5. a = 5, then the result should be b = 4
Test case number 5 will fail when testing if (a >= 8) b = a - 1; else b = a + 1: here 5 >= 8 is false, so the result displayed will be b = 6, which is wrong.
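The boundary bug above can be sketched in code. Only inputs falling between the correct boundary (a > 4) and the wrong one (a >= 8) can reveal the defect:

```python
# Specification: if a > 4 then b = a - 1, else b = a + 1
def spec(a):
    return a - 1 if a > 4 else a + 1

# Buggy implementation from the question: the boundary was coded as a >= 8
def buggy(a):
    return a - 1 if a >= 8 else a + 1

# Inputs outside the gap between the two boundaries cannot detect the bug:
for a in [0, -1, 2, 4]:
    assert spec(a) == buggy(a)
# Only a = 5, 6 or 7 exposes it (spec says a - 1, buggy returns a + 1):
for a in [5, 6, 7]:
    assert spec(a) != buggy(a)
```

This is exactly why boundary value analysis recommends testing on each side of every boundary: a single well-chosen value (a = 5) catches what four arbitrary ones missed.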

Q. What knowledge is essential to be able to perform white box testing?
The following knowledge is essential for white box testing:
1. A good understanding of the code/programming language.
2. Good analytical/logical ability, to be able to give the necessary inputs and to cover the required paths/branches.
3. A good understanding of testing concepts and methodologies.

Q. What are stubs and drivers used for in white-box testing?
Stubs and drivers are small programs used in integration testing; they stand in for the output of modules that have not yet been developed. In some cases the output of a non-priority module is needed in the project, so we write a small program that generates output exactly similar to that of the missing module. These programs are called stubs and drivers. A stub is used when we are testing the application with a top-down approach, and a driver is used when we are testing with a bottom-up approach.

Q. What is path coverage? How is path coverage testing done in white box testing?
Path coverage measures whether each of the possible paths in each function has been covered in testing. A path is a unique sequence of branches from the function entry to the exit. Very thorough testing is possible with path coverage. Path coverage is measured by an entity called cyclomatic complexity. The cyclomatic complexity of a software program is calculated from a connected graph of the module (which shows the topology of control flow within the program):
Cyclomatic complexity (CC) = E - N + 2P
where
E = the number of edges of the graph
N = the number of nodes of the graph
P = the number of connected components

Q. What is cyclomatic complexity in a software program?
Cyclomatic complexity is the number of linearly independent paths through a program's source code. It is computed using the control flow graph of the program: the nodes of the graph correspond to the commands of the program, and a directed edge connects two nodes if the second command might be executed immediately after the first.
CC = E - N + 2P
where
CC = cyclomatic complexity
E = the number of edges of the graph
N = the number of nodes of the graph
P = the number of connected components

Q. Who performs white box testing, the developer or the tester?
White box testing can be performed by anybody who has a good understanding of the code and the program logic. Generally developers perform white box testing during unit testing of their own components. API testers also perform white box testing.
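As a minimal sketch, McCabe's formula CC = E - N + 2P can be computed directly from an edge list. The control-flow graph below is hypothetical: a single if/else with nodes cond, then, else, and exit:

```python
# Hypothetical control-flow graph of a single if/else statement.
edges = [
    ("cond", "then"), ("cond", "else"),   # the two branch outcomes
    ("then", "exit"), ("else", "exit"),   # both branches rejoin at exit
]
nodes = {n for edge in edges for n in edge}

E, N, P = len(edges), len(nodes), 1       # one connected component
cc = E - N + 2 * P
print(cc)  # 2: one if/else yields two linearly independent paths
```

A CC of 2 matches the intuition that basis path testing of a single if/else needs two test cases, one per independent path.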

Automation Testing
Q. Do automated testing tools make testing easier? The answer is: it depends. For small projects, the time needed to learn and implement them may not be worth it. For larger projects, or ongoing long-term projects, automation tools can be valuable. A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded' and the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, etc., the application might then be retested by just 'playing back' the 'recorded' actions and comparing the logged results to check the effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation and analysis of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms. Other automated tools can include:
Code analyzers - monitor code complexity, adherence to standards, etc.
Coverage analyzers - check which parts of the code have been exercised by a test; may be oriented to code statement coverage, condition coverage, path coverage, etc.
Memory analyzers - such as bounds-checkers and leak detectors.
Load/performance test tools - for testing client/server and web applications under various load levels.
Web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.
Other tools - for test case management, documentation management, bug reporting, and configuration management.

Q. What is meant by a framework? When do we use a framework in a project?

A test automation framework is a set of assumptions, concepts, and practices that provide support for automated software testing. There is no hard and fast rule for using a specific automation framework; it all depends on your project needs. Here is some information on the same:
A data-driven approach is suitable for applications that have limited functionality but a large number of variations in terms of test data.
A functional framework is suitable for applications that have a variety of functionality but limited variations in terms of test data.
A hybrid test automation framework is suitable for applications that have a variety of functionality and a larger number of variations in terms of test data.
A record, enhance, and play back methodology is suitable for converting small to medium-size manual scripts into equivalent automation scripts on a one-to-one basis.

Q. What is the difference between a test script and a test case?
Test Script: It is the command language that we write to perform testing. For example, if we use Rational Robot we write all commands in SQABasic. A script is a program which automatically checks any number of inputs for a particular requirement or checkpoint. Furthermore, if we are performing a database checkpoint for, say, 1000 records, we can write a program (script) which automatically inputs and checks all 1000 entries in the database.
Test Case: It is the actual step-by-step representation for all valid and invalid inputs. It is a detailed description of all inputs (valid and invalid) and covers all requirements.

Q. Does JMeter generate any scripts? How do you use the JMeter tool, and how do you analyze the results?
When you create a test in JMeter, save the file. The file is saved with the extension .jmx. Open the .jmx file in an editor and you can see the script.

Q. How do you read data from a CSV file using the TestComplete functional tool?
TestComplete has drivers to work with data sources such as CSV files and Excel files. This is done in TestComplete using the DDT project item:
DDT.CSVDriver("Application Path"); // then work with your CSV file
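As a tool-agnostic sketch of the same data-driven idea (the TestComplete DDT driver plays a similar role), each row of a CSV file drives one iteration of the same test steps. The file content and column names here are hypothetical:

```python
import csv
import io

# Hypothetical test-data file: one row per data-driven test iteration.
csv_text = """username,password,expected
alice,secret1,ok
bob,,error
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
for row in rows:
    # In a real suite, each row's values would be fed into the same test steps
    # and the result compared against the 'expected' column.
    print(row["username"], "->", row["expected"])
```

The point of the driver (whether DDT.CSVDriver or `csv.DictReader`) is that adding a new test variation means adding a data row, not writing a new script.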

Q. How can we reverse a string without using the string Reverse function in QTP?
Str = "Reverse"
Cnt = Len(Str)
RStr = ""
For i = 1 To Cnt
    RStr = Mid(Str, i, 1) & RStr
Next
MsgBox RStr

Q. What are the different automation tools available in the market?
The popular automation testing tools available in the market are: WinRunner, LoadRunner, QuickTest Professional, SilkTest, and Rational Robot.

Q. What is the default time for executing 1000 lines of script in WinRunner and in QTP?
In WinRunner the default time for executing 1000 lines (or any number of lines) is 10 seconds. If the test script is not executed within that time limit, we have to synchronize the test script. In QTP, I am not sure about the default time limit for executing the lines.

Q. Which tool should we use to retrieve test data?
To retrieve test data we can use the global/local data sheet in QTP. We can also use the data-driven wizard in WinRunner.

Q. In real time, what is the testing process with respect to automation?
1. Select the test cases to automate.
2. Create a test plan for test automation.
3. Follow a development life cycle like the one below:
   - Design your test automation based on your selected test cases.
   - Find the common functionality throughout the application under test.
   - Create common global functions or follow OOP concepts.
   - Do not forget to include error verification in the script.
   - Create a test script.
   - Unit test your script.
4. Create the test automation suite.
5. Execute the suite.
6. Analyze the results.
7. Log a defect if the script fails.

Q. How do we start automation testing in a company? Is there any document for it? What are the different fields contained in an automation plan?

We start automation testing when a repeated action needs to be performed on an application and it cannot practically be done manually. The automation plan is nothing but the areas or fields where we need to conduct automation testing in the company.

Q. What is End-To-End Testing? Explain in detail.
End-to-end testing is nothing but system testing: testing all the functionality from the starting point to the ending point against the requirements.

Q. Which tool is the best in automation?
There are many open source tools available in the market. If your client is happy to pay a large amount of money, then the HP products are better (QTP/LoadRunner/WinRunner, etc.); otherwise there are open source tools which provide the same functionality as QC for test management and QTP for functional testing.

Q. What are the factors to consider when selecting a test tool?
Capability: Does the tool have all the critical features we need, especially in the areas of test result validation and test suite management?
Reliability: Does the tool work for long periods without failure, or is it full of bugs? Many test tools are developed by small companies that do a poor job of testing the software.
Capacity: Can it handle large-scale test suites that run for hours or days and involve thousands of scripts?
Learnability: Can the tool be mastered in a short time? Are there training classes or books available to aid that process?
Operability: Are the features of the tool cumbersome to use, or prone to user error?
Performance: Is the tool quick enough to allow a substantial saving in test development and execution time versus hand testing?
Compatibility: Does the tool work with the particular technology that we need to test?
Non-Intrusiveness: How well does the tool simulate an actual user? Is the behavior of the software under test the same with automation as without?

Q. I have recorded a script for a Windows application (login screen) using IE as the browser, and I have executed that script successfully. Now I want to execute the same script in different browsers. Will the same script execute successfully on all other browsers?
The browsers are different, so executing the same script requires the GUI map for the particular browser. Only after the browser's objects are loaded into the GUI map editor will the script run.

Q. Which applications does WinRunner support? Which applications does QTP support? Can WinRunner or QTP be used on Linux?
WinRunner supports VB, .NET, Java, D2K, Delphi, Siebel, HTML, VC++, PowerBuilder, etc. QTP supports VB, .NET, Java, D2K, Delphi, Siebel, HTML, VC++, PowerBuilder, SAP, PeopleSoft, Oracle, multimedia, and XML, etc. WinRunner and QTP both work on Windows only; for UNIX and Linux the "XRunner" tool is used in place of WinRunner, and for QTP no such tool is present.

Q. Why are companies moving to QTP instead of using WinRunner? Does QTP support applications which WinRunner cannot support?
QTP supports many technologies that WinRunner does not, such as SAP, Siebel, Oracle, mainframes, PeopleSoft, etc.

Q. What is an automation framework? Which automation framework are you using, and why?
Framework: A framework is a collection of predefined classes and functions which can be reused. It is developed only when the application under test is in a continuous development process, i.e., there will be subsequent releases of the AUT. We declare all the objects at one location across multiple files, and the business logic (i.e., the classes and functions) is placed in one place. Whoever wants to use them can go and use these classes and methods; the only thing they need to do is write the new test cases. There is no hard and fast rule for using a specific automation framework; it all depends on your project needs. Here is some information on the same:
A data-driven approach is suitable for applications that have limited functionality but a large number of variations in terms of test data.
A functional framework is suitable for applications that have a variety of functionality but limited variations in terms of test data.
A hybrid test automation framework is suitable for applications that have a variety of functionality and a larger number of variations in terms of test data.
A record, enhance, and play back methodology is suitable for converting small to medium-size manual scripts into equivalent automation scripts on a one-to-one basis.

Q. On what basis do we select test cases to automate?
Usually, we automate test cases if the test is:
1. Reusable
2. Repeatable
3. Data driven (for multiple sets of data)
4. Part of regression

Q. Are reusable test cases a big plus of automated testing, and why?
Yes. Because test cases are reusable, before automating them you only need to prepare the test data and call that data into your script. The tester can then prepare data easily per test case, which is the reason reusable test cases are most useful at the time of automation.

Q. What are the types of testing we cannot do using automation tools?
Tests that cannot be sensibly carried out via automation are those relating to subjective measures of quality, for example: when is an image clear enough, does something look good, is the workflow ergonomic?

Q. How do you open an application through scripting in QTP? Give the syntax and an example.
1) InvokeApplication "<path of the application>", e.g. InvokeApplication "C:\Program Files\Internet Explorer\iexplore.exe"
2) SystemUtil.Run "<application path>", e.g. SystemUtil.Run "C:\Program Files\Internet Explorer\iexplore.exe"

Q. What testing activities may you want to automate in a project?


Below is a list of tests/activities you may want to automate:
1) Tests that need to be run for every build of the application or website (sanity testing)
2) Tests that use multiple data sets for the same set of actions (data-driven testing)
3) Identical tests that need to be executed using different browsers
4) Mission-critical pages
5) Transactions with pages which do not change in the short term
6) Tests which are of a repetitive nature
Below is a list of activities which are poor candidates for automation:
1) Usability testing
2) One-time testing
3) Ad-hoc/random testing based on knowledge of the application or website

Q. How do you find out whether a tool works well with your existing system?
1. Does the tool support our tools and technologies?
2. Is the tool compatible with third-party tools used by our application?
3. Tool cost vs. project cost
4. Platform support

Q. What are test metrics?
Below are some examples of test metrics:
Number of remarks
Number of defects
Defect severity
Time to find a defect
Test coverage
Test case effectiveness

Q. What are the scripting languages used in WinRunner, LoadRunner, QTP, TestDirector, ClearQuest, and Rational Robot?
WinRunner - TSL (Test Script Language)
QTP - VBScript
TestDirector - TSL script
ClearQuest - a defect tracking tool; an English-like language is used
LoadRunner - VuGen scripts
Rational Robot - SQABasic language

Q. What criteria do you use when determining when to automate a test or leave it manual?
Once all the initial bugs are fixed and the application has reached its maturity level, we go for automation for regression testing, after we have completed manual testing.

Q. What automated tools are you familiar with?
I am familiar with WinRunner, LoadRunner, QTP, and Rational Robot.

Q. What types of scripting techniques for test automation do you know?


VBScript, which is a very powerful language and has many resources available compared to TSL.

Q. What testing activities may you want to automate?
Automation is a costly activity. Once the application is stable, we can automate those test cases/functionalities which are very critical and need to be regressed in future releases. Among the test cases written for the AUT, we should automate only P1 and P2 test cases, which will cover end-to-end functionality.

Q. What are the main benefits of test automation?
1. Manual intervention for repeated tasks can be reduced.
2. The effort required to run tests manually can be reduced.
3. Access to the test artifacts can be made easy.

Q. What are the main attributes of test automation?
1. Functional testing
2. GUI testing
3. Back-end testing

Black Box Testing Questions


Q. What is Black box testing? Black box testing is also known as functional testing. This is a software testing technique whereby the internal workings of the item being tested are not known by the tester. For example, in a black box test on a software design the tester only knows the inputs and what the expected outcomes should be and not how the program arrives at those outputs. The tester does not ever examine the programming code and does not need any further knowledge of the program other than its specifications.

Q. What are the advantages of Black box testing? The test is unbiased because the designer and the tester are independent of each other. The tester does not need knowledge of any specific programming languages. The test is done from the point of view of the user, not the designer. Test cases can be designed as soon as the specifications are complete.

Q. What are the disadvantages of Black box testing? The test can be redundant if the software designer has already run a test case. The test cases are difficult to design. Testing every possible input stream is unrealistic because it would take an inordinate amount of time; therefore, many program paths will go untested.

Q. Give real time examples of Black box testing.

In this technique, we do not use the code to determine a test suite; rather, knowing the problem that we're trying to solve, we come up with four types of test data:

1. Easy-to-compute data
2. Typical data
3. Boundary / extreme data
4. Bogus data

For example, suppose we are testing a function that uses the quadratic formula to determine the two roots of a second-degree polynomial ax² + bx + c. For simplicity, assume that we are going to work only with real numbers, and print an error message if it turns out that the two roots are complex numbers (numbers involving the square root of a negative number). We can come up with test data for each of the four cases, based on the value of the polynomial's discriminant (b² - 4ac):

Easy data (discriminant is a perfect square):
a   b   c   Roots
1   2   1   -1, -1
1   3   2   -1, -2

Typical data (discriminant is positive):
a   b   c   Roots
1   4   1   -3.73205, -0.267949
2   4   1   -1.70711, -0.292893

Boundary / extreme data (discriminant is zero):
a   b   c   Roots
2   -4  2   1, 1
2   -8  8   2, 2

Bogus data (discriminant is negative, or a is zero):
a   b   c   Roots
1   1   1   square root of a negative number
0   -   -   division by zero (a is zero; b and c arbitrary)

As with glass-box testing, you should test your code with each set of test data. If the answers match, then your code passes the black-box test.
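The four categories above can be sketched as one test run. This is a minimal illustration (the function name and error-handling choices are our own, not prescribed by the example):

```python
import math

def quadratic_roots(a, b, c):
    # Real roots of a*x^2 + b*x + c; errors signal the "bogus data" cases.
    if a == 0:
        raise ZeroDivisionError("a must be non-zero")
    d = b * b - 4 * a * c
    if d < 0:
        raise ValueError("complex roots")
    return ((-b - math.sqrt(d)) / (2 * a), (-b + math.sqrt(d)) / (2 * a))

# Easy data (perfect-square discriminant)
assert quadratic_roots(1, 2, 1) == (-1.0, -1.0)
# Boundary / extreme data (discriminant exactly zero)
assert quadratic_roots(2, -4, 2) == (1.0, 1.0)
# Typical data (positive discriminant): compare with a tolerance
lo, hi = quadratic_roots(1, 4, 1)
assert abs(lo - -3.73205) < 1e-5 and abs(hi - -0.267949) < 1e-5
# Bogus data (negative discriminant, and a == 0)
try:
    quadratic_roots(1, 1, 1)
except ValueError:
    pass
try:
    quadratic_roots(0, 1, 1)
except ZeroDivisionError:
    pass
```

Note the typical-data checks use a tolerance rather than exact equality: the tabulated roots are rounded decimals, which is itself a small lesson in designing black-box oracles for floating-point outputs.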

Q. Describe the Black box testing techniques.
Equivalence partitioning
Boundary value analysis
State transition tables
Decision table testing
Pairwise testing
Error guessing

Details of each type of testing are given below. (Note: some of the definitions and examples have been borrowed from Wikipedia.)

Equivalence partitioning: Equivalence partitioning is a black box testing technique with the goals:
1. To reduce the number of test cases to a necessary minimum.
2. To select the right test cases to cover all possible scenarios.
Although in rare cases equivalence partitioning is also applied to the outputs of a software component, typically it is applied to the inputs of a tested component. The equivalence partitions are usually derived from the specification of the component's behavior. An input has certain ranges which are valid and other ranges which are invalid. This is best explained by the following example of a function which takes the parameter "month" of a date. The valid range for the month is 1 to 12, standing for January to December. This valid range is called a partition. In this example there are two further partitions of invalid ranges. The first invalid partition would be <= 0 and the second invalid partition would be >= 13:

    ... -2 -1  0 |  1 .............. 12 |  13  14  15 ...
    -------------|----------------------|----------------
      invalid    |        valid         |    invalid
     partition 1 |      partition       |   partition 2

The testing theory related to equivalence partitioning says that only one test case from each partition is needed to evaluate the behavior of the program for the related partition. In other words, it is sufficient to select one test case out of each partition to check the behavior of the program. Using more, or even all, test cases of a partition will not find new faults in the program. The values within one partition are considered to be "equivalent". Thus the number of test cases can be reduced considerably. An additional effect of applying this technique is that you also find the so-called "dirty" test cases. An inexperienced tester may be tempted to use as test cases the input data 1 to 12 for the month and forget to select some values from the invalid partitions. This would lead to a huge number of unnecessary test cases on the one hand, and a lack of test cases for the dirty ranges on the other hand.
The tendency is to relate equivalence partitioning to so-called black box testing, which is strictly checking a software component at its interface, without consideration of the internal structures of the software. But on closer examination there are cases where it applies to grey box testing as well. Imagine an interface to a component which has a valid range between 1 and 12, as in the example above. Internally, however, the function may differentiate between values 1 to 6 and values 7 to 12. Depending on the input value, the software will internally run through different paths to perform slightly different actions. Regarding the input and output interfaces of the component, this difference will not be noticed; however, in grey-box testing you would like to make sure that both paths are examined. To achieve this it is necessary to introduce additional equivalence partitions which would not be needed for black-box testing. For this example these would be:

    ... -2 -1  0 |  1 ..... 6 |  7 ..... 12 |  13  14  15 ...
    -------------|------------|-------------|----------------
      invalid    |     P1     |     P2      |    invalid
     partition 1 |    valid partitions      |   partition 2

To check the expected results you would need to evaluate internal intermediate values rather than the output interface. Equivalence partitioning is not a stand-alone method for determining test cases. It has to be supplemented by boundary value analysis: having determined the partitions of possible inputs, the method of boundary value analysis is applied to select the most effective test cases out of these partitions.
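The month example can be sketched as code: one representative value per partition suffices. The validator function is a stand-in for the component under test:

```python
# Stand-in for the component under test: valid months are 1..12.
def is_valid_month(month):
    return 1 <= month <= 12

# One representative value per equivalence partition; any other value
# in the same partition is considered "equivalent" and adds no coverage.
partitions = [
    ("invalid partition 1 (<= 0)",  -5, False),
    ("valid partition (1..12)",      6, True),
    ("invalid partition 2 (>= 13)", 20, False),
]

for name, value, expected in partitions:
    assert is_valid_month(value) == expected, name
```

Three test cases cover all three partitions; testing 1 through 12 exhaustively would add eleven redundant cases while still missing both dirty partitions.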

Boundary value analysis: Boundary value analysis is a black box testing technique to determine test cases covering off-by-one errors. The boundaries of software component input ranges are areas of frequent problems Testing experience has shown that especially the boundaries of input ranges to a software component are liable to defects. A programmer implement e.g. the range 1 to 12 at an input, which e.g. stands for the month January to December in a date, has in his code a line checking for this range. This may look like: if (month > 0 && month < 13) But a common programming error may check a wrong range e.g. starting the range at 0 by writing: if (month >= 0 && month < 13)

For more complex range checks in a program this may be a problem which is not so easily spotted as in the above simple example.

Applying boundary value analysis: To set up boundary value analysis test cases the tester first has to determine which boundaries are at the interface of a software component. This has to be done by applying the equivalence partitioning technique. Boundary value analysis and equivalence partitioning are inevitably linked together. For the example of the month a date would have the following partitions: ... -2 -1 0 1 .............. 12 13 14 15 ..... --------------|-------------------|--------------------invalid partition 1 valid partition invalid partition 2 Applying boundary value analysis a test case at each side of the boundary between two partitions has to be selected. In the above example this would be 0 and 1 for the lower boundary as well as 12 and 13 for the upper boundary. Each of these pairs consists of a "clean" and a "negative" test case. A "clean" test case should give a valid operation result of the program. A "negative" test case should lead to a correct and specified input error treatment such as the limiting of values, the usage of a substitute value, or in case of a program with a user interface, it has to lead to warning and request to enter correct data. The boundary value analysis can have 6 test cases: n, n-1, and n+1 for the upper limit; and n, n-1, and n+1 for the lower limit. A further set of boundaries has to be considered when test cases are set up. A solid testing strategy also has to consider the natural boundaries of the data types used in the program. If working with signed values, for example, this may be the range around zero (-1, 0, +1). Similar to the typical range check faults, there tend to be weaknesses in programs in this range. e.g. this could be a division by zero problem where a zero value may occur although the programmer always thought the range started at 1. It could be a sign problem when a value turns out to be negative in some rare cases, although the programmer always expected it to be positive. 
Even if this critical natural boundary lies clearly within an equivalence partition, it should lead to additional test cases checking the range around zero. A further natural boundary is the lower and upper limit of the data type itself. For example, an unsigned 8-bit value has the range 0 to 255; a good test strategy would also check how the program reacts to inputs of -1 and 0 as well as 255 and 256. The tendency is to relate boundary value analysis more to so-called black box testing, which strictly checks a software component at its interfaces without consideration of the internal structures of the software. But looking closer at the subject, there are cases where it applies to white box testing as well. After determining the necessary test cases with equivalence partitioning and subsequent boundary value analysis, it is necessary to define the combinations of test cases when there are multiple inputs to a software component.
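The six-test-case rule described above can be sketched in code. A minimal Python sketch, assuming a hypothetical month field whose valid range is 1 to 12 (the function names are illustrative, not from any library):

```python
# Boundary value analysis sketch for a hypothetical month field (valid: 1..12).
# For each boundary n we test n-1, n, and n+1, giving "clean" and "negative" cases.

def boundary_values(lower, upper):
    """Return the BVA test inputs for a closed integer range [lower, upper]."""
    return sorted({lower - 1, lower, lower + 1, upper - 1, upper, upper + 1})

def is_valid_month(month):
    # The (hypothetical) component under test: accepts 1..12 only.
    return 1 <= month <= 12

for value in boundary_values(1, 12):
    kind = "clean" if is_valid_month(value) else "negative"
    print(f"input={value:3d} -> {kind} test case")
```

Running this lists 0 and 13 as negative cases and 1, 2, 11, 12 as clean cases, matching the partition diagram above.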

State Transition table: In automata theory and sequential logic, a state transition table is a table showing what state (or states, in the case of a nondeterministic finite automaton) a finite semiautomaton or finite state machine will move to, based on the current state and other inputs. A state table is essentially a truth table in which some of the inputs are the current state, and the outputs include the next state along with other outputs. A state table is one of many ways to specify a state machine; other ways include a state diagram and a characteristic equation.

One-dimensional state tables: Also called characteristic tables, single-dimension state tables are much more like truth tables than the two-dimensional versions. Inputs are usually placed on the left and separated from the outputs, which are on the right. The outputs represent the next state of the machine. Here is a simple example of a state machine with two states and two combinatorial inputs:

    A  B  Current State  Next State  Output
    0  0  S1             S2          1
    0  0  S2             S1          0
    0  1  S1             S2          0
    0  1  S2             S2          1
    1  0  S1             S1          1
    1  0  S2             S1          1
    1  1  S1             S1          1
    1  1  S2             S2          0

S1 and S2 would most likely represent the single bits 0 and 1, since a single bit can only have two states.

Two-dimensional state tables: State transition tables are typically two-dimensional, and there are two common forms for arranging them. In the first, the vertical (or horizontal) dimension indicates current states, the horizontal (or vertical) dimension indicates events, and the cells (row/column intersections) in the table contain the next state if the event happens (and possibly the action linked to this state transition).

State Transition Table

    Events \ State   S1      S2      ...   Sm
    E1               -       -       ...   -
    E2               Ay/Sj   -       ...   -
    ...              ...     ...     ...   ...
    En               Ax/Si   -       ...   Az/Sk

(S: state, E: event, A: action, -: illegal transition)

In the second form, the vertical (or horizontal) dimension indicates current states, the horizontal (or vertical) dimension indicates next states, and the row/column intersections contain the event which will lead to that particular next state.

State Transition Table

    Current \ Next   S1      S2      ...   Sm
    S1               -       Ay/Ej   ...   -
    S2               Az/Ek   -       ...   -
    ...              ...     ...     ...   ...
    Sm               -       Ax/Ei   ...   -

(S: state, E: event, A: action, -: impossible transition)
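The two-dimensional event/state form maps directly onto code. A minimal Python sketch; the states, events, and actions (S1, S2, E1, E2, Ay, Az) are hypothetical placeholders, and missing dictionary entries play the role of the "-" illegal-transition cells:

```python
# Two-dimensional state transition table as a dict keyed by (state, event).
# Each cell holds (action, next_state); absent keys are illegal transitions.

TABLE = {
    ("S1", "E2"): ("Ay", "S2"),   # in S1, event E2 runs action Ay and moves to S2
    ("S2", "E1"): ("Az", "S1"),   # in S2, event E1 runs action Az and moves to S1
}

def step(state, event):
    """Look up one transition; raise on an illegal (state, event) cell."""
    try:
        action, next_state = TABLE[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: event {event} in state {state}")
    return action, next_state

action, state = step("S1", "E2")
print(action, state)   # Ay S2
```

A tester can walk such a table row by row, deriving one test case per filled cell plus negative tests for the "-" cells.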

Decision tables: Decision tables are a precise yet compact way to model complicated logic. Like if-then-else and switch-case statements, decision tables associate conditions with actions to perform. But unlike the control structures found in traditional programming languages, decision tables can associate many independent conditions with several actions in an elegant way. Decision tables are typically divided into four quadrants, as shown below:

    Conditions  |  Condition alternatives
    ------------+------------------------
    Actions     |  Action entries

Each decision corresponds to a variable, relation, or predicate whose possible values are listed among the condition alternatives. Each action is a procedure or operation to perform, and the entries specify whether (or in what order) the action is to be performed for the set of condition alternatives the entry corresponds to. Many decision tables include in their condition alternatives the don't-care symbol, a hyphen. Using don't-cares can simplify decision tables, especially when a given condition has little influence on the actions to be performed. In some cases, entire conditions thought to be important initially are found to be irrelevant when none of their alternatives influence which actions are performed.

Aside from the basic four-quadrant structure, decision tables vary widely in the way the condition alternatives and action entries are represented. Some decision tables use simple true/false values to represent the alternatives to a condition (akin to if-then-else), other tables may use numbered alternatives (akin to switch-case), and some tables even use fuzzy logic or probabilistic representations for condition alternatives. In a similar way, action entries can simply represent whether an action is to be performed (check the actions to perform) or, in more advanced decision tables, the sequencing of actions to perform (number the actions to perform).

Pairwise testing: All-pairs testing or pairwise testing is a combinatorial software testing method that, for each pair of input parameters to a system (typically a software algorithm), tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by "parallelizing" the tests of parameter pairs. The number of tests is typically O(nm), where n and m are the number of possibilities for each of the two parameters with the most choices.

The reasoning behind all-pairs testing is this: the simplest bugs in a program are generally triggered by a single input parameter. The next simplest category of bugs consists of those dependent on interactions between pairs of parameters, which can be caught with all-pairs testing. Bugs involving interactions between three or more parameters are progressively less common, whilst at the same time being progressively more expensive to find by exhaustive testing, which has as its limit the exhaustive testing of all possible inputs. Many testing methods regard all-pairs testing of a system or subsystem as a reasonable cost-benefit compromise between often computationally infeasible higher-order combinatorial testing methods and less thorough methods which fail to exercise all possible pairs of parameters. Because no testing technique can find all bugs, all-pairs testing is typically used together with other quality assurance techniques such as unit testing, symbolic execution, fuzz testing, and code review.

Error guessing: Error guessing is a test case design technique in which the experience of the tester is used to postulate what faults might occur, and to design tests specifically to expose them. It is a test data selection technique: the selection criterion is to pick values that seem likely to cause errors.
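The all-pairs idea above can be illustrated with a small greedy selection over the full Cartesian product. This is a naive sketch that works for small inputs (real pairwise tools use smarter constructions), and the parameter names and values below are hypothetical:

```python
from itertools import combinations, product

# Greedy all-pairs sketch: repeatedly pick the full parameter combination that
# covers the most not-yet-covered (parameter, value) pairs.

params = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os":      ["Windows", "Linux", "macOS"],
    "conn":    ["DSL", "Dialup", "Fiber"],
}
names = list(params)

def pairs_of(row):
    # Every pair of (parameter, value) assignments exercised by one test row.
    return {((names[i], row[i]), (names[j], row[j]))
            for i, j in combinations(range(len(names)), 2)}

# All pairs that must be covered at least once.
uncovered = set()
for row in product(*params.values()):
    uncovered |= pairs_of(row)

tests = []
while uncovered:
    best = max(product(*params.values()),
               key=lambda r: len(pairs_of(r) & uncovered))
    tests.append(best)
    uncovered -= pairs_of(best)

print(f"{len(tests)} pairwise tests instead of {3 * 3 * 3} exhaustive tests")
```

For three parameters with three values each, exhaustive testing needs 27 cases, while the greedy pass covers every pair with far fewer, illustrating the O(nm) claim above.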

Q. What is the difference between client server testing and web server testing?

Web systems are one type of client/server. The client is the browser; the server is whatever is on the back end (database, proxy, mirror, etc.). This differs from so-called traditional client/server in a few ways, but both systems are a type of client/server: a certain client connects via some protocol with a server (or set of servers). In a strict reading of the question, testing a web server specifically means testing the functionality and performance of the web server itself. (For example, I might test whether HTTP Keep-Alives are enabled and working, whether the logging feature works, certain filters such as ISAPI, or general characteristics such as the load the server can take.) In the case of client server testing, you might be doing the same general things to some other type of server, such as a database server. Also note that you can test the server directly in some cases, and at other times test it via the interaction of a client. You can also test connectivity in both. (Anytime you have a client and a server, there has to be connectivity between them, or the system would be less than useful.) On the web you are looking at the HTTP protocol and perhaps FTP, depending on your site and whether your server is configured for FTP connections, as well as general TCP/IP concerns. In traditional client/server you may be looking at sockets, Telnet, NNTP, etc.

Q. What is Impact analysis? How do you do impact analysis in your project?

Impact analysis means that while doing regression testing we check that the bug fixes are working properly, and that in fixing these bugs the other components have not been disturbed and still work as per their requirements.

Q. How to test a website using manual testing techniques?
Web Testing: During testing of websites the following scenarios should be considered:

Functionality
Performance
Usability
Server side interface
Client side compatibility
Security

Functionality: In testing the functionality of web sites, the following should be tested:

Links (internal links, external links, mail links, broken links)
Forms

Field validation
Functional chart
Error messages for wrong input
Optional and mandatory fields

Database: Testing is done on database integrity.
Cookies: Testing is done on the client system side, on the temporary internet files.
Connection speed: Tested over various networks like dial-up, ISDN, etc.
Load: What is the number of users per unit time? Check for peak loads and how the system behaves, and for large amounts of data accessed by users.
Stress: Continuous load; performance of memory, CPU, file handling, etc.

Usability: Usability testing is the process by which the human-computer interaction characteristics of a system are measured and weaknesses are identified for correction. Usability can be defined as the degree to which a given piece of software assists the person sitting at the keyboard to accomplish a task, as opposed to becoming an additional impediment to such accomplishment. The broad goal of usable systems is often assessed using several criteria:

Ease of learning
Navigation
Subjective user satisfaction
General appearance

Server side interface: In web testing the server side interface should be tested. This is done by verifying that communication is done properly. Compatibility of the server with software, hardware, network, and database should be tested. Client side compatibility is also tested on various platforms, using various browsers, etc.

Security: The primary reason for testing the security of a web site is to identify potential vulnerabilities and subsequently repair them. The following types of testing are described in this section:

Network scanning
Vulnerability scanning
Password cracking
Log review

Integrity checkers
Virus detection

Performance Testing: Performance testing is a rigorous usability evaluation of a working system under realistic conditions to identify usability problems and to compare measures such as success rate, task time, and user satisfaction with requirements. The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing. To conduct performance testing is to engage in a carefully controlled process of measurement and analysis. Ideally, the software under test is already stable enough that this process can proceed smoothly. A clearly defined set of expectations is essential for meaningful performance testing. For example, for a web application you need to know at least two things: the expected load in terms of concurrent users or HTTP connections, and the acceptable response time.

Load testing: Load testing is usually defined as the process of exercising the system under test by feeding it the largest tasks it can operate with. Load testing is sometimes called volume testing, or longevity/endurance testing. Examples of volume testing: testing a word processor by editing a very large document; testing a printer by sending it a very large job; testing a mail server with thousands of user mailboxes. An example of longevity/endurance testing: testing a client-server application by running the client in a loop against the server over an extended period of time.

Stress testing: Stress testing is a form of testing used to determine the stability of a given system or entity. It is designed to test the software with abnormal situations. Stress testing attempts to find the limits at which the system will fail through an abnormal quantity or frequency of inputs, and tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing).
The main purpose behind this is to make sure that the system fails and recovers gracefully; this quality is known as recoverability. The point is not simply to break the system, but to observe how it reacts to failure. Stress testing observes the following: Does it save its state, or does it crash suddenly? Does it hang and freeze, or does it fail gracefully? Is it able to recover from the last good state on restart?

Compatibility Testing: Testing to ensure compatibility of an application or web site with different browsers, operating systems, and hardware platforms. Different versions, configurations, display resolutions, and internet connection speeds can all impact the behavior of the product and introduce costly and embarrassing bugs. We test for compatibility using real test environments: that is, testing how the system performs in a particular software, hardware, or network environment. Compatibility testing can be performed manually or driven by automated functional or regression test cases. The purpose of compatibility testing is to reveal issues related to the product's interaction with other software as well as hardware. Product compatibility is evaluated by first identifying the hardware/software/browser components the product is designed to support. Then a hardware/software/browser matrix is designed that indicates the configurations on which the product will be tested. Then, with input from the client, a testing script is designed that is sufficient to evaluate compatibility between the product and the matrix. Finally, the script is executed against the matrix, and any anomalies are investigated to determine exactly where the incompatibility lies. Some typical compatibility tests include testing your application: on various client hardware configurations; using different memory sizes and hard drive space; on various operating systems; in different network environments; with different printers and peripherals (e.g. zip drives, USB devices, etc.).

Q. What is the difference between Test Strategy and Test Plan? Which comes first?

The test strategy is the high-level document describing the overall approach for testing. The test plan is the document prepared by the test lead based on the test strategy. Hence the test strategy comes first, and the test plan is prepared from it.

Q. What is the significance of doing Regression testing?

Regression testing is done to check that bug fixes are not affecting or disturbing the other existing functionalities: to ensure that newly added functionality, modified existing functionality, or a developer's bug fix does not raise any new bug or cause any other side effect. It also ensures that already passed test cases do not raise any new bugs.

Q. What are the different ways to check a date field in a website?

There are different ways. Some are mentioned below:
1) Check the field width for minimum and maximum length.
2) If the field only takes numeric values, check that it accepts only numeric input and no other type.
3) If it takes a date or time, check for other formats.
4) In the same way as numeric, check it for character and alphanumeric input.
5) Most importantly, if you click and hit the enter key, the page may give a JavaScript error, which is a big fault on the page.
6) Check the field for a null value.

The date field can be checked in different ways. Positive testing: first we enter the date in the given format.

Negative testing: We enter the date in an invalid format. For example, if we enter a date like 25/01/2007 where a different format is expected, it should display an error message. We also check numeric versus text input.

Q. Give an example of a high severity, low priority bug?

A page is rarely accessed, or some activity is performed rarely, but that activity outputs some important data incorrectly or corrupts the data. This would be a bug of high severity and low priority.

Q. If a project needs to be released in a very short time, how do you carry out testing of the application?

Use risk analysis to determine where testing should be focused. Since it is rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate for most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:

Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?
Which functionality has the largest financial impact on users?
Which aspects of the application are most important to the customer?
Which aspects of the application can be tested early in the development cycle?
Which parts of the code are most complex, and thus most subject to errors?
Which parts of the application were developed in rush or panic mode?
Which aspects of similar/related previous projects caused problems?
Which aspects of similar/related previous projects had large maintenance expenses?
Which parts of the requirements and design are unclear or poorly thought out?
What do the developers think are the highest-risk aspects of the application?
What kinds of problems would cause the worst publicity?
What kinds of problems would cause the most customer service complaints?
What kinds of tests could easily cover multiple functionalities?
Which tests will have the best high-risk-coverage to time-required ratio?

Q. Where were you involved in the testing life cycle? What types of tests did you perform?

Generally test engineers are involved in the entire test life cycle, i.e. test planning, test case preparation, execution, and reporting. Typical test types performed are system testing, regression testing, ad-hoc testing, etc.

Q. What is the testing process flow in your organization?

The testing process flows as follows: quality assurance unit, quality assurance manager, test lead, test engineer.

Q. Who prepares the use cases?

In most companies, except the small ones, business analysts prepare the use cases. In a small company the business analyst prepares them along with the team lead.

Q. What testing methodologies have you used to develop test cases?

I have used the following testing methodologies:
1. Boundary value analysis
2. Equivalence partitioning
3. Error guessing
4. Cause-effect graphing

Q. What is the difference between regression testing and re-testing?

Re-testing is done to verify that the previous issues/defects have been fixed and those functionalities are working fine. Regression testing is done on the neighboring areas of the features where changes/fixes have been made, to make sure that these fixes have not introduced any bugs in the neighboring areas which were working previously.

Q. Is automated testing better than manual testing? If so, why?

Automated testing and manual testing each have advantages as well as disadvantages.

Advantages of automation: it increases the efficiency of the testing process; it is reliable; it is flexible; it minimizes human error.

Disadvantages of automation: the tools must be compatible with our development or deployment tools; it needs a lot of time initially; and if the requirements are changing continuously, automation is not suitable.

Manual testing: if the requirements are changing continuously, manual testing is suitable. Only once the build is stable with manual testing do we go for automation.

Disadvantages of manual testing: it needs a lot of time, and some types of testing cannot be done manually, e.g. performance testing.

Q. What is the exact difference between a product and a project? Explain with an example.

Project: A project is developed for a particular client; the requirements are defined by the client. Example: a shirt we get stitched by a tailor as per our own specifications is a project.

Product: A product is developed for the market; the requirements are defined by the company itself by conducting a market survey. Example: a ready-made shirt, where the company decides on particular standard measurements and makes the product.

Q. Define Brain Storming and Cause Effect Graphing with examples.

Brainstorming: A learning technique involving open group discussion intended to expand the range of available ideas; in other words, a meeting to generate creative ideas. For example, at PEPSI Advertising, daily, weekly and bimonthly brainstorming sessions are held by various work groups within the firm, and the monthly IPower brainstorming meeting is attended by the entire agency staff. Brainstorming is a highly structured process to help generate ideas, based on the principle that you cannot generate and evaluate ideas at the same time. To use brainstorming, you must first gain agreement from the group to try brainstorming for a fixed interval (e.g. six minutes).

Cause-effect graphing: A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases by logically relating causes to effects. It has a beneficial side effect of pointing out incompleteness and ambiguities in specifications.

Q. What is the need for the priority of a bug? How is it different from the severity of a bug?

Severity reflects the seriousness of the bug from the application or technical point of view: the impact of that bug on the application. Priority refers to which bug should be rectified first: the importance of resolving the bug. Usually if the severity is high the priority is also high, but this is not always the case: a high-severity bug may not have high priority, and a high-priority bug may not have high severity. Severity is decided by the tester, whereas priority is decided by the developers. Which bug needs to be resolved/fixed first is known through priority, not severity. So we need both severity and priority.

Q. What will you do if a bug you found is not accepted by the developer, who says it is not reproducible? Note: the developer is at the onsite location.

Once again we check the condition and all its preconditions. Then we attach screenshots and log files that give ample information about what exactly happens when that particular defect is reproduced. Sometimes a bug is not reproducible because of a different environment: the development team may be using one environment while you are using a different one. In this situation, check the environment against the baseline documents (the functional documents). If the environment we are using is correct, we raise it as a defect again, and this time give the developer the exact access details of the setup where we are able to reproduce the issue, so that he can try to reproduce it on our setup.

Q. What is the difference between a two-tier and a three-tier application?

Client-server applications are 2-tier applications. In these, the front end or client is connected to the database server through a Data Source Name; the front end is the monitoring level. Web-based architecture is a 3-tier application. In this, the browser is connected to a web server through TCP/IP, and the web server is connected to the database server; the browser is the monitoring level. In general, black box testers concentrate on the monitoring level of either type of application.

In a 2-tier architecture, all the business logic is stored in the clients and the data is stored in the servers. If the user requests anything, the business logic is performed at the client, and the data is retrieved from the server (DB server). The problem is that if any business logic changes, we need to change the logic at each and every client. The best example is a supermarket with branches across a city: each branch has clients, so the business logic is stored in the clients, but the actual data is stored in the servers. If the supermarket wants to give a discount on some items, it needs to change the business logic, and for this it needs to go to each branch and change the business logic at each client. This is the disadvantage of the client/server architecture, and it is why the 3-tier architecture came into the picture.

In a 3-tier architecture, the business logic is stored on one server, and all the clients are dumb terminals. If a user requests anything, the request is first sent to the server; the server brings the data from the DB server and sends it to the client. For the example above, if the supermarket wants to give a discount, all the business logic is on the server, so it needs to be changed in one place, not at each client. This is the main advantage of the 3-tier architecture.

Q. What is the difference between a bug log and defect tracking?

A bug log is a document which maintains the information about bugs, whereas bug tracking is the process.

Q. Who will change the bug status to Deferred?

A bug is in Open status while the developer is working on it, and Fixed after the developer completes the work. If it is not fixed properly, the tester puts it in Reopen; after the bug is fixed properly it moves to the Closed state. If for some reason the bug is not to be fixed in the current release and the management team decides to fix it in the next release, then the developer/development manager changes the state to Deferred.

Q. What is a bug, defect, issue, and error?

Bug: A fault or defect in a system or machine; it is usually identified by the tester.
Defect: An imperfection in a device or machine. When the project is received for the analysis phase, some requirements may be missed or misunderstood; most of the time the defect comes with the project itself.
Issue: A major problem that will impede the progress of the project and cannot be resolved by the project manager and project team without outside help.
Error: When anything goes wrong in the project from the development side it is called an error; most of the time this is found by the developer.

Q. What is the difference between functional testing and integration testing?

Functional testing is testing the whole functionality of the system or application against the functional specifications. Integration testing means testing the functionality of integrated modules when two individual modules are integrated; for this we use the top-down and bottom-up approaches.

Q. What types of testing do you perform in your organization while doing System Testing?

Functional testing
User interface testing
Usability testing
Compatibility testing
Model based testing
Error exit testing
User help testing
Security testing
Capacity testing
Performance testing
Sanity testing
Regression testing
Reliability testing

Recovery testing
Installation testing
Maintenance testing
Accessibility testing, including compliance with:
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)

Q. What is the main use of preparing a Traceability Matrix, and what is its real-time usage?

A traceability matrix is created by associating requirements with the work products that satisfy them. Tests are associated with the requirements on which they are based and with the product tested to meet the requirement. A traceability matrix is a report from the requirements database or repository.

Q. How can you do the following: 1) Usability testing 2) Scalability testing?

Usability testing: testing the ease with which users can learn and use a product.
Scalability testing: a type of web testing which helps in evaluating and improving the capacity of the website.

Q. What do you mean by positive and negative testing, and what is the difference between them? Explain with an example.

Positive testing: testing the application functionality with valid inputs and verifying that the output is correct. This is testing aimed at showing that the software works, also called "test to pass".
Negative testing: testing the application functionality with invalid inputs and verifying the output. This is testing aimed at showing where the software doesn't work, also known as "test to fail". Boundary value analysis is the best example of negative testing.
The difference is in how the application behaves when we enter invalid inputs: if it accepts invalid input, the application's functionality is wrong.

Q. What is a change request, and how do you use it?

A change request is part of the overall software development life cycle where a request is made to the board for approving a particular change in a particular area of the application.

This change could be a new development, an enhancement, or a fix to a defect. Change requests are controlled by the Change Control Board (CCB). If any changes are required by the client after we start the project, they have to come through the CCB, which has to approve them. The CCB has full rights to accept or reject a change based on the project schedule and cost.

Q. What is risk analysis? What type of risk analysis did you do in your project?

Risk analysis: a systematic use of available information to determine how often specified and unspecified events may occur and the magnitude of their likely consequences. It is the procedure to identify threats and vulnerabilities, analyze them to ascertain the exposures, and highlight how the impact can be eliminated or reduced. Types: 1. Quantitative risk analysis 2. Qualitative risk analysis.

Q. What is the bug life cycle?

Below are the different states of a bug in the bug life cycle:
New: when the tester reports a new defect.
Open: when the developer accepts that it is a bug. If the developer rejects the defect, the status is changed to Rejected.
Fixed: when the developer makes changes to the code to rectify the bug.
Closed/Reopen: when the tester tests it again. If the expected result shows up, the status is changed to Closed; if the problem persists, it is Reopened.

Q. What is the Deferred status in the defect life cycle?

Deferred status means the developer accepted the bug, but it is scheduled to be rectified in the next release.

Q. Do you use any automation tool for smoke testing?

Yes, we use an automation tool for smoke testing.

Q. What is verification and validation?

Verification is static; no code is executed (say, analysis of requirements, etc.). Validation is dynamic; code is executed with the scenarios present in the test cases.

Q. What is Ad-hoc testing?

Ad-hoc testing is a testing process where any area of the application is randomly picked up and tested. There is no planning done for this type of testing.
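The bug life cycle described above can be sketched as a tiny state machine. This is a Python sketch only; the exact set of allowed transitions is an assumption based on the states listed, not a standard:

```python
# Bug life cycle as allowed status transitions. The transition set below is a
# hypothetical reading of the usual New/Open/Fixed/Closed/Reopen/Deferred flow.

TRANSITIONS = {
    "New":      {"Open", "Rejected"},   # triage: accept or reject the report
    "Open":     {"Fixed", "Deferred"},  # developer fixes, or management defers
    "Fixed":    {"Closed", "Reopen"},   # tester verifies the fix
    "Reopen":   {"Fixed"},              # fix attempted again
    "Deferred": {"Open"},               # picked up in a later release
}

def move(status, new_status):
    """Advance a bug's status, rejecting transitions the life cycle forbids."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"cannot move bug from {status} to {new_status}")
    return new_status

status = "New"
for nxt in ("Open", "Fixed", "Reopen", "Fixed", "Closed"):
    status = move(status, nxt)
print(status)   # Closed
```

Defect-tracking tools enforce a table like this, which is why only certain roles (e.g. the development manager for Deferred) can make certain moves.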

Q. What is meant by release notes?

Release notes are a document released along with the product which explains the product. It also lists the bugs that are in Deferred status and the known issues with the current release.

Q. Scalability testing comes under which type of testing?

Scalability testing comes under performance testing. Load testing and scalability testing are the same in terms of the testing performed but differ in the focus of the results.

Q. What is a hot fix?

A hot fix is a single, cumulative package that includes one or more files used to address a problem in a software product. Typically, hot fixes are made to address a specific customer situation and may not be distributed outside the customer organization. Hot-fix bugs generally have high priority, as they have been found at the customer's site.

Q. What is ACID testing?

ACID testing is related to testing a transaction:
A - Atomicity
C - Consistency
I - Isolation
D - Durability
This is mostly done in database testing.

Q. What is the main use of preparing a traceability matrix?

To cross-verify the prepared test cases and test scripts against the user requirements, and to monitor the changes and enhancements that occur during the development of the project. A traceability matrix is prepared in order to cross-check the test cases designed against each requirement, hence giving an opportunity to verify that all the requirements are covered in testing the application.
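Returning to the ACID testing question above: the "A" (atomicity) can be exercised with a quick database test. A Python sketch using the standard-library sqlite3 module; the accounts table and amounts are made up for illustration:

```python
import sqlite3

# Atomicity test sketch: a transfer that fails midway must leave both
# account balances unchanged (all-or-nothing).

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

try:
    with conn:  # opens a transaction; rolls back if the block raises
        conn.execute(
            "UPDATE accounts SET balance = balance - 70 WHERE name = 'alice'")
        raise RuntimeError("simulated crash mid-transfer")
except RuntimeError:
    pass

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)   # {'alice': 100, 'bob': 50} -- the partial update rolled back
```

Similar small probes can be written for isolation (two concurrent connections) and durability (reopen the database file after a commit).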

Functional Testing Questions


Q. What are the different possibilities for testing a text field? The different possibilities for testing a text field can be derived with the help of boundary value analysis (BVA). Assumption: the maximum length is 10.
1. Leave the field blank (length 0). If the requirement says the field is mandatory, this test should fail; if not, it should be allowed to be left blank.
2. Enter any text with length max-1 (9).
3. Enter any text with the maximum length (10).
4. Enter any text with length max+1 (11).
If there is a requirement for the text field to contain only A-Z, a-z and 0-9 values, then the below scenarios should also be considered:
1. It should give an error when a special character is entered.
2. It should give an error when a space is entered, etc.

Q. A page has four mandatory fields to be filled in before clicking Submit. What is the minimum number of test cases required to verify this? Explain each test case. The below test cases can be considered:
1. Enter data in all the mandatory fields and submit. It should not throw any error message.
2. Enter data in any two mandatory fields and submit. It should throw an error message.
3. Do not enter data in any of the fields and submit. It should throw an error message.
4. If the fields accept only numbers, enter numbers in the fields and submit. It should not throw an error message.
5. Enter alphabets in two fields and numbers in the other two fields. It should throw an error message.
6. If the fields do not accept special characters, enter such characters and submit.

Q. What should be done if there is a new build and there is not much time left to test? The best approach is to prioritize the tasks and the areas/modules to test. For example, the login window of an application has a very high priority because it authenticates the user and allows the user to access the application. Another example: if it is a database application, the user rights need to be checked in the login screen. Data storage also needs to be verified on priority by checking data addition, deletion, editing, etc.

Q. There are three text fields: the first should accept a string as input, the second should accept a float as input, and the third is for output. The output should be an integer. What are the possible test cases we can write for these text fields?
1. Verify that the first field accepts a string.
2. Verify that the first field does not accept anything other than text (such as pure numbers without any string content, null characters, special characters, etc.).
3. Verify that the second field accepts float values.
4. Verify that the second field does not accept anything other than a float.
5. Verify the output when all the above conditions are met.
6. Verify the output when some or all of the above conditions are not met.
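The boundary value analysis for the text field above (assumed maximum length 10, alphanumeric only) can be sketched in Python. `is_valid` is a toy stand-in for the field under test, not a real implementation.

```python
MAX_LEN = 10  # assumption carried over from the analysis above

def bva_lengths(max_len):
    """Boundary lengths from the analysis: 0, max-1, max, max+1."""
    return [0, max_len - 1, max_len, max_len + 1]

def is_valid(text, max_len=MAX_LEN, mandatory=True):
    """Toy validator standing in for the real field (an assumption):
    mandatory, at most max_len characters, A-Z/a-z/0-9 only."""
    if len(text) == 0:
        return not mandatory
    return len(text) <= max_len and all(c.isascii() and c.isalnum() for c in text)

for n in bva_lengths(MAX_LEN):
    print(n, is_valid("a" * n))  # lengths 9 and 10 pass; 0 and 11 fail
```

The same helper also covers the character-set scenarios: `is_valid("abc def")` and `is_valid("abc$")` both return `False`.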

Q. What is the difference between functional testing and functionality testing? Functional Testing: this is also called black box testing because no knowledge of the internal logic of the system is used to develop test cases. For example, if a certain function key should produce a specific result when pressed, a functional test validates this expectation by pressing the function key and observing the result. When performing functional tests, you will use validation techniques almost exclusively. Functionality Testing: it is part of functional testing: giving the input and checking the output, i.e. testing the application against specifications.

General Testing Questions


Q. What is the first test in the software testing process? A) Monkey testing B) Unit testing C) Static analysis D) None of the above. Answer: B. Unit testing is the first test in the testing process, though it is done by developers after the completion of coding.

Q. When does testing start? Testing starts once the requirements are complete. This is static testing: you are supposed to read the documents (requirements), and it is quite a common issue in the software industry that many requirements contradict other requirements. These can also be reported as bugs; however, they will be reviewed before being reported as defects.

Q. What is the part of QA and QC in the V model? The V model is a kind of SDLC. The QC (Quality Control) team tests the developed product for quality. It deals only with the product, in both static and dynamic testing. The QA (Quality Assurance) team works on the process and manages for better quality in the process. It deals with (reviews) everything, right from collecting requirements to delivery.

Q. What are the bugs we cannot find in black box testing? Bugs in the security settings of pages, or any other internal mistakes made in the code, cannot be found in black box testing.

Q. What are the Microsoft 6 rules? The Microsoft 6 rules are used to test the user interface. These are also called Microsoft Windows standards. They are as below:
- GUI objects are aligned in windows
- All defined text is visible on a GUI object
- Labels on GUI objects are capitalized
- Each label includes an underlined letter (mnemonics)
- Each window includes an OK button, a Cancel button, and a System menu

Q. What are the steps to test any software using an automation tool? First, you need to segregate the test cases that can be automated. Then, prepare test data as per the requirements of those test cases. Write reusable functions for operations that are used frequently in those test cases. Now, prepare the test scripts using those reusable functions, applying loops and conditions wherever necessary. The automation framework that is followed in the organization should be strictly followed throughout the process.

Q. What is Defect Removal Efficiency (DRE)? The DRE is the percentage of defects that have been removed during a testing activity. DRE indicates the efficiency of the defect removal methods, as well as being an indirect measurement of the quality of the product. It can also be calculated by dividing the effort required for defect detection, defect resolution and retesting by the number of remarks. It is calculated per test type, during and across test phases. The DRE can also be computed for each software development activity and plotted on a bar graph to show the relative defect removal efficiencies of each activity. Or, the DRE may be computed for a specific task or technique (e.g. design inspection, code walkthrough, unit test, 6 months of operation, etc.).

DRE = A / (A + B) * 100

where A = number of defects found by the testing team and B = number of defects found by the customer. If DRE >= 0.8 (i.e. 80%), the product is considered good; otherwise not.

Q. What is the difference between ad-hoc testing and error guessing? Ad-hoc testing: testing is performed without test data or any documents. Error guessing: this is a test data selection technique; the selection criterion is to pick values that seem likely to cause errors.
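The DRE formula above is straightforward to compute. A minimal sketch with illustrative numbers:

```python
def dre(found_by_testing, found_by_customer):
    """Defect Removal Efficiency: DRE = A / (A + B) * 100,
    where A = defects found by the testing team,
          B = defects found by the customer."""
    a, b = found_by_testing, found_by_customer
    return a / (a + b) * 100

# Example: the testing team found 90 defects; the customer later found 10 more.
print(dre(90, 10))  # 90.0 -> above the 80% threshold, so acceptable
```

The numbers here are made up; in practice A and B come from the defect tracker, split by who reported each defect.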

Q. What is the difference between a test plan and a test strategy? Test plan: after completion of SRS learning and business requirement gathering, test management concentrates on test planning; this is done by the test lead or project lead. Test strategy: based on the corresponding testing policy, the quality analyst finalizes the test responsibility matrix; this is done by QA.

Q. What is the V&V model? Why is it called V and not U? At what stage of this model should testing start? It is called V because the two prongs of this model (Verification and Validation) look like the two prongs of the letter V. The detailed V model is shown below:

SRS ------------------------ Acceptance testing
  \                           /
   High Level Design ---- System testing
     \                     /
      Low Level Design - Integration testing
        \               /
         Coding --- Unit Testing

Testing starts at the requirement (SRS) phase, i.e. from the top of the left prong of the V. You can raise defects that are present in the document; this is called verification.

Q. What are 5 common problems in the software development process?
- Poor requirements: if requirements are unclear, incomplete, too general, or not testable, there will be problems.
- Unrealistic schedule: if too much work is crammed into too little time, problems are inevitable.
- Inadequate testing: no one will know whether or not the program is any good until the customer complains or systems crash.
- Features: requests to pile on new features after development is underway; extremely common.
- Miscommunication: if developers don't know what's needed, or customers have erroneous expectations, problems are guaranteed.

Q. What are 5 common solutions to software development problems?
- Solid requirements: clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements.
- Realistic schedules: allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.
- Adequate testing: start testing early on, re-test after fixes or changes, and plan adequate time for testing and bug fixing.
- Stick to initial requirements as much as possible: be prepared to defend against changes and additions once development has begun, and be prepared to explain the consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, use rapid prototyping during the design phase so that customers can see what to expect. This will give them a higher comfort level with their requirements decisions and minimize changes later on.
- Communication: require walkthroughs and inspections when appropriate; make extensive use of group communication tools (e-mail, groupware, networked bug-tracking tools, change management tools, intranet capabilities, etc.); ensure that documentation is available and up to date, preferably electronic rather than paper; promote teamwork and cooperation; use prototypes early on so that customers' expectations are clarified.

Q. What is configuration management? Configuration management covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, the changes made to them, and who makes the changes.

Q. What if the software is so buggy it can't really be tested at all? The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules and indicates deeper problems in the software development process (such as insufficient unit testing, insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with documentation as evidence of the problem.

Q. How can it be known when to stop testing? This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
- Deadlines (release deadlines, testing deadlines, etc.)
- Test cases completed with a certain percentage passed
- Test budget depleted
- Coverage of code/functionality/requirements reaches a specified point
- Bug rate falls below a certain level
- Beta or alpha testing period ends
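These exit criteria are sometimes encoded as a simple checklist. A hypothetical sketch, with every threshold value purely illustrative:

```python
# Hypothetical encoding of common "when to stop testing" criteria.
# All thresholds (0.95, 0.90, 1 bug/day) are illustrative assumptions.

def should_stop_testing(deadline_reached=False,
                        pass_rate=0.0,         # fraction of test cases passed
                        budget_left=1.0,       # fraction of test budget remaining
                        coverage=0.0,          # fraction of code/requirements covered
                        bugs_per_day=float("inf")):
    """Stop on a hard constraint (deadline, budget) or when quality
    criteria (pass rate, coverage, falling bug rate) are all met."""
    hard_stop = deadline_reached or budget_left <= 0
    quality_met = pass_rate >= 0.95 and coverage >= 0.90 and bugs_per_day < 1
    return hard_stop or quality_met

print(should_stop_testing(pass_rate=0.97, coverage=0.92, bugs_per_day=0.5))  # True
print(should_stop_testing(pass_rate=0.80, coverage=0.60, bugs_per_day=4))    # False
```

In practice these numbers come from the test plan's exit criteria, not from code, but writing them down this explicitly removes ambiguity at release time.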

QA Processes Questions
Q. What is the QA process? The Quality Assurance (QA) process consists of planning, coordinating and other strategic activities associated with measuring product quality against external requirements and specifications (process-related activities). Quality Assurance helps establish processes. It sets up measurement programs to evaluate processes, identifies weaknesses in processes, and improves them. QA is a management responsibility, frequently performed by a staff function. It is concerned with all of the products that will ever be produced by a process. QA personnel should not perform Quality Control unless it is to validate Quality Control. QA is a preventive process, and it applies to the entire life cycle.

Web Testing Questions


Q. What are the important scenarios for testing emails? How do you test emails? Which tool is best for testing email? We can categorize the different parts on which a tester may perform the testing:
1. Incoming mail with attachment
2. Outgoing mail with attachment
3. Mail failure
4. Other operations like Delete, Edit, etc.

1. For incoming mail with attachment:
- Check the proper incoming address or id.
- Check not only the To address but also the Cc and Bcc addresses.
- Check the maximum and minimum limits on the number of addresses.
- Check how an address with an error, such as a missing @ or dot (.), is handled.
- Check an address with more than one @ sign.
- Check the placement of the dot (.) and the number of times it is present in the email address.
- Check that the address only contains the standard special symbols @, ., _ and -.
- Check that the mail does not carry unnecessary content with it.
- Check that any attachments open properly.
- Check that the attachment size does not exceed the standard size.
- Check that if there is more than one attachment, the combined size is under the standard size.
- Check that any images or Flash content in the email body displays properly.
- Check that files with different extensions attached to the mails display properly.
- Check that once the user reads the mail, it is marked as 'read'.

2. For outgoing mail with attachment: all the scenarios mentioned above for incoming mails are valid for outgoing mails also; hence all of them have to be verified.

3. Mail failure: check mail failure when mail is sent to an incorrect address; the failure notice should indicate the reason for the failure.

4. Other operations like Delete, Edit, etc. should be verified.

Q. What is Web Services testing? Have you done this? Web Services testing is nothing but testing the application: in this testing we just see whether the functionality exposed as web services is working or not.

Q. What things should be considered in usability testing of a web application? Usability testing is done for user-friendliness: we check how comfortable the customer is in using the application. For example, suppose while logging in the user forgot their password; in usability testing you have to check whether there is a 'forgot password' option and whether clicking it asks the secret question. Many other things can be tested as well, for example that there are minimize and maximize buttons for a window, and so on.

Q. How will you calculate the response time at the server end? This can be done using Microsoft Visual Studio 2005 and 2008, which provide ways to test a particular webpage, such as load testing, website testing and so on. There is also a tool called Fiddler which is used to inspect the traffic for a website currently used by a number of users.

Q. What test cases would you execute for compatibility testing with different browsers? There are a lot of issues which may arise while testing in different browsers. Following are some points comparing IE with Firefox; in IE the following issues may occur (and need compatibility testing) but do not occur in Firefox:
1. JavaScript errors.
2. DIV tag issues.
3. getElementById issues.
4. Parent and client window size or pixel-related issues.
5. After a JavaScript error, IE may sometimes not take us to further navigation.

Q. What are the main components of a performance test report? The main components of a performance test report are processor time, user load per second, memory use, threshold states, server response, and so on. When you carry out performance testing you get the result in the form of a graph; then you can see the response as the user load increases and decreases per second. You can also check the processor performance, and check the threshold value when the load increases suddenly. These are some of the main components of a performance test report.

Q. What errors can occur when the page loads? The page may take too long to load, or the page may fail to load. By performing performance testing we can verify how the system behaves when subjected to loads at or beyond the specified and required limits. Perform configuration testing to determine how the system deals with hardware, software, operating systems, network conditions, etc.

Q. Give some test cases for testing a search engine website, say Google. The test cases for a search engine would be very vast; it totally depends upon the scope of testing. Some of the test cases are as mentioned below:
1. Check simple query strings like "European Premier League" or "Grammy Awards 2005".
2. Test the functionality of multiple-page display by clicking on a page number.
3. Verify whether a combination string works, like "European Premier League_Christiano Ronaldo".
4. Perform a test for opening the links in new windows.

Q. What can be the security checks on a web site, other than login/password screens? Other than login/password, you can do other security testing checks like SQL injection methods, cookie encryption testing, testing of authorization and authentication, etc.

Q. What type of testing is carried out to find memory leakages? Give an example. This is possible through volume testing. For example, an application tries to retrieve a large amount of data that requires a large temporary buffer area. If the data exceed the buffer area, a memory leakage situation can occur and the query will fail without returning any result, as sorting did not finish before the buffer exceeded its limit. We need to measure the memory size before and after test execution using memory-related API functions or MFC functions.

Q. How to test cookies and memory leakages? For cookie testing follow the below URL: http://www.stickyminds.com/sitewide.asp?Function=edetail&ObjectType=ART&ObjectId=2935

For memory leakage testing follow the below URL: http://www.liutilities.com/products/wintaskspro/whitepapers/paper1/

Q. How to do browser testing (create a standard script and run it for the different browser combinations)? The GUI architecture and event messaging differ from browser to browser: IE uses Win32::OLE messaging and Firefox uses GTK-based messaging, so it is generally difficult to create one standard script that runs on all browsers. But tools like WinRunner and QTP use complex procedures inside them to handle different browsers. Manual testing can always be performed if the application supports different browsers like IE, Firefox, Opera, Netscape, etc.

Q. What bugs mainly come up in web testing, and what severity and priority do we give them? In web testing, bugs mainly come from the navigation area: these could be missing links, broken links, invalid links, etc. There are also bugs in downloading data/image/audio/video files from the website to the local machine, and in uploading them from the local machine to the web server. Other than these, a lot of bugs also come from content, look-and-feel and cosmetic issues.

Q. Write test cases for a web URL.
1. Type the URL in the address bar (e.g. www.yahoo.com) and click the 'Go' button.
2. Check whether the page navigates to the Yahoo home page.
3. If it navigates to the Yahoo home page, the test case passes; else it fails.
4. Also check that when we enter the URL in the address bar and press the Enter key on the keyboard, it navigates to the Yahoo home page.
5. When we click the Refresh button on the Yahoo home page, the same page should be displayed.

Q. What happens in a web application when you enter all the data, click on the Submit button, and suddenly the connection goes off? Will the data be present if you return to the page? If the data reaches the web server by the time of disconnection, the system will persist the data in the database. If the connection fails before the data reaches the server, the data won't be persisted and will be lost.
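The web URL test cases above can be partly automated. As a minimal sketch using only the Python standard library, a script might first check that the URL is well formed before driving a browser; the helper name is an assumption.

```python
from urllib.parse import urlparse

def looks_like_valid_url(url):
    """Cheap pre-check before navigation tests: a scheme and a host are present."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

print(looks_like_valid_url("https://www.yahoo.com"))  # True
print(looks_like_valid_url("www.yahoo.com"))          # False: no scheme, so urlparse sees only a path
```

The actual navigation, Enter-key and Refresh checks still need a browser-driving tool (or manual testing); this sketch only filters out malformed inputs up front.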

Q. Suppose there is a website, and after clicking OK in the login window a window opens with the message "The page cannot be displayed". Is it a bug? This can be a bug, but we cannot be sure before checking the below factors:
- Is the internet connection working fine?
- Is the browser on which the error comes supported by the software?
- Is the URL correct?
- Is the popup blocker off?
If the answer to all the above questions is yes, then this is a bug.

Q. What are the main bugs found in browser compatibility testing? Following are the main bugs found in browser compatibility testing:
- Particular pages do not open in every browser (Opera, Firefox, Netscape)
- Cookies are not available in a particular browser
- The exact link does not open
- A link is broken or does not exist

Q. What do you understand by the terms Response Time, Pages Per Second, Transactions Per Second? Response time is the time taken by the server to respond to a particular action or request. Pages per second gives the number of pages downloaded per second. Transaction response time is the time taken to perform a transaction in the scenario.

Q. What are the models used for testing web applications? All the SDLC models can be used for testing web applications. A web application is a combination of one or more modules; depending upon the web application we use different models, e.g. the V-model, spiral model and waterfall model.

Q. How to test session timeout in web testing? We can test the session timeout as below:
Scenario 1: Log into the application and leave it idle for a time equal to or a little more than the prescribed session timeout. With the application still open, click on any of the links; it should give an error that the session has expired.
Scenario 2: Log into the application and leave it idle for a time a little less than the prescribed session timeout. With the application still open, click on any of the links; the link should open the appropriate page.

Q. What are the different ways in which cookie testing can be done for a website? Following are the different ways to perform cookie testing:
1. Disabling cookies: this is probably the easiest area of cookie testing. What happens to the web site if all cookies are disabled? Start by closing all instances of your browser and deleting all cookies from your PC set by the site under test. The cookie file is kept open by the browser while it's running, so you must close the browser to delete the cookies; closing the browser also removes any per-session cookies in memory. Disable all cookies and attempt to use the site's major features and functions. Most of the time, you will find that these sites won't work when cookies are disabled. This isn't a bug, but rather a fact of life: disabling cookies on a site that requires cookies (of course!) disables the site's functionality.
2. Selectively rejecting cookies: what happens to the site if some cookies are accepted and others are rejected? Start by deleting all cookies from your PC set by the site under test, and set your browser's cookie option to prompt you whenever a web site attempts to set a cookie. Exercise the site's major functions; you will be prompted for each and every cookie the site attempts to set. Accept some and reject others. (Analyze site cookie usage in advance and draw up a test plan detailing which cookies to reject/accept for each function.) How does the site hold up under this selective cookie rejection? As above, does the web server detect that certain cookies are being rejected and respond with an appropriate message? Or does the site malfunction, crash, corrupt data, or misbehave in other ways?
3. Corrupting cookies: along the way, as cookies are created and modified, try things like:
a. Altering the data in the persistent cookies. Since the per-session cookies are stored only in memory, they aren't readily accessible for editing.
b. Selectively deleting cookies. Allow the cookie to be written (or modified), perform several more actions on the site, then delete that cookie. Continue using the site. What happens? Is it easy to recover? Is any data lost or corrupted?
4. Cookie encryption: while investigating cookie usage on the site you're testing, pay particular attention to the meaning of the cookie data. Sensitive information like usernames and passwords should NOT be stored in plain text for all the world to read; this data should be encrypted before it is sent to your computer.

Q. What is the difference between testing an 'https' site and an 'http' site? HTTP is the Hypertext Transfer Protocol and is transmitted over the wire via port 80 (TCP). You normally use HTTP when you are browsing the web; it is not secure, so someone can eavesdrop on the conversation between your computer and the web server. HTTPS (Hypertext Transfer Protocol over Secure Socket Layer, or HTTP over SSL) is a web protocol developed by Netscape and built into its browser that encrypts and decrypts user page requests as well as the pages returned by the web server. HTTPS is really just the use of Netscape's Secure Socket Layer (SSL) as a sublayer under its regular HTTP application layering. (HTTPS uses port 443 instead of HTTP's port 80 in its interactions with the lower layer, TCP/IP.) Early SSL implementations used a 40-bit key for the RC4 stream encryption algorithm; newer browsers use a 128-bit key, which is more secure and is considered an adequate degree of encryption for commercial exchange. HTTPS is normally used in login pages and shopping/commercial sites.

Q. What is scalability testing with respect to a website? The purpose of scalability testing is to determine whether your application scales with workload growth. Suppose your company expects a six-fold load increase on your server in the next two months. You may need to increase the server performance and shorten the request processing time to better serve visitors. If your application is scalable, you can shorten this time by upgrading the server hardware, for example by increasing the CPU frequency and adding more RAM. (You can also increase request performance by changing the server software, for example by replacing text-file data storage with SQL Server databases. To find a better solution, you can first test hardware changes, then software changes, and then compare the results of the tests.) If the scalability tests report that the application is not scalable, this means there is a bottleneck somewhere within the application. Scalability testing can be performed as a series of load tests with different hardware or software configurations, keeping the other settings of the testing environment unchanged. When performing scalability testing, you can vary such variables as the CPU frequency, the number and type of servers, and the amount of available RAM.
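The port numbers mentioned in the HTTP/HTTPS answer above (80 and 443) can be derived from a URL programmatically. A small sketch using Python's standard library:

```python
from urllib.parse import urlparse

# Default ports for the two schemes discussed above.
DEFAULT_PORTS = {"http": 80, "https": 443}

def effective_port(url):
    """Port a request would use: the explicit port if given, else the scheme default."""
    parts = urlparse(url)
    return parts.port if parts.port is not None else DEFAULT_PORTS[parts.scheme]

print(effective_port("http://example.com/login"))    # 80
print(effective_port("https://example.com/login"))   # 443
print(effective_port("https://example.com:8443/"))   # 8443
```

A check like this is handy in security testing: it makes it easy to assert that sensitive pages such as logins are reached over HTTPS (port 443) rather than plain HTTP.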
