Software Testing: Notes Collected From Site

Contents:
• Software Testing Introduction
• Testing Start Process
• Testing Stop Process
• Testing Strategy
• Testing Plan
• Risk Analysis
• Software Testing Life Cycle
• Software Testing Types: Static Testing, Dynamic Testing, Blackbox Testing, Whitebox Testing, Unit Testing, Requirements Testing, Regression Testing, Error Handling Testing, Manual Support Testing, Intersystem Testing, Control Testing, Parallel Testing, Volume Testing, Stress Testing, Performance Testing
• Testing Tools: Win Runner, Load Runner, Test Director, Silk Test, Test Partner
• Interview Questions: Win Runner, Load Runner, Silk Test, Test Director, General Testing Questions

Testing is a process used to help identify the correctness, completeness and quality of developed computer software. With that in mind, testing can never completely establish the correctness of computer software. In other words, testing is nothing but CRITICISM or COMPARISON: comparing the actual value with the expected one. There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following a rote procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing connotes the dynamic analysis of the product - putting the product through its paces. The quality of the application can, and normally does, vary widely from system to system, but some of the common quality attributes include reliability, stability, portability, maintainability and usability. Refer to the ISO standard ISO 9126 for a more complete list of attributes and criteria. Testing helps in verifying and validating that the software is working as it is intended to work. This involves using static and dynamic methodologies to test the application.

Because of the fallibility of its human designers and its own abstract, complex nature, software development must be accompanied by quality assurance activities. It is not unusual for developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities combined. The destructive nature of testing requires that the developer discard preconceived notions of the correctness of his/her developed software.

Software Testing Fundamentals
Testing objectives include:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.
Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software's reliability and quality. But testing cannot show the absence of defects - it can only show that software defects are present.

When Testing should start:
Testing early in the life cycle reduces the errors. Test deliverables are associated with every phase of development. The goal of the software tester is to find bugs, find them as early as possible, and make sure they are fixed. The number one cause of software bugs is the specification. There are several reasons specifications are the largest bug producer. In many instances a spec simply isn't written. Other reasons may be that the spec isn't thorough enough, it's constantly changing, or it's not communicated well to the entire team. Planning software is vitally important; if it's not done correctly, bugs will be created. The next largest source of bugs is the design: that's where the programmers lay the plan for their software. Compare it to an architect creating the blueprint for a building. Bugs occur here for the same reasons they occur in the specification: it's rushed, changed, or not well communicated.

Coding errors may be more familiar to you if you are a programmer. Typically these can be traced to software complexity, poor documentation, schedule pressure or just plain dumb mistakes. It's important to note that many bugs that appear on the surface to be programming errors can really be traced to the specification. It's quite common to hear a programmer say, "Oh, so that's what it's supposed to do. If someone had told me that I wouldn't have written the code that way." The other category is the catch-all for what is left. Some bugs can be blamed on false positives, conditions that were thought to be bugs but really weren't. There may be duplicate bugs, multiple ones that resulted from the same root cause. Some bugs can be traced to testing errors.

Costs: the costs are logarithmic - that is, they increase tenfold as time increases. A bug found and fixed during the early stages, when the specification is being written, might cost next to nothing, or 10 cents in our example. The same bug, if not found until the software is coded and tested, might cost $1 to $10. If a customer finds it, the cost could easily top $100.

When to Stop Testing
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. "When to stop testing" is one of the most difficult questions for a test engineer. Common factors in deciding when to stop are:

• Deadlines (release deadlines, testing deadlines)
• Test cases completed with a certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• The rate at which bugs can be found is too small
• Beta or alpha testing period ends
• The risk in the project is under the acceptable limit

Practically, we feel that the decision to stop testing is based on the level of risk acceptable to management. As testing is a never-ending process we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X amount of testing done. The risk can be measured by risk analysis, but for a small-duration / low-budget / low-resources project, risk can be deduced simply by:
• Measuring test coverage
• Number of test cycles
• Number of high priority bugs

Test Strategy
Test strategy is how we plan to cover the product so as to develop an adequate assessment of quality. A good test strategy is: specific, practical, justified. The purpose of a test strategy is to clarify the major tasks and challenges of the test project. Test Approach and Test Architecture are other terms commonly used to describe what I'm calling test strategy. Example of a poorly stated (and probably poorly conceived) test strategy: "We will use black box testing, cause-effect graphing, boundary testing, and white box testing to test this product against its specification." A test strategy takes into account the type of project, the type of software, when testing will occur, critical success factors, and tradeoffs.

Test Plan - Why
• Identify risks and assumptions up front to reduce surprises later.
• Communicate objectives to all team members.
• Foundation for the Test Spec, Test Cases, and ultimately the bugs we find.
• Failing to plan = planning to fail.

Test Plan - What
• Derived from the Test Approach, Requirements, Project Plan, Functional Spec. and Design Spec.
• Details out the project-specific Test Approach.
• Lists general (high-level) Test Case areas.
• Includes a testing Risk Assessment.
• Includes a preliminary Test Schedule.
• Lists resource requirements.

Test Plan
The test strategy identifies multiple test levels, which are going to be performed for the project. Activities at each level must be planned well in advance and have to be formally documented. Based on the individual plans only, the individual test levels are carried out; the testing activities of these levels lead to the product being built slowly, unit by unit, and then integrated. Each level is described with ETVX criteria: Entry means the entry point to that phase (for example, for unit testing, the coding must be complete and only then can one start unit testing). Task is the activity that is performed. Validation is the way in which the progress, correctness and compliance are verified for that phase. Exit tells the completion criteria of that phase; for example, the exit criterion for unit testing is that all unit test cases must pass.

Unit Test Plan {UTP}
The unit test plan is the overall plan to carry out the unit test activities. The lead tester prepares it and it will be distributed to the individual testers. It contains the following sections.

What is to be tested?
The unit test plan must clearly specify the scope of unit testing. In this, normally the basic input/output of the units along with their basic functionality will be tested. In this case mostly the input units will be tested for format, alignment, accuracy and totals. The UTP will clearly give the rules for what data types are present in the system, their format and their boundary conditions. This list may not be exhaustive, but it is better to have a complete list of these details.

Sequence of Testing
The sequence of test activities that are to be carried out in this phase is listed in this section: whether to execute positive test cases first or negative test cases first, whether to execute test cases based on priority, whether to execute test cases based on test groups, etc. Positive test cases prove that the system performs what it is supposed to do; negative test cases prove that the system does not perform what it is not supposed to do.

Basic Functionality of Units
How the independent functionalities of the units are tested, excluding any communication between the unit and other units. The interface part is out of scope of this test level.

Apart from the above sections, the following sections are addressed, very specific to unit testing:
• Unit Testing Tools
• Priority of program units
• Naming convention for test cases
• Status reporting mechanism
• Regression test approach
• ETVX criteria

Integration Test Plan
The integration test plan is the overall plan for carrying out the activities in the integration test level. It contains the following sections.

What is to be tested?
This section clearly specifies the kinds of interfaces that fall under the scope of testing: internal and external interfaces, with request and response, are to be explained. This includes the screens, files, database etc. This need not go deep into technical details, but the general approach to how the interfaces are triggered is explained.

Sequence of Integration
When there are multiple modules present in an application, the sequence in which they are to be integrated is specified in this section. In this, the dependencies between the modules play a vital role. If a unit B has to be executed, it may need data that is fed by unit A and unit X. In that case, units A and X have to be integrated first and then, using that data, unit B has to be tested.
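The dependency-driven ordering described above can be sketched in code. This is only an illustration, not part of the original notes: the unit names A, B and X follow the example in the text, and using a simple topological sort is our own choice for deriving an order in which every unit is integrated after the units it depends on.

```python
# A minimal sketch (not from the original notes) of deriving an integration
# sequence from unit dependencies with a simple topological sort.
# Unit names A, B, X follow the example above; the dependency data is hypothetical.

def integration_order(dependencies):
    """Return a list of units such that every unit appears after the units it depends on."""
    order = []
    resolved = set()
    pending = dict(dependencies)          # unit -> set of units it needs data from
    while pending:
        # pick units whose dependencies are already integrated
        ready = [u for u, deps in pending.items() if deps <= resolved]
        if not ready:
            raise ValueError("circular dependency between units: %s" % sorted(pending))
        for unit in sorted(ready):
            order.append(unit)
            resolved.add(unit)
            del pending[unit]
    return order

if __name__ == "__main__":
    deps = {
        "A": set(),          # A needs nothing
        "X": set(),          # X needs nothing
        "B": {"A", "X"},     # B consumes data produced by A and X
    }
    print(integration_order(deps))   # ['A', 'X', 'B'] -> integrate A and X before testing B
```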

System Test Plan {STP}
The system test plan is the overall plan for carrying out the system test level activities. In the system test, apart from testing the functional aspects of the system, there are some special testing activities carried out, such as stress testing etc. The following are the sections normally present in a system test plan.

What is to be tested?
This section defines the scope of system testing, very specific to the project. Normally, system testing is based on the requirements; all requirements are to be verified in the scope of system testing. This covers the functionality of the product. Apart from this, whatever special testing is performed is also stated here. This section also mentions all the approaches which will be followed at the various stages of test execution.

Functional Groups and the Sequence
The requirements can be grouped in terms of functionality. Based on this, there may also be priorities among the functional groups. For example, in a banking application, anything related to customer accounts can be grouped into one area, anything related to inter-branch transactions may be grouped into another area, etc. In the same way, for the product being tested, these areas are to be mentioned here and the suggested sequence of testing of these areas, based on the priorities, is to be described.

Acceptance Test Plan {ATP}
The client at their place performs the acceptance testing. It will be very similar to the system test performed by the Software Development Unit. Since the client is the one who decides the format and testing methods as part of acceptance testing, there is no specific clue on the way they will carry out the testing, but it will not differ much from the system testing. Assume that all the rules which are applicable to system test can be implemented for acceptance testing also. Since this is just one level of testing done by the client for the overall product, it may include test cases covering the unit and integration test level details.

A sample Test Plan Outline along with descriptions is shown below:

Test Plan Outline
1. BACKGROUND - This item summarizes the functions of the application system and the tests to be performed.
2. INTRODUCTION
3. ASSUMPTIONS - Indicates any anticipated assumptions which will be made while testing the application.
4. TEST ITEMS - List each of the items (programs) to be tested.
5. FEATURES TO BE TESTED - List each of the features (functions or requirements) which will be tested or demonstrated by the test.
6. FEATURES NOT TO BE TESTED - Explicitly lists each feature, function, or requirement which won't be tested and why not.
7. APPROACH - Describe the data flows and test philosophy. Simulation or live execution, etc.
8. ITEM PASS/FAIL CRITERIA - Blanket statement. Itemized list of expected output and tolerances.
9. SUSPENSION/RESUMPTION CRITERIA - Must the test run from start to completion? Under what circumstances may it be resumed in the middle? Establish check-points in long tests.
10. TEST DELIVERABLES - What, besides software, will be delivered? Test report, test software.

11. TESTING TASKS - Functional tasks (e.g. equipment set up), administrative tasks.
12. ENVIRONMENTAL NEEDS - Security clearance, office space & equipment, hardware/software requirements.
13. RESPONSIBILITIES - Who does the tasks in Section 10? What does the user do?
14. STAFFING & TRAINING
15. SCHEDULE
16. RESOURCES
17. RISKS & CONTINGENCIES
18. APPROVALS
The schedule details of the various test passes, such as unit tests, integration tests and system tests, should be clearly mentioned along with the estimated efforts.

Risk Analysis
A risk is a potential for loss or damage to an organization from materialized threats. Risk analysis attempts to identify all the risks and then quantify the severity of the risks. A threat, as we have seen, is a possible damaging event. If it occurs, it exploits a vulnerability in the security of a computer based system.

Risk Identification
1. Software Risks: Knowledge of the most common risks associated with software development, and the platform you are working on.
2. Business Risks: The most common risks associated with the business using the software.
3. Testing Risks: Knowledge of the most common risks associated with software testing for the platform you are working on, the tools being used, and the test methods being applied.
4. Premature Release Risk: Ability to determine the risk associated with releasing unsatisfactory or untested software products.
5. Risk Methods: Strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products and processes; assessing their likelihood, and initiating strategies to test for those risks.

Traceability means that you would like to be able to trace back and forth how and where any work product fulfills the directions of the preceding (source) product. The matrix deals with the where, while the how you have to do yourself, once you know the where. Take, for example, the requirement of user-friendliness (UF). Since UF is a complex concept, it is not solved by just one design solution and it is not solved by one line of code. Many partial design solutions may contribute to this requirement and many groups of lines of code may contribute to it.

A Requirements-Design Traceability Matrix puts on one side (e.g. the left) the sub-requirements that together are supposed to solve the UF requirement, along with the other (sub-)requirements. On the other side (e.g. the top) you specify all design solutions. Now you can mark, at the crosspoints of the matrix, which design solutions solve (more, or less) any requirement. If a design solution does not solve any requirement, it should be deleted, as it is of no value. Having this matrix, you can check whether any requirement has at least one design solution, and by checking the solution(s) you may see whether the requirement is sufficiently solved by this (or the set of) connected design(s). In a Design-Code Traceability Matrix you can do the same to keep track of how and which code solves a particular design, and of how changes in design or code affect each other. If you have to change any requirement, you can see which designs are affected; and if you change any design, you can check which requirements may be affected and see what the impact is.

A traceability matrix:
• Serves as a single source for tracking purposes.
• Identifies gaps in the design and testing.
• Prevents delays in the project timeline, which can be brought about by having to backtrack to fill gaps.
• Demonstrates that the implemented system meets the user requirements.
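The matrix checks described above (every requirement should map to at least one design solution, and a design solution mapped to no requirement is of no value) can be sketched with a small script. The requirement and design names below are invented purely for illustration.

```python
# A small sketch (illustrative names only) of the Requirements-Design
# Traceability Matrix checks described above: every requirement should have
# at least one design solution, and a design solution that maps to no
# requirement is of no value.

# requirement -> design solutions that (partially) satisfy it
matrix = {
    "UF-1 consistent menus": {"D-10 menu framework"},
    "UF-2 undo everywhere":  {"D-11 command history", "D-12 toolbar"},
    "UF-3 online help":      set(),                       # not yet covered
}

all_designs = {"D-10 menu framework", "D-11 command history",
               "D-12 toolbar", "D-99 splash screen"}

uncovered_requirements = [req for req, designs in matrix.items() if not designs]
used_designs = set().union(*matrix.values())
unused_designs = all_designs - used_designs

print("Requirements with no design solution:", uncovered_requirements)
print("Design solutions tied to no requirement (candidates for removal):",
      sorted(unused_designs))
```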

Software Testing Life Cycle
The test development life cycle contains the following components:
• Requirements
• Use Case Document
• Test Plan
• Test Case
• Test Case Execution
• Report Analysis
• Bug Analysis
• Bug Reporting

Use Case
A use case is a collection of possible scenarios between the system under discussion and external actors, characterized by the goal the primary actor has toward the system's declared responsibilities, showing how the primary actor's goal might be delivered or might fail. It is a typical interaction scenario from a user's perspective, used for system requirements studies or testing - in other words, "an actual or realistic example scenario". Use cases focus attention on aspects of a system useful to people outside of the system itself.
• Users of a program are called users or clients.
• Users of an enterprise are called customers, suppliers, etc.
A use case describes the use of a system from start to finish. Use cases are goals (use cases and goals are used interchangeably) that are made up of scenarios. Scenarios consist of a sequence of steps to achieve the goal; each step in a scenario is a sub (or mini) goal of the use case. As such, each sub-goal represents either another use case (a subordinate use case) or an autonomous action that is at the lowest level desired by our use case decomposition.
A complete use case analysis requires several levels. In addition, the level at which the use case is operating is important for understanding the scope it is addressing. The level and scope are important to assure that the language and granularity of scenario steps remain consistent within the use case. This hierarchical relationship is needed to properly model the requirements of a system being developed.

There are two scopes that use cases are written from, Strategic and System, and three levels: Summary, User Goal and Sub-function.

Scopes: Strategic and System
Strategic Scope: The goal (use case) is a strategic goal with respect to the system. These goals are goals of value to the organization. The use case shows how the system is used to benefit the organization. These strategic use cases will eventually use some of the same lower level (subordinate) use cases.
System Scope: Use cases at system scope are bounded by the system under development. The goals represent specific functionality required of the system. The majority of the use cases are at system scope. These use cases are often steps in strategic level use cases.

Levels: Summary Goal, User Goal and Sub-function
Summary Level Use Case: Written for either strategic or system scope; either at system or strategic scope. They represent collections of user level goals. For example, the summary goal "Configure Data Base" might include as a step the user level goal "Add Device to database".
User Level Use Case: This is the level of greatest interest. It represents a user task or elementary business process. A user level goal addresses the question "Does your job performance depend on how many of these you do in a day?" For example, "Create Site View" or "Create New Device" would be user level goals, but "Log In to System" would not. Always at system scope.
Sub-function Level Use Case: A sub-goal or step that is below the main level of interest to the user. Examples are "logging in" and "locate a device in a DB". Always at system scope.

Test Documentation
Test documentation is a required tool for managing and maintaining the testing process. Documents produced by testers should answer the following questions:
• What to test? Test Plan
• How to test? Test Specification
• What are the results? Test Results Analysis Report

Bug Life Cycle
In entomology (the study of real, living bugs), the term life cycle refers to the various stages that an insect assumes over its life. If you think back to your high school biology class, you will remember that the life cycle stages for most insects are the egg, larva, pupa and adult. It seems appropriate, given that software problems are also called bugs, that a similar life cycle system is used to identify their stages of life. Figure 18.2 shows an example of the simplest, and most optimal, software bug life cycle.

This example shows that when a bug is found by a software tester, it is logged and assigned to a programmer to be fixed. This state is called the open state. Once the programmer fixes the code, he assigns it back to the tester and the bug enters the resolved state. The tester then performs a regression test to confirm that the bug is indeed fixed and, if it is, closes it out. The bug then enters its final state, the closed state.

In some situations, though, the life cycle gets a bit more complicated. In such a case the life cycle starts out the same, with the tester opening the bug and assigning it to the programmer, but the programmer doesn't fix it. He doesn't think it's bad enough to fix and assigns it to the project manager to decide. The project manager agrees with the programmer and places the bug in the resolved state as a "won't-fix" bug. The tester disagrees, looks for and finds a more obvious and general case that demonstrates the bug, reopens it, and assigns it to the programmer to fix. The programmer fixes the bug, resolves it as fixed, and assigns it to the tester. The tester confirms the fix and closes the bug.

You can see that a bug might undergo numerous changes and iterations over its life, sometimes looping back and starting the life cycle all over again. The figure below takes the simple model above and adds to it the possible decisions, approvals, and looping that can occur in most projects. Of course every software company and project will have its own system, but this figure is fairly generic and should cover most any bug life cycle that you'll encounter.

The generic life cycle has two additional states and extra connecting lines. The review state is where the project manager or a committee, sometimes called a change control board, decides whether the bug should be fixed. In some projects all bugs go through the review state before they're assigned to the programmer for fixing. In other projects, this may not occur until near the end of the project, or not at all. Notice that the review state can also go directly to the closed state. This happens if the review decides that the bug shouldn't be fixed - it could be too minor, is really not a problem, or is a testing error. The other additional state is deferred. The review may determine that the bug should be considered for fixing at some time in the future, but not for this release of the software.

The additional line from the resolved state back to the open state covers the situation where the tester finds that the bug hasn't been fixed. It gets reopened and the bug's life cycle repeats.

The two dotted lines that loop from the closed and the deferred states back to the open state rarely occur but are important enough to mention. Since a tester never gives up, it's possible that a bug that was thought to be fixed, tested and closed could reappear. Such bugs are often called regressions. It's also possible that a deferred bug could later be proven serious enough to fix immediately. If either of these occurs, the bug is reopened and started through the process again.

Most project teams adopt rules for who can change the state of a bug or assign it to someone else. For example, maybe only the project manager can decide to defer a bug, or only a tester is permitted to close a bug. What's important is that once you log a bug, you follow it through its life cycle, don't lose track of it, and provide the necessary information to drive it to being fixed and closed.

Bug Report - Why
• Communicate the bug for reproducibility, resolution, and regression.
• Track bug status (open, resolved, closed).
• Ensure the bug is not forgotten, lost or ignored.
• Used to back-create a test case where none existed before.
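As a rough sketch, the states and transitions described above can be written down as a small state machine. The state names follow the text (open, review, resolved, deferred, closed); the transition table itself is illustrative and not taken from any particular defect-tracking tool.

```python
# A rough sketch of the bug life cycle described above as a state machine.
# The ALLOWED table encodes the transitions discussed in the text; it is
# illustrative, not a standard from any specific bug-tracking product.

ALLOWED = {
    "open":     {"resolved", "review"},
    "review":   {"open", "deferred", "closed"},   # review may send a bug straight to closed
    "resolved": {"closed", "open"},               # tester closes it, or reopens if not fixed
    "deferred": {"open"},                         # a deferred bug can later be reopened
    "closed":   {"open"},                         # a regression reopens a closed bug
}

class Bug:
    def __init__(self, summary):
        self.summary = summary
        self.state = "open"
        self.history = ["open"]

    def move_to(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

if __name__ == "__main__":
    bug = Bug("Fax order dialog loses signature field")
    bug.move_to("resolved")   # programmer fixes and resolves
    bug.move_to("open")       # tester finds it is not actually fixed
    bug.move_to("resolved")
    bug.move_to("closed")     # regression test passes, bug is closed
    print(bug.history)
```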

Software Testing Types

Static Testing
The verification activities fall into the category of static testing. During static testing, you have a checklist to check whether the work you are doing is going as per the set standards of the organization. These standards can be for coding, integrating and deployment. Reviews, inspections and walkthroughs are static testing methodologies.

Dynamic Testing
Dynamic testing involves working with the software, giving input values and checking if the output is as expected. These are the validation activities. Unit tests, integration tests, system tests and acceptance tests are a few of the dynamic testing methodologies. As we go further, let us understand the various test life cycles and get to know the testing terminologies. To understand more of software testing, various methodologies, tools and techniques, you can download the Software Testing Guide Book from here.

Difference Between Static and Dynamic Testing
Please refer to the definition of static testing to observe the difference between static testing and dynamic testing.

Black Box Testing
Introduction
Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing. This type of testing attempts to find errors in the following categories:
1. incorrect or missing functions,
2. interface errors,
3. errors in data structures or external database access,
4. performance errors, and
5. initialization and termination errors.
Tests are designed to answer the following questions:
1. How is the function's validity tested?
2. What classes of input will make good test cases?
3. Is the system particularly sensitive to certain input values?
4. How are the boundaries of a data class isolated?
5. What data rates and data volume can the system tolerate?
6. What effect will specific combinations of data have on system operation?
White box testing should be performed early in the testing process, while black box testing tends to be applied during later stages. Test cases should be derived which
1. reduce the number of additional test cases that must be designed to achieve reasonable testing, and
2. tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.

Equivalence Partitioning
This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions. Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
4. If an input condition is boolean, then one valid and one invalid equivalence class are defined.

Boundary Value Analysis
This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also. BVA guidelines include:
1. For input ranges bounded by a and b, test cases should include values a and b and values just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.

Cause-Effect Graphing Techniques
Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. There are four steps:
1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
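To make the equivalence partitioning and boundary value guidelines above concrete, here is a small sketch. The input condition (a quantity field that must be an integer from 1 to 100) and the function under test are hypothetical, chosen only to show how the guidelines translate into test values.

```python
# An illustrative sketch (the field and its 1..100 range are hypothetical) of
# test values chosen by equivalence partitioning and boundary value analysis
# for a single input condition: "quantity must be an integer from 1 to 100".

def accept_quantity(quantity):
    """The unit under test: returns True only for quantities in the valid range."""
    return isinstance(quantity, int) and 1 <= quantity <= 100

# Equivalence partitioning: one valid class (1..100) and two invalid classes
# (below the range, above the range) -> one representative value from each.
partition_cases = {50: True, -5: False, 250: False}

# Boundary value analysis: values at and just beyond each boundary.
boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

for value, expected in {**partition_cases, **boundary_cases}.items():
    actual = accept_quantity(value)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: accept_quantity({value}) -> {actual} (expected {expected})")
```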

White Box Testing
White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that
1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.

The Nature of Software Defects
Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. General processing tends to be well understood, while special case processing tends to be prone to errors. We often believe that a logical path is not likely to be executed when in fact it may be executed on a regular basis. Our unconscious assumptions about control flow and data lead to design errors that can only be detected by path testing. Typographical errors are random.

Basis Path Testing
This method enables the designer to derive a logical complexity measure of a procedural design and use it as a guide for defining a basis set of execution paths. Test cases that exercise the basis set are guaranteed to execute every statement in the program at least once during testing.

Flow Graphs
Flow graphs can be used to represent control flow in a program and can help in the derivation of the basis set. Each flow graph node represents one or more procedural statements. The edges between nodes represent flow of control. An edge must terminate at a node, even if the node does not represent any useful procedural statements. A region in a flow graph is an area bounded by edges and nodes. Each node that contains a condition is called a predicate node. Cyclomatic complexity is a metric that provides a quantitative measure of the logical complexity of a program. It defines the number of independent paths in the basis set and thus provides an upper bound for the number of tests that must be performed.

The Basis Set
An independent path is any path through a program that introduces at least one new set of processing statements (it must move along at least one new edge in the path). The basis set is not unique; any number of different basis sets can be derived for a given procedural design. Cyclomatic complexity, V(G), for a flow graph G is equal to:
1. The number of regions in the flow graph.
2. V(G) = E - N + 2, where E is the number of edges and N is the number of nodes.
3. V(G) = P + 1, where P is the number of predicate nodes.

Deriving Test Cases
1. From the design or source code, derive a flow graph.
2. Determine the cyclomatic complexity of this flow graph. Even without a flow graph, V(G) can be determined by counting the number of conditional statements in the code.
3. Determine a basis set of linearly independent paths. Predicate nodes are useful for determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set. Each test case is executed and compared to the expected results.

Automating Basis Set Derivation
The derivation of the flow graph and the set of basis paths is amenable to automation. A software tool to do this can be developed using a data structure called a graph matrix. A graph matrix is a square matrix whose size is equivalent to the number of nodes in the flow graph. Each row and column corresponds to a particular node, and the matrix entries correspond to the connections (edges) between nodes.
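A minimal sketch of the graph matrix idea, tied back to the cyclomatic complexity formulas above. The four-node flow graph (a single if/else whose branches rejoin) is invented purely for illustration.

```python
# A minimal sketch of the graph-matrix idea above. The flow graph is invented
# for illustration: node 1 is a decision (if/else) whose branches rejoin at
# node 4. In its simplest form each matrix entry is 1 if an edge exists.

graph_matrix = [
    # to:  1  2  3  4
    [0, 1, 1, 0],   # from node 1 (predicate node: two outgoing edges)
    [0, 0, 0, 1],   # from node 2
    [0, 0, 0, 1],   # from node 3
    [0, 0, 0, 0],   # from node 4 (exit)
]

N = len(graph_matrix)                                   # number of nodes
E = sum(sum(row) for row in graph_matrix)               # number of edges
predicates = sum(1 for row in graph_matrix if sum(row) > 1)

print("V(G) = E - N + 2 =", E - N + 2)                  # 4 - 4 + 2 = 2
print("V(G) = P + 1     =", predicates + 1)             # 1 predicate node + 1 = 2
# Two independent paths form the basis set here: 1-2-4 and 1-3-4.
```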

By adding a link weight to each matrix entry, more information about the control flow can be captured. In its simplest form, the link weight is 1 if an edge exists and 0 if it does not. But other types of link weights can be represented:
• the probability that an edge will be executed,
• the processing time expended during link traversal,
• the memory required during link traversal, or
• the resources required during link traversal.
Graph theory algorithms can be applied to these graph matrices to help in the analysis necessary to produce the basis set.

Loop Testing
This white box technique focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined:
1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.

Simple Loops
The following tests should be applied to simple loops, where n is the maximum number of allowable passes through the loop:
1. skip the loop entirely,
2. only pass once through the loop,
3. m passes through the loop where m < n,
4. n - 1, n, and n + 1 passes through the loop.

Nested Loops
The testing of nested loops cannot simply extend the technique of simple loops, since this would result in a geometrically increasing number of test cases. One approach for nested loops:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at minimums and other nested loops at typical values.
4. Continue until all loops have been tested.

Concatenated Loops
Concatenated loops can be tested as simple loops if each loop is independent of the others. If they are not independent (e.g. the loop counter for one is the loop counter for the other), then the nested approach can be used.

Unstructured Loops
This type of loop should be redesigned, not tested!

Other White Box Techniques
Other white box testing techniques include:
1. Condition testing: exercises the logical conditions in a program.
2. Data flow testing: selects test paths according to the locations of definitions and uses of variables in the program.
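Returning to the simple-loop guidelines above, the sketch below applies the suggested pass counts (0, 1, m < n, n - 1, n, n + 1) to a hypothetical loop with a maximum of n = 10 allowable passes.

```python
# A small sketch of the simple-loop test values suggested above, applied to a
# hypothetical loop that sums the first `count` items of a list, with at most
# n = 10 allowable passes.

def sum_first(items, count):
    """Loop under test: adds up the first `count` items."""
    total = 0
    for i in range(count):
        total += items[i]
    return total

n = 10
data = list(range(1, n + 2))           # enough items for the n + 1 case
m = 4                                  # some m < n

# passes through the loop: skip entirely, once, m < n, n - 1, n, n + 1
for passes in [0, 1, m, n - 1, n, n + 1]:
    expected = sum(data[:passes])
    assert sum_first(data, passes) == expected
    print(f"{passes} passes through the loop -> OK")
```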

Unit Testing
In computer programming, a unit test is a method of testing the correctness of a particular module of source code. The idea is to write test cases for every non-trivial function or method in the module so that each test case is separate from the others if possible. This type of testing is mostly done by the developers.

Benefits
The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. This isolated testing provides four main benefits:

Encourages change - Unit testing allows the programmer to refactor code at a later date and make sure the module still works correctly (regression testing). This encourages programmers to make changes to the code, since it is easy to check whether the piece is still working properly.

Simplifies integration - Unit testing helps eliminate uncertainty in the pieces themselves and can be used in a bottom-up testing style approach. Testing the parts of a program first and then testing the sum of its parts makes integration testing easier.

Documents the code - Unit testing provides a sort of "living document" for the class being tested. Clients looking to learn how to use the class can look at the unit tests to determine how to use the class to fit their needs. It provides a written contract that the piece must satisfy.

Separation of interface from implementation - Because some classes may have references to other classes, testing a class can frequently spill over into testing another class. A common example of this is classes that depend on a database: in order to test the class, the tester finds herself writing code that interacts with the database. This is a mistake, because a unit test should never go outside of its own class boundary. As a result, the software developer abstracts an interface around the database connection and then implements that interface with their own mock object. This results in loosely coupled code, thus minimizing dependencies in the system.

Limitations
It is important to realize that unit testing will not catch every error in the program. By definition, it only tests the functionality of the units themselves. Therefore, it will not catch integration errors, performance problems or other system-wide issues. In addition, it may not be trivial to anticipate all special cases of input the program unit under study may receive in reality. Unit testing is only effective if it is used in conjunction with other software testing activities.
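A minimal sketch of the "separation of interface from implementation" benefit described above: the unit under test depends on an interface, and the unit test supplies a mock object instead of a real database. All class and method names here are invented for illustration.

```python
# A minimal sketch (class and method names are invented) of the mock-object
# idea described above: the unit under test talks to an interface, and the
# unit test supplies a fake implementation instead of a real database.
import unittest

class AccountStore:
    """Interface abstracted around the database connection."""
    def balance_for(self, account_id):
        raise NotImplementedError

class OverdraftChecker:
    """Unit under test: contains the logic we want to verify in isolation."""
    def __init__(self, store: AccountStore):
        self.store = store

    def is_overdrawn(self, account_id):
        return self.store.balance_for(account_id) < 0

class FakeAccountStore(AccountStore):
    """Mock object: returns canned balances, no database involved."""
    def __init__(self, balances):
        self.balances = balances

    def balance_for(self, account_id):
        return self.balances[account_id]

class OverdraftCheckerTest(unittest.TestCase):
    def test_negative_balance_is_overdrawn(self):
        checker = OverdraftChecker(FakeAccountStore({"A-1": -25}))
        self.assertTrue(checker.is_overdrawn("A-1"))

    def test_positive_balance_is_not_overdrawn(self):
        checker = OverdraftChecker(FakeAccountStore({"A-2": 100}))
        self.assertFalse(checker.is_overdrawn("A-2"))

if __name__ == "__main__":
    unittest.main()
```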

Requirements Testing

Usage:
• To ensure that the system performs correctly.
• To ensure that correctness can be sustained for a considerable period of time.
• The system can be tested for correctness through all phases of the SDLC, but in the case of reliability the programs should be in place to make the system operational.

Objective:
• Successful implementation of user requirements.
• Correctness maintained over a considerable period of time.
• Processing of the application complies with the organization's policies and procedures.
• Secondary users' needs are fulfilled: security officer, DBA, internal auditors, record retention, comptroller.

How to Use:
• Test conditions are created. These test conditions are generalized ones, which become test cases as the SDLC progresses until the system is fully operational.
• Test conditions are more effective when created from the user's requirements.
• If test conditions are created from documents, then any errors in those documents will get incorporated into the test conditions and testing will not be able to find those errors.
• If test conditions are created from other sources (other than documents), error trapping is effective.
• A functional checklist is created.

When to Use:
• Every application should be requirement tested.
• Should start at the Requirements phase and should progress till the Operations and Maintenance phase.
• The method used to carry out requirement testing and the extent of it is important.

Example:
• Creating a test matrix to prove that the system requirements as documented are the requirements desired by the user.
• Creating a checklist to verify that the application complies with the organizational policies and procedures.

Regression Testing

Usage:
• All aspects of the system remain functional after testing.
• Change in one segment does not change the functionality of another segment.

Objective:
• Determine that system documents remain current.
• Determine that system test data and test conditions remain current.
• Determine that previously tested system functions perform properly without being affected even though changes are made in some other segment of the application system.

How to Use:
• Test cases which were used previously for the already tested segment are re-run to ensure that the results of the segment tested currently and the results of the same segment tested earlier are the same.
• Test automation is needed to carry out the test transactions (test condition execution); otherwise the process is very time consuming and tedious.
• In this case the cost/benefit of testing should be carefully evaluated; otherwise the effort spent on testing would be more and the payback would be minimal.

When to Use:
• When there is a high risk that the new changes may affect the unchanged areas of the application system.
• In the development process: regression testing should be carried out after the pre-determined changes are incorporated in the application system.
• In the maintenance phase: regression testing should be carried out if there is a high risk that loss may occur when changes are made to the system.

Example:
• Re-running previously conducted tests to ensure that the unchanged portion of the system functions properly.
• Reviewing previously prepared system documents (manuals) to ensure that they are not affected after changes are made to the application system.

Disadvantage:
• Time consuming and tedious if test automation is not done.

Error Handling Testing

Usage:
• It determines the ability of the application system to process incorrect transactions properly.
• Errors encompass all unexpected conditions.
• In some systems approximately 50% of the programming effort will be devoted to handling error conditions.

Objective:
• Determine that the application system recognizes all expected error conditions.
• Determine that accountability for processing errors has been assigned and that procedures provide a high probability that errors will be properly corrected.
• Determine that during the correction process reasonable control is maintained over errors.

How to Use:
• A group of knowledgeable people is required to anticipate what can go wrong in the application system.
• It is needed that all the application-knowledgeable people assemble to integrate their knowledge of the user area, auditing and error tracking.
• Then logical test error conditions should be created based on this assimilated information.

When to Use:
• Throughout the SDLC.
• Impact from errors should be identified and should be corrected to reduce the errors to an acceptable level.
• Used to assist in the error management process of system development and maintenance.

Example:
• Create a set of erroneous transactions and enter them into the application system, then find out whether the system is able to identify the problems.
• Using iterative testing, enter transactions and trap errors. Correct them. Then enter transactions with errors which were not present in the system earlier.
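The first example above (entering a set of erroneous transactions and checking that the system identifies them) might look like the following sketch. The transaction format and validation rules are invented for illustration only.

```python
# A rough sketch of the example above: feed a set of deliberately erroneous
# transactions to the system and confirm each one is rejected with an error.
# The transaction format and validation rules here are invented for illustration.

def process_transaction(txn):
    """Stand-in for the application's entry point; returns an error message or 'OK'."""
    if "account" not in txn or not txn["account"]:
        return "ERROR: missing account"
    if not isinstance(txn.get("amount"), (int, float)):
        return "ERROR: non-numeric amount"
    if txn["amount"] <= 0:
        return "ERROR: amount must be positive"
    return "OK"

erroneous_transactions = [
    {"amount": 100},                          # missing account
    {"account": "A-1", "amount": "ten"},      # non-numeric amount
    {"account": "A-1", "amount": -5},         # negative amount
]

for txn in erroneous_transactions:
    result = process_transaction(txn)
    assert result.startswith("ERROR"), f"bad transaction was accepted: {txn}"
    print(f"{txn} -> {result}")
```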

Manual Support Testing

Usage:
• It involves testing of all the functions performed by people while preparing the data and using this data from the automated system.

Objective:
• Verify that manual support documents and procedures are correct.
• Determine that manual support responsibility is correct.
• Determine that manual support people are adequately trained.
• Determine that the manual support and the automated segment are properly interfaced.

How to Use:
• The process is evaluated in all segments of the SDLC.
• Execution of the tests can be done in conjunction with normal system testing. Instead of preparing, executing and entering actual test transactions, the clerical and supervisory personnel can use the results of processing from the application system.
• To test people, it requires testing the interface between the people and the application system.
• Users can be provided a series of test conditions and then asked to respond to those conditions. Conducted in this manner, manual support testing is like an examination in which the users are asked to obtain the answer from the procedures and manuals available to them.

When to Use:
• Verification that manual systems function properly should be conducted throughout the SDLC.
• Should not be done at later stages of the SDLC.
• Best done at the installation stage, so that the clerical people do not get used to the actual system just before the system goes to production.

Example:
• Provide input personnel with the type of information they would normally receive from their customers and then have them transcribe that information and enter it in the computer.

Intersystem Testing

Usage:
• To ensure that the interconnection between applications functions correctly.

Objective:
• Determine that proper parameters and data are correctly passed between the applications.
• Documentation for the involved systems is correct and accurate.
• Ensure proper timing and coordination of functions exists between the application systems.

How to Use:
• Operations of multiple systems are tested.
• Multiple systems are run from one another to check that they are acceptable and processed properly.

When to Use:
• When there is a change in parameters in an application system.
• The parameters which are erroneous, and the risk associated with such parameters, would decide the extent of testing and the type of testing.
• Intersystem parameters would be checked / verified after the change or the new application is placed in production.

Example:
• Develop a test transaction set in one application and pass it to another system to verify the processing.
• Entering test transactions in a live production environment and then using an integrated test facility to check the processing from one system to another.
• Verifying new changes of the parameters in the systems which are being tested.

Disadvantage:
• Time consuming and tedious if test automation is not done.
• Cost may be expensive if the system is run several times iteratively.

Control Testing

Usage:
• Control is a management tool to ensure that processing is performed in accordance with what management desires, or the intents of management.

Objective:
• Accurate and complete data.
• Authorized transactions.
• Maintenance of an adequate audit trail of information.
• Efficient, effective and economical process.
• Process meeting the needs of the user.

How to Use:
• To test controls, risks must be identified.
• Develop a risk matrix which identifies the risks, the controls, and the segment within the application system in which the control resides.
• Testers should have a negative approach, i.e. should determine or anticipate what can go wrong in the application system.

When to Use:
• Should be tested along with other system tests.

Example:
• File reconciliation procedures work.
• Manual controls are in place.

Parallel Testing

Usage:
• To ensure that the processing of the new application (new version) is consistent with respect to the processing of the previous application version.

Objective:
• Conducting redundant processing to ensure that the new version or application performs correctly.
• Demonstrating consistency and inconsistency between 2 versions of the application.

How to Use:
• The same input data should be run through 2 versions of the same application system.

• Parallel testing can be done with the whole system or part of the system (segment).

When to Use:
• When there is uncertainty regarding the correctness of processing of the new application, where the new and old versions are similar.
• In financial applications like banking, where there are many similar applications, the processing can be verified for the old and new versions through parallel testing.

Example:
• Operating a new and an old version of a payroll system to determine that the paychecks from both systems are reconcilable.
• Running the old version of the application to ensure that the functions of the old system are working fine with respect to the problems encountered in the new system.

Volume Testing
Whichever title you choose (for us, "volume test"), here we are talking about realistically exercising an application in order to measure the service delivered to users at different levels of usage. We are particularly interested in its behavior when the maximum number of users are concurrently active and when the database contains the greatest data volume. The purpose of volume testing is to find weaknesses in the system with respect to its handling of large amounts of data during extended time periods.

The creation of a volume test environment requires considerable effort. It is essential that the correct level of complexity exists in terms of the data within the database and the range of transactions and data used by the scripted users, if the tests are to reliably reflect the to-be production environment. Once the test environment is built, it must be fully utilised. Volume tests offer much more than simple service delivery measurement. The exercise should seek to answer the following questions:
• What service level can be guaranteed? How can it be specified and monitored?
• Are changes in user behaviour likely? What impact will such changes have on resource consumption and service delivery?
• Which transactions/processes are resource hungry in relation to their tasks?
• What are the resource bottlenecks? Can they be addressed?
• How much spare capacity is there?

Stress Testing
The purpose of stress testing is to find defects in the system's capacity to handle large numbers of transactions during peak periods. For example, a script might require users to log in and proceed with their daily activities while, at the same time, a series of workstations emulating a large number of other systems are running recorded scripts that add, update, or delete from the database.

Performance Testing
System performance is generally assessed in terms of response time and throughput rates under differing processing and configuration conditions. To attack performance problems, several questions should be asked first:
• How much application logic should be remotely executed?
• How much updating should be done to the database server over the network from the client workstation?
• How much data should be sent to each in each transaction?
According to Hamilton [10], performance problems are most often the result of the client or server being configured inappropriately. The best strategy for improving client-server performance is a three-step process [11]. First, execute controlled performance tests that collect data about volume, stress, and loading. Second, analyze the collected data. Third, examine and tune the database queries and, if necessary, provide temporary data storage on the client while the application is executing.
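Step one of the strategy above (controlled performance tests that collect data) could be sketched as follows. The "transaction" here is simulated with a short sleep; in a real test it would drive the client/server application, and the concurrency levels chosen are arbitrary.

```python
# A minimal sketch of step one above: run controlled tests and collect response
# times at increasing levels of concurrency. The transaction is simulated with
# a short sleep; in a real test it would call the application under test.
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    start = time.perf_counter()
    time.sleep(0.05)                      # stand-in for a real request/response
    return time.perf_counter() - start

def measure(concurrent_users, transactions_per_user=10):
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        timings = list(pool.map(lambda _: transaction(),
                                range(concurrent_users * transactions_per_user)))
    return sum(timings) / len(timings), max(timings)

if __name__ == "__main__":
    for users in (1, 5, 10, 20):
        avg, worst = measure(users)
        print(f"{users:2d} concurrent users: avg {avg * 1000:.1f} ms, worst {worst * 1000:.1f} ms")
```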

Testing Tools
Win Runner
Introduction

WinRunner is Mercury Interactive's enterprise functional testing tool. It is used to quickly create and run sophisticated automated tests on your application. WinRunner helps you automate the testing process, from test development to execution. You create adaptable and reusable test scripts that challenge the functionality of your application. Prior to a software release, you can run these tests in a single overnight run, enabling you to detect defects and ensure superior software quality.

Win Runner
What's New in WinRunner 7.5?

Automatic Recovery
The Recovery Manager provides an easy-to-use wizard that guides you through the process of defining a recovery scenario. You can specify one or more operations that enable the test run to continue after an exception event occurs. This functionality is especially useful during unattended test runs, when errors or crashes could interrupt the testing process until manual intervention occurs.

Silent Installation
Now you can install WinRunner in an unattended mode using previously recorded installation preferences. This feature is especially beneficial for those who use enterprise software management products or any automated software distribution mechanisms.

Enhanced Integration with TestDirector
WinRunner works with both TestDirector 6.0, which is client/server-based, and TestDirector 7.x, which is Web-based. When reporting defects from WinRunner's test results window, basic information about the test and any checkpoints can be automatically populated in TestDirector's defect form. WinRunner now supports version control, which enables updating and revising test scripts while maintaining old versions of each test.

Support for Terminal Servers
Support for Citrix and Microsoft Terminal Servers makes it possible to open several window clients and run WinRunner on each client as a single user. Also, this can be used with LoadRunner to run multiple WinRunner Vusers.

Support for More Environments
WinRunner 7.5 includes support for Internet Explorer 6.x and Netscape 6.x, Windows XP and Sybase's PowerBuilder 8, in addition to the 30+ environments already supported by WinRunner 7.

WinRunner provides the most powerful, productive and cost-effective solution for verifying enterprise application functionality. For more information on WinRunner, contact a Mercury Interactive local representative for pricing, evaluation, and distribution information.

WinRunner (Features & Benefits)

Test functionality using multiple data combinations in a single test
WinRunner's DataDriver Wizard eliminates programming to automate testing for large volumes of data. This saves testers significant amounts of time preparing scripts and allows for more thorough testing.

Significantly increase power and flexibility of tests without any programming
The Function Generator presents a quick and error-free way to design tests and enhance scripts without any programming knowledge. Testers can simply point at a GUI object, and WinRunner will examine it, determine its class and suggest an appropriate function to be used.

Use multiple verification types to ensure sound functionality
WinRunner provides checkpoints for text, GUI, bitmaps, URL links and the database, allowing testers to compare expected and actual outcomes and identify potential problems with numerous GUI objects and their functionality.

Verify data integrity in your back-end database
Built-in Database Verification confirms values stored in the database and ensures transaction accuracy and the data integrity of records that have been updated, deleted and added.

View, store and verify at a glance every attribute of tested objects
WinRunner's GUI Spy automatically identifies, records and displays the properties of standard GUI objects, ActiveX controls, as well as Java objects and methods. This ensures that every object in the user interface is recognized by the script and can be tested.

Maintain tests and build reusable scripts
The GUI map provides a centralized object repository, allowing testers to verify and modify any tested object. These changes are then automatically propagated to all appropriate scripts, eliminating the need to build new scripts each time the application is modified.

Test multiple environments with a single application
WinRunner supports more than 30 environments, including Web, Java, Visual Basic, etc. In addition, it provides targeted solutions for such leading ERP/CRM applications as SAP, Siebel, PeopleSoft and a number of others.

Win Runner
NAVIGATIONAL STEPS FOR WINRUNNER LAB EXERCISES

Using Rapid Test Script Wizard

Start->Program Files->Winrunner->winrunner

• Select the Rapid Test Script Wizard (or) Create->Rapid Test Script Wizard.
• Click the Next button of the "Welcome to Script Wizard" screen.
• Select the hand icon, click on the application window, and click the Next button.
• Select the tests and click the Next button.
• Select navigation controls and click the Next button.
• Set the learning flow (Express or Comprehensive) and click the Learn button.
• Select start application Yes or No, then click the Next button.
• Save the startup script and GUI map files, then click the Next button.
• Save the selected tests, then click the Next button.
• Click the OK button.
• The script will be generated; then run the scripts: Run->Run from Top.
• Find the results of each script and select Tools->Text Report in the WinRunner test results.

Using GUI-Map Configuration Tool:

• Open an application.
• Select Tools->GUI Map Configuration; a window pops up.
• Click ADD button; click on the hand icon.
• Click on the object which is to be configured. A user-defined class for that object is added to the list.
• Select the user-defined class you added and press the 'Configure' button.
• Mapped to Class: select a corresponding standard class from the combo box.
• You can move the properties from Available Properties to Learned Properties by selecting the Insert button.
• Select the Selector and Recording methods.
• Click OK button.
• Now you will observe WinRunner identifying the configured objects.

Using Record-Context Sensitive mode:

• Create->Record Context Sensitive.
• Select Start->Program Files->Accessories->Calculator.
• Do some actions on the application.
• Stop recording.
• Run->Run from Top; press 'OK'.
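A context-sensitive recording of a few Calculator clicks typically produces TSL along these lines (a sketch only; the logical names depend on what WinRunner learns into the GUI map):

set_window ("Calculator", 5);    # make the Calculator window active
button_press ("1");              # context-sensitive: buttons are addressed by logical name
button_press ("+");
button_press ("2");
button_press ("=");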

Using Record-Analog Mode:

• Create->Insert Function->From Function Generator.
• Function name: select 'invoke_application' from the combo box.
• Click Args button; File: mspaint.
• Click on the 'Paste' button; click on the 'Execute' button to open the application; finally click on 'Close'.
• Create->Record-Analog.
• Draw some picture in the Paint Brush file.
• Stop recording.
• Run->Run from Top; press 'OK'.
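The 'invoke application' step pasted from the Function Generator corresponds to the TSL invoke_application function. A hedged sketch (the working directory and the SW_SHOW show mode are assumptions):

# launch Paint before switching to analog recording
invoke_application ("mspaint", "", "C:\\", SW_SHOW);
# analog recording then captures raw mouse movement (move_locator_track, mtype, etc.),
# so the drawing is replayed exactly as it was performed, by screen coordinates.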

GUI CHECK POINTS-Single Property Check:

• Create->Insert Function->From Function Generator (Function name: invoke_application; File: Flight 1a).
• Click on 'Paste', click on 'Execute' and close the window.
• Create->Record Context Sensitive.
• Click on some button whose property is to be checked.
• Do some operations and stop recording.
• Create->GUI Check Point->For Single Property.
• Run->Run from Top. Press 'OK'; it displays the results window.
• Double click on the result statement. If the test fails, you can see the difference.

GUI CHECK POINTS-For Object/Window Property:

• Create->Insert Function->From Function Generator (Function name: invoke_application; File: Flight 1a).
• Click on 'Paste', click on 'Execute' and close the window.
• Create->Record Context Sensitive.
• Click on some button whose property is to be checked.
• Do some operations and stop recording.
• Create->GUI Check Point->Object/Window Property.
• Run->Run from Top. Press 'OK'; it displays the results window.
• Double click on the result statement. It shows the expected value and actual value window.

GUI CHECK POINTS-For Multiple Objects:

• Repeat the invoke, record and stop-recording steps above.
• Create->GUI Check Point->For Multiple Objects; click on the Add button.
• Click on a few objects and right-click to quit.
• Select each object and select the corresponding properties to be checked for that object; click 'OK'.
• Run->Run from Top. It shows the expected value and actual value window.

BITMAP CHECK POINT: For Object/Window:

• Invoke the Flight 1a application; enter the Username and Password and click the 'OK' button.
• Open the Order in the Flight Reservation application; select File->Fax Order and enter the Fax Number and Signature.
• Create->Record Context Sensitive; Create->Bitmap Check->For Object/Window and select the object; stop recording.
• Press the 'Cancel' button and close the Flight 1a application.
• Then open Fax Order in the Flight Reservation application again and Run->Run from Top.
• The test fails and you can see the difference; it displays the results.
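Behind the menu steps in the checkpoint exercises above, WinRunner inserts checkpoint statements such as the following (a sketch; the logical names and the checklist and expected-results names, which the wizard normally generates, are assumptions here):

set_window ("Flight Reservation", 10);
# single property of one object
obj_check_gui ("Insert Order", "list1.ckl", "gui1", 1);
# several properties of several objects in the same window
win_check_gui ("Flight Reservation", "list2.ckl", "gui2", 1);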

BITMAP CHECK POINT: For Screen Area:

• Open a new Paint Brush file.
• Create->Bitmap Check Point->From Screen Area; the Paint file pops up.
• Select an image with the cross-hair pointer.
• Do a slight modification in the paint file (you can also run on the same paint file).
• Run->Run from Top.
• The test fails and you can see the difference of images.

DATABASE CHECK POINTS: Using Default Check (for MS-Access only):

• Create->Database Check Point->Default Check.
• Select the Specify SQL Statement check box and click Next button.
• Click Create button.
• Type a new DSN name and click New button.
• Then select a driver for which you want to set up a database and double click that driver.
• Then select the Browse button, retype the same DSN name and click Save button.
• Click Next button and click Finish button.
• Select the Database button and set the path of your database name.
• Click the 'OK' button and then click the 'OK' button of your DSN window.
• Type the SQL query in the SQL box, for example: select Orders.Order_Number, Flights.Flight_Number from Orders, Flights where Flights.Flight_Number = Orders.Flight_Number
• Then click Finish button; the script will be generated.
• Run->Run from Top: gives your results.

Note: the same process applies for a Custom Check Point.

Runtime Record Check Point:

• Create->Database Check Point->Runtime Record Check; follow the same DSN setup as above and type the query of the two related tables in the SQL box (for example the Orders/Flights query above); click Next button.
• Select the hand icon button, select Flight No in your application and click Next button.
• Select the hand icon button, select Order No in your application and click Next button.
• Select any one of the following check boxes: 1. One match record, 2. One or more match records, 3. No match record.
• Select Finish button; the script will be generated.
• Open the WinRunner window, Create->Record Context Sensitive; insert information for a new Order and click on the "Insert Order" button; after inserting, click on the "Delete" button; stop recording and save the file.
• Run->Run from Top: gives your results.
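The bitmap and database checkpoints above appear in the script as calls like these (a sketch; the capture names Img1/Img2 are normally generated by WinRunner, and the logical names and checklist name are assumptions):

obj_check_bitmap ("Agent Signature:", "Img1", 10);                    # object/window bitmap checkpoint
win_check_bitmap ("untitled - Paint", "Img2", 10, 25, 25, 300, 200);  # screen-area bitmap checkpoint (x, y, width, height)
db_check ("list1.cdl", "dbvf1");                                      # default database checkpoint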

Win Runner Synchronization Point: For Obj/Win Properties:

Without Synchronization:
• Settings->General Options->click on the "Run" tab; set the "Timeout for checkpoints & CS statements" value to 10000.
• Open Start->Programs->WinRunner->Sample Applications->Flight1A; Create->Record Context Sensitive.
• Insert information for a new order and click on the "Insert Order" button; after inserting, click on the "Delete Order" button.
• Stop recording and save the file.
• Run->Run from Top: the test displays an "Error Message" that the "Delete" button is disabled.

With Synchronization:
• Keep the Timeout value at 1000 only.
• Go to the test script file; place the insert pointer just before the "Delete Order" button press statement.
• Create->Synchronization->For Obj/Window Property; click on the "Delete Order" button and select the enable property. (Note: keep the "Timeout value" at 10000.) It inserts the synch statement.
• Run and see the results.

For Obj/Win Bitmap:
• Create->Record Context Sensitive. (Make sure the flight reservation list is empty.)
• Click on the "Date of Flight" text box; place the insert pointer just before the data is inserted into "Date of Flight".
• Create->Synchronization->For Obj/Win Bitmap is selected.
• Run->Run from Top and see the results.

Get Text: From Screen Area: (Note: checking whether the Order number increases whenever an order is created)
• Open Flight1A; Analysis->Graphs (keep it open).
• Create->Get Text->From Screen Area; capture the number of tickets sold; right click and close the graph.
• Insert a new order, with the insert pointer placed after the "Insert Order" button press.
• Open the graph again (Analysis->Graphs); go to the WinRunner window; Create->Get Text->From Screen Area; capture the number of tickets sold, right click and close the graph.
• Save the script file.
• Add the following script (note: change "text" to text1 and text2 for each get-text statement):
  if (text1 == text2)
      report_msg ("correct " & text1);
  else
      report_msg ("incorrect " & text2);
• Run->Run from Top to see the results; the results are displayed.

Get Text: For Object/Window:
• Open a "Calc" application in two windows (assuming the two are two versions).
• Create->Get Text->For Obj/Window.
• Click on some button in one window.
• Stop recording.
• Repeat steps 1 to 4 to capture the text of the same object from the other "Calc" application.
• Go to the TSL script and add the following comparison, for example using tl_step:
  if (text2 == text1)
      tl_step ("text comparison", 0, "updated");
  else
      tl_step ("text comparison", 1, "update property");
• Run->Run from Top to see the results.
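The synchronization point inserted before the "Delete Order" press in the exercises above corresponds to obj_wait_info in TSL; a minimal sketch, assuming the Flight Reservation logical names:

button_press ("Insert Order");
# wait up to 10 seconds for the Delete Order button to become enabled
obj_wait_info ("Delete Order", "enabled", 1, 10);
button_press ("Delete Order");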

Using GUI Spy:

Using the GUI Spy, you can view and verify the properties of any GUI object in the selected application.
• Tools->GUI Spy…
• Select Spy On (select Object or Window).
• Select the hand icon button.
• Point to the Object or Window and press Ctrl_L + F3.
• You can view and verify the properties.

Using Virtual Object Wizard:

Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name.
• Tools->Virtual Object Wizard.
• Click Next button.
• Select a standard class object for the virtual object, e.g. class: push_button.
• Click Next button.
• Click the Mark Object button.
• Drag the cursor to mark the area of the virtual object.
• Click Next button.
• Assign the logical name; this name will appear in the test script when you record on the object.
• Select the Yes or No check box and click Finish button.
• Go to the WinRunner window and Create->Start Recording.
• Do some operations and stop recording.

Using GUI Map Editor:

Using the GUI Map Editor, you can view and modify the properties of any GUI object in the selected application.

To modify an object's logical name in a GUI map file:
• Tools->GUI Map Editor.
• Select the Learn button.
• Select the application; a WinRunner message box asks "do you want to learn all objects within the window"; select the 'Yes' button.
• Select a particular object and select the Modify button.
• Change the logical name and click the 'OK' button.
• Save the file.

To find an object in a GUI map file:
• Choose Tools > GUI Map Editor.
• Choose View > GUI Files.
• Choose File > Open to load the GUI map file.
• Click Find; the mouse pointer turns into a pointing hand.
• Click the object in the application being tested; the object is highlighted in the GUI map file.

To highlight an object in the application:
• Choose Tools > GUI Map Editor.
• Choose View > GUI Files.
• Choose File > Open to load the GUI map file.
• Select the object in the GUI map file.
• Click Show; the object is highlighted in the application.
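For orientation, a GUI map file simply pairs each logical name with a physical description. An entry looks roughly like the following (illustrative only; the exact properties WinRunner learns vary by object class and application):

Insert Order:
{
 class: push_button,
 label: "Insert Order"
}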

Win Runner Merge the GUI Files:

Manual Merge:
• Tools->Merge GUI Map Files. A WinRunner message box informs you that all open GUI maps will be closed and all unsaved changes will be discarded; click the 'OK' button.
• Select Manual Merge as the Merge Type. Manual Merge enables you to manually add GUI objects from the source files to the target file.
• To specify the Target GUI map file, click the browse button and select the GUI map file.
• To specify the Source GUI map file, click the Add button and select the source GUI map file.
• Click the 'OK' button; the GUI Map File Manual Merge Tool opens.
• Select objects and move them from the Source file to the Target file.
• Close the GUI Map File Manual Merge Tool.

Auto Merge:
• Tools->Merge GUI Map Files. A WinRunner message box informs you that all open GUI maps will be closed and all unsaved changes will be discarded; click the 'OK' button.
• Select Auto Merge as the Merge Type.
• To specify the Target GUI map file, click the browse button and select the GUI map file.
• To specify the Source GUI map file, click the Add button and select the source GUI map file.
• Click the 'OK' button. If you chose Auto Merge and the source GUI map files are merged successfully without conflicts, a message confirms the merge.

Data Driver Wizard:
• Start->Programs->WinRunner->Sample Applications->Flight 1A.
• Open the Flight Reservation application.
• Go to the WinRunner window; Create->Start Recording.
• Select File->New Order, insert the fields and click the Insert Order button.
• Tools->Data Table. By default the two column names are Noname1 and Noname2. Enter different customer names under one column and tickets under the other.
• Tools->Data Driver Wizard.
• Click Next button and select the data table.
• Select "Parameterize the test" and select the Line by Line check box; click Next button.
• Parameterize each specific value with the column names of the table; repeat for all values and finally click Finish button (a TSL sketch of the resulting parameterized script follows the database steps below).
• Run->Run from Top and view the results.

Manually Retrieve the Records from the Database:
• db_connect ("query1", "DSN=Flight32");
• db_execute_query ("query1", "select * from Orders", rec);
• db_get_field_value ("query1", "#0", "#0");
• db_get_headers ("query1", field_num, headers);
• db_get_row ("query1", 1, row_con);
• db_write_records ("query1", "c:\\str.txt", TRUE, 10);
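The Data Driver Wizard parameterizes the script with WinRunner's ddt_* data table functions. A minimal data-driven sketch, assuming the default table name and the Noname1/Noname2 columns mentioned above (the logical names are again assumptions):

table = "default.xls";                     # data table created via Tools->Data Table
rc = ddt_open (table, DDT_MODE_READ);      # open the table for reading
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open the data table.");
ddt_get_row_count (table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i);                # make row i the active row
    set_window ("Flight Reservation", 10);
    edit_set ("Name:", ddt_val (table, "Noname1"));      # customer name column
    edit_set ("Tickets:", ddt_val (table, "Noname2"));   # tickets column
    button_press ("Insert Order");
}
ddt_close (table);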

Win Runner TSL SCRIPTS FOR WEB TESTING

1. web_browser_invoke ( browser, site );  // invokes the browser and opens a specified site. browser: the name of the browser (IE or NETSCAPE); site: the address of the site.
2. web_cursor_to_image ( image, x, y );  // moves the cursor to an image on a page. image: the logical name of the image; x, y: the x- and y-coordinates of the mouse pointer when moved to the image.
3. web_cursor_to_label ( label, x, y );  // moves the cursor to a label on a page. label: the name of the label; x, y: the x- and y-coordinates of the mouse pointer when moved to the label.
4. web_cursor_to_link ( link, x, y );  // moves the cursor to a link on a page. link: the name of the link; x, y: the x- and y-coordinates of the mouse pointer when moved to the link.
5. web_cursor_to_obj ( object, x, y );  // moves the cursor to an object on a page. object: the logical name of the object; x, y: the x- and y-coordinates of the mouse pointer when moved to the object.
6. web_event ( object, event_name [, x, y] );  // runs an event on a specified object. object: the logical name of the recorded object; event_name: the name of an event handler.
7. web_file_browse ( object );  // clicks a browse button. object: a file-type object.
8. web_file_set ( object, value );  // sets the text value in a file-type object. object: a file-type object; value: a text string.
9. web_find_text ( frame, text_to_find, result_array [, text_before, text_after, index, show] );  // returns the location of text within a frame.
10. web_frame_get_text ( frame, out_text [, text_before, text_after, index] );  // retrieves the text content of a frame.

11. web_frame_get_text_count ( frame, regex_text_to_find, count );  // returns the number of occurrences of a regular expression in a frame.
12. web_frame_text_exists ( frame, text_to_find [, text_before, text_after] );  // returns a text value if it is found in a frame.
13. web_get_run_event_mode ( out_mode );  // returns the current run mode. out_mode: the run mode in use; if TRUE, the test runs by events, and if FALSE, the test runs by mouse operations.
14. web_get_timeout ( out_timeout );  // returns the maximum time that WinRunner waits for a response from the web. out_timeout: the maximum interval in seconds.
15. web_image_click ( image, x, y );  // clicks a hypergraphic link or an image. image: the logical name of the image; x, y: the x- and y-coordinates of the mouse pointer when clicked on the hypergraphic link or image.
16. web_label_click ( label );  // clicks the specified label. label: the name of the label.
17. web_link_click ( link );  // clicks a hypertext link. link: the name of the link.
18. web_link_valid ( name, valid );  // checks whether the URL of a link is valid (not broken). name: the logical name of the link; valid: the status of the link, valid (TRUE) or invalid (FALSE).
19. web_obj_click ( object, x, y );  // clicks an object on a page. object: the logical name of the object; x, y: the x- and y-coordinates of the mouse pointer when clicked on the object.
20. web_obj_get_child_item ( object, table_row, table_column, object_type, index, out_object );  // returns the description of the children in an object.
21. web_obj_get_child_item_count ( object, table_row, table_column, object_type, object_count );  // returns the count of the children in an object.
22. web_obj_get_info ( object, property_name, property_value );  // returns the value of an object property.
23. web_obj_get_text ( object, table_row, table_column, out_text [, text_before, text_after, index] );  // returns a text string from an object.

24. web_obj_get_text_count ( object, table_row, table_column, regex_text_to_find, count );  // returns the number of occurrences of a regular expression in an object.
25. web_obj_text_exists ( object, table_row, table_column, text_to_find [, text_before, text_after] );  // returns a text value if it is found in an object.
26. web_restore_event_default ( );  // resets all events to their default settings.
27. web_set_event ( class, event_name, event_type, event_status );  // sets the event status.
28. web_set_run_event_mode ( mode );  // sets the event run mode.
29. web_set_timeout ( timeout );  // sets the maximum time WinRunner waits for a response from the web.
30. web_set_tooltip_color ( fg_color, bg_color );  // sets the colors of the WebTest ToolTip.
31. web_sync ( timeout );  // waits for the navigation of a frame to be completed.
32. web_url_valid ( URL, valid );  // checks whether a URL is valid.
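A short sketch combining a few of the functions above (the URL, frame and link names are placeholders; in practice the logical names come from the GUI map):

# open the browser on a site, wait for it to load, then follow a link
web_browser_invoke (IE, "http://www.example.com");
web_sync (60);                                  # wait up to 60 seconds for navigation to finish
web_link_valid ("Mercury Tours", valid);        # valid is set to TRUE if the link is not broken
if (valid)
    web_link_click ("Mercury Tours");
web_sync (60);
web_frame_get_text ("main", page_text);         # pull the frame text for later checks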

Load Runner - Introduction

LoadRunner is divided up into three smaller applications:

The Virtual User Generator allows us to determine what actions we would like our Vusers, or virtual users, to perform within the application. We create scripts that generate a series of actions, such as logging on, navigating through the application, and exiting the program.

The Controller takes the scripts that we have made and runs them through a schedule that we set up. We tell the Controller how many Vusers to activate, when to activate them, and how to group the Vusers and keep track of them.

The Results and Analysis program gives us all the results of the load test in various forms. It allows us to see summaries of data, as well as the details of the load test for pinpointing problems or bottlenecks.

LoadRunner 7 Features & Benefits

New Tuning Module Add-In: The LoadRunner Tuning Module allows customers to isolate and resolve system performance bottlenecks. The Tuning Module provides component test libraries and a knowledgebase that help users isolate and resolve performance bottlenecks.

WAN Emulation Support: This powerful feature set enables LoadRunner to quickly point out the effect of the wide area network (WAN) on application reliability, performance, and response time. Provided through technology from Shunra Software, this WAN emulation capability introduces testing for bandwidth limits, latency, network errors, and more to LoadRunner.

JBuilder for Java IDE Add-in: LoadRunner now works with Borland's JBuilder integrated development environment (IDE) to create powerful support for J2EE applications. This add-in enables LoadRunner users who create J2EE applications and services with JBuilder to create virtual users based on source code within a JBuilder project.

Sun ONE Studio4 IDE Add-in: Mercury Interactive and Sun Microsystems have worked together to integrate LoadRunner with the Sun ONE Studio4 add-in.

Goal-Oriented Testing with AutoLoad: The new AutoLoad technology allows you to pre-define your testing objectives beyond the number of concurrent users to streamline the testing process.

Native ICA Support for Citrix MetaFrame: LoadRunner now supports Citrix's Independent Computing Architecture (ICA) for the testing of applications being deployed with Citrix MetaFrame. This support, jointly developed with Citrix, is the first native ICA load testing solution of its kind.

Data Wizard for Data-Driven Testing: LoadRunner's Data Wizard enables you to quickly create data-driven tests and eliminate manual data manipulation. It connects directly to back-end database servers and imports desired data into test scripts.

Enterprise Java Bean Testing: By testing EJB components with LoadRunner, you can identify and solve problems during the early stages of application development. As a result, you can optimize performance before clients have been developed and thereby save time and resources.

Web Transaction Breakdown Monitor for Isolating Performance Problems: Now you can more efficiently isolate performance problems within your architecture. LoadRunner's Web Transaction Breakdown Monitor splits end-to-end transaction response times for the client, network, and server and provides powerful drill-down capabilities.

XML Support: With LoadRunner's XML support, you can quickly and easily view and manipulate XML data within the test scripts.

Hosted Virtual Users: How it Works

LoadRunner Hosted Virtual Users complements in-house load testing tools and allows companies to load test their Web-based applications from outside the firewall using Mercury Interactive's infrastructure. Customers begin by using LoadRunner Hosted Virtual Users' simple Web interface to schedule tests and reserve machines on Mercury Interactive's load farm. At the scheduled time, they select the recorded scripts to be uploaded and start running the tests on the host machines*.

These scripts will emulate the behavior of real users on the application and generate load on the system. The interface to LoadRunner Hosted Virtual Users enables test teams to control the load test and view tests in progress. Through LoadRunner Hosted Virtual Users' Web interface, testers can view real-time performance metrics, such as hits per second, throughput, transaction response times and hardware resource usage (e.g., CPU and memory levels), as well as views of the individual machines generating the load. They also can view performance metrics gathered by Mercury Interactive's server monitors and correlate these with end-user performance data to diagnose bottlenecks on the back end. When the test is complete, testers can analyze results online, as well as download data for further analysis; the LoadRunner analysis pack can be downloaded for free. *Customers who do not own LoadRunner can download the VuGen component for free to record their scripts.

Once the application has been stress tested using LoadRunner, organizations can use LoadRunner Hosted Virtual Users at any time in the application lifecycle to verify performance and fine-tune systems for greater efficiency. By combining LoadRunner Hosted Virtual Users with Mercury Interactive's LoadRunner or another in-house load testing tool, operations groups can thoroughly load test their Web applications and Internet infrastructures from inside and outside the firewall. LoadRunner Hosted Virtual Users gives testers complete control of the testing process while providing critical real-time performance information.

Features and Benefits:
• Provides pre- and post-deployment testing. Organizations can generate real-user loads over the Internet to stress their Web-based applications at any time, from anywhere, no matter their locations.
• Complements in-house solutions to provide comprehensive load testing, scalability and availability. The application under test only needs to be accessible via the Web, so organizations do not need to invest in additional hardware, software or bandwidth to increase their testing coverage.
• Gives customers complete control over all load testing. Testing groups create the scripts, run the tests and perform their own analyses. They can perform testing at their convenience and easily access all performance data to quickly diagnose performance problems.
• Provides access to Mercury Interactive's extensive load testing infrastructure, which is available 24x7 and consists of load farms located worldwide.

How the Monitors Work

To minimize the impact of the monitoring on the system under test, LoadRunner enables IT groups to extract data without having to install intrusive capture agents on the monitored servers. Setup and installation of the monitors is therefore trivial. Since all the monitoring information is sampled at a low frequency (typically 1 to 5 seconds), there is only a negligible effect on the servers. LoadRunner can be used to monitor the performance of the servers regardless of the hardware and operating system on which they run.

Supported Monitors

Astra LoadTest and LoadRunner support monitors for the following components.

Client-side Monitors

End-to-end transaction monitors: provide end-user response times, hits per second and transactions per second.

Hits per Second: The Hits per Second graph shows the number of hits on the Web server (y-axis) as a function of the elapsed time in the scenario (x-axis). This graph can display the whole scenario, or the last 60, 180, 600 or 3600 seconds. It helps you evaluate the amount of load Vusers generate, and you can compare it to the Transaction Response Time graph to see how the number of hits affects transaction performance.

Throughput: The Throughput graph shows the amount of throughput on the Web server (y-axis) during each second of the scenario run (x-axis). Throughput is measured in kilobytes and represents the amount of data that the Vusers received from the server at any given second. You can compare this graph to the Transaction Response Time graph to see how the throughput affects transaction performance.

HTTP Responses: The HTTP Responses per Second graph shows the number of HTTP status codes, which indicate the status of HTTP requests (for example, the request was successful, or the page was not found), returned from the Web server during each second of the scenario run (x-axis), grouped by status code.

Pages Downloaded per Second: The Pages Downloaded per Second graph shows the number of Web pages downloaded from the server during each second of the scenario run. Like throughput, downloaded pages per second is a representation of the amount of data that the Vusers received from the server at any given second, in terms of the number of pages downloaded.

User-defined Data Points: The User-Defined Data Points graph allows you to add your own measurements by defining a data point function in your Vuser script. Data point information is gathered each time the script executes the function or step. The x-axis represents the number of seconds elapsed since the start time of the run; the y-axis displays the average values of the recorded data point statements, so the graph shows the average value of the data points during the scenario run.

Transaction Monitors

• Transaction Response Time: The Transaction Response Time graph shows the response time of transactions in seconds (y-axis) as a function of the elapsed time in the scenario (x-axis).
• Transactions per Second (Passed): The Transactions per Second (Passed) graph shows the number of successful transactions performed per second (y-axis) as a function of the elapsed time in the scenario (x-axis).
• Transactions per Second (Failed): The Transactions per Second (Failed) graph shows the number of failed transactions per second (y-axis) as a function of the elapsed time in the scenario (x-axis).

Virtual User Status

The monitor's Runtime graph provides information about the status of the Vusers running in the current scenario on all host machines. The graph shows the number of running Vusers, while the information in the legend indicates the number of Vusers in each state. The Status field of each Vuser displays the current status of the Vuser. The following table describes each Vuser status:

• Running: The total number of Vusers currently running on all load generators.
• Ready: The number of Vusers that completed the initialization section of the script and are ready to run.
• Finished: The number of Vusers that have finished running. This includes both Vusers that passed and failed.
• Error: The number of Vusers whose execution generated an error.

Web Transaction Breakdown Graphs

• DNS Resolution: Displays the amount of time needed to resolve the DNS name to an IP address, using the closest DNS server. The DNS Lookup measurement is a good indicator of problems in DNS resolution, or problems with the DNS server.
• Connection Time: Displays the amount of time needed to establish an initial connection with the Web server hosting the specified URL. The connection measurement is a good indicator of problems along the network. It also indicates whether the server is responsive to requests.
• Time to First Buffer: Displays the amount of time that passes from the initial HTTP request (usually GET) until the first buffer is successfully received back from the Web server. The first buffer measurement is a good indicator of Web server delay as well as network latency.
• Server and Network Time: The Time to First Buffer Breakdown graph also displays each Web page component's relative server and network time (in seconds) for the period of time until the first buffer is successfully received back from the Web server. If the download time for a component is high, you can use this graph to determine whether the problem is server- or network-related.
• Receive Time: Displays the amount of time that passes until the last byte arrives from the server and the downloading is complete. The Receive measurement is a good indicator of network quality (look at the time/size ratio to calculate the receive rate).

• Client Time: Displays the average amount of time that passes while a request is delayed on the client machine due to browser think time or other client-related delays.
• Error Time: Displays the average amount of time that passes from the moment an HTTP request is sent until the moment an error message (HTTP errors only) is returned.
• SSL Handshaking Time: Displays the amount of time taken to establish an SSL connection (includes the client hello, server hello, client public key transfer, server certificate transfer, and other stages). The SSL Handshaking measurement is only applicable for HTTPS communications.
• FTP Authentication: Displays the time taken to authenticate the client. With FTP, a server must authenticate a client before it starts processing the client's commands. The FTP Authentication measurement is only applicable for FTP protocol communications.

Server Monitors

NT/UNIX/Linux monitors: Provide hardware, network and operating system performance metrics, such as CPU, memory and network throughput.

Load Runner

The following list describes the recommended objects to be monitored during a load test: ASP Server, Cache, HTTP Content Index, Internet Information Service Global, Logical Disk, Memory, Physical Disk, Processor, Server.

Load Runner ASP Server

• Debugging Requests: Number of debugging document requests.
• Errors during Script Runtime: Number of requests failed due to runtime errors.
• Errors from ASP Preprocessor: Number of requests failed due to preprocessor errors.
• Errors from Script Compilers: Number of requests failed due to script compilation errors.
• Errors/Sec: The number of errors per second.
• Memory Allocated: The total amount of memory, in bytes, currently allocated by Active Server Pages.
• Request Bytes In Total: The total size, in bytes, of all requests.
• Request Bytes Out Total: The total size, in bytes, of responses sent to clients. This does not include standard HTTP response headers.
• Request Execution Time: The number of milliseconds required to execute the most recent request.
• Request Wait Time: The number of milliseconds the most recent request was waiting in the queue.
• Requests Disconnected: The number of requests disconnected due to communication failure.
• Requests Executing: The number of requests currently executing.
• Requests Failed Total: The total number of requests failed due to errors, authorization failure and rejections.
• Requests Not Authorized: The number of requests failed due to insufficient access rights.

• Requests Succeeded: The number of requests that were executed successfully.
• Requests Timed Out: The number of requests that timed out.

Load Runner Cache

• Async Copy Reads/Sec: The frequency of reads from cache pages that involve a memory copy of the data from the cache to the application's buffer. The application will regain control immediately, even if the disk must be accessed to retrieve the page.
• Async Data Maps/Sec: The frequency with which an application uses a file system, such as NTFS or HPFS, to map a page of a file into the cache to read, because it does not wish to wait for the cache to retrieve the page if it is not in main memory.
• Async Fast Reads/Sec: The frequency of reads from cache pages that bypass the installed file system and retrieve the data directly from the cache. Normally, file I/O requests invoke the appropriate file system to retrieve data from a file; this path permits direct retrieval of cache data without file system involvement if the data is in the cache. Even if the data is not in the cache, one invocation of the file system is avoided, and the request (application program call) will not wait until the data has been retrieved from disk, but will get control immediately.
• Fast Reads/Sec: The frequency of reads from cache pages that bypass the installed file system and retrieve the data directly from the cache. Normally, file I/O requests will invoke the appropriate file system to retrieve data from a file; this path, however, permits direct retrieval of cache data without file system involvement as long as the data is in the cache. Even if the data is not in the cache, one invocation of the file system is avoided.
• Lazy Write Flushes/Sec: The frequency with which the cache's Lazy Write thread has written to disk. Lazy Writing is the process of updating the disk after the page has been changed in memory, so that the application making the change to the file does not have to wait for the disk write to be completed before proceeding. More than one page can be transferred on each write operation.

Load Runner HTTP Content Index

• %Cached Hits: Percentage of queries found in the query cache.
• %Cache Misses: Percentage of queries not found in the query cache.
• Active Queries: Current number of running queries.
• Cache Items: Number of completed queries in cache.
• Current Requests Queued: Current number of query requests queued.
• Queries per Minute: Number of queries per minute.
• Total Queries: Total number of queries run since service start.
• Total Requests Rejected: Total number of query requests rejected.

Load Runner IIS (Internet Information Services Global)

• Cache Hits %: The ratio of cache hits to all cache requests.
• Cache Misses: The total number of times a file open, directory listing or service-specific object request was not found in the cache.

• Cached File Handles: The number of open file handles cached by all of the Internet Information Services.
• Directory Listings: The number of directory listings cached by all of the Internet Information Services.
• Objects: The number of objects cached by all of the Internet Information Services. The objects include file handle tracking objects, directory listing objects and service-specific objects.
• Current Blocked Async I/O Requests: Current requests temporarily blocked due to bandwidth throttling settings.
• Measured Async I/O Bandwidth Usage: Measured bandwidth of asynchronous I/O averaged over a minute.
• Total Allowed Async I/O Requests: Total requests allowed by bandwidth throttling settings (counted since service startup).
• Total Blocked Async I/O Requests: Total requests temporarily blocked due to bandwidth throttling settings (counted since service startup).

Load Runner Logical Disk

• % Disk Read Time: The percentage of elapsed time that the selected disk drive was busy servicing read requests.
• % Disk Write Time: The percentage of elapsed time that the selected disk drive was busy servicing write requests.
• % Disk Time: The percentage of elapsed time that the selected disk drive was busy servicing read or write requests.
• % Free Space: The ratio of the free space available on the logical disk unit to the total usable space provided by the selected logical disk drive.
• Avg. Disk Bytes/Read: The average number of bytes transferred from the disk during read operations.
• Avg. Disk Bytes/Transfer: The average number of bytes transferred to or from the disk during write or read operations.

Load Runner Memory

• % Committed Bytes in Use: The ratio of the Committed Bytes to the Commit Limit. This represents the amount of available virtual memory in use. Note that the Commit Limit may change if the paging file is extended. This is an instantaneous value, not an average.
• Available Bytes: Displays the size of the virtual memory currently on the Zeroed, Free and Standby lists. Zeroed and Free memory is ready for use, with Zeroed memory cleared to zeros. Standby memory is memory removed from a process's Working Set but still available. Notice that this is an instantaneous count, not an average over the time interval.
• Cache Bytes: Measures the number of bytes currently in use by the system cache. The system cache is used to buffer data retrieved from disk or LAN. In addition, the system cache uses memory not in use by active processes in the computer.
• Cache Bytes Peak: Measures the maximum number of bytes used by the system cache. The system cache is used to buffer data retrieved from disk or LAN. In addition, the system cache uses memory not in use by active processes in the computer.

• Cache Faults/Sec: Cache faults occur whenever the cache manager does not find a file's page in the immediate cache and must ask the memory manager to locate the page elsewhere in memory or on the disk, so that it can be loaded into the immediate cache.

Load Runner Physical Disk

• % Disk Read Time: The percentage of elapsed time that the selected disk drive is busy servicing read requests.
• % Disk Write Time: The percentage of elapsed time that the selected disk drive is busy servicing write requests.
• % Disk Time: The percentage of elapsed time that the selected disk drive is busy servicing read or write requests.
• Avg. Disk Bytes/Read: The average number of bytes transferred from the disk during read operations.
• Avg. Disk Bytes/Write: The average number of bytes transferred to the disk during write operations.
• Avg. Disk Bytes/Transfer: The average number of bytes transferred to or from the disk during write or read operations.
• Avg. Disk Queue Length: The average number of both read and write requests that were queued for the selected disk during the sample interval.

Load Runner Processor

• % DPC Time: The percentage of elapsed time that the Processor spent in Deferred Procedure Calls (DPCs). When a hardware device interrupts the Processor, the Interrupt Handler may elect to execute the majority of its work in a DPC. DPCs run at lower priority than Interrupts.
• % Interrupt Time: The percentage of elapsed time that the Processor spent handling hardware Interrupts. When a hardware device interrupts the Processor, the Interrupt Handler will execute to handle the condition, usually by signaling I/O completion and possibly issuing another pending I/O request. Some of this work may be done in a DPC (see % DPC Time).
• % Processor Time: Processor Time is expressed as a percentage of the elapsed time that a processor is busy executing a non-idle thread. It can be viewed as the fraction of the time spent doing useful work. Each processor is assigned an idle thread in the idle process that consumes those unproductive processor cycles not used by any other threads.
• % Privileged Time: The percentage of processor time spent in Privileged Mode in non-idle threads. The Windows NT service layer, the Executive routines, and the Windows NT Kernel execute in Privileged Mode. Device drivers for most devices other than graphics adapters and printers also execute in Privileged Mode. Unlike some early operating systems, Windows NT uses process boundaries for subsystem protection in addition to the traditional protection of User and Privileged modes. This counter can help determine the source of excessive time being spent in Privileged Mode.
• % User Time: The percentage of processor time spent in User Mode in non-idle threads. All application code and subsystem code execute in User Mode. The graphics engine, graphics device drivers, printer device drivers and the window manager also execute in User Mode. Code executing in User Mode cannot damage the integrity of the Windows NT Executive, Kernel, and device drivers.

Load Runner Server

• Blocking Requests Rejected: The number of times the server has rejected blocking Server Message Blocks (SMBs) due to an insufficient count of free work items. May indicate whether the maxworkitem or minfreeworkitems server parameters need tuning.
• Bytes Received/Sec: The number of bytes the server has received from the network. This value indicates how busy the server is.
• Bytes Transmitted/Sec: The number of bytes the server has sent on the network. This value indicates how busy the server is.
• Bytes Total/Sec: The number of bytes the server has sent to and received from the network. This value provides an overall indication of how busy the server is.
• Context Blocks Queued/Sec: The rate at which work context blocks had to be placed on the server's FSP queue to await server action.
• Errors Access Permissions: The number of times file opens on behalf of clients have failed with STATUS_ACCESS_DENIED. Can indicate whether somebody is randomly attempting to access files in hopes of reaching data that was not properly protected.

Load Runner

NAVIGATIONAL STEPS FOR LOADRUNNER LAB-EXERCISES

1. Creating a Script Using the Virtual User Generator

• Start->Program Files->LoadRunner->Virtual User Generator.
• Choose File->New.
• Select the type and click OK button.
• The Start Recording dialog box appears.
• Beside "Program to Record", click the Browse button and browse for the application.
• Choose the working directory.
• Start recording into the Vuser_Init section and click OK button.
• After the application appears, change sections to Actions.
• Do some actions on the application.
• Change sections to Vuser_End and close the application.
• Click on the Stop Recording icon in the toolbar of the Vuser Generator.
• Insert the Start_Transaction and End_Transaction points.
• Insert the Rendezvous point.
• Choose Vuser->Run and verify the status of the script at the bottom in the Execution Log.
• Choose File->Save (remember the path of the script).

2. Running the Script in the Controller with the Wizard

• Start->Program Files->LoadRunner->Controller.
• Choose the Wizard option and click OK.
• Click Next in the Welcome screen.
• In the host list, click the Add button and mention the machine name; click Next button.
• Select the related script you generated in the Vuser Generator (GUI Vuser script, DB script or RTE script); click Next button.
• Select the simulation group list, click the Edit button and change the group name and number of Vusers; click Next button.
• Select Finish button.
• Choose Group->Init or Group->Run or Scenario->Start.
• Finally the LoadRunner Analysis graph report appears.

3. Creating a GUI Vuser Script Using WinRunner (GUI Vuser)

• Start->Program Files->WinRunner->WinRunner.
• Choose File->New.
• Start recording through Create->Record Context Sensitive mode.
• Invoke the application and do some actions on the application.
• Select Stop Recording: Create->Stop Record.
• Declare the transactions and rendezvous point at the top of the script: declare_transaction(); declare_rendezvous();.
• Identify where the transaction points are to be inserted and insert start_transaction(); and end_transaction();.
• Insert the rendezvous point: rendezvous();.
• Save the script and remember the path of the script.

4. Running the Script in the Controller with the Wizard (GUI Vuser)

• Start->Program Files->LoadRunner->Controller.
• Choose the Wizard option and click OK.
• Click Next in the Welcome screen.
• In the host list, click the Add button and mention the machine name; click Next button.
• In GUI Vuser Script, click the Add button and mention the name of the script; click Next button.
• Select the simulation group list, click the Edit button and change the group name and number of Vusers; click Next button.
• Select Finish button.
• Select Tools->Options, select the WinRunner tab and set the path of WinRunner: Program Files->WinRunner->dat->wrun.ini.
• Choose Group->Init or Group->Run or Scenario->Start.
• Select Results->Analyse Results; finally the LoadRunner Analysis graph report appears.

5. Running the Script in the Controller without the Wizard

• Start->Program Files->LoadRunner->Controller.
• Choose File->New; four windows appear.
• Select the Vusers window and select Group->Add Group; the Vuser Information box appears.
• Select the Group Name, Vuser Quantity and Host Name.
• Select the WinRunner tab, select the GUI-WinRunner check box and set the path: Program Files->WinRunner->dat->wrun.ini.
• Select the Script window, select the script and select the Add button; select the path of the script.
• Select the Host window, select Host->Details, select the Vuser Limits tab and click the OK button.
• Choose Group->Init or Group->Run or Scenario->Start.
• Finally the LoadRunner Analysis graph report appears.
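Putting the declarations from the steps above together, a GUI Vuser script skeleton looks roughly like this (a sketch; the transaction and rendezvous names, and the Flight Reservation logical names, are examples only):

# declarations at the top of the script
declare_transaction ("insert_order");
declare_rendezvous ("all_users_ready");

set_window ("Flight Reservation", 10);
rendezvous ("all_users_ready");         # hold here until the Controller releases the Vusers
start_transaction ("insert_order");     # begin timing
edit_set ("Name:", "LoadTest User");
button_press ("Insert Order");
obj_wait_info ("Delete Order", "enabled", 1, 30);
end_transaction ("insert_order");       # stop timing; the elapsed time is reported to the Controller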

Test Director - Introduction

TestDirector, the industry's first global test management solution, helps organizations deploy high-quality applications more quickly and effectively. Its four modules (Requirements, Test Plan, Test Lab, and Defects) are seamlessly integrated, allowing for a smooth information flow between various testing stages. The completely Web-enabled TestDirector supports high levels of communication and collaboration among distributed testing teams, driving a more effective, efficient global application-testing process.

Features in TestDirector 7.5

Web-based Site Administrator: The Site Administrator includes tabs for managing projects, adding users and defining user properties, monitoring connected users, monitoring licenses and monitoring TestDirector server information.

Domain Management: TestDirector projects are now grouped by domain. A domain contains a group of related TestDirector projects and assists you in organizing and managing a large number of projects.

Enhanced Reports and Graphs: Additional standard report types and graphs have been added, and the user interface is richer in functionality. The new format enables you to customize more features.

Version Control: Version control enables you to keep track of the changes you make to the testing information in your TestDirector project. You can use your version control database for tracking manual, WinRunner and QuickTest Professional tests in the test plan tree and test grid.

Collaboration Module: The Collaboration module, available to existing customers as an optional upgrade, allows you to initiate an online chat session with another TestDirector user. While in a chat session, users can share applications and make changes.

Features in TestDirector 8.0

TestDirector Advanced Reports Add-in: With the new Advanced Reports Add-in, TestDirector users are able to maximize the value of their testing project information by generating customizable status and progress reports. The Advanced Reports Add-in offers the flexibility to create custom report configurations and layouts, unlimited ways to aggregate and compare data and the ability to generate cross-project analysis reports.

Automatic Traceability Notification: The new traceability feature automatically traces changes to testing process entities such as requirements or tests, and notifies the user via flag or e-mail. For example, when a requirement changes, the associated test is flagged and the tester is notified that the test may need to be reviewed to reflect the requirement changes.

Coverage Analysis View in Requirements Module: The graphical display enables you to analyze the requirements according to test coverage status and view associated tests, grouped according to test status.

Hierarchical Test Sets: Hierarchical test sets provide the ability to better organize your test run process by grouping test sets into folders.

Workflow for all TestDirector Modules: The addition of the script editor to all modules enables organizations to customize TestDirector to follow and enforce any methodology and best practices.

Improved Customization: With a greater number of available user fields, the ability to add memo fields and create input masks, users can customize their TestDirector projects to capture any data required by their testing process. The new rich edit option adds color and formatting options to all memo fields.

TestDirector Features & Benefits

Supports the entire testing process: TestDirector incorporates all aspects of the testing process (requirements management, planning, scheduling, running tests, issue management and project status analysis) into a single browser-based application.

Leverages innovative Web technology: Testers, developers and business analysts can participate in and contribute to the testing process by working seamlessly across geographic and organizational boundaries.

Uses industry-standard repositories: TestDirector integrates easily with industry-standard databases such as SQL, Oracle, Access and Sybase.

Links test plans to requirements: TestDirector connects requirements directly to test cases, ensuring that functional requirements have been covered by the test plan.

Manages manual and automated tests: TestDirector stores and runs both manual and automated tests, and can help jumpstart a user's automation project by converting manual tests to automated test scripts.

Accelerates testing cycles: TestDirector's TestLab manager accelerates the test execution cycles by scheduling and running tests automatically, unattended, even overnight. The results are reported into TestDirector's central repository, creating an accurate audit trail for analysis.

Integrates with Microsoft Office: TestDirector can import requirements and test plans from Microsoft Office, preserving your investment and accelerating your testing process.

Supports test runs across boundaries: TestDirector allows testers to run tests on their local machines and then report the results to the repository that resides on a remote server.

Integrates with internal and third-party tools: The documented COM API allows TestDirector to be integrated both with internal tools (e.g., WinRunner and LoadRunner) and with external third-party lifecycle applications.

Enables structured information sharing: TestDirector controls the information flow in a structured and organized manner. It defines the role of each tester in the process and sets the appropriate permissions to ensure information integrity.

Generates customizable reports: TestDirector features a variety of customizable graphs and reports that provide a snapshot of the process at any time during testing. You can save your favorite views to have instant access to relevant project information.

Supports decision-making through analysis: TestDirector helps you make informed decisions about application readiness through dozens of reports and analysis features. Using information about requirements coverage, planning progress, run schedules or defect statistics, managers are able to make informed decisions on whether the application is ready to go live.

Provides easy defect reporting: TestDirector offers a defect tracking process that can identify similar defects in a database.

Provides Analysis and Decision Support Tools: TestDirector's integrated graphs and reports help analyze application readiness at any point in the testing process.

Provides Anytime, Anywhere Access to Testing Assets: Using TestDirector's Web interface, testers, developers and business analysts can participate in and contribute to the testing process by collaborating across geographic and organizational boundaries.

Provides Traceability Throughout the Testing Process: TestDirector links requirements to test cases, and test cases to issues, to ensure traceability throughout the testing cycle. When a requirement changes or a defect is fixed, the tester is notified of the change.

Integrates with Third-Party Applications: Whether an individual uses an industry standard configuration management solution, Microsoft Office or a homegrown defect management tool, any application can be integrated into TestDirector. Through the open API, TestDirector preserves the users' investment in their existing solutions and enables them to create an end-to-end lifecycle-management solution.


Testing Process

The test management process is the main principle behind Mercury Interactive's TestDirector. Its aim is to deliver quality applications in less time. TestDirector is the first tool to capture the entire test management process (requirements management, test planning, test execution and defect management) in one powerful, scalable and flexible solution. Test management is a method for organizing application test assets, such as test requirements, test plans, test documentation, test scripts or test results, to enable easy accessibility and reusability. By providing a central repository for all testing assets, TestDirector facilitates the adoption of a more consistent testing process, which can be repeated throughout the application lifecycle or shared across multiple applications or lines of business (LOB). In many organizations, test documentation and requirements are maintained in Excel or Word documents, which makes it difficult for team members to share information and to make frequent revisions and changes.

Managing Requirements

Requirements are what the users or the system needs. Too often, however, requirements are neglected during the testing effort, leading to a chaotic process of fixing what you can and accepting that certain functionality will not be verified. Requirements management is a structured process for gathering, organizing, documenting and managing the requirements throughout the project lifecycle. TestDirector supports requirements-based testing and provides the testing team with a clear, concise and functional blueprint for developing test cases. Requirements are linked to tests; that is, when the test passes or fails, this information is reflected in the requirement records. You can also generate a test based on a functional requirement and instantly create a link between the requirement, the relevant test and any defects that are uncovered during the test run.

Test Planning

Today, organizations no longer wait to start testing at the end of the development stage. Instead, testing and development begin simultaneously, before implementation. This parallel approach to test planning and application design ensures that testers build a complete set of tests that cover every function the system is designed to perform. Based on the requirements, testers can start building the test plan and designing the actual tests. In the Test Plan module, you can design tests (manual and automated), document the testing procedures and create quick graphs and reports to help measure the progress of the test planning effort. TestDirector provides a centralized approach to test design, which is invaluable for gathering input from different members of the testing team and providing a central reference point for all of your future testing efforts.

Running Tests
After you have addressed the test design and development issues and built the test plan, your testing team is ready to start running tests. Most applications must be tested on different operating systems, different browser versions or other configurations. TestDirector can help configure the test environment and determine which tests will run on which machines, and in TestDirector's Test Lab testers can set up groups of machines to most efficiently use their lab resources. TestDirector can also schedule automated tests, which saves testers time by running multiple tests simultaneously across multiple machines on the network. Tests can be scheduled to run unattended, overnight or when the system is in least demand for other tasks. For both manual and automated tests, TestDirector can keep a complete history of all test runs. By using this audit trail, testers can easily trace changes to tests and test runs.

Managing Defects
The keys to creating a good defect management process are setting up the defect workflow and assigning permission rules. With TestDirector, you can clearly define how the lifecycle of a defect should progress, who has the authority to open a new defect, who can change a defect's status to "fixed" and under which conditions the defect can be officially closed. TestDirector will also help you maintain a complete history and audit trail throughout the defect lifecycle. Managers often decide whether the application is ready to go live based on defect analysis. By analyzing the defect statistics in TestDirector, you can take a snapshot of the application under test and see exactly how many defects you currently have, along with their status, severity, priority, age, etc. Because TestDirector is completely Web-based, different members of the team can have instant access to defect information, greatly improving communication in your organization and ensuring everyone is up to date on the status of the application.

Interview Question

WinRunner Interview Questions

1) How have you used WinRunner in your project?
Ans. I have been using WinRunner for creating automated scripts for GUI, functional and regression testing of the AUT.

2) Explain the WinRunner testing process.
Ans. The WinRunner testing process involves six main stages:
i. Create a GUI Map File so that WinRunner can recognize the GUI objects in the application being tested.
ii. Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.
iii. Debug Tests: run tests in Debug mode to make sure they run smoothly.
iv. Run Tests: run tests in Verify mode to test your application.
v. View Results: determine the success or failure of the tests.
vi. Report Defects: if a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.

3) What is contained in the GUI map?
Ans. WinRunner stores the information it learns about a window or object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects: it reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested. There are 2 types of GUI Map files:
i. Global GUI Map file: a single GUI Map file for the entire application.
ii. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.
Each object in the GUI Map file has a logical name and a physical description.

4) How does WinRunner recognize objects on the application?
Ans. WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects: it reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested.

5) Have you created test scripts, and what is contained in the test scripts?
Ans. Yes, I have created test scripts. A test script contains statements in Mercury Interactive's Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner's visual programming tool, the Function Generator.

6) How does WinRunner evaluate test results?
Ans. Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.

7) Have you performed debugging of the scripts?
Ans. Yes, I have performed debugging of scripts. We can debug a script by executing it in Debug mode. We can also debug a script using the Step, Step Into and Step Out functionalities provided by WinRunner.

8) How do you run your test scripts?
Ans. We run tests in Verify mode to test the application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.
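To give a feel for what the recorded TSL statements in a test script look like, here is a small sketch; the window, field and checklist names are invented for illustration and are not from a real AUT.

    # Context Sensitive statements recorded against a hypothetical "Login" window
    set_window ("Login", 10);            # wait up to 10 seconds for the window
    edit_set ("Name:", "tester01");      # type a value into an edit field
    button_press ("OK");                 # press the OK push button

    # A GUI checkpoint inserted while recording: compares the object's current
    # properties to the expected results captured earlier.
    obj_check_gui ("Welcome message", "list1.ckl", "gui1", 5);

    tl_step ("login_step", 0, "Login step completed");   # report a pass for this step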

9) How do you analyze results and report the defects?
Ans. Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window. If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.

10) What is the use of the TestDirector software?
Ans. TestDirector is Mercury Interactive's software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.

11) How did you integrate your automated scripts with TestDirector?
Ans. When you work with WinRunner, you can choose to save your tests directly to your TestDirector database, or, while creating a test case in TestDirector, we can specify whether the script is automated or manual. If it is an automated script, TestDirector builds a skeleton for the script that can later be modified into one which can be used to test the AUT.

12) What are the different modes of recording?
Ans. There are two types of recording in WinRunner:
i. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.
ii. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

13) What is the purpose of loading WinRunner Add-Ins?
Ans. Add-Ins are used in WinRunner to load the functions specific to a particular add-in into memory. While creating a script, only the functions in the selected add-in will be listed in the Function Generator, and while executing the script only the functions in the loaded add-in will be executed; otherwise WinRunner will give an error message saying it does not recognize the function.

14) What are the reasons that WinRunner fails to identify an object on the GUI?
Ans. WinRunner can fail to identify an object in a GUI for various reasons:
i. The object is not a standard Windows object.
ii. If the browser used is not compatible with the WinRunner version, the GUI Map Editor will not be able to learn any of the objects displayed in the browser window.

15) What do you mean by the logical name of the object?
Ans. An object's logical name is determined by its class. In most cases, the logical name is the label that appears on the object.
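As a rough illustration of the two recording modes in question 12, the same user action produces very different statements; the object names, track number and text below are made-up examples.

    # Context Sensitive mode: operations are recorded against named GUI objects
    set_window ("Order Form", 5);
    edit_set ("Customer Name:", "Jane Doe");
    button_press ("Insert Order");

    # Analog mode: roughly the same actions recorded as raw mouse tracks,
    # clicks and keystrokes (the coordinates live in the recorded movement track)
    move_locator_track (1);    # replay recorded mouse movement track number 1
    click ("Left");            # left mouse click at the current pointer position
    type ("Jane Doe");         # raw keyboard input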

16) If the object does not have a name, then what will be the logical name?
Ans. If the object does not have a name, then the logical name could be the attached text.

17) What is the difference between the GUI map and GUI map files?
Ans. The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files:
i. Global GUI Map file: a single GUI Map file for the entire application.
ii. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.
A GUI Map file is a file which contains the windows and the objects learned by WinRunner, with their logical names and physical descriptions.

18) How do you view the contents of the GUI map?
Ans. The GUI Map Editor displays the contents of a GUI Map. We can invoke the GUI Map Editor from the Tools menu in WinRunner. The GUI Map Editor displays the various GUI Map files created and the windows and objects learned into them, with their logical names and physical descriptions.

19) When you create a GUI map, do you record all the objects or specific objects?
Ans. If we are learning a window, then WinRunner automatically learns all the objects in the window; otherwise we identify only those objects in the window which are to be learned, since we will be working with only those objects while creating scripts.

20) What is the purpose of the set_window command?
Ans. The set_window command sets the focus to the specified window. We use this command to set the focus to the required window before executing tests on that window.
Syntax: set_window(<logical name>, time);
The logical name is the logical name of the window, and time is how long the execution will wait to bring the given window into focus.

21) How do you load a GUI map?
Ans. We can load a GUI Map by using the GUI_load command.
Syntax: GUI_load(<file_name>);

22) What is the disadvantage of loading the GUI maps through startup scripts?
Ans.
1. If we are using a single GUI Map file for the entire AUT, then the memory used by the GUI Map may be very high.
2. If there is any change in an object that has been learned, then WinRunner will not be able to recognize the object, as it is not in the GUI Map file loaded in memory. So we will have to learn the object again, update the GUI Map file and reload it.

23) How do you unload the GUI map?
Ans. We can use GUI_close to unload a specific GUI Map file, or we can use the GUI_close_all command to unload all the GUI Map files loaded in memory.
Syntax: GUI_close(<file_name>); or GUI_close_all;
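A small TSL sketch tying together GUI_load, set_window and GUI_close from questions 20, 21 and 23; the file path and window name are placeholders.

    GUI_load ("c:\\qa\\maps\\flight_app.gui");    # load a GUI map file into memory

    set_window ("Flight Reservation", 10);        # focus the window before acting on it
    button_press ("Insert Order");

    GUI_close ("c:\\qa\\maps\\flight_app.gui");   # unload just this map file
    # GUI_close_all;                              # ...or unload every loaded map file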

24) What actually happens when you load the GUI map?
Ans. When we load a GUI Map file, the information about the windows and the objects, with their logical names and physical descriptions, is loaded into memory. So when WinRunner executes a script on a particular window, it can identify the objects using this information loaded in memory.

25) What is the purpose of the temp GUI map file?
Ans. While recording a script, WinRunner learns objects and windows by itself. This information is stored in the temporary GUI Map file. We can specify in the General Options whether this temporary GUI Map file should be loaded each time.

26) What is the extension of a GUI map file?
Ans. The extension for a GUI Map file is ".gui".

27) How do you find an object in a GUI map?
Ans. The GUI Map Editor provides Find and Show buttons.
i. To find a particular object from the GUI Map file in the application, select the object and click the Show button. This blinks the selected object in the application.
ii. To find a particular object in a GUI Map file, click the Find button, which gives the option to select the object in the application. When the object is selected, if the object has been learned into the GUI Map file, it will be highlighted in the GUI Map file.

28) What different actions are performed by the Find and Show buttons?
Ans. Show blinks the object in the application that corresponds to the entry selected in the GUI Map file; Find lets you select an object in the application and, if that object has been learned into the GUI Map file, highlights its entry in the GUI Map file.

29) How do you identify which files are loaded in the GUI map?
Ans. The GUI Map Editor has a "GUI File" drop-down displaying all the GUI Map files loaded into memory.

30) How do you modify the logical name or the physical description of the objects in the GUI map?
Ans. You can modify the logical name or the physical description of an object in a GUI map file using the GUI Map Editor.

31) When do you feel you need to modify the logical name?
Ans. Changing the logical name of an object is useful when the assigned logical name is not sufficiently descriptive or is too long.

32) When is it appropriate to change the physical description?
Ans. Changing the physical description is necessary when the property value of an object changes.
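As a rough illustration of the logical name and physical description pairing discussed in questions 26-32, entries in a .gui map file look approximately like the sketch below; the names and property values are invented, and the exact layout can vary by WinRunner version.

    Flight Reservation:
    {
      class: window,
      label: "Flight Reservation"
    }

    OK:
    {
      class: push_button,
      label: "OK"
    }

Here "Flight Reservation" and "OK" are the logical names used in scripts, and the property lists in braces are the physical descriptions WinRunner matches against the application.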

33) How does WinRunner handle varying window labels?
Ans. We can handle varying window labels using regular expressions. WinRunner uses two "hidden" properties in order to use a regular expression in an object's physical description. These properties are regexp_label and regexp_MSW_class.
i. The regexp_label property is used for windows only. It operates "behind the scenes" to insert a regular expression into a window's label description.
ii. The regexp_MSW_class property inserts a regular expression into an object's MSW_class. It is obligatory for all types of windows and for objects of the class object.

34) What is the purpose of the regexp_label property and the regexp_MSW_class property?
Ans. The regexp_label property is used for windows only; it operates "behind the scenes" to insert a regular expression into a window's label description. The regexp_MSW_class property inserts a regular expression into an object's MSW_class; it is obligatory for all types of windows and for objects of the class object.

35) How do you suppress a regular expression?
Ans. We can suppress the regular expression of a window by replacing the regexp_label property with the label property.

36) How do you copy and move objects between different GUI map files?
Ans. We can copy and move objects between different GUI Map files using the GUI Map Editor. The steps to be followed are:
i. Choose Tools > GUI Map Editor to open the GUI Map Editor.
ii. Choose View > GUI Files.
iii. Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files simultaneously.
iv. View a different GUI map file on each side of the dialog box by clicking the file names in the GUI File lists.
v. In one file, select the objects you want to copy or move. Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.
vi. Click Copy or Move.
vii. To restore the GUI Map Editor to its original size, click Collapse.

37) How do you select multiple objects while merging the files?
Ans. Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.

38) How do you clear a GUI map file?
Ans. We can clear a GUI Map file using the "Clear All" option in the GUI Map Editor.

39) How do you filter the objects in the GUI map?
Ans. The GUI Map Editor has a Filter option. This provides filtering with 3 different types of options:
i. Logical name displays only objects with the specified logical name.
ii. Physical description displays only objects matching the specified physical description. Use any substring belonging to the physical description.
iii. Class displays only objects of the specified class, such as all the push buttons.

40) How do you configure the GUI map?
Ans. When WinRunner learns the description of a GUI object, it does not learn all its properties. Instead, it learns the minimum number of properties needed to provide a unique identification of the object. Many applications also contain custom GUI objects. A custom object is any object not belonging to one of the standard classes used by WinRunner. These objects are therefore assigned to the generic "object" class. When WinRunner records an operation on a custom object, it generates obj_mouse_ statements in the test script. If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing. The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.
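Continuing the GUI map sketch above, a window whose title varies from run to run can be described with the regexp_label property from questions 33-35; the title pattern is a made-up example.

    Order Status:
    {
      class: window,
      regexp_label: "Order Status - [0-9]+ open orders"
    }

Replacing regexp_label with a plain label property (and a literal title string) suppresses the regular expression, as described in question 35.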

LoadRunner Interview Questions

1. What is load testing?
Ans. Load testing is to test whether the application works fine with the loads that result from a large number of simultaneous users and transactions, and to determine whether it can handle peak usage periods.

2. What is Performance testing?
Ans. Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi-user environment to determine the effect of multiple transactions on the timing of a single transaction.

3. Did you use LoadRunner? What version?
Ans. Yes, Version 7.

4. Explain the load testing process.
Ans.
Step 1: Planning the test. Here we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish the load-testing objectives.
Step 2: Creating Vusers. Here we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.
Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using the LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and the percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario, where we define the goal that our test has to achieve and LoadRunner automatically builds the scenario for us.
Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the test, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.
Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.
Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner's graphs and reports to analyze the application's performance.

5. When do you do load and performance testing?
Ans. We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it hold so many hundreds and thousands of users, etc. This is when we do load and performance testing.

6. What are the components of LoadRunner?
Ans. The components of LoadRunner are the Virtual User Generator, the Controller and the Agent process, LoadRunner Analysis and Monitoring, and LoadRunner Books Online.
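To make Step 2 of the load testing process (creating Vusers) concrete, here is a minimal sketch of a web Vuser Action section; Vuser scripts are C code, and the URL, transaction name and think-time value below are placeholders rather than settings from a real project.

    Action()
    {
        lr_start_transaction("login");            /* start timing the business step */

        web_url("home",
            "URL=http://example.com/login",       /* hypothetical application URL */
            LAST);

        lr_think_time(5);                         /* emulate a real user's pause */

        lr_end_transaction("login", LR_AUTO);     /* report pass/fail automatically */
        return 0;
    }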

7. What component of LoadRunner would you use to record a script?
Ans. The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.

8. What component of LoadRunner would you use to play back the script in multi-user mode?
Ans. The Controller component is used to play back the script in multi-user mode. This is done during a scenario run, where a Vuser script is executed by a number of Vusers in a group.

9. What is a rendezvous point?
Ans. You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, so that they may perform a task simultaneously. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.

10. What is a scenario?
Ans. A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.

11. Explain the recording mode for a web Vuser script.
Ans. We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web-based applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use VuGen to monitor the communication between the application and the server, generate the required function calls, and insert the generated function calls into a Vuser script.

12. Why do you create parameters?
Ans. Parameters are like script variables. They are used to vary input to the server and to emulate real users. Different sets of data are sent to the server each time the script is run. This better simulates the usage model for more accurate testing from the Controller: one script can emulate many different users on the system.

13. What is correlation? Explain the difference between automatic correlation and manual correlation.
Ans. Correlation is used to obtain data which is unique for each run of the script and which is generated by nested queries. Correlation provides the value needed to avoid errors arising out of duplicate values and also optimizes the code (to avoid nested queries). Automatic correlation is where we set some rules for correlation; it can be application-server specific, and values are replaced by data created by these rules. In manual correlation, the value we want to correlate is scanned and "create correlation" is used to correlate it.

14. How do you find out where correlation is required? Give a few examples from your projects.
Ans. Two ways: first, we can scan for correlations and see the list of values which can be correlated; from this we can pick a value to be correlated. Secondly, we can record two scripts and compare them; we can look in the difference file for the values which need to be correlated. In my project, there was a unique id developed for each customer - it was nothing but the Insurance Number. It was generated automatically, it was sequential, and the value was unique. I had to correlate this value in order to avoid errors while running my script. I did it using scan for correlation.

15. Where do you set automatic correlation options?
Ans. Automatic correlation from the web point of view can be set in the recording options, on the correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. Automatic correlation for a database can be done using the show output window: scan for correlation, pick the correlate query tab and choose which query value we want to correlate. If we know the specific value to be correlated, we just create correlation for the value and specify how the value is to be created.

16. What is a function to capture dynamic values in the web Vuser script?
Ans. The web_reg_save_param function saves dynamic data information to a parameter.

17. When do you disable logging in the Virtual User Generator, and when do you choose standard and extended logs?
Ans. Once we debug our script and verify that it is functional, we can enable logging for errors only. When you copy a script to a scenario, logging is automatically disabled. Standard Log option: when you select Standard log, it creates a standard log of functions and messages sent during script execution to use for debugging; disable this option for large load testing scenarios. Extended Log option: select extended log to create an extended log, including warnings and other messages; disable this option for large load testing scenarios. We can specify which additional information should be added to the extended log using the Extended log options.
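A hedged sketch of how parameters, web_reg_save_param and a rendezvous point (questions 9, 12 and 16 above) appear in a Vuser script; the boundaries, parameter names and URLs are assumptions made for illustration only.

    Action()
    {
        /* Capture a dynamic value (e.g. a session id) from the next server
           response into the parameter {SessionId} - manual correlation. */
        web_reg_save_param("SessionId",
            "LB=sessionid=",                  /* left boundary of the value  */
            "RB=\"",                          /* right boundary of the value */
            LAST);

        web_url("login", "URL=http://example.com/login", LAST);

        /* Hold Vusers here until the configured number arrive, then release
           them together to emulate peak load. */
        lr_rendezvous("deposit_cash");

        /* Reuse the captured value plus a data-file parameter {UserName}. */
        web_url("account",
            "URL=http://example.com/account?sid={SessionId}&user={UserName}",
            LAST);

        return 0;
    }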

18. How do you debug a LoadRunner script?
Ans. VuGen contains two options to help debug Vuser scripts: the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within the script using the lr_set_debug_message function. This is useful if we want to receive debug information about only a small section of the script.

19. How do you write user-defined functions in LoadRunner? Give a few functions you wrote in your previous project.
Ans. Before we create user-defined functions, we need to create an external library (DLL) containing the function. We add this library to the VuGen bin directory. Once the library is added, we assign the user-defined function as a parameter. The function should have the following format: __declspec (dllexport) char* <function name>(char*, char*). GetVersion, GetCurrentTime and GetPltform are some of the user-defined functions used in my earlier project.

20. What are the changes you can make in run-time settings?
Ans. The run-time settings that we make are:
a) Pacing - this has the iteration count.
b) Log - under this we have Disable Logging, Standard Log, and Extended Log.
c) Think Time - in think time we have two options: Ignore think time and Replay think time.
d) General - under the General tab we can set the Vusers to run as a process or as multithreading, and whether each step is a transaction.

21. How do you perform functional testing under load?
Ans. Functionality under load can be tested by running several Vusers concurrently. By increasing the number of Vusers, we can determine how much load the server can sustain.

22. What is Ramp-up? How do you set this?
Ans. This option is used to gradually increase the amount of Vusers/load on the server. An initial value is set and a value to wait between intervals can be specified. To set Ramp-up, go to 'Scenario Scheduling Options'.

23. What is the advantage of running the Vuser as a thread?
Ans. VuGen provides the facility to use multithreading. This enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, thus taking up a large amount of memory and limiting the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.

24. If you want to stop the execution of your script on error, how do you do that?
Ans. The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section and end the execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first uncheck the "Continue on error" option in the Run-Time Settings.

25. What is the relation between Response Time and Throughput?
Ans. The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and the highest response time would occur at approximately the same time.

26. Explain the configuration of your systems.
Ans. The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This system component configuration should match the overall system configuration, which would include the network infrastructure, the web server, the database server, and any other components that go with this larger system, so as to achieve the load-testing objectives.

27. How do you identify the performance bottlenecks?
Ans. Performance bottlenecks can be detected by using monitors. These monitors might be application server monitors, web server monitors, database server monitors and network monitors. They help in finding out the troubled area in our scenario which causes increased response time. The measurements made are usually performance response time, throughput, hits/sec, network delay graphs, etc.
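A rough sketch of the user-defined-function format quoted in question 19, together with one way the DLL could be loaded and called from a script; the file path, function name and return value are invented for illustration.

    /* --- in the external DLL source --- */
    __declspec(dllexport) char* GetVersion(char* param1, char* param2)
    {
        return "Build 7.2.0";    /* a real implementation would compute this */
    }

    /* --- in the Vuser script --- */
    Action()
    {
        lr_load_dll("c:\\lr\\user_funcs.dll");                /* placeholder path */
        lr_save_string(GetVersion("", ""), "AppVersion");     /* keep the result in a parameter */
        lr_output_message("Version under test: %s", lr_eval_string("{AppVersion}"));
        return 0;
    }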

28. How did you find web server related issues?
Ans. Using Web resource monitors we can find the performance of web servers. Using these monitors we can analyze the throughput on the web server, the number of hits per second that occurred during the scenario, the number of HTTP responses per second, and the number of downloaded pages per second.

29. How did you find database related issues?
Ans. By running the "Database" monitor and with the help of the "Data Resource Graph" we can find database related issues. For example, you can specify the resource you want to measure before running the Controller, and then you can see database related issues.

30. If the web server, database and network are all fine, where could the problem be?
Ans. The problem could be in the system itself, in the application server, or in the code written for the application.

31. How did you plan the load? What are the criteria?
Ans. A load test is planned to decide the number of users, what kind of machines we are going to use and from where they are run. It is based on 2 important documents: the Task Distribution Diagram and the Transaction Profile. The Task Distribution Diagram gives us the information on the number of users for a particular transaction and the time of the load; the peak usage and off-usage are decided from this diagram. The Transaction Profile gives us information about the transaction names and their priority levels with regard to the scenario we are deciding.

32. What does the vuser_init action contain?
Ans. The vuser_init action contains procedures to log in to a server.

33. What does the vuser_end action contain?
Ans. The vuser_end section contains log-off procedures.

34. What is think time? How do you change the threshold?
Ans. Think time is the time that a real user waits between actions. Example: when a user receives data from a server, the user may wait several seconds to review the data before responding. This delay is known as the think time. Changing the threshold: the threshold level is the level below which the recorded think time will be ignored. The default value is five (5) seconds. We can change the think time threshold in the Recording options of VuGen.

35. What is the difference between standard log and extended log?
Ans. The standard log sends a subset of the functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. This is mainly used during debugging when we want information about parameter substitution, data returned by the server, and advanced trace.

36. Explain the following functions:
lr_debug_message - sends a debug message to the output log when the specified message class is set.
lr_output_message - sends notifications to the Controller Output window and the Vuser log file.
lr_error_message - sends an error message to the LoadRunner Output window.
lrd_stmt - associates a character string (usually a SQL statement) with a cursor. This function sets a SQL statement to be processed.
lrd_fetch - fetches the next row from the result set.

37. Throughput
If the throughput scales upward as time progresses and the number of Vusers increases, this indicates that the bandwidth is sufficient. If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.

38. Types of goals in a goal-oriented scenario
LoadRunner provides five different types of goals in a goal-oriented scenario:
1. The number of concurrent Vusers
2. The number of hits per second
3. The number of transactions per second
4. The number of pages per minute
5. The transaction response time that you want your scenario to achieve

39. Analysis scenario (bottlenecks)
In the Running Vusers graph correlated with the response time graph, you can see that as the number of Vusers increases, the average response time of the check itinerary transaction very gradually increases. In other words, the average response time steadily increases as the load increases. At 56 Vusers there is a sudden, sharp increase in the average response time. We say that the test broke the server; that is the mean time before failure (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.
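For reference, minimal examples of the message functions from question 36; the status variable, message class and texts are illustrative only.

    Action()
    {
        int status = 0;     /* hypothetical result code from an earlier step */

        lr_output_message("Checked itinerary for user %s", lr_eval_string("{UserName}"));

        if (status != 0)
            lr_error_message("Itinerary check failed with status %d", status);

        /* Written to the log only when the matching message class has been
           activated with lr_set_debug_message. */
        lr_debug_message(LR_MSG_CLASS_EXTENDED_LOG, "Raw server status: %d", status);

        return 0;
    }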

40. What is correlation? Explain the difference between automatic correlation and manual correlation.
Ans. Correlation is used to obtain data which is unique for each run of the script and which is generated by nested queries. Correlation provides the value needed to avoid errors arising out of duplicate values and also optimizes the code (to avoid nested queries). Automatic correlation is where we set some rules for correlation; it can be application-server specific, and values are replaced by data created by these rules. In manual correlation, the value we want to correlate is scanned and "create correlation" is used to correlate it.

41. Where do you set automatic correlation options?
Ans. Automatic correlation from the web point of view can be set in the recording options, on the correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. Automatic correlation for a database can be done using the show output window: scan for correlation, pick the correlate query tab and choose which query value we want to correlate. If we know the specific value to be correlated, we just create correlation for the value and specify how the value is to be created.

42. What is a function to capture dynamic values in the web Vuser script?
Ans. The web_reg_save_param function saves dynamic data information to a parameter.

TestDirector Interview Question
1. Types of views in Datastage Director?
2. Orchestrate Vs Datastage Parallel Extender?
3. What are the command line functions that import and export the DS jobs?
4. What is an Exception? What are the types of Exceptions?
5. What are Routines and where/how are they written, and have you written any routines before?
6. What are the types of Hashed Files?
7. What is NLS in Datastage? How do we use NLS in Datastage, and what are the advantages of that (at the time of ins.)?
8. How many types of database triggers can be specified on a table? What are they?
9. What are the datatypes available in PL/SQL?

SilkTest Interview Question
1. How does the Recovery System work in SilkTest?
2. What is the purpose of the user-defined base state method?
3. What are the important features of SilkTest as compared to other tools?
4. How do you define a new class in SilkTest?
5. What is SilkMeter and how does it work with SilkTest?
6. What are the components of SilkTest?

General Testing Interview Questions

1. What is 'Software Quality Assurance'?
Software QA involves the entire software development process - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'. (See the Books section for a list of useful books on Software Quality Assurance.)

2. What is 'Software Testing'?
Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g. 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'. Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization's size and business structure.

3. What are some recent major computer system failures caused by software bugs?
* Media reports in January of 2005 detailed severe problems with a $170 million high-profile U.S. government IT systems project. Software testing was one of the five major problem areas according to a report of the commission reviewing the project. Studies were under way to determine which, if any, portions of the project could be salvaged.
* Millions of bank accounts were impacted by errors due to installation of inadequately tested software code in the transaction processing system of a major North American bank, according to mid-2004 news reports. Articles about the incident stated that it took two weeks to fix all the resulting errors, that additional problems resulted when the incident drew a large number of e-mail phishing attacks against the bank's customers, and that the total cost of the incident could exceed $100 million.
* According to news reports in April of 2004, a software bug was determined to be a major contributor to the 2003 Northeast blackout, the worst power system failure in North American history. The failure involved loss of electrical power to 50 million customers, forced shutdown of 100 power plants, and economic losses estimated at $6 billion. The bug was reportedly in one utility company's vendor-supplied power monitoring and management system, which was unable to correctly handle and report on an unusual confluence of initially localized events. The error was found and corrected after examining millions of lines of code.
* In July 2004 newspapers reported that a new government welfare management system in Canada, costing several hundred million dollars, was unable to handle a simple benefits rate increase after being put into live operation. Reportedly the original contract allowed for only 6 weeks of acceptance testing, and the system was never tested for its ability to handle a rate increase.
* In early 2004, news reports revealed the intentional use of a software bug as a counter-espionage tool. According to the report, in the early 1980's one nation surreptitiously allowed a hostile nation's espionage service to steal a version of sophisticated industrial software that had intentionally-added flaws. This eventually resulted in major industrial disruption in the country that used the stolen flawed software.
* A bug in site management software utilized by companies with a significant percentage of worldwide web traffic was reported in May of 2004. The bug resulted in performance problems for many of the sites simultaneously and required disabling of the software until the bug was fixed.
* A major U.S. retailer was reportedly hit with a large government fine in October of 2003 due to web site errors that enabled customers to view one another's online orders.

000 erroneous report cards and students left stranded by failed class registration systems. the district's CIO was fired. If there are many minor changes or any major changes. * Programming errors . Among other tasks. 4. futures exchange. The company found and reported the bug itself and initiated the recall procedure in which a software upgrade fixed the problems. the trains were started by altering the control system's date settings. can make mistakes. * Bugs in software supporting a large commercial high-speed data network affected 70. problems included 10. The problem went undetected until customers called up with questions about their bills. 5.* News stories in the fall of 2003 stated that a manufacturing company recalled all their transportation products in order to fix a software problem causing instability in certain circumstances. Enthusiasm of engineering staff may be affected. * In January of 2001 newspapers reported that a major European railroad was hit by the aftereffects of the Y2K bug. and sheer size of applications have all contributed to the exponential growth in software/system complexity. hardware requirements that may be affected.000+ students. * News reports in September of 2000 told of a software vendor settling a lawsuit with a large mortgage lender.000 business customers over a period of 8 days in August of 1999.Why is it often hard for management to get serious about quality assurance? * Solving problems is a high-visibility process.redesign. like anyone else. which was shut down for most of a week as a result of the outages. * In early 2000. * In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to be lost in space due to a simple data conversion error. one of whom was known throughout the land and employed as a physician to a great lord. Several investigating panels were convened to determine the process failures that allowed the error to go undetected.000 customers. The school district decided to reinstate it's original 25-year old system for at least a year until the bugs were worked out of the new system by the software vendors. * Changing requirements (whether documented or undocumented) . Among those affected was the electronic trading system of the largest U.S.S. data communications. client-server and distributed applications. preventing problems is low-visibility. * Software complexity . * January 1998 news reports told of software problems at a major U. In some fast-changing business . enormous relational databases.the end-user may not understand the effects of changes.Why does software have bugs? * Miscommunication or no communication . major problems were reported with a new computer system in a large suburban U.S. Multi-tiered applications. and the complexity of coordinating changes may result in errors. work already completed that may have to be redone or thrown out.programmers. which failed for unknown reasons in December 1999. known and unknown dependencies among parts of the project are likely to interact and cause problems. effects on other projects. The company found that many of their newer trains would not run due to their inability to recognize the date '31/12/2000'. public school district with 100. rescheduling of engineers. the vendor had reportedly delivered an online mortgage processing system that did not meet specifications. and didn't work.as to specifics of what an application should or shouldn't do (the application's requirements). etc. 
It was determined that spacecraft software used certain data in English units that should have been in metric units.the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. or may understand and request them anyway . This is illustrated by an old parable: In ancient China there was a family of healers. telecommunications company that resulted in no charges for long distance calls for a month for 400. the orbiter was to serve as a communications relay for the Mars Polar Lander mission. was delivered late.

managers.visual tools. management must understand the resulting risks. etc. developers. (b) design inspections and code inspections.How can new Software QA processes be introduced in an existing organization? * A lot depends on the size of the organization and the risks involved.people prefer to say things like: * * 'no problem' * * 'piece of cake' * * 'I can whip that out in a few hours' * * 'it should be easy to update that old code' * instead of: * * 'that adds a lot of complexity and we could end up making a lot of mistakes' * * 'we have no idea if we can do that.What is verification? validation? * Verification typically involves reviews and meetings to evaluate documents. and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control . * The most value for effort will often be in (a) requirements management processes. until I take a close look at it' * * 'we can't figure out what that old spaghetti code did in the first place' If there are too many unrealistic 'no problem's'. we'll wing it' * * 'I can't estimate how long it will take. For large organizations with high-risk (in terms of lives or property) projects. When deadlines loom and the crunch comes. step-at-a-time process. complete. and ensuring adequate communications among customers. 7. understandable. * Poorly documented code . * Where the risk is lower. depending on the type of customers and projects.environments. testable requirement specifications embodied in requirements or design documentation. management and organizational buy-in and QA implementation may be a slower. it's usually the opposite: they get points mostly for quickly turning out code. maintainable code. often introduce their own bugs or are poorly documented. and there's job security if nobody else can understand it ('if it was hard to write. Also see information about 'agile' approaches such as XP. code.see 'What can be done if requirements are changing continuously?' in Part 2 of the FAQ. often requiring a lot of guesswork. and testers. * For small groups or projects. feedback to developers. plans. resulting in added bugs. the result is bugs. * Software development tools . 6. In many organizations management provides no incentive for programmers to document their code or write clear. it should be hard to read'). * egos . In this case. or in 'agile'-type environments extensive continuous coordination with end-users. continuously modified requirements may be a fact of life. mistakes will be made.scheduling of software projects is difficult at best. a more ad-hoc process may be appropriate. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand. compilers. . and (c) post-mortems/retrospectives. the result is bugs. also in Part 2 of the FAQ.it's tough to maintain and modify code that is badly written or poorly documented. with a goal of clear. class libraries. In fact. scripting tools. serious management buy-in is required and a formalized QA process is necessary. A lot will depend on team leads or managers. * Time pressures .

What is a 'walkthrough'? * A 'walkthrough' is an informal meeting for evaluation or informational purposes. typically with 3-8 people including a moderator.similar to system testing. * White box testing . 9. Typically done by the programmer and not by testers. using network communications. Tests are based on requirements and functionality.black-box type testing that is based on overall requirements specifications. * End-to-end testing .requirements.not based on any knowledge of internal design or code. the 'macro' end of the test scale. Little or no preparation is usually required. bogging down systems to a crawl. or interacting with other hardware. or corrupting databases. and a recorder to take notes. This can be done with checklists. client and server applications on a network. Attendees should prepare for this type of meeting by reading thru the document. to test particular functions or code modules. etc. This type of testing is especially relevant to client/server and distributed systems. and specifications. branches. such as interacting with a database. done by programmers or by testers. Tests are based on coverage of code statements. if the new software is crashing systems every 5 minutes.based on knowledge of the internal logic of an application's code. 8.testing of combined parts of an application to determine if they function together correctly. The subject of the inspection is typically a document such as a requirements spec or a test plan.continuous testing of an application as new functionality is added. * Incremental integration testing . The result of the inspection meeting should be a written report. individual applications. 10. Validation typically involves actual testing and takes place after verifications are completed. involves testing of a complete application environment in a situation that mimics real-world use. * Unit testing . most problems will be found during this preparation.black-box type testing geared to functional requirements of an application. not to fix anything. walkthroughs. and the purpose is to find problems and see what's missing. applications.the most 'micro' scale of testing. * Functional testing . may require developing test driver modules or test harnesses.typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. this type of testing should be done by testers. * Integration testing .What's an 'inspection'? * An inspection is more formalized than a 'walkthrough'. * Sanity testing or smoke testing . and inspection meetings. . For example. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing. or that test drivers be developed as needed. requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed. or systems if appropriate. covers all combined parts of a system. as it requires detailed knowledge of the internal program design and code.) * System testing . The 'parts' can be code modules. conditions.What kinds of testing should be considered? * Black box testing . Not always easily done unless the application has a welldesigned architecture with tight code. paths. issues lists. the software may not be in a 'sane' enough condition to warrant further testing in its current state. The term 'IV & V' refers to Independent Verification and Validation. reader.

large complex queries to a database system. or based on use by end-users/customers over some limited period of time. the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game. * Performance testing . Also used to describe such tests as system functional testing while under unusually heavy loads. * Acceptance testing . informal software test that is not based on formal test plans or test cases. and other techniques can be used.testing how well software performs in a particular hardware/software/operating system/network/etc.testing how well a system recovers from crashes.often taken to mean a creative.term often used interchangeably with 'stress' and 'load' testing. input of large numerical values. not by programmers or testers. and intended use of software. and will depend on the targeted end-user or customer. by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected.testing of an application when development is nearing completion. * Failover testing . * User acceptance testing .a method for determining if a set of test data or test cases is useful. but often taken to mean that the testers have significant understanding of the software before testing it. * Recovery testing .testing of full. may require sophisticated testing techniques.testing when development and testing are essentially completed and final bugs and problems need to be found before final release. minor design changes may still be made as a result of such testing. especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans. such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.term often used interchangeably with 'load' and 'performance' testing. surveys. * Alpha testing . video recording of user sessions. Clearly this is subjective. * Install/uninstall testing . * Compatability testing . User interviews.re-testing after fixes or modifications of the software or its environment.determining if software is satisfactory to an end-user or customer. culture. willful damage. partial. * Mutation testing . * Comparison testing .* Regression testing . etc. It can be difficult to determine how much re-testing is needed. hardware failures.testing driven by an understanding of the environment. etc.typically used interchangeably with 'recovery testing' * Security testing . Typically done by end-users or others. environment. not by programmers or testers. Typically done by end-users or others. * Stress testing .testing for 'user-friendliness'. * Context-driven testing . * Ad-hoc testing .comparing software weaknesses and strengths to competing products. * Beta testing . Programmers and testers are usually not appropriate as usability testers. testers may be learning the software as they test it. * Load testing .final testing based on specifications of the end-user or customer. Proper implementation requires large computational . heavy repetition of certain actions or inputs. or upgrade install/uninstall processes. * Exploratory testing . * Usability testing .testing how well the system protects against unauthorized internal or external access. For example.testing an application under heavy loads. 
or other catastrophic problems.similar to exploratory testing.

networked bug-tracking tools and change management tools.preferably electronic.resources. If possible. 12.use both upper and lower case. * Realistic schedules .the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free. work closely with customers/end-users to manage expectations. design. * Use descriptive variable names . stockholders. here are some typical ideas to consider in setting rules/standards. and is readable and maintainable. changes. 'buddy checks' code analysis tools. intranet capabilities. Some organizations have coding 'standards' that all developers are supposed to adhere to. It will depend on who the 'customer' is and their overall influence in the scheme of things. * Stick to initial requirements as much as possible . This will provide them a higher comfort level with their requirements decisions and minimize excessive changes later on. delivered on time and within budget. If changes are necessary.What are 5 common problems in the software development process? * Solid requirements . be consistent in naming conventions. * Adequate testing . etc. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. customer contract officers.start testing early on. Each type of 'customer' will have their own slant on 'quality' . promote teamwork and cooperation. There are also various theories and metrics. re-testing.clear. avoid abbreviations.be prepared to defend against excessive changes and additions once development has begun.What is software 'quality'? * Quality software is reasonably bug-free. plan for adequate time for testing and bug-fixing. magazine columnists. re-test after fixes or changes. make extensive use of group communication tools . 13. but everyone has different ideas about what's best. For C and C++ coding. meets requirements and/or expectations. can be used to check for problems and enforce standards. use protoypes if possible to clarify customers' expectations. attainable. future software maintenance engineers. testable requirements that are agreed to by all players.e-mail. they should be adequately reflected in related schedule changes. However. is bug free. the development organization's. 'Peer reviews'. such as McCabe Complexity metrics. In 'agile'-type environments. avoid abbreviations. * Use descriptive function and method names . and is maintainable. etc. detailed. * Communication . not paper. and documentation. quality is obviously a subjective term. Use prototypes to help nail down requirements. * Management/accountants/testers/salespeople. use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line).require walkthroughs and inspections when appropriate. groupware. . insure that information/documentation is available and up-to-date . complete. and be prepared to explain consequences. testing. customer acceptance testers. use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line). etc. these may or may not apply to a particular situation: * Minimize or eliminate use of global variables.allow adequate time for planning. personnel should be able to complete the project without burning out.What is 'good code'? * * 'Good code' is code that works. 11.use both upper and lower case. bug fixing. be consistent in naming conventions. continuous coordination with customers/end-users is necessary. 
12. What is software 'quality'?
* Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits, while an end-user might define quality as user-friendly and bug-free.

13. What is 'good code'?
* 'Good code' is code that works, is bug free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what is too many or too few rules. There are also various theories and metrics, such as McCabe Complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and enforce standards. For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation (a short sketch applying several of them appears after the next question):
* Minimize or eliminate use of global variables.
* Use descriptive function and method names - use both upper and lower case, avoid abbreviations, and use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
* Use descriptive variable names - use both upper and lower case, avoid abbreviations, and use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
* Function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable.
* Function descriptions should be clearly spelled out in comments preceding a function's code.
* Organize code for readability.
* Use whitespace generously - vertically and horizontally.
* Each line of code should contain 70 characters max.
* One code statement per line.
* Coding style should be consistent throughout a program (e.g. use of brackets, indentations, naming conventions, etc.).
* In adding comments, err on the side of too many rather than too few; a common rule of thumb is that there should be at least as many lines of comments (including header blocks) as lines of code.
* No matter how small, an application should include documentation of the overall program function and flow (even a few paragraphs is better than nothing), or if possible a separate flow chart and detailed program documentation.
* Make extensive use of error handling procedures and status and error logging.
* For C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and complexity of the application); minimize use of multiple inheritance, and minimize use of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading).
* For C++, keep class methods small; less than 50 lines of code per method is preferable.
* For C++, make liberal use of exception handlers.

14. What is 'good design'?
* 'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and status logging capability; and works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements. For programs that have a user interface, it's often a good idea to assume that the end user will have little computer knowledge and may not read a user manual or even the on-line help. Some common rules-of-thumb include:
* The program should act in a way that least surprises the user.
* It should always be evident to the user what can be done next and how to exit.
* The program shouldn't let the users do something stupid without warning them.
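As a rough, hedged illustration of several of the 'good code' guidelines above (descriptive names, a small well-commented function, error handling, status logging, and exception handlers), consider the following sketch. The function names and the log file are hypothetical examples, not part of any standard:

    #include <fstream>
    #include <stdexcept>
    #include <string>

    // Appends a status or error message to a hypothetical log file.
    // Illustrates the 'status and error logging' guideline.
    void AppendMessageToStatusLog(const std::string& message) {
        std::ofstream statusLog("status.log", std::ios::app);
        statusLog << message << '\n';
    }

    // Divides an order total by an item count, with explicit error
    // handling instead of silently producing a bad result
    // (descriptive name, small size, header comment, one statement per line).
    double DivideOrderTotalByItemCount(double orderTotal, int itemCount) {
        if (itemCount == 0) {
            AppendMessageToStatusLog("Error: item count was zero");
            throw std::invalid_argument("item count must be non-zero");
        }
        return orderTotal / static_cast<double>(itemCount);
    }

    int main() {
        try {
            double averagePrice = DivideOrderTotalByItemCount(99.0, 3);
            AppendMessageToStatusLog("Average price computed successfully");
            return averagePrice > 0.0 ? 0 : 1;
        } catch (const std::exception& problem) {
            // Liberal use of exception handlers, as suggested for C++.
            AppendMessageToStatusLog(problem.what());
            return 1;
        }
    }

The point is not these particular rules, but that descriptive names, comments, and explicit error paths make the code's intent easier to review and test.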

15. What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?
* SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
* CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.
* Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes are in place; successes may not be repeatable.
* Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.
* Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
* Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
* Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.
* Perspective on CMM ratings: during 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was in Software Quality Assurance.
* ISO = 'International Organization for Standardization' - the ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a) Q9001-2000 - Quality Management Systems: Requirements; (b) Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary; (c) Q9004-2000 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed. Also see http://www.iso.ch/ for the latest information. In the U.S., the standards can be purchased via the ASQ web site at http://e-standards.asq.org/
* IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard of Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
* ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).
* Other software development/IT management process assessment methods besides CMMI and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.

16. What is the 'software life cycle'?
* The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.

17. Will automated testing tools make testing easier?
* Possibly. For small projects, the time needed to learn and implement them may not be worth it. For larger projects, or on-going long-term projects, they can be valuable.

* A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded' and the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, the application might then be retested by just 'playing back' the 'recorded' actions and comparing the logged results to check the effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation and analysis of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms.
* Another common approach for automation of functional testing is 'data-driven' or 'keyword-driven' automated testing, in which the test drivers are separated from the data and/or actions utilized in testing (an 'action' would be something like 'enter a value in a text box'). Test drivers can be in the form of automated test tools or custom-written testing software. The data and actions can be more easily maintained - such as via a spreadsheet - since they are separate from the test drivers. The test drivers 'read' the data/action information to perform the specified tests. This approach can enable more efficient control, development, documentation, and maintenance of automated tests/test cases (a minimal driver sketch follows this answer).
* Other automated tools can include:
* Code analyzers - monitor code complexity, adherence to standards, etc.
* Coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.
* Memory analyzers - such as bounds-checkers and leak detectors.
* Load/performance test tools - for testing client/server and web applications under various load levels.
* Web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.
* Other tools - for test case management, documentation management, bug reporting, and configuration management.
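To make the 'data-driven'/'keyword-driven' idea above concrete, here is a minimal sketch of a custom-written test driver, not any particular tool's API. The keywords ('enter', 'verify'), the test-data file name, and the stand-in application fields are all hypothetical; a real driver would exercise the application under test through its GUI or its API instead of an in-memory map.

    #include <fstream>
    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>

    // Stand-in for the application under test: fields a real driver
    // would set and read through the application's GUI or API.
    std::map<std::string, std::string> applicationFields;

    // Keyword implementations: 'enter' stores a value in a field,
    // 'verify' compares a field's current value against an expected one.
    bool ExecuteTestStep(const std::string& keyword,
                         const std::string& field,
                         const std::string& value) {
        if (keyword == "enter") {
            applicationFields[field] = value;
            return true;
        }
        if (keyword == "verify") {
            return applicationFields[field] == value;
        }
        std::cerr << "Unknown keyword: " << keyword << std::endl;
        return false;
    }

    int main() {
        // Hypothetical spreadsheet export: one "keyword,field,value" row per line.
        std::ifstream testData("test_steps.csv");
        std::string row;
        int stepNumber = 0;
        int failures = 0;

        while (std::getline(testData, row)) {
            ++stepNumber;
            std::istringstream columns(row);
            std::string keyword, field, value;
            std::getline(columns, keyword, ',');
            std::getline(columns, field, ',');
            std::getline(columns, value);

            // The driver only 'reads' data/actions; test cases can be
            // maintained in the spreadsheet without changing the driver.
            if (!ExecuteTestStep(keyword, field, value)) {
                std::cerr << "Step " << stepNumber << " failed: " << row << std::endl;
                ++failures;
            }
        }
        return failures == 0 ? 0 : 1;
    }

A sample test_steps.csv for this sketch might contain rows such as 'enter,username,tester1' followed by 'verify,username,tester1'; adding or changing test cases then means editing the data file, not the driver code.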