
Quality Assurance (QA) Guide

Get your JOB in 30 days!!!

Disclaimer
This material has been prepared from the experience of a few IBOs, who take no responsibility for the authenticity or accuracy of the material provided. WinRunner, LoadRunner, TestDirector and QTP are products and trademarks of Mercury Interactive Corporation. Most of the material has been collected from various online resources, which may not be cited in the text. Please direct further questions to the person who loaned this material to you. You may find some spelling mistakes, because English is the author's third language.

Types of Testing:

Acceptance Testing: Testing the system with the intent of confirming readiness of the product and customer acceptance.

Ad Hoc Testing: Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Sometimes ad hoc testing is referred to as exploratory testing.

Alpha Testing: Testing after code is mostly complete or contains most of the functionality, and prior to users being involved. Sometimes a select group of users is involved. More often this testing is performed in-house or by an outside testing firm in close cooperation with the software engineering department.

Automated Testing: Software testing that utilizes a variety of tools to automate the testing process, diminishing the importance of having a person test manually. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tool and the software being tested to set up the tests.

Beta Testing: Testing after the product is code complete. Betas are often widely distributed, or even distributed to the public at large, in hopes that recipients will buy the final product when it is released.

Black Box Testing: Testing software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document.

Compatibility Testing: Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.

Configuration Testing: Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations, as well as on different operating systems and software.

Functional Testing: Testing two or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that the module performs its intended functions as stated in the specification, and establishing confidence that a program does what it is supposed to do.

Independent Verification and Validation (IV&V): The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software. A term often applied to government work, or where the government regulates the products, as in medical devices.

Installation Testing: Testing with the intent of determining whether the product will install on a variety of platforms and how easily it installs.

Integration Testing: Testing two or more modules or functions together with the intent of finding interface defects between the modules or functions. Testing completed as a part of unit or functional testing; sometimes it becomes its own standalone test phase. On a larger level, integration testing can involve putting together groups of modules and functions with the goal of completing and verifying that the system meets the system requirements. (see system testing)

Load Testing: Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation.

Performance Testing: Testing with the intent of determining how quickly a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.

Pilot Testing: Testing that involves the users just before actual release, to ensure that users become familiar with the release contents and ultimately accept it. Often considered a Move-to-Production activity for ERP releases, or a beta test for commercial products. Typically involves many users, is conducted over a short period of time, and is tightly controlled. (see beta testing)

Regression Testing: Testing with the intent of determining whether bug fixes have been successful and have not created any new problems. This type of testing is also done to ensure that no degradation of baseline functionality has occurred.

Security Testing: Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.

Software Testing: The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The organization and management of the individuals or groups doing this work is not relevant. This term is often applied to commercial products such as internet applications. (contrast with independent verification and validation)

Stress Testing: Testing with the intent of determining how well a product performs when a load is placed on the system resources that nears and then exceeds capacity.

System Integration Testing: Testing a specific hardware/software installation. This is typically performed on a COTS (commercial off the shelf) system, or any other system comprised of disparate parts where custom configurations and/or unique installations are the norm.

User Acceptance Testing: See Acceptance Testing.

White Box Testing: Testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose.

How do you classify which test cases should be automated and which should remain manual? It depends on how you want to execute your automated tests (attended or unattended), what type of tool you are using, and several other issues. We select our tests by eliminating any that require hardware changes, although with enough test platforms you could overcome this by spreading the tests across multiple platforms. Beyond that, we ensure the manual tests can be automated efficiently by restricting each case to 25 or 30 simple, single-action steps with one or two checkpoints to validate. If data, logs or lists are being validated, a baseline must be supplied to compare against; it can be updated later, but something is needed to validate against initially. How deep automation goes depends on the project and on what the client wants (and on the budget, too).

It is a fact that not everything can be automated, and equally that not everything can be done manually (load and stress testing with bulk entities, for example). Areas commonly considered for automation include:
1. Regression testing
2. Bitmap testing
3. Font testing
4. Database testing
5. Performance testing (stress and load)
6. Network testing
7. GUI testing
8. Synchronization
9. Throughput
10. Transaction monitoring
11. Compatibility across different browsers
12. Mouse capturing
13. Event-driven testing
14. Keystrokes
15. Batch testing
16. Testing multiple documents
17. Record and playback
and so on.

The rest can be considered for manual testing. Even for the areas above, not everything can be automated in every environment.

Requirements must be:
1. Design independent
2. Unambiguous
3. Precise
4. Traceable
5. Understandable
6. Verifiable
7. Prioritized

In reference to the SRS (Software Requirements Specification) document, they must be:
1. Complete
2. Consistent
3. Organized
4. Modifiable

Development environment: The complete integrated set of hardware and associated software tools used by the project team to develop an application.

Testing environment: The complete integrated set of hardware and associated software tools used by the system test team to perform system testing on an application.

The purpose and scope statement is usually written by the company and directed at its product line. The purpose is to help deliver a quality product by extensively testing and validating that the product conforms to the original/updated requirements, as set forth by the customer and sponsors, prior to delivery of the product to the customer. The scope is written to the level of the resources available for ensuring a quality product.

What can be done if requirements are changing continuously? Use Evolutionary methods (Evo), not just any Agile method. Requirements will change in any project, even if the original requirements were not bad, because during the project you learn more, your customer learns more, and the circumstances will change.

Evo ensures that you do not simply follow every proposed requirements change, but constantly check what the most important things are and do only those. If you encounter a requirements change, you check its importance relative to everything else you have to do, and implement it only when nothing more important is left. At the same time, you oversee the total project, making sure you never overrun any agreed timings or budgets.

In WinRunner, what is the major difference between GUI map file per test mode and global GUI map file mode? WinRunner has two modes:
1. Global GUI map file mode uses a single map file, holding all the GUI object descriptions, to test the overall application.
2. In GUI map file per test mode there is a separate GUI map file for each test; the appropriate file must be loaded each time before the objects in it can be tested.

Why do we need functional testing in banks? To avoid mistakes in account transactions.

Use case: a design specification that describes the flow of the system. Test case: a test specification that describes the areas to be tested. A use case describes a sequence of actions a system performs that yields an observable result of value to a particular actor. You derive test cases from use cases, and you identify requirements through use cases.

An example of a low-priority bug with high severity while testing a web application, and vice versa: suppose you log out of a web page but it does not actually log you off. Ordinarily that is low priority, but if the page handles online payments the severity is high, because someone else could misuse the session. Another example: the display of alert messages is important. If, instead of the appropriate alert message, an error alert message is displayed, such a bug can be categorized as low priority with high severity.

Smoke test: a smoke test is a series of test cases run prior to commencing full-scale testing of an application. The idea is to test the high-level features of the application to ensure that the essential features work. If they do not work, there is no need to continue testing the details of the application, so the testing team will refuse to do any additional testing until all smoke test cases pass. This puts pressure on the development team to ensure that the testing team does not waste valuable testing time on code that is not ready to be tested. (A minimal sketch of such a gate appears below.)
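As an illustration only (the document's own tools are WinRunner-era), a modern test runner can express this gate with markers. The sketch below assumes Python's pytest; the "smoke" marker name and the test bodies are invented for illustration.

    # Hypothetical smoke-test gate using pytest markers.
    import pytest

    @pytest.mark.smoke
    def test_application_launches():
        # A smoke case: an essential feature that must work before deeper testing.
        app = object()        # stand-in for launching the application under test
        assert app is not None

    @pytest.mark.smoke
    def test_login_page_loads():
        status_code = 200     # stand-in for an HTTP check against the login page
        assert status_code == 200

    def test_detailed_report_formatting():
        # A detailed case: only worth running once all smoke cases pass.
        assert "total: 10".startswith("total")

Running pytest -m smoke executes only the marked cases; if any fail, the build is returned to development before detailed testing begins.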

Defect rating system: ROI (return on investment) is the name of the game. Compare the cost of not repairing a defect against the cost of repairing it. If they are equal, or repairing costs more, don't repair. If the cost of not repairing is at least twice the cost of repairing, you could repair, but do so only if there is absolutely no more important work with an even better ROI. In short: always work on the highest priority, that is, whatever yields the highest ROI. (The sketch after this section turns the rule into code.)

Defect seepage: a metric that measures the number of defects that seep, or migrate, from one phase of a project to another, such as a defect that was not found in unit testing or integration testing and is found in system testing.

What is the difference between severity and priority? Severity describes the nature of the defect, which can be one of the following: system crash, functionality failure, cosmetic issue. Priority describes the impact the defect has on the end customer. A defect of high severity is not necessarily a high-priority defect: for example, a system crash may occur only through steps the end customer would never follow. On the other hand, a cosmetic issue in the EULA (end user license agreement) may be a defect of very high priority.

How would you differentiate between client/server testing and web testing? Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware and servers, so testing requirements can be extensive. When time is limited, the focus should be on integration and system testing; additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. Web testing: web sites are essentially client/server applications, with web servers and browser clients. You have to consider many issues, such as interactions within HTML pages, TCP/IP communications, firewalls, applications running in web pages and applications running on the server side, browser versions, the variety of browsers, connection speeds, etc. Additionally, you need to check load, performance and security, internal and external link functions, web server response time, and database query response time. A lot more can be added under web testing.
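The repair-or-not rule flagged above is just an ROI comparison. A minimal Python sketch, with the thresholds taken directly from the text (the function name and return strings are invented for illustration):

    # Defect-repair ROI rule: if not repairing costs no more than repairing,
    # leave the defect; if it costs at least twice as much, the fix is a
    # candidate - but only when nothing with a higher ROI is waiting.
    def repair_decision(cost_of_not_repairing: float, cost_of_repairing: float) -> str:
        if cost_of_not_repairing <= cost_of_repairing:
            return "do not repair"
        if cost_of_not_repairing >= 2 * cost_of_repairing:
            return "repair, unless more important work exists"
        return "judgment call - compare against other work"

    print(repair_decision(500.0, 1000.0))   # do not repair
    print(repair_decision(4000.0, 1000.0))  # repair, unless more important work exists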

Steps needed to develop and run software tests:
* Obtain requirements, functional design and internal design specifications, and other necessary documents.
* Obtain budget and schedule requirements.
* Determine project-related personnel and their responsibilities, reporting requirements, and required standards and processes (such as release processes, change processes, etc.).
* Identify the application's higher-risk aspects, set priorities, and determine the scope and limitations of tests.
* Determine test approaches and methods: unit, integration, functional, system, load, usability tests, etc.
* Determine test environment requirements (hardware, software, communications, etc.).
* Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.).
* Determine test input data requirements.
* Identify tasks, those responsible for tasks, and labor requirements.
* Set schedule estimates, timelines and milestones.
* Determine input equivalence classes, boundary value analyses and error classes (see the sketch after this section).
* Prepare the test plan document and have needed reviews/approvals.
* Write test cases.
* Have needed reviews/inspections/approvals of test cases.
* Prepare the test environment and testware; obtain needed user manuals, reference documents, configuration guides and installation guides; set up test tracking processes, logging and archiving processes; set up or obtain test input data.
* Obtain and install software releases.
* Perform tests.
* Evaluate and report results.
* Track problems/bugs and fixes.
* Retest as needed.
* Maintain and update test plans, test cases, test environment and testware through the life cycle.

Bugs vs. defects: bugs and defects mean the same thing, viewed two ways. When the organization finds an error during development, while the product is still within the company, we call it a bug; when an error is found after the product has been delivered to the customer, it is called a defect. Hence the intensity, priority or degree matters a lot.

Corrective action is fixing a bug/defect, whereas preventive action is preventing defects from recurring. Corrective action is taken after a bug/defect is found; preventive actions are the measures taken so that the fewest possible defects or errors occur. For example, analyzing the database architecture at the design stage for mistakes or improvements might be termed a preventive action, while fixing a bug and its impacted areas might be termed a corrective action.

White box testing is also known as glass box, structural, clear box and open box testing.

Test bed: 1) An environment that contains the integral hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test of a logically or physically separate component. 2) A suite of test programs used in conducting the test of a component or system.
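The equivalence class / boundary value step flagged in the list above can be illustrated with a small Python sketch; the 18-65 age field and the class labels are hypothetical:

    # Boundary values and equivalence classes for a hypothetical input field
    # that accepts ages 18-65.
    def boundary_values(low: int, high: int) -> list[int]:
        # Classic boundary selection: just below, on, and just above each edge.
        return [low - 1, low, low + 1, high - 1, high, high + 1]

    # One representative value per equivalence class of the same field.
    equivalence_classes = {
        "valid (18-65)": 40,
        "invalid: below range": 5,
        "invalid: above range": 70,
        "error class: non-numeric": "abc",
    }

    print(boundary_values(18, 65))  # [17, 18, 19, 64, 65, 66]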

Test case: the definition of a test case differs from company to company, engineer to engineer, and even project to project. A test case usually includes an identified set of information about observable states, conditions, events and data, including inputs and expected outputs.

Test procedure: the formal or informal procedure that will be followed to execute a test. This is usually a written document that allows others to execute the test with a minimum of training: the detailed procedure for execution of test cases.

Test case vs. test script: the classic definition of a test case has three components: a test condition (triggering event), an expected response, and a process to perform the test condition and evaluate the result. So a test case can be very small and standalone in nature, or it can be combined with other test cases to achieve an overall test. That is where the idea of a test script comes in. A test script is a sequence of test actions and expected results that validates a process performed using the application. You can visualize a test script as a sequential collection of test cases; the key word is "sequential". If what you are testing is process-based, then a test script is useful; otherwise, you will probably just want to use a test case. (A sketch of the distinction appears after this section.)

Q: What is verification?
A: Verification ensures the product is designed to deliver all functionality to the customer. It typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications. This can be done with checklists, issues lists, walkthroughs and inspection meetings.

Q: What is validation?
A: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product. Validation typically involves actual testing and takes place after verifications are completed.

Q: What is the difference between verification and validation?
A: Verification takes place before validation, and not vice versa. Verification evaluates documents, plans, code, requirements and specifications; validation, on the other hand, evaluates the product itself. The inputs of verification are checklists, issues lists, walkthroughs and inspection meetings, reviews and meetings; the input of validation is the actual testing of an actual product. The output of verification is a nearly perfect set of documents, plans, specifications and requirements; the output of validation is a nearly perfect actual product.

Q: What is quality assurance (QA)?
A: Quality assurance ensures that all parties concerned with the project adhere to the process and procedures, standards and templates, and test readiness reviews.
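Returning to the test case vs. test script distinction above, a minimal Python sketch in which each function is a standalone test case and the script runs them in sequence (the shopping-cart process is invented for illustration):

    # Two standalone test cases, each with a condition and an expected response.
    def case_add_item(cart: list) -> bool:
        cart.append("book")            # test condition: add an item
        return len(cart) == 1          # expected response: cart holds one item

    def case_remove_item(cart: list) -> bool:
        cart.remove("book")            # test condition: remove the item
        return len(cart) == 0          # expected response: cart is empty

    def checkout_script() -> bool:
        # The test script: a sequential collection of test cases validating
        # one process end to end. Order matters - that is what makes it a script.
        cart: list = []
        return case_add_item(cart) and case_remove_item(cart)

    print(checkout_script())  # True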

QA depends on clients and projects, team leads or managers, feedback to developers, and communication among customers, managers, developers, test engineers and testers.

Q: What is software quality assurance?
A: Software quality assurance is oriented to prevention. It involves the entire software development process. Prevention means monitoring and improving the process, making sure any agreed-upon standards and procedures are followed, and ensuring problems are found and dealt with. Software testing, by contrast, is oriented to detection. Testing involves operating a system or application under controlled conditions and evaluating the results. Organizations vary considerably in how they assign responsibility for QA and testing; sometimes they are the combined responsibility of one group or individual. Also common are project teams that include a mix of test engineers, testers and developers working closely together, with overall QA processes monitored by project managers. It depends on what best fits the organization's size and business structure.

Q: Do automated testing tools make testing easier?
A: Yes and no. For larger projects, or ongoing long-term projects, they can be valuable. But for small projects, the time needed to learn and implement them is usually not worthwhile. A common type of automated tool is the record/playback type. For example, a test engineer clicks through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI and has an automated testing tool record and log the results. The recording is typically in the form of text, based on a scripting language that the testing tool can interpret. If a change is made (e.g. new buttons are added, or some underlying code in the application is changed), the application is re-tested by just playing back the recorded actions and comparing them to the logged results in order to check the effects of the change. One problem with such tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that keeping the scripts up to date becomes a very time-consuming task. Another problem is the interpretation of the results (screens, data, logs, etc.), which can also be time-consuming.

Q: What is the role of documentation in QA?
A: Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and for determining which document will have a particular piece of information. Use documentation change management, if possible.

Q: What about requirements?
A: Requirement specifications are important; indeed, one of the most reliable methods of guaranteeing problems in a complex software project is to have poorly documented requirement specifications. Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable and testable. A non-testable requirement would be, for example, "user-friendly", which is too subjective. A testable requirement would be something such as "the product shall allow the user to enter their previously assigned password to access the application". Care should be taken to involve all of a project's significant customers in the requirements process. Customers could be in-house or external and could include end-users, customer acceptance test engineers, testers, customer contract officers, customer management, future software maintenance engineers and salespeople; anyone whose unmet expectations could later derail the project should be included as a customer, if possible. In some organizations, requirements may end up in high-level project plans, functional specification documents, design documents, or other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by test engineers in order to properly plan and execute tests. Without such documentation there will be no clear-cut way to determine whether a software application is performing correctly.

Q: How do you create a test plan/design?
A: Test scenarios and/or cases are prepared by reviewing the functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, data to be used for testing, and expected results, including database updates, file outputs and report results. Generally speaking...
* Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
* Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.
* It is the test team who, with the assistance of developers and clients, develops test cases and scenarios for integration and system testing.
* Test scenarios are executed through the use of test procedures or scripts.
* Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
* Test procedures or scripts include the specific data that will be used for testing the process or transaction.
* Test procedures or scripts may cover multiple test scenarios.
* Test scripts are mapped back to the requirements, and traceability matrices are used to ensure each test is within scope (see the sketch after this list).
* Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.
* Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.
* A pre-test meeting is held to assess the readiness of the application, the environment and the data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
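A traceability matrix, as mentioned in the list above, can be as simple as a mapping from requirement IDs to test case IDs. A minimal Python sketch with invented IDs:

    # Requirements-to-tests traceability matrix (all IDs hypothetical).
    traceability = {
        "REQ-001 (user can log in)":       ["TC-01", "TC-02"],
        "REQ-002 (password is validated)": ["TC-03"],
        "REQ-003 (account lockout)":       [],       # gap: no coverage yet
    }

    # Every requirement must map to at least one test, and every test must map
    # back to a requirement - otherwise the test is out of scope.
    uncovered = [req for req, tests in traceability.items() if not tests]
    print("Requirements without tests:", uncovered)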

Inputs for this process:
* Approved test strategy document.
* Test tools, or automated test tools, if applicable.
* Previously developed scripts, if applicable.
* Test documentation problems uncovered as a result of testing.
* A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. the software design document, source code and software complexity data.

Outputs for this process:
* Approved documents of test scenarios, test cases, test conditions and test data.
* Reports of software design issues, given to software developers for correction.

Q: What is a test case?
A: A test case is a document that describes an input, action or event and its expected result, in order to determine whether a feature of an application is working correctly. A test case should contain particulars such as a...
* Test case identifier;
* Test case name;
* Objective;
* Test conditions/setup;
* Input data requirements/steps; and
* Expected results.
Please note, the process of developing test cases can help find problems in the requirements or design of an application, since it requires you to think completely through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle, if possible.

Q: What do test case templates look like?
A: Software test cases go in a document that describes inputs, actions or events and their expected results, in order to determine whether all features of an application are working correctly. Test case templates contain all the particulars of every test case. Often these templates take the form of a table. One example is a 6-column table, where column 1 is the "Test Case ID Number", column 2 is the "Test Case Name", column 3 is the "Test Objective", column 4 is the "Test Conditions/Setup", column 5 is the "Input Data Requirements/Steps", and column 6 is the "Expected Results" (see the sketch after this section). All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help readers learn where information is located, making it easier for users to find what they want. Lastly, with standards and templates, information will not be accidentally omitted from a document.
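As a hypothetical instance of the 6-column template just described, one row might be recorded like this (all values invented):

    # One row of the 6-column test case template, as a Python dictionary.
    test_case = {
        "Test Case ID Number": "TC-042",
        "Test Case Name": "Login with valid credentials",
        "Test Objective": "Verify a registered user can access the application",
        "Test Conditions/Setup": "User 'tester' exists; application is running",
        "Input Data Requirements/Steps": [
            "1. Open the login page",
            "2. Enter username 'tester' and the assigned password",
            "3. Click Submit",
        ],
        "Expected Results": "The welcome page is displayed",
    }

    for column, value in test_case.items():
        print(f"{column}: {value}")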

Q: What should be done after a bug is found?
A: When a bug is found, it needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested. Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check that the fixes didn't create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.

Q: What is configuration management?
A: Configuration management (CM) covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries and patches, the changes made to them, and who makes the changes.

Q: What is software configuration management?
A: Software configuration management (SCM) is the control, and the recording, of changes made to the software and documentation throughout the software development life cycle (SDLC). SCM covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches and the changes made to them, and to keep track of who makes the changes.

Q: What are some of the software configuration management tools?
A: Software configuration management tools include Rational ClearCase, DOORS, PVCS and CVS, among many others. Rational ClearCase is a popular software tool, made by Rational Software, for revision control of source code. DOORS, or "Dynamic Object Oriented Requirements System", is a requirements version control software tool. CVS, or "Concurrent Versions System", is a popular open source version control system that keeps track of changes in documents associated with software projects; CVS enables several, often distant, developers to work together on the same source code. PVCS is a version control tool and a competitor of SCCS, an original UNIX program based on "diff", the UNIX command that compares the contents of two files (see the sketch after this section).
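The "diff" idea underlying these version control tools can be demonstrated with Python's standard difflib module; the two file versions below are invented:

    # Reproducing diff-style output with difflib to show the comparison idea
    # that tools like SCCS and CVS are built on.
    import difflib

    old = ["print('hello')\n", "total = 1\n"]
    new = ["print('hello world')\n", "total = 1\n", "print(total)\n"]

    # unified_diff yields the familiar -/+ change lines diff would print.
    for line in difflib.unified_diff(old, new, fromfile="v1.py", tofile="v2.py"):
        print(line, end="")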

Q: What if the software is so buggy it can't be tested at all?
A: In this situation the best bet is to have test engineers go through the process of reporting whatever bugs or problems initially show up, with the focus on critical bugs. Since this type of problem can severely affect schedules and indicates deeper problems in the software development process, such as insufficient unit testing, insufficient integration testing, poor design, or improper build or release procedures, managers should be notified and provided with some documentation as evidence of the problem.

Q: Why are there so many software bugs?
A: Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code, and/or bugs in the tools used in software development.
* There are unclear software requirements because there is miscommunication about what the software should or shouldn't do.
* Software complexity: Windows interfaces, client/server and distributed applications, data communications, enormous relational databases and the sheer size of applications all contribute to the exponential growth in software and system complexity.
* Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.
* As for changing requirements, in some fast-changing business environments continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. The changes require redesign of the software and rescheduling of resources; some of the work already completed may have to be redone or discarded, and hardware requirements can be affected too.
* Bug tracking can result in errors because the complexity of keeping track of changes can itself introduce mistakes.
* Time pressure causes problems, because scheduling software projects is not easy and often requires a lot of guesswork; when deadlines loom and the crunch comes, mistakes will be made.
* Code documentation is tough to maintain, and it is also tough to modify code that is poorly documented. The result is bugs. Sometimes there is no incentive for programmers and software engineers to document their code and write clearly documented, understandable code. Sometimes developers get kudos for quickly turning out code, or feel they cannot have job security if everyone can understand the code they write, or believe that if the code was hard to write, it should be hard to read.
* Software development tools, including visual tools, class libraries, compilers and scripting tools, can introduce their own bugs. Other times the tools are poorly documented, which can create additional bugs.

Q: How do you know when to stop testing? A: This can be difficult to determine. Many modern software applications are so complex and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are... * Deadlines, e.g. release deadlines, testing deadlines; * Test cases completed with certain percentage passed; * Test budget has been depleted; * Coverage of code, functionality, or requirements reaches a specified point; * Bug rate falls below a certain level; or * Beta or alpha testing period ends.
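These stopping factors can be made explicit as an exit-criteria check. A sketch under assumed thresholds (every number here is an example, not a standard):

    # Turning common stop-testing factors into an explicit exit check.
    def can_stop_testing(pass_rate: float, coverage: float,
                         open_critical_bugs: int, budget_left: float) -> bool:
        return (pass_rate >= 0.95          # test cases completed, 95% passed
                and coverage >= 0.80       # code/requirements coverage target
                and open_critical_bugs == 0
                or budget_left <= 0)       # hard stop: budget depleted

    print(can_stop_testing(0.97, 0.85, 0, 10_000.0))  # True
    print(can_stop_testing(0.90, 0.85, 2, 10_000.0))  # False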

Q: What if there isn't enough time for thorough testing?
A: Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. Use risk analysis to determine where testing should be focused. This requires judgment skills, common sense and experience. The checklist should include answers to the following questions:
* Which functionality is most important to the project's intended purpose?
* Which functionality is most visible to the user?
* Which functionality has the largest safety impact?
* Which functionality has the largest financial impact on users?
* Which aspects of the application are most important to the customer?
* Which aspects of the application can be tested early in the development cycle?
* Which parts of the code are most complex and thus most subject to errors?
* Which parts of the application were developed in rush or panic mode?
* Which aspects of similar/related previous projects caused problems?
* Which aspects of similar/related previous projects had large maintenance expenses?
* Which parts of the requirements and design are unclear or poorly thought out?
* What do the developers think are the highest-risk aspects of the application?
* What kinds of problems would cause the worst publicity?
* What kinds of problems would cause the most customer service complaints?
* What kinds of tests could easily cover multiple functionalities?
* Which tests will have the best high-risk-coverage to time-required ratio?

Q: What can be done if requirements are changing continuously?
A: Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally, try to...
* Ensure the code is well commented and well documented; this makes changes easier for the developers.
* Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.
* In the project's initial schedule, allow for some extra time commensurate with probable changes.
* Move new requirements to a 'Phase 2' version of the application and use the original requirements for the 'Phase 1' version.
* Negotiate to allow only easily implemented new requirements into the project; move more difficult new requirements into future versions of the application.
* Ensure customers and management understand the scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide whether the changes are warranted; after all, that's their job.
* Balance the effort put into setting up automated testing against the expected effort required to redo the tests to deal with changes.
* Design some flexibility into automated test scripts;

* Focus initial automated testing on application aspects that are most likely to remain unchanged;
* Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs;
* Design some flexibility into test cases; this is not easily done; the best bet is to minimize the detail in the test cases, or set up only higher-level generic test plans;
* Focus less on detailed test plans and test cases and more on ad hoc testing, with an understanding of the added risk this entails.

Q: What if the application has functionality that wasn't in the requirements?
A: It may take serious effort to determine whether an application has significant unexpected or hidden functionality, which would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If it is not removed, design information will be needed to determine added testing or regression-testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects minor areas, such as small improvements in the user interface, it may not be a significant risk.

SMOKE vs. SANITY TESTING:
Smoke test: when a build is received, a smoke test is run to ascertain whether the build is stable and can be considered for further testing. Smoke testing can be done to test the stability of any interim build, and can be executed for platform qualification tests.
Sanity testing: once a new build is obtained with minor revisions, instead of doing a thorough regression, a sanity test is performed to ascertain that the build has indeed rectified the issues and that no further issues have been introduced by the fixes. It is generally a subset of regression testing: a group of test cases is executed that relates to the changes made to the application. Generally, when multiple cycles of testing are executed, sanity testing may be done during the later cycles, after thorough regression cycles.

1. Smoke: Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach in which all areas of the application are tested, without going too deep.
   Sanity: A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
2. Smoke: A smoke test is scripted, using either a written set of tests or an automated test.
   Sanity: A sanity test is usually unscripted.
3. Smoke: A smoke test is designed to touch every part of the application in a cursory way; it is shallow and wide.
   Sanity: A sanity test is used to determine that a small section of the application is still working after a minor change.
4. Smoke: Smoke testing is conducted to ensure that the most crucial functions of a program work, without bothering with finer details (such as build verification). It is a normal health check-up on a build of an application before taking it into in-depth testing.
   Sanity: Sanity testing is cursory testing, performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing; it verifies whether requirements are met, checking all features breadth-first.

HOW DO YOU HANDLE MULTIPLE TASKS ALL HAVING EQUAL PRIORITY?
Answer example: I handle multiple tasks with equal priority in this fashion: I organize the tasks at hand by their own priority to meet the required release dates, some tasks having more weight than others in their own right. Some tasks may take longer to execute; for example, an automated portion of an application may take several hours to complete its testing cycle unattended. While the unattended test is executing, I can execute additional manual tests to complete the current task, or focus on the next task in the priority list.

GIVE AN EXAMPLE OF A SITUATION WHERE YOU HAD TO DO THIS.
An example of the above situation is at my current position, where I am responsible for testing two applications. There are times when the development/release schedules overlap, creating a scenario where two releases (tasks) are equally important. To accomplish this, planning is the key. I organize my test environment in a manner that lets me execute unattended WinRunner tests on both applications. By doing this I can start to prioritize the manual tests that need to run and analyze any failures from the WinRunner results log. The concept is to let WinRunner handle the regression portion of the testing while I focus on any issues that arise, including status reports and meetings if needed.

WOULD YOU BE ABLE TO WORK IN A TEAM ENVIRONMENT?
Yes. I am currently working in a quality assurance team of six. I am comfortable working with both full-time Orasi employees and consultants.

WOULD YOU BE ABLE TO ADJUST TO CHANGING SCHEDULES?
Yes. Currently, the software release schedules are ever-changing. Sometimes projects are moved back as priorities shift, making other projects come to the forefront faster than expected.

HOW WOULD YOU HANDLE PRIORITIZING THE TASKS ASSIGNED TO YOU?
The way I handle prioritizing tasks is to group them into categories containing the most critical functionality and those with the least critical functionality. If I am responsible for a set of tasks, I may group them as follows:
Risk - exposure to high liability for the company.
Operational characteristics - highly used business functions versus less frequently used business functions.
Exposure - the probability of failure. Example: moving an application into a new environment may surface configuration issues, even though the software tested as functional.
In addition, I find out which tasks can be executed at the same time using automated testing tools, thus maximizing my efficiency.

HOW DO YOU FEEL ABOUT PUTTING IN EXTRA, UNEXPECTED HOURS TO MEET A DEADLINE?
The unexpected is always bound to happen, and you have to be prepared for it. My approach on this issue is to do what is needed to complete the project on time.

HOW DO YOU FEEL ABOUT WORKING IN A CLOSE TEAM ENVIRONMENT WHERE YOU INTERACT DAILY WITH YOUR MANAGER, DEVELOPERS, DBA AND PROJECT MANAGERS?
I am comfortable with this type of team atmosphere. Currently I interact with the Director of Quality Assurance, software developers (both full-time company employees and contractors) and quality assurance analysts (both full-time company employees and contractors) on a daily basis.

WHY IS YOUR SKILL SET APPROPRIATE AT ORASI (CLIENT COMPANY)?
I feel I am a match for Orasi (client company) because I have the needed experience in the quality assurance arena and in all of the technologies being utilized, specifically the Mercury Interactive products (WinRunner and TestDirector). One of my current roles is TestDirector administrator; in this position I have extensively customized projects using the VB workflow and created a corporate template for defects. I have experience in testing web applications, and outstanding analytical skills. My current environment has allowed me to develop the skills needed for a rapid or changing development life cycle.

WHAT TYPE OF INFORMATION WAS PROVIDED TO YOU TO WRITE TEST CASES AND TEST PLANS?
High-level requirements documentation, design specifications from development, and project and product management advice in the event a business rule was unclear, in order to develop a repeatable test.

DEFINE THE FOLLOWING: (SEE ANSWER KEY. NOTE: DEFINITIONS DO NOT HAVE TO BE EXACT WORD-FOR-WORD.)
Regression testing - verifying that modified software or systems perform as specified and that no unintended change has occurred in their operation. Automated regression testing allows tasks to be completed more efficiently.
Integration testing - verifying that each software unit interfaces correctly with other software functions.
User acceptance testing - test team interaction with end-users to validate business functions and usability, and to make sure that defect detection is included in the test process.
Unit testing - testing each individual component of an application. This is usually performed by developers when a component is deemed ready.
Black box testing - focuses on the external behavior of inputs/outputs and the application's ability to satisfy the user requirements.
Product development life cycle - the complete process software goes through, from evaluation and improvement to production and maintenance, thus completing its maturity.

Software Testing FAQ

Explain the software development life cycle.
There are seven stages of the software development life cycle:
1. Initiate the project - the users identify their business requirements.
2. Define the project - the software development team translates the business requirements into system specifications and assembles them into a system specification document.
3. Design the system - the system architecture team designs the system and writes the functional design document. During the design phase, general solutions are hypothesized, and data and process structures are organized.
4. Build the system - the system specifications and design documents are given to the development team, which codes the modules by following the requirements and design documents.
5. Test the system - the test team develops the test plan following the requirements. The software is built and installed on the test platform after developers have completed development and unit testing. The testers test the software by following the test plan.
6. Deploy the system - after user-acceptance testing and certification of the software, it is installed on the production platform. Demos and training are given to the users.
7. Support the system - after the software is in production, the maintenance phase of the life cycle begins. During this phase the development team works with the development documentation staff to modify and enhance the application, and the test team works with the test documentation staff to verify and validate the changes and enhancements to the application software.

What is software quality? Or: define software quality for me, as you understand it.
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It depends on who the 'customer' is and their overall influence in the scheme of things. Each type of 'customer' will have their own slant on 'quality': the accounting department might define quality in terms of profits, while an end-user might define quality as user-friendly and bug-free.

What's the role of documentation in QA?
Critical. (Note that documentation can be electronic, not necessarily paper.) QA practices should be documented such that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. There should ideally be a system for easily finding and obtaining documents and determining what documentation will have a particular piece of information. Change management for documentation should be used if possible.

At what stage of the SDLC does testing begin, in your opinion?
The QA process starts from the second phase of the software development life cycle, i.e. Define the project. Actual product testing is done in the Test the system phase (phase 5). During this phase the test team verifies the actual results against the expected results.

Explain the pre-testing phase, acceptance testing and testing phase.
Pre-testing phase:
1. Review the requirements document for testability: the tester will use the requirements document to write the test cases.
2. Establish the hard freeze date: the hard freeze date is the date after which the system test team will not accept any more software and documentation changes from the development team, unless they are fixes for severity 1 MRs. The date is scheduled so that the product test team has time for final regression.
3. Write the master test plan: this is written by the lead tester or test coordinator. The master test plan includes the entire testing plan, testing resources and testing strategy.
4. Set up the MR tool: the MR tool must be set up as soon as you know the different modules in the product, the developers and testers on the product, the hardware platform, and the operating system on which testing will be done. This information is available upon completion of the first draft of the architecture document. Both testers and developers are trained in how to use the system.
5. Set up the test environment: the test environment is set up on separate machines, database and network. This task is performed by the technical support team. The first time takes a while; afterwards, the same environment can be reused by later releases.
6. Write the test plan and test cases: a template and a tool are chosen for writing the test plan, test cases and test procedures. Expected results are organized in the test plan according to the feature categories specified in the requirements document. For each feature, positive and negative test cases are written (see the sketch after this list). Writing the test plan requires a complete understanding of the product and its interfaces with other systems. After the test plan is completed, a walkthrough is conducted with the developers and design team members to baseline the test plan document.
7. Set up the test automation tool: plan the test strategy for how to automate the testing, and decide which test cases will be executed for regression testing. Not all test cases will be executed during regression testing.
8. Identify acceptance test cases: select the subset expected to run on the first day of system test. These tests must pass for the product to be accepted into system test.
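Item 6 above calls for positive and negative test cases per feature. A minimal sketch, assuming a hypothetical validate_age feature and pytest-style test functions:

    # The feature under test (hypothetical): ages 18-65 are valid.
    def validate_age(value: str) -> bool:
        return value.isdigit() and 18 <= int(value) <= 65

    def test_valid_age_accepted():          # positive case: expected to succeed
        assert validate_age("30") is True

    def test_out_of_range_age_rejected():   # negative case: invalid value refused
        assert validate_age("70") is False

    def test_non_numeric_age_rejected():    # negative case: bad data type refused
        assert validate_age("thirty") is False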

Acceptance testing phase:
1. When the product enters system test, check that it has completed integration test and meets the integration test exit criteria.
2. Check the integration exit criteria and product test entrance criteria in the master test plan or test strategy documents.
3. Check the integration testing sign-off criteria sheet.
4. Coordinate the release with product development.
5. Determine how the code will be migrated from the development environment to the test environment.
6. Perform installation and acceptance testing.

Product testing phase:
1. Run the tests: execute the test cases and verify whether the actual functionality of the application matches the expected results.
2. Initial manual testing is recommended to isolate unexpected system behavior. Once the application is stable, automated regression tests can be generated.
3. Issue MRs upon detection of bugs.

What is the value of a testing group? How do you justify your work and budget?
All software products contain defects/bugs, despite the best efforts of their development teams. It is important for an outside party (one who is not a developer) to test the product from a viewpoint that is more objective and representative of the product user. The testing group tests the software from the requirements point of view, i.e. what is required by the user. The tester's job is to examine a program and see whether it fails to do what it is supposed to do, and also whether it does what it is not supposed to do.

What is a master test plan? What does it contain? Who is responsible for writing it? (Or: what is a test plan, and what did you include in one?)
A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it.

The following are some of the items that might be included in a test plan, depending on the particular project:
* Title
* Identification of software, including version/release numbers
* Revision history of the document, including authors, dates, approvals
* Table of contents
* Purpose of document, intended audience
* Objective of testing effort
* Software product overview
* Relevant related document list, such as requirements, design documents, other test plans, etc.
* Relevant standards or legal requirements
* Traceability requirements
* Relevant naming conventions and identifier conventions
* Overall software project organization and personnel/contact-info/responsibilities
* Test organization and personnel/contact-info/responsibilities
* Assumptions and dependencies
* Project risk analysis
* Testing priorities and focus
* Scope and limitations of testing
* Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc., as applicable
* Outline of data input equivalence classes, boundary value analysis, error classes
* Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
* Test environment validity analysis - differences between the test and production systems and their impact on test validity
* Test environment setup and configuration issues
* Software migration processes
* Software CM processes
* Test data setup requirements
* Database setup requirements
* Outline of system-logging/error-logging/other capabilities, and tools, such as screen capture software, that will be used to help describe and report bugs
* Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
* Test automation - justification and overview
* Test tools to be used, including versions, patches, etc.
* Test script/test code maintenance processes and version control
* Problem tracking and resolution - tools and processes
* Project test metrics to be used
* Reporting requirements and testing deliverables
* Software entrance and exit criteria
* Initial sanity testing period and criteria
* Test suspension and restart criteria
* Personnel allocation
* Personnel pre-training needs
* Test site/location
* Outside test organizations to be utilized, and their purpose, responsibilities, deliverables, contact persons and coordination issues
* Relevant proprietary, classified, security and licensing issues
* Open issues
* Appendix - glossary, acronyms, etc.

The team-lead or a Sr. QA Analyst is responsible to write this document. Why is test plan a controlled document? Because it controls the entire testing process. Testers have to follow this test plan during the entire testing process. What information you need to formulate test plan? Need the Business requirement document to prepare the test plan. What is MR? MR is a Modification Request also known as Defect Report, a request to modify the program so that program does what it is supposed to do. Why you write MR? MR is written for reporting problems/errors or suggestions in the software. What information does MR contain? OR Describe me to the basic elements you put in a defect report? OR What is the procedure for bug reporting? The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available. The following are items to consider in the tracking process: Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary. Bug identifier (number, ID, etc.) Current bug status (e.g., 'Released for Retest', 'New', etc.) The application name or identifier and version The function, module, feature, object, screen, etc. where the bug occurred Environment specifics, system, platform, relevant hardware specifics Test case name/number/identifier One-line bug description Full bug description Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool Names and/or descriptions of file/data/messages/etc. used in test

File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
Was the bug reproducible?
Tester name
Test date
Bug reporting date
Name of developer/group/organization the problem is assigned to
Description of problem cause
Description of fix
Code section/file/module/class/method that was fixed
Date of fix
Application version that contains the fix
Tester responsible for retest
Retest date
Retest results
Regression testing requirements
Tester responsible for regression tests
Regression testing results
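As a rough illustration only, the tracked items above might be modeled as a simple record like the following Java sketch; the class and field names here are hypothetical and are not taken from any particular tracking tool.

// A minimal sketch of a defect record modeling the tracked fields above.
public class DefectReport {
    String bugId;            // bug identifier (number, ID, etc.)
    String status;           // current bug status, e.g. "New", "Released for Retest"
    String application;      // application name or identifier and version
    String module;           // function/module/feature/screen where the bug occurred
    String environment;      // system, platform, relevant hardware specifics
    String testCaseId;       // test case name/number/identifier
    String summary;          // one-line bug description
    String description;      // full description and steps to reproduce
    int severity;            // 1 (critical) .. 5 (low)
    boolean reproducible;    // was the bug reproducible?
    String testerName;       // who found it, and when
    String assignedTo;       // developer/group the problem is assigned to
    String fixDescription;   // cause, fix, and the version containing the fix
    String retestResults;    // retest and regression testing results
}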

What is white box testing/unit testing?
Unit testing is the most 'micro' scale of testing, used to test particular functions or code modules. It is typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. It is not always easily done unless the application has a well-designed architecture with tight code, and may require developing test driver modules or test harnesses.
What is integration testing?
Integration testing is the testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
What is black box testing?
Black box testing is also called system testing and is performed by the testers. Here the features and requirements of the product, as described in the requirement document, are tested.
What knowledge do you require to do white box, integration and black box testing?
For white box testing you need to understand the internals of the module, such as its data structures and algorithms, and have access to the source code; for black box testing you only need to understand the functionality of the application.
What is regression testing?
Regression testing is re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

Why do we do regression testing?
New functionality can be added to an application, so the application has to be tested to see whether the added functionality has affected the existing functionality. Instead of retesting all the existing functionality, the baseline scripts created for it can be rerun.
How do we do regression testing?
Various automation testing tools, such as WinRunner, Rational Robot and SilkTest, can be used to perform regression testing.
What is integration testing?
Testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
What is the difference between exception and validation testing?
Validation testing aims to demonstrate that the software functions in a manner that can be reasonably expected by the customer, testing the software for conformance to the Software Requirements Specification. Exception testing deals with handling exceptions (unexpected events) while the AUT is run; basically, this testing involves how the control flow of the AUT changes when an exception arises.
What is the difference between a regression automation tool and a performance automation tool?
Regression testing tools capture tests and play them back at a later time; the capture and playback feature is fundamental to regression testing. Performance testing tools determine the load a server can handle, and must be able to simulate many users from one machine, schedule and synchronize different users, and measure the network load under different numbers of simulated users.
What are the roles of glass-box and black-box testing tools?
Glass-box testing, also called white-box testing, refers to testing with detailed knowledge of a module's internals. These tools therefore concentrate on the algorithms and data structures used in the development of modules, and generally test individual modules rather than the whole application. Black-box testing tools are used for testing the interface, functionality and performance of a system module or the whole system.
What was the test team hierarchy?
Project Leader
QA Lead
QA Analyst
Tester
Which MR tool did you use to write MRs?
Test Director
Rational ClearQuest

PVCS Tracker
What are the different automation tools you know?
Automation tools provided by Mercury Interactive - WinRunner, LoadRunner; Rational - Rational Robot; Segue - SilkTest.
What is the role of a bug tracking system?
A bug tracking system captures, manages and communicates changes, issues and tasks, providing basic process control to ensure coordination and communication within and across development and content teams at every step.
What is ODBC?
Open Database Connectivity (ODBC) is an open standard application programming interface (API) for accessing a database. ODBC is based on the Structured Query Language (SQL) Call-Level Interface. It allows programs to use SQL requests to access databases without having to know the proprietary interfaces to the databases. ODBC handles the SQL request and converts it into a request the individual database system understands.
Did you ever have problems working with developers?
No. I had a good rapport with the developers.
Describe your experience with code analyzers.
Code analyzers generally check for bad syntax, logic, and other language-specific programming errors at the source level. This level of testing is often referred to as unit testing and server component testing. I used code analyzers as part of white box testing.
How do you feel about cyclomatic complexity?
Cyclomatic complexity is a measure of the number of linearly independent paths through a program module; that is, a measure of the complexity of code related to the number of ways there are to traverse a piece of code. This determines the minimum number of inputs you need to test all ways to execute the program.
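As a small worked example (the grading method below is hypothetical, not from any real project), the cyclomatic complexity of a method can be estimated by counting its decision points and adding one:

public class ComplexityDemo {
    // 3 decision points (the three if conditions), so V(G) = 3 + 1 = 4:
    // at least 4 inputs are needed to cover every linearly independent path.
    static String grade(int score) {
        if (score < 0) return "invalid"; // decision 1
        if (score >= 90) return "A";     // decision 2
        if (score >= 75) return "B";     // decision 3
        return "C";
    }

    public static void main(String[] args) {
        int[] inputs = {-1, 95, 80, 50}; // one input per independent path
        for (int s : inputs) {
            System.out.println(s + " -> " + grade(s));
        }
    }
}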

Who should test your code?
A QA tester.
How do you survive chaos?
I survive by maintaining my calm and focusing on the work.
What processes/methodologies are you familiar with?
Waterfall methodology, spiral methodology. [Or talk about the customized methodology of the specific client.]
What will you do during the first day on the job?
Get acquainted with my team and the application.
Tell me about the worst boss you've ever had.
Fortunately I have always had the best bosses; talking in professional terms, I had no complaints about my bosses.
What is a successful product?
A bug-free product that meets the expectations of the user would make the product successful.
What do you like about Windows?
The interface and user friendliness. Windows is one of the best pieces of software I have ever used. It is user friendly and very easy to learn.
What is good code?
These are some important qualities of good code:
Cleanliness: Clean code is easy to read; this lets people read it with minimum effort, so that they can understand it easily.
Consistency: Consistent code makes it easy for people to understand how a program works; when reading consistent code, one subconsciously forms a number of assumptions and expectations about how the code works, so it is easier and safer to make modifications to it.
Extensibility: General-purpose code is easier to reuse and modify than very specific code with lots of hard-coded assumptions. When someone wants to add a new feature to a program, it will obviously be easier to do so if the code was designed to be extensible from the beginning.
Correctness: Finally, code that is designed to be correct lets people spend less time worrying about bugs and more time enhancing the features of a program.
Who are Kent Beck, Dr. Grace Hopper, and Dennis Ritchie?
Kent Beck is the author of Extreme Programming Explained and Smalltalk Best Practice Patterns. Dr. Grace Murray Hopper was a remarkable woman who grandly rose to the challenges of programming the first computers; during her lifetime as a leader in the field of software development concepts, she contributed to the transition from primitive programming techniques to the use of sophisticated compilers. Dennis Ritchie created the C programming language.
How will you begin to improve the QA process?
By following QA methodologies like waterfall or spiral instead of using ad hoc procedures.
What is UML and how is it used for testing?
The Unified Modeling Language (UML) is the industry-standard language for specifying, visualizing, constructing, and documenting the artifacts of software systems. It simplifies the complex process of software design, making a "blueprint" for construction. UML state charts provide a solid basis for test generation in a form that can be easily manipulated. This technique includes coverage criteria that enable highly effective tests to be developed. A tool has been developed that uses UML state charts produced by Rational Software Corporation's Rational Rose tool to generate test data.

What are CMM and CMMI? What is the difference?
The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.
The Capability Maturity Model Integration (CMMI) provides guidance for improving your organization's processes and your ability to manage the development, acquisition, and maintenance of products and services. CMM Integration places proven practices into a structure that helps your organization assess its organizational maturity and process area capability, establish priorities for improvement, and guide the implementation of these improvements. The new integrated model (CMMI) uses Process Areas (known as PAs), which are different from those in the previous model, and covers systems processes as well as software processes, rather than only software processes as in the SW-CMM.
Do you have a favorite QA book? Why?
Effective Methods for Software Testing by William E. Perry. It covers the whole software lifecycle, starting with testing the project plan and estimates and ending with testing the effectiveness of the testing process. The book is packed with checklists, worksheets and N-step procedures for each stage of testing.
When should testing be stopped?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
Deadlines (release deadlines, testing deadlines, etc.)
Test cases completed with a certain percentage passed
Test budget depleted
Coverage of code/functionality/requirements reaches a specified point
Bug rate falls below a certain level
Beta or alpha testing period ends
When do you start developing your automation tests?
First, the application has to be tested manually. Automation starts once the manual testing is over and a baseline is established.
What are positive scenarios?
Testing to see whether the application is doing what it is supposed to do.
What are negative scenarios?
Testing to see whether the application is not doing what it is not supposed to do.
What is quality assurance?
The set of support activities (including facilitation, training, measurement and analysis) needed to provide adequate confidence that processes are established and continuously improved in order to produce products that meet specifications and are fit for use.

What is the purpose of testing?
Testing provides information about whether or not a certain product meets the requirements.
What is the difference between QA and testing?
Quality assurance is the set of activities that are carried out to set standards and to monitor and improve performance so that the product is as effective and as safe as possible. Testing provides information about whether or not a certain product meets the requirements, and also about where the product fails to meet the requirements.
What are the benefits of test automation?
1. Fast
2. Reliable
3. Repeatable
4. Programmable
5. Comprehensive
6. Reusable
Describe some problems that you had with automation testing tools.
One of the problems with automation tools is object recognition.
Can test automation improve test effectiveness?
Yes, because of the advantages offered by test automation, which include repeatability, consistency, portability and extensive reporting features.
What is the main use of test automation?
Regression testing.
Does automation replace manual testing?
No, it does not. There could be several scenarios that cannot be automated, or that are so complicated that manual testing would be easier and more cost effective. Further, automation tools have several constraints with regard to the environments in which they run and the IDEs they support.
How will you choose a tool for test automation? OR How do we decide which automation tool we are going to use for regression testing?
Based on risk analysis: personnel skills, the company's software resources
Based on cost analysis
Comparing the tool's features with the test requirements
Support for the application's IDE, support for the application's environment/platform
What could go wrong with automation testing?
Several things. For example, script errors can cause a genuine bug to go undetected, or report a bug in the application when the bug does not actually exist.

How will you describe testing activities?
Test planning, scripting, execution, defect reporting and tracking, and regression testing.
What types of scripting techniques for test automation do you know?
Modular tests and data-driven tests.
What are good principles for test scripts?
1. Portable
2. Repeatable
3. Reusable
4. Maintainable
What types of documents do you need for QA, QC and testing?
The following is the list of documents required by QA and QC teams:
Business requirements
SRS
Use cases
Test plan
Test cases
What are the properties of a good requirement?
Understandable, clear, concise, and providing total coverage of the application.
What kinds of testing have you done?
Manual, automation, regression, integration, system, stress, performance, volume, load, white box, user acceptance, recovery.
Have you ever written test cases or did you just execute those written by others?
Yes, I was involved in preparing and executing test cases in all the projects.
How do you determine what to test?
Based on the user requirement document.
How do you decide when you have tested enough?
Using the exit criteria document we can decide that we have done enough testing.
Realizing you won't be able to test everything, how do you decide what to test first? OR What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:
Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?

Which functionality has the largest financial impact on users?
Which aspects of the application are most important to the customer?
Which aspects of the application can be tested early in the development cycle?
Which parts of the code are most complex, and thus most subject to errors?
Which parts of the application were developed in rush or panic mode?
Which aspects of similar/related previous projects caused problems?
Which aspects of similar/related previous projects had large maintenance expenses?
Which parts of the requirements and design are unclear or poorly thought out?
What do the developers think are the highest-risk aspects of the application?
What kinds of problems would cause the worst publicity?
What kinds of problems would cause the most customer service complaints?
What kinds of tests could easily cover multiple functionalities?
Which tests will have the best high-risk-coverage to time-required ratio?

Where do you get your expected results?
From the user requirement document.
If automating, what is your process for determining what to automate and in what order? OR Can you automate all the test scripts? Explain. OR How do you plan test automation? OR What criteria do you use when determining when to automate a test or leave it manual?
1. Tests that need to be run for every build of the application
2. Tests that use multiple data values for the same actions (data-driven tests)
3. Tests that require detailed information from application internals
4. Stress/load testing
If you're given a program that will average student grades, what kinds of inputs would you use?
Name of student, subject, score.
What is the exact difference between integration and system testing? Give me examples from your project.
Integration testing: An orderly progression of testing in which software components or hardware components, or both, are combined and tested until the entire system has been integrated.
System testing: The process of testing an integrated hardware and software system to verify that the system meets its specified requirements.
How do you go about testing a project?
1. Analyze the user requirement documents and other documents like software specifications, the design document, etc.
2. Write a master test plan which describes the scope, objectives, strategy, risks/contingencies and resources.
3. Write the system test plan and detailed test cases.

4. Execute test cases manually and compare actual results against expected results.
5. Identify mismatches and report defects to the development team using a defect reporting tool.
6. Track defects, and perform regression tests to verify that each defect is fixed and did not disturb other parts of the application.
7. Once all the defects are closed and the application is stabilized, automate the test scripts for regression and performance testing.
How do you go about testing a web application?
We check the user interface, functionality, interfaces, compatibility, load/stress, and security.
What is the difference between black and white box testing?
Black box testing: Functional testing based on requirements with no knowledge of the internal program structure or data. Also known as closed-box testing.
White box testing: Testing approaches that examine the program structure and derive test data from the program logic.
What is configuration management? Tools used?
Configuration management helps teams control their day-to-day management of software development activities as software is created, modified, built and delivered. Comprehensive software configuration management includes version control, workspace management, build management, and process control to provide better project control and predictability.
What are individual test cases and workflow test cases? Why do we do workflow scenarios?
An individual test is one that is for a single feature or requirement. However, it is important that related sequences of features be tested as well, as these correspond to units of work that a user will typically perform. It is important for the system tester to become familiar with what users intend to do with the product and how they intend to do it. Such testing can reveal errors that might not ordinarily be caught otherwise; for example, while each operation in a series might produce the correct results, it is possible that intermediate results get lost or corrupted between operations.
Which testing tools are you familiar with?
TestDirector, WinRunner, LoadRunner, Rational RequisitePro, Rational TestManager, Rational Robot, Rational ClearQuest and SilkTest.
How did you use automated testing tools in your job?
Automated testing tools are used for preparing and managing regression test scripts and load and performance tests.
What is data-driven automation?
If you want to perform the same operations with a different set of data, you can create a data-driven test with a loop; in each iteration the test is driven by a different set of data. In order for automation to use data to drive the test, we must substitute the fixed values in the test with variables.
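A minimal, tool-agnostic sketch of that idea in Java follows; the searchPlans() call and the city values are hypothetical stand-ins for the recorded action and the data table a tool such as WinRunner would supply.

import java.util.List;

public class DataDrivenSearchTest {
    // Hypothetical application call standing in for the recorded test step.
    static boolean searchPlans(String city) {
        return !city.isEmpty();
    }

    public static void main(String[] args) {
        // The data table: each iteration drives the same steps with a new value.
        List<String> cities = List.of("Bloomington", "Dallas", "Atlanta");
        for (String city : cities) {
            boolean ok = searchPlans(city); // the fixed value is replaced by a variable
            System.out.println(city + " -> " + (ok ? "PASS" : "FAIL"));
        }
    }
}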

Describe the difference between validation and verification.
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings.
Validation typically involves actual testing and takes place after verifications are completed.
The term 'IV&V' refers to Independent Verification and Validation.
Is coding required in SQA Robot?
Yes, to enhance the script for testing the business logic, and when we write user-defined functions.
What do you mean by setting up the test environment and providing full platform support?
We need to provide the following when setting up the environment:
1) Required software
2) Required hardware
3) Required testing tools
4) Required test data
After providing these we need to provide support for any problems that occur during the testing process.
What are the two ways to copy a file in Windows?
1) Using the copy menu item in the edit menu.
2) By dragging the file to wherever you want to copy it, such as a floppy.
If the functionality of an application had an inbuilt bug because of which the test script fails, would you automate the test?
No, we do the automation once the application has been tested manually and is stabilized. Automation is for regression testing.
What bug reporting tools have you used?
Rational ClearQuest
TestDirector
PVCS Tracker
Did you use SQA Manager?
Yes, for creating test plans and defect reporting/tracking.
Is your project finished?
Yes.
You find a bug and the developer says "It's not possible" - what do you do?
I'll discuss with him under what conditions (working environment) the bug was produced. I'll provide him with more details and a snapshot of the bug.
How do you help the developer to track the faults in the software?
By providing him with details of the defects, which include the environment, test data, steps followed, etc., and helping him to reproduce the defect in his environment.

Were you able to meet deadlines?
Absolutely.
What is polymorphism? Give an example.
In object-oriented programming, polymorphism refers to a programming language's ability to process objects differently depending on their data type or class. More specifically, it is the ability to redefine methods for derived classes. For example, given a base class Shape, polymorphism enables the programmer to define different circumference methods for any number of derived classes, such as circles, rectangles and triangles. No matter what shape an object is, applying the circumference method to it will return the correct results. Polymorphism is considered to be a requirement of any true object-oriented programming language (OOPL).
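The shape/circumference example from the answer above can be sketched in Java roughly as follows (the class names are illustrative):

abstract class Shape {
    abstract double circumference();
}

class Circle extends Shape {
    final double r;
    Circle(double r) { this.r = r; }
    @Override double circumference() { return 2 * Math.PI * r; }
}

class Rectangle extends Shape {
    final double w, h;
    Rectangle(double w, double h) { this.w = w; this.h = h; }
    @Override double circumference() { return 2 * (w + h); }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        // The same call resolves to a different method for each runtime type.
        Shape[] shapes = { new Circle(1.0), new Rectangle(2.0, 3.0) };
        for (Shape s : shapes) {
            System.out.println(s.getClass().getSimpleName() + ": " + s.circumference());
        }
    }
}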

What are the different types of MRs?
MRs for suggestions, MRs for defect reports, and MRs for documentation changes.
What are test metrics?
Test metrics consist of:
Total tests
Tests run
Tests passed
Tests failed
Tests deferred
Tests passed the first time
What is the use of metrics?
They provide an accurate measurement of test coverage.
If you have a shortage of time, how would you prioritize your testing?
Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. Considerations can include:
Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?
Which functionality has the largest financial impact on users?
Which aspects of the application are most important to the customer?
Which aspects of the application can be tested early in the development cycle?
Which parts of the code are most complex, and thus most subject to errors?
Which parts of the application were developed in rush or panic mode?
Which aspects of similar/related previous projects caused problems?
Which aspects of similar/related previous projects had large maintenance expenses?
Which parts of the requirements and design are unclear or poorly thought out?
What do the developers think are the highest-risk aspects of the application?
What kinds of problems would cause the worst publicity?
What kinds of problems would cause the most customer service complaints?
What kinds of tests could easily cover multiple functionalities?
Which tests will have the best high-risk-coverage to time-required ratio?

What is the impact of environment on the actual results of performance testing?
Environment plays an important role in the results and effectiveness of tests, particularly in the area of performance testing. Some of the factors will be under our control, while others will not be. These may involve the DBMS, the operating system or the network. Some of the items that we cannot control, unless we can secure a standalone environment (which will generally be unrealistic), are:
Other traffic on the network
Other processes running on the server
Other processes running on the DBMS
What are stress testing, performance testing, security testing, recovery testing and volume testing?
Stress testing: Testing whether the system can handle peak usage period loads that result from large numbers of simultaneous users, transactions or devices. Monitoring should be performed for throughput and system stability.
Performance testing: Testing whether the system's functions are performed in an acceptable timeframe under simultaneous user load. Timings for both read and update transactions should be gathered; this should be done stand-alone and then in a multi-user environment to determine the transaction throughput.
Security testing: Testing the system for its security from unauthorized use and unauthorized data access.
Recovery testing: Testing a system to see how it responds to errors and abnormal conditions, such as a system crash, or loss of a device, communications, or power.
Volume testing: Testing the system to determine if it can correctly process large volumes of data fed to the system. Systems can often respond unpredictably when large volumes cause files to overflow and need extensions.
What criteria will you follow to assign severity and a due date to an MR?
Defects (MRs) are assigned severity as follows:
Critical: show stoppers (the system is unusable)
High: the system is very hard to use, and some cases are prone to convert to critical issues if not taken care of
Medium: the system has a major functional bug that is not critical but needs to be fixed in order for the AUT to go to the production environment
Low: cosmetic (GUI-related)
What is user acceptance testing?
It is also called beta testing. Once system testing is done and the system seems stable to the developers and testers, system engineers usually invite the end users of the software to see if they like the software. If the users like the software the way it is, then the software will be delivered to the user. Otherwise necessary changes will be made to the software, and the software will pass through all phases of testing again.

What is manual testing and what is automated testing?
Manual testing involves testing a software application by manually performing the actions on the AUT based on test plans. Automated testing involves testing a software application by performing the actions on the AUT using an automated testing tool (such as WinRunner or LoadRunner) based on test plans.
What are the entrance and exit criteria in the system test?
The entrance and exit criteria of each testing phase are written in the master test plan.
Entrance criteria:
Integration exit criteria have been successfully met.
All installation documents are completed.
All shippable software has been successfully built.
The test plan is baselined by completing the walkthrough of the test plan.
The test environment is set up.
All severity 1 MRs of the integration test phase are closed.
Exit criteria:
All the test cases in the test plan have been executed.
All MRs/defects are either closed or deferred.
A regression testing cycle has been executed after closing the MRs.
All documents are reviewed, finalized and signed off.
If there are no requirements, how will you write your test plan?
If there are no requirements we try to gather as many details as possible from:
Business analysts
Developers (if accessible)
Previous version documentation (if any)
Stakeholders (if accessible)
Prototypes
What is smoke testing?
The smoke test should exercise the entire system from end to end. It does not have to be exhaustive, but it should be capable of exposing major problems. The smoke test should be thorough enough that if the build passes, you can assume that it is stable enough to be tested more thoroughly. The daily build has little value without the smoke test. The smoke test is the sentry that guards against deteriorating product quality and creeping integration problems. Without it, the daily build becomes just a time-wasting exercise in ensuring that you have a clean compile every day. The smoke test must evolve as the system evolves. At first, the smoke test will probably test something simple, such as whether the system can say, "Hello, World." As the system develops, the smoke test will become more thorough. The first test might take a matter of seconds to run; as the system grows, the smoke test can grow to 30 minutes, an hour, or more.
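For illustration only, a smoke test can be thought of as a short chain of end-to-end checks that must all pass before deeper testing begins; the three checks below are hypothetical placeholders for whatever "Hello, World"-level checks a given build needs.

public class SmokeTest {
    // Placeholder checks; in a real suite each would drive the application.
    static boolean appStarts()       { return true; }
    static boolean loginWorks()      { return true; }
    static boolean mainScreenLoads() { return true; }

    public static void main(String[] args) {
        boolean pass = appStarts() && loginWorks() && mainScreenLoads();
        System.out.println(pass ? "BUILD STABLE: proceed to full testing"
                                : "SMOKE TEST FAILED: reject the build");
    }
}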

What is soak testing?
The software system is run for a total of 14 hours continuously. If the system is a control system, it is used to continuously move each of the instrument mechanisms during this time. Any other system is expected to perform its intended function continuously during this period. The software system must not fail during this period.
What is pre-condition data?
Data required to be set up in the system before test execution.
What are the different documents in QA?
Requirements document, test plan, test cases, test metrics, task distribution diagrams (performance), transaction mix, user profiles, test log, test incident report, test summary report.
How do you rate yourself in software testing?
Excellent.
What are the best web sites that you frequently visit to upgrade your QA skills?
http://www.softwareqatest.com/
http://sqp.asq.org/
Is defect resolution a technical skill or an interpersonal skill from the QA viewpoint?
It is a combination of both, because it deals with interaction with the developer, either directly or indirectly, which needs interpersonal skills, and it is also based on the skills of the QA personnel to provide detailed proof to the developer, such as snapshots and system resource utilization, along with some suggestions, which are somewhat technical.
What is end-to-end business logic testing?
Testing the integration of all the modules of the AUT.
What is an equivalence class?
A portion of a component's input or output domains for which the component's behavior is assumed to be the same from the component's specification.
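A small sketch of how equivalence classes drive test selection, assuming a hypothetical component that accepts ages 18 through 65:

public class EquivalenceClassDemo {
    // Hypothetical component under test: accepts ages 18 through 65.
    static boolean isEligibleAge(int age) {
        return age >= 18 && age <= 65;
    }

    public static void main(String[] args) {
        // Three equivalence classes for the age input:
        //   invalid low (< 18), valid (18-65), invalid high (> 65).
        // One representative value is assumed to stand for its whole class.
        int[] representatives = {17, 40, 66};
        boolean[] expected    = {false, true, false};
        for (int i = 0; i < representatives.length; i++) {
            boolean actual = isEligibleAge(representatives[i]);
            System.out.println("age " + representatives[i] + " -> "
                    + (actual == expected[i] ? "PASS" : "FAIL"));
        }
    }
}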

What is the task bar and where does it reside?
The task bar shows a task button for each open application. At a glance, it shows you which applications are running; you can switch applications by clicking different task buttons. In most cases the task bar is located at the very bottom of your desktop or computer screen; however, the task bar can be moved to the sides or top.
How do you analyze your test results? What metrics do you try to provide? OR How do you view test results?
A test log is created for analyzing the test results. This is a chronological record of the test executions and events that happened during testing. It includes the following sections:
Description: What's being tested, including the version ID, where testing is being done, what hardware, and all other configuration information.
Activity and event entries: What happened, including:
Execution description: The procedure used.
Procedure result: What happened. What did you see and where did you store the output?
Environment information: Any changes (e.g. hardware substitution) made specifically for this test.
Unexpected events: What happened before and after the problem/bug occurred.
Incident/bug report identifiers: The problem report number.
If you come onboard, give me a general idea of what your first overall tasks will be as far as starting a quality effort.
Try to learn about the application, environment and prototypes, to have a better understanding of the application and the existing testing efforts.
How do you differentiate the roles of Quality Assurance Manager and Project Manager?
The quality assurance manager's responsibilities include setting up the standards, methodology and strategies for testing the application and providing guidelines to the QA team. The project manager is responsible for the testing and development activities.
What do you like about QA?
QA is a field where one works with multiple environments and can learn more.
Who in the company is responsible for quality?
Both the development and quality assurance departments are responsible for the final product quality.
Should we test every possible combination/scenario for a program?
Ideally, yes, we should test every possible scenario, but this may not always be possible. It depends on many factors, viz. deadlines, budget, complexity of the software and so on. In such cases, we have to prioritize and thoroughly test the critical areas of the application.
What is client-server architecture?
In client-server architecture, a client is defined as a requester of services and a server is defined as the provider of services. Communication takes place in the form of a request message from the client to the server asking for some work to be done. Then the server does the work and sends back the reply.
How is an intranet application different from a client-server application?
Intranet applications are essentially client/server applications - with web servers and 'browser' clients.
What is three-tier and multi-tier architecture?
A design which separates the (1) client, (2) application, and (3) data into their own separate areas, which allows for more scalable, robust solutions. A three-tier system is one that has presentation components, business logic and data access physically running on different platforms. Web applications are perfect for

three-tier architecture, as the presentation layer is necessarily separate, and the business and data components can be divided up much like a client-server application.
What is the Internet?
The Internet, sometimes called simply "the Net," is a worldwide system of computer networks - a network of networks in which users at any one computer can, if they have permission, get information from any other computer. Physically, the Internet uses a portion of the total resources of the currently existing public telecommunication networks. Technically, what distinguishes the Internet is its use of a set of protocols called TCP/IP (Transmission Control Protocol/Internet Protocol). Two recent adaptations of Internet technology, the intranet and the extranet, also make use of the TCP/IP protocol.
What is an intranet?
An intranet is a private network that is contained within an enterprise. It may consist of many interlinked local area networks and may also use leased lines in the wide area network. The main purpose of an intranet is to share company information and computing resources among employees. An intranet can also be used to facilitate working in groups and for teleconferences.
What is an extranet?
An extranet is a private network that uses the Internet protocol and the public telecommunication system to securely share part of a business's information or operations with suppliers, vendors, partners, customers, or other businesses. An extranet can be viewed as part of a company's intranet that is extended to users outside the company. It has also been described as a "state of mind" in which the Internet is perceived as a way to do business with other companies as well as to sell products to customers.
What is a byte code file?
Machine-independent code generated by the Java(TM) compiler and executed by the Java interpreter.
What is an applet?
A program written in the Java(TM) programming language to run within a web browser compatible with the Java platform, such as HotJava(TM) or Netscape Navigator(TM).
How is an applet different from an application?
An applet is an application program that uses the client's web browser to provide a user interface, while an application is a standalone computer program with its own user interface.
What is the Java Virtual Machine?
The Java(TM) Virtual Machine is the part of the Java Runtime Environment responsible for interpreting byte codes.
What is ISO-9000?
A set of international standards for both quality management and quality assurance that has been adopted by over 90 countries worldwide. The ISO 9000 standards apply to all types of organizations, large and small, and in many industries. The ISO 9000

series classifies products into generic product categories: hardware, software, processed materials, and services.
What is QMO?
The QMO is a set of processes and guidelines that software systems projects and products that are built under a contract (with a customer) must follow to comply with ISO-9000 standards. ISO-9000 states that the guidelines for software development must be documented.
What is the object-oriented model?
In an object-oriented model each class is a separate module and has a position in a class hierarchy. Methods or code in one class can be passed down the hierarchy to a subclass or inherited from a superclass. This is called inheritance.
What is the procedural model?
A term used in contrast to declarative language to describe a language where the programmer specifies an explicit sequence of steps to follow to produce a result. Common procedural languages include Basic and C.
What is an object?
An object is a software bundle of related variables and methods. Software objects are often used to model the real-world objects you find in everyday life.
What is a class?
A class is a blueprint or prototype that defines the variables and the methods common to all objects of a certain kind.
What is encapsulation? Give one example.
Encapsulation is the ability to provide a well-defined interface to a set of functions in a way which hides their internal workings. In object-oriented programming, it is the technique of keeping together data structures and the methods (procedures) which act on them.
What is inheritance? Give an example.
In object-oriented programming, inheritance is the ability to derive new classes from existing classes. A derived class (or "subclass") inherits the instance variables and methods of the "base class" (or "superclass"), and may add new instance variables and methods. New methods may be defined with the same names as those in the base class, in which case they override the original ones.
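To make the encapsulation and inheritance definitions concrete, here is a minimal Java sketch; the Account/SavingsAccount classes are hypothetical examples, not taken from the text above.

class Account {
    private double balance;                 // encapsulated: no direct outside access

    public void deposit(double amount) {    // well-defined interface to the data
        if (amount > 0) balance += amount;
    }
    public double getBalance() { return balance; }
}

class SavingsAccount extends Account {      // inherits the base class's variables and methods
    public void addInterest(double rate) {
        deposit(getBalance() * rate);       // reuses the inherited interface
    }
}

public class InheritanceDemo {
    public static void main(String[] args) {
        SavingsAccount acct = new SavingsAccount();
        acct.deposit(100.0);
        acct.addInterest(0.05);
        System.out.println(acct.getBalance()); // 105.0
    }
}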

What is the difference between web testing and client-server testing?
Web applications are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between HTML pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, JavaScript, plug-in applications), and applications that run on the server side (such as CGI scripts, database interfaces, logging applications, dynamic page generators, ASP, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort.
Is a "fast database retrieval rate" a testable requirement?
This is not a testable requirement. "Fast" is a subjective term; it could mean different things depending on a person's perception. For a requirement to be testable, it should be quantified and repeatable, so that the actual value can be measured against the expected value.
What different types of test cases did you write in the test plan?
Test cases for interface, functionality, security, load and performance testing.
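For contrast, a quantified version of such a requirement becomes mechanically testable. In the sketch below, the 2-second threshold is an assumed, stated requirement and runQuery() is a hypothetical stand-in for the real database call.

public class ResponseTimeCheck {
    static void runQuery() throws InterruptedException {
        Thread.sleep(150); // stand-in for the real database retrieval
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        runQuery();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Actual value measured against the quantified expected value.
        System.out.println("Elapsed: " + elapsedMs + " ms -> "
                + (elapsedMs <= 2000 ? "PASS" : "FAIL"));
    }
}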

Note: These are questions that were once asked in an on-line test before an interview, and are useful for people who are preparing for interviews.
-------------------------------------------------------------------------------------------------------------------

Please describe for me a time that you made a big mistake. What did you learn from it, and how did you work to prevent yourself from making that mistake again?
In the beginning of my QA career, when I was working on the (your first project) project, I made a mistake. I found a defect and I informed the developer about it verbally (through conversation). He told me that it was not a defect and would not create any problems. But later on in the project the defect turned out to be a severe one, and I was blamed for not reporting it. I told my QA lead and project manager that I had informed the developer about the defect, so they asked the developer about that; the developer said that I hadn't told him about it. Because I didn't log the defect and e-mail the developer about it, it naturally appeared to be my fault. From then on, whenever I found a defect, I always e-mailed it to the developer or logged it into the defect tracking tool (Test Director). This is one of the biggest lessons I have learned: whenever I find a defect, I always keep a written record of it, including where it happened, under what circumstances, the test data that was given, and snapshots of the defect.

Have you ever worked with Test Director? How would you rate your experience with Test Director (no knowledge, some knowledge, very knowledgeable, or expert)?
Yes, I have been working with Test Director for over 4 years. I used Test Director for managing my tests and organizing my projects, and also as a defect tracking tool to log and track defects. I would rate myself as an expert in working with Test Director.

Please describe the difference between the priority of a defect and the severity of a defect.
Severity describes the technical impact of the defect on the system; priority describes how urgently the defect needs to be fixed, reflecting its importance to the various stakeholders. The more serious the defect, the lower the priority number (Priority 1 being the most urgent) and the more attention it should receive.
Please describe the difference between black box testing and white box testing.
The basic difference between white box and black box testing is that for white box testing we need to understand the internals of the module, like its data structures and algorithms, and have access to the source code, while for black box testing we just need to understand the functionality of the application.


Have you ever worked with web services? How would you rate your experience with web services (no knowledge, some knowledge, very knowledgeable, or expert)?
Yes, I have worked with web services. Five of the six projects that I have worked on so far are web-based applications. I would rate myself as an expert in this area.

Have you developed any automation? What tools/languages did you use? How would you rate your experience with automation (no knowledge, some knowledge, very knowledgeable, or expert)? What is your automation strategy?
Yes, I have developed automation in my projects. I used WinRunner/TSL (Test Script Language) and QuickTest Pro/VBScript for automation. I would rate myself as an expert in the area of automation. In my projects, once I am done reviewing all the requirements documents, I start writing test plans and test cases. Depending upon the functionality, I decide whether to do manual testing or automation testing. For example, in my Statefarm Insurance project, the application generates insurance plans based on city/location; here I parameterized the test and did data-driven testing to see how the application performs with various data (cities). Also, if there are many releases of an application, I perform regression testing, which I did using WinRunner and QuickTest Professional. Sometimes I had to perform performance/load testing to see how the application performs under varying load, and I used LoadRunner's Vuser Generator to create virtual users. Basically, depending on the functionality, I choose whether to go for manual or automation testing.

What is the Capability Maturity Model (CMM)? Have you ever worked with CMM before? How would you rate your experience with CMM (no knowledge, some knowledge, very knowledgeable, or expert)?
The CMM describes the principles and practices underlying software process maturity. Basically, it is intended to help software organizations improve the maturity of their software processes along an evolutionary path from ad hoc, chaotic processes to mature, disciplined software processes. I have worked with CMM before. I would rate myself as very knowledgeable in this area.

What is a test plan?
A test plan is a document which describes the objectives, scope, approach and focus of a software testing effort. It contains the purpose of the test, the objective of the test, the scope of the test, a risk analysis, the time frame needed for the test/deadlines, any definitions used, the testing environment, etc.

What experience do you have leading teams? What did you find most challenging while leading a team?
As a matter of fact, I have never officially worked as a QA lead. But in my project with ____________, our QA lead had some personal situations and he


had to temporarily take time off from the project. At that time, since the deadlines were approaching, I was given charge as QA lead unofficially, and I had to make sure that the project got finished by the deadline. It was an unexpected task for me, but I handled it very well. I feel that for success in anything, teamwork is very important. The challenge I had was to get the project done on time. I made sure that each and every team member had a vital role to play in the project and knew they were very important to finishing the project on time. It was a big challenge to handle such a task, but my leadership qualities helped to bring all the team members together and get the project done. I had a very good time doing that.
Have you ever worked with Oracle? What was the last version that you worked with? How would you rate your experience with Oracle (no knowledge, some knowledge, very knowledgeable, or expert)?
Yes, I have been working with Oracle for the past 4 years. The last version I used was Oracle9i. I would rate myself as very knowledgeable in my experience with Oracle.
Have you ever worked with SQL Server? What was the last version that you worked with? How would you rate your experience with SQL Server (no knowledge, some knowledge, very knowledgeable, or expert)?
Yes, I have worked with SQL Server. The last version I used was Microsoft SQL Server 2000. I would rate myself as very knowledgeable in my experience with SQL Server.
Can you describe for me a defect lifecycle (from logging the defect to closing the defect)?
Whenever I find a defect, before logging it as a defect I check it 3 or 4 times. I used Test Director as a defect tracking tool. If I find a defect, I log it and attach snapshots of it, along with information like where it occurred, under what circumstances, and the test data given, and send it to the developer. The status of the defect is then set to NEW. When the developer sees it, he changes the status to OPEN, which means he is working on fixing the defect. Once the defect is fixed, he changes the status to FIXED. Then I test the application again (regression testing); if I am satisfied, I close the defect and its new status is CLOSED. If I am not satisfied, I re-open the defect and the status becomes RE-OPEN.
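The lifecycle described above can be summarized as a small state machine. The sketch below mirrors the NEW/OPEN/FIXED/CLOSED/RE-OPEN flow from the answer; the enum and transition table are illustrative, not taken from any particular tracking tool.

import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;

public class DefectLifecycle {
    enum Status { NEW, OPEN, FIXED, CLOSED, REOPEN }

    // Allowed transitions, mirroring the flow in the answer above.
    static final Map<Status, EnumSet<Status>> TRANSITIONS = new EnumMap<>(Status.class);
    static {
        TRANSITIONS.put(Status.NEW,    EnumSet.of(Status.OPEN));                  // developer starts the fix
        TRANSITIONS.put(Status.OPEN,   EnumSet.of(Status.FIXED));                 // fix delivered
        TRANSITIONS.put(Status.FIXED,  EnumSet.of(Status.CLOSED, Status.REOPEN)); // retest passes or fails
        TRANSITIONS.put(Status.REOPEN, EnumSet.of(Status.FIXED));                 // fixed again
        TRANSITIONS.put(Status.CLOSED, EnumSet.noneOf(Status.class));             // terminal state
    }

    static boolean canMove(Status from, Status to) {
        return TRANSITIONS.getOrDefault(from, EnumSet.noneOf(Status.class)).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(canMove(Status.FIXED, Status.REOPEN)); // true
        System.out.println(canMove(Status.NEW, Status.CLOSED));   // false
    }
}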

The Software Development Life Cycle (SDLC)


For Small To Medium Database Applications

Document ID: REF-0-02
Version: 1.0d



TABLE OF CONTENTS

INTRODUCTION
  THE SDLC WATERFALL
  ALLOWED VARIATIONS
  OTHER SDLC MODELS
  REFERENCES
GENERIC STAGE
  KICKOFF PROCESS
  INFORMAL ITERATION PROCESS
  FORMAL ITERATION PROCESS
  IN-STAGE ASSESSMENT PROCESS
  STAGE EXIT PROCESS
SDLC STAGES
  OVERVIEW
  PLANNING STAGE
  REQUIREMENTS DEFINITION STAGE
  DESIGN STAGE
  DEVELOPMENT STAGE
  INTEGRATION & TEST STAGE
  INSTALLATION & ACCEPTANCE STAGE
CONCLUSION
  SCOPE RESTRICTION
  PROGRESSIVE ENHANCEMENT
  PRE-DEFINED STRUCTURE
  INCREMENTAL PLANNING


INTRODUCTION

This document describes the Software Development Life Cycle (SDLC) for small to medium database application development efforts. This chapter presents an overview of the SDLC, alternate lifecycle models, and associated references. The following chapter describes the internal processes that are common across all stages of the SDLC, and the third chapter describes the inputs, outputs, and processes of each stage. Finally, the conclusion describes the four core concepts that form the basis of this SDLC.

THE SDLC WATERFALL


Small to medium database software projects are generally broken down into six stages:
Project Planning
Requirements Definition
Design
Development
Integration & Test
Installation & Acceptance

The relationship of each stage to the others can be roughly described as a waterfall, where the outputs from a specific stage serve as the initial inputs for the following stage. During each stage, additional information is gathered or developed, combined with the inputs, and used to produce the stage deliverables. It is important to note that the additional information is restricted in scope; new ideas that would take


the project in directions not anticipated by the initial set of high-level requirements are not incorporated into the project. Rather, ideas for new capabilities or features that are out-of-scope are preserved for later consideration. After the project is completed, the Primary Developer Representative (PDR) and Primary End-User Representative (PER), in concert with other customer and development team personnel, develop a list of recommendations for enhancement of the current software.

PROTOTYPES

The software development team, to clarify requirements and/or design elements, may generate mockups and prototypes of screens, reports, and processes. Although some of the prototypes may appear to be very substantial, they're generally similar to a movie set: everything looks good from the front but there's nothing in the back. When a prototype is generated, the developer produces the minimum amount of code necessary to clarify the requirements or design elements under consideration. No effort is made to comply with coding standards, provide robust error management, or integrate with other database tables or modules. As a result, it is generally more expensive to retrofit a prototype with the necessary elements to produce a production module than it is to develop the module from scratch using the final system design document. For these reasons, prototypes are never intended for business use, and are generally crippled in one way or another to prevent them from being mistakenly used as production modules by end-users.

ALLOWED VARIATIONS
In some cases, additional information is made available to the development team that requires changes in the outputs of previous stages. In this case, the development effort is usually suspended until the changes can be reconciled with the current design, and the new results are passed down the waterfall until the project reaches the point where it was suspended. The PER and PDR may, at their discretion, allow the development effort to continue while previous stage deliverables are updated in cases where the impacts are minimal and strictly limited in scope. In this case, the changes must be carefully tracked to make sure all their impacts are appropriately handled.


OTHER SDLC MODELS


The waterfall model is one of the three most commonly cited lifecycle models. Others include the Spiral model and the Rapid Application Development (RAD) model, often referred to as the Prototyping model.

SPIRAL LIFECYCLE

The spiral model starts with an initial pass through a standard waterfall lifecycle, using a subset of the total requirements to develop a robust prototype. After an evaluation period, the cycle is initiated again, adding new functionality and releasing the next prototype. This process continues, with the prototype becoming larger and larger with each iteration - hence the spiral. The theory is that the set of requirements is hierarchical in nature, with additional functionality building on the first efforts. This is a sound practice for systems where the entire problem is well defined from the start, such as modeling and simulation software. Business-oriented database projects do not enjoy this advantage. Most of the functions in a database solution are essentially independent of one another, although they may make use of common data. As a result, the prototype suffers from the same flaws as the prototyping lifecycle described below. For this reason, the software development team has decided against the use of the spiral lifecycle for database projects.

RAPID APPLICATION DEVELOPMENT (RAD) / PROTOTYPING LIFECYCLE

RAD is, in essence, the "try before you buy" approach to software development. The theory is that end users can produce better feedback when examining a live system, as opposed to working strictly with documentation. RAD-based development cycles have resulted in a lower level of rejection when the application is placed into production, but this success most often comes at the expense of dramatic overruns in project costs and schedule. The RAD approach was made possible by significant advances in software development environments that allow rapid generation and change of screens and other user interface features. The end user is allowed to work with the screens online, as if in a production environment. This leaves little to the imagination, and a significant number of errors are caught using this process. The down side to RAD is the propensity of the end user to force scope creep into the development effort. Since it seems so easy for the developer to produce the basic screen, it must be just as easy to add a widget or two. In most RAD lifecycle failures, the end users and developers were caught in an unending cycle of enhancements, with the users asking for more and more and the developers trying to satisfy them. The participants lost sight of the goal of producing a basic, useful system in favor of the siren song of glittering perfection.
For this reason, the software development team does not use a pure RAD approach, but instead blends limited prototyping in with requirements and design development during a conventional waterfall lifecycle. The prototypes developed are specifically focused on a subset of the application, and do not provide an integrated interface. The prototypes are used to validate requirements and design elements, and the development of additional requirements or the addition of user interface options not readily supported by the development environment is actively discouraged.

REFERENCES
The following standards were used as guides to develop this SDLC description. The standards were reviewed and tailored to fit the specific requirements of small database projects.

ANSI/IEEE 1028: Standard for Software Reviews and Audits
ANSI/IEEE 1058.1: Standard for Software Project Management Plans
ANSI/IEEE 1074: Standard for Software Lifecycle Processes
SEI/CMM: Software Project Planning Key Process Area

This document makes extensive use of terminology that is specific to software engineering. A glossary of standard software engineering terms is available online at: http://www.elucidata.org/refs/seglossary.pdf

GENERIC STAGE

Each stage of the development lifecycle follows five standard internal processes. These processes establish a pattern of communication and documentation intended to familiarize all participants with the current situation, and thus minimize risk to the current project plan. This generic stage description is provided to avoid repetitive descriptions of these internal processes in each of the following software lifecycle stage descriptions. The five standard processes are Kickoff, Informal Iteration, Formal Iteration, In-Stage Assessment, and Stage Exit:

[Diagram: an SDLC stage, comprising the Kickoff, Informal Iteration, Formal Iteration, In-Stage Assessment, and Stage Exit processes]

KICKOFF PROCESS
Each stage is initiated by a kickoff meeting, which can be conducted either in person, or by Web teleconference. The purpose of the kickoff meeting is to review the output of the previous stage, go over any additional inputs required by that particular stage, examine the anticipated activities and required outputs of the current stage, review the current project schedule, and review any open issues. The PDR is responsible for preparing the agenda and materials to be presented at this meeting. All project participants are invited to attend the kickoff meeting for each stage.

INFORMAL ITERATION PROCESS


Most of the creative work for a stage occurs here. Participants work together to gather additional information and refine stage inputs into draft deliverables. Activities of this stage may include interviews, meetings, the generation of prototypes, and electronic correspondence. All of these communications are deemed informal, and are not recorded as minutes, documents of record, controlled software, or official memoranda. The intent here is to encourage, rather than inhibit, the communication process. This process concludes when the majority of participants agree that the work is substantially complete and it is time to generate draft deliverables for formal review and comment.

FORMAL ITERATION PROCESS


In this process, draft deliverables are generated for formal review and comment. Each deliverable was introduced during the kickoff process, and is intended to satisfy one or more outputs for the current stage. Each draft deliverable is given a version number and placed under configuration management control. As participants review the draft deliverables, they are responsible for reporting errors found, and any concerns they may have, to the PDR via electronic mail. The PDR in turn consolidates these reports into a series of issues associated with a specific version of a deliverable. The person in charge of developing the deliverable works to resolve these issues, then releases another version of the deliverable for review. This process iterates until all issues are resolved for each deliverable. There are no formal check-off / signature forms for this part of the process; the intent here is to encourage review and feedback. At the discretion of the PDR and PER, certain issues may be reserved for resolution in later stages of the development lifecycle. These issues are disassociated from the specific deliverable, and tagged as "open issues." Open issues are reviewed during the kickoff meeting for each subsequent stage. Once all issues against a deliverable have been resolved or moved to open status, the final (release) draft of the deliverable is prepared and submitted to the PDR. When final drafts of all required stage outputs have been received, the PDR reviews the final suite of deliverables, reviews the amount of labor expended against this stage of the project, and uses this information to update the project plan. The project plan update includes a detailed list of tasks, their schedule and estimated level of effort for the next stage. The stages following the next stage (out stages) in the project plan are updated to include a high-level estimate of schedule and level of effort, based on current project experience. Out stages are maintained at a high level in the project plan, and are included primarily for informational purposes; direct experience has shown that it is very difficult to accurately plan detailed tasks and activities for out stages in a software development lifecycle. The updated project plan and schedule is a standard deliverable for each stage of the project. The PDR then circulates the updated project plan and schedule for review and comment, and iterates these documents until all issues have been resolved or moved to open status. Once the project plan and schedule have been finalized, all final deliverables for the current stage are made available to all project participants, and the PDR initiates the next process.

IN-STAGE ASSESSMENT PROCESS


This is the formal quality assurance review process for each stage. In a small software development project, the deliverables for each stage are generally small enough that it is not cost effective to review them for compliance with quality assurance standards before the deliverables have been fully developed. As a result, only one in-stage assessment is scheduled for each stage. This process is initiated when the PDR schedules an in-stage assessment with the independent Quality Assurance Reviewer (QAR), a selected End-user Reviewer (usually a Subject Matter Expert), and a selected Technical Reviewer. These reviewers formally review each deliverable to make judgments as to the quality and validity of the work product, as well as its compliance with the standards defined for deliverables of that class. Deliverable class standards are defined in the software quality assurance section of the project plan. The End-user Reviewer is tasked with verifying the completeness and accuracy of the deliverable in terms of desired software functionality. The Technical Reviewer determines whether the deliverable contains complete and accurate technical information. The QA Reviewer is tasked solely with verifying the completeness and compliance of the deliverable against the associated deliverable class standard. The QAR may make recommendations, but cannot raise formal issues that do not relate to the deliverable standard. Each reviewer follows a formal checklist during their review, indicating their level of concurrence with each review item in the checklist. Refer to the software quality assurance plan for this project for deliverable class standards and associated review checklists. A deliverable is considered to be acceptable when
each reviewer indicates substantial or unconditional concurrence with the content of the deliverable and the review checklist items. Any issues raised by the reviewers against a specific deliverable will be logged and relayed to the personnel responsible for generation of the deliverable. The revised deliverable will then be released to project participants for another formal review iteration. Once all issues for the deliverable have been addressed, the deliverable will be resubmitted to the reviewers for reassessment. Once all three reviewers have indicated concurrence with the deliverable, the PDR will release a final in-stage assessment report and initiate the next process.

STAGE EXIT PROCESS


The stage exit is the vehicle for securing the concurrence of principal project participants to continue with the project and move forward into the next stage of development. The purpose of a stage exit is to allow all personnel involved with the project to review the current project plan and stage deliverables, provide a forum to raise issues and concerns, and to ensure an acceptable action plan exists for all open issues. The process begins when the PDR notifies all project participants that all deliverables for the current stage have been finalized and approved via the In-Stage Assessment report. The PDR then schedules a stage exit review with, at a minimum, the project executive sponsor and the PER. All interested participants are free to attend the review as well. This meeting may be conducted in person or via Web teleconference. The stage exit process ends with the receipt of concurrence from the designated approvers to proceed to the next stage. This is generally accomplished by entering the minutes of the exit review as a formal document of record, with either physical or digital signatures of the project executive sponsor, the PER, and the PDR.

SDLC STAGES

OVERVIEW
The six stages of the SDLC are designed to build on one another, taking the outputs from the previous stage, adding additional effort, and producing results that leverage the previous effort and are directly traceable to the previous stages. This top-down approach is intended to result in a quality product that satisfies the original intentions of the customer.
[Diagram: the six SDLC stages in sequence: Project Planning, Requirements Definition, Design, Development, Integration & Test, Installation & Acceptance]

Too many software development efforts go awry when the development team and customer personnel get caught up in the possibilities of automation. Instead of focusing on high-priority features, the team can become mired in a sea of nice-to-have features that are not essential to solving the problem, but are in themselves highly attractive. This is the root cause of a large percentage of failed and/or abandoned development efforts, and is the primary reason the development team utilizes the Waterfall SDLC.

PLANNING STAGE
The planning stage establishes a bird's eye view of the intended software product, and uses this to establish the basic project structure, evaluate feasibility and risks associated with the project, and describe appropriate management and technical approaches.
[Diagram: Application Goals and the Lifecycle Model feed the Planning Stage, which produces the Software Configuration Management Plan, the Software Quality Assurance Plan, and the Project Plan & Schedule]

The most critical section of the project plan is a listing of high-level product requirements, also referred to as goals. All of the software product requirements to be developed during the requirements definition stage flow from one or more of these goals. The minimum information for each goal consists of a title and textual description, although additional information and references to external documents may be included. The outputs of the project planning stage are the configuration management plan, the quality assurance plan, and the project plan and schedule, with a detailed listing of scheduled activities for the upcoming Requirements stage, and high-level estimates of effort for the out stages.

REQUIREMENTS DEFINITION STAGE


The requirements gathering process takes as its input the goals identified in the high-level requirements section of the project plan. Each goal will be refined into a set of one or more requirements. These requirements define the major functions of the intended application, define operational data areas and reference data areas, and define the initial data entities. Major functions include critical processes to be managed, as well as mission critical inputs, outputs and reports. A user class hierarchy is developed and associated with these major functions, data areas, and data entities. Each of these definitions is termed a Requirement. Requirements are identified by unique requirement identifiers and, at minimum, contain a requirement title and textual description.
[Diagram: the High-Level Requirements (Project Plan) feed the Requirements Definition Stage, which produces the Requirements Document, the Requirements Traceability Matrix, and an Updated Project Plan & Schedule]

These requirements are fully described in the primary deliverables for this stage: the Requirements Document and the Requirements Traceability Matrix (RTM). The requirements document contains complete descriptions of each requirement, including diagrams and references to external documents as necessary. Note that detailed listings of database tables and fields are not included in the requirements document. The title of each requirement is also placed into the first version of the RTM, along with the title of each goal from the project plan. The purpose of the RTM is to show that the product components developed during each stage of the software development lifecycle are formally connected to the components developed in prior stages.
In the requirements stage, the RTM consists of a list of high-level requirements, or goals, by title, with a listing of associated requirements for each goal, listed by requirement title. In this hierarchical listing, the RTM shows that each requirement developed during this stage is formally linked to a specific product goal. In this format, each requirement can be traced to a specific product goal, hence the term requirements traceability. The outputs of the requirements definition stage include the requirements document, the RTM, and an updated project plan.

DESIGN STAGE
The design stage takes as its initial input the requirements identified in the approved requirements document. For each requirement, a set of one or more design elements will be produced as a result of interviews, workshops, and/or prototype efforts. Design elements describe the desired software features in detail, and generally include functional hierarchy diagrams, screen layout diagrams, tables of business rules, business process diagrams, pseudocode, and a complete entity-relationship diagram with a full data dictionary. These design elements are intended to describe the software in sufficient detail that skilled programmers may develop the software with minimal additional input.
[Diagram: the Requirements Document feeds the Design Stage, which produces the Design Document, an Updated Requirements Traceability Matrix, and an Updated Project Plan & Schedule]

When the design document is finalized and accepted, the RTM is updated to show that each design element is formally associated with a specific requirement. The outputs of the design stage are the design document, an updated RTM, and an updated project plan.

DEVELOPMENT STAGE
The development stage takes as its primary input the design elements described in the approved design document. For each design element, a set of one or more software artifacts will be produced. Software artifacts include but are not limited to menus, dialogs, data management forms, data reporting formats, and specialized procedures and functions. Appropriate test cases will be developed for each set of functionally related software artifacts, and an online help system will be developed to guide users in their interactions with the software.
[Diagram: the Design Document feeds the Development Stage, which produces the Software, Online Help, Implementation Map, Test Plan, an Updated Requirements Traceability Matrix, and an Updated Project Plan & Schedule]

The RTM will be updated to show that each developed artifact is linked to a specific design element, and that each developed artifact has one or more corresponding test case items. At this point, the RTM is in its final configuration. The outputs of the development stage include a fully functional set of software that satisfies the requirements and design elements previously documented, an online help system that describes the operation of the software, an implementation map that identifies the primary code entry points for all major system functions, a test plan that describes the test cases to be used to validate the correctness and completeness of the software, an updated RTM, and an updated project plan.

INTEGRATION & TEST STAGE


During the integration and test stage, the software artifacts, online help, and test data are migrated from the development environment to a separate test environment. At this point, all test cases are run to verify the correctness and completeness of the software. Successful execution of the test suite confirms a robust and complete migration capability. During this stage, reference data is finalized for production use and production users are identified and linked to their appropriate roles. The final reference data (or links to reference data source files) and production user list are compiled into the Production Initiation Plan.
[Diagram: the Software, Online Help, Implementation Map, and Test Plan feed the Integration & Test Stage, which produces the Integrated Software, Implementation Map, Online Help, Production Initiation Plan, Acceptance Plan, and an Updated Project Plan & Schedule]

The outputs of the integration and test stage include an integrated set of software, an online help system, an implementation map, a production initiation plan that describes reference data and production users, an acceptance plan which contains the final suite of test cases, and an updated project plan.

INSTALLATION & ACCEPTANCE STAGE


During the installation and acceptance stage, the software artifacts, online help, and initial production data are loaded onto the production server. At this point, all test cases are run to verify the correctness and completeness of the software. Successful execution of the test suite is a prerequisite to acceptance of the software by the customer. After customer personnel have verified that the initial production data load is correct and the test suite has been executed with satisfactory results, the customer formally accepts the delivery of the software.
[Diagram: the Production Initiation Plan, Acceptance Plan, Integrated Software, Online Help, and Implementation Map feed the Installation & Acceptance Stage, which produces the Production Software, Completed Acceptance Test, Customer Acceptance Memorandum, Archived Software Artifacts, and an Archived Project Plan & Schedule]

The primary outputs of the installation and acceptance stage include a production application, a completed acceptance test suite, and a memorandum of customer acceptance of the software. Finally, the PDR enters the last of the actual labor data into the project schedule and "locks" the project as a permanent project record, archiving all software items, the implementation map, the source code, and the documentation for future reference.

CONCLUSION

The structure imposed by this SDLC is specifically designed to maximize the probability of a successful software development effort. To accomplish this, the SDLC relies on four primary concepts:

Scope Restriction
Progressive Enhancement
Pre-defined Structure
Incremental Planning

These four concepts combine to mitigate the most common risks associated with software development efforts.

SCOPE RESTRICTION
The project scope is established by the contents of high-level requirements, also known as goals, incorporated into the project plan. These goals are subsequently refined into requirements, then design elements, then software artifacts. This hierarchy of goals, requirements, elements, and artifacts is documented in a Requirements Traceability Matrix (RTM). The RTM serves as a control element to restrict the project to the originally defined scope. Project participants are restricted to addressing those requirements, elements, and artifacts that are directly traceable to product goals. This prevents the substantial occurrence of scope creep, which is the leading cause of software project failure.

PROGRESSIVE ENHANCEMENT
Each stage of the SDLC takes the outputs of the previous stage as its initial inputs. Additional information is then gathered, using methods specific to each stage. As a result, the outputs of the previous stage are progressively enhanced with additional information.

By establishing a pattern of enhancing prior work, the project precludes the insertion of additional requirements in later stages. New requirements are formally set aside by the development team for later reference, rather than going through the effort of backing the new requirements into prior stage outputs and reconciling the impacts of the additions. As a result, the project participants maintain a tighter focus on the original product goals, minimize the potential for scope creep, and show a preference for deferring out-of-scope enhancements, rather than attempting to incorporate them into the current effort.

PRE-DEFINED STRUCTURE
Each stage has a pre-defined set of standard processes, such as Informal Iteration and In-stage Assessment. The project participants quickly grow accustomed to this repetitive pattern of effort as they progress from stage to stage. In essence, these processes establish a common rhythm, or culture, for the project. This pre-defined structure for each stage allows the project participants to work in a familiar environment, where they know what happened in the past, what is happening in the present, and have accurate expectations for what is coming in the near future. This engenders a high comfort level, which in turn generates a higher level of cooperation between participants. Participants tend to provide needed information or feedback in a more timely manner, and with fewer miscommunications. This timely response pattern and level of communications quality becomes fairly predictable, enhancing the ability of the PDR to forecast the level of effort for future stages.

INCREMENTAL PLANNING
The entire intent of incremental planning is to minimize surprises, increase accuracy, provide notification of significant deviations from plan as early in the SDLC as possible, and coordinate project forecasts with the most current available information. In this SDLC, the project planning effort is restricted to gathering metrics on the current stage, planning the next stage in detail, and restricting the planning of later stages, also known as Out Stages, to a very high level. The project plan is updated as each stage is completed; current costs and schedule to date are combined with refined estimates for activities and level of effort for the next stage. The activities and tasks of the next stage are defined only after the deliverables for the current stage are complete and the current metrics are available. This allows the planner to produce a highly accurate plan for the next stage. Direct experience has shown that it is very difficult to develop more than a cursory estimate of anticipated structure and level of effort for out stages.
The estimates for out stages are included to allow a rough estimate of ultimate project cost and schedule. The estimates for out stages are reviewed and revised as each stage is exited. As a result, the total project estimate becomes more and more accurate over time. As each stage is exited, the updated project plan and schedule is presented to the customer. The customer is apprised of project progress, cost and schedule, and the actual metrics are compared against the estimates. This gives the customer the opportunity to confirm the project is on track, or take corrective action as necessary. The customer is never left in the dark about the progress of the project.


Test Scripts for: Single Passenger booking a one-way domestic award ticket
Prerequisite: Logged-in customer has sufficient miles to complete a 25K D111 SkySaver booking

Script # | Test Condition | Expected Results | Req. Ref #

1. Invoke browser and go to www.delta.com | Delta.com home page is displayed; award ticket link is available on the home page | 1.1
2. Click award ticket link | Non-logged-in award ticket landing page is displayed |
3. Log in using SkyMiles account 2397770864, PIN 4321 | SkyMiles customer is logged in successfully; Award Ticket RTR form is displayed with a link to the expanded search page; if no preferred trips are saved, the following default values are pre-filled: default departure/return dates are 7/14 days in advance, default time is morning, default passengers is 1, default preferred cabin is Coach |
4. Click More Search Options link | Expanded Search Page is displayed with fields available for 3 legs; a Round-trip search options text link is displayed | AT1_01
5. On leg 1 enter the following: Leaving from: ATL, Going to: LGA, departure date > 14 days in advance, default times, default passengers, default cabin, and click GO> button | Select Flights Page is shown with itinerary display at top of page and search results below; itinerary display shows flight information from ATL to LGA with Find Flight button; search results show the mileage requirements for the customer's itinerary, links to allow the customer to see the previous/next day, and the list of flights from ATL-LGA; Continue button is disabled | AT6_02, AT3_01, AT6_03, AT3_04, AT3_03, AT5_01
6. Select a SkySaver flight from the search results and click Add Flight button | Itinerary display is updated to show the full flight details for the selected flight; Continue button is enabled | AT6_02, AT3_01, AT6_03, AT3_04, AT3_03, AT5_01
7. Click Continue button | Review Itinerary page is displayed with current itinerary and flight information; SkyMiles mileage and fees table is shown with two sections: the top section shows mileage information and the bottom section shows costs applicable to the current itinerary | AT5_02, AT6_04
8. Click Continue button | Passenger Information page is displayed with logged-in customer's information pre-filled; telephone number is editable; Edit Billing Info and Use New Card buttons are displayed | AT1_05, AT6_01
9. Click the Edit Billing Info button | Extended fields for billing info are shown |
10. Enter valid data in all required fields, including an e-mail address, and click Continue button | Verify and Redeem page is displayed with a summary of the selected itinerary and costs; SkyMiles mileage and fees table appears below the Billing Information section | AT6_01, AT6_05
11. Click Redeem Award Ticket button | Confirmation page is displayed with Passenger Itinerary, Passenger Information and Seat Assignments, Billing Information, and SkyMiles Mileage and Fees table; nav bar appears on left side of page | AT6_01
12. Click log out link | Customer is logged out successfully |
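A manual script like this is a natural candidate for WinRunner automation. The following is a minimal TSL sketch of the first two steps only, assuming the WebTest add-in is loaded and the pages have been learned into the GUI map; the logical names used here ("Delta Air Lines", "Award Travel") are hypothetical and would come from your own GUI map, not from this test plan.

    # Sketch: automate steps 1-2 of the award ticket script (WebTest add-in assumed)
    web_browser_invoke(IE, "http://www.delta.com");   # step 1: open the home page
    set_window("Delta Air Lines", 30);                # wait for the home page window
    if (obj_exists("Award Travel") == E_OK)
        tl_step("home_page", PASS, "Award ticket link is available on the home page.");
    else
        tl_step("home_page", FAIL, "Award ticket link was not found.");
    web_link_click("Award Travel");                   # step 2: go to the award ticket landing page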


Award Ticket II Phase I
Master Test Plan

Prepared by: Ed Cooper
Version: 1.3
Date: April 19, 2005
Revision Date: October 4, 2004

Delta Technology


Approval Signatures

Name: Chuck Hoskins, DT Development Manager
Name: Rob Roy, DT Quality Assurance Manager
Name: Eric Muniz, DT Project Manager
Name: Cheryl Rousseau, Delta Project Manager


Contents

1. Introduction
2. Project Overview
3. Test Schedule and Team
4. Related Documentation
5. Test Strategy
   5.1 Objectives
   5.2 Assumptions and Dependencies
   5.3 Risks
   5.4 Test Environments
   5.5 Test Phases
   5.6 Defect Tracking and Reporting
   5.7 QA Entry / Exit Criteria
6. QA Tools
7. Test Scope
8. Test Process
   8.1 Pre-Test Design
   8.2 Test Design
   8.3 Test Execution
   8.4 Post Test Execution
9. Test Conclusions
   9.1 Defect Summary
   9.2 Open Issues/Defects
   9.3 Closed Issues
   9.4 Builds Completed
   9.5 Overall Defect Count
10. Glossary
11. Test Scripts and Test Cases
12. Requirements Traceability Matrix
13. Project Check List


1. INTRODUCTION
This document outlines the testing approach that will be used for the Award Ticket II Phase I Project. The Master Test Plan defines a comprehensive plan for Quality Assurance activities and deliverables through all project phases. Major activities include:

- establishing the strategy that must be taken to ensure successful testing
- identifying and establishing the testing environment that needs to be in place
- describing the test approach by identifying the test design, test data and test priorities
- preparing and executing the test scripts and test cases in the System and Integration test environments
- scheduling the testing activities and resource assignments in accordance with the testing priorities

The information in this test plan has been carefully reviewed and is believed to be accurate. It is a living document that will be modified and updated as needed throughout the project lifecycle.

2. PROJECT OVERVIEW
The Award Ticket II Phase I release includes the following additions, changes and enhancements:

Req. Reference # | Requirements Description
AT1_01A | The ability to book one way, open jaw and stopover itineraries for domestic destinations
AT6_02 | Reformat Departure calendar for better visibility
AT3_01 | Display SkySavers or SkyChoice once
AT1_03 | Offer web-only Award Tickets or web-only reduced mileage awards
AT5_03 | Cancel the record when the customer does not complete the transaction
AT4_02A | Change/save dates, times, city without re-entering all data
AT6_03 | Display outbound flight when displaying inbound flight options
AT3_04 | View mileage requirements for each leg of the itinerary on the Available Flight page
AT3_03 | Display RT mileage requirements and current balance on each page in the booking process
AT5_01 | Remove N/A as a status and rename
AT5_02 | Rename the Verify Itinerary Page, title Award Ticket and other pages
AT1_05 | The ability to provide third party redemption
AT6_01 | Improve Passenger Info, Verify and Redeem, and Confirmation pages
AT6_04 | Redesign fee table itemization of applicable taxes and fees
AT4_04 | Update the RTR to use the common spellings/misspellings as used in booking
AT6_05 | Increase visibility to ISM and support for ISM


3. TEST SCHEDULE AND TEAM


Below is the schedule for the testing phase of the Award Ticket II project. This is a tentative schedule and is subject to change.

Overview of Project Schedule

Milestone for Project Implementation | Schedule Dates
Develop Project Test Plan | 4/12/04 - 10/15/04
Develop test cases for System Test | 7/28/04 - 10/15/04
Develop test cases for Integration Test | 7/28/04 - 10/15/04
Test script and case review with the QA team | n/a
Test script and case review with the project team | 10/4/04
Migration to the System Test environment | 10/18/04
Execution of test cases in System Test | 10/18/04 - 11/16/04
Migration to the Integration Test environment | 11/17/04
Execution of test cases in Integration Test | 11/17/04 - 12/2/04
Completion of QA Authorization to deploy in Production | 12/3/04
Migration to the Delta Production environment | 12/06/04

Project Team Members

Resource | Role | Team | Phone #
Ed Cooper | QA Test Engineer | System & Integration Testing | 4-6003
Aravind Ravisekar | QA Test Engineer | System & Integration Testing | 5-4324
Faith Ammons | Developer | Development | 4-1547
Helen Zhang | Developer | Development | 4-2760
Aparna Ganta | Developer | Development | 7-5249
Eric Muniz | Defect Review and Assignments | Project Management | 4-2868
David Davila | User Interface Design & Solutions | Human Factors | 7-5289

4. RELATED DOCUMENTATION
The following documents were used as inputs to this deliverable:

Work Papers/Products | Location
Award Ticket II - Project Charter | http://dpi948/dtedmprod:/Skylinks/Loyalty/Award Ticket II/Source Docs/Communications/Award Ticket II Project Charter.doc
Award Ticket II Phase I GUI Spec | http://dpi948/dtedmprod:/Skylinks/Loyalty/Award Ticket II/Source Docs/Deliverables/1 SDD Physical - Phase I/AwardTicketII_GUISpec_1.0
Award Ticket II Phase I Solution Definition Document - Physical | http://dpi948/dtedmprod:/Skylinks/Loyalty/Award Ticket II/Source Docs/Deliverables/1 SDD Physical - Phase I/SDD_physica_AWD_II_Ph1.doc
Award Ticket II Phase I Technical Design Document | http://dpi948/dtedmprod:/Skylinks/Loyalty/Award Ticket II/Source Docs/Deliverables/1 SDD Physical - Phase I/Technical_Design_Phase1
Award Ticket II Phase I Data Model | http://dpi948/dtedmprod:/Skylinks/Loyalty/Award Ticket II/Source Docs/Deliverables/1 SDD Physical - Phase I/ATII_DataModel
Award Ticket II Detailed Requirements Document | http://dpi948/dtedmprod:/Skylinks/Loyalty/Award Ticket II/Source Docs/Deliverables/1 SDD Logical - Phase I/Detailed Requirements_Business_Phase I_II.doc
Award Ticket II Phase I Solution Definition Document - Logical | http://dpi948/dtedmprod:/Skylinks/Loyalty/Award Ticket II/Source Docs/Deliverables/1 SDD Logical - Phase I/SDD_logical.doc

5. TEST STRATEGY
5.1 Objectives
The testing objectives are:

- To ensure that the requirements outlined in the approved Logical Solution Definition Document and Physical Solution Definition Documents are complete and function to meet the business requirements.
- To verify that system and integration testing is completed with no level 1 (critical) or 2 (severe) defects outstanding.

5.2 Assumptions and Dependencies


The following assumptions and dependencies have been identified:

- QA testing will begin at the completion of development unit testing (sandbox environment).
- All functional requirements are properly defined and meet the business team's needs.
- Entry criteria deliverables meet QA requirements.
- Testing will occur on the most current version of project code in the System Test and Integration Test environments.
- During the test process all required interfaces are available and accessible in System Test and Integration Test.
- Development and QA will have access to and will use the ClearQuest defect tracking system.
- QA will initially assign a severity level to all defects using established guidelines.
- The Project Manager will be responsible for the timely resolution of all defects.
- Defect resolution does not impede QA testing.
- The project team will provide the business team with a list of unresolved defects when testing is completed.
- Matrix team members are available to work with QA during test execution if needed.

5.3 Risks
QA testing may be impeded by the following risks:

- Several high priority projects are competing for testing resources. There is a risk that required resources for this project may not be available as needed.


- The testing environments, TSAT and Worldspan's Large Customer Test System, may be unstable or unavailable when needed for testing.
- The System and Integration Test environments are not exact mirror images of the production environment.
- A security structure to protect the System Test and Integration Test environments from inappropriate access to the application while in test is not present.
- Functionality changes may be implemented late in the development cycle.
- Testing will not determine all defects.
- The number of manually executed test cases will have a direct impact upon the amount of time it takes to execute the test.
- Limitations to Stress and Performance testing are driven by the functionality of the available test tools.

5.4 Test Environments


The System Test and Integration Test environments will be utilized during the QA testing process. The application code will be loaded into ClearCase for version control and subsequent loading to the web app servers. ClearCase will be used to track all modifications to the base code during the QA testing process. The System and Integration Test environments are not exact mirror images of the production environment.

5.5 Test Phases


Unit Test: Unit testing is performed by the developers to ensure the code works properly. This testing is completed during the development process.

System Test: System testing is performed by QA. Defects are logged into ClearQuest, corrected and retested, and builds are performed as needed.

Integration Test: The code is in a production-ready state. Defects (severity 1, 2 and possibly 3) that are found during Integration Testing will result in the code being moved back to System Test for re-testing. Automated testing and performance testing are performed during the Integration Test phase.

5.6 Defect Tracking and Reporting


Defects are entered into ClearQuest during the System Test and Integration Test phases of the QA process. Defects are categorized into five levels of severity:

Severity 1 - Showstopper (Critical): Large scale defects; no workaround exists. The defect would prevent the product from being released.

Severity 2 - Severe (Major): Impaired functionality; a workaround may exist but its use is unsatisfactory. The defect would prevent the product from being released.

Severity 3 - Moderate: The defect causes failure of non-critical aspects of the system. There is a reasonably satisfactory workaround. The existence of the defect may cause customer dissatisfaction. The product may be released if the defect is documented and acceptable to the business.

Severity 4 - Minor: A defect of minor significance. A workaround exists or, if not, the impairment is slight. Generally, the product could be released and most customers would be unaware of the defect's existence or only slightly dissatisfied. The fix would be included in a future maintenance release.

Severity 5 - Trivial: A very minor defect. A workaround exists or the problem can be ignored. The product can easily be released with such a problem. Known trivial items should be resolved, as time allows, in future maintenance releases.

5.7 QA Entry / Exit Criteria


System Test Entry Criteria. The following items are required before entering the System Test environment:

- The code has been unit tested and the developers have provided unit testing results to QA prior to the code being installed in the System Test environment
- Product components are placed under RET control
- ClearQuest setup is complete for the project team participants
- System Test move sheet is complete
- System Test move sheet review meeting has been held
- Requirements Verification and Traceability Matrix is complete
- QA team review of the Test Plan with test scripts and test cases
- Project team review of the Test Plan with test scripts and test cases
- Validation of Web calendar project dates

System Test Exit Criteria. The following items are required before exiting the System Test environment:

- Test scripts and test cases completed and executed per build
- No outstanding severity level 1 or 2 defects
- All defects and action items are closed or an approved resolution plan is in place
- A list of all known and outstanding defects is supplied to the project team and business team by QA
- QA signoff
- Test Plan updated
- Application is under RET control
- All deployment requests have been submitted by the developers and approved by QA
- QA has moved all components in ClearQuest to an Integration state
- Integration Test move sheet is complete
- Integration Test move sheet review meeting has been held
- A go/no go meeting is held to determine if the move to Integration Test can begin

Integration Test Entry Criteria. The following items are required before entering the Integration Test environment:

- Test scripts and test cases completed
- No outstanding severity level 1 or 2 defects
- All defects and action items are closed or an approved resolution plan is in place
- A list of all known and outstanding defects is supplied to the project team and business team by QA
- QA signoff
- Test Plan updated
- Application is under RET control
- All deployment requests have been submitted by the developers and approved by QA
- QA has moved all components in ClearQuest to an Integration state
- Integration Test move sheet is complete
- Integration Test move sheet meeting has been held
- A go/no go meeting is held to determine if the move to Production can begin

Integration Test Exit Criteria. The following items are required before exiting the Integration Test environment:

- Test scripts and test cases completed
- No outstanding severity level 1 or 2 defects
- All defects and action items are closed or an approved resolution plan is in place
- A list of all known and outstanding defects is supplied to the project team and business team by QA
- QA signoff
- Test Plan updated
- Application is under RET control
- All deployment requests have been submitted by the developers and approved by QA
- QA has moved all components in ClearQuest to a Released state
- Production move sheet is complete
- Production move sheet meeting has been held
- Back out plan has been tested, or a back out walkthrough meeting has occurred
- Authorization to Deploy from QA completed and distributed


6. QA TOOLS
The following tools will be used throughout the QA process:

- Rational ClearQuest will be used for defect tracking and component promotion through the Software Development Lifecycle. This tool enables defect tracking, change request and issue management. http://cleqrquest/cqweb/url/
- Release Engineering Tool (RET) will be used to deploy the applications and services to the applicable environment. This tool also handles all software application construction processes and procedures. http://ret.delta.com/ret/retw1
- LoadRunner will be used for stress and performance testing.
- TestDirector will be used to manage and launch the environment sanity scripts. This tool facilitates both automated and manual test script creation and execution.
- Visio Link Checker will be used to verify active links in each test system.

7. TEST SCOPE
The testing scope for Award Ticket II defines the coverage of testing that will be performed. The following types of testing will be included in the Award Ticket II testing phase, across the System Test and Integration Test environments:

- Functionality
- Error Handling
- Regression
- Application Logging and Reporting
- Failover and Recovery
- Load Testing

Testing will be performed on delta.com on the Internet Explorer 6.0 browser. For delta.com, a subset of test cases will be executed on Netscape 7.0.

8. TEST PROCESS
The test process consists of four major areas:

8.1 Pre-Test Design


- Review of Project Charter and high-level requirements work-session documents
- Participation in detail processing work-sessions
- Review of project documents prepared throughout the project lifecycle

8.2 Test Design


Test data strategy is defined to include all necessary setup requirements to execute the test cases. The test data setup details are included with the test case document.

- Preparation of QA test scripts and test case documents. Test cases are prepared to a level of detail with step-by-step instructions along with detailed expected results. This allows for flexibility of test execution resources by providing the detail to support a cross-execution team approach.
- Review of test scripts, test cases and data setup strategy within the QA team
- Preparation of the requirements traceability matrix
- Review of the requirements traceability matrix within the QA team
- Review of the requirements traceability matrix with the project team

8.3 Test Execution


- Execution of all test cases through product builds
- Entry of defects discovered during testing, along with test conditions and steps to recreate, in the ClearQuest tool
- Defect review and resolution testing when a new build is received from development for correction of known defects

8.4 Post Test Execution


- Preparation and distribution of the QA Authorization to Deploy document
- Finalization of test case results
- Finalization of test conclusion statistics
- Updates completed in the project test plan
- Final recap prepared
- QA project performance matrix updated


9. TEST CONCLUSIONS
9.1 Defect Summary
Award Ticket II Phase I was loaded to Production on XX/XX/XX. The following is a description of the issues encountered during testing. All test conditions, with the expected results and status, can be found in section 11 of this document.

9.2 Open Issues/Defects


Defect # | Component Name | Header | Detail | Sev | Status

9.3 Closed Issues


Defect # | Component Name | Header | Detail | Sev | Status

9.4 Builds Completed


Current Build Status:
Component Name | Date Completed | Next Scheduled Build | Notes
MOVE TO SYSTEM TEST | | |
System Test Rebuild #1 | | |
System Test Rebuild #2 | | |
MOVE TO INTEGRATION TEST | | |
MOVE TO PRODUCTION | | |

9.5 Overall Defect Count


Defect State | Count
Total Open Defects: | 0
Total Defects: | 0

Defect Status | Count
Submitted: | 0
Assigned: | 0
Work In Progress: | 0
Resolved: | 0
Duplicate: | 0
Closed: | 0


10. GLOSSARY
Key Term | Description
SkySaver | An award type that includes travel restrictions.
SkyChoice | An award type usable for unrestricted travel. This award type requires the redemption of more miles than a SkySaver award.
Extended Domestic | The travel area for which Award Tickets can be redeemed online, including the US (and Alaska and Hawaii), Canada, Mexico, the Caribbean, and Bermuda.
One-way | An itinerary for travel from one origin city to one destination city.
Round-trip | An itinerary for travel from one origin city to one destination city and back.
Open Jaw | An itinerary including more than two cities where either the origin and destination cities are the same but the intermediate cities are different (ATL-DFW, LAX-ATL), or where the origin and destination cities are different but the departure city in the intermediate segment is the same (ATL-LAX, LAX-MIA).
Stopover | An itinerary that includes a stop of more than 4 hours and includes a flight number change as a result of the stop.


11. TEST SCRIPTS AND TEST CASES


Test cases are inserted here when finished and signed off on.


12. REQUIREMENTS TRACEABILITY MATRIX


Req. Reference # | Test Script Section | Requirements Description | Pages Impacted
AT1_01A | Section 1-12 | The ability to book one way, open jaw and stopover itineraries for domestic destinations |
AT6_02 | Section 1-12 | Reformat Departure calendar for better visibility |
AT3_01 | Section 2 | Display SkySavers or SkyChoice once |
AT1_03 | Section 17 | Offer web-only Award Tickets or web-only reduced mileage awards |
AT5_03 | Section 15 | Cancel the record when the customer does not complete the transaction |
AT4_02A | Section 15 | Change/save dates, times, city without re-entering all data |
AT6_03 | Section 1-12 | Display outbound flight when displaying inbound flight options |
AT3_04 | Section 1-12 | View mileage requirements for each leg of the itinerary on the Available Flight page |
AT3_03 | Section 1-12 | Display RT mileage requirements and current balance on each page in the booking process |
AT5_01 | Test Script 5.3 | Remove N/A as a status and rename |
AT5_02 | Section 1-12 | Rename the Verify Itinerary Page, title Award Ticket and other pages |
AT1_05 | Section 14 | The ability to provide third party redemption |
AT6_01 | Section 1-12 | Improve Passenger Info, Verify and Redeem, and Confirmation pages |
AT6_04 | Section 1-12 | Redesign fee table itemization of applicable taxes and fees |
AT4_04 | Test Scripts 1.2, 1.5, 2.6, 7.4, 7.7, 8.7, 10.4 | Update the RTR to use the common spellings/misspellings as used in booking |
AT6_05 | Test Scripts 1.6, 2.6, 3.6, 5.6, 7.6, 8.6, 9.6, 11.6 | Increase visibility to ISM and support for ISM |


13. PROJECT CHECK LIST


(The completed project checklist is linked here at the conclusion of the project)


WinRunner FAQ

What are the advantages of automation in testing?
1. Fast
2. Reliable
3. Repeatable
4. Programmable
5. Comprehensive
6. Reusable

What is the latest version of WinRunner?
WinRunner 7.5, released in April 2002.

What is the language of WinRunner?
Test Script Language (TSL).

Explain the testing process.
The WinRunner testing process consists of six main phases:
1. Teaching WinRunner the objects in the application
2. Creating test scripts to test application functionality
3. Debugging the tests
4. Running the tests on the application
5. Examining the test results
6. Reporting defects

How does WinRunner identify GUI objects?
WinRunner identifies objects based on their physical properties.

What browsers are supported by WinRunner 7.x?
WinRunner 7.x supports Internet Explorer 4.x-5.5, Netscape Navigator 4.0-6.1 (excluding versions 6.0 and 6.01), and AOL 5 and 6.

What is GUI Spy?
GUI Spy is an integrated tool for spying on standard, ActiveX and Java controls. It displays the properties of standard controls and the properties and methods of ActiveX and Java controls. You can copy and paste functions for activating Java methods from the GUI Spy into your test script.

What is the use of GUI Map File per Test mode?
This mode automatically manages GUI map files for you, so you do not have to load or save GUI map files in your test. GUI map files per test can be combined into the Global GUI map file if needed.
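To make the link between logical names and physical properties concrete, here is a rough sketch of what a GUI map entry and the script line that uses it might look like; the window and button names are hypothetical, not taken from a real GUI map.

    # GUI map entry: the logical name "OK" mapped to the button's
    # physical properties, as seen in the GUI Map Editor
    OK:
    {
        class: push_button,
        label: "OK"
    }

    # In the test script, only the logical name is used:
    set_window("Login", 10);
    button_press("OK");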

How does selective recording work?
You can specify exactly which of the applications currently running on your desktop WinRunner should record. This avoids the common problem of extraneous script lines when you suddenly switch to other applications while recording a test.

What add-ins are available for WinRunner 7.x?
Add-ins are available for Java, ActiveX, WebTest, Siebel, Baan, Stingray, Delphi, Terminal Emulator, Forte, NSDK/Natstar, Oracle and PowerBuilder.

Can WinRunner automatically back up test scripts?
Yes, WinRunner 7.x can automatically create a backup copy of your test script at intervals you specify.

What are the system requirements for WinRunner 7.x?
Minimum system requirements are listed below.
Computer/Processor: PC with a single-CPU, Pentium 100 MHz or higher processor
Operating System: Microsoft Windows 95/98/NT 4.0/2000/ME/XP
Memory: 32 MB of RAM
Free Hard Disk Space: 62 MB of free disk space for a compact installation, or 214 MB for a typical or complete installation
Display: Monitor with resolution of 800x600 or higher

What are the components of WinRunner?
Softkey Configuration, Font Expert and GUI Spy.

What are the different modes of learning an application in the RapidTest Script Wizard?
The RapidTest Script Wizard has Express and Comprehensive modes.
Express mode: Uses WinRunner's defaults for logical names. Does not pause after each window.
Comprehensive mode: Allows you to modify logical names. Allows you to map custom objects to standard classes. Pauses after each window learned.

What are the different modes of recording in WinRunner?
The two recording modes are:
Context Sensitive mode: Context Sensitive mode is object oriented; scripts are readable, maintainable and portable, and are not affected by user interface changes. WinRunner records a statement that corresponds to the action you perform on the GUI object.
Analog mode: Analog recording is based on absolute screen coordinates relative to the entire screen. The test script describes mouse and keyboard activities. These tests are not easily portable because of screen dependencies; however, because analog mode is non-intrusive and does not depend on knowledge of GUI objects, it will always work when the screen coordinates are correct.
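As a rough illustration of the difference, here is what the same button click might look like in each mode; the logical names, track number and timing values are hypothetical, not recorded from a real session.

    # Context Sensitive mode: the click is recorded against a named GUI object
    set_window("Flight Reservation", 5);
    button_press("Insert Order");

    # Analog mode: the same click is recorded as a raw mouse track plus
    # button-down ("-") and button-up ("+") events at screen coordinates
    move_locator_track(1);
    mtype("<T55><kLeft>-<kLeft>+");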

When should each recording mode be used?
In general, use Context Sensitive mode to test applications with a graphical user interface. Analog mode is used for non-GUI areas and when mouse tracks are an important part of what you want to test. For example, when you draw in Paintbrush, the movements of the mouse are an important part of your test.

What are the different run modes?
There are three modes for running a test:
Verify (default): Checks your application against expected results. WinRunner compares the current response of the application to its expected response. Any discrepancies between the current and expected responses are captured and saved as verification results.
Debug: The debug mode helps you identify bugs in test scripts. Running a test in debug mode is the same as running a test in Verify mode, except that debug results are always saved in the debug directory.
Update: Used to update the expected results of a test.

What is synchronization?
Synchronization points instruct WinRunner to wait for a certain response from the application during a test run. The response of the application might be the appearance of a window, a text string or any other element.

How do you change synchronization options?
There are two ways:
1. Add synchronization points in the scripts.
2. Settings -> General Options.

What is a GUI checkpoint?
GUI checkpoints allow you to verify the current state or attributes of GUI objects. When you insert a GUI checkpoint in a script, WinRunner captures the current values of the object properties and saves them in the expected results directory (exp) of the test. When you run the test, WinRunner compares the current state of the object in the application to the expected state, and detects and reports any mismatches.

What is a checklist?
A checklist file stores the objects and the attributes you want to check.

Why would a bitmap checkpoint fail?
Bitmap checkpoints allow you to verify non-GUI areas in your application, such as drawings or graphs. Bitmap checkpoints depend on resolution, color, fonts, drivers, etc. If there is any change in the GUI, resolution, fonts or color of the image, the bitmap checkpoint will fail.

What is tl_step()?
tl_step() sends a pass/fail status to WinRunner's report and to TestDirector.
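A brief sketch tying synchronization and tl_step() together: wait for an application response, then report the outcome. The window name, object name and property value below are hypothetical, loosely in the style of WinRunner's sample Flight Reservation application.

    # Wait up to 10 seconds for the status field to report that the
    # insert finished, then report pass/fail to the test results log.
    set_window("Flight Reservation", 5);
    rc = obj_wait_info("Insert Done...", "label", "Insert Done...", 10);
    if (rc == E_OK)
        tl_step("insert_order", PASS, "Order was inserted within 10 seconds.");
    else
        tl_step("insert_order", FAIL, "Timed out waiting for the insert to finish.");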

What is the Watch List? OR What is Add Watch?
The Watch List is used to monitor the values of variables, expressions and array elements while debugging a test script. Before running the test script, we can add these elements to the Watch List using Add Watch.

How to read text from an application?
There are three ways, depending on the text:
The text is part of a standard GUI object: use Context Sensitive functions to query the object.
The text is part of a custom GUI object: use Get Text -> Object/Window.
The text is a bitmap or ASCII: use Get Text -> Area.

How to teach a font to WinRunner?
Using the Font Expert.

What is a batch test (Modular Test Tree)?
A modular test tree consists of a main driver test that calls other tests, passes parameter values and controls execution. It runs tests in an unattended mode.

What is the GUI map? What information is stored in a GUI map file?
The GUI map enables WinRunner to uniquely identify objects using physical attributes, and allows WinRunner to refer to objects in scripts using intuitive logical names. It provides the connection between the logical name and the physical attributes. The GUI map contains the physical attributes and logical name that uniquely identify each object.

When do we need to update the GUI map?
We need to update the GUI map when objects in the application have changed. This usually happens when a new major version of the application is released.
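A minimal driver-test sketch for a modular test tree; the test paths and parameters here are illustrative, assuming such tests exist on disk.

# main driver: runs the suite unattended, passing data to each test
call "C:\\tests\\open_app" ();
call "C:\\tests\\login" ("john", "mercury");
call "C:\\tests\\book_flight" ("London", "Paris");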

Assume you are about to begin creating automated tests for your application. Which method would you use to generate the GUI map?
The RapidTest Script Wizard and the GUI Map Editor. Use the wizard to learn the windows and objects of the application, and use the GUI Map Editor to learn additional windows and objects.

What are the advantages of parameterizing your test rather than having hard-coded data within the test? OR What is the need for Data Driven Tests?
Parameterizing the test allows you to run the same test with different data each time. In addition, the test is expandable and easier to maintain.

What is the purpose of the set_window function?
The set_window function sets the focus to a window in the application and sets the scope of the window in the GUI map.

What is the difference between the call() and load() functions?
call invokes a test from within a test script, while load is used for loading a compiled module into memory.

What is a compiled module? Why do you create a compiled module?
A compiled module is a library of frequently used functions. We can save user-defined functions in a compiled module and then call them in test scripts. Compiled modules improve the organization and performance of tests. Because compiled modules are debugged before use, they require less error checking, and calling a function that is already compiled is significantly faster than interpreting a function in a test script. Compiled modules do not support analog recording or checkpoints.

How do you create user-defined functions?
User-defined functions enhance the efficiency and reliability of test scripts. An easy way to create a function is:
1. Create the process by recording the TSL functions
2. Enclose it in a function header
3. Replace values with parameters
4. Declare local variables
5. Handle errors
(A sketch follows at the end of this section.)

What is a database checkpoint?
A database checkpoint is used to check the contents of a database in different versions of the application.

What do Runtime Database Record Checkpoints do?
Runtime Database Record Checkpoints enable you to check that your application inserts, deletes, updates or retrieves data in a database correctly. By mapping application controls to database fields, you can check that the values in your application are correctly read from or written to the matching database fields when you run your test.
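Returning to user-defined functions and compiled modules: a minimal sketch of a reusable function as it might appear in a compiled module. All names here are illustrative, not from the original.

# saved in a compiled module, e.g. "win_utils", loaded with load("win_utils");
public function select_cities(in from_city, in to_city)
{
    auto rc;
    set_window("Flight Reservation", 10);
    list_select_item("Fly From:", from_city);    # choose the departure city
    rc = list_select_item("Fly To:", to_city);   # choose the arrival city
    return rc;                                   # 0 (E_OK) on success
}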

What is a startup script? What is the role of a startup script?
A startup script is a test script that is automatically run each time WinRunner starts. We can create startup tests that load the GUI map and compiled modules, configure recording options and start the AUT.

What is the Function Generator and how is it used?
In the Function Generator, functions are grouped in categories according to the object class (list, button) or the type of function (input/output, system, file, etc.). We choose a function, then expand the dialog box (by pressing the Args>> button) to fill in the argument values, and paste it into the script.

If you want to run the same script 100 times, what is the syntax?
for (i = 1; i <= 100; i++)
{
    # TSL statements
}

How can we check whether an object is enabled or disabled?
obj_get_info(<object>, "enabled", value);
if (value == 1)
    obj_state = "enabled";
else
    obj_state = "disabled";

Is it possible to automate a legacy application?
Yes, using the TE (Terminal Emulator) add-in.

How do you modify the logical name?
1. Open the GUI map file in the GUI Map Editor
2. Select the object to be changed
3. Press the Modify button
4. Enter the new logical name
5. Press OK
6. Save the GUI map file

How does WinRunner handle varying window labels?
Using regular expressions.

How do you find an object in a GUI map file?
Using the Find button in the GUI Map Editor.

How do you map a custom control to a standard control?
1. Select Tools -> GUI Map Configuration
2. In the GUI Map Configuration window, press the Add button
3. Enter the custom class name, or use the hand icon to learn the class name of an object
4. Press OK
5. In the GUI Map Configuration window, press the Configure button
6. In the Configure Class dialog, select the standard class name from the Mapped to Class list box
7. Press OK

How can you create a permanent GUI map configuration?
Paste the following TSL functions into the startup script:
1. set_class_map
2. set_record_attr
3. set_record_method

What is a virtual object? How do you handle virtual objects in WinRunner?
Our applications may contain bitmaps that look and behave like GUI objects. WinRunner records operations on these objects using win_mouse_click statements. We can define these objects as virtual objects and instruct WinRunner to treat them as GUI objects when we record or run tests. Using the Virtual Object wizard, we can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name.

What are the different checkpoints that you can insert in a WinRunner script?
Four types of checkpoints can be added to any WinRunner script:
1. GUI checkpoint
2. Bitmap checkpoint
3. Database checkpoint
4. Text checkpoint (only for Web scripts)

How do you check database table contents?
Using a database checkpoint.

How do you handle Web exceptions?
We can instruct WinRunner to handle the appearance of a specific dialog box on the web site during the test run. WinRunner contains a list of exceptions that it supports in the Web Exception Editor. We can modify the list and configure additional exceptions that we would like WinRunner to support.

What is the difference between a main test and a compiled module file?
A main test contains the TSL script to test the AUT. A compiled module is a library of frequently used functions. We can save user-defined functions in a compiled module and then call them in the main test scripts.

How do you start client/server applications from the script?
Using the following TSL function:
invoke_application ( file, command_option, working_dir, show );

How do you start a web application from the script?
Using the following TSL function:
web_browser_invoke ( browser, site );

When do you run a test in batch mode?
Batch testing is the execution of a suite of test scripts towards an overall testing goal. We run a batch test when we want to test the overall AUT.
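Pulling several of the items above together, a startup script might look like the following sketch; all paths, module and class names are illustrative, not from the original.

GUI_load("C:\\gui_maps\\flight.gui");       # load the shared GUI map
load("C:\\modules\\win_utils");             # load a compiled module of functions
set_class_map("AcmeGrid", "object");        # make a custom-class mapping permanent
invoke_application("C:\\apps\\flight4a.exe", "", "C:\\apps", SW_SHOW);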

Explain the line: create_input_dialog("Enter the Date")
The create_input_dialog function creates a dialog box with an Enter the Date edit box.

What is a breakpoint? When do you use breakpoints?
A breakpoint marks a place in the test script where we want to pause a test run. Breakpoints help to find flaws in the test script. WinRunner supports two types of breakpoints: breakpoint by location and breakpoint by function. A breakpoint by location stops the test run at a specified line in the test script. A breakpoint by function stops the test run when it calls a specified user-defined function.

How do you monitor variables in a script?
Using the Watch List.

How do you configure WinRunner to test web applications?
Activate the WebTest add-in for web page functionality testing. The WebTest add-in:
1. Recognizes web frames of web applications
2. Creates scripts that can test the functionality of a web site under Netscape Navigator or Internet Explorer
3. Allows you to write scripts with functions exclusive to web testing

What can you check on a web page using the checkpoints in WinRunner?
WinRunner can check numerous properties of frame, table, link, image and table cell objects using checkpoints. For example:
Frame: Height, Width, Format, Maximizable, Maximized, Resized, Count Objects
Table: Columns, Format, Rows, TableContent
Table Cell: BackgroundColor, Format, CellContent, Link, Broken Links, Images

What are the different types of checkpoints you have inserted in scripts?
GUI checkpoint, bitmap checkpoint, database checkpoint and text checkpoint.

How do you check images?
Using a bitmap checkpoint.

How do you analyze results?
Using the WinRunner results file.

In which file do you store the definitions of GUI objects? How do you load this file during script execution?
Object GUI definitions are stored in the GUI map file. Add GUI_load(<GUI file name>); either in the startup script or in the test script.

Where are all the object definition files stored?
<WinRunner Installation Folder>\dat

How will you handle custom objects in your application?
If the object looks and behaves like a standard object we can do class mapping; by default WinRunner will map that object to the generic object class and identify it that way.
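A usage sketch for create_input_dialog; the window and field names are illustrative.

# prompt the tester for a date at run time and type it into the AUT
date = create_input_dialog("Enter the Date");
set_window("Flight Reservation", 10);
edit_set("Date of Flight:", date);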

Which checkpoint will you use to test broken links?
A GUI checkpoint.

What commands will you write to open a file in read mode and read each line of the file sequentially?
Open the text file in read mode:
file_open(<filename>, FO_MODE_READ);
Read each line sequentially:
file_read_line(<filename>, line);
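Putting the two commands together, a complete read loop might look like this sketch; the path is illustrative, and it assumes the file functions return 0/E_OK on success, as in the snippet above.

file = "C:\\data\\input.txt";
if (file_open(file, FO_MODE_READ) == E_OK)
{
    while (file_read_line(file, line) == E_OK)
        report_msg("read: " & line);   # process each line here
    file_close(file);
}
else
    tl_step("open_file", 1, "could not open " & file);   # non-zero status = FAIL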

Interview Questions for WinRunner

1) How have you used WinRunner in your project?
a) I have been using WinRunner for creating automated scripts for GUI, functional and regression testing of the AUT.

2) Explain the WinRunner testing process?
a) The WinRunner testing process involves six main stages:
i. Create the GUI map file so that WinRunner can recognize the GUI objects in the application being tested
ii. Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.
iii. Debug tests: run tests in Debug mode to make sure they run smoothly
iv. Run tests: run tests in Verify mode to test your application
v. View results: determine the success or failure of the tests
vi. Report defects: if a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window

3) What is contained in the GUI map?
a) WinRunner stores information it learns about a window or object in a GUI map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested. Each object in the GUI map file has a logical name and a physical description.
b) There are two types of GUI map files:
i. Global GUI map file: a single GUI map file for the entire application
ii. GUI map file per test: WinRunner automatically creates a GUI map file for each test created

4) How does WinRunner recognize objects on the application?
a) WinRunner uses the GUI map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects: it reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested.

5) Have you created test scripts and what is contained in the test scripts?
a) Yes, I have created test scripts. A script contains statements in Mercury Interactive's Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner's visual programming tool, the Function Generator.

6) How does WinRunner evaluate test results?
a) Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.
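As a rough sketch of how those stages surface in an actual script; all logical names, map and checklist file names here are illustrative, not from the original.

GUI_load("C:\\gui_maps\\flight.gui");                  # stage i: load the GUI map
set_window("Login", 10);                               # stage ii: scripted steps
edit_set("Name:", "john");
button_press("OK");
obj_check_gui("Agent Name:", "list1.ckl", "gui1", 1);  # a checkpoint
tl_step("login", 0, "login window handled");           # status 0 reports a pass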

7) Have you performed debugging of the scripts?
a) Yes, I have performed debugging of scripts. We can debug a script by executing it in Debug mode. We can also debug a script using the Step, Step Into and Step Out functionality provided by WinRunner.

8) How do you run your test scripts?
a) We run tests in Verify mode to test the application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.

9) How do you analyze results and report the defects?
a) Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window. If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.

10) What is the use of the TestDirector software?
a) TestDirector is Mercury Interactive's software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.

11) How did you integrate your automated scripts with TestDirector?
a) When you work with WinRunner, you can choose to save your tests directly to your TestDirector database. Alternatively, while creating a test case in TestDirector we can specify whether the script is automated or manual; if it is an automated script, TestDirector will build a skeleton for the script that can later be modified into one that can be used to test the AUT.

12) What are the different modes of recording?
a) There are two types of recording in WinRunner:
i. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.
ii. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

13) What is the purpose of loading WinRunner add-ins?
a) Add-ins are used in WinRunner to load functions specific to the particular add-in into memory. While creating a script, only the functions in the selected add-in will be listed in the Function Generator, and while executing the script only the functions in the loaded add-in will be executed; otherwise WinRunner will give an error message saying it does not recognize the function.

14) What are the reasons that WinRunner fails to identify an object on the GUI?
a) WinRunner can fail to identify an object in a GUI for various reasons:
i. The object is not a standard Windows object.
ii. If the browser used is not compatible with the WinRunner version, the GUI Map Editor will not be able to learn any of the objects displayed in the browser window.

15) What do you mean by the logical name of the object?
a) An object's logical name is determined by its class. In most cases, the logical name is the label that appears on an object.

16) If the object does not have a name, then what will be the logical name?
a) If the object does not have a name, then the logical name could be the attached text.

17) What is the difference between the GUI map and GUI map files?

a) The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files:
i. Global GUI map file: a single GUI map file for the entire application
ii. GUI map file per test: WinRunner automatically creates a GUI map file for each test created
b) A GUI map file is a file that contains the windows and objects learned by WinRunner, with their logical names and physical descriptions.

18) How do you view the contents of the GUI map?
a) The GUI Map Editor displays the contents of a GUI map. We can invoke the GUI Map Editor from the Tools menu in WinRunner. The GUI Map Editor displays the various GUI map files created and the windows and objects learned into them, with their logical names and physical descriptions.

19) When you create the GUI map, do you record all the objects or only specific objects?
a) If we are learning a window, WinRunner automatically learns all the objects in the window; otherwise we identify only those objects in a window that are to be learned, since we will be working with only those objects while creating scripts.

20) What is the purpose of the set_window command?
a) The set_window command sets the focus to the specified window. We use this command to set the focus to the required window before executing tests on it.
Syntax: set_window(<logical name>, time); The logical name is the logical name of the window, and time is how long the execution waits for the window to come into focus.

21) How do you load a GUI map?
a) We can load a GUI map by using the GUI_load command.
Syntax: GUI_load(<file name>);

22) What is the disadvantage of loading GUI maps through startup scripts?
a) If we are using a single GUI map file for the entire AUT, the memory used by the GUI map may be quite high.
b) If there is any change in an object that was learned, WinRunner will not be able to recognize the object, as it is not in the GUI map file loaded in memory. So we will have to learn the object again, update the GUI map file and reload it.

23) How do you unload the GUI map?
a) We can use GUI_close to unload a specific GUI map file, or we can use the GUI_close_all command to unload all the GUI map files loaded in memory.
Syntax: GUI_close(<file name>); or GUI_close_all;

24) What actually happens when you load a GUI map?
a) When we load a GUI map file, the information about the windows and objects, with their logical names and physical descriptions, is loaded into memory. When WinRunner executes a script on a particular window, it can identify the objects using this information.

25) What is the purpose of the temporary GUI map file?
a) While recording a script, WinRunner learns objects and windows by itself. These are stored in the temporary GUI map file. We can specify in the General Options whether this temporary GUI map file should be loaded each time.

26) What is the extension of a GUI map file?
a) The extension of a GUI map file is .gui.
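A short sketch tying questions 20-23 together; the map file and window names are illustrative.

GUI_load("C:\\gui_maps\\orders.gui");   # load the map (question 21)
set_window("Order Entry", 15);          # wait up to 15 seconds for focus (question 20)
button_press("Insert Order");
GUI_close("C:\\gui_maps\\orders.gui");  # unload when done (question 23)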

27) How do you find an object in a GUI map?
a) The GUI Map Editor provides Find and Show buttons.
i. To find a particular object from the GUI map file in the application, select the object and click Show; this blinks the selected object in the application.
ii. To find a particular object in a GUI map file, click the Find button, which gives you the option to select the object in the application. When the object is selected, if it has been learned into the GUI map file it will be highlighted there.

28) What different actions are performed by the Find and Show buttons?
a) To find a particular object from the GUI map file in the application, select the object and click Show; this blinks the selected object.
b) To find a particular object in a GUI map file, click the Find button, which gives you the option to select the object in the application. When the object is selected, if it has been learned into the GUI map file it will be highlighted there.

29) How do you identify which files are loaded in the GUI map?
a) The GUI Map Editor has a GUI File drop-down list displaying all the GUI map files loaded into memory.

30) How do you modify the logical name or the physical description of an object in the GUI map?
a) You can modify the logical name or the physical description of an object in a GUI map file using the GUI Map Editor.

31) When do you feel you need to modify the logical name?
a) Changing the logical name of an object is useful when the assigned logical name is not sufficiently descriptive or is too long.

32) When is it appropriate to change the physical description?
a) Changing the physical description is necessary when a property value of an object changes.

33) How does WinRunner handle varying window labels?
a) We can handle varying window labels using regular expressions. WinRunner uses two hidden properties in order to use regular expressions in an object's physical description: regexp_label and regexp_MSW_class. (A sketch appears at the end of this section.)
i. The regexp_label property is used for windows only. It operates behind the scenes to insert a regular expression into a window's label description.
ii. The regexp_MSW_class property inserts a regular expression into an object's MSW_class. It is obligatory for all types of windows and for objects of class object.

34) What is the purpose of the regexp_label property and the regexp_MSW_class property?
a) The regexp_label property is used for windows only. It operates behind the scenes to insert a regular expression into a window's label description.
b) The regexp_MSW_class property inserts a regular expression into an object's MSW_class. It is obligatory for all types of windows and for objects of class object.

35) How do you suppress a regular expression?
a) We can suppress the regular expression of a window by replacing the regexp_label property with the label property.

36) How do you copy and move objects between different GUI map files?
a) We can copy and move objects between different GUI map files using the GUI Map Editor. The steps to be followed are:
i. Choose Tools > GUI Map Editor to open the GUI Map Editor.
ii. Choose View > GUI Files.
iii. Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files simultaneously.
iv. View a different GUI map file on each side of the dialog box by clicking the file names in the GUI File lists.
v. In one file, select the objects you want to copy or move. Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.
vi. Click Copy or Move.
vii. To restore the GUI Map Editor to its original size, click Collapse.

37) How do you select multiple objects while merging the files?
a) Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.

38) How do you clear a GUI map file?
a) We can clear a GUI map file using the Clear All option in the GUI Map Editor.

39) How do you filter the objects in the GUI map?
a) The GUI Map Editor has a Filter option. This provides filtering with three different types of options:
i. Logical name displays only objects with the specified logical name.
ii. Physical description displays only objects matching the specified physical description. Use any substring belonging to the physical description.
iii. Class displays only objects of the specified class, such as all the push buttons.

40) How do you configure the GUI map?
a) When WinRunner learns the description of a GUI object, it does not learn all its properties. Instead, it learns the minimum number of properties needed to provide a unique identification of the object.
b) Many applications also contain custom GUI objects. A custom object is any object not belonging to one of the standard classes used by WinRunner. These objects are therefore assigned to the generic object class. When WinRunner records an operation on a custom object, it generates obj_mouse_ statements in the test script.
c) If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing. The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.

41) What is the purpose of GUI map configuration?
a) GUI map configuration is used to map a custom object to a standard object.

42) How do you make the configuration and mappings permanent?
a) The mapping and the configuration you set are valid only for the current WinRunner session. To make them permanent, you must add configuration statements to your startup test script.

43) What is the purpose of the GUI Spy?
a) Using the GUI Spy, you can view the properties of any GUI object on your desktop. You use the Spy pointer to point to an object, and the GUI Spy displays the properties and their values in the GUI Spy dialog box. You can choose to view all the properties of an object, or only the selected set of properties that WinRunner learns.
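To make the varying-label questions (33-35) concrete, here is a sketch of what a GUI map entry using a regular expression might look like; in GUI map files an exclamation mark prefixes a property value to mark it as a regular expression. The window name and pattern are illustrative.

Document:
{
    class: window,
    label: "!Document [0-9]+"   # matches "Document 1", "Document 2", ...
}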

44) What is the purpose of the obligatory and optional properties of objects?
a) For each class, WinRunner learns a set of default properties. Each default property is classified obligatory or optional.
i. An obligatory property is always learned (if it exists).
ii. An optional property is used only if the obligatory properties do not provide unique identification of an object. These optional properties are stored in a list. WinRunner selects the minimum number of properties from this list that are necessary to identify the object. It begins with the first property in the list and continues, if necessary, to add properties to the description until it obtains unique identification for the object.

45) When are the optional properties learned?
a) An optional property is used only if the obligatory properties do not provide unique identification of an object.

46) What is the purpose of the location indicator and index indicator in GUI map configuration?
a) In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:
i. A location selector uses the spatial position of objects.
1. The location selector uses the spatial order of objects within the window, from the top left to the bottom right corners, to differentiate among objects with the same description.
ii. An index selector uses a unique number to identify the object in a window.
1. The index selector uses numbers assigned at the time of creation of objects to identify the object in a window. Use this selector if the location of objects with the same description may change within a window.

47) How do you handle custom objects?
a) A custom object is any GUI object not belonging to one of the standard classes used by WinRunner. WinRunner learns such objects under the generic object class. WinRunner records operations on custom objects using obj_mouse_ statements.
b) If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing.

48) What is the name of the custom class in WinRunner and what methods does it apply to custom objects?
a) WinRunner learns custom class objects under the generic object class. WinRunner records operations on custom objects using obj_ statements.

49) In a situation when both the obligatory and optional properties cannot uniquely identify an object, what method does WinRunner apply?
a) In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:
i. A location selector uses the spatial position of objects.
ii. An index selector uses a unique number to identify the object in a window.

50) What is the purpose of the different record methods: 1) Record 2) Pass Up 3) As Object 4) Ignore?
a) Record instructs WinRunner to record all operations performed on a GUI object. This is the default record method for all classes. (The only exception is the static class (static text), for which the default is Pass Up.)
b) Pass Up instructs WinRunner to record an operation performed on this class as an operation performed on the element containing the object. Usually this element is a window, and the operation is recorded as win_mouse_click.
c) As Object instructs WinRunner to record all operations performed on a GUI object as though its class were the object class.
d) Ignore instructs WinRunner to disregard all operations performed on the class.

51) How do you find out which is the startup file in WinRunner?
a) The test script named in the Startup Test box in the Environment tab of the General Options dialog box is the startup file in WinRunner.

52) What are virtual objects and how do you learn them?
a) Applications may contain bitmaps that look and behave like GUI objects. WinRunner records operations on these bitmaps using win_mouse_click statements. By defining a bitmap as a virtual object, you can instruct WinRunner to treat it like a GUI object, such as a push button, when you record and run tests.
b) Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name. To define a virtual object using the Virtual Object wizard:
i. Choose Tools > Virtual Object Wizard. The Virtual Object wizard opens. Click Next.
ii. In the Class list, select a class for the new virtual object. For a table class, select the number of visible rows and columns. Click Next.
iii. Click Mark Object. Use the crosshairs pointer to select the area of the virtual object. You can use the arrow keys to make precise adjustments to the area you define with the crosshairs. Press Enter or click the right mouse button to display the virtual object's coordinates in the wizard. If the object marked is visible on the screen, you can click the Highlight button to view it. Click Next.
iv. Assign a logical name to the virtual object. This is the name that appears in the test script when you record on the virtual object. If the object contains text that WinRunner can read, the wizard suggests using this text for the logical name. Otherwise, WinRunner suggests virtual_object, virtual_push_button, virtual_list, etc.
v. You can accept the wizard's suggestion or type in a different name. WinRunner checks that there are no other objects in the GUI map with the same name before confirming your choice. Click Next.

53) How did you create your test scripts: 1) by recording or 2) by programming?
a) Programming. I have done complete programming only, absolutely no recording.

54) What are the two modes of recording?
a) There are two modes of recording in WinRunner:
i. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.
ii. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

55) What is a checkpoint and what are the different types of checkpoints?
a) Checkpoints allow you to compare the current behavior of the application being tested to its behavior in an earlier version. You can add four types of checkpoints to your test scripts:
i. GUI checkpoints verify information about GUI objects. For example, you can check that a button is enabled or see which item is selected in a list.
ii. Bitmap checkpoints take a snapshot of a window or area of your application and compare this to an image captured in an earlier version.
iii. Text checkpoints read text in GUI objects and in bitmaps and enable you to verify their contents.
iv. Database checkpoints check the contents and the number of rows and columns of a result set, which is based on a query you create on your database.

56) What are data driven tests?

a) When you test your application, you may want to check how it performs the same operations with multiple sets of data. You can create a data-driven test with a loop that runs ten times: each time the loop runs, it is driven by a different set of data. In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table. You can perform these operations manually, or you can use the DataDriver Wizard to parameterize your test and store the data in a data table.

57) What are synchronization points?
a) Synchronization points enable you to solve anticipated timing problems between the test and your application. For example, if you create a test that opens a database application, you can add a synchronization point that causes the test to wait until the database records are loaded on the screen.
b) For analog testing, you can also use a synchronization point to ensure that WinRunner repositions a window at a specific location. When you run a test, the mouse cursor travels along exact coordinates. Repositioning the window enables the mouse pointer to make contact with the correct elements in the window.

58) What is parameterizing?
a) In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table.

59) How do you maintain the document information of the test scripts?
a) Before creating a test, you can document information about the test in the General and Description tabs of the Test Properties dialog box. You can enter the name of the test author, the type of functionality tested, a detailed description of the test, and a reference to the relevant functional specifications document.

60) What do you verify with the GUI checkpoint for a single property, and what command does it generate? Explain the syntax.
a) You can check a single property of a GUI object. For example, you can check whether a button is enabled or disabled or whether an item in a list is selected. To create a GUI checkpoint for a property value, use the Check Property dialog box to add one of the following functions to the test script:
i. button_check_info
ii. scroll_check_info
iii. edit_check_info
iv. static_check_info
v. list_check_info
vi. win_check_info
vii. obj_check_info
Syntax: button_check_info ( button, property, property_value );
edit_check_info ( edit, property, property_value );

61) What do you verify with the GUI checkpoint for object/window, and what command does it generate? Explain the syntax.
a) You can create a GUI checkpoint to check a single object in the application being tested. You can either check the object with its default properties or specify which properties to check.
b) Creating a GUI checkpoint using the default checks:
i. You can create a GUI checkpoint that performs a default check on the property recommended by WinRunner. For example, if you create a GUI checkpoint that checks a push button, the default check verifies that the push button is enabled.
ii. To create a GUI checkpoint using default checks:
1. Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. (You can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well.) The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.
2. Click an object.
3. WinRunner captures the current value of the property of the GUI object being checked and stores it in the test's expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui statement.
Syntax: win_check_gui ( window, checklist, expected_results_file, time );
c) Creating a GUI checkpoint by specifying which properties to check:
d) You can specify which properties to check for an object. For example, if you create a checkpoint that checks a push button, you can choose to verify that it is in focus instead of enabled.
e) To create a GUI checkpoint by specifying which properties to check:
i. Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.
ii. Double-click the object or window. The Check GUI dialog box opens.
iii. Click an object name in the Objects pane. The Properties pane lists all the properties for the selected object.
iv. Select the properties you want to check.
1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.
2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis (three dots) appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.
3. To change the viewing options for the properties of an object, use the Show Properties buttons.
4. Click OK to close the Check GUI dialog box. WinRunner captures the GUI information and stores it in the test's expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui or a win_check_gui statement.
Syntax: win_check_gui ( window, checklist, expected_results_file, time );
obj_check_gui ( object, checklist, expected_results_file, time );

62) What do you verify with the GUI checkpoint for multiple objects, and what command does it generate? Explain the syntax.
a) To create a GUI checkpoint for two or more objects:
i. Choose Create > GUI Checkpoint > For Multiple Objects, or click the GUI Checkpoint for Multiple Objects button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR MULTIPLE OBJECTS softkey in order to avoid extraneous mouse movements. The Create GUI Checkpoint dialog box opens.
ii. Click the Add button. The mouse pointer becomes a pointing hand and a help window opens.
iii. To add an object, click it once. If you click a window title bar or menu bar, a help window prompts you to check all the objects in the window.
iv. The pointing hand remains active. You can continue to choose objects by repeating step iii for each object you want to check.
v. Click the right mouse button to stop the selection process and to restore the mouse pointer to its original shape. The Create GUI Checkpoint dialog box reopens.
vi. The Objects pane contains the names of the window and objects included in the GUI checkpoint. To specify which objects to check, click an object name in the Objects pane. The Properties pane lists all the properties of the object. The default properties are selected.
1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.
2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.
3. To change the viewing options for the properties of an object, use the Show Properties buttons.
vii. To save the checklist and close the Create GUI Checkpoint dialog box, click OK. WinRunner captures the current property values of the selected GUI objects and stores them in the expected results folder. A win_check_gui statement is inserted in the test script.
Syntax: win_check_gui ( window, checklist, expected_results_file, time );
obj_check_gui ( object, checklist, expected_results_file, time );

63) What information is contained in the checklist file, and in which file are expected results stored?
a) The checklist file contains information about the objects and the properties of the objects we are verifying.
b) The gui*.chk file in the exp folder contains the expected results.

64) What do you verify with the bitmap checkpoint for object/window, and what command does it generate? Explain the syntax.
a) You can check an object, a window, or an area of a screen in your application as a bitmap. While creating a test, you indicate what you want to check. WinRunner captures the specified bitmap, stores it in the expected results folder (exp) of the test, and inserts a checkpoint in the test script. When you run the test, WinRunner compares the bitmap currently displayed in the application being tested with the expected bitmap stored earlier. In the event of a mismatch, WinRunner captures the current actual bitmap and generates a difference bitmap. By comparing the three bitmaps (expected, actual, and difference), you can identify the nature of the discrepancy.
b) When working in Context Sensitive mode, you can capture a bitmap of a window, an object, or a specified area of a screen. WinRunner inserts a checkpoint in the test script in the form of either a win_check_bitmap or obj_check_bitmap statement.
c) Note that when you record a test in Analog mode, you should press the CHECK BITMAP OF WINDOW softkey or the CHECK BITMAP OF SCREEN AREA softkey to create a bitmap checkpoint. This prevents WinRunner from recording extraneous mouse movements.
If you are programming a test, you can also use the Analog function check_window to check a bitmap.
d) To capture a window or object as a bitmap:
i. Choose Create > Bitmap Checkpoint > For Object/Window, or click the Bitmap Checkpoint for Object/Window button on the User toolbar. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF OBJECT/WINDOW softkey. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens.
ii. Point to the object or window and click it. WinRunner captures the bitmap and generates a win_check_bitmap or obj_check_bitmap statement in the script. The TSL statement generated for a window bitmap has the following syntax:
win_check_bitmap ( window, bitmap, time );
iii. For an object bitmap, the syntax is:
obj_check_bitmap ( object, bitmap, time );
iv. For example, when you click the title bar of the main window of the Flight Reservation application, the resulting statement might be:
win_check_bitmap ("Flight Reservation", "Img2", 1);
v. However, if you click the Date of Flight box in the same window, the statement might be:
obj_check_bitmap ("Date of Flight:", "Img1", 1);
Syntax: obj_check_bitmap ( object, bitmap, time [, x, y, width, height] );

65) What do you verify with the bitmap checkpoint for screen area, and what command does it generate? Explain the syntax.
a) You can define any rectangular area of the screen and capture it as a bitmap for comparison. The area can be any size: it can be part of a single window, or it can intersect several windows. The rectangle is identified by the coordinates of its upper left and lower right corners, relative to the upper left corner of the window in which the area is located. If the area intersects several windows or is part of a window with no title (for example, a popup window), its coordinates are relative to the entire screen (the root window).
b) To capture an area of the screen as a bitmap:
i. Choose Create > Bitmap Checkpoint > For Screen Area, or click the Bitmap Checkpoint for Screen Area button. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF SCREEN AREA softkey. The WinRunner window is minimized, the mouse pointer becomes a crosshairs pointer, and a help window opens.
ii. Mark the area to be captured: press the left mouse button and drag the mouse pointer until a rectangle encloses the area; then release the mouse button.
iii. Press the right mouse button to complete the operation. WinRunner captures the area and generates a win_check_bitmap statement in your script.
iv. The win_check_bitmap statement for an area of the screen has the following syntax:
win_check_bitmap ( window, bitmap, time, x, y, width, height );

66) What do you verify with the default database checkpoint, and what command does it generate? Explain the syntax.
a) By adding runtime database record checkpoints you can compare the information in your application during a test run with the corresponding record in your database. By adding standard database checkpoints to your test scripts, you can check the contents of databases in different versions of your application.
b) When you create database checkpoints, you define a query on your database, and your database checkpoint checks the values contained in the result set. The result set is the set of values retrieved from the results of the query.
c) You can create runtime database record checkpoints in order to compare the values displayed in your application during the test run with the corresponding values in the database. If the comparison does not meet the success criteria you specify for the checkpoint, the checkpoint fails. You can define a successful runtime database record checkpoint as one where one or more matching records were found, exactly one matching record was found, or no matching records were found.
d) You can create standard database checkpoints to compare the current values of the properties of the result set during the test run to the expected values captured during recording or otherwise set before the test run. If the expected results and the current results do not match, the database checkpoint fails. Standard database checkpoints are useful when the expected results can be established before the test run.
Syntax: db_check ( checklist_file, expected_results_file );
e) You can add a runtime database record checkpoint to your test in order to compare information that appears in your application during a test run with the current value(s) in the corresponding record(s) in your database. You add runtime database record checkpoints by running the Runtime Record Checkpoint wizard. When you are finished, the wizard inserts the appropriate db_record_check statement into your script.
Syntax: db_record_check ( ChecklistFileName, SuccessConditions, RecordNumber );
ChecklistFileName: a file created by WinRunner and saved in the test's checklist folder. The file contains information about the data to be captured during the test run and its corresponding field in the database. The file is created based on the information entered in the Runtime Record Verification wizard.
SuccessConditions: contains one of the following values:
1. DVR_ONE_OR_MORE_MATCH - the checkpoint passes if one or more matching database records are found.
2. DVR_ONE_MATCH - the checkpoint passes if exactly one matching database record is found.
3. DVR_NO_MATCH - the checkpoint passes if no matching database records are found.
RecordNumber: an out parameter returning the number of records in the database.

67) How do you handle a dynamically changing area of the window in bitmap checkpoints?
a) The "difference between bitmaps" option in the Run tab of the General Options defines the minimum number of pixels that constitute a bitmap mismatch.

68) What do you verify with the custom database checkpoint, and what command does it generate? Explain the syntax.
a) When you create a custom check on a database, you create a standard database checkpoint in which you can specify which properties to check on a result set.
b) You can create a custom check on a database in order to:
i. check the contents of part of, or the entire, result set
ii. edit the expected results of the contents of the result set
iii. count the rows in the result set
iv. count the columns in the result set
c) You can create a custom check on a database using ODBC, Microsoft Query or Data Junction.

69) What do you verify with the sync point for object/window property, and what command does it generate? Explain the syntax.
a) Synchronization compensates for inconsistencies in the performance of your application during a test run. By inserting a synchronization point in your test script, you can instruct WinRunner to suspend the test run and wait for a cue before continuing the test.
b) You can add a synchronization point that instructs WinRunner to wait for a specified object or window to appear. For example, you can tell WinRunner to wait for a window to open before performing an operation within that window, or you may want WinRunner to wait for an object to appear in order to perform an operation on that object.
c) You use the obj_exists function to create an object synchronization point, and you use the win_exists function to create a window synchronization point. These functions have the following syntax:
Syntax: obj_exists ( object [, time ] );
win_exists ( window [, time ] );
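A usage sketch for the window synchronization function above; the window name and timeout are illustrative.

# wait up to 30 seconds for the window, then act on it
if (win_exists("Print Preview", 30) == E_OK)
{
    set_window("Print Preview", 5);
    button_press("Close");
}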

70) What do you verify with the sync point for object/window bitmap, and what command does it generate? Explain the syntax.
a) You can create a bitmap synchronization point that waits for the bitmap of an object or a window to appear in the application being tested.
b) During a test run, WinRunner suspends test execution until the specified bitmap is redrawn, and then compares the current bitmap with the expected one captured earlier. If the bitmaps match, WinRunner continues the test.
Syntax: obj_wait_bitmap ( object, image, time );
win_wait_bitmap ( window, image, time );

71) What do you verify with the sync point for screen area, and what command does it generate? Explain the syntax.
a) For screen area verification we capture the screen area into a bitmap and verify the application's screen area against the bitmap file during execution.
Syntax: obj_wait_bitmap ( object, image, time, x, y, width, height );

72) How do you edit a checklist file, and when do you need to edit it?
a) WinRunner has an edit checklist file option under the Create menu. Select Edit GUI Checklist to modify a GUI checklist file and Edit Database Checklist to edit a database checklist file. This brings up a dialog box that gives you the option to select the checklist file to modify, along with an option to select the scope of the checklist file: test-specific or shared. Select the checklist file and click OK, which opens the window to edit the properties of the objects.

73) How do you edit the expected value of an object?
a) We can modify the expected value of an object by executing the script in Update mode. We can also manually edit the gui*.chk file under the exp folder, which contains the expected values.

74) How do you modify the expected results of a GUI checkpoint?
a) We can modify the expected results of a GUI checkpoint by running the script containing the checkpoint in Update mode.

75) How do you handle ActiveX and Visual Basic objects?
a) WinRunner provides add-ins for ActiveX and Visual Basic objects. When loading WinRunner, select those add-ins; they provide a set of functions for working with ActiveX and VB objects.

76) How do you create an ODBC query?
a) We can create an ODBC query using the database checkpoint wizard. It provides an option to create an SQL file that uses an ODBC DSN to connect to the database. The SQL file contains the connection string and the SQL statement.

77) How do you record a data driven test?
a) We can create a data-driven test using data from a flat file, a data table or a database:
i. Flat file: we store the data to be used in the required format in the file, access the file using the file manipulation commands, read data from the file and assign the data to variables.
ii. Data table: an Excel file. We can store test data in these files and manipulate them using the ddt_* functions (see the sketch below).
iii. Database: we store test data in a database and access it using the db_* functions.

78) How do you convert a database file to a text file?
a) You can use Data Junction to create a conversion file which converts a database to a target text file.
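As referenced in question 77, a data table sketch using the ddt_* functions; the table path, window and column names are illustrative.

table = "default.xls";
ddt_open(table, DDT_MODE_READ);              # open the data table for reading
ddt_get_row_count(table, row_count);         # out parameter: number of data rows
for (i = 1; i <= row_count; i++)
{
    ddt_set_row(table, i);                   # make row i the active row
    set_window("Flight Reservation", 10);
    edit_set("Name:", ddt_val(table, "Name"));   # fetch the "Name" column
}
ddt_close(table);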

79) How do you parameterize database check points? a) When you create a standard database checkpoint using ODBC (Microsoft Query), you can add parameters to an SQL statement to parameterize the checkpoint. This is useful if you want to create a database checkpoint with a query in which the SQL statement defining your query changes. 80) How do you create parameterize SQL commands? a) A parameterized query is a query in which at least one of the fields of the WHERE clause is parameterized, i.e., the value of the field is specified by a question mark symbol ( ? ). For example, the following SQL statement is based on a query on the database in the sample Flight Reservation application: i. SELECT Flights.Departure, Flights.Flight_Number, Flights.Day_Of_Week FROM Flights Flights WHERE (Flights.Departure=?) AND (Flights.Day_Of_Week=?) SELECT defines the columns to include in the query. FROM specifies the path of the database. WHERE (optional) specifies the conditions, or filters to use in the query. Departure is the parameter that represents the departure point of a flight. Day_Of_Week is the parameter that represents the day of the week of a flight. b) When creating a database checkpoint, you insert a db_check statement into your test script. When you parameterize the SQL statement in your checkpoint, the db_check function has a fourth, optional, argument: the parameter_array argument. A statement similar to the following is inserted into your test script: db_check('list1.cdl', 'dbvf1', NO_LIMIT, dbvf1_params); The parameter_array argument will contain the values to substitute for the parameters in the parameterized checkpoint. 81) Explain the following commands: a) db_connect i. to connect to a database db_connect(, ); b) db_execute_query i. to execute a query db_execute_query ( session_name, SQL, record_number ); record_number is the out value. c) db_get_field_value i. returns the value of a single field in the specified row_index and column in the session_name database session. db_get_field_value ( session_name, row_index, column ); d) db_get_headers i. returns the number of column headers in a query and the content of the column headers, concatenated and delimited by tabs. db_get_headers ( session_name, header_count, header_content );

82) What checkpoints will you use to read and check text on the GUI, and what is their syntax?
a) You can use text checkpoints in your test scripts to read and check text in GUI objects and in areas of the screen. While creating a test, you point to an object or a window containing text. WinRunner reads the text and writes a TSL statement to the test script. You may then add simple programming elements to your test scripts to verify the contents of the text.
b) You can use a text checkpoint to:
i. Read text from a GUI object or window in your application, using obj_get_text and win_get_text
ii. Search for text in an object or window, using win_find_text and obj_find_text
iii. Move the mouse pointer to text in an object or window, using obj_move_locator_text and win_move_locator_text
iv. Click on text in an object or window, using obj_click_on_text and win_click_on_text

83) Explain the Get Text checkpoint from object/window with syntax?
a) We use the obj_get_text ( object, out_text ) function to get the text from an object.
b) We use the win_get_text ( window, out_text [, x1, y1, x2, y2] ) function to get the text from a window.

84) Explain the Get Text checkpoint from screen area with syntax?
a) We use the win_get_text ( window, out_text [, x1, y1, x2, y2] ) function, supplying the optional coordinates to read the text from an area of the window.

85) Explain the Get Text checkpoint from selection (web only) with syntax?
a) Returns a text string from an object.
web_obj_get_text ( object, table_row, table_column, out_text [, text_before, text_after, index] );
i. object The logical name of the object.
ii. table_row If the object is a table, it specifies the location of the row within the table. The string is preceded by the # character.
iii. table_column If the object is a table, it specifies the location of the column within the table. The string is preceded by the # character.
iv. out_text The output variable that stores the text string.
v. text_before Defines the start of the search area for a particular text string.
vi. text_after Defines the end of the search area for a particular text string.
vii. index The occurrence number to locate. (The default is 1.)
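The get-text functions pair naturally with a verification step. A minimal sketch, assuming the sample Flight Reservation window and an edit field with the logical name "Order No:":

# read text from an object and from a screen area, then verify it
set_window ("Flight Reservation");
obj_get_text ("Order No:", order_text);                            # text of a single GUI object
win_get_text ("Flight Reservation", area_text, 10, 10, 200, 60);   # text inside a rectangle
if (order_text == "")
    tl_step ("read order", 1, "order field is empty");             # non-zero status reports a failure
else
    tl_step ("read order", 0, "order = " & order_text);            # 0 reports a pass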
86) Explain the Get Text web text checkpoint with syntax?
a) We use the web_obj_text_exists function for web text checkpoints.
web_obj_text_exists ( object, table_row, table_column, text_to_find [, text_before, text_after] );
i. object The logical name of the object to search.
ii. table_row If the object is a table, it specifies the location of the row within the table. The string is preceded by the character #.
iii. table_column If the object is a table, it specifies the location of the column within the table. The string is preceded by the character #.
iv. text_to_find The string that is searched for.
v. text_before Defines the start of the search area for a particular text string.
vi. text_after Defines the end of the search area for a particular text string.

87) Which TSL functions will you use for:
a) Searching for text on the window
i. find_text ( string, out_coord_array, search_area [, string_def ] );
string The string that is searched for. The string must be complete, contain no spaces, and be preceded and followed by a space outside the quotation marks. To specify a literal, case-sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable; in this case, the string variable can include a regular expression.
out_coord_array The name of the array that stores the screen coordinates of the text.
search_area The area to search, specified as coordinates x1,y1,x2,y2. These define any two diagonal corners of a rectangle. The interpreter searches for the text in the area defined by the rectangle.
string_def Defines the type of search to perform. If no value is specified (0 or FALSE, the default), the search is for a single complete word only. When 1, or TRUE, is specified, the search is not restricted to a single, complete word.
b) Getting the location of the text string
i. win_find_text ( window, string, result_array [, search_area [, string_def ] ] );
window The logical name of the window to search.
string The text to locate. To specify a literal, case-sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. The value of the string variable can include a regular expression. The regular expression should not include an exclamation mark (!), however, which is treated as a literal character. For more information regarding regular expressions, refer to the "Using Regular Expressions" chapter in your User's Guide.
result_array The name of the output variable that stores the location of the string as a four-element array.
search_area The region of the object to search, relative to the window. This area is defined as a pair of coordinates, with x1,y1,x2,y2 specifying any two diagonally opposite corners of the rectangular search region. If this parameter is not defined, the entire window is considered the search area.
string_def Defines how the text search is performed. If no string_def is specified (0 or FALSE, the default), the interpreter searches for a complete word only. If 1, or TRUE, is specified, the search is not restricted to a single, complete word.
c) Moving the pointer to that text string
i. win_move_locator_text ( window, string [, search_area [, string_def ] ] );
window The logical name of the window.
string The text to locate. To specify a literal, case-sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. The value of the string variable can include a regular expression (the regular expression need not begin with an exclamation mark).
search_area The region of the object to search, relative to the window, defined as a pair of coordinates with x1,y1,x2,y2 specifying any two diagonally opposite corners of the rectangular search region. If this parameter is not defined, the entire specified window is considered the search area.
string_def Defines how the text search is performed, as for win_find_text above.
d) Comparing the text
i. compare_text ( str1, str2 [, chars1, chars2] );
str1, str2 The two strings to be compared.
chars1 One or more characters in the first string.
chars2 One or more characters in the second string. These characters are substituted for those in chars1.
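A short sketch tying these together, assuming the Flight Reservation window and the literal string "Insert Order" appear on screen:

# locate a string inside a window, move the pointer to it, then compare two strings
set_window ("Flight Reservation");
rc = win_find_text ("Flight Reservation", "Insert Order", coords);   # coords gets x1,y1,x2,y2
if (rc == 0)                             # a return of 0 (E_OK) is assumed to mean the text was found
    win_move_locator_text ("Flight Reservation", "Insert Order");
# treat "M" and "m" as equivalent before comparing the two strings
if (compare_text ("Mercury", "mercury", "M", "m"))
    report_msg ("strings are equivalent after substitution");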
88) What are the steps of creating a data-driven test?
a) The steps involved in data-driven testing are:
i. Creating a test
ii. Converting it to a data-driven test and preparing a database
iii. Running the test
iv. Analyzing the test results

89) How do you record a data-driven test script using the DataDriver Wizard?
a) You can use the DataDriver Wizard to convert your entire script or a part of your script into a data-driven test. For example, your test script may include recorded operations, checkpoints, and other statements that do not need to be repeated for multiple sets of data. You need to parameterize only the portion of your test script that you want to run in a loop with multiple sets of data. To create a data-driven test:
i. If you want to turn only part of your test script into a data-driven test, first select those lines in the test script.
ii. Choose Tools > DataDriver Wizard.
iii. If you want to turn only part of the test into a data-driven test, click Cancel, select those lines in the test script, and reopen the DataDriver Wizard. If you want to turn the entire test into a data-driven
test, click Next.
iv. The "Use a new or existing Excel table" box displays the name of the Excel file that WinRunner creates, which stores the data for the data-driven test. Accept the default data table for this test, enter a different name for the data table, or use the browse button to locate the path of an existing data table. By default, the data table is stored in the test folder.
v. In the "Assign a name to the variable" box, enter a variable name with which to refer to the data table, or accept the default name, table. At the beginning of a data-driven test, the Excel data table you selected is assigned as the value of the table variable. Throughout the script, only the table variable name is used. This makes it easy for you to assign a different data table to the script at a later time without making changes throughout the script.
vi. Choose from among the following options:
1. Add statements to create a data-driven test: Automatically adds statements to run your test in a loop: sets a variable name by which to refer to the data table; adds braces ({ and }), a for statement, and a ddt_get_row_count statement to your test script selection to run it in a loop while it reads from the data table; and adds ddt_open and ddt_close statements to your test script to open and close the data table, which are necessary in order to iterate over rows in the table (the sketch following this question shows the resulting loop). Note that you can also add these statements to your test script manually. If you do not choose this option, you will receive a warning that your data-driven test must contain a loop and statements to open and close your data table.
2. Import data from a database: Imports data from a database. This option adds ddt_update_from_db and ddt_save statements to your test script after the ddt_open statement. Note that in order to import data from a database, either Microsoft Query or Data Junction must be installed on your machine. You can install Microsoft Query from the custom installation of Microsoft Office. Data Junction is not automatically included in your WinRunner package; to purchase it, contact your Mercury Interactive representative. For detailed information on working with Data Junction, refer to the documentation in the Data Junction package.
3. Parameterize the test: Replaces fixed values in selected checkpoints and in recorded statements with parameters, using the ddt_val function, and adds columns with variable values for the parameters to the data table. Line by line: opens a wizard screen for each line of the selected test script, which enables you to decide whether to parameterize a particular line, and if so, whether to add a new column to the data table or use an existing column when parameterizing data. Automatically: replaces all data with ddt_val statements and adds new columns to the data table. The first argument of the function is the name of the column in the data table. The replaced data is inserted into the table.
vii. The "Test script line to parameterize" box displays the line of the test script to parameterize. The highlighted value can be replaced by a parameter. The "Argument to be replaced" box displays the argument (value) that you can replace with a parameter. You can use the arrows to select a different argument to replace. Choose whether and how to replace the selected data:
1. Do not replace this data: Does not parameterize this data.
2. An existing column: If parameters already exist in the data table for this test, select an existing parameter from the list.
3.
A new column: Creates a new column for this parameter in the data table for this test, and adds the selected data to this column of the data table. The default name for the new parameter is the logical name of the object in the selected TSL statement above. Accept this name or assign a new name.
viii. The final screen of the wizard opens.
1. If you want the data table to open after you close the wizard, select Show data table now.
2. To perform the tasks specified in previous screens and close the wizard, click Finish.
3. To close the wizard without making any changes to the test script, click Cancel.
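The statements the wizard adds (option 1 above) produce a loop roughly like the following sketch; the table file name "default.xls" and the column name "file_name" are illustrative assumptions:

table = "default.xls";                        # data table assigned to the table variable
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open table.");
ddt_get_row_count (table, table_RowCount);
for (table_Row = 1; table_Row <= table_RowCount; table_Row++)
{
    ddt_set_row (table, table_Row);           # make this row the active row
    set_window ("Open");
    # ddt_val replaces the fixed value recorded in the original statement
    list_select_item ("File Name:_1", ddt_val (table, "file_name"));
}
ddt_close (table);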

90) What are the three modes of running the scripts?
a) WinRunner provides three modes in which to run tests: Verify, Debug, and Update. You use each mode during a different phase of the testing process.
i. Verify: use Verify mode to check your application.
ii. Debug: use Debug mode to help you identify bugs in a test script.
iii. Update: use Update mode to update the expected results of a test or to create a new expected results folder.

91) Explain the following TSL functions:
a) ddt_open
i. Creates or opens a data table file so that WinRunner can access it.
Syntax: ddt_open ( data_table_name, mode );
data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.
mode The mode for opening the data table: DDT_MODE_READ (read-only) or DDT_MODE_READWRITE (read or write).
b) ddt_save
i. Saves the information into a data file.
Syntax: ddt_save ( data_table_name );
data_table_name The name of the data table (as for ddt_open above).
c) ddt_close
i. Closes a data table file.
Syntax: ddt_close ( data_table_name );
data_table_name The name of the data table (a Microsoft Excel file or a tabbed text file).
d) ddt_export
i. Exports the information of one data table file into a different data table file.
Syntax: ddt_export ( data_table_name1, data_table_name2 );
data_table_name1 The source data table file name.
data_table_name2 The destination data table file name.
e) ddt_show
i. Shows or hides the table editor of a specified data table.
Syntax: ddt_show ( data_table_name [, show_flag] );
data_table_name The name of the data table (as for ddt_open above).
show_flag The value indicating whether the editor should be shown (default = 1) or hidden (0).
f) ddt_get_row_count
i. Retrieves the number of rows in a data table.
Syntax: ddt_get_row_count ( data_table_name, out_rows_count );
data_table_name The name of the data table (as for ddt_open above).
out_rows_count The output variable that stores the total number of rows in the data table.
g) ddt_next_row
i. Changes the active row in a data table to the next row.
Syntax: ddt_next_row ( data_table_name );
data_table_name The name of the data table (as for ddt_open above).
h) ddt_set_row
i. Sets the active row in a data table.
Syntax: ddt_set_row ( data_table_name, row );
data_table_name The name of the data table (as for ddt_open above).
row The new active row in the data table.
i) ddt_set_val
i. Sets a value in the current row of the data table.
Syntax: ddt_set_val ( data_table_name, parameter, value );
data_table_name The name of the data table (as for ddt_open above).
parameter The name of the column into which the value will be inserted.
value The value to be written into the table.
j) ddt_set_val_by_row
i. Sets a value in a specified row of the data table.
Syntax: ddt_set_val_by_row ( data_table_name, row, parameter, value );
data_table_name The name of the data table (as for ddt_open above).
row The row number in the table. It can be any existing row, or the current row number plus 1, which will add a new row to the data table.
parameter The name of the column into which the value will be inserted.
value The value to be written into the table.
k) ddt_get_current_row
i. Retrieves the active row of a data table.
Syntax: ddt_get_current_row ( data_table_name, out_row );
data_table_name The name of the data table (as for ddt_open above).
out_row The output variable that stores the active row in the data table.
l) ddt_is_parameter
i. Returns whether a parameter in a data table is valid.
Syntax: ddt_is_parameter ( data_table_name, parameter );
data_table_name The name of the data table (as for ddt_open above).
parameter The parameter name to check in the data table.
m) ddt_get_parameters
i. Returns a list of all parameters in a data table.
Syntax: ddt_get_parameters ( table, params_list, params_num );
table The path name of the data table.
params_list This out parameter returns the list of all parameters in the data table, separated by tabs.
params_num This out parameter returns the number of parameters in params_list.
n) ddt_val
i. Returns the value of a parameter in the active row in a data table.
Syntax: ddt_val ( data_table_name, parameter );
data_table_name The name of the data table (as for ddt_open above).
parameter The name of the parameter in the data table.
o) ddt_val_by_row
i. Returns the value of a parameter in the specified row in a data table.
Syntax: ddt_val_by_row ( data_table_name, row_number, parameter );
data_table_name The name of the data table (as for ddt_open above).
row_number The number of the row in the data table.
parameter The name of the parameter in the data table.
p) ddt_report_row
i. Reports the active row in a data table to the test results.
Syntax: ddt_report_row ( data_table_name );
data_table_name The name of the data table (as for ddt_open above).
q) ddt_update_from_db
i. Imports data from a database into a data table. It is inserted into your test script when you select the "Import data from a database" option in the DataDriver Wizard. When you run your test, this function updates the data table with data from the database.
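The write-oriented ddt functions combine in the same way. A minimal sketch for recording a verdict back into the table; the "Result" column name is an assumption:

table = "default.xls";
ddt_open (table, DDT_MODE_READWRITE);    # open for writing as well as reading
ddt_set_row (table, 1);
ddt_set_val (table, "Result", "pass");   # write into the active row
ddt_report_row (table);                  # echo the row to the test results
ddt_save (table);                        # changes are not persisted without a save
ddt_close (table);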
92) How do you handle unexpected events and errors?
a) WinRunner uses exception handling to detect an unexpected event when it occurs and to act to recover the test run. WinRunner enables you to handle the following types of exceptions:
i. Pop-up exceptions: Instruct WinRunner to detect and handle the appearance of a specific window.
ii. TSL exceptions: Instruct WinRunner to detect and handle TSL functions that return a specific error code.
iii. Object exceptions: Instruct WinRunner to detect and handle a change in a property for a specific GUI object.
iv. Web exceptions: When the WebTest add-in is loaded, you can instruct WinRunner to handle unexpected events and errors that occur in your Web site during a test run.

93) How do you handle pop-up exceptions?
a) A pop-up exception handler handles the pop-up messages that come up during the execution of the script in the AUT. To handle this type of exception, we make WinRunner learn the window and also specify a handler for the exception. It could be:
i. Default actions: WinRunner clicks the OK or Cancel button in the pop-up window, or presses Enter on the keyboard. To select a default handler, click the appropriate button in the dialog box.
ii. User-defined handler: If you prefer, specify the name of your own handler. Click User Defined Function Name and type in a name in the User Defined Function Name box.

94) How do you handle TSL exceptions?
a) A TSL exception enables you to detect and respond to a specific error code returned during test execution.
b) Suppose you are running a batch test on an unstable version of your application. If your application crashes, you want WinRunner to recover test execution. A TSL exception can instruct WinRunner to recover test execution by exiting the current test, restarting the application, and continuing with the next test in the batch.
c) The handler function is responsible for recovering test execution. When WinRunner detects a specific error code, it calls the handler function. You implement this function to respond to the unexpected error in the way that meets your specific testing needs.
d) Once you have defined the exception, WinRunner activates handling and adds the exception to the list of default TSL exceptions in the Exceptions dialog box. Default TSL exceptions are defined by the XR_EXCP_TSL configuration parameter in the wrun.ini configuration file.
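The user-defined handler from question 93 is an ordinary TSL function. A hedged sketch, assuming the handler is given the name of the pop-up window it was defined for (the window and button names are illustrative):

public function close_popup (in win)
{
    # dismiss the unexpected dialog so the test run can continue
    set_window (win);
    button_press ("Cancel");
    return (0);
}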
95) How do you handle object exceptions?
a) During testing, unexpected changes can occur to GUI objects in the application you are testing. These changes are often subtle, but they can disrupt the test run and distort results.
b) You can use exception handling to detect a change in a property of a GUI object during the test run, and to recover test execution by calling a handler function and continuing with the test execution.

96) How do you comment your script?
a) We comment a script, or a line of a script, by inserting a # at the beginning of the line.

97) What is a compiled module?
a) A compiled module is a script containing a library of user-defined functions that you want to call frequently from other tests. When you load a compiled module, its functions are automatically compiled and remain in memory. You can call them directly from within any test.
b) Compiled modules can improve the organization and performance of your tests. Since you debug compiled modules before using them, your tests will require less error-checking. In addition, calling a function that is already compiled is significantly faster than interpreting a function in a test script.

98) What is the difference between a script and a compiled module?
a) A test script is what WinRunner executes, while a compiled module is used to store reusable functions. Compiled modules are not executed directly.
b) WinRunner performs a pre-compilation automatically when it saves a module assigned a property value of Compiled Module.
c) By default, modules containing TSL code have a property value of "main". Main modules are called for execution from within other modules. Main modules are dynamically compiled into machine code only when WinRunner recognizes a "call" statement. Example of a call for the "app_init" script:
call cso_init();
call ("C:\\MyAppFolder\\" & "app_init");
d) Compiled modules are loaded into memory to be referenced from TSL code in any module. Example of a load statement:
reload ("C:\\MyAppFolder\\" & "flt_lib");
or
load ("C:\\MyAppFolder\\" & "flt_lib");
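In practice the two halves of question 98 meet like this; the folder, module name, and function name below are illustrative assumptions:

# load a compiled module of reusable functions, call one, then unload it
load ("C:\\MyAppFolder\\flt_lib", 0, 0);    # user module (0) that stays open (0)
rc = create_order ("London", "Monday");     # hypothetical function defined inside flt_lib
unload ("C:\\MyAppFolder\\flt_lib");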
99) Write and explain the various loop commands?
a) A for loop instructs WinRunner to execute one or more statements a specified number of times. It has the following syntax:
for ( [ expression1 ]; [ expression2 ]; [ expression3 ] )
    statement
i. First, expression1 is executed. Next, expression2 is evaluated. If expression2 is true, statement is executed and expression3 is executed. The cycle is repeated as long as expression2 remains true. If expression2 is false, the for statement terminates and execution passes to the first statement immediately following.
ii. For example, the for loop below selects the file UI_TEST from the File Name list in the Open window. It selects this file five times and then stops.
set_window ("Open");
for (i = 0; i < 5; i++)
    list_select_item ("File Name:_1", "UI_TEST");   # Item Number 2
b) A while loop executes a block of statements for as long as a specified condition is true. It has the following syntax:
while ( expression )
    statement;
i. While expression is true, the statement is executed. The loop ends when the expression is false. For example, the while statement below performs the same function as the for loop above.
set_window ("Open");
i = 0;
while (i < 5) {
    i++;
    list_select_item ("File Name:_1", "UI_TEST");   # Item Number 2
}
c) A do/while loop executes a block of statements for as long as a specified condition is true. Unlike the for loop and while loop, a do/while loop tests the condition at the end of the loop, not at the beginning. A do/while loop has the following syntax:
do
    statement
while (expression);
i. The statement is executed and then the expression is evaluated. If the expression is true, the cycle is repeated. If the expression is false, the cycle is not repeated.
ii. For example, the do/while statement below opens and closes the Order dialog box of Flight Reservation five times.
set_window ("Flight Reservation");
i = 0;
do {
    menu_select_item ("File;Open Order...");
    set_window ("Open Order");
    button_press ("Cancel");
    i++;
} while (i < 5);

100) Write and explain the decision-making commands?
a) You can incorporate decision-making into your test scripts using if/else or switch statements.
i. An if/else statement executes a statement if a condition is true; otherwise, it executes another statement. It has the following syntax:
if ( expression )
    statement1;
[ else
    statement2; ]
expression is evaluated. If expression is true, statement1 is executed. If expression is false, statement2 is executed.
b) A switch statement enables WinRunner to make a decision based on an expression that can have more than two values. It has the following syntax:
switch ( expression )
{
    case case_1: statements
    case case_2: statements
    case case_n: statements
    default: statement(s)
}
The switch statement consecutively evaluates each case expression until one is found that equals the initial expression. If no case is equal to the expression, the default statements are executed. The default statements are optional.

101) Write and explain the switch command?
a) A switch statement enables WinRunner to make a decision based on an expression that can have more than two values. It has the following syntax:
switch ( expression )
{
    case case_1: statements
    case case_2: statements
    case case_n: statements
    default: statement(s)
}
b) The switch statement consecutively evaluates each case expression until one is found that equals the initial expression. If no case is equal to the expression, the default statements are executed. The default statements are optional.

102) How do you write messages to the report?
a) To write a message to a report, we use the report_msg statement.
Syntax: report_msg ( message );

103) What is the command to invoke an application?
a) invoke_application is the function used to invoke an application.
Syntax: invoke_application ( file, command_option, working_dir, SHOW );

104) What is the purpose of the tl_step command?
a) It is used to determine whether sections of a test pass or fail.
Syntax: tl_step ( step_name, status, description );

105) Which TSL function will you use to compare two files?
a) We can compare two files in WinRunner using the file_compare function.
Syntax: file_compare ( file1, file2 [, save_file] );

106) What is the use of the Function Generator?
a) The Function Generator provides a quick, error-free way to program scripts. You can:
i. Add Context Sensitive functions that perform operations on a GUI object or get information from the application being tested.
ii. Add Standard and Analog functions that perform non-Context Sensitive tasks, such as synchronizing test execution or sending user-defined messages to a report.
iii. Add Customization functions that enable you to modify WinRunner to suit your testing environment.
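A short sketch that chains several of these functions together; the application path and file names are illustrative assumptions, and a zero return from file_compare is assumed to indicate identical files:

# launch the application, compare two files, and report the outcome
invoke_application ("C:\\flight32\\flight4a.exe", "", "C:\\flight32", SW_SHOW);
if (file_compare ("C:\\results\\expected.txt", "C:\\results\\actual.txt") == 0)
    tl_step ("compare files", 0, "files match");    # 0 reports a pass
else
    tl_step ("compare files", 1, "files differ");   # non-zero reports a failure
report_msg ("file comparison step finished");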
107) What is the use of putting call and call_close statements in the test script?
a) You can use two types of call statements to invoke one test from another:
i. A call statement invokes a test from within another test.
ii. A call_close statement invokes a test from within a script and closes the test when the test is completed.
iii. The call statement has the following syntax:
call test_name ( [ parameter1, parameter2, ... parametern ] );
iv. The call_close statement has the following syntax:
call_close test_name ( [ parameter1, parameter2, ... parametern ] );
v. test_name is the name of the test to invoke. The parameters are the parameters defined for the called test.
vi. The parameters are optional. However, when one test calls another, the call statement should designate a value for each parameter defined for the called test. If no parameters are defined for the called test, the call statement must contain an empty set of parentheses.

108) What is the use of treturn and texit statements in the test script?
a) The treturn and texit statements are used to stop execution of called tests.
i. The treturn statement stops the current test and returns control to the calling test.
ii. The texit statement stops test execution entirely, unless tests are being called from a batch test. In this case, control is returned to the main batch test.
b) Both functions provide a return value for the called test. If treturn or texit is not used, or if no value is specified, the return value of the call statement is 0.
c) treturn terminates execution of the called test and returns control to the calling test. The syntax is:
treturn [( expression )];
The optional expression is the value returned to the call statement used to invoke the test.
d) When tests are run interactively, the texit statement discontinues test execution. However, when tests are called from a batch test, texit ends execution of the current test only; control is then returned to the calling batch test. The syntax is:
texit [( expression )];

109) Where do you set up the search path for a called test?
a) The search path determines the directories that WinRunner will search for a called test.
b) To set the search path, choose Settings > General Options. The General Options dialog box opens. Click the Folders tab and choose a search path in the Search Path for Called Tests box. WinRunner searches the directories in the order in which they are listed in the box. Note that the search paths you define remain active in future testing sessions.

110) How do you create user-defined functions, and what is the syntax?
a) A user-defined function has the following structure (see the sketch after this question):
[class] function name ( [mode] parameter... )
{
    declarations;
    statements;
}
b) The class of a function can be either static or public. A static function is available only to the test or module within which the function was defined.
c) Parameters need not be explicitly declared. They can be of mode in, out, or inout. For all non-array parameters, the default mode is in; for array parameters, the default is inout. The significance of each of these parameter modes is as follows:
in: A parameter that is assigned a value from outside the function.
out: A parameter that is assigned a value from inside the function.
inout: A parameter that can be assigned a value from outside or inside the function.
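A minimal example of this structure: a public function with in and out parameters (the function itself is illustrative):

public function order_total (in price, in quantity, out total)
{
    auto msg;                      # auto: local to this function
    total = price * quantity;
    msg = "total is " & total;     # & concatenates strings in TSL
    report_msg (msg);
    return (0);                    # pass control back to the caller
}

# calling the function from a test:
order_total (15, 3, grand_total);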
111) What do the static and public classes of a function mean?
a) The class of a function can be either static or public.
b) A static function is available only to the test or module within which the function was defined.
c) Once you execute a public function, it is available to all tests for as long as the test containing the function remains open. This is convenient when you want the function to be accessible from called tests. However, if you want to create a function that will be available to many tests, you should place it in a compiled module. The functions in a compiled module are available for the duration of the testing session.
d) If no class is explicitly declared, the function is assigned the default class, public.

112) What do the in, out, and inout parameter modes mean?
a) in: A parameter that is assigned a value from outside the function.
b) out: A parameter that is assigned a value from inside the function.
c) inout: A parameter that can be assigned a value from outside or inside the function.

113) What is the purpose of the return statement?
a) This statement passes control back to the calling function or test. It also returns the value of the evaluated expression to the calling function or test. If no expression is assigned to the return statement, an empty string is returned.
Syntax: return [( expression )];

114) What do auto, static, public, and extern variables mean?
a) auto: An auto variable can be declared only within a function and is local to that function. It exists only for as long as the function is running. A new copy of the variable is created each time the function is called.
b) static: A static variable is local to the function, test, or compiled module in which it is declared. The variable retains its value until the test is terminated by an Abort command. This variable is initialized each time the definition of the function is executed.
c) public: A public variable can be declared only within a test or module, and is available to all functions, tests, and compiled modules.
d) extern: An extern declaration indicates a reference to a public variable declared outside of the current test or module.

115) How do you declare constants?
a) The const specifier indicates that the declared value cannot be modified. The class of a constant may be either public or static. If no class is explicitly declared, the constant is assigned the default class, public. Once a constant is defined, it remains in existence until you exit WinRunner.
b) The syntax of this declaration is:
[class] const name [= expression];

116) How do you declare arrays?
a) The following syntax is used to define the class and the initial expression of an array. Array size need not be defined in TSL.
b) class array_name [ ] [= init_expression]
c) The array class may be any of the classes used for variable declarations (auto, static, public, extern). A brief sketch of these declaration forms follows this question.
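The declaration forms from questions 114 through 116 side by side, as a brief illustrative sketch:

public const MAX_ROWS = 100;   # constant; public is the default class
static run_count = 0;          # retains its value until the test is aborted
public names[];                # array; its size is not declared in TSL
names[1] = "Mercury";
names[2] = "WinRunner";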
117) How do you load and unload a compiled module?
a) In order to access the functions in a compiled module, you need to load the module. You can load it from within any test script using the load command; all tests will then be able to access the functions until you quit WinRunner or unload the compiled module.
b) You can load a module either as a system module or as a user module. A system module is generally a closed module that is invisible to the tester. It is not displayed when it is loaded, cannot be stepped into, and is not stopped by a pause command. A system module is not unloaded when you execute an unload statement with no parameters (global unload).
load ( module_name [, 1|0 ] [, 1|0 ] );
module_name is the name of an existing compiled module. Two additional, optional parameters indicate the type of module. The first parameter indicates whether the module is a system module or a user module: 1 indicates a system module; 0 indicates a user module (default = 0). The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded: 1 indicates that the module will close automatically; 0 indicates that the module will remain open (default = 0).
c) The unload function removes a loaded module or selected functions from memory.
d) It has the following syntax:
unload ( [ module_name | test_name [, "function_name" ] ] );

118) Why do you use the reload function?
a) If you make changes in a module, you should reload it. The reload function removes a loaded module from memory and reloads it (combining the functions of unload and load). The syntax of the reload function is:
reload ( module_name [, 1|0 ] [, 1|0 ] );
module_name is the name of an existing compiled module. The two additional, optional parameters have the same meaning as for load above.

119) Why does the minus sign not appear when using obj_type(), win_type(), or type()?
a) With any of the type() functions, a minus sign actually means "hold down the button for the previous character." The solution is to put a backslash character "\\" before the minus sign. This also applies to +, <, and >.
WinRunner vs. QuickTest Pro


WinRunner

Pros:
1. Mature product that has been around since about 1995.
2. Simple interface.
3. Many features.
4. Many consultants and user groups/forums for support.
5. Decent built-in help.
6. Fewer features to have to learn and understand compared to QuickTest Pro.
7. Interfaces with the Windows API.
8. Integrates with TestDirector.

Cons:
1. Has essentially been superseded by QuickTest Pro.
2. The tester works by looking at program code for the test case.
3. Coding is done in a proprietary language (TSL).
4. Very few resources available on TSL programming (it is based on the C programming language, but is not C).
5. You need to be able to program to a certain extent in order to gain flexibility and parameterization.
6. Training is needed to implement it properly.
7. The GUI Map can be difficult to understand and implement.

QTP

Pros:
1. Will be getting the initial focus on development of all new features and supported technologies.
2. Ease of use.
3. Simple interface.
4. Presents the test case as a business workflow to the tester (simpler to understand).
5. Numerous features.
6. Uses a real programming language (Microsoft's VBScript) with numerous resources available.
7. QuickTest Pro is significantly easier for a non-technical person to adapt to and create working test cases, compared to WinRunner.
8. Data table integration is better and easier to use than WinRunner's.
9. Test run iterations/data-driving a test is easier and better implemented with QuickTest.
10. Parameterization is easier than in WinRunner.
11. Can enhance existing QuickTest scripts without the application under test being available, by using the ActiveScreen.
12. Can create and implement the Microsoft Object Model (Outlook objects, ADO objects, FileSystem objects; supports DOM, WSH, etc.).
13. Better object identification mechanism.
14. Numerous existing functions available for implementation, both from within QuickTest Pro and VBScript.
15. QTP supports the .NET development environment (currently WinRunner 7.5 does not).
16. XML support (currently WinRunner 7.5 does not).
17. The test report is more robust in QuickTest compared to WinRunner.
18. Integrates with TestDirector and WinRunner (can kick off WinRunner scripts from QuickTest).

Cons:
1. Currently there are fewer resources (consultants and expertise) available, because QTP is a newer product on the market and demand exceeds supply.
2. You must know VBScript in order to program at all.
3. You must be able to program in VBScript in order to implement the real advanced testing tasks and to handle very dynamic situations.
4. Training is needed to implement it properly.
5. The Object Repository (OR) and testing environment (paths, folders, function libraries, OR) can be difficult to understand and are not simple.

Mercury LoadRunner System Requirements

| Requirement | Controller with On-Line Monitors | Virtual User Generator (VuGen) | Virtual Users (Load Generator Machine) | Analysis Module |
| Computer/Processor | Pentium 350 MHz or higher | Pentium 350 MHz or higher | Pentium 1 GHz or higher | Pentium 350 MHz or higher |
| Operating system | Windows NT Service Pack 6a, Windows 2000, Windows XP | Windows NT Service Pack 6a, Windows 2000, Windows XP | Windows NT Service Pack 6a, Windows 2000, Windows XP; HP-UX 11.x or higher; Solaris 2.6 or higher; AIX 4.3.3 or higher; Linux Red Hat 6.0 or higher | Windows NT Service Pack 6a, Windows 2000, Windows XP |
| Memory | 128 MB or more | 128 MB or more | At least 1 MB RAM per non-multithreaded Vuser, or at least 512 KB per multithreaded Vuser | 128 MB or more |
| Swap space | Two times the total physical memory | Two times the total physical memory | Two times the total physical memory | Two times the total physical memory |
| Hard disk space | Installation: 300 MB; Free: 200 MB | Installation: 300 MB; Free: 200 MB | Installation: 130 MB; Free: minimum 500 MB | Installation: 100 MB; Free: minimum 500 MB |
| Browser | Internet Explorer 5.x or higher; Netscape Navigator 4.x, 6.x | Internet Explorer 5.x or higher; Netscape Navigator 4.x, 6.x | N/A | Internet Explorer 5.x or higher; Netscape Navigator 4.x, 6.x |

APPLICATION DELIVERY

LOAD TESTING TO PREDICT WEB PERFORMANCE

ABSTRACT
Businesses that leverage the Web to conduct daily transactions need to provide customers with the best possible user experience in order to be successful. Often, however, these businesses lose customers because their sites are unable to handle surges in Web traffic. For example, a successful promotion that drives Web traffic can radically impact business performance and response time for end users. Online customers, tired of waiting, will simply click over to competitors' sites, and any opportunities for revenue growth will be lost. Whether a business is a brick-and-mortar or a dot-com, the challenges of successfully conducting business online are the same: high user volumes, slow response times for Web requests, and ensuring the overall reliability of the service. This white paper illustrates how maintaining Web-application performance is key to overcoming these e-business challenges and generating revenue. The paper then discusses the importance of maintaining a Web application to ensure customer satisfaction, and why load testing is critical to successfully launching and managing Web sites. In addition, it examines various types of load testing and provides a detailed discussion of the load testing process and the attributes of a reliable testing tool. In closing, this paper provides an overview of Mercury LoadRunner, our load testing solution.

TABLE OF CONTENTS
Abstract
E-Business is Booming
Ensuring Optimal End-User Experience: A Complex Issue
Application Load Testing Prior to Going Live
Challenges of Automated Load Testing Tools
  Accuracy
  Scalability
The Process of Automated Load Testing
  Step 1: System Analysis
  Step 2: Creating Virtual User Scripts
  Step 3: Defining User Behavior
  Step 4: Creating a Load Test Scenario
  Step 5: Creating Network Impact Tests
  Step 6: Running the Load Test Scenario and Monitoring the Performance
  Step 7: Analyzing Results
Mercury LoadRunner
  Step 1: System Analysis
  Step 2: Creating Virtual User Scripts
  Step 3: Defining User Behavior
  Step 4: Creating a Load Test Scenario
  Step 5: Creating Network Impact Tests
  Step 6: Running the Load Test Scenario and Monitoring the Performance
  Step 7: Analyzing Results
Summary

E-Business is Booming
In the past few years, e-business has grown at an accelerated rate. Today, analysts estimate that 260 million people use the Internet and there is little sign that this growth will slow down. In fact, the International Data Corporation expects the number of online users to reach 500 million within the next two years. E-business has become a popular commercial medium for two reasons: It enables businesses to share information and resources worldwide, and it offers them an efficient channel for advertising, marketing, and e-commerce. By using the Internet, businesses have been able to improve their sales and marketing reach, increase their quality assurance to customers, and conduct multimedia conversations with customers. More important, businesses are realizing the challenges and rewards of providing customers with a positive end-user experience. After all, customers who are satisfied with their online experience are likely to engage in repeat business and provide a steady stream of revenue. As a result, businesses have become more focused on providing positive end-user experiences.

Ensuring Optimal End-User Experience: A Complex Issue


In addition to being fast-growing, e-business is very complex. According to a December 1999 report by the IBM High-Volume Web site team, commercial Web sites can be classified into four categories based on the types of business transactions they perform: publishing/subscribers, online shopping, customer self-service, and trade/auction sites. By understanding these categories, businesses can better predict their level of user volume and understand how users prefer to access the site. Following is an overview of the different commercial Web site categories:
- Publishing/subscribers sites provide the user with media information, such as magazine and newspaper publications. Although the total number of concurrent users is generally low on these sites, the number of individual transactions performed on a per-user basis is relatively high, resulting in the largest number of page views of all site categories.
- Online shopping sites allow users to browse and shop for anything found in a traditional brick-and-mortar store. Traffic is heavy, with daily volumes ranging between one and three million hits per day.
- Customer self-service sites include banking and travel reservation sites. Security considerations (e.g., privacy, authentication, site regulation, etc.) are high.
- Trade/auction sites allow users to buy and sell commodities. This type of site is volatile, with very high transaction rates that can be extremely time-sensitive.


No matter the transaction type, Web sites must enable customers to conduct business in a timely manner. For this reason, a scalable architecture is essential. A well-structured Web environment, however, consists of an extremely complex multi-tier system. Scaling this infrastructure from end-to-end means managing the performance and capacities of individual components within each tier. Figure 1 illustrates the complexity of these components.

[Figure 1. Pictured is the schematic of a complex Web infrastructure: clients connecting over the Internet, through routers, switches, a firewall, and load balancers, to Web servers, application servers, and database servers and other database sources.]

This complexity prompts many questions about the integrity and performance capabilities of a Web site. For instance, will the response time experienced by the user be less than eight seconds? Will the Web site be able to sustain a given number of users? Will all the pieces of the system, in terms of interoperability, coexist when connected together? Is communication between the application server and the database server fast enough? Is there sufficient hardware on each tier to handle high volumes of traffic? Will the clients have quality experiences over wide area networks (WANs)? To eliminate these performance issues, businesses must implement a method for predicting how Web applications will behave in a production environment, prior to deployment.

Application Load Testing Prior to Going Live


To accommodate the growth of their sites, Web developers can optimize software or add hardware to each component of the system. However, to ensure optimal performance, businesses must load test the complete assembly of a system prior to going live. Application load testing is the measure of an entire Web application's ability to sustain a number of simultaneous users and/or transactions while maintaining adequate response times. Because it is comprehensive, load testing is the only way to accurately test the end-to-end performance of a Web site prior to going live.


Application load testing enables developers to isolate bottlenecks in any component of the infrastructure. Two common methods for implementing this process are manual and automated testing. Manual testing, however, has several built-in challenges, such as determining how to:
- Emulate the hundreds of thousands of manual users that will interact with the application to generate load.
- Coordinate the operations of users.
- Measure response times.
- Repeat tests in a consistent way.
- Compare results.
Because load testing is iterative in nature, you must identify performance problems, tune the system, and retest countless times to ensure that tuning has had a positive impact. For this reason, manual testing is not a very practical option. With automated load testing tools, tests can be easily rerun and the results automatically measured. In this way, automated testing tools provide a more cost-effective and efficient solution than their manual counterparts. Plus, they minimize the risk of human error during testing. Today, automated load testing is the preferred choice for load testing a Web application. The testing tools typically use three major components to execute a test:
- A control console, which organizes, drives, and manages the load.
- Virtual users, which are processes used to imitate a real user performing a business process on a client application.
- Load servers, used to run the virtual users.

[Figure 2. A single console controlling several thousand virtual users replaces manual testers: a control console drives virtual users over the Internet against the Web server, application server, and database.]


Using these components, automated load testing tools can:
- Replace manual testers with automated virtual users.
- Simultaneously run many virtual users on a single load-generating machine.
- Automatically measure transaction response times.
- Easily repeat load scenarios to validate design and performance changes.
This advanced functionality in turn allows you to save time and costly resources. Automated testing tools recently demonstrated their value in a report by the Newport Group. The report, published in 1999, revealed that 52 percent of Web-based businesses did not meet their anticipated Web-based business scalability objectives. Of this group, 60 percent did not use any type of automated load testing tool. In contrast, nearly 70 percent of businesses that met their scalability expectations had used an automated load testing tool.

Challenges of Automated Load Testing Tools


Primary challenges for load testing tools include being accurate, being scalable, and being able to isolate performance problems. To isolate performance problems, load testing tools monitor key system-level components and identify bottlenecks during the run of a load test. Accuracy is defined by how closely an automated tool can emulate real user behavior. Scalability relates to the product's ability to generate the maximum load using the minimum amount of resources.

[Figure 3. Automated load testing enables businesses to meet scalability expectations. The chart shows that 48% of applications scaled as expected and 52% did not; of the 48% that scaled, 70% had used an automated load testing tool and 30% had not, while of the 52% that did not scale, only 40% had used an automated load testing tool and 60% had not.]

Automated load testing tools must address all aspects of accuracy and scalability and be able to pinpoint problems in order to ensure reliable end-to-end testing. Following are some key attributes of accuracy and scalability.


ACCURACY
- Recording ability against a real client application.
- Capturing protocol-level communication between the client application and the rest of the system.
- Providing flexibility and the ability to define user behavior configuration (e.g., think times, connection speeds, cache settings, iterations).
- Verifying that all requested content returns to the browser to ensure a successful transaction.
- Showing detailed performance results that can be easily understood and analyzed to quickly pinpoint the root cause of problems.
- Measuring end-to-end response times.
- Using real-life data.
- Synchronizing virtual users to generate peak loads.
- Monitoring different tiers of the system with minimal intrusion.

SCALABILITY
- Generating the maximum number of virtual users that can be run on a single machine before exceeding the machine's capacity.
- Generating the maximum number of hits per second against a Web server.
- Managing thousands of virtual users.
- Increasing the number of virtual users in a controlled fashion.
- Simulating the effect of scaling out to remote locations over WANs.

Figure 4. Accuracy and scalability are key attributes in load testing.

The Process of Automated Load Testing


By taking a disciplined approach to load testing, you can optimize resources; better predict hardware, software, and network requirements; and set performance expectations to meet end-user service-level agreements (SLAs). Repeatability of the testing process is necessary to verify that changes have taken place. Following is a step-by-step overview of the automated load testing process:
Step 1: System Analysis

This step is critical to interpreting your testing needs and is used to determine whether the system will scale and perform to your expectations. Testers essentially translate the existing requirements of the user into load testing objectives. A thorough evaluation of the requirements and needs of a system, prior to load testing, will provide more realistic test conditions. First, you must identify all key performance goals and objectives before executing any testing strategies. Examples include identifying which processes/transactions to test, which components of the system architecture to use in the test, and the number of concurrent connections and/or hits per second to expect against the Web site, as well as clarifying which processes are to be tested. By referring to the four models of Web sites described earlier, developers can easily classify their site's process/transaction type. For example, a business-to-consumer model can implement an online shopping process in which a customer browses through an online bookstore catalog, selects an item, and makes a purchase. This process could be labeled "buy book" for the purposes of the test. Defining these objectives will provide a concise outline of the SLAs and mark the goals to be achieved with testing.


Second, you need to define the input data used for testing. The data can be created dynamically; for example, auction bids may change every time a customer sends in a new request. Random browsing also may be used to obtain the data; examples include any non-transactional process, such as browsing through a brochure or viewing online news. Emulating data input can avoid potential problems with inaccurate load test results.

Third, you must determine the appropriate strategy for testing applications. You can select from three strategy models: load testing, stress testing, and capacity testing. Load testing is used to test an application against a requested number of users. The objective is to determine whether the site can sustain this requested number of users with acceptable response times. Stress testing, on the other hand, is load testing over extended periods of time to validate an application's stability and reliability. The last strategy is capacity testing, which is used to determine the maximum number of concurrent users an application can manage. For example, businesses would use capacity testing to benchmark the maximum loads of concurrent users their sites can sustain before experiencing system failure.

Fourth, testers need to cultivate a solid understanding of the system architecture, including:
- Defining the types of routers and network connectivity used in the network setup.
- Determining whether multiple servers are being used.
- Establishing whether load balancers are used as part of the IP networks to manage the servers.
- Finding out which servers are configured into the system (Web, application, database).

Last, developers must determine which resources are available to run the virtual users. This requires deciding whether there is a sufficient number of load generators or test machines to run the appropriate number of virtual users. It also requires determining whether the testing tool has multithreading capabilities and can maximize the number of virtual users being run. Ultimately, the goal is to minimize system resource consumption while maximizing the virtual user count.
Step 2: Creating Virtual User Scripts

A script recorder is used to capture all the business processes into test scripts, often referred to as virtual user scripts or virtual users. A virtual user emulates the real user by driving the real application as a client. It is necessary to identify and record all the various business processes from start to finish. Defining these transactions makes it possible to break a business process down into its individual actions and to measure how long each one takes.


Step 3: Defining User Behavior

Run-time settings define the way the script runs in order to accurately emulate real users. Settings can configure think time, connection speed, and error handling. Think times can vary in accordance with different user actions and the user's level of experience with Web technology. For example, novice users require more time to execute a process because they have the least experience using the Web; a tester will therefore need to emulate more think time in the form of pauses. Advanced users, however, have much more experience and can execute processes at an accelerated level, often by using shortcuts. System response times also can vary because they are dependent on connection speed, and all users connect to the Web system at different speeds (e.g., modem, LAN/WAN). WAN emulation accurately simulates a variety of connections at varying network bandwidths and latencies (e.g., 28.8 Kbps, 56.6 Kbps, etc.). This is very useful for determining how the underlying network affects application response times. Error handling is another setting that requires configuration. Errors arise throughout the course of a scenario and can impede test execution. You can configure virtual users to handle these errors so that the tests can run uninterrupted. Errors in network communications can also have a profound influence on application response times. You can also configure WAN emulation to introduce underlying network errors, to understand their impact and measure the application's tolerance for them.
Step 4: Creating a Load Test Scenario

The load test scenario contains information about the groups of virtual users that will run the scripts and the load machines on which the groups run. In order to run a successful scenario, you must first define individual groups based on common user transactions. Second, you need to define and distribute the total number of virtual users; a varying number of virtual users can be assigned to individual business processes to emulate user groups performing multiple transactions. Third, you must determine which load-generating machines the virtual users will run on; load-generator machines can be added to the client side of the system architecture to run additional virtual users. Last, testers need to specify how the scenario will run. Virtual user groups can run in either staggered or parallel formation. Staggering the virtual users allows you to examine a gradual increase of the user load up to a peak, as the sketch below illustrates.
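The following minimal Python sketch, using arbitrary illustrative numbers, shows what a staggered ramp-up schedule amounts to: virtual users are started in fixed-size batches at fixed intervals until the target load is reached.

# A staggered ramp-up starts virtual users in batches at intervals;
# a parallel run would start all of them at once.
def ramp_up_schedule(total_users, batch_size, interval_s):
    """Yield (time offset in seconds, cumulative running users) pairs."""
    running, offset = 0, 0
    while running < total_users:
        running = min(running + batch_size, total_users)
        yield offset, running
        offset += interval_s

for t, users in ramp_up_schedule(total_users=500, batch_size=50, interval_s=60):
    print(f"t={t:4d}s running virtual users: {users}")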
Step 5: Creating Network Impact Tests

The network test leverages information about where the groups of virtual users will be located relative to the server. The network scenario defines changes in certain network characteristics, such as bandwidth availability, contention, latency, errors, and jitter. The number of virtual users in this test is held constant; only the network characteristics vary. Staggering the decreases in network bandwidth, or the increases in latency, errors, and jitter, enables you to understand their relative influence on application behavior. This data can be used to set network requirements for the application when it is deployed. This kind of testing can be conducted directly over the network to remote locations, but for testing purposes it is generally more practical to emulate the network, where a variety of conditions can be easily established using WAN emulation. This allows the prediction of performance for remotely located users.
Step 6: Running the Load Test Scenario and Monitoring the Performance

Real-time monitoring allows testers to view the application's performance at any point during the test. Every component of the system requires monitoring: the clients, the network, the Web server, the application server, the database, and all server hardware. Because you can view the performance of every tier, server, and component of the system while the test runs, performance bottlenecks can be detected and identified as soon as they appear. This shortens the test process and leads to a more stable application.
Step 7: Analyzing Results

This is the most important step: collecting and processing the data to resolve performance bottlenecks. The analysis yields a series of graphs and reports that summarize and present the end-to-end test results. For example, Figure 5 uses generic data to display a standard performance-under-load graph, which plots response time against the total number of virtual users. This can be used to determine the maximum number of concurrent users before response times become unacceptable. Figure 6 shows a transaction overview revealing the total number of transactions that passed in a scenario. Analysis of these types of graphs can help testers isolate bottlenecks and determine which changes are needed to improve system performance. After these changes are made, the tester must rerun the load test scenarios to verify the adjustments.
Figure 5. This is a generic graph showing performance under load, useful for pinpointing bottlenecks. For example, if a tester wants to know the user threshold at a two-second response time, the results above show a maximum of 7,500 concurrent users.
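To make the reading of such a graph concrete, here is a minimal Python sketch that answers the caption's question from hypothetical (user load, response time) samples; the numbers are invented to match the example, not real measurements.

# Hypothetical samples like those behind Figure 5.
samples = [
    (1_000, 0.6), (2_500, 0.9), (5_000, 1.4),
    (7_500, 2.0), (10_000, 3.8), (12_500, 7.1),
]

def max_users_within(threshold_s, data):
    """Largest tested user load whose response time stays at or under the threshold."""
    ok = [users for users, rt in data if rt <= threshold_s]
    return max(ok) if ok else 0

print(max_users_within(2.0, samples))  # 7500, matching the caption's example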

Mercury LoadRunner
Mercury LoadRunner, part of Mercury Performance Center, is a load testing tool that predicts system behavior and performance. It exercises an entire enterprise infrastructure by emulating thousands of users to identify and isolate problems. Able to support multiple environments, LoadRunner can test an entire enterprise infrastructure, including e-business, ERP, CRM, and custom client/server applications, thereby enabling IT and Web groups to optimize application performance. By emulating the behavior of a real user, LoadRunner can test applications communicating over a wide range of protocols, such as HTTP(S), COM, CORBA, Oracle Applications, etc. LoadRunner also features seamless integration with Mercury Business Availability Center, so the same tests created during testing can be reused to monitor the application once it is deployed. LoadRunner enhances every step of the load testing process to ensure that users reap the maximum return on their investment in the tool. The remainder of this paper discusses how LoadRunner supports each segment of the load testing process.

Figure 6. This graph is a generic display of the number of transactions that passed or failed. In the above example, if the goal is a 90-percent passing rate, then transaction 2 fails: approximately 33 of its 100 transactions (33 percent) failed.
Step 1: System Analysis

LoadRunner advocates the same system analysis as mentioned previously in this paper. In emulating a test environment, it is necessary to identify all testing conditions, including system architecture components, the processes being tested, and the total number of virtual users with which to test. A good system analysis will enable customers to convert their goals and requirements into a successful, automated test script.
Step 2: Creating Virtual User Scripts

You begin by recording the business processes to create a test script. Script recording is done using LoadRunner's Virtual User Generator (VUGen). VUGen is a component that runs on a client desktop and captures the communication between the real client application and the server. VUGen can emulate the exact behavior of a real browser by sending various e-business protocol requests to the server. VUGen can record against Netscape or Internet Explorer browsers, or against any user-defined client that provides the ability to specify a proxy address. After the recording process, a test script is generated. You can then add logic to the script to make it more realistic: intelligence can be added so that scripts emulate virtual user reasoning while executing a transaction. LoadRunner executes this stage using transactions, together with its verification and parameterization features.
Transactions: Transactions represent a series of operations to be measured under load conditions. A transaction can be a single URL request or a complete business process leading through several screens, such as the online purchase of a book.
Figure 7. The Virtual User Generator allows testers to capture business processes to create virtual users.


Verification: VUGen allows insertion of verification checkpoints using ContentCheck. ContentCheck verifies the application functionality by analyzing the returned HTML Web page to ensure a successful transaction. If the verification fails, LoadRunner will log the error and highlight the reasons for the failure (e.g., broken link, missing images, erroneous text).
Parameterization: To accurately emulate real user behavior, LoadRunner virtual users use varying sets of data during load testing, replacing constant values in the script with variables or parameters. The virtual user can substitute the parameters with values from a data source, such as flat files, random numbers, date/time, etc. This allows a common business process, such as searching for or ordering a book, to be performed many times by different users.
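The following minimal Python sketch illustrates the idea behind parameterization in a tool-neutral way: a constant recorded into a script is replaced with values drawn from a data file. The URL, placeholder name, and CSV layout are all hypothetical.

import csv
import itertools

# Hypothetical recorded request, with its constant replaced by a parameter.
RECORDED_URL = "http://example.com/search?title={book_title}"

def parameterized_requests(csv_path):
    """Substitute the parameter with successive rows of a data file,
    cycling back to the start if there are more iterations than rows."""
    with open(csv_path, newline="") as f:
        titles = [row["title"] for row in csv.DictReader(f)]
    for title in itertools.cycle(titles):
        yield RECORDED_URL.format(book_title=title)

# Each virtual user iteration would issue the next URL from this generator,
# so the same business process runs with different data every time.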
Step 3: Defining User Behavior

LoadRunner provides comprehensive run-time settings to configure scripts that emulate the behavior of real users. Examples of run-time settings include:
Think Time: Controls the speed at which the virtual user interacts with the system by including pauses, or think times, during test execution. By varying think times, LoadRunner can emulate the behaviors of different users, from novice to expert.
Dial-Up Speed: Emulates a user connected to the system using a modem and/or LAN/WAN connections. Modem speeds range from 14.4 Kbps to 56.6 Kbps. This is useful for accurately emulating response times for each request.
Emulate Cache: Emulates a user browsing with a specific cache size. Caching can be turned off based on server requirements.
Browser Emulation: Enables you to specify which browser the virtual user emulates. LoadRunner supports both Netscape and Internet Explorer, as well as any custom browser.
Number of Connections: Allows the virtual user to control the number of connections to a server, like a real browser, for the download of Web-page content.
IP Spoofing: Tests the performance impact of IP-dependent components by assigning virtual users their own IP addresses from the same physical machine.
Figure 8. The run-time settings are used to emulate the real user as closely as possible. In this example, think time is randomly generated to simulate the speed at which the user interacts with the system.


Iterations: Commands repetition of virtual user scripts, and paces virtual users by instructing them how long to wait between intervals. Iterative testing defines the amount of work a user does based on the number of times a process is performed with varying data.
Error Handling: Regulates how a virtual user handles errors during script execution. LoadRunner can enable the Continue on Error feature when the virtual user encounters an error during replay.
Log Files: Stores information about virtual user-server communication. Standard logging maps all transactions, rendezvous, and output messages. Extended logging also tracks warnings and other messages.
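Iteration pacing, as described above, simply means starting each repetition of a scripted action on a fixed interval regardless of how long the action itself took. A minimal Python sketch of that idea, with an arbitrary action and interval:

import time

def run_iterations(action, iterations, pacing_s):
    """Repeat a scripted action, starting each iteration on a fixed
    pacing interval regardless of how long the action itself took."""
    for i in range(iterations):
        start = time.monotonic()
        action(i)
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, pacing_s - elapsed))  # pad out to the interval

run_iterations(lambda i: print(f"iteration {i}"), iterations=3, pacing_s=2.0)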
Step 4: Creating a Load Test Scenario

LoadRunner's Controller is used to create scenarios. As a single point of control, it provides complete visibility of the tests and the virtual users. The Controller facilitates the process of creating a load test scenario by allowing you to:
Assign scripts to individual groups.
Define the total number of virtual users needed to run the tests.
Define the host machines on which virtual users run.
In addition, LoadRunner offers a Scenario Wizard, a Scheduler, and TurboLoad to enhance the tester experience.
Scenario Wizard: LoadRunner's Scenario Wizard enables testers to quickly compose multi-user load test scenarios. Using five easy-to-follow screens, the Scenario Wizard steps you through selecting the workstations that will host the virtual users, as well as the test scripts to run. During this step-by-step process, you also create simulation groups of virtual users.
Scheduler: The LoadRunner Scheduler is used to ramp virtual user numbers up and down, positioning virtual users in both the ready state and the running state. For example, you may want to gradually increase the load of users logging into a site in fixed-size batches; users waiting to be started are held in the ready state. This method is useful for avoiding unnecessary strain on the system. The Scheduler also features an automated process that allows a script to run without the user being present; in real terms, this is analogous to running a script during off-peak hours of Internet traffic (6 p.m. to 6 a.m.). To schedule a test, you simply click the Run Scenario button and enter the desired starting time.
Figure 9. LoadRunners Controller is an interactive environment for organizing, driving, and managing the load test scenario.


TurboLoad: TurboLoad is a patent-pending technology that provides maximum scalability. TurboLoad minimizes CPU consumption for each virtual user, thereby enabling more users to run on each load-generator machine. In recent customer benchmarks using 10 Windows-based load servers (4-CPU, 500 MHz Xeon processors, 4 GB RAM), LoadRunner generated 3 billion Web hits per day against the Web system (or 3,700 hits/sec per machine), with the load generators running at less than 40 percent CPU utilization. TurboLoad also generates more hits per second for a given machine, so LoadRunner's replay speed can drive more throughput against the server using a minimal amount of resources.
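As a back-of-envelope check on those benchmark figures, the following Python lines recompute the daily hit count from the quoted per-machine rate; the arithmetic is ours, not part of the benchmark report.

machines = 10
hits_per_sec_per_machine = 3_700

total_hits_per_sec = machines * hits_per_sec_per_machine  # 37,000 hits/sec
hits_per_day = total_hits_per_sec * 60 * 60 * 24          # seconds in a day

print(f"{hits_per_day:,} hits/day")  # about 3.2 billion, consistent with the figure cited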
Step 5: Creating Network Impact Tests

With LoadRunner WAN emulation, the same virtual user scripts used in the previous steps are leveraged for network impact tests. Network characteristics such as connection speed, latency, and error rates are modified for groups of virtual users that are simultaneously emulated during a single test run. The impact of the network on the response times of the different groups, and the sensitivities of the application to the network, can then be accurately ascertained. Expected response time data can be recorded, and network requirements set, for use later when the application is deployed.
Step 6: Running the Load Test Scenario and Monitoring the Performance

Once the scenario is built, you are ready to run the test. LoadRunner's Controller provides a suite of performance monitors that can watch each component of a multi-tier system during the load test. By capturing performance data over the entire system, you can correlate this information with the end-user loads and response times in order to pinpoint bottlenecks. LoadRunner provides performance monitors for the network, network devices, and the most common Web servers, application servers, and database servers. The monitoring is done in a completely non-intrusive manner to minimize performance impact. Additionally, all of these monitors are hardware and OS independent, as they do not require agents to be installed on the remotely monitored servers.
Figure 10. LoadRunner online monitors help identify and isolate performance bottlenecks in real time.


LoadRunner supports a number of environments to provide more accurate online monitoring, including:
Runtime Graphs: Virtual User Status, User-Defined Data Points
Transaction Graphs: Response Time, Transactions (pass/fail)
Web-Server Resource Graphs: Hits per Second, Throughput, Apache, MS IIS, Netscape
System Resource Graphs: Server resources, SNMP, Tuxedo
Web-Application Server Graphs: BroadVision, ColdFusion, MS Active Server Pages, SilverStream, WebLogic
Database Server Resource Graphs: SQL Server, Oracle
Step 7: Analyzing Results

Evaluating results is the most important step in the load testing process. Up until now, you have been able to record and play back the actions of a real user with extreme precision while conducting multiple processes on the Web. In addition, the performance-monitoring feature offers an accurate method for pinpointing bottlenecks while running the scripts. To fix these problems, you can follow several steps. First, a specialist (DBA, network consultant) makes the necessary adjustments to the system. Next, you rerun the scripts to verify that the changes have taken effect. Last, a comparison of the before-and-after results enables the tester to measure how much the system has improved. LoadRunner's Analysis component provides a single integrated environment that gathers all the data generated throughout the testing cycle. Because this tool is powerful and easy to use, you can create cross-scenario comparisons of the graphs and thereby enrich the data analysis process. For example, Figure 11a shows the results for a dot-com after testing the maximum number of concurrent users that its existing system can handle. Based on these results, the dot-com plans to improve its infrastructure to allow more user traffic. Figure 11b provides a comparison from a repeated test, after adjustments had been made to the Web architecture to optimize the server software.

Figure 11a.

Figure 11b.


LoadRunner Analysis provides advanced, high-level drill-down capabilities that enable testers to locate bottlenecks in these scenarios. In addition, LoadRunner Analysis uses a series of sophisticated graphs and reports that answer such questions as:
What was the Web server's CPU and memory usage when the system was under a load of 5,000 concurrent users?
How many total transactions passed or failed after the completion of the load test?
How many hits per second did the Web server sustain?
What were the average transaction times for each virtual user?
Below are sample graphs that LoadRunner Analysis provides testers as they work through complex bottleneck issues.
Running Virtual Users: Displays the number of running virtual users during each second of a scenario.
Rendezvous: Indicates when and how virtual users were released at each rendezvous point.
Transactions/Sec (Passed): Displays the number of completed, successful transactions performed per second.
Transactions/Sec (Failed): Displays the number of incomplete, failed transactions performed per second.
Figure 12. This activity graph displays the number of completed transactions (successful and unsuccessful) performed during each second of a load test. This graph helps testers determine the actual transaction load on their system at any given moment. The results show that after six minutes an application is under a load of 200 transactions per second.

Figure 13. This performance graph displays the number of transactions that passed, failed, aborted, or ended with errors. For example, these results show that the Submit_Search business process passed approximately 96 percent of its transactions.

Figure 14. This performance graph displays the minimum, average, and maximum response times for all the transactions in the load test. It is useful for comparing individual transaction response times in order to pinpoint where most of the bottlenecks in a business process occur. For example, the results of this graph show that the FAQ business process has an average transaction response time of 1.779 seconds, an acceptable figure in comparison with the other processes.


LoadRunner provides a variety of performance graphs:
Percentile: Analyzes the percentage of transactions that were performed within a given time range.
Performance Under Load: Indicates transaction times relative to the number of virtual users running at any given point during the scenario.
Transaction Performance: Displays the average time taken to perform transactions during each second of the scenario run.
Transaction Performance Summary: Displays the minimum, maximum, and average performance times for all the transactions in the scenario.
Transaction Performance by Virtual User: Displays the time taken by an individual virtual user to perform transactions during the scenario.
Transaction Distribution: Displays the distribution of the time taken to perform a transaction.
LoadRunner offers two types of Web graphs:
Connections Per Second: Shows the number of connections made to the Web server by virtual users during each second of the scenario run.
Throughput: Shows the amount of throughput on the server during each second of the scenario run.
Figure 15. This Web graph displays the number of hits made on the Web server by Vusers during each second of the load test. This graph helps testers evaluate the amount of load Vusers generate in terms of the number of hits. For instance, the results provided in this graph indicate an average of 2,200 hits per second against the Web server.

Figure 16. This Web graph displays the amount of throughput (in bytes) on the Web server during the load test. This graph helps testers evaluate the amount of load Vusers generate in terms of server throughput. For example, this graph reveals a total throughput of more than 7 million bytes per second.

Figure 17. This graph correlates the relationship of the system behavior to the number of users, using a compilation of results from other graphs. This enables the tester to view the CPU consumption based on the total number of users.


LoadRunner's Analysis includes a Correlation of Results feature to enhance the analysis of the data. Correlation enables the tester to custom-design a graph beyond the basics, using any two metrics. As a result, the tester can pinpoint and troubleshoot performance problems more quickly.

Summary
In a short time, e-business has proven to be a viable business model for dot-coms and brick-and-mortars alike. With the number of Internet users growing exponentially, it is critical for these businesses to prepare themselves for high user volumes. Today's businesses can leverage load testing practices and tools to ensure that Web-application performance keeps pace with end-user demand. Moreover, by using automated load testing tools, businesses can quickly and cost-effectively assess the performance of applications before they go live, as well as analyze their performance after deployment. As a result, businesses can confidently stay one step ahead of performance issues and focus on initiatives to drive Web traffic and revenues. Mercury LoadRunner is the leading tool for predicting the scalability, reliability, and performance of an e-business application; identifying system bottlenecks; and displaying results. LoadRunner emulates various types of transactions using a highly scalable number of users. This is essential for understanding an application's limitations while planning for growth and reducing business risk. LoadRunner also tests system behavior under real-time conditions and converts this data into easy-to-use yet sophisticated graphs and reports. With this information, businesses can more quickly and efficiently resolve problems, thereby ensuring a positive end-user experience and providing the opportunity for increased revenue. Download a 10-day trial of Mercury LoadRunner and see how load testing your applications in pre-production will save you time and money when you're ready to go live. http://download.mercuryinteractive.com/cgi-bin/portal/download/loginForm.jsp?id=160&source=1102709805#d5160


2004 Mercury Interactive Corporation. Patents pending. All rights reserved. Mercury Interactive, the Mercury Interactive logo, the Mercury logo, Mercury Business Availability Center, Mercury Performance Center, and Mercury LoadRunner are trademarks or registered trademarks of Mercury Interactive Corporation in the United States and/or other foreign countries. All other company, brand, and product names are marks of their respective holders. WP-1079-0604

Load Testing material

April 8th 2005

This section contains summary descriptions of a number of load and performance tests, together with additional information, including diagrams, tables, and examples, for a variety of them.

Load Tests
Load Tests are end-to-end performance tests under anticipated production load. The objective of such tests is to determine the response times for various time-critical transactions and business processes, and to ensure that they are within documented expectations (or Service Level Agreements, SLAs). Load tests also measure the capability of an application to function correctly under load, by measuring transaction pass/fail/error rates. An important variation of the load test is the Network Sensitivity Test, which incorporates WAN segments into a load test, as most applications are deployed beyond a single LAN. Load Tests are major tests, requiring substantial input from the business so that anticipated activity can be accurately simulated in a test environment. If the project has a pilot in production, then logs from the pilot can be used to generate usage profiles as part of the testing process, and can even be used to drive large portions of the Load Test. Load testing must be executed on today's production-size database, and optionally with a projected database. If some database tables will be much larger in some months' time, then load testing should also be conducted against a projected database. It is important that such tests are repeatable and give the same results for identical runs. They may need to be executed several times in the first year of wide-scale deployment, to ensure that new releases and changes in database size do not push response times beyond prescribed SLAs.
What is the purpose of a Load Test?
The purpose of any load test should be clearly understood and documented. A load test usually fits into one of the following categories:
1. Quantification of risk. Determine, through formal testing, the likelihood that system performance will meet the formally stated performance expectations of stakeholders, such as response time requirements under given levels of load. This is a traditional Quality Assurance (QA) type test. Note that load testing does not mitigate risk directly; rather, by identifying and quantifying risk, it presents tuning opportunities and an impetus for remediation that will mitigate risk.
2. Determination of minimum configuration. Determine, through formal testing, the minimum configuration that will allow the system to meet the formally stated performance expectations of stakeholders, so that extraneous hardware, software, and the associated cost of ownership can be minimized. This is a Business Technology Optimization (BTO) type test.



What functions or business processes should be tested?


The following table describes the criteria for determining the business functions or processes to be included in a test.
High frequency transactions: The most frequently used transactions have the potential to impact the performance of all of the other transactions if they are not efficient.
Mission critical transactions: The more important transactions that facilitate the core objectives of the system should be included, as failure under load of these transactions has, by definition, the greatest impact.
Read transactions: At least one read-only transaction should be included, so that the performance of such transactions can be differentiated from that of other, more complex transactions.
Update transactions: At least one update transaction should be included, so that the performance of such transactions can be differentiated from that of other transactions.
Example of Load Test Configuration for a web system
The following diagram shows how a thorough load test could be set up using LoadRunner.

The important thing to understand in executing such a load test is that the load is generated at a protocol level by the load generators, which run scripts developed with the VUGen tool.


Transaction times derived from the VUGen scripts do not include processing time on the client PC, such as rendering (drawing parts of the screen) or execution of client-side scripts such as JavaScript. The WinRunner PC(s) are utilized to measure response times as actually experienced by the end user. Most load tests would not employ a WinRunner PC to measure actual response times from the client's perspective, but one is highly recommended where complex and variable processing is performed on the desktop after data has been delivered to the client. The LoadRunner Controller is capable of displaying real-time graphs of response times, as well as other measures such as CPU utilization, for each of the components behind the firewall. Internal measures from products such as Oracle and WebSphere are also available for monitoring during test execution. After completion of a test, the Analysis engine can generate a number of graphs and correlations to help locate any performance bottlenecks.

Simplified Load Test Configuration for a web system

In this simplified load test, the controller communicates directly to a load generator that can communicate directly to the load balancer. No WinRunner PC is utilized to measure actual user experience. The collection of statistics from various components is simplified as there is no firewall between the controller and the web components being measured.



Reporting on Response Time at various levels of load


Expected output from a load test often includes a series of response time measures at various levels of load, e.g., 500 users, 750 users, and 1,000 users. When determining the response time at any particular level of load, it is important that the system has run in a stable manner for a significant amount of time before measurements are taken. For example, a ramp-up to 500 users may take ten minutes, but another ten minutes may be required for system activity to stabilize. Taking measurements over the next ten minutes would then give a meaningful result. The next measurement can be taken after ramping up to the next level, waiting a further ten minutes for stabilization, and measuring for ten minutes, and so on for each level of load requiring detailed response time measures, as sketched below.
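The following minimal Python sketch turns that ramp/stabilize/measure pattern into a timetable; the ten-minute durations and the load levels are the illustrative values used above.

def measurement_plan(levels, ramp_min=10, stabilize_min=10, measure_min=10):
    """Return (users, measurement start, measurement end) in minutes from test start."""
    plan, clock = [], 0
    for users in levels:
        clock += ramp_min + stabilize_min  # ramp to the level, then let it settle
        plan.append((users, clock, clock + measure_min))
        clock += measure_min
    return plan

for users, start, end in measurement_plan([500, 750, 1000]):
    print(f"{users} users: measure from minute {start} to minute {end}")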

Failover Tests
Failover Tests verify redundancy mechanisms while the system is under load. This is in contrast to Load Tests, which are conducted under anticipated load with no component failure during the course of a test. For example, in a web environment, failover testing determines what will happen if multiple web servers are being used under peak anticipated load and one of them dies. Does the load balancer react quickly enough? Can the other web servers handle the sudden dumping of extra load? Failover testing allows technicians to address problems in advance, in the comfort of a testing situation, rather than in the heat of a production outage. It also provides a baseline of failover capability, so that a 'sick' server can be shut down with confidence, in the knowledge that the remaining infrastructure will cope with the surge of failed-over load.
Explanatory Diagrams: The following is a configuration where failover testing would be required.

This is just one of many failover configurations. Some failover configurations can be quite complex, especially when there are redundant sites as well as redundant equipment and communications lines.


In this type of configuration, when one of the application servers goes down, the two web servers that were configured to communicate with the failed application server cannot take load from the load balancer, and all of the load must be passed to the remaining two web servers. See the diagram below:

When such a failover event occurs, the web servers are under substantial stress, as they need to quickly accommodate the failed-over load, which will probably result in a doubling of the number of HTTP connections, as well as application server connections, in a very short amount of time. The remaining application server will also be subjected to a severe increase in load and the overheads associated with catering for it. It is crucial to the design of any meaningful failover test that the failover design is understood, so that the implications of a failover event while under load can be scrutinized.
Fail-back Testing: After verifying that a system can sustain a component outage, it is also important to verify that when the component comes back up it is available to take load again, and that it can sustain the influx of activity when it comes back online.

Soak Tests (Also Known as Endurance Testing)
Soak testing is running a system at high levels of load for prolonged periods of time. A soak test would normally execute several times more transactions in an entire day (or night) than would be expected in a busy day, to identify performance problems that appear only after a large number of transactions have been executed. Also, it is possible that a system may stop working after a certain number of transactions have been processed, due to memory leaks or other defects. Soak tests provide an opportunity to identify such defects, whereas load tests and stress tests may not find such problems, due to their relatively short duration.

The above diagram shows activity for a certain type of site. Each login results in an average session of 12 minutes' duration, with an average of eight business transactions per session. A soak test would run for as long as possible, given the limitations of the testing situation; weekends, for example, are often an opportune time for a soak test. Soak testing for this application would be at a level of 550 logins per hour, using typical activity for each login. The average number of logins in this example is 4,384 per day, but it would take only 8 hours at 550 per hour to run an entire day's activity through the system. By starting a 60-hour soak test on Friday evening at 6 p.m. (to finish at 6 a.m. Monday morning), 33,000 logins would be put through the system, representing more than seven days of activity. Only with such a test will it be possible to observe any degradation of performance under controlled conditions. Some typical problems identified during soak tests are listed below:
Serious memory leaks that would eventually result in a memory crisis.
Failure to close connections between the tiers of a multi-tiered system under some circumstances, which could stall some or all modules of the system.
Failure to close database cursors under some conditions, which would eventually result in the entire system stalling.
Gradual degradation of the response time of some functions, as internal data structures become less efficient during a long test.
Apart from monitoring response time, it is also important to measure CPU usage and available memory. If a server process needs to be available for the application to operate, it is often worthwhile to record its memory usage at the start and end of a soak test. It is also important to monitor the internal memory usage of facilities such as Java Virtual Machines, where applicable.
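The login arithmetic above is easy to verify; this short Python snippet simply recomputes the figures quoted in the example.

logins_per_hour = 550
test_hours = 60                  # Friday 6 p.m. to Monday 6 a.m.
avg_logins_per_day = 4_384

total_logins = logins_per_hour * test_hours
days_of_activity = total_logins / avg_logins_per_day

print(f"{total_logins:,} logins")                   # 33,000
print(f"{days_of_activity:.1f} days of activity")   # about 7.5 days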


Long Session Soak Testing
When an application is used for long periods of time each day, the above approach should be modified, because the soak test driver is not logins and transactions per day, but transactions per active user for each user each day. This type of situation occurs in internal systems, such as ERP and CRM systems, where users log in and stay logged in for many hours, executing a number of business transactions during that time. A soak test for such a system should emulate multiple days of activity in a compacted timeframe, rather than just pump multiple days' worth of transactions through the system. Long session soak tests should run with realistic user concurrency, but the focus should be on the number of transactions processed. VUGen scripts used in long session soak testing may need to be more sophisticated than short session scripts, as they must be capable of running a long series of business transactions over a prolonged period of time.
Test Duration
The duration of most soak tests is often determined by the available time in the test lab. There are many applications, however, that require extremely long soak tests. Any application that must run uninterrupted for extended periods of time may need a soak test covering all of the activity for a period of time agreed to by the stakeholders, such as a month. Most systems have a regular maintenance window, and the time between such windows is usually a key driver for determining the scope of a soak test. A classic example of a system that requires extensive soak testing is an air traffic control system; a soak test for such a system may have a multi-week or even multi-month duration.

Stress Tests
Stress Tests determine the load under which a system fails, and how it fails. This is in contrast to Load Testing, which attempts to simulate anticipated load. It is important to know in advance whether a stress situation will result in a catastrophic system failure, or whether everything just goes really slow. There are several varieties of Stress Tests, including spike, stepped, and gradual ramp-up tests. Catastrophic failures require restarting various infrastructure components and contribute to downtime, a stressful environment for support staff and managers, and possible financial losses. If a major performance bottleneck is reached, system performance will usually degrade to a point that is unsatisfactory, but performance should return to normal when the excessive load is removed. Before conducting a Stress Test, it is usually advisable to conduct targeted infrastructure tests on each of the key components in the system; a variation on targeted infrastructure tests is to execute each one as a mini stress test. The diagram below shows an unexpectedly high amount of demand on a typical web system. Stress situations are not expected under normal circumstances.


The following table lists possible situations for a variety of applications where stress situations may occur.
Online Banking: After an outage, when many clients have been waiting for access to the application to do their banking transactions.
Marketing / Sales Application: A very successful advertising campaign, or a substantial error in an advertising campaign that understates pricing details.
Various applications: Unexpected publicity, for example in a news article in a national online newspaper.
Focus of stress test. In a stress event, it is most likely that many more connections will be requested per minute than under normal levels of expected peak activity. In many stress situations, the actions of each connected user will not be typical of actions observed under normal operating conditions. This is partly due to the slow response, and partly due to the root cause of the stress event. Let's take the example of a large holiday resort web site. Normal activity is characterized by browsing, room searches, and bookings. If a national online news service posted a sensational article about the resort and included a URL in the article, the site might be subjected to a huge number of hits, but most of the visits would probably be a quick browse. It is unlikely that many of the additional visitors would search for rooms, and it would be even less likely that they would make bookings. However, if instead of a news article a national newspaper advertisement erroneously understated the price of accommodation, there may well be an influx of visitors who clamor to book a room, only to find that the price did not match their expectations. In both of the above situations, the normal traffic would be augmented with traffic of a different usage profile. A stress test design would therefore incorporate a Load Test as well as additional virtual users running a special series of 'stress' navigations and transactions. For the sake of simplicity, one can simply increase the number of users running the business processes and functions coded in the Load Test. However, one must then keep in mind that a system failure with that type of activity may differ from the type of failure that might occur if a special series of 'stress' navigations were used for stress testing.
Stress test execution. Typically, a stress test starts with a Load Test, and then additional activity is gradually increased until something breaks. An alternative type of stress test is a Load Test with sudden bursts of additional activity. The sudden bursts generate substantial activity as sessions and connections are established, whereas a gradual ramp-up in activity pushes various values past fixed system limitations.

Ideally, stress tests should incorporate two runs, one with burst-type activity and the other with a gradual ramp-up as per the diagram above, to ensure that the system under test will not fail catastrophically under excessive load. System reliability under severe load should not be negotiable, and stress testing will identify reliability issues that arise under severe levels of load. An alternative, or supplemental, stress test is commonly referred to as a spike test, where a single short burst of concurrent activity is applied to a system. Such tests are typical of simulating extreme activity where a 'count-down' situation exists: for example, a system that will not take orders for a new product until a particular date and time. If demand is very strong, then many users will be poised to use the system the moment the count-down ends, creating a spike of concurrent requests and load.

Targeted Infrastructure Tests are isolated tests of each layer and/or component in an end-to-end application configuration, including the communications infrastructure, load balancers, web servers, application servers, crypto cards, Citrix servers, and databases. They allow identification of any performance issue that would fundamentally limit the overall ability of a system to deliver at a given performance level. Each test can be quite simple. For example, a test ensuring that 500 concurrent (idle) sessions can be maintained by the web servers and related equipment should be executed prior to a full 500-user end-to-end performance test, as a configuration file somewhere in the system may limit the number of users to fewer than 500. It is much easier to identify such a configuration issue in a Targeted Infrastructure Test than in a full end-to-end test. The following diagram shows a simple conceptual decomposition of load to four different components in a typical web system.


Targeted infrastructure testing separately generates load on each component, and measures the response of each component under load. The following diagram shows four different tests that could be conducted to simulate the load represented in the above diagram.

Different infrastructure tests require different protocols. For example, VUGen supports a number of database protocols, such as DB2 CLI, Informix, MS SQL Server, Oracle and Sybase.

Performance Tests
Performance Tests determine the end-to-end timing (benchmarking) of various time-critical business processes and transactions while the system is under low load, but with a production-sized database. This sets the best possible performance expectation under a given configuration of infrastructure. It also highlights, very early in the testing process, whether changes need to be made before load testing is undertaken. For example, a customer search may take 15 seconds in a full-sized database if indexes have not been applied correctly, or if an SQL 'hint' was incorporated in a statement that had been optimized with a much smaller database. Such performance testing would highlight a slow customer search transaction, which could be remediated prior to a full end-to-end load test. It is 'best practice' to develop performance tests with an automated tool, such as WinRunner, so that response times from a user perspective can be measured in a repeatable manner with a high degree of precision. The same test scripts can later be re-used in a load test, and the results compared back to the original performance tests.
Repeatability
A key indicator of the quality of a performance test is repeatability. Re-executing a performance test multiple times should give the same set of results each time. If the results are not the same each time, then differences in results from one run to the next cannot be attributed to changes in the application, configuration, or environment.
Performance Tests Precede Load Tests
The best time to execute performance tests is at the earliest opportunity after the content of a detailed load test plan has been determined. Developing performance test scripts at such an early stage provides an opportunity to identify and remediate serious performance problems, and to reset expectations, before load testing commences. For example, management expectations of response time for a new web system that replaces a block-mode terminal application are often articulated as 'sub second'. However, a web system may, in a single screen, perform the business logic of several legacy transactions and take two seconds. Rather than waiting until the end of a load test cycle to inform the stakeholders that the test failed to meet their formally stated expectations, a little education up front may be in order; performance tests provide a means for this education. Another key benefit of performance testing early in the load testing process is the opportunity to fix serious performance problems before even commencing load testing. A common example is one or more missing indexes. When performance testing of a "customer search" screen yields response times of more than ten seconds, there may well be a missing index or a poorly constructed SQL statement. By raising such issues prior to commencing formal load testing, developers and DBAs can check that indexes have been set up properly. Performance problems that relate to the size of data transmissions also surface in performance tests when low bandwidth connections are used. For example, some data, such as images and "terms and conditions" text, are not optimized for transmission over slow links.
Pre-requisites for Performance Testing
A performance test is not valid until the data in the system under test is realistic and the software and configuration are production like. The following list describes the pre-requisites for valid performance testing, along with the tests that can still be conducted before each pre-requisite is satisfied:
Production Like Environment: Performance tests need to be executed on the same specification equipment as production if the results are to have integrity. Caveat: lightweight transactions that do not require significant processing can still be tested, but only substantial deviations from expected transaction response times should be reported; low bandwidth performance testing of high bandwidth transactions, where communications processing contributes most of the response time, can also be tested.
Production Like Configuration: The configuration of each component needs to be production like, for example the database configuration and the operating system configuration. Caveat: while system configuration has less impact on performance testing than on load testing, only substantial deviations from expected transaction response times should be reported.
Production Like Version: The version of software to be tested should closely resemble the version to be used in production. Caveat: only major performance problems, such as missing indexes and excessive communications, should be reported when testing a version substantially different from the proposed production version.
Production Like Access: If clients will access the system over a WAN, dial-up modems, DSL, ISDN, etc., then testing should be conducted using each communication access method. See Network Sensitivity Tests for more information on testing WAN access. Caveat: only tests using production like access are valid.
Production Like Data: All relevant tables in the database need to be populated with a production like quantity and a realistic mix of data. For example, having one million customers, 999,997 of which have the name "John Smith", would produce some very unrealistic responses to customer search transactions. Caveat: low bandwidth performance testing of high bandwidth transactions, where communications processing contributes most of the response time, can still be tested.

Documenting Response Time Expectations
Rather than simply stating that all transactions must be 'sub second', a more comprehensive specification for response time needs to be defined and agreed to by the relevant stakeholders. One suggestion is to state an average and a 90th percentile response time for each group of transactions that are time critical. In a set of 100 values sorted from best to worst, the 90th percentile is simply the 90th value in the list.
Executing Performance Tests
Performance testing involves executing the same test case multiple times with data variations for each execution, and then collating the response times and computing response time statistics to compare against the formal expectations. Often, performance differs when the data used in the test case differs, as different numbers of rows are processed in the database, different processing and validation come into play, and so on. By executing a test case many times with different data, a statistical measure of response time can be computed and compared directly against a formally stated expectation.
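The percentile rule above is simple to compute; here is a minimal Python sketch, using made-up response times, that reports the average and the 90th percentile for a set of measurements.

def percentile_90(response_times):
    """In a sorted list of 100 values, the 90th percentile is the 90th value."""
    ordered = sorted(response_times)
    index = int(len(ordered) * 0.9) - 1
    return ordered[max(index, 0)]

times = [0.8, 1.1, 0.9, 2.4, 1.0, 1.6, 3.2, 1.2, 0.7, 1.4]  # invented samples
print(f"average: {sum(times) / len(times):.2f}s")
print(f"90th percentile: {percentile_90(times):.2f}s")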

Network Sensitivity Tests
Network sensitivity tests are variations on Load Tests and Performance Tests that focus on Wide Area Network (WAN) limitations and network activity (e.g., traffic, latency, error rates). Network sensitivity tests can be used to predict the impact of a given WAN segment or traffic profile on various applications that are bandwidth dependent. Network issues often arise at low levels of concurrency over low bandwidth WAN segments. Very 'chatty' applications can appear more prone to response time degradation under certain conditions than other applications that actually use more bandwidth. For example, some applications may degrade to unacceptable levels of response time when a certain pattern of network traffic uses 50% of available bandwidth, while other applications are virtually unchanged in response time even with 85% of available bandwidth consumed elsewhere. This is a particularly important test for deployment of a time-critical application over a WAN. Also, some front-end systems, such as web servers, need to work much harder with 'dirty' communications than with the clean communications encountered on a high speed LAN in an isolated load and performance testing environment.
Why execute Network Sensitivity Tests
The three principal reasons for executing network sensitivity tests are as follows:
Determine the impact on response time of a WAN link. (Variation of a Performance Test)
Determine the capacity of a system based on a given WAN link. (Variation of a Load Test)
Determine the impact on the system under test of 'dirty' communications load. (Variation of a Load Test)
Execution of performance and load tests for analysis of network sensitivity requires the test system configuration to emulate a WAN. Once a WAN link has been configured, the performance and load tests conducted become network sensitivity tests. There are two ways of configuring such tests.
Use a simulated WAN and inject appropriate background traffic. This can be achieved by putting back-to-back routers between a load generator and the system under test. The routers can be configured to allow the required level of bandwidth, and instead of connecting to a real WAN, they connect directly to each other.

When back-to-back routers are configured to be part of a test, they basically limit the bandwidth. If the test is to be more realistic, then additional traffic will need to be applied to the routers. This can be achieved by a web server at one end of the link serving pages and another load generator generating requests. It is important that the mix of traffic is realistic. For example, a few continuous file transfers may impact response time differently than a large number of small transmissions.


By forcing extra traffic over the simulated WAN link, the latency will increase and some packet loss may even occur. While this is much more realistic than testing over a high speed LAN, it does not take into account many features of a congested WAN, such as out-of-sequence packets.
Use the WAN emulation facility within LoadRunner. The WAN emulation facility within LoadRunner supports a variety of WAN scenarios. Each load generator can be assigned a number of WAN emulation parameters, such as error rates and latency. WAN parameters can be set individually, or WAN link types can be selected from a list of pre-set configurations. For detailed information on WAN emulation within LoadRunner follow this link - mercuryinteractive.com/products/LoadRunner/wan_emulation.html. It is important to ensure that measured response times incorporate the impact of WAN effects both for an individual session, as part of a performance test, and under load, as part of a load test, because a system under WAN-affected load may work much harder than a system doing the same actions over a clean communications link.
Where is the WAN?
Another key consideration in network sensitivity tests is the logical location of a WAN segment. A WAN segment is often between a client application and its server. Some application configurations may have a WAN segment to a remote service that is accessed by an application server. To execute a load test that determines the impact of such a WAN segment, or the point at which the WAN link saturates and becomes a bottleneck, one must test with a real WAN link or a back-to-back router setup, as described above. As the link becomes saturated, response times for transactions that utilize the WAN link will degrade.
Response Time Calculation Example
A simplified formula for predicting response time is as follows:

Response Time = Transmission Time + Delays + Client Processing Time + Server Processing Time

where:
Transmission Time = the data to be transferred divided by the bandwidth.
Delays = the number of turns multiplied by the 'round trip' response time.
Client Processing Time = the time taken by the user's software to fulfil the request.
Server Processing Time = the time taken on the server computer to fulfil the request.

Note that this is a simplified model intended to demonstrate the impact of the various parameters; other factors, such as error rates and lost packet rates, are not included.
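A minimal Python sketch of this simplified model follows; all input values are illustrative, chosen only to show how the terms combine.

def response_time(payload_kb, bandwidth_kbps, turns, round_trip_s, client_s, server_s):
    """Simplified response time model: transmission + turn delays + processing."""
    transmission = (payload_kb * 8) / bandwidth_kbps  # transfer time in seconds
    delays = turns * round_trip_s                     # protocol turn delays
    return transmission + delays + client_s + server_s

# A 100 KB screen over a 56.6 Kbps modem with 12 turns at 150 ms each:
print(round(response_time(100, 56.6, 12, 0.15, 0.4, 0.6), 1), "seconds")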

Volume Tests
Volume Tests are often most appropriate to messaging, batch, and conversion processing type situations. In a volume test, there is often no such measure as response time; instead, there is usually a concept of throughput. A key to effective volume testing is the identification of the relevant capacity drivers. A capacity driver is something that directly impacts the total processing capacity. For a messaging system, a capacity driver may well be the size of the messages being processed.
Volume Testing of Messaging Systems
Most messaging systems do not interrogate the body of the messages they are processing, so varying the content of the test messages may not impact the total message throughput capacity, but significantly changing the size of the messages may have a significant effect. However, the message header may include indicators that have a very significant impact on processing efficiency. For example, a flag saying that the message need not be delivered under certain circumstances is much easier to deal with than a flag saying that the message must be held for delivery for as long as necessary and must not be lost. In the former case, the message may be held in memory, but in the latter case the message must be physically written to disk multiple times (a normal disk write plus a write to a journal mechanism of some sort, plus possible mirroring writes and remote failover system writes).
Before conducting a meaningful test on a messaging system, the following must be known:
The capacity drivers for the messages (as discussed above).
The peak rate of messages that need to be processed, grouped by capacity driver.
The duration of peak message activity that needs to be replicated.
The required message processing rates.
A test can then be designed to measure the throughput of the messaging system, as well as the internal messaging system metrics while that throughput rate is being processed. Such measures would typically include CPU utilization and disk activity. It is important that a test be run at peak load for a period of time equal to or greater than the expected production duration of peak load. To run the test for less time would be like trying to test a freeway system with peak hour vehicular traffic, but limiting the test to five minutes: the traffic would be absorbed into the system easily, and you would not be able to determine a realistic forecast of the peak hour capacity of the freeway. You would intuitively know that a reasonable test of a freeway system must include the entire 'morning peak' and 'evening peak' traffic profiles, as the two peaks are very different. (Morning traffic generally converges on a city, whereas evening traffic is dispersed into the suburbs.)
Volume Testing of Batch Processing Systems
Capacity drivers in batch processing systems are also critical, as certain record types may require significant CPU processing, while other record types may invoke substantial database and disk activity. Some batch processes also contain substantial aggregation processing, and the mix of transactions can significantly impact the processing requirements of the aggregation phase. In addition to the contents of any batch file, the total amount of processing effort may also depend on the size and makeup of the database that the batch process interacts with. Also, some details in the database may be used to validate batch records, so the test database must 'match' the test batch files. Before conducting a meaningful test on a batch system, the following must be known:
The capacity drivers for the batch records (as discussed above).
The mix of batch records to be processed, grouped by capacity driver.
Peak expected batch sizes (check end of month, quarter, and year batch sizes).
Similarity of the production database and the test database.
Performance requirements (e.g., records per second).
Batch runs can be analyzed and the capacity drivers identified, so that large batches can be generated to validate processing within batch windows. Volume tests are also executed to ensure that the anticipated numbers of transactions can be processed and that they satisfy the stated performance requirements.
Sociability (sensitivity) Tests
Sensitivity analysis testing can determine the impact of activities in one system on another related system. Such testing involves a mathematical approach to determine the impact that one system will have on another. For example, web enabling a customer 'order status' facility may impact the performance of telemarketing screens that interrogate the same tables in the same database. The risk with web enabling is that it can be more successful than anticipated, resulting in many more enquiries than originally envisioned and loading the IT systems with more work than had been planned.
Tuning Cycle Tests
A series of test cycles can be executed with the primary purpose of identifying tuning opportunities. Tests can be refined and re-targeted 'on the fly' to allow technology support staff to make configuration changes so that the impact of those changes can be immediately measured.
Protocol Tests
Protocol tests involve the mechanisms used in an application, rather than the application itself. For example, a protocol test of a web server will involve a number of HTTP interactions that would typically occur if a web browser were to interact with the web server, but the test would not be done using a web browser. LoadRunner is usually used to drive load into a system using VUGen at a protocol level, so that a small number of computers (load generators) can be used to simulate many thousands of users.



Thick Client Application Tests


A Thick Client (also referred to as a fat client) is a purpose built piece of software that has been developed to work as a client with a server. It often has substantial business logic embedded within it, beyond the simple validation that is able to be achieved through a web browser. A thick client is often able to be very efficient with the amount of data that is transferred between it and its server, but is also often sensitive to any poor communications links. Testing tools such as WinRunner are able to be used to drive a Thick Client, so that response time can be measured under a variety of circumstances within a testing regime. Developing a load test based on thick client activity usually requires significantly more effort for the coding stage of testing, as VUGen must be used to simulate the protocol between the client and the server. That protocol may be database connection based, COM/DCOM based, a proprietary communications protocol, or even a combination of protocols.

Thin Client Application Tests

An internet browser that is used to run an application is said to be a thin client. But even thin clients can consume substantial amounts of CPU time on the computer that they are running on. This is particularly the case with complex web pages that utilize many recently introduced features to liven up a web page. Rendering a page after hitting a SUBMIT button may take several seconds, even though the server may have responded to the request in less than one second. Testing tools such as WinRunner are able to be used to drive a Thin Client, so that response time can be measured from a user's perspective, rather than from a protocol level.

VUGen - As a protocol replay tool

We use Virtual User Generator (VUGen) to record the protocol from a vast variety of user applications for playback in various types of load tests. As VUGen is a protocol based testing tool, we can use a relatively small amount of load generation hardware to simulate a vast number of users for a load test. Visit mercuryinteractive.com/products/loadrunner/ for detailed information on LoadRunner, of which VUGen is a component. The following diagram shows how a protocol level recording and playback tool, such as VUGen, works. Note that only the communications are captured and recorded for subsequent playback. Events that do not generate communications, such as moving the mouse or editing data in a field on a web form, are not recorded, because those events do not interact with the system under test.


During a load test we need to run many virtual user sessions, and we do it by replaying the protocol that is described in a VUGen script under the control of a LoadRunner Controller. A copy of the client application (a web browser in the case of a web application) does not need to run for each session, as all of the protocol information is contained within the VUGen script.

The example below relates to an Internet Explorer session starting up, connecting to www.google.com.au and then performing a search on "Mercury Interactive". Protocol based tools such as VUGen record each of the interactions between the browser and the Google web site, including the setting of cookies and the transmission of the search request containing the text "Mercury Interactive". The tool needs to do the following:
Add the cookies that Google uses to each header.
Make an HTTP request on the URL "www.google.com.au".
Make HTTP requests for each of the GIFs on the page.
Perform an HTTP GET on the URL "http://www.google.com.au/search", constructed with content to cause the web site to execute a search on the text "Mercury Interactive".
A sketch of how these steps might look in a recorded script is shown below.
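The following VUGen-style fragment is purely illustrative - the cookie value and the exact URLs are placeholders rather than an actual recording:

/* Illustrative VUGen (C) fragment only. Cookie values and URLs are
   placeholders, not the actual recorded session. */
Action()
{
    // Cookies that Google uses are added to the request headers
    web_add_cookie("PREF=ID=PLACEHOLDER; DOMAIN=www.google.com.au");

    // Request the home page; in HTML mode the page's GIFs are also fetched
    web_url("google_home",
        "URL=http://www.google.com.au/",
        "Resource=0",
        "Mode=HTML",
        LAST);

    // HTTP GET on the search URL, with the query text in the request
    web_url("search",
        "URL=http://www.google.com.au/search?q=Mercury+Interactive",
        "Resource=0",
        "Mode=HTML",
        LAST);

    return 0;
}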


The concept of 'protocol replay' is central to the way that load testers are able to generate substantial load with minimal load generation hardware. This is a direct contrast to the way that GUI based testing tools, such as WinRunner, operate, as GUI based tools need to use an entire instance of the client software for each virtual user. Refer to the page on WinRunner for more information on the way that GUI based testing tools can be used in the context of load testing. The concept of protocol replay can be better understood by looking at other examples using different protocols.

COM/DCOM protocol example

LoadRunner comes with various sample applications, so that a tester can get some practice on a simple application before attempting to script a test for a complex application. One such sample is the COM based 'flights' application. The following screen shows what the Virtual User Generator script looks like for a COM/DCOM protocol. You can see from the script example that this portion of code relates to the end of the login process and the start of the process to select a flight (because of the start and end transaction marks that I inserted). You can also see the SQL statement showing that the list of flights relates to 'Denver to Los Angeles on Tuesday'. For a realistic test, these values need to be parameterised so that the same SQL statement is not executed over and over again. The script also needs to be correlated, so that values returned from such SQL statements can be used later in the script. For example, if the first flight from Denver to LA on Tuesday were selected when the script was recorded, that flight may have a flight number of 12345. However, if the destination were changed from LA to New York, then the first flight may have a number of 23456. VUGen needs to know to send 23456 instead of 12345 in subsequent communications to the server, and this is achieved by correlation.
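Correlation is easiest to illustrate with the Web protocol (the COM functions differ, but the principle is identical): a value returned by the server is captured into a parameter and substituted into later requests. The boundaries and URLs below are assumptions for illustration only:

/* Correlation sketch using the Web protocol. Boundaries, URLs and
   parameter names are hypothetical. */
// Register a capture BEFORE the request whose response contains the value
web_reg_save_param("FlightID",
    "LB=flightID=",   // left boundary (assumed for illustration)
    "RB=&",           // right boundary (assumed for illustration)
    LAST);

web_url("select_flights",
    "URL=http://server/flights?from=Denver&to=LA",
    LAST);

// Send back whatever value was captured, instead of the recorded 12345
web_url("book_flight",
    "URL=http://server/book?flightID={FlightID}",
    LAST);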


As can be seen from the COM example above, some logic is required to replay the COM protocol, but much less processing power is required than for the actual application that was recorded. This means that a large number of such Virtual Users can be simulated with a single load generator computer.

RESPONSE TIME: Stating Response Time Requirements

Traditionally, response time is often defined as the interval from when a user initiates a request to the instant at which the first part of the response is received by the application. However, such a definition is not usually appropriate within a performance related application requirement specification. The definition of response time must incorporate the behavior, design and architecture of the system under test. While understanding the concept of response time is critical in all load and performance tests, it is probably most crucial to Load Testing, Performance Testing and Network Sensitivity Testing. Response time measuring points must be carefully considered because in client server applications, as well as web systems, the first characters returned to the application often do not


contribute to the rendering of the screen with the anticipated response, and do not represent the user's impression of response time. For example, response time in a web based booking system that contains a banner advertising mechanism may or may not include the time taken to download and display banner ads, depending on your interest in the project. If you are a marketing firm, you would be very interested in banner ad display time, but if you were primarily interested in the booking component, then banner ads would not be of much concern.

Also, response time measurements are typically defined at the communications layer, which is very convenient for LoadRunner / VUGen based tests, but may be quite different to what a user experiences on his or her screen. A user sees what is drawn on a screen and does not see the data transmitted down the communications line. The display is updated after the computations for rendering the screen have been performed, and those computations may be very sophisticated and take a considerable amount of time. For response time requirements that are stated in terms of what the user sees on the screen, WinRunner should be used, unless there is a reliable mathematical calculation to translate communications based response time into screen based response time.

It is important that response time is clearly defined, and that the response time requirements (or expectations) are stated in such a way as to ensure that unacceptable performance is flagged in the load and performance testing process. One simple suggestion is to state an Average and a 90th Percentile response time for each group of transactions that are time critical. In a set of 100 values that are sorted from best to worst, the 90th percentile simply means the 90th value in the list. The specification is as follows:

Time to display order details:
Average time to display order details - less than 5.0 seconds.
90th percentile time to display order details - less than 7.0 seconds.
The above specification, or response time service level agreement, is a reasonably tight specification that is easy to validate against. For example, suppose a customer 'display order details' transaction was executed 20 times under similar conditions, with response times in seconds, sorted from best to worst, as follows:

2,2,2,2,2, 2,2,2,2,2, 3,3,3,3,3, 4,10,10,10,20

Average = 4.45 seconds, 90th Percentile = 10 seconds

The above test would fail when compared against the above stated criteria, as too many transactions were slower than seven seconds, even though the average was less than five seconds. If the performance requirement was a simple "Average must be less than five seconds" then the test would pass, even though every fifth transaction was ten seconds or slower. This simple approach can easily be extended to include the 99th percentile and other percentiles, as required, for even tighter response time service level agreement specifications.
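To make the percentile arithmetic concrete, here is a minimal C sketch that computes both figures from the twenty sample values above (the index calculation assumes the list is already sorted):

/* A minimal sketch: average and 90th percentile of the twenty
   response times quoted above. */
#include <stdio.h>

int main(void)
{
    /* response times in seconds, already sorted from best to worst */
    double t[20] = {2,2,2,2,2, 2,2,2,2,2, 3,3,3,3,3, 4,10,10,10,20};
    double sum = 0.0;
    int i, n = 20;

    for (i = 0; i < n; i++)
        sum += t[i];

    /* 90th percentile = the value 90% of the way through the sorted list */
    printf("Average         = %.2f seconds\n", sum / n);              /* 4.45 */
    printf("90th percentile = %.2f seconds\n", t[(n * 90) / 100 - 1]); /* 10.00 */
    return 0;
}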

LoadRunner/Load testing/Performance testing Q&A's


Performance Test Planning
1. Which of the following are considered relevant information when gathering system usage for a performance test? (There are two answers.)
a. System architecture
b. Business processes
c. Application modules about to be unit-tested
d. Financial data such as general ledger and P&L statements
Answer: A, B

2. There are three main criteria to determine which business processes to select for performance testing. What are these criteria?
Answer: Mission-critical, heavy throughput, dynamic content

3. Each business process takes a certain amount of time to complete. Under ideal conditions, you determine this amount of time as ________________________
Answer: Preferred response time

4. You want to determine how many users are active on a Web site during a twenty-four hour period. What type of diagram can you use to map the business processes and the volume of each across a fixed time line?
Answer: Task Distribution Diagram

5. How many transactions will need to run per minute if a load test has to run for two hours with 5000 users, assuming an average transaction length of five minutes?
Answer: 1000 transactions per minute (see the worked calculation after this list)

6. This value represents the number of users performing business processes on the application during the busiest time frame of an atypical day (e.g. a holiday). What do you call this value?
Answer: Peak load

7. Write a quantifiable performance test objective given the following information:
Maximum number of concurrent users at peak time: 6000
Business Process: Update Totals
Preferred response time range: 5 to 7 seconds
Answer: The Update Totals transaction time should be seven seconds or less during peak hours
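To see where the answer to question 5 comes from: with 5000 users each taking five minutes per transaction, 5000 / 5 = 1000 transactions must start every minute to keep all users busy; over the two-hour test that amounts to 120 x 1000 = 120,000 transactions in total.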

Core LoadRunner and Virtual User Generator (VuGen)


1. What is a Load Test?
A load test is a short-term test of system performance under typical real-world conditions using critical business processes.

2. What are the LoadRunner components and what role does each play in creating a performance test?
Controller: the administrative center for creating, maintaining, executing and monitoring scenarios. Scenarios have a .lrs extension.
Load Generators: machines that emulate user volume and locally store load test results until the scenario completes running; the results are then transferred to the results file specified.
LoadRunner Analysis: processes the results from the scenario run. Results files have a .lrr extension. After the results are processed by the LR Analysis tool, the results files have a .lra extension.
Virtual User Generator: records Vuser scripts that emulate the steps of real users using the application under test.

3. General Vuser functions: transactions are defined and measured in a Vuser script using which two functions?
lr_start_transaction
lr_end_transaction

4. What is think time?
Think time is a measure of the time that a real user takes to pause between the execution of steps.

5. What is correlation?
Correlation is the method of capturing values in a script as a result of dynamic data passed from the server to the client and back. The values are saved in a LoadRunner parameter and are reused instead of the original recorded value.
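A minimal sketch showing how the transaction and think time functions from questions 3 and 4 fit together in a Vuser script; the transaction name and URL are illustrative only:

/* Sketch only: transaction name and URL are hypothetical. */
Action()
{
    lr_think_time(8);                       // pause as a real user would

    lr_start_transaction("display_order");  // start timing

    web_url("order_page",
        "URL=http://server/orders/123",     // hypothetical URL
        LAST);

    lr_end_transaction("display_order", LR_AUTO);  // stop timing
    return 0;
}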

Basic and Important LoadRunner commands which may be asked in an interview ---------------------------------------------------------------------------------------------------------------

lr_message("a message"); // sends a message to both the the Output window and the Vuser log. lr_output_message("an output message"); // sends a message to the output log prefixed with the action name and line number, such as: Actions.c (4): an output message lr_log_message("a message"); // sends a message to the output log without the action name and line number. lr_vuser_status_message("a vuser status message"); // sends a message only to the Controller Vuser status area. In VuGen, this message appears briefly. lr_error_message("an error message"); // sends a highlighted red message -17999 to the LoadRunner Controller or Tuning Module Consoles Output window Issuing an lr_error_message causes the display of the log message stack automatically cleared at the beginning of each new action. If "Send messages only when an error occurs" is selected, messages are still being created in the "log message stack", but suppressed (not displayed) until an error is detected.

There are several message functions:

Web User Actions

In VuGen, the default mode of script generation is HTML-based ("a script describing user actions"), which corresponds directly to user actions:

web_url - for a URL entered in the Address field of the internet browser.
web_link - for clicking a text link between <a href=...> and </a>.
web_image - for clicking an image (an HTML <img> element).
web_submit_form - for pressing "submit" on a GET or POST form obtained in the context of a previous operation, perhaps recorded by VuGen in HTML-based recording mode.
web_submit_data - for pressing "submit" on a GET or POST form without the context of a previous operation.
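For example, a web_submit_data call might look like the following sketch; the form action and field names are placeholders, not a real recording:

/* Sketch only: the form action and field names are placeholders. */
web_submit_data("login",
    "Action=http://server/login",   // where the form posts to
    "Method=POST",
    "Mode=HTML",
    ITEMDATA,
    "Name=username", "Value=jojo",  // form fields, as VuGen would record them
    "Name=password", "Value=bean",
    ENDITEM,
    LAST);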

Transaction Timing Scripting

lr_set_transaction - Creates a transaction manually.
lr_start_transaction_instance - Starts a nested transaction specified by its parent's handle.
lr_set_transaction_status - Sets the status of open transactions.
lr_set_transaction_status_by_name - Sets the status of a transaction.
lr_set_transaction_instance_status - Sets the status of a transaction instance.
lr_stop_transaction_instance - Stops collecting data for a transaction specified by its handle.
lr_resume_transaction_instance - Resumes collecting transaction instance data for performance analysis.
lr_resume_transaction - Resumes collecting transaction data for performance analysis.
lr_end_transaction_instance - Marks the end of a transaction instance for performance analysis.
lr_fail_trans_with_error - Sets the status of open transactions to LR_FAIL and sends an error message.
lr_stop_transaction - Stops the collection of transaction data.

Exit Scripting

Value / Constant / Explanation:
0 - LR_EXIT_VUSER - Exit without any condition, and go directly to the end action.
1 - LR_EXIT_ACTION_AND_CONTINUE - Stop the current action, and go to the next action.
2 - LR_EXIT_ITERATION_AND_CONTINUE - Stop the current iteration, and go to the next iteration.
3 - LR_EXIT_VUSER_AFTER_ITERATION - Run until the end of the current iteration and then exit.
4 - LR_EXIT_VUSER_AFTER_ACTION - Run until the end of the current action and then exit.

The Error Continuation constants:

Value / Constant / Explanation:
0 - LR_ON_ERROR_NO_OPTIONS - No error continuation option is set; on error, exit without any condition and go directly to the end action.
1 - LR_ON_ERROR_CONTINUE - Continue running when an error occurs.
2 - LR_ON_ERROR_SKIP_TO_NEXT_ACTION - On error, stop the current action and go to the next action.
3 - LR_ON_ERROR_SKIP_TO_NEXT_ITERATION - On error, stop the current iteration and go to the next iteration.
4 - LR_ON_ERROR_END_VUSER - On error, end the Vuser.
5 - LR_ON_ERROR_CALL_USER_DEFINED_HANDLER - On error, call a user-defined error handler.
The exit_status attribute of the exit command:

Value / Macro:
0 - LR_PASS
1 - LR_FAIL
2 - LR_AUTO (abort by default)
3 - LR_STOP
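Assuming the lr_exit function, which takes one of the continuation constants together with one of the status macros above, a script's error handling might be sketched as follows (the check itself is hypothetical):

/* Hedged sketch: abandon the current iteration with a FAIL status
   when a check does not pass, then continue with the next iteration. */
if (rc != 0) {
    lr_error_message("check failed, abandoning this iteration");
    lr_exit(LR_EXIT_ITERATION_AND_CONTINUE, LR_FAIL);
}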

LoadRunner XML Functions:


lr_xml_set_values - Sets the values of XML elements found by a query.
lr_xml_extract - Extracts XML string fragments from an XML string.
lr_xml_delete - Deletes fragments from an XML string.
lr_xml_replace - Replaces fragments of an XML string.
lr_xml_insert - Inserts a new XML fragment into an XML string.
lr_xml_transform - Applies an Extensible Stylesheet Language (XSL) transformation to XML data.

Transaction timing information can be obtained with these functions:


lr_get_transaction_status - Gets the current status of a transaction.
lr_get_transaction_duration - Gets the duration of a transaction by its name.
lr_get_transaction_think_time - Gets the think time of a transaction by its name.
lr_get_transaction_wasted_time - Gets the wasted time of a transaction by its name.
lr_get_trans_instance_status - Returns the current status of a transaction instance.
lr_get_trans_instance_duration - Returns the duration of a transaction instance specified by its handle.
lr_get_trans_instance_think_time - Gets the think time of a transaction instance specified by its handle.
lr_get_trans_instance_wasted_time - Gets the wasted time of a transaction instance by its handle.

Rendezvous Points

Rendezvous points are specified to ensure that all specified Vusers begin a transaction at precisely the same time. The Controller automatically enables its Scenario > Rendezvous pulldown menu when its scan recognizes, in the scripts of all Vuser groups in the scenario, a command such as:
lr_rendezvous("rendezvous_1"); // Set rendezvous point.

Source Link: http://www.wilsonmar.com/1lrscript.htm#Basicz

Some good and useful sites to practice SQL: www.sqlcourse.com, www.sqlcourse2.com, http://www.w3schools.com/sql/default.asp

SQL
SQL (Structured Query Language) is a syntax for executing queries.

SQL Data Manipulation Language (DML): These query and update commands together form the Data Manipulation Language (DML) part of SQL:
SELECT - extracts data from a database table
UPDATE - updates data in a database table
DELETE - deletes data from a database table
INSERT INTO - inserts new data into a database table

SQL Data Definition Language (DDL): The Data Definition Language (DDL) part of SQL permits database tables to be created or deleted. We can also define indexes (keys), specify links between tables, and impose constraints between database tables. The most important DDL statements in SQL are:
CREATE TABLE - creates a new database table
ALTER TABLE - alters (changes) a database table
DROP TABLE - deletes a database table
CREATE INDEX - creates an index (search key)
DROP INDEX - deletes an index

SELECT syntax ([] = optional):
SELECT "column1" [,"column2",etc] FROM "tablename" [WHERE "condition"];
SELECT [ALL | DISTINCT] column1[,column2] FROM table1[,table2] [WHERE "conditions"] [GROUP BY "column-list"] [HAVING "conditions"] [ORDER BY "column-list" [ASC | DESC]];

SELECT DISTINCT column_name(s) FROM table_name
Ex: SELECT DISTINCT Company FROM Orders
The DISTINCT keyword is used to return only distinct (different) values.

The WHERE Clause
To conditionally select data from a table, a WHERE clause can be added to the SELECT statement.
SELECT column FROM table WHERE column operator value
With the WHERE clause, the following operators can be used:
= Equal
<> Not equal
> Greater than
< Less than
>= Greater than or equal
<= Less than or equal
BETWEEN Between an inclusive range
LIKE Search for a pattern

The LIKE Condition
The LIKE condition is used to specify a search for a pattern in a column.


SELECT column FROM table WHERE column LIKE pattern
A "%" sign can be used to define wildcards (missing letters in the pattern), both before and after the pattern.

Using LIKE
The following SQL statement will return persons with first names that start with an 'O':
SELECT * FROM Persons WHERE FirstName LIKE 'O%'
The following SQL statement will return persons with first names that end with an 'a':
SELECT * FROM Persons WHERE FirstName LIKE '%a'
The following SQL statement will return persons with first names that contain the pattern 'la':
SELECT * FROM Persons WHERE FirstName LIKE '%la%'

INSERT INTO table_name VALUES (value1, value2,....)
INSERT INTO table_name (column1, ....) VALUES (value1, value2,....)

UPDATE table_name SET column_name = new_value WHERE column_name = some_value
Ex: UPDATE Person SET Address = 'Stien 12', City = 'Stavanger' WHERE LastName = 'Rasmussen'

DELETE FROM table_name WHERE column_name = some_value
DELETE FROM table_name (or) DELETE * FROM table_name - deletes all rows

SELECT Company, OrderNumber FROM Orders ORDER BY Company
SELECT column_name FROM table_name WHERE column_name IN (value1,value2,...)
SELECT * FROM Persons WHERE FirstName='Tove' AND LastName='Svendson'
SELECT column_name FROM table_name WHERE column_name BETWEEN value1 AND value2

Column Name Alias: SELECT column AS column_alias FROM table
Table Name Alias: SELECT column FROM table AS table_alias

Example: Using a Column Alias
This table (Persons):
LastName | FirstName | Address | City
Hansen | Ola | Timoteivn 10 | Sandnes
Svendson | Tove | Borgvn 23 | Sandnes
Pettersen | Kari | Storgt 20 | Stavanger

SELECT LastName AS Family, FirstName AS Name FROM Persons

Returns this result:
Family | Name
Hansen | Ola
Svendson | Tove
Pettersen | Kari

Example: Using a Table Alias
Using the same Persons table:
SELECT LastName, FirstName FROM Persons AS Employees

Returns this result (table Employees):
LastName | FirstName
Hansen | Ola
Svendson | Tove
Pettersen | Kari

INNER JOIN
SELECT Employees.Name, Orders.Product FROM Employees INNER JOIN Orders ON Employees.Employee_ID=Orders.Employee_ID
SELECT Employees.Name, Orders.Product FROM Employees, Orders WHERE Employees.Employee_ID=Orders.Employee_ID

LEFT JOIN
SELECT Employees.Name, Orders.Product FROM Employees LEFT JOIN Orders ON Employees.Employee_ID=Orders.Employee_ID
The LEFT JOIN returns all the rows from the first table (Employees), even if there are no matches in the second table (Orders). If there are rows in Employees that do not have matches in Orders, those rows will also be listed.

Name | Product
Hansen, Ola | Printer
Svendson, Tove |
Svendson, Stephen | Table
Svendson, Stephen | Chair
Pettersen, Kari |

RIGHT JOIN
SELECT Employees.Name, Orders.Product FROM Employees RIGHT JOIN Orders ON Employees.Employee_ID=Orders.Employee_ID


The RIGHT JOIN returns all the rows from the second table (Orders), even if there are no matches in the first table (Employees). If there had been any rows in Orders that did not have matches in Employees, those rows also would have been listed.

Name | Product
Hansen, Ola | Printer
Svendson, Stephen | Table
Svendson, Stephen | Chair

Who ordered a printer?
SELECT Employees.Name FROM Employees INNER JOIN Orders ON Employees.Employee_ID=Orders.Employee_ID WHERE Orders.Product = 'Printer'

UNION
The UNION command is used to select related information from two tables, much like the JOIN command. However, when using the UNION command all selected columns need to be of the same data type.
SQL Statement 1 UNION SQL Statement 2
List all different employee names in Norway and the US (without repetitions - in this case, names do not repeat):
SELECT E_Name FROM Employees_Norway UNION SELECT E_Name FROM Employees_US

UNION ALL
The UNION ALL command is equal to the UNION command, except that UNION ALL selects all values.
SQL Statement 1 UNION ALL SQL Statement 2
List all employees in Norway and the USA (with repetitions - in this case, names may repeat):
SELECT E_Name FROM Employees_Norway UNION ALL SELECT E_Name FROM Employees_USA

CREATE TABLE table_name ( column_name1 data_type, column_name2 data_type, ....... )

Data Type | Description
integer(size), int(size), smallint(size), tinyint(size) | Hold integers only. The maximum number of digits is specified in parentheses.
decimal(size,d), numeric(size,d) | Hold numbers with fractions. The maximum number of digits is specified in "size". The maximum number of digits to the right of the decimal is specified in "d".
char(size) | Holds a fixed length string (can contain letters, numbers, and special characters). The fixed size is specified in parentheses.
varchar(size) | Holds a variable length string (can contain letters, numbers, and special characters). The maximum size is specified in parentheses.
date(yyyymmdd) | Holds a date.

A Unique Index
Creates a unique index on a table. A unique index means that two rows cannot have the same index value.
CREATE UNIQUE INDEX index_name ON table_name (column_name)
The "column_name" specifies the column you want indexed.

A Simple Index
Creates a simple index on a table. When the UNIQUE keyword is omitted, duplicate values are allowed.
CREATE INDEX index_name ON table_name (column_name)
Ex:
CREATE INDEX PersonIndex ON Person (LastName)
CREATE INDEX PersonIndex ON Person (LastName DESC)
CREATE INDEX PersonIndex ON Person (LastName, FirstName)

Drop Index
You can delete an existing index in a table with the DROP statement.
DROP INDEX table_name.index_name

Delete a Table or Database
To delete a table (the table structure, attributes, and indexes will also be deleted):
DROP TABLE table_name
To delete a database:
DROP DATABASE database_name

Truncate a Table
What if we only want to get rid of the data inside a table, and not the table itself? Use the TRUNCATE TABLE command (deletes only the data inside the table):
TRUNCATE TABLE table_name

ALTER TABLE
The ALTER TABLE statement is used to add or drop columns in an existing table.
ALTER TABLE table_name ADD column_name datatype
Ex: ALTER TABLE Person ADD City varchar(30)
ALTER TABLE table_name DROP COLUMN column_name
Ex: ALTER TABLE Person DROP COLUMN Address

SQL Functions
SQL has a lot of built-in functions for counting and calculations.
SELECT function(column) FROM table

Aggregate functions in MS Access:
AVG(column) - Returns the average value of a column
COUNT(column) - Returns the number of rows (without a NULL value) of a column
COUNT(*) - Returns the number of selected rows
FIRST(column) - Returns the value of the first record in the specified field


LAST(column) - Returns the value of the last record in the specified field
MAX(column) - Returns the highest value of a column
MIN(column) - Returns the lowest value of a column
STDEV(column), STDEVP(column)
SUM(column) - Returns the total sum of a column
VAR(column), VARP(column)

Aggregate functions in SQL Server:
AVG(column) - Returns the average value of a column
BINARY_CHECKSUM, CHECKSUM, CHECKSUM_AGG
COUNT(column) - Returns the number of rows (without a NULL value) of a column
COUNT(*) - Returns the number of selected rows
COUNT(DISTINCT column) - Returns the number of distinct results
FIRST(column) - Returns the value of the first record in the specified field (not supported in SQLServer2K)
LAST(column) - Returns the value of the last record in the specified field (not supported in SQLServer2K)
MAX(column) - Returns the highest value of a column
MIN(column) - Returns the lowest value of a column
STDEV(column), STDEVP(column)
SUM(column) - Returns the total sum of a column
VAR(column), VARP(column)

Scalar functions: Scalar functions operate against a single value, and return a single value based on the input value.

Useful Scalar Functions in MS Access:
UCASE(c) - Converts a field to upper case
LCASE(c) - Converts a field to lower case
MID(c,start[,end]) - Extracts characters from a text field
LEN(c) - Returns the length of a text field
INSTR(c) - Returns the numeric position of a named character within a text field
LEFT(c,number_of_char) - Returns the requested left part of a text field
RIGHT(c,number_of_char) - Returns the requested right part of a text field
ROUND(c,decimals) - Rounds a numeric field to the number of decimals specified
MOD(x,y) - Returns the remainder of a division operation
NOW() - Returns the current system date
FORMAT(c,format) - Changes the way a field is displayed
DATEDIFF(d,date1,date2) - Used to perform date calculations

ORDER BY clause
ORDER BY is an optional clause which will allow you to display the results of your query in a sorted order (either ascending or descending), based on the columns that you specify to order by.
SELECT column1, SUM(column2) FROM "list-of-tables" ORDER BY "column-list" [ASC | DESC];
Ex: SELECT employee_id, dept, name, age, salary FROM employee_info WHERE dept = 'Sales' ORDER BY salary, age DESC;

GROUP BY
GROUP BY was added to SQL because aggregate functions (like SUM) return the aggregate of all column values every time they are called, and without the GROUP BY function it was impossible to find the sum for each individual group of column values. The GROUP BY clause will gather together all of the rows that contain data in the specified column(s) and will allow aggregate functions to be performed on one or more columns. This can best be explained by an example:
SELECT column, SUM(column) FROM table GROUP BY column
Ex: SELECT Company, SUM(Amount) FROM Sales GROUP BY Company

HAVING
HAVING was added to SQL because the WHERE keyword could not be used against aggregate functions (like SUM), and without HAVING it would be impossible to test for result conditions.
SELECT column, SUM(column) FROM table GROUP BY column HAVING SUM(column) condition value
Ex: SELECT Company, SUM(Amount) FROM Sales GROUP BY Company HAVING SUM(Amount)>10000

The SELECT INTO Statement
The SELECT INTO statement is most often used to create backup copies of tables or for archiving records.
SELECT column_name(s) INTO newtable [IN externaldatabase] FROM source

Make a Backup Copy
The following example makes a backup copy of the "Persons" table:
SELECT * INTO Persons_backup FROM Persons
The IN clause can be used to copy tables into another database:
SELECT Persons.* INTO Persons IN 'Backup.mdb' FROM Persons
If you only want to copy a few fields, you can do so by listing them after the SELECT statement:
SELECT LastName, FirstName INTO Persons_backup FROM Persons


You can also add a WHERE clause. The following example creates a "Persons_backup" table with two columns (FirstName and LastName) by extracting the persons who live in "Sandnes" from the "Persons" table:
SELECT LastName, FirstName INTO Persons_backup FROM Persons WHERE City='Sandnes'

Selecting data from more than one table is also possible. The following example creates a new table "Empl_Ord_backup" that contains data from the two tables Employees and Orders:
SELECT Employees.Name, Orders.Product INTO Empl_Ord_backup FROM Employees INNER JOIN Orders ON Employees.Employee_ID=Orders.Employee_ID

The CREATE VIEW Statement
A view is a virtual table based on the result-set of a SELECT statement.

What is a View?
In SQL, a VIEW is a virtual table based on the result-set of a SELECT statement. A view contains rows and columns, just like a real table. The fields in a view are fields from one or more real tables in the database. You can add SQL functions, WHERE, and JOIN statements to a view and present the data as if the data were coming from a single table. Note: The database design and structure will NOT be affected by the functions, WHERE, or JOIN statements in a view.

CREATE VIEW view_name AS SELECT column_name(s) FROM table_name WHERE condition

Note: The database does not store the view data! The database engine recreates the data, using the view's SELECT statement, every time a user queries a view.

Using Views
A view could be used from inside a query, a stored procedure, or from inside another view. By adding functions, joins, etc., to a view, it allows you to present exactly the data you want to the user. The sample database Northwind has some views installed by default. The view "Current Product List" lists all active products (products that are not discontinued) from the Products table. The view is created with the following SQL:
CREATE VIEW [Current Product List] AS SELECT ProductID, ProductName FROM Products WHERE Discontinued=No
We can query the view above as follows:
SELECT * FROM [Current Product List]

Another view from the Northwind sample database selects every product in the Products table that has a unit price that is higher than the average unit price:
CREATE VIEW [Products Above Average Price] AS SELECT ProductName, UnitPrice FROM Products WHERE UnitPrice>(SELECT AVG(UnitPrice) FROM Products)
We can query the view above as follows:
SELECT * FROM [Products Above Average Price]


Another example view from the Northwind database calculates the total sale for each category in 1997. Note that this view selects its data from another view called "Product Sales for 1997":
CREATE VIEW [Category Sales For 1997] AS SELECT DISTINCT CategoryName, Sum(ProductSales) AS CategorySales FROM [Product Sales for 1997] GROUP BY CategoryName
We can query the view above as follows:
SELECT * FROM [Category Sales For 1997]
We can also add a condition to the query. Now we want to see the total sale only for the category "Beverages":
SELECT * FROM [Category Sales For 1997] WHERE CategoryName='Beverages'

SQL Quick Reference (Statement - Syntax):

AND / OR: SELECT column_name(s) FROM table_name WHERE condition AND|OR condition
ALTER TABLE (add column): ALTER TABLE table_name ADD column_name datatype
ALTER TABLE (drop column): ALTER TABLE table_name DROP COLUMN column_name
AS (alias for column): SELECT column_name AS column_alias FROM table_name
AS (alias for table): SELECT column_name FROM table_name AS table_alias
BETWEEN: SELECT column_name(s) FROM table_name WHERE column_name BETWEEN value1 AND value2
CREATE DATABASE: CREATE DATABASE database_name
CREATE INDEX: CREATE INDEX index_name ON table_name (column_name)
CREATE TABLE: CREATE TABLE table_name ( column_name1 data_type, column_name2 data_type, ....... )
CREATE UNIQUE INDEX: CREATE UNIQUE INDEX index_name ON table_name (column_name)
CREATE VIEW: CREATE VIEW view_name AS SELECT column_name(s) FROM table_name WHERE condition
DELETE FROM: DELETE FROM table_name (Note: deletes the entire table!!) or DELETE FROM table_name WHERE condition
DROP DATABASE: DROP DATABASE database_name
DROP INDEX: DROP INDEX table_name.index_name
DROP TABLE: DROP TABLE table_name
GROUP BY: SELECT column_name1, SUM(column_name2) FROM table_name GROUP BY column_name1
HAVING: SELECT column_name1, SUM(column_name2) FROM table_name GROUP BY column_name1 HAVING SUM(column_name2) condition value
IN: SELECT column_name(s) FROM table_name WHERE column_name IN (value1,value2,..)
INSERT INTO: INSERT INTO table_name VALUES (value1, value2,....) or INSERT INTO table_name (column_name1, column_name2,...) VALUES (value1, value2,....)
LIKE: SELECT column_name(s) FROM table_name WHERE column_name LIKE pattern
ORDER BY: SELECT column_name(s) FROM table_name ORDER BY column_name [ASC|DESC]
SELECT: SELECT column_name(s) FROM table_name
SELECT *: SELECT * FROM table_name
SELECT DISTINCT: SELECT DISTINCT column_name(s) FROM table_name
SELECT INTO (used to create backup copies of tables): SELECT * INTO new_table_name FROM original_table_name or SELECT column_name(s) INTO new_table_name FROM original_table_name
TRUNCATE TABLE (deletes only the data inside the table): TRUNCATE TABLE table_name
UPDATE: UPDATE table_name SET column_name=new_value [, column_name=new_value] WHERE column_name=some_value
WHERE: SELECT column_name(s) FROM table_name WHERE condition

Source: http://www.w3schools.com/sql/sql_quickref.asp

SQL FAQ

What is SQL and where does it come from?
Structured Query Language (SQL) is a language that provides an interface to relational database systems. The proper pronunciation of SQL is "ess cue ell," and not "sequel" as is commonly heard. SQL was developed by IBM in the 1970s for use in System R, and is a de facto standard, as well as an ISO and ANSI standard. In common usage SQL also encompasses DML (Data Manipulation Language), for INSERTs, UPDATEs and DELETEs, and DDL (Data Definition Language), used for creating and modifying tables and other database structures. The development of SQL is governed by standards. A major revision to the SQL standard was completed in 1992, called SQL2. SQL3 supports object extensions and is (partially?) implemented in Oracle8 and 9i.

What are the differences between DDL, DML and DCL commands?
DDL is Data Definition Language statements. Some examples:
CREATE - to create objects in the database
ALTER - alters the structure of the database
DROP - delete objects from the database
TRUNCATE - remove all records from a table; all space allocated for the records is also removed
COMMENT - add comments to the data dictionary
GRANT - gives users access privileges to the database
REVOKE - withdraw access privileges given with the GRANT command

DML is Data Manipulation Language statements. Some examples:
SELECT - retrieve data from a database
INSERT - insert data into a table
UPDATE - updates existing data within a table
DELETE - deletes all records from a table; the space for the records remains
CALL - call a PL/SQL or Java subprogram
EXPLAIN PLAN - explain the access path to data
LOCK TABLE - control concurrency

DCL is Data Control Language statements. Some examples:
COMMIT - save work done
SAVEPOINT - identify a point in a transaction to which you can later roll back
ROLLBACK - restore database to original state since the last COMMIT
SET TRANSACTION - change transaction options, like what rollback segment to use

How does one escape special characters when writing SQL queries?
The LIKE keyword allows for string searches. The '_' wild card character is used to match exactly one character, '%' is used to match zero or more occurrences of any characters. These characters can be escaped in SQL. Example:
SELECT name FROM emp WHERE id LIKE '%/_%' ESCAPE '/';
Use two quotes for every one displayed. Examples:
SQL> SELECT 'Frank''s Oracle site' AS text FROM DUAL;
TEXT
--------------------
Frank's Oracle site
SQL> SELECT 'A ''quoted'' word.' AS text FROM DUAL;
TEXT
----------------
A 'quoted' word.
SQL> SELECT 'A ''''double quoted'''' word.' AS text FROM DUAL;
TEXT
-------------------------
A ''double quoted'' word.

How does one prevent Oracle from using an Index?
In certain cases, one may want to disable the use of a specific index, or all indexes, for a given query. Here are some examples:

-- Adding an expression to the indexed column
SQL> select count(*) from t where empno+0=1000;
COUNT(*)
----------
1
Execution Plan
----------------------------------------------------------
0     SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=3)
1  0    SORT (AGGREGATE)
2  1      TABLE ACCESS (FULL) OF 'T' (Cost=2 Card=1 Bytes=3)

-- Specifying the FULL hint to force a full table scan
SQL> select /*+ FULL(t) */ * from t where empno=1000;
EMPNO ENAME  JOB  MGR  HIREDATE   SAL   COMM DEPTNO GRADE
1000  Victor DBA  7839 20-MAY-03  11000 0    10     JUNIOR
Execution Plan
----------------------------------------------------------
0     SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=41)
1  0    TABLE ACCESS (FULL) OF 'T' (Cost=2 Card=1 Bytes=41)

-- Specifying the NO_INDEX hint
SQL> select /*+ NO_INDEX(T) */ count(*) from t where empno=1000;
COUNT(*)
----------
1
Execution Plan
----------------------------------------------------------
0     SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=3)
1  0    SORT (AGGREGATE)
2  1      TABLE ACCESS (FULL) OF 'T' (Cost=2 Card=1 Bytes=3)

-- Using a function over the indexed column
SQL> select count(*) from t where to_number(empno)=1000;
COUNT(*)
----------
1
Execution Plan
----------------------------------------------------------
0     SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=3)
1  0    SORT (AGGREGATE)
2  1      TABLE ACCESS (FULL) OF 'T' (Cost=2 Card=1 Bytes=3)

How does one select the TOP N rows from a table?
From Oracle 9i onwards, the RANK() and DENSE_RANK() functions can be used to determine the TOP N rows. Examples:

-- To get the top 10 employees based on their salary
SELECT ename, sal FROM (
  SELECT ename, sal, RANK() OVER (ORDER BY sal DESC) sal_rank FROM emp
) WHERE sal_rank <= 10;
-- To get the employees making the top 10 salaries
SELECT ename, sal FROM (
  SELECT ename, sal, DENSE_RANK() OVER (ORDER BY sal DESC) sal_rank FROM emp
) WHERE sal_rank <= 10;
For Oracle 8i and above, one can get the Top N rows using an inner query with an ORDER BY clause:
SELECT * FROM (SELECT * FROM my_table ORDER BY col_name_1 DESC) WHERE ROWNUM < 10;
Use this workaround for older releases:
SELECT * FROM my_table a WHERE 10 >= (SELECT COUNT(DISTINCT maxcol) FROM my_table b WHERE b.maxcol >= a.maxcol) ORDER BY maxcol DESC;

Can one select a random collection of rows from a table?
From Oracle 8i, the easiest way to randomly select rows from a table is to use the SAMPLE clause with a SELECT statement. Examples:
SELECT * FROM emp SAMPLE(10);
In the above example, Oracle is instructed to randomly return 10% of the rows in the table.
SELECT * FROM emp SAMPLE(5) BLOCKS;
This example will sample 5% of all formatted database blocks instead of rows. This clause only works for single table queries on local tables. If you include the SAMPLE clause within a multi-table or remote query, you will get a parse error or "ORA-30561: SAMPLE option not allowed in statement with multiple table references". One way around this is to create an inline view on the driving table of the query with the SAMPLE clause. Example:
SELECT t1.dept, t2.emp FROM (SELECT * FROM dept SAMPLE(5)) t1, emp t2 WHERE t1.dep_id = t2.dep_id;
If you examine the execution plan of a "Sample Table Scan", you should see a step like this:
TABLE ACCESS (SAMPLE) OF 'EMP' (TABLE)

How does one eliminate duplicate rows from a table?
Choose one of the following queries to identify or remove duplicate rows from a table, leaving only unique records in the table:
Method 1:
SQL> DELETE FROM table_name A WHERE ROWID > (
  2    SELECT min(rowid) FROM table_name B
  3    WHERE A.key_values = B.key_values);
Method 2:
SQL> create table table_name2 as select distinct * from table_name1;
SQL> drop table table_name1;
SQL> rename table_name2 to table_name1;
SQL> -- Remember to recreate all indexes, constraints, triggers, etc on table...
Method 3: (thanks to Dennis Gurnick)
SQL> delete from my_table t1
SQL> where exists (select 'x' from my_table t2
SQL>               where t2.key_value1 = t1.key_value1

SQL>               and t2.key_value2 = t1.key_value2
SQL>               and t2.rowid > t1.rowid);
Note: One can eliminate N^2 unnecessary operations by creating an index on the joined fields in the inner loop (no need to loop through the entire table on each pass by a record). This will speed up the deletion process.
Note 2: If you are comparing NOT-NULL columns, use the NVL function. Remember that NULL is not equal to NULL. This should not be a problem, as all key columns should be NOT NULL by definition.

How does one get the time difference between two date columns?
Look at this example query:
SELECT floor(((date1-date2)*24*60*60)/3600) || ' HOURS ' ||
       floor((((date1-date2)*24*60*60) - floor(((date1-date2)*24*60*60)/3600)*3600)/60) || ' MINUTES ' ||
       round(((date1-date2)*24*60*60) - floor(((date1-date2)*24*60*60)/3600)*3600 -
             floor((((date1-date2)*24*60*60) - floor(((date1-date2)*24*60*60)/3600)*3600)/60)*60) || ' SECS ' time_difference
FROM ...
If you don't want to go through the floor and ceiling math, try this method (contributed by Erik Wile):
SELECT to_char(to_date('00:00:00','HH24:MI:SS') + (date1 - date2), 'HH24:MI:SS') time_difference FROM ...
Note that this query only uses the time portion of the date and ignores the date itself. It will thus never return a value bigger than 23:59:59.

How does one add a day/hour/minute/second to a date value?
The SYSDATE pseudo-column shows the current system date and time. Adding 1 to SYSDATE will advance the date by 1 day. Use fractions to add hours, minutes or seconds to the date. Look at these examples:
SQL> select sysdate, sysdate+1/24, sysdate+1/1440, sysdate+1/86400 from dual;
SYSDATE              SYSDATE+1/24         SYSDATE+1/1440       SYSDATE+1/86400
03-Jul-2002 08:32:12 03-Jul-2002 09:32:12 03-Jul-2002 08:33:12 03-Jul-2002 08:32:13
The following format is frequently used with Oracle Replication:
select sysdate NOW, sysdate+30/(24*60*60) NOW_PLUS_30_SECS from dual;
NOW                  NOW_PLUS_30_SECS
03-JUL-2002 16:47:23 03-JUL-2002 16:47:53
Here are a couple of examples (Description - Date Expression):
Now - SYSDATE
Tomorrow / next day - SYSDATE + 1
Seven days from now - SYSDATE + 7
One hour from now - SYSDATE + 1/24
Three hours from now - SYSDATE + 3/24
Half an hour from now - SYSDATE + 1/48
10 minutes from now - SYSDATE + 10/1440
30 seconds from now - SYSDATE + 30/86400
Tomorrow at 12 midnight - TRUNC(SYSDATE + 1)
Tomorrow at 8 AM - TRUNC(SYSDATE + 1) + 8/24
Next Monday at 12:00 noon - NEXT_DAY(TRUNC(SYSDATE), 'MONDAY') + 12/24
First day of the next month at 12 midnight - TRUNC(LAST_DAY(SYSDATE) + 1)
The next Monday, Wednesday or Friday at 9 a.m. - TRUNC(LEAST(NEXT_DAY(sysdate,'MONDAY'), NEXT_DAY(sysdate,'WEDNESDAY'), NEXT_DAY(sysdate,'FRIDAY'))) + (9/24)

How does one code a matrix/crosstab/pivot report in SQL?
Newbies frequently ask how one can display rows as columns, or columns as rows. Look at these example crosstab queries (also sometimes called matrix or pivot queries):
SELECT *
FROM (SELECT job,
             sum(decode(deptno,10,sal)) DEPT10,
             sum(decode(deptno,20,sal)) DEPT20,
             sum(decode(deptno,30,sal)) DEPT30,
             sum(decode(deptno,40,sal)) DEPT40
      FROM scott.emp
      GROUP BY job)
ORDER BY 1;

JOB        DEPT10  DEPT20  DEPT30  DEPT40
---------  ------  ------  ------  ------
ANALYST             6000
CLERK        1300    1900     950
MANAGER      2450    2975    2850
PRESIDENT    5000
SALESMAN                     5600

Here is the same query with some fancy headers and totals:
SQL> ttitle "Crosstab Report"
SQL> break on report;
SQL> compute sum of dept10 dept20 dept30 dept40 total on report;
SQL> SELECT *
  2  FROM (SELECT job,
  3               sum(decode(deptno,10,sal)) DEPT10,
  4               sum(decode(deptno,20,sal)) DEPT20,
  5               sum(decode(deptno,30,sal)) DEPT30,
  6               sum(decode(deptno,40,sal)) DEPT40,
  7               sum(sal) TOTAL
  8        FROM emp
  9        GROUP BY job)
 10  ORDER BY 1;

Mon Aug 23                  Crosstab Report                         page 1
JOB        DEPT10  DEPT20  DEPT30  DEPT40   TOTAL
---------  ------  ------  ------  ------  ------
ANALYST             6000                     6000
CLERK        1300    1900     950            4150
MANAGER      2450    2975    2850            8275
PRESIDENT    5000                            5000
SALESMAN                     5600            5600
           ------  ------  ------  ------  ------
sum          8750   10875    9400           29025

Here's another variation on the theme:
SQL> SELECT DECODE(MOD(v.row#,3)
  2         ,1, 'Number:   ' || deptno
  3         ,2, 'Name:     ' || dname
  4         ,0, 'Location: ' || loc
  5         ) AS DATA
  6  FROM dept,
  7       (SELECT rownum AS row# FROM user_objects WHERE rownum < 4) v
  8  WHERE deptno = 30
  9  /

DATA
------------------------------------------------
Number:   30
Name:     SALES
Location: CHICAGO

Note: This is an individual project document; please do not copy/circulate/re-produce. This is just a model to help you in defending your project.
-----------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------

Cingular

------------------------------------------------------------------------------------------------
Front end: Java/J2EE
Back end: Oracle
QA team size: 10

Application:
Cingular already has a well developed web application. Basically, Cingular has changed its specification documents and added some more functionality, keeping its customers in mind. I have worked on two separate projects.

First Project: For example, if customers needed to pay their bills by April 25th and they posted the amount on the 25th after regular business hours instead of before the 25th, they would be charged late payment fines. So what Cingular did is change its specifications so that if customers post on the same day before 12 o'clock at night, depending upon their geographical location, they'll not be charged any extra late fines. If the customer's contract is over and they want to extend their contract with Cingular for one more year, then those customers will be given a choice between two options: they can choose to get a 10% discount off their monthly bills or get a new cell phone. If the customer was a past Cingular customer, but in the meantime changed to a different wireless service provider, and now wants to come back to Cingular, he'll get both the 10% off his monthly bills and also a free cell phone.

Second project: Cingular has added another functional requirement: the customers have an option of choosing 5 frequently dialed/called numbers, and they can call these numbers at any time with unlimited minutes for an extra charge of $5.00. And they can keep track of their calls apart from the calls to these 5 numbers.


The customers have a choice between paper bills and on-line bills. If they choose on-line bills instead of paper bills, they'll be given $1.00 off their bill. The total due shown will be their monthly charges before any of these discounts/offers [for eg: $50.00], but in their bill they will see the amount that has been taken off because of certain discounts [10% off/$1.00 off], and it will show an updated amount that needs to be paid as the updated total [for eg: $44.00+taxes].

My role:
In the initial stages of the project I used to attend the requirement meetings and design meetings for documenting and analyzing the requirements and business rules. Then I started writing the Test Plan based on the business rules and functional requirements. I used to prepare test scenarios and then test cases. I wrote in-detail manual scripts for the entire application and made sure that each and every business rule had been covered in my test scripts. I wrote around 90 test scripts. We used QuickTest Pro 6.5 for automation of the functional testing. Because these are additional functionalities added to the existing web application, apart from the functional testing I had to perform integration testing to make sure that there were no problems created because of putting together the constructed components. I also performed regression testing using QuickTest Pro. We started off with database testing. Each customer can track their calls that are being stored in the database. We performed database testing by writing queries [SQL] to check if the customer was a Cingular customer in the past, etc. We had to test the Graphical User Interface depending upon the customer's selections, like his 5 frequently called numbers, and make sure that it is customized according to the customer's profile. One of the important things we had to consider is that the response time should be very low [1 sec] even at high loads on the server. Initially, according to the business requirements documents and system specifications, we created 15000 virtual users using LoadRunner's Virtual User Generator at an instance to check the response time of the server and see how the server behaves at varying loads, and apart from this we also tested by creating 18000 Vusers to check the behavior of the application and mostly the database server, to make sure


I created rendezvous points to make sure that all the specified Vusers began the transaction at precisely the same time, for example lr_rendezvous("rendezvous_1");. I also generated the Transaction Analysis graph, where the difference in times across several runs allows different loads to be compared. Since most of the application is data driven, we had to parameterize and do data-driven testing, basically to see whether the calls are displayed, the discounts are shown on the billing cycles, etc. I used QuickTest's Data Driver Wizard for the data-driven tests. I inserted checkpoints during my testing wherever necessary; for example, I inserted standard checkpoints to check buttons like View Bill and Pay Bill. I used TestDirector as the defect tracking tool. Whenever I found a defect I would check it 3 or 4 times before logging it in TestDirector, and I would inform the development team about the defect. Sometimes the developers would add new objects to the application, and I would then add these new objects to the Object Repository in QuickTest.
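Building on the previous sketch, the rendezvous call and a parameterized value sit directly in the Vuser action. Again, this is a hedged sketch: the URL and the {PhoneNumber} parameter are made-up examples, with the parameter fed from a VuGen data table.

    /* Runs inside LoadRunner VuGen; {PhoneNumber} is a hypothetical
       parameter pulled from a data table, so each Vuser exercises a
       different account (data-driven load). */
    Action()
    {
        /* Hold every Vuser here until all have arrived, so the
           "pay_bill" transaction starts at precisely the same time. */
        lr_rendezvous("rendezvous_1");

        lr_start_transaction("pay_bill");
        web_url("pay_bill",
                "URL=http://example.com/billing/pay?phone={PhoneNumber}",
                LAST);
        lr_end_transaction("pay_bill", LR_AUTO);
        return 0;
    }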

Defects:
We found some defects while performing the integration testing, and most of the defects were related to functionality. Before performing load testing we made sure that there were no system bottlenecks or network bottlenecks. While load testing the web application and the database server, we found that the server response time was poor (too slow), and we logged it as a defect in TestDirector, explaining it as an indexing problem in the database.

Problems:
There was one time when the development team didn't agree with a defect; I talked to my QA lead and project manager and we resolved the issue. I always kept my QA lead and project manager in the loop.


------------------------------------------------------------------------------------------------

Statefarm Insurance
Front end: Visual Basic
Back end: Oracle
QA team size: 12

--------------------------------------------------------------------------------------------

Application:
When I was working at Statefarm Insurance I was involved in testing a web application that deals with insurance; Statefarm is basically an insurance provider. The focus was on attracting customers from different parts of the country. Customers select an insurance plan with the help of an assistance feature that guides them to the best available plan. Depending on the information provided by the customers, such as the state they are from, their driving history/driver's license for automobile insurance, and their credit history for home insurance, the application presents different plans and allows the customers to choose from them. They can use different payment methods: 1. one-time payment, 2. payment on a monthly basis, 3. payment every 3 months, 4. payment every 6 months. The site handles critical customer information such as credit history and driving history. The site also provides discounts to customers based on: 1. referrals, 2. having taken an insurance plan with Statefarm in the past, 3. being preferred clients, 4. adult discounts. The site also provides an uploaded comparison sheet, which compares its rates with those of other insurance providers.

My Role:
I performed database testing of the customer information to make sure that the integrity of the database was maintained.


We created some batch files (*.bat) to test the database. We had to make sure that the application was user friendly and had acceptable response times, which meant a lot of load testing, and we used LoadRunner for this. When a customer chooses plans, the web application has to output all the available plans based on the critical information he provided, and the response time while doing this should be minimal, so we performed load testing on this flow with LoadRunner. Between submitting the customer's critical information and showing all the available plans, we inserted a synchronization point. According to our specification document we created 10,000 virtual users at a time using LoadRunner's Virtual User Generator, and we then went ahead and tested with 11,000 virtual users to see how the server performed beyond that. The crucial part of the application was functional testing: depending on the customer's critical information the application should quote different plans and rates. We used WinRunner for the functional testing and regression testing. The insurance plans change with the city the customer is located in, so we had to parameterize the tests for different cities, which we did in WinRunner. I logged and tracked defects using TestDirector.
------------------------------------------------------------------------------------------------
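In the project the city parameterization was done in WinRunner, but to illustrate the idea in the same C Vuser style as the earlier sketches, here is how a parameterized quote request could look in a LoadRunner web Vuser. The form fields, URL, and the {State}/{City} parameters are all placeholders of my own, not the actual Statefarm script.

    /* Runs inside LoadRunner VuGen (web HTTP/HTML protocol).
       {State} and {City} are hypothetical parameters: each iteration
       pulls the next row from the attached data table, so one script
       covers rate quotes for many cities. */
    Action()
    {
        lr_start_transaction("get_quote");

        /* Submit the quote form with the current city's values. */
        web_submit_data("quote",
                        "Action=http://example.com/quote",  /* placeholder */
                        "Method=POST",
                        ITEMDATA,
                        "Name=state", "Value={State}", ENDITEM,
                        "Name=city",  "Value={City}",  ENDITEM,
                        LAST);

        lr_end_transaction("get_quote", LR_AUTO);
        return 0;
    }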

COX communications

-----------------------------------------------------------------------------------------------
Front end: Java/J2EE
Back end: Oracle 9i
QA team size: 10

Application:
When I was working at COX Communications I was involved in testing a new application called VAS (Voice Activated System). This system acts like a personal secretary for the user.


The system uses voice recognition software, which sends information to the application. When customers subscribe to VAS they are given a 1-800 number. If somebody tries to reach the user through VAS, they are given options such as: 1. find the user, 2. leave a message for him, 3. more options. The "more options" choice is used when a conference call has been scheduled; if the caller wants to join the conference he needs to enter the conference ID. The user can log in to VAS by dialing the 1-800 number and then pressing the asterisk (*). He is asked to enter his PIN; once he enters the correct PIN he enters VAS, where he can check his messages/faxes/e-mails. Users can also change their account information, such as the PIN.

My Role:
I reviewed and analyzed all the requirements documents and also sat with the business analysts to gather information about the requirements. I added all the requirements in the Requirements tab of TestDirector, and I also organized the projects in TestDirector. I was involved in the functional testing of the login module where the user enters his PIN: if the PIN is correct he should hear a welcome message, and if not he should hear an error message saying the PIN is invalid. I used WinRunner for the functional testing and regression testing. The application connects to the database to access customer records such as calendars (scheduled/added appointments), mailbox numbers, etc., and I was involved in the database testing. We did a lot of functional testing; for example, if there was an appointment at 11:30 AM, we checked whether the application was sending the protocol message to the communications server. We performed load testing using LoadRunner: we created 1,000 users with LoadRunner's Vuser Generator and put a calendar event on a particular day/time, with a different appointment for each user, and then checked the reaction of the application to see if it called each user at the right time.
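The login test boils down to a simple expected-behavior table: the correct PIN should play the welcome prompt, and anything else the invalid-PIN prompt. Here is a minimal table-driven sketch of that logic in plain C; check_pin and the prompt names are hypothetical stand-ins for the real VAS behavior, not its actual code.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stand-in for the real VAS PIN check. */
    const char *check_pin(const char *entered, const char *stored)
    {
        return strcmp(entered, stored) == 0 ? "WELCOME" : "INVALID_PIN";
    }

    int main(void)
    {
        /* Table-driven test: input PIN vs. the prompt we expect to hear. */
        struct { const char *pin; const char *expected; } cases[] = {
            { "1234", "WELCOME" },      /* correct PIN -> welcome message */
            { "9999", "INVALID_PIN" },  /* wrong PIN   -> error message   */
            { "",     "INVALID_PIN" },  /* empty input -> error message   */
        };
        int i, failures = 0;
        int n = (int)(sizeof cases / sizeof cases[0]);

        for (i = 0; i < n; i++) {
            const char *got = check_pin(cases[i].pin, "1234");
            if (strcmp(got, cases[i].expected) != 0) {
                printf("FAIL: pin=%s got=%s want=%s\n",
                       cases[i].pin, got, cases[i].expected);
                failures++;
            }
        }
        if (failures)
            printf("%d failure(s)\n", failures);
        else
            printf("all login cases passed\n");
        return failures;
    }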

Note: You can use the following description to defend one of your projects in the interview, and use the same kind of creativity for the remaining projects. ------------------------------------------------------------------------------------------------------------------

I wrote the test plan from scratch, and we would have meetings with the client and the development teams to gather all the information (information like the application flow, the business requirements, and the business rules). I wrote detailed manual test scripts for the entire application and made sure that each and every business rule was covered; I wrote around 70 manual test scripts in my previous project. Based on all the manual test scripts, I automated the entire application using WinRunner as my automated testing tool. The automation was done in such a way that one script checks different scenarios of one business process by selecting different options; this is achieved by editing the script and looping the program so that it selects a different option each time. For each option selected, I had checkpoints to check the corresponding values. For example, to create a record that has a drop-down with 5 values in it, I wrote the program to loop 5 times, each time selecting a different value while creating the record; for each value selected there is a condition and a checkpoint for that selection. I used to run the tests overnight and check the results every morning for errors. If I found any errors I would go into the application and check them manually to reconfirm before actually logging a defect. I also created a separate script for a sanity check, which I ran for every new build that came to QA; after a successful run I would run my full suite. And we had daily meetings with the developers to discuss the status of the defects and what changes or new requirements they had implemented in the application, so that I could go ahead and change or write the WinRunner code for those new changes/requirements.
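The looping pattern described above was written in WinRunner's TSL (a C-like language); the same idea can be sketched in plain C, where select_option, create_record, and check_value are hypothetical stand-ins for the WinRunner calls and GUI checkpoints, not the actual script.

    #include <stdio.h>

    /* Hypothetical stand-ins for the WinRunner calls: selecting a
       drop-down value, saving the record, and the GUI checkpoint
       that verifies the resulting values for that selection. */
    static const char *options[] = { "A", "B", "C", "D", "E" };

    static void select_option(const char *opt) { printf("select %s\n", opt); }
    static void create_record(void)            { printf("create record\n"); }
    static int  check_value(const char *opt)   { (void)opt; return 1; /* stub */ }

    int main(void)
    {
        int i;
        /* One script, five scenarios: loop over every drop-down value,
           create the record, and run a checkpoint for that selection. */
        for (i = 0; i < 5; i++) {
            select_option(options[i]);
            create_record();
            if (!check_value(options[i]))
                printf("checkpoint FAILED for option %s\n", options[i]);
        }
        return 0;
    }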
