
Load Testing to Predict Web Performance

White Paper

Mercury Interactive Corporation


1325 Borregas Avenue
Sunnyvale, CA 94089
408-822-5200
www.mercuryinteractive.com

Abstract
Businesses that are leveraging the Web to conduct daily transactions need to provide customers with the best possible user experience in order to be successful. Often, however, these businesses lose customers because their sites are not able to handle surges in Web traffic. For example, a successful promotion that drives Web traffic can radically impact system performance. Online customers, tired of waiting, will simply click to competitors’ sites, and any opportunities for revenue growth will be lost.

Whether a business is a brick-and-mortar or a dot.com, the challenges of successfully conducting business online are the same: high user volumes, slow response times for Web requests and the need to ensure the overall reliability of the service. This paper illustrates how maintaining Web application performance is key to overcoming these e-business challenges and generating revenue. The paper then discusses the importance of maintaining a Web application to ensure customer satisfaction and why load testing is critical to successfully launching and managing Web sites. In addition, it examines various types of load testing and provides a detailed discussion of the load testing process and the attributes of a reliable testing tool. In closing, this paper provides an overview of Mercury Interactive’s LoadRunner® load testing solution.


Table of Contents
Introduction
Ensuring Optimal End-user Experience—A Complex Issue
Application Load Testing Prior to Going Live
Challenges of Automated Load Testing Tools
The Process of Automated Load Testing
Mercury Interactive’s LoadRunner
Summary
About Mercury Interactive


Introduction
In the last few years, e-business has grown at an accelerated rate. Today, analysts estimate that 260 million people use the Internet—and there is little sign that this growth will slow down. In fact, the International Data Corporation expects the number of online users to reach 500 million within the next two years.

E-business has become a popular commercial medium for two reasons: it enables businesses to share information and resources worldwide, and it offers them an efficient channel for advertising, marketing and e-commerce. By using the Internet, businesses have been able to improve their sales and marketing reach, strengthen their quality assurance for customers and conduct multimedia conversations with customers.

More important, businesses are realizing the challenges—and rewards—of providing customers with a positive end-user experience. After all, customers who are satisfied with their online experience are likely to engage in repeat business and provide a steady stream of revenue. As a result, businesses have become more focused on providing positive end-user experiences.

Ensuring Optimal End-user Experience—A Complex Issue


In addition to being fast-growing, e-business is also very complex. According to a
December 1999 report by the IBM High-Volume Web site team, commercial Web sites
can be classified into four categories based on the types of business transactions that
they perform: publishing/subscribers, online shopping, customer self-service and
trade/auction sites. By understanding these categories, businesses can better predict
their level of user volume and understand how users prefer to access the site.

Following is an overview of the different commercial Web site categories:

Publishing/subscribers sites provide the user with media information, such as magazine and
newspaper publications. Although the total number of concurrent users is generally low on
these sites, the number of individual transactions performed on a per user basis is relatively
high, resulting in the largest number of page views of all site categories.

Online shopping sites allow users to browse and shop for anything found in a traditional brick-and-mortar store. Traffic is heavy, with volumes ranging between one and three million hits per day.

Customer self-service sites include banking and travel reservation sites. Security considerations (e.g., privacy, authentication, site regulation, etc.) are high.

Trade/auction sites allow users to buy and sell commodities. This type of site is volatile,
with very high transaction rates that can be extremely time-sensitive.

No matter the transaction type, Web sites must enable customers to conduct business in
a timely manner. For this reason, a scalable architecture is essential.

A well-structured Web environment, however, consists of an extremely complex multi-tier system. Scaling this infrastructure from end to end means managing the performance and capacities of individual components within each tier. Figure 1 illustrates the complexity of these components.

Fig. 1. Schematic of a complex Web infrastructure: clients, routers, switches, Web servers, a firewall, load balancers, application servers, and database servers and other database sources, connected across the Internet.

This complexity prompts many questions about the integrity and performance capabilities of a Web site. For instance, will the response time experienced by the user be less than 8 seconds? Will the Web site be able to sustain a given number of users? Will all the pieces of the system, in terms of interoperability, co-exist when connected together? Is communication between the application server and the database server fast enough? Is there sufficient hardware on each tier to handle high volumes of traffic?

To eliminate these performance issues, businesses must implement a method for predicting
how Web applications will behave in a production environment, prior to deployment.

Application Load Testing Prior to Going Live


To accommodate the growth of their sites, Web developers can optimize software or add
hardware to each component of the system. However, to ensure optimal performance,
businesses must load test the complete assembly of a system prior to going live.

Application load testing is the measure of an entire Web application’s ability to sustain
a number of simultaneous users and/or transactions, while maintaining adequate
response times. Because it is comprehensive, load testing is the only way to accurately
test the end-to-end performance of a Web site prior to going live.

Application load testing enables developers to isolate bottlenecks in any component of the infrastructure. Two common methods for implementing this process are manual and automated testing. Manual testing, however, has several built-in challenges, such as determining how to:

• Emulate hundreds of thousands of manual users that will interact with the application
to generate load
• Coordinate the operations of users
• Measure response times
• Repeat tests in a consistent way
• Compare results


Because load testing is iterative in nature, testers must identify performance problems,
tune the system and retest to ensure that tuning has had a positive impact—countless
times. For this reason, manual testing is not a very practical option.

With automated load testing tools, tests can be easily rerun and the results automatically measured. In this way, automated testing tools provide a more cost-effective and efficient solution than their manual counterparts. Plus, they minimize the risk of human error during testing.

Today, automated load testing is the preferred choice for load testing a Web application.
The testing tools typically use three major components to execute a test:
• A control console, which organizes, drives and manages the load
• Virtual users, which are processes used to imitate the real user performing a business
process on a client application
• Load servers, which are used to run the virtual users

Using these components, automated load testing tools can:
• Replace manual testers with automated virtual users
• Simultaneously run many virtual users on a single load-generating machine
• Automatically measure transaction response times
• Easily repeat load scenarios to validate design and performance changes

This advanced functionality in turn allows testers to save time and costly resources.
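To make the division of labor concrete, the following is a minimal sketch, in Python, of how these three components could fit together: a control console that drives a pool of virtual users from a single load-generating process and collects their transaction response times. It is an illustration only, not LoadRunner code, and the target URL, user count and iteration count are assumptions for the example.

```python
# Minimal sketch of an automated load test driver (an illustration, not LoadRunner).
# The target URL, user count and iteration count are assumptions for the example.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

TARGET_URL = "http://www.example.com/"   # hypothetical system under test
VIRTUAL_USERS = 25                       # concurrent virtual users on this load machine
ITERATIONS = 4                           # business-process repetitions per virtual user

def virtual_user(user_id):
    """Emulate one user: run the business process repeatedly, timing each pass."""
    timings = []
    for _ in range(ITERATIONS):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL, timeout=30) as response:
            response.read()              # pull the full page, as a browser would
        timings.append(time.perf_counter() - start)
    return timings

def control_console():
    """Organize, drive and manage the load, then report response times."""
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        per_user = list(pool.map(virtual_user, range(VIRTUAL_USERS)))
    all_times = [t for user_times in per_user for t in user_times]
    print(f"transactions:          {len(all_times)}")
    print(f"average response time: {mean(all_times):.3f}s")
    print(f"worst response time:   {max(all_times):.3f}s")

if __name__ == "__main__":
    control_console()
```

In practice, a commercial tool distributes the virtual users across multiple load servers and adds scheduling, monitoring and reporting on top of this basic loop.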

Fig. 2. Manual testers are replaced by a single control console driving several thousand virtual users against the Web server, application server and database across the Internet.

Automated testing tools recently demonstrated their value in a report by the Newport Group. The report, published in 1999, revealed that 52 percent of Web-based businesses did not meet their anticipated scalability objectives. Of this group, 60 percent did not use any type of automated load testing tool. In contrast, nearly 70 percent of businesses that met their scalability expectations had used an automated load testing tool.


Fig. 3. Automated load testing enables businesses to meet scalability expectations (source: © Newport Group Inc., 1999)

Challenges of Automated Load Testing Tools


The primary challenges for load testing tools are accuracy, scalability and the ability to isolate performance problems. To isolate performance problems, load testing tools monitor key system-level components and identify bottlenecks during the run of a load test. Accuracy is defined by how closely an automated tool can emulate real user behavior. Scalability relates to the product’s ability to generate the maximum load using the minimum amount of resources.

Automated load testing tools must address all aspects of accuracy and scalability and be
able to pinpoint problems in order to ensure reliable end-to-end testing. Listed below
are some key attributes of accuracy and scalability.

Accuracy:
• Recording ability against a real client application
• Capturing protocol-level communication between the client application and the rest of the system
• Providing flexibility and the ability to define user behavior configuration (e.g., think times, connection speeds, cache settings, iterations)
• Verifying that all requested content returns to the browser to ensure a successful transaction
• Showing detailed performance results that can be easily understood and analyzed to quickly pinpoint the root cause of problems
• Measuring end-to-end response times
• Using real-life data
• Synchronizing virtual users to generate peak loads
• Monitoring different tiers of the system with minimal intrusion

Scalability:
• Generating the maximum number of virtual users that can be run on a single machine before exceeding the machine’s capacity
• Generating the maximum number of hits per second against a Web server
• Managing thousands of virtual users
• Increasing the number of virtual users in a controlled fashion

Fig. 4. Key attributes of accuracy and scalability in load testing


The Process of Automated Load Testing


By taking a disciplined approach to load testing, businesses can optimize resources, as well as better predict hardware and software requirements and set performance expectations to meet end-user service level agreements (SLAs). Repeatability of the testing process is necessary to verify that tuning changes have had the intended effect. Following is a step-by-step overview of the automated load testing process:

Step 1: System Analysis


This step is critical to interpreting the user’s testing needs and is used to determine whether the system will scale and perform to the user’s expectations. Testers essentially translate existing requirements of the user into load testing objectives. A thorough evaluation of the requirements and needs of a system, prior to load testing, will provide more realistic test conditions.

First, the tester must identify all key performance goals and objectives before executing any testing strategies. Examples include identifying which processes/transactions to test, which components of the system architecture to include in the test and the number of concurrent connections and/or hits per second to expect against the Web site.

By referring to the four categories of commercial Web sites described earlier, developers can easily classify their site’s process/transaction types. For example, a business-to-consumer site can implement an online shopping process in which a customer browses through an online bookstore catalog, selects an item and makes a purchase. This process could be labeled “buy book” for the purposes of the test. Defining these objectives provides a concise outline of the SLAs and marks the goals that are to be achieved with testing.

Second, the tester needs to define the input data used for testing. Some data is created dynamically; for example, auction bids may change every time a customer submits a new request. Other data comes from random browsing of non-transactional processes, such as paging through a brochure or viewing online news. Emulating realistic data input helps avoid inaccurate load test results.

Third, testers must determine the appropriate strategy for testing applications. They can select from three strategy models: load testing, stress testing and capacity testing. Load testing is used to test an application against a requested number of users. The objective is to determine whether the site can sustain this requested number of users with acceptable response times. Stress testing, on the other hand, is load testing over extended periods of time to validate an application’s stability and reliability. The last strategy is capacity testing. Capacity testing is used to determine the maximum number of concurrent users an application can manage. For example, businesses would use capacity testing to benchmark the maximum loads of concurrent users their sites can sustain before experiencing system failure.
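As an illustration of the capacity-testing strategy just described, the sketch below, in Python, ramps the emulated user count upward until the average response time breaks an assumed service level. The measurement function is a stand-in for actually running a load scenario, and the threshold, step size and degradation model are invented for the example.

```python
# Sketch of a capacity-testing loop: grow the load until the service level breaks.
# measure_avg_response_time() is a stand-in for running a real load scenario.
import random

ACCEPTABLE_RESPONSE_TIME = 8.0   # seconds; the "less than 8 seconds" goal cited earlier
STEP = 500                       # virtual users added per ramp step
MAX_USERS = 20_000

def measure_avg_response_time(concurrent_users):
    """Placeholder: pretend response time degrades as load grows."""
    return 0.5 + concurrent_users / 2_500 + random.uniform(0.0, 0.3)

def find_capacity():
    supported = 0
    for users in range(STEP, MAX_USERS + 1, STEP):
        avg = measure_avg_response_time(users)
        print(f"{users:6d} users -> {avg:5.2f}s average response time")
        if avg > ACCEPTABLE_RESPONSE_TIME:
            break
        supported = users
    return supported

if __name__ == "__main__":
    print(f"capacity: roughly {find_capacity()} concurrent users within the target response time")
```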


Fourth, testers need to cultivate a solid understanding of the system architecture, including:
• Defining the types of routers used in the network setup
• Determining whether multiple servers are being used
• Establishing whether load balancers are used as part of the IP networks to manage the servers
• Finding out which servers are configured into the system (Web, application, database)

Last, developers must determine which resources are available to run the virtual users. This requires deciding whether there is a sufficient number of load generators or test machines to run the appropriate number of virtual users. It also requires determining whether the testing tool has multithreading capabilities and can maximize the number of virtual users being run. Ultimately, the goal is to minimize system resource consumption while maximizing the virtual user count.

Step 2: Creating Virtual User Scripts


A script recorder is used to capture all the business processes into test scripts, often referred to as virtual user scripts or virtual users. A virtual user emulates the real user by driving the real application as a client.

It is necessary to identify and record all the various business processes from start to finish. Defining these transactions makes it possible to break a business process down into its individual actions and to measure how long each one takes.
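The sketch below suggests, in Python, what a recorded business process looks like once it is broken into named transactions whose durations are measured individually. It is not VUGen's script format; the URLs, form fields and transaction names are hypothetical, and a real script would be generated by the recorder rather than written by hand.

```python
# Sketch of a "buy book" virtual user script split into named, timed transactions.
# The URLs and form fields are hypothetical; a real script would come from the recorder.
import time
import urllib.parse
import urllib.request

BASE = "http://bookstore.example.com"   # assumed system under test
timings = {}                            # transaction name -> elapsed seconds

def transaction(name, url, form=None):
    """Run one step of the business process and record how long it took."""
    payload = urllib.parse.urlencode(form).encode() if form else None
    start = time.perf_counter()
    with urllib.request.urlopen(url, data=payload, timeout=30) as response:
        response.read()                 # pull the full response, as a browser would
    timings[name] = time.perf_counter() - start

# The recorded business process, start to finish:
transaction("browse_catalog", f"{BASE}/catalog")
transaction("select_item", f"{BASE}/item?id=1234")
transaction("buy_book", f"{BASE}/checkout", form={"item": "1234", "qty": "1"})

for name, seconds in timings.items():
    print(f"{name:15s} {seconds:.3f}s")
```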

Step 3: Defining User Behavior


Run-time settings define the way that the script runs in order to accurately emulate real
users. Settings can configure think time, connection speed and error handling.

Think times can vary in accordance with different user actions and the user’s level of experience with Web technology. For example, novice users require more time to execute a process because they have the least experience using the Web. Therefore, a tester will need to emulate more think time in the form of pauses. Advanced users, however, have much more experience and can execute processes at an accelerated level, often by using shortcuts.

System response times also can vary because they depend on connection speed, and users connect to the Web system at different speeds (e.g., modem, LAN/WAN). A connection-speed setting emulates dial-up connections over PPP at varying modem speeds (e.g., 28.8 Kbps, 56 Kbps, etc.) and is useful for measuring application response times based on the connection speed.

Error handling is another setting that requires configuration. Errors arise throughout the
course of a scenario and can impede the test execution. The tester can configure virtual
users to handle these errors so that the tests can run uninterrupted.
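A minimal Python sketch of these three run-time behaviors (randomized think time per user profile, an emulated connection speed and continue-on-error handling) is shown below. The profiles, bandwidth figures and demo steps are illustrative assumptions, not settings taken from any particular tool.

```python
# Sketch of emulating user behavior: think time, connection speed and error handling.
# Profiles, bandwidths and the demo steps are illustrative assumptions.
import random
import time

THINK_TIME_RANGES = {          # seconds of "reading and typing" between steps
    "novice": (8.0, 20.0),
    "expert": (1.0, 4.0),
}
LINK_BPS = {"28.8k": 28_800, "56k": 56_000, "lan": 10_000_000}

def think(profile):
    """Pause for a randomized think time appropriate to the user profile."""
    low, high = THINK_TIME_RANGES[profile]
    time.sleep(random.uniform(low, high))

def download_seconds(page_bytes, link):
    """Lower bound on transfer time at the emulated connection speed."""
    return page_bytes * 8 / LINK_BPS[link]

def run_step(step, continue_on_error=True):
    """Run one scripted step; either log the error and continue, or abort."""
    try:
        step()
    except Exception as err:
        if not continue_on_error:
            raise
        print(f"step failed, continuing: {err}")

if __name__ == "__main__":
    run_step(lambda: print("login page requested"))
    think("expert")
    print(f"a 60 KB page needs at least {download_seconds(60_000, '28.8k'):.1f}s at 28.8 Kbps")
```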

Step 4: Creating a Load Test Scenario


The load test scenario contains information about the groups of virtual users that will
run the scripts and the load machines that the groups are running on.


In order to run a successful scenario, testers must first define individual groups based on
common user transactions. Second, they need to define and distribute the total number
of virtual users. A varying number of virtual users can be assigned to individual business
processes to emulate user groups performing multiple transactions.

Third, testers must determine which load-generating machines the virtual users will run on. Load generator machines can be added to the client side of the system architecture to run additional virtual users. Last, testers need to specify how the scenario will run. Virtual user groups can run in either staggered or parallel formation. Staggering the virtual users allows testers to examine a gradual increase of the user load to a peak.
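The sketch below shows, in Python, one way a scenario definition of this kind could be expressed: virtual user groups tied to business processes, an assignment of each group to a load generator, and a staggered ramp-up schedule. The group names, machine names, user counts and batch sizes are invented for the example.

```python
# Sketch of a load test scenario definition: groups, load generators and a staggered ramp.
# Machine names, group sizes and the ramp schedule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VirtualUserGroup:
    name: str            # the common business process the group performs
    script: str          # virtual user script to run
    users: int           # virtual users assigned to this group
    load_generator: str  # machine the group runs on

SCENARIO = [
    VirtualUserGroup("browse_catalog", "browse.py", 600, "loadgen-01"),
    VirtualUserGroup("buy_book",       "buy.py",    300, "loadgen-01"),
    VirtualUserGroup("search",         "search.py", 100, "loadgen-02"),
]

def staggered_ramp(group, batch=50, interval_s=60):
    """Yield (elapsed_seconds, running_users) pairs for a gradual ramp to peak."""
    running, elapsed = 0, 0
    while running < group.users:
        running = min(running + batch, group.users)
        yield elapsed, running
        elapsed += interval_s

if __name__ == "__main__":
    total = sum(g.users for g in SCENARIO)
    print(f"total virtual users in scenario: {total}")
    for t, n in staggered_ramp(SCENARIO[0]):
        print(f"t={t:4d}s  {SCENARIO[0].name}: {n} users running")
```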

Step 5: Running the Load Test Scenario and Monitoring the Performance
Real-time monitoring allows testers to view the application’s performance at any point during the test. Every component of the system requires monitoring: the clients, the network, the Web server, the application server, the database and all server hardware. Because testers can view the performance of every tier, server and component of the system while the test runs, they can detect performance bottlenecks as soon as they appear. As a result, they can accelerate the test process and arrive at a more stable application.

Step 6: Analyzing Results


This is the most important step in collecting and processing the data to resolve performance bottlenecks. The analysis yields a series of graphs and reports that help summarize and present the end-to-end test results. For example, Figure 5 uses generic data to display a standard performance-under-load graph that plots the total number of virtual users against the response time. This can be used to determine the maximum number of concurrent users before response times become unacceptable. Figure 6 shows a transaction overview revealing the total number of transactions that passed in a scenario. Analysis of these types of graphs can help testers isolate bottlenecks and determine which changes are needed to improve system performance. After these changes are made, the tester must rerun the load test scenarios to verify the adjustments.

Fig. 5. This is a generic graph showing performance under load. This graph is useful in pinpointing bottlenecks. For example, if a tester wants to know how many users the system can support while keeping response times at 2 seconds or below, the results above show a maximum of 7,500 concurrent users.
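A hedged sketch of the calculation behind a graph like Figure 5: given (concurrent users, average response time) samples, find the largest load that still meets a response-time threshold. The sample data below is invented so that the answer matches the 2-second/7,500-user example in the caption.

```python
# Sketch: from (concurrent users, average response time) samples, find the largest
# load that still meets a response-time threshold. The sample data is invented.
SAMPLES = [          # (concurrent virtual users, average response time in seconds)
    (1_000, 0.9), (2_500, 1.2), (5_000, 1.6), (7_500, 2.0),
    (10_000, 3.4), (12_500, 6.1), (15_000, 11.8),
]

def users_within_threshold(samples, threshold_s):
    supported = [users for users, resp in samples if resp <= threshold_s]
    return max(supported) if supported else 0

if __name__ == "__main__":
    print(users_within_threshold(SAMPLES, 2.0))   # -> 7500, matching the Fig. 5 example
```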


Fig. 6. This graph is a generic display of the number of transactions that passed or failed. In the above example, if the goal is to obtain a 90-percent passing rate, then transaction 2 fails: approximately 33 percent of its transactions failed.

Mercury Interactive’s LoadRunner


Mercury Interactive’s LoadRunner is a load testing tool that predicts system behavior and performance. It exercises an entire enterprise infrastructure by emulating thousands of users to identify and isolate problems. Able to support multiple environments, LoadRunner can test an entire enterprise infrastructure, including e-business, ERP, CRM and custom client/server applications, thereby enabling IT and Web groups to optimize application performance. By emulating the behavior of a real user, LoadRunner can test applications communicating with a wide range of protocols, such as HTTP(S), COM, CORBA, Oracle Applications, etc. LoadRunner also features seamless integration with Mercury Interactive’s Web performance monitoring tool, Topaz™. Therefore, the same tests created during testing can be reused to monitor the application once it is deployed.

LoadRunner enhances every step of the load testing process to ensure that users reap the maximum return on their investment in the tool. The remainder of this paper discusses how LoadRunner supports each segment of the load testing process:

Step 1: System Analysis


LoadRunner advocates the same system analysis as mentioned previously in this paper. In emulating a test environment, it is necessary to identify all testing conditions, including system architecture components, the processes being tested and the total number of virtual users to test with. A good system analysis will enable customers to convert their goals and requirements into a successful, automated test script.

Step 2: Creating Virtual User Scripts


You begin by recording the business processes to create a test script. Script recording is done using LoadRunner’s Virtual User Generator (VUGen). VUGen is a component that runs on a client desktop to capture the communication between the real client application and the server. VUGen can emulate the exact behavior of a real browser by sending various e-business protocol requests to the server. VUGen also can record against Netscape or Internet Explorer browsers—or any user-defined client that provides the ability to specify a proxy address. After the recording process, a test script is generated.

Fig. 7. The Virtual User Generator allows testers to capture business processes to create virtual users

You can then add logic to the script to make it more realistic. Intelligence can be added
to the scripts so that they emulate virtual user reasoning while executing a transaction.
LoadRunner executes this stage using the transactions, as well as its verification and
parameterization features.

• Transactions. Transactions represent a series of operations that are required to be measured under load conditions. A transaction can be a single URL request or a complete business process leading through several screens, such as the online purchase of a book.

• Verification. VUGen allows insertion of verification checkpoints using ContentCheck™. ContentCheck verifies the application functionality by analyzing the returned HTML Web page to ensure a successful transaction. If the verification fails, LoadRunner will log the error and highlight the reasons for the failure (e.g., broken link, missing images, erroneous text).

• Parameterization. To accurately emulate real user behavior, LoadRunner virtual users use varying sets of data during load testing, replacing constant values in the script with variables or parameters. The virtual user can substitute the parameters with values from a data source, such as flat files, random numbers, date/time, etc. This allows a common business process, such as searching for or ordering a book, to be performed many times by different users.
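The following Python sketch illustrates the idea of parameterization in a generic way: a hard-coded value in a scripted request is replaced with values drawn from a data file, so each iteration submits different data. The endpoint and the book_titles.csv file (one title per row) are assumptions for the example; this is not LoadRunner's parameterization syntax.

```python
# Sketch of parameterization: replace a hard-coded value in a scripted request with
# values drawn from a data file. The endpoint and book_titles.csv are assumptions.
import csv
import itertools
import urllib.parse

SEARCH_URL = "http://bookstore.example.com/search?title={title}"   # hypothetical endpoint

def load_parameters(path="book_titles.csv"):
    """Read one parameter value per row and cycle through them across iterations."""
    with open(path, newline="") as handle:
        values = [row[0] for row in csv.reader(handle) if row]
    return itertools.cycle(values)

if __name__ == "__main__":
    titles = load_parameters()
    for _ in range(5):                  # five iterations, each submitting different data
        title = next(titles)
        print(SEARCH_URL.format(title=urllib.parse.quote(title)))
```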


Step 3: Defining User Behavior


LoadRunner provides comprehensive run-time settings to configure scripts that emulate
the behavior of real users.

Fig. 8. The run-time settings are used to emulate the real user as closely as possible. In this example, think
time is randomly generated to simulate the speed at which the user interacts with the system.

Examples of run-time settings include:

Think time: Controls the speed at which the virtual user interacts with the system by including pauses, or think times, during test execution. By varying think times, LoadRunner can emulate the behaviors of different users—from novice to expert.

Dial-up speed: Emulates a user connected to the system over a modem and/or LAN/WAN connection. Modem speeds range from 14.4 Kbps to 56 Kbps. This is useful for controlling user behavior in order to accurately emulate response times for each request.

Emulate cache: Emulates a user browsing with a specific cache size. Caching can be turned off based on server requirements.

Browser emulation: Enables the tester to specify which browser the virtual user emulates. LoadRunner supports both Netscape and Internet Explorer, as well as any custom browser.

Number of connections: Allows the virtual user to control the number of connections to a server, like a real browser, for the download of Web page content.

IP spoofing: Tests the performance impact of IP-dependent components by assigning virtual users their own IP addresses from the same physical machine.

Iterations: Controls the repetition of virtual user scripts and paces virtual users by specifying how long to wait between iterations. Iterative testing defines the amount of work a user does based on the number of times a process is performed using varying data.

Error handling: Regulates how a virtual user handles errors during script execution. LoadRunner can enable the Continue on Error feature when the virtual user encounters an error during replay.

Log files: Store information about virtual user-server communication. Standard logging records all transactions, rendezvous and output messages. Extended logging also tracks warnings and other messages.

Step 4: Creating a Load Test Scenario


LoadRunner’s Controller is used to create scenarios. As a single point of control, it provides complete visibility of the tests and the virtual users.

Fig. 9. LoadRunner’s Controller is an interactive environment for organizing, driving and managing the load
test scenario

The Controller facilitates the process of creating a load test scenario by allowing users to:

• Assign scripts to individual groups
• Define the total number of virtual users needed to run the tests
• Define the host machines that virtual users are running on

In addition, LoadRunner offers a Scenario Wizard, a Scheduler and TurboLoad™ to enhance the tester experience.


Scenario Wizard. LoadRunner’s Scenario Wizard enables testers to quickly compose multi-user load test scenarios. Using five easy-to-follow screens, the Scenario Wizard steps the tester through the process of selecting the workstations that will host the virtual users, as well as the test scripts to run. During this step-by-step process, testers also create simulation groups of virtual users. (The steps for creating a scenario are the same as those described in Step 4 of the load testing process above.)

Scheduler. The LoadRunner Scheduler is used to ramp the number of virtual users up and down and to move virtual users between the ready state and the running state. For example, the tester may want to gradually increase the load of users logging into a site in fixed batch sizes, with users waiting to start held in the ready state. This method is useful for avoiding unnecessary strain on the system.

The Scheduler also features an automated process that allows the user to run the script without being present. In real time this would be analogous to running a script during off-peak hours of Internet traffic (6:00 p.m. to 6:00 a.m.). To schedule a test, the user simply clicks the Run Scenario button and enters the desired starting time.

TurboLoad. TurboLoad is a patent-pending technology that provides for maximum scalability. TurboLoad can minimize CPU consumption for each virtual user, thereby enabling more users to run on each load generator machine. In recent customer benchmarks, using 10 Windows-based load servers (4 CPU, 500 MHz Xeon processors, 4 GB RAM), LoadRunner generated 3 billion Web hits per day against the Web system (or 3,700 hits/sec per machine). Moreover, the load generators were running at less than 40 percent CPU utilization.

TurboLoad also can generate more hits/sec for a given machine. LoadRunner’s replay
speed can thereby generate more throughput against the server using a minimum
amount of resources.
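As a rough sanity check of the benchmark figures quoted above (the arithmetic below is a back-of-the-envelope check, not an additional vendor result), 3,700 hits per second per machine across 10 machines over a 24-hour day works out to roughly 3.2 billion hits per day:

```python
# Back-of-the-envelope check of the benchmark above (arithmetic only, not a new result).
hits_per_sec_per_machine = 3_700
machines = 10
seconds_per_day = 24 * 60 * 60          # 86,400

hits_per_day = hits_per_sec_per_machine * machines * seconds_per_day
print(f"{hits_per_day:,} hits/day")     # about 3.2 billion, consistent with the quoted 3 billion/day
```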

Step 5: Running the Load Test Scenario and Monitoring the Performance
Once the scenario is built, the tester is ready to run the test. LoadRunner’s Controller provides a suite of performance monitors that can monitor each component of a multi-tier system during the load test. By capturing performance data over the entire system, testers can correlate this information with the end-user loads and response times in order to pinpoint bottlenecks. LoadRunner provides performance monitors for the network, network devices and the most common Web servers, application servers and database servers. The performance monitoring is done in a completely non-intrusive manner to minimize performance impact. Additionally, all of these monitors are hardware and OS independent, as no agents need to be installed on the remotely monitored servers.


Fig. 10. LoadRunner online monitors help identify and isolate performance bottlenecks in real time

LoadRunner supports a number of environments to provide for more accurate online monitoring, including:

Runtime Graphs: Virtual User Status, User Defined Data Points

Transaction Graphs: Response Time, Transactions (pass/fail)

Web Server Resource Graphs: Hits per Second, Throughput, Apache, MS IIS,
Netscape

System Resource Graphs: Server resource, SNMP, Tuxedo

Web Application Server Graphs: BroadVision, ColdFusion, MS Active Server Pages, SilverStream, WebLogic

Database Server Resource Graphs: SQL Server, Oracle

Step 6: Analyzing Results


Evaluating results is the most important step in the load testing process. Up to this point, the tester has been able to record and play back the actions of a real user with extreme precision while conducting multiple processes on the Web. In addition, the performance monitoring feature offers an accurate method for pinpointing bottlenecks while running the scripts. To fix these problems, testers can follow several steps. First, a specialist (e.g., a DBA, network engineer or consultant) makes the necessary adjustments to the system. Next, testers rerun the scripts to verify that the changes have had the desired effect. Last, a comparison of the before-and-after results enables the tester to measure how much the system has improved.

LoadRunner’s Analysis component provides a single, integrated environment that collectively gathers all the data generated throughout the testing cycle. Because this tool is powerful and easy to use, testers can create cross-scenario comparisons of the graphs and thereby enrich the data analysis process. For example, Figure 11a shows the results of a dot.com after testing the maximum number of concurrent users that its existing system can handle. Based on these results, the dot.com plans to improve its infrastructure to allow more user traffic. Figure 11b provides a comparison of a repeated test after adjustments had been made to the Web architecture to optimize server software.


Fig. 11a. Results of a dot.com using LoadRunner Analysis before making any adjustments to the system infrastructure. The site reached a peak of 4,000 Vusers before any performance degradation.

Fig. 11b. Results of the same company using LoadRunner Analysis after making enhancements to the performance of the system. The number of Vusers increased fivefold to a peak of 20,000.

LoadRunner Analysis provides advanced, high-level drill-down capabilities that enable testers to locate bottlenecks in these scenarios. In addition, LoadRunner Analysis uses a series of sophisticated graphs and reports that answer such questions as: What were the Web server’s CPU and memory utilization when the system was under a load of 5,000 concurrent users? How many total transactions passed or failed after the completion of the load test? How many hits per second did the Web server sustain? What were the average transaction times for each virtual user?

Below are sample graphs that LoadRunner Analysis provides testers as it solves complex
bottleneck issues.

• Running virtual users. Displays running virtual users during each second of a scenario.

• Rendezvous. Indicates when and how many virtual users were released at each rendezvous point.

• Transaction/sec (passed). Displays the number of completed, successful transactions performed per second.

• Transaction/sec (failed). Displays the number of incomplete, failed transactions performed per second.

Fig. 12. This activity graph displays the number of completed transactions (successful and unsuccessful) per-
formed during each second of a load test. This graph helps testers determine the actual transaction load on
their system at any given moment. The results show that after six minutes an application is under a load of
two hundred transactions per second.


LoadRunner provides a variety of performance graphs:

• Percentile. Analyzes the percentage of transactions that were performed within a given time range (a minimal calculation sketch follows this list).

• Performance under load. Indicates transaction times relative to the number of virtual users running at any given point during the scenario.

• Transaction performance. Displays the average time taken to perform transactions during each second of the scenario run.

• Transaction performance summary. Displays the minimum, maximum and average performance times for all the transactions in the scenario.

• Transaction performance by virtual user. Displays the time taken by an individual virtual user to perform transactions during the scenario.

• Transaction distribution. Displays the distribution of the time taken to perform a transaction.
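As referenced in the Percentile item above, a minimal Python sketch of that style of analysis follows: it computes the share of transactions that completed within a given response time. The response-time samples are invented for the example.

```python
# Sketch of a percentile-style analysis: the share of transactions that finished
# within a given response time. The response-time samples are invented.
def share_within(samples, threshold_s):
    return 100.0 * sum(1 for t in samples if t <= threshold_s) / len(samples)

response_times = [0.4, 0.7, 0.9, 1.1, 1.3, 1.8, 2.2, 2.9, 4.5, 7.8]   # seconds

for threshold in (1.0, 2.0, 5.0, 8.0):
    print(f"{share_within(response_times, threshold):5.1f}% of transactions within {threshold}s")
```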

Fig. 13. This performance graph displays the number of transactions that passed, failed, aborted or ended with errors. For example, these results show that the “Submit_Search” business process passed approximately 96 percent of its transactions.

Fig. 14. This performance graph displays the minimum, average and maximum response times for all the transactions in the load test. This graph is useful in comparing individual transaction response times in order to pinpoint where most of the bottlenecks of a business process are occurring. For example, the results of this graph show that the “FAQ” business process has an average transaction response time of 1.779 seconds, an acceptable figure in comparison to the other processes.


LoadRunner offers two types of Web graphs:

• Connections per second. Shows the number of connections made to the Web server
by virtual users during each second of the scenario run.

• Throughput. Shows the amount of throughput on the server during each second of
the scenario run.

Fig. 15. This Web graph displays the number of hits made on the Web server by Vusers during each second of the load test. This graph helps testers evaluate the amount of load Vusers generate in terms of the number of hits. For instance, the results provided in this graph indicate an average of 2,200 hits per second against the Web server.

Fig. 16. This Web graph displays the amount of throughput (in bytes) on the Web server during the load test.
This graph helps testers evaluate the amount of load Vusers generate in terms of server throughput. For
example, this graph reveals a total throughput of over 7 million bytes per second.

LoadRunner’s Analysis includes a Correlation of Results feature to enhance the analysis of the data. Correlation enables the tester to custom design a graph beyond the basics, using any two metrics. As a result, the tester can pinpoint and troubleshoot performance problems more quickly.
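As a rough illustration of correlating two metrics monitored during the same run, the Python sketch below computes a plain Pearson correlation between the number of running virtual users and server CPU utilization. Both series are invented, and this is only an approximation of what the Correlation of Results feature presents graphically.

```python
# Sketch of correlating two metrics monitored during the same run, e.g. running
# virtual users vs. server CPU utilization. Both series are invented.
from statistics import correlation      # requires Python 3.10+

running_vusers = [500, 1000, 2000, 4000, 6000, 8000, 10000]
server_cpu_pct = [8, 14, 27, 46, 61, 79, 93]

print(f"Pearson correlation: {correlation(running_vusers, server_cpu_pct):.3f}")
```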


Fig. 17. This graph correlates system behavior with the number of users, using a compilation of results from other graphs. This enables the tester to view CPU consumption relative to the total number of users.

Summary
In a short time, e-business has proven to be a viable business model for dot.coms and brick-and-mortars alike. With the number of Internet users growing exponentially, it is critical for these businesses to prepare themselves for high user volumes. Today’s businesses can leverage load testing practices and tools to ensure that Web application performance keeps pace with end-user demand. Moreover, by using automated load testing tools, businesses can quickly and cost-effectively assess the performance of applications before they go live, as well as analyze their performance after deployment. As a result, businesses can confidently stay one step ahead of performance issues and focus on initiatives to drive Web traffic and revenues.

Mercury Interactive’s LoadRunner is the leading tool for predicting scalability, reliability and performance issues of an e-business application; identifying system bottlenecks; and displaying results. LoadRunner emulates various types of transactions using a highly scalable number of virtual users. This is essential for understanding an application’s limitations while planning for growth and reducing business risk. LoadRunner also tests system behavior under real-life conditions and converts this data into easy-to-use, yet sophisticated, graphs and reports. With this information, businesses can more quickly and efficiently resolve problems, thereby ensuring a positive end-user experience and providing the opportunity for increased revenue.


About Mercury Interactive


Mercury Interactive Corporation is the worldwide leader in Web performance management solutions. Mercury Interactive solutions turn Web application performance, scalability and user experience into competitive advantage. The company’s performance management products and hosted services are open and integrated to best test and monitor business-critical Web applications.

Together with our world-class partners and award-winning service and support, Mercury
Interactive provides the industry’s best Web application performance solutions capable
of supporting both traditional and wireless e-business applications.

More than 10,000 e-commerce customers, Internet service providers, application service providers, systems integrators and consultants use Mercury Interactive solutions. Mercury Interactive is headquartered in Sunnyvale, California, and has 40 offices worldwide. For more information on Mercury Interactive, visit our Web site at www.mercuryinteractive.com.

Topaz, TurboLoad and ContentCheck are trademarks, and LoadRunner, Mercury Interactive and the Mercury Interactive logo are registered trademarks of Mercury Interactive Corporation. All other company, brand and product names are marks of their respective holders.
© 2000 Mercury Interactive Corporation. Patents pending. All rights reserved. 384-BR-LOADTEST

