group, timer, and HTTP sampler elements. This article complements the JMeter User's Manual and provides guidelines for using some of the JMeter modeling elements to develop a quality test script. It also addresses an important issue in a larger context: specifying precise response-time requirements and validating test results. Specifically, a rigorous statistical method, confidence interval analysis, is applied. Please note that I assume readers know the basics of JMeter. This article's examples are based on JMeter 2.0.3.

Determine a thread group's ramp-up period

The first ingredient in your JMeter script is a thread group, so let's review it first. As shown in Figure 1, a Thread Group element contains the following parameters:
• Number of threads
• The ramp-up period
• The number of times to execute the test
• When started, whether the test runs immediately or waits until a scheduled time (if the latter, the Thread Group element must also include the start and end times)
Figure 1. JMeter Thread Group. Click on thumbnail to view full-sized image.

Each thread executes the test plan independently of other threads. Therefore, a thread group is used to model concurrent users. If the client machine running JMeter lacks enough computing power to model a heavy load, JMeter's distributed testing feature allows you to control multiple remote JMeter engines from a single JMeter console.

The ramp-up period tells JMeter the amount of time to take to create the total number of threads. The default value is 0. If the ramp-up period is left unspecified, i.e., the ramp-up period is zero, JMeter creates all the threads immediately. If the ramp-up period is set to T seconds, and the total number of threads is N, JMeter creates a thread every T/N seconds.

Most of a thread group's parameters are self-explanatory, but the ramp-up period is a bit weird, since the appropriate number is not always obvious. For one thing, the ramp-up period should not be zero if you have a large number of threads. At the beginning of a load test, if the ramp-up period is zero, JMeter creates all the threads at once and sends out requests immediately, thus potentially saturating the server and, more importantly, deceptively increasing the load. That is, the server could become overloaded, not because the average hit rate is high, but because you send all the threads' first requests simultaneously, causing an unusual initial peak hit rate. You can see this effect with a JMeter Aggregate Report listener. Since this anomaly is not desirable, the rule of thumb for determining a reasonable ramp-up period is to keep the initial hit rate close to the average hit rate. Of course, you may need to run the test plan once before discovering a reasonable number.
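The ramp-up arithmetic above can be sketched in a few lines. This is an illustrative calculation only, not JMeter code, and the helper names are hypothetical:

```python
# Illustrative sketch of JMeter's ramp-up arithmetic (not JMeter code).
# With N threads and a ramp-up period of T seconds, JMeter starts one
# thread every T/N seconds.

def thread_start_times(num_threads: int, ramp_up_seconds: float) -> list[float]:
    """Offsets (in seconds) at which each thread is started."""
    interval = ramp_up_seconds / num_threads
    return [i * interval for i in range(num_threads)]

def estimated_ramp_up(num_threads: int, avg_hit_rate: float) -> float:
    """Rule of thumb: keep the initial hit rate close to the average
    hit rate by spreading N thread starts over N/rate seconds."""
    return num_threads / avg_hit_rate

# 100 threads at an estimated 10 hits/second -> 10-second ramp-up,
# i.e., one new thread every 0.1 seconds.
print(estimated_ramp_up(100, 10))   # 10.0
print(thread_start_times(4, 10))    # [0.0, 2.5, 5.0, 7.5]
```

The same rule of thumb appears in the verification steps below: the guessed hit rate only seeds the first run, after which you adjust the ramp-up period against measured hit rates.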
By the same token, a large ramp-up period is also not appropriate, since the peak load may be underestimated. That is, some of the threads might not have even started while some initial threads have already terminated. So how do you verify that the ramp-up period is neither too small nor too large?

First, guess the average hit rate and then calculate the initial ramp-up period by dividing the number of threads by the guessed hit rate. For example, if the number of threads is 100, and the estimated hit rate is 10 hits per second, the estimated ideal ramp-up period is 100/10 = 10 seconds. How do you come up with an estimated hit rate? There is no easy way; you just have to run the test script once first.

Second, add an Aggregate Report listener, shown in Figure 2, to the test plan; it contains the average hit rate of each individual request (JMeter samplers). The hit rate of the first sampler (e.g., an HTTP request) is closely related to the ramp-up period and the number of threads. Adjust the ramp-up period so the hit rate of the test plan's first sampler is close to the average hit rate of all other samplers.

Figure 2. JMeter Aggregate Report. Click on thumbnail to view full-sized image.

Third, verify in the JMeter log (located in JMeter_Home_Directory/bin) that the first thread that finishes does indeed finish after the last thread starts. The time difference between the two should be as far apart as possible.

In summary, the determination of a good ramp-up time is governed by the following two rules:

• The first sampler's hit rate should be close to the average hit rate of other samplers, thereby preventing a small ramp-up period
• The first thread that finishes does indeed finish after the last thread starts, preferably as far apart as possible, thereby preventing a large ramp-up period

Sometimes the two rules conflict with each other; that is, you simply cannot find a suitable ramp-up period that passes both rules. A trivial test plan usually causes this problem, because, in such a plan, you lack enough samplers for each thread; thus, the test plan is too short, and a thread quickly finishes its work.

User think time, timer, and proxy server

An important element to consider in a load test is the think time, or the pause between successive requests. Various circumstances cause the delay: the user needs time to read the content, or to fill out a form, or to search for the right link. Failure to properly consider think time often leads to seriously biased test results. For example, the estimated scalability, i.e., the maximum load (concurrent users) that the system can sustain, will appear low.

JMeter provides a set of timer elements to model the think time, but a question still remains: how do you determine an appropriate think time? Fortunately, JMeter offers a good answer: the JMeter HTTP Proxy Server element. The proxy server records your actions while you browse a Web application with a normal browser (such as Firefox or Internet Explorer). In addition, JMeter creates a test plan when recording your actions. This feature is extremely convenient for several purposes.
Before starting the HTTP proxy server, you should add a thread group to the test plan and then, to the thread group, add a recording controller, where the generated elements will be stored. Otherwise, those elements will be added to WorkBench directly. In addition, it is important to add an HTTP Request Defaults element (a Configuration element) to the recording controller, so that JMeter will leave blank those fields specified by the HTTP request defaults.

Add a Gaussian random timer to the HTTP Proxy Server element, so that a timer is generated for each recorded sampler. After the recording, you should manually remove the first sampler's generated timer, since the first sampler usually does not need one. Note that a timer causes the affected samplers to be delayed. That is, the affected sampling requests are not sent before the specified delay time has passed since the last received response.

Figure 4. Click on thumbnail to view full-sized image.

After the recording, stop the HTTP proxy server, and right-click the Recording Controller element to save the recorded elements in a separate file so you can retrieve them later. Don't forget to resume your browser's proxy server setting.

Specify response-time requirements and validate test results

Although not directly related to JMeter, specifying response-time requirements and validating test results are two critical tasks for load testing, with JMeter being the bridge that connects them. In the context of Web applications, response time refers to the time elapsed between the submission of a request and the receipt of the resulting HTML. Technically, response time should include time for the browser to render the HTML page, but a browser typically displays the page piece by piece, making the perceived response time less. In addition, a load-test tool typically calculates the response time without considering rendering time. Therefore, for practical purposes of performance testing, we adopt the definition described above. If in doubt, add a constant, say 0.5 seconds, to the measured response time.

There is a set of well-known rules for determining response-time criteria:

• Users do not notice a delay of less than 0.1 second
• A delay of less than 1 second does not interrupt a user's flow of thought, but some delay is noticed
• Users will still wait for the response if it is delayed by less than 10 seconds
• After 10 seconds, users lose focus and start doing something else

These thresholds are well known and won't change, since they are directly related to the cognitive characteristics of humans. Though you should set your response-time requirements in accordance with these rules, you should also adjust them for your particular application. For example, Amazon.com's homepage abides by the rules above, but because it prefers a more stylistic look, it sacrifices a little response time.

At first glance, there appear to be two different ways to specify response-time requirements:

• Average response time
• Absolute response time; that is, the response times of all responses must be under the threshold

Specifying average response-time requirements is straightforward, but the fact that this requirement fails to take into account data variation is disturbing. What if the response time of 20 percent of the samples is more than three times the average? Note that JMeter calculates the average response time as well as the standard deviation for you in the Graph Results listener.
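To see why an average alone can mislead, consider two hypothetical sets of response-time samples with the same mean but very different spreads. The numbers below are invented purely for illustration:

```python
# Two invented response-time samples (seconds) with the same average
# but very different variation; the average alone cannot tell them apart.
import statistics

steady = [2.9, 3.0, 3.1, 3.0, 2.9, 3.1, 3.0, 3.0]
spiky  = [0.5, 0.4, 9.0, 0.6, 9.2, 0.5, 3.4, 0.4]

# Both means are 3.0 seconds, yet the second set would feel far slower
# to many users, which only the standard deviation reveals.
print(statistics.mean(steady), statistics.stdev(steady))
print(statistics.mean(spiky),  statistics.stdev(spiky))
```

A requirement stated only as "average response time under 3 seconds" would accept both data sets, including the one where several responses approach 10 seconds.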
On the other hand, the absolute response-time requirement is quite stringent and statistically not practical. What if only 0.5 percent of the samples failed to pass the tests? Again, this is related to sampling variation. Fortunately, a rigorous statistical method does consider sampling variation: the confidence interval analysis. Let's review basic statistics before going further.

The central limit theorem

The central limit theorem states that if the population distribution has mean μ and standard deviation σ, then, for sufficiently large n (>30), the sampling distribution of the sampling mean is approximately normal, with mean μ_mean = μ and standard deviation σ_mean = σ/√n. Note that the distribution of the sampling mean is normal, while the distribution of the sampling itself is not necessarily normal. That is, if you run your test script many times, the distribution of the resulting average response times will be normal.

Figures 5 and 6 below show two normal distributions. In our context, the horizontal axis is the sampling mean of response time, shifted so the population mean is at the origin. Figure 5 shows that 90 percent of the time, the sampling means are within the interval ±Zσ, where Z=1.645 and σ is the standard deviation. Figure 6 shows the 99-percent case, where Z=2.576.

Figure 5. Z value for 90 percent
Figure 6. Z value for 99 percent

A few Websites for normal curve calculation are listed in Resources. Note that in those sites, we can calculate the probability of either a symmetric bounded region (e.g., -1.5 < X < 1.5) or a cumulated area (e.g., X < 1.5). For a given probability, we can look up the corresponding Z value with a normal curve and vice versa. You may also look up approximate values from the tables below.

Table 1. Standard deviation range corresponding to a given confidence interval

Confidence Interval    Z
0.800                  ±1.28155
0.900                  ±1.64485
0.950                  ±1.95996
0.990                  ±2.57583
0.995                  ±2.80703
0.999                  ±3.29053

Table 2. Confidence interval corresponding to given standard deviation

Z    Confidence Interval
1    0.6826895
2    0.9544997
3    0.9973002
4    0.9999366
5    0.9999994

Confidence interval

The confidence interval is defined as [sampling mean - Z*σ/√n, sampling mean + Z*σ/√n]. For example, if the confidence interval is 90 percent, we can look up the Z value to be 1.645, and the confidence interval is [sampling mean - 1.645*σ/√n, sampling mean + 1.645*σ/√n], which means that 90 percent of the time, the (unknown) population mean is within this interval. That is, our measurement is "close." Note that if σ is larger, the confidence interval will be larger, which means that it is more likely that the upper bound of the interval will exceed an acceptable value; that is, if σ is larger, it is more likely that the result is not acceptable.

Note that in the context of Web applications, to measure a scenario's response time, we typically need to instruct the load-testing tool to send multiple requests, for example:

1. Login
2. Display a form
3. Submit the form

Assume we are interested in Request 3. To conduct a confidence interval analysis, we need the average response time and the standard deviation of all of Request 3's samples. JMeter's Aggregate Report listener calculates the average response time of individual samplers for you, but, unfortunately, does not give the standard deviation. JMeter's Graph Results listener calculates the average response time and standard deviation over all requests together, not for an individual request.

Response-time requirements

Let's translate all this information into response-time requirements. For example, you can define the performance requirements like so: The upper bound of the 95-percent confidence interval of the average response time must be less than 5 seconds. Of course, you must add loading requirements and specify a particular scenario as well.

Now, after the performance tests, suppose you analyze the results and discover that the average response time is 4.5 seconds, while the standard deviation is 4.9 seconds. The sample size is 120. You then calculate the 95-percent confidence interval. By looking in Table 1, you find the Z value is 1.95996. Therefore the confidence interval is [4.5 - 1.95996*4.9/√120, 4.5 + 1.95996*4.9/√120], which is [3.62, 5.38]. As you can see, the result is not acceptable, even though the average response time looks pretty good. In fact, you can verify that the result is not acceptable even for an 80-percent confidence interval.
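The worked example can be checked with a short calculation. This is a minimal sketch of the confidence interval formula [mean - Z*σ/√n, mean + Z*σ/√n], using the article's numbers (average 4.5 s, standard deviation 4.9 s, 120 samples):

```python
# Confidence interval for the mean: [mean - Z*stdev/sqrt(n), mean + Z*stdev/sqrt(n)].
# Numbers taken from the worked example (Request 3's samples).
import math

def confidence_interval(mean: float, stdev: float, n: int, z: float):
    half_width = z * stdev / math.sqrt(n)
    return (mean - half_width, mean + half_width)

mean, stdev, n = 4.5, 4.9, 120
z95 = 1.95996                           # from Table 1, 95-percent confidence
low, high = confidence_interval(mean, stdev, n, z95)
print(round(low, 2), round(high, 2))    # 3.62 5.38

# Even at 80-percent confidence (Z = 1.28155), the upper bound
# still exceeds the 5-second requirement.
_, high80 = confidence_interval(mean, stdev, n, 1.28155)
print(round(high80, 2) > 5)             # True
```

Since the requirement was an upper bound below 5 seconds, 5.38 fails the 95-percent test, and even the looser 80-percent interval still fails.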
What if the average response time is acceptable, but your confidence interval is only 75 percent? Most likely, you cannot accept the result. In summary, specifying the requirement of average response times alone is dangerous, since it says nothing about data variation. Applying the confidence interval analysis, however, gives you much more certainty.

Conclusion

In this article, I have discussed:

• A fine point of specifying loads with the JMeter Thread Group element
• Guidelines for creating a JMeter test script automatically using the JMeter Proxy Server element, with emphasis on modeling user think time
• Confidence interval analysis, a statistical method that we can leverage to specify better response-time requirements

You can improve the quality of your JMeter scripts with the techniques described in this article. From a larger viewpoint, what I have discussed is really part of a performance testing workflow, which differs from an ordinary functional testing workflow. A performance testing workflow includes, but is not limited to, the following activities:

• Developing performance requirements
• Selecting testing scenarios
• Preparing the environment for testing
• Developing test scripts
• Performing tests
• Reviewing test scripts and test results
• Identifying bottlenecks
• Writing test reports

In addition, the performance test results, including the identified bottlenecks, are fed back to the development team or to an architect for additional optimization design. During this process, developing quality test scripts and reviewing test scripts are probably the trickiest parts and really need careful management. Armed with test-script writing guidelines and a good performance testing workflow, you will have a much better chance of optimizing the performance of your software under heavy loads.