Abstract—As e-commerce services grow exponentially, businesses need quantitative estimates of client-perceived response times to continuously improve the quality of their services. Current server-side nonintrusive measurement techniques are limited to nonsecured HTTP traffic. In this paper, we present the design and evaluation of a monitor, namely sMonitor, which is able to measure client-perceived response times for both HTTP and HTTPS traffic. At the heart of sMonitor is a novel size-based analysis method that parses live packets to delimit different webpages and to infer their response times. The method is based on the observation that most HTTP(S)-compatible browsers send significantly larger requests for container objects than for embedded objects. sMonitor is designed to operate accurately in the presence of complicated browser behaviors, such as parallel downloading of multiple webpages and HTTP pipelining, as well as packet losses and delays. It requires only passive collection of network traffic in and out of the monitored secured services. We conducted comprehensive experiments across a wide range of operating conditions using live secured Internet services, on the PlanetLab, and on controlled networks. The experimental results demonstrate that sMonitor is able to keep the estimation error within 6.7 percent, compared with the actual time measured at the client side.
Index Terms—Client-perceived service quality, monitoring and measurement, pageview response time, secured Internet services.
1 INTRODUCTION
Furthermore, in SSL protocols, the padding added to an HTTP message prior to the encryption operation is the minimum amount required so that the total size of the data to be encrypted is a multiple of the cipher's block size. In contrast, TLS protocols define a random padding mechanism so that the padding can be any amount that results in a total that is a multiple of the cipher's block size, up to a maximum of 255 bytes. It aims to frustrate attacks based on a size analysis of exchanged messages. From Fig. 5, we can see that the random padding is not implemented in IE 6.0 and 7.0 RC 1 for TLS protocols. We can draw the same conclusion for Firefox.

The compression operation can change the size of an HTTP request. Such a change might affect the accuracy of the size-based analysis method presented in Section 3.1. From Fig. 5, however, we observe that the compression operation is not performed. This is because no default compression algorithm is specified in SSL/TLS protocols.

In summary, we can determine the size of an HTTP request from the corresponding HTTPS request exactly in the case of stream ciphers, such as RC4, or to within a block-size difference in the case of block ciphers, such as 8 bytes for DES and 3DES.

4 EVALUATION METHODOLOGY

We conducted experiments under a wide range of operating conditions using live secured Internet services, on the PlanetLab, and on controlled networks to evaluate the accuracy of sMonitor. We evaluated the accuracy in three aspects. The first is the ability to accurately infer pageview response times perceived by clients. The second is the ability to correctly delimit different webpages. The third is the measurement lag from the time when the last object of a page is delivered to the time the page is delimited by sMonitor. The measurement lag is mainly determined by when the request for the container object of the next page arrives and by the timeout mechanism used in sMonitor. For example, when the response of the last object in a page is retrieved, sMonitor cannot decide whether the page is completely downloaded until it detects the arrival of requests for another page or the timeout period has passed. Measurement lag also exists in other response-time monitors, such as ksniffer [21] and EtE [5]. For example, ksniffer used a timeout mechanism or the arrival of a request for a container object to identify the end of a page retrieval. There exist time gaps between the ends of page retrievals and their identifications.

In the experiments, sMonitor captured network traffic in and out of the monitored services, analyzed packets, and then inferred pageview response times. We used an Apache webserver with the support of OpenSSL to provide secured Internet services. In addition, the Apache webserver also supports HTTP/1.1.

We used SURGE [2] to generate a set of 2,000 unique emulated web objects. Similar to [21], we also made minor changes to SURGE to reflect more recent work on web traffic characterizations [9], [30]. That is, the maximum number of embedded objects in a given page was set to 100 instead of 150, and the percentages of base, embedded, and loner objects were changed from 30 percent, 38 percent, and 32 percent to 42 percent, 48 percent, and 10 percent, respectively. In total, there were 1,041 container objects and 959 embedded objects.

Furthermore, we made the following enhancements to SURGE to mimic the behaviors of real-world browsers:

1. To support secured Internet services, we updated SURGE with OpenSSL 0.9.8 (www.openssl.org).
2. We enhanced SURGE to mimic the behaviors of IE and Firefox so that requests for container objects may have large Accept headers.
3. We updated SURGE to support parallel downloading by establishing another two persistent TCP connections for page retrievals.
4. We added support for HTTP pipelining to SURGE by following the default behaviors of Firefox, since it is the most widely used browser that has implemented HTTP pipelining.
5. We followed HTTP/1.1 in setting SURGE to use two parallel persistent TCP connections to retrieve webpages and their embedded objects.
6. In the enhanced SURGE, each user equivalent (UE) binds to a unique IP address using IP aliasing on the client machines. This makes each client machine appear to the server as a collection of unique clients.
7. We updated SURGE so that each UE sends requests for a page's embedded objects only after the container object is received.
8. The enhanced SURGE also records the retrieval time of every webpage as the client-perceived response time.

In experiments with live secured Internet services, we used IE and Firefox to retrieve 44 webpages from several different Internet services, including online banking sites. We recorded their retrieval times manually. Because it was infeasible for us to deploy sMonitor near those servers, we placed sMonitor on the client side to measure response times. We believe that such a placement of sMonitor has little effect on our accuracy evaluation since the key to the measurement is the identification of the retrieval beginnings and ends of individual pages [5]. In the experiment, we also considered network transfer times from the client to the servers to simulate measuring from the server side, and used sMonitor to handle the variance of RTT during the experiment.

We also conducted experiments on the PlanetLab to evaluate sMonitor's accuracy in a real-world environment. In these experiments, clients resided on nine geographically diverse nodes in Cambridge, Massachusetts; San Diego, California; and Cambridge, United Kingdom. The webserver was set up in Detroit, Michigan. It was a Dell PowerEdge 2450 configured with dual processors (1 GHz Pentium III) and 512 MB main memory. We connected the server to the Internet via a 100 Mbps network card. During these experiments, the RTTs between the server and the clients were around 45 ms (Cambridge), 70 ms (San Diego), and 130 ms (United Kingdom). One issue in the experiments on the PlanetLab was that we could only simulate nine clients using the nine nodes, because we could not use IP aliasing on these nodes without root privileges. To make the experiment environment more realistic, we ran SURGE on other machines to simulate another 100 clients accessing the service at the same time.
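The size recovery summarized above can be sketched in a few lines. The function below is our illustration, not sMonitor's code; it assumes MAC-then-encrypt with a 20-byte HMAC-SHA1 MAC and SSL-style minimal padding, and all names and defaults are ours.

```python
def estimate_http_request_size(record_len, cipher="rc4", mac_len=20, block=8):
    """Estimate the HTTP request size carried in one SSL/TLS record.

    record_len is the encrypted payload length in bytes. Returns a
    (low, high) range. Assumes MAC-then-encrypt with HMAC-SHA1; these
    parameters are illustrative assumptions, not sMonitor's settings.
    """
    if cipher == "rc4":
        # Stream cipher: no padding, so the plaintext size is exact.
        exact = record_len - mac_len
        return (exact, exact)
    # Block cipher (e.g., DES/3DES with 8-byte blocks): minimal padding
    # adds between 1 and `block` bytes (including the pad-length byte),
    # so the plaintext lies within one block of the upper bound.
    high = record_len - mac_len - 1
    low = high - (block - 1)
    return (low, high)
```

For example, a 520-byte RC4 record yields an exact 500-byte request, while a 528-byte 3DES record yields a request of between 500 and 507 bytes.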
WEI AND XU: MEASURING CLIENT-PERCEIVED PAGEVIEW RESPONSE TIME OF INTERNET SERVICES 779
TABLE 1
The Summary of Experiment Results
Page splits and page merges are the two types of false page delimitations.
To further evaluate sMonitor's accuracy under different operating conditions, we implemented a network simulator similar to [29] and Dummynet [27] to simulate wide-area network conditions. In these experiments, two machines were used as clients and one as the network simulator. They had the same hardware configuration as the server and were connected by a 100 Mbps Ethernet. We changed the network routing in the server and client machines so that the packets between them were sent to the simulator. Upon receiving a packet, the simulator routed the packet to an "ethertap" device. A small user-space program read the packet from the "ethertap" device, delayed or dropped it according to the settings, and wrote it back to the device. The packet was then routed to the Ethernet. The simulator was shown to be effective in simulating wide-area network delays and packet losses. For example, with the RTT set to 180 ms, ping reported round trips of around 182 ms.

For the experiments in the controlled environments, we set the RTT between the clients and the server to 40, 80, or 180 ms. These values represent the transmission latency within the continental US, the latency between the east and west coasts of the US, and the latency between the US and Europe, respectively [28]. Similar to [21], we set the packet-loss rate to 2 percent. The number of UEs was set to 100. Each experiment lasted 20 min. Because the results showed no clear trend of change over the increase or decrease of the time window size, we consider the results robust in this regard.

5 EXPERIMENTAL RESULTS

5.1 Measurement Accuracy on Average

We conducted comprehensive evaluations under different network and traffic conditions and compared sMonitor's measurements with those obtained by the enhanced SURGE running on the client machines. We changed the settings in SURGE to mimic different browser behaviors. We varied the percentage of requests for container objects that have large Accept headers. Although the combined market share of IE and Firefox is around 93 percent [18], we set the lower bound to 80 percent to investigate the accuracy of sMonitor in environments where a nonnegligible portion of clients uses browsers with different implementations of the Accept header. For example, in Safari, all requests have the same Accept headers. We also changed the percentage of parallel downloaded pages, varying it between zero and 15 percent; we investigate its effect on sMonitor's accuracy in this section. In addition, we varied the percentage of pipelined HTTP requests to simulate environments where some users of Firefox may change the HTTP pipelining option from the default off to on. We also examine its effect on the accuracy of sMonitor in this section.

Table 1 summarizes the experimental results. Experiments B and F were conducted on the PlanetLab and the others on the controlled networks. Because the results on the controlled networks with different RTTs were similar, in Table 1 we only present the experiments with the RTT of 180 ms for simplicity. In all test cases, we observed measurement errors no larger than 6.7 percent, and the absolute measurement errors were always smaller than 210 ms. When large Accept headers were not used in all requests for container objects, the sMonitor-measured response times were always larger than the client-perceived ones. This is because sMonitor is unable to perfectly delimit the beginning and end of every page retrieval due to the lack of a size difference between requests for container and embedded objects. In some cases, several pages might be falsely identified as one page, resulting in an estimate of a larger response time. On the other hand, the sMonitor measurements could become smaller than the client-perceived ones when the effects of parallel downloading and HTTP pipelining become dominant, as we discuss in Section 5.3.

Note that in Table 1, we evaluate the accuracy of sMonitor via the measurement errors averaged over all retrieved pages in the 20-min experiments. We also investigated the transient behaviors of sMonitor. Fig. 6 shows the average response times measured by sMonitor in experiment A at different time scales, and compares them with those measured from the client sides. For brevity, we only present the results in the period from the 200th to the 400th second.
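The delay-and-drop loop of the user-space simulator described in this section can be approximated by a toy model. The function below is our illustration, not the actual simulator; it drops each packet with the configured loss probability and otherwise forwards it with a one-way delay of half the RTT.

```python
import random

def simulate_link(packets, rtt_ms=180, loss_rate=0.02, seed=1):
    """Toy model of the user-space delay/drop program: each packet is
    dropped with probability loss_rate and otherwise delivered with a
    one-way delay of rtt_ms / 2. Purely illustrative; the defaults
    mirror the 180 ms RTT and 2 percent loss used in the experiments."""
    rng = random.Random(seed)  # fixed seed for a reproducible run
    delivered = []
    for pkt in packets:
        if rng.random() < loss_rate:
            continue                          # packet dropped
        delivered.append((pkt, rtt_ms / 2))   # (payload, delay in ms)
    return delivered
```

Running it over 1,000 packets delivers roughly 98 percent of them, each with a 90 ms one-way delay.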
780 IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 22, NO. 5, MAY 2011
Fig. 6. Comparisons of average response times for experiment A. (a) Average for every 5 s. (b) Average for every 1 s.
The results show that most measurement errors are less than 2 s for the 1-s interval results. This further indicates that sMonitor is accurate in measuring client-perceived response times. Comparing the 1-s and 5-s interval results, we can observe that the 5-s interval results have smaller errors (0.473 s) than the 1-s interval results (1.136 s). This is because the larger averaging interval can decrease the effect of the measurement lag on the accuracy of sMonitor. Such a result also demonstrates that it is necessary to include the measurement lag in assessing the accuracy of sMonitor.

5.2 Measurement Accuracy of Individual Pages

We further investigated the accuracy of sMonitor by comparing its measured response times of individual pages against those measured from the client sides. Fig. 7 presents the scatter plots, which we used to determine the relationship between the two measurements. Fig. 7a shows the comparison results for the experiments conducted using live secured Internet services. In Fig. 7b, we present the measurement comparisons of experiment B for requests sent by SURGE running on the nodes on the PlanetLab. Fig. 7c shows the comparison results of experiment A. From these figures, we can observe that there is a strong linear relationship between the two measurements. Such a linear relationship indicates that the sMonitor-measured response times are very close to those measured from the client sides. Indeed, the stronger the linear relationship between the measurements, the more accurate sMonitor is. This is also confirmed by Fig. 7d, which presents the error distributions of individual pages for the experiments conducted on the PlanetLab and the controlled testbed. From Fig. 7d, we can observe that the measurement errors for most individual pages are close to zero.

Furthermore, the correlation coefficients of the results from the PlanetLab and the controlled networks are 0.96 and 0.98, respectively. Notice that the closer the correlation coefficient is to 1 (a perfect monitor where every page is measured accurately), the more accurate sMonitor is. These results demonstrate that sMonitor can accurately measure client-perceived response times in different environments.

Notice that the measured response time of one client is not affected by those of other clients. This is because sMonitor measures the response time of each client separately by delimiting its own page requests. Moreover, from Fig. 2 we can observe that changes of network characteristics during one page retrieval do not affect the accuracy of sMonitor significantly as long as the requests for container objects are identified correctly. For example, assume that the packet-loss rate changes during the retrieval of a page and causes the request for embedded object i to be retransmitted several times. sMonitor treats these retransmissions as requests for different embedded objects rather than the same object. The accuracy of the measured client-perceived response time is, however, not affected.

5.3 Measurement Accuracy in Various Browser Settings

To further investigate the accuracy of sMonitor in different operating environments, we conducted three
Fig. 7. Response time comparisons for individual pages. (a) Live secured Internet services. (b) On the PlanetLab. (c) On the controlled
environments. (d) Error distributions.
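The correlation coefficients used in this comparison are standard Pearson coefficients over paired response times (sMonitor's estimate vs. the client-side ground truth). As a minimal sketch, with the function name ours:

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length
    series, e.g., per-page response times measured by the monitor and
    by the client. Returns a value in [-1, 1]; 1 means every page is
    measured on a perfectly linear relationship with the ground truth."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A perfectly proportional pair of series yields 1.0, which is why values of 0.96 and 0.98 indicate close agreement between the two measurements.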
Fig. 13. Behaviors of the measurement lags inferred from comparisons of page delimitations. (a) Average for every 5 s. (b) Average for every 1 s.
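The delimitation rule behind these measurement lags (a page is closed when the next container-object request arrives or when a timeout expires) can be sketched as follows. The threshold and timeout values, and all names, are hypothetical illustrations, not sMonitor's actual settings.

```python
def delimit_pages(requests, size_threshold=300, timeout=5.0):
    """Sketch of size-based page delimitation. `requests` is a list of
    (arrival_time_s, request_size_bytes) pairs in arrival order. A large
    request is treated as a container-object request starting a new page;
    a page also closes if no request arrives within `timeout` seconds.
    Both parameters are hypothetical, for illustration only."""
    pages, current, last_time = [], None, None
    for t, size in requests:
        is_container = size >= size_threshold
        timed_out = last_time is not None and t - last_time > timeout
        if current is not None and (is_container or timed_out):
            pages.append(current)            # close the previous page
            current = None
        if is_container:
            current = {"start": t, "end": t}  # new page begins
        elif current is not None:
            current["end"] = t                # embedded object of the page
        last_time = t
    if current is not None:
        pages.append(current)
    return pages
```

In this sketch, the lag is visible directly: the first page's end time is only fixed once the second container request (or the timeout) is observed.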
traffic in and out of monitored services. Server-instrumentation approaches track the request arrivals and response departures at the application level or at the kernel level. Application-level server instrumentations, such as [1], provide an easy way to obtain information regarding Internet service transactions. They, however, do not consider network delays incurred during TCP connection establishment or kernel waiting times of client requests. In [20], the authors showed that application-level approaches could underestimate response times by more than an order of magnitude. Kernel-level approaches, such as [20], overcome these limitations. They, however, measure service performance at a per-connection level. The measured results differ from what is perceived by end users due to the wide usage of parallel and persistent connections.

Traffic-analysis approaches, including [5], [21], decode packets up to the HTTP layer to identify the beginning and the end of each HTTP transaction. Since passive server-side monitors have a detailed view of the monitored system, they provide the most accurate performance characteristics of servers. Moreover, they observe actual service traffic and therefore provide a measurement of the experiences of all real clients. They can be deployed easily and run without interfering with a server's operation. Due to the unavailability of HTTP headers, however, none of these approaches can measure response times for secured Internet services.

Log-analysis approaches, such as [15], estimate the response time of a page from the serving times of the container object and the last embedded object. This approach, however, suffers the same shortcoming as application-level server instrumentation since it does not take network delays into account. Moreover, with dynamically generated webpages, it is difficult to determine which embedded object belongs to which container object without the help of the referrer field.

Server-side monitoring methods, including sMonitor and ksniffer, share the limitation that they are unable to measure latencies before clients send packets to servers. They are also limited in measuring the latencies of requests handled by intermediate components between clients and servers, such as browser caches and proxies.

Other work on network traffic analysis includes the passive packet monitor used on the AT&T network [6] and the inference of HTTP characteristics from TCP/IP packet headers [30]. These studies discussed many challenges, such as TCP connection reconstruction, that we also faced in the design and implementation of sMonitor.

7 CONCLUSIONS

We have designed, implemented, and evaluated sMonitor, a monitor that can determine client-perceived pageview response times for secured Internet services without decrypting HTTPS messages. Since sMonitor passively collects network traffic in and out of the monitored services, it requires no changes to any part of the services or clients and can be deployed easily. It measures the response times nonintrusively using the novel size-based analysis method on HTTP requests to characterize client accesses and delimit different pages from live network traffic.

We have implemented sMonitor as a stand-alone application in the user space. We have conducted comprehensive evaluations of its accuracy using live Internet services, on the PlanetLab, and on controlled networks. Our results demonstrate the unique ability of sMonitor to infer client-perceived pageview response times accurately. More importantly, the measurement is obtained in the presence of complicated browser behaviors, such as parallel downloading and HTTP pipelining, as well as packet losses and delays.

We note that sMonitor must be deployed in front of a website so as to capture all the traffic in and out of the website. In many applications, however, the objects of a webpage may be located or generated at geographically distributed websites. How to measure the client-perceived pageview response time for such webpages deserves further study. Recent studies, such as Link Gradients [4] and WISE [32], made a step forward by developing ways to estimate the end-to-end request/response transaction time of distributed applications under changing network conditions. On the client side, there may be a firewall, NAT-enabled router, or proxy that hides clients from the server by changing the request sources to the proxy. In this case, sMonitor would measure the response time to the proxy instead. sMonitor treats the requests from different clients as parallel downloading, and the results remain instructive for performance diagnosis and QoS provisioning.

ACKNOWLEDGMENTS

The authors would like to thank the anonymous reviewers for their constructive comments and suggestions. This research was supported in part by US National Science Foundation (NSF) grants DMS-0624849, CNS-0702488, CRI-0708232, CNS-0914330, and CCF-1016966. The original idea of the size-based analysis method appeared in [34].

REFERENCES

[1] J. Almeida, M. Dabu, A. Manikutty, and P. Cao, "Providing Differentiated Levels of Service in Web Content Hosting," Proc. ACM SIGMETRICS Workshop Internet Server Performance, pp. 91-102, June 1998.
[2] P. Barford and M. Crovella, "Generating Representative Web Workloads for Network and Server Performance Evaluation," Proc. ACM SIGMETRICS, pp. 151-160, June 1998.
[3] N. Bhatti, A. Bouch, and A. Kuchinsky, "Integrating User-Perceived Quality into Web Server Design," Proc. Ninth Int'l World Wide Web (WWW) Conf. Computer Networks, pp. 1-16, 2000.
[4] S. Chen, K. Joshi, M. Hiltunen, W. Sanders, and R. Schlichting, "Link Gradients: Predicting the Impact of Network Latency on Multi-Tier Applications," Proc. IEEE INFOCOM, 2009.
[5] L. Cherkasova, Y. Fu, W. Tang, and A. Vahdat, "Measuring and Characterizing End-to-End Internet Service Performance," ACM Trans. Internet Technology, vol. 3, no. 4, pp. 347-391, 2003.
[6] A. Feldmann, "BLT: Bi-Layer Tracing of HTTP and TCP/IP," Proc. Ninth Int'l World Wide Web (WWW) Conf. Computer Networks, pp. 321-335, 2000.
[7] N. Ferguson and B. Schneier, Practical Cryptography. John Wiley & Sons, 2003.
[8] R.T. Fielding, J. Gettys, J.C. Mogul, H.F. Nielsen, L. Masinter, P.J. Leach, and T. Berners-Lee, Hypertext Transfer Protocol - HTTP/1.1. Network Working Group, Request for Comments 2616, June 1999.
[9] F. Hernandez-Campos, K. Jeffay, and F.D. Smith, "Tracking the Evolution of Web Traffic: 1995-2003," Proc. 11th IEEE Int'l Symp. Modeling, Analysis, and Simulation of Computer and Telecomm. Systems (MASCOTS), pp. 16-25, 2003.
[10] HP, "Openview Transaction Analyzer," http://openview.hp.com/, 2010.
[11] IBM, "Page Detailer," http://www.alphaworks.ibm.com/tech/pagedetailer, 2010.
[12] Keynote Systems, Inc., www.keynote.com, 2010.
[13] R. Kohavi, R. Henne, and D. Sommerfield, "Practical Guide to Controlled Experiments on the Web: Listen to Your Customers Not the HiPPO," Proc. ACM SIGKDD, 2007.
[14] H. Krawczyk, M. Bellare, and R. Canetti, HMAC: Keyed-Hashing for Message Authentication. Network Working Group, Request for Comments 2104, Feb. 1997.
[15] B. Krishnamurthy and C.E. Wills, "Improving Web Performance by Client Characterization Driven Server Adaptation," Proc. 11th Int'l Conf. World Wide Web, 2002.
[16] Z. Li, M. Zhang, Z. Zhu, Y. Chen, A. Greenberg, and Y.-M. Wang, "WebProphet: Automating Performance Prediction for Web Services," Proc. Seventh USENIX Symp. Networked Systems Design and Implementation (NSDI), 2010.
[17] Microsoft Corporation, "How to Restrict the Use of Certain Cryptographic Algorithms and Protocols in Schannel.dll," http://support.microsoft.com/?kbid=245030, Dec. 2004.
[18] NetApplications.com, "Browser Version Market Share," http://marketshare.hitslink.com/report.aspx?qprid=6, Dec. 2006.
[19] D. Olshefski and J. Nieh, "Understanding the Management of Client Perceived Response Time," Proc. ACM SIGMETRICS, pp. 240-251, 2006.
[20] D. Olshefski, J. Nieh, and D. Agrawal, "Using Certes to Infer Client Response Time at the Web Server," ACM Trans. Computer Systems, vol. 22, no. 1, pp. 49-93, 2004.
[21] D.P. Olshefski, J. Nieh, and E. Nahum, "ksniffer: Determining the Remote Client Perceived Response Time from Live Packet Streams," Proc. Sixth USENIX Symp. Operating Systems Design and Implementation (OSDI), pp. 333-346, 2004.
[22] V.N. Padmanabhan and L. Qiu, "The Content and Access Dynamics of a Busy Web Site: Findings and Implications," Proc. ACM SIGCOMM, pp. 111-123, 2000.
[23] D. Patterson, "A Simple Way to Estimate the Cost of Downtime," Proc. 16th USENIX Large Installation System Administration Conf. (LISA), pp. 185-188, 2002.
[24] V. Paxson and M. Allman, Computing TCP's Retransmission Timer. Network Working Group, Request for Comments 2988, Nov. 2000.
[25] R. Rajamony and M. Elnozahy, "Measuring Client-Perceived Response Time on the WWW," Proc. Third Conf. USENIX Symp. Internet Technologies and Systems (USITS), 2001.
[26] Jupiter Research, "Retail Web Site Performance: Consumer Reaction to a Poor Online Shopping Experience," technical report, JupiterKagan, Inc., 2006.
[27] L. Rizzo, "Dummynet: A Simple Approach to the Evaluation of Network Protocols," ACM SIGCOMM Computer Comm. Rev., vol. 27, no. 1, pp. 31-41, 1997.
[28] S. Shakkottai, R. Srikant, N. Brownlee, A. Broido, and K. Claffy, "The RTT Distribution of TCP Flows in the Internet and Its Impact on TCP-Based Flow Control," technical report, The Cooperative Assoc. for Internet Data Analysis (CAIDA), 2004.
[29] J. Slottow, A. Shahriari, M. Stein, X. Chen, C. Thomas, and P.B. Ender, "Instrumenting and Tuning Dataview—A Networked Application for Navigating through Large Scientific Datasets," Software: Practice and Experience, vol. 32, no. 2, pp. 165-190, Feb. 2002.
[30] F.D. Smith, F. Hernandez-Campos, K. Jeffay, and D. Ott, "What TCP/IP Protocol Headers Can Tell Us About the Web," Proc. ACM SIGMETRICS, pp. 245-256, 2001.
[31] Q. Sun, D.R. Simon, Y.-M. Wang, W. Russell, V.N. Padmanabhan, and L. Qiu, "Statistical Identification of Encrypted Web Browsing Traffic," Proc. IEEE Symp. Security and Privacy, pp. 19-30, May 2002.
[32] M. Tariq, K. Bhandankar, V. Valancius, A. Zeitoun, N. Feamster, and M. Ammar, "Answering 'What-If' Deployment and Configuration Questions with WISE: Techniques and Deployment Experience," Proc. ACM SIGCOMM, 2008.
[33] J. Wei and C. Xu, "eQoS: Provisioning of Client-Perceived End-to-End QoS Guarantees in Web Servers," IEEE Trans. Computers, vol. 55, no. 12, pp. 1543-1556, Dec. 2006.
[34] J. Wei and C.-Z. Xu, "sMonitor: A Non-Intrusive Client-Perceived End-to-End Performance Monitor of Secured Internet Services," Proc. USENIX Ann. Technical Conf., June 2006.
[35] J. Wei, X. Zhou, and C.-Z. Xu, "Robust Processing Rate Allocation for Proportional Slowdown Differentiation on Internet Servers," IEEE Trans. Computers, vol. 54, no. 8, pp. 964-977, Aug. 2005.
[36] C.-Z. Xu, J. Wei, and F. Liu, "Model Predictive Feedback Control for QoS Assurance in Web Servers," Computer, vol. 41, no. 3, pp. 66-72, Mar. 2008.

Jianbin Wei received the BS degree in computer science from the Huazhong University of Science and Technology, China, in 1997. He received the MS and PhD degrees in computer engineering from Wayne State University in 2003 and 2006, respectively. His research interests are in distributed and Internet computing systems. He is currently with Yahoo, working on platforms of cloud computing. He is a member of the IEEE Computer Society.

Cheng-Zhong Xu received the BS and MS degrees from Nanjing University in 1986 and 1989, respectively, and the PhD degree in computer science from the University of Hong Kong in 1993. He is currently a professor in the Department of Electrical and Computer Engineering at Wayne State University, the Director of the Cloud and Internet Computing Laboratory, and the Director of Sun Microsystems' Center of Excellence in Open Source Computing and Applications. His research interest is mainly in scalable distributed and parallel systems and wireless embedded computing devices, with an emphasis on resource and system management for performance, availability, reliability, energy efficiency, and security. He has published more than 160 articles in peer-reviewed journals and conferences in these areas, including more than 20 papers in IEEE and ACM transactions. He is the author of the book Scalable and Secure Internet Services and Architecture (Chapman & Hall/CRC Press, 2005) and a coauthor of the book Load Balancing in Parallel Computers: Theory and Practice (Kluwer Academic/Springer Verlag, 1997). He serves on the editorial boards of IEEE Transactions on Parallel and Distributed Systems, Journal of Parallel and Distributed Computing, Journal of Parallel, Emergent, and Distributed Systems, Journal of Computers and Applications, Journal of High Performance Computing and Networking, and ZTE Communications. He has served dozens of international conferences and workshops in the capacity of program chair, general chair, and plenary speaker. He was a recipient of the Faculty Research Award, the President's Award for Excellence in Teaching, and the Career Development Chair Award of Wayne State University, and the "Outstanding Oversea Scholar" award of the National Science Foundation of China. He is a senior member of the IEEE. For more information, please visit http://www.ece.eng.wayne.edu/~czxu.