
An NS2 TCP Evaluation Tool:

Installation Guide and Tutorial


Gang Wang, Yong Xia David Harrison
NEC Laboratories China BitTorrent
{wanggang, xiayong}@research.nec.com.cn dave@bittorrent.com

April 29, 2007

Abstract
A TCP performance evaluation tool for the network simulator NS2 has been developed. This
document describes how to install and use this tool.

1 Introduction
Researchers frequently use the network simulator NS2 to evaluate the performance of their protocols
in the early stage of design. One particular area of recent interest is the congestion control protocols
(a.k.a., TCP alternatives) for high-speed, long-delay networks. There is significant overlap among
(but lack of a community-agreed set of) the topologies, traffic, and metrics used by many researchers
in the evaluation of TCP alternatives: effort could be saved by starting research from an existing
framework. As such, we developed a TCP performance evaluation tool. This tool includes several
typical topologies and traffic models; it measures some of the most important metrics commonly
used in TCP evaluation; and it can automatically generate simulation statistics and graphs ready
for inclusion in LaTeX and HTML documents. The tool is easy to use and provides an extendable
open-source framework.
This tool can be used not only for high-speed TCP protocols, but for other proposed changes to
congestion control mechanisms as well, such as ECN added to SYN/ACK packets, changes to make
small transfers more robust, changes in RTO estimation, and proposals to distinguish between loss
due to congestion or corruption, etc.
This simulation tool does not attempt to be a final one. Instead, it intends to serve as a starting
point. We invite community members to contribute to the project by helping to extend this tool
toward a widely-accepted, well-defined set of NS2 TCP evaluation benchmarks.
Below we describe how to install and use this tool for TCP performance evaluation.

2 Installation
This tool builds upon a set of previous work. There are two ways to install it: #1) install
all the required components one by one, or #2) apply an “all-in-one” patch that includes all the
needed components. We recommend approach #2, but first describe approach #1 for clarity.

2.1 Install the Components One-by-One
First you need to install NS2. Our tool has been tested with ns-2.29, ns-2.30, and ns-2.31, but we
recommend the most recent version. Suppose you install the ns-allinone-2.31 package (available at
http://www.isi.edu/nsnam/ns/ns-build.html) under the directory $HOME/ns-allinone-2.31.

Second, you need to install the RPI NS2 Graphing and Statistics package from http://www.ecse.
rpi.edu/~harrisod/graph.html, which provides a set of classes for generating commonly used
graphs and gathering important statistics.

Third, you need the PackMime-HTTP Web Traffic Generator from http://dirt.cs.unc.edu/
packmime/. This package was implemented in NS2 by researchers at UNC-Chapel Hill, based on
a model developed by the Internet Traffic Research group at Bell Labs. It generates synthetic web
traffic in NS2 based on recent Internet traffic traces.

Fourth, to test the high-speed TCP protocols you have to install them, e.g.,

* HTCP, designed by R. Shorten and D. Leith, downloadable from http://www.hamilton.ie/net/research.htm#software (with an STCP implementation included);

* BIC/CUBIC, designed by L. Xu and I. Rhee, downloadable from http://www.csc.ncsu.edu/faculty/rhee/export/bitcp/bitcp-ns/bitcp-ns.htm and http://www.csc.ncsu.edu/faculty/rhee/export/bitcp/cubic-script/script.htm;

* FAST, designed by S. Low, C. Jin, and D. Wei, implemented in NS2 by T. Cui and L. Andrew, downloadable from http://www.cubinlab.ee.mu.oz.au/ns2fasttcp;

* VCP, designed by Y. Xia, L. Subramanian, I. Stoica, and S. Kalyanaraman, downloadable from http://networks.ecse.rpi.edu/~xiay/vcp.html.

The ns-allinone-2.31 distribution already includes other TCP protocols like Reno, SACK, HSTCP
(S. Floyd), and XCP (D. Katabi and M. Handley). You can also add whatever other protocols you
need.

Finally, install our tool from http://labs.nec.com.cn/tcpeval.htm. Just unpack the package
to the NS2 root directory $HOME/ns-allinone-2.31.

> cd $HOME/ns-allinone-2.31
> tar zxvf tcpeval-0.1.tar.gz

This creates a directory called eval under $HOME/ns-allinone-2.31. The eval directory con-
tains all the scripts and documents of our tool. To use this tool, the environment variable TCPEVAL
must be defined. You can define it in the file $HOME/.bash_profile to avoid doing this repeatedly.

> export TCPEVAL=$HOME/ns-allinone-2.31/eval

Now you can try an example simulation provided in the tool.

> cd $TCPEVAL/ex           # directory for examples
> ns test_dumb_bell.tcl    # a dumb-bell topology simulation

2.2 Install the All-in-one Patch


To ease the installation process, we also provide a patch file for the latest ns-allinone package
(version 2.31 as of this writing). This patch contains all the components described above.

First, go to http://www.isi.edu/nsnam/ns/ns-build.html and download the ns-allinone-2.31
package. Again, suppose you install the package under the directory $HOME/ns-allinone-2.31.

Second, go to http://labs.nec.com.cn/tcpeval.htm, download the patch file
ns-allinone-2.31.tcpeval-0.1.patch.gz, and install the patch. The TCP evaluation tool will
appear in the directory $HOME/ns-allinone-2.31/eval.

> cd $HOME                 # note: the $HOME directory
> gunzip ns-allinone-2.31.tcpeval-0.1.patch.gz
> patch -p0 < ns-allinone-2.31.tcpeval-0.1.patch

Now rebuild the NS2 package. To do that, first configure the environment settings. In the file
$HOME/.bash_profile, set NS to the directory containing the NS2 package, NSVER to the NS2
version, and TCPEVAL to the directory of the TCP evaluation tool scripts.

> export NS=$HOME/ns-allinone-2.31/ns-2.31
> export NSVER=2.31
> export TCPEVAL=$HOME/ns-allinone-2.31/eval

Then configure the RPI Graphing and Statistics package and rebuild NS2.

> ns $HOME/ns-allinone-2.31/ns-2.31/tcl/rpi/configure.tcl
> cd $HOME/ns-allinone-2.31/ns-2.31
> ./configure
> make depend
> make

Now you can try an example simulation provided in the tool.

> cd $TCPEVAL/ex           # directory for examples
> ns test_dumb_bell.tcl    # a dumb-bell topology simulation

Figure 1: The architecture of our tool.

3 Tool Components
The architecture of our tool is shown in Figure 1. It is primarily composed of the following com-
ponents: network topologies, traffic models, performance evaluation metrics, and a set of result
statistics and graphs generated after a simulation is done.

3.1 Network Topologies


The tool includes three topologies commonly used in TCP performance evaluation: a
single-bottleneck dumb-bell, a multiple-bottleneck parking-lot, and a simple network topology. More
realistic and complex topologies can easily be added to the tool.

3.1.1 A Single-Bottleneck Dumb-Bell Topology


This topology is shown in Figure 2, in which source and sink nodes connect to router 1 or router 2.
The bandwidth between the two routers is much lower than that of the other links, which makes
the link between the routers a bottleneck. (Traffic can be either uni-directional or bi-directional.)
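As a concrete illustration (independent of the tool's own scripts), such a dumb-bell can be built directly with the standard NS2 OTcl API; the node names, link rates, and delays below are illustrative assumptions, not the tool's defaults:

```tcl
# Minimal dumb-bell sketch in plain NS2 OTcl (illustrative parameters).
set ns [new Simulator]

set r1 [$ns node]                              ;# router 1
set r2 [$ns node]                              ;# router 2
$ns duplex-link $r1 $r2 10Mb 20ms DropTail     ;# low-bandwidth bottleneck
$ns queue-limit $r1 $r2 100                    ;# bottleneck buffer (packets)

for {set i 0} {$i < 3} {incr i} {
    set src($i) [$ns node]
    set snk($i) [$ns node]
    $ns duplex-link $src($i) $r1 100Mb 2ms DropTail  ;# fast access links
    $ns duplex-link $r2 $snk($i) 100Mb 2ms DropTail
}
```

Because the access links are much faster than the router-to-router link, all congestion concentrates on the bottleneck, as described above.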

3.1.2 A Multiple-Bottleneck Parking-Lot Topology


The parking-lot topology shown in Figure 3 is similar to the dumb-bell topology except that it
introduces cross traffic traversing the intermediate routers.

3.1.3 A Simple Network Topology


A simple network topology is illustrated in Figure 4. In this configuration, the core routers represent
the network backbone, while the access routers connect sender and receiver nodes to the network.
The structure is similar to the transit and stub domains in GT-ITM. Static routing is employed as
the default routing protocol.

Figure 2: A dumb-bell topology.

Figure 3: A parking-lot topology. (Src and Sink nodes connect through Router1..RouterN, with
cross-traffic sources CrossSrc_i and sinks CrossSink_i attached to the intermediate routers.)

Figure 4: A simple network topology. (Src and Sink nodes attach to access routers, which connect
through core routers over the bottleneck.)

3.2 Traffic Models
The tool applies typical traffic settings. The applications involved cover four common traffic
types.

3.2.1 Long-lived FTP Traffic


FTP traffic models an infinite, non-stop file transfer, which begins at a random time and runs on
top of TCP. The implementation details and choice of TCP variant are left to the user and are
outside the scope of this tool.
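In stock NS2, such a long-lived flow can be sketched as follows; the TCP variant (SACK here) and the random start time are placeholder choices:

```tcl
# Long-lived FTP over TCP, started at a random time (illustrative sketch).
set tcp [new Agent/TCP/Sack1]            ;# any installed TCP variant works
set sink [new Agent/TCPSink/Sack1]
$ns attach-agent $src $tcp               ;# $ns, $src, $dst assumed defined
$ns attach-agent $dst $sink
$ns connect $tcp $sink

set ftp [new Application/FTP]
$ftp attach-agent $tcp
$ns at [expr {5.0 * rand()}] "$ftp start"   ;# random start in [0, 5) s
# no "$ftp stop": the transfer runs for the whole simulation
```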

3.2.2 Short-lived Web Traffic


The web traffic module employs the PackMime HTTP traffic generator, which is available in
recent NS2 releases.
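A minimal PackMime-HTTP setup, following the OTcl interface documented for recent NS2 releases, looks roughly like this; the node names and request rate are assumptions:

```tcl
# Sketch of PackMime-HTTP web traffic between two nodes (assumed settings).
set pm [new PackMimeHTTP]
$pm set-client $client_node      ;# $client_node, $server_node assumed defined
$pm set-server $server_node
$pm set-rate 15                  ;# average new connections per second
$pm set-http-1.1                 ;# use persistent HTTP/1.1 connections

$ns at 0.0 "$pm start"
$ns at 100.0 "$pm stop"
```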

3.2.3 Streaming Video Traffic


Streaming traffic is modeled using CBR traffic over UDP. Both sending rate and packet size are
settable.
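Such a stream can be sketched with the standard CBR generator over UDP; the rate and packet size below are illustrative, not the tool's defaults:

```tcl
# CBR-over-UDP streaming video sketch (illustrative rate and packet size).
set udp [new Agent/UDP]
set null [new Agent/Null]
$ns attach-agent $src $udp       ;# $ns, $src, $dst assumed defined
$ns attach-agent $dst $null
$ns connect $udp $null

set video [new Application/Traffic/CBR]
$video set rate_ 1Mb             ;# settable sending rate
$video set packetSize_ 1000      ;# settable packet size (bytes)
$video attach-agent $udp
```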

3.2.4 Interactive Voice Traffic


There are currently two synthetic voice traffic generation methods available in this tool. One is
based on CBR-like streaming traffic. The other is generated according to a two-state ON/OFF
model, in which ON and OFF states are exponentially distributed. The mean ON period is 1.0
sec, and the mean OFF duration is 1.35 sec. These values are set in accordance with ITU-T
recommendations, but are changeable if needed.
The voice packet size is 200 bytes, comprising a 160-byte payload (G.711 codec, 64 kbps rate,
20 ms frames), a 20-byte IP header, an 8-byte UDP header, and a 12-byte RTP header. These
parameters can be changed to model other voice/audio codecs.
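Using the stock Exponential ON/OFF generator, the model above can be sketched as follows; note that 200-byte packets every 20 ms correspond to an ON-period rate of 200 * 8 * 50 = 80 kbps:

```tcl
# Two-state ON/OFF voice source sketch using NS2's Exponential generator.
set voice [new Application/Traffic/Exponential]
$voice set burst_time_ 1.0       ;# mean ON period (s), per ITU-T guidance
$voice set idle_time_ 1.35       ;# mean OFF period (s)
$voice set rate_ 80Kb            ;# 200-byte packets at 50 pkts/s during ON
$voice set packetSize_ 200       ;# 160 B payload + 20 B IP + 8 B UDP + 12 B RTP
$voice attach-agent $udp         ;# $udp: a UDP agent attached elsewhere
```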

3.3 Performance Metrics


A comprehensive list of metrics for TCP performance evaluation is given in the TMRG
RFC “Metrics for the Evaluation of Congestion Control Mechanisms” by S. Floyd. As a first step,
this tool implements some of the commonly used metrics described there. Following the RFC, we
classify the metrics into network metrics and application metrics, listed as follows.

3.3.1 Throughput, Delay, Jitter and Loss Rate


• Throughput
For network metrics, we collect bottleneck link utilization as the aggregate link throughput.
Throughput sometimes differs from goodput: goodput counts only useful transmitted traffic,
whereas throughput may also include retransmitted traffic. Since users care more about the
useful bits the network delivers, the tool collects application-level end-to-end goodput
regardless of the transport protocol employed.

For long-lived FTP traffic, it measures the transmitted traffic over given intervals in bits
per second.
For short-lived web traffic, the PackMime HTTP model collects request/response goodput
and response time to measure web traffic performance.
Voice and video traffic differ from the above: their performance is affected by packet delay,
delay jitter, and packet loss rate as well as goodput. Their goodput is therefore measured as
the rate of delivered packets, excluding lost packets and packets delayed beyond a predefined
threshold.

• Delay
We use the bottleneck queue size as an indication of queuing delay at the bottlenecks. Besides
mean and max/min queue size statistics, we also report percentile queue sizes to indicate the
typical queue length.
FTP traffic is not much affected by packet transmission delay.
For web traffic, we report the response time, defined as the duration between the client
sending a request and receiving the response from the server.
For streaming and interactive traffic, packet delay is a one-way measurement: the duration
between sending and receiving at the end nodes.

• Jitter
Delay jitter is quite important for delay-sensitive traffic such as voice and video. Large jitter
requires a larger buffer at the receiver and may cause high effective loss rates under strict
delay requirements. We use the standard deviation of packet delay to quantify jitter for
interactive and streaming traffic.

• Loss Rate
For network statistics, we measure the bottleneck queue loss rate.
We do not collect loss rates for FTP and web traffic because they are less affected by this
metric.
For interactive and streaming traffic, high packet loss rates prevent the receiver from decoding
the stream. In this tool, loss rates are measured over specified intervals; a received packet is
also counted as lost if its delay exceeds a predefined threshold.
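The effective loss rate described here can be expressed as a small helper; the procedure name and inputs are our own illustration, not part of the tool:

```tcl
# Effective loss rate: drops plus packets delayed beyond a threshold.
proc effective_loss_rate {delays threshold ndropped} {
    # delays: list of one-way delays (s) of received packets in the interval
    set late 0
    foreach d $delays {
        if {$d > $threshold} { incr late }    ;# too late to decode
    }
    set total [expr {[llength $delays] + $ndropped}]
    return [expr {double($ndropped + $late) / $total}]
}
# e.g. 96 received (6 of them late) plus 4 dropped out of 100 packets
# gives an effective loss rate of (4 + 6) / 100 = 0.10
```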

3.3.2 Response Times and Oscillations


One of the key concerns in the design of congestion control mechanisms is the response time
to sudden network changes. On the one hand, the mechanism should respond rapidly to changes
in the network environment; on the other hand, its adjustments must not be so severe as to
compromise network stability. The tool is designed so that response time and oscillations can
easily be observed from the series of figures it generates, for simulation scenarios that include
variable bandwidth, round-trip delay, varied traffic start times, and other parameters.

3.3.3 Fairness and Convergence
In this tool, the fairness measurement uses Jain’s fairness index to measure the fair bandwidth
share among end-to-end FTP flows that traverse the same route.
Convergence time is the time it takes multiple flows to move from an unfair share of the link
bandwidth to a fair state. It is quite important for environments with high-bandwidth, long-
delay flows. This tool includes scenarios to test convergence performance.
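For n flows with measured goodputs x_1..x_n, Jain's index is (sum x_i)^2 / (n * sum x_i^2); it equals 1 for a perfectly fair allocation and 1/n when one flow takes everything. A direct Tcl transcription (ours, not the tool's internal code):

```tcl
# Jain's fairness index over a list of per-flow goodputs.
proc jain_index {rates} {
    set n [llength $rates]
    set sum 0.0
    set sumsq 0.0
    foreach x $rates {
        set sum   [expr {$sum + $x}]
        set sumsq [expr {$sumsq + $x * $x}]
    }
    return [expr {($sum * $sum) / ($n * $sumsq)}]
}
# jain_index {10 10 10 10} -> 1.0  (perfectly fair)
# jain_index {40 0 0 0}    -> 0.25 (one flow takes the whole link)
```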

3.3.4 Robustness in Challenging Environments


A static link packet error model is included in the tool to investigate TCP performance
in challenging environments. Link failures, routing changes, and other challenging conditions can
easily be tested by changing the tool’s parameters.

3.4 Simulation Results


The tool includes the RPI graphing package to automatically generate the performance metrics
discussed above. At the end of a simulation, it automatically generates a series of user-defined
statistics (e.g., average bottleneck utilization, 90th-percentile bottleneck queue length, average
per-flow goodput) and graphs (e.g., bottleneck utilization and queue length over time, per-flow
throughput over time). It can create LaTeX and HTML files to present the simulation results in
paper or webpage form. All the simulation-generated data is stored in a temporary directory for
later use.

4 Usage Details
Before using this tool, you should have some experience with NS2. All the examples shown below
are commonly used in TCP performance evaluation.
The main body of this tool consists of three files in the $HOME/ns-allinone-2.31/eval/tcl
directory: create_topology.tcl, create_traffic.tcl, and create_graph.tcl. As their names
indicate, create_topology.tcl implements the three common network topologies discussed in
Section 3.1, create_traffic.tcl defines the traffic model parameters used in the simulation (see
Section 3.2), and create_graph.tcl generates simulation statistics (see Section 3.3.1) and plots
graphs at the end of a simulation.
Three example scripts are given in the $HOME/ns-allinone-2.31/eval/ex directory:
test_dumb_bell.tcl, test_parking_lot.tcl, and test_network_1.tcl for the above-discussed
topologies. Their parameter definitions are in def_dumb_bell.tcl, def_parking_lot.tcl, and
def_network_1.tcl, respectively.
Here we take the dumb-bell topology simulation as an example; simulations for the other
topologies are similar.

4.1 Example 1: A Simple Simulation


We recommend running the example scripts as a starting point. For example,

> cd $TCPEVAL/ex
> ns test_dumb_bell.tcl

This runs the dumb-bell topology simulation with the default parameters defined in
def_dumb_bell.tcl. The results can be reviewed by opening /tmp/index100.html.
The output format is explained in Section 4.4 below. To write your own example, incorporate
the following lines into your Tcl script.

source $TCPEVAL/tcl/create_topology.tcl
source $TCPEVAL/tcl/create_traffic.tcl
source $TCPEVAL/tcl/create_graph.tcl

4.2 Example 2: The TCP Variants Used in the Simulation


Before evaluating a TCP variant, make sure that the protocol exists in your NS2 simulator;
otherwise, the tool reports the error “No such TCP installed in ns2”. Reno, SACK, HSTCP, and
XCP exist in the ns-2.31 distribution. Other TCP variants must be added to NS2 before they can
be used. After that, you need to set their configuration parameters for the simulation. For
example, to evaluate the performance of VCP, first download the VCP code from the following
link and install it according to its manual.

http://networks.ecse.rpi.edu/~xiay/vcp.html

Then the configuration parameters for VCP need to be set in the procedure get_tcp_params of
create_topology.tcl.

if { $scheme == "VCP" } {
    set SRC TCP/Reno/VcpSrc      ;# VCP source
    set SINK VcpSink             ;# VCP sink
    set QUEUE DropTail2/VcpQueue ;# bottleneck queue
    ...
}

To simplify this process, the all-in-one patch already includes the implementations and settings
of six other TCP variants: STCP, HTCP, BIC, CUBIC, FAST, and VCP. See Section 2.1 for their
implementations and typical settings.

4.3 Example 3: Scenario Configuration


To simulate a scenario, first set the parameters used in the simulation and then pass them to
the tool. Here we take the dumb-bell topology as an example.

4.3.1 Parameter Settings


The parameter settings in def_dumb_bell.tcl include three parts: topology settings, traffic
settings, and simulation statistics and graph settings. The topology settings define the specific
topology parameters; for the dumb-bell, they set the bottleneck bandwidth, round-trip time,
propagation delay, and packet error rate on the bottleneck link. The traffic settings define the
traffic parameters used in the simulation, such as the number of FTP flows, which high-speed
TCP protocol FTP uses, whether AQM is used, and how long the simulation runs. Finally, you
choose the performance statistics to be generated (e.g., bottleneck utilization, packet loss rate)
and the graphs to be displayed (e.g., queue length over time) after the simulation is done. Each
item in the file has its meaning explained.
For example, in the topology settings, per sets the static packet error rate on the bottlenecks.
The following command sets the packet error rate to 0.01; that is, out of every 100 packets sent
on the link, approximately one is corrupted. If set to 0, no packet errors occur on the link.

> set per 0.01    ;# packet error rate

Currently, there are four traffic models in this tool: long-lived FTP, short-lived web, interactive
voice, and streaming video, as explained in Section 3.2. For example, to use XCP for the FTP
traffic, just do

> set TCP_scheme XCP

If we want to generate the bottleneck statistics and graphs when the simulation finishes, just
set

> set show_bottleneck_stats 1

If set to 0, the tool does not show graphs of bottleneck statistics after the simulation. Other
parameters can be set in a similar way.
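Putting the settings above together, a def_dumb_bell.tcl fragment might read as follows (only the parameters already introduced; everything else keeps its default):

```tcl
# Fragment of def_dumb_bell.tcl combining the settings discussed above.
set per 0.01                  ;# bottleneck packet error rate (0 = no errors)
set TCP_scheme XCP            ;# TCP variant used by the FTP flows
set show_bottleneck_stats 1   ;# 1: generate bottleneck statistics and graphs
```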

4.3.2 Tool Configuration


After setting the simulation parameters, you need to pass them to the tool. This is done in the
file test_dumb_bell.tcl, which tells the tool what topology, traffic, and graphs to use. The
format looks like:

> $graph config -show_bottleneck_stats $show_bottleneck_stats \
      -show_graph_ftp $show_graph_ftp \
      -show_graph_http $show_graph_http \
      ...

where $show_bottleneck_stats is set in def_dumb_bell.tcl as discussed above. The following
command then runs a dumb-bell simulation.

> ns test_dumb_bell.tcl

4.4 Example 4: Multiple Output Formats
All the simulation results are stored in /tmp/expX, where X is the simulation sequence
number. The data sub-directory contains the trace files and plot scripts used in the simulation;
the figure sub-directory stores the generated graphs. There are three output formats for the
simulation results: text, HTML, and EPS. The selection depends on the def_dumb_bell.tcl
settings and works as follows:

if ( verbose == 0 ) {
    output text statistics
}
if ( verbose == 1 && html_index != -1 ) {
    output indexN.html in the /tmp directory,
    where N is the html_index in def_dumb_bell.tcl
}
output eps graphs

4.4.1 Text Format


The text output is designed for repeated simulations, in order to evaluate the influence of a
parameter on performance. The tool prints the output shown in Figure 5 (dumb-bell topology)
when the simulation ends. The columns and their meanings are explained in Table 1. The output
format for the network topology is explained in Table 2.

Figure 5: Text output.

Table 1: Text Output Columns and Meanings for the Dumb-Bell and Parking-Lot Topologies
1. TCP scheme                        2. Number of bottlenecks
3. Bottleneck bandwidth (Mbps)       4. Rttp (ms)
5. Num. of forward FTP flows         6. Num. of reverse FTP flows
7. HTTP generation rate (/s)         8. Num. of voice flows
9. Num. of forward streaming flows   10. Num. of reverse streaming flows
11. Bottleneck no.                   12. Bottleneck utilization
13. Mean bottleneck queue length     14. Bottleneck buffer size
15. Percent of mean queue length     16. Percent of max queue length
17. Num. of dropped packets          18. Packet drop rate
... fields 11-18 repeat once per bottleneck ...
... Elapsed time

Table 2: Text Output Columns and Meanings for the Network Topology
1. TCP scheme                               2. Number of transit nodes
3. Bandwidth of core links (Mbps)           4. Delay of core links (ms)
5. Bandwidth of transit links (Mbps)        6. Delay of transit links (ms)
7. Bandwidth of stub links (Mbps)           8. Delay of stub links (ms)
9. Num. of FTP flows                        10. HTTP generation rate (/s)
11. Num. of voice flows                     12. Num. of streaming flows
13. Core link no.                           14. Core link utilization
15. Mean core link queue length             16. Core link buffer size
17. Percent of mean core link queue length  18. Percent of max core link queue length
19. Num. of core link dropped packets       20. Packet drop rate on the core link
... fields 13-20 repeat once per core link ...
Transit link statistics, in the same format as the core links ...
... Elapsed time

4.4.2 HTML Format


HTML output is intended for browsing all the simulation results in a more intuitive and con-
venient way. When the simulation ends, the tool generates a file /tmp/indexN.html, which
incorporates all the simulation results, including the scenario settings, graphs of the metrics of
interest, and other collected statistics.

4.4.3 EPS Format


To make the simulation results easier to include in papers for publication, the tool stores EPS
files in /tmp/expX/figure/. They are named according to their contents, e.g.,
btnk_util_fwd_0_plot1.eps is the utilization of the first bottleneck versus time on the forward
path, and http_res_thr_plot1.eps is the HTTP response throughput. They are shown in
Figures 6 and 7 (for the default parameters in def_dumb_bell.tcl with TCP Reno).

Figure 6: Forward bottleneck link utilization (forward bottleneck No. 1 utilization vs. time in
seconds, 1.0 s sampling interval).

Figure 7: HTTP response throughput (forward HTTP traffic response throughput in bps vs. time
in seconds).

4.5 Example 5: A Convergence Time Test


Convergence time is an important metric: the elapsed time for the bandwidth allocation to change
from an unfair state to a fair one. To collect this metric, set the following parameter in
def_dumb_bell.tcl.

> set show_convergence_time 1

The total simulation time of this scenario is 1000 seconds. It has 5 reverse FTP flows, which
start at the beginning of the simulation, and 5 forward flows, one starting every 200 seconds.
When the simulation is done, the forward FTP throughput shown in Figure 8 illustrates the
convergence speed of XCP (with the default parameters in def_dumb_bell.tcl).

Figure 8: XCP convergence speed (forward FTP throughput in bps vs. time in seconds for flows
0-4, 1.0 s sampling interval).

4.6 Example 6: A Comparison of the TCP Variants’ Performance


Each TCP variant has its own advantages and disadvantages, so a common question is: which
alternative achieves the most “balanced” performance tradeoffs in common scenarios? To answer
such questions, the TCP variants should be compared on the same set of test scenarios. The
results are helpful to researchers because they can provide hints for congestion control protocol
design.
This comparison process requires running the simulation scripts repeatedly, which is very easy
with the TCP evaluation tool. For example, to use the dumb-bell topology, you only have to use
the text output and pass the varying parameters, including the TCP variants and the scenario
parameters, to def_dumb_bell.tcl. When the simulations finish, a comparison report is
generated automatically. The scenarios/dumb_bell sub-directory of the distribution provides
three examples (var_bw.sh, var_rtt.sh, var_ftp.sh). They vary the bottleneck bandwidth, the
round-trip propagation delay, and the number of FTP flows to investigate how metrics such as
bottleneck link utilization, bottleneck queue length, and packet drop rate are affected by the
simulation parameters and TCP schemes. Users are encouraged to add more varying parameters
to enrich the simulation scenarios, particularly for the parking-lot and network topologies, which
we intentionally left empty.
To run this comparison simulation for varying bottleneck capacity, do the following:
> cd $TCPEVAL/scenarios/dumb_bell
> ./var_bw.sh

Figure 9: Bottleneck utilization variation when capacity changes (link utilization in % vs.
bandwidth in Mbps on a log scale, for RENO, SACK, HSTCP, HTCP, STCP, BICTCP, and
CUBIC, each with RED, plus XCP and VCP).

Figure 10: Average bottleneck queue length variation when capacity changes (mean queue length
in % vs. bandwidth in Mbps on a log scale, same TCP schemes as Figure 9).

Figure 11: Packet drop rate variation when capacity changes (packet drop rate in % vs.
bandwidth in Mbps on a log scale, same TCP schemes as Figure 9).

When the simulation finishes, a file named myreport.pdf is generated, which includes the
comparison graphs. For example, when the bottleneck capacity varies from 1 Mbps to 1000 Mbps
(the other parameters are fixed), Figures 9–11 illustrate how the bottleneck link utilization, the
average bottleneck queue length and the packet drop rate change accordingly.
In addition, there are many other parameters in def_dumb_bell.tcl, which users can set
according to their needs. The parking-lot and simple network simulations are similar to the
dumb-bell one.

5 Acknowledgements
The authors would like to thank Dr. Sally Floyd of ICIR for her encouragement and a lot of
valuable advice. Part of David Harrison and Yong Xia’s work was conducted when they were PhD
students at Rensselaer Polytechnic Institute (RPI). They thank Prof. Shivkumar Kalyanaraman
of RPI for his support and guidance.
