
Design and Implement Differentiated Service Routers in OPNET

Jun Wang, Klara Nahrstedt, Yuxin Zhou
Department of Computer Science, University of Illinois at Urbana-Champaign
{junwang3, klara, z-yuxin}@cs.uiuc.edu

Abstract

The Differentiated Service Model (Diffserv) is currently a popular research topic as a low-cost method to bring QoS to today's Internet, especially in the backbone. Simulation is the best way to study Diffserv before deploying it to the real Internet. In this paper, we introduce the techniques and methodologies that we used to design and implement Diffserv-enabled routers using OPNET. We have implemented the Token Bucket and Leaky Bucket algorithms, the RIO and PS queueing schemes, RED dropping schemes and other components in OPNET IP modules. Based on these Diffserv-enabled routers, we set up a large-scale network to study Diffserv QoS features: priority dropping (discrimination between different service classes), QoS guarantees, token bucket effects, fragmentation/de-fragmentation effects and so on. Furthermore, we present problems we encountered during our study, and their solutions.

1 Introduction

Internet traffic has increased at an exponential rate recently and shows no signs of slowing down. In the meantime, some new classes of applications (e.g., distributed multimedia applications, distributed realtime applications, network management etc.) raise requirements for the underlying network infrastructure to provide soft or even hard Quality of Service (QoS) guarantees. This poses big challenges to the current Internet, which provides only one simple service class to all users with respect to QoS: best-effort datagram delivery, which cannot provide any service quality guarantees. The gap between QoS provisioning and QoS demand keeps growing.

In the early 90s, the Integrated Service Model (IntServ) was proposed, which provides an integrated infrastructure to handle conventional Internet applications and QoS-sensitive applications together [7, 13]. IntServ uses the resource ReSerVation Protocol (RSVP) as its signaling protocol [4, 5, 16]. Although IntServ/RSVP can provide QoS guarantees to applications, it has a scalability problem, since each router in the model has to keep track of individual flows. To address the scalability issue, a new core-stateless model, called the Differentiated Service Model (Diffserv), was proposed and has become a popular research topic as a low-cost method to bring QoS to today's Internet, especially in the backbone networks [10, 6].

Intensive research efforts have been devoted to the Diffserv topic. For example, [12] studied the pricing issue in the Diffserv model. [2] implemented a Diffserv router on the Linux platform. [1] introduced a combination of Diffserv and MPLS. [3] discussed multi-field packet classification. [9] analyzed PHB mechanisms for the premium service. [15] studied the packet marking issue in Diffserv. [11, 14] did research on scheduling issues in core-stateless networks.

Although intensive research has been done on this topic, it is still hard to make progress in Diffserv research with respect to the overall impact on the Internet, because it is too expensive and still not possible to deploy real Diffserv-enabled routers into the whole Internet in one shot just for research purposes. Thus simulation is the best way to study Diffserv before deploying it to the real Internet. OPNET is a good simulator which provides complete node and model libraries as well as thorough documentation.

In this work, we introduce our design and implementation of DS-enabled routers in the OPNET simulation environment. Intensive simulations are conducted to verify our design and implementation and to study UDP performance over Diffserv in a large-scale network. We also introduce several problems we have encountered during this work, and their solutions.*

The paper is organized as follows. In Section 2, we introduce the Diffserv model and the design issues. In Section 3 we cover the implementation of DS routers in OPNET. Section 4 describes the simulations we conduct in the OPNET environment and their results. In the last section, we conclude our work.

* This work was supported by the National Science Foundation PACI grant under contract number NSF PACI 1 1 13006, and NSF CISE Infrastructure grant under contract number NSF EIA 99-72884. Please address all correspondence to Jun Wang and Klara Nahrstedt at Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, phone: (217) 333-1515, fax: (217) 244-6869.
2 Differentiated Service in the Internet


2.1 Diffserv Model
The main purpose of the Diffserv model is to provision end-to-end QoS guarantees by using service differentiation in the Internet. Unlike the IntServ model, it does not keep soft states for individual flows; instead, it achieves QoS guarantees by a low-cost method: aggregating individual flows into several service classes. Therefore, the Diffserv model has good scalability.

The Diffserv model works as follows. Incoming packets are classified and marked into different classes, using the so-called Differentiated Services CodePoint (DSCP) [8] (e.g., the IPv4 TOS bits or the IPv6 Traffic Class bits in an IP header). Complex traffic conditioning such as classification, marking, shaping and policing is pushed to the network edge routers or hosts. Therefore, the core routers are relatively simple: they classify packets and forward them using the corresponding Per-Hop Behaviors (PHBs). From the administrative point of view, a Diffserv network may consist of multiple DS domains. To achieve end-to-end QoS guarantees, negotiation and agreement between these DS domains are needed. Although the boundary nodes need to perform complex conditioning like the edge nodes, the interior nodes within DS domains are simple [6, 10].

Three service classes have been proposed: the premium class, the assured class and the best-effort class. Different service classes are suitable for different types of applications. For example, the premium service provides a virtual reliable leased line to customers with the desired bandwidth and delay guarantees, while the assured service focuses on statistical provisioning of QoS requirements and can provide soft, statistical guarantees to the users [10].
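As a concrete illustration of the classification step, the following is a minimal C sketch of mapping the DSCP (the six high-order bits of the former IPv4 TOS byte, per RFC 2474 [8]) to the three classes. The specific codepoints (EF = 46 for premium, AF11 = 10 for assured) are our illustrative choices; the paper does not list the exact values its classifier uses.

    #include <stdint.h>

    /* Service classes used by the DS router. */
    enum ds_class { CLASS_PREMIUM, CLASS_ASSURED, CLASS_BEST_EFFORT };

    /* Illustrative codepoints (assumed, not taken from the paper):
     * EF = 46 for premium, AF11 = 10 for assured, 0 = best-effort. */
    #define DSCP_EF   46
    #define DSCP_AF11 10

    static enum ds_class classify(uint8_t tos)
    {
        uint8_t dscp = tos >> 2;   /* drop the low 2 (ECN/CU) bits */
        switch (dscp) {
        case DSCP_EF:   return CLASS_PREMIUM;
        case DSCP_AF11: return CLASS_ASSURED;
        default:        return CLASS_BEST_EFFORT;
        }
    }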

Figure 1: The Structure of a DS Router (components: Classifier; Meter (Token Bucket / Leaky Bucket); Marker/Re-marker; Dropper/Shaper; Queueing Disciplining (PS-queue / RIO-queue))

2.2 Design of DS Routers

The Differentiated Service enabled routers (DS-enabled routers, or DS routers) are the key nodes in the Diffserv model. There are two types of DS-enabled routers: (1) edge routers and (2) core routers. In this work, we focus on the design and implementation of the edge routers, since the core routers are simpler than the edge routers. Figure 1 shows the structure of a DS router; it contains several key components.

The Classifier. The classifier classifies packets according to the DSCP in their IP headers. The classifier in an edge node may consider other information as well, such as source addresses and port numbers. After being classified, packets are put into the premium, assured and best-effort classes accordingly.

The Meter. The meter performs in-profile / out-of-profile checking on each incoming packet. It uses the token bucket scheme to monitor the assured traffic, and the leaky bucket scheme to monitor the premium traffic, since the token bucket allows a certain amount of bursts but the leaky bucket does not. Both the leaky bucket scheme and the token bucket scheme control the output rates through their token generation rates.

The Marker/Re-marker. After being classified, packets are marked as premium, assured or best-effort accordingly. Re-marking happens when assured packets become out-of-profile, which means they violate the contracted rate limit; they are then re-marked as best-effort packets.

The Dropper/Shaper. If premium packets become out-of-profile, they are dropped directly by the dropper. Shaping happens in the edge nodes or boundary nodes, and eliminates jitter.

The Queueing Disciplining Modules. The queueing discipline modules are very important for the DS model: the differentiation is achieved here. We use two separate queues, the Premium Service Queue (PS-queue) for the premium packets and the RIO-queue 1 for both assured packets and best-effort packets. The PS-queue is a simple FIFO queue, while the RIO-queue is more complicated. Figure 2 illustrates the multi-class Random Early Detection (RED) algorithm which the RIO-queue uses. When the RIO-queue length exceeds the dropping threshold Tmin_b, new best-effort packets are dropped with increasing probability up to Pb. When the queue length exceeds Tmin_a, new assured packets are dropped with increasing probability up to Pa. When the queue length exceeds Tmax_b, all new best-effort packets are dropped. When the queue length exceeds Tmax_a, all incoming packets are dropped. By tuning the values of Tmin_b, Tmax_b, Tmin_a, Tmax_a, Pb and Pa, we can obtain different dropping behaviors for both best-effort and assured packets.

1 Random Early Detection with distinction of In-profile and Out-of-profile packets [2]
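To make the dropping rule concrete, the sketch below codes the per-class RED decision under two assumptions of ours: the drop probability ramps linearly from 0 at Tmin up to Pmax near Tmax (the paper gives the thresholds and maxima but not the exact ramp shape), and the instantaneous queue length is used rather than the averaged length classic RED maintains. The rand() draw stands in for the op_dist_uniform() call the implementation uses.

    #include <stdlib.h>

    /* Per-class RED parameters: assured traffic uses (Tmin_a, Tmax_a, Pa),
     * best-effort uses (Tmin_b, Tmax_b, Pb), as in Figure 2. */
    struct red_params {
        double tmin;   /* queue length where dropping begins         */
        double tmax;   /* queue length where every packet is dropped */
        double pmax;   /* drop probability as the length nears tmax  */
    };

    /* Returns 1 if the arriving packet of this class should be dropped. */
    static int rio_drop(double qlen, const struct red_params *p)
    {
        if (qlen < p->tmin)
            return 0;                                  /* never drop  */
        if (qlen >= p->tmax)
            return 1;                                  /* always drop */
        double prob = p->pmax * (qlen - p->tmin) / (p->tmax - p->tmin);
        return ((double)rand() / RAND_MAX) < prob;     /* probabilistic */
    }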


Figure 2: RIO Queueing Discipline (dropping probability vs. queue length: the best-effort dropping probability rises from 0 at Tmin_b up to Pb near Tmax_b, and the assured dropping probability rises from 0 at Tmin_a up to Pa near Tmax_a)

Figure 3: Network Topology for the Simulation (clients subnet: client0, client1, client2 behind switch0 and edge router E_router_0; INET_CLOUD: router_0, router_1, router_2; servers subnet: edge router E_router_1 and one server)

3 Implementation
We implement the Diffserv-enabled router (DS router) in the OPNET simulation environment. Based on the DS routers, we construct a large-scale network environment which includes multiple DS routers and traffic senders/receivers. The simulations we conduct focus on the verification of the DS routers and the study of their performance. In our configuration we consider multiple DS routers, several traffic senders and one receiver (Figure 3). The simulation is implemented in OPNET Modeler 6.0.L running on a Windows NT 4.0 Workstation with dual PentiumPro 200 MHz CPUs and 128 MB of RAM. Figure 3 also shows the scale of the simulation environment. The clients subnet comprises three client nodes, one switch and one DS edge router. The INET_CLOUD consists of three DS-enabled / non-DS-enabled routers (it can be expanded to a more complicated topology). The servers subnet contains one server and one edge router. In this section, we first introduce the implementation of the required network nodes, including the DS router, the traffic sender and the traffic receiver. Then we introduce problems we encountered during the implementation, and their solutions.

3.1 DS router
To implement the DS scheme in a router, we have two options: (1) start the implementation from scratch; (2) take advantage of an existing router architecture in OPNET. We choose the latter. Thanks to the complete node library provided by OPNET, we have multiple choices on which to base our DS router. In our implementation, we choose the Cisco 7204 router as our base, which saves a lot of implementation time, since we do not have to handle routing, MAC or TCP/UDP at all.

According to the DS scheme, IP packets are classified with respect to the DSCP in their IP headers so that IP flows are aggregated into different service classes. It is natural to put the DS scheme into the IP module. Therefore, we re-write the IP module and put the DS components in it. Figure 4 shows the node model of a DS router. The picture is the same as that of a conventional router, but the ip process model (the block just below the ip_encap block) has been changed to our DS-enabled IP process model. The overall structure of the router has not changed much, although making the router DS-enabled is a significant enhancement with respect to functionality. The reason is that in OPNET different modules (e.g., MAC, IP, TCP, OSPF, RIP and so on) are implemented as separate objects, which communicate with each other through interfaces. As long as the new module keeps an appropriate interface, the whole model works fine.

Figure 5 illustrates the process model for a DS-enabled IP module. In the process model, there are two different processes. The upper one is the main IP process, which implements the main IP and Diffserv functionality (called the diff_ip_rte_v4 model); the lower one is the child process, which implements the priority scheduling scheme for Diffserv (called the diff_pq model).

Figure 4: The Node Model for a DS Router (the ip_encap block, the DS-enabled ip process, and the surrounding TCP/UDP, routing, ARP/MAC and ATM interface modules)

Figure 5: The Process Model for a DS Enabled IP Module (main process states: init, wait, cmn_rte_tbl, init_too, DS_schd, IP_serv, svc_start, svc_compl, idle; child process diff_pq states: init, idle, enqueue, extract)

The diff_ip_rte_v4 process model is implemented as follows:

Initializations. All the initializations are done in the init, wait, cmn_rte_tbl and init_too states sequentially, which is the same as in the regular IP process model.

DS or non-DS. If the node is set to DS-enabled, the transition with the DIFFSERV condition occurs. Otherwise, the transition with the NO_DIFFSERV condition occurs. The reason why we design the model to handle both DS-enabled and non-DS-enabled cases is described later (Section 3.3, Problem I).

Packet Classification. The packet classification is done within the DS_schd state.

Packet Monitoring and Policing. Packet monitoring and policing are implemented within the DS_schd state too. After being classified, an incoming packet is monitored and policed according to the class it belongs to. If the packet is a premium class packet, it is monitored and policed using the leaky bucket model (Section 2.2); if it is an assured or a best-effort class packet, it is monitored and policed using the token bucket model (Section 2.2). If the packet is premium and conformant (in-profile), it is processed by the next state (the IP_serv state) directly; if it is non-conformant (out-of-profile), it is dropped (destroyed) without any further processing. If the packet is an in-profile assured packet, it is processed by the IP_serv state; otherwise it is re-marked as a best-effort packet in the DS_schd state and processed by the IP_serv state later. If the packet is a best-effort packet, it goes directly into the IP_serv state and gets processed there.

Packet Routing and Forwarding. After classification and conformance checking, the packet enters the regular IP forwarding process, which is implemented by the IP_serv, svc_start, svc_compl and idle states. All of these states are almost the same as those in a conventional IP module, except that the idle state is diffserv-aware. 2

2 Besides the conditional transitions ARRIVAL and SVC_COMPLETION, the DS_SCHD transition is added.

Leaky Bucket and Token Bucket. As described in Section 2.2, we use the leaky bucket model and the token bucket model to do conformance checking on premium class traffic and assured class traffic respectively. The reason is that for premium class traffic the resource reservation is done based on the peak rate, so we do not allow any burst rate which exceeds this reserved rate; for assured class traffic the reservation is based on the statistically guaranteed rate, so a certain amount of burstiness is allowed. How much burstiness is allowed is determined by the token bucket depth. Figure 6 shows the implementation. To calculate the token availability, instead of scheduling a self-interrupt for each time unit, we do the calculation only at the time a packet arrives, which is more efficient. For a premium packet, we hold it in the bucket until it gets enough tokens; if the bucket overflows, the packet is discarded directly. For the token bucket, we keep track of two time variables, the current time (current_time) and the last service time (last_service_time), which are used to calculate the available tokens. When an assured packet comes, the token bucket first updates its available tokens:

    available_tokens = token_rate * (current_time - last_service_time) + residual_tokens

If there are enough tokens to hold the current packet, the packet is forwarded to the next state directly; otherwise the packet is re-marked as a best-effort packet and then forwarded to the next state.
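For readers who want to exercise this update rule outside OPNET, here is a small self-contained C rendering of the arrival-time token accounting (the full model pseudocode, including the premium-side leaky bucket, follows in Figure 6). The identifiers are ours, and one deliberate deviation is noted in the comments: Figure 6 leaves the residual tokens untouched on the re-mark branch, which silently discards the credit accrued since the last arrival.

    #include <stdio.h>

    /* Arrival-time token accounting for the assured-class token bucket.
     * Tokens are replenished lazily when a packet arrives, so no
     * per-tick self-interrupts are needed. */
    struct token_bucket {
        double token_rate;        /* bytes per second                   */
        double bucket_depth;      /* maximum token balance, in bytes    */
        double residual_tokens;   /* balance left after the last packet */
        double last_service_time; /* seconds                            */
    };

    /* Returns 1 if the packet is in-profile, 0 if it must be re-marked. */
    static int tb_conform(struct token_bucket *tb, double now, double pkt_size)
    {
        double avail = tb->token_rate * (now - tb->last_service_time)
                     + tb->residual_tokens;
        if (avail > tb->bucket_depth)
            avail = tb->bucket_depth;            /* cap at bucket depth */
        tb->last_service_time = now;
        if (avail < pkt_size) {
            /* Figure 6 leaves residual_tokens unchanged here; we store
             * the clamped balance back so accrued credit is not lost. */
            tb->residual_tokens = avail;
            return 0;                            /* re-mark             */
        }
        tb->residual_tokens = avail - pkt_size;  /* spend the tokens    */
        return 1;                                /* in-profile          */
    }

    int main(void)
    {
        /* Case I token bucket from Figure 7: 200,000 bytes/s, 200,000 deep. */
        struct token_bucket tb = { 200000.0, 200000.0, 200000.0, 0.0 };
        printf("%d\n", tb_conform(&tb, 15.0, 1000.0));  /* 1: in-profile */
        return 0;
    }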

    if ( packet is premium ) then
        if ( current size of the leaky bucket + packet size <= bucket depth ) then
            insert the packet into the leaky bucket;
            if ( the bucket was empty before the insertion of this packet ) then
                holding time = packet size / token rate;
                schedule a self-interrupt after the holding time;
            endif
        else
            discard the packet;    /* the leaky bucket is full */
        endif
    endif

    if ( packet is assured ) then
        available tokens = token rate * ( current time - last service time ) + residual tokens;
        if ( available tokens > token bucket depth ) then
            available tokens = token bucket depth;
        endif
        if ( available tokens < the packet size ) then
            re-mark the packet as a best-effort packet;
            forward this packet to the next state;
        else
            forward this packet to the next state directly;
            residual tokens = available tokens - packet size;
        endif
        last service time = current time;
    endif

Figure 6: Algorithm to Implement the Leaky Bucket and the Token Bucket

The child process diff_pq handles priority packet scheduling by using two queues: the PS-queue for the premium class traffic and the RIO-queue for the assured and best-effort traffic (see Section 2). The child process is implemented as follows:

Process Model. The child process model is shown in Figure 5. The model is simple. If no packet is coming and no packet is being scheduled, it stays in the idle state. When a new packet comes, it enters the enqueue state, where the PS-queue and RIO-queue are implemented. The incoming packet is put into the appropriate queue with respect to its service class. The PS-queue has higher priority over the RIO-queue, which means all packets waiting in the PS-queue are serviced before any packet from the RIO-queue.

PS-queue. The PS-queue is quite simple and is implemented as a plain FIFO queue. If the queue overflows, the incoming packets are discarded. Actually, the overflow case happens rarely, since all the premium packets are monitored and shaped when they enter this node, so little burstiness happens here.

RIO-queue. The RIO-queue is more complicated than the PS-queue. It adopts the multi-class RED algorithm (Section 2.2). In our implementation, all parameters of the algorithm are implemented as attributes in the node interface (e.g., PS-queue size, RIO-queue size, thresholds for assured and best-effort traffic to begin dropping, and so on). The user can tune these parameters, resulting in different dropping behaviors for the assured and best-effort packets (Figure 2). All the dropping probabilities are implemented by using the uniform distribution function call op_dist_uniform() provided by OPNET.
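The strict-priority rule of diff_pq is easy to state in code. The sketch below uses a hypothetical linked-list FIFO in place of OPNET's subqueue objects; only the service-order logic mirrors the model.

    #include <stddef.h>

    /* Hypothetical stand-ins for OPNET's packet and subqueue objects. */
    struct pkt  { struct pkt *next; };
    struct fifo { struct pkt *head, *tail; };

    static struct pkt *fifo_pop(struct fifo *q)
    {
        struct pkt *p = q->head;
        if (p != NULL) {
            q->head = p->next;
            if (q->head == NULL)
                q->tail = NULL;
        }
        return p;
    }

    /* Strict priority as in the diff_pq child process: every packet
     * waiting in the PS-queue is serviced before any RIO-queue packet. */
    static struct pkt *next_packet(struct fifo *ps_queue, struct fifo *rio_queue)
    {
        struct pkt *p = fifo_pop(ps_queue);
        return (p != NULL) ? p : fifo_pop(rio_queue);
    }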

3.2 Traffic Sender and Receiver

We use the Video Conferencing Transport application (which uses UDP as the transport protocol and is provided by OPNET) for our simulations. The scenario for the regular Video Conferencing Transport is: (1) the client (the traffic sender in our case) sends UDP traffic to the server at a constant rate; (2) the server (the traffic receiver in our case) echoes the traffic back to the sender at the same rate. In our simulation, we modify the server so that the echo is disabled, which means the server (traffic receiver) becomes a pure traffic sink. The sending rate at the traffic sender can be tuned for different simulation cases. In the traffic receiver, we added a monitoring module into its ip_encap model which provides three local statistics (premium rate, assured rate and best-effort rate) in its interface. These three local statistics keep track of the receiving rates of the premium traffic, assured traffic and best-effort traffic respectively. The reason why we put the monitoring module in the ip_encap layer instead of the application layer is explained in the next subsection (Section 3.3, Problem II).

3.3 Problems and Solutions

During the implementation and simulation, we encountered several problems. Below we list two of them as well as their solutions.

Problem I. Our simulation environment includes both DS-enabled nodes and non-DS-enabled nodes, which is natural in the real Internet. Therefore both the DS-enabled IP module and the non-DS-enabled IP module are used in the same simulation scenario. For example, the traffic receiver uses the non-DS-enabled IP module, while the DS routers use DS-enabled IP modules. But we always get compilation errors when we try to use both IP modules simultaneously. It seems OPNET defines some internal global variables for the IP modules (e.g., routing table export file created, routing table import export flag, and so on). Since our DS-enabled IP module is based on the regular IP module, if both DS-enabled and non-DS-enabled IP modules are used simultaneously, "variables redefined" compilation error messages are given out. Our solution to this problem is to accommodate both IP modules in our diff_ip_rte_v4 model, so that only one kind of IP module is used in one simulation scenario. In the user interface, we provide an attribute called ip.diffserv flag to the user. The default value is 0, which means the node is non-DS-enabled. If the node is DS-enabled, the user should set ip.diffserv flag to 1. As illustrated in Section 3.1, there are two paths from the arrival state to the IP_serv state within the diff_ip_rte_v4 model (Figure 5), one for DS-enabled nodes and the other for non-DS-enabled nodes. Upon a packet arrival, the appropriate path is chosen with respect to the value of ip.diffserv flag. Results show that this solves the problem.

Problem II. Originally, we put the monitoring and reporting module in the application layer of the traffic receiver, where it monitors and reports the receiving rates of the three traffic classes. But a problem occurs when we use large packet sizes (for example, 10,000 bytes/packet) for the video conferencing traffic: we get nearly nothing at the receiver, which means the recorded receiving rates for all three classes are approximately equal to 0. Yet when we check the statistics recorded for the incoming ethernet link of the receiver in the meantime, everything is just fine. Finally, we realized that this is caused by a thrashing phenomenon. A large packet is fragmented into multiple IP packets in the IP layer during transport; when the sending rate exceeds the speed limit, some or all of these IP packets are dropped, resulting in nearly no complete video conferencing application packet being received after de-fragmentation, since the whole application packet is discarded by the ip_encap module even if only one IP packet of the application packet is dropped during transport. To solve this problem, we could use a small packet size (e.g., less than 1,500 bytes/packet). But considering special cases where a large packet size may be used, we moved the monitoring and reporting module down to the ip_encap layer in order to get more accurate IP statistics for the different service classes. The simulation results show that this solves the problem.

4 Simulations and Results


In the sections above we introduced our DS router design, implementation and simulation configuration. In this section, we give a detailed description of our simulation cases and their results. Throughout this section we use the parameter settings shown in Figure 7 for the four simulation cases.
Key Parameters                               Case I             Case II            Case III           Case IV
PS-queue size (Bytes)                        10,000             20,000             10,000             10,000
RIO-queue    Queue size (Bytes)              20,000             20,000             20,000             8,000
             Pa / Pb                         0.5 / 1.0          0.5 / 1.0          0.5 / 1.0          0.5 / 1.0
             Ta0 / Ta1 / Tb0 / Tb1           0.8/1.0/0.5/0.75   0.8/1.0/0.5/0.75   0.8/1.0/0.5/0.75   0.8/1.0/0.5/0.75
Premium      Sending rate (Bytes/s)          100,000            150,000            100,000            ----
traffic      Period                          15s ~ 50s          40s ~ 1m20s        30s ~ 1m           ----
Assured      Sending rate (Bytes/s)          100,000            150,000            150,000            150,000
traffic      Period                          15s ~ 50s          40s ~ 1m20s        45s ~ 1m30s        30s ~ 1m35s
Best-effort  Sending rate (Bytes/s)          200,000            200,000            200,000            20,000
traffic      Period                          30s ~ 2m           20s ~ 1m50s        20s ~ 1m50s        20s ~ 1m50s
Leaky        Token rate (Bytes/s)            100,000            50,000             150,000            150,000
Bucket       Bucket depth (Bytes)            200,000            200,000            200,000            200,000
Token        Token rate (Bytes/s)            200,000            50,000             200,000            50,000
Bucket       Bucket depth (Bytes)            200,000            200,000            200,000            500,000

Figure 7: Simulation Parameter Settings

4.1 Case I - Verify the PS-queue and RIO-queue

Figure 8: Test the PS-queue

Figure 9: Test the RIO-queue

The parameter settings for this simulation case are shown in Figure 7. Figures 8 and 9 show the results. In Figure 8 we verify the correctness of the PS-queue we have implemented. In the figure, the left hand side graph
shows the result with regular non-DS-enabled routers. We note that the premium traffic rate is not guaranteed (during the interval from 0m30s through 1m0s, the premium rate drops from 100,000 bytes/s to 65,000 bytes/s), since the regular routers do not discriminate the premium traffic from the best-effort traffic, so both have to contend for the same T1 link (with a bandwidth of 1.5 Mbps, or about 190,000 bytes/s). The right hand side graph shows the result with our DS-enabled routers. We note that the premium traffic rate is guaranteed, at the expense of lowering the best-effort traffic rate.

With the same simulation parameter settings, except that assured traffic is used instead of the premium traffic, Figure 9 verifies the correctness of the RIO-queue we have implemented. This simulation case shows that our PS-queue and RIO-queue implementation is correct.

4.2 Case II - Verify the Leaky Bucket and Token Bucket Scheme

Figure 10: Test the Leaky Bucket and the Token Bucket

The parameter settings for this simulation case are shown in Figure 7. Figure 10 shows the results of testing the leaky bucket and the token bucket. In this simulation, we inject the best-effort traffic as background load at a rate of 200,000 bytes/s. Since the T1 link can carry only about 190,000 bytes/s, both graphs in Figure 10 show that the actual maximum rate for the best-effort traffic is a little lower than 200,000 bytes/s. In the left hand side graph, we show the token bucket result. Notice the sudden jump of the assured rate at 0m40s. This is because the token bucket is full at the beginning of the assured traffic. We call it the Token Bucket Effect; it verifies that the token bucket can allow a certain amount of bursts (we give more explanation in Case IV). Accordingly, the sudden drop of the best-effort rate is due to the sudden jump of the assured rate. The graph also shows that the assured rate is bound to the token rate of 50,000 bytes/s, verifying that the token bucket works well.

In the right hand side graph, we show the result of the leaky bucket. Since the leaky bucket does not allow any burst, there is no sudden jump or drop in the graph. The premium rate is bound to the token rate of 50,000 bytes/s, verifying that the leaky bucket works well too. This simulation case shows that we have correctly implemented the leaky bucket scheme and the token bucket scheme in the DS-enabled router.

4.3 Case III - Verify the Service Differentiation Between the Three Service Classes

Figure 11: Test the Service Differentiation between Service Classes

The parameter settings for this simulation case are shown in Figure 7 too. In this case, we inject traffic from all three service classes into the network to verify that the service differentiation is correctly implemented. The left graph shows the result with regular routers. It is clear that there is no differentiation between the traffic classes; all of them have to contend for the T1 bandwidth. The right graph shows the result with our DS-enabled routers. It shows clearly that the premium traffic has the highest priority, and its rate is guaranteed at the expense of dropping the assured and best-effort rates (from 30s through 1m, the premium rate remains 100,000 bytes/s without any degradation). During the interval from 1m through 1m30s, where there is no premium traffic, the assured traffic has a higher priority than the best-effort traffic; its rate is guaranteed at the expense of dropping the best-effort rate. After 1m30s, both the premium and assured traffic shut down, and the best-effort traffic grabs all the T1 bandwidth from then on. This test case shows clearly that our DS-router implementation complies with the DS principle and our design.

4.4 Case IV - Verify the Re-marker

Figure 12: Test the Re-marker

The parameter settings for this simulation case are shown in Figure 7 too. In this test case, we verify the correctness of the implementation of the re-marker and the token bucket. We inject the assured traffic at a rate of 150,000 bytes/s, while the token rate for the token bucket is only 50,000 bytes/s, which means the achieved assured rate is 50,000 bytes/s, so the packet re-marking rate is 150,000 - 50,000 = 100,000 bytes/s. The original best-effort rate is 20,000 bytes/s. From the graph, we note that the assured rate
In this paper, we introduced the design and implementation of DS-enabled routers in the Internet under the OPNET simulation environment. We also conducted a large Figure 12: Test the Re-marker number of simulations based on our DS-enabled routers. Through these simulations, we not only veried the corcurve jumps from 0 to 150,000 bytes/s (the actual sending rectness of our design and implementation, but also studrate) then drops to 50,000 bytes/s (the token rate) and re- ied some Diffserv QoS features in a large scale network, mains this rate until it nishes. The sudden jump at 30s is such as priority dropping, QoS guarantees, token bucket due to the token bucket (Token Bucket Effect, see Case effect, and so on. Moreover, we introduced some probII). The reason is that at the beginning, the token bucket lems we encountered during our study and their solutions. is full, which means it allows bursty. Therefore the trafc We hope that our DS-enabled router can help further DS can go through it at its sending rate. But after a certain study. period of time when all extra token are used up, the rate There is still a lot of work to be done. In the future drops to the token rate. We conduct several simulations work, we plan to look into the following problems: with different values of the token bucket depth (the de The time scale stretch-out problem (Section 4.5). tailed data is not presented in this work because of the page limit). We nd that the width of this sudden jump TCP performance evaluation. In this work, we sdepends on the bucket depth - the larger the bucket depth tudied only UDP performance. TCP performance is is, the wider the jump will be. more complicated than UDP, especially in a Diffserv The best-effort rate jumps from 0 to 20,000 bytes/s at environment. 20s and jumps again to over 125,000 bytes/s at 35s when the assured rate drops to the token rate. The reason for the Build up a multi-domain DS simulation environmensecond jump is that all the out-of-prole packets in the ast, which means we need to design and implemensured trafc are re-marked as the best-effort packets. We t bandwidth brokers and suitable admission control can calculate the best-effort rate theoretically as follows: mechanism. Based on this multi-domain DS simuE_router_0 -30 -15 0 15 30 45 60 75 client1 router_3 client2 router_1

switch0

%,$ 0"<') "<JJ1FF <%.2"?8K1F6ML,!5D%.$ F%.2" C?%('F')<"FG <"6#%.43,!5L F%.2"CE1F*,"64NO"F%,G

where the overhead is 6,000 bytes/s 3 . So the theoretical actual best-effort rate is 20,000 + 100,000 + 6,000 = 126,000 bytes/s, which conforms to the simulation result.
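As a sanity check, the Case IV numbers from Figure 7 can be plugged in directly. The burst-width formula in the second half of the sketch is our own back-of-the-envelope derivation, not something stated in the paper: a full bucket drains at (sending rate - token rate), which with these parameters gives 5 s, consistent with the assured burst starting at 30s and ending at 35s.

    #include <stdio.h>

    int main(void)
    {
        /* Case IV parameters from Figure 7. */
        double assured_rate = 150000.0;   /* bytes/s, sending rate      */
        double token_rate   =  50000.0;   /* bytes/s, token bucket rate */
        double best_effort  =  20000.0;   /* bytes/s, original BE rate  */
        double bucket_depth = 500000.0;   /* bytes, token bucket depth  */
        double pkt = 1000.0, ip_hdr = 40.0;

        double remark_rate = assured_rate - token_rate;      /* 100,000 */
        double overhead    = assured_rate * (ip_hdr / pkt);  /*   6,000 */
        printf("theoretical best-effort rate: %.0f bytes/s\n",
               best_effort + remark_rate + overhead);        /* 126,000 */

        /* Our derivation: the full bucket drains at the excess rate,
         * so the Token Bucket Effect lasts depth / (rate - token rate). */
        printf("burst width: %.1f s\n", bucket_depth / remark_rate);
        return 0;
    }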

4.5 An Open Problem

When we conducted the simulations, we found a problem. From Figure 8, Figure 9 and Figure 11, we note that in all the left hand side graphs the time scale is stretched out. For example, in Figure 8, the premium traffic ends after 1m, while the actual traffic sent by the client ends before 1m (which can be seen in the right hand side graph in the same figure). Since this result comes from simulations using the regular routers provided by OPNET, we still do not know the reason behind it. In future work, we will look into this problem.

5 Conclusion and Future Work

In this paper, we introduced the design and implementation of DS-enabled routers under the OPNET simulation environment. We also conducted a large number of simulations based on our DS-enabled routers. Through these simulations, we not only verified the correctness of our design and implementation, but also studied some Diffserv QoS features in a large-scale network, such as priority dropping, QoS guarantees and the token bucket effect. Moreover, we introduced some problems we encountered during our study and their solutions. We hope that our DS-enabled router can help further DS study.

There is still a lot of work to be done. In future work, we plan to look into the following problems:

The time scale stretch-out problem (Section 4.5).

TCP performance evaluation. In this work, we studied only UDP performance. TCP performance is more complicated than UDP, especially in a Diffserv environment.

Building a multi-domain DS simulation environment, which means we need to design and implement bandwidth brokers and a suitable admission control mechanism. Based on this multi-domain DS simulation environment, we can conduct further research on Diffserv topics, for example, new pricing schemes, new PHBs, new Diffserv-aware routing protocols and so on.

References
[1] Ilias Andrikopoulos and George Pavlou. Supporting differentiated services in MPLS networks. In IWQoS'99, London, 1999.

[2] Roland Bless and Klaus Wehrle. Evaluation of differentiated services using an implementation under Linux. In IWQoS'99, London, 1999.

[3] Niklas Borg, Emil Svanberg, and Olov Schelen. Efficient multi-field packet classification for QoS purposes. In IWQoS'99, London, 1999.

[4] R. Braden and L. Zhang. Resource ReSerVation Protocol (RSVP) - Version 1 Functional Specification. RFC 2205, September 1997.

[5] Robert Braden, Deborah Estrin, Steven Berson, Shai Herzog, and Daniel Zappala. The Design of the RSVP Protocol. RSVP Project: Final Report, June 1995.

[6] S. Blake et al. An Architecture for Differentiated Services. RFC 2475, December 1998.

[7] S. Shenker et al. Integrated Services in the Internet Architecture: an Overview. RFC 1633, June 1994.

[8] F. Baker, D. Black, S. Blake, and K. Nichols. Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers. RFC 2474, December 1998.

[9] Tiziana Ferrari and Philip F. Chimento. A measurement-based analysis of expedited forwarding PHB mechanisms. In IWQoS'00, Pittsburgh, PA, USA, June 2000.

[10] K. Nichols, V. Jacobson, and L. Zhang. A Two-bit Differentiated Services Architecture for the Internet. RFC 2638, July 1999.

[11] M. Nabeshima, T. Shimizu, and I. Yamasaki. Fair queuing with in/out bit in core stateless networks. In IWQoS'00, Pittsburgh, PA, USA, June 2000.

[12] Andrew Odlyzko. Paris metro pricing: The minimalist differentiated services solution. In IWQoS'99, London, 1999.

[13] S. Shenker, C. Partridge, and R. Guerin. Specification of Guaranteed Quality of Service. RFC 2212, September 1997.

[14] Ion Stoica and Hui Zhang. Providing Guaranteed Services Without Per Flow Management. In ACM SIGCOMM'99, Cambridge, MA, USA, pages 81-94, October 1999.

[15] Ikjun Yeom and A. L. Narasimha Reddy. Impact of marking strategy on aggregated flows in a differentiated services network. In IWQoS'99, London, 1999.

[16] L. Zhang, S. Deering, D. Estrin, S. Shenker, and D. Zappala. RSVP: A New Resource ReSerVation Protocol. IEEE Network, September 1993.