
Traffic Control in High-speed ATM Networks

Peifang Zhou and Oliver W. W. Yang


School of Information Technology and Engineering
University of Ottawa
Ottawa, Ontario
Canada K1N 6N5
E-mail: yang@site.uottawa.ca

Abstract

This paper presents a framework of traffic control in high-speed ATM networks on both the connection level and the cell level. On the connection level, we consider traffic classification, bandwidth allocation, call admission control, and billing method. On the cell level, we examine queueing architecture, flow control, and scheduling. We also capture the interdependencies among various traffic control aspects to ensure a proper network operation. The key contribution of this paper is an integrated methodology to handle bursty data traffic. Based on the per-VC queueing architecture, we introduce 1) a simple bandwidth allocation mechanism which requires no complex computation for call admission control, 2) an improved credit-based flow control scheme which ensures lossless and congestion-free cell transport, 3) an innovative scheduling algorithm with throughput guarantee, and 4) an easy billing method which imposes tariff on bursty data flows without real-time measurement and processing.

Keywords: ATM networking, traffic control, quality of service issues, data traffic engineering, network control management.

1 Introduction

ATM networks supply switching and transport infrastructure in an integrated fashion to a wide multitude of broadband multi-media services. To provide such a wide spectrum of services with acceptable grades of quality of service (QoS), it is necessary to design flow regulation and resource management schemes and algorithms. With communication bandwidth well into the Gb/s range, the ratio between propagation delay and cell transmission time becomes very large, thereby rendering ineffective many of the traffic control schemes that rely on feedback from the network to regulate traffic flow. It becomes obvious that new approaches need to be developed in the area of traffic control.

Many broadband services demand that the underlying high-speed networks provide them with their characteristically required levels of performance. Key QoS parameters include delay, throughput, cell loss ratio, etc. Enforcing QoS parameters requires a complex interplay among call admission control, bandwidth allocation, flow control, queueing, link scheduling, priorities, discard policies, etc. The design of the entire end-to-end system and the interaction of various components are often more important than the optimization of individual elements. A related issue is the tradeoff between efficiency and implementation complexity. It is often preferable to have simple schemes that are less efficient but can be easily implemented at Gb/s speeds than to have schemes which are efficient but complex. Within this context, we present a framework of traffic control in high-speed ATM networks on both the connection level and the cell level. On the connection level, we consider traffic classification, bandwidth allocation, call admission control, and billing method. On the cell level, we examine queueing architecture, flow control, and scheduling. Unlike other papers which tend to focus on a particular topic, this paper considers the traffic control problem from the inter-operation point of view, and captures the interdependencies among various traffic control aspects to ensure a proper network operation. The emphasis is on the development of a traffic control scheme that can be easily implemented at Gb/s speeds in every node throughout high-speed networks. The key contribution of this paper is an integrated methodology to handle bursty data traffic. Based on the per-VC queueing architecture, we introduce

- a simple bandwidth allocation mechanism which requires no complex computation for call admission control,

- an improved credit-based flow control scheme to ensure lossless and congestion-free cell transport,

- an innovative scheduling algorithm with throughput guarantee, and

0-8186-9014-3/98 $10.00 © 1998 IEEE
- an easy billing method which imposes tariff on bursty data flows without real-time measurement and processing.

The rest of this paper is organized as follows. Section 2 presents our proposed framework of traffic control on both connection and cell levels. Section 3 conducts performance evaluation of our proposed schemes. Section 4 concludes the paper.

2 A Framework of Traffic Control

The objective of any traffic control scheme is to provide a stable, lossless, congestion-free environment to carry traffic while achieving as much multiplexing gain as possible. In this section, we present a framework of traffic control on both the connection level and the cell level. On the connection level, we address the issues of traffic classification, bandwidth allocation, call admission control (CAC), and billing method. We then translate the requirements on the connection level into those on the cell level, and we consider the queueing architecture within switches, flow control between switches, and link scheduling.

2.1 Connection-level traffic control

In this subsection, we examine the issues of traffic classification, bandwidth allocation, call admission control, and billing method on the connection level of traffic control.

2.1.1 Traffic classification

ATM supports transport of mixed traffic, and there are essentially two types of traffic within an ATM network. One type is continuous stream traffic such as voice and video, which is sensitive to delay but tolerant to a certain level of cell loss. The other type is bursty data traffic such as IP packets, which is sensitive to cell loss but can accommodate delay. With a well-tuned flow control mechanism [10], we can eliminate cell loss for data traffic. Therefore we can classify traffic based only on the delay attribute. In our proposed framework, we divide traffic into two types: delay-sensitive traffic and delay-insensitive traffic.

2.1.2 Bandwidth allocation

Feedback control is not appropriate for delay-sensitive traffic, because cells are only meaningful to the destination when they arrive in time, and they are completely useless if they arrive outside their specified delay boundary. Feedback control will inevitably introduce unnecessary delays, which cannot be tolerated by delay-sensitive traffic. Instead, we can simply eliminate any feedback control on delay-sensitive traffic, give it priority treatment, and let it go through the transportation pipe without any impediment. This is the best a network can do for delivering delay-sensitive cells. However, this arrangement requires that resources be reserved along the path from the source to the destination. In other words, bandwidth should be reserved and allocated by the peak rate.

[Figure 1: Bandwidth sharing between delay-sensitive and delay-insensitive traffic. The figure plots throughput against time: delay-insensitive traffic fills the gap between the delay-sensitive traffic curve and the link capacity.]

Although we use peak-rate allocation for delay-sensitive traffic, we can still achieve high bandwidth utilization by recycling unused bandwidth: we fill in the gap between the actual bandwidth of delay-sensitive traffic and the physical link capacity with delay-insensitive traffic. The key idea is that any bandwidth unused by delay-sensitive connections is momentarily made available to delay-insensitive ones. The scenario is depicted in Figure 1. Delay-insensitive traffic tends to be bursty, and such connections have different QoS requirements. They have a high peak-to-average ratio, and they are mainly concerned with instantaneous bandwidth throughput. When a connection source transmits at peak rate, which results in a burst arriving at an intermediate node/switch, we need to increase the bandwidth allocated to the connection as much as possible to get the burst out of the switch quickly. In other words, we need to devise a scheduling scheme which can adjust bandwidth according to the state of the connection. On the other hand, we have to recognize that each link in a network has a limited capacity and is shared among many connections; therefore we have to treat connections fairly and properly allocate unused bandwidth to each individual connection.

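As a minimal sketch of this recycling idea (hypothetical names; rates in arbitrary units, not from the paper): at any instant, delay-insensitive traffic may use whatever the delay-sensitive connections leave idle, up to the link capacity.

```python
def available_for_delay_insensitive(link_capacity, ds_actual_rates):
    """Bandwidth momentarily available to delay-insensitive traffic:
    the gap between the physical link capacity and the *actual*
    (not the reserved peak) delay-sensitive usage."""
    return link_capacity - sum(ds_actual_rates)
```

For example, on a link of capacity 100 where delay-sensitive sources currently send at rates 30 and 20, a total of 50 is momentarily available to delay-insensitive connections.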
We propose that each delay-insensitive connection specify or negotiate with the underlying network an average transport rate or bandwidth it requires during the connection's setup time. This average bandwidth will be guaranteed by the network. Nodes/switches inside the network should re-allocate any unused bandwidth, and therefore they shall allow a higher transmission rate when a burst arrives. In other words, the instantaneous transmission rate can be higher than the average bandwidth which is agreed upon during the connection's setup time. This is one of the connection-level requirements which need to be implemented on the cell level in each ATM switch. We shall present a cell scheduling algorithm to satisfy the above requirement in the next section.

2.1.3 Call admission control

Based on our proposed traffic classification and bandwidth allocation discussed in the previous subsections, call admission control is fairly simple and straightforward. A call or connection which requires bandwidth bw will be admitted if the following condition is satisfied at every node/switch along the path from source to destination:

    bw + Σ_{existing connections} bw_i ≤ C_L,  (1)

where C_L is the outgoing link capacity. In other words, a call/connection will be admitted if its required bandwidth does not exceed what is left in terms of bandwidth capacity. For a delay-sensitive connection, bw represents the peak rate. For a delay-insensitive connection, bw is the average transmission rate. If a connection is admitted, the bandwidth requirement bw represents the commitment made by a node/switch to transmit cells at rate bw calculated over a long period of time. For delay-insensitive connections, the actual throughput depends on the source rate and the load condition at each node. As we shall see later, the throughput can go much higher than bw in a light load condition because of an innovative scheduling algorithm implemented in every node. From another point of view, bw is the minimum transmission rate guaranteed by a node under the heavy load condition.

2.1.4 Billing method

The current Internet employs an "all you can eat" pricing model. However, giving everyone unlimited access to a network with finite bandwidth is a recipe for congestion, as epitomized by the well-known phenomenon of the "World Wide Wait". It becomes inevitable that we have to meter network bandwidth usage, providing quality of service (QoS) and making network performance predictable. Real-time metering is preferable, but it doesn't work in practice, especially in high-speed networks. The reason is quite simple: real-time measuring generates an astronomical amount of data which needs to be stored and processed. The storage and processing power required are just too demanding to be fulfilled. In the authors' point of view, real-time metering is unnecessary. We can simply charge each connection based on the rate it requested. For delay-sensitive connections, pricing will be based on the peak rate. Such connections pay a premium for a higher bandwidth than what might be necessary, but they obtain a higher priority and resource guarantee to achieve minimum delay. For delay-insensitive connections, pricing will be based on the average rate specified during the connection's setup time. The network will guarantee the bandwidth requested in congested situations, but will provide a higher throughput if the load is light. In other words, delay-insensitive connections can reap statistical multiplexing gain by reusing bandwidth left by delay-sensitive connections. If a delay-insensitive connection wants to increase the transmission rate provided by the network, it can simply order a higher average bandwidth and pay accordingly. In summary, our pricing model is practical, because it doesn't require any real-time measurement and processing.

2.2 Cell-level traffic control

In this subsection, we examine the issues of queueing, flow control, and scheduling on the cell level of traffic control.

2.2.1 Queueing architecture

In most switch architectures proposed in the literature [1, 9], ATM switches were port-oriented. They were based on the "first-in, first-out" (FIFO) principle: cells from various virtual connections (VCs) leaving an output port were organized into a FIFO queue and processed in the order in which they arrived. The functionality of traditional ATM switches was limited to routing cells from input ports to output ports. As we all know, the operations of ATM networks are not just limited to the transport of cells from sources to their destinations. ATM networks have to handle ATM traffic efficiently and reliably. The ATM switch is an essential element of an ATM network, and it should be designed in line with the operations of the network. We argue that ATM switches should be traffic-flow-oriented and should handle each traffic flow separately. New ATM switches that are currently being developed will provide per-VC queueing [10], and therefore they can handle each traffic flow or connection separately.

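The admission rule of Section 2.1.3, which admits a connection only if its requested bandwidth fits on every link of its path, can be sketched as follows (a minimal illustration with hypothetical names; per-link bookkeeping of already-allocated bandwidth is assumed):

```python
def admit(requested_bw, path_links):
    """Admit a connection iff, on every link along the path, the
    requested bandwidth fits into the capacity left over by the
    bandwidths already allocated.

    requested_bw: peak rate for a delay-sensitive connection,
                  average rate for a delay-insensitive one.
    path_links:   list of dicts with 'capacity' (C_L) and
                  'allocated' (sum of admitted bandwidths).
    """
    for link in path_links:
        if link["allocated"] + requested_bw > link["capacity"]:
            return False            # reject: would exceed C_L somewhere
    for link in path_links:         # commit only after every link passes
        link["allocated"] += requested_bw
    return True
```

Note that no per-cell or statistical computation is involved; admission is a single comparison per link, which is what makes the scheme attractive at Gb/s speeds.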
The shift from port-oriented design to flow-oriented design in the structure of ATM switches is significant, and it has profound implications for the operations of ATM networks. Per-VC queueing is the cornerstone of our proposed framework of traffic control on the cell level. The other two components of cell-level traffic control, the flow control mechanism and the link scheduling algorithm, are based on the per-VC queueing architecture.

2.2.2 Flow control

Since we use peak-rate allocation for delay-sensitive traffic, we can eliminate flow control on delay-sensitive connections. Cells which belong to those connections receive priority treatment, and they will traverse interim nodes/switches inside the network without any hurdle. Flow control is, however, necessary for delay-insensitive traffic. We use average bandwidth for delay-insensitive connections, but the instantaneous throughput can be much higher than the average. Therefore flow control is essential for delay-insensitive connections to ensure a lossless, congestion-free environment.

The credit-based flow control for the ABR service was rejected by the ATM Forum in 1994, because it required per-VC queueing in switches and was considered too complex to implement. Recent developments in switch design have made it possible to implement per-VC queueing in ATM switches [10]. Credit-based flow control offers tremendous advantages over its rate-based counterpart, and in this paper we argue that credit-based flow control should be used in ATM WANs.

Credit-based flow control is fast and effective. Control theory dictates that any control loop must operate faster than the device it controls. This is impossible in ATM WANs, where the round-trip time for any control signal is limited by the speed of light (this is physics, and round-trip delay can't be improved!), and the offered traffic load can fluctuate faster than the round-trip time. In the rate-based algorithms, RM cells are used for traffic management, and all RM cells experience round-trip delays. In other words, an ABR traffic source relies on network-wide knowledge (returned RM cells) to adjust its sending rate, and controls on traffic can't be enforced quickly and effectively due to the long round-trip delay. This is why hop-by-hop control is preferred over end-to-end control. All traffic sources can respond promptly once credits are available from the receiver of the next hop, without waiting for any feedback control signal from the destination at the far end.

Another advantage of using credit-based flow control is the guarantee of zero cell loss. This is extremely important for loss-sensitive data traffic. As we all know, for data traffic the loss of even one cell can trigger a retransmission of thousands of cells. Credits represent the amount of buffer space available at the next-hop receiver. If the transmitter does not exceed this quantity of data, there is no risk of buffer overflow. Even under extreme overloads, queue lengths inside switches can't grow beyond what the credits allow. In contrast, rate-based approaches cannot guarantee zero cell loss. Under extreme overloads, queues can easily grow large, resulting in buffer overflow and cell loss. Credit-based schemes address the root cause of congestion, which is the unexpected arrivals from various sources, and demand that no cell be transmitted to the receiver without its permission. As a result of this kind of strict restriction, the feedback control loop is inherently stable and no cell will ever be lost.

In our hop-by-hop credit-based flow control scheme, there are two credit types, link credit and VC credit, between two adjacent switches. Link credit represents the maximum number of cells the upstream switch can transmit (or the maximum number of cells the downstream switch can receive) without overflowing the buffer in the downstream switch. VC credit is set on a per-VC basis, and it controls the maximum number of cells the upstream switch can send on the VC (or the maximum number of cells the downstream switch can receive on the VC). In summary, there is one link credit and many VC credits between two adjacent switches. Within the link credit and VC credit limits, the upstream switch can transmit cells on any VC.

It is worthwhile to point out that our proposal is different from the original credit-based flow control scheme and credit update protocol (CUP) [6]. In [6], flow control was receiver-oriented, credits maintained by the sender were updated by the receiver on a per-VC basis, and there was no link credit. However, our flow control is both receiver- and sender-oriented: the receiver controls the sender's credit limits (both link credit and VC credits), but the sender can select any VC to transmit within the limits.

We set limits on VC credits to address the issue of fairness, so that no single VC or group of VCs will dominate. The sum of individual VC credits can be much greater than the link credit. This gives the upstream switch flexibility as to which VC to dispatch cells from. It allows a burst of cells to "blast" through the switch if the traffic load is moderate. This is very desirable for bursty data traffic.

In summary, we use peak-rate bandwidth allocation and no flow control for delay-sensitive traffic, and hop-by-hop credit-based flow control for delay-insensitive traffic. Delay-sensitive traffic is transported without any barrier, while delay-insensitive cells will not be admitted to the network after credits are exhausted because of congestion within the network.

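A minimal sketch of the upstream switch's transmit check under this two-level credit scheme (hypothetical names; actual cell transmission and the receiver side are elided): a cell on a VC may leave only while both the link credit and that VC's credit are positive.

```python
class CreditedLink:
    """Hop-by-hop credit state kept at the upstream switch.

    link_credit: cells the downstream buffer can still absorb overall.
    vc_credits:  per-VC caps, so no single VC (or group of VCs)
                 dominates; their sum may well exceed link_credit.
    """
    def __init__(self, link_credit, vc_credits):
        self.link_credit = link_credit
        self.vc_credit = dict(vc_credits)

    def can_send(self, vc):
        return self.link_credit > 0 and self.vc_credit.get(vc, 0) > 0

    def send(self, vc):
        if not self.can_send(vc):
            return False            # cell held back: lossless by design
        self.link_credit -= 1
        self.vc_credit[vc] -= 1
        return True

    def credit_update(self, vc, link_inc, vc_inc):
        """Receiver-driven replenishment, issued after the downstream
        switch frees buffer space."""
        self.link_credit += link_inc
        self.vc_credit[vc] = self.vc_credit.get(vc, 0) + vc_inc
```

Within these limits the sender is free to pick any VC, which is what lets a moderate-load burst "blast" through while overload merely stalls cells upstream instead of dropping them.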
In essence, all traffic flows are well regulated, either by peak-rate constraint or by available credits. In this fashion, we get rid of the root cause of congestion, which is the unpredicted arrival of traffic. Therefore ATM networks operate in a stable, congestion-free, and lossless (for delay-insensitive traffic) environment.

2.2.3 Scheduling

The per-VC queueing architecture facilitates scheduling of cell transmission, because the scheduler can obtain queue information on a per-VC basis. It is much easier for the scheduler to make decisions to provide a QoS guarantee for each and every VC connection. In this subsection, we propose a scheduling algorithm for per-VC queueing ATM switches which allows a much higher instantaneous throughput than the average. This is one of the requirements from the connection-level traffic control that needs to be implemented on the cell level.

The operation of each outgoing link is time-slotted. Each time slot corresponds to the transmission time of one ATM cell. There is a link scheduler for every outgoing link. The link scheduler selects cells for transmission on a frame basis (logically) with N slots in a frame. For a connection i with an average bandwidth requirement of bw_i, the slot ration r_i in a frame is

    r_i = (bw_i / C_L) × N,  i ∈ {DS, DI},  (2)

where DS is the set of delay-sensitive connections and DI is the set of delay-insensitive connections.

At the beginning of a frame, the scheduler for an output link inspects all VC queues associated with the link, and schedules transmission of cells in each VC queue up to the VC's ration. We impose credit-based flow control for delay-insensitive connections; therefore cell transmission is subject to the availability of credit. This condition is implied throughout this subsection and we won't repeat it from now on.

Since we use peak-rate allocation for delay-sensitive connections, cells in delay-sensitive VC queues never exceed their rations. Let n_i be the number of cells in the VC queue for connection i at the beginning of a frame, and let DS represent the set of delay-sensitive connections; we have

    n_j ≤ r_j,  j ∈ DS.  (3)

For delay-insensitive connections, the number of cells at the beginning of a frame can be larger than their rations. We divide the delay-insensitive connections into two sets: DIU and DIO. DIU designates the set of delay-insensitive connections for which the numbers of cells at the beginning of a frame are under their rations, i.e.,

    DIU = {delay-insensitive connection k : n_k ≤ r_k}.  (4)

DIO represents the set of delay-insensitive connections for which the numbers of cells at the beginning of a frame are over their rations, i.e.,

    DIO = {delay-insensitive connection l : n_l > r_l}.  (5)

From the scheduling point of view, some slots in an N-slot frame may be un-allocated, and not all slots allocated to connections in DS and DIU are utilized. We can calculate the extra slots unused by connections in the sets DS and DIU as follows:

    es = (N − Σ_i r_i) + Σ_{j_i} (r_{j_i} − n_{j_i}) + Σ_{k_i} (r_{k_i} − n_{k_i}),
         i ∈ {DS, DI},  j_i ∈ DS,  k_i ∈ DIU.  (6)

To achieve multiplexing gain among connections, we need to re-allocate those extra slots left by connections in the sets DS and DIU, and use them to dispatch cells which belong to connections in DIO. In the meantime, connections in DIU should be credited for "freeing up" slots, which allows the scheduler to transmit cells for other connections. They should be compensated by being allowed to transmit more cells than their rations later on. Data traffic is bursty. When a burst arrives, a connection may experience the transition from the set DIU to DIO, and it is fair for the connection to claim previously unused rations to transmit more cells and get the burst quickly out of the VC queue. In other words, the scheduler should schedule more cell transmissions for connections which haven't used up their rations. This should be viewed as part of the scheduler's commitment to providing average bandwidth to every delay-insensitive connection. For this purpose, we introduce a grant variable, g_i for connection i, to keep track of unused ration on a per-VC basis. The value of g_i is non-negative. It is initialized as 0 (zero). It is incremented when fewer cells than connection i's ration are scheduled at the beginning of a frame, and it is decremented when more cells (up to g_i) than connection i's ration are scheduled during a frame.

Our proposed scheduler calculates the extra slots available in each frame, and selects connections which have grants available for transmitting more cells than their rations. The selection is based on the grant variable. The larger the value of grant, the higher priority a connection has. If grants are used up for all connections but extra slots are still not exhausted, the scheduler will select connections in the set DIO for cell transmission. Based on our billing policy that end users pay more for higher bandwidth, the scheduler will distribute extra slots based on the bandwidth requirement. The higher the bandwidth a connection requires, the higher priority the connection has.

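The ration and extra-slot bookkeeping of eqns. (2) through (6) can be sketched per frame as follows (a simplification with hypothetical names: credit checks, grants, and the absorption-based redistribution of the extra slots are omitted):

```python
def schedule_frame(N, C_L, conns):
    """One frame of ration-based scheduling.

    conns: list of dicts with 'bw' (average or peak bandwidth) and
    'n' (cells queued at frame start).  Returns the cells scheduled
    per connection within their rations, and the extra slots left
    over for redistribution to over-ration (DIO) connections."""
    rations = [round(c["bw"] / C_L * N) for c in conns]  # eqn. (2), integer slots
    extra = N - sum(rations)                             # un-allocated slots
    sched = []
    for c, ration in zip(conns, rations):
        take = min(c["n"], ration)     # serve each VC queue up to its ration
        extra += ration - take         # idle DS/DIU slots accrue (eqn. (6))
        sched.append(take)
    return sched, extra
```

For instance, with N = 10, C_L = 100 and three connections of rates 30, 20 and 40 holding 1, 5 and 0 queued cells, the rations are 3, 2 and 4 slots; only 1, 2 and 0 cells are taken, leaving 7 extra slots for the backlogged connection.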
Here we need to introduce an absorption variable to keep track of extra bandwidth consumption on a per-VC basis. The absorption variable, a_i for connection i, is non-negative. It starts at 0 (zero), and it is incremented when a connection takes advantage of extra slots available in a frame to transmit more cells. The smaller the value of absorption, the higher priority a connection has in consuming any extra slots. To maintain fairness among connections which have required different bandwidths, and to reflect the tariff imposed on different bandwidths, any extra bandwidth left by connections in DS and DIU should be distributed in proportion to the bandwidth requirements of the connections in the set DIO. In other words, extra slot usage needs to be normalized by the bandwidth requirement. If m_i extra slots are used by connection i, its absorption variable a_i should be updated as follows:

    a_i' = a_i + K × (m_i / bw_i),  (7)

where K is a positive constant, and a_i' is the updated value of a_i.

Note that the calculation in the above scheduling algorithm is performed on a per-connection basis, not on a per-cell basis. This leads to a significant reduction in computational effort when the algorithm is compared with other ones [2, 7, 8] which mark a time-stamp on every arriving cell. Therefore our discipline is much simpler than virtual time-stamp types of scheduling algorithms. This makes our algorithm a feasible candidate for practical implementation in per-VC queueing ATM switches.

3 Performance Analysis

In this section, we conduct performance analysis of our proposed framework of traffic control. We focus on the following most important QoS parameters: delay, throughput, and cell loss ratio.

3.1 Delay and Delay Jitter Bounds

In a previous section, we proposed a logical frame structure of N cell slots for a link with capacity C_L. The frame duration is denoted as T. Now we derive the end-to-end delay bound, assuming that the same N-slot frame structure and link capacity C_L are used within the network. The end-to-end delay is defined to be the time elapsed between the arrival of a cell at the first-hop access switch and the receipt of the cell at the last-hop access switch.

First we obtain the delay bound at the first-hop access switch. We assume that there are k connections whose peak rate summation is less than C_L/N, i.e.,

    Σ_{i=1}^{k} pr_i ≤ C_L / N,  (8)

where pr_i is the peak rate for connection i. Because the rate summation of the above k connections is not greater than the bandwidth allocated to one slot in an N-slot frame structure, we can arrange cells from these k connections to share the same slot position in consecutive frames.

There are two components of the delay experienced by a cell arriving at the first-hop access switch: the waiting time to be transmitted and the service time for the transmission of the cell. We first calculate the waiting time encountered by the cell. In the worst case, one cell from each of the k connections arrives at the same time, and they all miss the start of a frame. They have to wait for the start of the next frame. We mark the cell from one connection as "tagged". This "tagged" cell has to wait for the finish of transmission of cells from the other (k − 1) connections, and therefore the "tagged" cell experiences the longest waiting time, which is

    d_w ≤ T + (k − 1)T + T = (k + 1)T.  (9)

The first term in the above equation represents the upper bound of the waiting time for the start of a new frame, the second term is the time elapsed to transmit (k − 1) cells in the same slot position, and the last term is the upper bound between the start of a frame and the start of transmission of the "tagged" cell.

Now we calculate the second component of the delay at the first-hop access switch, which is the cell transmission time. Let l denote the cell length. The cell transmission time m is

    m = l / C_L.  (10)

Combining the above two terms, we can express the delay d_1 at the first-hop access switch as

    d_1 = d_w + m ≤ (k + 1)T + l / C_L.  (11)

Since cells from the above k connections share the same slot position, the superposition of these k connections will be treated as a single connection coming out of the access switch and going into the core of the network. Reusing eqn. (9), we have delays introduced at the core switches with a maximum of 2T each. If there are h hops between the first-hop access switch and the last-hop access switch, the maximum delay introduced by switches other than the first one will be h × 2T. The delay introduced by all switches is given by

    d_sw = d_1 + h × 2T ≤ (k + 1)T + l / C_L + h × 2T.  (12)

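The switch-delay bound of eqn. (12) is simple enough to evaluate directly; a small sketch (illustrative values only, not taken from the paper):

```python
def switch_delay_bound(k, T, l, C_L, h):
    """Upper bound of eqn. (12): worst-case wait of (k+1) frames at the
    first-hop access switch, one cell transmission time, and at most
    2T at each of the h downstream switches."""
    return (k + 1) * T + l / C_L + h * 2 * T
```

With k = 3 slot-sharing connections, a frame duration T = 1 (in some time unit), a cell of l = 424 bits on a link of C_L = 424 bits per unit time, and h = 2 further hops, the bound is (3+1)·1 + 1 + 2·2·1 = 9 time units.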
Adding the the propagation delay prop to the above Proof: L ' Ve use the absorption variable to keep track
d,,, we will attain the end-to-end delay d , of e x t r a slot usage. If there are any extra slots left
by connections in D S and D I U , PVQS sorts out con-
1
d =pr~p+d,, 5 prop+ ( I : + 1 ) T + c
'+ h x 2T. (13) nections in the set DIO according to the absorption
L variable. Connections with less absorption values have
a higher priority to use extra slots to transmit more
Based on the above analysis and derivation, we can eas-
cells. After cell transmission, a connection's absorption
ily obtain the upper bound and lower bound of the end-
variable is increased and the connection has less chance
to-end delay d:
to use any more slots. In the long run, the differences
I among absorption variables are much smaller than ab-
d,,,ax 5 p r o p + ( I : +1)T+ -
CL
+ h x 2T (14) sorption variables themselves. Let I and J represent
extra sloi s consumed by connections i and j during the
interval (0,x). Ignoring the small difference between
absorption variables a_1 and a_2, we have

The delay jitter d_j is defined as the difference between d_max and d_min. The upper bound for delay jitter can be derived from eqn. (14) and eqn. (15). Q.E.D.

If the rate of a connection is so large that at least two cells have to be forwarded in a frame, we can decompose the connection into sub-connections, such that we can use the above derivation to obtain similar results.

3.2 Throughput

Our per-VC queueing scheduler (PVQS) has attractive properties that can be expressed by the following lemmas and theorem.

Lemma 1: PVQS provides bandwidth guarantee for every connection.

Proof: For an outgoing link with capacity C_L and N (logical) slots in a frame, we allocate slot ratio r_i for connection i according to eqn. (2):

$$ r_i = \frac{bw_i \times N}{C_L}, \quad i \in \{DS, DI\}. $$

The bandwidth allocated by PVQS to connection i is therefore (r_i / N) x C_L = bw_i, the requested bandwidth. Q.E.D.

Lemma 2: PVQS is fair in the sense that it allocates any extra bandwidth in proportion to requested bandwidths.

Theorem: For delay-insensitive connection i, the extra bandwidth received beyond the requested average bandwidth bw_i is

$$ \frac{bw_i}{\sum_{i \in DI} bw_i} \times \Big( C_L - \sum_{i \in DI} bw_i - \sum_{j \in DS} bw_j \Big). $$

Proof: From Lemma 2, any remaining bandwidth capacity, which is C_L - \sum_{i \in DI} bw_i - \sum_{j \in DS} bw_j, will be distributed in proportion to the requested average rates of the delay-insensitive connections. For delay-insensitive connection i, which has requested a rate bw_i, its portion of the total requested average bandwidth is bw_i / \sum_{i \in DI} bw_i; therefore connection i can benefit from the following extra bandwidth beyond bw_i:

$$ \text{extra bandwidth} = \frac{bw_i}{\sum_{i \in DI} bw_i} \times \Big( C_L - \sum_{i \in DI} bw_i - \sum_{j \in DS} bw_j \Big), \quad i \in DI,\ j \in DS. \quad \text{Q.E.D.} $$

3.3 Cell loss ratio

We consider the buffer requirement for delay-sensitive and delay-insensitive connections in the core switches. For delay-sensitive connections, we use peak-rate allocation and reserve enough slots accordingly. The maximum number of delay-sensitive cell arrivals in an N-slot frame is N. Therefore the buffering requirement M_DS for delay-sensitive connections is N cells, i.e.,

$$ M_{DS} = N, \quad \text{for delay-sensitive connections.} \tag{16} $$
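As a rough sketch (ours, not the paper's; the function names and the 150 Mb/s example are assumptions for illustration), the slot allocation of eqn. (2) and the proportional redistribution of leftover capacity among delay-insensitive connections can be computed as follows:

```python
# Sketch (not from the paper) of PVQS-style frame allocation:
# each connection i gets slot ratio r_i = bw_i * N / C_L (eqn. 2),
# and leftover link capacity is shared among delay-insensitive (DI)
# connections in proportion to their requested average rates.

def alloc_slots(bw, C_L, N):
    """Slot ratio per connection for an N-slot frame on a link of capacity C_L."""
    return {cid: rate * N / C_L for cid, rate in bw.items()}

def extra_bandwidth(bw_di, bw_ds, C_L):
    """Extra bandwidth each DI connection receives beyond its requested rate."""
    leftover = C_L - sum(bw_di.values()) - sum(bw_ds.values())
    total_di = sum(bw_di.values())
    return {cid: rate / total_di * leftover for cid, rate in bw_di.items()}

# Hypothetical example: 150 Mb/s link, two DS and two DI connections.
C_L = 150e6
bw_ds = {"ds1": 40e6, "ds2": 20e6}   # delay-sensitive, peak-rate allocated
bw_di = {"di1": 30e6, "di2": 10e6}   # delay-insensitive, average-rate allocated

r = alloc_slots({**bw_ds, **bw_di}, C_L, N=100)
extra = extra_bandwidth(bw_di, bw_ds, C_L)

print(r["ds1"])                       # 40e6 * 100 / 150e6, about 26.67 slots
print(extra["di1"], extra["di2"])     # leftover 50 Mb/s split 3:1
```

Note that the two DI connections share the 50 Mb/s of unreserved capacity in the 3:1 ratio of their requested averages, matching the theorem above.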

Now we consider the case for delay-insensitive flows. In our flow control proposal, link credit represents the maximum buffer space available in the receiver (downstream switch). As long as the sender (upstream switch) does not send more cells than the link credit allows, no buffer overflow will occur in the receiver.

However, the receiver has to take into account the link delay before the sender receives any link credit update. In the worst case, the maximum number of cells N_max that can be transmitted by the sender before it receives feedback from the receiver is

$$ N_{max} = C_L \times RTT, $$

where C_L is the link capacity and RTT is the round-trip time between a sender and a receiver. Therefore, the minimum buffer size required at the downstream switch to prevent buffer overflow is given by

$$ B_{min} = C_L \times RTT. $$

As long as the downstream switch maintains a buffer size of at least B_min, there is no cell loss.
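As a small illustration (ours, not the paper's; the helper name, the 155 Mb/s link, and the 2 ms round trip are assumed for the example, while the 53-byte cell size is standard ATM), the B_min = C_L x RTT sizing can be evaluated with the link capacity expressed in cells per second:

```python
# Sketch (not from the paper): minimum downstream buffer for lossless
# credit-based flow control, B_min = C_L * RTT cells -- the worst-case
# number of cells in flight before a credit update can reach the sender.
import math

CELL_BITS = 53 * 8  # a standard ATM cell is 53 bytes

def min_buffer_cells(link_bps, rtt_s):
    """Worst-case cells in flight during one round trip, rounded up."""
    cells_per_s = link_bps / CELL_BITS
    return math.ceil(cells_per_s * rtt_s)

# Hypothetical example: 155 Mb/s link with a 2 ms round-trip time.
b_min = min_buffer_cells(155e6, 2e-3)
print(b_min)  # on the order of several hundred cells
```

This makes the tradeoff in the text concrete: the required lossless buffer grows linearly with both link speed and round-trip time.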
In summary, our proposed framework of traffic control imposes an upper bound on the end-to-end delay for delay-sensitive connections. It also provides throughput guarantee for delay-insensitive connections, and ensures zero cell loss.

4 Conclusion

In this paper, we introduced a framework of traffic control in high-speed ATM networks on both the connection level and the cell level. On the connection level, we considered traffic classification, bandwidth allocation, call admission control, and billing method. On the cell level, we examined queueing architecture, flow control, and scheduling. This paper considered the traffic control problem from the inter-operation point of view, and captured the interdependencies among various traffic control aspects to ensure proper network operation. The key contribution of this paper was an integrated methodology to handle bursty data traffic. Based on the per-VC queueing architecture, we introduced 1) a simple bandwidth allocation mechanism which required no complex computation for call admission control, 2) an improved credit-based flow control scheme which ensured lossless and congestion-free cell transport, 3) an innovative scheduling algorithm with throughput guarantee, and 4) an easy billing method which imposed tariff on bursty data flows without real-time measurement and processing.

Acknowledgment

This paper is partially supported by an NSERC Operating Grant under contract #OGP0042878.