Atm 2
• Other parameters also used to characterize traffic:
– cell delay variation tolerance (CDVT)
– burst tolerance (BT)
Traffic characterization (2)
• Peak cell rate (PCR):
– This is the maximum rate, expressed in cells/s, that can be
submitted by a source to an ATM network.
– Often, we use the peak bit rate, instead of the peak cell rate.
One can be obtained from the other given that we know the
specific AAL that is used.
– The minimum allowable interval between cells is T=1/PCR
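The conversion between the peak bit rate and the PCR, and the minimum inter-cell interval T = 1/PCR, can be sketched as follows. This is an illustrative helper, not from the text; it assumes, for the sake of the example, an AAL that carries 48 bytes of user data per 53-byte cell.

```python
# Hypothetical helpers (names are ours): convert a peak bit rate to a peak
# cell rate (PCR), given how many bytes of user data the AAL places per cell,
# and compute the minimum allowable inter-cell interval T = 1/PCR.
def peak_cell_rate(peak_bit_rate_bps: float, payload_bytes: int = 48) -> float:
    """PCR in cells/s for a given peak bit rate (assumed AAL payload size)."""
    return peak_bit_rate_bps / (payload_bytes * 8)

def min_cell_interval(pcr_cells_per_s: float) -> float:
    """Minimum allowable interval between cells, T = 1/PCR, in seconds."""
    return 1.0 / pcr_cells_per_s
```

For example, a 2-Mbps peak bit rate over a 48-byte payload gives a PCR of about 5208 cells/s, and hence T of about 192 μsec.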
[Figure: cells arriving to a finite-buffer ATM multiplexer served by a link; cells that find the buffer full are lost. The accompanying curve plots the cell loss rate against the burstiness of the source.]
From queueing theory, we know that as the arrival rate increases, the cell loss
increases as well. What is interesting to observe is that a similar behavior can
also be seen for the burstiness of a source. The curve shows qualitatively how
the cell loss rate increases as the burstiness increases while the arrival rate
remains constant.
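The qualitative behavior described above can be reproduced with a toy slot-by-slot simulation. This is an illustrative sketch with made-up parameters, not a model from the text: two deterministic sources with the same average rate (0.5 cells/slot) feed a finite buffer served at one cell per slot, and only the burstier one loses cells.

```python
# Toy simulation of a finite-buffer ATM multiplexer. The link serves one cell
# per slot; cells arriving to a full buffer are lost. All parameters invented.
def simulate(arrivals_per_slot, buffer_size, slots):
    queue = lost = 0
    for slot in range(slots):
        for _ in range(arrivals_per_slot(slot)):
            if queue < buffer_size:
                queue += 1
            else:
                lost += 1          # buffer full: the cell is dropped
        if queue > 0:              # serve one cell per slot
            queue -= 1
    return lost

BUF, SLOTS = 5, 10_000

# Smooth source: 1 cell every other slot (average rate 0.5 cells/slot).
smooth = simulate(lambda s: 1 if s % 2 == 0 else 0, BUF, SLOTS)

# Bursty source: 2 cells/slot for 10 slots, then 30 idle slots (same average).
bursty = simulate(lambda s: 2 if s % 40 < 10 else 0, BUF, SLOTS)
```

The smooth source loses nothing, while the bursty one overflows the buffer during each burst, despite the identical average arrival rate.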
Classification of ATM sources (1)
ATM sources are classified into constant bit rate (CBR) and
variable bit rate (VBR).
A CBR source generates the same number of bits every unit time
A VBR source generates traffic at a rate that varies over time.
Examples of CBR sources are circuit emulation services such as T1
and E1, unencoded voice, and unencoded video.
Examples of VBR sources are encoded video, encoded voice with
suppressed silence periods, IP over ATM, and frame relay over
ATM.
The arrival process of a CBR source is easy to characterize. The
arrival process of a VBR source is more difficult to characterize and
it has been the object of many studies.
Classification of ATM sources (2)
CBR
A CBR source generates the same number of bits every unit time.
For instance, a 64-Kbps unencoded voice source produces 8 bits every 125
μsec. Since the generated traffic stream is constant, the PCR, SCR,
and average cell rate of a CBR source are all the same, and a CBR
source can be completely characterized by its PCR.
Let us assume that a CBR source has a PCR equal to 150 cells/s, and
the ATM link over which it transmits has a speed of 300 cells/s.
Then, if we observe the ATM link, we will see that every other slot
carries a cell. If the speed of the link is 450 cells per second, then every
third slot carries a cell, and so on.
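The slot-occupancy observation above can be sketched as a one-line calculation (a hedged illustration, with both rates in cells/s, assuming the PCR divides the link rate exactly):

```python
# For a CBR source with a given PCR on a link of a given speed (both in
# cells/s), every (link / PCR)-th slot carries a cell.
def cell_slot_spacing(link_cells_per_s: int, pcr_cells_per_s: int) -> int:
    """Return n such that every n-th slot carries a cell."""
    assert link_cells_per_s % pcr_cells_per_s == 0
    return link_cells_per_s // pcr_cells_per_s
```

With the numbers from the text: a PCR of 150 cells/s on a 300-cells/s link occupies every other slot, and on a 450-cells/s link every third slot.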
Classification of ATM sources (3)
Representation of VBR by the on-off process (1)
A commonly used traffic model for data transfers is the on/off process (see Fig).
In this model, a source transmits only during an active period, known as the on
period. This period is followed by a silent period, known as the off period,
during which the source does not transmit. This cycle of an on period followed
by an off period repeats continuously until the source terminates its connection.
The PCR of an on/off source is the rate at which it transmits cells during the on
period. E.g., if it transmits every other slot, then its PCR is equal to half the
speed of the link, where the link’s speed is expressed in cells/s. Alternatively,
we can say that the source’s peak bit rate is half the link’s capacity, expressed in
bits/s.
The ratio r of the mean length of the on period to the mean length of the
whole on/off cycle can also be seen as the fraction of time that the source
is actively transmitting. When r is close to 0 or 1, the source is not
bursty. The burstiness of the source increases as r approaches 0.5.
Quality of service (QoS) parameters (1)
• cell loss rate (CLR)
• jitter
• cell transfer delay (CTD)
• Peak-to-peak cell delay variation (CDV)
• maximum cell transfer delay (max CTD)
• cell error rate (CER), and
• cell misinsertion rate (CMR)
Quality of service (QoS) parameters (2)
Cell loss rate
– This is a very popular QoS parameter and it was the first one to
be used extensively in ATM networks. This is not surprising,
since there is no flow control between two adjacent ATM
switches or between an end device and the switch to which it is
attached.
[Figure: inter-departure gaps t_{i-1}, t_i at the sending side and the corresponding inter-arrival gaps s_{i-1}, s_i at the receiving side.]
Quality of service (QoS) parameters (5)
Cell transfer delay (CTD)
[Figure: probability density function (pdf) of the cell transfer delay. The max CTD is the value below which all but 1% of the total area of the pdf lies; the cell delay variation is the spread of the pdf up to that point.]
The CLR, the peak-to-peak cell delay variation, and the max CTD
can be signaled at call setup time. That is, at call setup time, the calling
party can specify values for these parameters.
These values are the upper bounds, and represent the highest acceptable
values. The values for the peak-to-peak cell delay variation and for the
max CTD are expressed in msec.
As an example, the calling party can request that the CLR is less than or
equal to 10^-6, the peak-to-peak cell delay variation is less than or equal
to 3 msec, and the max CTD is less than or equal to 20 msec.
The network will accept the connection, if it can guarantee the requested
QoS values. If it cannot guarantee these values then it will reject the
connection. Also, the network and the calling party might negotiate new
values for the QoS parameters.
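The accept/reject decision at call setup can be sketched as a simple comparison. This is a hypothetical illustration (the function and dictionary names are ours): the calling party signals upper bounds for CLR, peak-to-peak CDV, and max CTD, and the network accepts the call only if it can guarantee all of them.

```python
# Hypothetical sketch of the call-setup QoS check. Both dictionaries map a QoS
# parameter name to a value; for all of these parameters, smaller is better.
def accept_call(requested: dict, deliverable: dict) -> bool:
    """Accept only if the network can meet every requested upper bound."""
    return all(deliverable[p] <= bound for p, bound in requested.items())

# The example bounds from the text: CLR <= 10^-6, CDV <= 3 ms, maxCTD <= 20 ms.
request = {"CLR": 1e-6, "p2p_CDV_ms": 3.0, "maxCTD_ms": 20.0}
```

If any deliverable value exceeds its requested bound, the call is rejected (or, as noted above, new values may be negotiated).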
Quality of service (QoS) parameters (9)
Cell error ratio (CER) and Cell misinsertion rate (CMR)
These parameters are not used by the calling party at call set-
up. They are only monitored by the network.
The GFR service does not provide explicit feedback to the user
regarding the current level of congestion in the network. Rather, the user
is supposed to determine network congestion through a mechanism such
as TCP, and adapt its transmission rate.
Attributes of ATM service-categories
• CBR
– Class attributes: PCR, CDVT
– QoS attributes: peak-to-peak CDV, MaxCTD, CLR
• rt-VBR
– Class attributes: PCR, CDVT, SCR, MBS, CDVT
– QoS attributes: peak-to-peak CDV, MaxCTD, CLR
• nrt-VBR
– Class attributes: PCR, CDVT, SCR, MBS, CDVT
– QoS attributes: CLR
• UBR
– PCR is specified, but it may not be subject to CAC and policing
– No QoS parameters are signaled
• ABR
– Class attributes: PCR, CDVT, MCR
– QoS attributes: CLR (possible, depends on network)
– Other attributes: feedback messages
• GFR
– Class attributes: PCR, CDVT, MCR, MBS, MFS, CDVT
– QoS attributes: CLR (possible, depends on network)
Congestion control
• Preventive
– It prevents the occurrence of congestion using
• Call admission control (CAC)
• Policing (GCRA)
• Reactive
– It is based on feedback from the network to control
transmission rates
• Available bit rate (ABR) service
Preventive congestion control
• When a new connection is requested, each ATM switch
on the path has to decide whether to accept it or not.
• Two questions need to be answered:
– Will the new connection affect the quality-of-service of the
existing connections already carried by the switch?
– Can the switch provide the quality-of-service requested by the
new connection?
• If a switch along the path is unable to accept the new
connection, then it refuses the setup request and returns it
to a switch earlier in the path that can calculate an
alternative path.
• Once a new connection has been accepted, bandwidth
enforcement is exercised at the cell level to assure that
the transmitting source is within its negotiated traffic
parameters.
Call admission control (CAC)
Assume that the QoS, expressed in cell loss rate, of the existing connections is
satisfied. The question that arises is whether the cell loss rate will still be
maintained if the new connection is accepted. This can be answered by solving
the ATM multiplexer queueing model with the existing connections and the
new connection. However, the solution to this problem is CPU intensive and it
cannot be done in real-time. In view of this, a variety of different CAC
algorithms have been proposed which do not require the solution of such a
queueing model.
Proposed CAC algorithms
• Most CAC algorithms are based on the CLR
– A new connection is accepted if the switch can provide the
requested cell loss rate without affecting the cell loss rate of the
existing connections. Jitter, or CTD are not taken into account
– A very popular example of this type of CAC algorithm is the
equivalent bandwidth.
• Other algorithms are based on the cell transfer delay
– In these algorithms, the decision to accept or reject a new
connection is based on a calculated absolute upper bound of the
end-to-end delay of a cell. These algorithms are closely
associated with specific scheduling mechanisms, such as static
priorities, early deadline first, and weighted fair queueing.
Given that the same scheduling algorithm runs on all of the
switches in the path of a connection, it is possible to construct
an upper bound of the end-to-end delay.
The equivalent bandwidth of a source (1)
• Let us consider a finite capacity queue served by a server at the rate of μ.
• We assume that this queue is fed by a single source, whose equivalent
bandwidth we wish to calculate.
• Now, if we set μ equal to the source’s peak bit rate, then we will observe
no accumulation of cells in the buffer. This is due to the fact that the cells
do not arrive faster than they are served.
• Now, if we slightly reduce the service rate μ, then we will see that cells
begin to accumulate in the buffer.
• If we reduce μ further, then the buffer occupancy will increase.
• If we keep repeating this experiment, each time lowering the service rate
slightly, then we will see that the cell loss rate begins to increase.
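One widely cited closed-form approximation of the equivalent bandwidth of a single on/off fluid source is due to Guérin, Ahmadi, and Naghshineh. The sketch below implements that approximation; it is one particular CAC formula, not the experiment described above, and the parameter values in the comments are illustrative.

```python
import math

# Equivalent bandwidth of an on/off fluid source (Guerin-Ahmadi-Naghshineh
# approximation). Inputs: peak rate R, mean on period b (seconds), utilization
# rho (fraction of time the source is on), buffer size x (in the same units as
# R * time), and target cell loss rate eps. This is an approximation.
def equivalent_bandwidth(R: float, b: float, rho: float,
                         x: float, eps: float) -> float:
    alpha = math.log(1.0 / eps)
    y = alpha * b * (1.0 - rho) * R
    # The result always lies between the mean rate rho*R and the peak rate R.
    return (y - x + math.sqrt((y - x) ** 2 + 4.0 * x * rho * y)) / (2.0 * y) * R
```

Note the two limiting cases: with no buffer (x = 0) the formula allocates the full peak rate R, and a larger buffer or a looser loss target pulls the allocation down toward the mean rate ρR.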
[Figure: arrival times of cells at the UNI; the gaps between successive arrivals vary around the nominal spacing T.]
That is, for some cells the gap might be greater than T, and for others less than T. In the
former case, there is no penalty for arriving late. In the latter case, the cells appear at
the UNI as if they were transmitted at a higher rate, even though they were
transmitted in conformance with the PCR. In this case, these cells should not be penalized by
the network. The CDVT is a parameter that permits the network to tolerate a number
of cells arriving at a rate faster than the agreed PCR. This parameter is not
source dependent. Rather, it depends on the number of sources that use the same UNI
and the access to the UNI. It is specified by a network administrator.
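Policing of the PCR with tolerance CDVT is done with the generic cell rate algorithm, GCRA(T, L), mentioned earlier, where T = 1/PCR and L = CDVT. A minimal sketch of its standard virtual-scheduling form (the class and variable names are ours):

```python
# GCRA(T, L) in its virtual-scheduling form. TAT is the theoretical arrival
# time of the next cell; a cell arriving more than L earlier than its TAT is
# non-conforming.
class GCRA:
    def __init__(self, T: float, L: float):
        self.T, self.L, self.tat = T, L, None

    def conforming(self, ta: float) -> bool:
        """Return True if a cell arriving at time ta conforms."""
        if self.tat is None:
            self.tat = ta                        # first cell always conforms
        if ta < self.tat - self.L:
            return False                         # too early: TAT unchanged
        self.tat = max(ta, self.tat) + self.T    # conforming: push TAT forward
        return True
```

For example, with T = 1 and L = 0.5, cells arriving at times 0, 1, and 1.6 all conform (the third is early, but within the tolerance), whereas a fourth cell at time 2.0 arrives more than L ahead of its theoretical arrival time and is tagged non-conforming.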
Reactive congestion control
• In reactive congestion control, we let sources transmit
without bandwidth reservation and policing, and we take
action only when congestion occurs.
• The network is continuously monitored for congestion. If
congestion begins to build up, a feedback message is sent
back to each source requesting them to slow down or
even stop. Subsequent feedback messages permit the
sources to increase their transmission rates.
• Typically, congestion is measured by the occupancy level
of critical buffers within an ATM switch, such as the
output port buffers in a non-blocking switch with output
buffering.
• The available bit rate (ABR) service is the only
standardized ATM service category that uses a reactive
congestion control scheme by using Resource
Management (RM) cells.
RM messages are used to implement ABR
[Figure: ABR feedback loop. The source inserts one RM cell after every Nrm data cells; the RM cells travel through the switches to the destination and are returned to the source.]
Answer:
A constant bit rate source implies
PCR = SCR = Average cell rate
PCR = 64 Kbps = 64,000/(53×8) ≈ 151 cells/s
Thus
PCR = SCR = Avg. cell rate ≈ 151 cells/s.
Questions – Problems (2)
Consider an on/off source where the off period is constant and equal to 0.5
msec. The MBS of the source is 24 cells. During the on period, the source
transmits at the rate of 20 Mbps.
a. What is its PCR?
b. What is the maximum length of the on period in msec?
c. Assuming a 1-msec period, calculate its SCR.
Answer:
a) PCR = Peak Cell Rate
PCR = maximum rate of the source during the on period = 20 Mbps =
(20×10^6)/(53×8) ≈ 47,170 cells/s
b) Maximum on-period = MBS/PCR = (24)/(47170) = 0.50879 ms.
c) SCR is the largest average cell rate over a pre-specified period of length T.
Here T = 1 msec. If we approximate the answer in part (b) above as 0.5 ms, the
SCR will be half of the PCR: no matter how we position a window of length T (e.g.,
the last 0.3 ms of an off period, followed by 0.5 ms of on period and 0.2 ms of
off period), the source sends cells at the PCR during the on period and no cells
during the off period, and since the on and off periods are then of equal length,
SCR = PCR/2. If, however, we do not approximate the on period as 0.5 ms, a 1-msec
window can capture the entire on period of 0.50879 ms, i.e., all MBS = 24 cells,
so SCR = 24 cells / 1 msec = 24,000 cells/s.
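The arithmetic of this answer can be checked in a few lines (using, as the answer does, the full 53-byte cell when converting bits to cells):

```python
# Checking the on/off-source answer above: 20 Mbps peak rate, MBS = 24 cells,
# SCR measured over a 1-msec window.
CELL_BITS = 53 * 8
pcr = 20e6 / CELL_BITS        # cells/s during the on period (approx. 47,170)
on_ms = 24 / pcr * 1000       # maximum on period in msec (MBS / PCR)
scr = 24 / 1e-3               # 24 cells in the 1-msec window, in cells/s
```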
Questions – Problems (3)
a) An AAL1 layer receives data at 2 Mbps. How many cells are created per
second by the ATM layer?
b) What is the total efficiency of ATM using AAL1 (the ratio of received bits
to sent bits)?
c) Explain why padding is unnecessary in AAL1, but necessary in other
AALs.
Answer:
a) In AAL1, each cell carries only 47 bytes of user data. This means the
number of cells sent per second can be calculated as [(2,000,000/8)/47] ≈
5319.15.
b) In AAL1, each 53-byte cell carries only 47 bytes of user data. There are 6
bytes of overhead. The efficiency can be calculated as 47/ 53 ≈ 89%.
c) AAL1 takes a continuous stream of bits from the user without any
boundaries. There are always bits to fill the data unit; there is no need for
padding. The other AALs take a bounded packet from the upper layer.
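The AAL1 numbers above can likewise be verified directly:

```python
# Checking the AAL1 answer: each 53-byte cell carries 47 bytes of user data.
cells_per_s = (2_000_000 / 8) / 47    # data arrives at 2 Mbps
efficiency = 47 / 53                  # user bytes per transmitted byte
```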
Questions – Problems (4)
Using AAL5, show the situation where we need __ of padding.
a. 0 bytes (no padding)
b. 40 bytes
c. 47 bytes
Answer:
In AAL5, the number of bytes in the CS, after adding the padding and the trailer, must
be a multiple of 48.
a. When (user data + 8) mod 48 = 0.
b. When (user data + 40 + 8) mod 48 = 0.
c. When (user data + 47 + 8) mod 48 = 0.
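The required amount of AAL5 padding for any user-data length follows directly from the multiple-of-48 rule:

```python
# AAL5 padding: the CS payload (user data + padding + 8-byte trailer) must be
# a multiple of 48 bytes.
def aal5_padding(user_bytes: int) -> int:
    return (48 - (user_bytes + 8) % 48) % 48
```

For instance, 40 bytes of user data need no padding, 48 bytes need 40 bytes of padding, and 41 bytes need the maximum of 47.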
Questions – Problems (5)
An ATM system can process 200 calls per second. The maximum bandwidth of the
system is 5 Gbps. The system provides nonpermanent service to 10,000 users, all with
a variable rate from 50 to 150 Mbps, and permanent service to 20,000 users with 100
Mbps. If 5% of the users request connectivity calls simultaneously, comment on the
CAC processing efficiency of the system.
Answer:
5% of 10,000 users is 500 call requests. Since the system can handle only 200 calls per
second, the first 200 calls will be processed within the first second. The remaining 300
calls will be dropped, unless the system stores the remaining 300 requests, to be
processed in the next 2 seconds.
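The backlog calculation in this answer amounts to:

```python
import math

# 5% of the 10,000 nonpermanent users request a call at once, and the system
# processes 200 call setups per second.
requests = int(0.05 * 10_000)                 # 500 simultaneous call requests
rate = 200                                    # calls processed per second
seconds_needed = math.ceil(requests / rate)   # time to drain the request queue
```

So 200 requests are processed in the first second, and, if the system queues the rest, the remaining 300 are processed over the next 2 seconds (3 seconds in total).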
Questions – Problems (6)
Explain why the end-to-end cell transfer delay consists of a fixed part
and a variable part. What is the fixed part equal to?
Answer:
Delay experienced by any cell/packet in a real network consists of:
a. Propagation Delay (fixed)
b. Transmission Delay (fixed)
c. Processing Delay (fixed)
d. Queueing Delay (variable)
There are fixed delays because of the properties of the links/switching
elements in the network. Processing delay is the time taken by the CPU
inside the switch to do all processing such as header conversion.
Propagation delay is a function of the speed of light, which is fixed
and transmission delay is a function of the speed of the link (also fixed).
Queueing delays are variable because they are caused at the buffers of
switches, where cells queue up for service. Depending upon the occupancy
level at a buffer, the queueing delay can be more or less at a particular switch.
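As an illustration of the fixed part of the CTD, here is a back-of-the-envelope calculation for a single hop; the link speed, distance, and per-switch processing time are our own assumptions, not values from the text:

```python
# Fixed delay components for one 53-byte cell over one hop. All parameter
# values are illustrative assumptions.
LINK_BPS = 155.52e6        # assumed OC-3 link speed
DISTANCE_M = 100_000       # assumed 100 km of fiber
PROP_SPEED = 2e8           # ~2/3 the speed of light in fiber, m/s
PROC_S = 10e-6             # assumed per-switch processing time

transmission = 53 * 8 / LINK_BPS       # about 2.7 microseconds
propagation = DISTANCE_M / PROP_SPEED  # 500 microseconds
fixed_ctd = transmission + propagation + PROC_S
```

The queueing delay would be added on top of this fixed part, and it is the only component that varies from cell to cell.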
Questions – Problems (7)
Explain why jitter is important to delay-sensitive applications.
Answer:
Jitter is important in real-time and delay-sensitive applications because
the playback buffer can either run out of cells to play (the inter-arrival
time being greater than the inter-departure time at the destination for a
long period of time) or overflow (the inter-arrival time being less than
the inter-departure time at the destination for a long period of time).
Questions – Problems (8)
a) A digitized image is contained in a long packet and is to be transmitted
through an ATM network. At the network interface, the SAR function
segments the long packet into cells. As the cells are transported through
the network, there is no assurance that all cells will follow the same path.
Comment on the order of arrival of the cells and identify a reliable
mechanism that will reconstruct the image correctly at the receiver.
b) A digitized image is contained in a long packet. The SAR function adds
a sequence number in each cell. In which field should the sequence
number be included and why?
Answer:
a) The ATM cells, due to different path delays, arrive out of order.
Therefore, a sequence number should be included in the cells so that, as
they arrive out of order at the receiver, they may be reordered.
b) Since the header changes at each node, the sequence number should be
included in the first bytes of the payload (information field).