
ATM Traffic characterization (1)

The traffic submitted by a source to an ATM network can be
described by the following traffic parameters:
• peak cell rate (PCR)
• minimum cell rate (MCR)
• sustained cell rate (SCR)
• maximum burst size (MBS)
• burstiness

• Also
– cell delay variation tolerance (CDVT)
– burst tolerance (BT)
Traffic characterization (2)
• Peak cell rate (PCR):
– This is the maximum rate, expressed in cells/s, that can be
submitted by a source to an ATM network.
– Often, we use the peak bit rate, instead of the peak cell rate.
One can be obtained from the other given that we know the
specific AAL that is used.
– The minimum allowable interval between cells is T = 1/PCR.

• Minimum cell rate (MCR):
– It is the minimum average cell rate, in cells/s, that the source is
always allowed to send.
Traffic characterization (3)
• Sustained cell rate (SCR):
– Compute the average number of cells submitted by the source
over successive short periods T. The largest of all these
averages is called the sustained cell rate (SCR).
For instance, if the source transmits for a period D equal to 30
minutes and T is equal to one second, then there are 1800 T
periods and we will obtain 1800 averages (one per period). The
largest of all of these averages is the SCR.

– SCR is not to be confused with the average rate of cells
submitted by a source. However, if we set T equal to the
entire time (e.g. equal to D) that the source is transmitting
over the ATM network, then the SCR becomes the average
cell rate.

Average cell rate ≤ SCR ≤ PCR
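The windowed-average definition of the SCR can be sketched in a few lines of Python (a minimal illustration; the function name and the sample counts are my own):

```python
def sustained_cell_rate(window_counts, T):
    """Largest per-window average rate (cells/s), given window_counts[i] =
    cells submitted during the i-th observation period of length T seconds."""
    return max(window_counts) / T

# A 30-minute transmission observed over T = 1 s windows gives 1800 averages:
counts = [100] * 1798 + [150, 120]        # cells counted in each window
print(sustained_cell_rate(counts, 1.0))   # 150.0, the largest of the averages
print(sum(counts) / len(counts))          # the (smaller) average cell rate
```

Setting T equal to the whole transmission time collapses the list to a single window, which is why the SCR then equals the average cell rate.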


Traffic characterization (4)

• Maximum burst size (MBS)
– Depending upon the type of the source, cells might be
submitted to the ATM network in bursts. These bursts are
either fixed or variable in size. For instance, in a file
transfer, if the records retrieved from the disk are of fixed
size, then each record results in a fixed number of ATM
cells submitted to the network back-to-back. In an
encoded video transfer, however, each coded image has a
different size, which results in a variable number of cells
submitted back-to-back.
The maximum burst size (MBS) is defined as the
maximum number of consecutive cells that can be
submitted by a source at the PCR.
Traffic characterization (5)
• Burstiness
A source is bursty if it transmits for a time and then becomes idle for a time.

[Figure: a bursty source alternates between active (transmitting) and idle periods]

Burstiness of a source affects the ATM switch’s performance!!!

[Figure: arriving cells feed a finite buffer drained by an output link; cells that find the buffer full are lost. A companion plot shows the cell loss rate increasing with burstiness.]
From queueing theory, we know that as the arrival rate increases, the cell loss
increases as well. What is interesting to observe is that a similar behavior can
be also seen for the burstiness of a source. The curve shows qualitatively how
the cell loss rate increases as the burstiness increases while the arrival rate
remains constant.
Classification of ATM sources (1)
ATM sources are classified into constant bit rate (CBR) and
variable bit rate (VBR).

A CBR source generates the same number of bits every unit time.
A VBR source generates traffic at a rate that varies over time.
Examples of CBR sources are circuit emulation services such as T1
and E1, unencoded voice, and unencoded video.
Examples of VBR sources are encoded video, encoded voice with
suppressed silence periods, IP over ATM, and frame relay over
ATM.
The arrival process of a CBR source is easy to characterize. The
arrival process of a VBR source is more difficult to characterize and
it has been the object of many studies.
Classification of ATM sources (2)
CBR
A CBR source generates the same number of bits every unit time.
For instance, a 64-Kbps unencoded voice source produces 8 bits every 125
μsec. Since the generated traffic stream is constant, the PCR, SCR,
and average cell rate of a CBR source are all the same, and a CBR
source can be completely characterized by its PCR.

Let us assume that a CBR source has a PCR equal to 150 cells/s, and
the ATM link over which it transmits has a speed of 300 cells/s.

Then, if we observe the ATM link, we will see that every other slot
carries a cell. If the speed of the link is 450 cells per second, then every
third slot carries a cell, and so on.
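The slot-occupancy pattern described above follows directly from the ratio of the link rate to the PCR; a tiny Python check (the function name is my own):

```python
def cell_slot_spacing(link_rate, pcr):
    """For a CBR source whose PCR divides the link rate (both in cells/s),
    return k such that every k-th slot on the link carries one of its cells."""
    return link_rate // pcr

print(cell_slot_spacing(300, 150))  # 2: every other slot carries a cell
print(cell_slot_spacing(450, 150))  # 3: every third slot carries a cell
```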
Classification of ATM sources (3)
Representation of VBR by the on-off process (1)
A commonly used traffic model for data transfers is the on/off process (see Fig).
In this model, a source transmits only during an active period, known as the on
period. This period is followed by a silent period, known as the off period,
during which the source does not transmit. This cycle of an on period followed
by an off period repeats continuously until the source terminates its connection.

The PCR of an on/off source is the rate at which it transmits cells during the on
period. E.g., if it transmits every other slot, then its PCR is equal to half the
speed of the link, where the link’s speed is expressed in cells/s. Alternatively,
we can say that the source’s peak bit rate is half the link’s capacity, expressed in
bits/s.

[Figure: an on/off source; the rate alternates between the PCR during on periods and zero during off periods, with the average cell rate lying in between]

Classification of ATM sources (4)
Representation of VBR by the on-off process (2)

The on/off model captures the notion of burstiness, which is an
important traffic characteristic in ATM networks. There are several
formulas for measuring burstiness. The simplest is the ratio r of the
mean length of the on period to the sum of the mean on and off periods:

r = E[on] / (E[on] + E[off])

This quantity can also be seen as the fraction of time that the
source is actively transmitting. When r is close to 0 or 1, the source is
not bursty. The burstiness of the source increases as r approaches 0.5.
Quality of service (QoS) parameters (1)
• cell loss rate (CLR)
• jitter
• cell transfer delay (CTD)
• Peak-to-peak cell delay variation (CDV)
• maximum cell transfer delay (max CTD)
• cell error rate (CER), and
• cell misinsertion rate (CMR)
Quality of service (QoS) parameters (2)
Cell loss rate
– This is a very popular QoS parameter and it was the first one to
be used extensively in ATM networks. This is not surprising,
since there is no flow control between two adjacent ATM
switches or between an end device and the switch to which it is
attached.

– It is easy to quantify, as opposed to other QoS parameters such
as jitter and cell transfer delay.
– It has been used extensively as guidance for dimensioning ATM
switches, and in call admission control algorithms.
Quality of service (QoS) parameters (3)
Jitter (1)
It refers to the variability of the cell inter-arrival times at the destination.

It is an important QoS parameter for voice and video. In these
applications, the inter-arrival gap between successive cells at the
destination end device cannot be greater than a certain value, as this
can cause the receiving play-out process to pause.
[Figure: cells i−1, i, i+1 leave the sender with inter-departure gaps t_{i−1}, t_i, cross the ATM cloud, and reach the receiver with inter-arrival gaps s_{i−1}, s_i]

The inter-arrival gap si can be less than, equal to, or greater than the
inter-departure gap ti. This is due to buffering and congestion delays
in the ATM network.
Quality of service (QoS) parameters (4)
Jitter (2)
If the inter-arrival gaps si are less than the inter-departure gaps ti ,
then the play-out process will not run out of cells. (If this persists
for a long time, however, it might cause overflow problems). If the
inter-arrival gaps are consistently greater than the inter-departure
gaps, then the play-out process will run out of cells and will pause.
This is not desirable, because the quality of the voice or video
delivered to the user will be affected.

Bounding jitter is not easy to accomplish.


Quality of service (QoS) parameters (5)
Cell transfer delay (CTD)

• The time it takes to transfer a cell end-to-end, that
is, from the transmitting end-device to the
receiving end-device. It consists of
– Fixed cell transfer delay
• Propagation delay, fixed delays induced by
transmission systems, and fixed switch processing
times
– Variable cell transfer delay, known as the peak-
to-peak cell delay variation
• Queueing delays in the switches along the cell's
path.
Quality of service (QoS) parameters (6)
Max. cell transfer delay (max CTD) (1)

This is a statistical upper bound on the end-to-end cell transfer delay,
which means that the actual end-to-end cell transfer delay might
occasionally exceed the max CTD.
[Figure: pdf of the cell transfer delay. The delay axis starts at the fixed CTD; the range between the fixed CTD and the max CTD is the peak-to-peak cell delay variation; the tail beyond the max CTD (e.g. 1% of the total area) corresponds to cells delivered late]
Quality of service (QoS) parameters (7)
Maximum cell transfer delay (max CTD) (2)
Assume that the max CTD is set to 20 msec and the fixed CTD is equal to 12
msec. Then, there is no guarantee that the peak-to-peak cell delay variation
(which is the difference between max. CTD and fixed CTD) will always be
less than 8 msec. The max CTD can be obtained as a percentile of the end-
to-end cell transfer delay, so that the end-to-end cell transfer delay exceeds it
only a small percent of the time. For instance, if it is set to the 99th
percentile, then 99% of the time the end-to-end cell transfer delay will be
less than the max CTD and 1% of the time it will be greater.

Quality of service (QoS) parameters (8)

The CLR, the peak-to-peak cell delay variation, and the max CTD
can be signaled at call setup time. That is, at call setup time, the calling
party can specify values for these parameters.
These values are the upper bounds, and represent the highest acceptable
values. The values for the peak-to-peak cell delay variation and for the
max CTD are expressed in msec.

As an example, the calling party can request that the CLR is less than or
equal to 10−6, the peak-to-peak cell delay variation is less than or equal
to 3 msec, and the max CTD is less than or equal to 20 msec.

The network will accept the connection, if it can guarantee the requested
QoS values. If it cannot guarantee these values then it will reject the
connection. Also, the network and the calling party might negotiate new
values for the QoS parameters.
Quality of service (QoS) parameters (9)
Cell error ratio (CER) and Cell misinsertion rate (CMR)
These parameters are not used by the calling party at call set-
up. They are only monitored by the network.

• The CER of a connection is the ratio of the number of errored cells
to the total number of cells transmitted by the source. An errored
cell is a cell delivered with an erroneous payload. The CER depends
on the underlying physical medium.

• The CMR is the rate of cells (in cells/s) delivered to a wrong
destination, calculated over a fixed period of time. The CMR is
based on the rate at which undetected header errors result in
misdelivered cells.
ATM service categories
An ATM service category is a QoS class.
Each service category is associated with a set of traffic and a set of QoS
parameters. Functions such as call admission control and bandwidth allocation
are applied differently for each service category. Also, the scheduling
algorithm that determines in what order the cells in an output buffer of an
ATM switch are transmitted out, provides different priorities to cells
belonging to different service categories
The service category of a connection is signaled to the network at call setup
time, along with its traffic and QoS parameters. The ATM service categories
are:
• Constant bit rate (CBR) (real-time applications)
• Real-time variable bit rate (RT-VBR) (real-time applications)
• Non-real-time variable bit rate (NRT-VBR) (non-real-time applications)
• Available bit rate (ABR) (non-real-time applications)
• Unspecified bit rate (UBR) (non-real-time applications)
• Guaranteed frame rate (GFR) (non-real-time applications)
The constant bit rate (CBR) service
• Intended for real-time applications which require tightly
constrained delay and delay variation, such as
– circuit emulation services, constant bit rate video, and high-
quality audio.
• Sources are expected to transmit at a constant rate, i.e.
the peak cell rate is sufficient to describe the amount of
traffic that the application transmits over the connection.
• A CBR service is for real-time applications, and
therefore the end-to-end delay is an important QoS
parameter. In view of this, in addition to the CLR, the
two delay-related parameters (peak-to-peak cell delay
variation and the max CTD) are specified.
The real-time VBR service
• It is intended for real-time applications, i.e. applications
that require constrained delay and delay variations, such
as
– video and
– voice.
In addition to the CLR, the two delay-related parameters (peak-to-peak
cell delay variation and the max CTD) are specified.

• Sources are expected to transmit at a variable rate and be
bursty. Therefore, the peak cell rate is not sufficient to
describe the amount of traffic that the application will
transmit over the connection. In addition to the PCR and
the cell delay variation tolerance, the sustained cell rate
(SCR) and the maximum burst size (MBS) are specified.
• The non-real time VBR service:
– It is for variable bit rate and bursty sources which do not
require real-time constraints.

• The unspecified bit rate (UBR) service:
– A best-effort type of service. It is intended for delay-tolerant
applications (Web browsing, file transfer, email). It has no QoS
guarantees.

• The available bit rate (ABR) service:
– A feedback-based service for sources that can adjust their
transmission rate according to the congestion level in the
network.
– A user requesting the ABR service specifies a minimum cell
rate (MCR) and a maximum cell rate (PCR). MCR could be 0.
The user varies its transmission rate between its MCR and its
PCR in response to feedback messages (based on Resource
Management (RM) cells) that it receives from the network.
• The guaranteed frame rate (GFR) service
This service is for non-real-time applications that require an MCR
guarantee, but can transmit in excess of their requested MCR. The
application transmits data organized into frames, and the frames are
carried in AAL 5 CPS-PDUs.
The network does not guarantee delivery of the excess traffic. When
congestion occurs, the network attempts to discard complete AAL 5
CPS-PDUs rather than individual cells.

The GFR service does not provide explicit feedback to the user
regarding the current level of congestion in the network. Rather, the user
is supposed to determine network congestion through a mechanism such
as TCP, and adapt its transmission rate.
Attributes of ATM service categories
• CBR
– Class attributes: PCR, CDVT
– QoS attributes: peak-to-peak CDV, MaxCTD, CLR
• rt-VBR
– Class attributes: PCR, CDVT, SCR, MBS
– QoS attributes: peak-to-peak CDV, MaxCTD, CLR
• nrt-VBR
– Class attributes: PCR, CDVT, SCR, MBS
– QoS attributes: CLR
• UBR
– PCR is specified, but it may not be subject to CAC and policing
– No QoS parameters are signaled
• ABR
– Class attributes: PCR, CDVT, MCR
– QoS attributes: CLR (possible, depends on network)
– Other attributes: feedback messages
• GFR
– Class attributes: PCR, CDVT, MCR, MBS, MFS
– QoS attributes: CLR (possible, depends on network)
Congestion control

• Preventive
– It prevents the occurrence of congestion using
• Call admission control (CAC)
• Policing (GCRA)
• Reactive
– It is based on feedback from the network to control
transmission rates
• Available bit rate (ABR) service
Preventive congestion control
• When a new connection is requested, each ATM switch
on the path has to decide whether to accept it or not.
• Two questions need to be answered:
– Will the new connection affect the quality-of-service of the
existing connections already carried by the switch?
– Can the switch provide the quality-of-service requested by the
new connection?
• If a switch along the path is unable to accept the new
connection, it refuses the setup request and sends it back
to a switch in the path that can calculate an alternative
path.
• Once a new connection has been accepted, bandwidth
enforcement is exercised at the cell level to assure that
the transmitting source is within its negotiated traffic
parameters.
Call admission control (CAC)

• The CAC algorithm is used by an ATM switch to


decide whether to accept or reject a new
connection.
• CAC algorithms may be classified into
– non-statistical allocation (or peak bit rate allocation), and
– statistical allocation.
CAC - Non-statistical allocation
• Otherwise known as peak bit rate allocation.
• It is used for connections requesting a CBR service.
• CAC algorithm is very simple.
– The decision to accept or reject a new connection is based
purely on whether its peak bit rate is less than the available
bandwidth on the link.
– Peak bit rate allocation can lead to a grossly underutilized link,
unless the connections transmit continuously at peak bit rate.
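The peak-rate acceptance test is simple enough to state in one line of Python (a minimal sketch; the function name and the sample rates, loosely based on an OC-3 payload, are my own assumptions):

```python
def accept_peak_allocation(link_capacity, allocated, pcr_new):
    """Non-statistical CAC: accept the new connection iff its peak rate fits
    in the bandwidth still unallocated on the link (all rates in cells/s)."""
    return pcr_new <= link_capacity - allocated

# Roughly an OC-3 payload of ~353,208 cells/s, with 300,000 already reserved:
print(accept_peak_allocation(353_208, 300_000, 50_000))  # True: fits
print(accept_peak_allocation(353_208, 300_000, 60_000))  # False: rejected
```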
CAC - Statistical allocation (1)
• In this case, the allocated bandwidth is less than the peak
bit rate of the source.
• In the case where statistical allocation is used for all the
connections on the link, the sum of the peak bit rates of all
the connections may exceed the link’s capacity.

Statistical allocation makes economic sense when dealing
with bursty sources.
However, it is difficult to implement effectively because:
• It is difficult to characterize the traffic of a source and how it is
shaped deep in the network.
• The CAC algorithm has to run in real time (since an SVC has to be
set up in real time). There is no time for CPU-intensive calculations.
CAC - Statistical allocation (2)
• It is difficult to characterize the traffic of a source and how it is
shaped deep in the network.
Example:
Assume that a source has a maximum burst size of 100 cells. As the
cells that belong to the same burst travel through the network, they
get buffered in each switch. Due to multiplexing with cells from other
connections and scheduling priorities, the maximum burst of 100
cells might become much larger deep in the network.
Other traffic descriptors, such as the PCR and the SCR, can be
similarly modified deep in the network. Consider a source with a
peak bit rate of 128 Kbps. Due to multiplexing and scheduling
priorities, it is possible that several cells from this source can get
batched together in the buffer of an output port of a switch. Let us
assume that this output port has a speed of, say 1.544 Mbps. Then,
these cells will be transmitted out back-to-back at 1.544 Mbps, which
will cause the peak bit rate of the source to increase temporarily!
CAC - Statistical allocation (3)
The problem of whether to accept or reject a new connection can
be formulated as a queueing problem.

For instance, let us consider again our non-blocking switch with
output buffering. The CAC algorithm has to be applied to each output
port. If we isolate an output port and its buffer from the switch, we
will obtain a queueing model.
CAC - Statistical allocation (4)

This type of queueing structure is known as the ATM multiplexer. It represents
a number of ATM sources feeding a finite-capacity queue, which is served by
a server, i.e., the output port. The service time is constant and is equal to the
time it takes to transmit an ATM cell.

Assume that the QoS, expressed in cell loss rate, of the existing connections is
satisfied. The question that arises is whether the cell loss rate will still be
maintained if the new connection is accepted. This can be answered by solving
the ATM multiplexer queueing model with the existing connections and the
new connection. However, the solution to this problem is CPU intensive and it
cannot be done in real-time. In view of this, a variety of different CAC
algorithms have been proposed which do not require the solution of such a
queueing model.
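To make the ATM multiplexer concrete, here is a toy discrete-time simulation (my own sketch, not a CAC algorithm): in each slot, every source independently emits a cell with some probability, the finite buffer absorbs what it can, and the output port serves one cell per slot. The parameter values are assumptions chosen for illustration.

```python
import random

def atm_multiplexer_clr(n_sources, p_cell, buffer_size, n_slots, seed=42):
    """Estimate the cell loss rate of an ATM multiplexer: n_sources Bernoulli
    sources feed a finite buffer served at one cell per slot (constant service)."""
    rng = random.Random(seed)
    queue = arrived = lost = 0
    for _ in range(n_slots):
        cells = sum(rng.random() < p_cell for _ in range(n_sources))
        arrived += cells
        accepted = min(cells, buffer_size - queue)  # buffer absorbs what fits
        lost += cells - accepted                    # the rest are lost
        queue += accepted
        if queue:
            queue -= 1                              # serve one cell this slot
    return lost / arrived if arrived else 0.0

print(atm_multiplexer_clr(4, 0.1, 20, 50_000))  # offered load 0.4: CLR near 0
print(atm_multiplexer_clr(4, 0.4, 20, 50_000))  # offered load 1.6: heavy loss
```

Even this crude model reproduces the qualitative behavior discussed earlier: once the offered load (and burstiness of the aggregate arrivals) grows, the finite buffer overflows and the CLR climbs.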
Proposed CAC algorithms
• Most CAC algorithms are based on the CLR
– A new connection is accepted if the switch can provide the
requested cell loss rate without affecting the cell loss rate of the
existing connections. Jitter, or CTD are not taken into account
– A very popular example of this type of CAC algorithm is the
equivalent bandwidth.
• Other algorithms are based on the cell transfer delay
– In these algorithms, the decision to accept or reject a new
connection is based on a calculated absolute upper bound of the
end-to-end delay of a cell. These algorithms are closely
associated with specific scheduling mechanisms, such as static
priorities, early deadline first, and weighted fair queueing.
Given that the same scheduling algorithm runs on all of the
switches in the path of a connection, it is possible to construct
an upper bound of the end-to-end delay.
The equivalent bandwidth of a source (1)
• Let us consider a finite capacity queue served by a server at the rate of μ.
• We assume that this queue is fed by a single source, whose equivalent
bandwidth we wish to calculate.
• Now, if we set  equal to the source’s peak bit rate, then we will observe
no accumulation of cells in the buffer. This is due to the fact that the cells
do not arrive faster than they are served.
• Now, if we slightly reduce the service rate , then we will see that cells
are beginning to accumulate in the buffer.
• If we reduce μ further, then the buffer occupancy will increase.
• If we keep repeating this experiment and each time we lower slightly the
service rate, then we will see that the cell loss rate begins to increase.

• The equivalent bandwidth of the source is defined as the service rate μe of
the queue that corresponds to a cell loss rate of ε.
• It falls somewhere between its average bit rate and the peak bit rate.
– If the source is very bursty, it is closer to its peak bit rate, otherwise, it
is closer to its average bit rate.
• We note that the equivalent bandwidth of a source is not related to the
source’s SCR.
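One widely cited closed-form approximation of the equivalent bandwidth of a single on/off source is due to Guérin, Ahmadi, and Naghshineh; it uses the peak rate R, the utilization ρ, the mean burst duration b, the buffer size x, and the target cell loss rate ε. The sketch below is illustrative, and the numerical values are my own assumptions:

```python
import math

def equivalent_bandwidth(R, rho, b, x, eps):
    """Guerin-Ahmadi-Naghshineh fluid approximation for one on/off source.
    R: peak rate (bits/s), rho: utilization (mean/peak), b: mean burst
    duration (s), x: buffer size (bits), eps: target cell loss rate."""
    alpha = math.log(1.0 / eps)
    y = alpha * b * (1.0 - rho) * R
    return R * (y - x + math.sqrt((y - x) ** 2 + 4.0 * x * rho * y)) / (2.0 * y)

# Illustrative numbers: 10 Mbps peak, 50% utilization, 10 ms mean bursts,
# a 100-kbit buffer, and a target cell loss rate of 1e-9.
c = equivalent_bandwidth(R=10e6, rho=0.5, b=0.01, x=100e3, eps=1e-9)
print(5e6 <= c <= 10e6)  # True: between the average and the peak bit rate
```

Note the limiting behavior matches the slide: as the buffer shrinks to zero the formula returns the peak rate (non-statistical allocation), and as the buffer grows it approaches the average bit rate ρR.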
The equivalent bandwidth of a source (2)
There are various approximations that can be used to compute the
equivalent bandwidth of a source quickly.

The equivalent bandwidth of a source is used in statistical bandwidth
allocation in the same way that the peak bit rate is used in
non-statistical bandwidth allocation.
Bandwidth enforcement
• Used to ensure that the traffic generated by a source conforms to
the traffic contract agreed between the user and the network at call
set-up.
• The traffic contract consists of
– A connection traffic descriptor,
– A requested quality of service class, and
– A definition of conformance.
• Testing the conformance of a source, otherwise known as policing
the source, is carried out at the user-network interface (UNI). It
involves policing the PCR and the SCR using the generic cell rate
algorithm (GCRA).
• Policing each source is an important function for a network
operator, since a source exceeding its contract might affect the QoS
of other existing connections. Also, depending upon the pricing
scheme used by the network operator, revenue might be lost. A
source might exceed its contract due to various reasons; the user
equipment might malfunction, or the user might underestimate
(either intentionally or unintentionally) the bandwidth requirements.
The leaky bucket
The GCRA is based on a popular policing mechanism known as the leaky bucket. The
leaky bucket can be unbuffered or buffered.
The unbuffered leaky bucket consists of a token pool of size K. Tokens are
generated at a fixed rate. A token is lost if it is generated when the pool is full. An
arriving cell takes a token from the pool, and then enters the network. The number of
tokens in the token pool is then reduced by one. A cell is considered to be a violating
cell (or a noncompliant cell) if it arrives at a time when the token pool is empty.
The buffered leaky bucket has an input buffer of size M, where a cell can wait if it
arrives at a time when the token pool is empty. A cell is considered to be a violating
cell, if it arrives at a time when the input buffer is full. Violating cells are either
dropped or tagged.
[Figure: an unbuffered leaky bucket (left) and a buffered leaky bucket (right)]
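The unbuffered leaky bucket can be sketched as follows (a minimal illustration, assuming the token pool is replenished continuously at the token rate; the function name and parameter values are my own):

```python
def police_unbuffered(arrival_times, pool_size, token_rate):
    """Classify each cell as compliant (True) or violating (False) under an
    unbuffered leaky bucket with a token pool of size pool_size, refilled
    continuously at token_rate tokens/s. arrival_times are in seconds."""
    tokens, last = float(pool_size), 0.0
    verdicts = []
    for t in arrival_times:
        tokens = min(pool_size, tokens + (t - last) * token_rate)  # refill
        last = t
        if tokens >= 1.0:
            tokens -= 1.0            # a compliant cell consumes one token
            verdicts.append(True)
        else:
            verdicts.append(False)   # pool empty: violating (noncompliant)
    return verdicts

# One token per 10 ms, pool of 2: a back-to-back burst of 3 exhausts the pool.
print(police_unbuffered([0.0, 0.001, 0.002, 0.1], pool_size=2, token_rate=100))
# [True, True, False, True]
```

A buffered leaky bucket would additionally queue the third cell (if the input buffer of size M has room) instead of flagging it immediately.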
The cell delay variation tolerance (CDVT)
• A new traffic parameter used by the GCRA.
• Assume that a source is transmitting at PCR and it produces a cell every T units of
time, where T = 1/PCR. Due to multiplexing with cells from other sources and
with signaling and network management cells, the inter-arrival time of successive
cells belonging to the same UNI source could potentially vary around T (see Fig).
[Figure: a source transmits cells spaced T units apart, but multiplexing perturbs the arrival times at the UNI, so successive inter-arrival times vary around T]
That is, for some cells it might be greater than T, and for others less than T. In the
former case, there is no penalty in arriving late. In the latter case, the cells will appear
to the UNI to have been transmitted at a higher rate, even though they were
transmitted in conformance with the PCR. In this case, these cells should not be penalized by
the network. The CDVT is a parameter that permits the network to tolerate a number
of cells arriving at a rate which is faster than the agreed PCR. This parameter is not
source dependent. Rather, it depends on the number of sources that use the same UNI
and the access to the UNI. It is specified by a network administrator.
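The GCRA itself is usually stated as the virtual scheduling algorithm: for GCRA(T, τ), with T = 1/PCR and τ the CDVT, a cell that arrives more than τ ahead of its theoretical arrival time (TAT) is nonconforming. A sketch in Python (the variable names are my own):

```python
def gcra(arrival_times, T, tau):
    """GCRA(T, tau) virtual scheduling: returns True for each conforming cell.
    T is the increment (1/PCR) and tau the limit (CDVT), in the same time unit
    as the arrival times."""
    tat = arrival_times[0]           # theoretical arrival time of the next cell
    verdicts = []
    for ta in arrival_times:
        if ta < tat - tau:
            verdicts.append(False)   # too early: nonconforming, TAT unchanged
        else:
            verdicts.append(True)    # conforming: schedule the next cell
            tat = max(ta, tat) + T
    return verdicts

# T = 10, tau = 2: the cell at t = 17 is 3 time units early and is flagged.
print(gcra([0, 10, 17, 30], T=10, tau=2))  # [True, True, False, True]
```

With τ = 0 this degenerates to strict peak-rate policing; a larger τ tolerates the cell clumping at the UNI that the CDVT is meant to absorb.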
Reactive congestion control
• In reactive congestion control, we let sources transmit
without bandwidth reservation and policing, and we take
action only when congestion occurs.
• The network is continuously monitored for congestion. If
congestion begins to build up, a feedback message is sent
back to each source requesting them to slow down or
even stop. Subsequent feedback messages permit the
sources to increase their transmission rates.
• Typically, congestion is measured by the occupancy level
of critical buffers within an ATM switch, such as the
output port buffers in a non-blocking switch with output
buffering.
• The available bit rate (ABR) service is the only
standardized ATM service category that uses a reactive
congestion control scheme by using Resource
Management (RM) cells.
RM messages are used to implement ABR
[Figure: a source sends an RM cell after every Nrm data cells; the cells traverse several switches to the destination end-device, which turns the RM cells around back to the source]

• The source sends an RM cell every Nrm data cells. The
default value for Nrm is 32.
• The RM cells and data cells may traverse a number of
switches before they reach their destination end-device.
• The destination end-device turns around the RM cells, and
transmits them back to the sending end-device.
• Each switch writes information about its congestion status
onto the RM cells, which is used by the sending end-device to
adjust its transmission rate.
Questions – Problems (1)
Consider a 64-Kbps voice connection transmitted at constant bit rate.
a. What is its PCR?
b. What is its SCR?
c. What is its average cell rate?

Answer:
A constant bit rate source implies
PCR = SCR = Average cell rate
PCR = 64 Kbps = 64,000/(53 × 8) ≈ 151 cells/s
Thus
PCR = SCR = Avg. cell rate ≈ 151 cells/s.
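This arithmetic can be checked in one line of Python:

```python
PCR = 64_000 / (53 * 8)   # 64 Kbps spread over 53-byte (424-bit) cells
print(round(PCR))         # 151 cells/s
```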
Questions – Problems (2)
Consider an on/off source where the off period is constant and equal to 0.5
msec. The MBS of the source is 24 cells. During the on period, the source
transmits at the rate of 20 Mbps.
a. What is its PCR?
b. What is the maximum length of the on period in msec?
c. Assuming a 1-msec period, calculate its SCR.
Answer:
a) PCR = peak cell rate = maximum rate of the source during the on period
= 20 Mbps = (20 × 10^6)/(53 × 8) ≈ 47,170 cells/s.
b) Maximum on period = MBS/PCR = 24/47,170 = 0.50879 msec.
c) SCR is the largest average cell rate over a pre-specified period length; here
T = 1 msec. If we approximate the answer in part (b) as 0.5 msec, the SCR will be
half the PCR: no matter which window of length T we look at (e.g. the last 0.3 msec
of an off period, followed by 0.5 msec of on period and 0.2 msec of off period), the
source sends cells at the PCR during the on period and no cells during the off
period. Since the on and off periods are then of equal length, SCR = PCR/2. If,
however, we do not approximate the on period as 0.5 msec, then
SCR = 0.508 × PCR ≈ 23,962 cells/s.
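The three numbers can be verified with a few lines of Python (the SCR line follows the slide's approximation of 0.508 msec of on time per 1-msec window):

```python
pcr = 20e6 / (53 * 8)     # a) peak cell rate in cells/s
on_max = 24 / pcr * 1e3   # b) maximum on period, in msec (MBS/PCR)
scr = 0.508 * pcr         # c) slide's approximation: 0.508 msec on per 1 msec
print(round(pcr), round(on_max, 4), round(scr))  # 47170 0.5088 23962
```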
Questions – Problems (3)
a) An AAL1 layer receives data at 2 Mbps. How many cells are created per
second by the ATM layer?
b) What is the total efficiency of ATM using AAL1 (the ratio of received bits
to sent bits)?
c) Explain why padding is unnecessary in AAL1, but necessary in other
AALs.

Answer:
a) In AAL1, each cell carries only 47 bytes of user data. This means the
number of cells sent per second can be calculated as [(2,000,000/8)/47] ≈
5319.15.

b) In AAL1, each 53-byte cell carries only 47 bytes of user data. There are 6
bytes of overhead. The efficiency can be calculated as 47/53 ≈ 89%.

c) AAL1 takes a continuous stream of bits from the user without any
boundaries. There are always bits to fill the data unit; there is no need for
padding. The other AALs take a bounded packet from the upper layer.
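A quick Python check of parts (a) and (b):

```python
cells_per_s = (2_000_000 / 8) / 47   # AAL1: 47 user bytes per cell
efficiency = 47 / 53                 # user bytes over total cell bytes
print(round(cells_per_s, 2), round(efficiency, 2))  # 5319.15 0.89
```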
Questions – Problems (4)
Using AAL5, show the situation where we need __ of padding.
a. 0 bytes (no padding)
b. 40 bytes
c. 47 bytes
Answer:
In AAL5, the number of bytes in the CS, after adding the padding and the trailer,
must be a multiple of 48.
a. When (user data + 8) mod 48 = 0.
b. When (user data + 40 + 8) mod 48 = 0.
c. When (user data + 47 + 8) mod 48 = 0.
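The AAL5 padding rule can be written as a one-line function, with one example payload size for each case (the sizes 48 and 41 bytes are my own illustrative choices):

```python
def aal5_padding(user_bytes):
    """Padding needed so that user data + padding + the 8-byte trailer
    is a multiple of 48 bytes (the AAL5 CS rule)."""
    return (-(user_bytes + 8)) % 48

print(aal5_padding(40))   # 0 bytes:  40 + 0 + 8 = 48
print(aal5_padding(48))   # 40 bytes: 48 + 40 + 8 = 96
print(aal5_padding(41))   # 47 bytes: 41 + 47 + 8 = 96
```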
Questions – Problems (5)
An ATM system can process 200 calls per second. The maximum bandwidth of the
system is 5 Gbps. The system provides nonpermanent service to 10,000 users, all with
a variable rate from 50 to 150 Mbps, and permanent service to 20,000 users at 100
Mbps. If 5% of the users request connectivity calls simultaneously, comment on the
CAC processing efficiency of the system.

Answer:
5% of 10,000 users is 500 call requests. Since the system can handle only 200 calls per
second, the first 200 calls will be processed within the first second. The remaining 300
calls will be dropped, unless the system stores them, to be processed over the next 2
seconds.
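The arithmetic behind the answer, as a quick Python check:

```python
import math

requests = int(0.05 * 10_000)          # 500 simultaneous call requests
rate = 200                             # calls the CAC can process per second
seconds = math.ceil(requests / rate)   # total seconds: 200 + 200 + 100 calls
print(requests, seconds)               # 500 3
```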
Questions – Problems (6)
Explain why the end-to-end cell transfer delay consists of a fixed part
and a variable part. What is the fixed part equal to?

Answer:
Delay experienced by any cell/packet in a real network consists of:
a. Propagation Delay (fixed)
b. Transmission Delay (fixed)
c. Processing Delay (fixed)
d. Queueing Delay (variable)
There are fixed delays because of the properties of the links/switching
elements in the network. Processing delay is the time taken by the CPU
inside the switch to do all processing such as header conversion.
Propagation delay is a function of the speed of light, which is fixed
and transmission delay is a function of the speed of the link (also fixed).
Queueing delays are variable because they are caused at the buffers of
switches, where cells queue up for service. Depending upon the occupancy
level at a buffer, the queueing delay can be more or less at a particular switch.
Questions – Problems (7)
Explain why jitter is important to delay-sensitive applications.

Answer:
Jitter is important in real-time and delay-sensitive applications
because the play-out process can either run out of cells to play
(inter-arrival time being greater than the inter-departure time at the
destination for a long period of time) or there could be overflow
problems (inter-arrival time being less than the inter-departure time at
the destination for a long period of time).
Questions – Problems (8)
a) A digitized image is contained in a long packet and is to be transmitted
through an ATM network. At the network interface, the SAR function
segments the long packet into cells. As the cells are transported through
the network, there is no assurance that all cells will follow the same path.
Comment on the order of arrival of the cells and identify a reliable
mechanism that will reconstruct the image correctly at the receiver.
b) A digitized image is contained in a long packet. The SAR function adds
a sequence number in each cell. In which field should the sequence
number be included and why?

Answer:
a) The ATM cells, due to different path delays, arrive out of order.
Therefore, a sequence number should be included in the cells so that, as
they arrive out of order at the receiver, they may be reordered.

b) Since the header changes at each node, the sequence number should be
included in the first bytes of the payload (information field).
