
TRAFFIC MANAGEMENT

INTRODUCTION
Traffic management is the art of managing network traffic, providing service guarantees
to user connections and ensuring optimal utilization of network resources. This definition
covers three basic concepts:-
1. The concept of traffic
2. The concept of service
3. The means to ensure optimal utilization of network resources.
These are also referred to as the elements of traffic management.

Concept of traffic
To accurately model traffic flows, two sets of parameters are necessary: one set measures
the short-term behaviour of the connection, and the other measures its long-term average.
The short-term measures include the following:-
1. Maximum allowable data rate (MADR):- This is the maximum data rate that a user
is allowed at any instant of time.

2. Burst duration (BD):- This is the time duration for which the user is allowed to send
data at the MADR.
The parameter that measures the long-term average of the connection is the
average data rate (ADR), the average rate for the duration of the connection.
The short-term and long-term measures collectively form what are referred to as the
traffic parameters.

Concept of service
From the user's point of view, it is important to know how the network handles
the offered load. For this, the user must have means to quantitatively measure how the
offered load is serviced by the network. These measures are captured by three parameters:-

1. Delay: - It is measured between a pair of boundaries and is the time taken by a
data unit to travel from one boundary to the other. Specifically, it is the difference between
the time when the first bit of the data unit crosses the first boundary and the time when the
last bit of the same data unit crosses the second boundary. Delay is important in
time-sensitive applications like voice conversation and video conferencing.

2. Delay variation (jitter): - It is the variation in delay over time. Transient changes
in network load cause jittery transmission. Jitter is an important parameter in real-time
applications like voice conversation over the Internet.

3. Data loss: - Ideally, no user likes to use a network with a high data loss rate.
Applications like FTP can tolerate some delay, but cannot tolerate any data loss.
These parameters are also referred to as service parameters or service descriptors.

Sometimes throughput is also categorized as a service attribute. It is the traffic injected
into the network minus the data loss.

Traffic management

The scope and nature of traffic management depend on a number of factors, such as the
switching technique used, the degree of flexibility provided (single-service or multi-
service platform), the performance measures provided and the extent of statistical
multiplexing done.
To provide the service guarantees and to ensure optimal resource utilization, several
elements of traffic management are used. These are:-
1. Traffic contracting
2. Traffic shaping
3. Traffic policing
4. Priority control
5. Flow control
6. Congestion control

Traffic Contracting: Traffic contracting provides the means to allocate network
resources to user connections and to manage the network resources efficiently.
Contracting is the art of deciding whether a new connection request can be accepted or
not.
In general, a traffic contract in a network is defined by:-
1. Type and volume of traffic to be carried: - This is described in terms of the
descriptors.

2. Service characteristics desired: - This is defined in terms of delay, losses and jitters.

3. Network acceptance to service the user request.

4. Network fees for the service and the user acceptance to pay.

A traffic contract can be implicit, as in the case of telephone networks, or explicit, as in
the case of ATM and frame relay networks. In an explicit traffic contract, the traffic and
service parameters are fixed. Traffic contracting takes place when a new connection is
established. To establish a new connection, the user first specifies the traffic parameters
and service parameters. Given these parameters, the network checks whether there are
sufficient resources to accept the new connection. If it has adequate resources, the
connection request is accepted; otherwise it is rejected.
The terms of the traffic contract are mentioned in Service Level Agreements (SLAs).
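The admission decision described above can be sketched as a few lines of Python. This is a hypothetical, minimal model: the class name, the use of average data rate as the only contracted resource, and the capacity figures are all illustrative assumptions, not a description of any real network's admission algorithm.

```python
# Minimal sketch of connection admission during traffic contracting.
# A new connection is accepted only if the network still has enough
# spare capacity for its requested average data rate.

class AdmissionController:
    def __init__(self, link_capacity_kbps):
        self.capacity = link_capacity_kbps
        self.allocated = 0  # sum of admitted average data rates

    def request_connection(self, avg_rate_kbps):
        """Accept the connection if adequate resources remain, else reject."""
        if self.allocated + avg_rate_kbps <= self.capacity:
            self.allocated += avg_rate_kbps
            return True   # contract accepted
        return False      # contract rejected

cac = AdmissionController(link_capacity_kbps=1000)
print(cac.request_connection(600))  # → True  (400 kbps still free)
print(cac.request_connection(500))  # → False (only 400 kbps left)
```

A real admission control decision would consider the full traffic and service descriptors (peak rate, burst size, delay bounds), not just the average rate.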

Traffic shaping

A bursty transmission is characterized by alternating periods of high and low activity. In
the periods of high activity, traffic is generated at the maximum data rate. Such traffic is
often smoothed before it is offered to the network; the mechanism of
achieving the desired modification of traffic characteristics is known as traffic shaping.
Traffic shaping alters the traffic characteristics of the connection, and delivers more
predictable traffic to the network. A traffic shaper is usually applied before the user traffic
enters the network boundary. There are many schemes used for traffic shaping, the most
common being the leaky bucket technique.
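The leaky bucket can be sketched as a queue that drains at a constant rate regardless of how bursty the arrivals are. The tick-based model, function name, and parameter values below are illustrative simplifications.

```python
# A minimal leaky-bucket traffic shaper sketch. Packets drain from the
# bucket at a constant rate, so bursty arrivals leave the shaper as a
# smooth, predictable stream; arrivals that overflow the bucket are lost.

from collections import deque

def leaky_bucket(arrivals, drain_per_tick, bucket_size):
    """arrivals[t] = packets arriving at tick t; returns packets sent per tick."""
    bucket = deque()
    sent = []
    for t, n in enumerate(arrivals):
        for _ in range(n):
            if len(bucket) < bucket_size:
                bucket.append(t)        # queue the packet
            # else: packet dropped (bucket overflow)
        out = 0
        for _ in range(min(drain_per_tick, len(bucket))):
            bucket.popleft()            # transmit at the constant drain rate
            out += 1
        sent.append(out)
    return sent

# A burst of 5 packets is smoothed to at most 2 packets per tick.
print(leaky_bucket([5, 0, 0, 0], drain_per_tick=2, bucket_size=10))
# → [2, 2, 1, 0]
```

The burst entering the shaper at tick 0 leaves it spread over three ticks, which is exactly the "more predictable traffic" the section describes.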

Traffic Policing

Once the traffic contract is agreed upon and a connection is established, the network has
to continuously monitor each connection to ensure that the connection is adhering to the
terms and conditions of the contract. This is necessary because a user may specify
very low data rates, but send data at higher rates.
The mechanism that ensures a connection adheres to its traffic contract parameters, and
that non-conforming connections are penalized, is termed traffic policing.
Preventive measures taken by the policing function include the following: -
1. Tagging or marking: - This refers to changing the priority of packets from high to
low. If there is overloading somewhere downstream, the tagged packets are the first to
be discarded.

2. Packet Discard: - This refers to discarding the packets belonging to a non-conforming
connection.

3. Connection termination: - In the extreme case, a non-conforming connection may be
terminated.

Traffic policing is usually done at the ingress point of the network. It can also be done at
the boundary between two networks.
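A common way to implement policing is a token-bucket check; the sketch below is an assumed, simplified version in which conforming packets pass and non-conforming packets are tagged (the first of the preventive measures above). The function name, refill model, and numbers are illustrative.

```python
# Sketch of a token-bucket policer. Tokens refill at the contracted rate;
# a packet that finds enough tokens is conforming and passes, while a
# packet that exceeds the available tokens is tagged (marked low priority).

def police(packet_sizes, rate, bucket_depth):
    """Return a 'pass' / 'tag' decision per packet, refilling once per packet."""
    tokens = bucket_depth
    decisions = []
    for size in packet_sizes:
        tokens = min(bucket_depth, tokens + rate)  # refill, capped at depth
        if size <= tokens:
            tokens -= size
            decisions.append("pass")   # conforming: forwarded unchanged
        else:
            decisions.append("tag")    # non-conforming: first to be discarded
    return decisions

print(police([100, 100, 400, 100], rate=100, bucket_depth=200))
# → ['pass', 'pass', 'tag', 'pass']
```

The 400-byte packet exceeds the contracted allowance and is tagged; under downstream overload it would be the first to be dropped.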

Priority Control

Priority control refers to treating packets of unequal importance unequally.
Priorities are important in communication networks because the importance of packets
differs from packet to packet. Priorities are necessary for providing deterministic delays
and losses.
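One simple realization of priority control is strict priority scheduling, sketched below with Python's standard heap. The traffic classes and packet names are illustrative assumptions.

```python
# Sketch of strict priority scheduling: the scheduler always serves the
# highest-priority queued packet first, so important traffic (e.g. voice)
# sees lower, more deterministic delay than bulk traffic.

import heapq

queue = []
# (priority, arrival order, packet) — lower number = higher priority;
# arrival order breaks ties so equal-priority packets stay FIFO.
for order, (prio, pkt) in enumerate([(2, "bulk-1"), (0, "voice-1"),
                                     (1, "video-1"), (0, "voice-2")]):
    heapq.heappush(queue, (prio, order, pkt))

served = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(served)  # → ['voice-1', 'voice-2', 'video-1', 'bulk-1']
```

Even though the bulk packet arrived first, both voice packets and the video packet are served before it.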

Flow Control

Flow control is a means to synchronize the sender's sending capacity with the receiver's
receiving capacity. It is the mechanism that controls the flow of data between the sender
and the receiver so that the receiver's buffers do not overflow. Flow control is necessary
when a fast sender is communicating with a slow receiver.
Flow control mechanisms are broadly classified into two categories: -

1. Window-based Flow control
2. Rate-based Flow control

1. Window-based Flow control: - It limits the amount of data a sender can send at a
time, thereby preventing memory buffer overflows at the receiver. In window based
schemes, the window size acts as a count of the amount of buffer space available at the
receiver.

Figure (Window-based flow control): Packets 1-13 are shown in sequence. Packets to the
left of the transmission window have been sent and acknowledged, packets inside it form
the current window, and packets to the right are yet to be sent.

In a typical window-based flow control implementation, the source maintains the
transmission window. Packets to the left of the window are those which have been
acknowledged by the receiver. The transmission window contains:
1. Packets that have been sent, but are waiting for an acknowledgement.
2. Packets that can be sent without the arrival of further acknowledgments.
To the right of the window are packets that can be sent only after one or more packets
of the current transmission window are acknowledged. If at any time the receiver is short
of buffer space, it withholds the transmission of acknowledgments.
In the simplest form of window-based flow control, called stop-and-wait, the source waits
for an acknowledgment after sending every packet. The acknowledgment from the
receiver acts as the permission for sending the next packet. This scheme has the
disadvantage that successive packets can be sent only after a delay of one round-trip
propagation time, thereby reducing the achievable data rate. It can be improved by
allowing the source to send multiple packets without receiving an acknowledgment. If the
window size is fixed for the entire duration for which the connection remains active, the
scheme is called static-window-based flow control.
If the window size is extremely large, the receiver must have adequate buffers for the
packets. In the absence of adequate buffers, there will be packet loss.
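The send/acknowledge pattern of a static window can be sketched as a small simulation. The event format and in-order acknowledgments are simplifying assumptions; real protocols pipeline acks with transmissions.

```python
# Sketch of static-window flow control: the sender may have at most
# `window` unacknowledged packets outstanding; each acknowledgment
# slides the window one packet to the right.

def send_all(num_packets, window):
    """Return the sequence of send/ack events for an in-order transfer."""
    events = []
    next_to_send, next_ack = 0, 0
    while next_ack < num_packets:
        # send as long as the window permits
        while next_to_send < num_packets and next_to_send - next_ack < window:
            events.append(f"send {next_to_send}")
            next_to_send += 1
        # window full (or everything sent): wait for the next acknowledgment
        events.append(f"ack {next_ack}")
        next_ack += 1
    return events

print(send_all(num_packets=4, window=2))
# → ['send 0', 'send 1', 'ack 0', 'send 2', 'ack 1', 'send 3', 'ack 2', 'ack 3']
```

With `window=1` the same function degenerates to stop-and-wait: every send must be followed by its ack before the next send.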

One way to ensure that the receiver has adequate buffer space and there is no
data loss is to use credit-based flow control. In this scheme, the receiver keeps sending
credits to the sender. The credit value indicates the amount of data that the sender may
send. Once the sender has sent the data corresponding to the credit, it has to wait until it
receives more credit from the receiver. This scheme is also known as dynamic-window
flow control.

2. Rate-based flow control: - Instead of limiting the amount of data sent by a sender, the
rate at which data is sent is controlled. In this technique, the sender starts with some
initial data rate, fixed at the time of establishment of the connection.
Depending on resource availability, the receiver may request the sender to: -
1. Increase the rate
2. Decrease the rate
3. Leave it unchanged
The sender alters its data rate according to the indication received from the receiver.
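The three feedback signals above can be sketched as a simple rate-adjustment loop. The fixed step size and the signal names are illustrative assumptions; real schemes negotiate these values.

```python
# Sketch of rate-based flow control: the sender starts at an initial rate
# (fixed at connection setup) and adjusts it according to the receiver's
# feedback: increase, decrease, or leave unchanged.

def adjust_rate(initial_rate, feedback, step=10):
    """Return the rate history (kbps) as feedback signals are applied."""
    rate = initial_rate
    history = [rate]
    for signal in feedback:
        if signal == "increase":
            rate += step
        elif signal == "decrease":
            rate = max(step, rate - step)  # never drop to zero
        # "unchanged": leave the rate as-is
        history.append(rate)
    return history

print(adjust_rate(100, ["increase", "unchanged", "decrease", "decrease"]))
# → [100, 110, 110, 100, 90]
```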

Congestion Control

In communication networks, congestion occurs when the available network resources are
not enough to meet the demands.

Causes of Congestion
1. Over-commitment of shared resources: - Congestion occurs when the demand for a
resource exceeds the supply.

2. Lack of policing: - It can also occur if proper policing mechanisms are not in place to
control the traffic inflow of non-conforming users.

3. System breakdown: - Failure of network elements like routers and switches also
causes congestion.

4. Growing disparities in communication link speeds also cause congestion.

Effects of Congestion

Congestion reduces the throughput and increases the delay of the network. During low
loads, substantial amounts of network resources remain unutilized, so network throughput
is low. However, low loads also result in low delay because user data does not have to wait
for resources to become available. As the offered load is increased, more and more
resources come into use, and the net effect is an increase in throughput with a
proportional increase in delay. However, this linear increase in throughput (region 1)
with increasing offered load holds only up to a point. Any further increase in
load results in only a marginal improvement in throughput; this is the region of mild
congestion (region 2).
Once the offered load is increased beyond region 2, network performance degrades
sharply. This is the region of severe congestion (region 3), where an increase in the load
decreases the throughput and increases the delay. The point where the throughput becomes
nearly zero is called the point of congestion collapse.

Figure (Throughput and delay versus offered load): Region 1 (no congestion) shows a
linear rise in throughput; in region 2 (mild congestion) throughput improves only
marginally; in region 3 (severe congestion) throughput falls and delay grows until the
point of congestion collapse. The ideal behaviour (e.g., a token ring) is shown for
comparison.

Congestion Control Mechanisms

There are two types of congestion control mechanisms: -


1. Preventive Congestion Control
2. Reactive Congestion Control

Preventive Congestion Control

One of the simplest ways to prevent congestion is to build a network that is capable of
carrying the worst case of user traffic.

A different approach to congestion prevention is to reserve some portion of the
bandwidth (say 10-20%) exclusively for handling transient overload conditions.

Traffic contracting is another congestion prevention technique. It ensures that the demand
from the existing connections does not exceed the available resources. A new connection is
accepted only if there are ample resources available in the network to service the request.

Reactive Congestion Control: - Congestion prevention techniques result in the
underutilization of resources. Thus the preferred approach is to react to the problem of
congestion rather than to prevent it. Two reactive congestion control schemes are: -
1. Packet discard
2. Congestion notification

Packet discard: - One of the simplest ways to deal with congestion is to discard excess
packets. There are several issues that must be taken into account before discarding
packets. They are:-
a. Conformity and fairness: - Consider a case where ten applications, each requiring
100 kbps, and one bandwidth-intensive application requiring 5 Mbps share a common
link. If the congestion occurs due to the latter, it would be unfair to drop the packets of
the former; that is, the congestion control mechanism should be fair.

b. Packet retransmission: - A discarded packet is generally retransmitted by the sender.
Thus, random packet discard can lead to a situation where the network carries
retransmitted packets only, bringing network performance to a halt. Hence packet
discard should be used only under extreme circumstances.

c. Intelligent Packet Discard

Congestion Notification

Even if packets are discarded, the problem of congestion cannot be solved unless the
end systems and the intermediate routers take measures to slow down their data injection
rates. For them to slow down, they must first be informed of the state of congestion in
the network. This information is delivered through congestion notification, or feedback.
Notification involves informing the end systems or the intermediate routers about the
current state of the network.

Traffic Descriptors

A source traffic descriptor is a set of traffic parameters belonging to an ATM source. The
source traffic descriptor is used to capture the intrinsic traffic characteristics of the
connection requested by a particular source. The source traffic descriptor includes the
following: -

1. Peak Cell Rate (PCR): - This is the maximum rate at which a user is allowed to inject
data into the network. Specifically, PCR defines an upper bound on the traffic that can be
submitted by an ATM source.

2. Sustainable Cell Rate (SCR): - It is a measure of the long-term average of the user traffic.

3. Maximum Burst Size (MBS): - This is the amount of data that an ATM source can
send at its peak cell rate.

4. Burst Tolerance (BT): - It is a measure of the interval tolerance between consecutive
bursts during which cells are sent at the PCR:
BT = (MBS − 1) × (1/SCR − 1/PCR)
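The BT formula above can be checked with a short worked example. The parameter values here are illustrative, not taken from any standard.

```python
# Worked example of the burst tolerance formula:
#   BT = (MBS - 1) * (1/SCR - 1/PCR)
# with rates in cells per second, giving BT in seconds.

def burst_tolerance(mbs, scr, pcr):
    return (mbs - 1) * (1.0 / scr - 1.0 / pcr)

# e.g. MBS = 100 cells, SCR = 2000 cells/s, PCR = 10000 cells/s:
# BT = 99 * (0.0005 - 0.0001) = 99 * 0.0004
bt = burst_tolerance(mbs=100, scr=2000, pcr=10000)
print(round(bt, 4))  # → 0.0396 (seconds)
```

Intuitively, the closer SCR is to PCR, the smaller 1/SCR − 1/PCR becomes and the less burst tolerance the contract needs.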

5. Minimum Cell Rate (MCR): - It is the minimum cell rate that the network must
provide to a connection. Its value can even be zero.

6. Maximum Frame Size (MFS): - This parameter defines the maximum size of the AAL
PDU for the Guaranteed Frame Rate (GFR) service category.

ATM Service Descriptors

1. Cell Loss Ratio (CLR)


2. Maximum Cell Transfer Delay (Max CTD)
3. Peak to Peak Cell Delay Variation (Peak-to-Peak CDV)
4. Cell Error Ratio (CER)
5. Cell Misinsertion Rate (CMR)
6. Severely Errored Cell Block Ratio (SECBR)

Of these six parameters, the first three are negotiable while the last three are not. Here,
negotiable implies that the value of that particular QoS parameter can be decided with
the user.
All six parameters cover three broad categories of assessment: -

1. Speed: - It is important for an ATM network to specify how fast it can deliver cells to
the destination. CTD and CDV measure the speed of ATM networks in delivering cells:
CTD measures the transit delay, and CDV measures the variation in these delays.

2. Accuracy: - After speed comes the accuracy with which the cells are delivered. For
this, three parameters are specified: CER, CMR and SECBR.

3. Dependability: - To measure whether a cell has actually arrived or not, CLR is
specified. Thus CLR measures the dependability of the connection.

Factors affecting QoS parameters

1. Propagation Delay: - This represents the delay in transporting bits over the physical
media. It depends upon the distance between the source and the destination as well as the
speed of the links between them. This factor affects only the CTD.

2. Media Error Statistics: - This represents the error characteristics introduced by the
physical media. Errors introduced by the media may be single bit or multiple bits. This
factor affects CTD and CDV.

3. Switch Architecture

4. Buffer Capacity: - A very small buffer leads to frequent buffer overflows under high
load conditions, while a very large buffer leads to high delays. This factor affects CTD,
CDV, CLR and SECBR.

5. Traffic Load: - A fluctuating traffic load implies a variable amount of buffering from
source to destination. A high traffic load may also cause occasional buffer overflows.
As the traffic load increases, CLR, CTD and CDV also increase.

6. Number of Intermediate Nodes: - This factor affects all the QoS parameters.

7. Resource Allocation: - It will affect three negotiable QoS parameters.

8. Failures: - A failure in the form of link breakdown, switch crashing or port failure
affects cell loss, therefore affecting CER and SECBR.

CLR: - It is the fraction of cells that are either not delivered to the destination or delivered
after a pre-specified time:
CLR = cells lost / total cells transmitted

Max CTD: - It is the (1 − α) quantile of the CTD.

Peak-to-Peak CDV: - It is the difference between the (1 − α) quantile of the CTD and the
fixed CTD.

CER: - It is the ratio of the total number of cells delivered with errors to the total number
of cells delivered.

CMR: - It is the number of cells inserted that were meant for some other destination, per second.

SECBR: - It is the ratio of severely errored cell blocks to the total number of transmitted cell blocks.
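The ratio definitions above can be collected into one small sketch. All the counter values below are hypothetical; real measurements come from per-connection counters in the switches.

```python
# Sketch computing the measured QoS ratios defined above from
# hypothetical per-connection counters (all counts are illustrative).

def qos_ratios(transmitted, lost, errored, delivered,
               misinserted_per_s, severely_errored_blocks, total_blocks):
    return {
        "CLR": lost / transmitted,                      # lost / transmitted
        "CER": errored / delivered,                     # errored / delivered
        "CMR": misinserted_per_s,                       # already cells per second
        "SECBR": severely_errored_blocks / total_blocks,
    }

r = qos_ratios(transmitted=1_000_000, lost=50, errored=10, delivered=999_950,
               misinserted_per_s=0.002,
               severely_errored_blocks=1, total_blocks=10_000)
print(r["CLR"], r["SECBR"])  # e.g. a CLR of 5e-05 and an SECBR of 1e-04
```

Note that CLR, CER and SECBR are dimensionless ratios, while CMR is a rate (cells per second), which is why it is passed through unchanged.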

Figure (Probability density model): Probability density of the cell transfer delay. The
fixed delay marks the minimum; the (1 − α) quantile marks the Max CTD; the spread
between them is the peak-to-peak CDV. Cells delayed beyond the (1 − α) quantile are
deemed delivered late or lost.

