
A Course Material on

HIGH SPEED NETWORKS

By

Mr. M.SHANMUGHARAJ

ASSISTANT PROFESSOR

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

SASURIE COLLEGE OF ENGINEERING

VIJAYAMANGALAM 638 056


QUALITY CERTIFICATE

This is to certify that the e-course material

Subject Code : CS2060

Subject : HIGH SPEED NETWORKS

Class : IV Year ECE

is being prepared by me and meets the knowledge requirement of the university curriculum.

Signature of the Author

Name: M.Shanmugaraj

Designation: Assistant Professor

This is to certify that the course material being prepared by Mr. M.SHANMUGHARAJ is of adequate quality. He has referred to more than five books, at least one of which is by a foreign author.

Signature of HD

Name: Dr. K.Pandiarajan

SEAL
UNIT-1 HIGH SPEED NETWORKS 1-19

1.1 FRAME RELAY NETWORKS 1

1.2 STANDARD FRAME RELAY FRAME 1

1.3 CONGESTION-CONTROL MECHANISMS 2

1.4 FRAME RELAY VERSUS X.25 2

1.5 ASYNCHRONOUS TRANSFER MODE (ATM) 2

1.6 ATM PROTOCOL ARCHITECTURE 3

1.7 LOGICAL CONNECTION 4

1.7.1 CALL ESTABLISHMENT USING VPS 5

1.7.2 VIRTUAL CHANNEL CONNECTION USES 5

1.7.3 VP/VC CHARACTERISTICS 6

1.7.4 CONTROL SIGNALLING VCC 6

1.7.5 CONTROL SIGNALING VPC 6

1.8 STRUCTURE OF AN ATM CELL 6

1.8.1 GENERIC FLOW CONTROL 7

1.8.2 HEADER ERROR CONTROL 8

1.8.3 EFFECT OF ERROR IN CELL HEADER 9

1.9 ATM SERVICE CATEGORIES 10

1.10 ATM ADAPTATION LAYER 11

1.11 HIGH-SPEED LANS 13

1.12 CSMA/CD 13

1.12.1 HUBS AND SWITCHES 14

1.14 FIBRE CHANNEL 16


1.14.1 I/O CHANNEL 17

1.15 WIRELESS LAN REQUIREMENTS 18

1.16 IEEE 802.11 SERVICES 18

UNIT-2 CONGESTION AND TRAFFIC MANAGEMENT 20-30

2.1 QUEUEING ANALYSIS 20

2.2 QUEUEING MODELS 20

2.3 SINGLE-SERVER QUEUE 21

2.4 MULTIPLE-SERVERS QUEUE 22

2.5 QUEUEING SYSTEM CLASSIFICATION 22

2.6 POISSON PROCESS 24

2.6.1 MATHEMATICAL FORMALIZATION OF LITTLE'S THEOREM 24

2.7 EFFECTS OF CONGESTION 26

2.8 CONGESTION-CONTROL MECHANISMS 26

2.8.1 EXPLICIT CONGESTION SIGNALING 26

2.9 TRAFFIC MANAGEMENT IN CONGESTED NETWORK: SOME CONSIDERATIONS 27

2.10 FRAME RELAY CONGESTION CONTROL 28

UNIT-3 TCP AND CONGESTION CONTROL 31-58

3.1 TCP FLOW CONTROL 31

3.2 TCP CONGESTION CONTROL 34

3.2.1 TCP FLOW AND CONGESTION CONTROL 35

3.3 RETRANSMISSION TIMER MANAGEMENT 35


3.4 EXPONENTIAL RTO BACKOFF 36

3.5 KARN'S ALGORITHM 36

3.6 WINDOW MANAGEMENT 36

3.7 PERFORMANCE OF TCP OVER ATM 39

3.8 TRAFFIC AND CONGESTION CONTROL IN ATM NETWORKS 41

3.9 REQUIREMENTS FOR ATM TRAFFIC AND CONGESTION CONTROL 41

3.10 ATM TRAFFIC-RELATED ATTRIBUTES 43

3.11 TRAFFIC MANAGEMENT FRAMEWORK 45

3.12 TRAFFIC CONTROL 46

3.13 ABR TRAFFIC MANAGEMENT 51

3.14 RM CELL FORMAT 54

3.15 ABR CAPACITY ALLOCATION 54

3.15.1 COMPONENTS OF GFR MECHANISM 58

UNIT-4 INTEGRATED AND DIFFERENTIATED SERVICES 59-69

4.1 INTEGRATED SERVICES ARCHITECTURE (ISA) 60

4.2 ISA APPROACH 60

4.3 ISA COMPONENTS BACKGROUND FUNCTIONS 61

4.4 ISA SERVICES 62

4.5 QUEUING DISCIPLINE 63

4.6 FAIR QUEUING (FQ) 63

4.7 GENERALIZED PROCESSOR SHARING (GPS) 64

4.8 WEIGHTED FAIR QUEUE 64


4.9 RANDOM EARLY DETECTION(RED) 65

4.10 DIFFERENTIATED SERVICES (DS) 66

UNIT -5 PROTOCOLS FOR QOS SUPPORT 70-80

5.1 RESOURCE RESERVATION PROTOCOL (RSVP) DESIGN GOALS 70

5.2 DATA FLOWS - SESSION 71

5.3 RSVP OPERATION 71

5.4 RSVP PROTOCOL MECHANISMS 74

5.5 MULTIPROTOCOL LABEL SWITCHING (MPLS) 74

5.6 MPLS OPERATION 76

5.7 MPLS PACKET FORWARDING 77

5.8 RTP ARCHITECTURE 80

5.9 RTP ARCHITECTURE DIAGRAM 80


CS2060 HIGH SPEED NETWORKS

UNIT I HIGH SPEED NETWORKS 9


Frame Relay Networks - Asynchronous Transfer Mode - ATM Protocol Architecture - ATM Logical Connection - ATM Cell - ATM Service Categories - AAL - High Speed LANs: Fast Ethernet, Gigabit Ethernet, Fibre Channel - Wireless LANs: applications, requirements - Architecture of 802.11.

UNIT II CONGESTION AND TRAFFIC MANAGEMENT 8


Queuing Analysis - Queuing Models - Single Server Queues - Effects of Congestion - Congestion Control - Traffic Management - Congestion Control in Packet Switching Networks - Frame Relay Congestion Control.

UNIT III TCP AND ATM CONGESTION CONTROL 11


TCP Flow Control - TCP Congestion Control - Retransmission Timer Management - Exponential RTO Backoff - Karn's Algorithm - Window Management - Performance of TCP over ATM - Traffic and Congestion Control in ATM - Requirements - Attributes - Traffic Management Framework - Traffic Control - ABR Traffic Management - ABR Rate Control - RM Cell Formats - ABR Capacity Allocations - GFR Traffic Management.

UNIT IV INTEGRATED AND DIFFERENTIATED SERVICES 8


Integrated Services Architecture - Approach, Components, Services - Queuing Discipline - FQ, PS, BRFQ, GPS, WFQ - Random Early Detection - Differentiated Services.

UNIT V PROTOCOLS FOR QOS SUPPORT


RSVP - Goals and Characteristics, Data Flow, RSVP Operations, Protocol Mechanisms - Multiprotocol Label Switching - Operations, Label Stacking, Protocol Details - RTP - Protocol Architecture, Data Transfer Protocol, RTCP.
TOTAL: 45 PERIODS

TEXT BOOK
1. William Stallings, High Speed Networks and Internets, Pearson Education, Second Edition, 2002.

REFERENCES
1. Jean Walrand and Pravin Varaiya, High Performance Communication Networks, Second Edition, Harcourt Asia Pvt. Ltd., 2001.
2. Ivan Pepelnjak, Jim Guichard, Jeff Apcar, MPLS and VPN Architectures, Volumes 1 and 2, Cisco Press, 2003.
3. Abhijit S. Pandya, Ercan Sen, ATM Technology for Broadband Telecommunications Networks, CRC Press, New York, 2004.

Unit I
HIGH SPEED NETWORKS

1.1 FRAME RELAY NETWORKS

Frame Relay is often described as a streamlined version of X.25, offering fewer of the robust capabilities, such as windowing and retransmission of lost data, that are offered in X.25.
Frame Relay Devices
Devices attached to a Frame Relay WAN fall into the following two general categories:
Data terminal equipment (DTE)
Data circuit-terminating equipment (DCE)
DTEs generally are considered to be terminating equipment for a specific network and
typically are located on the premises of a customer. In fact, they may be owned by the
customer. Examples of DTE devices are terminals, personal computers, routers, and bridges.
DCEs are carrier-owned internetworking devices that provide clocking and switching services in a network; they are the devices that actually transmit data through the WAN. In most cases, these are packet switches. Figure 10-1 shows the relationship between the two categories of devices.

1.2 STANDARD FRAME RELAY FRAME

Standard Frame Relay frames consist of the fields illustrated in Figure


Figure: Five fields comprise the Frame Relay frame

Each frame relay PDU consists of the following fields:


1. Flag Field. The flag is used to perform high level data link synchronization which
indicates the beginning and end of the frame with the unique pattern 01111110. To
ensure that the 01111110 pattern does not appear somewhere inside the frame, bit
stuffing and destuffing procedures are used.
2. Address Field. Each address field may occupy octets 2 to 3, octets 2 to 4, or octets 2 to 5, depending on the range of the address in use. A two-octet address field comprises the EA (address field extension) bits and the C/R (command/response) bit.
3. DLCI-Data Link Connection Identifier Bits. The DLCI serves to identify the virtual
connection so that the receiving end knows which information connection a frame
belongs to. Note that this DLCI has only local significance. A single physical channel
can multiplex several different virtual connections.
4. FECN, BECN, DE bits. These bits report congestion:
o FECN=Forward Explicit Congestion Notification bit
o BECN=Backward Explicit Congestion Notification bit
o DE=Discard Eligibility bit

SCE 1 ECE
5. Information Field. A system parameter defines the maximum number of data bytes that
a host can pack into a frame. Hosts may negotiate the actual maximum frame length at
call set-up time. The standard specifies the maximum information field size
(supportable by any network) as at least 262 octets. Since end-to-end protocols
typically operate on the basis of larger information units, frame relay recommends that networks support a maximum value of at least 1600 octets in order to avoid the need for segmentation and reassembly by end users.
6. Frame Check Sequence (FCS) Field. Since one cannot completely ignore the bit error rate of the medium, each switching node needs to implement error detection to avoid wasting bandwidth on the transmission of errored frames. The error detection mechanism used in frame relay is based on the cyclic redundancy check (CRC).
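The bit stuffing and destuffing procedure mentioned in the Flag Field description can be sketched as follows; this is a minimal illustration of the HDLC-style rule that a 0 is inserted after every five consecutive 1s, so the payload can never mimic the 01111110 flag:

```python
def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        if b == 1:
            run += 1
            if run == 5:
                out.append(0)  # stuffed bit
                run = 0
        else:
            run = 0
    return out

def bit_destuff(bits):
    """Remove the 0 that follows every run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        if b == 1:
            run += 1
            if run == 5:
                i += 1  # skip the stuffed 0
                run = 0
        else:
            run = 0
        i += 1
    return out
```

For example, the payload bits 1111110 are transmitted as 11111010, and the receiver recovers the original sequence by destuffing.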

1.3 CONGESTION-CONTROL MECHANISMS

Frame Relay reduces network overhead by implementing simple congestion-notification


mechanisms rather than explicit, per-virtual-circuit flow control. Frame Relay typically is
implemented on reliable network media, so data integrity is not sacrificed because flow control
can be left to higher-layer protocols. Frame Relay implements two congestion-notification
mechanisms:
Forward-explicit congestion notification (FECN)
Backward-explicit congestion notification (BECN)
FECN and BECN are each controlled by a single bit contained in the Frame Relay frame header. The Frame Relay frame header also contains a Discard Eligibility (DE) bit, which is used to identify less important traffic that can be dropped during periods of congestion.

1.4 FRAME RELAY VERSUS X.25

The design of X.25 aimed to provide error-free delivery over links with high error-rates. Frame
relay takes advantage of the new links with lower error-rates, enabling it to eliminate many of
the services provided by X.25. The elimination of functions and fields, combined with digital
links, enables frame relay to operate at speeds 20 times greater than X.25.
X.25 specifies processing at layers 1, 2 and 3 of the OSI model, while frame relay operates at
layers 1 and 2 only. This means that frame relay has significantly less processing to do at each
node, which improves throughput by an order of magnitude.
X.25 prepares and sends packets, while frame relay prepares and sends frames. X.25 packets
contain several fields used for error and flow control, none of which frame relay needs. The
frames in frame relay contain an expanded address field that enables frame relay nodes to
direct frames to their destinations with minimal processing.
X.25 has a fixed bandwidth available. It uses or wastes portions of its bandwidth as the load
dictates. Frame relay can dynamically allocate bandwidth during call setup negotiation at both
the physical and logical channel level.

1.5 ASYNCHRONOUS TRANSFER MODE (ATM)

Asynchronous Transfer Mode (ATM) is an International Telecommunication Union Telecommunication Standardization Sector (ITU-T) standard for cell relay wherein information for multiple service types, such as voice, video, or data, is conveyed in small, fixed-size cells. ATM networks are connection-oriented.

ATM is a cell-switching and multiplexing technology that combines the benefits of circuit
switching (guaranteed capacity and constant transmission delay) with those of packet switching
(flexibility and efficiency for intermittent traffic). It provides scalable bandwidth from a few
megabits per second (Mbps) to many gigabits per second (Gbps). Because of its asynchronous
nature, ATM is more efficient than synchronous technologies, such as time-division
multiplexing (TDM).
With TDM, each user is assigned to a time slot, and no other station can send in that time slot.
If a station has much data to send, it can send only when its time slot comes up, even if all
other time slots are empty. However, if a station has nothing to transmit when its time slot
comes up, the time slot is sent empty and is wasted. Because ATM is asynchronous, time slots
are available on demand with information identifying the source of the transmission contained
in the header of each ATM cell.
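The contrast between fixed TDM slot assignment and ATM's on-demand slots can be made concrete with a toy calculation (hypothetical helper functions; each TDM frame is assumed to offer exactly one slot per station):

```python
def frames_needed_tdm(queues):
    """Fixed-assignment TDM: each frame carries one slot per station,
    usable only by its owner, so the busiest station sets the total
    number of frames needed (all other slots in those frames may go
    out empty and wasted)."""
    return max(queues)

def frames_needed_statistical(queues, slots_per_frame):
    """Statistical (ATM-like) multiplexing: any station may claim any
    free slot, so only the total load matters."""
    total = sum(queues)
    return -(-total // slots_per_frame)  # ceiling division
```

With four stations where only one has 5 cells queued, TDM needs 5 frames (15 slots wasted), while on-demand slot allocation needs only 2 frames of 4 slots.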
ATM transfers information in fixed-size units called cells. Each cell consists of 53
octets, or bytes. The first 5 bytes contain cell-header information, and the remaining 48 contain
the payload (user information). Small, fixed-length cells are well suited to transferring voice
and video traffic because such traffic is intolerant of delays that result from having to wait for a
large data packet to download, among other things. Figure illustrates the basic format of an
ATM cell. Figure :An ATM Cell Consists of a Header and Payload Data

1.6 ATM PROTOCOL ARCHITECTURE

ATM is similar to cell relay and to packet switching using X.25 and frame relay. Like packet switching and frame relay, ATM involves the transfer of data in discrete pieces, and it allows multiple logical connections to be multiplexed over a single physical interface. In the case of ATM, the information flow on each logical connection is organized into fixed-size packets called cells. ATM is a streamlined protocol with minimal error and flow control capabilities; this reduces the overhead of processing ATM cells and reduces the number of overhead bits required with each cell, thus enabling ATM to operate at high data rates. The use of fixed-size cells also simplifies the processing required at each ATM node, again supporting the use of ATM at high data rates.

The ATM architecture uses a logical model to describe the functionality that it supports. ATM functionality corresponds to the physical layer and part of the data link layer of the OSI reference model. The protocol reference model makes reference to three separate planes:

User plane: provides for user information transfer, along with associated controls (e.g., flow control, error control).
Control plane: performs call control and connection control functions.

Management plane: includes plane management, which performs management functions related to the system as a whole and provides coordination between all the planes, and layer management, which performs management functions relating to the resources and parameters residing in its protocol entities.
The ATM reference model is composed of the following ATM layers:

Physical layer: Analogous to the physical layer of the OSI reference model, the ATM physical layer manages medium-dependent transmission.
ATM layer: Combined with the ATM adaptation layer, the ATM layer is roughly
analogous to the data link layer of the OSI reference model. The ATM layer is responsible for
the simultaneous sharing of virtual circuits over a physical link (cell multiplexing) and passing
cells through the ATM network (cell relay). To do this, it uses the VPI and VCI information in
the header of each ATM cell.
ATM adaptation layer (AAL): Combined with the ATM layer, the AAL is roughly
analogous to the data link layer of the OSI model. The AAL is responsible for isolating higher-
layer protocols from the details of the ATM processes. The adaptation layer prepares user data
for conversion into cells and segments the data into 48-byte cell payloads.
Finally, the higher layers residing above the AAL accept user data, arrange it into packets, and
hand it to the AAL. The figure illustrates the ATM reference model.
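The SAR function just described can be sketched as follows; this is a simplified illustration in which user data is split into 48-byte cell payloads, with the final payload zero-padded (real AAL types add their own headers, trailers, and padding rules that this omits):

```python
def segment(data: bytes, cell_payload: int = 48):
    """Split a higher-layer data unit into fixed 48-byte ATM cell
    payloads, zero-padding the last one to a full cell."""
    cells = []
    for i in range(0, len(data), cell_payload):
        chunk = data[i:i + cell_payload]
        cells.append(chunk.ljust(cell_payload, b'\x00'))
    return cells
```

For instance, a 100-byte packet becomes three cells: two full 48-byte payloads and a third carrying 4 data bytes plus 44 bytes of padding.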

1.7 LOGICAL CONNECTION

Virtual channel connections (VCC)


Analogous to virtual circuit in X.25
Basic unit of switching
Between two end users
Full duplex
Fixed size cells
Data, user-network exchange (control) and network-network exchange (network
management and routing)

Virtual path connection (VPC)
Bundle of VCC with same end points

Simplified network architecture.


Increased network performance and reliability.
Reduced processing.
Short connection setup time.
Enhanced network services.

1.7.1 CALL ESTABLISHMENT USING VPs

1.7.2 VIRTUAL CHANNEL CONNECTION USES

Between end users


End to end user data
Control signals
VPC provides overall capacity
VCC organization done by users
Between end user and network
Control signaling
Between network entities

Network traffic management
Routing

1.7.3 VP/VC CHARACTERISTICS

Quality of service
Switched and semi-permanent channel connections
Call sequence integrity
Traffic parameter negotiation and usage monitoring
VPC only
Virtual channel identifier restriction within VPC

1.7.4 CONTROL SIGNALLING VCC

Done on separate connection


Semi-permanent VCC
Meta-signaling channel
Used as permanent control signal channel
User to network signaling virtual channel
For control signaling
Used to set up VCCs to carry user data
User to user signaling virtual channel
Within pre-established VPC
Used by two end users without network intervention to establish and release
user to user VCC

1.7.5 CONTROL SIGNALING VPC

Semi-permanent
Customer controlled
Network controlled

1.8 STRUCTURE OF AN ATM CELL

An ATM cell consists of a 5-byte header and a 48-byte payload. The payload size of 48 bytes was a compromise between the needs of voice telephony and packet networks, obtained by a simple averaging of the US proposal of 64 bytes and the European proposal of 32 bytes, said by some to be motivated by a European desire not to need echo cancellers on national trunks.
ATM defines two different cell formats: NNI (Network-Network Interface) and UNI (User-Network Interface). Most ATM links use the UNI cell format.


GFC = Generic Flow Control (4 bits) (default: 4-zero bits)


VPI = Virtual Path Identifier (8 bits UNI) or (12 bits NNI)
VCI = Virtual channel identifier (16 bits)
PT = Payload Type (3 bits)
CLP = Cell Loss Priority (1 bit)
HEC = Header Error Control (8-bit CRC, polynomial x^8 + x^2 + x + 1)

The PT field is used to designate various special kinds of cells for Operation and Management
(OAM) purposes, and to delineate packet boundaries in some AALs.
Several of ATM's link protocols use the HEC field to drive a CRC-Based Framing algorithm,
which allows the position of the ATM cells to be found with no overhead required beyond
what is otherwise needed for header protection. The 8-bit CRC is used to correct single-bit
header errors and detect multi-bit header errors. When multi-bit header errors are detected, the
current and subsequent cells are dropped until a cell with no header errors is found.
In a UNI cell the GFC field is reserved for a local flow control/submultiplexing system
between users. This was intended to allow several terminals to share a single network
connection, in the same way that two ISDN phones can share a single basic rate ISDN
connection. All four GFC bits must be zero by default. The NNI cell format is almost identical to the UNI format, except that the 4-bit GFC field is re-allocated to the VPI field, extending the VPI to 12 bits. Thus, a single NNI ATM interconnection is capable of addressing almost 2^12 VPs of up to almost 2^16 VCs each (in practice some of the VP and VC numbers are reserved).
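The UNI field layout and the HEC computation over the first four header octets can be sketched in Python. This is an illustrative model, not a conformance implementation; the final XOR with 0x55 reflects the coset value ITU-T I.432 specifies for the HEC:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 with generator x^8 + x^2 + x + 1 (poly 0x07)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

def uni_header(gfc: int, vpi: int, vci: int, pt: int, clp: int) -> bytes:
    """Pack the UNI layout (GFC 4 / VPI 8 / VCI 16 / PT 3 / CLP 1 bits)
    into the first four header octets, then append the HEC octet."""
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    first4 = word.to_bytes(4, 'big')
    hec = crc8(first4) ^ 0x55  # coset 01010101 per I.432
    return first4 + bytes([hec])
```

A receiver recomputes the CRC over the first four octets and compares it with the fifth; a one-bit mismatch can be corrected, a multi-bit mismatch causes the cell to be discarded.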

1.8.1 GENERIC FLOW CONTROL

Control traffic flow at user to network interface (UNI) to alleviate short term overload

Two sets of procedures
Uncontrolled transmission
Controlled transmission
Every connection either subject to flow control or not
Subject to flow control

May be one group (A) default
May be two groups (A and B)
Flow control is from subscriber to network
Controlled by network side
Terminal equipment (TE) initializes two variables
TRANSMIT flag to 1
GO_CNTR (credit counter) to 0
If TRANSMIT=1 cells on uncontrolled connection may be sent any time
If TRANSMIT=0 no cells may be sent (on controlled or uncontrolled connections)
If HALT received, TRANSMIT set to 0 and remains until NO_HALT
If TRANSMIT=1 and no cell to transmit on any uncontrolled connection:
If GO_CNTR>0, TE may send cell on controlled connection
Cell marked as being on controlled connection
GO_CNTR decremented
If GO_CNTR=0, TE may not send on controlled connection
TE sets GO_CNTR to GO_VALUE upon receiving SET signal
Null signal has no effect
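The controlled-transmission rules listed above can be expressed as a small state machine (a sketch of the procedure as described, with an arbitrary GO_VALUE; signal names follow the text):

```python
class GfcTerminal:
    """Terminal-equipment side of the GFC procedure: HALT/NO_HALT gate
    all transmission, and SET refills the credit counter for the
    controlled connections."""

    def __init__(self, go_value: int = 3):
        self.transmit = 1        # TRANSMIT flag, initialized to 1
        self.go_cntr = 0         # GO_CNTR credit counter, initialized to 0
        self.go_value = go_value # GO_VALUE used to refill GO_CNTR

    def on_signal(self, sig: str) -> None:
        if sig == 'HALT':
            self.transmit = 0    # no cells may be sent until NO_HALT
        elif sig == 'NO_HALT':
            self.transmit = 1
        elif sig == 'SET':
            self.go_cntr = self.go_value
        # a null signal has no effect

    def may_send_controlled(self) -> bool:
        """True if a cell on a controlled connection may be sent now;
        sending consumes one credit from GO_CNTR."""
        if self.transmit == 1 and self.go_cntr > 0:
            self.go_cntr -= 1
            return True
        return False
```

After a SET signal the terminal may send GO_VALUE cells on controlled connections; a HALT blocks everything until NO_HALT arrives, without disturbing the remaining credits.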

USE OF HALT

To limit effective data rate on ATM


Should be cyclic
To reduce data rate by half, HALT issued to be in effect 50% of time
Done on regular pattern over lifetime of connection

1.8.2 HEADER ERROR CONTROL

8 bit error control field


Calculated on remaining 32 bits of header
Allows some error correction

At initialization, the receiver defaults to error correction mode, which can correct single-bit errors.
After a cell is received, the HEC calculation and comparison are performed.
If no error is detected, the receiver remains in error correction mode.
If an error is detected, the receiver determines whether it is a single-bit or a multi-bit error, corrects a single-bit error, and switches to detection mode.
Data rates defined for the ATM physical layer include 25.6 Mbps, 51.84 Mbps, 155.52 Mbps, and 622.08 Mbps, over either a cell-based or an SDH-based physical layer.

1.8.3 EFFECT OF ERROR IN CELL HEADER


1.9 ATM SERVICE CATEGORIES

Real time:
Constant bit rate (CBR)
Real time variable bit rate (rt-VBR)
Non-real time:
Non-real time variable bit rate (nrt-VBR)
Available bit rate (ABR)
Unspecified bit rate (UBR)
Guaranteed frame rate (GFR)

Real Time Services:


Constant bit rate (CBR)
Used where a fixed data rate is continuously available.
Tight upper bound on transfer delay.
Mostly used for uncompressed audio and video.
Examples:
a. Video conferencing.
b. Interactive audio.
c. A/V distribution and retrieval.

Real time variable bit rate (rt-VBR)


Time sensitive application.
Tightly constrained delay and delay variation.
rt-VBR applications transmit at a rate that varies with time.
Example : compressed video
a. Produces varying sized image frames.
b. Original (uncompressed) frame rate constant.
c. So compressed data rate varies.

Can statistically multiplex connections


i.e., gives the network more flexibility.

Non Real Time Services:


Non-real time variable bit rate (nrt-VBR)
It is possible to characterize the expected traffic flow, so the network can improve QoS in terms of loss and delay.
The end system specifies:
a. Peak cell rate.
b. Sustainable or average rate.
c. A measure of how bursty the traffic may be.

Unspecified bit rate (UBR)


May use additional capacity over and above that used by CBR and VBR traffic.
a. Not all resources are dedicated to CBR and VBR.
b. Due to the bursty nature of VBR, less than the committed capacity is used.
For applications that can tolerate some cell loss or variable delays:
a. e.g., TCP-based traffic.
Cells are forwarded on a FIFO basis.
Best-effort service: no initial commitment is made to a UBR source, and no feedback concerning congestion is provided.

Available bit rate (ABR)

Application using ABR specifies peak cell rate (PCR) and minimum cell rate
(MCR).
Resources allocated to give at least MCR.
Spare capacity shared among all ABR sources.
e.g. LAN interconnection.

Guaranteed frame rate (GFR)


Designed to support IP backbone sub networks.
Better service than UBR for frame based traffic.
Including IP and Ethernet.

Optimize handling of frame based traffic passing from LAN through router to ATM
backbone.
Used by enterprise, carrier and ISP networks.
Consolidation and extension of IP over WAN.
ABR difficult to implement between routers over ATM network.
GFR is a better alternative for traffic originating on Ethernet:
a. The network is aware of frame/packet boundaries.
b. When congested, all cells from a frame are discarded.
c. The user is guaranteed a minimum capacity.
d. Additional frames are carried if the network is not congested.

1.10 ATM ADAPTATION LAYER

The AAL is organized into two logical sublayers:


1. Convergence sub layer
2. Segmentation and re-assembly sub layer

Convergence sublayer (CS)
Support for specific applications
AAL user attaches at SAP
Segmentation and re-assembly sublayer (SAR)
Packages and unpacks info received from CS into cells
Four types
Type 1
Type 2
Type 3/4
Type 5

AAL TYPE 1

It deals with a CBR source.


SAR packs the bits into cells for transmission and unpacks bits at reception.
Each block is accompanied by a sequence number so that errored PDUs (Protocol Data Units) can be tracked.
The 4-bit SN field consists of a convergence sublayer indicator (CSI) bit and a 3-bit sequence count (SC).
The sequence number protection (SNP) field is an error code for error detection, and possibly correction, on the sequence number field.

AAL TYPE 2

It deals with VBR traffic.
It is used for analog applications.

AAL TYPE 3\4

Connectionless mode: each block of data presented to the SAR sublayer is tracked independently.
Connection-oriented mode: it is possible to define multiple SAR logical connections over a single ATM connection.
Message mode transfers framed data.
Stream mode supports the transfer of low-speed continuous data with low delay requirements.

AAL TYPE 5

Streamlined transport for connection-oriented higher-layer protocols:
To reduce protocol processing overhead.
To reduce transmission overhead.
To ensure adaptability to existing transport protocols.

1.11 HIGH-SPEED LANS

Emergence of High-Speed LANs


2 Significant trends
Computing power of PCs continues to grow rapidly
Network computing
Examples of requirements
Centralized server farms
Power workgroups
High-speed local backbone

Classical Ethernet
Bus topology LAN
10 Mbps
CSMA/CD medium access control protocol
2 problems:
A transmission from any station can be received by all stations
How to regulate transmission
Solution to First Problem
Data transmitted in blocks called frames:
User data
Frame header containing unique address of destination station

1.12 CSMA/CD
Carrier Sense Multiple Access with Collision Detection
If the medium is idle, transmit.
If the medium is busy, continue to listen until the channel is idle, then transmit
immediately.
If a collision is detected during transmission, immediately cease transmitting.
After a collision, wait a random amount of time, then attempt to transmit again (repeat from
step 1).
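The four steps above can be sketched as follows, with `channel_idle` and `collision_detected` as hypothetical stand-ins for physical-layer carrier sensing; the random wait is implemented here as the truncated binary exponential backoff used by IEEE 802.3 Ethernet:

```python
import random

def csma_cd_send(channel_idle, collision_detected, max_attempts=16):
    """One frame transmission under CSMA/CD. channel_idle() and
    collision_detected() are caller-supplied callbacks modeling the
    medium (not a real NIC API). Returns the attempt number on which
    the frame finally got through."""
    for attempt in range(1, max_attempts + 1):
        while not channel_idle():       # steps 1-2: listen until idle
            pass
        if not collision_detected():    # step 3: transmit, watch for collision
            return attempt              # success: frame sent
        # step 4: truncated binary exponential backoff - wait a random
        # number of slot times in [0, 2^k - 1], with k capped at 10
        k = min(attempt, 10)
        slots = random.randint(0, 2 ** k - 1)
        # time.sleep(slots * SLOT_TIME) in a real implementation
    raise RuntimeError("excessive collisions; frame dropped")
```

With no contention the frame goes out on the first attempt; each collision doubles the backoff range, spreading the retries of competing stations apart in time.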


1.13 Medium Options at 10Mbps

<data rate> <signaling method> <max length>


10Base5
10 Mbps
50-ohm coaxial cable bus
Maximum segment length 500 meters
10Base-T
Twisted pair, maximum length 100 meters
Star topology (hub or multipoint repeater at central point)

1.12.1 HUBS AND SWITCHES

Hub
Transmission from a station received by central hub and retransmitted on all outgoing lines
Only one transmission at a time

Bridge
Frame handling done in software

Analyze and forward one frame at a time
Store-and-forward

Layer 2 Switch
Frame handling done in hardware
Multiple data paths and can handle multiple frames at a time
Can do cut-through
Incoming frame switched to one outgoing line
Many transmissions at same time

Layer 2 Switches
Flat address space
Broadcast storm
Only one path between any 2 devices

Solution 1: subnetworks connected by routers


Solution 2: layer 3 switching, packet-forwarding logic in hardware


Benefits of 10 Gbps Ethernet over ATM


No expensive, bandwidth consuming conversion between Ethernet packets and ATM
cells
Network is Ethernet, end to end
IP plus Ethernet offers QoS and traffic policing capabilities that approach those of ATM
Wide variety of standard optical interfaces for 10 Gbps Ethernet

1.14 FIBRE CHANNEL


2 methods of communication with processor:
I/O channel
Network communications
Fibre channel combines both
Simplicity and speed of channel communications
Flexibility and interconnectivity of network communications


1.14.1 I/O CHANNEL


Hardware based, high-speed, short distance
Direct point-to-point or multipoint communications link
Data type qualifiers for routing payload
Link-level constructs for individual I/O operations
Protocol specific specifications to support e.g. SCSI
Fibre Channel Network-Oriented Facilities
Full multiplexing between multiple destinations
Peer-to-peer connectivity between any pair of ports
Internetworking with other connection technologies
Fibre Channel Requirements
Full duplex links with 2 fibres/link
100 Mbps to 800 Mbps

Distances up to 10 km
Small connectors
High capacity
Greater connectivity than existing multidrop channels
Broad availability
Support for multiple cost/performance levels
Support for multiple existing interface command sets
Fibre Channel Protocol Architecture
FC-0 Physical Media
FC-1 Transmission Protocol
FC-2 Framing Protocol
FC-3 Common Services
FC-4 Mapping

1.15 WIRELESS LAN REQUIREMENTS


Throughput
Number of nodes
Connection to backbone
Service area
Battery power consumption
Transmission robustness and security
Collocated network operation
License-free operation
Handoff/roaming
Dynamic configuration

1.16 IEEE 802.11 SERVICES


Association
Reassociation
Disassociation
Authentication
Privacy

Access point: performs the wireless-to-wired bridging function between networks.
Wireless medium: the means of moving frames from station to station.
Station: a computing device with a wireless network interface.
Distribution system: the backbone network used to relay frames between access points.
On wireless LAN, any station within radio range of other devices can transmit
Any station within radio range can receive
Authentication: Used to establish identity of stations to each other
Wired LANs assume access to physical connection conveys authority to connect
to LAN
Not valid assumption for wireless LANs
Connectivity achieved by having properly tuned antenna
Authentication service used to establish station identity
802.11 supports several authentication schemes
Range from relatively insecure handshaking to public-key encryption schemes
802.11 requires mutually acceptable, successful authentication before
association
MAC layer covers three functional areas
Reliable data delivery
Access control
Security (beyond our scope)
The 802.11 physical and MAC layers are subject to unreliability:
Noise, interference, and other propagation effects result in loss of frames
Even with error-correction codes, frames may not be successfully received
This can be dealt with at a higher layer, such as TCP; however, retransmission timers at higher layers are typically on the order of seconds
It is more efficient to deal with errors at the MAC level: if no ACK is received within a short period of time, the frame is retransmitted
802.11 includes frame exchange protocol
Station receiving frame returns acknowledgment (ACK) frame
Exchange treated as atomic unit
Not interrupted by any other station


Unit -02
CONGESTION AND TRAFFIC MANAGEMENT

2.1 QUEUEING ANALYSIS

A queueing model is used to approximate a real queueing situation or system, so that the queueing behaviour can be analysed mathematically. Queueing models allow a number of useful steady state performance measures to be determined, including:
the average number in the queue, or the system,
the average time spent in the queue, or the system,
the statistical distribution of those numbers or times,
the probability the queue is full, or empty, and
the probability of finding the system in a particular state.
These performance measures are important as issues or problems caused by
queueing situations are often related to customer dissatisfaction with service or may
be the root cause of economic losses in a business. Analysis of the relevant queueing
models allows the cause of queueing issues to be identified and the impact of any
changes that might be wanted to be assessed.

2.2 QUEUEING MODELS

Queueing models can be represented using Kendall's notation:

A/B/S/K/N/Disc
where:
A is the interarrival time distribution
B is the service time distribution
S is the number of servers
K is the system capacity
N is the calling population
Disc is the service discipline assumed
Some standard notation for distributions (A or B) are:
M for a Markovian (exponential) distribution
Ek for an Erlang distribution with k phases
D for Deterministic (constant)
G for General distribution
PH for a Phase-type distribution
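A small helper illustrating how a Kendall string maps to the components just listed (omitted trailing fields default to infinite capacity, infinite population, and FIFO discipline, following the usual convention):

```python
def parse_kendall(spec: str) -> dict:
    """Expand an A/B/S/K/N/Disc Kendall-notation string into its
    named components, filling in conventional defaults for any
    omitted trailing fields."""
    names = ['arrivals', 'service', 'servers',
             'capacity', 'population', 'discipline']
    defaults = [None, None, None, 'inf', 'inf', 'FIFO']
    parts = spec.split('/')
    full = parts + defaults[len(parts):]
    return dict(zip(names, full))
```

For example, 'M/M/1' denotes Markovian arrivals and service with a single server, infinite capacity and population, served first-in first-out.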

Models
Construction and analysis

Queueing models are generally constructed to represent the steady state of a


queueing system, that is, the typical, long run or average state of the system. As a
consequence, these are stochastic models that represent the probability that a
queueing system will be found in a particular configuration or state.
A general procedure for constructing and analysing such queueing models is:

1. Identify the parameters of the system, such as the arrival rate, service time, Queue
capacity, and perhaps draw a diagram of the system.
2. Identify the system states. (A state will generally represent the integer number of
customers, people, jobs, calls, messages, etc. in the system and may or may not be
limited.)
3. Draw a state transition diagram that represents the possible system states and
identify the rates to enter and leave each state. This diagram is a representation of a
Markov chain.
4. Because the state transition diagram represents the steady state, there is a
balanced flow between states, so the probabilities of being in adjacent states can be
related mathematically in terms of the arrival and service rates and the state
probabilities.
5. Express all the state probabilities in terms of the empty state probability, using the
inter-state transition relationships.
6. Determine the empty state probability by using the fact that all state probabilities
always sum to 1.
Whereas specific problems that have small finite state models are often able to be
analysed numerically, analysis of more general models, using calculus, yields useful
formulae that can be applied to whole classes of problems.
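As an illustration of steps 4-6, the balance equations of a simple birth-death chain can be solved in a few lines. The sketch below (Python; the M/M/1/K rates are hypothetical examples) expresses every state probability in terms of the empty-state probability p0 and then normalises:

```python
# Steps 4-6 for an M/M/1/K queue (hypothetical rates lam, mu).
# State n = number of customers in the system, 0 <= n <= K.
# Balance between adjacent states: lam * p[n] = mu * p[n+1].

def mm1k_state_probs(lam, mu, K):
    rho = lam / mu
    # Step 5: every state probability in terms of the empty-state p0.
    unnorm = [rho ** n for n in range(K + 1)]
    # Step 6: p0 follows from requiring the probabilities to sum to 1.
    p0 = 1.0 / sum(unnorm)
    return [p0 * u for u in unnorm]

probs = mm1k_state_probs(lam=2.0, mu=3.0, K=4)
print(probs)   # p0..p4; each successive ratio is lam/mu = 2/3
```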

2.3 SINGLE-SERVER QUEUE

Single-server queues are, perhaps, the most commonly encountered queueing


situation in real life. One encounters a queue with a single server in many situations,
including business (e.g. sales clerk), industry (e.g. a production line), transport (e.g.
a bus, a taxi rank, an intersection), telecommunications (e.g. Telephone line),
computing (e.g. processor sharing). Even where there are multiple servers handling
the situation it is possible to consider each server individually as part of the larger
system, in many cases. (e.g A supermarket checkout has several single server queues
that the customer can select from.) Consequently, being able to model and analyse a
single server queue's behaviour is a particularly useful thing to do.

Poisson arrivals and service

M/M/1/∞/∞ represents a single server that has unlimited queue capacity and infinite
calling population; both arrivals and service are Poisson (or random) processes,
meaning the statistical distribution of both the inter-arrival times and the service
times follow the exponential distribution. Because of the mathematical nature of the
exponential distribution, a number of quite simple relationships are able to be
derived for several performance measures based on knowing the arrival rate and
service rate.
This is fortunate because an M/M/1 queueing model can be used to approximate
many queueing situations.

Poisson arrivals and general service

M/G/1/∞/∞ represents a single server that has unlimited queue capacity and infinite
calling population, while the arrival is still Poisson process, meaning the statistical

distribution of the inter-arrival times still follow the exponential distribution, the
distribution of the service time does not. The distribution of the service time may
follow any general statistical distribution, not just exponential. Relationships are still
able to be derived for a (limited) number of performance measures if one knows the
arrival rate and the mean and variance of the service rate. However, the derivations
are generally more complex.
A number of special cases of M/G/1 provide specific solutions that give broad
insights into the best model to choose for specific queueing situations because they
permit the comparison of those solutions to the performance of an M/M/1 model.
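The best-known such relationship is the Pollaczek-Khinchine formula for the mean waiting time, which needs only the arrival rate and the mean and variance of the service time. A small sketch with illustrative rates:

```python
# Pollaczek-Khinchine mean waiting time for M/G/1: only the arrival
# rate and the mean/variance of the service time are needed.
def mg1_wq(lam, mean_s, var_s):
    rho = lam * mean_s             # utilisation, must be < 1
    return lam * (var_s + mean_s ** 2) / (2 * (1 - rho))

# Exponential service (variance = mean^2) reproduces the M/M/1 wait:
print(mg1_wq(16.0, 1 / 32, (1 / 32) ** 2))   # 0.03125
# Deterministic service (variance = 0) halves the mean wait:
print(mg1_wq(16.0, 1 / 32, 0.0))             # 0.015625
```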

2.4 MULTIPLE-SERVERS QUEUE

Multiple (identical)-servers queue situations are frequently encountered in


telecommunications or a customer service environment. When modelling these
situations care is needed to ensure that it is a multiple servers queue, not a network
of single server queues, because results may differ depending on how the queuing
model behaves.
One observational insight provided by comparing queuing models is that a single
queue with multiple servers performs better than each server having their own queue
and that a single large pool of servers performs better than two or more smaller
pools, even though there are the same total number of servers in the system.
One simple example illustrates the above fact. Consider a system having 8 input
lines feeding a single queue served by one output line of capacity 64 kbit/s.
Taking the arrival rate at each input as 2 packets/s, the total arrival rate is
λ = 16 packets/s. With an average of 2000 bits per packet, the service rate is 64
kbit/s / 2000 bits = 32 packets/s. Hence, the average response time of the system is
1/(μ − λ) = 1/(32 − 16) = 0.0625 sec. Now, consider a second system with 8 queues, one
for each server. Each of the 8 output lines has a capacity of 8 kbit/s. The calculation
yields the response time as 1/(μ − λ) = 1/(4 − 2) = 0.5 sec. And the average waiting
time in the queue, ρ/(μ − λ) with ρ = 0.5 in both cases, is 0.03125 sec in the first
case and 0.25 sec in the second.
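The comparison above can be checked numerically with the M/M/1 formulas T = 1/(μ − λ) and Wq = ρ/(μ − λ):

```python
# M/M/1 comparison of one fast line vs. several slow lines
# (T = 1/(mu - lam), Wq = rho/(mu - lam); 2000 bits per packet).
bits_per_packet = 2000

# Case 1: one queue onto one 64 kbit/s line, 8 x 2 = 16 packets/s.
lam1, mu1 = 16.0, 64000 / bits_per_packet   # mu1 = 32 packets/s
T1, Wq1 = 1 / (mu1 - lam1), (lam1 / mu1) / (mu1 - lam1)

# Case 2: 8 separate queues, each an 8 kbit/s line at 2 packets/s.
lam2, mu2 = 2.0, 8000 / bits_per_packet     # mu2 = 4 packets/s
T2, Wq2 = 1 / (mu2 - lam2), (lam2 / mu2) / (mu2 - lam2)

print(T1, Wq1)   # 0.0625 0.03125
print(T2, Wq2)   # 0.5 0.25
```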

Infinitely many servers

While never exactly encountered in reality, an infinite-servers (e.g. M/M/∞) model


is a convenient theoretical model for situations that involve storage or delay, such as
parking lots, warehouses and even atomic transitions. In these models there is no
queue, as such, instead each arriving customer receives service. When viewed from
the outside, the model appears to delay or store each customer for some time.

2.5 QUEUEING SYSTEM CLASSIFICATION

With Little's Theorem, we have developed some basic understanding of a


queueing system. To further our understanding we will have to dig deeper into
characteristics of a queueing system that impact its performance. For example,
queueing requirements of a restaurant will depend upon factors like:
How do customers arrive in the restaurant? Are customer arrivals more during lunch
and dinner time (a regular restaurant)? Or is the customer traffic more uniformly
distributed (a cafe)?

How much time do customers spend in the restaurant? Do customers typically leave
the restaurant in a fixed amount of time? Does the customer service time vary with
the type of customer?
How many tables does the restaurant have for servicing customers?
The above three points correspond to the most important characteristics of a
queueing system. They are explained below:
Arrival Process: The probability density distribution that determines the customer
arrivals in the system. In a messaging system, this refers to the message arrival
probability distribution.
Service Process: The probability density distribution that determines the customer
service times in the system. In a messaging system, this refers to the message
transmission time distribution. Since message transmission time is directly
proportional to the length of the message, this parameter indirectly refers to the
message length distribution.
Number of Servers: The number of servers available to service the customers. In a
messaging system, this refers to the number of links between the source and
destination nodes.
Based on the above characteristics, queueing systems can be classified by the
following convention:
A/S/n
Where A is the arrival process, S is the service process and n is the number of
servers. A and S can be any of the following:
M (Markov) Exponential probability density
D (Deterministic) All customers have the same value
G (General) Any arbitrary probability distribution

Examples of queueing systems that can be defined with this convention are:
M/M/1: This is the simplest queueing system to analyze. Here the arrival and
service times are negative-exponentially distributed (Poisson process). The system
consists of only one server. This queueing system can be applied to a wide variety of
problems, as any system with a very large number of independent customers can be
approximated as a Poisson process. Using exponentially distributed service times,
however, is not applicable in many applications and is only a crude approximation.
Refer to M/M/1 Queuing System for details.
M/D/n: Here the arrival process is Poisson and the service time distribution is
deterministic. The system has n servers (e.g. a ticket booking counter with n
cashiers, where the service time can be assumed to be the same for all customers).
G/G/n: This is the most general queueing system where the arrival and service time
processes are both arbitrary. The system has n servers. No analytical solution is
known for this queueing system.
Markovian arrival processes

In queueing theory, Markovian arrival processes are used to model the arrival of
customers to a queue.
Some of the most common include the Poisson process, the Markov arrival process
and the batch Markovian arrival process.
A Markovian arrival process comprises two processes: a continuous-time Markov
process j(t), generated by a generator or rate matrix Q, and a counting process N(t),
which has state space ℕ (the set of all natural numbers). N(t) increases every time
there is a transition in j(t) that is marked.

2.6 POISSON PROCESS

The Poisson arrival process, or Poisson process, counts the number of arrivals, each
of which has an exponentially distributed inter-arrival time. In the most general
case this can be represented by a rate matrix with −λ on the diagonal and λ on the
superdiagonal.
Markov arrival process
The Markov arrival process (MAP) is a generalisation of the Poisson process in
which the sojourn times between arrivals need not be exponentially distributed. The
homogeneous case is described by a pair of rate matrices.
Little's law
In queueing theory, Little's result, theorem, lemma, or law says:
The average number of customers in a stable system (over some time interval), N, is
equal to their average arrival rate, λ, multiplied by their average time in the system,
T, or:

N = λT

Although it looks intuitively reasonable, it's a quite remarkable result, as it implies


that this behavior is entirely independent of any of the detailed probability
distributions involved, and hence requires no assumptions about the schedule
according to which customers arrive or are serviced, or whether they are served in
the order in which they arrive.
It is also a comparatively recent result - it was first proved by John Little, an
Institute Professor and the Chair of Management Science at the MIT Sloan School of
Management, in 1961.
Handily his result applies to any system, and particularly, it applies to systems
within systems. So in a bank, the queue might be one subsystem, and each of the
tellers another subsystem, and Little's result could be applied to each one,

as well as the whole thing. The only requirement is that the system is stable -- it can't
be in some transition state such as just starting up or just shutting down.

2.6.1 Mathematical formalization of Little's theorem

Let α(t) be the number of arrivals to some system in the interval [0, t]. Let β(t) be
the number of departures from the same system in the interval [0, t]. Both α(t) and
β(t) are integer-valued increasing functions by their definition. Let Tt be the mean
time spent in the system (during the interval [0, t]) by all the customers who were in
the system during the interval [0, t]. Let Nt be the mean number of customers in the
system over the duration of the interval [0, t].

If the following limits exist,

λ = lim t→∞ α(t)/t,  δ = lim t→∞ β(t)/t,  T = lim t→∞ Tt,

and, further, if λ = δ (the long-run arrival and departure rates agree), then the limit

N = lim t→∞ Nt

exists and is given by Little's theorem,

N = λT
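Little's theorem can be checked on any fully-observed trace. The sketch below uses a small made-up set of arrival and departure times and compares the time-average number in the system with λT:

```python
# Little's theorem on a fully-specified toy trace: customer i arrives
# at a[i] and departs at d[i]; all departures fall inside [0, horizon].
a = [0.0, 1.0, 2.0, 5.0]
d = [3.0, 4.0, 6.0, 7.0]
horizon = 8.0

lam = len(a) / horizon                               # arrival rate
T = sum(di - ai for ai, di in zip(a, d)) / len(a)    # mean time in system

# Time-average number in system = total customer-time / horizon.
N = sum(di - ai for ai, di in zip(a, d)) / horizon

print(N, lam * T)   # 1.5 1.5 -- the two sides of N = lam * T agree
```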

Ideal Performance


2.8 EFFECTS OF CONGESTION

2.9 CONGESTION-CONTROL MECHANISMS

Backpressure
Request from destination to source to reduce rate
Useful only on a logical connection basis
Requires hop-by-hop flow control mechanism
Policing
Measuring and restricting packets as they enter the network
Choke packet
Specific message back to source
E.g., ICMP Source Quench
Implicit congestion signaling

2.9.1 Explicit congestion signaling

Frame Relay reduces network overhead by implementing simple congestion-notification
mechanisms rather than explicit, per-virtual-circuit flow control. Frame
Relay typically is implemented on reliable network media, so data

integrity is not sacrificed because flow control can be left to higher-layer protocols.
Frame Relay implements two congestion-notification mechanisms:
Forward-explicit congestion notification (FECN)
Backward-explicit congestion notification (BECN)
FECN and BECN are each controlled by a single bit contained in the Frame Relay
frame header. The Frame Relay frame header also contains a Discard Eligibility
(DE) bit, which is used to identify less important traffic that can be dropped during
periods of congestion.
The FECN bit is part of the Address field in the Frame Relay frame header. The
FECN mechanism is initiated when a DTE device sends Frame Relay frames into
the network. If the network is congested, DCE devices (switches) set the value of the
frames' FECN bit to 1. When the frames reach the destination DTE device, the
Address field (with the FECN bit set) indicates that the frame experienced
congestion in the path from source to destination. The DTE device can relay this
information to a higher-layer protocol for processing. Depending on the
implementation, flow control may be initiated, or the indication may be ignored.
The BECN bit is part of the Address field in the Frame Relay frame header. DCE
devices set the value of the BECN bit to 1 in frames traveling in the opposite
direction of frames with their FECN bit set. This informs the receiving DTE device
that a particular path through the network is congested. The DTE device then can
relay this information to a higher-layer protocol for processing. Depending on the
implementation, flow-control may be initiated, or the indication may be ignored.

Frame Relay Discard Eligibility

The Discard Eligibility (DE) bit is used to indicate that a frame has lower
importance than other frames. The DE bit is part of the Address field in the Frame
Relay frame header.
DTE devices can set the value of the DE bit of a frame to 1 to indicate that the frame
has lower importance than other frames. When the network becomes congested,
DCE devices will discard frames with the DE bit set before discarding those that do
not. This reduces the likelihood of critical data being dropped by Frame Relay DCE
devices during periods of congestion.

Frame Relay Error Checking


Frame Relay uses a common error-checking mechanism known as the cyclic
redundancy check (CRC). The CRC compares two calculated values to determine
whether errors occurred during the transmission from source to destination. Frame
Relay reduces network overhead by implementing error checking rather than error
correction. Frame Relay typically is implemented on reliable network media, so data
integrity is not sacrificed because error correction can be left to higher-layer
protocols running on top of Frame Relay.

2.10 TRAFFIC MANAGEMENT IN CONGESTED NETWORKS: SOME CONSIDERATIONS

Fairness

Various flows should suffer equally.
Last-in-first-discarded may not be fair
Quality of Service (QoS)
Flows treated differently, based on need
Voice, video: delay sensitive, loss insensitive
File transfer, mail: delay insensitive, loss sensitive
Interactive computing: delay and loss sensitive
Reservations
Policing: excess traffic discarded or handled on best-effort basis

2.11 FRAME RELAY CONGESTION CONTROL

Minimize frame discard


Maintain QoS (per-connection bandwidth)
Minimize monopolization of network
Simple to implement, little overhead
Minimal additional network traffic
Resources distributed fairly
Limit spread of congestion
Operate effectively regardless of flow
Have minimal impact on other systems in network
Minimize variance in QoS

Congestion Avoidance with Explicit Signaling

Two general strategies considered:


Hypothesis 1: Congestion always occurs slowly, almost always at egress nodes
forward explicit congestion avoidance
Hypothesis 2: Congestion grows very quickly in internal nodes and requires quick
action
backward explicit congestion avoidance

Explicit Signaling Response


Network Response
each frame handler monitors its queuing behavior and takes action


use FECN/BECN bits


some/all connections notified of congestion
User (end-system) Response
receipt of BECN/FECN bits in frame
BECN at sender: reduce transmission rate
FECN at receiver: notify peer (via LAPF or higher layer) to restrict flow

Frame Relay Traffic Rate Management Parameters

Committed Information Rate (CIR)


Average data rate in bits/second that the network agrees to support for a
connection
Data Rate of User Access Channel (Access Rate)
Fixed rate link between user and network (for network access)
Committed Burst Size (Bc)
Maximum data over an interval agreed to by network
Excess Burst Size (Be)
Maximum data, above Bc, over an interval that network will attempt to
transfer

Relationship of Congestion Parameters

UNIT- 03
TCP AND CONGESTION CONTROL

3.1 TCP FLOW CONTROL

Uses a form of sliding window.


Differs from mechanism used in LLC, HDLC, X.25, and others:
Decouples acknowledgement of received data units from granting permission to
send more.
TCP's flow control is known as a credit allocation scheme:
Each transmitted octet is considered to have a sequence number.

TCP Header Fields for Flow Control:


Sequence number (SN) of first octet in data segment.
Acknowledgement number (AN).
Window (W)
Acknowledgement contains AN = i, W = j:
Octets through SN = i - 1 acknowledged.
Permission is granted to send W = j more octets,
i.e., octets i through i + j - 1

TCP Credit Allocation Mechanisms


Credit Allocation Is Flexible:


Suppose last message B issued was AN = i, W = j.
To increase credit to k (k > j) when no new data, B issues AN = i, W = k.
To acknowledge segment containing m octets (m < j), B issues AN = i + m, W = j - m.
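The bookkeeping above can be sketched as a tiny class (the class name and octet numbers are illustrative only; real TCP sequence numbers wrap modulo 2**32):

```python
# Credit-allocation bookkeeping (illustrative octet numbers; real TCP
# sequence numbers wrap modulo 2**32).
class CreditReceiver:
    def __init__(self, an, w):
        self.an = an    # AN: next octet number expected
        self.w = w      # W: octets of credit currently granted

    def ack(self, m):
        """Acknowledge m newly received octets, shrinking the credit:
        AN = i + m, W = j - m."""
        self.an += m
        self.w -= m

    def grant(self, k):
        """Raise outstanding credit to k octets with no new data acked:
        AN = i, W = k."""
        self.w = k

b = CreditReceiver(an=1001, w=1000)
b.ack(200)       # AN = 1201, W = 800
b.grant(1400)    # AN = 1201, W = 1400
print(b.an, b.w)
```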

Credit Policy:
Receiver needs a policy for how much credit to give sender
Conservative approach: grant credit up to limit of available buffer space
May limit throughput in long-delay situations
Optimistic approach: grant credit based on expectation of freeing space before data
arrives.

Effect Of Window Size:


W = TCP window size (octets)
R = Data rate (bps) at TCP source
D = Propagation delay (seconds)
After TCP source begins transmitting, it takes D seconds for first octet to arrive, and D
seconds for acknowledgement to return.
TCP source could transmit at most 2RD bits, or RD/4 octets.

Sending and Receiving Flow Control Perspectives

Normalized Throughput:

S = 1            if W ≥ RD/4
S = 4W / (RD)    if W < RD/4
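The normalized-throughput rule can be coded directly; the link parameters below are made-up examples:

```python
# Normalised throughput as a function of window size W (octets),
# data rate R (bps) and one-way delay D (s): S = 1 when W >= R*D/4,
# else S = 4*W/(R*D).  Link parameters below are made-up examples.
def normalized_throughput(w_octets, r_bps, d_sec):
    rd = r_bps * d_sec             # bandwidth-delay product (bits)
    if w_octets >= rd / 4:         # window large enough to fill pipe
        return 1.0
    return 4.0 * w_octets / rd

print(normalized_throughput(65535, 1_000_000, 0.1))   # 1.0
print(normalized_throughput(8192, 1_000_000, 0.1))    # 0.32768
```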

Complication Factor:
Multiple TCP connections are multiplexed over same network interface, reducing R
and efficiency
For multi-hop connections, D is the sum of delays across each network plus delays at
each router
If source data rate R exceeds data rate on one of the hops, that hop will be a bottleneck
Lost segments are retransmitted, reducing throughput. Impact depends on
retransmission policy.

Retransmission Strategy:
TCP relies exclusively on positive acknowledgements and retransmission on
acknowledgement timeout
There is no explicit negative acknowledgement
Retransmission required when:
1. Segment arrives damaged, as indicated by checksum error, causing
receiver to discard segment
2. Segment fails to arrive.

Timers:
A timer is associated with each segment as it is sent
If timer expires before segment acknowledged, sender must retransmit
Key Design Issue:
value of retransmission timer
Too small: many unnecessary retransmissions, wasting network bandwidth
Too large: delay in handling lost segment.

Implementation Policy:
Send
Deliver
Accept

In-order
In-window
Retransmit
First-only
Batch
individual
Acknowledge
immediate
cumulative.

3.2 TCP CONGESTION CONTROL

Dynamic routing can alleviate congestion by spreading load more evenly


But only effective for unbalanced loads and brief surges in traffic
Congestion can only be controlled by limiting total amount of data entering network
ICMP Source Quench message is crude and not effective
RSVP may help but not widely implemented

TCP Congestion Control is Difficult

IP is connectionless and stateless, with no provision for detecting or controlling congestion


TCP only provides end-to-end flow control
No cooperative, distributed algorithm to bind together various TCP entities

TCP Flow and Congestion Control

The rate at which a TCP entity can transmit is determined by rate of incoming ACKs to
previous segments with new credit
Rate of Ack arrival determined by round-trip path between source and destination
Bottleneck may be destination or internet
Sender cannot tell which
Only the internet bottleneck can be due to congestion

TCP Segment Pacing


3.2.1 TCP FLOW AND CONGESTION CONTROL

3.3 RETRANSMISSION TIMER MANAGEMENT

Three Techniques to calculate retransmission timer (RTO):


RTT Variance Estimation
Exponential RTO Backoff
Karn's Algorithm

RTT Variance Estimation
(Jacobson's Algorithm)
3 sources of high variance in RTT:
If the data rate is relatively low, then the transmission delay will be relatively large,
with larger variance due to variance in packet size
Load may change abruptly due to other sources
Peer may not acknowledge segments immediately

Jacobson's Algorithm

SRTT(K + 1) = (1 − g) × SRTT(K) + g × RTT(K + 1)

SERR(K + 1) = RTT(K + 1) − SRTT(K)

SDEV(K + 1) = (1 − h) × SDEV(K) + h × |SERR(K + 1)|

RTO(K + 1) = SRTT(K + 1) + f × SDEV(K + 1)

g = 0.125
h = 0.25
f = 2 or f = 4 (most current implementations use f = 4)
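The four update equations translate directly into code. In the sketch below, the initial estimates and RTT samples (in ms) are invented purely for illustration:

```python
# Jacobson's estimator with g = 0.125, h = 0.25, f = 4 as above.
# Initial estimates and RTT samples (ms) are invented illustrations.
def jacobson_update(srtt, sdev, rtt, g=0.125, h=0.25, f=4):
    serr = rtt - srtt                        # SERR(K+1)
    srtt = (1 - g) * srtt + g * rtt          # SRTT(K+1)
    sdev = (1 - h) * sdev + h * abs(serr)    # SDEV(K+1)
    return srtt, sdev, srtt + f * sdev       # ..., RTO(K+1)

srtt, sdev, rto = 100.0, 0.0, 100.0
for rtt in (100, 120, 90, 200):
    srtt, sdev, rto = jacobson_update(srtt, sdev, rtt)
print(srtt, sdev, rto)   # the RTT spike to 200 ms inflates RTO sharply
```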

Two Other Factors


Jacobson's algorithm can significantly improve TCP performance, but:
What RTO to use for retransmitted segments?


ANSWER: exponential RTO backoff algorithm


Which round-trip samples to use as input to Jacobson's algorithm?
ANSWER: Karn's algorithm

3.4 EXPONENTIAL RTO BACKOFF

Increase RTO each time the same segment is retransmitted (a backoff process)
Multiply RTO by a constant:
RTO = q × RTO
q = 2 is called binary exponential backoff
Which Round-trip Samples?
If an ack is received for retransmitted segment, there are 2 possibilities:
Ack is for first transmission
Ack is for second transmission
TCP source cannot distinguish 2 cases
No valid way to calculate RTT:
From first transmission to ack, or
From second transmission to ack?

3.5 KARN'S ALGORITHM

Do not use measured RTT to update SRTT and SDEV


Calculate backoff RTO when a retransmission occurs
Use backoff RTO for segments until an ack arrives for a segment that has not been
retransmitted
Then use Jacobson's algorithm to calculate RTO
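Karn's rule and the exponential backoff fit together as sketched below. This is an illustrative skeleton, not a real TCP implementation; the smoothing applied to clean samples is a crude stand-in for Jacobson's algorithm:

```python
# Skeleton combining binary exponential backoff (q = 2) with Karn's
# rule; the smoothing on clean samples is a crude stand-in for
# Jacobson's algorithm, not the real estimator.
class RtoManager:
    def __init__(self, rto):
        self.rto = rto

    def on_timeout(self):
        self.rto *= 2            # backoff: RTO = q * RTO with q = 2

    def on_ack(self, rtt_sample, was_retransmitted):
        if was_retransmitted:
            # Karn: the sample is ambiguous (first or second copy?),
            # so keep the backed-off RTO and ignore the measurement.
            return
        self.rto = 0.875 * self.rto + 0.125 * 4 * rtt_sample

m = RtoManager(rto=1.0)
m.on_timeout()
m.on_timeout()
print(m.rto)   # 4.0 after two consecutive timeouts
```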

3.6 WINDOW MANAGEMENT

Slow start
Dynamic window sizing on congestion
Fast retransmit
Fast recovery
Limited transmit

Slow Start

awnd = MIN[ credit, cwnd]


where
awnd = allowed window in segments
cwnd = congestion window in segments
credit = amount of unused credit granted in most recent ack
cwnd = 1 for a new connection and increased by 1 for each ack received, up to a maximum


Effect of Slow Start

Dynamic Window Sizing on Congestion

A lost segment indicates congestion


Prudent to reset cwnd = 1 and begin slow start process
May not be conservative enough: easy to drive a network into saturation but hard for
the net to recover (Jacobson)
Instead, use slow start with linear growth in cwnd
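The resulting cwnd behaviour (exponential growth up to the threshold, then linear growth) can be sketched per round trip. Units here are segments per round trip; real TCP counts bytes and updates per ACK:

```python
# Per-round-trip sketch of slow start followed by linear growth
# (units: segments per round trip; real TCP counts bytes per ACK).
def cwnd_trace(rounds, ssthresh):
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2        # slow start: exponential growth
        else:
            cwnd += 1        # congestion avoidance: linear growth
    return trace

print(cwnd_trace(8, ssthresh=8))   # [1, 2, 4, 8, 9, 10, 11, 12]
```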

Illustration of Slow Start and Congestion Avoidance


Fast Retransmit
RTO is generally noticeably longer than actual RTT
If a segment is lost, TCP may be slow to retransmit
TCP rule: if a segment is received out of order, an ack must be issued immediately for the
last in-order segment
Fast Retransmit rule: if 4 acks received for same segment, highly likely it was lost, so
retransmit immediately, rather than waiting for timeout

Fast Recovery

When TCP retransmits a segment using Fast Retransmit, a segment was assumed lost
Congestion avoidance measures are appropriate at this point
E.g., slow-start/congestion avoidance procedure
This may be unnecessarily conservative since multiple acks indicate segments are getting
through
Fast Recovery: retransmit lost segment, cut cwnd in half, proceed with linear increase of
cwnd
This avoids initial exponential slow-start

Limited Transmit

If congestion window at sender is small, fast retransmit may not get triggered, e.g., cwnd =
3
Under what circumstances does sender have small congestion window?
Is the problem common?
If the problem is common, why not reduce number of duplicate acks needed to trigger
retransmit?

Limited Transmit Algorithm

Sender can transmit new segment when 3 conditions are met:


Two consecutive duplicate acks are received

Destination advertised window allows transmission of segment
Amount of outstanding data after sending is less than or equal to cwnd + 2

3.7 PERFORMANCE OF TCP OVER ATM

How best to manage TCP's segment size, window management and congestion
control
at the same time as ATM's quality of service and traffic control policies
TCP may operate end-to-end over one ATM network, or there may be multiple ATM
LANs or WANs with non-ATM networks

TCP/IP over AAL5/ATM

Performance of TCP over UBR


Buffer capacity at ATM switches is a critical parameter in assessing TCP throughput
performance
Insufficient buffer capacity results in lost TCP segments and retransmissions

Effect of Switch Buffer Size

Data rate of 141 Mbps


End-to-end propagation delay of 6 μs
IP packet sizes of 512 octets to 9180 octets
TCP window sizes from 8 Kbytes to 64 Kbytes
ATM switch buffer size per port from 256 cells to 8000 cells
One-to-one mapping of TCP connections to ATM virtual circuits
TCP sources have infinite supply of data ready

Observations

If a single cell is dropped, other cells in the same IP datagram are unusable, yet ATM
network forwards these useless cells to destination
Smaller buffers increase the probability of dropped cells

Larger segment size increases number of useless cells transmitted if a single cell
dropped

Partial Packet and Early Packet Discard

Reduce the transmission of useless cells


Work on a per-virtual circuit basis
Partial Packet Discard
If a cell is dropped, then drop all subsequent cells in that segment (i.e., look for
cell with SDU type bit set to one)
Early Packet Discard
When a switch buffer reaches a threshold level, preemptively discard all cells in
a segment
Selective Drop

Ideally, N/V cells are buffered for each of the V virtual circuits. Define, for VC i,

W(i) = N(i) / (N/V) = N(i) × V / N

If N > R and W(i) > Z,
then drop next new packet on VC i
Z is a parameter to be chosen
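The Selective Drop test is a one-liner given the counts. In the sketch below the threshold values R and Z are arbitrary illustrations:

```python
# Selective Drop decision for one VC; the thresholds r and z below
# are arbitrary illustrative values.
def should_drop(n_total, n_i, v, r, z):
    w_i = n_i * v / n_total          # W(i) = N(i) / (N/V)
    return n_total > r and w_i > z

# A VC holding twice its fair share while the buffer is past R:
print(should_drop(n_total=1000, n_i=100, v=20, r=800, z=1.5))   # True
# A VC below its fair share is spared:
print(should_drop(n_total=1000, n_i=40, v=20, r=800, z=1.5))    # False
```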

ATM Switch Buffer Layout

Fair Buffer Allocation

More aggressive dropping of packets as congestion increases


Drop new packet when:

N > R and W(i) > Z × (B − R) / (N − R)

where B is the buffer capacity and R is the buffer-occupancy threshold.

TCP over ABR

Good performance of TCP over UBR can be achieved with minor adjustments to switch
mechanisms
This reduces the incentive to use the more complex and more expensive ABR service
Performance and fairness of ABR quite sensitive to some ABR parameter settings

Overall, ABR does not provide a significant performance advantage over the simpler
and less expensive UBR-EPD or UBR-EPD-FBA

3.8 TRAFFIC AND CONGESTION CONTROL IN ATM NETWORKS

Introduction
Control needed to prevent switch buffer overflow
High speed and small cell size gives different problems from other networks
Limited number of overhead bits
ITU-T specified restricted initial set
I.371
ATM Forum Traffic Management Specification 4.1
Overview
Congestion problem
Framework adopted by ITU-T and ATM forum
Control schemes for delay sensitive traffic
Voice & video
Not suited to bursty traffic
Traffic control
Congestion control
Bursty traffic
Available Bit Rate (ABR)
Guaranteed Frame Rate (GFR)

3.9 REQUIREMENTS FOR ATM TRAFFIC AND CONGESTION CONTROL

Most packet switched and frame relay networks carry non-real-time bursty data
No need to replicate timing at exit node
Simple statistical multiplexing
User Network Interface capacity slightly greater than average of channels
Congestion control tools from these technologies do not work in ATM

Problems with ATM Congestion Control


Most traffic not amenable to flow control
Voice & video can not stop generating
Feedback slow
Small cell transmission time vs propagation delay
Wide range of applications
From few kbps to hundreds of Mbps
Different traffic patterns
Different network services
High speed switching and transmission
Volatile congestion and traffic control

Key Performance Issues-Latency/Speed Effects


E.g. data rate 150 Mbps
Takes (53 × 8 bits)/(150 × 10^6 bps) = 2.8 × 10^-6 seconds to insert a cell

Transfer time depends on number of intermediate switches, switching time and
propagation delay. Assuming no switching delay and speed-of-light propagation,
round trip delay of 48 × 10^-3 sec across USA
A dropped cell notified by return message will arrive after source has transmitted N
further cells
N = (48 × 10^-3 seconds)/(2.8 × 10^-6 seconds per cell)
= 1.7 × 10^4 cells = 7.2 × 10^6 bits
i.e. over 7 Mbits
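The arithmetic above is easy to reproduce (values from the text: 53-octet cells, a 150 Mbps link, 48 ms round trip):

```python
# Reproducing the latency arithmetic (53-octet cells, 150 Mbps link,
# 48 ms coast-to-coast round trip).
cell_bits = 53 * 8
rate_bps = 150e6
rtt = 48e-3

insert_time = cell_bits / rate_bps        # ~2.8e-6 s per cell
cells_in_flight = rtt / insert_time       # ~1.7e4 cells
bits_in_flight = cells_in_flight * cell_bits

print(insert_time, round(cells_in_flight))
print(bits_in_flight / 1e6)               # ~7.2 Mbits already sent
```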

Cell Delay Variation


For digitized voice delay across network must be small
Rate of delivery must be constant
Variations will occur
Dealt with by Time Reassembly of CBR cells (see next slide)
Results in cells delivered at CBR with occasional gaps due to dropped cells
Subscriber requests minimum cell delay variation from network provider
Increase data rate at UNI relative to load
Increase resources within network

Time Reassembly of CBR Cells

Network Contribution to Cell Delay Variation


In packet switched network
Queuing effects at each intermediate switch
Processing time for header and routing
Less for ATM networks
Minimal processing overhead at switches
Fixed cell size, header format
No flow control or error control processing
ATM switches have extremely high throughput
Congestion can cause cell delay variation
Build up of queuing effects at switches

Total load accepted by network must be controlled

Cell Delay Variation at UNI


Caused by processing in three layers of ATM model
See next slide for details
None of these delays can be predicted
None follow repetitive pattern
So, random element exists in time interval between reception by ATM stack and
transmission

3.10 ATM TRAFFIC-RELATED ATTRIBUTES

Six service categories (see chapter 5)


Constant bit rate (CBR)
Real time variable bit rate (rt-VBR)
Non-real-time variable bit rate (nrt-VBR)
Unspecified bit rate (UBR)
Available bit rate (ABR)
Guaranteed frame rate (GFR)
Characterized by ATM attributes in four categories
Traffic descriptors
QoS parameters
Congestion
Other
Traffic Parameters

Traffic pattern of flow of cells


Intrinsic nature of traffic
Source traffic descriptor
Modified inside network
Connection traffic descriptor
Source Traffic Descriptor

Peak cell rate


Upper bound on traffic that can be submitted
Defined in terms of minimum spacing between cells T
PCR = 1/T
Mandatory for CBR and VBR services
Sustainable cell rate
Upper bound on average rate
Calculated over large time scale relative to T
Required for VBR
Enables efficient allocation of network resources between VBR sources
Only useful if SCR < PCR
Maximum burst size
Max number of cells that can be sent at PCR
If bursts are at MBS, idle gaps must be enough to keep overall rate below SCR
Required for VBR

Minimum cell rate
Min commitment requested of network
Can be zero
Used with ABR and GFR
ABR & GFR provide rapid access to spare network capacity up to PCR
PCR - MCR represents elastic component of data flow
Shared among ABR and GFR flows
Maximum frame size
Max number of cells in frame that can be carried over GFR connection
Only relevant in GFR

Connection Traffic Descriptor

Includes source traffic descriptor plus:-


Cell delay variation tolerance
Amount of variation in cell delay introduced by network interface and UNI
Bound on delay variability due to slotted nature of ATM, physical layer
overhead and layer functions (e.g. cell multiplexing)
Represented by time variable
Conformance definition
Specify conforming cells of connection at UNI
Enforced by dropping or marking cells that exceed the conformance definition

Quality of Service Parameters-maxCTD


Cell transfer delay (CTD)
Time between transmission of first bit of cell at source and reception of last
bit at destination
Typically has a probability density function (see Cell Transfer Delay PDF below)
Fixed delay due to propagation etc.
Cell delay variation due to buffering and scheduling
Maximum cell transfer delay (maxCTD) is max requested delay for
connection
A fraction α of cells may exceed this threshold
Discarded or delivered late

Peak-to-peak CDV & CLR


Peak-to-peak Cell Delay Variation
Remaining (1-α) fraction of cells within QoS
Delay experienced by these cells is between fixed delay and maxCTD
This is peak-to-peak CDV
CDVT is an upper bound on CDV
Cell loss ratio
Ratio of cells lost to cells transmitted

Cell Transfer Delay PDF


Congestion Control Attributes


Only feedback is defined
ABR and GFR
Actions taken by network and end systems to regulate traffic submitted
ABR flow control
Adaptively share available bandwidth
Other Attributes
Behaviour class selector (BCS)
Support for IP differentiated services (chapter 16)
Provides different service levels among UBR connections
Associate each connection with a behaviour class
May include queuing and scheduling
Minimum desired cell rate

3.11 TRAFFIC MANAGEMENT FRAMEWORK

Objectives of ATM layer traffic and congestion control


Support QoS for all foreseeable services
Not rely on network specific AAL protocols nor higher layer application
specific protocols
Minimize network and end system complexity
Maximize network utilization
Timing Levels
Cell insertion time
Round trip propagation time
Connection duration
Long term

Traffic Control and Congestion Functions


Traffic Control Strategy


Determine whether new ATM connection can be accommodated
Agree performance parameters with subscriber
Traffic contract between subscriber and network
This is congestion avoidance
If it fails congestion may occur
Invoke congestion control

3.12 TRAFFIC CONTROL

Resource management using virtual paths


Connection admission control
Usage parameter control
Selective cell discard
Traffic shaping
Explicit forward congestion indication

Resource Management Using Virtual Paths


Allocate resources so that traffic is separated according to service characteristics
Virtual path connections (VPCs) are groupings of virtual channel connections (VCCs)
Applications
User-to-user applications
VPC between UNI pair
No knowledge of QoS for individual VCC
User checks that VPC can take VCCs demands
User-to-network applications
VPC between UNI and network node
Network aware of and accommodates QoS of VCCs
Network-to-network applications
VPC between two network nodes

Network aware of and accommodates QoS of VCCs

Resource Management Concerns


Cell loss ratio
Max cell transfer delay
Peak to peak cell delay variation
All affected by resources devoted to VPC
If VCC goes through multiple VPCs, performance depends on consecutive VPCs and
on node performance
VPC performance depends on capacity of VPC and traffic characteristics of
VCCs
VCC related function depends on switching/processing speed and priority

VCCs and VPCs Configuration

Allocation of Capacity to VPC


Aggregate peak demand
May set VPC capacity (data rate) to total of VCC peak rates
Each VCC can give QoS to accommodate peak demand
VPC capacity may not be fully used
Statistical multiplexing
VPC capacity >= average data rate of VCCs but < aggregate peak demand
Greater CDV and CTD
May have greater CLR
More efficient use of capacity
For VCCs requiring lower QoS
Group VCCs of similar traffic together

Connection Admission Control


User must specify service required in both directions
Category
Connection traffic descriptor
Source traffic descriptor

CDVT
Requested conformance definition
QoS parameter requested and acceptable value
Network accepts connection only if it can commit resources to support requests

Cell Loss Priority


Two levels requested by user
Priority for individual cell indicated by CLP bit in header
If two levels are used, traffic parameters for both flows specified
High priority CLP = 0
All traffic CLP = 0 + 1
May improve network resource allocation

Procedures to Set Traffic Control Parameters

Usage Parameter Control


UPC
Monitors connection for conformity to traffic contract
Protect network resources from overload on one connection
Done at VPC or VCC level
VPC level more important
Network resources allocated at this level

Location of UPC Function


Peak Cell Rate Algorithm


How UPC determines whether user is complying with contract
Control of peak cell rate and CDVT
Complies if peak does not exceed agreed peak
Subject to CDV within agreed bounds
Generic cell rate algorithm
Leaky bucket algorithm

Generic Cell Rate Algorithm

Virtual Scheduling Algorithm

Leaky Bucket Algorithm


Continuous Leaky Bucket Algorithm
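The virtual scheduling form of the generic cell rate algorithm can be sketched in a few lines. GCRA(I, L) keeps a theoretical arrival time (TAT); a cell arriving at time ta conforms if ta >= TAT - L, where I is the increment (e.g. 1/PCR) and L the tolerance (CDVT). The parameter values below are illustrative, not from the text.

```python
# Virtual scheduling form of the Generic Cell Rate Algorithm, GCRA(I, L).

class GCRA:
    def __init__(self, increment: float, limit: float):
        self.I = increment        # cell spacing at the contracted rate
        self.L = limit            # tolerance (CDVT)
        self.tat = 0.0            # theoretical arrival time of next cell

    def conforms(self, ta: float) -> bool:
        if ta < self.tat - self.L:
            return False                          # arrived too early
        self.tat = max(ta, self.tat) + self.I     # push TAT forward
        return True

# PCR = 1 cell per time unit (I = 1.0), CDVT = 0.5
upc = GCRA(1.0, 0.5)
results = [upc.conforms(t) for t in (0.0, 1.0, 1.2, 1.9)]
# The cell at t = 1.2 is non-conforming: it beats TAT - L = 1.5
```

The leaky bucket form is equivalent: the bucket counter plays the role of TAT - ta.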

Sustainable Cell Rate Algorithm


Operational definition of relationship between sustainable cell rate and burst tolerance
Used by UPC to monitor compliance
Same algorithm as peak cell rate

UPC Actions
Compliant cells pass, non-compliant cells discarded
If no additional resources allocated to CLP=1 traffic, only CLP=0 cells are policed
If two level cell loss priority cell with:
CLP=0 and conforms passes
CLP=0 non-compliant for CLP=0 traffic but compliant for CLP=0+1 is tagged
and passes
CLP=0 non-compliant for CLP=0 and CLP=0+1 traffic discarded
CLP=1 compliant for CLP=0+1 passes
CLP=1 non-compliant for CLP=0+1 discarded
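The two-level CLP rules above can be written as a small decision table. A sketch, assuming the two conformance checks (against the CLP=0 contract and against the aggregate CLP=0+1 contract) are computed elsewhere, e.g. by a GCRA per flow.

```python
# UPC action for a cell under two-level cell loss priority.
# Returns "pass", "tag" (re-mark CLP=1 and pass), or "discard".

def upc_action(clp: int, conforms_clp0: bool, conforms_clp01: bool) -> str:
    if clp == 0:
        if conforms_clp0:
            return "pass"
        if conforms_clp01:
            return "tag"      # non-compliant for CLP=0, compliant for CLP=0+1
        return "discard"
    # CLP=1 cells: only the aggregate CLP=0+1 contract applies
    return "pass" if conforms_clp01 else "discard"
```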

Possible Actions of UPC


Explicit Forward Congestion Indication

Essentially same as frame relay


If node experiencing congestion, set forward congestion indication in cell headers
Tells users that congestion avoidance should be initiated in this direction
User may take action at higher level

3.13 ABR TRAFFIC MANAGEMENT

QoS for CBR, VBR based on traffic contract and UPC described previously
No congestion feedback to source
Open-loop control
Not suited to non-real-time applications
File transfer, web access, RPC, distributed file systems
No well defined traffic characteristics except PCR
PCR not enough to allocate resources
Use best efforts or closed-loop control

Best Efforts

Share unused capacity between applications


As congestion goes up:
Cells are lost
Sources back off and reduce rate
Fits well with TCP techniques (chapter 12)
Inefficient
Cells dropped causing re-transmission

Closed-Loop Control

Sources share capacity not used by CBR and VBR


Provide feedback to sources to adjust load
Avoid cell loss
Share capacity fairly

Characteristics of ABR

ABR connections share available capacity


Access instantaneous capacity unused by CBR/VBR
Increases utilization without affecting CBR/VBR QoS
Share used by single ABR connection is dynamic
Varies between agreed MCR and PCR
Network gives feedback to ABR sources
ABR flow limited to available capacity
Buffers absorb excess traffic prior to arrival of feedback
Low cell loss
Major distinction from UBR

Feedback Mechanisms

Cell transmission rate characterized by:


Allowable cell rate
Current rate
Minimum cell rate
Min for ACR
May be zero
Peak cell rate
Max for ACR
Initial cell rate
Start with ACR=ICR
Adjust ACR based on feedback
Feedback in resource management (RM) cells
Cell contains three fields for feedback
Congestion indicator bit (CI)
No increase bit (NI)
Explicit cell rate field (ER)

Source Reaction to Feedback

If CI=1
Reduce ACR by amount proportional to current ACR but not less than MCR
Else if NI=0
Increase ACR by amount proportional to PCR but not more than PCR
If ACR>ER set ACR<-max[ER,MCR]
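The source reaction rules above can be captured in one small function. A sketch: RIF and RDF (rate increase/decrease factors) are ATM Forum parameters, and the values used are illustrative.

```python
# ABR source reaction to a backward RM cell (CI, NI, ER fields).

def adjust_acr(acr, ci, ni, er, mcr, pcr, rif=1/16, rdf=1/16):
    if ci:                              # congestion: multiplicative decrease
        acr = max(mcr, acr - rdf * acr)
    elif not ni:                        # no congestion, increase allowed
        acr = min(pcr, acr + rif * pcr)
    if acr > er:                        # never exceed the explicit rate
        acr = max(er, mcr)
    return acr

# MCR=10, PCR=160: one increase step from ACR=80 with CI=0, NI=0
acr = adjust_acr(80.0, ci=0, ni=0, er=160.0, mcr=10.0, pcr=160.0)  # 90.0
```

Note the asymmetry: decreases are proportional to the current ACR, increases to the (fixed) PCR, so a congested source backs off quickly and ramps up linearly.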

Cell Flow on ABR

Two types of cell


Data & resource management (RM)
Source receives regular RM cells
Feedback
Bulk of RM cells initiated by source
One forward RM cell (FRM) per (Nrm-1) data cells

Nrm preset usually 32
Each FRM is returned by destination as backwards RM (BRM) cell
FRM typically has CI=0, NI=0 or 1, and ER set to desired transmission rate in range
ICR<=ER<=PCR
Any field may be changed by switch or destination before return

ATM Switch Rate Control Feedback

EFCI marking
Explicit forward congestion indication
Causes destination to set CI bit in BRM
Relative rate marking
Switch directly sets CI or NI bit of RM
If set in FRM, remains set in BRM
Faster response by setting bit in passing BRM
Fastest by generating new BRM with bit set
Explicit rate marking
Switch reduces value of ER in FRM or BRM

Flow of Data and RM Cells

ABR Feedback vs TCP ACK

ABR feedback controls rate of transmission


Rate control
TCP feedback controls window size
Credit control
ABR feedback from switches or destination
TCP feedback from destination only

3.14 RM CELL FORMAT

RM Cell Format Notes

ATM header has PT=110 to indicate RM cell


On virtual channel VPI and VCI same as data cells on connection
On virtual path VPI same, VCI=6
Protocol id identifies service using RM (ABR=1)
Message type
Direction FRM=0, BRM=1
BECN cell. Source (BN=0) or switch/destination (BN=1)
CI (=1 for congestion)
NI (=1 for no increase)
Request/Acknowledge (not used in ATM forum spec)

3.15 ABR CAPACITY ALLOCATION

ATM switch must perform:


Congestion control
Monitor queue length
Fair capacity allocation
Throttle back connections using more than fair share
ATM rate control signals are explicit
TCP are implicit
Increasing delay and cell loss

Congestion Control Algorithms-Binary Feedback

Use only EFCI, CI and NI bits


Switch monitors buffer utilization
When congestion approaches, binary notification

Set EFCI on forward data cells or CI or NI on FRM or BRM
Three approaches to which to notify
Single FIFO queue
Multiple queues
Fair share notification

Single FIFO Queue


When buffer use exceeds threshold (e.g. 80%)
Switch starts issuing binary notifications
Continues until buffer use falls below threshold
Can have two thresholds
One for start and one for stop
Stops continuous on/off switching
Biased against connections passing through more switches

Multiple Queues
Separate queue for each VC or group of VCs
Separate threshold on each queue
Only connections with long queues get binary notifications
Fair
Badly behaved source does not affect other VCs
Delay and loss behaviour of individual VCs separated
Can have different QoS on different VCs
Fair Share

Selective feedback or intelligent marking


Try to allocate capacity dynamically
E.g.
fairshare =(target rate)/(number of connections)
Mark any cells where CCR>fairshare

Explicit Rate Feedback Schemes


Compute fair share of capacity for each VC
Determine current load or congestion
Compute explicit rate (ER) for each connection and send to source
Three algorithms
Enhanced proportional rate control algorithm
EPRCA
Explicit rate indication for congestion avoidance
ERICA
Congestion avoidance using proportional control
CAPC
Enhanced Proportional Rate Control Algorithm(EPRCA)

Switch tracks average value of current load on each connection


Mean allowed cell rate (MACR)
MACR(I) = (1-α)*MACR(I-1) + α*CCR(I)
CCR(I) is CCR field in Ith FRM

Typically α = 1/16
Bias to past values of CCR over current
Gives estimated average load passing through switch
If congestion, switch reduces each VC to no more than DPF*MACR
DPF=down pressure factor, typically 7/8
ER<-min[ER, DPF*MACR]
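The EPRCA bookkeeping above amounts to an exponentially weighted average plus a cap under congestion. A sketch using the typical values quoted (α = 1/16, DPF = 7/8); the numbers in the example are illustrative.

```python
# EPRCA switch sketch: update MACR from the CCR field of each forward
# RM cell, and cap ER at DPF * MACR when the switch is congested.

ALPHA, DPF = 1 / 16, 7 / 8

def update_macr(macr: float, ccr: float) -> float:
    # MACR <- (1 - alpha) * MACR + alpha * CCR
    return (1 - ALPHA) * macr + ALPHA * ccr

def mark_er(er: float, macr: float, congested: bool) -> float:
    return min(er, DPF * macr) if congested else er

macr = update_macr(100.0, 164.0)        # estimate drifts toward CCR: 104.0
er = mark_er(150.0, macr, congested=True)   # capped at 7/8 * 104 = 91.0
```

With α = 1/16 the estimate moves only 1/16 of the way toward each new CCR sample, which is the bias to past values noted above.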
Load Factor
Adjustments based on load factor
LF=Input rate/target rate
Input rate measured over fixed averaging interval
Target rate slightly below link bandwidth (85 to 90%)
LF>1 congestion threatened
VCs will have to reduce rate

Explicit Rate Indication for Congestion Avoidance (ERICA)

Attempt to keep LF close to 1


Define:
fairshare = (target rate)/(number of connections)
VCshare = CCR/LF
= (CCR/(Input Rate)) *(Target Rate)
ERICA selectively adjusts VC rates
Total ER allocated to connections matches target rate
Allocation is fair
ER = max[fairshare, VCshare]
VCs whose VCshare is less than their fairshare get greater increase
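The ERICA allocation can be sketched directly from the definitions above. Numbers in the example are illustrative.

```python
# ERICA explicit rate: ER = max(fairshare, VCshare), with
# LF = input rate / target rate and VCshare = CCR / LF.

def erica_er(ccr: float, input_rate: float, target_rate: float,
             n_connections: int) -> float:
    lf = input_rate / target_rate
    fairshare = target_rate / n_connections
    vcshare = ccr / lf            # = (CCR / input rate) * target rate
    return max(fairshare, vcshare)

# Target 100, input 125 (LF = 1.25), 4 connections (fairshare = 25):
er_small = erica_er(10.0, 125.0, 100.0, 4)   # VCshare 8, lifted to 25
er_big = erica_er(50.0, 125.0, 100.0, 4)     # VCshare 40, kept at 40
```

The small flow is raised to its fair share while the large flow is scaled down by LF, so the total allocation converges toward the target rate.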

Congestion Avoidance Using Proportional Control (CAPC)

If LF < 1: fairshare <- fairshare*min[ERU, 1+(1-LF)*Rup]
If LF > 1: fairshare <- fairshare*max[ERF, 1-(LF-1)*Rdn]
ERU>1, determines max increase
Rup between 0.025 and 0.1, slope parameter
Rdn, between 0.2 and 0.8, slope parameter
ERF typically 0.5, max decrease in allotment of fair share
If fairshare < ER value in RM cells, ER<-fairshare
Simpler than ERICA
Can show large rate oscillations if RIF (Rate increase factor) too high
Can lead to unfairness
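The CAPC update is just a clamped linear ramp on the load factor. A sketch, with parameter values chosen from the typical ranges quoted above (they are illustrative).

```python
# CAPC fairshare update: ramp up when underloaded (LF < 1), capped by
# ERU; ramp down when overloaded (LF > 1), floored by ERF.

ERU, ERF = 1.5, 0.5       # max increase factor, max decrease factor
RUP, RDN = 0.1, 0.5       # slope parameters

def capc_fairshare(fairshare: float, lf: float) -> float:
    if lf < 1:
        return fairshare * min(ERU, 1 + (1 - lf) * RUP)
    if lf > 1:
        return fairshare * max(ERF, 1 - (lf - 1) * RDN)
    return fairshare

up = capc_fairshare(100.0, 0.8)    # 100 * (1 + 0.2*0.1) = 102
down = capc_fairshare(100.0, 1.4)  # 100 * (1 - 0.4*0.5) = 80
```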

GFR Overview

Simple as UBR from end system view


End system does no policing or traffic shaping
May transmit at line rate of ATM adaptor
Modest requirements on ATM network
No guarantee of frame delivery
Higher layer (e.g. TCP) react to congestion causing dropped frames

User can reserve cell rate capacity for each VC
Application can send at min rate without loss
Network must recognise frames as well as cells
If congested, network discards entire frame
All cells of a frame have same CLP setting
CLP=0 guaranteed delivery, CLP=1 best efforts

GFR Traffic Contract

Peak cell rate PCR


Minimum cell rate MCR
Maximum burst size MBS
Maximum frame size MFS
Cell delay variation tolerance CDVT

Mechanisms for supporting Rate Guarantees

Tagging and policing


Buffer management
Scheduling

Tagging and Policing

Tagging identifies frames that conform to contract and those that don't
CLP=1 for those that don't
Set by network element doing conformance check
May be done by network element or source to mark less important frames
Get lower QoS in buffer management and scheduling
Tagged cells can be discarded at ingress to ATM network or subsequent switch
Discarding is a policing function

Buffer Management

Treatment of cells in buffers or when arriving and requiring buffering


If congested (high buffer occupancy) tagged cells discarded in preference to untagged
Discard tagged cell to make room for untagged cell
May buffer per-VC
Discards may be based on per queue thresholds
Scheduling
Give preferential treatment to untagged cells
Separate queues for each VC
Per VC scheduling decisions
E.g. FIFO modified to give CLP=0 cells higher priority
Scheduling between queues controls outgoing rate of VCs
Individual cells get fair allocation while meeting traffic contract


3.15.1 COMPONENTS OF GFR MECHANISM

GFR Conformance Definition


UPC function
UPC monitors VC for traffic conformance
Tag or discard non-conforming cells
Frame conforms if all cells in frame conform
Rate of cells within contract
Generic cell rate algorithm with PCR and CDVT specified for connection
All cells have same CLP
Within maximum frame size (MFS)

QoS Eligibility Test


Test for contract conformance
Discard or tag non-conforming cells
Looking at upper bound on traffic
Determine frames eligible for QoS guarantee
Under GFR contract for VC
Looking at lower bound for traffic
Frames are one of:
Nonconforming: cells tagged or discarded
Conforming ineligible: best efforts
Conforming eligible: guaranteed delivery

Simplified Frame Based GCRA


UNIT - 04
INTEGRATED AND DIFFERENTIATED SERVICES

INTRODUCTION

New additions to Internet increasing traffic


High volume client/server application
Web
Graphics
Real time voice and video
Need to manage traffic and control congestion
IEFT standards
Integrated services
Collective service to set of traffic demands in domain
Limit demand & reserve resources
Differentiated services
Classify traffic in groups
Different group traffic handled differently

4.1 INTEGRATED SERVICES ARCHITECTURE (ISA)

IPv4 header fields for precedence and type of service usually ignored
ATM only network designed to support TCP, UDP and real-time traffic
May need new installation
Need to support Quality of Service (QoS) within TCP/IP
Add functionality to routers
Means of requesting QoS

Internet Traffic Elastic


Can adjust to changes in delay and throughput
E.g. common TCP and UDP application
E-Mail insensitive to delay changes
FTP: users expect delay proportional to file size
Sensitive to changes in throughput
SNMP delay not a problem, except when caused by congestion
Web (HTTP), TELNET sensitive to delay
Not per-packet delay but total elapsed time
E.g. web page loading time
For small items, delay across internet dominates
For large items it is throughput over connection
Need some QoS control to match to demand

Internet Traffic Inelastic


Does not easily adapt to changes in delay and throughput
Real time traffic
Throughput
Minimum may be required

Delay
E.g. stock trading
Jitter - Delay variation
More jitter requires a bigger buffer
E.g. teleconferencing requires reasonable upper bound
Packet loss

Inelastic Traffic Problems


Difficult to meet requirements on network with variable queuing delays and congestion
Need preferential treatment
Applications need to state requirements
Ahead of time (preferably) or on the fly
Using fields in IP header
Resource reservation protocol
Must still support elastic traffic
Deny service requests that leave too few resources to handle elastic traffic
demands

4.2 ISA APPROACH


Provision of QoS over IP
Sharing available capacity when congested
Router mechanisms
Routing Algorithms
Select to minimize delay
Packet discard
Causes TCP sender to back off and reduce load
Enhanced by ISA

Flow
IP packet can be associated with a flow
Distinguishable stream of related IP packets
From single user activity
Requiring same QoS
E.g. one transport connection or one video stream
Unidirectional
Can be more than one recipient
Multicast
Membership of flow identified by source and destination IP address, port numbers,
protocol type
IPv6 header flow identifier can be used but is not necessarily equivalent to ISA flow

ISA Functions
Admission control
For QoS, reservation required for new flow
RSVP used
Routing algorithm
Base decision on QoS parameters
Queuing discipline

Take account of different flow requirements
Discard policy
Manage congestion
Meet QoS

ISA Implementation in Router


Background Functions
Forwarding functions

4.3 ISA COMPONENTS BACKGROUND FUNCTIONS


Reservation Protocol
RSVP
Admission control
Management agent
Can use agent to modify traffic control database and direct admission control
Routing protocol

ISA Components Forwarding


Classifier and route selection
Incoming packets mapped to classes
Single flow or set of flows with same QoS
E.g. all video flows
Based on IP header fields
Determines next hop
Packet scheduler
Manages one or more queues for each output
Order queued packets sent
Based on class, traffic control database, current and past activity on outgoing port

Policing

4.4 ISA SERVICES


Traffic specification (TSpec) defined as service for flow
On two levels
General categories of service
Guaranteed
Controlled load
Best effort (default)
Particular flow within category
TSpec is part of contract

Token Bucket
Many traffic sources can be defined by token bucket scheme
Provides concise description of load imposed by flow
Easy to determine resource requirements
Provides input parameters to policing function
Token Bucket Diagram
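A token bucket with rate r and depth b can be sketched in a few lines: a packet conforms if enough tokens have accumulated since the last arrival. This is a minimal illustration of the TSpec idea, not a production policer; values are illustrative.

```python
# Token bucket: tokens accrue at `rate` per second up to `depth`;
# a packet of size `need` conforms if that many tokens are available.

class TokenBucket:
    def __init__(self, rate: float, depth: float):
        self.rate, self.depth = rate, depth
        self.tokens, self.last = depth, 0.0   # bucket starts full

    def conforms(self, now: float, need: float) -> bool:
        # Accrue tokens since the last arrival, capped at the depth.
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if need <= self.tokens:
            self.tokens -= need
            return True
        return False

tb = TokenBucket(rate=100.0, depth=200.0)   # 100 bytes/s, 200-byte burst
ok_burst = tb.conforms(0.0, 200.0)          # full bucket: burst passes
too_fast = tb.conforms(0.5, 100.0)          # only 50 tokens accrued: fails
ok_later = tb.conforms(2.0, 100.0)          # enough tokens again by t=2.0
```

The two numbers (r, b) concisely bound the load: over any interval T the flow can send at most b + rT.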

ISA SERVICES
Guaranteed Service
Assured capacity level or data rate
Specific upper bound on queuing delay through network
Must be added to propagation delay or latency to get total delay
Set high to accommodate rare long queue delays
No queuing losses
I.e. no buffer overflow
E.g. Real time play back of incoming signal can use delay buffer for incoming signal but
will not tolerate packet loss

Controlled Load
Tightly approximates to best efforts under unloaded conditions
No upper bound on queuing delay
High percentage of packets do not experience delay over minimum transit delay
Propagation plus router processing with no queuing delay
Very high percentage delivered
Almost no queuing loss
Adaptive real time applications
Receiver measures jitter and sets playback point
Video can drop a frame or delay output slightly
Voice can adjust silence periods

4.5 QUEUING DISCIPLINE


Traditionally first in first out (FIFO) or first come first served (FCFS) at each router port
No special treatment to high priority packets (flows)
Small packets held up by large packets ahead of them in queue
Larger average delay for smaller packets
Flows of larger packets get better service
Greedy TCP connection can crowd out altruistic connections
If one connection does not back off, others may back off more

4.6 FAIR QUEUING (FQ)


Multiple queues for each port
One for each source or flow
Queues serviced round robin
Each busy queue (flow) gets exactly one packet per cycle
Load balancing among flows
No advantage to being greedy
Your queue gets longer, increasing your delay
Short packets penalized as each queue sends one packet per cycle

FIFO and FQ

PROCESSOR SHARING
Multiple queues as in FQ
Send one bit from each queue per round

Longer packets no longer get an advantage
Can work out virtual (number of cycles) start and finish time for a given packet
However, we wish to send packets, not bits

BIT-ROUND FAIR QUEUING (BRFQ)


Compute virtual start and finish time as before
When a packet finished, the next packet sent is the one with the earliest virtual finish
time
Good approximation to performance of PS
Throughput and delay converge as time increases
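The BRFQ rule above can be sketched by computing virtual finish times and transmitting in that order. A simplified illustration assuming all packets are already queued at virtual time 0 (a real scheduler tracks the round number as packets arrive).

```python
import heapq

# BRFQ sketch: each packet's virtual finish time is
# F = max(round, F_prev_of_flow) + length; send in order of F.

def brfq_order(flows: dict) -> list:
    """flows: flow name -> list of packet lengths, in arrival order.
    Returns (flow, length) pairs in transmission order."""
    heap, last_finish = [], {}
    for flow, lengths in flows.items():
        for seq, length in enumerate(lengths):
            f = max(0.0, last_finish.get(flow, 0.0)) + length
            last_finish[flow] = f
            heapq.heappush(heap, (f, seq, flow, length))
    return [(flow, length) for _, _, flow, length in
            [heapq.heappop(heap) for _ in range(len(heap))]]

# A flow of large packets does not crowd out a short-packet flow:
order = brfq_order({"big": [1000, 1000], "small": [100, 100, 100]})
```

All three short packets finish (virtually) before the first long one, which is exactly the bias FIFO gets wrong.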

Comparison of FIFO, FQ and BRFQ

4.7 GENERALIZED PROCESSOR SHARING (GPS)


BRFQ can not provide different capacities to different flows
Enhancement called Weighted fair queue (WFQ)
From PS, allocate a weighting to each flow that determines how many bits are sent during
each round
If weighted 5, then 5 bits are sent per round
Gives means of responding to different service requests
Guarantees that delays do not exceed bounds

4.8 WEIGHTED FAIR QUEUE


Emulates bit by bit GPS
Same strategy as BRFQ

FIFO v WFQ

Proactive Packet Discard
Congestion management by proactive packet discard
Before buffer full
Used on single FIFO queue or multiple queues for elastic traffic
E.g. Random Early Detection (RED)

4.9 RANDOM EARLY DETECTION(RED)

Surges fill buffers and cause discards


On TCP this is a signal to enter slow start phase, reducing load
Lost packets need to be resent
Adds to load and delay
Global synchronization
Traffic burst fills queues so packets lost
Many TCP connections enter slow start
Traffic drops so network under utilized
Connections leave slow start at same time causing burst
Bigger buffers do not help
Try to anticipate onset of congestion and tell one connection to slow down


RED Design Goals


Congestion avoidance
Global synchronization avoidance
Current systems inform connections to back off implicitly by dropping packets
Avoidance of bias against bursty traffic
Discarding only arriving packets (tail drop) causes this bias
Bound on average queue length
Hence control on average delay

RED Algorithm Overview


Calculate average queue size avg
if avg < THmin
queue packet
else if THmin <= avg < THmax
calculate probability Pa
with probability Pa
discard packet
else with probability 1-Pa
queue packet
else if avg >= THmax
discard packet
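The RED rule sketched above translates almost line for line into code. A simplified version: the average queue size is an exponentially weighted moving average, and Pa grows linearly from 0 at THmin to Pmax at THmax (the full RED algorithm also scales Pa by a packet count, omitted here). Weight and thresholds are illustrative.

```python
import random

class RED:
    def __init__(self, th_min, th_max, p_max=0.1, weight=0.002):
        self.th_min, self.th_max = th_min, th_max
        self.p_max, self.w = p_max, weight
        self.avg = 0.0

    def on_arrival(self, queue_len: int, rng=random.random) -> str:
        # EWMA of the instantaneous queue length
        self.avg = (1 - self.w) * self.avg + self.w * queue_len
        if self.avg < self.th_min:
            return "queue"
        if self.avg >= self.th_max:
            return "discard"
        # Linear ramp between the thresholds
        pa = self.p_max * (self.avg - self.th_min) / (self.th_max - self.th_min)
        return "discard" if rng() < pa else "queue"

red = RED(th_min=5, th_max=15)
verdict = red.on_arrival(2)    # avg still near 0: packet is queued
```

Because drops are randomized and spread over time, connections back off at different moments, which is what breaks global synchronization.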
RED Buffer

4.10 DIFFERENTIATED SERVICES (DS)


ISA and RSVP complex to deploy
May not scale well for large volumes of traffic
Amount of control signals
Maintenance of state information at routers
DS architecture designed to provide simple, easy to implement, low overhead tool
Support range of network services
Differentiated on basis of performance

Characteristics of DS
Use IPv4 header Type of Service or IPv6 Traffic Class field
No change to IP

Service level agreement (SLA) established between provider (internet domain) and
customer prior to use of DS
DS mechanisms not needed in applications
Built-in aggregation
All traffic with same DS field treated same
E.g. multiple voice connections
DS implemented in individual routers by queuing and forwarding based on DS field
State information on flows not saved by routers

Services
Provided within DS domain
Contiguous portion of Internet over which consistent set of DS policies administered
Typically under control of one administrative entity
Defined in SLA
Customer may be user organization or other DS domain
Packet class marked in DS field
Service provider configures forwarding policies of routers
Ongoing measure of performance provided for each class
DS domain expected to provide agreed service internally
If destination in another domain, DS domain attempts to forward packets through other
domains
Appropriate service level requested from each domain

SLA Parameters
Detailed service performance parameters
Throughput, drop probability, latency
Constraints on ingress and egress points
Indicate scope of service
Traffic profiles to be adhered to
Token bucket
Disposition of traffic in excess of profile

Example Services
Qualitative
A: Low latency
B: Low loss
Quantitative
C: 90% in-profile traffic delivered with no more than 50ms latency
D: 95% in-profile traffic delivered
Mixed
E: Twice bandwidth of F
F: Traffic with drop precedence X has higher delivery probability than that with
drop precedence Y

DS Field Detail
Leftmost 6 bits are DS codepoint
64 different classes available
3 pools
xxxxx0 : reserved for standards
000000 : default packet class
xxx000 : reserved for backwards compatibility with IPv4 TOS
xxxx11 : reserved for experimental or local use

xxxx01 : reserved for experimental or local use but may be allocated for future
standards if needed
Rightmost 2 bits unused
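Extracting the codepoint and identifying its pool is simple bit manipulation. A sketch; the example uses 0xB8, whose codepoint 101110 (46) is the well-known EF PHB value.

```python
# DS codepoint handling: the codepoint is the leftmost 6 bits of the
# (former) IPv4 ToS / IPv6 Traffic Class octet.

def dscp(tos_byte: int) -> int:
    return (tos_byte >> 2) & 0x3F      # drop the 2 unused rightmost bits

def codepoint_pool(cp: int) -> int:
    if cp & 0b1 == 0:
        return 1                       # xxxxx0: standards action
    if cp & 0b11 == 0b11:
        return 2                       # xxxx11: experimental/local use
    return 3                           # xxxx01: experimental, may be claimed

ef = dscp(0xB8)   # 0xB8 = 1011 1000 -> codepoint 101110 = 46 (EF PHB)
```

Note the default class 000000 and the IPv4-TOS-compatible codepoints xxx000 both fall inside pool 1, as the list above requires.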

Configuration Diagram

Configuration Interior Routers


Domain consists of set of contiguous routers
Interpretation of DS codepoints within domain is consistent
Interior nodes (routers) have simple mechanisms to handle packets based on codepoints
Queuing gives preferential treatment depending on codepoint
Per Hop behaviour (PHB)
Must be available to all routers
Typically the only part implemented in interior routers
Packet dropping rule dictates which to drop when buffer saturated

Configuration Boundary Routers


Include PHB rules
Also traffic conditioning to provide desired service
Classifier
Separate packets into classes
Meter
Measure traffic for conformance to profile
Marker
Policing by remarking codepoints if required
Shaper
Dropper

DS Traffic Conditioner

PER HOP BEHAVIOUR


Expedited forwarding
Premium service
Low loss, delay, jitter; assured bandwidth end-to-end service through domains
Looks like point to point or leased line
Difficult to achieve
Configure nodes so traffic aggregate has well defined minimum departure rate
EF PHB
Condition aggregate so arrival rate at any node is always less than minimum
departure rate
Boundary conditioners

Explicit Allocation
Superior to best efforts
Does not require reservation of resources
Does not require detailed discrimination among flows
Users offered choice of number of classes
Monitored at boundary node
In or out depending on matching profile or not
Inside network all traffic treated as single pool of packets, distinguished only as in or out
Drop out packets before in packets if necessary
Different levels of service because different number of in packets for each user

PHB - Assured Forwarding


Four classes defined
Select one or more to meet requirements
Within class, packets marked by customer or provider with one of three drop
precedence values
Used to determine importance when dropping packets as result of congestion

Codepoints for AF PHB

UNIT - 05
PROTOCOLS FOR QOS SUPPORT

INTRODUCTION
INCREASED DEMANDS
Need to incorporate bursty and stream traffic in TCP/IP architecture
Increase capacity
Faster links, switches, routers
Intelligent routing policies
End-to-end flow control
Multicasting
Quality of Service (QoS) capability
Transport protocol for streaming

Resource Reservation - Unicast


Prevention as well as reaction to congestion required
Can do this by resource reservation
Unicast
End users agree on QoS for task and request from network
May reserve resources
Routers pre-allocate resources
If QoS not available, may wait or try at reduced QoS

Resource Reservation Multicast


Generate vast traffic
High volume application like video
Lots of destinations
Can reduce load
Some members of group may not want current transmission
Channels of video
Some members may only be able to handle part of transmission
Basic and enhanced video components of video stream
Routers can decide if they can meet demand

Resource Reservation Problems on an Internet


Must interact with dynamic routing
Reservations must follow changes in route
Soft state: a set of state information at a router that expires unless refreshed
End users periodically renew resource requests

5.1 RESOURCE RESERVATION PROTOCOL (RSVP) DESIGN GOALS


Enable receivers to make reservations
Different reservations among members of same multicast group allowed
Deal gracefully with changes in group membership
Dynamic reservations, separate for each member of group
Aggregate for group should reflect resources needed
Take into account common path to different members of group
Receivers can select one of multiple sources (channel selection)

Deal gracefully with changes in routes
Re-establish reservations
Control protocol overhead
Independent of routing protocol

RSVP Characteristics
Unicast and Multicast
Simplex
Unidirectional data flow
Separate reservations in two directions
Receiver initiated
Receiver knows which subset of source transmissions it wants
Maintain soft state in internet
Responsibility of end users
Providing different reservation styles
Users specify how reservations for groups are aggregated
Transparent operation through non-RSVP routers
Support IPv4 (ToS field) and IPv6 (Flow label field)

5.2 DATA FLOWS - SESSION


Data flow identified by destination
Resources allocated by router for duration of session
Defined by
Destination IP address
Unicast or multicast
IP protocol identifier
TCP, UDP etc.
Destination port
May not be used in multicast

Flow Descriptor
Reservation Request
Flow spec
Desired QoS
Used to set parameters in nodes packet scheduler
Service class, Rspec (reserve), Tspec (traffic)
Filter spec
Set of packets for this reservation
Source address, source port

5.3 RSVP OPERATION


G1, G2, G3 members of multicast group
S1, S2 sources transmitting to that group
Heavy black line is routing tree for S1, heavy grey line for S2
Arrowed lines are packet transmission from S1 (black) and S2 (grey)
All four routers need to know reservations for each multicast address
Resource requests must propagate back through routing tree

Treatment of Packets of One Session at One Router

RSVP Operation Diagram

Filtering
G3 has reservation filter spec including S1 and S2
G1, G2 from S1 only
R3 delivers from S2 to G3 but does not forward to R4
G1, G2 send RSVP request with filter excluding S2
G1, G2 only members of group reached through R4
R4 doesn't need to forward packets from this session
R4 merges filter spec requests and sends to R3
R3 no longer forwards this session's packets to R4
Handling of filtered packets not specified

Here they are dropped but could be given best-effort delivery
R3 needs to forward to G3
Stores filter spec but doesn't propagate it
Reservation Styles
Determines manner in which resource requirements from members of group are
aggregated
Reservation attribute
Reservation shared among senders (shared)
Characterizing entire flow received on multicast address
Allocated to each sender (distinct)
Simultaneously capable of receiving data flow from each sender
Sender selection
List of sources (explicit)
All sources, no filter spec (wild card)

Reservation Attributes and Styles


Reservation Attribute
Distinct
Sender selection explicit = Fixed filter (FF)
Sender selection wild card = none
Shared
Sender selection explicit= Shared-explicit (SE)
Sender selection wild card = Wild card filter (WF)

Wild Card Filter Style

Single resource reservation shared by all senders to this address


If used by all receivers: shared pipe whose capacity is largest of resource requests from
receivers downstream from any point on tree
Independent of number of senders using it
Propagated upstream to all senders
WF(*{Q})
* = wild card sender
Q = flowspec
Audio teleconferencing with multiple sites

Fixed Filter Style


Distinct reservation for each sender
Explicit list of senders
FF(S1{Q1}, S2{Q2}, ...)
Video distribution.

Shared Explicit Style


Single reservation shared among specific list of senders
SE((S1, S2, S3){Q})
Multicast applications with multiple data sources but unlikely to transmit
simultaneously
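As a rough illustration of how the three styles aggregate requests, the sketch below (illustrative Python, not part of any RSVP implementation; flowspecs are reduced to single numbers, and all names are invented) merges reservation requests the way WF, FF, and SE routers might:

```python
def merge_wf(flowspecs):
    """Wildcard Filter: one reservation shared by all senders,
    sized to the largest request from downstream receivers."""
    return max(flowspecs)

def merge_ff(requests):
    """Fixed Filter: a distinct reservation per explicitly listed
    sender; keep the largest request seen for each sender."""
    merged = {}
    for sender, q in requests:
        merged[sender] = max(q, merged.get(sender, 0))
    return merged

def merge_se(requests):
    """Shared Explicit: one shared reservation covering the union
    of the listed senders, sized to the largest request."""
    senders = set()
    for sender_list, q in requests:
        senders.update(sender_list)
    return senders, max(q for _, q in requests)
```

For example, merge_wf([3, 5, 2]) yields 5: the shared pipe is sized to the largest downstream request, independent of the number of senders.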

5.4 RSVP PROTOCOL MECHANISMS
Two message types
Resv
Originate at multicast group receivers
Propagate upstream
Merged with other Resv messages when appropriate
Create soft states
Reach sender
Allow host to set up traffic control for first hop
Path
Provide upstream routing information
Issued by sending hosts
Transmitted through distribution tree to all destinations

RSVP Host Model

RSVP is a transport layer protocol that enables a network to provide differentiated levels of
service to specific flows of data. Ostensibly, different application types have different
performance requirements. RSVP acknowledges these differences and provides the
mechanisms necessary to detect the levels of performance required by different applications
and to modify network behaviors to accommodate those required levels. Over time, as time- and
latency-sensitive applications mature and proliferate, RSVP's capabilities will become
increasingly important.

5.5 Multiprotocol Label Switching (MPLS)


Routing algorithms provide support for performance goals
Distributed and dynamic
React to congestion
Load balance across network
Based on metrics
Develop information that can be used in handling different service needs
Enhancements provide direct support
IS, DS, RSVP
Nothing directly improves throughput or delay
MPLS tries to match ATM QoS support

Background
Efforts to marry IP and ATM
IP switching (Ipsilon)
Tag switching (Cisco)
Aggregate route based IP switching (IBM)
Cascade (IP navigator)
All use standard routing protocols to define paths between end points
Assign packets to path as they enter network
Use ATM switches to move packets along paths
ATM switching (was) much faster than IP routers
Use faster technology

Developments
IETF working group in 1997, proposed standard 2001
Routers developed to be as fast as ATM switches
Remove the need to provide both technologies in same network
MPLS does provide new capabilities
QoS support
Traffic engineering
Virtual private networks
Multiprotocol support

Connection Oriented QoS Support


Guarantee fixed capacity for specific applications
Control latency/jitter
Ensure capacity for voice
Provide specific, guaranteed quantifiable SLAs
Configure varying degrees of QoS for multiple customers
MPLS imposes connection oriented framework on IP based internets

Traffic Engineering
Ability to dynamically define routes, plan resource commitments based on known
demands and optimize network utilization
Basic IP allows primitive traffic engineering
E.g. dynamic routing
MPLS makes network resource commitment easy
Able to balance load in face of demand
Able to commit to different levels of support to meet user traffic requirements
Aware of traffic flows with QoS requirements and predicted demand
Intelligent re-routing when congested

VPN Support
Traffic from a given enterprise or group passes transparently through an internet
Segregated from other traffic on internet
Performance guarantees
Security

Multiprotocol Support
MPLS can be used on different network technologies
IP
Requires router upgrades
Coexist with ordinary routers
ATM
Enables MPLS-capable and ordinary switches to co-exist
Frame relay
Enables MPLS-capable and ordinary switches to co-exist
Mixed network

5.6 MPLS OPERATION


Label switched routers capable of switching and routing packets based on label
appended to packet
Labels define a flow of packets between end points or multicast destinations
Each distinct flow (forwarding equivalence class, FEC) has a specific path through LSRs
defined
Connection oriented
Each FEC has QoS requirements
IP header not examined
Forward based on label value

MPLS Operation Diagram

Explanation Setup
Labelled switched path established prior to routing and delivery of packets
QoS parameters established along path
Resource commitment
Queuing and discard policy at LSR
Interior routing protocol e.g. OSPF used
Labels assigned

Local significance only
Manually or using Label distribution protocol (LDP) or enhanced
version of RSVP

Explanation Packet Handling


Packet enters domain through edge LSR
Processed to determine QoS
LSR assigns packet to FEC and hence LSP
May need co-operation to set up new LSP
Append label
Forward packet
Within domain LSR receives packet
Remove incoming label, attach outgoing label and forward
Egress edge strips label, reads IP header and forwards

Notes
MPLS domain is contiguous set of MPLS enabled routers
Traffic may enter or exit via direct connection to MPLS router or from non-MPLS
router
FEC determined by parameters, e.g.
Source/destination IP address or network IP address
Port numbers
IP protocol id
Differentiated services codepoint
IPv6 flow label
Forwarding is simple lookup in predefined table
Map label to next hop
Can define PHB at an LSR for given FEC
Packets between same end points may belong to different FEC
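Forwarding as a simple table lookup can be sketched as below (the table contents, labels, and next-hop names are invented for illustration; real labels have only local significance and are set up by LDP or RSVP-TE):

```python
# Incoming label -> (outgoing label, next hop). A per-LSR table,
# populated in practice by a label distribution protocol,
# here simply hard-coded.
lfib = {
    17: (29, "LSR-B"),
    18: (40, "LSR-C"),
}

def forward(label, payload):
    """Swap the label and choose the next hop without ever
    examining the IP header carried in the payload."""
    out_label, next_hop = lfib[label]
    return out_label, next_hop, payload
```

A packet arriving with label 17 leaves with label 29 towards LSR-B; the IP header in the payload is never parsed.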

5.7 MPLS PACKET FORWARDING


LABEL STACKING
Packet may carry number of labels
LIFO (stack)
Processing based on top label
Any LSR may push or pop label
Unlimited levels
Allows aggregation of LSPs into single LSP for part of route
C.f. ATM virtual channels inside virtual paths
E.g. aggregate all enterprise traffic into one LSP for access provider to handle
Reduces size of tables
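The LIFO discipline can be pictured with an ordinary list standing in for the label stack (the label values and S-bit settings are invented for the example):

```python
stack = []
stack.append({"label": 100, "s": 1})  # bottom-of-stack entry (S=1)
stack.append({"label": 52, "s": 0})   # aggregate-LSP label pushed on top

top = stack[-1]   # forwarding decisions use only the top label
stack.pop()       # popped when the packet leaves the aggregate LSP
```

After the pop, the bottom entry (S=1) is on top again, and the network-layer packet follows it.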

Label Format Diagram

Time to Live Processing
Needed to support TTL since IP header not read
First label TTL set to IP header TTL on entry to MPLS domain
TTL of top entry on stack decremented at internal LSR
If zero, packet dropped or passed to ordinary error processing (e.g. ICMP)
If positive, value placed in TTL of top label on stack and packet forwarded
At exit from domain, (single stack entry) TTL decremented
If zero, as above
If positive, value placed in TTL field of IP header and packet forwarded
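The decrement-and-drop rule at an internal LSR can be sketched as follows (a label stack entry is modelled as a dict with a 'ttl' field; this is an illustration, not the actual wire encoding):

```python
def process_ttl(stack):
    """Decrement the TTL of the top stack entry; drop the packet
    (return None) when it reaches zero, as an internal LSR would."""
    ttl = stack[0]["ttl"] - 1      # stack[0] is the top entry
    if ttl == 0:
        return None                # drop, or hand to error processing
    stack[0]["ttl"] = ttl
    return stack
```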

Label Stack
Appear after data link layer header, before network layer header
Top of stack is earliest (closest to network layer header)
Network layer packet follows label stack entry with S=1
Over connection oriented services
Topmost label value in ATM header VPI/VCI field
Facilitates ATM switching
Top label inserted between cell header and IP header
In DLCI field of Frame Relay
Note: TTL problem

Position of MPLS Label Stack

FECs, LSPs, and Labels


Traffic grouped into FECs
Traffic in a FEC transits an MPLS domain along an LSP
Packets identified by locally significant label
At each LSR, labelled packets forwarded on basis of label.
LSR replaces incoming label with outgoing label
Each flow must be assigned to a FEC
Routing protocol must determine topology and current conditions so LSP can be
assigned to FEC
Must be able to gather and use information to support QoS

LSRs must be aware of LSP for given FEC, assign incoming label to LSP,
communicate label to other LSRs

Topology of LSPs
Unique ingress and egress LSR
Single path through domain
Unique egress, multiple ingress LSRs
Multiple paths, possibly sharing final few hops
Multiple egress LSRs for unicast traffic
Multicast

Route Selection
Selection of LSP for particular FEC
Hop-by-hop
LSR independently chooses next hop
Ordinary routing protocols e.g. OSPF
Doesn't support traffic engineering or policy routing
Explicit
LSR (usually ingress or egress) specifies some or all LSRs in LSP for given
FEC
Selected by configuration, or dynamically

Constraint Based Routing Algorithm


Take into account traffic requirements of flows and resources available along hops
Current utilization, existing capacity, committed services
Additional metrics over and above traditional routing protocols (OSPF)
Max link data rate
Current capacity reservation
Packet loss ratio
Link propagation delay

Label Distribution
Setting up LSP
Assign label to LSP
Inform all potential upstream nodes of label assigned by LSR to FEC
Allows proper packet labelling
Learn next hop for LSP and label that downstream node has assigned to FEC
Allow LSR to map incoming to outgoing label

Real Time Transport Protocol


TCP not suited to real-time distributed applications
Point to point so not suitable for multicast
Retransmitted segments arrive out of order
No way to associate timing with segments
UDP does not include timing information nor any support for real time applications
Solution is real-time transport protocol RTP

5.8 RTP ARCHITECTURE
Close coupling between protocol and application layer functionality
Framework for application to implement single protocol
Application level framing
Integrated layer processing

Application Level Framing


Recovery of lost data done by application rather than transport layer
Application may accept less than perfect delivery
Real time audio and video
Inform source about quality of delivery rather than retransmit
Source can switch to lower quality
Application may provide data for retransmission
Sending application may recompute lost values rather than storing them
Sending application can provide revised values
Can send new data to fix consequences of loss
Lower layers deal with data in units provided by application
Application data units (ADU)

Integrated Layer Processing

Adjacent layers in protocol stack tightly coupled


Allows out of order or parallel functions from different layers

5.9 RTP ARCHITECTURE DIAGRAM

RTP Data Transfer Protocol


Transport of real time data among number of participants in a session, defined by:
RTP Port number
UDP destination port number if using UDP
RTP Control Protocol (RTCP) port number
Destination port address used by all participants for RTCP transfer
IP addresses
Multicast or set of unicast

Multicast Support
Each RTP data unit includes:

Source identifier
Timestamp
Payload format

Relays
Intermediate system acting as receiver and transmitter for given protocol layer
Mixers
Receives streams of RTP packets from one or more sources
Combines streams
Forwards new stream
Translators
Produce one or more outgoing RTP packets for each incoming packet
E.g. convert video to lower quality

RTP Header

RTP Control Protocol (RTCP)


RTP is for user data
RTCP is multicast provision of feedback to sources and session participants
Uses same underlying transport protocol (usually UDP) but a different port number
RTCP packet issued periodically by each participant to other session members

RTCP Functions
QoS and congestion control
Identification
Session size estimation and scaling
Session control

RTCP Transmission
Number of separate RTCP packets bundled in single UDP datagram
Sender report
Receiver report

Source description
Goodbye
Application specific

RTCP Packet Formats

Packet Fields (All Packets)


Version (2 bit) currently version 2
Padding (1 bit) indicates padding bits at end of control information, with number of octets
as last octet of padding
Count (5 bit) of reception report blocks in SR or RR, or source items in SDES or BYE
Packet type (8 bit)
Length (16 bit) in 32 bit words minus 1
In addition Sender and receiver reports have:
Synchronization Source Identifier
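Assuming these widths (2-bit version, 1-bit padding, 5-bit count, 8-bit packet type, 16-bit length), the common header packs into four octets; the sample values below use the sender-report packet type (200) and are otherwise illustrative:

```python
import struct

def rtcp_header(version, padding, count, packet_type, length_words):
    """Pack the common RTCP header fields. 'length_words' is the
    packet length in 32-bit words minus one, per the definition above."""
    first = (version << 6) | (padding << 5) | count
    return struct.pack("!BBH", first, packet_type, length_words)
```

For instance, rtcp_header(2, 0, 1, 200, 12) yields the octets 81 C8 00 0C.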

Packet Fields (Sender Report)
Sender Information Block
NTP timestamp: absolute wall clock time when report sent
RTP Timestamp: Relative time used to create timestamps in RTP packets
Senders packet count (for this session)
Senders octet count (for this session)


Reception Report Block


SSRC_n (32 bit) identifies source referred to by this report block
Fraction lost (8 bits) since previous SR or RR
Cumulative number of packets lost (24 bit) during this session
Extended highest sequence number received (32 bit)
Least significant 16 bits is highest RTP data sequence number received from SSRC_n
Most significant 16 bits is number of times sequence number has wrapped to zero
Interarrival jitter (32 bit)
Last SR timestamp (32 bit)
Delay since last SR (32 bit)
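How the 32-bit extended sequence number combines the wrap count (high 16 bits) with the highest sequence number received (low 16 bits) can be shown directly (values illustrative):

```python
def extended_seq(wrap_count, highest_seq):
    """Most significant 16 bits: number of times the 16-bit RTP
    sequence number has wrapped to zero; least significant 16 bits:
    highest sequence number received from this source."""
    return (wrap_count << 16) | highest_seq
```

For example, extended_seq(2, 70) gives 2 * 65536 + 70 = 131142.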

Receiver Report
Same as sender report except:
Packet type field has different value
No sender information block
Source Description Packet
Used by source to give more information
32 bit header followed by zero or more additional information chunks
E.g.:
0 END End of SDES list
1 CNAME Canonical name
2 NAME Real user name of source
3 EMAIL Email address

Goodbye (BYE)
Indicates one or more sources no longer active
Confirms departure rather than failure of network

Application Defined Packet


Experimental use
For functions & features that are application specific


UNIT-01
HIGH SPEED NETWORKS
PART-A

1. What is ATM?[MAY/JUNE-2012]
Asynchronous Transfer Mode (ATM) is a method for multiplexing and switching that
supports a broad range of services. ATM is a connection-oriented packet switching technique
that generalizes the notion of a virtual connection to one that provides quality-of-service
guarantees.

2. What are the main features of ATM?


The service is connection-oriented, with data transfer over a virtual circuit.
The data is transferred in 53 byte packets called cells.
Cells from different VCs that occupy the same channel or link are statistically
multiplexed.
ATM switches may treat the cell streams in different VC connections unequally over
the same channel in order to provide different qualities of services (QOS).

3. What are the layers/plane of BISDN reference model?


User plane.
Control plane.
Layer management plane.
Plane management plane.

4. Define MPLS?
Multi Protocol Label Switching is to standardize a label switching paradigm that
integrates layer 2 switching with layer 3 routing. The device that integrates routing and
switching functions is called a Label Switching Router (LSR).

5. What is called frame relay?


Frame relay is a connection-oriented data transport service for public switched
networks. The frame relay protocols are modifications of the X.25 standards.

6. What are the advantages of DQDB MAC protocol?


It is very efficient
There is no loss of capacity due to collision
The head station continuously generates an idle frame

7. Define VPI & VCI


The Virtual Path Identifier (VPI) constitutes a routing field for the network while the
Virtual Channel Identifier (VCI) is used for the routing to and from the end user.

8. Mention the High Speed LANs


Fast Ethernet
Gigabit Ethernet
Fibre Channel
High Speed Wireless LANs.


9. What are the requirements for wireless LANs? [MAY/JUNE-2014]


Throughput
Number of nodes
Service Area
Battery Power
Handoff/roaming
Dynamic Configuration.

10. What are the types of Ethernet?


Classical Ethernet
Fast Ethernet
10Mbps Ethernet
Gigabit Ethernet
10-Gbps Ethernet.

11. Define VPN


MPLS provides an efficient mechanism for supporting virtual private networks
(VPNs). With a VPN, the traffic of a given enterprise or group passes transparently through an
internet, providing performance guarantees and security.

12. Define ISDN?


The integrated services digital network provides a unique user network interface
(UNI) for the support of the basic set of narrowband (NB) services, that is, voice and
low-speed data, thus providing narrowband integrated access.

13. What are the features of an ISDN?


Standard user network interface (UNI).
Integrated digital transport.
Service integration.
Intelligent network services.

14. What are the services of LAPD?


Acknowledged information transfer service.
Unacknowledged information transfer service.

15. Define frame relay.


A form of packet switching based on the use of variable-length link-layer frames.
There is no network layer, and many of the basic functions have been streamlined or
eliminated to provide for greater throughput.

16. What are the traffic parameters of connection-oriented services?


Peak Cell Rate (PCR)
Sustained Cell Rate (SCR)
Initial Cell Rate (ICR).
Cell Delay Variation Tolerance (CDVT).
Burst Tolerance (BT).
Minimum Cell Rate (MCR).


17. What are the quality service (QoS) parameters of connection-oriented services?
Cell Loss Ratio (CLR).
Cell Delay Variation (CDV).
Peak-to-Peak Cell Delay Variation (Peak-to-Peak CDV).
Maximum Cell Transfer Delay (Max CTD).
Mean Cell Transfer Delay (Mean CTD).

18. Types of delays encountered by cells


Packetization delay (PD) at the source.
Transmission and propagation delay (TD).
Queuing delay (QD) at each switch.
Affixed processing delay (FD) at each switch.
A jitter compression or depacketization delay (DD) at the destination.

19. What is the datalink control functions provided by LAPF?


Frame delimiting, alignment & transparency.
Frame multiplexing/demultiplexing using the address field.
Inspection of the frame to ensure that it consists of an integer number of octets prior to
zero bit insertion or following zero bit extraction.
Inspection of the frame to ensure that it is neither too long nor too short.
Detection of transmission errors.
Congestion control functions.

20. Difference b/w AAL 3/4 & AAL 5


AAL 3/4:
The MID field is used to multiplex different streams of data on the same virtual ATM
connection.
A 10-bit CRC is provided for each SAR PDU.
Overhead is 8 octets per AAL SDU plus 4 octets per ATM cell.

AAL 5:
No MID field; it is assumed that the higher-layer software takes care of such multiplexing.
A 32-bit CRC protects the entire CPCS PDU, providing strong protection against bit errors.
Overhead is 8 octets per AAL SDU and 0 octets per ATM cell.

21. What are the principles of ISDN ?


Support voice and non-voice communication.
Support switched and non switched application.
Reliance on 64Kbps connection.
Intelligence in the network.


22. Difference b/w Frame relay and X.25 packet switching. [NOV/DEC-2012]


Frame Relay:
End-to-end flow and error control.
Multiplexing and switching operations are carried out in layer 2 (data link layer).
Common channel signalling.
Data rate: up to 2 Mbps.

X.25 Packet Switching:
Hop-by-hop flow and error control.
Multiplexing and switching operations are carried out in layer 3 (network layer).
Inband signalling.
Data rate: up to 64 Kbps.

23. Give the neat sketch of ATM Protocol Architecture.

24. Draw the ATM Cell structure or Cell Format. [MAY/JUNE-2014, 2013, NOV/DEC-2014]

UNIT-02
CONGESTION AND TRAFFIC MANAGEMENT

PART-A

1. What are the queuing models?[MAY/JUN-2013,APR/MAY-2010]


Two types of queuing models are:
Single server queue.
Multi server queue.

2. Why Congestion Occurs in the networks?[MAY/JUN-2012]


The phenomenon of congestion is a complex one, as is the subject of congestion
control. Congestion occurs when the number of packets being transmitted through a network
begins to approach the packet-handling capacity of the network.
3. What is meant by the term congestion in networks?[MAY/JUN-2013]
Congestion is the condition in which the number of packets in the network approaches
the packet-handling capacity of the network; the objective of congestion control is to
maintain the number of packets below this level.

4. State Kendall's notation. [APR/MAY-2011, NOV/DEC-2013]


Kendall's notation is X/Y/N, where X refers to the distribution of the interarrival
times, Y refers to the distribution of service times, and N refers to the number of servers.
The most common distributions are denoted as follows:
G = General distribution of interarrival times or service times
GI = General distribution of interarrival times with the restriction that
Interarrival times are independent.
M = Negative exponential distribution
D = Deterministic arrivals or fixed-length service.
Thus, M/M/1 refers to a single-server queuing model with Poisson arrivals
(exponential interarrival times) and exponential service times.

5. What is meant by congestion control technique?


Congestion avoidance: the procedure used at the beginning stage of congestion to
minimize its effect. This procedure is initiated prior to or at point A, and it prevents
congestion from progressing to point B.
Techniques,
Back pressure
Choke packet
Implicit congestion Signalling
Explicit Congestion Signalling.

6. Define backward explicit congestion notification (BECN). [NOV/DEC-2012]


The BECN bit is part of the Address field in the Frame Relay frame header. DCE
devices set the value of the BECN bit to 1 in frames traveling in the opposite direction of
frames with their FECN bit set. This informs the receiving DTE device that a particular path
through the network is congested.

7. What is single server queue?[MAY/JUN-2014]
The control element of the system is a server, which provides some service to items. If
the server is idle, an item is served immediately; otherwise an arriving item joins a waiting
line and is served later according to the dispatching discipline.
(Diagram: arrival -> waiting line (queue) -> server -> departure; the residence time spans
queueing plus service)

8. Define committed burst size (BC)


It is defined as the maximum number of bits in a predefined period of time that the
network is committed to transfer without discarding any frames.

9. Define committed information rate (CIR)


CIR is a rate in bps that a network agrees to support for a particular frame mode
connection. Any data transmitted in excess of CIR is vulnerable to discard in event of
congestion.
CIR < Access rate

10. Define excess burst size (Be)


It is defined as the maximum number of bits in excess of BC that a user can send during
a predefined period of time. The network is committed to transfer these bits if there is no
congestion. Frames within Be have a lower probability of being transferred than frames within BC.
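A hedged sketch of how frames might be classified against BC and Be over the measurement interval (the function name and threshold values are invented for the example):

```python
def classify(bits_sent, Bc, Be):
    """Cumulative bits sent in the interval: within Bc the network is
    committed to delivery; within Bc+Be delivery is attempted but the
    frame is discard-eligible; beyond Bc+Be the frame is discarded."""
    if bits_sent <= Bc:
        return "committed"
    if bits_sent <= Bc + Be:
        return "discard-eligible"
    return "discard"
```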

11. Define access rate.


For every connection in frame relay network, an access rate (bps) is defined. The access
rate actually depends on bandwidth of channel connecting user to network.

12. Write Little's formula. [NOV/DEC-2009]


Little's formula relates the mean number of items in a queuing system to the arrival rate λ:
the mean number of items resident, r, is the arrival rate times the mean residence time Tr,
and the mean number waiting, w, is the arrival rate times the mean waiting time Tw.
It is given as: r = λTr (or) w = λTw
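A quick numeric check of Little's formula with illustrative values:

```python
lam = 10.0    # arrival rate, items per second (illustrative)
Tr = 0.5      # mean residence time, seconds (illustrative)
r = lam * Tr  # Little's formula: mean number of items in the system
# Likewise w = lam * Tw would give the mean number waiting in queue.
```

Ten arrivals per second, each spending half a second in the system, leave five items resident on average.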

13. List out the model characteristics of queuing models.


Item population.
Queue size
Dispatching discipline
Service pattern

14. List out the fundamental task of a queuing analysis.


Queuing analysis takes the following as input information:
Arrival rate
Service rate
Number of servers
Provide as output information concerning:
Items waiting

Waiting time
Items queued
Residence time

15. List out the assumptions for single server queues.


Poisson arrival rate.
Dispatching discipline does not give preference to items based on service times
Formulas for standard deviation assume first-in, first-out dispatching.
No items are discarded from the queue.

16. List out the assumptions for Multiserver queues.


Poisson arrival rate.
Exponential service times
All servers equally loaded.
All servers have same mean service time.
First-in, first-out dispatching.
No items are discarded from the queue.

17. State Jackson's theorem.


Jackson's theorem can be used to analyse a network of queues. The theorem is based on
three assumptions:
1. The queuing network consists of m nodes, each of which provides an independent
exponential service.
2. Items arriving from outside the system to any one of the nodes arrive with a Poisson
rate.
3. Once served at a node, an item goes (immediately) to one of the other nodes with a
fixed probability, or out of the system.

18. Define Arrival rate and service rate.


Arrival Rate: The rate at which items enter the queuing system, i.e., the interarrival rate.
It is denoted λ.
Service Rate: The rate at which items leave the queuing system, i.e., the service rate.
It is denoted μ.

19. How does frame relay report congestion?


When a particular portion of the network is heavily congested, it is desirable to route
packets around, rather than through, the area of congestion. Frame relay reports congestion
explicitly using the FECN and BECN bits in the frame header.


UNIT-03
TCP AND ATM CONGESTION CONTROL

PART-A

1. Define congestion.
Excessive network or internetwork traffic causing a general degradation of service.

2. Define congestion control.[MAY/JUNE-2014]


A method to limit the total amount of data entering the network to the amount of data
that the network can carry.

3. List out the TCP implementation policy option.


Send policy
Deliver policy
Accept policy
Retransmit policy
Acknowledge policy

4. List out the three retransmit strategies in TCP traffic control?[MAY/JUNE-2014]


First-only
Batch
Individual

5. Explain about the congestion control in a TCP/IP based internet implementation task.
IP is a connectionless, stateless protocol that includes no provision for detecting,
much less controlling, congestion.
TCP provides only end-to-end flow control and can only infer the presence of
congestion.
There is no cooperative, distributed algorithm to bind together the various TCP
entities.

6. List out retransmission timer management techniques. [NOV/DEC-2010]


RTT variance estimation.
Exponential RTO back off
Karn's algorithm.

7. Write down the window management techniques.[NOV/DEC-2013]


Slow start.
Dynamic window sizing on congestion.
Fast retransmit
Fast recovery
Limited transmit.

8. Define binary exponential back off.[NOV/DEC-2012]


A simple technique for implementing RTO backoff is to multiply the RTO for a
segment by a constant value for each retransmission.

RTO = q * RTO ... (1)
This equation causes RTO to grow exponentially with each retransmission. The most
commonly used value of q is 2.
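With q = 2 (binary exponential backoff), the RTO doubles on each successive retransmission of the same segment, e.g.:

```python
rto = 1.0               # initial RTO in seconds (illustrative value)
history = []
for _ in range(4):      # four successive retransmissions
    rto = 2 * rto       # RTO = q * RTO with q = 2
    history.append(rto)
# history is now [2.0, 4.0, 8.0, 16.0]
```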

9. State the condition that must be met for a cell to conform.


In case of ATM, the information flow on each logical connection is organized
into fixed-size packets called cells.
A cell conforms if it arrives within the cell delay variation tolerance (CDVT)
of its theoretical arrival time.

10. What are the mechanisms used in ATM traffic control to avoid congestion? [MAY/JUNE-2015]
Resource management.
Connection admission control
Usage parameter control
Traffic shaping

11. How are timers useful to control congestion in TCP?


The value of RTO (retransmission timeout) has a critical effect on TCP's reaction to
congestion. Hence, by calculating RTO effectively, congestion can be controlled.

12. What is the difference between flow control and congestion control?


Flow control: The transmitter should not overwhelm the receiver so flow control
is performed.
Congestion control: It aims to limit the total amount of data entering the network
to the amount of data that the network can carry.

13. What are reactive congestion control and preventive congestion control?
Reactive congestion control: Whenever packet discards occur due to severe
congestion, some control mechanism is needed to recover from network collapse; such
mechanisms constitute reactive congestion control.
Preventive congestion control: Mechanism to avoid congestion before it occurs.

14. Why is congestion control difficult to implement in TCP?


The end system is expected to exercise flow control upon the source end system at a
higher layer. Thus it is difficult to implement in TCP.
15. What are the accept policies used in TCP traffic control?
Accept policy:
a). In-order policy
b). In window policy.

16. What is meant by silly window syndrome?


If data is frequently sent as small segments, the sender's response will be quick, but it
causes a degradation in performance. This degradation is called silly window syndrome.
17. What is meant by cell insertion time?
Cell insertion time is the time taken to insert a single cell on to the network.

18. What are the mechanisms used in TCP to control congestion?
TCP congestion control mechanism:
a). RTO timer management
b). window management

19. What is meant by open loop and closed loop control in ABR mechanism?
Open loop control: If there is no feedback to the source concerning congestion, this
approach is called open loop control.
Closed loop control: ABR has feedback to the source concerning congestion; this
approach is called closed loop control.
20. What is meant by allowed cell rate (ACR)?[APR/MAY-2010]
Allowed cell rate: The current rate at which source is permitted to send or transmit cell
in ABR mechanism is called allowed cell rate.
21. Define Behavior Class Selector (BCS)
Behaviour Class Selector (BCS): BCS enables an ATM network to provide different
service levels among UBR connections by associating each connection with one of a set of
behaviour class.
22. What is cell delay variation?
In an ATM network, voice & video signals can be digitized & transmitted as a
stream of cells. A key requirement, especially for voice, is that the delay across the network be
short. ATM is designed to minimize the processing & transmission overhead within the network
so that very fast cell switching & routing is possible.
23. Why retransmission policy essential in TCP?
TCP maintains a queue of segments that have been sent but not yet acknowledged. The
TCP specification states that TCP will retransmit a segment if it fails to receive an
acknowledgment within a given time. A TCP implementation may employ one of three
retransmission strategies:
(i) First only
(ii) Batch
(iii) Individual
24. Why congestion control in a tcp/ip internet is complex?
The task is a difficult one because of the following factors:
(i)IP is a connectionless stateless protocol that includes no provision for detecting much
less controlling congestion.
(ii)TCP provides only end-to-end flow control.
(iii)There is no co-operative distributed algorithm.
25. Write the relationship b/w throughput & TCP window size W.
S = 1 for W >= RD/4
S = 4W/RD for W < RD/4
Where
W TCP window size (octets)
R Data rate (bps) at TCP source available to a given TCP connection
D Propagation delay (seconds) b/w TCP source & destination over a given TCP
connection
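Assuming W in octets, R in bits per second, and D as the one-way propagation delay in seconds (so that 8W bits sent per 2D round trip gives S = 4W/RD, capped at 1), a small helper can evaluate the relation; the numbers below are illustrative:

```python
def normalized_throughput(W, R, D):
    """Normalized throughput S of a window-limited TCP connection:
    S = min(1, 4W/RD) under the unit assumptions stated above."""
    return min(1.0, 4.0 * W / (R * D))
```

For example, normalized_throughput(1000, 64000, 0.25) returns 0.25: a 1000-octet window fills only a quarter of a 64 kbps path with 0.25 s delay.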

26. Define ABR[MAY/JUNE-2013]


ABR is the available bit rate service. An ABR application specifies a peak cell rate (PCR)
that it will use and a minimum cell rate (MCR) that it requires. The network allocates
resources so that all ABR applications receive at least their MCR
capacity. The ABR mechanism uses explicit feedback to sources to assure that capacity is
fairly allocated.

27. Define CBR (Constant Bit Rate)


The CBR service is perhaps the simplest to define. It is used by applications that
require a fixed data rate that is continuously available during the connection lifetime & a
relatively tight upper bound on transfer delay. CBR is commonly used for uncompressed audio
& video information.

28. Write the examples for CBR.


Video conferencing
Interactive audio
Audio/video distribution
Audio/video retrieval

UNIT-04
INTEGRATED AND DIFFERENTIATED SERVICE

PART-A

1. Write down the two different, complementary IETF Standards traffic management
Frameworks?
Integrated services
Differentiated services

2. Write down the current traffic demand viewed by the IS provider?


Limits the demand that is satisfied to that which can be handled by the current
capacity of the network.
Reserves resources within the domain to provide a particular QoS to particular
portions of the satisfied demand.
3. Explain about differentiated services?
A DS framework does not attempt to view the total traffic demand in any
overall or integrated sense, nor does it attempt to reserve network capacity in advance.
In a DS framework, traffic is classified into a number of traffic groups. Each group is
labeled appropriately, and the service provided by network elements depends on group
membership, with packets belonging to different groups being handled differently.

4. What are the requirements for inelastic traffic?[APR/MAY-2008]


Throughput
Delay
Jitter
Packet loss

5. Give some applications that come under elastic traffic.[NOV/DEC-2013]


E-mail (SMTP): quite insensitive to changes in delay.
File transfer (FTP): the user expects the delay to be proportional to the file size;
sensitive to changes in throughput.
Network management (SNMP): the need to get through with minimum delay
increases with increased congestion.
Remote logon and Web access (TELNET and HTTP): these interactive
applications are quite sensitive to delay.

6. State the drawbacks of the FIFO queuing discipline?[APR/MAY-2008]


No special treatment is given to packets from flows that are of higher priority
or are more delay sensitive. If a number of packets from different flows are
ready to forward, they are handled strictly in FIFO order.
If a number of smaller packets are queued behind a long packet, then FIFO
queuing results in a larger average delay per packet than if the shorter packets
were transmitted before the longer packet. In general, flows of larger packets
get better service.
A greedy TCP connection can crowd out more altruistic connections. If
congestion occurs and one TCP connection fails to back off, other
connections along the same path segment must back off.


7. Distinguish between inelastic and elastic traffic?[NOV/DEC-2009]

Elastic traffic: traffic that can adjust, over wide ranges, to changes in delay and
throughput across an internet and still meet the needs of its applications. Examples:
electronic mail (SMTP), file transfer (FTP), Web access (HTTP), network
management (SNMP).
Inelastic traffic: traffic that does not easily adapt, if at all, to changes in delay and
throughput across an internet. Prime examples: real-time traffic such as voice chat
and teleconferencing.

8. Define the format of DS field?


Packets are labeled for service handling by means of the DS field, which is placed in
the type of service field of an IPv4 header or the traffic class field of the IPv6 header.
RFC 2474 defines the DS field as having the following format: the leftmost 6 bits form
a DS code point and the rightmost 2 bits are currently unused. The DS codepoint is the DS
label used to classify packets for differentiated services.

9. Define DS code point.


A specified value of the 6-bit DS codepoint portion of the 8-bit DS field in the IP header,
which indicates the class to which a packet belongs and its drop precedence.
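Since the codepoint occupies the leftmost 6 bits of the ToS/Traffic Class octet, extracting it is a two-bit shift. A small sketch (helper names are ours; the example codepoint 46 is the well-known EF value):

```python
def dscp_of(tos_byte):
    """Return the 6-bit DS codepoint: the leftmost 6 bits of the
    IPv4 Type-of-Service / IPv6 Traffic Class octet."""
    return (tos_byte >> 2) & 0x3F

def unused_bits(tos_byte):
    """The rightmost 2 bits, currently unused per RFC 2474."""
    return tos_byte & 0x03

print(dscp_of(0xB8))  # 46 (EF)
```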

10. What is meant by traffic conditioning agreement?


An agreement that specifies rules to apply to packets selected by the classifier.
The traffic-conditioning functions performed under a TCA are metering, marking, shaping
and dropping.

11. Define DS boundary node.


A DS node that connects one DS domain to a node in another domain.

12. Define DS interior node.


A node in a DS domain that is not a boundary node is called a DS interior node.

13. Define DS node.


A router that supports DS policies is called a DS node. A host system that uses DS for
its applications is also called a DS node.

14. Write down the two routing mechanism use in ISA.


Routing algorithm: decreases local congestion and reduces delay.
Packet discard: when a packet is discarded, the sending TCP entity backs off,
which reduces the load.

15. List out the ISA components?


Reservation protocol.
Admission control
Management agent.

Routing protocol

16. List out the two principal functionality areas that accomplish forwarding packets in
the router.
Classifier and route selection.
Packet scheduler.

17. Define TSpec.


ISA service for a flow of packets is defined on two levels.
A number of general categories of service are provided, each of which provides
a certain general type of service guarantees.
Within each category, the service for a particular flow is specified by the values
of certain parameters.
Together, these values are referred to as a traffic specification (TSpec)

18. List out the categories of service in ISA.


Guaranteed service
Controlled load service
Best effort service

19. List out the advantages of ISA.[APR/MAY-2010]


Many traffic sources can easily and accurately be defined by a token bucket
scheme.
The token bucket scheme provides a concise description of the load to be
imposed by a flow, enabling the service to determine easily the resource
requirement.
The token bucket scheme provides the input parameters to a policing function.
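Such a policing function can be sketched in a few lines. This is an illustrative token bucket only; the class name, units (octets and seconds), and parameters are our own, not from any standard API:

```python
import time

class TokenBucket:
    """Minimal token-bucket policer: tokens accrue at `rate` per second,
    up to a maximum bucket `depth` (illustrative sketch)."""
    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth
        self.tokens = depth           # bucket starts full
        self.last = time.monotonic()

    def conforms(self, size, now=None):
        """True if a packet of `size` octets conforms; consumes tokens."""
        now = time.monotonic() if now is None else now
        # accumulate tokens at the committed rate, capped at the bucket depth
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False
```

A burst of up to `depth` octets passes immediately, while sustained traffic is limited to `rate`, which is why the scheme describes a source's load so concisely.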

20. Define delay jitter.


The delay jitter is the maximum variation in delay experienced by packets in a single
session.
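Under this definition, a minimal helper (our own naming) simply takes the spread of the observed per-packet delays:

```python
def delay_jitter(delays):
    """Delay jitter as defined above: the maximum variation in delay
    observed over a session's packets (illustrative helper)."""
    return max(delays) - min(delays)

print(delay_jitter([40.0, 42.5, 39.0, 41.0]))  # 3.5 (ms)
```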

21. What is meant by differentiated service?[MAY/JUNE-2012]


It does not attempt to view the total traffic demand in an integrated sense.
It does not reserve network capacity in advance.
It provides different levels of QoS to different traffic flows.

22. What is meant by integrated service?


The IS provider
Views the totality of the current traffic demand.
Limits the demand with respect to the current capacity handled by the network.
Reserves resources within the domain to provide a particular QoS guarantee.
23. Define global synchronization.
Due to packet discard during congestion, many TCP connections enter slow start at
the same time. As a result, the network is unnecessarily underutilized for some time. The
TCP connections that entered slow start then come out of slow start at about the same
time, causing congestion again. This phenomenon is called global synchronization.

24. What are the design goals of RED algorithm?[MAY/JUNE-2013]
Congestion avoidance
Global synchronization avoidance
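A minimal sketch of the RED decision logic illustrates both goals: by dropping probabilistically before the queue is full, different TCP connections back off at different times. The thresholds, weight, and function names here are illustrative choices, not values mandated by the algorithm:

```python
import random

def update_avg(avg, q, w=0.002):
    """Exponentially weighted moving average of the instantaneous queue length q."""
    return (1 - w) * avg + w * q

def red_drop(avg, min_th=5, max_th=15, p_max=0.1):
    """RED drop decision for one arriving packet, given average queue `avg`."""
    if avg < min_th:
        return False                           # queue short: never drop
    if avg >= max_th:
        return True                            # queue long: always drop
    p = p_max * (avg - min_th) / (max_th - min_th)
    return random.random() < p                 # probabilistic drop desynchronizes flows
```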

UNIT-05
PROTOCOLS FOR QOS SUPPORT

PART-A

1. What is meant by soft state in RSVP?[APR/MAY-2015]


RSVP uses a connectionless approach: each intermediate router maintains state information
about the nature of a flow, and that state is refreshed by the end systems at predetermined
intervals. This is called soft state.

2. Define session in RSVP?


Once a reservation is made at a router by a particular destination, the router considers
this a session and allocates resources for the life of that session.
A session is defined by:
Destination IP address
IP protocol identifier
Destination port

3. Define label swapping in MPLS.[NOV/DEC-2012]


The basic operation of looking up an incoming label to determine the outgoing label
and forwarding is called Label Swapping.
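As an illustration, label swapping amounts to a single table lookup per hop. The table, labels, and router names below are invented for the example:

```python
# Hypothetical per-LSR forwarding table: incoming label -> (outgoing label, next hop)
LFIB = {
    17: (42, "LSR-B"),
    42: (99, "LSR-C"),
}

def swap(label):
    """Look up the incoming label; return the rewritten label and next hop."""
    out_label, next_hop = LFIB[label]
    return out_label, next_hop

print(swap(17))  # (42, 'LSR-B')
```

Because forwarding is driven by this exact-match lookup rather than a longest-prefix route lookup, it can be done very fast at each hop.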

4. What are the features of RSVP?[MAY/JUNE-2013]


1. Performs resource reservations for unicast and multicast applications
2. Requests resource in one direction from a sender to a receiver
3. Requires the receiver to initiate and maintain the resource reservation.
4. Maintains soft state at each intermediate router
5. Does not require each router to be RSVP capable
6. Supports both IPv4 and IPv6.

5. Define soft state


When a state is not refreshed within a certain timeout, the state is deleted. The type of
state that is maintained by a timer is called a soft state.

6. What does RTCP provide to the sources?[NOV/DEC-2013]


RTCP provides:
a) Quality of service and congestion control
b) Identification
c) Session size estimation
d) Session control

7. Define the format of the RTP header.

The fixed RTP header fields, in order:

V | P | X | CC | M | PT | Sequence number
Timestamp
Synchronization source identifier (SSRC)
Contributing source identifiers (CSRC), one 32-bit entry per contributor

V: version (2 bits)
P: padding (1 bit)
X: header extension (1 bit)
CC: CSRC count (4 bits)
M: marker (1 bit)
PT: payload type (7 bits)
Sequence number (16 bits)
Timestamp (32 bits)
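Given those field widths, the fixed 12-byte header can be unpacked with straightforward bit operations. A sketch (the sample values packed at the end are arbitrary):

```python
import struct

def parse_rtp_header(pkt: bytes):
    """Decode the fixed 12-byte RTP header (network byte order)."""
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", pkt[:12])
    return {
        "V": b0 >> 6,        # version, 2 bits
        "P": (b0 >> 5) & 1,  # padding, 1 bit
        "X": (b0 >> 4) & 1,  # extension, 1 bit
        "CC": b0 & 0x0F,     # CSRC count, 4 bits
        "M": b1 >> 7,        # marker, 1 bit
        "PT": b1 & 0x7F,     # payload type, 7 bits
        "seq": seq,          # sequence number, 16 bits
        "timestamp": ts,     # 32 bits
        "ssrc": ssrc,        # 32 bits
    }

# version 2, payload type 0, sequence 1, timestamp 160
hdr = parse_rtp_header(struct.pack("!BBHII", 0x80, 0x00, 1, 160, 0xDEADBEEF))
print(hdr["V"], hdr["PT"], hdr["seq"])  # 2 0 1
```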

8.List out the characteristics of MPLS.


MPLS characteristics that ensure its popularity are:
a) Connection-oriented QOS support
b) Traffic engineering
c) Virtual private network(VPN) support
d) Multi protocol support

9. What is Label Stacking?[APR/MAY-2015]


The label stack entries appear after the data link layer header but before the network layer
header. The top of the label stack appears earliest in the packet and the bottom appears latest.
The network layer packet immediately follows the label stack entry that has the S bit set. In a
data link frame, such as for PPP, the label stack therefore appears between the data link header
and the IP header.

10.Define QOS[MAY/JUNE-2012]
It refers to the properties of a network that contribute to the degree of satisfaction that
users perceive, relative to the network's performance.

11.List QOS Parameters.[NOV/DEC-2014]


Capacity, or data rate
Latency, or delay
Jitter
Traffic loss

12 Define RSVP?[MAY/JUNE-2011]
Resource Reservation Protocol was designed as an IP signaling protocol for
the integrated services model. RSVP can be used by a host to request a specific QoS
resource for a particular flow and by a router to provide the requested QoS along the
paths by setting up appropriate states.

13. What is meant by integrated layer processing in RTP?


In TCP/IP, each layer is processed sequentially, whereas in integrated layer
processing, adjacent layers are tightly coupled and function in parallel.

14. What is the function of RTP relays and give its types?
A relay operating at a given protocol layer is an intermediate system that acts as both a
destination and a source in a data transfer.

15. What is the function of mixer and translator in RTP?


Mixer: a mixer receives streams of RTP packets from one or more sources, combines
these streams, and forwards a new RTP packet stream to one or more destinations. It becomes
the synchronization source of the new stream.
Translator: a translator produces one or more outgoing RTP packets for each incoming
packet. It changes the format of the data to suit transfer from one domain to another.

16.What are the resources used by an integrated service model?


Integrated service model requires resources such as bandwidth and buffers to be
explicitly reserved for a given dataflow to ensure that the application receives its requested
QoS

17. What do you mean by guaranteed service?


The guaranteed service in the internet can be used for applications that require real-time
service delivery. For such applications, data delivered to the application after a certain time
is generally considered worthless. Thus guaranteed service has been designed to provide a
firm bound on the end-to-end packet delay for a flow.



UNIVERSITY QUESTION BANK

UNIT-01
HIGH SPEED NETWORKS

PART-A

1. What is ATM?[MAY/JUNE-2012]
2. What are the main features of ATM?
3. What are the layers/plane of BISDN reference model?
4. Define MPLS?
5. What is called frame relay?
6. What are the advantages of DQDB MAC protocol?
7. Define VPI & VCI
8. Mention the High Speed LANs
9. What are the requirements for wireless LANs?[MAY/JUNE-2014]
10. What are the types of Ethernet?
11. Define VPN
12. Define ISDN?
13. What are the features of an ISDN?
14. What are the services of LAPD?
15. Define frame relay.
16. What are the traffic parameters of connection-oriented services?
17. What are the quality service (QoS) parameters of connection-oriented services?
18. Types of delays encountered by cells
19. What is the datalink control functions provided by LAPF?
20. Difference b/w AAL & AAL 3/5
21. What are the principles of ISDN ?
22. Difference b/w Frame relay and X.25 packet switching.[NOV/DEC-2012]
23. Give the neat sketch of ATM Protocol Architecture.
24. Draw the ATM Cell structure or Cell Format.[MAY/JUNE-2014,2013,NOV/DEC-2014]

PART-B
1. Discuss the various ATM service categories.[MAY/JUNE-2015,2013]
2. Explain the ATM Protocol architecture with a neat block diagram.[MAY/JUNE-2015,2013]
3. Explain the Frame Relay Networks with suitable diagram.[MAY/JUNE-2012]
4. Draw IEEE 802.11 architecture and Protocol architecture.[MAY/JUNE2013,NOV/DEC-2013]
5. Discuss the relevance of CSMA/CD in gigabit ethernets.[MAY/JUNE-2012,NOV/DEC-2012]
6.Explain in detail about Fiber Channel.
UNIT-02
CONGESTION AND TRAFFIC MANAGEMENT

PART-A

1. What are the queuing models?[MAY/JUN-2013,APR/MAY-2010]


2. Why does congestion occur in networks?[MAY/JUN-2012]
3. State Kendall's notation.[APR/MAY-2011,NOV/DEC-2013]
4. What is meant by a congestion control technique?
5. Define backward explicit congestion notification (BECN).[NOV/DEC-2012]
6. What is a single-server queue?[MAY/JUN-2014]
7. Define committed burst size (Bc).
8. Define committed information rate (CIR).
9. Define excess burst size (Be).
10. Define access rate.
11. Write Little's formula.[NOV/DEC-2009]
12. List out the model characteristics of queuing models.
13. List out the fundamental tasks of a queuing analysis.
14. List out the assumptions for single-server queues.
15. List out the assumptions for multiserver queues.
16. State Jackson's theorem.
17. Define arrival rate and service rate.
18. How does frame relay report congestion?

PART-B
1. Explain Queuing theory.[APR/MAY-2015]
2. Explain Queuing Analysis and its types.[APR/MAY-2015]
3. Explain Traffic Management In Congestion Control.[MAY/JUNE-2012,NOV/DEC-2012]
4. Explain the Congestion Control Mechanisms.[NOV/DEC-2012]
UNIT-03
TCP AND ATM CONGESTION CONTROL

PART-A

1. Define congestion.
2. Define congestion control.[MAY/JUNE-2014]
3. List out the TCP implementation policy options.
4. List out the three retransmit strategies in TCP traffic control?[MAY/JUNE-2014]
5. Explain about the congestion control in a TCP/IP based internet implementation task.
6. List out retransmission timer management techniques.[NOV/DEC-2010]
7. Write down the window management techniques.[NOV/DEC-2013]
8. Define binary exponential back off.[NOV/DEC-2012]
9. State the condition that must be met for a cell to conform.
10.What are the mechanisms used in ATM traffic control to avoid congestion
condition?[MAY/JUNE-2015]
11. How are timers useful to control congestion in TCP?
12. What is the difference between flow control and congestion control?
13. What are reactive congestion control and preventive congestion control?
14. Why is congestion control difficult to implement in TCP?
15. What are the accept policies used in TCP traffic control?
16. What is meant by silly window syndrome?
17. What is meant by cell insertion time?
18. What are the mechanisms used in TCP to control congestion?
19. What is meant by open loop and closed loop control in ABR mechanism?
20. What is meant by allowed cell rate (ACR)?[APR/MAY-2010]
21. Define Behavior Class Selector (BCS)
22. What is cell delay variation?
23. Why is a retransmission policy essential in TCP?
24. Why is congestion control in a TCP/IP internet complex?
25. Write the relationship b/w throughput & TCP window size W.
26. Define ABR[MAY/JUNE-2013]
27. Define CBR
28. Write the examples for CBR.

PART-B

1. Explain TCP Flow Control.


2. Explain the TCP Congestion Control with neat diagrams.[MAY/JUNE-2013]
3. Explain Retransmission and Timer Management Techniques.[NOV/DEC-2013]
4. Explain five important techniques in window management.
5. Explain Traffic And Congestion Control in ATM and its requirements.[NOV/DEC-2013]
6. Explain the ATM traffic related attributes.[NOV/DEC-2012]
7. Explain in detail ABR traffic management.[MAY/JUNE-2014]
UNIT-04
INTEGRATED AND DIFFERENTIATED SERVICE

PART-A

1. Write down the two different, complementary IETF Standards traffic management
Frameworks?
2. Write down the current traffic demand viewed by the IS provider?
3. Explain about differentiated services?
4. What are the requirements for inelastic traffic?[APR/MAY-2008]
5. Give some applications that come under elastic traffic.[NOV/DEC-2013]
6. State the drawbacks of the FIFO queuing discipline?[APR/MAY-2008]
7. Distinguish between inelastic and elastic traffic?[NOV/DEC-2009]
8. Define the format of DS field?
9. Define DS code point.
10. What is meant by traffic conditioning agreement?
11. Define DS boundary node.
12. Define DS interior node.
13. Define DS node.
14. Write down the two routing mechanisms used in ISA.
15. List out the ISA components?
16. List out the two principal functionality areas that accomplish forwarding packets in
the router.
17. Define TSpec.
18. List out the categories of service in ISA.
19. List out the advantages of ISA.[APR/MAY-2010]
20. Define delay jitter.
21. What is meant by differentiated service?[MAY/JUNE-2012]
22. What is meant by integrated service?
23. Define global synchronization.
24. What are the design goals of RED algorithm?[MAY/JUNE-2013]
PART-B

1. Explain the block diagram for the Integrated Services Architecture, and give details
about its components.[MAY/JUNE-2014,NOV/DEC-2013]
2. Explain the services offered by ISA.[APR/MAY-2015]
3. Explain the various queuing disciplines in ISA .[MAY/JUNE-2013,NOV/DEC-2013,2012]
4. Explain the RED algorithm .[APR/MAY-2015,MAY/JUNE-2013 ,NOV/DEC-2013]
5. Explain Differentiated services briefly.[APR/MAY-2015,MAY/JUNE-2013,2014]
6. Write a short notes on DS per hop behaviour[NOV/DEC-2013].
UNIT-05
PROTOCOLS FOR QOS SUPPORT

PART-A

1. What is meant by soft state in RSVP?[APR/MAY-2015]


2. Define session in RSVP?
3. Define label swapping in MPLS.[NOV/DEC-2012]
4. What are the features of RSVP?[MAY/JUNE-2013]
5. Define soft state
6. What does RTCP provide to the sources?[NOV/DEC-2013]
7. Define the format of the RTP header.
8.List out the characteristics of MPLS.
9. What is Label Stacking?[APR/MAY-2015]
10.Define QOS[MAY/JUNE-2012]
11.List QOS Parameters.[NOV/DEC-2014]
12 Define RSVP?[MAY/JUNE-2011]
13. What is meant by integrated layer processing in RTP?
14. What is the function of RTP relays and give its types?
15. What is the function of mixer and translator in RTP?
16.What are the resources used by an integrated service model?
17. What do you mean by guaranteed service?

PART-B
1. Explain the characteristics and goals of RSVP & the types of data flow.[APR/MAY-2014,2015]
2. Explain the reservation style of the RSVP in detail.[NOV/DEC-2013,2012]
3. Explain the RSVP protocol operation and Mechanisms.[MAY/JUNE-2014]
4. Explain the MPLS architecture in detail[MAY/JUNE-2015,2013,NOV/DEC-2012]
5. Explain the RTP protocol architecture.[MAY/JUNE-2013,2015,NOV/DEC-2012]
6. Explain the RTP data transfer protocol.[NOV/DEC-2012]

Anna University Of Technology , Chennai

B.E/B.TECH DEGREE EXAMINATION , MAY/JUNE 2012

Seventh Semester

Electronics and Communication Engineering

CS 2060/CS 807/EC1009 HIGH SPEED NETWORKS

(Regulation 2008)

Time : Three hours Maximum : 100 marks

Answer ALL questions

Part A (10 X 2 = 20 marks)

1. Define asynchronous transfer mode.


2. List the functions provided by AAL layer.
3. What are the advantages of packet over circuit switching?
4. Why congestion occurs in the networks?
5. What are the types of traffic management?
6. Define exponential RTO back off.
7. Define random early detection.
8. What is meant by FQ?
9. What are the goals of RSVP?
10. Define QOS and give any of its 2 parameters.

PART B (5 X 16 = 80 marks)

11. (a) (i) Explain about frame relay networks in detail with suitable diagram. (8)

(ii) Explain in detail about fibre channel networks. (8)

(Or)

(b) (i) Describe in detail about Wifi and WiMax network application and
requirements. (8)

(ii) Explain about Gigabit Ethernet in detail with neat diagram. (8)

12. (a) (i) Explain in detail about frame relay congestion control technique. (8)

(ii) Explain about traffic management in packet switching. (8)

(Or)

(b) (i) Explain in detail about single server queues and its application. (8)


(ii) Describe about effect of congestion. (8)

13. (a) (i) Explain in detail about Karn's algorithm and window management. (8)

(ii) Explain about network management in detail with neat sketch. (8)

(Or)

(b) (i) Explain in detail about clock instability and jitter measurements. (10)

(ii) Explain about traffic management framework in detail. (6)

14. (a) Explain in detail about queuing disciplines : BRFQ, WFQ, GPS, and PS.

(Or)

(b) Explain about integrated services architecture and differentiated services in
detail with a neat diagram.

15. (a) Explain in detail about RTCP architecture and RIP protocol details.

(Or)

(b) Discuss about protocols used for QOS support with neat diagram.

----------------------------------


Reg.No. :
Question Paper Code : 21293

B.E./B.TECH. DEGREE EXAMINATION , MAY/JUNE 2013


Seventh Semester
Electronics and Communication Engineering
CS 2060 / CS 807 / EC 1009 HIGH SPEED NETWORKS
(Common to Eighth Semester Computer Science and Engineering)
(Regulation 2008)
(Common to PTCS 2060 High Speed Networks for B.E. (Part Time) Seventh
Semester ECE Regulation 2009)

Time : 3 hours Maximum : 100 marks

Answer ALL questions


PART A (10 X 2 = 20 marks)
1. Give a few examples of high speed networks.
2. Draw the ATM cell structure.
3. What is meant by the term 'congestion' in networks?
4. What are the types of queuing models?
5. What is exponential RTO backoff?
6. Define ABR and GFR.
7. Compare Integrated Services architecture and Differentiated Services
architecture.
8. What is the significance of the Random Early Detection technique?

9.What are the goals of RSVP?


10.List the main functions of RTP and RTCP?
PART B (16 X 5 = 80 marks)
11. (a) (i) Explain ATM protocol architecture with a neat diagram. (8)
(ii) Briefly explain ATM service categories. (8)

(Or)
(b) (i) Explain in detail about 802.11 architecture. (10)
(ii) Write short notes on:
(a) Wireless LANs.
(b) Wi-Fi networks.
(c) Wi-Max networks. (6)

12. (a) (i) Explain the Single Server Queuing model in detail. (10)
(ii) Discuss briefly the effects of congestion in networks. (6)
(Or)
(b) Write notes on congestion control used in :
(i) Packet Switching Networks.
(ii) Frame Relay Networks.

13. (a) (i) Explain TCP Congestion control in detail. (10)


(ii) Discuss Karn's algorithm. (6)
(Or)
(b) (i) Explain ABR Traffic management in detail. (8)
(ii) Explain GFR Traffic management in detail. (8)

14. (a) (i) Briefly discuss the various queuing disciplines of integrated
services. (10)

(ii) Discuss the advantages and downsides of integrated services
architecture. (6)
(Or)
(b) (i) Explain differentiated services architecture in detail. (10)
(ii) Explain the benefits of Random Early detection algorithm. (6)
15. (a) Explain the Following :
(i) RSVP. (10)
(ii) Multiprotocol label switching mechanism. (6)
(Or)
(b) Explain the following :
(i) RTP. (10)
(ii) RTCP. (6)

-------------------------------


Reg No :

Question Paper Code : 11263

B.E./B.Tech. DEGREE EXAMINATION , NOVEMBER/DECEMBER 2012

Seventh Semester

Electronics and Communication Engineering

CS 2060 / CS 807 / EC 1009 HIGH SPEED NETWORKS

(Common to Eighth Semester Computer Science and Engineering)

(Regulation 2008)

(Common to PTCS High Speed Networks For B.E. (Part Time) Seventh Semester
Electronics and Communication Engineering (Regulation 2009))

Time : Three hours Maximum : 100 marks

Answer ALL questions

PART A (10 X 2 = 20 marks)

1. Differentiate between frame relaying and X.25 packet switching service.


2. State the data link control functions provided by LAPF protocol.
3. List and explain the parameters for a single server queue.
4. What is meant by BECN?
5. State the mechanisms for supporting rate guarantees in GFR traffic.
6. What is meant by exponential RTO back off?
7. Give some applications that follow elastic traffic.
8. State the performance parameters that should be in the SLA for a DS document.
9. What is meant by soft state?
10. Explain label stacking in MPLS network.

PART B (5 X 16 = 80 marks)

11. (a) (i) Explain the operation of AAL 1 and AAL with an example. (8)

(ii) Explain the working of an ATM error control algorithm. (8)

(Or)

(b) (i) Illustrate why CSMA/CD is not suitable for wireless LANs. (8)

(ii) Draw the 802.11 protocol stack and discuss the functions of PCF and DCF. (8)


12. (a) (i) Explain in detail the following congestion control techniques.
(1) Back pressure. (4)
(2) Choke packet. (4)
(3) Explicit congestion signalling. (4)

(ii) Explain Kendall's notation in detail. (4)

(Or)

(b) (i) Explain the single server queuing model and its applications. (8)

(ii) Explain about traffic rate management in frame relay networks. (8)

13. (a) (i) Explain about TCP window management in detail. (8)

(ii) Explain the RTT variance estimation using Jacobson's algorithm in detail. (8)

(Or)

(b) (i) List and explain the ATM traffic parameter in detail. (8)

(ii) Explain the ATM ABR traffic management in detail. (8)

14. (a) (i) Explain the way in which ISA manages congestion and provides QOS
transport. (8)

(ii) Explain the bit-round fair queuing technique in detail. (8)

(Or)

(b) Explain the differentiated services operation and the traffic conditioning functions
in detail.

15. (a) (i) List and explain the three RSVP reservation styles in detail. (9)

(ii) Explain the MPLS operation in detail with a diagram. (7)

(Or)

(b) (i) Explain the RTP data transfer protocol architecture in detail. (8)

(ii) Explain the functions performed by the RTP control protocol and its packet types in
detail.


Reg.no :

Question paper Code : 31293

B.E./B.Tech. DEGREE EXAMINATION , NOVEMBER/DECEMBER


2013

Seventh Semester

Electronics and Communication Engineering

CS 2060/CS 807/EC 1009/10144 ECE 33 HIGH SPEED NETWORKS

(Common to Eighth Semester Computer Science and Engineering)

(Regulation 2008/2010)

(Also Common to PTCS 2060 High Speed Networks for B.E. (Part-Time)
Seventh Semester Electronics and Communication Engineering
Regulation 2009)

Time : Three hours Maximum : 100 marks

Answer ALL questions

PART A (10 X 2 = 20 marks)

1. State the advantages of frame relay.


2. Is CSMA/CD used in gigabit LANS? Justify.
3. What is meant by Kendall's notation?
4. Mention the congestion control techniques used in packet switching
networks.
5. Define peak cell rate.
6. List the TCP window management techniques.
7. State the characteristics of elastic traffic.
8. What is meant by controlled load service?
9. What is the need for RTCP?
10.What is meant by a flow descriptor?

PART B (5 X 16 = 80 marks)

11. (a) (i) Explain the call control procedure in frame relay networks. (8)

(ii) Explain the various ATM service categories in detail. (8)

(Or)

(b) Explain the IEEE 802.11 architecture in detail. Illustrate the functions
and combined operation of the various protocols in the MAC sublayer. (16)

12. (a) (i) Explain with an example the implementation of single server
queues. (8)

(ii) Explain in detail about Jackson's theorem. (8)

(Or)

(b) (i) Explain the effects of congestion in packet switching networks. (8)

(ii) Explain how congestion avoidance is done in frame relay
networks. (8)

13. (a) (i) Explain the TCP timer management techniques in detail. (8)
(ii) Discuss in detail about the congestion control techniques followed
in ATM networks. (8)
(Or)
(b) (i) Explain in detail about ABR capacity allocation. (8)
(ii) Discuss in detail about ABR traffic control. (8)

14. (a) (i) Draw the Integrated service architecture and explain it in detail.
(10)
(ii) Explain the fair queuing in detail. (6)
(Or)
(b) (i) Explain in detail the way in which the RED technique overcomes
congestion. (8)
(ii) Write a note on the DS per-hop behaviour. (8)

15. (a) (i) Explain the reservation styles of the RSVP in detail. (8)

(ii) Explain the features of MPLS. (8)

(Or)

(b) (i) Explain the RTP protocol architecture in detail. (8)

(ii) Explain the functions and message types of the RTP control
protocol. (8)

--------------------------------