
Network Models

Module I (10 Hours)


Introduction – Data Communication, Networks,
Internet, Intranet, Protocols, Network Models,
Addressing.
Physical Layer – Signals, Analog, Digital, Analog vs
Digital, Transmission impairment, Data Rate Limits,
Performance, Transmission Modes. Synchronous
TDM. Transmission Media – Guided and Unguided.
Switching – Circuit-Switched Networks, Datagram
networks, Virtual circuit networks, structure of
switch. Concepts of DSL and ADSL.
LAYERED TASKS

We use the concept of layers in our daily life. As an


example, let us consider two friends who communicate
through postal mail. The process of sending a letter to a
friend would be complex if there were no services
available from the post office.
Figure 2.1 Tasks involved in sending a letter
THE OSI MODEL
Established in 1947, the International Standards
Organization (ISO) is a multinational body dedicated to
worldwide agreement on international standards. An ISO
standard that covers all aspects of network
communications is the Open Systems Interconnection
(OSI) model. It was first introduced in the late 1970s.
Seven layers of the OSI model
The interaction between layers in the OSI model
An exchange using the OSI model
Physical layer
• The physical layer is responsible for movements of
individual bits from one hop (node) to the next.
• It deals with the mechanical and electrical specifications
of the interface and transmission medium.
• The physical layer defines the characteristics of the
interface between the devices and the transmission
medium. It also defines the type of transmission medium.
• Representation of bits: Bits must be encoded into signals, electrical or optical. The physical layer defines the type of encoding (how 0s and 1s are changed to signals).
• Data rate. The transmission rate, the number of bits sent each second, is also defined by the physical layer.
• Synchronization of bits. The sender and receiver not only
must use the same bit rate but also must be synchronized
at the bit level.
• Line configuration. The physical layer is concerned with
the connection of devices to the media. In a point-to-point
configuration, two devices are connected through a
dedicated link. In a multipoint configuration, a link is
shared among several devices.
• Physical topology. The physical topology defines how
devices are connected to make a network.
• Transmission mode. The physical layer also defines the
direction of transmission between two devices: simplex,
half-duplex, or full-duplex.
Data link layer
The data link layer is responsible for moving
frames from one hop (node) to the next.
• Framing. The data link layer divides the stream of bits received from
the network layer into manageable data units called frames.
• Physical addressing. If frames are to be distributed to different systems
on the network, the data link layer adds a header to the frame to define
the sender and/or receiver of the frame.
• Flow control. If the rate at which the data are absorbed by the receiver
is less than the rate at which data are produced in the sender, the data
link layer imposes a flow control mechanism to avoid overwhelming the
receiver.
• Error control. Error control is normally achieved through a trailer
added to the end of the frame.
• Access control. When two or more devices are connected to the same
link, data link layer protocols are necessary to determine which device
has control over the link at any given time.
Hop-to-hop delivery
Network layer
The network layer is responsible for the
delivery of individual packets from
the source host to the destination host.
• If two systems are connected to the same link, there is usually no
need for a network layer. However, if the two systems are attached to
different networks (links) with connecting devices between the
networks (links), there is often a need for the network layer to
accomplish source-to-destination delivery.
• Logical addressing. The physical addressing implemented by the data
link layer handles the addressing problem locally. If a packet passes
the network boundary, we need another addressing system to help
distinguish the source and destination systems.
• Routing. When independent networks or links are connected to create internetworks (networks of networks) or a large network, the connecting devices (called routers or switches) route or switch the packets to their final destination. One of the functions of the network layer is to provide this mechanism.

Source-to-destination delivery
Transport layer
The transport layer is responsible for the delivery
of a message from one process to another. A process is an
application program running on a host. Whereas the
network layer oversees source-to-destination delivery of
individual packets, it does not recognize any relationship
between those packets.

• The transport layer, on the other hand, ensures that the whole
message arrives intact and in order, overseeing both error control
and flow control at the source-to-destination level.
• Service-point addressing. The network layer gets each packet to the
correct computer; the transport layer gets the entire message to the
correct process on that computer.
• Segmentation and reassembly.
• Connection control. The transport layer can be either connectionless
or connection oriented. A connectionless transport layer treats each
segment as an independent packet and delivers it to the transport
layer at the destination machine. A connection oriented transport
layer makes a connection with the transport layer at the destination
machine first before delivering the packets.
• Flow control. Like the data link layer, the transport layer is
responsible for flow control. However, flow control at this layer is
performed end to end rather than across a single link.
• Error control. Like the data link layer, the transport layer is responsible for error control. However, error control at this layer is performed process-to-process rather than across a single link.
Reliable process-to-process delivery of a message
Session layer
The session layer is responsible for dialog
control and synchronization.
• Dialog control. The session layer allows two systems to
enter into a dialog.
• Synchronization. The session layer allows a process to
add checkpoints, or synchronization points, to a stream of
data.
Presentation layer

The presentation layer is responsible for translation (Because
different computers use different encoding systems, the
presentation layer is responsible for interoperability between
these different encoding methods), compression, and
encryption.

Application layer
The application layer is responsible for
providing services to the user.

• Network virtual terminal. A network virtual terminal is a


software version of a physical terminal, and it allows a
user to log on to a remote host.
• File transfer, access, and management.
• Mail services. This application provides the basis for e-
mail forwarding and storage.
• Directory services. This application provides distributed
database sources and access for global information about
various objects and services
Summary of layers
Addressing
Figure 2.18 Relationship of layers and addresses in TCP/IP
Example 2.1

In Figure 2.19 a node with physical address 10 sends a


frame to a node with physical address 87. The two nodes
are connected by a link (bus topology LAN). As the
figure shows, the computer with physical address 10 is
the sender, and the computer with physical address 87 is
the receiver.
Figure 2.19 Physical addresses
Example 2.2

Most local-area networks use a 48-bit (6-byte) physical


address written as 12 hexadecimal digits; every byte (2
hexadecimal digits) is separated by a colon, as shown
below:

07:01:02:01:2C:4B

A 6-byte (12 hexadecimal digits) physical address.
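As a quick illustration, the short sketch below formats the 6 bytes of the address above into the colon-separated hexadecimal notation used in Example 2.2. The helper name format_mac is ours, not part of any standard.

```python
# Format a 48-bit (6-byte) physical address as colon-separated hex,
# matching the notation in Example 2.2.
def format_mac(address_bytes: bytes) -> str:
    return ":".join(f"{b:02X}" for b in address_bytes)

print(format_mac(bytes([0x07, 0x01, 0x02, 0x01, 0x2C, 0x4B])))  # 07:01:02:01:2C:4B
```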


Example 2.3

Figure 2.20 shows a part of an internet with two routers


connecting three LANs. Each device (computer or
router) has a pair of addresses (logical and physical) for
each connection. In this case, each computer is
connected to only one link and therefore has only one
pair of addresses. Each router, however, is connected to
three networks (only two are shown in the figure). So
each router has three pairs of addresses, one for each
connection.
Figure 2.20 IP addresses
Example 2.4

Figure 2.21 shows two computers communicating via the


Internet. The sending computer is running three
processes at this time with port addresses a, b, and c. The
receiving computer is running two processes at this time
with port addresses j and k. Process a in the sending
computer needs to communicate with process j in the
receiving computer. Note that although physical
addresses change from hop to hop, logical and port
addresses remain the same from the source to
destination.
Figure 2.21 Port addresses
Note

The physical addresses will change from hop to hop,


but the logical addresses usually remain the same.
Example 2.5

A port address is a 16-bit address represented by one


decimal number as shown.

753

A 16-bit port address represented


as one single number.
Transmission impairment
Figure 3.26 Attenuation
Example

Suppose a signal travels through a transmission medium and its power is reduced to one-half. This means that P2 is (1/2)P1. In this case, the attenuation (loss of power) can be calculated as

10 log10(P2/P1) = 10 log10(0.5) ≈ −3 dB

A loss of 3 dB (−3 dB) is equivalent to losing one-half the power.


Distortion
 Means that the signal changes its form
or shape
 Distortion occurs in composite signals
 Each frequency component has its own
propagation speed traveling through a
medium.
 The different components therefore
arrive with different delays at the
receiver.
 That means that the signals have
different phases at the receiver than
they did at the source.
Distortion
Noise
 There are different types of noise
 Thermal - random noise of electrons
in the wire creates an extra signal
 Induced - from motors and appliances; these devices act as a transmitting antenna and the transmission medium acts as the receiving antenna.
 Crosstalk - the same effect as above, but between two adjacent wires.
 Impulse - spikes that result from power lines, lightning, etc.
Noise
Signal to Noise Ratio
(SNR)
 To measure the quality of a system, the SNR is often used. It indicates the strength of the signal with respect to the noise power in the system.
 It is the ratio between two powers.
 It is usually given in dB and referred to as SNRdB.
Example

The power of a signal is 10 mW and the power of the noise is 1 μW; what are the
values of SNR and SNRdB ?

Solution
The values of SNR and SNRdB can be calculated as follows:

SNR = (10,000 μW)/(1 μW) = 10,000
SNRdB = 10 log10 10,000 = 40
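A minimal check of this arithmetic (variable names are ours; powers are assumed to be in watts):

```python
import math

signal_power = 10e-3   # 10 mW, from the example
noise_power = 1e-6     # 1 uW

snr = signal_power / noise_power       # 10000.0
snr_db = 10 * math.log10(snr)          # 40.0 dB
print(snr, snr_db)
```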
Example

The values of SNR and SNRdB for a noiseless channel are

SNR = (signal power)/0 = ∞
SNRdB = 10 log10 ∞ = ∞

We can never achieve this ratio in real life; it is an ideal.


Two cases of SNR: a high SNR and a low SNR
DATA RATE LIMITS

A very important consideration in data communications


is how fast we can send data, in bits per second, over a
channel. Data rate depends on three factors:
1. The bandwidth available
2. The level of the signals we use
3. The quality of the channel (the level of noise)
Increasing the levels of a signal increases the probability of an error occurring; in other words, it reduces the reliability of the system. Why??
Capacity of a System
 The bit rate of a system increases with an increase in the number of signal levels we use to denote a symbol.
 A symbol can consist of a single bit or n bits.
 The number of signal levels L = 2^n.
 As the number of levels goes up, the spacing between levels decreases, increasing the probability of an error occurring in the presence of transmission impairments.
Nyquist Theorem
 Nyquist gives the upper bound for the bit rate
of a transmission system by calculating the
bit rate directly from the number of bits in a
symbol (or signal levels) and the bandwidth
of the system
 Nyquist theorem states that for a noiseless channel:

C = 2 × B × log2 L

where C is the capacity in bps, B is the bandwidth in Hz, and L = 2^n is the number of signal levels (n bits per symbol).
Example

Consider a noiseless channel with a bandwidth of 3000 Hz transmitting a signal with two signal levels. The maximum bit rate can be calculated as

C = 2 × 3000 × log2 2 = 6000 bps
Example

Consider the same noiseless channel transmitting a signal with four signal levels (for each level, we send 2 bits). The maximum bit rate can be calculated as

C = 2 × 3000 × log2 4 = 12,000 bps
Example

We need to send 265 kbps over a noiseless channel with a bandwidth of 20 kHz. How
many signal levels do we need?
Solution
We can use the Nyquist formula as shown:

265,000 = 2 × 20,000 × log2 L → log2 L = 6.625 → L = 2^6.625 ≈ 98.7 levels

Since this result is not a power of 2, we need to either increase the number of levels or reduce the bit rate. If we have 128 levels, the bit rate is 280 kbps. If we have 64 levels, the bit rate is 240 kbps.
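A small sketch of the Nyquist calculation for this example (the function name is ours):

```python
import math

def nyquist_bit_rate(bandwidth_hz: float, levels: int) -> float:
    """Noiseless-channel bound: C = 2 * B * log2(L)."""
    return 2 * bandwidth_hz * math.log2(levels)

# 265 kbps over a 20 kHz channel: solve for the number of levels L
levels_needed = 2 ** (265_000 / (2 * 20_000))   # ~98.7, not a power of 2
print(levels_needed)
print(nyquist_bit_rate(20_000, 128))  # 280000.0 bps with 128 levels
print(nyquist_bit_rate(20_000, 64))   # 240000.0 bps with 64 levels
```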
Shannon’s Theorem

 Shannon’s theorem gives the capacity


of a system in the presence of noise.

C = B log2(1 + SNR)
Example

Consider an extremely noisy channel in which the value of the signal-to-noise ratio is almost zero. In other words, the noise is so strong that the signal is faint. For this channel the capacity C is calculated as

C = B log2(1 + SNR) = B log2(1 + 0) = B log2 1 = B × 0 = 0

This means that the capacity of this channel is zero regardless of the bandwidth. In
other words, we cannot receive any data through this channel.
Example

We can calculate the theoretical highest bit rate of a regular telephone line. A telephone line normally has a bandwidth of 3000 Hz. The signal-to-noise ratio is usually 3162. For this channel the capacity is calculated as

C = B log2(1 + SNR) = 3000 log2(1 + 3162) = 3000 × 11.62 ≈ 34,860 bps

This means that the highest bit rate for a telephone line is about 34.86 kbps. If we want to send data faster than this, we can either increase the bandwidth of the line or improve the signal-to-noise ratio.
Example

The signal-to-noise ratio is often given in decibels. Assume that SNRdB = 36 and the channel bandwidth is 2 MHz. The theoretical channel capacity can be calculated as

SNR = 10^(SNRdB/10) = 10^3.6 ≈ 3981
C = B log2(1 + SNR) = 2 × 10^6 × log2(3982) ≈ 24 Mbps
Example

We have a channel with a 1-MHz bandwidth. The SNR for this channel is 63. What
are the appropriate bit rate and signal level?

Solution
First, we use the Shannon formula to find the upper limit.

C = B log2(1 + SNR) = 10^6 log2(1 + 63) = 10^6 log2 64 = 6 Mbps

The Shannon formula gives us 6 Mbps, the upper limit. For better performance we choose something lower, 4 Mbps, for example. Then we use the Nyquist formula to find the number of signal levels.

4 Mbps = 2 × 1 MHz × log2 L → log2 L = 2 → L = 4 signal levels
The Shannon capacity gives us the
upper limit; the Nyquist formula tells us
how many signal levels we need.
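A rough sketch combining the two formulas for the example above (function names are ours):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr: float) -> float:
    """Upper limit: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + snr)

def nyquist_levels(bandwidth_hz: float, bit_rate: float) -> float:
    """Solve C = 2 * B * log2(L) for the number of levels L."""
    return 2 ** (bit_rate / (2 * bandwidth_hz))

bandwidth = 1_000_000                              # 1 MHz, from the example
upper_limit = shannon_capacity(bandwidth, 63)      # 6,000,000 bps
chosen_rate = 4_000_000                            # something below the limit
print(upper_limit, nyquist_levels(bandwidth, chosen_rate))  # 6000000.0 4.0
```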

PERFORMANCE

One important issue in networking is the


performance of the network—how good is it?
We discuss quality of service, an overall
measurement of network performance, in
greater detail in Chapter 24. In this section, we
introduce terms that we need for future
chapters.
In networking, we use the term
bandwidth in two contexts.
 The first, bandwidth in hertz, refers to the range of frequencies in a
composite signal or the range of frequencies that a channel can
pass.
 The second, bandwidth in bits per second, refers to the speed of bit
transmission in a channel or link. Often referred to as Capacity.
Example

1. The bandwidth of a subscriber line is 4 kHz for voice or data. The bandwidth of
this line for data transmission
can be up to 56,000 bps using a sophisticated modem to change the digital signal to
analog.

2. If the telephone company improves the quality of the line and increases the bandwidth to 8 kHz, we can send 112,000 bps by using the same technology as in part 1.
Example

A network with bandwidth of 10 Mbps can pass only an average of 12,000 frames per
minute with each frame carrying an average of 10,000 bits. What is the throughput of
this network?

Solution
We can calculate the throughput as

Throughput = (12,000 × 10,000) / 60 = 2 Mbps

The throughput is almost one-fifth of the bandwidth in this case.


Propagation & Transmission
delay
 Propagation speed - the speed at which a bit travels through the medium from source to destination.
 Transmission speed - the speed at which all the bits in a message arrive at the destination (the difference in arrival time of the first and last bit).
Propagation and Transmission
Delay
 Propagation Delay = Distance / Propagation speed

 Transmission Delay = Message size / Bandwidth (bps)

 Latency = Propagation delay + Transmission delay + Queueing time + Processing time
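A short sketch of these formulas, using values that appear in the examples that follow (queueing and processing times are assumed to be zero here):

```python
def propagation_delay(distance_m: float, speed_mps: float) -> float:
    return distance_m / speed_mps

def transmission_delay(message_bits: float, bandwidth_bps: float) -> float:
    return message_bits / bandwidth_bps

prop = propagation_delay(12_000_000, 2.4e8)    # 0.05 s = 50 ms
trans = transmission_delay(2_500 * 8, 1e9)     # 2e-05 s = 20 microseconds
latency = prop + trans                         # queueing/processing taken as 0
print(prop, trans, latency)
```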
Example

What is the propagation time if the distance between the two points is 12,000 km?
Assume the propagation speed to be 2.4 × 108 m/s in cable.

Solution
We can calculate the propagation time as

Propagation time = (12,000 × 1000) / (2.4 × 10^8) = 0.05 s = 50 ms

The example shows that a bit can go over the Atlantic Ocean in only 50 ms if there is a direct cable between the source and the destination.

Example

What are the propagation time and the transmission time for a 2.5-kbyte message (an
e-mail) if the bandwidth of the network is 1 Gbps? Assume that the distance between
the sender and the receiver is 12,000 km and that light travels at 2.4 × 108 m/s.

Solution
We can calculate the propagation and transmission times as follows:

Propagation time = (12,000 × 1000) / (2.4 × 10^8) = 50 ms
Transmission time = (2500 × 8) / 10^9 = 0.020 ms

Note that in this case, because the message is short and the bandwidth is high, the dominant factor is the propagation time, not the transmission time. The transmission time can be ignored.
Example

What are the propagation time and the transmission time for a 5-Mbyte message (an
image) if the bandwidth of the network is 1 Mbps? Assume that the distance between
the sender and the receiver is 12,000 km and that light travels at 2.4 × 108 m/s.

Solution
We can calculate the propagation and transmission times as follows:

Propagation time = (12,000 × 1000) / (2.4 × 10^8) = 50 ms
Transmission time = (5,000,000 × 8) / 10^6 = 40 s

Note that in this case, because the message is very long and the bandwidth is not very high, the dominant factor is the transmission time, not the propagation time. The propagation time can be ignored.
Transmission medium and physical layer
Classes of transmission media
GUIDED MEDIA

Guided media, which are those that provide a conduit


from one device to another, include twisted-pair cable,
coaxial cable, and fiber-optic cable.
Twisted-pair cable
Unshielded twisted-pair (UTP) and shielded twisted-pair (STP) cables
Table 7.1 Categories of unshielded twisted-pair cables

Coaxial cable
Categories of coaxial cables
Fiber optics: Bending of light ray
Optical fiber
Propagation modes
Modes
UNGUIDED MEDIA: WIRELESS

Unguided media transport electromagnetic waves


without using a physical conductor. This type of
communication is often referred to as wireless
communication.

Radio Waves
Microwaves
Infrared
Electromagnetic spectrum for wireless communication
Propagation methods
Bands
Wireless transmission waves
Radio waves are used for multicast communications, such as radio and television, and paging systems. They can penetrate walls. They are highly regulated and use omnidirectional antennas.
Microwaves are used for unicast communication such as cellular telephones, satellite networks, and wireless LANs. Higher frequency ranges cannot penetrate walls. They use directional antennas for point-to-point, line-of-sight communication.

Infrared signals can be used for short-range communication in a closed area
using line-of-sight propagation.

Wireless Channels
 Are subject to many more errors than guided media channels.
 Interference is one cause of errors; it can be circumvented with a high SNR.
 The higher the SNR required, the less capacity is available for transmission, due to the broadcast nature of the channel.
 Channels are also subject to fading and coverage holes.
Switching
CIRCUIT-SWITCHED NETWORKS

A circuit-switched network consists of a set of


switches connected by physical links. A connection
between two stations is a dedicated path made of
one or more links. However, each connection uses
only one dedicated channel on each link. Each link
is normally divided into n channels by using FDM or
TDM.
A circuit-switched network is made of a set of switches
connected by physical links, in which each link is divided into n
channels.
We need to emphasize several points here:
• Circuit switching takes place at the physical layer.
• Before starting communication, the stations must make a reservation for the resources to be used during the communication. These resources, such as channels (bandwidth in FDM and time slots in TDM), switch buffers, switch processing time, and switch input/output ports, must remain dedicated during the entire duration of data transfer until the teardown phase.
• Data transferred between the two stations are not packetized (physical layer transfer of the signal). The data are a continuous flow sent by the source station and received by the destination station, although there may be periods of silence.
• There is no addressing involved during data transfer. Of course, there is end-to-end addressing used during the setup phase, as we will see shortly.
Three Phases
Setup Phase: connection setup means creating dedicated channels
between the switches.

When system A needs to connect to system M, it sends a setup request that includes the address of system M to switch I. Switch I finds a channel between itself and switch IV that can be dedicated for this purpose. Switch I then sends the request to switch IV, which finds a dedicated channel between itself and switch III. Switch III informs system M of system A's intention at this time.
In the next step of making the connection, an acknowledgment from system M needs to be sent in the opposite direction to system A. Only after system A receives this acknowledgment is the connection established.
Three Phases

Data Transfer Phase: After the establishment of


the dedicated circuit (channels), the two parties
can transfer data.

Teardown Phase: When one of the parties needs


to disconnect, a signal is sent to each switch to
release the resources.
Example
Let us use a circuit-switched network to connect eight telephones in a small
area. Communication is through 4-kHz voice channels. We assume that each
link uses FDM to connect a maximum of two voice channels. The bandwidth of
each link is then 8 kHz. Figure 8.4 shows the situation. Telephone 1 is
connected to telephone 7; 2 to 5; 3 to 8; and 4 to 6. Of course the situation
may change when new connections are made. The switch controls the
connections.
Delay

• Although a circuit-switched network normally has low efficiency, the delay in this
type of network is minimal.
• During data transfer the data are not delayed at each switch.
• The total delay is due to the time needed to create the connection, transfer data,
and disconnect the circuit.
• Delay caused by the setup is the sum of four parts: the propagation time of the source computer request, the request signal transfer time, the propagation time of the acknowledgment from the destination computer, and the signal transfer time of the acknowledgment.
DATAGRAM NETWORKS
 If the message is going to pass through a packet-switched network, it
needs to be divided into packets of fixed or variable size. The size of the
packet is determined by the network and the governing protocol.
 In packet switching, there is no resource allocation for a packet.
 Resources are allocated on demand.
 The allocation is done on a first-come, first-served basis. When a switch receives a packet, no matter what the source or destination is, the packet must wait if there are other packets being processed.
 In a datagram network, each packet is treated independently of all others.
 Datagram switching is normally done at the network layer.
 The datagram networks are sometimes referred to as connectionless
networks. The term connectionless here means that the switch (packet
switch) does not keep information about the connection state.
 There are no setup or teardown phases.
 Each packet is treated the same by a switch regardless of its source or
destination.
A datagram network with four switches (routers)
Routing Table
 If there are no setup or teardown phases, how are the packets routed to
their destinations in a datagram network?
 In this type of network, each switch (or packet switch) has a routing table
which is based on the destination address.
 The routing tables are dynamic and are updated periodically.
 The destination addresses and the corresponding forwarding output ports
are recorded in the tables.
Destination Address
 Every packet in a datagram network carries a
header that contains, among other information,
the destination address of the packet.
 When the switch receives the packet, this
destination address is examined; the routing
table is consulted to find the corresponding port
through which the packet should be forwarded.
 This address, unlike the address in a virtual-
circuit-switched network, remains the same
during the entire journey of the packet.
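A minimal sketch of this idea: the switch keeps a table from destination address to output port and looks up each packet independently. The addresses and port numbers here are made up for illustration.

```python
# Hypothetical routing table: destination address -> output port
routing_table = {
    "1111": 1,
    "2222": 2,
    "3333": 3,
    "4444": 4,
}

def forward(destination_address: str) -> int:
    """Each packet is looked up independently; no connection state is kept."""
    return routing_table[destination_address]

print(forward("2222"))   # this packet leaves through port 2
```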
Delay
 There may be greater delay in a datagram network than in a virtual-circuit
network.
 Although there are no setup and teardown phases, each packet may
experience a wait at a switch before it is forwarded.
 Since not all packets in a message necessarily travel through the same
switches, the delay is not uniform for the packets of a message
VIRTUAL-CIRCUIT NETWORKS

A virtual-circuit network is a cross between a circuit-switched network and a datagram network. It has some characteristics of both.
• As in a circuit-switched network, there are setup and teardown phases in addition to the data transfer phase.
• Resources can be allocated during the setup phase, as in a circuit-switched network, or on demand, as in a datagram network.
• As in a datagram network, data are packetized and each packet carries an address in the header. However, the address in the header has local jurisdiction, not end-to-end jurisdiction.
• As in a circuit-switched network, all packets follow the same path established during the connection.
• A virtual-circuit network is normally implemented in the data link layer, while a circuit-switched network is implemented in the physical layer and a datagram network in the network layer.
Virtual-circuit network
Addressing
In a virtual-circuit network, two types of addressing are involved: global and local
(virtual-circuit identifier).
Global Addressing:
A source or a destination needs to have a global address-an address that can be
unique in the scope of the network or internationally if the network is part of an
international network.
Virtual-Circuit Identifier:
• The identifier that is actually used for data transfer is called the virtual-circuit identifier (VCI).
• A VCI, unlike a global address, is a small number that has only switch scope; it is used by a frame between two switches.
• When a frame arrives at a switch, it has a VCI; when it leaves, it has a different VCI.
• Note that a VCI does not need to be a large number since each switch can use its own unique set of VCIs.
Three Phases
 As in a circuit-switched network, a source and destination need to go
through three phases in a virtual-circuit network: setup, data transfer, and
teardown.
 In the setup phase, the source and destination use their global addresses to
help switches make table entries for the connection.
 In the teardown phase, the source and destination inform the switches to
delete the corresponding entry.
 Data transfer occurs between these two phases.

Data Transfer Phase:
To transfer a frame from a source to its destination, all switches need to have a table entry for this virtual circuit. The table, in its simplest form, has four columns. This means that the switch holds four pieces of information for each virtual circuit that is already set up.
Switch and tables in a virtual-circuit network

• A frame arriving at port 1 with a VCI of 14. When the frame arrives, the switch
looks in its table to find port 1 and a VCI of 14. When it is found, the switch knows
to change the VCI to 22 and send out the frame from port 3.
• The data transfer phase is active until the source sends all its frames to the
destination.
• The procedure at the switch is the same for each frame of a message.
• The process creates a virtual circuit, not a real circuit, between the source and
destination.
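A minimal sketch of the table lookup just described, using the numbers from the frame example (incoming port 1, VCI 14 mapped to outgoing port 3, VCI 22):

```python
# (incoming port, incoming VCI) -> (outgoing port, outgoing VCI)
vc_table = {
    (1, 14): (3, 22),
}

def switch_frame(in_port: int, in_vci: int):
    out_port, out_vci = vc_table[(in_port, in_vci)]
    # The switch rewrites the VCI and forwards the frame on the outgoing port.
    return out_port, out_vci

print(switch_frame(1, 14))   # (3, 22)
```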
Switch and tables in a virtual-circuit network
Setup Phase:
In the setup phase, a switch creates an entry for a virtual circuit. For example, suppose source A needs to create a virtual circuit to B. Two steps are required: the setup request and the acknowledgment.

The switch assigns the incoming port (1) and chooses an available incoming VCI (14) and the outgoing port (3). It does not yet know the outgoing VCI, which will be found during the acknowledgment step. The switch then forwards the frame through port 3 to switch 2.
Acknowledgment
Destination B receives the setup frame, and if it is ready to receive frames from
A, it assigns a VCI to the incoming frames that come from A, in this case 77.
This VCI lets the destination know that the frames come from A, and not other
sources.

Teardown Phase:
In this phase, source A, after sending all frames to B, sends a special frame called a teardown request. Destination B responds with a teardown confirmation frame. All switches delete the corresponding entry from their tables.
Delay in Virtual-Circuit Networks
STRUCTURE OF A SWITCH
Structure of Circuit Switches:
Circuit switching today can use either of two technologies: the space-division
switch or the time-division switch.
Space-Division Switch
In space-division switching, the paths in the circuit are separated from one
another spatially.
Crossbar Switch: A crossbar switch connects n inputs to m outputs in a grid,
using electronic microswitches (transistors) at each crosspoint.

The major limitation of this design is the number of crosspoints required. To connect n inputs to m outputs using a crossbar switch requires n x m crosspoints, which is impractical when n and m are large numbers.
Multistage Switch: The solution to the limitations of the crossbar switch is the
multistage switch, which combines crossbar switches in several (normally
three) stages
In a single crossbar switch, only one row or column (one path) is active for any connection, so we need N x N crosspoints.
If we can allow multiple paths inside the switch, we can decrease the number of crosspoints.
Each crosspoint in the middle stage can be accessed by multiple crosspoints in the first or third stage.

To design a three-stage switch, we follow these steps:
1. We divide the N input lines into groups, each of n lines. For each group, we use one crossbar of size n x k, where k is the number of crossbars in the middle stage. In other words, the first stage has N/n crossbars of n x k crosspoints.
2. We use k crossbars, each of size (N/n) x (N/n), in the middle stage.
3. We use N/n crossbars, each of size k x n, at the third stage.
We can calculate the total number of crosspoints as follows:

Total crosspoints = (N/n)(n × k) + k(N/n)² + (N/n)(k × n) = 2kN + k(N/n)²
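A small sketch of this crosspoint arithmetic; the sample values (N = 200, n = 20, k = 4) are chosen only for illustration:

```python
def single_crossbar_crosspoints(N: int) -> int:
    return N * N

def three_stage_crosspoints(N: int, n: int, k: int) -> int:
    """2kN crosspoints in the outer stages plus k(N/n)^2 in the middle stage."""
    groups = N // n
    return 2 * k * N + k * groups * groups

print(single_crossbar_crosspoints(200))       # 40000
print(three_stage_crosspoints(200, 20, 4))    # 2000
```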

DIGITAL SUBSCRIBER LINE

After traditional modems reached their peak data rate,


telephone companies developed another technology,
DSL, to provide higher-speed access to the Internet.
Digital subscriber line (DSL) technology is one of the
most promising for supporting high-speed digital
communication over the existing local loops.
Note
ADSL is an asymmetric communication technology designed for residential users; it is not suitable for businesses, because businesses need a larger upload bandwidth.


One interesting point is that ADSL uses the existing local loops.
But how does ADSL reach a data rate that was never achieved
with traditional modems? The answer is that the twisted-pair local
loop is actually capable of handling bandwidths up to 1.1 MHz,
but the filter installed at the end office of the telephone company
where each local loop terminates limits the bandwidth to 4 kHz
(sufficient for voice communication). If the filter is removed,
however, the entire 1.1 MHz is available for data and voice
communications.
Note
The existing local loops (twisted-pair lines) can handle bandwidths up to
1.1 MHz.

A traditional phone has a low-pass filter in front of it, which limits its bandwidth to 4 kHz.
Note
ADSL is an adaptive technology.
The system uses a data rate
based on the condition of
the local loop line.

The condition of the line depends on the distance between the residence and the switching office, the size of the cable, and the signaling used.
Discrete Multitone Technique
 The modulation technique that has become standard for ADSL is called the discrete
multitone technique (DMT) which combines QAM and FDM.
 Each system can decide on its bandwidth division. Typically, an available bandwidth of
1.104 MHz is divided into 256 channels.
 Each channel uses a bandwidth of 4.312 kHz.
 Channel 0 is reserved for voice communication.
 Channels 1 to 5 are not used and provide a gap between voice and data communication.
 Upstream data and control. Channels 6 to 30 (25 channels) are used for upstream data transfer and control. One channel is for control, and 24 channels are for data transfer. If there are 24 channels, each using 4 kHz (out of the 4.312 kHz available) with QAM modulation, we have 24 x 4000 x 15, or a 1.44-Mbps bandwidth, in the upstream direction. However, the data rate is normally below 500 kbps because some of the carriers are deleted at frequencies where the noise level is large. In other words, some of the channels may be unused.
 Downstream data and control. Channels 31 to 255 (225 channels) are used for downstream data transfer and control. One channel is for control, and 224 channels are for data. If there are 224 channels, we can achieve up to 224 x 4000 x 15, or about 13.4 Mbps. However, the data rate is normally below 8 Mbps because some of the carriers are deleted at frequencies where the noise level is large. In other words, some of the channels may be unused.
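The upstream and downstream figures above follow from simple arithmetic, assuming 4 kHz of usable bandwidth per channel and 15 bits/Hz with QAM; a quick check:

```python
BITS_PER_HZ = 15    # QAM efficiency assumed in the figures above
CHANNEL_HZ = 4000   # usable bandwidth per DMT channel

def dmt_rate_bps(data_channels: int) -> int:
    return data_channels * CHANNEL_HZ * BITS_PER_HZ

print(dmt_rate_bps(24) / 1e6)    # upstream:   1.44 Mbps theoretical
print(dmt_rate_bps(224) / 1e6)   # downstream: 13.44 Mbps theoretical
```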
Discrete multitone technique (QAM + FDM)
Bandwidth division in ADSL
Customer site: ADSL modem
The splitter and data line need installation (which may be expensive).
ADSL Lite (universal ADSL or splitterless ADSL) does not need additional installation from the telephone company.
Telephone company site
Table 9.2 Summary of DSL technologies

ADSL Lite: does not need additional installation from the telephone company
Module II (10 Hours)
Data Link Layer – Introduction, Data Link Control & Protocol –
Framing, Flow & Error Control, HDLC & PPP, Multiple Access –
Random (CSMA), Controlled.
Wired LAN – LLC, MAC, Ethernet. Wireless LAN. Connecting
Devices – Repeaters, Hubs, Bridges,
Two & Three layer Switches, Routers, Gateways, Backbone
networks, V-LAN
Data Link Control

Data link control functions include framing, flow and error control, and software-implemented protocols that provide smooth and reliable transmission of frames between nodes.
Framing
 The data link layer needs to pack bits into frames, so that each frame is
distinguishable from another. Our postal system practices a type of framing.
The simple act of inserting a letter into an envelope separates one piece of
information from another; the envelope serves as the delimiter.
 Framing in the data link layer separates a message from one source to a
destination, or from other messages to other destinations, by adding a
sender address and a destination address. The destination address defines
where the packet is to go; the sender address helps the recipient
acknowledge the receipt.
 Although the whole message could be packed in one frame, that is not
normally done. One reason is that a frame can be very large, making flow
and error control very inefficient. When a message is carried in one very
large frame, even a single-bit error would require the retransmission of the
whole message. When a message is divided into smaller frames, a single-bit
error affects only that small frame.
• Fixed-Size Framing: Frames can be of fixed or variable size. In
fixed-size framing, there is no need for defining the boundaries
of the frames; the size itself can be used as a delimiter. An
example of this type of framing is the ATM (Asynchronous
Transfer Mode) wide-area network.
• In variable-size framing: we need a way to define the end of the
frame and the beginning of the next.
• Historically, two approaches were used for this purpose: a
character-oriented approach and a bit-oriented approach.
Character-Oriented Protocols
 In a character-oriented protocol, data to be carried are 8-bit characters
from a coding system such as ASCII.
 The header, which normally carries the source and destination addresses
and other control information, and the trailer, which carries error detection
or error correction redundant bits, are also multiples of 8 bits.
 To separate one frame from the next, an 8-bit (1-byte) flag is added at the beginning and the end of a frame.
 The flag, composed of protocol-dependent special characters, signals the
start or end of a frame.
 Character-oriented framing was popular when only text was exchanged by the data link layers.
 The flag could be selected to be any character not used for text
communication.
 Any pattern used for the flag could also be part of the information. If this
happens, the receiver, when it encounters this pattern in the middle of the
data, thinks it has reached the end of the frame.
 To fix this problem, a byte-stuffing strategy was added to character-
oriented framing.
 In byte stuffing (or character stuffing), a special byte is added to the data
section of the frame when there is a character with the same pattern as the
flag.
 The data section is stuffed with an extra byte. This byte is usually called the
escape character (ESC), which has a predefined bit pattern. Whenever the
receiver encounters the ESC character, it removes it from the data section
and treats the next character as data, not a delimiting flag.
 Byte stuffing by the escape character allows the presence of the flag in the
data section of the frame, but it creates another problem. What happens if
the text contains one or more escape characters followed by a flag? The
receiver removes the escape character, but keeps the flag, which is
incorrectly interpreted as the end of the frame.
 To solve this problem, the escape characters that are part of the text must
also be marked by another escape character. In other words, if the escape
character is part of the text, an extra one is added to show that the second
one is part of the text.
 Character-oriented protocols present another problem in data
communications. The universal coding systems in use today, such
as Unicode, have 16-bit and 32-bit characters that conflict with 8-bit
characters.
 Moving toward the bit-oriented protocols is the solution.
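A minimal sketch of the byte-stuffing rule described above. The flag and escape values (0x7E and 0x7D) are chosen only for illustration; a real character-oriented protocol defines its own special characters.

```python
FLAG = 0x7E   # illustrative flag byte
ESC = 0x7D    # illustrative escape byte

def byte_stuff(data: bytes) -> bytes:
    """Precede every flag or escape byte in the data with an escape byte."""
    out = bytearray()
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Drop each escape byte and keep the byte that follows it as plain data."""
    out = bytearray()
    it = iter(stuffed)
    for b in it:
        if b == ESC:
            b = next(it)
        out.append(b)
    return bytes(out)

payload = bytes([0x01, FLAG, 0x02, ESC, 0x03])
assert byte_unstuff(byte_stuff(payload)) == payload
```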
Bit-Oriented Protocols
 In a bit-oriented protocol, the data section of a frame is a sequence of bits
to be interpreted by the upper layer as text, graphic, audio, video, and so
on.
 However, in addition to headers (and possible trailers), we still need a
delimiter to separate one frame from the other.
 Most protocols use a special 8-bit pattern flag 01111110 as the delimiter to
define the beginning and the end of the frame.

 This flag can create the same type of problem we saw in the byte-oriented
protocols. That is, if the flag pattern appears in the data, we need to
somehow inform the receiver that this is not the end of the frame.
 We do this by stuffing 1 single bit (instead of 1 byte) to prevent the pattern from looking like a flag. The strategy is called bit stuffing.
Bit stuffing is the process of adding one extra 0 whenever five
consecutive 1s follow a 0 in the data, so that the receiver does not
mistake the pattern 0111110 for a flag.
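A minimal sketch of bit stuffing and unstuffing on a string of bits (the receiver side assumes its input was produced by the sender side):

```python
FLAG = "01111110"

def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s in the data."""
    out, ones = [], 0
    for bit in bits:
        out.append(bit)
        ones = ones + 1 if bit == "1" else 0
        if ones == 5:
            out.append("0")   # stuffed bit
            ones = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the stuffed 0 that follows every run of five consecutive 1s."""
    out, ones, skip = [], 0, False
    for bit in bits:
        if skip:              # this is the stuffed 0; drop it
            skip, ones = False, 0
            continue
        out.append(bit)
        ones = ones + 1 if bit == "1" else 0
        if ones == 5:
            skip = True
    return "".join(out)

data = "011111101111101"
assert FLAG not in bit_stuff(data)          # stuffed data cannot mimic the flag
assert bit_unstuff(bit_stuff(data)) == data
```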
FLOW AND ERROR CONTROL
The most important responsibilities of the data link layer are flow
control and error control. Collectively, these functions are known as
data link control.

Flow control refers to a set of procedures used to restrict the amount of data that the sender can send before waiting for acknowledgment.

Error control in the data link layer is based on automatic repeat request (ARQ), which is the retransmission of data.
PROTOCOLS

Now let us see how the data link layer can combine
framing, flow control, and error control to achieve the
delivery of data from one node to another. The
protocols are normally implemented in software by
using one of the common programming languages. To
make our discussions language-free, we have written
in pseudocode a version of each protocol that
concentrates mostly on the procedure instead of
delving into the details of language rules.
Taxonomy of protocols discussed in this chapter
NOISELESS CHANNELS

Let us first assume we have an ideal channel in which


no frames are lost, duplicated, or corrupted. We
introduce two protocols for this type of channel.

Topics discussed in this section:


Simplest Protocol
Stop-and-Wait Protocol
The design of the simplest protocol with no flow or error control
Algorithm Sender-site algorithm for the simplest protocol
Algorithm Receiver-site algorithm for the simplest protocol
Flow diagram for Example

Design of Stop-and-Wait Protocol
Algorithm Sender-site algorithm for Stop-and-Wait Protocol
Algorithm Receiver-site algorithm for Stop-and-Wait Protocol
Figure 11.9 Flow diagram for Example 11.2

NOISY CHANNELS

Although the Stop-and-Wait Protocol gives us an idea


of how to add flow control to its predecessor, noiseless
channels are nonexistent. We discuss three protocols
in this section that use error control.

Topics discussed in this section:


Stop-and-Wait Automatic Repeat Request
Go-Back-N Automatic Repeat Request
Selective Repeat Automatic Repeat Request
Design of the Stop-and-Wait ARQ Protocol
Algorithm Sender-site algorithm for Stop-and-Wait ARQ
Algorithm Receiver-site algorithm for Stop-and-Wait ARQ Protocol
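The algorithm listings themselves are not reproduced here; the following is only a simplified, self-contained sketch of the Stop-and-Wait ARQ idea (alternating sequence numbers, retransmission when no valid ACK arrives), with a randomly lossy channel standing in for timeouts. All names are ours, not the textbook's pseudocode.

```python
import random

random.seed(1)

def lossy(frame, loss_prob=0.2):
    """Simulated unreliable channel: drop the frame with some probability."""
    return None if random.random() < loss_prob else frame

class Receiver:
    def __init__(self):
        self.expected = 0      # sequence number expected next (0 or 1)
        self.delivered = []

    def on_frame(self, frame):
        if frame is None:
            return None                          # frame was lost in transit
        if frame["seq"] == self.expected:
            self.delivered.append(frame["data"])
            self.expected = 1 - self.expected
        # ACK carries the sequence number expected next (re-ACKs duplicates too)
        return {"ack": self.expected}

def stop_and_wait_send(messages, rx):
    seq = 0
    for data in messages:
        while True:
            ack = lossy(rx.on_frame(lossy({"seq": seq, "data": data})))
            if ack is not None and ack["ack"] == 1 - seq:
                seq = 1 - seq          # frame acknowledged: send the next one
                break
            # no valid ACK: treat as a timeout and retransmit the same frame

rx = Receiver()
stop_and_wait_send(["frame-a", "frame-b", "frame-c"], rx)
print(rx.delivered)   # ['frame-a', 'frame-b', 'frame-c']
```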
HDLC

High-level Data Link Control (HDLC) is a bit-oriented protocol


for communication over point-to-point and multipoint links. It
implements the ARQ mechanisms we discussed in this chapter.

HDLC provides two common transfer modes that can be used in


different configurations: normal response mode (NRM) and
asynchronous balanced mode (ABM).
Normal response mode

In normal response mode (NRM), the station configuration is unbalanced. We have one primary station and multiple secondary stations. A primary station can send commands; a secondary station can only respond. NRM is used for both point-to-point and multipoint links.
Asynchronous balanced mode

In asynchronous balanced mode (ABM), the configuration is balanced. The link


is point-to-point, and each station can function as a primary and a secondary
(acting as peers). This is the common mode today.
HDLC frames

 HDLC defines three types of frames: information frames (I-frames), supervisory frames (S-frames), and unnumbered frames (U-frames).
 Each type of frame serves as an envelope for the transmission of a different type of message.
 I-frames are used to transport user data and control information relating to user data (piggybacking).
 S-frames are used only to transport control information.
 U-frames are reserved for system management. Information carried by U-frames is intended for managing the link itself.
HDLC frames

• Each frame in HDLC may contain up to six fields: a beginning flag field, an address
field, a control field, an information field, a frame check sequence (FCS) field, and
an ending flag field.
• In multiple-frame transmissions, the ending flag of one frame can serve as the
beginning flag of the next frame.
• Flag field. The flag field of an HDLC frame is an 8-bit sequence with the bit pattern 01111110 that identifies both the beginning and the end of a frame and serves as a synchronization pattern for the receiver.
• Address field. The second field of an HDLC frame contains the address of the secondary station. If a primary station created the frame, it contains a "to" address. If a secondary creates the frame, it contains a "from" address.
• Control field. The control field is a 1- or 2-byte segment of the
frame used for flow and error control. The interpretation of bits
in this field depends on the frame type.
• Information field. The information field contains the user's data
from the network layer or management information. Its length
can vary from one network to another.
• FCS field. The frame check sequence (FCS) is the HDLC error
detection field. It can contain either a 2- or 4-byte CRC.
Control field format for the different frame types
U-frame control command and response
