BSNL
ES & IT FACULTY
COURSE CODE – BRBCOIF 114
BRBRAITT : June-2011
"DATA NETWORK" FOR JTOs PH-II
Bits 7, 6 and 5 select the column (0 to 7) of the ASCII code table:

Bit 7:  0 0 0 0 1 1 1 1
Bit 6:  0 0 1 1 0 0 1 1
Bit 5:  0 1 0 1 0 1 0 1
Column: 0 1 2 3 4 5 6 7

Bits 4, 3, 2 and 1 select the row of the table.
The binary representation of a particular character can be easily determined from its
hexadecimal coordinates. For example, the coordinates of character "K" are (4, B)
and, therefore, its binary code is 100 1011.
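The steps above can be sketched in Python, where ord gives the ASCII code of a character and the hexadecimal coordinates correspond to the two hex digits of that code:

```python
# Derive the 7-bit ASCII code of a character from its hexadecimal
# coordinates. "K" has coordinates (4, B), i.e. hex 4B, whose
# binary form is the 7-bit code 100 1011.
def ascii_bits(ch):
    code = ord(ch)              # e.g. ord("K") == 0x4B == 75
    return format(code, "07b")  # 7-bit binary string

print(hex(ord("K")))    # 0x4b -> coordinates (4, B)
print(ascii_bits("K"))  # 1001011
```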
The control symbols are codes reserved for special functions. Table 2 lists the control
symbols. Some important functions and the corresponding control symbols are:
functions relating to basic operation of the terminal device, e.g., a printer or a
VDU
CR (Carriage Return)
LF (Line Feed)
functions relating to error control
ACK (Acknowledgement)
NAK (Negative Acknowledgement)
functions relating to blocking (grouping) of data characters
STX (Start of Text)
ETX (End of Text).
DC1, DC2, DC3 and DC4 are user definable. DC1 and DC3 are generally used as
X-ON and X-OFF for switching the transmitter.
ASCII is often used with an eighth bit called the parity bit. This bit is utilized for
detecting errors which occur during transmission. It is added in the most significant
bit (MSB) position. We will examine the use of parity bits in detail in the chapter on
Error Control.
EXAMPLE 1
Represent the message "3P.bat" in ASCII code. The eighth bit may be kept as "0".
Solution
Bit Positions 8 7 6 5 4 3 2 1
3 0 0 1 1 0 0 1 1
P 0 1 0 1 0 0 0 0
. 0 0 1 0 1 1 1 0
b 0 1 1 0 0 0 1 0
a 0 1 1 0 0 0 0 1
t 0 1 1 1 0 1 0 0
[Fig. 1: Parallel transmission — the eight bits b0 to b7 of a character travel
simultaneously, each on its own circuit, from transmitter to receiver]
In serial transmission, bits are transmitted serially one after the other (Fig. 2). The
least significant bit (LSB) is usually transmitted first. Note that, as compared to
parallel transmission, serial transmission requires only one circuit interconnecting the
two devices. Therefore, serial transmission is suitable for transmission over long
distances.

[Fig. 2: Serial transmission — the bit pattern 1101 0010 travels bit by bit over a
single circuit from transmitter to receiver]
EXAMPLE 2
Show the serial bit stream for the message "3P.bat" of Example 1, assuming that the
least significant bit of each byte is transmitted first.
Solution
3: 11001100, P: 00001010, .: 01110100, b: 01000110, a: 10000110, t: 00101110
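The bit stream of Example 2 can be generated programmatically, reversing each 8-bit pattern so that the LSB comes first:

```python
# Generate the serial bit stream of Example 2: each character of
# "3P.bat" is sent as 8 bits (the eighth, parity, bit kept at 0),
# least significant bit first.
def serial_stream(message):
    out = []
    for ch in message:
        bits8 = format(ord(ch), "08b")  # MSB ... LSB
        out.append(bits8[::-1])         # reverse: LSB transmitted first
    return " ".join(out)

print(serial_stream("3P.bat"))
# 11001100 00001010 01110100 01000110 10000110 00101110
```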
Bits are transmitted as electrical signals over the interconnecting wires. The two
binary states "1" and "0" must be represented by two distinct electrical levels. If one
of the binary states is represented by zero voltage, the transmission is termed
unipolar, and if we choose to represent a binary "1" by, say, a positive voltage +V
volts and binary "0" by a negative voltage -V volts, the transmission is said to be
bipolar. Figure 3 shows the bipolar waveform of the character "K". Bipolar
transmission is preferred because the signal does not have any DC component. The
transmission media usually do not allow DC signals to pass through.
Bit Rate
Bit rate is simply the number of bits which can be transmitted in a second. If tp is the
duration of a bit, the bit rate R will be 1/tp. It must be noted that the bit duration is
not necessarily the pulse duration. For example, in Fig. 3, the first pulse is of two-bit
duration. Later, we will come across signal formats in which the pulse duration is
only half the bit duration.
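The relation R = 1/tp can be checked with a quick calculation; the 9600 bit/s figure below is only an illustrative value, not taken from the text:

```python
# Bit rate from bit duration: R = 1 / tp.
# Example: a bit duration of about 104 microseconds corresponds
# to a bit rate of 9600 bit/s.
tp = 1 / 9600   # bit duration in seconds
R = 1 / tp      # bit rate in bit/s
print(round(R)) # 9600
```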
Receiving Data Bits
The signal received at the other end of the transmitting medium is never identical to
the transmitted signal as the transmission medium distorts the signal to some extent.
As a result, the receiver has to put in considerable effort to identify the bits. The
receiver must know the time instant at which it should look for a bit. Therefore, the
receiver must have synchronized clock pulses which mark the location of the bits. The
received signal is sampled using the clock pulses, and depending on the polarity of a
sample, the corresponding bit is identified (Fig. 4).
[Fig. 4: Bit recovery at the receiver — the transmitted signal (1101 0010) is distorted
by the medium; the received signal is sampled at the clock instants and the bit
sequence 1101 0010 is recovered]
It is essential that the received signal is sampled at the right instants, as otherwise it
could be misinterpreted. Therefore, the clock frequency should be exactly the same as
the transmission bit rate. Even a small difference will build up as a timing error and
eventually result in sampling at the wrong instants. When the clock frequency is
slightly faster or slightly slower than the bit rate, a bit may be sampled twice or may
be missed.
[Figure: Synchronous transmission frame format — Flag | Block of bytes | Flag |
Idle data | Flag | Block of bytes | Flag; Block 1 is transmitted before Block 2]
For reliable reception, the transmitted signal should satisfy the following
requirements:
1. Sufficient signal transitions should be present in the transmitted signal for the
clock extraction circuit at the receiving end to work properly.
2. The bandwidth of the digital signal should match the bandwidth of the
transmission medium.
3. There should not be any ambiguity in recognizing the binary states of the
received signal.
There are several ways of representing bits as digital electrical signals. Two broad
classes of signal representation codes are: Non-Return to Zero (NRZ) codes and
Return to Zero (RZ) codes.
[Figures: Digital signal coding waveforms shown against the clock signal — NRZ-M
and NRZ-S coding; Manchester, Bi-phase-M, Bi-phase-S and Differential Manchester
coding]
The maximum data rate R of a noiseless channel of bandwidth B using L distinct
signal levels is given by the Nyquist formula:
R = 2B log2 L
Bauds
When bits are transmitted as an electrical signal having two levels, the bit rate and
the "modulation" rate of the electrical signal are the same (Fig. 9). Modulation rate is
the rate at which the electrical signal changes its levels. It is expressed in bauds
("per second" is implied). Note that there is a one-to-one correspondence between
bits and electrical levels.
It is possible to associate more than one bit to one electrical level. For example, if the
electrical signal has four distinct levels, two bits can be associated with one electrical
level (Fig. 10). In this case, the bit rate is twice the baud rate.
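The relationship described above can be stated as: bit rate = baud rate × log2(number of levels). A short check:

```python
# Bit rate versus modulation (baud) rate for a signal with L
# distinct levels: bit rate = baud rate * log2(L).
import math

def bit_rate(baud, levels):
    return baud * math.log2(levels)

print(bit_rate(1200, 2))  # 1200.0 -> two levels: bit rate equals baud rate
print(bit_rate(1200, 4))  # 2400.0 -> four levels: bit rate is twice the baud rate
```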
Modem
In Fig. 10, the four levels define four states of the electrical signal. The electrical state
can also be defined in terms of other attributes of an electrical signal such as
amplitude, frequency or phase. The basic electrical signal is a sine wave in this case.
The binary signal modulates one of these signal attributes. The sine wave carries the
information and is, therefore, termed the "carrier". The device which performs
modulation is called a modulator and the device which recovers the information signal
from the modulated carrier is called a demodulator. In data transmission, we usually
come across devices which perform both modulation as well as demodulation
function and these devices are called modems. Modems are required when data is to
be transmitted over long distances. In a modem, the input digital signal modulates a
carrier which is transmitted to the distant end. At the distant end, another modem
demodulates the received carrier to get the digital signal. A pair of modems is, thus,
always required.
DATA COMMUNICATION
The terms communication and transmission are often used interchangeably, but it is
necessary to understand the distinction between the two activities. Transmission is
the physical movement of information and concerns issues like bit polarity,
synchronization, clock, electrical characteristics of signals, modulation,
demodulation, etc. We have so far been examining these data transmission issues.
Communication has a much wider connotation than transmission. It refers to
meaningful exchange of information between the communicating entities. Therefore,
in data communications we are concerned with all the issues relating to exchange of
information in the form of a dialogue, e.g., dialogue discipline, interpretation of
messages, and acknowledgements.
Synchronous Communication
Entity A: Hello B!
Entity B: Hello! Do you want to send data?
Entity A: Yes.
Entity B: Go ahead.
Entity A: Here it is.
Entity B: Any more data?
Entity A: No.
Entity B: Bye.
Entity A: Bye.
The dialogue between the entities A and B is "synchronized" in the sense that each
message of the dialogue is a command or response. Physical transmission of data
bytes corresponding to the characters of these messages could be in synchronous or
asynchronous mode.
Asynchronous Communication
Asynchronous communication, on the other hand, is less disciplined. A
communicating entity can send a message whenever it wishes to.
Entity A: Hello B!
Entity B: Hello! Here is some data.
Entity A: Here is some data.
Entity B: Here is more data.
Entity A: Did you receive what I sent?
Entity B: Yes. Here is more data. Please acknowledge.
Entity A: Acknowledged. Bye.
Entity B: Bye.
Note the lack of discipline in the dialogue. The communicating entities send
messages whenever they please. Here again, physical transmission of the bytes of
the messages can be in synchronous or asynchronous mode. We will come across
many examples of synchronous and asynchronous communication in this book when
we discuss protocols. Protocols are the rules and procedures for communication.
SUMMARY
Binary codes are used for representing the symbols for computer communications.
ASCII is the most common code set used worldwide. The bits of a binary code can be
transmitted in parallel or in serial form. Transmission is always serial unless the
devices are near each other. Serial transmission mode can be asynchronous or
synchronous. Asynchronous transmission is byte by byte transmission and start/stop
bits are appended to each byte. In synchronous transmission, data is transmitted in the
form of frames having flags to identify the start of a frame. Clock is required in
synchronous transmission. Digital signals are coded using RZ codes to enable clock
extraction at the receiving end.
A communication channel is limited in its information-carrying capacity by its
bandwidth and the noise present in the channel. To make the best use of this limited
capacity of the channel, very sophisticated carrier-modulation methods are used.
Modems are the devices which carry out the modulation and demodulation functions.
Data communication has wider scope as compared to data transmission.
Asynchronous and synchronous communication refer to non-disciplined and
disciplined exchange of messages respectively.
THE MODEL
The Open Systems Interconnection model is a layered framework for the design of
network systems that allows for communication across all types of computer systems.
It consists of seven separate but related layers, each of which defines a segment of the
process of moving information across a network. Understanding the fundamentals of
the OSI model provides a solid basis for exploration of data communication.
Layered Architecture
The OSI model is built of seven ordered layers: physical (layer 1), data link (layer 2),
network (layer 3), transport (layer 4), session (layer 5), presentation (layer 6), and
application (layer 7).
As the message travels from A to B, it may pass through many intermediate nodes.
These intermediate nodes usually involve only the first three layers of the OSI model.
In developing the model, the designers distilled the process of transmitting data down
to its most fundamental elements. They identified which networking functions had
related uses and collected those functions into discrete groups that became the layers.
Each layer defines a family of functions distinct from those of the other layers. By
defining and localizing functionality in this fashion, the designers created an
architecture that is both comprehensive and flexible. Most important, the OSI model
allows complete transparency between otherwise incompatible systems.
Peer-to-Peer processes
Within a single machine, each layer calls upon the services of the layer just below it.
Layer 3, for example, uses the services provided by layer 2 and provides services for
layer 4. Between machines, layer x on one machine communicates with layer x on
another machine. This communication is governed by an agreed-upon series of rules
and conventions called protocols. The processes on each machine that communicate at
a given layer are called peer-to-peer processes. Communication between machines is
therefore a peer-to-peer process using the protocols appropriate to a given layer.
At the physical layer, communication is direct: machine A sends a stream of bits to
machine B. At the higher layers, however, communication must move down through
the layers on machine A, over to machine B, and then back up through the layers.
Each layer in the sending machine adds its own information to the message it receives
from the layer just above it and passes the whole package to the layer just below it.
This information is added in the form of headers or trailers (control data added to the
beginning or end of a data parcel). Headers are added to the message at layers 6,5,4,3,
and 2. A trailer is added at layer 2.
At layer 1 the entire package is converted to a form that can be transferred to the
receiving machine. At the receiving machine, the message is unwrapped layer by
layer, with each process receiving and removing the data meant for it. For example,
layer 2 removes the data meant for it, then passes the rest to layer 3. Layer 3 removes
the data meant for it and passes the rest to layer 4, and so on.
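The wrapping and unwrapping described above can be sketched as a toy program. The layer names and header contents here are invented for illustration and do not correspond to any real protocol stack:

```python
# Toy illustration of OSI encapsulation: each sending layer (6 to 3)
# prepends a header, layer 2 adds a header and a trailer, and the
# receiving side strips them in reverse order.
def send(message):
    data = message
    for layer in (6, 5, 4, 3):
        data = f"H{layer}|{data}"  # layers 6..3 add headers
    return f"H2|{data}|T2"         # layer 2 adds header and trailer

def receive(data):
    assert data.startswith("H2|") and data.endswith("|T2")
    data = data[3:-3]              # layer 2 strips its header and trailer
    for layer in (3, 4, 5, 6):
        assert data.startswith(f"H{layer}|")
        data = data[len(f"H{layer}|"):]  # each layer strips its header
    return data

wire = send("hello")
print(wire)           # H2|H3|H4|H5|H6|hello|T2
print(receive(wire))  # hello
```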
Interfaces between Layers
The passing of the data and network information down through the layers of the
sending machine and back up through the layers of the receiving machine is made
possible by an interface between each pair of adjacent layers. Each interface defines
what information and services a layer must provide for the layer above it. Well-
defined interfaces and layer functions provide modularity to a network. As long as a
layer still provides the expected services to the layer above it, the specific
implementation of its functions can be modified or replaced without requiring
changes to the surrounding layers.
Organization of the Layer
The seven layers can be thought of as belonging to three subgroups. Layers 1, 2, and 3
– physical, data link, and network – are the network support layers; they deal with the
physical aspects of moving data from one device to another (such as electrical
specifications, physical connections, physical addressing, and transport timing and
reliability). Layers 5, 6, and 7 – session, presentation, and application – can be
thought of as the user support layers; they allow interoperability among unrelated
software systems. Layer 4, the transport layer, ensures end-to-end reliable data
transmission while layer 2 ensures reliable transmission on a single link. The upper
OSI layers are almost always implemented in software; the lower layers are a
combination of hardware and software, except for the physical layer, which is mostly
hardware.
L7 data means the data unit at layer 7, L6 data means the data unit at layer 6, and so
on. The process starts out at layer 7 (the application layer) and then moves from
layer to layer in descending sequential order. At each layer (except layers 7 and 1), a
header is added to the data unit. At layer 2, a trailer is added as well. When the
formatted data unit passes through the physical layer (layer 1), it is changed into an
electromagnetic signal and transported along a physical link.
Upon reaching its destination, the signal passes into layer 1 and is transformed back
into bits. The data units then move back up through the OSI layers. As each block of
data reaches the next higher layer, the headers and trailers attached to it at the
corresponding sending layer are removed, and actions appropriate to that layer are
taken. By the time it reaches layer 7, the message is again in a form appropriate to the
application and is made available to the recipient.
FUNCTIONS OF THE LAYERS
In this section we briefly describe the functions of each layer in the OSI model.
Physical Layer
The physical layer coordinates the functions required to transmit a bit stream over a
physical medium. It deals with mechanical and electrical specifications of the
interface and transmission medium. It also defines the procedures and functions that
physical devices and interfaces have to perform for transmission to occur.
Data Link Layer
The data link layer transforms the physical layer, a raw transmission facility, into a
reliable link. Its specific responsibilities include the following:
Framing.
The data link layer divides the stream of bits received from the network layer into
manageable data units called frames.
Physical addressing.
If frames are to be distributed to different systems on the network, the data link layer
adds a header to the frame to define the physical address of the sender (source
address) and/or receiver (destination address) of the frame. If the frame is intended
for a system outside the sender's network, the receiver address is the address of the
device that connects one network to the next.
Flow control.
If the rate at which the data are absorbed by the receiver is less than the rate produced
in the sender, the data link layer imposes a flow control mechanism to prevent
overwhelming the receiver.
Error control.
The data link layer adds reliability to the physical layer by adding mechanisms to
detect and retransmit damaged or lost frames. It also uses a mechanism to prevent
duplication of frames. Error control is normally achieved through a trailer added to
the end of the frame.
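The error-control trailer idea can be sketched with a simple checksum. Real data link layers typically use a CRC rather than the toy modular sum used here:

```python
# Sketch of error control via a trailer: the sender appends a
# one-byte checksum over the frame's payload; the receiver
# recomputes it and can request retransmission on a mismatch.
def add_trailer(payload: bytes) -> bytes:
    checksum = sum(payload) % 256
    return payload + bytes([checksum])

def check_frame(frame: bytes) -> bool:
    payload, trailer = frame[:-1], frame[-1]
    return sum(payload) % 256 == trailer

frame = add_trailer(b"DATA")
print(check_frame(frame))                      # True
corrupted = bytes([frame[0] ^ 1]) + frame[1:]  # flip one bit in transit
print(check_frame(corrupted))                  # False
```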
Access control.
When two or more devices are connected to the same link, data link layer protocols
are necessary to determine which device has control over the link at any given time.
Network Layer
The network layer is responsible for the source-to-destination delivery of a packet,
possibly across multiple networks (links). Whereas the data link layer oversees the
delivery of the packet between two systems on the same network (link), the network
layer ensures that each packet gets from its point of origin to its final destination.
If two systems are connected to the same link, there is usually no need for a network
layer. However, if the two systems are attached to different networks (links) with
connecting devices between the networks (links), there is often a need for the
network layer to accomplish source-to-destination delivery.
Transport Layer
The transport layer is responsible for process-to-process delivery of the entire
message. Whereas the network layer oversees the delivery of individual packets from
source host to destination host, it does not recognize any relationship between those
packets or the processes exchanging them.
For this reason, source-to-destination delivery means delivery not only from one
computer to the next but also from a specific process (running program) on one
computer to a specific process (running program) on the other. The transport layer
header therefore must include a type of address called a service-point address (or port
address). The network layer gets each packet to the correct computer; the transport
layer gets the entire message to the correct process on that computer.
Segmentation and reassembly.
A message is divided into transmittable segments, each segment containing a
sequence number. These numbers enable the transport layer to reassemble the
message correctly upon arrival at the destination and to identify and replace packets
that were lost in transmission.
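Segmentation and reassembly can be sketched as follows; the segment size and tuple format are invented for illustration:

```python
# Sketch of transport-layer segmentation and reassembly: the
# message is split into numbered segments, which may arrive out of
# order; the sequence numbers let the receiver restore the message.
def segment(message, size=4):
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(segments):
    return "".join(data for _, data in sorted(segments))

segs = segment("HELLO WORLD")
segs.reverse()           # simulate out-of-order arrival
print(reassemble(segs))  # HELLO WORLD
```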
Connection control.
The transport layer can be either connectionless or connection-oriented. A
connectionless transport layer treats each segment as an independent packet and
delivers it to the transport layer at the destination machine. A connection-oriented
transport layer makes a connection with the transport layer at the destination machine
first before delivering the packets. After all the data are transferred, the connection is
terminated.
Flow control.
Like the data link layer, the transport layer is responsible for flow control. However,
flow control at this layer is performed end to end rather than across a single link.
Error control.
Like the data link layer, the transport layer is responsible for error control. However,
error control at this layer is performed end to end rather than across a single link. The
sending transport layer makes sure that the entire message arrives at the receiving
transport layer without error (damage, loss, or duplication). Error correction is usually
achieved through retransmission.
Session Layer
The services provided by the first three layers (physical, data link, and network) are
not sufficient for some processes. The session layer is the network dialog controller. It
establishes, maintains, and synchronizes the interaction between communicating
systems.
Presentation Layer
The presentation layer is concerned with the syntax and semantics of the information
exchanged between two systems. Its specific responsibilities include the following:
Translation.
Different computers use different encoding systems, and the presentation layer is
responsible for interoperability between these different encoding methods. The
presentation layer at the sender changes the information from its sender-dependent
format into a common format. The presentation layer at the receiving machine
changes the common format into its receiver-dependent format.
Encryption.
To carry sensitive information, a system must be able to assure privacy. Encryption
means that the sender transforms the original information to another form and sends
the resulting message out over the network. Decryption reverses the original process
to transform the message back to its original form.
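The encrypt/decrypt idea can be illustrated with a toy XOR cipher. This is not a secure algorithm and is not what real presentation layers use; it only shows the transform-and-reverse pattern described above:

```python
# Toy illustration of encryption and decryption: XOR with a
# repeating key. Applying the same operation twice restores the
# original message. NOT cryptographically secure.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plain = b"sensitive data"
key = b"secret"
cipher = xor_cipher(plain, key)   # sender transforms the message
print(xor_cipher(cipher, key))    # receiver reverses the process
```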
Compression.
Data compression reduces the number of bits to be transmitted. Data compression
becomes particularly important in the transmission of multimedia such as text, audio,
and video.
Application layer
The application layer enables the user, whether human or software, to access the
network. It provides user interfaces and support for services such as electronic mail,
remote file access and transfer, shared database management, and other types of
distributed information services. Of the many application services available, the
figure shows only three: X.400 (message-handling services); X.500 (directory
services); and file transfer, access, and management (FTAM). The user in this
example uses X.400 to send an e-mail message. Note that no headers or trailers are
added at this layer.
Specific services provided by the application layer include the following:
Network virtual terminal.
A network virtual terminal is a software version of a physical terminal and allows a
user to log on to a remote host. To do so, the application creates a software
emulation of a terminal at the remote host. The user's computer talks to the software
terminal which, in turn, talks to the host, and vice versa. The remote host believes it
is communicating with one of its own terminals and allows the user to log on.
File transfer, access, and management (FTAM).
This application allows a user to access files in a remote host, to retrieve files from a
remote computer for use in the local computer, and to manage or control files
in a remote computer.
Mail services.
This application provides the basis for e-mail forwarding and storage.
Directory services.
This application provides distributed database sources and access for global
information about various objects and services.
PHYSICAL LAYER
Transmission of digital information from one device to another is the basic function
for the devices to be able to communicate. This chapter describes the first layer of the
OSI model, the Physical layer, which carries out this function. After examining the
services it provides to the Data Link layer, functions of the Physical layer are
discussed. Relaying through the use of modems is a very important data transmission
function carried out at the Physical layer level. Various protocols and interfaces which
pertain to the relaying functions are put into perspective. We then proceed to examine
EIA-232-D, a very important interface of the Physical layer. We discuss its
applications and limitations.
THE PHYSICAL LAYER
Let us consider a simple data communication situation shown in Fig.1, where two
digital devices A and B need to exchange data bits.
[Fig. 1: Devices A and B interconnected through a medium, with an interface
between each device and the interconnecting medium]
The basic requirements for the devices to be able to exchange bits are the following:
The Physical layer provides its service to the Data Link layer, which is the next
higher layer and uses this service. The Physical layer, in turn, uses the service of the
physical interconnection medium for transmitting the electrical signals.
Physical Connection
The Physical layer receives the bits to be transmitted from the Data Link layer (Fig.
2). At the receiving end, the Physical layer hands over these bits to the Data Link
layer. Thus, the Physical layers at the two ends provide a transport service from one
Data Link layer to the other over a "Physical connection" activated by them. A
Physical connection is different from a physical transmission path in the sense that it
is at bit level while the transmission path is at the electrical signal level.
[Fig. 2: Physical connection — the Physical layers at the two ends exchange the bits
received from their Data Link layers over the interconnection medium]

[Fig. 3: A physical connection through an intermediate node C — bits are relayed by
the Physical layer of C between devices A and B]
[Fig. 4: Relaying function of the Physical layer — physical connection end points
linked across interconnection media]

[Fig. 5: Signal conversion units (SCUs) placed between devices A and B]
SCUs employ one or more of the following methods to ensure acceptable quality of
the signal received at the distant end:
1. Amplification
2. Regeneration
3. Equalization of media characteristics
4. Modulation.
Examples of SCUs which carry out these functions are: modems, LDMs (Limited
Distance Modems), line drivers, digital service units, and optical transceivers.
A pair of these devices is always required, one at each end. These two devices
together act as a relay. They receive electrical signals representing data bits at one end
and deliver the same signals at the other end.
The digital end devices face the SCUs and interact with the SCUs at the Physical
layer level. This is shown in detail in Fig.6. Notice that a number of protocols and
interfaces at Physical layer level are involved when SCUs are used as relay units.
[Fig. 6: Protocols and interfaces with SCUs — device A connects to SCU-A and
device B to SCU-B over medium M1 using physical medium interface I1; the two
SCUs are interconnected over medium M2 using interface I2]
In the above example, the media M1 and M2 are usually different. M1 consists of a
bunch of copper wires, each carrying data or a control signal. M2, on the other hand,
can be a telephony channel or even optical fiber. Physical medium interfaces I1 and I2
depend on the type of medium used.
As regards the Physical layer protocols, note that the Physical layer of device A no
longer interacts with the Physical layer of device B. It interacts with the Physical layer
of SCU-A to carry out the Physical layer functions. The two SCUs have a different set
of Physical layer protocols between them.
[Fig. 7: Physical layer protocol between devices A and B — data bits are exchanged
over data signal and control signal circuits]
The physical interconnecting medium consists of a number of wires carrying data and
control signals. It is essential to specify which wire carries which signal. Moreover,
the mechanical specifications of the connector, type of the connector (male or female)
and the electrical characteristics of the signals need to be specified. Definition of the
physical medium interface includes all these specifications.
PHYSICAL LAYER STANDARDS
Historically, the specifications and standards of the physical medium interface have
also covered the Physical layer protocols. But these specifications have not identified
the Physical layer protocols as such.
Physical layer specifications can be divided into the following 4 components (Fig.8):
1. Mechanical specification
2. Electrical specification
3. Functional specification
4. Procedural specification.
[Fig. 8: Physical layer specifications — mechanical specification (connector pin
assignment), electrical specification (electrical characteristics), functional
specification (various signals), and procedural specification (Physical layer
protocol)]
The procedural specification is the Physical layer protocol definition and the other
three specifications constitute the physical medium interface specifications.
The mechanical specification gives details of the mechanical dimensions and the type
of connectors to be used on the device and the medium. Pin assignments of the
connector are also specified.
The electrical specification defines the permissible limits of the electrical signals
appearing at the interface in terms of voltages, currents, impedances, rise time, etc.
The required electrical characteristics of the medium are also specified.
The functional specification indicates the functions of various control signals.
The procedural specification indicates the sequence in which the control signals are
exchanged between the Physical layers for carrying out their functions.
Although there are many standards of the Physical layer, only a few are of wide
significance. Some examples of Physical layer standards are given below.
EIA: EIA-232-D
RS-449, RS-422-A, RS-423-A
CCITT: X.20, X.20bis
X.21, X.21bis
V.35, V.24, V.28
ISO: ISO 2110
Out of the above, the EIA-232-D interface is the most common and is found in almost
all computers. We will examine EIA-232-D in detail in the following sections. Other
less important Physical layer standards will also be discussed in brief.
[Fig. 9: DTE/DCE interfaces at the Physical layer — the EIA-232-D interface
between each DTE and its DCE, and a separate interface between the two DCEs]
Two types of Physical layer interfaces are involved in the above configuration: the
interface between the DTE and the DCE (EIA-232-D), and the interface between the
two DCEs.
The physical media between the DTE and the DCE consist of several circuits carrying
data, control and timing signals. Each circuit carries one specific signal, either from
the DTE or from the DCE. These circuits are called interchange circuits.
DCE-DCE Connection
A DCE has two interfaces: the DTE-side interface, which is EIA-232-D, and the
line-side interface, which interconnects the two DCEs through the transmission
medium. There can be several forms of connection and modes of transmission
between the DCEs, as shown in Fig. 10.
[Fig. 10: DCE-DCE connection — each DTE connects to its DCE through
EIA-232-D; the DCEs are interconnected by a dedicated transmission medium, e.g., a
4-wire circuit with a telephone instrument at each end]
Note that electronics of the DCE may not be directly connected to the interconnecting
transmission circuit. This connection is made on request from the DTE as we shall see
later.
EIA-232-D INTERFACE SPECIFICATIONS
EIA-232-D interface defines four sets of specifications for the interface between a
DTE and a DCE:
1. Mechanical specifications
2. Electrical specifications
3. Functional specifications
4. Procedural specifications
The protocol between the Physical layers of the DTE and DCE is defined by the
procedural specifications. The scope of the EIA-232-D interface is therefore not
confined to the physical medium interface alone; it covers the Physical layer
protocol as well.
CCITT recommendations for the physical interface are as follows:
1. Mechanical specifications as per ISO 2110
2. Electrical specifications V.28
3. Functional specifications V.24
4. Procedural specifications V.24
These recommendations are equivalent to EIA-232-D.
Mechanical Specifications
Mechanical specifications include mechanical design of the connectors which are
used on the equipment and the interconnecting cables; and pin assignments of the
connectors.
EIA-232-D defines the pin assignments and the connector design is as per ISO 2110
standard. A DB-25 connector having 25 pins is used (Fig. 11). The male connector is
used for the DTE port and the female connector is used for the DCE port.
[Fig. 11: 25-pin DB-25 connector of the EIA-232-D interface — a male connector is
used for the DTE port, with pins 1 to 13 in the upper row and pins 14 to 25 in the
lower row]
Electrical specifications
The electrical specifications of the EIA-232-D interface specify characteristics of the
electrical signals. EIA-232-D is a voltage interface. Positive and negative voltages
within the limits as shown in Fig.12 are assigned to the two logical states of a binary
digital signal.
Fig. 12 Voltage levels of the EIA-232-D interface:
Logic 0 (On, Space): +3 V to +25 V (limit), nominal +12 V
Logic 1 (Off, Mark): –3 V to –25 V (limit), nominal –12 V
The region between –3 V and +3 V is a transition region.
Functional Specifications
Functional specifications describe the various signals which appear on different pins
of the EIA-232-D interface. Table 1 lists these signals which are divided into five
categories:
1. Ground or common return
2. Data circuits
3. Control circuits
4. Timing circuits
5. Secondary channel circuits
A circuit implies the wire carrying a particular signal. The return path for all the
circuits in both directions (from DTE to DCE and from DCE to DTE) is common. It is
provided on pin 7 of the interface. EIA has used a two- or three-letter designation for
each circuit. CCITT, on the other hand, has given a three digit number to each circuit.
In day-to-day use, however, acronyms based on the function of individual circuits are
more common.
Not all the circuits are always wired between a DTE and a DCE. Depending on
configuration and application, only essential circuits are wired. Functions of the
commonly used circuits are now described.
Signal Ground (AB).
It is the common earth return for all data and control circuits in both directions. This
is one circuit that is always required whatever be the configuration.
Data Terminal Ready (CD), DTE → DCE.
The ON condition of the signal on this circuit informs the DCE that the DTE is ready
to operate and the DCE should also connect itself to the transmission medium.
Fig. 13 Time sequence of the Request to Send and Clear to Send circuits.
A: The DTE switches CA ―ON‖, indicating its wish to transmit data; the DCE sends carrier on the transmission medium.
B: The DCE accepts to receive data by switching CB ―ON‖.
C: The DCE receives data from the DTE on BA.
Transmitted Data (BA), DTE → DCE.
Data from the DTE to the DCE is transmitted on this circuit. When no data is being transmitted, the DTE keeps the signal on this circuit in the ―1‖ state.
Data can be transmitted on this circuit only when the following control signals are
ON:
1. Request to Send (CA)
2. Clear to Send (CB)
3. DCE Ready (CC)
4. Data Terminal Ready (CD).
The ON state of these signals ensures that the local DCE is in readiness to transmit
data and sufficient opportunity has been given to the remote DCE and DTE to get
ready for receiving data.
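This readiness check can be expressed as a small sketch (the helper is hypothetical; the circuit mnemonics are those used in the text):

```python
def can_transmit(signals):
    """Return True only when all four control circuits listed above are ON.
    `signals` maps circuit mnemonics (CA, CB, CC, CD) to booleans."""
    required = ("CA", "CB", "CC", "CD")  # RTS, CTS, DCE Ready, DTR
    return all(signals.get(circuit, False) for circuit in required)

state = {"CA": True, "CB": True, "CC": True, "CD": True}
assert can_transmit(state)
state["CB"] = False            # Clear to Send withdrawn by the DCE
assert not can_transmit(state)
```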
Received Data (BB), DCE → DTE.
Data from DCE to DTE is received on this circuit. DCE maintains the signal on this
circuit in ―1‖ state when no data is being received
Received Line Signal Detector (CF), DCE → DTE.
When a DTE asserts CA, the local DCE sends a carrier to the remote DCE so that it
may get ready to receive data. When the remote DCE detects the carrier on the line, it
alerts the DTE to get ready to receive data by turning the CF circuit ON.
In the first alternative, the DCE supplies clock to the DTE on circuit DB for the
transmitted data. At each clock transition, one data bit is pushed out of the DTE. At
the remote end, the clock is extracted from the received data and supplied to the DTE
on circuit DD for the received data.
In the second alternative, the DTE supplies clock to the DCE on circuit DA. For the
received data, the DCE extracts the clock from data and supplies it to the DTE as
before.
If automatic answering equipment is used, the incoming call is detected by the DCE
and indicated to the DTE by Ring indicator signal (CE). If the DTE is in energized
condition, it sends the Data Terminal Ready signal (CD), which causes connection to
the transmission medium.
The DCE indicates its readiness status simultaneously to the DTE on the DCE Ready
circuit (CC) (Fig.17)
Fig. 17 Connection to the switched telephone network: the incoming ring is indicated to the DTE on CE; the DTE responds with CD; the DCE connects to the line and indicates its readiness on CC.
Thus, at the end of the equipment readiness phase, we have (a) ON state of the Data
Terminal Ready and DCE Ready signals and (b) the transmission medium connected
to the DCE electronics.
Circuit Assurance Phase.
In the circuit assurance phase, the DTEs indicate their intent to transmit data to the
respective DCEs and the end-to-end (DTE to DTE) data circuit is activated. If the
transmission mode is half duplex, only one of the two directions of transmission of
the data circuit is activated.
Half Duplex Mode of Transmission:
A DTE indicates its intent to transmit data by asserting the Request to Send signal
(CA) which activates the transmitter of the DCE and a carrier is sent to the distant end
DCE (Fig. 18). The Request to send signal also inhibits the receiver of the DCE.
Fig. 18 Circuit assurance phase in the half duplex mode: Request to Send (CA) activates the DCE transmitter and its carrier; the distant DCE detects the carrier and raises CF; Clear to Send (CB) is returned to the local DTE after the CTS delay.
After a short interval of time equal to the propagation delay, the carrier appears at the
input of the distant end DCE. The DCE detects the incoming carrier and gets ready to
demodulate data from the carrier. It also alerts the DTE using the Received Line
Signal Detector circuit (CF) as shown in the Fig. 18.
After activating the circuit, the sending end DCE signals the DTE to proceed with
data transmission by returning the Clear to Send signal (CB) after a fixed delay. This
delay ensures that sufficient opportunity is given to the distant end to get ready to
receive data. With the Clear to Send signal, the equipment readiness and end-to-end data circuit readiness are assured, and the sending end DTE can initiate data transmission.
In half duplex operation, the Clear to Send signal is given in response to Request to
Send only if the local Received Line Signal Detector circuit is OFF.
Full Duplex Operation:
In full duplex operation, there are separate communication channels for each direction
of data transmission so that both the DTEs may transmit and receive simultaneously.
The circuit assurance phase is exactly the same as in the half duplex transmission mode except that both the DTEs can independently assert Request to Send. In this case,
the receivers always remain connected to the receive side of the communication
channel.
Data Transfer Phase.
Once the circuit assurance phase is over, data exchange between DTEs can start. The
following circuits are in ON state during this phase:
Transmitting End Receiving End
Data Terminal Ready Data Terminal Ready
DCE Ready DCE Ready
Request to Send Received Line Signal Detector
Clear to Send
At the transmitting end, the DTE sends data on Transmitted Data circuit (BA) to the
DCE which sends a modulated carrier on the transmission medium. The distant end
DCE demodulates the carrier and hands over the data to the DTE on Received Data
circuit (BB).
In the half duplex operation, the direction of transmission needs to be reversed every
time a DTE completes its transmission and the other DTE wants to transmit. The
Request to send signal is withdrawn after the transmitting end DTE completes its
transmission. The DCE withdraws its carrier and switches the communication channel
to its receiver. The DCE also inhibits further flow of data from the local DTE by
turning off the Clear to send signal.
When the distant end DCE notices the carrier disappear, it withdraws the Received
Line Signal Detector circuit. Noticing that the transmission medium is free, the distant
end DTE performs actions of the circuit assurance phase and then transmits data.
Thus, a DTE wanting to transmit, checks each time if the channel is free by sensing
Received Line Signal Detector circuit and if it is OFF, it asserts the Request to Send.
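The turnaround discipline described above can be sketched as a toy model. All names here are illustrative, and the shared `carrier_present` flag stands in for what each DTE sees on its Received Line Signal Detector (CF) circuit; a real DCE would also add the CTS delay and carrier detection time.

```python
class Line:
    """Hypothetical shared half duplex channel."""
    carrier_present = False


class HalfDuplexDTE:
    """Toy model of the half duplex line turnaround described above."""

    def __init__(self, line):
        self.line = line
        self.rts = False   # CA

    def request_to_send(self):
        # A DTE may assert CA only while CF (the remote carrier) is OFF.
        if self.line.carrier_present:
            return False                   # channel busy: must wait
        self.rts = True
        self.line.carrier_present = True   # local DCE puts carrier on line
        return True                        # the DCE would now return CB

    def end_transmission(self):
        self.rts = False
        self.line.carrier_present = False  # DCE withdraws its carrier


line = Line()
a, b = HalfDuplexDTE(line), HalfDuplexDTE(line)
assert a.request_to_send()        # A seizes the free channel
assert not b.request_to_send()    # B senses carrier (CF ON) and waits
a.end_transmission()
assert b.request_to_send()        # channel free: B turns the line around
```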
Disconnect Phase.
After the data transfer phase, disconnection of the transmission media is initiated by a
DTE. It withdraws Data Terminal Ready signal. The DCE disconnects from the
transmission media and turns off the DCE Ready signal.
COMMON CONFIGURATIONS OF EIA-232-D INTERFACE
Not all the circuits defined in EIA-232-D specifications are always implemented.
Depending on application and communication configuration only a subset of the
circuits is implemented.
Fig. 19 A commonly wired configuration of the EIA-232-D interface. The following circuits run straight through between the DTE and the DCE:
Pin 1: Shield
Pin 2: Transmitted Data
Pin 3: Received Data
Pin 4: Request to Send
Pin 5: Clear to Send
Pin 6: DCE Ready
Pin 7: Signal Ground
Pin 8: Received Line Signal Detector
Pin 20: Data Terminal Ready
Pin 22: Ring Indicator
Fig. 20 Three-wire interconnection: only Transmitted Data (pin 2), Received Data (pin 3) and Signal Ground (pin 7) are wired between the DTE and the DCE.
Fig. 21 Three-wire interconnection with loop backs. Only Transmitted Data (BA), Received Data (BB) and Signal Ground (AB) run between the two ports. At each port, Request to Send (CA) is looped back to Clear to Send (CB) and Received Line Signal Detector (CF), and Data Terminal Ready (CD) is looped back to DCE Ready (CC).
By jumpering the Data Terminal Ready circuit to DCE Ready circuit, the equipment
readiness phase is completed as soon as the DTE asserts the Data Terminal Ready
signal. Quite often, this occurs when power is applied to the DTE.
BRBRAITT : June-2011 48
―DATA NETWORK‖ FOR JTOs PH-II
When the DTE asserts the Request to Send signal, the circuit assurance phase is immediately completed because the DTE immediately receives the Clear to Send and Received Line Signal Detector signals.
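The effect of the loop backs can be sketched as follows (a hypothetical helper; the jumpering follows the loop-back arrangement described above):

```python
def loopback_port(outputs):
    """Given the DTE's output circuits, derive the inputs it sees when
    CD is jumpered to CC, and CA to both CB and CF (sketch)."""
    return {
        "CC": outputs["CD"],  # DCE Ready        <- Data Terminal Ready
        "CB": outputs["CA"],  # Clear to Send    <- Request to Send
        "CF": outputs["CA"],  # Recd. Line Signal Detector <- Request to Send
    }

inputs = loopback_port({"CD": True, "CA": True})
# Both handshake phases complete the instant the DTE raises its own signals.
assert inputs == {"CC": True, "CB": True, "CF": True}
```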
By providing the loopbacks, the number of interconnecting wires is reduced, but it should be kept in mind that certain features of the EIA-232-D interface have also been given up. There are many other configurations, each tailored to a particular requirement and with its own merits and limitations. In the following section we shall discuss a special class of interface configurations associated with interconnection of devices having similar interface ports, even though EIA-232-D was designed to work between two dissimilar devices, a DTE and a DCE.
Null Modem
If we view the EIA-232-D interface by standing between the DTE and the DCE, it is seen that a signal which comes out of a particular pin of the DTE port goes towards the DCE on the same pin. In other words, of any pair of corresponding pins of the DTE and DCE ports, one is an output pin and the other is an input pin.
Therefore, in order to apply EIA-232-D to interconnect any two devices, it is
necessary that a DTE thinks that it is connected to a DCE, whether the other device is
actually a DCE or not. Thus, a computer and a terminal can be directly interconnected
using EIA-232-D interface if one of them has a DCE port and the other a DTE port
(Fig. 22a)
On the other hand, if both the devices which are to be interconnected have DTE ports,
one of the devices needs to be suitably modified to look like a DCE (Fig. 22b). A null
modem carries out this job externally by converting a DTE port to a DCE port and
vice versa (Fig.22c).
Fig. 22 (a) DTE-DCE interconnection of a terminal and a computer; (b) direct DTE-DTE interconnection, with one device modified to present a DCE port; (c) DTE-DTE interconnection through a null modem.
When a DTE asserts a Data Terminal Ready signal, the other DTE is immediately
given a stimulus, the Ring indicator, to believe that it has an incoming call. It
responds with its Data Terminal Ready which results in the DCE Ready signal at the
calling DTE. Thus, the equipment readiness phase is complete. Before transmitting
data, the calling DTE asserts the Request to Send which raises the Received Line
Signal Detector at the other DTE. The Request to Send signal is looped back at the
calling DTE as Clear to Send. Therefore, the circuit assurance phase is also
immediately completed and data transmission can begin.
The above discussion applies to the asynchronous mode of operation because we have not considered the clock. If the terminal devices require an external clock, the null modem cable will not serve the purpose; a synchronous null modem device which has a clock source is required. Alternatively, the internal clock of a DTE can serve the purpose. This clock, which is available on pin 24, is wired to pin 17 locally for receive timing, and to pins 15 and 17 of the other device for transmit and receive timings.
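The cross-connections described above can be summarised as a wiring table. This is a sketch of an asynchronous null modem cable only; the DB-25 pin numbers follow the EIA-232-D assignments in the text, and the layout of the dictionary is ours.

```python
# ("cross", n) means a wire to pin n of the OTHER connector,
# ("local", n) a loop back on the SAME connector.
NULL_MODEM_WIRING = {
    2:  [("cross", 3)],                  # TD  -> remote RD
    3:  [("cross", 2)],                  # RD  <- remote TD
    7:  [("cross", 7)],                  # Signal Ground straight through
    20: [("cross", 6), ("cross", 22)],   # DTR -> remote DCE Ready + Ring Ind.
    4:  [("local", 5), ("cross", 8)],    # RTS -> own CTS, remote RLSD
}

# Raising DTR "rings" the far end and returns DCE Ready there.
assert ("cross", 22) in NULL_MODEM_WIRING[20]
# Request to Send is answered locally by Clear to Send.
assert ("local", 5) in NULL_MODEM_WIRING[4]
```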
LIMITATIONS OF EIA-232-D
Although EIA-232-D is the most popular physical layer interface, its use in computer networking is limited to low data rates and short distance data transmission applications. The distance between a DTE and a DCE is limited to 15 meters, beyond which modems are necessary. Even a small industrial plant or an office requires modems between the host and its terminals. As regards the data rate, the EIA-232-D interface meets the local transmission requirements, which are usually below 9600 bps, but higher data rates of 48 kbps and above are required for computer networking. The upper limit of 20 kbps of EIA-232-D is not sufficient for these applications.
The above limitations of the EIA-232-D interface are due to the following two
reasons:
1. Unbalanced transmission mode of its signals.
2. Shared common ground for all signals flowing in both the directions.
Raised ground potential, crosstalk and noise due to these factors result in introduction
of errors at high bit rates and for longer separation between the DTE and the DCE.
These limitations of the EIA-232-D have been overcome in the interface standards
developed subsequently.
RS-449 INTERFACE
In the early 1970s, the EIA introduced RS-422-A, RS-423-A and RS-449 interfaces to
overcome the limitations of RS-232-C. RS-422-A and RS-423A cover only the
electrical specifications, and RS-449 covers mechanical, functional and procedural
specifications. These specifications are compatible with EIA-232-D so that a device
having EIA-232-D interface can be interconnected to another having the RS-449
interface. CCITT also adopted RS-449, RS-423-A and RS-422-A subsequently and published the corresponding recommendations V.54, V.10 and V.11. Procedural specifications are the
same as in EIA-232-D and, therefore, have not been described again.
Mechanical Specifications
RS-449 gives detailed mechanical specifications of the interface. Since RS-449
incorporates more than 25 signals, two connectors, one with 37 pins and the other
with 9 pins have been specified. Mechanical designs of the connectors are as per ISO
4902 standard. All signals associated with the basic operation of the interface appear
on the 37-pin connector. The secondary channel circuits are grouped on the 9-pin
connector. Table 2 gives a list of the signals present in the RS-449 interface with their
pin assignments. For purposes of comparison, we have included the signals which are
present in the EIA-232-D interface also in the table.
Mechanical compatibility between EIA-232-D and RS-449 is accomplished at
connector level using an adapter as shown in Fig. 25.
The RS-449 standard also specifies the maximum cable length and the corresponding
data rate supported by the cable. Figure 26 shows this relationship graphically.
Fig. 25 Adapters for EIA-232-D and RS-449 interfaces: a 25-pin EIA-232-D DTE port is adapted to the 37-pin and 9-pin connectors of an RS-449 DCE, and a 37-pin/9-pin RS-449 DTE port is adapted to a 25-pin EIA-232-D DCE.
Electrical Specifications
To ensure electrical compatibility with EIA-232-D, both balanced and unbalanced
transmissions can be used. RS-422-A specifies electrical characteristics of the
balanced circuits while RS-423-A specifies electrical characteristics of the unbalanced
circuits. Circuits of RS-449 are divided into two categories. Category I circuits are as
follows:
1. Send Data (SD)
2. Receive Data (RD)
3. Terminal Timing (TT)
4. Send Timing (ST)
5. Receive Timing (RT)
For data rates of less than 20 kbps (upper limit for EIA-232-D circuits), Category I
circuits may be implemented using either RS-422-A or RS-423-A electrical
characteristics. For data rates over 20 kbps, balanced RS-422-A electrical
characteristics must be used. Circuits belonging to Category II are always
implemented using RS-423-A characteristics.
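This selection rule can be sketched as follows. The Category I mnemonics are those listed above (the full standard may list further Category I circuits on the page not reproduced here); "LL" is used below as an illustrative Category II circuit.

```python
def rs449_electrical(circuit, data_rate_bps):
    """Choose the electrical characteristic for an RS-449 circuit,
    per the rules above (sketch)."""
    category_one = {"SD", "RD", "TT", "ST", "RT"}  # list as given in the text
    if circuit not in category_one:
        return "RS-423-A"               # Category II: always unbalanced
    if data_rate_bps > 20_000:
        return "RS-422-A"               # balanced mandatory above 20 kbps
    return "RS-422-A or RS-423-A"       # either may be used

assert rs449_electrical("SD", 48_000) == "RS-422-A"
assert rs449_electrical("SD", 9_600) == "RS-422-A or RS-423-A"
assert rs449_electrical("LL", 48_000) == "RS-423-A"
```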
V.35 Interface
The V.35 interface was originally specified by CCITT as an interface for 48 kbps line transmission. It has since been adopted for all line speeds above 20 kbps.
V.35 is a mixture of balanced (like RS-422-A) and common earth (like RS-232) signal interfaces. The control lines, including DTR, DSR, DCD, RTS and CTS, are single-wire common earth interfaces, functionally compatible with RS-232 level signals. The data and clock signals are balanced, RS-422-A-like signals.
The control signals in V.35 are common earth single wire interfaces because these
signal levels are mostly constant or vary at low frequencies. The high frequency data
and clock signals are carried by balanced lines. Thus single wires are used for the low
frequencies for which they are adequate, while balanced pairs are used for the high
frequency data and clock signals.
The V.35 plug is standard. It is a black plastic plug about 20 mm by 70mm, often with
gold plated contacts and built-in hold down and mating screws. The V.35 plug is
roughly 30 times the price of a DB25.
Characteristics and standards:
Electrical: ITU V.11 and V.28 recommendations
Circuit specifications: ITU V.35 recommendation
Mechanical: ISO IS 2593
G.703 Interface
It is probably the most cost-competitive solution for connecting data communications equipment to 2 Mbps leased line private circuits. This interface can work from 64 kbps to 2 Mbps. The functional specifications are defined in G.704, while G.703 covers the electrical specifications. The maximum cable length is 800 meters, using a nine-pin connector.
Specifications
ITU-T G. 703 interface specification
The maximum signaling rate of HSSI is 52 Mbps. At this rate, HSSI can handle the T3 speeds (45 Mbps) of many of today's fast WAN technologies, as well as the Optical Carrier-1 (OC-1) speeds (52 Mbps) of the synchronous digital hierarchy (SDH). In addition, HSSI can easily provide high-speed connectivity between LANs, such as Token Ring and Ethernet.
where r is a factor related to the filter characteristics, and its value lies in the range 0-1.
ASK is very sensitive to noise and finds limited application in data transmission. It is
used at very low bit rates, of less than 100 bps.
Frequency Shift Keying (FSK)
In Frequency Shift Keying (FSK), the frequency of the carrier is shifted between two discrete values, one representing binary ―1‖ and the other representing binary ―0‖ (Fig. 3). The carrier amplitude does not change. FSK is relatively simple to implement. It is used extensively in low speed modems having bit rates below 1200 bps.
The instantaneous value of the FSK signal is given by
V(t) = d sin (2πf1t) + (1 − d) sin (2πf0t)
where f1 and f0 are the frequencies corresponding to binary ―1‖ and ―0‖ respectively, and d is the data signal variable as before, taking the values 1 and 0 so that only one term is present at a time.
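The equation above can be evaluated directly. This is a sketch; the two tone frequencies below are illustrative choices, not values from the text.

```python
import math

def fsk_sample(t, bit, f1=1300.0, f0=2100.0):
    """Instantaneous FSK value: V(t) = d*sin(2*pi*f1*t) + (1-d)*sin(2*pi*f0*t)."""
    d = 1 if bit else 0
    return (d * math.sin(2 * math.pi * f1 * t)
            + (1 - d) * math.sin(2 * math.pi * f0 * t))

# Only one tone is present at a time: a "1" bit yields the f1 sinusoid alone.
t = 1.0 / (4 * 1300.0)                  # a quarter period of f1
assert abs(fsk_sample(t, 1) - 1.0) < 1e-9
```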
From the above equation, it is obvious that the FSK signal can be considered to be
comprising two ASK signals with carrier frequencies f1 and f0. Therefore, the
frequency spectrum of the FSK signal is as shown in Fig. 4.
4 PSK Demodulator
Figure 9 shows a 4 PSK demodulator. The reference carrier is recovered from the received modulated carrier. As in the modulator, a π/2 phase shifted carrier is also generated. When these carriers are multiplied with the received signal, we get
sin (2πfct + θ) sin (2πfct) = ½ cos θ − ½ cos (4πfct + θ)
and
sin (2πfct + θ) sin (2πfct + π/2) = ½ cos (θ − π/2) − ½ cos (4πfct + θ + π/2)
where θ is the phase of the received carrier.
The multiplier outputs are passed through low pass filters to remove the 2fc frequency component and are applied to the comparators which generate the dibits. Table 1 gives the outputs of the low pass filters for various values of the input phase θ.
θ U V A B
π/4 0.35 0.35 0 0
3π/4 0.35 –0.35 0 1
5π/4 –0.35 –0.35 1 1
7π/4 –0.35 0.35 1 0
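The decision logic can be reproduced numerically. This is a sketch of the comparator stage of Fig. 9, taking U as the DC output of the π/2-shifted branch and V as that of the in-phase branch.

```python
import math

def demodulate_4psk(theta):
    """Recover the dibit A, B from the received carrier phase theta (sketch)."""
    u = 0.5 * math.sin(theta)   # DC term of the pi/2-shifted branch
    v = 0.5 * math.cos(theta)   # DC term of the in-phase branch
    a = 1 if u < 0 else 0       # comparator decisions
    b = 1 if v < 0 else 0
    return a, b

# Reproduces the table row by row.
assert demodulate_4psk(math.pi / 4) == (0, 0)
assert demodulate_4psk(3 * math.pi / 4) == (0, 1)
assert demodulate_4psk(5 * math.pi / 4) == (1, 1)
assert demodulate_4psk(7 * math.pi / 4) == (1, 0)
```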
EXAMPLE 1
1. What are the phase states of the carrier when the bit stream
1 0 1 1 1 0 0 1 0 0
is applied to the 4 PSK modulator shown in Fig. 8?
2. If the recovered carrier at the demodulator is out of phase by π radians, what will be the output when the above 4 PSK carrier is applied to the demodulator shown in Fig. 9?
Solution
1. Modulator input (dibits): 10 11 10 01 00
Phase states of the transmitted carrier: 7π/4, 5π/4, 7π/4, 3π/4, π/4
2. Relative phase with respect to the recovered carrier: 3π/4, π/4, 3π/4, 7π/4, 5π/4
Fig. 10 Differential BPSK modulator: the input data A passes through an encoder and a level shifter to the BPSK modulator, producing the differential BPSK signal.
A Φt–1 Φt Mt–1 Mt
0 0 0 0 0
0 π π 1 1
1 0 π 0 1
1 π 0 1 0
EXAMPLE 2
Write the phase states of the differential BPSK carrier for input data stream
100110101. The starting phase of the carrier can be taken as 0.
Solution
A: 1 0 0 1 1 0 1 0 1
Φ: π π π 0 π π 0 0 π
Figure 11 shows the demodulation scheme for the differential BPSK signal. The received signal is delayed by one bit and multiplied by the received signal. In other words, the carrier phase states of the adjacent bits are multiplied. Adjacent phase states may be in phase or out of phase. If they are in phase, the multiplier output is positive, and if they are out of phase, the multiplier output is negative.
sin² (2πfct) = sin² (2πfct + π) = ½ − ½ cos (4πfct)
sin (2πfct) sin (2πfct + π) = −½ + ½ cos (4πfct)
The low pass filter allows only the DC component to pass through. Thus polarity of
the signal at the filter output reflects the phase change. The comparator generates the
demodulated data signal.
The differential demodulator does not require phase coherent carrier for
demodulation. Also, note that there is no decoder corresponding to the encoder in the
modulator. If a phase-coherent demodulator is used in place of the differential
demodulator, a decoder will be required at the output of the demodulator.
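The differential encoding and delay-and-compare detection above can be sketched as follows, representing the two carrier phases 0 and π as the integers 0 and 1. An input ―1‖ inverts the phase and an input ―0‖ leaves it unchanged, per the truth table above.

```python
def dbpsk_encode(bits, start_phase=0):
    """Differential BPSK encoding: a '1' inverts the carrier phase."""
    phase, phases = start_phase, []
    for bit in bits:
        if bit == 1:
            phase ^= 1            # 0 stands for phase 0, 1 for phase pi
        phases.append(phase)
    return phases

def dbpsk_decode(phases, start_phase=0):
    """Differential detection: compare each phase with the previous one,
    as the delay-and-multiply demodulator of Fig. 11 does."""
    prev, bits = start_phase, []
    for phase in phases:
        bits.append(1 if phase != prev else 0)
        prev = phase
    return bits

data = [1, 0, 0, 1, 1, 0, 1, 0, 1]        # the bit stream of Example 2
phases = dbpsk_encode(data)
assert dbpsk_decode(phases) == data        # round trip recovers the data
```

Note that the decoder needs no absolute phase reference, only the previous symbol, which is exactly why the differential demodulator works without a phase-coherent carrier.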
Differential 4 PSK
Just like differential BPSK modulator, differential 4 PSK modulator can also be
implemented using an encoder before a 4 PSK modulator as shown in Fig. 12.
Fig. 12 Differential 4 PSK modulator: the input dibit A, B is encoded into M, N, which drive the 4 PSK modulator.
The encoder logic is so designed that its outputs M and N modulate the carrier to
produce the required phase changes in the carrier. Table 3a shows the relation
between the input dibit AB and the phase changes of the modulated carrier. This
modulation scheme has been standardized in CCITT recommendation V.26. Table 3b
shows the relation between MN bits and the corresponding phase of the modulated
carrier. Table 3c gives the encoder logic derived from Tables 3a and 3b, from which Boolean expressions for Mt and Nt in terms of the input dibit A, B and the previous encoder state can be obtained.
EXAMPLE 3
The following bit stream is applied to the differential 4 PSK modulator described in Table 3. Write the carrier phase states taking the initial carrier phase as reference.
1 0 1 1 1 1 0 0 0 1
Solution
Bit stream (dibits): 10 11 11 00 01
Carrier phase states: 3π/2, π/2, 3π/2, 3π/2, 0
16 Quadrature Amplitude Modulation (QAM)
We can generalize the concept of differential phase shift keying to M equally spaced phase states. The bit rate then becomes n times the baud rate, where n is such that 2^n = M. This is called M-ary PSK or simply MPSK. The phase states of the MPSK signal are equidistant from the origin and are separated by 2π/M radians (Fig. 13). As M is increased, the phase states come closer together and result in degraded error rate performance because of the reduced phase detection margin. In practice, differential PSK is used up to M = 8.
Fig. 13 Phase states of M-ary PSK.
modulates the carrier. The even numbered bits are combined in a similar manner to modulate the other, π/2 phase shifted carrier. The modulated carriers are combined to get the 16 QAM output.
It can be shown that 16 QAM gives better performance than 16 PSK. Of the basic modulation methods, PSK comes closest to Shannon's limit for bit rate, which we studied in Chapter 1. QAM displays further improvement over PSK.
MODEM
The term ‗modem‘ is derived from the words MOdulator and DEModulator. A modem contains a modulator as well as a demodulator. The digital modulation/demodulation schemes discussed above are implemented in modems. Most modems are designed to utilize the analog voice band service offered by the telecommunication network. Therefore, the modulated carrier generated by a modem ―fits‖ into the 300-3400 Hz bandwidth of the speech channel.
Fig. 14 Phase states of 16 quadrature amplitude modulation.
A typical data connection set up using modems is shown in Fig. 16. The digital
terminal devices which exchange digital signals are called Data Terminal Equipment
(DTE). Two modems are always required, one at each end. The modem at the
transmitting end converts the digital signal from the DTE into an analog signal by
modulating a carrier. The modem at the receiving end demodulates the carrier and
hands over the demodulated digital signal to the DTE.
The transmission medium between the two modems can be a dedicated leased circuit
or a switched telephone circuit. In the latter case, modems are connected to the local
telephone exchanges. Whenever data transmission is required, connection between the modems is established through the telephone network.
Fig. 16 A typical data connection set up using modems: DTE - Modem - Telephone Network - Modem - DTE.
The transmitter and receiver in a modem comprise several signal processing circuits
which include a modulator in the transmitter and a demodulator in the receiver.
Types of Modems
Modems can be of several types and they can be categorized in a number of ways.
Categorization is usually based on the following basic modem features:
1. Directional capability – Half duplex modem and full duplex modem.
2. Connection to the line – 2- wire modem and 4-wire modem.
3. Transmission mode – Asynchronous modem and synchronous modem.
Half Duplex and Full Duplex Modems.
A half duplex modem permits transmission in one direction at a time. If a carrier is
detected on the line by the modem, it gives an indication of the incoming carrier to the
DTE through a control signal of its digital interface (Fig. 18a). So long as the carrier
is being received, the modem does not give clearance to the DTE to transmit.
Fig. 18 Incoming carrier indication by a half duplex modem.
2W – 4W Modems.
The line interface of the modem can have a 2-wire or a 4-wire connection to the
transmission medium. In a 4-wire connection, one pair of wires is used for the
outgoing carrier and the other is used for the incoming carrier (Fig. 19). Full duplex
and half duplex modes of data transmission are possible on a 4-wire connection. As
the physical transmission path for each direction is separate, the same carrier
frequency can be used for both the directions.
A leased 2-wire connection is cheaper than a 4-wire connection because only one pair
of wires is extended to the subscriber‘s premises. The data connection established
through telephone exchanges is also a 2-wire connection. For the 2-wire connection, modems with a 2-wire line interface are required. Such modems use the same pair of
wires for outgoing and incoming carriers. Half duplex mode of transmission using the
same frequency for the incoming and outgoing carriers can be easily implemented
(Fig. 20a). The transmit and receive carrier frequencies can be the same because only
one of them is present on the line at a time.
For full duplex mode of operation on a 2-wire connection, it is necessary to have two
transmission channels, one for the transmit direction and the other for the receive
direction (Fig. 20b). This is achieved by frequency division multiplexing of two
different carrier frequencies. These carriers are placed within the bandwidth of the speech channel. The modem transmits data on one carrier and receives data from the other end on the other carrier. A hybrid is provided in the 2-wire modem to couple the line to its modulator and demodulator (Fig. 21).
Note that available bandwidth for each carrier is reduced to half. Therefore, the baud
rate is also reduced to half. There is a special technique which allows simultaneous
transmission of incoming and outgoing carriers having the same frequency on the 2-
wire transmission medium. Full bandwidth of the speech channel is available to both
the carriers simultaneously. This technique is called echo cancellation technique and
is implemented in high speed 2-wire full duplex modems.
Fig. 22 Echo cancellation in a 2-wire full duplex modem.
Scrambler and descrambler
As mentioned above, it is essential to have sufficient transitions in the transmitted
data for clock extraction. A scrambler is provided in the transmitter to ensure this. It
uses an algorithm to change the data stream received from the terminal in a controlled
way so that a continuous stream of zeros or ones is avoided. The scrambled data is
descrambled at the receiving end using a complementary algorithm.
There is another reason for using scramblers. It is often seen in data communications
that computers transmit ―idle‖ characters for relatively long periods of time and then there is a sudden burst of data. The effect is seen as repeated errors at the beginning of the data. The reason for these errors is the sensitivity of the receiver clock phase to certain data patterns. If the transmission line has a poor group delay characteristic in some part of the spectrum and the repeated data pattern concentrates the spectral energy in that part of the spectrum, the recovered clock phase can be offset from its mean position. A drifted clock phase results in errors when the data bits are regenerated.
This problem can be overcome by properly equalizing the transmission line but the
long term solution is to always randomize the data before it is transmitted so that
pattern sensitivity of the clock phase is avoided. The scramblers randomize the data
and thus avoid the errors due to pattern sensitivity of the clock phase.
The scrambler at the transmitter consists of a shift register with feedback loops and exclusive-OR gates. Figure 23 shows the scrambler used in the V.27 4800 bps modem. Note that in modulo-2 arithmetic, the addition and subtraction operations are the same.
Thus, a scrambler effectively divides the input data stream by the polynomial 1 + x^-6 + x^-7. This polynomial is called the generating polynomial. By proper choice of the polynomial, it can be assured that undesirable bit sequences are avoided at the output. The generating polynomials recommended by CCITT for scramblers are given in Table 4.
V.22, V.22 bis: 1 + x^-14 + x^-17
V.27: 1 + x^-6 + x^-7
V.29, V.32, V.26ter: 1 + x^-18 + x^-23
V.32: 1 + x^-5 + x^-23
To get back the data sequence at the receiving end, the scrambled data stream is multiplied by the same generating polynomial. The descrambler is shown in Fig. 24.
bi = ci-6 + ci-7
a′i = ci + bi = ci + ci-6 + ci-7 = ci (1 + x^-6 + x^-7) = ai
In the above analysis, we have assumed that there was no transmission error. If an
error occurs in the scrambled data, it is reflected in three data bits after descrambling.
In the expression
for descrambler output, note that if one of the scrambled bits c_i is received wrong, a'_i,
a'_(i+6) and a'_(i+7) will be affected as c_i moves along the shift register. Therefore,
scramblers result in an increased error rate, but their usefulness outweighs this limitation.
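As a concrete sketch, the scrambler/descrambler pair and the three-fold error multiplication can be modelled in a few lines of Python (an illustrative model only, with taps at positions 6 and 7 corresponding to the polynomial 1 + x^-6 + x^-7; not any vendor's implementation):

```python
def scramble(bits, taps=(6, 7)):
    # self-synchronizing scrambler: c[i] = a[i] XOR c[i-6] XOR c[i-7]
    out = []
    for i, a in enumerate(bits):
        c = a
        for t in taps:
            if i >= t:
                c ^= out[i - t]
        out.append(c)
    return out

def descramble(bits, taps=(6, 7)):
    # feed-forward descrambler: a[i] = c[i] XOR c[i-6] XOR c[i-7]
    out = []
    for i, c in enumerate(bits):
        a = c
        for t in taps:
            if i >= t:
                a ^= bits[i - t]
        out.append(a)
    return out
```

A long run of "1"s is randomized on the line yet recovered intact, and flipping a single line bit produces exactly three errors after descrambling, as derived above.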
Block Schematic of a Modem
With this background, we can now describe the detailed block schematic of a modem.
The modem design and complexity vary depending on the bit rate, type of modulation
and other basic features as discussed above. Low speed modems up to 1200 bps are
asynchronous and use FSK. Medium speed modems from 2400 to 4800 bps use
differential PSK. High speed modems which operate at 9600 bps and above employ
QAM and are the most complex. Medium and high speed modems operate in the
synchronous mode of transmission.
Figure 25 shows the important components of a typical synchronous differential PSK
modem. It must, however, be borne in mind that this design gives only the general
functional picture of the modem. Actual implementation will vary from vendor to
vendor.
Digital Interface.
The digital interface connects the internal circuits of the modem to the DTE. On the
DTE side, it consists of several wires carrying different signals. These signals come
either from the DTE or from the modem. The digital interface contains drivers and
receivers for these signals. A brief description of some of the important signals is
given below.
1. Transmitted Data (TD) signal from the DTE to the modem carries data to be
transmitted.
2. Received Data (RD) signal from the modem carries the data received from the
other end.
3. DTE Ready (DTR) signal from the DTE indicates readiness of the DTE to
transmit and receive data.
4. Data Set Ready (DSR) signal from the modem indicates its readiness to
transmit and receive data signals.
5. Request to Send (RTS) signal from the DTE seeks permission of the modem
to transmit data.
6. Clear to Send (CTS) signal from the modem gives clearance to the DTE to
transmit its data. CTS is given in response to the RTS.
7. Received line signal detector signal from the modem indicates that the
incoming carrier has been detected on the line interface.
8. Timing signals are the clock signals from the DTE to the modem and from the
modem to the DTE for synchronous transmission.
Digital interface has been standardized so that there are no compatibility problems.
There are several standards, but the most common standard digital interface is
EIA232D. There are equivalent CCITT recommendations also.
Scrambler.
A scrambler is incorporated in the modems which operate at data rates of 4800 bps
and above. The data stream received from the DTE at the digital interface is applied to
the scrambler. The scrambler divides the data stream by the generating polynomial
and its output is applied to the encoder.
Encoder.
An encoder consists of a serial to parallel converter for grouping the serial data bits
received from the scrambler, e.g., in a modem employing 4 PSK, dibits are formed.
The data bit groups are then encoded for differential PSK.
Modulator.
A modulator changes the carrier phase as per the output of the encoder. A pulse
shaping filter precedes the modulator to reduce intersymbol interference. A raised
cosine pulse shape is usually used. The modulator output is passed through a band pass
filter to restrict the bandwidth of the modulated carrier to the specified frequency
band.
Compromise Equalizer.
It is a fixed equalizer which provides pre-equalization of the anticipated gain and
delay characteristics of the line.
Line Amplifier.
The line amplifier is provided to bring the carrier level to the desired transmission
level. Output of the line amplifier is coupled to the line through the line interface.
Transmitter Timing Source.
Synchronous modems have an in-built crystal clock source which generates all the
timing references required for the operation of the encoder and the modulator. The
clock is also supplied to the DTE through the digital interface. The modem has
provision to accept the external clock supplied by the DTE.
Transmitter Control.
This circuit controls the carrier transmitted by the modem. When the RTS is received
from the DTE, it switches on the outgoing carrier and sends it on the line. After a
brief delay, it generates the CTS signal for the DTE so that it may start transmitting
data. In half duplex modems CTS is not given if the modem is receiving a carrier.
Training Sequence Generator.
For reception of the data signals through the modems, it is necessary that the
following operational conditions are established in the receiver portion of the modems
beforehand:
1. The carrier for demodulation is detected and recovered, the gain of the AGC
amplifier is adjusted, and the absolute phase reference of the recovered carrier is established.
2. The adaptive equalizer is conditioned for the line characteristics.
3. The receiver timing clock is synchronized.
4. The descrambler is synchronized to the scrambler.
These functions are carried out by sending a training sequence. On receipt of the RTS
signal from the DTE, the modem transmits a carrier modulated with the training
sequence of fixed length and then gives the CTS signal to the DTE so that it may commence
transmission of its data. From the training sequence, the modem at the receiving end
recovers the carrier, establishes its absolute phase reference, conditions its adaptive
equalizer and synchronizes its clock and descrambler. The composition of the
training sequence depends on the type of the modem. We will examine some of the
training sequences while discussing the modem standards later.
Line Interface.
The line interface provides connection to the transmission facilities through coupling
transformers. The coupling transformers isolate the line for DC signals. The
transmission facilities provide a two-wire or four-wire connection between the two
modems. For a four-wire connection, there are separate transformers for the transmit
and receive directions. For a 2-wire connection, the line interface is equipped with a
hybrid.
Receive Band Limiting Filter.
In the receive direction, the band limiting filter selects the received carrier from the
signals present on the line. It also removes the out-of-band noise.
AGC Amplifier.
Automatic Gain Control (AGC) amplifier provides variable gain to compensate for
carrier-level loss during transmission. The gain depends on the received carrier level.
Equalizer.
The equalizer section of the receiver corrects the attenuation and group delay
distortion introduced by the transmission medium and the band limiting filters. Fixed,
manually adjustable or adaptive equalizers are provided depending on speed, line
condition and the application. In high speed dial up modems, an adaptive equalizer is
provided because characteristics of the transmission medium change on each instance
of call establishment.
Carrier Recovery Circuit.
The carrier is recovered from the AGC amplifier output by this circuit. The recovered
carrier is supplied to the demodulator. An indication of the incoming carrier is given
at the digital interface.
Demodulator.
The demodulator recovers the digital signal from the received modulated carrier. The
carrier required for demodulation is supplied by the carrier recovery circuit.
There is another type of echo which is called the far-end echo. Far-end echo is caused
by the hybrids present in the interconnecting telecommunication link. It is
characterized by low amplitude but long delay. For terrestrial connections, the delay
can be of the order 40 ms and for the satellite based connections, it is of the order of
half a second.
The echo, being at the same carrier frequency as the received carrier, interferes with
the demodulation process and needs to be removed. For this purpose, an echo
canceller is built into the high-speed modems. It generates a copy of the echo from
the transmitted carrier and subtracts it from the received signal (Fig. 27).
The echo canceller circuit consists of a tapped-delay line with a set of coefficients
which are adjusted to get the minimum echo at the receiver input. This adjustment is
carried out when the training sequence is being transmitted.
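The tapped-delay-line idea can be sketched as follows (an illustrative Python model with fixed, hand-picked coefficients; in a real modem the taps are adjusted adaptively during the training sequence, which is not shown here):

```python
def cancel_echo(tx, rx, taps):
    # estimate the echo as a weighted sum of delayed transmitted samples
    # (a tapped delay line) and subtract it from each received sample
    out = []
    for i in range(len(rx)):
        echo_est = sum(taps[k] * tx[i - k] for k in range(len(taps)) if i >= k)
        out.append(rx[i] - echo_est)
    return out

# synthetic line: received signal = far-end signal + echo of our own carrier
tx = [1.0, -1.0, 1.0, 1.0, -1.0]            # local transmitted samples
far = [0.2, 0.1, -0.2, 0.1, 0.2]            # far-end signal we want to keep
taps = [0.5, 0.25]                          # assumed (known) echo path
rx = [far[i] + sum(taps[k] * tx[i - k] for k in range(len(taps)) if i >= k)
      for i in range(len(tx))]
```

With the taps matching the echo path, the subtraction leaves only the far-end signal.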
Secondary Channel.
We have seen that a DTE needs to exchange RTS/CTS signals with the modem
before it transmits data. On receipt of the RTS signal, the modem gives the CTS after
a certain delay. During this period, it transmits the training sequence so that the
modem at the other end may detect the carrier, extract the clock and synchronize the
descrambler. If the mode of operation is half duplex, each reversal of the direction of transmission
involves RTS-CTS delay and thus, reduces the effective throughput. In most of the
data communication situations, the receiver sends short acknowledgements for every
received data frame and for transmitting these acknowledgements the direction of
transmission must be reversed. To avoid frequent reversal of direction of
transmission, a low speed secondary channel is provided in the modems (Fig. 28).
The secondary channel operates at 75 bps and uses FSK. The secondary channel has
its own RTS, CTS and other control signals which are available at the digital interface
of the modem. It should be noted that the main channel is used in half duplex mode
for data transmission and the DTEs are configured to send the acknowledgements on
the secondary channel.
Test Loops.
Modems are provided with the capability for locating faults in the digital connection
from DTE to DTE. The testing procedure involves sending test data and looping it
back at various stages of the connection. The test pattern can be generated internally by the
modem or applied externally using a modem tester. The common
test configurations are shown in Fig. 29.
1. Loop 1: Digital loopback. This loop is set up as close as possible to the digital
interface.
2. Loop 2: Remote digital loopback. This loop checks the line and the remote
modem. It can be used only in full duplex modems.
3. Loop 3: Local analog loopback. The modulated carrier at the transmitter
output of the local modem is looped back to the receiver input. The loopback
may require some attenuators to adjust the level.
4. Loop 4: Remote analog loopback. This loop arrangement is applicable for 4-
wire line connections only. The two pairs at the distant end are disconnected
from the modem and connected to each other.
5. Loop 5: Local digital loopback and loopforward. In this case, the local digital
loopback is provided for the local modem and remote digital loopback is
provided for the remote modem.
6. Loop 6: Local analog loopback and loopforward. In this case, the local modem
has analog loopback and the remote modem has remote analog loopback.
The test configurations can be set up by pressing the appropriate switches provided on
the modems. The digital interface also provides some control signals for activating the
loop tests. When in the test mode, the modem indicates its test status to the local DTE
through a control signal in the digital interface.
All modems do not have provision for all these tests. Test features are specific to the
modem type. Test loops 1 to 4 have been standardized by CCITT in their
Recommendation V.54.
The channel selection for the transmit and receive directions can be done through the
digital interface by switching on the appropriate control circuit.
CCITT V.22 Modem
This modem provides full duplex synchronous transmission over a 2-wire leased line or
the switched telephone network. It transmits data at 1200 bps. As an option, it can also
operate at 600 bps.
Scrambler.
A scrambler and a descrambler having the generating polynomial 1 + x^-14 + x^-17 are
provided in the modem.
Modulation.
Differential 4 PSK over two channels is utilised in this modem. The dibits are encoded
as phase changes as given in Table 5. The carrier frequencies are:
Table 5 Dibit Encoding (CCITT V.22 Modem)

Dibit (A B)   Phase change
0 0           π/2
0 1           0
1 1           3π/2
1 0           π
At 600 bps, the carrier phase changes are 3π/2 and π/2 for binary "1" and "0"
respectively.
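Table 5's differential encoding can be sketched in Python (an illustrative model; the phase-change values are those of the table, and the absolute starting phase is arbitrary):

```python
# phase change in degrees per dibit, per Table 5
PHASE_CHANGE = {(0, 0): 90, (0, 1): 0, (1, 1): 270, (1, 0): 180}

def dpsk_symbol_phases(bits, start_phase=0):
    # each dibit advances the carrier phase by the tabulated amount;
    # information is carried in the change, not the absolute phase
    phase = start_phase
    phases = []
    for i in range(0, len(bits) - 1, 2):
        phase = (phase + PHASE_CHANGE[(bits[i], bits[i + 1])]) % 360
        phases.append(phase)
    return phases
```

For example, the bit stream 001011 forms the dibits 00, 10, 11, giving successive carrier phases of 90, 270 and 180 degrees from a zero start.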
Equalizer.
A fixed compromise equalizer, shared equally between the transmitter and the receiver,
is provided in the modem.
Test Loops.
Test loops 2 and 3 as defined in Recommendation V.54 are provided in the modem.
For self-test, an internally generated binary pattern of alternating "0"s and "1"s is
applied to the scrambler. At the output of the descrambler, an error detector identifies
the errors and gives a visual indication.
CCITT V.22bis Modem
This modem provides full duplex synchronous transmission on a 2-wire leased line or
the switched telephone network. The bit rates supported are 2400 and 1200 bps at a
modulation rate of 600 baud.
Scrambler.
The modem incorporates a scrambler and a descrambler having the generating
polynomial 1 + x^-14 + x^-17.
Modulation.
At 2400 bps, the modem uses 16 QAM having a constellation as shown in Fig. 30.
From the scrambled data stream quadbits are formed. The first two bits of the
quadbits are coded as quadrant change as given in Table 6. The last two bits of the
quadbits determine the phase within a quadrant as shown in Fig. 30.
Table 6 Quadrant Changes Determined by the First Two Bits of Quadbits (CCITT
V.22bis Modem)

Last quadrant    First two bits of quadbit        (Next quadrant)
                 00      01      11      10
1                2       1       4       3
2                3       2       1       4
3                4       3       2       1
4                1       4       3       2
At 1200 bps, the dibits are formed from the scrambled data stream and coded as
quadrant changes shown above. In each quadrant, the phase state corresponding to ―0‖
is transmitted.
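The quadrant changes of Table 6 amount to a simple rotation of the quadrant number, which can be sketched as follows (a hypothetical helper for illustration, not part of any standard implementation):

```python
# quadrants advanced per first two bits of a quadbit, read off Table 6:
# 00 -> +1 quadrant, 01 -> no change, 11 -> +3 quadrants, 10 -> +2 quadrants
QUADRANT_SHIFT = {(0, 0): 1, (0, 1): 0, (1, 1): 3, (1, 0): 2}

def next_quadrant(last, first_two_bits):
    # quadrants are numbered 1..4; the dibit rotates the quadrant
    return (last - 1 + QUADRANT_SHIFT[first_two_bits]) % 4 + 1
```

Reading the table as a rotation makes clear why every row is a cyclic shift of the one above it.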
Two carriers are used, one for the transmit and one for the receive direction; the calling
modem uses the low channel to transmit its data.
Equalizer.
A fixed compromise equalizer is provided in the modem transmitter. The modem
receiver is equipped with an adaptive equalizer.
Test Loops.
Test loops 2 and 3 as defined in Recommendation V.54 are provided in the modem.
For self-test, an internally generated binary pattern of alternating ―0‖s and ―1‖s is
applied to the scrambler. At the output of the descrambler, an error detector identifies
the errors and gives visual indication.
CCITT V.23 Modem
The modem is designed to operate in full duplex asynchronous transmission mode
over a 4-wire leased line. It can also operate in half duplex over a 2-wire leased line
and switched telephone network.
The modem can operate at two speeds – 600 bps and 1200 bps. It is equipped with the
secondary channel which operates at 75 bps.
Modulation.
The modem employs FSK over two channels. The frequencies are:
CCITT V.26ter Modem

Dibit   Phase change (Alternative A)   Phase change (Alternative B)
00      0                              π/4
01      π/2                            3π/4
11      π                              5π/4
10      3π/2                           7π/4
Scrambler.
The modem incorporates a scrambler and a descrambler. The generating polynomial
for the call-originating modem is 1 + x^-18 + x^-23. The generating polynomial of the
answering modem for transmission of its data is 1 + x^-5 + x^-23.
Test Loops.
Test loops 2 and 3 as defined in Recommendation V.54 are provided in the modem.
CCITT V.27 Modem
This modem is designed for full duplex/half duplex synchronous transmission over a
4-wire or 2-wire leased connection which is specially conditioned as per M.1020. It
operates at a bit rate of 4800 bps with a modulation rate of 1600 baud. It includes a
secondary channel which operates at 75 bps.
Scrambler.
The modem incorporates a scrambler and a descrambler having the generating
polynomial 1 + x^-6 + x^-7.
Modulation.
The modem uses differential 8 PSK for transmission at 4800 bps. The modulation
scheme is given in Table 8. The carrier frequency is 1800 Hz. The secondary channel
is the same as in V.23.
Tribit   Phase change
001      0
000      π/4
010      π/2
011      3π/4
111      π
110      5π/4
100      3π/2
101      7π/4
Equalizer.
A manually adjustable equalizer is provided in the receiver. The transmitter has
provision to send scrambled continuous binary ―1‖s for the equalizer adjustment. The
modem has means for indicating correct adjustment of the equalizer.
The first segment consists of continuous phase reversals of the carrier. It enables AGC
convergence and carrier recovery. During the second segment, the adaptive equalizer
is conditioned. A differential BPSK carrier is transmitted during this interval. The
modulating sequence is generated from every third bit of a PRBS having the
generating polynomial 1 + x^-6 + x^-7. The phase changes in the carrier are 0 and π radians
for binary "0" and "1" respectively. The third segment of the training sequence
synchronizes the descrambler. It consists of scrambled binary "1"s.
CCITT V.27ter Modem
This modem is designed for use in the switched telephone network. It is similar to the
V.27 bis modem in most respects. It incorporates additional circuits for auto
answering, ring indication, etc.
CCITT V.29 Modem
This modem is designed for point-to-point full duplex/half duplex synchronous operation on
4-wire leased circuits conditioned as per M.1020 or M.1025. It operates at a nominal
speed of 9600 bps. The fallback speeds are 7200 and 4800 bps.
Scrambler.
The modem incorporates a scrambler and a descrambler having the generating
polynomial 1 + x^-18 + x^-23.
Modulation.
The modem employs 16 state QAM with modulation rate of 2400 baud. The carrier
frequency is 1700 Hz. The scrambled data at 9600 bps is divided into quadbits. The
last three bits are coded to generate differential eight-phase modulation identical to
Recommendation V.27. The first bit along with the absolute phase of the carrier
determines its amplitude (Fig. 31). The absolute phase is established during
transmission of the training sequence.
At the fallback rate of 7200 bps, tribits are formed from the scrambled 7200 bps bit
stream. Each tribit is prefixed with a zero to make a quadbit. At the fallback rate
of 4800 bps, dibits are formed from the scrambled 4800 bps bit stream. These dibits
constitute the second and third bits of the quadbits. The first bit of the quadbits is zero
as before and the fourth bit is modulo 2 sum of the second and third bits. The phase
state diagrams for the modem operation at 7200 and 4800 bps are shown in Fig.32a
and Fig. 32b respectively.
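The quadbit-formation rules at the two fallback rates can be sketched as follows (an illustration of the rules described above; the function names are invented for this sketch):

```python
def quadbits_7200(tribits):
    # 7200 bps fallback: each tribit is prefixed with a zero
    return [(0, b1, b2, b3) for (b1, b2, b3) in tribits]

def quadbits_4800(dibits):
    # 4800 bps fallback: first bit is zero, the dibit supplies the second
    # and third bits, and the fourth bit is their modulo-2 sum
    return [(0, b1, b2, b1 ^ b2) for (b1, b2) in dibits]
```

Either way, every symbol is carried as a quadbit, so the same 16-point modulator can serve all three data rates.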
Equalizer.
An adaptive equalizer is provided in the receiver.
Training Sequence.
The training sequence is shown in Table 10. It consists of four segments which
provide for clock synchronization, establishment of absolute phase reference for the
carrier, equalizer conditioning and descrambler synchronization.
Segment   Signal                           Symbol intervals
1         No transmitted energy            48
2         Alternations                     128
3         Equalizer conditioning pattern   384
4         Scrambled binary 1s              48
The second segment consists of two alternating signal elements A and B (Fig. 31).
This sequence establishes absolute phase of the carrier.
The third segment consists of the equalizer conditioning signal which consists of
elements C and D (Fig. 31). Whether C or D is to be transmitted is decided by a
pseudo-random binary sequence at 2400 bps generated using the generating
polynomial 1 + x^-6 + x^-7. The element C is transmitted when a "0" occurs in the
sequence. The element D is transmitted when a ―1‖ occurs in the sequence.
The fourth segment consists of a continuous stream of binary "1"s which is scrambled
and transmitted. During this period, descrambler synchronization is achieved.
CCITT V.32 Modem
This modem is designed for full duplex synchronous transmission on a 2-wire leased
line or the switched telephone network. It can operate at 9600 and 4800 bps. The
modulation rate is 2400 baud.
Scrambler.
The modem incorporates a scrambler and a descrambler. The generating polynomial
for the call-originating modem is 1 + x^-18 + x^-23. The generating polynomial of the
answering modem for transmission of its data is 1 + x^-5 + x^-23.
Modulation.
The carrier frequency is 1800 Hz in both directions of transmission. Echo cancellation
technique is employed to separate the two channels. 16 or 32 state QAM is employed
for converting the digital information into the analog signal. There are two
alternatives for encoding the 9600 bps scrambled digital signal.
Nonredundant Coding.
The scrambled digital signal is divided into quadbits. The first two bits of each
quadbit, Q1n and Q2n, are differentially encoded into Y1n and Y2n respectively as per
Table 11. Y1(n-1) and Y2(n-1) are the previous values of the Y bits. The last two bits are
taken without any change, and the encoded quadbit Y1n Y2n Q3n Q4n is mapped as shown
in Fig. 33.
Fig. 33 Phase states of CCITT V.32 modem at 9600 bps when non-redundant coding
is used.
At 4800 bps, the scrambled data stream is grouped into dibits which are differentially
encoded as per Table 11 and mapped on the subset ABCD of the phasor states (Fig.
33).
Trellis Coding. Trellis coding enables detection and correction of errors which are
introduced in the transmission medium. We will study the principles of error control
using trellis coding in the next chapter. Here, suffice it to say that some additional bits
are added to a group of data bits for detecting and correcting the errors. There are
several coding algorithms for error control, and trellis coding is one of them. It is
implemented using convolution encoders.
Table 11 Differential Encoding (Nonredundant Coding)

                   Inputs Q1n Q2n
Y1(n-1) Y2(n-1)    00     01     10     11
00                 01     00     11     10
01                 11     01     10     00
10                 00     10     01     11
11                 10     11     00     01
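The differential encoding of Table 11 is equivalent to rotating the carrier through the Gray-coded quadrant cycle 00 → 01 → 11 → 10, one quarter-turn per step. This can be sketched as follows (the quarter-turn counts per input dibit are read off the table itself; the helper is hypothetical):

```python
# Gray-coded quadrant cycle; each step along it is a 90-degree rotation
CYCLE = [(0, 0), (0, 1), (1, 1), (1, 0)]
# quarter-turns per input dibit (inferred from Table 11)
STEPS = {(0, 0): 1, (0, 1): 0, (1, 0): 2, (1, 1): 3}

def diff_encode(prev_y, dibit):
    # advance the previous Y1 Y2 pair around the cycle
    return CYCLE[(CYCLE.index(prev_y) + STEPS[dibit]) % 4]
```

This is the same rotation rule as the V.22bis quadrant table, expressed on the Y bit pairs instead of quadrant numbers.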
In trellis coded V.32 modem, quadbits formed from the scrambled data stream are
converted into groups of five bits using a convolution encoder. The coding scheme is
as under:
1. The first two bits Q1n and Q2n of the quadbit are differentially encoded into Y1n
and Y2n as given in Table 12.
2. From Y1n and Y2n, Y0n is generated using the convolution encoder.
3. Y0n, Y1n and Y2n form the first three bits of the five bit code. The last two bits of
the code are the Q3n and Q4n bits of the quadbit.
Table 12 Differential Encoding (Trellis Coding)

                   Inputs Q1n Q2n
Y1(n-1) Y2(n-1)    00     01     10     11
00                 00     01     10     11
01                 01     00     11     10
10                 10     11     01     00
11                 11     10     00     01
The phase state diagram of the V.32 trellis coded modem is shown in Fig. 34.
Equalizer.
An adaptive equalizer is provided in the receiver.
Training Sequence.
A training sequence is provided in the modem for adaptive equalization, echo
cancellation, data rate selection, and for the other function described earlier. It
consists of the following five segments:
1. Alternations between states A and B (Fig. 34) for 256 symbol intervals
2. Alternations between states C and D (Fig. 34) for 16 symbol intervals
3. Equalizer and echo canceller conditioning signal of 1280 symbol intervals
Fig. 34 Phase states of CCITT V.32 modem at 9600 bps when trellis coding is used.
CCITT V.33 Modem
Modulation.
The carrier frequency is 1800 Hz in both directions of transmission. 128 state QAM
using trellis coding is employed for converting the digital information into an analog
signal. The scrambled data bits are divided into groups of six bits. The first two bits of
each six-bit group are encoded into three bits using the differential encoder followed
by a convolution encoder, as described for V.32. Seven bit code words are thus formed
and these codes are mapped on the 128 state phase diagram as shown in Fig. 35.
Fig. 35 Phase states of CCITT V.33 modem at 14400 bps.
At the fallback speed of 12,000 bps, five-bit groups are formed and the first two bits
of each group are coded into three bits using the same scheme as above.
Equalizer.
An adaptive equalizer is provided in the receiver.
Training Sequence.
The training sequence given in Table 13 is provided in the modem for adaptive
equalization, data rate selection and the other functions described earlier.
States A and B are shown in the phase state diagrams. For details of the training
sequence, the reader is advised to refer to the CCITT recommendation.
DATA MULTIPLEXERS
A modem is an intermediary device which is used for interconnecting terminals and
computers when the distances involved are large. Another data transmission
intermediary device is the data multiplexer which allows sharing of the transmission
media. Multiplexing is adopted to reduce the cost of transmission media and modems.
Figure 37 shows a simple application of data multiplexers. In the first option, 16
modems and eight leased lines are required for connecting eight terminals to the host.
In the second option, the terminals and the host are connected using two data
multiplexers. The modem requirement is reduced to two and the leased line
requirement is reduced to one.
The multiplexer ports which are connected to the terminals are called terminal ports
and the port connected to the leased line is called the line port. A multiplexer has a
built-in demultiplexer for the signals coming from the other end. The terminal
port for incoming and outgoing signals is the same: one of the several wires of the
terminal port carries the outgoing signal and another carries the incoming signal.
Besides consideration of economy, the other benefit of multiplexing is centralized
monitoring of all the channels. Data multiplexers can be equipped with diagnostic
hardware/software for monitoring the performance of individual data channels.
However, there is a possibility of catastrophic failure. If either of the multiplexers or the
leased line fails, all the terminals are cut off from the host.
Like speech channel multiplexing, data multiplexers use either frequency division
multiplexing (FDM) or time division multiplexing (TDM). In FDM, the line
frequency band is divided into sub-channels. Each terminal port is assigned one sub-
channel for transmission of its data. In TDM, the sub-channels are obtained by
assigning time intervals (time slots) to the terminals for use of the line. Time slot
allotment to the sub-channels may be fixed or dynamic. A time division multiplexer
with dynamic time slot allotment is called Statistical Time Division Multiplexer
(STDM or Stat Mux).
In the following sections we will briefly introduce the frequency division and time
division multiplexers. Stat Mux is more powerful and common than these two types
of multiplexers. It is described in considerable detail. The reader will find many new
concepts and terminology to which he has not been introduced so far. In order to
appreciate the operation of Stat Mux, it is first necessary to understand data link
protocols. The reader is strongly advised to read the section on Stat Mux only after
reading the chapter on Data Link Layer.
Frequency Division Multiplexers (FDM)
The leased line usually provides a speech channel bandwidth of 300 – 3400 Hz.
Therefore, most multiplexers are designed for this band. For frequency division
multiplexing, the frequency band is divided into several sub-channels separated by
guard bands. The sub-channels utilize frequency shift keying for modulating the
carrier. The aggregate of all sub-channels is within the speech channel bandwidth and is
an analog signal. Therefore, the multiplexer does not require a modem to connect
it to the line. A four-wire circuit is always required for the outgoing and incoming
channels.
Bandwidths of the sub-channels depend on the baud rates. Frequency division data
multiplexers provide baud rates from 50 to 600 bauds. The number of sub-channels
varies from thirty-six to four depending on the baud rate (Table 14).

Baud rate   Number of sub-channels   Aggregate capacity (bps)
50          36                       1,800
75          24                       1,800
110         18                       1,980
150         12                       1,800
600         4                        2,400
Multidrop operation of the frequency division multiplexer is shown in Fig. 38. Each
remote transmits and receives on a different frequency as determined by the remote
single channel units. The multiple line unit which is connected to the host separates
the signals received on the line. It also carries out frequency division multiplexing of
the outgoing signals.
Frequency division multiplexers are not much in use. Their major limitations are
1. Production costs are high because of analog components.
2. Total capacity is limited to 2400 bps due to the large bandwidth wasted in the
guard bands.
Time Division Multiplexers (TDM)
If all the sub-channels have the same bit rates, all the time slots have the same lengths.
If the multiplexer permits speed flexibility, the higher speed sub-channels have longer
time slots. The frame format and time slot lengths are, however, fixed for any given
configuration or number of sub-channels and their rates. Since the frame format is
fixed, time slots of all the sub-channels are always transmitted irrespective of the fact
that some of the sub-channels may not have any data to send.
Bit and Byte Interleaved TDM.
Time division multiplexers are of two types:
1. Bit interleaved multiplexer
2. Byte interleaved multiplexer.
In the bit interleaved multiplexer, each time slot is one bit long. Thus, the user data
streams are interleaved taking one bit from each stream. Bit interleaved multiplexers
are totally transparent to the terminals.
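Round-robin bit interleaving and its inverse can be sketched in a few lines of Python (assuming equal-rate sub-channels and ignoring framing overhead such as the synchronization word):

```python
def bit_interleave(streams):
    # take one bit from each sub-channel per cycle, in a fixed order
    return [bit for cycle in zip(*streams) for bit in cycle]

def bit_deinterleave(line_bits, n_channels):
    # the receiver hands every n-th line bit back to the same sub-channel
    return [line_bits[i::n_channels] for i in range(n_channels)]
```

Because the positions are fixed by the cycle, no per-slot addressing is needed, which is what makes the multiplexer transparent to the terminals.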
In the byte interleaved multiplexer, each time slot is one byte long. Therefore, the
multiplexed output consists of a series of interleaved characters of successive sub-
channels. Usually, a buffer is provided at the input of each port to temporarily
store the character received from the terminal. The multiplexer reads the buffers
sequentially. The start-stop bits of the characters are stripped during multiplexing and
reinserted after demultiplexing. It is necessary to transmit a special "idle"
character when a terminal is not transmitting.
The bit rate at the output of the multiplexer is slightly greater than the aggregate bit
rate of the sub-channels due to the overhead of the synchronization word. Another
feature of TDMs is that even though the multiplexed output is formatted, there is no
provision for detecting or correcting the errors.
Time division multiplexers permit the mixing of bit rates of the sub-channels. Their
line capacity utilization is also better than frequency division multiplexers. A line bit
rate of 9600 bps is possible.
STATISTICAL TIME DIVISION MULTIPLEXERS
A statistical time division multiplexer, Stat Mux in short, uses dynamic allotment of time
slots for transmitting data. If a sub-channel has data waiting to be transmitted, the Stat Mux
allots it a time slot in the frame (Fig. 40). The duration of the time slot may be fixed or
variable. There is a need to identify the time slots and their boundaries; therefore, some
additional control fields are required. When we examine the Stat Mux protocols later,
we will see how the time slots are identified.
Dynamic assignment allows the aggregate bit rates of the sub-channels to be more
than the line speed of the Stat Mux considering that all the terminals will not generate
traffic all the time. If sufficient aggregate traffic is assured at the input, the Stat Mux
permits full utilization of the line capacity. It is not so in TDMs, where the line time is
wasted if a time slot is not utilized by a sub-channel though another sub-channel may
have data to send.
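Dynamic slot allotment can be sketched as follows (a toy Python model: the slot layout, channel numbering and payload limit are invented for illustration and do not follow any actual Stat Mux protocol):

```python
def build_frame(queues, max_payload=64):
    # emit a (channel, data) slot only for sub-channels with pending data;
    # idle sub-channels consume no line time at all
    frame = []
    for channel, queue in enumerate(queues):
        if queue:
            take = queue[:max_payload]
            del queue[:max_payload]
            frame.append((channel, bytes(take)))
    return frame
```

A fixed-format TDM would have emitted an empty slot for every idle channel; here the line carries only channels that actually have traffic, which is the source of the capacity gain.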
Stat Mux Buffer
A Stat Mux is configured to handle an aggregate sub-channel bit rate which is more
than the line rate. It must have a buffer so that it may absorb the input traffic
fluctuations while maintaining a constant flow of multiplexed data on the line. The
Stat Mux maintains a queue in the buffer to preserve the sequence of the data bytes.
Buffer size may vary from vendor to vendor but 64 kbytes is typical. This buffer is usually shared
by both the directions of transmission, i.e., by the multiplexer and the demultiplexer
portions of a Stat Mux. To guard against the overflow, the sub-channel traffic is flow-
controlled.
Stat Mux Protocol
Some of the important issues which need to be addressed to have dynamic time slot
allotment are:
1. In a simple time division multiplexer, the location of a time slot with respect
to the synchronization word identifies the time slot because a fixed frame
format is used. But in a Stat Mux, the frame has a variable format. Therefore,
some mechanism to identify the time slots is required.
2. Lengths of the time slots are variable. There is a need to define time slot
delimiters.
Therefore, a Stat Mux protocol which defines the format of the Stat Mux frame is
required. There are several proprietary protocols but none of them is standard. We
will discuss two common Stat Mux protocols, Bit Map and Multiple-character.
The Stat Mux has a well-defined frame structure and a built-in buffer to
temporarily store data. Therefore, it is possible to enhance its capability by
implementing a data link protocol for error control. A commonly implemented data
link protocol is HDLC.
Layered Architecture
Figure 41a shows the three-layer architecture of a Stat Mux. The control sublayer
generates a multiplexed data frame with a control field to identify the data fields. It is
handed over to the data link sublayer which adds a header and a trailer to it. The
resulting frame structure in case of HDLC protocol is shown in Fig.41b. The
information field of the HDLC frame contains the frame received from the control
sublayer. Note that the address and control fields of the HDLC frame have nothing to
do with the sub-channel. They are part of the HDLC protocol. The frame check
sequence (FCS) contains the CRC code for error detection.
The first layer constitutes the physical layer which is concerned with the physical
aspects of transmitting the multiplexed bit stream on the line.
The control protocol is proprietary with each vendor and determines the overall
efficiency of the Stat Mux.
Bit Map Stat Mux Protocol
In the bit map Stat Mux protocol, the multiplexed data frame formed by the control
sublayer consists of a map field and several data fields (Fig. 42). The map field has
one bit for each sub-channel. It is two bytes long for the sixteen-port Stat Mux. If a
bit is ―1‖ in the map field, it indicates that the frame contains data field of the
corresponding sub-channel. A ―0‖ in the map field of a frame indicates that data field
of the corresponding sub-channel is missing from this particular frame.
Note that the map field is present in all frames and has a fixed length. The size of
the data field of a channel, if present, is fixed. It can be set to any value while
configuring the Stat Mux. Fixed sizes of the data fields enable the receiving Stat Mux
to identify the boundaries of these fields. For asynchronous terminal ports, the data
field size is usually set to one character. The start-stop bits are stripped before
multiplexing and reinserted after demultiplexing.
The HDLC frame transmitted on the line contains seven overhead bytes (Flag-1,
address-1, control-1, FCS-2, bit map-2) which reduce effective line utilization. If
there are N bytes in the data fields of the control frame, the maximum line utilization
efficiency E can be estimated by
E = N/(N + 7)
EXAMPLE 4
A host is connected to 16 asynchronous terminals through a pair of statistical time
division multiplexers utilizing the bit map protocol. The sixteen asynchronous
terminal ports operate at 1200 bps. The line port has a bit rate of 9600 bps. The data
link control protocol is HDLC.
1. Calculate the maximum line utilization efficiency and throughput.
2. Will there be any queues in the Stat Mux
(a) if the average character rate at all the ports is 10 cps ?
(b) If the host sends full screen display of average 1200 characters to each
terminal ?
3. How much time will the Stat Mux take to clear the queues ?
Solution
1. As N = 16, the line utilization efficiency is given by
E = 16/(7 + 16) = 0.696
Throughput = 0.696 × 9600 = 6678 bps
2. (a) At 10 cps, each port generates 10 × 8 = 80 bps, so the aggregate input traffic is
16 × 80 = 1280 bps
Since the throughput is 6678 bps, it is very unlikely there will be queues at the
terminal ports.
(b) With start and stop bits, the minimum size of a character is 10 bits. Therefore,
at 1200 bps, the host will take 10 seconds to transfer the 1200 characters of one
screen of a terminal. The Stat Mux will get 1200 × 16 = 19200 characters in 10
seconds from the host. At a throughput of 6678 bps, i.e., 6678/8 = 834.75
characters per second, the Stat Mux will transmit 834.75 × 10 = 8347.5 characters
in 10 seconds. Therefore, the queue at the end of 10 seconds
= 19200 – 8347.5 = 10852.5 characters
3. The Stat Mux will take 10852.5/834.75 = 13 additional seconds to clear the
queue.
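The arithmetic of this example can be cross-checked with a short script. This is a sketch only; the 7 overhead bytes per frame and the 10-bit asynchronous characters are taken from the text above.

```python
# Cross-check of Example 4 (bit map protocol, 16 sub-channels).

N = 16            # one data byte per sub-channel in each frame
line_rate = 9600  # bps

E = N / (N + 7)                  # line utilization efficiency
throughput = E * line_rate       # useful data bits per second

# 2(a): 16 ports at 10 characters/s, 8 data bits per multiplexed character
aggregate = 16 * 10 * 8          # 1280 bps, well below the throughput

# 2(b): 1200 characters to each of 16 terminals arrive in 10 s
chars_in = 1200 * 16
chars_out = (throughput / 8) * 10
queue = chars_in - chars_out

print(round(E, 3), round(throughput))   # 0.696 6678
print(round(queue))                     # ~10852 characters queued
print(round(queue / (throughput / 8)))  # 13 s to clear the queue
```

The queue comes out marginally below the text's 10852.5 because the text rounds the character rate to 834.75 cps before multiplying.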
In the multiple-character Stat Mux protocol, each sub-channel present in the frame
carries two overhead bytes in addition to its data bytes (as noted later, this is the
overhead of two bytes per sub-channel). If there are N sub-channels in a frame, the
maximum line utilization efficiency E can be estimated by
E = Σdi/(5 + 2N + Σdi)
where di is the number of data bytes in the ith sub-channel.
EXAMPLE 5
A host is connected to 16 asynchronous terminals through a pair of statistical time
division multiplexers utilizing the multiple-character protocol described above. The
sixteen asynchronous terminal ports operate at 1200 bps. The line port has a bit rate
of 9600 bps. The data link control protocol is HDLC and the maximum size of the
HDLC frame is 261 bytes.
1. Calculate the line utilization efficiency when all the ports generate their
maximum traffic. Will queues develop for this load ?
2. What is the maximum line utilization efficiency without having the queues ?
3. If the host sends full screen display of average 1200 characters to each
terminal, will there by any queue ?
If so, how much time will the Stat Mux take to clear the queue.
Solution
1. If all the 16 users simultaneously generate a burst of data, each HDLC frame will
contain all the sub-channels. As the HDLC frame size is 261 bytes, each sub-channel
will occupy (261 – 5)/16 = 16 bytes. The data field of each channel will be 16 – 2 =
14 bytes. Therefore,
E = (16 × 14)/261 = 0.8582
Time to transmit one frame t0 = (261 × 8)/9600 = 217.5 ms
In 217.5 ms, each port operating at 1200 bps accumulates 1200 × 0.2175/10 ≈ 26
characters, but out of these only 14 characters are transmitted in each frame; so
queues will develop.
If there are fewer sub-channels, the overhead of two bytes per sub-channel is reduced.
Therefore, the line utilization efficiency may be increased. Let there be N sub-
channels in a frame and d data bytes in each sub-channel.
Time to transmit the frame on the line t0 = (5 + 2N + Nd) × 8/9600
For no queues to develop, this must equal the time each port (at 1200 bps, 10 bits per
character) takes to generate its d characters:
10d/1200 = (5 + 2N + Nd) × 8/9600
Simplifying, we get
d = (5 + 2N)/(10 – N), N < 10
We need to solve the above equation for integer values of d and N. Line utilization
efficiency is given by
E = Nd/(5 + 2N + Nd)
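The integer solutions of this equation can be found by a simple search. The sketch below is ours, not part of the original solution:

```python
# Search the no-queue condition d = (5 + 2N)/(10 - N) for integer N and d,
# and compute the efficiency E = N*d / (5 + 2N + N*d) for each solution.

solutions = []
for N in range(1, 10):          # the condition requires N < 10
    num = 5 + 2 * N
    den = 10 - N
    if num % den == 0:          # d must be an integer
        d = num // den
        E = N * d / (5 + 2 * N + N * d)
        solutions.append((N, d, round(E, 3)))

print(solutions)   # [(5, 3, 0.5), (9, 23, 0.9)]
```

Only two integer pairs satisfy the condition; of these, N = 9 sub-channels with 23 data bytes each gives the higher queue-free efficiency.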
3. At the line rate of 9600 bps, the time taken to transmit one HDLC frame is
t0 = (261 × 8)/9600 = 217.5 ms
Assuming all the sub-channels are present in the frame, the data character transfer
rate is 224 characters per frame. Therefore, the number of data characters transferred
in 10 seconds is
(224 × 10)/t0 = 2240/0.2175 = 10298.85 characters
As in Example 4, the host delivers 19200 characters in 10 seconds, so a queue
develops. Additional time required to clear the queue
= (19200 – 10298.85) × 10/10298.85 = 8.64 s
SUMMARY
Transmission of digital signal using the limited bandwidth of the speech channel of
the telephone network necessitates use of digital modulation methods, namely,
Frequency Shift Keying (FSK), differential Phase Shift Keying (PSK) and Quadrature
Amplitude Modulation (QAM). FSK is used in the low speed modems. PSK and
QAM are used in medium and high speed modems.
A modem has two interfaces, a digital interface which is connected to the Data
Terminal Equipment (DTE) and a line interface which is connected to the
transmission line. It comprises several functional blocks besides a modulator and a
demodulator. Encoding, scrambling, equalizing and timing extraction are some of the
additional functions, carried out in a modem. CCITT recommendations for modems
are summarized below. The number within brackets is the speed of the modem in bits
per second. Half duplex modems are indicated by the letters ―HD‖.
Wire-Asynchronous Modem: V.21 (300).
Wire-Synchronous Modems: V.22 (1200), V.22bis (2400), V.26bis (2400 HD),
V.26ter (2400), V.27ter (4800), V.32 (9600).
1. The block of data bits to which check bits are added is called a data word.
2. The bigger block containing check bits is called the code word.
3. Hamming distance or simply distance between two code words is the number
of disagreements between them. For example, the distance between the two
words given below is 3 (Fig. 1).
4. The weight of a code word is the number of ―1‖ s in the code word e.g.,
11001100 has a weight of 4.
5. A code set consists of all valid code words. The valid code words have a
built-in ―characteristic‖ of the code set; a received word that does not possess
this characteristic is recognized as erroneous.
1 1 0 1 0 1 0 0
Distance = 3
0 1 0 1 1 1 1 0
Fig. 1 Hamming distance between two code words.
Error Correction
After an error is detected, there are two approaches to its correction: forward error
correction, in which the receiver locates and corrects the errors itself, and reverse
error correction, in which the receiver requests retransmission. For forward
correction, a received word containing errors is corrected to the nearest valid code
word. In Fig. 3, the received word 10110110 is at distance 3 from the valid code word
10001110 but at distance 1 from the valid code word 10100110, so it is corrected to
the latter.
Fig. 3 Error correction based on the least Hamming distance.
If the minimum distance between valid code words is D, up to (D – 1)/2 errors
(rounded down) can be corrected. More errors than this may bring the received code
word nearer to a wrong valid code word.
Bit Error Rate (BER)
In analog transmission, signal quality is specified in terms of signal-to-noise ratio
(S/N) which is usually expressed in decibels. In digital transmission, the quality of
received digital signal is expressed in terms of Bit Error Rate (BER) which is the
number of errors in a fixed number of transmitted bits. A typical error rate on a high
quality leased telephone line is as low as 1 error in 10^6 bits, or simply 1 × 10^-6.
The common methods of error detection are:
1. Parity checking
2. Checksum error detection
3. Cyclic Redundancy Check (CRC).
Each of the above methods has its advantages and limitations as we shall see in the
following section.
Parity Checking
In parity checking methods, an additional bit called a ―parity‖ bit is added to each data
word. The additional bit is so chosen that the weight of the code word so formed is
either even (even parity) or odd (odd parity) (Fig .4). All the code words of a code set
have the same parity (either odd or even) which is decided in advance.
Even Parity          Odd Parity
P  Data Word         P  Data Word
0  1001011           1  1001011
1  0010110           0  0010110
Fig. 4 Even and odd parity bits.
When a single error or an odd number of errors occurs during transmission, the parity
of the code word changes (Fig.5). Parity of the code word is checked at the receiving
end and violation of the parity rule indicates errors somewhere in the code word.
Transmitted Code 10010110 Even Parity
Received Code (single error) 00010110 Odd Parity (Error is detected)
Received Code (Double error) 00011110 Even Parity (Error is not
detected)
Fig. 5 Error detection by change in parity.
Note that double or any even number of errors will go undetected because the
resulting parity of the code word will not change. Thus, a simple parity checking
method has its limitations. It is not suitable for multiple errors. To keep the possibility
of occurrence of multiple errors low, the size of the data word is usually restricted to a
single byte.
Parity checking does not reveal the location of the erroneous bit. Also, the received
code word with an error is always at equal distance from two valid code words.
Therefore, errors cannot be corrected by the parity checking method.
EXAMPLE 2
Write the ASCII code of the word ― HELLO‖ using even parity.
Solution
Bit Positions  8 7 6 5 4 3 2 1
H              0 1 0 0 1 0 0 0
E              1 1 0 0 0 1 0 1
L              1 1 0 0 1 1 0 0
L              1 1 0 0 1 1 0 0
O              1 1 0 0 1 1 1 1
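The parity generation of Examples 1 and 2 can be sketched in a few lines. The function name below is ours; the parity bit goes into bit position 8 (MSB), as in the text.

```python
# Even-parity generation for 7-bit ASCII characters.

def with_even_parity(ch: str) -> str:
    code = ord(ch) & 0x7F                  # 7-bit ASCII code
    parity = bin(code).count("1") % 2      # 1 if the weight is odd
    return format((parity << 7) | code, "08b")

for c in "HELLO":
    print(c, with_even_parity(c))
# H 01001000, E 11000101, L 11001100, L 11001100, O 11001111
```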
Burst Errors
There is a strong tendency for the errors to occur in bursts. An electrical interference
like lightning lasts for several bit times and, therefore, it corrupts a block of several
bits. The parity checking method fails completely in such situations. Checksum and
cyclic redundancy check are the two methods which can take care of burst errors.
Checksum Error Detection
In checksum error detection method, a checksum is transmitted along with every
block of data bytes. Eight-bit bytes of a block of data are added in an eight-bit
accumulator. Checksum is the resulting sum in the accumulator. Being an eight-bit
accumulator, the carries of the most significant bits are ignored.
EXAMPLE 3
Find the checksum of the following message. The MSB is on the left-hand side of
each byte.
Solution
Data bytes:
1 0 1 0 0 1 0 1
0 0 1 0 0 1 1 0
1 1 1 0 0 0 1 0
0 1 0 1 0 1 0 1
1 0 1 0 1 0 1 0
1 1 0 0 1 1 0 0
0 0 1 0 0 1 0 0
Checksum byte (carries beyond eight bits ignored):
1 0 0 1 1 1 0 0
After transmitting the data bytes, the checksum is also transmitted. The checksum is
regenerated at the receiving end and errors show up as a different checksum. Further
simplification is possible by transmitting the 2‘ s complement of the checksum in
place of the checksum itself. The receiver in this case accumulates all the bytes
including the 2‘s complement of the checksum. If there is no error, the contents of
the accumulator should be zero after accumulation of the 2‘s complement of the
checksum byte.
The advantage of this approach over simple parity checking is that 8-bit addition
―mixes up‖ bits and the checksum is representative of the overall block. Unlike
simple parity, where an even number of errors may not be detected, with a checksum
there is a 255 to 1 chance of detecting random errors.
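The checksum of Example 3, and the 2's complement trick described above, can be sketched as follows:

```python
# 8-bit checksum of the Example 3 data bytes: add the bytes and drop the
# carries out of bit 8.

data = [0b10100101, 0b00100110, 0b11100010, 0b01010101,
        0b10101010, 0b11001100, 0b00100100]

checksum = sum(data) & 0xFF               # carries beyond 8 bits ignored
print(format(checksum, "08b"))            # 10011100, as in Example 3

# Transmit the 2's complement instead: the receiver adds all bytes,
# including the complemented checksum, and expects a zero result.
complement = (-checksum) & 0xFF
assert (sum(data) + complement) & 0xFF == 0
```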
Cyclic Redundancy Check
Cyclic Redundancy Check (CRC) codes are very powerful and are now almost
universally employed. These codes provide a better measure of protection at a lower
level of redundancy and can be fairly easily implemented using shift registers or
software.
A CRC code word of length N with m-bit data word is referred to as (N,m) cyclic
code and contains (N-m) check bits. These check bits are generated by modulo-2
division. The dividend is the data word followed by n= N-m zeros and the divisor is a
special binary word of length n+1. The CRC code word is formed by modulo-2
addition of the remainder so obtained and the dividend.
EXAMPLE 6
Generate CRC code for the data word 110101010 using the divisor 10101.
Solution
Data Word 110101010
Divisor 10101

            111000111   Quotient
10101 ) 1101010100000   Dividend
        10101
         11111
         10101
          10100
          10101
             11000
             10101
              11010
              10101
               11110
               10101
                1011    Remainder

Code Word = 110101010 1011
In the above example, note that the CRC code word consists of the data word
followed by the remainder. The code word so generated is completely divisible by the
divisor because it is the difference of the dividend and the remainder (modulo-2
addition and subtraction are equivalent). Thus, when the code word is again divided
by the same divisor at the receiving end, a non-zero remainder after so dividing will
indicate errors in transmission of the code word.
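The long division above is mechanical and easy to sketch in code. The string-based version below favours clarity over speed:

```python
# Modulo-2 division for CRC generation, as in Example 6.

def crc_remainder(data: str, divisor: str) -> str:
    n = len(divisor) - 1                   # number of check bits
    bits = list(data + "0" * n)            # dividend: data word + n zeros
    for i in range(len(data)):
        if bits[i] == "1":                 # XOR in the divisor whenever
            for j, d in enumerate(divisor):  # the leading bit is 1
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "".join(bits[-n:])

rem = crc_remainder("110101010", "10101")
print(rem)                     # 1011
print("110101010" + rem)       # code word 1101010101011
# Receiver check: a multiple of the divisor stays a multiple after the
# zero-padding, so a valid code word leaves a zero remainder.
assert crc_remainder("110101010" + rem, "10101") == "0000"
```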
EXAMPLE 7
The code word of Example 6 be received as 1100100101011. Check if there are errors
in the code word.
Solution
Dividing the code word by 10101, we get

           111110001
10101 ) 1100100101011
        10101
         11000
         10101
          11010
          10101
           11111
           10101
            10100
            10101
                11011
                10101
                 1110   Remainder
Non-zero remainder indicates that there are errors in the received code word.
Algebraic Representation of Binary Code Words
For the purpose of analysis, the binary codes are represented using algebraic
polynomials. In a polynomial of variable x, the coefficients of the powers of x are the
bits of the code, the most significant bit being the coefficient of the highest power of
x. The data word of Example 6 can be represented by a polynomial M(x) as:
M(x) = 1·x^8 + 1·x^7 + 0·x^6 + 1·x^5 + 0·x^4 + 1·x^3 + 0·x^2 + 1·x^1 + 0·x^0
or M(x) = x^8 + x^7 + x^5 + x^3 + x
The polynomial corresponding to the divisor is called the generating polynomial
G(x). The G(x) corresponding to the divisor used in the last example would be
G(x) = 1·x^4 + 0·x^3 + 1·x^2 + 0·x^1 + 1·x^0
or G(x) = x^4 + x^2 + 1
Let D(x) be the dividend polynomial M(x)·x^n, where n is the number of check bits.
If Q(x) is the quotient and R(x) is the remainder when D(x) is divided by G(x), then
D(x) = Q(x)·G(x) + R(x) in modulo-2 arithmetic.
Thus, the CRC code D(x) + R(x) is completely divisible by G(x). This characteristic
of the code is used for detecting errors.
Some of the common generating polynomials and their applications are :
CRC-32: x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1
It is used with 8-bit characters when a very high probability of error detection is
required.
FORWARD ERROR CORRECTION METHODS
Locating and correcting errors requires a bigger overhead in terms of the number of
check bits in the code word. Some of the important error-correction codes which find
application in data transmission devices are:
1. Block parity
2. Hamming code
3. Convolutional code.
Block Parity
The concept of parity checking can be extended to detect and correct single errors.
The data block is arranged in a rectangular matrix form as shown in Fig.8 and two
sets of parity bits are generated, namely,
1. Longitudinal Redundancy Check (LRC)
2. Vertical Redundancy Check (VRC).
VRC is the parity bit associated with the character code and LRC is generated over
the rows of bits. LRC is appended to the end of data block. The bit 8 of the LRC
represents the VRC of the other 7 bits of the LRC. In Fig.6, even parity is used for the
LRC and the VRC.
                  C O M P U T E R   LRC
        1         1 1 1 0 1 0 1 0    1
        2         1 1 0 0 0 0 0 1    1
7-Bit   3         0 1 1 0 1 1 1 0    1
ASCII   4         0 1 1 0 0 0 0 0    0
Codes   5         0 0 0 1 1 1 0 1    0
        6         0 0 0 0 0 0 0 0    0
        7         1 1 1 1 1 1 1 1    0
VRC     8         1 1 0 0 0 1 1 1    1
Fig. 6 Block parity for the word COMPUTER (even parity).
EXAMPLE 8
The data block of Fig. 6 is received as shown below. Locate the error.
Solution
1 1 1 0 1 0 1 0   1
1 1 0 0 0 0 0 1   1
0 1 1 0 1 1 1 0   1
0 1 1 0 1 0 0 0   0   Wrong parity
0 0 0 1 1 1 0 1   0
0 0 0 0 0 0 0 0   0
1 1 1 1 1 1 1 1   0
1 1 0 0 0 1 1 1   1
The fourth row and the fifth column violate the parity rule. Therefore, the fourth bit
of the fifth byte is in error. It should be ―0‖.
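The row-and-column search of this example can be sketched generically. The small block below is a hypothetical matrix, not the COMPUTER block itself:

```python
# Single-error location with block parity (LRC/VRC): a flipped bit fails
# exactly one row check and one column check; their intersection locates it.

def add_parity(rows):
    """Append an even-parity bit to each row, then an even-parity row."""
    with_vrc = [row + [sum(row) % 2] for row in rows]
    lrc = [sum(col) % 2 for col in zip(*with_vrc)]
    return with_vrc + [lrc]

def locate_error(block):
    bad_rows = [r for r, row in enumerate(block) if sum(row) % 2]
    bad_cols = [c for c in range(len(block[0]))
                if sum(row[c] for row in block) % 2]
    return (bad_rows[0], bad_cols[0]) if bad_rows and bad_cols else None

block = add_parity([[1, 0, 1, 1],
                    [0, 1, 1, 0],
                    [1, 1, 0, 0]])
block[1][2] ^= 1                 # inject a single-bit error
print(locate_error(block))       # (1, 2): row 1, column 2
```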
Hamming Code
It is the single error correcting code devised by Hamming. In this code, there are
multiple parity bits in a code word. Bit positions 1, 2, 4, 8, etc. of the code word are
reserved for the parity bits. The other bit positions are for the data bits (Fig. 7). The
number of parity bits required for
1 2 3 4 5 6 7 8 9 10 11
P1 P2 D P4 D D D P8 D D D
Correcting single bit errors depends on the length of the code word. A code word of
length n contains m parity bits, where m is the smallest integer satisfying the
condition:
2^m ≥ n + 1
The MSB of the data word is on the right-hand side and its position is third in Fig. 7.
As usual, the LSB is transmitted first.
Each data bit is checked by a number of parity bits. Data bit position expressed as
sum of the powers of 2 determines parity bit positions which check the data bit. For
example, a data bit in position 6 is checked by parity bits P4 and P2 (6 = 2^2 + 2^1).
Positions   P1   P2   P4   P8
    3       ×    ×
    5       ×         ×
    6            ×    ×
    7       ×    ×    ×
    9       ×              ×
   10            ×         ×
   11       ×    ×         ×
   12                 ×    ×
Each parity bit checks all the positions in its column, including its own. For
example, if even parity is used, P2 is such that the number of ―1‖ s in 2nd, 3rd, 6th,
7th, 10th and 11th positions is even. The logic behind this way of generating the
parity bits is that when a code word suffers an error, all the parity bits which check the
erroneous bit will indicate violation of the parity rule and the sum of these parity bit
positions will indicate the position of the erroneous bit. For example, if the 11th bit is
in error, parity bits P8, P2 and P1 will indicate error and 8+2+1=11 will immediately
point to the 11th bit.
EXAMPLE 9
Generate the code word for ASCII character ―K‖= 1001011. Assume even parity for
the Hamming code. No character parity is used.
Solution
Bit positions   1  2  3  4  5  6  7  8  9  10  11
               P1 P2  1 P4  0  0  1 P8  0   1   1
Code Word       1  0  1  1  0  0  1  0  0   1   1
EXAMPLE 10
Detect and correct the single error in the received Hamming code word 10110010111.
Assume even parity.
Solution
Bit positions   1  2  3  4  5  6  7  8  9  10  11
               P1 P2  D P4  D  D  D P8  D   D   D
Code word       1  0  1  1  0  0  1  0  1   1   1
                                             Parity  Check
First check  (P1, 3, 5, 7, 9, 11):  1 1 0 1 1 1   Odd   Fail   1
Second check (P2, 3, 6, 7, 10, 11): 0 1 0 1 1 1   Even  Pass
Third check  (P4, 5, 6, 7):         1 0 0 1       Even  Pass
Fourth check (P8, 9, 10, 11):       0 1 1 1       Odd   Fail   8
                                                              ---
                                                               9
Thus, the 9th bit position is in error. The correct code word is 10110010011.
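The four checks of Examples 9 and 10 can be automated. The sketch below assumes the same 11-bit layout with even parity:

```python
# Hamming syndrome for an 11-bit code word: parity bit P_k covers every
# position whose binary expansion contains k, and the failed checks sum
# to the erroneous position.

def hamming_syndrome(word: str) -> int:
    """word holds positions 1..11, position 1 first. Returns the position
    of a single-bit error, or 0 if all even-parity checks pass."""
    syndrome = 0
    for p in (1, 2, 4, 8):
        covered = [i for i in range(1, 12) if i & p]
        if sum(int(word[i - 1]) for i in covered) % 2:
            syndrome += p
    return syndrome

received = "10110010111"          # Example 10
pos = hamming_syndrome(received)
print(pos)                        # 9
corrected = list(received)
corrected[pos - 1] = "1" if corrected[pos - 1] == "0" else "0"
print("".join(corrected))         # 10110010011
```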
Convolutional Codes
Unlike block codes in which the check bits are computed for a block of data,
convolutional codes are generated over a ―span‖ of data bits, e.g., a convolutional
code of constraint length 3 is generated bit by bit always using the ―last 3 data bits‖.
Figure 8 shows a simple convolutional encoder consisting of a shift register having
three stages and EXOR gates which generate two output bits for each input bit. It is
called a rate ½ convolutional encoder.
State transition diagram of this encoder is shown in Fig. 9. Each circle in the diagram
represents a state of the encoder, which is the content of two leftmost stages of the
shift register. There are four possible states 00, 01, 10, 11. The arrows represent the
state transitions for the input bit which can be 0 or 1. The label on each arrow shows
the input data bit by which the transition is caused and the corresponding output bits.
As an example, suppose the initial state of the encoder is 00 and the input data
sequence is 1011. The corresponding output sequence of the encoder will then be
11010010.
Trellis Diagram.
An alternative way of representing the states is by using the trellis diagram (Fig. 10).
Here the four states 00, 01, 11, 10 are represented as four levels. The arrows represent
state transitions as in the state transition diagram. The labels on the arrows indicate
the output. By convention, a ―0‖ input is always represented as an upward transition
and a ―1‖ input as a downward transition. The trellis diagram can be obtained from
the state transition diagram.
EXAMPLE 11
Generate the convolutional code using the trellis diagram of Fig. 10 for the input bit
sequence 0101, assuming the encoder is in state A to start with.
Solution
Starting from state A at the top left corner in Fig. 10 and tracing the path through the
trellis for the input sequence 0101, we get
A --0--> A   output 00
A --1--> C   output 11
C --0--> B   output 01
B --1--> C   output 00
The encoded output sequence is therefore 00110100.
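Fig. 8 is not reproduced here, so the generator taps below are inferred from the trellis transitions used in this example; treat the encoder as a sketch consistent with those transitions rather than the definitive circuit. The state is (last input, input before that), with A = 00, B = 01, C = 10, D = 11.

```python
# Rate-1/2, constraint-length-3 convolutional encoder consistent with the
# trellis transitions of Example 11.

def encode(bits):
    s1 = s2 = 0                        # the two state flip-flops
    out = []
    for b in bits:
        out += [b ^ s2, b ^ s1 ^ s2]   # two output bits per input bit
        s1, s2 = b, s1                 # shift the register
    return out

print(encode([0, 1, 0, 1]))   # [0,0, 1,1, 0,1, 0,0] -> 00110100, as above
```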
EXAMPLE 12
What is the message sequence if the received rate ½ encoded bit sequence is
00010100? Use the trellis diagram given in Fig. 10.
Solution
Step 1                                  Step 2
Data  Path  Output    Distance   Next  Next   Output     Distance
bits        sequence  from       data  state  sequence   from
                      000101     bit                     00010100
000   AAAA  000000    2          0     A      00000000   2
                                 1     C      00000011   4
100   ACBA  110111    3
110   ACDB  111010    6
010   AACB  001101    1          0     A      00110111   3
                                 1     C      00110100   1
001   AAAC  000011    2          0     B      00001101   3
                                 1     D      00001110   2
101   ACBC  110100    3
111   ACDD  111001    4
011   AACD  001110    3          0     B      00111010   4
                                 1     D      00111001   4
From the above table, it can be seen that the minimum distance is for the path
AACBC, which corresponds to the message bit sequence 0101.
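Since only four message bits are involved, the table can be reproduced by exhaustive search, using the same encoder sketch inferred for Example 11 (the generator taps remain an assumption):

```python
# Minimum-distance decoding of Example 12 by brute force: encode every
# candidate message and pick the one nearest to the received sequence.

from itertools import product

def encode(bits):
    s1 = s2 = 0
    out = []
    for b in bits:
        out += [b ^ s2, b ^ s1 ^ s2]
        s1, s2 = b, s1
    return out

received = [0, 0, 0, 1, 0, 1, 0, 0]       # sequence of Example 12

def distance(msg):
    return sum(a != b for a, b in zip(encode(list(msg)), received))

best = min(product([0, 1], repeat=4), key=distance)
print(best)    # (0, 1, 0, 1), at Hamming distance 1
```

A real decoder would use the Viterbi algorithm over the trellis instead of enumerating all messages, but the result is the same for this short sequence.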
REVERSE ERROR CORRECTION
We have seen some of the methods of forward error correction but reverse error
correction is more economical than forward error correction in terms of the number of
check bits. Therefore, usually error detection methods are implemented with an error
correction mechanism which requires the receiver to request the sender for
retransmission of the code word received with errors. There are three basic
mechanisms of reverse error correction:
1. Stop and wait
2. Go-back-N
3. Selective retransmission
Stop and Wait
In this scheme, the sending end transmits one block of data at a time and then waits
for acknowledgement from the receiver. If the receiver detects any error in the data
block, it sends a request for retransmission in the form of negative acknowledgement.
If there is no error, the receiver sends a positive acknowledgement, in which case the
sending end transmits the next block of data. Figure 11 illustrates the mechanism.
Fig. 11 Reverse error correction by stop-and-wait mechanism: the sender transmits a
data block with check bits; if the receiver finds no errors, it returns a positive
acknowledgement and the next block is sent; if errors are detected, it returns a
negative acknowledgement and the block is retransmitted.
Go-Back-N
In this mechanism all the data blocks are numbered and the sending end keeps
transmitting the data blocks with check bits. Whenever the receiver detects error in a
block, it sends a retransmission request indicating the sequence number of the data
block received with errors. The sending end then starts retransmission of all the data
blocks from the requested data block onwards (Fig.12).
Selective Retransmission
If the receiver is equipped with the capability to resequence the data blocks, it
requests for selective retransmission of the data block containing errors. On receipt of
the request, the sending end retransmits the data block but skips the following data
blocks already transmitted and continues with the next data block (Fig. 13).In data
communications, we use reverse error correction using one of the mechanisms
described above.
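A toy comparison of the two retransmission mechanisms makes the difference concrete; the block numbers below are hypothetical:

```python
# Which blocks cross the line again when block 2 of 1..4 arrives with errors?

def go_back_n(sent, bad):
    # Everything from the first erroneous block onwards is retransmitted.
    first = min(bad)
    return [b for b in sent if b >= first]

def selective(sent, bad):
    # Only the blocks received with errors are retransmitted.
    return [b for b in sent if b in bad]

blocks = [1, 2, 3, 4]
print(go_back_n(blocks, {2}))   # [2, 3, 4]
print(selective(blocks, {2}))   # [2]
```

Selective retransmission uses the line more efficiently, at the cost of requiring the receiver to resequence blocks.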
Fig. 12 Go-back-N: blocks 1, 2 and 3 are sent; errors are detected in block 2; the
receiver requests retransmission of block 2, and the sender retransmits block 2 and
all subsequent blocks.
Fig. 13 Selective retransmission: errors are detected in block 2; the sender
retransmits only block 2 and then continues with block 4.
SUMMARY
Errors are introduced due to imperfections in the transmission media. For error
control, we need to detect the errors and then take corrective action. Parity bits,
checksum and cyclic redundancy check (CRC) are some of the error detection
methods. Out of the three, CRC is the most powerful and widely implemented.
Error correction methods include forward error correction or reverse error correction.
Forward error correction requires additional check bits which enable the receiver to
correct the errors as well. However, reverse error correction mechanisms, namely,
stop and wait, go-back-N and selective retransmission, are more common. In these
mechanisms the receiver requests for retransmission of the data blocks received with
errors.
PACKET SWITCHING
AND MESSAGE SWITCHING CONCEPTS
Whenever we have multiple devices, we have the problem of how to connect them to
make one-on-one communication possible. One solution is to install a point-to-point
connection between each pair of devices(a mesh topology) or between a central
device and every other device (a star topology). These methods, however, are
impractical and wasteful when applied to very large networks. The number and length
of the links require too much infrastructure to be cost efficient, and the majority of
those links would be idle most of the time. Imagine a network of six devices: A, B, C,
D, E, and F. If device A has point-to-point links to devices B, C, D, E, and F, then
whenever only A and B are connected, the links connecting A to each of the other
devices are idle and wasted.
Other topologies employing multipoint connections, such as a bus, are ruled out
because the distances between devices and the total number of devices increase
beyond the capacities of the media and equipment.
A better solution is switching. A switched network consists of a series of inter-linked
nodes, called switches. Switches are hardware and/or software devices capable of
creating temporary connections between two or more devices linked to the switch but
not to each other. In a switched network, some of these nodes are connected to the
communicating devices. Others are used only for routing.
The communicating devices are labeled A, B, C, D, and so on, and the switches I, II,
III, IV, and so on. Each switch is connected to multiple links and is used to complete
the connections between them, two at a time.
Traditionally, three methods of switching have been important: circuit switching,
packet switching, and message switching. The first two are commonly used today.
The third has been phased out in general communication but still has networking
applications. New switching strategies are gaining prominence, among them cell
relay (ATM) and Frame Relay.
CIRCUIT SWITCHING
Circuit switching creates a direct physical connection between two devices such as
phones or computers. Instead of point-to-point connections between the three
computers on the left (A, B, and C) and the four computers on the right (D, E, F, and
G), requiring 12 links, we can use four switches to reduce the number and the total
length of the links.
Computer A is connected through switches I, II, and III to computer D. By moving
the levers of the switches, any computer on the left can be connected to any
computer on the right.
A circuit switch is a device with n inputs and m outputs that creates a temporary
connection between an input link and an output link. The number of inputs does not
have to match the number of outputs.
An n-by-n folded switch can connect n lines in full-duplex mode. For example, it can
connect n telephones in such a way that each phone can be connected to every other
phone.
Circuit switching today can use either of two technologies : space-division switches or
time-division switches.
Space-Division Switches
In space-division switching, the paths in the circuit are separated from each other
spatially. This technology was originally designed for use in analog networks but is
used currently in both analog and digital networks. It has evolved through a long
history of many designs.
Time-Division Switches
Time-division switching uses time-division multiplexing to achieve switching. There
are two popular methods used in time-division multiplexing: the time-slot
interchange and the TDM bus.
Time-Slot Interchange (TSI)
Consider a system connecting four input lines to four output lines. Imagine that each
input line wants to send data to an output line according to the following pattern:
1 -> 3, 2 -> 4, 3 -> 1, 4 -> 2
With ordinary time-division multiplexing, the desired task is not accomplished. Data
are output in the same order as they are input: data from 1 go to 1, from 2 go to 2,
from 3 go to 3, and from 4 go to 4.
However, suppose we insert a device called a time-slot interchange (TSI) into the
link. A TSI changes the ordering of the slots based on the desired connections. In
this case, it changes the order of data from A, B, C, D to C, D, A, B. Now, when the
demultiplexer separates the slots, it passes them to the proper outputs.
How does a TSI work? A TSI consists of random access memory (RAM) with
several memory locations. The size of each location is the same as the size of a
single time slot. The number of locations is the same as the number of inputs (in
most cases, the numbers of inputs and outputs are equal). The RAM fills up with
incoming data from the time slots in the order received. Slots are then sent out in an
order based on the decisions of a control unit.
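The slot reordering can be sketched directly; the mapping list below encodes the 1 -> 3, 2 -> 4, 3 -> 1, 4 -> 2 pattern from the text:

```python
# Time-slot interchange: slots are written into RAM in arrival order and
# read out in the order a control table dictates.

def tsi(frame, mapping):
    ram = list(frame)                      # slots stored in arrival order
    # output slot k carries the data from input line mapping[k]
    return [ram[inp - 1] for inp in mapping]

order = [3, 4, 1, 2]                       # output 1 reads input 3, etc.
print(tsi(["A", "B", "C", "D"], order))    # ['C', 'D', 'A', 'B']
```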
Public Switched Telephone Network (PSTN)
An example of a circuit-switched telephone network is the Public Switched
Telephone Network. Subscriber telephones are connected, through local loops, to
end offices (or central offices). A small town may have only one end office, but a
large city will have several end offices. Many end offices are connected to one toll
office. Several toll offices are connected to a primary office. Several primary offices
are connected to a sectional office, which normally serves more than one state. And
finally several sectional offices are connected to one regional office. All the regional
offices are connected using mesh topology.
Accessing the switching station at end offices is accomplished through dialing. In the
past, telephones featured rotary or pulse dialing, in which a digital signal was sent to
the end office for each number dialed. This type of dialing was prone to errors due to
the inconsistency of humans during the dialing process.
Today, dialing is accomplished through the Touch-Tone technique. In this method,
instead of sending a digital signal, the user sends two small bursts of analog signals,
called dual tone. The frequency of the signals sent depends on the row and column of
the pressed pad. Note that there is also a variation with an extra column (16-pad
Touch-Tone), which is used for special purposes. When a user dials, for example, the
number 8, two bursts of analog signals with frequencies 852 and 1336 Hz are sent to
the end office.
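The key-to-frequency mapping can be tabulated directly. The row and column frequencies below are the standard DTMF values, and the helper function name is ours:

```python
# Dual-tone (DTMF) frequency pairs for the 16-pad Touch-Tone keypad.
ROWS = [697, 770, 852, 941]        # Hz, one frequency per keypad row
COLS = [1209, 1336, 1477, 1633]    # Hz, one frequency per keypad column
KEYPAD = ["123A", "456B", "789C", "*0#D"]

def dtmf(key):
    """Return the (row_hz, col_hz) tone pair sent when `key` is pressed."""
    for r, row in enumerate(KEYPAD):
        c = row.find(key)
        if c != -1:
            return ROWS[r], COLS[c]
    raise ValueError(f"not a Touch-Tone key: {key!r}")

print(dtmf("8"))  # (852, 1336) -- as in the example above
```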
PACKET SWITCHING
Circuit switching was designed for voice communication. In a telephone conversation,
for example, once a circuit is established, it remains connected for the duration of the
session. Circuit switching creates temporary (dialed) or permanent (leased) dedicated
links that are well suited to this type of communication.
Circuit switching is less well suited to data and other nonvoice transmission. Non-
voice transmissions tend to be bursty, meaning that data come in spurts with idle gaps
between them. When circuit-switched links are used for data transmission, therefore,
the line is often idle and its facilities wasted.
A second weakness of circuit-switched connections for data transmission is in their
data rate. A circuit-switched link creates the equivalent of a single cable between two
devices and thereby assumes a single data rate for both devices. This assumption
limits the flexibility and usefulness of a circuit-switched connection for networks
interconnecting a variety of digital devices.
Third, circuit switching is inflexible. Once a circuit has been established, that circuit
is the path taken by all parts of the transmission whether or not it remains the most
efficient or available.
Finally, circuit switching sees all transmissions as equal. Any request is granted
whatever link is available. But with data transmission we often want to be able to
prioritize: to say, for example, that transmission x can go anytime but transmission z
is time dependent and must go immediately.
A better solution for data transmission is packet switching. In a packet-switched
network, data are transmitted in discrete units of potentially variable length blocks
called packets. The maximum length of the packet is established by the network.
Longer transmissions are broken up into multiple packets. Each packet contains not
only data but also a header with control information (such as priority codes and source
and destination addresses). The packets are sent over the network node to node. At
each node, the packet is stored briefly, then routed according to the information in its
header.
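A toy packetizer illustrates the idea. The header field names and the 4-character maximum payload are invented for the example; real networks define their own formats and limits:

```python
# Illustrative packetizer: split a message into packets of a fixed
# maximum size, each carrying a small header (addresses, sequence no.).

def packetize(data, src, dst, max_payload=4):
    """Break `data` into packets of at most `max_payload` characters."""
    packets = []
    for seq, i in enumerate(range(0, len(data), max_payload)):
        packets.append({
            "src": src, "dst": dst, "seq": seq,     # header fields
            "payload": data[i:i + max_payload],     # data portion
        })
    return packets

pkts = packetize("HELLO WORLD", src="A", dst="X")
print(len(pkts))            # 3 packets for an 11-character message
print(pkts[0]["payload"])   # 'HELL'
```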
There are two popular approaches to packet switching: datagram and virtual circuit.
Datagram Approach
In the datagram approach to packet switching, each packet is treated independently
from all others. Even when one packet represents just a piece of a multipacket
transmission, the network (and network layer functions) treats it as though it existed
alone. Packets in this technology are referred to as datagrams.
An example shows how the datagram approach can be used to deliver four packets from
station A to station X. In this example, all four packets (or datagrams) belong to the
same message but may go by different paths to reach their destination.
This approach can cause the datagrams of a transmission to arrive at their destination
out of order. It is the responsibility of the transport layer in most protocols to reorder
the datagrams before passing them on to the destination port.
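The transport layer's reordering step can be sketched as follows; the header field names are invented for illustration:

```python
# Datagrams may arrive out of order; the transport layer reorders them
# by sequence number before delivering the data. A minimal sketch:

def reassemble(datagrams):
    """Sort received datagrams by their 'seq' header field and
    concatenate the payloads into the original message."""
    ordered = sorted(datagrams, key=lambda d: d["seq"])
    return "".join(d["payload"] for d in ordered)

arrived = [                       # arrival order differs from send order
    {"seq": 2, "payload": "RLD"},
    {"seq": 0, "payload": "HELL"},
    {"seq": 1, "payload": "O WO"},
]
print(reassemble(arrived))  # HELLO WORLD
```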
The link joining each pair of nodes can contain multiple channels. Each of these
channels is capable, in turn, of carrying datagrams either from several different
sources or from one source. Multiplexing can be done using TDM or FDM.
Devices A and B are sending datagrams to devices X and Y. Some paths use one
channel while others use more than one. As you can see, the bottom link is carrying
two packets from different sources in the same direction. The link on the right,
however, is carrying datagrams in two directions.
SVC
Switched virtual circuits (SVC) are comparable to dial-up lines in circuit switching.
In this method, a virtual circuit is created whenever it is needed and exists only for
the duration of the specific exchange: when station A wants to send packets to station
X, it first requests a connection to X. Once the connection is in place, the packets
are sent one after another and in sequential order. When the last packet has been
received and, if necessary, acknowledged, the connection is released and that virtual
circuit ceases to exist. Only one route exists for the duration of the transmission,
although the network could pick an alternate route in response to failure or
congestion.
Each time that A wishes to communicate with X, a new route is established. The route
may be the same each time, or it may differ in response to varying network
conditions.
PVC
Permanent virtual circuits (PVC) are comparable to leased lines in circuit switching.
In this method, the same virtual circuit is provided between two users on a continuous
basis.
The circuit is dedicated to the specific users. No one else can use it and, because it is
always in place, it can be used without connection establishment and connection
termination. Whereas two SVC users may get a different route every time they request
a connection, two PVC users always get the same route.
Circuit-Switched Connection versus Virtual-Circuit Connection
Although it seems that a circuit-switched connection and a virtual-circuit connection
are the same, there are differences:
Path versus route.
A circuit-switched connection creates a path between two points. The physical path is
created by setting the switches for the duration of the dial (dial-up line) or the
duration of the lease (leased line). A virtual-circuit connection creates a route between
two points. This means each switch creates an entry in its routing table for the
duration of the session (SVC) or duration of the lease (PVC). Whenever the switch
receives a packet belonging to a virtual connection, it checks the table for the
corresponding entry and routes the packet out of one of its interfaces.
Dedicated versus sharing.
In a circuit-switched connection, the links that make a path are dedicated; they cannot
be used by other connections. In a virtual-circuit connection, the links that make a
route can be shared by other connections.
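The per-switch table lookup for a virtual-circuit connection can be sketched as a dictionary lookup; the interface numbers and VC identifiers below are invented for illustration:

```python
# Sketch of a virtual-circuit switch's table: each entry maps an
# incoming (interface, VC number) pair to an outgoing pair, created
# for the duration of the session (SVC) or lease (PVC).

routing_table = {
    # (in_interface, vc_id) -> (out_interface, vc_id)
    (1, 77): (3, 12),
    (2, 40): (1, 77),
}

def forward(in_if, vc):
    """Look up the entry for an arriving packet and return the
    interface (and outgoing VC number) it should leave on."""
    return routing_table[(in_if, vc)]

print(forward(1, 77))  # (3, 12)
```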
INTRODUCTION
One of the problems prevalent in networking today is that there are many
different protocols and network types. The hardware choices are confusing enough,
but software protocol suites that run over the various types of network hardware
solutions can absolutely boggle the mind. Ethernet, for instance, boasts a vast number
of protocol suites such as DDCMP, LAT, MOP, XNS, SCS, TCP/IP, VRP, NRP, and
a slew of other three-letter acronyms for various protocols that will solve all the
problems a customer could have.
Within the scheme of protocols, however, some still seem to rear their ugly heads, no
matter how hard the industry tries to put them down or get rid of them. One suite,
Transmission Control Protocol/Internet Protocol (TCP/IP), is such an occurrence.
Every other vendor of networks will claim that their protocol is better and that TCP/IP
is going away. Some will point to the decisions made by the US Department of
Defense (DOD) to eventually migrate to internationally recognized and standardized
communications hardware and protocols, obviating the need for TCP/IP and
eventually replacing it. Some view TCP/IP as a workhorse whose time has come to be
put out to pasture.
Then there are the zealots: those who think that the ONLY communications protocol
suite for use in the world is TCP/IP and all others are fluff. These folks are dangerous
because they not only are vocal about TCP/IP, many times they are UNIX zealots as
well.
Somewhere in the middle of the two camps are those who do not know what to do
with TCP/IP or, worse, do not even really understand its significance to networks.
Unfortunately, these individuals are usually the managers of such diverse camps of
attitudes and must make decisions on whether to use TCP/IP on a project or not.
Although it is the ISO open systems protocols which have received most recent
publicity, there are other well established protocol sets, particularly on Ethernet,
which have a large share of the current LAN market. Some argue that these protocols
offer a better alternative to the largely untried and potentially cumbersome ISO set,
but most manufacturers indicate a willingness to adopt ISO protocols at some point in
the future.
The non-ISO protocol described in this chapter illustrates a different approach to
Open Systems working from the ISO protocol set. TCP/IP is a vendor-independent wide
area network protocol set which has been widely used on LANs for peer-to-peer
communications. Here, we will examine the TCP and IP networking protocols and
some implementations that have become de facto standards in the military area as
well as academic and UNIX areas.
TCP/IP PROTOCOL SET STRUCTURE
The TCP/IP suite is not a single protocol. Rather, it is a four-layer communication
architecture that provides some reasonable network features, such as end-to-end
communications, unreliable-communications-line fault handling, packet sequencing,
internetwork routing, and specialized functions unique to DOD communications
needs such as standardized message priorities. The bottom layer, network services,
provides for communication with the network hardware. Network hardware used in the
various networks throughout the DOD typically reflects the usage of FIPS (Federal
Information Processing Standard) compliant network hardware (such as IEEE 802
series of LANs and other technologies such as X.25). The layer above the network
services layer is referred to as the internet protocol (IP) layer. The IP layer is
responsible for providing a datagram service that routes data packets between
dissimilar network architectures (such as between Ethernet and, say, X.25). IP has a
few interesting qualities, one of which is the issue of data reliability. As a datagram
service, IP does not guarantee delivery of data. Basically, if the data gets there,
great.
Figure 1.2 IP header format (fields laid out on 32-bit word boundaries, bit 0 to bit
31; the Options field occupies 0 or 32 bits, if present)
INTERNET PROTOCOL
There is a second difference in philosophy between ISO and the TCP/IP approach,
which revolves around the word "network". In the TCP/IP model, a network is an
individual packet-switched network which may be a LAN or a WAN, but is generally
under the control of one organisation. These networks connect to each other by
gateways, and the resulting collection of such networks is called a catenet (from
concatenation). The Internet Protocol provides for the transmission of datagrams
between systems over the whole catenet. It specifically allows for the fragmentation
and reassembly of the datagrams at the gateways, as the underlying networks may
demand different packet sizes.
The Internet Protocol (US Military Standard MIL-STD-1777) is a very simple protocol,
with no mechanism for end-to-end data reliability, flow control or sequencing. The
header, however, shown in Figure 1.2, is quite complex, the fields being as follows:
Version: The version number of IP. There have been several new releases, which
(given the size of ARPANET) must co-exist for some time.
IHL: The IP header length. Because of the options field, the header is not a fixed
length. This field shows where the data starts.
Type of Service: This field allows for a priority system to be imposed, plus an
indication of the desired, but not guaranteed, reliability required.
Length: The total length of the IP packet. Although there is a theoretical maximum
of 64 Kbytes, most networks operate with much smaller packets, though all must
accept at least 576 bytes.
ID/Flags/Offset: These fields enable a gateway to split up the datagram into smaller
segments. The ID field ensures that the receiver can piece together the fragments
from the correct datagrams, as fragments from many datagrams may arrive in any
order. The offset tells how far down the datagram this fragment is, and the flags
can be used to mark the datagram as non-fragmentable.
Time to live: This is a count which limits the lifetime of a datagram on the catenet.
Each time it passes through a gateway, the count is decremented by one. If it
reaches zero, the gateway does not forward it. This prevents permanently
circulating datagrams.
Protocol: This indicates which higher-level protocol is being carried, e.g. TCP or
UDP.
Checksum: This checksum covers the header only. It is up to the higher layers to
detect transmission errors in the data.
Source/dest Address: To assist the gateways to route datagrams by the most efficient
path, each IP address is structured into a Network Number and a local address.
There are three classes of network, providing different numbers of locally
administered addresses.
Options: The final part of the header is a variable number of optional fields, which
are used to enforce security or network management.
Padding: This field is used to align the header to the next 32-bit boundary.
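As a worked illustration of the Checksum field (a 16-bit ones'-complement sum over the header only), here is a sketch of the computation. The sample header bytes are a commonly used published test vector, not taken from this text:

```python
def ip_header_checksum(header):
    """16-bit ones'-complement sum over the IP header bytes only
    (the data is not covered). `header` is a bytes object of even
    length with its checksum field set to zero."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]   # 16-bit words
        total = (total & 0xFFFF) + (total >> 16)    # fold the carry
    return (~total) & 0xFFFF                        # ones' complement

# 20-byte sample header (checksum field zeroed out)
hdr = bytes.fromhex("450000730000400040110000c0a80001c0a800c7")
print(hex(ip_header_checksum(hdr)))  # 0xb861
```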
Because there is no facility for error reporting in IP (for example, the sender of a
datagram is not informed if the intended recipient is unavailable), an extra protocol
is used, particularly to help gateways between networks. This is called the Internet
Control Message Protocol (ICMP).
Figure: UDP header (16-bit Length and 16-bit Checksum fields, followed by the data)
Figure: TCP header (source and destination ports, 32-bit Sequence and
Acknowledgement Numbers, Options and Padding, followed by the data)
Source/dest ports: These fields identify multiple streams to the layer above.
Data Offset: This is the number of 32-bit words in the TCP header which, like the IP
header, has a variable-length options field.
Flag bits: There are several bits used as status indicators to show, for example, the
resetting of the connection.
Window: This field is used by the receiver to set the window size.
Checksum: Unlike the IP checksum, this covers the TCP header and the data.
Urgent pointer: The sender can indicate that an urgent datagram is coming and urge
the receiver to handle it as quickly as possible.
The procedures used by the TCP protocol are too complex to describe here. It can be
seen, however, that the catenet style of networking has benefits for linking LANs,
hence the widespread use of TCP/IP on LANs. It should not be assumed, however,
that TCP/IP networks are immune from the compatibility problems discussed earlier
for ISO networks. Differences in interpretation of the protocols can drastically reduce
interoperability, and there are reports of deficiencies in many of the protocols. One
interesting recent development, however, is an experimental implementation of the
ISO transport service on top of TCP, which means that ISO applications could be
carried over IP catenets. TCP/IP can also co-exist with ISO and other protocols on a
LAN, and it can be expected that the production of protocol converters should ease
the transition between TCP/IP and ISO for many users.
IP ADDRESSING, SUBNETTING
AND
SUPERNETTING
INTRODUCTION
In the mid-1990s, the Internet is a dramatically different network than when it was
first established in the early 1980s. There is a direct relationship between the value of
the Internet and the number of sites connected to the Internet. Over the past few years,
the Internet has experienced two major scaling issues as it has struggled to provide
continuous and uninterrupted growth:
1. the eventual exhaustion of the IPv4 address space
2. the ability to route traffic between the ever-increasing number of networks that
comprise the Internet
The first problem is concerned with the eventual depletion of the IP address space.
IP ADDRESS
The current version of IP, IP version 4 (IPv4), defines a 32-bit address, which means
that there are only 2^32 (4,294,967,296) IPv4 addresses available. This might seem
like a large number of addresses, but as new markets open and a significant portion of
the world's population becomes candidates for IP addresses, the finite number of IP
addresses will eventually be exhausted. The address shortage problem is aggravated
by the fact that portions of the IP address space have not been efficiently allocated.
Also, the traditional model of classful addressing does not allow the address space to
be used to its maximum potential.
In order to provide the flexibility required to support different size networks, the
designers decided that the IP address space should be divided into three different
address classes - Class A, Class B, and Class C. This is often referred to as "classful"
addressing because the address space is split into three predefined classes, groupings,
or categories. Each class fixes the boundary between the network-prefix and the host-
number at a different point within the 32-bit address.
One of the fundamental features of classful IP addressing is that each address contains
a self-encoding key that identifies the dividing point between the network-prefix and
the host-number.
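This self-encoding key can be read directly off the first octet; a minimal sketch (the function name is ours):

```python
# The classful "self-encoding key": the leading bits of the first
# octet identify the class and hence the network-prefix length.

def address_class(first_octet):
    """Return (class, network-prefix length in bits) for an IPv4
    address whose first octet is `first_octet`."""
    if first_octet < 128:       # leading bit 0
        return "A", 8
    if first_octet < 192:       # leading bits 10
        return "B", 16
    if first_octet < 224:       # leading bits 110
        return "C", 24
    return "other", None        # Class D/E: multicast / reserved

print(address_class(130))  # ('B', 16) -- e.g. 130.5.0.0
print(address_class(200))  # ('C', 24)
```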
SUBNETTING
In 1985, RFC 950 defined a standard procedure to support the subnetting, or division,
of a single Class A, B, or C network number into smaller pieces. Subnetting was
introduced to overcome some of the problems that parts of the Internet were
beginning to experience with the classful two-level addressing hierarchy:
Subnetting attacked the expanding routing table problem by ensuring that the subnet
structure of a network is never visible outside of the organization's private network.
The route from the Internet to any subnet of a given IP address is the same, no matter
which subnet the destination host is on. This is because all subnets of a given network
number use the same network-prefix but different subnet numbers. The routers within
the private organization need to differentiate between the individual subnets, but as
far as the Internet routers are concerned, all of the subnets in the organization are
collected into a single routing table entry. This allows the local administrator to
introduce arbitrary complexity into the private network without affecting the size of
the Internet's routing tables. Subnetting overcame the registered number issue by
assigning each organization one (or at most a few) network number(s) from the IPv4
address space. The organization was then free to assign a distinct subnetwork number
for each of its internal networks. This allows the organization to deploy additional
subnets without needing to obtain a new network number from the Internet.
The router accepts all traffic from the Internet addressed to network 130.5.0.0, and
forwards traffic to the interior subnetworks based on the third octet of the classful
address. The deployment of subnetting within the private network provides several
benefits: The size of the global Internet routing table does not grow because the site
administrator does not need to obtain additional address space and the routing
advertisements for all of the subnets are combined into a single routing table entry.
The local administrator has the flexibility to deploy additional subnets without
obtaining a new network number from the Internet. Route flapping (i.e., the rapid
changing of routes) within the private network does not affect the Internet routing
table since Internet routers do not know about the reachability of the individual
subnets - they just know about the reachability of the parent network number.
Extended-Network-Prefix Internet routers use only the network-prefix of the
destination address to route traffic to a subnetted environment. Routers within the
subnetted environment use the extended-network- prefix to route traffic between the
individual subnets. The extended-network-prefix is composed of the classful network-
prefix and the subnet-number.
The extended-network-prefix has traditionally been identified by the subnet mask. For
example, if you have the /16 address of 130.5.0.0 and you want to use the entire third
octet to represent the subnet-number, you need to specify a subnet mask of
255.255.255.0. The bits in the subnet mask and the Internet address have a one-to-one
correspondence. The bits of the subnet mask are set to 1 if the system examining the
address should treat the corresponding bit in the IP address as part of the extended-
network- prefix. The bits in the mask are set to 0 if the system should treat the bit as
part of the host-number.
The standards describing modern routing protocols often refer to the extended-
network-prefix- length rather than the subnet mask. The prefix length is equal to the
number of contiguous one-bits in the traditional subnet mask. This means that
specifying the network address 130.5.5.25 with a subnet mask of 255.255.255.0 can
also be expressed as 130.5.5.25/24. The /<prefix-length> notation is more compact
and easier to understand than writing out the mask in its traditional dotted-decimal
format.
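Converting between the two notations is simple bit arithmetic; a sketch, assuming the mask's one-bits are contiguous (the helper names are ours):

```python
# Convert between /<prefix-length> notation and a dotted-decimal
# subnet mask, e.g. /24 <-> 255.255.255.0.

def prefix_to_mask(prefix_len):
    """Build the dotted-decimal mask with `prefix_len` leading ones."""
    mask = (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF
    return ".".join(str((mask >> s) & 0xFF) for s in (24, 16, 8, 0))

def mask_to_prefix(mask):
    """Count the one-bits in a dotted-decimal mask (assumed contiguous)."""
    bits = "".join(f"{int(o):08b}" for o in mask.split("."))
    return bits.count("1")

print(prefix_to_mask(24))               # 255.255.255.0
print(mask_to_prefix("255.255.255.0"))  # 24
```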
Variable Length Subnet Masks (VLSM)
In 1987, RFC 1009 specified how a subnetted network could use more than one
subnet mask. When an IP network is assigned more than one subnet mask, it is
considered a network with "variable length subnet masks" since the extended-
network-prefixes have different lengths.
RIP-1 Permits Only a Single Subnet Mask
When using RIP-1, subnet masks have to be uniform across the entire network-prefix.
RIP-1 allows only a single subnet mask to be used within each network number
because it does not provide subnet mask information as part of its routing table update
messages. In the absence of this information, RIP-1 is forced to make very simple
assumptions about the mask that should be applied to any of its learned routes.
How does a RIP-1 based router know what mask to apply to a route when it learns a
new route from a neighbor? If the router has a subnet of the same network number
assigned to a local interface, it assumes that the learned subnetwork was defined using
the same mask as the locally configured interface. However, if the router does not
have a subnet of the learned network number assigned to a local interface, the router
has to assume that the network is not subnetted and applies the route's natural classful
mask. Assuming that Port 1 of a router has been assigned the IP address
130.24.13.1/24 and that Port 2 has been assigned the IP address 200.14.13.2/24. If the
router learns about network 130.24.36.0 from a neighbor, it applies a /24 mask since
Port 1 is configured with another subnet of the 130.24.0.0 network. However, when
the router learns about network 131.25.0.0 from a neighbor, it assumes a "natural" /16
mask since it has no other masking information available. How does a RIP-1 based
router know if it should include the subnet-number bits in a routing table update to a
RIP-1 neighbor? A router executing RIP-1 will only advertise the subnet-number bits
on another port if the update port is configured with a subnet of the same network
number. If the update port is configured with a different subnet or network number,
the router will only advertise the network portion of the subnet route and "zero-out"
the subnet-number field.
For example, assume that Port 1 of a router has been assigned the IP address
130.24.13.1/24 and that Port 2 has been assigned the IP address 200.14.13.2/24. Also,
assume that the router has learned about network 130.24.36.0 from a neighbor. Since
Port 1 is configured with another subnet of the 130.24.0.0 network, the router assumes
that network 130.24.36.0 has a /24 subnet mask. When it comes to advertise this
route, it advertises 130.24.36.0 on Port 1, but it only advertises 130.24.0.0 on Port
2. For these reasons, RIP-1 is limited to a single subnet mask for each network
number.
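The mask-inference rule can be sketched as follows. The interface data mirrors the Port 1/Port 2 example above; all function and variable names are ours:

```python
# Sketch of RIP-1's mask inference: borrow the mask of a local
# interface on the same network number, else fall back to the
# natural classful mask.

def network_number(addr):
    """Classful network number of a dotted-decimal address."""
    octets = addr.split(".")
    first = int(octets[0])
    n = 1 if first < 128 else 2 if first < 192 else 3  # A=1, B=2, C=3 octets
    return ".".join(octets[:n])

def infer_mask(learned_net, local_ifaces):
    """Return the prefix length RIP-1 would assume for a learned route."""
    for net, prefix in local_ifaces:
        if network_number(net) == network_number(learned_net):
            return prefix                       # borrow the local mask
    first = int(learned_net.split(".")[0])
    return 8 if first < 128 else 16 if first < 192 else 24  # natural mask

ifaces = [("130.24.13.1", 24), ("200.14.13.2", 24)]
print(infer_mask("130.24.36.0", ifaces))  # 24 (matches local 130.24.0.0)
print(infer_mask("131.25.0.0", ifaces))   # 16 (natural Class B mask)
```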
However, there are several advantages to be gained if more than one subnet mask can
be assigned to a given IP network number: multiple subnet masks permit more
efficient use of an organization's assigned IP address space, and they permit route
aggregation, which can significantly reduce the amount of routing information at the
"backbone" level within an organization's routing domain.
Efficient Use of the Organization's Assigned IP Address Space
VLSM supports more efficient use of an organization's assigned IP address space.
One of the major problems with
the earlier limitation of supporting only a single subnet mask across a given network-
prefix was that once the mask was selected, it locked the organization into a fixed-
number of fixed-sized subnets. For example, assume that a network administrator
decided to configure the 130.5.0.0/16 network with a /22 extended-network-prefix.
A /16 network with a /22 extended-network-prefix permits 64 subnets (2^6), each of
which supports a maximum of 1,022 hosts (2^10 - 2). This is fine if the organization
wants to deploy a number of large subnets, but what about the occasional small subnet
containing only 20 or 30 hosts? Since a subnetted network could have only a single
mask, the network administrator was still required to assign the 20 or 30 hosts to a
subnet with a 22-bit prefix. This assignment would waste approximately 1,000 IP host
addresses for each small subnet deployed! Limiting the association of a network
number with a single mask did not encourage the flexible and efficient use of an
organization's address space. One solution to this problem was to allow a subnetted
network to be assigned more than one subnet mask. Assume that in the previous
example, the network administrator is also allowed to configure the 130.5.0.0/16
network with a /26 extended-network-prefix. Please refer to Figure 16. A /16 network
address with a /26 extended-network-prefix permits 1,024 subnets (2^10), each of
which supports a maximum of 62 hosts (2^6 - 2). The /26 prefix would be ideal for
small subnets with less than 60 hosts, while the /22 prefix is well suited for larger
subnets containing up to 1000 hosts.
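The subnet and host counts quoted above follow directly from the prefix lengths; a quick arithmetic check (the helper name is ours):

```python
# Check the VLSM arithmetic: extending a /16 network with a longer
# prefix p yields 2**(p - 16) subnets of 2**(32 - p) - 2 hosts each.

def subnet_counts(net_prefix, ext_prefix):
    """Return (number of subnets, hosts per subnet) when a network
    with `net_prefix` uses an extended prefix of `ext_prefix`."""
    subnets = 2 ** (ext_prefix - net_prefix)
    hosts = 2 ** (32 - ext_prefix) - 2   # minus network & broadcast
    return subnets, hosts

print(subnet_counts(16, 22))  # (64, 1022)  -- the /22 case above
print(subnet_counts(16, 26))  # (1024, 62)  -- the /26 case above
```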
Conceptually, a network is first divided into subnets, some of the subnets are further
divided into sub-subnets, and some of the sub-subnets are divided into sub-sub-subnets.
This allows the detailed structure of routing information for one subnet group to be
hidden from routers in another subnet group.
Figure: Recursive subnetting of the 11.0.0.0/8 network.
11.0.0.0/8 is divided into /16 subnets: 11.1.0.0/16, 11.2.0.0/16, 11.3.0.0/16, ...,
11.252.0.0/16, 11.253.0.0/16, 11.254.0.0/16
11.1.0.0/16 is divided into /24 sub-subnets: 11.1.1.0/24, 11.1.2.0/24, ...,
11.1.253.0/24, 11.1.254.0/24
11.253.0.0/16 is divided into /19 sub-subnets: 11.253.32.0/19, 11.253.64.0/19, ...,
11.253.160.0/19, 11.253.192.0/19
11.1.253.0/24 is divided into /27 sub-sub-subnets: 11.1.253.32/27, 11.1.253.64/27,
..., 11.1.253.160/27, 11.1.253.192/27
The clever thing is that the IP address advertised with the /20 prefix could be a former
Class A, Class B, or Class C. Routers that support CIDR do not make assumptions
based on the first 3 bits of the address; they rely on the prefix-length information
provided with the route. In a classless environment, prefixes are viewed as bitwise
contiguous blocks of the IP address space. For example, all /20 prefixes represent the
same amount of address space (2^12 or 4,096 host addresses).
Furthermore, a /20 prefix can be assigned to a traditional Class A, Class B, or Class C
network number.
It is important to note that there may be severe host implications when you deploy
CIDR based networks. Since many hosts are classful, their user interface will not
permit them to be configured with a mask that is shorter than the "natural" mask for a
traditional classful address. For example, potential problems could exist if you wanted
to deploy 200.25.16.0 as a /20 to define a network capable of supporting 4,094 (2 12 -
2) hosts. The software executing on each end station might not allow a traditional
Class C (200.25.16.0) to be configured with a 20-bit mask since the natural mask for a
Class C network is a 24-bit mask. If the host software supports CIDR, it will permit
shorter masks to be configured. However, there will be no host problems if you were
to deploy the 200.25.16.0/20 (a traditional Class C) allocation as a block of 16 /24s
since non-CIDR hosts will interpret their local /24 as a Class C. Likewise,
130.14.0.0/16 (a traditional Class B) could be deployed as a block of 256 /24s since
the hosts will interpret the /24s as subnets of a /16. If host software supports the
configuration of shorter than expected masks, the network manager has tremendous
flexibility in network design and address allocation.
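Python's standard ipaddress module takes exactly this classless view, which makes the arithmetic above easy to verify:

```python
import ipaddress

# Classless view: a prefix is just a contiguous block of address
# space, whatever the traditional class of its leading bits.
block = ipaddress.ip_network("200.25.16.0/20")   # a "Class C" range
print(block.num_addresses)                       # 4096 = 2**12

# The same /20 deployed as a block of /24s for classful hosts:
print(len(list(block.subnets(new_prefix=24))))   # 16
```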
LAN TECHNOLOGIES
Introduction
Networking means interconnection of computers. These computers can be linked
together for different purposes and using a variety of different cabling types.
The basic reasons why computers need to be networked are :
1. To share resources (files, printers, modems, fax machines etc.)
2. To share application software (MS Office, Adobe Publisher etc.)
3. Increase productivity (makes it easier to share data amongst users)
Take for example a typical office scenario where a number of users require access to
some common information. As long as all user computers are connected via a
network, they can share their files, exchange mail, schedule meetings, send faxes and
print documents all from any point of the network. It is not necessary for users to
transfer files via electronic mail or floppy disk, rather, each user can access all the
information they require, thus leading to less wastage of time and hence increased
productivity.
Imagine the benefits of a user being able to directly fax the Word document they are
working on, rather than print it out, then feed it into the fax machine, dial the number
etc.
Small networks are often called Local Area Networks (LAN). A LAN is a network
allowing easy access to other computers or peripherals. The typical characteristics of
a LAN are :
1. physically limited distance (< 2km)
2. high bandwidth (> 1 Mbps)
3. inexpensive cable media (coax or twisted pair)
4. data and hardware sharing between users
5. owned by the user
The factors that determine the nature of a LAN are :
1. Topology
2. Transmission medium
3. Medium access control technique
LAN Architecture
The layered protocol concept can be employed to describe the architecture of a LAN,
wherein each layer represents the basic functions of a LAN.
Protocol Architecture
The protocols defined for LAN transmission address issues relating to the
transmission of blocks of data over the network. In the context of the OSI model,
higher-layer protocols (layer 3 or 4 and above) are independent of network
architecture and are applicable to LANs. Therefore, LAN protocols are concerned
primarily with the lower layers of the OSI model.
Figure 1 relates the LAN protocols to the OSI model. This architecture has been
developed by the IEEE 802 committee and has been adopted by all organisations
concerned with the specification of LAN standards. It is generally referred to as the
IEEE 802 reference model.
Figure 1: The IEEE 802 reference model alongside the OSI model. The OSI layers
Application, Presentation, Session, Transport and Network lie above the scope of the
IEEE 802 standards; upper-layer protocols reach LLC through LLC service access
points (LSAPs). The OSI Data Link layer corresponds to the Logical Link Control
(LLC) and Medium Access Control layers, and the OSI Physical layer to the 802
Physical layer and the transmission medium.
The lowest layer of the IEEE 802 reference model corresponds to the physical layer
of the OSI model, and includes the following functions :
1. Encoding/ decoding of signals
2. Preamble generation/ removal (for synchronisation)
3. Bit transmission/ reception
The physical layer of the 802 model also includes a specification for the transmission
medium and the topology. Generally, this is considered below the lowest layer of the
OSI model. However, the choice of the transmission medium and topology is critical
in LAN design, and so a specification of the medium is included.
Above the physical layer are the functions associated with providing service to the
LAN users. These comprise :
1. Assembling data into a frame with address and error-detection fields for
onward transmission.
2. Disassembling frames, and performing address recognition and error detection
during reception.
3. Supervising and controlling access to the LAN transmission medium.
4. Providing an interface to the higher layers, and performing flow control and
error control.
The above functions are typically associated with OSI layer 2. The last function noted
above is grouped into a logical link control (LLC) layer. The functions in the first
three items are treated as a separate layer, called medium access control (MAC). The
separation is done for the following reasons:
Figure: The Logical Link Control (LLC) layer, IEEE 802.2, offers three services:
unacknowledged connectionless service, connection-mode service, and acknowledged
connectionless service. Beneath it, the MAC and physical options cover bus/tree/star
topologies, ring topology, dual bus topology, and wireless.
Figure 2 illustrates the relationship between the various levels of the architecture.
User data is passed down to LLC, which appends control information as a header,
creating an LLC protocol data unit (PDU). This control information is used in the
operation of the LLC protocol. The entire LLC PDU is then passed down to the MAC
layer, which appends control information at the front and back of the packet,
forming a MAC frame.
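The encapsulation just described can be sketched in code. This is a minimal illustration, not the actual field layouts: the LLC header values and the stand-in frame-check sequence are assumptions made for the example.

```python
# Sketch of LLC/MAC encapsulation (field layouts simplified for illustration).

def llc_encapsulate(user_data: bytes, dsap: int = 0xAA, ssap: int = 0xAA,
                    control: int = 0x03) -> bytes:
    """Prepend an LLC header (DSAP, SSAP, control) to the user data."""
    return bytes([dsap, ssap, control]) + user_data

def mac_encapsulate(llc_pdu: bytes, dest: bytes, src: bytes) -> bytes:
    """Wrap the LLC PDU in a MAC frame: control information (addresses) at
    the front, a 4-byte placeholder frame-check sequence at the back."""
    fcs = len(llc_pdu).to_bytes(4, "big")   # stand-in for a real CRC
    return dest + src + llc_pdu + fcs

# User data passes down to LLC, then the whole LLC PDU passes down to MAC.
llc_pdu = llc_encapsulate(b"hello")
frame = mac_encapsulate(llc_pdu,
                        dest=b"\x00\x00\x00\x00\x00\x01",
                        src=b"\x00\x00\x00\x00\x00\x02")
```

Each layer treats the unit handed down from above as opaque data and only adds its own header (and, for MAC, a trailer).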
[Figure 2: Encapsulation of user data — application-layer data gains a TCP header
(forming a TCP segment), then an IP header (IP datagram), then an LLC header (LLC
protocol data unit), and finally MAC control information at front and back (MAC frame).]
LAN Topologies
The common topologies for LANs are bus, tree, ring, and star. The bus is a special
case of the tree, with only one trunk and no branches.
Bus and Tree Topologies
Bus and tree topologies are characterised by the use of a multipoint medium. For the
bus, all stations attach directly to a linear transmission medium, or bus, through an
appropriate hardware interface known as a tap. Full-duplex operation between the
station and the tap permits data to be transmitted onto the bus and received from the
bus. A transmission from any station propagates throughout the length of the medium
in both directions and can be received (heard) by all other stations. At each end of the
bus is a terminator, which absorbs the signal and avoids reflections.
[Figure 3: Bus topology — stations attach to the medium through taps; a terminating
resistance at each end absorbs the signal; the headend marks the start of the layout.]
The tree topology is a generalisation of the bus topology. The transmission medium is
a branched cable with no closed loops. The tree layout begins at a point known as the
headend, where one or more cables start, and each of these may have branches. The
branches in turn may have additional branches. Transmission from any station
propagates throughout the medium and can be received (heard) by all other stations.
However, there are two problems in this arrangement. First, since a transmission from
any one station can be received by all other stations, there needs to be some way of
indicating for whom the transmission is intended. Second, a mechanism is needed
to regulate transmission. To visualise the logic behind this, consider what happens if
two stations on the bus attempt to transmit at the same time: their signals will overlap
and become garbled. Or, consider that one station decides to transmit continuously for
a long period of time.
To solve these problems, stations transmit data in small blocks, known as frames.
Each frame consists of a portion of data that a station wishes to transmit, plus a frame
header that contains control information. Each station on the bus is assigned a unique
address, or identifier, and the destination address for a frame is included in its header.
Figure 4 illustrates the concept. In this example, station C wishes to transmit a frame
of data to A. The frame header includes A's address. As the frame propagates along
the bus, it passes B, which observes the address and ignores the frame. A, on the other
hand, sees that the frame is addressed to itself and therefore copies the data from the
frame as it goes by.
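The address-filtering behaviour in this example can be sketched as a small simulation. The station names and frame fields are illustrative, not part of any standard.

```python
# Every station on the bus hears every frame; only the addressed station
# copies the data (all names and fields are illustrative).

class Station:
    def __init__(self, address):
        self.address = address
        self.received = []

    def hear(self, frame):
        # Copy the data only if the frame is addressed to this station.
        if frame["dest"] == self.address:
            self.received.append(frame["data"])

def broadcast(bus, frame):
    """A transmission propagates along the bus to every station."""
    for station in bus:
        station.hear(frame)

a, b, c = Station("A"), Station("B"), Station("C")
bus = [a, b, c]
# C transmits a frame addressed to A; B observes the address and ignores it.
broadcast(bus, {"dest": "A", "src": "C", "data": "payload"})
```

After the broadcast, only station A has copied the payload; B and C have ignored it.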
Ring Topology
In the ring topology, the network consists of a set of repeaters joined by point-to-point
links in a closed loop. The repeater is a comparatively simple device, capable of
receiving data on one link and transmitting them, bit by bit, on the other link as
quickly as they are received, with no buffering at the repeater. The links are
unidirectional, i.e. data is transmitted in one direction only (clockwise or
counter-clockwise).
Each station is attached to the network at a repeater and can transmit data onto the
network through that repeater.
[Figure: Ring topology]
As with the bus and tree, data is transmitted in frames. As a frame circulates past all
other stations, the destination station recognises its address and copies the frame into a
local buffer as it goes by. The frame continues to circulate until it reaches the source
station, where it is ultimately removed (Figure 5).
Because multiple stations share the ring, medium access control is needed to
determine when each station may insert frames.
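The circulate-copy-remove behaviour described above can be sketched as follows. The ring membership and frame contents are illustrative assumptions.

```python
# A frame circulates the ring repeater by repeater; the destination copies
# it into a local buffer, and the source removes it when it comes back
# around (stations and data are illustrative).

def circulate(ring, src, dest, data):
    """Pass the frame from station to station, starting after the source."""
    buffers = {station: [] for station in ring}
    start = ring.index(src)
    for step in range(1, len(ring) + 1):
        station = ring[(start + step) % len(ring)]
        if station == dest:
            buffers[station].append(data)   # destination copies the frame
        if station == src:
            break                           # source removes the frame
    return buffers

ring = ["A", "B", "C", "D"]
# C transmits to A: the frame passes D, is copied by A, passes B, and is
# removed when it returns to C.
buffers = circulate(ring, src="C", dest="A", data="frame-1")
```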
[Figures 4 and 5: Frame transmission — on the bus, C transmits a frame addressed to A,
which copies the data as the frame passes; on the ring, the frame addressed to A
circulates past each repeater, A copies it into a local buffer, and C removes it when it
returns.]
Star Topology
In the star topology, each station is directly connected to a common central node.
Typically, each station attaches to the central node, referred to as the star coupler, via
two point-to-point links, one for transmission in each direction.
In general, there are two alternatives for the operation of the central node:
One method is for the central node to operate in a broadcast fashion. A frame
transmitted from one station to the central node is retransmitted on all of the outgoing
links. In this case, although the arrangement is physically a star, it is logically a bus; a
transmission from any station is received by all other stations, and only one station at
a time may transmit (successfully).
Another method is for the central node to act as a frame switching device. An
incoming frame is buffered in the node and then retransmitted on an outgoing link to
the destination station.
[Figure: Star topology — stations connect to a central hub, switch, or repeater.]
Round Robin
With Round robin, each station in turn is given an opportunity to transmit. During that
period, the station may decline to transmit or may transmit subject to a specified
upper bound, usually expressed as a maximum amount of data transmitted or time for
this opportunity. In any case, the station, when it is finished, relinquishes its turn, and
the right to transmit passes to the next station in logical sequence. Control of this
sequence may be centralised or distributed. Polling is an example of a centralised
technique.
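The turn-taking scheme described above can be sketched as a single polling cycle. The per-turn quota, station names, and queue contents are illustrative assumptions (the bound could equally be expressed in time rather than frames).

```python
# One round-robin polling cycle: each station in turn may transmit up to a
# fixed per-turn quota of frames, or decline if its queue is empty
# (quota and queues are illustrative assumptions).

def round_robin(stations, queues, quota=2):
    """Run one polling cycle; return the frames sent, in order."""
    sent = []
    for station in stations:
        q = queues.get(station, [])
        burst = q[:quota]            # per-turn upper bound on data
        sent.extend((station, frame) for frame in burst)
        queues[station] = q[quota:]  # turn then passes to the next station
    return sent

queues = {"A": ["a1", "a2", "a3"], "B": [], "C": ["c1"]}
order = round_robin(["A", "B", "C"], queues)
```

Note how B, with nothing to send, simply passes its turn: with many idle stations this turn-passing is the overhead the text describes.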
When many stations have to transmit data over an extended period of time, round
robin techniques can be very efficient. If only a few stations have data to transmit
over an extended period of time, then there is a considerable overhead in passing the
turn from station to station, as most of the stations will not transmit but simply pass
their turns. Under such circumstances, other techniques may be preferable, largely
depending on whether the data traffic has a stream or bursty characteristic. Stream
traffic is characterised by lengthy and fairly continuous transmissions; examples are
voice communication, telemetry, and bulk file transfer. Bursty traffic is characterised
by short, sporadic transmissions, (interactive terminal-host traffic fits this
description).
Reservation
For stream traffic, reservation techniques are well suited. In general, for these
techniques, time on the medium is divided into slots, similar to synchronous TDM. A
station wanting to transmit, reserves future slots for an extended or even an indefinite
period. Again, reservations may be made in a centralised or distributed manner.
Contention
For bursty traffic, contention techniques are more appropriate. With these techniques,
no control is required to determine whose turn it is; all stations contend for time.
These techniques are by nature distributed. Their principal advantage is that they are
simple to implement and, under light to moderate load, quite efficient. For some of
these techniques, however, performance tends to collapse under heavy load.
Although both centralised and distributed reservation techniques have been
implemented in some LAN products, round robin and contention techniques are the
most common.
The specific access techniques are discussed further in this chapter. Table 2 lists the
MAC protocols that are defined in LAN standards.
LLC PDU format:
DSAP (1 octet) | SSAP (1 octet) | LLC control (1 or 2 octets) | Information (variable)
Cabling
Cables are used to interconnect computers and network components. There are three
main cable types in use today:
1. twisted pair
2. coax
3. fibre optic
The choice of cable depends upon a number of factors like :
1. cost
2. distance
3. number of computers involved
4. speed
5. bandwidth, i.e. how fast data is to be transferred
REPEATERS
Repeaters extend the network segments. They amplify the incoming signal received
from one segment and send it on to all other attached segments. This allows the
distance limitations of network cabling to be extended. There are limits on the number
of repeaters which can be used. The repeater counts as a single node in the maximum
node count associated with the Ethernet standard (30 for thin coax).
Repeaters also allow isolation of segments in the event of failures or fault conditions.
Disconnecting one side of a repeater effectively isolates the associated segments from
the network.
Using repeaters simply allows you to extend your network distance limitations. It
does not give you any more bandwidth or allow you to transmit data faster.
BRIDGE
6. help in localising the network traffic by only forwarding data onto other
segments as required (unlike repeaters)
How Bridges Work
Bridges work at the Data Link layer of the OSI model. Because they work at this
layer, all information contained in the higher levels of the OSI model is unavailable to
them. Therefore, they do not distinguish between one protocol and another. Bridges
simply pass all protocols along the network. Because all protocols pass across bridges,
it is up to the individual computers to determine which protocols they can recognise.
You may remember that the Data Link layer has two sub layers, the Logical Link
Control sub layer and the Media Access Control sub layer. Bridges work at the Media
Access Control sub layer and are sometimes referred to as Media Access Control
layer bridges.
A Media Access Control layer bridge :
Listens to all traffic.
Checks the source and destination addresses of each packet.
Builds a routing table as information becomes available.
Forwards packets in the following manner:
If the destination is not listed in the routing table, the bridge forwards the packet to
all segments, or
If the destination is listed in the routing table, the bridge forwards the packet to that
segment (unless it is the same segment as the source).
A bridge works on the principle that each network node has its own address. A bridge
forwards packets based on the address of the destination node.
Bridges actually have some degree of intelligence in that they learn where to forward
data. As traffic passes through the bridge, information about the computer addresses is
stored in the bridge‘s RAM. The bridge uses this RAM to build a routing table based
on source addresses.
Initially, the bridge‘s routing table is empty. As nodes transmit packets, the source
address is copied to the routing table. With this address information, the bridge learns
which computers are on which segment of the network.
Creating the Routing Table
Bridges build their routing tables based on the addresses of computers that have
transmitted data on the network. Specifically, bridges use source addresses – the
address of the device that initiates a transmission – to create the routing table.
When the bridge receives a packet, the source address is compared to the routing
table. If the source address is not there, it is added to the table. The bridge then
compares the destination address with the routing table database.
If the destination address is in the routing table and is on the same segment as the
source address, the packet is discarded. This filtering helps to reduce network traffic
and isolate segments of the network.
If the destination address is in the routing table and not in the same segment as the
source address, the bridge forwards the packet out of the appropriate port to reach the
destination address.
If the destination address is not in the routing table, the bridge forwards the packet to
all of its ports, except the one on which it originated.
In summary, if a bridge knows the location of the destination node, it forwards the
packet to it. If it does not know the destination, it forwards the packet to all segments.
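The learning and forwarding rules summarised above can be sketched as a small class. Port numbers and addresses are illustrative; a real bridge keys its table on Media Access Control addresses and also ages entries out.

```python
# A learning bridge: record each frame's source address against the port it
# arrived on, then forward by destination — filter if the destination is on
# the same segment, flood if it is unknown (ports/addresses illustrative).

class Bridge:
    def __init__(self, ports):
        self.ports = ports   # e.g. [1, 2]
        self.table = {}      # station address -> port (built from sources)

    def handle(self, frame, in_port):
        self.table[frame["src"]] = in_port       # learn the source address
        out = self.table.get(frame["dest"])
        if out is None:
            # Unknown destination: forward to all ports except the source's.
            return [p for p in self.ports if p != in_port]
        if out == in_port:
            return []                            # same segment: filter
        return [out]                             # known: forward to segment

bridge = Bridge(ports=[1, 2])
flood = bridge.handle({"src": "A", "dest": "B"}, in_port=1)  # B unknown: flood
reply = bridge.handle({"src": "B", "dest": "A"}, in_port=2)  # A learned: forward
```

The first frame is flooded because B has never transmitted; B's reply is forwarded directly because the bridge learned A's port from the first frame.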
Segmenting Network Traffic
A bridge can segment traffic because of its routing table. A computer on segment 1
(the source) sends data to another computer (the destination) also located on segment
1. If the destination address is in the routing table, the bridge can determine that the
destination computer is also on segment 1. Because the source and the destination
computers are both on segment 1, the packet does not get forwarded across the bridge
to segment 2.
Therefore, bridges can use routing tables to reduce the traffic on the network by
controlling which packets get forwarded to other segments. This controlling (or
restricting) of the flow of network traffic is known as segmenting network traffic.
A large network is not limited to one bridge. Multiple bridges can be used to combine
several small networks into one large network.
Note : Remember that routing tables were also discussed with bridges. The routing
table maintained by a bridge contains Media Access Control sublayer addresses for
each node, while the routing table maintained by a router contains network numbers.
Even though manufacturers of these two different types of equipment have chosen to
use the term routing table, it has a different meaning for bridges than it does for
routers.
Routers require specific addresses: they understand only network numbers, which
allow them to talk to other routers, and local network adapter card addresses. Routers
do not talk to remote computers.
When routers receive packets destined for a remote network, they send them to the
router that manages the destination network. In some ways this is an advantage
because it means routers can:
1. Segment large networks into smaller ones.
2. Act as safety barrier between segments.
3. Prohibit broadcast storms, because broadcasts are not forwarded.
Because routers must perform complex functions on each packet, routers are slower
than most bridges. As packets are passed from router to router, Data Link layer source
and destination addresses are stripped off and then recreated. This enables a router to
route a packet from a TCP/IP Ethernet network to a server on a TCP/IP Token Ring
Network.
Because routers read only the network addresses in packets, they will not allow bad
data to be passed on to the network. Because they do not pass bad data or broadcast
storms, routers put little stress on networks.
Routers do not look at the destination node address; they only look at the network
address. Routers will only pass information if the network address is known. This
ability to control the data passing through the router reduces the amount of traffic
between networks and allows routers to use these links more efficiently than bridges.
Using the router addressing scheme, administrators can break one large network into
many separate networks, and because routers do not pass or even handle every packet,
they act as a safety barrier between network segments. This can greatly reduce the
amount of traffic on the network and the wait time experienced by users.
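The network-number lookup described above can be sketched with the standard library's `ipaddress` module. The table entries and next-hop names are purely illustrative assumptions; real routers also age routes and exchange them via routing protocols.

```python
import ipaddress

# A router forwards by network number, not host address: look up the
# longest matching prefix in the routing table (entries are illustrative).

routing_table = {
    ipaddress.ip_network("10.1.0.0/16"): "router-B",
    ipaddress.ip_network("10.1.5.0/24"): "router-C",
}

def next_hop(dest_ip):
    """Return the next-hop router for a destination, or None if unknown."""
    addr = ipaddress.ip_address(dest_ip)
    matches = [net for net in routing_table if addr in net]
    if not matches:
        return None   # unknown network: the packet is not forwarded
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

hop = next_hop("10.1.5.7")   # matches both entries; the /24 is preferred
```

Because an unknown network yields no forwarding at all, broadcasts and misaddressed traffic stop at the router, which is the "safety barrier" behaviour the text describes.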
Routable Protocols
Not all protocols work with routers. The ones that are routable include:
1. DECnet
2. IP
3. IPX
4. OSI
5. XNS
6. DDP (AppleTalk)
Protocols which are not routable include:
1. LAT (local area transport, a protocol from Digital Equipment Corporation.)
2. NetBEUI
There are routers available which can accommodate multiple protocols such as IP and
DECnet in the same network.
Packets are only passed to the network segment for which they are destined.
Routers work similarly to bridges and switches in that they filter out unnecessary
network traffic and remove it from network segments. Routers generally work at the
protocol level.
Routers were devised in order to separate networks logically. For instance, a TCP/ IP
router can segment the network based on groups of TCP/IP addresses. Filtering at this
level (on TCP/IP addresses, also known as level 3 switching) will take longer than
that of a bridge or switch which only looks at the MAC layer.
Most routers can also perform bridging functions. A major feature of routers, because
they can filter packets at a protocol level, is to act as a firewall. This is essentially a
barrier, which prevents unwanted (unauthorised) packets either entering or leaving
designated areas of the network.
Typically, an organisation which connects to the Internet will install a router as the
main gateway link between its network and the outside world. By configuring the
router with access lists (which define what protocols and what hosts have access),
security is enforced by restricting (or allowing) access to either internal or external hosts.
For example, an internal WWW server can be allowed IP access from external
networks, but other company servers which contain sensitive data can be protected, so
that external hosts outside the company are prevented access (you could even deny
internal workstations access if required).
A router works at the Network Layer or higher, by looking at information embedded
within the data field, like a TCP/IP address, then forwards the frame to the appropriate
segment upon which the destination computer resides.
Summary of Router features :
1. use dynamic routing
2. operate at the protocol level
3. remote administration and configuration via SNMP
4. support complex networks
5. the more filtering done, the lower the performance
6. provides security
7. segment the networks logically
8. broadcast storms can be isolated
9. often provide bridge functions also
10. more complex routing protocols used (such as RIP, IGRP, OSPF)
HUBS
There are many types of hubs. Passive hubs are simple splitters or combiners that
group workstations into a single segment, whereas active hubs include a repeater
function and are thus capable of supporting many more connections.
Nowadays, with the advent of 10BaseT, hub concentrators are becoming very popular.
These are very sophisticated and offer significant features which make them radically
different from the older hubs which were available during the 1980s. These 10BaseT
hubs provide each client with exclusive access to the full bandwidth, unlike bus
networks where the bandwidth is shared. Each workstation plugs into a separate port,
which runs at 10 Mbps and is for the exclusive use of that workstation, thus there is
no contention to worry about like in Ethernet.
In standard Ethernet, all stations are connected to the same network segment in bus
configuration. Traffic on the bus is controlled using CSMA (Carrier Sense Multiple
Access) protocol, and all stations share the available bandwidth.
BACKPLANE
10BaseT hubs dedicate the entire bandwidth to each port (workstation). The
workstations attach to the hub using UTP. The hub provides a number of ports, which are
logically combined using a single backplane, which often runs at a much higher data
rate than that of the ports.
Ports can also be buffered, to allow packets to be held in case the hub or port is busy.
And, because each workstation has its own port, it does not contend with other
workstations for access, having the entire bandwidth available for its exclusive use.
The ports on a hub all appear as one Ethernet segment. In addition, hubs can be
stacked or cascaded (using master/ slave configurations) together, to add more ports
per segment. As hubs do not count as repeaters, this is a better solution for adding
more workstations than the use of a repeater.
Hub options also include an SNMP (Simple Network Management Protocol) agent.
This allows the use of network management software to remotely administer and
configure the hub.
The advantages of the newer 10 BaseT hubs are :
1. Each port has exclusive access to its bandwidth (no CSMA/ CD)
2. Hubs may be cascaded to add additional ports
3. SNMP managed hubs offer good management tools and statistics
4. Utilise existing cabling and other network components
5. Becoming a low cost solution
frame at about the same time. In the latter case, the two frames may interfere with
each other at the receiver so that neither gets through; this is known as a collision. If a
received frame is determined to be invalid, the receiving station simply ignores the
frame.
Description of CSMA/ CD
CSMA, although more efficient than ALOHA or slotted ALOHA, still has one glaring
inefficiency: when two frames collide, the medium remains unusable for the duration
of transmission of both damaged frames. For frames that are long compared to the
propagation time, the amount of wasted capacity can be considerable. This waste can
be reduced if a station continues to listen to the medium while transmitting.
[Figure: Collision detection on a baseband bus — t0: station A begins transmitting
toward D; t1: just before A's signal reaches D, D begins to transmit; t2: the two
transmissions collide; t3: the collision propagates back to A.]
With CSMA/CD, the amount of wasted capacity is reduced to the time it takes to
detect a collision. How long does that take? Consider first a baseband bus, with the
two stations as far apart as possible. For example, in the above figure, suppose that
station A begins a transmission and that just before that transmission reaches D, D is
ready to transmit. Because D is not yet aware of A's transmission, it begins to
transmit. A collision occurs almost immediately and is recognized by D. However,
the collision must propagate all the way back to A before A is aware of the collision.
By this line of reasoning, we conclude that the amount of time that it takes to detect a
collision is no greater than twice the end-to-end propagation delay.
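This twice-the-propagation-delay bound can be made concrete with a short calculation. The cable length and propagation speed below are illustrative assumptions (roughly two thirds of the speed of light in copper), not figures from a specific standard.

```python
# Worked example of the 2x end-to-end propagation-delay bound
# (cable length and propagation speed are illustrative assumptions).

cable_length_m = 2500      # end-to-end bus length
speed_m_per_us = 200       # ~2e8 m/s signal propagation in the medium

one_way_delay_us = cable_length_m / speed_m_per_us   # A's signal reaching D
worst_case_detect_us = 2 * one_way_delay_us          # collision returns to A

# one_way_delay_us -> 12.5 microseconds
# worst_case_detect_us -> 25.0 microseconds
```

A transmitting station must therefore keep sending (and listening) for at least this long before it can be sure no collision occurred.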
For a broadband bus, the delay is even longer. Figure 11 shows the dual-cable
system. This time, the worst case occurs for two stations as close together as possible
and as far as possible from the headend. In this case, the maximum time to detect a
collision is four times the propagation delay from an end of the cable to the headend.
[Figure 11: Collision on a broadband dual-cable bus — t0: A begins transmission;
t1: B begins transmitting just before the leading edge of A's packet arrives at B's
receiver, then almost immediately detects A's transmission and ceases its own;
t2: A detects the collision.]
4. Source address (SA) : Specifies the station that sent the frame.
5. Length : Length of the LLC data field.
6. LLC data : Data unit supplied by LLC.
7. Pad : Octets added to ensure that the frame is long enough for proper CD
operation.
8. Frame check sequence (FCS). A 32-bit cyclic redundancy check, based on
all fields except the preamble, the SFD, and the FCS.
[Figure: MAC frame format — Preamble, SFD, DA, SA, Length, LLC Data, Pad, FCS
(field lengths in octets).]
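The pad field's purpose (item 7 above) can be illustrated with a small calculation. The 64-octet minimum and the header/trailer sizes follow classic Ethernet conventions and are stated here as assumptions for the sketch.

```python
# Padding so the frame reaches the minimum length needed for reliable
# collision detection (the 64-octet minimum, measured from DA through FCS,
# and the field sizes below are assumptions based on classic Ethernet).

MIN_FRAME = 64        # octets, destination address through FCS
HEADER = 6 + 6 + 2    # DA + SA + Length fields
FCS = 4               # frame check sequence

def pad_length(llc_data_len):
    """Octets of padding needed so the MAC frame reaches MIN_FRAME."""
    return max(0, MIN_FRAME - (HEADER + llc_data_len + FCS))

pad = pad_length(10)   # a short LLC data unit must be padded out
```

Short frames must be padded because a frame that finishes transmitting before the collision-detection window closes could collide without its sender ever noticing.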
The drawback to this type of technology is that the end-user must obtain an FCC
license for each site where it is employed.
Spread Spectrum Technology
Most wireless LAN systems use spread-spectrum technology, a wideband radio
frequency technique developed by the military for use in reliable, secure, mission-
critical communications systems. Spread-spectrum is designed to trade off bandwidth
efficiency for reliability, integrity, and security. In other words, more bandwidth is
consumed than in narrowband transmission, but the trade-off produces a signal that
is, in effect, louder and thus easier to detect, provided that the receiver knows its
parameters; to a receiver not tuned to the right frequency, a spread-spectrum signal
looks like background noise. There are two types of spread-spectrum radio: frequency
hopping and direct sequence:
Frequency-hopping Spread Spectrum Technology – (FHSS) uses a narrowband
carrier that hops among several frequencies at a specific rate and sequence as a way of
avoiding interference. Properly synchronized, the net effect is to maintain a single
logical channel. To an unintended receiver, FHSS appears to be short-duration
impulse noise.
Direct-Sequence Spread Spectrum Technology – (DSSS) uses a radio transmitter to
spread data packets over a fixed range of the frequency band. To an unintended
receiver, DSSS appears as low-power wideband noise and is rejected by most
narrowband receivers. The interoperability standard IEEE 802.11b is focusing on
utilizing 11 Mbps high-rate DSSS technology as the standard for wireless networks.
Infrared Technology – little used in commercial wireless LANs, infrared (IR)
systems use very high frequencies, just below visible light in the electromagnetic
spectrum, to carry data.
The most basic wireless LAN consists of two PCs equipped with wireless adapter
cards that form an independent network whenever they are within a range of one
another. On-demand networks, such as this example, require no administration or
preconfiguration. In this case, each client would only have access to the resources of
the other client and not to a central server. This wireless LAN setup is sometimes
called an Ad-Hoc network.
Installing an access point allows each client to have access to shared resources as well
as to other clients. The access point connects to the wired network from a fixed
location using standard cabling. Each access point can accommodate many clients (up
to 16 with the Multi-Tech RouteFinder RF802EW); the specific number depends on
the number and nature of the transmissions involved. This wireless LAN setup is
sometimes called Infrastructure Mode.
Multiple Access Points and Roaming
Access points have a finite range for transmission -- around 100 meters (328 feet)
indoors and 300 meters (984 feet) outdoors. In a very large facility such as a
warehouse, or on a college campus, it will probably be necessary to install more than
one access point. Access point positioning is accomplished by means of a site survey.
The goal is to blanket the coverage area with overlapping coverage cells so that
clients might range throughout the area without ever losing network contact. The
ability of clients to move seamlessly among a cluster of access points is called
roaming. Access points hand the client off from one to another in a way that is
invisible to the client, ensuring unbroken connectivity.
1.0 Scope:
Wi-Fi is a registered trademark of the Wi-Fi Alliance. Products tested and
approved as "Wi-Fi Certified" are interoperable with each other, even if they are from
different manufacturers. The term is short for "Wireless Fidelity" and is meant to
refer generically to any type of 802.11 network, whether 802.11b, 802.11a, dual-band,
etc. Initially the term "Wi-Fi" was used in place of the 2.4 GHz 802.11b standard, in
the same way that "Ethernet" is used in place of IEEE 802.3, but the Alliance has
expanded the generic use of the term to cover 802.11a, dual-band, etc.
General description of Wi-Fi Network:
A Wi-Fi network provides the features and benefits of traditional LAN technologies
such as Ethernet and Token Ring without the limitations of wires or cables. It
provides the final few metres of connectivity between a wired network and the mobile
user thereby providing mobility, scalability of networks and the speed of installation.
Wi-Fi is a wireless LAN technology that delivers wireless broadband speeds of up to 54
Mbps to laptops, PCs, PDAs, dual-mode Wi-Fi enabled phones, etc. Apart from data
delivery, voice over Wi-Fi is also in the pipeline. The backhaul bandwidth from the wired
network, i.e. an ADSL modem, leased line, etc., is shared among the users.
In a typical Wi-Fi configuration, a transmitter/receiver (transceiver) device, called the
Access Point (AP), connects to the wired network from a fixed location using
standard cabling. A wireless Access Point combines routing and bridging functions: it
bridges network traffic, usually from Ethernet to the airwaves, where it routes the traffic
to computers with wireless adapters. The AP can reside at any node of the wired
network and acts as a gateway for wireless data to be routed onto the wired network
as shown in Figure-1. It supports only 10 to 30 mobile devices per Access Point (AP)
depending on the network traffic. Like a cellular system, a Wi-Fi device is capable of
roaming away from one AP and re-connecting to the network through another AP. The
Access Point (or the antenna attached to the Access Point) is usually mounted high
but may be mounted essentially anywhere that is practical as long as the desired radio
coverage is obtained.
Like a cellular phone system, the wireless LAN is capable of roaming from the AP
and re-connecting to the network through other APs residing at other points on the
wired network. This can allow the wired LAN to be extended to cover a much larger
area than the existing coverage by the use of multiple APs such as in a campus
environment as shown in Figure 2.
An important feature of the wireless LAN is that it can be used independent of a wired
network. It may be used as a stand-alone network anywhere to link multiple
computers together without having to build or extend a wired network. A peer-to-peer
workgroup can then be established for transfer or access of data. A member of the
workgroup may be established as the server, or the network can act in a peer-to-peer
mode as shown in Figure-3.
End users access the Wi-Fi network through Wi-Fi adapters, which are implemented
as cards in desktop computers, or integrated within hand-held computers. Wi-Fi
wireless LAN adapters provide an interface between the client Network Operating
System (NOS) and the airwaves via an antenna. The nature of the wireless connection
is transparent to the NOS. Wi-Fi deals with fixed, portable, and mobile stations and,
of course, the physical layers used here are fundamentally different from wired media.
3.0 Wi-Fi Network Configuration:
3.1 A Wireless Peer-To-Peer Network: This mode is also known as ad-hoc mode.
Wi-Fi networks can be simple or complex. At its most basic, two PCs equipped with
wireless adapter cards can set up an independent network whenever they are within
range of one another. This is called a peer-to-peer network. It requires no
administration or pre-configuration. In this case, each client would only have access
to the resources of the other client and not to a central server as shown in Figure-4.
Access Points have a finite range, of the order of 500 feet indoors and 1000 feet
outdoors. In a very large facility such as a warehouse, or on a college campus, it will
probably be necessary to install more than one Access Point. Access Point positioning
is done by a site survey. The goal is to blanket the coverage area with overlapping
coverage cells so that clients might range throughout the area without ever losing
network contact. The ability of clients to move seamlessly among a cluster of Access
Points is called roaming. Access Points hand the client off from one to another in a
way that is invisible to the client, ensuring unbroken connectivity as shown in Fig-6.
A second media access control method, the Point Coordination Function (PCF), is an
optional extension to DCF. PCF provides a time division duplexing capability to
allow the Access Point to deal with time bounded, connection-oriented services.
Using this method, one AP controls the access through a polling system.
CSMA/CA (Figure 10) requires each station to listen to other users. If the channel is
idle, the station is allowed to transmit. If it is busy, each station waits until
transmission stops, and then enters into a random back-off procedure. This prevents
multiple stations from seizing the medium immediately after completion of the
preceding transmission. Packet reception in DCF requires acknowledgements (ACK).
The period between completion of packet transmission and start of the ACK frame is
one Short Inter Frame Space (SIFS). ACK frames have a higher priority than other
traffic. Fast acknowledgement is one of the features of the ‗802.11‘ standard, because
it requires ACKs to be handled at the MAC sub layer. Transmissions other than ACKs
must wait at least one DCF Inter Frame Space (DIFS) before transmitting data. If a
transmitter senses a busy medium, it determines a random back-off period by setting
an internal timer to an integer number of slot times. Upon expiration of a DIFS, the
timer begins to decrement. If the timer reaches zero, the station may begin
transmission. If the channel is seized by another station before the timer reaches zero,
the timer setting is retained at the decremented value for subsequent transmission.
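The back-off countdown just described can be sketched deterministically. In the real protocol the initial slot count is drawn at random from the contention window; here it is passed in explicitly so the example is reproducible, and slot observations are given as a simple list of idle/busy flags.

```python
# Simplified sketch of the DCF back-off: after a DIFS, the station counts
# down an integer number of slot times, decrementing only while the medium
# is idle and retaining the timer value across busy slots.
# (In practice the initial count is random from [0, CW]; it is explicit
# here to keep the example deterministic.)

def backoff_countdown(initial_slots, medium_idle_per_slot):
    """Return (slots observed, remaining timer) for a run of slot samples."""
    timer = initial_slots
    elapsed = 0
    for idle in medium_idle_per_slot:
        if timer == 0:
            break             # timer expired: the station may transmit
        elapsed += 1
        if idle:
            timer -= 1        # decrement only during idle slots
        # on a busy slot the timer value is retained for later
    return elapsed, timer

# Timer of 3 slots; the second slot is busy, so 4 slots must be observed.
elapsed, timer = backoff_countdown(3, [True, False, True, True, True])
```

Retaining (rather than redrawing) the timer across busy periods gives stations that have already waited longer a statistically better chance of transmitting next.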
The method described above relies on the underlying assumption that every station
can hear all other stations. This is not always the case: this problem is known as the
Hidden-Node Problem. The hidden node problem arises when a station is able to
successfully receive frames from two other transmitters but the two transmitters can
not receive signals from each other. In this case a transmitter may sense the medium
as being idle even if the other one is transmitting. This results in a collision at the
receiving station.
To provide a solution for this problem, another mechanism is present: the use of
RTS/CTS frames (Figure 11). A Request To Send (RTS) frame is sent by a potential
transmitter to the receiver and a Clear To Send (CTS) frame is sent from the receiver
in response to the received RTS frame. If the CTS frame is not received within a
certain time interval the RTS frame is re-transmitted by executing a back-off
algorithm. After a successful exchange of the RTS and CTS frames, the data frame
can be sent by the transmitter after waiting for a SIFS. RTS and CTS include a
duration field that specifies the time interval necessary to transmit the data frame and
the ACK. This information is used by stations which can hear the transmitter or the
receiver to update their Net Allocation Vector (NAV), a timer which is always
decremented. The drawback of using RTS/CTS is an increased overhead, which may
be significant for short data frames. The efficiency of RTS/CTS therefore depends
upon the length of the packets: RTS/CTS is typically used for large packets, for which
re-transmissions would be expensive from a bandwidth viewpoint.
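How an overhearing station might maintain its NAV can be sketched as follows. This is an illustrative model only; the class and method names are ours, and real implementations run on hardware timers.

```python
class Station:
    """Model of a station that overhears RTS/CTS frames and defers via its NAV."""

    def __init__(self):
        self.nav = 0   # microseconds remaining in the current reservation

    def hear_frame(self, duration_us):
        # Keep the longer of the existing and the newly announced reservation,
        # taken from the duration field of the overheard RTS or CTS.
        self.nav = max(self.nav, duration_us)

    def tick(self, elapsed_us):
        # The NAV is a timer which is always decremented.
        self.nav = max(0, self.nav - elapsed_us)

    def medium_reserved(self):
        # The station defers as long as the NAV has not expired.
        return self.nav > 0
```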
Two other robustness features of the 802.11 MAC layer are the CRC checksum and
packet fragmentation. Each packet has a CRC attached to ensure its correctness. This
is different from Ethernet, where higher-level protocols such as TCP handle error
checking. Packet fragmentation is very useful in congested or high-interference
environments, since larger packets have a higher chance of getting corrupted. The
MAC layer is responsible for re-assembling the received fragments; this makes the
process transparent to higher-level protocols.
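The per-packet integrity check works along these lines: the transmitter appends a CRC over the frame body, and the receiver recomputes it and discards the frame on mismatch. 802.11 uses a CRC-32; the helper names below are our own illustration.

```python
import binascii

def attach_crc(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 to the frame body before transmission."""
    crc = binascii.crc32(payload) & 0xFFFFFFFF
    return payload + crc.to_bytes(4, "big")

def check_crc(frame: bytes) -> bool:
    """Recompute the CRC at the receiver; False means the frame is discarded."""
    payload, received = frame[:-4], int.from_bytes(frame[-4:], "big")
    return binascii.crc32(payload) & 0xFFFFFFFF == received
```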
IEEE 802.11b: In 2000, 802.11b became the standard wireless Ethernet
networking technology for both business and home. That year, wireless networking
took a giant leap with the release of 11 Mbps products based on this 802.11b
standard (commonly known as Wi-Fi).
The first generation of wireless adapters supported only 1 or 2 Mbps. This is very low
compared to wired Ethernets, defined by the Institute of Electrical and Electronics
Engineers (IEEE) in the 802.3 standard, which are able to operate at 10 Mbps,
100 Mbps, or even 1000 Mbps. 802.11b transmits at 2.4 GHz, the same spectrum as
microwave ovens. The cards use less power than a mobile phone. Cisco warns that
their PCMCIA card should be kept more than 4 cm from your body, and the Access
Point's antenna should be at least 15 cm away from the body.
A Wi-Fi certified product interoperates with other Wi-Fi certified products. The
802.11 specifications are wireless standards that define the over-the-air interface
between a wireless client and a base station or Access Point. The standard includes
task groups, called 802.11b, a, e and g, working on amendments.
1. 802.11b was the first version to reach the marketplace. It is the slowest and
least expensive of the three. As mentioned above, 802.11b transmits in the 2.4
GHz ISM band and can handle up to 11 megabits per second. Wi-Fi reaches
only about 7 Mbps of throughput due to synchronization issues, ACK
overhead, etc.
2. 802.11g: The -g group is a natural speed extension of the 802.11b
standard. It extends the highly successful family of IEEE 802.11 standards
with data rates up to 54 Mbps in the 2.4 GHz band.
3. 802.11a: Task Group a (TGa) operates in the 5 GHz band. Because its
operating frequency is higher than that of 802.11b, 802.11a has a smaller
range. It tries to solve this distance problem by using more power and more
efficient data-encoding schemes. The higher frequency band gives the
advantage of not residing in the crowded 2.4 GHz region, where cordless
phones, Bluetooth and even microwave ovens operate.
4. The major advantage of 802.11a is its speed: its spectrum is divided into
8 sub-network segments, or channels, of about 20 MHz each. These channels
serve a number of network nodes. Each channel is made up of 52 carriers of
about 300 kHz each and can deliver a maximum of 54 Mbps. This speed takes
WLAN from first-generation Ethernet (10 Mbps) to the second generation
(Fast Ethernet, 100 Mbps). The new specification is based on an OFDM
modulation scheme. The RF system operates in the 5.15 to 5.25, 5.25 to 5.35
and 5.725 to 5.825 GHz U-NII bands. The OFDM system provides 8 different
data rates from 6 to 54 Mbps. It uses BPSK, QPSK, 16-QAM and 64-QAM
modulation schemes coupled with forward error-correcting coding. Important
to remember: 802.11b is completely incompatible with 802.11a.
5. 802.11e: Task Group e (TGe) is building improved support for
Quality of Service. The aim is to enhance the current 802.11 MAC to expand
support for LAN applications with Quality of Service requirements, and to
provide improvements in security and in the capabilities and efficiency of the
protocol. Its applications include transport of voice, audio and video over
802.11 wireless networks, video conferencing, media stream distribution,
enhanced security applications and mobile and nomadic access applications.
6. 802.11d: Task Group d (TGd) describes a protocol that allows an 802.11
device to receive the regulatory information required to configure itself
properly to operate anywhere on earth. The current 802.11 standard defines
operation in only a few regulatory domains (countries). This supplement adds
the requirements and definitions necessary to allow 802.11 WLAN
equipment to operate in markets not served by the current standard.
6.0 Specifications:
1. It uses the ISM frequency bands: 902 to 928 MHz, 2.4 to 2.4835 GHz, and
5.725 to 5.85 GHz, of which the 2.4 GHz band is the most commonly used.
2. RF powers radiated by nodes are limited to one watt.
3. Spread-spectrum modulation is used for data communication, as it is less
susceptible to radio noise and creates little interference.
Click the "Search" button in the software. The card will search for all of the available
hotspots in the area and show you a list.
Double-click on one of the hotspots to connect to it.
Old 802.11 equipment has no automatic search feature. You have to find the SSID of
the hotspot (usually a short word of 10 characters or less) as well as the channel
number (an integer between 1 and 11), and type these two pieces of information in
manually. All the search feature does is grab these two pieces of information from the
radio signals generated by the hotspot and display them for you.
8.0 Benefits of Wi-Fi:
In a Wi-Fi network, users can access shared information without looking for a place
to plug in, and network managers can set up or augment networks without installing
or moving wires. Wi-Fi offers the following productivity, convenience, and cost
advantages over traditional wired networks:
1. Mobility: Wi-Fi systems can provide LAN users with access to real-time
information anywhere in their organization. This mobility supports
productivity and service opportunities not possible with wired networks.
2. Installation Speed and Simplicity: Installing a Wi-Fi system can be fast and
easy and can eliminate the need to pull cable through walls and ceilings.
3. Installation Flexibility: Wireless technology allows the network to go where
wire cannot go.
4. Reduced Cost-of-Ownership: While the initial investment required for Wi-Fi
hardware can be higher than the cost of wired LAN hardware, overall
installation expenses and life-cycle costs can be significantly lower. Long-
term cost benefits are greatest in dynamic environments requiring frequent
moves, adds, and changes.
5. Scalability: Wi-Fi systems can be configured in a variety of topologies to
meet the needs of specific applications and installations. Configurations are
easily changed and range from peer-to-peer networks suitable for a small
number of users to full infrastructure networks of thousands of users that
allow roaming over a broad area.
6. Speed: It offers speeds up to 54 Mbps, much greater than other wireless
access technologies such as corDECT, GSM and CDMA.
9.0 WPA (Wi-Fi Protected Access)
Wi-Fi Protected Access (WPA) is an encryption standard for WLAN security which
secures the Access Point (AP), the entrance to the WLAN. WPA provides good
encryption through the Temporal Key Integrity Protocol (TKIP) together with the
RC4 algorithm. WPA was defined by the Wi-Fi Alliance, which took over all the
relevant specifications from the IEEE 802.11i working group: TKIP as a replacement
for the weak WEP protocol, the standardized handshake between client and Access
Point (AP) for establishing the session keys, and in addition a simplified procedure
for deriving the master secret from a passphrase, which works without a RADIUS
server, together with the negotiation of the encryption procedure between Access
Point and client.
The later version, WPA2, which conforms to 802.11i, builds on AES encryption and
thereby fulfils the security guidelines demanded by many US authorities. WPA2 has
two operating modes, Personal mode and Enterprise mode, which differ in
authentication: Personal mode works with passwords (a pre-shared key), while
Enterprise mode relies on remote authentication by means of the RADIUS and EAP
protocols. This procedure corresponds to 802.1X. (Information taken from the
Internet.)
10.0 Limitation of Wi-Fi networks:
The key areas of limitation of Wi-Fi are:
1. Coverage: A single Access Point can cover, at best, a radius of only about 60
metres. Hundreds of Access Points are necessary to provide seamless coverage
even in a small area. For a 10 square km area, roughly 650 Access Points are
required, whereas CDMA2000 1xEV-DO requires just 9 sites.
2. Roaming: It lacks roaming between different networks; hence widespread
coverage by one service provider, which is the key to the success of a wireless
technology, is not possible.
3. Backhaul: Backhaul directly affects the data rate; service providers use cable
or DSL for backhaul. Real-world Wi-Fi data rates are at best half of their
theoretical peak rates due to factors such as signal strength, interference and
radio overhead. Backhaul reduces the remaining throughput further.
4. Interference: Wi-Fi uses unlicensed spectrum, which means there is no
regulatory recourse against interference. The most popular type of Wi-Fi,
802.11b, uses the crowded 2.4 GHz band, which is already used by Bluetooth,
cordless phones and microwave ovens.
5. Security: Wi-Fi Access Points and modems use the Wired Equivalent Privacy
(WEP) standard, which is very susceptible to hacking and eavesdropping.
WPA (Wi-Fi Protected Access) offers much better security with the help of
dynamic key encryption and mutual authentication.
6. Authentication, Authorization and Accounting: In a server-based
configuration, whenever a laptop enters a Wi-Fi zone, a welcome page is sent
to it. The user enters a username and password and is connected through the
wireless gateway (router) to the AAA and LDAP servers. Once authenticated,
the user can access sites of his choice. Prepaid and postpaid customers can be
billed.
( P Khan JTO, TP WMA Mum.)
11.0 Abbreviations:
1. LAN: Local Area Network.
2. AP: Access Point.
3. EP: Extension Point.
4. ISM: Industrial, Scientific & Medical.
5. MAC: Media Access Control.
6. CSMA/CA: Carrier Sense Multiple Access with Collision Avoidance.
7. CDMA2000 1xEV-DO: CDMA2000 1x Evolution-Data Optimized.
8. IEEE: Institute of Electrical & Electronics Engineers.
9. OSI: Open Systems Interconnection.
10. WEP: Wired Equivalent Privacy.
12.0 References:
1. Article in PC Quest Magazine August 2003 issue.
2. Article in CHIP magazine September 2004 issue.
3. Article in Network Magazine February 2001 issue.
4. Technical article at internet site: www.wirelesslan.com.
5. Technical article at internet site: www.proxim.com.
[Figure: two hosts on an Ethernet LAN — Host A (IP address 144.12.12.06) and Host
B (IP address 144.12.12.26)]
ARP Format
Fig. 3 shows the format of ARP-Request and ARP-Reply packets and their
encapsulation in the data link frame (e.g., a MAC frame). The Ethernet type value
'0806' hexadecimal is reserved for ARP frames.
Fig.3
ARP Request Format
Hardware Type:- 2 Octets
The value '1' in the Hardware Type field indicates an Ethernet network. Other values
are listed in Table 1.
Table 1
ARP Hardware Type Values
Hardware Type Value    Description of Network
1                      Ethernet (10 Mbps)
6                      IEEE 802 Networks
7                      ARCNET
Hlen :- 1 Octet
Hardware Address Length value is '6 Octets' in Ethernet
Plen :- 1 Octet
Protocol Address Length value is '4 Octets' in DoD IP Protocol.
Operation:- 2 Octets
For an ARP-Request the Operation field value is '1'; for an ARP-Reply the value is
'2'. Refer to Table 3 for other values.
Table 3
Operation Values for ARP Packet
Operation Field Value Type of Operation
1 ARP-Request
2 ARP-Reply
3 RARP-Request
4 RARP-Reply
5 DRARP-Request
6 DRARP-Reply
7 DRARP-Error
8 InARP-Request
9 InARP-Reply
10 ARP-NAK
Target IP Address:-
This is the IP address of the target node. The target node, after identifying this IP
address, responds with its hardware address in the ARP-Reply packet.
Encapsulation of ARP-Request Packet at the Data Link Level:-
The data link source hardware address is the hardware address of the ARP-Request
sender. The data link destination hardware address is the Ethernet broadcast address,
all '1's (FFFF FFFF FFFF hexadecimal). (See Fig. 4.)
Note: The ARP protocol operates on physical networks which support broadcast
capability, viz. Ethernet, Token Ring, FDDI, ARCnet, etc.
Ethernet Type Value is '0806' Hexadecimal which indicates that the ARP Data is
carried in the Frame.
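Putting the field sizes above together, an ARP-Request body can be assembled as follows. This is a sketch using Python's struct module; the hardware and IP addresses are the example values from the text, and the function name is our own.

```python
import socket
import struct

def build_arp_request(sender_mac: bytes, sender_ip: str, target_ip: str) -> bytes:
    """Pack the 28-byte ARP-Request body described in the text."""
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                            # Hardware Type: 1 = Ethernet
        0x0800,                       # Protocol Type: IP
        6,                            # Hlen: 6 octets for Ethernet
        4,                            # Plen: 4 octets for IP
        1,                            # Operation: 1 = ARP-Request
        sender_mac,                   # Sender Hardware Address
        socket.inet_aton(sender_ip),  # Sender IP Address
        b"\x00" * 6,                  # Target Hardware Address: still unknown
        socket.inet_aton(target_ip),  # Target IP Address
    )
```

On the wire this body would ride in a MAC frame whose Ethernet type is 0806 hexadecimal and whose destination is the broadcast address.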
[Fig. 4: ARP frames between Host A (IP 144.12.12.06, HA 080010C2A102 hex) and
Host B (IP 144.12.12.26, HA 080010310596 hex)]
Broadcast (ARP Request): Dest. H/W Addr 'FFFFFFFFFFFF' | Source H/W Addr
'080010310596' | Ethernet Type '0806' | ARP Request Pkt | CRC
Point-to-Point (ARP Reply): Dest. H/W Addr '080010310596' | Source H/W Addr
'080010C2A102' | Ethernet Type '0806' | ARP Reply Pkt | CRC
Fig. 4
ARP Operation:-
When an IP datagram is ready for transmission, the routing component in the network
layer (IP layer) determines whether the destination IP address is in the local network
or a remote network. If it is in the local network, the sender host needs to find the
hardware address of the target node. If it is in a remote network, the sender host needs
to find the hardware address of the router port to which the IP datagram is to be
forwarded (see Fig. 5).
[Fig. 5: the routing component within the network (IP) layer]
The ARP protocol cannot be routed; that is, it cannot cross a router boundary. Before
sending an ARP Request, the ARP module tries to find the target address in the ARP
cache table. The ARP cache table keeps paired entries of IP addresses and the
corresponding hardware addresses (see Table 4).
Table 4
ARP Cache Table
Protocol Type (IP) Protocol Address Hardware Address Time Stamp
(IP Address) (MAC Address) (Minutes)
0800 144.12.12.06 080010C2A102 15
------- --------------------- ---------------------- ---
If the target IP address is found in the ARP cache table, the table returns the
corresponding hardware address and the IP datagram is transmitted to the destination
in a MAC frame. If the target IP address is not found, an ARP Request is broadcast at
the data link layer, and on receipt of the ARP-Reply the ARP cache table is updated.
Usually an ARP cache entry ages out after 15 minutes; after the timeout an ARP
Request is again needed to find the hardware address of the target.
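The cache-then-broadcast logic can be sketched as follows. This is an illustrative model only (a real stack would broadcast the ARP-Request on a miss and refill the table on the ARP-Reply); the class and names are ours, and the 15-minute lifetime is taken from the text.

```python
import time

CACHE_LIFETIME = 15 * 60   # seconds; entries age out after 15 minutes

class ArpCache:
    """IP-to-MAC cache with the aging behaviour described in the text."""

    def __init__(self):
        self._entries = {}   # ip -> (mac, timestamp)

    def add(self, ip, mac, now=None):
        self._entries[ip] = (mac, now if now is not None else time.time())

    def lookup(self, ip, now=None):
        now = now if now is not None else time.time()
        entry = self._entries.get(ip)
        if entry is None:
            return None                   # miss: broadcast an ARP-Request
        mac, stamp = entry
        if now - stamp > CACHE_LIFETIME:
            del self._entries[ip]         # aged out: ARP again
            return None
        return mac
```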
Procedure involved in routing an IP packet from 'Node A1' to 'Node B1' (see Fig. 6):
Procedure involved at Node A1:-
1. Since the Destination IP Address is not in the Local Network the IP Datagram
is to be forwarded to Router-A which is connected to the Remote Network.
2. Node A1 looks into ARP Cache Table to find the H/W Address of Router-A.
3. If found the IP datagram is forwarded to Destination H/W Address of
Router-A.
4. If not found the Node A1 generates ARP-Request packet to find the H/W
Address of Router-A and Broadcasts the MAC Frame Containing the ARP-
Request Packet.
5. Router-A responds with its H/W Address in the ARP-Reply Packet
encapsulated in a MAC Frame addressed to Node A1.
6. Node A1 updates the ARP-Cache Table and sets the Time stamp value to 15
Minutes.
7. Node A1 sends the IP Datagram encapsulated in a MAC Frame to Router-A.
Procedure involved at Router-A:-
1. Router-A analyses the Destination IP Address in IP Datagram and Routes the
Packet to Router-B
Procedure involved at Router-B:-
1. Since the Destination IP Address belongs to the Local Network the IP
Datagram is to be forwarded to Node B1 which is directly connected to the
Ethernet LAN.
2. Router-B looks into ARP Cache Table to find the H/W Address of the Node
B1.
3. If found the IP datagram is forwarded to Destination H/W Address of Node
B1.
4. If not found the Router-B generates ARP-Request packet to find the H/W
Address of Node B1 and Broadcasts the MAC Frame Containing the ARP-
Request Packet
5. Node B1 responds with its H/W Address in the ARP-Reply Packet
encapsulated in a MAC Frame addressed to Router-B.
6. Router-B updates the ARP-Cache Table and sets the Time stamp value to 15
Minutes.
7. Router-B sends the IP Datagram encapsulated in a MAC Frame to Node B1.
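The first decision in both procedures, local versus remote destination, can be sketched with Python's ipaddress module. The network prefix and router address below are illustrative assumptions, not values from the text.

```python
import ipaddress

def next_hop(dest_ip: str, local_net: str, default_router: str) -> str:
    """Return the address whose hardware address must be resolved via ARP.

    A local destination is ARPed for directly; a remote destination means
    the datagram goes to the router, so the router port is resolved instead.
    """
    if ipaddress.ip_address(dest_ip) in ipaddress.ip_network(local_net):
        return dest_ip          # local network: resolve the target node itself
    return default_router       # remote network: resolve the router port
```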
[Fig. 6: Node A1 and Node A2 on one LAN reach Node B1 and Node B2 on another
LAN through Router-A, a router network and Router-B. The exchanges shown are:
ARP-Request/ARP-Reply between Node A1 and Router-A, the IP datagram
forwarded across the router network, ARP-Request/ARP-Reply between Router-B
and Node B1, and finally the IP datagram delivered to Node B1.]
Fig. 6
Dynamic allocation is the most interesting method of the three, because it involves
not only assigning a network address but also reclaiming and reusing the same
address for another client. Dynamic allocation therefore allows efficient management
of a pool of network addresses and is particularly useful in cases where:
1. There is a limited amount of network addresses on the net.
2. The network has computers which temporarily connect and disconnect to it
(e.g. portable computers) and so the network is changing frequently.
The basic mechanism for the dynamic allocation of network addresses is simple: the
client requests the use of an address for a limited period of time (which is called a
lease). The DHCP server allocates an address for the client, marks it as 'used' and
notifies the client about the address and the lease time approved.
The client, in turn, can:
1. Extend its lease with subsequent requests.
2. Ask for a permanent assignment by asking for an infinite lease.
3. Release the address back to the server before the lease expires, in case it
doesn't need it.
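The lease mechanism above can be sketched as a small address pool. This is illustrative only; there is no wire protocol here, and all names are our own.

```python
class DhcpPool:
    """Pool of addresses handed out on timed leases, as described in the text."""

    def __init__(self, addresses):
        self.free = list(addresses)
        self.leases = {}              # ip -> expiry time

    def request(self, now, lease_time):
        """Allocate an address, mark it used, and record the approved lease."""
        if not self.free:
            return None
        ip = self.free.pop(0)
        self.leases[ip] = now + lease_time
        return ip

    def renew(self, ip, now, lease_time):
        """Extend the lease with a subsequent request."""
        if ip in self.leases:
            self.leases[ip] = now + lease_time

    def release(self, ip):
        """Return the address to the server before the lease expires."""
        if self.leases.pop(ip, None) is not None:
            self.free.append(ip)

    def reclaim(self, now):
        """Reclaim expired leases so the addresses can be reused."""
        for ip, expiry in list(self.leases.items()):
            if expiry <= now:
                self.release(ip)
```

A permanent assignment corresponds to requesting an effectively infinite lease time.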
hops (1 octet): Set to zero by the client; optionally used by relay agents when booting
via a relay agent.
xid (4 octets): Transaction ID. A random number chosen by the client, used by the
client and server to associate a request message with its response.
flags (2 octets): Flags.
ciaddr (4 octets): Client IP address. Filled in by the client if it knows its IP address
(from previous requests or from manual configuration) and can respond to ARP
requests.
This is the format of a message sent from the client directly to the server:
transaction-id (3 octets): The ID for this message transaction.
hop-count (1 octet): Counts the relay agents which the message has passed through so
far.
link-address (12 octets): Used by the server to identify the link of the client in a
RELAY-FORW or RELAY-REPL message.
peer-address (12 octets): The address of the relay agent or the client from which the
message was received, i.e. the current hop.
options (variable): Options for the message. Here the message must include the
"Relay Message option" among the other options.
Client/Server Model
The client and the server negotiate in a series of messages in order for the client to get
the parameters it needs.
The following diagram shows the messages exchanged between the DHCP client and
servers when allocating a new network address. Next is a detailed explanation of all
the various messages and a description of the communication steps.
This process can involve more than one server, but only one server is selected by the
client. In the figure, the selected server is marked 'selected', and the other, 'not
selected', server stands for all the possible servers that were not selected.
DHCPACK
Server to client with configuration parameters, including
committed network address.
DHCPNAK
Server to client indicating the client's notion of its network address is
incorrect (e.g., the client has moved to a new subnet) or the client's lease
has expired.
DHCPDECLINE
Client to server indicating network address is already in use.
DHCPRELEASE
Client to server relinquishing network address and canceling
remaining lease.
DHCPINFORM
Client to server, asking only for local configuration parameters;
client already has externally configured network address.
Security in DHCP
Security is a significant subject when considering DHCP, because the main goal is to
obtain communication parameters/an IP address from an external source. This can
give an opportunity to damage the host from outside the system.
There are numerous threats to a host which uses DHCP: for example, deploying fake
DHCP servers that always deny service, or sending incorrect communication
parameters and wrong DHCP server information, either because of a flawed server or
deliberately.
These threats require authentication of the DHCP server or/and the communication
parameters to ensure that we are dealing with real DHCP server which sends valid
parameters.
In order to achieve higher safety, the following two rules must be obeyed:
1. The protocol cannot be changed (i.e. its structure, message types etc. must
remain intact).
2. Interact with the DHCP server as little as possible: minimize the number of
stages of the communication with the DHCP server.
Security in IPv6
In addition to the method of adding "options", like in IPv4, there is also use of the
IPsec mechanisms for communication between relay-agents or relay-agent - server in
IPv6.
IPsec is a mechanism for security at the IP level. It provides services such as replay
detection, access control, etc.
The servers and relay-agents are configured manually. Each relay-agent or server has
to hold a list of pairs of servers and relay-agents to know which one will get the
message. Servers and relay agents can accept messages only from DHCP sources
which are on the list in their configuration.
In addition to this tool, one can also use the general security tools of IPv6 for DHCP
security; many sources for these tools are available on the web.
The first protocol devised for this purpose was Serial Line Internet Protocol (SLIP).
However, SLIP has some deficiencies: it does not support protocols other than
Internet Protocol (IP), it does not allow the IP addresses to be assigned dynamically,
and it does not support authentication of the user. The Point-to-Point Protocol (PPP)
is a protocol designed to respond to these deficiencies.
TRANSITION STATES
The different phases through which a PPP connection goes can be described using the
transition state diagram shown in Figure-2.
Idle state. The idle state means that the link is not being used. There is no active carrier and
the line is quiet.
Establishing state. When one of the end points starts the communication, the
connection goes into the establishing state. In this state, options are negotiated
between the two parties. If the negotiation is successful, the system goes to the
authenticating state (if authentication is required) or directly to the networking state.
The LCP packets, discussed shortly, are used for this purpose. Several packets may be
exchanged during this state.
[Figure-2: PPP transition state diagram. Detecting a carrier moves the connection
from Idle to Establishing; success there leads to Authenticating (or directly to
Networking), failure to Terminating. Success in Authenticating leads to Networking
(exchanging user data and control), failure to Terminating; finishing Networking
leads to Terminating, after which the link returns to Idle.]
Authenticating state. The authenticating state is optional; the two end points may
decide, during the establishing state, not to go through this state. However, if they
decide to proceed with authentication, they send several authentication packets,
discussed in a later section. If the result is successful, the connection goes to the
networking state; otherwise, it goes to the terminating state.
Networking state. The networking state is the heart of the transition states. When a
connection reaches this state, the exchange of user control and data packets can
start. The connection remains in this state until one of the end points wants to
terminate the connection.
Terminating state. When the connection is in the terminating state, several packets
are exchanged between the two ends for house cleaning and closing the link.
PPP LAYERS
Figure-3 shows the PPP layers. PPP has only physical and link layers. This means that
a protocol that wants to use the services of PPP should have other layers (network,
transport, and so on).
Physical Layer
No specific protocol is defined for the physical layer in PPP. Instead, it is left to the
implementer to use whatever is available. PPP supports any of the protocols
recognized by ANSI.
[Figure-3: PPP layers. Data Link layer: a variation of HDLC; Physical layer: ANSI
standards]
(In the PPP frame, the Address field carries the constant value 11111111 and the
Control field the value 11000000.)
4. Protocol field. The protocol field defines what is being carried in the data
field: user data or other information. We will discuss this field in detail
shortly.
5. Data field. This field carries either the user data or other information that we
will discuss shortly.
6. FCS. The frame check sequence field, as in HDLC, is simply a two-byte or
four-byte CRC.
Frame layout: Flag | Address | Control | Protocol | Payload (and padding) | FCS | Flag
For LCP packets, the Protocol field carries the value C021 (hexadecimal).
1. Code. This field defines the type of LCP packet. We will discuss these packets
and their purpose in the next section.
2. ID. This field holds a value used to match a request with the reply. One end
point inserts a value in this field, which will be copied in the reply packet.
3. Length. This field defines the length of the whole LCP packet.
4. Information. This field contains extra information needed for some LCP
packets.
LCP Packets
Table-1 lists some LCP packets.
01 (hex)  Configure-request: contains the list of proposed options and their values.
09 (hex)  Echo-request: a type of hello message to check if the other end is alive.
Configuration Packets
Configuration packets are used to negotiate the options between two ends. Four
different packets are used for this purpose: configure-request, configure-ack,
configure-nak, and configure-reject.
1. Configure-request. The end point that wishes to start a connection sends a
configure-request message with a list of zero or more options to the other end
point. Note that all of the options should be negotiated in one packet.
2. Configure-ack. If all of the options listed in the configure-request packet are
accepted by the receiving end, it will send a configure-ack, which repeats all
of the options requested.
3. Configure-nak. If the receiver of the configure-request packet recognizes all
of the options but finds that some should be omitted or revised (their values
changed), it sends a configure-nak packet to the sender. The sender should
then omit or revise those options and send a totally new configure-request
packet.
4. Configure-reject. If some of the options are not recognized by the receiving
party, it responds with a configure-reject packet, marking those options that
are not recognized. The sender of the request should revise the configure-
request message and send a totally new one.
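The receiver's choice between ack, nak and reject can be sketched as follows. This is a simplified model: the option names and the function are our own, and real LCP negotiates typed binary options rather than a dictionary.

```python
def respond(requested, known_options):
    """Classify a configure-request, following the four outcomes above.

    `requested` maps option names to proposed values; `known_options` maps
    the options we recognize to the values we will accept.
    """
    # Unrecognized options force a configure-reject listing them.
    unrecognized = [opt for opt in requested if opt not in known_options]
    if unrecognized:
        return ("configure-reject", unrecognized)
    # Recognized options with unacceptable values get a configure-nak
    # that suggests the revised values.
    disagreed = {opt: known_options[opt]
                 for opt, value in requested.items()
                 if value != known_options[opt]}
    if disagreed:
        return ("configure-nak", disagreed)
    # Everything accepted: configure-ack repeats all requested options.
    return ("configure-ack", dict(requested))
```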
AUTHENTICATION
Authentication plays a very important role in PPP because PPP is designed for use
over dial-up links where verification of user identity is necessary. Authentication
means validating the identity of a user who needs to access a set of resources. PPP has
created two protocols for authentication: Password Authentication Protocol (PAP) and
Challenge Handshake Authentication Protocol (CHAP).
PAP
The Password Authentication Protocol (PAP) is a simple authentication procedure
with a two-step process:
1. The user who wants to access a system sends an authentication identification
(usually the user name) and a password.
2. The system checks the validity of the identification and password and either
accepts or denies connection.
For those systems that require more security, PAP is not enough; a third party with
access to the link can easily pick up the password and access the system resources.
Figure-6 shows the idea of PAP.
[Figure-6: PAP. Over the point-to-point physical link, the user sends an
authenticate-request packet carrying the user name and password; the system replies
with an authenticate-ack or authenticate-nak packet (accept or reject).]
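The two-step check can be sketched as follows. This is purely illustrative: the credentials dictionary is a placeholder, and a real system must not store clear-text passwords the way this sketch does, which is precisely the weakness of PAP noted above.

```python
USERS = {"alice": "secret123"}    # illustrative credentials only

def pap_authenticate(username, password, users=USERS):
    """Return authenticate-ack if the pair matches, authenticate-nak otherwise."""
    if users.get(username) == password:
        return "authenticate-ack"   # system accepts the connection
    return "authenticate-nak"       # system denies the connection
```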
PAP Packets
PAP packets are encapsulated in a PPP frame. What distinguishes a PAP packet from
other packets is the value of the protocol field, C023 (hexadecimal). There are three PAP packets:
authenticate-request, authenticate-ack, and authenticate-nak. The first packet is used
by the user to send the user name and password. The second is used by the system to
allow access. The third is used by the system to deny access. Figure-7 shows the
format of the three packets.
PAP packets are carried in the payload (and padding) field of the PPP frame:
Flag | Address | Control | Protocol = C023 (hex) | Payload | FCS | Flag
CHAP
[Figure-8: CHAP. Over the point-to-point physical link, the system sends a challenge
packet carrying the challenge value; the user returns a response packet carrying the
response and name; the system answers with a success or failure packet (accept or
reject).]
CHAP Packets
CHAP packets are encapsulated in the PPP frame. What distinguishes a CHAP packet
from other packets is the value of the protocol field, C223 (hexadecimal). There are four CHAP
packets: challenge, response, success, and failure. The first packet is used by the
system to send the challenge value. The second is used by the user to return the result
of the calculation. The third is used by the system to allow access to the system. The
fourth is used by the system to deny access to the system. Figure-9 shows the format
of the four packets.
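Classic CHAP (RFC 1994) computes the response as an MD5 digest over the packet identifier, the shared secret and the challenge value, so the password itself never crosses the link. A sketch, with function names of our own:

```python
import hashlib
import os

def chap_challenge() -> bytes:
    """System side: generate a random challenge value."""
    return os.urandom(16)

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """User side: MD5 over identifier, shared secret and challenge (RFC 1994)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def chap_verify(identifier: int, secret: bytes, challenge: bytes,
                response: bytes) -> bool:
    """System side: recompute the digest and compare; success or failure."""
    return chap_response(identifier, secret, challenge) == response
```

Because the challenge is random per exchange, a captured response is useless for replay, which is what makes CHAP stronger than PAP.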
NETWORK CONTROL PROTOCOL (NCP)
After the link has been established and authentication (if any) has been successful, the
connection goes to the networking state. In this state, PPP uses another protocol called
Network Control Protocol (NCP). NCP is a set of control protocols to allow the
encapsulation of data coming from network layer protocols (such as IP, IPX, and
AppleTalk) in the PPP frame.
CHAP packets are carried in the payload (and padding) field of the PPP frame:
Flag | Address | Control | Protocol = C223 (hex) | Payload | FCS | Flag
IPCP
The set of packets that establish and terminate a network layer connection for IP
packets is called Internetwork Protocol Control Protocol (IPCP). The format of an
IPCP packet is shown in Figure-10. Note that the value of the protocol field, 8021
(hexadecimal), defines the packet encapsulated in the PPP frame as an IPCP packet.
Figure-10 IPCP packet encapsulated in PPP frame (protocol field value 8021
hexadecimal)
Seven packets are defined for the IPCP protocol, distinguished by their code values as
shown in Table-3
Table-3 Code value for IPCP packets
01 Configure-request
02 Configure-ack
03 Configure-nak
04 Configure-reject
05 Terminate-request
06 Terminate-ack
07 Code-reject
A party uses the configure-request packet to negotiate options with the other party and
to set the IP addresses, and so on.
After configuration, the link is ready to carry IP protocol data in the payload field of a
PPP frame. This time, the value of the protocol field is 0021 (hexadecimal), to show
that an IP data packet, not an IPCP packet, is being carried across the link.
After IP has sent all of its packets, IPCP can take control and use the terminate-
request and terminate-ack packets to end the network connection.
Other Protocols
Note that the other protocols have their own sets of control packets, defined by the
value of the protocol field in the PPP frame.
AN EXAMPLE
Let us give an example of the states through which a PPP connection goes to deliver
some network layer packets. Figure-11 shows the steps:
1. Establishing. The user sends the configure-request packet to negotiate the
options for establishing the link. The user requests PAP authentication. After
the user receives the configure-ack packet, link establishment is done.
2. Authenticating. The user sends the authenticate-request packet and includes
the user name and password. After it receives the authenticate-ack packet, the
authentication phase is over.
3. Networking. Now the user sends the configure-request to negotiate the
options for the network layer activity. After it receives the configure-ack, the
user can send the network layer data, which may consume several frames.
After all data are sent, the user sends the terminate-request to terminate the
network layer activity. When the terminate-ack packet is received the
networking phase is complete. The connection goes to the terminating state.
4. Terminating. The user sends the terminate-request packet to terminate the
link. With the receipt of the terminate-ack packet, the link is terminated.
Figure-11 An example (the exchange of configure, authenticate, user data, and
terminate packets between the user and the system over the point-to-point
physical link)
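The phases in the example above can be modelled, as a rough sketch, with a small
state machine; the state and event names are simplifications of the packet exchange,
not actual protocol fields.

```python
# Rough sketch (an assumed simplification, not the full PPP state machine)
# of the phases in the example: establishing -> authenticating ->
# networking -> terminating -> dead.

TRANSITIONS = {
    ("establishing", "configure-ack"): "authenticating",
    ("authenticating", "authenticate-ack"): "networking",
    ("networking", "terminate-ack"): "terminating",   # network layer ends
    ("terminating", "terminate-ack"): "dead",         # link is terminated
}

def run(events):
    """Apply a sequence of received packets and return the visited states."""
    state = "establishing"
    trace = [state]
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)   # ignore unexpected events
        trace.append(state)
    return trace

print(run(["configure-ack", "authenticate-ack", "terminate-ack", "terminate-ack"]))
```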
INTERNET SERVICES
Introduction
The Domain Name System, or DNS, is a distributed database that is used by TCP/IP
applications to map between the host-names and IP addresses, and to provide
electronic mail routing information. We use the term distributed because no single site
on the Internet knows all the information. Each site (university department, campus,
company, or department within a company, for example) maintains its own database
of information and runs a server program that other systems across the Internet
(clients) can query. The DNS provides the protocol that allows clients and servers to
communicate with each other.
The impetus for the development of the domain system was growth in the Internet :
Host name to address mappings were maintained by the Network Information Center
(NIC) in a single file (HOSTS.TXT) which was FTPed by all hosts (RFC-952, RFC-
953). The total network bandwidth consumed in distributing a new version by this
scheme is proportional to the square of the number of hosts in the network, and even
when multiple levels of FTP are used, the outgoing FTP load on the NIC host is
considerable. Explosive growth in the number of hosts didn't bode well for the future.
The network population was also changing in character. The timeshared hosts that
made up the original ARPANET were being replaced with local networks of
workstations. Local organizations were administering their own names and addresses,
but had to wait for the NIC to change HOSTS.TXT to make changes visible to the
Internet at large. Organizations also wanted some local structure on the name space.
The applications on the Internet were getting more sophisticated and creating a need
for general purpose name service.
The result was several ideas about name spaces and their management. The proposals
varied, but a common thread was the idea of a hierarchical name space, with the
hierarchy roughly corresponding to organizational structure, and names using "." as
the character to mark the boundary between hierarchy levels. A design using a
distributed database and generalized resources was described in (RFC-882, RFC-883).
Based on experience with several implementations, the system evolved into the
scheme described in this document.
DNS Components
DNS does much more than name-to-address translation. It basically comprises the
following components:
1. Domain Name Space and Resource Records
2. Name Servers
3. Resolvers
authoritative name server for the x.z domain. If nic.x.z is asked about a node called
a.y.z, nic.x.z must query nic.y.z, because nic.y.z is the authoritative name server for
the domain y.z. nic.x.z then caches the response; it can then quickly answer future
queries, but its answers will not be authoritative, because nic.x.z is not responsible for
the y.z domain.
Resolvers
These are programs that send requests over the network to servers on behalf of the
users. Resolvers must be able to access at least one name server and use that name
server‘s information to answer a query directly, or pursue the query using referrals to
other name servers. When a DNS server responds to a resolver, the requester attempts
a connection to the host using the IP address and not the name. The resolver is the
client portion of the DNS. The resolver is the library of routines called by applications
when they want to translate (resolve) a DNS name.
The resolver handles:
1. querying a name server
2. interpreting responses (which may be RRs or errors)
3. returning the information to the programs that requested it
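In code, an application typically calls the resolver through a library routine rather
than speaking the DNS protocol itself. A minimal Python sketch, using the standard
socket.getaddrinfo resolver entry point; "localhost" is chosen here so the lookup does
not depend on an external name server:

```python
# The resolver as a library routine called by an application.
# socket.getaddrinfo consults the system resolver (hosts file and/or
# configured name servers) and returns address records.
import socket

def resolve(name):
    """Return the sorted set of IPv4 addresses the resolver finds for `name`."""
    infos = socket.getaddrinfo(name, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))
```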
Telnet Protocol
The Telnet protocol is often thought of as simply providing a facility for remote
logins to computers via the Internet. This was its original purpose, although it can be
used for many other purposes.
It is best understood in the context of a user with a simple terminal using the local
telnet program (known as the client program) to run a login session on a remote
computer where his communications needs are handled by a telnet server program. It
should be emphasised that the telnet server can pass on the data it has received from
the client to many other types of process including a remote login server. It is
described in RFC854 and was first published in 1983.
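As a hedged illustration of one detail of RFC 854, a telnet client must separate
option-negotiation commands from user data in the byte stream; the stream contents
below are made up for the example.

```python
# Sketch: split a raw telnet byte stream into plain data and negotiation
# commands. IAC (byte 255) introduces a command; WILL/WONT/DO/DONT
# (251-254) are each followed by one option byte (RFC 854).
IAC, WILL, WONT, DO, DONT = 255, 251, 252, 253, 254

def split_stream(data):
    """Return (plain_text, negotiations) from a raw telnet byte stream."""
    text, commands = bytearray(), []
    i = 0
    while i < len(data):
        if data[i] == IAC and i + 2 < len(data) and data[i + 1] in (WILL, WONT, DO, DONT):
            commands.append((data[i + 1], data[i + 2]))   # (command, option)
            i += 3
        else:
            text.append(data[i])
            i += 1
    return bytes(text), commands

stream = bytes([IAC, DO, 1]) + b"login: "   # DO option 1 (echo), then data
print(split_stream(stream))
```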
HTTP (HyperText Transfer Protocol)
The standard Web transfer protocol is HTTP, which transmits hypertext over networks.
The name is somewhat misleading in that HTTP is not a protocol for transferring
hypertext; rather, it is a protocol for transferring information with the efficiency
necessary for making hypertext jumps. The data transferred by the protocol can be
plain text, hypertext, audio, images or any other Internet accessible information.
The HyperText Transport Protocol (HTTP) is an application-level Protocol used by
Web client and Web servers to communicate with each other. HTTP has been in use
since 1990.
The HTTP is a transaction-oriented client/ server protocol. To provide reliability,
HTTP makes use of TCP. Although the use of TCP for the transport connection is
very common, it is not formally required by the standard. As and when ATM
networks become commercially available, HTTP requests and replies can be carried
in AAL5 just as well.
HTTP is a "stateless" protocol: each transaction is treated independently. Therefore,
a typical implementation will create a new TCP connection between client and server
for each transaction and then terminate the connection as soon as the transaction is
complete. Each interaction consists of one ASCII request, followed by one RFC 822
MIME-like response, i.e. messages are in a format similar to that used by Internet
mail and the Multipurpose Internet Mail Extensions (MIME).
HTTP is constantly evolving; several versions are in use and others are under
development.
The World Wide Web provides a single interface for accessing all these protocols.
This creates a convenient and user-friendly environment. It is no longer necessary to
be conversant in these protocols within separate, command-level environments. The
Web gathers together these protocols into a single system. Because of this feature, and
because of the Web's ability to work with multimedia and advanced programming
languages, the World Wide Web is the fastest-growing component of the Internet.
Understanding HyperText Transport Protocol (HTTP)
HTTP is a request/ response protocol. A Web client establishes a connection with a
Web server and sends a resource request. The request contains a request method and
protocol version, followed by a MIME-like message. The message contains request
modifiers, client information, and possible body content.
The Web server responds with a status line, including the message's protocol version
and a success or error code. It is followed by a MIME-like message containing server
information, entity meta-information, and possible body content. Figure 2 shows
where the HTTP layer fits into Web client and servers.
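A minimal sketch of the request and status-line formats just described, following the
RFC 1945 (HTTP/1.0) layout; the host name and path used are placeholders, not a
real server.

```python
# Sketch of the two halves of an HTTP transaction: building a request
# (request line, headers, blank line) and parsing the response status line.

def build_request(method, path, host):
    """Build a simple HTTP/1.0 request as an ASCII string."""
    return (f"{method} {path} HTTP/1.0\r\n"
            f"Host: {host}\r\n"
            f"\r\n")

def parse_status_line(line):
    """Split a status line like 'HTTP/1.0 200 OK' into (version, code, reason)."""
    version, code, reason = line.split(" ", 2)
    return version, int(code), reason

print(build_request("GET", "/index.html", "example.org"))
print(parse_status_line("HTTP/1.0 200 OK"))
```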
Fig. 2 The Web client communicates with the Web server using an HTTP virtual
circuit (each side runs HTTP over the TCP/IP protocol suite)
Details of HTTP can be found in the following Requests for Comments (RFCs):
HTTP 1.0 specifications are described in RFC 1945:
http:// www.cis.ohio-state.edu/htbin/rfc/rfc1945.html
MIME specifications are described in RFC 1521:
http:// www.cis.ohio-state.edu/htbin/rfc/rfc1521.html
Figure 3 HTML is transported between the Web client and Web server over the
HTTP virtual connection (the client's graphical user interface and the server's Web
resources sit above the HTML, HTTP, and TCP/IP protocol layers)
The HyperText Markup Language is a document-layout, hyperlink-specification, and
markup language. Web clients use it to generate resource requests for Web servers,
and to process output returned by the Web server for presentation. A markup language
describes what text means and what it is supposed to look like. Figure 3 shows where
the HTML layer fits into Web clients.
A fundamental property of HTML is that the text it describes can be rendered on most
devices. A single HTML Web page on a Web server can be displayed on a PC, Mac,
UNIX, and so on.
HTML 3.2 specifications are available online at : http:// www.w3c.org/
Web Client/ Server
The Web is similar to client/ server technology. The server, in client/ server
technology, usually connects to a database. The client, in client/ server technology,
makes a data request to the server, processes the returned data, and presents the result
through a graphical user interface.
A web client makes a resource request to the Web server, processes the returned
resource, and presents the result through a graphical user interface.
The difference between a server, in client/ server technology, and a Web server seems
to be that one accepts requests for data and the other accepts requests for a resource.
The differences become dramatic as we look closer.
(Figure: traditional client/ server components. The client presents a graphical user
interface; the server holds the application logic and connects to a database; both
sides communicate through vendor network software over the network.)
Web servers receive requests for a resource from Web clients through the standard
TCP/ IP protocol suite. The resource can be a file, or data returned by another
process. The Web server locates the file and returns it, or executes another process,
supplies it with input, and returns the output. The Web client does not apply
application logic to the resource. It presents the resource through a graphical user
interface. This is a ―thin client‖ because it does not contain application logic. Figure 5
illustrates Web client/ server components.
(Figure 5: Web client/ server components. The Web client presents a graphical user
interface; the Web server holds the Web resources and a database; both sides
communicate through the TCP/ IP protocol suite over the Internet.)
Any web client can request a resource from any Web server, and any Web server can
request a resource from any other web server. This is possible because they use the
standard TCP/IP Protocol Suite.
The network connection between the Web client and Web server remains only until
the Web server has returned the resource. The Web server does not retain any state
information about the Web client.
Proxy Server
Introduction
Although the volume of Web traffic on the Internet is staggering, a large percentage
of that traffic is redundant---multiple users at any given site request much of the same
content. This means that a significant percentage of the WAN infrastructure carries
the identical content (and identical requests for it) day after day. Eliminating a
significant amount of recurring telecommunications charges offers an enormous
savings opportunity for enterprise and service provider customers.
Data networking is growing at a dizzying rate. More than 80% of Fortune 500
companies have Web sites. More than half of these companies have implemented
intranets and are putting graphically rich data onto the corporate WANs. The number
of Web users is expected to increase by a factor of five in the next three years. The
resulting uncontrolled growth of Web access requirements is straining all attempts to
meet the bandwidth demand.
Caching
Caching is the technique of keeping frequently accessed information in a location
close to the requester. A Web cache stores Web pages and content on a storage device
that is physically or logically closer to the user---this is closer and faster than a Web
lookup. By reducing the amount of traffic on WAN links and on overburdened Web
servers, caching provides significant benefits to ISPs, enterprise networks, and end
users. There are two key benefits :
Cost savings due to WAN bandwidth reduction---ISPs can place cache engines at
strategic points on their networks to improve response times and lower the bandwidth
demand on their backbones. ISPs can station cache engines at strategic WAN access
points to serve Web requests from a local disk rather than from distant or overrun
Web servers.
In enterprise networks, the dramatic reduction in bandwidth usage due to Web
caching allows a lower-bandwidth (lower-cost) WAN link to serve the same user
base. Alternatively, the organisation can add users or add more services that use the
freed bandwidth on the existing WAN link.
Improved productivity for end users---The response of a local Web cache is often
three times faster than the download time for the same content over the WAN. End
users see dramatic improvements in response times, and the implementation is
completely transparent to them.
Other benefits include the following :
Secure access control and monitoring---The cache engine provides network
administrators with a simple, secure method to enforce a sitewide access policy
through URL filtering.
Operational logging---Network administrators can learn which URLs receive hits,
how many requests per second the cache is serving, what percentage of URLs are
served from the cache, and other related operational statistics.
Web Caching : How it works
Web caching works as follows :
1. A user accesses a Web page.
2. While the page is being transmitted to the user, the caching system saves
the page and all its associated graphics on a local storage device. That content
is now cached.
3. Another user (or the original user) accesses that Web page later in the day.
4. Instead of sending the request over the Internet, the Web cache system
delivers the Web page from local storage. This process speeds download time
for the user, and reduces bandwidth demand on the WAN link.
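The steps above can be sketched with a toy in-memory cache; fetch_from_origin
below is a stand-in for a real fetch over the WAN, not an actual API.

```python
# Toy Web cache: the first request for a URL is fetched from the origin and
# stored locally; later requests for the same URL are served from the cache.

cache = {}

def fetch_from_origin(url):
    """Placeholder for a real WAN fetch of the page at `url`."""
    return f"<html>content of {url}</html>"

def get(url):
    """Serve from the local cache when possible (a 'hit'); else fetch and store."""
    if url in cache:
        return cache[url], "hit"
    page = fetch_from_origin(url)
    cache[url] = page                      # the content is now cached
    return page, "miss"

print(get("http://example.org/")[1])   # miss
print(get("http://example.org/")[1])   # hit
```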
Advantages / Disadvantages
Security Issues
Many of the current firewall designs rely on the combination of packet filtering and
the proxy technology (especially "transparent proxying" technology). Today, proxy
systems can manage the different operation authorisations that users have when
surfing (for example, who is allowed to use which protocol), block unwanted surfers
outside the local net from getting in, and keep a log file of users' operations. Of
course, that is all in addition to filtering on the basis of IP address.
However, the caching ability which makes the Web run faster, has its security
disadvantages. It could be bad for business advertising at Web sites. It might even
violate copyright law.
Advertisers behind a site have a problem with caching proxy servers. They have no
way of knowing the number of readers behind a hit: it could be one or a hundred
thousand; they cannot tell without looking at the log files of the proxies. Furthermore,
every copyrighted document sitting in the proxy's cache is, in fact, an unauthorised
copy.
The wrong solution would be to disable the caching. It would hurt performance,
causing fewer visitors at the advertisers' sites. A good solution would be letting a
caching proxy keep a copy of a Web page if the proxy promises, in return, to tell the
Web server the number of hits it got for that page over a reasonable time period.
Undoubtedly, advertisers would prefer more specific information about the readers,
but that is something to argue about.
Other problems arise when using the Internet Cache Protocol (ICP), a lightweight
message format used for communication among Web proxy caches, implemented on
top of UDP. ICP is used for object location, and can also be used for cache selection.
Because of its connectionless nature, it is vulnerable to some methods of attack.
Checking the source IP address of an ICP message provides a certain degree of
protection. ICP queries should be processed only if the querying address is allowed
to access the cache; ICP replies should be accepted only from known neighbours, and
otherwise ignored. Trusting the validity of addresses at the IP level makes ICP
susceptible to IP address spoofing, which has many problematic consequences (for
example, inserting bogus ICP queries, or inserting bogus ICP replies and thereby
preventing a certain neighbour from being used or forcing a certain neighbour to be
used). In fact, only routers are able to detect spoofed addresses; hosts cannot. But
still, the IP Authentication Header can be used to provide cryptographic
authentication for the IP packet with the ICP in it.
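The address checks described above might be sketched as follows; the allowed
network and neighbour addresses are purely illustrative, not taken from any real
cache configuration.

```python
# Sketch of ICP access checks: process a query only if the source address
# is allowed to use the cache; accept replies only from known neighbours.
import ipaddress

ALLOWED_CLIENTS = ipaddress.ip_network("10.0.0.0/8")      # example local net
KNOWN_NEIGHBOURS = {"192.0.2.10", "192.0.2.11"}           # example peers

def accept_query(src):
    """Process an ICP query only if the querying address may use the cache."""
    return ipaddress.ip_address(src) in ALLOWED_CLIENTS

def accept_reply(src):
    """Accept an ICP reply only from a known neighbour; otherwise ignore it."""
    return src in KNOWN_NEIGHBOURS

print(accept_query("10.1.2.3"), accept_reply("203.0.113.5"))
```

Note that this only checks the claimed source address; as the text observes, a spoofed
address passes such a check, which is why cryptographic authentication of the IP
packet is the stronger defence.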
In general, the caching method can cut down duplicate requests by up to 30%.
However, in order to investigate the overall effects of different caching strategies on
the network as a whole, a mathematical model should be used.
Mail Protocols
Introduction
Mail service is perhaps the most widely used application on the Internet. Several
protocols for mail service are available, but the most widely used is the Simple Mail
Transfer Protocol (SMTP). Because of the large number of mobile and workstation
users on the Internet, other support protocols, such as POP3 (Post Office Protocol
version 3) and IMAP4 (Internet Message Access Protocol version 4), have also been
developed.
Simple Mail Transfer Protocol (SMTP)
SMTP enables ASCII text messages to be sent to mailboxes on TCP/IP hosts that have
been configured with mail services. Figure 13.3 shows a mail session that uses SMTP.
A user who wants to send mail interacts with the local mail system through the user
agent (UA) component of the mail system. The mail is deposited in the local outgoing
mailbox. A sender-SMTP process periodically polls the outgoing mailbox, and
when the process finds a mail message in the box, it establishes a TCP connection
with the destination host to which mail is to be sent. The receiver-SMTP process
running in the destination host accepts the connection, and the mail message is sent to
that connection. The receiver-SMTP process deposits the mail message in the
destination mailbox on the destination host. If there is no mailbox with the specified
name on the destination host, a mail message is sent to the originator. This message
indicates that the mailbox does not exist. The sender-SMTP and receiver-SMTP
processes that are responsible for the transfer of mail are called message transfer
agents (MTA).
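As a small illustration, the message a user agent hands to the sender-SMTP process is
an RFC 822-style text with headers and an ASCII body. The addresses below are
placeholders; in practice smtplib would carry the result over a TCP connection to the
destination host.

```python
# Sketch: the user agent composes an RFC 822-style message; the MTAs then
# transfer it. Python's email library builds the header/body structure.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "user@example.org"      # originator (placeholder address)
msg["To"] = "peer@example.net"        # destination mailbox (placeholder)
msg["Subject"] = "Test"
msg.set_content("Hello via SMTP")     # ASCII body text

print(msg["Subject"])
print(msg.get_content().strip())
```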
Post Office Protocol Version 3 (POP3)
SMTP expects the destination host --- the mail server receiving the mail --- to be
online; otherwise, a TCP connection cannot be established with the destination host.
For this reason, it is not practical to establish an SMTP session with a desktop for
receiving mail because desktop workstations are often turned off at the end of the day.
In many network environments, SMTP mail is received by an SMTP host that is
always active on the network (see fig. 13.6). This SMTP host provides a mail-drop
service. Workstations interact with the SMTP host and retrieve messages by using a
client/server mail protocol, such as POP3 (Post Office Protocol version 3) described
in RFC 1939. POP3 uses the TCP transport protocol, and the POP3 server listens on
its well-known TCP port number 110.
Although POP3 is used to download messages from the server, SMTP is still used to
forward messages from the workstation user to its SMTP mail server.
Tables 13.6 through 13.8 list the POP3 commands based on the RFC 1939
specification. Although the USER and PASS commands (see table 13.7) are listed as
optional commands in RFC 1939, most POP3 implementations support them. The
reason USER/PASS can be regarded as optional is that they can be replaced by the
MD5 (Message Digest version 5) authentication method used in the APOP command.
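A hedged sketch of the POP3 reply convention that goes with these commands: every
server reply line begins with "+OK" or "-ERR" (RFC 1939). The sample reply texts
are illustrative.

```python
# Sketch: parse a single-line POP3 server reply into a success flag and text.

def parse_reply(line):
    """Return (ok, text) for a single-line POP3 server reply."""
    if line.startswith("+OK"):
        return True, line[3:].strip()
    if line.startswith("-ERR"):
        return False, line[4:].strip()
    raise ValueError("malformed POP3 reply")

print(parse_reply("+OK 2 messages (320 octets)"))
print(parse_reply("-ERR no such message"))
```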
(Figure: the POP3 client on the user's workstation connects through TCP/IP over the
Internet to the well-known TCP port 110 of the POP3 server; SMTP delivers mail to
the server, and the user agent retrieves it through the POP3 client.)
FTP Session
An FTP session normally involves the interaction of five software elements.
User Interface: provides a user interface and drives the client protocol interpreter.
Client PI: the client protocol interpreter. It issues commands to the remote server
protocol interpreter and it also drives the client data transfer process.
Client DTP: the client data transfer process, responsible for communicating with
the server data transfer process and the local file system.
Server PI: the server protocol interpreter, which responds to commands from the
client protocol interpreter and drives the server data transfer process.
Server DTP: the server data transfer process, responsible for communicating with
the client data transfer process and the remote file system.
RFC 959 refers to the user rather than the client. RFC 959 defines the means by which
the two PIs talk to each other and by which the two DTPs talk to each other. The user
interface and the mechanism by which the PIs talk to the DTPs are not part of the
standard. It is common practice for the PI and DTP functionalities to be part of the
same program but this is not essential.
During an FTP session there will be two separate network connections: one between
the PIs and one between the DTPs. The connection between the PIs is known as the
control connection. The connection between the DTPs is known as the data
connection.
In normal Internet operation the FTP server listens on the well-known port number 21
for control connection requests. The choice of port numbers for the data connection
depends on the commands issued on the control connection. Conventionally the client
sends a control message which indicates the port number on which the client is
prepared to accept an incoming data connection request.
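That conventional control message can be sketched as follows: the RFC 959 PORT
command encodes the client's IP address and listening port as six decimal numbers,
with the port split into its high and low bytes. The address and port below are
examples only.

```python
# Sketch: build the FTP PORT command argument h1,h2,h3,h4,p1,p2 from an
# IPv4 address and a port number (RFC 959), where port = p1*256 + p2.

def port_command(ip, port):
    """Return the PORT command line announcing `ip`:`port` for the data connection."""
    h1, h2, h3, h4 = ip.split(".")
    return "PORT {},{},{},{},{},{}".format(h1, h2, h3, h4, port // 256, port % 256)

print(port_command("192.168.1.5", 1234))   # PORT 192,168,1,5,4,210
```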
The use of separate connections for control and data offers the advantage that the two
connections can select different appropriate qualities of service, e.g. minimum delay
for the control connection and maximum throughput for the data connection. It also
avoids the problems of providing escape and transparency for commands embedded
within the data stream.
When a transfer is being set up, it is always initiated by the client; however, either
the client or the server may be the sender of data. As well as transferring
user-requested files, the data transfer mechanism is also used for transferring
directory listings from server to client.
TFTP PROTOCOL
TFTP is a simple protocol to transfer files, and therefore was named the Trivial File
Transfer Protocol or TFTP. It has been implemented on top of the Internet User
Datagram protocol (UDP or Datagram) so it may be used to move files between
machines on different networks implementing UDP. (This should not exclude the
possibility of implementing TFTP on top of other datagram protocols.) It is
designed to be small and easy to implement. Therefore, it lacks most of the features
of a regular FTP. The only thing it can do is read and write files (or mail) from/to a
remote server. It cannot list directories, and currently has no provisions for user
authentication. In common with other Internet protocols, it passes 8 bit bytes of data.
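As a small sketch of the protocol's simplicity, a TFTP read request (RRQ) datagram
as defined in RFC 1350 consists of a 2-byte opcode (1 for RRQ), the filename, a zero
byte, the transfer mode string, and a final zero byte; the filename below is an example.

```python
# Sketch: build a TFTP read-request (RRQ) packet, which would be sent in a
# single UDP datagram to the server's well-known port 69.
import struct

def build_rrq(filename, mode="octet"):
    """Return the RRQ packet bytes: opcode 1, filename, 0, mode, 0."""
    return (struct.pack("!H", 1)              # 2-byte opcode, network order
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

print(build_rrq("boot.img"))   # b'\x00\x01boot.img\x00octet\x00'
```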