
Bharat Sanchar Nigam Limited

JTO Ph-II DATA NETWORK


WEEK-1 (DATA COMMUNICATION BASICS & INTERNET
PROTOCOL)

BSNL
ES & IT FACULTY
COURSE CODE – BRBCOIF 114

BHARAT RATNA BHIMRAO AMBEDKAR


INSTITUTE OF TELECOM TRAINING,
RIDGE ROAD, JABALPUR – 482 001
(ISO-9001 : 2008 Certified)

PHASE II SPECIALIZATION TRAINING


ON
“DATA NETWORKS” FOR JTOs

INDEX

Week-1 Data Communications Basics:-

S No Topic Page No.


1. Data Communications Concepts 2
2. OSI Layers 15
3. Physical Layer 26
4. Modem in Data Circuits 57
5. Error Detection & Correction Techniques 114
6. Packet Switching & Message Switching Concepts 137
7. TCP/IP Protocol Suite: An Overview 148
8. IP Addressing: VLSM & CIDR 157
9. LAN Technology 166
10. Ethernet & Wi-Fi standards 190
11. Address Resolution Protocol (ARP) 214
12. Dynamic Host configuration protocol (DHCP) 223
13. Point to Point Protocol (PPP) 239
14. Internet services: DNS, telnet, HTTP, PROXY, E-Mail, SMTP & POP3, FTP & TFTP 252


DATA COMMUNICATION CONCEPTS

Communication, whether between human beings or computer systems, involves the
transfer of information from a sender to a receiver. Data communication refers to the
exchange of digital information between two digital devices. In this chapter, we
examine some of the basic concepts and terminology relating to data communication.
Data representation, serial/parallel data transmission and asynchronous/synchronous
data transmission concepts are discussed first. We then proceed to examine some
theoretical concepts of Fourier series, Nyquist's and Shannon's theorems and their
application in data transmission. Digital modulation techniques and baud rate are
introduced next. We close the chapter with a discussion on the distinction between
data transmission and data communication. These terms are used interchangeably in the
literature on data communication and are the cause of much confusion and frustration.
DATA REPRESENTATION
A binary digit or bit has only two states, "0" and "1", and can represent only two
symbols, but even the simplest form of communication between computers requires a
much larger set of symbols, e.g.
52 capital and small letters,
10 numerals from 0 to 9,
punctuation marks and other special symbols, and
terminal control characters – Carriage Return (CR), Line Feed (LF).
Therefore, a group of bits is used as a code to represent a symbol. The code is usually
5 to 8 bits long. A 5-bit code can have 2^5 = 32 combinations and can, therefore, represent
32 symbols. Similarly, an 8-bit code can represent 2^8 = 256 symbols. A code set is
the set of these codes representing the symbols. There are several code sets; some are
used for specific applications while others are the proprietary code sets of computer
manufacturers. The following two code sets are very common:
1. ANSI's 7-bit American Standard Code for Information Interchange (ASCII)
2. IBM's 8-bit Extended Binary-Coded-Decimal Interchange Code (EBCDIC)
EBCDIC is vendor-specific and is used primarily in large IBM computers. ASCII
is the most common code set and is used worldwide.
ASCII – American Standard Code for Information Interchange
ASCII is defined by the American National Standards Institute (ANSI) in ANSI X3.4.
The corresponding CCITT recommendation is T.50 (International Alphabet No. 5 or
IA5) and the ISO specification is ISO 646. It is a 7-bit code and all the 128 possible codes
have defined meanings (Table 1). The code set consists of the following symbols:

96 graphic symbols (columns 2 to 7), comprising 94 printable characters plus the
SPACE and DEL characters
32 control symbols (columns 0 and 1).

Table 1 ASCII Code Set


Bit      7   0     0     0     0     1     1     1     1
Numbers  6   0     0     1     1     0     0     1     1
         5   0     1     0     1     0     1     0     1
4321         0     1     2     3     4     5     6     7

0000  0      NUL   DLE   SPACE 0     @     P     `     p
0001  1      SOH   DC1   !     1     A     Q     a     q
0010  2      STX   DC2   "     2     B     R     b     r
0011  3      ETX   DC3   #     3     C     S     c     s
0100  4      EOT   DC4   $     4     D     T     d     t
0101  5      ENQ   NAK   %     5     E     U     e     u
0110  6      ACK   SYN   &     6     F     V     f     v
0111  7      BEL   ETB   '     7     G     W     g     w
1000  8      BS    CAN   (     8     H     X     h     x
1001  9      HT    EM    )     9     I     Y     i     y
1010  A      LF    SUB   *     :     J     Z     j     z
1011  B      VT    ESC   +     ;     K     [     k     {
1100  C      FF    FS    ,     <     L     \     l     |
1101  D      CR    GS    -     =     M     ]     m     }
1110  E      SO    RS    .     >     N     ^     n     ~
1111  F      SI    US    /     ?     O     _     o     DEL

The binary representation of a particular character can be easily determined from its
hexadecimal coordinates. For example, the coordinates of the character "K" are (4, B)
and, therefore, its binary code is 100 1011.
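The same mapping can be verified programmatically. The short Python sketch below is only an illustration (it is not part of the ASCII standard itself); it prints the 7-bit code of any character.

# Print the 7-bit ASCII code of a character (illustrative sketch).
def ascii_bits(ch):
    code = ord(ch)              # e.g. ord('K') = 75 = hex 4B
    return format(code, '07b')  # 7-bit binary string

print(ascii_bits('K'))          # prints 1001011 (column 4, row B)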
The control symbols are codes reserved for special functions. Table 2 lists the control
symbols. Some important functions and the corresponding control symbols are:
functions relating to basic operation of the terminal device, e.g., a printer or a
VDU
CR (Carriage Return)
LF (Line Feed)
functions relating to error control
ACK (Acknowledgement)
NAK (Negative Acknowledgement)
functions relating to blocking (grouping) of data characters
STX (Start of Text)
ETX (End of Text).
DC1, DC2, DC3 and DC4 are user definable. DC1 and DC3 are generally used as X-
ON and X-OFF for switching the transmitter.


Table 2 Control Symbols

ACK Acknowledgement FF Form Feed


BEL Bell FS File Separator
BS Backspace GS Group Separator
CAN Cancel HT Horizontal Tabulation
CR Carriage Return LF Line Feed
DC1 Device Control 1 NAK Negative Acknowledgement
DC2 Device Control 2 NUL Null
DC3 Device Control 3 RS Record Separator
DC4 Device Control 4 SI Shift-In
DEL Delete SO Shift-Out
DLE Data Link Escape SOH Start of Heading
EM End of Medium STX Start of Text
ENQ Enquiry SUB Substitute Character
EOT End of Transmission SYN Synchronous Idle
ESC Escape US Unit Separator
ETB End of Transmission Block VT Vertical Tabulation
ETX End of Text

ASCII is often used with an eighth bit called the parity bit. This bit is utilized for
detecting errors which occur during transmission. It is added in the most significant
bit (MSB) position. We will examine the use of parity bits in detail in the chapter on
Error Control.
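As a simple illustration (a sketch only; parity is treated fully in the chapter on Error Control), the following Python fragment adds an even-parity bit in the eighth, most significant, position of a 7-bit ASCII code.

# Append an even-parity bit as the 8th (most significant) bit of a 7-bit code.
def with_even_parity(ch):
    code = ord(ch) & 0x7F          # 7-bit ASCII code
    ones = bin(code).count('1')    # number of 1s among the data bits
    parity = ones % 2              # 1 only if that count is odd
    return format((parity << 7) | code, '08b')

print(with_even_parity('K'))       # 1001011 has four 1s, so parity 0 -> 01001011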

EXAMPLE 1
Represent the message "3P.bat" in ASCII code. The eighth bit may be kept as "0".

Solution

Bit Positions 8 7 6 5 4 3 2 1
3 0 0 1 1 0 0 1 1
P 0 1 0 1 0 0 0 0
. 0 0 1 0 1 1 1 0
b 0 1 1 0 0 0 1 0
a 0 1 1 0 0 0 0 1
t 0 1 1 1 0 1 0 0

EBCDIC – Extended Binary Coded Decimal Interchange Code


It is an 8-bit code with 256 possible combinations; however, not all combinations are
used or have been defined. There is no parity bit for error checking in the
basic code set. The graphic symbol subset is approximately the same as ASCII. There
are several differences in the control characters. EBCDIC is not the same for all
devices; there may be variations even within different models of IBM equipment. In
EBCDIC, the bit numbering starts from the most significant bit (MSB), whereas in ASCII
it starts from the least significant bit (LSB).

b0 b1 b2 b3 b4 b5 b6 b7


MSB LSB

Other Code Sets


The following code sets, though not of much significance to the data processing
community today, were used at one time or another:
Baudot Teletype Code. Also called ITA2 (International Telegraph Alphabet
Number 2), it is a 5-bit code and is used in electromechanical teletype machines. Only 32
codes are possible using 5 bits, but this code set has 58 symbols. The same code is
used for two symbols by using letter shift/figure shift keys which change the meaning of
a code. In telegraphy terminology, binary "1" is called Mark and binary "0" is called
Space.
BCDIC – Binary Coded Decimal Interchange Code. It is a six-bit code with 64
symbols.
Bytes
A byte is a group of bits which is considered as a single unit during processing. It is
usually eight bits long, though its length may be different. A character code, e.g.,
1001011 of ASCII, is a byte having a defined meaning "K", but it should be noted
that there may be bytes which are not elements of any standard code set.
DATA TRANSMISSION
There is always a need to exchange data, commands and other control information
between a computer and its terminals or between two computers. This information, as
we saw in the previous section, is in the form of bits.
Data transmission refers to movement of the bits over some physical medium
connecting two or more digital devices. There are two options for transmitting the bits,
namely, parallel transmission or serial transmission.
Parallel Transmission
In parallel transmission, all the bits of a byte are transmitted simultaneously on
separate wires as shown in Fig. 1, and multiple circuits interconnecting the two devices
are, therefore, required. It is practical only if the two devices, e.g., a computer and its
associated printer, are close to each other.


(Figure: the eight bits of a byte are placed on eight separate lines and sent simultaneously from the Transmitter to the Receiver.)

Fig. 1 Parallel transmission.


Serial Transmission

In serial transmission, bits are transmitted serially one after the other (Fig. 2). The
least significant bit (LSB) is usually transmitted first.

MSB LSB
1 1 0 1 0 0 1 0
1 1 0 1 0 0 1 0

Transmitter Receiver

Fig. 2 Serial transmission.

Note that as compared to parallel transmission, serial
transmission requires only one circuit interconnecting the two devices. Therefore,
serial transmission is suitable for transmission over long distances.

EXAMPLE 2

Write the bit transmission sequence of the message given in Example 1.

Solution

3        P        .        b        a        t
11001100 00001010 01110100 01000110 10000110 00101110
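The sequence above can be generated mechanically: take the 8-bit code of each character (eighth bit kept at "0", as in Example 1) and reverse it so that the LSB goes out first. A small Python sketch, given only as an illustration:

# Serial (LSB-first) bit sequence of a message; the 8th bit is kept as 0.
def serial_bits(message):
    groups = []
    for ch in message:
        byte = format(ord(ch) & 0x7F, '08b')   # 0 + 7-bit ASCII code, MSB first
        groups.append(byte[::-1])              # reverse so the LSB is sent first
    return ' '.join(groups)

print(serial_bits('3P.bat'))
# prints: 11001100 00001010 01110100 01000110 10000110 00101110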


Bits are transmitted as electrical signals over the interconnecting wires. If the two
binary states are represented by the presence and absence of a voltage, the
transmission is termed unipolar; if we choose to represent a binary "1" by, say, a
positive voltage +V volts and a binary "0" by a negative voltage -V volts, the
transmission is said to be bipolar. Figure 3 shows the bipolar waveform of the
character "K". Bipolar transmission is preferred because the signal does not have
any DC component. The transmission media usually do not allow DC signals to
pass through.
Bit Rate
Bit rate is simply the number of bits which can be transmitted in a second. If t_p is the
duration of a bit, the bit rate R will be 1/t_p. It must be noted that the bit duration is not
necessarily the pulse duration. For example, in Fig. 3, the first pulse is of two-bit
duration. Later, we will come across signal formats in which the pulse duration is
only half the bit duration.
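For example, if each bit occupies t_p = 100 microseconds, the bit rate is R = 1/t_p = 10,000 bits per second; this figure is purely illustrative, and it holds whether a bit is sent as one full-duration pulse or as a shorter pulse within the bit interval.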
Receiving Data Bits
The signal received at the other end of the transmission medium is never identical to
the transmitted signal, as the transmission medium distorts the signal to some extent.
As a result, the receiver has to put in considerable effort to identify the bits. The
receiver must know the time instant at which it should look for a bit. Therefore, the
receiver must have synchronized clock pulses which mark the location of the bits. The
received signal is sampled using the clock pulses, and depending on the polarity of a
sample, the corresponding bit is identified (Fig. 4).

1 1 0 1 0 0 1 0 Transmitted Signal

Received Signal

Clock Signal

Sampled Signal

1 1 0 1 0 0 1 0
Recovered Signal

Fig.4 Bit recovery

It is essential that the received signal is sampled at the right instants, as otherwise it
could be misinterpreted. Therefore, the clock frequency should be exactly the same as
the transmission bit rate. Even a small difference will build up as a timing error and
eventually result in sampling at wrong instants. When the clock frequency is slightly
faster or slightly slower than the bit rate, a bit may be sampled twice or may be
missed.


MODES OF DATA TRANSMISSION


There are two methods of timing control for reception of bits. The transmission modes
corresponding to these two timing methods are called Asynchronous transmission and
Synchronous transmission.
Asynchronous Transmission
We call an action asynchronous when the agent performing the action does so
whenever it wishes. Asynchronous transmission refers to the case when the sending
end commences transmission of bytes at any instant of time. Only one byte is sent at
a time and there is no time relation between consecutive bytes, i.e., after sending a
byte, the next byte can be sent after an arbitrary delay (Fig. 5). In the idle state, when no
byte is being transmitted, the polarity of the electrical signal corresponds to "1".

(Figure: each byte is preceded by a start bit and followed by a stop bit; between bytes the line stays in the idle state.)

Fig. 5 Asynchronous transmission.


Due to the arbitrary delay between consecutive bytes, the clock pulses at the
receiving end need to be resynchronized for each byte. This is achieved by providing
two extra bits, a start bit at the beginning and a stop bit at the end of a byte.
Start Bit. The start bit is always "0" and is prefixed to each byte. At the onset of
transmission of a byte, it ensures that the electrical signal changes from the idle state "1"
to "0" and remains at "0" for one bit duration. The leading edge of the start bit is used as
a time reference for generating the clock pulses at the required sampling instants.
Thus, each onset of a byte results in resynchronization of the receiver clock.
Stop Bit. To ensure that the transition from "1" to "0" is always present at the
beginning of a byte, it is necessary that the polarity of the electrical signal should
correspond to "1" before occurrence of the start bit. That is why the idle state is kept
at "1". But there may be two bytes, one immediately following the other, and if the last
bit of the first byte is "0", the transition from "1" to "0" will not occur. Therefore, a
stop bit is also suffixed to each byte. It is always "1" and its duration is usually 1, 1.5
or 2 bits.
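As an illustrative sketch (assuming eight data bits, a single stop bit and no parity), the framing of one byte for asynchronous transmission can be written as:

# Frame one byte for asynchronous transmission:
# start bit "0", eight data bits LSB first, stop bit "1".
def async_frame(byte_value):
    data = format(byte_value & 0xFF, '08b')[::-1]   # LSB transmitted first
    return '0' + data + '1'

print(async_frame(ord('K')))   # prints 0110100101: start bit, 1101 0010, stop bit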
Synchronous Transmission
A synchronous action, unlike an asynchronous action, is carried out under the control
of a timing source. In synchronous transmission, bits are always synchronized to a
reference clock irrespective of the bytes they belong to. There are no start or stop bits.
Bytes are transmitted as a block (a group of bytes) in a continuous stream of bits (Fig.
6). Even the inter-block idle time is filled with idle characters.

Direction of Transmission


Flag Block Of bytes Flag Idle data Flag Block Of bytes Flag

Block 2 Block 1

Fig. 6 Synchronous transmission.


Continuous transmission of bits enables the receiver to extract the clock from the
incoming electrical signal . As this clock is inherently synchronized to the bits, the job
of the receiver becomes simpler.
There is, however, still one problem. The bytes lose their identity and their boundaries
need to be identified. Therefore, a unique sequence of a fixed number of bits, called a
flag, is prefixed to each block. The flag identifies the start of a block. The receiver
first detects the flag and then identifies the boundaries of different bytes using a
counter. Just after the flag comes the first bit of the first byte.
A more common term for a data block is frame. A frame contains many other fields in
addition to the flag. We will discuss frame structures later in the chapter on Data
Link Control.
SIGNAL ENCODING
For transmission of the bits as electrical signals, simple positive and negative voltage
representation of the two binary states may not be sufficient. Some of the transmission
requirements of digital signals are:

Sufficient signal transitions should be present in the transmitted signal for the
clock extraction circuit at the receiving end to work properly.
The bandwidth of the digital signal should match the bandwidth of the transmission
medium.
There should not be any ambiguity in recognizing the binary states of the
received signal.
There are several ways of representing bits as digital electrical signals. Two broad
classes of signal representation codes are: Non-Return to Zero (NRZ) codes and
Return to Zero (RZ) codes.

0 0 1 0 1 1 1 0 NRZ-L Coding (Data )

Clock Signal

NRZ-M Coding

NRZ-S Coding

Fig. 7 NRZ signal encoding


NRZ-L. In NRZ-L (Non Return to Zero-Level), the bit is represented by a voltage


level which remains constant during the bit duration.
NRZ-M and NRZ-S. In NRZ-M (Non Return to Zero-Mark) and NRZ-S (Non
Return to Zero-Space), it is a change in signal level which corresponds to one bit
value, and absence of a change corresponds to the other bit value. "Mark" or "1"
changes the signal level in NRZ-M, and "Space" or "0" changes the signal level in
NRZ-S. NRZ-M is also called NRZ-I (Non Return to Zero-Invert on ones).
Return to Zero (RZ) Codes
We mentioned in the last section that the clock can be extracted from the digital signal
if bits are continuously transmitted. However, if there is a continuous string of zeros
or ones and it is coded using one of the NRZ codes, the electrical signal will not
have any level transitions. For the receiver clock extraction circuit, it will be as good
as no signal.
The RZ codes usually ensure signal transitions for any bit pattern and thus overcome
the above limitation of NRZ codes. These codes are essentially a combination of
NRZ-L and the clock signal. Figure 8 shows some examples of RZ codes.

NRZ-L Coding (Data)


0 0 1 0 1 1 1 0

Clock Signal

Manchester Coding

Bi-phase-M Coding

Bi-phase-S Coding
Differential
Manchester Coding

Fig. 8 RZ signal encoding.


Manchester Code.
In this code, "1" is represented as the logical AND of "1" and the clock. This produces
one cycle of the clock. For "0", this clock cycle is inverted. Note that, whatever the bit
sequence, each bit period will have one transition. The receiver clock extraction circuit
never faces a dearth of transitions. The Manchester code is widely employed to
represent data in local area networks. It is also called the Biphase-L code.
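A minimal sketch of this rule follows, assuming "high then low" as the clock cycle used for a "1" (some texts adopt the opposite polarity; only the convention differs):

# Manchester (Biphase-L) encoding: each bit becomes one clock cycle,
# shown here as two half-bit levels; a "0" is the inverted cycle.
def manchester(bits):
    halves = []
    for b in bits:
        halves.extend(['H', 'L'] if b == '1' else ['L', 'H'])
    return ''.join(halves)

print(manchester('00101110'))   # every bit period contains a mid-bit transition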
Biphase-M Code.
In this code also, there is always a transition at the beginning of a bit interval. Binary
"1" has another transition in the middle of the bit interval.
Biphase-S Code.
In this code also, there is a transition at the beginning of a bit interval. Binary "0" has
another transition in the middle of the bit interval.


Differential Manchester Code.


In this code there is always a transition in the middle of a bit interval. Binary "0" has an
additional transition at the beginning of the interval.
Other Signal Codes
Local area networks based on optical fibres use another type of signal code termed
4B/5B. In this code, four data bits are taken at a time and coded into 5 bits. There are
32 possible combinations of 5 bits. Of these combinations, 16 codes are selected to
represent the 16 possible sets of 4 bits. The codes are so selected that there are at least
two signal transitions in a group of 5 bits for clock recovery.
Base-band Transmission
When a digital signal is transmitted on the medium using one of the signal codes
discussed earlier, it is called baseband transmission. Baseband transmission is limited
to low data rates because at high data rates, significant frequency components are
spread over a wide frequency band over which the transmission characteristics of the
medium do not remain uniform. For faithful reproduction of a signal, it is necessary
that the relative amplitudes and phase relationships of the frequency components are
maintained during transmission. For transmitting data at high bit rates we need to use
modulation techniques which we will discuss later.
TRANSMISSION CHANNEL
A transmission channel transports the electrical signals from the transmitter to the
receiver. It is characterized by two basic parameters – bandwidth and signal-to-noise
ratio. These parameters determine the ultimate information-carrying capacity of a
channel. Nyquist derived the limit of the data rate considering a perfectly noiseless
channel. Nyquist's theorem states that if B is the bandwidth of a transmission channel
which carries a signal having L levels (a binary digital signal has two levels), the
maximum data rate R is given by

R = 2B log2 L
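A quick worked illustration of the theorem (the 3000 Hz figure is only a representative voice-channel bandwidth, not a value taken from this text):

# Nyquist limit R = 2 * B * log2(L) for a noiseless channel.
from math import log2

def nyquist_rate(bandwidth_hz, levels):
    return 2 * bandwidth_hz * log2(levels)

print(nyquist_rate(3000, 2))   # two-level signal:   6000.0 bits per second
print(nyquist_rate(3000, 4))   # four-level signal: 12000.0 bits per second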
Bauds
When bits are transmitted as an electrical signal having two levels, the bit rate and
the "modulation" rate of the electrical signal are the same (Fig. 9). The modulation rate is
the rate at which the electrical signal changes its levels.
Fig. 9 Baud rate for two-level modulation.


The modulation rate is expressed in bauds ("per second" is implied). Note that
there is a one-to-one correspondence between bits and electrical levels.
It is possible to associate more than one bit with one electrical level. For example, if the
electrical signal has four distinct levels, two bits can be associated with one electrical
level (Fig. 10). In this case, the bit rate is twice the baud rate.

Fig. 10 Baud rate for four-level modulation.
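The idea behind Fig. 10 can be sketched as follows (the voltage values are arbitrary and chosen only for illustration): pairs of bits are mapped to one of four signal levels, so the baud rate is half the bit rate.

# Map bit pairs (dibits) onto four signal levels; baud rate = bit rate / 2.
LEVELS = {'00': -3, '01': -1, '10': +1, '11': +3}    # illustrative voltages

def four_level_encode(bits):
    return [LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

print(four_level_encode('11010010'))   # 8 bits become 4 signal elements (bauds)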

Modem
In Fig. 10, the four levels define four states of the electrical signal. The electrical state
can also be defined in terms of other attributes of an electrical signal such as
amplitude, frequency or phase. The basic electrical signal is a sine wave in this case.
The binary signal modulates one of these signal attributes. The sine wave carries the
information and is, therefore, termed the "carrier". The device which performs
modulation is called a modulator, and the device which recovers the information signal
from the modulated carrier is called a demodulator. In data transmission, we usually
come across devices which perform both the modulation and demodulation
functions, and these devices are called modems. Modems are required when data is to
be transmitted over long distances. In a modem, the input digital signal modulates a
carrier which is transmitted to the distant end. At the distant end, another modem
demodulates the received carrier to get the digital signal. A pair of modems is, thus,
always required.
DATA COMMUNICATION
The terms communication and transmission are often used interchangeably, but it is
necessary to understand the distinction between the two activities. Transmission is the
physical movement of information and concerns issues like bit polarity,
synchronization, clock, electrical characteristics of signals, modulation, demodulation,
etc. We have so far been examining these data transmission issues.
Communication has a much wider connotation than transmission. It refers to
meaningful exchange of information between the communicating entities. Therefore,
in data communications we are concerned with all the issues relating to exchange of
information in the form of a dialogue, e.g., dialogue discipline, interpretation of
messages, and acknowledgements.


Synchronous Communication

Communication can be asynchronous or synchronous. In the synchronous mode of
communication, the communicating entities exchange messages in a disciplined
manner. An entity can send a message only when it is permitted to do so.

Entity A Entity B

Hello B!
Hello!
Do you want to send data? Go ahead.
Yes. Here it is.
Any more data?
No.
Bye
Bye.

The dialogue between the entities A and B is "synchronized" in the sense that each
message of the dialogue is a command or a response. Physical transmission of the data
bytes corresponding to the characters of these messages could be in synchronous or
asynchronous mode.
Asynchronous Communication
Asynchronous communication, on the other hand, is less disciplined. A
communicating entity can send a message whenever it wishes to.

Entity A Entity B

Hello B!
Hello! Here is some data
Here is some data Here is more data
Did you receive what I sent?
Yes. Here is more data. Please
acknowledge.
Acknowledged, Bye
Bye.

Note the lack of discipline in the dialogue. The communicating entities send messages
whenever they please. Here again, physical transmission of the bytes of the messages can
be in synchronous or asynchronous mode. We will come across many examples of
synchronous and asynchronous communication in this book when we discuss
protocols. Protocols are the rules and procedures for communication.


DIRECTIONAL CAPABILITIES OF DATA EXCHANGE


There are three possibilities of data exchange:
1. Transfer in both directions at the same time.
2. Transfer in either direction, but only in one direction at a time.
3. Transfer in one direction only.
The terminology used for specifying the directional capabilities is different for data
transmission and for data communication (Table 4).

Table 4 Terminology for Directional Capabilities

Directional Capability              Transmission         Communication

One direction only                  Simplex (SX)         One-Way (OW)
One direction at a time             Half duplex (HDX)    Two-Way Alternate (TWA)
Both directions at the same time    Full duplex (FDX)    Two-Way Simultaneous (TWS)

SUMMARY
Binary codes are used for representing the symbols for computer communications.
ASCII is the most common code set used worldwide. The bits of a binary code can be
transmitted in parallel or in serial form. Transmission is always serial unless the
devices are near each other. Serial transmission mode can be asynchronous or
synchronous. Asynchronous transmission is byte by byte transmission and start/stop
bits are appended to each byte. In synchronous transmission, data is transmitted in the
form of frames having flags to identify the start of a frame. Clock is required in
synchronous transmission. Digital signals are coded using RZ codes to enable clock
extraction at the receiving end.
A communication channel is limited in its information-carrying capacity by its
bandwidth and the noise present in the channel. To make the best use of this limited
capacity of the channel, very sophisticated carrier-modulation methods are used.
Modems are the devices which carry out the modulation and demodulation functions.
Data communication has wider scope as compared to data transmission.
Asynchronous and synchronous communication refer to non-disciplined and
disciplined exchange of messages respectively.


OPEN SYSTEM INTERCONNECTION (OSI) MODEL


OPEN SYSTEM INTERCONNECTION (OSI) MODEL


Established in 1947, the International Standards Organization (ISO) is a multinational
body dedicated to worldwide agreement on international standards. An ISO standard
that covers all aspects of network communication is the Open Systems
Interconnection (OSI) model. An open system is a model that allows any two
different systems to communicate regardless of their underlying architecture. Vendor-
specific protocols close off communication between unrelated systems. The purpose
of the OSI model is to open communication between different systems without
requiring changes to the logic of the underlying hardware and software. The OSI
model is not a protocol: it is a model for understanding and designing a network
architecture that is flexible, robust and interoperable.

THE MODEL
The Open Systems Interconnection model is a layered framework for the design of
network systems that allows for communication across all types of computer systems.
It consists of seven separate but related layers, each of which defines a segment of the
process of moving information across a network. Understanding the fundamentals of
the OSI model provides a solid basis for exploration of data communication.

Layered Architecture

The OSI model is built of seven ordered layers: physical (layer 1), data link (layer 2),
network (layer 3), transport (layer 4), session (layer 5), presentation (layer 6), and
application (layer 7).
As the message travels from A to B, it may pass through many intermediate nodes.
These intermediate nodes usually involve only the first three layers of the OSI model.
In developing the model, the designers distilled the process of transmitting data down
to its most fundamental elements. They identified which networking functions had
related uses and collected those functions into discrete groups that became the layers.
Each layer defines a family of functions distinct from those of the other layers. By
defining and localizing functionality in this fashion, the designers created an


architecture that is both comprehensive and flexible. Most important, the OSI model
allows complete transparency between otherwise incompatible systems.
Peer-to-Peer processes
Within a single machine, each layer calls upon the services of the layer just below it.
Layer 3, for example, uses the services provided by layer 2 and provides services for
layer 4. Between machines, layer x on one machine communicates with layer x on
another machine. This communication is governed by an agreed-upon series of rules
and conventions called protocols. The processes on each machine that communicate at
a given layer are called peer-to-peer processes. Communication between machines is
therefore a peer-to-peer process using the protocols appropriate to a given layer.
At the physical layer, communication is direct: machine A sends a stream of bits to
machine B. At the higher layers, however, communication must move down through
the layers on machine A, over to machine B, and then back up through the layers.
Each layer in the sending machine adds its own information to the message it receives
from the layer just above it and passes the whole package to the layer just below it.
This information is added in the form of headers or trailers (control data added to the
beginning or end of a data parcel). Headers are added to the message at layers 6, 5, 4, 3,
and 2. A trailer is added at layer 2.
At layer 1 the entire package is converted to a form that can be transferred to the
receiving machine. At the receiving machine, the message is unwrapped layer by
layer, with each process receiving and removing the data meant for it. For example,
layer 2 removes the data meant for it, then passes the rest to layer 3. Layer 3 removes
the data meant for it and passes the rest to layer 4, and so on.
Interfaces between Layers
The passing of the data and network information down through the layers of the
sending machine and back up through the layers of the receiving machine is made
possible by an interface between each pair of adjacent layers. Each interface defines
what information and services a layer must provide for the layer above it. Well-
defined interfaces and layer functions provide modularity to a network. As long as a
layer still provides the expected services to the layer above it, the specific
implementation of its functions can be modified or replaced without requiring
changes to the surrounding layers.
Organization of the Layer
The seven layers can be thought of as belonging to three subgroups. Layers 1, 2, and 3
– physical, data link, and network – are the network support layers; they deal with the
physical aspects of moving data from one device to another (such as electrical
specifications, physical connections, physical addressing, and transport timing and
reliability). Layers 5, 6, and 7 – session, presentation, and application – can be
thought of as the user support layers; they allow interoperability among unrelated
software systems. Layer 4, the transport layer, ensures end-to-end reliable data
transmission while layer 2 ensures reliable transmission on a single link. The upper
OSI layers are almost always implemented in software; the lower layers are a
combination of hardware and software, except for the physical layer, which is mostly
hardware.
L7 data means the data unit at layer 7, L6 data means the data unit at layer 6, and so
on. The process starts out at layer 7 (the application layer), then moves from layer to layer in

descending sequential order. At each layer (except layers 7 and 1), a header is added to
the data unit. At layer 2, a trailer is added as well. When the formatted data unit
passes through the physical layer (layer 1), it is changed into an electromagnetic
signal and transported along a physical link.

Upon reaching its destination, the signal passes into layer 1 and is transformed back
into bits. The data units then move back up through the OSI layers. As each block of
data reaches the next higher layer, the headers and trailers attached to it at the
corresponding sending layer are removed, and actions appropriate to that layer are
taken. By the time it reaches layer 7, the message is again in a form appropriate to the
application and is made available to the recipient.
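The wrapping and unwrapping of headers can be pictured with a small sketch; the layer names are real, but the header strings below are placeholders invented for illustration, not actual protocol fields.

# Encapsulation down the stack and decapsulation back up (illustrative only).
LAYERS = ['presentation', 'session', 'transport', 'network', 'data link']

def encapsulate(message):
    unit = message
    for layer in LAYERS:                       # layers 6 down to 2 add headers
        unit = '[' + layer + '-hdr]' + unit
    return unit + '[data link-trailer]'        # layer 2 also adds a trailer

def decapsulate(unit):
    unit = unit.removesuffix('[data link-trailer]')
    for layer in reversed(LAYERS):             # the receiver strips them in reverse
        unit = unit.removeprefix('[' + layer + '-hdr]')
    return unit

frame = encapsulate('L7 data')
print(frame)                                   # data link header is outermost
print(decapsulate(frame))                      # recovers 'L7 data'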
FUNCTIONS OF THE LAYERS
In this section we briefly describe the functions of each layer in the OSI model.
Physical Layer
The physical layer coordinates the functions required to transmit a bit stream over a
physical medium. It deals with mechanical and electrical specifications of the
interface and transmission medium. It also defines the procedures and functions that
physical devices and interfaces have to perform for transmission to occur.


The physical layer is concerned with the following:


Physical characteristics of interfaces and media.
The physical layer defines the characteristics of the interface between the devices and
the transmission medium. It also defines the type of transmission medium.
Representation of bits. The physical layer data consist of a stream of bits (sequence
of 0s and 1s) without any interpretation. To be transmitted, bits must be encoded into
signals – electrical or optical. The physical layer defines the type of encoding (how 0s
and 1s are changed to signals).
Data rate. The transmission rate – the number of bits sent each second – is also
defined by the physical layer. In other words, the physical layer defines the duration
of a bit, which is how long it lasts.
Synchronization of bits. The sender and receiver must be synchronized at the bit level.
In other words, the sender and the receiver clocks must be synchronized.
Line configuration. The physical layer is concerned with the connection of devices to
the medium. In a point-to-point configuration, two devices are connected together
through a dedicated link. In a multipoint configuration, a link is shared between
several devices.
Physical topology. The physical topology defines how devices are connected to make
a network. Devices can be connected using a mesh topology (every device connected
to every other device), a star topology (devices are connected through a central
device), a ring topology (every device is connected to the next, forming a ring), or a
bus topology (every device on a common link).
Transmission mode. The physical layer also defines the direction of transmission
between two devices: simplex, half-duplex, or full-duplex. In the simplex mode, only
one device can send; the other can only receive. The simplex mode is a one-way
communication. In the half-duplex mode, two devices can send and receive, but not at
the same time. In a full-duplex (or simply duplex) mode, two devices can send and
receive at the same time.
Data Link Layer
The data link layer transforms the physical layer, a raw transmission facility, to a
reliable link and is responsible for node-to-node delivery. It makes the physical layer

appear error free to the upper layer (network layer).


Specific responsibilities of the data link layer include the following:


Framing.
The data link layer divides the stream of bits received from the network layer into
manageable data units called frames.
Physical addressing.
If frames are to be distributed to different systems on the network, the data link layer
adds a header to the frame to define the physical address of the sender (source
address) and /or receiver (destination address) of the frame. If the frame is intended
for a system outside the sender‘s network, the receiver address is the address of the
device that connects one network to the next.
Flow control.
If the rate at which the data are absorbed by the receiver is less than the rate produced
in the sender, the data link layer imposes a flow control mechanism to prevent
overwhelming the receiver.
Error control.
The data link layer adds reliability to the physical layer by adding mechanisms to
detect and retransmit damaged or lost frames. It also uses a mechanism to prevent
duplication of frames. Error control is normally achieved through a trailer added to
the end of the frame.
Access control.
When two or more devices are connected to the same link, data link layer protocols
are necessary to determine which device has control over the link at any given time.
Network Layer
The network layer is responsible for the source-to-destination delivery of a packet,
possibly across multiple networks (links). Whereas the data link layer oversees the
delivery of the packet between two systems on the same network (link), the network
layer ensures that each packet gets from its point of origin to its final destination.
If two systems are connected to the same link, there is usually no need for a network
layer. However, if the two systems are attached to different networks (links) with
connecting devices between the networks (link), there is often a need for the network
layer to accomplish source-to-destination delivery.


Specific responsibilities of the network layer include the following:


Logical addressing.
The physical addressing implemented by the data link layer handles the addressing
problem locally. If a packet passes the network boundary, we need another addressing
system to help distinguish the source and destination systems. The network layer adds
a header to the packet coming from the upper layer that, among other things, includes
the logical addresses of the sender and receiver.
Routing.
When independent networks or links are connected together to create an internetwork
(a network of networks) or a large network, the connecting devices (called routers or gateways)
route the packets to their final destination. One of the functions of the network layer is
to provide this mechanism.
Transport Layer
The transport layer is responsible for source-to-destination (end-to-end) delivery of
the entire message. Whereas the network layer oversees end-to-end delivery of
individual packets, it does not recognize any relationship between those packets. It
treats each one independently, as though each piece belonged to a separate message,
whether or not it does. The transport layer, on the other hand, ensures that the whole
message arrives intact and in order, overseeing both error control and flow control at
the source-to-destination level.
For added security, the transport layer may create a connection between the two end
ports. A connection is a single logical path between the source and destination that is
associated with all packets in a message. Creating a connection involves three steps:
connection establishment, data transfer, and connection release. By confining
transmission of all packets to a single pathway, the transport layer has more control
over sequencing, flow, and error detection and correction.


Specific responsibilities of the transport layer include the following:


Service-point addressing. Computers often run several programs at the same time.

For this reason, source-to-destination delivery means delivery not only from one
computer to the next but also from a specific process (running program) on one
computer to a specific process (running program) on the other. The transport layer
header therefore must include a type of address called a service-point address (or port
address). The network layer gets each packet to the correct computer; the transport
layer gets the entire message to the correct process on that computer.
Segmentation and reassembly.
A message is divided into transmittable segments, each segment containing a
sequence number. These numbers enable the transport layer to reassemble the
message correctly upon arrival at the destination and to identify and replace packets
that were lost in the transmission.
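A toy sketch of segmentation with sequence numbers and in-order reassembly (the segment size and the tuple layout are invented purely for illustration):

# Split a message into numbered segments and reassemble them in order.
def segment(message, size=10):
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(segments):
    return ''.join(text for _, text in sorted(segments))   # order by sequence number

segs = segment('The transport layer delivers the whole message intact.')
segs.reverse()                        # pretend the segments arrived out of order
print(reassemble(segs))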
Connection control.
The transport layer can be either connectionless or connection-oriented. A
connectionless transport layer treats each segment as an independent packet and
delivers it to the transport layer at the destination machine. A connection-oriented
transport layer makes a connection with the transport layer at the destination machine
first before delivering the packets. After all the data are transferred, the connection is
terminated.
Flow control.
Like the data link layer, the transport layer is responsible for flow control. However,
flow control at this layer is performed end to end rather than across a single link.
Error control.
Like the data link layer, the transport layer is responsible for error control. However,
error control at this layer is performed end to end rather than across a single link. The
sending transport layer makes sure that the entire message arrives at the receiving
transport layer without error (damage, loss, or duplication). Error correction is usually
achieved through retransmission.


Session Layer
The services provided by the first three layers (physical, data link, and network) are
not sufficient for some processes. The session layer is the network dialog controller. It
establishes, maintains, and synchronizes the interaction between communicating
systems.

Specific responsibilities of the session layer include the following:


Dialog control.
The session layer allows two systems to enter into a dialog. It allows the
communication between two processes to take place either in half-duplex (one way at
a time) or full-duplex (two ways at a time) mode. For example, the dialog between a
terminal and a mainframe can be half-duplex.
Synchronization.
The session layer allows a process to add checkpoints (synchronization points) into a
stream of data. For example, if a system is sending
a file of 2000 pages, it is advisable to insert checkpoints after every 100 pages to
ensure that each 100-page unit is received and acknowledged independently. In this
case, if a crash happens during the transmission of page 523, retransmission begins at
page 501: pages 1 to 500 need not be retransmitted.
Presentation Layer
The presentation layer is concerned with the syntax and semantics of the information
exchanged between two systems.
Specific responsibilities of the presentation layer include the following:
Translation.
The processes (running programs) in two systems are usually exchanging information
in the form of character strings, numbers, and so on. The information should be
changed to bit streams before being transmitted. Because different computers use
different encoding systems, the presentation layer is responsible for interoperability


between these different encoding methods. The presentation layer at the sender
changes the information from its sender-dependent format into a common format. The
presentation layer at the receiving machine changes the common format into its
receiver-dependent format.

Encryption.
To carry sensitive information, a system must be able to assure privacy. Encryption
means that the sender transforms the original information to another form and sends
the resulting message out over the network. Decryption reverses the original process
to transform the message back to its original form.

Compression.
Data compression reduces the number of bits to be transmitted. Data compression
becomes particularly important in the transmission of multimedia such as text, audio,
and video.
Application layer
The application layer enables the user, whether human or software, to access the
network. It provides user interfaces and support for services such as electronic mail,
remote file access and transfer, shared database management, and other types of
distributed information services. Of the many application services available, the figure
shows only three: X.400 (message-handling services); X.500 (directory services): and
file transfer, access, and management (FTAM). The user in this example uses X.400
to send an e-mail message. Note that no headers or trailers are added at this layer.
Specific services provided by the application layer include the following :
Network virtual terminal.
A network virtual terminal is a software version of a physical terminal and allows a
user to log on to a remote host. To do so, the application creates a software emulation
of a terminal at the remote host. The user's computer talks to the software terminal
which, in turn, talks to the host, and vice versa. The remote host believes it is
communicating with one of its own terminals and allows the user to log on.


File transfer, access, and management (FTAM).


This application allows a user to access files in a remote computer (to make changes
or read data), to retrieve files from a remote computer, and to manage or control files
in a remote computer.
Mail services.
This application provides the basis for e-mail forwarding and storage.
Directory services.
This application provides distributed database sources and access for global
information about various objects and services.


PHYSICAL LAYER


PHYSICAL LAYER
Transmission of digital information from one device to another is the basic function
required for the devices to be able to communicate. This chapter describes the first layer of the
OSI model, the Physical layer, which carries out this function. After examining the
services it provides to the Data Link layer, the functions of the Physical layer are
discussed. Relaying through the use of modems is a very important data transmission
function carried out at the Physical layer level. Various protocols and interfaces which
pertain to the relaying function are put into perspective. We then proceed to examine
EIA-232-D, a very important interface of the Physical layer. We discuss its
applications and limitations.
THE PHYSICAL LAYER
Let us consider a simple data communication situation shown in Fig.1, where two
digital devices A and B need to exchange data bits.

A B

Physical Bits Bits


Layer
Physical Layer
Protocol

Interface

Interconnecting Medium

Fig. 1 Transmission of bits by the Physical layer.

The basic requirements for the devices to be able to exchange bits are the following:

1. There should be a physical interconnecting medium which can carry electrical


signals between the two devices.
2. The bits need to be converted into electrical signals and vice versa.
3. The electrical signal should have characteristics (voltage, current, impedance, rise
time etc) suitable for transmission over the medium.
4. The devices should be prepared to exchange the electrical signals.
These requirements, which relate purely to the physical aspects of transmission
of bits, are met by the Physical layer. The rules and procedures for interaction
between the Physical layers are called Physical layer protocols (Fig. 1).


The Physical layer provides its service to the Data Link layer which is the next higher
layer and uses this service. It receives service of the physical interconnection medium
for transmitting the electrical signals.
Physical Connection
The Physical layer receives the bits to be transmitted from the Data Link layer (Fig.
2). At the receiving end, the Physical layer hands over these bits to the Data Link
layer. Thus, the Physical layers at the two ends provide a transport service from one
Data Link layer to the other over a ―Physical connection‖ activated by them. A
Physical connection is different from a physical transmission path in the sense that it
is at bit level while the transmission path is at the electrical signal level.

Data Link
Layer

Bits Bits
Physical )
Physical Connection
Layer ( ) ( )

Interconnection Medium
Fig. 2 Physical connection.

The Physical connection Shown in Fig. 2 is point-to-point. Point-to-multipoint


Physical connection is also possible as shown in Fig. 3.

A B C

Data Link
Layer
Bits
Bits Bits
Physical
Physical Connection
Layer

Interconnection Medium

Fig. 3 Point-to-multipoint Physical Connection.


Basic Service Provided to the Data Link Layer
The basic service provided by the Physical layer to the Data Link layer is the bits
transmission service over the Physical connection. The Physical layer service is
specified in ISO 10022 and CCITT X.211 documents. Some of the features of this
service are now described.


Activation/Deactivation of the Physical Connection.


The Physical layer, when requested by the Data Link layer, activates and deactivates a
Physical connection for transmission of bits. Activation ensures that if one user
initiates transmission of bits, the receiver at the other end is ready to receive them.
The activation and deactivation service is non-confirmed, i.e., the user activating or
deactivating a connection is not given any feedback of the action having been carried
out by the Physical layer.
A Physical connection may allow full duplex or half duplex transmission of the bits.
In half duplex transmission, the users themselves decide which of the two users may
transmit. It is not done by the Physical layer protocol.
Transparency.
The Physical layer provides transparent transmission of the bit stream between the
Data Link entities over the Physical connection. Transparency implies that any bit
sequence can be transmitted without any restriction imposed by the Physical layer.
Physical Service Data Units (Ph-SDU).
A Ph-SDU received from the Data Link layer consists of one bit in serial transmission
and of "n" bits in parallel transmission.
Sequenced Delivery.
The Physical layer tries to deliver the bits in the same sequence as they were received
from the Data Link layer, but it does not carry out any error control. Therefore, it is
possible that some of the bits are altered, some are not delivered at all, and some are
duplicated.
Fault Condition Notification.
Data Link entities are notified in case of any fault detected in the Physical connection.
FUNCTIONS WITHIN THE PHYSICAL LAYER
To provide the services listed above to the Data Link layer, the Physical layer
carries out the following functions:
It activates and deactivates the Physical connection at the request of the Data Link
layer entity. These functions involve interaction of the Physical layer entities. Thus,
the Physical layer exchanges control signals with the peer entity.
A Physical connection may necessitate the use of a relay at an intermediate point to
regenerate the electrical signals. (Fig.4). Activation and deactivation of the relay is
carried out by the Physical layer. This function is explained in detail in the next
section.


Bits Physical Connection


Bits
Relay

Physical
Layer

Interconnection Media
Physical Connection End Points
Fig. 4 Relaying function of the Physical layer.

The physical transmission of the bits may be synchronous or asynchronous. The
Physical layer provides the synchronization signals necessary for transmission of the bits.
Character level or frame level synchronization is the responsibility of the Data Link
layer.
If signal encoding is required, this function is carried out by the Physical layer.
The Physical layer does not incorporate any error control function.
RELAYING FUNCTION IN THE PHYSICAL LAYER
It may not always be practical to directly connect two digital devices using a cable if
the distance between them is very long. The quality of the received signals gets
degraded by the noise, attenuation and phase characteristics of the interconnecting
medium. Signal converting units (SCUs) are used in the physical interconnecting
medium as relays to overcome these problems (Fig. 5).

A
SCU SCU B

Fig. 5 Signal converting unit (SCU)

SCUs employ one or more of the following methods to ensure acceptable quality of
the signal received at the distant end:
1. Amplification
2. Regeneration
3. Equalization of media characteristics
4. Modulation.
Examples of SCUs which carry out these functions are: modems, LDMs (Limited
Distance Modems), line drivers, digital service unit, and optical transceiver.


A pair of these devices is always required, one at each end. These two devices
together act as a relay. They receive electrical signals representing data bits at one end
and deliver the same signals at the other end.
The digital end devices face the SCUs and interact with the SCUs at the Physical
layer level. This is shown in detail in Fig.6. Notice that a number of protocols and
interfaces at Physical layer level are involved when SCUs are used as relay units.

A B

SCU-A SCU-B
Physical 1 2 1 Physical
layer Layer

I1 I1 I2 I2 I1 I1
M1 M2 M1

M1 Transmission Medium between End Device and SCU


M2 Transmission Medium between the Two SCUs
I1 Physical Medium Interface between End Device and SCU
I2 Physical Medium Interface between Two SCUs
1. Physical Layer Protocol between End Device and SCU
2. Physical Layer Protocol between the Two SCUs
Fig. 6 Interfaces & protocols in a Physical connection involving signal converting
units.

In the above example, the media M1 and M2 are usually different. M1 consists of a
bunch of copper wires, each carrying data or a control signal. M2, on the other hand,
can be a telephony channel or even optical fiber. Physical medium interfaces I1 and I2
depend on the type of medium used.
As regards the Physical layer protocols, note that the Physical layer of device A no
longer interacts with the Physical layer of device B. It interacts with the Physical layer
of SCU-A to carry out the Physical layer functions. The two SCUs have a different set
of Physical layer protocols between them.


PHYSICAL MEDIUM INTERFACE


The Physical layers need to exchange protocol control information between them.
Unlike the other layers which send the protocol control information as a separate
field, the Physical layers use the interconnecting medium for sending the protocol
control signals. These signals are sent on separate wires as shown in Fig.7. Note that
the control signals originate and terminate in the Physical layers. They have no
functional significance beyond the Physical layer. This is in conformity with the
principles of the layered architecture.

B
A

Data
Data
Bits Bits
Physical Layer
Protocol
Control Signals

Data Signals

Fig. 7 Transmission of control signals of the Physical layer.

The physical interconnecting medium consists of a number of wires carrying data and
control signals. It is essential to specify which wire carries which signal. Moreover,
the mechanical specifications of the connector, type of the connector (male or female)
and the electrical characteristics of the signals need to be specified. Definition of the
physical medium interface includes all these specifications.
PHYSICAL LAYER STANDARDS
Historically, the specifications and standards of the physical medium interface have
also covered the Physical layer protocols. But these specifications have not identified
the Physical layer protocols as such.
Physical layer specifications can be divided into the following 4 components (Fig.8):
1. Mechanical specification
2. Electrical specification
3. Functional specification
4. Procedural specification.


Procedural Specification
(Physical layer protocol)
Physical
Layer
Mechanical Specification
(Connector pin assignment)

Functional Specification
(Various Signals)

Electrical Specification
(Electrical characteristics)
Fig.8 Physical layer specifications

The procedural specification is the Physical layer protocol definition and the other
three specifications constitute the physical medium interface specifications.
The mechanical specification gives details of the mechanical dimensions and the type
of connectors to be used on the device and the medium. Pin assignments of the
connector are also specified.
The electrical specification defines the permissible limits of the electrical signals
appearing at the interface in terms of voltages, currents, impedances, rise time, etc.
The required electrical characteristics of the medium are also specified.
The functional specification indicates the functions of various control signals.
The procedural specification indicates the sequence in which the control signals are
exchanged between the Physical layers for carrying out their functions.
Although there are many standards of the Physical layer, only a few are of wide
significance. Some examples of Physical layer standards are given below.

EIA: EIA-232-D
RS-449, RS-422-A, RS-423-A
CCITT: X.20, X.20bis
X.21, X.21bis
V.35, V.24, V.28
ISO: ISO 2110

Out of the above, the EIA-232-D interface is the most common and is found in almost
all computers. We will examine EIA-232-D in detail in the following sections. Other,
less important Physical layer standards will also be discussed in brief.


EIA-232-D DIGITAL INTERFACE


The EIA-232-D digital interface of the Electronic Industries Association (EIA) is the
most widely used physical medium interface. RS-232-C is the older and more familiar
version of EIA-232-D. It was published in 1969 as RS-232 interface and the current
version was finalised in 1987. EIA-232-D is applicable to the following modes of
transmission:
1. Serial transmission of data
2. Synchronous and asynchronous transmission
3. Point-to-point and point-to-multipoint working
4. Half duplex and full duplex transmission.
DTE/DCE interface
EIA-232-D is applicable to the interface between a Data Terminal Equipment (DTE)
and a Data Circuit Terminating Equipment (DCE) (Fig.9). The terminal devices are
usually called Data Terminal Equipment (DTE). The DTEs are interconnected using
two intermediary devices which carry out the relay function. The intermediary devices
are categorized as Data Circuit-terminating Equipment (DCE). They are so called
because standing at the Physical layer of a DTE and facing the data circuit, one finds
oneself looking at an intermediary device which terminates the data circuit.

Fig. 9 DTE/DCE interfaces at the Physical layer.
Two types of Physical layer interfaces are involved in the above configuration:

Interface between a DTE and a DCE


Interface between the DCEs.
EIA-232-D defines the interface between a DTE and DCE. There are other standards
for DCE-to-DCE interface.

The physical media between the DTE and the DCE consist of several circuits carrying
data, control and timing signals. Each circuit carries one specific signal, either from
the DTE or from the DCE. These circuits are called interchange circuits.


DCE-DCE Connection
A DCE has two interfaces, the DTE-side interface which is EIA-232-D, and the line-side
interface which interconnects the two DCEs through the transmission medium. There
can be several forms of connection and modes of transmission between the DCEs as
shown in Fig. 10.


Fig. 10 Transmission alternatives between two DCEs.


1. The two DCEs may be connected directly through a dedicated transmission
medium.
2. The two DCEs may be connected to PSTN (Public Switched Telephone
Network).
3. The connection may be on a 2-wire transmission circuit or on a 4-wire
transmission circuit.
4. The mode of transmission between the DCEs may be either full duplex or half
duplex.
Full duplex mode of transmission is easily implemented on a 4-wire circuit. Two
wires are used for transmission in one direction and the other two in the opposite
direction. Full duplex operation on a 2-wire circuit requires two communication
channels which are provided at different frequencies on the same medium.
PSTN provides a 2-wire circuit between the DCEs and the circuit needs to be
established and released using a standard telephone interface.


Note that electronics of the DCE may not be directly connected to the interconnecting
transmission circuit. This connection is made on request from the DTE as we shall see
later.
EIA-232-D INTERFACE SPECIFICATIONS
EIA-232-D interface defines four sets of specifications for the interface between a
DTE and a DCE:
1. Mechanical specifications
2. Electrical specifications
3. Functional specifications
4. Procedural specifications
The protocol between the Physical layers of the DTE and DCE is defined by the
procedural specifications. Therefore, the scope of the EIA-232-D interface is not
confined to the physical medium interface alone; it also covers the Physical layer protocol.
CCITT recommendations for the physical interface are as follows:
1. Mechanical specifications as per ISO 2110
2. Electrical specifications V.28
3. Functional specifications V.24
4. Procedural specifications V.24
These recommendations are equivalent to EIA-232-D.

Mechanical Specifications
Mechanical specifications include mechanical design of the connectors which are
used on the equipment and the interconnecting cables; and pin assignments of the
connectors.
EIA-232-D defines the pin assignments and the connector design is as per ISO 2110
standard. A DB-25 connector having 25 pins is used (Fig. 11). The male connector is
used for the DTE port and the female connector is used for the DCE port.
DB-25 pin male connector for the DTE port; DB-25 pin female connector for the DCE port

Fig. 11 25-pin connector of EIA-232-D interface.

Electrical specifications
The electrical specifications of the EIA-232-D interface specify characteristics of the
electrical signals. EIA-232-D is a voltage interface. Positive and negative voltages
within the limits as shown in Fig.12 are assigned to the two logical states of a binary
digital signal.

Limit: +25 volts
Logic 0, On, Space: nominal +12 volts (+3 to +25 volts)
Transition region: –3 to +3 volts
Logic 1, Off, Mark: nominal –12 volts (–3 to –25 volts)
Limit: –25 volts

Fig. 12 Electrical specifications of EIA-232-D interface.


All the voltages are measured with respect to the common ground. The 25-volts limit
is the open circuit or no-load voltage. The range from – 3 to + 3 volts is the transition
region and is not assigned any state.
DC resistance of the load impedance is specified to be between 3000 and 7000 ohms
with a shunt capacitance of less than 2500 pF. The cable interconnecting a DTE and a
DCE usually has a capacitance of the order of 150 pF per meter which limits its
maximum length to about 16 meters. EIA-232-D specifies the maximum length of the
cable as 50 feet (15.3 meters) at the maximum data rate of 20 kbps.
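As a quick check of these figures, the following sketch (a Python illustration only; the function names are ours, not part of the standard) classifies a measured interface voltage against the ±3 V and ±25 V limits and estimates the cable length at which the 2500 pF shunt capacitance limit is reached for a 150 pF per metre cable.

def classify_level(volts):
    # Map a received voltage to the EIA-232-D signal states.
    if volts > 25 or volts < -25:
        return "out of range (beyond the 25 V open-circuit limit)"
    if volts >= 3:
        return "logic 0 / ON / space"
    if volts <= -3:
        return "logic 1 / OFF / mark"
    return "transition region (no state assigned)"

def max_cable_length(pf_per_metre=150.0, shunt_limit_pf=2500.0):
    # Length at which the specified 2500 pF shunt capacitance is reached.
    return shunt_limit_pf / pf_per_metre

if __name__ == "__main__":
    for v in (+12.0, -12.0, +1.5):
        print(v, "->", classify_level(v))
    print("Maximum cable length is about", max_cable_length(), "metres")   # ~16.7 m

The 16.7 m figure agrees with the limit of about 16 metres quoted above, which the standard rounds to 50 feet.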


Functional Specifications
Functional specifications describe the various signals which appear on different pins
of the EIA-232-D interface. Table 1 lists these signals which are divided into five
categories:
1. Ground or common return
2. Data circuits
3. Control circuits
4. Timing circuits
5. Secondary channel circuits
A circuit implies the wire carrying a particular signal. The return path for all the
circuits in both directions (from DTE to DCE and from DCE to DTE) is common. It is
provided on pin 7 of the interface. EIA has used a two- or three-letter designation for
each circuit. CCITT, on the other hand, has given a three digit number to each circuit.
In day-to-day use, however, acronyms based on the function of individual circuits are
more common.
Not all the circuits are always wired between a DTE and a DCE. Depending on
configuration and application, only essential circuits are wired. Functions of the
commonly used circuits are now described.
Signal Ground (AB).
It is the common earth return for all data and control circuits in both directions. This
is one circuit that is always required whatever be the configuration.
Data Terminal Ready (CD), DTE → DCE.
The ON condition of the signal on this circuit informs the DCE that the DTE is ready
to operate and the DCE should also connect itself to the transmission medium.

Table 1 EIA-232-D Interchange Circuits


Pin    Circuit name                                    CCITT    EIA

1      Shield                                          101      –
7      Signal ground (common return)                   102      AB
2      Transmitted data                                103      BA
3      Received data                                   104      BB
4      Request to send                                 105      CA
5      Clear to send                                   106      CB
6      DCE ready                                       107      CC
20     Data terminal ready                             108.2    CD
22     Ring indicator                                  125      CE
8      Received line signal detector                   109      CF
21     Signal quality detector                         110      CG
23     Data rate selector (DTE)                        111      CH
23*    Data rate selector (DCE)                        112      CI
24     Transmitter signal element timing (DTE)         113      DA
15     Transmitter signal element timing (DCE)         114      DB
17     Receiver signal element timing (DCE)            115      DD
14     Secondary transmitted data                      118      SBA
16     Secondary received data                         119      SBB
19     Secondary request to send                       120      SCA
13     Secondary clear to send                         121      SCB
12*    Secondary received line signal detector         122      SCF
18     Local loopback                                  141      LL
21     Remote loopback                                 140      RL
25     Test mode                                       142      TM

* If SCF is not used then CI is on pin 23.

DCE Ready (CC), DCE → DTE.


This circuit is usually turned ON in response to CD and indicates ready status of the
DCE. When this signal is ON, it means that power of the DCE is switched on and it is
connected to the transmission medium.
If the DCE-to-DCE connection is through PSTN, ON status of the CC implies that the
call has been established.
Request to Send (CA), DTE → DCE.
Transition from OFF to ON on the CA triggers the local DCE to perform such set-up
actions as are necessary to transmit data. These set-up activities include sending a
carrier to the remote DCE so that it may further alert the remote DTE and get ready to
receive data.
Transition of the CA from ON to OFF instructs the DCE to complete transmission of
all data and then withdraw the carrier.
Clear to Send (CB), DCE → DTE.
Clear to Send signal indicates that the DCE is ready to receive data from the DTE on
Transmitted Data (BA) circuit. This control signal is changed to the ON state in
response to the Request to Send (CA) from the DTE after a predefined delay. This
delay is provided to give sufficient time to the remote DCE and DTE to get ready for
receiving data. Figure.13 illustrates how the Request to Send (CA) signal works with
Clear to Send (CB) signal to coordinate data transmission between a DTE and a DCE.


A : DTE switches CA "ON" indicating its wish to transmit data; DCE sends carrier on the transmission media
B : DCE accepts to receive data by switching CB "ON"
C : DCE receives data from DTE on BA
Fig. 13 Time sequence of Request to Send and Clear to Send circuits.

Transmitted Data (BA), DTE → DCE.
Data from DTE to DCE is transmitted on this circuit. When no data is being transmitted, the DTE keeps the
signal on this circuit in "1" state.
Data can be transmitted on this circuit only when the following control signals are
ON:
1. Request to Send (CA)
2. Clear to Send (CB)
3. DCE Ready (CC)
4. Data Terminal Ready (CD).
The ON state of these signals ensures that the local DCE is in readiness to transmit
data and sufficient opportunity has been given to the remote DCE and DTE to get
ready for receiving data.
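The condition for transmission can be summarised in a small sketch. The dictionary of circuit states and the helper function below are only an illustration (the names CA, CB, CC and CD are the interchange circuits described above).

REQUIRED_ON = ("CA", "CB", "CC", "CD")   # Request to Send, Clear to Send,
                                         # DCE Ready, Data Terminal Ready

def clear_to_transmit(circuits):
    # Data may be placed on BA only when all four control circuits are ON.
    return all(circuits.get(name) == "ON" for name in REQUIRED_ON)

if __name__ == "__main__":
    state = {"CA": "ON", "CB": "ON", "CC": "ON", "CD": "ON"}
    print(clear_to_transmit(state))    # True: the DTE may transmit on BA
    state["CB"] = "OFF"                # Clear to Send withdrawn by the DCE
    print(clear_to_transmit(state))    # False: the DTE must wait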
Received Data (BB), DCE → DTE.
Data from DCE to DTE is received on this circuit. The DCE maintains the signal on this
circuit in "1" state when no data is being received.
Received Line Signal Detector (CF), DCE → DTE.
When a DTE asserts CA, the local DCE sends a carrier to the remote DCE so that it
may get ready to receive data. When the remote DCE detects the carrier on the line, it
alerts the DTE to get ready to receive data by turning the CF circuit ON.


Transmitter Signal Element Timing (DA), DTE → DCE.


When operating in the synchronous mode of transmission, the DTE clock is made
available to the DCE on this circuit.
Transmitter Signal Element Timing (DB), DCE → DTE.
When operating in synchronous mode of transmission, the DCE clock is made
available to the DTE on this circuit. One of the two clocks, DA or DB, is used as
timing reference.
Receiver Signal Element Timing (DD), DCE → DTE.
At the receiving end, the circuit DD provides the receive clock from the DCE to the
DTE. This clock is extracted from the received signal by the DCE and is used by the
DTE to store the data bits in a shift register. Figure 14 shows two typical methods of
configuring the timing circuits.

(a) Clock supplied by the DCE
(b) Clock supplied by the DTE

Fig. 14 Clock supply alternatives in synchronous transmission.

In the first alternative, the DCE supplies clock to the DTE on circuit DB for the
transmitted data. At each clock transition, one data bit is pushed out of the DTE. At
the remote end, the clock is extracted from the received data and supplied to the DTE
on circuit DD for the received data.
In the second alternative, the DTE supplies clock to the DCE on circuit DA. For the
received data, the DCE extracts the clock from data and supplies it to the DTE as
before.


Ring Indicator (CE), DCE → DTE.


The ON state of this circuit indicates to the DTE that there is an incoming call and the
DCE is receiving a ringing signal. On receipt of this signal the DTE is expected to get
ready and indicate this to the DCE by turning its Data Terminal Ready signal ON.
Local Loopback (LL), DTE → DCE.
The ON condition of this circuit causes a local loopback at the DCE line output so
that the data transmitted on the circuit BA is made available on the received data
circuit BB for conducting local tests.
Remote Loopback (RL), DTE → DCE.
The ON condition of this circuit causes loopback at the remote DCE so that the local
DCE line and the remote DCE could be tested.
Test Mode (TM), DCE → DTE.
After establishing the loopback condition, the DCE indicates its loopback status to the
local DTE by the ON condition of the TM circuit.
Secondary Channel Circuits (SBA, SBB, SCA, SCB, SCF).
These circuits are used when a secondary channel is provided by a DCE. The
secondary channel operates at a lower data signaling rate (typically 75 bits/s) than the
data channel and is intended to be used for return of supervisory control signals. The
control circuits for the secondary channel, SCA and SCB, are functionally the same as
CA and CB except that they are associated with the secondary channel rather than the
data channel.
Procedural Specifications
Procedural specifications lay down the procedures for the exchange of control signals
between a DTE and a DCE. The sequence of events which comprise the complete
procedure for data transmission can be divided into the following four phases:
1. Equipment readiness phase.
2. Circuit assurance phase.
3. Data transfer phase.
4. Disconnect phase.
Equipment Readiness phase.
The following functions are carried out during the equipment readiness phase.
1. The DTE and DCE are energized.
2. Physical connection between the DCEs is established if they are connected to
PSTN.
3. The transmission medium is connected to the DCE electronics.
4. The DTE and DCE exchange signals which indicate their ready state.
We shall consider two simple configurations of connection between the DCEs:
1. The DCEs having a dedicated transmission medium between them.
2. The DCEs having a switched connection through PSTN between them.


Dedicated Transmission Connection: A DTE which wants to transmit asserts the
Data Terminal Ready signal (CD), which connects the DCE electronics to the
transmission medium. If the DCE is energized, it replies with the DCE Ready signal
(CC) as shown in Fig. 15.

CD: Data Terminal Ready; CC: DCE Ready; the DCE connects to a 2-wire or 4-wire dedicated transmission medium

Fig. 15 Equipment readiness phase for dedicated transmission media.

Switched Connection: In this case, the physical connection of the DCEs needs to be
established through a switched telephone network. This is done either manually by the
operators at both ends or automatically using automatic calling and answering
equipment.
In the manual operation, the DCEs are fitted with a telephone instrument. The
operator wishing to establish the connection dials the distant end telephone number
and indicates his intent to the distant end operator. The operators then press
appropriate switches on their respective DTEs to send the Data Terminal Ready
signals (CD). The Data Terminal Ready signal causes the transmission medium to
changeover from the telephone instrument to the DCE at both ends (Fig. 16)

CD: Data Terminal Ready; CC: DCE Ready

Fig. 16 Equipment readiness phase for transmission on switched media.

If automatic answering equipment is used, the incoming call is detected by the DCE
and indicated to the DTE by Ring indicator signal (CE). If the DTE is in energized
condition, it sends the Data Terminal Ready signal (CD), which causes connection to
the transmission medium.


The DCE indicates its readiness status simultaneously to the DTE on the DCE Ready
circuit (CC) (Fig.17)
CD: Data Terminal Ready; CC: DCE Ready; CE: Ring Indicator; RD: Ring Detector
Fig. 17 Distant end readiness with auto answering equipment.

Thus, at the end of the equipment readiness phase, we have (a) ON state of the Data
Terminal Ready and DCE Ready signals and (b) the transmission medium connected
to the DCE electronics.
Circuit Assurance Phase.
In the circuit assurance phase, the DTEs indicate their intent to transmit data to the
respective DCEs and the end-to-end (DTE to DTE) data circuit is activated. If the
transmission mode is half duplex, only one of the two directions of transmission of
the data circuit is activated.
Half Duplex Mode of Transmission:
A DTE indicates its intent to transmit data by asserting the Request to Send signal
(CA) which activates the transmitter of the DCE and a carrier is sent to the distant end
DCE (Fig. 18). The Request to send signal also inhibits the receiver of the DCE.

CD: Data Terminal Ready; CC: DCE Ready; CA: Request to Send; CB: Clear to Send;
CF: Received Line Signal Detector; TX: Transmitter; RX: Receiver
Fig. 18 Circuit assurance phase in half duplex mode of transmission.

After a short interval of time equal to the propagation delay, the carrier appears at the
input of the distant end DCE. The DCE detects the incoming carrier and gets ready to
demodulate data from the carrier. It also alerts the DTE using the Received Line
Signal Detector circuit (CF) as shown in the Fig. 18.
After activating the circuit, the sending end DCE signals the DTE to proceed with
data transmission by returning the Clear to Send signal (CB) after a fixed delay. This
delay ensures that sufficient opportunity is given to the distant end to get ready to
receive data. With the Clear to Send signal, the equipment readiness and end-to-end
data circuit readiness are assured and the sending end DTE can initiate data
transmission.
In half duplex operation, the Clear to Send signal is given in response to Request to
Send only if the local Received Line Signal Detector circuit is OFF.
Full Duplex Operation:
In full duplex operation, there are separate communication channels for each direction
of data transmission so that both the DTEs may transmit and receive simultaneously.
The circuit assurance phase is exactly the same as in the half duplex transmission mode
except that both the DTEs can independently assert Request to Send. In this case,
the receivers always remain connected to the receive side of the communication
channel.
Data Transfer Phase.
Once the circuit assurance phase is over, data exchange between DTEs can start. The
following circuits are in ON state during this phase:
Transmitting End Receiving End
Data Terminal Ready Data Terminal Ready
DCE Ready DCE Ready
Request to Send Received Line Signal Detector
Clear to Send

At the transmitting end, the DTE sends data on Transmitted Data circuit (BA) to the
DCE which sends a modulated carrier on the transmission medium. The distant end
DCE demodulates the carrier and hands over the data to the DTE on Received Data
circuit (BB).
In the half duplex operation, the direction of transmission needs to be reversed every
time a DTE completes its transmission and the other DTE wants to transmit. The
Request to send signal is withdrawn after the transmitting end DTE completes its
transmission. The DCE withdraws its carrier and switches the communication channel
to its receiver. The DCE also inhibits further flow of data from the local DTE by
turning off the Clear to send signal.
When the distant end DCE notices the carrier disappear, it withdraws the Received
Line Signal Detector circuit. Noticing that the transmission medium is free, the distant
end DTE performs actions of the circuit assurance phase and then transmits data.
Thus, a DTE wanting to transmit, checks each time if the channel is free by sensing
Received Line Signal Detector circuit and if it is OFF, it asserts the Request to Send.
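The turnaround just described can be written out as an ordered list of signal events. The sketch below is only an illustration of that ordering; it is not a timing model and the wording of the steps is ours, not the standard's.

def half_duplex_turnaround():
    # Yield the control-signal events when the direction of transmission reverses.
    yield "Sending DTE turns Request to Send (CA) OFF after its last data bit"
    yield "Sending DCE withdraws its carrier and turns Clear to Send (CB) OFF"
    yield "Distant DCE loses the carrier and turns Received Line Signal Detector (CF) OFF"
    yield "Distant DTE senses CF OFF (channel free) and asserts Request to Send (CA)"
    yield "Distant DCE sends its carrier and, after the fixed delay, returns Clear to Send (CB)"
    yield "Distant DTE transmits its data on Transmitted Data (BA)"

if __name__ == "__main__":
    for step, event in enumerate(half_duplex_turnaround(), start=1):
        print(step, event)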


Disconnect Phase.
After the data transfer phase, disconnection of the transmission media is initiated by a
DTE. It withdraws Data Terminal Ready signal. The DCE disconnects from the
transmission media and turns off the DCE Ready signal.
COMMON CONFIGURATIONS OF EIA-232-D INTERFACE
Not all the circuits defined in EIA-232-D specifications are always implemented.
Depending on application and communication configuration only a subset of the
circuits is implemented.

Figure 19 shows the circuits commonly implemented in a standard full duplex configuration:
Pin 1: Shield
Pin 2: Transmitted Data
Pin 3: Received Data
Pin 4: Request to Send
Pin 5: Clear to Send
Pin 6: DCE Ready
Pin 7: Signal Ground
Pin 8: Received Line Signal Detector
Pin 20: Data Terminal Ready
Pin 22: Ring Indicator

Fig. 19 Commonly implemented circuits in a standard full duplex configuration.


Standard full duplex configuration implementation as shown above is required for
communication involving modems and the telephone network. In practice, however, the
following non-standard configurations are also quite often used.
Three-wire interconnection.
Figure 20 depicts a three-wire interconnection which is quite adequate for many
interfacing configurations. This interconnection provides a bare minimum number of
circuits necessary for full duplex communication. The circuits present are Transmitted
Data, Received Data and Signal Ground.



Fig. 20 Three-wire interconnection for full duplex operation.


Three-Wire Interconnection with Loopback.
If Request to Send and Clear to Send circuits are implemented in a DTE port, the
three-wire interconnection shown in Fig. 20 does not work because the DTE will not
transmit data unless it receives the Clear to Send signal. A three-wire interconnection
with loopback overcomes this problem (Fig. 21) by locally generating the signals
required for initiating the transmission. The following jumpers are provided.
1. Request to Send circuit is jumpered to Clear to Send and Received Line Signal
Detector circuits.
2. Data Terminal Ready circuit is jumpered to DCE Ready circuit.

AB: Signal Ground; BA: Transmitted Data; BB: Received Data; CA: Request to Send;
CB: Clear to Send; CC: DCE Ready; CD: Data Terminal Ready; CF: Received Line Signal Detector
Fig. 21 Three-wire interconnection with loopbacks.

By jumpering the Data Terminal Ready circuit to DCE Ready circuit, the equipment
readiness phase is completed as soon as the DTE asserts the Data Terminal Ready
signal. Quite often, this occurs when power is applied to the DTE.


When the DTE asserts the Request to Send signal, the circuit assurance phase is
immediately completed because the DTE immediately receives the Clear to Send and
Received Line Signal Detector signals.
By providing the loopbacks, the number of interconnecting wires is reduced but it
should be kept in mind that certain features of EIA-232-D interface have also been
omitted. There are many other configurations each tailored to a particular requirement
and with its own merits and limitations. In the following section we shall discuss the
special class of interface configurations associated with interconnection of devices
having similar interface ports even though EIA-232-D was designed to work between
two dissimilar devices, a DTE and a DCE.
Null Modem
If we view the EIA-232-D interface by standing between the DTE and the DCE, it is
seen that a signal which comes out of a particular pin of the DTE port goes towards
the DCE on the same pin. In other words, in any pair of corresponding pins of the
DTE and DCE ports, one is output pin and the other is input pin.
Therefore, in order to apply EIA-232-D to interconnect any two devices, it is
necessary that a DTE thinks that it is connected to a DCE, whether the other device is
actually a DCE or not. Thus, a computer and a terminal can be directly interconnected
using EIA-232-D interface if one of them has a DCE port and the other a DTE port
(Fig. 22a)
On the other hand, if both the devices which are to be interconnected have DTE ports,
one of the devices needs to be suitably modified to look like a DCE (Fig. 22b). A null
modem carries out this job externally by converting a DTE port to a DCE port and
vice versa (Fig.22c).

(a) DTE-DCE interconnection
(b) DTE-DTE direct interconnection
(c) DTE-DTE interconnection through a null modem


Fig. 22 Need for null modem.

Null Modem with Loopback.


Figure 23 shows a three-wire null modem used for interconnecting two DTEs. Notice
that the null modem is a cable with DCE connectors (female connectors) at the ends. The
Transmitted/Received Data wires are crossed so that data transmitted by one DTE may
be received by the other at its appropriate pin. The loopback jumpers for three-wire
interconnections explained earlier are also provided.

Fig. 23 Internal configuration of a null modem.


Null Modem with Loopback and Multiple Crossovers.
Figure 24 shows another variation of null modem cable. The following jumpers and
crossovers are provided.
Jumpers from
Request to Send to Clear to Send
Ring Indicator to DCE Ready.
Crossovers between
Transmitted Data and Received Data
Request to Send and Received Line Signal Detector
Data Terminal Ready and Ring Indicator.

Fig. 24 Null modem with loopbacks and multiple crossovers.


When a DTE asserts a Data Terminal Ready signal, the other DTE is immediately
given a stimulus, the Ring indicator, to believe that it has an incoming call. It
responds with its Data Terminal Ready which results in the DCE Ready signal at the
calling DTE. Thus, the equipment readiness phase is complete. Before transmitting
data, the calling DTE asserts the Request to Send which raises the Received Line
Signal Detector at the other DTE. The Request to Send signal is looped back at the
calling DTE as Clear to Send. Therefore, the circuit assurance phase is also
immediately completed and data transmission can begin.
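The wiring of Fig. 24 can be captured compactly as pin-to-pin mappings. The two dictionaries below are only an illustration; the pin numbers are the DB-25 assignments of Table 1.

CROSSOVERS = {                            # circuit of DTE-A -> circuit of DTE-B
    "Transmitted Data (pin 2)": "Received Data (pin 3)",
    "Request to Send (pin 4)": "Received Line Signal Detector (pin 8)",
    "Data Terminal Ready (pin 20)": "Ring Indicator (pin 22)",
}

LOCAL_JUMPERS = {                         # looped back within the same connector
    "Request to Send (pin 4)": "Clear to Send (pin 5)",
    "Ring Indicator (pin 22)": "DCE Ready (pin 6)",
}

if __name__ == "__main__":
    print("Crossed between the two DTEs:")
    for src, dst in CROSSOVERS.items():
        print(" ", src, "->", dst)
    print("Jumpered locally at each end:")
    for src, dst in LOCAL_JUMPERS.items():
        print(" ", src, "->", dst)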
The above discussion applies to the asynchronous mode of operation because we have
not considered the clock. If the terminal devices require external clock, the null
modem cable will not serve the purpose. A synchronous null modem device which
has a clock source is required. Else, the internal clock of a DTE can serve the purpose.
This clock which is available on pin 24, is wired to pin 17 locally for receive timing,
and to pins 15 and 17 of the other device for transmit and receive timings.
LIMITATIONS OF EIA-232-D
Although EIA-232-D is the most popular physical layer interface, its use in computer
networking is limited to low data rates and short distance data transmission
applications. The distance between a DTE and DCE is limited to 15 meters, beyond
which modems are necessary. Even a small industrial plant or an office requires
modems between the host and its terminals. As regards the data rate, the EIA-232-D
interface meets the local transmission requirements which are usually below 9600 bps
but higher data rates of 48 kbps and above are required for computer networking. The
upper limit of 20 kbps of EIA-232-D is not sufficient for these applications.
The above limitations of the EIA-232-D interface are due to the following two
reasons:
1. Unbalanced transmission mode of its signals.
2. Shared common ground for all signals flowing in both the directions.
Raised ground potential, crosstalk and noise due to these factors result in introduction
of errors at high bit rates and for longer separation between the DTE and the DCE.
These limitations of the EIA-232-D have been overcome in the interface standards
developed subsequently.
RS-449 INTERFACE
In the early 1970s, the EIA introduced RS-422-A, RS-423-A and RS-449 interfaces to
overcome the limitations of RS-232-C. RS-422-A and RS-423A cover only the
electrical specifications, and RS-449 covers mechanical, functional and procedural
specifications. These specifications are compatible with EIA-232-D so that a device
having EIA-232-D interface can be interconnected to another having the RS-449
interface. CCITT also adopted RS-449, RS-422-A and RS-423-A subsequently and
published recommendations V.54, V.10 and V.11. Procedural specifications are the
same as in EIA-232-D and, therefore, have not been described again.
Mechanical Specifications
RS-449 gives detailed mechanical specifications of the interface. Since RS-449
incorporates more than 25 signals, two connectors, one with 37 pins and the other
with 9 pins have been specified. Mechanical designs of the connectors are as per ISO
4902 standard. All signals associated with the basic operation of the interface appear


on the 37-pin connector. The secondary channel circuits are grouped on the 9-pin
connector. Table 2 gives a list of the signals present in the RS-449 interface with their
pin assignments. For purposes of comparison, we have included the signals which are
present in the EIA-232-D interface also in the table.
Mechanical compatibility between EIA-232-D and RS-449 is accomplished at
connector level using an adapter as shown in Fig. 25.
The RS-449 standard also specifies the maximum cable length and the corresponding
data rate supported by the cable. Figure 26 shows this relationship graphically.

Table 2 RS-449 Interface Circuits


A. 37 Pin Connector

RS-449 (mnemonic and circuit name)        Pin No.        EIA-232-D (mnemonic and circuit name)
Shield 1 Shield
SG Signal Ground 19 AB Signal ground
SC Send Common 37
RC Receive Common 20
TS Terminal in Service 28
IC Incoming Call 15 CE Ring Indicator
TR Terminal Ready 12, 30 CD Data Terminal ready
DM Data Mode 11, 29 CC DCE Ready
SD Send Data 4, 22 BA Transmitted Data
RD Receive Data 6, 24 BB Received Data
TT Terminal Timing 17, 35 DA Transmitter Signal Element
Timing (DTE)
ST Send Timing 5, 23 DB Transmitter Signal Element
Timing (DCE)
RT Receive Timing 8, 26 DD Receiver Signal Element
Timing (DCE)
RS Request to Send 7, 25 CA Request to Send
CS Clear to Send 9, 27 CB Clear to Send
RR Receive Ready 13, 31 CF Received Line Signal Detector
SQ Signal Quality 33 CG Signal Quality Detect
NS New Signal 34
SF Select Frequency 16
SR Signal Rate Selector 16 CH Data Signal Rate Selector
(DTE)
SI Signal Rate Indication 2 CI Data Signal Rate Selector
(DCE)
LL Local Loop-back 10 LL Local Loop-back
RL Remote Loop-back 14 RL Remote Loop-back
TM Test Mode 18 TM Test Mode
SS Select Standby 32
SB Standby indicator 36
Spare 3, 21
Table 2 RS-449 Interface Circuits


B. 9 Pin Connector

RS-449 (mnemonic and circuit name)        Pin No.        EIA-232-D (mnemonic and circuit name)
Shield 1
SG Signal Ground 5 AB Signal ground
SC Send Common 9
RC Receive Common 6
SSD Secondary Send Data 3 SBA Secondary Transmitted Data
SRD Secondary Received Data 4 SBB Secondary Received Data
SRS Secondary Request to Send 7 SCA Secondary Request to Send
SCS Secondary Clear to Send 8 SCB Secondary Clear to Send
SRR Secondary Receiver Ready 6 SCF Secondary Received Line
Signal Detector

Fig. 25 Adapter for EIA-232-D and RS-449 interfaces.

Electrical Specifications
To ensure electrical compatibility with EIA-232-D, both balanced and unbalanced
transmissions can be used. RS-422-A specifies electrical characteristics of the
balanced circuits while RS-423-A specifies electrical characteristics of the unbalanced
circuits. Circuits of RS-449 are divided into two categories. Category I circuits are as
follows:
1. Send Data (SD)
2. Receive Data (RD)
3. Terminal Timing (TT)
4. Send Timing (ST)
5. Receive Timing (RT)


Fig. 26 Data rates supported by RS-449

6. Request to Send (RS)
7. Clear to Send (CS)
8. Receive Ready (RR)
9. Terminal Ready (TR)
10. Data Mode (DM)

The rest of the circuits belong to Category II.

For data rates of less than 20 kbps (upper limit for EIA-232-D circuits), Category I
circuits may be implemented using either RS-422-A or RS-423-A electrical
characteristics. For data rates over 20 kbps, balanced RS-422-A electrical
characteristics must be used. Circuits belonging to Category II are always
implemented using RS-423-A characteristics.
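The rule can be restated as a small helper. The 20 kbps threshold and the category definitions come from the text above; the function itself is only an assumed illustration.

def rs449_electrical_standard(category, data_rate_bps):
    # Return the electrical specification applicable to an RS-449 circuit.
    if category == 2:
        return "RS-423-A (unbalanced)"        # Category II circuits are always unbalanced
    if category == 1:
        if data_rate_bps > 20_000:
            return "RS-422-A (balanced)"      # above 20 kbps, balanced operation is mandatory
        return "RS-422-A or RS-423-A"         # up to 20 kbps, either may be used
    raise ValueError("category must be 1 or 2")

if __name__ == "__main__":
    print(rs449_electrical_standard(1, 9600))    # RS-422-A or RS-423-A
    print(rs449_electrical_standard(1, 48000))   # RS-422-A (balanced)
    print(rs449_electrical_standard(2, 48000))   # RS-423-A (unbalanced)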
V.35 Interface
The V.35 interface was originally specified by CCITT as an interface for 48kbps line
transmission. It has been adopted for all line speeds above 20kbps.
V.35 is a mixture of balanced (like RS422A) and common earth (like RS232) signal
interfaces. The control lines including DTR, DSR, DCD, RTS and CTS are single
wire common earth interfaces, functionally compatible with RS-232 level signals. The
data and clock signals are balanced, RS-422A-like signals.
The control signals in V.35 are common earth single wire interfaces because these
signal levels are mostly constant or vary at low frequencies. The high frequency data
and clock signals are carried by balanced lines. Thus single wires are used for the low
frequencies for which they are adequate, while balanced pairs are used for the high
frequency data and clock signals.


The V.35 plug is standard. It is a black plastic plug about 20 mm by 70mm, often with
gold plated contacts and built-in hold down and mating screws. The V.35 plug is
roughly 30 times the price of a DB25.
Characteristic              Standard
Electrical                  ITU V.11 and V.28 recommendations
Circuit specifications      ITU V.35 recommendation
Mechanical                  ISO IS 2593
G 703 Interface
It is probably the most cost-competitive solution for connecting data communications
equipment to 2 Mbps leased-line private circuits. This interface can work from
64 kbps to 2 Mbps. Its functional interface is defined in G.704, while G.703 defines the
electrical specification. The maximum cable length is 800 meters.
Specifications
ITU-T G.703 interface specification

Type                   Codirectional, centralized or contradirectional, 64 kbps
Line                   4/6 wires, 19 to 26 AWG twisted-pair cable
Range                  Up to 800 meters over 24 AWG wire
Impedance              120 ohms
Clock frequency        64 kHz
Frequency tracking     500 ppm
Interface connector    RJ-45
Complies with          ITU-T G.703 and G.823
Frame format           Unframed only
Line code              Codirectional or AMI

Data communication interface specifications


Interface type         RS-232: DB25 female, DB9
                       V.35: DB25 to MB35 adapter cable
                       X.21: DB25 to DB15 adapter cable
Data rate              64 kbps to 2 Mbps, synchronous


High-Speed Serial Interface


High-Speed Serial interface (HSSI) is a short-distance communications interface that
is commonly used to interconnect routing and switching devices on local area
networks (LANs) with the higher-speed lines of a wide area network (WAN). HSSI is
used between devices that are within fifty feet of each other and achieves data rates up
to 52 Mbps. Typically, HSSI is used to connect a LAN router to a T-3 line. HSSI can
be used to interconnect devices on token ring and Ethernet LANs with devices that
operate at Synchronous Optical Network (SONET) OC-1 speeds or on T-3 lines.
HSSI is also used for host-to-host link, image processing, and disaster recovery
applications.
The electrical connection uses a 50-pin connector. The HSSI transmission
technology uses differential emitter-coupled logic (ECL). (ECL is a high-speed logic
family in which transistor pairs are coupled at their emitters, enabling very high bit
rates.) HSSI uses gapped timing. Gapped timing allows a Data Communications
Equipment device to control the flow of data being transmitted from a Data Terminal
Equipment device, such as a terminal or computer, by adjusting the clock speed or
deleting clock pulses.
For diagnosing problems, HSSI offers four loopback tests. The first loopback tests the
cable by looping the signal back after it reaches the DTE port. The second and third
loopbacks test the line ports of the local DCE and the remote DTE. The fourth tests
the DTE‘s DCE port. HSSI requires two control signals (―DTE available‖ and ―DCE
available‖) before the data circuit is valid.
The High-Speed Serial Interface (HSSI) is a DTE/DCE interface that was developed
by Cisco Systems and T3 plus Networking to address the need for high-speed
communication over WAN lines.
HSSI defines both electrical and physical interfaces on DTE and DCE devices. It
operates at the physical layer of the OSI reference model.
HSSI technical characteristics
Characteristic Value
Maximum signaling rate 52 Mbps
Maximum cable length 50 feet
Number of connector points 50
Interface DTE-DCE
Electrical technology Differential ECL
Typical power consumption 610 mW
Topology Point-to-point
Cable type Shielded twisted-pair wire

The maximum signaling rate of HSSI is 52 Mbps. At this rate, HSSI can handle the
T3 speeds (45 Mbps) of many of today's fast WAN technologies, as well as the Optical
Carrier 1 (OC-1) speed (52 Mbps) of the synchronous digital hierarchy (SDH). In
addition, HSSI easily can provide high-speed connectivity between LANs, such as
Token Ring and Ethernet.



MODEMS AND DATA CIRCUITS


The telephone network which provides worldwide accessibility is meant primarily for
voice communication and supports only analog voice band service having bandwidth
of 300 Hz to 3400 Hz. Baseband transmission of digital signals using an analog voice
band service has several limitations. To overcome these limitations, some intermediary
devices are used to utilize the service in the best possible way. These devices are
modems and data multiplexers.
We begin this chapter by examining various digital modulation methods which are
used in the modems. We then proceed to describe operation of the modem. Besides
modulation and demodulation, there are many additional functions which are
performed by the modems. We examine all these functions and familiarize ourselves
with the modem terminology. There are a number of CCITT recommendations on the
modems. We take a brief look at the features of the CCITT modems and some non-
standard modems. We next examine the various data multiplexing techniques.
Frequency division and time division multiplexers are discussed in brief while the
statistical time division multiplexer is discussed in considerable detail.
DIGITAL MODULATION METHODS
There are three basic types of modulation methods for transmission of digital signals.
These methods are based on the three attributes of a sinusoidal signal, amplitude,
frequency and phase. The corresponding modulation methods are called: Amplitude
Shift Keying (ASK), Frequency Shift Keying (FSK) and Phase Shift Keying (PSK).
In addition, a combination of ASK and PSK is employed at high bit rates. This
method is called Quadrature Amplitude Modulation (QAM).
Amplitude Shift Keying (ASK)
Amplitude Shift Keying (ASK) is the simplest form of digital modulation. In ASK,
the carrier amplitude is multiplied by the binary ―1‖ or ―0‖ (Fig.1). The digital input
is a unipolar NRZ signal.
The amplitude modulated carrier signal can be written as
V(t) = d sin(2πfct)
where fc is the carrier frequency and d is the data bit variable which can take the values
"1" or "0", depending on the state of the digital signal.
The frequency spectrum of the ASK signal consists of the carrier frequency with
upper and lower side bands (Fig. 2). For random unipolar NRZ digital signal having
bit rate R, the first zero of the spectrum occurs at R Hz away from the carrier
frequency.
The transmission bandwidth B of the ASK signal is restricted by using a filter to
B = (1+r)R

Where r is a factor related to the filter characteristics and its values lies in the range 0 – 1.


Fig. 1 Amplitude shift keying

Fig. 2 Frequency spectrum of an ASK signal.

ASK is very sensitive to noise and finds limited application in data transmission. It is
used at very low bit rates, of less than 100 bps.
Frequency Shift Keying (FSK)
In Frequency Shift Keying (FSK) , the frequency of the carrier is shifted between two
discrete values, one representing binary ―1‖ and the other representing binary ―0‖
(Fig. 3). The carrier amplitude does not change. FSK is relatively simple to
implement. It is used extensively in low speed modems having bit rates below 1200
bps.
The instantaneous value of the FSK signal is given by
V(t) = d sin(2πf1t) + (1 – d) sin(2πf0t)
where f1 and f0 are the frequencies corresponding to binary "1" and "0"
respectively and d is the data signal variable as before.


Fig. 3 Frequency shift keying

From the above equation, it is obvious that the FSK signal can be considered to be
comprising two ASK signals with carrier frequencies f1 and f0. Therefore, the
frequency spectrum of the FSK signal is as shown in Fig. 4.

Fig. 4 Frequency spectrum of a FSK signal.


To get an estimate of the bandwidth B for the FSK signal, we need to include the
separation between f1 and f0 and significant portions of the upper side band of carrier
f1 and of the lower side band of carrier f0.
B = | f1– f0 | + (1 + r) R
The separation between f1 and f0 is kept at least 2R/3. CCITT Recommendation V.23
specifies f1 = 1300 Hz and f0 = 2100 Hz for a bit rate of 1200 bps. FSK is not very
efficient in its use of the available transmission channel bandwidth.
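A quick numerical check of the two bandwidth estimates is given below, using the V.23 frequencies quoted above and an assumed roll-off factor r = 0.5 (chosen only for illustration). The resulting 2600 Hz fits within the roughly 3100 Hz wide speech channel.

def ask_bandwidth(bit_rate, r):
    # B = (1 + r) R for an ASK signal
    return (1 + r) * bit_rate

def fsk_bandwidth(bit_rate, f1, f0, r):
    # B = |f1 - f0| + (1 + r) R for an FSK signal
    return abs(f1 - f0) + (1 + r) * bit_rate

if __name__ == "__main__":
    R, r = 1200.0, 0.5
    print(ask_bandwidth(R, r))                          # 1800.0 Hz
    print(fsk_bandwidth(R, f1=1300.0, f0=2100.0, r=r))  # 800 + 1800 = 2600.0 Hz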


Phase Shift Keying (PSK)


Phase Shift Keying (PSK) is the most efficient of the three modulation methods and is
used for high bit rates. In PSK, the phase of the carrier is modulated to represent the binary
values. Figure 5 shows the simplest form of PSK, called Binary PSK (BPSK). The
carrier phase is changed between 0 and π by the bipolar digital signal. Binary states "1"
and "0" are represented by the negative and positive polarities of the digital signal.

Fig. 5 Binary phase shift keying.


The instantaneous value of the BPSK signal can be written as
v(t) = sin(2πfct) when d = +1, for binary state "0"
v(t) = –sin(2πfct) = sin(2πfct + π) when d = –1, for binary state "1"
In other words, v(t) = d sin(2πfct), d = ±1.
The expression for the BPSK signal is very similar to the expression for the ASK signal
except that the data variable d takes the values ±1. The carrier gets suppressed due to the
bipolar modulating signal. The frequency spectrum of the BPSK signal for a random
NRZ digital modulating signal is shown in Fig. 6.

Fig. 6 Frequency spectrum of a BPSK signal.


The estimate of bandwidth B of the BPSK signal can be obtained as before.


B = (1 + r)R,   0 < r < 1
Where the parameter r depends on transmission filter characteristics. The BPSK
signal requires less bandwidth as compared to the FSK signal.
4 PSK Modulator
Figure 8 shows the schematic of a 4 PSK modulator. It consists of two BPSK
modulators. The carrier of one of the modulators is phase shifted by π/2
radians. The data bits are taken in groups of two bits called dibits and two bipolar
digital signals are generated, one from the first bit of the dibits and the other from the
second bit of the dibits. Outputs of the modulators are added so that the phase of the
resultant carrier is the vectorial addition of the respective phasors of the two
modulated carriers.

Fig. 8 4 PSK modulator

4 PSK Demodulator
Figure 9 shows a 4 PSK demodulator. The reference carrier is recovered from the
received modulated carrier. As in the modulator, a π/2 phase shifted carrier is also
generated. When these carriers are multiplied with the received signal, we get
sin(2πfct + φ) sin(2πfct) = ½ cos(φ) – ½ cos(4πfct + φ)
and
sin(2πfct + φ) sin(2πfct + π/2) = ½ cos(φ – π/2) – ½ cos(4πfct + φ + π/2)
where φ is the phase of the received carrier.
The multiplier outputs are passed through low pass filters to remove the 2fc frequency
component and are applied to the comparators which generate the dibits. Table 1
gives the outputs of the low pass filters for various values of input phase φ.


In the above demodulation method, we have assumed availability of the phase
coherent carrier at the receiving end, i.e., the recovered carrier at the receiving end
being in phase with the carrier at the transmitting end. But it is quite possible that the
phase of the recovered carrier is out by π/2 or π. And if this happens, the demodulator
operation will be upset.

Fig. 9 4 PSK demodulator

Table 1 Low pass filter outputs of the 4 PSK demodulator of Fig. 9

φ         U        V        A    B
π/4       0.35     0.35     0    0
3π/4      0.35    –0.35     0    1
5π/4     –0.35    –0.35     1    1
7π/4     –0.35     0.35     1    0
EXAMPLE 1
1. What are the phase states of the carrier when the bit stream
1 0 1 1 1 0 0 1 0 0
is applied to the 4 PSK modulator shown in Fig. 8?
2. If the recovered carrier at the demodulator is out of phase by π radians, what
will be the output when the above 4 PSK carrier is applied to the demodulator
shown in Fig. 9?
Solution
1. Modulator input                               1 0     1 1     1 0     0 1     0 0
   Phase states of the transmitted carrier       7π/4    5π/4    7π/4    3π/4    π/4
2. Relative phase with respect to the
   recovered carrier                             3π/4    π/4     3π/4    7π/4    5π/4
   Output of the demodulator (Table 1)           0 1     0 0     0 1     1 0     1 1
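The example can be verified with a short sketch built around the dibit-to-phase mapping of Table 1. The helper functions below are only an illustration, not part of any modem implementation.

from math import pi

PHASE_OF = {"00": pi/4, "01": 3*pi/4, "11": 5*pi/4, "10": 7*pi/4}
DIBIT_OF = {round(v, 6): k for k, v in PHASE_OF.items()}

def modulate(bits):
    # Group the bit stream into dibits and return the carrier phase states.
    return [PHASE_OF[bits[i:i+2]] for i in range(0, len(bits), 2)]

def demodulate(phases, carrier_offset=0.0):
    # Decode each phase relative to a (possibly offset) recovered carrier.
    out = []
    for ph in phases:
        rel = round((ph - carrier_offset) % (2*pi), 6)
        out.append(DIBIT_OF[rel])
    return " ".join(out)

if __name__ == "__main__":
    tx_phases = modulate("1011100100")
    print([round(p/pi, 2) for p in tx_phases])        # [1.75, 1.25, 1.75, 0.75, 0.25] times pi
    print(demodulate(tx_phases, carrier_offset=pi))   # 01 00 01 10 11, as in the example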
DIFFERENTIAL PSK
The problem of generating the carrier with a fixed absolute phase can be
circumvented by encoding the digital information as the phase change rather than as
the absolute phase. This modulation scheme is called differential PSK. If φt–1 is the
previous phase state and φt is the new phase state of the carrier when data bits
modulate the carrier, the phase change is defined as Δφ = φt – φt–1.
Δφ is coded to represent the data bits. The phase space diagrams of Fig. 7 are still
applicable for 4 differential PSK and 8 differential PSK, but now they represent phase
changes rather than the absolute phase states.
For demodulating the differential PSK signal, it is merely necessary to detect the
carrier phase variations. The instantaneous value of the carrier phase is no longer
important.
Differential BPSK
Differential BPSK modulator is implemented using an encoder before a BPSK
modulator (Fig. 10.) The encoder logic is so designed that the desired phase changes
are obtained at the modulator output.

Input Data → Encoder → Level Shifter → BPSK Modulator → Differential BPSK

Fig. 10 Differential BPSK modulator


Table 2 shows the relation between the input data bits and the phase states of the
carrier at the modulator output. Knowing that the carrier phase is 0 for binary "0" at
the modulator input and π for binary "1" at the modulator input, we can write the logic
table for the encoder. It is easily implemented using a JK flip flop in the toggle mode.

Table 2 Encoder Logic of Differential BPSK Modulator

A      φt–1     φt      Δφ      Mt–1     Mt

0      0        0       0       0        0
0      π        π       0       1        1
1      0        π       π       0        1
1      π        0       π       1        0


EXAMPLE 2
Write the phase states of the differential BPSK carrier for the input data stream
100110101. The starting phase of the carrier can be taken as 0.

Solution
A    1    0    0    1    1    0    1    0    1
φ    π    π    π    0    π    π    0    0    π
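Assuming the encoder convention of Table 2 (an input "1" reverses the carrier phase, an input "0" leaves it unchanged), the phase sequence of Example 2 can be generated with the following sketch; the function is an assumed illustration only.

def diff_bpsk_phases(bits, start_phase="0"):
    # Return the carrier phase ("0" or "pi") transmitted for each input bit.
    m = 0 if start_phase == "0" else 1      # encoder state, i.e. current carrier phase
    phases = []
    for a in bits:
        m ^= int(a)                         # JK flip flop in toggle mode: toggle on "1"
        phases.append("0" if m == 0 else "pi")
    return phases

if __name__ == "__main__":
    print(diff_bpsk_phases("100110101"))
    # ['pi', 'pi', 'pi', '0', 'pi', 'pi', '0', '0', 'pi'], as in Example 2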

Figure 11 shows the demodulation scheme for the differential BPSK signal. The received
signal is delayed by one bit period and multiplied by the undelayed received signal. In
other words, the carrier phase states of the adjacent bits are multiplied. Adjacent phase
states may be in phase or out of phase. If they are in phase, the multiplier output is
positive and if they are out of phase, the multiplier output is negative.
sin²(2πfct) = sin²(2πfct + π) = ½ – ½ cos(4πfct)
sin(2πfct) sin(2πfct + π) = – ½ + ½ cos(4πfct)

Fig. 11 Differential BPSK demodulator

The low pass filter allows only the DC component to pass through. Thus polarity of
the signal at the filter output reflects the phase change. The comparator generates the
demodulated data signal.
The differential demodulator does not require phase coherent carrier for
demodulation. Also, note that there is no decoder corresponding to the encoder in the
modulator. If a phase-coherent demodulator is used in place of the differential
demodulator, a decoder will be required at the output of the demodulator.
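The delay-and-multiply operation can be mimicked at the level of phase states: a phase change between adjacent bits decodes as "1" and no change as "0". The sketch below is only an illustration, using the phase sequence of Example 2.

def diff_bpsk_decode(phases, start_phase="0"):
    # Recover the data bits from the sequence of received carrier phases.
    bits, previous = [], start_phase
    for current in phases:
        bits.append("1" if current != previous else "0")   # phase change -> "1"
        previous = current
    return "".join(bits)

if __name__ == "__main__":
    rx = ["pi", "pi", "pi", "0", "pi", "pi", "0", "0", "pi"]
    print(diff_bpsk_decode(rx))     # 100110101, the original data of Example 2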
Differential 4 PSK
Just like differential BPSK modulator, differential 4 PSK modulator can also be
implemented using an encoder before a 4 PSK modulator as shown in Fig. 12.


Input Dibits (A, B) → Encoder (outputs M, N) → Level Shifters → 4 PSK Modulator → Differential 4 PSK
Fig. 12 Differential 4 PSK modulator

The encoder logic is so designed that its outputs M and N modulate the carrier to
produce the required phase changes in the carrier. Table 3a shows the relation
between the input dibit AB and the phase changes of the modulated carrier. This
modulation scheme has been standardized in CCITT recommendation V.26. Table 3b
shows the relation between MN bits and the corresponding phase of the modulated
carrier. Table 3c gives encoder logic derived from Tables 3a and 3b. From Table 3c, it
can be shown that
M t= A . B + A . B . P + A . B. P

N t = A . B + A . B . P + A . B. P


EXAMPLE 3
The following bit stream is applied to the differential 4 PSK modulator described in
Table 3. Write the carrier phase states taking the initial carrier phase as reference.
1 0 1 1 1 1 0 0 0 1
Solution

Dibits                  1 0     1 1     1 1     0 0     0 1
Phase change Δφ         3π/2    π       π       0       π/2
Carrier phase φ         3π/2    π/2     3π/2    3π/2    0
16 Quadrature Amplitude Modulation (QAM)
We can generalize the concept of differential phase shift keying to M equally spaced
phase states. The bit rate then becomes n × (baud rate), where n is such that 2^n = M.
This is called M-ary PSK or simply MPSK. The phase states of the MPSK signal are
equidistant from the origin and are separated by 2π/M radians (Fig. 13). As M is
increased, the phase states come closer and result in degraded error rate performance
because of reduced phase detection margin. In practice, differential PSK is used up to
M = 8.
Fig. 13 Phase states of M-ary PSK.
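The relationship between bit rate, baud rate and the number of phase states is simple arithmetic; the short sketch below works it out for a few values of M at an assumed signalling rate of 1600 baud.

from math import log2

def mpsk_bit_rate(baud_rate, m):
    # Bit rate of an M-ary PSK signal: n x baud rate, where 2^n = M.
    n = log2(m)
    if not n.is_integer():
        raise ValueError("M must be a power of two")
    return n * baud_rate

if __name__ == "__main__":
    for m in (2, 4, 8, 16):
        print(m, mpsk_bit_rate(1600, m))    # 1600, 3200, 4800, 6400 bps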

Quadrature Amplitude Modulation (QAM) is one approach in which separation of the


phase states is increased by utilizing combination of amplitude and phase
modulations. Figure 14 shows the states of 16 QAM. There are sixteen states and each
state corresponds to a group of four bits. Unlike PSK, the states are not equidistant
from the origin, indicating the presence of amplitude modulation.
Note that each state can be represented as the sum of two carriers in quadrature. These
carriers can have four possible amplitudes, ±v1 and ±v2. Figure 15 shows a block
schematic of the modulator for 16 QAM. The odd numbered bits at the input are
combined in pairs to generate one of the four levels at the D/A output which


modulates the carrier. The even numbered bits are combined in a similar manner to
modulate the other π/2 phase shifted carrier. The modulated carriers are combined to
get the 16 QAM output.
It can be shown that 16 QAM gives better performance than does 16 PSK. Out of the
basic modulation methods PSK comes closest to Shannon‘s limit for bit rate which we
studied in Chapter 1. QAM displays further improvement over PSK.
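The grouping of input bits onto the two quadrature carriers can be illustrated with a small mapper. The ±1/±3 levels and the bit-pair-to-level table below are assumed purely for illustration; they are not taken from Fig. 14 or from any particular modem standard.

LEVEL = {"00": -3, "01": -1, "11": +1, "10": +3}    # assumed Gray-coded levels

def qam16_symbols(bits):
    # Map each group of four bits to the amplitudes of the two quadrature carriers.
    symbols = []
    for i in range(0, len(bits), 4):
        quad = bits[i:i+4]
        i_level = LEVEL[quad[0] + quad[2]]   # odd numbered bits -> in-phase carrier
        q_level = LEVEL[quad[1] + quad[3]]   # even numbered bits -> quadrature carrier
        symbols.append((i_level, q_level))
    return symbols

if __name__ == "__main__":
    print(qam16_symbols("1011000111100100"))
    # [(1, -1), (-3, -1), (1, 3), (-3, 3)] -- four 4-bit groups, four constellation points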
MODEM
The term "Modem" is derived from the words MOdulator and DEModulator. A
modem contains a modulator as well as a demodulator. The digital
modulation/demodulation schemes discussed above are implemented in the modems.
Most of the modems are designed for utilizing the analog voice band service offered
by the telecommunication network. Therefore, the modulated carrier generated by a
modem "fits" into the 300-3400 Hz bandwidth of the speech channel.


Fig. 14 Phase states of 16 quadrature amplitude modulation.

Fig. 15 16 QAM modulator.

A typical data connection set up using modems is shown in Fig. 16. The digital
terminal devices which exchange digital signals are called Data Terminal Equipment
(DTE). Two modems are always required, one at each end. The modem at the
transmitting end converts the digital signal from the DTE into an analog signal by
modulating a carrier. The modem at the receiving end demodulates the carrier and
hands over the demodulated digital signal to the DTE.
The transmission medium between the two modems can be a dedicated leased circuit
or a switched telephone circuit. In the latter case, modems are connected to the local
telephone exchanges. Whenever data transmission is required, connection between the
modems is established through the


DTE – Modem – Telephone Network – Modem – DTE

DTE : Data Terminal Equipment

Fig. 16 A data circuit implemented using modems.

telephone exchanges. Modems are also required within a building to connect
terminals which are located at distances usually more than 15 metres from the host.
Broadly, a modem comprises a transmitter, a receiver and two interfaces (Fig. 17).
The digital signal to be transmitted is applied to the transmitter. The modulated carrier
which is received from the distant end is applied to the receiver. The digital interface
connects the modem to the DTE which generates and receives the digital signals. The
line interface connects the modem to the transmission channel for transmitting and
receiving the modulated signals. Modems connected to telephone exchanges have
additional provision for connecting a telephone instrument. The telephone instrument
enables establishment of the telephone connection.

Fig. 17 Building blocks of a modem.

The transmitter and receiver in a modem comprise several signal processing circuits
which include a modulator in the transmitter and a demodulator in the receiver.
Types of Modems
Modems can be of several types and they can be categorized in a number of ways.
Categorization is usually based on the following basic modem features:
1. Directional capability – Half duplex modem and full duplex modem.
2. Connection to the line – 2-wire modem and 4-wire modem.
3. Transmission mode – Asynchronous modem and synchronous modem.
Half Duplex and Full Duplex Modems.
A half duplex modem permits transmission in one direction at a time. If a carrier is
detected on the line by the modem, it gives an indication of the incoming carrier to the
DTE through a control signal of its digital interface (Fig. 18a). So long as the carrier
is being received, the modem does not give clearance to the DTE to transmit.


A full duplex modem allows simultaneous transmission in both directions. Thus,
there are two carriers on the line, one outgoing and the other incoming (Fig. 18b).

Fig. 18

2W – 4W Modems.
The line interface of the modem can have a 2-wire or a 4-wire connection to the
transmission medium. In a 4-wire connection, one pair of wires is used for the
outgoing carrier and the other is used for the incoming carrier (Fig. 19). Full duplex
and half duplex modes of data transmission are possible on a 4-wire connection. As
the physical transmission path for each direction is separate, the same carrier
frequency can be used for both the directions.

Fig. 19 4-wire modem.

A leased 2-wire connection is cheaper than a 4-wire connection because only one pair
of wires is extended to the subscriber‘s premises. The data connection established
through telephone exchanges is also a 2-wire connection. For the 2-wire connection,
modems with a 2-wire line interface are required. Such modems use the same pair of
wires for outgoing and incoming carriers. Half duplex mode of transmission using the
same frequency for the incoming and outgoing carriers can be easily implemented
(Fig. 20a). The transmit and receive carrier frequencies can be the same because only
one of them is present on the line at a time.


For full duplex mode of operation on a 2-wire connection, it is necessary to have two
transmission channels, one for the transmit direction and the other for the receive
direction (Fig. 20b). This is achieved by frequency division multiplexing of two
different carrier frequencies. These carriers are placed within the bandwidth of the

speech channel (Fig. 20c). A modem transmits data on one carrier and receives data from the other end on the other carrier.


Fig. 20 2-wire modems.

A hybrid
is provided in the 2-wire modem to couple the line to its modulator and demodulator
(Fig. 21).

Fig. 21 Line interconnection in a 2-wire full duplex modem.

Note that the available bandwidth for each carrier is reduced to half. Therefore, the baud
rate is also reduced to half. There is a special technique which allows simultaneous
transmission of incoming and outgoing carriers having the same frequency on the 2-
wire transmission medium. Full bandwidth of the speech channel is available to both
the carriers simultaneously. This technique is called echo cancellation technique and
is implemented in high speed 2-wire full duplex modems.


Asynchronous and Synchronous Modems.


Modems for asynchronous and synchronous transmission are of different types. An
asynchronous modem can only handle data bytes with start and stop bits. There is no
separate timing signal or clock between the modem and the DTE (Fig. 22a). The
internal timing pulses are synchronized repeatedly to the leading edge of the start
pulse.
A synchronous modem can handle a continuous stream of data bits but requires a
clock signal (Fig. 22b). The data bits are always synchronized to the clock signal.
There are separate clocks for the data bits being transmitted and received.
For synchronous transmission of data bits, the DTE can use its internal clock and
supply the same to the modem. Else, it can take the clock from the modem and send
data bits on each occurrence of the clock pulse. At the receiving end, the modem
recovers the clock signal from the received data signal and supplies it to the DTE. It
is, however, necessary that the received data signal contains enough transitions to
ensure that the timing extraction circuit remains in synchronization. High speed
modems are provided with scramblers and descramblers for this purpose.

Fig. 22 Asynchronous (a) and synchronous (b) modems.
Scrambler and descrambler
As mentioned above, it is essential to have sufficient transitions in the transmitted
data for clock extraction. A scrambler is provided in the transmitter to ensure this. It
uses an algorithm to change the data stream received from the terminal in a controlled
way so that a continuous stream of zeros or ones is avoided. The scrambled data is
descrambled at the receiving end using a complementary algorithm.
There is another reason for using scramblers. It is often seen in data communications that computers transmit "idle" characters for relatively long periods of time and then there is a sudden burst of data. The effect is seen as repeated errors at the beginning of the data. The reason for these errors is the sensitivity of the receiver clock phase to certain data patterns. If the transmission line has a poor group delay characteristic in some part of the spectrum and the repeated data pattern concentrates the spectral energy in that part of the spectrum, the recovered clock phase can be offset from its mean position. A drifted clock phase results in errors when the data bits are regenerated.


This problem can be overcome by properly equalizing the transmission line but the
long term solution is to always randomize the data before it is transmitted so that
pattern sensitivity of the clock phase is avoided. The scramblers randomize the data
and thus avoid the errors due to pattern sensitivity of the clock phase.
The scrambler at the transmitter consists of a shift register with feedback loops and exclusive-OR gates. Figure 23 shows the scrambler used in the V.27 4800 bps modem.

Fig. 23 Scrambler used in CCITT V.27 modem.

For the ith pulse, the output c_i can be obtained as

b_i = c_(i-6) + c_(i-7)
c_i = a_i + b_i = a_i + c_(i-6) + c_(i-7)

If we represent a one-bit delay using the delay operator x^-1, the above equation can be rewritten as follows:

c_i = a_i + c_i (x^-6 + x^-7)
c_i (1 + x^-6 + x^-7) = a_i
c_i = a_i / (1 + x^-6 + x^-7)

Note that in modulo-2 arithmetic, addition and subtraction operations are the same. Thus, a scrambler effectively divides the input data stream by the polynomial 1 + x^-6 + x^-7. This polynomial is called the generating polynomial. By proper choice of the
polynomial, it can be assured that undesirable bit sequences are avoided at the output.
The generating polynomials recommended by CCITT for scramblers are given in
Table 4.


Table 4 CCITT Generating Polynomials

CCITT recommendation            Generating polynomial
V.22, V.22 bis                  1 + x^-14 + x^-17
V.27                            1 + x^-6 + x^-7
V.29, V.32, V.26ter             1 + x^-18 + x^-23
V.32, V.26ter (answer mode)     1 + x^-5 + x^-23

To get back the data sequence at the receiving end, the scrambled data stream is multiplied by the same generating polynomial. The descrambler is shown in Fig. 24.

b_i = c_(i-6) + c_(i-7)
a'_i = c_i + b_i = c_i + c_(i-6) + c_(i-7) = c_i (1 + x^-6 + x^-7) = a_i

In the above analysis, we have assumed that there was no transmission error. If an error occurs in the scrambled data, it is reflected in three data bits after descrambling.


Fig. 24 Descrambler used in CCITT V.27 modem.

In the expression for the descrambler output, note that if one of the scrambled bits c_i is received in error, a'_i, a'_(i+6) and a'_(i+7) will be affected as c_i moves along the shift register. Therefore, scramblers result in an increased error rate, but their usefulness outweighs this limitation.
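The scrambling and descrambling operations described above can be checked with a short program. The sketch below (our illustration, not part of any CCITT recommendation) implements the self-synchronizing scrambler and descrambler for the V.27 generating polynomial 1 + x^-6 + x^-7 in Python; it also shows that a single error in the scrambled stream corrupts three descrambled bits, as noted above. All function names are illustrative.

def scramble(bits):
    # c_i = a_i + c_(i-6) + c_(i-7) (modulo-2); the register holds c_(i-1)..c_(i-7)
    reg = [0] * 7
    out = []
    for a in bits:
        c = a ^ reg[5] ^ reg[6]
        out.append(c)
        reg = [c] + reg[:-1]
    return out

def descramble(bits):
    # a_i = c_i + c_(i-6) + c_(i-7) (modulo-2); the register holds the received bits
    reg = [0] * 7
    out = []
    for c in bits:
        out.append(c ^ reg[5] ^ reg[6])
        reg = [c] + reg[:-1]
    return out

data = [1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1] * 5
tx = scramble(data)
assert descramble(tx) == data          # error-free case: the data is recovered exactly
rx = tx.copy()
rx[20] ^= 1                            # introduce one transmission error
errors = sum(a != b for a, b in zip(data, descramble(rx)))
print(errors)                          # prints 3: the error affects a'_i, a'_(i+6) and a'_(i+7)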
Block Schematic of a Modem
With this background, we can now describe the detailed block schematic of a modem.
The modem design and complexity vary depending on the bit rate, type of modulation
and other basic features as discussed above. Low speed modems up to 1200 bps are asynchronous and use FSK. Medium speed modems from 2400 to 4800 bps use differential PSK. High speed modems which operate at 9600 bps and above employ QAM and are the most complex. Medium and high speed modems operate in the synchronous mode of transmission.
Figure 25 shows the important components of a typical synchronous differential PSK modem. It must, however, be borne in mind that this design gives the general functional picture of the modem. Actual implementation will vary from vendor to vendor.
Digital Interface.
The digital interface connects the internal circuits of the modem to the DTE. On the DTE side, it consists of several wires carrying different signals. These signals are either from the DTE or from the modem. The digital interface contains drivers and receivers for these signals. A brief description of some of the important signals is given below.
1. Transmitted Data (TD) signal from the DTE to the modem carries data to be
transmitted.
2. Received Data (RD) signal from the modem carries the data received from the
other end.
3. DTE Ready (DTR) signal from the DTE indicates readiness of the DTE to transmit and receive data.


4. Data Set Ready (DSR) signal from the modem indicates its readiness to transmit and receive data signals.
5. Request to Send (RTS) signal from the DTE seeks permission of the modem
to transmit data.
6. Clear to Send (CTS) signal from the modem gives clearance to the DTE to transmit its data. CTS is given in response to the RTS.
7. Received line signal detector signal from the modem indicates that the
incoming carrier has been detected on the line interface.
8. Timing signals are the clock signals from the DTE to the modem and from the
modem to the DTE for synchronous transmission.


The digital interface has been standardized so that there are no compatibility problems.
There are several standards, but the most common standard digital interface is
EIA232D. There are equivalent CCITT recommendations also.
Scrambler.
A scrambler is incorporated in the modems which operate at data rates of 4800 bps
and above. The data stream received from the DTE at the digital interface is applied to
the scrambler. The scrambler divides the data stream by the generating polynomial
and its output is applied to the encoder.
Encoder.
An encoder consists of a serial to parallel converter for grouping the serial data bits
received from the scrambler, e.g., in a modem employing 4 PSK, dibits are formed.
The data bit groups are then encoded for differential PSK.
Modulator.
A modulator changes the carrier phase as per the output of the encoder. A pulse
shaping filter precedes the modulator to reduce the intersymbol interference. Raised
cosine pulse shape is usually used. The modulator output is passed through a band pass
filter to restrict the bandwidth of the modulated carrier within the specified frequency
band.
Compromise Equalizer.
It is a fixed equalizer which provides pre-equalization of the anticipated gain and
delay characteristics of the line.
Line Amplifier.
The line amplifier is provided to bring the carrier level to the desired transmission
level. Output of the line amplifier is coupled to the line through the line interface.
Transmitter Timing Source.
Synchronous modems have an in-built crystal clock source which generates all the
timing references required for the operation of the encoder and the modulator. The
clock is also supplied to the DTE through the digital interface. The modem has
provision to accept the external clock supplied by the DTE.
Transmitter Control.
This circuit controls the carrier transmitted by the modem. When the RTS is received
from the DTE, it switches on the outgoing carrier and sends it on the line. After a
brief delay, it generates the CTS signal for the DTE so that it may start transmitting
data. In half duplex modems CTS is not given if the modem is receiving a carrier.
Training Sequence Generator.
For reception of the data signals through the modems, it is necessary that the
following operational conditions are established in the receiver portion of the modems
beforehand:


1. The demodulator carrier is detected and recovered. Gain of the AGC amplifier
is adjusted and absolute phase reference of the recovered carrier is established
2. The adaptive equalizer is conditioned for the line characteristics.
3. The receiver timing clock is synchronized.
4. The descrambler is synchronized to the scrambler.
These functions are carried out by sending a training sequence. It is transmitted by a
modem when it receives the RTS signal from the DTE. On receipt of RTS from the
DTE, the modem transmits a carrier modulated with the training sequence of fixed
length and then gives the CTS signal to the DTE so that it may commence
transmission of its data. From the training sequence, the modem at the receiving end
recovers the carrier, establishes its absolute phase reference, conditions its adaptive
equalizer and synchronizes its clock and descrambler. The composition of the
training sequence depends on the type of the modem. We will examine some of the
training sequences while discussing the modem standards later.
Line Interface.
The line interface provides connection to the transmission facilities through coupling
transformers. The coupling transformers isolate the line for DC signals. The
transmission facilities provide a two-wire or four-wire connection between the two
modems. For a four-wire connection, there are separate transformers for the transmit
and receive directions. For a 2-wire connection, the line interface is equipped with a
hybrid.
Receive Band Limiting Filter.
In the receive direction, the band limiting filter selects the received carrier from the
signals present on the line. It also removes the out-of-band noise.
AGC Amplifier.
Automatic Gain Control (AGC) amplifier provides variable gain to compensate for
carrier-level loss during transmission. The gain depends on the received carrier level.
Equalizer.
The equalizer section of the receiver corrects the attenuation and group delay
distortion introduced by the transmission medium and the band limiting filters. Fixed,
manually adjustable or adaptive equalizers are provided depending on speed, line
condition and the application. In high speed dial up modems, an adaptive equalizer is
provided because characteristics of the transmission medium change on each instance
of call establishment.
Carrier Recovery Circuit.
The carrier is recovered from the AGC amplifier output by this circuit. The recovered
carrier is supplied to the demodulator. An indication of the incoming carrier is given
at the digital interface.
Demodulator.
The demodulator recovers the digital signal from the received modulated carrier. The
carrier required for demodulation is supplied by the carrier recovery circuit.


Clock Extraction Circuit.


The clock extraction circuit recovers the clock from the received digital signal. The
clock is used for regenerating the digital signal and to provide the timing information
to the decoder. The receiver clock is also made available to the DTE through the
digital interface.
Decoder.
The decoder performs a function complementary to the encoder. The demodulated
data bits are converted into groups of data bits which are serialized by using a parallel
to serial converter.
Descrambler.
The decoder output is applied to the descrambler which multiplies the decoder output
by the generating polynomial. The unscrambled data is given to the DTE through the
digital interface.


Additional Modem Features


As mentioned above, modems vary in design and complexity depending on speed,
mode of transmission, modulation methods and their application. The driving force
for the developments in modems has been the high cost of the transmission medium.
By more efficient utilization of the available bandwidth and increasing the effective
throughput, the high cost of transmission can be neutralized. Echo cancellers and
secondary channel are the two additional features of modems in this direction. For
ease of operation, modems are also equipped with test loops. We will take a brief look
at these features of modems also.
Echo Canceller.
Full duplex transmission of data on 2-wire leased or dial up connection is
implemented by dividing the available frequency band for the two carriers. This
effectively reduces the available bandwidth for each carrier to half and limits the data
speed to about 2400 to 4800 bps. Echo cancellation makes it possible to use the same
carrier frequency and the entire frequency band for both the carriers simultaneously.
Transmit and receive carrier frequencies being the same, it becomes essential for the
transmitted carrier not to appear at the local receiver input. The line-coupling hybrid
gives about 15 dB loss across the opposite ports. Thus the transmitted carrier with 15
dB loss appears at the receiver input of the modem. This signal is referred to as near-
end echo (Fig. 26). It has high amplitude and very short delay.

Fig. 26 Echoes present in a 2-wire full duplex modem.

There is another type of echo which is called the far-end echo. Far-end echo is caused
by the hybrids present in the interconnecting telecommunication link. It is
characterized by low amplitude but long delay. For terrestrial connections, the delay
can be of the order of 40 ms and for satellite based connections, it is of the order of
half a second.
The echo, being at the same carrier frequency as the received carrier, interferes with
the demodulation process and needs to be removed. For this purpose, an echo
canceller is built into the high-speed modems. It generates a copy of the echo from
the transmitted carrier and subtracts it from received signals (Fig. 27).


The echo canceller circuit consists of a tapped-delay line with a set of coefficients
which are adjusted to get the minimum echo at the receiver input. This adjustment is
carried out when the training sequence is being transmitted.
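The text above only states that the tapped-delay-line coefficients are adjusted for minimum residual echo during the training sequence; it does not name an adaptation algorithm. The following sketch uses the common LMS (least mean squares) update purely as an illustration of the idea; the tap count and step size are arbitrary assumed values and the names are ours.

def cancel_echo(tx, rx, taps=32, mu=0.01):
    # Tapped-delay-line echo canceller: estimate the echo of the transmitted
    # samples tx and subtract the estimate from the received samples rx.
    w = [0.0] * taps          # tap coefficients, adapted towards minimum residual echo
    delay = [0.0] * taps      # most recent transmitted samples
    residual = []
    for t, r in zip(tx, rx):
        delay = [t] + delay[:-1]
        echo_estimate = sum(wi * xi for wi, xi in zip(w, delay))
        e = r - echo_estimate                                   # what remains after cancellation
        residual.append(e)
        w = [wi + mu * e * xi for wi, xi in zip(w, delay)]      # LMS coefficient update
    return residual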
Secondary Channel.
We have seen that a DTE needs to exchange RTS/CTS signals with the modem
before it transmits data. On receipt of the RTS signal, the modem gives the CTS after
a certain delay. During this period, it transmits the training sequence so that the
modem at the other end may detect the carrier, extract the clock, synchronize the

descrambler and condition the equalizers.


Fig. 27 Echo canceller.

If the mode of operation is half duplex, each reversal of the direction of transmission
involves RTS-CTS delay and thus, reduces the effective throughput. In most of the
data communication situations, the receiver sends short acknowledgements for every
received data frame and for transmitting these acknowledgements the direction of
transmission must be reversed. To avoid frequent reversal of direction of
transmission, a low speed secondary channel is provided in the modems (Fig. 28).

Fig. 28 Secondary channel.


The secondary channel operates at 75 bps and uses FSK. The secondary channel has
its own RTS, CTS and other control signals which are available at the digital interface
of the modem. It should be noted that the main channel is used in half duplex mode
for data transmission and the DTEs are configured to send the acknowledgements on
the secondary channel.
Test Loops.
Modems are provided with the capability for locating faults in the digital connection
from DTE to DTE. The testing procedure involves sending test data and looping it back at various stages of the connection. The test pattern can be generated by the modem internally or it can be applied externally using a modem tester. The common test configurations are shown in Fig. 29.
1. Loop 1: Digital loopback. This loop is set up as close as possible to the digital
interface.
2. Loop 2: Remote digital loopback. This loop checks the line and the remote
modem. It can be used only in full duplex modems.
3. Loop 3: Local analog loopback. The modulated carrier at the transmitter
output of the local modem is looped back to the receiver input. The loopback
may require some attenuators to adjust the level.
4. Loop 4: Remote analog loopback. This loop arrangement is applicable for 4-
wire line connections only. The two pairs at the distant end are disconnected
from the modem and connected to each other.
5. Loop 5: Local digital loopback and loopforward. In this case, the local digital
loopback is provided for the local modem and remote digital loopback is
provided for the remote modem.
6. Loop 6: Local analog loopback and loopforward. In this case, the local modem
has analog loopback and the remote modem has remote analog loopback.
The test configurations can be set up by pressing the appropriate switches provided on
the modems. The digital interface also provides some control signals for activating the
loop tests. When in the test mode, the modem indicates its test status to the local DTE
through a control signal in the digital interface.
All modems do not have provision for all these tests. Test features are specific to the
modem type. Test loops 1 to 4 have been standardized by CCITT in their
Recommendation V.54.


Fig. 29 Test loops in modems.


STANDARD MODEMS
It is essential that modems conform to international standards because similar
modems supplied by different vendors must work with each other. CCITT has drawn
up modem standards which are internationally accepted. We will discuss the main
features of the CCITT modems. The reader is urged to refer to the CCITT
recommendations for detailed description of these modems.
CCITT V.21 Modem
This modem is designed to provide full duplex asynchronous transmission over the 2-
wire leased line or switched telephone network. It operates at 300 bps.
Modulation.
It utilises FSK over the following two channels:

1. Transmit channel frequencies (originating modem)


Space 1180Hz, Mark 980 Hz.
2. Receive channel Frequencies (originating modem)
Space 1850Hz, Mark 1650Hz.

The channel selection for the transmit and receive directions can be done through the
digital interface by switching on the appropriate control circuit.
CCITT V.22 Modem
This modem provides full duplex synchronous transmission over 2-wire leased line or
switched telephone network. It transmits data at 1200 bps. As an option, it can also
operate at 600 bps.
Scrambler.
A scrambler and a descrambler having the generating polynomial 1 + x^-14 + x^-17 are
provided in the modem.
Modulation.
Differential 4 PSK over two channels is utilised in this modem. The dibits are encoded
as phase changes as given in Table 5. The carrier frequencies are


Low channel 1200 Hz


High channel 2400 Hz

Table 5 Modulation Scheme of CCITT V.22 Modem

Dibit (A B)    Phase change
0 0            π/2
0 1            0
1 1            3π/2
1 0            π

At 600 bps, the carrier phase changes are 3π/2 and π/2 for binary "1" and "0" respectively.
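To see how the encoder and modulator work together, the following sketch (our illustration, not taken from the recommendation) applies the dibit-to-phase-change mapping of Table 5 and accumulates the phase changes into absolute carrier phases, one per symbol interval:

import math

PHASE_CHANGE = {            # Table 5: dibit (A, B) -> phase change in radians
    (0, 0): math.pi / 2,
    (0, 1): 0.0,
    (1, 1): 3 * math.pi / 2,
    (1, 0): math.pi,
}

def encode_v22(bits, start_phase=0.0):
    # Group the scrambled bit stream into dibits; each dibit advances the
    # carrier phase by the amount given in Table 5 (differential encoding).
    phase, phases = start_phase, []
    for i in range(0, len(bits) - 1, 2):
        phase = (phase + PHASE_CHANGE[(bits[i], bits[i + 1])]) % (2 * math.pi)
        phases.append(phase)
    return phases

print(encode_v22([0, 0, 0, 1, 1, 1, 1, 0]))   # four symbol phases for eight input bits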
Equalizer.
A fixed compromise equalizer, shared equally between the transmitter and receiver, is provided in the modem.
Test Loops.
Test loops 2 and 3 as defined in Recommendation V.54 are provided in the modem.
For self-test, an internally generated binary pattern of alternating "0"s and "1"s is
applied to the scrambler. At the output of the descrambler, an error detector identifies
the errors and gives visual indication.
CCITT V.22bis Modem
This modem provides full duplex synchronous transmission on a 2-wire leased line or
switched telephone network. The bit rates supported are 2400 or 1200 bps at the
modulation rate of 600 bauds.
Scrambler.
The modem incorporates a scrambler and a descrambler having the generating polynomial 1 + x^-14 + x^-17.
Modulation.
At 2400 bps, the modem uses 16 QAM having a constellation as shown in Fig. 30.
From the scrambled data stream, quadbits are formed. The first two bits of each quadbit are coded as a quadrant change as given in Table 6. The last two bits of the
quadbits determine the phase within a quadrant as shown in Fig. 30.


Table 6 Quadrant Changes Determined by the First Two Bits of Quadbits (CCITT V.22bis Modem)

Last quadrant    First two bits of quadbit: 00    01    11    10
1                                           2     1     4     3
2                                           3     2     1     4
3                                           4     3     2     1
4                                           1     4     3     2
(The table entries give the next quadrant.)

Fig. 30 Phase states of CCITT V.22 bis 16 QAM modem.

At 1200 bps, dibits are formed from the scrambled data stream and coded as quadrant changes as shown above. In each quadrant, the phase state corresponding to "0" is transmitted.
The following two carriers are used for the transmit and receive directions; the calling modem uses the low channel to transmit data.

Low channel carrier 1200Hz


High channel carrier 1800Hz


Equalizer.
A fixed compromise equalizer is provided in the modem transmitter. The modem
receiver is equipped with an adaptive equalizer.
Test Loops.
Test loops 2 and 3 as defined in Recommendation V.54 are provided in the modem.
For self-test, an internally generated binary pattern of alternating ―0‖s and ―1‖s is
applied to the scrambler. At the output of the descrambler, an error detector identifies
the errors and gives visual indication.
CCITT V.23 Modem
The modem is designed to operate in full duplex asynchronous transmission mode
over a 4-wire leased line. It can also operate in half duplex over a 2-wire leased line
and switched telephone network.
The modem can operate at two speeds – 600 bps and 1200 bps. It is equipped with the
secondary channel which operates at 75 bps.
Modulation.
The modem employs FSK over two channels. The frequencies are:

Transmit frequencies (originating modem)


Space 1180 Hz, Mark 980 Hz

Receive frequencies (originating modem)


Space 1850 Hz, Mark 1650 Hz

Secondary channel frequencies


Space 450 Hz, Mark 390 Hz

CCITT V.26 Modem


This modem operates in full duplex synchronous mode of transmission on a 4-wire
leased connection. It operates at 2400 bps. It also includes a secondary channel having
a bit rate of 75 bps.
Modulation.
Differential 4 PSK is employed to transmit data at 2400 bps. The carrier frequency is 1800 Hz. The modulation scheme has two alternatives, A and B (Table 7). The secondary channel frequencies are the same as in V.23.


Table 7 Modulation Scheme of CCITT V.26 Modem

Dibit    Alternative A    Alternative B
00       0                π/4
01       π/2              3π/4
11       π                5π/4
10       3π/2             7π/4

CCITT V.26 bis Modem


It is a half duplex synchronous modem for use in the switched telephone network. It
operates at a nominal speed of 2400 bps or at a reduced speed of 1200 bps. It includes a
secondary channel which operates at the speed of 75 bps.
Modulation.
The modem uses differential 4 PSK for transmission at 2400 bps. The modulation scheme is the same as for V.26, alternative B. At 1200 bps, the modem uses differential BPSK with phase changes π/2 and 3π/2 for binary "0" and "1" respectively. The frequencies of the secondary channel are the same as in V.23.
Equalizer.
A fixed compromise equalizer is provided in the receiver.
CCITT V.26ter Modem
It is a full duplex synchronous modem for use in 2-wire leased line or switched
telephone network. It uses an echo cancellation technique for channel separation. As
an option, the modem can accept asynchronous data from the DTE. If the asynchronous option is used, the modem converts the asynchronous data suitably for synchronous transmission. The modem operates at a nominal speed of 2400 bps with fall-back to 1200 bps.
Modulation.
The modem uses differential 4 PSK for transmission at 2400 bps. The carrier frequency is 1800 Hz in both directions. The modulation scheme is the same as for V.26, alternative A. At 1200 bps, differential BPSK is used. The phase changes corresponding to binary "0" and "1" are 0 and π radians respectively.
Equalizer.
A fixed compromise equalizer or an adaptive equalizer is provided in the receiver. No
training sequence is provided for convergence of the adaptive equalizer.


Scrambler.
The modem incorporates a scrambler and a descrambler. The generating polynomial for the call-originating modem is 1 + x^-18 + x^-23. The generating polynomial of the answering modem for transmission of its data is 1 + x^-5 + x^-23.
Test Loops.
Test loops 2 and 3 as defined in Recommendation V.54 are provided in the modem.
CCITT V.27 MODEM
This modem is designed for full duplex/half duplex synchronous transmission over a
4-wire or 2-wire leased connection which is specially conditioned as per M.1020. It
operates at the bit rate of 4800 bps with modulation rate of 1600 baud. It includes a
secondary channel which operates at 75 bps.
Scrambler.
The modem incorporates a scrambler and a descrambler having the generating polynomial 1 + x^-6 + x^-7.
Modulation.
The modem uses differential 8 PSK for transmission at 4800 bps. The modulation
scheme is given in Table 8. The carrier frequency is 1800 Hz. The secondary channel
is the same as in V.23.

Table 8 Modulation Scheme of CCITT V.27 Modem

Tribit values    Phase change
001              0
000              π/4
010              π/2
011              3π/4
111              π
110              5π/4
100              3π/2
101              7π/4

Equalizer.
A manually adjustable equalizer is provided in the receiver. The transmitter has
provision to send scrambled continuous binary ―1‖s for the equalizer adjustment. The
modem has means for indicating correct adjustment of the equalizer.


CCITT V.27 bis modem


This modem is designed for full duplex/half duplex synchronous transmission over 4-
wire/2-wire leased connection not necessarily conditioned as per M.1020. Its speed,
modulation scheme and other features are the same as in V.27. The principal
differences are given below:
1. It can operate at a reduced rate of 2400 bps. At 2400 bps, the modem uses differential 4 PSK. The modulation scheme is the same as in V.26, alternative
A.
2. An automatic adaptive equalizer is provided in the receiver.
3. A training sequence generator is incorporated in the transmitter.
The training sequence used in the V.27 bis modem is shown in Table 9. It comprises three segments whose durations are expressed in terms of Symbol Intervals (SI). One SI is equal to 1/(baud rate). The figures shown within brackets are for the 2-wire connection and for the 4-wire connection worse than M.1020 conditioning.

Table 9 Training Sequence of CCITT V.27 bis Modem

                       Segment 1            Segment 2           Segment 3
Duration (SI)          14 (58)              58 (1074)           8
Type of line signal    Continuous 180°      Differential        Differential
                       phase reversals      BPSK carrier        8/4 PSK carrier

The first segment consists of continuous phase reversals of the carrier. It enables AGC convergence and carrier recovery. During the second segment, the adaptive equalizer is conditioned. A differential BPSK carrier is transmitted during this interval. The modulating sequence is generated from every third bit of a PRBS having the generating polynomial 1 + x^-6 + x^-7. The phase changes in the carrier are 0 and π radians for binary "0" and "1" respectively. The third segment of the training sequence synchronizes the descrambler. It consists of scrambled binary "1"s.
CCITT V.27ter Modem
This modem is designed for use in the switched telephone network. It is similar to the V.27 bis modem in most respects. It incorporates additional circuits for auto answering, ring indication, etc.
CCITT V.29 Modem
This modem is designed for point-to-point full duplex/half duplex synchronous operation on 4-wire leased circuits conditioned as per M.1020 or M.1025. It operates at a nominal speed of 9600 bps. The fall-back speeds are 7200 and 4800 bps.


Scrambler.
The modem incorporates a scrambler and a descrambler having the generating polynomial 1 + x^-18 + x^-23.
Modulation.
The modem employs 16 state QAM with modulation rate of 2400 baud. The carrier
frequency is 1700 Hz. The scrambled data at 9600 bps is divided into quadbits. The
last three bits are coded to generate differential eight-phase modulation identical to
Recommendation V.27. The first bit along with the absolute phase of the carrier
determines its amplitude (Fig. 31). The absolute phase is established during
transmission of the training sequence.

Fig. 31 Phase states of CCITT V.29 16 QAM modem at 9600 bps.

At the fallback rate of 7200 bps, tribits are formed from the scrambled 7200 bps bit
stream. Each tribit is prefixed with a zero to make the quadbit. At the fallback rate
of 4800 bps, dibits are formed from the scrambled 4800 bps bit stream. These dibits
constitute the second and third bits of the quadbits. The first bit of the quadbits is zero
as before and the fourth bit is modulo 2 sum of the second and third bits. The phase
state diagrams for the modem operation at 7200 and 4800 bps are shown in Fig.32a
and Fig. 32b respectively.


Fig. 32 Phase states of CCITT V.29 modem.

Equalizer.
An adaptive equalizer is provided in the receiver.
Training Sequence.
The training sequence is shown in Table 10. It consists of four segments which
provide for clock synchronization, establishment of absolute phase reference for the
carrier, equalizer conditioning and descrambler synchronization.

Table 10 Training Sequence of CCITT V.29 Modem

Segment    Signal type                        Duration (symbol intervals)
1          No transmitted energy              48
2          Alternations                       128
3          Equalizer conditioning pattern     384
4          Scrambled binary 1s                48

The second segment consists of two alternating signal elements A and B (Fig. 31).
This sequence establishes absolute phase of the carrier.
The third segment consists of the equalizer conditioning signal which consists of
elements C and D (Fig. 31). Whether C or D is to be transmitted is decided by a
pseudo-random binary sequence at 2400 bps generated using the generating
polynomial 1 + x^-6 + x^-7. The element C is transmitted when a "0" occurs in the
sequence. The element D is transmitted when a ―1‖ occurs in the sequence.
The fourth segment consists of a continuous stream of binary "1"s which is scrambled
and transmitted. During this period descrambler synchronization is achieved.
CCITT V.32 Modem
This modem is designed for full duplex synchronous transmission on 2-wire leased
line or switched telephone network. It can operate at 9600 and 4800 bps. The
modulation rate is 2400 bauds.
Scrambler.
The modem incorporates a scrambler and a descrambler. The generating polynomial for the call-originating modem is 1 + x^-18 + x^-23. The generating polynomial of the answering modem for transmission of its data is 1 + x^-5 + x^-23.


Modulation.
The carrier frequency is 1800 Hz in both directions of transmission. Echo cancellation
technique is employed to separate the two channels. 16 or 32 state QAM is employed
for converting the digital information into the analog signal. There are two
alternatives for encoding the 9600 bps scrambled digital signal.
Nonredundant Coding.
The scrambled digital signal is divided into quadbits. The first two bits of each quadbit, Q1n and Q2n, are differentially encoded into Y1n and Y2n respectively as per Table 11. Y1(n-1) and Y2(n-1) are the previous values of the Y bits. The last two bits are taken without any change and the encoded quadbit Y1n Y2n Q3n Q4n is mapped as shown in Fig. 33.

Fig. 33 Phase states of CCITT V.32 modem at 9600 bps when non-redundant coding
is used.
At 4800 bps, the scrambled data stream is grouped into dibits which are differentially
encoded as per Table 11 and mapped on a subset ABCD of the phasor states (Fig.
33).
Trellis Coding. Trellis coding enables detection and correction of errors which are introduced in the transmission medium. We will study the principles of error control using trellis coding in the next chapter. Here, suffice it to say that some additional bits are added to a group of data bits for detecting and correcting errors. There are several coding algorithms for error control and trellis coding is one of them. It is implemented using convolution encoders.


Table 11 Differential Encoding Scheme of the First Two Bits of Quadbits (CCITT V.32 Modem)

Y1Y2(n-1)    Q1nQ2n = 00    Q1nQ2n = 01    Q1nQ2n = 10    Q1nQ2n = 11
             Y1nY2n         Y1nY2n         Y1nY2n         Y1nY2n
00           01             00             11             10
01           11             01             10             00
10           00             10             01             11
11           10             11             00             01

In trellis coded V.32 modem, quadbits formed from the scrambled data stream are
converted into groups of five bits using a convolution encoder. The coding scheme is
as under:
1. The first two bits Q1n and Q2n of the quadbit are differentially encoded into Y1n
and Y2n as given in Table 12.
2. From Y1n and Y2n, Y0n is generated using the convolution encoder.
3. Y0n, Y1n and Y2n form the first three bits of the five-bit code. The last two bits of the code are the Q3n and Q4n bits of the quadbit.

Table 12 Differential Encoding Scheme of the First Two Bits of Quadbits (CCITT V.32 Trellis Coded Modem)

Y1Y2(n-1)    Q1nQ2n = 00    Q1nQ2n = 01    Q1nQ2n = 10    Q1nQ2n = 11
             Y1nY2n         Y1nY2n         Y1nY2n         Y1nY2n
00           00             01             10             11
01           01             00             11             10
10           10             11             01             00
11           11             10             00             01

The phase state diagram of the V.32 trellis coded modem is shown in Fig. 34.


Equalizer.
An adaptive equalizer is provided in the receiver.
Training Sequence.
A training sequence is provided in the modem for adaptive equalization, echo cancellation, data rate selection, and for the other functions described earlier. It consists of the following five segments:
1. Alternations between states A and B (Fig. 34) for 256 symbol intervals.
2. Alternations between states C and D (Fig. 34) for 16 symbol intervals.
3. Equalizer and echo canceller conditioning signal of 1280 symbol intervals.
4. Data rate indicating sequence which is delimited by a rate signal ending sequence of eight symbol intervals.
5. Sequence of scrambled binary "1"s of 128 symbol intervals.

Fig. 34 Phase states of CCITT V.32 modem at 9600 bps when trellis coding is used.
Test Loops.
Test loops 2 and 3 as defined in Recommendation V.54 are provided in the modem.
CCITT V.33 Modem
This modem is designed for full duplex synchronous transmission on 4-wire leased
connections conditioned as per M.1020 or M.1025. It operates at 14,400 bps with
modulation rate of 2400 bauds. The fallback speed is 12,000 bps.
Scrambler.
The modem incorporates a scrambler and a descrambler. The generating polynomial for the call-originating modem is 1 + x^-18 + x^-23.


Modulation.
The carrier frequency is 1800 Hz in both directions of transmission. 128 state QAM
using trellis coding is employed for converting the digital information into an analog
signal. The scrambled data bits are divided into groups of six bits. The first two bits of
each six-bit group are encoded into three bits using the differential encoder followed
by a convolution encoder as described in V.32. Seven-bit code words are thus formed and these codes are mapped on the 128 state phase diagram as shown in Fig. 35.
Fig. 35 Phase states of CCITT V.33 modem at 14400 bps.

At the fallback speed of 12,000 bps, five-bit groups are formed and the first two bits
of each group are coded into three bits using the same scheme as above.


The six-bit codes so generated are mapped as shown in Fig. 36.

Fig. 36 Phase states of CCITT V.33 modem at 12000 bps.

Equalizer.
An adaptive equalizer is provided in the receiver.
Training Sequence.
The training sequence given in Table 13 is provided in the modem for adaptive
equalization, data rate selection and the other functions described earlier.

Table 13 Training Sequence of CCITT V.33 Modem

Segment    Signal type                        Duration (symbol intervals)
1          Alternations ABABA                 256
2          Equalizer conditioning pattern     2979
3          Rate sequence                      64
4          Scrambled binary "1"s              48

States A and B are shown in the phase state diagrams. For details of the training
sequence, the reader is advised to refer to the CCITT recommendation.


LIMITED DISTANCE MODEMS AND LINE DRIVERS


The CCITT modems discussed above are designed to operate on the speech channel
of 300 to 3400 Hz provided by the telecommunication network. Filters are provided in
the network to restrict the bandwidth to this value primarily to pack more channels on
the transmission media. The copper pair as such provides a much wider frequency pass band, as we saw in the last chapter. Limited Distance Modems (LDMs) are designed for the entire frequency band of the non-loaded copper transmission line. Their application is limited to short distances as the media distortions and attenuation increase with distance. The distance limitation is, of course, a function of bit rate and cable characteristics. The longer the distance, the slower the transmission speed must be, because the sophisticated equalization techniques required for long distance operation are not provided in LDMs. Some typical figures are 20 kilometres at 1200 bps and 8 kilometres at 19,200 bps on 26-gauge cable. LDMs usually require a 4-wire unloaded connection between the modems.
Another class of modems which fall under the category of LDMs are the baseband
modem. A baseband modem does not have a modulator and demodulator and utilizes
digital baseband transmission. It has the usual interfaces and other circuits including
equalizers to compensate for the transmission distortions of the line.
Line drivers as modem substitutes provide transmission capabilities usually limited to
within buildings where the terminals are separated from the host at distances which
cannot be supported by the digital interface. A line driver converts the digital signal to
low-impedance balanced signal which can be transmitted over a twisted pair. For the
incoming signals, a line driver also incorporates a balanced line receiver. Line drivers
usually require DC continuity of the transmission medium.
GROUP BAND MODEMS
We have so far concentrated on data modems designed to operate in the frequency band 300 to 3400 Hz. Use of such modems is restricted to 19,200 bps primarily due to the bandwidth limitations. The telecommunication network also provides a group band service which extends from 60 kHz to 108 kHz. The modems designed to operate over this frequency band are called group band modems. Basic features of the CCITT V.36 group band modem are as follows:
1. This modem provides synchronous transmission at bit rates 48, 56, 64 and
72 kbps.
2. Single sideband amplitude modulation of carrier at 100 kHz is used. The
carrier at 100 kHz is also transmitted along with the modulated signal.
3. The modem has provision for injecting external group reference pilot at
104.08 kHz.
4. An optional speech channel occupying the frequency band 104 to 108 kHz
is integrated into the modem.
5. The modem incorporates a scrambler and a descrambler.
For bit rates higher than 72 kbps, CCITT has specified the V.37 group band modem. It supports 96 kbps, 112 kbps, 128 kbps and 144 kbps bit rates.


DATA MULTIPLEXERS
A modem is an intermediary device which is used for interconnecting terminals and
computers when the distances involved are large. Another data transmission
intermediary device is the data multiplexer which allows sharing of the transmission
media. Multiplexing is adopted to reduce the cost of transmission media and modems.
Figure 37 shows a simple application of data multiplexers. In the first option, 16 modems and eight leased lines are required for connecting eight terminals to the host. In the second option, the terminals and the host are connected using two data multiplexers. The modem requirement is reduced to two and the leased line requirement is reduced to one.

Fig. 37 Use of multiplexers for sharing media and modems.

The multiplexer ports which are connected to the terminals are called terminal ports and the port connected to the leased line is called the line port. A multiplexer has a built-in demultiplexer for the signals coming from the other end. The terminal port for incoming and outgoing signals is the same. One of the several wires of the terminal port carries the outgoing signal and another carries the incoming signal.
Besides consideration of economy, the other benefit of multiplexing is centralized
monitoring of all the channels. Data multiplexers can be equipped with diagnostic
hardware/software for monitoring the performance of individual data channels.
However, there is a possibility of catastrophic failure. If either of the multiplexers or the leased line fails, all the terminals will be cut off from the host.

Types of Data Multiplexers


Like speech channel multiplexing, data multiplexers use either frequency division
multiplexing (FDM) or time division multiplexing (TDM). In FDM, the line
frequency band is divided into sub-channels. Each terminal port is assigned one sub-
channel for transmission of its data. In TDM, the sub-channels are obtained by
assigning time intervals (time slots) to the terminals for use of the line. Time slot
allotment to the sub-channels may be fixed or dynamic. A time division multiplexer
with dynamic time slot allotment is called Statistical Time Division Multiplexer
(STDM or Stat Mux).
In the following sections we will briefly introduce the frequency division and time
division multiplexers. Stat Mux is more powerful and common than these two types
of multiplexers. It is described in considerable detail. The reader will find many new
concepts and terminology to which he has not been introduced so far. In order to
appreciate the operation of Stat Mux, it is first necessary to understand data link
protocols. The reader is strongly advised to read the section on Stat Mux only after
reading the chapter on Data Link Layer.
Frequency Division Multiplexers (FDM)
The leased line usually provides speech channel bandwidth of 300 – 3400 Hz.
Therefore, most of the multiplexers are designed for this band. For frequency division
multiplexing, the frequency band is divided into several sub-channels separated by
guard bands. The sub-channels utilize frequency shift keying for modulating the
carrier. The aggregate of all sub-channels is within the speech channel bandwidth and is an analog signal. Therefore, the multiplexer does not require any modem to connect it to the line. A four-wire circuit is always required for the outgoing and incoming channels.
Bandwidths of the sub-channels depend on the baud rates. Frequency division data
multiplexers provide baud rates from 50 to 600 bauds. The number of sub-channels
varies from thirty-six to four depending on baud rate (Table 14)

Table 14 Frequency Division Multiplexers

Data rate Number of Total capacity


(bps) sub-channels (bps)

50 36 1,800
75 24 1,800
110 18 1,980
150 12 1,800
600 4 2,400

Multidrop operation of the frequency division multiplexer is shown in Fig. 38. Each remote transmits and receives on a different frequency as determined by the remote single channel units. The multiple line unit which is connected to the host separates the signals received on the line. It also carries out frequency division multiplexing of the outgoing signals.


Frequency division multiplexers are not much in use. Their major limitations are:
1. Production costs are high because of analog components.
2. Total capacity is limited to 2400 bps due to the large bandwidth wasted in the guard bands.

Fig. 38 Multidrop application of frequency division multiplexers.

3. They usually require a conditioned line.
4. Most multiplexers do not allow mixing of bit rates of the sub-channels, i.e., all the sub-channels have the same bit rate.
5. They are inflexible. If the sub-channel capacity has to be changed, hardware modifications are required. Complete replacement of sub-channel cards is usually necessary.
One advantage of frequency division multiplexers is that they are robust. Failure of
one channel does not affect other sub-channels.
Time division multiplexers (TDM)
A time division multiplexer uses a fixed assignment of time slots to the sub-channels.
One complete cycle of time slots is called a frame and the beginning of a frame is
marked by a synchronization word (Fig.39). The synchronization word enables the
demultiplexer to identify the time slots and their boundaries. The first bit of the first
time slot follows immediately after the synchronization word.

Fig. 39 Frame format of a time division multiplexer.

If all the sub-channels have the same bit rates, all the time slots have the same lengths.
If the multiplexer permits speed flexibility, the higher speed sub-channels have longer
time slots. The frame format and time slot lengths are, however, fixed for any given


configuration or number of sub-channels and their rates. Since the frame format is
fixed, time slots of all the sub-channels are always transmitted irrespective of the fact
that some of the sub-channels may not have any data to send.
Bit and Byte Interleaved TDM.
Time division multiplexer are of two types:
1. Bit interleaved multiplexer
2. Byte interleaved multiplexer.
In the bit interleaved multiplexer, each time slot is one bit long. Thus, the user data
streams are interleaved taking one bit from each stream. Bit interleaved multiplexers
are totally transparent to the terminals.
In the byte interleaved multiplexer, each time slot is one byte long. Therefore, the
multiplexed output consists of a series of interleaved characters of successive sub-
channels. Usually, a buffer is provided at the input of each of its ports to temporarily
store the character received from the terminal. The multiplexer reads the buffers
sequentially. The start-stop bits of the characters are stripped during multiplexing and
again reinserted after demultiplexing. It is necessary to transmit a special ―idle‖
character when a terminal is not transmitting.
The bit rate at the output of the multiplexer is slightly greater than the aggregate bit
rate of the sub-channels due to the overhead of the synchronization word. Another
feature of TDMs is that even though the multiplexed output is formatted, there is no
provision for detecting or correcting the errors.
Time division multiplexers permit the mixing of bit rates of the sub-channels. Their
line capacity utilization is also better than frequency division multiplexers. A line bit
rate of 9600 bps is possible.
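A byte-interleaved time division multiplexer can be illustrated with a few lines of code. The sketch below is ours; the two-byte synchronization word and the idle character value are assumed for illustration and are not specified in the text.

SYNC = b"\x7e\x7e"      # assumed synchronization word marking the start of a frame
IDLE = b"\x00"          # assumed "idle" character sent when a port has no data

def build_frame(port_queues):
    # One byte is taken from every port queue, in fixed order, whether or not
    # the port has data; this is why line time is wasted on idle sub-channels.
    frame = bytearray(SYNC)
    for q in port_queues:
        frame += bytes([q.pop(0)]) if q else IDLE
    return bytes(frame)

def split_frame(frame, ports):
    # The demultiplexer locates the time slots purely by position after SYNC.
    assert frame.startswith(SYNC)
    payload = frame[len(SYNC):]
    return [payload[i:i + 1] for i in range(ports)]

queues = [bytearray(b"A"), bytearray(), bytearray(b"C"), bytearray()]
frame = build_frame(queues)
print(frame, split_frame(frame, 4))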
STATISTICAL TIME DIVISION MULTIPLEXERS
Statistical time division multiplexer, Stat Mux in short, uses dynamic assignment of time slots for transmitting data. If a sub-channel has data waiting to be transmitted, the Stat Mux allots it a time slot in the frame (Fig. 40). The duration of the time slot may be fixed or variable. There is a need to identify the time slots and their boundaries. Therefore, some additional control fields are required. When we examine the Stat Mux protocols later, we will see how the time slots are identified.

Fig. 40 Frame format of a statistical time division multiplexer.


Dynamic assignment allows the aggregate bit rates of the sub-channels to be more
than the line speed of the Stat Mux considering that all the terminals will not generate
traffic all the time. If sufficient aggregate traffic is assured at the input, the Stat Mux
permits full utilization of the line capacity. It is not so in TDMs, where the line time is
wasted if a time slot is not utilized by a sub-channel though another sub-channel may
have data to send.
Stat Mux Buffer
A Stat Mux is configured to handle an aggregate sub-channel bit rate which is more than the line rate. It must have a buffer so that it may absorb the input traffic fluctuations while maintaining a constant flow of multiplexed data on the line. The Stat Mux maintains a queue in the buffer to maintain the sequence of the data bytes. Buffer size may vary from vendor to vendor but 64 kbytes is typical. This buffer is usually shared by both directions of transmission, i.e., by the multiplexer and the demultiplexer portions of a Stat Mux. To guard against overflow, the sub-channel traffic is flow-controlled.
Stat Mux Protocol
Some of the important issues which need to be addressed to have dynamic time slot
allotment are:

1. In a simple time division multiplexer, the location of a time slot with respect to the synchronization word identifies the time slot because a fixed frame format is used. But in a Stat Mux, the frame has a variable format. Therefore, some mechanism to identify the time slots is required.
2. Lengths of the time slots are variable. There is a need to define time slot delimiters.

Therefore, a Stat Mux protocol which defines the format of the Stat Mux frame is
required. There are several proprietary protocols but none of them is standard. We
will discuss two common Stat Mux protocols, Bit Map and Multiple-character.
The Stat Mux has a well-defined frame structure and a built-in buffer to temporarily store data. Therefore, it is possible to enhance its capability by
implementing a data link protocol for error control. A commonly implemented data
link protocol is HDLC.
Layered Architecture
Figure 41a shows the three-layer architecture of a Stat Mux. The control sublayer
generates a multiplexed data frame with a control field to identify the data fields. It is
handed over to the data link sublayer which adds a header and a trailer to it. The
resulting frame structure in case of HDLC protocol is shown in Fig.41b. The
information field of the HDLC frame contains the frame received from the control
sublayer. Note that the address and control fields of the HDLC frame have nothing to
do with the sub-channel. They are part of the HDLC protocol. The frame check
sequence (FCS) contains the CRC code of error detection.


The first layer constitutes the physical layer which is concerned with the physical
aspects of transmitting the multiplexed bit stream on the line.
The control protocol is proprietary with each vendor and determines the overall
efficiency of the Stat Mux.
Bit Map Stat Mux Protocol
In the bit map Stat Mux protocol, the multiplexed data frame formed by the control sublayer consists of a map field and several data fields (Fig. 42). The map field has one bit for each sub-channel. It is two bytes long for a sixteen-port Stat Mux. If a bit is "1" in the map field, it indicates that the frame contains the data field of the corresponding sub-channel. A "0" in the map field of a frame indicates that the data field of the corresponding sub-channel is missing from this particular frame.
Note that the map field is present in all frames and has a fixed length. The size of the data field of a channel, if present, is fixed in the frame. It can be set to any value while configuring the Stat Mux. Fixed sizes of the data fields enable the receiving Stat Mux to identify the boundaries of these fields.

Fig. 41 Architecture of a Stat Mux.

Fig. 42 Frame format of bit map Stat Mux protocol.


For asynchronous terminal ports, the data field size is usually set to one
character. The start stop bits are stripped before multiplexing and reinserted after
demultiplexing.
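The bit map frame can be sketched in a few lines. The example below assumes a sixteen-port Stat Mux with a data field of one character per sub-channel, as described above; the choice that bit 0 of the map corresponds to port 0 is our assumption, not something stated in the text.

def build_bitmap_frame(port_chars):
    # port_chars: {port number 0-15: one data byte} for the ports that have data
    bitmap, data = 0, bytearray()
    for port in range(16):              # fixed port order fixes the order of the data fields
        if port in port_chars:
            bitmap |= 1 << port         # "1" in the map: data field of this sub-channel is present
            data.append(port_chars[port])
    return bitmap.to_bytes(2, "big") + bytes(data)

def parse_bitmap_frame(frame):
    bitmap = int.from_bytes(frame[:2], "big")
    data, out, idx = frame[2:], {}, 0
    for port in range(16):
        if bitmap & (1 << port):
            out[port] = data[idx]
            idx += 1
    return out

chars = {0: ord("A"), 5: ord("Q"), 12: ord("z")}
frame = build_bitmap_frame(chars)
assert parse_bitmap_frame(frame) == chars
print(len(frame))    # 5 bytes: a 2-byte map plus one character for each of the three active ports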
The HDLC frame transmitted on the line contains seven overhead bytes (Flag-1,
address-1, control-1, FCS-2, bit map-2) which reduce effective line utilization. If
there are N bytes in the data fields of the control frame, the maximum line utilization
efficiency E can be estimated by
E = N / (N + 7)
EXAMPLE 4
A host is connected to 16 asynchronous terminals through a pair of statistical time division multiplexers utilizing the bit map protocol. The sixteen asynchronous terminal ports operate at 1200 bps. The line port has a bit rate of 9600 bps. The data link control protocol is HDLC.
1. Calculate the maximum line utilization efficiency and throughput.
2. Will there be any queues in the Stat Mux
(a) if the average character rate at all the ports is 10 cps ?
(b) If the host sends full screen display of average 1200 characters to each
terminal ?
3. How much time will the Stat Mux take to clear the queues ?
Solution
1. As N = 16, the line utilization efficiency is given by

E = 16/(7 + 16) = 0.696

Throughput T = E × 9600 = 0.696 × 9600 = 6678 bps

2. (a) Aggregate average input = 16 × 10 = 160 cps = 160 × 8 = 1280 bps

Since the throughput is 6678 bps, it is very unlikely there will be queues at the terminal ports.

(b) With start and stop bits, the minimum size of a character is 10 bits. Therefore, at 1200 bps, the host will take 10 seconds to transfer the 1200 characters of one screen of a terminal. The Stat Mux will get 1200 × 16 = 19200 characters in 10 seconds from the host. The throughput is

6678 bps = 6678/8 = 834.75 characters per second


The Stat Mux will transmit 834.75 × 10 = 8347.5 characters in 10 seconds. Therefore, the queue at the end of 10 seconds = 19200 – 8347.5 = 10852.5 characters.
3. The Stat Mux will take 10852.5/834.75 ≈ 13 additional seconds to clear the queue.
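The arithmetic of Example 4 can be reproduced in a few lines (values as in the text, so the results agree up to rounding):

ports, line_bps = 16, 9600
E = ports / (ports + 7)                  # bit map protocol: E = N/(N + 7) with N = 16
throughput_bps = E * line_bps            # about 6678 bps
cps = throughput_bps / 8                 # about 834.75 characters per second

arriving = 1200 * ports                  # 19,200 characters arrive from the host in 10 s
sent = cps * 10                          # characters the Stat Mux can send in the same 10 s
queue = arriving - sent                  # about 10,852 characters queued
print(round(throughput_bps), round(queue), round(queue / cps))   # 6678 10852 13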


Multiple-Character Stat Mux Protocol


The bit map Stat Mux protocol has one limitation. The number of bytes in the data field of a sub-channel cannot be varied from frame to frame. The multiple-character Stat Mux protocol overcomes this limitation by including additional fields in the frame for indicating the sizes of the various data fields. The frame format of this protocol is shown in Fig. 43.
The data field of each sub-channel present in a frame is identified by a four-bit sub-
channel identifier. Thus, there can be a maximum of 16 sub-channels. The
identifier field is followed by a four-bit sub-channel control field for management
purposes.
The control field is followed by a length field which indicates the number of bytes in
the data field of the sub-channel. The length field is also one byte long and, therefore,
there can be a maximum of 256 bytes per sub-channel per frame. The data field follows
immediately after the length field. The format is repeated for each sub-channel in the
frame.
If the data link protocol is HDLC, total overhead bytes will be 5 + 2N per HDLC
frame,
Fig. 43 Frame format of multiple-character Stat Mux protocol.

where N is the number of sub-channels present in a frame. Therefore, the line
utilization efficiency E is given by

E = Σdi / (5 + 2N + Σdi),   the sum being taken over the N sub-channels,

where di is the number of data bytes in the ith sub-channel.
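The formula can be evaluated directly for any mix of sub-channel sizes; the short Python sketch below (names are illustrative) applies it to a frame carrying 16 sub-channels of 14 data bytes each, the case worked out in Example 5 below.

    def multichar_statmux_efficiency(data_bytes_per_subchannel):
        """E = sum(di) / (5 + 2N + sum(di)) for the multiple-character protocol.

        data_bytes_per_subchannel -- list of di values, one per sub-channel
        present in the frame (N is the length of the list).
        """
        n = len(data_bytes_per_subchannel)
        total = sum(data_bytes_per_subchannel)
        return total / (5 + 2 * n + total)

    # All 16 sub-channels present with 14 data bytes each (Example 5, part 1)
    print(round(multichar_statmux_efficiency([14] * 16), 4))   # -> 0.8582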

EXAMPLE 5
A host is connected to 16 asynchronous terminals through a pair of statistical time
division multiplexers utilizing the multiple-character protocol described above. The
sixteen asynchronous terminal ports operate at 1200 bps. The line port has a bit rate


of 9600 bps. The data link control protocol is HDLC and the maximum size of the
HDLC frame is 261 bytes.
1. Calculate the line utilization efficiency when all the ports generate their
maximum traffic. Will queues develop for this load?
2. What is the maximum line utilization efficiency without having queues?
3. If the host sends a full screen display of an average of 1200 characters to each
terminal, will there be any queue?
If so, how much time will the Stat Mux take to clear the queue?

Solution

1. If all the 16 users simultaneously generate a burst of data, each HDLC frame will
contain all the sub-channels. As the HDLC frame size is 261 bytes, each sub-channel
will occupy (261 – 5)/16 = 16 bytes. The data field of each sub-channel will be 16 – 2 =
14 bytes. Therefore,

   E = (16 × 14)/261 = 0.8582

   Time to transmit one frame t0 = (261 × 8)/9600 = 217.5 ms

   Number of characters received at each port in 217.5 ms is

   n = 0.2175 × 1200/10 = 26.1

   But out of these only 14 characters are transmitted in each frame, so queues will
   develop.

2. If there are fewer sub-channels, the overhead of two bytes per sub-channel is reduced.
Therefore, the line utilization efficiency may be increased. Let there be N sub-
channels in a frame and d data bytes in each sub-channel.

   Size of the HDLC frame = 5 + 2N + Nd

   Time to transmit the frame on the line t0 = (5 + 2N + Nd) × 8 / 9600


Time taken by the terminal to generate d characters is 10d/1200. If there are no


queues, then

   10d/1200 = (5 + 2N + Nd) × 8/9600

Simplifying, we get

   d = (5 + 2N)/(10 – N),   N < 10

We need to solve the above equation for integer values of d and N. The line utilization
efficiency is given by

   E = Nd/(5 + 2N + Nd)

Substituting the value of d, we get

   E = N/10

As N < 10, the maximum line utilization efficiency is obtained when N = 9. Therefore

   E = 0.9, N = 9, d = 23
3. Time required by the host to transfer one screen = 1200 × 10/1200 = 10 s.
Number of characters to be transferred in 10 seconds = 16 × 1200 = 19,200

   At the line rate of 9600 bps, the time taken to transmit one HDLC frame is

   t0 = (261 × 8)/9600 = 0.2175 s

   Assuming all the sub-channels are present in the frame, 224 data characters are
   transferred per HDLC frame. Therefore, the number of data characters
   transferred in 10 seconds is

   224 × 10/t0 = 10298.85 characters

   Additional time required to clear the queue

   = (19200 – 10298.85) × 10/10298.85 = 8.64 s


COMPARISON OF DATA MULTIPLEXING TECHNIQUES


When compared with other types of data multiplexers, Stat Mux offers many
advantages. Table 15 gives a general comparison of the data multiplexing techniques.
The parameters used for comparison are :
Line Utilization Efficiency. It gives the potential to effectively utilize the line
capacity.
Channel Capacity. It gives the aggregate capacity of all the sub-channels.
High Speed Channels. This parameter compares the ability to support high speed data
sub-channels.
Flexibility. This parameter compares the ability to change speed of sub-channels.
Error Control. This parameter compares the ability to detect and correct
transmission errors.
Multidrop Capability. This parameter compares the ability to use multidrop
techniques on a sub-channel.
Transmission Delay. This parameter compares the additional transmission delays
introduced by the multiplexers, over and above the propagation delay.


Table 15 Comparison of Data Multiplexer Techniques

Parameter FDM TDM Stat Mux

Line efficiency Poor Good Excellent


Channel capacity Poor Good Excellent
High speed sub-channel Very poor Poor Excellent
Flexibility Very poor Good Excellent
Error control None None Possible
Multidrop capability Good Difficult Possible
Cost High Low Medium
Transmission delay None Low Random

SUMMARY
Transmission of digital signal using the limited bandwidth of the speech channel of
the telephone network necessitates use of digital modulation methods, namely,
Frequency Shift Keying (FSK), differential Phase Shift Keying (PSK) and Quadrature
Amplitude Modulation (QAM). FSK is used in the low speed modems. PSK and
QAM are used in medium and high speed modems.
A modem has two interfaces, a digital interface which is connected to the Data
Terminal Equipment (DTE) and a line interface which is connected to the
transmission line. It comprises several functional blocks besides a modulator and a
demodulator. Encoding, scrambling, equalizing and timing extraction are some of the
additional functions, carried out in a modem. CCITT recommendations for modems
are summarized below. The number within brackets is the speed of the modem in bits
per second. Half duplex modems are indicated by the letters ―HD‖.
2-Wire Asynchronous Modem: V.21 (300).
2-Wire Synchronous Modems: V.22 (1200), V.22bis (2400), V.26bis (2400 HD),
V.26ter (2400), V.27ter (4800), V.32 (9600).

4-wire-Synchronous Modems: V.23 (1200), V.26 (2400), V.27 (4800), V.27bis


(4800), V.29 (9600), V.33 (14400), V.36 (72k), V.37 (144k).
Limited distance modems, baseband modems and line drivers are designed for copper
cable connection between the modems. These modems require the wider bandwidth of
the cable and cannot work within the 300–3400 Hz band of the speech channel.
Data multiplexers are used to economize on lines and modems. Frequency division and
time division data multiplexers offer limited capabilities and do not make optimum
use of the channel capacity. Statistical time division multiplexers offer a very high
potential utilization of channel capacity. They also offer high flexibility of
configuring terminal port speeds.


ERROR DETECTION & CORRECTION TECHNIQUES


Transmission of bits as electrical signals suffers from many impairments which
ultimately result in the introduction of errors in the bit stream. Digital systems are very
sensitive to errors and may malfunction if the error rate is above a certain level.
Therefore, error control mechanisms are built into almost all digital systems. In this
module we will discuss some common error detection and correction mechanisms.
We begin with the basic concepts and terminology of coding theory. Parity checking,
checksum and cyclic redundancy check methods of error detection are then examined in
some detail. We then proceed to error correction methods, which include block codes,
the Hamming code and convolutional codes. Mechanisms for error control in data
communication are based on detection of errors in a message and its retransmission.
TRANSMISSION ERRORS
Errors are introduced in the data bits during their transmission. These errors can be
categorized as: content errors, and flow integrity errors.
Content errors are errors in the content of a message, e.g., a ― 1‖ may be received as a
―0‖. Such errors creep in due to impairment of the electrical signal in the transmission
medium.
Flow integrity errors refer to missing blocks of data. For example, a data block may
be lost in the network due to its having been delivered to a wrong destination.
In voice communication the listener can tolerate a good deal of signal distortion and
make sense of the received signal but digital systems are very sensitive to errors.
Measures are, therefore, built into a data communication system to counteract the
effect of errors. These measures include the following:
1. Introduction of additional check bits in the data bits to detect content errors
2. Correction of the errors
3. Establishment of procedures of data exchange which enable detection of
missing blocks of data
4. Recovery of the corrupted messages.

CODING FOR ERROR DETECTION AND CORRECTION


For error detection and correction, we need to add some check bits to a block of data
bits. The check bits are also called redundant bits because they do not carry any user
information. Check bits are so chosen that the resulting bit sequence has a unique
―characteristic‖ which enables error detection. Coding is the process of adding the
check bits.


Some of the terms relating to coding theory are explained below:

1. The block of data bits to which check bits are added is called a data word.
2. The bigger block containing check bits is called the code word.
3. Hamming distance or simply distance between two code words is the number
of disagreements between them. For example, the distance between the two
words given below is 3 (Fig. 1).
4. The weight of a code word is the number of ―1‖ s in the code word e.g.,
11001100 has a weight of 4.
5. A code set consists of all the valid code words, each of which has the
built-in ―characteristic‖ of the code set.

1 1 0 1 0 1 0 0
Distance =3

0 1 0 1 1 1 1 0

Fig. 1 Hamming distance.


Error detection
When a code word is transmitted, one or more of its bits may be reversed due to
signal impairment. The receiver can detect these errors if the received code word is
not one of the valid code words of the code set.
When errors occur, the distance between the transmitted and received code words
becomes equal to the number of erroneous bits (Fig.2).

Transmitted Received Number of Distance


Code Word Code Word Errors
11001100 11001110 1 1
10010010 00011010 2 2
10101010 10100100 3 3

Fig. 2 Hamming distance between transmitted and received code words.


In other words, the valid code words must be separated by a distance of more than 1;
otherwise, even a single bit error will generate another valid code word and the error
will not be detected. The number of errors which can be detected depends on the
distance between any two valid code words. For example, if the valid code words are
separated by a distance of 4, up to three errors in a code word can be detected. By
adding a certain number of check bits and properly choosing the algorithm for
generating them, we ensure some minimum distance between any two valid code
words of a code set.


Error Correction
After an error is detected, there are two approaches to correction of errors:

1. Reverse Error Correction (REC)


2. Forward Error Correction (FEC)
In the first approach, the receiver requests for retransmission of the code word
whenever it detects an error. In the second approach, the code set is so designed that it
is possible for the receiver to detect and correct the errors as well. The receiver locates
the errors by analyzing the received code word and reverses the erroneous bits.
An alternative way of forward error correction is to search for the most likely correct
code word. When an error is detected, the distances of all the valid code words from
the received invalid code word are measured. The nearest valid code word is the most
likely correct version of the received word (Fig. 3).

Valid Code Word Valid Code Word

10001110 10100110

3 Received Word 1

10110110

Valid Code Word 7 5 Valid Code Word

01001000 01011111

Fig. 3 Error correction based on the least Hamming distance from the valid code words.

If the minimum distance between valid code words is D, up to ⌊(D – 1)/2⌋ errors can be
corrected. More errors than this may cause the received code word to be nearer to
a wrong valid code word.
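The least-distance decision of Fig. 3 is straightforward to express in code. The following Python sketch (function names are illustrative) reproduces the distances shown in the figure.

    def hamming_distance(word1, word2):
        """Number of bit positions in which two equal-length code words differ."""
        return sum(b1 != b2 for b1, b2 in zip(word1, word2))

    def nearest_code_word(received, code_set):
        """Forward error correction by picking the valid code word at the
        least Hamming distance from the received word (Fig. 3)."""
        return min(code_set, key=lambda valid: hamming_distance(received, valid))

    # The four valid code words and the received word of Fig. 3.
    valid = ["10001110", "10100110", "01001000", "01011111"]
    print(nearest_code_word("10110110", valid))   # -> 10100110 (distance 1)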
Bit Error Rate (BER)
In analog transmission, signal quality is specified in terms of signal-to-noise ratio
(S/N) which is usually expressed in decibels. In digital transmission, the quality of
received digital signal is expressed in terms of Bit Error Rate (BER) which is the
number of errors in a fixed number of transmitted bits. A typical error rate on a high
quality leased telephone line is as low as 1 error in 10^6 bits, or simply 1 × 10^-6.


ERROR DETECTION METHODS


Some of the popular error detection methods are:

1. Parity checking
2. Checksum error detection
3. Cyclic Redundancy Check (CRC).
Each of the above methods has its advantages and limitations as we shall see in the
following section.

Parity Checking
In parity checking methods, an additional bit called a ―parity‖ bit is added to each data
word. The additional bit is so chosen that the weight of the code word so formed is
either even (even parity) or odd (odd parity) (Fig .4). All the code words of a code set
have the same parity (either odd or even) which is decided in advance.

    Even parity              Odd parity
    P   Data word            P   Data word
    0   1001011              1   1001011
    1   0010110              0   0010110

Fig. 4 Even and odd parity bits.

When a single error or an odd number of errors occurs during transmission, the parity
of the code word changes (Fig.5). Parity of the code word is checked at the receiving
end and violation of the parity rule indicates errors somewhere in the code word.
Transmitted Code 10010110 Even Parity
Received Code (single error) 00010110 Odd Parity (Error is detected)
Received Code (Double error) 00011110 Even Parity (Error is not
detected)
Fig. 5 Error detection by change in parity.
Note that double or any even number of errors will go undetected because the
resulting parity of the code word will not change. Thus, a simple parity checking
method has its limitations. It is not suitable for multiple errors. To keep the possibility
of occurrence of multiple errors low, the size of the data word is usually restricted to a
single byte.
Parity checking does not reveal the location of the erroneous bit. Also, the received
code word with an error is always at equal distance from two valid code words.
Therefore, errors cannot be corrected by the parity checking method.


EXAMPLE 2
Write the ASCII code of the word ― HELLO‖ using even parity.
Solution
Bit Positions 87654321
H 01001000
E 11000101
L 11001100
L 11001100
O 11001111
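The parity rule is simple to automate; a minimal Python sketch (function names are illustrative) which reproduces the code words of Example 2 is given below.

    def add_parity(data_word, even=True):
        """Prefix a 7-bit data word (bit string) with an even or odd parity bit."""
        ones = data_word.count("1")
        parity = ones % 2 if even else (ones + 1) % 2
        return str(parity) + data_word

    def parity_ok(code_word, even=True):
        """Check a received code word against the agreed parity rule."""
        ones = code_word.count("1")
        return (ones % 2 == 0) if even else (ones % 2 == 1)

    for ch in "HELLO":
        print(ch, add_parity(format(ord(ch), "07b")))   # reproduces Example 2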
Burst Errors
There is a strong tendency for the errors to occur in bursts. An electrical interference
like lightning lasts for several bit times and, therefore, it corrupts a block of several
bits. The parity checking method fails completely in such situations. Checksum and
cyclic redundancy check are the two methods which can take care of burst errors.
Checksum Error Detection
In checksum error detection method, a checksum is transmitted along with every
block of data bytes. Eight-bit bytes of a block of data are added in an eight-bit
accumulator. Checksum is the resulting sum in the accumulator. Being an eight-bit
accumulator, the carries of the most significant bits are ignored.
EXAMPLE 3
Find the checksum of the following message. The MSB is on the left-hand side of
each byte.


10100101 00100110 11100010 01010101 10101010 11001100 00100100

Solution

      1 0 1 0 0 1 0 1
      0 0 1 0 0 1 1 0
      1 1 1 0 0 0 1 0
      0 1 0 1 0 1 0 1        Data bytes
      1 0 1 0 1 0 1 0
      1 1 0 0 1 1 0 0
    + 0 0 1 0 0 1 0 0
    -----------------
      1 0 0 1 1 1 0 0        Checksum byte (carries out of the MSB are ignored)

After transmitting the data bytes, the checksum is also transmitted. The checksum is
regenerated at the receiving end and errors show up as a different checksum. Further
simplification is possible by transmitting the 2's complement of the checksum in
place of the checksum itself. The receiver in this case accumulates all the bytes
including the 2's complement of the checksum. If there is no error, the contents of
the accumulator should be zero after accumulation of the 2's complement of the
checksum byte.
The advantage of this approach over simple parity checking is that the 8-bit addition
―mixes up‖ the bits and the checksum is representative of the overall block. Unlike simple
parity, where an even number of errors may go undetected, with the checksum there is a
255-to-1 chance of detecting random errors.
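The checksum and its 2's-complement variant can be sketched in a few lines of Python (function names are illustrative); the byte values are those of Example 3.

    def checksum8(data_bytes):
        """Eight-bit checksum: add all bytes in an 8-bit accumulator,
        ignoring carries out of the most significant bit."""
        return sum(data_bytes) & 0xFF

    def checksum8_complement(data_bytes):
        """2's complement of the checksum; appended to the block so that the
        receiver's accumulation of block + check byte comes to zero."""
        return (-checksum8(data_bytes)) & 0xFF

    block = [0b10100101, 0b00100110, 0b11100010, 0b01010101,
             0b10101010, 0b11001100, 0b00100100]             # bytes of Example 3
    print(format(checksum8(block), "08b"))                    # -> 10011100
    print(checksum8(block + [checksum8_complement(block)]))   # -> 0 at the receiver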
Cyclic Redundancy Check
Cyclic Redundancy Check (CRC) codes are very powerful and are now almost
universally employed. These codes provide a better measure of protection at the lower
level of redundancy and can be fairly easily implemented using shift registers or
software.
A CRC code word of length N with m-bit data word is referred to as (N,m) cyclic
code and contains (N-m) check bits. These check bits are generated by modulo-2


division. The dividend is the data word followed by n= N-m zeros and the divisor is a
special binary word of length n+1. The CRC code word is formed by modulo-2
addition of the remainder so obtained and the dividend.
EXAMPLE 6
Generate CRC code for the data word 110101010 using the divisor 10101.
Solution
Data Word 110101010
Divisor 10101

            111000111              Quotient
    10101 ) 1101010100000          Dividend
            10101
            -----
             11111
             10101
             -----
              10100
              10101
              -----
                  11000
                  10101
                  -----
                   11010
                   10101
                   -----
                    11110
                    10101
                    -----
                     1011          Remainder

Code word = 110101010 1011 (the data word followed by the remainder)

In the above example, note that the CRC code word consists of the data word
followed by the remainder. The code word so generated is completely divisible by the
divisor because it is the difference of the dividend and the remainder (modulo-2
addition and subtraction are equivalent). Thus, when the code word is again divided
by the same divisor at the receiving end, a non-zero remainder after so dividing will
indicate errors in transmission of the code word.
EXAMPLE 7
Let the code word of Example 6 be received as 1100100101011. Check if there are errors
in the code word.


Solution
Dividing the code word by 10101, we get

            111110001              Quotient
    10101 ) 1100100101011
            10101
            -----
             11000
             10101
             -----
              11010
              10101
              -----
               11111
               10101
               -----
                10100
                10101
                -----
                    11011
                    10101
                    -----
                     1110          Remainder

Non-zero remainder indicates that there are errors in the received code word.
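The modulo-2 division used in Examples 6 and 7 can be sketched as follows in Python (function names are illustrative):

    def mod2_div(dividend, divisor):
        """Bit-by-bit modulo-2 (XOR) long division; returns the remainder
        as a bit string of length len(divisor) - 1."""
        rem = list(dividend)
        for i in range(len(dividend) - len(divisor) + 1):
            if rem[i] == "1":                          # subtract (XOR) the divisor
                for j, d in enumerate(divisor):        # wherever the leading bit is 1
                    rem[i + j] = str(int(rem[i + j]) ^ int(d))
        return "".join(rem[-(len(divisor) - 1):])

    def crc_encode(data_word, divisor):
        """Append the CRC check bits: divide data word + zeros, keep the remainder."""
        zeros = "0" * (len(divisor) - 1)
        return data_word + mod2_div(data_word + zeros, divisor)

    divisor = "10101"
    code = crc_encode("110101010", divisor)      # Example 6
    print(code)                                  # -> 1101010101011
    print(mod2_div(code, divisor))               # -> 0000 : no errors
    print(mod2_div("1100100101011", divisor))    # -> 1110 : errors (Example 7)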
Algebraic Representation of Binary Code Words
For the purpose of analysis, binary codes are represented using algebraic
polynomials. In a polynomial of variable x, the coefficients of the powers of x are the bits
of the code, the most significant bit being the coefficient of the highest power of x. The
data word of Example 6 can be represented by a polynomial M(x) as:

M(x) = 1·x^8 + 1·x^7 + 0·x^6 + 1·x^5 + 0·x^4 + 1·x^3 + 0·x^2 + 1·x^1 + 0·x^0
or M(x) = x^8 + x^7 + x^5 + x^3 + x

The polynomial corresponding to the divisor is called the generating polynomial G(x).
G(x) corresponding to the divisor used in the last example would be

G(x) = 1·x^4 + 0·x^3 + 1·x^2 + 0·x^1 + 1·x^0
or G(x) = x^4 + x^2 + 1

The polynomial D(x) corresponding to the dividend (1101010100000) is

D(x) = x^12 + x^11 + x^9 + x^7 + x^5 = x^4 · M(x)

If Q(x) is the quotient and R(x) is the remainder when D(x) is divided by G(x),

D(x) = Q(x)·G(x) + R(x)
D(x) + R(x) = Q(x)·G(x) + R(x) + R(x)
D(x) + R(x) = Q(x)·G(x)      (since R(x) + R(x) = 0 in modulo-2 arithmetic)


Thus, the CRC code D(x) +R(x) is completely divisible by G(x). This characteristic of
the code is used for detecting errors.
Some of the common generating polynomials and their applications are :

CCITT V.41:  x^16 + x^12 + x^5 + 1
It is used in the HDLC/SDLC/ADCCP protocols.

CRC-12:  x^12 + x^11 + x^3 + x^2 + x + 1
It is employed in the BISYNC protocol with 6-bit characters.

CRC-16:  x^16 + x^15 + x^2 + 1
It is used in the BISYNC protocol with 8-bit characters.

CRC-32:  x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1
It is used with 8-bit characters when very high probability of error detection is
required.
FORWARD ERROR CORRECTION METHODS
Locating and correcting errors requires a bigger overhead in terms of the number of check
bits in the code word. Some of the important error-correction codes which find
application in data transmission devices are:
1. Block parity
2. Hamming code
3. Convolutional code.

Block Parity
The concept of parity checking can be extended to detect and correct single errors.
The data block is arranged in a rectangular matrix form as shown in Fig. 6 and two
sets of parity bits are generated, namely,
1. Longitudinal Redundancy Check (LRC)
2. Vertical Redundancy Check (VRC).
VRC is the parity bit associated with the character code and LRC is generated over
the rows of bits. The LRC is appended to the end of the data block. Bit 8 of the LRC
represents the VRC of the other 7 bits of the LRC. In Fig. 6, even parity is used for both the
LRC and the VRC.


                      C  O  M  P  U  T  E  R   LRC
            Bit 1     1  1  1  0  1  0  1  0    1
            Bit 2     1  1  0  0  0  0  0  1    1
  7-bit     Bit 3     0  1  1  0  1  1  1  0    1
  ASCII     Bit 4     0  1  1  0  0  0  0  0    0
  codes     Bit 5     0  0  0  1  1  1  0  1    0
            Bit 6     0  0  0  0  0  0  0  0    0
            Bit 7     1  1  1  1  1  1  1  1    0
  VRC (bit 8)         1  1  0  0  0  1  1  1    1

Fig. 6 Vertical and longitudinal parity check bits (even parity).

Bit Transmission Sequence


11000011 11110011 10110010 00001010 10101010 00101011 10100011
01001011 11100001
Even a single error in any bit results in failure of longitudinal redundancy check in
one of the rows and vertical redundancy check in one of the columns. The bit which is
common to the row and column is the bit in error.
Multiple errors in rows and columns can be detected but cannot be corrected as the
bits which are in error cannot be located.
EXAMPLE 8
The following bit stream is encoded using VRC, LRC and even parity. Correct the
error, if any.
11000011 11110011 10110010 00001010 10111010 00101011 10100011
01001011 11100001


Solution
    1  1  1  0  1  0  1  0    1
    1  1  0  0  0  0  0  1    1
    0  1  1  0  1  1  1  0    1
    0  1  1  0  1  0  0  0    0   <-- wrong parity (LRC check fails)
    0  0  0  1  1  1  0  1    0
    0  0  0  0  0  0  0  0    0
    1  1  1  1  1  1  1  1    0
    1  1  0  0  0  1  1  1    1
                ^
                wrong parity (VRC check fails)

The fourth bit of the fifth byte is in error. It should be ―0‖.
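A minimal Python sketch of the VRC/LRC generation of Fig. 6 is given below (the function name and the MSB-first bit ordering are choices made here for illustration).

    def block_parity(chars):
        """Even-parity VRC (per character) and LRC (per bit position) for a block
        of 7-bit ASCII characters, as in Fig. 6.  Bit strings are MSB first."""
        codes = [format(ord(c), "07b") for c in chars]
        vrc = [code.count("1") % 2 for code in codes]
        lrc = ["".join(code[i] for code in codes).count("1") % 2 for i in range(7)]
        lrc_parity = sum(lrc) % 2          # bit 8 of the LRC byte (its own VRC)
        return vrc, lrc, lrc_parity

    vrc, lrc, lrc_p = block_parity("COMPUTER")
    print(vrc)         # [1, 1, 0, 0, 0, 1, 1, 1]  -> bottom row of Fig. 6
    print(lrc, lrc_p)  # LRC bits (MSB first) and their own parity bit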
Hamming Code
It is the single error correcting code devised by Hamming. In this code, there are
multiple parity bits in a code word. Bit positions 1, 2, 4, 8, etc. of the code word are
reserved for the parity bits. The other bit positions are for the data bits (Fig. 7). The
number of parity bits required for

1 2 3 4 5 6 7 8 9 10 11
P1 P2 D P4 D D D P8 D D D

P: Parity Bit D: Data Bit

Fig. 7 Location of parity bits in Hamming code.

correcting single bit errors depends on the length of the code word. A code word of
length n contains m parity bits, where m is the smallest integer satisfying the
condition:

2^m ≥ n + 1

The MSB of the data word is on the right-hand side and its position is third in Fig. 7.
As usual, the LSB is transmitted first.
Each data bit is checked by a number of parity bits. The data bit position, expressed as a
sum of powers of 2, determines the parity bit positions which check that data bit. For
example, a data bit in position 6 is checked by parity bits P4 and P2 (6 = 2^2 + 2^1).


Similarly, the data bit in position 11 is checked by parity bits P8, P2 and P1
(11 = 2^3 + 2^1 + 2^0). Table 1 gives the parity bit positions which check the various
data bit positions. Each parity bit is determined by the data bits it checks. Even or odd
parity can be used.


Table 1 Data Bit Positions Checked by the Parity Bits

Data bit        Parity bit positions
positions       P1    P2    P4    P8
    3            x     x
    5            x           x
    6                  x     x
    7            x     x     x
    9            x                 x
   10                  x           x
   11            x     x           x
   12                        x     x

For example, if even parity is used, P2 is such that the number of ―1‖s in the 2nd, 3rd, 6th,
7th, 10th and 11th positions is even. The logic behind this way of generating the
parity bits is that when a code word suffers an error, all the parity bits which check the
erroneous bit will indicate violation of the parity rule, and the sum of these parity bit
positions will indicate the position of the erroneous bit. For example, if the 11th bit is
in error, parity bits P8, P2 and P1 will indicate error and 8 + 2 + 1 = 11 will immediately
point to the 11th bit.
EXAMPLE 9
Generate the code word for ASCII character ―K‖= 1001011. Assume even parity for
the Hamming code. No character parity is used.

Solution

Bit positions   1    2    3    4    5    6    7    8    9    10   11
                P1   P2   1    P4   0    0    1    P8   0    1    1

First parity bit  P1 (positions 3, 5, 7, 9, 11):   1, 0, 1, 0, 1   ->  P1 = 1
Second parity bit P2 (positions 3, 6, 7, 10, 11):  1, 0, 1, 1, 1   ->  P2 = 0
Third parity bit  P4 (positions 5, 6, 7):          0, 0, 1         ->  P4 = 1
Fourth parity bit P8 (positions 9, 10, 11):        0, 1, 1         ->  P8 = 0

Code word       1    0    1    1    0    0    1    0    0    1    1

EXAMPLE 10

Detect and correct the single error in the received Hamming code word 10110010111.
Assume even parity.


Solution

Bit positions   1    2    3    4    5    6    7    8    9    10   11
                P1   P2   D    P4   D    D    D    P8   D    D    D
Code word       1    0    1    1    0    0    1    0    1    1    1

First check  (P1, 3, 5, 7, 9, 11):   1, 1, 0, 1, 1, 1   Odd    Fail   1
Second check (P2, 3, 6, 7, 10, 11):  0, 1, 0, 1, 1, 1   Even   Pass
Third check  (P4, 5, 6, 7):          1, 0, 0, 1         Even   Pass
Fourth check (P8, 9, 10, 11):        0, 1, 1, 1         Odd    Fail   8

Position of the erroneous bit = 8 + 1 = 9

Thus, the 9th bit position is in error. Correct code word is 10110010011.
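The syndrome calculation used in Examples 9 and 10 can be sketched in Python as follows (the function name is illustrative); it reproduces the correction of Example 10.

    def hamming_syndrome(code_word):
        """Return the position of the single erroneous bit (0 if none) for an
        even-parity Hamming code word, bit position 1 on the left."""
        bits = [None] + [int(b) for b in code_word]        # 1-based indexing
        syndrome = 0
        for p in (1, 2, 4, 8):
            # Each parity bit covers the positions whose binary expansion contains p.
            covered = [i for i in range(1, len(bits)) if i & p]
            if sum(bits[i] for i in covered) % 2 != 0:
                syndrome += p
        return syndrome

    received = "10110010111"                # Example 10
    pos = hamming_syndrome(received)
    print(pos)                              # -> 9
    corrected = list(received)
    if pos:
        corrected[pos - 1] = "1" if corrected[pos - 1] == "0" else "0"
    print("".join(corrected))               # -> 10110010011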
Convolutional Codes
Unlike block codes in which the check bits are computed for a block of data,
convolutional codes are generated over a ―span‖ of data bits, e.g., a convolutional
code of constraint length 3 is generated bit by bit always using the ―last 3 data bits‖.
Figure 8 shows a simple convolutional encoder consisting of a shift register having
three stages and EXOR gates which generate two output bits for each input bit. It is
called a rate ½ convolutional encoder.

Fig. 8 Half-rate convolutional encoder.

State transition diagram of this encoder is shown in Fig. 9. Each circle in the diagram
represents a state of the encoder, which is the content of two leftmost stages of the
shift register. There are four possible states 00, 01, 10, 11. The arrows represent the


state transitions for the input bit which can be 0 or 1. The label on each arrow shows
the input data bit by which the transition is caused and the corresponding output bits.
As an example, suppose the initial state of the encoder is 00 and the input data
sequence is 1011. The corresponding output sequence of the encoder will then be
11010010.
Trellis Diagram.
An alternative way of representing the states is by using the trellis diagram (Fig. 10).
Here the four states 00, 01, 11, 10 are represented as four levels. The arrows represent
state transitions as in the state transition diagram. The labels on the arrows indicate
the output. By convention, a ―0‖ input is always represented as an upward transition
and a ―1‖ input as a downward transition. The trellis diagram can be obtained from
the state transition diagram.
EXAMPLE 11
Generate the convolutional code using the trellis diagram of Fig . 10 for the input bit
sequence 0101 assuming the encoder is in state A to start with.


Solution
Starting from state A at the top left corner in Fig. 10 and tracing the path through the
trellis for the input sequence 0101, we get

Present state   Input bit   Next state   Output bits
     A              0           A            00
     A              1           C            11
     C              0           B            01
     B              1           C            00

Output bit sequence: 0 0 1 1 0 1 0 0


Decoding Algorithm.
Decoder for the convolutional code is based on the maximum likelihood principle
called the Viterbi algorithm. Knowing the encoder behavior and the received
sequence of bits, we can find the most likely transmitted sequence by analyzing all the
possible paths through the trellis. The path which results in the output sequence
which is nearest to the received sequence is chosen and the corresponding input bits
are the decoded data bits.
Let the data bit sequence be 1011 which is encoded as 11010010 using the encoder
shown in Fig. 8. The received sequence is 11110010 having an error in the third bit
position.
Now we need to analyze all possible paths through the trellis and select the path
which results in an output sequence nearest the received sequence. We will do it in
two steps. After the first step we will be in a position to exclude further analysis of
some of the paths.
Step 1: Let us first analyze the first three pairs of bits, that is, 111100. If we start
from state A and trace all possible paths through the trellis shown in Fig. 10, we get
the output bit sequences, and their distances from the received sequence 111100 as
given in Table 2.


Table 2 Alternative Paths through the Trellis

                 Step 1                               Step 2
Data   Path   Output     Distance from   Next data   Next    Output     Distance from
bits          sequence   111100          bit         state   sequence   11110010

000    AAAA   000000     4
100    ACBA   110111     3 +             0           A       11011100   4
                                         1           C       11011111   4
110    ACDB   111010     2 +             0           A       11101011   3
                                         1           C       11101000   3
010    AACB   001101     3
001    AAAC   000011     6
101    ACBC   110100     1 +             0           B       11010001   3
                                         1           D       11010010   1 +
111    ACDD   111001     2 +             0           B       11100110   2
                                         1           D       11100101   4
011    AACD   001110     3

+ chosen paths having the smaller distance.


Note that a pair of paths terminate on each state, e.g., state A can be reached via
AAAA or ACBA. But path AAAA results in output sequence 000000 which is at a
distance of 4 from the first six bits of the received sequence. In the case of the other
path ACBA, this distance is only 3. Because we are looking for a sequence with the
smallest distance, we need not consider the first path for further analysis. We can drop
some more paths in similar manner.
Step 2: Having considered the first three pairs of bits, let us move further.
Transitions from the last state arrived at in the first step, will result in two potential
states depending on the next input bit. Distances of the resulting bit sequences from
the received sequence are given in Table 2. Note that we have computed the distances
for only the selected paths of the first step. The minimum distance is for the path
ACBCD which corresponds to the correct data bit sequence 1011.


EXAMPLE 12

What is the message sequence if the received rate ½ encoded bit sequence is
00010100? Use the trellis diagram given in Fig. 10.
Solution

                 Step 1                               Step 2
Data   Path   Output     Distance from   Next data   Next    Output     Distance from
bits          sequence   000101          bit         state   sequence   00010100

000    AAAA   000000     2               0           A       00000000   2
                                         1           C       00000011   4
100    ACBA   110111     3
110    ACDB   111010     6
010    AACB   001101     1               0           A       00110111   3
                                         1           C       00110100   1
001    AAAC   000011     2               0           B       00001101   3
                                         1           D       00001110   3
101    ACBC   110100     3
111    ACDD   111001     4
011    AACD   001110     3               0           B       00111010   4
                                         1           D       00111001   4

From the above table, it can be seen that the minimum distance is for the path AACBC,
which corresponds to the message bit sequence 0101.
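A compact Python sketch of this decoding procedure is given below. The state-transition table is read off the trellis of Fig. 10 (it is consistent with the encodings worked out in Examples 11 and 12); the function and variable names are illustrative.

    # State transitions and output bits of the Fig. 10 trellis:
    # TRELLIS[state][input_bit] = (next_state, output_bits).
    TRELLIS = {
        "A": {0: ("A", "00"), 1: ("C", "11")},
        "B": {0: ("A", "11"), 1: ("C", "00")},
        "C": {0: ("B", "01"), 1: ("D", "10")},
        "D": {0: ("B", "10"), 1: ("D", "01")},
    }

    def distance(a, b):
        return sum(x != y for x, y in zip(a, b))

    def viterbi_decode(received, start="A"):
        """Maximum-likelihood decoding: for every state, keep only the survivor
        path whose output sequence is nearest (in Hamming distance) to the
        received bits, then pick the overall nearest path at the end."""
        survivors = {start: ("", 0)}                 # state -> (decoded bits, distance)
        for i in range(0, len(received), 2):
            pair = received[i:i + 2]
            best = {}
            for state, (bits, dist) in survivors.items():
                for inp in (0, 1):
                    nxt, out = TRELLIS[state][inp]
                    cand = (bits + str(inp), dist + distance(out, pair))
                    if nxt not in best or cand[1] < best[nxt][1]:
                        best[nxt] = cand
            survivors = best
        return min(survivors.values(), key=lambda s: s[1])[0]

    print(viterbi_decode("11110010"))   # -> 1011 (error in the 3rd bit corrected)
    print(viterbi_decode("00010100"))   # -> 0101 (Example 12)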
REVERSE ERROR CORRECTION
We have seen some of the methods of forward error correction but reverse error
correction is more economical than forward error correction in terms of the number of
check bits. Therefore, usually error detection methods are implemented with an error
correction mechanism which requires the receiver to request the sender for
retransmission of the code word received with errors. There are three basic
mechanisms of reverse error correction:
1. Stop and wait
2. Go-back-N
3. Selective retransmission


Stop and wait

In this scheme, the sending end transmits one block of data at a time and then waits
for acknowledgement from the receiver. If the receiver detects any error in the data
block, it sends a request for retransmission in the form of negative acknowledgement.
If there is no error, the receiver sends a positive acknowledgement, in which case the
sending end transmits the next block of data. Figure 11 illustrates the mechanism.

[Figure: the sender transmits a data block with check bits; when the receiver finds no
errors it returns a positive acknowledgement and the sender transmits the next block;
when errors are detected it returns a negative acknowledgement and the sender
retransmits the same block.]

Fig. 11 Reverse error correction by stop-and-wait mechanism.
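A toy Python model of the stop-and-wait exchange is sketched below (the function name and the random error model are illustrative assumptions, not part of any real protocol).

    import random

    def stop_and_wait(blocks, error_rate=0.2):
        """Toy model of stop-and-wait: send one block, wait for the acknowledgement,
        retransmit the same block whenever a negative acknowledgement is received."""
        delivered, attempts = [], 0
        for block in blocks:
            while True:
                attempts += 1
                corrupted = random.random() < error_rate   # stand-in for a failed check
                if corrupted:
                    continue                                # NAK: retransmit the same block
                delivered.append(block)                     # ACK: move to the next block
                break
        return delivered, attempts

    data, tries = stop_and_wait(["block-1", "block-2", "block-3"])
    print(data, tries)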

Go-Back-N
In this mechanism all the data blocks are numbered and the sending end keeps
transmitting the data blocks with check bits. Whenever the receiver detects error in a
block, it sends a retransmission request indicating the sequence number of the data
block received with errors. The sending end then starts retransmission of all the data
blocks from the requested data block onwards (Fig.12).
Selective Retransmission
If the receiver is equipped with the capability to resequence the data blocks, it
requests for selective retransmission of the data block containing errors. On receipt of
the request, the sending end retransmits the data block but skips the following data
blocks already transmitted and continues with the next data block (Fig. 13).In data
communications, we use reverse error correction using one of the mechanisms
described above.


[Figure: numbered data blocks 1, 2, 3, … are transmitted continuously with check bits;
the receiver detects errors in block 2 and requests retransmission of block 2; the sender
then retransmits block 2 and all subsequent blocks (2, 3, …).]

Fig. 12 Reverse error correction by go-back-N mechanism.

[Figure: numbered data blocks 1, 2, 3, … are transmitted with check bits; the receiver
detects errors in block 2 and requests retransmission of block 2 only; the sender
retransmits block 2, skips the blocks already transmitted, and continues with block 4.]

Fig. 13 Reverse error correction by selective retransmission mechanism.


SUMMARY
Errors are introduced due to imperfections in the transmission media. For error
control, we need to detect the errors and then take corrective action. Parity bits,
checksum and cyclic redundancy check (CRC) are some of the error detection
methods. Out of the three, CRC is the most powerful and widely implemented.
Error correction methods include forward error correction and reverse error correction.
Forward error correction requires additional check bits which enable the receiver to
correct the errors as well. However, reverse error correction mechanisms, namely
stop and wait, go-back-N and selective retransmission, are more common. In these
mechanisms the receiver requests retransmission of the data blocks received with
errors.


PACKET SWITCHING
AND MESSAGE SWITCHING CONCEPTS
Whenever we have multiple devices, we have the problem of how to connect them to
make one-to-one communication possible. One solution is to install a point-to-point
connection between each pair of devices (a mesh topology) or between a central
device and every other device (a star topology). These methods, however, are
impractical and wasteful when applied to very large networks. The number and length
of the links require too much infrastructure to be cost efficient, and the majority of
those links would be idle most of the time. Imagine a network of six devices: A, B, C,
D, E, and F. If device A has point-to-point links to devices B, C, D, E, and F, then
whenever only A and B are connected, the links connecting A to each of the other
devices are idle and wasted.
Other topologies employing multipoint connections, such as a bus, are ruled out
because the distances between devices and the total number of devices increase
beyond the capacities of the media and equipment.
A better solution is switching. A switched network consists of a series of interlinked
nodes, called switches. Switches are hardware and/or software devices capable of
creating temporary connections between two or more devices linked to the switch but
not to each other. In a switched network, some of these nodes are connected to the
communicating devices. Others are used only for routing.

The communicating devices are labeled A, B, C, D, and so on, and the switches I, II,
III, IV, and so on. Each switch is connected to multiple links and is used to complete
the connections between them, two at a time.
Traditionally, three methods of switching have been important: circuit switching,
packet switching, and message switching. The first two are commonly used today.
The third has been phased out in general communication but still has networking
applications. New switching strategies are gaining prominence, among them cell relay
(ATM) and Frame Relay.


CIRCUIT SWITCHING
Circuit switching creates a direct physical connection between two devices such as phones or
computers. Instead of point-to-point connections from the three computers on the
left (A, B, and C) to the four computers on the right (D, E, F, and G), requiring 12
links, we can use four switches to reduce the number and the total length of the links.
Computer A is connected through switches I, II, and III to computer D. By moving
the levers of the switches, any computer on the left can be connected to any computer on the right.
A circuit switch is a device with n inputs and m outputs that creates a temporary
connection between an input link and an output link. The number of inputs does not
have to match the number of outputs.

An n-by-n folded switch can connect n lines in full-duplex mode. For example, it can
connect n telephones in such a way that each phone can be connected to every other
phone.


Circuit switching today can use either of two technologies: space-division switches or
time-division switches.
Space-Division Switches
In space-division switching, the paths in the circuit are separated from each other
spatially. This technology was originally designed for use in analog networks but is
used currently in both analog and digital networks. It has evolved through a long
history of many designs.
Time-Division Switches
Time-division switching uses time-division multiplexing to achieve switching. There
are two popular methods used in time-division multiplexing: the time-slot
interchange and the TDM bus.
Time-Slot Interchange (TSI)
Consider a system connecting four input lines to four output lines, and imagine that each
input line wants to send data to an output line according to the following pattern:

1 → 3,  2 → 4,  3 → 1,  4 → 2

The example shows the result of ordinary time-division multiplexing. As you can see, the
desired task is not accomplished. Data are output in the same order as they are input.
Data from 1 go to 1, from 2 go to 2, from 3 go to 3, and from 4 go to 4.
However, if we insert a device called a time-slot interchange (TSI) into the link, the TSI
changes the ordering of the slots based on the desired connections. In this case, it
changes the order of data from A, B, C, D to C, D, A, B. Now, when the
demultiplexer separates the slots, it passes them to the proper outputs.


How does a TSI work? A TSI consists of random access memory (RAM) with several
memory locations. The size of each location is the same as the size of a single time slot.
The number of locations is the same as the number of inputs (in most cases, the
numbers of inputs and outputs are equal). The RAM fills up with incoming data from
time slots in the order received. Slots are then sent out in an order based on the
decisions of a control unit.
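A minimal Python sketch of the slot reordering performed by a TSI is given below (the names are illustrative); it reproduces the A, B, C, D → C, D, A, B example above.

    def time_slot_interchange(frame, connection_map):
        """Reorder the time slots of one TDM frame according to the desired
        input-to-output connections (a toy model of the TSI's RAM and control unit).

        frame          -- slot data in input order, e.g. ["A", "B", "C", "D"]
        connection_map -- connection_map[input_line] = output_line (1-based)
        """
        out = [None] * len(frame)
        for input_line, slot in enumerate(frame, start=1):    # write slots into RAM in arrival order
            out[connection_map[input_line] - 1] = slot         # read them out in the switched order
        return out

    # The connection pattern of the text: 1->3, 2->4, 3->1, 4->2.
    print(time_slot_interchange(["A", "B", "C", "D"], {1: 3, 2: 4, 3: 1, 4: 2}))
    # -> ['C', 'D', 'A', 'B']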
Public Switched Telephone Network (PSTN)
An example of a circuit-switched telephone network is the Public Switched
Telephone Network. Subscriber telephones are connected, through local loops, to
end offices (or central offices). A small town may have only one end office, but a
large city will have several end offices. Many end offices are connected to one toll
office. Several toll offices are connected to a primary office. Several primary offices
are connected to a sectional office, which normally serves more than one state. And
finally several sectional offices are connected to one regional office. All the regional
offices are connected using mesh topology.
Accessing the switching station at end offices is accomplished through dialing. In the
past, telephones featured rotary or pulse dialing, in which a digital signal was sent to
the end office for each number dialed. This type of dialing was prone to errors due to
the inconsistency of humans during the dialing process.
Today, dialing is accomplished through the Touch-Tone technique. In this method,
instead of sending a digital signal, the user sends two small bursts of analog signals,
called dual tone. The frequency of the signals sent depends on the row and column of
the pressed pad. Note that there is also a variation with an extra column (16-pad
Touch-Tone), which is used for special purposes. When a user dials, for example, the
number 8, two bursts of analog signals with frequencies 852 and 1336 Hz are sent to
the end office.
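The dual-tone pairs can be tabulated directly. The following Python sketch uses the standard DTMF row and column frequencies (the 4×4 keypad layout shown, including the extra A–D column, is the common arrangement and is included here for illustration).

    # Standard DTMF (Touch-Tone) row and column frequencies in Hz.
    ROWS = [697, 770, 852, 941]
    COLS = [1209, 1336, 1477, 1633]
    KEYPAD = [["1", "2", "3", "A"],
              ["4", "5", "6", "B"],
              ["7", "8", "9", "C"],
              ["*", "0", "#", "D"]]

    def dtmf_tones(key):
        """Return the (row, column) frequency pair sent when a key is pressed."""
        for r, row in enumerate(KEYPAD):
            if key in row:
                return ROWS[r], COLS[row.index(key)]
        raise ValueError("unknown key: %r" % key)

    print(dtmf_tones("8"))   # -> (852, 1336), as in the text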


PACKET SWITCHING
Circuit switching was designed for voice communication. In a telephone conversation,
for example, once a circuit is established, it remains, connected for the duration of the
session. Circuit switching creates temporary (dialed) or permanent (leased) dedicated
links that are well suited to this type of communication.
Circuit switching is less well suited to data and other nonvoice transmission. Non-
voice transmissions tend to be bursty, meaning that data come in spurts with idle gaps
between them. When circuit-switched links are used for data transmission, therefore,
the line is often idle and its facilities wasted.
A second weakness of circuit-switched connections for data transmission is in the data
rate. A circuit-switched link creates the equivalent of a single cable between two
devices and thereby assumes a single data rate for both devices. This assumption
limits the flexibility and usefulness of a circuit-switched connection for networks
interconnecting a variety of digital devices.
Third, circuit switching is inflexible. Once a circuit has been established, that circuit
is the path taken by all parts of the transmission, whether or not it remains the most
efficient or available.
Finally, circuit switching sees all transmissions as equal. Any request is granted
whatever link is available. But with data transmission we often want to be able to
prioritize: to say, for example, that transmission x can go anytime but transmission z
is time dependent and must go immediately.
A better solution for data transmission is packet switching. In a packet-switched
network, data are transmitted in discrete units of potentially variable length blocks
called packets. The maximum length of the packet is established by the network.
Longer transmissions are broken up into multiple packets. Each packet contains not
only data but also a header with control information (such as priority codes and source
and destination addresses). The packets are sent over the network node to node. At
each node, the packet is stored briefly and then routed according to the information in its
header.
There are two popular approaches to packet switching: datagram and virtual circuit.
Datagram Approach
In the datagram approach to packet switching, each packet is treated independently
from all others. Even when one packet represents just a piece of a multipacket
transmission, the network (and the network layer functions) treats it as though it existed
alone. Packets in this technology are referred to as datagrams.


The example shows how the datagram approach can be used to deliver four packets from
station A to station X. In this example, all four packets (or datagrams) belong to the
same message but may go by different paths to reach their destination.
This approach can cause the datagrams of a transmission to arrive at their destination
out of order. It is the responsibility of the transport layer in most protocols to reorder
the datagrams before passing them on to the destination port.

The link joining each pair of nodes can contain multiple channels. Each of these
channels is capable, in turn, of carrying datagrams either from several different
sources or from one source. Multiplexing can be done using TDM or FDM .
Devices A and B are sending datagrams to devices X and Y. Some paths use one channel
while others use more than one. As you can see, the bottom link is carrying two
packets from different sources in the same direction. The link on the right, however, is
carrying datagrams in two directions.


Virtual Circuit Approach


In the virtual circuit approach to packet switching, the relationship between all
packets belonging to a message or session is preserved. A single route is chosen
between sender and receiver at the beginning of the session. When the data are sent,
all packets of the transmission travel one after another along that route.
Today, virtual circuit transmission is implemented in two formats : switched virtual
circuit (SVC) and permanent virtual circuit (PVC).
SVC
The switched virtual circuit (SVC) format is comparable conceptually to dial-up
lines in circuit switching. In this method, a virtual circuit is created whenever it is
needed and exists only for the duration of the specific exchange. For example,
imagine that station A wants to send four packets to station X. First, A requests the
establishment of a connection to X. Once the connection is in place, the packets are
sent one after another in sequential order. When the last packet has been received and, if necessary,
acknowledged, the connection is released and that virtual circuit ceases to exist. Only
one single route exists for the duration of transmission, although the network could
pick an alternate route in response to failure or congestion.
Each time that A wishes to communicate with X, a new route is established. The route
may be the same each time, or it may differ in response to varying network
conditions.
PVC
Permanent virtual circuits (PVC) are comparable to leased lines in circuit switching.
In this method, the same virtual circuit is provided between two users on a continuous
basis.


The circuit is dedicated to the specific users. No one else can use it and, because it is
always in place, it can be used without connection establishment and connection
termination. Whereas two SVC users may get a different route every time they request
a connection, two PVC users always get the same route.
Circuit-Switched Connection versus Virtual-Circuit Connection
Although it seems that a circuit-switched connection and a virtual-circuit connection
are the same, there are differences:
Path versus route.
A circuit-switched connection creates a path between two points. The physical path is
created by setting the switches for the duration of the dial (dial-up line) or the
duration of the lease (leased line). A virtual-circuit connection creates a route between
two points. This means each switch creates an entry in its routing table for the
duration of the session (SVC) or the duration of the lease (PVC). Whenever the switch
receives a packet belonging to a virtual connection, it checks the table for the
corresponding entry and routes the packet out of one of its interfaces.
Dedicated versus sharing.
In a circuit-switched connection, the links that make a path are dedicated; they cannot
be used by other connections. In a virtual-circuit connection, the links that make a
route can be shared by other connections.


Path versus Route

Dedicated versus Shared


MESSAGE SWITCHING
Message switching is best known by the descriptive term store and forward. In this
mechanism, a node (usually a special computer with a number of disks) receives a
message, stores it until the appropriate route is free, and then sends it along.
Store and forward is considered a switching technique because there is no direct link
between the sender and receiver of a transmission. A message is delivered to the node
along one path and then rerouted along another to its destination.
Note that in message switching, the messages are stored and relayed from secondary
storage (disk), while in packet switching the packets are stored and forwarded from
primary storage (RAM)
Message switching was common in the 1960s and 1970s. Its primary uses have been
to provide high-level network services (e.g., delayed delivery, broadcast) for
unintelligent devices. Since such devices have been replaced, this type of switch has
virtually disappeared. Also, the delays inherent in the process, as well as the
requirements for large-capacity storage media at each node, make it unpopular for
direct communication.


TCP/IP PROTOCOL Suite: An Overview


TRANSMISSION CONTROL PROTOCOL


INTERNET PROTOCOL

INTRODUCTION
One of the problems with networks that is prevalent today is that there are many
different protocols and network types. The hardware choices are confusing enough,
but software protocol suites that run over the various types of network hardware
solutions can absolutely boggle the mind. Ethernet, for instance, boasts a vast number
of protocol suites such as DDCMP, LAT, MOP, XNS, SCS, TCP/IP, VRP, NRP, and
a slew of other three-letter acronyms for various protocols that will solve all the
problems a customer could have.
Within the scheme of protocols, however, some still seem to rear their ugly heads, no
matter how hard the industry tries to put them down or get rid of them. One suite,
Transmission control Protocol/Internet Protocol (TCP/IP), is such an occurrence.
Every other vendor of networks will claim that their protocol is better and that TCP/IP
is going away. Some will point to the decisions made by the US Department of
Defense (DOD) to eventually migrate to internationally recognized and standardized
communications hardware and protocols, obviating the need for TCP/IP and
eventually replacing it. Some view TCP/IP as a workhorse whose time has come to be
put out to pasture.
Then there are the zealots—those that think that the ONLY communications protocol
suite for use in the world is TCP/IP and all others are fluff. These folks are dangerous
because they not only are vocal about TCP/IP, many times they are UNIX zealots as
well.
Somewhere in the middle of the two camps are those who do not know what to do
with TCP/IP or, worse, do not even really understand its significance to networks.
Unfortunately, these individuals are usually the managers of such diverse camps of
attitudes and must make decisions on whether to use TCP/IP on a project or not.
Although it is the ISO open systems protocols which have received most recent
publicity, there are other well established protocol sets, particularly on Ethernet,
which have a large share of the current LAN market. Some argue that these protocols
offer a better alternative to the largely untried and potentially cumbersome ISO set,
but most manufacturers indicate a willingness to adopt ISO protocols at some point in
the future.


The non-ISO protocols described in this chapter illustrate a different approach to Open
Systems working from that of the ISO protocol set. TCP/IP is a vendor-independent wide
area network protocol set which has been widely used on LANs for peer-to-peer
communications. Here, we will examine the TCP and IP networking protocols and
some implementations that have become de facto standards in the military area as
well as in academic and UNIX areas.
TCP/IP PROTOCOL SET STRUCTURE
The TCP/IP suite is not a single protocol. Rather, it is a four-layer communication
architecture that provides some reasonable network features, such as end-to-end
communications, unreliable communications line fault handling, packet sequencing,
internetwork routing, and specialized functions unique to DOD communications
needs such as standardized message priorities. The bottom layer, network services,
provides for communication to network hardware. Network hardware used in the
various networks throughout the DOD typically reflects the usage of FIPS (Federal
Information Processing Standard) compliant network hardware (such as the IEEE 802
series of LANs and other technologies such as X.25). The layer above the network
services layer is referred to as the internet protocol (IP) layer. The IP layer is
responsible for providing a datagram service that routes data packets between
dissimilar network architectures (such as between Ethernet and, say, X.25). IP has a
few interesting qualities, one of which is the issue of data reliability. As a datagram
service, IP does not guarantee delivery of data. Basically, if the data gets there,
great.

ISO Model               TCP/IP
Application             Application level:  TELNET (interactive terminal), FTP (file transfer), NFS (network file store)
Session                 Host level:         TCP, UDP
Transport
Network                 Gateway level:      IP & ICMP
Data Link               Network level:      LLC

FIG. 1.1 : TCP/IP PROTOCOL RELATIONSHIPS


If not, that‘s OK too. Data concurrency, sequencing, and delivery guarantee is the job
of the TCP protocol. TCP provides for error control, retransmission, packet
sequencing, and many other capabilities. It is very complex and provides most of the
features of the connection to other applications on other systems.
To understand properly what TCP/IP is all about, it is important to understand that, a)
it is not OSI in implementation ( although some argue that there are substantial


similarities) and b) it is a unique network architecture that provides what are


considered traditional network services in a manner that can be overhead intensive in
some implementations.
The structure of the TCP/IP protocol set is shown in Figure 1.1, along with the
approximately equivalent ISO model layers. It can be seen that this is essentially a
four layer model, although the layers are not as clear cut as in the ISO model, and the
model has been drawn from analysis of what is used, rather than being defined first
and then the protocols specified. The TCP/IP philosophy is the antithesis of the ISO
philosophy. In ISO protocols, everything appears to be put into the protocol, but parts
are made optional. In TCP/IP, the protocols are kept very simple. If more
functionality is required, then another protocol is added to deal with the situation.
The main protocols are as follows:
IP     The Internet Protocol, which provides a connectionless datagram ―network‖ layer.
ICMP   The Internet Control Message Protocol is an example of the bolt-on approach
       mentioned above. It adds functionality to the IP protocol and can be considered
       an extension to it.
UDP    The User Datagram Protocol provides the rough equivalent of the ISO
       connectionless transport service.
TCP    The Transmission Control Protocol is a connection-oriented, reliable,
       end-to-end transport protocol.
The protocols which run above TCP include TELNET, a terminal access protocol, and
a file transfer protocol FTP. The four main protocols are now examined further as a
contrast to the ISO approach.


Fig. 1.2 : Format of an internet datagram header (each row is one 32-bit word).

    Version (4) | Header Length (4) | Type of Service (8) | Total Length (16)
    Identification (16)             | Flags (3)           | Fragment Offset (13)
    Time to Live (8)                | Protocol (8)        | Header Checksum (16)
    Source IP Address (32)
    Destination IP Address (32)
    Options (0 or 32 bits, if any)
    Data (varies, if any)


INTERNET PROTOCOL
There is a second difference in philosophy between ISO and the TCP/IP approach, which revolves around the word 'network'. In the TCP/IP model, a network is an individual packet-switched network which may be a LAN or a WAN, but is generally under the control of one organisation. These networks connect to each other by gateways, and the resulting collection of such networks is called a catenet (from concatenation). The Internet Protocol provides for the transmission of datagrams between systems over the whole catenet. It specifically allows for the fragmentation and reassembly of the datagrams at the gateways, as the underlying networks may demand different packet sizes.
The Internet Protocol (US Military Standard MIL-STD-1777) is a very simple protocol, with no
mechanism for end-to-end data reliability, flow control or sequencing. The header,
however, shown in Figure 1.2, is quite complex, the fields being as follows:

Version : The version number of IP. There have been several new releases, which (given the size of ARPANET) must co-exist for some time.
IHL : The IP header length. Because of the options field, the header is not a fixed length. This field shows where the data starts.
Type of Service : This field allows for a priority system to be imposed, plus an indication of the desired, but not guaranteed, reliability required.
Length : The total length of the IP packet. Although there is a theoretical maximum of 64 Kbytes, most networks operate with much smaller packets, though all must accept at least 576 bytes.
ID/Flags/Offset : These fields enable a gateway to split up the datagram into smaller fragments. The ID field ensures that the receiver can piece together the fragments from the correct datagrams, as fragments from many datagrams may arrive in any order. The offset tells how far down the datagram this fragment is, and the flags can be used to mark the datagram as non-fragmentable.
Time to live : This is a count which limits the lifetime of a datagram on the catenet. Each time it passes through a gateway, the count is decremented by one. If it reaches zero, the gateway does not forward it. This prevents permanently circulating datagrams.
Protocol : This indicates which higher-level protocol is being carried, e.g. TCP or UDP.
Checksum : This checksum covers the header only. It is up to the higher layers to detect transmission errors in the data.
Source/dest Address : To assist the gateways to route datagrams by the most efficient path, each IP address is structured into a Network Number and a local address. There are three classes of network providing different numbers of locally administered addresses.
Options : The final part of the header is a variable number of optional fields, which are used, for example, to enforce security or network management.
Padding : This field is used to align the header to the next 32-bit boundary.
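To make the Checksum field concrete, the sketch below (a simplified illustration in Python, not taken from any particular implementation; all field values are invented) computes the standard ones'-complement checksum over a 20-byte IP header: the header is treated as a sequence of 16-bit words, summed with end-around carry, and the result is complemented.

import struct

def ip_header_checksum(header: bytes) -> int:
    # Ones'-complement sum of 16-bit words, then complemented.
    if len(header) % 2:
        header += b"\x00"                          # pad to a whole number of words
    total = 0
    for (word,) in struct.iter_unpack("!H", header):
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # fold in the end-around carry
    return ~total & 0xFFFF

# A 20-byte header with the checksum field set to zero while it is computed.
header = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5,            # version 4, IHL = 5 words
    0,                       # type of service
    40,                      # total length
    0x1234,                  # identification
    0,                       # flags / fragment offset
    64,                      # time to live
    6,                       # protocol (6 = TCP)
    0,                       # header checksum, zero for now
    bytes([192, 0, 2, 1]),   # source address (example)
    bytes([192, 0, 2, 2]),   # destination address (example)
)
print(hex(ip_header_checksum(header)))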
Because there is no facility for error reporting in IP (for example, the sender of a datagram is not informed whether the intended recipient is reachable), an extra protocol is used, particularly to help gateways between networks. This is called the Internet Control Message Protocol (ICMP) which, although it is carried over IP, is considered to be an integral part of it. It does not help in making IP reliable, however; it merely reports errors without trying to recover from them.
Examples of ICMP messages include TIME EXCEEDED when the lifetime of a
datagram expires, and DESTINATION UNREACHABLE when a gateway or
network has failed. The gateways also exchange routing information using another
extra protocol, called the gateway-to-gateway protocol. This enables the gateways to
have up-to-date information on the loading on certain routes, so that bottlenecks can
be avoided.
USER DATAGRAM PROTOCOL
The User Datagram Protocol (UDP) provides transport service to applications. Unlike
the ISO protocols, which are layer independent, it assumes that IP is running below,
and implementations must have access to incoming IP headers.
The UDP header, shown in Figure 1.3 is very simple, and can be considered as an
extension of the IP header to permit multiple services to be addressed within the same
IP network address.
Source Port (16 bits) | Destination Port (16 bits)
Length (16 bits) | Checksum (16 bits)
DATA

Fig. 1.3 : Format of user datagram protocol header.
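The simplicity of the UDP header is easy to demonstrate by building one directly. The sketch below (port numbers and payload are invented; the optional checksum is left at zero, which UDP over IPv4 permits) packs the four 16-bit fields in front of the data.

import struct

payload = b"hello"
source_port = 5000            # example value
dest_port = 53                # example value
length = 8 + len(payload)     # the UDP header itself is 8 octets
checksum = 0                  # 0 means "no checksum" for UDP over IPv4

udp_datagram = struct.pack("!HHHH", source_port, dest_port, length, checksum) + payload
print(udp_datagram.hex())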


TRANSMISSION CONTROL PROTOCOL


TCP (US Military Standard MIL-STD-1778) provides a highly reliable, connection-oriented, end-to-end transport service between processes in end systems connected to the catenet. TCP only assumes that the layer below offers an unreliable datagram service, and thus could run over any such protocol. In practice, however, it is invariably linked to IP. TCP provides the types of facility associated with the ISO Class 4 transport service, including error recovery, sequencing of packets, flow control by the windowing method, and the support of multiplexed connections from the layer above. The format of the TCP header is shown in Figure 1.4. The operational procedures are similar to the ISO connection-oriented protocols, such as LLC Type 2. The fields in the header are as follows:

Source Port (16 bits) | Destination Port (16 bits)
Sequence Number (32 bits)
Acknowledgement Number (32 bits)
Data Offset | Reserved | Flags | Window (16 bits)
Checksum (16 bits) | Urgent Pointer (16 bits)
Options | Padding
DATA
Fig 1.4 : Format of TCP header


Source/dest ports : These fields identify multiple streams to the layer above.
Sequence/ack number : These are used for the windowing acknowledgement technique.
Data Offset : This is the number of 32-bit words in the TCP header which, like the IP header, has a variable-length options field.
Flag bits : There are several bits used as status indicators to show, for example, the resetting of the connection.
Window : This field is used by the receiver to set the window size.
Checksum : Unlike the IP checksum, this covers both the TCP header and the data.
Urgent pointer : The sender can indicate that urgent data is coming and urge the receiver to handle it as quickly as possible.
Options : This variable-sized field contains some negotiation parameters, to set the size of the TCP packets for example.
Padding : To align to the next 32-bit boundary.
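As a rough illustration of how these fields are laid out, the following sketch (purely illustrative; the example segment is hand-made, not captured from a network) unpacks the 20-byte fixed part of a TCP header and extracts the data offset and the flag bits.

import struct

def parse_tcp_fixed_header(segment: bytes) -> dict:
    # Unpack the 20-byte fixed TCP header; any options follow it.
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    data_offset = (offset_flags >> 12) & 0xF     # header length in 32-bit words
    flags = offset_flags & 0x3F                  # URG/ACK/PSH/RST/SYN/FIN bits
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "header_bytes": data_offset * 4,
        "flags": flags, "window": window,
        "checksum": checksum, "urgent_pointer": urgent,
    }

# A hand-made segment: ports 1024 -> 80, SYN flag set, 20-byte header, no data.
example = struct.pack("!HHIIHHHH", 1024, 80, 1000, 0, (5 << 12) | 0x02, 8192, 0, 0)
print(parse_tcp_fixed_header(example))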

The procedures used by the TCP protocol are too complex to describe here. It can be seen, however, that the catenet style of networking has benefits for linking LANs; hence the widespread use of TCP/IP on LANs. It should not be assumed, however, that TCP/IP networks are immune from the compatibility problems discussed earlier for ISO networks. Differences in interpretation of the protocols can drastically reduce interoperability and there are reports of deficiencies in many of the protocols. One interesting recent development, however, is an experimental implementation of the ISO transport service on top of TCP, which means that ISO applications could be carried over IP catenets. TCP/IP can also co-exist with ISO and other protocols on a LAN, and it can be expected that the production of protocol converters should ease the transition between TCP/IP and ISO for many users.


IP ADDRESSING, SUBNETTING
AND
SUPERNETTING


INTRODUCTION

In the mid-1990s, the Internet was a dramatically different network from the one first established in the early 1980s. There is a direct relationship between the value of the Internet and the number of sites connected to it. Over the past few years, the Internet has experienced two major scaling issues as it has struggled to provide continuous and uninterrupted growth:
1. the eventual exhaustion of the IPv4 address space, and
2. the ability to route traffic between the ever-increasing number of networks that comprise the Internet.
The first problem is concerned with the eventual depletion of the IP address space.
IP ADDRESS
The current version of IP, IP version 4 (IPv4), defines a 32-bit address which means
that there are only 2^32 (4,294,967,296) IPv4 addresses available. This might seem
like a large number of addresses, but as new markets open and a significant portion of
the world's population becomes candidates for IP addresses, the finite number of IP
addresses will eventually be exhausted. The address shortage problem is aggravated
by the fact that portions of the IP address space have not been efficiently allocated.
Also, the traditional model of classful addressing does not allow the address space to
be used to its maximum potential.
In order to provide the flexibility required to support different size networks, the
designers decided that the IP address space should be divided into three different
address classes - Class A, Class B, and Class C. This is often referred to as "classful"
addressing because the address space is split into three predefined classes, groupings,
or categories. Each class fixes the boundary between the network-prefix and the host-
number at a different point within the 32-bit address.
One of the fundamental features of classful IP addressing is that each address contains
a self-encoding key that identifies the dividing point between the network-prefix and
the host-number.


Class A Networks (/8 Prefixes)


Each Class A network address has an 8-bit network-prefix with the highest order bit
set to 0 and a seven-bit network number, followed by a 24-bit host-number. Today, it
is no longer considered 'modern' to refer to a Class A network. Class A networks are
now referred to as "/8s" (pronounced "slash eight" or just "eights") since they have an
8-bit network-prefix. A maximum of 126 (2^7 - 2) /8 networks can be defined. The
calculation requires that the 2 is subtracted because the /8 network 0.0.0.0 is reserved
for use as the default route and the /8 network 127.0.0.0 (also written 127/8 or
127.0.0.0/8) has been reserved for the "loopback" function. Each /8 supports a
maximum of 16,777,214 (2^24 - 2) hosts per network. The host calculation requires that
2 is subtracted because the all-0s ("this network") and all-1s ("broadcast") host-
numbers may not be assigned to individual hosts.
Class B Networks (/16 Prefixes)
Each Class B network address has a 16-bit network-prefix with the two highest order
bits set to 1-0 and a 14-bit network number, followed by a 16-bit host-number. Class
B networks are now referred to as "/16s" since they have a 16-bit network-prefix. A
maximum of 16,384 (2^14) /16 networks can be defined with up to 65,534 (2^16 - 2)
hosts per network.
Class C Networks (/24 Prefixes)
Each Class C network address has a 24-bit network-prefix with the three highest
order bits set to 1-1-0 and a 21-bit network number, followed by an 8-bit host-
number. Class C networks are now referred to as "/24s" since they have a 24-bit
network-prefix. A maximum of 2,097,152 (2^21) /24 networks can be defined with up
to 254 (2^8 - 2) hosts per network.
Dotted-Decimal Notation
To make Internet addresses easier for human users to read and write, IP addresses are
often expressed as four decimal numbers, each separated by a dot. This format is
called "dotted-decimal notation."Dotted-decimal notation divides the 32-bit Internet
address into four 8-bit (byte) fields and specifies the value of each field independently
as a decimal number with the fields separated by dots.
The classful A, B, and C octet boundaries were easy to understand and implement, but
they did not foster the efficient allocation of a finite address space. A /24, which
supports 254 hosts, is too small while a /16, which supports 65,534 hosts, is too large.
In the past, the Internet has assigned sites with several hundred hosts a single /16
address instead of a couple of /24s addresses.
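A short sketch using Python's standard ipaddress module (the addresses are arbitrary examples) illustrates that dotted-decimal notation is just a readable rendering of a 32-bit number, and reproduces the classful host counts quoted above.

import ipaddress

addr = ipaddress.IPv4Address("130.5.5.25")
print(int(addr))                           # the 32-bit value behind the dotted form
print(ipaddress.IPv4Address(int(addr)))    # and back again -> 130.5.5.25

# Hosts available behind each classful prefix: 2^(32 - prefix) - 2.
for name, prefix in (("Class A (/8)", 8), ("Class B (/16)", 16), ("Class C (/24)", 24)):
    print(name, 2 ** (32 - prefix) - 2, "hosts per network")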


SUBNETTING

In 1985, RFC 950 defined a standard procedure to support the subnetting, or division,
of a single Class A, B, or C network number into smaller pieces. Subnetting was
introduced to overcome some of the problems that parts of the Internet were
beginning to experience with the classful two-level addressing hierarchy:
Subnetting attacked the expanding routing table problem by ensuring that the subnet
structure of a network is never visible outside of the organization's private network.
The route from the Internet to any subnet of a given IP address is the same, no matter
which subnet the destination host is on. This is because all subnets of a given network
number use the same network-prefix but different subnet numbers. The routers within
the private organization need to differentiate between the individual subnets, but as
far as the Internet routers are concerned, all of the subnets in the organization are
collected into a single routing table entry. This allows the local administrator to
introduce arbitrary complexity into the private network without affecting the size of
the Internet's routing tables. Subnetting overcame the registered number issue by
assigning each organization one (or at most a few) network number(s) from the IPv4
address space. The organization was then free to assign a distinct subnetwork number
for each of its internal networks. This allows the organization to deploy additional
subnets without needing to obtain a new network number from the Internet.
The router accepts all traffic from the Internet addressed to network 130.5.0.0, and forwards traffic to the interior subnetworks based on the third octet of the classful address. The deployment of subnetting within the private network provides several benefits:
1. The size of the global Internet routing table does not grow, because the site administrator does not need to obtain additional address space and the routing advertisements for all of the subnets are combined into a single routing table entry.
2. The local administrator has the flexibility to deploy additional subnets without obtaining a new network number from the Internet.
3. Route flapping (i.e., the rapid changing of routes) within the private network does not affect the Internet routing table, since Internet routers do not know about the reachability of the individual subnets - they just know about the reachability of the parent network number.
Extended-Network-Prefix
Internet routers use only the network-prefix of the destination address to route traffic to a subnetted environment. Routers within the subnetted environment use the extended-network-prefix to route traffic between the individual subnets. The extended-network-prefix is composed of the classful network-prefix and the subnet-number.


The extended-network-prefix has traditionally been identified by the subnet mask. For
example, if you have the /16 address of 130.5.0.0 and you want to use the entire third
octet to represent the subnet-number, you need to specify a subnet mask of
255.255.255.0. The bits in the subnet mask and the Internet address have a one-to-one
correspondence. The bits of the subnet mask are set to 1 if the system examining the
address should treat the corresponding bit in the IP address as part of the extended-
network- prefix. The bits in the mask are set to 0 if the system should treat the bit as
part of the host-number.
The standards describing modern routing protocols often refer to the extended-
network-prefix- length rather than the subnet mask. The prefix length is equal to the
number of contiguous one-bits in the traditional subnet mask. This means that
specifying the network address 130.5.5.25 with a subnet mask of 255.255.255.0 can
also be expressed as 130.5.5.25/24. The /<prefix-length> notation is more compact
and easier to understand than writing out the mask in its traditional dotted-decimal
format.
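The equivalence of the two notations can be checked with the ipaddress module; the sketch below (reusing the 130.5.5.25 example from the text) shows that the mask 255.255.255.0 and the /24 prefix length describe the same extended-network-prefix.

import ipaddress

with_mask   = ipaddress.ip_interface("130.5.5.25/255.255.255.0")
with_prefix = ipaddress.ip_interface("130.5.5.25/24")

print(with_mask.network)                            # 130.5.5.0/24
print(with_prefix.network)                          # 130.5.5.0/24
print(with_mask.network == with_prefix.network)     # True - same subnet either way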
Variable Length Subnet Masks (VLSM)
In 1987, RFC 1009 specified how a subnetted network could use more than one
subnet mask. When an IP network is assigned more than one subnet mask, it is
considered a network with "variable length subnet masks" since the extended-
network-prefixes have different lengths.

RIP-1 Permits Only a Single Subnet Mask
When using RIP-1, subnet masks have to be uniform across the entire network-prefix.
RIP-1 allows only a single subnet mask to be used within each network number
because it does not provide subnet mask information as part of its routing table update
messages. In the absence of this information, RIP-1 is forced to make very simple
assumptions about the mask that should be applied to any of its learned routes.
How does a RIP-1 based router know what mask to apply to a route when it learns a
new route from a neighbor? If the router has a subnet of the same network number
assigned to a local interface, it assumes that the learned subnetwork was defined using
the same mask as the locally configured interface. However, if the router does not
have a subnet of the learned network number assigned to a local interface, the router
has to assume that the network is not subnetted and applies the route's natural classful
mask. Assuming that Port 1 of a router has been assigned the IP address
130.24.13.1/24 and that Port 2 has been assigned the IP address 200.14.13.2/24. If the
router learns about network 130.24.36.0 from a neighbor, it applies a /24 mask since
Port 1 is configured with another subnet of the 130.24.0.0 network. However, when
the router learns about network 131.25.0.0 from a neighbor, it assumes a "natural" /16
mask since it has no other masking information available.
How does a RIP-1 based
router know if it should include the subnet-number bits in a routing table update to a
RIP-1 neighbor? A router executing RIP-1 will only advertise the subnet-number bits
on another port if the update port is configured with a subnet of the same network
number. If the update port is configured with a different subnet or network number,
the router will only advertise the network portion of the subnet route and "zero-out"
the subnet-number field.
For example, assume that Port 1 of a router has been assigned the IP address
130.24.13.1/24 and that Port 2 has been assigned the IP address 200.14.13.2/24. Also,
assume that the router has learned about network 130.24.36.0 from a neighbor. Since
Port 1 is configured with another subnet of the 130.24.0.0 network, the router assumes that network 130.24.36.0 has a /24 subnet mask. When it comes to advertising this route, it advertises 130.24.36.0 on Port 1, but it only advertises 130.24.0.0 on Port 2.
For these reasons, RIP-1 is limited to only a single subnet mask for each network number.
However, there are several advantages to be gained if more than one subnet mask can be assigned to a given IP network number:
1. Multiple subnet masks permit more efficient use of an organization's assigned IP address space.
2. Multiple subnet masks permit route aggregation, which can significantly reduce the amount of routing information at the "backbone" level within an organization's routing domain.

Efficient Use of the Organization's Assigned IP Address Space
VLSM supports more efficient use of an organization's assigned IP address space. One of the major problems with the earlier limitation of supporting only a single subnet mask across a given network-prefix was that once the mask was selected, it locked the organization into a fixed number of fixed-sized subnets. For example, assume that a network administrator decided to configure the 130.5.0.0/16 network with a /22 extended-network-prefix.
A /16 network with a /22 extended-network-prefix permits 64 subnets (2^6), each of which supports a maximum of 1,022 hosts (2^10 - 2). This is fine if the organization wants to deploy a number of large subnets, but what about the occasional small subnet containing only 20 or 30 hosts? Since a subnetted network could have only a single mask, the network administrator was still required to assign the 20 or 30 hosts to a subnet with a 22-bit prefix. This assignment would waste approximately 1,000 IP host addresses for each small subnet deployed! Limiting the association of a network number with a single mask did not encourage the flexible and efficient use of an organization's address space. One solution to this problem was to allow a subnetted network to be assigned more than one subnet mask. Assume that in the previous example, the network administrator is also allowed to configure the 130.5.0.0/16 network with a /26 extended-network-prefix. A /16 network address with a /26 extended-network-prefix permits 1,024 subnets (2^10), each of which supports a maximum of 62 hosts (2^6 - 2). The /26 prefix would be ideal for small subnets with fewer than 60 hosts, while the /22 prefix is well suited for larger subnets containing up to 1,000 hosts.
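The trade-off between the /22 and /26 masks can be reproduced in a few lines of Python (a sketch only; the prefix values come from the example in the text).

import ipaddress

net = ipaddress.ip_network("130.5.0.0/16")

# Carving the /16 into /22s gives 64 large subnets of 1,022 usable hosts each.
large = list(net.subnets(new_prefix=22))
print(len(large), large[0], large[0].num_addresses - 2)

# With VLSM, one of those /22s can itself be carved into /26s of 62 hosts each,
# so small groups of 20 or 30 hosts no longer waste a whole /22.
small = list(large[0].subnets(new_prefix=26))
print(len(small), small[0], small[0].num_addresses - 2)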


Conceptually, a network is first divided into subnets, some of the subnets are further divided into sub-subnets, and some of the sub-subnets are divided into sub-sub-subnets. This allows the detailed structure of routing information for one subnet group to be hidden from routers in another subnet group.

11.0.0.0/8
    11.1.0.0/16   11.2.0.0/16   11.3.0.0/16  ...  11.252.0.0/16   11.253.0.0/16   11.254.0.0/16
    11.1.0.0/16 is further divided into:   11.1.1.0/24   11.1.2.0/24  ...  11.1.253.0/24   11.1.254.0/24
    11.253.0.0/16 is further divided into: 11.253.32.0/19   11.253.64.0/19  ...  11.253.160.0/19   11.253.192.0/19
    11.1.253.0/24 is further divided into: 11.1.253.32/27   11.1.253.64/27  ...  11.1.253.160/27   11.1.253.192/27

The 11.0.0.0/8 network is first configured with a /16 extended-network-prefix. The


11.1.0.0/16 subnet is then configured with a /24 extended-network-prefix and the
11.253.0.0/16 subnet is configured with a /19 extended-network-prefix. Note that the
recursive process does not require that the same extended-network-prefix be assigned
at each level of the recursion. Also, the recursive sub-division of the organization's
address space can be carried out as far as the network administrator needs to take it.
Likewise, Router C is able to summarize the six subnets behind it into a single
advertisement (11.253.0.0/16). Finally, since the subnet structure is not visible outside
of the organization, Router A injects a single route into the global Internet's routing
table - 11.0.0.0/8 (or 11/8).
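The aggregation performed by Routers A and C can be checked programmatically; the sketch below (addresses taken from the example above) verifies that the /19 subnets all fall inside the single 11.253.0.0/16 advertisement, and that the /16 itself falls inside the one 11.0.0.0/8 route injected into the Internet.

import ipaddress

summary = ipaddress.ip_network("11.253.0.0/16")
subnets = [ipaddress.ip_network(n) for n in
           ("11.253.32.0/19", "11.253.64.0/19", "11.253.160.0/19", "11.253.192.0/19")]

# Every /19 is covered by the single /16 advertisement ...
print(all(s.subnet_of(summary) for s in subnets))               # True

# ... and the /16 is in turn covered by the single route 11.0.0.0/8.
print(summary.subnet_of(ipaddress.ip_network("11.0.0.0/8")))    # True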


Classless Inter-Domain Routing (CIDR)


By 1992, the exponential growth of the Internet was beginning to raise serious concerns among members of the IETF about the ability of the Internet's routing system to scale and support future growth. These problems were related to:
1. the near-term exhaustion of the Class B network address space,
2. the rapid growth in the size of the global Internet's routing tables, and
3. the eventual exhaustion of the 32-bit IPv4 address space.
Projected Internet growth figures made it clear that the first two problems were likely to become critical by 1994 or 1995. The response to these immediate challenges was the development of the concept of Supernetting or Classless Inter-Domain Routing (CIDR). The third problem, which is of a more long-term nature, is currently being explored by the IP Next Generation (IPng or IPv6) working group of the IETF. CIDR was officially documented in September 1993 in RFC 1517, 1518, 1519, and 1520.
CIDR supports two important features that benefit the global Internet routing system:
1. CIDR eliminates the traditional concept of Class A, Class B, and Class C network addresses. This enables the efficient allocation of the IPv4 address space, which will allow the continued growth of the Internet until IPv6 is deployed.
2. CIDR supports route aggregation, where a single routing table entry can represent the address space of perhaps thousands of traditional classful routes. This allows a single routing table entry to specify how to route traffic to many individual network addresses. Route aggregation helps control the amount of routing information in the Internet's backbone routers, reduces route flapping (rapid changes in route availability), and eases the local administrative burden of updating external routing information.
Without the rapid deployment of CIDR in 1994 and 1995, the Internet routing tables would have had in excess of 70,000 routes (instead of the current 30,000+) and the Internet would probably not be functioning today!
CIDR Promotes the Efficient Allocation of the IPv4 Address Space
CIDR eliminates the traditional concept of Class A, Class B, and Class C network addresses and replaces them with the generalized concept of a "network-prefix." Routers use the network-prefix, rather than the first 3 bits of the IP address, to determine the dividing point between the network number and the host number. As a result, CIDR supports the deployment of arbitrarily sized networks rather than the standard 8-bit, 16-bit, or 24-bit network numbers associated with classful addressing. In the CIDR model, each piece of routing information is advertised with a bit mask (or prefix-length). The prefix-length is a way of specifying the number of leftmost contiguous bits in the network-portion of each routing table entry. For example, a network with 20 bits of network-number and 12 bits of host-number would be advertised with a 20-bit prefix length (a /20).


The clever thing is that the IP address advertised with the /20 prefix could be a former
Class A, Class B, or Class C. Routers that support CIDR do not make assumptions
based on the first 3-bits of the address, they rely on the prefix-length information
provided with the route. In a classless environment, prefixes are viewed as bitwise contiguous blocks of the IP address space. For example, all /20 prefixes represent the same amount of address space (2^12 or 4,096 host addresses).
Furthermore, a /20 prefix can be assigned to a traditional Class A, Class B, or Class C
network number.
It is important to note that there may be severe host implications when you deploy
CIDR based networks. Since many hosts are classful, their user interface will not
permit them to be configured with a mask that is shorter than the "natural" mask for a
traditional classful address. For example, potential problems could exist if you wanted
to deploy 200.25.16.0 as a /20 to define a network capable of supporting 4,094 (2^12 - 2) hosts. The software executing on each end station might not allow a traditional
Class C (200.25.16.0) to be configured with a 20-bit mask since the natural mask for a
Class C network is a 24-bit mask. If the host software supports CIDR, it will permit
shorter masks to be configured. However, there will be no host problems if you were
to deploy the 200.25.16.0/20 (a traditional Class C) allocation as a block of 16 /24s
since non-CIDR hosts will interpret their local /24 as a Class C. Likewise,
130.14.0.0/16 (a traditional Class B) could be deployed as a block of 255 /24s since
the hosts will interpret the /24s as subnets of a /16. If host software supports the
configuration of shorter than expected masks, the network manager has tremendous
flexibility in network design and address allocation.
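The arithmetic above is easy to verify: a /20 always covers 2^12 = 4,096 addresses, whatever the traditional class of the number it is attached to, and the same block can be handed to classful hosts as sixteen /24s. The sketch below (example prefixes only) demonstrates both points with the ipaddress module.

import ipaddress

block = ipaddress.ip_network("200.25.16.0/20")    # a /20 carved from former Class C space
print(block.num_addresses)                        # 4096 addresses, i.e. 2^12

# Deploying the same /20 as a block of sixteen /24s, as non-CIDR hosts would see it.
for subnet in block.subnets(new_prefix=24):
    print(subnet)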



LAN TECHNOLOGIES
Introduction
Networking means interconnection of computers. These computers can be linked
together for different purposes and using a variety of different cabling types.
The basic reasons why computers need to be networked are :
1. To share resources (files, printers, modems, fax machines etc.)
2. To share application software (MS Office, Adobe Publisher etc.)
3. Increase productivity (makes it easier to share data amongst users)
Take for example a typical office scenario where a number of users require access to
some common information. As long as all user computers are connected via a
network, they can share their files, exchange mail, schedule meetings, send faxes and
print documents all from any point of the network. It is not necessary for users to
transfer files via electronic mail or floppy disk, rather, each user can access all the
information they require, thus leading to less wastage of time and hence increased
productivity.
Imagine the benefits of a user being able to directly fax the Word document they are
working on, rather than print it out, then feed it into the fax machine, dial the number
etc.
Small networks are often called Local Area Networks (LAN). A LAN is a network
allowing easy access to other computers or peripherals. The typical characteristics of
a LAN are :
1. physically limited distance (< 2km)
2. high bandwidth (> 1 Mbps)
3. inexpensive cable media (coax or twisted pair)
4. data and hardware sharing between users
5. owned by the user
The factors that determine the nature of a LAN are :
1. Topology
2. Transmission medium
3. Medium access control technique
LAN Architecture
The layered protocol concept can be employed to describe the architecture of a LAN,
wherein each layer represents the basic functions of a LAN.
Protocol Architecture
The Protocols defined for LAN transmission address issues relating to the
transmission of blocks of data over the network. In the context of OSI model, higher
layer protocols (layer 3 or 4 and above) are independent of network architecture and
are applicable to LAN. Therefore LAN protocols are concerned primarily with the
lower layers of the OSI model.
Figure 1 relates the LAN protocols to the OSI model. This architecture has been
developed by the IEEE 802 committee and has been adopted by all organisations
concerned with the specification of LAN standards. It is generally referred to as the
IEEE 802 reference model.


FIG. 1 IEEE 802 Protocol Layers compared to OSI : The OSI Reference Model layers (Application, Presentation, Session, Transport, Network, Data Link, Physical) are shown alongside the IEEE 802 Reference Model. The upper-layer protocols lie outside the scope of the IEEE 802 standards; the OSI Data Link layer corresponds to the Logical Link Control (LLC) and Medium Access Control sublayers, with LLC Service Access Points (LSAPs) at the top of the LLC sublayer; the Physical layer and the transmission medium complete the scope of the IEEE 802 standards.

The lowest layer of the IEEE 802 reference model corresponds to the physical layer
of the OSI model, and includes the following functions :
1. Encoding/ decoding of signals
2. Preamble generation/ removal (for synchronisation)
3. Bit transmission/ reception
The physical layer of the 802 model also includes a specification for the transmission
medium and the topology. Generally, this is considered below the lowest layer of the
OSI model. However, the choice of the transmission medium and topology is critical
in LAN design, and so a specification of the medium is included.
Above the physical layer are the functions associated with providing service to the
LAN users. These comprise :
1. Assembling data into a frame with address and error-detection fields for
onward transmission.
2. Disassemble frame, perform address recognition and error detection during
reception.
3. Supervise and control the access to the LAN transmission medium.
4. Provide an interface to the higher layers and perform flow control and error
control.
The above functions are typically associated with OSI layer 2. The last function noted
above is grouped into a logical link control (LLC) layer. The functions in the first
three bullet items are treated as a separate layer, called medium access control
(MAC). The separation is done for the following reasons:


1. The logic and mechanism required to manage access to a shared-access medium is not found in the conventional layer-2 data link control.
2. For the same LLC, different MAC options may be provided.
The standards that have been issued are illustrated in Table 1. Most of the standards
were developed by a committee known as IEEE 802, sponsored by the Institute for
Electrical and Electronics Engineers. All of these standards have subsequently been
adopted as international standards by the International Organisation for
Standardization (ISO).

Table 1 LAN/MAN standards

Logical Link Control (LLC) - IEEE 802.2 :
• Unacknowledged connectionless service
• Connection-mode service
• Acknowledged connectionless service

Medium Access Control (MAC) and Physical layer options :
IEEE 802.3  : CSMA/CD; baseband coaxial (10 Mbps), unshielded twisted pair (10 and 100 Mbps), optical fibre (10 Mbps); bus/tree/star topologies.
IEEE 802.4  : Token bus; broadband and carrierband coaxial (1, 5, 10 Mbps), optical fibre (5, 10, 20 Mbps); bus/tree topologies.
IEEE 802.5  : Token ring; shielded twisted pair (4, 16 Mbps), unshielded twisted pair (4 Mbps); ring topology.
FDDI        : Token ring; optical fibre and twisted pair (100 Mbps); ring topology.
IEEE 802.6  : DQDB; optical fibre; dual bus topology.
IEEE 802.11 : CSMA and polling; infrared and spread spectrum (1, 2 Mbps); wireless.
IEEE 802.12 : Round robin with priority (demand priority); unshielded and shielded twisted pair, optical fibre (100 Mbps); star topology.


Figure 2 illustrates the relationship between the various levels of the architecture.
User data is passed down to LLC, which appends control information as a header,
creating an LLC protocol data unit (PDU). This control information is used in the
operation of the LLC protocol. The entire LLC PDU is then passed down to the MAC
layer, which appends control information at the front and back of the packet,
forming a MAC frame.

Fig. 2 LAN Protocol Architecture : user data is generated at the application layer; the TCP layer prepends a TCP header, forming a TCP segment; the IP layer prepends an IP header, forming an IP datagram; the LLC layer prepends an LLC header, forming an LLC protocol data unit; and the MAC layer adds a MAC header and a MAC trailer, forming a MAC frame.
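The successive wrapping shown in Figure 2 can be mimicked with plain byte strings; in the sketch below (all header contents are invented placeholders, far simpler than the real formats) each layer just prepends its header to what it receives from the layer above, and the MAC layer also appends its trailer.

user_data = b"user data"

tcp_segment = b"[TCP hdr]" + user_data                  # TCP layer -> TCP segment
ip_datagram = b"[IP hdr]" + tcp_segment                 # IP layer -> IP datagram
llc_pdu     = b"[LLC hdr]" + ip_datagram                # LLC layer -> LLC protocol data unit
mac_frame   = b"[MAC hdr]" + llc_pdu + b"[MAC trl]"     # MAC layer -> MAC frame

print(mac_frame)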


LAN Topologies
The common topologies for LANs are bus, tree, ring, and star. The bus is a special
case of the tree, with only one trunk and no branches.
Bus and Tree Topologies
Bus and Tree topologies are characterised by the use of a multi-point medium. For the
bus, all stations attach, through an appropriate hardware interface known as a tap, directly to a linear transmission medium, or bus. Full-duplex operation between the
station and the tap permits data to be transmitted onto the bus and received from the
bus. A transmission from any station propagates throughout the length of the medium
in both directions and can be received (heard) by all other stations. At each end of the
bus is a terminator, to avoid reflection of signals.

Fig. 3 (a) Bus topology : stations attach to the linear medium through taps, data flows along the bus in both directions, and each end of the bus is closed by a terminating resistance. (b) Tree topology : a branched cable with no closed loops, starting from the headend.

The tree topology is a generalisation of the bus topology. The transmission medium is
a branched cable with no closed loops. The tree layout begins at a point known as the
head-end, where one or more cables start, and each of these may have branches. The
branches in turn may have additional branches. Transmission from any station
propagates throughout the medium and can be received (heard) by all other stations.
However, there are two problems in this arrangement. First, since a transmission from
any one station can be received by all other stations, there needs to be some way of
indicating for whom the transmission is intended. Second, a mechanism is needed to regulate the transmission. To visualise the logic behind this, consider that if two
stations on the bus attempt to transmit at the same time, their signals will overlap and
become garbled. Or, consider that one station decides to transmit continuously for a
long period of time.
To solve these problems, stations transmit data in small blocks, known as frames.
Each frame consists of a portion of data that a station wishes to transmit, plus a frame
header that contains control information. Each station on the bus is assigned a unique
address, or identifier, and the destination address for a frame is included in its header.
Figure 4 illustrates the concept. In this example, station C wishes to transmit a frame
of data to A. The frame header includes A‘s address. As the frame propagates along
the bus, it passes B, which observes the address and ignores the frame. A, on the other
hand, sees that the frame is addressed to itself and therefore copies the data from the
frame as it goes by.

Fig. 4 Frame Transmission on a Bus LAN : C transmits a frame addressed to A; the frame is not addressed to B, so B ignores it as it passes; the frame is meant for A, so A copies it.


So the frame structure solves the first problem mentioned above: it provides a mechanism for indicating who is the intended recipient of the data. It also provides the basic tool for solving the second problem, i.e. regulation of access. In particular, the stations take turns sending frames in some co-operative fashion; this involves putting additional control information into the frame header.


Ring Topology
In the ring topology, the network consists of a set of repeaters joined by point-to-point
links in a closed loop. The repeater is a comparatively simple device, capable of
receiving data on one link and transmitting them, bit by bit, on the other link as
quickly as they are received, with no buffering at the repeater. The links are
unidirectional, i.e. data is transmitted in one direction (clockwise or counter-
clockwise).
Each station is attached to the network at a repeater and can transmit data onto the
network through that repeater.


As with the bus and tree, data is transmitted in frames. As a frame circulates past all
other stations, the destination station recognises its address and copies the frame into a
local buffer as it goes by. The frame continues to circulate until it reaches the source
station, where it is ultimately removed (Figure 5).
Because multiple stations share the ring, medium access control is needed to
determine when each station may insert frames.


Fig. 5 Frame Transmission on a Ring LAN : (a) C transmits a frame addressed to A; (b) the frame is not addressed to B, so B ignores it; (c) A copies the frame as it goes by; (d) C absorbs the returning frame.

Star Topology
In the Star type topology, each station is directly connected to a common central node.
Typically, each station attaches to a central node, referred to as the star coupler, via
two point-to-point links, one for transmission in each direction.
In general, there are two alternatives for the operation of the central node :
One method is for the central node to operate in a broadcast fashion. A frame transmitted from one station to the central node is retransmitted on all of the outgoing links. In this case, although the arrangement is physically a star, it is logically a bus; a
transmission from any station is received by all other stations, and only one station at
a time may transmit (successfully).
Another method is for the central node to act as a frame switching device. An
incoming frame is buffered in the node and then retransmitted on an outgoing link to
the destination station.


(Figure : Star topology, with stations connected by point-to-point links to a central hub, switch or repeater.)

Medium Access Control


All LANs consist of a collection of devices that have to share the network‘s
transmission capacity. Some means of controlling access to the transmission medium
is needed to provide for an orderly and efficient use of that capacity. This is the
function of medium access control (MAC) protocol.
The key parameters in any medium access control technique are where and how. Where refers to whether control is exercised in a centralised or distributed fashion. In a centralised scheme, a controller is designated that has the authority to grant access to the network. A station wishing to transmit must wait until it receives permission from the controller. In a decentralised scheme, the stations collectively perform a medium access control function to dynamically determine the order in which stations transmit. A centralised scheme has certain advantages, such as the following :
1. It may afford greater control over access for providing such things as
priorities, overrides, and guaranteed capacity.
2. It enables the use of relatively simple access logic at each station.
3. It overcomes the problems of distributed co-ordination among peer entities.
The principal disadvantages of a centralised scheme are :
1. It creates a single point of failure
2. It may act as a bottleneck, reducing performance
The pros and cons of distributed schemes are mirror images of the points made above.
The second parameter, how, is determined by the topology and is a trade-off among competing factors, including cost, performance, and complexity. Access
control techniques could follow the same approach used in circuit switching, viz.
frequency-division multiplexing (FDM), and synchronous time-division multiplexing
(TDM). Such techniques are generally not suitable for LANs because the data
transmission needs of the stations are unpredictable. It is desirable to allocate
capacity in an asynchronous (dynamic) fashion, more or less in response to immediate
demand. The asynchronous approach can be further subdivided into three categories:
round robin, reservation and contention.


Round Robin
With Round robin, each station in turn is given an opportunity to transmit. During that
period, the station may decline to transmit or may transmit subject to a specified
upper bound, usually expressed as a maximum amount of data transmitted or time for
this opportunity. In any case, the station, when it is finished, relinquishes its turn, and
the right to transmit passes to the next station in logical sequence. Control of this
sequence may be centralised or distributed. Polling is an example of a centralised
technique.
When many stations have to transmit data over an extended period of time, round
robin techniques can be very efficient. If only a few stations have data to transmit
over an extended period of time, then there is a considerable overhead in passing the
turn from station to station, as most of the stations will not transmit but simply pass
their turns. Under such circumstances, other techniques may be preferable, largely
depending on whether the data traffic has a stream or bursty characteristic. Stream
traffic is characterised by lengthy and fairly continuous transmissions; examples are
voice communication, telemetry, and bulk file transfer. Bursty traffic is characterised
by short, sporadic transmissions (interactive terminal-host traffic fits this
description).
Reservation
For stream traffic, reservation techniques are well suited. In general, for these
techniques, time on the medium is divided into slots, similar to synchronous TDM. A
station wanting to transmit, reserves future slots for an extended or even an indefinite
period. Again, reservations may be made in a centralised or distributed manner.
Contention
For bursty traffic, contention techniques are more appropriate. With these techniques,
no control is required to determine whose turn it is; all stations contend for time.
These techniques are by nature distributed. Their principal advantage is that they are
simple to implement and, under light to moderate load, quite efficient. For some of
these techniques, however, performance tends to collapse under heavy load.
Although both centralised and distributed reservation techniques have been
implemented in some LAN products, round robin and contention techniques are the
most common.
The specific access techniques are discussed further in this chapter. Table 2 lists the
MAC protocols that are defined in LAN standards.


Table 2 Standardised Medium Access Control Techniques

Round Robin : Bus topology - Token Bus (IEEE 802.4); Ring topology - Token Ring (IEEE 802.5 & FDDI); Switched topology - Request/Priority (IEEE 802.12); Wireless - Polling (IEEE 802.11).
Reservation : Bus topology - DQDB (IEEE 802.6).
Contention  : Bus topology - CSMA/CD (IEEE 802.3); Switched topology - CSMA/CD (IEEE 802.3); Wireless - CSMA (IEEE 802.11).

MAC Frame Format


The MAC layer receives a block of data from the LLC layer and is responsible for
performing functions related to medium access and for transmitting the data. MAC
implements these functions by making use of a protocol data unit at its layer; in this
case, the PDU is referred to as a MAC frame.
The exact format of the MAC frame differs for the various MAC protocols in use. In
general, all of the MAC frames have a format similar to that of Figure 6. The fields of
this frame are :
MAC control : This field contains any protocol control information needed for the
functioning of the MAC protocol. For example, a priority level could be indicated
here.
Destination MAC Address : The destination physical attachment point on the LAN
for this frame.
Source MAC address : The source physical attachment point on the LAN for this
frame.

MAC frame : MAC control | Destination MAC Address | Source MAC Address | LLC PDU | CRC

LLC PDU : DSAP (1 octet) | SSAP (1 octet) | LLC control (1 or 2 octets) | Information (variable)
          DSAP = I/G bit + DSAP value ; SSAP = C/R bit + SSAP value

FIG. 6 LLC PDU with generic MAC Frame format.


LLC : The LLC Data from the next higher layer.


CRC : The cyclic redundancy check field (also known as the frame check sequence, FCS, field). This is an error-detecting code, as we have seen in HDLC and other data link control protocols.
In most of the data link control protocols, the data link protocol entity is responsible
not only for detecting errors using the CRC, but for recovering from those errors by
re-transmitting damaged frames. In the LAN protocol architecture, these two
functions are split between the MAC and LLC layers. The MAC layer is responsible
for detecting errors and discarding any frames that are in error. The LLC layer
optionally keeps track of which frames have been successfully received and
retransmits unsuccessful frames.
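As a rough illustration of the CRC idea (an illustration only; real MAC hardware computes the IEEE CRC-32 with its own bit ordering as the frame is transmitted), the sketch below appends a 32-bit CRC to an invented frame and shows that flipping a single bit is detected, which is exactly the check the MAC layer uses to discard damaged frames.

import struct
import zlib

def add_fcs(frame: bytes) -> bytes:
    # Append a 32-bit CRC (the frame check sequence) to the frame.
    return frame + struct.pack("!I", zlib.crc32(frame))

def fcs_ok(frame_with_fcs: bytes) -> bool:
    body, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return struct.unpack("!I", fcs)[0] == zlib.crc32(body)

frame = add_fcs(b"destination+source+LLC PDU")           # invented frame contents
print(fcs_ok(frame))                                     # True - frame is intact

corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]         # flip one bit "in transit"
print(fcs_ok(corrupted))                                 # False - MAC layer would discard it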
Logical Link Control
The LLC layer of LANs is similar in many respects to other link layers in common
use. Like all link layers, LLC is concerned with the transmission of a link-level
protocol data unit (PDU) between two stations, without the necessity of an
intermediate switching node. LLC has two characteristics not shared by most other
link control protocols :
It must support the multi-access, shared-medium nature of the link.
It is relieved of some details of link access by the MAC layer.
LLC Services
LLC specifies the mechanism for addressing stations across the medium and for
controlling the exchange of data between two users. The operation and format of this
standard is based on HDLC. Three services are provided as alternatives for devices
using LLC:
Unacknowledged connection-less service. This service is a datagram-style service. It
is a very simple service that does not involve any of the flow control and error control
mechanisms. Thus the delivery of data is not guaranteed. However, in most devices,
there will be some higher layer of software that deals with reliability issues.
Connection-mode service. This service is similar to that offered by HDLC. A logical
connection is set up between the two users exchanging data, and flow control and
error control are provided.
Acknowledged connection-less service. This is a cross between the previous two
services. It provides that datagrams are to be acknowledged, but no prior logical
connection is set up.
Typically, a vendor will provide these services as options that the customer can select
when purchasing the equipment. Alternatively, the customer can purchase equipment
that provides two or all three services and select a specific service based on
application.
The unacknowledged connection-less service requires minimum logic and is useful in
two contexts. Firstly, it will most often be the case that higher layers of software will
provide the necessary reliability and flow-control mechanism, and there is no need to
duplicate them. For example, either TCP or the ISO transport protocol standard will
provide the mechanisms needed to ensure that data are delivered reliably. Secondly, there are instances in which the overhead of connection establishment and


maintenance is unjustified or even counterproductive; for example, data collection
activities that involve the periodic sampling of data sources, such as sensors and
automatic self-test reports from security equipment or network components. In most
cases, the unacknowledged connection-less service is the preferred option.
The connection-mode service could be used in very simple devices, such as terminal
controllers, that have little software operating above this level. In these cases, it would
provide the flow control and reliability mechanism normally implemented at higher
layers of the communications software.
The acknowledged connection-less service is useful in several contexts. With the
connection-mode service, the logical link control software must maintain some sort of
table for each active connection, so as to keep track of the status of that connection. If
the user needs guaranteed delivery, but if there are a large number of destinations for
data, then the connection-mode service may be impractical because of the large
number of tables required; an example is a process-control or automated factory
environment where a central site may need to communicate with a large number of
processors and programmable controllers; another use is the handling of important
and time-critical alarm or emergency control signals in a factory. Because of their
importance, an acknowledgement is needed so that the sender can be assured that the
signal got through. Because of the urgency of the signal, the user might not want to
take the time to first establish a logical connection and then send the data.
BASIC NETWORK COMPONENTS
There are a number of components which are used to build networks. An
understanding of these is essential in order to support networks.
Network Adapter Cards
A network adapter card plugs into the workstation, providing the connection to the
network. Adapter cards come from many different manufacturers, and support a wide
variety of cable media and bus types such as - ISA, MCA, EISA, PCI, PCMCIA.
New cards are software configurable, using a software program to configure the resources used by the card. Other cards are PnP (Plug and Play), which automatically
configure their resources when installed in the computer, simplifying the installation.
With an operating system like Windows 95, auto-detection of new hardware makes
network connections simple and quick.


Cabling
Cables are used to interconnect computers and network components together. There
are 3 main cable types used today :
1. twisted pair
2. coax
3. fibre optic
The choice of cable depends upon a number of factors like :
1. cost
2. distance
3. number of computers involved
4. speed
5. bandwidth i.e. how fast data is to be transferred

REPEATERS
Repeaters extend the network segments. They amplify the incoming signal received
from one segment and send it on to all other attached segments. This allows the
distance limitations of network cabling to be extended. There are limits on the number
of repeaters which can be used. The repeater counts as a single node in the maximum
node count associated with the Ethernet standard (30 for thin coax).
Repeaters also allow isolation of segments in the event of failures or fault conditions.
Disconnecting one side of a repeater effectively isolates the associated segments from
the network.
Using repeaters simply allows you to extend your network distance limitations. It
does not give you any more bandwidth or allow you to transmit data faster.

Main Network Segment


Repeater

Workstation

Fig. 7 Use of Repeaters in a Network


It should be noted that in the above diagram, the network number assigned to the main
network segment and the network number assigned to the other side of the repeater
are the same. In addition, the traffic generated on one segment is propagated onto the
other segment. This causes a rise in the total amount of traffic, so if the network
segments are already heavily loaded, it's not a good idea to use a repeater.
A repeater works at the Physical Layer by simply repeating all data from one segment
to another.


Summary of Repeater features :


1. increases traffic on segments
2. have distance limitations
3. limitations on the number of repeaters that can be used
4. propagate errors in the network
5. cannot be administered or controlled via remote access
6. cannot loop back to itself (must be unique single paths)
7. no traffic isolation or filtering is possible
BRIDGES
Bridges interconnect Ethernet segments. Most bridges today support filtering and
forwarding, as well as the Spanning Tree Algorithm. The IEEE 802.1D specification is
the standard for bridges.
During initialisation, the bridge learns about the network and the routes. Packets are
passed onto other network segments based on the MAC layer. Each time the bridge is
presented with a frame, the source address is stored. The bridge builds up a table
which identifies the segment on which each device is located. This internal table is
then used to determine which segment incoming frames should be forwarded to. The
size of this table is important, especially if the network has a large number of
workstations/ servers.

Network Segment A Network Segment B

BRIDGE

Fig. 8 Use of Bridge in a Network


The diagram above shows two separate network segments connected via a bridge.
Note that each segment must have a unique network address number in order for the
bridge to be able to forward packets from one segment to the other.
The advantages of bridges are
1. increase the number of attached workstations and network segments
2. since bridges buffer frames, it is possible to interconnect different segments
which use different MAC protocols
3. since bridges work at the MAC layer, they are transparent to higher level
protocols
4. by subdividing the LAN into smaller segments, overall reliability is increased
and the network becomes easier to maintain
5. used for non routable protocols like NETBEUI which must be bridged


6. help in localising the network traffic by only forwarding data onto other
segments as required (unlike repeaters)
How Bridges Work
Bridges work at the Data Link layer of the OSI model. Because they work at this layer, all information contained in the higher levels of the OSI model is unavailable to them. Therefore, they do not distinguish between one protocol and another. Bridges
simply pass all protocols along the network. Because all protocols pass across bridges,
it is up to individual computers to determine which protocols they can recognise.
You may remember that the Data Link layer has two sub layers, the Logical Link
Control sub layer and the Media Access Control sub layer. Bridges work at the Media
Access Control sub layer and are sometimes referred to as Media Access Control
layer bridges.
A Media Access Control layer bridge :
Listens to all traffic.
Checks the source and destination addresses of each packet.
Builds a routing table as information becomes available.
Forwards packets in the following manner :
If the destination is not listed in the routing table, the bridge forwards the packets to all segments, or
If the destination is listed in the routing table, the bridge forwards the packets to that segment (unless it is the same segment as the source).
A bridge works on the principle that each network node has its own address. A bridge
forwards packets based on the address of the destination node.
Bridges actually have some degree of intelligence in that they learn where to forward
data. As traffic passes through the bridge, information about the computer addresses is
stored in the bridge‘s RAM. The bridge uses this RAM to build a routing table based
on source addresses.
Initially, the bridge‘s routing table is empty. As nodes transmit packets, the source
address is copied to the routing table. With this address information, the bridge learns
which computers are on which segment of the network.
Creating the Routing Table
Bridges build their routing tables based on the addresses of computers that have transmitted data on the network. Specifically, bridges use source addresses – the address of the device that initiates the transmission – to create the routing table.
When the bridge receives a packet, the source address is compared to the routing
table. If the source address is not there, it is added to the table. The bridge then
compares the destination address with the routing table database.
If the destination address is in the routing table and is on the same segment as the source address, the packet is discarded. This filtering helps to reduce network traffic and to isolate segments of the network.


If the destination address is in the routing table and not in the same segment as the
source address, the bridge forwards the packet out of the appropriate port to reach the
destination address.
If the destination address is not in the routing table, the bridge forwards the packet to
all of its ports, except the one on which it originated.
In summary, if a bridge knows the location of the destination node, it forwards the
packet to it. If it does not know the destination, it forwards the packet to all segments.
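The learn-and-forward behaviour described above can be pictured in a short sketch. This is a minimal illustration only; the class name, addresses and port numbers are invented for the example and do not represent any particular bridge product:

    # Illustrative sketch of a transparent (learning) bridge's forwarding logic.
    class LearningBridge:
        def __init__(self, ports):
            self.ports = ports          # e.g. [1, 2] for a two-segment bridge
            self.table = {}             # source address -> port it was learned on

        def handle_frame(self, src, dst, in_port):
            # Learn: remember which port the source address lives on.
            self.table[src] = in_port
            out_port = self.table.get(dst)
            if out_port is None:
                # Unknown destination: forward to every port except the incoming one.
                return [p for p in self.ports if p != in_port]
            if out_port == in_port:
                # Destination is on the same segment: filter (discard) the frame.
                return []
            return [out_port]           # known destination on another segment

    bridge = LearningBridge(ports=[1, 2])
    print(bridge.handle_frame("AA", "BB", in_port=1))  # BB unknown -> flood to [2]
    print(bridge.handle_frame("BB", "AA", in_port=2))  # AA learned on port 1 -> [1]
    print(bridge.handle_frame("AA", "BB", in_port=1))  # BB learned on port 2 -> [2]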
Segmenting Network Traffic
A bridge can segment traffic because of its routing table. A computer on segment 1
(the source) sends data to another computer (the destination) also located on segment
1. If the destination address is in the routing table, the bridge can determine that the
destination computer is also on segment 1. Because the source and the destination
computers are both on segment 1, the packet does not get forwarded across the bridge
to segment 2.
Therefore, bridges can use routing tables to reduce the traffic on the network by
controlling which packets get forwarded to other segments. This controlling (or
restricting) of the flow of network traffic is known as segmenting network traffic.
A large network is not limited to one bridge. Multiple bridges can be used to combine
several small networks into one large network.


Differentiating Between Bridges and Repeaters


Bridges work at a higher OSI layer than repeaters. This means that bridges have more
intelligence than repeaters and can take more data features into account.
Bridges are like repeaters in that they can regenerate data, but bridges regenerate data
at the packet level. This means that bridges can send packets over long distances
using a variety of long distance media.
Bridge Considerations
Bridges have all of the features of a repeater, but also accommodate more nodes. They
provide better network performance than a repeater. Because the network has been
divided, there will be fewer computers competing for available resources on each
segment.
To look at it another way, if a large Ethernet network were divided into two segments
connected by a bridge, each new network would carry fewer packets, have fewer
collisions, and operate more efficiently. Although each of the networks was separate,
the bridge would pass appropriate traffic between them.
Implementing Bridges
A bridge can be either a stand-alone, separate piece of equipment (an external bridge)
or it can be installed in a server. If the network operating system supports it, one or
more network cards (an internal bridge) can be installed.
Network administrators like bridges because they are:
Simple to install and transparent to users.
Flexible and adaptable.
Relatively inexpensive.
Summary
Consider the following when you are thinking about using bridges to expand your
network.
1. Bridges have all of the features of a repeater.
2. They connect two segments and regenerate the signal at the packet level.
3. They function at the Data Link layer of the OSI model.
4. Bridges are not suited to WANs slower than 56K.
5. They cannot take advantage of multiple paths simultaneously.
6. They pass all broadcasts, possibly creating broadcast storms.
7. Bridges read the source and destination of every packet.
8. They pass packets with unknown destinations.
Use bridges to :
1. Connect two segments to extend the length of, or the number of nodes on, the
network.
2. Reduce traffic by segmenting the network.
3. Connect dissimilar networks.


The disadvantages of bridges are


1. the buffering of frames introduces network delays
2. bridges may overload during periods of high traffic
3. bridges which combine different MAC protocols require the frames to be
modified before transmission onto the new segment. This causes delays
4. in complex networks, data is not sent over redundant paths, and the shortest
path is not always taken
5. bridges pass on broadcasts, giving rise to broadcast storms on the network
Transparent Bridges (also known as spanning tree, IEEE 802.1 D) make all routing
decisions. The bridge is said to be transparent (invisible) to the workstations. The
bridge will automatically initialise itself and configure its own routing information
after it has been enabled.
Bridges are ideally used in environments where there are a number of well defined
workgroups, each operating more or less independently of the others, with occasional
access to servers outside of their localised workgroup or network segment. Bridges do
not offer performance improvements when used in diverse or scattered workgroups,
where the majority of access occurs outside of the local segment.
Ideally, if workstations on network segment A needed access to a server, the best
place to locate that server is on the same segment as the workstations, as this
minimizes traffic on the other segment, and avoids the delay incurred by the bridge.
A bridge works at the MAC Layer by looking at the destination address and
forwarding the frame to the appropriate segment upon which the destination computer
resides.


Summary of Bridge features :


1. operate at the MAC layer (layer 2 of the OSI model)
2. can reduce traffic on other segments
3. broadcasts are forwarded to every segment
4. most allow remote access and configuration
5. often SNMP (Simple Network Management Protocol) enabled
6. loops can be used (redundant paths) if using spanning tree algorithm
7. small delays may be introduced
8. fault tolerant by isolating fault segments and reconfiguring paths in the event
of failure
9. not efficient with complex networks
10. redundant paths to other networks are not used (these would be useful if the
major path being used was overloaded)
11. shortest path is not always chosen by the spanning tree algorithm
ROUTERS
In an environment consisting of several network segments with differing protocols
and architectures, a bridge may not be adequate for ensuring fast communication
among all of the segments. A network this complex needs a device which not only
knows the address of each segment, but can also determine the best path for sending data
and filter broadcast traffic to the local segment. Such a device is called a router.
Routers work at the Network layer of the OSI model. This means they can switch and
route packets across multiple networks. They do this by exchanging protocol-specific
information between separate networks. Routers read complex network addressing
information in the packet and, because they function at a higher layer in the OSI
model than bridges, they have access to additional information.
Routers can provide the following functions of a bridge :
1. Filtering and isolating traffic
2. Connecting network segments
Routers have access to more information in the packet than bridges, and use this
information to improve packet deliveries. Routers are used in complex network
situation because they provide better traffic management than bridges and do not pass
broadcast traffic. Routers can share status and routing information with one another
and use this information to bypass slow or malfunctioning connections.
How Routers Work
The routing tables found in routers contain network addresses. However, host addresses
may also be kept, depending on the protocol the network is running. A router uses this table
to determine how to reach the destination address of incoming data. The table lists the following
information :
1. All known network addresses
2. How to connect to other networks
3. The possible paths between routers
4. The cost of sending data over those paths
The router selects the best route for the data based on cost & available paths.
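As a rough illustration of how such a table might be used, the sketch below simply picks the lowest-cost entry for a destination network; the network numbers, next hops and costs are made-up examples, not real routing data:

    # Hypothetical routing table: destination network -> list of (next_hop, cost).
    routing_table = {
        "192.168.10.0/24": [("RouterB", 2), ("RouterC", 5)],
        "10.0.0.0/8":      [("RouterC", 1)],
    }

    def best_route(destination_network):
        """Return the lowest-cost (next_hop, cost) for a known network, else None."""
        candidates = routing_table.get(destination_network)
        if not candidates:
            return None                      # unknown network: packet is not forwarded
        return min(candidates, key=lambda entry: entry[1])

    print(best_route("192.168.10.0/24"))     # ('RouterB', 2)
    print(best_route("172.16.0.0/16"))       # None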


Note : Remember that routing tables were also discussed with bridges. The routing
table maintained by a bridge contains Media Access Control sublayer addresses for
each node, while the routing table maintained by a router contains network numbers.
Even though manufacturers of these two different types of equipment have chosen to
use the term routing table, it has a different meaning for bridge than it does for
routers.
Routers require specific addresses. They understand only network numbers, which
allow them to talk to other routers, and local network adapter card addresses. Routers
do not talk to remote computers.
When routers receive packets destined for a remote network, they send them to the
router that manages the destination network. In some ways this is an advantage
because it means routers can :
1. Segment large networks into smaller ones.
2. Act as safety barrier between segments.
3. Prohibit broadcast storms, because broadcasts are not forwarded.
Because routers must perform complex functions on each packet, routers are slower
than most bridges. As packets are passed from router to router, Data Link layer source
and destination addresses are stripped off and then recreated. This enables a router to
route a packet from a TCP/IP Ethernet network to a server on a TCP/IP Token Ring
Network.
Because routers only read the network addresses in packets, they will not allow bad data
to get passed on to the network. Because they do not pass bad data or broadcast
storms, routers put little stress on networks.
Routers do not look at the destination node address; they only look at the network
address. Routers will only pass information if the network address is known. This
ability to control the data passing through the router reduces the amount of traffic
between networks and allows router to use these links more efficiently than bridges.
Using the router addressing scheme, administrators can break one large network into
many separate networks, and because routers do not pass or even handle every packet,
they act as a safety barrier between network segments. This can greatly reduce the
amount of traffic on the network and the wait time experienced by users.
Routable Protocols
Not all protocols work with routers. The ones that are routable include :
1. DECnet
2. IP
3. IPX
4. OSI
5. XNS
6. DDP (AppleTalk)
Protocols which are not routable include:
1. LAT (local area transport, a protocol from Digital Equipment Corporation.)
2. NetBEUI
There are routers available which can accommodate multiple protocols such as IP and
DECnet in the same network.


Packets are only passed to the network segment they are destined for.
Routers work similarly to bridges and switches in that they filter out unnecessary network
traffic and remove it from network segments. Routers generally work at the protocol
level.
Routers were devised in order to separate networks logically. For instance, a TCP/ IP
router can segment the network based on groups of TCP/IP addresses. Filtering at this
level (on TCP/IP addresses, also known as level 3 switching) will take longer than
that of a bridge or switch which only looks at the MAC layer.
Most routers can also perform bridging functions. A major feature of routers, because
they can filter packets at a protocol level, is to act as a firewall. This is essentially a
barrier, which prevents unwanted (unauthorised) packets either entering or leaving
designated areas of the network.
Typically, an organisation which connects to the Internet will install a router as the
main gateway link between their network and the outside world. By configuring the
router with access lists (which define what protocols and what hosts have access) this
enforces security by restricted (or allowing) access to either internal or external hosts.
For example, an internal WWW server can be allowed IP access from external
networks, but other company servers which contain sensitive data can be protected, so
that external hosts outside the company are prevented access (you could even deny
internal workstations access if required).
A router works at the Network Layer or higher, by looking at information embedded
within the data field, like a TCP/IP address, then forwards the frame to the appropriate
segment upon which the destination computer resides.
Summary of Router features :
1. use dynamic routing
2. operate at the protocol level
3. remote administration and configuration via SNMP
4. support complex networks
5. the more filtering done, the lower the performance
6. provides security
7. segment the networks logically
8. broadcast storms can be isolated
9. often provide bridge functions also
10. more complex routing protocols used (such as RIP, IGRP, OSPF)
HUBS
There are many types of hubs. Passive hubs are simple splitters or combiners that
group workstations into a single segment, whereas active hubs include a repeater
function and are thus capable of supporting many more connections.
Nowadays, with the advent of 10BaseT, hub concentrators are becoming very popular.
These are very sophisticated and offer significant features which make them radically
different from the older hubs which were available during the 1980's. These 10BaseT
hubs provide each client with exclusive access to the full bandwidth, unlike bus
networks where the bandwidth is shared. Each workstation plugs into a separate port,
which runs at 10 Mbps and is for the exclusive use of that workstation, thus there is
no contention to worry about like in Ethernet.


In standard Ethernet, all stations are connected to the same network segment in bus
configuration. Traffic on the bus is controlled using CSMA (Carrier Sense Multiple
Access) protocol, and all stations share the available bandwidth.

Fig. 9 Connecting Workstations to a Hub (ports 1 to 4 logically combined on a common backplane)

10BaseT Hubs dedicate the entire bandwidth to each port (workstation). The W/S
attach to the Hub using UTP. The Hub provides a number of ports, which are
logically combined using a single backplane, which often runs at a much higher data
rate than that of the ports.
Ports can also be buffered, to allow packets to be held in case the hub or port is busy.
And, because each workstation has its own port, it does not contend with other
workstations for access, having the entire bandwidth available for its exclusive use.
The ports on a hub all appear as one Ethernet segment. In addition, hubs can be
stacked or cascaded (using master/ slave configurations) together, to add more ports
per segment. As hubs do not count as repeaters, this is a better solution for adding
more workstations than the use of a repeater.
Hub options also include an SNMP (Simple Network Management Protocol) agent.
This allows the use of network management software to remotely administer and
configure the hub.
The advantages of the newer 10 BaseT hubs are :
1. Each port has exclusive access to its bandwidth (no CSMA/ CD)
2. Hubs may be cascaded to add additional ports
3. SNMP managed hubs offer good management tools and statistics
4. Utilise existing cabling and other network components
5. Becoming a low cost solution


ETHERNET AND FAST ETHERNET (CSMA/ CD)


The most commonly used medium access control technique for bus/ tree and star
topologies is carrier-sense multiple access with collision detection (CSMA/CD). The
original baseband version of this technique was developed by Xerox as part of the
Ethernet LAN. Ethernet is currently the most popular network architecture. This
baseband architecture uses bus topology, usually transmits at 10 Mbps, and relies on
CSMA/CD to regulate traffic on the main cable segment. The Ethernet specification
performs the same functions as the OSI physical and Data Link Layer of data
communications. This design is the basis of IEEE‘s 802.3 specification.
Ethernet Features
Ethernet media is passive which means it draws power from the computer and thus
will not fail unless the media is physically cut or improperly terminated.
The following list summarizes Ethernet features :

Traditional topology : Linear Bus
Other topologies : Star Bus
Type of architecture : Baseband
Access method : CSMA/CD
Specifications : IEEE 802.3
Transfer speed : 10 Mbps or 100 Mbps
Cable types : Thicknet, Thinnet, UTP

IEEE 802.3 Medium Access Control


It would be easier to appreciate the operation of CSMA/ CD if we look first at some
of the earlier schemes from which CSMA/ CD evolved.
Precursors
CSMA/ CD and its precursors can be termed random access, or contention,
techniques. They are random access in the sense that there is no predictable or
scheduled time for any station to transmit; station transmissions are ordered randomly.
They exhibit contention in the sense that stations contend for time on the medium.
The earlier of these techniques, known as ALOHA, was developed for packet radio
networks. However it is applicable to any shared transmission medium. ALOHA, or
pure ALOHA as it is sometimes called, is a true free-for-all. Whenever a station has a
frame to send, it does so. The station then listens for an amount of time equal to the
maximum possible round-trip propagation delay on the network (twice the time it
takes to send a frame between the two most widely separated stations) plus a small
fixed time increment. If the station hears an acknowledgment during that time, fine;
otherwise, it re-sends the frame. If the station fails to receive an acknowledgment
after repeated transmissions, it gives up. A receiving station determines the
correctness of an incoming frame by examining a frame check-sequence field, as in
HDLC. If the frame is valid and if the destination address in the frame header matches
the receiver‘s address, the station immediately sends an acknowledgment. The frame
may be invalid due to noise on the channel or because another station transmitted a


frame at about the same time. In the latter case, the two frames may interfere with
each other at the receiver so that neither gets through; this is known as a collision. If a
received frame is determined to be invalid, the receiving station simply ignores the
frame.
Description of CSMA/ CD
CSMA, although more efficient than ALOHA or slotted ALOHA, still has one glaring
inefficiency : when two frames collide, the medium remains unusable for the duration
of transmission of both damaged frames. For long frames, compared to propagation
time, the amount of wasted capacity can be considerable. This waste can be reduced if
a station continues to listen to the medium while transmitting.


This leads to the following rules for CSMA/ CD :


1. If the medium is idle, transmit; otherwise, go to step 2.
2. If the medium is busy, continue to listen until the channel is idle, then transmit
immediately.
3. If a collision is detected during transmission, transmit a brief jamming signal
to assure that all stations know that there has been a collision and then cease
transmission.
4. After transmitting the jamming signal, wait a random amount of time, then
attempt to transmit again. (Repeat from step 1.)
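These four rules can be summarised in a small sketch. The medium model, timings and helper names below are invented purely for illustration; no real Ethernet driver is structured this way:

    import random
    import time

    SLOT_TIME = 51.2e-6        # the classic 10 Mbps Ethernet slot time (51.2 microseconds)

    class FakeMedium:
        """Toy stand-in for the shared cable, just for demonstration."""
        def busy(self): return False
        def start_transmit(self, frame): pass
        def collision_detected(self): return random.random() < 0.3   # pretend 30% collide
        def send_jam(self): pass

    def csma_cd_send(medium, frame, max_attempts=16):
        """Illustrative CSMA/CD transmit loop following rules 1 to 4 above."""
        for attempt in range(max_attempts):
            while medium.busy():          # rules 1 and 2: listen until idle, then send
                pass
            medium.start_transmit(frame)
            if not medium.collision_detected():
                return True               # frame got through
            medium.send_jam()             # rule 3: jam so all stations notice the collision
            k = min(attempt + 1, 10)      # rule 4: random (binary exponential) backoff
            time.sleep(random.randint(0, 2**k - 1) * SLOT_TIME)
        return False                      # excessive collisions: give up

    print(csma_cd_send(FakeMedium(), b"hello"))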
Figure 10 below illustrates the techniques for a baseband bus. At time t0, station A
begins transmitting a packet addressed to D. At t1, both B and C are ready to transmit.
B senses a transmission and so defers. C, however, is still unaware of A‘s
transmission and begins its own transmission. When A‘s transmission reaches C, at t2,
C detects the collision and ceases transmission. The effect of the collision propagates
back to A, where it is detected some time later, t3, at which time A ceases
transmission.

Fig. 10 CSMA/CD operation (stations A, B, C and D on a baseband bus at times t0 through t3)


With CSMA/CD, the amount of wasted capacity is reduced to the time it takes to
detect a collision. Question: how long does that take? Let us first consider the case of
a baseband bus with the two stations as far apart as possible. For example, in
the above figure, suppose that station A begins a transmission and that just before that
transmission reaches D, D is ready to transmit. Because D is not yet aware of A‘s
transmission, it begins to transmit. A collision occurs almost immediately and is
recognized by D. However, the collision must propagate all the way back to A before
A is aware of the collision. By this line of reasoning, we conclude that the amount of
time that it takes to detect a collision is no greater than twice the end-to-end
propagation delay.
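To attach a rough number to this, the short calculation below assumes a 10 Mbps bus, a 2500 m end-to-end length and a propagation speed of 2 x 10^8 m/s; the figures are illustrative rather than the exact budget used by the standard:

    # Illustrative worst-case collision-detection window for a baseband bus.
    DATA_RATE = 10e6          # 10 Mbps
    LENGTH_M = 2500           # assumed maximum end-to-end cable length in metres
    PROP_SPEED = 2e8          # assumed propagation speed in the cable, m/s

    one_way_delay = LENGTH_M / PROP_SPEED            # ~12.5 microseconds
    collision_window = 2 * one_way_delay             # round trip: ~25 microseconds
    min_frame_bits = DATA_RATE * collision_window    # bits still on the wire in that window

    print(f"collision window = {collision_window * 1e6:.1f} us")
    print(f"frame must be at least {min_frame_bits:.0f} bits (~{min_frame_bits/8:.0f} octets)")
    # With these assumed numbers the frame must be a few hundred bits long; the real
    # 802.3 budget (which also allows for repeaters and margin) leads to the well-known
    # 512-bit (64-octet) minimum frame size.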
For a broadband bus, the delay is even longer. Figure 11 shows the dual-cable
system. This time, the worst case occurs for two stations as close together as possible
and as far as possible from the headend. In this case, the maximum time to detect a collision
is four times the propagation delay from an end of the cable to the head-end.

Fig. 11 Broadband collision detection timing: at t0, A begins transmission; at t1, B begins transmission just before the leading edge of A‘s packet arrives at B‘s receiver and almost immediately detects A‘s transmission and ceases its own; at t2, A detects the collision.


An important rule followed in most CSMA/ CD systems, including the IEEE


standard, is that frames should be long enough to allow collision detection prior to the
end of transmission. If shorter frames are used, then collision detection does not
occur, and CSMA/CD exhibits the same performance as the less efficient CSMA
protocol.
Although the implementation of CSMA/ CD is substantially the same for the
baseband and broadband, there are differences. One is the means for performing
carrier sense; for baseband systems, this is done by detecting a voltage pulse train. For
broadband, the RF carrier is detected.
Collision detection also differs for the two systems. For baseband, a collision should
produce substantially higher voltage swings than those produced by a single
transmitter. Accordingly, the IEEE standard dictates that the transmitter will detect a
collision if the signal on the cable at the transmitter tap point exceeds the maximum
that could be produced by the transmitter alone. Because a transmitted signal
attenuates as it propagates, there is a potential problem: If two stations far apart are
transmitting, each station will receive a greatly attenuated signal from the other. The
signal strength could be so small that when it is added to the transmitted signal at the
transmitted tap point, the combined signal does not exceed the CD threshold. For this
reason, among others, the IEEE standard restricts the maximum length of a coaxial
cable segment to 500 m for 10BASE5 and 185 m (nominally 200 m, hence the ‗2‘) for 10BASE2.
A much simpler collision detection scheme is possible with the twisted pair star-
topology approach. In this case, collision detection is based on logic rather than on
sensing voltage magnitudes. For any hub, if there is activity (signal) on more than one
input, a collision is assumed. A special signal called the collision presence signal is
generated. This signal is generated and sent out as long as activity is sensed on any of
the input lines. This signal is interpreted by every node as an occurrence of a
collision.
There are several possible approaches to collision detection in broadband systems.
The most common of these is to perform a bit-by-bit comparison between transmitted
and received data. When a station transmits on the inbound channel, it begins to
receive its own transmission on the outbound channel after a propagation delay to the
head-end and back. Note the similarity to a satellite link. Another approach, for split
systems, is for the head-end to perform detection based on garbled data.
MAC Frame
Figure 12 depicts the frame format for the 802.3 protocol; it consists of the following
fields :

1. Preamble : A 7-octet pattern of alternating 0s and 1s used by the receiver to


establish bit synchronization.
2. Start frame delimiter : The sequence 10101011, which indicates the actual
start of the frame and enables the receiver to locate the first bit of the rest
of the frame.
3. Destination address (DA) : Specifies the station(s) for which the frame is
intended. It may be a unique physical address, a group address, or a global
address. The choice of the 16- or 48-bit address length is an implementation
decision, and must be the same for all stations on a particular LAN.


4. Source address (SA) : Specifies the station that sent the frame.
5. Length : Length of the LLC data field.
6. LLC data : Data unit supplied by LLC.
7. Pad : Octets added to ensure that the frame is long enough for proper CD
operation.
8. Frame check sequence (FCS). A 32-bit cyclic redundancy check, based on
all fields except the preamble, the SFD, and the FCS.

FIG. 12 IEEE 802.3 Frame Format: Preamble | SFD | DA | SA | Length | LLC Data | Pad | FCS
(SFD = Start frame delimiter, DA = Destination address, SA = Source address, FCS = Frame-check sequence)
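As a rough illustration of how these fields are laid out, the sketch below assembles a frame with 48-bit addresses; the sample addresses are invented, and the preamble, bit ordering and FCS handling are simplified stand-ins for what the hardware actually does:

    import struct
    import zlib

    def build_8023_frame(dst, src, payload):
        """Illustrative 802.3 frame builder (48-bit addresses, software CRC-32)."""
        if len(payload) < 46:                       # pad LLC data up to the minimum
            payload = payload + b"\x00" * (46 - len(payload))
        preamble = b"\xaa" * 7                      # 7 octets of alternating 1s and 0s
        sfd = b"\xab"                               # start frame delimiter 10101011
        length = struct.pack("!H", len(payload))    # 16-bit length of the LLC data field
        body = dst + src + length + payload
        fcs = struct.pack("!I", zlib.crc32(body))   # simplified stand-in for the 802.3 FCS
        return preamble + sfd + body + fcs

    frame = build_8023_frame(bytes(6), b"\x02\x00\x00\x00\x00\x01", b"hello")
    print(len(frame), "octets on the wire for a 5-octet payload (padded to 46)")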


Introduction to Wireless LAN
A wireless local area network (LAN) utilizes radio frequency (RF) as an alternative
for a wired LAN. Wireless LANs transmit and receive data over the air, without the
use of any cable, combining the benefits of data connectivity and user mobility.
Need for Wireless LAN
The widespread reliance on networking in business and the explosive growth of the
Internet reveal the benefits of shared data and shared resources. With wireless LANs,
users can access shared information and resources without looking for a place to plug
in, and network managers can set up networks without installing or moving wires.
Wireless LANs provide all the functionality of wired LANs with the following
benefits:
Mobility: Wireless LANs can provide users with access to real-time information and
resources anywhere in their organization through designated access points. This
freedom to "roam" increases employee productivity as they move throughout the
building.
Installation Speed and Simplicity: Installing a wireless LAN system can be fast and
easy and eliminates the need to pull cable through walls and ceilings.
Installation flexibility: Wireless technology allows the network to go where wires
cannot go.
Scalability: Configurations for wireless LANs are easily changed and range from
peer-to-peer networks suitable for a small number of users to full infrastructure
networks of thousands of users that enable roaming over a broad area. Adding a user


to the network is as simple as equipping a PC or laptop with a wireless LAN adapter


card or USB device.
Types of Applications Using Wireless Technology
The following list describes some of the many applications made possible through the
power and flexibility of wireless LANs:
Corporate environment
1. Growing businesses in leased office space can avoid the need for expensive
network wiring.
2. Users collaborating on a project can quickly set up a peer-to-peer LAN to
share files and peripherals.
3. Employees can take advantage of mobile networking for e-mail, Internet
access, and file sharing regardless of where they are in the office.
4. Network managers in dynamic environments minimize the overhead caused
by moves, extensions to networks and other changes with wireless LANs.
5. Training sites at corporations can use wireless connectivity to make it easy to
access information and support learning.
6. Network managers installing networked computers in older buildings find that
wireless LANs are a cost-effective network infrastructure solution.
7. Branch office workers minimize setup requirements by installing pre-
configured wireless LANs needing no local MIS support.
8. Warehouse workers use wireless LANs to exchange information with central
databases, thereby increasing productivity.
9. Network managers implement wireless LANs to provide backup for mission-
critical applications running on wired networks.
10. Senior executives in meetings make quicker decisions because they have real-
time information at their fingertips.
Education
Mobile students and teachers with notebook computers can connect to the university
network for collaborative class discussions and to the Internet for e-mail and Internet
access.
Finance
Teams of auditors or consultants can set up small secure networks at client locations.
Healthcare
Doctors and nurses in hospitals are more productive when utilizing notebook
computers with wireless LAN adapters to deliver patient information instantly.
Types of Wireless LAN Technology
When evaluating wireless LAN solutions, there are a number of technologies to
choose from. Each comes with its own set of advantages and limitations:
Narrowband Technology
A narrowband radio system transmits and receives user information on a specific
radio frequency. Narrowband radio keeps the radio signal frequency as narrow as
possible just to pass the information. Undesirable crosstalk between communications
channels is avoided by coordinating different users on different channel frequencies.


The drawback to this type of technology is that the end-user must obtain an FCC
license for each site where it is employed.
Spread Spectrum Technology
Most wireless LAN systems use spread-spectrum technology, a wideband radio
frequency technique developed by the military for use in reliable, secure, mission-
critical communications systems. Spread-spectrum is designed to trade off bandwidth
efficiency for reliability, integrity and security. In other words, more bandwidth is
consumed to produce a louder and thus easier to detect broadcast signal. The
drawback to this technology is when the receiver is not tuned to the right frequency, a
spread-spectrum signal looks like background noise. There are two types of spread
spectrum radio: frequency hopping and direct sequence:
Frequency-hopping Spread Spectrum Technology – (FHSS) uses a narrowband
carrier that hops among several frequencies at a specific rate and sequence as a way of
avoiding interference. Properly synchronized, the net effect is to maintain a single
logical channel. To an unintended receiver, FHSS appears to be short-duration
impulse noise.
Direct-Sequence Spread Spectrum Technology – (DSSS) uses a radio transmitter to
spread data packets over a fixed range of the frequency band. To an unintended
receiver, DSSS appears as low-power wideband noise and is rejected by most
narrowband receivers. The interoperability standard IEEE 802.11b focuses on
utilizing 11 Mbps high-rate DSSS technology as the standard for wireless networks.
Infrared Technology – little used in commercial wireless LANs, infrared (IR)
systems use very high frequencies, just below visible light in the electromagnetic
spectrum, to carry data.


How do Wireless LANs Work?


Wireless LANs use radio airwaves to communicate information from one point to
another without relying on any physical connection. Radio waves are often referred to
as radio carriers because they simply perform the function of delivering energy to a
remote receiver. The data being transmitted is superimposed (modulated) on the radio
carrier so that it can be accurately extracted at the receiving end.
In a typical wireless LAN configuration, a transmitter/receiver device, called an
access point (AP), connects to the wired network from a fixed location using standard
cabling. The access point serves as a communications "hub" that receives, buffers, and
transmits data between the wireless clients and the wired LAN. A single access point
can support a small group of users and can function within a range of less than one
hundred to several hundred feet. The access point (or antenna attached to the access
point) is usually mounted high but may be mounted essentially anywhere that is
practical as long as the desired radio coverage is obtained.
End users access the wireless LAN through wireless LAN adapters. These are mostly
implemented as PC cards in notebook computers, PCI cards in desktop computers or
as USB devices. Wireless LAN adapters provide an interface between the client
network operating system (NOS) and the airwaves via an antenna.
Some Typical Wireless LAN Configurations

Peer-to-Peer Network (Ad-Hoc Mode)

The most basic wireless LAN consists of two PCs equipped with wireless adapter
cards that form an independent network whenever they are within a range of one
another. On-demand networks, such as this example, require no administration or
preconfiguration. In this case, each client would only have access to the resources of
the other client and not to a central server. This wireless LAN setup is sometimes
called an Ad-Hoc network.


Client and Access Point (Infrastructure Mode)

Installing an access point allows each client to have access to shared resources as well
as to other clients. The access point connects to the wired network from a fixed
location using standard cabling. Each access point can accommodate many clients (up
to 16 with the Multi-Tech RouteFinder RF802EW); the specific number depends on
the number and nature of the transmissions involved. This wireless LAN setup is
sometimes called Infrastructure Mode.
Multiple Access Points and Roaming

Access points have a finite range for transmission -- around 100 meters (328 feet)
indoors and 300 meters (984 feet) outdoors. In a very large facility such as a
warehouse, or on a college campus, it will probably be necessary to install more than
one access point. Access point positioning is accomplished by means of a site survey.
The goal is to blanket the coverage area with overlapping coverage cells so that
clients might range throughout the area without ever losing network contact. The
ability of clients to move seamlessly among a cluster of access points is called
roaming. Access points hand the client off from one to another in a way that is
invisible to the client, ensuring unbroken connectivity.


IEEE 802.11 Standard for Wireless LAN


802.11 is a set of specifications for LANs (Local Area Networks) from the Institute of
Electrical and Electronic Engineers (IEEE). 802.11 defines the standard for wireless
LANs encompassing three incompatible (non-interoperable) technologies: Frequency
Hopping Spread Spectrum (FHSS), Direct Sequence Spread Spectrum (DSSS) and
Infrared. The standard promises multi-vendor interoperability among products
utilizing the same technology.
More recently ratified is a version of the 802.11 standard, called 802.11b High Rate.
This standard is based on DSSS at 11 Mbps. 2 Mbps 802.11 DSSS systems will be
able to co-exist with 11 Mbps 802.11b HR systems, enabling a smooth transition to
the higher data rate technology. (This is similar to migrating from 10 Mbps Ethernet
to 100 Mbps Ethernet, enabling a large performance improvement while maintaining
the same protocol).
Operating Range
Transmission distance will differ according to the conditions of the surroundings.
Access points have a finite wireless operating range up to 300 meters (984 feet)
outdoors and up to 100 meters (328 feet) indoors, but the actual range will vary. It is
best to try and place the access point in a location near the center of the wireless work
environment with as few obstructions as possible between the wireless clients and the
access point.
Is Transmission Possible Through a Wall?
Transmitting through a wall is possible. However, the wall must be made of material
that allows the passage of radio waves. In general, metals and concrete do not allow
radio waves to pass through. Metals reflect radio waves and concrete attenuates radio
waves.
Effect of Wireless Transmission on Other Equipments
Wireless LAN products that comply with the IEEE 802.11b standard will not interfere
with cell phones, 900 MHz cordless phones, television, radio, etc. However, since
microwave ovens and 2.4GHz cordless phones use the same frequency band,
communication may be affected if they are used near wireless LAN equipment.
Effects of Wireless Technology on the Human Body
Wireless LAN products that comply with the IEEE 802.11b are in line with the
standards and guidelines of the FCC and will not affect the human body.
Types of Security Available for Wireless LANs
WEP (Wired Equivalent Privacy a.k.a. Wireless Encryption Protocol) is data
encryption defined by the 802.11 standard that was designed to prevent access to the
network by "intruders" using similar wireless LAN equipment and to prevent the
capture of wireless LAN traffic through eavesdropping. WEP allows the administrator
to define a set of respective "Keys" for each wireless network user based on a "Key
String" passed through the WEP encryption algorithm. Access is denied to anyone
who does not have an assigned key. WEP comes in 40/64-bit and 128-bit encryption
key lengths.
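Conceptually, a WEP key drives the RC4 stream cipher and the resulting keystream is XOR-ed with the frame payload. The toy sketch below shows only that keystream idea; it omits the per-frame IV, the integrity check value and key management, so it is an illustration rather than a usable WEP implementation:

    def rc4_keystream(key, length):
        """Plain RC4 (the cipher WEP is built on) - for illustration only."""
        s = list(range(256))
        j = 0
        for i in range(256):                      # key-scheduling algorithm
            j = (j + s[i] + key[i % len(key)]) % 256
            s[i], s[j] = s[j], s[i]
        i = j = 0
        out = []
        for _ in range(length):                   # pseudo-random generation algorithm
            i = (i + 1) % 256
            j = (j + s[i]) % 256
            s[i], s[j] = s[j], s[i]
            out.append(s[(s[i] + s[j]) % 256])
        return bytes(out)

    def wep_like_encrypt(key, plaintext):
        ks = rc4_keystream(key, len(plaintext))
        return bytes(p ^ k for p, k in zip(plaintext, ks))   # same call also decrypts

    secret = wep_like_encrypt(b"40bitKEY", b"wireless frame payload")
    print(wep_like_encrypt(b"40bitKEY", secret))              # round-trips to the plaintext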


AN INTRODUCTION TO WIRELESS-FIDELITY (WI-FI)

1.0 Scope:
Wi-Fi is a registered trademark of the Wi-Fi Alliance. The products tested and
approved as "Wi-Fi Certified" are interoperable with each other, even if they are from
different manufacturers. It is short for "Wireless Fidelity" and is meant to refer
generically to any type of ‗802.11‘ network, whether ‗802.11‘b, ‗802.11‘a, dual-
band, etc. Initially the term "Wi-Fi" was used in place of the 2.4 GHz ‗802.11‘b
standard, in the same way that "Ethernet" is used in place of IEEE 802.3, but the Alliance
has expanded the generic use of the term to cover ‗802.11‘a, dual-band, etc.
2.0 General description of Wi-Fi Network:
A Wi-Fi network provides the features and benefits of traditional LAN technologies
such as Ethernet and Token Ring without the limitations of wires or cables. It
provides the final few metres of connectivity between a wired network and the mobile
user thereby providing mobility, scalability of networks and the speed of installation.
Wi-Fi is a wireless LAN technology that delivers wireless broadband speeds of up to 54
Mbps to laptops, PCs, PDAs, dual-mode Wi-Fi enabled phones, etc. Apart from data
delivery, Voice over Wi-Fi is also in the pipeline. The backhaul bandwidth from the wired
network (ADSL modem, leased line, etc.) is shared among the users.
In a typical Wi-Fi configuration, a transmitter/receiver (transceiver) device, called the
Access Point (AP), connects to the wired network from a fixed location using
standard cabling. A wireless Access Point combines router and bridging functions, it
bridges network traffic, usually from Ethernet to the airwaves, where it routes to
computers with wireless adapters. The AP can reside at any node of the wired
network and acts as a gateway for wireless data to be routed onto the wired network
as shown in Figure-1. It supports only 10 to 30 mobile devices per Access Point (AP)
depending on the network traffic. Like a cellular system, the Wi-Fi is capable of
roaming from the AP and re-connecting to the network through another AP. The
Access Point (or the antenna attached to the Access Point) is usually mounted high
but may be mounted essentially anywhere that is practical as long as the desired radio
coverage is obtained.

Figure -1: A typical Wi-Fi Network.


Like a cellular phone system, the wireless LAN is capable of roaming from the AP
and re-connecting to the network through other APs residing at other points on the
wired network. This can allow the wired LAN to be extended to cover a much larger
area than the existing coverage by the use of multiple APs such as in a campus
environment as shown in Figure 2.

Figure -2: Extending Wi-Fi coverage with multiple APs.

An important feature of the wireless LAN is that it can be used independent of a wired
network. It may be used as a stand alone network anywhere to link multiple
computers together without having to build or extend a wired network. Then a peer to
peer workgroup can be established for transfer or access of data. A member of the
workgroup may be established as the server or the network can act in a peer to peer
mode as Shown in Figure-3.

Figure-3: Wireless LAN workgroup.

End users access the Wi-Fi network through Wi-Fi adapters, which are implemented
as cards in desktop computers, or integrated within hand-held computers. Wi-Fi
wireless LAN adapters provide an interface between the client Network Operating
System (NOS) and the airwaves via an antenna. The nature of the wireless connection


is transparent to the NOS. Wi-Fi deals with fixed, portable and mobile stations and of
course, the physical layers used here are fundamentally different from wired media.
3.0 Wi-Fi Network Configuration:
3.1 A Wireless Peer-To-Peer Network: This mode is also known as ADHOC mode.
Wi-Fi networks can be simple or complex. At its most basic, two PCs equipped with
wireless adapter cards can set up an independent network whenever they are within
range of one another. This is called a peer-to-peer network. It requires no
administration or pre-configuration. In this case, each client would only have access
to the resources of the other client and not to a central server as shown in Figure-4.

Figure-4: A Wi-Fi Peer-To-Peer Network.

3.2 Client and Access Point:


This is known as INFRASTRUCTURE mode and is normally employed. However,
wireless gateway can be configured to enable peer to peer communication in this
mode as well.
In this mode, one Access Point is connected to the wired network and each client
would have access to server resources as well as to other clients. The specific number
of clients depends on the number and nature of the transmissions involved. Many real-
world applications exist where a single Access Point services from 15 to 50 client
devices as shown in Figure-5.

Figure-5: A Server and Client Wi-Fi Network.

3.3 Multiple Access Points and Roaming:


Access points can be connected to each other through UTP cable or they can be
connected to each other over radio through wireless bridging. There is an option to
connect access points in a mesh architecture where in event of a fault in an access
point the network heals itself and connectivity is ensured through other access point.
This changeover takes place dynamically.


Access Points have a finite range, of the order of 500 feet indoor and 1000 feet
outdoors. In a very large facility such as a warehouse, or on a college campus, it will
probably be necessary to install more than one Access Point. Access Point positioning
is done by a site survey. The goal is to blanket the coverage area with overlapping
coverage cells so that clients might range throughout the area without ever losing
network contact. The ability of clients to move seamlessly among a cluster of Access
Points is called roaming. Access Points hand the client off from one to another in a
way that is invisible to the client, ensuring unbroken connectivity as shown in Fig-6.

Figure-6: Multiple Access Points and Roaming.

3.4 Use of an Extension Point:


To solve particular problems of topology, the network designer some times uses
Extension Points (EPs) to augment the network of Access Points (APs). Extension
Points look and function like Access Points, but they are not tethered to the wired
network as are APs. EPs function just as their name implies: they extend the range of
the network by relaying signals from a client to an AP or another EP. EPs may be
strung together in order to pass along messaging from an AP to far-flung clients as
shown in Figure-7.

Figure -7: Wi-Fi network with Extension Point (EP).

3.5 The Use of Directional Antennae:


One last item of wireless LAN equipment to consider is the directional antenna. Let‘s
suppose you had a Wi-Fi network in your building-A and wanted to extend it to a
leased building-B, one mile away. One solution might be to install a directional
antenna on each building, each antenna targeting the other. The antenna on ‗A‘ is
connected to your wired network via an Access Point. The antenna on ‗B‘ is similarly


connected to an Access Point in that building, which enables Wi-Fi network


connectivity in that facility as shown in Figure-8.

Figure-8: A Wi-Fi network using Directional Antennae.

4.0 The Wi-Fi working:


There are two methods of spread spectrum modulation used within the unlicensed 2.4
GHz frequency band:
1. Frequency Hopping Spread Spectrum (FHSS) and
2. Direct Sequence Spread Spectrum (DSSS).

Spread spectrum is ideal for data communications because it is less susceptible to
radio noise and creates little interference; it is used to comply with the regulations for
use in the ISM band. Using frequency hopping, the 2.4 GHz band is divided into 75
one-MHz channels. FHSS allows for a less complex radio design than DSSS, but
FHSS is limited to a 2 Mbps data transfer rate; the reason for this is the FCC
regulations that restrict sub-channel bandwidth to 1 MHz, causing many hops, which
means a high amount of hopping overhead. For wireless LAN applications, DSSS is a
better choice. DSSS divides the 2.4 GHz band into 14 channels (in the US only 11
channels are available). Channels used at the same location should be separated by 25
MHz from each other to avoid interference. This means that only 3 channels can exist
at the same location (Figure 9). FHSS and DSSS are fundamentally different
signaling mechanisms and are not capable of interoperating with each other.
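The three-channel figure can be checked with a little arithmetic: in the 2.4 GHz band channel 1 is centred at 2412 MHz and successive channel centres are 5 MHz apart, so channels must be at least five numbers apart to achieve 25 MHz separation. The small sketch below just illustrates that reasoning:

    # DSSS channel centres in the 2.4 GHz band: channel 1 = 2412 MHz, 5 MHz spacing.
    def centre_mhz(channel):
        return 2412 + 5 * (channel - 1)

    def non_overlapping(channels, min_separation_mhz=25):
        """True if every pair of channels is separated by at least 25 MHz."""
        freqs = [centre_mhz(c) for c in channels]
        return all(abs(a - b) >= min_separation_mhz
                   for i, a in enumerate(freqs) for b in freqs[i + 1:])

    print(non_overlapping([1, 6, 11]))   # True  -> the usual 3-channel plan
    print(non_overlapping([1, 4, 8]))    # False -> nearby cells would interfere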


Figure 9: DSSS channels.

4.1 The Wi-Fi Physical Layer:


The Physical Layer is further subdivided into the following two sub layers:
1. A Physical Layer Convergence Procedure (PLCP) sub layer and
2. A Physical Media Dependent (PMD) sub layer.
PLCP adapts the capabilities of the physical medium dependent system to the
Physical Layer service. It presents an interface for the MAC sub layer to write to and
provides carrier sense and Clear Channel Assessment (CCA).
PMD defines the method of transmitting and receiving data through a wireless
medium between two or more stations each using the same modulation system. It
takes care of the wireless encoding.
4.2 The Wi-Fi Data Link Layer:
The ‗802.11‘ Data Link Layer is divided into the following two sub layers:
1. Logical Link Control (LLC) and
2. Media Access Control (MAC).
The LLC sub layer is the same in ‗802.11‘ as in other 802 LANs and can easily be
plugged into a wired LAN, but ‗802.11‘ defines a different MAC protocol. For
Ethernet LANs, the CSMA/CD protocol regulates the access of the stations. In a
WLAN collision detection is not possible.
The ‗802.11‘ standard defines the protocol and compatible interconnection of data
communication equipment via the air, radio or infrared, in a Local Area Network
(LAN) using the CSMA/CA medium sharing mechanism. This basic access method
for ‗802.11‘ is called Distributed Coordination Function (DCF) and it is mandatory
for all stations.


A second media access control method, the Point Coordination Function (PCF), is an
optional extension to DCF. PCF provides a time division duplexing capability to
allow the Access Point to deal with time bounded, connection-oriented services.
Using this method, one AP controls the access through a polling system.
CSMA/CA (Figure 10) needs each station to listen to other users. If the channel is
idle the station is allowed to transmit. If it is busy, each station waits until
transmission stops, and then enters into a random back off procedure. This prevents
multiple stations from owning the medium immediately after completion of the
preceding transmission. Packet reception in DCF requires acknowledgements (ACK).
The period between completion of packet transmission and start of the ACK frame is
one Short Inter Frame Space (SIFS). ACK frames have a higher priority than other
traffic. Fast acknowledgement is one of the features of the ‗802.11‘ standard, because
it requires ACKs to be handled at the MAC sub layer. Transmissions other than ACKs
must wait at least one DCF Inter Frame Space (DIFS) before transmitting data. If a
transmitter senses a busy medium, it determines a random back-off period by setting
an internal timer to an integer number of slot times. Upon expiration of a DIFS, the
timer begins to decrement. If the timer reaches zero, the station may begin
transmission. If the channel is seized by another station before the timer reaches zero,
the timer setting is retained at the decremented value for subsequent transmission.
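A compressed sketch of that DIFS-and-backoff behaviour is given below; the channel trace, contention window value and function name are invented for the example, and the model ignores many details of the real standard:

    import random

    def dcf_backoff(channel_idle_by_slot, cw=15):
        """Illustrative DCF countdown: returns the slot in which transmission starts,
        decrementing the backoff timer only in slots where the channel is idle."""
        timer = random.randint(0, cw)        # random backoff in whole slot times
        for slot, idle in enumerate(channel_idle_by_slot):
            if not idle:
                continue                     # channel seized: keep the decremented value
            if timer == 0:
                return slot                  # timer expired on an idle slot: transmit
            timer -= 1
        return None                          # channel never freed up in this trace

    # Channel trace after the DIFS wait: True = idle slot, False = busy slot.
    trace = [True, True, False, False, True, True, True, True, True]
    print("station transmits in slot", dcf_backoff(trace, cw=3))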
The method described above relies on the underlying assumption that every station
can hear all other stations. This is not always the case: this problem is known as the
Hidden-Node Problem. The hidden node problem arises when a station is able to
successfully receive frames from two other transmitters but the two transmitters can
not receive signals from each other. In this case a transmitter may sense the medium
as being idle even if the other one is transmitting. This results in a collision at the
receiving station.

Figure 10: CSMA/CA algorithm.

To provide a solution for this problem, another mechanism is present: the use of
RTS/CTS frames (Figure 11). A Request To Send (RTS) frame is sent by a potential
transmitter to the receiver and a Clear To Send (CTS) frame is sent from the receiver
in response to the received RTS frame. If the CTS frame is not received within a
certain time interval the RTS frame is re-transmitted by executing a back-off


algorithm. After a successful exchange of the RTS and CTS frames, the data frame
can be sent by the transmitter after waiting for a SIFS. RTS and CTS include a
duration field that specifies the time interval necessary to transmit the data frame and
the ACK. This information is used by stations which can hear the transmitter or the
receiver to update their Net Allocation Vector (NAV), a timer which is always
decremented. The drawback of using RTS/CTS is an increased overhead which may
be very important for short data frames. The efficiency of RTS/CTS depends upon the
length of the packets. RTS/CTS are typically used for large-size packets, for which re-
transmissions would be expensive from a bandwidth viewpoint.
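The duration/NAV bookkeeping can be pictured in a few lines: a station overhearing an RTS or CTS records the advertised duration as its Network Allocation Vector and treats the medium as busy until that timer runs out. The class name and figures below are illustrative only:

    class Station:
        """Toy model of the virtual carrier sense (NAV) used with RTS/CTS."""
        def __init__(self, name):
            self.name = name
            self.nav = 0                       # microseconds the medium is reserved for

        def overhear(self, frame_type, duration_us):
            # On hearing an RTS or CTS, extend the NAV if the advertised
            # reservation is longer than what we already know about.
            if frame_type in ("RTS", "CTS"):
                self.nav = max(self.nav, duration_us)

        def medium_free(self, elapsed_us):
            self.nav = max(0, self.nav - elapsed_us)   # the NAV is always decremented
            return self.nav == 0

    c = Station("C")                    # a station that can hear only the receiver
    c.overhear("CTS", duration_us=300)  # CTS advertises time for the data frame + ACK
    print(c.medium_free(100))           # False - medium still reserved
    print(c.medium_free(250))           # True  - reservation has expired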
Two other robustness features of the ‗802.11‘ MAC layer are the CRC checksum and
packet fragmentation. Each packet has a CRC attached to ensure its correctness. This
is different from Ethernet, where higher-level protocols such as TCP handle error
checking. Packet fragmentation is very useful in congested or high interference
environments since larger packets have a better chance to get corrupted. The MAC
layer is responsible for re-assembling the received fragments; this makes the process
transparent to higher-level protocols.
IEEE ‗802.11‘b: In 2000, ‗802.11‘b became the standard wireless Ethernet
networking technology for both business and home. That year, wireless networking
took a giant leap with the release of 11 Mbps products, based on this ‗802.11‘b
standard (commonly known as Wi-Fi).
First generation of wireless adapters supported 1 or 2 Mbps. This is very low
compared to wired Ethernets, defined by the Institute of Electrical and Electronics
Engineers (IEEE) in the 802.3 standard, which are able to operate at 10 Mbps,
100Mbps,or even 1000Mbps. ‗802.11‘b transmits at 2.4 GHz, the same spectrum as
microwave ovens. The cards use less power than a mobile phone. Cisco warns that
their PCMCIA card should be more than 4 cm from your body, and the Access Point's
antenna should be at least 15 cm away from the body.

Figure 11: RTS/CTS.

5.0 Wireless LAN Standards:


Wi-Fi and IEEE ‗802.11‘ are often used interchangeably. Wi-Fi is an interoperability
certification program promoted by the Wi-Fi alliance, the idea is that a consumer
should look for the Wi-Fi logo and feel free that this ‗802.11‘ product will work with


other Wi-Fi certified products. The ‗802.11‘ specifications are wireless standards that
define the ―over-the-air‖ interface between a wireless client and a base station or Access
Point. The standard includes task groups called ‗802.11‘b, a, e and g working on amendments.
1. „802.11‟b was the first version to reach the marketplace. It is the slowest and
least expensive of the three. As mentioned above, ‗802.11‘b transmits at 2.4
GHz ISM band and can handle up to 11 megabits per second. Wi-Fi reaches
only about 7Mbps of throughput due to synchronization issues, ACK overhead
etc.
2. „802.11‟g: The -g group is a natural speed extension for the ‗802.11‘b
standard. It will extend the highly successful family of IEEE ‗802.11‘
standards, with data rates up to 54 Mbps in the 2.4 GHz band.
3. „802.11‟a: Task Group (TG a) operates in the 5GHz band. Because its
operating frequency is higher than that of ‗802.11‘b, ‗802.11‘a has a smaller
range. It tries to solve this distance problem by using more power and more
efficient data encoding schemes. The higher frequency band gives the
advantage of not residing in the crowded 2.4GHz region where we see
cordless phones, Bluetooth and even microwave ovens operating.
4. The major advantage is its speed: the spectrum of ‗802.11‘a is divided into
8 sub-network segments or channels of about 20 MHz each. These channels
are responsible for a number of network nodes. The channels are made up of
52 carriers of 300 KHz each, and can present a maximum of 54 Mbps. This
speed takes WLAN from the first generation Ethernet (10 Mbps) to the second
(Fast Ethernet, 100Mbps). The new specification is based on a OFDM
modulation scheme. The RF system operates at 5.15 to 5.25, 5.25 to 5.35 and
5.725 to 5.825 GHz U-NII bands. The OFDM system provides 8 different data
rates between 6 to 54 Mbps. It uses BPSK, QPSK, 16-QAM and 64-QAM
modulation schemes coupled with forward error correcting coding. Important
to remember: ‗802.11‘b is completely incompatible with ‗802.11‘a.
5. „802.11‟e Task Group (TG e) is proceeding to build improved support for
Quality of Service. The aim is to enhance the current ‗802.11‘ MAC to expand
support for LAN applications with Quality of Service requirements, to provide
improvements in security and in the capabilities & efficiency of the protocol.
Its applications include transport of voice, audio and video over ‗802.11‘
wireless networks, video conferencing, media stream distribution, enhanced
security applications and mobile & nomadic access applications.
6. „802.11‟d Task Group (TG d) describes a protocol that will allow an ‗802.11‘
device to receive the regulatory information required to configure itself
properly to operate anywhere on earth. The current ‗802.11‘ standard defines
operation in only a few regulatory domains (countries). This supplement will
add the requirements and definitions necessary to allow ‗802.11‘ WLAN
equipment to operate in markets not served by the current standard.
6.0 Specifications:
1. It uses one of the ISM frequency bands; these bands are 902 to 928 MHz, 2.4 to
2.4835 GHz, and 5.725 to 5.85 GHz, out of which the 2.4 to 2.4835 GHz band is
most commonly used.
2. RF powers radiated by nodes are limited to one watt.
3. Spread spectrum modulation technique is used for data communication as it is
less susceptible to radio noise and creates little interference


4. In Wi-Fi networks, Direct Sequence Spread Spectrum Modulation (DSSS)


technique is used. It divides the 2.4 GHz band into 14 channels. Channels used
at the same location should be separated by 25 MHz from each other to avoid
interference. Therefore only 3 channels can exist at the same location.
5. The channel bandwidth is 20 MHz, with channels used at the same location separated by 25 MHz.
6. Wi-Fi uses CSMA/CA (Carrier Sense Multiple Access with Collision
Avoidance) at the MAC layer: a client wishing to communicate must first listen
on the network.
7. Antenna Diversity is used to improve the range and performance of systems,
especially near the edge of the range profile, the marginal area. Antenna
diversity is the use of multiple Antennae that are physically separated.
8. FDM (Orthogonal Frequency Division Multiplexing) is the modulation
scheme which offers high data speed of 54 Mbps.
9. For data, DCF (Distributed Coordination Function) is used, while for voice,
PCF (Point Coordination Function) is used.
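The following small Python sketch (illustrative only, not part of any standard) shows why the 25 MHz separation rule leaves only three usable channels at one location. It assumes the usual 2.4 GHz channel plan in which channel n (1 to 13) is centred at 2412 + 5*(n-1) MHz, and greedily picks channels whose centres are at least 25 MHz apart.

def centre_mhz(channel):
    # Centre frequency of 2.4 GHz channel n (1..13) in MHz
    return 2412 + 5 * (channel - 1)

def non_overlapping(channels=range(1, 12), min_sep_mhz=25):
    chosen = []
    for ch in channels:
        if all(abs(centre_mhz(ch) - centre_mhz(c)) >= min_sep_mhz for c in chosen):
            chosen.append(ch)
    return chosen

print(non_overlapping())   # [1, 6, 11]

This matches the channel plan mentioned later in section 7.1, where neighbouring access points are assigned channels 1, 6 and 11.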
7.0 Configuring Wi-Fi:
7.1 Configuring a New Hotspot: Most wireless Access Points come with default
values built-in. Once you plug them in, they start working with these default values in
90 percent of the cases. However, you may want to change things. You normally get
to set three things on your Access Point:
The SSID: It will normally default to the manufacturer's name (e.g. "Linksys" or
"Netgear"). You can set it to any word or phrase you like.
The same SSID, e.g. BSNL, should be configured in all access points to allow seamless
roaming between access points.
The channel: Normally it will default to channel 6. However, if a nearby neighbor is
also using an Access Point and it is set to channel 6, there can be interference. Choose
any other channel between 1 and 11. An easy way to see if your neighbors have
Access Points is to use the search feature that comes with your wireless card.
Neighboring APs should use channels 1, 6 and 11; however, these can be reused after
some distance, as is done in GSM cell planning.
The WEP key: The default is to disable Wired Equivalent Privacy (WEP). If you
want to turn it on, you have to enter a WEP key and turn on 128-bit encryption.
It should be enabled to secure the network against eavesdropping and hacking,
though it is not foolproof.
Access Points come with simple instructions for changing these three values.
Normally you do it with a Web browser. Once it is configured properly, you can use
your new hotspot to access the Internet from anywhere in your network.
7.2 Configuring Wi-Fi in client machine: On the newest machines, an ‗802.11‘ card
will automatically connect with an ‗802.11‘ hotspot and a network connection will be
established. As soon as you turn on your machine, it will connect and you will be able
to browse the Web, send email, etc. using Wi-Fi. On older machines you often have to
go through this simple three-step process to connect to a hotspot:
Access the software for the ‗802.11‘ card: Normally there is an icon for the card down
in the system tray at the bottom right of the screen.
Click the "Search button" in the software. The card will search for all of the available
hotspots in the area and show you a list.
Double-click on one of the hotspots to connect to it.
Old ‗802.11‘ equipment has no automatic search feature. You have to find what is
known as the SSID of the hotspot (usually a short word of 10 characters or less) as
well as the channel number (an integer between 1 and 11) and type these two pieces
of information in manually. All the search feature is doing is grabbing these two
pieces of information from the radio signals generated by the hotspot and displaying
them for you.
8.0 Benefits of Wi-Fi:
In a Wi-Fi network, users can access shared information without looking for a place to
plug in, and network managers can set up or augment networks without installing or moving
wires. Wi-Fi offers the following productivity, convenience, and cost advantages
over traditional wired networks:
1. Mobility: Wi-Fi systems can provide LAN users with access to real-time
information anywhere in their organization. This mobility supports
productivity and service opportunities not possible with wired networks.
2. Installation Speed and Simplicity: Installing a Wi-Fi system can be fast and
easy and can eliminate the need to pull cable through walls and ceilings.
3. Installation Flexibility: Wireless technology allows the network to go where
wire cannot go.
4. Reduced Cost-of-Ownership: While the initial investment required for Wi-Fi
hardware can be higher than the cost of wired LAN hardware, overall
installation expenses and life-cycle costs can be significantly lower. Long-
term cost benefits are greatest in dynamic environments requiring frequent
moves, adds, and changes.
5. Scalability: Wi-Fi systems can be configured in a variety of topologies to
meet the needs of specific applications and installations. Configurations are
easily changed and range from peer-to-peer networks suitable for a small
number of users to full infrastructure networks of thousands of users that
allows roaming over a broad area.
6. It offers much higher speed, up to 54 Mbps, which is far greater than other
wireless access technologies like corDECT, GSM and CDMA.
9.0 WPA (Wi-Fi Protected Access)
Wi-Fi Protected Access (WPA) is an encryption standard for WLAN security; it secures
access to the WLAN at the Access Point (AP). WPA provides strong encryption using the
Temporal Key Integrity Protocol (TKIP) together with the RC4 algorithm. WPA was
defined by the Wi-Fi Alliance, which adopted all the relevant specifications from the
802.11i working group. Besides TKIP as a replacement for WEP, WPA adopted the
standardized handshake between the client and the Access Point (AP) for deriving the
session keys, as well as a simplified procedure for deriving the master secret from a
passphrase, which works without a RADIUS server, and the negotiation of the encryption
procedure between the Access Point and the client.
The version WPA2, which conforms to 802.11i, relies on AES encryption and thereby
fulfils the security guidelines demanded by many US authorities. WPA2 has two operating
modes, the Personal mode and the Enterprise mode, which differ in authentication. While
the Personal mode works with passwords (a pre-shared key), the Enterprise mode relies on
remote authentication by means of the RADIUS and EAP protocols. This procedure
corresponds to 802.1X. (Information taken from the Internet.)
10.0 Limitation of Wi-Fi networks:
The key areas of limitation of Wi-Fi are:
1. Coverage: A single Access Point can cover, at best, a radius of only about 60
metres. Hundreds of Access Points are necessary to provide seamless coverage
even in a small area. For a 10 square km area roughly 650 Access Points are
required, whereas CDMA 2000 1xEV-DO requires just 9 sites.
2. Roaming: It lacks roaming between different networks hence wide spread
coverage by one service provider is not possible, which is the key to success
of wireless technology.
3. Backhaul: Backhaul directly affects the data rate; service providers use cable or
DSL for backhaul. Wi-Fi real-world data rates are at best about half of their
theoretical peak rates due to factors such as signal strength, interference and
radio overhead. Backhaul reduces the remaining throughput further.
4. Interference: Wi-Fi uses unlicensed spectrum, which means there is no regulatory
recourse against interference. The most popular type of Wi-Fi, ‗802.11‘b, uses
the crowded 2.4 GHz band which is also used by Bluetooth, cordless
phones and microwave ovens.
5. Security: Wi-Fi Access Points and modems use the Wired Equivalent Privacy
(WEP) standard, which is susceptible to hacking and eavesdropping. WPA
(Wi-Fi Protected Access) offers much better security with the help of dynamic
key encryption and mutual authentication.
6. Authentication, Authorization and Accounting: In a server-based configuration,
whenever a laptop enters a Wi-Fi zone, a welcome page is sent to it. The user
enters a username and password, and the request is passed through the wireless
gateway (router) to the AAA and LDAP servers. Once authenticated, the user
can access sites of his choice. Prepaid and postpaid customers can be billed.
( P Khan JTO, TP WMA Mum.)
11.0 Abbreviations:
1. LAN: Local Area Network.
2. AP: Access Point.
3. EP: Extension Point.
4. ISM: Industrial, Scientific & Medical.
5. MAC: Media Access Control.
6. CSMA/CA: Carrier Sense multiple Access with Collision Avoidance.
7. CDMA 2000 1x EV-DO: CDMA 2000 1x Evolution Version Data Only.
8. IEEE: Institute of Electrical & Electronics Engineers.
9. OSI: Open Systems Interconnection.
10. WEP: Wired Equivalent Privacy.
12. References:
1. Article in PC Quest Magazine August 2003 issue.
2. Article in CHIP magazine September 2004 issue.
3. Article in Network Magazine February 2001 issue.
4. Technical article at internet site: www.wirelesslan.com.
5. Technical article at internet site: www.proxim.com.
ADDRESS RESOLUTION PROTOCOL
What is Address Resolution Protocol?
All Interfaces on the Network are identified by a unique 32-bit IP address. Every
IP datagram carries in its header the Source/Destination IP address for Routing the
Packet. However, for actual transmission, these IP datagrams are encapsulated in Data
Link Layer Frames. The Data Link layer Frame needs Hardware Addresses as part of
their framing (See Fig. 1).
The Protocols required to create the association between Hardware Addresses
(Physical Addresses) and IP Addresses are called ADDRESS RESOLUTION
PROTOCOL.
Fig. 1: Host A (Name: A, IP Address 144.12.12.06, HA 080010C2A102 Hexa) and Host B
(Name: B, IP Address 144.12.12.26, HA 080010310596 Hexa). The Data Link frame from B carries:
Dest. H/W Addr = ??? | Source H/W Addr = 080010310596 | Ethernet Type = 0800 | IP datagram | CRC
Need for Address Resolution Protocol
The host system knows the IP address of the Destination by using DNS (Domain
Name System) or a Table Look-up. But the IP datagram cannot be transmitted without
Destination Hardware Address in a MAC (Media Access Control) Frame. (See Fig.
2).
Fig. 2: The IP Datagram (Source IP Addr. 144.12.12.26, Dest. IP Addr. 144.12.12.06, Data)
is encapsulated in a Data Link Frame (MAC Frame):
Dest. H/W Addr = ??? | Source H/W Addr = 080010310596 Hexa | Ethernet Type = 0800 Hexa | IP datagram | CRC
One solution to this problem is manually configuring in the TCP/IP system the relation
between the IP Address and MAC Address of all Nodes in that Network Segment. The
problem with this approach is that if the Network Interface Card (NIC) is replaced on a
host, the MAC Address changes and the table has to be updated on all Nodes.
Hence there is a need for a Dynamic Mechanism to determine the Destination
Hardware Address knowing its IP Address. This Dynamic Mechanism is
implemented as a separate Protocol called the ADDRESS RESOLUTION
PROTOCOL.
ARP Format
Fig 3 shows the format of ARP-Request and ARP-Reply packets and their
encapsulation in the Data Link Frame (e.g. MAC Frame). The Ethernet type value
‗0806‘ Hexadecimal is reserved for ARP frames.
Fig. 3: ARP-Request / ARP-Reply packet format:
Hardware Type (2 Octets) | Protocol Type (2 Octets) | Hlen (1 Octet) | Plen (1 Octet) |
Operation Field (2 Octets) | Sender H/W Addr. (6 Octets) | Sender IP Addr. (4 Octets) |
Target H/W Addr. (6 Octets) | Target IP Addr. (4 Octets)
Operation Field = 1 for ARP-Request, = 2 for ARP-Reply.
Encapsulation in the Data Link Frame (MAC Frame):
Dest. H/W Addr. (Note 1) | Source H/W Addr. (e.g. ‗080010310596‘) | Ethernet Type (‗0806‘) | ARP Data | CRC
Note 1:
For an ARP-Request Pkt: Dest. H/W Addr. is ‗FFFF FFFF FFFF‘ (Broadcast Addr.)
For an ARP-Reply Pkt: Dest. H/W Addr. is the H/W address of the sender who generated
the ARP-Request Packet (Point-to-Point).
ARP Request Format
Hardware Type:- 2 Octets
Value ‗1‘ in the Hardware Type field indicates it is an Ethernet Network. Other values are
listed in Table 1.
Table 1: ARP Hardware Type Values
Hardware Type value    Description of Network
1                      Ethernet (10 Mbps)
6                      IEEE 802 Networks
7                      ARCNET
Protocol Type:- 2 Octets
Value ‗0800‘ Hexadecimal indicates it is the DoD IP Protocol. For other Protocol types
see Table 2.
Table 2: Frequently used Protocol Type (Ethernet Type) Values
Ethernet Type (Hexa)    Ethernet Type Field Assignment
0800                    DoD IP
0806                    ARP (Address Resolution Protocol)
8035                    RARP (Reverse Address Resolution Protocol)
Hlen :- 1 Octet
Hardware Address Length value is '6 Octets' in Ethernet
Plen :- 1 Octet
Protocol Address Length value is '4 Octets' in DoD IP Protocol.
Operation:- 2 Octets
For an ARP Request the operation field value is '1'. For an ARP Reply the value is '2'.
Refer to Table 3 for other values.
Table 3
Operation Values for ARP Packet
Operation Field Value Type of Operation
1 ARP-Request
2 ARP-Reply
3 RARP-Request
4 RARP-Reply
5 DRARP-Request
6 DRARP-Reply
7 DRARP-Error
8 InARP-Request
9 InARP-Reply
10 ARP-NAK
Sender Hardware Address:- 6 Octets
The Sender Hardware Address contains the Hardware Address of the Sender.
Sender IP Address :- 4 Octets
Sender IP Address contains the IP Address of the Node sending the ARP Request.
Target Hardware Address:- 6 Octets
The Target Hardware Address is to be determined by the ARP Protocol. It is either set to
all '0's or all '1's (all '1's in case of Ethernet).
Target IP Address:- 4 Octets
This is the IP Address of the Target Node. The Target node responds with its Hardware
Address in the ARP-Reply Packet after identifying this IP Address.
Encapsulation of ARP-Request Packet at the Data Link Level:-
Data Link Source Hardware Address is Hardware address of the ARP Request sender.
Data Link Destination Hardware Address is Ethernet Broadcast Address usually all
'1's. (FFFF FFFF FFFF) Hexadecimal (See Fig.4)
Note: ARP Protocol operates on the Physical Network which supports Broadcast
capability viz. Ethernet, Token Ring, FDDI, ARCnet etc.
Ethernet Type Value is '0806' Hexadecimal which indicates that the ARP Data is
carried in the Frame.
Fig. 4: ARP exchange between Host A (Name: A, IP Address 144.12.12.06, HA 080010C2A102 Hexa)
and Host B (Name: B, IP Address 144.12.12.26, HA 080010310596 Hexa). B broadcasts the ARP
Request and A replies point-to-point.
Broadcast frame:
Dest. H/W Addr = ‗FFFFFFFFFFFF‘ | Source H/W Addr = ‗080010310596‘ | Ethernet Type = ‗0806‘ | ARP Request Pkt | CRC
Point-to-Point frame:
Dest. H/W Addr = ‗080010310596‘ | Source H/W Addr = ‗080010C2A102‘ | Ethernet Type = ‗0806‘ | ARP Reply Pkt | CRC
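As an illustration of the packet layout in Fig. 3 and Fig. 4, the following Python sketch packs an ARP-Request for host B asking for the hardware address of host A, and wraps it in a broadcast Ethernet frame. It is a minimal sketch only; the field values follow the figures above and nothing here is part of a real network stack.

import socket
import struct

def build_arp_request(sender_mac, sender_ip, target_ip):
    htype = 1                      # Hardware Type: Ethernet
    ptype = 0x0800                 # Protocol Type: DoD IP
    hlen, plen = 6, 4              # Hlen = 6 octets, Plen = 4 octets
    oper = 1                       # Operation: 1 = ARP-Request
    target_mac = b"\x00" * 6       # unknown, to be resolved
    return struct.pack("!HHBBH6s4s6s4s",
                       htype, ptype, hlen, plen, oper,
                       sender_mac, socket.inet_aton(sender_ip),
                       target_mac, socket.inet_aton(target_ip))

def build_ethernet_frame(source_mac, arp_payload):
    dest_mac = b"\xff" * 6                   # broadcast address, as in Note 1 of Fig. 3
    ethertype = struct.pack("!H", 0x0806)    # Ethernet Type reserved for ARP
    return dest_mac + source_mac + ethertype + arp_payload   # CRC/FCS omitted

mac_b = bytes.fromhex("080010310596")
frame = build_ethernet_frame(mac_b, build_arp_request(mac_b, "144.12.12.26", "144.12.12.6"))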
ARP - Reply Format
The ARP- Reply Packet uses the same format as ARP-Request, but the Operation
field value is set to '2' to indicate it is ARP-Reply.
Sender Hardware Address:- 6 Octets
This contains the Target node's Hardware Address. (This is the Answer)
Sender IP Address:- 4 Octets
This contains the Target Node's IP address.
Target Hardware Address:- 6 Octets
This contains the Hardware Address of the Node which generated the ARP-Request
Packet.
Target IP Address:- 4 Octets
This contains the IP Address of the Node which generated the ARP-Request Packet.
Encapsulation of ARP-Reply Packet at the Data Link Level:-
Data Link Source Hardware Address is Hardware address of the Node generating
ARP-Reply Packet.
Data Link Destination Hardware Address is the Hardware Address of the Node which
generated the ARP-Request Packet. The ARP-Reply is not Broadcast; it is Point-to-Point
(See Fig. 4).
Ethernet Type Value is '0806' Hexadecimal which indicates that the ARP Data is
carried in the Frame.
ARP Operation:-
When IP Datagram is ready for transmission the Routing Component in the Network
Layer (IP Layer) determines whether the Destination IP address is in Local Network
or Remote Network. If it is in Local Network the sender host needs to find out the
Hardware Address of the Target Node. If it is in the Remote Network the sender host
needs to find out the Hardware Address of the Router Port to which the IP Datagram
is to be forwarded (See Fig 5).
Fig 5: A datagram from the upper layers reaches the Routing Component in the Network (IP)
layer, which passes it through the Data Link and Physical layers either directly onto the
local network (Destination on Local Network) or to an IP Router leading to the external
network (Destination on Remote Network).
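A minimal Python sketch of the forwarding decision just described; the network values and the helper below are illustrative only and are not taken from the course text.

import ipaddress

def next_hop_ip(dest_ip, local_net, default_router):
    # Return the IP address whose hardware address must be resolved by ARP.
    if ipaddress.ip_address(dest_ip) in ipaddress.ip_network(local_net):
        return dest_ip            # destination is on the local network
    return default_router         # destination is remote: ARP for the router port

# Assuming 144.12.12.0/24 is the local segment and 144.12.12.1 the router port:
print(next_hop_ip("144.12.12.26", "144.12.12.0/24", "144.12.12.1"))  # 144.12.12.26
print(next_hop_ip("10.0.0.5", "144.12.12.0/24", "144.12.12.1"))      # 144.12.12.1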
The ARP Protocol cannot be routed; that is, it cannot cross the Router boundary. Before
sending the ARP Request, the ARP module tries to find the Target Address in the ARP
Cache Table. The ARP Cache Table keeps pairs of entries of IP addresses and the
corresponding Hardware Addresses (See Table 4).
Table 4: ARP Cache Table
Protocol Type (IP)    Protocol Address (IP Address)    Hardware Address (MAC Address)    Time Stamp (Minutes)
0800                  144.12.12.06                     080010C2A102                      15
...                   ...                              ...                               ...
If the Target IP Address is found in the ARP Cache Table, the corresponding Hardware
Address is returned and the IP datagram is transmitted to the destination in a MAC
Frame. If the Target IP Address is not found in the ARP Cache Table, an ARP Request is
broadcast at the Data Link Layer, and on receipt of the ARP-Reply the ARP Cache Table
is updated. Usually the age of an ARP Cache entry is 15 minutes. After timeout, an ARP
Request is again needed to find the Hardware Address of the Target.
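A simple sketch of this cache-and-timeout behaviour; the send_arp_request helper is assumed purely for illustration and is not a real function.

import time

CACHE_LIFETIME = 15 * 60          # seconds, i.e. the 15-minute age mentioned above

arp_cache = {}                    # IP address -> (MAC address, time stamp)

def resolve(ip):
    entry = arp_cache.get(ip)
    if entry and time.time() - entry[1] < CACHE_LIFETIME:
        return entry[0]                      # fresh entry: use the cached hardware address
    mac = send_arp_request(ip)               # assumed helper: broadcast ARP-Request, wait for ARP-Reply
    arp_cache[ip] = (mac, time.time())       # update the cache and restart the 15-minute timer
    return mac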
Procedure involved in Routing an IP Packet from ‗Node A1‘ to ‗Node B1‘:- (See Fig.
6)
Procedure involved at Node A1:-
1. Since the Destination IP Address is not in the Local Network the IP Datagram
is to be forwarded to Router-A which is connected to the Remote Network.
2. Node A1 looks into ARP Cache Table to find the H/W Address of Router-A.
3. If found the IP datagram is forwarded to Destination H/W Address of
Router-A.
4. If not found the Node A1 generates ARP-Request packet to find the H/W
Address of Router-A and Broadcasts the MAC Frame Containing the ARP-
Request Packet.
5. Router-A responds with its H/W Address in the ARP-Reply Packet
encapsulated in a MAC Frame addressed to Node A1.
6. Node A1 updates the ARP-Cache Table and sets the Time stamp value to 15
Minutes.
7. Node A1 sends the IP Datagram encapsulated in a MAC Frame to Router-A .
Procedure involved at Router-A:-
1. Router-A analyses the Destination IP Address in IP Datagram and Routes the
Packet to Router-B
Procedure involved at Router-B:-
1. Since the Destination IP Address belongs to the Local Network the IP
Datagram is to be forwarded to Node B1 which is directly connected to the
Ethernet LAN.
2. Router-B looks into ARP Cache Table to find the H/W Address of the Node
B1.
3. If found the IP datagram is forwarded to Destination H/W Address of Node
B1.
4. If not found the Router-B generates ARP-Request packet to find the H/W
Address of Node B1 and Broadcasts the MAC Frame Containing the ARP-
Request Packet
5. Node B1 responds with its H/W Address in the ARP-Reply Packet
encapsulated in a MAC Frame addressed to Router-B.
6. Router-B updates the ARP-Cache Table and sets the Time stamp value to 15
Minutes.
7. Router-B sends the IP Datagram encapsulated in a MAC Frame to Node B1.
Procedure involved at Node B1:-
1. Node B1 receives the IP Datagram sent by Node A1.
Fig. 6: Flow of ARP Packets / IP Datagrams. Node A1 and Node A2 are on the LAN of
Router-A; Node B1 and Node B2 are on the LAN of Router-B; a Router Network connects
Router-A and Router-B. The sequence is: ARP-Request and ARP-Reply between Node A1
and Router-A, the IP Datagram from Node A1 to Router-A and on to Router-B, then an
ARP-Request and ARP-Reply between Router-B and Node B1, and finally the IP Datagram
from Router-B to Node B1.
Dynamic Host Configuration Protocol
The need for DHCP
Any protocol is essentially software that runs on a specific computer and manages
all the "talking" with other computers in the "protocol language". In order for the
same software to run on different machines there is a need to initialize the protocol
with parameters specific to that machine and to the local network before starting
proper operation. Initialization can be done during booting (if the protocol is
embedded in the operating system) or it can be triggered by a specific application (if
the protocol is embedded into the application).
Take for example the TCP/IP protocol stack: first of all, the IP protocol needs to know
the IP address of the computer. Moreover, it needs to know the network subnet-mask,
IP addresses of the default router, the printer, the DNS and perhaps some other servers
etc.
Those parameters can be configured manually and locally for each and every
computer. Using a mechanism like that introduces some problems:
1. A lot of manual work is required by the network administrator, being time
consuming and error prone.
2. Keeping the parameters up-to-date is not a one-time effort. The amount of
work increases with the changes in the net (e.g portable computers that change
locations frequently introduce a lot of daily work for network administrators).
3. A change in a parameter common to all the computers in a subnet (e.g. the
local router's address) forces changes in each computer on the net.
4. Some systems may not have a permanent storage device (e.g. hard disk) to
store the configuration parameters, in which case no local configuration can
be considered.
5. In cases of a shortage of IP addresses and a network that changes frequently,
it can be a waste to give a computer (that may be out of the network for a
while) a permanent address. A better approach is to use a common pool of
addresses shared by a set of computers. Manual configuration gives no easy
way of doing so.
All these reasons lead to the need for an automated mechanism for TCP/IP protocol
configuration, and DHCP is currently the most advanced mechanism for doing so.
DHCP Goals
Compatibility
The DHCP protocol must be compatible with existing interfaces and protocols; for
example, it should be compatible with all types of clients, each of which can have a
different configuration of communication parameters.
As mentioned in "Protocol Introduction", the DHCP protocol should also be
compatible with the BOOTP protocol which was used before.
Local control
Although the DHCP protocol is an external mechanism for allocating a network address
and communication parameters, the administrator of the client must have the
capability to control these parameters.
Communication parameters preserving:
The DHCP server has to give a specific client the same communication parameters in
as many sequential requests as possible, so that if a client is disconnected from the
network and requests its address/communication parameters again, it receives the same
communication parameters as before (if nothing has changed since the disconnection).
The same holds for the DHCP server, which should be able to give the same client the
same communication parameters even if the server itself was disconnected from the
network.
Unique clients
A main goal of the DHCP protocol is to give each client its own unique address. A
situation in which two or more clients are allocated the same network address must
not occur under any circumstances, to prevent a message being delivered to the
wrong client.
Automatic Configuration
Usually, a single client should be configured automatically by the DHCP
server and not manually by the administrator of that client. A situation in which the
configuration of the communication parameters is done manually by the client's
administrator should occur seldom; but when it happens, the DHCP server must be
compatible with the manual configuration, as mentioned before.
Saving hardware
There should not be a DHCP server for each and every link interface. There should be
wide use of relay-agents to transmit DHCP messages. If fewer DHCP servers are
used, the network is more economical, because the relay-agents that replace them are
simpler. This hardware saving leads to a more economically worthwhile network and
saves money, as mentioned in "Protocol Introduction".
Historical background
At first, most TCP/IP networks were relatively small and static. Manual IP address
management techniques were sufficient for them. Each station kept its own IP address
somewhere in its secondary storage. Once the address had to be changed, it required
manual administrator action, usually at the machine console, and in most cases
involved a reboot.
Soon afterwards, as more complex networks were established, as more and more
underlying network hardware was used for TCP/IP communication networks and as
cheap client workstations without secondary storage came into use, a need for central
administration of the hardware-to-IP address bindings became obvious. A special
protocol (RARP) for such bindings was designed. It allowed a machine on a network
segment to learn its own IP address and then to begin normal TCP/IP operation.
Another protocol, BOOTP, was also developed to allow diskless stations to retrieve all
the TCP/IP configuration parameters and other operating system data needed to start
functioning normally after a startup. It allowed configuration over broader networks
as it was not limited to a single segment. For that purpose BOOTP defined the
concept of a BOOTP relay agent, which specified how BOOTP traffic is forwarded
between multiple segments.
BOOTP was designed to be easily extended by the BOOTP extension mechanism.
This mechanism uses the last field in the frame for more (vendor) specific data and
message options.
The next attempt to extend BOOTP produced the Dynamic Host Configuration
Protocol, DHCP. DHCP was designed to be backward compatible with BOOTP in
order to support BOOTP clients and BOOTP relay agents, yet there are two primary
differences between DHCP and BOOTP:
1. DHCP defines a mechanism through which a client can be assigned a network
address for a finite lease, allowing for serial reuse of the same network address
by different clients.
2. DHCP provides a mechanism for a client to request and acquire all the IP
configuration parameters that it needs in order to operate, and only them.
DHCP comes with a predefined set of DHCP options, which it inherits from the
BOOTP vendor extensions mechanism, and it is open for further extension, inheriting
the openness from BOOTP.
Main differences between BOOTP AND DHCP:
The DHCP is an extension of the previous BOOTP protocol, thus it must be
compatible with BOOTP messages, but there are some differences between BOOTP
and DHCP.
One difference is that DHCP is designed to allocate a network address to a client
temporarily, so the client can disconnect, allowing another client to get this network
address, or renew the lease on the network address, a capability which the BOOTP
protocol does not have.
Another difference is that DHCP can configure a client with all the IP parameters
that the client needs in order to establish communication, whereas BOOTP transfers
only some of these parameters.
Moreover, the BOOTP protocol had a field named 'vendor extensions' to specify the
requested parameters and other options, which was replaced with the field 'options' in
the DHCP protocol.
In addition, the BOOTP protocol had a field named "chaddr" to specify the
address of the client which requested the communication parameters. In the DHCP
protocol there is the field "client identifier". This field can hold the physical address
of the client, as in "chaddr" of the BOOTP protocol, or it can hold another identifier
such as a DNS name. New types of identifiers can be registered with IANA.
DHCP is currently the most advanced host configuration mechanism for TCP/IP,
although it still has its problems, giving researchers things to work on, for an even
better configuration protocol in the future.
Protocol Introduction
General
The Dynamic Host Configuration Protocol (DHCP) provides configuration
parameters to Internet hosts in a client-server model. DHCP server hosts allocate
network addresses and deliver configuration parameters to other (client) hosts.
DHCP consists of two components: a protocol for delivering host-specific
configuration parameters from a server to a host and a mechanism for allocation of
network addresses to hosts.
IP Address Allocation
DHCP supports three mechanisms for IP address allocation.
1. Automatic allocation -- in which a permanent IP address is assigned to the
client.
2. Dynamic allocation -- in which the address is assigned for a limited period of
time (a "lease").
3. Manual allocation -- in which the address is assigned manually by the network
administrator.
Configuration Parameters Delivery
The client sends a message to request configuration parameters and the server
responds with a message carrying the desired parameters back to the client.
BOOTP Compatibility
The format of DHCP messages is based on the format of BOOTP messages due to the
following reasons:
1. From the client's point of view, DHCP is an extension of the BOOTP mechanism.
This behavior allows existing BOOTP clients to interoperate with DHCP
servers without requiring any change to the clients' initialization software.
2. DHCP supports the BOOTP relay agent behavior.
Use of Relay Agents
DHCP does not require a server on each subnet. To allow for scale and economy,
DHCP can work across routers or through the intervention of BOOTP relay agents. A
relay agent listens to DHCP messages and forwards them on (and onto other network
segments). This eliminates the necessity of having a DHCP server on each physical
network.
Allocation of network addresses
DHCP supports three mechanisms for IP address allocation. The DHCP server can
use any one or more of these mechanisms:
1. Automatic allocation: The DHCP server assigns a permanent IP address to a
client without any manual interference. Automatic allocation is best suited for
cases where hosts are permanently connected to a network and the network
does not suffer from an address shortage.
2. Manual allocation: The client's IP address is assigned manually by the network
administrator. The DHCP server simply retrieves it from its storage and
delivers it to the client. Manual allocation is best suited for giving IP addresses
to servers of any kind. As servers are the ones to be addressed, rather than the
ones to initiate a conversation, their location should be permanent and known in
the network. Manual allocation would guarantee that (although a clever use of
Automatic allocation can accomplish that too).
3. Dynamic allocation: The DHCP server assigns a temporary IP address to a
client without any manual interference.
Dynamic allocation is the most interesting method of the three, because it involves not
only the assigning of a network address but also reclaiming and reusing of the same
address by another client. Therefore, using Dynamic allocation allows for an efficient
managing of a pool of network addresses and is particularly useful in cases where:
1. There is a limited amount of network addresses on the net.
2. The network has computers which temporarily connect and disconnect to it
(e.g. portable computers) and so the network is changing frequently.
The basic mechanism for the dynamic allocation of network addresses is simple: the
client requests the use of an address for a limited period of time (which is called a
lease). The DHCP server allocates an address for the client, marks it as 'used' and
notifies the client about the address and the lease time approved.
The client, in its turn, can:
1. Extend its lease with subsequent requests.
2. Ask for a permanent assignment by asking for an infinite lease.
3. Release the address back to the server before the lease expires, in case it
doesn't need it.
Renewing and acquiring addresses
The client holds two times in its memory: time1 and time2.
The first time is the time at which the client starts to ask its server to renew the
lease of its address (the RENEWING state in the states diagram of the protocol).
The second time is the time at which the client starts to ask other servers for an address
(the REBINDING state in the states diagram of the DHCP protocol).
When the first time arrives, the client sends a DHCPREQUEST message to the
server with an ID unique to this request. If the renewal is approved, the server will send an
answer in a DHCPACK message with this ID. Then the client returns to normal
functioning. The new time1 will be the sum of the time in the server's answer and the
time which has passed from the start of the request to the answer.
If no answer has arrived by the time time2 has passed, the client will enter the REBINDING
state in the states diagram of the DHCP protocol, and will send a multicast message to
all the available servers to acquire a new address.
These times can be changed by servers in the 'options' field, and have default values.
The default of time1 is half of the lease time of the current address, and the default of
time2 is 0.875 x (lease time).
In both cases, time1 and time2, if the client has not got an answer from the DHCP
servers, it should wait half of the time which is left before sending a DHCPREQUEST
again. The shortest waiting time is one minute.
If the client has got its previous address, it continues to work normally. If the client
did not get its previous address but got a new one, it should continue working, but not
with the current network parameters, and must inform the users about this. If the client
did not manage to get an address at all, it should stop its work and go back to the INIT
state in the states diagram of the DHCP protocol.
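A small illustration of the default timer values just described; the one-day lease used below is an example only.

def default_timers(lease_seconds):
    time1 = 0.5 * lease_seconds      # when the client enters RENEWING with its own server
    time2 = 0.875 * lease_seconds    # when the client enters REBINDING with any server
    return time1, time2

t1, t2 = default_timers(24 * 3600)   # a one-day lease
print(t1, t2)                        # 43200.0 s (12 hours) and 75600.0 s (21 hours)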
Configuration Parameters Delivery
The DHCP server is designed to supply DHCP clients with the configuration
parameters defined in the Host Requirements RFCs (1122 and 1123). Most of those
parameters are related to the TCP/IP protocol stack but DHCP allows the
configuration of non-related parameters too.
The server provides a permanent storage of network parameters for network clients.
The DHCP storage model is a set of key-value pairs for each client, where the key is
some unique identifier and the value contains the configuration parameters for the
client. In other words, the storage model is a per-host list of entries of the form:
key = value
The client addresses the server with a request message to retrieve its configuration
parameters. The server answers with a response message carrying the configuration
parameters in the (later discussed) options field.
Not all clients require initialization of all possible parameters. Two techniques are
used to reduce the number of parameters delivered from the server to the client:
1. Most of the parameters have defaults defined in the Host Requirements RFCs
(1122 and 1123). If the client receives no parameters from the server that
override the defaults, a client uses those default values.
2. A client and server may negotiate for the delivery of only those parameters
required by the client. In such a case the client includes the parameter
request list option in its request message and fills it with the list of
parameters it needs.
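As a rough illustration of this per-host key = value storage model and of the parameter request list, the following Python sketch uses illustrative client identifiers and parameter names; none of them are mandated by the text above.

dhcp_bindings = {
    # key: a unique client identifier (here, the hardware address) -> stored parameters
    "08:00:10:c2:a1:02": {
        "ip_address": "144.12.12.6",
        "subnet_mask": "255.255.255.0",
        "router": "144.12.12.1",
        "dns_servers": ["144.12.1.10"],
    },
}

def parameters_for(client_id, requested):
    # Return only the parameters named in the client's parameter request list.
    stored = dhcp_bindings.get(client_id, {})
    return {name: stored[name] for name in requested if name in stored}

print(parameters_for("08:00:10:c2:a1:02", ["router", "dns_servers"]))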
Message Format
As mentioned earlier, the format of DHCP messages is based on the format of
BOOTP messages in order to remain compatible with BOOTP relay agents and BOOTP
clients.
The DHCP message format is described below; the size of each Field is given in Bytes.
Description of Fields in a DHCP message
Field     Bytes     Description
op        1         Message op code / message type. 1 = BOOTREQUEST, 2 = BOOTREPLY.
htype     1         Hardware address type (e.g., '1' = 10Mb Ethernet).
hlen      1         Hardware address length (e.g., '6' for 10Mb Ethernet).
hops      1         Client sets to zero; optionally used by relay agents when booting via a relay agent.
xid       4         Transaction ID. A random number chosen by the client, used by the client and server to associate the request message with its response.
secs      2         Seconds passed since the client began the request process. Filled in by client.
flags     2         Flags.
ciaddr    4         Client IP address. Filled in by the client if it knows its IP address (from previous requests or from manual configuration) and can respond to ARP requests.
yiaddr    4         'Your' (client) IP address. Server's response to the client.
siaddr    4         Server IP address. Address of the sending server or of the next server to use in the next bootstrap process step.
giaddr    4         Relay agent IP address, used in booting via a relay agent.
chaddr    16        Client hardware address.
sname     64        Optional server host name. Null terminated string.
file      128       Boot file name. Null terminated string; "generic" name or null in request, fully qualified directory-path name in reply.
options   variable  Field to hold the optional parameters (see next section).
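To make the field sizes above concrete, here is a hedged Python sketch that lays out the fixed 236-byte part of a DHCP/BOOTP message for a BOOTREQUEST; the transaction ID and hardware address are illustrative values, and the 'options' field is not included.

import socket
import struct

def build_dhcp_fixed_header(xid, client_mac):
    op, htype, hlen, hops = 1, 1, 6, 0              # BOOTREQUEST over 10Mb Ethernet
    secs, flags = 0, 0
    zero_ip = socket.inet_aton("0.0.0.0")
    chaddr = client_mac.ljust(16, b"\x00")          # 16-byte field, MAC padded with zeros
    sname = b"\x00" * 64                            # optional server host name
    file_ = b"\x00" * 128                           # boot file name
    return struct.pack("!BBBBIHH4s4s4s4s16s64s128s",
                       op, htype, hlen, hops, xid, secs, flags,
                       zero_ip, zero_ip, zero_ip, zero_ip,   # ciaddr, yiaddr, siaddr, giaddr
                       chaddr, sname, file_)

header = build_dhcp_fixed_header(0x3903F326, bytes.fromhex("080010C2A102"))
print(len(header))   # 236 bytes: the fixed part before the 'options' field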
The 'options' Field in a DHCP message
Apart from the small number of Fields imported from the BOOTP frame format,
there came a need for many more Fields, some of them changing from one message to
another, others from one subnet to another, etc.
That need first arose after the BOOTP protocol was defined, and it led to the BOOTP
extension mechanism, in which the last Field in the frame format, the 'vendor extensions'
field, was of variable length and could contain the extra information. DHCP improved
this mechanism, changing the Field name to 'options' and adding more options.
One way to categorize those options would be to split them into two groups:
1. Configuration parameters.
2. Message control information.
All options begin with a tag octet, which uniquely identifies the option. The next octet
is the option length specifier; its value does not include the two Bytes specifying the
tag and length. The length octet is followed by 'length' Bytes of data.
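A minimal sketch of walking this tag/length/data layout; for brevity it ignores the special pad and end options and does not interpret the option values. The sample tags below (53 for the DHCP message type and 1 for the subnet mask) are well-known option codes used only as an example.

def parse_options(raw):
    # Split a raw options field into {tag: data} entries.
    options, i = {}, 0
    while i < len(raw):
        tag = raw[i]
        length = raw[i + 1]                        # excludes the tag and length octets
        options[tag] = raw[i + 2:i + 2 + length]   # 'length' bytes of data follow
        i += 2 + length
    return options

sample = bytes([53, 1, 1]) + bytes([1, 4, 255, 255, 255, 0])
print(parse_options(sample))   # {53: b'\x01', 1: b'\xff\xff\xff\x00'}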
All these options/vendor extensions are defined in RFC 2132, where they are split to
the following groups:
1. RFC 1497 Vendor Extensions.
2. IP Layer Parameters per Host.
3. IP Layer Parameters per Interface.
4. Link Layer Parameters per Interface.
5. TCP Parameters.
6. Application and Service Parameters.
7. DHCP Extensions.
Some important DHCP options:
Message Type (a DHCP control):
Specifies the type of the DHCP message in order to be more specific than the
originally BOOTP Field 'op'. Different message types are used at different stages of
the client/server interaction.
It appears in every DHCP message (therefore the 'options' Field is never empty).
Renewal Time Value (a DHCP control):
Specifies the time interval from address assignment until the client attempts to contact
the server that originally issued the client's network address, before the lease expires.
Parameter Request List (a DHCP control):
A list of valid DHCP option codes. Used by a DHCP client to request values for
specified configuration parameters.
Subnet Mask (a Configuration parameter):
Specifies the client's subnet mask.
DNS Option (a Configuration parameter):
Specifies a list of DNS name servers available to the client.
Message Format in IPv6
Every message in the DHCP protocol for IPv6 has a constant-length header and variable-
length data. This data is located in the "options" Field and is composed of Bytes in
network byte order.
This is the format of a message sent from the client directly to the server:
Field           Bytes     Description
msg-type        1         The type of the message, chosen from the 11 types of direct
                          messages from client to server (an exact list is given in
                          "Message Types Summary").
transaction-id  3         The ID for this message transaction.
options         variable  The options for this message.
This is the format of a message from a relay-agent to another relay-agent or a server:
Field         Bytes     Description
msg-type      1         The code of the message, RELAY-FORW or RELAY-REPL.
hop-count     1         Counts the relay-agents which the message has passed through so far.
link-address  16        Used by the server to identify the link of the client in a
                        RELAY-FORW or RELAY-REPL message.
peer-address  16        The address of the relay-agent or the client from which the
                        message was received (the current hop).
options       variable  Options for the message. Here the message has to carry the
                        "Relay Message option" among the other options.
The options are in this format:
Field        Bytes                       Description
option-code  2                           The number of the option, according to "types of options".
option-len   2                           The length of the option-data in bytes.
option-data  "option-len" bytes          The data of the option.
Client/Server Model
The client and the server negotiate in a series of messages in order for the client to get
the parameters it needs.
The following diagram shows the messages exchanged between the DHCP client and
servers when allocating a new network address. Next is a detailed explanation of all
the various messages and a description of the communication steps.
This process can involve more than one server but only one server is selected by the
client. In the figure, the selected server is marked 'selected' and the other, 'not
selected' server stands for all the possible not selected servers.
Description of the communication steps
1. The client broadcasts a DHCPDISCOVER.
2. Each server may respond with a DHCPOFFER message.
3. The client receives one or more DHCPOFFER messages from one or more
servers and chooses one server from which to request configuration
parameters.
4. The client broadcasts a DHCPREQUEST message.
5. Those servers not selected by the DHCPREQUEST message use the message
as notification that the client has declined that server's offer.
6. The server selected in the DHCPREQUEST message commits the binding and responds
with a DHCPACK message containing the configuration parameters for the
requesting client.
7. The client receives the DHCPACK message with configuration parameters. At
this point, the client is configured.
8. If the client receives a DHCPNAK message, the client restarts the
configuration process.
9. The client may choose to relinquish its lease on a network address by sending
a DHCPRELEASE message to the server (e.g. on shutdown).
10. The server receives the DHCPRELEASE message and marks the lease as free.
Variations on the timeline diagram
There are two main variations on the presented client/server interaction scenario:
1. Reuse of a previously allocated network address:
If a client remembers (in its cache) and wishes to reuse a previously allocated
network address, it may choose to omit some of the steps taken in the case
of a new allocation.
In the first DHCPREQUEST the client includes its network address in the
'requested IP address' option. The server that has knowledge of the client's
configuration responds with a DHCPACK message, and from then on the
diagram continues from step (5).
2. Obtaining parameters with externally configured network address:
If a client has obtained a network address through some other means (e.g.,
manual configuration), it may use a DHCPINFORM request message to obtain
other local configuration parameters. Servers receiving a DHCPINFORM
message construct a DHCPACK message with any local configuration
parameters appropriate for the client without allocating a new address.
Message Types
Message Use
DHCPDISCOVER
Client broadcast to locate available servers.
DHCPOFFER
Server to client in response to DHCPDISCOVER with offer of
configuration parameters.
DHCPREQUEST
Client message to servers either (a) requesting offered parameters
from one server and implicitly declining offers from all others,
(b) confirming correctness of previously allocated address after,
e.g., system reboot, or (c) extending the lease on a particular
network address.
DHCPACK
Server to client with configuration parameters, including
committed network address.
DHCPNAK
Server to client indicating that the client's notion of its network address is
incorrect (e.g., the client has moved to a new subnet) or the client's lease
has expired.
DHCPDECLINE
Client to server indicating network address is already in use.
DHCPRELEASE
Client to server relinquishing network address and canceling
remaining lease.
DHCPINFORM
Client to server, asking only for local configuration parameters;
client already has externally configured network address.
Message types in IPv6
Message              Use
SOLICIT              This message is sent by a node to discover new DHCP servers.
ADVERTISE            This message is sent by the DHCP server in response to a SOLICIT
                     message. It means that this DHCP server is available to serve the client.
REQUEST              A request for an address and communication parameters after a node
                     has found a DHCP server.
CONFIRM              A multicast message to all available DHCP servers to confirm that the
                     node's address is still appropriate to its link.
RENEW                A request to renew the address lifetime or to update communication
                     parameters. The message is sent to the specific DHCP server which sent
                     its address/communication parameters beforehand.
REBIND               A multicast message to all available servers with a request to renew the
                     address or update its communication parameters. This message is sent
                     after a RENEW request got no response from the node's DHCP server.
REPLY                A message that is sent to a node by a DHCP server.
                     Address/communication parameters are sent in response to a SOLICIT,
                     REQUEST, RENEW or REBIND message from a node. Additionally it is
                     used to confirm or reject an address in response to a CONFIRM message,
                     or simply as an acknowledgement in response to a RELEASE or
                     DECLINE message from a node.
RELEASE              A message from a node to the DHCP server which granted it an address.
                     It is sent when the node no longer needs that address. This message is
                     meant to let the DHCP server know that this address is free to be used by
                     other nodes.
DECLINE              A message to the DHCP server by which a node declines an address that
                     is already taken and requests another address. This can happen when a
                     node discovers that an address which the DHCP server has given it is
                     used by another node on the link.
RECONFIGURE          A message sent by the DHCP server when it wants to update a node's
                     communication parameters. The node's response should be a RENEW
                     message back to the DHCP server.
INFORMATION-REQUEST  This message is sent to a DHCP server when a node wants to get
                     communication parameters without an address.
RELAY-FORW           A message which is sent from a relay-agent to a DHCP server or another
                     relay-agent, and encapsulates the initial message from a node to the
                     DHCP server.
RELAY-REPL           A message which is sent from a DHCP server or another relay-agent to a
                     certain relay-agent, and encapsulates the initial message from a DHCP
                     server to the node.
Security in DHCP
Security is a significant subject when considering DHCP, because the main
goal is to get communication parameters/IP address from an external source. This can
give an opportunity to damage the host from outside the system.
There are numerous threats to a host using DHCP, for example deploying fake
DHCP servers that will always deny service, or sending incorrect
communication parameters and wrong DHCP server information, either because of a
flawed server or deliberately.
These threats require authentication of the DHCP server or/and the communication
parameters, to ensure that we are dealing with a real DHCP server which sends valid
parameters.
In order to achieve higher safety, the following two rules must be obeyed:
1. The protocol cannot be changed (i.e. its structure, message types etc. must remain
intact).
2. Interact with the DHCP server as little as possible, i.e. minimize the number of
stages of the communication with the DHCP server.
The main way to authenticate a DHCP message is to include an authentication field in the
"option" field of the DHCP message.
This is the format of a DHCP client/server message with the "authentication option":
Description of the Fields in the authentication option
Code              Bytes   Description
op                1       The code of an authentication option is 90.
Length            1       The length of the information data.
Protocol          1       The name of the protocol used for authentication (there are a
                          number of techniques).
Algorithm         1       The name of the algorithm used by the protocol in the
                          "protocol" field.
RDM               4       RDM stands for "Replay Detection Method": the method
                          used for replay detection.
Replay Detection  8       The authentication sequence. If the RDM field is 0x00, the
                          sequence must be a monotonically increasing counter.
Security in IPv6
In addition to the method of adding "options", as in IPv4, IPv6 also makes use of the
IPsec mechanisms for communication between relay-agents, or between a relay-agent and
a server.
IPsec is a mechanism for security at the IP level. It provides services such as replay
detection, access control, etc.
The servers and relay-agents are configured manually. Each relay-agent or server has
to hold a list of pairs of servers and relay-agents to know which one will get the
message. Servers and relay agents can accept messages only from DHCP sources
which are on the list in their configuration.
In addition to this tool, one can also use the general security tools of IPv6 for DHCP
security. There are many sources for these tools on the web.
Point-to-Point Protocol (PPP)
Today, millions of Internet users need to connect their home computer to the
computers of an Internet provider to access the Internet. There are also a lot of
individuals who need to connect to a computer from home, but they do not want to go
through the Internet. The majority of these users have either a dialup or leased
telephone line. The telephone line provides a physical link, but to control and manage
the transfer of data, there is a need for a point-to-point link protocol. Figure-1 shows a
physical point-to-point connection.
Figure-1: Point-to-point link. A point-to-point physical link connects the two end points.
The first protocol devised for this purpose was the Serial Line Internet Protocol (SLIP).
However, SLIP has some deficiencies: it does not support protocols other than the
Internet Protocol (IP), it does not allow IP addresses to be assigned dynamically,
and it does not support authentication of the user. The Point-to-Point Protocol (PPP)
is a protocol designed to respond to these deficiencies.
TRANSITION STATES
The different phases through which a PPP connection goes can be described using a
transition state diagram, as shown in Figure-2.
Idle state. The idle state means that the link is not being used. There is no active carrier and
the line is quiet.
Establishing state. When one of the end points starts the communication, the
connection goes into the establishing state. In this state, options are negotiated
between the two parties. If the negotiation is successful, the system goes to the
authenticating state (if authentication is required) or directly to the networking state.
The LCP packets, discussed shortly, are used for this purpose. Several packets may be
exchanged during this state.
Figure-2: Transition states. From the Idle state, detecting a carrier takes the link to the
Establishing state. On success the link moves to the Authenticating state (if required) or
directly to the Networking state (exchanging user data and control); on failure it moves to
the Terminating state. When the Networking state finishes, the link goes to the Terminating
state, and dropping the carrier returns it to the Idle state.
Authenticating state. The authenticating state is optional; the two end points may
decide, during the establishing state, not to go through this state. However, if they
decide to proceed with authentication, they send several authentication packets,
discussed in a later section. If the result is successful, the connection goes to the
networking state; otherwise, it goes to the terminating state.
Networking state. The networking state is the heart of the transition states. When a
connection reaches this state, the exchange of user control and data packets can be
started. The connection remains in this state until one of the end points wants to
terminate the connection.
Terminating state. When the connection is in the terminating state, several packets
are exchanged between the two ends for house cleaning and closing the link.
PPP LAYERS
Figure-3 shows the PPP layers. PPP has only physical and link layers. This means that
a protocol that wants to use the services of PPP should have other layers (network,
transport, and so on).

Physical Layer
No specific protocol is defined for the physical layer in PPP. Instead, it is left to the
implementer to use whatever is available. PPP supports any of the protocols
recognized by ANSI.
Figure-3: PPP layers. The Data Link layer uses a variation of HDLC; the Physical layer
uses ANSI standards.
Data Link Layer
At the data link layer, PPP employs a version of HDLC. Figure-4 shows the format of
a PPP frame.
Figure-4: PPP frame
Flag (1 byte) | Address (1 byte, 11111111) | Control (1 byte, 11000000) | Protocol (1 or 2
bytes) | Data and padding (variable) | FCS (2 or 4 bytes) | Flag (1 byte)
The descriptions of the fields are as follows:


1. Flag field. The flag field, like the one in HDLC, identifies the boundaries of a
PPP frame. Its value is 01111110.
2. Address field. Because PPP is used for a point-to-point connection, it uses the
broadcast address of HDLC, 11111111, to avoid a data link address in the
protocol.
3. Control field. The control field uses the format of the U-frame
in HDLC. The value is 11000000 to show that the frame does not contain any
sequence number and that there is no flow and error control.
4. Protocol field. The protocol field defines what is being carried in the data
field: user data or other information. We will discuss this field in detail
shortly.
5. Data field. This field carries either the user data or other information that we
will discuss shortly.
6. FCS. The frame check sequence field, as in HDLC, is simply a two-byte or
four-byte CRC.
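As a rough illustration (not a wire-accurate implementation), the following Python sketch assembles a PPP frame using the field layout and bit patterns given in Figure-4; the FCS is left as a placeholder, byte stuffing is omitted, and the example protocol value and payload are purely illustrative.

def build_ppp_frame(protocol, payload):
    flag = bytes([0b01111110])          # Flag field
    address = bytes([0b11111111])       # Address field (broadcast address)
    control = bytes([0b11000000])       # Control field value shown in Figure-4
    fcs = b"\x00\x00"                   # placeholder for the 2-byte frame check sequence
    return flag + address + control + protocol.to_bytes(2, "big") + payload + fcs + flag

frame = build_ppp_frame(0xC021, b"\x01\x01\x00\x04")   # e.g. an LCP packet carried in the data field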
LINK CONTROL PROTOCOL (LCP)
The Link Control Protocol (LCP) is responsible for establishing, maintaining,
configuring, and terminating links. It also provides negotiation mechanisms to set
options between the two end points. Both end points of the link must reach an
agreement about the options before the link can be established.
All LCP packets are carried in the payload field of the PPP frame. What defines the
frame as one carrying an LCP packet is the value of the protocol field, which should
be set to C021 in hexadecimal. Figure-5 shows the format of the LCP packet.
Figure-5: LCP packet encapsulated in a frame. The LCP packet consists of Code (1 byte),
ID (1 byte), Length (2 bytes) and Information for some LCP packets (variable). It is carried
in the payload (data and padding) field of a PPP frame whose Protocol field is set to C021
in hexadecimal.
The descriptions of the fields are as follows:

1. Code. The field defines the type of LCP packet. We will discuss these packets
and their purpose in the next section.
2. ID. This field holds a value used to match a request with the reply. One end
point inserts a value in this field, which will be copied in the reply packet.
3. Length. This field defines the length of the whole LCP packet.
4. Information. This field contains extra information needed for some LCP
packets.
LCP Packets
Table-1 lists some LCP packets.
Table-1: LCP packets and their codes
Code (Hexa)  Packet Type        Description
01           Configure-request  Contains the list of proposed options and their values
02           Configure-ack      Accepts all options proposed
03           Configure-nak      Announces that some options are not acceptable
04           Configure-reject   Announces that some options are not recognized
05           Terminate-request  Requests to shut the line down
06           Terminate-ack      Accepts the shut-down request
07           Code-reject        Announces an unknown code
08           Protocol-reject    Announces an unknown protocol
09           Echo-request       A type of hello message to check if the other end is alive
0A           Echo-reply         The response to the echo-request message
0B           Discard-request    A request to discard the packet
Configuration Packets
Configuration packets are used to negotiate the options between two ends. Four
different packets are used for this purpose: configure-request, configure-ack,
configure-nak, and configure-reject.
1. Configure-request. The end point that wishes to start a connection sends a
configure-request message with a list of zero or more options to the other end
point. Note that all of the options should be negotiated in one packet.
2. Configure-ack. If all of the options listed in the configure-request packet are
accepted by the receiving end, it will send a configure-ack, which repeats all
of the options requested.
3. Configure-nak. If the receiver of the configure-request packet recognizes all
of the options but finds that some should be omitted or revised (the values should be
changed), it sends a configure-nak packet to the sender. The sender should
then omit or revise the options and send a totally new configure-request
packet.
4. Configure-reject. If some of the options are not recognized by the receiving
party, it responds with a configure-reject packet, marking those options that
are not recognized. The sender of the request should revise the configure-
request message and send a totally new one.


Link Termination Packets


The link termination packets are used to disconnect the link between two end points.
1. Terminate-request. Either party can terminate the link by sending a
terminate-request packet.
2. Terminate-ack. The party that receives the terminate-request packet should
answer with a terminate-ack packet.
Link Monitoring and Debugging Packets
These packets are used for monitoring and debugging the link.
1. Code-reject. If the end point receives a packet with an unrecognized code, it
sends a code-reject packet.
2. Protocol-reject. If the end point receives a packet with an unrecognized
protocol in the frame, it sends a protocol-reject packet.
3. Echo-request. The packet is sent to monitor the link. Its purpose is to see if the
link is functioning. The sender expects to receive an echo-reply packet from
the other side as proof.
4. Echo-reply. This packet is sent in response to an echo-request. The
information field in the echo-request packet is exactly duplicated and sent
back to the sender of the echo-request packet.
5. Discard-request. This is a kind of loopback test packet. It is used by the
sender to check its own loopback condition. The receiver of the packet just
discards it.
Options
There are many options that can be negotiated between the two end points. Options
are inserted in the information field of the configuration packets. We list some of the
most common options in Table-2.
Table-2 Common options

Option Default

Maximum receive unit 1500

Authentication protocol None

Protocol field compression Off

Address and control field compression Off

AUTHENTICATION
Authentication plays a very important role in PPP because PPP is designed for use
over dial-up links where verification of user identity is necessary. Authentication
means validating the identity of a user who needs to access a set of resources. PPP has
created two protocols for authentication: Password Authentication Protocol (PAP) and
Challenge Handshake Authentication Protocol (CHAP).


PAP
The Password Authentication Protocol (PAP) is a simple authentication procedure
with a two-step process:
1. The user who wants to access a system sends an authentication identification
(usually the user name) and a password.
2. The system checks the validity of the identification and password and either
accepts or denies connection.
For those systems that require more security, PAP is not enough; a third party with
access to the link can easily pick up the password and access the system resources.
Figure-6 shows the idea of PAP.
Figure-6 PAP: over the point-to-point physical link, the user sends an authenticate-request
packet carrying the user name and password; the system replies with an authenticate-ack
(accept) or authenticate-nak (reject) packet.

PAP Packets
PAP packets are encapsulated in a PPP frame. What distinguishes a PAP packet from
other packets is the value of the protocol field, 0xC023. There are three PAP packets:
authenticate-request, authenticate-ack, and authenticate-nak. The first packet is used
by the user to send the user name and password. The second is used by the system to
allow access. The third is used by the system to deny access. Figure-7 shows the
format of the three packets.
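As a rough sketch of the authenticate-request layout (Code, ID and Length followed by a length-prefixed user name and a length-prefixed password), the Python fragment below packs one such packet; the helper name and the credentials are illustrative assumptions.

import struct

AUTHENTICATE_REQUEST = 0x01   # PAP code for authenticate-request

def build_pap_request(identifier: int, username: bytes, password: bytes) -> bytes:
    # Code (1 byte), ID (1 byte), Length (2 bytes), then the two length-prefixed strings.
    body = bytes([len(username)]) + username + bytes([len(password)]) + password
    length = 4 + len(body)
    return struct.pack("!BBH", AUTHENTICATE_REQUEST, identifier, length) + body

packet = build_pap_request(1, b"jto_user", b"secret")
print(packet.hex())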


Figure-7 PAP packets. All three packets start with Code (1 byte), ID (1 byte) and Length
(2 bytes). The authenticate-request packet (code 1) then carries a length-prefixed user name
and a length-prefixed password; the authenticate-ack (code 2) and authenticate-nak (code 3)
packets carry a one-byte message length followed by a message. The packets are carried in
the payload (and padding) field of a PPP frame whose protocol field is 0xC023.
CHAP

The Challenge Handshake Authentication Protocol (CHAP) is a three-way handshaking
authentication protocol that provides more security than PAP. In this method,
the password is kept secret; it is never sent on-line.
1. The system sends to the user a challenge packet containing a challenge value,
usually a few bytes.
2. The user applies a predefined function that takes the challenge value and the
user‘s own password and creates a result. The user sends the result in the
response packet to the system.
3. The system does the same. It applies the same function to the password of the
user (known to the system) and the challenge value to create a result. If the
result created is the same as the result sent in the response packet, access is
granted; otherwise, it is denied.
CHAP is more secure than PAP, especially if the system continuously changes the
challenge value. Even if the intruder learns the challenge value and the result, the
password is still secret. Figure-8 shows the idea.
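The challenge/response calculation can be sketched as follows. CHAP implementations commonly derive the result as an MD5 digest over the packet identifier, the shared secret and the challenge value; the helper name and the sample secret below are illustrative assumptions.

import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # Result = MD5(identifier || secret || challenge), as commonly used with CHAP.
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

challenge = os.urandom(16)        # system side: issue a random challenge
identifier = 7

# User side: compute the result from its own copy of the password.
response = chap_response(identifier, b"shared-password", challenge)

# System side: repeat the calculation with the stored password and compare.
expected = chap_response(identifier, b"shared-password", challenge)
print("access granted" if response == expected else "access denied")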


Figure-8 CHAP: over the point-to-point physical link, the system sends a challenge packet
carrying the challenge value; the user returns a response packet carrying the calculated
response and its name; the system then sends a success or failure packet to accept or reject
the user.

CHAP Packets
CHAP packets are encapsulated in the PPP frame. What distinguishes a CHAP packet
from other packets is the value of the protocol field, 0xC223. There are four CHAP
packets: challenge, response, success, and failure. The first packet is used by the
system to send the challenge value. The second is used by the user to return the result
of the calculation. The third is used by the system to allow access to the system. The
fourth is used by the system to deny access to the system. Figure-9 shows the format
of the four packets.
NETWORK CONTROL PROTOCOL (NCP)
After the link has been established and authentication (if any) has been successful, the
connection goes to the networking state. In this state, PPP uses another protocol called
Network Control Protocol (NCP). NCP is a set of control protocols to allow the
encapsulation of data coming from network layer protocols (such as IP, IPX, and
AppleTalk) in the PPP frame.


Figure-9 CHAP packets. All four packets start with Code (1 byte), ID (1 byte) and Length
(2 bytes). The challenge packet (code 1) and the response packet (code 2) then carry a
one-byte value length, the challenge or response value, and a name; the success packet
(code 3) and the failure packet (code 4) carry a message. The packets are carried in the
payload (and padding) field of a PPP frame whose protocol field is 0xC223.

IPCP
The set of packets that establish and terminate a network layer connection for IP
packets is called Internetwork Protocol Control Protocol (IPCP). The format of an
IPCP packet is shown in Figure-10. Note that the value of the protocol field, 0x8021,
defines the packet encapsulated in the frame as an IPCP packet.
Figure-10 IPCP packet encapsulated in a PPP frame. The IPCP packet consists of Code
(1 byte), ID (1 byte), Length (2 bytes) and IPCP information (variable length), and is
carried in the payload (and padding) field of a PPP frame whose protocol field is 0x8021.


Seven packets are defined for the IPCP protocol, distinguished by their code values as
shown in Table-3.
Table-3 Code value for IPCP packets

Code IPCP packet

01 Configure-request

02 Configure-ack

03 Configure-nak

04 Configure-reject

05 Terminate-request

06 Terminate-ack

07 Code-reject

A party uses the configure-request packet to negotiate options with the other party and
to set the IP addresses, and so on.
After configuration, the link is ready to carry IP protocol data in the payload field of a
PPP frame. This time, the value of the protocol field is 0x0021 to show that an IP data
packet, not an IPCP packet, is being carried across the link.
After IP has sent all of its packets, IPCP can take control and use the terminate-
request and terminate-ack packets to end the network connection.
Other Protocols
Note that other protocols have their own set of control packets, defined by the value of
the protocol field in the PPP frame.
AN EXAMPLE
Let us give an example of the states through which a PPP connection goes to deliver
some network layer packets. Figure-11 shows the steps:
1. Establishing. The user sends the configure-request packet to negotiate the
options for establishing the link. The user requests PAP authentication. After
the user receives the configure-ack packet, link establishing is done.
2. Authenticating. The user sends the authenticate-request packet and includes
the user name and password. After it receives the authenticate-ack packet, the
authentication phase is over.
3. Networking. Now the user sends the configure-request to negotiate the
options for the network layer activity. After it receives the configure-ack, the
user can send the network layer data, which may consume several frames.
After all data are sent, the user sends the terminate-request to terminate the
network layer activity. When the terminate-ack packet is received the
networking phase is complete. The connection goes to the terminating state.


4. Terminating. The user sends the terminate-request packet to terminate the link.
With the receipt of the terminate-ack packet, the link is terminated.
Figure-11 An example: the exchange of configuration, authentication, data and
termination packets between the user and the system as the connection passes through
the establishing, authenticating, networking and terminating states.


INTERNET SERVICES : DNS, TELNET, HTTP, PROXY,


EMAIL, SMTP & POP3, FTP & TFTP


INTERNET SERVICES

DNS : The Domain Name System

Introduction
The Domain Name System, or DNS, is a distributed database that is used by TCP/IP
applications to map between host names and IP addresses, and to provide
electronic mail routing information. We use the term distributed because no single site
on the Internet knows all the information. Each site (university department, campus,
company, or department within a company, for example) maintains its own database
of information and runs a server program that other systems across the Internet
(clients) can query. The DNS provides the protocol that allows clients and servers to
communicate with each other.
The impetus for the development of the domain system was growth in the Internet :
Host name to address mappings were maintained by the Network Information Center
(NIC) in a single file (HOSTS.TXT) which was FTPed by all hosts (RFC-952, RFC-
953). The total network bandwidth consumed in distributing a new version by this
scheme is proportional to the square of the number of hosts in the network, and even
when multiple levels of FTP are used, the outgoing FTP load on the NIC host is
considerable. Explosive growth in the number of hosts didn't bode well for the future.
The network population was also changing in character. The timeshared hosts that
made up the original ARPANET were being replaced with local networks of
workstations. Local organizations were administering their own names and addresses,
but had to wait for the NIC to change HOSTS.TXT to make changes visible to the
Internet at large. Organizations also wanted some local structure on the name space.
The applications on the Internet were getting more sophisticated and creating a need
for general purpose name service.
The result was several ideas about name spaces and their management. The proposals
varied, but a common thread was the idea of a hierarchical name space, with the
hierarchy roughly corresponding to organizational structure, and names using "." as
the character to mark the boundary between hierarchy levels. A design using a
distributed database and generalized resources was described in (RFC-882, RFC-883).
Based on experience with several implementations, the system evolved into the
scheme described in this document.
DNS Components
DNS does much more than the name-to-address translation. It basically comprises
the following components:
1. Domain Name Space and Resource Records
2. Name Servers
3. Resolvers


Domain Name Space and Resource Records


This is the database of grouped names and addresses that are strictly formatted using a
tree-structured name space and data associated with the names. The domain system
consists of separate sets of local information called zones; the database is divided
into these zones, which are distributed among the name servers.
While name servers can have several optional functions and sources of data, the
essential task of a name server is to answer queries using data in its zones.
Conceptually, each node and leaf of the domain name space tree names a set of
information, and query operations are attempts to extract specific types of information
from a particular set. A query names the domain of interest and describes the types of
resource information that is desired.
Zones and Domains
There is a subtle difference between a zone and a domain. The domain is the entire set
of machines encompassed by an organizational domain. For example, the domain
uwa.edu.au contains all machines at the University of Western Australia.
A zone contains domain names and any data that a domain contains. It is an area of
the DNS about which a name server has complete information and, therefore, the
name server has authority for the zone. A name server can have authority for multiple
zones.
A zone may delegate domain names and data elsewhere. When you delegate, you
assign authority for your subdomains to different name servers. Instead of information
about the delegated subdomain, your data now includes pointers to the authoritative
name servers for that subdomain.
Name Servers
The programs that keep information about the domain name space are called name
servers. These are workstations that contain a database of information about hosts in
zones. This information can be about well-known services, mail exchanger, or host
information. A name server may cache structure or set information about any part of
the domain tree, but in general, a particular name server has complete information
about a subset of the domain space, and pointers to other name servers that can be
used to lead to information from any part of the domain tree. Name Servers know the
parts of the domain tree for which they have complete information; a name server is
said to be an authority for these parts of the name space. An Authoritative Name
Server has complete information about the part of the domain name space it is
responsible for. Authoritative information is organised into units called zones, and
these zones can be automatically distributed to the name servers that provide
redundant service for the data in a zone. The name server must periodically refresh its
zones from master copies in local files or foreign name servers.
Name servers can be authoritative for multiple zones too. Similarly a name server can
be a primary master for one zone, and a secondary master for another. However most
name servers are either primary for most of the zones they load or secondary for most
of the zones they load.
For example, there are two domains, x.z and y.z. The authoritative name servers for
both of these are called nic.x.z and nic.y.z, respectively. If nic.x.z is asked if there is a
node called a.x.z, then nic.x.z can definitively say yes or no, because it is the


authoritative name server for the x.z domain. If nic.x.z is asked about a node called
a.y.z, nic.x.z must query nic.y.z, because nic.y.z is the authoritative name server for
the domain y.z. nic.x.z then caches the response; it can then quickly answer future
queries, but its answers will not be authoritative, because nic.x.z is not responsible for
the y.z domain.
Resolvers
These are programs that send requests over the network to servers on behalf of the
users. Resolvers must be able to access at least one name server and use that name
server‘s information to answer a query directly, or pursue the query using referrals to
other name servers. When a DNS server responds to a resolver, the requester attempts
a connection to the host using the IP address and not the name. The resolver is the
client portion of the DNS. The resolver is the library of routines called by applications
when they want to translate (resolve) a DNS name.
The resolver handles:
1. Querying a name server
2. Interpreting responses (which may be RRs or errors)
3. Returning information to the programs that requested it
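As a small illustration of the resolver's client role, the stub below uses Python's standard socket module, which in turn calls the operating system's resolver, to translate a name into its addresses; the host name is only an example.

import socket

def resolve(hostname: str) -> list:
    # Ask the system resolver (which queries the configured name servers)
    # for the addresses associated with a host name.
    results = socket.getaddrinfo(hostname, None)
    # Each entry is (family, type, proto, canonname, sockaddr); keep just the address.
    return sorted({entry[4][0] for entry in results})

if __name__ == "__main__":
    print(resolve("www.example.com"))   # example host name only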
Telnet Protocol
The Telnet protocol is often thought of as simply providing a facility for remote
logins to computers via the Internet. This was its original purpose, although it can be
used for many other purposes.
It is best understood in the context of a user with a simple terminal using the local
telnet program (known as the client program) to run a login session on a remote
computer where his communications needs are handled by a telnet server program. It
should be emphasised that the telnet server can pass on the data it has received from
the client to many other types of process including a remote login server. It is
described in RFC854 and was first published in 1983.
HTTP (HyperText Transfer Protocol)
The standard Web transfer protocol is HTTP, which transmits hypertext over networks.
The name is somewhat misleading in that HTTP is not a protocol for transferring
hypertext; rather, it is a protocol for transferring information with the efficiency
necessary for making hypertext jumps. The data transferred by the protocol can be
plain text, hypertext, audio, images or any other Internet accessible information.
The HyperText Transport Protocol (HTTP) is an application-level Protocol used by
Web client and Web servers to communicate with each other. HTTP has been in use
since 1990.
The HTTP is a transaction-oriented client/ server protocol. To provide reliability,
HTTP makes use of TCP. Although the use of TCP for the transport connection is
very common, it is not formally required by the standard. As and when ATM
networks become commercially available the HTTP requests and replies can be
carried in AAL5 just as well.
HTTP is a "stateless" protocol: each transaction is treated independently. Therefore,
a typical implementation will create a new TCP connection between client and server
for each transaction and then terminate the connection as soon as the transaction is
complete. Each interaction consists of one ASCII request, followed by one RFC 822


MIME-like response, i.e., messages are in a format similar to that used by Internet
mail and the Multipurpose Internet Mail Extensions (MIME).
HTTP is constantly evolving; several versions are in use and others are under
development.
The World Wide Web provides a single interface for accessing all these protocols.
This creates a convenient and user-friendly environment. It is no longer necessary to
be conversant in these protocols within separate, command-level environments. The
Web gathers together these protocols into a single system. Because of this feature, and
because of the Web's ability to work with multimedia and advanced programming
languages, the World Wide Web is the fastest-growing component of the Internet.
Understanding HyperText Transport Protocol (HTTP)
HTTP is a request/ response protocol. A web client establishes a connection with a
Web server and sends a resource request. The request contains a request method and
protocol version, followed by a MIME-like message. The message contains request
modifiers, client information, and possible body content.
The Web server responds with a status line, including the message's protocol version
and a success or error code. It is followed by a MIME-like message containing server
information, entity meta-information, and possible body content. Figure 2 shows
where the HTTP layer fits into Web clients and servers.
Fig. 2 The Web client communicates with the Web server using an HTTP virtual
circuit; on each side HTTP runs over the TCP/IP protocol suite, which carries the
traffic across the Internet.
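To make the request/response exchange concrete, here is a minimal sketch using Python's standard http.client module; the host name and path are placeholder examples, and a real client would add error handling.

import http.client

# Open a connection, send a GET request, and inspect the response.
conn = http.client.HTTPConnection("www.example.com", 80)    # example host
conn.request("GET", "/", headers={"User-Agent": "demo-client"})

response = conn.getresponse()
print(response.version, response.status, response.reason)   # e.g. 11 200 OK
for name, value in response.getheaders():                   # MIME-like header lines
    print(name + ": " + value)

body = response.read()   # the entity body, if any
conn.close()             # this sketch does not reuse the connection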

Details of HTTP can be found in the following Requests for Comments (RFC):
• HTTP 1.0 specifications are described in RFC 1945:
http://www.cis.ohio-state.edu/htbin/rfc/rfc1945.html
• MIME specifications are described in RFC 1521:
http://www.cis.ohio-state.edu/htbin/rfc/rfc1521.html


Hypertext : The Motion Of The Web


The operation of the Web relies primarily on hypertext as its means of information
retrieval. HyperText is a document containing words that connect to other documents.
These words are called links and are selectable by the user. A single hypertext
document can contain links to many documents. In the context of the Web, words or
graphics may serve as links to other documents, images, video, and sound. Links may
or may not follow a logical path, as each connection is programmed by the creator of
the source document. Overall, the WWW contains a complex virtual web of
connections among a vast number of documents, graphics, videos, and sounds.
Producing hypertext for the Web is accomplished by creating documents with a
language called Hyper Text Markup Language, or HTML. With HTML, tags are
placed within the text to accomplish document formatting, visual features such as font
size, italics and bold, and the creation of hypertext links. Graphics may also be
incorporated into an HTML document. HTML is an evolving language, with new tags
being added as each upgrade of the language is developed and released. The World
Wide Web Consortium, led by Tim Berners-Lee, co-ordinates the efforts of
standardising HTML.


Understanding HyperText Markup Language (HTML)


Figure 3 HTML is transported between the web client and web server over the HTTP
virtual connection; on each side HTTP runs over the TCP/IP protocol suite across the
Internet, with the web client presenting content through a graphical user interface and
the web server drawing on its Web resources.
The HyperText Markup Language is a document-layout, hyperlink-specification, and
markup language. Web clients use it to generate resource requests for Web servers,
and to process output returned by the Web server for presentation. A markup language
describes what text means and what it is supposed to look like. Figure 3 shows where
the HTML layer fits into Web clients.
A fundamental property of HTML is that the text it describes can be rendered on most
devices. A single HTML Web page on a Web server can be displayed on a PC, Mac,
UNIX, and so on.
HTML 3.2 specifications are available online at: http://www.w3c.org/
Web Client/ Server
The Web is similar to client/server technology. The server, in client/server
technology, usually connects to a database. The client, in client/ server technology,
makes a data request to the server, processes the returned data, and presents the result
through a graphical user interface.


A web client makes a resource request to the Web server, processes the returned
resource, and presents the result through a graphical user interface.
The difference between a server, in client/server technology, and a Web server seems
to be that one accepts requests for data, and the other accepts requests for a resource. The
differences become dramatic as we look closer.

Fig. 4 Client/Server concept: a "fat client". The client contains the graphical user
interface and the application logic, and talks to the database server over vendor
proprietary network software; the database server locates the data in its database and
returns it.


The server, in client/ server technology, is typically a specialised database server. The
Microsoft SQL server product is a good example. A database server receives requests
for data from a client through a vendor proprietary network software. It locates the
data and returns it. The client applies application logic to the data, and presents the
result through a graphical user interface. This is a "fat client" because the application
logic is in the client. Figure 4 illustrates client/ server components.
Not just any client in client/ server technology can request data from the database
server. All clients must run the correct vendor proprietary network software. Likewise,
a database server can only locate and return data from the vendor's proprietary
database, unless it uses a database gateway to another proprietary database.
The network connection between the client and the database server remains until one
or the other closes it, or until the network fails. The database server retains state
information about the client for the entire lifetime of the connection. This saves the
database server time in completing requests for data.


Web servers receive requests for a resource from Web clients through the standard
TCP/IP protocol suite. The resource can be a file, or data returned by another
process. The Web server locates the file and returns it, or executes another process,
supplies it with input, and returns the output. The Web client does not apply
application logic to the resource. It presents the resource through a graphical user
interface. This is a "thin client" because it does not contain application logic. Figure 5
illustrates Web client/ server components.

Fig. 5 Client/Server concept: a "thin client". The Web client contains only the
graphical user interface and talks to the Web server over the standard TCP/IP protocol
suite; the Web server draws on its Web resources, which may include output from a
database server.

Any web client can request a resource from any Web server, and any Web server can
request a resource from any other web server. This is possible because they use the
standard TCP/IP Protocol Suite.
The network connection between the Web client and Web server remains only until
the Web server has returned the resource. The Web server does not retain any state
information about the Web client.
Proxy Server
Introduction
Although the volume of Web traffic on the Internet is staggering, a large percentage
of that traffic is redundant---multiple users at any given site request much of the same
content. This means that a significant percentage of the WAN infrastructure carries
the identical content (and identical requests for it) day after day. Eliminating a
significant amount of recurring telecommunications charges offers an enormous
savings opportunity for enterprise and service provider customers.


Data networking is growing at a dizzying rate. More than 80% of Fortune 500
companies have Web sites. More than half of these companies have implemented
intranets and are putting graphically rich data onto the corporate WANs. The number
of Web users is expected to increase by a factor of five in the next three years. The
resulting uncontrolled growth of Web access requirements is straining all attempts to
meet the bandwidth demand.
Caching
Caching is the technique of keeping frequently accessed information in a location
close to the requester. A Web cache stores Web pages and content on a storage device
that is physically or logically closer to the user---this is closer and faster than a Web
lookup. By reducing the amount of traffic on WAN links and on overburdened Web
servers, caching provides significant benefits to ISPs, enterprise networks, and end
users. There are two key benefits :
Cost savings due to WAN bandwidth reduction---ISPs can place cache engines at
strategic points on their networks to improve response times and lower the bandwidth
demand on their backbones. ISPs can station cache engines at strategic WAN access
points to serve Web requests from a local disk rather than from distant or overrun
Web servers.
In enterprise networks, the dramatic reduction in bandwidth usage due to Web
caching allows a lower-bandwidth (lower-cost) WAN link to serve the same user
base. Alternatively, the organisation can add users or add more services that use the
freed bandwidth on the existing WAN link.
Improved productivity for end users---The response of a local Web cache is often
three times faster than the download time for the same content over the WAN. End
users see dramatic improvements in response times, and the implementation is
completely transparent to them.
Other benefits include the following :
Secure access control and monitoring---The cache engine provides network
administrators with a simple, secure method to enforce a sitewide access policy
through URL filtering.
Operational logging---Network administrators can learn which URLs receive hits,
how many requests per second the cache is serving, what percentage of URLs are
served from the cache, and other related operational statistics.
Web Caching : How it works
Web caching works as follows :
1. A user accesses a Web page.
2. While the page is being transmitted to the user, the caching system saves
the page and all its associated graphics on a local storage device. That content
is now cached.

3. Another user (or the original user) accesses that Web page later in the day.
4. Instead of sending the request over the Internet, the Web cache system
delivers the Web page from local storage. This process speeds download time
for the user, and reduces bandwidth demand on the WAN link.


5. The important task of ensuring that data is up-to-date is addressed in a variety
of ways, depending on the design of the system.
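A toy sketch of this idea in Python is shown below; fetch_from_origin() is a hypothetical stand-in for the real Web lookup, and real caches also honour expiry and validation headers, which are omitted here.

import time

cache = {}        # URL -> (time saved, content)
MAX_AGE = 300     # keep entries for five minutes in this toy example

def fetch_from_origin(url):
    # Placeholder for a real HTTP request sent over the WAN link.
    return ("content of " + url).encode()

def get(url):
    entry = cache.get(url)
    if entry and time.time() - entry[0] < MAX_AGE:
        return entry[1]                        # served from local storage
    content = fetch_from_origin(url)           # only now do we cross the WAN
    cache[url] = (time.time(), content)
    return content

print(get("http://www.example.com/index.html"))   # first request: origin fetch
print(get("http://www.example.com/index.html"))   # repeat request: cache hit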

Advantages / Disadvantages
Security Issues
Many of the current firewall designs rely on the combination of packet filtering and
the proxy technology (especially "transparent proxying" technology). Today, Proxy
systems can manage the different operation authorisations that users have when
surfing (for example: who is allowed to use which protocol), block unwanted
surfers outside the local net from getting in, and keep a log file of users'
operations. Of course, that is all besides the filtering on the basis of IP address.
However, the caching ability, which makes the Web run faster, has its security
disadvantages. It could be bad for business advertising at Web sites. It might even
violate copyright law.
Advertisers behind a site have a problem with the caching proxy servers. They have
no way of knowing the number of readers behind a hit; it could be one or a hundred
thousand, and they can't tell without looking at the log files of the proxies. Furthermore,
every copyrighted document sitting in the proxy's cache is, in fact, an unauthorised
copy.
The wrong solution would be to disable caching. That would hurt performance,
causing fewer visitors at the advertisers' sites. A good solution would be to let a
caching proxy keep a copy of a Web page if the proxy promises, in return, to tell the
Web server the number of hits it got for that page over a reasonable time period.
Undoubtedly, advertisers would prefer more specific information about the readers, but
that's something to argue about.
Other problems arise when using the Internet Cache Protocol (ICP), a lightweight
message format used for communication among Web proxy caches, implemented on
top of UDP. ICP is used for object location, and can be used for cache selection.
Because of its connection-less nature, it is vulnerable to some methods of attack.
By checking the source IP address of an ICP message, a certain degree of protection is
accomplished. ICP queries should be processed only if the querying address is
allowed to access the cache. ICP replies should only be accepted from known
neighbours, and otherwise ignored. Trusting the validity of addresses at the IP level makes
ICP susceptible to IP address spoofing which has many problematic consequences
(for example: inserting bogus ICP queries, inserting bogus ICP replies thereby
preventing a certain neighbour from being used or forcing a certain neighbour to be
used). In fact, only routers are able to detect spoofed addresses; hosts can't do it. But
still, the IP Authentication Header can be used to provide cryptographic
authentication for the IP packet with the ICP in it.
In general, the caching method can cut down duplicate requests by up to 30%. However,
in order to investigate the overall effects of different caching strategies on the network
as a whole, a mathematical model should be used.


Examples of Proxy Servers


1. Microsoft Proxy Server 2.0: recently developed for use with Windows
NT 4.0. It has firewall functionality, and caching ability as well.
2. Netscape Proxy Server 2.5: for use with UNIX and Windows NT. It
caches Web pages and scans for viruses at the same time.


Mail Protocols
Introduction
Mail service is perhaps the most widely used application on the Internet. Several
protocols for mail service are available, but the most widely used is the Simple Mail
Transfer Protocol (SMTP). Because of the large number of mobile and workstation users
on the Internet, other support protocols, such as POP3 (Post Office Protocol version 3) and
IMAP4 (Internet Message Access Protocol version 4), have also been developed.
Simple Mail Transfer Protocol (SMTP)
SMTP enables ASCII text messages to be sent to mailboxes on TCP/IP hosts that have
been configured with mail services. Figure 13.3 shows a mail session that uses SMTP.
A user who wants to send mail interacts with the local mail system through the user
agent (UA) component of the mail system. The mail is deposited in the local
outgoing mailbox. A sender-SMTP process periodically polls the outgoing box, and
when the process finds a mail message in the box, it establishes a TCP connection
with the destination host to which mail is to be sent. The receiver-SMTP process
running in the destination host accepts the connection, and the mail message is sent to
that connection. The receiver-SMTP process deposits the mail message in the
destination mailbox on the destination host. If there is no mailbox with the specified
name on the destination host, a mail message is sent to the originator. This message
indicates that the mailbox does not exist. The sender-SMTP and receiver-SMTP
processes that are responsible for the transfer of mail are called message transfer
agents (MTA).
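As a quick illustration of the sending side, the sketch below uses Python's standard smtplib to hand an ASCII message to an SMTP server over TCP; the host name and the addresses are placeholder examples.

import smtplib
from email.message import EmailMessage

# Compose a simple text message on the user agent side.
msg = EmailMessage()
msg["From"] = "jto@example.com"        # example addresses only
msg["To"] = "trainee@example.org"
msg["Subject"] = "SMTP test"
msg.set_content("Hello from the user agent.")

# Hand the message to the SMTP server on the well-known port 25.
with smtplib.SMTP("mail.example.com", 25) as server:   # example server
    server.send_message(msg)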
Post Office Protocol Version 3 (POP3)
SMTP expects the destination host --- the mail server receiving the mail --- to be
online; otherwise, a TCP connection cannot be established with the destination host.
For this reason, it is not practical to establish an SMTP session with a desktop for
receiving mail because desktop workstations are often turned off at the end of the day.
In many network environments, SMTP mail is received by an SMTP host that is
always active on the network (see fig. 13.6). This SMTP host provides a mail-drop
service. Workstations interact with the SMTP host and retrieve messages by using a
client/server mail protocol, such as POP3 (Post Office Protocol version 3) described
in RFC 1939. POP3 uses the TCP transport protocol, and the POP3 server listens on
its well-known TCP port number 110.
Although POP3 is used to download messages from the server, SMTP is still used to
forward messages from the workstation user to its SMTP mail server.
Tables 13.6 through 13.8 list the POP3 commands based on the RFC 1939 specification.
Although the USER and PASS commands (see table 13.7) are listed as optional
commands in RFC 1939, most POP3 implementations support these commands.
USER and PASS can be regarded as optional because they can be replaced
by the MD5 (Message Digest version 5) authentication method used in the APOP
command.
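A minimal sketch of the retrieval side, using Python's standard poplib to log in with USER/PASS, check the maildrop and fetch the first message; the server name and credentials are placeholder examples.

import poplib

# Connect to the POP3 server on its well-known TCP port 110.
server = poplib.POP3("mail.example.com", 110)   # example server
server.user("trainee")                          # USER command
server.pass_("secret")                          # PASS command

count, total_bytes = server.stat()              # number of messages and total size
print(count, "messages,", total_bytes, "bytes in the maildrop")

if count:
    response, lines, octets = server.retr(1)    # RETR 1: fetch the first message
    print(b"\r\n".join(lines).decode("ascii", "replace"))

server.quit()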


Figure 13.6 POP3 client/server architecture (courtesy Learning Tree): the POP3 client on
the user's workstation connects over the TCP/IP internet to the POP3 server, which listens
on TCP port 110 and also receives mail via SMTP on behalf of the user agent.

1. The Message Transfer Agent (MTA) is run on a computer with more resources
than those available to the workstation; it offers a "maildrop" service to smaller
nodes, such as workstations.
2. POP3 provides dynamic access to the maildrop server.

File Transfer Protocol


Introduction
The Internet File Transfer Protocol (FTP) is defined by RFC 959, published in 1985. It
provides facilities for transferring files to and from remote computer systems. Usually the
user transferring a file needs authority to log in and access files on the remote system.
The common facility known as anonymous FTP actually works via a special type of
public guest account implemented on the remote system.


FTP Session
An FTP session normally involves the interaction of five software elements.

User Interface: This provides a user interface and drives the client protocol interpreter.

Client PI: This is the client protocol interpreter. It issues commands to the remote server
protocol interpreter and it also drives the client data transfer process.

Server PI: This is the server protocol interpreter which responds to commands issued by
the client protocol interpreter and drives the server data transfer process.

Client DTP: This is the client data transfer process responsible for communicating with
the server data transfer process and the local file system.

Server DTP: This is the server data transfer process responsible for communicating with
the client data transfer process and the remote file system.

Five software elements of FTP: the user interface drives the user (client) PI; the user PI
and server PI exchange FTP commands and replies over the control connection; the user
DTP and server DTP exchange data over the data connection and access the local and
remote file systems.

RFC 959 refers to the user rather than the client. RFC 959 defines the means by which
the two PIs talk to each other and by which the two DTPs talk to each other. The user
interface and the mechanism by which the PIs talk to the DTPs are not part of the
standard. It is common practice for the PI and DTP functionalities to be part of the
same program but this is not essential.
During an FTP session there will be two separate network connections one between
the PIs and one between the DTPs. The connection between the PIs is known as the
control connection. The connection between the DTPs is known as the data
connection.


The control and data connections use TCP.

In normal Internet operation the FTP server listens on the well-known port number 21
for control connection requests. The choice of port numbers for the data connection
depends on the commands issued on the control connection. Conventionally the client
sends a control message which indicates the port number on which the client is
prepared to accept an incoming data connection request.
On the client system, the control and data connections are made from its available
(ephemeral) ports; on the server system, the control connection uses the well-known
port 21 and the data connection conventionally uses port 20, with both connections
carried over the TCP/IP internet.

The use of separate connections for control and data offers the advantage that the two
connections can select different appropriate qualities of service, e.g. minimum delay
for the control connection and maximum throughput for the data connection. It also
avoids the problems of providing escape and transparency for commands embedded
within the data stream.
When a transfer is being set up it is always initiated by the client; however, either the
client or the server may be the sender of data. As well as transferring user-requested
files, the data transfer mechanism is also used for transferring directory listings from
server to client.
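For illustration, the sketch below uses Python's standard ftplib, which opens the control connection to port 21 and sets up data connections for listings and retrievals behind the scenes; the host name, directory and file name are placeholder examples, and anonymous login is assumed to be permitted.

from ftplib import FTP

# Control connection to the well-known FTP port 21.
with FTP("ftp.example.com") as ftp:        # example host
    ftp.login()                            # anonymous login
    print(ftp.getwelcome())

    ftp.cwd("/pub")                        # commands travel on the control connection
    names = ftp.nlst()                     # the listing arrives on a data connection
    print(names)

    # Retrieve a file over a separate data connection.
    with open("readme.txt", "wb") as out:
        ftp.retrbinary("RETR readme.txt", out.write)   # example file name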
TFTP PROTOCOL
TFTP is a simple protocol to transfer files, and therefore was named the Trivial File
Transfer Protocol or TFTP. It has been implemented on top of the Internet User
Datagram protocol (UDP or Datagram) so it may be used to move files between
machines on different networks implementing UDP. (This should not exclude the
possibility of implementing TFTP on top of other datagram protocols.) It is
designed to be small and easy to implement. Therefore, it lacks most of the features
of a regular FTP. The only thing it can do is read and write files (or mail) from/to a
remote server. It cannot list directories, and currently has no provisions for user
authentication. In common with other Internet protocols, it passes 8 bit bytes of data.
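As a small sketch of this simplicity, the fragment below builds a read request (RRQ) packet, whose layout (a 2-byte opcode followed by a zero-terminated file name and a zero-terminated mode string) comes from RFC 1350, and sends it over UDP to port 69. The server address and file name are placeholder examples, and retransmission, acknowledgements and the rest of the transfer are omitted.

import socket
import struct

OP_RRQ = 1   # TFTP opcode for a read request (RFC 1350)

def build_rrq(filename, mode="octet"):
    # 2-byte opcode, then the file name and the transfer mode, each zero-terminated.
    return (struct.pack("!H", OP_RRQ)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5.0)
sock.sendto(build_rrq("bootfile.bin"), ("192.0.2.1", 69))   # example server, TFTP port 69

try:
    data, addr = sock.recvfrom(516)   # first DATA block: 4-byte header + up to 512 bytes
    print("received", len(data), "bytes from", addr)
except socket.timeout:
    print("no reply from the TFTP server")
finally:
    sock.close()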
