
Unit- IV

What is Network Layer?

The network layer is concerned with getting packets from the
source all the way to the destination. The functions of this layer are:
1. Routing - The process of transferring packets received from the Data Link
Layer of the source network to the Data Link Layer of the correct
destination network is called routing. It involves decision making at each
intermediate node about where to send the packet next so that it eventually
reaches its destination. The node which makes this choice is called a router.
For routing we require some mode of addressing which is recognized by the
Network Layer. This addressing is different from the MAC layer addressing.
2. Inter-networking - The network layer is the same across all physical
networks (such as Token Ring and Ethernet). Thus, if two physically
different networks have to communicate, the packets that arrive at the Data
Link Layer of the node which connects these two physically different
networks are stripped of their headers and passed to the Network
Layer. The network layer then passes this data to the Data Link Layer
of the other physical network.
3. Congestion Control - If the incoming rate of packets arriving at any
router is more than the outgoing rate, then congestion is said to occur.
Congestion may be caused by many factors. If, suddenly, packets begin
arriving on many input lines and all need the same output line, a queue
will build up. If there is insufficient memory to hold all of them, packets will
be lost. But even if routers have an infinite amount of memory, congestion
gets worse, because by the time packets reach the front of the queue, they
have already timed out (repeatedly) and duplicates have been sent. All these
packets are dutifully forwarded to the next router, increasing the load all the
way to the destination. Another cause of congestion is slow processors: if
a router's CPU is slow at performing the bookkeeping tasks required of
it, queues can build up even though there is excess line capacity.
Similarly, low-bandwidth lines can also cause congestion.
Routing: Routing is the process of forwarding of a packet in a network so that it
reaches its intended destination. The main goals of routing are:

1. Correctness: The routing should be done properly and correctly so that the
packets may reach their proper destination.

2. Simplicity: The routing should be done in a simple manner so that the

overhead is as low as possible. With increasing complexity of the routing
algorithms the overhead also increases.
3. Robustness: Once a major network becomes operative, it may be expected
to run continuously for years without any failures. The algorithms designed
for routing should be robust enough to handle hardware and software
failures and should be able to cope with changes in the topology and traffic
without requiring all jobs in all hosts to be aborted and the network rebooted
every time some router goes down.
4. Stability: The routing algorithms should be stable under all possible
network conditions.
5. Fairness: Every node connected to the network should get a fair chance of
transmitting its packets. This is generally done on a first-come, first-served
basis.
6. Optimality: The routing algorithms should be optimal in terms of
throughput and minimizing mean packet delays. There is a trade-off between
these goals, and one has to choose depending on the requirements.

Static routing: Static routing is not really a routing protocol. It is
simply the process of entering routes into a device's routing table via a configuration file
that is loaded when the routing device starts up. As an alternative,
these routes can be entered by a network administrator who
configures the routes manually. Because the
configured routes don't change after they are configured (unless a
human changes them) they are called 'static' routes.
Static routing may have the following uses:

Static routing can be used to define an exit point from a router when no other routes are
available or necessary. This is called a default route.

Static routing can be used for small networks that require only one or two routes. This is
often more efficient since link capacity is not wasted by exchanging dynamic routing updates.

Static routing is often used as a complement to dynamic routing to provide a failsafe

backup in the event that a dynamic route is unavailable.

Static routing is often used to help transfer routing information from one routing protocol
to another (routing redistribution).
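A static default route and a route for a small stub network can be entered directly, as described above. The commands below use Linux `ip route` syntax; the addresses and the interface name `eth0` are made-up examples, not values from these notes:

```shell
# Hypothetical addresses; Linux iproute2 syntax.
# Default route: the exit point used when no other route matches.
ip route add default via 192.0.2.1 dev eth0

# One static route for a small remote network reachable via a neighbour router.
ip route add 198.51.100.0/24 via 192.0.2.254
```

Because these entries never change unless an administrator edits them, they remain in the routing table until explicitly removed.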

Dynamic routing: Dynamic routing protocols are supported
by software applications running on the routing device (the router)
which dynamically learn network destinations and how to get to
them, and also advertise those destinations to other routers. This
advertisement function allows all the routers to learn about all the
destination networks that exist and how to get to those networks.
Classification of Routing Algorithms

The routing algorithms may be classified as follows:

1. Adaptive Routing Algorithm: These algorithms change their routing
decisions to reflect changes in the topology and in traffic as well. These get
their routing information from adjacent routers or from all routers. The
optimization parameters are the distance, number of hops and estimated
transit time. This can be further classified as follows:
1. Centralized: In this type some central node in the network gets entire
information about the network topology, about the traffic and about
other nodes. This then transmits this information to the respective
routers. The advantage of this is that only one node is required to
keep the information. The disadvantage is that if the central node goes
down the entire network is down, i.e. single point of failure.
2. Isolated: In this method the node decides the routing without seeking
information from other nodes. The sending node does not know about
the status of a particular link. The disadvantage is that the packet may
be send through a congested route resulting in a delay. Some
examples of this type of algorithm for routing are:
Hot Potato: When a packet comes to a node, it tries to get rid
of it as fast as it can, by putting it on the shortest output queue
without regard to where that link leads. A variation of this
algorithm is to combine static routing with the hot potato
algorithm. When a packet arrives, the routing algorithm takes
into account both the static weights of the links and the queue
lengths.
Backward Learning: In this method the routing tables at each
node get modified by information from the incoming packets.
One way to implement backward learning is to include the
identity of the source node in each packet, together with a hop
counter that is incremented on each hop. When a node receives
a packet on a particular line, it notes down the number of hops
it has taken to reach it from the source node. If the previous
value of the hop count stored in the node is better than the current
one then nothing is done, but if the current value is better then
the value is updated for future use. The problem with this is
that when the best route goes down, the node cannot recall the
second-best route to a particular destination. Hence all the nodes
have to forget the stored information periodically and start all
over again.
3. Distributed: In this the node receives information from its
neighbouring nodes and then takes the decision about which way to
send the packet. The disadvantage is that if, in the interval between
receiving the information and sending the packet, something changes,
then the packet may be delayed.
2. Non-Adaptive Routing Algorithm: These algorithms do not base their
routing decisions on measurements and estimates of the current traffic and
topology. Instead the route to be taken in going from one node to the other is
computed in advance, off-line, and downloaded to the routers when the
network is booted. This is also known as static routing. This can be further
classified as:
1. Flooding: Flooding adopts the technique in which every incoming
packet is sent out on every outgoing line except the one on which it
arrived. One problem with this method is that packets may go in a
loop. As a result a node may receive several copies of a
particular packet, which is undesirable. Some techniques adopted to
overcome these problems are as follows:
Sequence Numbers: Every packet is given a sequence
number. When a node receives the packet it sees its source
address and sequence number. If the node finds that it has sent
the same packet earlier then it will not transmit the packet and
will just discard it.
Hop Count: Every packet has a hop count associated with it.
This is decremented (or incremented) by one by each node
which sees it. When the hop count becomes zero (or a
maximum possible value) the packet is dropped.
Spanning Tree: The packet is sent only on those links that
lead to the destination by constructing a spanning tree rooted at
the source. This avoids loops in transmission but is possible
only when all the intermediate nodes have knowledge of the
network topology.

Flooding is not practical for most applications, but in
cases where a high degree of robustness is desired, such as in military
applications, flooding is of great help.
2. Random Walk: In this method a packet is sent by the node to one of
its neighbours randomly. This algorithm is highly robust. When the
network is highly interconnected, this algorithm has the property of
making excellent use of alternative routes. It is usually implemented
by sending the packet onto the least queued link.
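As a rough sketch of flooding with duplicate suppression by sequence numbers, the Python fragment below floods one packet, identified by a (source, sequence number) pair, over a small made-up topology; each node drops copies of a packet it has already forwarded:

```python
from collections import deque

def flood(graph, source, seq):
    """Flood one packet (identified by source + sequence number) from `source`.
    Each node remembers the (source, seq) pairs it has already forwarded, so
    duplicate copies arriving over loops are discarded, not re-flooded."""
    seen = {source: {(source, seq)}}           # per-node memory of forwarded packets
    deliveries = []                            # (sender, receiver) hops the packet made
    queue = deque((source, nbr) for nbr in graph[source])
    while queue:
        sender, node = queue.popleft()
        deliveries.append((sender, node))
        if (source, seq) in seen.setdefault(node, set()):
            continue                           # duplicate: discard, do not re-flood
        seen[node].add((source, seq))
        for nbr in graph[node]:
            if nbr != sender:                  # every outgoing line except arrival line
                queue.append((node, nbr))
    return deliveries

# A small looped topology (hypothetical): A-B, B-C, C-A, C-D.
graph = {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B', 'D'], 'D': ['C']}
print(sorted(set(n for _, n in flood(graph, 'A', seq=1))))   # every other node is reached
```

Even with the A-B-C loop, the sequence-number memory keeps the number of transmissions finite.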
Shortest Path Routing

Here, the central question dealt with is 'How to determine the

optimal path for routing?' Various algorithms are used to
determine the optimal routes with respect to some predetermined
criteria. A network is represented as a graph, with its terminals as
nodes and the links as edges. A 'length' is associated with each
edge, which represents the cost of using the link for transmission.
The lower the cost, the more suitable the link. The cost is determined
depending upon the criteria to be optimized. Some of the
important ways of determining the cost are:
Minimum number of hops: If each link is given a unit cost,
the shortest path is the one with minimum number of hops.
Such a route is easily obtained by a breadth first search
method. This is easy to implement but ignores load, link
capacity etc.
Transmission and Propagation Delays: If the cost is
fixed as a function of transmission and propagation delays, it
will reflect the link capacities and the geographical
distances. However these costs are essentially static and do
not consider the varying load conditions.
Queuing Delays: If the cost of a link is determined through
its queuing delays, it takes care of the varying load
conditions, but not of the propagation delays.
Ideally, the cost parameter should consider all the above
mentioned factors, and it should be updated periodically to reflect
the changes in the loading conditions. However, if the routes are
changed according to the load, the load changes again. This
feedback effect between routing and load can lead to undesirable
oscillations and sudden swings.
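Given per-link costs of any of the kinds above, shortest paths are classically computed with Dijkstra's algorithm. The sketch below assumes a small hypothetical network given as an adjacency map of link costs:

```python
import heapq

def dijkstra(graph, src):
    """Least-cost distances from `src` over non-negative link costs.
    graph: {node: {neighbour: cost}}; returns (distances, predecessors)."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                              # stale heap entry, skip
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float('inf')):    # found a cheaper path to v
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

# Hypothetical 4-router network; edge weights are link costs.
net = {'A': {'B': 2, 'C': 5}, 'B': {'A': 2, 'C': 1, 'D': 4},
       'C': {'A': 5, 'B': 1, 'D': 1}, 'D': {'B': 4, 'C': 1}}
dist, prev = dijkstra(net, 'A')
print(dist['D'])   # A -> B -> C -> D costs 2 + 1 + 1 = 4
```

With unit costs on every link this reduces to the minimum-hop (breadth-first) case mentioned above.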

CONGESTION CONTROL: Congestion control refers to techniques and

mechanisms that can either prevent congestion, before it happens, or remove
congestion, after it has happened. In general, we can divide congestion control
mechanisms into two broad categories: open-loop congestion control (prevention)
and closed-loop congestion control (removal).

Open-Loop Congestion Control

(a)Retransmission Policy: Retransmission is sometimes unavoidable. If the sender
feels that a sent packet is lost or corrupted, the packet needs to be retransmitted.
Retransmission in general may increase congestion in the network. However, a
good retransmission policy can prevent congestion. The retransmission policy and
the retransmission timers must be designed to optimize efficiency and at the same
time prevent congestion.
(b) Window Policy: The type of window at the sender may also affect congestion.
The Selective Repeat window is better than the Go-Back-N window for congestion
control. In the Go-Back-N window, when the timer for a packet times out, several
packets may be resent, although some may have arrived safe and sound at the
receiver. This duplication may make the congestion worse. The Selective Repeat
window, on the other hand, tries to resend only the specific packets that have been lost or corrupted.
(c) Acknowledgment Policy: The acknowledgment policy imposed by the
receiver may also affect congestion. If the receiver does not acknowledge every
packet it receives, it may slow down the sender and help prevent congestion.
(d) Discarding Policy: A good discarding policy by the routers may prevent
congestion and at the same time may not harm the integrity of the transmission.
(e) Admission Policy: An admission policy, which is a quality-of-service
mechanism, can also prevent congestion in virtual-circuit networks. Switches in a
flow first check the resource requirement of a flow before admitting it to the network.

Closed-Loop Congestion Control

(a) Backpressure: The technique of backpressure refers to a congestion control
mechanism in which a congested node stops receiving data from the immediate
upstream node or nodes. This may cause the upstream node or nodes to become
congested, and they, in turn, reject data from their own upstream nodes, and so on.
(b) Choke Packet: A choke packet is a packet sent by a node to the source to inform
it of congestion. In the choke packet method, the warning is from the router, which
has encountered congestion, to the source station directly. The intermediate nodes
through which the packet has travelled are not warned.
(c) Implicit Signaling: In implicit signaling, there is no communication between the
congested node or nodes and the source. The source guesses that there is a
congestion somewhere in the network from other symptoms.
(d) Explicit Signaling: The node that experiences congestion can explicitly send a
signal to the source or destination. The explicit signaling method, however, is
different from the choke packet method. In the choke packet method, a separate

packet is used for this purpose; in the explicit signaling method, the signal is
included in the packets that carry data. Explicit signaling, as we will see in Frame
Relay congestion control, can occur in either the forward or the backward direction.
Congestion control algorithms
(a) Leaky Bucket Algorithm
It is a traffic shaping mechanism that controls the amount and the rate of the
traffic sent to the network. A leaky bucket algorithm shapes bursty traffic into
fixed-rate traffic by averaging the data rate. Imagine a bucket with a small hole
at the bottom.
The rate at which the water is poured into the bucket is not fixed and can vary but it
leaks from the bucket at a constant rate. Thus (as long as water is present in bucket),
the rate at which the water leaks does not depend on the rate at which the water is
input to the bucket.
Also, when the bucket is full, any additional water that enters into the bucket spills
over the sides and is lost.
The same concept can be applied to packets in the network. Consider that data is
coming from the source at variable speeds. Suppose that a source sends data at 12
Mbps for 4 seconds. Then there is no data for 3 seconds. The source again transmits
data at a rate of 10 Mbps for 2 seconds. Thus, in a time span of 9 seconds, 68 Mb data
has been transmitted.
If a leaky bucket algorithm is used, the data flows out at a constant 8 Mbps, so the
68 Mb is drained in 8.5 seconds. Thus a constant flow is maintained.

(b) Token bucket Algorithm

The leaky bucket algorithm allows only an average (constant) rate of data flow. Its
major problem is that it cannot deal with bursty data.

A leaky bucket algorithm does not consider the idle time of the host. For example, if
the host was idle for 10 seconds and now it is willing to send data at a very high
speed for another 10 seconds, the total data transmission will be spread over 20
seconds and only the average data rate will be maintained. The host gets no
advantage from having been idle for 10 seconds.
To overcome this problem, a token
bucket algorithm is used. A token bucket
algorithm allows bursty data transfers.
A token bucket algorithm is a modification of the leaky bucket in which the bucket
holds tokens.
In this algorithm, tokens are generated at every clock tick. For a packet to be
transmitted, the system must remove one or more tokens from the bucket.
Thus, a token bucket algorithm allows idle hosts to accumulate credit for the future
in form of tokens.
For example, if a system generates 100 tokens in one clock tick and the host is idle
for 100 ticks, the bucket will contain 10,000 tokens.
Now, if the host wants to send bursty data, it can consume all 10,000 tokens at once
for sending 10,000 cells or bytes.
Thus a host can send bursty data as long as bucket is not empty.
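A minimal tick-based sketch of the token bucket, with made-up rate and bucket-size values, shows how idle ticks accumulate credit that later permits a burst:

```python
def token_bucket(arrivals, rate, bucket_size):
    """arrivals[t] = data the host wants to send at tick t. `rate` tokens are
    added per tick (1 token = 1 unit of data), capped at `bucket_size`."""
    tokens, sent = 0.0, []
    for want in arrivals:
        tokens = min(tokens + rate, bucket_size)   # credit accumulates while idle
        s = min(want, tokens)                      # bursts allowed while tokens last
        tokens -= s
        sent.append(s)
    return sent

# Idle for 10 ticks (credit builds up), then a 500-unit burst in a single tick.
sent = token_bucket([0] * 10 + [500], rate=100, bucket_size=10_000)
print(sent[-1])   # the whole burst goes out at once, unlike with a leaky bucket
```

A leaky bucket with the same rate would have spread those 500 units over several ticks; the token bucket lets the idle host spend its saved credit.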

Like the OSI network model, TCP/IP also has a network model. TCP/IP
was already under development when the OSI standard was
published, and there was interaction between the designers
of the OSI and TCP/IP standards. The TCP/IP model is not the same as the OSI
model: OSI is a seven-layered standard, but TCP/IP is a four-layered standard.
Layer 4. Application Layer
Application layer is the top most layer of four layer TCP/IP model.
Application layer is present on the top of the Transport layer.

Application layer defines TCP/IP application protocols and how host

programs interface with Transport layer services to use the network.
Application layer includes all the
higher-level protocols like DNS (Domain Name System),
HTTP (Hypertext Transfer Protocol), Telnet, SSH, FTP (File
Transfer Protocol), TFTP (Trivial File Transfer Protocol),
SNMP (Simple Network Management Protocol), SMTP
(Simple Mail Transfer Protocol), DHCP
(Dynamic Host Configuration Protocol), X Windows, RDP (Remote
Desktop Protocol) etc.
Layer 3. Transport Layer
Transport Layer is the third layer of the four layer TCP/IP model. The
position of the Transport layer is between Application and Internet
layer. The purpose of Transport layer is to permit devices on the
source and destination hosts to carry on a conversation. Transport
layer defines the level of service and status of the connection used
when transporting data.
The main protocols included at Transport layer are TCP (Transmission
Control Protocol) and UDP (User Datagram Protocol).
Layer 2. Internet Layer
Internet Layer is the second layer of the four layer TCP/IP model. The
position of the Internet layer is between the Network Access
Layer and the Transport layer. The Internet layer packs data into data packets
known as IP datagrams, which contain source and destination
address (logical address or IP address) information that is used to
forward the datagrams between hosts and across networks.
The Internet layer is also responsible for routing of IP datagrams.
A packet-switched network depends upon a connectionless
internetwork layer. This layer is known as the Internet layer. Its job is to
allow hosts to insert packets into any network and have them travel
independently to the destination. At the destination side, data
packets may appear in a different order than they were sent. It is the
job of the higher layers to rearrange them in order to deliver them to
proper network applications operating at the Application layer.

The main protocols included at Internet layer are IP (Internet

Protocol), ICMP (Internet Control Message Protocol), ARP (Address
Resolution Protocol), RARP (Reverse Address Resolution Protocol)
and IGMP (Internet Group Management Protocol).
Layer 1. Network Access Layer
Network Access Layer is the first layer of the four layer TCP/IP
model. Network Access Layer defines details of how data is
physically sent through the network, including how bits are
electrically or optically signaled by hardware devices that interface
directly with a network medium, such as coaxial cable, optical fiber,
or twisted pair copper wire.
The protocols included in Network Access Layer are Ethernet, Token
Ring, FDDI, X.25, Frame Relay etc. The most popular LAN
architecture among those listed above is Ethernet. Ethernet uses
an Access Method called CSMA/CD (Carrier Sense Multiple
Access/Collision Detection) to access the media, when Ethernet
operates in a shared media. An Access Method determines how a
host will place data on the medium.
In the CSMA/CD Access Method, every host has equal access to the
medium and can place data on the wire when the wire is free from
network traffic. When a host wants to place data on the wire, it will
check the wire to find whether another host is already using the
medium. If there is traffic already in the medium, the host will wait
and if there is no traffic, it will place the data in the medium.
UDP (User Datagram Protocol)
UDP -- like its cousin the Transmission Control Protocol (TCP) -- sits directly on
top of the base Internet Protocol (IP). In general, UDP implements a fairly
"lightweight" layer above the Internet Protocol. It seems at first sight that a similar
service is provided by both UDP and IP, namely transfer of data. But we need UDP
for multiplexing/demultiplexing of addresses.

UDP's main purpose is to abstract network traffic in the form of datagrams. A

datagram comprises one single "unit" of binary data; the first eight (8) bytes of a
datagram contain the header information and the remaining bytes contain the data.
UDP Headers
The UDP header consists of four (4) fields of two bytes each:

Source Port: the source port number

Destination Port: the destination port number

Length: the datagram size in bytes (header plus data)

Checksum: an optional checksum over the datagram
UDP port numbers allow different applications to maintain their own "channels"
for data; both UDP and TCP use this mechanism to support multiple applications
sending and receiving data concurrently. The sending application (that could be a
client or a server) sends UDP datagrams through the source port, and the recipient
of the packet accepts this datagram through the destination port. Some applications
use static port numbers that are reserved for or registered to the application. Other
applications use dynamic (unregistered) port numbers. Because the UDP port
fields are two bytes long, valid port numbers range from 0 to 65535; by
convention, values above 49151 represent dynamic ports.
The datagram size is a simple count of the number of bytes contained in the header
and data sections. Because the header length is a fixed size, this field essentially
refers to the length of the variable-sized data portion (sometimes called the
payload). The maximum size of a datagram varies depending on the operating
environment. With a two-byte size field, the theoretical maximum size is 65535
bytes. However, some implementations of UDP restrict the datagram to a smaller
number -- sometimes as low as 8192 bytes.
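The four 2-byte header fields and the 16-bit length limit can be illustrated with Python's struct module; the port numbers and payload below are arbitrary examples:

```python
import struct

def udp_header(src_port, dst_port, payload, checksum=0):
    """Pack the four 2-byte UDP header fields in network byte order:
    source port, destination port, length (header + data), checksum."""
    length = 8 + len(payload)          # the header is always 8 bytes
    if length > 65535:
        raise ValueError("datagram too large for the 16-bit length field")
    return struct.pack('!HHHH', src_port, dst_port, length, checksum) + payload

dgram = udp_header(12345, 53, b'hello')
print(len(dgram))   # 8 header bytes + 5 payload bytes = 13
```

Unpacking the first 8 bytes with the same `'!HHHH'` format recovers the four fields.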
UDP checksums work as a safety feature. The checksum value represents an
encoding of the datagram data that is calculated first by the sender and later by the
receiver. Should an individual datagram be tampered with (due to a hacker) or get
corrupted during transmission (due to line noise, for example), the calculations of
the sender and receiver will not match, and the UDP protocol will detect this error.
The algorithm is not fool-proof, but it is effective in many cases. In UDP,
checksumming is optional -- turning it off squeezes a little extra performance from the
system -- as opposed to TCP, where checksums are mandatory. It should be
remembered that checksumming is optional only for the sender, not the receiver:
if the sender has used a checksum then it is mandatory for the receiver to verify it.

Usage of the checksum in UDP is optional. In case the sender does not use it, it
sets the checksum field to all 0's. Now if the sender computes the checksum then
the recipient must also compute the checksum and check the field accordingly. If the
calculated checksum turns out to be all 0's, then the sender sends all 1's
instead of all 0's. This is because, in the algorithm for checksum computation used by
UDP, a checksum of all 1's is equivalent to a checksum of all 0's. Now the
checksum field is unambiguous for the recipient: if it is all 0's then the checksum has
not been used; in any other case the checksum has to be computed.
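The one's-complement arithmetic described above, including the all-0's versus all-1's convention, can be sketched as follows. Real UDP also covers a pseudo-header of IP addresses, which is omitted here for brevity; the sketch applies the RFC 1071 style algorithm to a raw byte string:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of 16-bit words, then complemented.
    The UDP pseudo-header is deliberately left out of this sketch."""
    if len(data) % 2:
        data += b'\x00'                            # pad to a 16-bit boundary
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry back around
    checksum = ~total & 0xFFFF
    return checksum or 0xFFFF                      # computed all-0's is sent as all-1's

print(hex(internet_checksum(b'\x00\x01\xf2\x03\xf4\xf5\xf6\xf7')))
```

The final line shows why a receiver can treat an all-0's field as "checksum not used": a genuinely computed checksum is never transmitted as all 0's.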

Classful Addressing: In classful addressing, the address space is

divided into five classes: A, B, C, D, and E. Each class occupies some
part of the address space. We can find the class of an address when
given the address in binary notation or dotted-decimal notation. If
the address is given in binary notation, the first few bits can
immediately tell us the class of the address. If the address is given
in decimal-dotted notation, the first byte defines the class.
One problem with classful addressing is that each class is divided into a fixed
number of blocks with each block having a fixed size.
In classful addressing, an IP address in class A, B, or C is divided into netid and
hostid. These parts are of varying lengths, depending on the class of the address.
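The first-byte rule for dotted-decimal notation can be sketched as follows; the address used at the end is an arbitrary example:

```python
def address_class(dotted: str) -> str:
    """Class from the first byte of a dotted-decimal IPv4 address:
    A: 0-127, B: 128-191, C: 192-223, D: 224-239, E: 240-255."""
    first = int(dotted.split('.')[0])
    for cls, upper in (('A', 127), ('B', 191), ('C', 223), ('D', 239), ('E', 255)):
        if first <= upper:
            return cls
    raise ValueError("invalid first byte")

print(address_class('130.11.78.56'))   # first byte 130 falls in 128-191: class B
```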
During the era of classful addressing, subnetting was
introduced. If an organization was granted a large block in
class A or B, it could divide the addresses into several
contiguous groups and assign each group to smaller networks
(called subnets).
Classless Addressing To overcome address depletion and give more organizations
access to the Internet, classless addressing was designed and implemented. In this
scheme, there are no classes, but the addresses are still granted in blocks.
In classless addressing, when an entity, small or large, needs to be connected to
the Internet, it is granted a block (range) of addresses. The size of the block (the
number of addresses) varies based on the nature and size of the entity.
Restrictions: To simplify the handling of addresses, the Internet authorities impose three
restrictions on classless address blocks:
1. The addresses in a block must be contiguous, one after another.
2. The number of addresses in a block must be a power of 2 (1, 2, 4, 8, ...).
3. The first address must be evenly divisible by the number of addresses.
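The second and third restrictions can be checked mechanically by treating an address as a 32-bit integer; the example blocks below are hypothetical:

```python
import ipaddress

def valid_block(first_addr: str, count: int) -> bool:
    """Check the classless-block restrictions: the count of addresses must be a
    power of 2, and the first address (as a 32-bit integer) must be evenly
    divisible by the count. Contiguity is implied by (first address, count)."""
    power_of_two = count > 0 and (count & (count - 1)) == 0
    divisible = int(ipaddress.IPv4Address(first_addr)) % count == 0
    return power_of_two and divisible

print(valid_block('205.16.37.32', 16))   # 16 addresses starting on a /28 boundary
print(valid_block('205.16.37.40', 16))   # invalid: 40 is not divisible by 16
```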

Transport Layer: Transport layer offers peer-to-peer and end-to-end connection between
two processes on remote hosts. Transport layer takes data from upper layer (i.e. Application
layer) and then breaks it into smaller size segments, numbers each byte, and hands over to
lower layer (Network Layer) for delivery.

Functions: This layer is the first one which breaks the data supplied by the
Application layer into smaller units called segments. It numbers every byte in the
segment and keeps account of this numbering.

This layer ensures that data is received in the same sequence in which it was sent.

This layer provides end-to-end delivery of data between hosts which may or may not
belong to the same subnet.

All server processes intending to communicate over the network are equipped with well-known Transport Service Access Points (TSAPs), also known as port numbers.

End-to-End Communication : A process on one host identifies its peer host on remote
network by means of TSAPs, also known as Port numbers. TSAPs are very well defined and a
process which is trying to communicate with its peer knows this in advance.
The two main Transport layer protocols are:
Transmission Control Protocol
It provides reliable communication between two hosts.
User Datagram Protocol
It provides unreliable communication between two hosts.

DNS (Domain Name Service)
The internet primarily uses IP addresses for locating nodes. However, it is humanly
not possible for us to keep track of the many important nodes as numbers.
Alphabetical names, as we see, would be more convenient to remember than the
numbers, as we are more familiar with words. Hence, in the chaotic organization of
numbers (IP addresses) we would be much relieved if we could use familiar-sounding
names for nodes on the network.
There is also another motivation for DNS. All the related information about a
particular network (generally maintained by an organization, firm or university)
should be available at one place. The organization should have complete control
over what it includes in its network and how it "organizes" its network.
Meanwhile, all this information should be available transparently to the outside world.
Conceptually, the internet is divided into several hundred top-level domains, where
each domain covers many hosts. Each domain is partitioned into subdomains which
may be further partitioned into subsubdomains, and so on. So the domain space is
partitioned in a tree-like structure. It should be noted that this tree
hierarchy has nothing in common with the IP address hierarchy or organization.
The internet uses a hierarchical tree structure of Domain Name Servers for IP
address resolution of a host name.

The top level domains are either generic or
names of countries. Examples of generic top level
domains are .edu, .mil, .gov, .org, .net,
.com, .int etc. For countries we have one entry for
each country, as defined in ISO 3166,
e.g. .uk (United Kingdom) and .in (India).
The leaf nodes of this tree are target machines.

Obviously we would have to
ensure that the names within a subdomain are unique. The maximum length of any
name between two dots can be 63 characters. The absolute address should not be
more than 255 characters. Domain names are case insensitive. Also, in a name only
letters, digits and hyphens are allowed. For example, www.example.com would be a
domain name corresponding to a machine named www under the domain example.com.
Resource Records:
Every domain whether it is a single host or a top level domain can have a set of
resource records associated with it. Whenever a resolver (this will be explained
later) gives the domain name to DNS it gets the resource record associated with it.
So DNS can be looked upon as a service which maps domain names to resource
records. Each resource record has five fields and looks as below:

Domain Name | Time to Live | Class | Type | Value
Domain name: the domain to which this record applies.

Class: set to IN for internet information. For other information other codes
may be specified.
Type: tells what kind of record it is.
Time to live: an indication of how long the record may be cached before it should be discarded.
Value: can be an IP address, a string or a number depending on the record type.

A Resource Record (RR) has the following:

owner which is the domain name where the RR is found.
type which is an encoded 16 bit value that specifies the
type of the resource in this resource record. It can be one of
the following:
o A a host address
o CNAME identifies the canonical name of an alias
o HINFO identifies the CPU and OS used by a host
o MX identifies a mail exchange for the domain.
o NS the authoritative name server for the domain
o PTR a pointer to another part of the domain name
o SOA identifies the start of a zone of authority
class which is an encoded 16 bit value which identifies a
protocol family or instance of a protocol. It is one of:
IN (the Internet system) or CH (the Chaos system).
TTL which is the time to live of the RR. This field is a 32 bit
integer in units of seconds, and is primarily used by resolvers
when they cache RRs. The TTL describes how long an RR can
be cached before it should be discarded.
RDATA Data in this field depends on the values of the type
and class of the RR and a description for each is as follows:
o for A: For the IN class, a 32 bit IP address. For the CH
class, a domain name followed by a 16 bit octal Chaos address.
o for CNAME: a domain name.

o for MX: a 16 bit preference value (lower is better)

followed by a host name willing to act as a mail
exchange for the owner domain.
o for NS: a host name.
o for PTR: a domain name.
o for SOA: several fields.
Note: While short TTLs can be used to minimize caching, and a
zero TTL prohibits caching, the realities of Internet performance
suggest that these times should be on the order of days for the
typical host. If a change can be anticipated, the TTL can be
reduced prior to the change to minimize inconsistency during the
change, and then increased back to its former value following the
change. The data in the RDATA section of RRs is carried as a
combination of binary strings and domain names. The domain
names are frequently used as "pointers" to other data in the DNS.
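The TTL-driven caching described above can be sketched as a small data structure. This is an illustrative sketch, not a real resolver: the `ResourceRecord` and `ResolverCache` classes and all their field names are assumptions made for the example.

```python
import time

class ResourceRecord:
    """Minimal RR: owner, type, class, TTL (seconds), and RDATA."""
    def __init__(self, owner, rtype, rclass, ttl, rdata):
        self.owner = owner
        self.rtype = rtype
        self.rclass = rclass
        self.ttl = ttl
        self.rdata = rdata

class ResolverCache:
    """Caches RRs and discards them once their TTL has elapsed."""
    def __init__(self):
        self._store = {}   # (owner, rtype) -> (RR, expiry timestamp)

    def put(self, rr, now=None):
        now = time.time() if now is None else now
        if rr.ttl > 0:     # a zero TTL prohibits caching
            self._store[(rr.owner, rr.rtype)] = (rr, now + rr.ttl)

    def get(self, owner, rtype, now=None):
        now = time.time() if now is None else now
        entry = self._store.get((owner, rtype))
        if entry is None:
            return None
        rr, expiry = entry
        if now >= expiry:  # TTL expired: discard the record
            del self._store[(owner, rtype)]
            return None
        return rr
```

Note how a zero TTL never enters the cache at all, matching the rule above that a zero TTL prohibits caching.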
Aliases and Canonical Names

Some servers typically have multiple names for convenience; for
example, two different domain names may identify the same
server. In addition, multiple mailboxes might be provided by some
organizations. Most of these systems have a notion that one of
the equivalent set of names is the canonical or primary name and
all others are aliases.
When a name server fails to find a desired RR in the resource set associated with
the domain name, it checks to see if the resource set consists of a CNAME record
with a matching class. If so, the name server includes the CNAME record in the
response and restarts the query at the domain name specified in the data field of the
CNAME record.
Name Servers

Name servers are the repositories of information that make up the
domain database. The database is divided up into sections called
zones, which are distributed among the name servers. Name
servers can answer queries in a simple manner; the response can
always be generated using only local data, and either contains
the answer to the question or a referral to other name servers
"closer" to the desired information. The way that the name server

answers the query depends upon whether it is operating in

recursive mode or iterative mode:
The simplest mode for the server is non-recursive, since it
can answer queries using only local information: the
response contains an error, the answer, or a referral to some
other server "closer" to the answer. All name servers must
implement non-recursive queries.
The simplest mode for the client is recursive, since in this
mode the name server acts in the role of a resolver and
returns either an error or the answer, but never referrals.
This service is optional in a name server, and the name
server may also choose to restrict the clients which can use
recursive mode.
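The non-recursive (iterative) behaviour above can be sketched as a toy simulation. Every server name, domain name, and address in the `SERVERS` table below is made up for the example; a real resolver would send queries over the network.

```python
# Toy model of non-recursive (iterative) resolution. Each "server"
# answers only from local data: either the answer itself or a referral
# to a server "closer" to the desired information.
SERVERS = {
    "root": {"www.example.com": ("referral", "com-server")},
    "com-server": {"www.example.com": ("referral", "example-server")},
    "example-server": {"www.example.com": ("answer", "192.0.2.10")},
}

def resolve_iteratively(name, start="root", max_hops=10):
    """The client itself chases referrals until it reaches an answer."""
    server = start
    for _ in range(max_hops):
        kind, value = SERVERS[server].get(name, ("error", None))
        if kind == "answer":
            return value
        if kind == "referral":
            server = value    # follow the referral to a closer server
        else:
            return None       # error response: name not found
    return None               # gave up after too many hops
```

In recursive mode, the first server contacted would perform this same chase on the client's behalf and return only the final answer or an error, never a referral.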
EMAIL (electronic mail - SMTP, MIME, ESMTP)

Email is the most widely used application service. It differs from other uses of the
network: most network protocols send packets directly to destinations, using timeout
and retransmission for individual segments if no acknowledgement returns. In the case
of email, however, the system must provide for instances when the remote machine or
the network connection has failed, and take some special action. Email applications
involve two aspects:
User agent (pine, elm, etc.)
Transfer agent (sendmail daemon, etc.)
When an email is sent, it is the mail transfer agent (MTA) of the source that
contacts the MTA of the destination. The protocol used by the MTAs on the
source and destination side is called SMTP, which stands for Simple Mail
Transfer Protocol. There are some protocols that come between the user agent
and the MTA, e.g. POP and IMAP, which are discussed later.
Mail Gateways
Mail gateways are also called mail relays or mail bridges. In such systems the
sender's machine does not contact the receiver's machine directly but sends
mail across one or more intermediate machines that forward it on.
These intermediate machines are called mail gateways. Mail gateways
introduce unreliability: once the sender sends to the first intermediate machine, it
discards its local copy, so a failure at an intermediate machine may result in
message loss without informing either the sender or the receiver. Mail gateways also
introduce delays: neither the sender nor the receiver can determine how long
delivery will take.
However, mail gateways have the advantage of providing interoperability, i.e. they
provide connections among standard TCP/IP mail systems and other mail
systems, as well as between TCP/IP internets and networks that do not support
Internet protocols. So when there is a change in protocol, the mail gateway
helps in translating the mail message from one protocol to another, since it is
designed to understand both.
The TCP/IP protocol suite specifies a standard for the exchange of mail between
machines. It was derived from the Mail Transfer Protocol (MTP). It deals with how
the underlying mail delivery system passes messages across a link from one
machine to another. The mail is enclosed in what is called an envelope. The
envelope contains the To and From fields, and these are followed by the mail. The
mail consists of two parts, namely the Header and the Data.
The Header also has To and From fields. If headers are defined by us, they should
begin with the prefix X-.
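The envelope/header/data structure can be sketched as follows. The `make_mail` function and its field layout are illustrative assumptions, not the actual SMTP wire format.

```python
def make_mail(env_from, env_to, headers, body):
    """Build a toy mail item: an envelope (To/From fields) enclosing
    the mail, which itself consists of a Header part and a Data part
    separated by a blank line. Illustration only, not the real format."""
    header_lines = [f"{name}: {value}" for name, value in headers.items()]
    mail = "\n".join(header_lines) + "\n\n" + body
    return {"envelope": {"From": env_from, "To": env_to}, "mail": mail}
```

The envelope fields are what the transfer agents use for delivery; the header To/From inside the mail are merely what the recipient's user agent displays.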
In SMTP the data portion can contain only printable ASCII characters. The old method
of sending a binary file was to send it in uuencoded form, but there was no way to
distinguish between the many types of binary files possible, e.g. .tar, .gz, .dvi.
SMTP has several limitations:
1. There is no convenient way to send nonprintable characters.
2. There is no way for the sender to know whether the mail has been received, or read.
3. Someone else can send a mail on my behalf.
So a better protocol was proposed: ESMTP, which stands for Extended Simple
Mail Transfer Protocol. It is compatible with SMTP. Just as the first command sent in
SMTP is HELO, in ESMTP the first command is EHLO. If the
receiver supports ESMTP, it answers this EHLO command by listing
what data types and what kinds of encoding it supports. Even an SMTP-based receiver
can reply to it. If there is an error message, or there is no answer, the
sender falls back to SMTP.
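The greeting exchange and fallback can be sketched as follows. The server here is simulated by a local function (an assumption made for the example); a real client would read these reply codes over a TCP connection.

```python
def greet(server_supports_esmtp):
    """Try the extended greeting first; fall back to plain SMTP on an
    error reply (the same fallback applies when no answer arrives)."""
    def server_reply(command):
        # Simulated server: a real one sends these over the connection.
        if command == "EHLO" and server_supports_esmtp:
            return "250 OK, supported extensions follow"
        if command == "HELO":
            return "250 OK"
        return "500 command unrecognized"

    if server_reply("EHLO").startswith("250"):
        return "ESMTP"    # receiver advertised its extensions
    if server_reply("HELO").startswith("250"):
        return "SMTP"     # plain SMTP fallback
    return "failed"
```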

Network Security
The various issues in Network security are as follows:
1. Authentication: We have to check that the person who has requested
something or has sent an e-mail is indeed allowed to do so. In this process
we will also look at how the person authenticates his identity to a remote
machine.
2. Integrity: We have to check that the message which we have received is
indeed the message which was sent. Here a CRC will not be enough because
somebody may deliberately change the data. Nobody along the route should
be able to change the data.
3. Confidentiality: Nobody should be able to read the data on the way so we
need Encryption
4. Non-repudiation: Once we send a message, there should be no way that we
can deny sending it; we have to accept that we sent it.
5. Authorization: This refers to the kind of service which is allowed for a
particular client. Even though a user is authenticated we may decide not to
authorize him to use a particular service.
For authentication, if two persons know a secret then we just
need to prove that no third person could have generated the
message. But for Non-repudiation we need to prove that even
the sender could not have generated the message. So
authentication is easier than Non-repudiation. To ensure all
this, we take the help of cryptography.

1) Message Confidentiality:
Message confidentiality or privacy means that the sender and the receiver expect
confidentiality. The transmitted message must make sense to only the intended receiver. To
all others, the message must be garbage. When a customer communicates with her bank, she
expects that the communication is totally confidential.
Confidentiality with Symmetric-Key Cryptography: To provide confidentiality with
symmetric-key cryptography, a sender and a receiver need to share a secret key. A person
residing in the United States cannot meet and exchange a secret key with a person living in
China. To be able to use symmetric-key cryptography, we need to find a solution to the key
sharing. This can be done using a session key. A session key is one that is used only for the
duration of one session. The session key itself is exchanged using asymmetric key
cryptography. Note that the nature of the symmetric key allows the communication to be
carried on in both directions although it is not recommended today. Using two different keys
is more secure, because if one key is compromised, the communication is still confidential in
the other direction. For a long message, symmetric-key cryptography is much more efficient
than asymmetric-key cryptography.
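The symmetric case can be illustrated with a toy stream cipher: both sides derive the same keystream from the shared session key, and the very same operation both encrypts and decrypts. This is a sketch for illustration only, not a secure cipher.

```python
import hashlib

def keystream(key, length):
    """Derive a deterministic keystream from the shared key (toy only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key, data):
    """Symmetric: applying the same function twice recovers the data."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

Because encryption and decryption use the same shared key, anyone who learns the session key can read traffic in both directions, which is why the text recommends different keys per direction.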
Confidentiality with Asymmetric-Key Cryptography: The problem we mentioned about
key exchange in symmetric-key cryptography for privacy culminated in the creation of
asymmetric-key cryptography. Here, there is no key sharing; there is a public announcement.
Bob creates two keys: one private and one public. He keeps the private key for decryption; he
publicly announces the public key to the world. The public key is used only for encryption;
the private key is used only for decryption. The public key locks the message; the private key
unlocks it. For a two-way communication between Alice and Bob, two pairs of keys are
needed. When Alice sends a message to Bob, she uses Bob's pair; when Bob sends a message
to Alice, he uses Alice's pair as shown in Figure.
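Bob's key pair and the lock/unlock operations can be illustrated with textbook RSA using tiny primes. The numbers below are the standard small-prime teaching example; real keys are thousands of bits long, so this is insecure and for illustration only.

```python
# Textbook RSA with tiny primes (insecure; illustration only).
p, q = 61, 53
n = p * q                    # 3233: the modulus, part of both keys
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent: Bob announces (e, n)
d = 2753                     # private exponent: Bob keeps d secret
assert (e * d) % phi == 1    # e and d are inverses modulo phi(n)

def encrypt(m, public_key):
    """Anyone can lock a message m (< n) with Bob's public key."""
    e, n = public_key
    return pow(m, e, n)

def decrypt(c, private_key):
    """Only Bob, holding d, can unlock the ciphertext."""
    d, n = private_key
    return pow(c, d, n)
```

For two-way traffic, Alice would publish her own (e, n) pair in exactly the same way, giving the two key pairs the figure refers to.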

2) MESSAGE INTEGRITY: Encryption and decryption provide secrecy, or
confidentiality, but not integrity.
Document and Fingerprint: One way to preserve the integrity of a document is through the
use of a fingerprint. If Alice needs to be sure that the contents of her document will not be
illegally changed, she can put her fingerprint at the bottom of the document. Eve cannot modify
the contents of this document or create a false document because she cannot forge Alice's
fingerprint. To ensure that the document has not been changed, Alice's fingerprint on the
document can be compared to Alice's fingerprint on file. If they are not the same, the document is
not from Alice. To preserve the integrity of a document, both the document and the
fingerprint are needed.
Message and Message Digest: The electronic equivalent of the document and fingerprint pair is
the message and message digest pair. To preserve the integrity of a message, the message is
passed through an algorithm called a hash function. The hash function creates a compressed
image of the message that can be used as a fingerprint. Figure shows the message, hash function,
and the message digest. The message digest needs to be kept secret.
Creating and Checking the Digest: The message digest is created at the sender site and is
sent with the message to the receiver. To check the integrity of a message, or document, the
receiver creates the hash function again and compares the new message digest with the one
received. If both are the same, the receiver is sure that the original message has not been
changed. Of course, we are assuming that the digest has been sent secretly. Figure shows the
idea. To be eligible for a hash, a function needs to meet three criteria: one-wayness,
resistance to weak collision, and resistance to strong collision.
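Creating and checking the digest can be shown directly with Python's standard hashlib. SHA-256 is chosen here as the hash function (an assumption; the text does not fix a particular one).

```python
import hashlib

def make_digest(message: bytes) -> str:
    """The hash function produces a compressed image: the fingerprint."""
    return hashlib.sha256(message).hexdigest()

def check_integrity(message: bytes, received_digest: str) -> bool:
    """Receiver recomputes the digest and compares with the one received."""
    return make_digest(message) == received_digest
```

One-wayness means the message cannot be recovered from the digest; collision resistance means an attacker cannot find a second message with the same fingerprint.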
3) MESSAGE AUTHENTICATION: A hash function guarantees the integrity of a message.
It guarantees that the message has not been changed. A hash function, however, does not
authenticate the sender of the message. When Alice sends a message to Bob, Bob needs to
know if the message is coming from Alice or Eve. To provide message authentication, Alice
needs to provide proof that it is Alice sending the message and not an imposter. A hash
function per se cannot provide such a proof. The digest created by a hash function is normally
called a modification detection code (MDC). The code can detect any modification in the
message.
MAC: To provide message authentication, we need to change a modification detection code
to a message authentication code (MAC). An MDC uses a keyless hash function; a MAC
uses a keyed hash function. A keyed hash function includes the symmetric key between the
sender and receiver when creating the digest. There are several implementations of MAC in
use today. However, in recent years, some MACs have been designed that are based on
keyless hash functions such as SHA-1. This idea is a hashed MAC, called HMAC, which can use
any standard keyless hash function such as SHA-1. HMAC creates a nested MAC by applying
a keyless hash function to the concatenation of the message and a symmetric key.
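HMAC is available directly in Python's standard library. SHA-256 is used here in place of SHA-1 (an inessential substitution for the example; SHA-1 is no longer recommended).

```python
import hashlib
import hmac

def make_mac(key: bytes, message: bytes) -> str:
    """Keyed digest: the shared symmetric key goes into the hash."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_mac(key: bytes, message: bytes, received_mac: str) -> bool:
    """Only someone holding the same key can produce a matching MAC."""
    return hmac.compare_digest(make_mac(key, message), received_mac)
```

Unlike a plain digest, the MAC authenticates the sender as well: Eve cannot forge a valid tag without the shared key.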
4) DIGITAL SIGNATURE: Although a MAC can provide message integrity and
message authentication, it has a drawback. It needs a symmetric key that must be established
between the sender and the receiver. A digital signature, on the other hand, can use a pair of
asymmetric keys (a public one and a private one). When Alice sends a message to Bob, Bob
needs to check the authenticity of the sender; he needs to be sure that the message comes
from Alice and not Eve. Bob can ask Alice to sign the message electronically. In other words,
an electronic signature can prove the authenticity of Alice as the sender of the message. We
refer to this type of signature as a digital signature.
Comparison: let us discuss the differences between two types of signatures: conventional
and digital.

Inclusion: A conventional signature is included in the document; it is part of the document.
When we write a check, the signature is on the check; it is not a separate document. On the
other hand, when we sign a document digitally, we send the signature as a separate document.
The sender sends two documents: the message and the signature. The recipient receives both
documents and verifies that the signature belongs to the supposed sender. If this is proved, the
message is kept; otherwise, it is rejected.
Verification Method: The second difference between the two types of documents is the
method of verifying the signature. In conventional signature, when the recipient receives a
document, she compares the signature on the document with the signature on file. If they are
the same, the document is authentic. The recipient needs to have a copy of this signature on
file for comparison. In digital signature, the recipient receives the message and the signature.
A copy of the signature is not stored anywhere. The recipient needs to apply a verification
technique to the combination of the message and the signature to verify the authenticity.
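Signing and verification can be sketched with a textbook RSA key pair built from the tiny primes 61 and 53 (insecure; toy only): the sender applies her private exponent to the message hash, and anyone can check with her public exponent, so no shared secret or signature file is needed.

```python
import hashlib

# Textbook RSA key pair with tiny primes 61 and 53 (insecure; toy only).
n, e, d = 3233, 17, 2753     # modulus, public exponent, private exponent

def _hash_mod_n(message: bytes) -> int:
    """Reduce the message hash into the range the toy modulus handles."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    """Alice signs with her PRIVATE key."""
    return pow(_hash_mod_n(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone verifies with Alice's PUBLIC key: no copy on file needed."""
    return pow(signature, e, n) == _hash_mod_n(message)
```

This also illustrates the verification-method difference above: the verifier stores nothing, applying a computation to the message-plus-signature pair instead of comparing against a signature on file.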
5) ENTITY AUTHENTICATION: Entity authentication is a technique designed to let
one party prove the identity of another party. An entity can be a person, a process, a client, or
a server. The entity whose identity needs to be proved is called the claimant; the party that
tries to prove the identity of the claimant is called the verifier. When Bob tries to prove the
identity of Alice, Alice is the claimant, and Bob is the verifier. In entity authentication, the
claimant must identify herself to the verifier. This can be done with one of three kinds of
witnesses: something known, something possessed, or something inherent.
Something known- This is a secret known only by the claimant that can be checked by the
verifier. Examples are a password, a PIN number, a secret key, and a private key.
Something possessed-This is something that can prove the claimant's identity. Examples are
a passport, a driver's license, an identification card, a credit card, and a smart card.
Something inherent-This is an inherent characteristic of the claimant. Examples are
conventional signature, fingerprints, voice, facial characteristics, retinal pattern, and
handwriting.
Passwords: The simplest and the oldest method of entity authentication is the password,
something that the claimant knows. A password is used when a user needs to access a
system to use the system's resources (log-in). Each user has a user identification that is public
and a password that is private. We can divide this authentication scheme into two separate
groups: the fixed password and the one-time password.
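For the fixed-password group, a system should store a salted hash of each password rather than the password itself; below is a sketch using only the standard library (the iteration count and salt size are arbitrary choices for the example).

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000    # arbitrary work factor chosen for the example

def store_password(password, salt=None):
    """Return (salt, digest); the system keeps these, never the password."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(candidate, salt, stored_digest):
    """Log-in check: rehash the candidate with the same salt and compare."""
    digest = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, ITERATIONS)
    return hmac.compare_digest(digest, stored_digest)
```

A one-time password scheme avoids even this stored-secret risk, since each password is valid for a single log-in.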

Asynchronous Transfer Mode (ATM)

Asynchronous transfer mode (ATM) is a switching technique used by telecommunication
networks that uses asynchronous time-division multiplexing to encode data into small, fixed-size
cells. This differs from Ethernet and the Internet, which use variable-sized packets or
frames. ATM is the core protocol used over the synchronous optical network (SONET) backbone
of the Integrated Services Digital Network (ISDN).
Asynchronous transfer mode was designed with cells in mind. This is because voice data is
converted to packets and is forced to share a network with burst data (large packet data) passing
through the same medium. So, no matter how small the voice packets are, they always
encounter full-sized data packets, and could experience maximum queuing delays. This is why
all data packets should be of the same size. The fixed cell structure of ATM means it can be
easily switched by hardware without the delays introduced by routed frames and software
switching. This is why some people believe that ATM is the key to the Internet bandwidth
problem. ATM creates fixed routes between two points before data transfer begins, which differs
from TCP/IP, where data is divided into packets, each of which takes a different route to get to its

destination. This makes it easier to bill data usage. However, an ATM network is less adaptable
to a sudden network traffic surge.
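An ATM cell is 53 bytes: a 5-byte header plus 48 bytes of payload. Segmenting a message into such fixed cells can be sketched as follows; the header here is reduced to a dict with a made-up circuit id, whereas a real cell header carries VPI/VCI and other fields.

```python
CELL_PAYLOAD = 48    # payload bytes per 53-byte cell (5-byte header)

def segment(data, circuit_id):
    """Split a message into fixed-size cells, padding the last one, so
    every cell on the wire has exactly the same length."""
    cells = []
    for i in range(0, len(data), CELL_PAYLOAD):
        chunk = data[i:i + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
        header = {"circuit": circuit_id,
                  "last": i + CELL_PAYLOAD >= len(data)}
        cells.append((header, chunk))
    return cells
```

Because every cell is the same size, a switch can process headers in hardware at a fixed rate, which is the source of the low queuing delay described above.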
The ATM provides data link layer services that run on the OSI's Layer 1 physical links. It
functions much like small-packet switched and circuit-switched networks, which makes it ideal for
real-time, low-latency data such as VoIP and video, as well as for high-throughput data traffic like
file transfers. A virtual circuit or connection must be established before the two end points can
actually exchange data.
ATM services generally have four different bit rate choices:

Available Bit Rate: Provides a guaranteed minimum capacity but data can be bursted to
higher capacities when network traffic is minimal.

Constant Bit Rate: Specifies a fixed bit rate so that data is sent in a steady stream. This
is analogous to a leased line.

Unspecified Bit Rate: Doesn't guarantee any throughput level and is used for applications
such as file transfers that can tolerate delays.

Variable Bit Rate (VBR): Provides a specified throughput, but data is not sent evenly. This
makes it a popular choice for voice and videoconferencing.

ATM Architecture:
ATM is a cell-switched network. The user access devices, called the endpoints, are connected
through a user-to-network interface (UNI) to the switches inside the network.
The switches are connected through network-to-network interfaces (NNIs).