
CONTENTS:

1) Introduction to WAN based troubleshooting project
2) Screenshot of Project
3) Apparatus Used
4) About The Project
5) Literature reviews
6) EIGRP protocol
7) Switching
8) Switching Protocols
9) Wide Area network
ABSTRACT

Computer networking is a vast field in the present era of electronics and communication. Nowadays, computers are used on a wide scale: organizations use multiple computers within their departments to perform their day-to-day work. A computer network allows users to share data, folders and files securely with other users connected to the network. Computer networking has bound the world into a very small area with its wide networking processes like LAN, MAN and WAN.
With this setup it is possible for the network administrator to check incoming and outgoing traffic and to enforce security policies as well. Multiple routing protocols are used in different areas of the hospital network, so the movement of a packet from one part of the hospital to another can be observed. The project initiates from the billing department of the hospital. The network is established using the RIP protocol, and the different departments are distinguished by different VLANs. Inter-VLAN routing has been implemented in the network along with frame tagging so that the different VLANs can communicate with each other; hence every department can communicate with every other. Wireless endpoint technology has also been implemented so that the administration and important staff can use the network resources at times of urgency. Important security measures have also been implemented so that restricted hospital records are not accessible to the hospital's clients.
CHAPTER 1
Introduction to WAN based troubleshooting project

A wide area network (WAN) is a network that covers a broad area (i.e., any
telecommunications network that links across metropolitan, regional, or national boundaries)
using private or public network transports. Business and government entities utilize WANs to
relay data among employees, clients, buyers, and suppliers from various geographical
locations. In essence, this mode of telecommunication allows a business to effectively carry
out its daily function regardless of location. The Internet can be considered a WAN as well,
and is used by businesses, governments, organizations, and individuals for almost any
purpose imaginable.
In this project we explain the overall structure of a wide area network and how it actually works across different cities or countries, based on the policies and strategies applied by organizations or Internet Service Providers to achieve the best possible services at the lowest possible investment.
SCREENSHOT OF PROJECT
APPARATUS USED

1. Routers
A router is a small physical device that joins multiple networks together.
Technically, a router is a Layer 3 gateway device, meaning that it connects two or
more networks and operates at the network layer of the OSI model.

2. Switches
A switch is a computer networking device that links network segments or network
devices. The term commonly refers to a multi-port network bridge that processes
and routes data at the data link layer (layer 2) of the OSI model. Switches that
additionally process data at the network layer (layer 3) and above are often
called layer-3 switches or multilayer switches.

3. Cables:
Cable is most often two or more wires running side by side and bonded,
twisted, or braided together to form a single assembly.

A) Straight cable: used to connect PC to switch, PC to hub, hub to router, and switch to router.

B) Cross cable: used to connect PC to PC, hub to hub, switch to switch, PC to router, and hub to switch.

C) Serial cable: used to connect router to router.

Colour coding of straight and cross cables is shown below:

Fig: Cross and straight cable colour coding

ABOUT THE PROJECT


The project describes the working of a Wide area network. The topology contains features
like:
1. Routing protocols RIP and OSPF, and their redistribution.
2. DHCP pools for automatically assigning IP addresses.
3. DNS server for IP-to-name and name-to-IP resolution.
4. NAT for translating private to public addresses.
5. Access Control Lists (ACLs) for security.
6. Wireless routers and devices configuration.
7. Creation of different VLANs.
8. IP phones between different VLANs.
9. Inter-VLAN routing.
10. Telnet and password security for device management.

1. RIP: It stands for Routing Information Protocol. It is a distance-vector routing protocol with a maximum hop count of 15. Full routing updates are sent every 30 seconds, and it has an administrative distance (AD) of 120. It is used for very small networks.
2. OSPF: It stands for Open Shortest Path First. It is a link-state protocol with no hop-count limit, so it can be used for very large networks. Route selection is based on the cost of each link, and the best path is chosen by Dijkstra's algorithm.
3. REDISTRIBUTION: It is the mechanism that connects different routing domains, so that different routing protocols can exchange and advertise routing updates as if they were a single protocol. Redistribution is performed on the router that lies at the boundary between domains or runs multiple protocols.
4. DHCP: DHCP stands for Dynamic Host Configuration Protocol. It is used to configure devices connected to a network. It is a network protocol that enables a server to automatically assign an IP address to a computer from a range of addresses configured for a given network. It is used for both IPv4 and IPv6, and it allows a computer to join an IP-based network without a pre-configured IP address.
5. DNS Server: The DNS translates Internet domain and host names to IP addresses. DNS automatically converts the names we type in our web browser's address bar to the IP addresses of the web servers hosting those sites. DNS implements a distributed database to store this name and address information for all public hosts on the Internet, and it works best when those IP addresses change infrequently.
6. NAT: NAT allows an IP network to maintain public IP addresses separately from private
IP addresses. NAT is a popular technology for Internet connection sharing. It is also
sometimes used in server load balancing applications on corporate networks. In its most
common configuration, NAT maps all of the private IP addresses on a home network to
the single IP address supplied by an ISP. This allows computers on the home LAN to
share a single Internet connection.
7. Wireless Routers: A wireless router is a device that performs the functions of a router but also includes the functions of a wireless access point. It is commonly used to provide access to the Internet or a computer network. It does not require a wired link, as the connection is made wirelessly, via radio waves. It can function in a wired LAN, in a wireless-only LAN, or in a mixed wired/wireless network, depending on the manufacturer and model.
8. VLANs: A VLAN is a virtual local area network. A switch can be partitioned to create multiple distinct broadcast domains which are mutually isolated, so that packets can only pass between them via one or more routers; such a domain is referred to as a virtual local area network (VLAN). A VLAN is a group of end stations with a common set of requirements, independent of physical location. VLANs have the same attributes as a physical LAN but allow you to group end stations even if they are not located physically on the same LAN segment.
9. IP Phones: An IP phone, commonly known as a voice over IP (VoIP) phone, uses VoIP technologies for placing and transmitting telephone calls over an IP network, such as the Internet. These can be simple software-based softphones or purpose-built hardware devices that look very similar to an ordinary phone. They are commonly used in colleges, schools, institutes, companies etc.
10. Inter-VLAN Routing: If we want two different VLANs to communicate, we need a router (or another layer 3 device) to enable communication between them; this concept is known as inter-VLAN routing. It is the process of forwarding network traffic from one VLAN to another using a router or layer 3 device.
11. Access Control List: ACLs are basically a set of commands, grouped together by a number or name, that is used to filter traffic entering or leaving an interface. There are a variety of reasons for which we use ACLs. The primary reason is to provide a basic level of security for the network. ACLs do not provide protection as complex or in-depth as stateful firewalls, but they do provide protection on higher-speed interfaces where line-rate speed is important and firewalls may be restrictive. ACLs are also used to restrict routing updates from network peers and can be instrumental in defining flow control for network traffic.
12. Telnet: Telnet is a service which is used to access devices remotely. It is an application protocol which operates at the application layer of the OSI model. It provides a logical, configurable port for a router device and is basically used for troubleshooting.
13. Security: The enable secret password is used to protect the privileged mode of a router. It is stored in encrypted format and is prompted for before privileged mode is entered. The console password is used to protect the IOS and user mode of a router device.
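Item 2 above notes that OSPF chooses the best path with Dijkstra's algorithm over link costs. The following is a minimal sketch of that computation; the four-router topology and its costs are hypothetical, chosen only for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Compute lowest-cost paths from source over weighted links,
    the way OSPF runs Dijkstra over interface costs."""
    dist = {source: 0}   # best known total cost to each node
    prev = {}            # predecessor on the best path
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry, already improved
        for neighbour, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                prev[neighbour] = node
                heapq.heappush(heap, (new_cost, neighbour))
    return dist, prev

# Hypothetical four-router topology with OSPF-style link costs.
topology = {
    "R1": {"R2": 10, "R3": 1},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 1, "R4": 5},
    "R4": {"R2": 1, "R3": 5},
}
dist, prev = dijkstra(topology, "R1")
print(dist["R4"])  # lowest total cost from R1 to R4
```

Here R1 reaches R4 via R3 at a total cost of 6, even though a direct-looking path via R2 exists; cost, not hop count, decides.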
1.1 BRIEF ABOUT THE NETWORKING TECHNOLOGY
1.1.1 What is a Network?
 A network, often simply referred to as a computer network, is a collection of
computers and devices connected by communications channels that facilitates
communications among users and allows users to share resources with other users. A
computer network allows sharing of resources and information among devices
connected to the network.

 A computer network is a group of two or more computers connected to each other electronically. This means that the computers can "talk" to each other and that every computer in the network can send information to the others.

 In the world of computers, networking is the practice of linking two or more computing devices together for the purpose of sharing data. Networks are built with a mix of computer hardware and computer software.

Fig 1.1.1: A Computer Network


Thus networking is the practice of linking two or more computers or devices with each other.
The connectivity can be wired or wireless. In a nutshell computer networking is the
engineering discipline concerned with the communication between computer systems or
devices. Computer networking is sometimes considered a sub-discipline of
telecommunications, computer science, information technology and electronics engineering
since it relies heavily upon the theoretical and practical application of these scientific and
engineering disciplines.

1.1.2 Network Classification


A computer network is a system for communication among two or more computers.
Though there are numerous ways of classifying a network, the most popular categorization is
by range, functional relationship, network topology and specialized function.
By Range

 Local area network (LAN): A local area network is a network that connects
computers and devices in a limited geographical area such as home, school, computer
laboratory, office building, or closely positioned group of buildings. Each computer or
device on the network is a node. Current wired LANs are most likely to be based on
Ethernet technology, although new standards like ITU-T G.hn also provide a way to
create a wired LAN using existing home wires (coaxial cables, phone lines and power
lines).

Fig 1.1.2: A Typical Local Area Network

All interconnected devices must understand the network layer (layer 3), because they
are handling multiple subnets (the different colors). Those inside the library, which
have only 10/100 Mbit/s Ethernet connections to the user device and a Gigabit
Ethernet connection to the central router, could be called "layer 3 switches" because
they only have Ethernet interfaces and must understand IP. It would be more correct
to call them access routers, where the router at the top is a distribution router that
connects to the Internet and academic networks' customer access routers. The defining
characteristics of LANs, in contrast to WANs (Wide Area Networks), include their
higher data transfer rates, smaller geographic range, and no need for leased
telecommunication lines. Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates of up to 10 Gbit/s, and the IEEE has projects investigating the standardization of 40 and 100 Gbit/s.

 Metropolitan area network (MAN): A metropolitan area network is a large computer


network that usually spans a city or a large campus; its geographic scope falls between a WAN and a LAN. A MAN usually interconnects a number of local area networks (LANs) using a high-capacity backbone technology, such as fiber-optic links, and provides up-link services to wide area networks and the Internet. MANs provide Internet connectivity for LANs in a metropolitan region, and connect them to wider area networks like the Internet.

Fig 1.1.3: A Simple MAN

 Wide area network (WAN): The term Wide Area Network (WAN) usually refers to a
network which covers a large geographical area, and uses communications circuits to
connect the intermediate nodes. A major factor impacting WAN design and performance
is a requirement that they lease communications circuits from telephone companies or
other communications carriers. Transmission rates are typically 2 Mbps, 34 Mbps, 45
Mbps, 155 Mbps, 625 Mbps (or sometimes considerably more). Numerous WANs have
been constructed, including public packet networks, large corporate networks, military
networks, banking networks, stock brokerage networks, and airline reservation networks.
Some WANs are very extensive, spanning the globe, but most do not provide true global
coverage. Organisations supporting WANs using the Internet Protocol are known as
Network Service Providers (NSPs).

 Personal area network (PAN): A personal area network is a computer network used for
communication among computer devices, including telephones and personal digital
assistants, in proximity to an individual's body. The devices may or may not belong to the
person in question. The reach of a PAN is typically a few meters. PANs can be used for
communication among the personal devices themselves (intrapersonal communication),
or for connecting to a higher level network and the Internet (an uplink). Personal area
networks may be wired with computer buses such as USB and FireWire.
Fig 1.1.4: Personal Area Network

 Virtual Private Network (VPN): A virtual private network (VPN) is a computer


network in which some of the links between nodes are carried by open connections or
virtual circuits in some larger network (e.g., the Internet) instead of by physical wires.
The data link layer protocols of the virtual network are said to be tunnelled through the
larger network when this is the case. One common application is secure communications through the public Internet, but a VPN need not have explicit security features such as authentication or content encryption.

Fig 1.1.5: VPN used to interconnect 3 office and Remote users

1.1.3 BY FUNCTIONAL RELATIONSHIP

Client-server: The client-server model of computing is a distributed application structure that partitions tasks or workloads between service providers, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on
separate hardware, but both client and server may reside in the same system. A server
machine is a host that is running one or more server programs which share its resources with
clients. A client does not share any of its resources, but requests a server's content or service
function. Clients therefore initiate communication sessions with servers which await
incoming requests.
 Peer-to-peer: A peer-to-peer, commonly abbreviated to P2P, is any distributed network
architecture composed of participants that make a portion of their resources (such as
processing power, disk storage or network bandwidth) directly available to other network
participants, without the need for central coordination instances (such as servers or stable
hosts). Peers are both suppliers and consumers of resources, in contrast to the traditional
client–server model where only servers supply, and clients consume. Peer-to-peer was
popularized by file sharing systems like Napster. Peer-to-peer file sharing networks have
inspired new structures and philosophies in other areas of human interaction. In such
social contexts, peer-to-peer as a meme refers to the egalitarian social networking that is
currently emerging throughout society, enabled by Internet technologies in general. P2P
networks are typically used for connecting nodes via largely ad hoc connections. Sharing
content files containing audio, video, data or anything in digital format is very common,
and real time data, such as telephony traffic, is also passed using P2P technology.

 Multitier architecture: Multi-tier architecture (often referred to as n-tier architecture) is an architecture in which the presentation, the application processing, and the data management are logically separate processes. For example, an application that uses middleware to service data requests between a user and a database employs multi-tier architecture. The most widespread use of "multi-tier architecture" refers to three-tier architecture. N-tier application architecture provides a model for developers to create a flexible and reusable application. By breaking up an application into tiers, developers only have to modify or add a specific layer, rather than rewrite the entire application. There should be a presentation tier, a business or data access tier, and a data tier.
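The client-server relationship described above, a server awaiting requests and a client initiating the session, can be sketched with Python's standard socket module. The loopback address and the message are illustrative values for a self-contained demonstration:

```python
import socket
import threading

def serve_once(server_sock):
    """Server role: await and accept one client, then answer its request."""
    conn, _addr = server_sock.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"reply to " + request)

# The server listens and shares its service with clients.
server = socket.socket()
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
t = threading.Thread(target=serve_once, args=(server,))
t.start()

# The client initiates the communication session, as described above.
client = socket.socket()
client.connect(("127.0.0.1", server.getsockname()[1]))
client.sendall(b"hello")
reply = client.recv(1024).decode()
client.close()
t.join()
server.close()
print(reply)
```

Note the asymmetry: only the client connects and requests; the server never initiates, it only responds.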

1.1.4 BY NETWORK TOPOLOGY

Bus network: A bus network topology is a network architecture in which a set of clients are
connected via a shared communications line, called a bus. There are several common
instances of the bus architecture, including one in the motherboard of most computers, and
those in some versions of Ethernet networks. Bus networks are the simplest way to connect
multiple clients, but may have problems when two clients want to transmit at the same time
on the same bus. Thus systems which use bus network architectures normally have some
scheme of collision handling or collision avoidance for communication on the bus, quite
often using Carrier Sense Multiple Access or the presence of a bus master which controls
access to the shared bus resource. A true bus network is passive – the computers on the bus
simply listen for a signal; they are not responsible for moving the signal along. However,
many active architectures can also be described as a "bus", as they provide the same logical
functions as a passive bus; for example, switched Ethernet can still be regarded as a logical bus network, if not a physical one. Indeed, the hardware may be abstracted away completely in the case of a software bus.
Fig 1.2.1:Bus Topology

 Star network: Star networks are one of the most common computer network topologies.
In its simplest form, a star network consists of one central switch, hub or computer, which
acts as a conduit to transmit messages. Thus, the hub and leaf nodes, and the transmission
lines between them, form a graph with the topology of a star. If the central node is
passive, the originating node must be able to tolerate the reception of an echo of its own
transmission, delayed by the two-way transmission time (i.e. to and from the central
node) plus any delay generated in the central node. An active star network has an active
central node that usually has the means to prevent echo-related problems. The star
topology reduces the chance of network failure by connecting all of the systems to a
central node. When applied to a bus-based network, this central hub rebroadcasts all
transmissions received from any peripheral node to all peripheral nodes on the network,
sometimes including the originating node. All peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the central node only. This configuration is common with twisted pair cable; however, it can also be used with coaxial cable or optical fibre cable.

Fig 1.2.2: Star Topology

 Ring network: A ring network is a network topology in which each node connects to
exactly two other nodes, forming a single continuous pathway for signals through
each node - a ring. Data travels from node to node, with each node along the way
handling every packet. Because a ring topology provides only one pathway between
any two nodes, ring networks may be disrupted by the failure of a single link. A node
failure or cable break might isolate every node attached to the ring. FDDI networks
overcome this vulnerability by sending data on a clockwise and a counter clockwise
ring: in the event of a break data is wrapped back onto the complementary ring before
it reaches the end of the cable, maintaining a path to every node along the resulting
"C-Ring". 802.5 networks -- also known as IBM Token Ring networks -- avoid the
weakness of a ring topology altogether: they actually use a star topology at the
physical layer and a Multistation Access Unit (MAU) to imitate a ring at the data link
layer. Many ring networks add a "counter-rotating ring" to form a redundant topology.

Fig 1.2.3: Ring Topology

 Grid network: A grid network is a kind of computer network consisting of a number of (computer) systems connected in a grid topology. In a regular grid topology, each node in the network is connected with two neighbours along one or more dimensions.
If the network is one-dimensional, and the chain of nodes is connected to form a
circular loop, the resulting topology is known as a ring. In general, when an n-
dimensional grid network is connected circularly in more than one dimension, the
resulting network topology is a torus, and the network is called toroidal.
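The ring behaviour described above, including FDDI-style recovery over a counter-rotating ring, can be sketched in Python. The node count and the broken link below are made-up values for illustration:

```python
def ring_path(n, src, dst, broken, direction=1):
    """Walk a ring of n nodes from src to dst in one direction,
    returning the node path, or None if a broken link blocks it."""
    path = [src]
    node = src
    while node != dst:
        nxt = (node + direction) % n
        if frozenset((node, nxt)) in broken:
            return None          # single link failure disrupts this ring
        node = nxt
        path.append(node)
    return path

def dual_ring_path(n, src, dst, broken):
    """FDDI-style recovery: try the primary (clockwise) ring first,
    then fall back to the counter-rotating ring on failure."""
    return (ring_path(n, src, dst, broken, 1)
            or ring_path(n, src, dst, broken, -1))

broken = {frozenset((1, 2))}     # a single cable break between nodes 1 and 2
print(dual_ring_path(5, 0, 3, broken))  # delivered via the reverse ring
```

With no breaks the packet travels clockwise through every intermediate node; with the break, the complementary ring still provides a path, which is exactly the vulnerability and remedy the paragraph describes.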
CHAPTER 2

LITERATURE REVIEW

2.1 ELEMENTS OF A NETWORK

A network element is usually defined as a manageable logical entity uniting one or more physical devices. This allows distributed devices to be managed in a unified way using one management system. Elements of the network include the entities on which the network runs: routers, switches, hubs, bridges, network cards, repeaters, filters, modems and connecting cables. All of these network components are discussed in detail below:

Routers :- A router is a device that interconnects two or more computer networks, and
selectively interchanges packets of data between them. Each data packet contains address
information that a router can use to determine if the source and destination are on the same
network, or if the data packet must be transferred from one network to another. Where
multiple routers are used in a large collection of interconnected networks, the routers
exchange information about target system addresses, so that each router can build up a table
showing the preferred paths between any two systems on the interconnected networks. A
router is a networking device whose software and hardware are customized to the tasks of
routing and forwarding information. A router has two or more network interfaces, which may
be to different physical types of network or different network standards. Each network
interface is a small computer specialized to convert electric signals from one form to another.
Routers connect two or more logical subnets, which do not share a common network address.
The subnets in the router do not necessarily map one-to-one to the physical interfaces of the
router. The term "layer 3 switching" is often used together with the term "routing". The term switching is generally used to refer to data forwarding between two network devices that share a common network address; this is also called layer 2 switching or LAN switching.
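The first decision described above, whether the source and destination are on the same network, can be sketched with Python's standard ipaddress module. The addresses and the /24 prefix length are made-up examples:

```python
import ipaddress

def same_network(src, dst, prefix_len):
    """True if dst lies in the same subnet as src, meaning the packet
    can be delivered locally and does not need to cross a router."""
    subnet = ipaddress.ip_network(f"{src}/{prefix_len}", strict=False)
    return ipaddress.ip_address(dst) in subnet

print(same_network("192.168.1.10", "192.168.1.200", 24))  # True: same /24
print(same_network("192.168.1.10", "192.168.2.20", 24))   # False: must be routed
```

A host performs the same test before sending: a local destination is reached directly, anything else is handed to the default router.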

Switches:- A network switch or switching hub is a computer networking device that connects
network segments. Switches may operate at one or more OSI layers, including physical, data
link, network, or transport (i.e., end-to-end). A device that operates simultaneously at more
than one of these layers is known as a multilayer switch. In switches intended for commercial
use, built-in or modular interfaces make it possible to connect different types of networks,
including Ethernet, Fibre Channel, ATM, ITU-T G.hn and 802.11. This connectivity can be
at any of the layers mentioned. While Layer 2 functionality is adequate for speed-shifting
within one technology, interconnecting technologies such as Ethernet and token ring are
easier at Layer 3. Interconnection of different Layer 3 networks is done by routers. If there
are any features that characterize "Layer-3 switches" as opposed to general-purpose routers, it
tends to be that they are optimized, in larger switches, for high-density Ethernet connectivity.

Hubs:- A hub, or network hub, is a device for connecting multiple twisted pair or fiber optic Ethernet devices together and making them act as a single network segment. Hubs work at the physical layer (layer 1) of the OSI model. The device is a form of multiport repeater. Repeater hubs also participate in collision detection, forwarding a jam signal to all ports if they detect a collision. A network hub is a fairly unsophisticated broadcast device.
Hubs do not manage any of the traffic that comes through them, and any packet entering any
port is broadcast out on all other ports. Since every packet is being sent out through all other
ports, packet collisions result—which greatly impedes the smooth flow of traffic. The need
for hosts to be able to detect collisions limits the number of hubs and the total size of a
network built using hubs (a network built using switches does not have these limitations). For
10 Mbit/s networks, up to 5 segments (4 hubs) are allowed between any two end stations.

Bridges:- A Network Bridge connects multiple network segments at the data link layer
(Layer 2) of the OSI model. In Ethernet networks, the term Bridge formally means a device
that behaves according to the IEEE 802.1 D standards. A bridge and switch are very much
alike; a switch being a bridge with numerous ports. Switch or Layer 2 switch is often used
interchangeably with Bridge. Bridges are similar to repeaters or network hubs, devices that
connect network segments at the physical layer; however, with bridging, traffic from one
network is managed rather than simply rebroadcast to adjacent network segments. Bridges
are more complex than hubs or repeaters. Bridges can analyze incoming data packets to
determine if the bridge is able to send the given packet to another segment of the network.

Fig 2.1.1: A Network Bridge
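The filtering decision described above, forwarding a frame only when it must cross to another segment, can be sketched as a transparent learning bridge. The port numbers and MAC labels are illustrative:

```python
class LearningBridge:
    """Minimal sketch of transparent bridging: learn source MACs per
    port, forward known destinations out one port, flood unknowns."""

    def __init__(self, ports):
        self.ports = ports
        self.table = {}          # MAC address -> port it was last seen on

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports this frame is sent out of."""
        self.table[src_mac] = in_port        # learn where src lives
        if dst_mac in self.table:
            if self.table[dst_mac] == in_port:
                return []                    # same segment: filter, don't forward
            return [self.table[dst_mac]]     # known destination: one port only
        return [p for p in self.ports if p != in_port]  # unknown: flood

br = LearningBridge(ports=[1, 2, 3])
print(br.receive(1, "AA", "BB"))  # BB unknown: flooded everywhere but port 1
print(br.receive(2, "BB", "AA"))  # AA already learned on port 1
```

This is the contrast with a hub drawn earlier: a hub would send every frame out of every port, while the bridge manages traffic using what it has learned.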

Repeaters:- A network repeater is a device used to expand the boundaries of a wired or wireless (Wi-Fi) local area network (LAN). In the past, wired network repeaters were used to join segments of Ethernet cable. The repeaters would amplify the data signals before sending them on to the uplinked segment, thereby countering signal decay that occurs over extended lengths of wire. Modern Ethernet networks use more sophisticated switching devices, leaving the wireless flavour of the network repeater the more popular device for use with wireless networks.

Fig 2.1.2: Network Repeaters


 Modems:- A modem (modulator-demodulator) is a device that modulates an analog
carrier signal to encode digital information, and also demodulates such a carrier signal to
decode the transmitted information. The goal is to produce a signal that can be
transmitted easily and decoded to reproduce the original digital data. Modems can be used
over any means of transmitting analog signals, from light-emitting diodes to radio. The most
familiar example is a voice band modem that turns the digital data of a personal computer
into analog audio signals that can be transmitted over a telephone line, and once received
on the other side, a modem converts the analog data back into digital. Modems are
generally classified by the amount of data they can send in a given time, normally
measured in bits per second (bit/s, or bps). They can also be classified by Baud, the
number of times the modem changes its signal state per second. A simple type of a
modem is shown below in the figure:

Fig 2.1.3: Modem

 Network Cables:- Communication is the process of transferring signals from one point to another, and there must be some medium to transfer those signals. In computer networking, and especially in local area networking, there are certain communication mediums. This section provides a basic overview of network cables, LAN communication systems and other transmission mediums in LANs and WANs. Today many standardized communication cables and communication devices are in use according to the needs of a computer network. In LAN data communication systems, different types of cables are used. The most common types of LAN cables are the Ethernet UTP/STP cables. An Ethernet cable is a twisted pair cable that consists of eight wires paired together to make four pairs. An RJ-45 connector is attached to both ends of the cable; one end is connected to the LAN card of the computer and the other end is connected to the hub or switch. Cable testers are used to test the performance of each cable. The preferred cable in Ethernet networking is 100BaseT, which provides the best communication speed. UTP/STP is a standardized cable which provides a transmission speed of 10/100 Mbps, and it is the cable most commonly used in the star topology. UTP and STP cables are the same in functionality; the only slight difference is that in STP an extra protective silver-coated layer surrounds the cable. UTP/STP cables are further divided into straight-over and cross-over cables, and their most common uses are serial transmission, Ethernet, ISDN, and fixed and modular interfaces in WAN networking.
Fig 2.1.4: Types of Cables

2.2 NETWORKING MODELS


Network models define a set of network layers and how they interact. There are several
different network models depending on what organization or company started them. The
most important two are:

 The TCP/IP Model- This model is sometimes called the DoD model since it was designed for the Department of Defense. It is also called the Internet model because TCP/IP is the protocol used on the Internet.
 OSI Network Model - The International Standards Organization (ISO) has defined a
standard called the Open Systems Interconnection (OSI) reference model. This is a
seven layer architecture listed in the next section.

2.3 THE TCP/IP MODEL

The TCP/IP model is a description framework for computer network protocols created in the
1970s by DARPA, an agency of the United States Department of Defense. It evolved from
ARPANET, which was the world's first wide area network and a predecessor of the Internet.
The TCP/IP Model is sometimes called the Internet Model or the DoD Model. The TCP/IP
model, or Internet Protocol Suite, describes a set of general design guidelines and
implementations of specific networking protocols to enable computers to communicate over a
network. TCP/IP provides end-to-end connectivity specifying how data should be formatted,
addressed, transmitted, routed and received at the destination. Protocols exist for a variety of
different types of communication services between computers.
Fig 2.3: TCP/IP Model

Layers in the TCP/IP Model:


The layers near the top are logically closer to the user application, while those near the
bottom are logically closer to the physical transmission of the data. Viewing layers as
providing or consuming a service is a method of abstraction to isolate upper layer protocols
from the nitty-gritty detail of transmitting bits over, for example, Ethernet and collision
detection, while the lower layers avoid having to know the details of each and every
application and its protocol. The following is a description of each layer in the TCP/IP
networking model starting from the lowest level:

Data Link Layer: The Data Link Layer is the networking scope of the local network
connection to which a host is attached. This regime is called the link in Internet literature.
This is the lowest component layer of the Internet protocols, as TCP/IP is designed to be
hardware independent. As a result TCP/IP has been implemented on top of virtually any
hardware networking technology in existence. The Data Link Layer is used to move packets
between the Internet Layer interfaces of two different hosts on the same link. The processes
of transmitting and receiving packets on a given link can be controlled both in the software
device driver for the network card, as well as on firmware or specialized chipsets. These will
perform data link functions such as adding a packet header to prepare it for transmission, and
then actually transmit the frame over a physical medium.

Network Layer:- The Network Layer solves the problem of sending packets across one or
more networks. Internetworking requires sending data from the source network to the
destination network. This process is called routing. In the Internet Protocol Suite, the Internet
Protocol performs two basic functions: Host addressing and identification and Packet routing.
IP can carry data for a number of different upper layer protocols. These protocols are each
identified by a unique protocol number: for example, Internet Control Message Protocol
(ICMP) and Internet Group Management Protocol (IGMP) are protocols 1 and 2,
respectively.

Transport Layer:- The Transport Layer's responsibilities include end-to-end message


transfer capabilities independent of the underlying network, along with error control,
segmentation, flow control, congestion control, and application addressing (port numbers).
End to end message transmission or connecting applications at the transport layer can be
categorized as either connection-oriented, implemented in Transmission Control Protocol
(TCP), or connectionless, implemented in User Datagram Protocol (UDP). The Transport
Layer can be thought of as a transport mechanism, e.g., a vehicle with the responsibility to
make sure that its contents (passengers/goods) reach their destination safely and soundly,
unless another protocol layer is responsible for safe delivery. The Transport Layer provides
this service of connecting applications through the use of service ports. Since IP provides
only a best effort delivery, the Transport Layer is the first layer of the TCP/IP stack to offer
reliability. IP can run over a reliable data link protocol such as the High-Level Data Link
Control (HDLC). Protocols above transport, such as RPC, also can provide reliability.
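The distinction between connection-oriented TCP and connectionless UDP can be illustrated with a short loopback exchange. The sketch below is illustrative only (the loopback address, message, and port handling are arbitrary choices, not part of this project's network): a UDP datagram is sent with no connection setup, no acknowledgements, and addressing done purely by IP address and port number.

```python
import socket

# A minimal loopback UDP exchange: connectionless transport with
# application addressing via port numbers.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
server.settimeout(5)                   # avoid blocking forever if delivery fails
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", port))  # no handshake, no ACKs

data, peer = server.recvfrom(1024)
print(data)  # b'hello'
server.close()
client.close()
```

Because UDP offers only best-effort delivery, any reliability (retransmission, ordering) would have to be added by the application or a higher-layer protocol, exactly as the text above describes.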

Application Layer:- The Application Layer contains the higher-level protocols used by most
applications for network communication, such as HTTP, SMTP, FTP, and DNS. Data coded
according to application layer protocols is encapsulated into transport layer protocol units,
which in turn use lower layer protocols to effect the actual data transfer.
Application Layer protocols generally treat the transport layer (and lower) protocols as "black
boxes" which provide a stable network connection across which to communicate, although
the applications are usually aware of key qualities of the transport layer connection such as
the endpoint IP addresses and port numbers. As noted above, layers are not necessarily
clearly defined in the Internet protocol suite.

2.4 OSI REFERENCE NETWORK MODEL

The Open System Interconnection (OSI) reference model describes how information from a
software application in one computer moves through a network medium to a software
application in another computer. The OSI reference model is a conceptual model composed
of seven layers, each specifying particular network functions. The model was developed by
the International Organization for Standardization (ISO) in 1984, and it is now considered the
primary architectural model for inter-computer communications. Because each of the seven
layers is reasonably self-contained, the functions of each layer can be implemented and
updated without adversely affecting the other layers. The following diagram details the seven
layers of the Open System Interconnection (OSI) reference model:

Table 2.4: The OSI Reference Model Showing Seven Layers


Characteristics of the OSI Layers:

The seven layers of the OSI reference model can be divided into two categories: upper layers
and lower layers. The upper layers of the OSI model deal with application issues and
generally are implemented only in software. The highest layer, the application layer, is
closest to the end user. Both users and application layer processes interact with software
applications that contain a communications component. The term upper layer is sometimes
used to refer to any layer above another layer in the OSI model. The lower layers of the OSI
model handle data transport issues. The lowest layer, the physical layer, is closest to the
physical network medium and is responsible for actually placing information on the medium.

Table 2.4: Two Sets of Layers Make Up the OSI Layers

2.5 DESCRIPTION OF THE OSI LAYERS

Physical Layer: It defines the electrical and physical specifications for devices. In particular,
it defines the relationship between a device and physical medium. Physical layer
specifications define characteristics such as voltage levels, timing of voltage changes,
physical data rates, maximum transmission distances, and physical connectors. Physical layer
implementations can be categorized as either LAN or WAN specifications. The major
functions and services performed by the physical layer are: establishment and termination of
a connection to a communications medium; participation in the process whereby the
communication resources are effectively shared among multiple users; and modulation, or
conversion between the representation of digital data in user equipment and the
corresponding signals transmitted over a communications channel.

Data Link Layer: The data link layer provides reliable transit of data across a physical
network link. Different data link layer specifications define different network and protocol
characteristics, including physical addressing, network topology, error notification,
sequencing of frames, and flow control. Physical addressing (as opposed to network
addressing) defines how devices are addressed at the data link layer. Network topology
consists of the data link layer specifications that often define how devices are to be physically
connected, such as in a bus or a ring topology. Error notification alerts upper-layer protocols
that a transmission error has occurred, and the sequencing of data frames reorders frames that
are transmitted out of sequence.

Network Layer: The network layer defines the network address, which differs from the
MAC address. Some network layer implementations, such as the Internet Protocol (IP),
define network addresses in a way that route selection can be determined systematically by
comparing the source network address with the destination network address and applying the
subnet mask. Because this layer defines the logical network layout, routers can use this layer
to determine how to forward packets. Because of this, much of the design and configuration
work for internetworks happens at Layer 3, the network layer.

Transport Layer: The transport layer accepts data from the session layer and segments the
data for transport across the network. Generally, the transport layer is responsible for making
sure that the data is delivered error-free and in the proper sequence. Flow control generally
occurs at the transport layer. Flow control manages data transmission between devices so that
the transmitting device does not send more data than the receiving device can process.
Multiplexing enables data from several applications to be transmitted onto a single physical
link. Virtual circuits are established, maintained, and terminated by the transport layer. Error
checking involves creating various mechanisms for detecting transmission errors, while error
recovery involves acting, such as requesting that data be retransmitted, to resolve any errors
that occur.

Session Layer: The session layer establishes, manages, and terminates communication
sessions. Communication sessions consist of service requests and service responses that occur
between applications located in different network devices. These requests and responses are
coordinated by protocols implemented at the session layer. Some examples of session-layer
implementations include Zone Information Protocol (ZIP), the AppleTalk protocol that
coordinates the name binding process; and Session Control Protocol (SCP), the DECnet
Phase IV session layer protocol.

Presentation Layer: The presentation layer provides a variety of coding and conversion
functions that are applied to application layer data. These functions ensure that information
sent from the application layer of one system will be readable by the application layer of
another system. Some examples of presentation layer coding and conversion schemes include
common data representation formats, conversion of character representation formats,
common data compression schemes, and common data encryption schemes. Common data
representation formats, or the use of standard image, sound, and video formats, enable the
interchange of application data between different types of computer systems. Conversion
schemes are used to exchange information with systems by using different text and data
representations, such as EBCDIC and ASCII. Standard data compression schemes enable
data that is compressed at the source device to be properly decompressed at the destination.
Standard data encryption schemes enable data encrypted at the source device to be properly
deciphered at the destination.
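The EBCDIC/ASCII conversion mentioned above can be demonstrated in a few lines. The sketch below uses Python's built-in codecs, with "cp500" standing in for one EBCDIC variant; this is an illustrative choice of codec, not something prescribed by the OSI model itself:

```python
# Presentation-layer style character-set conversion: the same text has
# different byte representations in ASCII and in EBCDIC (codec "cp500").
text = "HELLO"
ascii_bytes = text.encode("ascii")   # b'HELLO'
ebcdic_bytes = text.encode("cp500")  # b'\xc8\xc5\xd3\xd3\xd6'

# The byte streams differ, but conversion through the shared character
# repertoire recovers the same text on both systems.
assert ascii_bytes != ebcdic_bytes
assert ebcdic_bytes.decode("cp500") == ascii_bytes.decode("ascii") == "HELLO"
```

A presentation layer performing this conversion lets an EBCDIC-based system and an ASCII-based system exchange readable text, as described above for schemes such as EBCDIC and ASCII.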

Application Layer: The application layer is the OSI layer closest to the end user, which
means that both the OSI application layer and the user interact directly with the software
application. This layer interacts with software applications that implement a communicating
component. Such application programs fall outside the scope of the OSI model. Application
layer functions typically include identifying communication partners, determining resource
availability, and synchronizing communication.

OSI and TCP/IP layering differences:

The three top layers in the OSI model—the Application Layer, the Presentation Layer and the
Session Layer—are not distinguished separately in the TCP/IP model where it is just the
Application Layer. While some pure OSI protocol applications, such as X.400, also combined
them, there is no requirement that a TCP/IP protocol stack needs to impose monolithic
architecture above the Transport Layer. For example, the Network File System (NFS)
application protocol runs over the External Data Representation (XDR) presentation protocol,
which, in turn, runs over a protocol with Session Layer functionality, Remote Procedure Call
(RPC). RPC provides reliable record transmission, so it can run safely over the best-effort
User Datagram Protocol (UDP) transport. The Session Layer roughly corresponds to the
Telnet virtual terminal functionality which is part of text based protocols such as the HTTP
and SMTP TCP/IP model Application Layer protocols.

CHAPTER 3
ROUTING
3.1 DEFINITION
Routing (or routeing) is the process of selecting paths in a network along which to send
network traffic. Routing is performed for many kinds of networks, including the telephone
network, electronic data networks (such as the Internet), and transportation networks. Here
we are concerned primarily with routing in electronic data networks using packet
switching technology. In packet switching networks, routing directs packet forwarding, the
transit of logically addressed packets from their source toward their ultimate destination.
General-purpose computers with multiple network cards can also forward packets and
perform routing, though they are not specialized hardware and may suffer from limited
performance. The routing process usually directs forwarding on the basis of routing tables,
which maintain a record of the routes to various network destinations.

CLASSIFICATION OF ROUTING
Routing can be classified on the basis of how the router learns about neighbouring
networks: routes can be configured statically, or they can be learned dynamically. Hence the
classification comes out to be:
Static routing and dynamic routing

Static routing:
Small networks may involve manually configured routing tables (static routing) or Non-
Adaptive routing, while larger networks involve complex topologies and may change rapidly,
making the manual construction of routing tables unfeasible. Nevertheless, most of the public
switched telephone network (PSTN) uses pre-computed routing tables, with fallback routes if
the most direct route becomes blocked (see routing in the PSTN).

Dynamic routing:
Adaptive routing or Dynamic routing attempts to solve this problem by constructing routing
tables automatically, based on information carried by routing protocols, and allowing the
network to act nearly autonomously in avoiding network failures and blockages. For larger
networks, static routing is avoided. Examples for (Dynamic routing) or Adaptive
routing algorithms are Routing Information Protocol (RIP), Open Shortest Path First
(OSPF).

Distance vector algorithms:


Distance vector algorithms use the Bellman-Ford algorithm. This approach assigns a number,
the cost, to each of the links between each node in the network. Nodes will send information
from point A to point B via the path that results in the lowest total cost (i.e. the sum of the
costs of the links between the nodes used). The algorithm operates in a very simple manner.
When a node first starts, it only knows of its immediate neighbours, and the direct cost
involved in reaching them.
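The behaviour described above can be sketched in a few lines of Python. The four-node topology and link costs below are invented purely for illustration; each node repeatedly merges its neighbours' advertised distance tables, Bellman-Ford style, until nothing changes:

```python
INF = float("inf")

# Invented undirected link costs between directly connected nodes.
links = {("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 5, ("C", "D"): 1}

def neighbours(node):
    for (a, b), cost in links.items():
        if a == node:
            yield b, cost
        elif b == node:
            yield a, cost

nodes = {n for pair in links for n in pair}
# Initially each node knows only itself (cost 0) and its direct neighbours.
tables = {n: {n: 0, **{m: c for m, c in neighbours(n)}} for n in nodes}

changed = True
while changed:  # iterate until no table changes (convergence)
    changed = False
    for n in nodes:
        for m, cost in neighbours(n):
            # Merge neighbour m's table, adding the cost of the link to m.
            for dest, d in tables[m].items():
                via = cost + d
                if via < tables[n].get(dest, INF):
                    tables[n][dest] = via
                    changed = True

print(tables["A"])  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Note that A reaches C at cost 3 via B rather than over the direct cost-5 link, because the algorithm always keeps the lowest total cost learned from any neighbour.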

Link-state algorithms:
When applying link-state algorithms, each node uses as its fundamental data a map of the
network in the form of a graph. To produce this, each node floods the entire network with
information about what other nodes it can connect to, and each node then independently
assembles this information into a map. Using this map, each router then independently
determines the least-cost path from itself to every other node using a standard shortest
paths algorithm such as Dijkstra's algorithm. The result is a tree rooted at the current node
such that the path through the tree from the root to any other node is the least-cost path to that
node.
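A minimal version of this per-router computation, run over an invented four-router topology, might look like the following sketch using Dijkstra's algorithm (router names and costs are illustrative only):

```python
import heapq

# The link-state map each router independently assembles from flooded updates.
graph = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 5, "R4": 20},
    "R4": {"R2": 1, "R3": 20},
}

def dijkstra(source):
    """Least-cost distance from source to every other node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already found a shorter path
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

print(dijkstra("R1"))  # R4 is reached at cost 11 via R2, not 25 via R3
```

Each router runs exactly this computation rooted at itself, which is why every router in a converged link-state network agrees on loop-free least-cost paths.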
Comparison of routing algorithms
Distance-vector routing protocols are simple and efficient in small networks, and
require little, if any management. However, distance-vector algorithms do not scale well (due
to the count-to-infinity problem), have poor convergence properties and are based on a 'hop
count' metric rather than a 'link-state' metric thus they ignore bandwidth (a major drawback)
when calculating the best path. This has led to the development of more complex but more
scalable algorithms for use in large networks. Interior routing mostly uses link-state routing
protocols such as OSPF and IS-IS. A more recent development is that of loop-free distance-
vector protocols (e.g. EIGRP). Loop-free distance-vector protocols are as robust and
manageable as distance-vector protocols, while avoiding counting to infinity and hence
having good worst-case convergence times. Path selection involves applying a routing
metric to multiple routes, in order to select (or predict) the best route.
The routing table stores only the best possible routes, while link-state or topological
databases may store all other information as well. Because a routing metric is specific to a
given routing protocol, multi-protocol routers must use some external heuristic in order to
select between routes learned from different routing protocols. Cisco routers, for example,
attribute a value known as the administrative distance to each route, where smaller
administrative distances indicate routes learned from a supposedly more reliable protocol. A
local network administrator, in special cases, can set up host-specific routes to a particular
machine which provides more control over network usage, permits testing and better overall
security. In some networks, routing is complicated by the fact that no single entity is
responsible for selecting paths: instead, multiple entities are involved in selecting paths or
even parts of a single path.

3.2 ROUTING PROTOCOL BASICS


Administrative distance
The administrative distance (AD) is used to rate the trustworthiness of routing information
received on a router from a neighbour router. An administrative distance is an integer from 0
to 255, where 0 is the most trusted and 255 means no traffic will be passed via this route. If a
router receives two updates listing the same remote network, the first thing the router checks
is the AD. If one of the advertised routes has a lower AD than the other, then the route with
the lowest AD will be placed in the routing table. If both advertised routes to the same
network have the same AD, then routing protocol metrics will be used to find the best path to
the remote network.
Route source Default AD
Connected 0
Static route 1
EIGRP 90
RIP 120
IGRP 100
OSPF 110
External EIGRP 170
Unknown 255 (this route will never be used)
Table 4.3: Administrative Distances
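The selection rule described above can be sketched as follows. The candidate routes are invented examples, and breaking ties purely by metric is a simplification of real router behaviour (real routers only compare metrics within the same protocol):

```python
# Default administrative distances, per the table above.
AD = {"connected": 0, "static": 1, "eigrp": 90, "igrp": 100,
      "ospf": 110, "rip": 120, "external_eigrp": 170, "unknown": 255}

def best_route(candidates):
    """candidates: list of (protocol, metric) tuples for one remote network.
    Lowest AD wins; the metric only breaks ties at equal AD."""
    usable = [c for c in candidates if AD[c[0]] < 255]  # AD 255 is never used
    if not usable:
        return None
    return min(usable, key=lambda c: (AD[c[0]], c[1]))

# OSPF (AD 110) beats RIP (AD 120) even though RIP's metric looks smaller:
print(best_route([("rip", 2), ("ospf", 65)]))  # ('ospf', 65)
print(best_route([("unknown", 1)]))            # None
```

This is why a route learned via EIGRP will displace the same prefix learned via RIP regardless of either protocol's metric: the AD comparison happens first.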
3.3 MAJOR ROUTING PROTOCOLS
RIP
The Routing Information Protocol (RIP) is a dynamic routing protocol used in local and wide
area networks. As such it is classified as an interior gateway protocol (IGP). It uses the
distance-vector routing algorithm. It was first defined in RFC 1058 (1988). The protocol has
since been extended several times, resulting in RIP Version 2 (RFC 2453). Both versions are
still in use today, however, they are considered to have been made technically obsolete by
more advanced techniques such as Open Shortest Path First (OSPF) and the OSI protocol IS-
IS. RIP has also been adapted for use in IPv6 networks, a standard known as RIPng (RIP next
generation), published in RFC 2080 (1997).
Technical details
RIP is a distance-vector routing protocol, which employs the hop count as a routing metric.
The hold down time is 180 seconds. RIP prevents routing loops by implementing a limit on
the number of hops allowed in a path from the source to a destination. The maximum number
of hops allowed for RIP is 15. This hop limit, however, also limits the size of networks that
RIP can support. A hop count of 16 is considered an infinite distance and used to deprecate
inaccessible, inoperable, or otherwise undesirable routes in the selection process. RIP
implements the split horizon, route poisoning and hold down mechanisms to prevent
incorrect routing information from being propagated. These are some of the stability features
of RIP. It is also possible to use the so called RIP-MTI algorithm to cope with the count to
infinity problem. With its help, it's possible to detect every possible loop with a very small
computation effort. Originally each RIP router transmitted full updates every 30 seconds. In
the early deployments, routing tables were small enough that the traffic was not significant.
As networks grew in size, however, it became evident there could be a massive traffic burst
every 30 seconds, even if the routers had been initialized at random times. RIP is
implemented on top of the User Datagram Protocol as its transport protocol. It is assigned the
reserved port number 520.
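The hop-count handling described above can be sketched as follows. This is a simplified illustration (timers, split horizon, and next-hop tracking are omitted, and the prefixes are invented): each advertised route costs one extra hop on receipt, and 16 hops means unreachable.

```python
RIP_INFINITY = 16  # hop count 16 = unreachable

def receive_update(table, dest, advertised_hops):
    """Apply one neighbour advertisement to a RIP-style routing table."""
    hops = min(advertised_hops + 1, RIP_INFINITY)  # add cost of incoming link
    if hops < table.get(dest, RIP_INFINITY):       # keep only a better route
        table[dest] = hops
    return table

table = {}
receive_update(table, "10.0.1.0/24", 3)   # installed at 4 hops
receive_update(table, "10.0.1.0/24", 7)   # 8 hops is worse, ignored
receive_update(table, "10.0.2.0/24", 15)  # 15 + 1 = 16 -> infinity, not installed
print(table)  # {'10.0.1.0/24': 4}
```

The capped increment is the mechanism behind both the 15-hop network-size limit and the bound on the count-to-infinity problem mentioned above.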
Versions
There are three versions of the Routing Information Protocol: RIPv1, RIPv2, and RIPng.

Limitations
 Without using RIP-MTI, the hop count cannot exceed 15; if it does, the route is
considered invalid.

 Most RIP networks are flat. There is no concept of areas or boundaries in RIP
networks.

 Variable Length Subnet Masks were not supported by RIP version 1.

 Without using RIP-MTI, RIP has slow convergence and count to infinity problems.

Implementations
 Routed, included in most BSD Unix systems.

 Routing and Remote Access, a Windows Server feature, contains RIP support.

 Quagga, a free open source routing software suite based on GNU Zebra.

 OpenBSD, includes a RIP implementation

 Cisco IOS, software used in Cisco routers (supports version 1, version 2 and RIPng)

 Cisco NX-OS software used in Cisco Nexus data center switches (supports RIPv1 and
RIPv2)

3.5 OPEN SHORTEST PATH FIRST (OSPF)


Open Shortest Path First (OSPF) is a dynamic routing protocol for use in Internet Protocol
(IP) networks. Specifically, it is a link-state routing protocol and falls into the group of
interior gateway protocols, operating within a single autonomous system (AS). It is defined
as OSPF Version 2 in RFC 2328 (1998) for IPv4. The updates for IPv6 are specified as OSPF
Version 3 in RFC 5340 (2008).
Overview
OSPF is an interior gateway protocol that routes Internet Protocol (IP) packets solely within a
single routing domain (autonomous system). It gathers link state information from available
routers and constructs a topology map of the network. The topology determines the routing
table presented to the Internet Layer which makes routing decisions based solely on the
destination IP address found in IP datagrams. OSPF was designed to support variable-length
subnet masking (VLSM) or Classless Inter-Domain Routing (CIDR) addressing models.
An OSPF network may be structured, or subdivided, into routing areas to simplify
administration and optimize traffic and resource utilization. Areas are identified by 32-bit
numbers, expressed either simply in decimal, or often in octet-based dot-decimal notation,
familiar from IPv4 address notation. By convention, area 0 (zero) or 0.0.0.0 represents the
core or backbone region of an OSPF network. The identifications of other areas may be
chosen at will; often, administrators select the IP address of a main router in an area as the
area's identification. Each additional area must have a direct or virtual connection to the
backbone OSPF area. Such connections are maintained by an interconnecting router, known
as area border router (ABR). An ABR maintains separate link state databases for each area it
serves and maintains summarized routes for all areas in the network.
Neighbour relationships
Routers in the same broadcast domain or at each end of a point-to-point telecommunications
link form adjacencies when they have detected each other. This detection occurs when a
router identifies itself in a hello OSPF protocol packet. This is called a two way state and is
the most basic relationship. The routers in an Ethernet or frame relay network select a
designated router (DR) and a backup designated router (BDR) which act as a hub to reduce
traffic between routers. OSPF uses both Unicast and multicast to send "hello packets" and
link state updates.
As a link state routing protocol, OSPF establishes and maintains neighbour relationships in
order to exchange routing updates with other routers. The neighbour relationship table is
called an adjacency database in OSPF. Provided that OSPF is configured correctly, OSPF
forms neighbour relationships only with the routers directly connected to it. In order to form a
neighbour relationship between two routers, the interfaces used to form the relationship must
be in the same area. An interface can only belong to a single area.

Area types in OSPF:


Backbone area
The backbone area (also known as area 0 or area 0.0.0.0) forms the core of an OSPF network.
All other areas are connected to it, and inter-area routing happens via routers connected to the
backbone area and to their own associated areas. It is the logical and physical structure for the
'OSPF domain' and is attached to all nonzero areas in the OSPF domain.
Stub area
A stub area is an area which does not receive route advertisements external to the
autonomous system (AS) and routing from within the area is based entirely on a default route.
This reduces the size of the routing databases for the area's internal routers.
Modifications to the basic concept of stub areas exist in the not-so-stubby area (NSSA). In
addition, several other proprietary variations have been implemented by systems vendors, such
as the totally stubby area (TSA) and the NSSA totally stubby area, both an extension in Cisco
Systems routing equipment.
Not-so-stubby area
A not-so-stubby area (NSSA) is a type of stub area that can import autonomous system
external routes and send them to other areas, but still cannot receive AS external routes from
other areas. NSSA is an extension of the stub area feature that allows the injection of external
routes in a limited fashion into the stub area.
Transit area
A transit area is an area with two or more OSPF border routers and is used to pass network
traffic from one adjacent area to another. The transit area does not originate this traffic and is
not the destination of such traffic.
Applications
OSPF was the first widely deployed routing protocol that could converge a network in the
low seconds, and guarantee loop-free paths. It has many features that allow the imposition of
policies about the propagation of routes that it may be appropriate to keep local, for load
sharing, and for selective route importing, more so than IS-IS.
IS-IS
Intermediate system to intermediate system (IS-IS), is a protocol used by network devices
(routers) to determine the best way to forward datagrams through a packet-switched
network, a process called routing. The protocol was defined in ISO/IEC 10589:2002 as an
international standard within the Open Systems Interconnection (OSI) reference design. IS-IS
is not an Internet standard, however IETF republished the standard in RFC 1142 for the
Internet community.
Description
IS-IS is an Interior Gateway Protocol (IGP) meaning that it is intended for use within an
administrative domain or network. It is not intended for routing between Autonomous
Systems (RFC 1930), a job that is the purpose of an Exterior Gateway Protocol, such as
Border Gateway Protocol (BGP). IS-IS is a link-state routing protocol, meaning that it
operates by reliably flooding topology information throughout a network of routers. Each
router then independently builds a picture of the network's topology. Packets or datagrams
are forwarded based on the best topological path through the network to the destination. IS-IS
uses Dijkstra's algorithm for computing the best path through the network.

CHAPTER 4
EIGRP PROTOCOL
4.1 INTRODUCTION
Enhanced Interior Gateway Routing Protocol - (EIGRP) is a Cisco proprietary routing
protocol loosely based on their original IGRP. EIGRP is an advanced distance-vector routing
protocol, with optimizations to minimize both the routing instability incurred after topology
changes, as well as the use of bandwidth and processing power in the router. Routers that
support EIGRP will automatically redistribute route information to IGRP neighbours by
converting the 32 bit EIGRP metric to the 24 bit IGRP metric. Most of the routing
optimizations are based on the Diffusing Update Algorithm (DUAL) work from SRI, which
guarantees loop-free operation and provides a mechanism for fast convergence.

4.2 BASIC OPERATION

The data EIGRP collects is stored in three tables:

 Neighbour Table: Stores data about the neighbouring routers, i.e. those directly
accessible through directly connected interfaces.

 Topology Table: Confusingly named, this table does not store an overview of the
complete network topology; rather, it effectively contains only the aggregation of the
routing tables gathered from all directly connected neighbours. This table contains a
list of destination networks in the EIGRP-routed network together with their
respective metrics. Also for every destination, a successor and a feasible successor are
identified and stored in the table if they exist. Every destination in the topology table
can be marked either as "Passive", which is the state when the routing has stabilized
and the router knows the route to the destination, or "Active" when the topology has
changed and the router is in the process of (actively) updating its route to that
destination.

 Routing table: Stores the actual routes to all destinations; the routing table is
populated from the topology table with every destination network that has its
successor and optionally feasible successor identified (if unequal-cost load-balancing
is enabled using the variance command). The successors and feasible successors serve
as the next hop routers for these destinations.

Unlike most other distance vector protocols, EIGRP does not rely on periodic route dumps in
order to maintain its topology table. Routing information is exchanged only upon the
establishment of new neighbour adjacencies, after which only changes are sent.

Multiple metrics

EIGRP associates five different metrics with each route:

K1 = Bandwidth modifier
 Minimum Bandwidth (in kilobits per second)

K2 = Load modifier

 Load (number in range 1 to 255; 255 being saturated)

K3 = Delay modifier

 Total Delay (in 10s of microseconds)

K4 = Reliability modifier

 Reliability (number in range 1 to 255; 255 being the most reliable)

K5 = MTU modifier

 Minimum path Maximum Transmission Unit (MTU) (though not actually used in the
calculation)

By default, only total delay and minimum bandwidth are enabled when EIGRP is started on a
router, but an administrator can enable or disable all the metrics as needed.

For the purposes of comparing routes, these are combined together in a weighted formula to
produce a single overall metric:

metric = 256 * [ K1 * Bandwidth + (K2 * Bandwidth) / (256 - Load) + K3 * Delay ] * [ K5 / (Reliability + K4) ]

Where the various constants (K1 through K5) can be set by the user to produce varying
behaviours. An important and totally non-obvious fact is that if K5 is set to zero, the term
K5 / (Reliability + K4) is not used (i.e. the whole factor is taken as 1).

The default is for K1 and K3 to be set to 1, and the rest to zero, effectively reducing the above
formula to (Bandwidth + Delay) * 256.

Obviously, these constants must be set to the same value on all routers in an EIGRP system,
or permanent routing loops will probably result. Cisco routers running EIGRP will not form
an EIGRP adjacency and will complain about K-values mismatch until these values are
identical on these routers.

EIGRP scales Bandwidth and Delay metrics with following calculations:

Bandwidth for EIGRP = 10^7 / Interface Bandwidth (in kbit/s)


Delay for EIGRP = Interface Delay / 10

EIGRP also maintains a hop count for every route; however, the hop count is not used
in metric calculation. It is only verified against a predefined maximum on an EIGRP router
(by default it is set to 100 and can be changed to any value between 1 and 255). Routes
having a hop count higher than the maximum will be advertised as unreachable by an EIGRP
router.
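Using the scalings above, the default metric with only K1 and K3 enabled reduces to (Bandwidth + Delay) * 256. A short sketch of that default calculation (the T1 and Fast Ethernet example values are illustrative):

```python
def eigrp_default_metric(min_bw_kbps, total_delay_usec):
    """Default EIGRP composite metric: K1 = K3 = 1, K2 = K4 = K5 = 0."""
    bw = 10**7 // min_bw_kbps        # 10^7 / slowest link bandwidth (kbit/s)
    delay = total_delay_usec // 10   # total path delay in tens of microseconds
    return (bw + delay) * 256        # (Bandwidth + Delay) * 256

# A path whose slowest link is a 1544 kbit/s T1, with 20000 usec total delay:
print(eigrp_default_metric(1544, 20000))   # (6476 + 2000) * 256 = 2169856
# A 100 Mbit/s Fast Ethernet path with 100 usec delay:
print(eigrp_default_metric(100_000, 100))  # (100 + 10) * 256 = 28160
```

Because the bandwidth term uses the minimum bandwidth along the path, a single slow link dominates the metric no matter how fast the remaining links are.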

4.3 IMPORTANT TERMS USED IN EIGRP

Successor

A successor for a particular destination is a next hop router that satisfies these two conditions:

 it provides the least distance to that destination

 it is guaranteed not to be a part of some routing loop

The first condition can be satisfied by comparing metrics from all neighbouring routers that
advertise that particular destination, increasing the metrics by the cost of the link to that
respective neighbour, and selecting the neighbour that yields the least total distance. The
second condition can be satisfied by testing a so-called Feasibility Condition for every
neighbour advertising that destination. There can be multiple successors for a destination,
depending on the actual topology.

Feasible Successor

A feasible successor for a particular destination is a next hop router that satisfies this
condition:

 it is guaranteed not to be a part of some routing loop

This condition is also verified by testing the Feasibility Condition.

Thus, every successor is also a feasible successor. However, in most references about EIGRP
the term "feasible successor" is used to denote only those routers which provide a loop-free
path but which are not successors (i.e. they do not provide the least distance). From this point
of view, for a reachable destination there is always at least one successor, however, there
might not be any feasible successors.

The feasible successor effectively provides a backup route in the case that existing successors
die. Also, when performing unequal-cost load-balancing (balancing the network traffic in
inverse proportion to the cost of the routes), the feasible successors are used as next hops in
the routing table for the load-balanced destination. By default, the total count of successors
and feasible successors for a destination stored in the routing table is limited to four. This
limit can be changed in the range from 1 to 6. In more recent versions of Cisco IOS (e.g.
12.4), this range is between 1 and 16.
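
The successor and feasible-successor selection described above can be sketched as follows. This is a simplified illustration for a single destination; the data structures and names are our own, not Cisco's:

```python
def classify_neighbors(neighbors):
    """neighbors maps a neighbour name to
    (reported_distance, cost_of_link_to_that_neighbour)."""
    # Total distance to the destination through each neighbour
    totals = {n: rd + cost for n, (rd, cost) in neighbors.items()}
    fd = min(totals.values())  # feasible distance (least total distance)
    successors = [n for n, t in totals.items() if t == fd]
    # Feasibility condition: a neighbour whose own reported distance is
    # strictly less than our feasible distance cannot be part of a loop
    feasible = [n for n, (rd, _) in neighbors.items()
                if rd < fd and n not in successors]
    return successors, feasible
```

For example, with neighbours R1 (reported 10, link cost 5), R2 (12, 2) and R3 (20, 1), R2 is the successor (total 14), R1 is a feasible successor (10 < 14), and R3 fails the feasibility condition (20 >= 14).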

CHAPTER 5
SWITCHING

5.1 LAYER 2 SWITCHING

Ethernet is a family of frame-based computer networking technologies for local area
networks (LANs). The name comes from the physical concept of the ether. It defines a
number of wiring and signalling standards for the Physical Layer of the OSI networking
model as well as a common addressing format and Media Access Control at the Data Link
Layer. Ethernet is standardized as IEEE 802.3. The combination of the twisted pair versions
of Ethernet for connecting end systems to the network, along with the fiber optic versions for
site backbones, is the most widespread wired LAN technology. It has been in use from
around 1980 to the present, largely replacing competing LAN standards such as token ring,
FDDI, and ARCNET.

Fig 6.1: A standard 8P8C (often called RJ45) connector

Standardization

Notwithstanding its technical merits, timely standardization was instrumental to the
success of Ethernet. It required well-coordinated and partly competitive activities in several
standardization bodies such as the IEEE, ECMA, IEC, and finally ISO. In February 1980
IEEE started a project, IEEE 802 for the standardization of Local Area Networks (LAN). In
addition to CSMA/CD, Token Ring (supported by IBM) and Token Bus were also considered
as candidates for a LAN standard. Due to the goal of IEEE 802 to forward only one standard
and due to the strong company support for all three designs, the necessary agreement on a
LAN standard was significantly delayed.

5.2 GENERAL DESCRIPTION

This is a combination card that supports both coaxial-based 10BASE2 (BNC connector) and
twisted pair-based 10BASE-T (8P8C modular connector, often called RJ45).

Ethernet was originally based on the idea of computers communicating over a shared
coaxial cable acting as a broadcast transmission medium. The methods used show some
similarities to radio systems, although there are fundamental differences, such as the fact that
it is much easier to detect collisions in a cable broadcast system than a radio broadcast. The
common cable providing the communication channel was likened to the ether and it was from
this reference that the name "Ethernet" was derived.

The advantage of CSMA/CD was that, unlike Token Ring and Token Bus, all nodes could
"see" each other directly. All "talkers" shared the same medium - a single coaxial cable -
however, this was also a limitation; with only one speaker at a time, packets had to be of a
minimum size to guarantee that the leading edge of the propagating wave of the message got
to all parts of the medium before the transmitter could stop transmitting, thus guaranteeing
that collisions (two or more packets initiated within a window of time which forced them to
overlap) would be discovered. Minimum packet size and the physical medium's total length
were thus closely linked.

Above the physical layer, Ethernet stations communicate by sending each other data packets,
blocks of data that are individually sent and delivered. As with other IEEE 802 LANs, each
Ethernet station is given a single 48-bit MAC address, which is used to specify both the
destination and the source of each data packet. Network interface cards (NICs) or chips
normally do not accept packets addressed to other Ethernet stations. Adapters generally come
programmed with a globally unique address, but this can be overridden, either to avoid an
address change when an adapter is replaced, or to use locally administered addresses.

Due to the ubiquity of Ethernet, the ever-decreasing cost of the hardware needed to support it,
and the reduced panel space needed by twisted pair Ethernet, most manufacturers now build
the functionality of an Ethernet card directly into PC motherboards, eliminating the need for
installation of a separate network card.

CSMA/CD shared medium Ethernet

Ethernet originally used a shared coaxial cable (the shared medium) winding around a
building or campus to every attached machine. A scheme known as carrier sense multiple
access with collision detection (CSMA/CD) governed the way the computers shared the
channel. This scheme was simpler than the competing token ring or token bus technologies.
When a computer wanted to send some information, it used the following algorithm:

Main procedure

1. Frame ready for transmission.
2. Is the medium idle? If not, wait until it becomes ready.
3. Start transmitting.
4. Did a collision occur? If so, go to the collision detected procedure.
5. Reset retransmission counters and end frame transmission.

Collision detected procedure

1. Continue transmission until minimum packet time is reached (jam signal) to ensure
that all receivers detect the collision.
2. Increment retransmission counter.
3. Has the maximum number of transmission attempts been reached? If so, abort
transmission.
4. Calculate and wait a random back-off period based on the number of collisions.
5. Re-enter main procedure at stage 1.
This can be likened to what happens at a dinner party, where all the guests talk to each other
through a common medium (the air). Before speaking, each guest politely waits for the
current speaker to finish. If two guests start speaking at the same time, both stop and wait for
short, random periods of time (in Ethernet, this time is generally measured in microseconds).
The hope is that by each choosing a random period of time, both guests will not choose the
same time to try to speak again, thus avoiding another collision. Exponentially increasing
back-off times (determined using the truncated binary exponential back off algorithm) are
used when there is more than one failed attempt to transmit.
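
The truncated binary exponential back-off just described can be sketched as follows. This is an illustrative sketch using the classic parameters (slot time of 51.2 microseconds for 10 Mbit/s Ethernet, exponent capped at 10, frame dropped after 16 attempts); the function name is ours:

```python
import random

SLOT_TIME_US = 51.2  # slot time for 10 Mbit/s Ethernet

def backoff_delay_us(collision_count):
    """Pick a random back-off delay after the given number of collisions.

    The number of candidate slots doubles with each collision, but the
    exponent is truncated at 10; after 16 attempts the frame is dropped.
    """
    if collision_count > 16:
        raise RuntimeError("too many collisions: transmission aborted")
    k = min(collision_count, 10)
    slots = random.randint(0, 2 ** k - 1)
    return slots * SLOT_TIME_US
```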

Since all communications happen on the same wire, any information sent by one computer is
received by all, even if that information is intended for just one destination. The network
interface card interrupts the CPU only when applicable packets are received: the card ignores
information not addressed to it unless it is put into "promiscuous mode". This "one speaks, all
listen" property is a security weakness of shared-medium Ethernet, since a node on an
Ethernet network can eavesdrop on all traffic on the wire if it so chooses. Use of a single
cable also means that the bandwidth is shared, so that network traffic can slow to a crawl
when, for example, the network and nodes restart after a power failure.

Bridging and switching:

While repeaters could isolate some aspects of Ethernet segments, such as cable breakages,
they still forwarded all traffic to all Ethernet devices. These created practical limits on how
many machines could communicate on an Ethernet network. Also as the entire network was
one collision domain and all hosts had to be able to detect collisions anywhere on the
network, the number of repeaters between the farthest nodes was limited. Finally segments
joined by repeaters had to all operate at the same speed, making phased-in upgrades
impossible. To alleviate these problems, bridging was created to communicate at the data link
layer while isolating the physical layer. With bridging, only well-formed Ethernet packets are
forwarded from one Ethernet segment to another; collisions and packet errors are isolated.
Bridges learn where devices are, by watching MAC addresses, and do not forward packets
across segments when they know the destination address is not located in that direction.
Early bridges examined each packet one by one using software on a CPU; later switches
perform this forwarding in hardware. The elimination of the collision domain also means that
all the link's bandwidth can be used and that segment length is not limited by the need for
correct collision detection (this is most significant with some of the fiber variants of
Ethernet).

More advanced networks:

Simple switched Ethernet networks, while an improvement over hub based Ethernet, suffer
from a number of issues:
 They suffer from single points of failure. If any link fails some devices will be unable
to communicate with other devices and if the link that fails is in a central location lots
of users can be cut off from the resources they require.
 It is possible to trick switches or hosts into sending data to a machine even if it's not
intended for it (see switch vulnerabilities).
 Large amounts of broadcast traffic, whether malicious, accidental, or simply a side
effect of network size can flood slower links and/or systems.
o It is possible for any host to flood the network with broadcast traffic forming a
denial of service attack against any hosts that run at the same or lower speed
as the attacking device.
o As the network grows, normal broadcast traffic takes up an ever greater
amount of bandwidth.
o If switches are not multicast aware, multicast traffic will end up treated like
broadcast traffic due to being directed at a MAC with no associated port.
o If switches discover more MAC addresses than they can store (either through
network size or through an attack) some addresses must inevitably be dropped
and traffic to those addresses will be treated the same way as traffic to
unknown addresses, that is essentially the same as broadcast traffic (this issue
is known as fail open).
 They suffer from bandwidth choke points where a lot of traffic is forced down a
single link.

Some switches offer a variety of tools to combat these issues including:

 Spanning-tree protocol to maintain the active links of the network as a tree while
allowing physical loops for redundancy.
 Various port protection features, as it is far more likely an attacker will be on an end
system port than on a switch-switch link.
 VLANs to keep different classes of users separate while using the same physical
infrastructure.
 Fast routing at higher levels to route between those VLANs.
 Link aggregation to add bandwidth to overloaded links and to provide some measure
of redundancy, although the links won't protect against switch failure because they
connect the same pair of switches.

5.3 LAYER 3 SWITCHING

The only difference between a layer 3 switch and router is the way the administrator creates
the physical implementation. Also, traditional routers use microprocessors to make
forwarding decisions, and the switch performs only hardware-based packet switching.
However, some traditional routers can have other hardware functions as well in some of the
higher-end models. Layer 3 switches can be placed anywhere in the network because they
handle high-performance LAN traffic and can cost-effectively replace routers. Layer 3
switching is all hardware-based packet forwarding, and all packet forwarding is handled by
hardware ASICs. Layer 3 switches really are no different functionally from a traditional
router and perform the same functions, which are listed here:

 Determine paths based on logical addressing
 Run layer 3 checksums (on header only)
 Use Time to Live (TTL)
 Process and respond to any option information
 Update Simple Network Management Protocol (SNMP) managers with Management
Information Base (MIB) information
 Provide Security

The benefits of layer 3 switching include the following:

 Hardware-based packet forwarding
 High-performance packet switching
 High-speed scalability
 Low latency
 Lower per-port cost
 Flow accounting
 Security
 Quality of service (QoS)
CHAPTER 6
SWITCHING PROTOCOLS

6.1 SPANNING TREE PROTOCOL

The Spanning tree protocol (STP) is a link layer network protocol that ensures a loop-free
topology for any bridged LAN. Thus, the basic function of STP is to prevent bridge loops and
ensuing broadcast radiation. In the OSI model for computer networking, STP falls under the
OSI layer-2. It is standardized as 802.1D. As the name suggests, it creates a spanning tree
within a mesh network of connected layer-2 bridges (typically Ethernet switches), and
disables those links that are not part of the spanning tree, leaving a single active path between
any two network nodes. Spanning tree allows a network design to include spare (redundant)
links to provide automatic backup paths if an active link fails, without the danger of bridge
loops, or the need for manual enabling/disabling of these backup links. Bridge loops must be
avoided because they result in flooding the network.

Determine the least cost paths to the root bridge

The computed spanning tree has the property that messages from any connected device to the
root bridge traverse a least cost path, i.e., a path from the device to the root that has minimum
cost among all paths from the device to the root. The cost of traversing a path is the sum of
the costs of the segments on the path. Different technologies have different default costs for
network segments. An administrator can configure the cost of traversing a particular network
segment.

The property that messages always traverse least-cost paths to the root is guaranteed by the
following two rules.

 Least cost path from each bridge. After the root bridge has been chosen, each bridge
determines the cost of each possible path from itself to the root. From these, it picks
the one with the smallest cost (the least-cost path). The port connecting to that path
becomes the root port (RP) of the bridge.
 Least cost path from each network segment. The bridges on a network segment
collectively determine which bridge has the least-cost path from the network segment
to the root. The port connecting this bridge to the network segment is then the
designated port (DP) for the segment.
 Disable all other root paths. Any active port that is not a root port or a designated port
is a blocked port (BP).
 Modifications in case of ties. The above rules over-simplify the situation slightly,
because it is possible that there are ties, for example, two or more ports on a single
bridge are attached to least-cost paths to the root or two or more bridges on the same
network segment have equal least-cost paths to the root. To break such ties:
 Breaking ties for root ports. When multiple paths from a bridge are least-cost paths,
the chosen path uses the neighbour bridge with the lower bridge ID. The root port is
thus the one connecting to the bridge with the lowest bridge ID. For example, in
figure 3, if switch 4 were connected to network segment d, there would be two paths
of length 2 to the root, one path going through bridge 24 and the other through bridge
92. Because there are two least cost paths, the lower bridge ID (24) would be used as
the tie-breaker in choosing which path to use.

 Breaking ties for designated ports. When more than one bridge on a segment leads to
a least-cost path to the root, the bridge with the lower bridge ID is used to forward
messages to the root. The port attaching that bridge to the network segment is the
designated port for the segment. In figure 4, there are two least cost paths from
network segment d to the root, one going through bridge 24 and the other through
bridge 92. The lower bridge ID is 24, so the tie breaker dictates that the designated
port is the port through which network segment d is connected to bridge 24. If bridge
IDs were equal, then the bridge with the lowest MAC address would have the
designated port. In either case, the loser sets the port as being blocked.

 The final tie-breaker. In some cases, there may still be a tie, as when two bridges are
connected by multiple cables. In this case, multiple ports on a single bridge are
candidates for root port. In this case, the path which passes through the port on the
neighbour bridge that has the lowest port priority is used.
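
The tie-breaking order above amounts to comparing candidate paths as tuples, lowest first. The following is a schematic illustration (the function name and tuple layout are our own, not part of the 802.1D specification):

```python
def choose_root_port(candidates):
    """candidates: list of (path_cost, neighbour_bridge_id, port_priority).

    STP prefers the lowest path cost, then the lowest neighbour bridge ID,
    then the lowest port priority on the neighbour bridge, so Python's
    element-by-element tuple comparison gives the winner directly.
    """
    return min(candidates)
```

For instance, given two paths of equal cost 2 through bridges 92 and 24, the path through bridge 24 wins the tie, matching the example in the text.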

STP Switch Port States:

 Blocking - A port that would cause a switching loop, no user data is sent or received
but it may go into forwarding mode if the other links in use were to fail and the
spanning tree algorithm determines the port may transition to the forwarding state.
BPDU data is still received in blocking state.
 Listening - The switch processes BPDUs and awaits possible new information that
would cause it to return to the blocking state.
 Learning - While the port does not yet forward frames (packets) it does learn source
addresses from frames received and adds them to the filtering database (switching
database)
 Forwarding - A port receiving and sending data, normal operation. STP still monitors
incoming BPDUs that would indicate it should return to the blocking state to prevent
a loop.
 Disabled - Not strictly part of STP, a network administrator can manually disable a
port

To prevent the delay when connecting hosts to a switch and during some topology changes,
Rapid STP was developed and standardized by IEEE 802.1w, which allows a switch port to
rapidly transition into the forwarding state during these situations.
6.2 VIRTUAL LAN

A virtual LAN, commonly known as a VLAN, is a group of hosts with a common set of
requirements that communicate as if they were attached to the same broadcast domain,
regardless of their physical location. A VLAN has the same attributes as a physical LAN, but
it allows for end stations to be grouped together even if they are not located on the same
network switch. Network reconfiguration can be done through software instead of physically
relocating devices.

Uses

VLANs are created to provide the segmentation services traditionally provided by routers in
LAN configurations. VLANs address issues such as scalability, security, and network
management. Routers in VLAN topologies provide broadcast filtering, security, address
summarization, and traffic flow management. By definition, switches may not bridge IP
traffic between VLANs as it would violate the integrity of the VLAN broadcast domain.

This is also useful if someone wants to create multiple Layer 3 networks on the same Layer 2
switch. For example, if a DHCP server (which will broadcast its presence) is plugged into a
switch it will serve any host on that switch that is configured to get its IP from a DHCP
server. By using VLANs you can easily split the network up so some hosts won't use that
DHCP server and will obtain link-local addresses, or obtain an address from a different
DHCP server. Virtual LANs are essentially Layer 2 constructs, compared with IP subnets
which are Layer 3 constructs. In an environment employing VLANs, a one-to-one
relationship often exists between VLANs and IP subnets, although it is possible to have
multiple subnets on one VLAN or have one subnet spread across multiple VLANs. Virtual
LANs and IP subnets provide independent Layer 2 and Layer 3 constructs that map to one
another and this correspondence is useful during the network design process.

By using VLANs, one can control traffic patterns and react quickly to relocations. VLANs
provide the flexibility to adapt to changes in network requirements and allow for simplified
administration.

6.3 PROTOCOLS AND DESIGN

The protocol most commonly used today in configuring virtual LANs is IEEE 802.1Q. The
IEEE committee defined this method of multiplexing VLANs in an effort to provide
multivendor VLAN support. Prior to the introduction of the 802.1Q standard, several
proprietary protocols existed, such as Cisco's ISL (Inter-Switch Link) and 3Com's VLT
(Virtual LAN Trunk). Cisco also implemented VLANs over FDDI by carrying VLAN
information in an IEEE 802.10 frame header, contrary to the purpose of the IEEE 802.10
standard.

Both ISL and IEEE 802.1Q tagging perform "explicit tagging" - the frame itself is tagged
with VLAN information. ISL uses an external tagging process that does not modify the
existing Ethernet frame, while 802.1Q uses a frame-internal field for tagging, and so does
modify the Ethernet frame. This internal tagging is what allows IEEE 802.1Q to work on
both access and trunk links: frames are standard Ethernet, and so can be handled by
commodity hardware.

The IEEE 802.1Q header contains a 4-byte tag header containing a 2-byte tag protocol
identifier (TPID) and 2-byte tag control information (TCI). The TPID has a fixed value of
0x8100 that indicates that the frame carries the 802.1Q/802.1p tag information. The TCI
contains the following elements:

 Three-bit user priority
 One-bit canonical format indicator (CFI)
 Twelve-bit VLAN identifier (VID)-Uniquely identifies the VLAN to which the frame
belongs
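
The tag layout above can be illustrated by packing and unpacking the 4-byte header. This is a sketch; the function names are ours:

```python
import struct

TPID = 0x8100  # fixed value identifying an 802.1Q-tagged frame

def build_dot1q_tag(vid, priority=0, cfi=0):
    """Pack the 4-byte 802.1Q tag: 16-bit TPID followed by the 16-bit TCI
    (3-bit priority, 1-bit CFI, 12-bit VLAN identifier)."""
    if not 0 <= vid < 4096:
        raise ValueError("VID is a 12-bit field")
    tci = (priority << 13) | (cfi << 12) | vid
    return struct.pack("!HH", TPID, tci)

def parse_dot1q_tag(tag):
    tpid, tci = struct.unpack("!HH", tag)
    return {"tpid": tpid, "priority": tci >> 13,
            "cfi": (tci >> 12) & 1, "vid": tci & 0x0FFF}
```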

Inter-Switch Link (ISL) is a Cisco proprietary protocol used to interconnect multiple switches
and maintain VLAN information as traffic travels between switches on trunk links. This
technology provides one method for multiplexing bridge groups (VLANs) over a high-speed
backbone. It is defined for Fast Ethernet and Gigabit Ethernet, as is IEEE 802.1Q.

Establishing VLAN memberships

The two common approaches to assigning VLAN membership are as follows:

 Static VLANs
 Dynamic VLANs

Static VLANs are also referred to as port-based VLANs. Static VLAN assignments are
created by assigning ports to a VLAN. As a device enters the network, the device
automatically assumes the VLAN of the port. If the user changes ports and needs access to
the same VLAN, the network administrator must manually make a port-to-VLAN assignment
for the configuration.

Port-based VLANs With port-based VLAN membership, the port is assigned to a specific
VLAN independent of the user or system attached to the port. This means all users attached
to the port should be members of the same VLAN. The network administrator typically
performs the VLAN assignment. The port configuration is static and cannot be automatically
changed to another VLAN without manual reconfiguration. As with other VLAN approaches,
the packets forwarded using this method do not leak into other VLAN domains on the
network. After a port has been assigned to a VLAN, the port cannot send to or receive from
devices in another VLAN without the intervention of a Layer 3 device.

CHAPTER 7
WIDE AREA NETWORKS

7.1 INTRODUCTION

A wide area network (WAN) is a computer network that covers a broad area (i.e., any
network whose communications links cross metropolitan, regional, or national boundaries).
This is in contrast with personal area networks (PANs), local area networks (LANs), campus
area networks (CANs), or metropolitan area networks (MANs) which are usually limited to a
room, building, campus or specific metropolitan area (e.g., a city) respectively.

7.2 WAN DESIGN OPTIONS

WANs are used to connect LANs and other types of networks together, so that users and
computers in one location can communicate with users and computers in other locations.
Many WANs are built for one particular organization and are private. Others, built by
Internet service providers, provide connections from an organization's LAN to the Internet.
WANs are often built using leased lines. At each end of the leased line, a router connects to
the LAN on one side and a hub within the WAN on the other. Leased lines can be very
expensive. Instead of using leased lines, WANs can also be built using less costly circuit
switching or packet switching methods. Network protocols including TCP/IP deliver
transport and addressing functions. Protocols including Packet over SONET/SDH, MPLS,
ATM and Frame relay are often used by service providers to deliver the links that are used in
WANs. X.25 was an important early WAN protocol, and is often considered to be the
"grandfather" of Frame Relay as many of the underlying protocols and functions of X.25 are
still in use today (with upgrades) by Frame Relay.

7.3 WAN CONNECTION TECHNOLOGY OPTIONS:

Several options are available for WAN connectivity:

Leased line
Description: Point-to-point connection between two computers or Local Area Networks (LANs).
Advantages: Most secure.
Disadvantages: Expensive.
Sample protocols used: PPP, HDLC, SDLC, HNAS.

Circuit switching
Description: A dedicated circuit path is created between end points. The best example is dialup connections.
Advantages: Less expensive.
Disadvantages: Call setup.
Bandwidth range: 28 - 144 kbps.
Sample protocols used: PPP, ISDN.

Packet switching
Description: Devices transport packets via a shared single point-to-point or point-to-multipoint link across a carrier internetwork. Variable-length packets are transmitted over Permanent Virtual Circuits (PVC) or Switched Virtual Circuits (SVC).
Disadvantages: Shared media across link.
Sample protocols used: X.25, Frame Relay.

Cell relay
Description: Similar to packet switching, but uses fixed-length cells instead of variable-length packets. Data is divided into fixed-length cells and then transported across virtual circuits.
Advantages: Best for simultaneous use of voice and data.
Disadvantages: Overhead can be considerable.
Sample protocols used: ATM.

Table 8.3: Several options are available for WAN connectivity

7.4 HIGH-LEVEL DATA LINK CONTROL

High-Level Data Link Control (HDLC) is a bit-oriented synchronous data link layer protocol
developed by the International Organization for Standardization (ISO). The original ISO
standards for HDLC are:

 ISO 3309 — Frame Structure
 ISO 4335 — Elements of Procedure
 ISO 6159 — Unbalanced Classes of Procedure
 ISO 6256 — Balanced Classes of Procedure

The current standard for HDLC is ISO 13239, which replaces all of those standards. HDLC
provides both connection-oriented and connectionless service.

History

HDLC is based on IBM's SDLC protocol, which is the layer 2 protocol for IBM's Systems
Network Architecture (SNA). It was extended and standardized by the ITU as LAP, while
ANSI named their essentially identical version ADCCP. Derivatives have since appeared in
innumerable standards. It was adopted into the X.25 protocol stack as LAPB, into the V.42
protocol as LAPM, into the Frame Relay protocol stack as LAPF and into the ISDN protocol
stack as LAPD. It was the inspiration for the IEEE 802.2 LLC protocol, and it is the basis for
the framing mechanism used with PPP on synchronous lines, as used by many servers to
connect to a WAN, most commonly the Internet.

Framing

HDLC frames can be transmitted over synchronous or asynchronous links. Those links have
no mechanism to mark the beginning or end of a frame, so the beginning and end of each
frame has to be identified. This is done by using a frame delimiter, or flag, which is a unique
sequence of bits that is guaranteed not to be seen inside a frame. This sequence is '01111110',
or, in hexadecimal notation, 0x7E. Each frame begins and ends with a frame delimiter. A
frame delimiter at the end of a frame may also mark the start of the next frame. A sequence of
7 or more consecutive 1-bits within a frame will cause the frame to be aborted.

When no frames are being transmitted on a simplex or full-duplex synchronous link, a
frame delimiter is continuously transmitted on the link. Using the standard NRZI encoding
from bits to line levels (0 bit = transition, 1 bit = no transition), this generates one of two
continuous waveforms, depending on the initial state. This continuous flag sequence is used
by modems to train and synchronize their clocks via phase-locked loops. Some protocols
allow the 0-bit at the end of a frame delimiter to be shared with the start of the next frame
delimiter, i.e. '011111101111110'.

Synchronous framing

On synchronous links, this is done with bit stuffing. Any time that 5 consecutive 1-bits
appear in the transmitted data, the data is paused and a 0-bit is transmitted. This ensures that
no more than 5 consecutive 1-bits will be sent. The receiving device knows this is being
done, and after seeing 5 1-bits in a row, a following 0-bit is stripped out of the received data.
If the following bit is a 1-bit, the receiver has found a flag.
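
Bit stuffing as described can be sketched as follows. This is illustrative only, with bits modelled as a list of 0/1 integers:

```python
def bit_stuff(bits):
    """Insert a 0-bit after every run of five consecutive 1-bits."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)  # stuffed bit; the receiver strips it back out
            run = 0
    return out
```

This guarantees the flag pattern 01111110 can never appear inside frame data, since six 1-bits in a row are only ever transmitted as a delimiter.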

Asynchronous framing

When using asynchronous serial communication such as standard RS-232 serial ports, bits
are sent in groups of 8, and bit-stuffing is inconvenient. Instead they use "control-octet
transparency", also called "byte stuffing" or "octet stuffing". The frame boundary octet is
01111110 (0x7E in hexadecimal). A "control escape octet" has the bit sequence 01111101
(0x7D in hexadecimal). If either of these two octets appears in the transmitted data, an
escape octet is sent, followed by the original data octet with bit 5 inverted.
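
Control-octet transparency can be sketched as a minimal illustration of the escaping rule ("bit 5" is the 0x20 bit; the function name is ours):

```python
FLAG, ESC = 0x7E, 0x7D

def byte_stuff(payload):
    """Escape any flag or escape octet in the payload: send the escape
    octet, then the original octet with bit 5 (0x20) inverted."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])
        else:
            out.append(b)
    return bytes(out)
```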

Structure

The contents of an HDLC frame are shown in the following table:

Field         Length
Flag          8 bits
Address       8 or more bits
Control       8 or 16 bits
Information   Variable length, 0 or more bits
FCS           16 or 32 bits
Flag          8 bits

Data is usually sent in multiples of 8 bits, but only some variants require this; others
theoretically permit data alignments on other than 8-bit boundaries. The frame check
sequence (FCS) is a 16-bit CRC-CCITT or a 32-bit CRC-32 computed over the Address,
Control, and Information fields. It provides a means by which the receiver can detect errors
that may have been induced during the transmission of the frame, such as lost bits, flipped
bits, and extraneous bits. However, given that the algorithms used to calculate the FCS are
such that the probability of certain types of transmission errors going undetected increases
with the length of the data being checked for errors, the FCS can implicitly limit the
practical size of the frame.
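
The 16-bit FCS variant (CRC-CCITT as used by HDLC and X.25: polynomial 0x1021, bit-reversed to 0x8408 for the reflected computation, with initial value and final XOR of 0xFFFF) can be computed with a straightforward bitwise sketch:

```python
def fcs16(data):
    """16-bit HDLC frame check sequence (the CRC-16/X-25 variant)."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Reflected CRC: shift right, XOR with the reversed polynomial
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF
```

Real implementations typically use a 256-entry lookup table instead of the inner bit loop, but the result is the same.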

Types of Stations (Computers), and Data Transfer Modes

Synchronous Data Link Control (SDLC) was originally designed to connect one computer
with multiple peripherals. The original "normal response mode" is a master-slave mode
where the computer (or primary terminal) gives each peripheral (secondary terminal)
permission to speak in turn. Because all communication is either to or from the primary
terminal, frames include only one address, that of the secondary terminal; the primary
terminal is not assigned an address. There is also a strong distinction between commands sent
by the primary to a secondary, and responses sent by a secondary to the primary. Commands
and responses are in fact indistinguishable in format; the only difference is the direction in
which they are transmitted.

Normal response mode allows operation over half-duplex communication links, as long as
the primary is aware that it may not transmit when it has given permission to a secondary.
Asynchronous response mode is an HDLC addition for use over full-duplex links. While
retaining the primary/secondary distinction, it allows the secondary to transmit at any time.
Asynchronous balanced mode added the concept of a combined terminal which can act as
both a primary and a secondary. There are some subtleties about this mode of operation;
while many features of the protocol do not care whether they are in a command or response
frame, some do, and the address field of a received frame must be examined to determine
whether it contains a command (the address received is ours) or a response (the address
received is that of the other terminal).

HDLC Operations and Frame Types:


There are three fundamental types of HDLC frames.

 Information frames, or I-frames, transport user data from the network layer. In
addition they can also include flow and error control information piggybacked on
data.
 Supervisory Frames, or S-frames, are used for flow and error control whenever
piggybacking is impossible or inappropriate, such as when a station does not have
data to send. S-frames do not have information fields.
 Unnumbered frames, or U-frames, are used for various miscellaneous purposes,
including link management. Some U-frames contain an information field, depending
on the type.
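The three frame types can be distinguished by the low-order bits of the control field. A minimal sketch, assuming the basic 8-bit control field (a 0 in the first bit marks an I-frame, 01 an S-frame, 11 a U-frame):

```python
# Classify an HDLC frame by its control field:
#   bit 0 = 0        -> I-frame (user data, piggybacked sequence numbers)
#   bits 1,0 = 0b01  -> S-frame (flow/error control only)
#   bits 1,0 = 0b11  -> U-frame (link management and other purposes)

def hdlc_frame_type(control: int) -> str:
    if control & 0x01 == 0:
        return "I"
    elif control & 0x03 == 0x01:
        return "S"
    else:                      # control & 0x03 == 0x03
        return "U"

print(hdlc_frame_type(0x00))   # I
print(hdlc_frame_type(0x01))   # S  (e.g. a Receive Ready frame)
print(hdlc_frame_type(0x03))   # U
```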

7.5 LINK CONFIGURATIONS

Link configurations can be categorized as being either:

 Unbalanced, which consists of one primary terminal, and one or more secondary
terminals.
 Balanced, which consists of two peer terminals.

The three link configurations are:

 Normal Response Mode (NRM) is an unbalanced configuration in which only the
primary terminal may initiate data transfer. The secondary terminal transmits data
only in response to commands from the primary terminal. The primary terminal polls
the secondary terminal(s) to determine whether they have data to transmit, and then
selects one to transmit.
 Asynchronous Response Mode (ARM) is an unbalanced configuration in which
secondary terminals may transmit without permission from the primary terminal.
However, the primary terminal still retains responsibility for line initialization, error
recovery, and logical disconnect.
 Asynchronous Balanced Mode (ABM) is a balanced configuration in which either
station may initiate the transmission.

An additional link configuration is Disconnected mode. This is the mode that a secondary
station is in before it is initialized by the primary, or when it is explicitly disconnected. In this
mode, the secondary responds to almost every frame other than a mode set command with a
"Disconnected mode" response. The purpose of this mode is to allow the primary to reliably
detect a secondary being powered off or otherwise reset.

Basic Operations

 Initialization can be requested by either side by issuing one of the six mode-set
commands. This command:
o Signals the other side that initialization is requested
o Specifies the mode: NRM, ABM, or ARM
o Specifies whether 3-bit or 7-bit sequence numbers are in use.

7.6 FRAME RELAY

Frame Relay is a standardized wide area networking technology that specifies the physical
and logical link layers of digital telecommunications channels using a packet switching
methodology. Originally designed for transport across Integrated Services Digital Network
(ISDN) infrastructure, it may be used today in the context of many other network interfaces.
Network providers commonly implement Frame Relay for voice (VoFR) and data as an
encapsulation technique, used between local area networks (LANs) over a wide area network
(WAN). Each end-user gets a private line (or leased line) to a frame-relay node. The frame-
relay network handles the transmission over a frequently-changing path transparent to all
end-users.

With the advent of MPLS, VPN and dedicated broadband services such as cable modem and
DSL, the end may loom for the Frame Relay protocol and encapsulation. However, many
rural areas still lack DSL and cable modem services. In such cases the least expensive
type of "always-on" connection remains a 64-kbit/s frame-relay line. Thus a retail chain, for
instance, may use Frame Relay for connecting rural stores into their corporate WAN.

Fig 8.6: A basic Frame Relay network

Design

The designers of Frame Relay aimed to provide a telecommunication service for cost-efficient data
transmission for intermittent traffic between local area networks (LANs) and between end-
points in a wide area network (WAN). Frame Relay puts data in variable-size units called
"frames" and leaves any necessary error-correction (such as re-transmission of data) up to the
end-points. This speeds up overall data transmission. For most services, the network provides
a permanent virtual circuit (PVC), which means that the customer sees a continuous,
dedicated connection without having to pay for a full-time leased line, while the service-
provider figures out the route each frame travels to its destination and can charge based on
usage.

An enterprise can select a level of service quality, prioritizing some frames and making
others less important. Frame Relay can run on fractional T-1 or full T-carrier system lines.

Each Frame Relay Protocol data unit (PDU) consists of the following fields:

 Flag Field. The flag is used to perform high-level data link synchronization which
indicates the beginning and end of the frame with the unique pattern 01111110. To
ensure that the 01111110 pattern does not appear somewhere inside the frame, bit
stuffing and destuffing procedures are used.
 Address Field. Each address field may occupy either octet 2 to 3, octet 2 to 4, or octet
2 to 5, depending on the range of the address in use. A two-octet address field
comprises the EA=ADDRESS FIELD EXTENSION BITS and the
C/R=COMMAND/RESPONSE BIT.
1. DLCI-Data Link Connection Identifier Bits. The DLCI serves to identify the
virtual connection so that the receiving end knows which information
connection a frame belongs to. Note that this DLCI has only local
significance. A single physical channel can multiplex several different virtual
connections.
2. FECN, BECN, DE bits. These bits report congestion:
 FECN=Forward Explicit Congestion Notification bit
 BECN=Backward Explicit Congestion Notification bit
 DE=Discard Eligibility bit
 Information Field. A system parameter defines the maximum number of data bytes
that a host can pack into a frame. Hosts may negotiate the actual maximum frame
length at call set-up time. The standard specifies the maximum information field size
(supportable by any network) as at least 262 octets. Since end-to-end protocols
typically operate on the basis of larger information units, Frame Relay recommends
that the network support the maximum value of at least 1600 octets in order to avoid
the need for segmentation and reassembling by end-users.
 Frame Check Sequence (FCS) Field. Since one cannot completely ignore the bit
error-rate of the medium, each switching node needs to implement error detection to
avoid wasting bandwidth due to the transmission of erred frames. The error detection
mechanism used in Frame Relay uses the cyclic redundancy check (CRC) as its basis.
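The bit stuffing and destuffing procedure mentioned under the Flag Field can be illustrated as follows. This is an illustrative sketch working on bit strings, not a production encoder: after five consecutive 1-bits the sender inserts a 0, so the flag pattern 01111110 can never appear inside the frame, and the receiver removes that 0 again.

```python
def stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:          # five 1s in a row: insert a stuffed 0
            out.append("0")
            run = 0
    return "".join(out)

def destuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this bit is the stuffed 0: drop it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

payload = "0111111011111100"
assert destuff(stuff(payload)) == payload   # round-trip is lossless
print(stuff("11111"))                       # 111110
```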

The Frame Relay network uses a simplified protocol at each switching node. It achieves
simplicity by omitting link-by-link flow control. As a result, the offered load largely
determines the performance of Frame Relay networks. When the offered load is high, bursty
traffic can cause temporary overload at some Frame Relay nodes and a collapse in network
throughput. Therefore, frame-relay networks require effective mechanisms to control
congestion.
Congestion control in frame-relay networks includes the following elements:

 Admission Control. This is the principal mechanism Frame Relay uses to guarantee a
connection's resource requirements once it has been accepted; it also serves generally
to achieve high network performance. The network decides whether to accept a new
connection request, based on the relation of the requested traffic descriptor and the
network's residual capacity. The traffic descriptor consists of a set of parameters
communicated to the switching nodes at call set-up time or at service-subscription
time, and which characterizes the connection's statistical properties. The traffic
descriptor consists of three elements:
 Committed Information Rate (CIR). The average rate (in bit/s) at which the network
guarantees to transfer information units over a measurement interval T. This T
interval is defined as: T = Bc/CIR.
 Committed Burst Size (Bc). The maximum number of information units transmittable
during the interval T.
 Excess Burst Size (Be). The maximum number of uncommitted information units (in
bits) that the network will attempt to carry during the interval.
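The traffic-descriptor arithmetic above can be sketched as follows. The CIR, Bc and Be values are illustrative, not drawn from any particular service contract:

```python
# With CIR in bit/s and the committed burst Bc in bits, the measurement
# interval is T = Bc / CIR. Within one interval, traffic up to Bc is
# committed, traffic up to Bc + Be is carried but marked discard-eligible
# (DE), and anything beyond that is discarded.

CIR = 64_000        # committed information rate, bit/s (illustrative)
Bc  = 128_000       # committed burst size, bits (illustrative)
Be  = 64_000        # excess burst size, bits (illustrative)

T = Bc / CIR        # measurement interval: 2.0 seconds here
print(f"T = {T} s")

def police(bits_in_interval: int) -> str:
    """Classify the traffic offered during one interval T."""
    if bits_in_interval <= Bc:
        return "committed"          # guaranteed delivery
    elif bits_in_interval <= Bc + Be:
        return "excess (DE set)"    # carried if capacity allows
    else:
        return "discarded"

print(police(100_000))   # committed
print(police(150_000))   # excess (DE set)
print(police(250_000))   # discarded
```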

Once the network has established a connection, the edge node of the Frame Relay network
must monitor the connection's traffic flow to ensure that the actual usage of network
resources does not exceed this specification. Frame Relay defines some restrictions on the
user's information rate. It allows the network to enforce the end user's information rate and
discard information when the subscribed access rate is exceededvoid data accumulation
inside the network. FECN means Forward Explicit Congestion Notification. The FECN bit
can be set to 1 to indicate that congestion was experienced in the direction of the frame
transmission, so it informs the destination that congestion has occurred. BECN means
Backwards Explicit Congestion Notification. The BECN bit can be set to 1 to indicate that
congestion was experienced in the network in the direction opposite of the frame
transmission, so it informs the sender that congestion has occurred.
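A sketch of decoding the fields discussed in this section (DLCI, C/R, FECN, BECN, DE) from a two-octet address field. The bit layout follows the common two-octet format described above (upper 6 DLCI bits plus C/R and EA=0 in octet 1; lower 4 DLCI bits plus FECN, BECN, DE and EA=1 in octet 2); the example octets encode a hypothetical frame on DLCI 16 with BECN set:

```python
def parse_address(octet1: int, octet2: int) -> dict:
    """Decode a two-octet Frame Relay address field."""
    return {
        "dlci": ((octet1 >> 2) & 0x3F) << 4 | (octet2 >> 4) & 0x0F,
        "cr":   (octet1 >> 1) & 1,   # command/response bit
        "fecn": (octet2 >> 3) & 1,   # forward congestion notification
        "becn": (octet2 >> 2) & 1,   # backward congestion notification
        "de":   (octet2 >> 1) & 1,   # discard eligibility
    }

# Hypothetical frame: DLCI 16, BECN set, everything else clear.
fields = parse_address(0b00000100, 0b00000101)
print(fields)   # dlci=16, becn=1
```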

7.7 Frame Relay versus X.25

X.25 provides quality of service and error-free delivery, whereas, Frame Relay was designed
to relay data as quickly as possible over low error networks. Frame Relay eliminates a
number of the higher-level procedures and fields used in X.25. Frame Relay was designed for
use on links with error-rates far lower than were available when X.25 was designed. X.25 packet
switched networks typically allocated a fixed bandwidth through the network for each X.25
access, regardless of the current load. This resource allocation approach, while apt for
applications that require guaranteed quality of service, is inefficient for applications that are
highly dynamic in their load characteristics or which would benefit from a more dynamic
resource allocation. Frame Relay networks can dynamically allocate bandwidth at both the
physical and logical channel level.

7.8 Virtual private network


Fig 8.9: VPN Connectivity overview

A virtual private network (VPN) links two computers through an underlying local or wide-
area network, while encapsulating the data and keeping it private. It is analogous to a pipe
within a pipe. Even though the outer pipe contains the inner one, the inner pipe has a wall that
blocks other traffic in the outer pipe. To the rest of the network, the VPN traffic just looks
like another traffic stream.

VPN classifications

VPN technologies have myriad protocols, terminologies and marketing influences that
define them. For example, VPN technologies can differ in:

 The protocols they use to tunnel the traffic


 The tunnel's termination point, i.e., customer edge or network provider edge
 Whether they offer site-to-site or remote access connectivity
 The levels of security provided
 The OSI layer they present to the connecting network, such as Layer 2 circuits or
Layer 3 network connectivity

Some classification schemes are discussed in the following sections.

Security Mechanisms:

Secure VPNs use cryptographic tunneling protocols to provide confidentiality by blocking
intercepts and packet sniffing, allow sender authentication to block identity spoofing, and
provide message integrity by preventing message alteration.

Secure VPN protocols include the following:

 IPSec (Internet Protocol Security) was originally developed for IPv6, which requires
it. This standards-based security protocol is also widely used with IPv4. L2TP
frequently runs over IPSec.
 Transport Layer Security (SSL/TLS) can tunnel an entire network's traffic, as it does
in the OpenVPN project, or secure an individual connection. A number of vendors
provide remote access VPN capabilities through SSL. An SSL VPN can connect from
locations where IPSec runs into trouble with Network Address Translation and
firewall rules. However, SSL-based VPNs use Transmission Control Protocol (TCP)
and so may be vulnerable to denial-of-service attacks because TCP connections do not
authenticate.
 Datagram Transport Layer Security (DTLS) is used in Cisco's next-generation VPN
product, Cisco AnyConnect VPN, to solve the issues SSL/TLS has with tunneling
TCP over TCP.
 Microsoft's Microsoft Point-to-Point Encryption (MPPE) works with their PPTP and
in several compatible implementations on other platforms.
 Microsoft introduced Secure Socket Tunneling Protocol (SSTP) in Windows Server
2008 and Windows Vista Service Pack 1. SSTP tunnels Point-to-Point Protocol (PPP)
or L2TP traffic through an SSL 3.0 channel.
 MPVPN (Multi Path Virtual Private Network). Ragula Systems Development
Company owns the registered trademark "MPVPN".
 Secure Shell (SSH) VPN – OpenSSH offers VPN tunnelling to secure remote
connections to a network or inter-network links. This should not be confused with
port forwarding. The OpenSSH server provides a limited number of concurrent tunnels,
and the VPN feature itself does not support personal authentication.

Authentication

Tunnel endpoints must authenticate before secure VPN tunnels can be established. User-created
remote access VPNs may use passwords, biometrics, two-factor authentication or other
cryptographic methods. Network-to-network tunnels often use passwords or digital
certificates, as they permanently store the key to allow the tunnel to establish automatically
and without intervention.

Routing

Tunnelling protocols can be used in a point-to-point topology that would theoretically not be
considered a VPN, because a VPN by definition is expected to support arbitrary and changing
sets of network nodes. But since most router implementations support a software-defined
tunnel interface, customer-provisioned VPNs often are simply defined tunnels running
conventional routing protocols. On the other hand, provider-provisioned VPNs (PPVPNs)
need to support multiple coexisting VPNs, hidden from one another but operated by the same
service provider.

VPNs in mobile environments:

Mobile VPNs handle the special circumstances when an endpoint of the VPN is not fixed to a
single IP address, but instead roams across various networks such as data networks from
cellular carriers or between multiple Wi-Fi access points. Mobile VPNs have been widely
used in public safety, where they give law enforcement officers access to mission-critical
applications, such as computer-assisted dispatch and criminal databases, as they travel
between different subnets of a mobile network. They are also used in field service
management and by healthcare organizations among other industries.

Increasingly, mobile VPNs are being adopted by mobile professionals and white-collar
workers who need reliable connections. They allow users to roam seamlessly across networks
and in and out of wireless-coverage areas without losing application sessions or dropping the
secure VPN session. A conventional VPN cannot survive such events because the network
tunnel is disrupted, causing applications to disconnect, time out or fail, or even cause the
computing device itself to crash.
