TCP/IP Protocol

1.1 Protocol Layers

Layer 4. Application Layer
The Application layer is the topmost layer of the four-layer TCP/IP model, sitting directly above the Transport layer. It defines TCP/IP application protocols and how host programs interface with Transport layer services to use the network. The Application layer includes all the higher-level protocols, such as DNS (Domain Name System), HTTP (Hypertext Transfer Protocol), Telnet, FTP (File Transfer Protocol), TFTP (Trivial File Transfer Protocol), SNMP (Simple Network Management Protocol), SMTP (Simple Mail Transfer Protocol), DHCP (Dynamic Host Configuration Protocol), X Windows, and RDP (Remote Desktop Protocol).

Layer 3. Transport Layer
The Transport layer is the third layer of the four-layer TCP/IP model, positioned between the Application layer and the Internet layer. Its purpose is to permit devices on the source and destination hosts to carry on a conversation. The Transport layer defines the level of service and the status of the connection used when transporting data. The main protocols at this layer are TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
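As a hedged illustration of the Transport layer in action (nothing here comes from the text itself; the message and addresses are invented), the following Python sketch exchanges a single UDP datagram over the loopback interface. UDP is connectionless, so no handshake is needed before sending:

```python
import socket

# Create a UDP (SOCK_DGRAM) receiver bound to the loopback interface;
# port 0 asks the OS to pick a free ephemeral port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

# A second UDP socket acts as the sender. No connection is established:
# the datagram is simply addressed and sent.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello transport layer", addr)

data, _ = receiver.recvfrom(1024)
print(data)  # b'hello transport layer'

sender.close()
receiver.close()
```

A TCP equivalent would first perform a three-way handshake (via `connect`/`accept`) and then provide a reliable byte stream rather than discrete datagrams.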

Layer 2. Internet Layer
The Internet layer is the second layer of the four-layer TCP/IP model, positioned between the Network Access layer and the Transport layer. It packs data into packets known as IP datagrams, which contain source and destination address (logical or IP address) information used to forward the datagrams between hosts and across networks; the Internet layer is also responsible for routing IP datagrams. A packet-switching network depends upon a connectionless internetwork layer. This layer, known as the internet layer, is the linchpin that holds the whole design together. Its job is to allow hosts to insert packets into any network and have them travel independently to the destination. At the destination, packets may arrive in a different order than they were sent; it is the job of the higher layers to rearrange them before delivering them to the proper applications at the Application layer. The main protocols at the Internet layer are IP (Internet Protocol), ICMP (Internet Control Message Protocol), ARP (Address Resolution Protocol), RARP (Reverse Address Resolution Protocol), and IGMP (Internet Group Management Protocol).

Layer 1. Network Access Layer
The Network Access layer is the first layer of the four-layer TCP/IP model. It defines the details of how data is physically sent through the network, including how bits are electrically or optically signaled by hardware devices that interface directly with a network medium, such as coaxial cable, optical fiber, or twisted-pair copper wire.

The protocols included in the Network Access layer are Ethernet, Token Ring, FDDI, X.25, Frame Relay, etc. The most popular LAN architecture among those listed is Ethernet. Ethernet uses an access method called CSMA/CD (Carrier Sense Multiple Access/Collision Detection) to access the media; an access method determines how a host will place data on the medium. In CSMA/CD, every host has equal access to the medium and can place data on the wire when the wire is free from network traffic. When a host wants to place data on the wire, it checks the wire to find out whether another host is already using the medium. If there is traffic already on the medium, the host waits; if there is no traffic, it places the data on the medium. But if two systems place data on the medium at the same instant, the transmissions collide, destroying the data. If the data is destroyed during transmission, it needs to be retransmitted: after a collision, each host waits for a small interval of time and the data is then retransmitted.
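The "wait for a small interval" step is usually implemented as truncated binary exponential backoff. The sketch below is a simplified model, not a real driver: the slot time follows classic 10 Mbps Ethernet, but the function name and structure are our own:

```python
import random

SLOT_TIME_US = 51.2  # slot time for 10 Mbps Ethernet, in microseconds

def backoff_delay(attempt: int) -> float:
    """After the nth collision, wait a random number of slot times
    chosen from 0 .. 2^min(n, 10) - 1 (truncated exponential backoff)."""
    if attempt > 16:
        # Classic Ethernet gives up after 16 failed attempts.
        raise RuntimeError("excessive collisions: frame dropped")
    slots = random.randrange(2 ** min(attempt, 10))
    return slots * SLOT_TIME_US

# After the first collision a host waits 0 or 1 slot times;
# after the third, anywhere from 0 to 7 slot times.
print(backoff_delay(1), backoff_delay(3))
```

The doubling window means that the more often two hosts collide, the less likely they are to pick the same delay again.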

1.2 Internet Protocol
IP (Internet Protocol) is the primary network protocol used on the Internet, developed in the 1970s. On the Internet and many other networks, IP is often used together with the Transmission Control Protocol (TCP) and referred to interchangeably as TCP/IP. IP supports unique addressing for computers on a network. Most networks use the Internet Protocol version 4 (IPv4) standard, which features IP addresses four bytes (32 bits) in length; the newer Internet Protocol version 6 (IPv6) standard features addresses 16 bytes (128 bits) in length. Data on an Internet Protocol network is organized into packets. Each IP packet includes both a header (that specifies source, destination, and other information about the data) and the message data itself. IP functions at layer 3 of the OSI model. It can therefore run on top of different data link interfaces including Ethernet and Wi-Fi.
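To make the header/payload split concrete, here is a minimal Python sketch that packs a 20-byte IPv4 header with `struct` and computes the standard Internet checksum. The field layout follows RFC 791; the helper names and the example addresses (from the 192.0.2.0/24 documentation range) are our own:

```python
import struct

def checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while s >> 16:                      # fold carries back in
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def build_ipv4_header(src: str, dst: str, payload_len: int = 0) -> bytes:
    ver_ihl = (4 << 4) | 5              # version 4, header length 5 words
    total_len = 20 + payload_len
    # version/IHL, TOS, total length, ID, flags/fragment, TTL, protocol
    # (6 = TCP), checksum placeholder, source address, destination address.
    hdr = struct.pack("!BBHHHBBH4s4s", ver_ihl, 0, total_len, 0, 0, 64, 6, 0,
                      bytes(map(int, src.split("."))),
                      bytes(map(int, dst.split("."))))
    csum = checksum(hdr)
    return hdr[:10] + struct.pack("!H", csum) + hdr[12:]

hdr = build_ipv4_header("192.0.2.1", "192.0.2.2", payload_len=8)
print(len(hdr), hex(checksum(hdr)))  # 20 0x0 -- checksum over a valid header is zero
```

Verifying that the checksum of the completed header is zero is exactly the test a receiver performs before accepting a packet.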

1.3 Internet Addressing
An Internet Protocol address (IP address) is a numerical label assigned to each device (e.g., computer, printer) participating in a computer network that uses the Internet Protocol for communication.[1] An IP address serves two principal functions: host or network interface identification and location addressing. Its role has been characterized as follows: "A name indicates what we seek. An address indicates where it is. A route indicates how to get there."[2] The designers of the Internet Protocol defined an IP address as a 32-bit number,[1] and this system, known as Internet Protocol Version 4 (IPv4), is still in use today. However, due to the enormous growth of the Internet and the predicted depletion of available addresses, a new addressing system (IPv6), using 128 bits for the address, was developed in 1995,[3] standardized as RFC 2460 in 1998,[4] and its deployment has been ongoing since the mid-2000s.

IP addresses are binary numbers, but they are usually stored in text files and displayed in human-readable notations, such as 192.0.2.1 (for IPv4) and 2001:db8:0:1234:0:567:8:1 (for IPv6). The Internet Assigned Numbers Authority (IANA) manages the IP address space allocations globally and delegates five regional Internet registries (RIRs) to allocate IP address blocks to local Internet registries (Internet service providers) and other entities.

1.3.1 IPV4
In IPv4 an address consists of 32 bits, which limits the address space to 4,294,967,296 (2^32) possible unique addresses. IPv4 reserves some addresses for special purposes such as private networks (~18 million addresses) or multicast addresses (~270 million addresses).

IPv4 addresses are canonically represented in dot-decimal notation, which consists of four decimal numbers, each ranging from 0 to 255, separated by dots, e.g., 192.168.1.1. Each part represents a group of 8 bits (an octet) of the address. In some cases of technical writing, IPv4 addresses may be presented in various hexadecimal, octal, or binary representations.
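The octet arithmetic behind dot-decimal notation can be checked with Python's standard `ipaddress` module; a quick sketch (the example address is our own):

```python
import ipaddress

addr = ipaddress.IPv4Address("192.168.1.1")

# The 32-bit value behind the dotted quad: each octet is one byte.
as_int = int(addr)
print(as_int)        # 3232235777
print(hex(as_int))   # 0xc0a80101 -- the hexadecimal representation

# Rebuild the dotted form from the four 8-bit octets.
octets = [(as_int >> shift) & 0xFF for shift in (24, 16, 8, 0)]
print(".".join(map(str, octets)))  # 192.168.1.1
```

The hexadecimal and binary forms mentioned above are just alternative renderings of the same 32-bit value.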


In the early stages of development of the Internet Protocol, network administrators interpreted an IP address in two parts: a network number portion and a host number portion. The highest-order octet (most significant eight bits) in an address was designated as the network number and the remaining bits were called the rest field or host identifier and were used for host numbering within a network. This early method soon proved inadequate as additional networks developed that were independent of the existing networks already designated by a network number. In 1981, the Internet addressing specification was revised with the introduction of classful network architecture. Classful network design allowed for a larger number of individual network assignments and fine-grained subnetwork design. The first three bits of the most significant octet of an IP address were defined as the class of the address. Three classes (A, B, and C) were defined for universal unicast addressing. Depending on the class derived, the network identification was based on octet boundary segments of the entire address. Each class used successively additional octets in the network identifier, thus reducing the possible number of hosts in the higher-order classes (B and C). The following table gives an overview of this now obsolete system.

Class  Leading bits  Network/Host octets  Address range
A      0             1 / 3                0.0.0.0 - 127.255.255.255
B      10            2 / 2                128.0.0.0 - 191.255.255.255
C      110           3 / 1                192.0.0.0 - 223.255.255.255
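The now-obsolete classful scheme reduces to a few leading-bit tests on the first octet. A hedged sketch (the function name is ours; classes D and E, for multicast and reserved space, are standard but not discussed above):

```python
def address_class(address: str) -> str:
    """Classify an IPv4 address by its leading bits (classful, obsolete)."""
    first_octet = int(address.split(".")[0])
    if first_octet < 128:   # leading bit 0
        return "A"
    if first_octet < 192:   # leading bits 10
        return "B"
    if first_octet < 224:   # leading bits 110
        return "C"
    if first_octet < 240:   # leading bits 1110 (multicast)
        return "D"
    return "E"              # reserved

print(address_class("10.0.0.1"),
      address_class("172.16.0.1"),
      address_class("192.168.1.1"))  # A B C
```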

The rapid exhaustion of IPv4 address space, despite conservation techniques, prompted the Internet Engineering Task Force (IETF) to explore new technologies to expand the Internet's addressing capability. The permanent solution was deemed to be a redesign of the Internet Protocol itself. This next generation of the Internet Protocol, intended to replace IPv4 on the Internet, was eventually named Internet Protocol Version 6 (IPv6) in 1995.[3][4] The address size was increased from 32 to 128 bits, or 16 octets, which, even with a generous assignment of network blocks, is deemed sufficient for the foreseeable future. Mathematically, the new address space provides the potential for a maximum of 2^128, or about 3.403×10^38, unique addresses.

The new design is not intended to provide a sufficient quantity of addresses on its own, but rather to allow efficient aggregation of subnet routing prefixes to occur at routing nodes. As a result, routing table sizes are smaller, and the smallest possible individual allocation is a subnet for 2^64 hosts, which is the square of the size of the entire IPv4 Internet. At these levels, actual address utilization rates will be small on any IPv6 network segment. The new design also provides the opportunity to separate the addressing infrastructure of a network segment (that is, the local administration of the segment's available space) from the addressing prefix used to route external traffic for a network. IPv6 has facilities that automatically change the routing prefix of entire networks, should the global connectivity or the routing policy change, without requiring internal redesign or renumbering. The large number of IPv6 addresses allows large blocks to be assigned for specific purposes and, where appropriate, to be aggregated for efficient routing. With a large address space, there is no need for the complex address conservation methods used in Classless Inter-Domain Routing (CIDR).
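The figures quoted above are easy to verify with a quick sanity check in Python:

```python
# IPv6 address space: 128 bits.
ipv6_space = 2 ** 128
print(f"{ipv6_space:.3e}")  # about 3.403e+38 unique addresses

# The smallest customary allocation, a /64 subnet, holds 2^64 interface
# addresses: the square of the entire 2^32 IPv4 address space.
assert 2 ** 64 == (2 ** 32) ** 2
print(2 ** 64)  # 18446744073709551616 hosts in a single /64
```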
Many modern desktop and enterprise server operating systems include native support for the IPv6 protocol, but it is not yet widely deployed in other devices, such as home networking routers, voice over IP (VoIP) and multimedia equipment, and network peripherals.

1.4 OSPF (Open Shortest Path First)

The Open Shortest Path First (OSPF) routing protocol is a link-state protocol based on cost rather than hops or ticks (i.e. it is not a distance-vector routing protocol). As with RIPv2, different-sized subnet masks can be used within the same network, thereby allowing more efficient utilisation of the available address space. OSPF also supports unnumbered point-to-point links and equal-cost multipath (load balancing across up to 6 paths, meaning the distribution of IP datagrams down parallel routes to the same destination router using a round-robin or a direct addressing option).

Link State Advertisements

Because only link state advertisements are exchanged, rather than complete network information (as in RIP), OSPF networks converge far more quickly than RIP networks. In addition, Link State Advertisements are triggered by network changes (like the triggered updates in RIP). Dijkstra's algorithm, used to calculate the SPF tree, is CPU-intensive, so it is advisable to run it (the Soloist) on a router slot that either has a slow-speed network attached or none at all.

The OSPF Process

The Link State Database (LSDB) contains the link state advertisements sent around the 'Area', and each router holds an identical copy of this LSDB. The router then creates a Shortest Path First (SPF) tree by running Dijkstra's algorithm on the LSDB, and a routing table containing the best route to each router is derived from the SPF tree.
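The process just described can be sketched in a few lines of Python: a toy LSDB as an adjacency map, and Dijkstra's algorithm producing the least-cost distance and previous hop per router, from which a routing table could be derived. The topology and router names are invented for illustration:

```python
import heapq

# Toy LSDB: router -> list of (neighbour, link cost) pairs.
lsdb = {
    "R1": [("R2", 10), ("R3", 5)],
    "R2": [("R1", 10), ("R4", 1)],
    "R3": [("R1", 5), ("R4", 20)],
    "R4": [("R2", 1), ("R3", 20)],
}

def spf(lsdb, root):
    """Dijkstra's algorithm over the LSDB from a root router."""
    dist, prev = {root: 0}, {}
    pq = [(0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost in lsdb.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

dist, prev = spf(lsdb, "R1")
print(dist)  # {'R1': 0, 'R2': 10, 'R3': 5, 'R4': 11}
print(prev["R4"])  # R2 -- R4 is cheaper via R2 (10+1) than via R3 (5+20)
```

Because every router runs the same algorithm over an identical LSDB, all routers compute consistent, loop-free routes.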

OSPF Networks

Within OSPF there can be Point-to-Point networks or Multi-Access networks. A Multi-Access network can be one of the following:

Broadcast Network: A single message can be sent to all routers.

Non-Broadcast Multi-Access (NBMA) Network: Has no broadcast ability; ISDN, ATM, Frame Relay and X.25 are examples of NBMA networks.

Point-to-Multipoint Network: Used in group-mode Frame Relay networks.

Important Parameters
The Retransmit Interval is the number of seconds between retransmissions of LSAs across an adjacency. The following settings are often recommended:

Broadcast network: 5 seconds
Point-to-Point network: 10 seconds
NBMA network: 10 seconds
Point-to-Multipoint network: 10 seconds

The Hello Interval must be the same on each end of an adjacency, otherwise the adjacency will not form. In a Point-to-Point network this value is 10 seconds, whereas in a Non-Broadcast Multi-Access (NBMA) network the Hello Interval is 30 seconds. The Dead Interval is 40 seconds in a Point-to-Point network and 120 seconds in an NBMA network.

The Metric Cost can be related to line speed by using the formula 10^8 / line speed (bps). These costs are used to calculate the metric for a line and thus determine the best route for traffic. The lowest cost to a destination is calculated using Dijkstra's algorithm (also called the Shortest Path First (SPF) algorithm, described in RFC 2328). The lowest-cost link is used unless there are multiple equally low-cost links, in which case load balancing takes place between up to 6 route entries.

OSPF has a 5-second damper in case a link flaps: a link change will cause an update to be sent only after 5 seconds has elapsed, preventing routers from locking up due to continually running the SPF algorithm and never allowing OSPF to converge. There is also a timer that determines the minimum time between SPF calculations; the default for this is often 10 seconds. A password can be enabled on a per-Area basis, providing some form of security and consistency in route information.
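The cost formula can be illustrated directly. This sketch assumes the classic 10^8 bps (100 Mbps) reference bandwidth and truncates to an integer with a minimum cost of 1, which matches common implementations but is an assumption on our part:

```python
REFERENCE_BW = 10 ** 8  # 100 Mbps reference bandwidth, in bps

def ospf_cost(line_speed_bps: int) -> int:
    # Cost = 10^8 / line speed, truncated to an integer, minimum 1.
    return max(1, REFERENCE_BW // line_speed_bps)

for name, bps in [("T1 (1.544 Mbps)", 1_544_000),
                  ("10 Mbps Ethernet", 10_000_000),
                  ("100 Mbps Ethernet", 100_000_000)]:
    print(f"{name}: cost {ospf_cost(bps)}")
# T1 (1.544 Mbps): cost 64
# 10 Mbps Ethernet: cost 10
# 100 Mbps Ethernet: cost 1
```

Note that with this reference bandwidth every link of 100 Mbps or faster collapses to cost 1, which is why faster networks often raise the reference bandwidth.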

OSPF Packet Types
Within the OSPF header the packet type is indicated by way of a type code as follows:

Type Code  Packet Type
1          Hello
2          Database Description
3          Link State Request
4          Link State Update
5          Link State Acknowledgment

OSPF Areas

Within a network, multiple Areas can be created to help ease CPU use in SPF calculations, memory use, and the number of LSAs being transmitted; 60-80 routers are considered to be the maximum to have in one area. The Areas are defined on the routers, and interfaces are then assigned to the areas. The default area is Area 0, and it should exist even if there is only one area in the whole network (which is the default situation). As more areas are added, Area 0 becomes the 'backbone area'. In fact, if you have one area on its own then it could be configured with a different area number than 0 and OSPF will still operate correctly, but this should really be a temporary arrangement. You may, for instance, want to set up separate areas initially that are to be joined at a later date.

Separate LSDBs are maintained, one per area, and networks outside an area are advertised into that area; routers internal to an area have less work to do, since only topology changes within the area trigger a recalculation of the SPF specific to that area. Another benefit of implementing areas is that networks within an area can be advertised as a summary, so reducing the size of the routing table and the processing on routers external to the area. Creating summaries is made easier if addresses within an area are contiguous. In a multiple-area environment there are four types of router:

Internal router: All its directly connected networks are within the same area as itself. It is only concerned with the LSDB for that area.

Area Border Router: This has interfaces in multiple areas and so has to maintain multiple LSDBs, as well as being connected to the backbone. It sends and receives Summary Links Advertisements to and from the backbone area; these describe one network or a range of networks within the area.

Backbone Router: This has an interface connected to the backbone.

AS Boundary Router: This has an interface connected to a non-OSPF network, which is considered to be outside its Autonomous System (AS). The router holds AS external routes, which are advertised throughout the OSPF network, and each router within the OSPF network knows the path to each ASBR.

A RIP network will look at any IP address within an OSPF network as only one hop away. When configuring an area, authentication can be configured with a password which must be the same on a given network but (as in RIPv2) can be different for different interfaces on the same router. There are seven types of Link State Advertisements (LSAs):

Type 1: Router Links Advertisements are passed within an area by all OSPF routers and describe the router links to the network. These are only flooded within a particular area.

Type 2: Network Links Advertisements are flooded within an area by the DR and describe a multi-access network, i.e. the routers attached to particular networks.

Type 3: Summary Link Advertisements are passed between areas by ABRs and describe networks within an area.

Type 4: AS (Autonomous System) Summary Link Advertisements are passed between areas and describe the path to the AS Boundary Router (ASBR). These do not get flooded into Totally Stubby Areas.

Type 5: AS External Link Advertisements are passed between, and flooded into, areas by ASBRs and describe external destinations outside the Autonomous System. The areas that do not receive these are Stub, Totally Stubby and Not So Stubby areas. There are two types of External Link Advertisements, Type 1 and Type 2. Type 1 packets add the external cost to the internal cost of each link passed; this is useful when there are multiple ASBRs advertising the same route into an area, as you can decide on a preferred route. Type 2 packets only have an external cost assigned, which is fine for a single ASBR advertising an external route.
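The difference between the two external metric types can be sketched as a one-line cost rule (the function name and example costs are ours, mirroring the description above rather than any particular implementation):

```python
def effective_cost(metric_type: int, external_cost: int, internal_cost: int) -> int:
    """Type 1 adds the internal path cost to the external cost;
    Type 2 uses the external cost alone."""
    if metric_type == 1:
        return external_cost + internal_cost
    return external_cost

# The same route advertised with external cost 20, seen by a router whose
# internal cost to the ASBR is 5:
print(effective_cost(1, 20, 5))  # 25 -- internal distance to the ASBR matters
print(effective_cost(2, 20, 5))  # 20 -- internal distance ignored
```

This is why Type 1 is useful with multiple ASBRs advertising the same route: routers naturally prefer the nearer ASBR.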

Type 6: Multicast OSPF routers flood this Group Membership Link Entry.

Type 7: NSSA AS external routes flooded by the ASBR. The ABR converts these into Type 5 LSAs before flooding them into the backbone. The difference between Type 7 and Type 5 LSAs is that Type 5s are flooded into multiple areas whereas Type 7s are only flooded into NSSAs.

Stub Area
A stub area is an area which is out on a limb, with no routers or areas beyond it. A stub area is configured to prevent AS External Link Advertisements (Type 5) from being flooded into the stub area. The benefits of configuring a stub area are that the size of the LSDB is reduced along with the routing table, and fewer CPU cycles are used to process LSAs. Any router wanting access to a network outside the area sends the packets to the default route (0.0.0.0/0).

Totally Stubby Area
This is a Stub Area with the addition that Summary Link Advertisements (Type 3/4) are not sent into the area as well as External routes; a default route is advertised instead.

Not So Stubby Area (NSSA)
This area accepts Type 7 LSAs, which are external route advertisements like Type 5s, but they are only flooded within the NSSA. This is used by an ISP when connecting to a branch office running an IGP. Normally this would have to be a standard area, since a stub area would not import the external routes; but if it were a standard area linking the ISP to the branch office, then the ISP would receive all the Type 5 LSAs from the branch, which it does not want. Because Type 7 LSAs are only flooded to the NSSA, the ISP is saved from the external routes whereas the NSSA can still receive them. The NSSA is effectively a 'No-Man's Land' between two politically disparate organisations and is a hybrid stubby area. Over a slow link between the two organisations you would not normally configure OSPF, because the Type 5 LSAs would overwhelm the link, so redistribution across RIP would be common. With an NSSA, OSPF can still be maintained but by using less intensive Type 7 LSAs. RFC 1587 describes the Not So Stubby Area.

Virtual Links
If an area has been added to an OSPF network and it is not possible to connect it directly to the backbone or two organisations that both have a backbone area have merged, then a virtual link is required. The link must connect two routers within a common area called a Transit Area and one of these routers must be connected to the backbone. A good example of its use could be when two organisations merge and two Area 0s must be connected i.e. 'patching the backbone'. Virtual links cannot be used to patch together a split area that is not the backbone area. Instead a tunnel must be used, the IP address of which is in one of the areas.

Summary Links Advertisements are sent by Area Border Routers, and by default they advertise every individual network within each area to which the ABR is connected. Networks can be condensed into a network summary, reducing the number of Summary Links Advertisements being sent and the LSDBs of routers outside the area. In addition, if there is a network change then this will not be propagated into the backbone and other areas, so minimising the recalculation of SPF. There are two types of summarisation:

Inter-Area Route Summarisation is carried out on ABRs and applies to routes from within each area rather than external routes redistributed into OSPF.

External Route Summarisation is specific to external routes redistributed into OSPF.

A summary is configured by defining a range within which the subnets that need to be summarised fall. The range is made up of an address and a summary mask, the address encompasses the range of subnetworks to be included within the summary and the mask describes the range of addresses. Using the network in the following diagram, summaries can be created to illustrate the process:

Within Area 1: The summary address is 128.128.16.0 because of the way summarising works. This forms the bottom of the range of addresses within the summary mask of 255.255.240.0 and gives available addresses up to 128.128.31.0, see below:

11111111 11111111 11110000 00000000  (mask 255.255.240.0)

10000000 10000000 00010000 00000000  (128.128.16.0)
10000000 10000000 00010001 00000000  (128.128.17.0)
...
10000000 10000000 00011111 00000000  (128.128.31.0)

All the network possibilities from 16 to 31 are defined by the mask (third octet of 240), and the existing networks can be added to. If 17 had been used as the summary address instead of 16, then the third octet would be 00010001; the problem here is that a subnet bit is set to '1' in the host area of the address. The system will not use bits that are set to '1', it only increments from '0' to '1', which means that subnet 19 would be ignored, and 21, and so on. The other areas can be summarised in a similar manner.

If an Area Border Router does not have an interface in Area 0, then a virtual link needs to be created between an Area Border Router that is connected to the backbone, ending at an Area Border Router of the non-contiguous area. The virtual link is tied to the least-cost path through the 'Transit area' between the backbone and the non-contiguous area. An adjacency is formed between the two routers and the timers need to be identical.
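The same summary can be computed with Python's standard `ipaddress` module, assuming the sixteen /24 subnets 128.128.16.0 through 128.128.31.0 shown in the binary above:

```python
import ipaddress

# The sixteen /24 networks to be summarised.
subnets = [ipaddress.ip_network(f"128.128.{third}.0/24") for third in range(16, 32)]

# collapse_addresses merges contiguous, properly aligned blocks
# into the smallest covering set of supernets.
summary = list(ipaddress.collapse_addresses(subnets))
print(summary)  # [IPv4Network('128.128.16.0/20')]

# A /20 prefix corresponds to the summary mask 255.255.240.0.
print(summary[0].netmask)  # 255.255.240.0
```

Starting the range at 17 instead of 16 would break the alignment requirement, which is the same point the bit-level discussion above makes.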

External Routes
In order to make non-OSPF networks available to routers within an OSPF network, the router connected to the non-OSPF network needs to be configured as an AS Boundary Router (ASBR). As described earlier, AS External Link Advertisements (one for each external route) are flooded into the OSPF network (except Stub networks). There are two types of metric for external destinations:

Type-1 destination networks: The cost to an external network directly connected to the ASBR (close) plus the internal path cost within the OSPF network gives the total cost.

Type-2 destination networks: The cost to a 'far away' network (i.e. not directly connected to the ASBR) is merely the number of hops from the ASBR to the external network.

If a number of routes to a network are advertised to an internal OSPF router, then the router picks the Type-1 route rather than the Type-2 route. If the router learns the route via different protocols, then it decides which route to use based firstly on the preference value (configurable) and then on the route weight (non-configurable).

OSPF Accept Policies
These can only be configured for external routes (Type-1 and Type-2) and can be set up on any router. Consider the following network:

An OSPF Accept Policy can be configured on R3 to prohibit R3 from forwarding IP datagrams to N1. N1 is learned as a Type-1 external route from R1 (since N1 is directly connected to R1 which is an ASBR) but N1 is also learned as a Type-2 external route from R2 (since N1 is now several networks away from R2). Because the routing table in R3 sees N1 as a Type-1 or Type-2 external route, an Accept Policy can be created to exclude these networks from R3's routing table, however other routers within the OSPF domain can still learn about N1 unless Accept Policies are also configured on these.

OSPF Announce Policies
Unlike OSPF Accept Policies, OSPF Announce Policies can only be configured on an ASBR, since they determine which Type-1 and Type-2 external routes are advertised into the OSPF domain. Referring to Fig. 25c: we want traffic from R3 to N6 to be routed via R2, and if R2 goes down then the traffic should go via R1. R3 learns about N6 after receiving Type-2 external LSAs from R2 and R1, the metric being 2. To force traffic through R2 we can create an announce policy on R1 that advertises N6 with a metric of 3. Important parameters for both Accept and Announce Policies are the Name (of the policy; this needs to describe what it actually does), the precedence (out of a number of policies created, the one with the highest metric takes precedence) and the route source (hexadecimal values indicating the non-OSPF protocols contributing to the route). Just a final note to say that some items shown on the OSPF Announce Policy screen only actually apply to RIP policies; the software has been lazily written.

The Achilles' heel of OSPF is that all areas must be connected to the backbone area. This limits the number of routers that can take part in OSPF to about 1000. The protocol Intermediate System to Intermediate System (IS-IS) is designed to be more scalable than OSPF.

1.5 RIP (Routing Information Protocol)
The Routing Information Protocol (RIP) is a distance-vector routing protocol, which employs the hop count as a routing metric. RIP prevents routing loops by implementing a limit on the number of hops allowed in a path from the source to a destination; the maximum number of hops allowed for RIP is 15. This hop limit, however, also limits the size of networks that RIP can support. A hop count of 16 is considered an infinite distance and is used to deprecate inaccessible, inoperable, or otherwise undesirable routes in the selection process. RIP implements the split horizon, route poisoning and holddown mechanisms to prevent incorrect routing information from being propagated; these are some of the stability features of RIP. It is also possible to use the so-called RMTI[1] (Routing Information Protocol with Metric-based Topology Investigation) algorithm to cope with the count-to-infinity problem; with its help, it is possible to detect every possible loop with very little computational effort.
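The hop-count behaviour, including the 16-means-infinity rule, can be sketched as a single distance-vector update step. The table layout and names are illustrative, and this omits split horizon, poisoning and timers:

```python
INFINITY = 16  # RIP treats 16 hops as unreachable

def rip_update(table, neighbour, advertised):
    """Merge one neighbour's advertised routes into our table.
    table maps destination -> (metric, next_hop)."""
    changed = False
    for dest, metric in advertised.items():
        new_metric = min(metric + 1, INFINITY)  # one extra hop via the neighbour
        current_metric, next_hop = table.get(dest, (INFINITY, None))
        # Accept if strictly better, or if our current route already goes
        # via this neighbour (it is authoritative for that route).
        if new_metric < current_metric or next_hop == neighbour:
            if table.get(dest) != (new_metric, neighbour):
                table[dest] = (new_metric, neighbour)
                changed = True
    return changed

table = {"10.0.0.0": (2, "R2")}
rip_update(table, "R3", {"10.0.0.0": 3, "172.16.0.0": 1})
# 10.0.0.0 stays via R2 at 2 hops (R3's offer would cost 4);
# 172.16.0.0 is newly learned via R3 at 2 hops.
print(table)
```

The count-to-infinity problem arises precisely because two routers can keep accepting each other's stale routes, each adding one hop, until the metric reaches 16.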

Originally each RIP router transmitted full updates every 30 seconds. In the early deployments, routing tables were small enough that the traffic was not significant. As networks grew in size, however, it became evident there could be a massive traffic burst every 30 seconds, even if the routers had been initialized at random times. It was thought that, as a result of random initialization, the routing updates would spread out in time, but this was not true in practice. Sally Floyd and Van Jacobson showed in 1994[2] that, without slight randomization of the update timer, the timers synchronized over time. In most current networking environments, RIP is not the preferred choice for routing, as its time to converge and scalability are poor compared to EIGRP, OSPF, or IS-IS (the latter two being link-state routing protocols), and (without RMTI) its hop limit severely restricts the size of network it can be used in. However, it is easy to configure, because RIP does not require any parameters on a router, unlike other protocols.

1.6 RARP (Reverse Address Resolution Protocol)
The Reverse Address Resolution Protocol (RARP) is an obsolete computer networking protocol used by a host computer to request its Internet Protocol (IPv4) address from an administrative host, when it has available its Link Layer or hardware address, such as a MAC address. RARP is described in Internet Engineering Task Force (IETF) publication RFC 903. It has been rendered obsolete by the Bootstrap Protocol (BOOTP) and the modern Dynamic Host Configuration Protocol (DHCP), which both support a much greater feature set than RARP. RARP requires one or more server hosts to maintain a database of mappings of Link Layer addresses to their respective protocol addresses. Media Access Control (MAC) addresses needed to be individually configured on the servers by an administrator, and RARP was limited to serving only IP addresses. Reverse ARP differs from the Inverse Address Resolution Protocol (InARP) described in RFC 2390, which is designed to obtain the IP address associated with a local Frame Relay data link connection identifier. InARP is not used in Ethernet.

1.7 BOOTP (Bootstrap Protocol)
In computer networking, the Bootstrap Protocol, or BOOTP, is a network protocol used by a network client to obtain an IP address from a configuration server. The BOOTP protocol was originally defined in RFC 951. BOOTP is usually used during the bootstrap process when a computer is starting up. A BOOTP configuration server assigns an IP address to each client from a pool of addresses. BOOTP uses the User Datagram Protocol (UDP) as a transport on IPv4 networks only. Historically, BOOTP has also been used by Unix-like diskless workstations to obtain the network location of their boot image in addition to an IP address, and also by enterprises to roll out a pre-configured client (e.g., Windows) installation to newly installed PCs. Originally requiring the use of a boot floppy disk to establish the initial network connection, manufacturers of network cards later embedded the protocol in the BIOS of the interface cards as well as system boards with on-board network adapters, thus allowing direct network booting. In 2005, users with an interest in diskless stand-alone media center PCs showed new interest in this method of booting a Windows operating system.[1] The Dynamic Host Configuration Protocol (DHCP) is a more advanced protocol for the same purpose and has superseded the use of BOOTP. Most DHCP servers also function as BOOTP servers.

1.8 DHCP (Dynamic Host Configuration Protocol)
The Dynamic Host Configuration Protocol (DHCP) is a network configuration protocol for hosts on Internet Protocol (IP) networks. Computers that are connected to IP networks must be configured before they can communicate with other hosts. The most essential information needed is an IP address, a default route, and a routing prefix. DHCP eliminates this manual configuration task for the network administrator. It also provides a central database of devices that are connected to the network and eliminates duplicate resource assignments. In addition to IP addresses, DHCP also provides other configuration information, particularly the IP addresses of local Domain Name Servers (DNS), network boot servers, or other service hosts. DHCP is used for IPv4 as well as IPv6; while both versions serve much the same purpose, the details of the protocol for IPv4 and IPv6 are sufficiently different that they may be considered separate protocols. Hosts that do not use DHCP for address configuration may still use it to obtain other configuration information. Alternatively, IPv6 hosts may use stateless address autoconfiguration, and IPv4 hosts may use link-local addressing to achieve limited local connectivity.


BGP (Border Gateway Protocol)

The Border Gateway Protocol (BGP) is the protocol backing the core routing decisions on the Internet. It maintains a table of IP networks, or 'prefixes', which designate network reachability among autonomous systems (AS). It is described as a path-vector protocol. BGP does not use traditional Interior Gateway Protocol (IGP) metrics, but makes routing decisions based on path, network policies and/or rule-sets; for this reason, it is more appropriately termed a reachability protocol than a routing protocol. BGP was created to replace the Exterior Gateway Protocol (EGP) and allow fully decentralized routing, in order to transition from the core ARPAnet model to a decentralized system that included the NSFNET backbone and its associated regional networks. This allowed the Internet to become a truly decentralized system.

Since 1994, version 4 of BGP has been in use on the Internet; all previous versions are now obsolete. The major enhancement in version 4 was support for Classless Inter-Domain Routing and the use of route aggregation to decrease the size of routing tables. Since January 2006, version 4 has been codified in RFC 4271, which went through more than 20 drafts based on the earlier RFC 1771. RFC 4271 corrected a number of errors, clarified ambiguities and brought the specification much closer to industry practice.

Most Internet service providers must use BGP to establish routing between one another (especially if they are multihomed). Therefore, even though most Internet users do not use it directly, BGP is one of the most important protocols of the Internet; compare Signaling System 7 (SS7), the inter-provider core call setup protocol on the PSTN. Very large private IP networks also use BGP internally, for example to join a number of large OSPF (Open Shortest Path First) networks where OSPF by itself would not scale. Another reason to use BGP is multihoming a network for better redundancy, either to multiple access points of a single ISP (RFC 1998) or to multiple ISPs.
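The path-vector idea described above can be sketched in a few lines: each advertisement carries the full list of AS numbers it has traversed, a router rejects any path that already contains its own AS (loop prevention), and picks among the rest. The function name and the length-only tie-break below are simplifying assumptions; real BGP applies local preference, origin, MED and other policy rules before AS-path length.

```python
def best_route(my_as: int, candidates: list) -> list:
    """Pick a route for one prefix from advertised AS paths (path-vector sketch).

    Loop prevention: discard any path that already contains our own AS.
    Tie-breaking here uses only AS-path length, a simplification of the
    real BGP decision process.
    """
    valid = [path for path in candidates if my_as not in path]
    return min(valid, key=len) if valid else None

# AS 64512 hears three advertisements for the same prefix:
paths = [[64513, 64520], [64514, 64512, 64520], [64515, 64516, 64520]]
print(best_route(64512, paths))  # -> [64513, 64520]
```

The second path is discarded because 64512 already appears in it (a routing loop would result); of the remaining two, the shorter AS path wins.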

1.9 ARP (Address Resolution Protocol)
Address Resolution Protocol (ARP) is a telecommunications protocol used for resolution of network-layer addresses into link-layer addresses, a critical function in multiple-access networks. ARP was defined by RFC 826 in 1982 and is Internet Standard STD 37. It is also the name of the program for manipulating these addresses in most operating systems. ARP has been implemented with many combinations of network and data-link technologies, such as IPv4, Chaosnet, DECnet and Xerox PARC Universal Packet (PUP) over IEEE 802 standards, FDDI, X.25, Frame Relay and Asynchronous Transfer Mode (ATM), with IPv4 over IEEE 802.3 and IEEE 802.11 being the most common cases. In Internet Protocol version 6 (IPv6) networks, the functionality of ARP is provided by the Neighbor Discovery Protocol (NDP).
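For the common IPv4-over-Ethernet case, an ARP request is a small fixed-format packet. The sketch below, with the assumed helper name `build_arp_request`, lays out the 28-byte payload defined in RFC 826: the sender fills in its own addresses and leaves the target MAC zeroed, since discovering it is the point of the request.

```python
import struct
import socket

def build_arp_request(src_mac: bytes, src_ip: str, target_ip: str) -> bytes:
    """Build an ARP request for IPv4 over Ethernet (RFC 826), 28 bytes."""
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                # htype: 1 = Ethernet
        0x0800,           # ptype: 0x0800 = IPv4
        6, 4,             # hlen, plen: MAC and IPv4 address sizes
        1,                # oper: 1 = request, 2 = reply
        src_mac,                     # sender hardware address
        socket.inet_aton(src_ip),    # sender protocol (IPv4) address
        b"\x00" * 6,                 # target MAC: unknown - this is the query
        socket.inet_aton(target_ip), # target protocol (IPv4) address
    )

pkt = build_arp_request(b"\xaa\xbb\xcc\xdd\xee\xff", "192.168.1.10", "192.168.1.1")
print(len(pkt))  # -> 28
```

The request is broadcast on the local segment; whichever host owns the target IP replies with the same layout, `oper` set to 2 and its own MAC filled in.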


2. High Speed LAN
2.1 LAN Ethernet
Ethernet is a family of computer networking technologies for local area networks (LANs), commercially introduced in 1980. Standardized in IEEE 802.3, Ethernet has largely replaced competing wired LAN technologies. Systems communicating over Ethernet divide a stream of data into individual packets called frames. Each frame contains source and destination addresses and error-checking data so that damaged frames can be detected and re-transmitted. The standards define several wiring and signaling variants. The original 10BASE5 Ethernet used coaxial cable as a shared medium; later, the coaxial cables were replaced by twisted-pair and fiber-optic links in conjunction with hubs or switches. Data rates were periodically increased from the original 10 megabits per second to 100 gigabits per second. Since its commercial release, Ethernet has retained a good degree of compatibility. Features such as the 48-bit MAC address and the Ethernet frame format have influenced other networking protocols.
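The source and destination addresses mentioned above sit at the very front of every frame. A hedged sketch of splitting off the 14-byte Ethernet II header (the helper name `parse_ethernet_header` is illustrative):

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Split off the 14-byte Ethernet II header: dst MAC, src MAC, EtherType."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), hex(ethertype)

# A captured frame begins like this (EtherType 0x0800 marks an IPv4 payload;
# the first address here is the broadcast MAC):
frame = bytes.fromhex("ffffffffffff" "aabbccddeeff" "0800") + b"payload..."
print(parse_ethernet_header(frame))
# -> ('ff:ff:ff:ff:ff:ff', 'aa:bb:cc:dd:ee:ff', '0x800')
```

The error-checking data (the 32-bit frame check sequence) is appended at the end of the frame and is normally verified and stripped by the network interface before software sees the frame.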

2.1.1 Fast Ethernet
In computer networking, Fast Ethernet is a collective term for a number of Ethernet standards that carry traffic at the nominal rate of 100 Mbit/s, as against the original Ethernet speed of 10 Mbit/s. Of the Fast Ethernet standards, 100BASE-TX is by far the most common and is supported by the vast majority of Ethernet hardware currently produced. Fast Ethernet was introduced in 1995 and remained the fastest version of Ethernet for three years before being superseded by Gigabit Ethernet.

2.1.2 Gigabit Ethernet
In computer networking, Gigabit Ethernet (GbE or 1 GigE) is a term describing various technologies for transmitting Ethernet frames at a rate of a gigabit per second (1,000,000,000 bits per second), as defined by the IEEE 802.3-2008 standard. It came into use beginning in 1999, gradually supplanting Fast Ethernet in wired local networks thanks to its considerably higher speed. The cables and equipment are very similar to those of previous standards and, by 2010, were very common and economical. Half-duplex gigabit links connected through hubs are allowed by the specification, but full-duplex usage with switches is much more common.

2.1.3 FDDI (Fiber Distributed Data Interface)
Fiber Distributed Data Interface (FDDI) provides a 100 Mbit/s optical standard for data transmission in a local area network that can extend in range up to 200 kilometers (120 mi). Although the FDDI logical topology is a ring-based token network, it does not use the IEEE 802.5 Token Ring protocol as its basis; instead, its protocol is derived from the IEEE 802.4 token bus timed-token protocol. In addition to covering large geographical areas, FDDI local area networks can support thousands of users. As its standard underlying medium it uses optical fiber, although it can use copper cable, in which case it may be referred to as CDDI (Copper Distributed Data Interface). FDDI offers both a Dual-Attached Station (DAS), counter-rotating token-ring topology and a Single-Attached Station (SAS), token-passing ring topology.

FDDI was considered an attractive campus backbone technology in the early to mid 1990s, since existing Ethernet networks offered only 10 Mbit/s transfer speeds and Token Ring networks only 4 or 16 Mbit/s. It was thus the preferred choice of that era for a high-speed backbone, but FDDI has since been rendered effectively obsolete by Fast Ethernet, which offered the same 100 Mbit/s speed at much lower cost, and, since 1998, by Gigabit Ethernet, due to its speed, even lower cost and ubiquity. FDDI, as a product of American National Standards Institute committee X3T9.5 (now X3T12), conforms to the Open Systems Interconnection (OSI) model of functional layering of LANs using other protocols. FDDI-II, a version of FDDI, adds the capability to carry circuit-switched service on the network so that it can also handle voice and video signals. Work has also been done to connect FDDI networks to the Synchronous Optical Network (SONET).

An FDDI network contains two rings, one serving as a secondary backup in case the primary ring fails. The primary ring offers up to 100 Mbit/s capacity. When a network has no requirement for the secondary ring to act as a backup, it can also carry data, extending capacity to 200 Mbit/s. A single ring can extend the maximum distance; a dual ring can extend 100 km (62 mi). FDDI has a larger maximum frame size (4,352 bytes) than standard 100 Mbit/s Ethernet, which supports a maximum frame size of only 1,500 bytes, allowing better throughput. Designers normally construct FDDI rings in the form of a "dual ring of trees" (see network topology). A small number of devices (typically infrastructure devices such as routers and concentrators rather than host computers) connect to both rings, hence the term "dual-attached". Host computers then connect as single-attached devices to the routers or concentrators. The dual ring in its most degenerate form simply collapses into a single device. Typically, a computer room contains the whole dual ring, although some implementations have deployed FDDI as a metropolitan area network.
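The backup role of the secondary ring can be illustrated with a toy model: when a link on the primary ring breaks, the stations on either side of the break "wrap" traffic onto the counter-rotating secondary ring, so the two rings join into one longer single ring and the network keeps running. The function below is a hypothetical sketch of that wrapped traversal, not any part of the FDDI standard.

```python
def ring_after_break(stations: list, broken_link: tuple) -> list:
    """Return the wrapped traversal order after a primary-ring link fails.

    `stations` lists the stations in primary-ring order; `broken_link`
    is the failed (upstream, downstream) pair on the primary ring.
    """
    a, b = broken_link
    i = stations.index(b)
    # Travel the primary ring from b around to a (the broken link is
    # skipped), then return along the counter-rotating secondary ring.
    primary = stations[i:] + stations[:i]   # b ... a on the primary
    secondary = list(reversed(primary))     # a ... b on the secondary
    # Wrap points a and b appear once per cycle; interior stations twice.
    return primary + secondary[1:-1]

# Four dual-attached routers; the primary link from R4 back to R1 fails:
print(ring_after_break(["R1", "R2", "R3", "R4"], ("R4", "R1")))
# -> ['R1', 'R2', 'R3', 'R4', 'R3', 'R2']
```

The wrapped path is twice as long as either ring alone, which is why the 200 km total fiber budget corresponds to a 100 km dual-ring circumference.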

2.2 DSL (Digital Subscriber Line)
Digital subscriber line (DSL, originally digital subscriber loop) is a family of technologies that provide Internet access by transmitting digital data over the wires of a local telephone network. In telecommunications marketing, the term DSL is widely understood to mean Asymmetric Digital Subscriber Line (ADSL), the most commonly installed DSL technology. DSL service is delivered simultaneously with wired telephone service on the same telephone line. This is possible because DSL uses higher frequency bands for data, separated by filtering. On the customer premises, a DSL filter on each outlet removes the high-frequency interference, enabling simultaneous use of the telephone and data services. The data bit rate of consumer DSL services typically ranges from 256 kbit/s to 40 Mbit/s in the direction to the customer (downstream), depending on DSL technology, line conditions and service-level implementation. In ADSL, the data throughput in the upstream direction (the direction to the service provider) is lower, hence the designation of asymmetric service. In Symmetric Digital Subscriber Line (SDSL) services, the downstream and upstream data rates are equal.


ADSL (Asymmetric Digital Subscriber Line)

Asymmetric digital subscriber line (ADSL) is a type of digital subscriber line technology, a data communications technology that enables faster data transmission over copper telephone lines than a conventional voiceband modem can provide. It does this by utilizing frequencies that are not used by a voice telephone call. A splitter, or DSL filter, allows a single telephone connection to be used for both ADSL service and voice calls at the same time. ADSL can generally be distributed only over short distances from the telephone exchange (the last mile), typically less than 4 kilometres (2 mi), but has been known to exceed 8 kilometres (5 mi) if the originally laid wire gauge allows for further distribution. At the telephone exchange the line generally terminates at a digital subscriber line access multiplexer (DSLAM), where another frequency splitter separates the voice-band signal for the conventional phone network. Data carried on the ADSL line is typically routed over the telephone company's data network and eventually reaches a conventional Internet Protocol network.



BHARAT DAS 8EC-A, A7605108057
