Network Communications – Networking Essentials – MIM0609

Chapter 1 - Introduction to Computer Networks

Basic Network Concepts


People communicate with other people in a variety of ways. For example, we talk to people face-
to-face, or we write a letter and send it to someone and they write us a letter back. These are
common forms of communication. When people use computers to communicate they use a
computer network. This course is about computer networks and how they are used to transmit
information between computers, and ultimately between people. It is a fundamental course that
provides a broad overview and foundation for understanding networks and working in the
computer and networking industry.

A network is a group of connected devices. For people using computers to communicate with one
another, a network must be used. Simply stated, the requirement of a network is to move
information from a source to a destination.

Networks are an interconnection of computers. These computers can be linked together using a
wide variety of different cabling types, and for a wide variety of different purposes.

People use computers and networks for a wide variety of reasons. Three common reasons that
people use networks to send information from a source, such as a personal computer (PC), to a
destination, such as a printer, are:
1. Communicate and collaborate (e.g., e-mail and newsgroups)
2. Share information (e.g., document sharing)
3. Share resources (e.g., printers and servers)

Take for example a typical office scenario where a number of users in a small business require
access to common information. As long as all user computers are connected via a network, they
can share their files, exchange mail, schedule meetings, send faxes and print documents all from
any point of the network.

It would not be necessary for users to transfer files via electronic mail or floppy disk; rather, each
user could access all the information they require, leading to less wasted time and hence
greater productivity.

Imagine the benefits of a user being able to directly fax the Word document they are working on,
rather than print it out, then feed it into the fax machine, dial the number etc.

Small networks are often called Local Area Networks (LANs). A LAN is a network allowing easy
access to other computers or peripherals. The typical characteristics of a LAN are:
 Physically limited (< 2km)
 High bandwidth (> 1mbps)
 Inexpensive cable media (coax or twisted pair)
 Data and hardware sharing between users
 Owned by the user

Examples of sharing resources are:


 Sharing computer files and disk space
 Sharing high-quality printers
 Access to common fax machines
 Access to common modems
 Multiple accesses to the Internet

Network Devices
A source or destination can be any device capable of transferring information electronically from
one point (source) to another point (destination). There are many examples of devices that
communicate over a network. They take many forms and vary widely in capabilities. These
include:
 PCs
 Macintosh computers
 Workstations
 Printers
 Servers

Generically speaking, these devices are referred to as nodes. Nodes are the various endpoints
that are connected together to form a network. The connection between nodes is made
using some type of connection medium. Examples of connection media include:
 Copper cables
 Fiber optic cables
 Radio waves/wireless

Networks are used in a wide variety of ways to tie computers together so they can communicate
with one another and provide services to the user of a network.

Computer Components
Computers come in all shapes and sizes, and are manufactured to serve different purposes.
Some computers are made for operation in a single-user environment. Other computers are
made to support a small number of users in a workgroup environment, while still others may
support thousands of users in a large corporation.

Computers attach to a network through a network interface card (NIC). Typically cables are
attached to a NIC to connect to other computers or networks.
Several aspects of computer technology to be considered are:
 Video
 Microprocessor
 Memory
 Storage
 Input/Output
 Application software
 System software
 Device driver

Whether the computer that attaches to the network is a small desktop computer or a powerful
mainframe, all computers contain the same basic structure and the components mentioned
above.

Computers are the endpoints, or nodes, in a network and come in a variety of shapes and sizes.
It is important to understand common components found in most computer systems.

The CPU is the brain of any computer. The CPU executes instructions developed by a computer
programmer, and directs the flow of information within the computer. The terms microprocessor
and CPU are often used interchangeably. At the heart of a CPU is a microprocessor.

A CPU runs computer programs, and directs the flow of information between different
components of a computer. There are two basic characteristics that differentiate microprocessors:
 Data bus size—the number of bits that a CPU can send or receive in a single instruction.
 Clock speed—given in MHz, the clock speed determines how many instructions per
second the processor can execute.

In both cases, the higher the value, the more powerful the CPU. For example, a 64-bit
microprocessor that runs at 450 MHz is more powerful than a 16-bit microprocessor that runs at
100 MHz. The vast majority of all desktop PCs incorporate a single Intel architecture processor
(such as Pentium). Although Intel is the world’s largest microprocessor manufacturer, it does not
have the entire PC processor market. For example, Advanced Micro Devices (AMD)
manufactures processors comparable to the Intel Pentium.
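
These two figures can be combined into a rough comparison. The short Python sketch below is a
crude back-of-the-envelope model only, not a benchmark: it simply multiplies data bus size by
clock speed. Real performance also depends on the instruction set, cache, and memory speed.

# Rough comparison of the two CPU characteristics described above.
def raw_throughput_bits_per_sec(bus_width_bits, clock_mhz):
    """Crude upper bound on bits moved per second: width x clock."""
    return bus_width_bits * clock_mhz * 1_000_000

old_cpu = raw_throughput_bits_per_sec(16, 100)   # 16-bit CPU at 100 MHz
new_cpu = raw_throughput_bits_per_sec(64, 450)   # 64-bit CPU at 450 MHz
print(f"{old_cpu:,} vs {new_cpu:,} bits/s, about {new_cpu / old_cpu:.0f}x")   # about 18x
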
Key Point
The I/O of a computer travels over a bus, which is a collection of wires that transmit data from
one part of a computer to another. When used in reference to desktop computers, the term bus
usually refers to an internal bus. An internal bus connects all internal computer components to the
CPU and main memory. Expansion slots connect plug-in expansion boards, such as internal
modems or NICs, to the I/O bus. All buses consist of two parts, an address bus and a data bus.
The data bus transfers actual data, and the address bus transfers information about where the
data should go.

The size of a bus, known as its width, is important because it determines how much data can be
transmitted at one time. For example, a 16-bit bus can transmit 16 bits of data, and a 32-bit bus
can transmit 32 bits of data.
Every bus has a clock speed measured in MHz; the faster the bus clock, the faster the bus’ data
transfer rate. A fast bus allows data to be transferred faster, making applications run faster. On
PCs, the ISA bus is being replaced by faster buses, such as the PCI bus. PCs made today
include a local bus for data that requires especially fast transfer speeds, such as video data. The
local bus is a high-speed pathway that connects directly to the processor.
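
As a concrete illustration, a bus's peak transfer rate can be estimated as its width multiplied by its
clock rate. The Python sketch below is illustrative only: the example figures are assumptions that
loosely resemble the ISA and PCI buses mentioned above, and real buses lose cycles to
addressing and arbitration, so sustained rates are lower.

# Peak (theoretical) bus transfer rate: width in bits x clock rate.
def peak_bus_rate_mb_per_sec(width_bits, clock_mhz):
    bytes_per_transfer = width_bits // 8
    return bytes_per_transfer * clock_mhz        # assumes one transfer per clock cycle

print(peak_bus_rate_mb_per_sec(16, 8))    # a 16-bit bus at 8 MHz:   16 MB/s
print(peak_bus_rate_mb_per_sec(32, 33))   # a 32-bit bus at 33 MHz: 132 MB/s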

A computer’s memory stores information currently being worked on by the CPU; it is a short-term
storage component of a computer. Memory differs from the long-term storage systems of a
computer, such as a hard drive or floppy, where information is stored for longer periods of time.
When you run a program, it is typically read from a disk drive (such as a hard drive or CD-ROM
drive), and put in memory for execution. After the program has completed or is no longer needed,
it is typically removed from memory. Some programs remain in memory after execution and are
called Terminate and Stay Resident (TSR) programs; they wait for an event to occur that notifies
them to begin processing. Memory is the system's internal, board-mounted storage area in the
computer, also known as physical memory. The term memory identifies data storage that comes
in the form of chips or plug-in modules, such as SIMMs or DIMMs. Most computers also use
virtual memory, in which portions of physical memory are swapped to a hard disk. Virtual memory
expands the amount of memory available to an application beyond the physical, installed memory by moving old
data to the hard drive. Every computer comes with a certain amount of physical memory, usually
referred to as main memory or RAM. A computer that has 1 megabyte (MB) of memory can hold
approximately 1 million bytes (or characters) of information.

There are several different types of memory, some of which are listed below:
 RAM – RAM is the same as main memory. When used by itself, the term RAM refers to
read and write memory; that is, you can both write data into RAM and read data from
RAM. This is in contrast to read-only memory (ROM), which only permits you to read
data. Most RAM is volatile, which means it requires a steady flow of electricity to maintain
its contents. As soon as the power is turned off, whatever data was in RAM is lost.
 ROM—Computers almost always contain a small amount of ROM that holds instructions
for starting up the computer. Unlike RAM, ROM cannot be written to after it is initially
programmed.
 Programmable read-only memory (PROM)—A PROM is a memory chip on which you
can store a program. After the PROM has been programmed, you cannot wipe it clean
and reprogram it.
 Erasable programmable read-only memory (EPROM)—An EPROM is a special type of
PROM that can be erased by exposing it to ultraviolet light. EPROM can be
reprogrammed and reused after it has been erased.

A NIC (pronounced “nick”) is the hardware component inserted into the PC or workstation that
provides connectivity to a network. The NIC provides the interface between the physical
networking cable and the software implementing the networking protocols. The NIC is responsible
for transmitting and receiving information to and from a network.

The NIC fits into an expansion slot on the motherboard’s I/O bus. This bus connects adapter
cards, such as NICs, to the main CPU and RAM. The speed at which data may be transferred to
and from the NIC is determined by the I/O bus bandwidth, processor speed, NIC design and
quality of components, operating system, and the network topology used.

New cards are software configurable, using a software program to configure the resources used
by the card. Other cards are PNP (Plug and Play), which automatically configure their resources
when installed in the computer, simplifying installation. With an operating system like Windows
95, auto-detection of new hardware makes network connections simple and quick.

On power-up, the computer detects the new network card, assigns the correct resources to it, and
then installs the networking software required for connection to the network. All the user needs to
do is assign network details such as the computer name.

For Ethernet, a 48-bit number identifies each card. This number uniquely identifies the computer.
These network card numbers are used at the Medium Access Control (MAC) layer to identify the
destination for the data. When talking to another computer, the data you send to that computer is
prefixed with the number of the card you are sending the data to.

This allows intermediate devices in the network to decide in which direction the data should go, in
order to transport the data to its correct destination.
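
As a small illustration of these 48-bit card numbers, the Python sketch below formats one in the
familiar six-byte hexadecimal notation and prefixes it to a payload, loosely mimicking how a frame
carries the destination address ahead of the data. The card number and the framing here are
illustrative assumptions only; a real Ethernet frame also carries a source address, a type field,
and a checksum.

def format_mac(card_number):
    raw = card_number.to_bytes(6, "big")          # 48 bits = 6 bytes
    return ":".join(f"{b:02X}" for b in raw)

destination = 0x00A0C914C829                      # illustrative card number
payload = b"hello, network"

frame = destination.to_bytes(6, "big") + payload  # destination prefixed to the data
print(format_mac(destination))                    # 00:A0:C9:14:C8:29
print(frame[:6].hex(), frame[6:])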

There are many ways to connect NICs to a network. The NIC attaches to a network via a twisted
pair cable through a wall outlet. On the other side of the wall, the twisted pair cable goes to a
punchdown block, a place where cables are often terminated to provide continuing connections to
other devices. From the punchdown block, the twisted pair cable is sometimes connected to a
hub or Multi-station Access Unit (MAU) that forms the central connecting point of the network. If
the network contains a dedicated server or other communicating device, it also contains a NIC.

Basic Computer Terminology


 Analog—An analog signal, also referred to as an analog wave or carrier wave, is a
continuous electrical signal on a communication circuit. An analog signal in its normal
form does not have intelligence. Modulation is used to add intelligence to an analog
wave.
 Bandwidth—Bandwidth is the difference between the highest and lowest frequencies or
it is the maximum bit rate that can be transmitted across a transmission line or through a
network. It is measured in Hz for analog networks, and bps for digital networks.
 Binary—Binary is the base two number system that computers use to represent data. It
consists of only two numbers: 0 and 1.
 Bit—A bit is the smallest unit of data in a computer. A bit has a single binary value, either
0 or 1. Although computers usually provide instructions that can test and manipulate bits,
they generally are designed to store data and execute instructions in bit multiples called
bytes. In most computer systems, there are eight bits in a byte. The value of a bit is
usually stored as either above or below a designated level of electrical charge in a single
capacitor within a memory device.
 Bus—A bus connects the central processor of a PC with the video controller, disk
controller, hard drives, and memory. There are many types of buses, including internal
buses, external buses, and LANs that operate on bus topologies. Internal buses are
buses such as AT, ISA, EISA, and MCA that are internal to a PC.
 Byte—In most computer systems, a byte is a unit of information that is 8 bits long. A byte
is the unit most computers use to represent a character such as a letter, number, or
typographic symbol (e.g., “g,” “5,” or “?”). A byte can also hold a string of bits that need to
be used in some larger unit for application purposes (for example, the stream of bits that
constitute a visual image within an application program).
 Central Processing Unit (CPU)—A CPU is the processor in a computer that processes
the code and associated data in a computer system.
 Client/Server—Client server or client/server is a mode in computer networking where
individual computers can access data or services from a common high-performance
computer. For instance, when a PC needs data from a common database located on a
computer attached to a LAN, the PC is the client and the network computer where the
database resides is the server.
 Clustering—Clustering is a grouping of devices or other components, typically for the
enhancement of performance. Clustering computers to execute a single application
speeds up the operation of the application.
 Device Driver—A device driver is a program that controls devices attached to a
computer, such as a printer or hard disk drive.
 Digital Data—Digital data is electrical information that represents digits (i.e., 1s and 0s).
1s and 0s (bits) are combined to form bytes and characters, such as letters of the
alphabet.
 Dual Inline Memory Module (DIMM)—DIMM is a small PC circuit board that holds
memory chips. A DIMM has a 64-bit memory path to the CPU, which is compatible with
the Intel Pentium processor.
 Electronic Mail (E-mail)—E-mail is a widely used application for transferring messages
and files from one computer system to another. If the two computers sending messages
use different types of e-mail packages, an e-mail gateway is required to convert from one
format to another.
 Extended Data Output (EDO)—EDO is a type of RAM memory chip with faster
performance than conventional memory. Unlike conventional RAM, EDO RAM retrieves a
block of memory as it sends the previous block to the CPU.
 Extended Industry Standard Architecture (EISA)—EISA is a 32-bit bus technology for
PCs that supports multiprocessing. EISA was designed in response to IBM’s MCA;
however, both EISA and MCA were replaced by the PCI bus. See “PCI” and “bus.”
 Hardware—Hardware is the physical part of a computer, which can include things such
as hard drives, circuit boards inside a computer, and other computer components.
 Industry Standard Architecture (ISA)—ISA is an older, 8- or 16-bit PC bus technology
used in IBM XT and AT computers. See “bus.”
 Input/Output (I/O)—An I/O channel is the path from the main processor or CPU of a
computer to its peripheral devices.
 Internet Protocol (IP)—IP is the protocol responsible for getting packets of information
across a network.
 Local Area Network (LAN)—A LAN is a grouping of computers via a network typically
confined to a single building or floor of a building.
 Mainframe—A mainframe is a large-scale computer system. Mainframe computers are
powerful, and attach to networks and high-speed peripheral devices, such as tape drives,
disk drives, and printers.
 Megahertz (Mhz)—One hertz is one cycle of a sine wave per second. One MHz is 1
million cycles per second.
 Micro Channel Architecture (MCA)—MCA is IBM’s 32-bit internal bus architecture for
PCs. MCA was never widely accepted by the PC industry, and was replaced by the PCI
bus architecture.
 Modulation—Modulation is the process of modifying the form of a carrier wave (electrical
signal) so that it can carry intelligent information on some sort of communications
medium. Digital computer signals (baseband) are converted to analog signals for
transmission over analog facilities (such as the local loop). The opposite process,
converting analog signals back into their original digital state, is referred to as
demodulation.
 Network Interface Card (NIC)—A NIC is any workstation or PC component (usually a
hardware card) that allows the workstation or PC to communicate with a network. A NIC
address is another term for hardware address or MAC address. The NIC address is built
into the network interface card of the destination node.
 Peripheral Component Interconnect (PCI)—PCI is a newer 32-bit and 64-bit local bus
technology for PCs. See “bus.” (Servers use 64-bit PCI, and PCs use 32-bit.)
 Peripherals—Peripherals are parts of a computer that are not on the primary board
(mother board) of a computer system. Peripherals include hard drives, floppy drives, and
modems.
 Personal Computer Memory Card International Association (PCMCIA)—The PCMCIA slot in a
laptop was designed for PC memory expansion. NICs
and modems can attach to a laptop through the PCMCIA slot.
 Personal Digital Assistant (PDA)—PDA devices are very small, and provide a subset of
the operations of a typical computer (PC). They are used for scheduling, electronic
notepads, and small database applications.
 Redirector—A redirector is a client software component in a client/server configuration.
The redirector is responsible for deciding if a request for a computer service (i.e., read a
file) is for the local computer or network server.
 Random Access Memory (RAM)—RAM is a computer’s main working memory.
Applications use RAM to hold instructions and data during processing. Applications can
repeatedly write new data to the same RAM, but all data is erased from RAM when the
computer loses power or is shut down.
 RJ-45 Connector—An RJ-45 connector is a snap-in connector for UTP cable, similar to
the standard RJ-11 telephone cable connector.
 Server—A server is a device attached to a network that provides one or more services to
users of the network.
 Single Inline Memory Module (SIMM)—SIMM is a small PC circuit board that holds
memory chips. A SIMM has a 32-bit memory path to the CPU. SIMM capacities are
measured in bytes.
 Synchronous Dynamic Random Access Memory (SDRAM)—SDRAM is a type of
RAM, commonly packaged on DIMMs. It is replacing EDO RAM because it is
approximately twice as fast (up to 133 MHz).
 Synchronous Graphic Random Access Memory (SGRAM)—SGRAM is a type of
dynamic RAM optimized for graphics-intensive operations. Like SDRAM, SGRAM can
synchronize itself with the CPU bus clock, up to speeds of 100 MHz.
 Transmission Control Protocol (TCP)—TCP is normally used in conjunction with IP in
a TCP/IP-based network. The two protocols working together provide for connectivity
between applications of networked computers.
 UNIX—UNIX is an operating system used in many workstations and mid-range computer
systems. It is an alternative to PC and Macintosh computer operating systems. Linux is a
UNIX-like clone.
 Workstations—Workstations are a type of computer, typically more powerful than a PC
but still used by a single user.

There are many different types of computers used in organizations, most of which are tied to a
network. Some of these computers are small and can only run a limited number of applications.

Others are large and can run many applications and service many users at the same time. This
lesson looks at classifications of computers found in networks and the primary purpose of each
type.
Computer classifications include:
 Desktop computers
 Mid-range computers and servers
 Mainframe computers
 Others

A desktop computer is a computer, possibly attached to a network, that is used by a single
individual. Desktop computers are sometimes divided into two broad categories: personal
computers (PCs) and workstations. The difference between PCs and workstations, although not
always clear, is generally in the operating system software used and the graphics capabilities.
PCs typically run one of several types of Microsoft Windows operating systems, or Macintosh
Operating Systems in the case of Apple products, while a workstation typically runs a version of
the UNIX operating system. A workstation often features high-end hardware such as large, fast
disk drives, large amounts of Random Access Memory (RAM), advanced graphics capabilities,
and high-performance multiprocessors. There is a great deal of overlap in features and functions
of desktop computers, and the distinctions between PCs and workstations are blurring.

The term "mid-range" covers an extensive range of computer systems that support more than one
user, and may support many, overlapping with desktop computers at one end and mainframe
computers at the other end. Mid-range computers include:
 High-end Reduced Instruction Set Computer (RISC) CPU-based
 Servers (IBM AS/400)
 Intel-based servers (Compaq, Dell, and Hewlett-Packard)
 UNIX-based servers of all types

Mid-range and server systems are commonly used in small to medium organizations, for example
for departmental information processing. Typical applications include:
 Finance and accounting (AS/400)
 Database (Intel-based or UNIX-based)
 Printer servers (Intel or UNIX-based)
 Communications servers (Intel-based)

Mainframe computers and associated client/server products can manage huge organization-wide
networks, store and ensure the integrity of massive amounts of crucial data, and drive data across
an organization. The unique and inherent capabilities of leading-edge mainframe systems
include:
 Constant availability—mainframes are designed to be operated around the clock every
day of the year. Availability is often quoted as a number of nines, for example 99.999%
("five nines"); a small downtime calculation follows this list.
 Rigorous backup, recovery, and security—mainframes provide automatic and constant
backup, tracking, and safeguarding of data.
 Huge economies of scale—the vast resources of mainframes reduce the hidden costs
associated with multiple LANs, such as administration and training, extra disk space, and
printers.
 High bandwidth I/O facilities—the huge I/O bandwidth on mainframes allows rapid and
effective data transfer, allowing thousands of clients to be serviced simultaneously, and
caters to emerging applications like multimedia, including digitized video on demand.
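
The "number of nines" figure translates directly into allowable downtime. The short Python sketch
below is purely illustrative: it converts a few availability percentages into minutes of downtime
per year, and shows that five nines works out to roughly five minutes per year.

# "Number of nines": convert an availability percentage into downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (99.9, 99.99, 99.999):          # three, four, five nines
    downtime = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{availability}% -> about {downtime:.1f} minutes of downtime per year")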

Laptops, palmtops or PDAs, and thin client terminals have become mainstream devices in the
personal computing world over the last five years. To satisfy a society that is increasingly mobile,
laptops and palmtops have become prevalent. Performance in laptops is closing in on that of the
desktop computer as miniaturization improves each year.

Palmtops are also catching up in performance. E-mail, calendaring, low-end spreadsheets, and
handwriting recognition compatible with Windows operating systems have become a reality with
the newest generation of palmtops. In the last few years, thin client terminals have staged a
comeback; they are reminiscent of the terminals of mainframe days, but with graphics
capabilities. Thin client terminals further reduce cost at the desktop by having a central server
handle processing otherwise handled by a PC. The terminal only
displays the screen images, mouse movements, and keystrokes of the application running on the
server.

Software and Operating System


Two primary types of system software are discussed in this lesson: operating systems and device
drivers. An operating system is the group of software programs that controls a computer and
performs the basic tasks needed for it to do useful work. The tasks of an operating system include:
 Managing the operation of computer programs
 Interpreting keyboard input
 Displaying data to the computer screen
 File I/O
 Controlling peripheral devices, such as floppy disks, hard disks, and printers

Operating systems can be classified as follows:


 Multitasking—Allows more than one program to run at the same time.
 Multithreading—Allows different parts of a single program (threads) to run concurrently.
 Multiuser—Allows multiple users to run programs on the same computer at the same
time. Some operating systems running on large-scale computers permit hundreds or
even thousands of concurrent users.
 Multiprocessing—Supports running a program on more than one CPU.
Key Point
Operating systems provide a software platform that can run other programs, called application
programs. The application programs must be written to run on top of a particular operating
system. Your choice of operating system determines, to a great extent, the applications you can
run. For PCs, the most popular operating systems are Windows 95, Windows 98, Windows NT,
and Windows 2000, but others are also available, such as the UNIX-based operating systems,
including Linux.

As a user, you normally interact with the operating system through a set of commands. The
commands are accepted and executed by a part of the operating system called the command
processor or command line interpreter.

Graphical user interfaces (GUIs) allow you to enter commands by pointing and clicking at objects
that appear on the screen. Microsoft has generally dominated the PC operating system market
since its foundation, first with its command line-based disk operating system (DOS), and then with
the Windows user interface, a graphical overlay for DOS. The Macintosh operating system is a
competitor, which runs on Macintosh platforms and not PC-based platforms.

Popular OS

Microsoft Windows was the initial GUI that ran on top of DOS. It was first launched in response to
the need to make PCs easier to use. It is similar to the Apple Macintosh computer, known for its
easy-to-use operating system. Windows 3.1 was the most popular Windows software, and is still
used in many networks.

Windows for Workgroups 3.11 was Microsoft’s first peer-to-peer network operating system. It was
a combination of Windows 3.1 and the Microsoft Windows Network software (which provides the
peer-to-peer networking capability), with enhancements such as improved disk I/O performance,
connectivity, and remote access, and a range of features intended to appeal to the network
manager.

Windows 95 is a true operating system, not just a GUI as found in standard Windows. Windows
95 is the first operating system to support Plug and Play, which makes system setup,
management, and upgrading considerably easier. Other enhancements include improvements to
the user interface, better support for NetWare servers, long file names, and video playback; better
fax and modem support; improved system administration; and remote network access.

Windows 98 is another version of Microsoft Windows. It has many of the same features as
Windows 95, but includes a different user interface and Web-related features.

Windows NT 3.1, Microsoft’s 32-bit operating system, was first launched in July 1993. It runs on
Intel 486 or higher PCs and PC servers, and Digital Equipment’s Alpha chip family. There are two
Windows NT options: Windows NT Workstation and Windows NT Server.

Windows NT Workstation requires only 16 MB of RAM, making it far more accessible. For users
with a strong requirement for security, C2 versions of Windows NT are also available. Windows
NT release 4.0 replaced Windows NT 3.1, 3.5, and 3.51. Another product called Windows 2000
has also been released.

The mid-range software market is characterized by competition between UNIX, which runs on a
very wide range of hardware platforms from workstations to mid-range systems, and the
proprietary operating environments, designed specifically for interactive multiuser applications
developed by the leading systems vendors. Some vendors only offer UNIX on their platforms
(e.g., Sun with Solaris). Other vendors accept and support a multiplicity of operating
environments: Windows on PCs; Windows NT, Linux, UNIX, and NetWare on LAN servers; UNIX
on midrange servers; and proprietary environments on mid-range systems and mainframes that
need higher-than-average reliability and security.

A device driver is special-purpose software used to control specific hardware devices in a
computer system. These specific pieces of hardware can be disk drives, floppy drives, or NICs.
Device drivers for NICs control the operation of the NIC and provide an interface for the
computer’s operating system. The operating system and associated applications on the computer
can use the device driver to communicate with the NIC and send and receive information on a
network.

Application Software
Applications are computer programs that are used for productivity and automation of tasks.
Networks are used to move application information from source to destination.

Applications are software programs used to perform work. Applications can be divided into two
basic categories: single-user applications, and networked or multiuser applications. Some
applications run primarily as a single-user application, others run almost exclusively as a
networked application. Some applications can run in both modes. Commonly used applications
are described below.

Single-user applications include:


 Word processors—A word processor is used to enter, edit, format, print, and save
documents. These programs are designed to produce various documents for
organizations, such as letters, memos, and reports. An example of a word processor is
Microsoft Word.
 Desktop publishing—Desktop publishing goes beyond basic word processing. It is used
to design professional publications, such as magazines and newsletters. An example of a
desktop publishing package is Adobe Framemaker.
 Graphics—Graphics programs are used to create pictures and artwork that are used by
themselves or imported into other documents using programs such as desktop publishing
packages. Examples of graphics programs are Adobe Illustrator and Shapeware Visio.
 Database—A database program provides the capability to input, store, analyze, and
retrieve information on a computer system. The Information is stored in records and is
managed by a database program. Microsoft Access is a common database program.
 Spreadsheets—Spreadsheet applications are primarily used to create financial reports
and organize financial information. They provide an easy mechanism to manipulate
numbers and compute mathematical equations. Lotus 1-2-3 is a spreadsheet program that
is commonly used.
 Web browsers—Web browsers are used to locate and retrieve information from the
World Wide Web. They are primarily used to go directly to another Internet Web site or
search the Internet for specific information. Netscape Navigator is an example of an
Internet browser.

Network applications include:


 Database access—Database requests from client to server are made to retrieve records
from a single source. Oracle is an example of a client/server database.
 Print services—Clients generate print requests that are serviced by a print server. Jobs
are queued by the print server, and the client is notified when the print job has been
completed. Novell NetWare provides print services to NetWare clients.
 E-mail—E-mail programs typically reside on both a client, with packages such as
Eudora, and on a server. When users log on to a network, the e-mail server downloads e-
mail messages to the individual clients in the network.
 Fax services—Clients generate fax requests that are serviced by the server similar to
print requests. Microsoft Windows NT Small Business Server provides built-in fax
services.

Application-to-Application Communication
Applications use the underlying operating system, such as Windows 98, to carry out the needed
tasks of the application. This includes accepting keystrokes from a keyboard and displaying the
typed information on a computer screen. If you are using a word processor and want to store a
file on a hard drive, such as a local hard drive, the application would rely on the operating system
to store the information. The operating system stores the information on the hard drive by
communicating with the appropriate hard drive device driver to physically place the word
processor’s information on the drive. Perhaps you want to store the information on a hard drive
located on the other side of the building, in other words, over the network. What must happen in
this case? The following three items must be installed on the local machine by the local operating
system to provide for communication across a network:
 NIC and NIC device driver
 Client software
 Communication software

The appropriate accessory card, such as a NIC, must first be installed in the computer, along with
the corresponding device driver. If you install a 3Com Ethernet NIC, Ethernet device driver
software must also be installed. Client software is also needed to provide an interface between
the local operating system and communication software. Some client software provides file and
print sharing capabilities for computers, while others provide the capability to connect to shared
resources on other computers. You must also have communication software loaded on the
machine, such as a TCP/IP protocol stack.

Key Point
Client software requests are placed inside protocol headers for delivery across a network. The
following steps are typical (a simplified sketch follows the list):
1. The user of the application requests that a file be stored on a drive other than a local
drive (a network drive).
2. Computer software on the client machine (also known as a redirector) determines the file
is not destined for a local disk drive, but is destined for a remote disk drive.
3. The redirector takes the “store file” request from the application and requests the services
of the communication software.
4. The communication software adds the appropriate communication information on the
“store file” request.
5. The request is sent from the main CPU of the computer, across the local bus to the NIC.
6. The NIC transmits the information across the networking cables to the final destination,
such as a file server on the network.
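
The Python sketch below is a simplified illustration of steps 2 through 4: a redirector decides
whether a store-file request is local or remote, and the communication software wraps remote
requests in a header before they are handed to the NIC. All names used here (drive letters, header
fields, the server name) are invented for illustration and do not correspond to any real operating
system API.

LOCAL_DRIVES = {"C:"}

def redirect(path, data):
    drive = path[:2].upper()
    if drive in LOCAL_DRIVES:
        return ("local disk driver", data)            # step 2: request stays local
    request = b"STORE " + path.encode() + b"\n" + data
    header = b"DST=FILESERVER01;PROTO=DEMO;"          # step 4: communication header added
    return ("NIC", header + request)                  # step 5 would send this to the NIC

print(redirect("C:\\report.doc", b"..."))             # handled locally
print(redirect("N:\\shared\\report.doc", b"..."))     # redirected onto the network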

From the user’s perspective, we have the ability to store files at multiple locations, including a
hard drive located in our computer, or other hard drives on the network. It is up to the local
operating system to make sure the file gets properly stored on the local computer hardware.
When we want to store information across a network, the request must be redirected out of the
computer via a NIC to the appropriate machine located on the network.

Chapter 2 – Network Media, Cables and Connectors

Digital and Analog Transmission


The telecommunications network can transmit a variety of information, in two basic forms, analog
and digital. In this lesson we will examine both. This information may be transmitted over a
circuit/channel or over a carrier system.

A circuit/channel is a transmission path for a single type of transmission service (voice or
data) and is generally referred to as the smallest subdivision of the network. A carrier, on the
other hand, is a transmission path in which one or more channels of information are processed,
converted to a suitable format and transported to the proper destination.

The two types of carrier systems are:


 FDM (Frequency Division Multiplexing) -- analog
 TDM (Time Division Multiplexing) - digital

Digital Transmission
This is the single fastest growing family of signals. Digital signals are discrete and well defined,
and vary from dial-tone pulses to complex computer and data signals.

Analog Signals
An analog signal, or sine wave, is a continuously varying signal.

Analog transmissions are continuous in both time and value. This makes them particularly
susceptible to errors. Digital transmissions are discrete in both time and value. The exact digital
value can be received at the destination with very few errors.

All types of information can be decomposed into bits, which suggests that digital networks can be
used for integrated services (ISDN). Digital transmission also allows compression and encryption
that are not possible with analog signals.
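
The difference can be made concrete with a small Python sketch: a continuous sine wave (the
analog signal) is sampled at fixed times and each sample is rounded to one of a few discrete
levels, giving a digital representation. The frequency, sample rate, and level count below are
arbitrary choices for illustration.

import math

def digitize(frequency_hz, sample_rate_hz, levels, duration_s=0.001):
    samples = []
    for i in range(int(duration_s * sample_rate_hz)):
        t = i / sample_rate_hz
        value = math.sin(2 * math.pi * frequency_hz * t)        # continuous analog value
        samples.append(round((value + 1) / 2 * (levels - 1)))   # discrete level 0..levels-1
    return samples

print(digitize(frequency_hz=1000, sample_rate_hz=8000, levels=4))   # e.g. [2, 3, 3, 3, 2, 0, 0, 0]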

LAN, CAN, MAN, WAN, GAN


Local Area Networks - local to a single limited geographical area: a floor of a building, a
department within a floor, or one entire building.

Local area networks can be defined as privately owned networks that generally span an entire
building or up to a few kilometers. LANs can be effectively used to connect personal
computers and other resources like printers.

LANs often use a single cable to connect the resources. Typically, LANs operate at 10 to 100
Mbps, have low delays, and make few errors. Broadcast LANs can follow many topologies.

Campus Area Networks - cover a large campus that might include several city blocks. This level
allows for more refinement in our definitions; it distinguishes between, say, the Microsoft HQ
CAN and the MAN that covers the city the CAN is in.

Metropolitan Area Networks - groups of multiple LANs and/or CANs that communicate over a city
or several city blocks. MANs are limited to a single city.

A MAN uses similar technology to a LAN, but covers a larger area. MANs support both voice and
data, which enables them to be used efficiently for cable television. Typically, a MAN has one or
two cables and does not require switching elements.

Wide Area Networks - groups of MANs that communicate over larger geographical distances such
as cities or states. WANs span city to city and state to state but stay within a country.

Wide area networks, as the name suggests, span a large geographical area. The typical
structure of a WAN consists of a number of host machines (which run applications) connected by
a communication subnet. The main function of the subnet is to transfer data from one host to
another. The subnet, in turn, consists of two components – transmission lines and switching
elements, also referred to as routers. Data is actually transferred via the transmission lines. The
switching elements connect two or more transmission lines and decide in which outgoing line to
forward the data.

Typically in a WAN, there are a number of transmission lines connected to a router. Not all
routers are connected to each other. Thus, unconnected routers interact with each other indirectly
via other routers. In such a situation, when a router receives a packet, it stores the packet until
the required output line is free and then forwards it. Subnets employing such a technique are
called point-to-point or store-and-forward subnets.
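
The Python sketch below is a toy illustration of the store-and-forward idea only: a router queues
each packet on the queue for its outgoing line and forwards a packet when that line becomes free.
Routing decisions, timing, and errors are ignored, and the class and line names are invented.

from collections import deque

class Router:
    def __init__(self, lines):
        self.queues = {line: deque() for line in lines}

    def receive(self, packet, out_line):
        self.queues[out_line].append(packet)          # store the packet

    def line_free(self, out_line):
        if self.queues[out_line]:
            packet = self.queues[out_line].popleft()  # forward the oldest stored packet
            print(f"forwarding {packet!r} on {out_line}")

r = Router(["line-A", "line-B"])
r.receive("packet-1", "line-A")
r.receive("packet-2", "line-A")
r.line_free("line-A")    # forwards packet-1
r.line_free("line-A")    # forwards packet-2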

Global Area Networks - cover a country's geographical area or a larger area worldwide. GANs
span countries and are the international level of networking. This level was added to differentiate
a WAN from the larger global networks. The Internet is the most famous example of a GAN.

Network Topologies (Ring, Star, Bus, Hybrid)

In networking, the term topology refers to the layout of connected devices on a network. This
section introduces the standard topologies of computer networking.
One can think of a topology as a network's "shape." This shape does not necessarily correspond
to the actual physical layout of the devices on the network. For example, the computers on a
home LAN may be arranged in a circle, but it would be highly unlikely to find an actual ring
topology there.
Network topologies are categorized into the following basic types:
 Bus
 Ring
 Star
 Tree
 Mesh
More complex networks can be built as hybrids of two or more of the above basic topologies.

Bus
Bus networks (not to be confused with the system bus of a computer) use a common backbone to
connect all devices. A single cable, the backbone, functions as a shared communication medium
that devices attach or tap into with an interface connector. A device wanting to communicate with
another device on the network sends a broadcast message onto the wire that all other devices
see, but only the intended recipient actually accepts and processes the message.

Bus Topology

Ethernet bus topologies are relatively easy to install and don't require much cabling compared to
the alternatives. 10Base-2 ("ThinNet") and 10Base-5 ("ThickNet") both were popular Ethernet
cabling options years ago. However, bus networks work best with a limited number of devices. If
more than a few dozen computers are added to a bus, performance problems will likely result. In
addition, if the backbone cable fails, the entire network effectively becomes unusable.

Ring
In a ring network, every device has exactly two neighbors for communication purposes. All
messages travel through a ring in the same direction (effectively either "clockwise" or
"counterclockwise"). A failure in any cable or device breaks the loop and can take down the
entire network.

To implement a ring network, one typically uses FDDI, SONET, or Token Ring technology. Rings
are found in some office buildings or school campuses.

Ring Topology

Star
Many home networks use the star topology. A star network features a central connection point
called a "hub" that may be an actual hub or a switch. Devices typically connect to the hub with
Unshielded Twisted Pair (UTP) Ethernet.

Compared to the bus topology, a star network generally requires more cable, but a failure in any
star network cable will only take down one computer's network access and not the entire LAN.
(If the hub fails, however, the entire network also fails.)

Star Topology

Tree
Tree topologies integrate multiple star topologies together onto a bus. In its simplest form, only
hub devices connect directly to the tree bus and each hub functions as the "root" of a tree of
devices. This bus/star hybrid approach supports future expandability of the network much better
than a bus (limited in the number of devices due to the broadcast traffic it generates) or a star
(limited by the number of hub ports) alone.

Mesh

Mesh topologies involve the concept of routes. Unlike each of the previous topologies, messages
sent on a mesh network can take any of several possible paths from source to destination.
(Recall that in a ring, although two cable paths exist, messages can only travel in one direction.)
Some WANs, like the Internet, employ mesh routing.

Media
The actual transmission of information in a computer network takes place via a transmission
medium. These media can be broadly classified into guided and unguided media. A guided
medium is one in which the signal is directed toward the destination; good examples are copper
wire and optical fiber. An unguided medium is one in which the information is transmitted
regardless of the location of the destination machine.

Twisted Pair
The twisted pair transmission medium consists of two insulated copper wires, typically about
1mm thick. These two wires are twisted together in a helical form. This twisting reduces the
electrical interference from similar cables close by. The telephone system is an excellent example
of a twisted pair network.
Twisted pair can be used for both analog and digital transmission. The bandwidth that can be
achieved with twisted pair depends on the thickness and the distance traveled. Typically, a
transmission rate of several megabits per second can be achieved for a few kilometers.

Coaxial cable has better shielding than twisted pair. Thus, it can span longer distances at
relatively higher speeds.

A coaxial cable consists of a stiff copper wire as the core, which is surrounded by an insulating
material. A cylindrical conductor in the form of a closely woven braided mesh surrounds the
insulator. A plastic coating then covers this entire setup. Two types of coaxial cables are
widely used:
 The baseband coaxial cable is a 50-ohm cable and is commonly used for digital
transmission. Due to the shielding structure, they give excellent noise immunity. The
bandwidth depends on the length of the cable. Typically, 1 to 2 Gbps is possible for a 1-
km cable. Longer cables may also be used. They, however, provide lower data rates
unless used with amplifiers or repeaters.
 The broadband coaxial cable is a 75-ohm cable mostly used for analog transmission. The
standard cable television network is an excellent example where broadband coaxial
cables are used. The broadband coaxial cables can give up to 450 MHz and can span for
nearly 100 km for analog transmission. Broadband systems can be subdivided into a
number of independent channels. Each channel can transmit analog or digital data.

Optical fibers, as the name suggests, employ light to transmit information. Information can thus be
transmitted at very high speed, and problems like heat dissipation are avoided. Optical fibers are
typically used to provide a bandwidth of 1 Gbps, although bandwidth in excess of 50,000 Gbps is
possible; the practical limit is set by the technology available to convert optical signals to
electrical signals, and vice versa, at such a fast rate.

The technology behind optical fibers employs three components: the light source, transmission
medium and the detector. The light source, connected at one end of the transmission medium,
generates a pulse of light that corresponds to a 1 bit of data; the absence of light represents a
0 bit. The transmission medium used is an ultra-thin fiber of glass. The detector at the other
end of the transmission medium detects the presence of light pulses and generates an electrical
signal accordingly.
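
As a tiny illustration of this on/off signalling, the Python sketch below encodes the bits of a
character as light pulses (1 = pulse of light, 0 = no light) and decodes them again. It is a
conceptual sketch only; real fiber systems use far more sophisticated line coding.

def to_pulses(byte_value):
    # MSB first: each 1 bit becomes a pulse of light, each 0 bit becomes no light.
    return [(byte_value >> bit) & 1 for bit in range(7, -1, -1)]

def from_pulses(pulses):
    value = 0
    for pulse in pulses:          # detector output: 1 = light seen, 0 = no light
        value = (value << 1) | pulse
    return value

pulses = to_pulses(ord("A"))      # 'A' = 65 = 01000001
print(pulses)                     # [0, 1, 0, 0, 0, 0, 0, 1]
print(chr(from_pulses(pulses)))   # 'A'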

Wireless Transmission
The transmission media described above provide a physical connection between two
computers. This is often not feasible, especially when the geographical distance between the
two computers is very large. Communication in such situations is carried out by employing
other media such as microwaves and radio waves. Communication employing these media is
called wireless communication.

Radio Transmission
The obvious advantage of using radio waves comes from the fact that radio waves are easily
generated, can travel longer distances, can penetrate buildings and are omnidirectional.
However, radio waves have the disadvantage of being frequency dependent. At low frequencies,
the power of the radio waves deteriorates as the distance traveled from the source increases. At
high frequencies, radio waves tend to travel in straight lines and bounce off obstacles. They are
also absorbed by rain and are subject to interference from motors and other electrical
equipment.

Microwave Transmission
Microwave transmission offers a high signal to noise ratio. However, it necessitates the
transmitter and the receiver to be aligned in a straight line without interference. In addition,
because of the fact that microwaves travel in a straight line, it becomes necessary to provide
repeaters for long distances since the curvature of the earth becomes an obstacle. Some waves
may be refracted off low-lying atmospheric layers and thus may take slightly longer to arrive.
They may also be out of phase with the direct wave, creating a situation called multipath
fading where the delayed wave tends to cancel out the direct wave. Microwaves have the
advantage that they are relatively inexpensive and require less space to set up antennas. They
can also be used for long-distance transmission.

Cabling

A LAN can be as simple as two computers, each having a network interface card (NIC) or network
adapter and running network software, connected together with a crossover cable.

The next step up would be a network consisting of three or more computers and a hub. Each of the
computers is plugged into the hub with a straight-thru cable (the crossover function is performed
by the hub).

Let's start with simple pin-out diagrams of the two types of UTP Ethernet cables and watch how
committees can make a can of worms out of them.

Note that the TX (transmitter) pins are connected to corresponding RX (receiver) pins, plus to
plus and minus to minus. And that you must use a crossover cable to connect units with identical
interfaces. If you use a straight-through cable, one of the two units must, in effect, perform the
cross-over function.
Two wire color-code standards apply: EIA/TIA 568A and EIA/TIA 568B.

 A straight-thru cable has identical ends.


 A crossover cable has different ends.

It makes no functional difference which standard you use for a straight-thru cable. You can start
a crossover cable with either standard as long as the other end is the other standard. It makes
no functional difference which end is which. Despite what you may have read elsewhere, a 568A
patch cable will work in a network with 568B wiring and a 568B patch cable will work in a 568A
network. The electrons couldn't care less.
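
For reference, the pin 1 through 8 wire colors of the two standards, and a small Python check that
mirrors the rules above, are sketched below: identical ends make a straight-thru cable, and one
568A end with one 568B end makes a crossover. The check is illustrative only.

T568A = ["white/green", "green", "white/orange", "blue",
         "white/blue", "orange", "white/brown", "brown"]
T568B = ["white/orange", "orange", "white/green", "blue",
         "white/blue", "green", "white/brown", "brown"]

def cable_type(end1, end2):
    if end1 == end2:
        return "straight-thru"                        # identical ends
    if {tuple(end1), tuple(end2)} == {tuple(T568A), tuple(T568B)}:
        return "crossover"                            # one end of each standard
    return "miswired"

print(cable_type(T568B, T568B))   # straight-thru
print(cable_type(T568A, T568B))   # crossover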

Modular Plug Crimp Tool. You will need a modular crimp tool. This one is very similar to the one I
have been using for many years for all kinds of telephone cable work, and it works just fine for
Ethernet cables. You don't need a lot of bells and whistles, just a tool that will securely crimp
RJ-45 connectors.

Connectors and Terminators


Twisted Pair (Shielded Twisted Pair and Unshielded Twisted Pair) is becoming the cable of
choice for new installations. Twisted pair cable is readily accepted as the preferred solution to
cabling. It provides support for a range of speeds and configurations, and is widely supported by
different vendors. Shielded twisted pair uses a braided shield that surrounds all the other
wires and helps to reduce unwanted interference.

The features of twisted pair cable are,


 Used in token ring (4 or 16mbps), 10baset (Ethernet 10mbps), 100baset (100Mbps)
 Reasonably cheap
 Reasonably easy to terminate [special crimp connector tools are necessary for reliable
operation]
 UTP often already installed in buildings
 UTP is prone to interference, which limits speed and distances
 Low to medium capacity
 Medium to high loss
 Category 2 = up to 1Mbps (Telephone wiring)
 Category 3 = up to 10Mbps (Ethernet and 10baset)
 Category 5 = 100mbps (supports 10baset and 100baset)
Category 5 Unshielded Twisted Pair cable uses 8 wires. Various jack connectors are used in the
wiring closet.

Distance limitations exist when cabling. For Category 5 cabling at 100 Mbps, the limits are
effectively 3 meters from workstation to wall outlet, and 90 meters from wall outlet to wiring
closet.

All workstations are wired back to a central wiring closet, where they are then patched
accordingly. Within an organization, the IT department either performs this work or sub-contracts
it to a third party.

In 10BaseT, each PC is wired back to a central hub using its own cable. There are limits imposed
on the length of drop cable from the PC network card to the wall outlet, the length of the
horizontal wiring, and from the wall outlet to the wiring closet.

Patch Cables
Patch cables come in two varieties, straight through or reversed. One application of patch cables
is for patching between modular patch panels in system centers. These are the straight through
variety. Another application is to connect workstation equipment to the wall jack, and these could
be either straight through or reversed depending upon the manufacturer. Reversed cables are
normally used for voice systems.

How to determine the type of patch cable


Align the ends of the cable side by side so that the contacts are facing you, and then compare the
colors from left to right.

If the colors are in the same order on both plugs, the cable is straight through. If the colors appear
in the reverse order, the cable is reversed.

Coaxial Cable
Coaxial cable has traditionally been the cable of choice for low cost, small user networks. This
has been mainly due to its ease of use and low cost. People with a minimal understanding of
networking can readily build a LAN using coax components, which can often be purchased in
ready-made kit form.
The general features of coaxial cable are,
 Medium capacity
 Ethernet systems (10mbps)
 Slightly dearer than utp
 More difficult to terminate
 Not as subject to interference as utp
 Care when bending and installing is needed
 10base2 uses rg-58au (also called thin-net or cheaper net)
 10base5 uses a thicker solid core coaxial cable (also called Thick-Net)

Thin coaxial cable [RG-58AU, rated at 50 ohms] is used in Ethernet LANs.

The connectors used in thin-net Ethernet LANs are T-connectors (used to join cables together
and attach to workstations) and terminators (one at each end of the cable).

The straight-through and crossover patch cables discussed in this section are terminated with
CAT 5 RJ-45 modular plugs. RJ-45 plugs are similar to those you'll see on the end of your
telephone cable, except they have eight contacts on the end of the plug versus four or six, and
they are about twice as big. Make sure they are rated for CAT 5 wiring. (RJ means "Registered
Jack.") Also, there are RJ-45 plugs designed for both solid core wire and stranded wire. Others
are designed specifically for one kind of wire or the other.

Chapter 3 - Communication Protocols and Services

Protocols
Computer networks use protocols to communicate. These protocols define the procedures to use
for the systems involved in the communication process. A data communication protocol is a set of
rules that must be followed for two electronic devices to communicate.

Many protocols are used to provide and support data communications. Together they form a
communication architecture, sometimes referred to as a "protocol stack," such as the TCP/IP
family of protocols.

Protocols:
 Define the procedures to be used by systems involved in the communication process
 In data communications, are a set of rules that must be followed for devices to
communicate
 Are implemented in software/firmware

Each protocol provides for a function that is needed to make the data communication possible.
Many protocols are used so that the problem can be broken into manageable pieces. Each
software module that implements a protocol can be developed and updated independently of
other modules, as long as the interface between modules remains constant.

Recall that a protocol is a set of rules governing the exchange of data between two entities.
These rules cover:
 Syntax – Data format and coding
 Semantics – Control information and error handling
 Timing – Speed matching and sequencing

Layered Approach
A networking model represents a common structure or protocol to accomplish communication
between systems. These models consist of layers. You can think of a layer as a step that must be
completed to go on to the next step and, ultimately, to communicate between systems.

As described previously, a protocol is a formal description of messages to be exchanged and
rules to be followed for two or more systems to exchange information.
Model = Structure
Layer = Function
Protocol = Rules

Some of the advantages of using a layered model are:


 Allows changes or new features to be introduced in one layer leaving the others intact
 Divides the complexity of networking into functions or sublayers which are more
manageable
 Provides a standard that, if followed, allows interoperability between software and
hardware vendors
 Eases troubleshooting

Implementing a functional internetwork is no simple task. Many challenges must be faced,
especially in the areas of connectivity, reliability, network management, and flexibility. Each area
is key to establishing an efficient and effective internetwork.

The challenge when connecting various systems is to support communication among disparate
technologies. Different sites, for example, may use different types of media operating at varying
speeds, or may even include different types of systems that need to communicate.


Because companies rely heavily on data communication, internetworks must provide a certain
level of reliability. The world is unpredictable, so many large internetworks include redundancy
to allow communication to continue even when problems occur.

An ordered approach to network communication is the way out. The layered approach of the OSI
model offers several advantages to system implementers. By separating the job of networking
into smaller logical pieces, vendors can more easily solve network problems through
divide-and-conquer. A product from one vendor that implements Layer 2, for example, will be
much more likely to interoperate with another vendor's Layer 3 product when both vendors follow
this standard model. Finally, the OSI layers afford extensibility: new protocols and other network
services are generally easier to add to a layered architecture than to a monolithic one.

Protocol Layers

The communication between the nodes in a packet data network must be precisely defined to
ensure correct interpretation of the packets by the receiving intermediate and the end systems.
The packets exchanged between nodes are defined by a protocol - or communications language.

There are many functions which may need to be performed by a protocol. These range from
the specification of connectors, addresses of the communications nodes, identification of
interfaces, options, flow control, reliability, error reporting, synchronization, etc. In practice there
are so many different functions that a set (also known as a suite or stack) of protocols is usually
defined. Each protocol in the suite handles one specific aspect of the communication.

The protocols are usually structured together to form a layered design (also known as a "protocol
stack"). All major telecommunication network architectures currently used or being developed use
layered protocol architectures. The precise functions in each layer vary. In each case, however,
there is a distinction between the functions of the lower (network) layers, which are primarily
designed to provide a connection or path between users while hiding the details of the underlying
communications facilities, and the upper (or higher) layers, which ensure that the data exchanged
is in a correct and understandable form. The upper layers are sometimes known as "middleware"
because they provide software in the computer that converts data between what the application
programs expect and what the network can transport. The transport layer provides the
connection between the upper (application-oriented) layers and the lower (network-oriented)
layers.

The basic idea of a layered architecture is to divide the design into small pieces. Each layer adds
to the services provided by the lower layers in such a manner that the highest layer is provided a
full set of services to manage communications and run distributed applications. A basic principle
is to ensure independence of layers by defining services provided by each layer to the next
higher layer without defining how the services are to be performed. This permits changes in a
layer without affecting other layers. Prior to the use of layered protocol architectures, simple
changes such as adding one terminal type to the list of those supported by an architecture often
required changes to essentially all communications software at a site.


Protocol Stacks
The protocol stacks were once defined using proprietary documentation - each manufacturer
wrote a comprehensive document describing the protocol. This approach was appropriate when
the cost of computers was very high and communications software was "cheap" in comparison.
Once computers became readily available at economic prices, users saw the need to
interconnect the computers from different manufacturers using computer networks. It was costly
to connect computers with different proprietary protocols, since for each pair of protocols a
separate "gateway" product had to be developed. This process was made more complicated in
some cases, since variants of the protocol existed and not all variants were defined by published
documents.

Network Communication
Network Architectures
 Peer to Peer
 Client Server
 Console Terminal

Peer To Peer
A peer-to-peer type of network is one in which each workstation has equivalent capabilities and
responsibilities. This differs from client/server architectures, in which some computers are
dedicated to serving the others. Peer-to-peer networks are generally simpler, but they usually do
not offer the same performance under heavy loads.

This definition captures the traditional meaning of peer-to-peer networking. Computers in a
workgroup, or home computers, are configured for the sharing of resources such as files and
printers. Although one computer may act as the file server or FAX server at any given time, all
computers on the network generally could host those services on short notice. In particular, the
computers will typically be situated near each other physically and will run the same networking
protocols.

P2P systems involve seven key characteristics:


 User interfaces load outside of a web browser
 User computers can act as both clients and servers
 The overall system is easy to use and well-integrated
 The system includes tools to support users wanting to create content or add functionality
 The system provides connections with other users
 The system does something new or exciting
 The system supports "cross-network" protocols like SOAP or XML-RPC

In this updated view of peer-to-peer computing, devices can now join the network from anywhere
with little effort; instead of dedicated LANs, the Internet itself becomes the network of choice.
Easier configuration and control over the application allows non-networking-savvy people to join
the user community. In effect, P2P signifies a shift in emphasis in peer networking from the
hardware to the applications.

Client Server
Ages ago (in Internet time), when mainframe dinosaurs roamed the Earth, a new approach to
computer networking called "client/server" emerged. Client/server proved to be a more cost-
effective way to build many types of networks, particularly PC-based LANs running end-user
database applications. Many types of client/server systems remain popular today.


What Is Client/Server?
The most basic definition of client/server: client/server is a computational architecture that
involves client processes requesting service from server processes.

In general, client/server maintains a distinction between processes and network devices. Usually
a client computer and a server computer are two separate devices, each customized for its
designed purpose. For example, a Web server will often contain large amounts of memory and
disk space, whereas Web clients often include features to support the graphic user interface of
the browser, such as high-end video cards and large-screen displays.

Client/server networking, however, focuses primarily on the applications rather than the
hardware. The same device may function as both client and server; for example, Web server
hardware functions as both client and server when local browser sessions are run there.
Likewise, a device that is a server at one moment can reverse roles and become a client to a
different server (either for the same application or for a different application).
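As a minimal, hedged illustration of these roles (the loopback address and port number are assumptions chosen just for this sketch), the Python fragment below runs a tiny server process in a thread and a client process that requests a service from it:

# Sketch: a toy echo service showing the client and server roles.
import socket
import threading

HOST, PORT = "127.0.0.1", 5000             # assumed values for the example

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen(1)                              # the server process is now reachable

def serve_one_client():
    conn, _addr = srv.accept()             # wait for a client to connect
    with conn:
        data = conn.recv(1024)             # read the client's request
        conn.sendall(b"echo: " + data)     # send back the reply (the "service")

threading.Thread(target=serve_one_client, daemon=True).start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))              # the client process requests service
    cli.sendall(b"hello")
    print(cli.recv(1024))                  # prints b'echo: hello'

srv.close()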

Client/Server Applications
Some of the most popular applications on the Internet follow the client/server design:
 Email clients
 FTP (File transfer) clients
 Web browsers

Each of these programs presents a user interface (either graphic- or text-based) in a client
process that allows the user to connect to servers. In the case of email and FTP, the user enters
a computer name (or sometimes an IP address) into the interface to set up future connections to
the server process.

When using a Web browser, the name or address of the server appears in the URL of each
request. Although a person may start a Web surfing session by entering a particular server name
(such as www.about.com), the name regularly changes as they click links on the pages. In the
Web model, the server information is encoded by the HTML content developer in the anchor tags.

Client/Server at Home
Many home networkers use client/server systems without even realizing it. Microsoft's Internet
Connection Sharing (ICS), for example, relies on DHCP server and client functionality built into
the operating system. Cable modem and DSL routers also include a DHCP server with the
hardware unit. Many home LAN gaming applications also use a single-server/multiple-client
configuration.

Console Terminal
The Console-Terminal architecture is one in which the Terminal has to access the Console for
every piece of information. The terminals are also sometimes referred to as ‘thin clients’.


Chapter 4 - OSI Reference Model

The OSI Reference Model


Modern computer networks are designed in a highly structured way. To reduce their design
complexity, most networks are organized as a series of layers, each one built upon its
predecessor.

The OSI Reference Model is based on a proposal developed by the International Organization for
Standardization (ISO). The model is called ISO OSI (Open Systems Interconnection) Reference
Model because it deals with connecting open systems - that is, systems that are open for
communication with other systems.

The OSI model has seven layers. The principles that were applied to arrive at the seven layers
are as follows:
 A layer should be created where a different level of abstraction is needed.
 Each layer should perform a well defined function.
 The function of each layer should be chosen with an eye toward defining internationally
standardized protocols.
 The layer boundaries should be chosen to minimize the information flow across the
interfaces.
 The number of layers should be large enough that distinct functions need not be thrown
together in the same layer out of necessity, and small enough that the architecture does
not become unwieldy.

The Seven Layers Model


Seven layers are defined:
Application : Provides different services to the applications
Presentation : Converts the information into a common representation (syntax and semantics)
Session : Establishes and manages sessions (dialogues) between applications
Transport : Provides end to end communication control
Network : Routes the information in the network
Data Link : Provides error control between adjacent nodes
Physical : Connects the entity to the transmission media


The Application Layer


The application layer contains a variety of protocols that are commonly needed. For example,
there are hundreds of incompatible terminal types in the world. Consider the plight of a full screen
editor that is supposed to work over a network with many different terminal types, each with
different screen layouts, escape sequences for inserting and deleting text, moving the cursor, etc.

One way to solve this problem is to define an abstract network virtual terminal for which editors
and other programs can be written to deal with. To handle each terminal type, a piece of software
must be written to map the functions of the network virtual terminal onto the real terminal. For
example, when the editor moves the virtual terminal's cursor to the upper left-hand corner of the
screen, this software must issue the proper command sequence to the real terminal to get its
cursor there too. All the virtual terminal software is in the application layer.

Another application layer function is file transfer. Different file systems have different file naming
conventions, different ways of representing text lines, and so on. Transferring a file between two
different systems requires handling these and other incompatibilities. This work, too, belongs to
the application layer, as do electronic mail, remote job entry, directory lookup, and various other
general-purpose and special-purpose facilities.

The Presentation Layer


The presentation layer performs certain functions that are requested sufficiently often to warrant
finding a general solution for them, rather than letting each user solve the problems. In particular,
unlike all the lower layers, which are just interested in moving bits reliably from here to there, the
presentation layer is concerned with the syntax and semantics of the information transmitted.

A typical example of a presentation service is encoding data in a standard, agreed upon way.
Most user programs do not exchange random binary bit strings. They exchange things such as
people's names, dates, amounts of money, and invoices. These items are represented as
character strings, integers, floating point numbers, and data structures composed of several
simpler items. Different computers have different codes for representing character strings,
integers and so on. In order to make it possible for computers with different representation to
communicate, the data structures to be exchanged can be defined in an abstract way, along with
a standard encoding to be used "on the wire". The job of managing these abstract data structures
and converting from the representation used inside the computer to the network standard
representation is handled by the presentation layer.
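A small sketch of the idea in Python, using the standard struct module as a stand-in for an agreed "on the wire" encoding (the record layout of a 4-byte amount followed by a length-prefixed name is an invented example, not a real standard):

# Sketch: encode an (amount, name) record into an agreed network representation.
import struct

def encode_record(amount, name):
    name_bytes = name.encode("utf-8")
    # "!" = network (big-endian) byte order: 4-byte int, 1-byte length, then the name
    return struct.pack(f"!iB{len(name_bytes)}s", amount, len(name_bytes), name_bytes)

def decode_record(wire_bytes):
    amount, name_len = struct.unpack_from("!iB", wire_bytes)
    (name_bytes,) = struct.unpack_from(f"!{name_len}s", wire_bytes, 5)
    return amount, name_bytes.decode("utf-8")

wire = encode_record(2500, "invoice-42")
print(decode_record(wire))   # (2500, 'invoice-42')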

The presentation layer is also concerned with other aspects of information representation. For
example, data compression can be used here to reduce the number of bits that have to be
transmitted and cryptography is frequently required for privacy and authentication.

The Session Layer


The session layer allows users on different machines to establish sessions between them. A
session allows ordinary data transport, as does the transport layer, but it also provides some
enhanced services useful in some applications. A session might be used to allow a user to log
into a remote time-sharing system or to transfer a file between two machines.

One of the services of the session layer is to manage dialogue control. Sessions can allow traffic
to go in both directions at the same time, or in only one direction at a time. If traffic can only go
one way at a time, the session layer can help keep track of whose turn it is.

A related session service is token management. For some protocols, it is essential that both sides
do not attempt the same operation at the same time. To manage these activities, the session
layer provides tokens that can be exchanged. Only the side holding the token may perform the
critical operation.


Another session service is synchronization. Consider the problems that might occur when trying
to do a two-hour file transfer between two machines on a network with a 1 hour mean time
between crashes. After each transfer was aborted, the whole transfer would have to start over
again, and would probably fail again with the next network crash. To eliminate this problem, the
session layer provides a way to insert checkpoints into the data stream, so that after a crash, only
the data after the last checkpoint has to be repeated.

The Transport Layer


The basic function of the transport layer is to accept data from the session layer, split it up into
smaller units if need be, pass these to the network layer, and ensure that the pieces all arrive
correctly at the other end. Furthermore, all this must be done efficiently, and in a way that isolates
the session layer from the inevitable changes in the hardware technology.

Under normal conditions, the transport layer creates a distinct network connection for each
transport connection required by the session layer. If the transport connection requires a high
throughput, however, the transport layer might create multiple network connections, dividing the
data among the network connections to improve throughput. On the other hand, if creating or
maintaining a network connection is expensive, the transport layer might multiplex several
transport connections onto the same network connection to reduce the cost. In all cases, the
transport layer is required to make the multiplexing transparent to the session layer.

The transport layer also determines what type of service to provide to the session layer, and
ultimately, to the users of the network. The most popular type of transport connection is an error-
free point-to-point channel that delivers messages in the order in which they were sent. However,
other possible kinds of transport service exist, such as the transport of isolated messages with no
guarantee about the order of delivery, and the broadcasting of messages to multiple destinations.
The type of service is determined when the connection is established.

The transport layer is a true source-to-destination or end-to-end layer. In other words, a program
on the source machine carries on a conversation with a similar program on the destination
machine, using the message headers and control messages.

Many hosts are multi-programmed, which implies that multiple connections will be entering and
leaving each host. There needs to be some way to tell which message belongs to which
connection. The transport header is one place this information could be put.

In addition to multiplexing several message streams onto one channel, the transport layer must
take care of establishing and deleting connections across the network. This requires some kind
of naming mechanism, so that a process on one machine has a way of describing with whom it
wishes to converse. There must also be a mechanism to regulate the flow of information, so that
a fast host cannot overrun a slow one. Flow control between hosts is distinct from flow control
between switches, although similar principles apply to both.

The Network Layer


The network layer is concerned with controlling the operation of the subnet. A key design issue is
determining how packets are routed from source to destination. Routes could be based on static
tables that are "wired into" the network and rarely changed. They could also be determined at the
start of each conversation, for example a terminal session. Finally, they could be highly dynamic,
being determined anew for each packet, to reflect the current network load.
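As a hedged sketch of the first option, the Python fragment below (the prefixes and next-hop addresses are invented for the example) looks a destination address up in a static routing table, preferring the most specific matching prefix:

# Sketch: static routing table lookup with longest-prefix match.
import ipaddress

# (destination prefix, next hop) pairs "wired into" the router for this example
ROUTES = [
    (ipaddress.ip_network("10.1.0.0/16"), "192.168.0.2"),
    (ipaddress.ip_network("10.1.5.0/24"), "192.168.0.3"),
    (ipaddress.ip_network("0.0.0.0/0"),   "192.168.0.1"),   # default route
]

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in ROUTES if dest in net]
    # prefer the most specific (longest) matching prefix
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.5.9"))    # 192.168.0.3 (the /24 wins over the /16)
print(next_hop("172.16.0.1"))  # 192.168.0.1 (falls through to the default)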

If too many packets are present in the subnet at the same time, they will get in each other's way,
forming bottlenecks. The control of such congestion also belongs to the network layer.

Since the operators of the subnet may well expect remuneration for their efforts, there is often
some accounting function built into the network layer. At the very least, the software must count
how many packets or characters or bits are sent by each customer, to produce billing information.


When a packet crosses a national border, with different rates on each side, the accounting can
become complicated.

When a packet has to travel from one network to another to get to its destination, many problems
can arise. The addressing used by the second network may be different from the first one. The
second one may not accept the packet at all because it is too large. The protocols may differ, and
so on. It is up to the network layer to overcome all these problems to allow heterogeneous
networks to be interconnected.

In broadcast networks, the routing problem is simple, so the network layer is often thin or even
nonexistent.

The Data Link Layer


The main task of the data link layer is to take a raw transmission facility and transform it into a
line that appears free of transmission errors to the network layer. It accomplishes this task by
having the sender break the input data up into data frames (typically a few hundred bytes),
transmit the frames sequentially, and process the acknowledgment frames sent back by the
receiver. Since the physical layer merely accepts and transmits a stream of bits without any
regard to meaning or structure, it is up to the data link layer to create and recognize frame
boundaries. This can be accomplished by attaching special bit patterns to the beginning and end
of the frame. If there is a chance that these bit patterns might occur in the data, special care must
be taken to avoid confusion.

The data link layer should provide error control between adjacent nodes.

Another issue that arises in the data link layer (and most of the higher layers as well) is how to
keep a fast transmitter from drowning a slow receiver in data. Some traffic regulation mechanism
must be employed in order to let the transmitter know how much buffer space the receiver has at
the moment. Frequently, flow regulation and error handling are integrated, for convenience.
If the line can be used to transmit data in both directions, this introduces a new complication that
the data link layer software must deal with. The problem is that the acknowledgment frames for A
to B traffic compete for the use of the line with data frames for the B to A traffic.

The Data Link Layer: Error Control


A noise burst on the line can destroy a frame completely. In this case, the data link layer software
on the source machine must retransmit the frame. However, multiple transmissions of the same
frame introduce the possibility of duplicate frames. A duplicate frame could be sent, for example,
if the acknowledgment frame from the receiver back to the sender was destroyed. It is up to this
layer to solve the problems caused by damaged, lost, and duplicate frames. The data link layer
may offer several different service classes to the network layer, each of a different quality and
with a different price.

The Physical Layer


The physical layer is concerned with transmitting raw bits over a communication channel. The
design issues have to do with making sure that when one side sends a 1 bit, it is received by the
other side as a 1 bit, not as a 0 bit. Typical questions here are how many volts should be used to
represent a 1 and how many for a 0, how many microseconds a bit lasts, whether transmission
may proceed simultaneously in both directions, how the initial connection is established and how
it is torn down when both sides are finished, and how many pins the network connector has and
what each pin is used for. The design issues here deal largely with mechanical, electrical, and
procedural interfaces, and the physical transmission medium, which lies below the physical layer.
Physical layer design can properly be considered to be within the domain of the electrical
engineer.


Sending Data via the OSI Model


Each layer acts as though it is communicating with its corresponding layer on the other end.

In reality, data is passed from one layer down to the next lower layer at the sending computer,
until the Physical Layer finally transmits the data onto the network cable. As the data is passed
down to a lower layer, it is encapsulated into a larger unit (in effect, each layer adds its own layer
information to that which it receives from a higher layer). At the receiving end, the message is
passed upwards to the desired layer, and as it passes upwards through each layer, the
encapsulation information is stripped off.
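A minimal sketch of this encapsulation idea (the header contents are just placeholder strings, and a real data link layer would also add a trailer): each layer wraps what it receives from the layer above on the way down, and the receiver strips the same headers in reverse order on the way up.

# Sketch: encapsulation on the way down, de-encapsulation on the way up.
LAYERS = ["Application", "Presentation", "Session",
          "Transport", "Network", "Data Link", "Physical"]

def send(data):
    for layer in LAYERS:                       # top of the stack downwards
        data = f"[{layer}-hdr]" + data         # each layer adds its own header
    return data                                # what goes onto the cable

def receive(frame):
    for layer in reversed(LAYERS):             # bottom of the stack upwards
        header = f"[{layer}-hdr]"
        assert frame.startswith(header)
        frame = frame[len(header):]            # strip this layer's header
    return frame                               # original data for the application

on_the_wire = send("hello")
print(on_the_wire)            # [Physical-hdr][Data Link-hdr]...[Application-hdr]hello
print(receive(on_the_wire))   # hello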


Chapter 5 – LAN Protocols & Standards

Data Link Layer Protocols (LLC and MAC)


The logical link control (LLC) layer deals with the transmission of frames. Each frame is a packet
of data with a sequence number that is used to ensure delivery and a checksum to detect
corrupted frames. Several algorithms are available for acknowledging the delivery of frames. The
basic protocol works as follows.

The sending station


1. The sending station sends one or more frames, each carrying a sequence number.
2. The sending station awaits acknowledgements of the sent frames before transmitting
further frames.
3. If no acknowledgement for a particular frame arrives within a fixed time, the frame is
retransmitted.

The Receiving Station


The receiving station acknowledges uncorrupted frames as they are received. Note that there
must be a large enough range for the sequence numbers so that the receiving station can
distinguish between resends (this will happen if an ACK gets lost) and new frames. Three
protocols are commonly used at the data link layer:
1. Stop and Wait. This requires only two sequence numbers
2. Go back n
3. Selective repeat
The last two allow several frames to be in transit.
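A minimal sketch of the first protocol, stop-and-wait, under simplifying assumptions (frame loss is simulated with a random drop, and a single-bit sequence number is used): the sender keeps retransmitting a frame until its acknowledgement arrives, and the receiver uses the sequence number to discard duplicates.

# Sketch: stop-and-wait with a 1-bit sequence number over a lossy channel.
import random

def lossy_send(frame, loss_rate=0.3):
    """Simulated channel: returns the frame, or None if it was 'destroyed'."""
    return None if random.random() < loss_rate else frame

def stop_and_wait(messages):
    delivered, expected = [], 0                    # receiver state
    seq = 0                                        # sender's current sequence number
    for payload in messages:
        while True:
            frame = lossy_send((seq, payload))     # transmit the frame
            if frame is not None:                  # receiver got an uncorrupted frame
                if frame[0] == expected:           # a new frame, not a resend
                    delivered.append(frame[1])
                    expected ^= 1
                ack = lossy_send(seq)              # the ACK may itself be lost
                if ack is not None:
                    break                          # sender saw the ACK; next frame
            # otherwise the timer expires and the same frame is retransmitted
        seq ^= 1
    return delivered

print(stop_and_wait(["f1", "f2", "f3"]))           # ['f1', 'f2', 'f3']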

The Institute of Electrical and Electronics Engineers (IEEE) has subdivided the data link layer into
two sublayers: Logical Link Control (LLC) and Media Access Control (MAC).

The Logical Link Control (LLC) sublayer of the data link layer manages communications between
devices over a single link of a network. LLC is defined in the IEEE 802.2 specification and
supports both connectionless and connection-oriented services used by higher-layer protocols.
IEEE 802.2 defines a number of fields in data link layer frames that enable multiple higher-layer
protocols to share a single physical data link. The Media Access Control (MAC) sublayer of the
data link layer manages protocol access to the physical network medium. The IEEE MAC
specification defines MAC addresses, which enable multiple devices to uniquely identify one
another at the data link layer.

Data Link Layer Addresses


A data link layer address uniquely identifies each physical network connection of a network
device. Data-link addresses sometimes are referred to as physical or hardware addresses. Data-
link addresses usually exist within a flat address space and have a pre-established and typically
fixed relationship to a specific device.

End systems generally have only one physical network connection and thus have only one data-
link address. Routers and other internetworking devices typically have multiple physical network
connections and therefore have multiple data-link addresses.


MAC Addresses
Media Access Control (MAC) addresses consist of a subset of data link layer addresses. MAC
addresses identify network entities in LANs that implement the IEEE MAC sublayer of the data
link layer. As with most data-link addresses, MAC addresses are unique for each LAN interface.

MAC addresses are 48 bits in length and are expressed as 12 hexadecimal digits. The first 6
hexadecimal digits, which are administered by the IEEE, identify the manufacturer or vendor and
thus comprise the Organizationally Unique Identifier (OUI). The last 6 hexadecimal digits
comprise the interface serial number, or another value administered by the specific vendor. MAC
addresses sometimes are called burned-in addresses (BIAs) because they are burned into read-
only memory (ROM) and are copied into random-access memory (RAM) when the interface card
initializes.
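A small sketch of reading these two parts out of an address (the helper function is invented for illustration, and the tiny OUI table shown stands in for the real IEEE registry):

# Sketch: split a 48-bit MAC address into its OUI and vendor-assigned parts.
def split_mac(mac):
    digits = mac.replace(":", "").replace("-", "").upper()
    if len(digits) != 12:
        raise ValueError("a MAC address has 12 hexadecimal digits")
    return digits[:6], digits[6:]          # (OUI, vendor-assigned serial)

# Example only: a tiny OUI table standing in for the IEEE registry.
EXAMPLE_OUIS = {"00000C": "Cisco", "AABBCC": "ExampleCo (hypothetical)"}

oui, serial = split_mac("00:00:0c:12:34:56")
print(oui, serial)                       # 00000C 123456
print(EXAMPLE_OUIS.get(oui, "unknown"))  # Cisco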

Ethernet, FAST Ethernet, Gigabit Ethernet


Ethernet 802.3: Carrier Sense Multiple Access with Collision Detection (CSMA/CD) - This
protocol is commonly used in bus (Ethernet) implementations. Multiple access refers to the fact
that in bus systems, each station has access to the common cable.

Carrier sense refers to the fact that each station listens to see if no other station is transmitting
before sending data.


Collision detection refers to the principle of listening to see if other stations are transmitting whilst
we are transmitting.

In bus systems, all stations have access to the same cable medium. It is therefore possible that a
station may already be transmitting when another station wants to transmit. Rule 1 is that a
station must listen to determine if another station is transmitting before initiating a transmission. If
the network is busy, then the station must back off and wait a random interval before trying again.
Rule 2 is that a station, which is transmitting, must monitor the network to see if another station
has begun transmission. This is a collision, and if this occurs, both stations must back off and
retry after a random time interval. As it takes a finite time for signals to travel down the cable, it is
possible for more than one station to think that the network is free and both grab it at the same
time.
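In Ethernet, the "random interval" in these rules is chosen by truncated binary exponential back-off. The sketch below illustrates that idea under simplifying assumptions (the 51.2 microsecond slot time corresponds to 10 Mbps Ethernet; the printed values will of course vary from run to run):

# Sketch: truncated binary exponential back-off after repeated collisions.
import random

SLOT_TIME_US = 51.2        # one Ethernet slot time in microseconds (10 Mbps)

def backoff_delay(attempt):
    """Random delay (in microseconds) before retry number `attempt` (1-based)."""
    k = min(attempt, 10)                     # the range stops growing after 10 tries
    slots = random.randint(0, 2 ** k - 1)    # pick 0 .. 2^k - 1 slot times
    return slots * SLOT_TIME_US

for attempt in range(1, 5):
    print(f"collision {attempt}: wait {backoff_delay(attempt):7.1f} us")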

CSMA/CD models what happens in the real world. People involved in a group conversation tend
to obey much the same behavior.

Fast Ethernet
The growing importance of LANs and the increasing demand for data in a variety of forms (text,
digital images, and voice) are fueling the need for high-bandwidth networks. To continue to
support business operations, information systems (IS) managers must consider extending or
replacing traditional network technologies with new, high-performance solutions. Managers with
networks already running on 10 megabit per second (Mbps) Ethernet have an advantage. They
can easily upgrade to Fast Ethernet for 100 Mbps performance with minimal cost and service
disruption.

As a simple evolution of the familiar IEEE 802.3 Ethernet standard, 100Base-T Fast Ethernet
preserves the core structure of Ethernet and includes support for up to 100Mbps and the most
widely used cabling schemes. It can be easily integrated with existing 10 Mbps Ethernet networks
to provide ten times more network throughput to the desktop at a minimal incremental cost. Its
interoperability with other low- and high-bandwidth networking technologies offers network
administrators more flexibility in structuring high-speed LANs and WANs.

Fast Ethernet is a natural extension of the existing 10 Mbps Ethernet technologies. It is backward
compatible with existing networking protocols, media, and standards. It is cost-effective, easy to
deploy, and offers 10 times the speed of 10Base-T Ethernet at a low cost of ownership. 100Base-
T's backward compatibility allows information resource departments to protect their investment in
Ethernet expertise while delivering the performance required by power users.
100Base-T Fast Ethernet consists of five component specifications: the Media Access Control
(MAC) layer, the Media Independent Interface (MII), and three physical layers supporting the
most widely used cabling types. Each of these components was designed to preserve
compatibility with 10 Mbps Ethernet and existing installations.

The Fast Ethernet architecture allows scalable performance up to 100 Mbps, meaning that higher
throughput of the workstation at the sending or receiving end directly translates into more
performance through the network. With a Fast Ethernet connection, users can finally tap into the
network I/O performance of their high-end workstations and servers.

The growing importance of LANs and the increasing complexity of desktop computing
applications are fueling the need for high performance networks. A number of high-speed LAN
technologies are proposed to provide greater bandwidth and improve client/server response
times. Among them, Fast Ethernet, or 100BASE-T, provides a non-disruptive, smooth evolution
from the current 10BASE-T technology. Ethernet's dominant market position virtually guarantees
cost-effective and high-performance Fast Ethernet solutions in the years to come.

100Mbps Fast Ethernet is a standard specified by the IEEE 802.3 LAN committee. It is an
extension of the 10Mbps Ethernet standard with the ability to transmit and receive data at
100Mbps, while maintaining the Carrier Sense Multiple Access with Collision Detection
(CSMA/CD) Ethernet protocol.

Gigabit Ethernet
Gigabit Ethernet is based on the same Ethernet standard that IT managers already know and
use. One of the original network architectures, defined during the 1970s, Ethernet is now
widespread throughout the world. From the first implementation of the Ethernet specification —
jointly developed by Digital, Intel and Xerox — this networking technology has proven itself in
terms of performance, reliability and an ever-growing number of established network installations.

Gigabit Ethernet builds on these proven qualities, but is 100 times faster than regular Ethernet
and 10 times faster than Fast Ethernet. The principal benefits of Gigabit Ethernet include:
 Increased bandwidth for higher performance and elimination of bottlenecks
 Broad deployment capabilities without re-wiring, using 1000BASE-T Gigabit over
Category 5 copper cabling
 Aggregate bandwidth to 16Gbps through IEEE 802.3ad and Intel® Link Aggregation
using Intel server adapters and switches
 Full-duplex capacity, allowing data to be transmitted and received at the same time so
that the effective bandwidth is virtually doubled
 Quality of Service (QoS) features which can be used to help eliminate jittery video or
distorted audio
 Low cost of acquisition and ownership

Standards Evolution
Gigabit Ethernet is a function of technological evolution in response to industry demand. It is an
extension of the 10Mbps Ethernet networking standard, 10Base-T, and the 100Mbps Fast
Ethernet standards, 100Base-TX and 100Base-FX (Table 1). Two benefits leading to Ethernet's
longevity and success are its low cost and ease of implementation. Besides offering high-speed
connectivity at an economical price and support for a variety of transmission media, the Ethernet
standard also offers a broad base of support for a huge and ever-growing variety of LAN
applications. It is also easily scalable from 10Mbps to systems with higher-speed 100Mbps and
1000Mbps throughput.

Nomenclature    Speed      Distance   Media
10BASE-T        10 Mbps    100 m      Copper
100BASE-TX      100 Mbps   100 m      Copper
100BASE-FX      100 Mbps   2 km       Multimode fiber
1000BASE-LX     1000 Mbps  5 km       Single-mode fiber
                1000 Mbps  550 m      Multimode fiber
1000BASE-SX     1000 Mbps  550 m      Multimode fiber (50 micron)
                1000 Mbps  275 m      Multimode fiber (62.5 micron)
1000BASE-CX     1000 Mbps  25 m       Copper
1000BASE-T      1000 Mbps  100 m      Copper

In June of 1998, the IEEE approved the Gigabit Ethernet standard over fiber (LX and SX) and
short-haul copper (CX) as IEEE 802.3z. The fiber implementation was widely supported. With
approval of 802.3z, companies could rely on a well-known, standards-based approach to improve
traffic flow in congested areas without having to upgrade to an unproven or non-standardized
technology.


Gigabit Ethernet was originally designed as a switched technology, using fiber for uplinks and for
connections between buildings. Since then, Gigabit Ethernet has also been used extensively in
servers with Gigabit Ethernet network adapters and along backbones to remove traffic
bottlenecks in these areas of aggregation.

In June of 1999, the IEEE further standardized Gigabit Ethernet over copper (1000BASE-T) as
IEEE 802.3ab, allowing 1 Gbps speeds to be transmitted over Category 5 cable. Since Category 5
makes up a large portion of the installed cabling base, migrating to Gigabit Ethernet has never
been easier. Organizations can now replace network adapters with Gigabit Ethernet and migrate
to higher speeds more extensively without having to re-wire the infrastructure.

This is especially important in areas where existing network wiring is difficult to access, such as
the utility risers typically located between floors in large office buildings. Without the new
standard, future deployment of Gigabit Ethernet might have required costly replacement of
cabling in these risers.
However, even with the new standard, existing cabling must meet certain characteristics.

Gigabit Ethernet is fully compatible with the large installed base of Ethernet and Fast Ethernet
nodes. It employs all of the same specifications defined by the original Ethernet standard,
including:
 CSMA/CD protocol
 Ethernet frame or "packet" format
 Full duplex
 Flow control
 Management objects as defined by the IEEE 802.3 standard

Because it's part of the Ethernet suite of standards, Gigabit Ethernet also supports traffic
management techniques that deliver Quality of Service over Ethernet, such as:
 IEEE 802.1p Layer 2 prioritization
 ToS coding bits for Layer 3 prioritization
 Differentiated Services
 Resource Reservation Protocol (RSVP)

Gigabit Ethernet can also take advantage of 802.1Q VLAN support, Layer 4 filtering, and Layer 3
switching at Gigabit speeds. In addition, bandwidth up to 16Gbps can be achieved by trunking
either several Gigabit switch ports or Gigabit server adapters together using IEEE 802.3ad or
Intel Link Aggregation.
All of these popular Ethernet technologies, which are deployed in a variety of network
infrastructure devices, are applicable to Gigabit Ethernet.


Chapter 6 – Network Components

Network Segments
A network segment:
 Is a length of cable
 Can have devices attached to the cable
 Has a unique address
 Has a limit on its length and the number of devices which can be attached to it

Large networks are made by combining several individual network segments using appropriate
devices such as routers and/or bridges.

In the above diagram, a bridge is used to allow traffic from one network segment to the other.
Each network segment is considered unique and has its own limits of distance and the number of
connections possible.

When network segments are combined into a single large network, paths exist between the
individual network segments. These paths are called routes, and devices like routers and bridges
keep tables which define how to get to a particular computer on the network. When a packet
arrives, the router/bridge will look at the destination address of the packet and determine which
network segment the packet should be transmitted on in order to get to its destination.


Introduction to Dial-Up Networking

Dial-up networking refers to the technology that enables you to connect your computer to a
network via a modem. If your computer is not connected to a LAN and you want to connect to the
Internet, you need to configure Dial-Up Networking (DUN) to dial a Point of Presence (POP) and
log into your Internet Service Provider (ISP). Your ISP will need to provide certain information,
such as the gateway address and your computer's IP address.

Repeaters and Hubs


Repeaters EXTEND network segments. They amplify the incoming signal received from one
segment and send it on to all other attached segments. This allows the distance limitations of
network cabling to be extended. There are limits on the number of repeaters which can be used.
A repeater counts as a single node in the maximum node count associated with the Ethernet
standard (30 for thin coax).

Repeaters also allow isolation of segments in the event of failures or fault conditions.
Disconnecting one side of a repeater effectively isolates the associated segments from the
network.

Using repeaters simply allows you to extend your network distance limitations. It does not give
you any more bandwidth or allow you to transmit data faster.


It should be noted that in the above diagram, the network number assigned to the main network
segment and the network number assigned to the other side of the repeater are the same. In
addition, the traffic generated on one segment is propagated onto the other segment. This causes
a rise in the total amount of traffic, so if the network segments are already heavily loaded, it's not
a good idea to use a repeater.

A repeater works at the Physical Layer by simply repeating all data from one segment to another.

Repeater features
 Increase traffic on segments
 Have distance limitations
 Limitations on the number that can be used
 Propagate errors in the network
 Cannot be administered or controlled via remote access
 Cannot loop back to itself (must be unique single paths)
 No traffic isolation or filtering

There are many types of hubs. Passive hubs are simple splitters or combiners that group
workstations into a single segment, whereas active hubs include a repeater function and are thus
capable of supporting many more connections.
Nowadays, with the advent of 10BaseT, hub concentrators have become very popular. These are
very sophisticated and offer significant features which make them radically different from the
older hubs that were available during the 1980s.

These 10BaseT hubs provide each client with exclusive access to the full bandwidth, unlike bus
networks where the bandwidth is shared. Each workstation plugs into a separate port, which runs
at 10Mbps and is for the exclusive use of that workstation, thus there is no contention to worry
about like in Ethernet.

These 10BaseT hubs also include buffering of packets and filtering, so that unwanted packets (or
packets which contain errors) are discarded. SNMP management is also a common feature.


In standard Ethernet, all stations are connected to the same network segment in bus
configuration. Traffic on the bus is controlled using the CSMA (Carrier Sense Multiple Access)
protocol, and all stations share the available bandwidth.

10BaseT hubs dedicate the entire bandwidth to each port (workstation). The workstations attach
to the hub using UTP. The hub provides a number of ports which are logically combined using a
single backplane, which often runs at a much higher data rate than that of the ports.

Ports can also be buffered, to allow packets to be held in case the hub or port is busy. And,
because each workstation has its own port, it does not contend with other workstations for
access, having the entire bandwidth available for its exclusive use.

The ports on a hub all appear as one Ethernet segment. In addition, hubs can be stacked or
cascaded (using master/slave configurations) together, to add more ports per segment. As hubs
do not count as repeaters, this is a better option for adding more workstations than the use of a
repeater.

Hub options also include an SNMP (Simple Network Management Protocol) agent. This allows
the use of network management software to remotely administer and configure the hub. Detailed
statistics related to port usage and bandwidth are often available, allowing informed decisions to
be made concerning the state of the network.

The advantages for these newer 10BaseT hubs are,


 Each port has exclusive access to its bandwidth (no CSMA/CD)
 Hubs may be cascaded to add additional ports
 SNMP managed hubs offer good management tools and statistics
 Utilize existing cabling and other network components
 Becoming a low cost solution

Bridges and Switches


Bridges interconnect Ethernet segments. During initialization, the bridge learns about the network
and the routes. Packets are passed on to other network segments based on the MAC layer. Each
time the bridge is presented with a frame, the source address is stored. The bridge builds up a
table which identifies the segment on which each device is located. This internal table is then
used to determine which segment incoming frames should be forwarded to. The size of this table
is important, especially if the network has a large number of workstations/servers.


The advantages of bridges are


 Increase the number of attached workstations and network segments
 Since bridges buffer frames, it is possible to interconnect different segments which use
different MAC protocols
 Since bridges work at the MAC layer, they are transparent to higher level protocols
 By subdividing the LAN into smaller segments, overall reliability is increased and the
network becomes easier to maintain
 Used for non-routable protocols like NETBEUI which must be bridged
 Help localize network traffic by only forwarding data onto other segments as required
(unlike repeaters)

The disadvantages of bridges are


 The buffering of frames introduces network delays
 Bridges may overload during periods of high traffic
 Bridges, which combine different MAC protocols, require the frames to be modified
before transmission onto the new segment. This causes delays
 In complex networks, data may be sent over redundant paths, and the shortest path is
not always taken
 Bridges pass on broadcasts, giving rise to broadcast storms on the network

Bridges are ideally used in environments where there are a number of well-defined workgroups,
each operating more or less independently of each other, with occasional access to servers
outside of their localized workgroup or network segment. Bridges do not offer performance
improvements when used in diverse or scattered workgroups, where the majority of access
occurs outside of the local segment.

The diagram below shows two separate network segments connected via a bridge. Note that
each segment must have a unique network address number in order for the bridge to be able to
forward packets from one segment to the other.

Ideally, if workstations on network segment A needed access to a server, the best place to locate
that server is on the same segment as the workstations, as this minimizes traffic on the other
segment, and avoids the delay incurred by the bridge.

A bridge works at the MAC Layer by looking at the destination address and forwarding the frame
to the appropriate segment upon which the destination computer resides.


Bridge features
 Operate at the MAC layer (layer 2 of the OSI model)
 Can reduce traffic on other segments
 Broadcasts are forwarded to every segment
 Most allow remote access and configuration
 Often SNMP (Simple Network Management Protocol) enabled
 Loops can be used (redundant paths) if using spanning tree algorithm
 Small delays introduced
 Fault tolerant by isolating fault segments and reconfiguring paths in the event of failure
 Not efficient with complex networks
 Redundant paths to other networks are not used (would be useful if the major path being
used was overloaded)
 Shortest path is not always chosen by spanning tree algorithm

Ethernet switches increase network performance by decreasing the amount of extraneous traffic
on individual network segments attached to the switch. They also filter packets a bit like a router
does. In addition, Ethernet switches work and function like bridges at the MAC layer, but instead
of reading the entire incoming Ethernet frame before forwarding it to the destination segment,
they usually read only the destination address in the frame before re-transmitting it to the correct
segment. In this way, switches forward frames faster than bridges, offering fewer delays through
the network, and hence better performance.
When a packet arrives, the header is checked to determine which segment the packet is destined
for, and then it is forwarded to that segment. If the packet is destined for the same segment that it
arrives on, the packet is dropped and not retransmitted. This prevents the packet being
broadcast onto unnecessary segments, reducing the traffic.

Nodes which inter-communicate frequently should be placed on the same segment. Switches
work at the MAC layer level.

Switches divide the network into smaller collision domains [a collision domain is a group of
workstations that contend for the same bandwidth]. Each segment into the switch has its own
collision domain (where the bandwidth is competed for by workstations in that segment). As
packets arrive at the switch, it looks at the MAC address in the header, and decides which
segment to forward the packet to. Higher protocols like IPX and TCP/IP are buried deep inside
the packet, so are invisible to the switch. Once the destination segment has been determined, the
packet is forwarded without delay.

Each segment attached to the switch is considered to be a separate collision domain. However,
the segments are still part of the same broadcast domain [a broadcast domain is a group of
workstations which share the same network subnet; in TCP/IP this is defined by the subnet
mask]. Broadcast packets which originate on any segment will be forwarded to all other
segments (unlike a router). On some switches, it is possible to disable this broadcast traffic.

Some vendors implement a broadcast throttle feature, whereby a limit is placed on the number of
broadcasts forwarded by the switch over a certain time period. Once a threshold level has been
reached, no additional broadcasts are forwarded till the time period has expired and a new time
period begins.
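As a hedged sketch of how such a throttle might behave (the threshold, the window length, and the class itself are invented for illustration; real products differ in their exact behaviour):

# Sketch: a simple per-time-window broadcast throttle.
import time

class BroadcastThrottle:
    def __init__(self, max_broadcasts=100, window_seconds=1.0):
        self.max = max_broadcasts          # assumed threshold
        self.window = window_seconds       # assumed window length
        self.count = 0
        self.window_start = time.monotonic()

    def allow(self):
        """Return True if this broadcast may be forwarded."""
        now = time.monotonic()
        if now - self.window_start >= self.window:
            self.window_start = now        # a new time period begins
            self.count = 0
        if self.count < self.max:
            self.count += 1
            return True
        return False                       # threshold reached: drop the broadcast

throttle = BroadcastThrottle(max_broadcasts=3, window_seconds=1.0)
print([throttle.allow() for _ in range(5)])   # [True, True, True, False, False]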

Ethernet Switching: Advantages


 The existing cabling structure and network adapters are preserved
 Switches can be used to segment overloaded networks
 Switches can be used to create server farms or implement backbones
 Technology is proven, Ethernet is a widely used standard
 Improved efficiency and faster performance due to low latency switching times
 Each port does not contend with other ports, each having their own full bandwidth (there
is no contention like there is on Ethernet)

Routers and Gateways


Routers pass packets only to the network segment they are destined for. They work similarly to
bridges and switches in that they filter out unnecessary network traffic and remove it from network
segments. Routers generally work at the protocol level.

Routers were devised in order to separate networks logically. For instance, a TCP/IP router can
segment the network based on groups of TCP/IP addresses. Filtering at this level (on TCP/IP
addresses, also known as layer 3 switching) takes longer than the filtering done by a bridge or
switch, which only looks at the MAC layer.
Most routers can also perform bridging functions. A major feature of routers, because they can
filter packets at a protocol level, is the ability to act as a firewall. This is essentially a barrier which
prevents unwanted packets from either entering or leaving designated areas of the network.

Typically, an organization which connects to the Internet will install a router as the main gateway
link between its network and the outside world. Configuring the router with access lists (which
define what protocols and which hosts have access) enforces security by restricting (or allowing)
access to either internal or external hosts.

For example, an internal WWW server can be allowed IP access from external networks, while
other company servers which contain sensitive data can be protected, so that hosts outside the
company are prevented from accessing them (you could even deny internal workstations access
if required).
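A hedged sketch of the idea (the addresses, the rule format and the first-match-wins evaluation are assumptions made for this illustration, not any particular vendor's access-list syntax):

# Sketch: a tiny first-match-wins access list for incoming packets.
import ipaddress

# Each rule: (action, source network, destination network, protocol)
ACCESS_LIST = [
    ("permit", "0.0.0.0/0",      "203.0.113.10/32", "tcp/80"),  # public WWW server
    ("deny",   "0.0.0.0/0",      "203.0.113.0/24",  "any"),     # protect the rest
    ("permit", "203.0.113.0/24", "0.0.0.0/0",       "any"),     # internal hosts out
]

def check(src, dst, proto):
    for action, rule_src, rule_dst, rule_proto in ACCESS_LIST:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(rule_src)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(rule_dst)
                and rule_proto in ("any", proto)):
            return action
    return "deny"                      # implicit deny if nothing matches

print(check("198.51.100.7", "203.0.113.10", "tcp/80"))  # permit (WWW allowed in)
print(check("198.51.100.7", "203.0.113.20", "tcp/25"))  # deny   (sensitive server)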

A router works at the Network Layer or higher, by looking at information embedded within the
data field, like a TCP/IP address, then forwards the frame to the appropriate segment upon which
the destination computer resides.


Router features
 Use dynamic routing
 Operate at the protocol level
 Remote administration and configuration via SNMP
 Support complex networks
 The more filtering done, the lower the performance
 Provides security
 Segment networks logically
 Broadcast storms can be isolated
 Often provide bridge functions also
 More complex routing protocols used [such as RIP, IGRP and OSPF]

A Gateway is a machine on a network that serves as an entrance to another network. For
example, when a user connects to the Internet, that person essentially connects to a server that
issues the Web pages to the user. These two devices are host nodes, not gateways. In
enterprises, the gateway is the computer that routes the traffic from a workstation to the outside
network that is serving the Web pages. In homes, the gateway is the ISP that connects the user
to the Internet.

In enterprises, the gateway node often acts as a proxy server and a firewall. The gateway is also
associated with both a router, which uses headers and forwarding tables to determine where
packets are sent, and a switch, which provides the actual path for the packet in and out of the
gateway.


Chapter 7 – Switching

Switched Ethernets
Local Area Network (LAN) technology has made a significant impact on almost every industry.
The operations of these industries depend on computers and networking. Data is stored on
computers rather than on paper, and the dependence on networking is so high that banks,
airlines, insurance companies and many government organizations would stop functioning if
there were a network failure. Since the reliance on networks is so high and network traffic is
increasing, we have to address some of the bandwidth problems this has caused and find ways
to tackle them.

A LAN switch is a device that provides much higher port density at a lower cost than traditional
bridges. For this reason, LAN switches can accommodate network designs featuring fewer users
per segment, thereby increasing the average available bandwidth per user. This chapter provides
a summary of general LAN switch operation and maps LAN switching to the OSI reference
model.

The trend toward fewer users per segment is known as microsegmentation. Microsegmentation
allows the creation of private or dedicated segments, that is, one user per segment. Each user
receives instant access to the full bandwidth and does not have to contend for available
bandwidth with other users. As a result, collisions (a normal phenomenon in shared-medium
networks employing hubs) do not occur. A LAN switch forwards frames based on either the
frame's Layer 2 address (Layer 2 LAN switch), or in some cases, the frame's Layer 3 address
(multi-layer LAN switch). A LAN switch is also called a frame switch because it forwards Layer 2
frames, whereas an ATM switch forwards cells. Although Ethernet LAN switches are most
common, Token Ring and FDDI LAN switches are becoming more prevalent as network
utilization increases.

The earliest LAN switches were developed in 1990. They were Layer 2 devices dedicated to
solving bandwidth issues. Recent LAN switches are evolving to multi-layer devices capable of
handling protocol issues involved in high-bandwidth applications that historically have been
solved by routers. Today, LAN switches are being used to replace hubs in the wiring closet
because user applications are demanding greater bandwidth.

A LAN switch is a device that typically consists of many ports that connect LAN segments
(Ethernet and Token Ring) and a high-speed port (such as 100-Mbps Ethernet, Fiber Distributed
Data Interface [FDDI], or 155-Mbps ATM). The high-speed port, in turn, connects the LAN switch
to other devices in the network.
A LAN switch has dedicated bandwidth per port, and each port represents a different segment.

When a LAN switch first starts up and as the devices that are connected to it request services
from other devices, the switch builds a table that associates the MAC address of each local
device with the port number through which that device is reachable. That way, when Host A on
Port 1 needs to transmit to Host B on Port 2, the LAN switch forwards frames from Port 1 to Port
2, thus sparing other hosts on Port 3 from responding to frames destined for Host B. If Host C
needs to send data to Host D at the same time that Host A sends data to Host B, it can do so
because the LAN switch can forward frames from Port 3 to Port 4 at the same time it forwards
frames from Port 1 to Port 2.

Whenever a device connected to the LAN switch sends a packet to an address that is not in the
LAN switch's table (for example, to a device that is beyond the LAN switch), or whenever the
device sends a broadcast or multicast packet, the LAN switch sends the packet out all ports
(except for the port from which the packet originated) ---a technique known as flooding.
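
The table-building and flooding behaviour described above can be illustrated with a short sketch.
This is a simplified Python model, not the implementation of any particular switch; the MAC
addresses and port numbers are invented for the example.

# Minimal sketch of a learning switch: learn source MACs, forward known
# destinations out a single port, flood unknown destinations and broadcasts.
class LearningSwitch:
    def __init__(self, num_ports):
        self.mac_table = {}            # MAC address -> port number
        self.num_ports = num_ports

    def receive(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port          # learn the sender's port
        out_port = self.mac_table.get(dst_mac)
        if out_port is not None and out_port != in_port:
            return [out_port]                      # known destination
        # Unknown destination (or broadcast): flood to every other port.
        return [p for p in range(1, self.num_ports + 1) if p != in_port]

sw = LearningSwitch(num_ports=4)
print(sw.receive("AA:AA", "BB:BB", in_port=1))     # BB:BB unknown -> flooded
print(sw.receive("BB:BB", "AA:AA", in_port=2))     # AA:AA learned -> [1]
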


Because they work like traditional "transparent" bridges, LAN switches dissolve previously well-
defined workgroup or department boundaries. A network built and designed only with LAN
switches appears as a flat network topology consisting of a single broadcast domain.
Consequently, these networks are liable to suffer the problems inherent in flat (or bridged)
networks---that is, they do not scale well. Note, however, that LAN switches that support VLANs
are more scalable than traditional bridges.
Local Area Networks in many organizations have to deal with increased bandwidth demands.
More and more users are being added to the existing LANs. If this were the only problem,
upgrading the backbone that connects the various LANs could solve it. Bridges and routers can be
used to keep the number of users per LAN at an optimal number. However, with the increase in
workstation speed, the bandwidth requirement of each machine has grown more than five times in
the last few years. Coupled with bandwidth-hungry multimedia applications and unmanaged,
bursty traffic, this problem is further aggravated.

With the increasing use of client-server architecture, in which most of the software is stored on the
server, the traffic from workstations to the server has increased. Further, the use of a large number
of GUI applications means more pictures and graphics files need to be transferred to the
workstations. This is another cause of increased traffic per workstation. LAN switching is a fast-
growing market, with virtually every network vendor marketing its products. Besides LAN
switches, switching routers and switching hubs are also sold.
The reason switching works is simple. Ethernet, Token Ring and FDDI all use shared media, and
conventional Ethernet is bridged or routed. A 100 Mbps shared Ethernet has to divide its
bandwidth over a number of users because of the shared access. With a switched network,
however, each port can be connected directly, so bandwidth is shared only among the users in a
workgroup (connected to the ports). Since there is less media sharing, more bandwidth is
available. Switches can also maintain multiple connections at the same time.
Switches normally have higher port counts than bridges and divide the network into several
dedicated channels parallel to each other. These multiple independent data paths increase the
throughput capacity of a switch. There is no contention to gain access, and LAN switch
architecture is scalable. Another advantage of switches is that most of them are self-configuring,
minimizing network downtime, although means for manual configuration are also available. If a
segment is attached to a port of a switch, then CSMA/CD is used for media access on that
segment. However, if the port has only one station attached, then there is no need for any media
access protocol. The basic operation of a switch is like that of a multiport bridge. The source and
destination Medium Access Control (MAC) addresses of an incoming frame are looked up and, if
the frame is to be forwarded, it is sent to the destination port. Although this is mostly what all
switches do, there are a variety of features that distinguish them, such as the following.
The full duplex mode of Ethernet allows simultaneous flow of traffic between two stations without
collisions, so Ethernet in full duplex mode doesn't require collision detection when only one station
is attached to each port. There is no contention between stations to transmit over the medium,
and a station can transmit whenever a frame is queued in its adapter. The station can also receive
at the same time. This has the potential to double the performance of a server. The effective
bandwidth is equal to the number of switched ports times the bit rate of the medium divided by
two for half duplex, and equal to the number of switched ports times the bit rate of the medium for
full duplex. One catch is that, while a client can send and receive frames at the same time, at peak
loads the server might be overburdened. This may lead to frame loss and eventual loss of the
connection to the server. To avoid such a situation, flow control at the client level may be used.
Another big advantage of full duplex is that, since there cannot be a collision, there is no MAC-
layer limitation on distance (e.g. 2500 m for Ethernet). One can have a 100 km Ethernet link using
single-mode fiber; the limitation is now at the physical layer. Thus, media-speed rates can be
sustained, depending upon the station and the switch to which it is attached. The user is unaware
of full duplex operation, and no new software applications are needed for this enhancement.
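
As a rough illustration of the aggregate-bandwidth formulas quoted above, the following sketch
plugs in assumed numbers (24 ports at 100 Mbps); the port count and bit rate are examples only.

# Aggregate bandwidth of a switched LAN (assumed figures).
ports = 24                 # assumed number of switched ports
bit_rate_mbps = 100        # assumed bit rate of the medium per port

half_duplex = ports * bit_rate_mbps / 2    # send and receive share the medium
full_duplex = ports * bit_rate_mbps        # simultaneous send and receive

print(f"Half duplex aggregate: {half_duplex:.0f} Mbps")   # 1200 Mbps
print(f"Full duplex aggregate: {full_duplex} Mbps")       # 2400 Mbps
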


Flow control is necessary when the destination port is receiving more traffic than it can handle.
Since the buffers are only meant for absorbing peak traffic, frames may be dropped under
excessive load. Dropping a frame is a costly operation, as the recovery delay is of the order of
seconds for each dropped frame. Traditional networks do not have a Layer 2 flow control
mechanism and rely mainly on higher layers for this. Switches come with various flow control
strategies, depending on the vendor. Some switches, upon finding that the destination port is
overloaded, will send a jam message to the sender. Since the decoding of the MAC address is
fast and a switch can respond with a jam message in very little time, collisions or packet loss can
be avoided. To the sender, a jam packet is like a virtual collision, so it will wait a random time
before retransmitting. This strategy works because only those frames that go to the overloaded
destination port are jammed, and not the others.

Switching Methods

Cut-through switching
Marked by low latency, these switches begin transmission of the frame to the destination port
even before the whole frame is received. Frame latency is thus about 1/20th of that in store-and-
forward switches (explained below). Cut-through switches with runt (collision fragment) detection
will store the frame in a buffer and begin transmission as soon as the possibility of a runt is
eliminated and they can grab the outgoing channel. Filtering of runts is important, as they
seriously waste the bandwidth of the network. The delay in these switches is about 60
microseconds. Compare this with store-and-forward switches, where every frame is buffered
(delay: 0.8 microseconds per byte); the delay for a 1500-byte frame is thus 1200 microseconds.
No Cyclic Redundancy Check (CRC) verification is done in cut-through switches.

Store-and-forward switching
This type of switch receives the whole frame before forwarding it. While the frame is being
received, processing is done. Upon complete arrival of the frame, the CRC is verified and the
frame is forwarded to the output port. Even though store-and-forward switches have some
disadvantages, in certain cases they are essential, for example when a slow port is transmitting to
a fast port: the frame must be buffered and transmitted only when it is completely received.
Another case is high traffic conditions, when frames have to be buffered because the output port
may be busy. As traffic increases, the chances of a given output port being busy obviously
increase, so even cut-through switches may need to buffer frames. Thus, in some cases store-
and-forward switching has an obvious advantage.
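
The latency figures quoted above can be checked with a quick back-of-the-envelope calculation.
The 0.8 microseconds per byte and 60 microsecond figures are the ones given in the text; the
1500-byte frame size is an assumed example.

# Store-and-forward vs. cut-through latency for one frame.
frame_bytes = 1500
store_and_forward_us = 0.8 * frame_bytes   # whole frame buffered first
cut_through_us = 60                        # forwarding starts after the header

print(f"Store-and-forward: {store_and_forward_us:.0f} microseconds")   # 1200
print(f"Cut-through:       {cut_through_us} microseconds")
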

STP IEEE 802.1d


Spanning-Tree Protocol (STP) is a Layer 2 (L2) protocol designed to run on bridges and
switches. The specification for STP is called 802.1d. The main purpose of STP is to ensure that
you do not run into a loop situation when you have redundant paths in your network. Loops are
deadly to a network.


STP runs on bridges and switches that are 802.1d-compliant. There are different flavors of STP,
with IEEE 802.1d being the most popular and widely implemented. STP is implemented on
bridges and switches in order to prevent loops in the network. STP should be used in situations
where you want redundant links, but not loops. Redundant links are as important as backups in
case of a failure in the network: if your primary link fails, the backup links are activated so that
users can continue using the network. Without STP on the bridges and switches, such a situation
could result in a loop.

Consider the following network:

In the network diagram above, a redundant link is planned between Switch A and B, but this
creates the possibility of having a bridging loop. This is because, for example, a broadcast or
multicast packet transmitted from Station M and destined for Station N would simply keep
circulating again and again between both switches.


However, with STP running on both switches, the network logically looks as follows:

The following applies to the network diagram above:


 Switch 15 is the backbone switch.
 Switches 12, 13, 14, 16, and 17 are switches attached to workstations and PCs.
 The following VLANs are defined on the network: 1, 200, 201, 202, 203, and 204.
 The VTP domain name is STD-Doc.
To provide this desired path redundancy, as well as to avoid loop conditions, STP defines a tree
that spans all switches in an extended network. STP forces certain redundant data paths into a
standby (blocked) state, while leaving others in a forwarding state. If a link in the forwarding state
becomes unavailable, STP reconfigures the network and reroutes data paths by activating the
appropriate standby path.

With STP, the key is for all the switches in the network to elect a root bridge that becomes the
focal point of the network. All other decisions in the network, such as which port is blocked and
which port is put in forwarding mode, are made from the perspective of this root bridge. A
switched environment, unlike a bridged one, most likely deals with multiple VLANs. When
implemented in a switching network, the root bridge is usually referred to as the root switch. Each
VLAN (because it is a separate broadcast domain) must have its own root bridge. The roots for
the different VLANs can all reside in a single switch, or they can reside in different switches.

Note: The selection of the root switch for a particular VLAN is very important. You can choose it,
or you can let the switches decide it on their own. The second option is risky because there may
be sub-optimal paths in your network if the root selection process is not controlled by you.

All the switches exchange information to use in the selection of the root switch, as well as for
subsequent configuration of the network. This information is carried in Bridge Protocol Data Units
(BPDUs). A BPDU contains parameters that the switches use in the selection process. Each
switch compares the parameters in the BPDU that it sends to its neighbor with those in the BPDU
that it receives from its neighbor.

The thing to remember in the STP root selection process is that smaller is better. If the Root ID
that Switch A is advertising is smaller than the Root ID that its neighbor (Switch B) is advertising,
Switch A's information is better. Switch B stops advertising its own Root ID and instead accepts
that of Switch A.
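
The "smaller is better" comparison can be sketched as follows. The sketch assumes a bridge ID
made up of a priority value followed by the bridge MAC address, in the usual 802.1d style; the
priorities and MAC addresses below are invented examples.

# Root election sketch: the lowest (priority, MAC) pair wins.
def bridge_id(priority, mac):
    return (priority, mac.lower())     # priority first, MAC as tie-breaker

switch_a = bridge_id(32768, "00:10:7B:01:02:03")
switch_b = bridge_id(32768, "00:10:7B:AA:BB:CC")
switch_c = bridge_id(4096,  "00:10:7B:FF:FF:FF")

print(min([switch_a, switch_b, switch_c]))   # switch_c: lower priority wins
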

Before configuring STP, you need to select a switch to be the root of the spanning-tree. It does
not necessarily have to be the most powerful switch; it should be the most centralized switch on
the network. All dataflow across the network will be from the perspective of this switch. It is also
important that this switch be the least disturbed switch in the network. The backbone switches are
often selected for this function, because they typically do not have end stations connected to
them. They are also less likely to be disturbed during moves and changes within the network.


After you decide which switch should be the root switch, set the appropriate variables to
designate it as the root switch. The only variable you have to set is the bridge priority. If this
switch has a bridge priority that is lower than all other switches, it will be automatically selected by
the other switches as the root switch.

Clients (end stations) on switch ports: You can also issue the set spantree portfast command.
This is done on a per-port basis. The portfast variable, when enabled on a port, causes the port to
switch immediately from blocking mode to forwarding mode. This helps prevent time-outs on
clients that use Novell NetWare or that use the Dynamic Host Configuration Protocol (DHCP) to
obtain an IP address. However, it is important that you do not use this command on a switch-to-
switch connection; it could potentially result in a loop. The 30- to 60-second delay that occurs
when transitioning from blocking to forwarding mode prevents a temporary loop condition in the
network when two switches are connected.

Most other STP variables should be left at their default values.


Rules of Operation: The rules for how STP works are listed below. When the switches first come
up, they start the root switch selection process: each switch transmits a BPDU to its directly
connected switches on a per-VLAN basis.
As the BPDUs propagate through the network, each switch compares the BPDU it sent out to the
ones it received from its neighbors. From this comparison, the switches come to an agreement as
to which switch is the root. The switch with the lowest priority in the network wins this election
process.

Note: Remember, there will be one root switch identified per VLAN. After that root switch has
been identified, the switches follow the rules defined below.
 STP Rule One: All ports of the root switch must be in forwarding mode (except for some
corner cases where self-looped ports are involved). Next, each switch determines the best path to
get to the root. It determines this path by comparing the information in all the BPDUs received on
all of its ports. The port on which the smallest BPDU information was received is used to reach
the root switch; that port is called the root port. After a switch figures out its root port, it proceeds
to Rule Two.
 STP Rule Two: Once a switch determines its root port, that port must be set to forwarding
mode. In addition, for each LAN segment, the switches communicate with each other to determine
which switch on that LAN segment is best to use for moving data from that segment to the root
bridge. This switch is called the designated switch.
 STP Rule Three: On a given LAN segment, the designated switch's port that connects to that
LAN segment must be placed in forwarding mode.
 STP Rule Four: All other ports in all the switches (VLAN-specific) must be placed in blocking
mode. This applies only to ports that are connected to other bridges or switches. Ports connected
to workstations or PCs are not affected by STP; they remain in forwarding mode.


Chapter 8 – TCP/IP
 Layered Approach
 Understanding Architectural Models and Protocols
 How a Protocol Stack Works
 Encapsulation of Data for Network Delivery
 Internet Protocol Suite
 ICMP, IP, ARP, TCP, UDP and MAC

TCP/IP
An architectural model provides a common frame of reference for discussing Internet
communications. It is used not only to explain communication protocols but to develop them as
well. It separates the functions performed by communication protocols into manageable layers
stacked on top of each other. Each layer in the stack performs a specific function in the process
of communicating over a network.

Layered Approach
Generally, TCP/IP is described using three to five functional layers. To describe TCP/IP-based
firewalls more precisely, we have chosen the common DoD reference model, which is also known
as the Internet reference model.

The DoD Protocol Model

This model is based on the three layers defined for the DoD Protocol Model in the DDN Protocol
Handbook, Volume 1. These three layers are as follows:
• Network Access Layer
• Host-To-Host Transport Layer
• Application Layer

An additional layer, the internetwork layer, has been added to this model. The internetwork layer
is commonly used to describe TCP/IP.

Another standard architectural model that is often used to describe a network protocol stack is the
OSI reference model. This model consists of a seven-layer protocol stack.

Understanding Architectural Models and Protocols


In an architectural model, a layer does not define a single protocol—it defines a data
communication function that may be performed by any number of protocols. Because each layer
defines a function, it can contain multiple protocols, each of which provides a service suitable to
the function of that layer.

Every protocol communicates with its peer. A peer is an implementation of the same protocol in
the equivalent layer on a remote computer. Peer-level communications are standardized to
ensure that successful communications take place. Theoretically, each protocol is only concerned
with communicating to its peer—it does not care about the layers above or below it.


A dependency, however, exists between the layers. Because every layer is involved in sending
data from a local application to an equivalent remote application, the layers must agree on how to
pass data between themselves on a single computer. The upper layers rely on the lower layers to
transfer the data across the underlying network.

How a Protocol Stack Works


As the reference model indicates, protocols (which compose the various layers) are like a pile of
building blocks stacked one upon another. Because of this structure, groups of related protocols
are often called stacks or protocol stacks.
Data is passed down the stack from one layer to the next, until it is transmitted over the network
by the network access layer protocols. The four layers in this reference model are crafted to
distinguish between the different ways that the data is handled as it passes down the protocol
stack from the application layer to the underlying physical network.

At the remote end, the data is passed up the stack to the receiving application. The individual
layers do not need to know how the layers above or below them function; they only need to know
how to pass data to them.

Each layer in the stack adds control information (such as destination address, routing controls,
and checksum) to ensure proper delivery. This control information is called a header and/or a
trailer because it is placed in front of or behind the data to be transmitted. Each layer treats all of
the information that it receives from the layer above it as data, and it places its own header and/or
trailer around that information.

These wrapped messages are then passed into the layer below along with additional control
information, some of which may be forwarded or derived from the higher layer. By the time a
message exits the system on a physical link (such as a wire), the original message is enveloped
in multiple, nested wrappers—one for each layer of protocol through which the data passed.
When a protocol uses headers or trailers to package the data from another protocol, the process
is called encapsulation.

Encapsulation of Data for Network Delivery

When data is received, the opposite happens. Each layer strips off its header and/or trailer before
passing the data up to the layer above. As information flows back up the stack, information
received from a lower layer is interpreted as both a header/trailer and data. The process of
removing headers and trailers from data is called decapsulation. This mechanism enables each
layer in the transmitting computer to communicate with its corresponding layer in the receiving
computer. Each layer in the transmitting computer communicates with its peer layer in the
receiving computer via a process called peer-to-peer communication.
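
A toy sketch of encapsulation and decapsulation is shown below. The header and trailer strings
are simplified placeholders, not real protocol formats, and the layer names follow the four-layer
model used in this chapter.

# Each layer wraps the data from the layer above; the receiver unwraps in reverse.
def encapsulate(data, layers):
    for layer in layers:                                   # top layer first
        data = f"[{layer}-hdr]{data}[{layer}-trl]"
    return data

def decapsulate(data, layers):
    for layer in reversed(layers):                         # outermost first
        data = data.removeprefix(f"[{layer}-hdr]").removesuffix(f"[{layer}-trl]")
    return data

stack = ["transport", "internet", "network-access"]
on_the_wire = encapsulate("user message", stack)
print(on_the_wire)
print(decapsulate(on_the_wire, stack))                     # "user message"
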

Each layer has specific responsibilities and specific rules for carrying out those responsibilities,
and it knows nothing about the procedures that the other layers follow. A layer carries out its
tasks and delivers the message to the next layer in the protocol stack. An address mechanism is
the common element that allows data to be routed through the various layers until it reaches its
destination.


Each layer also has its own independent data structures. Conceptually, a layer is unaware of the
data structures used by the layers above and below it. In reality, the data structures of a layer are
designed to be compatible with the structures used by the surrounding layers for the sake of more
efficient data transmission. Still, each layer has its own data structures and its own terminology to
describe those structures.

The following section describes the Internet reference model in more detail. We will use this
reference model throughout this guide to describe the structure and function of the TCP/IP
protocol suite and Cisco Centric Firewall.

Understanding The Internet Reference Model


As mentioned earlier, the Internet reference model contains four layers: the network access layer,
the internetwork layer, the host-to-host transport layer, and the application layer.

In the following sections, we describe the function of each layer in more detail, starting with the
network access layer and working our way up to the application layer.

Network Access Layer


The network access layer is the lowest layer in the Internet reference model. This layer contains
the protocols that the computer uses to deliver data to the other computers and devices that are
attached to the network. The protocols at this layer perform three distinct functions:
 They define how to use the network to transmit a frame, which is the data unit passed
across the physical connection.
 They exchange data between the computer and the physical network.
 They deliver data between two devices on the same network. To deliver data on the local
network, the network access layer protocols use the physical addresses of the nodes on
the network. A physical address is stored in the network adapter card of a computer or
other device, and it is a value that is "hard-coded" into the adapter card by the
manufacturer.

Unlike higher level protocols, the network access layer protocols must understand the details of
the underlying physical network, such as the packet structure, maximum frame size, and the
physical address scheme that is used. Understanding the details and constraints of the physical
network ensures that these protocols can format the data correctly so that it can be transmitted
across the network.

Internetwork Layer
In the Internet reference model, the layer above the network access layer is called the
internetwork layer. This layer is responsible for routing messages through internetworks. Two
types of devices are responsible for routing messages between networks. The first device is
called a gateway, which is a computer that has two network adapter cards. This computer
accepts network packets from one network on one network card and routes those packets to a
different network via the second network adapter card. The second device is a router, which is a
dedicated hardware device that passes packets from one network to a different network.

The internetwork layer protocols provide a datagram network service. Datagrams are packets of
information that comprise a header, data, and a trailer. The header contains information, such as
the destination address, that the network needs to route the datagram. A header can also contain
other information, such as the source address and security labels. Trailers typically contain a
checksum value, which is used to ensure that the data is not modified in transit.
The communicating entities—which can be computers, operating systems, programs, processes,
or people—that use the datagram services must specify the destination address (using control
information) and the data for each message to be transmitted. The internetwork layer protocols
package the message in a datagram and send it off.


A datagram service does not support any concept of a session or connection. Once a message is
sent or received, the service retains no memory of the entity with which it was communicating. If
such a memory is needed, the protocols in the host-to-host transport layer maintain it. The ability
to retransmit data and check it for errors is minimal or nonexistent in datagram services. If the
receiving datagram service detects a transmission error (using the checksum value of the
datagram), it simply ignores (or drops) the datagram without notifying the receiving higher-layer
entity.

Host-to-Host Transport Layer
The protocol layer just above the internetwork layer is the host-to-host transport layer. It is
responsible for providing end-to-end data integrity and provides a highly reliable communication
service for entities that want to carry out an extended two-way conversation.
In addition to the usual transmit and receive functions, the host-to-host transport layer uses open
and close commands to initiate and terminate the connection. This layer accepts information to
be transmitted as a stream of characters, and it returns information to the recipient as a stream.
The service employs the concept of a connection (or virtual circuit). A connection is the state of
the host-to-host transport layer between the time that an open command is accepted by the
receiving computer and the time that the close command is issued by either computer.
Application Layer
The top layer in the Internet reference model is the application layer. This layer provides functions
for users or their programs, and it is highly specific to the application being performed. It provides
the services that user applications use to communicate over the network, and it is the layer in
which user-access network processes reside. These processes include all of those that users
interact with directly, as well as other processes of which the users are not aware.
This layer includes all applications protocols that use the host-to-host transport protocols to
deliver data. Other functions that process user data, such as data encryption and decryption and
compression and decompression, can also reside at the application layer.
The application layer also manages the sessions (connections) between cooperating
applications. In the TCP/IP protocol hierarchy, sessions are not identifiable as a separate layer,
and these functions are performed by the host-to-host transport layer. Instead of using the term
"session," TCP/IP uses the terms "socket" and "port" to describe the path (or virtual circuit) over
which cooperating applications communicate.
Most of the application protocols in this layer provide user services, and new user services are
added often. For cooperating applications to be able to exchange data, they must agree about
how data is represented. The application layer is responsible for standardizing the presentation of
data.
The name TCP/IP refers to a suite of data communication protocols. The name is misleading
because TCP and IP are only two of dozens of protocols that compose the suite. Its name comes
from two of the more important protocols in the suite: the Transmission Control Protocol (TCP)
and the Internet Protocol (IP).

TCP/IP originated out of the research into networking protocols that the Department of Defense
(DoD) initiated in the late 1960s, when the DoD Advanced Research Projects Agency (ARPA)
began researching the network technology that is now called packet switching.
The original focus of this research was to facilitate communication among the DoD community.
However, the network that was initially constructed as a result of this research, then called
ARPANET, gradually became known as the Internet. The TCP/IP protocols played an important
role in the development of the Internet. The TCP/IP protocols were developed during the 1970s
and early 1980s, and in 1983 they became the standard protocols for ARPANET.
Because of the history of the TCP/IP protocol suite, it is often referred to as the DoD protocol
suite or the Internet protocol suite.


Internet Protocol
IP is a connectionless protocol, which means that IP does not exchange control information
(called a handshake) to establish an end-to-end connection before transmitting data. In contrast,
a connection-oriented protocol exchanges control information with the remote computer to verify
that it is ready to receive data before sending it. When the handshaking is successful, the
computers are said to have established a connection. IP relies on protocols in other layers to
establish the connection if connection-oriented services are required.
IP also relies on protocols in another layer to provide error detection and error recovery. Because
it contains no error detection or recovery code, IP is sometimes called an unreliable protocol.

The functions performed at this layer are as follows:


 Define the datagram, which is the basic unit of transmission in the Internet. The
TCP/IP protocols were built to transmit data over the ARPANET, which was a packet
switching network. A packet is a block of data that carries with it the information
necessary to deliver it—in a manner similar to a postal letter that has an address written
on its envelope. A packet switching network uses the addressing information in the
packets to switch packets from one physical network to another, moving them toward
their final destination. Each packet travels the network independently of any other packet.
The datagram is the packet format defined by IP.
 Define the Internet addressing scheme. IP delivers the datagram by checking the
destination address in the header. If the destination address is the address of a host on
the directly attached network, the packet is delivered directly to the destination. If the
destination address is not on the local network, the packet is passed to a gateway for
delivery. Gateways and routers are devices that switch packets between the different
physical networks. Deciding which gateway to use is called routing. IP makes the routing
decision for each individual packet.
 Move data between the Network Access Layer and the Host-to-Host Transport
Layer. When IP receives a datagram that is addressed to the local host, it must pass the
data portion of the datagram to the correct host-to-host transport layer protocol. This
selection is done by using the protocol number in the datagram header. Each host-to-
host transport layer protocol has a unique protocol number that identifies it to IP.
 Route datagrams to remote hosts. Internet gateways are commonly (and perhaps
more accurately) referred to as IP routers because they use IP to route packets between
networks. In traditional TCP/IP jargon, there are only two types of network devices:
gateways and hosts. Gateways forward packets between networks and hosts do not.
However, if a host is connected to more than one network (called a multi-homed host), it
can forward packets between the networks. When a multi-homed host forwards packets,
it acts like any other gateway and is considered to be a gateway.
 Fragment and reassemble datagrams. As a datagram is routed through different
networks, it may be necessary for the IP module in a gateway to divide the datagram into
smaller pieces. A datagram received from one network may be too large to be
transmitted in a single packet on a different network. This condition only occurs when a
gateway interconnects dissimilar physical networks.

Each type of network has a maximum transmission unit (MTU), which is the largest packet it can
transfer. If the datagram received from one network is longer than the other network's MTU, it is
necessary to divide the datagram into smaller fragments for transmission. This division process is
called fragmentation.
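
A simplified view of fragmentation is sketched below. Real IP fragments carry offsets (in 8-byte
units), flags and copied header fields; this sketch only shows how a payload is split to fit a smaller
MTU, and the sizes are assumed examples.

# Split a payload so each fragment (plus a 20-byte header) fits the MTU.
def fragment(payload: bytes, mtu: int, header_len: int = 20):
    max_data = mtu - header_len
    return [payload[i:i + max_data] for i in range(0, len(payload), max_data)]

pieces = fragment(bytes(3000), mtu=1500)     # 3000-byte payload, assumed MTU
print([len(p) for p in pieces])              # [1480, 1480, 40]
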

The Internet Protocol (IP) is a network-layer (Layer 3) protocol that contains addressing
information and some control information that enables packets to be routed. IP is documented in
RFC 791 and is the primary network-layer protocol in the Internet protocol suite. Along with the
Transmission Control Protocol (TCP), IP represents the heart of the Internet protocols. IP has two
primary responsibilities: providing connectionless, best-effort delivery of datagrams through an
internetwork; and providing fragmentation and reassembly of datagrams to support data links with
different maximum-transmission unit (MTU) sizes.


IP Packet Format

The following list describes the fields of the IP packet header (a parsing sketch follows the list):
 Version—Indicates the version of IP currently used.
 IP Header Length (IHL)—Indicates the datagram header length in 32-bit words.
 Type-of-Service—Specifies how an upper-layer protocol would like a current datagram to
be handled, and assigns datagrams various levels of importance.
 Total Length—Specifies the length, in bytes, of the entire IP packet, including the data
and header.
 Identification—Contains an integer that identifies the current datagram. This field is used
to help piece together datagram fragments.
 Flags—Consists of a 3-bit field of which the two low-order (least-significant) bits control
fragmentation. The low-order bit specifies whether the packet can be fragmented. The
middle bit specifies whether the packet is the last fragment in a series of fragmented
packets. The third or high-order bit is not used.
 Fragment Offset—Indicates the position of the fragment's data relative to the beginning of
the data in the original datagram, which allows the destination IP process to properly
reconstruct the original datagram.
 Time-to-Live—Maintains a counter that gradually decrements down to zero, at which
point the datagram is discarded. This keeps packets from looping endlessly.
 Protocol—Indicates which upper-layer protocol receives incoming packets after IP
processing is complete.
 Header Checksum—Helps ensure IP header integrity.
 Source Address—Specifies the sending node.
 Destination Address—Specifies the receiving node.
 Options—Allows IP to support various options, such as security.
 Data—Contains upper-layer information.
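
The field layout above can be read directly from raw bytes. The sketch below parses a hand-built
example header with Python's struct module; it covers only the fixed 20-byte header (no options),
and the example addresses are invented.

import socket, struct

def parse_ipv4_header(raw: bytes):
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, cksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {"version": ver_ihl >> 4, "ihl_words": ver_ihl & 0x0F,
            "total_length": total_len, "ttl": ttl, "protocol": proto,
            "src": socket.inet_ntoa(src), "dst": socket.inet_ntoa(dst)}

# A hand-built header: version 4, IHL 5, TTL 64, protocol 6 (TCP).
example = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                      socket.inet_aton("192.168.1.10"),
                      socket.inet_aton("10.0.0.1"))
print(parse_ipv4_header(example))
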
Address Resolution Protocol (ARP)
For two machines on a given network to communicate, they must know each other's physical (or
MAC) addresses. By broadcasting Address Resolution Protocol (ARP) requests, a host can
dynamically discover the MAC-layer address corresponding to a particular IP network-layer
address.


After receiving a MAC-layer address, IP devices create an ARP cache to store the recently
acquired IP-to-MAC address mapping, thus avoiding having to broadcast ARP requests each time
they want to recontact the device. If the device does not respond within a specified time frame, the
cache entry is flushed.

In addition, the Reverse Address Resolution Protocol (RARP) is used to map MAC-layer
addresses to IP addresses. RARP, which is the logical inverse of ARP, might be used by diskless
workstations that do not know their IP addresses when they boot. RARP relies on the presence of
a RARP server with table entries of MAC-layer-to-IP address mappings.

Transmission Control Protocol (TCP)


The TCP provides reliable transmission of data in an IP environment. TCP corresponds to the
transport layer (Layer 4) of the OSI reference model. Among the services TCP provides are
stream data transfer, reliability, efficient flow control, full-duplex operation, and multiplexing.

With stream data transfer, TCP delivers an unstructured stream of bytes identified by sequence
numbers. This service benefits applications because they do not have to chop data into blocks
before handing it off to TCP. Instead, TCP groups bytes into segments and passes them to IP for
delivery.

TCP offers reliability by providing connection-oriented, end-to-end reliable packet delivery through
an internetwork. It does this by sequencing bytes with a forwarding acknowledgment number that
indicates to the destination the next byte the source expects to receive. Bytes not acknowledged
within a specified time period are retransmitted. The reliability mechanism of TCP allows devices
to deal with lost, delayed, duplicate, or misread packets. A time-out mechanism allows devices to
detect lost packets and request retransmission.

TCP offers efficient flow control, which means that, when sending acknowledgments back to the
source, the receiving TCP process indicates the highest sequence number it can receive without
overflowing its internal buffers.
Full-duplex operation means that TCP processes can both send and receive at the same time.

Finally, TCP's multiplexing means that numerous simultaneous upper-layer conversations can be
multiplexed over a single connection.

TCP Connection Establishment


To use reliable transport services, TCP hosts must establish a connection-oriented session with
one another. Connection establishment is performed by using a "three-way handshake"
mechanism.

A three-way handshake synchronizes both ends of a connection by allowing both sides to agree
upon initial sequence numbers. This mechanism also guarantees that both sides are ready to
transmit data and know that the other side is ready to transmit as well. This is necessary so that
packets are not transmitted or retransmitted during session establishment or after session
termination.
Each host randomly chooses a sequence number used to track bytes within the stream it is
sending and receiving. Then, the three-way handshake proceeds in the following manner:

The first host (Host A) initiates a connection by sending a packet with the initial sequence number
(X) and SYN bit set to indicate a connection request. The second host (Host B) receives the SYN,
records the sequence number X, and replies by acknowledging the SYN (with an ACK = X + 1).
Host B includes its own initial sequence number (SEQ = Y). An ACK = 20 means the host has
received bytes 0 through 19 and expects byte 20 next. This technique is called forward
acknowledgment. Host A then acknowledges all bytes Host B sent with a forward
acknowledgment indicating the next byte Host A expects to receive (ACK = Y + 1). Data transfer
then can begin.
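
The sequence and acknowledgment numbers exchanged in the handshake can be traced with a
short sketch; X and Y stand for the randomly chosen initial sequence numbers described above.

import random

x = random.randint(0, 2**32 - 1)    # Host A's initial sequence number
y = random.randint(0, 2**32 - 1)    # Host B's initial sequence number

print(f"A -> B  SYN      seq={x}")
print(f"B -> A  SYN+ACK  seq={y}  ack={x + 1}")      # B expects byte x+1 next
print(f"A -> B  ACK      seq={x + 1}  ack={y + 1}")  # A expects byte y+1 next
# Data transfer can begin once both sides have acknowledged each other's ISN.
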


Positive Acknowledgment and Retransmission (PAR)


A simple transport protocol might implement a reliability-and-flow-control technique where the
source sends one packet, starts a timer, and waits for an acknowledgment before sending a new
packet. If the acknowledgment is not received before the timer expires, the source retransmits the
packet. Such a technique is called positive acknowledgment and retransmission (PAR).

By assigning each packet a sequence number, PAR enables hosts to track lost or duplicate
packets caused by network delays that result in premature retransmission. The sequence
numbers are sent back in the acknowledgments so that the acknowledgments can be tracked.

PAR is an inefficient use of bandwidth, however, because a host must wait for an
acknowledgment before sending a new packet, and only one packet can be sent at a time.

TCP Sliding Window


A TCP sliding window provides more efficient use of network bandwidth than PAR because it
enables hosts to send multiple bytes or packets before waiting for an acknowledgment.

In TCP, the receiver specifies the current window size in every packet. Because TCP provides a
byte-stream connection, window sizes are expressed in bytes. This means that a window is the
number of data bytes that the sender is allowed to send before waiting for an acknowledgment.
Initial window sizes are indicated at connection setup, but might vary throughout the data transfer
to provide flow control. A window size of zero, for instance, means "Send no data."

In a TCP sliding-window operation, for example, the sender might have a sequence of bytes to
send (numbered 1 to 10) to a receiver who has a window size of five. The sender then would
place a window around the first five bytes and transmit them together. It would then wait for an
acknowledgment.

The receiver would respond with an ACK = 6, indicating that it has received bytes 1 to 5 and is
expecting byte 6 next. In the same packet, the receiver would indicate that its window size is 5.
The sender then would move the sliding window five bytes to the right and transmit bytes 6 to 10.
The receiver would respond with an ACK = 11, indicating that it is expecting sequenced byte 11
next. In this packet, the receiver might indicate that its window size is 0 (because, for example, its
internal buffers are full). At this point, the sender cannot send any more bytes until the receiver
sends another packet with a window size greater than 0.
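
The 10-byte example above can be replayed with a small simulation. It assumes the receiver
always acknowledges a full window and keeps advertising a window of 5, which is a simplification
of real TCP behaviour.

data = list(range(1, 11))     # bytes numbered 1..10
window = 5
next_byte = 1

while next_byte <= len(data):
    burst = data[next_byte - 1:next_byte - 1 + window]
    print(f"Sender transmits bytes {burst[0]}..{burst[-1]}")
    next_byte = burst[-1] + 1
    print(f"Receiver replies ACK = {next_byte}, window = {window}")
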

TCP Packet Format


TCP Packet Field Descriptions


 Source Port and Destination Port—Identifies points at which upper-layer source and
destination processes receive TCP services.
 Sequence Number—Usually specifies the number assigned to the first byte of data in the
current message. In the connection-establishment phase, this field also can be used to
identify an initial sequence number to be used in an upcoming transmission.
 Acknowledgment Number—Contains the sequence number of the next byte of data the
sender of the packet expects to receive.
 Data Offset—Indicates the number of 32-bit words in the TCP header.
 Reserved—Remains reserved for future use.
 Flags—Carries a variety of control information, including the SYN and ACK bits used for
connection establishment, and the FIN bit used for connection termination.
 Window—Specifies the size of the sender's receive window (that is, the buffer space
available for incoming data).
 Checksum—Indicates whether the header was damaged in transit.
 Urgent Pointer—Points to the first urgent data byte in the packet.
 Options—Specifies various TCP options.
 Data—Contains upper-layer information.
User Datagram Protocol (UDP)
The User Datagram Protocol (UDP) is a connectionless transport-layer protocol (Layer 4) that
belongs to the Internet protocol family. UDP is basically an interface between IP and upper-layer
processes. UDP protocol ports distinguish multiple applications running on a single device from
one another.
Unlike TCP, UDP adds no reliability, flow-control, or error-recovery functions to IP. Because of
UDP's simplicity, UDP headers contain fewer bytes and consume less network overhead than
TCP headers.

UDP is useful in situations where the reliability mechanisms of TCP are not necessary, such as in
cases where a higher-layer protocol might provide error and flow control.

UDP is the transport protocol for several well-known application-layer protocols, including
Network File System (NFS), Simple Network Management Protocol (SNMP), Domain Name
System (DNS), and Trivial File Transfer Protocol (TFTP).
The UDP packet format contains four fields. These include source and destination ports, length,
and checksum fields.

Source and destination ports contain the 16-bit UDP protocol port numbers used to demultiplex
datagrams for receiving application-layer processes. A length field specifies the length of the UDP
header and data. Checksum provides an (optional) integrity check on the UDP header and data.
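
The connectionless nature of UDP is easy to see with a pair of sockets. The sketch below sends
one datagram to a receiver on the same machine; the port number is an arbitrary example, and no
handshake or acknowledgment is involved.

import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 50007))                      # assumed free port

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello via UDP", ("127.0.0.1", 50007))    # fire and forget

data, addr = receiver.recvfrom(1024)
print(data, "from", addr)

sender.close()
receiver.close()
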

Internet Protocols Application-Layer Protocols


The Internet protocol suite includes many application-layer protocols that represent a wide variety
of applications, including the following:
 File Transfer Protocol (FTP)—Moves files between devices


 Simple Network-Management Protocol (SNMP)—Primarily reports anomalous network
conditions and sets network threshold values
 Telnet—Serves as a terminal emulation protocol
 X Windows—Serves as a distributed windowing and graphics system used for
communication between X terminals and UNIX workstations
 Network File System (NFS), External Data Representation (XDR), and Remote
Procedure Call (RPC)—Work together to enable transparent access to remote network
resources
 Simple Mail Transfer Protocol (SMTP)—Provides electronic mail services
 Domain Name System (DNS)—Translates the names of network nodes into network
addresses

IP Addressing
As with any other network-layer protocol, the IP addressing scheme is integral to the process of
routing IP datagrams through an internetwork. Each IP address has specific components and
follows a basic format. These IP addresses can be subdivided and used to create addresses for
subnetworks, as discussed in more detail later in this chapter.

Each host on a TCP/IP network is assigned a unique 32-bit logical address that is divided into two
main parts: the network number and the host number. The network number identifies a network
and must be assigned by the Internet Network Information Center (InterNIC) if the network is to
be part of the Internet. An Internet Service Provider (ISP) can obtain blocks of network addresses
from the InterNIC and can itself assign address space as necessary. The host number identifies a
host on a network and is assigned by the local network administrator.

IP Address Format
The 32-bit IP address is grouped eight bits at a time, separated by dots, and represented in
decimal format (known as dotted decimal notation). Each bit in an octet has a binary weight (128,
64, 32, 16, 8, 4, 2, 1). The minimum value for an octet is 0, and the maximum value for an octet is
255.

IP Address Classes
IP addressing supports five different address classes: A, B, C, D, and E. Only classes A, B, and C
are available for commercial use. The left-most (high-order) bits indicate the network class.


Reference Information about the Five IP Address Classes

Class  Format    Purpose                          High-Order   Address Range                  No. Bits        Max. Hosts
                                                  Bit(s)                                      Network/Host
A      N.H.H.H¹  Few large organizations          0            1.0.0.0 to 126.0.0.0           7/24            16,777,214 (2^24 - 2)²
B      N.N.H.H   Medium-size organizations        1, 0         128.1.0.0 to 191.254.0.0       14/16           65,534 (2^16 - 2)
C      N.N.N.H   Relatively small organizations   1, 1, 0      192.0.1.0 to 223.255.254.0     21/8            254 (2^8 - 2)
D      N/A       Multicast groups (RFC 1112)      1, 1, 1, 0   224.0.0.0 to 239.255.255.255   N/A             N/A (not for commercial use)
E      N/A       Experimental                     1, 1, 1, 1   240.0.0.0 to 254.255.255.255   N/A             N/A

¹ N = Network number, H = Host number.
² One address is reserved for the broadcast address, and one address is reserved for the network.

IP address formats A, B, and C are available for commercial use.

The class of address can be determined easily by examining the first octet of the address and
mapping that value to a class range in the following table. In an IP address of 172.31.1.2, for
example, the first octet is 172. Because 172 falls between 128 and 191, 172.31.1.2 is a Class B
address.
Address Class   First Octet in Decimal   High-Order Bits
Class A         1 – 126                  0
Class B         128 – 191                10
Class C         192 – 223                110
Class D         224 – 239                1110
Class E         240 – 254                1111
Some first-octet values have special meanings:
 First octet 127 represents the local computer, regardless of what network it is really in.
This is useful when testing internal operations.
 First octet 224 and above are reserved for special purposes such as multicasting.
Octets 0 and 255 are not acceptable values in some situations, but 0 can be used as the second
and/or third octet (e.g. 10.2.0.100).
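
The first-octet test described above translates directly into a small function; the ranges are the
classful ranges from the table, and the sample addresses are taken from the text.

def address_class(ip: str) -> str:
    first = int(ip.split(".")[0])
    if first == 127:
        return "loopback"
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D (multicast)"
    return "E (experimental)"

print(address_class("172.31.1.2"))    # B: 172 falls between 128 and 191
print(address_class("10.2.0.100"))    # A
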


A class A network does not necessarily consist of 16 million machines on a single network, which
would excessively burden most network technologies and their administrators. Instead, a large
company is assigned a class A network, and segregates it further into smaller sub-nets using
Classless Inter-Domain Routing. However, the class labels are still commonly used as broad
descriptors.

IP Subnet Addressing
IP networks can be divided into smaller networks called subnetworks (or subnets). Subnetting
provides the network administrator with several benefits, including extra flexibility, more efficient
use of network addresses, and the capability to contain broadcast traffic (a broadcast will not
cross a router).
Subnets are under local administration. As such, the outside world sees an organization as a
single network and has no detailed knowledge of the organization's internal structure.

IP Subnet Mask
A subnet address is created by "borrowing" bits from the host field and designating them as the
subnet field. The number of borrowed bits varies and is specified by the subnet mask.

Subnet masks use the same format and representation technique as IP addresses. The subnet
mask, however, has binary 1s in all bits specifying the network and subnetwork fields, and binary
0s in all bits specifying the host field.

Subnet mask bits should come from the high-order (left-most) bits of the host field. Details of
Class B and C subnet mask types follow. Class A addresses are not discussed in this chapter
because they generally are subnetted on an 8-bit boundary.


Various types of subnet masks exist for Class B and C subnets.


The default subnet mask for a Class B address that has no subnetting is 255.255.0.0, while the
subnet mask for a Class B address 171.16.0.0 that specifies eight bits of subnetting is
255.255.255.0. The reason is that eight bits of subnetting gives 2^8 - 2 = 254 possible subnets
(the all-zeros and all-ones subnet patterns are excluded), with 2^8 - 2 = 254 hosts per subnet.

The subnet mask for a Class C address 192.168.2.0 that specifies five bits of subnetting is
255.255.255.248. With five bits available for subnetting, 2^5 - 2 = 30 subnets are possible, with
2^3 - 2 = 6 hosts per subnet.
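
The subnet and host counts above follow the traditional "subtract two" convention (the all-zeros
and all-ones patterns are excluded). A small helper reproduces the two worked examples:

def counts(subnet_bits: int, host_bits: int):
    return 2 ** subnet_bits - 2, 2 ** host_bits - 2

print(counts(8, 8))    # Class B, mask 255.255.255.0   -> (254 subnets, 254 hosts)
print(counts(5, 3))    # Class C, mask 255.255.255.248 -> (30 subnets, 6 hosts)
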

Class B Subnetting Reference Chart


Number of Bits Subnet Mask Number of Subnets Number of Hosts
2 255.255.192.0 2 16382
3 255.255.224.0 6 8190
4 255.255.240.0 14 4094
5 255.255.248.0 30 2046
6 255.255.252.0 62 1022
7 255.255.254.0 126 510
8 255.255.255.0 254 254
9 255.255.255.128 510 126
10 255.255.255.192 1022 62
11 255.255.255.224 2046 30
12 255.255.255.240 4094 14
13 255.255.255.248 8190 6
14 255.255.255.252 16382 2

Class C Subnetting Reference Chart


Number of Bits Subnet Mask Number of Subnets Number of Hosts
2 255.255.255.192 2 62
3 255.255.255.224 6 30
4 255.255.255.240 14 14
5 255.255.255.248 30 6
6 255.255.255.252 62 2


How Subnet Masks are Used to Determine the Network Number


The router performs a set process to determine the network (or, more specifically, the
subnetwork) address. First, the router extracts the IP destination address from the incoming
packet and retrieves the internal subnet mask. It then performs a logical AND operation to obtain
the network number. This removes the host portion of the IP destination address, while the
destination network number remains. The router then looks up the destination network number
and matches it with an outgoing interface. Finally, it forwards the frame to the destination IP
address. Specifics regarding the logical AND operation are discussed below.

Three basic rules govern logically "ANDing" two binary numbers: 1 "ANDed" with 1 yields 1;
1 "ANDed" with 0 yields 0; and 0 "ANDed" with 0 yields 0.
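
The AND operation the router performs can be reproduced with Python's standard ipaddress
module; the network and mask values come from this chapter, while the specific host addresses
are invented examples within those networks.

import ipaddress

def network_number(ip: str, mask: str) -> str:
    ip_int = int(ipaddress.IPv4Address(ip))
    mask_int = int(ipaddress.IPv4Address(mask))
    return str(ipaddress.IPv4Address(ip_int & mask_int))   # host bits cleared

print(network_number("171.16.33.10", "255.255.255.0"))     # 171.16.33.0
print(network_number("192.168.2.45", "255.255.255.248"))   # 192.168.2.40
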

IP Routing
Routing is the act of moving information across an internetwork from a source to a destination.
Along the way, at least one intermediate node typically is encountered. Routing is often
contrasted with bridging, which might seem to accomplish precisely the same thing to the casual
observer. The primary difference between the two is that bridging occurs at Layer 2 (the link
layer) of the OSI reference model, whereas routing occurs at Layer 3 (the network layer). This
distinction provides routing and bridging with different information to use in the process of moving
information from source to destination, so the two functions accomplish their tasks in different
ways.

The topic of routing has been covered in computer science literature for more than two decades,
but routing achieved commercial popularity as late as the mid-1980s. The primary reason for this
time lag is that networks in the 1970s were simple, homogeneous environments. Only relatively
recently has large-scale internetworking become popular.

Routing involves two basic activities: determining optimal routing paths and transporting
information groups (typically called packets) through an internetwork. In the context of the routing
process, the latter of these is referred to as packet switching. Although packet switching is
relatively straightforward, path determination can be very complex.

Path Determination
Routing protocols use metrics to evaluate what path will be the best for a packet to travel. A
metric is a standard of measurement, such as path bandwidth, that is used by routing algorithms
to determine the optimal path to a destination. To aid the process of path determination, routing
algorithms initialize and maintain routing tables, which contain route information. Route
information varies depending on the routing algorithm used.

Routing algorithms fill routing tables with a variety of information. Destination/next hop
associations tell a router that a particular destination can be reached optimally by sending the
packet to a particular router representing the "next hop" on the way to the final destination. When
a router receives an incoming packet, it checks the destination address and attempts to associate
this address with a next hop.

Destination/Next Hop Associations Determine the Data's Optimal Path


Routing tables also can contain other information, such as data about the desirability of a path.
Routers compare metrics to determine optimal routes, and these metrics differ depending on the
design of the routing algorithm used. A variety of common metrics will be introduced and
described later in this chapter.

Routers communicate with one another and maintain their routing tables through the transmission
of a variety of messages. The routing update message is one such message that generally
consists of all or a portion of a routing table. By analyzing routing updates from all other routers, a
router can build a detailed picture of network topology. A link-state advertisement, another
example of a message sent between routers, informs other routers of the state of the sender's
links. Link information also can be used to build a complete picture of network topology to enable
routers to determine optimal routes to network destinations.

Switching
The switching algorithm is relatively simple and is the same for most routing protocols. In most
cases, a host determines that it must send a packet to another host. Having acquired a router's
address by some means, the source host sends a packet addressed specifically to the router's
physical (Media Access Control [MAC]-layer) address, but carrying the protocol (network layer)
address of the destination host.

By examining the packet's destination protocol address, the router determines whether it knows
how to forward the packet to the next hop. If the router does not know how to forward the packet,
it typically drops the packet. If the router does know how to forward the packet, it changes the
destination physical address to that of the next hop and transmits the packet.
The next hop may be the ultimate destination host. If not, the next hop is usually another router,
which executes the same switching decision process. As the packet moves through the
internetwork, its physical address changes, but its protocol address remains constant.
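The short sketch below models this behavior: at each hop the destination physical (MAC) address is rewritten while the destination protocol (IP) address is left unchanged, and a packet with no known next hop is dropped. The addresses and the next-hop lookup table are hypothetical.

```python
# A simplified sketch of the hop-by-hop switching step described above.
from dataclasses import dataclass

@dataclass
class Frame:
    dst_mac: str     # Layer 2 address: rewritten at every hop
    dst_ip: str      # Layer 3 address: constant end to end
    payload: bytes

# Hypothetical mapping of next-hop IP addresses to their MAC addresses.
NEXT_HOP_MAC = {"192.0.2.2": "aa:bb:cc:00:00:02", "198.51.100.9": "aa:bb:cc:00:00:09"}

def forward(frame: Frame, next_hop_ip: str) -> Frame:
    if next_hop_ip not in NEXT_HOP_MAC:
        # The router does not know how to reach the next hop: drop the packet.
        raise ValueError("no route: packet dropped")
    frame.dst_mac = NEXT_HOP_MAC[next_hop_ip]   # physical address changes
    return frame                                # dst_ip is left untouched

f = Frame(dst_mac="aa:bb:cc:00:00:01", dst_ip="10.1.2.77", payload=b"data")
f = forward(f, "192.0.2.2")
print(f.dst_mac, f.dst_ip)   # new MAC, same IP
```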

The preceding discussion describes switching between a source and a destination end system.
The International Organization for Standardization (ISO) has developed a hierarchical
terminology that is useful in describing this process. Using this terminology, network devices
without the capability to forward packets between subnetworks are called end systems (ESs),
whereas network devices with these capabilities are called intermediate systems (ISs). ISs are
further divided into those that can communicate within routing domains (intradomain ISs) and
those that communicate both within and between routing domains (interdomain ISs). A routing
domain generally is considered a portion of an internetwork under common administrative
authority that is regulated by a particular set of administrative guidelines. Routing domains are
also called autonomous systems. With certain protocols, routing domains can be divided into
routing areas, but intradomain routing protocols are still used for switching both within and
between areas.


Numerous Routers May Come into Play during the Switching Process

Routing Algorithms
Routing algorithms can be differentiated based on several key characteristics. First, the particular
goals of the algorithm designer affect the operation of the resulting routing protocol. Second,
various types of routing algorithms exist, and each algorithm has a different impact on network
and router resources. Finally, routing algorithms use a variety of metrics that affect calculation of
optimal routes.

Routing algorithms often have one or more of the following design goals:
 Optimality
 Simplicity and low overhead
 Robustness and stability
 Rapid convergence
 Flexibility

Optimality refers to the capability of the routing algorithm to select the best route, which depends
on the metrics and metric weightings used to make the calculation. For example, one routing
algorithm may use both hop count and delay, but may weigh delay more heavily in the calculation.
Naturally, routing protocols must define their metric calculation algorithms strictly.

Routing algorithms also are designed to be as simple as possible. In other words, the routing
algorithm must offer its functionality efficiently, with a minimum of software and utilization
overhead. Efficiency is particularly important when the software implementing the routing
algorithm must run on a computer with limited physical resources.

Routing algorithms must be robust, which means that they should perform correctly in the face of
unusual or unforeseen circumstances, such as hardware failures, high load conditions, and
incorrect implementations. Because routers are located at network junction points, they can
cause considerable problems when they fail. The best routing algorithms are often those that
have withstood the test of time and that have proven stable under a variety of network conditions.


In addition, routing algorithms must converge rapidly. Convergence is the process of agreement,
by all routers, on optimal routes. When a network event causes routes to either go down or
become available, routers distribute routing update messages that permeate networks,
stimulating recalculation of optimal routes and eventually causing all routers to agree on these
routes. Routing algorithms that converge slowly can cause routing loops or network outages.

Consider a routing loop between two routers: a packet arrives at Router 1 at time t1. Router 1
already has been updated and thus knows that the optimal route to the destination calls for
Router 2 to be the next stop. Router 1 therefore forwards the packet to Router 2, but because this
router has not yet been updated, it believes that the optimal next hop is Router 1. Router 2
therefore forwards the packet back to Router 1, and the packet continues to bounce back and
forth between the two routers until Router 2 receives its routing update or until the packet has
been switched the maximum number of times allowed.

Slow Convergence and Routing Loops Can Hinder Progress

Routing algorithms should also be flexible, which means that they should quickly and accurately
adapt to a variety of network circumstances. Assume, for example, that a network segment has
gone down. Many routing algorithms, upon becoming aware of the problem, will quickly select the
next-best path for all routes normally using that segment. Routing algorithms can be programmed
to adapt to changes in network bandwidth, router queue size, and network delay, among other
variables.

Algorithm Types
Routing algorithms can be classified by type. Key differentiators include these:
 Static versus dynamic
 Link-state versus distance vector

Static versus Dynamic


Static routing algorithms are hardly algorithms at all; they are simply table mappings established
by the network administrator before routing begins. These mappings do not change unless the
network administrator alters them. Algorithms that use static routes are simple
to design and work well in environments where network traffic is relatively predictable and where
network design is relatively simple.

Because static routing systems cannot react to network changes, they generally are considered
unsuitable for today's large, constantly changing networks. Most of the dominant routing
algorithms today are dynamic routing algorithms, which adjust to changing network
circumstances by analyzing incoming routing update messages. If the message indicates that a
network change has occurred, the routing software recalculates routes and sends out new routing
update messages. These messages permeate the network, stimulating routers to rerun their
algorithms and change their routing tables accordingly.


Dynamic routing algorithms can be supplemented with static routes where appropriate. For
example, a router of last resort (a router to which all unroutable packets are sent) can be
designated, ensuring that all messages are at least handled in some way.

Link-State Versus Distance Vector


Link-state algorithms (also known as shortest path first algorithms) flood routing information to all
nodes in the internetwork. Each router, however, sends only the portion of the routing table that
describes the state of its own links. In link-state algorithms, each router builds a picture of the
entire network in its routing tables. Distance vector algorithms (also known as Bellman-Ford
algorithms) call for each router to send all or some portion of its routing table, but only to its
neighbors. In essence, link-state algorithms send small updates everywhere, while distance
vector algorithms send larger updates only to neighboring routers. Distance vector algorithms
know only about their neighbors.

Because they converge more quickly, link-state algorithms are somewhat less prone to routing
loops than distance vector algorithms. On the other hand, link-state algorithms require more CPU
power and memory than distance vector algorithms. Link-state algorithms, therefore, can be more
expensive to implement and support. Link-state protocols are generally more scalable than
distance vector protocols.

Routing Metrics
Routing tables contain information used by switching software to select the best route. But how,
specifically, are routing tables built? What is the specific nature of the information that they
contain? How do routing algorithms determine that one route is preferable to others? Routing
algorithms have used many different metrics to determine the best route. Sophisticated routing
algorithms can base route selection on multiple metrics, combining them in a single (hybrid)
metric.

All the following metrics have been used:


 Path length
 Reliability
 Delay
 Bandwidth
 Load
 Communication cost

Routing Protocols

Routing Information Protocol


The Routing Information Protocol, or RIP, as it is more commonly called, is one of the most
enduring of all routing protocols. RIP is also one of the more easily confused protocols because a
variety of RIP-like routing protocols proliferated, some of which even used the same name! RIP
and the myriad RIP-like protocols were based on the same set of algorithms that use distance
vectors to mathematically compare routes to identify the best path to any given destination
address. These algorithms emerged from academic research that dates back to 1957.

Today's open standard version of RIP, sometimes referred to as IP RIP, is formally defined in two
documents: Request For Comments (RFC) 1058 and Internet Standard (STD) 56. As IP-based
networks became both more numerous and greater in size, it became apparent to the Internet
Engineering Task Force (IETF) that RIP needed to be updated. Consequently, the IETF released
RFC 1388 in January 1993, which was then superseded in November 1994 by RFC 1723, which
describes RIP 2 (the second version of RIP). These RFCs described an extension of RIP's
capabilities but did not attempt to obsolete the previous version of RIP. RIP 2 enabled RIP
messages to carry more information, which permitted the use of a simple authentication
mechanism to secure table updates. More importantly, RIP 2 supported subnet masks, a critical
feature that was not available in RIP.

ROUTING UPDATES
RIP sends routing-update messages at regular intervals and when the network topology changes.
When a router receives a routing update that includes changes to an entry, it updates its routing
table to reflect the new route. The metric value for the path is increased by 1, and the sender is
indicated as the next hop. RIP routers maintain only the best route (the route with the lowest
metric value) to a destination. After updating its routing table, the router immediately begins
transmitting routing updates to inform other network routers of the change. These updates are
sent independently of the regularly scheduled updates that RIP routers send.

RIP ROUTING METRIC


RIP uses a single routing metric (hop count) to measure the distance between the source and a
destination network. Each hop in a path from source to destination is assigned a hop count value,
which is typically 1. When a router receives a routing update that contains a new or changed
destination network entry, the router adds 1 to the metric value indicated in the update and enters
the network in the routing table. The IP address of the sender is used as the next hop.

RIP STABILITY FEATURES


RIP prevents routing loops from continuing indefinitely by implementing a limit on the number of
hops allowed in a path from the source to a destination. The maximum number of hops in a path
is 15. If a router receives a routing update that contains a new or changed entry, and if increasing
the metric value by 1 causes the metric to be infinity (that is, 16), the network destination is
considered unreachable. The downside of this stability feature is that it limits the maximum
diameter of a RIP network to less than 16 hops.
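A hedged sketch of this update logic appears below: the receiving router adds 1 to the advertised hop count, treats 16 as unreachable, and keeps only the lowest-metric route. The data structures and the handling of updates from the current next hop are illustrative simplifications, not the actual RIP implementation.

```python
# A hedged sketch of how a RIP router might process one received update entry.
INFINITY = 16

# routing_table maps destination network -> (metric, next_hop)
routing_table = {"10.2.0.0/16": (3, "192.0.2.7")}

def process_rip_entry(destination: str, advertised_metric: int, sender_ip: str) -> None:
    metric = min(advertised_metric + 1, INFINITY)   # count the hop to the sender
    current = routing_table.get(destination)
    if metric >= INFINITY:
        # Unreachable through this sender; only matters if the sender was our next hop.
        if current and current[1] == sender_ip:
            routing_table[destination] = (INFINITY, sender_ip)
        return
    if current is None or metric < current[0] or current[1] == sender_ip:
        routing_table[destination] = (metric, sender_ip)  # a triggered update would follow

process_rip_entry("10.3.0.0/16", 2, "192.0.2.9")
print(routing_table)
```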

RIP includes a number of other stability features that are common to many routing protocols.
These features are designed to provide stability despite potentially rapid changes in a network's
topology. For example, RIP implements the split horizon and holddown mechanisms to prevent
incorrect routing information from being propagated.

RIP TIMERS
RIP uses numerous timers to regulate its performance. These include a routing-update timer, a
route-timeout timer, and a route-flush timer. The routing-update timer clocks the interval between
periodic routing updates. Generally, it is set to 30 seconds, with a small random amount of time
added whenever the timer is reset. This is done to help prevent congestion, which could result
from all routers simultaneously attempting to update their neighbors. Each routing table entry has
a route-timeout timer associated with it. When the route-timeout timer expires, the route is
marked invalid but is retained in the table until the route-flush timer expires.
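The sketch below illustrates the update timer described above. The 30-second interval comes from the text; the 0-to-5-second jitter range and the route-timeout and route-flush values are assumptions added for illustration.

```python
# A minimal sketch of the RIP update timer with jitter.
import random, time

UPDATE_INTERVAL = 30     # seconds, per the text
ROUTE_TIMEOUT   = 180    # assumption: a common default, not stated in the text
ROUTE_FLUSH     = 240    # assumption: a common default, not stated in the text

def next_update_delay() -> float:
    # The small random jitter helps prevent all routers updating at the same moment.
    return UPDATE_INTERVAL + random.uniform(0, 5)

def update_loop(send_updates) -> None:
    while True:
        time.sleep(next_update_delay())
        send_updates()   # broadcast the routing table to neighbors

print(round(next_update_delay(), 1))   # e.g. 32.7
```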

RIP 2 Packet Format


The RIP 2 specification (described in RFC 1723) allows more information to be included in RIP
packets and provides a simple authentication mechanism that is not supported by RIP.

An IP RIP 2 Packet Consists of Fields Similar to Those of an IP RIP Packet

 Command—Indicates whether the packet is a request or a response. The request asks that a
router send all or a part of its routing table. The response can be an unsolicited regular routing
update or a reply to a request. Responses contain routing table entries. Multiple RIP packets are
used to convey information from large routing tables.
 Version—Specifies the RIP version used. In a RIP packet implementing any of the RIP 2
fields or using authentication, this value is set to 2.
 Unused—Has a value set to zero.
 Address-family identifier (AFI)—Specifies the address family used. RIPv2's AFI field
functions identically to RFC 1058 RIP's AFI field, with one exception: If the AFI for the
first entry in the message is 0xFFFF, the remainder of the entry contains authentication
information. Currently, the only authentication type is simple password.
 Route tag—Provides a method for distinguishing between internal routes (learned by
RIP) and external routes (learned from other protocols).
 IP address—Specifies the IP address for the entry.
 Subnet mask—Contains the subnet mask for the entry. If this field is zero, no subnet
mask has been specified for the entry.
 Next hop—Indicates the IP address of the next hop to which packets for the entry should
be forwarded.
 Metric—Indicates how many internetwork hops (routers) have been traversed in the trip
to the destination. This value is between 1 and 15 for a valid route, or 16 for an
unreachable route.
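Putting the fields listed above together, the following sketch packs a RIP 2 header and one 20-byte route entry into their on-the-wire layout. It is an illustration of the field order and sizes only, not a working RIP implementation; the address values are invented.

```python
# A hedged sketch of the RIP 2 header and route-entry layout described above.
import socket, struct

def pack_rip2_entry(afi: int, route_tag: int, ip: str, mask: str,
                    next_hop: str, metric: int) -> bytes:
    return struct.pack(
        "!HH4s4s4sI",                 # network byte order: 2+2+4+4+4+4 = 20 bytes
        afi,
        route_tag,
        socket.inet_aton(ip),
        socket.inet_aton(mask),
        socket.inet_aton(next_hop),
        metric,
    )

# Header: command (2 = response), version (2), and the unused field set to zero.
header = struct.pack("!BBH", 2, 2, 0)
entry = pack_rip2_entry(afi=2, route_tag=0, ip="10.1.2.0",
                        mask="255.255.255.0", next_hop="0.0.0.0", metric=3)
packet = header + entry
print(len(packet))   # 4-byte header + 20-byte entry = 24 bytes
```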

Interior Gateway Routing Protocol


The Interior Gateway Routing Protocol (IGRP) is a routing protocol that was developed in the
mid-1980s by Cisco Systems, Inc. Cisco's principal goal in creating IGRP was to provide a robust
protocol for routing within an autonomous system (AS). Such protocols are known as Interior
Gateway Protocols (IGPs).

In the mid-1980s, the most popular IGP was the Routing Information Protocol (RIP). Although RIP
was quite useful for routing within small- to moderate-sized, relatively homogeneous
internetworks, its limits were being pushed by network growth. In particular, RIP's small hop-count
limit (16) restricted the size of internetworks, and its single metric (hop count), which supports only
equal-cost load balancing, did not allow for much routing flexibility in
complex environments. The popularity of Cisco routers and the robustness of IGRP encouraged
many organizations with large internetworks to replace RIP with IGRP.

IGRP PROTOCOL CHARACTERISTICS


IGRP is a distance vector Interior Gateway Protocol (IGP). Distance vector routing protocols
mathematically compare routes using some measurement of distance. This measurement is
known as the distance vector. Routers using a distance vector protocol must send all or a portion
of their routing table in a routing-update message at regular intervals to each of their neighboring
routers. As routing information proliferates through the network, routers can identify new
destinations as they are added to the network, learn of failures in the network, and, most
importantly, calculate distances to all known destinations.

Distance vector routing protocols are often contrasted with link-state routing protocols, which
send local connection information to all nodes in the internetwork.

IGRP uses a composite metric that is calculated by factoring weighted mathematical values for
internetwork delay, bandwidth, reliability, and load. Network administrators can set the weighting
factors for each of these metrics, although great care should be taken before any default values
are manipulated. IGRP provides a wide range for its metrics. Reliability and load, for example,
can take on any value between 1 and 255; bandwidth can take on values reflecting speeds from
1200 bps to 10 Gbps, while delay can take on any value from 1 to 2^24. These wide metric ranges
are further complemented by a series of user-definable constants that enable a network
administrator to influence route selection. These constants are hashed against the metrics, and
each other, in an algorithm that yields a single, composite metric. Thus, the network administrator
can influence route selection by giving higher or lower weighting to specific metrics. This flexibility
allows administrators to fine-tune IGRP's automatic route selection.
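The text does not give the exact composite-metric equation, so the sketch below uses the commonly documented form of Cisco's calculation as an assumption: a bandwidth term and a delay term weighted by the K constants, a load term weighted by K2, and a reliability factor applied only when K5 is nonzero. With the usual defaults (K1 = K3 = 1, K2 = K4 = K5 = 0) the metric reduces to scaled inverse bandwidth plus delay.

```python
# A hedged sketch of the IGRP composite metric; the formula and units are assumptions.
def igrp_metric(bandwidth_kbps: int, delay_tens_usec: int,
                reliability: int = 255, load: int = 1,
                k1: int = 1, k2: int = 0, k3: int = 1, k4: int = 0, k5: int = 0) -> float:
    bw = 10_000_000 / bandwidth_kbps            # 10^7 divided by the path's lowest bandwidth (assumed units)
    metric = k1 * bw + (k2 * bw) / (256 - load) + k3 * delay_tens_usec
    if k5 != 0:                                 # reliability factor applies only if K5 is set
        metric *= k5 / (reliability + k4)
    return metric

# Example: a 1544-kbps (T1) path with 2000 microseconds of total delay.
print(igrp_metric(bandwidth_kbps=1544, delay_tens_usec=200))
```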

To provide additional flexibility, IGRP permits multipath routing. Dual equal-bandwidth lines can
run a single stream of traffic in round-robin fashion, with automatic switchover to the second line if
one line goes down. Multiple paths can have unequal metrics yet still be valid multipath routes.
For example, if one path is three times better than another path (its metric is three times lower),
the better path will be used three times as often. Only routes with metrics that are within a certain
range or variance of the best route are used as multiple paths. Variance is another value that can
be established by the network administrator.
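The variance rule can be sketched as follows: a route is eligible for load sharing if its metric is within variance times the best metric, and traffic is shared in inverse proportion to each metric, so a path with a metric three times higher carries one third of the traffic. The route list and variance value are invented for illustration.

```python
# A small sketch of variance-based unequal-cost multipath selection.
routes = [
    {"next_hop": "192.0.2.1", "metric": 8000},
    {"next_hop": "192.0.2.2", "metric": 24000},   # three times "worse" than the best route
    {"next_hop": "192.0.2.3", "metric": 90000},
]

def multipath(routes, variance: int = 3):
    best = min(r["metric"] for r in routes)
    eligible = [r for r in routes if r["metric"] <= best * variance]
    # Traffic share is proportional to the inverse of the metric.
    for r in eligible:
        r["share"] = best / r["metric"]
    return eligible

for r in multipath(routes):
    print(r["next_hop"], round(r["share"], 2))   # 1.0 for the best path, 0.33 for the other
```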

Stability Features
IGRP provides a number of features that are designed to enhance its stability. These include
holddowns, split horizons, and poison-reverse updates.
Holddowns are used to prevent regular update messages from inappropriately reinstating a route
that might have gone bad. When a router goes down, neighboring routers detect this via the lack
of regularly scheduled update messages. These routers then calculate new routes and send
routing update messages to inform their neighbors of the route change. This activity begins a
wave of triggered updates that filter through the network. These triggered updates do not instantly
arrive at every network device. Thus, it is possible for a device that has yet to be informed of a
network failure to send a regular update message, which advertises a failed route as being valid
to a device that has just been notified of the network failure. In this case, the latter device would
contain (and potentially advertise) incorrect routing information. Holddowns tell routers to hold
down any changes that might affect routes for some period of time. The holddown period usually
is calculated to be just greater than the period of time necessary to update the entire network with
a routing change.

Split horizons derive from the premise that it is never useful to send information about a route
back in the direction from which it came.

The Split-Horizon Rule Helps Protect Against Routing Loops

Split horizons should prevent routing loops between adjacent routers, but poison-reverse updates
are necessary to defeat larger routing loops. Increases in routing metrics generally indicate
routing loops. Poison-reverse updates then are sent to remove the route and place it in holddown.
In Cisco's implementation of IGRP, poison-reverse updates are sent if a route metric has
increased by a factor of 1.1 or greater.

Timers
IGRP maintains a number of timers and variables containing time intervals. These include an
update timer, an invalid timer, a hold-time period, and a flush timer. The update timer specifies
how frequently routing update messages should be sent. The IGRP default for this variable is 90
seconds. The invalid timer specifies how long a router should wait in the absence of routing-
update messages about a specific route before declaring that route invalid. The IGRP default for
this variable is three times the update period. The hold-time variable specifies the holddown
period. The IGRP default for this variable is three times the update timer period plus 10 seconds.
Finally, the flush timer indicates how much time should pass before a route should be flushed
from the routing table. The IGRP default is seven times the routing update period.
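These default relationships can be checked with a quick calculation using the 90-second update period given above.

```python
# Default IGRP timer values, derived from the relationships stated in the text.
UPDATE = 90                  # seconds between routing updates
INVALID = 3 * UPDATE         # 270 s: route declared invalid
HOLDDOWN = 3 * UPDATE + 10   # 280 s: holddown period
FLUSH = 7 * UPDATE           # 630 s: route flushed from the table

print(UPDATE, INVALID, HOLDDOWN, FLUSH)   # 90 270 280 630
```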

Open Shortest Path First


Open Shortest Path First (OSPF) is a routing protocol developed for Internet Protocol (IP)
networks by the Interior Gateway Protocol (IGP) working group of the Internet Engineering Task
Force (IETF). The working group was formed in 1988 to design an IGP based on the Shortest
Path First (SPF) algorithm for use in the Internet. Similar to the Interior Gateway Routing Protocol
(IGRP), OSPF was created because in the mid-1980s, the Routing Information Protocol (RIP)
was increasingly incapable of serving large, heterogeneous internetworks. This section examines
the OSPF routing environment, underlying routing algorithm, and general protocol components.

OSPF was derived from several research efforts, including Bolt, Beranek, and Newman's (BBN's)
SPF algorithm developed in 1978 for the ARPANET (a landmark packet-switching network
developed in the early 1970s by BBN), Dr. Radia Perlman's research on fault-tolerant
broadcasting of routing information (1988), BBN's work on area routing (1986), and an early
version of OSI's Intermediate System-to-Intermediate System (IS-IS) routing protocol.

OSPF has two primary characteristics. The first is that the protocol is open, which means that its
specification is in the public domain. The OSPF specification is published as Request For
Comments (RFC) 1247. The second principal characteristic is that OSPF is based on the SPF
algorithm, which sometimes is referred to as the Dijkstra algorithm, named for the person credited
with its creation.

OSPF is a link-state routing protocol that calls for the sending of link-state advertisements (LSAs)
to all other routers within the same hierarchical area. Information on attached interfaces, metrics
used, and other variables is included in OSPF LSAs. As OSPF routers accumulate link-state
information, they use the SPF algorithm to calculate the shortest path to each node.
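The sketch below runs a compact version of the shortest path first (Dijkstra) calculation over a small, invented link-state database; the edge weights stand in for OSPF interface costs. It is meant only to illustrate the algorithm OSPF routers apply, not OSPF itself.

```python
# A compact sketch of the SPF (Dijkstra) calculation over an invented topology.
import heapq

GRAPH = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 5, "R4": 20},
    "R4": {"R2": 1, "R3": 20},
}

def shortest_paths(source: str) -> dict:
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                        # stale queue entry
        for neighbor, cost in GRAPH[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(queue, (nd, neighbor))
    return dist

print(shortest_paths("R1"))   # e.g. R4 is reached via R2 at total cost 11
```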

As a link-state routing protocol, OSPF contrasts with RIP and IGRP, which are distance-vector
routing protocols. Routers running the distance-vector algorithm send all or a portion of their
routing tables in routing-update messages to their neighbors.


Chapter 9 – WAN Technologies

Fundamentals of WAN
A WAN is a data communications network that covers a relatively broad geographic area and that
often uses transmission facilities provided by common carriers, such as telephone companies.
WAN technologies generally function at the lower three layers of the OSI reference model: the
physical layer, the data link layer, and the network layer.

WAN Technologies Operate at the Lowest Levels of the OSI Model

A point-to-point link provides a single, pre-established WAN communications path from the
customer premises through a carrier network, such as a telephone company, to a remote
network. Point-to-point lines are usually leased from a carrier and thus are often called leased
lines. For a point-to-point line, the carrier allocates pairs of wire and facility hardware to your line
only. These circuits are generally priced based on bandwidth required and distance between the
two connected points. Point-to-point links are generally more expensive than shared services
such as Frame Relay.

A Typical Point-to-Point Link Operates Through a WAN to a Remote Network

Switched circuits allow data connections that can be initiated when needed and terminated when
communication is complete. This works much like a normal telephone line works for voice
communication. Integrated Services Digital Network (ISDN) is a good example of circuit
switching. When a router has data for a remote site, the switched circuit is initiated with the circuit
number of the remote network. In the case of ISDN circuits, the device actually places a call to
the telephone number of the remote ISDN circuit. When the two networks are connected and
authenticated, they can transfer data. When the data transmission is complete, the call can be
terminated.


A Circuit-Switched WAN Undergoes a Process Similar to That Used for a Telephone Call

Packet switching is a WAN technology in which users share common carrier resources. Because
this allows the carrier to make more efficient use of its infrastructure, the cost to the customer is
generally much lower than with point-to-point lines. In a packet switching setup, networks have
connections into the carrier's network, and many customers share the carrier's network. The
carrier can then create virtual circuits between customers' sites by which packets of data are
delivered from one to the other through the network. The section of the carrier's network that is
shared is often referred to as a cloud.

Packet Switching Transfers Packets Across a Carrier Network

Virtual Circuits
A virtual circuit is a logical circuit created within a shared network between two network devices.
Two types of virtual circuits exist: switched virtual circuits (SVCs) and permanent virtual circuits
(PVCs).

SVCs are virtual circuits that are dynamically established on demand and terminated when
transmission is complete. Communication over an SVC consists of three phases: circuit
establishment, data transfer, and circuit termination. The establishment phase involves creating
the virtual circuit between the source and destination devices. Data transfer involves transmitting
data between the devices over the virtual circuit, and the circuit termination phase involves
tearing down the virtual circuit between the source and destination devices. SVCs are used in
situations in which data transmission between devices is sporadic, largely because SVCs
increase bandwidth used due to the circuit establishment and termination phases, but they
decrease the cost associated with constant virtual circuit availability.


A PVC is a permanently established virtual circuit that consists of one mode: data transfer. PVCs
are used in situations in which data transfer between devices is constant. PVCs decrease the
bandwidth use associated with the establishment and termination of virtual circuits, but they
increase costs due to constant virtual circuit availability. PVCs are generally configured by the
service provider when an order is placed for service.

Dialup services offer cost-effective methods for connectivity across WANs. Two popular dialup
implementations are dial-on-demand routing (DDR) and dial backup.

DDR is a technique whereby a router can dynamically initiate a call on a switched circuit when it
needs to send data. In a DDR setup, the router is configured to initiate the call when certain
criteria are met, such as a particular type of network traffic needing to be transmitted. When the
connection is made, traffic passes over the line. The router configuration specifies an idle timer
that tells the router to drop the connection when the circuit has remained idle for a certain period.

Dial backup is another way of configuring DDR. However, in dial backup, the switched circuit is
used to provide backup service for another type of circuit, such as point-to-point or packet
switching. The router is configured so that when a failure is detected on the primary circuit, the
dial backup line is initiated. The dial backup line then supports the WAN connection until the
primary circuit is restored. When this occurs, the dial backup connection is terminated.

WAN Devices
WANs use numerous types of devices that are specific to WAN environments. WAN switches,
access servers, modems, CSU/DSUs, and ISDN terminal adapters are discussed in the following
sections. Other devices found in WAN environments that are used in WAN implementations
include routers, ATM switches, and multiplexers.

A WAN switch is a multiport internetworking device used in carrier networks. These devices
typically switch such traffic as Frame Relay, X.25, and SMDS, and operate at the data link layer
of the OSI reference model.

Two Routers at Remote Ends of a WAN Can Be Connected by WAN Switches

An access server acts as a concentration point for dial-in and dial-out connections.


Modem
A modem is a device that interprets digital and analog signals, enabling data to be transmitted
over voice-grade telephone lines. At the source, digital signals are converted to a form suitable
for transmission over analog communication facilities. At the destination, these analog signals are
returned to their digital form.

CSU/DSU
A channel service unit/digital service unit (CSU/DSU) is a digital-interface device used to connect
a router to a digital circuit like a T1. The CSU/DSU also provides signal timing for communication
between these devices.

ISDN Terminal Adapter


An ISDN terminal adapter is a device used to connect ISDN Basic Rate Interface (BRI)
connections to other interfaces, such as EIA/TIA-232 on a router. A terminal adapter is essentially
an ISDN modem, although it is called a terminal adapter because it does not actually convert
analog to digital signals.

Digital Subscriber Line


Digital Subscriber Line (DSL) technology is a modem technology that uses existing twisted-pair
telephone lines to transport high-bandwidth data, such as multimedia and video, to service
subscribers. The term xDSL covers a number of similar yet competing forms of DSL technologies,
including ADSL, SDSL, HDSL, HDSL-2, G.SHDSL, IDSL, and VDSL. xDSL is drawing significant
attention from implementers and service providers because it promises to deliver high-bandwidth
data rates to dispersed locations with relatively small changes to the existing telco infrastructure.

Asymmetric Digital Subscriber Line


Asymmetric Digital Subscriber Line (ADSL) technology is asymmetric. It allows more bandwidth
downstream—from an NSP's central office to the customer site—than upstream from the
subscriber to the central office. This asymmetry, combined with always-on access (which
eliminates call setup), makes ADSL ideal for Internet/intranet surfing, video-on-demand, and
remote LAN access. Users of these applications typically download much more information than
they send.

ADSL transmits more than 6 Mbps to a subscriber and as much as 640 kbps more in both
directions. Such rates expand existing access capacity by a factor of 50 or more without new
cabling. ADSL can literally transform the existing public information network from one limited to
voice, text, and low-resolution graphics to a powerful, ubiquitous system capable of bringing
multimedia, including full-motion video, to every home this century.

WAN PROTOCOLS
Integrated Services Digital Network
Integrated Services Digital Network (ISDN) comprises digital telephony and data-transport
services offered by regional telephone carriers. ISDN involves the digitization of the telephone
network, which permits voice, data, text, graphics, music, video, and other source material to be
transmitted over existing telephone wires. The emergence of ISDN represents an effort to
standardize subscriber services, user/network interfaces, and network and internetwork
capabilities. ISDN applications include high-speed image applications (such as Group IV
facsimile), additional telephone lines in homes to serve the telecommuting industry, high-speed
file transfer, and videoconferencing. Voice service is also an application for ISDN. This section
summarizes the underlying technologies and services associated with ISDN.

ISDN DEVICES
ISDN devices include terminals, terminal adapters (TAs), network-termination devices, line-
termination equipment, and exchange-termination equipment. ISDN terminals come in two types.
Specialized ISDN terminals are referred to as terminal equipment type 1 (TE1). Non-ISDN
terminals, such as DTE, that predate the ISDN standards are referred to as terminal equipment
type 2 (TE2). TE1s connect to the ISDN network through a four-wire, twisted-pair digital link.
TE2s connect to the ISDN network through a TA. The ISDN TA can be either a standalone device
or a board inside the TE2. If the TA is implemented as a standalone device, the TE2 connects to it
via a standard physical-layer interface. Examples include EIA/TIA-232-C (formerly RS-232-C),
V.24, and V.35.

Beyond the TE1 and TE2 devices, the next connection point in the ISDN network is the network
termination type 1 (NT1) or network termination type 2 (NT2) device.

These are network-termination devices that connect the four-wire subscriber wiring to the
conventional two-wire local loop. In North America, the NT1 is a customer premises equipment
(CPE) device. In most other parts of the world, the NT1 is part of the network provided by the
carrier. The NT2 is a more complicated device that typically is found in digital private branch
exchanges (PBXs) and that performs Layer 2 and 3 protocol functions and concentration
services. An NT1/2 device also exists as a single device that combines the functions of an NT1
and an NT2.

ISDN specifies a number of reference points that define logical interfaces between functional
groups, such as TAs and NT1s. ISDN reference points include the following:
 R—The reference point between non-ISDN equipment and a TA.
 S—The reference point between user terminals and the NT2.
 T—The reference point between NT1 and NT2 devices.
 U—The reference point between NT1 devices and line-termination equipment in the
carrier network. The U reference point is relevant only in North America, where the NT1
function is not provided by the carrier network.


The figure illustrates a sample ISDN configuration and shows three devices attached to an ISDN
switch at the central office. Two of these devices are ISDN-compatible, so they can be attached
through an S reference point to NT2 devices. The third device (a standard, non-ISDN telephone)
attaches through the R reference point to a TA. Any of these devices also could attach to an NT1/2
device, which would replace both the NT1 and the NT2. In addition, although they are not shown,
similar user stations are attached to the far-right ISDN switch.


Sample ISDN Configuration Illustrates Relationships between Devices and Reference Points

There are two types of services associated with ISDN:


 BRI
 PRI

ISDN BRI Service


The ISDN Basic Rate Interface (BRI) service offers two B channels and one D channel (2B+D).
BRI B-channel service operates at 64 kbps and is meant to carry user data; BRI D-channel
service operates at 16 kbps and is meant to carry control and signaling information, although it
can support user data transmission under certain circumstances. The D channel signaling
protocol comprises Layers 1 through 3 of the OSI reference model. BRI also provides for framing
control and other overhead, bringing its total bit rate to 192 kbps.

The BRI physical layer specification is International Telecommunication Union Telecommunication
Standardization Sector (ITU-T) (formerly the Consultative Committee for International Telegraph
and Telephone [CCITT]) I.430.

ISDN PRI Service


ISDN Primary Rate Interface (PRI) service offers 23 B channels and 1 D channel in North
America and Japan, yielding a total bit rate of 1.544 Mbps (the PRI D channel runs at 64 kbps).
ISDN PRI in Europe, Australia, and other parts of the world provides 30 B channels plus one 64-
kbps D channel and a total interface rate of 2.048 Mbps. The PRI physical layer specification is
ITU-T I.431.
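The bit rates quoted for BRI and PRI can be reconstructed from the channel counts above; the framing-overhead figures used below are derived from the totals given in the text rather than stated explicitly there.

```python
# A quick check of the ISDN bit rates quoted above (all values in kbps).
bri_payload = 2 * 64 + 16            # 2B + D = 144 kbps
bri_total   = bri_payload + 48       # framing and overhead bring BRI to 192 kbps

pri_na      = 23 * 64 + 64 + 8       # 23B + D + framing = 1544 kbps (1.544 Mbps)
pri_europe  = 30 * 64 + 64 + 64      # 30B + D + framing = 2048 kbps (2.048 Mbps)

print(bri_payload, bri_total, pri_na, pri_europe)   # 144 192 1544 2048
```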

Frame Relay
Frame Relay is a high-performance WAN protocol that operates at the physical and data link
layers of the OSI reference model. Frame Relay originally was designed for use across Integrated
Services Digital Network (ISDN) interfaces. Today, it is used over a variety of other network
interfaces as well. This section focuses on Frame Relay's specifications and applications in the
context of WAN services.

Frame Relay is an example of a packet-switched technology. Packet-switched networks enable
end stations to dynamically share the network medium and the available bandwidth.

Statistical multiplexing techniques control network access in a packet-switched network. The
advantage of this technique is that it accommodates more flexibility and more efficient use of
bandwidth. Most of today's popular LANs, such as Ethernet and Token Ring, are packet-switched
networks.


Frame Relay often is described as a streamlined version of X.25, offering fewer of the robust
capabilities, such as windowing and retransmission of lost data, that are offered in X.25. This is
because Frame Relay typically operates over WAN facilities that offer more reliable connection
services and a higher degree of reliability than the facilities available during the late 1970s and
early 1980s that served as the common platforms for X.25 WANs. As mentioned earlier, Frame
Relay is strictly a Layer 2 protocol suite, whereas X.25 provides services at Layer 3 (the network
layer) as well. This enables Frame Relay to offer higher performance and greater transmission
efficiency than X.25, and makes Frame Relay suitable for current WAN applications, such as LAN
interconnection.

Devices attached to a Frame Relay WAN fall into the following two general categories:
 Data terminal equipment (DTE)
 Data circuit-terminating equipment (DCE)

DTEs generally are considered to be terminating equipment for a specific network and typically
are located on the premises of a customer. In fact, they may be owned by the customer.
Examples of DTE devices are terminals, personal computers, routers, and bridges.

DCEs are carrier-owned internetworking devices. The purpose of DCE equipment is to provide
clocking and switching services in a network; DCE devices are the equipment that actually
transmits data through the WAN. In most cases, these are packet switches.

The connection between a DTE device and a DCE device consists of both a physical layer
component and a link layer component. The physical component defines the mechanical,
electrical, functional, and procedural specifications for the connection between the devices. One
of the most commonly used physical layer interface specifications is the recommended standard
(RS)-232 specification. The link layer component defines the protocol that establishes the
connection between the DTE device, such as a router, and the DCE device, such as a switch.
This section examines a commonly used protocol specification for WAN networking: the Frame
Relay protocol.

Frame Relay Virtual Circuits


Frame Relay provides connection-oriented data link layer communication. This means that a
defined communication exists between each pair of devices and that these connections are
associated with a connection identifier. This service is implemented by using a Frame Relay
virtual circuit, which is a logical connection created between two data terminal equipment (DTE)
devices across a Frame Relay packet-switched network (PSN).

Virtual circuits provide a bidirectional communication path from one DTE device to another and
are uniquely identified by a data-link connection identifier (DLCI). A number of virtual circuits can
be multiplexed into a single physical circuit for transmission across the network. This capability
often can reduce the equipment and network complexity required to connect multiple DTE
devices.

A virtual circuit can pass through any number of intermediate DCE devices (switches) located
within the Frame Relay PSN.

Frame Relay virtual circuits fall into two categories: switched virtual circuits (SVCs) and
permanent virtual circuits (PVCs).

Switched Virtual Circuits


Switched virtual circuits (SVCs) are temporary connections used in situations requiring only
sporadic data transfer between DTE devices across the Frame Relay network. A communication
session across an SVC consists of the following four operational states:
 Call setup—The virtual circuit between two Frame Relay DTE devices is established.
 Data transfer—Data is transmitted between the DTE devices over the virtual circuit.
 Idle—The connection between DTE devices is still active, but no data is transferred. If an
SVC remains in an idle state for a defined period of time, the call can be terminated.
 Call termination—The virtual circuit between DTE devices is terminated. After the virtual
circuit is terminated, the DTE devices must establish a new SVC if there is additional data
to be exchanged. It is expected that SVCs will be established, maintained, and
terminated using the same signaling protocols used in ISDN.

Permanent Virtual Circuits


Permanent virtual circuits (PVCs) are permanently established connections that are used for
frequent and consistent data transfers between DTE devices across the Frame Relay network.
Communication across a PVC does not require the call setup and termination states that are
used with SVCs. PVCs always operate in one of the following two operational states:
 Data transfer—Data is transmitted between the DTE devices over the virtual circuit.
 Idle—The connection between DTE devices is active, but no data is transferred. Unlike
SVCs, PVCs will not be terminated under any circumstances when in an idle state.

DTE devices can begin transferring data whenever they are ready because the circuit is
permanently established.

Data-Link Connection Identifier


Frame Relay virtual circuits are identified by data-link connection identifiers (DLCIs). DLCI values
typically are assigned by the Frame Relay service provider (for example, the telephone
company).

Frame Relay DLCIs have local significance, which means that their values are unique on the local
access link between the DTE device and the Frame Relay switch, but not necessarily across the
entire Frame Relay WAN.
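A minimal sketch of this local significance: each router keeps its own DLCI-to-circuit mapping for its access link, so the two ends of one PVC can carry different DLCI numbers. The DLCI values and circuit name below are invented.

```python
# A sketch of DLCI local significance: the same PVC has a different DLCI at each end.
ROUTER_A_DLCI_MAP = {102: "pvc-to-branch-office"}   # DLCI 102 on Router A's access link
ROUTER_B_DLCI_MAP = {201: "pvc-to-branch-office"}   # the same PVC is DLCI 201 at Router B

def circuit_for(dlci_map: dict, dlci: int) -> str:
    return dlci_map.get(dlci, "unknown DLCI on this link")

print(circuit_for(ROUTER_A_DLCI_MAP, 102))   # both lookups resolve to the same PVC
print(circuit_for(ROUTER_B_DLCI_MAP, 201))
```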

A Single Frame Relay Virtual Circuit Can Be Assigned Different DLCIs on Each End of a VC
