
COMPUTER NETWORKS

1 Introduction
1.1 Definition of network
A network is a connected system of objects or people. The telephone system, known in the industry as
the Public Switched Telephone Network (PSTN), is an example of a network. It allows people in
virtually every corner of the world to communicate with anyone who has access to a telephone.

Similarly, a computer network allows users to communicate with other users on that network by
transmitting data along cables. A computer network is defined as having two or more devices, such as
workstations, printers, and servers, linked together for the purpose of sharing information, resources,
or both. The link can be through copper or fiber-optic cable or it can be a wireless connection that
uses radio signals, laser or infrared technology, or satellite transmission. The information and
resources shared can be data files, applications, printers, modems, or other hardware. Computer
networks are used in businesses, schools, government agencies, and even many homes.

Computer networks are not always independent of the telephone network. Telephone lines are often
used for transmitting data between computers, particularly for homes and small businesses. In the
past, this was a fairly slow but inexpensive connection method that relied on low-speed connection
devices. New telephone company (telco) data services are bringing higher bandwidths at
reasonable costs to home and small business users. These services use the same copper wires that
carry voice calls.

1.2 History of Networking


Early computers were standalone devices, which operated independently from other computers. It
soon became apparent that this was not an efficient or cost-effective way for businesses to operate.
Initially, the primary reason for networking computers was to communicate between
computers and share resources such as printers. Of course, once they were linked, people discovered
the usefulness of being able to share much more than printers. Data, applications, and peripheral
equipment like scanners, faxes, and plotters could now be shared between users, thus saving
organizations money and time.

The earliest version of the largest network, the Internet, was developed by the United States
Department of Defense. The Internet was used to communicate with subcontractors and universities
involved with its research projects. Cables were laid between the sites to enable communication.
Other Internet services, such as e-mail and newsgroups, soon followed. In the 1990s, the World Wide
Web was developed which put more demands on existing cable. The cable manufacturers continue to
develop new media that can handle all the data transmission needs of users.

1.3 Goals of Networking


The goal of an effective network is to improve productivity, foster communication, control costs, and
make information access faster and easier. To achieve these goals, the network needs certain
attributes. A good way to help remember these attributes is to see that the first letter of each of these
network objectives creates the acronym SMART:

 Simple - The network should be simple. It should require no special skills of its users.

 Manageable - The network should be manageable. It should be easy to monitor and to adjust
the performance of network elements, such as servers, workstations, routers, switches, and
wiring.
 Adaptable - The network should be adaptable and scalable. Change and growth should not
present barriers to network owners. The cabling on which a network is built has a great deal
to do with the ability of the network to adapt to change and adjust to growth.
 Reliable - The network should be reliable. When a user requests any connected tool or
operation, it should be available.
 Transparent - The network should be transparent. Accessing a resource on a distant
server should be just as fast and easy as accessing one nearby.

All of these attributes are affected by the quality of the cabling and the cable installation. A cable
installer must always remember the ultimate goals of the network when planning and installing cable
plants.

1.4 Benefits of Networking


There are numerous benefits to networking computers and other devices. Users can share documents
easily, backup data easily, share a common network connection, share hardware to accomplish tasks
like printing documents, and group computers and devices together to more easily manage them. It is
common to find home networks with as few as two computers sharing files, printers, and network
connections.

The benefits of networking computers are the following:

 Sharing Output Devices - Printers, other output devices, and fax machines can be shared.
 Sharing Input Devices - High-end scanners, medical equipment, optic readers, and other
input devices are typically used on only an occasional basis and are often relatively
expensive. Therefore, it makes sense to configure them for multiple users on the network.
 Sharing Storage Devices - Networked computers can share the use of hard disks, floppy,
Zip, and CD-ROM drives. Files can be saved to or accessed from these storage devices on
computers anywhere on the network.
 Sharing Modems and Internet Connections - Networked computers can share modems,
ISDN lines, cable modems, and DSL adapters. With the proper software, an entire LAN can
connect to the Internet through one phone line, or coaxial cable, and a single ISP account.
 Security - Security can mean different things when it is associated with a network. Prying
eyes and malicious users can destroy the integrity of data, and it is much easier to secure data
and resources when policies and their enforcement are centralized and managed. The term
security can also refer to protection against the hardware or software problems that users can
experience. When computers are networked together, it is much easier to back up the data that
is on them. This gives users a sense of security should any unexpected failure occur.
 Sharing Data and Applications - Shared data files result in the efficient use of disk space
and easier collaboration on multiuser projects. For example, if several managers need to
revise a spreadsheet containing the department budget, the file can be retrieved from a server,
edited locally and then saved again on a network server. Each manager will be able to access
the updated file to make additional changes. In addition, applications can be installed on a
network server. Users can access the application from the server without requiring disk space
on local hard disks for the program files.
 Reduced expenditure - The cost of linking computers, which includes the purchase of
network interface cards (NICs), cabling or wireless media, hubs, and other connectivity
devices, is often outweighed by the savings of not having to buy multiple printers and other
duplicated devices. Networking also saves labor hours when users access and share data.

2 Types of Networks
By using local-area network (LAN) and wide-area network (WAN) technologies, many computers are
interconnected to provide services to their users. In providing services, networked computers take on
different roles or functions in relation to each other. Some types of applications require computers to
function as equal partners. Other types of applications distribute work so that one computer functions
to serve a number of others in an unequal relationship. In either case, two computers typically
communicate with each other by using request/response protocols. One computer issues a request for
a service, and a second computer receives and responds to that request. The requester takes on the role
of a client, and the responder takes on the role of a server.
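The request/response pattern described above can be sketched in a few lines of Python. This is a toy illustration, not a real network protocol: the message text ("GET greeting") and the reply are invented, and a local socket pair stands in for a real network connection.

```python
# Minimal request/response sketch: one side issues a request, the other
# receives it and responds. A connected local socket pair stands in for
# a network link; the "GET greeting" message format is invented.
import socket
import threading

def server(conn):
    request = conn.recv(1024).decode()   # the server receives a request...
    if request == "GET greeting":
        conn.sendall(b"hello, client")   # ...and responds to a known one
    else:
        conn.sendall(b"unknown request")
    conn.close()

client_sock, server_sock = socket.socketpair()
t = threading.Thread(target=server, args=(server_sock,))
t.start()

client_sock.sendall(b"GET greeting")     # the client takes the requester role
reply = client_sock.recv(1024).decode()  # ...and receives the server's response
print(reply)                             # hello, client
t.join()
client_sock.close()
```

In a real client/server network the same pattern runs over TCP between separate machines; only the transport differs.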

2.1 Peer-to-Peer Networks


In a peer-to-peer network, the networked computers act as equal partners, or peers, to each other. As
peers, each computer can take on the client function or the server function alternately. At one time
Workstation A, for example, may make a request for a file from Workstation B, which responds by
serving the file to Workstation A. Workstation A functions as client, while Workstation B functions as
the server. At a later time, Workstation A and B can reverse roles. Workstation B could be the client,
making a request of Workstation A, and Workstation A, as server, responds to the request of
Workstation B. Workstations A and B stand in a reciprocal, or peer, relationship to each other.

In a peer-to-peer network, individual users control their own resources. They may decide to share
certain files with other users and may require passwords before they allow others to access their
resources. Since individual users make these decisions, there is no central point of control or
administration in the network. In addition, individual users must back up their own systems to be able
to recover from data loss in case of failures. When a computer acts as a server, the user of that
machine may experience reduced performance as the machine serves the requests made by other
systems.

Peer-to-peer networks are relatively easy to install and operate. No additional equipment is necessary
beyond a suitable operating system in each computer. Since users control their own resources, no
dedicated administrators are needed. A peer-to-peer network works well with a small number of
computers, perhaps 10 or fewer.

As networks grow larger, peer-to-peer relationships become increasingly difficult to coordinate. They
do not scale well, since their efficiency decreases rapidly as the number of computers on the network
increases. Since individual users control access to the resources on their computers, security may be
difficult to maintain. Client/server networks address these limitations of the peer-to-peer arrangement.

2.2 Client-Server Networks


In a client/server network arrangement, network services are located in a dedicated computer whose
only function is to respond to the requests of clients. The server contains the file, print, application,
security, and other services in a central computer that is continuously available to respond to client
requests. Most network operating systems adopt the form of client/server relationships. Typically,
desktop computers function as clients and one or more computers with additional processing power,
memory, and specialized software function as servers.

Prepared by Clive Onsomu
Servers are designed to handle requests from many clients simultaneously. Before a client can access
the server resources, the client must identify itself and be authorized to use the resource. This is
usually done by assigning each user an account name and password. A specialized authentication
server acts as an entry point, guarding access to the network, and verifies this account information. By
centralizing user accounts, security, and access control, server-based networks simplify the work of
network administration.

The concentration of network resources such as files, printers, and applications on servers also makes
the data they generate easier to back up and maintain. Rather than having these resources spread
around on individual machines, they can be located on specialized, dedicated servers for easier access.
Most client/server systems also include facilities for enhancing the network by adding new services
that extend the usefulness of the network.

The distribution of functions in client/server networks brings substantial advantages, but also incurs
some costs. Although the aggregation of resources on server systems brings greater security, simpler
access, and coordinated control, the server introduces a single point of failure into the network.
Without an operational server, the network cannot function at all. Additionally, servers require a
trained, expert staff to administer and maintain them, which increases the expense of running the
network. Server systems require additional hardware and specialized software that adds substantially
to the cost.

2.3 Local Area Networks


A local-area network (LAN) can connect many computers in a relatively small geographical area such
as a home, an office, or a campus. It allows users to access high bandwidth media like the Internet and
allows users to share devices such as printers.

One way to build a LAN would be to connect each computer to each of the others with a separate
communications channel. A direct connection from one computer to another is called a point-to-point link. If the network were
designed using point-to-point links, the number of links would grow rapidly as new computers were
added to the network. For each new computer, the network would need a new separate connection to
each of the other computers. This approach would be very costly and difficult to manage.
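The growth described above is quadratic: a full mesh of n computers needs n*(n-1)/2 separate links, because each new computer must be wired to every existing one. A quick sketch makes the cost obvious:

```python
# Number of point-to-point links needed to fully connect n computers:
# each of the n computers links to the other n-1, and each link is
# shared by two computers, so the total is n*(n-1)/2.
def full_mesh_links(n):
    return n * (n - 1) // 2

for n in [2, 5, 10, 50]:
    print(n, "computers need", full_mesh_links(n), "links")
# 10 computers already need 45 links; 50 computers need 1225.
```

This is why sharing a single channel, as described next, was such an important advance.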

Starting in the late 1960s and early 1970s, network engineers designed a form of network that enabled
many computers in a small area to share a single communications channel by taking turns using it.
These LANs now connect more computers than any other type of network. By allowing the computers
to share a communications channel, LANs greatly reduce the cost of the network. For economic and
technical reasons, point-to-point links over longer distances are then used to connect computers and
networks in separate towns, cities, or even across continents.

The general shape or layout of a LAN is called its topology. Topology defines the structure of the
network. This includes the physical topology, which is the actual layout of the wire or media, and the
logical topology, which is how the hosts access the media.

When all the computers connect to a central point, the network takes on a star topology. An
alternative topology connects the computers in a closed loop. Here, the cable is run from one
computer to the next and then from the second to its neighbor until the last one is connected back to
the first. This forms a ring topology. A third topology, called a bus, attaches each computer into a
single, long cable. Each topology has its benefits and drawbacks. Today, most LANs are designed
using some form of star topology, although ring and bus layouts are still used in some installations.

Whatever the layout, or topology, of the network, all LANs require the networked computers to share
the communications channel that connects them. The communications channel that they all share is
called the medium, and it is typically a cable that carries electrical signals through copper, or it may
be a fiber-optic cable that carries light signals through purified glass or plastic. In the case of wireless
networks, the computers may use antennas to broadcast radio signals to each other. In all cases, the
computers on a LAN share the medium by taking turns using it.

On a LAN, the rules for coordinating the use of the medium are called Media Access Control (MAC).
Since there are many computers on the network but only one of them can use the medium at a time,
there must be some rules for deciding how they will take turns in sharing the network. The data link
layer provides reliable transit of data across a physical link by using the MAC address. If there are
conflicts when more than one computer is contending for the media, the rules ensure that there is an
agreed method for resolving the conflict. In later sections of this chapter, the major types of LANs
will be reviewed, including the rules for sharing the medium.
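One simple way to take turns, used here purely as a toy model of media access control, is round-robin: a notional "token" visits each station in order, and only the station holding it may transmit. The station names and queued frames below are invented for the example.

```python
# Toy model of turn-taking on a shared medium: a token visits each
# station round-robin, and only the token holder may put a frame on
# the wire, so no two stations ever transmit at once.
stations = {"A": ["frame-1"], "B": [], "C": ["frame-2", "frame-3"]}

def run_rounds(stations, rounds):
    sent = []
    order = list(stations)
    for _ in range(rounds):
        for name in order:                      # the token visits each station
            if stations[name]:                  # transmit only while holding it
                sent.append((name, stations[name].pop(0)))
    return sent

print(run_rounds(stations, 2))
# [('A', 'frame-1'), ('C', 'frame-2'), ('C', 'frame-3')]
```

Real MAC methods such as token passing and contention with collision detection are more elaborate, but they solve the same problem: deciding whose turn it is.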

2.4 Wide Area Networks


For economic and technical reasons, LANs are not suitable for communications over long distances.
On a LAN, the computers must coordinate their use of the network and this coordination takes time.
Over long distances with greater delays in communication, the computers would take more time
coordinating the use of the shared medium and less time sending data messages. In addition, the costs
of providing high-speed media over long distances are much greater than in the case of LANs. For
these reasons, wide-area network (WAN) technologies differ from LANs.

A WAN, as the name implies, is designed to work over a larger area than a LAN. A WAN uses point-
to-point or point-to-multipoint, serial communications lines. Point-to-point lines connect only two
locations, one on each side of the line. Point-to-multipoint lines connect one location on one side of
the line to multiple locations on the other side. They are called serial lines because the bits of
information are transmitted one after another in a series, like cars traveling on a single lane highway.

The following are some of the more common WAN technologies:

 Modems
 Integrated Services Digital Network (ISDN)
 Digital subscriber line (DSL)
 Frame Relay
 Asynchronous Transfer Mode (ATM)
 The T (US) and E (Europe) Carrier series (T1, E1, T3, E3)
 Synchronous Optical Network (SONET)

Typically, individuals and companies do not build their own WAN connections. Government
regulations only allow utility companies to install lines on public property. Therefore, wide area
connections make use of the communications facilities put in place by utility companies, called
common carriers, such as the telephone company.

Connections across WAN lines may be temporary or permanent. Telephone or dialup lines, for
example, might make a temporary connection to a remote network from a computer in a home or
small office. In this case, the home or office computer makes a phone call to a computer on the
boundary of the remote network. The telephone company provides the connection, or circuit, that is
used for the duration of the call. After data is transmitted, the line is disconnected, just as it is for an
ordinary voice call. If a company wants to transmit data at any time without having to connect and
disconnect the line each time, the company can rent a permanent line or circuit from the common
carrier. These leased lines are constantly available and operate at higher speeds than temporary dialup
connections.

In both temporary and permanent cases, computers that connect over wide area circuits must use
special devices called modems or channel service unit/data service unit (CSU/DSU) at each end of the
connection.

Note: Channel service unit /data service unit (CSU/DSU) is a pair of communications
devices that connect an in-house line to an external digital circuit (T1, DDS, and so
on). It is similar to a modem, but connects a digital circuit rather than an analog one.

Modem devices are required because the electrical signals that carry digital computer data must be
transformed, or modulated, before they can be transmitted on telephone lines. On the transmitting end
of the connection, a modem (modulator-demodulator) transforms computer signals into phone signals.
On the receiving end, the transformation is done from phone to computer signals. The modem is only
one means of connecting computers or similar devices so they can communicate over long distances.
Other much faster technologies include ISDN, Frame Relay, and ATM.
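As a toy illustration of modulation, the sketch below maps each bit to one of two audio tones, the idea behind frequency-shift keying as used by classic low-speed modems. The specific frequencies are illustrative, not a specification of any particular modem standard.

```python
# Sketch of frequency-shift keying (FSK): a modem can represent each
# bit as a tone at one of two frequencies, turning digital data into
# a signal a voice telephone line can carry. Frequencies illustrative.
MARK_HZ = 1270   # tone representing a binary 1
SPACE_HZ = 1070  # tone representing a binary 0

def modulate(bits):
    """Transmitting end: map each bit to the frequency of its tone."""
    return [MARK_HZ if b else SPACE_HZ for b in bits]

def demodulate(tones):
    """Receiving end: recover the bits from the received tone frequencies."""
    return [1 if f == MARK_HZ else 0 for f in tones]

data = [1, 0, 1, 1, 0]
assert demodulate(modulate(data)) == data   # round trip recovers the bits
```

A real modem generates and detects actual waveforms, but the mapping of bits to analog signal states is the essence of modulation and demodulation.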

In general, WANs typically connect fewer computers than LANs and normally operate at lower
speeds than LANs. WANs, however, provide the means for connecting single computers and many
LANs over large distances. Thus, they enable networks to span whole countries, and even the entire
globe.

2.5 Circuit-Switched versus Packet-Switched Networks

The public telephone system, sometimes referred to as plain old telephone service (POTS), is a
circuit-switched communications network. When a telephone call is placed in this type of network,
only one physical path is used between the telephones for the duration of that call. This pathway,
called a circuit, is maintained for the exclusive use of the call, until the connection is ended and the
telephone is hung up.

If the same number is called tomorrow from the same location as the call from today, the path would
probably not be the same. The circuit is created by a series of switches that use currently available
network paths to set up the call end-to-end. This explains why callers can get a clear connection one
day, and noise and static on another. This demonstrates that a circuit-switched connection is end-to-
end or point-to-point.

Conversely, in a packet-switched network, each individual packet of data can take a different route
and no dedicated pathway or circuit is established. When transferring data, such as a word processing
file, from one computer to another using a packet-switched network, each individual packet (bundle
of data) can take a different route. Although it all arrives at the same destination, it does not all travel
the same path to get there. Internet traffic uses packet-switching technology.
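The split-and-reassemble behavior described above can be sketched directly: number the packets, deliver them in an arbitrary order as if they took different routes, and sort them back by sequence number at the destination. The packet size and message below are arbitrary.

```python
# Sketch of packet switching: split data into numbered packets, let
# them arrive out of order (as if routed along different paths), and
# reassemble them by sequence number at the destination.
import random

def packetize(data, size):
    """Split data into (sequence_number, payload) packets."""
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def reassemble(packets):
    """Sort packets by sequence number and rejoin the payloads."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"each packet may take a different route"
packets = packetize(message, 8)
random.shuffle(packets)              # simulate arrival over different paths
assert reassemble(packets) == message
```

The sequence numbers play the same role as the travelers' shared destination in the analogy that follows: however the pieces travel, they can be put back in order on arrival.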

The difference between circuit and packet switching can be compared to the different ways in which a
large group of people traveling from Kisumu to Mombasa (two cities in Kenya) can reach their
destination. For example, circuit switching is similar to loading the entire group on a bus, a train, or
an airplane. The route is plotted out, and the whole group travels over that same route.

Packet switching is comparable to people traveling in their own automobiles. The group is broken
down into individual components just as the data communication is broken into packets. Some
travelers can take interstate highways, and others can use back roads. Some can drive straight through,
and others can take a more roundabout path. Eventually, they all end up at the same destination. The
group is put back together, just as packets are reassembled at the endpoint of the communication.
3 Physical Components of a Network
3.1 Network Topologies
LANs are networks that interconnect computers within an organization, a building, or a campus. Every
network has both a logical and a physical topology. Physical topology is the layout of the networking
cables, devices, and workstations. When pulling cable for LANs, a cable installer must know which
physical topology is to be used or is in existence already. Logical topologies dictate the path that data
takes between devices and workstations.

The two topology concepts are as follows:

 Logical topologies - Describe the function of the network. They describe how the network
gets voice and data from point to point. Common logical topologies include the ring and the
bus.
 Physical topologies - Describe the actual physical layout of the network. Common physical
topologies are the bus, ring, star, extended star, hierarchical, and mesh.

3.1.1 Logical Topologies

The logical topology of a network describes how data flows through the network. The two types of
logical topologies are ring and bus.

In a logical ring topology, data is transmitted from computer to computer until it reaches the
destination computer. The wire carries a complete data frame, transmitted one bit at a
time. To send data, computers must wait until they are notified that it is their turn. Also, the logical
ring topology is known as an active topology since each computer regenerates the signal before
passing it on. The logical ring topology is used in manufacturing, where it is often critical that the
time it takes for a message to transmit from a given source to its destination is predictable.

By contrast, a logical bus topology is known as a passive topology because the computers do not
regenerate the signal and pass it on as they do in a ring. Instead, special networking devices like
repeaters are needed to regenerate the signals over long distances. Another difference is that
workstations on a logical bus topology must contend for the right to transmit. Unlike transmissions on
a logical ring, all computers receive the data. The computers look at the destination address on the
data. If the data was not intended for them, they discard it.
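The receive-and-filter behavior on a logical bus can be sketched as follows. The addresses are invented stand-ins for hardware addresses; the point is only that every station sees the frame but just the addressee keeps it.

```python
# Sketch of a logical bus: every station receives every frame, checks
# the destination address, and discards the data unless it matches.
# Addresses are invented for the illustration.
def deliver_on_bus(frame, stations):
    """Broadcast a frame to all stations; return who accepted it."""
    accepted = {}
    for addr in stations:                # every station sees the signal
        if addr == frame["dst"]:         # ...but only the addressee keeps it
            accepted[addr] = frame["payload"]
    return accepted

hosts = ["AA:01", "AA:02", "AA:03"]
print(deliver_on_bus({"dst": "AA:02", "payload": "hi"}, hosts))
# {'AA:02': 'hi'}
```

This is exactly why contention matters on a bus: since everyone shares one wire, stations must compete for the right to transmit on it.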

On a logical bus topology, when a failure occurs, communication between all the devices fails as well.
One advantage to a logical ring topology is that if a failure occurs, not all communication fails. Only
the communication on the affected segment fails.

A network might have one type of logical topology and a completely different physical topology. For
example, the most common type of network, Ethernet, uses a star or extended star physical topology
but a logical bus topology. Token Ring uses a physical star and a logical ring. Fiber Distributed Data
Interface (FDDI) uses a physical ring and a logical ring.

Fig 1: Logical Ring

Fig 2 Logical Bus

The network topology defines the way in which computers, printers, and other devices are connected.
A network topology describes the layout of the wire and devices as well as the paths used by data
transmissions. The topology greatly influences how the network functions.

The following sections will discuss the different types of topologies including bus, star, extended star,
ring, mesh, and hybrid. The physical and logical topologies of a network will also be discussed.

3.1.2 Physical Topologies

The physical topology of a network often differs from the logical topology. The following physical
topologies are covered in this chapter:

 Bus
 Star
 Hierarchical
 Extended Star
 Ring and Dual Ring
 Mesh

3.1.2.1 Bus Topology

In a bus topology, commonly referred to as a linear bus, all the devices are connected by one single
cable. This cable proceeds from one computer to the next like a bus line going through a city. The
main cable segment must end with a terminator that absorbs the signal when it reaches the end of the
line or wire. If there is no terminator, the electrical signal representing the data bounces back at the
end of the wire, causing errors in the network. Only one packet of data can be transmitted at a time. If
more than one packet is transmitted, they collide and have to be resent. A bus topology with many
hosts can be very slow due to these collisions. This topology is rarely used and would only be suitable
for a home office or small business with a few hosts.

Fig 3 Bus Topology

3.1.2.2 Star Topology

The star topology is the most commonly used architecture in Ethernet LANs. When installed, the star
topology resembles spokes in a bicycle wheel. It is made up of a central connection point that is a
device such as a hub, switch, or router, where all of the cabling segments actually meet. Each host in
the network is connected to the central device with its own cable.

A star topology costs more to implement than the bus topology. This is because more cable is used
and a central device is needed such as a hub, switch, or router. However, the advantages of a star
topology are worth the additional costs. Since each host is connected to the central device with its
own wire, if there is a problem with that cable, only that host is affected. The rest of the network is
operational. This benefit is extremely important. It is the reason why virtually every newly designed
network has this topology.

Fig 4 Star Topology

3.1.2.3 Extended Star Topology

When a star network is expanded to include an additional networking device that is connected to the
main networking device, it is called an extended star topology. Larger networks, like those of
corporations or schools, use the extended star topology. When used with network devices that filter
frames or packets, like bridges, switches, and routers, this topology significantly reduces the traffic on
the wires by sending packets only to the wires of the destination host.

Fig 5 Extended Star Topology
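The filtering behavior just described can be sketched with a simple forwarding table, the mechanism a switch or bridge uses to decide which wire gets a frame. The port numbers and addresses are invented for the example.

```python
# Sketch of frame filtering in an extended star: the central device
# keeps a table of which port leads to which host and sends each frame
# out only the destination's port, instead of onto every wire.
mac_table = {"AA:01": 1, "AA:02": 2, "AA:03": 3}

def forward(frame, table, all_ports=(1, 2, 3)):
    """Return the ports the frame is sent to: the one port of a known
    destination, or every port (a flood) when the host is unknown."""
    port = table.get(frame["dst"])
    return [port] if port is not None else list(all_ports)

print(forward({"dst": "AA:02"}, mac_table))  # [2] - only the destination's wire
print(forward({"dst": "AA:99"}, mac_table))  # [1, 2, 3] - unknown host, flooded
```

Compare this with the bus, where every frame reaches every host: here most wires never carry traffic that is not meant for them.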

3.1.2.4 Hierarchical Topology

The hierarchical topology imposes order on the network by grouping hosts based on their physical
location on the network. This is typical of many telephone networks, where groups of extensions map
to floors of buildings, departments, or rank of personnel.

The disadvantage of a hierarchical topology is that if one cable fails, it can affect all the hosts that use
it to access other parts of the network.

Fig 6 Hierarchical Topology

3.1.2.5 Ring Topology

The ring topology is another important topology in LAN connectivity. It is important to know the
advantages and disadvantages of choosing a ring topology. As the name implies, hosts are connected
in the form of a ring or circle. Unlike the bus topology, it has no beginning or end that needs to be
terminated. Data is transmitted in a way that is unlike the bus or the star topology. A frame travels
around the ring, stopping at each node. If a node wants to transmit data, it adds the data as well as the
destination address to the frame. The frame then continues around the ring until it finds the destination
node, which takes the data out of the frame. The advantage of using a method such as this is that there
are no collisions of data packets.
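The circulation of a frame around the ring can be modeled in a few lines. This is a toy model of the behavior described above, with invented node names; real Token Ring framing is considerably more involved.

```python
# Toy model of a ring: a frame travels node to node around the circle
# until it reaches the destination, which takes the data out. Since
# only one frame circulates, there are no collisions.
ring = ["A", "B", "C", "D"]              # order of the nodes around the ring

def send_on_ring(ring, src, dst, data):
    """Return the nodes the frame visits, and the data delivered."""
    frame = {"dst": dst, "data": data}
    hops = []
    i = ring.index(src)
    while True:
        i = (i + 1) % len(ring)          # the frame moves to the next node
        hops.append(ring[i])
        if ring[i] == dst:               # the destination removes the data
            return hops, frame["data"]

hops, received = send_on_ring(ring, "A", "C", "budget.xls")
print(hops, received)                    # ['B', 'C'] budget.xls
```

Note how a frame from C back to A would pass through D first: direction around the ring is fixed, which is what the dual ring design below relaxes.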

There are two types of rings:

1. Single ring – All the devices on the network share a single cable, and the data travels in one
direction only. Each device waits its turn to send data over the network. An example is the
Token Ring topology.
2. Dual ring – The dual ring topology, as illustrated in Fig 7, allows data to be sent in both
directions, although only one ring is used at a time. This creates redundancy (fault tolerance),
meaning that in the event of a failure of one ring, data will still be able to be transmitted on
the other ring. An example of a dual ring is Fiber Distributed Data Interface (FDDI).

The most common implementation of the ring topology is in Token Ring networks, which use the
access method defined in the Institute of Electrical and Electronics Engineers (IEEE) 802.5 standard.
FDDI is a technology that is similar to Token Ring, but it uses light instead of electricity to transmit
data. It uses the dual ring.

Fig 7 Ring and Dual Ring Topology

3.1.2.6 Mesh topology

The mesh topology connects all devices (nodes) to each other for redundancy and fault tolerance. It is
used in WANs to interconnect LANs and for mission critical networks like those used by
governments. Implementing the mesh topology is expensive and difficult.

Fig 8 Mesh Topology

3.1.2.7 Hybrid Topology

The hybrid topology combines more than one type of topology. When a bus line joins two hubs of
different topologies, this configuration is called a star bus. Businesses or schools that have several
buildings, known as campuses, sometimes use this topology. The bus line is used to transfer the data
between the star topologies.

Fig 10 Snapshots of different Topologies

Fig 11 Hybrid Topology

3.1.3 Physical versus Logical Topology


Networks have both a physical and logical topology:

 Physical topology – Refers to the layout of the devices and media.


 Logical topology – Refers to the paths that signals travel from one point on the network to
another. That is, the way in which data accesses media and transmits packets across it.

These two terminologies can be a little confusing, partly because the word logical in this instance has
nothing to do with the way the network appears to be functioning. The physical and logical topologies
of a network can be the same. For instance, in a network physically shaped as a linear bus, the data
travels in a straight line from one computer to the next. Hence, it has both a bus physical topology and
a bus logical topology.

A network can also have physical and logical topologies that are quite different. For example, a
physical topology in the shape of a star, where cable segments can connect all computers to a central
hub, can in fact have a logical ring topology. Remember that in a ring, the data travels from one
computer to the next. That is because inside the hub, the wiring connections are such that the signal
actually travels around in a circle from one port to the next, creating a logical ring. The way data
travels in a network cannot always be predicted by simply observing its physical layout.
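
The star-wired ring above can be sketched in code. This is a minimal illustration (in Python, with made-up node names): the physical layout is a star around a hub, while the logical path the signal follows is a ring through the ports.

```python
# Physical topology: every node cables to a central hub (star layout).
physical_links = {"hub": ["A", "B", "C", "D"]}

# Logical topology: inside the hub the signal travels port to port in a circle.
logical_ring = ["A", "B", "C", "D", "A"]

def next_hop(node, ring):
    """Return the node that receives the signal after `node` on the logical ring."""
    i = ring.index(node)
    return ring[i + 1]

print(next_hop("B", logical_ring))  # C
print(next_hop("D", logical_ring))  # A
```

Note that `physical_links` plays no part in `next_hop`: the data path is determined entirely by the logical ring, which is the point the paragraph makes.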

As for Ethernet and Token Ring, Token Ring uses a logical ring topology in either a physical ring or
physical star. Ethernet uses a logical bus topology in either a physical bus or physical star.

All of the topologies discussed above can be both physical and logical, except that no logical star
topology exists.

3.2 Networking Media


Networking media can be defined simply as the means by which signals (data) are sent from one
computer to another (either by cable or wireless means). There are a wide variety of networking
media in the marketplace. This section will briefly discuss some of the available media types
including two that use copper (coaxial and twisted-pair), one that uses glass (fiber-optic), and one that
uses waves (wireless) to transmit data.

3.2.1 Coaxial cable


Coaxial cable is a copper-cored cable surrounded by a heavy shielding. It is used to connect
computers in a network. There are several types of coaxial cable, including thicknet, thinnet, RG-59
(standard cable for cable TV), and RG-6 (used in video distribution). Thicknet is large in diameter,
rigid, and thus difficult to install. In addition, the maximum transmission rate is only 10 Mbps,
significantly less than twisted-pair or fiber-optic, and its maximum run is 500 m. A thinner version,
known as thinnet or cheapernet, is occasionally used in Ethernet networks. Thinnet has the same
transmission rate as thicknet.

Fig 12 Coaxial Cable

3.2.2 Twisted Pair

Twisted-pair is a type of cabling that is used for telephone communications and most modern Ethernet
networks. A pair of wires forms a circuit that can transmit data. The pairs are twisted to provide
protection against crosstalk, the noise generated by adjacent pairs. Pairs of copper wires that are
encased in color-coded plastic insulation are twisted together. All the twisted-pairs are then protected
inside an outer jacket. There are two basic types, shielded twisted-pair (STP) and unshielded twisted-
pair (UTP). There are also categories of UTP wiring.

 STP cable combines the techniques of cancellation and the twisting of wires with shielding.
Each pair of wires is wrapped in metallic foil to further shield the wires from noise. The four
pairs of wires are then wrapped in an overall metallic braid or foil. STP reduces electrical
noise from within the cable (crosstalk) and from outside the cable (electromagnetic interference
[EMI] and radio frequency interference [RFI]).
 UTP cable is used in a variety of networks. It has two or four pairs of wires. This type of
cable relies solely on the cancellation effect produced by the twisted wire pairs to limit signal
degradation caused by EMI and RFI. UTP is the most commonly used cabling in Ethernet
networks.

Fig 13 Shielded Twisted Pair Cable (STP)

Fig 14 Screened Twisted Pair (ScTP)

Fig 15 Unshielded Twisted Pair (UTP)

Although STP prevents interference better than UTP, it is more expensive and difficult to install. In
addition, the metallic shielding must be grounded at both ends. If improperly grounded, the shield acts
like an antenna picking up unwanted signals. STP is primarily used outside North America.

Category Rating
UTP comes in several categories that are based on the number of wires and number of twists in those
wires. Category 3 is the wiring used for telephone connections. It has four pairs of wires and a
maximum data rate of up to 16 Mbps. Category 5 and Category 5e are currently the most common
Ethernet cables used. They have four pairs of wires with a maximum data rate of up to 100 Mbps.
Category 5e has more twists per foot than Category 5 wiring. These extra twists further prevent
interference from outside sources and the other wires within the cable.

The latest is Category 6. Category 6 is similar to Category 5 and Category 5e except that a plastic
divider separates the pairs of wires to prevent crosstalk. The pairs also have more twists than Category
5e cable.
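
The category ratings above lend themselves to a simple lookup. The sketch below (Python, values taken from the text) records each category and answers which cables can carry a required data rate:

```python
# UTP category ratings as listed above (pairs of wires, maximum data rate).
utp_categories = {
    "Cat 3":  {"pairs": 4, "max_rate_mbps": 16},
    "Cat 5":  {"pairs": 4, "max_rate_mbps": 100},
    "Cat 5e": {"pairs": 4, "max_rate_mbps": 100},
}

def cable_for_rate(required_mbps):
    """Return the categories whose rated data rate meets the requirement."""
    return [name for name, spec in utp_categories.items()
            if spec["max_rate_mbps"] >= required_mbps]

print(cable_for_rate(100))  # ['Cat 5', 'Cat 5e']
print(cable_for_rate(16))   # ['Cat 3', 'Cat 5', 'Cat 5e']
```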

RS-232
RS-232 is a standard for serial transmission between computers and other devices (modem, mouse,
and so on). It supports a point-to-point connection over a single copper wire so only two devices can
be connected. Virtually every computer has one RS-232 serial port. Modems, monitors, mice, and
serial printers are designed to connect to the RS-232 port. This port is also used to connect modems to
telephones. There are two types of connectors, a 25-pin connector (DB-25) and a 9-pin connector
(DB-9). There are several limitations of using RS-232. The signaling rates are limited to only 20
Kbps. The rate is limited because the potential for crosstalk between signal lines in the cable is very
high.

RS-232 is still the most common standard for serial communication. RS-422 and RS-423 are expected
to replace it in the future. Both support higher data rates and have greater immunity to electrical
interference.

3.2.3 Fiber-Optic Cable

In many ways fiber-optic systems are similar to copper wire systems. The biggest difference is that
fiber-optics use light pulses through fiber circuits instead of electronic pulses through copper
circuits to transmit information. Light entering the fiber is reflected or refracted at the cladding,
depending on the angle at which it strikes the cladding. It then bounces along inside the core and
cladding over very great distances.

Understanding the components in a fiber-optic system helps to better understand how the system
works relative to wire-based systems. Because of the two-way nature of data communications, each
fiber-optic circuit is actually two fiber cables: one for transmission in each direction. Each cable has
both a transmit and a receive connector. Depending on where the cable is used in the network, a pair
(Tx/Rx) could plug into a router, switch, termination panel, server, or even workstation.

The part of an optical fiber through which light rays travel is called the core of the fiber. Light rays
can only enter the core if their angle is inside the numerical aperture of the fiber. Likewise, once the
rays have entered the core of the fiber, there are a limited number of optical paths that a light ray can
follow through the fiber. These optical paths are called modes. If the diameter of the core of the fiber
is large enough so that there are many paths that light can take through the fiber, the fiber is called
"multimode" fiber. Single-mode fiber has a much smaller core that only allows light rays to travel
along one mode inside the fiber.
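
The numerical aperture mentioned above is computed from the refractive indices of the core and cladding. The sketch below (Python; the index values are illustrative assumptions, not figures from the text) computes the NA and the corresponding acceptance angle:

```python
import math

def numerical_aperture(n_core, n_cladding):
    """NA = sqrt(n_core^2 - n_cladding^2); only rays entering within the
    acceptance cone defined by the NA are guided along the core."""
    return math.sqrt(n_core**2 - n_cladding**2)

# Illustrative indices for a glass fiber (hypothetical values).
na = numerical_aperture(1.48, 1.46)
acceptance_angle = math.degrees(math.asin(na))
print(round(na, 3), round(acceptance_angle, 1))  # 0.242 14.0
```

A ray striking the fiber end at more than the acceptance angle (about 14 degrees for these indices) is not guided and is lost.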

Every fiber-optic cable used for networking consists of two glass fibers encased in separate sheaths.
One fiber carries transmitted data from device A to device B. The second fiber carries data from
device B to device A. The fibers are similar to two one-way streets going in opposite directions. This
provides a full-duplex communication link. Copper twisted-pair uses a wire pair to transmit and a
wire pair to receive. Fiber-optic circuits use one fiber strand to transmit and one to receive. Typically,
these two fiber cables will be in a single outer jacket until they reach the point at which connectors are
attached.

Until the connectors are attached, there is no need for shielding, because no light escapes when it is
inside a fiber. This means there are no crosstalk issues with fiber. It is very common to see multiple
fiber pairs encased in the same cable. This allows a single cable to be run between data closets, floors,
or buildings. One cable can contain 2 to 48 or more separate fibers. With copper, one UTP cable

would have to be pulled for each circuit. Fiber can carry many more bits per second and carry them
farther than copper can.

Usually, five parts make up each fiber-optic cable. The parts are the core, the cladding, a buffer, a
strength material, and an outer jacket.

The core is the light transmission element at the center of the optical fiber. All the light signals travel
through the core. A core is typically glass made from a combination of silicon dioxide (silica) and
other elements. Multimode fiber uses a type of glass called graded-index glass for its core. This glass
has a lower index of refraction towards the outer edge of the core. Therefore, the outer area of the
core is less optically dense than the center, and light can travel faster in the outer part of the core. This design is
used because a light ray following a mode that goes straight down the center of the core does not have
as far to travel as a ray following a mode that bounces around in the fiber. All rays should arrive at the
end of the fiber together. Then the receiver at the end of the fiber receives a strong flash of light rather
than a long, dim pulse.

Surrounding the core is the cladding. Cladding is also made of silica but with a lower index of
refraction than the core. Light rays traveling through the fiber core reflect off this core-to-cladding
interface as they move through the fiber by total internal reflection. Standard multimode fiber-optic
cable is the most common type of fiber-optic cable used in LANs. A standard multimode fiber-optic
cable uses an optical fiber with either a 62.5 or a 50-micron core and a 125-micron diameter cladding.
This is commonly designated as 62.5/125 or 50/125 micron optical fiber. A micron is one millionth of
a meter (1 µm).

Surrounding the cladding is a buffer material that is usually plastic. The buffer material helps shield
the core and cladding from damage. There are two basic cable designs. They are the loose-tube and
the tight-buffered cable designs. Most of the fiber used in LANs is tight-buffered multimode cable.
Tight-buffered cables have the buffering material that surrounds the cladding in direct contact with
the cladding. The most practical difference between the two designs is the applications for which they
are used. Loose-tube cable is primarily used for outside-building installations, while tight-buffered
cable is used inside buildings.

The strength material surrounds the buffer, preventing the fiber cable from being stretched when
installers pull it. The material used is often Kevlar, the same material used to produce bulletproof
vests.

The final element is the outer jacket. The outer jacket surrounds the cable to protect the fiber against
abrasion, solvents, and other contaminants. The color of the outer jacket of multimode fiber is usually
orange, but occasionally another color.

Infrared Light Emitting Diodes (LEDs) and Vertical Cavity Surface Emitting Lasers (VCSELs) are the two
types of light source usually used with multimode fiber; a given link uses one or the other. LEDs are a
little cheaper to build and raise fewer safety concerns than lasers. However, LEDs cannot
transmit light over cable as far as lasers can. Multimode fiber (62.5/125) can carry data distances of up
to 2000 meters (6,560 ft).

Fig 16 Cross-Section showing Fibre optic layers

Fig 17 Multi-mode and Single-mode fibre optic cable

3.2.4 Wireless

If the cost of running cables is too high or computers need to be movable without being tethered to
cables, wireless is an alternative method of connecting a LAN. Wireless networks use radio frequency
(RF), laser, infrared (IR), and satellite/microwaves to carry signals from one computer to another
without a permanent cable connection. Wireless signals are electromagnetic waves that travel through
the air. No physical medium is necessary for wireless signals, making them a very versatile way to
build a network.

Just as in cabled networks, IEEE is the prime issuer of standards for wireless networks. A key
technology contained within the 802.11 standard is Direct Sequence Spread Spectrum (DSSS). DSSS
applies to wireless devices operating within a 1 to 2 Mbps range. A DSSS system may operate at up to
11 Mbps but will not be considered compliant above 2 Mbps. The next standard approved was
802.11b, which increased transmission capabilities to 11 Mbps. Even though DSSS WLANs were
able to interoperate with the Frequency Hopping Spread Spectrum (FHSS) WLANs, problems
developed, prompting design changes by the manufacturers. In this case, the IEEE's task was simply to
create a standard that matched the manufacturers' solution.

802.11b may also be called Wi-Fi™ or high-speed wireless and refers to DSSS systems that operate at
1, 2, 5.5, and 11 Mbps. All 802.11b systems are backward compatible in that they also support the
802.11 data rates of 1 and 2 Mbps for DSSS. This backward compatibility is extremely important as it
allows upgrading of the wireless network without replacing the NICs or access points.

802.11b devices achieve the higher data throughput rate by using a different coding technique from
802.11, allowing for a greater amount of data to be transferred in the same time frame. The majority
of 802.11b devices still fail to match the 11 Mbps bandwidth and generally function in the 2 to 4
Mbps range.

802.11a covers WLAN devices operating in the 5 GHz transmission band. Using the 5 GHz range
prevents interoperability with 802.11b devices, which operate in the 2.4 GHz band. 802.11a is capable of
supplying data throughput of 54 Mbps and, with proprietary technology known as "rate doubling", has
achieved 108 Mbps. In production networks, a more standard rating is 20-26 Mbps.

802.11g provides the same bandwidth as 802.11a but with backwards compatibility for 802.11b
devices, using Orthogonal Frequency Division Multiplexing (OFDM) modulation technology and
operating in the 2.4 GHz transmission band. Cisco has developed an access point that permits
802.11b and 802.11a devices to coexist on the same WLAN. The access point supplies 'gateway'
services allowing these otherwise incompatible devices to communicate.
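
The band and rate figures above can be collected into a small table, with a check for the interoperability rule (devices must share a transmission band). A minimal sketch in Python:

```python
# Summary of the 802.11 variants described above.
wlan_standards = {
    "802.11b": {"band_ghz": 2.4, "max_mbps": 11},
    "802.11a": {"band_ghz": 5.0, "max_mbps": 54},
    "802.11g": {"band_ghz": 2.4, "max_mbps": 54},
}

def can_interoperate(a, b):
    """Two standards can interoperate only if they use the same band."""
    return wlan_standards[a]["band_ghz"] == wlan_standards[b]["band_ghz"]

print(can_interoperate("802.11b", "802.11g"))  # True
print(can_interoperate("802.11b", "802.11a"))  # False
```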

3.2.5 Wireless Devices and Topologies

A wireless network may consist of as few as two devices. The nodes could simply be desktop
workstations or notebook computers. Equipped with wireless NICs, an 'ad hoc' network can be
established, which is comparable to a peer-to-peer wired network. Both devices act as servers and clients in
this environment. Although it does provide connectivity, security and throughput are minimal.
Another problem with this type of network is compatibility. NICs from different manufacturers are
often not compatible.

To solve the problem of compatibility, an access point (AP) is commonly installed to act as a central
hub for the WLAN infrastructure mode. The AP is hard wired to the cabled LAN to provide Internet
access and connectivity to the wired network. APs are equipped with antennae and provide wireless
connectivity over a specified area referred to as a cell. Depending on the structural composition of the
location in which the AP is installed and the size and gain of the antennae, the size of the cell could
greatly vary. Most commonly, the range will be from 91.44 to 152.4 meters (300 to 500 feet). To
service larger areas, multiple access points may be installed with a degree of overlap. The overlap
permits "roaming" between cells. This is very similar to the services provided by cellular phone
companies. Overlap, on multiple AP networks, is critical to allow for movement of devices within the
WLAN. Although not addressed in the IEEE standards, a 20-30% overlap is desirable. This rate of
overlap will permit roaming between cells, allowing for the disconnect and reconnect activity to occur
seamlessly without service interruption.

When a client is activated within the WLAN, it will start "listening" for a compatible device with
which to "associate". This is referred to as "scanning" and may be active or passive.

Active scanning causes a probe request to be sent from the wireless node seeking to join the network.
The probe request will contain the Service Set Identifier (SSID) of the network it wishes to join.
When an AP with the same SSID is found, the AP will issue a probe response. The authentication and
association steps are completed. Passive scanning nodes listen for beacon management frames
(beacons), which are transmitted by the AP (infrastructure mode) or peer nodes (ad hoc). When a
node receives a beacon that contains the SSID of the network it is trying to join, an attempt is made to
join the network. Passive scanning is a continuous process and nodes may associate or disassociate
with APs as signal strength changes.
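
The active scanning step above can be sketched as a probe exchange. In this simplified Python illustration (SSIDs are hypothetical), the node sends a probe request carrying the SSID it wants to join and associates with the first AP that responds:

```python
# Hypothetical APs visible to the scanning node.
access_points = [{"ssid": "guest"}, {"ssid": "campus"}]

def active_scan(ssid, aps):
    """Send a probe request with the desired SSID; return the AP that issues
    a probe response (authentication and association would follow)."""
    for ap in aps:
        if ap["ssid"] == ssid:
            return ap
    return None  # no AP with a matching SSID responded

print(active_scan("campus", access_points))  # {'ssid': 'campus'}
```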

Fig 18 Wireless LAN

3.2.6 How Wireless LANs Communicate

After establishing connectivity to the WLAN, a node will pass frames in the same manner as on any
other 802.x network. WLANs do not use a standard 802.3 frame. Therefore, using the term wireless
Ethernet is misleading. There are three types of frames: control, management, and data. Only the data
frame type is similar to 802.3 frames. The maximum payload of both wireless and 802.3 frames is 1500 bytes;
however, an Ethernet frame may not exceed 1518 bytes, whereas a wireless frame can be as large as
2346 bytes. Usually the WLAN frame size will be limited to 1518 bytes, as the WLAN is most commonly
connected to a wired Ethernet network.

Since radio frequency (RF) is a shared medium, collisions can occur just as they do on wired shared
medium. The major difference is that there is no method by which the source node is able to detect
that a collision occurred. For that reason WLANs use Carrier Sense Multiple Access/Collision
Avoidance (CSMA/CA). This is somewhat like Ethernet CSMA/CD.

When a source node sends a frame, the receiving node returns a positive acknowledgment (ACK).
This can cause consumption of 50% of the available bandwidth. This overhead when combined with
the collision avoidance protocol overhead reduces the actual data throughput to a maximum of 5.0 to
5.5 Mbps on an 802.11b wireless LAN rated at 11 Mbps.
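
The arithmetic behind those throughput figures is simple: if ACK traffic and collision avoidance together consume roughly half the rated bandwidth, about half remains for data. A rough sketch (the overhead fraction is an assumption drawn from the 50% figure above):

```python
def effective_throughput(rated_mbps, overhead_fraction=0.5):
    """Estimate usable data rate after ACK and CSMA/CA overhead."""
    return rated_mbps * (1 - overhead_fraction)

print(effective_throughput(11))  # 5.5
```

This matches the 5.0 to 5.5 Mbps observed on an 11 Mbps 802.11b WLAN.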

Performance of the network will also be affected by signal strength and degradation in signal quality
due to distance or interference. As the signal becomes weaker, Adaptive Rate Selection (ARS) may be
invoked. The transmitting unit will drop the data rate from 11 Mbps to 5.5 Mbps, from 5.5 Mbps to 2
Mbps or 2 Mbps to 1 Mbps.

3.2.7 Authentication and Association

WLAN authentication occurs at Layer 2. It is the process of authenticating the device, not the user.
This is a critical point to remember when considering WLAN security, troubleshooting, and overall
management.

Authentication may be a null process, as in the case of a new AP and NIC with default configurations
in place. The client will send an authentication request frame to the AP and the frame will be accepted
or rejected by the AP. The client is notified of the response via an authentication response frame. The
AP may also be configured to hand off the authentication task to an authentication server, which
would perform a more thorough credentialing process.

Association, performed after authentication, is the state that permits a client to use the services of the
AP to transfer data.

 Unauthenticated and unassociated. The node is disconnected from the network and not
associated to an access point.
 Authenticated and unassociated. The node has been authenticated on the network but
has not yet associated with the access point.
 Authenticated and associated. The node is connected to the network and able to
transmit and receive data through the access point.
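
The three states above form a strict progression: a node must authenticate before it can associate, and must associate before it can pass data. A minimal sketch of that sequence:

```python
# The client states listed above, in order.
STATES = ["unauthenticated_unassociated",
          "authenticated_unassociated",
          "authenticated_associated"]

def advance(state):
    """Move to the next state in the authentication/association sequence;
    a fully associated node stays associated."""
    i = STATES.index(state)
    return STATES[min(i + 1, len(STATES) - 1)]

s = advance("unauthenticated_unassociated")
print(s)           # authenticated_unassociated
print(advance(s))  # authenticated_associated
```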

IEEE 802.11 lists two types of authentication processes.

The first authentication process is the open system. This is an open connectivity standard in which
only the SSID must match. It may be used in a secure or non-secure environment, although the
ability of low-level network 'sniffers' to discover the SSID of the WLAN is high.

The second process is the shared key. This process requires the use of Wired Equivalent Privacy
(WEP) encryption. WEP is a fairly simple algorithm using 64- and 128-bit keys. The AP is configured
with an encrypted key, and nodes attempting to access the network through the AP must have a
matching key. Statically assigned WEP keys provide a higher level of security than the open system
but are definitely not hack proof.

The problem of unauthorized entry into WLANs is being addressed by a number of new security
solution technologies.

3.3 Common Networking Devices

Networking devices are used to connect computers and peripheral devices so they can communicate.
These include hubs, bridges, and switches as detailed in the following sections.

3.3.1 Hubs

A hub is a device that is used to extend an Ethernet wire to allow more devices to communicate with
each other. When using a hub, the network topology changes from a linear bus, where each device
plugs directly into the wire, to a star.

Data arriving over the cables to a hub port is electrically repeated on all the other ports that are
connected to the same Ethernet LAN, except for the port on which the data was received. Sometimes
hubs are called concentrators, because they serve as a central connection point for an Ethernet LAN.
Hubs are most commonly used in Ethernet 10BASE-T or 100BASE-T networks, although there are
other network architectures that use them.

3.3.2 Bridges and Switches

Bridges connect network segments. The basic functionality of the bridge resides in its ability to make
intelligent decisions about whether to pass signals on to the next segment of a network. When a bridge
sees a frame (data being sent from one computer to another) on the network, it looks at the destination
address and compares it to the forwarding table to determine whether to filter, flood (sent to
everyone), or copy the frame onto another segment.
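
The filter/flood/forward decision described above can be sketched directly. In this simplified Python illustration (the MAC addresses and segment numbers are made up), the bridge consults its forwarding table for the destination address:

```python
# Hypothetical forwarding table: destination MAC -> segment number.
forwarding_table = {"aa:aa": 1, "bb:bb": 2}

def bridge_decision(dest_mac, arrival_segment, table):
    """Filter if the destination is on the arrival segment, forward (copy)
    if it is on another segment, flood if the destination is unknown."""
    if dest_mac not in table:
        return "flood"
    if table[dest_mac] == arrival_segment:
        return "filter"
    return "forward"

print(bridge_decision("bb:bb", 1, forwarding_table))  # forward
print(bridge_decision("aa:aa", 1, forwarding_table))  # filter
print(bridge_decision("cc:cc", 1, forwarding_table))  # flood
```

A real bridge also learns its table by recording the source address and arrival port of each frame it sees; that learning step is omitted here.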

A switch is sometimes described as a multiport bridge. While a typical bridge may have just two ports
(linking two network segments), a switch has several ports, depending on how many network
segments are to be linked. A switch is a more sophisticated device than a bridge, although the
basic function of the switch is deceptively simple: to choose the port through which data is forwarded
toward its destination. Ethernet switches are becoming popular connectivity solutions because, like
bridges, they increase network performance (speed and bandwidth).

3.3.3 Routers

Routers are the most sophisticated internetworking devices discussed so far. They are slower than
bridges and switches, but make “smart” decisions on how to route (or send) packets received on one
port to a network on another port. Each port to which a network segment is attached is described as a
router interface. Routers can be computers with special network software installed on them or they
can be other devices built by network equipment manufacturers. Routers contain tables of network
addresses along with optimal destination routes to other networks.
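
A table of network addresses with routes can be sketched using longest-prefix matching, which is how routers choose among overlapping networks. This Python illustration uses the standard library `ipaddress` module; the networks and next-hop addresses are made up:

```python
import ipaddress

# Hypothetical routing table: destination network -> next-hop address.
routing_table = {
    "10.0.0.0/8":  "192.168.1.1",
    "10.1.0.0/16": "192.168.1.2",
}

def route(destination, table):
    """Return the next hop for the most specific (longest-prefix) match."""
    addr = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(net), hop) for net, hop in table.items()
               if addr in ipaddress.ip_network(net)]
    if not matches:
        return None  # no route to destination
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(route("10.1.2.3", routing_table))  # 192.168.1.2
print(route("10.2.0.1", routing_table))  # 192.168.1.1
```

Here 10.1.2.3 matches both networks, but the more specific /16 route wins.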

3.3.4 Servers

In a network operating system environment, many client systems access and share the resources of
one or more servers. Desktop client systems are equipped with their own memory and peripheral
devices, such as a keyboard, monitor, and a disk drive. Server systems must be equipped to support
multiple concurrent users and multiple tasks as clients make demands on the server for remote
resources.

Network operating systems have additional network management tools and features that are designed
to support access by large numbers of simultaneous users. On all but the smallest networks, NOSs are
installed on powerful servers. Many users, known as clients, share these servers. Servers usually have
high-capacity, high-speed disk drives, large amounts of RAM, high-speed NICs, and in some cases,
multiple CPUs. These servers are typically configured to use the Internet family of protocols, TCP/IP,
and offer one or more TCP/IP services.

Servers running NOSs are also used to authenticate users and provide access to shared resources.
These servers are designed to handle requests from many clients simultaneously. Before a client can
access the server resources, the client must be identified and be authorized to use the resource.
Identification and authorization is achieved by assigning each client an account name and password.
The account name and password are then verified by an authentication service to permit or deny
access to the network. By centralizing user accounts, security, and access control, server-based
networks simplify the work of network administration.
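
The verification step above can be sketched as follows. This is a minimal illustration (the account name and password are hypothetical); a real NOS authentication service stores salted password hashes and enforces account policies:

```python
import hashlib

# Hypothetical account database: account name -> SHA-256 password digest.
accounts = {"clive": hashlib.sha256(b"s3cret").hexdigest()}

def authenticate(user, password, db):
    """Permit access only if the account exists and the password matches."""
    digest = hashlib.sha256(password.encode()).hexdigest()
    return db.get(user) == digest

print(authenticate("clive", "s3cret", accounts))  # True
print(authenticate("clive", "wrong", accounts))   # False
```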

Servers are typically larger systems than workstations and have additional memory to support
multiple tasks that are active or resident in memory at the same time. Additional disk space is also
required on servers to hold shared files and to function as an extension to the internal memory on the
system. Also, servers typically require extra expansion slots on their system boards to connect shared
devices, such as printers and multiple network interfaces.

Another feature of systems capable of acting as servers is the processing power. Ordinarily,
computers have a single CPU, which executes the instructions that make up a given task or process. In
order to work efficiently and deliver fast responses to client requests, a NOS server requires a
powerful CPU to execute its tasks or programs. Single processor systems with one CPU can meet the

needs of most servers if the CPU has the necessary speed. To achieve higher execution speeds, some
systems are equipped with more than one processor. Such systems are called multiprocessor systems.
Multiprocessor systems are capable of executing multiple tasks in parallel by assigning each task to a
different processor. The aggregate amount of work that the server can perform in a given time is
greatly enhanced in multiprocessor systems.

Since servers function as central repositories of resources that are vital to the operation of client
systems, these servers must be efficient and robust. The term robust indicates that the server systems
are able to function effectively under heavy loads. It also means the systems are able to survive the
failure of one or more processes or components without experiencing a general system failure. This
objective is met by building redundancy into server systems. Redundancy is the inclusion of
additional hardware components that can take over if other components fail. Redundancy is a feature
of fault tolerant systems that are designed to survive failures and can be repaired without interruption
while the systems are up and running. Because a NOS depends on the continuous operation of its
server, the extra hardware components justify the additional expense.

Server applications and functions include web services using Hypertext Transfer Protocol (HTTP),
File Transfer Protocol (FTP), and Domain Name System (DNS). Standard e-mail protocols supported
by network servers include Simple Mail Transfer Protocol (SMTP), Post Office Protocol 3 (POP3),
and Internet Messaging Access Protocol (IMAP). File sharing protocols include Sun Microsystems
Network File System (NFS) and Microsoft Server Message Block (SMB).

Network servers frequently provide print services. A server may also provide Dynamic Host
Configuration Protocol (DHCP), which automatically allocates IP addresses to client workstations. In
addition to running services for the clients on the network, servers can be set to act as a basic firewall
for the network. This is accomplished using proxy or Network Address Translation (NAT), both of
which hide internal private network addresses from the Internet.
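
The address hiding that NAT performs can be sketched as a translation table. In this simplified Python illustration (all addresses and ports are made up), outbound packets have their private source address rewritten to the router's single public address, with the original recorded per port so replies can be mapped back:

```python
import itertools

PUBLIC_IP = "203.0.113.5"          # hypothetical public address
_ports = itertools.count(40000)    # next available public-side port

def translate_outbound(packet, nat_table):
    """Replace the private source with the public IP and record the mapping
    so replies arriving at (PUBLIC_IP, port) can be returned inside."""
    port = next(_ports)
    nat_table[(PUBLIC_IP, port)] = (packet["src_ip"], packet["src_port"])
    return {**packet, "src_ip": PUBLIC_IP, "src_port": port}

table = {}
out = translate_outbound({"src_ip": "192.168.0.10", "src_port": 5555,
                          "dst_ip": "198.51.100.7", "dst_port": 80}, table)
print(out["src_ip"], out["src_port"])  # 203.0.113.5 40000
```

The external host sees only the public address, which is how the internal private network stays hidden from the Internet.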

One server running a NOS may work well when serving only a handful of clients. But most
organizations must deploy several servers in order to achieve acceptable performance. A typical
design separates services so one server is responsible for e-mail, another server is responsible for file
sharing, and another is responsible for FTP.

The concentration of network resources, such as files, printers, and applications on servers, also
makes the data generated easier to back up and maintain. Rather than have these resources distributed
on individual machines, network resources can be located on specialized, dedicated servers for easy
access and back up.

3.3.5 Workstations

A workstation is a client computer that is used to run applications and is connected to a server from
which it obtains data shared with other computers. A server is a computer that runs a network
operating system (NOS). A workstation uses special software, such as a network shell program to
perform the following tasks:

 Intercepts user data and application commands
 Decides if the command is for the local operating system or for the NOS
 Directs the command to the local operating system or to the network interface card (NIC) for
processing and transmission onto the network
 Delivers transmissions from the network to the application running on the workstation

Some Windows operating systems may be installed on workstations and servers. The NT/2000/XP
versions of Windows software provide network server capability. Windows 9x and ME versions only
provide workstation support.

UNIX or Linux can serve as a desktop operating system but are usually found on high-end computers.
These workstations are employed in engineering and scientific applications, which require dedicated
high-performance computers. Some of the specific applications that are frequently run on UNIX
workstations are included in the following list:

 Computer-aided design (CAD)


 Electronic circuit design
 Weather data analysis
 Computer graphics animation
 Telecommunications equipment management

Most current desktop operating systems include networking capabilities and support multi-user
access. For this reason, it is becoming more common to classify computers and operating systems
based on the types of applications the computer runs. This classification is based on the role or
function that the computer plays, such as workstation or server. Typical desktop or low-end
workstation applications might include word processing, spreadsheets, and financial management. On
high-end workstations, the applications might include graphical design or equipment management and
others as listed above.

A diskless workstation is a special class of computer designed to run on a network. As the name
implies, it has no disk drives but does have a monitor, keyboard, memory, booting instructions in
ROM, and a network interface card. The software that is used to establish a network connection is
loaded from the bootable ROM chip located on the NIC.

Because a diskless workstation does not have any disk drives, it is not possible to upload data from
the workstation or download anything to it. A diskless workstation cannot pass a virus onto the
network, nor can it be used to take data from the network by copying this information to a disk drive.
As a result, diskless workstations offer greater security than ordinary workstations. For this reason,
such workstations are used in networks where security is paramount.

Laptops can also serve as workstations on a LAN and can be connected through a docking station,
external LAN adapter, or a PCMCIA card. A docking station is an add-on device that turns a laptop
into a desktop.

3.3.6 Network Interface Card

A network interface card (NIC) is a printed circuit board that provides network communication
capabilities to and from a personal computer. Also called a LAN adapter, it resides in a slot on the
motherboard and provides an interface connection to the network media. The type of NIC must match
the media and protocol used on the local network.

The NIC communicates with the network through a serial connection and with the computer through a
parallel connection. The NIC uses an Interrupt Request (IRQ), an I/O address, and upper memory
space to work with the operating system. An IRQ is a signal informing the CPU that an event needing
attention has occurred. An IRQ is sent over a hardware line to the microprocessor when a key is
pressed on the keyboard. Then the CPU enables transmission of the character from the keyboard to
RAM. An I/O address is a location in the memory used to enter data or retrieve data from a computer

Prepared by Clive Onsomu


23
by an auxiliary device. Upper memory refers to the memory area between the first 640 kilobytes (KB)
and 1 megabyte (MB) of RAM.

When selecting a NIC, consider the following factors:

 Protocols – Ethernet, Token Ring, or FDDI
 Types of media – Twisted-pair, coaxial, wireless, or fiber-optic
 Type of system bus – PCI or ISA

3.4 Client-Server Relationship


The client-server computing model distributes processing over multiple computers. Distributed
processing enables access to remote systems for the purpose of sharing information and network
resources. In a client-server environment, the client and server share or distribute processing
responsibilities. Most network operating systems are designed around the client-server model to
provide network services to users. A computer on a network can be referred to as a host, workstation,
client, or server. A computer running TCP/IP, whether it is a workstation or a server, is considered a
host computer.

Definitions of other commonly used terms are:

 Local host – The machine on which the user is currently working.
 Remote host – A system that is being accessed by a user from another system.
 Server – Provides resources to one or more clients by means of a network.
 Client – A machine that uses the services from one or more servers on a network.

An example of a client-server relationship is a File Transfer Protocol (FTP) session. FTP is a
universal method of transferring a file from one computer to another. For the client to transfer a file
to or from the server, the server must be running the FTP daemon or service. In this case, the client
requests the file to be transferred. The server provides the services necessary to receive or send the
file.
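
The FTP exchange described above can be sketched as the sequence of commands a client sends over the control connection. The file name and credentials below are invented for illustration; a real client (for example, Python's ftplib) issues this same command sequence over a TCP session to port 21.

```python
# Sketch of the FTP control-channel dialogue for an anonymous download.
# The file name and password are hypothetical examples.

def ftp_download_commands(filename, user="anonymous", password="guest@"):
    """Return the command sequence a client sends for a binary download."""
    return [
        f"USER {user}",      # authenticate (anonymous access, if allowed)
        f"PASS {password}",
        "TYPE I",            # binary ("image") transfer mode
        "PASV",              # ask the server to open a data-connection port
        f"RETR {filename}",  # request the file transfer
        "QUIT",              # end the session
    ]

for cmd in ftp_download_commands("update.zip"):
    print(cmd)
```

The server runs the FTP daemon that answers each of these commands; the actual file bytes travel over a separate data connection.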

The Internet is also a good example of a distributed processing client-server computing relationship.
The client or front end typically handles user presentation functions, such as screen formatting, input
forms, and data editing. This is done with a browser, such as Netscape or Internet Explorer. Web
browsers send requests to web servers. When the browser requests data from the server, the server
responds, and the browser program receives a reply from the web server. The browser then displays
the HTTP data that was received. The server or back end handles the client's requests for Web pages
and provides HTTP or WWW services.
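
The browser request described above is, at bottom, a short piece of text sent over a TCP session. The sketch below builds a minimal HTTP/1.1 GET request; the host name is a placeholder, and a real browser adds many more headers.

```python
# Minimal sketch of the request a web browser sends to a web server.

def http_get_request(host, path="/"):
    """Build a minimal HTTP/1.1 GET request for the given host and path."""
    return (
        f"GET {path} HTTP/1.1\r\n"   # request line: method, resource, version
        f"Host: {host}\r\n"          # required header in HTTP/1.1
        "Connection: close\r\n"      # close the TCP session after the reply
        "\r\n"                       # blank line ends the request headers
    )

print(http_get_request("www.example.com", "/index.html"))
```

The server's reply follows the same text-based format: a status line, headers, a blank line, and then the HTML data the browser displays.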

Another example of a client-server relationship is a database server and a data entry or query client in
a LAN. The client or front end might be running an application written in the C or Java language, and
the server or back end could be running Oracle or other database management software. In this case,
the client would handle formatting and presentation tasks for the user. The server would provide
database storage and data retrieval services for the user.

In a typical file server environment, the client might have to retrieve large portions of the database
files to process the files locally. This retrieval of the database files can cause excess network traffic.
With the client-server model, the client presents a request to the server, and the server database engine
might process 100,000 records and pass only a few back to the client to satisfy the request. Servers are
typically much more powerful than client computers and are better suited to processing large amounts
of data. With client-server computing, the large database is stored, and the processing takes place on
the server. The client has to deal only with creating the query. A relatively small amount of data or
results might be passed across the network. This satisfies the client query and results in less usage of
network bandwidth. The graphic shows an example of client-server computing. Note that the
workstation and server normally would be connected to the LAN by a hub or switch.

The distribution of functions in client-server networks brings substantial advantages, but also incurs
some costs. Although the aggregation of resources on server systems brings greater security, simpler
access, and coordinated control, the server introduces a single point of failure into the network.
Without an operational server, the network cannot function at all. Additionally, servers require
trained, expert staff to administer and maintain them, which increases the expense of running the
network. Server systems require additional hardware and specialized software that adds substantially
to the cost.

3.5 Concept of Services on Servers


Network operating systems (NOSs) are designed to provide network processes to clients. Network
services include the World Wide Web (WWW), file sharing, mail exchange, directory services,
remote management, and print services. Remote management is a powerful service that allows
administrators to configure networked systems that are miles apart. It is important to understand that
these network processes are referred to as services in Windows 2000 and daemons in UNIX and
Linux. Network processes all provide the same functions, but the way processes are loaded and
interact with the NOS differs in each operating system.

Depending on the NOS, some of these key network processes may be enabled during a default
installation. Most popular network processes rely on the TCP/IP suite of protocols. Because TCP/IP is
an open, well-known set of protocols, TCP/IP-based services are vulnerable to unauthorized scans and
malicious attacks. Denial of service (DoS) attacks, computer viruses, and fast-spreading Internet
worms have forced NOS designers to reconsider which network services are started automatically.

Recent versions of popular NOSs, such as Windows 2000 and Red Hat Linux 7, restrict the number of
network services that are on by default. When deploying a NOS, key network services will need to be
enabled manually.

When a user decides to print in a networked printing environment, the job is sent to the appropriate
queue for the selected printer. Print queues stack the incoming print jobs and service them using a
first-in, first-out (FIFO) order. When a job is added to the queue, it is placed at the end of the waiting
list and printed last. The printing wait time can sometimes be long, depending on the size of the print
jobs at the head of the queue. A network print service will provide system administrators with the
necessary tools to manage the large number of print jobs being routed throughout the network. This
includes the ability to prioritize, pause, and even delete print jobs that are waiting to be printed.
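
The FIFO queueing behaviour described above can be sketched with a simple queue. The job names are invented; a real print service also tracks owners, priorities, and job sizes.

```python
from collections import deque

# Toy sketch of a FIFO print queue: jobs are serviced in arrival order,
# and an administrator can delete a job that is still waiting.

queue = deque()                      # empty print queue

def submit(job):
    queue.append(job)                # new jobs go to the end of the line

def cancel(job):
    queue.remove(job)                # administrator deletes a waiting job

def print_next():
    return queue.popleft()           # service the job at the head (FIFO)

submit("report.doc")
submit("poster.pdf")
submit("memo.txt")
cancel("poster.pdf")                 # admin removes a job before it prints
print(print_next())                  # report.doc is serviced first
print(print_next())                  # then memo.txt
```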

3.5.1 File sharing

The ability to share files over a network is an important network service. There are many file sharing
protocols and applications in use today. Within a corporate or home network, files are typically shared
using Windows File Sharing or the Network File System (NFS) protocol. In such environments, an
end user may not even know if a given file is on a local hard disk or on a remote server. Windows File
Sharing and NFS allow users to easily move, create, and delete files in remote directories.

3.5.2 File Transfer Protocol (FTP)

Many organizations make files available to remote employees, to customers, and to the general public
using FTP. FTP services are often made available to the public in conjunction with web services. For
example, a user may browse a website, read about a software update on a web page, and then
download the update using FTP. Smaller companies may use a single server to provide FTP and
HTTP services, while larger companies may choose to use dedicated FTP servers.

Although FTP clients must log on, many FTP servers are configured to allow anonymous access.
When users access a server anonymously, they do not need to have a user account on the system. The
FTP protocol also allows users to upload, rename, and delete files, so administrators must be careful
to configure an FTP server to control levels of access.

FTP is a session-oriented protocol. Clients must open an application layer session with the server,
authenticate, and then perform an action, such as download or upload. If the client session is inactive
for a certain length of time, the server disconnects the client. This inactive length of time is called an
idle timeout. The length of an FTP idle timeout varies depending on the software.

3.5.3 Web services

The World Wide Web is now the most visible network service. In less than a decade, the World Wide
Web has become a global network of information, commerce, education, and entertainment. Millions
of companies, organizations, and individuals maintain websites on the Internet. Websites are
collections of web pages stored on a server or group of servers.

The World Wide Web is based on a client/server model. Clients attempt to establish TCP sessions
with web servers. Once a session is established, a client can request data from the server. HTTP
typically governs client requests and server transfers. Web client software includes GUI web
browsers, such as Netscape Navigator and Internet Explorer.

Web pages are hosted on computers running web service software. The two most common web server
software packages are Microsoft Internet Information Services (IIS) and Apache Web Server.
Microsoft IIS runs on a Windows platform and Apache Web Server runs on UNIX and Linux
platforms. A Web service software package is available for virtually all operating systems currently in
production.

3.5.4 Domain Name System (DNS)

The Domain Name System (DNS) protocol translates an Internet name, such as www.cisco.com, into
an IP address. Many applications rely on the directory services provided by DNS to do this work.
Web browsers, e-mail programs, and file transfer programs all use the names of remote systems. The
DNS protocol allows these clients to make requests to DNS servers in the network for the translation
of names to IP addresses. Applications can then use the addresses to send their messages. Without this
directory lookup service, the Internet would be almost impossible to use.
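
Applications perform this lookup through the operating system's resolver. The sketch below wraps the standard library call; "localhost" resolves locally without contacting a DNS server, while a public name such as www.cisco.com would require network access.

```python
import socket

# A client-side name lookup as applications perform it: the resolver
# translates the name (via DNS servers or a local hosts file) into an
# IP address the application can then use to send its messages.

def resolve(name):
    """Translate a host name into an IPv4 address string."""
    return socket.gethostbyname(name)

print(resolve("localhost"))
```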

3.5.5 Dynamic Host Configuration Protocol (DHCP)

The purpose of Dynamic Host Configuration Protocol (DHCP) is to enable individual computers on
an IP network to learn their TCP/IP configurations from the DHCP server or servers. DHCP servers
have no information about the individual computers until information is requested. The overall
purpose of this is to reduce the work necessary to administer a large IP network. The most significant
piece of information distributed in this manner is the IP address that identifies the host on the
network. DHCP also allows for recovery and the ability to automatically renew network IP addresses
through a leasing mechanism. This mechanism allocates an IP address for a specific time period,
releases it, and then assigns a new IP address. DHCP allows all this to be done by a DHCP server
which saves the system administrator a considerable amount of time.
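
The leasing idea can be sketched as a pool of addresses handed out on demand. The address range here is invented, and a real DHCP server also distributes the subnet mask, default gateway, DNS servers, and a lease expiry time.

```python
# Toy sketch of DHCP-style leasing: the server assigns addresses from a
# pool, a renewing client keeps its address, and a released address
# returns to the pool for reuse.

class DhcpPool:
    def __init__(self):
        self.free = [f"192.168.1.{n}" for n in range(10, 14)]  # small pool
        self.leases = {}                      # MAC address -> leased IP

    def lease(self, mac):
        """Assign the client an address if it does not already hold one."""
        if mac not in self.leases:
            self.leases[mac] = self.free.pop(0)
        return self.leases[mac]               # renewing keeps the same IP

    def release(self, mac):
        """Return an expired or released address to the pool."""
        self.free.append(self.leases.pop(mac))

pool = DhcpPool()
print(pool.lease("00:0C:29:AA:BB:CC"))        # first client gets .10
print(pool.lease("00:0C:29:DD:EE:FF"))        # second client gets .11
pool.release("00:0C:29:AA:BB:CC")             # .10 returns to the pool
```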

4 The OSI Reference Model


When analyzing or learning a complex subject, it often helps to break it down into separate parts. The
Open Systems Interconnection (OSI) reference model is an industry standard framework that divides
the functions of networking into seven manageable layers. Each layer of the OSI model defines a
specific function of the network. The model was developed by the International Organization for
Standardization (ISO) in the 1980s and is recognized worldwide. The OSI reference model is used
universally as a method for teaching and understanding network functionality. Following the OSI
model when designing, building, upgrading, or troubleshooting will achieve greater compatibility and
interoperability between various types of network technologies.

There are seven layers in the OSI reference model. Each layer provides specific services to the layers
above and below it in order for the network to work effectively. At the top of the model is the
application layer, which enables the smooth usage of such applications as word processors
and web browsers. At the bottom is the physical side of the network. The physical side includes the
cabling (discussed earlier in this chapter), hubs, and other networking hardware.

How does the OSI model work?

A message begins at the top application layer and moves down the OSI layers to the bottom physical
layer. One example would be a sent e-mail message. The figure shows the progression of the e-mail as
it descends and information or headers are added. A header is layer-specific information that basically
explains what functions the layer carried out.

Conversely, at the receiving end, headers are stripped from the message as it travels up the
corresponding layers and arrives at its destination. The process of data being encapsulated on the
sending end and data being de-encapsulated on the receiving end is the function of the OSI model.
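
The encapsulation and de-encapsulation process described above can be sketched as each layer prepending, and later stripping, its own header. The header names below are invented labels; real headers carry fields such as port numbers, IP addresses, and MAC addresses.

```python
# Minimal sketch of encapsulation (sending host, top to bottom) and
# de-encapsulation (receiving host, bottom to top).

LAYERS = ["transport", "network", "data-link"]

def encapsulate(data):
    for layer in LAYERS:                      # moving down the stack
        data = f"[{layer}-header]{data}"
    return data

def de_encapsulate(data):
    for layer in reversed(LAYERS):            # moving up the stack
        data = data.removeprefix(f"[{layer}-header]")
    return data

frame = encapsulate("Hello!")                 # what is placed on the wire
print(frame)
print(de_encapsulate(frame))                  # the original message again
```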

Communication across the layers of the reference model is achieved because of special networking
software programs called protocols. Protocols are discussed in the sections that follow. The OSI
model was intended to be a model for developing networking protocols. However, most of the
protocols now used on LANs do not correspond exactly to these layers. Some protocols fall neatly
within the boundaries between these layers, while others overlap or provide services that span
several layers. This explains the meaning of “reference” as it is used in conjunction with the OSI
reference model.

Tip: Mnemonics to help remember the seven layers of the OSI model are “All People Seem To Need
Data Processing” and “Please Do Not Throw Sausage Pizza Away”.

4.1 Advantages and Use of The OSI Model

The OSI model is used for the following reasons:

 It divides the aspects of network operation into less complex elements.
 It enables engineers to specialize design and development efforts on specific functions.
 It prevents changes in one area from affecting other areas, so that each area can evolve more
quickly.
 It allows network designers to choose the right networking devices and functions for each
layer.
 It helps with testing and troubleshooting the network. When testing or troubleshooting a
network, start with Layer 1. If there are no problems with this layer, proceed to the next layer
and so on until the problem is found or the network can be shown to be free of problems.

4.2 The Seven Layers


The bottom layer is Layer 1 and deals with the actual transmission of signals throughout the network.
As data is moved from the bottom of the model to the top, it is moved from hardware to software
components until it reaches Layer 7 called the application layer. In order for two devices on the
network to communicate, they both use the OSI model to ensure that data is sent and received in the
same manner. Data being received moves through the layers from bottom to top and data being
transmitted moves through from top to bottom. This method ensures common grounds for devices to
communicate.

The seven layers of the OSI model are the following:

 Application (Layer 7) - The main function of the application layer is to provide network
services to the end user applications. These network services include file access, applications,
and printing.
 Presentation (Layer 6) - This layer provides formatting services to the application layer by
ensuring that data that arrives from another computer can be used by an application. For
instance, it translates characters from mainframe computers into characters for PCs, so that an
application can read the data. This layer is also responsible for encryption or
compression/decompression of data.
 Session (Layer 5) - The session layer establishes, maintains, and manages conversations,
called sessions, between two or more applications on different computers. The session layer is
involved in keeping the lines open for the duration of the session and disconnecting them at
the conclusion.
 Transport (Layer 4) - This layer takes the data file and divides it up into segments to
facilitate transmission. This layer is also responsible for reliable transport between the two
hosts.
 Network (Layer 3) - The network layer adds logical or network addresses, such as Internet
Protocol (IP) addresses, to information that passes through it. With the addition of this
addressing information, the segments are now called packets. This layer is where the best path
is determined to move data from one network to another. Routers perform this operation and
are thus referred to as Layer 3 devices.
 Data Link (Layer 2) - This layer deals with error notification, topology, and flow control.
This layer recognizes special identifiers that are unique to each host, such as Burned in
Addresses (BIA) or Media Access Control (MAC) addresses. The packets from Layer 3 are
placed into frames containing these physical addresses of the source and destination hosts.
 Physical (Layer 1) - This layer includes the media such as twisted-pair, coaxial, and fiber-
optic cable to transmit the data frames. This layer defines the electrical, mechanical,
procedural, and functional means for activating, maintaining and deactivating the physical
link between end systems. If the link between hosts or networks is severed or experiencing
problems, data may not transmit. That is why the health of the cables is vital for every
network.

4.2.1 Physical Layer Functions

4.2.1.1 Working on the Physical Layer

Since the physical layer includes all the media upon which the entire network is based, it is the layer
with which the cable installer will be most concerned. Media includes twisted pair, fiber-optic, and
coaxial cable as well as free space for waves from radio, infrared, and other wireless technologies.
This section of the chapter discusses the functions of the physical layer, the role of repeaters and hubs,
the effect of wiring errors, and how to avoid common wiring errors during installation.

Encoding is another function of Layer 1. Encoding is the conversion of the information into bits (0s
and 1s). It is these bits that are then transmitted on the cable. When the source host sends data such as
an e-mail message with its addressing information, the physical layer converts the data into bits and
then transmits those bits over the medium. When the destination host receives these bits, Layer 1
converts the bits back into the original format of the e-mail message.

Two types of LAN devices that operate at this layer are repeaters and hubs. Their role is to regenerate
the signals that pass through them.

4.2.1.2 Repeaters

As a signal travels on a wire, it grows weaker. This is referred to as attenuation. To keep the signal
from becoming unrecognizable to the receiving host, a repeater is placed on the wire. A repeater is a
networking device that takes in the weakened signal, cleans it up, and regenerates it before sending it
on its way. Repeaters are generally used near the outer edges of networks where attenuation is most
likely to occur.

4.2.1.3 Hubs
Like repeaters, active hubs also regenerate signals. The difference between the two is that hubs have
many more ports than repeaters. Hence hubs are often called multi-port repeaters. Unlike repeaters,
hubs are often used as the central point in a star topology or as the secondary points in an extended
star topology to join segments of a network. One drawback to using hubs as the central points in
networks is that they forward all data to every host on the network. Since the speed of a network is
dependent on the amount of traffic that is on the wire, unnecessary traffic results in a slowdown of the
network. Therefore, networking devices that can filter traffic will help cut down on the amount of
traffic between segments of a network. The devices that can filter traffic are Layer 2 and Layer 3
devices.

4.2.1.4 The Effect of Wiring Errors

When there are problems with a network, troubleshooting should begin with Layer 1. It is estimated
that about three-quarters of all network problems are Layer 1 problems. Many of these could be
avoided when installing cable. Wiring is a critical component in the process of transmitting data
across a network. Common installation errors, and the effects they have on a network, can be
avoided by implementing proper wiring techniques.
One of the most common wiring errors by cable installers is laying cables near other wires,
particularly power cables, or sources of power. Power cables emit background noise, which can
interfere with the signals on network cables. Other sources of electromagnetic noise like fluorescent
lights and machines can also cause problems with signals on wires.

Another common error is improperly terminating wires with jacks and plugs. This can lead to the
wires emitting signals that interfere with the signals on other wires, a condition called crosstalk. When
errors are caused by crosstalk or other interference, it means that data is lost and must be
retransmitted.

Finally, wires can be damaged as they are pulled into place. Pulling cables too tightly, nicking them,
or bending them can cause problems that may not be apparent immediately, but can cause the
electrical properties of the wire to change slowly over time.

All of these problems can be avoided during installation. A professional cable installer will take into
account the location of power cables and other electromagnetic sources, take care when terminating
wires to prevent crosstalk, and take care when pulling wire. It is important that these errors are
avoided when installing cable.

4.2.2 Data Link Layer Functions

Unlike Layer 1 networking devices, Layer 2 LAN devices help filter network traffic by looking at the
MAC addresses in the frame. These MAC addresses are physical addresses burned into the network
interface cards (NICs) on PCs and devices. The data link layer devices reference these addresses when
performing their functions. The two types of LAN network devices that look at the MAC addresses are
bridges and switches. This section discusses the functions of both and how they are used to filter
traffic and reduce congestion on a network.

4.2.2.1 Bridges

The existence of a physical address or media access control (MAC) address for each computer makes
it possible to use a networking device that can read these addresses to filter traffic. Filtering traffic
helps to solve the problem of network congestion. One device that can read MAC addresses is called a
bridge. A bridge keeps a table with all MAC addresses on the network. This table enables the bridge
to recognize which MAC addresses are on each side of the bridge. A bridge works by keeping traffic
destined for one side of the bridge to that side alone. Since frames are not forwarded throughout the
whole network and are contained in the appropriate network segment, network traffic is minimized.
Less network traffic means less congestion, which results in a more efficient and faster network.
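
The filtering decision described above can be sketched with the bridge's MAC table. The MAC addresses and port numbers are invented; a real bridge also learns this table dynamically by examining the source address of each frame it sees.

```python
# Toy sketch of a bridge's forwarding decision based on its MAC table.

mac_table = {
    "AA:AA:AA:AA:AA:AA": 1,   # hosts learned on side (port) 1
    "BB:BB:BB:BB:BB:BB": 1,
    "CC:CC:CC:CC:CC:CC": 2,   # host learned on side (port) 2
}

def decide(src_mac, dst_mac):
    src_port = mac_table[src_mac]
    dst_port = mac_table.get(dst_mac)
    if dst_port is None:
        return "flood"                 # unknown destination: send everywhere
    if dst_port == src_port:
        return "filter"                # same side: do not forward the frame
    return f"forward to port {dst_port}"

print(decide("AA:AA:AA:AA:AA:AA", "BB:BB:BB:BB:BB:BB"))  # filter
print(decide("AA:AA:AA:AA:AA:AA", "CC:CC:CC:CC:CC:CC"))  # forward to port 2
```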

Less traffic can also mean a decrease in collisions. Collisions occur when data packets collide on the
media. The most common type of network is Ethernet. In an Ethernet network, a complete data frame
is transmitted one at a time. Only when that frame transmission is complete can a new frame begin. If
more than one frame is sent at a time, they may collide and the contents are destroyed. The frames
have to be resent, tying up the network and possibly causing other collisions. The number of
collisions may become so great that the network uses most of its resources to detect and recover from
collisions. This results in excessive network congestion and significant slow down of the network. To
solve this problem, bridges and switches are used to create several collision domains rather than just a
single large one.

4.2.2.2 Switches
A switch is sometimes referred to as a multi-port bridge, yet its functions are far more advanced. A
switch can divide the network into many subnetworks, or smaller networks, depending on the number
of ports on the switch. A switch helps to keep network communications from reaching beyond their
destination.

A switch allows multiple connections within it. When two hosts are communicating, they use only a
pair of ports. This allows other hosts on other ports to communicate without causing collisions or
affecting other transmissions.

Switches are also useful because several ports can be grouped together into a virtual local-area
network (VLAN). VLANs can be used to secure certain parts of the network or to manage
departments within a company. For instance, a company may group all accounting PCs and relevant
servers on the same VLAN so that they can communicate with each other and not allow any other
user access to the information.

While switches and bridges are used to filter network traffic based on MAC addresses, Layer 3
devices look at the network addresses to determine the path that data will take.

4.2.3 Network Layer Functions

The other layers of the OSI model deal with network addressing, reliable delivery of data, managing
connections, data formatting, and supporting applications. The first of these layers to be discussed is
Layer 3, the network layer.

The network layer deals with higher level addressing schemes and path determination. The network
layer address is the Internet Protocol (IP) address of a computer. Each computer on a network has an
IP address to identify its location on the network. It indicates to which network and subnetwork a
computer belongs. The IP address can be changed when a computer is moved to another location. On
the other hand, the MAC address, Layer 2, is burned into the NIC at the factory so it is a permanent
address that never changes. Even if the host moves to another location on the network or to another
network altogether, the MAC address is the same.

In addition to addressing, another function of the network layer is to help determine the best path that
data will take through the LAN or a WAN. This is achieved by using a device called a router.

4.2.3.1 Routers

A router is a Layer 3 networking device that connects network segments or entire networks. It is
considered more intelligent than Layer 2 devices because it makes decisions based on information
received about the network as a whole. A router examines the IP address of the destination computer
to determine which path is best to reach the destination. Path determination is the process that the
router uses to select the next hop, that is, the path to the next connected router that will move the data
toward its destination. This process is known as routing.

After routers determine the path, the transport layer is responsible for reliable data delivery.

4.2.4 Transport Layer Functions

The transport layer, Layer 4, is responsible for segmenting the data file and regulating the flow of
information from source to destination. This end-to-end control is provided using a variety of
techniques, such as sequence numbers, acknowledgements, and windowing.

Since data packets may be sent by different paths and arrive at the destination at different times,
sequence numbers ensure that the data file is reassembled into an exact copy of the file that was sent.
When the data file is segmented, each segment receives a sequence number. When the data segments
reach the destination, they are sorted in order according to the sequence numbers so that the original
data file can be reassembled.
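
The reassembly step described above can be sketched as sorting arrived segments by sequence number. The message and segment sizes are invented for illustration.

```python
# Sketch of reassembly with sequence numbers: segments may arrive out of
# order after taking different paths, but sorting by sequence number
# rebuilds the original data.

def reassemble(segments):
    """segments: list of (sequence_number, data) pairs in arrival order."""
    return "".join(data for seq, data in sorted(segments))

# Segments arrive out of order at the destination.
arrived = [(2, "wor"), (1, "lo, "), (3, "ld!"), (0, "Hel")]
print(reassemble(arrived))   # Hello, world!
```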

Windowing is a flow control mechanism used in conjunction with acknowledgements. First a window
size, that is, the number of bytes that is sent at any one time, is agreed upon by both the sending and
receiving hosts. After those bytes have been sent, the sending host must receive an acknowledgement
from the receiving host before it can send any more segments. If for some reason the destination host
does not receive the information, it does not send an acknowledgment. Because the source does not
receive an acknowledgment, it knows that the information should be retransmitted and that the
transmission rate should be slowed. The phrase "quality of service" is often used to describe the
purpose of Layer 4 because of its use of windowing and acknowledgements.

4.2.5 Session Layer Functions

Whereas the transport layer is responsible for the reliable delivery of the data, the session layer, Layer
5, is responsible for managing the transmission session. The session layer sets up, maintains, and then
terminates sessions between hosts on the network. This includes starting, stopping, and
resynchronizing two computers as they communicate, a process called dialog control. Another
primary role of the session layer is to provide services to the presentation layer.

4.2.6 Presentation Layer Functions

After a session has been established, data passes to Layer 6, the presentation layer. This layer
allows communication between applications on diverse computer systems to occur in a manner
that is transparent to the applications. It does so by reformatting the data. For example, data
received from a mainframe computer uses EBCDIC characters that cannot be read by a PC. The
presentation layer translates the EBCDIC characters into ASCII, the format used by PCs.

The presentation layer also performs data compression and encryption functions. In compression,
frequently repeated words or combinations of characters are represented by shorter codes, thus
reducing the size of the file. When the destination host receives the compressed file, it uses a
compression key to decompress the file to its original size.

Encryption protects data from being read by unauthorized viewers. Encryption is crucial for sensitive
data, such as financial transactions, personal information, or company trade secrets that are being
transmitted to a computer on the same network or across the Internet. When the destination host
receives the encrypted file, it uses a key to decrypt the file. After the data has been decrypted,
decompressed, and formatted, it passes to the application layer.
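
Two of the presentation-layer jobs described above, character translation and compression, can be sketched with the Python standard library. This uses codec "cp500" as one common EBCDIC variant and zlib as a stand-in compression scheme; the sample text is invented.

```python
import zlib

# Sketch of presentation-layer reformatting: EBCDIC/ASCII translation
# and compression/decompression.

text = "HELLO FROM THE MAINFRAME"

# Character translation: the same text has different byte values in
# EBCDIC and ASCII, so it must be converted before a PC can read it.
ebcdic_bytes = text.encode("cp500")      # as a mainframe would store it
ascii_bytes = text.encode("ascii")       # as a PC application expects it
print(ebcdic_bytes != ascii_bytes)       # True: the byte encodings differ
print(ebcdic_bytes.decode("cp500"))      # translated back for the PC user

# Compression: repeated patterns shrink, and decompression restores them.
data = b"abc" * 100
compressed = zlib.compress(data)
print(len(compressed) < len(data))       # True: far fewer bytes to send
print(zlib.decompress(compressed) == data)
```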

4.2.7 Application Layer Functions

The uppermost layer of the OSI model, Layer 7, is the application layer. This is the layer closest to the
end user. The application layer does not provide services to any other OSI layer. Instead it provides
services to applications used by the end user. This includes spreadsheet programs, word processing
programs, banking terminal programs, e-mail, Telnet, file transfer protocol (FTP) programs, and
hypertext transfer protocol (HTTP) programs.

5 LAN Architectures

5.1 Ethernet

The Ethernet architecture is now the most popular type of LAN architecture. Architecture refers to the
overall structure of a computer or communication system. It determines the capabilities and
limitations of the system. The Ethernet architecture is based on the IEEE 802.3 standard. The IEEE
802.3 standard specifies that a network implements the Carrier Sense Multiple Access with Collision
Detection (CSMA/CD) access control method. CSMA/CD uses baseband transmission over coaxial or
twisted-pair cable that is laid out in a bus topology (a linear or star bus). Standard transfer rates are 10
Mbps or 100 Mbps, but new standards provide for Gigabit Ethernet, which is capable of attaining
speeds up to 1 Gbps over fiber-optic cable or other high-speed media.
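The CSMA/CD access method can be sketched as a loop: sense the medium, transmit, and on a collision back off for a random interval before retrying. This toy model (the function and its collision callback are illustrative, not real NIC behavior) shows the binary exponential backoff idea:

```python
import random

def try_transmit(collision_on, max_attempts=16):
    """Toy CSMA/CD sketch. collision_on(attempt) returns True if that
    transmission attempt collides. On a collision the station backs off
    for a random number of slot times drawn from an exponentially
    growing window, then retries; after max_attempts it gives up."""
    for attempt in range(1, max_attempts + 1):
        if not collision_on(attempt):
            return attempt                  # frame delivered on this attempt
        window = 2 ** min(attempt, 10)      # binary exponential backoff
        slots = random.randrange(window)    # wait 0 .. window-1 slot times
        _ = slots                           # a real NIC would pause here
    return None                             # too many collisions: give up

print(try_transmit(lambda attempt: attempt < 3))   # succeeds on attempt 3
```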

5.1.1 10BASE-T
Currently 10BASE-T is one of the most popular Ethernet implementations. It uses a star topology.
The 10 stands for the common transmission speed 10 Mbps, the BASE stands for baseband mode, and
the T stands for twisted-pair cabling. The term Ethernet cable is used to describe the unshielded
twisted-pair (UTP) cabling that is generally used in this architecture. Shielded twisted-pair (STP) also
can be used. 10BASE-T and its cousin, 100BASE-X, make networks that are easy to set up and
expand.
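The naming convention generalizes to the other Ethernet designations in this chapter. A small, hypothetical helper can decode the hyphenated forms into speed, signalling mode, and medium code:

```python
import re

def parse_designation(name):
    """Decode an IEEE shorthand such as '10BASE-T' into its parts:
    speed in Mbps, signalling mode, and a medium code (T for
    twisted pair, F for fiber, and so on)."""
    speed, mode, medium = re.fullmatch(
        r"(\d+)(BASE|BROAD)-(\w+)", name).groups()
    signalling = "baseband" if mode == "BASE" else "broadband"
    return int(speed), signalling, medium

print(parse_designation("10BASE-T"))     # (10, 'baseband', 'T')
print(parse_designation("100BASE-TX"))   # (100, 'baseband', 'TX')
```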

 Advantages of 10BASE-T
Networks based on the 10BASE-T specifications are relatively inexpensive. Although a hub
is required to connect more than two computers, small hubs are available at a low cost. The
10BASE-T network cards are inexpensive and widely available.

Twisted-pair cabling, especially the UTP most commonly used, is thin, flexible, and easier
to work with than coaxial. It uses modular RJ-45 plugs and jacks, so it is very easy to connect
the cable to the NIC or hub.

Another big advantage of 10BASE-T is its ability to be upgraded. By definition a 10BASE-T
network runs at 10 Mbps. However, by using Category 5 cable or above and 10/100 Mbps
dual-speed NICs, an upgrade to 100 Mbps can be achieved by simply replacing the hubs.

 Disadvantages of 10BASE-T
The maximum length for a 10BASE-T segment (without repeaters) is only 100 meters, which
is about 328 feet. The UTP used in such a network is more vulnerable to EMI and attenuation
than other cable types. Finally, the cost of the required hub adds to the expense of the network.

The high-bandwidth demands of many modern applications such as live video conferencing and
streaming audio have created a need for speed. Many networks require more throughput than is
possible with 10 Mbps Ethernet. This is where 100BASE-X, also called Fast Ethernet, becomes
important.

5.1.2 100BASE-X

100BASE-X is the next evolution of 10BASE-T. It is available in several different varieties. It can be
implemented over 4-pair Category 3, 4, or 5 UTP (100BASE-T4). It can also be implemented over 2-
pair Category 5 UTP or STP (100BASE-TX), or as Ethernet over 2-strand fiber-optic cable
(100BASE-FX).

 Advantages of 100BASE-X
Regardless of the implementation, the big advantage of 100BASE-X is its high-speed
performance. At 100 Mbps, transfer rates are 10 times that of 10BASE2/10BASE5 (both
outdated technologies), and 10BASE-T.

Because it uses twisted-pair cabling, 100BASE-X also shares the same advantages enjoyed by
10BASE-T. These include its low cost, flexibility, and ease of implementation and expansion.

 Disadvantages of 100BASE-X
100BASE-X shares the disadvantages inherent to twisted-pair cabling of 10BASE-T, such as
susceptibility to EMI and attenuation. 100 Mbps NICs and hubs are generally somewhat more
expensive than those designed for 10 Mbps networks, but prices have dropped as 100BASE-
X has gained in popularity. Fiber-optic cable remains an expensive cabling option, not so
much because of the cost of the cable itself, but because of the training and expertise required
to install it.

5.1.3 1000BASE-T

1000BASE-T is commonly known as Gigabit Ethernet. The 1000BASE-T architecture supports data
transfer rates of 1 Gbps, which is remarkably fast. Gigabit Ethernet is, for the most part, a LAN
architecture although its implementation over fiber-optic cable makes it suitable for metropolitan-area
networks (MANs).

 Advantages of 1000BASE-T
The greatest advantage of 1000BASE-T is the performance. At 1 Gbps, it is 10 times as fast
as Fast Ethernet and 100 times as fast as standard Ethernet. This makes it possible to
implement bandwidth-intensive applications, such as live video, throughout an intranet.
 Disadvantages of 1000BASE-T
The main disadvantages associated with 1000BASE-T are those common to all UTP
networks, as detailed in the sections on 10BASE-T and 100BASE-X.

5.2 Token Ring

IBM originally developed Token Ring as a reliable network architecture based on the token-passing
access control method. It is often integrated with IBM mainframe systems such as the AS/400. It was
intended to be used with PCs, minicomputers, and mainframes. It works well with Systems Network
Architecture (SNA), which is the IBM architecture used for connecting to mainframe networks.

The Token Ring standards are defined in IEEE 802.5. It is a prime example of an architecture whose
physical topology is different from its logical topology. The Token Ring topology is referred to as a
star-wired ring because the outer appearance of the network design is a star. The computers connect to
a central hub, called a multistation access unit (MSAU). Inside the device, however, the wiring forms
a circular data path, creating a logical ring.

Token Ring is so named because of its logical topology and its media access control method of token
passing. The transfer rate for Token Ring can be either 4 Mbps or 16 Mbps.
Token Ring is a baseband architecture that uses digital signaling. In that way it resembles Ethernet,
but the communication process is quite different in many respects. Token Ring is an active topology.
As the signal travels around the circle to each network card, it is regenerated before being sent on its
way.

In an Ethernet network, all computers are created physically equal. At the software level, some may
act as servers and control network accounts and access, but the servers communicate physically on the
network in exactly the same way as the clients.

5.2.1 The Monitor of the Ring

In a Token Ring network, the first computer that comes online becomes the “monitor” and must keep
track of how many times each frame circles the ring. It has the responsibility of ensuring that only one
token is out on the network at a time.

The monitor computer periodically sends a signal called a beacon, which circulates around the ring.
Each computer on the network looks for the beacon. If a computer does not receive the beacon from
its nearest active upstream neighbor (NAUN) when expected, it puts a message on the network that
notifies the monitoring computer that the beacon was not received, along with its own address and
that of the NAUN that failed to send when expected. In most cases, this will cause an automatic
reconfiguration that restores communications.

5.2.2 Data Transfer

A Token Ring network uses a token (that is, a special signal) to control access to the cable. A token is
initially generated when the first computer on the network comes online. When a computer wants to
transmit, it waits for and then takes control of the token when it comes its way. The token can travel
in either direction around the ring, but only in one direction at a time. The hardware configuration
determines the direction of travel.
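Token passing can be sketched as a loop in which the token visits each station in ring order and only the station holding it may transmit. This is a toy illustration of the access method, not the actual IEEE 802.5 frame exchange:

```python
from itertools import cycle

def token_path(stations, sender):
    """Return the stations the token visits, in ring order, until it
    reaches the station that wants to transmit (a toy model)."""
    visited = []
    for station in cycle(stations):
        visited.append(station)       # token arrives at this station
        if station == sender:
            return visited            # holder transmits; stop the trace

print(token_path(["A", "B", "C"], "C"))   # ['A', 'B', 'C']
```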

5.3 Fiber Distributed Data Interface (FDDI)

Fiber Distributed Data Interface (FDDI) is a type of Token Ring network. Its implementation and
topology differ from the IBM Token Ring LAN architecture, which IEEE 802.5 governs. FDDI is
often used for metropolitan-area networks (MANs), which typically span a city, or for larger LANs,
such as those connecting several buildings in an office complex or campus.

As its name implies, FDDI runs on fiber-optic cable, and thus combines high-speed performance with
the advantages of the token-passing ring topology. FDDI runs at 100 Mbps, and its topology is a dual
ring. The outer ring is called the primary ring and the inner ring is called the secondary ring.

Normally, traffic flows only on the primary ring. If it fails, then the data automatically flows onto the
secondary ring in the opposite direction. When this occurs, the network is said to be in a wrapped
state. This provides fault tolerance for the link.

Computers on a FDDI network are divided into two classes, as follows:

 Class A computers connect to the cables of both rings.
 Class B computers connect to only one ring.

A FDDI dual ring supports a maximum of 500 nodes per ring. The total length of each cable ring is
100 kilometers, or 62 miles. A repeater (a device that regenerates signals) is needed every
2 kilometers, which is why FDDI is not considered to be a WAN link.
The specifications described so far refer to a FDDI that is implemented over fiber-optic cable. It is
also possible to use the FDDI technology with copper cabling. This is called Copper Distributed Data
Interface (CDDI). The maximum distances for CDDI are considerably lower than those for FDDI.

5.3.1 Advantages of FDDI

FDDI combines the advantages of token passing on the ring topology with the high speed of fiber-
optic transmission. The dual ring topology provides redundancy and fault tolerance. The fiber-optic
cable is not susceptible to EMI and noise, and it is more secure than copper wiring. It can send data
for greater distances between repeaters than Ethernet and traditional Token Ring.

5.3.2 Disadvantages of FDDI

As always, high speed and reliability come with a price. FDDI is relatively expensive to implement,
and the distance limitations, though less restrictive than those of other LAN links, make it unsuitable
for true WAN communications.

6 Networking Protocols

6.1 What Is a Protocol?

A protocol is a controlled sequence of messages that are exchanged between two or more systems to
accomplish a given task. Protocol specifications define this sequence together with the format or
layout of the messages that are exchanged. Protocols use control structures in each system to
coordinate the exchange of information between the systems. They operate like a set of interlocking
gears. Computers can precisely track protocol connection points as they move through the sequence of
exchanges. Timing is crucial to network operation. Protocols require messages to arrive within certain
time intervals, so systems maintain one or more timers during protocol execution. They also take
alternative actions if the network does not meet the timing rules. To do their work, many protocols
depend on the operation of other protocols in the group or suite of protocols. Protocol functions
include the following:

 Identifying errors
 Applying compression techniques
 Deciding how data is to be sent
 Addressing data
 Deciding how to announce sent and received data

6.1.1 Routed/Routable and Non-Routable Protocols

IP is a network layer protocol, and because of that, it can be routed over an internetwork, which is a
network of networks. Protocols that provide support for the network layer are called routed or
routable protocols. Other routed protocols include IPX/SPX and AppleTalk.

Protocols such as IP, IPX/SPX and AppleTalk provide Layer 3 support and are, therefore, routable.
However, there are protocols that do not support Layer 3; these are classed as non-routable protocols.
The most common of these non-routable protocols is NetBEUI. NetBEUI is a small, fast, and efficient
protocol that is limited to running on one segment.

In order for a protocol to be routable, it must provide the ability to assign a network number,
as well as a host number, to each individual device. Some protocols, such as IPX, only
require that you assign a network number, because they use a host's MAC address for the
physical number. Other protocols, such as IP, require that you provide a complete address, as
well as a subnet mask. The network address is obtained by ANDing the address with the
subnet mask.
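The ANDing operation described above can be shown with Python's ipaddress module; the address and mask below are illustrative:

```python
import ipaddress

# Network number = address AND subnet mask.
addr = ipaddress.IPv4Address("192.168.10.37")
mask = ipaddress.IPv4Address("255.255.255.0")

# Bitwise AND of the 32-bit values yields the network address.
network = ipaddress.IPv4Address(int(addr) & int(mask))
print(network)   # 192.168.10.0
```

With a /24 mask, every host whose first three octets match belongs to the same network, which is exactly what a router checks when deciding whether to forward a packet.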

6.1.2 Routing Protocols

Routing protocols (Note: Do not confuse with routed protocols.) determine the paths that routed
protocols follow to their destinations. Examples of routing protocols include the Routing Information
Protocol (RIP), the Interior Gateway Routing Protocol (IGRP), the Enhanced Interior Gateway
Routing Protocol (EIGRP), and Open Shortest Path First (OSPF).

Routing protocols enable routers that are connected to create a map, internally, of other routers in the
network or on the Internet. This allows routing (i.e. selecting the best path, and switching) to occur.
Such maps become part of each router's routing table.

Routers use routing protocols to exchange routing tables and to share routing information. Within a
network, the most common protocol used to transfer routing information between routers, located on
the same network, is Routing Information Protocol (RIP). This Interior Gateway Protocol (IGP)
calculates distances to a destination host in terms of how many hops (i.e. how many routers) a packet
must pass through. RIP enables routers to update their routing tables at programmable intervals,
usually every 30 seconds. One disadvantage of routers that use RIP is that they are constantly
connecting to neighboring routers to update their routing tables, thus creating large amounts of
network traffic.

RIP allows routers to determine which path to use to send data. It does so by using a concept known
as distance-vector. Whenever data goes through a router, and thus, through a new network number,
this is considered to be equal to one hop. A path which has a hop count of four indicates that data
traveling along that path would have to pass through four routers before reaching the final destination
on the network. If there are multiple paths to a destination, the path with the least number of hops
would be the path chosen by the router.

Because hop count is the only routing metric used by RIP, it doesn't necessarily select the fastest path
to a destination. A metric is a measurement for making decisions. You will soon learn that other
routing protocols use many other metrics besides hop count to find the best path for data to travel.
Nevertheless, RIP remains very popular, and is still widely implemented. This may be due primarily
to the fact that it was one of the earliest routing protocols to be developed.

One other problem posed by the use of RIP is that sometimes a destination may be located too far
away to be reachable. When using RIP, the maximum number of hops that data can be forwarded
through is fifteen. The destination network is considered unreachable if it is more than fifteen router
hops away.
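The hop-count selection RIP performs can be sketched with a breadth-first search over a hypothetical router topology. This illustrates the metric and the 15-hop limit only, not the actual RIP message exchange:

```python
from collections import deque

def fewest_hops(links, source, dest, max_hops=15):
    """Count the fewest router hops from source to dest; anything
    past max_hops (15 for RIP) is treated as unreachable."""
    frontier = deque([(source, 0)])
    seen = {source}
    while frontier:
        node, hops = frontier.popleft()
        if node == dest:
            return hops
        if hops == max_hops:
            continue                    # 16 hops means "infinity" in RIP
        for neighbor in links.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return None                         # destination unreachable

# A small made-up topology: A reaches E in 3 hops via B or D.
links = {"A": ["B", "D"], "B": ["C"], "D": ["C"], "C": ["E"]}
print(fewest_hops(links, "A", "E"))     # 3
```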

6.1.3 Routing Encapsulation Sequence

At the data link layer, an IP datagram is encapsulated into a frame. The datagram, including the IP
header, is treated as data. A router receives the frame, strips off the frame header, then checks the
destination IP address in the IP header. The router then looks for that destination IP address in its
routing table, encapsulates the data in a data link layer frame, and sends it out to the appropriate
interface. If it does not find the destination IP address, it may drop the packet.

Routers are capable of concurrently supporting multiple independent routing protocols, and of
maintaining routing tables for several routed protocols. This capability allows a router to deliver
packets from several routed protocols over the same data links.
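The encapsulation sequence can be sketched with toy data structures; the classes and the forwarding helper below are illustrative, not a real router implementation:

```python
from dataclasses import dataclass

@dataclass
class Datagram:          # Layer 3: IP header fields plus payload
    src_ip: str
    dst_ip: str
    payload: bytes

@dataclass
class Frame:             # Layer 2: the whole datagram rides as frame data
    src_mac: str
    dst_mac: str
    data: Datagram

def route(frame, routing_table, my_mac):
    """Toy forwarding step: strip the frame header, look up the
    destination IP, and re-encapsulate toward the next hop.
    Returns None (the packet is dropped) when no route exists."""
    dgram = frame.data                          # strip the frame header
    next_hop_mac = routing_table.get(dgram.dst_ip)
    if next_hop_mac is None:
        return None                             # no route: drop
    return Frame(src_mac=my_mac, dst_mac=next_hop_mac, data=dgram)
```

Note that the datagram itself is unchanged hop to hop; only the Layer 2 frame around it is rebuilt at each router.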

6.2 Transmission Control Protocol/Internet Protocol (TCP/IP)

The Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols has become the
dominant standard for internetworking. Originally defined by researchers in the United States
Department of Defense (DoD), TCP/IP represents a set of public standards that specify how packets
of information are exchanged between computers over one or more networks.

The TCP/IP protocol suite includes a number of major protocols and each performs a specific
function.

6.2.1 Application Protocols


The application layer is the fourth layer in the TCP/IP model. It provides the starting point for any
communication session.

6.2.1.1 Hypertext Transfer Protocol (HTTP)

HTTP governs how files such as text, graphics, sounds, and video are exchanged on the Internet or
World Wide Web (WWW). HTTP is an application layer protocol. The Internet Engineering Task
Force (IETF) developed the standards for HTTP. HTTP 1.1 is the current version. As its name
implies, HTTP is used to exchange hypertext files. These files can include links to other files. A web
server runs an HTTP service or daemon (a program that services HTTP requests). These requests are
transmitted by HTTP client software, which is another name for a web browser.
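As a sketch, the request an HTTP client sends is plain text: a request line, headers, and a blank line. The host name below is illustrative:

```python
# A minimal HTTP/1.1 request as a browser would send it to a web
# server; CRLF pairs separate the lines, and a blank line ends the
# header section.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)
print(request)
```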

6.2.1.2 Hypertext Markup Language (HTML)

HTML is a page description language. Web designers use it to indicate to web browser software how
the page should look. HTML includes tags to indicate boldface type, italics, line breaks, paragraph
breaks, hyperlinks, insertion of tables, and so on.

6.2.1.3 Telnet

Telnet enables terminal access to local or remote systems. The telnet application is used to access
remote devices for configuration, control, and troubleshooting.

6.2.1.4 File Transfer Protocol (FTP)

FTP is an application that provides services for file transfer and manipulation. FTP uses the session
layer to allow multiple simultaneous connections to remote file systems.

6.2.1.5 Simple Mail Transfer Protocol (SMTP)

SMTP provides messaging services over TCP/IP and supports most Internet e-mail programs.

6.2.1.6 Domain Name System (DNS)

DNS provides access to name servers where network names are translated to the addresses used by
Layer 3 network protocols. DNS greatly simplifies network usage by end users.

6.2.2 Transport Protocols

The transport layer is the third layer in the TCP/IP model. It provides an end-to-end management of
the communications session.
 Transmission Control Protocol (TCP) – TCP is the primary Internet protocol for the
reliable delivery of data. TCP includes facilities for end-to-end connection establishment,
error detection and recovery, and metering the rate of data flow into the network. Many
standard applications, such as e-mail, web browsing, file transfer, and Telnet, depend on the
services of TCP. TCP identifies the application using it by a port number.
 User Datagram Protocol (UDP) – UDP offers a connectionless service to applications. UDP
uses lower overhead than TCP and can tolerate a level of data loss. Network management
applications, network file system, and simple file transport use UDP. Like TCP, UDP
identifies applications by port number.
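A few of the well-known port numbers both protocols use to identify applications (a small illustrative sample of the standard assignments):

```python
# The transport layer uses the destination port to hand incoming
# data to the right application.
WELL_KNOWN_PORTS = {
    21: "FTP", 23: "Telnet", 25: "SMTP",
    53: "DNS", 80: "HTTP",
}
print(WELL_KNOWN_PORTS[80])   # HTTP
```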

6.2.3 Network Protocols

The Internet layer is the second layer in the TCP/IP model. It provides internetworking for the
communications session.

 Internet Protocol (IP) – IP provides source and destination addressing and, in conjunction
with routing protocols, packet forwarding from one network to another toward a destination.
 Internet Control Message Protocol (ICMP) – ICMP is used for network testing and
troubleshooting. It enables diagnostic and error messages. ICMP echo messages are used by
the PING application to test if a remote device is reachable.
 Routing Information Protocol (RIP) – RIP operates between router devices to discover
paths between networks. In an intranet, routers depend on a routing protocol to build and
maintain information about how to forward packets toward the destination. RIP chooses
routes based on the distance or hop count.
 Address Resolution Protocol (ARP) – ARP is used to discover the local address (MAC
address) of a station on the network when its IP address is known. End stations as well as
routers use ARP to discover local addresses.

6.3 Internetwork Packet Exchange/Sequenced Packet Exchange (IPX/SPX)

Internetwork Packet Exchange/Sequenced Packet Exchange (IPX/SPX) is the protocol suite employed
originally by Novell Corporation's network operating system, NetWare. It delivers functions similar to
those included in TCP/IP. In order to enable desktop client systems to access NetWare services,
Novell deployed a set of application, transport, and network protocols. Although NetWare
client/server systems are not tied to particular hardware platforms, the native, or original, NetWare
protocols remained proprietary. Unlike TCP/IP, the Novell IPX/SPX protocol suite remained the
property of one company. As a result of the market pressure to move to an industry standard way of
building networks, IPX/SPX has fallen into disfavor among customers. Novell in its current releases
supports the TCP/IP suite. There remains, however, a large installed base of NetWare networks that
continue to use IPX/SPX.

Common examples of some of the protocol elements included in the Novell IPX/SPX protocol suite
include the following:

 Service Advertising Protocol (SAP)
 Novell Routing Information Protocol (Novell RIP)
 Netware Core Protocol (NCP)
 Get Nearest Server (GNS)
 Netware Link Services Protocol (NLSP)

A detailed discussion of the role and functions played by these protocol elements in networking and
internetworking is not within the scope of this course.

6.4 NetBEUI

NetBIOS Extended User Interface (NetBEUI) is a protocol used primarily on small Windows NT
networks. It was used earlier as the default networking protocol in Windows 3.11 (Windows for
Workgroups) and LAN Manager. NetBEUI has very low overhead, but it cannot be routed, so routers
cannot use it to carry traffic between networks. NetBEUI is a simple protocol that lacks many of
the features that enable protocol suites such as TCP/IP to be used on networks of almost any size.

Although NetBEUI cannot be used in building large networks or connecting several networks
together, it is suitable for small peer-to-peer networks, involving a few computers directly connected
to each other. It can be used in conjunction with another routable protocol such as TCP/IP. This gives
the network administrator the advantages of the high performance of NetBEUI within the local
network and the capability to communicate beyond the LAN over TCP/IP.

6.5 AppleTalk

AppleTalk is a protocol suite for networking Macintosh computers. It comprises a comprehensive set
of protocols that span the seven layers of the OSI reference model. AppleTalk protocols were designed
to run over LocalTalk (the Apple LAN physical topology), as well as major LAN types, notably
Ethernet and Token Ring.

Examples of AppleTalk protocols include the following:

 AppleTalk Filing Protocol (AFP)
 AppleTalk Data Stream Protocol (ADSP)
 Zone Information Protocol (ZIP)
 AppleTalk Session Protocol (ASP)
 Printer Access Protocol (PAP)

A detailed discussion of the use of these protocol elements is not within the scope of this course.

Like Novell, Apple Computer also developed its own proprietary protocol suite to network Macintosh
computers. Also like Novell, there is still a sizeable number of customers who use AppleTalk to
interconnect their systems. But just as other companies have transitioned to the use of TCP/IP, Apple
now fully supports the public networking protocol standards.

7 Network Operating Systems (NOS)


A computer operating system (OS) is the software foundation on which computer applications and
services run on a workstation. Similarly, a network operating system (NOS) enables communication
between multiple devices and the sharing of resources across a network. A NOS operates on UNIX,
Microsoft Windows NT, or Windows 2000 network servers.

Common functions of an OS on a workstation include controlling the computer hardware, executing
programs, and providing a user interface. The OS performs these functions for a single user. Multiple
users can share the machine but they cannot log on at the same time. In contrast, a NOS distributes
functions over a number of networked computers. A NOS depends on the services of the native OS in
each individual computer. The NOS then adds functions that allow access to shared resources by a
number of users concurrently.

Workstations function as clients in a NOS environment. When a workstation becomes a client in a
NOS environment, additional specialized software enables the local user to access non-local or remote
resources, as if these resources were a part of the local system. The NOS enhances the reach of the
client workstation by making remote services available as extensions of the local operating system.

A system capable of operating as a NOS server must be able to support multiple users concurrently.
The network administrator creates an account for each user, allowing the user to logon and connect to
the server system. The user account on the server enables the server to authenticate that user and
allocate the resources that the user is allowed to access. Systems that provide this capability are called
multi-user systems.

A NOS server is a multitasking system, capable of executing multiple tasks or processes at the same
time. The NOS scheduling software allocates internal processor time, memory, and other elements of
the system to different tasks in a way that allows them to share the system resources. Each user on the
multi-user system is supported by a separate task or process internally on the server. These internal
tasks are created dynamically as users connect to the system and are deleted when users disconnect.

The main features to consider when selecting a NOS are performance, management and monitoring
tools, security, scalability, and robustness or fault tolerance. The following briefly define each of
these features:

 Performance. A NOS must perform well at reading and writing files across the network
between clients and servers. It must be able to maintain fast performance under heavy loads,
when many clients are making requests. Consistent performance under heavy demand is an
important standard for a NOS.
 Management and monitoring. The management interface on the NOS server provides the
tools for server monitoring, client administration, file, print, and disk storage management.
The management interface provides tools for the installation of new services and the
configuration of those services. Additionally, servers require regular monitoring and
adjustment.
 Security. A NOS must protect the shared resources under its control. Security includes
authenticating user access to services to prevent unauthorized access to the network resources.
Security also performs encryption to protect information as it travels between clients and
servers.
 Scalability. Scalability is the ability of a NOS to grow without degradation in performance.
The NOS must be capable of sustaining performance as new users join the network and new
servers are added to support them.
 Robustness/fault tolerance. A measure of robustness is the ability to deliver services
consistently under heavy load and to sustain its services if components or processes fail.
Using redundant disk devices and balancing the workload across multiple servers can
improve NOS robustness.

7.1 Microsoft NT, 2000, and .NET

Since the release of Windows 1.0 in November 1985, Microsoft has produced many versions of
Windows operating systems with improvements and changes to support a variety of users and
purposes. The figure summarizes the current Windows operating systems.

NT 4 was designed to provide an environment for mission-critical business applications that would be
more stable than the Microsoft consumer operating systems. It is available in both desktop (NT 4.0
Workstation) and server (NT 4.0 Server) editions. An advantage of NT over previous Microsoft OSs is
that DOS and older
Windows programs can be executed in virtual machines (VMs). Program failures are isolated and do
not require a system restart.

Windows NT provides a domain structure to control user and client access to server resources. It is
administered through the User Manager for Domains application on the domain controller. Each NT
domain requires a single primary domain controller which holds the Security Accounts Management
Database (SAM) and may have one or more backup domain controllers, each of which contains a
read-only copy of the SAM. When a user attempts to logon, the account information is sent to the
SAM database. If the information for that account is stored in the database, the user will be
authenticated to the domain and have access to the workstation and network resources.

Based on the NT kernel, the more recent Windows 2000 has both desktop and server versions.
Windows 2000 supports “plug-and-play” technology, permitting installation of new devices without
the need to restart the system. Windows 2000 also includes a file encryption system for securing data
on the hard disk.

Windows 2000 enables objects, such as users and resources, to be placed into container objects called
organizational units (OUs). Administrative authority over each OU can be delegated to a user or
group. This feature allows more specific control than is possible with Windows NT 4.0.

Windows 2000 Professional is not designed to be a full NOS. It does not provide a domain controller,
DNS server, DHCP server, or render any of the services that can be deployed with Windows 2000
Server. The primary purpose of Windows 2000 Professional is to be part of a domain as a client-side
operating system. The type of hardware that can be installed on the system is limited. Windows 2000
Professional can provide limited server capabilities for small networks and peer-to-peer networks. It
can be a file server, a print server, an FTP server, and a web server, but will only support up to ten
simultaneous connections.

Windows 2000 Server adds to the features of Windows 2000 Professional many new server-specific
functions. It can also operate as a file, print, web and application server. The Active Directory
Services feature of Windows 2000 Server serves as the centralized point of management of users,
groups, security services, and network resources. It includes the multipurpose capabilities required for
workgroups and branch offices as well as for departmental deployments of file and print servers,
application servers, web servers, and communication servers.

Windows 2000 Server is intended for use in small-to-medium sized enterprise environments. It
provides integrated connectivity with Novell NetWare, UNIX, and AppleTalk systems. It can also be
configured as a communications server to provide dialup networking services for mobile users.
Windows 2000 Advanced Server provides the additional hardware and software support needed for
enterprise and extremely large networks.

Windows .NET Server is built on the Windows 2000 Server kernel, but tailored to provide a secure
and reliable system to run enterprise-level web and FTP sites in order to compete with the Linux and
UNIX server operating systems. The Windows .NET Server provides XML Web Services to
companies which run medium to high volume web traffic.

7.2 UNIX, Sun, HP, and LINUX

7.2.1 Origins of UNIX

UNIX is the name of a group of operating systems that trace their origins back to 1969 at Bell Labs.
Since its inception, UNIX was designed to support multiple users and multitasking. UNIX was also
one of the first operating systems to include support for Internet networking protocols. The history of
UNIX, which now spans over 30 years, is complicated because many companies and organizations
have contributed to its development.

UNIX was first written in assembly language, a low-level set of instructions tied to the internals of a
specific processor. As a result, UNIX could only run on one type of computer. In 1971,
Dennis Ritchie created the C language. In 1973, Ritchie along with fellow Bell Labs programmer Ken
Thompson rewrote the UNIX system programs in C language. Because C is a higher-level language,
UNIX could be moved or ported to another computer with far less programming effort. The decision
to develop this portable operating system proved to be the key to the success of UNIX. During the
1970s, UNIX evolved through the development work of programmers at Bell Labs and several
universities, notably the University of California, at Berkeley.

When UNIX first started to be marketed commercially in the 1980s, it was used to run powerful
network servers, not desktop computers. Today, there are dozens of different versions of UNIX,
including the following:

 Hewlett Packard UNIX (HP-UX)


 Berkeley Software Design, Inc. (BSD UNIX), which has produced derivatives such as
FreeBSD
 Santa Cruz Operation (SCO) UNIX
 Sun Solaris
 IBM UNIX (AIX)

UNIX, in its various forms, continues to advance its position as the reliable, secure OS of choice for
mission-critical applications that are crucial to the operation of a business or other organization.
UNIX is also tightly integrated with TCP/IP. TCP/IP basically grew out of UNIX because of the need
for LAN and WAN communications.

The Sun Microsystems Solaris Operating Environment and its core OS, SunOS, form a high-
performance, versatile, 64-bit implementation of UNIX. Solaris runs on a wide variety of computers,
from Intel-based personal computers to powerful mainframes and supercomputers. Solaris is currently
the most widely used version of UNIX in the world for large networks and Internet websites. Sun is
also the developer of the "Write Once, Run Anywhere" Java technology.

Despite the popularity of Microsoft Windows on corporate LANs, much of the Internet runs on
powerful UNIX systems. Although UNIX is usually associated with expensive hardware and is not
user friendly, recent developments, including the creation of Linux, have changed that image.

7.2.2 Origins of Linux

In 1991, a Finnish student named Linus Torvalds began work on an operating system for an Intel
80386-based computer. Torvalds became frustrated with the state of desktop operating systems, such
as DOS, and the expense and licensing issues associated with commercial UNIX. Torvalds set out to
develop an operating system that was UNIX-like in its operation but used software code that was open
and completely free of charge to all users.

Torvalds' work led to a world-wide collaborative effort to develop Linux, an open source operating
system that looks and feels like UNIX. By the late 1990s, Linux had become a viable alternative to
UNIX on servers and Windows on the desktop. The popularity of Linux on desktop PCs has also
contributed to interest in using UNIX distributions, such as FreeBSD and Sun Solaris on the desktop.
Versions of Linux can now run on almost any 32-bit processor, including the Intel 80386, Motorola
68000, Alpha, and PowerPC chips.

As with UNIX, there are numerous versions of Linux. Some are free downloads from the web, and
others are commercially distributed. The following are a few of the most popular versions of Linux:

 Red Hat Linux – distributed by Red Hat Software
 OpenLinux – distributed by Caldera
 Corel Linux
 Slackware
 Debian GNU/Linux
 SuSE Linux

Linux is one of the most powerful and reliable operating systems in the world today. Because of this,
Linux has already made inroads as a platform for power users and in the enterprise server arena.
Linux is less often deployed as a corporate desktop operating system. Although graphical user
interfaces (GUIs) are available to make Linux user-friendly, most beginning users find Linux more
difficult to use than Mac OS or Windows. Currently, many companies, such as Red Hat, SuSE, Corel,
and Caldera, are striving to make Linux a viable operating system for the desktop.

Application support must be considered when Linux is implemented on a desktop system. The
number of business productivity applications is limited when compared to Windows. However, some
vendors provide Windows emulation software, such as WABI and WINE, which enables many
Windows applications to run on Linux. Additionally, companies such as Corel are making Linux
versions of their office suites and other popular software packages.

7.2.3 Networking with Linux

Recent distributions of Linux have networking components built in for connecting to a LAN and for
establishing a dialup connection to the Internet or another remote network. In fact, TCP/IP is integrated
into the Linux kernel instead of being implemented as a separate subsystem.

Some advantages of Linux as a desktop operating system and network client include the following:

 It is a true 32-bit operating system.


 It supports preemptive multitasking and virtual memory.
 The code is open source and thus available for anyone to enhance and improve.

7.3 Apple

Apple Macintosh computers were designed for easy networking in a peer-to-peer, workgroup
situation. Network interfaces are included as part of the hardware and networking components are
built into the Macintosh operating system. Ethernet and Token Ring network adapters are available
for the Macintosh.

The Macintosh, or Mac, is popular in many educational institutions and corporate graphics
departments. Macs can be connected to one another in workgroups and can access AppleShare file
servers. Macs can also be connected to PC LANs that include Microsoft, NetWare, or UNIX servers.

Mac OS X (10)
The Macintosh operating system, Mac OS X, is sometimes referred to as Apple System 10.

Some of the features of Mac OS X are found in its GUI, called Aqua. The Aqua GUI resembles a cross
between the Microsoft Windows XP GUI and the Linux X Window System GUI. Mac OS X is designed to provide
features for the home computer, such as Internet browsing, video and photo editing, and games, while
still providing features that offer powerful and customizable tools that IT professionals need in an
operating system.

Mac OS X is fully compatible with older versions of the Mac operating system. Mac OS X
provides a new feature that allows for AppleTalk and Windows connectivity. The Mac OS X core
operating system is called Darwin. Darwin is a UNIX-based, powerful system that provides stability
and performance. These enhancements provide Mac OS X with support for protected memory,
preemptive multitasking, advanced memory management, and symmetric multiprocessing. This
makes Mac OS X a formidable competitor amongst operating systems.

8 Data Communications

8.1 Signal Transmission


Signal refers to a desired electrical voltage, light pattern, or modulated electromagnetic wave. All of
these can carry networking data.

A signal consists of electrical or optical patterns that are transmitted from one connected device to
another. These patterns represent digital bits and move down the media either as a series of voltages
or as light patterns. When the signals reach the destination they are converted back to digital bits.

There are three common methods of signal transmission:

 Electrical signals - Transmission is achieved by representing data as electrical pulses on


copper wire.
 Optical signals - Transmission is achieved by converting the electrical signals into light
pulses.
 Wireless signals - Transmission is achieved by using infrared, microwave, or radio waves
through free space.

8.1.1 Optical Signals

One of the most popular methods of data transmission is fiber-optics. It has been the core of long
distance data transmission for a long time. Recently fiber has become more affordable for the business
desktop.

There are two ways to move a signal using light as the medium of transmission:

 Optical fiber - Optical signals propagate down glass threads called optical fibers.
 Optical free-space - Optical free-space communications sometimes takes the place of
microwave or other point-to-point transmission systems.

There are many advantages to the use of light beams for communications, but cost and reliability
issues limit the application of this kind of link. Another form of light-based, free-space
communication, called infrared, is extremely popular. Infrared is a type of wireless technology that is
used in business and residential applications.

8.1.2 Wireless Signals

Wireless is a term used to describe communications in which electromagnetic waves carry signals.
Wireless transmission works by sending high frequency waves into free space. Waves propagate, or
travel, through free space until they arrive at their intended destination and are converted back into
electrical impulses so that the destination device can read the data.

A common application of wireless data communication is for mobile use such as cellular telephones,
satellites used for transmitting television programs, walkie-talkies used to dispatch emergency
services, and telemetry signals from remote space probes, space shuttles, and space stations. Another
common application of wireless data communication is wireless LANs. In some cases, it is easier to
set up a wireless system that serves an entire floor or portion of a floor and equip each user with an
individual receiver and transmitter, than it is to wire every user into the network. When there is a need
for network voice or data communications but the user cannot or does not want to rely on cables for
the connection, wireless is the solution.

The wireless spectra have three distinct means of transmission as follows:

 Light wave - Infrared signals are light waves at frequencies lower than the unaided human eye
can see. Infrared is rarely used over long distances. It is not particularly reliable, and the two
devices must be in line of sight of each other. In the home, infrared is used for remote
controls for televisions, VCRs, DVD players, and stereo systems. There are also infrared
applications in computer networking, although techniques involving radio waves are more
popular. It is also used in certain military applications since it cannot be easily detected by the
monitoring stations of opposing forces.
 Radio and microwave - A very effective and practical system of wireless communication is
based on using radio waves or microwaves for signal transmission. Microwaves are also used
in radar. Common examples of wireless equipment include the following:
o Cellular phones and pagers
o Cordless telephones that connect the handset to the base station via radio frequencies
o Global positioning systems (GPS) that use satellites to help ships, aircraft and even
cars to find their location anywhere on earth
o Home entertainment systems in which the VCR and TV sometimes signal each other
on an unused TV channel
o Garage door openers that work at radio frequencies
o Baby monitors that use a transmitter and receiver for a limited range
o Cordless computer peripherals
o Wireless LANs used for business

 Acoustic (ultrasonic) - Some monitoring devices, like intrusion alarms, employ acoustic sound
waves at frequencies above the range of human hearing. Sonar is another example. These are
sometimes classified as wireless.

8.2 Comparing analogue and digital signals

Electronic signals can be analog or digital. Analog signals have continuous voltage. They can be
illustrated as waves because they change gradually and continuously.

Digital signals, on the other hand, change from one state to another almost instantaneously. There is
no in-between state. Digital signals send information based on the number and position of pulses in a
pulse stream. Each pulse has essentially the same amplitude (voltage) and represents either a "0" or a
"1"; the difference between pulses is typically one of duration and position. This has an important
implication for networking: if a pulse is recognizable, it is usable. Unlike analog, in which any change
in voltage is a distortion of the signal, a digital pulse is either there (1) or it is not (0). The result is
that digital systems tend to be more robust, being less prone to breakdowns and errors than analog
systems.

Since a digital signal is a pattern of ones and zeros, it can be transmitted at any speed, stored, or
switched from one medium to another. As long as all of the pulses are resequenced into the proper
order and played back at the speed at which they were recorded, all of the information can be recovered. An
analog signal, on the other hand, must be kept together in segments and played and transmitted in real
time, such as a phone call. This makes digital extremely versatile. Some services such as desktop
video conferencing, telephony over network wiring, and video on demand are more capable when
using digital techniques.

In summary, an analogue signal has the following characteristics:

 is wavy
 has a continuously varying voltage-versus-time graph
 is typical of things in nature
 has been widely used in telecommunications for over 100 years

The main graphic shows a pure sine wave. The two important characteristics of a sine wave are its
amplitude (A), its height and depth, and its period (T), the length of time to complete one cycle. You
can calculate the frequency (f) of the wave with the formula f = 1/T.

Fig 19 Digital Signal

A digital signal has the following characteristics:

 has discrete, or jumpy, voltage-versus-time graphs


 is typical of technology, rather than nature

The graphic shows a digital networking signal. Digital signals have a fixed amplitude, but their pulse
width and frequency can be changed. While this is an approximation, it is a reasonable one and will be
used in all future diagrams.

8.2.1 Using analogue signals to build digital signals

Jean Baptiste Fourier is responsible for one of the greatest mathematical discoveries: he proved that
sine waves of harmonically related frequencies, which are multiples of some basic frequency, can be
summed to create any wave pattern. This is how devices such as voice recognition systems and heart
pacemakers work. Complex waves can be built out of simple waves.

A square wave, or a square pulse, can be built by using the right combination of sine waves. A square
wave (digital signal) can be built with sine waves (analogue signals). This is important to remember
as you examine what happens to a digital pulse as it travels along networking media.
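
Fourier's idea can be sketched numerically. The snippet below (function name and parameter choices are illustrative, not from the text) approximates a square wave by summing the odd harmonics of a fundamental sine wave:

```python
import math

def square_wave_approx(t, f0, n_harmonics):
    """Fourier synthesis: sum odd harmonics (k = 1, 3, 5, ...) of the
    fundamental frequency f0, each scaled by 1/k, to approximate a
    square wave swinging between +1 and -1."""
    total = 0.0
    for k in range(1, 2 * n_harmonics, 2):
        total += math.sin(2 * math.pi * k * f0 * t) / k
    return (4 / math.pi) * total

# More harmonics flatten the sum toward the ideal square pulse.
coarse = square_wave_approx(0.25, 1.0, 2)    # only k = 1 and k = 3
fine = square_wave_approx(0.25, 1.0, 200)    # value approaches 1.0
```

Adding more sine waves sharpens the edges of the pulse, which is why a channel that strips away high harmonics (limited bandwidth) rounds off digital signals.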

8.2.2 Representing one bit on a physical medium

Data networks have become increasingly dependent on digital (binary, two-state) systems. The basic
building block of information is 1 binary digit, known as the bit or pulse. One bit, on an electrical
medium, is the electrical signal corresponding to binary 0 or binary 1. This may be as simple as 0
volts for binary 0, and +5 volts for binary 1, or a more complex encoding. Signal reference ground is
an important concept relating to all networking media that use voltages to carry messages.

In order to function correctly, a signal reference ground must be close to a computer's digital circuits.
Engineers have accomplished this by designing ground planes into circuit boards. The computer
cabinets are used as the common point of connection for the circuit board ground planes to establish
the signal reference ground. Signal reference ground establishes the 0 volts line in the signal graphics.

Fig 20 Analog Signal

With optical signals, binary 0 would be encoded as a low-light, or no-light, intensity (darkness).
Binary 1 would be encoded as a higher-light intensity (brightness), or other more complex patterns.

With wireless signals, binary 0 might be a short burst of waves; binary 1 might be a longer burst of
waves, or another more complex pattern.
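
The simple electrical encoding described above (0 volts for binary 0, +5 volts for binary 1) can be sketched as a round trip; the threshold value and function names are illustrative assumptions:

```python
V0, V1 = 0.0, 5.0       # volts relative to signal reference ground
THRESHOLD = 2.5         # assumed midpoint decision level at the receiver

def encode(bits):
    """Map each binary digit to a voltage level on the wire."""
    return [V1 if b else V0 for b in bits]

def decode(voltages):
    """Recover the bits by comparing each sample to the threshold."""
    return [1 if v > THRESHOLD else 0 for v in voltages]

sent = [1, 0, 1, 1, 0]
received = decode(encode(sent))   # round trip with no noise or loss
```

Real links use more complex encodings, but the principle is the same: the receiver only has to decide which side of a threshold each pulse falls on.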
8.3 Basics Of Data Flow Through LANs

8.3.1 Encapsulation and Packets Review

In order for reliable communications to take place over a network, the data to be sent must be put into
manageable, traceable packages. This is done through the process of encapsulation, as covered in
chapter 2. To review briefly, the top three layers (Application, Presentation, and Session) prepare the
data for transmission by creating a common format.

The Transport layer breaks up the data into manageable size units called segments. It also assigns
sequence numbers to the segments to make sure the receiving host puts the data back together in the
proper order. The Network layer then encapsulates the segment creating a packet. It adds a destination
and source network address, usually IP to the packet.

The Data Link layer further encapsulates the packet and creates a frame. It adds the source and
destination local (MAC) address to the frame. The Data Link layer then transmits the binary bits of
the frame over the Physical layer media.
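
The layering just described can be sketched as a toy model; the addresses and field names below are hypothetical, not a real protocol implementation:

```python
def segment(data, size):
    """Transport layer: split the data into numbered segments."""
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def packetize(seg, src_ip, dst_ip):
    """Network layer: wrap a segment with source/destination network addresses."""
    return {"src_ip": src_ip, "dst_ip": dst_ip, "segment": seg}

def frame(pkt, src_mac, dst_mac):
    """Data Link layer: wrap a packet with source/destination MAC addresses."""
    return {"src_mac": src_mac, "dst_mac": dst_mac, "packet": pkt}

segs = segment(b"HELLO WORLD", 4)
frames = [frame(packetize(s, "10.0.0.1", "10.0.0.2"), "AA:AA", "BB:BB")
          for s in segs]
```

Note how each layer wraps the unit built by the layer above it: the sequence numbers let the receiver reassemble the segments in order.
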
When the data is transmitted on just a local area network, we talk about the data units as frames,
because the MAC address is all that is necessary to get from source to destination host. But if we need
to send the data to another host over an Intranet or the Internet, packets become the data unit that is
referred to. This is because the Network address in the packet contains the final destination address of
the host the data (packet) is being sent to.

The bottom three layers (Network, Data Link, Physical) of the OSI model are the primary movers of
data across an Intranet or Internet. The main exception to this is a device called a gateway. It is a
device designed to convert the data from one format, created by the Application, Presentation, and
Session layers, to another. So the gateway uses all seven of the OSI layers to do this.

8.3.2 Packet flow through Layer 1 devices

The packet flow through Layer 1 devices is simple. Physical media are considered Layer 1
components. All they attend to are bits (e.g. voltage or light pulses). If the Layer 1 devices are
passive (e.g. plugs, connectors, jacks, patch panels, physical media), then the bits simply travel
through the passive devices, hopefully with a minimum of distortion.

If the Layer 1 devices are active (e.g. repeaters or hubs), then the bits are actually regenerated and
retimed. Transceivers, also active devices, act as adapters (AUI port to RJ-45), or as media converters
(RJ-45 electrical to ST Optical). In all cases the transceivers act as Layer 1 devices.

No Layer 1 device examines any of the headers or data of an encapsulated packet. All they work with
are bits.

8.3.3 Packet flow through Layer 2 devices

It is important to remember that packets are contained inside frames, so to understand how packets
travel through Layer 2 devices, you will work with the packet's encapsulated form, the frame. Just
remember that anything that happens to the frame also happens to the packet.

NICs, bridges, and switches use Data Link (MAC) address information to direct frames, which is why
they are referred to as Layer 2 devices. The NIC is where the unique MAC address resides. The MAC
address is used to create the frame.

Bridges work by examining the MAC address of incoming frames. If the frame is local (with a
destination MAC address on the same network segment as the incoming port of the bridge), then the
frame is not forwarded across the bridge. If the frame is non-local (with a destination MAC address
not on the segment of the incoming port), then it is forwarded to the next network segment. Because
all of this decision-making by the bridge circuits is based on MAC addresses, the bridge takes in a
frame, examines its MAC address, and then either forwards or discards the frame, as the situation
requires.
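
A minimal sketch of the bridge's decision, assuming a pre-built table that maps MAC addresses to segments (both the table and the function name are illustrative):

```python
def bridge_decision(dst_mac, incoming_port, mac_table):
    """Do not forward a frame whose destination MAC is on the same
    segment (port) it arrived on; forward everything else."""
    if mac_table.get(dst_mac) == incoming_port:
        return "drop"       # local traffic stays on its own segment
    return "forward"        # non-local: send to the next segment

mac_table = {"AA:AA": 1, "BB:BB": 2}
local = bridge_decision("AA:AA", 1, mac_table)     # same segment
remote = bridge_decision("BB:BB", 1, mac_table)    # other segment
```

Filtering local traffic this way is what lets a bridge reduce unnecessary traffic on each segment.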

Consider a switch to be a hub with individual ports that act like bridges. The switch takes a data
frame, reads the frame, examines the Layer 2 MAC addresses, and forwards the frames (switches
them) to the appropriate ports. So to understand how packets flow in Layer 2 devices, we must look at
how the frames are used.

8.3.4 Packet flow through Layer 3 devices

The main device discussed at the Network layer is the router. Routers actually operate at Layer 1 (bits
on the medium at the router interfaces), Layer 2 (frames switched from one interface to another), and
Layer 3 (routing decisions based on packet information).

Packet flow through routers (i.e. selection of best path and actual switching to the proper output port)
involves the use of Layer 3 network addresses. After the proper port has been selected, the router
encapsulates the packet in a frame again to send the packet to its next destination. This process
happens for every router in the path from the source host to the destination host.
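
Best-path selection at Layer 3 is commonly a longest-prefix match against a routing table. The sketch below uses Python's standard ipaddress module with a made-up table; real routers use far more elaborate data structures:

```python
import ipaddress

def select_port(dst_ip, routing_table):
    """Return the output port of the most specific (longest-prefix)
    route that matches the destination network address."""
    dst = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, port in routing_table:
        net = ipaddress.ip_network(prefix)
        if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, port)
    return best[1] if best else None

routes = [("10.0.0.0/8", "eth0"),
          ("10.1.0.0/16", "eth1"),
          ("0.0.0.0/0", "eth2")]     # default route
port_a = select_port("10.1.2.3", routes)      # most specific match
port_b = select_port("192.168.1.1", routes)   # falls to the default
```

Once the port is chosen, the router re-encapsulates the packet in a fresh frame for the next hop, exactly as the text describes.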

8.3.5 Packet flow through clouds and through Layer 1-7 devices

The graphic shows that certain devices operate at all seven layers. Some devices (e.g. your PC) are
Layer 1-7 devices. In other words, they perform processes that can be associated with every layer of
the OSI model. Encapsulation and decapsulation are two examples of this. A device called a gateway
(essentially a computer which converts information from one protocol to another) is also a Layer 7
device. An example of a gateway would be a computer on a LAN that allows the network to connect
to an IBM mainframe computer or to a network-wide facsimile (fax) system. In both of these
examples, the data would have to go all the way up the OSI model stack to be converted into a data
format the receiving device, either the mainframe or the fax unit, could use.

Finally, clouds may contain several kinds of media, NICs, switches, bridges, routers, gateways and
other networking devices. Because the cloud is not really a single device, but a collection of devices
that operate at all levels of the OSI model, it is classified as a Layer 1-7 device.

8.4 Effects on Transmitted Signals

8.4.1 Attenuation

Attenuation is a general term that refers to any reduction in the strength of a signal. Attenuation
occurs with any type of signal, whether digital or analog. Sometimes called loss, attenuation is a
natural consequence of signal transmission over long distances. Attenuation can affect a network
since it limits the length of network cabling over which a message can be sent. If the signal travels for
a long distance, the bits can be indiscernible by the time they reach the destination. When it is
necessary to transmit signals over long distances via cable, one or more repeaters can be inserted
along the length of the cable. The repeaters boost the signal strength to overcome attenuation. This
greatly increases the maximum attainable range of communication.
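
Attenuation is normally quoted in decibels (dB), comparing the power put into the medium with the power that comes out; a small worked example with illustrative values:

```python
import math

def attenuation_db(p_in, p_out):
    """Loss in decibels: 10 * log10(input power / output power)."""
    return 10 * math.log10(p_in / p_out)

# A signal launched at 100 mW that arrives at 25 mW has lost about 6 dB;
# every halving of the power costs roughly 3 dB.
loss = attenuation_db(100, 25)
```

Because the scale is logarithmic, losses along a cable run simply add in dB, which is why cable specifications quote attenuation per unit length.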

Attenuation also occurs with optical signals. The fiber absorbs and scatters some of the light energy as
the light pulse travels down the fiber. In fiber, attenuation can be influenced by the wavelength, or
color, of the light, the use of single-mode or multimode fiber, and by the actual glass that is used for
the fiber. Even by optimizing these choices, some signal loss is unavoidable.

Attenuation also affects radio waves and microwaves, since they are absorbed and scattered in the
atmosphere. This is called dispersion. Reflections off various structures in the signal path can also
impact the reliability of radio signals and cause attenuation.

8.4.2 Noise

Noise is unwanted electrical, electromagnetic, or radio frequency energy that can degrade and distort
the quality of signals and communications of all types.

Noise occurs in digital and analog systems. In the case of analog signals, the signal becomes noisy
and scratchy. For instance, a telephone conversation can be interrupted by noises in the background.
In digital systems, bits can sometimes merge, becoming indistinguishable to the destination computer.
The result is an increase in the bit-error rate, which is the number of bits that were distorted so much
that the destination computer reads the bit incorrectly. A clearly defined digital signal does not always
reach its destination without some change. Electrical noise can occur on the line. When the two
signals meet, they can merge into a new signal. The original clear signal can be interpreted incorrectly
by the receiving device.
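
The bit-error rate mentioned above is simply the fraction of received bits that differ from what was sent; a sketch with made-up bit streams:

```python
def bit_error_rate(sent, received):
    """Fraction of bits the destination read incorrectly."""
    errors = sum(1 for s, r in zip(sent, received) if s != r)
    return errors / len(sent)

sent_bits = [1, 0, 1, 1, 0, 0, 1, 0]
recv_bits = [1, 0, 0, 1, 0, 0, 1, 1]   # noise flipped two of eight bits
ber = bit_error_rate(sent_bits, recv_bits)
```

Real links aim for error rates many orders of magnitude lower than this toy example, and rely on error-detecting codes to catch the bits that do get flipped.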

Also, signals that are external to the cables, such as the emissions of radio transmitters and radars, or
the electrical fields that emanate from electric motors and fluorescent light fixtures, can interfere with
the signals that are traveling down cables. This noise is called Electromagnetic Interference (EMI)
when it occurs from electrical sources, and Radio Frequency Interference (RFI) when it occurs from
radio, radar, or microwave sources. Noise can also be conducted from AC lines and lightning.

Each wire in a cable can act like an antenna. When this happens, the wire actually absorbs electrical
signals from other wires in the cable and from electrical sources outside the cable. If the resulting
electrical noise reaches a high enough level, it can become difficult for the computer to discriminate
the noise from the data signal.

Optical and wireless systems experience some of these forms of noise but are immune to others. For
example, optical fiber is immune to most forms of crosstalk (interference from adjacent cables) and
noises associated with AC power, and ground reference problems. Radio waves and microwaves are
immune as well, but these can be affected by simultaneous transmissions on adjacent radio
frequencies.

For copper links, external noise is picked up from appliances in the vicinity, electrical transformers,
the atmosphere, and even outer space. During severe thunderstorms or in locations where many
electrical appliances are in use, external noise can affect communications. The biggest source of
signal distortion for copper wire occurs when signals inadvertently pass out of one wire within the
cable and onto an adjacent one. This is called crosstalk.

8.4.3 EMI and RFI

External sources of electrical impulses that can attack the quality of electrical signals on the cable
include lightning, electrical motors, and radio systems. These types of interference are referred to as
electromagnetic interference (EMI) and radio frequency interference (RFI).

Any device or system that generates an electromagnetic field has the potential to disrupt the operation
of electronic components, devices, and systems in its vicinity. This phenomenon is electromagnetic
interference (EMI). Moderate or high-powered wireless transmitters can produce EMI fields strong
enough to upset the operation of electronic equipment nearby. Ensuring that all electronic equipment
is operated with a good electrical ground on the system can minimize problems with EMI. Specialized
line filters can also be installed in power cords and interconnecting cables to reduce the EMI
susceptibility of some systems.

Two successful techniques that cable designers use in dealing with EMI and RFI are shielding and
cancellation. In cable that employs shielding, a metal braid or foil surrounds each wire pair or group
of wire pairs. This shielding acts as a barrier to any interfering signals. Cancellation is the more
commonly used technique to protect twisted pair cables from undesirable interference. Cancellation is
achieved by twisting wire pairs together within the cable and by carefully controlling the manufacture
of the cable to ensure precise physical tolerances.

8.4.4 Other Losses

Other losses on network cabling include the following:

 Fiber-optic losses - Optical signals are susceptible to losses when small particles are trapped
inside glass. When the light pulses hit the particles, the light scatters and the signal can be lost
as a result. This is sometimes called intrinsic losses or dispersion. Fiber-optics can also lose
signal due to the misalignment of connectors.
 Coupling losses - A coupling is a connector for two wires. Since the signal has to pass from
one wire to another, if the coupling is not done properly, the signal can suffer. In most cases,
poor connections cause reflected electromagnetic energy. Fiber-optics connectors can suffer a
coupling loss when contaminants or improper bonding decrease the amount of light that can
enter or leave a connection. These types of losses can sometimes be overcome by changing
the coupling or connectors used when coupling.
 Wireless losses - Wireless losses can be caused by water vapor, weather, or suspended
particles in the air. The signal can scatter or be absorbed. In fact, the signal loss of a laser beam
is used to measure the amount of pollutants in the air. These environmental losses cannot be
controlled, so a wireless system should be built with them in mind.

8.5 Types of Communication

8.5.1 Modem-based Communication

Modem-based communication is a telephony-based form of communication in which signals are
transmitted over the telephone line. A normal telephone line can carry digital signals from a
computer, but it is first necessary to convert the digital signals into analog signals that the telephone
circuit can handle. The modem converts the high and low voltage levels of a digital signal into the
high and low frequencies of an analog signal. This process is called modulation. At the receiving end
the reverse process takes place: the analog signal is demodulated back into digital signals and sent to
the CPU of the computer. The term modem is a combination of the words that describe its functions -
modulation and demodulation.
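
Modulation and demodulation can be illustrated with a toy frequency-shift model: a "0" is sent as a low-frequency tone and a "1" as a high-frequency tone. The frequencies, sample rate, and function names below are illustrative assumptions, not a real modem standard:

```python
import math

F0, F1 = 1000.0, 2000.0    # illustrative tone frequencies in Hz

def modulate(bit, t):
    """Digital level -> analog tone sample at time t (modulation)."""
    return math.sin(2 * math.pi * (F1 if bit else F0) * t)

def demodulate(samples, rate):
    """Estimate the tone frequency from positive-going zero crossings,
    then decide which bit was sent (demodulation)."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    freq = crossings * rate / len(samples)
    return 1 if abs(freq - F1) < abs(freq - F0) else 0

rate = 8000
tone = [modulate(1, n / rate) for n in range(rate)]   # one second of "1"
bit = demodulate(tone, rate)
```

Real modems pack many bits into each signalling interval by varying frequency, amplitude, and phase together, but the modulate/demodulate split is the same.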

8.5.2 Baseband

The term bandwidth and some of the technologies that use bandwidth have now been explained. Now
the types of communication channels used in networking and telephone systems can be explored with
better understanding. The terms baseband and broadband are used to distinguish the number of
channels that a wire can carry.

The term baseband describes a communications system in which the media carries one signal only.
That signal may have many components, but from the perspective of the wire or fiber, there is only
one signal. A broadband communications system, on the other hand, lets more than one signal use the
wire at one time. For example, a plain old telephone system (POTS) circuit is a baseband system.
Adding a DSL signal to the line, however, makes it a broadband system.

Fig 21 Baseband Signal


8.5.3 XBase Notation

Most LANs and telephone systems are baseband, meaning that the wires can only carry one channel.
The notation for baseband networks is xBasey where x is the speed of the transmission, or bandwidth,
and y is the type of media.

The Institute of Electrical and Electronics Engineers (IEEE) set up this system of designation as a
shorthand way to identify network types. The "10" in the media type designation refers to the
transmission speed of 10 Mbps. Other popular speed designations are 100 Mbps and 1000 Mbps. The
latter speed is often called Gigabit Ethernet because it is designed to carry a gigabit, one thousand
million bits, each second. The "Base" refers to baseband signaling, which means that only Ethernet
signals are carried on the medium. The alternative is BROAD, which represents broadband signaling.
The next designator indicates the medium. "T" represents twisted-pair. "F" represents fiber-optic
cable. This final designator can also indicate the segment length of a coaxial cable. When it does so,
the final numbers, "2", "5", "36", and so on, refer to the coaxial cable segment length in hundreds of
meters. The 185 meter coaxial cable segment limit for Ethernet has been rounded up to "2" for 200.

The following are LANs with bandwidth of 10 Mbps:

 10Base2 uses a thin coaxial cable, known as thinnet, with a maximum segment length of 185
m.
 10Base5 uses a thick coaxial cable, known as thicknet, with a maximum segment length of
500 m.
 10BaseT uses twisted-pair cable with a maximum segment length of 100 m.

The following are LANs with bandwidth of 100 Mbps:

 100BaseT uses four pairs of twisted-pair wire.


 100BaseTX uses two pairs of data grade twisted-pair wire.
 100BaseFX uses a two-strand fiber-optic cable.

The TX and FX types together are sometimes referred to as 100BaseX.
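
The xBasey scheme is regular enough to split mechanically; the parser below and its field names are illustrative, not part of the IEEE designation itself:

```python
import re

def parse_designation(name):
    """Split an IEEE shorthand like 10BaseT into speed (Mbps),
    signaling type, and medium / segment-length suffix."""
    m = re.fullmatch(r"(\d+)(Base|Broad)(\w+)", name, re.IGNORECASE)
    if not m:
        raise ValueError(f"not an xBasey designation: {name}")
    speed, signaling, suffix = m.groups()
    return {"speed_mbps": int(speed), "signaling": signaling, "medium": suffix}

tbase = parse_designation("10BaseT")      # twisted-pair at 10 Mbps
coax = parse_designation("10Base2")       # "2": ~200 m coaxial segment
fiber = parse_designation("100BaseFX")    # two-strand fiber at 100 Mbps
```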

8.5.4 Broadband

Unlike baseband, broadband handles more than one channel. The term broadband commonly
describes two things, both of which are important to installers. The first meaning describes high-
bandwidth communication. T1 (1.544 Mbps) rates and higher are commonly called broadband, as are
cable modem and DSL circuits.

Broadband also describes a type of data transmission in which a single medium (fiber or wire) can
carry several channels at once. This is usually achieved by multiplexing multiple independent
channels into one broadband signal for transmission (as voice, data, or video).

Cable TV, for example, uses broadband transmission to deliver many channels over a single medium
at once. In contrast, baseband transmission, which can also be high-speed, allows only one signal at a time.
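The multiplexing idea can be sketched numerically. The toy example below is an illustrative assumption, not a real modem or cable implementation: it places two channels on one "wire" as sine carriers at different frequencies, then shows that a receiver correlating against one carrier frequency sees only that channel.

```python
# Toy frequency-division multiplexing sketch (illustrative only).
# Two channels share one line by riding on different carrier
# frequencies; a receiver isolates one by correlating against it.
import math

RATE = 8000   # samples per second (assumed for this example)
N = 800       # samples analyzed (0.1 s, a whole number of cycles)

def carrier(freq_hz: float, amplitude: float) -> list:
    """One channel's contribution to the shared line."""
    return [amplitude * math.sin(2 * math.pi * freq_hz * n / RATE)
            for n in range(N)]

# Channel A is active (amplitude 1) at 1000 Hz; channel B is
# silent (amplitude 0) at 2000 Hz. The line carries their sum.
line = [a + b for a, b in zip(carrier(1000, 1.0), carrier(2000, 0.0))]

def power_at(signal: list, freq_hz: float) -> float:
    """Correlate against sine/cosine references to isolate one carrier."""
    s = sum(v * math.sin(2 * math.pi * freq_hz * n / RATE)
            for n, v in enumerate(signal))
    c = sum(v * math.cos(2 * math.pi * freq_hz * n / RATE)
            for n, v in enumerate(signal))
    return (s * s + c * c) / (N * N)

# Only the 1000 Hz channel shows energy, even though both share the wire.
print(power_at(line, 1000) > power_at(line, 2000))  # True
```

Because the carriers are at different frequencies, the two channels do not interfere, which is exactly why a single cable can carry voice, data, and video at once.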

Fig 22 Broadband Signal

8.5.5 Simplex, Half Duplex and Full Duplex Transmission

A data channel, over which a signal is sent, can operate in one of three ways: simplex, half-duplex, or
full-duplex (often just called duplex). The distinction is in the way that the signal can travel.

Simplex Transmission
Simplex transmission is a single, one-way transmission. Simplex transmission, as the name
implies, is simple. It is also called unidirectional because the signal travels in only one direction. An
example of simplex transmission is the signal sent from the TV station to the home television.

Contemporary applications for simplex circuits, although rare, include remote station printers, card
readers, and a few alarm or security systems (fire and smoke alarms). This type of transmission is not
frequently used because it is not a practical mode for transmitting. The only advantage of simplex
transmission is that it is inexpensive.

Half-Duplex Transmission
Half-duplex transmission is an improvement over simplex because traffic can travel in both
directions. Unfortunately, the road is not wide enough to accommodate bidirectional signals
simultaneously, so only one side can transmit at a time. Two-way radios, such as
police and emergency mobile radios, use half-duplex transmission. While
the button on the microphone is pressed to transmit, nothing said on the other end can be heard. If
people at both ends try to talk at the same time, neither transmission gets through.

Note: Modems are half-duplex devices. They can send and receive, but not at the
same time. However, it is possible to create a full-duplex modem connection with two
telephone lines and two modems.

Full-Duplex Transmission
Full-duplex transmission operates like a two-way, two-lane street. Traffic can travel in both directions
at the same time.

A land-based telephone conversation is an example of full-duplex communication. Both parties can
talk at the same time, and the person talking on the other end can still be heard by the other party
while they are talking (although it might be difficult to understand what is being said).
Full-duplex networking technology increases performance because data can be sent and received at
the same time. Digital subscriber line (DSL), two-way cable modem, and other broadband
technologies operate in full-duplex mode. With DSL, for example, users can download data to their
computer at the same time they are sending a voice message over the line.
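The turn-taking rule that separates half-duplex from full-duplex can be modeled in a few lines. The sketch below is a toy illustration with invented class and method names, not real radio or modem code; it shows why a push-to-talk radio blocks one side while the other is keyed.

```python
# Toy model of a half-duplex channel (illustrative names only).
# Traffic may flow both ways, but only one end may transmit at a time,
# like a push-to-talk two-way radio.

class HalfDuplexChannel:
    def __init__(self):
        self.busy_by = None  # which end, if any, currently holds the channel

    def start_transmit(self, end: str) -> bool:
        """Return True if `end` ('A' or 'B') may transmit right now."""
        if self.busy_by is None:
            self.busy_by = end   # channel was idle: this end seizes it
            return True
        return self.busy_by == end  # blocked unless it already holds it

    def stop_transmit(self, end: str) -> None:
        """Release the channel, like letting go of the mic button."""
        if self.busy_by == end:
            self.busy_by = None

radio = HalfDuplexChannel()
print(radio.start_transmit("A"))  # True  - A keys the mic first
print(radio.start_transmit("B"))  # False - B must wait its turn
radio.stop_transmit("A")
print(radio.start_transmit("B"))  # True  - now B can talk
```

A full-duplex channel, by contrast, would simply return True for both ends at once, since sending and receiving can happen simultaneously.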

8.5.6 Synchronous and Asynchronous transmission

Serial lines are established over serial cabling connected to the standard RS-232 communication
(COM) ports of the computer. Serial transmission sends data one bit at a time (previously illustrated
with a car on a one-lane highway). Analog or digital signals depend on changes in state
(modulations) to represent the actual binary data. To interpret the signals correctly, the receiving
network device must know precisely when to measure the signal. Therefore, timing becomes very
important in networking. In fact, the biggest problem with sending data over serial lines is keeping
the timing of the transmitted data bits coordinated. Two techniques are used to provide proper timing
for serial transfers:

- Synchronous serial transmission – Data bits are sent together with a synchronizing clock
pulse. In this transmission method, a built-in timing mechanism coordinates the clocks of the
sending and receiving devices. This is known as guaranteed state change synchronization,
the most commonly used type of synchronous transmission method.
- Asynchronous serial transmission – Data bits are sent without a synchronizing clock pulse.
Instead, each unit of data is framed by a start bit at the beginning and a stop bit at the end.
When the receiving device detects the start bit, it synchronizes its internal clock with the
sender's clock.
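The start/stop framing described above can be sketched in code. The example below is an illustration, not a real UART driver: it wraps one byte in a frame of one start bit, eight data bits, and one stop bit, then recovers it on the "receiving" side.

```python
# Sketch of asynchronous (start/stop) framing, as a UART would do it.
# The line idles at 1, so the 1 -> 0 edge of the start bit tells the
# receiver when to begin sampling; the stop bit (1) returns the line
# to its idle state. Illustrative only, not real serial-port code.

def frame_byte(value: int) -> list:
    """Frame one byte: start bit (0), 8 data bits LSB first, stop bit (1)."""
    data = [(value >> i) & 1 for i in range(8)]
    return [0] + data + [1]

def deframe(bits: list) -> int:
    """Check the framing bits, then reassemble the byte."""
    assert bits[0] == 0 and bits[9] == 1, "framing error"
    return sum(bit << i for i, bit in enumerate(bits[1:9]))

frame = frame_byte(ord("A"))
print(frame)                # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(chr(deframe(frame)))  # A
```

Note the overhead this implies: every 8 data bits cost at least 10 bits on the wire, one reason synchronous methods are preferred on faster links.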

PC serial ports and most analog modems use the asynchronous communication method, while digital
modems (also called terminal adapters) and LAN adapters use the synchronous method. The
industry standard for the serial line interface is the Electronic Industries Association (EIA) RS-232C.

PC and modem makers have developed single-chip devices that perform all the functions necessary
for serial transfers to occur. These devices are called Universal Asynchronous
Receiver/Transmitters (UARTs). The synchronous devices are known as Universal
Synchronous/Asynchronous Receiver/Transmitters (USARTs) and can handle both synchronous and
asynchronous transmissions.
