1. Introduction
The reason for the popularity of computer networks is that they offer many
advantages. Information such as important files, video and audio, and email can be
easily shared between users. Peripherals such as printers and modems can also be
shared over the network. For example, Figure 1 shows a printer being used in a stand-
alone environment and in a networked environment. By connecting many computers
to a print server, any of them can make use of the printer, instead of only the single
computer in the stand-alone environment. Also, software such as word-processors and
spreadsheets can be made available to all computers on the network from a single
central server. Finally, administration and support is simplified.
However, with these advantages come a number of potential disadvantages. Making
important and sensitive information available to every user of the network is not
normally desirable. For example, information about employees’ salaries should not be
freely available for anybody to look at. Data security is therefore an important
concern in a networked environment. Secondly, the danger of computer viruses
entering the network is greatly increased. A virus can infect any of the computers
on the network, and can quickly spread throughout the network causing significant
damage.
Computer networks can be classified into one of two groups, depending on their size
and function. A local area network (LAN) is the basic building block of any computer
network. A LAN can range from simple (two computers connected by a cable) to
complex (hundreds of connected computers and peripherals throughout a major
corporation). (See Figure 2.) The distinguishing feature of a LAN is that it is confined
to a limited geographic area.
A wide area network (WAN), on the other hand, has no geographical limit (see Figure
3). It can connect computers and other devices on opposite sides of the world. A
WAN is made up of a number of interconnected LANs. Perhaps the ultimate WAN is
the Internet.
Figure 3 – A Wide Area Network (WAN)
LANs typically have much higher transmission rates than WANs. Most LANs are
able to transmit data at around 100 Mbps (million bits per second), whereas WANs
generally transmit at less than 10Mbps. Another difference is the error rates in
transmission: the likely number of errors in data transmission is higher for a WAN
than for a LAN.
This distinction between LANs and WANs is made because of the locality principle.
The locality principle in computer networking states that computers are much more
likely to want to communicate with other computers that are geographically close,
than with those that are distant. For example, if you want to request a printout from
your PC, it makes much more sense to use the printer in the next room rather than one
that is hundreds of kilometres away. Because of the locality principle, network
designers tend to use higher performance hardware within a LAN compared to the
connections between different LANs that form a WAN.
You may sometimes hear about other classifications of networks, apart from LANs
and WANs. Although these terms are less commonly used than LAN and WAN, it is
still useful to know them. A CAN is a Campus Area Network: this is a collection of
LANs linked together with high performance hardware within a university or college
campus. Similarly a MAN, or Metropolitan Area Network, is a collection of LANs
linked together within a town or city.
3. Network Configuration
All networks have certain components, functions and features in common, shown in
Figure 4. These include:
Servers - computers that provide shared resources for network users
Clients - computers that access shared resources provided by servers
Media - the wires that make the physical connections
Shared data - files provided to clients by servers across the network
Shared peripherals - additional hardware resources provided by servers
Figure 4 – A typical network configuration
Even with these similarities, networks are divided into two broad categories:
Peer-to-peer networks
Server-based networks
Peer-to-peer networks are good choices for environments where:
There are 10 users or fewer
Users share resources, such as printers, but no specialised servers exist
Security is not an issue
The organization and the network will experience only limited growth within
the foreseeable future
Where these factors apply, a peer-to-peer network will probably be a better choice
than a server-based network.
As networks increase in size (as the number of connected computers, and the physical
distance and traffic between them, grows), more than one server is usually needed.
Spreading the networking tasks among several servers ensures that each task will be
performed as efficiently as possible. Servers must perform varied and complex tasks.
Servers for large networks have become specialized to accommodate the expanding
needs of users. For example, a network may have separate servers for file storage,
printing, email and for storing and running application software.
4. Network Topologies
The term topology, or more specifically, network topology, refers to the arrangement
or physical layout of computers, cables, and other components on the network.
"Topology" is the standard term that most network professionals use when they refer
to the network's basic design. In addition to the term "topology," you will find several
other terms that are used to define a network's design:
Physical layout
Design
Diagram
Map
A network's topology affects its capabilities. The choice of one topology over another
will have an impact on the:
Type of equipment that the network needs
Capabilities of the network
Growth of the network
Way the network is managed
Before computers can share resources or perform other communication tasks they
must be connected. Most networks use cable to connect one computer to another.
However, it is not as simple as just plugging a computer into a cable connecting to
other computers. Different types of cable—combined with different network cards,
network operating systems, and other components—require different types of
arrangements. To work well, a network topology takes planning. For example, a
particular topology can determine not only the type of cable used but also how the
cabling runs through floors, ceilings, and walls. Topology can also determine how
computers communicate on the network. Different topologies require different
communication methods, and these methods have a great influence on the network.
There are four basic types of network topology: bus, star, ring and mesh.
4.1 Bus topology
The bus topology is often referred to as a "linear bus" because the computers
are connected in a straight line. This is the simplest and most common method of
networking computers. Figure 5 shows a typical bus topology. It consists of a single
cable called a trunk (also called a backbone or segment) that connects all of the
computers in the network in a single line.
Figure 5 – The bus topology
When sending a signal from one computer on the network to another, network data in
the form of electronic signals is in fact sent to all the computers on the network.
However, only the computer whose address matches the address encoded in the
original signal accepts the information. All other computers reject the data. Because
only one computer at a time can send data on a bus network, the number of computers
attached to the bus will affect network performance. The more computers there are on
a bus, the more computers will be waiting to put data on the bus and, consequently,
the slower the network will be. Computers on a bus either transmit data to other
computers on the network or listen for data from other computers on the network.
They are not responsible for moving data from one computer to the next.
Consequently, if one computer fails, it does not affect the rest of the network.
Because the data, or electronic signal, is sent to the entire network, it travels from one
end of the cable to the other. If the signal is allowed to continue uninterrupted, it will
keep bouncing back and forth along the cable and prevent other computers from
sending signals. Therefore, the signal must be stopped after it has had a chance to
reach the proper destination address.
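The address-matching rule described above can be sketched in a few lines of Python. This is purely illustrative: the addresses and the Frame structure are invented for the example, not taken from the text.

```python
# Sketch of bus-style delivery: every computer attached to the trunk sees
# the signal, but only the one whose address matches accepts the data.
# Addresses and field names here are illustrative.

from dataclasses import dataclass

@dataclass
class Frame:
    dest: str
    payload: str

class BusComputer:
    def __init__(self, address):
        self.address = address
        self.received = []

    def hear(self, frame):
        # Accept only frames addressed to this computer; reject all others.
        if frame.dest == self.address:
            self.received.append(frame.payload)

def send_on_bus(computers, frame):
    # On a bus, the electronic signal reaches every attached computer.
    for c in computers:
        c.hear(frame)

bus = [BusComputer(a) for a in ("A", "B", "C")]
send_on_bus(bus, Frame(dest="B", payload="hello"))
```

Note that `send_on_bus` delivers the frame to every station, mirroring the fact that a bus is a shared medium; the filtering happens at the receiver, not on the cable.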
4.1.3 Terminator
To stop the signal from bouncing, a component called a terminator is placed at each
end of the cable to absorb free signals. Absorbing the signal clears the cable so that
other computers can send data.
In a bus topology, if a break in the cable occurs the two ends of the cable at the break
will not have terminators, so the signal will bounce, and all network activity will stop.
This is one of several possible reasons why a network will go "down." The computers
on the network will still be able to function as stand-alone computers; however, as
long as the segment is broken, they will not be able to communicate with each other
or otherwise access shared resources.
4.2 Star topology
In the star topology, cable segments from each computer are connected to a
centralised component called a hub. Figure 6 shows four computers and a hub
connected in a star topology. Signals are transmitted from the sending computer
through the hub to all computers on the network.
Because each computer is connected to a central point, this topology requires a great
deal of cable in a large network installation. Also, if the central point fails, the entire
network goes down. If one computer - or the cable that connects it to the hub - fails on
a star network, only the failed computer will not be able to send or receive network
data. The rest of the network continues to function normally.
4.3 Ring topology
The ring topology connects computers on a single circle of cable. Unlike the bus
topology, there are no terminated ends. The signals travel around the loop in one
direction and pass through each computer, which can act as a repeater to boost the
signal and send it on to the next computer. Figure 7 shows a typical ring topology
with one server and four workstations. The failure of one computer can have an
impact on the entire network.
4.5 Hybrid topologies
Many working topologies are hybrid combinations of the bus, star, ring, and mesh
topologies. Two of the more common are described below.
The star bus is a combination of the bus and star topologies. In a star-bus topology,
several star topology networks are linked together with linear bus trunks. Figure 9
shows a typical star-bus topology.
If one computer goes down, it will not affect the rest of the network. The other
computers can continue to communicate. If a hub goes down, all computers on that
hub are unable to communicate. If a hub is linked to other hubs, those connections
will be broken as well.
The star ring (sometimes called a star-wired ring) appears similar to the star bus. Both
the star ring and the star bus are centred in a hub that contains the actual ring or bus.
Figure 10 shows a star-ring network. Linear-bus trunks connect the hubs in a star bus,
while the hubs in a star ring are connected in a star pattern by the main hub.
Figure 10 – The star ring hybrid topology
4.6 Physical and logical topologies
Until now we have assumed that the word topology is used to refer only to the
physical layout of the network. In fact, we can talk about two kinds of topology:
physical and logical. A network's physical topology is the wire itself. A network's
logical topology is the way it carries signals on the wire. This is an important
distinction that will become clearer in the following discussion of the token ring
topology.
One method of transmitting data around a ring is called token passing. (A token is a
special series of bits that travels around a token-ring network. Each network has only
one token.) The token is passed from computer to computer until it gets to a computer
that has data to send. Figure 11 shows a token ring topology with the token. The
sending computer modifies the token, puts an electronic address on the data, and
sends it around the ring.
The data passes by each computer until it finds the one with an address that matches
the address on the data.
The receiving computer returns a message to the sending computer indicating that the
data has been received. After verification, the sending computer creates a new token
and releases it on the network. The token circulates within the ring until a workstation
needs it to send data.
Therefore the token ring network uses a logical ring topology – the token travels
around in a circle from computer to computer. However, the physical topology of a
token ring network is a star – the wires connecting the computers to each other are
connected via a central hub. This is sometimes referred to as a “star-shaped ring”
network.
The token ring avoids a common problem with bus topologies. If there are many
computers on the network a bus will often be busy, seriously affecting network
performance. However, with a token ring the network is never busy – each computer
must simply wait for the token to arrive and add its message.
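The take-turns behaviour of token passing can be sketched as a small simulation. This is a minimal illustration, assuming invented station names and a dictionary of pending messages; a station transmits only when the circulating token reaches it.

```python
# Minimal sketch of token passing: the token visits stations in ring order,
# and a station may transmit only while it holds the token.
# Station names and messages are illustrative.

def token_ring_round(stations, pending):
    """Pass the token once around the ring; each station with a pending
    message sends it when the token arrives at that station."""
    delivered = []
    for station in stations:          # the token visits stations in ring order
        if station in pending:
            delivered.append((station, pending.pop(station)))
    return delivered

stations = ["W1", "W2", "W3", "W4"]
pending = {"W3": "report.doc", "W1": "memo.txt"}
sent = token_ring_round(stations, pending)
```

Because transmissions happen strictly in token order, no two stations ever send at the same time, which is exactly why a token ring avoids the contention seen on a busy bus.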
Summary of Key Points
What Is a Network?
The primary reasons for networking are to share information, to share
hardware and software (reducing cost), and to centralise administration and
support
Potential disadvantages of computer networks are lack of security when
dealing with sensitive information, and the danger of computer viruses
infecting the system
A local area network (LAN) is the smallest form of network and is the
building block for larger networks
A wide area network (WAN) is a collection of LANs and has no geographical
limitation
The locality principle in computer networking states that a computer is more
likely to communicate with a computer that is nearby, than with one that is
distant
A campus area network (CAN) is a collection of LANs linked together on a
university or college campus
A metropolitan area network (MAN) is a collection of LANs linked together
within a town or city
Network Configuration
Networks are classified into two principal groups based on how they share
information: peer-to-peer networks and server-based networks
In a peer-to-peer network, all computers are equal. They can either share their
resources or use resources on other computers
In a server-based network, one or more computers act as servers and provide
the resources to the network. The other computers are the clients and use the
resources provided by the server
Features of the two major network types are summarized as follows:
Network Topologies
Mekelle University Faculty of Business & Economics
1. Introduction
Every network requires some hardware to make it work. Precisely what hardware is
required depends on what type of network is being constructed. The following is a
summary of some of the more common networking hardware.
2. Cabling
The vast majority of networks today are connected by some sort of wiring or cabling
that acts as a network transmission medium that carries signals between computers.
Although many cable types are available to meet the varying needs and sizes
of networks, from small to large, there are three primary cable types:
Coaxial
Twisted pair
Fibre-optic
2.1 Coaxial
At one time, coaxial cable was the most widely used network cabling. There were
a couple of reasons for coaxial cable's wide usage: it was relatively inexpensive, and it
was light, flexible, and easy to work with.
In its simplest form, coaxial cable consists of a core of copper wire surrounded by
insulation, a braided metal shielding, and an outer cover. Figure 1 shows the various
components that make up a coaxial cable.
The shielding protects transmitted data by absorbing stray electronic signals, called
noise, so that they do not get onto the cable and distort the data. The core of a coaxial
cable carries the electronic signals that make up the data. This wire core can be either
solid or stranded. If the core is solid, it is usually copper. Surrounding the core is an
insulating layer that separates it from the wire mesh. The braided wire mesh acts as a
ground and protects the core from electrical noise. Coaxial cable uses the BNC
connector to connect to computers and other devices.
The shielding means that external electrical noise and interference do not affect data
being sent over the inner copper cable. For this reason, coaxial cabling is a good
choice for longer distances.
There are two types of coaxial cable: thinnet and thicknet. Thicknet cabling is thicker,
and a better choice for longer distances, but is more expensive and more difficult to
work with. Thinnet coaxial cable can carry a signal for a distance of up to
approximately 185 meters before the signal starts to suffer from attenuation. Thicknet
cable can carry a signal for 500 meters. Therefore, because of thicknet's ability to
support data transfer over longer distances, it is sometimes used as a backbone to
connect several smaller thinnet-based networks.
2.2 Twisted-pair
In its simplest form, twisted-pair cable consists of two insulated strands of copper
wire twisted around each other. Figure 2 shows the two types of twisted-pair cable:
unshielded twisted-pair (UTP) and shielded twisted-pair (STP) cable.
UTP is the most popular type of twisted-pair cable and is fast becoming the most
popular LAN cabling. It is cheap and easy to use. However, its performance over long
distances is not as good as coaxial cable. The maximum cable length segment of UTP
is 100 meters. There are a number of different types (or categories) of UTP cable,
which differ in their specification and in the number of pairs of wire contained within
the cable. Most telephone systems use UTP cable (with the RJ11 connector), and
many LANs nowadays also use UTP (with the RJ45 connector). STP is higher quality
than UTP, but more expensive and less popular.
2.3 Fibre-optic
In fibre-optic cable, optical fibres carry digital data signals in the form of modulated
pulses of light. This is a relatively safe way to send data because, unlike copper-based
cables that carry data in the form of electronic signals, no electrical impulses are
carried over the fibre-optic cable. This means that fibre-optic cable cannot be tapped,
and its data cannot be stolen.
An optical fibre consists of an extremely thin cylinder of glass, called the core,
surrounded by a concentric layer of glass, known as the cladding. The fibres are
sometimes made of plastic. Plastic is easier to install, but cannot carry the light pulses
for as long a distance as glass.
Because each glass strand passes signals in only one direction, a cable includes two
strands in separate jackets. One strand transmits and one receives. A reinforcing layer
of plastic surrounds each glass strand, and Kevlar fibres provide strength. See Figure
3 for an illustration of fibre-optic cable. The Kevlar fibres in the fibre-optic connector
are placed between the two cables. Just as their counterparts (twisted-pair and coaxial)
are, fibre-optic cables are encased in a plastic coating for protection.
3. Networking hardware devices
3.1 Network Interface Card (NIC)
The Network Interface Card (NIC), also known as a network adaptor, acts as the
interface between the computer and the physical network connection. In most
networks, every computer must have a network interface card to be able to connect to
the network. NICs are usually specific to a particular type of cabling – for example, a
NIC may have either an RJ45 connector or a BNC connector – although it is possible
to get combo cards, which include more than one type of connector.
3.2 Transceivers
A transceiver acts as an interface between different types of cable.
3.3 Repeater
In a bus topology, signal loss can occur if the segments are too long. A repeater is a
device that connects two network segments and broadcasts data between them. It
amplifies the signal, thereby extending the usable length of the bus.
3.4 Hub
One network component that has become standard equipment in networks is the hub.
A hub acts as the central component in a star topology, and typically contains 4, 8, 16
or even more different ports for connecting to computers or other hubs. It is similar in
operation to a repeater, except that it broadcasts data received by any of the ports to
all other ports on the hub. Hubs can be active, passive or hybrid.
Most hubs are active; that is, they regenerate and retransmit signals in the same way
as a repeater does. Because hubs usually have eight to twelve ports for network
computers to connect to, they are sometimes called multiport repeaters. Active hubs
require electrical power to run. Some types of hubs are passive. They act as
connection points and do not amplify or regenerate the signal; the signal passes
through the hub. Passive hubs do not require electrical power to run. Advanced hubs
that will accommodate several different types of cables are called hybrid hubs.
3.5 Bridge
For large networks it is often necessary to partition them into smaller groups of nodes
to help isolate traffic and improve performance. A bridge is a device that acts as an
interface between two sets of nodes. For example, if a company’s network has been
partitioned into two subnets, for the sales department and administration department
respectively, a bridge will be placed between the two networks. If a computer on the
sales subnet sends data to another computer on the sales subnet, the bridge will not
pass on the data to the administration subnet. However, if the same computer sends
data to a computer on the administration subnet, it will be forwarded by the bridge.
Because not all data is passed onto the other subnet, network traffic is reduced.
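The bridge's forwarding decision described above amounts to a simple membership test. The following sketch is illustrative only; the hostnames and the subnet table are invented for the example.

```python
# Sketch of the bridge's filtering decision: traffic that stays within one
# subnet is not forwarded to the other side. The hostnames and subnet
# membership table are illustrative.

subnet_of = {
    "sales-pc1": "sales", "sales-pc2": "sales",
    "admin-pc1": "admin", "admin-pc2": "admin",
}

def bridge_forwards(source, destination):
    # Forward only when source and destination are on different subnets.
    return subnet_of[source] != subnet_of[destination]
```

With this rule, traffic between two sales computers never appears on the administration subnet, which is how the bridge reduces overall network traffic.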
3.6 Switch
A switch is similar to a bridge, except that it has multiple ports. A switch can also be
seen as a more intelligent hub – whereas a hub passes on all data to every port, a
switch will only pass data on to the port that it is intended for.
3.7 Router
A router is also used for connecting networks together. However, unlike a bridge, a
router can be used to connect networks that use different network technologies.
Routers are very commonly found in the hardware infrastructure that forms the basis
of the Internet.
The topic of routing in computer networking is a crucial one and has been the subject
of much research. We will return to this important topic in Handout 4 (Network
Architecture).
4. Wireless networking
Although most networks use physical connections between the network components,
recently wireless networking has been increasing in popularity. Wireless networks can
use infrared light, line-of-sight lasers, or radio waves to transmit data between nodes
without the need for physical cabling. They eliminate the need to install physical
cabling and offer a lot of flexibility for users using the network. However, they are
currently more expensive and slower than cable-based networks. As costs drop and
performance increases, wireless networks are sure to be increasingly popular in the
future.
There are two main types of hardware associated with wireless communication in
computing: Bluetooth and 802.11. Bluetooth only allows very short-range
transmission (typically less than 10m) and is intended primarily for cable-free
peripherals, such as mice and keyboards. 802.11, or wireless Ethernet, is the
standard for wireless networking of computers, and will be discussed in more detail in
Handout 4 (Network Architecture).
Summary of Key Points
Three primary types of cables are used with networks: coaxial, twisted-pair,
and fibre-optic
Coaxial cable comes in two varieties: thinnet and thicknet
Twisted-pair cable can be either shielded (STP) or unshielded (UTP)
Fibre-optic cables use light to carry digital signals
Fibre-optic cables provide the greatest protection from noise and intrusion, but
are more expensive and require more expertise to install and maintain
Network Interface Cards (NICs), or network adaptors, are required for a
computer to interface with the physical network connection
A transceiver acts as an interface between different types of cable
A repeater is a device that boosts a network signal. It is commonly used in the
bus topology to extend the usable length of the bus
Hubs are used to centralise data traffic and localise failures. If one cable
breaks, it will not shut down the entire network
Hubs can be active, passive or hybrid
Active hubs also act as repeaters, amplifying the signal and passing it on to
every port on the hub
Passive hubs provide no amplification, and require no power supply
Hybrid hubs allow connections between networks with different types of
cabling
A bridge acts as the interface between 2 subnets, passing on signals that are
intended for a different subnet
A switch is similar to a bridge, except that it has multiple ports
A router is also similar to a bridge, except that it can link together networks
that use different network technologies
Wireless networks use infrared, laser or radio waves to eliminate the need for
physical cabling
1. Introduction
In the first three handouts, we have looked at the physical aspects of a network. We
learned about cables and the various methods of connecting them so that we can share
data. Now that we can physically link computers, we need to learn how to gain access
to the wires and cables.
In this section, we will first examine how data is put together before it is sent on to the
wires of a computer network. Next, we examine the three principal methods used to
access the wires. The first method, called contention, is based on the principle of "first
come, first served." The second method, token passing, is based on the principle of
waiting to take turns. The third method, demand priority, is relatively new and is
based on prioritising access to the network. Last, we examine two of the most
common network systems (Ethernet and Token Ring).
Data usually exists as rather large files. However, networks cannot operate if
computers put large amounts of data on the cable at the same time. If a computer
sends large amounts of data it can cause other computers to wait (increasing the
frustration of the other users) while the data is being moved. There are two reasons
why putting large chunks of data on the cable at one time slows down the network:
Large amounts of data sent as one large unit tie up the network and make
timely interaction and communications impossible because one computer is
flooding the cable with data.
The impact of retransmitting large units of data further multiplies network
traffic.
These effects are minimized when the large data units are reformatted into smaller
packages. This way, only a small section of data is affected, and, therefore, only a
small amount of data must be retransmitted, making it relatively easy to recover from
the error. These packages are commonly called packets or frames, and are the basic
building blocks of network data communications.
When the operating system at the sending computer breaks the data into packets, it
adds special control information to each frame. This makes it possible to:
Send the original, disassembled data in small chunks
Reassemble the data in the proper order when it reaches its destination
Check the data for errors after it has been reassembled
Exactly what control information is added can vary, but all packets include at least the
source address, the data and the destination address. There are three different ways in
which packets can be addressed:
Unicast: packet is addressed to a single destination
Multicast: packet is addressed simultaneously to multiple destinations
Broadcast: packet is sent simultaneously to all stations on the network
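The splitting and control-information steps above can be sketched in Python. This is a minimal illustration, assuming invented field names (`src`, `dst`, `seq`) rather than any real protocol's header layout.

```python
# Sketch of packetising a large message: split the data into small chunks,
# attach control information (source, destination, sequence number), and
# reassemble the chunks in the proper order at the destination.
# The header field names are illustrative, not a real protocol format.

def to_packets(source, dest, data, size):
    return [
        {"src": source, "dst": dest, "seq": i, "data": data[i:i + size]}
        for i in range(0, len(data), size)
    ]

def reassemble(packets):
    # Use the sequence numbers to restore the original order, even if
    # packets arrived out of order.
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

pkts = to_packets("A", "B", "the quick brown fox", size=5)
```

The sequence number in each packet is what makes reassembly possible when packets arrive in a different order from the one in which they were sent.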
CO (connection-oriented) packet switching has more ‘overheads’: before transmission
can start, time must be spent setting up the virtual connection across the network, and
after it has finished more time must be spent closing the connection. However, once
transmission has commenced, bandwidth can be reserved so it is possible to guarantee
higher data rates, which is not possible with CL (connectionless) packet switching.
Therefore CO packet switching is well suited to real-time applications such as
streaming of video and/or sound. On the other hand, CL packet switching is simpler,
has fewer overheads, and allows multicast and broadcast addressing.
3. Routing
3.1 Routing tables
A routing table is stored in the RAM of a network device such as a bridge, switch or
router, and contains information about where to forward data to, based on its
destination address. For example, Figure 1 shows a simple network consisting of 3
switches and 6 computers. Each switch connects a different sub-network. Each
computer has an address consisting of a network number followed by a computer
number (e.g. computer A is in network number 1 and has computer number 2). The
routing table is shown for switch 3, and indicates where the next destination (or next
hop) should be for reaching each address on the network.
For instance, if computer C sends data to computer A, then switch 3 will first look at
the destination address (1, 2), and then look up this address in its routing table. It finds
that the next hop for this address is port 5 of the switch, and so sends the data to this
port and no other.
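A routing table of this kind is essentially a lookup from destination address to next hop. The sketch below is illustrative: the addresses follow the (network number, computer number) scheme described above, but the specific port numbers are assumed, not taken from the figure.

```python
# Sketch of a routing-table lookup like the one described for switch 3.
# Addresses are (network number, computer number) pairs; the port numbers
# here are illustrative, not read from Figure 1.

routing_table_switch3 = {
    (1, 2): 5,   # e.g. computer A (network 1, computer 2) via port 5
    (1, 3): 5,
    (2, 1): 6,
    (3, 1): 1,   # locally attached computer
}

def next_hop(table, destination):
    # Forward the data only to the port listed for this destination.
    return table[destination]
```

When data for computer A arrives, the switch looks up address (1, 2), finds port 5, and sends the data to that port and no other, exactly as in the worked example.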
The next question is how the data in the routing table is determined. We will now
look at some of the common strategies used to route packets in packet-switching
networks. We will first survey some of the key characteristics of such strategies, and
then examine some specific routing strategies.
The primary function of a packet switching network is to accept packets from a source
station and deliver them to a destination station. To accomplish this a route through
the network must be established. Often, more than one route is possible. Thus, the
‘best’ route must be determined. There are a number of requirements that this decision
should take into account:
Correctness
Simplicity
Robustness
Stability
Fairness
Optimality
Efficiency
The first two requirements are straightforward: correctness means that the route must
lead to the correct destination; and simplicity means that the algorithm used to make
the decision should not be too complex. Robustness has to do with the ability of the
network to cope with network failures and overloads. Ideally, the network should
react to such failures without losing packets and without breaking virtual circuits.
Stability means that the network should not overreact to such failures: the
performance of the network should remain reasonably stable over time. A tradeoff
exists between fairness and optimality. The optimal route for one packet is the
shortest route (measured by some performance criterion). However, giving one packet
its optimal route may adversely affect the delivery of other packets. Fairness means
that overall most packets should have a reasonable performance. Finally, any routing
strategy involves some overheads and processing to calculate the best routes.
Efficiency means that the benefits of these overheads should outweigh their cost.
With these requirements in mind, we are now in a position to assess the various
design elements that contribute to a routing strategy. Table 1 lists these elements.
The simplest technique is to choose the route with the fewest hops, where a hop is
the movement of a packet from one network node to the next; a network node could
be a computer, router, or other network device. A slightly more
advanced technique is to assign a cost to each link in the network. A shortest path
algorithm can then be used to calculate the lowest cost route. For example, in Figure
2, there are 5 network devices. The weighted edges between them represent the costs
of the connections. To send data from device 1 to device 5, the shortest path is via
devices 3 and 4. However, the shortest number of hops would be via device 3 only.
The cost used could be related to the throughput (i.e. speed) of the link, or related to
the current queueing delay on the link.
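The least-cost calculation described above is commonly done with Dijkstra's shortest-path algorithm. The sketch below uses an assumed set of edge costs, chosen so that it reproduces the example: the least-cost route from device 1 to device 5 goes via devices 3 and 4, while the fewest-hops route would go via device 3 only.

```python
import heapq

# Dijkstra's shortest-path algorithm over a weighted graph, as a sketch of
# least-cost routing. The edge costs are illustrative, chosen so that the
# cheapest route from device 1 to device 5 is 1 -> 3 -> 4 -> 5, even though
# 1 -> 3 -> 5 has fewer hops.

graph = {
    1: {2: 5, 3: 2},
    2: {1: 5, 5: 6},
    3: {1: 2, 4: 2, 5: 7},
    4: {3: 2, 5: 2},
    5: {2: 6, 3: 7, 4: 2},
}

def least_cost_path(graph, source, dest):
    # Priority queue of (cost so far, node, path taken).
    queue = [(0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dest:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return None

cost, path = least_cost_path(graph, 1, 5)
```

Here the two-hop route 1 → 3 → 5 costs 9, while the three-hop route 1 → 3 → 4 → 5 costs only 6, illustrating why minimising cost and minimising hops can give different answers.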
Two key characteristics of the routing decision are when and where it is made. The
decision time is determined by whether we are using a CO network or a CL network.
For CL networks the route is established independently for each packet. For CO
networks the route is established once at the time the virtual circuit is set up. The
decision place refers to which node(s) are responsible for the routing decision. The
most common technique is distributed routing, in which each node has the
responsibility to forward each packet as it arrives. For centralised routing all routing
decisions are made by a single designated node. The danger of this approach is that if
this node is damaged or lost the operation of the network will cease. In source routing,
the routing path is established by the node that is sending the packet.
Almost all routing strategies will make their routing decisions based upon some
information about the state of the network. The network information source refers to
where this information comes from, and the network information update timing refers
to how often this information is updated. Local information means using only
information from the current node's outgoing links. An adjacent information
source means any node which has a direct connection to the current node. The update
timing of a routing strategy can be continuous (updating all the time), periodic (every
t seconds), or occur when there is a major load or topology change.
Now that we are familiar with some of the characteristics and elements of routing
strategies, we will examine some specific examples.
3.2.3.1 Fixed routing
Fixed routing is a simple scheme, and it works well in a reliable network with a stable
load. However, it does not respond to network failures, or changes in network load
(e.g. congestion).
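A fixed routing scheme amounts to a static lookup table at each node. The sketch below (with invented node numbers and entries) shows why it cannot respond to failures or congestion: the table never changes at run time.

```python
# A sketch of fixed routing: each node holds a static next-hop table.
# The entries here are illustrative, not taken from the handout.
routing_table = {
    # destination: next hop
    2: 3,
    4: 3,
    5: 3,
}

def next_hop(destination):
    """Look up the fixed next hop; the table never changes at run time."""
    return routing_table[destination]

print(next_hop(5))  # 3 -- always, even if the link to node 3 fails
```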
3.2.3.2 Flooding
With flooding, all possible routes between the source and the destination are tried.
Therefore, as long as a path exists, at least one packet will reach the destination. This
means that flooding is a highly robust technique, and is sometimes used to send
emergency information. Furthermore, at least one packet will have used the least cost
route. This can make it useful for initialising routing tables with least cost routes.
Another property of flooding is that every node on the network will be visited by a
packet. This means that flooding can be used to propagate important information on
the network, such as routing tables.
A major disadvantage of flooding is the high network traffic that it generates. For this
reason it is rarely used on its own, but as described above it can be a useful technique
when used in combination with other routing strategies.
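The behaviour described above can be sketched in a few lines. The topology below is invented for illustration; note that every node is visited, and that many more copies are generated than a single delivery would need.

```python
from collections import deque

# Hypothetical topology: adjacency list (node -> neighbours).
links = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}

def flood(source):
    """Forward a copy of the packet on every outgoing link except the one
    it arrived on; count the copies to show the traffic flooding causes."""
    copies = 0
    queue = deque([(source, None)])   # (current node, arrival link)
    seen = set()                      # nodes that have already relayed
    while queue:
        node, came_from = queue.popleft()
        if node in seen:
            continue                  # duplicate copy: drop it
        seen.add(node)
        for neighbour in links[node]:
            if neighbour != came_from:
                copies += 1
                queue.append((neighbour, node))
    return seen, copies

visited, copies = flood(1)
print(sorted(visited))  # [1, 2, 3, 4, 5] -- every node is visited
print(copies)           # 6 -- several copies for a single delivery
```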
3.2.3.3 Random routing
Random routing has the simplicity and robustness of flooding with far less traffic
load. With random routing, instead of each node forwarding packets to all outgoing
links, the node selects only one link for transmission. This link is chosen at random,
excluding the link on which the packet arrived. Often the decision is completely
random, but a refinement of this technique is to apply a probability to each link. This
probability could be based on some performance criterion, such as throughput.
Like flooding, random routing requires no network information. The traffic
generated is much reduced compared to flooding. However, unlike flooding, random
routing is not guaranteed to find the shortest route from the source to the destination.
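A minimal sketch of the weighted variant described above, with invented link names and throughput figures: one outgoing link is chosen at random, with probability proportional to its throughput, excluding the link the packet arrived on.

```python
import random

# Hypothetical node with three outgoing links; the weights stand in for a
# performance criterion such as throughput (higher = more likely chosen).
link_throughput = {"to_B": 100, "to_C": 10, "to_D": 1}

def choose_link(arrival_link, rng=random):
    """Pick one outgoing link at random, weighted by throughput,
    excluding the link the packet arrived on."""
    candidates = {l: w for l, w in link_throughput.items() if l != arrival_link}
    links, weights = zip(*candidates.items())
    return rng.choices(links, weights=weights, k=1)[0]

print(choose_link("to_D"))  # usually "to_B", the highest-throughput link
```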
3.2.3.4 Adaptive routing
In almost all packet switching networks some form of adaptive routing is used. The
term adaptive routing means that the routing decisions that are made change as
conditions on the network change. The two principal factors that can influence
changes in routing decisions are failure of a node or a link, and congestion (if a
particular link has a heavy load it is desirable to route packets away from that link).
For adaptive routing to be possible, information about the state of the network must
be exchanged among the nodes. This has a number of disadvantages. First, the routing
decision is more complex, thus increasing the processing overheads at each node.
Second, the information that is used may not be up-to-date. To get up-to-date
information requires a continuous exchange of routing information between
nodes, thus increasing network traffic. Therefore there is a tradeoff between quality of
information and network traffic overheads. Finally, it is important that an adaptive
strategy does not react too slowly or too quickly to changes. If it reacts too slowly it
will not be useful. But if it reacts too quickly it may result in an oscillation, in which
all network traffic makes the same change of route at the same time.
However, despite these dangers, adaptive routing strategies generally offer real
benefits in performance, hence their popularity. Two examples of adaptive routing
strategies are distance-vector routing and link-state routing.
Distance-vector routing
Link-state routing
In the link state technique, each network device periodically tests the speed of all of
its links. It then broadcasts this information to the entire network. Each device can
therefore construct a graph with weighted edges that represents the network
connectivity and performance (e.g. see Figure 2). The device can then use a shortest
path algorithm such as Dijkstra’s algorithm to compute the best route for a packet to
take.
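As a sketch of that final step, the code below runs Dijkstra's algorithm on a small invented weighted graph of the kind each device could build from the broadcast link-state reports.

```python
import heapq

# Hypothetical weighted graph built from broadcast link-state reports.
graph = {
    1: {2: 7, 3: 1},
    2: {1: 7, 4: 2},
    3: {1: 1, 4: 3},
    4: {2: 2, 3: 3, 5: 1},
    5: {4: 1},
}

def dijkstra(source):
    """Dijkstra's algorithm: lowest-cost distance from source to every node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry: a shorter path was already found
        for neighbour, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

print(dijkstra(1))  # {1: 0, 2: 6, 3: 1, 4: 4, 5: 5}
```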
4. Access methods
In networking, to access a resource is to be able to use that resource. The set of rules
that defines how a computer puts data onto the network cable and takes data from the
cable is called an access method. Once data is moving on the network, access methods
help to regulate the flow of network traffic.
If data is to be sent over the network from one user to another, or accessed from
a server, there must be some way for the data to access the cable without running into
other data (a collision). And the receiving computer must have reasonable assurance
that the data has not been destroyed in a data collision during transmission. Access
methods need to be consistent in the way they handle data. If different computers
were to use different access methods, the network would fail because some methods
would dominate the cable. Access methods prevent computers from gaining
simultaneous access to the cable. By making sure that only one computer at a time can
put data on the network cable, access methods ensure that the sending and receiving
of network data is an orderly process.
There are three major access methods: carrier-sense multiple-access, token passing
and demand priority.
Carrier sense multiple access methods can be divided into two subtypes: carrier sense
multiple access with collision detection (CSMA/CD) and carrier sense multiple access
with collision avoidance (CSMA/CA).
In CSMA/CD, each computer on the network, including clients and servers, checks
the cable for network traffic. Only when a computer "senses" that the cable is free and
that there is no traffic on the cable can it send data. Once the computer has transmitted
data on the cable, no other computer can transmit data until the original data has
reached its destination and the cable is free again. Remember, if two or more
computers happen to send data at exactly the same time, there will be a data collision.
When that happens, the two computers involved stop transmitting for a random period
of time and then attempt to retransmit. Each computer determines its own waiting
period; this reduces the chance that the computers will once again transmit
simultaneously. The waiting time is calculated using an algorithm known as
exponential backoff: the first time a collision occurs each computer waits a random
time t1, 0 ≤ t1 ≤ d (where d is a constant). If a second collision occurs with the same
packet, the wait time will be t2, 0 ≤ t2 ≤ 2d. The third time the wait time will be t3, 0 ≤
t3 ≤ 4d, and so on: the maximum waiting time will be doubled after each successive
collision. This doubling continues for a maximum of 10 times, when the maximum waiting
time reaches its peak of 2^10 d (= 1024d). After 16 successive collisions, transmission
of the packet is aborted and an error is reported.
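The backoff rule described above can be written down directly. In this sketch d is an arbitrary slot-time constant; after the n-th successive collision the wait is drawn uniformly from [0, 2^(n-1) d], the doubling is capped at 2^10 d, and the 17th collision aborts the transmission.

```python
import random

SLOT = 1.0  # the constant d from the text (one slot time, units arbitrary)

def backoff_wait(collision_count, rng=random):
    """Exponential backoff: after the n-th successive collision the wait
    is uniform in [0, 2^(n-1) * d], capped at 2^10 * d; the 17th
    successive collision aborts the transmission with an error."""
    if collision_count >= 17:
        raise RuntimeError("transmission aborted after 16 collisions")
    exponent = min(collision_count - 1, 10)   # doubling stops at 2^10
    return rng.uniform(0, (2 ** exponent) * SLOT)

print(backoff_wait(1) <= 1 * SLOT)    # True: first collision, wait in [0, d]
print(backoff_wait(3) <= 4 * SLOT)    # True: third collision, wait in [0, 4d]
```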
CSMA/CD is known as a contention method because computers on the network
contend, or compete, for an opportunity to send data. This might seem like an
inefficient way to put data on the cable, but current implementations of CSMA/CD
are so fast that users are not even aware they are using a contention access method.
With CSMA/CD, the more computers there are on the network, the more network
traffic there will be. With more traffic, collisions tend to increase, which slows the
network down, so CSMA/CD can be a slow-access method. After each collision, both
computers will have to try to retransmit their data. If the network is very busy, there is
a chance that the attempts by both computers will result in collisions with packets
from other computers on the network. If this happens, four computers (the two
original computers and the two computers whose transmitted packets collided with
the original computer's retransmitted packets) will have to attempt to retransmit.
These retransmissions can slow the network to a near standstill.
CSMA/CA is the least popular of the major access methods. In CSMA/CA, each
computer signals its intent to transmit before it actually transmits data. In this way,
computers sense when a collision might occur; this allows them to avoid transmission
collisions. Unfortunately, broadcasting the intent to transmit data increases the
amount of traffic on the cable and slows down network performance.
CSMA/CA is not commonly used in wired networks, but it has become the standard
for wireless networking. We will return to wireless networking standards later in this
handout.
A collision domain is a part of a LAN (or an entire LAN) where two computers
transmitting at the same time will cause a collision. Because switches, bridges and
routers do not forward unnecessary packets, the different ports of these devices operate
in different collision domains. Repeaters and hubs broadcast all packets to all ports, so
their ports are in the same collision domain.
Figure 3 – Illustration of collision domains
Figure 3 shows a simple network with one repeater (‘R’), two hubs, a switch and 10
computers (‘C’). Because hubs broadcast all packets to all ports, if computers 2 and 4
attempted to send at the same time there would be a collision, hence they are in the
same collision domain. However, because a switch will only forward a packet if it is
intended for the other subnet, every port of the switch is in a separate collision
domain. So if computer 2 tried to send to computer 4 at the same time as computer 7
tried to send to computer 10, there would be no collision.
While the token is in use by one computer, other computers cannot transmit data.
Because only one computer at a time can use the token, no contention and no collision
take place, and no time is spent waiting for computers to resend tokens due to network
traffic on the cable.
and links and verifying that they are all functioning. As in CSMA/CD, two computers
using the demand-priority access method can cause contention by transmitting at
exactly the same time. However, with demand priority, it is possible to implement a
scheme in which certain types of data will be given priority if there is contention. If
the hub or repeater receives two requests at the same time, the highest priority request
is serviced first. If the two requests are of the same priority, both requests are serviced
by alternating between the two.
The following table summarises the major features of each access method:
Feature/function        CSMA/CD           CSMA/CA           Token passing    Demand priority
Type of communication   Broadcast based   Broadcast based   Token based      Hub based
Type of access method   Contention        Contention        Non-contention   Contention
5. Common network architectures
The token ring architecture was developed in the mid-1980s by IBM. It is the
preferred method of networking by IBM and is therefore found primarily in large
IBM mini- and mainframe installations.
We introduced the token ring architecture in Handout 1. The table below gives a
summary of the features of token ring LANs.
Feature Description
Physical topology Star
Logical topology Ring
Type of communication Baseband
Access method Token passing
Transfer speeds 4-16 Mbps
Cable type STP or UTP
Hardware for token ring networks is centred on the hub, which houses the actual ring.
This combination of a logical ring and a physical star topology is sometimes referred
to as a “star-shaped ring”. A token ring network can have multiple hubs. STP or UTP
cabling connects the computers to the hubs. Fibre-optic cable, together with repeaters,
can be used to extend the range of token ring networks. Token ring networks are not
that commonly used these days.
Ethernet has become the most popular way of networking desktop computers and is
still very commonly used today in both small and large network environments.
ETHERNET FRAME FORMAT

Field   Preamble   SFD      Destination   Source    Length    Data            FCS
                            Address       Address
Size    7 bytes    1 byte   6 bytes       6 bytes   2 bytes   46-1500 bytes   4 bytes
Each frame begins with a 7-byte preamble. Each byte has the identical pattern
10101010, which is used to help the receiving computer synchronise with the sender.
This is followed by a 1-byte start frame delimiter (SFD), which has the pattern
10101011. Next are the destination and source addresses, which take up 6 bytes each.
The data can be of variable length (46-1500 bytes), so before the data itself there is a
2-byte field that indicates the length of the following data field. Finally there is a 4-
byte frame check sequence, used for cyclic redundancy checking. Therefore the
minimum and maximum lengths of an Ethernet frame are 72 bytes and 1526 bytes
respectively.
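The minimum and maximum frame lengths follow directly from the field sizes: the fixed fields total 26 bytes, and the data field ranges from 46 to 1500 bytes.

```python
# Frame field sizes in bytes, as given in the frame format above.
PREAMBLE, SFD, DST, SRC, LENGTH, FCS = 7, 1, 6, 6, 2, 4
OVERHEAD = PREAMBLE + SFD + DST + SRC + LENGTH + FCS   # 26 bytes

def frame_length(data_bytes):
    """Total frame length; the data field must be 46-1500 bytes."""
    if not 46 <= data_bytes <= 1500:
        raise ValueError("data field must be 46-1500 bytes")
    return OVERHEAD + data_bytes

print(frame_length(46))    # 72   (minimum frame)
print(frame_length(1500))  # 1526 (maximum frame)
```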
Although there have been a number of different standards for the Ethernet architecture
over the years, a number of features have remained the same. The table below
summarises the general features of Ethernet LANs.
Feature Description
Traditional topology Linear bus
Other topologies Star bus
Type of communication Baseband
Access method CSMA/CD
Transfer speeds 10/100/1000 Mbps
Cable type Thicknet/thinnet coaxial or UTP
The first phase of Ethernet standards had a transmission speed of 10Mbps. Three of
the most common of these are known as 10Base2, 10Base5 and 10BaseT. The
following table summarises some of the features of each specification.
ETHERNET STANDARDS

Feature                    10Base2           10Base5            10BaseT
Topology                   Bus               Bus                Star bus
Cable type                 Thinnet coaxial   Thicknet coaxial   UTP (Cat. 3 or higher)
Simplex/half/full duplex   Half duplex       Half duplex        Half duplex
Data encoding              Manchester,       Manchester,        Manchester,
                           asynchronous      asynchronous       asynchronous
Connector                  BNC               DIX or AUI         RJ45
Max. segment length        185 metres        500 metres         100 metres
Note that although the 10BaseT standard uses a physical star-bus topology, it still
uses a logical bus topology. This combination is sometimes referred to as a “star-
shaped bus”. In addition to these three, a number of standards existed for use with
fibre-optic cabling, namely 10BaseFL, 10BaseFB and 10BaseFP.
The next phase of Ethernet standards was known as fast Ethernet, and increased
transmission speed up to 100Mbps. Fast Ethernet is probably the most common
standard in use today. The Manchester encoding technique used in the original
Ethernet standards is not well suited to high frequency transmission so new encoding
techniques were developed for fast Ethernet networks. Three of the most common fast
Ethernet standards are summarised below, although others do exist (e.g. 100BaseT2).
The most recent phase of Ethernet standards has increased transmission speeds up to
1000Mbps, although sometimes at the expense of some other features, such as
maximum segment length. Because of the transmission speed, it has become known
as Gigabit Ethernet, and the most common standards are summarised below.
Finally, the IEEE has also published a number of standards for wireless Ethernet
networks. The original standard was known as 802.11, was very slow (around 2Mbps)
and was quickly superseded by more efficient standards. 802.11 now usually refers to
the family of standards that followed after this original standard.
The CSMA/CA access method has become the standard access method for use in
wireless networking.
Summary of Key Points
The term network architecture refers to the combination of physical/logical
topology, communication method, physical hardware and access method
chosen to implement the network
Ethernet and token ring are two of the most popular network architectures
There have been many standards published by the IEEE for Ethernet
networks: the original standards had a transmission speed of 10Mbps; fast
Ethernet has a speed of 100Mbps; and Gigabit Ethernet has a speed of
1000Mbps
Wireless networking is becoming increasingly popular – the three wireless
Ethernet standards are known as 802.11b, 802.11a and 802.11g
Mekelle University Faculty of Business & Economics
Handout 5 – Protocols
1. Introduction to protocols
Protocols are rules and procedures for communicating. The term "protocol" is used in
a variety of contexts. For example, diplomats from one country adhere to rules of
protocol designed to help them interact smoothly with diplomats from other countries.
Rules of protocol apply in the same way in the computer environment. When several
computers are networked, the rules and technical procedures governing their
communication and interaction are called protocols.
When talking about protocols, the term completely reliable delivery means that every
packet is guaranteed to reach its destination without errors. Protocols that do not
provide completely reliable delivery are called best effort delivery schemes. Best
effort delivery schemes simply try their best to deliver a packet without errors, but do
not guarantee anything.
2. Flow control
Flow control refers to techniques used to regulate the flow of data from a source
transmitter to a destination receiver. Sometimes the source may transmit data at a
faster rate than the destination can process it, and data will be lost. Flow control
addresses this problem.
The simplest form of flow control is called stop-and-wait flow control. Figure 1
illustrates how stop-and-wait flow control works. Whenever the sending computer
(computer A in this case) sends a message over the network, it starts a timer. If no
acknowledgement is received from the receiving computer before a certain amount of
time has expired it will assume that the packet was lost, and it will retransmit it.
Otherwise, when the acknowledgement is received it will proceed to transmit the next
packet.
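This behaviour can be simulated without a real network by treating packet loss as a random event. In the sketch below the timeout-and-retransmit cycle is collapsed into a retry loop; the loss rate and random seed are invented for illustration.

```python
import random

def send_stop_and_wait(packets, loss_rate=0.3, rng=random.Random(42)):
    """Sketch of stop-and-wait: send one packet, wait for its ACK,
    retransmit on timeout. Loss is simulated with a random drop."""
    transmissions = 0
    for packet in packets:
        while True:
            transmissions += 1
            delivered = rng.random() >= loss_rate   # did it survive the channel?
            if delivered:
                break                               # ACK received: next packet
            # otherwise: timer expires, retransmit the same packet
    return transmissions

sent = send_stop_and_wait(["p1", "p2", "p3"])
print(sent)  # at least 3: every loss costs a full extra round trip
```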
The stop and wait flow control technique provides completely reliable delivery, and
works well if the data to be transmitted consists of a small number of large packets.
However, in computer networks data is almost always transmitted in a large number
of small packets. In this case, stop-and-wait flow control causes the sender to spend a
lot of time waiting for acknowledgements to arrive. A more efficient technique is
called sliding window flow control, and is illustrated in Figure 2.
Using sliding window flow control the receiving computer first establishes a buffer (a
block of memory to store received packets). It informs the sending computer of the
size of the buffer (for example 4 packets), and tells it that it is ready to receive data.
The sending computer then transmits packets for all available space in the buffer,
without waiting for acknowledgements. Only when the buffer is full does it wait to
receive an acknowledgement. The receiver will remove packets from the buffer and
process them. It will send an acknowledgement after it has processed each packet.
Whenever the sender receives an acknowledgement it knows that there is some space
in the buffer, so it can transmit another packet. This can be visualised as a window
sliding along the data that needs to be sent. The size of the window is the same as the
size of the buffer. If a packet is inside the window it can be transmitted; if it is to the
right it has been sent already; if it is to the left it is still unsent.
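The window mechanism can be sketched as follows: up to window_size packets are in flight at once, and each acknowledgement frees a slot for the next packet. The event log is a simplification (acknowledgements are assumed to arrive in order).

```python
from collections import deque

def sliding_window_send(packets, window_size=4):
    """Sketch of sliding-window flow control: keep up to window_size
    unacknowledged packets in flight; each ACK slides the window right."""
    in_flight = deque()
    events = []
    for seq, packet in enumerate(packets):
        if len(in_flight) == window_size:
            acked = in_flight.popleft()        # wait for the oldest ACK
            events.append(f"ack {acked}")
        in_flight.append(seq)
        events.append(f"send {seq}")
    while in_flight:                           # drain the remaining ACKs
        events.append(f"ack {in_flight.popleft()}")
    return events

log = sliding_window_send(list("abcdef"), window_size=4)
print(log[:6])  # ['send 0', 'send 1', 'send 2', 'send 3', 'ack 0', 'send 4']
```

Note how four packets go out before any acknowledgement is needed, unlike stop-and-wait, where each send must wait for its own ACK.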
Figure 3 shows the efficiency improvements achieved by using sliding window flow
control. Stop and wait flow control is only really useful when the data to be
transmitted consists of a small number of large packets, which is not normally the
case. For high-speed networks, sliding window flow control is essential.
3. Error control
Error control refers to mechanisms that detect and correct errors that occur in
transmission. The most common techniques for error control are based on the
following ingredients:
Error detection (as discussed in Handout 3)
Positive acknowledgement: The destination returns a positive
acknowledgement message if a packet is successfully received and error-free.
Retransmission after timeout: The source retransmits a packet that has not
been acknowledged after a predetermined amount of time.
Negative acknowledgement and retransmission: The destination returns a
negative acknowledgment to packets in which an error is detected. The source
retransmits such packets.
Collectively these mechanisms are all referred to as automatic repeat request (ARQ).
The effect of ARQ is to turn an unreliable data link into a reliable one. ARQ provides
completely reliable delivery of packets.
Two types of error could occur in this technique. First, the data packet itself could be
lost or corrupted. If the data packet is corrupted the destination will detect the error
using an error detection technique and discard the corrupted packet. Whether the
packet was lost or corrupted, after the timeout interval the source will retransmit.
Second, the destination could successfully receive the data packet, but the ACK could
be lost or corrupted. In this case no acknowledgment will be received before the
timeout interval so the packet will be retransmitted.
This form of error control is based on the sliding window flow control technique. The
source will send a number of frames without waiting for acknowledgements from the
destination. The number it sends is determined by the size of the destination's buffer.
The destination will send positive acknowledgements (RR = Receive Ready) for
successfully received packets. These positive acknowledgements are cumulative, i.e.
an acknowledgement for packet 4 implicitly acknowledges packets 1 to 3. If the
destination detects an error, it discards the packet and waits for further packets to
arrive. If the destination receives an out-of-order packet it will send a negative
acknowledgement (REJ = Reject) back to the source telling it the number of the next
packet it expects to receive. The source will then retransmit this packet together with
all subsequent packets that have been transmitted. This technique is called go-back-N
because when an error occurs the sender has to go back a number of packets and
retransmit them all. Figure 4 illustrates the go-back-N error control technique.
3.3 Selective-reject ARQ
Network software operates at many different levels within the sending and receiving
computers. Each of these levels, or tasks, is governed by one or more protocols. These
protocols, or rules of behaviour, are standard specifications for formatting and moving
the data. When the sending and receiving computers follow the same protocols,
communication is assured. For example, a protocol that is responsible for sending an
email from one mail server to another is very different from a protocol that is
responsible for transmitting the binary 1s and 0s onto the network cabling. Because of
this layered structure, a number of protocols working together at different levels is
often referred to as a protocol stack.
With the rapid growth of networking hardware and software, a need arose for standard
protocols that could allow hardware and software from different vendors to
communicate. The Open Systems Interconnection (OSI) reference model is a response
to this need.
The OSI reference model architecture divides network communication into seven
layers. Each layer covers different network activities, equipment, or protocols. Figure
5 represents the layered architecture of the OSI reference model. (Layering specifies
different functions and services as data moves from one computer through the
network cabling to another computer.) The OSI reference model defines how each
layer communicates and works with the layers immediately above and below it. For
example, the session layer communicates and works with the presentation and
transport layers.
Each layer provides some service or action that prepares the data for delivery over the
network to another computer. The lowest layers (1 and 2) define the network's
physical media and related tasks, such as putting data bits onto the network interface
cards (NICs) and cable. The highest layers define how applications access
communication services. The higher the layer, the more complex is its task.
The layers are separated from each other by boundaries called interfaces. All requests
are passed from one layer, through the interface, to the next layer. Each layer builds
upon the standards and activities of the layer below it.
Each layer provides services to the next-higher layer and shields the upper layer from
the details of how the services below it are actually implemented. At the same time,
each layer appears to be in direct communication with its associated layer on the other
computer. This provides a logical, or virtual, communication between the same layers
on the two computers, as shown in Figure 6. In reality, actual communication between
adjacent layers takes place on one computer only, and actual communication between
computers occurs at the physical layer only. At each layer, software implements
network functions according to a set of protocols.
Before data is passed from one layer to another, it is broken down into packets. At
each OSI layer, the NOS adds additional formatting or addressing to the packet,
which is needed for the packet to be successfully transmitted across the network. At
the receiving end, the packet passes through the layers in reverse order. A software
module at each layer reads the information on the packet, strips it away, and passes
the packet up to the next layer. When the packet is finally passed up to the application
layer, the addressing information has been stripped away and the packet is in its
original form, which is readable by the receiver.
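The wrapping and unwrapping described above can be illustrated with strings standing in for headers. The layer names are from the OSI model; the header contents are invented for illustration.

```python
# A toy sketch of OSI-style encapsulation: each layer wraps the packet in
# its own header on the way down and strips it on the way up.
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link"]

def send(data):
    """Moving down the stack: each layer prepends its header."""
    for layer in LAYERS:
        data = f"[{layer}]{data}"
    return data

def receive(packet):
    """Moving up the stack: each layer strips the header it recognises."""
    for layer in reversed(LAYERS):
        header = f"[{layer}]"
        assert packet.startswith(header)
        packet = packet[len(header):]
    return packet

wire = send("hello")
print(wire)           # outermost header first: [data-link][network]...hello
print(receive(wire))  # hello -- back in its original form
```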
With the exception of the lowest layer in the OSI networking model (i.e. the physical
layer), no layer can pass information directly to its counterpart on another computer.
Instead, information on the sending computer must be passed down through each
successive layer until it reaches the physical layer. The information then moves across
the networking cable to the receiving computer and up that computer's networking
layers until it arrives at the corresponding layer. For example, when the network layer
sends information from computer A, the information moves down through the data-
link and physical layers on the sending side, over the cable, and up the physical and
data-link layers on the receiving side to its final destination at the network layer on
computer B.
The purpose of each of the 7 layers of the OSI model is summarised below.
Application layer: This layer relates to the services that directly support user
applications, such as software for file transfers, database access, and e-mail.
Presentation layer: You can think of the presentation layer as the network's
translator. When computers from dissimilar systems (such as IBM, Apple, and
Sun) need to communicate, a certain amount of translation and byte reordering
must be done. This layer translates information from computers' applications
into a commonly recognised, intermediary format.
Session layer: This layer allows two applications on different computers to
open, use, and close a connection called a session. A session is a highly
structured dialog between two workstations. The session layer is responsible
for managing this dialog and handles such things as login requests and
password verification.
Transport layer: The transport layer ensures that packets are delivered error
free, in sequence, and without losses or duplications. It is responsible for
dividing up a large block of data into smaller packets and reassembling them
at the receiving computer. It is not concerned with the route the data takes to
reach its destination.
Network layer: The network layer is responsible for addressing messages
and translating logical addresses and names into physical addresses. This layer
also determines the route from the source to the destination computer.
Data-link layer: This layer controls the electrical impulses that enter and
leave the network cable, and is responsible for controlling the flow of data
from sender to receiver.
Physical layer: This layer transmits the unstructured, raw bit stream over a
physical medium (such as the network cable). The physical layer is totally
hardware-oriented and deals with all aspects of establishing and maintaining a
physical link between communicating computers.
Summary of Key Points
Mekelle University Faculty of Business & Economics
Handout 6 – TCP/IP
1. Introduction
Although there are a large number of different protocols that operate at different
layers of the OSI model, one protocol has assumed primary importance for
communicating both within and between networks. This protocol is known as the
Transmission Control Protocol/Internet Protocol (TCP/IP). As its name suggests,
TCP/IP is the protocol that makes communication via the Internet possible, hence its
importance.
TCP/IP has become the standard protocol used for interoperability among many
different types of computers. (Interoperability simply means different types of
computers being able to communicate with each other.) This interoperability is a
primary advantage of TCP/IP. Most networks support TCP/IP as a protocol.
TCP/IP is not actually a single protocol, but a set of protocols that operate at different
levels. The levels involved in TCP/IP do not exactly match those of the OSI reference
model. Instead of seven layers, TCP/IP specifies only four:
Network interface layer
Internet layer
Transport layer
Application layer
Each of these layers corresponds to one or more layers of the OSI reference model.
The table below shows the correspondence between OSI layers and TCP/IP layers.
TCP/IP is an industry standard and is an open protocol. This means it is not controlled
by a single company, and is less subject to compatibility issues.
3. Overview of TCP/IP
As was stated above, TCP/IP consists of a number of different protocols that perform
a variety of functions and operate at a number of different levels. An overview of the
protocols included in TCP/IP is shown below.
One of the most important protocols in the TCP/IP suite is the IP protocol. This is
used at the Internet layer of TCP/IP (i.e. the Network layer in the OSI model) and is
used to attach network addresses to packets. The IP protocol provides best effort
delivery between network stations.
Clearly there is not one single network administrator responsible for the whole of the
Internet, so breaking it down into smaller subnets makes sense. Using the 4 number
(or 32 bit) IP addresses makes it relatively easy to segment the task of managing
computer networks. This is done by splitting the address into two parts: the network
ID (or prefix), and the computer ID (or suffix). As an example, we will first consider
the largest type of subnet: class A networks.
Subnets are defined by fixing a certain number of the 32 bits in the IP address, and
allowing the others to vary. In class A networks, the first 8 bits of the IP address are
fixed (i.e. the first of the four numbers), allowing network administrators to assign the
other 24 bits (three numbers) as computer addresses. As 24 bits are available for use
in the subnet, class A networks can contain up to 2^24 different computers. There are
only a very small number of class A networks, and all have already been assigned to
large companies. For example, IBM have the class A network 9.*.*.* and Apple
have 17.*.*.*.
In a class B network the first 16 bits of the IP address are fixed. They can have up to
2^16 (65,536) different computers on their network. All class B networks have also
already been assigned. Microsoft is an example of a company with a class B network.
Class C networks have the first 24 bits of the IP address fixed, allowing only 256 (2^8)
different computer addresses. This is the only type of subnet that it is still possible to
buy.
Network   Prefix   Maximum number   Suffix   Maximum computers
class     bits     of networks      bits     per network
A         7        128              24       16,777,216
B         14       16,384           16       65,536
C         21       2,097,152        8        256
For example, given the IP address 128.255.10.1, we know immediately that this
is on a class B network. We can tell this because if we rewrite the address in binary
form (10000000.11111111.00001010.00000001), the first two bits are 10,
which always indicate a class B network. So the first 16 bits represent the network ID
(128.255) and the last 16 bits are the computer ID (10.1).
To tell what class of network an IP address is on, we do not always need to rewrite
the address in binary form. Any address beginning with a number between 0 and 127 is
on a class A network, between 128 and 191 is on a class B network, and between 192
and 223 is on a class C network. Any IP address starting with any number greater than
223 is reserved for special uses.
4.1.2 Exercise 1
Using the class-based system of IP addressing, what can you deduce
from the following IP addresses?
i.e. What class network are they on, and what are the network ID and computer ID?
i. 223.1.0.129
ii. 2.255.15.254
iii. 131.192.161.1
(go to the end of this handout for the answers)
You cannot use every IP address. There are some addresses, or sets of addresses, that
are reserved for special uses. The table below summarises these.
We can see that any IP address that has a valid network ID, but all binary 0’s for the
computer ID, is the network number. The network number is a way of referring to an
entire subnet. Therefore this address cannot be assigned to a computer. Similarly if
the computer ID is all binary 1’s it is a broadcast address. The broadcast address is
used if you want to send a packet to every computer on a subnet. Therefore this
address can also not be assigned to a computer on the network. For example, a class C
network provides 256 different values for the computer ID, but only 254 of these can
be assigned to computers.
Most subnets have at least one router. A subnet without a router would be isolated
and could not communicate with any other networks. A router must also have an IP
address on the subnet, and by convention the first IP address after the network number
is assigned to the default router. Note that this is not a rule, just a convention (it is
usually done but you do not have to do it).
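Python's standard ipaddress module can compute these special addresses directly. A minimal sketch for a class C network (the network 200.100.10.0 is illustrative, and the "default router" line follows the convention just described, not a rule):

```python
import ipaddress

# A class C network: the first 24 bits are fixed (mask 255.255.255.0).
net = ipaddress.ip_network("200.100.10.0/24")

network_number = net.network_address      # computer ID all binary 0's
broadcast      = net.broadcast_address    # computer ID all binary 1's
default_router = network_number + 1       # conventional, not mandatory
usable_hosts   = net.num_addresses - 2    # exclude the two special addresses

print(network_number, broadcast, default_router, usable_hosts)
# 200.100.10.0 200.100.10.255 200.100.10.1 254
```

The result matches the text: of the 256 values for the computer ID, only 254 can be assigned to computers.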
In addition to these there are a number of ranges of IP addresses that are specified as
‘non-routable’ addresses. This means that routers on the Internet will never forward
them. This is because they are reserved for local network use. If every computer in the
world that was on a network connected to the Internet had to have a unique IP address
we would have run out of IP addresses many years ago. But many of these computers
are on networks that only connect to the Internet through a single router, gateway
computer or dial-up connection. Therefore, on networks like this we only need a
single routable IP address; the rest of the computers can be given non-routable
addresses. A number of computers on networks in different parts of the world can
share the same non-routable IP address provided they are not directly connected on
the same network. Internet routers are programmed to ignore these addresses so there
can be no address conflict.
The ranges of non-routable IP addresses are specified by RFC 1918. (RFC stands for
Request for Comments. RFCs are electronic documents that are used for publishing
Internet standards. Anybody can submit or comment on an RFC.) The addresses are:
10.0.0.0 - 10.255.255.255
172.16.0.0 - 172.31.255.255
192.168.0.0 - 192.168.255.255
The third range (192.168.*.*) is the range used on the FBE network.
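A quick way to check whether an address falls in one of the RFC 1918 ranges, sketched with Python's standard ipaddress module (the CIDR forms 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 cover exactly the three ranges listed above):

```python
import ipaddress

# The three non-routable ranges from RFC 1918, in CIDR notation.
PRIVATE = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_non_routable(addr):
    """True if Internet routers would never forward packets to addr."""
    a = ipaddress.ip_address(addr)
    return any(a in net for net in PRIVATE)

print(is_non_routable("192.168.0.4"))    # True  (FBE network range)
print(is_non_routable("200.111.23.12"))  # False (routable address)
```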
See Figure 1 for an illustration. Here we have two LANs, called A and B. Both
contain 3 PCs with the same IP addresses (192.168.0.2 to 192.168.0.4).
Similarly the routers that connect the LANs to the Internet have the same IP address
on the LAN (192.168.0.1). However, the Internet IP addresses of each of these
routers are different (200.111.23.12 and 197.210.33.12). Since all routers
are programmed to ignore addresses in the range 192.168.0.0 –
192.168.255.255 there is no address conflict. Note also that these computers can
never receive packets from the Internet, because their router would ignore them. All
packets for these subnets must be addressed to the routable IP address of the router.
Therefore every network connected to the Internet must have at least 1 routable IP
address.
4.1.5 Exercise 2
Which of the following IP addresses are invalid addresses for computers on the
Internet? If they are invalid, explain why.
i. 130.22.256.22
ii. 222.222.255.222
iii. 240.12.3.24
iv. 128.128.0.128
v. 200.128.0.255
vi. 255.255.255.255
vii. 127.0.0.1
viii. 13.13.0.13
ix. 10.240.12.11
(go to the end of this handout for the answers)
Figure 1 – Use of non-routable IP addresses in local networks
When this class-based system was introduced, it was thought that it would easily
provide enough IP addresses for the Internet. However, due to the rapid increase in
the number of Internet users worldwide, IP addresses eventually came to be in short
supply. Because of this, in 1994 a new system was introduced: classless inter-domain
routing, or CIDR.
CIDR uses subnet masks to subdivide networks. The 32 bits in a subnet mask indicate
which of the bits in an IP address are a part of the prefix (network ID), and which are
a part of the suffix (computer ID). If a bit in the subnet mask is a 1, that bit is in the
prefix and so must be fixed in the IP addresses of a subnet. If a bit in the subnet mask
is a 0 then it is part of the suffix and it is allowed to vary within a subnet. For
example, in a class C network only the last 8 bits can vary, so the subnet mask is
255.255.255.0 (or 11111111.11111111.11111111.00000000). For class A
and B networks the subnet masks are 255.0.0.0 and 255.255.0.0 respectively.
However, subnet masks allow much more flexibility than the class-based system. For
example, suppose we wish to have a subnet with 1000 IP addresses. Under the class-
based system we would have to allocate a class B network, which has a total of
65,534 usable addresses, approximately 64,000 of which would be unused. Using CIDR
we can instead specify the subnet mask 255.255.252.0
(i.e. 11111111.11111111.11111100.00000000).
Now we are using 22 bits to specify the network ID, and 10 bits for the computer ID.
This allows a total of 2^10, or 1024, different IP addresses in the subnet, which
minimises the number of unused addresses.
Because we can now have any number of bits in the network ID part of the IP address,
the IP address is generally written with a slash at the end followed by the number of
bits in the network ID, e.g. 200.123.192.2/22. Because of this notation a subnet
with 22 bits for the network ID is known as a “slash 22 network”.
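The slash 22 example can be checked with Python's standard ipaddress module (the network 200.123.192.0/22 is taken from the example address 200.123.192.2/22 above):

```python
import ipaddress

# A "slash 22 network": 22 bits of network ID, 10 bits of computer ID.
net = ipaddress.ip_network("200.123.192.0/22")

print(net.netmask)        # 255.255.252.0
print(net.num_addresses)  # 1024, i.e. 2**10
print(ipaddress.ip_address("200.123.192.2") in net)  # True
print(ipaddress.ip_address("200.123.196.1") in net)  # False (23rd bit differs)
```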
The CIDR system has temporarily alleviated the shortage of IP addresses on the
Internet, but addresses will still run out one day. Because of this a new system,
known as IPv6, has been devised that uses 128-bit addresses.
4.1.7 Exercise 3
If we have a subnet with the network number 21.100.19.0, and a
subnet mask of 255.255.255.192 (i.e. a slash 26 network) which of the
following IP addresses would be on the subnet?
i. 21.100.19.1
ii. 21.101.19.1
iii. 21.100.19.128
iv. 21.100.19.62
(go to the end of this handout for the answers)
4.1.8 Exercise 4
Look at the network diagram in Figure 2 and answer the following questions:
i. What class network are the computers A, B, D and E on? What about
computer C?
ii. If computer E wanted to send a packet to computer C, what IP address would
it send that packet to?
iii. Can you identify any problems with the assignment of IP addresses and
default gateways in this network?
(go to the end of this handout for the answers)
Figure 2 – Network diagram
4.2 TCP and UDP
UDP (User Datagram Protocol) and TCP (Transmission Control Protocol) are both
higher-level protocols than IP (they operate at level 4 of the OSI model), and provide
the communication link between the application program and IP.
4.3 ARP and RARP
Every network device (e.g. NIC, router, bridge, etc.) has a unique hardware address.
This address is known as the MAC (media access control) address. MAC addresses
are different to IP addresses: they are a 48-bit binary code and they never change –
they are permanently assigned to the device at manufacturing time. IP addresses, on
the other hand, are assigned by software and so they can change during the lifetime of
a device. In low-level protocols, all addressing is performed using MAC addresses.
ARP stands for the address resolution protocol. It operates at the Network layer of
OSI, and the Internet layer of TCP/IP. ARP is responsible for translating from IP
addresses to MAC addresses. RARP stands for the reverse address resolution
protocol, and is responsible for translating from MAC addresses to IP addresses.
Because of the service provided by ARP and RARP, all protocols above them in the
OSI model are able to use IP addresses only when referring to network devices.
For example, suppose that a computer COM1 with IP address 10.0.0.2 wanted to send
a message to a computer COMSERVER with IP address 10.0.0.1. Before any
communication is possible COM1 must know the MAC address of COMSERVER.
Stored in the RAM of COM1 will be an ARP cache. This cache will contain a list of all
IP-MAC translations that COM1 knows about. If there is no entry for COMSERVER in
the ARP cache on COM1, COM1 will broadcast an ARP Request packet to the network.
COMSERVER will receive this broadcast and notice that the target IP address in the
message is the same as its own. Therefore it will send a unicast ARP Reply back to
COM1 with the required MAC address (see Figure 3). Notice that the ARP Request
must be broadcast to the whole network, as COM1 does not yet know the MAC
address to send it to. But the ARP reply from COMSERVER can be unicast because
COMSERVER knows the MAC address of COM1 from the ARP Request packet. After
COM1 receives the ARP Reply it can communicate directly with COMSERVER. It will
also add the IP-MAC translation for COMSERVER to its ARP cache.
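The cache-then-broadcast logic can be sketched as follows. This is a toy model, not a real ARP implementation: the dictionary stands in for the Request/Reply exchange on the wire, and the MAC address is illustrative.

```python
# The "network": who would answer an ARP Request for each IP address.
network = {"10.0.0.1": "00:1A:2B:3C:4D:5E"}   # COMSERVER (illustrative MAC)

arp_cache = {}   # COM1's ARP cache, held in RAM

def resolve(ip):
    """Return the MAC address for an IP address, caching the answer."""
    if ip in arp_cache:
        return arp_cache[ip]       # cache hit: no network traffic needed
    mac = network[ip]              # stands in for ARP Request / ARP Reply
    arp_cache[ip] = mac            # add the IP-MAC translation to the cache
    return mac

print(resolve("10.0.0.1"))  # cache miss: "broadcasts", then caches the answer
print(resolve("10.0.0.1"))  # cache hit: answered straight from the ARP cache
```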
4.4 ICMP
ICMP is the Internet Control Message Protocol. ICMP is used to transmit status and
error messages between network stations. For example, when you type a URL
into Internet Explorer and you get the message “Page cannot be displayed”, it is often
an ICMP packet that is responsible for reporting the error.
4.5 DHCP
Every computer on a network must have a unique address. This address is attached to
any packets of data that are intended for transmission to the computer. If the network
is using the TCP/IP protocol, these addresses will be IP addresses (i.e. they will
consist of 4 numbers between 0 and 255 separated by dots). If two computers have the
same address it causes an address conflict, and network problems will result. There
are two ways of ensuring that all computers have unique addresses: static IP
addressing and dynamic IP addressing. In static IP addressing each computer is
assigned a unique address by the network administrator. It will keep this address until
the network administrator assigns a different one. It is the administrator’s
responsibility to ensure that the same address is not assigned twice. In dynamic IP
addressing the assignment of addresses is handled automatically by a program
running on the server. This program is responsible for ensuring that every computer
has a unique address. Addresses are leased to clients for a limited period of time, after
which the client must request a new lease.
DHCP stands for the Dynamic Host Configuration Protocol, and is the protocol used
for requesting and assigning dynamic IP addresses. A DHCP application will typically
run on the network server. Clients then use the DHCP protocol to obtain their IP
address lease from this application. Figure 4 illustrates the communication that occurs
during dynamic IP address leasing. If a computer COM1 wanted to obtain a dynamic
IP address, it would first broadcast a DHCP Discover packet to the network to find
out if there was a DHCP server available. The DHCP service running on
COMSERVER will receive this message and broadcast a DHCP Offer packet offering a
particular address. It cannot use a unicast transmission, as COM1 does not yet have an
IP address. COM1 receives the DHCP Offer and decides to accept, so it sends a DHCP
Request packet back to COMSERVER. Finally COMSERVER sends an
acknowledgement back to COM1 to confirm that the IP address has been assigned (a
DHCP ACK packet).
Figure 4 – Obtaining an IP address lease using the DHCP protocol
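The server-side bookkeeping behind this Discover/Offer/Request/ACK exchange can be sketched as follows (a toy model, not a real DHCP implementation; the address pool and MAC addresses are illustrative):

```python
class DhcpServer:
    """Toy DHCP server: leases the first free address from its pool."""

    def __init__(self, pool):
        self.free = list(pool)    # addresses available for lease
        self.leases = {}          # MAC address -> leased IP address

    def lease(self, mac):
        if mac in self.leases:    # client renewing an existing lease
            return self.leases[mac]
        ip = self.free.pop(0)     # Offer the first free address
        self.leases[mac] = ip     # Request accepted -> send DHCP ACK
        return ip

server = DhcpServer(["192.168.0.10", "192.168.0.11"])
print(server.lease("00:1A:2B:3C:4D:5E"))  # 192.168.0.10
print(server.lease("AA:BB:CC:DD:EE:FF"))  # 192.168.0.11
```

Note that the broadcast nature of the real Discover and Offer messages is not modelled here; the sketch shows only how the server guarantees that every client receives a unique address.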
4.6 DNS
DNS stands for the Domain Name Service. Although high-level protocols in TCP/IP
use IP addresses to communicate, it is easier for people using the computers to
identify them by names, such as COM1 and COMSERVER. These names are known as
host names. DNS is the protocol used to obtain host name to IP address translation
information between computers on the network. Typically every network will have at
least one DNS server. Clients needing to know translations will contact the DNS
server using the DNS protocol to obtain the required information. On a local network,
the host name can just be a single word, for example COM1 or COMSERVER. On the
Internet the name will consist of a sequence of words separated by dots, for example
www.yahoo.com or www.bbc.co.uk. In the simplest case there is a one-to-one mapping
between these computer names and IP addresses: every IP address corresponds to a
single computer name and vice versa.
The DNS server will maintain a list of which IP address maps to which computer
name, so that it can translate between the two. For instance, if a user requests a
directory listing from the computer COM1 then the NOS must first find out the IP
address that corresponds to the name COM1, and then send a request for the directory
listing to that IP address. The process of translating a computer name into an IP
address is known as name resolution.
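Name resolution reduced to its essence is a table lookup, which can be sketched as follows (a toy model; the entries are the illustrative names used above, not real DNS records):

```python
# The DNS server's table of computer name -> IP address mappings.
dns_table = {
    "COM1":      "10.0.0.2",
    "COMSERVER": "10.0.0.1",
}

def resolve_name(host_name):
    """Translate a computer name into an IP address (name resolution)."""
    return dns_table.get(host_name)   # None if this server has no record

print(resolve_name("COMSERVER"))  # 10.0.0.1
```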
Note that the individual parts of a computer name do not correspond to the
individual numbers of its IP address. For example, if www.bbc.co.uk corresponds to the IP address
27.21.225.129, then it does not follow that 129 represents ‘.uk’, and 225 represents
‘.co’, and so on. The naming hierarchy is decided on by the local network
administrator, based normally upon the structure of the organisation it represents. For
example, Figure 5 shows a sample naming hierarchy for the ‘.et’ domain. If there
were a computer called fbe-server in the fbe subdivision of the domain, it would have
the name fbe-server.fbe.mekelle.edu.et. The number of different segments to a
computer name (in this example it is 5) is determined by the naming hierarchy. There
is no global standard. Each organisation can choose how to structure names in its
hierarchy.
The Internet contains a number of DNS servers. None of these servers knows the
names and addresses of every computer on the Internet. DNS uses a system known as
distributed lookup to enable every DNS server to be able to translate any address.
This means that each DNS server is responsible for providing a translation service for
a certain subset of computers only. If it receives a request that it cannot answer, it will
forward the request to another DNS server that will know the answer. For example, in
Figure 5 the DNS server at mekelle.edu.et provides a translation service for the
‘.edu.et’ subdivision. If it receives a request for an address that does not end in
‘edu.et’ it will forward it to the root DNS server for the ‘et’ domain.
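The forwarding behaviour of distributed lookup can be sketched as follows (a toy two-level hierarchy; all names, addresses and the parent relationship are illustrative):

```python
class DnsServer:
    """Toy DNS server: answers for its own subset of names,
    forwards everything else to a parent server."""

    def __init__(self, records, parent=None):
        self.records = records    # names this server is responsible for
        self.parent = parent      # where to forward unknown requests

    def lookup(self, name):
        if name in self.records:
            return self.records[name]
        if self.parent is not None:
            return self.parent.lookup(name)   # forward the request
        return None                           # no server knows this name

root_et = DnsServer({"www.telecom.net.et": "80.1.2.3"})
edu_et  = DnsServer({"fbe-server.fbe.mekelle.edu.et": "192.168.0.2"},
                    parent=root_et)

print(edu_et.lookup("www.telecom.net.et"))  # forwarded to the root server
```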
4.7 SNMP
SNMP stands for the Simple Network Management Protocol. It was developed as a
network management tool for networks running TCP/IP. Using SNMP, network
administrators can administer servers and other network devices from remote
workstations.
4.8 SMTP
The Simple Mail Transfer Protocol (SMTP) was defined by RFC 821 and is the
standard protocol for transferring emails between hosts. The way in which emails are
transferred on computer networks will be discussed in more detail in Handout 7
(Network Operating Systems and Applications).
4.9 FTP
The File Transfer Protocol (FTP) uses the TCP protocol as the underlying transport
protocol. The purpose of FTP is to safely and efficiently transport files over computer
networks.
4.10 Telnet
The TELNET protocol is used for providing remote terminal access over a network.
For example, using TELNET a user can log in to another computer somewhere else
on the network and take part in an interactive session on that computer. TELNET also
uses TCP as its underlying basis for communications.
4.11 TCP/IP Utilities
TCP/IP also provides a number of command-line utilities that can be useful when
troubleshooting networks. You can use any of these utilities at the DOS command
prompt in Windows.
4.11.1 Ping
To test if your network connection is complete between two computers, you can use
the Packet Internet Groper, better known as ping. The ping utility works by sending a
message to a remote computer. If the remote computer receives the message, it
responds with a reply message (see Figure 6). The reply consists of the remote
workstation's IP address, the number of bytes in the message, how long it took to
reply - given in milliseconds (ms) - and the time-to-live (TTL) value. If you
receive back the message "Request timed out," this means that the remote workstation
did not respond before the timeout expired. This might be the result of heavy
network traffic or it might indicate a physical disconnection in the path to the remote
workstation. An example of using the ping utility to check the connection to a
computer called AWASA is shown in Figure 7.
4.11.2 Tracert
Another utility that documents network performance is called tracert. While the ping
utility merely lets us know that the connection from A to B is complete, tracert
informs us of the route and number of hops the packet of data took to arrive at its
destination. An example of using the tracert utility to trace the route to a computer
called AWASA is shown in Figure 7.
4.11.3 Ipconfig
An example of using the ipconfig utility is shown in Figure 7. The output lists the
current IP address of the computer, the subnet mask, and the default gateway. The
subnet mask indicates how many bits of the IP address form the network ID. Because the
first three numbers in the subnet mask are 255, the first 24 bits of the IP address are
fixed, which corresponds to a class C network under the class-based system. If this computer
needs to send a packet of data to a computer outside of this subnet, it must first send it
to the default gateway. The default gateway is a computer or router on the subnet that
is responsible for forwarding packets to addresses outside the subnet.
4.11.4 Route
Every computer and network routing device stores a routing table in its RAM. A
routing table stores information about which routers to send network packets to. The
route command can be used to display and modify the routing table of a computer.
4.11.5 Nslookup
Nslookup is a utility that can be used to manually query the DNS database. It can be a
useful troubleshooting tool if the DNS server is not working correctly.
4.11.6 Netstat
The netstat command can be used to display the currently active TCP connections on
a computer.
Figure 7 – The ping, tracert and ipconfig utilities
Summary of Key Points
DHCP (Dynamic Host Configuration Protocol) is the protocol used in
assigning dynamic IP addresses.
DNS (Domain Name Service) is the protocol used in translating between host
names and IP addresses.
SNMP (Simple Network Management Protocol) can be used by network
administrators to remotely administer network devices.
SMTP (Simple Mail Transfer Protocol) is used for transferring email
messages.
FTP (File Transfer Protocol) is used for simple file transfers.
Telnet is used for running remote sessions over a network.
Ping, tracert, ipconfig, route, nslookup and netstat are useful TCP/IP
troubleshooting tools.
Exercise 1 - Answers
i. Because the first number in the IP address is between 192 and 223 we know
that this is a class C network. Therefore the first 24 bits specify the network ID
(223.1.0), and the final 8 bits represent the computer ID (129).
ii. Because the first number in the IP address is between 0 and 127 we know that
this is a class A network. Therefore the first 8 bits specify the network ID (2),
and the next 24 bits represent the computer ID (255.15.254).
iii. Because the first number in the IP address is between 128 and 191 we know
that this is a class B network. Therefore the first 16 bits specify the network ID
(131.192), and the final 16 bits represent the computer ID (161.1).
Exercise 2 - Answers
i. This address is invalid as the third number is 256 – the highest possible value
is 255.
ii. Valid address.
iii. This address is invalid as the first number is above 223, so it is reserved and
cannot be assigned to computers.
iv. Valid address.
v. This is not a valid address as the last number is 255, which represents the
directed broadcast address, and cannot be assigned to computers.
vi. This is not a valid address as it represents the limited broadcast address – it
will broadcast to all computers on the subnet.
vii. This is not a valid address as it represents the loopback address and cannot be
assigned to computers. This is used for troubleshooting purposes, and will
send a message to the local computer.
viii. Valid address.
ix. Not a valid address as this is a non-routable address – it will be ignored by
Internet routers so the computer will never receive any packets.
Exercise 3 – Answers
First we should note that the subnet mask indicates that the first 26 bits represent the
network ID, and the last 6 bits the computer ID. Now we write the network ID in
binary form: 21.100.19.0 corresponds to
00010101.01100100.00010011.00000000
(the first 26 bits are the network ID; the last 6 bits are the computer ID). Therefore, so long as the first 26
bits of an IP address are the same as indicated above, it will be on the subnet. If any
are different it will not be. Therefore the range of IP addresses for this subnet is
21.100.19.0 to 21.100.19.63.
i. This is on the same subnet as it is in the range specified above.
ii. This is not on the same subnet as it is not in the range specified. You can
check this by writing the address in binary form – you will find that one of the
first 26 bits is different.
iii. Again, this is not on the same subnet because it is not in the range specified. In
this case the 25th bit is different to the network number of the subnet.
iv. This is on the same subnet as it is in the range specified.
Exercise 4 - Answers
Mekelle University Faculty of Business & Economics
Novell's NetWare is the most familiar and popular example of a NOS in which the
client computer's networking software is added on to its existing computer operating
system. The desktop computer needs both operating systems in order to handle stand-
alone and networking functions together.
A computer's operating system coordinates the interaction between the computer and
the programs (applications) it is running. It controls the allocation and use of
hardware resources such as:
Memory
CPU time
Disk space
Peripheral devices
1.2 Multitasking
A multitasking operating system, as the name suggests, provides the means for a
computer to process more than one task at a time. A true multitasking operating
system can run as many tasks as there are processors (CPUs). If there are more tasks
than processors, the computer must arrange for the available processors to devote a
certain amount of time to each task, alternating between tasks until all are completed.
With this system, the computer appears to be working on several tasks at once.
Because the interaction between the stand-alone operating system and the NOS is
ongoing, a pre-emptive multitasking system offers certain advantages. For example,
when the situation requires it, the pre-emptive system can shift CPU activity from a
local task to a network task.
In a stand-alone system, when the user types a command that requests the computer to
perform a task, the request goes over the computer's local bus to the computer's CPU.
For example, if you want to see a directory listing on one of the local hard disks, the
CPU interprets and executes the request and then displays the results in a directory
listing in the window. In a network environment, however, when a user initiates a
request to use a resource that exists on a server in another part of the network, the
request has to be forwarded, or redirected, away from the local bus, out onto the
network, and from there to the server with the requested resource. This forwarding is
performed by the redirector.
Redirector activity originates in a client computer when the user issues a request for a
network resource or service. Figure 1 shows how a redirector forwards requests to the
network. The user's computer is referred to as a client because it is making a request
of a server. The request is intercepted by the redirector and forwarded out onto the
network. The server processes the connection requested by client redirectors and gives
them access to the resources they request. In other words, the server services - or
fulfils - the request made by the client.
Using the redirector, users don't need to be concerned with the actual location of data
or peripherals, or with the complexities of making a connection.
The role of the NOS on a server is to process and act upon requests from clients
(redirectors) for network resources managed by the server. For example, in Figure 2, a
user is requesting a directory listing on a shared remote hard disk. The request is
forwarded by the redirector on to the network, where it is passed to the file and print
server containing the shared directory. The request is granted, and the directory listing
is provided.
The server is also responsible for controlling the way in which resources are shared
over the network. Sharing is the term used to describe resources made publicly
available for access by anyone on the network. Most NOSs not only allow sharing,
but also determine the degree of sharing. For example, an office manager wants
everyone on the network to be familiar with a certain document (file), so she shares
the document. However, she controls access to the document by sharing it in such a
way that:
Some users will be able only to read it
Some users will be able to read it and make changes in it
Two security models have evolved for keeping data and hardware resources safe:
Password-protected shares
Access permissions
These models are also called "share-level security" (for password-protected shares)
and "user-level security" (for access permissions).
To simplify the task of managing users in a large network, NOSs allow for the
creation of user groups. By classifying individuals into groups, the administrator can
assign privileges to the group. All group members have the same privileges, which
have been assigned to the group as a whole. When a new user joins the network, the
administrator can assign the new user to the appropriate group, with its accompanying
rights and privileges.
2. Network Applications
Computer networking has revolutionised the way people use computers. This section
will briefly examine some of the applications of computer networking that have led to
this massive change. In particular we will look at the Internet and electronic mail (or
email).
The Internet is a vast network of networks, the ultimate WAN, consisting of tens of
thousands of businesses, universities, and research organizations with millions of
individual users and using a variety of different network architectures.
What is now known as the Internet was originally formed in 1969 as a military
network called ARPAnet (Advanced Research Projects Agency network) as part of
the United States Department of Defense. The network opened to non-military users
in the 1970s, when universities and companies doing defence-related research were
given access, and flourished in the late 1980s as most universities and many
businesses around the world started to use the Internet. In 1993, when commercial
Internet service providers were first permitted to sell Internet connections to
individuals, usage of the network grew tremendously. There were millions of new
users within months, and a new era of computer communications began. Today, it is
estimated that over 500 million people use the Internet worldwide. The table below
breaks this number down by region.
Every site on the Internet has an address, just like people have PO Box numbers at
their local post office. On the Internet addresses are called URLs (Uniform Resource
Locators). URLs are written as a number of words separated by dots, for example
www.yahoo.com. The word after the final dot (e.g. com) is the top-level domain of
the address. The domain indicates the category of the web site. The table below lists some of the
more common categories of address on the Internet.
The World Wide Web (WWW) is a way of browsing the information on the Internet
in a pleasant, easy-to-understand format. Text can be mixed with graphics, video, and audio
to provide multimedia (i.e. many different media) Internet content.
This is all made possible by using a special communications protocol, called the
Hypertext Transfer Protocol (HTTP). You may have noticed when using the Internet
that many URLs begin with the letters “http://” - this means that the page of
information will be transmitted using the Hypertext Transfer Protocol. Pages of
multimedia Internet content are commonly written in a special language called HTML
(the Hypertext Markup Language).
One of the more recent innovations in the use of the Internet is instant messaging.
Using instant messaging software two users in different parts of the world can take
part in an on-line conversation using their personal computers. Text typed at one
computer will be “instantly” transmitted to the screen of the other. Instant messaging
provides for much faster and more interactive communication than electronic mail.
When most people think of applications of the Internet they probably think first of
electronic mail, or email. Originally email was a way of sending simple text messages
to different users over local area networks. However, nowadays email can be used to
send multimedia content such as audio, video or even computer software to a user
anywhere in the world.
Email is made possible by using the Simple Mail Transfer Protocol (SMTP). SMTP
specifies how electronic mail messages are exchanged between computers using TCP.
In order to use email, it is necessary to install software on both the sending and
receiving computer. Email uses the client-server method to allow mail to be
exchanged. Client computers exchange messages with a mail server that is
responsible for ensuring that the message reaches its destination. On the server
computer each user is assigned a specific mailbox. This electronic mailbox is just like
a normal PO Box – mail is stored there until a user logs on to collect their mail. Each
electronic mailbox has a unique email address. Email addresses are divided into two
parts: the user name and the mail server name. These two parts are separated by an “@”
character. For example, Elizabeth@telecom.net.et is a valid email address. The user
name is “Elizabeth”, and the mail server that is responsible for collecting the mail is
located at the computer called “telecom.net.et”. In this case “telecom.net.et” is a mail
server running at Ethiopian Telecom in Addis Ababa. Remember that this computer
name will also have an associated IP address to identify it on the Internet.
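Splitting an email address into its two parts is straightforward; for example, in Python:

```python
# Split the example address from the text at the "@" character.
user, mail_server = "Elizabeth@telecom.net.et".split("@")

print(user)         # Elizabeth
print(mail_server)  # telecom.net.et
```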
SMTP is the protocol used to send email on the Internet. The user receiving the email
will need to use another protocol to access the incoming mail from the mail server.
Two different protocols exist for this purpose: the Post Office Protocol (POP3) and
the newer alternative, Internet Message Access Protocol (IMAP).
The potential of the computer networks and the Internet to change our lives still
further is great. As processor speed and network bandwidth increases many new
applications will undoubtedly emerge. Already it is becoming possible to view
television programs, films and other multimedia content on demand over the Internet.
Once this becomes more commonplace it will fundamentally change the way we
organise our leisure activities. In the workplace too further changes will occur. One
interesting current development is known as the grid. The Internet consists of
hundreds of thousands of computers, most of which are idle most of the time. The
grid is a way of utilising this unused processor power. In the future it may be possible
to run complex and processor-intensive software by simultaneously using CPUs in
many different parts of the world.
Summary of Key Points
Network Applications
Notes prepared by: FBE Computer Science Department.