
Sikkim Manipal University – MI0035

MBA SEMESTER III


MI0035 – Computer Network - 4 Credits
Assignment Set- 1 (60 Marks)

Q.1 Explain the design issues for the various layers in computer networks. What is connection-oriented and
connectionless service?

Answer: Several key design issues recur across the layers of a computer
network. The important design issues are:

1. Addressing: A network needs some mechanism for identifying senders and receivers.
There are multiple processes running on one machine, so some
means is needed for a process on one machine to specify with whom it wants to
communicate.

2. Error Control: Transmission may be erroneous due to several problems during
communication, such as faults in the communication circuits or physical medium,
thermal noise, and interference. Many error-detecting and error-correcting codes are
known, but both ends of the connection must agree on which one is being used. In addition,
the receiver must have some mechanism for telling the sender which messages have been
received correctly and which have not.
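The error-detection idea can be illustrated with a short sketch. The function below computes a 16-bit ones'-complement checksum in the style of the Internet checksum; the names `checksum16` and `verify` are illustrative, not from any standard library:

```python
def checksum16(data: bytes) -> int:
    """16-bit ones'-complement checksum (the style used by IP/TCP headers)."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # add 16-bit words
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return (~total) & 0xFFFF

def verify(data: bytes, cksum: int) -> bool:
    """Receiver recomputes the checksum; a mismatch means detected corruption."""
    return checksum16(data) == cksum

msg = b"hello"
c = checksum16(msg)
assert verify(msg, c)            # undamaged message passes
assert not verify(b"hellp", c)   # a corrupted byte is detected
```

Both sides must agree on this algorithm, which is exactly the agreement the text describes.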

3. Flow control: If a fast sender at one end is sending data to a slow receiver, there
must be a flow control mechanism to prevent loss of data at the slow receiver. Several
mechanisms are used for flow control, such as increasing the buffer size at the receiver or
slowing down the fast sender. Also, some processes will not be in a position to accept
arbitrarily long messages, so there must be some mechanism for disassembling,
transmitting, and then reassembling messages.
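The disassemble/reassemble requirement can be sketched in a few lines; `split_message` and `join_chunks` are illustrative names, and the chunk size stands in for whatever maximum the receiver can buffer:

```python
def split_message(message: bytes, max_size: int):
    """Disassemble a long message into chunks the receiver can accept."""
    return [message[i:i + max_size] for i in range(0, len(message), max_size)]

def join_chunks(chunks):
    """Reassemble the chunks, in order, at the receiver."""
    return b"".join(chunks)

msg = b"a message too long for the receiver's buffer"
parts = split_message(msg, 8)
assert all(len(p) <= 8 for p in parts)   # nothing exceeds the receiver's limit
assert join_chunks(parts) == msg         # the original message is recovered
```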

4. Multiplexing / demultiplexing: If every pair of communicating processes had to use the
transmission medium separately, it would be inconvenient or expensive to set up a separate
connection for each pair. So, multiplexing is needed in the physical layer at the sender end,
and demultiplexing is needed at the receiver end.

5. Routing: When data has to be transmitted from source to destination, there may be
multiple paths between them. An optimized (shortest) route must be chosen. This
decision is made on the basis of routing algorithms, which choose an optimized
route to the destination.
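Route selection is typically done with a shortest-path algorithm such as Dijkstra's. A compact sketch over a small hypothetical topology (the node names and link costs are made up for illustration):

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm: returns (cost, path) for the cheapest route."""
    pq = [(0, src, [src])]          # priority queue ordered by accumulated cost
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

# hypothetical 4-router topology with link costs
net = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}}
assert shortest_path(net, "A", "D") == (3, ["A", "B", "C", "D"])
```

Even though A has a direct link to C, the algorithm prefers the cheaper path through B.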

Connection Oriented and Connectionless Services:

Layers can offer two types of services, namely connection-oriented service and
connectionless service.

Connection-oriented service: 

The service user first establishes a connection, uses the connection, and then releases
it. Once the connection is established between source and destination, the
path is fixed, and data transmission takes place through this established path. Messages
arrive at the receiver in the same order they were sent. The service is reliable, with no
loss of data, although the acknowledgements that make it reliable add overhead
and delay. 

Connectionless service: 

In this type of service, no connection is established between source and destination,
and there is no fixed path. 

Therefore, each message must carry the full destination address, and each
message is sent independently of the others. Messages may not be delivered at the
destination in the same order they were sent. Thus, grouping and reordering are required
at the receiver end, and the service is not reliable.

There is no acknowledgement or confirmation from the receiver. An unreliable
connectionless service is often called datagram service; it does not return an
acknowledgement to the sender. In some cases, establishing a connection just to send one
short message is not worthwhile, but reliability is still required; acknowledged datagram
service can then be used for such applications.
Another service is the request-reply service. In this type of service, the sender transmits a
single datagram containing a request from the client side. Then, at the other end, the
server's reply contains the answer. Request-reply is commonly used
to implement communication in the client-server model.
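A minimal request-reply exchange over a connectionless (datagram) socket can be sketched with Python's standard `socket` module. Note that the client simply sends to an address with no prior connection, and the full source address arrives with each datagram, just as the text describes:

```python
import socket
import threading

def serve_one_request(sock):
    """Answer a single datagram request (connectionless request-reply)."""
    data, addr = sock.recvfrom(1024)       # source address comes with the datagram
    sock.sendto(b"reply:" + data, addr)    # reply straight back, no session state

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))                 # let the OS pick a free port
threading.Thread(target=serve_one_request, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(2)
cli.sendto(b"ping", srv.getsockname())     # no connection set up beforehand
answer, _ = cli.recvfrom(1024)
assert answer == b"reply:ping"
cli.close(); srv.close()
```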

Connection-Oriented and Connectionless Services

Two distinct techniques are used in data communications to transfer data. Each has its
own advantages and disadvantages. They are the connection-oriented method and the
connectionless method:

 Connection-oriented    Requires a session connection (analogous to a phone call)
to be established before any data can be sent. This method is often called a "reliable"
network service. It can guarantee that data will arrive in the same order.
Connection-oriented services set up virtual links between end systems through a
network, as shown in Figure 1. Note that the packet on the left is assigned the
virtual circuit number 01. As it moves through the network, routers quickly send
it through virtual circuit 01.

 Connectionless    Does not require a session connection between sender and
receiver. The sender simply starts sending packets (called datagrams) to the
destination. This service does not have the reliability of the connection-oriented
method, but it is useful for periodic burst transfers. Neither system must maintain
state information for the systems that they send transmissions to or receive
transmissions from. A connectionless network provides minimal services.

Figure 1

Connection-oriented methods may be implemented in the data link layers of the protocol
stack and/or in the transport layers of the protocol stack, depending on the physical
connections in place and the services required by the systems that are communicating.
TCP (Transmission Control Protocol) is a connection-oriented transport protocol, while
UDP (User Datagram Protocol) is a connectionless transport protocol. Both operate over
IP.
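The contrast with the datagram case can be seen with Python's standard socket API: TCP requires an explicit connect/accept handshake before any data flows. A minimal loopback sketch:

```python
import socket
import threading

# Connection-oriented: TCP sets up a connection before data is exchanged.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def accept_once():
    conn, _ = srv.accept()            # connection establishment on the server side
    conn.sendall(conn.recv(1024))     # echo back over the same connection
    conn.close()

threading.Thread(target=accept_once, daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())        # the handshake happens here
cli.sendall(b"hello")
echo = cli.recv(1024)                 # data arrives in order, on the connection
assert echo == b"hello"
cli.close(); srv.close()
```

With `SOCK_DGRAM` there would be no `listen`, `accept`, or `connect` step at all.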

The physical, data link, and network layer protocols have been used to implement
guaranteed data delivery. For example, X.25 packet-switching networks perform
extensive error checking and packet acknowledgment because the services were
originally implemented on poor-quality telephone connections. Today, networks are more
reliable. It is generally believed that the underlying network should do what it does best,
which is deliver data bits as quickly as possible. Therefore, connection-oriented services
are now primarily handled in the transport layer by end systems, not the network. This
allows lower-layer networks to be optimized for speed.

LANs operate as connectionless systems. A computer attached to a network can start
transmitting frames as soon as it has access to the network. It does not need to set up a
connection with the destination system ahead of time. However, a transport-level
protocol such as TCP may set up a connection-oriented session when necessary.

The Internet is one big connectionless packet network in which all packet deliveries are
handled by IP. However, TCP adds connection-oriented services on top of IP. TCP
provides all the upper-level connection-oriented session requirements to ensure that data
is delivered properly. MPLS is a relatively new connection-oriented networking scheme
for IP networks that sets up fast label-switched paths across routed or layer 2 networks.

A WAN service that uses the connection-oriented model is frame relay. The service
provider sets up PVCs (permanent virtual circuits) through the network as required or
requested by the customer. ATM is another networking technology that uses the
connection-oriented virtual circuit approach.

Q.2 Discuss the OSI Reference Model.

Answer: Open Systems Interconnection (OSI) is a standard reference model for
communication between two end users in a network. The model is used in developing
products and understanding networks. Also see the notes below the figure.



OSI divides telecommunication into seven layers. The layers are in two groups. The
upper four layers are used whenever a message passes from or to a user. The lower three
layers are used when any message passes through the host computer. Messages intended
for this computer pass to the upper layers. Messages destined for some other host are not
passed up to the upper layers but are forwarded to another host. The seven layers are:

Layer 7: The application layer ...This is the layer at which communication partners are
identified, quality of service is identified, user authentication and privacy are considered,
and any constraints on data syntax are identified. (This layer is not the application itself,
although some applications may perform application layer functions.)

Layer 6: The presentation layer ...This is a layer, usually part of an operating system,
that converts incoming and outgoing data from one presentation format to another (for
example, from a text stream into a popup window with the newly arrived text).
Sometimes called the syntax layer.

Layer 5: The session layer ...This layer sets up, coordinates, and terminates
conversations, exchanges, and dialogs between the applications at each end. It deals with
session and connection coordination.

Layer 4: The transport layer ...This layer manages the end-to-end control (for example,
determining whether all packets have arrived) and error-checking. It ensures complete
data transfer.

Layer 3: The network layer ...This layer handles the routing of the data (sending it in
the right direction to the right destination on outgoing transmissions and receiving
incoming transmissions at the packet level). The network layer does routing and
forwarding.

Layer 2: The data-link layer ...This layer provides synchronization for the physical
level and performs bit-stuffing for strings of more than five consecutive 1s. It furnishes
transmission protocol knowledge and management.

Layer 1: The physical layer ...This layer conveys the bit stream through the network at
the electrical and mechanical level. It provides the hardware means of sending and
receiving data on a carrier.
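The way each layer wraps the data for the layer below can be sketched with toy headers. The `TCP|`, `IP|`, and `ETH|` tags here are stand-ins for real header formats, purely for illustration:

```python
def encapsulate(payload: bytes) -> bytes:
    """Each layer prepends its own header on the way down the stack."""
    segment = b"TCP|" + payload      # transport header (ports, sequence numbers, ...)
    packet  = b"IP|"  + segment      # network header (logical addresses)
    frame   = b"ETH|" + packet       # data-link header (physical addresses)
    return frame

def decapsulate(frame: bytes) -> bytes:
    """The receiver strips the headers in reverse order on the way up."""
    for tag in (b"ETH|", b"IP|", b"TCP|"):
        assert frame.startswith(tag)
        frame = frame[len(tag):]
    return frame

assert decapsulate(encapsulate(b"data")) == b"data"
```

Each layer only looks at its own header, which is what lets Layer 5 on one machine "talk" to Layer 5 on the other.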

OSI Reference Model

The International Organization for Standardization (ISO) developed the OSI reference
model in the early 1980s. OSI is now the de facto standard for developing protocols that
enable computers to communicate. Although not every protocol follows this model, most
new protocols use this layered approach. In addition, when starting to learn about
networking, most instructors will begin with this model to simplify understanding.

The OSI reference model breaks up the problem of intermachine communication into
seven layers. Each layer is concerned only with talking to its corresponding layer on the
other machine (see Figure 6-1). This means that Layer 5 has to worry only about talking
to Layer 5 on the receiving machine, and not what the actual physical medium might be.

Figure 6-1. OSI Reference Model

In addition, each layer of the OSI reference model provides services to the layer above it
(Layer 5 to Layer 6, Layer 6 to Layer 7, and so on) and requests certain services from the
layer directly below it (5 to 4, 4 to 3, and so on).

This layered approach enables each layer to handle small pieces of information, make any
necessary changes to the data, and add the necessary functions for that layer before
passing the data along to the next layer. Data becomes less human-like and more
computer-like the further down the OSI reference model it traverses, until it becomes 1s
and 0s (electrical impulses) at the physical layer. Figure 6-1 shows the OSI reference
model.

The focus of this chapter is to discuss the seven layers (application, presentation, session,
transport, network, data link, and physical). Understanding these layers allows you to
understand how IP routing works and how IP is transported across various media residing
at Layer 1.

The Internet Protocol suite (see Figure 6-1) maps to the corresponding OSI layers. From
the IP Suite figure, you can see how applications (FTP or email) run atop protocols such
as TCP before they are transmitted across some Layer 1 transport mechanism.

The Application Layer

Most users are familiar with the application layer. Some well-known applications include
the following:

 E-mail

 Web browsing

 Word processing

The Presentation Layer

The presentation layer ensures that information sent by the application layer of one
system is readable by the application layer of another system. If necessary, the
presentation layer translates between multiple data formats by using a common data
representation format.

The presentation layer concerns itself not only with the format and representation of
actual user data, but also with data structures used by programs. Therefore, in addition to
actual data format transformation (if necessary), the presentation layer negotiates data
transfer syntax for the application layer.

Common examples include



 Encryption

 Compression

 ASCII to EBCDIC translation

The Session Layer

As its name implies, the session layer establishes, manages, and terminates sessions
between applications. Sessions consist of dialogue between two or more presentation
entities (recall that the session layer provides its services to the presentation layer).

The session layer synchronizes dialogue between presentation layer entities and manages
their data exchange. In addition to basic regulation of conversations (sessions), the
session layer offers provisions for data expedition and exception reporting of session-
layer, presentation-layer, and application-layer problems.

The Transport Layer

The transport layer is responsible for ensuring reliable data transport on an internetwork.
This is accomplished through flow control, error checking (checksum), end-to-end
acknowledgments, retransmissions, and data sequencing.

Some transport layers, such as Transmission Control Protocol (TCP), have mechanisms
for handling congestion. TCP adjusts its retransmission timer, for example, when
congestion or packet loss occurs within a network. TCP slows down the amount of traffic
it sends when congestion is present. Congestion is determined through the lack of
acknowledgments received from the destination node.
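TCP's reaction to presumed congestion includes backing off its retransmission timeout (RTO). A simplified sketch of the exponential-backoff idea (the real TCP timer also adapts to measured round-trip times, which is omitted here):

```python
def backoff_rto(initial_rto: float, losses: int, max_rto: float = 60.0) -> float:
    """Double the retransmission timeout after each unacknowledged attempt,
    up to a cap; missing ACKs are taken as a sign of congestion."""
    return min(initial_rto * (2 ** losses), max_rto)

assert backoff_rto(1.0, 0) == 1.0    # first attempt: base timeout
assert backoff_rto(1.0, 3) == 8.0    # after three losses the timer has doubled thrice
assert backoff_rto(1.0, 10) == 60.0  # capped, so the timer never grows unbounded
```

Slowing retransmissions this way reduces the traffic TCP injects into an already congested network.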

The Network Layer

The network layer provides for the logical addressing which enables two disparate
systems on different logical networks to determine a possible path to communicate. The
network layer is the layer in which routing protocols reside.

On the Internet today, IP addressing is by far the most common addressing scheme in
use. Routing protocols such as Enhanced Interior Gateway Routing Protocol (Enhanced
IGRP, or EIGRP), Open Shortest Path First (OSPF), Border Gateway Protocol (BGP),
Intermediary System to Intermediary System (IS-IS), and many others are used to
determine the optimal routes between two logical subnetworks (subnets).

Note

You can switch IP traffic outside its own subnetwork only if you use an IP router.

Traditional routers route IP packets based on their network layer address.

Key functions of the network layer include the following:

 Packet formatting, addressing networks and hosts, address resolution, and routing

 Creating and maintaining routing tables

The Data Link Layer

The data link layer provides reliable transport across a physical link. The link layer has its
own addressing scheme. This addressing scheme is concerned with physical connectivity
and can transport frames based upon the data link layer address.

Traditional Ethernet switches switch network traffic based upon the data link layer (Layer
2) address. Switching traffic based on a Layer 2 address is generally known as bridging.
In fact, an Ethernet switch is nothing more than a high-speed bridge with multiple
interfaces.

The Physical Layer

The physical layer is concerned with creating 1s and 0s on the physical medium with
electrical impulses/voltage changes. Common physical layer communication
specifications include the following:

 EIA/TIA-232: Electronic Industries Association/Telecommunications Industry
Association specification used for communicating between computing devices.
This interface is often used for connecting computers to modems, and might use
different physical connectors.

 V.35: International Telecommunication Union Telecommunication Standardization
Sector (ITU-T) signaling mechanism that defines signaling rates from 19.2 Kbps
to 1.544 Mbps. This physical interface is a 34-pin connector and is also known as
a Winchester block.

 RS-449: Specification used for synchronous wide-area communication. The
physical connector uses 37 pins and is capable of significantly longer runs than
EIA/TIA-232.

 802.3: One of the most widely utilized physical media is Ethernet. Currently,
Ethernet is deployed at speeds from 10 Mbps to 1000 Mbps.

Q.3 Describe the different types of data transmission modes.

Answer: Transmission modes

A given transmission on a communications channel between two machines can occur in
several different ways. The transmission is characterised by:

 the direction of the exchanges

 the transmission mode: the number of bits sent simultaneously

 synchronisation between the transmitter and receiver

Simplex, half-duplex and full-duplex connections

There are 3 different transmission modes characterised according to the direction of the
exchanges:

 A simplex connection is a connection in which the data flows in only one direction,
from the transmitter to the receiver. This type of connection is useful if the data do not
need to flow in both directions (for example, from your computer to the printer or
from the mouse to your computer...).

 A half-duplex connection (sometimes called an alternating or semi-duplex
connection) is a connection in which the data flows in one direction or the other, but not
both at the same time. With this type of connection, each end of the connection
transmits in turn. This type of connection makes it possible to have bidirectional
communications using the full capacity of the line.

 A full-duplex connection is a connection in which the data flow in both directions
simultaneously. Each end of the line can thus transmit and receive at the same time,
which means that the bandwidth is divided in two for each direction of data
transmission if the same transmission medium is used for both directions of
transmission.

Serial and parallel transmission

The transmission mode refers to the number of elementary units of information (bits)
that can be simultaneously translated by the communications channel. In fact, processors
(and therefore computers in general) never process a single bit at a time; generally they
are able to process several (most of the time 8: one byte), and for this reason the basic
connections on a computer are parallel connections.

Parallel connection

Parallel connection means simultaneous transmission of N bits. These bits are sent
simultaneously over N different channels (a channel being, for example, a wire, a cable or
any other physical medium). The parallel connection on PC-type computers generally
requires 10 wires.

These channels may be:

 N physical lines: in which case each bit is sent on a physical line (which is why
parallel cables are made up of several wires in a ribbon cable)

 one physical line divided into several sub-channels by dividing up the bandwidth. In
this case, each bit is sent at a different frequency...
Since the conductive wires are close to each other in the ribbon cable, interference can
occur (particularly at high speeds) and degrade the signal quality...

Serial connection

In a serial connection, the data are sent one bit at a time over the transmission channel.
However, since most processors process data in parallel, the transmitter needs to
transform incoming parallel data into serial data and the receiver needs to do the
opposite.

These operations are performed by a communications controller (normally
a UART (Universal Asynchronous Receiver Transmitter) chip). The communications
controller works in the following manner:
 The parallel-serial transformation is performed using a shift register. The shift
register, working together with a clock, will shift the register (containing all of the
data presented in parallel) by one position to the left, and then transmit the most
significant bit (the leftmost one) and so on:

 The serial-parallel transformation is done in almost the same way using a shift
register. The shift register shifts the register by one position to the left each time a bit
is received, and then transmits the entire register in parallel when it is full:
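The two shift-register transformations described above can be mimicked in a few lines. This follows the text's MSB-first description and is a conceptual sketch, not a cycle-accurate UART model:

```python
def parallel_to_serial(byte: int):
    """Shift out the most significant bit first, one bit per clock tick."""
    bits = []
    for _ in range(8):
        bits.append((byte >> 7) & 1)   # transmit the leftmost bit
        byte = (byte << 1) & 0xFF      # shift the register one position left
    return bits

def serial_to_parallel(bits):
    """Shift each arriving bit in; deliver the whole register once it is full."""
    reg = 0
    for b in bits:
        reg = ((reg << 1) | b) & 0xFF
    return reg

# a byte survives the round trip through the serial line
assert serial_to_parallel(parallel_to_serial(0xA5)) == 0xA5
```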

Synchronous and asynchronous transmission

Given the problems that arise with a parallel-type connection, serial connections are
normally used. However, since a single wire transports the information, the problem is
how to synchronise the transmitter and receiver, in other words, the receiver can not
necessarily distinguish the characters (or more generally the bit sequences) because the
bits are sent one after the other. There are two types of transmission that address this
problem:

 An asynchronous connection, in which each character is sent at irregular intervals in


time (for example a user sending characters entered at the keyboard in real time). So,
for example, imagine that a single bit is transmitted during a long period of silence...
the receiver will not be able to know if this is 00010000, 10000000 or 00000100... 
To remedy this problem, each character is preceded by some information indicating
the start of character transmission (the transmission start information is called
a START bit) and ends by sending end-of-transmission information (called STOP bit,
there may even be several STOP bits).
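The START/STOP framing can be sketched as follows. The LSB-first data order and single STOP bit are common UART conventions assumed here for illustration:

```python
def frame_char(byte: int):
    """Asynchronous framing: START bit (0), 8 data bits LSB-first, STOP bit (1)."""
    data = [(byte >> i) & 1 for i in range(8)]
    return [0] + data + [1]

def deframe(bits):
    """Check the framing bits, then recover the 8 data bits."""
    assert bits[0] == 0 and bits[-1] == 1     # START and STOP must be present
    return sum(b << i for i, b in enumerate(bits[1:9]))

assert deframe(frame_char(ord("A"))) == ord("A")
assert len(frame_char(0x00)) == 10   # even an all-zero byte is visible on the line,
                                     # because the STOP bit guarantees a transition
```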

 In a synchronous connection, the transmitter and receiver are paced by the same
clock. The receiver continuously receives (even when no bits are transmitted) the
information at the same rate the transmitter sends it. This is why the transmitter and
receiver are paced at the same speed. In addition, supplementary information is
inserted to guarantee that there are no errors during transmission.
During synchronous transmission, the bits are sent successively with no separation
between characters, so it is necessary to insert synchronisation elements; this is
called character-level synchronisation.
The main disadvantage of synchronous transmission is recognising the data at the
receiver, as there may be differences between the transmitter and receiver clocks. That is
why each data transmission must be sustained long enough for the receiver to distinguish
it. As a result, the transmission speed cannot be very high in a synchronous link.

Q.4 Define switching. What is the difference between circuit switching and packet
switching?

Answer: Switching: The controlling or routing of signals in circuits to execute logical
or arithmetic operations or to transmit data between specific points in
a network. Note: switching may be performed by electronic, optical, or
electromechanical devices.

Packet-switched and circuit-switched networks use two different technologies for sending
messages and data from one point to another.

Each has its advantages and disadvantages depending on what you are trying to do.

 In packet-based networks, the message gets broken into small data packets.
These packets are sent out from the computer and travel around the network,
seeking out the most efficient route to travel as circuits become available. This
does not necessarily mean that they seek out the shortest route.

Each packet may go a different route from the others.

 Each packet is sent with a ‘header address’. This tells it where its final
destination is, so it knows where to go.

 The header address also describes the sequence for reassembly at the destination
computer so that the packets are put back into the correct order.

 One packet also contains details of how many packets should be arriving so that
the recipient computer knows if one packet has failed to turn up.

 If a packet fails to arrive, the recipient computer sends a message back to the
computer which originally sent the data, asking for the missing packet to be
resent.
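The behaviour described in these points can be sketched as follows. The dictionary "header" fields (`seq`, `total`) are illustrative stand-ins, not a real protocol format:

```python
import random

def packetize(message: bytes, size: int):
    """Split a message into packets, each carrying a header with its
    position in the sequence and the total packet count."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"seq": n, "total": len(chunks), "data": c}
            for n, c in enumerate(chunks)]

def reassemble(packets):
    """Receiver reorders by sequence number and checks nothing is missing."""
    if len(packets) != packets[0]["total"]:
        raise ValueError("missing packet: ask the sender to retransmit")
    return b"".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

pkts = packetize(b"each packet may take a different route", 8)
random.shuffle(pkts)   # packets may arrive out of order
assert reassemble(pkts) == b"each packet may take a different route"
```

The `total` count is what lets the recipient notice a packet that failed to turn up.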

Difference between circuit switching and packet switching:

 Packet Switching

 Message is broken up into segments (packets).

 Each packet carries the identification of the intended recipient, data
used to assist in error correction, and the position of the packet in
the sequence.

Each packet is treated individually by the switching centre and may be sent to the
destination by a totally different route from all the others.

Packet Switching

 Advantages:

 Security

 Bandwidth used to full potential

 Devices of different speeds can communicate

 Not affected by line failure (rediverts signal)

 Availability – do not have to wait for a direct connection to


become available

 During a crisis or disaster, when the public telephone network


might stop working, e-mails and texts can still be sent via packet
switching

Disadvantages:

 Under heavy use there can be a delay

 Data packets can get lost or become corrupted

 Protocols are needed for a reliable transfer

 Not so good for some types of data streams, e.g. real-time video
streams can lose frames due to the way packets arrive out of
sequence.

 Circuit switching was designed in 1878 in order to send telephone calls down a
dedicated channel. This channel remained open and in use throughout the whole
call and could not be used by any other data or phone calls.

 There are three phases in circuit switching:

 Establish

 Transfer

 Disconnect

 The telephone message is sent in one go; it is not broken up. The message arrives
in the same order that it was originally sent.

 In modern circuit-switched networks, electronic signals pass through several
switches before a connection is established.

 During a call, no other network traffic can use those switches.

 The resources remain dedicated to the circuit during the entire data transfer and
the entire message follows the same path.

 Circuit switching can be analogue or digital

 With the expanded use of the Internet for voice and video, analysts predict a
gradual shift away from circuit-switched networks.

 A circuit-switched network is excellent for data that needs a constant link from
end to end, for example real-time video.

Disadvantages:

 Inefficient – the equipment may be unused for much of the call; if
no data is being sent, the dedicated line still remains open

 Takes a relatively long time to set up the circuit

 During a crisis or disaster, the network may become unstable or
unavailable.

 It was primarily developed for voice traffic rather than data traffic.

It is easier to double the capacity of a packet-switched network than a circuit network – a
circuit network is heavily dependent on the number of channels available.

 It is cheaper to expand a packet switching system.

 Circuit-switched technologies, which take four times as long to double their
performance/cost, force ISPs to buy that many more boxes to keep up. This is
why everyone is looking for ways to get Internet traffic off the telephone network.
The alternative of building up the telephone network to satisfy the demand growth
is economically out of the question.

 The battle between circuit and packet technologies has been around a long time,
and it is starting to be like the old story of the tortoise and the hare. In this case,
the hare is circuit switching: fast, reliable and smart. The hare starts out fast and
keeps a steady pace, while the tortoise starts slow but manages to double his
speed every 100 meters.

 If the race is longer than 2 km, the power of compounding favours the tortoise.

Q.5 Classify guided media (wired). Compare fiber optics and copper wire.

Answer: GUIDED MEDIA

Guided media, which are those that provide a conduit from one device to another, include
twisted-pair cable, coaxial cable, and fiber-optic cable.

Guided Transmission Media uses a "cabling" system that guides the data signals along a
specific path. The data signals are bound by the "cabling" system. Guided Media is also
known as Bound Media. Cabling is meant in a generic sense in the previous sentences
and is not meant to be interpreted as copper wire cabling only. Cable is the medium
through which information usually moves from one network device to another.

Twisted pair cable and coaxial cable use metallic (copper) conductors that accept and
transport signals in the form of electric current. Optical fiber is a glass or plastic cable
that accepts and transports signals in the form of light.

There are four basic types of guided media:

1. Open Wire

2. Twisted Pair

3. Coaxial Cable

4. Optical Fiber

Figure: Types of guided media


OPEN WIRE

Open Wire is traditionally used to describe the electrical wire strung along power poles.
There is a single wire strung between poles. No shielding or protection from noise
interference is used. We are going to extend the traditional definition of Open Wire to
include any data signal path without shielding or protection from noise interference. This
can include multiconductor cables or single wires. This media is susceptible to a large
degree of noise and interference and is consequently not acceptable for data transmission
except for short distances (under 20 ft).

TWISTED-PAIR (TP) CABLE

Twisted pair cable is the least expensive and most widely used guided medium. The
wires in twisted pair cabling are twisted together in pairs. Each pair consists of a wire
used for the +ve data signal and a wire used for the -ve data signal. Any noise that
appears on one wire of the pair also occurs on the other wire. Because the wires carry
opposite polarities, they are 180 degrees out of phase; when the noise appears on both
wires, it cancels or nulls itself out at the receiving end. Twisted pair cables are most
effectively used in systems that use a balanced line method of transmission: polar line
coding (Manchester encoding) as opposed to unipolar line coding (TTL logic).
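The Manchester (polar) line coding mentioned above can be sketched in a few lines, using the IEEE 802.3 convention (a 0 becomes a high-to-low pair, a 1 a low-to-high pair; other conventions invert this):

```python
def manchester_encode(bits):
    """Every data bit becomes two signal levels with a mid-bit transition,
    which keeps the line balanced and carries the clock with the data."""
    table = {0: (1, 0), 1: (0, 1)}
    out = []
    for b in bits:
        out.extend(table[b])
    return out

def manchester_decode(signal):
    """Read the level pairs back into data bits."""
    return [0 if pair == (1, 0) else 1
            for pair in zip(signal[::2], signal[1::2])]

data = [1, 0, 1, 1, 0]
assert manchester_decode(manchester_encode(data)) == data
```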

Physical description

· Two insulated copper wires arranged in a regular spiral pattern.

· A number of pairs are bundled together in a cable.

· Twisting decreases the crosstalk interference between adjacent pairs in the cable,
by using different twist lengths for neighbouring pairs.

A twisted pair consists of two conductors (normally copper), each with its own plastic
insulation, twisted together. One of the wires is used to carry signals to the receiver, and
the other is used only as a ground reference.

Why the cable is twisted?

In the past, two parallel flat wires were used for communication. However, electromagnetic
interference from devices such as a motor can create noise on those wires.

If the two wires are parallel, the wire closest to the source of the noise gets more
interference and ends up with a higher voltage level than the wire farther away, which
results in an uneven load and a damaged signal. If, however, the two wires are twisted
around each other at regular intervals, each wire is closer to the noise source for half the
time and farther away for the other half. The degree of reduction in noise interference is
determined specifically by the number of turns per foot. Increasing the number of turns
per foot reduces the noise interference. To further improve noise rejection, a foil or wire
braid shield is woven around the twisted pairs.
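The cancellation described above can be shown numerically. The sketch below (illustrative values only, not a physical simulation) models a balanced pair: the receiver subtracts the two wires, so noise that couples equally onto both wires drops out:

```python
import random

def differential_receive(signal, noise):
    """Model a balanced (differential) twisted pair.

    The +ve wire carries +s and the -ve wire carries -s. Common-mode
    noise n couples onto BOTH wires equally. The receiver takes the
    difference of the two wires, so the noise cancels:
        ((+s + n) - (-s + n)) / 2 = s
    """
    wire_pos = [+s + n for s, n in zip(signal, noise)]
    wire_neg = [-s + n for s, n in zip(signal, noise)]
    return [(p - m) / 2 for p, m in zip(wire_pos, wire_neg)]

if __name__ == "__main__":
    signal = [1, -1, 1, 1, -1]                       # polar data symbols
    noise = [random.uniform(-5, 5) for _ in signal]  # large interference
    recovered = differential_receive(signal, noise)
    # The noise, though far larger than the signal, cancels almost exactly.
    assert all(abs(r - s) < 1e-9 for r, s in zip(recovered, signal))
```

This is why twisting matters: it makes the coupled noise nearly identical (common-mode) on both wires, so the subtraction at the receiver removes it.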

Twisted pair cable supports both analog and digital signals, and comes in two forms:
shielded twisted pair (STP) and unshielded twisted pair (UTP). Shielding is metallic
material added to the cabling to reduce susceptibility to noise caused by electromagnetic
interference (EMI).

IBM produced a version of TP cable for its own use, called STP. STP cable has a metal foil
that encases each pair of insulated conductors. The metal casing improves the quality of
the cable by preventing the penetration of noise, and it can also eliminate a phenomenon
called crosstalk.

Crosstalk is the undesired effect of one circuit (or channel) on another circuit (or
channel). It occurs when one line picks up some of the signal traveling down another line.
The crosstalk effect can be experienced during telephone conversations when other
conversations are audible in the background.

STP is twisted-pair cabling with additional shielding to reduce crosstalk and other forms
of electromagnetic interference (EMI). It has an impedance of 150 ohms, has a maximum run
length of 90 meters, and is used primarily in networking environments with a high level of
EMI caused by motors, air conditioners, power lines, or other noisy electrical components.
STP cabling is the default type of cabling for IBM Token Ring networks, and it is more
expensive than UTP.

UTP is cheap, flexible, and easy to install. UTP is used in many LAN technologies,
including Ethernet and Token Ring.

In computer networking environments that use twisted-pair cabling, one pair of wires is
typically used for transmitting data while another pair receives data. The twists in the
cabling reduce the effects of crosstalk and make the cabling more resistant to
electromagnetic interference (EMI), which helps maintain a high signal-to-noise ratio for
reliable network communication. Twisted-pair cabling used in Ethernet networking is
usually unshielded twisted-pair (UTP) cabling, while shielded twisted-pair (STP) cabling
is typically used in Token Ring networks. UTP cabling comes in different grades for
different purposes.

The Electronic Industries Association (EIA) has developed standards to classify UTP
cable into seven categories. Categories are determined by cable quality, with CAT 1 as
the lowest and CAT 7 as the highest.

Category   Data Rate    Digital/Analog   Use

CAT 1      < 100 Kbps   Analog           Telephone systems
CAT 2      4 Mbps       Analog/Digital   Voice + data transmission
CAT 3      10 Mbps      Digital          Ethernet 10BaseT LANs
CAT 4      20 Mbps      Digital          Token Ring or 10BaseT LANs
CAT 5      100 Mbps     Digital          Ethernet 100BaseT LANs
CAT 6      200 Mbps     Digital          LANs
CAT 7      600 Mbps     Digital          LANs

Table: Categories of UTP cable

Figure: Unshielded twisted pair cable

The quality of UTP may vary from telephone-grade wire to extremely high-speed cable. The
cable has four pairs of wires inside the jacket. Each pair is twisted with a different
number of twists per inch to help eliminate interference from adjacent pairs and other
electrical devices. The tighter the twisting, the higher the supported transmission rate
and the greater the cost per foot.

Unshielded Twisted Pair Connector

The standard connector for unshielded twisted pair cabling is an RJ-45 connector. This is
a plastic connector that looks like a large telephone-style connector. A slot allows the RJ-
45 to be inserted only one way. RJ stands for Registered Jack, implying that the
connector follows a standard borrowed from the telephone industry. This standard
designates which wire goes with each pin inside the connector.

Table: STP cabling categories

Category      Description
IBM Type 1    Token Ring transmissions on AWG #22 wire up to 20 Mbps.
IBM Type 1A   Fiber Distributed Data Interface (FDDI), Copper Distributed Data
              Interface (CDDI), and Asynchronous Transfer Mode (ATM)
              transmission up to 300 Mbps.
IBM Type 2A   Hybrid combination of STP data cable and CAT 3 voice cable in one jacket.
IBM Type 6A   AWG #26 patch cables.

Figure: STP cable

COAXIAL CABLE

A form of network cabling used primarily in older Ethernet networks and in


electrically noisy industrial environments. The name “coax” comes from its
two-conductor construction in which the conductors run concentrically with
each other along the axis of the cable. Coaxial cabling has been largely
replaced by twisted-pair cabling for local area network (LAN) installations
within buildings, and by fiber-optic cabling for high-speed network
backbones.

Coaxial cable (or coax) carries signals of higher frequency ranges than
twisted-pair cable. Instead of having two parallel wires, coax has a central core
conductor of solid or stranded wire (usually copper) enclosed in an
insulating sheath, which is, in turn, encased in an outer conductor of metal foil, braid, or a
combination of the two (also usually copper).

Figure: Coaxial cable

FIBER-OPTIC CABLE

Fiber-optic is a glass cabling media that sends network signals using light. Fiber-optic
cabling has higher bandwidth capacity than copper cabling, and is used mainly for
high-speed network Asynchronous Transfer Mode (ATM) or Fiber Distributed Data
Interface (FDDI) backbones, long cable runs, and connections to high-performance
workstations. A fiber-optic cable is made of glass or plastic and transmits signals in
the form of light. Light is a form of electromagnetic energy. It travels at its fastest in a
vacuum: 300,000 kilometers per second. The speed of light depends on the density of the
medium through which it is traveling (the higher the density, the slower the speed).
Light travels in a straight line as long as it is moving through a single uniform
substance. If a ray of light traveling through one substance suddenly enters another
(more or less dense), the ray changes direction. This change is called refraction.

Refraction: The direction in which a light ray is refracted depends on the change in
density encountered. A beam of light moving from a less dense medium into a denser one
is bent toward the vertical axis.

When light travels into a denser medium, the angle of incidence is greater than the angle
of refraction; and when light travels into a less dense medium, the angle of incidence is
less than the angle of refraction.
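This bending is governed by Snell's law, n1·sin(θ1) = n2·sin(θ2). The sketch below (the refractive indices are illustrative round numbers) confirms that entering a denser medium gives a smaller refraction angle:

```python
import math

def refraction_angle(n1, n2, incidence_deg):
    """Apply Snell's law: n1*sin(theta1) = n2*sin(theta2).

    Returns the refraction angle in degrees, or None when the ray is
    totally internally reflected (possible only when n1 > n2).
    """
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if s > 1:
        return None  # total internal reflection -- the principle fiber optics exploits
    return math.degrees(math.asin(s))

if __name__ == "__main__":
    # Air (n ~ 1.0) into glass (n ~ 1.5): denser medium, so the ray
    # bends toward the normal and the refraction angle is SMALLER.
    theta2 = refraction_angle(1.0, 1.5, 45.0)
    print(round(theta2, 1))  # ~28.1
    assert theta2 < 45.0
```

When the ray instead travels from the denser to the less dense medium at a steep enough angle, the function returns None: total internal reflection, which is how a fiber core keeps light trapped inside.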

Figure: Fiber Optic cable

Table: Comparison of guided media

Sr. No. | Twisted-Pair Cable | Coaxial Cable | Fiber-Optic Cable (FOC)
1. | Uses electrical signals for transmission. | Uses electrical signals for transmission. | Uses an optical signal (light) for transmission.
2. | Uses a metallic conductor to carry the signal. | Uses a metallic conductor to carry the signal. | Uses glass or plastic to carry the signal.
3. | Noise immunity is low, so there is more distortion. | Higher noise immunity than twisted-pair cable, due to the shielding conductor. | Highest noise immunity, as the light rays are unaffected by electrical noise.
4. | Affected by external magnetic fields. | Less affected by external magnetic fields. | Not affected by external magnetic fields.
5. | Cheapest. | Moderately costly. | Costly.
6. | Supports low data rates. | Supports moderately high data rates. | Supports very high data rates.
7. | Power loss due to conduction and radiation. | Power loss due to conduction. | Power loss due to absorption, scattering, and dispersion.
8. | Short circuit between the two conductors is possible. | Short circuit between the two conductors is possible. | Short circuit is not possible.
9. | Low bandwidth. | Moderately high bandwidth. | Very high bandwidth.

Q.6. What are the different types of satellites?

Answer: Satellites fall into five principal types:

1). Research Satellites, 

2). Communication Satellites, 

3). Weather Satellites, 

4). Navigational Satellites, and 

5). Application Satellites.

Communication satellite.

It is difficult to go through a day without using a communications satellite at least once.


Do you know when you used a communications satellite today? Did you watch T.V.? Did
you make a long distance phone call, use a cellular phone, a fax machine, a pager, or
even listen to the radio? Well, if you did, you probably used a communications satellite,
either directly or indirectly.

Communications satellites allow radio, television, and telephone transmissions to be sent


live anywhere in the world. Before satellites, transmissions were difficult or impossible at
long distances. The signals, which travel in straight lines, could not bend around the
round Earth to reach a destination far away. Because satellites are in orbit, the signals can
be sent instantaneously into space and then redirected to another satellite or directly to
their destination.

The satellite can have a passive role in communications like bouncing signals from the
Earth back to another location on the Earth; on the other hand, some satellites carry
electronic devices called transponders for receiving, amplifying, and re-broadcasting
signals to the Earth.

Communications satellites are often in geostationary orbit. At the high orbital altitude of
35,800 kilometers, a geostationary satellite orbits the Earth in the same amount of time it
takes the Earth to revolve once. From Earth, therefore, the satellite appears to be
stationary, always above the same area of the Earth. The area to which it can transmit is
called a satellite's footprint. For example, many Canadian communications satellites have
a footprint which covers most of Canada.
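The 35,800 km figure follows from Kepler's third law: a satellite whose orbital period equals one sidereal day must sit at one particular radius. A quick check, using standard values for Earth's gravitational parameter and radius (assumed here, not from the text):

```python
import math

# Standard physical constants (assumed values).
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6378.137e3        # Earth's equatorial radius, m
SIDEREAL_DAY = 86164.1      # one rotation of the Earth, seconds

def geostationary_altitude_km():
    """Solve Kepler's third law T^2 = 4*pi^2*a^3/mu for the orbital
    radius a, then subtract Earth's radius to get the altitude."""
    a = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
    return (a - R_EARTH) / 1000.0

if __name__ == "__main__":
    print(round(geostationary_altitude_km()))  # ~35786 km, i.e. roughly 35,800 km
```

The computed altitude of about 35,786 km matches the rounded 35,800 km quoted above: at that height, and only that height, the satellite keeps pace with the Earth's rotation.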

Communications satellites can also be in highly elliptical orbits. This type of orbit is
roughly egg-shaped, with the Earth near the top of the egg. In a highly elliptical orbit, the
satellite's velocity changes depending on where it is in its orbital path. When the satellite
is in the part of its orbit that is close to the Earth, it moves faster because the Earth's
gravitational pull is stronger. This means that a communications satellite can remain over the
region of the Earth it is communicating with for the long part of its orbit; it is out of
contact with that region only when it quickly passes close by the Earth.

Weather satellite.

Because of weather satellite technology and communications satellite technology, you


can find out the weather anywhere in the world any time of the day. There are television
stations that carry weather information all day long. Meteorologists use weather satellites
for many things, and they rely on images from satellites. Here are a few examples of
those uses:

 Radiation measurements from the earth's surface and atmosphere give information
on amounts of heat and energy being released from the Earth and the Earth's
atmosphere.

 People who fish for a living can find out valuable information about the
temperature of the sea from measurements that satellites make.

 Satellites monitor the amount of snow in winter, the movement of ice fields in the
Arctic and Antarctic, and the depth of the ocean.

 Infrared sensors on satellites examine crop conditions, areas of deforestation and


regions of drought.

 Some satellites have a water vapour sensor that can measure and describe how
much water vapour is in different parts of the atmosphere.

 Satellites can detect volcanic eruptions and the motion of ash clouds.

 During the winter, satellites monitor freezing air as it moves south towards
Florida and Texas, allowing weather forecasters to warn growers of upcoming
low temperatures.

 Satellites receive environmental information from remote data collection


platforms on the surface of the Earth. These include transmitters floating in the
water called buoys, gauges of river levels and conditions, automatic weather
stations, stations that measure earthquake and tidal wave conditions, and ships.
This information, sent to the satellite from the ground, is then relayed from the
satellite to a central receiving station back on Earth.

There are two basic types of weather satellites: those in geostationary orbit and those
in polar orbit. Orbiting very high above the Earth, at an altitude of 35,800 kilometres,
geostationary satellites orbit the Earth in the same amount of time it
takes the Earth to revolve once. From Earth, therefore, the satellite appears to stay still,
always above the same area of the Earth. This orbit allows the satellite to monitor the
same region all the time. Geostationary satellites usually measure in "real time", meaning
they transmit photographs to the receiving system on the ground as soon as the camera
takes the picture. A series of photographs from these satellites can be displayed in
sequence to produce a movie showing cloud movement. This allows forecasters to watch
the progress of large weather systems such as fronts, storms, and hurricanes. Forecasters
can also find out the wind direction and speed by monitoring cloud movement.

The other basic type of weather satellite is polar orbiting. This type of satellite orbits in a
path that closely follows the Earth's meridian lines, passing over the north and south
poles once each revolution. As the Earth rotates to the east beneath the satellite, each pass
of the satellite monitors a narrow area running from north to south, to the west of the
previous pass. These 'strips' can be pieced together to produce a picture of a larger area.
Polar satellites circle at a much lower altitude at about 850 km. This means that polar
satellites can photograph clouds from closer than the high altitude geostationary satellites.
Polar satellites, therefore, provide more detailed information about violent storms and
cloud systems.

Navigation satellite.

Satellites for navigation were developed in the late 1950's as a direct result of ships
needing to know exactly where they were at any given time. In the middle of the ocean or
out of sight of land, you can't find out your position accurately just by looking out the
window.

The idea of using satellites for navigation began with the launch of Sputnik 1 on October
4, 1957. Scientists at Johns Hopkins University's Applied Physics Laboratory monitored
that satellite. They noticed that when the transmitted radio frequency was plotted on a
graph, a pattern developed. Scientists recognized this pattern as the Doppler effect:
an apparent change of radio frequency as something that emits a signal in the form of
waves passes by. Since the satellite was emitting a signal, scientists were able to show
that the Doppler curve described the orbit of the satellite.

Today, most navigation systems use time and distance to determine location. Early on,
scientists recognized the principle that, given the velocity and the time required for a
radio signal to be transmitted between two points, the distance between the two points
can be computed. The calculation must be done precisely, and the clocks in the satellite
and in the ground-based receiver must be telling exactly the same time - they must be
synchronized. If they are, the time it takes for a signal to travel can be measured and then
multiplied by the exact speed of light to obtain the distance between the two positions.
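The distance computation described above is a one-line calculation once the clocks are synchronized. A minimal sketch (the timestamps are illustrative):

```python
C = 299_792_458  # speed of light in m/s

def distance_from_delay(t_sent, t_received):
    """Distance = propagation delay x speed of light.

    Both timestamps (in seconds) must come from synchronized clocks,
    exactly as the satellite navigation principle requires: any clock
    offset translates directly into a range error.
    """
    return (t_received - t_sent) * C

if __name__ == "__main__":
    # A signal that takes 73 ms to arrive has traveled ~21,885 km.
    d = distance_from_delay(0.000, 0.073)
    print(round(d / 1000))  # ~21885 km
```

Note the sensitivity that makes synchronization so critical: a clock error of just one microsecond corresponds to a distance error of about 300 meters.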

Research satellite.

NASA's Voyager 1 spacecraft has entered a new region between our solar system and
interstellar space. Data obtained from Voyager over the last year reveal this new region to
be a kind of cosmic purgatory. In it, the wind of charged particles streaming out from our
sun has calmed, our solar system's magnetic field has piled up, and higher-energy
particles from inside our solar system appear to be leaking out into interstellar space.
"Voyager tells us now that we're in a stagnation region in the outermost layer of the
bubble around our solar system," said Ed Stone, Voyager project scientist at the
California Institute of Technology.  "Voyager is showing that what is outside is pushing
back. We shouldn't have long to wait to find out what the space between stars is really
like."
Although Voyager 1 is about 11 billion miles (18 billion kilometers) from the sun, it is
not yet in interstellar space. In the latest data, the direction of the magnetic field lines has
not changed, indicating Voyager is still within the heliosphere, the bubble of charged
particles the sun blows around itself. The data do not reveal exactly when Voyager 1 will
make it past the edge of the solar atmosphere into interstellar space, but suggest it will be
in a few months to a few years.
The latest findings, described today at the American Geophysical Union's fall meeting in
San Francisco, come from Voyager's Low Energy Charged Particle instrument, Cosmic
Ray Subsystem and Magnetometer.
Scientists previously reported the outward speed of the solar wind had diminished to zero
in April 2010, marking the start of the new region. Mission managers rolled the
spacecraft several times this spring and summer to help scientists discern whether the
solar wind was blowing strongly in another direction. It was not. Voyager 1 is plying the
celestial seas in a region similar to Earth's doldrums, where there is very little wind.
During this past year, Voyager's magnetometer also detected a doubling in the intensity
of the magnetic field in the stagnation region. Like cars piling up at a clogged freeway
off-ramp, the increased intensity of the magnetic field shows that inward pressure from
interstellar space is compacting it.
Voyager has been measuring energetic particles that originate from inside and outside our
solar system. Until mid-2010, the intensity of particles originating from inside our solar
system had been holding steady. But during the past year, the intensity of these energetic
particles has been declining, as though they are leaking out into interstellar space. The
particles are now half as abundant as they were during the previous five years.
At the same time, Voyager has detected a 100-fold increase in the intensity of high-
energy electrons from elsewhere in the galaxy diffusing into our solar system from
outside, which is another indication of the approaching boundary.
"We've been using the flow of energetic charged particles at Voyager 1 as a kind of wind
sock to estimate the solar wind velocity," said Rob Decker, a Voyager Low-Energy
Charged Particle Instrument co-investigator at the Johns Hopkins University Applied
Physics Laboratory in Laurel, Md. "We've found that the wind speeds are low in this
region and gust erratically. For the first time, the wind even blows back at us. We are
evidently traveling in completely new territory. Scientists had suggested previously that
there might be a stagnation layer, but we weren't sure it existed until now."
Launched in 1977, Voyager 1 and 2 are in good health. Voyager 2 is 15 billion km away
from the sun.
The Voyager spacecraft were built by NASA's Jet Propulsion Laboratory in Pasadena,
Calif., which continues to operate both. JPL is a division of the California Institute of
Technology. The Voyager missions are a part of the NASA Heliophysics System
Observatory, sponsored by the Heliophysics Division of the Science Mission Directorate
in Washington.

Satellite Applications

Broadband Digital Communications 

Broadband satellites transmit high-speed data and video directly to consumers and
businesses. Markets for broadband services also include interactive TV, wholesale
telecommunications, telephony, and point-of-sale communications, such as credit card
transactions and inventory control.

Direct-Broadcast Services 

Direct-broadcast satellites (DBS) transmit signals for direct reception by the general
public, such as satellite television and radio. Satellite signals are sent directly to users
through their own receiving antennas or satellite dishes, in contrast to satellite/cable
systems in which signals are received by a ground station, and re-broadcast to users by
cable.

Environmental Monitoring 

Environmental monitoring satellites carry highly sensitive imagers and sounders to


monitor the Earth's environment, including the vertical thermal structure of the
atmosphere; the movement and formation of clouds; ocean temperatures; snow levels;
glacial movement; and volcanic activity. Large-scale computers use this data to model
the entire earth's atmosphere and create weather forecasts such as those provided by
national weather services in the U.S. and abroad.

These satellites are typically self-contained systems that carry their own communications
systems for distributing the data they gather, in the form of reports and other products,
for analyzing the condition of the environment. Satellites are particularly useful in this
case because they can provide continuous coverage of very large geographic regions.

Fixed-Satellite Services 

Satellites providing Fixed-Satellite Services (FSS) transmit radio communications


between ground Earth stations at fixed locations. Satellite-transmitted information is
carried in the form of radio-frequency signals. Any number of satellites may be used to
link these stations. Earth stations that are part of fixed-satellite services networks also use
satellite news gathering vehicles to broadcast from media events, such as sporting events
or news conferences. In addition, FSS satellites provide a wide variety of services
including paging networks and point-of-sale support, such as credit card transactions and
inventory control.

Government 
Providing X-band satellite communications services to governments is a new commercial
application with substantial growth potential. SS/L has designed and built two X-band
satellites, which will be available for lease to government users in the United States and
Spain, as well as other friendly and allied nations within the satellites' extensive coverage
areas. Government communications use specially allocated frequency bands and
waveforms.

Beyond environmental applications, government sensors gather intelligence in various


forms, including radar, infrared imaging, and optical sensing.

Mobile Satellite Services 

Mobile Satellite Services (MSS) use a constellation of satellites that provide


communications services to mobile and portable wireless devices, such as cellular phones
and global positioning systems. The satellite constellation is interconnected with land-
based cellular networks or ancillary terrestrial components that allow for interactive
mobile-to-mobile and mobile-to-fixed voice, data, and multimedia communications
worldwide. With repeaters located in orbit, the interference of traditional fixed-ground
terminals can be eliminated.

MBA SEMESTER III


MI0035 –Computer Networks - 4 Credits
Assignment Set- 2 (60 Marks)
Q.1. Write down the features of fast Ethernet and gigabit Ethernet.

Answer: Fast Ethernet Technology

Fast Ethernet, or 100BaseT, is conventional Ethernet but faster, operating at 100 Mbps
instead of 10 Mbps. Fast Ethernet is based on the proven CSMA/CD Media Access Control
(MAC) protocol and can use existing 10BaseT cabling (See Appendix for pinout diagram and
table). Data can move from 10 Mbps to 100 Mbps without protocol translation or changes to
application and networking software.

Data- Link Layer

Fast Ethernet maintains CSMA/CD, the Ethernet transmission protocol. However, Fast
Ethernet reduces the duration of time each bit is transmitted by a factor of 10, enabling
the packet speed to increase tenfold from 10 Mbps to 100 Mbps. Data can move between
Ethernet and Fast Ethernet without requiring protocol translation, because Fast Ethernet
also maintains the 10BaseT error control functions as well as the frame format and
length.

Other high-speed technologies such as 100VG-AnyLAN, FDDI, and Asynchronous Transfer Mode
(ATM) achieve 100 Mbps or higher speeds by implementing different protocols that require
protocol translation when moving data to and from 10BaseT. This protocol translation
involves changes to the frame that typically mean higher latencies when frames are passed
through layer 2 LAN switches.

Physical Layer Media Options

Fast Ethernet can run over the same variety of media as 10BaseT, including UTP,
shielded twisted-pair (STP), and fiber. The Fast Ethernet specification defines separate
physical sublayers for each media type:

• 100BaseT4 for four pairs of voice- or data-grade Category 3, 4, and 5 UTP wiring

• 100BaseTX for two pairs of data-grade Category 5 UTP and STP wiring

• 100BaseFX for two strands of 62.5/125-micron multimode fiber

In many cases, organizations can upgrade to 100BaseT technology without replacing


existing wiring. However, for installations with Category 3 UTP wiring in all or part of
their locations, four pairs must be available to implement Fast Ethernet.

The MII layer of 100BaseT couples these physical sublayers to the CSMA/CD MAC
layer (see Figure 1). The MII provides a single interface that can support external
transceivers for any of the 100BaseT physical sublayers. For the physical connection, the
MII is implemented on Fast Ethernet devices such as routers, switches, hubs, and
adapters, and on transceiver devices using a 40-pin connector (See Appendix for pinout
and connector diagrams). Cisco Systems contributed to the MII specification.

Physical Layer Signaling Schemes

Each physical sublayer uses a signaling scheme that is appropriate to its media type.
100BaseT4 uses three pairs of wire for 100-Mbps transmission and the fourth pair for
collision detection. This method lowers the 100BaseT4 signaling to 33 Mbps per pair,
making it suitable for Category 3, 4, and 5 wiring.

100BaseTX uses one pair of wires for transmission (125-MHz frequency operating at 80-
percent efficiency to allow for 4B5B encoding) and the other pair for collision detection
and receive. 100BaseFX uses one fiber for transmission and the other fiber for collision
detection and receive. The 100BaseTX and 100BaseFX physical signaling channels are
based on FDDI physical layers developed and approved by the American National
Standards Institute (ANSI) X3T9.5 committee. 100BaseTX uses the MLT-3 line
encoding signaling scheme, which Cisco developed and contributed to the ANSI
committee as the specification for FDDI over Category 5 UTP. Today MLT-3 also is
used as the signaling scheme for ATM over Category 5 UTP.
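The "80-percent efficiency" figure above follows directly from 4B5B block coding: every 4 data bits are sent as a 5-bit code group, so a 125 MHz line rate carries exactly 100 Mbps of payload. A quick sanity check (helper name is our own):

```python
def payload_rate_mbps(line_rate_mhz, data_bits=4, code_bits=5):
    """4B5B block coding sends each 4-bit nibble as a 5-bit code group,
    so only data_bits/code_bits (here 4/5 = 80%) of the line rate
    carries actual payload."""
    return line_rate_mhz * data_bits / code_bits

if __name__ == "__main__":
    # 100BaseTX: 125 MHz signaling x 4/5 coding efficiency = 100 Mbps
    print(payload_rate_mbps(125))  # 100.0
```

The extra fifth bit is the price paid for code groups that guarantee enough signal transitions for clock recovery and leave spare codes for control symbols.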

Gigabit Ethernet:

Gigabit Ethernet is a 1-gigabit/sec (1,000-Mbit/sec) extension of the IEEE 802.3 Ethernet


networking standard. Its primary niches are corporate LANs, campus networks, and
service provider networks where it can be used to tie together existing 10-Mbit/sec and
100-Mbit/sec Ethernet networks. Gigabit Ethernet can replace 100-Mbit/sec FDDI (Fiber
Distributed Data Interface) and Fast Ethernet backbones, and it competes with ATM
(Asynchronous Transfer Mode) as a core networking technology. Many ISPs use Gigabit
Ethernet in their data centers.

Gigabit Ethernet provides an ideal upgrade path for existing Ethernet-based networks. It
can be installed as a backbone network while retaining the existing investment in
Ethernet hubs, switches, and wiring plants. In addition, management tools can be
retained, although network analyzers will require updates to handle the higher speed.

Gigabit Ethernet provides an alternative to ATM as a high-speed networking technology.


While ATM has built-in QoS (quality of service) to support real-time network traffic,
Gigabit Ethernet may be able to provide a high level of service quality by providing more
bandwidth than is needed.

This topic continues in "The Encyclopedia of Networking and Telecommunications" with a
discussion of the following:

 Gigabit Ethernet features and specification

 Gigabit Ethernet modes and functional elements

 Gigabit Ethernet committees and specifications, including:

 1000Base-LX (IEEE 802.3z)

 1000Base-SX (IEEE 802.3z)

 1000Base-CX (IEEE 802.3z)

 1000Base-T (IEEE 802.3ab)

 10-Gigabit Ethernet (IEEE 802.3ae)

 Gigabit Ethernet switches

 Network configuration and design

 Flat network or subnets

 Gigabit Ethernet backbones

 Switch-to-server links

 Gigabit Ethernet to the desktop

 Switch-to-switch links

 Gigabit Ethernet versus ATM

 Hybrid Gigabit Ethernet/ATM Core Network

10-Gigabit Ethernet

As if 1 Gbit/sec wasn't enough, the IEEE is working to define 10-Gigabit Ethernet
(sometimes called "10 GE"). The new standard is being developed by the IEEE 802.3ae
Working Group. Service providers will be the first to take advantage of this standard. It
is being deployed in emerging metro-Ethernet networks. See "MAN (Metropolitan Area
Network)" and "Network Access Services."

As with 1-Gigabit Ethernet, 10-Gigabit Ethernet will preserve the 802.3 Ethernet frame
format, as well as minimum and maximum frame sizes. It will support full-duplex
operation only. The topology is star-wired LANs that use point-to-point links, and
structured cabling topologies. 802.3ad link aggregation will also be supported.

The new standard will support new multimedia applications, distributed processing,
imaging, medical, CAD/CAM, and a variety of other applications, many of which cannot even
be perceived today. Most certainly it will be used in service provider data centers and as
part of metropolitan area networks. The technology will also be useful in the SAN
(Storage Area Network) environment. Refer to the following Web sites for more
information.

10 GEA (10 Gigabit Ethernet Alliance): http://www.10gea.org/Tech-whitepapers.htm

Telecommunications article on 10-Gigabit Ethernet, "Lighting Internet in the WAN":
http://www.telecoms-mag.com/issues/200009/tcs/lighting_internet.html

Q.2. Differentiate the working between pure ALOHA and slotted ALOHA

ALOHA:

ALOHA is a computer networking system introduced in the early 1970s by Norman Abramson
and his colleagues at the University of Hawaii to solve the channel allocation problem.
Depending on whether global time synchronization is required, ALOHA is divided into two
versions or protocols: pure ALOHA and slotted ALOHA.

Pure Aloha:

Pure ALOHA does not require global time synchronization. The basic idea of the pure ALOHA
system is that it allows users to transmit whenever they have data. A sender, like any
other user, can listen to what it is transmitting, and through this feedback it is able to
detect a collision, if any. If a collision is detected, the sender waits a random period of
time and attempts the transmission again. The waiting time must not be the same for all
senders, or the same frames would collide and be destroyed over and over. Systems in which
multiple users share a common channel in a way that can lead to conflicts are widely known
as contention systems.

Efficiency of Pure Aloha:

Let T be the time needed to transmit one frame on the channel, and define a "frame-time"
as a unit of time equal to T. Let G be the mean of the Poisson distribution of
transmission attempts; that is, on average there are G transmission attempts per
frame-time. Let t be the time at which the sender wants to send a frame. We want to use
the channel for one frame-time beginning at t, so we need all other stations to refrain
from transmitting during this time. Moreover, we need the other stations to refrain from
transmitting between t-T and t as well, because a frame sent during that interval would
overlap with our frame.

EFFICIENCY OF ALOHA

The vulnerable period for a frame is 2T, where T is the frame time: a frame will not collide if no other frames are sent within one frame time of its start, before or after. For any frame-time, the probability of there being k transmission attempts during that frame-time is

P(k) = (G^k * e^(-G)) / k!

If throughput (the number of frames delivered per frame-time) is represented by S, then under any load S = G*P0, where P0 is the probability that a frame does not suffer a collision. A frame avoids collision if no other frames are sent during its vulnerable period. In time T, P0 = e^(-G); in time 2T, P0 = e^(-2G), since the mean number of frames generated in 2T is 2G. The throughput of pure ALOHA is therefore

S = G*P0 = G*e^(-2G)
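The derivation above can be checked numerically. A minimal sketch (function and variable names are illustrative):

```python
import math

def pure_aloha_throughput(G):
    # S = G * e^(-2G): a frame survives only if no other frame starts
    # within the 2T vulnerable period (Poisson arrivals).
    return G * math.exp(-2 * G)

# The throughput peaks at G = 0.5, where S = 1/(2e), about 0.184:
# pure ALOHA can use at most roughly 18.4% of the channel.
best_G = max((g / 1000 for g in range(1, 2000)), key=pure_aloha_throughput)
```

Scanning a grid of offered loads confirms the analytic maximum at G = 0.5.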

Slotted Aloha Channel:

Slotted ALOHA does require global time synchronization: time is divided into discrete slots of one frame time, and a station may begin transmitting only at the beginning of a slot.

Efficiency of Slotted Aloha Channel:

Assume that a sending station has to wait until the beginning of a slot (one slot is one frame time) and that arrivals still follow a Poisson distribution and are probabilistically independent. In this case the vulnerable period is just T time units, and the probability that k frames are generated in a frame-time is

Pk = (G^k * e^(-G)) / k!

In time T, the probability of zero other frames is P0 = e^(-G), so the throughput becomes

S = G*P0 = G*e^(-G)
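The slotted formula can be checked the same way as the pure ALOHA one; a small sketch (names are illustrative):

```python
import math

def slotted_aloha_throughput(G):
    # S = G * e^(-G): with slot boundaries, the vulnerable period
    # shrinks to a single frame time T.
    return G * math.exp(-G)

# Peak throughput is 1/e, about 0.368, at G = 1 -- double that of
# pure ALOHA, because halving the vulnerable period doubles P0's exponent.
peak = slotted_aloha_throughput(1.0)
```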

Comparison Of Pure Aloha And Slotted Aloha:



PURE ALOHA VS SLOTTED ALOHA

Throughput versus offered traffic for pure ALOHA and slotted ALOHA systems, i.e., a plot of S against G using the formulas S = G*e^(-2G) and S = G*e^(-G).

CSMA: CSMA is a set of rules in which the devices attached to a network first determine whether the channel (carrier) is in use or free and then act accordingly. Because in this MAC protocol the network devices, or nodes, sense the channel before transmission, it is known as the carrier sense multiple access protocol. "Multiple access" indicates that many devices can connect to and share the same network, and that anything a node transmits is heard by all the stations on the network.

Q.3. Write down the distance vector algorithm. Explain the path vector protocol.

Answer: Distance Vector Routing algorithm:

1) Each node estimates the cost from itself to each destination.

2) Each node sends its cost information to its neighbors.

3) On receiving cost information from a neighbor, a node updates its routing table accordingly.

4) Repeat steps 1 to 3 periodically.
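The four steps above can be sketched as a small simulation. This is a minimal Bellman-Ford-style sketch; the topology and names are illustrative, not part of any real protocol implementation:

```python
INF = float("inf")

def distance_vector(nodes, links, rounds=10):
    """Distance-vector routing sketch. links maps (u, v) to a
    symmetric link cost; returns dist[u][d], u's estimate to d."""
    cost = {u: {v: INF for v in nodes} for u in nodes}
    for (u, v), c in links.items():
        cost[u][v] = cost[v][u] = c
    # Step 1: each node's initial estimate is the direct link cost.
    dist = {u: {d: 0 if u == d else cost[u][d] for d in nodes} for u in nodes}
    for _ in range(rounds):      # Step 4: repeat periodically.
        for u in nodes:          # Steps 2-3: use neighbors' advertised vectors.
            for d in nodes:
                if u == d:
                    continue
                best_via = min(cost[u][n] + dist[n][d] for n in nodes if n != u)
                dist[u][d] = min(dist[u][d], best_via)
    return dist

nodes = ["A", "B", "C", "D"]
links = {("A", "B"): 1, ("B", "C"): 2, ("C", "D"): 1, ("A", "D"): 7}
dist = distance_vector(nodes, links)
```

After a few exchange rounds, A learns that the three-hop path A-B-C-D (cost 4) beats its direct cost-7 link to D.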

A path vector protocol is a computer network routing protocol which maintains path information that is updated dynamically. Updates which have looped through the network and returned to the same node are easily detected and discarded. This mechanism is sometimes used in Bellman-Ford routing algorithms to avoid "count to infinity" problems.

It is different from the distance vector routing and link state routing. Each entry in the
routing table contains the destination network, the next router and the path to reach the
destination.

Path Vector Messages in BGP: The autonomous system boundary routers (ASBRs), which participate in path vector routing, advertise the reachability of networks. Each router that receives a path vector message must verify that the advertised path complies with its policy. If it does, the ASBR modifies its routing table and the message before sending it to the next neighbor: in the modified message it adds its own AS number and replaces the next-router entry with its own identification.
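The loop-detection and update rule just described can be sketched as follows. This is a hypothetical helper, not actual BGP code; the message layout and `policy_ok` callback are assumptions for illustration:

```python
def process_advertisement(my_as, advert, policy_ok):
    """Sketch of an ASBR handling a path-vector message.
    advert holds a destination network and the AS path to reach it."""
    if my_as in advert["path"]:    # looped back to us: detect and discard
        return None
    if not policy_ok(advert):      # drop paths that violate local policy
        return None
    # Routing-table update omitted; prepend our own AS number before
    # forwarding the modified message to the next neighbor.
    return {"dest": advert["dest"], "path": [my_as] + advert["path"]}

msg = {"dest": "10.0.0.0/8", "path": [200, 300]}
forwarded = process_advertisement(100, msg, policy_ok=lambda a: True)
looped = process_advertisement(200, msg, policy_ok=lambda a: True)
```

Because every AS stamps itself onto the path, an advertisement that loops back carries its own AS number and is discarded immediately.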

BGP is an example of a path vector protocol. In BGP the routing table maintains the autonomous systems that are traversed in order to reach the destination system. Exterior Gateway Protocol (EGP) does not use path vectors.

Q.4. State the working principle of the TCP segment header and the UDP header.

Answer: TCP Header Format

TCP segments are sent as internet datagrams. The Internet Protocol header carries several information fields, including the source and destination host addresses [2]. A TCP header follows the internet header, supplying information specific to the TCP protocol. This division allows for the existence of host-level protocols other than TCP.

TCP Header Format

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Source Port | Destination Port |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Sequence Number |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Acknowledgment Number |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Data | |U|A|P|R|S|F| |
| Offset| Reserved |R|C|S|S|Y|I| Window |
| | |G|K|H|T|N|N| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Checksum | Urgent Pointer |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Options | Padding |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| data |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

TCP Header Format



Note that one tick mark represents one bit position.

Figure 3.
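The fixed 20-byte header shown in the figure can be unpacked directly. A minimal Python sketch following the field layout above (the hand-built SYN segment is an invented example):

```python
import struct

def parse_tcp_header(segment):
    """Unpack the 20-byte fixed TCP header (network byte order)."""
    (src, dst, seq, ack, off_flags, window,
     checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "data_offset": off_flags >> 12,   # header length in 32-bit words
        "flags": off_flags & 0x3F,        # URG|ACK|PSH|RST|SYN|FIN bits
        "window": window,
        "checksum": checksum,
        "urgent_ptr": urgent,
    }

# Example SYN segment: ports 1234 -> 80, seq 1000, offset 5, SYN bit set.
raw = struct.pack("!HHIIHHHH", 1234, 80, 1000, 0, (5 << 12) | 0x02, 8192, 0, 0)
hdr = parse_tcp_header(raw)
```

A data offset of 5 means a 20-byte header with no options, which matches the minimum header length.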

Source Port: 16 bits

The source port number.

Destination Port: 16 bits

The destination port number.

Sequence Number: 32 bits

The sequence number of the first data octet in this segment (except when SYN is
present). If SYN is present the sequence number is the initial sequence number (ISN) and
the first data octet is ISN+1.

Acknowledgment Number: 32 bits

If the ACK control bit is set this field contains the value of the next sequence number
the sender of the segment is expecting to receive. Once a connection is established this is
always sent.

Data Offset: 4 bits

The number of 32 bit words in the TCP Header. This indicates where the data begins.
The TCP header (even one including options) is an integral number of 32 bits long.

Reserved: 6 bits

Reserved for future use. Must be zero.

Control Bits: 6 bits (from left to right):

URG: Urgent Pointer field significant


ACK: Acknowledgment field significant
PSH: Push Function

RST: Reset the connection


SYN: Synchronize sequence numbers
FIN: No more data from sender

Window: 16 bits

The number of data octets beginning with the one indicated in the acknowledgment
field which the sender of this segment is willing to accept.

Checksum: 16 bits

The checksum field is the 16 bit one's complement of the one's complement sum of all
16 bit words in the header and text. If a segment contains an odd number of header and
text octets to be checksummed, the last octet is padded on the right with zeros to form a
16 bit word for checksum purposes. The pad is not transmitted as part of the segment.
While computing the checksum, the checksum field itself is replaced with zeros.

The checksum also covers a 96 bit pseudo header conceptually prefixed to the TCP
header. This pseudo header contains the Source Address, the Destination Address, the
Protocol, and TCP length.
This gives the TCP protection against misrouted segments. This information is carried
in the Internet Protocol and is transferred across the TCP/Network interface in the
arguments or results of calls by the TCP on the IP.

+--------+--------+--------+--------+
| Source Address |
+--------+--------+--------+--------+
| Destination Address |
+--------+--------+--------+--------+
| zero | PTCL | TCP Length |
+--------+--------+--------+--------+

The TCP Length is the TCP header length plus the data length in octets (this is not an
explicitly transmitted quantity, but is computed), and it does not count the 12 octets of the
pseudo header.
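The checksum rule described above, including the 96-bit pseudo-header, can be sketched in Python. Function names are illustrative; the segment passed in is assumed to have its own checksum field zeroed:

```python
import struct

def ones_complement_sum16(data):
    """16-bit one's complement sum; odd-length data is zero-padded."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                 # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return total

def tcp_checksum(src_ip, dst_ip, segment):
    """Checksum over the pseudo-header (source address, destination
    address, zero, protocol 6, TCP length) plus the TCP segment."""
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(segment))
    return (~ones_complement_sum16(pseudo + segment)) & 0xFFFF
```

A correct checksum has the defining property that re-summing the pseudo-header and segment with the checksum inserted yields 0xFFFF, which is how receivers verify it.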

Urgent Pointer: 16 bits

This field communicates the current value of the urgent pointer as a positive offset
from the sequence number in this segment. The urgent pointer points to the sequence
number of the octet following the urgent data. This field is only interpreted in
segments with the URG control bit set.

Options: variable

Options may occupy space at the end of the TCP header and are a multiple of 8 bits in
length. All options are included in the checksum. An option may begin on any octet
boundary. There are two cases for the format of an option:

Case 1: A single octet of option-kind.


Case 2: An octet of option-kind, an octet of option-length, and the actual option-data octets.

The option-length counts the two octets of option-kind and option-length as well as the
option-data octets.

Note that the list of options may be shorter than the data offset field might imply. The
content of the header beyond the End-of-Option option must be header padding (i.e.,
zero).

A TCP must implement all options.


Currently defined options include (kind indicated in octal):

Kind  Length  Meaning
----  ------  -------
 0      -     End of option list.
 1      -     No-Operation.
 2      4     Maximum Segment Size.

Specific Option Definitions

End of Option List

+--------+
|00000000|
+--------+
Kind=0

This option code indicates the end of the option list. This might not coincide with
the end of the TCP header according to the Data Offset field. This is used at the end of
all options, not the end of each option, and need only be used if the end of the options
would not otherwise coincide with the end of the TCP header.

No-Operation

+--------+
|00000001|
+--------+
Kind=1

This option code may be used between options, for example, to align the beginning
of a subsequent option on a word boundary.
There is no guarantee that senders will use this option, so receivers must be prepared
to process options even if they do not begin on a word boundary.

Maximum Segment Size

+--------+--------+---------+--------+
|00000010|00000100| max seg size |
+--------+--------+---------+--------+
Kind=2 Length=4

Maximum Segment Size Option Data: 16 bits

If this option is present, then it communicates the maximum receive segment size
at the TCP which sends this segment.
This field must only be sent in the initial connection request (i.e., in segments with
the SYN control bit set). If this option is not used, any segment size is allowed.

Padding: variable

The TCP header padding is used to ensure that the TCP header ends and data begins on
a 32 bit boundary. The padding is composed of zeros.

The User Datagram Protocol (UDP)

The User Datagram Protocol (UDP) is a transport layer protocol defined for use with
the IP network layer protocol. It is defined by RFC 768 written by John Postel. It
provides a best-effort datagram service to an End System (IP host).

The service provided by UDP is an unreliable service that provides no guarantees for
delivery and no protection from duplication (e.g. if this arises due to software errors
within an Intermediate System (IS)). The simplicity of UDP reduces the overhead from
using the protocol and the services may be adequate in many cases.

UDP provides a minimal, unreliable, best-effort, message-passing transport to applications and upper-layer protocols. Compared to other transport protocols, UDP and
its UDP-Lite variant are unique in that they do not establish end-to-end connections
between communicating end systems. UDP communication consequently does not incur
connection establishment and teardown overheads and there is minimal associated end
system state. Because of these characteristics, UDP can offer a very efficient communication transport to some applications, but it has no inherent congestion control or reliability. On many platforms, applications can send UDP datagrams at the line rate of the link interface, which is often much greater than the available path capacity; doing so would contribute to congestion along the path, so applications need to be designed responsibly [RFC 5405].

One increasingly popular use of UDP is as a tunneling protocol, where a tunnel endpoint
encapsulates the packets of another protocol inside UDP datagrams and transmits them to
another tunnel endpoint, which decapsulates the UDP datagrams and forwards the
original packets contained in the payload. Tunnels establish virtual links that appear to
directly connect locations that are distant in the physical Internet topology, and can be
used to create virtual (private) networks. Using UDP as a tunneling protocol is attractive
when the payload protocol is not supported by middleboxes that may exist along the path,
because many middleboxes support UDP transmissions.

UDP does not provide any communications security. Applications that need to protect
their communications against eavesdropping, tampering, or message forgery therefore
need to separately provide security services using additional protocol mechanisms.

Protocol Header

A computer may send UDP packets without first establishing a connection to the
recipient. A UDP datagram is carried in a single IP packet and is hence limited to a
maximum payload of 65,507 bytes for IPv4 and 65,527 bytes for IPv6. The transmission
of large IP packets usually requires IP fragmentation. Fragmentation decreases
communication reliability and efficiency and should therefore be avoided.

To transmit a UDP datagram, a computer completes the appropriate fields in the UDP
header (PCI) and forwards the data together with the header for transmission by the IP
network layer.

The UDP protocol header consists of 8 bytes of Protocol Control Information (PCI)

The UDP header consists of four fields each of 2 bytes in length:

 Source Port (UDP packets from a client use this as a service access point
(SAP) to indicate the session on the local client that originated the packet. UDP
packets from a server carry the server SAP in this field)

 Destination Port (UDP packets from a client use this as a service access point
(SAP) to indicate the service required from the remote server. UDP packets from
a server carry the client SAP in this field)

 UDP length (The number of bytes comprising the combined UDP header information and payload data)

 UDP Checksum (A checksum to verify that the end to end data has not been
corrupted by routers or bridges in the network or by the processing in an end
system. The algorithm to compute the checksum is the Standard Internet
Checksum algorithm. This allows the receiver to verify that it was the intended
destination of the packet, because it covers the IP addresses, port numbers and
protocol number, and it verifies that the packet is not truncated or padded,
because it covers the size field. Therefore, this protects an application against
receiving corrupted payload data in place of, or in addition to, the data that was
sent. In the cases where this check is not required, the value of 0x0000 is placed
in this field, in which case the data is not checked by the receiver.)
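The four 2-byte fields listed above can be packed and unpacked directly. A minimal sketch (the port numbers and payload are invented examples; checksum 0x0000 marks "not computed", as described for IPv4):

```python
import struct

def build_udp_header(src_port, dst_port, payload, checksum=0):
    """Pack the 8-byte UDP header (PCI). The UDP length field covers
    the header plus the payload data."""
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = build_udp_header(5000, 53, b"example-query")
src, dst, length, csum = struct.unpack("!HHHH", hdr)
```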

Like for other transport protocols, the UDP header and data are not processed
by Intermediate Systems (IS) in the network, and are delivered to the final destination in
the same form as originally transmitted.

At the final destination, the UDP protocol layer receives packets from the IP network
layer. These are checked using the checksum (when >0, this checks correct end-to-end
operation of the network service) and all invalid PDUs are discarded. UDP does not make
any provision for error reporting if the packets are not delivered. Valid data are passed to
the appropriate session layer protocol identified by the source and destination port
numbers (i.e. the session service access points).

UDP and UDP-Lite also may be used for multicast and broadcast, allowing senders to
transmit to multiple receivers.

Using UDP

Application designers are generally aware that UDP does not provide any reliability, e.g.,
it does not retransmit any lost packets. Often, this is a main reason to consider UDP as a
transport. Applications that do require reliable message delivery therefore need to
implement appropriate protocol mechanisms in their applications (e.g. tftp).

UDP's best effort service does not protect against datagram duplication, i.e., an
application may receive multiple copies of the same UDP datagram. Application
designers therefore need to verify that their application gracefully handles datagram
duplication and may need to implement mechanisms to detect duplicates.

The Internet may also significantly delay some packets with respect to others, e.g., due to
routing transients, intermittent connectivity, or mobility. This can cause reordering,
where UDP datagrams arrive at the receiver in an order different from the transmission
order. Applications that require ordered delivery must restore datagram ordering
themselves.

The burden of needing to code all these protocol mechanisms can be avoided by using TCP!

Q.5. What is IP addressing? Discuss the different classes of IP addressing.

Answer: An IP address is an identifier for a computer or device on a TCP/IP network. Networks using the TCP/IP protocol route messages based on the IP address of the destination. The format of an IP address is a 32-bit numeric address written as four numbers separated by periods. Each number can be zero to 255. For example, 1.160.10.240 could be an IP address. Within an isolated network, you can assign IP addresses at random as long as each one is unique. However, connecting a private network to the Internet requires using registered IP addresses (called Internet addresses) to avoid duplicates.
The four numbers in an IP address are used in different ways to identify a particular
network and a host on that network. Four regional Internet registries -- ARIN, RIPE
NCC, LACNIC and APNIC -- assign Internet addresses from the following three classes.
· Class A - supports 16 million hosts on each of 126 networks

· Class B - supports 65,000 hosts on each of 16,000 networks



· Class C - supports 254 hosts on each of 2 million networks

The number of unassigned Internet addresses is running out, so a new classless scheme
called CIDR is gradually replacing the system based on classes A, B, and C and is tied to
adoption of IPv6.

IP address classes

These IP addresses can further be broken down into classes. These classes are A, B, C, D,
E and their possible ranges can be seen in Figure 2 below.

Class   Start address   Finish address
A       0.0.0.0         126.255.255.255
B       128.0.0.0       191.255.255.255
C       192.0.0.0       223.255.255.255
D       224.0.0.0       239.255.255.255
E       240.0.0.0       255.255.255.255

Figure 2. IP address Classes
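The class of an address is determined entirely by its first octet, so the table in Figure 2 can be turned into a small classifier (a sketch; the function name is illustrative):

```python
def ip_class(address):
    """Classify a dotted-quad IPv4 address by its first octet,
    following the ranges in Figure 2 (127.x.x.x is loopback)."""
    first = int(address.split(".")[0])
    if first == 127:
        return "Loopback"
    if first <= 126:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D"
    return "E"
```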

If you look at the table you may notice something strange: the range of IP addresses from Class A to Class B skips the 127.0.0.0-127.255.255.255 range. That is because this range is reserved for the special addresses called loopback addresses, described below.

The rest of classes are allocated to companies and organizations based upon the amount
of IP addresses that they may need. Listed below are descriptions of the IP classes and
the organizations that will typically receive that type of allocation.

Default Network: The special network 0.0.0.0 is generally used for routing.

Class A: From the table above you see that there are 126 class A networks. These
networks consist of 16,777,214 possible IP addresses that can be assigned to devices and
computers. This type of allocation is generally given to very large networks such as
multi-national companies.

Loopback: This is the special 127.0.0.0 network that is reserved as a loopback to your
own computer. These addresses are used for testing and debugging of your programs or
hardware.

Class B: This class consists of 16,384 individual networks, each allocation consisting of
65,534 possible IP addresses. These blocks are generally allocated to Internet Service
Providers and large networks, like a college or major hospital.

Class C: There is a total of 2,097,152 Class C networks available, with each network providing 254 usable IP addresses. This type of class is generally given to small to mid-sized companies.

Class D: The IP addresses in this class are reserved for a service called Multicast.

Class E: The IP addresses in this class are reserved for experimental use.

Broadcast: This is the special network of 255.255.255.255, and is used for broadcasting
messages to the entire network that your computer resides on.

Private Addresses

There are also blocks of IP addresses that are set aside for internal private use for
computers not directly connected to the Internet. These IP addresses are not supposed to
be routed through the Internet, and most service providers will block the attempt to do so.
These IP addresses are used for internal use by company or home networks that need to
use TCP/IP but do not want to be directly visible on the Internet. These IP ranges are:

Class   Private Start Address   Private End Address
A       10.0.0.0                10.255.255.255
B       172.16.0.0              172.31.255.255
C       192.168.0.0             192.168.255.255

If you are on a home/office private network and want to use TCP/IP, you should assign
your computers/devices IP addresses from one of these three ranges. That way your
router/firewall would be the only device with a true IP address which makes your
network more secure.
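Checking whether an address falls in one of the three private ranges listed above can be done with the standard `ipaddress` module (the function name is illustrative):

```python
import ipaddress

def is_private_range(address):
    """Check whether an IPv4 address falls in one of the three
    private (RFC 1918) ranges from the table above."""
    ranges = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
    ip = ipaddress.ip_address(address)
    return any(ip in ipaddress.ip_network(r) for r in ranges)
```

Note that 172.16.0.0/12 covers exactly 172.16.0.0 through 172.31.255.255, matching the Class B row of the table.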

Common Problems and Resolutions

The most common problem people encounter is accidentally assigning an IP address to a device on the network that is already assigned to another device. When this happens, the other computers will not know which device should get the information, and you can experience erratic behavior. On most operating systems and devices, if two devices on the local network have the same IP address, you will generally get an "IP Conflict" warning, meaning that the device giving the warning has detected another device on the network using the same address.

The best solution to avoid a problem like this is to use a service called DHCP that almost
all home routers provide. DHCP, or Dynamic Host Configuration Protocol, is a service
that assigns addresses to devices and computers. You tell the DHCP server what range of
IP addresses you would like it to assign, and then the DHCP server takes the
responsibility of assigning those IP addresses to the various devices and keeping track so
those IP addresses are assigned only once.

Q.6. Define cryptography. Discuss two cryptography techniques.

Answer: Cryptography is the science of information security. The word is derived from the Greek kryptos, meaning hidden. Cryptography is closely related to the disciplines of cryptology and cryptanalysis. Cryptography includes techniques such as microdots, merging words with images, and other ways to hide information in storage or transit. However, in today's computer-centric world, cryptography is most often associated with scrambling plaintext (ordinary text, sometimes referred to as cleartext) into ciphertext (a process called encryption), then back again (known as decryption). Individuals who practice this field are known as cryptographers.

Modern cryptography concerns itself with the following four objectives:

1) Confidentiality (the information cannot be understood by anyone for whom it was unintended)

2) Integrity (the information cannot be altered in storage or transit between sender and intended receiver without the alteration being detected)

3) Non-repudiation (the creator/sender of the information cannot deny at a later stage his or her intentions in the creation or transmission of the information)

4) Authentication (the sender and receiver can confirm each other's identity and the origin/destination of the information)

TYPES OF CRYPTOGRAPHIC ALGORITHMS

There are several ways of classifying cryptographic algorithms. For purposes of this
paper, they will be categorized based on the number of keys that are employed for
encryption and decryption, and further defined by their application and use. The three
types of algorithms that will be discussed are (Figure 1):
 Secret Key Cryptography (SKC): Uses a single key for both encryption and
decryption

 Public Key Cryptography (PKC): Uses one key for encryption and another for
decryption

 Hash Functions: Uses a mathematical transformation to irreversibly "encrypt" information

FIGURE 1: Three types of cryptography: secret-key, public key, and hash function.

Secret Key Cryptography

With secret key cryptography, a single key is used for both encryption and decryption. As
shown in Figure 1A, the sender uses the key (or some set of rules) to encrypt the plaintext
and sends the ciphertext to the receiver. The receiver applies the same key (or ruleset) to
decrypt the message and recover the plaintext. Because a single key is used for both
functions, secret key cryptography is also called symmetric encryption.

With this form of cryptography, it is obvious that the key must be known to both the
sender and the receiver; that, in fact, is the secret. The biggest difficulty with this
approach, of course, is the distribution of the key.

Secret key cryptography schemes are generally categorized as being either stream ciphers or block ciphers. Stream ciphers operate on a single bit (byte or computer word) at a time and implement some form of feedback mechanism so that the key is constantly changing. A block cipher is so-called because the scheme encrypts one block of data at a time using the same key on each block. In general, the same plaintext block will always encrypt to the same ciphertext when using the same key in a block cipher, whereas the same plaintext will encrypt to different ciphertext in a stream cipher.
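The idea that one shared key both encrypts and decrypts can be illustrated with a toy synchronous stream cipher. This is an illustration only, not a vetted cipher; the keystream construction is an assumption made for the sketch:

```python
import hashlib
from itertools import count

def keystream(key):
    """Toy synchronous keystream: SHA-256 of the shared key plus a
    counter, yielded byte by byte. (Illustration only.)"""
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_stream(key, data):
    # XOR is its own inverse, so the same call both encrypts and
    # decrypts: symmetric encryption with a single shared key.
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

ciphertext = xor_stream(b"shared-secret", b"attack at dawn")
recovered = xor_stream(b"shared-secret", ciphertext)
```

Both parties must hold the same key, which is exactly the key-distribution difficulty noted above.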

Stream ciphers come in several flavors but two are worth mentioning here. Self-
synchronizing stream ciphers calculate each bit in the keystream as a function of the
previous n bits in the keystream. It is termed "self-synchronizing" because the decryption
process can stay synchronized with the encryption process merely by knowing how far
into the n-bit keystream it is. One problem is error propagation; a garbled bit in
transmission will result in n garbled bits at the receiving side. Synchronous stream
ciphers generate the keystream in a fashion independent of the message stream but by
using the same keystream generation function at sender and receiver. While stream
ciphers do not propagate transmission errors, they are, by their nature, periodic so that the
keystream will eventually repeat.

Block ciphers can operate in one of several modes; the following four are the most
important:

 Electronic Codebook (ECB) mode is the simplest, most obvious application: the
secret key is used to encrypt the plaintext block to form a ciphertext block. Two
identical plaintext blocks, then, will always generate the same ciphertext block.
Although this is the most common mode of block ciphers, it is susceptible to a
variety of brute-force attacks.

 Cipher Block Chaining (CBC) mode adds a feedback mechanism to the encryption scheme. In CBC, the plaintext is exclusively-ORed (XORed) with the previous ciphertext block prior to encryption. In this mode, two identical blocks of plaintext never encrypt to the same ciphertext.

 Cipher Feedback (CFB) mode is a block cipher implementation as a self-synchronizing stream cipher. CFB mode allows data to be encrypted in units smaller than the block size, which might be useful in some applications such as
encrypting interactive terminal input. If we were using 1-byte CFB mode, for
example, each incoming character is placed into a shift register the same size as
the block, encrypted, and the block transmitted. At the receiving side, the
ciphertext is decrypted and the extra bits in the block (i.e., everything above and
beyond the one byte) are discarded.

 Output Feedback (OFB) mode is a block cipher implementation conceptually similar to a synchronous stream cipher. OFB prevents the same plaintext block
from generating the same ciphertext block by using an internal feedback
mechanism that is independent of both the plaintext and ciphertext bitstreams.
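The key difference between ECB and CBC described above, identical plaintext blocks leaking patterns in ECB but not in CBC, can be demonstrated with a toy block function. The keyed hash below is a stand-in for a real block cipher (it is not invertible, so it only illustrates ciphertext patterns, not decryption):

```python
import hashlib

BLOCK = 16

def toy_block_encrypt(key, block):
    # Stand-in for a real block cipher: keyed hash truncated to one block.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ecb(key, blocks):
    # ECB: each block is encrypted independently with the same key.
    return [toy_block_encrypt(key, b) for b in blocks]

def cbc(key, iv, blocks):
    # CBC: XOR each plaintext block with the previous ciphertext block
    # (the IV for the first block) before encrypting.
    out, prev = [], iv
    for b in blocks:
        x = bytes(p ^ q for p, q in zip(b, prev))
        prev = toy_block_encrypt(key, x)
        out.append(prev)
    return out

two_same = [b"A" * BLOCK, b"A" * BLOCK]   # two identical plaintext blocks
ecb_ct = ecb(b"key", two_same)
cbc_ct = cbc(b"key", b"\x00" * BLOCK, two_same)
```

With two identical plaintext blocks, ECB produces two identical ciphertext blocks while CBC does not.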

A nice overview of these different modes can be found at progressive-coding.com.

Secret key cryptography algorithms that are in use today include:

 Data Encryption Standard (DES): The most common SKC scheme used today,
DES was designed by IBM in the 1970s and adopted by the National Bureau of
Standards (NBS) [now the National Institute for Standards and Technology
(NIST)] in 1977 for commercial and unclassified government applications. DES
is a block-cipher employing a 56-bit key that operates on 64-bit blocks. DES has a
complex set of rules and transformations that were designed specifically to yield
fast hardware implementations and slow software implementations, although this
latter point is becoming less significant today since the speed of computer
processors is several orders of magnitude faster today than twenty years ago. IBM
also proposed a 112-bit key for DES, which was rejected at the time by the
government; the use of 112-bit keys was revisited in the 1990s, but conversion was never seriously pursued.

DES is defined in American National Standard X3.92 and three Federal Information Processing Standards (FIPS):

 FIPS 46-3: DES



 FIPS 74: Guidelines for Implementing and Using the NBS Data
Encryption Standard

 FIPS 81: DES Modes of Operation

Information about vulnerabilities of DES can be obtained from the Electronic Frontier Foundation.

Two important variants that strengthen DES are:

 Triple-DES (3DES): A variant of DES that employs up to three 56-bit keys and makes three encryption/decryption passes over the block; 3DES is also described in FIPS 46-3 and is the recommended replacement for DES.

 DESX: A variant devised by Ron Rivest. By combining 64 additional key bits with the plaintext prior to encryption, it effectively increases the key length to 120 bits.

More detail about DES, 3DES, and DESX can be found below in Section 5.4.

 Advanced Encryption Standard (AES): In 1997, NIST initiated a very public, 4-1/2 year process to develop a new secure cryptosystem for U.S. government
applications. The result, the Advanced Encryption Standard, became the official
successor to DES in December 2001. AES uses an SKC scheme called Rijndael, a
block cipher designed by Belgian cryptographers Joan Daemen and Vincent
Rijmen. The algorithm can use a variable block length and key length; the latest
specification allowed any combination of key lengths of 128, 192, or 256 bits
and blocks of length 128, 192, or 256 bits. NIST initially selected Rijndael in
October 2000 and formal adoption as the AES standard came in December
2001. FIPS PUB 197 describes a 128-bit block cipher employing a 128-, 192-, or
256-bit key. The AES process and Rijndael algorithm are described in more detail
below in Section 5.9.

 CAST-128/256: CAST-128, described in Request for Comments (RFC) 2144, is a DES-like substitution-permutation crypto algorithm, employing a 128-bit key
operating on a 64-bit block. CAST-256 (RFC 2612) is an extension of CAST-128,
using a 128-bit block size and a variable length (128, 160, 192, 224, or 256 bit)
key. CAST is named for its developers, Carlisle Adams and Stafford Tavares and
is available internationally. CAST-256 was one of the Round 1 algorithms in the
AES process.

 International Data Encryption Algorithm (IDEA): Secret-key cryptosystem written by Xuejia Lai and James Massey in 1992 and patented by Ascom; a 64-bit SKC block cipher using a 128-bit key. Also available internationally.

• Rivest Ciphers (aka Ron's Code): Named for Ron Rivest, a series of SKC algorithms.

• RC1: Designed on paper but never implemented.

• RC2: A 64-bit block cipher using variable-sized keys, designed to replace DES. Its code has not been made public, although many companies have licensed RC2 for use in their products. Described in RFC 2268.

• RC3: Found to be breakable during development.

• RC4: A stream cipher using variable-sized keys; it is widely used in commercial cryptography products, although it can only be exported using keys that are 40 bits or less in length.
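Because RC4's once-secret design is now public knowledge, its keystream generator can be sketched in a few lines of pure Python. This is for illustration only; RC4 has known biases and should not be used in new designs.

```python
def rc4_keystream(key: bytes):
    # Key-scheduling algorithm (KSA): permute the 256-byte state using the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): emit one keystream byte per step.
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def rc4(key: bytes, data: bytes) -> bytes:
    # Stream cipher: XOR the data with the keystream.
    # The same call both encrypts and decrypts.
    ks = rc4_keystream(key)
    return bytes(b ^ next(ks) for b in data)
```

Applying `rc4` twice with the same key returns the original plaintext, which is the defining property of an XOR stream cipher.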

• RC5: A block cipher supporting a variety of block sizes, key sizes, and numbers of encryption passes over the data. Described in RFC 2040.

• RC6: An improvement over RC5, RC6 was one of the AES Round 2 algorithms.

• Blowfish: A symmetric 64-bit block cipher invented by Bruce Schneier; optimized for 32-bit processors with large data caches, it is significantly faster than DES on a Pentium/PowerPC-class machine. Key lengths can vary from 32 to 448 bits. Blowfish, available freely and intended as a substitute for DES or IDEA, is in use in over 80 products.

• Twofish: A 128-bit block cipher using 128-, 192-, or 256-bit keys. Designed to be highly secure and highly flexible, well-suited for large microprocessors, 8-bit smart card microprocessors, and dedicated hardware. Designed by a team led by Bruce Schneier and was one of the Round 2 algorithms in the AES process.

• Camellia: A secret-key block-cipher crypto algorithm developed jointly by Nippon Telegraph and Telephone (NTT) Corp. and Mitsubishi Electric Corporation (MEC) in 2000. Camellia has some characteristics in common with AES: a 128-bit block size, support for 128-, 192-, and 256-bit key lengths, and suitability for both software and hardware implementations on common 32-bit processors as well as 8-bit processors (e.g., smart cards, cryptographic hardware, and embedded systems). Also described in RFC 3713. Camellia's application in IPsec is described in RFC 4312 and its application in OpenPGP in RFC 5581.

• MISTY1: Developed at Mitsubishi Electric Corp., a block cipher using a 128-bit key, 64-bit blocks, and a variable number of rounds. Designed for hardware and software implementations, and resistant to differential and linear cryptanalysis. Described in RFC 2994.

• Secure and Fast Encryption Routine (SAFER): Secret-key crypto scheme designed for implementation in software. Versions have been defined for 40-, 64-, and 128-bit keys.

• KASUMI: A block cipher using a 128-bit key that is part of the Third-Generation Partnership Project (3GPP), formerly known as the Universal Mobile Telecommunications System (UMTS). KASUMI is the intended confidentiality and integrity algorithm for both message content and signaling data for emerging mobile communications systems.

• SEED: A block cipher using 128-bit blocks and 128-bit keys. Developed by the Korea Information Security Agency (KISA) and adopted as a national standard encryption algorithm in South Korea. Also described in RFC 4269.

• ARIA: A 128-bit block cipher employing 128-, 192-, and 256-bit keys. Developed by a large group of researchers from academic institutions, research institutes, and federal agencies in South Korea in 2003, and subsequently named a national standard. Described in RFC 5794.

• CLEFIA: Described in RFC 6114, CLEFIA is a 128-bit block cipher employing key lengths of 128, 192, and 256 bits (which is compatible with AES). The CLEFIA algorithm was first published in 2007 by Sony Corporation. CLEFIA is one of the new-generation lightweight block cipher algorithms designed after AES, offering high performance in software and hardware as well as a lightweight implementation in hardware.

• SMS4: SMS4 is a 128-bit block cipher using 128-bit keys and 32 rounds to process a block. Declassified in 2006, SMS4 is used in the Chinese National Standard for Wireless Local Area Network (LAN) Authentication and Privacy Infrastructure (WAPI). SMS4 had been a proposed cipher for the Institute of Electrical and Electronics Engineers (IEEE) 802.11i standard on security mechanisms for wireless LANs, but has yet to be accepted by the IEEE or the International Organization for Standardization (ISO). SMS4 is described in SMS4 Encryption Algorithm for Wireless Networks (translated and typeset by Whitfield Diffie and George Ledin, 2008) or in the original Chinese.

• Skipjack: SKC scheme proposed for Capstone. Although the details of the algorithm were never made public, Skipjack was a block cipher using an 80-bit key and 32 iteration cycles per 64-bit block.

Public-Key Cryptography

Public-key cryptography has been said to be the most significant new development in cryptography in the last 300-400 years. Modern PKC was first described publicly by Stanford University professor Martin Hellman and graduate student Whitfield Diffie in 1976. Their paper described a two-key cryptosystem in which two parties could engage in secure communication over a non-secure communications channel without having to share a secret key.

PKC depends upon the existence of so-called one-way functions, or mathematical functions that are easy to compute whereas their inverse function is relatively difficult to compute. Let me give you two simple examples:

• Multiplication vs. factorization: Suppose I tell you that I have two numbers, 9 and 16, and that I want to calculate the product; it should take almost no time to calculate the product, 144. Suppose instead that I tell you that I have a number, 144, and I need you to tell me which pair of integers I multiplied together to obtain that number. You will eventually come up with the solution, but whereas calculating the product took milliseconds, factoring will take longer because you first need to find the 8 pairs of integer factors and then determine which one is the correct pair.

• Exponentiation vs. logarithms: Suppose I tell you that I want to take the number 3 to the 6th power; again, it is easy to calculate 3^6 = 729. But if I tell you that I have the number 729 and want you to tell me the two integers that I used, x and y, so that log_x 729 = y, it will take you longer to find all possible solutions and select the pair that I used.
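Both examples can be made concrete with a small brute-force sketch: the forward operations are single multiplications or exponentiations, while the reverse direction is a search over candidates (trivial at this size, but the work grows sharply with the size of the number).

```python
from math import isqrt

def factor_pairs(n: int):
    # All (a, b) with a <= b and a * b == n, found by trial division.
    return [(a, n // a) for a in range(1, isqrt(n) + 1) if n % a == 0]

def power_pairs(n: int, limit: int = 50):
    # All (x, y) with x ** y == n and y >= 2, found by brute force.
    return [(x, y) for x in range(2, limit) for y in range(2, limit)
            if x ** y == n]

print(9 * 16)              # forward: instant -> 144
print(factor_pairs(144))   # reverse: 8 candidate pairs; which one was used?
print(pow(3, 6))           # forward: instant -> 729
print(power_pairs(729))    # reverse: (3, 6), (9, 3), (27, 2)
```

Even after the search finishes, the answer is ambiguous: several candidate pairs fit, and the attacker still has to decide which one was actually used.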

While the examples above are trivial, they do represent two of the functional pairs that
are used with PKC; namely, the ease of multiplication and exponentiation versus the
relative difficulty of factoring and calculating logarithms, respectively. The mathematical
"trick" in PKC is to find a trap door in the one-way function so that the inverse
calculation becomes easy given knowledge of some item of information. (The problem is further exacerbated because the algorithms don't use just any old integers, but very large prime numbers.)

Generic PKC employs two keys that are mathematically related although knowledge of
one key does not allow someone to easily determine the other key. One key is used to
encrypt the plaintext and the other key is used to decrypt the ciphertext. The important
point here is that it does not matter which key is applied first, but that both keys are
required for the process to work (Figure 1B). Because a pair of keys are required, this
approach is also called asymmetric cryptography.

In PKC, one of the keys is designated the public key and may be advertised as widely as
the owner wants. The other key is designated the private key and is never revealed to another party. It is straightforward to send messages under this scheme. Suppose Alice wants to send Bob a message. Alice encrypts some information using Bob's public key; Bob decrypts the ciphertext using his private key. This method could also be used to prove who sent a message; Alice, for example, could encrypt some plaintext with her private key; when Bob decrypts using Alice's public key, he knows that Alice sent the message and Alice cannot deny having sent it (non-repudiation).

Public-key cryptography algorithms that are in use today for key exchange or digital
signatures include:

• RSA: The first, and still most common, PKC implementation, named for the three MIT mathematicians who developed it: Ronald Rivest, Adi Shamir, and Leonard Adleman. RSA today is used in hundreds of software products and can be used for key exchange, digital signatures, or encryption of small blocks of data. RSA uses a variable-size encryption block and a variable-size key. The key pair is derived from a very large number, n, that is the product of two prime numbers chosen according to special rules; these primes may be 100 or more digits in length each, yielding an n with roughly twice as many digits as the prime factors. The public key information includes n and a derivative of one of the factors of n; an attacker cannot determine the prime factors of n (and, therefore, the private key) from this information alone, and that is what makes the RSA algorithm so secure. (Some descriptions of PKC erroneously state that RSA's safety is due to the difficulty in factoring large prime numbers. In fact, large prime numbers, like small prime numbers, have only two factors!) The ability of computers to factor large numbers, and therefore attack schemes such as RSA, is rapidly improving, and systems today can find the prime factors of numbers with more than 200 digits. Nevertheless, if a large number is created from two prime factors that are roughly the same size, there is no known factorization algorithm that will solve the problem in a reasonable amount of time; a 2005 test to factor a 200-digit number took 1.5 years and over 50 years of compute time (see the Wikipedia article on integer factorization). Regardless, one presumed protection of RSA is that users can easily increase the key size to always stay ahead of the computer processing curve. As an aside, the patent for RSA expired in September 2000, which does not appear to have affected RSA's popularity one way or the other. A detailed example of RSA is presented below in Section 5.3.
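The trapdoor can be traced end-to-end with deliberately tiny primes. Real RSA primes are hundreds of digits long; these values exist only to make the arithmetic visible (a sketch, not a usable implementation — among other things, it omits padding entirely).

```python
# Key generation with toy primes.
p, q = 61, 53
n = p * q                  # 3233: the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, chosen coprime to phi
d = pow(e, -1, phi)        # private exponent: (e * d) % phi == 1

message = 65               # a "message" encoded as an integer < n

# Encrypt with the public key (n, e); decrypt with the private exponent d.
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message

# The same trapdoor run the other way gives signatures:
# "sign" with d, and anyone holding the public (n, e) can verify.
signature = pow(message, d, n)
assert pow(signature, e, n) == message
```

An attacker who sees only (n, e) must factor n to recover d; with 61 and 53 that takes microseconds, which is exactly why real keys use primes hundreds of digits long.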

• Diffie-Hellman: Described by Diffie and Hellman in their landmark 1976 paper, D-H is used for secret-key key exchange only, and not for authentication or digital signatures. More detail about Diffie-Hellman can be found below in Section 5.2.
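The exchange itself fits in a few lines. The prime below is toy-sized purely for readability; real deployments use moduli of 2048 bits or more (or elliptic-curve groups).

```python
import secrets

# Public parameters, known to everyone including eavesdroppers.
p = 23      # toy prime modulus
g = 5       # generator

a = secrets.randbelow(p - 2) + 1   # Alice's private value, kept secret
b = secrets.randbelow(p - 2) + 1   # Bob's private value, kept secret

A = pow(g, a, p)   # Alice sends A; an eavesdropper sees p, g, A, and B
B = pow(g, b, p)   # Bob sends B

# Each side combines its own secret with the other's public value.
alice_shared = pow(B, a, p)   # (g^b)^a mod p
bob_shared = pow(A, b, p)     # (g^a)^b mod p
assert alice_shared == bob_shared
```

Recovering `a` from `A = g^a mod p` is the discrete logarithm problem, the "hard reverse direction" described earlier; the eavesdropper sees everything exchanged yet cannot compute the shared secret.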

• Digital Signature Algorithm (DSA): The algorithm specified in NIST's Digital Signature Standard (DSS); provides digital signature capability for the authentication of messages.

• ElGamal: Designed by Taher Elgamal, a PKC system similar to Diffie-Hellman and used for key exchange.

• Elliptic Curve Cryptography (ECC): A PKC algorithm based upon elliptic curves. ECC can offer levels of security comparable to RSA and other PKC methods using much smaller keys. It was designed for devices with limited compute power and/or memory, such as smartcards and PDAs. More detail about ECC can be found below in Section 5.8. Other references include the "Importance of ECC" Web page and the "Online Elliptic Curve Cryptography Tutorial," both from Certicom. See also RFC 6090 for a review of fundamental ECC algorithms.

• Public-Key Cryptography Standards (PKCS): A set of interoperable standards and guidelines for public-key cryptography, designed by RSA Data Security Inc.

• PKCS #1: RSA Cryptography Standard (also RFC 3447)
• PKCS #2: Incorporated into PKCS #1.
• PKCS #3: Diffie-Hellman Key-Agreement Standard
• PKCS #4: Incorporated into PKCS #1.
• PKCS #5: Password-Based Cryptography Standard (PKCS #5 V2.0 is also RFC 2898)
• PKCS #6: Extended-Certificate Syntax Standard (being phased out in favor of X.509v3)
• PKCS #7: Cryptographic Message Syntax Standard (also RFC 2315)
• PKCS #8: Private-Key Information Syntax Standard (also RFC 5208)
• PKCS #9: Selected Attribute Types (also RFC 2985)
• PKCS #10: Certification Request Syntax Standard (also RFC 2986)
• PKCS #11: Cryptographic Token Interface Standard
• PKCS #12: Personal Information Exchange Syntax Standard
• PKCS #13: Elliptic Curve Cryptography Standard
• PKCS #14: Pseudorandom Number Generation Standard (no longer available)
• PKCS #15: Cryptographic Token Information Format Standard

• Cramer-Shoup: A public-key cryptosystem proposed by R. Cramer and V. Shoup of IBM in 1998.

• Key Exchange Algorithm (KEA): A variation on Diffie-Hellman; proposed as the key exchange method for Capstone.

• LUC: A public-key cryptosystem designed by P.J. Smith and based on Lucas sequences. Can be used for encryption and signatures, using integer factoring.

For additional information on PKC algorithms, see "Public-Key Encryption," Chapter 8 in Handbook of Applied Cryptography, by A. Menezes, P. van Oorschot, and S. Vanstone (CRC Press, 1996).

A digression: Who invented PKC? I tried to be careful in the first paragraph of this
section to state that Diffie and Hellman "first described publicly" a PKC scheme.
Although I have categorized PKC as a two-key system, that has been merely for
convenience; the real criterion for a PKC scheme is that it allows two parties to exchange a
secret even though the communication with the shared secret might be overheard. There
seems to be no question that Diffie and Hellman were first to publish; their method is
described in the classic paper, "New Directions in Cryptography," published in the
November 1976 issue of IEEE Transactions on Information Theory. As shown below,
Diffie-Hellman uses the idea that finding logarithms is relatively harder than
exponentiation. And, indeed, it is the precursor to modern PKC which does employ two
keys. Rivest, Shamir, and Adleman described an implementation that extended this idea
in their paper "A Method for Obtaining Digital Signatures and Public-Key
Cryptosystems," published in the February 1978 issue of the Communications of the ACM
(CACM). Their method, of course, is based upon the relative ease of finding the product
of two large prime numbers compared to finding the prime factors of a large number.

Some sources, though, credit Ralph Merkle with first describing a system that allows two
parties to share a secret although it was not a two-key system, per se. In a Merkle Puzzle, Alice creates a large number of encrypted keys, sends them all to
Bob so that Bob chooses one at random and then lets Alice know which he has selected.
An eavesdropper will see all of the keys but can't learn which key Bob has selected
(because he has encrypted the response with the chosen key). In this case, Eve's effort to
break in is the square of the effort of Bob to choose a key. While this difference may be
small it is often sufficient. Merkle apparently took a computer science course at UC
Berkeley in 1974 and described his method, but had difficulty making people understand
it; frustrated, he dropped the course. Meanwhile, he submitted the paper "Secure
Communication Over Insecure Channels" which was published in the CACM in April
1978; Rivest et al.'s paper even makes reference to it. Merkle's method certainly wasn't
published first, but did he have the idea first?

An interesting question, maybe, but who really knows? For some time, it was a quiet
secret that a team at the UK's Government Communications Headquarters (GCHQ) had
first developed PKC in the early 1970s. Because of the nature of the work, GCHQ kept
the original memos classified. In 1997, however, the GCHQ changed their posture when
they realized that there was nothing to gain by continued silence. Documents show that a
GCHQ mathematician named James Ellis started research into the key distribution
problem in 1969 and that by 1975, Ellis, Clifford Cocks, and Malcolm Williamson had
worked out all of the fundamental details of PKC, yet couldn't talk about their work.
(They were, of course, barred from challenging the RSA patent!) After more than 20
years, Ellis, Cocks, and Williamson have begun to get their due credit.

And the National Security Agency (NSA) claims to have knowledge of this type of
algorithm as early as 1966 but there is no supporting documentation... yet. So this really
was a digression...

Hash Functions

Hash functions, also called message digests and one-way encryption, are algorithms that, in some sense, use no key (Figure 1C). Instead, a fixed-length hash value is computed from the plaintext in a way that makes it impossible to recover either the contents or the length of the plaintext. Hash algorithms are typically used to provide a digital fingerprint of a file's contents, often used to ensure that the file has not been altered by an intruder or virus. Hash functions are also commonly employed by many operating systems to encrypt passwords. Hash functions, then, provide a measure of the integrity of a file.
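The fingerprint property is easy to see with Python's standard hashlib module (SHA-1 is used here only because the surrounding text discusses it; its collision weaknesses are covered later in this section).

```python
import hashlib

data = b"The quick brown fox jumps over the lazy dog"

# A fixed-length fingerprint of arbitrary-length input.
digest = hashlib.sha1(data).hexdigest()
tampered = hashlib.sha1(data + b".").hexdigest()

print(digest)               # 160 bits -> 40 hex characters
assert len(digest) == 40

# Any change to the input, however small, yields a different fingerprint,
# and the original data cannot be recovered from the digest.
assert digest != tampered
```

Comparing stored digests against freshly computed ones is exactly how file-integrity tools detect tampering.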

Hash algorithms that are in common use today include:

• Message Digest (MD) algorithms: A series of byte-oriented algorithms that produce a 128-bit hash value from an arbitrary-length message.

• MD2 (RFC 1319): Designed for systems with limited memory, such as smart cards. (MD2 has been relegated to historical status, per RFC 6149.)

• MD4 (RFC 1320): Developed by Rivest, similar to MD2 but designed specifically for fast processing in software. (MD4 has been relegated to historical status, per RFC 6150.)

• MD5 (RFC 1321): Also developed by Rivest after potential weaknesses were reported in MD4; this scheme is similar to MD4 but is slower because more manipulation is made to the original data. MD5 has been implemented in a large number of products, although several weaknesses in the algorithm were demonstrated by German cryptographer Hans Dobbertin in 1996 ("Cryptanalysis of MD5 Compress").

• Secure Hash Algorithm (SHA): Algorithm for NIST's Secure Hash Standard (SHS). SHA-1 produces a 160-bit hash value and was originally published as FIPS 180-1 and RFC 3174. FIPS 180-2 (aka SHA-2) describes five algorithms in the SHS: SHA-1 plus SHA-224, SHA-256, SHA-384, and SHA-512, which can produce hash values that are 224, 256, 384, or 512 bits in length, respectively. SHA-224, -256, -384, and -512 are also described in RFC 4634.
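The digest lengths encoded in the SHS algorithm names can be checked directly against Python's hashlib implementations:

```python
import hashlib

# Each SHS family member's name states its output length in bits.
for name, bits in [("sha1", 160), ("sha224", 224), ("sha256", 256),
                   ("sha384", 384), ("sha512", 512)]:
    h = hashlib.new(name, b"abc")
    assert h.digest_size * 8 == bits
    print(f"{name:7s} {h.digest_size * 8:3d} bits  {h.hexdigest()[:16]}...")
```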

• RIPEMD: A series of message digests that initially came from the RIPE (RACE Integrity Primitives Evaluation) project. RIPEMD-160 was designed by Hans Dobbertin, Antoon Bosselaers, and Bart Preneel, and optimized for 32-bit processors to replace the then-current 128-bit hash functions. Other versions include RIPEMD-256, RIPEMD-320, and RIPEMD-128.

• HAVAL (HAsh of VAriable Length): Designed by Y. Zheng, J. Pieprzyk and J. Seberry, a hash algorithm with many levels of security. HAVAL can create hash values that are 128, 160, 192, 224, or 256 bits in length.

• Whirlpool: A relatively new hash function, designed by V. Rijmen and P.S.L.M. Barreto. Whirlpool operates on messages less than 2^256 bits in length and produces a message digest of 512 bits. The design of this hash function is very different from that of MD5 and SHA-1, making it immune to the same attacks as on those hashes (see below).

• Tiger: Designed by Ross Anderson and Eli Biham, Tiger is designed to be secure, run efficiently on 64-bit processors, and easily replace MD4, MD5, SHA and SHA-1 in other applications. Tiger/192 produces a 192-bit output and is compatible with 64-bit architectures; Tiger/128 and Tiger/160 produce hashes of length 128 and 160 bits, respectively, to provide compatibility with the other hash functions mentioned above.

(Readers might be interested in HashCalc, a Windows-based program that calculates hash values using a dozen algorithms, including MD5, SHA-1 and several variants, RIPEMD-160, and Tiger. Command line utilities that calculate hash values include sha_verify by Dan Mares [Windows; supports MD5, SHA-1, SHA-2] and md5deep [cross-platform; supports MD5, SHA-1, SHA-256, Tiger, and Whirlpool].)

Hash functions are sometimes misunderstood, and some sources claim that no two files can have the same hash value. This is, in fact, not correct. Consider a hash function that provides a 128-bit hash value. There are, obviously, 2^128 possible hash values. But there are a lot more than 2^128 possible files. Therefore, there have to be multiple files — in fact, an infinite number of files! — that can have the same 128-bit hash value.

The difficulty is finding two files with the same hash! What is, indeed, very hard to do is
to try to create a file that has a given hash value so as to force a hash value collision —
which is the reason that hash functions are used extensively for information security and
computer forensics applications. Alas, researchers in 2004 found that practical collision
attacks could be launched on MD5, SHA-1, and other hash algorithms.
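The pigeonhole argument also explains why collisions become easy as hashes get short. Truncating a strong hash to 16 bits (an artificial toy, not a real algorithm) makes a birthday-style search succeed after only a few hundred inputs:

```python
import hashlib

def tiny_hash(data: bytes) -> bytes:
    # An artificially weak 16-bit "hash": the first two bytes of SHA-256.
    return hashlib.sha256(data).digest()[:2]

seen = {}   # maps tiny_hash value -> first input that produced it
i = 0
while True:
    msg = str(i).encode()
    h = tiny_hash(msg)
    if h in seen:   # pigeonhole: guaranteed within 2**16 + 1 inputs
        print(f"collision after {i + 1} inputs: "
              f"{seen[h]!r} and {msg!r} both hash to {h.hex()}")
        break
    seen[h] = msg
    i += 1
```

With a full 128- or 160-bit output the same search would need on the order of 2^64 or 2^80 trials, which is why the practical attacks mentioned above, which do far better than brute force, were significant news.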

At this time, there is no obvious successor to MD5 and SHA-1 that could be put into use
quickly; there are so many products using these hash functions that it could take many
years to flush out all use of 128- and 160-bit hashes. That said, NIST announced in 2007
their Cryptographic Hash Algorithm Competition to find the next-generation secure
hashing method. Dubbed SHA-3, this new scheme will augment FIPS 180-2. A list of
submissions can be found at The SHA-3 Zoo. The SHA-3 standard may not be available
until 2011 or 2012.

Certain extensions of hash functions are used for a variety of information security and
digital forensics applications, such as:

• Hash libraries are sets of hash values corresponding to known files. A hash library of known good files, for example, might be a set of files known to be part of an operating system, while a hash library of known bad files might be a set of known child pornographic images.

• Rolling hashes refer to a set of hash values that are computed based upon a fixed-length "sliding window" through the input. As an example, a hash value might be computed on bytes 1-10 of a file, then on bytes 2-11, 3-12, 4-13, etc.
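A common way to implement this is the polynomial rolling hash used in Rabin-Karp string matching: each step removes the contribution of the byte leaving the window and adds the byte entering it in constant time. The base and modulus below are illustrative choices, not values from the text.

```python
BASE, MOD = 256, 1_000_003   # illustrative parameters

def rolling_hashes(data: bytes, window: int):
    # Hash of the first window, computed directly.
    h = 0
    for b in data[:window]:
        h = (h * BASE + b) % MOD
    hashes = [h]
    top = pow(BASE, window - 1, MOD)   # weight of the byte leaving the window
    # Slide one byte at a time: drop the leading byte, append the next one.
    for out_b, in_b in zip(data, data[window:]):
        h = ((h - out_b * top) * BASE + in_b) % MOD
        hashes.append(h)
    return hashes

data = b"abcdefghij"
print(rolling_hashes(data, 4))   # one hash per window: bytes 0-3, 1-4, 2-5, ...
```

Each rolling value equals the hash computed from scratch over that window, but the update costs O(1) per step instead of O(window), which is what makes sliding-window scans over large files practical.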

• Fuzzy hashes are an area of intense research; they are hash values designed so that similar inputs produce similar hash values. Fuzzy hashes are used to detect documents, images, or other files that are close to each other with respect to content. See "Fuzzy Hashing" (PDF | PPT) by Jesse Kornblum for a good treatment of this topic.
