Professional Documents
Culture Documents
Q.1 Explain all design issues for several layers in Computer. What is connection – oriented and
connectionless service?
Answer: The various key design issues are present in several layers in computer
networks. The important design issues are:
2. Error Control: There may be erroneous transmission due to several problems during
communication. These are due to problem in communication circuits, physical medium,
due to thermal noise and interference. Many error detecting and error correcting codes are
known, but both ends of the connection must agree on which one being used. In addition,
the receiver must have some mechanism of telling the sender which messages have been
received correctly and which has not.
3. Flow control: If there is a fast sender at one end sending data to a slow receiver, then
there must be flow control mechanism to control the loss of data by slow receivers. There
are several mechanisms used for flow control such as increasing buffer size at receivers,
slow down the fast sender, and so on. Some process will not be in position to accept
arbitrarily long messages. Then, there must be some mechanism to disassembling,
transmitting and then reassembling messages.
Sikkim Manipal University – MI0035
5. Routing: When data has to be transmitted from source to destination, there may be
multiple paths between them. An optimized (shortest) route must be chosen. This
decision is made on the basis of several routing algorithms, which chooses optimized
route to the destination.
Connection oriented service:
The service user first establishes a connection, uses the connection and then releases
the connection. Once the connection is established between source and destination, the
path is fixed. The data transmission takes place through this path established. The order
of the messages sent will be same at the receiver end. Services are reliable and there is no
loss of data. Most of the time, reliable service provides acknowledgement is an overhead
and adds delay.
Connectionless Services:
Therefore, the messages must carry full destination address and each one of these
messages are sent independent of each other. Messages sent will not be delivered at the
Sikkim Manipal University – MI0035
destination in the same order. Thus, grouping and ordering is required at the receiver end,
and the services are not reliable.
Two distinct techniques are used in data communications to transfer data. Each has its
own advantages and disadvantages. They are the connection-oriented method and the
connectionless method:
state information for the systems that they send transmission to or receive
transmission from. A connectionless network provides minimal services.
Figure 1
Connection-oriented methods may be implemented in the data link layers of the protocol
stack and/or in the transport layers of the protocol stack, depending on the physical
connections in place and the services required by the systems that are communicating.
TCP (Transmission Control Protocol) is a connection-oriented transport protocol, while
UDP (User Datagram Protocol) is a connectionless network protocol. Both operate over
IP.
The physical, data link, and network layer protocols have been used to implement
guaranteed data delivery. For example, X.25 packet-switching networks perform
extensive error checking and packet acknowledgment because the services were
originally implemented on poor-quality telephone connections. Today, networks are more
reliable. It is generally believed that the underlying network should do what it does best,
which is deliver data bits as quickly as possible. Therefore, connection-oriented services
are now primarily handled in the transport layer by end systems, not the network. This
allows lower-layer networks to be optimized for speed.
Sikkim Manipal University – MI0035
The Internet is one big connectionless packet network in which all packet deliveries are
handled by IP. However, TCP adds connection-oriented services on top of IP. TCP
provides all the upper-level connection-oriented session requirements to ensure that data
is delivered properly. MPLS is a relatively new connection-oriented networking scheme
for IP networks that sets up fast label-switched paths across routed or layer 2 networks.
A WAN service that uses the connection-oriented model is frame relay. The service
provider sets up PVCs (permanent virtual circuits) through the network as required or
requested by the customer. ATM is another networking technology that uses the
connection-oriented virtual circuit approach.
Layer 7: The application layer ...This is the layer at which communication partners are
identified, quality of service is identified, user authentication and privacy are considered,
and any constraints on data syntax are identified. (This layer is not the application itself,
although some applications may perform application layer functions.)
Sikkim Manipal University – MI0035
Layer 5: The session layer ...This layer sets up, coordinates, and terminates
conversations, exchanges, and dialogs between the applications at each end. It deals with
session and connection coordination.
Layer 4: The transport layer ...This layer manages the end-to-end control (for example,
determining whether all packets have arrived) and error-checking. It ensures complete
data transfer.
Layer 3: The network layer ...This layer handles the routing of the data (sending it in
the right direction to the right destination on outgoing transmissions and receiving
incoming transmissions at the packet level). The network layer does routing and
forwarding.
Layer 2: The data-link layer ...This layer provides synchronization for the physical
level and does bit-stuffing for strings of 1's in excess of 5. It furnishes transmission
protocol knowledge and management.
Layer 1: The physical layer ...This layer conveys the bit stream through the network at
the electrical and mechanical level. It provides the hardware means of sending and
receiving data on a carrier.
The International Organization for Standardization (ISO) developed the OSI reference
model in the early 1980s. OSI is now the de facto standard for developing protocols that
enable computers to communicate. Although not every protocol follows this model, most
new protocols use this layered approach. In addition, when starting to learn about
networking, most instructors will begin with this model to simplify understanding.
Sikkim Manipal University – MI0035
The OSI reference model breaks up the problem of intermachine communication into
seven layers. Each layer is concerned only with talking to its corresponding layer on the
other machine (see Figure 6-1). This means that Layer 5 has to worry only about talking
to Layer 5 on the receiving machine, and not what the actual physical medium might be.
In addition, each layer of the OSI reference model provides services to the layer above it
(Layer 5 to Layer 6, Layer 6 to Layer 7, and so on) and requests certain services from the
layer directly below it (5 to 4, 4 to 3, and so on).
This layered approach enables each layer to handle small pieces of information, make any
necessary changes to the data, and add the necessary functions for that layer before
passing the data along to the next layer. Data becomes less human-like and more
computer-like the further down the OSI reference model it traverses, until it becomes 1s
Sikkim Manipal University – MI0035
and 0s (electrical impulses) at the physical layer. Figure 6-1 shows the OSI reference
model.
The focus of this chapter is to discuss the seven layers (application, presentation, session,
transport, network, data link, and physical). Understanding these layers allows you to
understand how IP routing works and how IP is transported across various media residing
at Layer 1.
The Internet Protocol suite (see Figure 6-1) maps to the corresponding OSI layers. From
the IP Suite figure, you can see how applications (FTP or email) run atop protocols such
as TCP before they are transmitted across some Layer 1 transport mechanism.
Most users are familiar with the application layer. Some well-known applications include
the following:
Web browsing
Word processing
The presentation layer ensures that information sent by the application layer of one
system is readable by the application layer of another system. If necessary, the
presentation layer translates between multiple data formats by using a common data
representation format.
The presentation layer concerns itself not only with the format and representation of
actual user data, but also with data structures used by programs. Therefore, in addition to
actual data format transformation (if necessary), the presentation layer negotiates data
transfer syntax for the application layer.
Encryption
Compression
ASCII EBCDIC
As its name implies, the session layer establishes, manages, and terminates sessions
between applications. Sessions consist of dialogue between two or more presentation
entities (recall that the session layer provides its services to the presentation layer).
The session layer synchronizes dialogue between presentation layer entities and manages
their data exchange. In addition to basic regulation of conversations (sessions), the
session layer offers provisions for data expedition and exception reporting of session-
layer, presentation-layer, and application-layer problems.
The transport layer is responsible for ensuring reliable data transport on an internetwork.
This is accomplished through flow control, error checking (checksum), end-to-end
acknowledgments, retransmissions, and data sequencing.
Some transport layers, such as Transmission Control Protocol (TCP), have mechanisms
for handling congestion. TCP adjusts its retransmission timer, for example, when
congestion or packet loss occurs within a network. TCP slows down the amount of traffic
it sends when congestion is present. Congestion is determined through the lack of
acknowledgments received from the destination node.
The network layer provides for the logical addressing which enables two disparate
systems on different logical networks to determine a possible path to communicate. The
network layer is the layer in which routing protocols reside.
Sikkim Manipal University – MI0035
On the Internet today, IP addressing is by far the most common addressing scheme in
use. Routing protocols such as Enhanced Interior Gateway Routing Protocol (Enhanced
IGRP, or EIGRP), Open Shortest Path First (OSPF), Border Gateway Protocol (BGP),
Intermediary System to Intermediary System (IS-IS), and many others are used to
determine the optimal routes between two logical subnetworks (subnets).
Note
You can switch IP traffic outside its own subnetwork only if you use an IP router.
Packet formatting, addressing networks and hosts, address resolution, and routing
The data link layer provides reliable transport across a physical link. The link layer has its
own addressing scheme. This addressing scheme is concerned with physical connectivity
and can transport frames based upon the data link layer address.
Traditional Ethernet switches switch network traffic based upon the data link layer (Layer
2) address. Switching traffic based on a Layer 2 address is generally known as bridging.
In fact, an Ethernet switch is nothing more than a high-speed bridge with multiple
interfaces.
The physical layer is concerned with creating 1s and 0s on the physical medium with
electrical impulses/voltage changes. Common physical layer communication
specifications include the following:
Sikkim Manipal University – MI0035
There are 3 different transmission modes characterised according to the direction of the
exchanges:
Sikkim Manipal University – MI0035
A simplex connection is a connection in which the data flows in only one direction,
from the transmitter to the receiver. This type of connection is useful if the data do not
need to flow in both directions (for example, from your computer to the printer or
from the mouse to your computer...).
Parallel connection
Parallel connection means simultaneous transmission of N bits. These bits are sent
simultaneously overN different channels (a channel being, for example, a wire, a cable or
any other physical medium). Theparallel connection on PC-type computers generally
requires 10 wires.
Sikkim Manipal University – MI0035
N physical lines: in which case each bit is sent on a physical line (which is why
parallel cables are made up of several wires in a ribbon cable)
one physical line divided into several sub-channels by dividing up the bandwidth. In
this case, each bit is sent at a different frequency...
Since the conductive wires are close to each other in the ribbon cable, interference can
occur (particularly at high speeds) and degrade the signal quality...
Serial connection
In a serial connection, the data are sent one bit at a time over the transmission channel.
However, since most processors process data in parallel, the transmitter needs to
transform incoming parallel data into serial data and the receiver needs to do the
opposite.
The serial-parallel transformation is done in almost the same way using a shift
register. The shift register shifts the register by one position to the left each time a bit
is received, and then transmits the entire register in parallel when it is full:
Given the problems that arise with a parallel-type connection, serial connections are
normally used. However, since a single wire transports the information, the problem is
how to synchronise the transmitter and receiver, in other words, the receiver can not
necessarily distinguish the characters (or more generally the bit sequences) because the
bits are sent one after the other. There are two types of transmission that address this
problem:
In a synchronous connection, the transmitter and receiver are paced by the same
clock. The receiver continuously receives (even when no bits are transmitted) the
information at the same rate the transmitter send it. This is why the transmitter and
receiver are paced at the same speed. In addition, supplementary information is
inserted to guarantee that there are no errors during transmission.
During synchronous transmission, the bits are sent successively with no separation
between each character, so it is necessary to insert synchronisation elements; this is
called character-level synchronisation.
The main disadvantage of synchronous transmission is recognising the data at the
receiver, as there may be differences between the transmitter and receiver clocks. That is
why each data transmission must be sustained long enough for the receiver to distinguish
it. As a result, the transmission speed can not be very high in a synchronous link.
Q.4. define switching. What is the difference between circuit switching and packet
switching?
Packet-switched and circuit-switched networks use two different technologies for sending
messages and data from one point to another.
Each have their advantages and disadvantages depending on what you are trying to do.
In packet-based networks, the message gets broken into small data packets.
These packets are sent out from the computer and they travel around the network
Sikkim Manipal University – MI0035
seeking out the most efficient route to travel as circuits become available. This
does not necessarily mean that they seek out the shortest route.
Each packet is sent with a ‘header address’. This tells it where its final
destination is, so it knows where to go.
The header address also describes the sequence for reassembly at the destination
computer so that the packets are put back into the correct order.
One packet also contains details of how many packets should be arriving so that
the recipient computer knows if one packet has failed to turn up.
If a packet fails to arrive, the recipient computer sends a message back to the
computer which originally sent the data, asking for the missing packet to be
resent.
Packet Switching
Each packet is treated individually by the switching centre and may be sent to the
destination by a totally different route to all the others.
Packet Switching
Advantages:
Security
Disadvantages
Not so good for some types data streams e.g real-time video
streams can lose frames due to the way packets arrive out of
sequence.
Sikkim Manipal University – MI0035
Circuit switching was designed in 1878 in order to send telephone calls down a
dedicated channel. This channel remained open and in use throughout the whole
call and could not be used by any other data or phone calls.
Establish
Transfer
Disconnect
The telephone message is sent in one go, it is not broken up. The message arrives
in the same order that it was originally sent.
The resources remain dedicated to the circuit during the entire data transfer and
the entire message follows the same path.
Sikkim Manipal University – MI0035
With the expanded use of the Internet for voice and video, analysts predict a
gradual shift away from circuit-switched networks.
A circuit-switched network is excellent for data that needs a constant link from
end-to-end. For example real-time video.
Disadvantages:
It was primarily developed for voice traffic rather than data traffic.
It is easier to double the capacity of a packet switched network than a circuit network – a
circuit network is heavily dependent on the number of channel available.
The battle between circuit and packet technologies has been around a long time,
and it is starting to be like the old story of the tortoise and the hare. In this case,
the hare is circuit switching—fast, reliable and smart. The hare starts out fast and
Sikkim Manipal University – MI0035
keeps a steady pace, while the tortoise starts slow but manages to double his
speed every 100 meters.
If the race is longer than 2 km, the power of compounding favours the tortoise.
Q.5. classify Guided medium (wired). Compare fiber optics and copper wire.
Guided media, which are those that provide a conduit from one device to another, include
twisted-pair cable, coaxial cable, and fiber-optic cable.
Guided Transmission Media uses a "cabling" system that guides the data signals along a
specific path. The data signals are bound by the "cabling" system. Guided Media is also
known as Bound Media. Cabling is meant in a generic sense in the previous sentences
and is not meant to be interpreted as copper wire cabling only. Cable is the medium
through which information usually moves from one network device to another.
Twisted pair cable and coaxial cable use metallic (copper) conductors that accept and
transport signals in the form of electric current. Optical fiber is a glass or plastic cable
that accepts and transports signals in the form of light.
1. Open Wire
2. Twisted Pair
3. Coaxial Cable
4. Optical Fiber
Sikkim Manipal University – MI0035
Open Wire is traditionally used to describe the electrical wire strung along power poles.
There is a single wire strung between poles. No shielding or protection from noise
interference is used. We are going to extend the traditional definition of Open Wire to
include any data signal path without shielding or protection from noise interference. This
can include multiconductor cables or single wires. This media is susceptible to a large
degree of noise and interference and consequently not acceptable for data transmission
except for short distances under 20 ft.
Twisted pair cable is least expensive and most widely used. The wires in Twisted Pair
cabling are twisted together in pairs. Each pair would consist of a wire used for the +ve
data signal and a wire used for the -ve data signal. Any noise that appears on one wire of
the pair would occur on the other wire. Because the wires are opposite polarities, they are
180 degrees out of phase When the noise appears on both wires, it cancels or nulls itself
out at the receiving end. Twisted Pair cables are most effectively used in systems that use
a balanced line method of transmission : polar line coding (Manchester Encoding) as
opposed to unipolar line coding (TTL logic).
Physical description
·Twisting decreases the crosstalk interference between adjacent pairs in the cable,
by using different twist length for neighboring pairs.
A twisted pair consists of two conductors (normally copper), each with its own plastic
insulation, twisted together. One of the wire is used to carry signals to the receiver, and
the other is used only a ground reference.
In past, two parallel flat wires were used for communication. However, electromagnetic
interference from devices such as a motor can create noise over those wires.
If the two wires are parallel, the wire closest to the source of the noise gets more
interference and ends up with a higher voltage level than the wire farther away, which
results in an uneven load and a damaged signal. If, however, the two wires are twisted
around each other at regular intervals, each wire is closer to the noise source for half the
time and farther away for the other half. The degree of reduction in noise interference is
determined specifically by the number of turns per foot. Increasing the number of turns
per foot reduces the noise interference. To further improve noise rejection, a foil or wire
braid shield is woven around the twisted pairs.
Twisted pair cable supports both analog and digital signals. TP cable can be either
unshielded TP (UTP) cable or shielded TP (STP) cable. Cables with a shield are called
Shielded Twisted Pair and commonly abbreviated STP. Cables without a shield are called
Unshielded Twisted Pair or UTP. Shielding means metallic material added to cabling to
reduce susceptibility to noise due to electromagnetic interference (EMI).
IBM produced a version of TP cable for its use called STP. STP cable has a metal foil
that encases each pair of insulated conductors. Metal casing used in STP improves the
Sikkim Manipal University – MI0035
Crosstalk is the undesired effect of one circuit (or channel) on another circuit (or
channel). It occurs when one line picks up some of the signal traveling down another line.
Crosstalk effect can be experienced during telephone conversations when one can hear
other conversations in the background.
Twisted-pair cabling with additional shielding to reduce crosstalk and other forms of
electromagnetic interference (EMI). It has an impedance of 150 ohms, has a maximum
length of 90 meters, and is used primarily in networking environments with a high
amount of EMI due to motors, air conditioners, power lines, or other noisy electrical
components. STP cabling is the default type of cabling for IBM Token Ring networks.
STP is more expensive as compared to UTP.
UTP is cheap, flexible, and easy to install. UTP is used in many LAN technologies,
including Ethernet and Token Ring.
In computer networking environments that use twisted-pair cabling, one pair of wires is
typically used for transmitting data while another pair receives data. The twists in the
cabling reduce the effects of crosstalk and make the cabling more resistant to
electromagnetic interference (EMI), which helps maintain a high signal-to-noise ratio for
reliable network communication. Twisted-pair cabling used in Ethernet networking is
usually unshielded twisted-pair (UTP) cabling, while shielded twisted-pair (STP) cabling
is typically used in Token Ring networks. UTP cabling comes in different grades for
different purposes.
The Electronic Industries Association (EIA) has developed standards to classify UTP
cable into seven categories. Categories are determined by cable quality, with CAT 1 as
the lowest and CAT 7 as the highest.
Sikkim Manipal University – MI0035
The quality of UTP may vary from telephone-grade wire to extremely high-speed
cable. The cable has four pairs of wires inside the jacket. Each pair is twisted with a
different number of twists per inch to help eliminate interference from adjacent pairs and
Sikkim Manipal University – MI0035
other electrical devices. The tighter the twisting, the higher the supported transmission
rate and the greater the cost per foot.
The standard connector for unshielded twisted pair cabling is an RJ-45 connector. This is
a plastic connector that looks like a large telephone-style connector. A slot allows the RJ-
45 to be inserted only one way. RJ stands for Registered Jack, implying that the
connector follows a standard borrowed from the telephone industry. This standard
designates which wire goes with each pin inside the connector.
COAXIAL CABLE
Coaxial cable (or coax) carries signals of higher frequency ranges than
twisted-pair cable. Instead of having two wires, coax has a central core
conductor of solid or standard wire (usually copper) enclosed in an
insulating sheath, which is, in turn, encased in an outer conductor of metal foil, braid, or a
combination of the two (also usually copper).
Sikkim Manipal University – MI0035
FIBER-OPTIC CABLE
Fiber-optic is a glass cabling media that sends network signals using light. Fiber-optic
cabling has higher bandwidth capacity than copper cabling, and is used mainly for
high-speed network Asynchronous Transfer Mode (ATM) or Fiber Distributed Data
Interface (FDDI) backbones, long cable runs, and connections to high-performance
workstations. A fiber-optic cable is made of glass or plastic and transmits signals in
the form of light. Light is a form of electromagnetic energy. It travels at its fastest in a
vacuum: 3,00,000 kilometers/sec. The speed of light depends on the density of the
medium through, which it is traveling (the higher the density, the slower the speed).
Light travels in a straight line as long as it is moving through a single uniform
substance. If a ray of light traveling through one substance suddenly enters another
(more or less dense), the ray changes direction. This change is called.
Refraction : The direction in which a light ray is refracted depends on the change in
density encountered. A beam of light moving from a less dense into a denser medium
is bent towards vertical axis.
Sikkim Manipal University – MI0035
When light travels into a denser medium, the angle of incidence is greater than the angle
of refraction; and when light travels into a less dense medium, the angle of incidence is
less than the angle of refraction.
6. Can support low data Moderately high data Very high data
rates. rates. rates.
7. Power loss due to Power loss due to Power loss due
conduction and conduction. to absorption,
radiation. scattering,
dispersion.
8. Short circuit between Short circuit between Short circuit is
two conductors is two conductors is not possible.
possible. possible.
9. Low bandwidth. Moderately high Very high
bandwidth. bandwidth.
2). Communication Satellites,
3). Weather Satellites,
Communication satellite.
Sikkim Manipal University – MI0035
The satellite can have a passive role in communications like bouncing signals from the
Earth back to another location on the Earth; on the other hand, some satellites carry
electronic devices called transponders for receiving, amplifying, and re-broadcasting
signals to the Earth.
Communications satellites are often in geostationary orbit. At the high orbital altitude of
35,800 kilometers, a geostationary satellite orbits the Earth in the same amount of time it
takes the Earth to revolve once. From Earth, therefore, the satellite appears to be
stationary, always above the same area of the Earth. The area to which it can transmit is
called a satellite's footprint. For example, many Canadian communications satellites have
a footprint which covers most of Canada.
Communications satellties can also be in highly elliptical orbits. This type of orbit is
roughly egg-shaped, with the Earth near the top of the egg. In a highly elliptical orbit, the
satellite's velocity changes depending on where it is in its orbital path. When the satellite
is in the part of its orbit that's close to the Earth, it moves faster because the Earth's
gravitational pull is stronger. This means that a communications satellite can be over the
Sikkim Manipal University – MI0035
region of the Earth that it is communicating with for the long part of its orbit. It will only
be out of contact with that region when it quickly zips close by the Earth.
Wheather satellite.
Radiation measurements from the earth's surface and atmosphere give information
on amounts of heat and energy being released from the Earth and the Earth's
atmosphere.
People who fish for a living can find out valuable information about the
temperature of the sea from measurements that satellites make.
Satellites monitor the amount of snow in winter, the movement of ice fields in the
Arctic and Antarctic, and the depth of the ocean.
Some satellites have a water vapour sensor that can measure and describe how
much water vapour is in different parts of the atmosphere.
Satellites can detect volcanic eruptions and the motion of ash clouds.
During the winter, satellites monitor freezing air as it moves south towards
Florida and Texas, allowing weather forecasters to warn growers of upcoming
low temperatures.
stations, stations that measure earthquake and tidal wave conditions, and ships.
This information, sent to the satellite from the ground, is then relayed from the
satellite to a central receiving station back on Earth.
There are two basic types of weather satellites: those in geostationary orbit and those
in polar orbit. Orbiting very high above the Earth, at an altitude of 35,800 kilometres
(the orbital altitude), geostationary satellites orbit the Earth in the same amount of time it
takes the Earth to revolve once. From Earth, therefore, the satellite appears to stay still,
always above the same area of the Earth. This orbit allows the satellite to monitor the
same region all the time. Geostationary satellites usually measure in "real time", meaning
they transmit photographs to the receiving system on the ground as soon as the camera
takes the picture. A series of photographs from these satellites can be displayed in
sequence to produce a movie showing cloud movement. This allows forecasters to watch
the progress of large weather systems such as fronts, storms, and hurricanes. Forecasters
can also find out the wind direction and speed by monitoring cloud movement.
The other basic type of weather satellite is polar orbiting. This type of satellite orbits in a
path that closely follows the Earth's meridian lines, passing over the north and south
poles once each revolution. As the Earth rotates to the east beneath the satellite, each pass
of the satellite monitors a narrow area running from north to south, to the west of the
previous pass. These 'strips' can be pieced together to produce a picture of a larger area.
Polar satellites circle at a much lower altitude at about 850 km. This means that polar
satellites can photograph clouds from closer than the high altitude geostationary satellites.
Polar satellites, therefore, provide more detailed information about violent storms and
cloud systems.
Navigation satellite.
Sikkim Manipal University – MI0035
Satellites for navigation were developed in the late 1950's as a direct result of ships
needing to know exactly where they were at any given time. In the middle of the ocean or
out of sight of land, you can't find out your position accurately just by looking out the
window.
The idea of using satellites for navigation began with the launch of Sputnik 1 on October
4, 1957. Scientists at Johns Hopkins University's Applied Physics Laboratory monitored
that satellite. They noticed that when the transmitted radio frequency was plotted on a
graph, a pattern developed. This pattern was recognizable to scientists, and it is known as
the doppler effect. The doppler effect is an apparent change of radio frequency as
something that emits a signal in the form of waves passes by. Since the satellite was
emitting a signal, scientists were able to show that the doppler curve described the orbit
of the satellite.
Today, most navigation systems use time and distance to determine location. Early on,
scientists recognized the principle that, given the velocity and the time required for a
radio signal to be transmitted between two points, the distance between the two points
can be computed. The calculation must be done precisely, and the clocks in the satellite
and in the ground-based receiver must be telling exactly the same time - they must be
synchronized. If they are, the time it takes for a signal to travel can be measured and then
multiplied by the exact speed of light to obtain the distance between the two positions.
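The time-and-distance principle above reduces to a single multiplication once the clocks are synchronized. A minimal sketch (the 67 ms travel time is an assumed value, roughly that of a GPS-style satellite about 20,000 km up):

```python
C = 299_792_458.0  # exact speed of light, m/s

def distance_from_travel_time(seconds: float) -> float:
    """Distance = signal velocity x measured travel time.
    Assumes satellite and receiver clocks are exactly synchronized."""
    return C * seconds

d = distance_from_travel_time(0.067)  # signal took ~67 ms to arrive
print(f"{d / 1000:.0f} km")  # ≈ 20086 km
```

With distances to several satellites of known position, the receiver can then fix its own location by trilateration.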
Research satellite.
NASA's Voyager 1 spacecraft has entered a new region between our solar system and
interstellar space. Data obtained from Voyager over the last year reveal this new region to
be a kind of cosmic purgatory. In it, the wind of charged particles streaming out from our
sun has calmed, our solar system's magnetic field has piled up, and higher-energy
particles from inside our solar system appear to be leaking out into interstellar space.
"Voyager tells us now that we're in a stagnation region in the outermost layer of the
bubble around our solar system," said Ed Stone, Voyager project scientist at the
California Institute of Technology. "Voyager is showing that what is outside is pushing
back. We shouldn't have long to wait to find out what the space between stars is really
like."
Although Voyager 1 is about 11 billion miles (18 billion kilometers) from the sun, it is
not yet in interstellar space. In the latest data, the direction of the magnetic field lines has
not changed, indicating Voyager is still within the heliosphere, the bubble of charged
particles the sun blows around itself. The data do not reveal exactly when Voyager 1 will
make it past the edge of the solar atmosphere into interstellar space, but suggest it will be
in a few months to a few years.
The latest findings, described today at the American Geophysical Union's fall meeting in
San Francisco, come from Voyager's Low Energy Charged Particle instrument, Cosmic
Ray Subsystem and Magnetometer.
Scientists previously reported the outward speed of the solar wind had diminished to zero
in April 2010, marking the start of the new region. Mission managers rolled the
spacecraft several times this spring and summer to help scientists discern whether the
solar wind was blowing strongly in another direction. It was not. Voyager 1 is plying the
celestial seas in a region similar to Earth's doldrums, where there is very little wind.
During this past year, Voyager's magnetometer also detected a doubling in the intensity
of the magnetic field in the stagnation region. Like cars piling up at a clogged freeway
off-ramp, the increased intensity of the magnetic field shows that inward pressure from
interstellar space is compacting it.
Voyager has been measuring energetic particles that originate from inside and outside our
solar system. Until mid-2010, the intensity of particles originating from inside our solar
system had been holding steady. But during the past year, the intensity of these energetic
particles has been declining, as though they are leaking out into interstellar space. The
particles are now half as abundant as they were during the previous five years.
At the same time, Voyager has detected a 100-fold increase in the intensity of high-
energy electrons from elsewhere in the galaxy diffusing into our solar system from
outside, which is another indication of the approaching boundary.
"We've been using the flow of energetic charged particles at Voyager 1 as a kind of wind
sock to estimate the solar wind velocity," said Rob Decker, a Voyager Low-Energy
Charged Particle Instrument co-investigator at the Johns Hopkins University Applied
Physics Laboratory in Laurel, Md. "We've found that the wind speeds are low in this
region and gust erratically. For the first time, the wind even blows back at us. We are
evidently traveling in completely new territory. Scientists had suggested previously that
there might be a stagnation layer, but we weren't sure it existed until now."
Launched in 1977, Voyager 1 and 2 are in good health. Voyager 2 is 15 billion km away
from the sun.
The Voyager spacecraft were built by NASA's Jet Propulsion Laboratory in Pasadena,
Calif., which continues to operate both. JPL is a division of the California Institute of
Technology. The Voyager missions are a part of the NASA Heliophysics System
Observatory, sponsored by the Heliophysics Division of the Science Mission Directorate
in Washington.
Satellite Applications
Broadband satellites transmit high-speed data and video directly to consumers and
businesses. Markets for broadband services also include interactive TV, wholesale
telecommunications, telephony, and point-of-sale communications, such as credit card
transactions and inventory control.
Direct-Broadcast Services
Direct-broadcast satellites (DBS) transmit signals for direct reception by the general
public, such as satellite television and radio. Satellite signals are sent directly to users
through their own receiving antennas or satellite dishes, in contrast to satellite/cable
systems in which signals are received by a ground station, and re-broadcast to users by
cable.
Environmental Monitoring
These satellites are typically self-contained systems that carry their own communications
systems for distributing the data they gather in the form of reports and other products for
analyzing the condition of the environment. Satellites are particularly useful in this case
because they can provide continuous coverage of very large geographic regions.
Fixed-Satellite Services
Government
Providing X-band satellite communications services to governments is a new commercial
application with substantial growth potential. SS/L has designed and built two X-band
satellites, which will be available for lease to government users in the United States and
Spain, as well as other friendly and allied nations within the satellites' extensive coverage
areas. Government communications use specially allocated frequency bands and
waveforms.
Fast Ethernet, or 100BaseT, is conventional Ethernet but faster, operating at 100 Mbps
instead of 10 Mbps. Fast Ethernet is based
on the proven CSMA/CD Media Access Control (MAC) protocol and can use existing
10BaseT cabling (See Appendix for pinout diagram and table). Data can move from 10
Mbps to 100 Mbps without protocol translation or changes to application and networking
software.
Fast Ethernet maintains CSMA/CD, the Ethernet transmission protocol. However, Fast
Ethernet reduces the duration of time each bit is transmitted by a factor of 10, enabling
the packet speed to increase tenfold from 10 Mbps to 100 Mbps. Data can move between
Ethernet and Fast Ethernet without requiring protocol translation, because Fast Ethernet
also maintains the 10BaseT error control functions as well as the frame format and
length.
Protocol translation involves changes to the frame that typically mean higher latencies
when frames are passed through layer 2 LAN switches.
Fast Ethernet can run over the same variety of media as 10BaseT, including UTP,
shielded twisted-pair (STP), and fiber. The Fast Ethernet specification defines separate
physical sublayers for each media type:
• 100BaseT4 for four pairs of voice- or data-grade Category 3, 4, and 5 UTP wiring
• 100BaseTX for two pairs of data-grade Category 5 UTP and STP wiring
• 100BaseFX for two strands of multimode fiber
The MII layer of 100BaseT couples these physical sublayers to the CSMA/CD MAC
layer (see Figure 1). The MII provides a single interface that can support external
transceivers for any of the 100BaseT physical sublayers. For the physical connection, the
MII is implemented on Fast Ethernet devices such as routers, switches, hubs, and
adapters, and on transceiver devices using a 40-pin connector (See Appendix for pinout
and connector diagrams). Cisco Systems contributed to the MII specification.
Each physical sublayer uses a signaling scheme that is appropriate to its media type.
100BaseT4 uses three pairs of wire for 100-Mbps transmission and the fourth pair for
collision detection. This method lowers the 100BaseT4 signaling to 33 Mbps per pair,
making it suitable for Category 3, 4, and 5 wiring.
100BaseTX uses one pair of wires for transmission (125-MHz frequency operating at 80-
percent efficiency to allow for 4B5B encoding) and the other pair for collision detection
and receive. 100BaseFX uses one fiber for transmission and the other fiber for collision
detection and receive. The 100BaseTX and 100BaseFX physical signaling channels are
based on FDDI physical layers developed and approved by the American National
Standards Institute (ANSI) X3T9.5 committee. 100BaseTX uses the MLT-3 line
encoding signaling scheme, which Cisco developed and contributed to the ANSI
committee as the specification for FDDI over Category 5 UTP. Today MLT-3 also is
used as the signaling scheme for ATM over Category 5 UTP.
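The signaling figures quoted above follow from simple arithmetic, sketched here as a check (the efficiency factor is the 4B5B ratio described in the text):

```python
# 100BaseTX: the 125 Mbaud line rate carries 4B5B-encoded data, where every
# 4 data bits are sent as 5 line bits, so the usable rate is 4/5 of 125.
line_rate_mbaud = 125
data_rate_mbps = line_rate_mbaud * 4 / 5
assert data_rate_mbps == 100  # the nominal Fast Ethernet rate

# 100BaseT4: the same 100 Mbps is spread across 3 transmit pairs, keeping
# the per-pair signaling rate low enough for Category 3 wiring.
per_pair_mbps = 100 / 3
print(f"{data_rate_mbps:.0f} Mbps total, {per_pair_mbps:.1f} Mbps per pair")
```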
Gigabit Ethernet:
Gigabit Ethernet provides an ideal upgrade path for existing Ethernet-based networks. It
can be installed as a backbone network while retaining the existing investment in
Ethernet hubs, switches, and wiring plants. In addition, management tools can be
retained, although network analyzers will require updates to handle the higher speed.
• Switch-to-server links
• Switch-to-switch links
10-Gigabit Ethernet
As with 1-Gigabit Ethernet, 10-Gigabit Ethernet will preserve the 802.3 Ethernet frame
format, as well as minimum and maximum frame sizes. It will support full-duplex
operation only. The topology is star-wired LANs that use point-to-point links, and
structured cabling topologies. 802.3ad link aggregation will also be supported.
The new standard will support new multimedia applications, distributed processing,
imaging, medical, CAD/CAM, and a variety of other applications, many of which cannot even
be perceived today. Most certainly it will be used in service provider data centers and as
part of metropolitan area networks. The technology will also be useful in the SAN
(Storage Area Network) environment. Refer to the following Web sites for more
information.
Q.2. Differentiate between the working of pure ALOHA and slotted ALOHA
ALOHA:
ALOHA is a computer networking system introduced in the early 1970s by
Norman Abramson and his colleagues at the University of Hawaii to solve the channel
allocation problem. On the basis of global time synchronization, ALOHA is divided into
two versions or protocols: Pure ALOHA and Slotted ALOHA.
Pure Aloha:
Pure ALOHA does not require global time synchronization. The basic idea of the pure
ALOHA system is that it allows users to transmit whenever they have data. A sender, just like
other users, can listen to what it is transmitting, and thanks to this feedback the
broadcasting system is able to detect a collision, if any. If a collision is detected, the sender waits a
random period of time and attempts transmission again. The waiting time must not be the
same for all senders, or the same frames will collide and be destroyed over and over. Systems in which
multiple users share a common channel in a way that can lead to conflicts are widely
known as contention systems.
Let "T" be the time needed to transmit one frame on the channel, and define a "frame-time" as a
unit of time equal to T. Let "G" be the mean of the Poisson distribution over
transmission attempts; that is, on average there are G transmission attempts per
frame-time. Let "t" be the time at which the sender wants to send a frame. We want to use
the channel for one frame-time beginning at t, so we need all other stations to refrain
from transmitting during this time. Moreover, we need the other stations to refrain from
transmitting between t-T and t as well, because a frame sent during this interval would
overlap with our frame.
EFFICIENCY OF ALOHA
The vulnerable period for a frame is 2t, if t is the frame time. A frame will not
collide if no other frames are sent within one frame-time of its start, before or after. For
any frame-time, the probability of there being k transmission attempts during that frame-time
is Pk = (G^k)*(e^-G)/k!. If throughput (the number of packets per unit time) is
represented by S, then under any load S = G*Po, where Po is the probability that the frame does
not suffer a collision. A frame does not suffer a collision if no other frames are sent during its
vulnerable period. In t time, Po = e^(-G); in 2t time, Po = e^(-2G), as the mean number of
frames generated in 2t is 2G. From the above, the pure ALOHA throughput is S = G*Po = G*e^(-2G).
Now assume that the sending stations must wait until the beginning of a frame-time (one
frame-time is one time slot) and that arrivals still follow a Poisson distribution, where they are
assumed probabilistically independent. In this case the vulnerable period is just t time
units. The probability that k frames are generated in a frame-time is then
Pk = (G^k)*(e^-G)/k!. In t time, the probability of zero frames is Po = e^(-G), so the
slotted ALOHA throughput becomes:
S = G*Po = G*e^(-G)
Throughput versus offered traffic for pure ALOHA and slotted ALOHA systems, i.e., a plot
of S against G, follows from the formulas S = G*e^(-2G) and S = G*e^(-G).
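The two throughput formulas can be evaluated directly; the short sketch below also checks the well-known maxima (pure ALOHA peaks at G = 0.5 with S ≈ 18.4%, slotted at G = 1 with S ≈ 36.8%):

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """S = G * e^(-2G): vulnerable period is two frame-times."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    """S = G * e^(-G): slotting halves the vulnerable period."""
    return G * math.exp(-G)

print(f"pure max:    {pure_aloha_throughput(0.5):.4f}")  # 1/(2e) ≈ 0.1839
print(f"slotted max: {slotted_aloha_throughput(1.0):.4f}")  # 1/e ≈ 0.3679
```

This makes the comparison asked for in the question concrete: slotted ALOHA doubles the peak channel utilization of pure ALOHA.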
Q.3. Write down the distance vector algorithm. Explain the path vector protocol.
1) Each node estimates the cost from itself to each destination.
2) Each node sends its current cost estimates (its distance vector) to its directly connected neighbors.
3) On receiving cost information from a neighbor, each node updates its routing table accordingly, keeping the cheapest known path to every destination.
Path vector protocol: It is different from both distance vector routing and link state routing. Each entry in the
routing table contains the destination network, the next router, and the complete path to reach the
destination.
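A single distance-vector (Bellman-Ford) update step can be sketched as follows. The table layout and names here are illustrative assumptions, not part of any specific protocol implementation:

```python
# One node merges a neighbor's advertised distance vector into its own table,
# keeping a route only if going via that neighbor is cheaper.

def dv_update(my_table: dict, neighbor_table: dict, link_cost: float) -> bool:
    """Tables map destination -> cost. Returns True if any entry improved."""
    changed = False
    for dest, cost in neighbor_table.items():
        candidate = link_cost + cost          # cost to dest via this neighbor
        if candidate < my_table.get(dest, float("inf")):
            my_table[dest] = candidate        # cheaper path found
            changed = True
    return changed

table_a = {"A": 0, "B": 1}                    # node A's current view
table_b = {"B": 0, "C": 2, "D": 7}            # vector advertised by neighbor B
dv_update(table_a, table_b, link_cost=1)
print(table_a)  # {'A': 0, 'B': 1, 'C': 3, 'D': 8}
```

Repeating this exchange at every node until no table changes converges on the shortest paths.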
Q.4. State the working principle of the TCP segment header and UDP header
TCP segments are sent as internet datagrams. The Internet Protocol header carries several
information fields, including the source and destination host addresses [2]. A TCP header
follows the internet header, supplying information specific to the TCP protocol. This
division allows for the existence of host level protocols other than TCP.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Source Port | Destination Port |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Sequence Number |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Acknowledgment Number |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Data | |U|A|P|R|S|F| |
| Offset| Reserved |R|C|S|S|Y|I| Window |
| | |G|K|H|T|N|N| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Checksum | Urgent Pointer |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Options | Padding |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| data |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 3: TCP Header Format.
Sequence Number: 32 bits
The sequence number of the first data octet in this segment (except when SYN is
present). If SYN is present the sequence number is the initial sequence number (ISN) and
the first data octet is ISN+1.
Acknowledgment Number: 32 bits
If the ACK control bit is set this field contains the value of the next sequence number
the sender of the segment is expecting to receive. Once a connection is established this is
always sent.
Data Offset: 4 bits
The number of 32 bit words in the TCP Header. This indicates where the data begins.
The TCP header (even one including options) is an integral number of 32 bits long.
Reserved: 6 bits
Window: 16 bits
The number of data octets beginning with the one indicated in the acknowledgment
field which the sender of this segment is willing to accept.
Checksum: 16 bits
The checksum field is the 16 bit one's complement of the one's complement sum of all
16 bit words in the header and text. If a segment contains an odd number of header and
text octets to be checksummed, the last octet is padded on the right with zeros to form a
16 bit word for checksum purposes. The pad is not transmitted as part of the segment.
While computing the checksum, the checksum field itself is replaced with zeros.
The checksum also covers a 96 bit pseudo header conceptually prefixed to the TCP
header. This pseudo header contains the Source Address, the Destination Address, the
Protocol, and TCP length.
This gives the TCP protection against misrouted segments. This information is carried
in the Internet Protocol and is transferred across the TCP/Network interface in the
arguments or results of calls by the TCP on the IP.
+--------+--------+--------+--------+
| Source Address |
+--------+--------+--------+--------+
| Destination Address |
+--------+--------+--------+--------+
| zero | PTCL | TCP Length |
+--------+--------+--------+--------+
The TCP Length is the TCP header length plus the data length in octets (this is not an
explicitly transmitted quantity, but is computed), and it does not count the 12 octets of the
pseudo header.
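The checksum rule above (one's complement of the one's complement sum of 16-bit words, with odd-length data zero-padded) can be sketched as a small routine. This is a generic Internet-checksum illustration, not code from RFC 793, and the 8-byte header slice is a made-up example with the checksum field zeroed:

```python
def ones_complement_checksum(data: bytes) -> int:
    """Standard Internet checksum: one's complement of the one's complement
    sum of all 16-bit words; odd-length data is padded with a zero byte."""
    if len(data) % 2:
        data += b"\x00"          # pad is used for summing, not transmitted
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

toy_header = b"\x00\x14\x00\x50\x00\x00\x00\x00"  # checksum field set to zero
print(hex(ones_complement_checksum(toy_header)))  # 0xff9b
```

In real TCP the bytes summed would also include the 96-bit pseudo header and the segment text, exactly as the text describes.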
Urgent Pointer: 16 bits
This field communicates the current value of the urgent pointer as a positive offset
from the sequence number in this segment. The urgent pointer points to the sequence
number of the octet following the urgent data. This field can only be interpreted in
segments with the URG control bit set.
Options: variable
Options may occupy space at the end of the TCP header and are a multiple of 8 bits in
length. All options are included in the checksum. An option may begin on any octet
boundary. There are two cases for the format of an option:
Case 1: A single octet of option-kind.
Case 2: An octet of option-kind, an octet of option-length, and the actual option-data octets.
The option-length counts the two octets of option-kind and option-length as well as the
option-data octets.
Note that the list of options may be shorter than the data offset field might imply. The
content of the header beyond the End-of-Option option must be header padding (i.e.,
zero).
End of Option List
+--------+
|00000000|
+--------+
Kind=0
This option code indicates the end of the option list. This might not coincide with
the end of the TCP header according to the Data Offset field. This is used at the end of
all options, not the end of each option, and need only be used if the end of the options
would not otherwise coincide with the end of the TCP header.
No-Operation
+--------+
|00000001|
+--------+
Kind=1
This option code may be used between options, for example, to align the beginning
of a subsequent option on a word boundary.
There is no guarantee that senders will use this option, so receivers must be prepared
to process options even if they do not begin on a word boundary.
Maximum Segment Size
+--------+--------+---------+--------+
|00000010|00000100| max seg size |
+--------+--------+---------+--------+
Kind=2 Length=4
If this option is present, then it communicates the maximum receive segment size
at the TCP which sends this segment.
This field must only be sent in the initial connection request (i.e., in segments with
the SYN control bit set). If this option is not used, any segment size is allowed.
Padding: variable
The TCP header padding is used to ensure that the TCP header ends and data begins on
a 32 bit boundary. The padding is composed of zeros.
The User Datagram Protocol (UDP) is a transport layer protocol defined for use with
the IP network layer protocol. It is defined by RFC 768, written by Jon Postel. It
provides a best-effort datagram service to an End System (IP host).
The service provided by UDP is an unreliable service that provides no guarantees for
delivery and no protection from duplication (e.g. if this arises due to software errors
within an Intermediate System (IS)). The simplicity of UDP reduces the overhead from
using the protocol and the services may be adequate in many cases.
One increasingly popular use of UDP is as a tunneling protocol, where a tunnel endpoint
encapsulates the packets of another protocol inside UDP datagrams and transmits them to
another tunnel endpoint, which decapsulates the UDP datagrams and forwards the
original packets contained in the payload. Tunnels establish virtual links that appear to
directly connect locations that are distant in the physical Internet topology, and can be
used to create virtual (private) networks. Using UDP as a tunneling protocol is attractive
when the payload protocol is not supported by middleboxes that may exist along the path,
because many middleboxes support UDP transmissions.
UDP does not provide any communications security. Applications that need to protect
their communications against eavesdropping, tampering, or message forgery therefore
need to separately provide security services using additional protocol mechanisms.
Protocol Header
A computer may send UDP packets without first establishing a connection to the
recipient. A UDP datagram is carried in a single IP packet and is hence limited to a
maximum payload of 65,507 bytes for IPv4 and 65,527 bytes for IPv6. The transmission
of large IP packets usually requires IP fragmentation. Fragmentation decreases
communication reliability and efficiency and should therefore be avoided.
To transmit a UDP datagram, a computer completes the appropriate fields in the UDP
header (PCI) and forwards the data together with the header for transmission by the IP
network layer.
The UDP protocol header consists of 8 bytes of Protocol Control Information (PCI):
Source Port (UDP packets from a client use this as a service access point
(SAP) to indicate the session on the local client that originated the packet. UDP
packets from a server carry the server SAP in this field)
Destination Port (UDP packets from a client use this as a service access point
(SAP) to indicate the service required from the remote server. UDP packets from
a server carry the client SAP in this field)
Length (the number of bytes comprising the combined UDP header and data)
UDP Checksum (A checksum to verify that the end to end data has not been
corrupted by routers or bridges in the network or by the processing in an end
system. The algorithm to compute the checksum is the Standard Internet
Checksum algorithm. This allows the receiver to verify that it was the intended
destination of the packet, because it covers the IP addresses, port numbers and
protocol number, and it verifies that the packet is not truncated or padded,
because it covers the size field. Therefore, this protects an application against
receiving corrupted payload data in place of, or in addition to, the data that was
sent. In the cases where this check is not required, the value of 0x0000 is placed
in this field, in which case the data is not checked by the receiver.)
Like for other transport protocols, the UDP header and data are not processed
by Intermediate Systems (IS) in the network, and are delivered to the final destination in
the same form as originally transmitted.
At the final destination, the UDP protocol layer receives packets from the IP network
layer. These are checked using the checksum (when >0, this checks correct end-to-end
operation of the network service) and all invalid PDUs are discarded. UDP does not make
any provision for error reporting if the packets are not delivered. Valid data are passed to
the appropriate session layer protocol identified by the source and destination port
numbers (i.e. the session service access points).
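The 8-byte header layout described above (source port, destination port, length, checksum) can be assembled directly with four 16-bit big-endian fields. The port numbers here are arbitrary examples, and the checksum is left at 0, the "unchecked" value permitted for IPv4:

```python
import struct

def build_udp_header(src_port: int, dst_port: int,
                     payload: bytes, checksum: int = 0) -> bytes:
    """8-byte UDP header per RFC 768: source port, destination port,
    length (header + data, in bytes), checksum (0 = unchecked over IPv4)."""
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = build_udp_header(5000, 53, b"hello")
print(len(hdr), struct.unpack("!HHHH", hdr))  # 8 (5000, 53, 13, 0)
```

Prepending this header to the payload and handing the result to the IP layer is all the "transmission" work UDP does.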
UDP and UDP-Lite also may be used for multicast and broadcast, allowing senders to
transmit to multiple receivers.
Using UDP
Application designers are generally aware that UDP does not provide any reliability, e.g.,
it does not retransmit any lost packets. Often, this is a main reason to consider UDP as a
transport. Applications that do require reliable message delivery therefore need to
implement appropriate protocol mechanisms in their applications (e.g. tftp).
UDP's best effort service does not protect against datagram duplication, i.e., an
application may receive multiple copies of the same UDP datagram. Application
designers therefore need to verify that their application gracefully handles datagram
duplication and may need to implement mechanisms to detect duplicates.
The Internet may also significantly delay some packets with respect to others, e.g., due to
routing transients, intermittent connectivity, or mobility. This can cause reordering,
where UDP datagrams arrive at the receiver in an order different from the transmission
order. Applications that require ordered delivery must restore datagram ordering
themselves.
The burden of needing to code all these protocol mechanisms can be avoided by
using TCP!
The number of unassigned Internet addresses is running out, so a new classless scheme
called CIDR is gradually replacing the system based on classes A, B, and C; in the longer
term, the same problem drives the adoption of IPv6.
IP address classes
These IP addresses can further be broken down into classes. These classes are A, B, C, D,
E and their possible ranges can be seen in Figure 2 below.
If you look at the table you may notice something strange. The range of IP address from
Class A to Class B skips the 127.0.0.0-127.255.255.255 range. That is because this range
is reserved for the special addresses called Loopback addresses, which are described
below.
The rest of the classes are allocated to companies and organizations based upon the amount
of IP addresses that they may need. Listed below are descriptions of the IP classes and
the organizations that will typically receive that type of allocation.
Default Network: The special network 0.0.0.0 is generally used for routing.
Class A: From the table above you see that there are 126 class A networks. These
networks consist of 16,777,214 possible IP addresses that can be assigned to devices and
computers. This type of allocation is generally given to very large networks such as
multi-national companies.
Loopback: This is the special 127.0.0.0 network that is reserved as a loopback to your
own computer. These addresses are used for testing and debugging of your programs or
hardware.
Class B: This class consists of 16,384 individual networks, each allocation consisting of
65,534 possible IP addresses. These blocks are generally allocated to Internet Service
Providers and large networks, like a college or major hospital.
Class C: There is a total of 2,097,152 Class C networks available, with each network
consisting of 254 usable IP addresses. This type of class is generally given to small to
mid-sized companies.
Class D: The IP addresses in this class are reserved for a service called Multicast.
Class E: The IP addresses in this class are reserved for experimental use.
Broadcast: This is the special network of 255.255.255.255, and is used for broadcasting
messages to the entire network that your computer resides on.
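The classful rules just listed all key off the first octet of the address, which makes them easy to sketch as a lookup function (a pre-CIDR illustration, not production address parsing):

```python
def ip_class(address: str) -> str:
    """Classful category of a dotted-quad IPv4 address, from its first octet."""
    first = int(address.split(".")[0])
    if first == 0:
        return "default network"
    if first == 127:
        return "loopback"
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D (multicast)"
    return "E (experimental)"

print(ip_class("10.1.2.3"), ip_class("172.16.0.1"), ip_class("192.168.1.1"))
# A B C
```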
Private Addresses
There are also blocks of IP addresses that are set aside for internal private use for
computers not directly connected to the Internet. These IP addresses are not supposed to
be routed through the Internet, and most service providers will block the attempt to do so.
These IP addresses are used for internal use by company or home networks that need to
use TCP/IP but do not want to be directly visible on the Internet. These IP ranges
(defined in RFC 1918) are:
10.0.0.0 - 10.255.255.255
172.16.0.0 - 172.31.255.255
192.168.0.0 - 192.168.255.255
The best solution to avoid address conflicts on such a network is to use a service called DHCP that almost
all home routers provide. DHCP, or Dynamic Host Configuration Protocol, is a service
that assigns addresses to devices and computers. You tell the DHCP server what range of
IP addresses you would like it to assign, and then the DHCP server takes the
responsibility of assigning those IP addresses to the various devices and keeping track so
those IP addresses are assigned only once.
Answer: Cryptography is the science of information security. The word is derived from
the Greek kryptos, meaning hidden. Cryptography is closely related to the disciplines
of cryptology and cryptanalysis. Cryptography includes techniques such as microdots,
merging words with images, and other ways to hide information in storage or transit.
However, in today's computer-centric world, cryptography is most often associated with
scrambling plaintext (ordinary text, sometimes referred to as cleartext) into ciphertext (a
process called encryption), then back again (known as decryption). Individuals who
practice this field are known as cryptographers.
There are several ways of classifying cryptographic algorithms. For purposes of this
paper, they will be categorized based on the number of keys that are employed for
encryption and decryption, and further defined by their application and use. The three
types of algorithms that will be discussed are (Figure 1):
Secret Key Cryptography (SKC): Uses a single key for both encryption and
decryption
Public Key Cryptography (PKC): Uses one key for encryption and another for
decryption
Hash Functions: Uses a mathematical transformation to irreversibly "encrypt" information
FIGURE 1: Three types of cryptography: secret-key, public key, and hash function.
With secret key cryptography, a single key is used for both encryption and decryption. As
shown in Figure 1A, the sender uses the key (or some set of rules) to encrypt the plaintext
and sends the ciphertext to the receiver. The receiver applies the same key (or ruleset) to
decrypt the message and recover the plaintext. Because a single key is used for both
functions, secret key cryptography is also called symmetric encryption.
With this form of cryptography, it is obvious that the key must be known to both the
sender and the receiver; that, in fact, is the secret. The biggest difficulty with this
approach, of course, is the distribution of the key.
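The symmetric property itself (one shared key for both directions) can be demonstrated with a toy XOR cipher. This is purely an illustration of "same key encrypts and decrypts" and is NOT a secure cipher:

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR the data with a repeating key.
    Applying the same key a second time recovers the plaintext."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

key = b"secret"
ciphertext = xor_cipher(b"attack at dawn", key)
recovered = xor_cipher(ciphertext, key)   # receiver applies the same key
assert recovered == b"attack at dawn"
```

The key-distribution difficulty the text mentions is visible here too: both parties must somehow already share `key` before any message can be exchanged.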
A stream cipher operates on a single bit, byte, or word at a time and implements some
form of feedback mechanism so that the key is constantly changing. A block cipher is
so called because the scheme encrypts one block of data at a time using the same key on
each block. In general, the same plaintext block will always encrypt to the same
ciphertext when using the same key in a block cipher, whereas the same plaintext will
encrypt to different ciphertext in a stream cipher.
Stream ciphers come in several flavors but two are worth mentioning here. Self-
synchronizing stream ciphers calculate each bit in the keystream as a function of the
previous n bits in the keystream. It is termed "self-synchronizing" because the decryption
process can stay synchronized with the encryption process merely by knowing how far
into the n-bit keystream it is. One problem is error propagation; a garbled bit in
transmission will result in n garbled bits at the receiving side. Synchronous stream
ciphers generate the keystream in a fashion independent of the message stream but by
using the same keystream generation function at sender and receiver. While stream
ciphers do not propagate transmission errors, they are, by their nature, periodic so that the
keystream will eventually repeat.
Block ciphers can operate in one of several modes; the following four are the most
important:
Electronic Codebook (ECB) mode is the simplest, most obvious application: the
secret key is used to encrypt the plaintext block to form a ciphertext block. Two
identical plaintext blocks, then, will always generate the same ciphertext block.
Although this is the most common mode of block ciphers, it is susceptible to a
variety of brute-force attacks.
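The ECB weakness just described can be shown with a toy cipher. The XOR "block cipher" below is a made-up stand-in (a real cipher such as AES in ECB mode leaks patterns in exactly the same way):

```python
# Toy 4-byte "block cipher" run in ECB mode, to show the mode's defining
# weakness: equal plaintext blocks always produce equal ciphertext blocks.

BLOCK = 4
KEY = b"\x13\x37\xca\xfe"

def encrypt_block(block: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(block, KEY))

def ecb_encrypt(plaintext: bytes) -> bytes:
    assert len(plaintext) % BLOCK == 0
    return b"".join(encrypt_block(plaintext[i:i + BLOCK])
                    for i in range(0, len(plaintext), BLOCK))

ct = ecb_encrypt(b"AAAABBBBAAAA")   # first and third blocks are identical
print(ct[0:4] == ct[8:12])          # True - the repetition leaks through
```

Chaining modes such as CBC avoid this by mixing each block with the previous ciphertext block before encryption.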
Cipher Feedback (CFB) mode is a block cipher implementation as a self-synchronizing
stream cipher. CFB mode allows data to be encrypted in units smaller than the block size,
which might be useful in some applications such as encrypting interactive terminal
input. If we were using 1-byte CFB mode, for example, each incoming character is placed
into a shift register the same size as the block, encrypted, and the block transmitted. At
the receiving side, the ciphertext is decrypted and the extra bits in the block (i.e.,
everything above and beyond the one byte) are discarded.
Data Encryption Standard (DES): The most common SKC scheme used today,
DES was designed by IBM in the 1970s and adopted by the National Bureau of
Standards (NBS) [now the National Institute for Standards and Technology
(NIST)] in 1977 for commercial and unclassified government applications. DES
is a block cipher employing a 56-bit key that operates on 64-bit blocks. DES has a
complex set of rules and transformations that were designed specifically to yield
fast hardware implementations and slow software implementations, although this
latter point is becoming less significant since computer processors are several
orders of magnitude faster today than twenty years ago. IBM
also proposed a 112-bit key for DES, which was rejected at the time by the
government; the use of 112-bit keys was revisited in the 1990s, but a wholesale
conversion was never seriously pursued.
FIPS 74: Guidelines for Implementing and Using the NBS Data
Encryption Standard
More detail about DES, 3DES, and DESX can be found below in Section 5.4.
Rivest Ciphers (aka Ron's Code): A series of SKC algorithms named for Ron
Rivest.
RC6: An improvement over RC5, RC6 was one of the AES Round 2
algorithms.
Twofish: A 128-bit block cipher using 128-, 192-, or 256-bit keys. Designed to be
highly secure and highly flexible, well-suited for large microprocessors, 8-bit
smart card microprocessors, and dedicated hardware. Designed by a team led by
Bruce Schneier, Twofish was one of the Round 2 algorithms in the AES process.
KASUMI: A block cipher using a 128-bit key that is part of the Third-Generation
Partnership Project (3GPP), formerly known as the Universal Mobile
Telecommunications System (UMTS). KASUMI is the intended confidentiality
and integrity algorithm for both message content and signaling data for emerging
mobile communications systems.
SEED: A block cipher using 128-bit blocks and 128-bit keys. Developed by the
Korea Information Security Agency (KISA) and adopted as a national standard
encryption algorithm in South Korea. Also described in RFC 4269.
ARIA: A 128-bit block cipher employing 128-, 192-, and 256-bit keys. Developed
by a large group of researchers from academic institutions, research institutes, and
federal agencies in South Korea in 2003, and subsequently named a national
standard. Described in RFC 5794.
Public-Key Cryptography
Multiplication vs. factorization: Suppose I tell you that I have two numbers, 9 and
16, and that I want to calculate the product; it should take almost no time to
arrive at the answer, 144. Suppose instead that I tell you that I have a number,
144, and I need you to tell me which pair of integers I multiplied together to obtain
that number. You will eventually come up with the solution but whereas
calculating the product took milliseconds, factoring will take longer because you
first need to find the 8 pairs of integer factors and then determine which one is the
correct pair.
Exponentiation vs. logarithms: Suppose I tell you that I want to take the number 3
to the 6th power; again, it is easy to calculate 3^6 = 729. But if I tell you that I have
the number 729 and want you to tell me the two integers that I used, x and y, so
that log_x 729 = y, it will take you longer to find all possible solutions and select
the pair that I used.
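The two asymmetries above are easy to check directly; this sketch simply brute-forces the inverse problems at toy scale, which is exactly what makes them slow:

```python
# Forward directions are immediate:
assert 9 * 16 == 144
assert 3 ** 6 == 729

# Inverse direction 1: factoring. Enumerate the factor pairs of 144.
pairs = [(a, 144 // a) for a in range(1, 13) if 144 % a == 0]
assert len(pairs) == 8          # the 8 pairs mentioned in the text
assert (9, 16) in pairs         # ...one of which is the pair actually used

# Inverse direction 2: logarithms. Search for x, y >= 2 with x**y == 729.
solutions = [(x, y) for x in range(2, 730)
                    for y in range(2, 11) if x ** y == 729]
assert (3, 6) in solutions      # alongside (9, 3) and (27, 2)
```

At toy scale the searches finish instantly, but both grow rapidly with the size of the numbers, while the forward multiplication and exponentiation stay cheap.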
While the examples above are trivial, they do represent two of the functional pairs that
are used with PKC; namely, the ease of multiplication and exponentiation versus the
relative difficulty of factoring and calculating logarithms, respectively. The mathematical
"trick" in PKC is to find a trap door in the one-way function so that the inverse
calculation becomes easy given knowledge of some item of information. (The problem is
further exacerbated because the algorithms don't use just any old integers, but very large
prime numbers.)
Generic PKC employs two keys that are mathematically related although knowledge of
one key does not allow someone to easily determine the other key. One key is used to
encrypt the plaintext and the other key is used to decrypt the ciphertext. The important
point here is that it does not matter which key is applied first, but that both keys are
required for the process to work (Figure 1B). Because a pair of keys is required, this
approach is also called asymmetric cryptography.
In PKC, one of the keys is designated the public key and may be advertised as widely as
the owner wants. The other key is designated the private keyand is never revealed to
another party. It is straight forward to send messages under this scheme. Suppose Alice
wants to send Bob a message. Alice encrypts some information using Bob's public key;
Bob decrypts the ciphertext using his private key. This method could be also used to
prove who sent a message; Alice, for example, could encrypt some plaintext with her
private key; when Bob decrypts using Alice's public key, he knows that Alice sent the
message and Alice cannot deny having sent the message (non-repudiation).
Public-key cryptography algorithms that are in use today for key exchange or digital
signatures include:
RSA: The first, and still most common, PKC implementation, named for the three
MIT mathematicians who developed it — Ronald Rivest, Adi Shamir, and
Leonard Adleman. RSA today is used in hundreds of software products and can
be used for key exchange, digital signatures, or encryption of small blocks of data.
RSA uses a variable size encryption block and a variable size key. The key-pair is
derived from a very large number, n, that is the product of two prime numbers
chosen according to special rules; these primes may be 100 or more digits in
length each, yielding an n with roughly twice as many digits as the prime factors.
The public key information includes n and a derivative of one of the factors of n;
an attacker cannot determine the prime factors of n (and, therefore, the private
key) from this information alone and that is what makes the RSA algorithm so
secure. (Some descriptions of PKC erroneously state that RSA's safety is due to
the difficulty in factoring large prime numbers. In fact, large prime numbers, like
small prime numbers, only have two factors!) The ability for computers to factor
large numbers, and therefore attack schemes such as RSA, is rapidly improving
and systems today can find the prime factors of numbers with more than 200
digits. Nevertheless, if a large number is created from two prime factors that are
roughly the same size, there is no known factorization algorithm that will solve
the problem in a reasonable amount of time; a 2005 test to factor a 200-digit
number took 1.5 years and over 50 years of compute time (see the Wikipedia
article on integer factorization.) Regardless, one presumed protection of RSA is
that users can easily increase the key size to always stay ahead of the computer
processing curve. As an aside, the patent for RSA expired in September 2000
which does not appear to have affected RSA's popularity one way or the other. A
detailed example of RSA is presented below in Section 5.3.
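The mechanics can be sketched at toy scale with textbook-sized primes (real keys use primes of 100 or more digits; the tiny values here are purely illustrative):

```python
# Toy RSA key generation from two small primes.
p, q = 61, 53
n = p * q                 # public modulus, 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, chosen coprime to phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e (Python 3.8+)

m = 65                    # a message, which must be smaller than n
c = pow(m, e, n)          # encrypt with the public key (e, n)
assert pow(c, d, n) == m  # decrypt with the private key (d, n)

# Applying the private key first and the public key second also works,
# which is the basis of the digital-signature use mentioned above:
sig = pow(m, d, n)
assert pow(sig, e, n) == m
```

An attacker who sees only (e, n) must recover p and q from n to compute d; at this scale that is trivial, which is why real moduli are made from primes hundreds of digits long.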
Elliptic Curve Cryptography (ECC): A PKC algorithm based upon elliptic curves.
ECC can offer levels of security comparable to RSA and other PKC methods while
using much smaller keys. It was designed for devices with limited compute power and/or
memory, such as smartcards and PDAs. More detail about ECC can be found
below in Section 5.8. Other references include "The Importance of ECC" Web
page and the "Online Elliptic Curve Cryptography Tutorial", both from Certicom.
See also RFC 6090 for a review of fundamental ECC algorithms.
A digression: Who invented PKC? I tried to be careful in the first paragraph of this
section to state that Diffie and Hellman "first described publicly" a PKC scheme.
Although I have categorized PKC as a two-key system, that has been merely for
convenience; the real criteria for a PKC scheme is that it allows two parties to exchange a
secret even though the communication with the shared secret might be overheard. There
seems to be no question that Diffie and Hellman were first to publish; their method is
described in the classic paper, "New Directions in Cryptography," published in the
November 1976 issue of IEEE Transactions on Information Theory. As shown below,
Diffie-Hellman uses the idea that finding logarithms is relatively harder than
exponentiation. And, indeed, it is the precursor to modern PKC which does employ two
keys. Rivest, Shamir, and Adleman described an implementation that extended this idea
in their paper "A Method for Obtaining Digital Signatures and Public-Key
Cryptosystems," published in the February 1978 issue of the Communications of the ACM
(CACM). Their method, of course, is based upon the relative ease of finding the product
of two large prime numbers compared to finding the prime factors of a large number.
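The logarithm asymmetry that Diffie-Hellman exploits can be sketched at toy scale (a real exchange uses a very large prime modulus and large random secret exponents):

```python
# Toy Diffie-Hellman exchange over a small prime group.
p, g = 23, 5              # public: prime modulus and generator

a = 6                     # Alice's secret exponent
b = 15                    # Bob's secret exponent

A = pow(g, a, p)          # Alice sends g^a mod p (here 8)
B = pow(g, b, p)          # Bob sends g^b mod p (here 19)

# Each side raises the other's public value to its own secret exponent;
# both arrive at g^(a*b) mod p without ever transmitting it.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob == 2
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete logarithm problem, which is the "harder than exponentiation" direction described above.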
Some sources, though, credit Ralph Merkle with first describing a system that allows two
parties to share a secret although it was not a two-key system, per se. A Merkle
Puzzle works as follows: Alice creates a large number of encrypted keys, sends them all
to Bob, and Bob chooses one at random and then lets Alice know which he has selected.
An eavesdropper will see all of the keys but can't learn which key Bob has selected
(because Bob has encrypted his response with the chosen key). In this case, Eve's effort to
break in is the square of the effort of Bob to choose a key. While this difference may be
small it is often sufficient. Merkle apparently took a computer science course at UC
Berkeley in 1974 and described his method, but had difficulty making people understand
it; frustrated, he dropped the course. Meanwhile, he submitted the paper "Secure
Communication Over Insecure Channels" which was published in the CACM in April
1978; Rivest et al.'s paper even makes reference to it. Merkle's method certainly wasn't
published first, but did he have the idea first?
An interesting question, maybe, but who really knows? For some time, it was a quiet
secret that a team at the UK's Government Communications Headquarters (GCHQ) had
first developed PKC in the early 1970s. Because of the nature of the work, GCHQ kept
the original memos classified. In 1997, however, the GCHQ changed their posture when
they realized that there was nothing to gain by continued silence. Documents show that a
GCHQ mathematician named James Ellis started research into the key distribution
problem in 1969 and that by 1975, Ellis, Clifford Cocks, and Malcolm Williamson had
worked out all of the fundamental details of PKC, yet couldn't talk about their work.
(They were, of course, barred from challenging the RSA patent!) After more than 20
years, Ellis, Cocks, and Williamson have begun to get their due credit.
And the National Security Agency (NSA) claims to have knowledge of this type of
algorithm as early as 1966 but there is no supporting documentation... yet. So this really
was a digression...
Hash Functions
RIPEMD: A series of message digests that initially came from the RIPE (RACE
Integrity Primitives Evaluation) project. RIPEMD-160 was designed by Hans
Dobbertin, Antoon Bosselaers, and Bart Preneel, and optimized for 32-bit
processors to replace the then-current 128-bit hash functions. Other versions
include RIPEMD-256, RIPEMD-320, and RIPEMD-128.
160, and Tiger. Command line utilities that calculate hash values include sha_verify by
Dan Mares [Windows; supports MD5, SHA-1, SHA-2] and md5deep [cross-platform;
supports MD5, SHA-1, SHA-256, Tiger, and Whirlpool].)
Hash functions are sometimes misunderstood and some sources claim that no two files
can have the same hash value. This is, in fact, not correct. Consider a hash function that
provides a 128-bit hash value. There are, obviously, 2^128 possible hash values. But there
are a lot more than 2^128 possible files. Therefore, there have to be multiple files — in fact,
there have to be an infinite number of files! — that can have the same 128-bit hash value.
The difficulty is finding two files with the same hash! What is, indeed, very hard to do is
to try to create a file that has a given hash value so as to force a hash value collision —
which is the reason that hash functions are used extensively for information security and
computer forensics applications. Alas, researchers in 2004 found that practical collision
attacks could be launched on MD5, SHA-1, and other hash algorithms.
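The pigeonhole argument above can be demonstrated by truncating a real hash to a deliberately tiny size, where a birthday-style search finds a collision almost immediately (with a full 128- or 160-bit output the same search would be astronomically slower):

```python
import hashlib

def truncated_hash(data: bytes, bits: int = 16) -> int:
    """First `bits` bits of SHA-256: a deliberately tiny hash value."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

# With only 2**16 possible values, collisions are guaranteed among
# 2**16 + 1 inputs, and a birthday search finds one far sooner.
seen = {}
collision = None
i = 0
while collision is None:
    h = truncated_hash(str(i).encode())
    if h in seen:
        collision = (seen[h], i)   # two different inputs, same hash value
    seen[h] = i
    i += 1

a, b = collision
assert a != b
assert truncated_hash(str(a).encode()) == truncated_hash(str(b).encode())
```

This is exactly the distinction the paragraph draws: collisions must exist for any fixed-size hash, and the security of a hash function rests on how hard they are to find, not on their nonexistence.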
At this time, there is no obvious successor to MD5 and SHA-1 that could be put into use
quickly; there are so many products using these hash functions that it could take many
years to flush out all use of 128- and 160-bit hashes. That said, NIST announced in 2007
their Cryptographic Hash Algorithm Competition to find the next-generation secure
hashing method. Dubbed SHA-3, this new scheme will augment FIPS 180-2. A list of
submissions can be found at The SHA-3 Zoo. The SHA-3 standard may not be available
until 2011 or 2012.
Certain extensions of hash functions are used for a variety of information security and
digital forensics applications, such as:
Rolling hashes refer to a set of hash values that are computed based upon a fixed-
length "sliding window" through the input. As an example, a hash value might be
computed on bytes 1-10 of a file, then on bytes 2-11, 3-12, 4-13, etc.
Fuzzy hashes are an area of intense research; they are hash values designed so
that similar inputs produce similar hash values. Fuzzy hashes are used to detect documents,
images, or other files that are close to each other with respect to content. See
"Fuzzy Hashing" (PDF | PPT) by Jesse Kornblum for a good treatment of this
topic.
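The sliding-window idea behind rolling hashes can be sketched with a polynomial rolling hash, a common construction in the Rabin-Karp style; the base, modulus, and window size here are arbitrary illustrative choices:

```python
# Polynomial rolling hash over a fixed-length sliding window, in the
# spirit of the bytes 1-10, 2-11, 3-12, ... example above.
BASE, MOD, WINDOW = 257, 1_000_000_007, 10

def rolling_hashes(data: bytes):
    """Yield the hash of each WINDOW-byte window, O(1) work per step."""
    h = 0
    top = pow(BASE, WINDOW - 1, MOD)        # weight of the outgoing byte
    for i, byte in enumerate(data):
        if i >= WINDOW:                     # slide: remove data[i-WINDOW]
            h = (h - data[i - WINDOW] * top) % MOD
        h = (h * BASE + byte) % MOD         # then shift in the new byte
        if i >= WINDOW - 1:
            yield h

data = b"abcdefghijabcdefghij"
hashes = list(rolling_hashes(data))

# One hash per window position:
assert len(hashes) == len(data) - WINDOW + 1
# Identical windows hash identically: bytes 0-9 equal bytes 10-19 here.
assert hashes[0] == hashes[-1]
```

Because each step reuses the previous hash instead of rehashing the whole window, the full set of window hashes costs O(n) rather than O(n * WINDOW), which is what makes rolling hashes practical for scanning large files.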