EC1451 MOBILE AND WIRELESS COMMUNICATION
L T P C
3 1 0 4
UNIT I PRINCIPLES OF WIRELESS COMMUNICATION 10
Digital modulation techniques - Linear modulation techniques - Spread spectrum modulation - Performance of modulation - Multiple access techniques: TDMA, FDMA, CDMA, SDMA - Overview of cellular networks - Cellular concept - Handoff strategies - Path loss - Fading and Doppler effect.
UNIT II WIRELESS PROTOCOLS 11
Issues and challenges of wireless networks - Location management - Resource management - Routing - Power management - Security - Wireless media access techniques: ALOHA, CSMA - Wireless LAN/MAN: IEEE 802.11 (a, b, e, f, g, h, i), Bluetooth - Wireless routing protocols - Mobile IP: IPv4, IPv6 - Wireless TCP - Protocols for 3G & 4G cellular networks: IMT-2000, UMTS, CDMA2000 - Mobility management and handover technologies - All-IP based cellular network.
UNIT III TYPES OF WIRELESS NETWORKS 9
Mobile networks - Ad-hoc networks - Ad-hoc routing - Sensor networks - Peer-to-peer networks - Mobile routing protocols: DSR, AODV - Reactive routing - Location aided routing - Mobility models: entity based, group mobility - Random Waypoint mobility model.
UNIT IV ISSUES AND CHALLENGES 9
Issues and challenges of mobile networks - Security issues - Authentication in mobile applications - Privacy issues - Power management - Energy awareness computing - Mobile IP and Ad-hoc networks - VoIP applications.
UNIT V SIMULATION 6
Study of various network simulators (GloMoSim, NS2, Opnet) - Designing and evaluating the performance of various transport and routing protocols of mobile and wireless networks using a network simulator (any one).
Total: 60
REFERENCES
1. Theodore S. Rappaport, Wireless Communications, Principles and Practice,
Prentice Hall, 1996.
2. Stallings W., Wireless Communications & Networks, Prentice Hall, 2001.
3. Schiller J., Mobile Communications, Addison Wesley, 2000.
4. Lee W. C. Y., Mobile Communications Engineering: Theory and Applications,
2nd Edition, TMH, 1997.
5. Pahlavan K. and Krishnamurthy P., Principles of Wireless Networks, Prentice
Hall, 2002.
6. Black U. D., Mobile and Wireless Networks, PHI, 1996.
7. Charles E. Perkins, Ad Hoc Networking, Addison Wesley, December 2000
8. IEEE Journals and Proceedings
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
SUBJECT CODE: EC1451
SUBJECT NAME: MOBILE AND WIRELESS NETWORKS
YEAR/SEM: IV/VIII

AIM
To introduce the concepts of wireless/mobile communication in a cellular environment; to familiarize students with the various modulation techniques, propagation methods, coding and multiple access techniques used in mobile communication; and to introduce various wireless network systems and standards.

Objectives

It deals with the fundamental cellular radio concepts such as frequency reuse and handoff. This
also demonstrates the principle of trunking efficiency and how trunking and interference issues
between mobile and base stations combine to affect the overall capacity of cellular systems.

It presents different radio propagation models and ways to predict the large-scale effects of radio propagation in many operating environments. It also covers small-scale propagation effects such as fading, time delay spread and Doppler spread, and describes how to measure and model the impact that signal bandwidth and motion have on the instantaneous received signal through the multipath channel.

It provides idea about analog and digital modulation techniques used in wireless communication.
It also deals with the different types of equalization techniques and diversity concepts.

It provides an introduction to speech coding principles and presents the adaptive pulse code modulation and linear predictive coding techniques whose development they have driven. This unit also describes the time, frequency and code division multiple access techniques, as well as more recent multiple access techniques such as space division multiple access.

It deals with second generation and third generation wireless networks and worldwide wireless
standards.
UNIT I Principles of Wireless Communication
Digital Modulation Techniques
Modern mobile communication systems use digital modulation techniques. Advances in VLSI and DSP technology have made digital modulation more cost-effective than analog transmission systems.

Advantages over analog modulation:
    Greater noise immunity
    Robustness to channel impairments
    Easier multiplexing
    Greater security

Furthermore, digital transmissions accommodate digital error control codes, which detect and correct transmission errors, and support complex signal conditioning and processing techniques that improve the performance of the overall communication link.
In digital wireless communication systems, the modulating signal may be represented as a
time sequence of symbols or pulses, where each symbol can take one of m finite states. Each symbol represents n bits of information, where n = log2 m bits/symbol.
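The relation n = log2 m can be checked with a short snippet (an illustrative sketch; the alphabet sizes chosen are arbitrary, not from the text):

```python
import math

# Bits carried per symbol for an m-ary modulation alphabet: n = log2(m).
for m in (2, 4, 8, 16, 64):
    n = int(math.log2(m))
    print(f"m = {m:2d} states -> {n} bits/symbol")
```

For example, a 4-state alphabet (as in QPSK) carries 2 bits per symbol.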

Factors that Influence the Choice of Digital Modulation:

The performance of a modulation scheme is often measured in terms of its power efficiency and bandwidth efficiency.

Modulation:
In electronics and telecommunications, modulation is the process of varying one or more
properties of a high-frequency periodic waveform, called the carrier signal, with a modulating
signal which typically contains information to be transmitted. This is done in a similar fashion to
a musician modulating a tone (a periodic waveform) from a musical instrument by varying its
volume, timing and pitch. The three key parameters of a periodic waveform are its amplitude
("volume"), its phase ("timing") and its frequency ("pitch"). Any of these properties can be
modified in accordance with a low frequency signal to obtain the modulated signal. Typically a
high-frequency sinusoid waveform is used as carrier signal, but a square wave pulse train may
also be used.

In telecommunications, modulation is the process of conveying a message signal, for example a digital bit stream or an analog audio signal, inside another signal that can be physically
transmitted. Modulation of a sine waveform is used to transform a baseband message signal into
a passband signal, for example low-frequency audio signal into a radio-frequency signal (RF
signal). In radio communications, cable TV systems or the public switched telephone network
for instance, electrical signals can only be transferred over a limited passband frequency
spectrum, with specific (non-zero) lower and upper cutoff frequencies. Modulating a sine-wave
carrier makes it possible to keep the frequency content of the transferred signal as close as
possible to the centre frequency (typically the carrier frequency) of the passband.
A device that performs modulation is known as a modulator and a device that performs the
inverse operation of modulation is known as a demodulator (sometimes detector or demod). A
device that can do both operations is a modem (from "modulator-demodulator").

Aim:

The aim of digital modulation is to transfer a digital bit stream over an analog bandpass channel,
for example over the public switched telephone network (where a bandpass filter limits the
frequency range to between 300 and 3400 Hz), or over a limited radio frequency band.

The aim of analog modulation is to transfer an analog baseband (or lowpass) signal, for example
an audio signal or TV signal, over an analog bandpass channel at a different frequency, for
example over a limited radio frequency band or a cable TV network channel.

Analog and digital modulation facilitate frequency division multiplexing (FDM), where several
low pass information signals are transferred simultaneously over the same shared physical
medium, using separate passband channels (several different carrier frequencies).

The aim of digital baseband modulation methods, also known as line coding, is to transfer a
digital bit stream over a baseband channel, typically a non-filtered copper wire such as a serial
bus or a wired local area network.

The aim of pulse modulation methods is to transfer a narrowband analog signal, for example a
phone call over a wideband baseband channel or, in some of the schemes, as a bit stream over
another digital transmission system.

In music synthesizers, modulation may be used to synthesise waveforms with an extensive overtone spectrum using a small number of oscillators. In this case the carrier frequency is typically of the same order as, or much lower than, the modulating waveform. See for example frequency modulation synthesis or ring modulation synthesis.

Analog modulation methods:

In analog modulation, the modulation is applied continuously in response to the analog information signal. Common analog modulation techniques are:

Amplitude modulation (AM): the amplitude of the carrier signal is varied in accordance with the instantaneous amplitude of the modulating signal
    Double-sideband modulation (DSB)
        Double-sideband modulation with carrier (DSB-WC), used on the AM radio broadcasting band
        Double-sideband suppressed-carrier transmission (DSB-SC)
        Double-sideband reduced carrier transmission (DSB-RC)
    Single-sideband modulation (SSB, or SSB-AM)
        SSB with carrier (SSB-WC)
        SSB suppressed-carrier modulation (SSB-SC)
    Vestigial sideband modulation (VSB, or VSB-AM)
    Quadrature amplitude modulation (QAM)

Angle modulation:
    Frequency modulation (FM): the frequency of the carrier signal is varied in accordance with the instantaneous amplitude of the modulating signal
    Phase modulation (PM): the phase shift of the carrier signal is varied in accordance with the instantaneous amplitude of the modulating signal

Digital modulation methods:


In digital modulation, an analog carrier signal is modulated by a discrete signal. Digital
modulation methods can be considered as digital-to-analog conversion, and the corresponding
demodulation or detection as analog-to-digital conversion. The changes in the carrier signal are
chosen from a finite number of M alternative symbols (the modulation alphabet).

Fig.: Schematic of a 4 baud (8 bit/s) data link containing arbitrarily chosen values.

A simple example: A telephone line is designed for transferring audible sounds, for example
tones, and not digital bits (zeros and ones). Computers may however communicate over a
telephone line by means of modems, which are representing the digital bits by tones, called
symbols. If there are four alternative symbols (corresponding to a musical instrument that can
generate four different tones, one at a time), the first symbol may represent the bit sequence 00,
the second 01, the third 10 and the fourth 11. If the modem plays a melody consisting of 1000
tones per second, the symbol rate is 1000 symbols/second, or baud. Since each tone (i.e.,
symbol) represents a message consisting of two digital bits in this example, the bit rate is twice
the symbol rate, i.e. 2000 bits per second. This is similar to the technique used by dialup modems
as opposed to DSL modems.
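The modem example above can be sketched in a few lines (the tone indices and bit stream here are illustrative assumptions, not values from the text):

```python
# Hypothetical 4-symbol modem: each tone (symbol) carries 2 bits.
symbol_map = {"00": 0, "01": 1, "10": 2, "11": 3}  # bit pair -> tone index

bits = "0011100100110110"
# Group the bit stream into 2-bit codewords and look up the tone for each.
symbols = [symbol_map[bits[i:i + 2]] for i in range(0, len(bits), 2)]

symbol_rate = 1000               # symbols/second (baud), as in the example
bit_rate = symbol_rate * 2       # 2 bits per symbol -> 2000 bit/s
print(symbols, bit_rate)
```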

According to one definition of digital signal, the modulated signal is a digital signal, and
according to another definition, the modulation is a form of digital-to-analog conversion. Most
textbooks would consider digital modulation schemes as a form of digital transmission, synonymous with data transmission; very few would consider it as analog transmission.

Fundamental digital modulation methods:

The most fundamental digital modulation techniques are based on keying:

In the case of PSK (phase-shift keying), a finite number of phases are used.
In the case of FSK (frequency-shift keying), a finite number of frequencies are used.
In the case of ASK (amplitude-shift keying), a finite number of amplitudes are used.
In the case of QAM (quadrature amplitude modulation), a finite number of at least two
phases, and at least two amplitudes are used.

In QAM, an inphase signal (the I signal, for example a cosine waveform) and a quadrature phase
signal (the Q signal, for example a sine wave) are amplitude modulated with a finite number of
amplitudes, and summed. It can be seen as a two-channel system, each channel using ASK. The
resulting signal is equivalent to a combination of PSK and ASK.

In all of the above methods, each of these phases, frequencies or amplitudes is assigned a unique pattern of binary bits. Usually, each phase, frequency or amplitude encodes an equal number of bits. This number of bits comprises the symbol that is represented by the particular phase, frequency or amplitude.

If the alphabet consists of M = 2^N alternative symbols, each symbol represents a message consisting of N bits. If the symbol rate (also known as the baud rate) is fS symbols/second (or baud), the data rate is N*fS bit/second.

For example, with an alphabet consisting of 16 alternative symbols, each symbol represents 4
bits. Thus, the data rate is four times the baud rate.

In the case of PSK, ASK or QAM, where the carrier frequency of the modulated signal is
constant, the modulation alphabet is often conveniently represented on a constellation diagram,
showing the amplitude of the I signal at the x-axis, and the amplitude of the Q signal at the y-
axis, for each symbol.
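As a sketch, the constellation points of a QPSK alphabet (M = 4, an illustrative choice) can be computed as follows; each point's real part is the I amplitude and its imaginary part the Q amplitude:

```python
import cmath
import math

M = 4  # alphabet size (QPSK); any power of two could be used
# Place M points evenly around the unit circle, offset by 45 degrees.
points = [cmath.exp(1j * (2 * math.pi * k / M + math.pi / 4)) for k in range(M)]
for k, p in enumerate(points):
    # I (x-axis) is the real part, Q (y-axis) the imaginary part.
    print(f"symbol {k:02b}: I = {p.real:+.3f}, Q = {p.imag:+.3f}")
```

All four points have the same amplitude, which is why QPSK conveys information only in the phase.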
Modulator and detector principles of operation:

PSK and ASK, and sometimes also FSK, are often generated and detected using the principle of
QAM. The I and Q signals can be combined into a complex-valued signal I+jQ (where j is the
imaginary unit). The resulting so called equivalent lowpass signal or equivalent baseband signal
is a complex-valued representation of the real-valued modulated physical signal (the so called
passband signal or RF signal).

These are the general steps used by the modulator to transmit data:

1. Group the incoming data bits into codewords, one for each symbol that will be
transmitted.
2. Map the codewords to attributes, for example amplitudes of the I and Q signals (the
equivalent low pass signal), or frequency or phase values.
3. Adapt pulse shaping or some other filtering to limit the bandwidth and form the spectrum
of the equivalent low pass signal, typically using digital signal processing.
4. Perform digital-to-analog conversion (DAC) of the I and Q signals (since today all of the
above is normally achieved using digital signal processing, DSP).
5. Generate a high-frequency sine wave carrier waveform, and perhaps also a cosine
quadrature component. Carry out the modulation, for example by multiplying the sine
and cosine wave form with the I and Q signals, resulting in that the equivalent low pass
signal is frequency shifted into a modulated passband signal or RF signal. Sometimes this
is achieved using DSP technology, for example direct digital synthesis using a waveform
table, instead of analog signal processing. In that case the above DAC step should be
done after this step.
6. Amplification and analog bandpass filtering to avoid harmonic distortion and periodic
spectrum

At the receiver side, the demodulator typically performs:

1. Bandpass filtering.
2. Automatic gain control, AGC (to compensate for attenuation, for example fading).
3. Frequency shifting of the RF signal to the equivalent baseband I and Q signals, or to an
intermediate frequency (IF) signal, by multiplying the RF signal with a local oscillator
sinewave and cosine wave frequency (see the superheterodyne receiver principle).
4. Sampling and analog-to-digital conversion (ADC) (Sometimes before or instead of the
above point, for example by means of undersampling).
5. Equalization filtering, for example a matched filter, compensation for multipath
propagation, time spreading, phase distortion and frequency selective fading, to avoid
intersymbol interference and symbol distortion.
6. Detection of the amplitudes of the I and Q signals, or the frequency or phase of the IF
signal.
7. Quantization of the amplitudes, frequencies or phases to the nearest allowed symbol
values.
8. Mapping of the quantized amplitudes, frequencies or phases to codewords (bit groups).
9. Parallel-to-serial conversion of the codewords into a bit stream.
10. Pass the resultant bit stream on for further processing such as removal of any error-
correcting codes.
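A minimal sketch of modulator steps 2 and 5, together with the matching receiver mixing and detection steps, is shown below for QPSK; the carrier frequency, sample rate, and the bit-pair-to-I/Q map are illustrative assumptions:

```python
import numpy as np

fc, fs = 8.0, 256.0                  # carrier frequency and sample rate (assumed)
n_per_sym = int(fs)                  # samples per symbol (one second per symbol here)
t = np.arange(n_per_sym) / fs

# Step 2: map 2-bit codewords to I and Q amplitudes (a QPSK-style map).
iq_map = {"00": (1, 1), "01": (-1, 1), "11": (-1, -1), "10": (1, -1)}
bits = ["00", "01", "11", "10"]

# Step 5: modulate -- I multiplies the cosine carrier, Q the sine quadrature component.
tx = np.concatenate([
    iq_map[b][0] * np.cos(2 * np.pi * fc * t)
    - iq_map[b][1] * np.sin(2 * np.pi * fc * t)
    for b in bits])

# Receiver steps 3 and 6: mix back down and average over each symbol interval.
for k, b in enumerate(bits):
    seg = tx[k * n_per_sym:(k + 1) * n_per_sym]
    i_hat = 2 * np.mean(seg * np.cos(2 * np.pi * fc * t))
    q_hat = -2 * np.mean(seg * np.sin(2 * np.pi * fc * t))
    print(b, round(float(i_hat)), round(float(q_hat)))  # recovered I and Q match the map
```

The averaging step works because the cosine and sine carriers are orthogonal over a whole number of carrier cycles.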

As is common to all digital communication systems, the design of both the modulator and
demodulator must be done simultaneously. Digital modulation schemes are possible because the
transmitter-receiver pair have prior knowledge of how data is encoded and represented in the
communications system. In all digital communication systems, both the modulator at the
transmitter and the demodulator at the receiver are structured so that they perform inverse
operations.

Non-coherent modulation methods do not require a receiver reference clock signal that is phase
synchronized with the sender carrier wave. In this case, modulation symbols (rather than bits,
characters, or data packets) are asynchronously transferred. The opposite is coherent modulation.

List of common digital modulation techniques

The most common digital modulation techniques are:

Phase-shift keying (PSK):

    Binary PSK (BPSK), using M=2 symbols
    Quadrature PSK (QPSK), using M=4 symbols
    8PSK, using M=8 symbols
    16PSK, using M=16 symbols
    Differential PSK (DPSK)
    Differential QPSK (DQPSK)
    Offset QPSK (OQPSK)
    π/4-QPSK

Frequency-shift keying (FSK):

    Audio frequency-shift keying (AFSK)
    Multi-frequency shift keying (M-ary FSK or MFSK)
    Dual-tone multi-frequency (DTMF)
    Continuous-phase frequency-shift keying (CPFSK)

Amplitude-shift keying (ASK):

    On-off keying (OOK), the most common ASK form
    M-ary vestigial sideband modulation, for example 8VSB

Quadrature amplitude modulation (QAM), a combination of PSK and ASK

Polar modulation, like QAM a combination of PSK and ASK

Continuous phase modulation (CPM) methods:

    Minimum-shift keying (MSK)
    Gaussian minimum-shift keying (GMSK)

Orthogonal frequency-division multiplexing (OFDM) modulation:

    Discrete multitone (DMT), including adaptive modulation and bit-loading

Wavelet modulation
Trellis coded modulation (TCM), also known as trellis modulation

Spread-spectrum techniques:

    Direct-sequence spread spectrum (DSSS)
    Chirp spread spectrum (CSS); according to IEEE 802.15.4a, CSS uses pseudo-stochastic coding
    Frequency-hopping spread spectrum (FHSS), which applies a special scheme for channel release

MSK and GMSK are particular cases of continuous phase modulation. Indeed, MSK is a
particular case of the sub-family of CPM known as continuous-phase frequency-shift keying
(CPFSK) which is defined by a rectangular frequency pulse (i.e. a linearly increasing phase
pulse) of one symbol-time duration (total response signaling).

OFDM is based on the idea of frequency-division multiplexing (FDM), but is utilized as a digital
modulation scheme. The bit stream is split into several parallel data streams, each transferred
over its own sub-carrier using some conventional digital modulation scheme. The modulated
sub-carriers are summed to form an OFDM signal. OFDM is considered as a modulation
technique rather than a multiplex technique, since it transfers one bit stream over one
communication channel using one sequence of so-called OFDM symbols. OFDM can be
extended to multi-user channel access method in the orthogonal frequency-division multiple
access (OFDMA) and multi-carrier code division multiple access (MC-CDMA) schemes,
allowing several users to share the same physical medium by giving different sub-carriers or
spreading codes to different users.
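The OFDM idea above (parallel sub-carriers combined into one signal) can be sketched with an IFFT/FFT pair; the number of sub-carriers and the QPSK data are illustrative assumptions:

```python
import numpy as np

N = 8                                   # number of sub-carriers (assumed)
rng = np.random.default_rng(0)
# One QPSK data symbol per sub-carrier: (+/-1) + j(+/-1).
data = (2 * rng.integers(0, 2, N) - 1) + 1j * (2 * rng.integers(0, 2, N) - 1)

ofdm_symbol = np.fft.ifft(data)         # transmitter: one IFFT sums the sub-carriers
recovered = np.fft.fft(ofdm_symbol)     # receiver: FFT separates them again

print(np.allclose(recovered, data))     # True: sub-carriers are orthogonal
```

Real systems add a cyclic prefix between OFDM symbols to absorb multipath delay spread, which this sketch omits.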

Of the two kinds of RF power amplifier, switching amplifiers (Class C amplifiers) cost less and use less battery power than linear amplifiers of the same output power. However, they only work with relatively constant-amplitude signals such as angle modulation (FSK or PSK) and CDMA, but not with QAM and OFDM. Nevertheless, even though switching amplifiers are completely unsuitable for normal QAM constellations, the QAM modulation principle is often used to drive switching amplifiers with these FM and other waveforms, and sometimes QAM demodulators are used to receive the signals put out by these switching amplifiers.
Introduction to Modulation Techniques:

Definition:
Modulation is the process by which some characteristic of a carrier wave is varied in accordance with an information-bearing signal.
    Information-bearing signal: the modulating signal
    Output of the modulation process: the modulated signal

Three practical benefits arise from the use of modulation in wireless communication:

1) It is used to shift the spectral content of a message signal so that it lies inside the operating frequency band of the wireless communication channel.
   Ex.: telephonic communication over a cellular radio channel. Voice occupies 300-3100 Hz, while the frequencies assigned to cellular radio channels lie around 900-1800 MHz.
2) It provides a mechanism for putting the information content of a message signal into a form that is less vulnerable to noise or interference.
   The received signal is ordinarily corrupted by noise; FM, for example, improves system performance in the presence of noise.
3) It permits the use of multiple-access techniques, i.e., the simultaneous transmission of several different information-bearing signals over the same channel.

Principal characteristics

Linear and Nonlinear Modulation Processes

Linear modulation:
The input-output relation of the modulator satisfies the principle of superposition:
    The output of the modulator produced by a number of inputs applied simultaneously is equal to the sum of the outputs that result when the inputs are applied one at a time: M(i1 + i2 + ... + in) = M(i1) + M(i2) + ... + M(in)
    If the input is scaled by a certain factor, the output of the modulator is scaled by exactly the same factor.

Nonlinear modulation:
The input-output relation of the modulator does not (partially or fully) satisfy the principle of superposition.

Linearity and nonlinearity are important in both theoretical and practical aspects.

Analog and Digital Modulation Techniques

Analog modulation:
Modulation by an analog signal; the modulated parameter of the modulated signal takes an infinity of values within a certain scale.

Digital modulation:
Modulation by a digital signal; the modulated parameter of the modulated signal takes only a finite number of values.
Ex.: QPSK - 4 values of phase

Amplitude and Angle Modulation Processes
Carrier: c(t) = Ac cos(2π fc t + φ)
Three parameters:
    Ac - Amplitude modulation: AM
    fc - Frequency modulation: FM
    φ  - Phase modulation: PM
Linear Modulation Techniques:
Binary Phase-Shift Keying
The simplest form of digital phase modulation.
Modulating signal = binary data stream: m(t) = Σk bk p(t − kT)
where p(t) is the basic pulse and T is the bit duration
    bk = +1 for binary symbol 1
    bk = −1 for binary symbol 0
Binary symbol 0 → carrier phase θ(t) = 0 radians
Binary symbol 1 → carrier phase θ(t) = π radians

    s(t) = Ac cos(2π fc t)       for binary symbol 0
    s(t) = Ac cos(2π fc t + π)   for binary symbol 1
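The BPSK signal above can be sketched as follows (the carrier frequency, sample rate, and bit pattern are illustrative assumptions); note that a phase of π radians is the same as negating the carrier:

```python
import numpy as np

fc, fs = 4.0, 64.0                 # carrier frequency and sample rate (assumed)
t = np.arange(int(fs)) / fs        # one bit duration, T = 1 s

bits = [0, 1, 1, 0]
# Symbol 0 -> phase 0, symbol 1 -> phase pi, and
# cos(2*pi*fc*t + pi) = -cos(2*pi*fc*t), so the factor is +/-1.
s = np.concatenate([(1 - 2 * b) * np.cos(2 * np.pi * fc * t) for b in bits])

# Coherent detection: correlate each bit interval with the carrier.
ref = np.cos(2 * np.pi * fc * t)
detected = [0 if np.dot(s[k * len(t):(k + 1) * len(t)], ref) > 0 else 1
            for k in range(len(bits))]
print(detected)   # [0, 1, 1, 0]
```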

Spread Spectrum Modulation:


Definition: The process of using a second modulating signal which is independent of the data
and has the effect of increasing the bandwidth of the transmitted signal to well beyond the
bandwidth of the data signal.
NOTE: Spread Spectrum Modulation is distinguished from wideband modulation schemes such
as wideband Frequency Modulation (FM) by noting that in spread spectrum the waveform
causing the spreading is independent of the data being transmitted. This permits the spreading
waveform to be selected based on improving system performance in some way. In IS-95, PN
sequences are selected as the spreading signals since they uniformly spread the signal power over
the available bandwidth and provide other critical advantages such as permitting universal
frequency reuse. See the topic PN Spreading and Despreading.

Application:

IS-95 uses Walsh words and PN sequences to spread the spectrum of its transmitted signals. Since spreading the signal power with Walsh codes is typically far from uniform over the bandwidth of the Walsh code words, PN spreading is used in conjunction with Walsh code spreading. The primary use of Walsh code words on the forward link is to uniquely identify the mobile user. The primary use of Walsh words on the reverse link is to implement the 64-ary orthogonal modulation scheme. The PN sequences spread the power of the signal more uniformly over the available bandwidth. In this way, the available bandwidth is used more efficiently since each portion of the bandwidth is used equally to transmit the signal.

Example: A bipolar 1 volt symbol signal s(t) with a symbol rate of Rs symbols/sec has a
bandwidth of about Rs Hz. Likewise a bipolar PN sequence PN(t) having a chip rate of Rc
chips/sec has a bandwidth of about Rc Hz. Multiplying the symbol stream s(t) by PN(t) produces
a signal which looks very similar to PN(t) and, in fact, has a bandwidth nearly equal to PN(t).
However, the signals s(t), PN(t), and PN(t)s(t) all have unit power since the square of these
signals is always one. Since PN(t)s(t) has the same power as s(t), but a bandwidth Rc/Rs times
greater than the signal, the power spectral density of the transmitted signal PN(t)s(t) averages
Rs/Rc times lower than the power spectral density of s(t). The power of s(t) is thinly spread over
the large bandwidth of PN(t) and it is from this spreading that the modulation technique gets its
name.
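The spreading and despreading described above can be sketched numerically (the rates and random sequences are illustrative assumptions, not IS-95 values):

```python
import numpy as np

rng = np.random.default_rng(1)
Rs, Rc = 4, 32                     # symbol rate and chip rate (assumed), Rc/Rs = 8
spread_factor = Rc // Rs           # chips per symbol

symbols = 2 * rng.integers(0, 2, Rs) - 1   # bipolar +/-1 symbol stream s(t)
pn = 2 * rng.integers(0, 2, Rc) - 1        # bipolar PN chip sequence PN(t)

# Spreading: each symbol multiplies spread_factor chips of the PN sequence.
tx = np.repeat(symbols, spread_factor) * pn

# Power is unchanged: every sample of tx is +/-1, so its square is always 1.
print(np.all(tx ** 2 == 1))        # True

# Despreading: multiply by the same PN sequence and average over each symbol.
despread = (tx * pn).reshape(Rs, spread_factor).mean(axis=1)
print(np.all(despread == symbols)) # True: the original symbols are recovered
```

The transmitted bandwidth grows by the factor Rc/Rs while the power stays constant, which is exactly the "thin spreading" of power described above.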

Cellular handover in mobile phone networks

As the phone user moves from one cell area to another cell whilst a call is in progress, the mobile
station will search for a new channel to attach to in order not to drop the call. Once a new
channel is found, the network will command the mobile unit to switch to the new channel and at
the same time switch the call onto the new channel.

With CDMA, multiple CDMA handsets share a specific radio channel. The signals are separated
by using a pseudonoise code (PN code) specific to each phone. As the user moves from one cell
to another, the handset sets up radio links with multiple cell sites (or sectors of the same site)
simultaneously. This is known as "soft handoff" because, unlike with traditional cellular
technology, there is no one defined point where the phone switches to the new cell.

In IS-95 inter-frequency handovers, and in older analog systems such as NMT, it will typically be impossible to test the target channel directly while communicating. In this case other techniques have to be used, such as pilot beacons in IS-95. This means that there is almost always a brief
break in the communication while searching for the new channel followed by the risk of an
unexpected return to the old channel.

If there is no ongoing communication or the communication can be interrupted, it is possible for the mobile unit to spontaneously move from one cell to another and then notify the base station with the strongest signal.

Multiple Access Techniques For Wireless Communication:

Multiple access schemes are used to allow many mobile users to share
simultaneously a finite amount of radio spectrum.

For high quality communications, this must be done without severe degradation in
the performance of the system.
Multiple Access Techniques:

Packet Radio (PR)
FDMA
TDMA
CDMA
SDMA
Multiple Access (MA) Technologies used in Different Wireless Systems:

    Cellular System                               MA Technique
    AMPS (Advanced Mobile Phone System)           FDMA/FDD
    GSM (Global System for Mobile)                TDMA/FDD
    USDC (U.S. Digital Cellular)                  TDMA/FDD
    JDC (Japanese Digital Cellular)               TDMA/FDD
    DECT (Digital European Cordless Telephone)    FDMA/FDD
    IS-95 (U.S. Narrowband Spread Spectrum)       CDMA/FDD

Frequency Division Multiple Access (FDMA):


Principles Of Operation:

Each user is allocated a unique frequency band or channel. These channels are
assigned on demand to users who request service.

In FDD, the channel has two frequencies: the forward channel and the reverse channel.

During the period of the call, no other user can share the same frequency band.

If the FDMA channel is not in use, then it sits idle and cannot be used by other
users to increase or share capacity. This is a wasted resource.

Properties of FDMA:

The bandwidth of FDMA channels is narrow (30 kHz) since each carrier supports only one call.

ISI is low since the symbol time is large compared to the average delay spread, so no equalization is required.

FDMA systems are simpler than TDMA systems, but modern DSP is changing this factor.

FDMA systems have higher costs:
    cell site system cost, due to the single call per carrier
    costly bandpass filters to eliminate spurious radiation
    duplexers in both the transmitter and receiver, which increase subscriber costs

Number of Channels Supported by an FDMA System:

    N = (Bt − 2Bg) / Bc

where Bt = total spectrum allocation
      Bg = guard band
      Bc = channel bandwidth

Example:
In the US, each cellular carrier is allocated 416 channels:

    Bt = 12.5 MHz, Bg = 10 kHz, Bc = 30 kHz

    N = [(12.5 × 10^6) − 2 × (10 × 10^3)] / (30 × 10^3) = 416
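The channel calculation above can be reproduced directly:

```python
# Channels supported by an FDMA system: N = (Bt - 2*Bg) / Bc
Bt = 12.5e6    # total spectrum allocation (Hz)
Bg = 10e3      # guard band (Hz)
Bc = 30e3      # channel bandwidth (Hz)

N = (Bt - 2 * Bg) / Bc
print(int(N))  # 416 channels, matching the US cellular example
```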
Time Division Multiple Access (TDMA):

Principles Of Operation:

TDMA systems divide the radio spectrum into time slots, and each user is allowed to transmit or receive only in its assigned time slot.

Each user occupies a cyclically repeating time slot. TDMA can allot different numbers of time slots to different users.

TDMA Frame Structure:

    One frame:  | Preamble | Information message | Trail bits |
    The information message is divided into slots:  | Slot 1 | Slot 2 | ... | Slot N |
    Each slot:  | Trail bits | Sync bits | Information bits | Guard bits |

Components of one TDMA frame:

    Preamble - address and synchronization information for base station and subscriber identification

    Guard times - allow synchronization of the receivers between different slots and frames


Principles Of Operation:

TDMA shares a single carrier frequency among several users, where each user makes use of non-overlapping time slots.

Data transmission for users of a TDMA system occurs in discrete bursts; the result is low battery consumption.

The handoff process is simpler, since a mobile is able to listen for other base stations during idle time slots.

Since different slots are used for transmission and reception, duplexers are not required.

Equalization is required, since transmission rates are generally very high compared to FDMA channels.

Efficiency of TDMA:

Frame efficiency:

           number of bits per frame containing transmitted data
    ηf  =  ----------------------------------------------------
                    total number of bits per frame

        =  (1 − bOH/bT) × 100

        =  ((bT − bOH) / bT) × 100

where bOH is the number of overhead bits per frame and bT is the total number of bits per frame.
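The frame-efficiency formula can be evaluated with a short snippet; the bit counts used here are illustrative assumptions, not values from the text:

```python
# TDMA frame efficiency: fraction of frame bits that carry user data.
b_total = 1250       # total bits per frame (assumed)
b_overhead = 125     # preamble, sync, guard and trail bits (assumed)

eta_f = (1 - b_overhead / b_total) * 100
print(f"frame efficiency = {eta_f:.1f}%")   # 90.0%
```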

Overview of cellular networks:

A cellular network is a radio network distributed over land areas called cells, each served by at
least one fixed-location transceiver known as a cell site or base station. When joined together
these cells provide radio coverage over a wide geographic area. This enables a large number of
portable transceivers (e.g., mobile phones, pagers, etc.) to communicate with each other and with
fixed transceivers and telephones anywhere in the network, via base stations, even if some of the
transceivers are moving through more than one cell during transmission.

Cellular networks offer a number of advantages over alternative solutions:

increased capacity
reduced power use
larger coverage area
reduced interference from other signals

An example of a simple non-telephone cellular system is an old taxi driver's radio system where
the taxi company has several transmitters based around a city that can communicate directly with
each taxi.

The concept:

In a cellular radio system, a land area to be supplied with radio service is divided into regular
shaped cells, which can be hexagonal, square, circular or some other irregular shapes, although
hexagonal cells are conventional. Each of these cells is assigned multiple frequencies (f1 - f6)
which have corresponding radio base stations. The group of frequencies can be reused in other
cells, provided that the same frequencies are not reused in adjacent neighboring cells as that
would cause co-channel interference.

The increased capacity in a cellular network, compared with a network with a single transmitter,
comes from the fact that the same radio frequency can be reused in a different area for a
completely different transmission. If there is a single plain transmitter, only one transmission can
be used on any given frequency. Unfortunately, there is inevitably some level of interference
from the signal from the other cells which use the same frequency. This means that, in a standard
FDMA system, there must be at least a one cell gap between cells which reuse the same
frequency.
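This reuse idea is usually quantified with the cluster size N and, for hexagonal cells, the co-channel reuse ratio D/R = sqrt(3N). A small sketch of these standard relations (the channel count below is hypothetical):

```python
import math

def cluster_size(i, j):
    """Hexagonal-geometry cluster size N = i^2 + i*j + j^2 (i, j >= 0)."""
    return i * i + i * j + j * j

def reuse_distance_ratio(n):
    """Co-channel reuse ratio D/R = sqrt(3N): how far apart, in cell radii,
    two cells using the same frequencies must be."""
    return math.sqrt(3 * n)

def channels_per_cell(total_channels, n):
    """Each cell in an N-cell cluster gets 1/N of the available channels."""
    return total_channels // n

# Hypothetical system: 420 duplex channels with 7-cell reuse (i=1, j=2).
n = cluster_size(1, 2)
print(n, channels_per_cell(420, n), round(reuse_distance_ratio(n), 2))
# -> 7 60 4.58
```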

In the simple case of the taxi company, each radio had a manually operated channel selector
knob to tune to different frequencies. As the drivers moved around, they would change from
channel to channel. The drivers knew which frequency covered approximately what area. When
they did not receive a signal from the transmitter, they would try other channels until they found
one that worked. The taxi drivers would only speak one at a time, when invited by the base
station operator (in a sense TDMA).

Example of a cellular network: the mobile phone network:

The most common example of a cellular network is a mobile phone (cell phone) network. A
mobile phone is a portable telephone which receives or makes calls through a cell site (base
station), or transmitting tower. Radio waves are used to transfer signals to and from the cell
phone.

Modern mobile phone networks use cells because radio frequencies are a limited, shared
resource. Cell-sites and handsets change frequency under computer control and use low power
transmitters so that a limited number of radio frequencies can be simultaneously used by many
callers with less interference.

A cellular network is used by the mobile phone operator to achieve both coverage and capacity
for their subscribers. Large geographic areas are split into smaller cells to avoid line-of-sight
signal loss and to support a large number of active phones in that area. All of the cell sites are
connected to telephone exchanges (or switches) , which in turn connect to the public telephone
network.

In cities, each cell site may have a range of up to approximately ½ mile, while in rural areas, the
range could be as much as 5 miles. It is possible that in clear open areas, a user may receive
signals from a cell site 25 miles away.

Since almost all mobile phones use cellular technology, including GSM, CDMA, and AMPS
(analog), the term "cell phone" is in some regions, notably the US, used interchangeably with
"mobile phone". However, satellite phones are mobile phones that do not communicate directly
with a ground-based cellular tower, but may do so indirectly by way of a satellite.

There are a number of different digital cellular technologies, including: Global System for
Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple
Access (CDMA), Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution
(EDGE), 3GSM, Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-
136/TDMA), and Integrated Digital Enhanced Network (iDEN).

Structure of the mobile phone cellular network

A simple view of the cellular mobile-radio network consists of the following:

A network of radio base stations forming the base station subsystem
The core circuit-switched network for handling voice calls and text
A packet-switched network for handling mobile data
The public switched telephone network to connect subscribers to the wider telephony
network

This network is the foundation of the GSM system network. There are many functions that are
performed by this network in order to make sure customers get the desired service including
mobility management, registration, call set up, and handover.

Any phone connects to the network via an RBS (Radio Base Station) at a corner of the
corresponding cell which in turn connects to the Mobile switching center (MSC). The MSC
provides a connection to the public switched telephone network (PSTN). The link from a phone
to the RBS is called an uplink while the other way is termed downlink.

Radio channels effectively use the transmission medium through the use of the following
multiplexing schemes: frequency division multiplex (FDM), time division multiplex (TDM),
code division multiplex (CDM), and space division multiplex (SDM). Corresponding to these
multiplexing schemes are the following access techniques: frequency division multiple access
(FDMA), time division multiple access (TDMA), code division multiple access (CDMA), and
space division multiple access (SDMA).
Fading:

In wireless communications, fading is deviation of the attenuation that a carrier-modulated
telecommunication signal experiences over certain propagation media. The fading may vary with
time, geographical position and/or radio frequency, and is often modelled as a random process. A
fading channel is a communication channel that experiences fading. In wireless systems, fading
may either be due to multipath propagation, referred to as multipath induced fading, or due to
shadowing from obstacles affecting the wave propagation, sometimes referred to as shadow
fading.

Key concepts

The presence of reflectors in the environment surrounding a transmitter and receiver creates
multiple paths that a transmitted signal can traverse. As a result, the receiver sees the
superposition of multiple copies of the transmitted signal, each traversing a different path. Each
signal copy will experience differences in attenuation, delay and phase shift while travelling
from the source to the receiver. This can result in either constructive or destructive interference,
amplifying or attenuating the signal power seen at the receiver. Strong destructive interference is
frequently referred to as a deep fade and may result in temporary failure of communication due
to a severe drop in the channel signal-to-noise ratio.

A common example of multipath fading is the experience of stopping at a traffic light and
hearing an FM broadcast degenerate into static, while the signal is re-acquired if the vehicle
moves only a fraction of a meter. The loss of the broadcast is caused by the vehicle stopping at a
point where the signal experienced severe destructive interference. Cellular phones can also
exhibit similar momentary fades.

Fading channel models are often used to model the effects of electromagnetic transmission of
information over the air in cellular networks and broadcast communication. Fading channel
models are also used in underwater acoustic communications to model the distortion caused by
the water. Mathematically, fading is usually modeled as a time-varying random change in the
amplitude and phase of the transmitted signal.
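The time-varying random change in amplitude and phase mentioned above is often modeled, in the absence of a line-of-sight path, as Rayleigh fading. A minimal sketch of drawing such channel gains (this is one common model among several):

```python
import math
import random

def rayleigh_sample(rng=random):
    """One complex channel gain h = (x + jy)/sqrt(2) with x, y ~ N(0, 1).
    |h| is Rayleigh-distributed, modeling the superposition of many
    independent multipath components with no dominant line-of-sight term."""
    return complex(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) / math.sqrt(2)

random.seed(1)
gains = [rayleigh_sample() for _ in range(10000)]
mean_power = sum(abs(h) ** 2 for h in gains) / len(gains)
print(round(mean_power, 2))  # close to 1.0: the model has unit average power
```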

Slow versus fast fading:

The terms slow and fast fading refer to the rate at which the magnitude and phase change
imposed by the channel on the signal changes. The coherence time is a measure of the minimum
time required for the magnitude change of the channel to become uncorrelated from its previous
value. Alternatively, it may be defined as the maximum time for which the magnitude change of
channel is correlated to its previous value.

Slow fading arises when the coherence time of the channel is large relative to the delay
constraint of the channel. In this regime, the amplitude and phase change imposed by the
channel can be considered roughly constant over the period of use. Slow fading can be
caused by events such as shadowing, where a large obstruction such as a hill or large
building obscures the main signal path between the transmitter and the receiver. The
amplitude change caused by shadowing is often modeled using a log-normal distribution
with a standard deviation according to the log-distance path loss model.

Fast fading occurs when the coherence time of the channel is small relative to the delay
constraint of the channel. In this regime, the amplitude and phase change imposed by the
channel varies considerably over the period of use.

In a fast-fading channel, the transmitter may take advantage of the variations in the channel
conditions using time diversity to help increase robustness of the communication to a temporary
deep fade. Although a deep fade may temporarily erase some of the information transmitted, use
of an error-correcting code coupled with successfully transmitted bits during other time instances
(interleaving) can allow for the erased bits to be recovered. In a slow-fading channel, it is not
possible to use time diversity because the transmitter sees only a single realization of the channel
within its delay constraint. A deep fade therefore lasts the entire duration of transmission and
cannot be mitigated using coding.

The coherence time of the channel is related to a quantity known as the Doppler spread of the
channel. When a user (or reflectors in its environment) is moving, the user's velocity causes a
shift in the frequency of the signal transmitted along each signal path. This phenomenon is
known as the Doppler shift. Signals travelling along different paths can have different Doppler
shifts, corresponding to different rates of change in phase. The difference in Doppler shifts
between different signal components contributing to a single fading channel tap is known as the
Doppler spread. Channels with a large Doppler spread have signal components that are each
changing independently in phase over time. Since fading depends on whether signal components
add constructively or destructively, such channels have a very short coherence time.

In general, coherence time is inversely related to Doppler spread, typically expressed as

Tc ≈ 1 / Ds

where Tc is the coherence time and Ds is the Doppler spread (maximum Doppler shift). This
equation is only an approximation [1]; for an exact treatment, see the definition of
coherence time.
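The inverse relation between coherence time and Doppler spread can be turned into numbers directly. A sketch (the vehicle speed and carrier frequency are illustrative):

```python
def max_doppler_shift(speed_mps, carrier_hz, c=3.0e8):
    """Maximum Doppler shift f_m = v * f_c / c for a user moving at speed v."""
    return speed_mps * carrier_hz / c

def coherence_time(doppler_spread_hz):
    """Rough coherence time Tc ~ 1 / Ds, per the approximation above."""
    return 1.0 / doppler_spread_hz

# Hypothetical: a vehicle at 30 m/s (108 km/h) on a 900 MHz carrier.
fm = max_doppler_shift(30.0, 900e6)
print(fm, round(coherence_time(fm) * 1000, 1))  # -> 90.0 Hz, ~11.1 ms
```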

Flat versus frequency-selective fading:

As the carrier frequency of a signal is varied, the magnitude of the change in amplitude will
vary. The coherence bandwidth measures the separation in frequency after which two signals
will experience uncorrelated fading.

In flat fading, the coherence bandwidth of the channel is larger than the bandwidth of the
signal. Therefore, all frequency components of the signal will experience the same
magnitude of fading.
In frequency-selective fading, the coherence bandwidth of the channel is smaller than
the bandwidth of the signal. Different frequency components of the signal therefore
experience decorrelated fading.

Since different frequency components of the signal are affected independently, it is highly
unlikely that all parts of the signal will be simultaneously affected by a deep fade. Certain
modulation schemes such as OFDM and CDMA are well-suited to employing frequency
diversity to provide robustness to fading. OFDM divides the wideband signal into many
slowly modulated narrowband subcarriers, each exposed to flat fading rather than frequency
selective fading. This can be combated by means of error coding, simple equalization or
adaptive bit loading. Inter-symbol interference is avoided by introducing a guard interval
between the symbols. CDMA uses the Rake receiver to deal with each echo separately.

Frequency-selective fading channels are also dispersive, in that the signal energy associated
with each symbol is spread out in time. This causes transmitted symbols that are adjacent in
time to interfere with each other. Equalizers are often deployed in such channels to
compensate for the effects of the intersymbol interference.

The echoes may also be exposed to Doppler shift, resulting in a time varying channel model.
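The flat versus frequency-selective distinction can be checked against the channel's delay spread. A sketch using the common rule of thumb Bc ≈ 1/(5·στ) for the coherence bandwidth (an approximation; the delay-spread value below is illustrative):

```python
def coherence_bandwidth(rms_delay_spread_s):
    """Rule-of-thumb coherence bandwidth Bc ~ 1 / (5 * sigma_tau)."""
    return 1.0 / (5.0 * rms_delay_spread_s)

def fading_type(signal_bw_hz, rms_delay_spread_s):
    """Flat if the signal bandwidth fits within Bc, else frequency selective."""
    bc = coherence_bandwidth(rms_delay_spread_s)
    return "flat" if signal_bw_hz < bc else "frequency-selective"

# Hypothetical urban channel: 1 microsecond RMS delay spread (Bc = 200 kHz).
print(fading_type(30e3, 1e-6))  # 30 kHz signal -> flat
print(fading_type(5e6, 1e-6))   # 5 MHz signal  -> frequency-selective
```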

Path loss:

Path loss (or path attenuation) is the reduction in power density (attenuation) of an
electromagnetic wave as it propagates through space. Path loss is a major component in the
analysis and design of the link budget of a telecommunication system.

This term is commonly used in wireless communications and signal propagation. Path loss may
be due to many effects, such as free-space loss, refraction, diffraction, reflection, aperture-
medium coupling loss, and absorption. Path loss is also influenced by terrain contours,
environment (urban or rural, vegetation and foliage), propagation medium (dry or moist air), the
distance between the transmitter and the receiver, and the height and location of antennas.
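Of the effects listed, the free-space term has a simple closed form, FSPL = 20·log10(4πdf/c) dB, which makes a convenient first estimate in a link budget. A sketch (the link distance and frequency are illustrative):

```python
import math

def free_space_path_loss_db(distance_m, freq_hz, c=3.0e8):
    """Free-space path loss in dB: 20*log10(4*pi*d*f / c).
    Covers only free-space spreading; diffraction, absorption, terrain
    and the other effects above need separate models."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

# Hypothetical link: 1 km at 900 MHz.
print(round(free_space_path_loss_db(1000.0, 900e6), 1))  # -> 91.5
```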

Doppler effect:

The Doppler effect (or Doppler shift), named after Austrian physicist Christian Doppler who
proposed it in 1842 in Prague, is the change in frequency of a wave for an observer moving
relative to the source of the wave. It is commonly heard when a vehicle sounding a siren or horn
approaches, passes, and recedes from an observer. The received frequency is higher (compared
to the emitted frequency) during the approach, it is identical at the instant of passing by, and it is
lower during the recession.

The relative changes in frequency can be explained as follows. When the source of the waves is
moving toward the observer, each successive wave crest is emitted from a position closer to the
observer than the previous wave. Therefore each wave takes slightly less time to reach the
observer than the previous wave. Therefore the time between the arrival of successive wave
crests at the observer is reduced, causing an increase in the frequency. While they are travelling,
the distance between successive wave fronts is reduced; so the waves "bunch together".
Conversely, if the source of waves is moving away from the observer, each wave is emitted from
a position farther from the observer than the previous wave, so the arrival time between
successive waves is increased, reducing the frequency. The distance between successive wave
fronts is increased, so the waves "spread out".

For waves that propagate in a medium, such as sound waves, the velocity of the observer and of
the source are relative to the medium in which the waves are transmitted. The total Doppler
effect may therefore result from motion of the source, motion of the observer, or motion of the
medium. Each of these effects are analyzed separately. For waves which do not require a
medium, such as light or gravity in general relativity, only the relative difference in velocity
between the observer and the source needs to be considered.
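For a sound source and observer moving relative to the medium, the description above corresponds to the classical formula f' = f·(c + v_obs)/(c − v_src). A sketch (the sign convention, velocities positive toward the other party, is one common choice; the siren numbers are illustrative):

```python
def observed_frequency(f_source_hz, v_source_mps, v_observer_mps=0.0,
                       v_sound_mps=343.0):
    """Classical Doppler shift for sound: f' = f * (c + v_obs) / (c - v_src),
    with velocities positive when moving toward the other party."""
    return f_source_hz * (v_sound_mps + v_observer_mps) / (v_sound_mps - v_source_mps)

# A 440 Hz siren and a stationary observer:
print(round(observed_frequency(440.0, 30.0)))   # approaching at 30 m/s -> 482 (higher)
print(round(observed_frequency(440.0, -30.0)))  # receding at 30 m/s  -> 405 (lower)
```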

UNIT I
PRINCIPLES OF WIRELESS COMMUNICATION
PART A (2 MARKS)
1. What is digital modulation?
2. What is FSK?
3. What is PSK?
4. What is QAM?
5. Define Spread-spectrum techniques.
6. What is hopping sequence?
7. Define TDMA.
8. What are the TDMA characteristics?
9. Define Frequency Division Multiple Access or FDMA.
10. Define Space-Division Multiple Access (SDMA).
11. What is mean by Handover technique?
PART B (16 MARKS)
1. Briefly explain the different types of Digital Modulation techniques.
2. Briefly explain the different types of linear modulation techniques.
3. Define multiple access techniques and briefly explain them.
4. In detail explain the Cellular concepts.

UNIT II
WIRELESS PROTOCOLS
Security Issues and Challenges in Wireless Networks
Introduction:
Wireless stations, or nodes, communicate over a wireless medium
Networks operating under infrastructure mode, e.g., 802.11, 802.16, cellular
networks
Networks operating with limited or no infrastructural support, e.g., ad hoc
networks in AODV mode
Security threats are imminent due to the open nature of communication
Two main issues: authentication and privacy
Other serious issues: denial of service
A categorization is required to understand the issues in each situation.

Introduction Wireless Technologies:


Different technologies have been developed for different scenarios and requirements:
WiFi is a technology for wireless LANs and short-range mobile access networks
WiMAX is a technology for last-mile broadband connectivity
Wireless USB is a technology for Internet connectivity on the go
Other technologies, such as infrared (TV remotes etc.) and Bluetooth, are short range
Extremely high-bandwidth but short-range technologies include gigabit wireless
Fixed Infrastructure
Base stations that are typically not resource constrained.
Examples: sensor networks and cellular networks.
Mobility of nodes but not of base stations
Ad hoc wireless networks
No infrastructural support.
Nodes also double up as routers.
Mobility of nodes.
Examples: laptops/cellphones operating in ad hoc mode.
Mixed mode
In between the two modes.
Some nodes exhibit ad hoc capability.
To formalize study and solutions, need good models for these networks.
Formal model to characterize the properties and solutions
Models that are close to reality
Still allow for solution design and analysis.
Solution properties
Light-weight
Have to use battery power wisely.
Other resources, such as storage, are also limited.
Local control
Many cases, only neighbours are known.
Any additional information gathering is expensive.
Difficulty of modeling wireless networks as opposed to wired networks:
Transmission
Interference
Resource constraints
Mobility
Physical carrier sensing

Resource Management:
In organizational studies, resource management is the efficient and effective deployment of an
organization's resources when they are needed. Such resources may include financial resources,
inventory, human skills, production resources, or information technology (IT). In the realm of
project management, processes, techniques and philosophies as to the best approach for
allocating resources have been developed. These include discussions on functional vs. cross-
functional resource allocation as well as processes espoused by organizations like the Project
Management Institute (PMI) through their Project Management Body of Knowledge (PMBOK)
methodology of project management. Resource management is a key element to activity resource
estimating and project human resource management. Both are essential components of a
comprehensive project management plan to execute and monitor a project successfully.[1][2] As is
the case with the larger discipline of project management, there are resource management
software tools available that automate and assist the process of resource allocation to projects
and portfolio resource visibility including supply and demand of resources.

HR (Human Resource) Management

This is the science of allocating human resources among various projects or business units,
maximizing the utilization of available personnel resources to achieve business goals; and
performing the activities that are necessary in the maintenance of that workforce through
identification of staffing requirements, planning and oversight of payroll and benefits, education
and professional development, and administering their work-life needs. The efficient and
effective deployment of an organization's personnel resources where and when they are needed,
and in possession of the tools, training and skills required by the work.

Corporate Resource Management Process

Large organizations usually have a defined corporate resource management process which
mainly guarantees that resources are never over-allocated across multiple projects.[3]

Techniques

One resource management technique is resource leveling. It aims at smoothing the stock of
resources on hand, reducing both excess inventories and shortages.

The required data are: the demands for various resources, forecast by time period into the future
as far as is reasonable, as well as the resources' configurations required in those demands, and
the supply of the resources, again forecast by time period into the future as far as is reasonable.

The goal is to achieve 100% utilization but that is very unlikely, when weighted by important
metrics and subject to constraints, for example: meeting a minimum service level, but otherwise
minimizing cost.

The principle is to invest in resources as stored capabilities, then unleash the capabilities as
demanded.

A dimension of resource development is included in resource management by which investment
in resources can be retained by a smaller additional investment to develop a new capability that
is demanded, at a lower investment than disposing of the current resource and replacing it with
another that has the demanded capability.

In conservation, resource management is a set of practices pertaining to maintaining natural
systems integrity. Examples of this form of management are air resource management, soil
conservation, forestry, wildlife management and water resource management. The broad term for
this type of resource management is natural resource management (NRM).

Routing protocol:

A routing protocol is a protocol that specifies how routers communicate with each other,
disseminating information that enables them to select routes between any two nodes on a
computer network, the choice of the route being done by routing algorithms. Each router has a
priori knowledge only of networks attached to it directly. A routing protocol shares this
information first among immediate neighbors, and then throughout the network. This way,
routers gain knowledge of the topology of the network. For a discussion of the concepts behind
routing protocols, see: Routing.

The term routing protocol may refer specifically to one operating at layer three of the OSI model,
which similarly disseminates topology information between routers.

Although there are many types of routing protocols, three major classes are in widespread use on
IP networks:

Interior gateway routing via link-state routing protocols, such as OSPF and IS-IS
Interior gateway routing via path vector or distance vector protocols, such as RIP,
IGRP and EIGRP
Exterior gateway routing. BGP v4 is the routing protocol used by the public Internet.

Many routing protocols are defined in documents called RFCs.[1][2][3][4]

The specific characteristics of routing protocols include

the manner in which they either prevent routing loops from forming or break them up
if they do
the manner in which they select preferred routes, using information about hop costs
the time they take to converge
how well they scale up
many other factors
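The route-selection behaviour described above can be illustrated with a minimal distance-vector (Bellman-Ford) update, the mechanism behind RIP-style protocols. This is a simplified sketch, not any real implementation (the topology and costs are hypothetical):

```python
INF = float("inf")

def dv_update(own_table, neighbor_table, link_cost):
    """One distance-vector step: adopt a route via the neighbor whenever
    link_cost + the neighbor's advertised cost beats the current entry."""
    changed = False
    for dest, cost in neighbor_table.items():
        candidate = link_cost + cost
        if candidate < own_table.get(dest, INF):
            own_table[dest] = candidate
            changed = True
    return changed

# Hypothetical three-node line A -- B -- C with unit link costs.
table_a = {"A": 0, "B": 1}
table_b = {"A": 1, "B": 0, "C": 1}
dv_update(table_a, table_b, 1)
print(table_a)  # A learns a route to C at cost 2 via B
```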

Routed versus routing protocols:

A routed protocol can be routed by a router, i.e., it can be forwarded from one router to another.
A routing protocol sends and receives packets containing routing information to and from other
routers.

In some cases, routing protocols can themselves run over routed protocols: for example, BGP
runs over TCP which runs over IP; care is taken in the implementation of such systems not to
create a circular dependency between the routing and routed protocols. That a routing protocol
runs over a particular transport mechanism does not mean that the routing protocol is of layer
(N+1) if the transport mechanism is of layer (N). Routing protocols, according to the OSI
Routing framework, are layer management protocols for the network layer, regardless of their
transport mechanism:

IS-IS runs over the data link layer
OSPF, IGRP, and EIGRP run directly over IP; OSPF and EIGRP have their own
reliable transmission mechanisms, while IGRP assumed an unreliable transport
RIP runs over UDP
BGP runs over TCP

Examples

Interior routing protocols

Interior Gateway Protocols (IGPs) exchange routing information within a single routing domain.
A given autonomous system [5] can contain multiple routing domains, or a set of routing domains
can be coordinated without being an Internet-participating autonomous system. Common
examples include:

IGRP (Interior Gateway Routing Protocol)
EIGRP (Enhanced Interior Gateway Routing Protocol)
OSPF (Open Shortest Path First)
RIP (Routing Information Protocol)
IS-IS (Intermediate System to Intermediate System)

Note that IGRP, a Cisco proprietary routing protocol, is no longer supported. EIGRP accepts
IGRP configuration commands, but the internals of IGRP and EIGRP are completely different.

Static routing:

Static routing is a data communication concept describing one way of configuring path selection
of routers in computer networks. It is the type of routing characterized by the absence of
communication between routers regarding the current topology of the network.[1] This is
achieved by manually adding routes to the routing table. The opposite of static routing is
dynamic routing, sometimes also referred to as adaptive routing.

In these systems, routes through a data network are described by fixed paths (statically). These
routes are usually entered into the router by the system administrator. An entire network can be
configured using static routes, but this type of configuration is not fault tolerant. When there is a
change in the network or a failure occurs between two statically defined nodes, traffic will not be
rerouted. This means that anything that wishes to take an affected path will either have to wait
for the failure to be repaired or the static route to be updated by the administrator before
restarting its journey. Most requests will time out (ultimately failing) before these repairs can be
made. There are, however, times when static routes can improve the performance of a network.
Some of these include stub networks and default routes.
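A static routing table is, in effect, a fixed longest-prefix-match lookup. A sketch using Python's ipaddress module (the prefixes and next hops are illustrative):

```python
import ipaddress

# Manually configured routes, as an administrator would enter them:
STATIC_ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"), "10.0.0.1"),
    (ipaddress.ip_network("10.1.0.0/16"), "10.1.0.1"),
    (ipaddress.ip_network("0.0.0.0/0"), "192.0.2.1"),  # default route
]

def next_hop(dest_ip):
    """Longest-prefix match over the static table. Nothing here reacts to
    failures: a dead next hop keeps receiving traffic until an
    administrator edits the table, as described above."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [(net, hop) for net, hop in STATIC_ROUTES if dest in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.3"))     # -> 10.1.0.1 (the more specific /16 wins)
print(next_hop("203.0.113.9"))  # -> 192.0.2.1 (falls through to the default)
```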

Adaptive routing:
Adaptive routing describes the capability of a system, through which routes are characterized by
their destination, to alter the path that the route takes through the system in response to a change
in conditions[1]. The adaptation is intended to allow as many routes as possible to remain valid
(that is, have destinations that can be reached) in response to the change.

People using a transport system can display adaptive routing. For example, if a local railway
station is closed, people can alight from a train at a different station and use another method,
such as a bus, to reach their destination. Another example of adaptive routing can be seen within
financial markets. For example, ASOR or Adaptive Smart Order Router (developed by Quod
Financial), takes routing decisions dynamically and based on real-time market events.

The term is commonly used in data networking to describe the capability of a network to 'route
around' damage, such as loss of a node or a connection between nodes, so long as other path
choices are available. There are several protocols used to achieve this:

RIP
OSPF
IS-IS
IGRP/EIGRP

Systems that do not implement adaptive routing are described as using static routing, where
routes through a network are described by fixed paths (statically). A change, such as the loss of a
node, or loss of a connection between nodes, is not compensated for. This means that anything
that wishes to take an affected path will either have to wait for the failure to be repaired before
restarting its journey, or will have to fail to reach its destination and give up the journey.
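The "route around damage" behaviour reduces, in link-state protocols such as OSPF, to recomputing shortest paths on the surviving topology. A minimal sketch with Dijkstra's algorithm (the topology is hypothetical):

```python
import heapq

def shortest_path_cost(links, src, dst):
    """Dijkstra over an undirected cost map {(u, v): cost}. Returns the
    cheapest cost from src to dst, or None if dst is unreachable."""
    adj = {}
    for (u, v), c in links.items():
        adj.setdefault(u, []).append((v, c))
        adj.setdefault(v, []).append((u, c))
    seen, heap = set(), [(0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dst:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + c, nxt))
    return None

# Hypothetical topology: a short path A-B-D and a longer detour A-C-D.
links = {("A", "B"): 1, ("B", "D"): 1, ("A", "C"): 2, ("C", "D"): 3}
print(shortest_path_cost(links, "A", "D"))  # -> 2 via B
del links[("B", "D")]                       # the B-D link fails
print(shortest_path_cost(links, "A", "D"))  # -> 5: traffic routes around
```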

Power management:

Power management is a feature of some electrical appliances, especially copiers, computers and
computer peripherals such as monitors and printers, that turns off the power or switches the
system to a low-power state when inactive. In computing this is known as PC power
management and is built around a standard called ACPI. This supersedes APM. All recent
(consumer) computers have ACPI support.

Power Management Techniques

The previous section discussed WLANs and WPANs and the various standards that exist for
them. The differences between each type of network were introduced with an emphasis put on
their requirements for performing power management that each of them have. This section
discusses the various power management techniques used by these standards for reducing the
power consumed in each type of network. Many of the techniques introduced in this section do
not appear in any of these standards, but are used in common practice to reduce the power of
devices in both WLANs and WPANs. These techniques exist from the application layer all the
way down to the physical layer of a traditional networking protocol stack. Techniques specific to
a particular type of network are annotated as appropriate.
Application Layer

At the application layer a number of different techniques can be used to reduce the power
consumed by a wireless device. A technique known as load partitioning allows an application to
have all of its power intensive computation performed at its base station rather than locally. The
wireless device simply sends the request for the computation to be performed, and then waits for
the result. Another technique uses proxies in order to inform an application of changes in battery
power. Applications use this information to limit their functionality and only provide their most
essential features. This technique might be used to suppress certain "unnecessary" visual effects
that accompany a process. While these techniques may be adapted to work with any application
that wishes to support them, a number of techniques also exist for specific classes of
applications.

Some applications are so common that it is worth exploring techniques that specifically deal with
reducing the power consumed while running them. Two of the most common such applications
include database operations and video processing. For database systems, techniques are explored
that are able to reduce the power consumed during data retrieval, indexing, as well as querying
operations. In all three cases, energy is conserved by reducing the number of transmissions
needed to perform these operations. For video processing applications, energy can be conserved
using compression techniques to reduce the number of bits transmitted over the wireless
medium. Since performing the compression itself may consume a lot of power, however, other
techniques that allow the video quality to become slightly degraded have been explored in order
to reduce the power even further.

Transport Layer

The various techniques used to conserve energy at the transport layer all try to reduce the number
of retransmissions necessary due to packet losses from a faulty wireless link. In a traditional
(wired) network, packet losses are used to signify congestion and require backoff mechanisms to
account for this. In a wireless network, however, losses can occur sporadically and should not
immediately be interpreted as the onset of congestion. The TCP-Probing and Wave and Wait
Protocols have been developed with this knowledge in mind. They are meant as replacements
for traditional TCP, and are able to guarantee end-to-end data delivery with high throughput and
low power consumption.

Network Layer

Power management techniques existing at the network layer are concerned with performing
power efficient routing through a multi-hop network. They are typically either backbone based,
topology control based, or a hybrid of them both. In a backbone based protocol (sometimes also
referred to as Charge Based Clustering), some nodes are chosen to remain active at all times
(backbone nodes), while others are allowed to sleep periodically. The backbone nodes are used
to establish a path between all source and destination nodes in the network. Any node in the
network must therefore be within one hop of at least one backbone node, including backbone
nodes themselves. Energy savings are achieved by allowing non-backbone nodes to sleep
periodically, as well as by periodically changing which nodes in fact make up the backbone.
Fig. 3: Backbone based routing

Fig. 3 shows how packets would be routed from node 3 to node 4 and from node 1 to node 2
using the backbone that has been established. Black nodes signify backbone nodes, while
numbered nodes signify non-backbone nodes. Solid lines indicate paths along which a packet
may travel, while dashed ones show paths that will not be followed. Given this backbone
structure, packets traveling from node 3 to node 4 will have to travel through 4 different
backbone nodes before reaching their destination. If node 5 had been chosen as a backbone node
as well, packets would only have had to traverse through 2.
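The backbone-restricted routing described above can be sketched as a breadth-first search in which only backbone nodes may relay packets. The topology, node names, and helper below are made up purely for illustration and do not correspond to any specific protocol implementation:

```python
from collections import deque

def backbone_route(adj, backbone, src, dst):
    """BFS that only relays through backbone nodes: the source and
    destination may be ordinary nodes, but every intermediate hop
    must belong to `backbone`. `adj` is an adjacency dict."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nbr in adj[node]:
            # Only enqueue backbone nodes (or the destination itself).
            if nbr not in seen and (nbr in backbone or nbr == dst):
                seen.add(nbr)
                queue.append(path + [nbr])
    return None  # no backbone path exists

# Toy topology: non-backbone nodes 3 and 4 each attach to a backbone chain.
adj = {
    3: ["b1"], 4: ["b3"],
    "b1": [3, "b2"], "b2": ["b1", "b3"], "b3": ["b2", 4],
}
route = backbone_route(adj, {"b1", "b2", "b3"}, 3, 4)
assert route == [3, "b1", "b2", "b3", 4]
```

Adding or removing nodes from the backbone set changes which paths exist, which is exactly why rotating backbone membership trades path length against energy savings.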

Topology based routing protocols achieve energy savings in a different way. Their goal is to
reduce the transmission power of all nodes in a network such that the network remains
connected, but all nodes operate with the lowest transmission power possible. In a homogeneous
network, this means that the transmission powers of all nodes are adjusted so that they are just
within range of their nearest one-hop neighbor. In heterogeneous networks (i.e. networks with
nodes of different type, power limitations, etc.) the transmission powers may be adjusted
according to the needs of that network. A summary of the different types of topology based
protocols that exist can be seen in Fig. 4.

As seen in the figure, certain location based topology control protocols attempt to use the
topology of the network to provide the most energy efficient communication path possible.
These protocols produce a sort of "Localized Power-Aware Routing" mechanism for the
network. In some cases, providing this path means taking a larger number of hops through the
network than would otherwise be taken when transmitting directly from one node to another.
While this may seem counterintuitive at first, it makes sense if the amount of energy expended in
transmitting to a node very far away is significantly greater than the energy expended when
transmitting between a large number of nodes that are within closer range of one another. The
rationale behind the other topology based protocols can be found in Fig. 4.
Fig. 4: Topology based routing protocols
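The argument that many short hops can beat one long hop can be made concrete with a simple first-order radio model in which amplifier energy grows as d^alpha for a path-loss exponent alpha. The constants below are illustrative only, not drawn from any particular radio:

```python
E_ELEC = 50e-9   # electronics energy per bit (J) -- illustrative
E_AMP = 100e-12  # amplifier coefficient (J/bit/m^alpha) -- illustrative

def tx_energy(d, alpha=4.0):
    """Energy per bit to transmit over d metres: a fixed electronics
    cost plus an amplifier cost that grows as d**alpha."""
    return E_ELEC + E_AMP * d ** alpha

def path_energy(total_d, hops, alpha=4.0):
    """Energy per bit to cover total_d metres in `hops` equal hops;
    every hop also pays the receiving radio's electronics cost."""
    d = total_d / hops
    return hops * (tx_energy(d, alpha) + E_ELEC)

# With alpha = 4, four 25 m hops use far less energy than one 100 m hop,
# because 4 * (25**4) is much smaller than 100**4.
assert path_energy(100, 4) < path_energy(100, 1)
```

With a small alpha (near-free-space propagation) or a large per-hop electronics cost, the inequality can reverse, which is why localized power-aware routing must weigh both terms.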

Transmission power control schemes are combined with backbone based ones to produce a
hybrid of them both. Using hybrid based protocols, the benefits of both backbone based and
topology based routing protocols can be achieved simultaneously.

Data Link Layer

The two most common techniques used to conserve energy at the link layer involve reducing the
transmission overhead during the Automatic Repeat Request (ARQ) and Forward Error
Correction (FEC) schemes. Both of these schemes are used to reduce the number of packet errors
at a receiving node. With ARQ enabled, a receiving node can request the retransmission of a
corrupted packet directly from its sender at the link layer, without waiting for higher layers to
detect that a packet has been lost. Results have shown that sometimes it is more energy
efficient to transmit at a lower transmission power and have to send multiple ARQs than to send
at a high transmission power and achieve better throughput. Integrating the use of FEC codes to
reduce the number of retransmissions necessary at the lower transmission power can result in
even more energy savings.
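The low-power-plus-retransmissions tradeoff can be illustrated with a toy expected-energy calculation. The power levels and loss probabilities below are hypothetical numbers chosen for illustration, not measurements:

```python
def expected_energy(p_tx_watts, loss_prob, t_packet=0.01):
    """Expected energy (J) to deliver one packet under ARQ: each attempt
    costs p_tx * t_packet and succeeds with probability 1 - loss_prob,
    so the expected number of attempts is 1 / (1 - loss_prob)."""
    return p_tx_watts * t_packet / (1.0 - loss_prob)

# Illustrative: a 10 mW link losing 30% of packets can still cost less
# per delivered packet than a 100 mW link losing only 1%.
low = expected_energy(0.010, 0.30)
high = expected_energy(0.100, 0.01)
assert low < high
```

FEC shifts this balance further: by lowering the effective loss probability at the same transmit power, it shrinks the expected number of ARQ rounds at the cost of some coding overhead per packet.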

Other power management techniques existing at the link layer are based on some sort of packet
scheduling protocol. By scheduling multiple packet transmission to occur back to back (i.e. in a
burst), it may be possible to reduce the overhead associated with sending each packet
individually. Preamble bytes only need to be sent for the first packet in order to announce its
presence on the radio channel, and all subsequent packets essentially "piggyback" this
announcement. Packet scheduling algorithms may also reduce the number of retransmissions
necessary if a packet is only scheduled to be sent during a time when its destination is known to
be able to receive packets. By reducing the number of retransmissions necessary, the overall
power consumption is consequently reduced as well.
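The preamble "piggybacking" saving reduces to simple arithmetic. The packet and preamble sizes below are made-up example values:

```python
def airtime_bits(n_packets, payload_bits=1024, preamble_bits=128, burst=False):
    """Total bits on the air for n packets. In burst mode only the first
    packet carries the preamble; the rest piggyback on its announcement."""
    if burst:
        return preamble_bits + n_packets * payload_bits
    return n_packets * (preamble_bits + payload_bits)

# 10 packets sent individually vs. back to back:
individual = airtime_bits(10)              # 10 * (128 + 1024) = 11520
burst = airtime_bits(10, burst=True)       # 128 + 10 * 1024   = 10368
assert burst < individual
```

The saving grows with the number of packets per burst and with the ratio of preamble to payload, which is why bursting matters most for short packets.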

MAC Layer

Power saving techniques existing at the MAC layer consist primarily of sleep scheduling
protocols. The basic principle behind all sleep scheduling protocols is that lots of power is
wasted listening on the radio channel while there is nothing there to receive. Sleep schedulers are
used to duty cycle a radio between its on and off power states in order to reduce the effects of
this idle listening. They are used to wake up a radio whenever it expects to transmit or receive
packets and sleep otherwise. Other power saving techniques at this layer include battery aware
MAC protocols (BAMAC) in which the decision of who should send next is based on the battery
level of all surrounding nodes in the network. Battery level information is piggy-backed on each
packet that is transmitted, and individual nodes base their decisions for sending on this
information.
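The saving from duty cycling the radio between its on and off states is a weighted average of the two power draws. The power figures below are typical of low-power transceivers but are used here purely as illustration:

```python
def avg_power(duty_cycle, p_active=60e-3, p_sleep=3e-6):
    """Average radio power (W) under a sleep scheduler that keeps the
    radio awake for the fraction `duty_cycle` of the time."""
    return duty_cycle * p_active + (1.0 - duty_cycle) * p_sleep

always_on = avg_power(1.0)
one_percent = avg_power(0.01)
# A 1% duty cycle cuts average power by well over 50x in this model.
assert one_percent < always_on / 50
```

The residual sleep-state draw puts a floor under the savings, which is why hardware-level leakage reduction (discussed under the physical layer) complements sleep scheduling.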

Sleep scheduling protocols can be broken up into two categories: synchronous and asynchronous.
Synchronous sleep scheduling policies rely on clock synchronization between all nodes in
a network. As seen in Fig. 5, senders and receivers are aware of when each other should be on
and only send to one another during those time periods. They go to sleep otherwise.

Fig. 5: Synchronous sleep scheduler

Asynchronous sleep scheduling, on the other hand, does not rely on any clock synchronization
between nodes whatsoever. Nodes can send and receive packets whenever they please, according
to the MAC protocol in use. Fig. 6 shows how two nodes running asynchronous sleep schedulers interact.
Fig. 6: Asynchronous sleep scheduler

Nodes wake up and go to sleep periodically in the same way they do for synchronous sleep
scheduling. Since there is no time synchronization, however, there must be a way to ensure that
receiving nodes are awake to hear the transmissions coming in from other nodes. Normally
preamble bytes are sent by a packet in order to synchronize the starting point of the incoming
data stream between the transmitter and receiver. With asynchronous sleep scheduling, a
significant number of extra preamble bytes are sent per packet in order to guarantee that a
receiver has the chance to synchronize to it at some point. In the worst case, a packet will begin
transmitting just as its receiver goes to sleep, and preamble bytes will have to be sent for a time
equal to the receiver's sleep interval (plus a little more to allow for proper synchronization once it
wakes up). Once the receiver wakes up, it synchronizes to these preamble bytes and remains on
until it receives the packet.
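The worst-case preamble cost described above can be written down directly. The 2 ms synchronization margin below is an arbitrary illustrative value, not taken from any specific protocol:

```python
def worst_case_preamble_ms(sleep_interval_ms, sync_margin_ms=2):
    """Asynchronous (low-power-listening style) wake-up: in the worst
    case the sender must transmit preamble for the receiver's full
    sleep interval, plus a small margin for the receiver to lock on
    to the preamble after waking."""
    return sleep_interval_ms + sync_margin_ms

# A receiver that checks the channel every 100 ms can force up to
# 102 ms of preamble per packet in the worst case:
assert worst_case_preamble_ms(100) == 102
```

This makes the asymmetry explicit: longer receiver sleep intervals save receiver energy but shift a proportional preamble cost onto every sender.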

It does not make sense to build a hybrid sleep scheduling protocol out of the two techniques.
The energy savings achieved using each of them vary from system to system and application to
application. One technique is not "better" than the other in this sense, so efforts are being
made to define exactly when each type should be used.

Physical Layer

At the physical layer, techniques can be used to not only preserve energy, but also generate it.
Proper hardware design techniques allow one to decrease the level of parasitic leak currents in an
electronic device to almost nothing. These smaller leakage currents ultimately result in longer
lifetimes for these devices, as less energy is wasted while idle. Variable clock CPUs, CPU
voltage scaling, flash memory, and disk spin down techniques can also be used to further reduce
the power consumed at the physical layer. A technique known as Remote Access Switch (RAS)
can be used to wake up a receiver only when it has data destined for it. A low power radio circuit
is run to detect a certain type of activity on the channel. Only when this activity is detected does
the circuit wake up the rest of the system for reception of a packet. A transmitter has to know
what type of activity needs to be sent on the channel to wake up each of its receivers.
Energy harvesting techniques allow a device to actually gather energy from its surrounding
environment. Ambient energy is all around in the form of vibration, strain, inertial forces, heat,
light, wind, magnetic forces, etc. Energy harvesting techniques allow one to harness this energy
and either convert it directly into usable electric current or store it for later use within an
electrical system.

Wireless security:

Wireless security is the prevention of unauthorized access or damage to computers using
wireless networks.

Many laptop computers have wireless cards pre-installed. The ability to enter a network while
mobile has great benefits. However, wireless networking is prone to some security issues [1].
Crackers have found wireless networks relatively easy to break into, and even use wireless
technology to crack into wired networks [2]. As a result, it's very important that enterprises define
effective wireless security policies that guard against unauthorized access to important
resources.[3] Wireless Intrusion Prevention Systems (WIPS) or Wireless Intrusion Detection
Systems (WIDS) are commonly used to enforce wireless security policies.

The risks to users of wireless technology have increased as the service has become more popular.
There were relatively few dangers when wireless technology was first introduced. Crackers had
not yet had time to latch on to the new technology and wireless was not commonly found in the
work place. However, there are a great number of security risks associated with the current
wireless protocols and encryption methods, and in the carelessness and ignorance that exists at
the user and corporate IT level.[4] Cracking methods have become much more sophisticated and
innovative with wireless. Cracking has also become much easier and more accessible with easy-
to-use Windows or Linux-based tools being made available on the web at no charge.

Some organizations that have no wireless access points installed do not feel that they need to
address wireless security concerns. In-Stat MDR and META Group have estimated that 95% of
all corporate laptop computers that were planned to be purchased in 2005 were equipped with
wireless. Issues can arise in a supposedly non-wireless organization when a wireless laptop is
plugged into the corporate network. A cracker could sit in the parking lot and gather
information from laptops and/or other devices such as handhelds, or even break in through this
wireless card-equipped laptop and gain access to the wired network.

Media access control:


The media access control (MAC) data communication protocol sub-layer, also known as the
medium access control, is a sublayer of the data link layer specified in the seven-layer OSI
model (layer 2), and in the four-layer TCP/IP model (layer 1). It provides addressing and channel
access control mechanisms that make it possible for several terminals or network nodes to
communicate within a multiple access network that incorporates a shared medium, e.g. Ethernet.
The hardware that implements the MAC is referred to as a medium access controller.

The MAC sub-layer acts as an interface between the logical link control (LLC) sublayer and the
network's physical layer. The MAC layer emulates a full-duplex logical communication channel
in a multi-point network. This channel may provide unicast, multicast or broadcast
communication service.

Functions performed in the MAC layer:

According to 802.3-2002 the functions required of a MAC are

receive/transmit normal frames


half-duplex retransmission and backoff functions
append/check FCS (frame check sequence)
interframe gap enforcement
discard malformed frames
append(tx)/remove(rx) preamble, SFD, and padding
half-duplex compatibility: append(tx)/remove(rx) MAC address

In 100Mbps and faster MACs, the MAC address is not actually handled in the MAC layer. Doing
so would make it impossible to implement IP because the ARP layer of IP-Ethernet needs access
to the MAC address.

Addressing mechanism:

In 100Mbps and faster Ethernet MACs, there is no required addressing mechanism. However,
the MAC address inherited from the original MAC layer specification is used in many higher
level protocols such as Internet Protocol (IP) over Ethernet.

The local network address used in IP-Ethernet is called MAC address because it historically was
part of the MAC layer in early Ethernets. The MAC layer's addressing mechanism is called
physical address or MAC address. A MAC address is a unique serial number. Once a MAC
address has been assigned to a particular network interface (typically at time of manufacture),
that device should be uniquely identifiable amongst all other network devices in the world. This
guarantees that each device in a network will have a different MAC address (analogous to a
street address). This makes it possible for data packets to be delivered to a destination within a
subnetwork, i.e. hosts interconnected by some combination of repeaters, hubs, bridges and
switches, but not by IP routers. Thus, when an IP packet reaches its destination (sub)network, the
destination IP address (a layer-3, network layer, construct) is resolved into the MAC address (a
layer-2 construct) of the destination host.
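The structure of a 48-bit MAC address can be illustrated with a small parser. The helper name and dictionary layout are ours for illustration; the bit meanings (OUI in the first three octets, group/multicast flag in bit 0 of the first octet, locally-administered flag in bit 1) follow the IEEE addressing convention:

```python
def parse_mac(mac_str):
    """Split a colon-separated 48-bit MAC address into its parts:
    the first three octets form the OUI assigned to the manufacturer,
    the last three the NIC-specific serial number."""
    octets = [int(b, 16) for b in mac_str.split(":")]
    assert len(octets) == 6, "a MAC address has exactly six octets"
    return {
        "oui": octets[:3],
        "serial": octets[3:],
        "multicast": bool(octets[0] & 0x01),           # group (I/G) bit
        "locally_administered": bool(octets[0] & 0x02)  # U/L bit
    }

info = parse_mac("00:1a:2b:3c:4d:5e")
assert info["oui"] == [0x00, 0x1A, 0x2B]
assert not info["multicast"] and not info["locally_administered"]
```

The manufacturer-assigned OUI plus per-device serial is what gives the world-wide uniqueness the text describes (the broadcast address ff:ff:ff:ff:ff:ff has the multicast bit set).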

An example of a physical network is an Ethernet network, perhaps extended by wireless local
area network (WLAN) access points and WLAN network adapters, since these share the same
48-bit MAC address hierarchy as Ethernet.
A MAC layer is not required in full-duplex point-to-point communication, but address fields are
included in some point-to-point protocols for compatibility reasons.

Channel access control mechanism:

The channel access control mechanisms provided by the MAC layer are also known as a multiple
access protocol. This makes it possible for several stations connected to the same physical
medium to share it. Examples of shared physical media are bus networks, ring networks, hub
networks, wireless networks and half-duplex point-to-point links. The multiple access protocol
may detect or avoid data packet collisions if a packet mode contention based channel access
method is used, or reserve resources to establish a logical channel if a circuit switched or
channelization based channel access method is used. The channel access control mechanism
relies on a physical layer multiplex scheme.

The most widespread multiple access protocol is the contention based CSMA/CD protocol used
in Ethernet networks. This mechanism is only utilized within a network collision domain, for
example an Ethernet bus network or a hub network. An Ethernet network may be divided into
several collision domains, interconnected by bridges and switches.

A multiple access protocol is not required in a switched full-duplex network, such as today's
switched Ethernet networks, but is often available in the equipment for compatibility reasons.

Common multiple access protocols:

Examples of common packet mode multiple access protocols for wired multi-drop networks are:

CSMA/CD (used in Ethernet and IEEE 802.3)
Token bus (IEEE 802.4)
Token ring (IEEE 802.5)
Token passing (used in FDDI)

Examples of common multiple access protocols that may be used in packet radio wireless
networks are:

CSMA/CA (used in IEEE 802.11/WiFi WLANs)
Slotted ALOHA
Dynamic TDMA
Reservation ALOHA (R-ALOHA)
Mobile Slotted Aloha (MS-ALOHA)
CDMA
OFDMA

ALOHAnet:

ALOHAnet, also known as the ALOHA System,[1][2] or simply ALOHA, was a pioneering
computer networking system[3] developed at the University of Hawaii. ALOHAnet became
operational in June, 1971, providing the first public demonstration of a wireless packet data
network.

The ALOHAnet used a new method of medium access (ALOHA random access) and
experimental UHF frequencies for its operation, since frequency assignments for
communications to and from a computer were not available for commercial applications in the
1970s. But even before such frequencies were assigned there were two other media available for
the application of an ALOHA channel: cables and satellites. In the 1970s ALOHA random
access was employed in the widely used Ethernet cable based network and then in the Marisat
(now Inmarsat) satellite network.

In the early 1980s frequencies for mobile networks became available, and in 1985 frequencies
suitable for what became known as Wi-Fi were allocated in the US. These regulatory
developments made it possible to use the ALOHA random access techniques in both Wi-Fi and
in mobile telephone networks.

ALOHA channels were used in a limited way in the 1980s in 1G mobile phones for signaling
and control purposes. In the 1990s, Matti Makkonen and others at Telecom Finland greatly
expanded the use of ALOHA channels in order to implement SMS message texting in 2G mobile
phones. In the early 2000s additional ALOHA channels were added to 2.5G and 3G mobile
phones with the widespread introduction of GPRS, using a slotted ALOHA random access
channel combined with a version of the Reservation ALOHA scheme first analyzed by a group at
BBN.

Overview:

One of the early computer networking designs, development of the ALOHA network was begun
in 1968 at the University of Hawaii under the leadership of Norman Abramson and others
(including F. Kuo, N. Gaarder and N. Weldon). The goal was to use low-cost commercial radio
equipment to connect users on Oahu and the other Hawaiian islands with a central time-sharing
computer on the main Oahu campus.

The original version of ALOHA used two distinct frequencies in a hub/star configuration, with
the hub machine broadcasting packets to everyone on the "outbound" channel, and the various
client machines sending data packets to the hub on the "inbound" channel. If data was received
correctly at the hub, a short acknowledgment packet was sent to the client; if an acknowledgment
was not received by a client machine after a short wait time, it would automatically retransmit
the data packet after waiting a randomly selected time interval. This acknowledgment
mechanism was used to detect and correct for "collisions" created when two client machines both
attempted to send a packet at the same time.

ALOHAnet's primary importance was its use of a shared medium for client transmissions.
Unlike the ARPANET where each node could only talk directly to a node at the other end of a
wire or satellite circuit, in ALOHAnet all client nodes communicated with the hub on the same
frequency. This meant that some sort of mechanism was needed to control who could talk at
what time. The ALOHAnet solution was to allow each client to send its data without controlling
when it was sent, with an acknowledgment/retransmission scheme used to deal with collisions.
This became known as a pure ALOHA or random-accessed channel, and was the basis for
subsequent Ethernet development and later Wi-Fi networks. Various versions of the ALOHA
protocol (such as Slotted ALOHA) also appeared later in satellite communications, and were
used in wireless data networks such as ARDIS, Mobitex, CDPD, and GSM.

Also important was ALOHAnet's use of the outgoing hub channel to broadcast packets directly
to all clients on a second shared frequency, using an address in each packet to allow selective
receipt at each client node.

The ALOHA protocol:

Pure ALOHA:

The first version of the protocol (now called "Pure ALOHA", and the one implemented in
ALOHAnet) was quite simple:

If you have data to send, send the data
If the message collides with another transmission, try resending "later"

Note that the first step implies that Pure ALOHA does not check whether the channel is busy
before transmitting. The critical aspect is the "later" concept: the quality of the backoff scheme
chosen significantly influences the efficiency of the protocol, the ultimate channel capacity, and
the predictability of its behavior.

To assess Pure ALOHA, we need to predict its throughput, the rate of (successful) transmission
of frames. (This discussion of Pure ALOHA's performance follows Tanenbaum.) First, let's make
a few simplifying assumptions:

All frames have the same length.
Stations cannot generate a frame while transmitting or trying to transmit. (That is, if a
station keeps trying to send a frame, it cannot be allowed to generate more frames to send.)
The population of stations attempts to transmit (both new frames and old frames that
collided) according to a Poisson distribution.

Let "T" refer to the time needed to transmit one frame on the channel, and let's define "frame-
time" as a unit of time equal to T. Let "G" refer to the mean used in the Poisson distribution over
transmission-attempt amounts: that is, on average, there are G transmission-attempts per frame-
time.

Consider what needs to happen for a frame to be transmitted successfully. Let "t" refer to the
time at which we want to send a frame. We want to use the channel for one frame-time
beginning at t, and so we need all other stations to refrain from transmitting during this time.
Moreover, we need the other stations to refrain from transmitting between t-T and t as well,
because a frame sent during this interval would overlap with our frame.

For any frame-time, the probability of there being k transmission-attempts during that frame-
time is:

Prob(k) = (G^k * e^(-G)) / k!

The average number of transmission-attempts over 2 consecutive frame-times is 2G. Hence, for
any pair of consecutive frame-times, the probability of there being k transmission-attempts
during those two frame-times is:

Prob(k) = ((2G)^k * e^(-2G)) / k!

Therefore, the probability (Prob_pure) of there being zero transmission-attempts between t-T and
t+T (and thus of a successful transmission for us) is:

Prob_pure = e^(-2G)

The throughput can be calculated as the rate of transmission-attempts multiplied by the
probability of success, and so we can conclude that the throughput (S_pure) is:

S_pure = G * e^(-2G)

The maximum throughput is 0.5/e frames per frame-time (reached when G = 0.5), which is
approximately 0.184 frames per frame-time. This means that, in Pure ALOHA, only about 18.4%
of the time is used for successful transmissions.
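The throughput expression and its maximum can be checked numerically with a short sketch:

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G): the offered load G times the probability that
    no other attempt starts in the two-frame-time vulnerable window."""
    return G * math.exp(-2 * G)

# Throughput peaks at G = 0.5 with S = 1/(2e), roughly 0.184 frames
# per frame-time, matching the figure quoted in the text.
peak = pure_aloha_throughput(0.5)
assert abs(peak - 1 / (2 * math.e)) < 1e-12
assert peak > pure_aloha_throughput(0.4)
assert peak > pure_aloha_throughput(0.6)
```

Pushing the offered load past G = 0.5 only increases collisions, so throughput falls again: the channel is fundamentally limited by its two-frame-time vulnerability window.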

Slotted ALOHA:

An improvement to the original ALOHA protocol was "Slotted ALOHA", which introduced
discrete timeslots and increased the maximum throughput.[10] A station can send only at the
beginning of a timeslot, and thus collisions are reduced. In this case, we only need to worry
about the transmission-attempts within 1 frame-time and not 2 consecutive frame-times, since
collisions can only occur during each timeslot. Thus, the probability of there being zero
transmission-attempts in a single timeslot is:

Prob_slotted = e^(-G)

The probability that a given frame requires exactly k transmission attempts (k-1 collisions
followed by one success) is:

Prob_slotted(k) = e^(-G) * (1 - e^(-G))^(k-1)

The throughput is:

S_slotted = G * e^(-G)

The maximum throughput is 1/e frames per frame-time (reached when G = 1), which is
approximately 0.368 frames per frame-time, or 36.8%.
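The slotted ALOHA result can also be verified by Monte Carlo simulation: draw a Poisson(G) number of attempts per slot and count the slots with exactly one transmitter. The slot count and seed below are arbitrary choices for the sketch:

```python
import math
import random

def simulate_slotted_aloha(G, n_slots=200_000, seed=1):
    """Estimate slotted ALOHA throughput: a slot carries a successful
    frame iff exactly one station transmits in it."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_slots):
        # Poisson sampling via Knuth's multiplication method
        # (adequate for small G).
        L, k, p = math.exp(-G), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                break
            k += 1
        if k == 1:
            successes += 1
    return successes / n_slots

measured = simulate_slotted_aloha(1.0)
# Theory predicts S = 1 * e^(-1), about 0.368 frames per slot.
assert abs(measured - 1 / math.e) < 0.01
```

The same simulator with a two-slot vulnerability window reproduces the pure ALOHA curve, which makes the factor-of-two throughput gap between the two schemes easy to see experimentally.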

Slotted ALOHA is used in low-data-rate tactical satellite communications networks by military
forces, in subscriber-based satellite communications networks, mobile telephony call setup, and
in the contactless RFID technologies.
Carrier sense multiple access:

Carrier Sense Multiple Access (CSMA) is a probabilistic Media Access Control (MAC)
protocol in which a node verifies the absence of other traffic before transmitting on a shared
transmission medium, such as an electrical bus, or a band of the electromagnetic spectrum.

"Carrier Sense" describes the fact that a transmitter uses feedback from a receiver that detects a
carrier wave before trying to send. That is, it tries to detect the presence of an encoded signal
from another station before attempting to transmit. If a carrier is sensed, the station waits for the
transmission in progress to finish before initiating its own transmission.

"Multiple Access" describes the fact that multiple stations send and receive on the medium.
Transmissions by one node are generally received by all other stations using the medium.

Protocol modifications:

Carrier sense multiple access with collision detection (CSMA/CD) is a modification of CSMA.
CSMA/CD is used to improve CSMA performance by terminating transmission as soon as a
collision is detected, and reducing the probability of a second collision on retry.

Carrier sense multiple access with collision avoidance (CSMA/CA) is a modification of CSMA.
Collision avoidance is used to improve the performance of CSMA by attempting to be less
"greedy" on the channel. If the channel is sensed busy before transmission then the transmission
is deferred for a "random" interval. This reduces the probability of collisions on the channel.

CSMA access modes:

1-persistent
When the sender (station) is ready to transmit data, it checks if the physical medium is
busy. If so, it senses the medium continually until it becomes idle, and then it transmits a
piece of data (a frame). In case of a collision, the sender waits for a random period of
time and attempts to transmit again. 1-persistent CSMA is used in CSMA/CD systems
including Ethernet.
P-persistent
When the sender is ready to send data, it checks continually if the medium is busy. If the
medium becomes idle, the sender transmits a frame with a probability p. If the station
chooses not to transmit (the probability of this event is 1-p), the sender waits until the
next available time slot and transmits again with the same probability p. This process
repeats until the frame is sent or some other sender starts transmitting. In the latter case
the sender monitors the channel, and when idle, transmits with a probability p, and so on.
p-persistent CSMA is used in CSMA/CA systems including WiFi and other packet radio
systems.
O-persistent
Each station is assigned a transmission order by a supervisor station. When medium goes
idle, stations wait for their time slot in accordance with their assigned transmission order.
The station assigned to transmit first transmits immediately. The station assigned to
transmit second waits one time slot (but by that time the first station has already started
transmitting). Stations monitor the medium for transmissions from other stations and
update their assigned order with each detected transmission (i.e. they move one position
closer to the front of the queue).[1] O-persistent CSMA is used by CobraNet, LonWorks
and the controller area network.
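The persistence strategies differ only in what a ready station does when it samples the channel. A toy decision function for the p-persistent case (the function name and return strings are ours, purely for illustration):

```python
import random

def p_persistent_decision(channel_busy, p, rng=random.random):
    """One slot of p-persistent CSMA: if the channel is busy, keep
    waiting; if idle, transmit with probability p, otherwise defer
    to the next slot and try again."""
    if channel_busy:
        return "wait"
    return "transmit" if rng() < p else "defer"

assert p_persistent_decision(True, 0.5) == "wait"
# p = 1 degenerates to 1-persistent CSMA: always transmit on idle.
assert p_persistent_decision(False, 1.0) == "transmit"
assert p_persistent_decision(False, 0.0) == "defer"
```

Choosing p trades latency against collision probability: small p spreads contending stations across more slots, while p = 1 recovers the aggressive 1-persistent behavior used by Ethernet.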

Carrier sense multiple access with collision detection:

Carrier sense multiple access with collision detection (CSMA/CD) is a Media Access Control
method in which:[1]

a carrier sensing scheme is used.
a transmitting data station that detects another signal while transmitting a frame, stops
transmitting that frame, transmits a jam signal, and then waits for a random time interval
before trying to resend the frame.

CSMA/CD is a modification of pure carrier sense multiple access (CSMA). CSMA/CD is used
to improve CSMA performance by terminating transmission as soon as a collision is detected,
thus shortening the time required before a retry can be attempted.

Algorithm:

When a station wants to send some information, it uses the following algorithm:

Main procedure

1. Frame ready for transmission.
2. Is medium idle? If not, wait until it becomes ready.
3. Start transmitting.
4. Did a collision occur? If so, go to collision detected procedure.
5. Reset retransmission counters and end frame transmission.

Collision detected procedure

1. Continue transmission until minimum packet time is reached to ensure that all receivers
detect the collision.
2. Increment retransmission counter.
3. Was the maximum number of transmission attempts reached? If so, abort transmission.
4. Calculate and wait random backoff period based on number of collisions.
5. Re-enter main procedure at stage 1.

This can be likened to what happens at a dinner party, where all the guests talk to each other
through a common medium (the air). Before speaking, each guest politely waits for the current
speaker to finish. If two guests start speaking at the same time, both stop and wait for short,
random periods of time (in Ethernet, this time is measured in microseconds). The hope is that by
each choosing a random period of time, both guests will not choose the same time to try to speak
again, thus avoiding another collision.
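The random waiting period in step 4 of the collision detected procedure is, in Ethernet, a truncated binary exponential backoff. A minimal sketch:

```python
import random

def backoff_slots(n_collisions, rng=None, max_exp=10):
    """Truncated binary exponential backoff as used by Ethernet: after
    the n-th collision, wait a random number of slot times drawn
    uniformly from 0 .. 2**min(n, 10) - 1."""
    rng = rng or random.Random()
    return rng.randrange(2 ** min(n_collisions, max_exp))

# The waiting window doubles with each collision, capped at 1024 slots:
assert all(0 <= backoff_slots(1) <= 1 for _ in range(100))
assert all(0 <= backoff_slots(3) <= 7 for _ in range(100))
assert backoff_slots(16) <= 1023
```

Doubling the window after every collision quickly spreads the guests' "random periods of time" apart, making a repeat collision between the same two stations exponentially unlikely.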

Methods for collision detection are media dependent, but on an electrical bus such as 10BASE-5
or 10BASE-2, collisions can be detected by comparing transmitted data with received data or by
recognizing a higher than normal signal amplitude on the bus.

Jam signal:

The jam signal is a signal that carries a 32-bit binary pattern sent by a data station to inform the
other stations that they must not transmit.

The maximum jam-time is calculated as follows: The maximum allowed diameter of an Ethernet
installation is limited to 232 bits. This makes a round-trip-time of 464 bits. As the slot time in
Ethernet is 512 bits, the difference between slot time and round-trip-time is 48 bits (6 bytes),
which is the maximum "jam-time".
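The jam-time bound above, restated as straightforward arithmetic in bit times:

```python
# Figures taken from the text: a 232-bit one-way propagation budget
# and a 512-bit slot time.
max_diameter_bits = 232
round_trip_bits = 2 * max_diameter_bits   # 464 bits
slot_time_bits = 512

jam_time_bits = slot_time_bits - round_trip_bits
assert jam_time_bits == 48        # 48 bits
assert jam_time_bits // 8 == 6    # = 6 bytes, the maximum jam-time
```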

This in turn means: a station that has detected a collision sends a 4 to 6 byte long pattern
composed of 16 1-0 bit combinations. Note: the size of this jam signal is clearly below the
minimum allowed frame-size of 64 bytes.

The purpose of this is to ensure that any other node which may currently be receiving a frame
will receive the jam signal in place of the correct 32-bit MAC CRC, this causes the other
receivers to discard the frame due to a CRC error.

Applications:
CSMA/CD was used in now obsolete shared media Ethernet variants (10BASE5, 10BASE2) and
in the early versions of twisted-pair Ethernet which used repeater hubs. Modern Ethernet
networks built with switches and full-duplex connections no longer utilize CSMA/CD though it
is still supported for backwards compatibility. IEEE Std 802.3, which defines all Ethernet
variants, for historical reasons still bears the title "Carrier sense multiple access with collision
detection (CSMA/CD) access method and physical layer specifications".

Variations of the concept are used in radio frequency systems that rely on frequency sharing,
including Automatic Packet Reporting System.

Carrier sense multiple access with collision avoidance:

Carrier sense multiple access with collision avoidance (CSMA/CA), in computer networking,
is a wireless network multiple access method in which:

a carrier sensing scheme is used.
a node wishing to transmit data has to first listen to the channel for a predetermined
amount of time to determine whether or not another node is transmitting on the channel
within the wireless range. If the channel is sensed "idle," then the node is permitted to begin
the transmission process. If the channel is sensed as "busy," the node defers its transmission
for a random period of time. Once the transmission process begins, it is still possible for the
actual transmission of application data to not occur.

CSMA/CA is a modification of carrier sense multiple access.

Collision avoidance is used to improve CSMA performance by not allowing a node to transmit
while another node is transmitting, thus reducing the probability of collision through the use
of a random truncated binary exponential backoff time.

Optionally, but almost always implemented, an IEEE 802.11 RTS/CTS exchange can be required
to better handle situations such as the hidden node problem in wireless networking.

CSMA/CA is a layer 2 access method, not a protocol of the OSI model.

IEEE 802.11:

IEEE 802.11 is a set of standards for implementing wireless local area network (WLAN)
computer communication in the 2.4, 3.6 and 5 GHz frequency bands. They are created and
maintained by the IEEE LAN/MAN Standards Committee (IEEE 802). The base version of the
standard IEEE 802.11-2007 has had subsequent amendments. These standards provide the basis
for wireless network products using the Wi-Fi brand name.

General description:

The 802.11 family consists of a series of over-the-air modulation techniques that use the same
basic protocol. The most popular are those defined by the 802.11b and 802.11g protocols, which
are amendments to the original standard. 802.11-1997 was the first wireless networking standard,
but 802.11b was the first widely accepted one, followed by 802.11g and 802.11n. 802.11n is a
new multi-streaming modulation technique. Other standards in the family (c-f, h, j) are service
amendments and extensions or corrections to the previous specifications.

802.11b and 802.11g use the 2.4 GHz ISM band, operating in the United States under Part 15 of
the US Federal Communications Commission Rules and Regulations. Because of this choice of
frequency band, 802.11b and g equipment may occasionally suffer interference from microwave
ovens, cordless telephones and Bluetooth devices. 802.11b and 802.11g control their interference
and susceptibility to interference by using direct-sequence spread spectrum (DSSS) and
orthogonal frequency-division multiplexing (OFDM) signaling methods, respectively. 802.11a
uses the 5 GHz U-NII band, which, for much of the world, offers at least 23 non-overlapping
channels rather than the 2.4 GHz ISM frequency band, where all channels overlap.[1] Better or
worse performance with higher or lower frequencies (channels) may be realized, depending on
the environment.
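
The channel overlap in the 2.4 GHz ISM band follows from simple arithmetic: channel n
(1-13) is centered at 2407 + 5n MHz, while an 802.11b DSSS channel is roughly 22 MHz wide.
A short illustrative sketch:

```python
def channel_center_mhz(n):
    """Center frequency of 2.4 GHz channel n (1-13): 2407 + 5*n MHz."""
    return 2407 + 5 * n

def channels_overlap(a, b, width_mhz=22):
    """Two DSSS channels overlap if their centers are closer together
    than the channel width (~22 MHz for 802.11b)."""
    return abs(channel_center_mhz(a) - channel_center_mhz(b)) < width_mhz

# Channels 1, 6 and 11 (2412, 2437, 2462 MHz) are the classic
# non-overlapping set: each pair is 25 MHz apart.
```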

The segment of the radio frequency spectrum used by 802.11 varies between countries. In the
US, 802.11a and 802.11g devices may be operated without a license, as allowed in Part 15 of the
FCC Rules and Regulations. Frequencies used by channels one through six of 802.11b and
802.11g fall within the 2.4 GHz amateur radio band. Licensed amateur radio operators may
operate 802.11b/g devices under Part 97 of the FCC Rules and Regulations, allowing increased
power output but not commercial content or encryption.

History:

802.11 technology has its origins in a 1985 ruling by the U.S. Federal Communications
Commission that released the ISM band for unlicensed use.[3][4]

In 1991 NCR Corporation/AT&T (now Alcatel-Lucent and LSI Corporation) invented the
precursor to 802.11 in Nieuwegein, The Netherlands. The inventors initially intended to use the
technology for cashier systems; the first wireless products were brought on the market under the
name WaveLAN with raw data rates of 1 Mbit/s and 2 Mbit/s.

Vic Hayes, who held the chair of IEEE 802.11 for 10 years and has been called the "father of
Wi-Fi", was involved in designing the initial 802.11b and 802.11a standards within the IEEE.

In 1999, the Wi-Fi Alliance was formed as a trade association to hold the Wi-Fi trademark under
which most products are sold.

Protocols:

The original version of the standard IEEE 802.11 was released in 1997 and clarified in 1999, but
is today obsolete. It specified two net bit rates of 1 or 2 megabits per second (Mbit/s), plus
forward error correction code. It specified three alternative physical layer technologies: diffuse
infrared operating at 1 Mbit/s; frequency-hopping spread spectrum operating at 1 Mbit/s or 2
Mbit/s; and direct-sequence spread spectrum operating at 1 Mbit/s or 2 Mbit/s. The latter two
radio technologies used microwave transmission over the Industrial Scientific Medical frequency
band at 2.4 GHz. Some earlier WLAN technologies used lower frequencies, such as the U.S.
900 MHz ISM band.

Legacy 802.11 with direct-sequence spread spectrum was rapidly supplanted and popularized by
802.11b.

802.11a:

The 802.11a standard uses the same data link layer protocol and frame format as the original
standard, but an OFDM based air interface (physical layer). It operates in the 5 GHz band with a
maximum net data rate of 54 Mbit/s, plus error correction code, which yields realistic net
achievable throughput in the mid-20 Mbit/s range.

Since the 2.4 GHz band is heavily used to the point of being crowded, using the relatively
unused 5 GHz band gives 802.11a a significant advantage. However, this high carrier frequency
also brings a disadvantage: the effective overall range of 802.11a is less than that of 802.11b/g.
In theory, 802.11a signals are absorbed more readily by walls and other solid objects in their path
due to their smaller wavelength and, as a result, cannot penetrate as far as those of 802.11b. In
practice, 802.11b typically has a higher range at low speeds (802.11b will reduce speed to 5
Mbit/s or even 1 Mbit/s at low signal strengths). 802.11a also suffers from interference[10], but
locally there may be fewer signals to interfere with, resulting in less interference and better
throughput.
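
The range difference above follows from the carrier wavelengths. As a rough illustration
(free-space values only; real-world attenuation depends on the materials in the path):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_cm(freq_ghz):
    """Free-space wavelength for a carrier frequency given in GHz."""
    return C / (freq_ghz * 1e9) * 100

# 2.4 GHz -> ~12.5 cm, 5 GHz -> ~6 cm: the shorter 5 GHz wavelength is
# absorbed more readily by walls, so 802.11a penetrates less far.
```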

802.11b

802.11b has a maximum raw data rate of 11 Mbit/s and uses the same media access method
defined in the original standard. 802.11b products appeared on the market in early 2000, since
802.11b is a direct extension of the modulation technique defined in the original standard. The
dramatic increase in throughput of 802.11b (compared to the original standard) along with
simultaneous substantial price reductions led to the rapid acceptance of 802.11b as the definitive
wireless LAN technology.

802.11b devices suffer interference from other products operating in the 2.4 GHz band. Devices
operating in the 2.4 GHz range include: microwave ovens, Bluetooth devices, baby monitors,
and cordless telephones.

802.11g

In June 2003, a third modulation standard was ratified: 802.11g. This works in the 2.4 GHz band
(like 802.11b), but uses the same OFDM based transmission scheme as 802.11a. It operates at a
maximum physical layer bit rate of 54 Mbit/s exclusive of forward error correction codes, or
about 22 Mbit/s average throughput. 802.11g hardware is fully backwards compatible with
802.11b hardware and therefore is encumbered with legacy issues that reduce throughput when
compared to 802.11a by ~21%.
The then-proposed 802.11g standard was rapidly adopted by consumers starting in January 2003,
well before ratification, due to the desire for higher data rates as well as to reductions in
manufacturing costs. By summer 2003, most dual-band 802.11a/b products became dual-
band/tri-mode, supporting a and b/g in a single mobile adapter card or access point. Details of
making b and g work well together occupied much of the lingering technical process; in an
802.11g network, however, activity of an 802.11b participant will reduce the data rate of the
overall 802.11g network.

Like 802.11b, 802.11g devices suffer interference from other products operating in the 2.4 GHz
band, for example wireless keyboards.

802.11-2007

In 2003, task group TGma was authorized to "roll up" many of the amendments to the 1999
version of the 802.11 standard. REVma or 802.11ma, as it was called, created a single document
that merged 8 amendments (802.11a, b, d, e, g, h, i, j) with the base standard. Upon approval on
March 8, 2007, 802.11REVma was renamed to the then-current base standard IEEE 802.11-
2007.

802.11n

802.11n is an amendment which improves upon the previous 802.11 standards by adding
multiple-input multiple-output antennas (MIMO). 802.11n operates on both the 2.4 GHz and the
lesser used 5 GHz bands. The IEEE has approved the amendment and it was published in
October 2009. Prior to the final ratification, enterprises were already migrating to 802.11n
networks based on the Wi-Fi Alliance's certification of products conforming to a 2007 draft of
the 802.11n proposal.

Bluetooth:

Bluetooth is a proprietary open wireless technology standard for exchanging data over short
distances (using short wavelength radio transmissions in the ISM band from 2400-2480 MHz)
from fixed and mobile devices, creating personal area networks (PANs) with high levels of
security. Created by telecoms vendor Ericsson in 1994, it was originally conceived as a wireless
alternative to RS-232 data cables. It can connect several devices, overcoming problems of
synchronization.

Bluetooth is managed by the Bluetooth Special Interest Group, which has more than 15,000
member companies in the areas of telecommunication, computing, networking, and consumer
electronics. The SIG oversees the development of the specification, manages the qualification
program, and protects the trademarks. To be marketed as a Bluetooth device, it must be qualified
to standards defined by the SIG. A network of patents is required to implement the technology
and are only licensed to those qualifying devices; thus the protocol, whilst open, may be
regarded as proprietary.
Implementation:

Bluetooth uses a radio technology called frequency-hopping spread spectrum, which chops up
the data being sent and transmits chunks of it on up to 79 bands (1 MHz each; centered from
2402 to 2480 MHz) in the range 2,400-2,483.5 MHz (allowing for guard bands). This range is in
the globally unlicensed Industrial, Scientific and Medical (ISM) 2.4 GHz short-range radio
frequency band.
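
The 79-channel map described above is easy to reproduce. Note that the actual pseudo-random
hop sequence is derived from the master's device address and clock, which this small sketch
does not model:

```python
def bt_channel_mhz(k):
    """Center frequency of Bluetooth BR/EDR channel k: 2402 + k MHz.

    There are 79 channels of 1 MHz each, numbered 0-78, spanning
    2402-2480 MHz inside the 2.4 GHz ISM band."""
    if not 0 <= k <= 78:
        raise ValueError("Bluetooth BR/EDR uses channels 0-78")
    return 2402 + k
```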

Originally Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme
available; since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK and 8DPSK modulation may
also be used between compatible devices. Devices functioning with GFSK are said to be
operating in basic rate (BR) mode, where an instantaneous data rate of 1 Mbit/s is possible. The
term Enhanced Data Rate (EDR) is used to describe the π/4-DQPSK and 8DPSK schemes, giving
2 and 3 Mbit/s respectively. The combination of these (BR and EDR) modes in Bluetooth radio
technology is classified as a "BR/EDR radio".

Bluetooth is a packet-based protocol with a master-slave structure. One master may
communicate with up to 7 slaves in a piconet; all devices share the master's clock. Packet
exchange is based on the basic clock, defined by the master, which ticks at 312.5 µs intervals.
Two clock ticks make up a slot of 625 µs; two slots make up a slot pair of 1250 µs. In the simple
case of single-slot packets the master transmits in even slots and receives in odd slots; the slave,
conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3 or 5 slots long,
but in all cases the master's transmission begins in even slots and the slave's in odd slots.
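
The slot discipline above can be sketched as follows (single-slot packets only; the function
names are illustrative):

```python
SLOT_US = 625  # one Bluetooth slot = two basic clock ticks of 312.5 µs

def who_transmits(slot_number):
    """In the single-slot case the master transmits in even slots and
    the slave in odd slots."""
    return "master" if slot_number % 2 == 0 else "slave"

def slot_start_us(slot_number):
    """Start time of a slot, in microseconds on the master's clock."""
    return slot_number * SLOT_US

# Slots 0 and 1 together form one 1250 µs slot pair:
# master sends in slot 0 (t = 0), slave replies in slot 1 (t = 625 µs).
```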

Bluetooth provides a secure way to connect and exchange information between devices such as
faxes, mobile phones, telephones, laptops, personal computers, printers, Global Positioning
System (GPS) receivers, digital cameras, and video game consoles.

Uses:

Bluetooth is a standard wire-replacement communications protocol primarily designed for low
power consumption, with a short range (power-class-dependent, but effective ranges vary in
practice; see table below) based on low-cost transceiver microchips in each device. Because the
devices use a radio (broadcast) communications system, they do not have to be in visual line of
sight of each other; however, a quasi-optical wireless path must be viable.

List of applications:

- Wireless control of and communication between a mobile phone and a hands-free headset. This was one of the earliest applications to become popular.
- Wireless control of and communication between a mobile phone and a Bluetooth-compatible car stereo system.
- Wireless Bluetooth headsets and intercoms.
- Wireless networking between PCs in a confined space and where little bandwidth is required.
- Wireless communication with PC input and output devices, the most common being the mouse, keyboard and printer.
- Transfer of files, contact details, calendar appointments, and reminders between devices with OBEX.
- Replacement of previous wired RS-232 serial communications in test equipment, GPS receivers, medical equipment, bar code scanners, and traffic control devices.
- For controls where infrared was often used.
- For low-bandwidth applications where higher USB bandwidth is not required and a cable-free connection is desired.
- Sending small advertisements from Bluetooth-enabled advertising hoardings to other, discoverable, Bluetooth devices.
- Wireless bridge between two Industrial Ethernet (e.g., PROFINET) networks.
- Three seventh-generation game consoles, Nintendo's Wii[11] and Sony's PlayStation 3 and PSP Go, use Bluetooth for their respective wireless controllers.
- Dial-up internet access on personal computers or PDAs using a data-capable mobile phone as a wireless modem.
- Short-range transmission of health sensor data from medical devices to mobile phone, set-top box or dedicated telehealth devices.
- Allowing a DECT phone to ring and answer calls on behalf of a nearby cell phone.
- Real-time location systems (RTLS), used to track and identify the location of objects in real time using nodes or tags attached to, or embedded in, the objects tracked, and readers that receive and process the wireless signals from these tags to determine their locations.
- Personal security applications on mobile phones for prevention of theft or loss of items. The protected item has a Bluetooth marker (e.g. a tag) that is in constant communication with the phone. If the connection is broken (the marker is out of range of the phone) then an alarm is raised. This can also be used as a man-overboard alarm. A product using this technology has been available since 2009.

Bluetooth vs. Wi-Fi (IEEE 802.11):

Bluetooth and Wi-Fi (the brand name for products using IEEE 802.11 standards) have some
similar applications: setting up networks, printing, or transferring files. Wi-Fi is intended as a
replacement for cabling for general local area network access in work areas. This category of
applications is sometimes called wireless local area networks (WLAN). Bluetooth was intended
for portable equipment and its applications. The category of applications is outlined as the
wireless personal area network (WPAN). Bluetooth is a replacement for cabling in a variety of
personally carried applications in any setting and can also support fixed location applications
such as smart energy functionality in the home (thermostats, etc.).

Wi-Fi is a wireless version of a common wired Ethernet network, and requires configuration to
set up shared resources, transmit files, and to set up audio links (for example, headsets and
hands-free devices). Wi-Fi uses the same radio frequencies as Bluetooth, but with higher power,
resulting in higher bit rates and better range from the base station. The nearest equivalents in
Bluetooth are the DUN profile, which allows devices to act as modem interfaces, and the PAN
profile, which allows for ad-hoc networking.
Mobile IP:

Mobile IP (or IP mobility) is an Internet Engineering Task Force (IETF) standard
communications protocol that is designed to allow mobile device users to move from one
network to another while maintaining a permanent IP address. Mobile IP for IPv4 is described in
IETF RFC 5944, and extensions are defined in IETF RFC 4721. Mobile IPv6, the IP mobility
implementation for the next generation of the Internet Protocol, IPv6, is described in RFC 6275.

Introduction:

The Mobile IP protocol allows location-independent routing of IP datagrams on the Internet.
Each mobile node is identified by its home address regardless of its current location in the
Internet. While away from its home network, a mobile node is associated with a care-of address
which identifies its current location and its home address is associated with the local endpoint of
a tunnel to its home agent. Mobile IP specifies how a mobile node registers with its home agent
and how the home agent routes datagrams to the mobile node through the tunnel.

Applications:

In many applications (e.g., VPN, VoIP), sudden changes in network connectivity and IP address
can cause problems. Mobile IP was designed to support seamless and continuous Internet
connectivity.

Mobile IP is most often found in wired and wireless environments where users need to carry
their mobile devices across multiple LAN subnets. Examples of use are in roaming between
overlapping wireless systems, e.g., IP over DVB, WLAN, WiMAX and BWA.

Mobile IP is not required within cellular systems such as 3G, to provide transparency when
Internet users migrate between cellular towers, since these systems provide their own data link
layer handover and roaming mechanisms. However, it is often used in 3G systems to allow
seamless IP mobility between different packet data serving node (PDSN) domains.

Operational principles:

A mobile node has two addresses - a permanent home address and a care-of address (CoA),
which is associated with the network the mobile node is visiting. Two kinds of entities comprise
a Mobile IP implementation:

- A home agent stores information about mobile nodes whose permanent home address is in the home agent's network.
- A foreign agent stores information about mobile nodes visiting its network. Foreign agents also advertise care-of addresses, which are used by Mobile IP. If there is no foreign agent in the host network, the mobile device has to take care of getting an address and advertising that address by its own means.

A node wanting to communicate with the mobile node uses the permanent home address of the
mobile node as the destination address to send packets to. Because the home address logically
belongs to the network associated with the home agent, normal IP routing mechanisms forward
these packets to the home agent. Instead of forwarding these packets to a destination that is
physically in the same network as the home agent, the home agent redirects these packets
towards the remote address through an IP tunnel by encapsulating the datagram with a new IP
header using the care of address of the mobile node.

When acting as transmitter, a mobile node sends packets directly to the other communicating
node, without sending the packets through the home agent, using its permanent home address as
the source address for the IP packets. This is known as triangular routing. If needed, the foreign
agent could employ reverse tunneling by tunneling the mobile node's packets to the home agent,
which in turn forwards them to the communicating node. This is needed in networks whose
gateway routers check that the source IP address of the mobile host belongs to their subnet or
discard the packet otherwise.
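
The home agent's redirection step can be sketched conceptually. This is a toy model, not a
real IP stack; `Packet`, `home_agent_forward`, the binding table, and the addresses are all
illustrative:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: object  # inner packet, or application data

def home_agent_forward(pkt, bindings, home_agent_addr):
    """If the destination is a registered mobile node, encapsulate the
    datagram in a new IP header addressed to its care-of address
    (IP-in-IP tunneling); otherwise deliver normally."""
    coa = bindings.get(pkt.dst)
    if coa is None:
        return pkt  # not a mobile node we serve: normal routing
    return Packet(src=home_agent_addr, dst=coa, payload=pkt)

# Example binding: home address -> care-of address on the visited network
# bindings = {"10.0.0.5": "192.168.7.20"}
```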

Development:

Enhancements to the Mobile IP technique, such as Mobile IPv6 and Hierarchical Mobile IPv6
(HMIPv6) defined in RFC 5380, are being developed to improve mobile communications in
certain circumstances by making the processes more secure and more efficient.

Researchers are creating support for mobile networking that requires no pre-deployed
infrastructure, as Mobile IP currently does. One such example is the Interactive Protocol for
Mobile Networking (IPMN), which promises to support mobility on a regular IP network purely
from the network edges, by intelligent signalling between IP at the end-points and an
application-layer module, with improved quality of service.

Researchers are also working to create support for mobile networking between entire subnets
with support from Mobile IPv6. One such example is Network Mobility (NEMO) Network
Mobility Basic Support Protocol, by the IETF Network Mobility Working Group, which supports
mobility for entire mobile networks that move and attach to different points in the Internet.
The protocol is an extension of Mobile IPv6 and allows session continuity for every node in the
mobile network as the network moves.

Changes in IPv6 for Mobile IPv6

- A set of mobility options to include in mobility messages
- A new Home Address option for the Destination Options header
- A new Type 2 Routing header
- New Internet Control Message Protocol for IPv6 (ICMPv6) messages to discover the set of home agents and to obtain the prefix of the home link
- Changes to router discovery messages and options, and additional Neighbor Discovery options

IPv6:
Internet Protocol version 6 (IPv6) is a version of the Internet Protocol (IP). It is designed to
succeed the Internet Protocol version 4 (IPv4). The Internet operates by transferring data
between hosts in small packets that are independently routed across networks as specified by an
international communications protocol known as the Internet Protocol.

Each host or computer on the Internet requires an IP address in order to communicate. The
growth of the Internet has created a need for more addresses than are possible with IPv4. IPv6
was developed by the Internet Engineering Task Force (IETF) to deal with this long-anticipated
IPv4 address exhaustion, and is described in Internet standard document RFC 2460, published in
December 1998.[1] Like IPv4, IPv6 is an internet-layer protocol for packet-switched
internetworking and provides end-to-end datagram transmission across multiple IP networks.
While IPv4 allows 32 bits for an Internet Protocol address, and can therefore support 2^32
(4,294,967,296) addresses, IPv6 uses 128-bit addresses, so the new address space supports 2^128
(approximately 340 undecillion, or 3.4×10^38) addresses. This expansion allows for many more
devices and users on the internet as well as extra flexibility in allocating addresses and efficiency
for routing traffic. It also eliminates the primary need for network address translation (NAT),
which gained widespread deployment as an effort to alleviate IPv4 address exhaustion.
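
The address-space figures above can be checked directly, here using Python integers and the
standard `ipaddress` module:

```python
import ipaddress

ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

assert ipv4_total == 4_294_967_296        # about 4.3 billion
assert 3.4e38 < ipv6_total < 3.5e38       # about 340 undecillion

# With the host identifier fixed at 64 bits, a single IPv6 subnet
# already holds 2**64 interface addresses (2001:db8::/32 is the
# documentation prefix used for examples):
net = ipaddress.ip_network("2001:db8::/64")
assert net.num_addresses == 2 ** 64
```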

IPv6 also implements additional features not present in IPv4. It simplifies aspects of address
assignment (stateless address autoconfiguration), network renumbering and router
announcements when changing Internet connectivity providers. The IPv6 subnet size has been
standardized by fixing the size of the host identifier portion of an address to 64 bits to facilitate
an automatic mechanism for forming the host identifier from link-layer media addressing
information (MAC address). Network security is also integrated into the design of the IPv6
architecture, and the IPv6 specification mandates support for IPsec as a fundamental
interoperability requirement.

The last top level (/8) block of free IPv4 addresses was assigned in February 2011 by IANA to
the 5 RIRs, although many free addresses still remain in most assigned blocks and each RIR will
continue with standard policy until it is at its last /8. After that, only 1024 addresses (a /22) are
made available from the RIR for each LIR; currently, only APNIC has reached this
stage.[2] While IPv6 is supported on all major operating systems in use in commercial, business,
and home consumer environments,[3] IPv6 does not implement interoperability features with
IPv4, and creates essentially a parallel, independent network. Exchanging traffic between the two
networks requires special translator gateways, but modern computer operating systems
implement dual-protocol software for transparent access to both networks either natively or using
a tunneling protocol such as 6to4, 6in4, or Teredo. In December 2010, despite marking its 12th
anniversary as a Standards Track protocol, IPv6 was only in its infancy in terms of general
worldwide deployment. A 2008 study[4] by Google Inc. indicated that penetration was still less
than one percent of Internet-enabled hosts in any country at that time.
IPv4:
The first publicly used version of the Internet Protocol, Version 4 (IPv4), provides an addressing
capability of 2^32, or approximately 4.3 billion, addresses. Address exhaustion was not initially a
concern in IPv4 as this version was originally presumed to be an internal test within ARPA, and
not intended for public use.

As Vint Cerf, who was running the program at ARPA, later recalled: "The decision to put a
32-bit address space on there was the result of a year's battle among a bunch of engineers who
couldn't make up their minds about 32, 128, or variable-length. And after a year of fighting, I
said -- I'm now at ARPA, I'm running the program, I'm paying for this stuff, I'm using American
tax dollars, and I wanted some progress because we didn't know if this was going to work. So I
said: OK, it's 32-bits. That's enough for an experiment; it's 4.3 billion terminations. Even the
Defense Department doesn't need 4.3 billion of everything and couldn't afford to buy 4.3 billion
edge devices to do a test anyway. So at the time I thought we were doing an experiment to prove
the technology and that if it worked we'd have opportunity to do a production version of it. Well,
it just escaped! It got out and people started to use it, and then it became a commercial thing. So
this [IPv6] is the production attempt at making the network scalable."

During the first decade of operation of the Internet (by the late 1980s), it became apparent that
methods had to be developed to conserve address space. In the early 1990s, even after the
redesign of the addressing system using a classless network model, it became clear that this
would not suffice to prevent IPv4 address exhaustion, and that further changes to the Internet
infrastructure were needed.[6]

Transmission Control Protocol:

The Transmission Control Protocol (TCP) is one of the core protocols of the Internet Protocol
Suite. TCP is one of the two original components of the suite, complementing the Internet
Protocol (IP), and therefore the entire suite is commonly referred to as TCP/IP. TCP provides
reliable, ordered delivery of a stream of bytes from a program on one computer to another
program on another computer. TCP is the protocol that major Internet applications such as the
World Wide Web, email, remote administration and file transfer rely on. Other applications,
which do not require reliable data stream service, may use the User Datagram Protocol (UDP),
which provides a datagram service that emphasizes reduced latency over reliability.

Network function:

TCP provides a communication service at an intermediate level between an application program
and the Internet Protocol (IP). That is, when an application program desires to send a large chunk
of data across the Internet using IP, instead of breaking the data into IP-sized pieces and issuing a
series of IP requests, the software can issue a single request to TCP and let TCP handle the IP
details.
IP works by exchanging pieces of information called packets. A packet is a sequence of octets
and consists of a header followed by a body. The header describes the packet's destination and,
optionally, the routers to use for forwarding until it arrives at its destination. The body contains
the data IP is transmitting.

Due to network congestion, traffic load balancing, or other unpredictable network behavior, IP
packets can be lost, duplicated, or delivered out of order. TCP detects these problems, requests
retransmission of lost data, rearranges out-of-order data, and even helps minimize network
congestion to reduce the occurrence of the other problems. Once the TCP receiver has
reassembled the sequence of octets originally transmitted, it passes them to the application
program. Thus, TCP abstracts the application's communication from the underlying networking
details.

TCP is utilized extensively by many of the Internet's most popular applications, including the
World Wide Web (WWW), E-mail, File Transfer Protocol, Secure Shell, peer-to-peer file
sharing, and some streaming media applications.

TCP is optimized for accurate delivery rather than timely delivery, and therefore, TCP
sometimes incurs relatively long delays (in the order of seconds) while waiting for out-of-order
messages or retransmissions of lost messages. It is not particularly suitable for real-time
applications such as Voice over IP. For such applications, protocols like the Real-time Transport
Protocol (RTP) running over the User Datagram Protocol (UDP) are usually recommended
instead.

TCP is a reliable stream delivery service that guarantees delivery of a data stream sent from one
host to another without duplication or losing data. Since packet transfer is not reliable, a
technique known as positive acknowledgment with retransmission is used to guarantee reliability
of packet transfers. This fundamental technique requires the receiver to respond with an
acknowledgment message as it receives the data. The sender keeps a record of each packet it
sends, and waits for acknowledgment before sending the next packet. The sender also keeps a
timer from when the packet was sent, and retransmits a packet if the timer expires. The timer is
needed in case a packet gets lost or corrupted.

TCP consists of a set of rules that, together with the Internet Protocol, govern how data is sent
"in the form of message units" between computers over the Internet. While IP
handles actual delivery of the data, TCP keeps track of the individual units of data transmission,
called segments, that a message is divided into for efficient routing through the network. For
example, when an HTML file is sent from a Web server, the TCP software layer of that server
divides the sequence of octets of the file into segments and forwards them individually to the IP
software layer (Internet Layer). The Internet Layer encapsulates each TCP segment into an IP
packet by adding a header that includes (among other data) the destination IP address. Even
though every packet has the same destination address, they can be routed on different paths
through the network. When the client program on the destination computer receives them, the
TCP layer (Transport Layer) reassembles the individual segments and ensures they are correctly
ordered and error-free as it streams them to an application.
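
The segmentation step described above can be sketched as follows; 1460 bytes is used as an
illustrative maximum segment size (a 1500-byte Ethernet MTU minus 20-byte IP and 20-byte
TCP headers):

```python
def segment(data: bytes, mss: int = 1460):
    """Split an application byte stream into MSS-sized chunks, as the
    TCP layer does before handing each segment to the Internet Layer."""
    return [data[i:i + mss] for i in range(0, len(data), mss)]

# A 3000-byte file becomes three segments of 1460, 1460 and 80 bytes;
# each is then encapsulated in its own IP packet and routed independently.
```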
Data transfer

There are a few key features that set TCP apart from User Datagram Protocol:

- Ordered data transfer: the destination host rearranges segments according to sequence number.
- Retransmission of lost packets: any cumulative stream not acknowledged is retransmitted.
- Error-free data transfer.
- Flow control: limits the rate at which a sender transfers data, to guarantee reliable delivery. The receiver continually hints to the sender how much data can be received (controlled by the sliding window). When the receiving host's buffer fills, the next acknowledgment contains a 0 in the window size, to stop transfer and allow the data in the buffer to be processed.
- Congestion control.

Reliable transmission

TCP uses a sequence number to identify each byte of data. The sequence number identifies the
order of the bytes sent from each computer so that the data can be reconstructed in order,
regardless of any fragmentation, disordering, or packet loss that may occur during transmission.
For every payload byte transmitted, the sequence number must be incremented. In the first two
steps of the 3-way handshake, both computers exchange an initial sequence number (ISN). This
number can be arbitrary, and should in fact be unpredictable to defend against TCP Sequence
Prediction Attacks.

TCP primarily uses a cumulative acknowledgment scheme, where the receiver sends an
acknowledgment signifying that the receiver has received all data preceding the acknowledged
sequence number. The sender sets the sequence number field to the sequence number of the first
payload byte in the segment's data field, and the receiver sends an acknowledgment specifying
the sequence number of the next byte they expect to receive. For example, if a sending computer
sends a packet containing four payload bytes with a sequence number field of 100, then the
sequence numbers of the four payload bytes are 100, 101, 102 and 103. When this packet arrives
at the receiving computer, it would send back an acknowledgment number of 104 since that is
the sequence number of the next byte it expects to receive in the next packet.
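
The worked example above reduces to one line of arithmetic:

```python
def next_ack(seq, payload_len):
    """Cumulative acknowledgment: the receiver acknowledges the
    sequence number of the next byte it expects to receive."""
    return seq + payload_len

# Four payload bytes starting at sequence 100 occupy bytes 100-103,
# so the receiver acknowledges 104.
assert next_ack(100, 4) == 104
```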

In addition to cumulative acknowledgments, TCP receivers can also send selective
acknowledgments to provide further information.

If the sender infers that data has been lost in the network, it retransmits the data.

Error detection

Sequence numbers and acknowledgments cover discarding duplicate packets, retransmission of
lost packets, and ordered data transfer. To assure correctness, a checksum field is included (see
TCP segment structure for details on checksumming).
The TCP checksum is a weak check by modern standards. Data Link Layers with high bit error
rates may require additional link error correction/detection capabilities. The weak checksum is
partially compensated for by the common use of a CRC or better integrity check at layer 2,
below both TCP and IP, such as is used in PPP or the Ethernet frame. However, this does not
mean that the 16-bit TCP checksum is redundant: remarkably, introduction of errors in packets
between CRC-protected hops is common, but the end-to-end 16-bit TCP checksum catches most
of these simple errors.[14] This is the end-to-end principle at work.
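
The 16-bit checksum is a one's-complement sum of 16-bit words, computed in the RFC 1071
style; a straightforward sketch:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement checksum as used by TCP, IP and UDP."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]   # add next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF  # one's complement of the folded sum

# Verification property: recomputing over the data with its checksum
# appended yields 0, which is how the receiver validates a segment.
```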

Flow control

TCP uses an end-to-end flow control protocol to avoid having the sender send data too fast for
the TCP receiver to receive and process it reliably. Having a mechanism for flow control is
essential in an environment where machines of diverse network speeds communicate. For
example, if a PC sends data to a hand-held PDA that is slowly processing received data, the PDA
must regulate data flow so as not to be overwhelmed.[2]

TCP uses a sliding window flow control protocol. In each TCP segment, the receiver specifies in
the receive window field the amount of additional received data (in bytes) that it is willing to
buffer for the connection. The sending host can send only up to that amount of data before it
must wait for an acknowledgment and window update from the receiving host.
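The sender-side window check described above can be sketched as follows (names are illustrative):

```python
def can_send(next_seq: int, last_acked: int, recv_window: int) -> bool:
    """Sliding-window check: the sender may transmit only while the
    amount of unacknowledged data in flight is below the window the
    receiver last advertised."""
    bytes_in_flight = next_seq - last_acked
    return bytes_in_flight < recv_window

# 3000 bytes in flight against a 4096-byte window: may still send.
print(can_send(next_seq=13000, last_acked=10000, recv_window=4096))  # True
# A zero window stalls the sender until a window update arrives.
print(can_send(next_seq=13000, last_acked=13000, recv_window=0))     # False
```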

When a receiver advertises a window size of 0, the sender stops sending data and starts the
persist timer. The persist timer is used to protect TCP from a deadlock situation that could arise
if a subsequent window size update from the receiver is lost, and the sender cannot send more
data until receiving a new window size update from the receiver. When the persist timer expires,
the TCP sender attempts recovery by sending a small packet so that the receiver responds by
sending another acknowledgement containing the new window size.
If a receiver is processing incoming data in small increments, it may repeatedly advertise a small
receive window. This is referred to as the silly window syndrome, since it is inefficient to send
only a few bytes of data in a TCP segment, given the relatively large overhead of the TCP
header. TCP senders and receivers typically employ flow control logic to specifically avoid
repeatedly sending small segments. The sender-side silly window syndrome avoidance logic is
referred to as Nagle's algorithm.
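In essence, Nagle's algorithm holds back small segments while earlier data is still unacknowledged. A simplified sketch of the decision rule, ignoring details such as the PSH flag:

```python
def nagle_should_send(segment_len: int, mss: int,
                      unacked_in_flight: bool) -> bool:
    """Nagle's algorithm, simplified: transmit immediately if the
    segment fills a full MSS or if nothing is currently
    unacknowledged; otherwise buffer the small segment until an
    ACK returns."""
    return segment_len >= mss or not unacked_in_flight

print(nagle_should_send(1460, 1460, True))   # True: full segment
print(nagle_should_send(10, 1460, True))     # False: buffer it
print(nagle_should_send(10, 1460, False))    # True: nothing in flight
```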

Congestion control

The final main aspect of TCP is congestion control. TCP uses a number of mechanisms to
achieve high performance and avoid congestion collapse, where network performance can fall by
several orders of magnitude. These mechanisms control the rate of data entering the network,
keeping the data flow below a rate that would trigger collapse. They also yield an approximately
max-min fair allocation between flows.

Acknowledgments for data sent, or lack of acknowledgments, are used by senders to infer
network conditions between the TCP sender and receiver. Coupled with timers, TCP senders and
receivers can alter the behavior of the flow of data. This is more generally referred to as
congestion control and/or network congestion avoidance.

Modern implementations of TCP contain four intertwined algorithms: slow-start, congestion avoidance, fast retransmit, and fast recovery (RFC 5681).

In addition, senders employ a retransmission timeout (RTO) that is based on the estimated
round-trip time (or RTT) between the sender and receiver, as well as the variance in this round
trip time. The behavior of this timer is specified in RFC 2988. There are subtleties in the
estimation of RTT. For example, senders must be careful when calculating RTT samples for
retransmitted packets; typically they use Karn's Algorithm or TCP timestamps (see RFC 1323).
These individual RTT samples are then averaged over time to create a Smoothed Round Trip
Time (SRTT) using Jacobson's algorithm. This SRTT value is what is finally used as the round-
trip time estimate.
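The estimation above can be sketched with the standard RFC 2988 gains (alpha = 1/8 for the SRTT, beta = 1/4 for the RTT variance); this simplified model omits the clock-granularity term and the recommended RTO floor:

```python
class RttEstimator:
    """Smoothed round-trip time and retransmission timeout,
    following the structure of RFC 2988 (Jacobson's algorithm)."""
    ALPHA, BETA = 1 / 8, 1 / 4

    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def update(self, sample: float) -> float:
        """Fold in one RTT sample and return the resulting RTO."""
        if self.srtt is None:                # first measurement
            self.srtt = sample
            self.rttvar = sample / 2
        else:
            self.rttvar = ((1 - self.BETA) * self.rttvar
                           + self.BETA * abs(self.srtt - sample))
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * sample
        return self.srtt + 4 * self.rttvar   # RTO = SRTT + 4 * RTTVAR

est = RttEstimator()
print(est.update(1.0))  # 3.0: first sample gives SRTT=1.0, RTTVAR=0.5
```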

Enhancing TCP to reliably handle loss, minimize errors, manage congestion and perform well in very high-speed environments is an ongoing area of research and standards development. As a result, there are a number of TCP congestion avoidance algorithm variations.

Maximum segment size

The maximum segment size (MSS) is the largest amount of data, specified in bytes, that TCP is
willing to receive in a single segment. For best performance, the MSS should be set small
enough to avoid IP fragmentation, which can lead to packet loss and excessive retransmissions.
To try to accomplish this, typically the MSS is announced by each side using the MSS option
when the TCP connection is established, in which case it is derived from the maximum
transmission unit (MTU) size of the data link layer of the networks to which the sender and
receiver are directly attached. Furthermore, TCP senders can use path MTU discovery to infer
the minimum MTU along the network path between the sender and receiver, and use this to
dynamically adjust the MSS to avoid IP fragmentation within the network.
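The usual derivation of the MSS from the link MTU simply subtracts the minimum IPv4 and TCP header sizes (20 bytes each, without options); a short sketch:

```python
def mss_from_mtu(mtu: int, ip_header: int = 20, tcp_header: int = 20) -> int:
    """MSS = MTU minus the minimum IP and TCP header sizes
    (20 bytes each for IPv4 and TCP without options)."""
    return mtu - ip_header - tcp_header

print(mss_from_mtu(1500))  # 1460, the typical value on Ethernet
```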

MSS announcement is also often called "MSS negotiation". Strictly speaking, the MSS is not
"negotiated" between the originator and the receiver, because that would imply that both
originator and receiver will negotiate and agree upon a single, unified MSS that applies to all
communication in both directions of the connection. In fact, two completely independent values
of MSS are permitted for the two directions of data flow in a TCP connection.[15] This situation
may arise, for example, if one of the devices participating in a connection has an extremely
limited amount of memory reserved (perhaps even smaller than the overall discovered Path MTU)
for processing incoming TCP segments.

Selective acknowledgments

Relying purely on the cumulative acknowledgment scheme employed by the original TCP
protocol can lead to inefficiencies when packets are lost. For example, suppose 10,000 bytes are
sent in 10 different TCP packets, and the first packet is lost during transmission. In a pure
cumulative acknowledgment protocol, the receiver cannot say that it received bytes 1,000 to
9,999 successfully, but failed to receive the first packet, containing bytes 0 to 999. Thus the
sender may then have to resend all 10,000 bytes.

To solve this problem TCP employs the selective acknowledgment (SACK) option, defined in
RFC 2018, which allows the receiver to acknowledge discontinuous blocks of packets that were
received correctly, in addition to the sequence number of the last contiguous byte received
successively, as in the basic TCP acknowledgment. The acknowledgement can specify a number
of SACK blocks, where each SACK block is conveyed by the starting and ending sequence
numbers of a contiguous range that the receiver correctly received. In the example above, the
receiver would send SACK with sequence numbers 1000 and 9999. The sender thus retransmits
only the first packet, bytes 0 to 999.
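The sender-side logic in this example can be sketched as follows; the function is illustrative and represents SACK blocks as (start, end-exclusive) byte ranges:

```python
def ranges_to_retransmit(cum_ack, sack_blocks, highest_sent):
    """Return the byte ranges not covered by the cumulative ACK or
    any SACK block; these are the holes the sender must retransmit."""
    holes, pos = [], cum_ack
    for start, end in sorted(sack_blocks):
        if pos < start:
            holes.append((pos, start))
        pos = max(pos, end)
    if pos < highest_sent:
        holes.append((pos, highest_sent))
    return holes

# 10 packets of 1000 bytes each; the first (bytes 0-999) is lost
# and bytes 1000-9999 are SACKed: only the first packet is resent.
print(ranges_to_retransmit(0, [(1000, 10000)], 10000))  # [(0, 1000)]
```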

An extension to the SACK option is the duplicate-SACK (D-SACK) option, defined in RFC 2883. Out-of-order packet delivery can falsely indicate to the TCP sender that a packet was lost; the sender then retransmits the suspected-to-be-lost packet and slows down the data delivery to prevent network congestion. Upon receiving a D-SACK indicating that the retransmitted packet was a duplicate, the TCP sender undoes the slow-down and recovers the original pace of data transmission.

The SACK option is not mandatory; it is used only if both parties support it. This is negotiated when the connection is established. SACK uses the optional part of the TCP header (see TCP segment structure for details). The use of SACK is widespread; all popular TCP stacks support it. Selective acknowledgment is also used in the Stream Control Transmission Protocol (SCTP).
3G Overview:

3G Defined
3G (Third Generation) is a generic name for a set of mobile technologies, launched from the end of 2001, which use a host of high-tech infrastructure networks, handsets, base stations, switches and other equipment to allow mobiles to offer high-speed Internet access, data, video and CD-quality music services.

Data speeds in 3G networks should reach up to 2 Megabits per second, a significant increase on current technology.

2G/2.5G Defined
GSM, for example, is a 2G technology. It uses TDMA technology, providing data speeds of 9.6 kbps/14.4 kbps. The packet radio upgrade to GSM, called GPRS, can have speeds of up to 114 kbps. GPRS is an interim technology towards 3G, and hence is known as 2.5G. GSM might go the same way as the older first-generation (1G) NMT and AMPS networks in 8-15 years because of the use of newer and better UMTS technology.

CDMA:

The new 3G services are almost all flavours of technologies based on the generic name CDMA (Code Division Multiple Access). CDMA is a digital wireless technology that allows multiple users to share radio frequencies at the same time without interfering with each other. A telephone or data call is assigned a unique code that distinguishes it from others, and the signals hop among different frequencies.

Current 2G services using the original CDMA "IS-95" technology are known as cdmaOne. 3G
services will use new high-speed versions of CDMA called W-CDMA, or its competing
technology, cdma2000.

IMT-2000:

In all, these technologies fall under the ITU's generic name of IMT-2000 (International Mobile Telecommunications 2000). But when the ITU tried to unify and standardise 3G technologies, no consensus was reached; there were thus five terrestrial standards developed as part of the IMT-2000 program. Instead, depending on where in the world 3G is implemented, the 3G standard will be based on the CDMA variants cdma2000 or W-CDMA.
3G:

3G or 3rd generation mobile telecommunications is a generation of standards for mobile phones and mobile telecommunication services fulfilling the International Mobile Telecommunications-
2000 (IMT-2000) specifications by the International Telecommunication Union. Application
services include wide-area wireless voice telephone, mobile Internet access, video calls and
mobile TV, all in a mobile environment.
Several telecommunications companies market wireless mobile Internet services as 3G,
indicating that the advertised service is provided over a 3G wireless network. Services advertised
as 3G are required to meet IMT-2000 technical standards, including standards for reliability and
speed (data transfer rates). To meet the IMT-2000 standards, a system is required to provide peak
data rates of at least 200 kbit/s (about 0.2 Mbit/s). However, many services advertised as 3G
provide higher speed than the minimum technical requirements for a 3G service. Recent 3G
releases, often denoted 3.5G and 3.75G, also provide mobile broadband access of several Mbit/s
to smart phones and mobile modems in laptop computers.

The following standards are typically branded 3G:

the UMTS system, first offered in 2001, standardized by 3GPP, used primarily in
Europe, Japan, China (however with a different radio interface) and other regions
predominated by GSM 2G system infrastructure. The cell phones are typically UMTS
and GSM hybrids. Several radio interfaces are offered, sharing the same
infrastructure:
The original and most widespread radio interface is called W-CDMA.
The TD-SCDMA radio interface was commercialised in 2009 and is only offered in
China.
The latest UMTS release, HSPA+, can provide peak data rates up to 56 Mbit/s in the
downlink in theory (28 Mbit/s in existing services) and 22 Mbit/s in the uplink.
the CDMA2000 system, first offered in 2002, standardized by 3GPP2, used
especially in North America and South Korea, sharing infrastructure with the IS-95
2G standard. The cell phones are typically CDMA2000 and IS-95 hybrids. The latest
release EVDO Rev B offers peak rates of 14.7 Mbit/s downstream.

The above systems and radio interfaces are based on kindred spread spectrum radio transmission
technology. While the GSM EDGE standard ("2.9G"), DECT cordless phones and Mobile
WiMAX standards formally also fulfill the IMT-2000 requirements and are approved as 3G
standards by ITU, these are typically not branded 3G, and are based on completely different
technologies.

A new generation of cellular standards has appeared approximately every ten years since 1G systems were introduced in 1981/1982. Each generation is characterized by new frequency bands, higher data rates and non-backwards-compatible transmission technology. The first
release of the 3GPP Long Term Evolution (LTE) standard does not completely fulfill the ITU 4G
requirements called IMT-Advanced. First release LTE is not backwards compatible with 3G, but
is a pre-4G or 3.9G technology, however sometimes branded "4G" by the service providers. Its
evolution LTE Advanced is a 4G technology. WiMAX is another technology verging on or
marketed as 4G.
Mobility management:
Mobility management is one of the major functions of a GSM or a UMTS network that allows
mobile phones to work. The aim of mobility management is to track where the subscribers are,
allowing calls, SMS and other mobile phone services to be delivered to them.

Location update procedure:

A GSM or UMTS network, like all cellular networks, is a radio network of individual cells,
known as base stations. Each base station covers a small geographical area which is part of a
uniquely identified location area. By integrating the coverage of each of these base stations, a
cellular network provides a radio coverage over a much wider area. A group of base stations is
named a location area, or a routing area.

The location update procedure allows a mobile device to inform the cellular network, whenever
it moves from one location area to the next. Mobiles are responsible for detecting location area
codes. When a mobile finds that the location area code is different from its last update, it
performs another update by sending to the network, a location update request, together with its
previous location, and its Temporary Mobile Subscriber Identity (TMSI).

There are several reasons why a mobile may provide updated location information to the
network. Whenever a mobile is switched on or off, the network may require it to perform an
IMSI attach or IMSI detach location update procedure. Also, each mobile is required to regularly
report its location at a set time interval using a periodic location update procedure. Whenever a
mobile moves from one location area to the next while not on a call, a random location update is
required. This is also required of a stationary mobile that reselects coverage from a cell in a
different location area, because of signal fade. Thus a subscriber has reliable access to the
network and may be reached with a call, while enjoying the freedom of mobility within the
whole coverage area.

When a subscriber is paged in an attempt to deliver a call or SMS and the subscriber does not
reply to that page then the subscriber is marked as absent in both the Mobile Switching Center /
Visitor Location Register (MSC/VLR) and the Home Location Register (HLR) (Mobile not
reachable flag MNRF is set). The next time the mobile performs a location update the HLR is
updated and the mobile not reachable flag is cleared.

TMSI:

The Temporary Mobile Subscriber Identity (TMSI) is the identity that is most commonly sent
between the mobile and the network. TMSI is randomly assigned by the VLR to every mobile in
the area, the moment it is switched on. The number is local to a location area, and so it has to be
updated each time the mobile moves to a new geographical area.

The network can also change the TMSI of the mobile at any time, and it normally does so in order to prevent the subscriber from being identified and tracked by eavesdroppers on the radio
interface. This makes it difficult to trace which mobile is which, except briefly, when the mobile
is just switched on, or when the data in the mobile becomes invalid for one reason or another. At
that point, the global "international mobile subscriber identity" (IMSI) must be sent to the
network. The IMSI is sent as rarely as possible, to avoid it being identified and tracked.

A key use of the TMSI is in paging a mobile. "Paging" is the one-to-one communication between
the mobile and the base station. The most important use of broadcast information is to set up
channels for "paging". Every cellular system has a broadcast mechanism to distribute such
information to a plurality of mobiles.

The TMSI is 4 octets long and cannot be all ones, because the SIM uses 4 octets with all bits equal to 1 to indicate that no valid TMSI is available.
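That reserved all-ones value makes the validity check trivial; a sketch for illustration:

```python
def tmsi_is_valid(tmsi: bytes) -> bool:
    """A TMSI is 4 octets; the all-ones value (0xFFFFFFFF) is
    reserved on the SIM to mean 'no valid TMSI available'."""
    return len(tmsi) == 4 and tmsi != b"\xff\xff\xff\xff"

print(tmsi_is_valid(b"\x12\x34\x56\x78"))  # True
print(tmsi_is_valid(b"\xff\xff\xff\xff"))  # False: reserved value
```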

Roaming:

Roaming is one of the fundamental mobility management procedures of all cellular networks.
Roaming is defined as the ability for a cellular customer to automatically make and receive voice
calls, send and receive data, or access other services, including home data services, when
travelling outside the geographical coverage area of the home network, by means of using a
visited network. This can be done by using a communication terminal or else just by using the
subscriber identity in the visited network. Roaming is technically supported by mobility
management, authentication, authorization and billing procedures.

Location area:

A "location area" is a set of base stations that are grouped together to optimise signalling.
Typically, tens or even hundreds of base stations share a single Base Station Controller (BSC) in
GSM, or a Radio Network Controller (RNC) in UMTS, the intelligence behind the base stations.
The BSC handles allocation of radio channels, receives measurements from the mobile phones, and controls handovers from base station to base station.

To each location area, a unique number called a "location area code" is assigned. The location
area code is broadcast by each base station, known as a "base transceiver station" BTS in GSM,
or a Node B in UMTS, at regular intervals.

In GSM, the mobiles cannot communicate directly with each other but have to be channeled
through the BTSs. In UMTS networks, if no Node B is accessible to a mobile, it will not be able
to make any connections at all.

If the location areas are very large, there will be many mobiles operating simultaneously,
resulting in very high paging traffic, as every paging request has to be broadcast to every base
station in the location area. This wastes bandwidth and power on the mobile, by requiring it to
listen for broadcast messages too much of the time. If on the other hand, there are too many
small location areas, the mobile must contact the network very often for changes of location,
which will also drain the mobile's battery. A balance has therefore to be struck.
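The trade-off can be illustrated with a toy cost model (the linear costs and all parameter values are purely illustrative): paging load grows with the number of cells per location area, while update load shrinks with it:

```python
def signalling_cost(cells_per_area, pages_per_hr, crossings_per_hr,
                    page_cost=1.0, update_cost=10.0):
    """Toy model: every page is broadcast to every cell in the
    location area, while location updates become rarer as areas
    grow (modelled crudely as inversely proportional to size)."""
    paging = pages_per_hr * cells_per_area * page_cost
    updates = (crossings_per_hr / cells_per_area) * update_cost
    return paging + updates

# Larger areas pay in paging, smaller areas pay in updates:
print(signalling_cost(100, 2, 50))  # 205.0
print(signalling_cost(10, 2, 50))   # 70.0
```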
Routing area:

The routing area is the PS domain equivalent of the location area. A "routing area" is normally a
subdivision of a "location area". Routing areas are used by mobiles which are GPRS-attached.
GPRS ("General Packet Radio Services"), GSM's new data transmission technology, is
optimized for "bursty" data communication services, such as wireless internet/intranet, and
multimedia services. It is also known as GSM-IP ("Internet Protocol") because it will connect
users directly to Internet Service Providers (ISP).

The bursty nature of packet traffic means that more paging messages are expected per mobile,
and so it is worth knowing the location of the mobile more accurately than it would be with
traditional circuit-switched traffic. A change from routing area to routing area (called a "Routing
Area Update") is done in an almost identical way to a change from location area to location area.
The main difference is that the "Serving GPRS Support Node" (SGSN) is the element involved.

Handover:
In cellular telecommunications, the term handover or handoff refers to the process of transferring
an ongoing call or data session from one channel connected to the core network to another. In
satellite communications it is the process of transferring satellite control responsibility from one
earth station to another without loss or interruption of service.

Handover or handoff:

American English tends to use the term handoff, and this is most commonly used within some
American organizations such as 3GPP2 and in American originated technologies such as
CDMA2000. In British English the term handover is more common, and is used within
international and European organizations such as ITU-T, IETF, ETSI and 3GPP, and
standardized within European originated standards such as GSM and UMTS. The term handover
is more common than handoff in academic research publications and literature, while handoff is
slightly more common within the IEEE and ANSI organizations.

Purpose:

In telecommunications there may be different reasons why a handover might be conducted:

when the phone is moving away from the area covered by one cell and entering the
area covered by another cell the call is transferred to the second cell in order to avoid
call termination when the phone gets outside the range of the first cell;
when the capacity for connecting new calls of a given cell is used up and an existing
or new call from a phone, which is located in an area overlapped by another cell, is
transferred to that cell in order to free-up some capacity in the first cell for other
users, who can only be connected to that cell;
in non-CDMA networks when the channel used by the phone becomes interfered by
another phone using the same channel in a different cell, the call is transferred to a
different channel in the same cell or to a different channel in another cell in order to
avoid the interference;
again in non-CDMA networks when the user behavior changes, e.g. when a fast-
travelling user, connected to a large, umbrella-type of cell, stops then the call may be
transferred to a smaller macro cell or even to a micro cell in order to free capacity on
the umbrella cell for other fast-traveling users and to reduce the potential interference
to other cells or users (this works in reverse too, when a user is detected to be moving
faster than a certain threshold, the call can be transferred to a larger umbrella-type of
cell in order to minimize the frequency of the handovers due to this movement);
in CDMA networks a handover (see further down) may be induced in order to reduce the
interference to a smaller neighboring cell due to the "near-far" effect, even when the
phone still has an excellent connection to its current cell.

The most basic form of handover is when a phone call in progress is redirected from its current
cell (called source) and its used channel in that cell to a new cell (called target) and a new
channel. In terrestrial networks the source and the target cells may be served from two different
cell sites or from one and the same cell site (in the latter case the two cells are usually referred to
as two sectors on that cell site). Such a handover, in which the source and the target are different
cells (even if they are on the same cell site) is called inter-cell handover. The purpose of inter-
cell handover is to maintain the call as the subscriber is moving out of the area covered by the
source cell and entering the area of the target cell.

A special case is possible, in which the source and the target are one and the same cell and only
the used channel is changed during the handover. Such a handover, in which the cell is not
changed, is called intra-cell handover. The purpose of intra-cell handover is to change one
channel, which may be interfered or fading with a new clearer or less fading channel.

Types of handover:

In addition to the above classification of handovers into inter-cell and intra-cell, they can also be divided into hard and soft handovers:

A hard handover is one in which the channel in the source cell is released and only
then the channel in the target cell is engaged. Thus the connection to the source is
broken before or 'as' the connection to the target is made; for this reason such
handovers are also known as break-before-make. Hard handovers are intended to be
instantaneous in order to minimize the disruption to the call. A hard handover is
perceived by network engineers as an event during the call. It requires the least
processing by the network providing service. When the mobile is between base
stations, then the mobile can switch with any of the base stations, so the base stations
bounce the link with the mobile back and forth. This is called ping-ponging.
A soft handover is one in which the channel in the source cell is retained and used for
a while in parallel with the channel in the target cell. In this case the connection to the
target is established before the connection to the source is broken, hence this
handover is called make-before-break. The interval, during which the two
connections are used in parallel, may be brief or substantial. For this reason the soft
handover is perceived by network engineers as a state of the call, rather than a brief
event. Soft handovers may involve using connections to more than two cells:
connections to three, four or more cells can be maintained by one phone at the same
time. When a call is in a state of soft handover, the signal of the best of all used
channels can be used for the call at a given moment or all the signals can be
combined to produce a clearer copy of the signal. The latter is more advantageous,
and when such combining is performed both in the downlink (forward link) and the
uplink (reverse link) the handover is termed as softer. Softer handovers are possible
when the cells involved in the handovers have a single cell site.

Comparison of handovers

An advantage of the hard handover is that at any moment in time one call uses only one channel.
The hard handover event is indeed very short and usually is not perceptible by the user. In the
old analog systems it could be heard as a click or a very short beep, in digital systems it is
unnoticeable. Another advantage of the hard handoff is that the phone's hardware does not need
to be capable of receiving two or more channels in parallel, which makes it cheaper and simpler.
A disadvantage is that if a handover fails the call may be temporarily disrupted or even
terminated abnormally. Technologies, which use hard handovers, usually have procedures which
can re-establish the connection to the source cell if the connection to the target cell cannot be
made. However re-establishing this connection may not always be possible (in which case the
call will be terminated) and even when possible the procedure may cause a temporary
interruption to the call.

One advantage of the soft handovers is that the connection to the source cell is broken only when
a reliable connection to the target cell has been established and therefore the chances that the call
will be terminated abnormally due to failed handovers are lower. However, by far a bigger
advantage comes from the fact that channels in multiple cells are maintained simultaneously, and the call could only fail if all of the channels are interfered or fade at the same
time. Fading and interference in different channels are unrelated and therefore the probability of
them taking place at the same moment in all channels is very low. Thus the reliability of the
connection becomes higher when the call is in a soft handover. Because in a cellular network the
majority of the handovers occur in places of poor coverage, where calls would frequently
become unreliable when their channel is interfered or fading, soft handovers bring a significant
improvement to the reliability of the calls in these places by making the interference or the
fading in a single channel not critical. This advantage comes at the cost of more complex
hardware in the phone, which must be capable of processing several channels in parallel.
Another price to pay for soft handovers is use of several channels in the network to support just a
single call. This reduces the number of remaining free channels and thus reduces the capacity of
the network. By adjusting the duration of soft handovers and the size of the areas, in which they
occur, the network engineers can balance the benefit of extra call reliability against the price of
reduced capacity.

Possibility of handover:

While theoretically speaking soft handovers are possible in any technology, analog or digital, the
cost of implementing them for analog technologies is prohibitively high and none of the
technologies that were commercially successful in the past (e.g. AMPS, TACS, NMT, etc.) had
this feature. Of the digital technologies, those based on FDMA also face a higher cost for the
phones (due to the need to have multiple parallel radio-frequency modules) and those based on
TDMA or a combination of TDMA/FDMA, in principle, allow a less expensive implementation
of soft handovers. However, none of the 2G (second-generation) technologies have this feature
(e.g. GSM, D-AMPS/IS-136, etc.). On the other hand, all CDMA based technologies, 2G and 3G
(third-generation), have soft handovers. On one hand, this is facilitated by the possibility to
design relatively inexpensive phone hardware supporting soft handovers for CDMA, and on the other
hand, this is necessitated by the fact that without soft handovers CDMA networks may suffer
from substantial interference arising due to the so-called "near-far" effect.

In all current commercial technologies based on FDMA or on a combination of TDMA/FDMA


(e.g. GSM, AMPS, IS-136/DAMPS, etc.) changing the channel during a hard handover is
realized by changing the pair of used transmit/receive frequencies.

Implementations

For the practical realization of handoffs in a cellular network each cell is assigned a list of
potential target cells, which can be used for handing-off calls from this source cell to them.
These potential target cells are called neighbors and the list is called neighbor list. Creating such
a list for a given cell is not trivial and specialized computer tools are used. They implement
different algorithms and may use as input data from field measurements or computer predictions
of radio wave propagation in the areas covered by the cells.

During a call one or more parameters of the signal in the channel in the source cell are monitored
and assessed in order to decide when a handover may be necessary. The downlink (forward link)
and/or uplink (reverse link) directions may be monitored. The handover may be requested by the
phone or by the base station (BTS) of its source cell and, in some systems, by a BTS of a
neighboring cell. The phone and the BTSs of the neighboring cells monitor each other's
signals and the best target candidates are selected among the neighboring cells. In some systems,
mainly based on CDMA, a target candidate may be selected among the cells which are not in the
neighbor list. This is done in an effort to reduce the probability of interference due to the
aforementioned "near-far" effect.

In analog systems the parameters used as criteria for requesting a hard handover are usually the
received signal power and the received signal-to-noise ratio (the latter may be estimated in an
analog system by inserting additional tones, with frequencies just outside the captured voice-
frequency band at the transmitter and assessing the form of these tones at the receiver). In non-
CDMA 2G digital systems the criteria for requesting hard handover may be based on estimates
of the received signal power, bit error rate (BER) and block error/erasure rate (BLER), received
quality of speech (RxQual), distance between the phone and the BTS (estimated from the radio
signal propagation delay) and others. In CDMA systems, 2G and 3G, the most common criterion
for requesting a handover is Ec/Io ratio measured in the pilot channel (CPICH) and/or RSCP.
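A common way to express such a criterion is to trigger a handover only when a neighbor exceeds the serving cell by a hysteresis margin, which also limits the ping-ponging mentioned earlier. A minimal sketch with illustrative values (real systems combine several of the measurements listed above):

```python
def should_handover(serving_dbm: float, neighbor_dbm: float,
                    hysteresis_db: float = 3.0) -> bool:
    """Trigger a handover only when the neighbor's received signal
    exceeds the serving cell's by the hysteresis margin."""
    return neighbor_dbm > serving_dbm + hysteresis_db

print(should_handover(-95.0, -93.0))  # False: within the 3 dB margin
print(should_handover(-95.0, -90.0))  # True: 5 dB stronger neighbor
```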

In CDMA systems, when the phone in soft or softer handoff is connected to several cells
simultaneously, it processes the signals received in parallel using a rake receiver. Each signal is
processed by a module called rake finger. A usual design of a rake receiver in mobile phones
includes three or more rake fingers used in soft handoff state for processing signals from as many
cells and one additional finger used to search for signals from other cells. The set of cells, whose
signals are used during a soft handoff, is referred to as the "active set". If the search finger finds a
sufficiently strong signal (in terms of high Ec/Io or RSCP) from a new cell, this cell is added to
the active set. The cells in the neighbor list (called the neighboring set in CDMA) are checked
more frequently than the rest, and thus a handoff with a neighboring cell is more likely; however,
a handoff with other cells outside the neighbor list is also allowed (unlike in GSM,
IS-136/D-AMPS, AMPS, NMT, etc.).
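The active-set maintenance described above can be sketched as follows. The add/drop thresholds on pilot Ec/Io and the finger count are hypothetical; real CDMA systems also apply drop timers before removing a cell from the active set.

```python
# Sketch of CDMA-style active-set maintenance with invented thresholds.
ADD_THRESHOLD = -14.0   # pilot strong enough to join the active set (dB)
DROP_THRESHOLD = -18.0  # pilot weak enough to leave the active set (dB)
MAX_FINGERS = 3         # rake fingers available for soft handoff

def update_active_set(active, pilots):
    """pilots: {cell: pilot Ec/Io in dB}. Returns the new active set."""
    # Drop cells whose pilot fell below the drop threshold (or disappeared).
    active = {c for c in active if pilots.get(c, -99.0) > DROP_THRESHOLD}
    # Add the strongest new pilots above the add threshold, up to the finger limit.
    candidates = sorted((c for c in pilots
                         if c not in active and pilots[c] > ADD_THRESHOLD),
                        key=lambda c: pilots[c], reverse=True)
    for c in candidates:
        if len(active) >= MAX_FINGERS:
            break
        active.add(c)
    return active

# Cell B's pilot rose above the add threshold, so it joins cell A in soft handoff;
# cell C stays out because its pilot is too weak.
print(update_active_set({"A"}, {"A": -10.0, "B": -12.0, "C": -20.0}))
```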

Reasons for failure

Handoffs are not always successful, and much research has been conducted on why. In the late
1980s the main reason was identified: because frequencies cannot be reused in adjacent cells,
when a user moves from one cell to another, a new frequency must be allocated for the call. If a
user moves into a cell in which all available channels are in use, the user's call must be
terminated. There is also the problem of signal interference, where adjacent cells overpower each
other, resulting in receiver desensitization.

Vertical handover

There are also inter-technology handovers where a call's connection is transferred from one
access technology to another, e.g. a call being transferred from GSM to UMTS or from CDMA
IS-95 to cdma2000.

The 3GPP UMA/GAN standard enables GSM/UMTS handoff to Wi-Fi and vice-versa.

UNIT II - WIRELESS PROTOCOLS


PART A (2 MARKS)
1. Define routing.
2. What is network security?
3. Define ALOHA.
4. Explain CSMA.
5. Define wireless LAN.
6. Define MAN.
7. Explain IEEE 802.11.
8. Define wireless routing protocol.
9. Define Mobile IP.
10. What is 4G?
11. What is 3G?
PART B (16 MARKS)
1. What are the issues and challenges of wireless networks? Explain them.
2. Explain in detail ALOHA, CSMA, wireless LAN, wireless MAN, IEEE 802.11
(a/b/e/f/g/h/i), and Bluetooth.
3. Explain in detail about Protocols for 3G & 4G cellular networks.
4. Briefly explain IMT 2000, UMTS, CDMA2000.
5. Explain Mobility management and handover technologies and All-IP based cellular
network.
UNIT III- TYPES OF WIRELESS NETWORKS

Wireless networks bring fundamental changes to data networking and make integrated networks
a reality. A wireless network connects computers without wires: using radio waves, a computer
can join the network and be moved anywhere within coverage. Digital modulation, adaptive
modulation, information compression, and access multiplexing have made wireless networks
extremely portable.

In a wireless network, the air is the transmission medium. Wireless networks based on the IEEE
802.11b standard offer more privacy and personal computer security than before, along with
flexibility, roaming, standards compliance, and low cost. There are different types of wireless
network, such as wireless LAN, wireless MAN, and mobile device networks.

802.11b wireless networks operate at a maximum bandwidth of 11 Mbps and can be implemented
with a wide variety of products. The technology rests on two spread-spectrum approaches:
frequency-hopping spread spectrum (FHSS) and direct-sequence spread spectrum (DSSS). DSSS
transmits information divided into small pieces according to a chipping code and a spreading
ratio; this helps the signal resist interference and lets the receiver recover the original data even
when part of it is damaged. FHSS instead splits the available bandwidth into many possible
transmission frequencies and hops among them; its performance is generally consistent. Benefits
of wireless networking include fast, reliable links with ranges of roughly 1,000 ft / 305 m
outdoors and 250 to 400 ft / 76 to 122 m indoors, easy integration into existing wired Ethernet
networks, high-speed Internet access, and compatibility with peer-to-peer mode.
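The DSSS idea of spreading each bit with a chipping code, and recovering the data even when some chips are damaged, can be illustrated with a toy example. The 7-chip code and the XOR/majority-vote scheme below are simplifications chosen for readability, not any standard's actual spreading code.

```python
# Illustrative DSSS spreading: each data bit is XOR-ed with a chipping code,
# so the transmitted signal uses more bandwidth but can be despread at the
# receiver even if a few chips are corrupted. The code is arbitrary.

CHIP_CODE = [1, 0, 1, 1, 0, 0, 1]  # spreading ratio of 7 chips per bit

def spread(bits):
    return [b ^ c for b in bits for c in CHIP_CODE]

def despread(chips):
    bits = []
    for i in range(0, len(chips), len(CHIP_CODE)):
        block = chips[i:i + len(CHIP_CODE)]
        # Majority vote over the XOR-recovered chips tolerates chip errors.
        votes = [x ^ c for x, c in zip(block, CHIP_CODE)]
        bits.append(1 if sum(votes) > len(votes) // 2 else 0)
    return bits

data = [1, 0, 1]
tx = spread(data)          # 3 bits become 21 chips
tx[3] ^= 1                 # flip one chip to simulate interference
assert despread(tx) == data  # the original bits are still recovered
```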

The functional parts of a wireless network include radios, base stations, and mobile devices, a
growing trend in the wireless industry. UWB and Bluetooth serve as personal-area networks,
while Wi-Fi and WiMAX have turned wireless networking into a broadband technology.
Wireless networks offer great flexibility to an organization: wireless local area networks provide
consistent and effective solutions for a range of real-time applications and are now used across
many platforms, including manufacturing, health care, education, finance, hospitality, airports,
and retail.

Types of Wireless Networks

A wireless network joins two or more computers without using wires, employing spread-spectrum
or OFDM transmission depending on the technology in use. It lets a user move about within a
wide coverage area and remain connected to the network. Wireless networks include wide area,
local area, and personal area networks, but the two most common types are described below.

WLAN (Wireless Local Area Network)

WLANs provide network access using radio signals instead of traditional network cabling. A
WLAN is built around a device called an access point (AP), through which users communicate
wirelessly. Network security remains an important issue for WLANs; the WEP technology used
in early WLANs was intended to raise the level of security. WLANs have built a solid reputation
in a variety of markets over the last several years, offering wireless connectivity within a limited
coverage area such as a hospital, a university, an airport, a health care facility, or a gas plant.
With the 802.11 family of standards, WLANs provide high data-transfer rates and are
increasingly recognized as a general-purpose connectivity alternative for a broad range of
business customers. WLANs offer users several benefits, such as mobility, reduced cost of
ownership, installation speed, flexibility, and scalability. The underlying spread-spectrum
technology, originally developed by the military, offers secure and reliable service:
frequency-hopping spread spectrum maintains a single logical channel, while direct-sequence
spread spectrum uses a chip pattern for robustness; infrared is another option. Wireless LAN
adapters are available for all common computer platforms. Key considerations for WLANs
include range and coverage, throughput, multipath effects, integrity, interoperability with wired
and wireless infrastructure, interference and coexistence, simplicity and ease of use, security,
cost, scalability, and safety, which together make the wireless network a great platform.

WMAN (Wireless Metropolitan Area Network)

A WMAN provides fast network communication within the vicinity of a metropolitan area. It can
cover an entire city or a similar geographic region, spanning up to 50 km, and is designed for a
larger area than a LAN. An earlier wired MAN standard, DQDB, covers up to 30 miles at speeds
of 34 Mbit/s to 155 Mbit/s and is common in schools, colleges, and public services that need a
high-speed network backbone. WMAN is the name certified by IEEE 802.16, which standardizes
broadband wireless metropolitan access. The original 802.16 air interface uses a single-carrier
scheme intended to operate in the 10-66 GHz spectrum and supports continuously varying traffic
levels at many licensed frequencies. A WMAN provides high-speed Internet access to business
subscribers; it can handle thousands of user stations while preventing collisions, supports legacy
voice systems, voice over IP, and TCP/IP, and offers different applications with different QoS
requirements. Related wired MAN technologies include ATM, FDDI, and SMDS. WiMAX is the
common name for wireless metropolitan area networking based on IEEE 802.16.

Wireless Networks usage

Wireless networking is telecommunication between nodes executed without the use of wires. Its
usage is increasing day by day because it has had a significant impact on the world, and its uses
have grown appreciably.

Because a wireless network uses radio-frequency signals, you can move about and still
access the network while working at an outdoor location.

Through wireless networks you can send information around the world using satellites
and other links.

Wireless networks are now used in emergency services, such as police departments,
where they are used to communicate important information quickly.

Wireless networking helps both people and businesses send and share data swiftly,
whether in a small office or across the world.

Another vital use for wireless networks is as a cheap and fast way to connect to the
Internet in regions where the telecom infrastructure is poor and there is no other means of
communication.

Wireless networks give access to network resources such as a library's online system,
since moving a laptop around is no longer difficult. They also ease file sharing and the
use of printers and other resources, with high security.
Wireless networks are now in use everywhere, in hospitals, organizations, universities, airports,
health care departments, stores, and much more, making the world a true global village.

Cellular network:

A cellular network is a radio network distributed over land areas called cells, each served by at
least one fixed-location transceiver known as a cell site or base station. When joined together
these cells provide radio coverage over a wide geographic area. This enables a large number of
portable transceivers (e.g., mobile phones, pagers, etc.) to communicate with each other and with
fixed transceivers and telephones anywhere in the network, via base stations, even if some of the
transceivers are moving through more than one cell during transmission.

Cellular networks offer a number of advantages over alternative solutions:

increased capacity
reduced power use
larger coverage area
reduced interference from other signals

An example of a simple non-telephone cellular system is an old taxi driver's radio system where
the taxi company has several transmitters based around a city that can communicate directly with
each taxi.

The concept:

In a cellular radio system, a land area to be supplied with radio service is divided into regular
shaped cells, which can be hexagonal, square, circular or some other irregular shapes, although
hexagonal cells are conventional. Each of these cells is assigned multiple frequencies (f1 - f6)
which have corresponding radio base stations. The group of frequencies can be reused in other
cells, provided that the same frequencies are not reused in adjacent neighboring cells as that
would cause co-channel interference.

The increased capacity in a cellular network, compared with a network with a single transmitter,
comes from the fact that the same radio frequency can be reused in a different area for a
completely different transmission. If there is a single plain transmitter, only one transmission can
be used on any given frequency. Unfortunately, there is inevitably some level of interference
from the signal from the other cells which use the same frequency. This means that, in a standard
FDMA system, there must be at least a one cell gap between cells which reuse the same
frequency.
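The capacity argument can be made concrete with the standard hexagonal-reuse relations: cluster size N = i^2 + ij + j^2 and co-channel reuse distance D = R * sqrt(3N), with per-cell capacity equal to the total channel count divided by N. The channel count below is illustrative.

```python
# Standard hexagonal frequency-reuse relations, with an invented channel budget.
import math

def cluster_size(i, j):
    """Cluster size N = i^2 + i*j + j^2 for hexagonal geometry."""
    return i * i + i * j + j * j

def reuse_distance(cell_radius, n):
    """Distance D between co-channel cells: D = R * sqrt(3N)."""
    return cell_radius * math.sqrt(3 * n)

def channels_per_cell(total_channels, n):
    """Each cell gets an equal share of the system's channels."""
    return total_channels // n

n = cluster_size(2, 1)                   # the common 7-cell reuse pattern
print(n)                                 # -> 7
print(channels_per_cell(490, n))         # -> 70 channels in each cell
print(round(reuse_distance(1.0, n), 2))  # -> 4.58 (in units of the cell radius)
```

A smaller cluster size packs more channels into each cell but shrinks the reuse distance, raising co-channel interference, which is exactly the trade-off the paragraph above describes.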

In the simple case of the taxi company, each radio had a manually operated channel selector
knob to tune to different frequencies. As the drivers moved around, they would change from
channel to channel. The drivers knew which frequency covered approximately what area. When
they did not receive a signal from the transmitter, they would try other channels until they found
one that worked. The taxi drivers would only speak one at a time, when invited by the base
station operator (in a sense TDMA).

Figure: Example of frequency reuse factor or pattern

Wireless ad-hoc network:

A wireless ad-hoc network is a decentralized type of wireless network.[1] The network is ad hoc
because it does not rely on a preexisting infrastructure, such as routers in wired networks or
access points in managed (infrastructure) wireless networks. Instead, each node participates in
routing by forwarding data for other nodes, and so the determination of which nodes forward
data is made dynamically based on the network connectivity. In addition to the classic routing,
ad hoc networks can use flooding for forwarding the data.
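Flooding with duplicate suppression, as mentioned above, can be sketched as follows; the topology, node names, and (source, sequence number) packet identifier are invented for the example.

```python
# Minimal sketch of ad hoc flooding: each node rebroadcasts a packet at most
# once, identifying it by (source, sequence number) to suppress duplicates.

def flood(topology, source, seq):
    """topology: {node: [neighbors]}. Returns the set of nodes reached."""
    seen = set()       # (node, source, seq) triples already processed
    reached = set()
    queue = [source]
    while queue:
        node = queue.pop(0)
        if (node, source, seq) in seen:
            continue   # duplicate: drop instead of re-flooding
        seen.add((node, source, seq))
        reached.add(node)
        queue.extend(topology.get(node, []))  # rebroadcast to all neighbors
    return reached

net = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
print(sorted(flood(net, "A", 1)))  # -> ['A', 'B', 'C', 'D']
```

Without the `seen` check, packets in the A-B loop would circulate forever, which is why practical flooding schemes always carry some form of duplicate identifier.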

The earliest wireless ad-hoc networks were the "packet radio" networks (PRNETs) from the
1970s, sponsored by DARPA after the ALOHAnet project.

Application:

The decentralized nature of wireless ad-hoc networks makes them suitable for a variety of
applications where central nodes can't be relied on, and may improve the scalability of wireless
ad-hoc networks compared to wireless managed networks, though theoretical[2] and practical[3]
limits to the overall capacity of such networks have been identified.

Minimal configuration and quick deployment make ad hoc networks suitable for emergency
situations like natural disasters or military conflicts. The presence of dynamic and adaptive
routing protocols enables ad-hoc networks to be formed quickly.

Wireless ad hoc networks can be further classified by their application:

mobile ad-hoc networks (MANET)


wireless mesh networks (WMN)
wireless sensor networks (WSN)
Wireless Sensor Networks
What are Wireless Sensor Networks

Administration is very important to running an organization successfully. The fast
telecommunication era, for all its positive aspects, brings many negative effects as well. For
proper administration, different tools have been devised to track, monitor, and control working
systems; the wireless sensor network is one such tool. Wireless networking is a handy way to
connect different computers remotely; a wireless sensor network is a wireless network of
spatially distributed autonomous devices that work in harmony to perform a specific function.

How Wireless Sensor Networks Work

The mechanism of a wireless sensor network is quite simple and applicable to a variety of fields.
Each node consists of a small sensor, a microcontroller, a radio transceiver, and a battery, and the
key to sensor networking is the multi-hop routing algorithm. The system depends entirely on the
nodes and on the coordination established between them through proper frequencies; nodes come
in different sizes according to the function they perform.

To activate the monitoring and tracking function of these nodes, a radio transceiver forwards the
information as radio waves. The nodes are controlled by the microcontroller according to the
function and device in which they are used, and the whole system is powered by a battery. The
nodes operate concurrently as autonomous devices distributed spatially through the field for
accurate results; the sensed information is collected as data and transmitted over a proper channel
to the base station.
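The sense-and-forward behavior described above can be sketched as a toy node model. The class, the per-transmission energy cost, and the reading format are hypothetical, invented purely for illustration.

```python
# Toy sensor-node model: a node forwards readings toward a base station
# through a fixed next hop, draining its battery per transmission.

class SensorNode:
    def __init__(self, node_id, next_hop=None, battery=100.0):
        self.node_id = node_id
        self.next_hop = next_hop   # None marks this node as the base station
        self.battery = battery
        self.collected = []        # readings that reached this node

    def receive(self, reading):
        if self.next_hop is None:
            self.collected.append(reading)  # base station stores the data
        else:
            self.transmit(reading)          # relay nodes forward it onward

    def transmit(self, reading):
        if self.battery <= 0:
            return                 # a dead node simply drops the packet
        self.battery -= 0.5        # radio transmission costs energy
        self.next_hop.receive(reading)

# A two-hop chain: leaf -> relay -> base.
base = SensorNode("base")
relay = SensorNode("relay", next_hop=base)
leaf = SensorNode("leaf", next_hop=relay)
leaf.transmit({"temp_c": 21.4})
print(base.collected)  # -> [{'temp_c': 21.4}]
```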

Types of Wireless Sensor Networks

Broadly speaking, there are two types of wireless sensor network: physical and environmental.
They are used to track and monitor heat, pressure, temperature, vibration, movement, pollution
levels, sound, etc.

Uses of Wireless Sensor Networks

According to their type, sensor networks are used by different organizations and fields to monitor
specific tasks. Sensor nodes are deployed at different points to monitor a specific area; a
well-known example is military communication on land or water. Environmental and industrial
hazards are among the major threats to life, and wireless sensor networks do valuable work in
these fields, sensing temperature for greenhouse gases, while implanted earthquake detectors
watch for landslides as a precautionary measure.

Pollution and the waste of natural resources are major problems today, and there is real danger of
exhausting natural reserves. Wireless sensor networks are successfully used to monitor water and
electricity use, to monitor wastewater in landfills for the cleaning process, and to detect water
levels in domestic and industrial tanks. Similarly, they sense light so that daylight can be used
fully until evening, switching the lights on automatically when dim light is detected; this is
practical in homes, offices, and factories. Machine-health monitoring is another important use,
keeping machinery running for a long time and reducing the need for large labor forces and cost.

Peer-to-peer:

Peer-to-peer (P2P) computing or networking is a distributed application architecture that


partitions tasks or workloads among peers. Peers are equally privileged, equipotent participants
in the application. They are said to form a peer-to-peer network of nodes.

Peers make a portion of their resources, such as processing power, disk storage or network
bandwidth, directly available to other network participants, without the need for central
coordination by servers or stable hosts.[1] Peers are both suppliers and consumers of resources, in
contrast to the traditional clientserver model where only servers supply (send), and clients
consume (receive).

The peer-to-peer application structure was popularized by file sharing systems like Napster. The
concept has inspired new structures and philosophies in many areas of human interaction. Peer-
to-peer networking is not restricted to technology, but covers also social processes with a peer-
to-peer dynamic. In such context, social peer-to-peer processes are currently emerging
throughout society

Architecture of P2P systems:

Peer-to-peer systems often implement an abstract overlay network, built at Application Layer, on
top of the native or physical network topology. Such overlays are used for indexing and peer
discovery and make the P2P system independent from the physical network topology. Content is
typically exchanged directly over the underlying Internet Protocol (IP) network. Anonymous
peer-to-peer systems are an exception, and implement extra routing layers to obscure the identity
of the source or destination of queries.

In structured peer-to-peer networks, peers (and, sometimes, resources) are organized following
specific criteria and algorithms, which lead to overlays with specific topologies and properties.
They typically use distributed hash table-based (DHT) indexing, such as in the Chord system
(MIT).

Unstructured peer-to-peer networks do not impose any structure on the overlay networks. Peers
in these networks connect in an ad-hoc fashion. Ideally, unstructured P2P systems would have
absolutely no centralized system, but in practice there are several types of unstructured systems
with various degrees of centralization. Three categories can easily be seen.
In pure peer-to-peer systems the entire network consists solely of equipotent peers.
There is only one routing layer, as there are no preferred nodes with any special
infrastructure function.
Hybrid peer-to-peer systems allow such infrastructure nodes to exist, often called
supernodes.
In centralized peer-to-peer systems, a central server is used for indexing functions
and to bootstrap the entire system. Although this has similarities with a structured
architecture, the connections between peers are not determined by any algorithm.

The first prominent and popular peer-to-peer file sharing system, Napster, was an example of the
centralized model. Freenet and early implementations of the gnutella protocol, on the other hand,
are examples of the decentralized model. Modern gnutella implementations, Gnutella2, as well
as the now deprecated Kazaa network are examples of the hybrid model.

A pure P2P network does not have the notion of clients or servers but only equal peer nodes that
simultaneously function as both "clients" and "servers" to the other nodes on the network. This
model of network arrangement differs from the client-server model where communication is
usually to and from a central server. A typical example of a file transfer that does not use the P2P
model is the File Transfer Protocol (FTP) service in which the client and server programs are
distinct: the clients initiate the transfer, and the servers satisfy these requests.

The P2P overlay network consists of all the participating peers as network nodes. There are links
between any two nodes that know each other: i.e. if a participating peer knows the location of
another peer in the P2P network, then there is a directed edge from the former node to the latter
in the overlay network. Based on how the nodes in the overlay network are linked to each other,
we can classify the P2P networks as unstructured or structured.

Structured systems:

Structured P2P networks employ a globally consistent protocol to ensure that any node can
efficiently route a search to some peer that has the desired file, even if the file is extremely rare.
Such a guarantee necessitates a more structured pattern of overlay links. By far the most
common type of structured P2P network is the distributed hash table (DHT), in which a variant
of consistent hashing is used to assign ownership of each file to a particular peer, in a way
analogous to a traditional hash table's assignment of each key to a particular array slot.
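A minimal sketch of DHT-style key placement via consistent hashing, in the spirit of Chord: peers and keys hash onto an identifier ring, and each key is owned by the first peer at or after its position. The ring size and peer names below are arbitrary choices for readability, not Chord's actual parameters.

```python
# Consistent-hashing sketch: keys and peers share one hash ring, and a key
# belongs to the first peer clockwise from its position (wrapping around).
import hashlib

RING_BITS = 16  # small identifier space, for readability only

def ring_pos(name):
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (1 << RING_BITS)

def owner(key, peers):
    k = ring_pos(key)
    positions = sorted((ring_pos(p), p) for p in peers)
    for pos, peer in positions:
        if pos >= k:
            return peer            # first peer at or after the key's position
    return positions[0][1]         # wrap around the ring

peers = ["peer-a", "peer-b", "peer-c"]
# The same key always maps to the same peer, so any node can route a lookup
# without consulting a central index:
assert owner("song.mp3", peers) == owner("song.mp3", peers)
```

The property that matters for P2P systems is that removing one peer only relocates the keys that peer owned, rather than reshuffling the whole table as a plain modulo-N scheme would.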

Advantages and weaknesses:

In P2P networks, clients provide resources, which may include bandwidth, storage space, and
computing power. This property is one of the major advantages of using P2P networks because it
makes the setup and running costs very small for the original content distributor. As nodes arrive
and demand on the system increases, the total capacity of the system also increases, and the
likelihood of failure decreases. If one peer on the network fails to function properly, the whole
network is not compromised or damaged. In contrast, in a typical client-server architecture,
clients share only their demands with the system, but not their resources. In this case, as more
clients join the system, fewer resources are available to serve each client, and if the central server
fails, the entire network is taken down. The decentralized nature of P2P networks increases
robustness because it removes the single point of failure that can be inherent in a client-server
based system.

Another important property of peer-to-peer systems is the lack of a system administrator. This
leads to a network that is easier and faster to set up and keep running because a full staff is not
required to ensure efficiency and stability. Decentralized networks introduce new security issues
because they are designed so that each user is responsible for controlling their data and
resources. Peer-to-peer networks, along with almost all network systems, are vulnerable to
unsecure and unsigned codes that may allow remote access to files on a victim's computer or
even compromise the entire network. A user may encounter harmful data by downloading a file
that was originally uploaded as a virus disguised in an .exe, .mp3, .avi, or any other filetype. This
type of security issue is due to the lack of an administrator that maintains the list of files being
distributed.

Harmful data can also be distributed on P2P networks by modifying files that are already being
distributed on the network. This type of security breach is created by the fact that users are
connecting to untrusted sources, as opposed to a maintained server. In the past this has happened
to the FastTrack network when the RIAA managed to introduce faked chunks into downloads
and downloaded files (mostly MP3 files). Files infected with the RIAA virus were unusable
afterwards or even contained malicious code. The RIAA is also known to have uploaded fake
music and movies to P2P networks in order to deter illegal file sharing. Consequently, the P2P
networks of today have seen an enormous increase of their security and file verification
mechanisms. Modern hashing, chunk verification and different encryption methods have made
most networks resistant to almost any type of attack, even when major parts of the respective
network have been replaced by faked or nonfunctional hosts.

There are both advantages and disadvantages in P2P networks related to the topic of data backup,
recovery, and availability. In a centralized network, the system administrators are the only forces
controlling the availability of files being shared. If the administrators decide to no longer
distribute a file, they simply have to remove it from their servers, and it will no longer be
available to users. Along with leaving the users powerless in deciding what is distributed
throughout the community, this makes the entire system vulnerable to threats and requests from
the government and other large forces. For example, YouTube has been pressured by the RIAA,
MPAA, and entertainment industry to filter out copyrighted content. Although server-client
networks are able to monitor and manage content availability, they can have more stability in the
availability of the content they choose to host. A client should not have trouble accessing obscure
content that is being shared on a stable centralized network. P2P networks, however, are more
unreliable in sharing unpopular files because sharing files in a P2P network requires that at least
one node in the network has the requested data, and that node must be able to connect to the node
requesting the data. This requirement is occasionally hard to meet because users may delete or
stop sharing data at any point.

In this sense, the community of users in a P2P network is completely responsible for deciding
what content is available. Unpopular files will eventually disappear and become unavailable as
more people stop sharing them. Popular files, however, will be highly and easily distributed.
Popular files on a P2P network actually have more stability and availability than files on central
networks. In a centralized network, a simple loss of connection between the clients and the server
is enough to cause a failure, but in a P2P network the connections between every node would
have to be lost for data sharing to fail. In a centralized system, the administrators are responsible
for all data recovery and backups, while in P2P systems, each node requires its own backup
system. Because of the lack of central authority in P2P networks, forces such as the recording
industry, RIAA, MPAA, and the government are unable to delete or stop the sharing of content
on P2P systems.

Wireless Routing Protocol:

The Wireless Routing Protocol (WRP) is a proactive unicast routing protocol for mobile ad-
hoc networks (MANETs).

Description:

WRP uses an enhanced version of the distance-vector routing protocol, which uses the Bellman-
Ford algorithm to calculate paths. Because of the mobile nature of the nodes within the MANET,
the protocol introduces mechanisms which reduce route loops and ensure reliable message
exchange.

WRP, similar to DSDV, inherits the properties of the distributed Bellman-Ford algorithm. To
counter the count-to-infinity problem and to enable faster convergence, it employs a unique
method of maintaining information regarding the shortest distance to every destination node in
the network and the penultimate hop node on the path to every destination node. Since WRP, like
DSDV, maintains an up-to-date view of the network, every node has a readily available route to
every destination node in the network. It differs from DSDV in table maintenance and in the
update procedures. While DSDV maintains only one topology table, WRP uses a set of tables to
maintain more accurate information. The tables that are maintained by a node are the following:
distance table (DT), routing table (RT), link cost table (LCT), and a message retransmission list
(MRL).

The DT contains the network view of the neighbors of a node. It contains a matrix where each
element contains the distance and the penultimate node reported by a neighbor for a particular
destination. The RT contains the up-to-date view of the network for all known destinations. It
keeps the shortest distance, the predecessor node (penultimate node), the successor node (the
next node to reach the destination), and a flag indicating the status of the path. The path status
may be a simple path (correct), or a loop (error), or the destination node not marked (null). The
LCT contains the cost (e.g., the number of hops to reach the destination) of relaying messages
through each link. The cost of a broken link is infinity. It also contains the number of update
periods (intervals between two successive periodic updates) passed since the last successful
update was received from that link. This is done to detect link breaks. The MRL contains an
entry for every update message that is to be retransmitted and maintains a counter for each entry.
This counter is decremented after every retransmission of an update message. Each update
message contains a list of updates. A node also marks each node in the RT that has to
acknowledge the update message it transmitted. Once the counter reaches zero, the entries in the
update message for which no acknowledgments have been received are to be retransmitted and
the update message is deleted. Thus, a node detects a link break by the number of update periods
missed since the last successful transmission. After receiving an update message, a node not only
updates the distance for transmission neighbors but also checks the other neighbors' distances;
hence convergence is much faster than in DSDV.

Method:

Each node implementing WRP keeps a routing table, a distance table, and a link cost table. It also
maintains a message retransmission list (MRL).

Routing table entries contain the distance to a destination node and the previous and next nodes
along the route, and are tagged to identify the route's state: a simple path, a loop, or an invalid
route. (Storing the predecessor and successor nodes assists in detecting loops and avoiding the
counting-to-infinity problem, a shortcoming of distance-vector routing.)

The link cost table maintains the cost of the link to its nearest neighbors (nodes within direct
transmission range), and the number of timeouts since successfully receiving a message from the
neighbor.

Nodes exchange routing tables with their neighbors via update messages, sent periodically and
whenever the link state table changes. The MRL maintains a list of the neighbors that have yet to
acknowledge an update message, so the message can be retransmitted if necessary. When there is
no change in the routing table, a node is required to transmit a 'hello' message to affirm its
connectivity.

When an update message is received, a node updates its distance table and reassesses the best
route paths. It also carries out a consistency check with its neighbors, to help eliminate loops and
speed up convergence.
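The update processing just described can be sketched in simplified form: on an update from a neighbor, a node recomputes the distance via that neighbor and uses the reported penultimate hop (predecessor) to reject paths that would loop back through itself. The table shapes below are invented for the example; real WRP maintains four separate tables (DT, RT, LCT, MRL).

```python
# Simplified WRP-style route selection with a predecessor-based loop check.
INF = float("inf")

def process_update(me, routes, link_cost, neighbor, updates):
    """routes: {dest: (distance, predecessor, next_hop)}
    link_cost: {neighbor: cost of the direct link}
    updates: {dest: (neighbor's distance, neighbor's predecessor)}"""
    for dest, (n_dist, n_pred) in updates.items():
        if n_pred == me:
            continue               # path runs through us: would form a loop
        cand = link_cost[neighbor] + n_dist
        if cand < routes.get(dest, (INF, None, None))[0]:
            routes[dest] = (cand, n_pred, neighbor)
    return routes

# Node A hears from neighbor B (link cost 1) that B reaches C at distance 1
# via predecessor B, and reaches A at distance 2 via predecessor A. The route
# to C is accepted; the "route to A" is rejected by the loop check.
routes = process_update("A", {}, {"B": 1}, "B", {"C": (1, "B"), "A": (2, "A")})
print(routes)  # -> {'C': (2, 'B', 'B')}
```

The predecessor check is what lets WRP-style protocols dodge the count-to-infinity behavior of plain distance-vector routing.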

List of ad hoc routing protocols:

An ad-hoc routing protocol is a convention, or standard, that controls how nodes decide which
way to route packets between computing devices in a mobile ad hoc network.

In ad-hoc networks, nodes are not familiar with the topology of their networks; instead, they
have to discover it. The basic idea is that a new node may announce its presence and should
listen for announcements broadcast by its neighbors. Each node learns about nodes nearby and
how to reach them, and may announce that it, too, can reach them.

Note that in a wider sense, ad hoc protocol can also be used literally, that is, to mean an
improvised and often impromptu protocol established for a specific purpose.

The following is a list of some ad hoc network routing protocols.

Pro-active (table-driven) routing:


This type of protocol maintains fresh lists of destinations and their routes by periodically
distributing routing tables throughout the network. The main disadvantages of such algorithms
are:

1. A large amount of data must be maintained at each node.

2. Slow reaction to restructuring and failures.

Reactive (on-demand) routing:

This type of protocol finds a route on demand by flooding the network with Route Request
packets. The main disadvantages of such algorithms are:

1. High latency in route finding.


2. Excessive flooding can lead to network clogging.

Dynamic Source Routing:

'Dynamic Source Routing' (DSR) is a routing protocol for wireless mesh networks. It is similar
to AODV in that it forms a route on-demand when a transmitting computer requests one.
However, it uses source routing instead of relying on the routing table at each intermediate
device.

Determining source routes requires accumulating the address of each device between the source
and destination during route discovery. The accumulated path information is cached by nodes
processing the route discovery packets. The learned paths are used to route packets. To
accomplish source routing, the routed packets contain the address of each device the packet will
traverse. This may result in high overhead for long paths or large addresses, like IPv6. To avoid
using source routing, DSR optionally defines a flow id option that allows packets to be
forwarded on a hop-by-hop basis.

This protocol is truly based on source routing whereby all the routing information is maintained
(continually updated) at mobile nodes. It has only two major phases, which are Route Discovery
and Route Maintenance. Route Reply would only be generated if the message has reached the
intended destination node (route record which is initially contained in Route Request would be
inserted into the Route Reply).

To return the Route Reply, the destination node must have a route to the source node. If the route
is in the Destination Node's route cache, the route would be used. Otherwise, the node will
reverse the route based on the route record in the Route Request message header (this requires
that all links are symmetric). In the event of a transmission failure, the Route Maintenance
phase is initiated, whereby Route Error packets are generated at a node. The erroneous hop is
removed from the node's route cache, and all routes containing the hop are truncated at that
point. The Route Discovery phase is then initiated again to determine the most viable route.

For information on other similar protocols, see the ad hoc routing protocol list.
Dynamic source routing protocol (DSR) is an on-demand protocol designed to restrict the
bandwidth consumed by control packets in ad hoc wireless networks by eliminating the periodic
table-update messages required in the table-driven approach. The major difference between this
and the other on-demand routing protocols is that it is beacon-less and hence does not require
periodic hello packet (beacon) transmissions, which are used by a node to inform its neighbors of
its presence. The basic approach of this protocol (and all other on-demand routing protocols)
during the route construction phase is to establish a route by flooding Route Request packets in
the network. The destination node, on receiving a Route Request packet, responds by sending a
Route Reply packet back to the source, which carries the route traversed by the Route Request
packet received.

Consider a source node that does not have a route to the destination. When it has data packets to
be sent to that destination, it initiates a Route Request packet. This Route Request is flooded
throughout the network. Each node, upon receiving a Route Request packet, rebroadcasts the
packet to its neighbors if it has not forwarded it already, provided that the node is not the
destination node and that the packet's time-to-live (TTL) counter has not been exceeded. Each
Route Request carries a sequence number generated by the source node and the path it has
traversed. A node, upon receiving a Route Request packet, checks the sequence number on the
packet before forwarding it. The packet is forwarded only if it is not a duplicate Route Request.
The sequence number on the packet is used to prevent loop formations and to avoid multiple
transmissions of the same Route Request by an intermediate node that receives it through
multiple paths. Thus, all nodes except the destination forward a Route Request packet during the
route construction phase. A destination node, after receiving the first Route Request packet,
replies to the source node through the reverse path the Route Request packet had traversed.
Nodes can also learn about the neighboring routes traversed by data packets if operated in the
promiscuous mode (the mode of operation in which a node can receive the packets that are
neither broadcast nor addressed to itself). This route cache is also used during the route
construction phase.
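The route-construction flooding described above can be sketched as a breadth-first search over the topology. This is a hypothetical simplification (a single request, an abstract graph rather than radio links): each node forwards a given Route Request at most once, the request accumulates the path it traverses, and the destination returns the recorded route as the Route Reply.

```python
from collections import deque


def dsr_route_discovery(graph, source, destination):
    """Flood a Route Request; return the route record reaching the destination."""
    seen = {source}            # duplicate suppression: forward each request once
    queue = deque([[source]])  # each entry is the accumulated route record
    while queue:
        route = queue.popleft()
        node = route[-1]
        if node == destination:
            return route       # the Route Reply carries this record back
        for neighbor in graph[node]:
            if neighbor not in seen:   # not a duplicate Route Request
                seen.add(neighbor)
                queue.append(route + [neighbor])
    return None                # destination unreachable


topology = {"S": ["A", "B"], "A": ["S", "D"], "B": ["S", "A"], "D": ["A"]}
print(dsr_route_discovery(topology, "S", "D"))  # ['S', 'A', 'D']
```

In the real protocol the destination unicasts the reversed route record back to the source (assuming symmetric links); here the shortest accumulated record simply pops out of the search first.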

Advantages and disadvantages:

This protocol uses a reactive approach which eliminates the need to periodically flood the
network with table update messages which are required in a table-driven approach. In a reactive
(on-demand) approach such as this, a route is established only when it is required and hence the
need to find routes to all other nodes in the network as required by the table-driven approach is
eliminated. The intermediate nodes also utilize the route cache information efficiently to reduce
the control overhead. The disadvantage of this protocol is that the route maintenance mechanism
does not locally repair a broken link. Stale route cache information could also result in
inconsistencies during the route reconstruction phase. The connection setup delay is higher than
in table-driven protocols. Even though the protocol performs well in static and low-mobility
environments, the performance degrades rapidly with increasing mobility. Also, considerable
routing overhead is involved due to the source-routing mechanism employed in DSR. This
routing overhead is directly proportional to the path length.
AODV

The Ad hoc On Demand Distance Vector (AODV) routing algorithm is a routing protocol
designed for ad hoc mobile networks. AODV is capable of both unicast and multicast routing. It
is an on demand algorithm, meaning that it builds routes between nodes only as desired by
source nodes. It maintains these routes as long as they are needed by the sources. Additionally,
AODV forms trees which connect multicast group members. The trees are composed of the
group members and the nodes needed to connect the members. AODV uses sequence numbers to
ensure the freshness of routes. It is loop-free, self-starting, and scales to large numbers of mobile
nodes.

AODV builds routes using a route request / route reply query cycle. When a source node desires
a route to a destination for which it does not already have a route, it broadcasts a route request
(RREQ) packet across the network. Nodes receiving this packet update their information for the
source node and set up backwards pointers to the source node in the route tables. In addition to
the source node's IP address, current sequence number, and broadcast ID, the RREQ also
contains the most recent sequence number for the destination of which the source node is aware.
A node receiving the RREQ may send a route reply (RREP) if it is either the destination or if it
has a route to the destination with corresponding sequence number greater than or equal to that
contained in the RREQ. If this is the case, it unicasts a RREP back to the source. Otherwise, it
rebroadcasts the RREQ. Nodes keep track of the RREQ's source IP address and broadcast ID. If
they receive a RREQ which they have already processed, they discard the RREQ and do not
forward it.

As the RREP propagates back to the source, nodes set up forward pointers to the destination.
Once the source node receives the RREP, it may begin to forward data packets to the destination.
If the source later receives a RREP containing a greater sequence number or contains the same
sequence number with a smaller hop count, it may update its routing information for that
destination and begin using the better route.
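The reply rule described above can be condensed into a small decision function. This is a sketch with hypothetical field names, not the full RFC-defined packet handling: a node answers a RREQ only if it is the destination, or if it holds a route whose destination sequence number is at least as fresh as the one carried in the RREQ; otherwise it rebroadcasts.

```python
def handle_rreq(node_id, route_table, rreq):
    """Decide whether a node replies to or rebroadcasts an AODV route request."""
    if node_id == rreq["dest"]:
        return "send RREP"             # we are the destination itself
    entry = route_table.get(rreq["dest"])
    if entry and entry["seq"] >= rreq["dest_seq"]:
        return "send RREP"             # cached route is fresh enough
    return "rebroadcast RREQ"          # keep flooding toward the destination


rreq = {"src": "S", "dest": "D", "dest_seq": 5}
print(handle_rreq("D", {}, rreq))                   # send RREP
print(handle_rreq("A", {"D": {"seq": 7}}, rreq))    # send RREP (seq 7 >= 5)
print(handle_rreq("B", {"D": {"seq": 3}}, rreq))    # rebroadcast RREQ (stale)
```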

As long as the route remains active, it will continue to be maintained. A route is considered
active as long as there are data packets periodically travelling from the source to the destination
along that path. Once the source stops sending data packets, the links will time out and
eventually be deleted from the intermediate node routing tables. If a link break occurs while the
route is active, the node upstream of the break propagates a route error (RERR) message to the
source node to inform it of the now unreachable destination(s). After receiving the RERR, if the
source node still desires the route, it can reinitiate route discovery.

Multicast routes are set up in a similar manner. A node wishing to join a multicast group
broadcasts a RREQ with the destination IP address set to that of the multicast group and with the
'J'(join) flag set to indicate that it would like to join the group. Any node receiving this RREQ
that is a member of the multicast tree that has a fresh enough sequence number for the multicast
group may send a RREP. As the RREPs propagate back to the source, the nodes forwarding the
message set up pointers in their multicast route tables. As the source node receives the RREPs, it
keeps track of the route with the freshest sequence number, and beyond that the smallest hop
count to the next multicast group member. After the specified discovery period, the source node
will unicast a Multicast Activation (MACT) message to its selected next hop. This message
serves the purpose of activating the route. A node that does not receive this message that had set
up a multicast route pointer will timeout and delete the pointer. If the node receiving the MACT
was not already a part of the multicast tree, it will also have been keeping track of the best route
from the RREPs it received. Hence it must also unicast a MACT to its next hop, and so on until a
node that was previously a member of the multicast tree is reached.

AODV maintains routes for as long as the route is active. This includes maintaining a multicast
tree for the life of the multicast group. Because the network nodes are mobile, it is likely that
many link breakages along a route will occur during the lifetime of that route. The papers listed
below describe how link breakages are handled. The WMCSA paper describes AODV without
multicast but includes detailed simulation results for networks up to 1000 nodes. The Mobicom
paper describes AODV's multicast operation and details simulations which show its correct
operation. The internet drafts include descriptions of both unicast and multicast route discovery,
as well as mentioning how QoS and subnet aggregation can be used with AODV. Finally, the
IEEE Personal Communications paper and the Infocom paper detail an in-depth study of
simulations comparing AODV with the Dynamic Source Routing (DSR) protocol, and examine
each protocol's respective strengths and weaknesses.

Location-Aided Routing Protocol

LAR is an on-demand protocol based on DSR (Dynamic Source Routing).

The Location-Aided Routing protocol uses location information to reduce the routing overhead of
the ad-hoc network. Normally the LAR protocol obtains this location information from GPS
(Global Positioning System). With the availability of GPS, the mobile hosts know their
physical location.

To reduce the complexity of the protocol, we assume that every host knows its position exactly;
the difference between the exact position and the position calculated by GPS is not
considered. We also assume that the mobile nodes move only in a two-dimensional plane.

Expected Zone & Request Zone


Expected Zone

First, consider that node S (source) needs to find a route to node D (destination). Node S
knows that D was at position L. The "expected zone" of node D, from the viewpoint of
node S, is the region that node S expects to contain node D. If node S knows that node D
travels at a certain speed, node S uses this speed to determine the expected zone of D.
If node S has no information about the position of D, the entire region of the ad-hoc network
is assumed to be the expected zone, and the algorithm reduces to the basic flooding algorithm.
In general, the more node S knows about D, the smaller the expected zone can be.
figure 1
a) The circular expected zone of node D
b) When node S knows that D is moving north, the circular expected zone can be reduced to
a semicircle.

Request Zone

Again, consider that node S needs to find a path to node D. Node S defines a request zone
as in figure 1 (a). S then sends a route request as in the normal flooding algorithm, with
the difference that a node only forwards this route request when it is inside the request zone.
There are two reasons why regions outside the expected zone have to be included in
the request zone:

1) Node S is not in the expected zone of D; the expected zone then has to be enlarged to
form the request zone, as in figure 2 (a).

2) The request zone of figure 2 (a) may still not be a good request zone. In figure 2 (b),
all the nodes between S and D lie outside the request zone, so it is not guaranteed that a
path between S and D can be found. LAR allows the request zone to be expanded so that a
path is easier to find. However, when the request zone is increased as in figure 2 (c), the
route discovery overhead also increases.
figure 2
figure 2

Membership of the Request Zone

LAR algorithms are essentially identical to the flooding algorithm, with the difference that a
node outside the request zone does not forward the route request to its neighbors. There are
two different ways to determine whether a node is a member of the request zone:

1) LAR Scheme 1

The first scheme uses a rectangular request zone. Node S includes the coordinates of the
four corners of the request zone in the route request message. Node S knows its own
coordinates from its GPS; the opposite corner of the rectangle is given by the corner of the
expected zone. With these two points, node S can construct the rectangle and obtain the
coordinates of its four corners. Figure 3 shows this rectangle and illustrates how node S
builds it.

figure 3
When a node's own coordinates lie outside the rectangle, the node discards the route
request, so the flooding of the ad-hoc network is reduced.
When the route request sent by node S arrives at node D, D replies with a route reply
message. Node D includes in this route reply message its current location and the time at
which the message was sent. When node S receives this message, it records the location of D
and can use it to determine the request zone in a future route discovery.
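The membership test of scheme 1 is a simple point-in-rectangle check. The sketch below uses hypothetical coordinates: node S places two opposite corners of the rectangular request zone in the route request, and an intermediate node forwards the request only if its own GPS position falls inside that rectangle.

```python
def in_request_zone(node_xy, corner_a, corner_b):
    """True if a node's (x, y) position lies inside the rectangular request zone
    defined by two opposite corners carried in the route request."""
    (x, y), (xa, ya), (xb, yb) = node_xy, corner_a, corner_b
    return min(xa, xb) <= x <= max(xa, xb) and min(ya, yb) <= y <= max(ya, yb)


# S at (0, 0); the far corner of the expected zone at (10, 8).
print(in_request_zone((4, 5), (0, 0), (10, 8)))   # True  -> forward the request
print(in_request_zone((12, 5), (0, 0), (10, 8)))  # False -> discard the request
```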

2) LAR Scheme 2

In this scheme of the LAR protocol, node S, which wants to find a path to D, knows the location
(Xd, Yd) of D. With these coordinates, node S calculates its distance from D (DISTd). Both the
coordinates and the distance are included in the route request. When a node receives the route
request from S, it calculates the distance between itself and D. If this distance is larger
than the DISTd carried in the request, it discards the route request; otherwise, it forwards
the route request to its neighbors with its own distance to D and the coordinates of D
included. In this way the route request arrives at node D, and a route reply is sent back to S.
Figure 4 helps in understanding scheme 2 of the LAR protocol.

figure 4
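The forwarding rule of scheme 2 can be sketched as follows (coordinates and names are hypothetical): a node forwards the request only if it is no farther from D than the distance recorded in the request, and on forwarding it replaces that distance with its own, so the request makes monotonic progress toward D.

```python
import math


def forward(node_xy, dest_xy, dist_in_request):
    """Return (should_forward, distance to place in the forwarded request)."""
    dist = math.dist(node_xy, dest_xy)       # this node's distance to D
    if dist <= dist_in_request:
        return True, dist                    # forward with the updated distance
    return False, dist_in_request            # discard the route request


dest = (10.0, 0.0)                           # known location (Xd, Yd) of D
ok, d = forward((6.0, 0.0), dest, dist_in_request=10.0)
print(ok, d)   # True 4.0  -> forwarded, request now carries DIST = 4.0
ok, d = forward((6.0, 12.0), dest, dist_in_request=10.0)
print(ok)      # False     -> farther from D than DISTd, so discarded
```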
Mobility model:

Mobility models represent the movement of mobile users, and how their location, velocity and
acceleration change over time. Such models are frequently used for simulation purposes when
new communication or navigation techniques are investigated. Mobility management schemes
for mobile communication systems make use of mobility models for predicting future user
positions.

For mobility modelling, the behaviour or activity of a user's movement can be described using
both analytical and simulation models. The input to analytical mobility models are simplifying
assumptions regarding the movement behaviors of users. Such models can provide performance
parameters for simple cases through mathematical calculations. In contrast, simulation models
consider more detailed and realistic mobility scenarios. Such models can derive valuable
solutions for more complex cases. Typical mobility models include

Brownian model
random waypoint model
random walk model
random direction model
random Gauss-Markov model
Markovian model
incremental model
mobility vector model
reference point group model (RPGM)
pursue model
nomadic community model
column model
fluid flow model
exponential correlated random model
map based model
Random waypoint model:

The Random waypoint model is a random-based mobility model used in mobility management
schemes for mobile communication systems. The mobility model is designed to describe the
movement pattern of mobile users, and how their location, velocity and acceleration change over
time. Mobility models are used for simulation purposes when new network protocols are
evaluated.

In random-based mobility simulation models, the mobile nodes move randomly and freely
without restrictions. To be more specific, the destination, speed and direction are all chosen
randomly and independently of other nodes. This kind of model has been used in many
simulation studies.

The Random waypoint model, first proposed by Johnson and Maltz[1], soon became a
"benchmark" mobility model[2] to evaluate the Mobile ad hoc network (MANET) routing
protocols, because of its simplicity and wide availability.

The Random walk model and the Random direction model are variants of the Random waypoint
model.
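A minimal sketch of the model, with arbitrary area and speed parameters chosen for illustration: each node repeatedly picks a random destination in the simulation area and a random speed, moves there in a straight line, then picks the next waypoint, with all choices independent of other nodes. (A pause time at each waypoint, common in the literature, is omitted here for brevity.)

```python
import math
import random


def random_waypoint(area=(100.0, 100.0), speed=(1.0, 5.0), steps=3, seed=42):
    """Generate a (time, x, y) trace of one node under the Random waypoint model."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])  # initial position
    t = 0.0
    trace = [(t, x, y)]
    for _ in range(steps):
        # choose the next waypoint and a travel speed, each uniformly at random
        nx, ny = rng.uniform(0, area[0]), rng.uniform(0, area[1])
        v = rng.uniform(*speed)
        t += math.dist((x, y), (nx, ny)) / v   # time to reach the waypoint
        x, y = nx, ny
        trace.append((t, x, y))
    return trace


for waypoint in random_waypoint():
    print(waypoint)
```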

UNIT III - TYPES OF WIRELESS NETWORKS


PART A (2 MARKS)
1. Define Adhoc network.
2. Explain Adhoc routing protocol.
3. Define wireless sensor network.
4. What is meant by peer-to-peer network?
5. Define LAR (Location-Aided Routing).
6. What are the disadvantages of Reactive routing?
PART B (16 MARKS)
1. Explain in detail about Ad-hoc networks and Ad-hoc routing.
2. Explain in detail about Sensor networks and Peer-Peer networks.
3. Briefly explain the different types of Mobile routing protocols.
4. Define mobility models and briefly explain them.

5. Explain in detail about Reactive routing and Location aided routing.


UNIT IV

ISSUES AND CHALLENGES:

VoIP Applications:

Voice over Internet Protocol (Voice over IP, VoIP) is a family of technologies, methodologies,
communication protocols, and transmission techniques for the delivery of voice communications
and multimedia sessions over Internet Protocol (IP) networks, such as the Internet. Other terms
frequently encountered and often used synonymously with VoIP are IP telephony, Internet
telephony, voice over broadband (VoBB), broadband telephony, and broadband phone.

Internet telephony refers to communications services (voice, fax, SMS, and/or voice-messaging
applications) that are transported via the Internet, rather than the public switched telephone
network (PSTN). The steps involved in originating a VoIP telephone call are signaling and
media channel setup, digitization of the analog voice signal, encoding, packetization, and
transmission as Internet Protocol (IP) packets over a packet-switched network. On the receiving
side, similar steps (usually in the reverse order) such as reception of the IP packets, decoding of
the packets and digital-to-analog conversion reproduce the original voice stream. Even though IP
Telephony and VoIP are terms that are used interchangeably, they are actually different; IP
telephony has to do with digital telephony systems that use IP protocols for voice communication
while VoIP is actually a subset of IP Telephony. VoIP is a technology used by IP telephony as a
means of transporting phone calls.

VoIP systems employ session control protocols to control the set-up and tear-down of calls as
well as audio codecs which encode speech allowing transmission over an IP network as digital
audio via an audio stream. The codec used is varied between different implementations of VoIP
(and often a range of codecs are used); some implementations rely on narrowband and
compressed speech, while others support high fidelity stereo codecs.

VoIP is available on many smartphones and Internet devices, so even users of portable
devices that are not phones can still make calls or send SMS text messages over 3G or Wi-Fi.

Protocols:

Voice over IP has been implemented in various ways using both proprietary and open protocols
and standards. Examples of technologies used to implement Voice over IP include:

H.323
IP Multimedia Subsystem (IMS)
Media Gateway Control Protocol (MGCP)
Session Initiation Protocol (SIP)
Real-time Transport Protocol (RTP)
Session Description Protocol (SDP)
Inter-Asterisk eXchange (IAX)
The H.323 protocol was one of the first VoIP protocols that found widespread implementation
for long-distance traffic, as well as local area network services. However, since the development
of newer, less complex protocols, such as MGCP and SIP, H.323 deployments are increasingly
limited to carrying existing long-haul network traffic. In particular, the Session Initiation
Protocol (SIP) has gained widespread VoIP market penetration.

A notable proprietary implementation is the Skype protocol, which is in part based on the
principles of peer-to-peer (P2P) networking.

Advantages:

Operational cost

VoIP can be a benefit for reducing communication and infrastructure costs. Examples include:

Routing phone calls over existing data networks to avoid the need for separate voice and
data networks.
The ability to transmit more than one telephone call over a single broadband connection.
Secure calls using standardized protocols (such as Secure Real-time Transport Protocol).
Most of the difficulties of creating a secure telephone connection over traditional phone
lines, such as digitizing and digital transmission, are already in place with VoIP. It is only
necessary to encrypt and authenticate the existing data stream.

Challenges:

Quality of service

Communication on the IP network is inherently less reliable than on the circuit-switched
public telephone network, as it does not provide a network-based mechanism to ensure that data
packets are not lost, and are delivered in sequential order. It is a best-effort network without
fundamental Quality of Service (QoS) guarantees. Therefore, VoIP implementations may face
problems mitigating latency and jitter.

By default, network routers handle traffic on a first-come, first-served basis. Network routers on
high volume traffic links may introduce latency that exceeds permissible thresholds for VoIP.
Fixed delays cannot be controlled, as they are caused by the physical distance the packets travel;
however, latency can be minimized by marking voice packets as being delay-sensitive with
methods such as DiffServ.

A VoIP packet usually has to wait for the current packet to finish transmission. It is
possible to preempt (abort) a less important packet in mid-transmission, but this is not
commonly done, especially on high-speed links where transmission times are short even for
maximum-sized packets. An alternative to preemption on slower links, such as dialup and DSL,
is to reduce the maximum transmission time by reducing the maximum transmission unit. But
every packet must contain protocol headers, so this increases relative header overhead on every
link along the user's Internet paths, not just the bottleneck (usually Internet access) link.
ADSL modems provide Ethernet (or Ethernet over USB) connections to local equipment, but
inside they are actually Asynchronous Transfer Mode (ATM) modems. They use ATM
Adaptation Layer 5 (AAL5) to segment each Ethernet packet into a series of 53-byte ATM cells
for transmission and reassemble them back into Ethernet packets at the receiver. A virtual circuit
identifier (VCI) is part of the 5-byte header on every ATM cell, so the transmitter can multiplex
the active virtual circuits (VCs) in any arbitrary order. Cells from the same VC are always sent
sequentially.

However, the great majority of DSL providers use only one VC for each customer, even those
with bundled VoIP service. Every Ethernet packet must be completely transmitted before another
can begin. If a second PVC were established, given high priority and reserved for VoIP, then a
low priority data packet could be suspended in mid-transmission and a VoIP packet sent right
away on the high priority VC. Then the link would pick up the low priority VC where it left off.
Because ATM links are multiplexed on a cell-by-cell basis, a high priority packet would have to
wait at most 53 byte times to begin transmission. There would be no need to reduce the interface
MTU and accept the resulting increase in higher layer protocol overhead, and no need to abort a
low priority packet and resend it later.

ATM's potential for latency reduction is greatest on slow links, because worst-case latency
decreases with increasing link speed. A full-size (1500 byte) Ethernet frame takes 94 ms to
transmit at 128 kbit/s but only 8 ms at 1.5 Mbit/s. If this is the bottleneck link, this latency is
probably small enough to ensure good VoIP performance without MTU reductions or multiple
ATM PVCs. The latest generations of DSL, VDSL and VDSL2, carry Ethernet without
intermediate ATM/AAL5 layers, and they generally support IEEE 802.1p priority tagging so that
VoIP can be queued ahead of less time-critical traffic.
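The serialization delays quoted above can be checked directly: the time to put a frame on the wire is its size in bits divided by the link rate. A short sanity check:

```python
def tx_time_ms(frame_bytes, link_bps):
    """Serialization delay of a frame on a link, in milliseconds."""
    return frame_bytes * 8 / link_bps * 1000


print(round(tx_time_ms(1500, 128_000)))    # full Ethernet frame at 128 kbit/s: ~94 ms
print(round(tx_time_ms(1500, 1_500_000)))  # the same frame at 1.5 Mbit/s: 8 ms
print(tx_time_ms(53, 128_000))             # one 53-byte ATM cell at 128 kbit/s: 3.3125 ms
```

The last line shows why cell-by-cell ATM multiplexing bounds the worst-case wait for a high-priority packet to a single cell time.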

Voice, and all other data, travels in packets over IP networks with fixed maximum capacity. This
system may be more prone to congestion and DoS attacks than traditional circuit switched
systems; a circuit switched system of insufficient capacity will refuse new connections while
carrying the remainder without impairment, while the quality of real-time data such as telephone
conversations on packet-switched networks degrades dramatically.

Fixed delays cannot be controlled as they are caused by the physical distance the packets travel.
They are especially problematic when satellite circuits are involved because of the long distance
to a geostationary satellite and back; delays of 400 to 600 ms are typical.

When the load on a link grows so quickly that its switches experience queue overflows,
congestion results and data packets are lost. This signals a transport protocol like TCP to reduce
its transmission rate to alleviate the congestion. But VoIP usually uses UDP not TCP because
recovering from congestion through retransmission usually entails too much latency. So QoS
mechanisms can avoid the undesirable loss of VoIP packets by immediately transmitting them
ahead of any queued bulk traffic on the same link, even when that bulk traffic queue is
overflowing.

The receiver must resequence IP packets that arrive out of order and recover gracefully when
packets arrive too late or not at all. Jitter results from the rapid and random (i.e., unpredictable)
changes in queue lengths along a given Internet path due to competition from other users for the
same transmission links. VoIP receivers counter jitter by storing incoming packets briefly in a
"de-jitter" or "playout" buffer, deliberately increasing latency to improve the chance that each
packet will be on hand when it is time for the voice engine to play it. The added delay is thus a
compromise between excessive latency and excessive dropout, i.e., momentary audio
interruptions.

Although jitter is a random variable, it is the sum of several other random variables that are at
least somewhat independent: the individual queuing delays of the routers along the Internet path
in question. Thus, according to the central limit theorem, we can model jitter as a Gaussian
random variable. This suggests continually estimating the mean delay and its standard deviation
and setting the playout delay so that only packets delayed more than several standard deviations
above the mean will arrive too late to be useful. In practice, however, the variance in latency of
many Internet paths is dominated by a small number (often one) of relatively slow and congested
"bottleneck" links. Most Internet backbone links are now so fast (e.g. 10 Gbit/s) that their delays
are dominated by the transmission medium (e.g. optical fiber) and the routers driving them do
not have enough buffering for queuing delays to be significant.
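The playout-delay rule described above (mean plus several standard deviations) can be sketched as follows; the sample delays and the choice of k = 3 are illustrative, not prescriptive.

```python
import statistics


def playout_delay(delays_ms, k=3):
    """Set the de-jitter buffer's playout delay k standard deviations above the
    mean observed network delay, so only rare stragglers arrive too late."""
    mean = statistics.mean(delays_ms)
    sd = statistics.stdev(delays_ms)
    return mean + k * sd


recent = [40, 42, 41, 45, 39, 60, 43, 44]   # observed one-way delays (ms)
print(round(playout_delay(recent), 1))      # buffer target, above the mean delay
```

In practice the mean and deviation would be estimated continually (e.g., with exponentially weighted averages) rather than over a fixed window.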

It has been suggested to rely on the packetized nature of media in VoIP communications and
transmit the stream of packets from the source phone to the destination phone simultaneously
across different routes (multi-path routing). In such a way, temporary failures have less impact
on the communication quality. In capillary routing, it has been suggested to use Fountain
codes, in particular Raptor codes, at the packet level to transmit extra redundant packets,
making the communication more reliable.

A number of protocols have been defined to support the reporting of QoS/QoE for VoIP calls.
These include RTCP Extended Report (RFC 3611), SIP RTCP Summary Reports, H.460.9
Annex B (for H.323), H.248.30 and MGCP extensions. The RFC 3611 VoIP Metrics block is
generated by an IP phone or gateway during a live call and contains information on packet loss
rate, packet discard rate (because of jitter), packet loss/discard burst metrics (burst
length/density, gap length/density), network delay, end system delay, signal / noise / echo level,
Mean Opinion Scores (MOS) and R factors and configuration information related to the jitter
buffer.

RFC 3611 VoIP metrics reports are exchanged between IP endpoints on an occasional basis
during a call, and an end-of-call message is sent via a SIP RTCP Summary Report or one of the
other signaling protocol extensions. RFC 3611 VoIP metrics reports are intended to support real-time
feedback related to QoS problems, the exchange of information between the endpoints for
improved call quality calculation and a variety of other applications.

Layer-2 quality of service

A number of protocols that deal with the data link layer and physical layer include quality-of-
service mechanisms that can be used to ensure that applications like VoIP work well even in
congested scenarios. Some examples include:
IEEE 802.11e is an approved amendment to the IEEE 802.11 standard that defines a
set of quality-of-service enhancements for wireless LAN applications through
modifications to the Media Access Control (MAC) layer. The standard is considered
of critical importance for delay-sensitive applications, such as Voice over Wireless
IP.
IEEE 802.1p defines 8 different classes of service (including one dedicated to voice)
for traffic on layer-2 wired Ethernet.
The ITU-T G.hn standard, which provides a way to create a high-speed (up to 1
gigabit per second) Local area network using existing home wiring (power lines,
phone lines and coaxial cables). G.hn provides QoS by means of "Contention-Free
Transmission Opportunities" (CFTXOPs) which are allocated to flows (such as a
VoIP call) which require QoS and which have negotiated a "contract" with the
network controllers.

Susceptibility to power failure

Telephones for traditional residential analog service are usually connected directly to telephone
company phone lines which provide direct current to power most basic analog handsets
independently of locally available power.

IP Phones and VoIP telephone adapters connect to routers or cable modems which typically
depend on the availability of mains electricity or locally generated power. Some VoIP service
providers use customer premises equipment (e.g., cable modems) with battery-backed power
supplies to assure uninterrupted service for up to several hours in case of local power failures.
Such battery-backed devices typically are designed for use with analog handsets.

Some VoIP service providers implement services to route calls to another telephone service of
the subscriber, such as a cellular phone, when the customer's network device is unreachable
and cannot terminate the call.

The susceptibility of phone service to power failures is a common problem even with traditional
analog service in areas where many customers purchase modern telephone units that operate with
wireless handsets to a base station, or that have other modern phone features, such as built-in
voicemail or phone book features.

Emergency calls

The nature of IP makes it difficult to locate network users geographically. Emergency calls,
therefore, cannot easily be routed to a nearby call center. Sometimes, VoIP systems may route
emergency calls to a non-emergency phone line at the intended department; in the United States,
at least one major police department has strongly objected to this practice as potentially
endangering the public.

A fixed line phone has a direct relationship between a telephone number and a physical location.
If an emergency call comes from that number, then the physical location is known.
In the IP world, it is not so simple. A broadband provider may know the location where the wires
terminate, but this does not necessarily allow the mapping of an IP address to that location. IP
addresses are often dynamically assigned, allocated by the ISP when the user goes online or
when the broadband router connects. The ISP knows which IP addresses are in use, but does not
necessarily know the physical location to which each corresponds. The broadband service
provider knows the physical location, but is not necessarily tracking the IP addresses in use.

There are more complications since IP allows a great deal of mobility. For example, a broadband
connection can be used to dial a virtual private network that is employer-owned. When this is
done, the IP address being used will belong to the range of the employer rather than that of the
ISP, so the apparent location could be many kilometres away or even in another country. To provide
another example: if mobile data is used, e.g., a 3G mobile handset or USB wireless broadband
adapter, then the IP address has no relationship with any physical location, since a mobile user
could be anywhere that there is network coverage, even roaming via another cellular company.

In short, there is no relationship between IP address and physical location, so the address itself
reveals no useful information for the emergency services.

At the VoIP level, a phone or gateway may identify itself with a SIP registrar by using a
username and password. In this case, the Internet Telephony Service Provider (ITSP) knows
that a particular user is online, and can relate a specific telephone number to the user. However,
it does not know where that IP traffic originated. Since the IP address itself does not
necessarily provide location information, the current "best efforts" approach is to use an
available database to find that user and the physical address the user chose to associate with that
telephone number, which is clearly an imperfect solution.

VoIP Enhanced 911 (E911) is a method by which VoIP providers in the United States support
emergency services. The VoIP E911 emergency-calling system associates a physical address
with the calling party's telephone number as required by the Wireless Communications and
Public Safety Act of 1999. All VoIP providers that provide access to the public switched
telephone network are required to implement E911, a service for which the subscriber may be
charged. Participation in E911 is not required and customers may opt-out of E911 service.[20]

One shortcoming of VoIP E911 is that the emergency system is based on a static table lookup.
Unlike in cellular phones, where the location of an E911 call can be traced using Assisted GPS
or other methods, the VoIP E911 information is only accurate so long as subscribers are diligent
in keeping their emergency address information up-to-date. In the United States, the Wireless
Communications and Public Safety Act of 1999 leaves the burden of responsibility upon the
subscribers and not the service providers to keep their emergency information up to date.
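The "static table lookup" described above is, in essence, a dictionary keyed by telephone number. The sketch below (all numbers and addresses are invented) shows both the lookup and its failure mode: the record is only as fresh as the subscriber's last update.

```python
# Hypothetical registered-address table; in a real deployment this is the
# provider's E911 location database, updated only when the subscriber updates it.
registered_address = {
    "+15551230001": "12 Elm St, Springfield",
    "+15551230002": "480 Oak Ave, Shelbyville",
}

def e911_location(caller_number: str) -> str:
    # Best-effort: returns whatever address the subscriber last registered,
    # which may be stale if the VoIP adapter has since been moved.
    return registered_address.get(
        caller_number, "UNKNOWN -- route to national call center")
```

Unlike Assisted GPS on a cellular handset, nothing in this scheme detects that the adapter behind "+15551230001" is no longer at Elm Street.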

Lack of redundancy

With the current separation of the Internet and the PSTN, a certain amount of redundancy is
provided. An Internet outage does not necessarily mean that a voice communication outage will
occur simultaneously, allowing individuals to call for emergency services and many businesses
to continue to operate normally. In situations where telephone services become completely
reliant on the Internet infrastructure, a single-point failure can isolate communities from all
communication, including Enhanced 911 and equivalent services in other locales. However, the
Internet as designed by DARPA in the early 1980s was specifically built to be fault tolerant
under adverse conditions. Even during the September 11 attacks on the World Trade Center, the
Internet routed data around the failed nodes housed in or near the towers. Single-point
failures, while possible in some geographic areas, are therefore not the norm for the Internet as a whole.

Number portability

Local number portability (LNP) and Mobile number portability (MNP) also impact VoIP
business. In November 2007, the Federal Communications Commission in the United States
released an order extending number portability obligations to interconnected VoIP providers and
carriers that support VoIP providers. Number portability is a service that allows a subscriber to
select a new telephone carrier without requiring a new number to be issued. Typically, it is the
responsibility of the former carrier to "map" the old number to the undisclosed number assigned
by the new carrier. This is achieved by maintaining a database of numbers. A dialed number is
initially received by the original carrier and quickly rerouted to the new carrier. Multiple porting
references must be maintained even if the subscriber returns to the original carrier. The FCC
mandates carrier compliance with these consumer-protection stipulations.

A voice call originating in the VoIP environment also faces challenges to reach its destination if
the number is routed to a mobile phone number on a traditional mobile carrier. VoIP has been
identified in the past as a Least Cost Routing (LCR) system, which is based on checking the
destination of each telephone call as it is made, and then sending the call via the network that
will cost the customer the least. This rating is subject to some debate given the complexity of call
routing created by number portability. With GSM number portability now in place, LCR
providers can no longer rely on using the network root prefix to determine how to route a call.
Instead, they must now determine the actual network of every number before routing the call.

Therefore, VoIP solutions also need to handle MNP when routing a voice call. In countries
without a central database, like the UK, it might be necessary to query the GSM network about
which home network a mobile phone number belongs to. As the popularity of VoIP increases in
the enterprise markets because of least cost routing options, it needs to provide a certain level of
reliability when handling calls.

MNP checks are important to assure that this quality of service is met. By handling MNP
lookups before routing a call and by assuring that the voice call will actually work, VoIP service
providers are able to offer business subscribers the level of reliability they require.
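The MNP-aware routing described above amounts to two steps: resolve the dialed number to its actual home network (rather than trusting the number-range prefix), then pick the cheapest carrier route for that network. The sketch below illustrates this with entirely invented network names, porting records, and rates.

```python
# Invented portability database: dialed number -> actual home network.
# This number keeps its old NetA-style prefix but has been ported to NetB.
ported = {"+447700900123": "NetB"}

def home_network(number: str) -> str:
    if number in ported:
        return ported[number]
    # Fall back to the number-range prefix, correct only for unported numbers.
    return "NetA" if number.startswith("+44770") else "NetC"

# Invented per-network termination rates (pence/min) for two wholesale routes.
rates = {("RouteX", "NetA"): 0.8, ("RouteX", "NetB"): 2.1,
         ("RouteY", "NetA"): 1.0, ("RouteY", "NetB"): 1.4}

def least_cost_route(number: str) -> str:
    net = home_network(number)
    return min(("RouteX", "RouteY"), key=lambda r: rates[(r, net)])
```

Without the MNP lookup, the ported number above would be misrouted via the route that is cheapest for NetA but more expensive (or simply wrong) for its actual network.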

PSTN integration

E.164 is a global numbering standard for both the PSTN and PLMN. Most VoIP
implementations support E.164 to allow calls to be routed to and from VoIP subscribers and the
PSTN/PLMN. VoIP implementations can also allow other identification techniques to be used.
For example, Skype allows subscribers to choose "Skype names" (usernames) whereas SIP
implementations can use URIs similar to email addresses. Often VoIP implementations employ
methods of translating non-E.164 identifiers to E.164 numbers and vice-versa, such as the
Skype-In service provided by Skype and the ENUM service in IMS and SIP.
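The ENUM mechanism mentioned above maps an E.164 number into a DNS domain by reversing its digits and appending the e164.arpa zone, where NAPTR records can then point at a SIP URI. The digit-reversal step is purely mechanical and can be sketched as:

```python
def enum_domain(e164_number: str) -> str:
    """Convert an E.164 number (e.g. '+441632960083') to its ENUM DNS domain,
    per the RFC 6116 convention: reverse the digits, dot-separate, append e164.arpa."""
    digits = [c for c in e164_number if c.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"
```

A resolver would then query this domain for NAPTR records to discover, say, a sip: or mailto: URI for the subscriber.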

Echo can also be an issue for PSTN integration. Common causes of echo include impedance
mismatches in analog circuitry and acoustic coupling of the transmit and receive signal at the
receiving end.

Security

VoIP telephone systems are susceptible to attacks as are any internet-connected devices. This
means that hackers who know about these vulnerabilities (such as insecure passwords) can
institute denial-of-service attacks, harvest customer data, record conversations and break into
voice mailboxes.

Another challenge is routing VoIP traffic through firewalls and network address translators.
Private Session Border Controllers are used along with firewalls to enable VoIP calls to and from
protected networks. For example, Skype uses a proprietary protocol to route calls through other
Skype peers on the network, allowing it to traverse symmetric NATs and firewalls. Other
methods to traverse NATs involve using protocols such as STUN or Interactive Connectivity
Establishment (ICE).
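STUN, mentioned above, lets a host behind a NAT learn its public address by sending a small Binding Request to a public server. The sketch below only constructs the 20-byte request header defined in RFC 5389 (message type 0x0001, a length field, the fixed magic cookie, and a random 96-bit transaction ID); actually discovering the mapped address would require sending this to a live STUN server and parsing the response, which is beyond this sketch.

```python
import os
import struct

STUN_BINDING_REQUEST = 0x0001
STUN_MAGIC_COOKIE = 0x2112A442  # fixed value from RFC 5389

def stun_binding_request() -> bytes:
    """Build a bare STUN Binding Request (header only, no attributes)."""
    txid = os.urandom(12)  # 96-bit transaction ID
    # type (16 bits), message length excluding header (16 bits), magic cookie (32 bits)
    return struct.pack("!HHI", STUN_BINDING_REQUEST, 0, STUN_MAGIC_COOKIE) + txid
```

The transaction ID lets the client match the server's response (which echoes it back) to the outstanding request, even over unreliable UDP.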

Many consumer VoIP solutions do not support encryption, although having a secure phone is
much easier to implement with VoIP than traditional phone lines. As a result, it is relatively easy
to eavesdrop on VoIP calls and even change their content. An attacker with a packet sniffer
could intercept the VoIP calls of users who are not on a secure VLAN. However, physical security of
the switches within an enterprise and the facility security provided by ISPs make packet capture
less of a problem than originally foreseen. Further research has shown that tapping into a fiber
optic network without detection is difficult if not impossible. This means that once a voice
packet is within the internet backbone it is relatively safe from interception.

There are open source solutions, such as Wireshark, that facilitate sniffing of VoIP
conversations. A modicum of security is afforded by patented audio codecs in proprietary
implementations that are not easily available for open source applications; however, such
security through obscurity has not proven effective in other fields. Some vendors also use
compression, which may make eavesdropping more difficult. However, real security requires
encryption and cryptographic authentication which are not widely supported at a consumer level.
The existing security standard Secure Real-time Transport Protocol (SRTP) and the new ZRTP
protocol are available on Analog Telephone Adapters (ATAs) as well as various softphones. It is
possible to use IPsec to secure P2P VoIP by using opportunistic encryption. Skype does not use
SRTP, but uses encryption which is transparent to the Skype provider. In 2005, Skype invited a
researcher, Dr Tom Berson, to assess the security of the Skype software, and his conclusions are
available in a published report.

Power Management Techniques for Mobile Communication

Abstract:
In mobile computing, power is a limited resource. Like other devices, communication devices
need to be properly managed to conserve energy. In this paper, we present the design and
implementation of an innovative transport level protocol capable of significantly reducing the
power usage of the communication device. The protocol achieves power savings by selectively
choosing short periods of time to suspend communications and shut down the communication
device. It manages the important task of queuing data for future delivery during periods of
communication suspension, and decides when to restart communication. We also address the
tradeoff between reducing power consumption and reducing delay for incoming data.

We present results from experiments using our implementation of the protocol. These
experiments measure the energy consumption for three simulated communication patterns and
compare the effects of different suspension strategies. Our results show up to an 83% savings in
the energy consumed by the communication. For a high-end laptop, this can translate to a 6-9%
savings in the energy consumed by the entire mobile computer. This can represent a savings of
up to 40% for current hand-held PCs. The resulting delay introduced is small (0.4-3.1 seconds
depending on the power management level).

Introduction
In today's world of mobile communications, one of the most precious commodities is power.
The mobile host can only operate as long as its battery maintains power. New machines are
being made to use less power allowing for smaller batteries with smaller capacities. The trend in
mobile computing is towards more communication-dependent activities, with mobile users
switching from traditional wired Ethernet communication to wireless communication (using
wireless Ethernet cards, for example). When inserted, many wireless communication devices
consume energy continuously. Although dependent on the specific machine and wireless device,
this energy consumption can represent over 50% of total system power for current hand-held
computers and up to 10% for high-end laptops. These trends make it imperative that we design
power-efficient communication subsystems.

Various techniques, both hardware and software, have been proposed to reduce the mobile host's
power consumption during operation. Most software-level techniques have concentrated on non-
communication components of the mobile host, such as the display, the disk and the CPU. In
particular, researchers have looked at methods to turn off the display after some period of
inactivity (as often implemented in BIOS or screen savers), to spin down the hard disk of the
mobile host, and to slow down or stop the CPU depending on work load. The principle
underlying the techniques for controlling these components is to estimate (or guess) when the
device will not be used and suspend it for those intervals, and these techniques have been mostly
tested with simulations. Stemm et al. have identified the problem of excess energy consumption
by network interfaces in hand-held devices, and have provided trace-driven simulation results for
simple software-level time-out strategies. The new IEEE 802.11 standard, which is being adopted
by some vendors, includes lower-level solutions (at the MAC and PHY layers) to support idle-time
power management. Hardware-level solutions for managing the communication device focus on
modulating the power used by the mobile transmitter during active communication.

Our research presented in this paper focuses on software-level techniques for managing the
mobile host's communication device through suspension of the device during idle periods in the
communication. We present a novel transport level protocol for managing the suspend/resume
cycle of the mobile host's communication device in an effort to reduce power consumption. The
management of communication devices creates a new and interesting challenge not present when
managing other devices' power consumption. Similar to hard disks and CPUs, the
communication devices continuously draw power unless they can be suspended. A suspended
hard disk or CPU can be restarted by any user requiring that device. However, when a
communication device is suspended, the mobile host is effectively cut off from the rest of the
network. A mobile host with a suspended communication device can only guess about when
other hosts may have data destined for it. If the suspension of the mobile host's communication
does not match prevailing communication patterns, the isolation can cause buffers to overflow
both in the mobile host and in other hosts trying to communicate with it. Additionally, other
hosts may waste precious resources trying to communicate with the mobile host if they have no
knowledge about whether or not the mobile host's communication is suspended.

Our goal is to provide the mechanisms for managing and reducing the power consumption of the
communication device. We present a simple model for mobile communication that provides
adaptable functionality at the transport layer for suspending and resuming communication. By
exposing this functionality to the application, we enable application-driven solutions to power
management. Power savings are attained by suspending communications and the communication
device for short periods of time. During these suspensions, data transmissions are queued up in
both the mobile host and any other host trying to communicate with the mobile host. The key to
balancing power savings and data delay lies in identifying when to suspend and restart
communications. By abstracting power management to a higher level, we can exploit
application-specific information about how to balance power savings and data delay.

Intuitively, power conservation is achieved by accumulating the power savings from many small
idle periods. We, however, need to be careful to monitor any additional energy consumption
caused while executing the suspend/resume strategies. Additionally, we need to consider the
effect on other hosts who are trying to communicate with the suspended mobile host. A base
station using our protocol has enough knowledge about the state of the mobile host to know
when it is suspended and can use this information to help employ scheduling techniques. We
implemented our protocol and experimentally determined its effect on power consumption and
the quality of communication. Using three simulated users designed to capture typical mobile
communication patterns, we obtained 48-83% savings in the power consumed by the
communication subsystem, while introducing a small additional response delay (0.4-3.1 seconds
depending on the power management level) that is acceptable for many mobile applications, like
web browsing.
2 Communication Model and Power Management
The introduction of wireless links into communication systems based on wired links has posed
a number of problems. These problems include different loss characteristics and different
bandwidth capabilities on the wired and the wireless line, synchronization of disconnected
operations, and issues involving packet forwarding. These problems pose significant challenges
for end-to-end communication protocols. Two types of models have been studied. The first
model exploits the natural hop existing in the communication route to a mobile host. Standard
communication protocols are used by wired hosts to a base station and specialized protocols are
used for the final hop from the base station to the mobile hosts. The second model utilizes and
tunes existing end-to-end protocols, providing help and hints along the way.

In this paper, we focus on the first model of communication described above, which allows us to
isolate the communication between the base station and the mobile host. With some extensions,
the technique is also applicable to the second model. We target our approach at the transport
layer, where we provide a set of mechanisms that allow communication to be suspended and
resumed. We assume a model where the mobile host is communicating with the rest of the
network through a base station. This base station may be a proxy, or it may be the connection
point for end-to-end communication with other hosts. We concentrate on the communication
between the mobile host and the base station, and for clarity assume that all communication to
and from the mobile host is directed through one specific base station. This work can easily be
extended to include changing base stations.

Current wireless communication devices typically operate in two modes: transmit mode and
receive mode. The transmit mode is used during data transmission. The receive mode is the
default mode for both receiving data and listening for incoming data. Much of the time, the
wireless communication device sits idle in receive mode, and, while the power required for
reception is less than the power required for transmission, this power consumption is not
negligible. The IEEE 802.11 standard provides for some power management at the lower (MAC
or PHY) layers. Compliant cards can exchange information about outstanding data to decide on
when to wake up suspended cards. There are ongoing efforts to provide IEEE 802.11 compliant
support for power management by introducing new features into the next generation wireless
communication cards. Researchers have also considered hardware-level solutions to provide low
power communication capabilities. Such solutions reduce the power cost of operating in either
one of the modes, and are orthogonal to our approach which addresses the amount of time the
device spends in each mode.

There are two logical areas in which to look for software-level power conservation in communication.
Since data transmission is expensive, we can reduce the time spent in transmission. This can be
achieved by data reduction techniques and intelligent data transfer protocols. The obvious
technique of data compression reduces the amount of transmission time, but requires additional
CPU cycles for performing compression. Through simple experiments, we observed that,
considering the current power requirements of CPUs versus wireless communication devices, the
benefit in terms of power savings from reduced communication time often outweighs the
increased energy consumption costs for compression. Intelligent data transfer protocols can be
used to reduce the effect of noisy connections that cause power-expensive retransmission of lost
messages. Our continuing research addresses the assessment of the effects of different techniques
for data reduction, including reduced reliability requirements, and their effect on both power
reduction and communication quality.
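The compression tradeoff observed above reduces to simple arithmetic: compressing pays off when the radio energy saved by transmitting fewer bytes exceeds the CPU energy spent compressing. A first-order sketch of that comparison follows; the power and timing figures used in the example are invented for illustration, not measurements from the paper.

```python
def compression_saves_energy(nbytes: int, ratio: float, link_bps: float,
                             p_radio_w: float, t_compress_s: float,
                             p_cpu_w: float) -> bool:
    """Rough first-order model: True if compression is a net energy win.
    ratio is compressed_size / original_size."""
    # Radio energy saved by putting fewer bits on the air.
    radio_saved_j = p_radio_w * (nbytes * (1 - ratio) * 8 / link_bps)
    # CPU energy spent running the compressor.
    cpu_cost_j = p_cpu_w * t_compress_s
    return radio_saved_j > cpu_cost_j
```

With a 1.5 W radio on a 2 Mbit/s link, halving a 1 MB transfer saves about 3 J of radio energy; if the compressor burns only 1 J of CPU energy, compression wins, matching the paper's observation that current CPU/radio power ratios often favor compressing.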

The second area, and the emphasis of this paper, is the cost of leaving the communication device
sitting idle during periods of no communication activity. During such idle periods, the
communication device draws power listening for incoming data. Our goal in this work is to
reduce the amount of time the device sits idle drawing power by judiciously suspending it.
Suspending a wireless communication device is similar to slowing a CPU in that there are some
small power costs associated with suspension and resumption. As mentioned above, the difficult
part here is deciding when to suspend and resume the communication device, how to deal
with the mobile host being unreachable at times, and how to avoid losing en-route
data. Our protocol and its implementation presented here address these problems. Since the
protocol itself generates additional communication during these idle periods, there needs to be a
balance between when it is beneficial to use the power management techniques, and when we
should leave the device on continuously.

In contrast to the solutions proposed by the IEEE 802.11 standard, we believe that power
management should be controlled by the mobile host, potentially even the application. By
providing power control at the transport layer (or above), we can provide power management
interfaces to the application, allowing the application to better control the communication,
enabling adaptive power management driven by the needs of the application. Specifically,
communications using the IEEE 802.11 standard will always pay the overhead of delays imposed
by using power management, while our techniques allow the application to determine when such
delays are too high, and so adapt power management levels. Stemm et al. have also investigated
methods for reducing power consumption of network interfaces, specifically targeting their
research at hand-held devices. Their research suggests application-specific solutions to such
problems. In contrast, our research provides a general solution capable of hosting various
strategies, both static and adaptive. Our measurements are with a real implementation of a power
management protocol in an experimental setup. We are, therefore, able to observe the effects of
the queuing of data and the real effect of extra energy consumption by such a protocol. We
measure the power consumption in the context of the entire system, considering such costs as
message processing and disk accesses, for various simulated workloads that we expect mobile
users to perform.

3 Communication-Based Power Management

Currently, a typical mobile host leaves its wireless Ethernet card in receive mode during the time
it is not being used, unless the user explicitly removes the card. The technique described in this
section provides mechanisms to extend battery lifetime by suspending the wireless Ethernet card
during idle periods in communication. At the heart of the technique lies a protocol where the
mobile host acts as a master and tells the base station when data transmission can occur. When
the mobile host wakes up, it sends a query to the base station to see if the base station has any
data to send. This permits communication device suspension at the mobile host, and enables the
implementation of communication scheduling techniques at the base station. The
suspend/resume cycle results in bursts of communication that may be followed by periods of
inactivity. Although producing such bursty communication may incur additional delay, bursty
communication patterns lend themselves well to efficient scheduling techniques.

With the suspension of a communication device, a mobile host will experience an additional
delay in data transmission since data on both the sending and receiving sides may be held up
during suspension. The mobile host can monitor its own outgoing communication patterns to
ensure that, despite these suspension times, communication continues smoothly without buffer
overflow. The base station, on the other hand, has no means to restart communication if it notices
that it is running out of buffer space. It is up to the mobile host to understand the base station's
expected communication patterns so that the buffers at the base station do not overflow. In order
to efficiently use our power management techniques, our communication layer must monitor the
communication patterns of the mobile host and match the suspend/resume cycle to these patterns.

The protocol we describe in this section allows a mobile host to suspend a wireless
communication device. Periodically, or by request from the application, the protocol wakes up
and reinitiates communication with the base station. In the rest of this section, we will describe
our power management protocol in detail and discuss the significance of some of the timing
parameters.

3.1 Power Management Control Protocol


In this protocol, the mobile host is the master and the base station acts like a slave. The slave is
only allowed to send data to the master during specific phases of the protocol. During non-
transmit phases, the slave queues up data and waits for commands from the master. Idle periods
for both the master and the slave can be detected through the use of idle timers or indicated to the
protocol from the application. Figure 1 and Figure 2 show the protocol state diagrams for the
slave and the master, respectively. In these diagrams, IN: indicates an input event that can be
either an incoming message or a timeout, Q: indicates the state of the queue, and OUT: indicates
an outgoing response message.

As shown in Figure 1, the slave is initialized to be in the SLEEPING mode. It can only leave that
mode upon a WAKE_UP message from the master. If the slave has data to send, it will enter the
SEND_RECV mode. The slave will stay in this mode until it has detected that it has no more
data to transmit, whereupon, it will send a DONE message to the master, enter the RECEIVING
mode, and continue receiving until it receives a SLEEP message. If during this time, the slave
detects that there is new data to transmit, it will send a NEW_DATA message to the master and
enter the RECEIVING_WAIT mode. The slave can only start to transmit when it receives a
WAKE_UP message from the master. If a SLEEP message is received first, the waiting data
stays buffered and is not transmitted until the next resume cycle.
Figure 1: Slave (Base Station) Protocol State Diagram
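The slave's behaviour described above can be captured as a small transition table. The sketch below is a loose reading of Figure 1: the state and message names follow the text, but the exact guard conditions and the internal "queue_empty" / "new_local_data" events are assumptions of this sketch.

```python
class Slave:
    """Base-station side of the power management protocol (sketch)."""

    def __init__(self):
        self.state = "SLEEPING"
        self.queue = []  # data buffered for the mobile host while it sleeps

    def on_event(self, event):
        """Process one input event; returns the reply sent to the master, or None."""
        if self.state == "SLEEPING" and event == "WAKE_UP":
            if self.queue:
                self.state = "SEND_RECV"
                return "DATA"
            self.state = "RECEIVING"
            return "NO_DATA"
        if self.state == "SEND_RECV" and event == "queue_empty":
            self.state = "RECEIVING"       # nothing left to transmit
            return "DONE"
        if self.state == "RECEIVING" and event == "new_local_data":
            self.state = "RECEIVING_WAIT"  # must wait for WAKE_UP before sending
            return "NEW_DATA"
        if self.state == "RECEIVING_WAIT" and event == "WAKE_UP":
            self.state = "SEND_RECV"
            return None
        if event == "SLEEP" and self.state in ("RECEIVING", "RECEIVING_WAIT"):
            self.state = "SLEEPING"        # any waiting data stays buffered
            return "SLEEP_OK"
        return None
```

Note that the slave never initiates transmission on its own: even with fresh data in RECEIVING_WAIT, it transmits only after the master's WAKE_UP, which is what lets the mobile host keep its device suspended safely.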

Figure 2 shows the state diagram for the master. Although this is a much more complex state
diagram, we can see that the states may be partitioned into three sets. The first set (SLEEPING)
concerns the master when it is sleeping. When the master is in the SLEEPING mode, it can be
woken up by one of two triggers: a wakeup timer or new data to transmit. If the wakeup timer
expires, the master sends a WAKE_UP message along with any new data to the slave. If there is
new data to transmit to the slave before the wakeup timer expires, the master has the option to
wake up and transmit this new data, or continue sleeping and queue up the data until the timeout
expires.

The second set of states (SENDING_WAIT, WAITING, and WAIT_FOR_OK) concerns the
master when it is waiting for a response from the slave about whether or not the slave has data to
send. In the SENDING_WAIT mode, the master is transmitting data and in the WAITING mode
it has no data to transmit. When the master receives a response from the slave in the form of a
DATA or a NO_DATA message, the master enters the appropriate state in the third set.
Additionally, if while in the SENDING_WAIT mode, an idle timer expires indicating that the
master has no more data to send, the master will enter the WAITING mode and continue waiting
for a response from the slave. In the WAIT_FOR_OK mode, the master has indicated to the
slave that it should sleep and is waiting for a SLEEP_OK message.

When the master is in one of the final set of states (SENDING, SEND/RECV, and
RECEIVING), it is actively sending and/or receiving data. In the SENDING mode, the master
may receive a NEW_DATA message from the slave. The master responds with a WAKE_UP
message and enters the SENDING_WAIT mode. When neither the master nor the slave have any
more data to send, the master sends a SLEEP message and enters the WAIT_FOR_OK mode.

Figure 2: Master (Mobile Host) Protocol State Diagram

Wireless connections are very susceptible to interference from both external devices and other
wireless devices using the same settings or talking to the same base station. By using this
protocol, we provide the base station with useful information about the communication patterns
of the mobile host. Although not required by the protocol, the master can inform the slave of its
sleep time, or the slave can suggest appropriate sleep times to the master. If the protocol is used
such that only prespecified timeouts trigger restarting communication, the slave can design a
communication scheduling algorithm based around the known sleep time of the master.
Additionally, if the sleep times for the master are sufficiently long, the slave can save any data
destined for the master to disk. This will free the buffer space being used by the data destined for
the master so it can be used for other active communications.

3.2 Timing Considerations


Timing is a key issue for both the observed performance of the mobile host as well as the
amount of power that can be saved. If the wireless Ethernet card is suspended too often, the user
will see lags in data transmission performance. On the other hand, if it is not suspended long
enough, the gain in battery life time may be undetectable.

In order to determine when the card should be suspended, the protocol needs to determine the
communication patterns for both sender and receiver. There are two ways by which idle periods
in the communication can be detected. The first, and simplest, is when the application can
actually inform the protocol that it doesn't have any data to send. This requires a more complex
application that has information about its communication patterns. The second method is to use a
timer set with a timeout period. If the timer expires and no communication has occurred since the
last expiration, the protocol concludes that there is an idle period in the communication. The
appropriate timeout period depends on the requirements of the application. Timeout periods that
are too short may cause the protocol to go to sleep prematurely, resulting in poor response time
for applications dependent on communication. On the other hand, timeout periods that are too
long may cause the protocol and the communication device to remain active for unnecessarily
long periods of time, wasting precious energy.
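
The timer-based idle detection described above can be sketched in a few lines. This is a minimal illustration in Python; the class and method names and the default timeout value are our own assumptions, not part of the protocol:

```python
class IdleDetector:
    """Periodic-timeout idle detection for a communication link.

    A timer calls check_idle() once per timeout period; if no traffic
    was observed since the previous call, the link is declared idle
    and may be suspended. Names and structure are illustrative, not
    taken from the protocol described in the text.
    """

    def __init__(self, timeout_s=2.0):
        self.timeout_s = timeout_s   # how often the timer fires
        self.activity_seen = False

    def on_packet(self):
        """Called whenever data is sent or received on the link."""
        self.activity_seen = True

    def check_idle(self):
        """Called by the timer every timeout_s seconds.

        Returns True when the whole period elapsed with no traffic.
        """
        idle = not self.activity_seen
        self.activity_seen = False   # start observing the next period
        return idle
```

Choosing timeout_s embodies exactly the tradeoff discussed above: too short and the link is suspended prematurely, too long and the device stays active and wastes energy.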

The other timing parameter is the sleep duration which defines how long the master should keep
the communication suspended. The appropriate sleep duration also depends on the requirements
of the application. Longer sleep periods will cause longer lags in any interactive applications.
Shorter sleep periods will not extend battery lifetime appreciably. The application needs to
determine the appropriate tradeoff for battery lifetime versus delay. In many instances, the
expected time and data size for the response to a request initiated by the mobile host can be
estimated. This includes, for example, applications like mail, web browsing, and file transfer. In
this context, hints provided by the application could be very helpful.

Security Issues in Next Generation Mobile Networks: LTE and Femto cells:

Abstract: Cellular mobile networks are used by more than 4 billion users worldwide. One effective way to meet the increasing demand for data rates is to deploy femtocells, which are low-power base stations that connect to the mobile operator through the subscriber's residential Internet access. Yet, security and privacy issues in femtocell-enabled cellular networks, such as
UMTS and LTE, still need to be fully addressed by the standardization bodies. In this paper, we
review significant threats to the security and privacy of femtocell-enabled cellular networks. We
also propose novel solution directions in order to tackle some of these threats by drawing
inspiration from solutions to similar challenges in wireless data networks such as WLANs and
mobile ad hoc networks (MANETs).

I. INTRODUCTION

The use of mobile devices has changed since the advent of digital technologies such as GSM.
What started as a voice-only service has been upgraded to support data traffic as well. With modern smartphones, users are able to browse the Internet and obtain services such as e-banking, navigation, social networking and recommendations based on the subscriber's location. Femtocells, which are low-power and low-range base stations for cellular networks
installed by users at their own premises, are believed to meet the surge in data rates that these
multimedia and interactive services require. They offload the microcell network and provide
backhaul connections to the cellular operators' networks through the users' residential broadband access [7]. Long Term Evolution (LTE) is the mobile network technology for the next generation of mobile communications, as defined by the 3rd Generation Partnership Project (3GPP)
[1]. In addition to features such as increased data-rates, lower latencies and better spectral
efficiency, one of the most interesting aspects is the radically novel all-IP core network
architecture, known as Evolved Packet Core (EPC). LTE is expected to make extensive use of
user-installed femtocells, in order to achieve its goals of spectral efficiency and high-speed for a
greater number of users. It is clear that the sensitivity and confidentiality of users and data transiting in such digital cellular networks are paramount both to businesses and private users. Security and privacy in such networks are achieved at several levels in their architectures, such as the air interface, the operator's internal network and the inter-operator links. The main assumption underlying the security of legacy mobile networks, such as GSM and UMTS, is the trust that each operator has in its own infrastructure and in other operators with whom it has a roaming agreement. Clearly, the evolution to LTE and its flat all-IP core network emphasizes the urgency of revising trust relationships among operators and their network components, as both their exposure and vulnerability to external threats are greatly affected. For instance, it has become easier for a malicious user to tamper with the femtocell in order to access confidential data, as it resides directly at the user's premises, or to disrupt legitimate communications both at the femtocell and at the core network level, due to the openness of IP networks. Our goal in this paper is to raise awareness about security and privacy issues in femtocell-enabled cellular networks, such as LTE, by reviewing some significant security threats and
countermeasures. Our solutions are inspired from similar research efforts in the WLAN and
mobile ad hoc network (MANET) research community.

II. SECURITY AND PRIVACY CHALLENGES

Figure 1 shows the threat model for a femtocell-enabled mobile network. The three vulnerable
elements are indicated by arrows: (i) the air interface between the mobile device (User
Equipment) and the femtocell (Home(e) NodeB), (ii) the femtocell itself and (iii) the public link
between the femtocell and the security gateway (SecGW). Our intent is to focus on certain
attacks on the aforementioned elements, which are achievable without breaking the
cryptosystems or the protocols. A more exhaustive list of all possible attacks and
countermeasures can be found in [2].

A. Attacks on the Air Interface

The attacks on the air interface can be either passive (the attacker only passively listens to the communications between the mobile device and the base station) or active (in addition to listening, the attacker injects or modifies the data). Although opportunities for active attacks have been mitigated by cryptographically protecting the messages sent over the air, passive attacks, such as traffic analysis and user tracking, are still possible.
Figure 1. Three different targets for malicious attacks on femtocell-enabled mobile networks: (i) the
air interface between the mobile device (User Equipment) and the femtocell (Home(e)NodeB),
(ii) the femtocell itself and (iii) the public link between the femtocell and the security gateway.

The issue of user identity protection was already raised in the early GSM networks, and the
solution that has been adopted ever since has never been substantially revisited. With the
ongoing migration towards all-IP and femtocell-enabled cellular networks, the legacy solution
may no longer be appropriate. In fact, GSM, UMTS and LTE standards mandate the use of unlinkable temporary identifiers (TMSIs [3] and GUTIs [4]) to protect the identity of
mobile devices at the air interface, but the capillary deployment of femtocells might render this
insufficient to guarantee a satisfactory level of protection for the users. TMSIs (or GUTIs) are
usually unchanged within a given location (or tracking) area, which is composed of up to a hundred
adjacent cells, and femtocells could make it possible for malicious users not only to track the
movements of mobile subscribers, but due to the low range, to have an unprecedented accuracy
as well. For instance, such tracking attacks could be perpetrated by curious employers, in order
to monitor whether an employee is visiting a competitor, or by governmental agencies, in order
to illegitimately track people's locations.

Subscriber identity disclosure and tracking are the emerging threats at the air interface in femtocell-enabled mobile networks. Solutions suited for such networks can be inspired by the experience gained by the MANET research community. In particular, existing studies suggest directions for more dynamic and context-aware location privacy protection mechanisms.
B. Attacks on the Femtocell

From the perspective of a mobile device, being connected to a regular base station, i.e.,
(e)NodeB, or a femtocell is equivalent, because the protocols and security standards used at the air interface are exactly the same. From a malicious user's point of view, it makes a substantial difference, because it is much easier to tamper with a small and inexpensive (120 [15]) femtocell than with a large and complicated device located on a rooftop.
The physical size, material quality, lower cost components and the IP interface of the femtocell
make it more suited for reverse engineering and tampering than a traditional, more expensive and
business-grade (e)NodeB base station. As the over-the-air user data encryption is terminated at
the femtocell, hardware tampering with the device could expose the private information of the
unsuspecting user. For instance, if an adversarial user is able to set the femtocell to accept all external users without having to register them first, as opposed to having a Closed Subscriber
Group (CSG), he would be able to analyze their communications. Moreover, attacks such as
device impersonation, Internet protocol attacks on the network services, false location reporting or simply unauthorized reconfiguration of the onboard radio equipment could hinder the network
operator from controlling interference and power management features. This could have severe
consequences on the quality of service. To this end, femtocells should be equipped with trusted
execution environments (TrE) that render malicious manipulation of the onboard software and
on-the-wire sniffing very hard to achieve. However, as femtocells are authorized to operate only in specific geographic areas, false location reporting issues could still arise if IP-only geo-localization techniques are deployed. This is because, in order to manipulate the source IP address of the femtocell, there would be no need to physically tamper with it.

C. Attacks on the Core Network

The large scale deployment of comparatively less expensive femtocells is a good alternative for
mobile operators as it avoids expensive upgrades to the backbone connections. However, the
exposure of the core network's point of entry to the public Internet has a severe drawback: it
renders most Internet-based attacks, such as Denial of Service (DoS, discussed in Section III) or
impersonation attacks, feasible against the mobile network operators. Let us focus on the
implications of the exposure of public IP addresses of security gateways to the Internet, which
are required by a large number of femtocells in order for the whole system to function properly.

DoS attacks (as well as Distributed DoS, DDoS) are a well known occurrence in large companies
(such as eBay, Amazon or Yahoo [10]) that host a multitude of web-based services. In order to
deal with such attacks in a systematic way, Mirkovic [11] proposes a general classification of
attacks and defense mechanisms such that system developers and researchers can better observe
and react to the inherently different attacks by exploiting their common traits. While the detection of ongoing DDoS attacks is best performed at the victim site, many researchers ([14], [6], [12], [13]) agree that a distributed solution is better suited against large-scale DDoS attacks than a solution localized only at the final link with the target. The reason is that suppression mechanisms are most effective near the sources of the attack, where it is possible to filter the malicious traffic from the genuine connections and prevent it from even reaching and saturating the final link with the target. But these solutions succeed only if different ISPs are able and willing to cooperate to provide protection. Failure to reach an agreement could jeopardize their effectiveness.
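
Filtering near the sources is typically realized by rate-limiting suspected flows at ISP edge routers. A generic token-bucket limiter illustrates the mechanism; this is a sketch of a standard technique, not an implementation from the cited works, and all parameter values are ours:

```python
class TokenBucket:
    """Generic token-bucket rate limiter, as an ISP edge router might
    apply per source to throttle suspected DDoS traffic upstream.

    Tokens accumulate at rate_pps per second up to `burst`; each
    admitted packet consumes one token. Parameters are illustrative.
    """

    def __init__(self, rate_pps, burst, now=0.0):
        self.rate = float(rate_pps)    # tokens added per second
        self.capacity = float(burst)   # maximum bucket size
        self.tokens = float(burst)
        self.last = now

    def allow(self, now):
        """Admit one packet arriving at time `now` (seconds)?"""
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A burst of packets beyond the bucket capacity is dropped until tokens refill, which bounds the traffic a single source can push toward the security gateway.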

For a mobile network operator, this means that an effective protection against DDoS attacks
needs to encompass not only the ISP that is providing Internet access but also the neighboring ISPs. Together, they could both limit the femtocell service disruption at the security gateways and ensure service delivery to the femtocell subscribers. However, cooperation among ISPs is best achieved when all concerned parties have incentives to jointly protect the mobile operator's gateways. We further discuss this issue in the following section.

III. DISCUSSION

In this section, we present solution directions to the two most relevant issues discussed in Section
II, i.e., identity and location tracking at the air interface and distributed DDoS defense for the
core network. A solution to the issue of identity and location tracking consists of an adaptive scheme to assign and change identifiers based on context. This would require the mobile devices to dynamically decide when to change identifiers, based on their own observation of the surroundings, and thus move away from the network-controlled strategy to a user-triggered ID
change strategy. For instance, when planning a cellular network, mobile operators have to decide
where and how many base stations to install, in order to provide an optimal trade-off between
service quality, availability and cost. A densely populated area will usually have many more cells
than a rural area with a lower population density. As each cell has a unique cell ID, a mobile
device is able to assess whether the current location has a high cell density or not by reading the
cell broadcast messages. Moreover, the majority of recent feature-phones and smart phones are
equipped with Bluetooth radio technology for low-range ad hoc connectivity. Combined with the
cell ID broadcast messages, Bluetooth can be used to determine more precisely the number of neighboring devices and to trigger a coordinated temporary ID (or pseudonym) change. Network and device parameters such as neighborhood density, device speed, mobility patterns and neighborhood dynamics affect the effectiveness of the ID change and, as prior studies have shown, they should be used when making ID change decisions.
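
A minimal sketch of such a context-aware decision rule follows. The function name and every threshold value are illustrative assumptions, not standardized parameters; the intuition is that a pseudonym change only helps when enough nearby devices provide anonymity and the device is actually moving:

```python
def should_change_pseudonym(bt_neighbors, cell_density, speed_mps,
                            min_neighbors=3, min_density=5, min_speed=0.5):
    """Decide whether to trigger a temporary-ID (pseudonym) change.

    bt_neighbors: count of Bluetooth devices seen nearby.
    cell_density: cells observed in broadcast messages (area density).
    speed_mps:    estimated device speed, metres per second.

    A change improves privacy only in a crowd (mix-zone effect) and
    while moving; otherwise the old and new IDs are trivially linkable.
    All thresholds here are hypothetical, chosen for illustration.
    """
    crowded = bt_neighbors >= min_neighbors or cell_density >= min_density
    moving = speed_mps >= min_speed
    return crowded and moving
```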

The second challenge for mobile networks with publicly accessible IP interfaces concerns the
vulnerability to Internet-based DDoS attacks. Several well-known solutions for thwarting such threats exist, such as network analysis and client puzzles. But, as the effectiveness of
these solutions relies on several participating entities (ISPs), incentives for the cooperation
among them need to be studied. One framework that has been extensively applied to security
studies for cooperation among self-interested parties is Game Theory. A game-theoretic
framework would allow us to study the incentives, predict the outcomes and distribute individual profits that are optimal with respect to a given criterion and commensurate with the role of each individual ISP in the protection of the security gateways. By using information such as each ISP's national Internet traffic share, femtocell penetration and subscriber base, the model could determine the best strategies for each ISP, which would guarantee the highest profit in any given situation.
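
Client puzzles, mentioned above as a DDoS countermeasure, force each client to do a small amount of provable work before the server commits resources, while verification stays cheap for the server. The following is a generic hash-based sketch of the idea, not a scheme from this paper; the names and difficulty value are illustrative:

```python
import hashlib
import os

def make_puzzle(difficulty_bits=16):
    """Server side: issue a fresh random challenge with a difficulty."""
    return os.urandom(16), difficulty_bits

def solve_puzzle(challenge, difficulty_bits):
    """Client side: find a nonce so that SHA-256(challenge || nonce)
    starts with `difficulty_bits` zero bits. Expected work grows
    exponentially with difficulty, rate-limiting connection attempts."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return nonce
        nonce += 1

def verify(challenge, difficulty_bits, nonce):
    """Server side: verification costs a single hash, so the server
    stays cheap even under a flood of requests."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0
```

The asymmetry (exponential work to solve, one hash to verify) is what makes puzzles attractive for protecting an exposed security gateway.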

IV. CONCLUSION

The control over security and privacy in the next generation of mobile networks, such as LTE, is held solely by the core network. On one hand, this is beneficial, as it ensures the trust relationship between subscriber and home network. On the other hand, it impedes dynamic, mobile-device-controlled actions that could guarantee better protection. In this paper, we have reviewed some significant threats to security and privacy in femtocell-enabled mobile networks and have presented solution directions to mitigate two of the most relevant and immediate ones. First, we discussed the issue of identity and location tracking with femtocell technology. We have shown that by using contextual information such as node density, device speed and mobility pattern, a mobile-device-triggered ID change can reduce the risk of being tracked. Second, we suggested a novel approach towards protecting the mobile network against Internet-based DDoS attacks. Successful incentive strategies, stimulating cooperation among local ISPs, could ensure efficient protection and, at the same time, enable transparent service for legitimate users.

UNIT IV - ISSUES AND CHALLENGES


PART A (2 MARKS)
1. What are the issues in mobile networking?
2. What is the need for Network Security?
3. What are the types of unauthorized access?
4. Define Ad-hoc networks
5. Explain MAC ID filtering
6. What are the Different levels of security?
7. Explain Authentication versus identification.
8. Define Mobile IP
9. What do you mean by Home network and Home address?
10. Give the applications of VoIP.
PART B (16 MARKS)
1. Briefly explain the Issues and challenges of mobile networks.
2. Give the difference between authentication and identification. Write a short note on
Authentication in mobile applications.
3. Explain in detail about Privacy issues in Mobile Networks.
4. Briefly explain Mobile IP and Ad-hoc networks.
5. Explain in detail about Security issues.
6. Define DSR.
7. Explain Mobility models.
8. What are the advantages of P2P networks?
UNIT V

SIMULATION

About GloMoSim:

In GloMoSim we are building a scalable simulation environment for wireless and wired network
systems. It is being designed using the parallel discrete-event simulation capability provided by
Parsec. GloMoSim currently supports protocols for a purely wireless network. In the future, we
anticipate adding functionality to simulate a wired as well as a hybrid network with both wired
and wireless capabilities.

Most network systems are currently built using a layered approach that is similar to the OSI
seven layer network architecture. The plan is to build GloMoSim using a similar layered
approach. Standard APIs will be used between the different simulation layers. This will allow the
rapid integration of models developed at different layers by different people. The protocols being
shipped with the current library include the following:

Layers                   Protocols
Mobility                 Random waypoint, Random drunken, Trace based
Radio Propagation        Two ray and Free space
Radio Model              Noise Accumulating
Packet Reception Models  SNR bounded, BER based with BPSK/QPSK modulation
Data Link (MAC)          CSMA, IEEE 802.11 and MACA
Network (Routing)        IP with AODV, Bellman-Ford, DSR, Fisheye, LAR scheme 1, ODMRP, WRP
Transport                TCP and UDP
Application              CBR, FTP, HTTP and Telnet
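
The "Free space" entry in the radio propagation layer is the standard Friis free-space path-loss model. A short sketch of how such a model computes attenuation (the function name is ours; the formula itself is the textbook one):

```python
import math

def free_space_path_loss_db(distance_m, freq_hz):
    """Free-space path loss in dB: FSPL = (4 * pi * d * f / c)^2,
    expressed as 20*log10(4*pi*d*f/c).

    distance_m: transmitter-receiver separation in metres.
    freq_hz:    carrier frequency in hertz.
    """
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)
```

For example, at 1 km and 2.4 GHz the loss is roughly 100 dB, which is why simulators must pair a propagation model with realistic transmit powers and receiver sensitivities.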

To run GloMoSim, you will need the latest Parsec compiler (now included with the GloMoSim
distribution). If you want to develop your protocols in GloMoSim, you should have some
familiarity with Parsec, but you don't need to be an expert. Most protocol developers will be
writing purely C code with some Parsec functions for time management. Hence, you will need to
use the Parsec compiler. Parsec code is used extensively in the GloMoSim kernel. Most users do
not need to know how the kernel works. If you are interested in the GloMoSim kernel, you will
need to have extensive knowledge about Parsec.
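
As a concrete illustration of one entry in the mobility layer of the protocol table above, the random waypoint model can be sketched in a few lines. This is a hedged sketch; all parameter defaults (area size, speed range, time step) are our own choices, not GloMoSim's:

```python
import math
import random

def random_waypoint(steps, area=(1000.0, 1000.0), speed_range=(1.0, 10.0),
                    pause=0.0, dt=1.0, seed=42):
    """Generate one node's positions under the random waypoint model:
    pick a uniform random destination and speed, move toward it at
    constant speed, pause on arrival, then repeat."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
    dest = (rng.uniform(0, area[0]), rng.uniform(0, area[1]))
    speed = rng.uniform(*speed_range)
    wait = 0.0
    path = []
    for _ in range(steps):
        if wait > 0:
            wait -= dt                      # pausing at a waypoint
        else:
            dx, dy = dest[0] - x, dest[1] - y
            dist = math.hypot(dx, dy)
            step = speed * dt
            if dist <= step:                # waypoint reached
                x, y = dest
                dest = (rng.uniform(0, area[0]), rng.uniform(0, area[1]))
                speed = rng.uniform(*speed_range)
                wait = pause
            else:                           # advance toward the waypoint
                x += step * dx / dist
                y += step * dy / dist
        path.append((x, y))
    return path
```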
The Network Simulator - ns-2

Ns is a discrete event simulator targeted at networking research. Ns provides substantial support for simulation of TCP, routing, and multicast protocols over wired and wireless (local and
satellite) networks.

Ns began as a variant of the REAL network simulator in 1989 and has evolved substantially over
the past few years. In 1995 ns development was supported by DARPA through the VINT project
at LBL, Xerox PARC, UCB, and USC/ISI. Currently, ns development is supported through DARPA with SAMAN and through NSF with CONSER, both in collaboration with other researchers including ACIRI. Ns has always included substantial contributions from other researchers, including wireless code from the UCB Daedalus and CMU Monarch projects and Sun Microsystems.

ns (from "network simulator") is the name for a series of discrete event network simulators,
specifically ns-1, ns-2 and ns-3. These simulators are used in the simulation of routing protocols,
among others, and are heavily used in ad-hoc networking research, and support popular network
protocols, offering simulation results for wired and wireless networks alike.

ns-3 is built using C++ and Python and scripting is available with either language. Split over 30
modules, features of ns-3 include

Callback-driven events
Attribute system that manages default and per-object simulation values
Helpers that allow using simpler API when configuring simulations

Work Flow for NS:

The workflow includes the following four steps:

1. Implement protocol models

2. Set up the simulation scenario, i.e., write a Tcl file describing the scenario you want (e.g.,
the number of nodes, the kind of agent running on each node, etc.)

3. Run the simulation, i.e., run the Tcl file with ns

4. Analyse the simulation results, e.g., with GNU Awk and gnuplot
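
Step 4 is commonly scripted with Awk, but the same analysis can be done in any language. Below is a sketch in Python that computes receive throughput from trace lines, assuming the classic ns-2 wired trace format (event, time, from-node, to-node, packet type, size, ...); real traces carry more fields, so treat this as a simplified illustration:

```python
def throughput_bps(trace_lines, dst_node, duration_s):
    """Compute receive throughput (bits/s) at one node from
    ns-2-style trace lines.

    Assumes the classic wired trace format:
        event time from_node to_node pkt_type size ...
    where event 'r' marks a packet received at to_node.
    """
    total_bytes = 0
    for line in trace_lines:
        fields = line.split()
        # Count only 'receive' events destined for the node of interest.
        if len(fields) >= 6 and fields[0] == "r" and fields[3] == str(dst_node):
            total_bytes += int(fields[5])
    return total_bytes * 8 / duration_s
```

In practice one would read the `.tr` file produced by ns, bucket packets by time interval, and feed the result to gnuplot, exactly as the Awk-based workflow does.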

History

ns began development in 1989 as a variant of the REAL network simulator and it is currently
maintained by volunteers. Long-running contributions have also come from Sun Microsystems and the UCB Daedalus and Carnegie Mellon Monarch projects [3].
ns-2

ns-2 was built in C++ and provides a simulation interface through OTcl, an object-oriented
dialect of Tcl. The user describes a network topology by writing OTcl scripts, and then the main
ns-2 program simulates that topology with specified parameters. It runs on Linux, FreeBSD,
Solaris, Mac OS X and on Windows using Cygwin. It is licensed for use under version 2 of the
GNU General Public License.

ns-3

On February 22, 2005, Tom Henderson made a post on the ns-developers mailing list saying: "We intend to have some discussions on how some of ns-2 might be either refactored or forked as part of a future development effort (in parallel, for now, with maintenance of the existing code tree)."

In the process of discussing the necessary changes, it was found that maintaining backward compatibility with ns-2 was generally not worth the effort, since most useful ns-2 models were already implemented in forks of ns-2 that were generally incompatible with each other. It was decided that a new simulator would be written from scratch in C++.

Mathieu Lacage started developing yans (Yet Another Network Simulator) back in 2004, which
was later used as a base for ns-3. Development of ns-3, initially sponsored by the NSF, INRIA,
and Georgia Tech, began on July 1, 2006. The first release, ns-3.1, was made in June 2008, and
afterwards the project continued making quarterly software releases.

OPNET:

OPNET Modeler accelerates the R&D process for analyzing and designing communication
networks, devices, protocols, and applications. Users can analyze simulated networks to compare
the impact of different technology designs on end-to-end behavior. Modeler incorporates a broad
suite of protocols and technologies, and includes a development environment to enable modeling
of all network types and technologies including:

VoIP
TCP
OSPFv3
MPLS
IPv6
Others

Key Features

Fastest discrete event simulation engine among leading industry solutions


Hundreds of protocol and vendor device models with source code (complete OPNET
Model Library)
Object-oriented modeling
Hierarchical modeling environment
Discrete Event, Hybrid, and optional Analytical simulation
32-bit and 64-bit fully parallel simulation kernel
Grid computing support for distributed simulation
Optional System-in-the-Loop to interface simulations with live systems
Realistic Application Modeling and Analysis
Open interface for integrating external object files, libraries, and other simulators
Integrated, GUI-based debugging and analysis

Wireless Network Simulation

The OPNET Modeler Wireless Suite provides high fidelity modeling, simulation, and analysis
of a broad range of wireless networks. Technology developers leverage advanced simulation
capabilities and rich protocol model suites to design and optimize proprietary wireless protocols,
such as access control and scheduling algorithms. Simulations incorporate motion in mobile
networks, including ground, airborne, and satellite systems. Modeler Wireless Suite supports any
network with mobile devices, including cellular (GSM, CDMA, UMTS, IEEE 802.16 WiMAX,
LTE, etc.), mobile ad hoc, wireless LAN (IEEE 802.11), personal area networks (Bluetooth,
ZigBee, etc.) and satellite.

Wireless network planners, architects, and operations professionals can analyze end-to-end
behavior, tune network performance, and evaluate growth scenarios for revenue-generating
network services.

Key Features

Fastest simulation engine among leading industry solutions


Hundreds of wired/wireless protocol and vendor device models with source code
(complete OPNET Model Library)
Object-oriented modeling
Hierarchical modeling environment
Scalable wireless simulations incorporating terrain, mobility, and multiple pathloss
models
Customizable wireless modeling
Discrete Event, Hybrid, and optional Analytical simulation
32-bit and 64-bit fully parallel simulation kernel
Grid computing support for distributed simulation
Optional System-in-the-Loop to interface simulations with live systems
Realistic Application Modeling and Analysis
Open interface for integrating external object files, libraries, and other simulators
Integrated, GUI-based debugging and analysis

Reinforce Networking Theory with OPNET Simulation:

As networking systems have become more complex and expensive, hands-on experiments
based on networking simulation have become essential for teaching the key computer
networking topics to students. The simulation approach is the most cost effective and highly
useful because it provides a virtual environment for an assortment of desirable features such
as modeling a network based on specific criteria and analyzing its performance under
different scenarios with no cost. In this paper, we present our approach to develop an
OPNET simulation networking laboratory that complements classroom lectures. Our
simulation labs emphasize the understanding of the dynamics of network protocols instead of
configuration and management. Students learn, through these experiments, a wide range of
networking aspects including the design and the limitations of protocols, simulation and
performance evaluation techniques, interpretation of data and packet analysis. Furthermore,
we try to ensure that labs contain some extension or development of the topic beyond the
lecture/reading and provide students additional active learning opportunities to discover
knowledge. We have been using OPNET simulation in an introductory computer networks
course for the past three years. Feedback from the students has been very positive.
Overwhelmingly, students have indicated that the OPNET labs help them better understand
the intricate details of actual networking protocols, and they generally indicate that they
enjoy these labs as well.

In summary, students benefit from the OPNET simulation laboratory in the following three
ways.

First, the OPNET simulation labs reinforce the networking theory taught by regular
lectures.
Second, the open design of the labs encourages active learning.
Third, students gain the knowledge of modeling and simulation techniques for
performance evaluation of networking systems. This active learning approach gives
students experience in the subtleties of the design of a complex system, as well as
prepares them for the networking industry.

Network Simulator Selection:

Our teaching goal is to effectively integrate laboratory components into the introductory
networking course without significantly increasing the workload of both instructors and students.
The main objectives of our simulation laboratory experiments are:
To reinforce the networking theory taught in classes with hands-on experiments. In our
lectures, we teach networking concepts and protocols at a relatively abstract level. We
hope that hands-on lab exercises lead to a deeper understanding of networking principles
and concepts.
To allow students to build, observe, experiment with, and measure a variety of networks
including direct link networks, switched networks, wireless networks, and inter-networks.
To balance the breadth and depth of knowledge in an introductory networking course and
drive some topics down to a level of detail where students understand the elegance of
the engineering that makes it all work.
To provide additional learning opportunities to discover knowledge.
To provide an open lab environment so that all the lab experiments can be completed
without supervision and in relatively short time (a few hours).

To meet these objectives, the following properties are essential for the network simulator to be
used for the laboratory experiments:

Ability to simulate a wide range of networking technologies: The simulation software
could be used to model the entire network, including its routers, switches, protocols,
servers, and the individual applications they support. It should support a large range of
communication systems from a single LAN to global satellite networks.
Ease of use: the simulation software should be easy to install and use. Students should
be able to use the software to complete the lab assignments independently without any
formal training.
Free or low cost: The software should be free or low cost. In order to provide the open
lab, students should be able to download and install the software on their personal
computers.
High simulation performance: For each lab assignment, students are required to create a
network model, run a simulation, analyze the results, and write a report. It is very important to
have a high performance simulation engine so that simulations of most lab experiments
can be completed in relatively short time (less than 30 minutes).

Other properties desired but not absolutely necessary are:

Suitability of the software for use in research: the simulation software can be used for the
simulation-based networking research.
Better industry employment opportunities for students: the software should have a large
user community and should be widely used by industry. So, students who have been
taught using the software should be able to immediately apply their knowledge of network
simulation when first employed.

Why OPNET?

There are various simulation experiment environments. Many target a specific area of research interest, a particular network type or protocol, such as wireless networks in GloMoSim (GloMoSim, 2001). Some systems, such as x-Sim (Brakmo & Peterson, 1996) and Maisie (Bagrodia & Liao, 1994), focus on allowing the same code to run in simulation and on a live
network. OPNET and NS-2 (NS2, 2006) are the two most popular network simulators, targeting
a wider range of networks and protocols. NS-2, derived from REAL (Keshav, 1988), is an open
source network simulator. NS-2 is widely used for network research in academia. NS-2 is also
free. However, NS-2 is more difficult to learn and lacks a user interface. It requires users to learn and use non-standard scripting interfaces such as Tcl, and it takes a significant amount of time to become familiar with NS-2. OPNET is the best network simulator to meet our
teaching goals for the following reasons:

OPNET is much easier to use than NS-2. It provides a very convenient Graphic User
Interface (GUI) and is very easy to learn.
OPNET can be used to model the entire network, including its routers, switches, protocols,
servers, and the individual applications they support. A large range of
communication systems from a single LAN to global inter-networks can be supported.
OPNET software (with model source code) is available for FREE to the academic research
and teaching community. Students can download and install OPNET IT Guru
Academic Edition at home.
OPNET's discrete event engine for network simulations is the fastest and most
scalable commercially available solution. It usually takes just a few minutes to complete
simulations of most lab experiments.
OPNET has a large user community. OPNET software is used by major fortune-500
companies, service providers, and government organizations worldwide. Students who
have experiences with OPNET simulator will have much better future employment
opportunities in industry.
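As background for the labs, the discrete-event idea behind such engines can be sketched in a few lines (a generic illustration in Python; OPNET's actual engine is a proprietary, optimized implementation):

```python
# Minimal discrete-event engine sketch (illustrative only; real simulators
# implement this in optimized C/C++). Events are (time, action) pairs kept
# in a priority queue and executed in timestamp order.
import heapq

class Simulator:
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0  # tie-breaker for events scheduled at the same time

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action()

log = []
sim = Simulator()
sim.schedule(2.0, lambda: log.append(("recv", sim.now)))
sim.schedule(1.0, lambda: log.append(("send", sim.now)))
sim.run()
print(log)  # [('send', 1.0), ('recv', 2.0)]
```

Note that events execute in timestamp order regardless of the order in which they were scheduled; this is the essential property of a discrete-event engine.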

Representative Projects:

Our labs emphasize understanding the dynamics of network protocols rather than configuration and management. Through these experiments students learn a wide range of networking topics, including the design and limitations of protocols, simulation and performance-evaluation techniques, interpretation of data, and packet analysis. Furthermore, we try to ensure that each lab extends or develops its topic beyond the lecture/reading.
Each lab experiment consists of the following five steps: create the network model, choose statistics, run the simulation, analyze the results, and write a report, as seen in Figure 1. In addition, each lab has a few questions based on the reports generated from the simulation to test the students' understanding as well as their analytical and reasoning skills.

OPNET provides four editors to develop a representation of a system being modeled. These
editors, the Network, Node, Process, and Parameter Editors, are organized in a hierarchical
fashion, as seen in Figure 2. Each level of the hierarchy describes different aspects of the
complete model being simulated. Models developed at one level of the hierarchy are used (or
inherited) by models at the next higher level. This leads to a highly flexible simulation
environment where generic models can be developed and used in many different scenarios.
Global Mobile Information System Simulator:

Global Mobile Information System Simulator (GloMoSim) is a popular network simulation tool, frequently used to study the behavior of large-scale hybrid networks that combine wireless, wired, and satellite-based communications, which are becoming common in both military and commercial settings. It is freely available for education, research, and non-profit use. It is simple to install and use. Binaries are available for various platforms, including FreeBSD 3.3, AIX, IRIX, Red Hat 6.0, Red Hat 7.2, and Solaris; the tool is not supported on Fedora Core Linux. This section will help you get started.

INTRODUCTION

Global Mobile Information System Simulator (GloMoSim) simulates networks with up to a thousand nodes linked by a heterogeneous communications capability that includes multicast, asymmetric communications using direct satellite broadcasts, multi-hop wireless communications using ad-hoc networking, and traditional Internet protocols.
Developers use such simulators to model the wired or wireless network design process. GloMoSim is designed using the parallel discrete-event simulation capability provided by Parsec. This makes it possible to evaluate various design alternatives and configurations before deploying the actual devices and components; based on the outcome of the validation process, simulations can be run again to optimize hardware performance. Most network systems are currently built using a layered approach similar to the OSI seven-layer network architecture. The plan is to build GloMoSim using a similar layered approach, with standard APIs between the different simulation layers. This allows the rapid integration of models developed at different layers by different people. It is usually made available on a standalone machine. The goal is to build a library of parallelized models that can be used to evaluate a variety of wireless network protocols. The protocol stack includes models for the channel, radio, MAC, network, transport, and higher layers. The simplest approach to designing a network simulation would be to initialize each network node as a separate Parsec entity; different entity initializations can then be viewed as separate logical processes in the system. However, each entity requires its own stack space in the runtime. GloMoSim aims to scale to thousands of nodes, and instantiating an entity for each node would increase memory requirements dramatically. Performance would also degrade rapidly: with so many entities in the simulation, the runtime would constantly context-switch among them, causing significant slowdown. Hence initializing each node as a separate entity inherently limits the scalability and performance of the simulation. To circumvent these problems, network gridding was introduced: with network gridding, a single entity can simulate several network nodes in the system, and a separate data structure maintains the state of each node simulated by that entity.
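The gridding idea can be sketched in Python (purely illustrative; GloMoSim itself is built on Parsec/C): a single partition entity owns a separate per-node data structure, so the runtime needs one stack per partition rather than one per node.

```python
# Illustrative sketch (not GloMoSim/Parsec code): with network gridding,
# one "partition" entity owns the state of many simulated nodes, so the
# runtime needs one stack per partition instead of one per node.

class Partition:
    """A single simulation entity that manages a grid of nodes."""
    def __init__(self, node_ids):
        # One separate data structure (a plain dict here) per node,
        # instead of one runtime entity per node.
        self.nodes = {nid: {"queue": [], "delivered": 0} for nid in node_ids}

    def handle_event(self, node_id, packet):
        # Dispatch the event to the per-node state it belongs to.
        self.nodes[node_id]["queue"].append(packet)

    def deliver_all(self):
        for state in self.nodes.values():
            state["delivered"] += len(state["queue"])
            state["queue"].clear()

# Two partitions can simulate 2,000 nodes with only two entities.
parts = [Partition(range(0, 1000)), Partition(range(1000, 2000))]
parts[0].handle_event(42, "hello")
parts[0].deliver_all()
print(parts[0].nodes[42]["delivered"])  # 1
```

The partition boundary is the only place the runtime must context-switch, which is why gridding scales far better than one entity per node.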
1. NS version 2

NS (network simulator) is part of the DARPA-funded research project VINT, whose aim is to build a network simulator that allows the study of scale and protocol interaction in the context of current and future network protocols. VINT is a collaborative project involving USC/ISI, Xerox PARC, LBNL, and UC Berkeley. ns is based on REAL (REalistic And Large), which in turn is based on NEST. Like any network simulator, ns has two key components: (i) building blocks such as nodes, links, and traffic models, which are described later, and (ii) glue in the form of a simulation description language (SDL). A common problem encountered by network simulators is that the building blocks and the glue are often implemented with a uniform programming model, which generally cannot satisfy the diverse requirements of both components at the same time. Another common problem is the inflexibility of the SDL when it comes to defining dynamic network simulation scenarios (e.g., REAL's NetLanguage, which defines a static configuration file, and OPNET's schematic capture mechanism). ns avoids these problems by using different programming models for the two components: it uses C++ to implement efficient building blocks, and it uses the scripting language OTcl, an object-oriented extension of Tcl (from David Wetherall at MIT/LCS), as the glue. Key features of ns are listed below:

* C++ is used to build the simulation engine and to implement performance-intensive activities.

* Protocol definitions and simulation configurations are specified using a rich scripting language, OTcl.

* Object implementation is split across a compiled/interpreted object hierarchy.

* A composable object architecture supports multiple levels of abstraction.

* Visualization is separated from simulation using the network animator (nam).

* Scaling through abstraction.

The class hierarchy of ns is depicted in Figure 1. ns supports the following protocol modules:

Source Agents: These include support for Telnet and file transfer, as well as traffic generation based on various distributions. UDP and CBR are also supported.

TCP: Many flavors of TCP, such as Tahoe, Reno, SACK, Vegas, and two-way TCP, are supported.

Queuing/Scheduling: Packet queuing algorithms like drop-tail, FIFO, and RED, as well as packet scheduling algorithms like FIFO, CBQ, round-robin, and FQ/WFQ, are supported. There is support for tracing and monitoring objects.

Unicast routing: Precomputed unicast routing as well as dynamic unicast routing is available in
ns. Multi-path and asymmetric path routing is also supported with provision for link failures etc.

Multicast routing: ns supports broadcast and prune type of multicasting with static or dynamic
topologies. Multicast tree computation is centralized. Support for multicast transport in terms of
Reliable multicast (SRM-fixed and adaptive) and RTP/RTCP exists.

MAC/LAN: ns supports multi-access LANs as well as wireless MAC protocols. Nodes are implemented as a collection of Agents and Classifiers: Agents are protocol end-points and related objects, whereas Classifiers are packet demultiplexers. Links encapsulate queue and delay objects. nam is used for network animation: visualization of packet flows, protocol states, queuing, and so on. It is also used for graphical scenario construction. It is extremely useful for designing and understanding the dynamics and interactions of protocols, and also serves as a debugging tool.
Fig1: NS Class Hierarchy

2. OWN Simulator

2.1 Design Objectives

OWns is designed to accomplish the following objectives:

To provide a generic framework that includes support for:

o Modeling of WDM network characteristics such as multi-wavelength links.

o Network topology generation.

o Network traffic generation.

o Performance evaluation based on parameters specific to optical WDM networks.

To study the performance of IP over WDM.

To study the interaction between existing electronically switched networks and optical WDM networks.

2.2 Design Considerations

Instead of designing OWns from scratch as a stand-alone simulator, we tried to take advantage of the facilities provided by existing network simulators. We first investigated the designs and functionality of currently available network simulators. Factors such as availability of source code, extensibility, and ease of use were used to judge the suitability of these simulators for achieving our design goals. The simulators considered in our investigation included ns (available from UCB/LBNL), REAL (available from Cornell University), NEST (available from Columbia University), OPNET (commercially available from MIL 3), and BONeS (commercially available from Cadence Design Systems). We found ns to be the most suitable for our needs. ns provides functionality such as discrete-event simulation, a TCP protocol stack, and graphical user interface support; its features are described above. The details of the design of the OWns components, using ns as the basic framework, are given below.

2.3 Owns Components

Given the framework of ns, we need to identify the specific areas where we are going to extend it to support optical WDM network simulations. We need to take into account the differences between traditional networks, which operate almost entirely in the electronic domain, and optical WDM networks, which operate mostly in the optical domain. OWns is designed to support the following components, shown in Figure 2.

Multi-wavelength Links: Optical WDM technology uses multiple wavelengths for data transmission over an optical fiber link. We have to support this property of WDM transmission over fiber links.

Optical Switches/Routers: We need to model the nodes in the network that act as optical
switches/routers with varying degrees of wavelength conversion capabilities.
Switching Architectures: NS supports packet-switching. We need to extend the framework to
support circuit switching as well as new switching paradigms like Optical Burst Switching
(OBS) and Tag switching. We need to take into account the multi-wavelength capability of the
links since these wavelengths may be assigned to different types of switching traffic.

Fig2: OWns Components

Virtual Topology Design Schemes: In order to support routing in optical WDM networks based
on virtual topologies, virtual topology construction algorithms based on heuristic need to be
supported. Corresponding to these algorithms, virtual topology Agents must be integrated with
the framework.

Routing Schemes: Different routing algorithms must be tailored to work with optical WDM
networks. Routing schemes based on virtual topology based routing need to be implemented and
integrated with the framework.

Multicasting Agents: Different multicasting algorithms could be implemented in optical WDM networks in such a way that they benefit from the multi-wavelength nature of the fiber links. Multicasting Agents need to be implemented for these multicasting algorithms and will interact with the routing Agents.

Integrated services/QoS Agent: In order to study the architectures for supporting integrated
services over optical WDM network, we need to support QoS Agents. QoS Agents can use
algorithms specific to optical WDM networks to support QoS in optical domain.

Visualization Component: nam needs to be extended in order to visualize optical WDM network simulations. It would be important to visualize the network traffic to better understand the interaction between traditional network traffic and the traffic over an optical WDM backbone.
With the support of these OWns components, network engineering for optical WDM networks can be studied by means of simulation. A detailed study of the interaction between an optical WDM network backbone and traditional networks would be useful in designing different optical WDM backbone architectures and evaluating their relative performance. It would also help in better understanding differentiated-services versus integrated-services issues in high-speed optical WDM networking environments.

3. Design of Owns

3.1 Switching Architecture

The OWns architecture is designed to accommodate the specific characteristics of WDM network simulations. As shown in Figure 3, the OWns architecture views the physical and logical topology of WDM networks as being implemented in a physical layer and a logical layer, respectively. The physical layer consists of optical switching nodes and multi-wavelength links; packet transmission mechanisms are implemented at this layer. The logical layer comprises the routing module and the wavelength assignment (WA) module, which together create and maintain the virtual topology. These modules are described further in Section 3.2.
Fig3: OWns architecture and layers

The OWns circuit-switched architecture is composed of the routing module, the WA module,
optical switching nodes, and the multi-wavelength links. The multi-channel structures of multi-
wavelength links are centrally maintained in the logical layer. The WA module works along with
the routing module to compute wavelength assignment, set up lightpaths, and construct the
virtual topology. Relying on the above results, optical nodes forward incoming traffic to the
corresponding next hops through multi-wavelength links. The current version of OWns supports
circuit switching. The development of optical burst switching, photonic packet switching, and multi-protocol lambda switching modules is planned for future work.

3.2 Components

Figure 4 illustrates the component organization and interactions of OWns. The optical switching node, multi-wavelength link, routing module, and WA module are implemented as the WDMNode, duplex FiberLink, RouteLogic/Wavelength, and WAssignLogic objects, respectively. The session traffic source objects that are used to generate packets are composed of a transport-level agent, Agent/WDM, and an application.
Fig 4: OWns Components Organization and Interactions

Optical switching node: The WDMNode object is derived from the ns Node object, and it
represents an instance of the optical switching node. It consists of a port classifier and a lightpath
classifier. The port classifier de-multiplexes and relays incoming packets to their respective
sinks, which are Application/SessionTraffic objects. The lightpath classifier interacts with the
WAssignLogic object to establish lightpaths for incoming traffic sessions and updates the current
state of the virtual topology. The lightpath classifier at a source node always attempts to issue
lightpath requests for its generated traffic to the WAssignLogic. On the other hand, a lightpath
classifier at an intermediate node of the traffic path simply consults the WAssignLogic to
determine forwarding information. The lightpath classifier also simulates the delay introduced by
wavelength conversion. The lightpath classifier is implemented as the object
Classifier/Addr/Lightpath.

Multi-wavelength link: The duplex link of ns is extended to form the duplex multi-wavelength
link, duplex-FiberLink. It has additional properties, such as the number of wavelengths and
per-wavelength bandwidth, used to model the characteristics of optical links. Another major
difference from traditional links is the absence of queuing components in multi-wavelength links
due to the characteristics of lightpath communication. An abstraction technique is adopted to
reduce the complexity of implementing detailed multi-channels on multi-wavelength links. All
multi-channel structures, wavelength usage, and virtual topology information are centrally
maintained by the WAssignLogic. Error models for the links that allow transmission errors to be
simulated are left for future work.

Wavelength assignment (WA) module: The WA module is responsible for computing


wavelength assignment, establishing lightpaths and constructing virtual topologies. The
simulator represents the WA module as an object WAssignLogic, which stores information
necessary for wavelength assignment calculation. The first-fit wavelength assignment algorithm
is implemented in WAssignLogic as the default wavelength assignment mechanism in OWns.
Implementation of a new wavelength assignment algorithm can be done by using classes
inherited from WAssignLogic.
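The first-fit policy can be sketched as follows (a simplified illustration, not the WAssignLogic source; it assumes wavelength continuity, i.e. no conversion along the route):

```python
# First-fit wavelength assignment sketch (illustrative, not OWns code).
# Assumes wavelength continuity: the same wavelength index must be free
# on every link of the route.

def first_fit(route_links, busy, num_wavelengths):
    """Return the lowest wavelength free on all links of the route, or None.

    route_links: list of link ids forming the route
    busy: dict mapping link id -> set of busy wavelength indices
    """
    for w in range(num_wavelengths):
        if all(w not in busy.get(link, set()) for link in route_links):
            # Reserve the wavelength on every link (lightpath setup).
            for link in route_links:
                busy.setdefault(link, set()).add(w)
            return w
    return None  # lightpath blocked

busy = {}
print(first_fit(["A-B", "B-C"], busy, 4))  # 0: first lightpath gets wavelength 0
print(first_fit(["B-C", "C-D"], busy, 4))  # 1: wavelength 0 is busy on B-C
```

First-fit always probes wavelengths from the lowest index upward, which packs traffic onto low-numbered wavelengths and leaves higher ones free for future lightpaths.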

Routing module: The routing module computes the routes needed to establish lightpaths, using
certain specified routing algorithms. Since the routing algorithms of traditional networks and
those of WDM networks are similar in functionality, the routing module is implemented as the
object RouteLogic/Wavelength, which is inherited from the ns routing logic RouteLogic object.
The simulator centrally maintains the routing information across networks in the RouteLogic/
Wavelength object. OWns currently uses the fixed-alternate shortest-path routing algorithm as
the default routing. The RouteLogic/Wavelength supplies the WAssignLogic with route
information to route incoming traffic. New wavelength routing algorithms can be easily
implemented as derived classes based on the RouteLogic/Wavelength object.
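For illustration, the shortest-path computation at the heart of such a routing module can be sketched with Dijkstra's algorithm (a generic sketch, not the RouteLogic/Wavelength implementation):

```python
# Dijkstra shortest-path sketch for route computation (illustrative only;
# OWns inherits its routing from the ns RouteLogic object).
import heapq

def shortest_path(graph, src, dst):
    """graph: dict node -> {neighbor: cost}. Returns (cost, path) or None."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            # Walk the predecessor chain back to the source.
            path = [u]
            while u != src:
                u = prev[u]
                path.append(u)
            return d, path[::-1]
        for v, cost in graph.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return None  # destination unreachable

g = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
print(shortest_path(g, "A", "D"))  # (3.0, ['A', 'B', 'C', 'D'])
```

A fixed-alternate scheme would precompute a small set of such paths per node pair and try them in order when the first choice has no free wavelength.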

UNIT V - SIMULATION
PART A (2 MARKS)
1. What is GloMoSim?
2. Explain advantages of parallelization in GloMoSim.
3. What are important components of mobile networking?
4. Write the command for Creating Node movements in NS-2.
5. What are the Network Components in a mobile node?
6. What are the Different types of Routing Agents in mobile networking?
7. Define DSDV.
8. Define TORA.
9. Define AODV.
10. Briefly explain about 802.11 DCF from CMU
PART B (16 MARKS)
1. Design and evaluate the performance of various transport protocols of mobile
using network simulator (any one).
2. Design and evaluate the performance of various routing protocols of mobile
using network simulator (any one).
3. Design and evaluate the performance of routing protocols of wireless networks
using network simulator (any one).
QUESTION BANK
1. Classify the mobile radio transmission systems.
Simplex & Duplex.
2. State example for a half duplex system.
Push to talk and release to listen.
3. State example for a Simplex system.
Pager.
4. State the operations performed by control channel
Call setup, call request, call initiation and other control purposes.
5. Define page.
A brief message which is broadcast over the entire service area in a simulcast fashion by many base stations at the same time.
6. Define the term Roamer.
A mobile unit that operates in a service area other than that from which service has
been subscribed.
1. Define handoff ?
When a mobile moves from one cell to another, the control of this mobile is transferred from one cell to another. This process is referred to as handoff.
7. Define cluster.
The N cells which collectively use the complete set of available frequencies is called a
cluster.
8. Give the equation which illustrates the relation between the capacity of a system and cluster size.
C = MkN, where M is the number of times the cluster is replicated, k is the number of channels per cell, and N is the cluster size.
9. State the different classifications of channel assignment strategies.
Fixed and dynamic.
10. What is the use of RSSI ?
This is receive signal strength indicator. This information is sent to the cell site from
the mobile unit so that the MTSO can decide for a handoff.
11. Mention the type of handoff used in CDMA.
Soft handoff.
12. State the different types of handoffs.
Soft handoff, hard handoff, forced handoff, delayed handoff and mobile associated
handoff.
13.What is intersystem handoff ?
During a course of a call, if a mobile moves from one cellular system to a different
cellular system controlled by a different MSC it is referred as intersystem handoff.

14. What is co-channel interference?
Interference between signals from cells that operate on the same frequency is referred to as co-channel interference.
15. What is grade of service ?
It is a measure of the ability of a user to access a trunked system during the busiest
hour.
16. What is cell splitting ?
It is a process of subdividing a congested cell into smaller cells.
17. What is sectoring ?
The process of using directional antennas in a cell is referred as sectoring.

18. State the different techniques used for improving coverage and capacity in cellular
systems.
Cell splitting, Sectoring, Repeaters for range extension and Microcell zone.
19. Define modulation.
It is the process of encoding information from a message source in a manner suitable for
transmission.
20. What is frequency planning ?
The design process of selecting and allocating channel groups for all the cellular
base stations within a system is called frequency planning.
21. What is trunking efficiency ?
It is a measure of the number of users which can be offered a particular GOS with
a particular configuration of fixed channels.
22. State the basic constituents of a cellular system.
Mobile unit, cell site, mobile telephone switching office.
23. State the two different types of fading.
Long term fading & short term fading.
24. Define rayleigh fading.
It refers to the variation in the received signal which is due to the waves reflected
from surrounding buildings and other structures.
25. Define the term coherence bandwidth.
It is defined as the bandwidth in which either the amplitudes or the phases of two
received signals have a high degree of similarity.
26. What is direct wave path ?
It is the path which is clear from the terrain contour.
27. State the different analog modulation schemes.
Amplitude and frequency modulation.

28. State the different digital modulation schemes.
Amplitude shift keying, frequency shift keying, phase shift keying.
29. Define amplitude modulation.
The amplitude of the high frequency carried is varied in accordance to the
instantaneous amplitude of the message signal.
30. State the techniques used for SSB generation.
Filter method and balanced modulator method.
31. State the advantages of digital modulation schemes.
Power efficiency and bandwidth efficiency.
32. Define bandwidth efficiency.
It describes the ability of the modulation scheme to accommodate data within a
limited bandwidth.
33. Define Power efficiency.
It describes the ability of the modulation scheme to preserve the fidelity of the
digital message at low power levels.
34. State the different types of line coding.
Return to zero, non-return to zero and Manchester.
35. State the types of modulation schemes used in mobile communication.
GMSK, GFSK and DQPSK.
36. What is coherent detector ?
If the receiver has prior knowledge of the transmitted signal then the receiver is
known as coherent detector.
37. State the advantage of using GMSK rather than MSK.
The bandwidth occupied by GMSK modulated signal is less in comparison to
MSK modulated signal.
38. What is CPFSK ?
Continuous phase frequency shift keying. It is another name for MSK.
39. What is QAM ?
Quadrature amplitude modulation.
40. State the difference between MSK and GMSK.
GMSK uses a Gaussian pulse shaping filter prior to MSK.
41. What is a diversity receiver?
A diversity receiver applies a diversity scheme at the receiving antenna. It is an effective technique for reducing interference, in which a selective combiner is used to combine two uncorrelated signals.
42. Expand PCS, PLMR, NLOS and DECT.
PCS - Personal Communication Systems
PLMR - Public Land Mobile Radio
NLOS - Non Line Of Sight
DECT - Digital Enhanced Cordless Telecommunications
43. Mention the three partially separable effects of radio propagation.
The three partially separable effects of radio propagation are,
Multi path fading
Shadowing
Path loss
44. Mention the basic propagation mechanisms, which impact propagation in mobile
communication.
The basic propagation mechanisms are,
Reflection
Diffraction
Scattering
45. What is reflection?
Reflection occurs when a propagating electromagnetic wave impinges upon an
object, which has very large dimension when compared to the wavelength of propagating
wave.
46. What is diffraction?
Diffraction occurs when the radio path between the transmitter and receiver is
obstructed by a surface that has sharp irregularities.
47. What is scattering?
Scattering occurs when the medium through which the wave travels consists of
objects with dimensions that are small compared to the wavelength and where the
number of obstacles per unit volume is large.
48. Define Brewster angle.
The Brewster angle is the angle at which no reflection occurs in the medium of origin. It occurs when the incident angle is such that the reflection coefficient is equal to zero.
49. Why do we use the 1-mi intercept for mobile communication?
Within a 1-mi radius, the antenna beamwidth of a high-gain omnidirectional antenna is narrow in the vertical plane. The larger the elevation angle, the weaker the reception level.
50. What are the possible conditions in a point-to-point prediction model?
The possible conditions in a point-to-point prediction model are:
Non-obstructive direct path.
Obstructive direct path.

51. What are the merits of point-to-point model?
The merits are,
Produces an accurate prediction with a deviation of 8dB.
Reduces the uncertainty range by including the detailed terrain contour information.
52. What is a smart antenna?
A smart antenna system consists of an antenna array, associated RF hardware, and a
computer controller that changes array pattern in response to radio frequency
environment.
53. What is EIRP?
Effective isotropic radiated power is referenced to an isotropic source. The
difference between ERP and EIRP is 2 dB:
ERP = EIRP - 2 dB
54) What is PHP?
PHP means Personal Handy Phone System; it is otherwise called PHS. PHP is a wireless TDD communication system which supports personal communication services (PCS). It uses small, low-complexity, lightweight terminals called Personal Stations (PSs).
55) Write down the applications of PHP?
PHP can be used for,
* Public Telephone
* Wireless PBX
* Home Cordless Telephone
* Walkie talkie communication.
56) What are the features of PHP?
* Wider Coverage per cell.
* Operation in a mobile Outdoor environment,
* Faster and distributed control of handoffs.
* Enhanced authentication
* Encryption
* Privacy
* Circuit and packet-oriented data services.
57) What are the logical channels that the control channel consists?
* Broadcast control channel.
* Common control channel.
* User packet channel.
* Associated control channel.
58) What is BCCH?
Broadcast control channel is a one way down link channel for broadcasting control
information from CS to PS.

59) What is CCCH?
CCCH is the Common Control Channel, which sends out the control information for call connection.
60) What is SIM?
The SIM is a memory device that stores information such as the subscriber identity number, the networks and countries where the subscriber is entitled to service, a private key, and other user-specific information.
61) What are main subsystems of GSM architecture?
i) Base station subsystem (BSS)
ii) Network & switching subsystem (NSS)
iii) Operation support subsystem (OSS)
62) What are frequencies used in forward and reverse link frequency in GSM?
(890-915) MHz- reverse link frequency
(935-960) MHz-forward link frequency
63) What are the channel types of GSM system?
i) GSM traffic channel
ii) GSM control channel
1. Broadcast channel
2. Common control channel
3. Dedicated control channel
64) What is the CDMA digital cellular standard (IS-95)?
IS-95: Interim Standard 95.
IS-95 allows each user within a cell to use the same radio channel, and users in adjacent cells also use the same radio channel, since this is a direct-sequence spread-spectrum CDMA system.
65) What are frequencies used in forward and reverse link frequency in IS-95?
(824-849) MHz- reverse link frequency
(869-894) MHz-forward link frequency
66) If a cellular operator is allocated 12.5 MHz for each simplex band, and if Bt is 12.5 MHz, Bguard is 10 kHz, and Bc = 30 kHz, find the number of channels available in an FDMA system.
N = (Bt - 2 Bguard) / Bc
N = (12.5 MHz - 2(10 kHz)) / 30 kHz
= 416 channels
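The arithmetic above can be checked numerically (a small sketch using Bc = 30 kHz, the channel bandwidth consistent with the 416-channel result):

```python
# Number of FDMA channels: N = (Bt - 2*Bguard) / Bc
bt = 12.5e6    # total spectrum in one simplex band (Hz)
bguard = 10e3  # guard band at each edge of the spectrum (Hz)
bc = 30e3      # bandwidth of one channel (Hz)

n = (bt - 2 * bguard) / bc
print(int(n))  # 416
```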
76. State certain access technologies used in mobile satellite communication systems.
FDMA, TDMA and CDMA.
77 State the different types of handoffs.
Soft handoff, hard handoff, forced handoff, delayed handoff and mobile associated
handoff.
78. What is intersystem handoff ?
During a course of a call, if a mobile moves from one cellular system to a different
cellular system controlled by a different MSC it is referred as intersystem handoff.
79. State the expression that relates the co-channel reuse ratio (Q) to the radius (R) of a cell.
Q = D/R, where D is the distance between the centers of co-channel cells.
80. State the expression used to locate co channel cells.
N = i² + ij + j²
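For illustration, a short script (not part of the original material) can enumerate the cluster sizes N that this expression permits for small i and j:

```python
# Valid hexagonal cluster sizes: N = i^2 + i*j + j^2 for integers i, j >= 0.
sizes = sorted({i * i + i * j + j * j
                for i in range(5) for j in range(5)} - {0})
print(sizes[:7])  # [1, 3, 4, 7, 9, 12, 13]
```

These are the familiar cluster sizes used in frequency planning (N = 3, 4, 7, 12, ...).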
81. Define the term dwell time.
The time over which a call may be maintained within a cell without handoff.
82. State the advantage of umbrella cell approach.
It provides large area coverage to high speed users while providing small area
coverage to users traveling at low speeds.
84. Define co channel cells.
The cells that operate with the same set of frequencies are referred as co channel
cells.
85. Define the term Erlang.
One Erlang represents the amount of traffic intensity carried by a channel that is completely occupied.
86. State the relation between traffic intensity (Au) and holding time (H).
Au = λH, where λ is the average call request rate and H is the average holding time.
87. State the two types of trunked system.
Blocked call cleared system and Delayed call cleared system
88. How many co-channel interferers are present in the first tier for a cluster size of 7?
Six
89. What is CDPD?
CDPD (Cellular Digital Packet Data) is a cellular system that uses packet-switched data. The bit rate in the RF channel for CDPD is 19.2 kbps.
90. Write some features of TDMA.
* In TDMA, the number of time slots depends on the modulation technique and the available bandwidth.
* Data transmission occurs in bursts.
* It uses different time slots for transmission and reception, so duplexers are not required.
* Adaptive equalization is necessary.
* Guard time should be minimized.
91. Write some features of CDMA.
* In a CDMA system, many users share the same frequency; either TDD or FDD may be used.
* The channel data rate is high.
* Multipath fading may be substantially reduced.
* Since CDMA uses co-channel cells, it can use macroscopic spatial diversity to provide soft handoff.
92.Write the features of DECT?
DECT provides a cordless communication framework for high traffic intensity,
short range telecommunication and covers a broad range of applications and
environment
It supports telepoint services
It provides low power radio access between portable parts and fixed base stations
at ranges of up to a few hundred meters

93. What are the interfaces used in GSM?
GSM radio air interface
Abis interface
A interface
94.What are the types of services in GSM?
Teleservices and data services
95.Write some third generation wireless standards.
Personal communication system
IMT-2000
UMTS
96. What is Bluetooth?
It is an open standard that provides an ad-hoc approach for enabling various devices to communicate with one another within a nominal 10-meter range. It operates in the 2.4 GHz ISM band and uses a frequency-hopping TDD scheme for each radio channel.
97. What is the forward and reverse link frequency for AMPS?
(824-849) MHz - reverse link frequency
(869-894) MHz - forward link frequency
98. Write the specifications of DECT.
Frequency band - 1880-1900 MHz
No. of carriers - 10
RF channel bandwidth - 1.728 MHz
Multiplexing - FDMA/TDMA
Duplex - TDD
99. What is the near-far effect in a wireless network?
When many users transmit simultaneously, the transmitter nearest the receiver can
successfully capture the intended receiver even when many other users are also transmitting.
If the closest transmitter is able to capture a receiver because of its small propagation path
loss, this is called the near-far effect in a wireless network
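The near-far problem can be made concrete with a small numeric sketch. It assumes a simple log-distance path-loss model with exponent n = 4 and an illustrative 30 dB reference loss at 1 m (both values chosen for illustration, not taken from the text):

```python
import math

# Near-far effect: with path-loss exponent n, received power falls off as
# d**(-n), so a nearby interferer can swamp a distant desired signal.
def received_power_dbm(pt_dbm, d_m, n=4, pl0_db=30.0, d0_m=1.0):
    """Received power under a simple log-distance path-loss model.
    pl0_db is an assumed reference loss at distance d0_m."""
    return pt_dbm - pl0_db - 10 * n * math.log10(d_m / d0_m)

# Both mobiles transmit at 30 dBm; one is 50 m from the base station,
# the other 1000 m away at the cell edge.
near = received_power_dbm(30, 50)
far = received_power_dbm(30, 1000)
print(f"near: {near:.1f} dBm  far: {far:.1f} dBm  gap: {near - far:.1f} dB")
```

With equal transmit powers, the 50 m interferer arrives roughly 52 dB stronger than the cell-edge user, which is why CDMA systems require tight power control.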
100. Write some standards used in 2G system
GSM
IS-136
IS-95
Pacific Digital Cellular standard
Part B Questions
1. Explain elaborately about types of handoffs.
Hard handoff
Soft handoff
Forced handoff
Delayed handoff
Mobile assisted handoff
2. Explain in detail about dropped call rate and cell splitting.
Definition of dropped call rate
Consideration of dropped calls
Relationship among capacity, voice quality and dropped call rate
Formulae for dropped call rate
Diagram for cell splitting
Theory for cell splitting
3. Explain the different techniques of improving coverage and capacity in cellular system
Explanation about cell splitting
Explanation about sectoring
Explanation about Microzone approach
4. Derive the expressions for the Erlang B and Erlang C formulas
Explanations about Blocked call cleared system and Delayed call cleared
System
Derivation for Erlang B formula
Derivation for Erlang C formula
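As a numerical companion to the derivations asked for above, the Erlang B blocking probability (GOS of a blocked-calls-cleared system) can be evaluated with the standard recurrence; this is a sketch of the computation, not the derivation itself:

```python
def erlang_b(traffic_erlangs, channels):
    """Blocking probability of a blocked-calls-cleared (Erlang B) system,
    via the numerically stable recursion:
        B(A, 0) = 1,  B(A, c) = A*B(A, c-1) / (c + A*B(A, c-1))
    """
    b = 1.0
    for c in range(1, channels + 1):
        a_b = traffic_erlangs * b
        b = a_b / (c + a_b)
    return b

# e.g. 9 Erlangs of offered traffic on 15 trunked channels
print(f"GOS = {erlang_b(9.0, 15):.4f}")  # about 2% of calls blocked
```

Adding channels for the same offered traffic lowers the blocking probability, which the recursion shows directly.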
5. Explain in detail about usage of repeater for coverage improvement.
Repeaters
Usage in providing coverage.
6. Explain with a neat diagram a cellular mobile telephone system.
Diagram
Explanation about MSC
Explanation about PSTN
Explanation about cell sites
Explanation about mobile units

Explanation about communication between cell sites, MSC and PSTN.


7. With the help of a neat diagram explain about frequency reuse and the advantages of it.
Diagram
Derivation for N=3, 7 and 12
The advantages of frequency reuse
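The cluster sizes that permit a regular hexagonal reuse pattern satisfy N = i^2 + ij + j^2 for non-negative integers i and j; a short sketch enumerating the valid sizes:

```python
# Hexagonal-geometry cluster sizes satisfy N = i**2 + i*j + j**2
# for non-negative integers i, j (not both zero).
def cluster_sizes(limit):
    sizes = set()
    for i in range(limit + 1):
        for j in range(limit + 1):
            n = i * i + i * j + j * j
            if 0 < n <= limit:
                sizes.add(n)
    return sorted(sizes)

print(cluster_sizes(21))  # [1, 3, 4, 7, 9, 12, 13, 16, 19, 21]
```

This confirms the familiar N = 3, 7 and 12 cases asked for in the question (i=1,j=1; i=2,j=1; i=2,j=2).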
8. Explain in detail about Paging system and its operation.
History
Operation Procedure.
9. Explain elaborately the evolution of cellular systems.
History behind cellular
10. With the help of a timing diagram, explain the process of call initiation in a cellular
system.
Timing diagram
Explanation about call initiated by landline subscriber
Explanation about call initiated by a mobile
11. Compare and contrast the features of FDMA, TDMA and CDMA
Comparison based on
Bandwidth
Security
Efficiency
12. With neat diagram explain the forward CDMA channel Structure
Frequency Hopping
Explanation
Direct Sequence
13. Explain the free space propagation model.
14. What is nonlinear equalization? Explain the three nonlinear methods of
equalization with suitable diagrams.
15. Draw the block diagram of an LPC coding system and explain the different types
of LPC used for wireless systems.


16.With suitable block diagram explain the GSM system?
17.Explain the concepts of CDMA. What are its merits and demerits? Explain the working
principle of RAKE receiver.
18.Explain the TDMA frame structure and derive the efficiency of a TDMA system
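The TDMA frame efficiency asked for above is the ratio of payload bits to total bits transmitted per frame; a minimal sketch, using assumed GSM-like burst figures (156.25 bits per slot, 114 of them traffic):

```python
def tdma_frame_efficiency(overhead_bits_per_slot, bits_per_slot, slots_per_frame):
    """Frame efficiency = payload bits / total bits transmitted per frame."""
    payload = (bits_per_slot - overhead_bits_per_slot) * slots_per_frame
    total = bits_per_slot * slots_per_frame
    return payload / total

# Assumed GSM-like burst: 156.25 bits per slot, 114 of them traffic,
# the rest tail, training, stealing and guard bits; 8 slots per frame.
eff = tdma_frame_efficiency(156.25 - 114, 156.25, 8)
print(f"frame efficiency = {eff:.2%}")  # 72.96%
```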
19. Explain in detail about DECT.
20. Explain in detail about IS-95.
21. Write about the GMSK transmitter and receiver with a neat diagram.
22. With a diagram explain the performance of the RAKE receiver.
23. Describe the impulse response model for a multipath radio channel.
24. Explain the two-ray ground reflection model and obtain an expression for the received
power at a distance d from the transmitter.
25. Enumerate the fundamentals of equalization and the reduction of intersymbol
interference in communication channels.
B.E/B.Tech Degree examination, APRIL/MAY 2008

Eighth semester

Electronics and communication Engineering

EC1451 MOBILE COMMUNICATION

Answer all the Questions.

PART-A- (10 x 2 = 20)

1. In a wireless communication system, what is cell dragging?

2. If bandwidth = 33 MHz and forward and reverse channel bandwidth is 25KHz for a

cell size of N = 12, calculate the total channels and channel bandwidth for full duplex.
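A worked sketch of the arithmetic for Question 2 above (33 MHz total, 25 kHz per simplex channel, so 50 kHz per full-duplex channel):

```python
total_bw_hz = 33e6
duplex_bw_hz = 2 * 25e3                    # forward + reverse = one duplex channel
total_channels = int(total_bw_hz / duplex_bw_hz)
channels_per_cell = total_channels // 12   # cluster size N = 12
print(total_channels, channels_per_cell)   # 660 55
```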

3. For the figure given below calculate the loss due to knife edge diffraction take T =

1.11ns.

4. State the effects of multi path propagation.

5. Gaussian low pass filter is used to produce 0.25 GMSK with a channel data rate of 270

Kbps. What is the 90% power bandwidth in the RF channel ? Specify the Gaussian filter

parameter?

6. What is Bluetooth communication?

7. What are Rayleigh and Ricean fading?

8. Difference between 2G and 3G system with respect to multiple access system.

9. GSM frame structure consists of 8 time slots, each of 256 bits, and the transmission

rate is 270.83 Kbps. Find how long a user occupying a single slot must wait between

two successive transmissions.
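A quick numeric sketch for this slot-timing question, taking the figures exactly as stated (256 bits per slot, 8 slots per frame, 270.83 kbps):

```python
bits_per_slot = 256
rate_bps = 270.83e3                 # channel transmission rate
slot_s = bits_per_slot / rate_bps   # duration of one time slot
frame_s = 8 * slot_s                # a user owns one slot per 8-slot frame
wait_s = frame_s - slot_s           # idle time between its successive bursts
print(f"slot = {slot_s * 1e3:.3f} ms, wait = {wait_s * 1e3:.3f} ms")
```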


10. How does cell splitting improve the efficiency of a mobile system?

PART-B (5 x 16 = 80)

11.a.(i) Explain with suitable diagrams and specifications the difference between cellular,

paging and cordless systems.(8)

(ii) How is the trunking capacity calculated for lost calls and delayed calls in a

mobile trunking system. (8)

Or

b.(i) What is handoff? Explain how this is accomplished in a cellular mobile

communication system. (6)

(ii) What are the different channel assignment strategies? Differentiate cell splitting

and cell sectoring concepts.

12.a. (i) What are the different types of distortions in wireless communication ?(6)

(ii) Draw the two ray ground reflection model and derive an expression to compute

the electric field at a distance of d meters. (10)

Or

b.(i) Explain the path loss prediction over hilly terrain. (8)

(ii) Define the multipath shape factors. How are they used to derive the fading rate variance

relationships.(8)

13. a.(i) What are the digital modulation techniques employed for mobile

communication? Explain the most preferred technique.(8)

(ii) What is space diversity technique? What is the need for space diversity? (8)
Or

b. (i) Explain RAKE receiver for CDMA systems.(10)

(ii) How can channel distortion be avoided in a wireless channel? (6)

14.a. What are the major differences between TDMA, FDMA and CDMA? Explain in

detail about each multiple access.

Or

b. What are Vocoders? Design suitable Vocoder that is most suitable for mobile

communication.

15.a. Draw the architecture of GSM and explain how the locations of mobile users are tracked

using various registers. Draw the TDMA frame structure for the GSM network. (16)

Or

b. (i) How can speech quality be improved to accommodate higher data rate.(8)

(ii) Explain the wireless local loop, with specifications and applications. (8)
B.E/B.Tech. DEGREE EXAMINATION , MAY/JUNE 2007.

Eighth semester

Electronics and Communication Engineering

EC047 CELLULAR MOBILE COMMUNICATION

PART-A- (10 x 2 = 20)

1. Why are cells assumed to be hexagonal in shape?

2. Why are low data rates used in paging systems?

3. What are soft and hard handoffs?

4. An FDD cellular system has a total cellular bandwidth of 50 MHz. Assuming the

forward and reverse channel bandwidth of each equal to 50KHz, determine the number

of available channels per cell if the system uses N = 4 (four-cell reuse).

5. What are the three basic propagation mechanisms? When do they occur?

6. Name the propagation models used for outdoor and also indicate the specific situation

for each model.

7. What are the advantages of diversity techniques?

8. What are Trellis and Turbo codes?

9. Why do you use Walsh codes for modulation in CDMA?

10. What is near/far effect in wireless networks?

PART B (5 x 16 = 80marks)

11. (a) (i) Draw the block diagram of a cellular system and explain how a cellular

telephone call is made between the landline and the mobile user and when the

call is initiated by the landline customer. Draw suitable timing diagrams.

(ii) Explain briefly about 3 G CDMA techniques.


Or

(b) (i) What is the need for frequency reuse?

Explain the frequency reuse concept and show that N = i^2 + ij + j^2, where N is

the no of cells per cluster.

(ii) Derive an expression for signal to interference ratio (S/I) for 7 cell cluster

system.

12. (a) What are the different techniques used for increasing the capacity and improving

the coverage in cellular systems? Explain them.

Or

(b) (i) What is grade of service? How are the Erlang B and Erlang C formulas used for

cellular systems?

(ii) A hexagonal cell with four cell system has a radius of 2 Km and a total of 50

channels are used in the system. If the load per user is 0.03 Erlangs and λ = 2 calls/hour,

compute the following for an Erlang C system by assuming 5% probability of delay with

C= 15 and traffic intensity = 9.0 Erlangs.

(1) How many users per square kilometer will this system support?

(2) What is the probability that a call will be delayed for more than 10 secs?

13. (a) Derive and explain the free space propagation model to determine the received

power at a distance d and relate this power to the electric field.

Or

(b) For a two-ray model derive the expression for the received power at a

distance d from a transmitter and show that

Pr = Pt Gt Gr ht^2 hr^2 / d^4
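The large-distance two-ray result asked for here can be checked numerically; a sketch with illustrative (assumed) transmit power, antenna heights and distance:

```python
def two_ray_received_power(pt_w, gt, gr, ht_m, hr_m, d_m):
    """Large-distance two-ray approximation Pr = Pt*Gt*Gr*ht^2*hr^2 / d^4,
    valid for d much larger than sqrt(ht*hr); note the d^-4 roll-off and
    the absence of any carrier-frequency dependence."""
    return pt_w * gt * gr * ht_m ** 2 * hr_m ** 2 / d_m ** 4

# Illustrative values: 10 W, unity gains, ht = 50 m, hr = 1.5 m, d = 2 km
pr = two_ray_received_power(10, 1, 1, 50, 1.5, 2000)
print(f"Pr = {pr:.3e} W")
```

Doubling the distance reduces the received power by a factor of 16 (12 dB), in contrast to the 6 dB per doubling of free space.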
14.a) What is nonlinear equalization? Explain the three nonlinear methods of

equalization with suitable diagrams.

Or

b) Draw the block diagram of a LPC coding system and explain the different types

of LPC used for wireless systems.

15. (a) (i) With suitable block diagram explain the GSM system.

(ii) Draw the transmitter and receiver block diagram of MSK and explain.

Or

(b) (i) Explain the concept of CDMA. What are its merits and demerits?

Explain the working principle of RAKE receiver.

(ii) Explain the TDMA frame structure and derive the efficiency of a TDMA

system
B.E/B.Tech. DEGREE EXAMINATION , MAY/JUNE 2009

Eighth semester

( Regulation 2004)

Electronics and Communication Engineering

EC1451-MOBILE COMMUNICATION

PART-A- (10 x 2 = 20)

1. Define coherence bandwidth

2. Define grade of service

3. What is meant by soft hand off ? What advantages does it have?

4. What are piconets?

5. What is air interface?

6. Draw the frame structure of GSM.

7. What are the advantages of the 2-ray ground reflection model in the analysis of path loss?

8. What is Doppler shift?

9. Explain adjacent channel interference.

10. In an FDMA system the total spectrum bandwidth is 12.5 MHz. Each channel is 30 kHz wide

and the guard band is 10 kHz. Find the total no. of channels available in the system.
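A worked sketch of the arithmetic for Question 10, assuming a 10 kHz guard band at each edge of the allocated spectrum:

```python
total_bw_hz = 12.5e6
channel_bw_hz = 30e3
guard_hz = 10e3          # guard band at each edge of the spectrum (assumed)
channels = int((total_bw_hz - 2 * guard_hz) / channel_bw_hz)
print(channels)  # 416
```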

PART-B (5 x 16 = 80 marks)

11.(a). (i) Explain the various methods of increasing the capacity of users in cellular system

(ii) Show that the frequency reuse factor for a cellular system is given by k/S, where k is the

number of channels allotted to a cell and S is the total number of channels available.

Or

(b). (i) Explain the procedure for making a call from a mobile to another mobile.

(ii) Explain how the interference between the base and the mobile transmissions is reduced

in PCS.
12.(a) (i) Explain how the diffraction affects the signal propagation using knife edge diffraction

model.

(ii) Explain the propagation of a signal outdoors using Durkin's model.

Or

(b) What are the types of small-scale fading? Explain each fading effect in detail.

13. (a) (i) Explain the generation and detection of GMSK signal.

(ii) Discuss the nonlinear equalization methods using Maximum likelihood sequence

estimation (MLSE).

Or

(b) (i) What are the benefits of RAKE receiver?

(ii) Draw the constellation points of 16-QAM signal and derive the average probability

error.

14. (a) (i) Explain the LPC vocoder and determine the predictor coefficients

(ii) Explain the working of GSM codec with block diagram.

Or

(b) (i) Draw the frame structure of TDMA and derive its efficiency.

(ii) Derive the equation to calculate the number of users in a CDMA cellular system.

15. (a) (i) What are the advantages and disadvantages of WLL

(ii) Draw the functional block of DECT systems and explain its working principle.

Or

(b) (i) Explain the forward and reverse channel parameters in IS-95 CDMA.

(ii) Explain the GSM system architecture and give its protocol specifications.
B.E/B.Tech. DEGREE EXAMINATION , NOV/DEC2008.

Eighth semester

Electronics and Communication Engineering

EC1451 - MOBILE COMMUNICATION

PART-A- (10 x 2 = 20)

1. Give the conceptual difference between wireless PABX and cellular systems.

2. What are the maximum data rates of 802.11, 802.11a, 802.11b?

3. Explain soft hand off in mobile communication.

4. What are the different ways to increase the channel capacity in mobile networks?.

5. What is the advantage of OFDM?

6. What is the transmitting frequency and distance for the Bluetooth devices?

7. Define grade of service.

8. Why is the Rayleigh distribution widely used in wireless communication? Give any two

reasons.

9. What is the advantage of cell sectoring?

10. Name the wireless system that uses the GMSK modulation technique.

PART B (5 x 16 = 80marks)

11. (a) Explain co-channel interference and adjacent channel interference. Describe the

techniques to avoid the interference.

Or

(b) Explain briefly the steps involved in processing of call in cell phone to Cell phone

communication.

12. (a) In free space propagation , describe how the signals are affected by reflection

diffraction and scattering.

Or
(b) Consider N homogeneous plane waves that have been created by reflection / scattering

from different interacting objects (obstacles) . The interactive objects and

transmitter do not move, and the receiver moves with a velocity v, derive the

Rayleigh amplitude and phase distribution.

13(a) (i) Explain equalization in time domain and frequency domain .

(ii) Consider a linear equalizer structure and derive the mean square error between the

transmit signal and output of the equalizer.

Or

(b) (i) Explain the modulation and demodulation techniques of GMSK.

(ii) Explain how the cyclic prefix is used to reduce the ISI in a frequency selective channel.

14. (a) (i) What is the function of Vocoder?

(ii) Explain the linear predictive code and derive the predictive coefficient.

Or

(b) (i) Compare the multiple access CDMA and SDMA.

(ii) Explain the GSM codec.

15. (a) Write short notes on

(i) WLL

(ii) AMPS

(iii) DECT

Or

(b) Explain about the logical and physical channels in GSM and IS-95.