
Experiment No. 1

Aim: Study and analysis of wired & wireless networks

Theory:

Network - A computer network is a collection of two or more computer systems that are
linked together. A network connection can be established using wireless media or cables. A
network consists of various kinds of nodes: servers, networking hardware, personal computers,
and other specialized or general-purpose hosts can all be nodes in a computer network.

Networks are further classified into two types:

(a) Wired Network - A wired network refers to any physical medium made up of cables. Copper
wire, twisted pair, and fiber optic cable are common options. A wired network
employs wires to link devices, such as laptops or desktop PCs, to the internet or to another
network.
Wired connectivity provides high security, with high bandwidth
provisioned for each user. It is highly reliable and incurs very low delay, unlike wireless
connectivity.
Advantages of wired networks include dedicated bandwidth instead of having to share with
other users, fewer network traffic interruptions, and less susceptibility to interference
and outages than wireless access points.
Most wired networks use Ethernet cables to transfer data between connected PCs. In a small
wired network, a single router may be used to connect all the computers. Larger networks
often involve multiple routers or switches that connect to each other.

In a wired network, a user does not have to share the medium with other users and thus gets
dedicated speeds, while in wireless networks the same connection may be shared by
multiple users.
The medium must have properties that ensure reasonable error performance for a
guaranteed distance and rate of data delivery (i.e. speed). It must also support two-way or
multiway communication.
Fiber optic systems use mainly infrared light for data transmission. At each end, fast opto-
couplers and diodes are used to translate the signal to electrical levels. Most fiber-based
systems can only transmit one way, so a pair of fibers is required to implement a two-way
system. Hubs are also required at each junction, as fiber systems can only operate point-to-
point. Fiber is used mainly in areas where high speed, security, and/or electrical isolation
are important.
Coaxial cable systems were very popular in the early days of networking, mainly because they
made use of cheap and commonly available 75 or 50 Ω coaxial cable, nominally used for
video and radio frequency applications. In a coaxial system, nodes are connected together
via a single backbone; that is, the cable is laid out as a single line passing through all
stations. This is also known as a bus topology. A resistive 75 (or 50) Ω terminator R is placed at
each end of the cable to absorb all reflections. The nodes act, on reception, as high-impedance
signal pickoffs and, on transmission, as current drives into the (resistive) line. To all intents
and purposes, this line looks to all the connected devices like a purely resistive load of R/2 Ω; such a
known load resistance allows the transmitters to use simple current generators to inject a
fixed amount of current into the line. The voltage generated on the line is of the order of
half a volt for a typical Ethernet network. This allows receivers to use dynamically adjustable
threshold sensors to determine zero crossings, and also to detect the voltage overloads caused
when two or more station transmitters attempt to drive the line together. This
provides a simple form of collision detection. To avoid ground loops, the coaxial cable shield
connection is grounded at only one point; network adapters must therefore incorporate
isolation hardware to float all the power supplies and other circuits directly connected to
the cable, adding somewhat to their cost.
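As a rough numerical sketch of the collision detection just described (the injected current value is an assumption for illustration, not taken from the Ethernet specification): each transmitter injects a fixed current into a line that looks like a resistive load of R/2 Ω, and a receiver flags a collision when the line voltage exceeds the single-transmitter level.

```python
# Hedged sketch of voltage-based collision detection on a coaxial bus.
# The 20 mA drive current is an illustrative assumption.

R = 50.0        # ohms, terminator at each end of the cable
LOAD = R / 2    # the two parallel terminators seen by every node
I_TX = 0.02     # amperes injected by one transmitting station (assumed)

def line_voltage(num_transmitters: int) -> float:
    """Voltage on the bus when n stations drive it simultaneously."""
    return num_transmitters * I_TX * LOAD

def collision_detected(voltage: float) -> bool:
    # Threshold sits between the one- and two-transmitter levels.
    threshold = 1.5 * I_TX * LOAD
    return voltage > threshold

print(line_voltage(1))                      # 0.5 V for a single sender
print(collision_detected(line_voltage(1)))  # False
print(collision_detected(line_voltage(2)))  # True: two senders overlap
```

With a single 20 mA transmitter into the 25 Ω effective load, the line sits at the half-volt level mentioned above; a second simultaneous transmitter doubles it, tripping the threshold.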
Twisted pair (TP) systems are more recent. The most basic form of TP cable consists of one or
more pairs of insulated strands of copper wire twisted around one another. The twisting is
necessary to minimize electromagnetic radiation and resist external interference. It also
helps to limit interference with other adjacent twisted pairs (cross-talk). In theory, the more
twists per meter a cable has, the better the isolation. However, this increases the
capacitance per meter, which can result in an increased load requirement and an increase in
high-frequency attenuation. There are two main types of twisted pair cable: unshielded
twisted pair (UTP) and shielded twisted pair (STP), which contains each pair of wires within
an aluminum foil shield for further isolation. The impedance of a typical twisted pair is of the
order of 100–150 Ω. Lines are fed in voltage differential mode: a positive signal level is fed to
one wire, and a corresponding negative, or inverted, signal to the other. This makes the total
radiation field around the pair cancel to zero. TPs can be used in bus topologies, as
described before, or in star topologies, where each computer or terminal is connected by a
single cable to a central hub. The signal sent from one station is received by the hub and
redirected to all the other stations on the star network.

Wireless Networks: Traditional networks provide place-to-place communication; wireless networks
provide person-to-person communication, which is certainly a desirable feature for people on the move.

Wireless networks are computer networks that are not connected by cables of any kind. The use of a
wireless network enables enterprises to avoid the costly process of introducing cables into buildings or
of using cables to connect different equipment locations. Access points amplify Wi-Fi signals, so a device
can be far from a router but still be connected to the network. When you connect to a Wi-Fi hotspot at a
cafe, a hotel, an airport lounge, or another public place, you're connecting to that business's wireless
network.

High-Speed Wireless Local Area Networks (WLANs):

Wireless LANs can be categorized as providing low-mobility, high-speed data communication within a
confined region. Coverage range from a wireless terminal is tens to hundreds of feet. There are many
different WLAN products offered by different vendors, with data rates ranging from hundreds of kb/s to
more than 10 Mb/s. An IEEE standards committee, 802.11, has been attempting to put some order into
this topic, but its success has been somewhat limited.

There are two overall network architectures pursued by WLAN designers. One is the centrally coordinated
and controlled network, in which base stations exercise overall control over channel access. The other is
the self-organizing, distributed-control network, where every terminal has the same function as every
other terminal and networks are formed ad hoc by communication exchanges among terminals.

Wide Area Wireless Data Systems: Wide area data systems can be categorized as providing high-
mobility, wide-ranging, low-data-rate digital data communication to both vehicles and pedestrians. The
earliest and best-known systems are the ARDIS network developed and run by Motorola, and the RAM
mobile data network based on Ericsson Mobitex technology. These technologies were designed to make
use of standard two-way voice land mobile radio channels, with 12.5 kHz or 25 kHz channel spacing.

A relatively new technology called Cellular Digital Packet Data (CDPD) is being developed by major
cellular carriers and manufacturers. CDPD shares the 30 kHz spaced 800 MHz voice channels used by the
analog FM Advanced Mobile Phone Service (AMPS) systems. The data rate is 19.2 kbps.

Satellite-based Mobile Systems: Satellite-based systems are the epitome of expensive, wide-area-coverage
base station systems. These systems can provide very widespread, often global, coverage.
However, it is extremely expensive to maintain an orbital base station. It is also very difficult to provide
adequate link margin to cover the inside of buildings, or even locations shadowed by a tree. Thus,
satellite systems are not likely to compete favorably with terrestrial systems, at least in the near future.

Research Directions: Current research directions in wireless networking include the integration of various
wireless network services, the mobile Internet, and mobile application support.

Integration of Wireless Networks: It is highly unlikely that a single wireless network able to meet all
mobile computing needs will evolve. It is far more probable that many wireless networks will be
available, each working at a different scale and providing service over a variety of geographical
coverage areas, at various speeds and at a variety of price levels. Each network will serve a niche, but none will
meet all needs. Accordingly, mobile computer users will need to be able to access multiple networks in
order to meet their needs. The user would in general like a wireless network infrastructure that can
provide seamless roaming: the ability for mobile computers to continue to receive service as they move
from the coverage area of one wireless network to another.

Mobile Internetworking: Within the Internet society, the middle-aged IP is now being face-lifted to
deal with the various demands of growth and new services. Mobile IP was proposed to work with
wireless networks. It is in principle based on location registration and packet redirection. A mobile node
changing its location must register with a dedicated agent. This agent then handshakes with the
station’s home agent. Upon successful registration, the mobile station’s current address is bound to its
home address. Incoming datagrams are first routed to the mobile station’s home network, where they
are encapsulated by the home agent and then tunneled to the foreign agent, which delivers them to the
mobile host. The problem with Mobile IP is that the routing path traversed by datagrams is
suboptimal; in the worst case, “triangle routing” may occur.
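The registration and redirection steps above can be sketched as a toy model (the agent class, address names, and dictionary-based "packets" are illustrative, not taken from any RFC):

```python
# Toy model of Mobile IP location registration and packet redirection.
# All names and structures here are illustrative only.

class HomeAgent:
    def __init__(self):
        self.bindings = {}  # home address -> care-of address (foreign agent)

    def register(self, home_addr: str, care_of_addr: str) -> None:
        """Bind a mobile node's home address to its current care-of address."""
        self.bindings[home_addr] = care_of_addr

    def forward(self, packet: dict) -> dict:
        """Encapsulate a datagram and tunnel it toward the foreign agent."""
        care_of = self.bindings[packet["dst"]]
        return {"outer_dst": care_of, "inner": packet}  # IP-in-IP style

ha = HomeAgent()
ha.register("mobile.home.net", "fa.visited.net")  # registration handshake

# A correspondent's datagram arrives at the home network first...
pkt = {"src": "corresponder.net", "dst": "mobile.home.net", "data": "hello"}
tunneled = ha.forward(pkt)
print(tunneled["outer_dst"])     # tunneled to the foreign agent
print(tunneled["inner"]["dst"])  # original destination preserved inside
```

Since replies from the mobile host travel directly to the correspondent while inbound traffic detours through the home agent, the forward and return paths differ; this asymmetry is the "triangle routing" referred to above.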

Mobile Application Support: We build wireless infrastructure because we want to use it. To use
it, we need application interfaces fine-tuned to the nature of mobile computing. For example,
mobile real-time multimedia applications (such as video conferencing) require low latency. However, with
traditional Mobile IP there is a significant delay caused by registration and packet
redirection. New methods need to be created to address this kind of problem.

Technical Approaches Overview

Seamless Integration of Overlay Networks: No general network management architecture exists for
effectively integrating multiple overlay networks. Mobile applications roaming across overlays require
network intelligence to determine that the mobile has moved from acceptable coverage in one network
to better coverage in another. A global network management algorithm is still needed to control
handoffs across overlays based on current mobile connectivity. Link quality is only one metric that
determines handover; priority of access, application needs, and relative cost are equally important.
Since overlays may not cooperate with one another to render such decisions, mobile-assisted handoff, in
which the mobile host must be an active participant in handoff processing, will be needed.

Support Services for Mobile Applications: Handover across overlays will change an application's
network bandwidth and latency. A new application interface to the network management layers will be
designed to allow applications to initiate handovers, to determine changes in their current network
capabilities, and to gracefully adapt their communication demands. It will better integrate mobile
applications with scalable wire-line processing and storage capabilities through an agent processing
architecture that exploits data-type-specific transmissions to manage the communication demands over
dynamically varying wireless links.

Managing Mobile Connections to Support Latency-Sensitive Applications: Handoffs must be executed
with lower latency than is now possible if (near) real-time multimedia applications are to be well
supported. One strategy moves routing and resource allocation to local sub-nets. For example,
roaming authentication can be cached locally to avoid repeated remote, latency-intensive transactions.
BARWAN is developing algorithms that exploit information about the location of mobile devices, the
geographical adjacency of cells, and the likely routes devices might take to improve handoff processing.
End-to-end strategies like Mobile IP provide routing, but fall far short for latency-sensitive connection-
oriented services. More hierarchical approaches, which localize information collection to the region or
the sub-net containing the users, are more scalable.

Load Balancing for Scalable Mobile Processing: Repositioning within future wireless networks will be
a common event. Traffic patterns will not be uniform, with high correlation between mobile host
locations, their repositioning, and their requests for service. BARWAN is developing network
management architectures that build on decentralized algorithms to allocate network and processing
resources on demand, avoiding the static and centralized schemes of the past. Furthermore, overlay
networks provide an opportunity to share bandwidth and processing across networks; current network
load is one reason to initiate an internetwork handoff.

Conclusion: A computer network is a collection of two or more computer systems that are linked
together. A network connection can be established using wireless media or cables. A network consists
of various kinds of nodes; servers, networking hardware, personal computers, and other specialized or
general-purpose hosts can all be nodes in a computer network.
Experiment No. 2

Aim: Case study on wireless network.

Theory: Wireless network fundamentals, high-speed WLANs, wide area wireless data systems,
satellite-based mobile systems, and the current research directions and technical approaches are
surveyed in the theory of Experiment No. 1; that discussion applies here as well.

Computer networks that are not connected by cables are called wireless networks. They generally use
radio waves for communication between the network nodes. They allow devices to be connected to
the network while roaming around within the network coverage.
Advantages of Wireless Networks

 It provides clutter-free desks due to the absence of wires and cables.
 It increases the mobility of network devices connected to the system, since the
devices need not be physically wired to each other.
 Accessing network devices from any location within the network coverage or a Wi-Fi
hotspot becomes convenient, since laying out cables is not needed.
 Installation and setup of wireless networks are easier.
 New devices can be easily connected to the existing setup, since they need not be
wired to the present equipment. Also, the number of devices that can be added to or
removed from the system can vary considerably, since they are not limited by cable
capacity. This makes wireless networks very scalable.
 Wireless networks require very few or no wires, which reduces equipment
and setup costs.

Types of Wireless Networks

 Wireless LANs − Connect two or more network devices using wireless distribution
techniques.
 Wireless MANs − Connect two or more wireless LANs spread over a
metropolitan area.
 Wireless WANs − Connect large areas comprising LANs, MANs, and personal
networks.
Experiment No. 3
Aim: Study and analysis of antennas and their types

Theory: A metallic structure used to transmit and capture radio electromagnetic signals is known as an
antenna. Antennas are available in different sizes as well as shapes. Smaller antennas were used to watch
television in the past, while large antennas are used to capture signals from satellites.

SCAN (Space Communications and Navigation) antennas consist primarily of a bowl-shaped reflector
with a feed element that captures the signal at a specific focal point; this arrangement is called a
parabolic antenna. This type of antenna allows both transmission and capture of the electromagnetic
signal, and it can be moved horizontally and vertically to transmit and capture the signal.

The signal is given to the antenna by the transmission line. The antenna then converts this signal into
electromagnetic energy to be transmitted into space. In other words, an electrical device such as an
antenna or aerial converts electrical power into an electromagnetic signal, and vice versa. Antennas play
an important role in transmitting electromagnetic radiation.

A transmitting antenna receives an electrical signal from a transmission line and converts it into a radio
wave. A receiving antenna does the exact opposite: it accepts radio waves from space, converts them
into electrical signals, and delivers them to the transmission line. Typical antenna
parameters are bandwidth, gain, radiation pattern, polarization, impedance, and beamwidth.

Why do we need Antennas?

Antennas are used for many reasons, but the main one is that they provide a simple way to
transmit a signal where other techniques are not feasible.

For example, the pilot of an aircraft often needs to communicate with ATC, i.e. air traffic control. This
communication is carried out wirelessly, and the antenna is the gateway for it. There are likewise many
conditions and applications where wireless communication through an antenna is chosen over a cable.

Types of Antenna:

 Wire antenna
 Aperture antenna
 Reflector antenna
 Lens antenna
 Microstrip antenna
 Array antenna

Wire Antenna: Wire antennas are the basic types of antennas. These are well known and widely
used antennas. To have a better idea of these wire antennas, first let us have a look at the
transmission lines.
Transmission Lines: The wire or transmission line carries some power, which travels from one end to
the other. If both ends of the transmission line are connected to circuits, then information will
be transmitted or received over this wire between the two circuits.

If one end of this wire is not connected, then the power in it tries to escape. This leads to wireless
communication. If one end of the wire is bent, then the energy tries to escape from the transmission
line, more effectively than before. This purposeful escape is known as Radiation.

For the radiation to take place effectively, the impedance of the open end of the transmission line
should match with the impedance of the free-space. Consider a transmission line of a quarter-wave
length size. The far end of it is kept open and bent to provide high impedance. This acts as a half-wave
dipole antenna. Already, it has low impedance at one end of the transmission line. The open end, which
has high impedance, matches with the impedance of free space to provide better radiation.
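For reference, the free-space impedance the open end must match is a standard physical constant (about 377 Ω, a figure not stated in the text above), and the physical length of the quarter-wave section follows directly from the operating frequency:

```python
import math

# Standard physical constants (not specific to this text).
MU_0 = 4 * math.pi * 1e-7   # permeability of free space, H/m
EPS_0 = 8.854187817e-12     # permittivity of free space, F/m
C = 299_792_458             # speed of light, m/s

# Characteristic impedance of free space that the open end must match.
eta_0 = math.sqrt(MU_0 / EPS_0)
print(round(eta_0, 1))      # 376.7 ohms

def quarter_wavelength(freq_hz: float) -> float:
    """Physical length of a quarter-wave line section at freq_hz."""
    return C / freq_hz / 4

# Example: a quarter-wave section at 100 MHz.
print(round(quarter_wavelength(100e6), 3))  # 0.749 m
```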

Dipole

When energy is radiated through such a bent wire, the end of the transmission line is
termed a dipole, or dipole antenna.

The reactance of the input impedance is a function of the radius and length of the dipole. The smaller
the radius, the larger the amplitude of the reactance; it is proportional to the wavelength. Hence, the
length and radius of the dipole should be taken into consideration. Normally, its impedance is
around 72 Ω.
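As a hedged sizing sketch (the 5% end-effect shortening factor is a common rule of thumb for thin-wire dipoles, not a value from this text), a half-wave dipole's physical length can be estimated from the operating frequency:

```python
# Rule-of-thumb half-wave dipole sizing. The 0.95 shortening factor is
# an assumed typical value accounting for the end effects that also
# influence the reactance discussed above.

C = 299_792_458      # speed of light, m/s
K_END_EFFECT = 0.95  # common shortening factor for thin-wire dipoles

def half_wave_dipole_length(freq_hz: float) -> float:
    """Approximate physical length of a practical half-wave dipole."""
    return K_END_EFFECT * C / freq_hz / 2

# Example: a dipole for the 2.4 GHz band.
print(round(half_wave_dipole_length(2.4e9) * 100, 2), "cm")  # 5.93 cm
```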

This is better understood with the help of the following figure.


The figure shows the circuit diagram of a normal dipole connected to a transmission line. The current for
a dipole is maximum at the center and minimum at its ends. The voltage is minimum at its center and
maximum at its ends.

The types of wire antennas include the half-wave dipole, half-wave folded dipole, full-wave dipole,
short dipole, and infinitesimal dipole.

Aperture Antenna: An antenna with an aperture at the end can be termed an aperture antenna.
A waveguide is an example of an aperture antenna. The edge of a transmission line, when terminated
with an opening, radiates energy; this opening, which is an aperture, makes it an aperture antenna.

The main types of aperture antennas are −

 Waveguide antenna

 Horn antenna

 Slot antenna

Let us now have a look at these types of aperture antennas.

Waveguide Antenna

A waveguide is capable of radiating energy when excited at one end and opened at the other end. The
radiation from a waveguide is greater than that from a two-wire transmission line.

Frequency Range

The operational frequency range of a waveguide is around 300 MHz to 300 GHz. This antenna works
in the UHF and EHF frequency ranges. The following image shows a waveguide.
A waveguide with a terminated end acts as an antenna, but only a small portion of the energy is
radiated, while a large portion is reflected back from the open circuit. This means the VSWR (voltage
standing wave ratio, discussed in the basic parameters chapter) value increases. Diffraction around the
waveguide results in poor radiation and a non-directive radiation pattern.
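The VSWR mentioned here is conventionally computed from the reflection coefficient at the radiating end; the following sketch uses the standard definitions (the 377 Ω free-space figure is an assumed illustration of a badly mismatched open end):

```python
def reflection_coefficient(z_load: float, z_line: float) -> float:
    """Magnitude of the reflection coefficient for a resistive load."""
    return abs((z_load - z_line) / (z_load + z_line))

def vswr(gamma: float) -> float:
    """Voltage standing wave ratio from |reflection coefficient|."""
    return (1 + gamma) / (1 - gamma)

# A matched load reflects nothing, so the VSWR is the ideal 1.0.
print(vswr(reflection_coefficient(50.0, 50.0)))  # 1.0

# An open end facing ~377-ohm free space from a 50-ohm line reflects
# most of the energy, so the VSWR rises sharply.
print(round(vswr(reflection_coefficient(377.0, 50.0)), 1))  # 7.5
```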

Radiation Pattern

The radiation from a waveguide antenna is poor and the pattern is non-directive, i.e. omni-
directional. An omnidirectional pattern is one which has no particular directivity but radiates in all
directions; hence it is called a non-directive radiation pattern.
The above figure shows a top section view of an omnidirectional pattern, which is also called a non-
directional pattern. The two-dimensional view is a figure-of-eight pattern, as we already know.

Advantages

The following are the advantages of an aperture antenna −

 Radiation is greater than from a two-wire transmission line

 Radiation is omnidirectional

Disadvantages

The following are the disadvantages of an aperture antenna −

 VSWR increases

 Poor radiation

Conclusion: A metallic structure used to transmit and capture radio electromagnetic signals is known as
an antenna. Antennas are available in different sizes as well as shapes; smaller antennas were used to
watch television in the past, while large antennas are used to capture signals from satellites.
Experiment No. 4

Aim: Case study on Bluetooth.

Theory: Bluetooth is a short-range wireless technology standard that is used for exchanging data
between fixed and mobile devices over short distances using UHF radio waves in the ISM bands, from
2.402 to 2.48 GHz, and for building personal area networks (PANs). It is mainly used as an alternative to
wired connections, to exchange files between nearby portable devices, and to connect cell phones and music
players with wireless headphones. In the most widely used mode, transmission power is limited to
2.5 milliwatts, giving it a very short range of up to 10 meters (33 ft).

Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000
member companies in the areas of telecommunication, computing, networking, and consumer
electronics. The IEEE standardized Bluetooth as IEEE 802.15.1 but no longer maintains the standard.
The Bluetooth SIG oversees development of the specification, manages the qualification program, and
protects the trademarks. A manufacturer must meet Bluetooth SIG standards to market a product as a
Bluetooth device. A network of patents applies to the technology, and these are licensed to individual
qualifying devices. As of 2009, Bluetooth integrated circuit chips shipped approximately 920 million
units annually. By 2017, 3.6 billion Bluetooth devices were being shipped annually, and shipments were
expected to continue increasing at about 12% a year.

Bluetooth Connections

Wireless communication is now a common phenomenon. Many of us have WiFi internet connections in
our offices and homes. But Bluetooth devices communicate directly with each other, rather than
sending traffic through an in-between device such as a wireless router. This makes life very convenient
and keeps power use extremely low, improving battery life.

Bluetooth devices communicate using low-power radio waves on a frequency band between 2.400 GHz
and 2.4835 GHz [source: Bluetooth Special Interest Group (SIG)]. This is one of a handful of bands that is
set aside by international agreement for the use of industrial, scientific and medical devices (ISM).

Many devices that you may already use take advantage of this same radio-frequency band,
including baby monitors, garage-door openers and the newest generation of cordless phones. Making
sure that Bluetooth devices and other wireless communications technologies don't interfere with one
another is essential.

There are two types of Bluetooth technology as of 2020: Bluetooth Low Energy (LE) and Bluetooth
Classic (more formally known as Bluetooth Basic Rate/Enhanced Data Rate, or BR/EDR)
[source: Bluetooth SIG]. Both operate using the same frequency band, but Bluetooth LE is the more
popular option, by far. It needs much less energy to operate and can also be used for broadcast or mesh
networks in addition to allowing communication over point-to-point connections between two devices.
The classic Bluetooth technology can deliver a slightly higher data rate than Bluetooth LE (3 Mb/s
compared to either 1 Mb/s or 2 Mb/s) but can only be used for communication directly between two
devices using point-to-point connections. Each of the two types of Bluetooth technology has its
particular strengths and manufacturers adopt the version that best fits the needs of their product.

Why Is It Called Bluetooth?

Harald Bluetooth was king of Denmark in the late 900s. He managed to unite Denmark and part of
Norway into a single kingdom, then introduced Christianity into Denmark. He left a large monument, the
Jelling rune stone, in memory of his parents. He was killed in 986 during a battle with his son, Svend
Forkbeard. Choosing this name for the standard indicates how important companies from the Nordic
region (nations including Denmark, Sweden, Norway and Finland) are to the communications industry,
even if it says little about the way the technology works.

How Bluetooth Technology Operates

Bluetooth BR/EDR devices must always be paired and this procedure results in each of the two devices
trusting the other and being able to exchange data in a secure way, using encryption.

When Bluetooth BR/EDR devices come within range of one another, an electronic conversation takes
place to determine whether they trust each other or not and have data to share. The user doesn't
usually have to press a button or give a command — the electronic conversation happens automatically.
Once the conversation has occurred, the devices — whether they're part of a computer system or a
stereo — form a network.

Bluetooth LE works differently. Devices may also be paired to form a trusted relationship between them
but not all types of product require this. A Bluetooth LE device which wants to be discovered broadcasts
special messages (known as packets) in a process called advertising. Advertising packets contain useful
information about the advertising device. Another suitable device will find the advertising device by
scanning (listening) for advertising packets and selecting those which are from appropriate devices.
Usually scanning only happens when the user triggers it by, say, pressing a button in a smartphone
application. Typically the user is then presented with details of appropriate devices that were discovered
and then selects one to connect to.

Bluetooth peripherals (e.g., an activity tracker and a smartwatch) that are connected to the same central
device (e.g., a smartphone) form a personal-area network (PAN) or piconet that may fill an entire
building or may encompass a distance no more than that between the smartphone in your pocket and
the watch on your wrist. Once a piconet is established, its members hop radio frequencies in unison so
they stay in touch with one another and avoid interfering with other Bluetooth piconets that may be
operating in the same room or devices using other wireless technologies such as WiFi. Bluetooth
technology even learns which radio channels are working well and which ones are experiencing
interference so that it can dynamically avoid bad channels and just use the channels that are free from
interference. This process, called adaptive frequency hopping, allows Bluetooth devices to work really
well, even in environments where there are very large numbers of wireless devices operating.
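The adaptive frequency hopping just described can be sketched in a few lines of Python. The 79-channel figure is Bluetooth BR/EDR's actual channel count; the selection function itself is a simplified stand-in, since the real hop sequence is derived from the master device's clock and address rather than a generic random generator.

```python
import random

BT_CHANNELS = 79  # Bluetooth BR/EDR hops over 79 1-MHz channels in the 2.4 GHz band

def next_channel(channel_map, rng):
    """Pick the next hop channel, skipping channels marked bad.

    channel_map: list of 79 booleans, True = channel usable.
    A toy stand-in for the real hop-selection kernel.
    """
    good = [ch for ch in range(BT_CHANNELS) if channel_map[ch]]
    return rng.choice(good)

# Mark channels 0-21 as bad (e.g., they overlap a busy WiFi channel).
channel_map = [ch > 21 for ch in range(BT_CHANNELS)]
rng = random.Random(42)
hops = [next_channel(channel_map, rng) for _ in range(1600)]  # ~1 s at 1600 hops/s
assert all(h > 21 for h in hops)  # adaptive hopping never lands on a bad channel
```

The piconet members all run the same selection with the same inputs, so they land on the same channel at the same time while dodging the channels marked as experiencing interference.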

Bluetooth Range
Although many think of Bluetooth primarily as a short-range technology, it can also be used to connect
devices more than a kilometer (3,280 feet) apart [source: Bluetooth SIG]. In fact, many types of product,
such as wireless headphones, require the devices' communication range to be very short. But because
Bluetooth technology is very flexible and can be configured to the needs of the application,
manufacturers can adjust the Bluetooth settings on their devices to achieve the range they need whilst
at the same time maximizing battery life and achieving the best quality of signal.

Several factors affect the range of Bluetooth devices:

 Radio spectrum: The 2.4 GHz ISM band that Bluetooth uses offers a good balance of range,
throughput and antenna size for short-range wireless communication.

 Physical layer (PHY): This defines some key aspects of how the radio is used to transmit and
receive data such as the data rate, how error detection and correction is performed,
interference protection, and other techniques that influence signal clarity over different ranges.

 Receiver sensitivity: The measure of the minimum signal strength at which a receiver can still
receive and correctly decode data.

 Transmission power: As you may expect, the higher the transmitted signal strength, the longer
the range that can be achieved. But increasing the transmission power will also deplete your
battery faster.

 Antenna gain: A measure of how effectively the antenna converts electrical signals from the
transmitter into radio waves (and back again on the receiving end); higher gain extends range.

 Path loss: Several factors may weaken the signal, including distance, humidity, and the medium
through which it travels (such as wood, concrete or metal).
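These factors come together in a simple link-budget estimate. The free-space path loss formula below is the standard idealized model (no walls or humidity); the 0 dBm transmit power and -95 dBm receiver sensitivity are illustrative assumptions, not figures for any particular Bluetooth product.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (ideal propagation, no obstacles)."""
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / 299_792_458))

def link_margin_db(tx_power_dbm, rx_sensitivity_dbm, distance_m,
                   freq_hz=2.44e9, antenna_gain_db=0.0):
    """Received margin in dB: positive means the link should close."""
    received = tx_power_dbm + antenna_gain_db - fspl_db(distance_m, freq_hz)
    return received - rx_sensitivity_dbm

# Illustrative numbers: 0 dBm transmitter, -95 dBm receiver sensitivity.
print(round(link_margin_db(0, -95, 10), 1))    # margin at 10 m
print(round(link_margin_db(0, -95, 1000), 1))  # margin at 1 km
```

With these assumed numbers the margin is comfortably positive at 10 m but negative at 1 km, which is why long-range configurations need more transmit power, better receiver sensitivity, antenna gain, or a more robust PHY.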

One of the most recent Bluetooth technology updates introduced a technique called forward error
correction (FEC) to improve receiver sensitivity. FEC corrects data errors that are detected at the
receiving end and improves a device's effective range by four or more times without having to use more
transmission power. This means a device can successfully receive data when it is at a much longer range
from the transmitter, where the signal will be much weaker [source: Bluetooth SIG].
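The idea behind FEC can be illustrated with the simplest possible scheme, a triple-repetition code. Bluetooth LE's Coded PHY actually uses a convolutional code with pattern mapping, so the sketch below shows only the principle: transmitting redundant bits lets the receiver vote out errors instead of requesting a retransmission.

```python
def fec_encode(bits):
    """Toy forward error correction: repeat each bit three times.

    Real Bluetooth LE FEC is convolutional; triple repetition is just
    the simplest scheme that demonstrates the principle.
    """
    return [b for bit in bits for b in (bit, bit, bit)]

def fec_decode(coded):
    """Majority vote over each group of three received bits."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

data = [1, 0, 1, 1, 0, 0, 1, 0]
tx = fec_encode(data)
tx[4] ^= 1   # flip a couple of bits to simulate a weak, noisy signal
tx[9] ^= 1
assert fec_decode(tx) == data  # single-bit errors per group are corrected
```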

Bluetooth Security

Bluetooth technology includes a number of security measures that can satisfy even the most stringent
security requirements such as those included in the Federal Information Processing Standards (FIPS).

When setting up a new device, users typically go through a process called pairing. Pairing equips each
device with special security keys and causes them to trust each other. A device that requires pairing will
not connect to another device which it has not been paired with.

Those security keys allow Bluetooth technology to protect data and users in a number of ways. For
example, data exchanged between devices can be encrypted so that it cannot be read by other devices.
It can also allow the address which acts as the identity of a device and which is included in wireless data
exchanges to be disguised and changed every few minutes. This protects users from the risk of being
tracked using data transmitted by their personal electronic devices.
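The address-rotation idea can be sketched as follows. Real Bluetooth LE builds resolvable private addresses with AES-128 and an Identity Resolving Key exchanged during pairing; since Python's standard library has no AES, HMAC-SHA256 stands in for the keyed function here, so this illustrates the concept rather than the actual algorithm.

```python
import hmac, hashlib, os

def rotate_address(identity_resolving_key):
    """Generate a fresh 'private' address from a long-term shared key.

    Real Bluetooth LE uses AES-128 for this; HMAC-SHA256 stands in
    here because it ships with Python's standard library.
    """
    prand = os.urandom(3)  # random part, regenerated every few minutes
    tag = hmac.new(identity_resolving_key, prand, hashlib.sha256).digest()[:3]
    return prand + tag     # 6-byte address: random half + keyed-hash half

def resolve_address(identity_resolving_key, address):
    """Only a paired peer holding the key can recognise the device."""
    prand, tag = address[:3], address[3:]
    expected = hmac.new(identity_resolving_key, prand, hashlib.sha256).digest()[:3]
    return hmac.compare_digest(tag, expected)

irk = os.urandom(16)                       # key shared during pairing
addr = rotate_address(irk)
assert resolve_address(irk, addr)          # trusted peer resolves it
assert not resolve_address(os.urandom(16), addr)  # a tracker cannot
```

Because the address changes every few minutes and only key holders can link the old and new addresses, an eavesdropper cannot follow the device around.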
If you own Bluetooth-enabled devices, you have experienced this for yourself. For example, if you buy a
cordless mouse, the first time you turn it on, you pair it to the device you plan to use it with. You might
turn the mouse on, then go to the Bluetooth settings on your computer to pair the device once you see
its name in a list of nearby Bluetooth accessories. A computer can handle many Bluetooth connections
at once by design. You may want to use a cordless mouse, keyboard and headphones.

The makers of those accessories, however, are going to limit connections to one at a time. You want
your keyboard to type only on your computer, or your headphones to listen specifically to your phone.
Some allow the user to pair the device with multiple computers, tablets or phones, but they may only be
allowed to connect with one at a time. It all depends on what the manufacturer decided was sensible for
their product.

Some devices require a code for security while being paired with another device. This is an example
of authentication and it ensures that the device you are setting up that trusted relationship with is the
one you think it is, rather than another device somewhere else in the environment. For example, many
cars let you take calls without taking your hands off the steering wheel. The first time you want to use
this facility, you will have to pair your phone and the car's audio system using the car's entertainment
display and your smartphone together. The car gives you a number to type in. Your phone lets you know
a device wants to pair using a numeric code. You enter the code off the entertainment display to
confirm that this is an authorized pairing. After that, you can use the hands-free phone system without
ever needing to pair again.

The user also has control over a device's visibility to other Bluetooth devices. On a computer or
smartphone, for example, you can switch the device's Bluetooth mode to "nondiscoverable"
or simply disable Bluetooth until you need it again.

How Secure Is Bluetooth?

Bluetooth is considered a reasonably secure wireless technology when used with precautions.


Connections are encrypted, preventing casual eavesdropping from other devices nearby. Bluetooth
devices also shift radio frequencies often while paired, which prevents an easy invasion.

Devices also offer a variety of settings that allow the user to limit Bluetooth connections. The device-
level security of "trusting" a Bluetooth device restricts connections to only that specific device. With
service-level security settings, you can also restrict the kinds of activities your device is permitted to
engage in while on a Bluetooth connection. 

As with any wireless technology, however, there is always some security risk involved. Hackers have
devised a variety of malicious attacks that use Bluetooth networking. For example, "bluesnarfing" refers
to a hacker gaining unauthorized access to information on a device through Bluetooth; "bluebugging" is
when an attacker takes over your mobile phone and all its functions.  

For the average person, Bluetooth doesn't present a grave security risk when used with safety in mind
(e.g., not connecting to unknown Bluetooth devices). For maximum security, while in public and not
using Bluetooth, you can disable it completely.
The specifications were formalized by the Bluetooth Special Interest Group (SIG) and formally
announced on 20 May 1998.[70] Today it has a membership of over 30,000 companies worldwide. [71] It
was established by Ericsson, IBM, Intel, Nokia and Toshiba, and later joined by many other companies.

All versions of the Bluetooth standard are backward compatible.[72] That lets the latest standard
cover all older versions.

The Bluetooth Core Specification Working Group (CSWG) produces four main kinds of specifications:

 The Bluetooth Core Specification, whose release cycle is typically a few years

 Core Specification Addenda (CSA), whose release cycle can be as tight as a few times per year

 Core Specification Supplements (CSS), which can be released very quickly

 Errata (available with a user account)

Conclusion: Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances using UHF radio waves in the ISM bands, from
2.402 to 2.48 GHz, and building personal area networks (PANs). 
Experiment No.-5
Aim: Case study on mobile IP.
Theory: Mobile IP is a communication protocol (created by extending the Internet Protocol, IP) that
allows users to move from one network to another while keeping the same IP address. It ensures
that communication continues without the user's sessions or connections being dropped.
This is an IETF (Internet Engineering Task Force) standard communications protocol designed
to allow mobile devices' (such as laptop, PDA, mobile phone, etc.) users to move from one
network to another while maintaining their permanent IP (Internet Protocol) address.
Defined in RFC (Request for Comments) 2002, mobile IP is an enhancement of the internet
protocol (IP) that adds mechanisms for forwarding internet traffic to mobile devices (known as
mobile nodes) when they are connected to a network other than their home network.
The following case shows how a datagram moves from one point to another within the Mobile
IP framework.
o First of all, the internet host sends a datagram to the mobile node using the mobile
node's home address (normal IP routing process).
o If the mobile node (MN) is on its home network, the datagram is delivered through the
normal IP (Internet Protocol) process to the mobile node. Otherwise the home agent
picks up the datagram.
o If the mobile node (MN) is on foreign network, the home agent (HA) forwards the
datagram to the foreign agent.
o The foreign agent (FA) delivers the datagram to the mobile node.
o Datagrams from the MN to the Internet host are sent using normal IP routing
procedures. If the mobile node is on a foreign network, the packets are delivered to the
foreign agent. The FA forwards the datagram to the Internet host.
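The delivery steps above can be condensed into a single routing decision, sketched here in Python with illustrative addresses and hop names:

```python
def route_to_mobile_node(datagram_dst, mn_home_addr, mn_location, care_of_addr):
    """Trace the hops a datagram takes to reach a mobile node (MN).

    Follows the Mobile IP flow described above: normal IP delivery
    when the MN is at home, home-agent interception and tunnelling
    when it is away. Returns the hop list for illustration.
    """
    assert datagram_dst == mn_home_addr  # senders always use the home address
    if mn_location == "home":
        return ["internet_host", "home_network", "mobile_node"]
    # MN is on a foreign network: the home agent (HA) intercepts the
    # datagram and tunnels it to the care-of address / foreign agent (FA).
    return ["internet_host", "home_agent",
            f"tunnel_to:{care_of_addr}", "foreign_agent", "mobile_node"]

print(route_to_mobile_node("10.0.0.5", "10.0.0.5", "home", None))
print(route_to_mobile_node("10.0.0.5", "10.0.0.5", "foreign", "192.0.2.7"))
```

Note that the Internet host's behaviour is identical in both cases; only the path behind the home network changes, which is exactly what makes the mobility transparent.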
In the case of wireless communications, the above illustrations depict the use of wireless
transceivers to transmit the datagrams to the mobile node. Also, all datagrams between the
Internet host and the MN use the mobile node's home address regardless of whether the
mobile node is on a home or foreign network. The care-of address (COA) is used only for
communication with mobility agents and is never seen by the Internet host.
Mobile IP allows for location-independent routing of IP datagrams on the Internet. Each
mobile node is identified by its home address regardless of its current location in the Internet.
While away from its home network, a mobile node is associated with a care-of address which
identifies its current location and its home address is associated with the local endpoint of a
tunnel to its home agent. Mobile IP specifies how a mobile node registers with its home agent
and how the home agent routes datagrams to the mobile node through the tunnel.
In many applications (e.g., VPN, VoIP), sudden changes in network connectivity and IP address can
cause problems. Mobile IP was designed to support seamless and continuous Internet connectivity.
Mobile IP is most often found in wired and wireless environments where users need to carry their
mobile devices across multiple LAN subnets. Examples of use are in roaming between overlapping
wireless systems, e.g., IP over DVB, WLAN, WiMAX and BWA.
Mobile IP is not required within cellular systems such as 3G, to provide transparency when Internet
users migrate between cellular towers, since these systems provide their own data link
layer handover and roaming mechanisms. However, it is often used in 3G systems to allow
seamless IP mobility between different packet data serving node (PDSN) domains.
Components of a Mobile IP Network
Mobile IP has the following three components, as shown in Figure 1:
• Mobile Node
• Home Agent
• Foreign Agent
The Mobile Node is a device such as a cell phone, personal digital assistant, or laptop whose
software enables network roaming capabilities.
The Home Agent is a router on the home network serving as the anchor point for
communication with the Mobile Node; it tunnels packets from a device on the Internet, called a
Correspondent Node, to the roaming Mobile Node. (A tunnel is established between the Home
Agent and a reachable point for the Mobile Node in the foreign network.)
The Foreign Agent is a router that may function as the point of attachment for the Mobile Node
when it roams to a foreign network, delivering packets from the Home Agent to the Mobile
Node.
The care-of address is the termination point of the tunnel toward the Mobile Node when it is
on a foreign network. The Home Agent maintains an association between the home IP address
of the Mobile Node and its care-of address, which is the current location of the Mobile Node on
the foreign or visited network.

How Mobile IP Works


The Mobile IP process has three main phases, which are discussed in the following sections.
• Agent Discovery
A Mobile Node discovers its Foreign and Home Agents during agent discovery.
• Registration
The Mobile Node registers its current location with the Foreign Agent and Home Agent during
registration.
• Tunneling
A reciprocal tunnel is set up by the Home Agent to the care-of address (current location of the
Mobile Node on the foreign network) to route packets to the Mobile Node as it roams.
Agent Discovery
During the agent discovery phase, the Home Agent and Foreign Agent advertise their services
on the network by using the ICMP Router Discovery Protocol (IRDP). The Mobile Node listens to
these advertisements to determine if it is connected to its home network or foreign network.
The IRDP advertisements carry Mobile IP extensions that specify whether an agent is a Home
Agent, Foreign Agent, or both; its care-of address; the types of services it will provide such as
reverse tunneling and generic routing encapsulation (GRE); and the allowed registration lifetime
or roaming period for visiting Mobile Nodes. Rather than waiting for agent advertisements, a
Mobile Node can send out an agent solicitation. This solicitation forces any agents on the link to
immediately send an agent advertisement.
If a Mobile Node determines that it is connected to a foreign network, it acquires a care-of
address. Two types of care-of addresses exist:
• Care-of address acquired from a Foreign Agent
• Collocated care-of address
A Foreign Agent care-of address is an IP address of a Foreign Agent that has an interface on the
foreign network being visited by a Mobile Node. A Mobile Node that acquires this type of care-
of address can share the address with other Mobile Nodes. A collocated care-of address is an IP
address temporarily assigned to the interface of the Mobile Node itself. A collocated care-of
address represents the current position of the Mobile Node on the foreign network and can be
used by only one Mobile Node at a time.
When the Mobile Node hears a Foreign Agent advertisement and detects that it has moved
outside of its home network, it begins registration.
Registration
The Mobile Node is configured with the IP address and mobility security association (which
includes the shared key) of its Home Agent. In addition, the Mobile Node is configured with
either its home IP address, or another user identifier, such as a Network Access Identifier.
The Mobile Node uses this information along with the information that it learns from the
Foreign Agent advertisements to form a Mobile IP registration request. It adds the registration
request to its pending list and sends the registration request to its Home Agent either through
the Foreign Agent or directly if it is using a collocated care-of address and is not required to
register through the Foreign Agent. If the registration request is sent through the Foreign
Agent, the Foreign Agent checks the validity of the registration request, which includes
checking that the requested lifetime does not exceed its limitations, the requested tunnel
encapsulation is available, and that reverse tunnel is supported. If the registration request is
valid, the Foreign Agent adds the visiting Mobile Node to its pending list before relaying the
request to the Home Agent. If the registration request is not valid, the Foreign Agent sends a
registration reply with appropriate error code to the Mobile Node.
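The Foreign Agent's validity checks can be summarized in one function. The field names, limits and reply strings below are illustrative; real Mobile IP defines numeric error codes and message formats in the RFC.

```python
MAX_LIFETIME = 1800          # seconds this FA will grant (illustrative limit)
SUPPORTED_ENCAPS = {"ipip", "gre"}

def check_registration(request, reverse_tunnel_supported=True):
    """Foreign-agent validity checks from the text, as one function.

    Field names and reply strings are illustrative, not the RFC's
    actual numeric codes.
    """
    if request["lifetime"] > MAX_LIFETIME:
        return "error: requested lifetime exceeds limit"
    if request["encapsulation"] not in SUPPORTED_ENCAPS:
        return "error: requested tunnel encapsulation unavailable"
    if request["reverse_tunnel"] and not reverse_tunnel_supported:
        return "error: reverse tunnelling not supported"
    return "ok: added to pending list, relaying to home agent"

req = {"lifetime": 600, "encapsulation": "ipip", "reverse_tunnel": False}
print(check_registration(req))  # valid request is relayed to the HA
req["lifetime"] = 7200
print(check_registration(req))  # rejected with a registration reply
```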
Tunneling
The Mobile Node sends packets using its home IP address, effectively maintaining the
appearance that it is always on its home network. Even while the Mobile Node is roaming on
foreign networks, its movements are transparent to correspondent nodes.
Data packets addressed to the Mobile Node are routed to its home network, where the Home
Agent now intercepts and tunnels them to the care-of address toward the Mobile Node.
Tunneling has two primary functions: encapsulation of the data packet to reach the tunnel
endpoint, and decapsulation when the packet is delivered at that endpoint. The default tunnel
mode is IP Encapsulation within IP Encapsulation. Optionally, GRE and minimal encapsulation
within IP may be used.
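The default IP-in-IP mode can be illustrated by prepending a minimal outer IPv4 header whose protocol field is 4 (IP-in-IP) and whose source and destination are the Home Agent and the care-of address. This sketch omits the header checksum and options, so it shows the framing rather than producing a wire-valid packet.

```python
import struct, socket

def ip_in_ip_encapsulate(inner_packet, home_agent_ip, care_of_ip):
    """Prepend a minimal outer IPv4 header (protocol 4 = IP-in-IP)."""
    total_len = 20 + len(inner_packet)
    outer = struct.pack("!BBHHHBBH4s4s",
                        0x45, 0,            # version/IHL, TOS
                        total_len,          # total length
                        0, 0,               # identification, flags/fragment
                        64, 4,              # TTL, protocol 4 = IP-in-IP
                        0,                  # header checksum (omitted in sketch)
                        socket.inet_aton(home_agent_ip),
                        socket.inet_aton(care_of_ip))
    return outer + inner_packet

def decapsulate(packet):
    """Tunnel endpoint strips the outer header to recover the original."""
    return packet[20:]

inner = b"\x45\x00...original datagram addressed to the home address..."
tunnelled = ip_in_ip_encapsulate(inner, "198.51.100.1", "192.0.2.7")
assert decapsulate(tunnelled) == inner
assert tunnelled[9] == 4  # outer protocol field marks IP-in-IP
```

Encapsulation at the Home Agent and decapsulation at the care-of address are exactly the two tunnel functions named above; GRE and minimal encapsulation differ only in the wrapper format.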

Conclusion: Mobile IP is a communication protocol (created by extending Internet Protocol, IP)
that allows the users to move from one network to another with the same IP address. It
ensures that the communication will continue without the user’s sessions or connections being
dropped. 
Experiment No. 6
Aim: Study of OPNET tool for modeling and simulation of different cellular standards.
Theory: OPNET is an important network simulation tool: computer software used to simulate many kinds
of network communication. OPNET is built around a discrete-event simulation engine, which makes it
fast and scalable, and its simulation models are written in C or C++ code. It can be used to design
access-network transmission and inter-office network communications.

OPNET offers the following advantages:
Large user community.
Easy to use.
Provides a GUI (graphical user interface) and is easy to learn.
Can model an entire network with routers, switches, protocols and servers.
Used by major service providers.
OPNET Tools: OPNET provides the following tools:
Node Model Editor.
Process Model Editor.
Source Code Editing Environment.
Simulation Model.
Analysis Configuration Tool.
Network Model Editor.
Applications in OPNET: OPNET provides built-in models for the following applications:

Configure Web (HTTP) Application: To configure a web-browsing application, the application named
HTTP is selected from the built-in model list.
Configure E-Mail Application: OPNET provides preset configurations for e-mail (low load, medium
load and high load) and an email-size parameter that defines the email message size in bytes.
Configure FTP Application: OPNET provides FTP application presets for low, medium and high load.
The file-size parameter defines, in bytes, the size of the FTP file to transfer.
Configure Remote Login Application: This is similar to the email and FTP applications. During a
remote-login session, the Inter-Command Time and Terminal Traffic parameters are used.
OPNET Features:
Provides specification, simulation and network analysis at different levels of complexity.
Can design simple or sophisticated network systems.
Supports finite state machine models alongside analytical models.
OPNET Languages:
The simulation engine is programmed in C.
Initial configuration is done through the GUI and a set of XML files.
Model code is written in C or C++.
OPNET (Riverbed) Applications:
Application Management.
HD Analytics.
Monitoring.
Granular Transaction Capture.
Wireless sensor networks generally comprise a large number of sensor nodes deployed in an area of
interest to collect physical or environmental conditions, such as temperature, humidity, and pressure.
In wireless sensor networks, performance evaluation is critical to test the practicability of network
architectures and protocol algorithms, and provides guidelines for performance optimization. Among the
different candidates, simulation offers a cost-effective way. Recently, researchers have developed many
simulation models on different simulation platforms, such as OPNET, NS-2, TOSSIM, EmStar, OMNeT++,
J-Sim, ATEMU, and Avrora. Compared with other simulators, OPNET is more suitable for simulating the
behavior of networks in the real world. OPNET Modeler, as a network simulator, provides an
industry-leading network technology development environment [2]. It can be used to design and study
network modeling and simulation of applications, equipment, protocols and network communication, and
shows flexibility and intuition in designing practical systems.

Recently, Zigbee technology has been widely adopted to develop wireless sensor network applications [3]
by forming a wireless mesh network with low rate, low power consumption, and secure networking. In the
Zigbee protocol stack, the physical layer and the MAC layer protocols are defined by the IEEE 802.15.4
standard [4]. Its network layer, built upon both lower layers, should be designed to enable mesh
networking, support node joining or leaving, assign network addresses to devices, and perform routing.
The Zigbee Alliance is working at providing a standardized base set of solutions for sensor networks
[5]. In this paper, a network layer model is proposed for mobile sensor networks in order to accomplish
all the defined functions. The application layer aims at providing the services for an application
program, consisting of the application support sub-layer, the application framework, and the Zigbee
device object. Since this layer is related to specific applications and is not the main focus of this
paper, the design of the application layer is omitted here.

Simulation of Zigbee sensor networks within the OPNET simulator has been attracting interest from
researchers. There are many research works on simulation modelling and evaluation of sensor nodes in
OPNET [6, 7]. For example, Kucuk et al. [6] presented a detailed implementation methodology for their
proposed positioning algorithm, called M-SSLE. Shrestha et al. [7] proposed a simulation model for new
networking nodes equipped with multiple radio technologies. However, few works in the literature have
focused on the simulation model of mobile sensor networks. Device mobility is inevitable and must be
accommodated [8, 9], and the lack of support for simulating mobile Zigbee sensor networks is a major
limitation in this field of research, evaluation and development. In [10], the adequacy of current
provisions for dealing with different mobility cases was assessed. Simulation results demonstrated that
the current model in the OPNET standard libraries is ineffective in dealing with nodal mobility. Since
OPNET Modeler provides a comprehensive simulation environment for modeling distributed systems and
communication networks, many simulation studies for Zigbee sensor networks were performed in the OPNET
simulator [11, 12, 13, 14, 15, 16]. According to the performance studies using the Zigbee model within
the OPNET Modeler standard libraries (ZMOMSL), this model has several disadvantages. For example, its
address assignment mechanism may waste address space, the high communication overheads may reduce
network lifetime, and the network joining strategy may result in significant traffic collisions and
jams [17, 18]. Among all these disadvantages, the most critical issue is that the Zigbee model cannot
support the mobility of device nodes. This motivated us to develop a new simulation model based on the
OPNET simulator for mobile Zigbee sensor networks.

The main contributions of this paper are summarized as follows. 1) We adopt the OPNET simulation
development platform to design a mobile Zigbee sensor network simulation model compatible with Zigbee
protocols, where the physical layer and the MAC layer defined by IEEE 802.15.4 are employed. 2) We
provide a node-level design of mobile sensor nodes, and present a process-level model of its network
layer and the detailed implementation procedure of the key functions. 3) In order to further decrease
the communication overhead of nodes, an improved AODV routing algorithm is also proposed, which
demonstrates superior capability in supporting node mobility.
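Since contribution 3) builds on AODV, a rough sketch of plain AODV route discovery (not the paper's improved variant) may help: a route request (RREQ) floods outward from the source, each node remembering who it first heard the request from, and the route reply (RREP) follows those reverse pointers back.

```python
from collections import deque

def aodv_route_discovery(links, source, dest):
    """Flood a route request and read the route back from the reverse
    path, in the spirit of AODV. Sketches the basic protocol only.

    links: dict mapping each node to the neighbours in radio range.
    """
    parent = {source: None}         # reverse pointers set by the RREQ flood
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == dest:            # destination answers with a route reply
            route, hop = [], dest
            while hop is not None:
                route.append(hop)
                hop = parent[hop]
            return route[::-1]      # the RREP travels the reverse path back
        for neighbour in links[node]:
            if neighbour not in parent:   # each node rebroadcasts only once
                parent[neighbour] = node
                queue.append(neighbour)
    return None                     # no route: destination unreachable

# Small mesh: coordinator C, routers R1/R2, end device E.
links = {"C": ["R1", "R2"], "R1": ["C", "R2", "E"],
         "R2": ["C", "R1"], "E": ["R1"]}
print(aodv_route_discovery(links, "C", "E"))  # → ['C', 'R1', 'E']
```

The flooding cost of the RREQ broadcast is exactly the communication overhead that an improved variant would try to reduce, for example by limiting which nodes rebroadcast.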
The rest of this paper is organized as follows. In section 2, we discuss the design of the network
process model in detail. In section 3, we propose a new simulation model which enables mobile support
for Zigbee devices. Section 4 presents our simulation results and demonstrates an experimental
comparison between our proposed model and ZMOMSL. Section 5 draws conclusions.
2. The design of simulation system model
2.1. Design of node model70
As shown in Fig. 1, a Zigbee node model within OPNET Modeler typically incorporates the
physical layer, the MAC layer, the network layer and the
application layer. The physical layer comprises a transmitter module, a receiver module, and a
wireless pipeline model. The wireless pipeline model can be
configured to build a real radio environment. In the MAC layer, Carrier Sense
Multiple Access with Collision Avoidance (CSMA/CA) protocol is used. For
the network layer, following services are provided: forming a network, nodes
joining and leaving a network, network address assignment, neighbor discovery,
and route maintenance discovery. The application layer is responsible for producing and
processing sensing data. In the rest of the paper, we will focus on
the design of the network layer model for mobile Zigbee sensor networks.
2.2. The design of network layer model
Three types of devices are defined in the Zigbee standard framework: coordinator, router, and
end device. Coordinator is responsible for forming a new
network, storing the key parameters of the network and connecting to other85
networks. There is always a single coordinator in a Zigbee network. In Zigbee-
based WSNs, sink node typically plays the role of network coordinator. Router
has the routing capability. Specifically, it could allow other devices to join the
network as its child nodes, and route data packets. End device has no routing

Conclusion: OPNET is an important simulation tool. It is computer software used to simulate
various kinds of network communication, built around a discrete-event simulation engine that
provides fast and scalable solutions. Simulation models are written in C/C++ code. OPNET
projects can be used to design access-network transmission and inter-office network
communications.

Experiment No. 7
Aim: Study of TCP/IP Protocol.

Theory: TCP/IP stands for Transmission Control Protocol/Internet Protocol and is a suite of
communication protocols used to interconnect network devices on the internet. TCP/IP is also
used as a communications protocol in a private computer network (an intranet or extranet).

The entire IP suite -- a set of rules and procedures -- is commonly referred to as TCP/IP. TCP and IP are
the two main protocols, though others are included in the suite. The TCP/IP protocol suite functions as
an abstraction layer between internet applications and the routing and switching fabric.

TCP/IP specifies how data is exchanged over the internet by providing end-to-end communications that
identify how it should be broken into packets, addressed, transmitted, routed and received at the
destination. TCP/IP requires little central management and is designed to make networks reliable with
the ability to recover automatically from the failure of any device on the network.

The two main protocols in the IP suite serve specific functions. TCP defines how applications can create
channels of communication across a network. It also manages how a message is assembled into smaller
packets before they are then transmitted over the internet and reassembled in the right order at the
destination address.
IP defines how to address and route each packet to make sure it reaches the right destination. Each
gateway computer on the network checks this IP address to determine where to forward the message.

A subnet mask tells a computer, or other network device, what portion of the IP address is used to
represent the network and what part is used to represent hosts, or other computers, on the network.
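This split can be illustrated with Python's standard ipaddress module (the address 192.168.1.42/24 below is an arbitrary example):

```python
import ipaddress

# A /24 mask means the first 24 bits identify the network,
# leaving the remaining 8 bits to identify hosts on it.
iface = ipaddress.ip_interface("192.168.1.42/24")

print(iface.network)   # network portion: 192.168.1.0/24
print(iface.netmask)   # subnet mask:     255.255.255.0

# Masking out the network bits leaves the host number.
host_bits = int(iface.ip) & ~int(iface.netmask) & 0xFFFFFFFF
print(host_bits)       # 42
```

With a /24 mask, the first three octets name the network and the last octet names the host.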

Network address translation (NAT) is the virtualization of IP addresses. NAT helps improve security and
decrease the number of IP addresses an organization needs.

Common TCP/IP protocols include the following:

Hypertext Transfer Protocol (HTTP) handles the communication between a web server and a web
browser.

HTTP Secure handles secure communication between a web server and a web browser.

File Transfer Protocol handles transmission of files between computers.
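As a rough sketch of the request/response exchange that HTTP performs on top of TCP/IP, the following uses only Python's standard library; the handler and message body are invented for illustration:

```python
# Minimal HTTP exchange over TCP/IP: a toy server in a background
# thread, and a client issuing a GET request against it.
import threading
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from the web server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)   # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")            # the browser's side of the exchange
resp = conn.getresponse()
status, body = resp.status, resp.read()
print(status, body)                 # 200 b'hello from the web server'
server.shutdown()
```

The same request/response pattern underlies HTTPS and FTP; only the protocol on top of TCP changes.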

How does TCP/IP work?

TCP/IP uses the client-server model of communication in which a user or machine (a client) is provided a
service, like sending a webpage, by another computer (a server) in the network.

Collectively, the TCP/IP suite of protocols is classified as stateless, which means each client request is
considered new because it is unrelated to previous requests. Being stateless frees up network paths so
they can be used continuously.

The transport layer itself, however, is stateful. It transmits a single message, and its connection remains
in place until all the packets in a message have been received and reassembled at the destination.

The TCP/IP model differs slightly from the seven-layer Open Systems Interconnection (OSI) networking
model designed after it. The OSI reference model defines how applications can communicate over a
network.

Why is TCP/IP important?


TCP/IP is nonproprietary and, as a result, is not controlled by any single company. Therefore, the IP suite
can be modified easily. It is compatible with all operating systems (OSes), so it can communicate with
any other system. The IP suite is also compatible with all types of computer hardware and networks.

TCP/IP is highly scalable and, as a routable protocol, can determine the most efficient path through the
network. It is widely used in current internet architecture.

The 4 layers of the TCP/IP model

TCP/IP functionality is divided into four layers, each of which includes specific protocols:

The application layer provides applications with standardized data exchange. Its protocols include HTTP,
FTP, Post Office Protocol 3, Simple Mail Transfer Protocol and Simple Network Management Protocol. At
the application layer, the payload is the actual application data.

The transport layer is responsible for maintaining end-to-end communications across the network. TCP
handles communications between hosts and provides flow control, multiplexing and reliability. The
transport protocols include TCP and User Datagram Protocol, which is sometimes used instead of TCP
for special purposes.
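The difference is visible in the standard socket API: the sketch below sends one UDP datagram on localhost, with no connection setup and no delivery guarantee (loopback happens to deliver it reliably):

```python
# A minimal UDP exchange: unlike TCP, no connection is established
# and delivery/ordering are not guaranteed by the protocol.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))        # port 0 = any free port
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", addr)         # one datagram, fire-and-forget

data, peer = server.recvfrom(1024)
print(data)                          # b'ping'
server.close(); client.close()
```

A TCP version of the same exchange would first call connect()/accept() to set up a reliable byte stream.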

The network layer, also called the internet layer, deals with packets and connects independent networks
to transport the packets across network boundaries. The network layer protocols are IP and Internet
Control Message Protocol, which is used for error reporting.

The physical layer, also known as the network interface layer or data link layer, consists of protocols that
operate only on a link -- the network component that interconnects nodes or hosts in the network. The
protocols in this lowest layer include Ethernet for local area networks and Address Resolution Protocol.

Uses of TCP/IP

TCP/IP can be used to provide remote login over the network, for interactive file transfer, to deliver
email, to deliver webpages over the network and to remotely access a server host's file system. Most
broadly, it is used to represent how information changes form as it travels over a network from the
concrete physical layer to the abstract application layer. It details the basic protocols, or methods of
communication, at each layer as information passes through.

Pros and cons of TCP/IP

The advantages of using the TCP/IP model include the following:

helps establish a connection between different types of computers;


works independently of the OS;

supports many routing protocols;

uses client-server architecture that is highly scalable;

can be operated independently; and

is lightweight and doesn't place unnecessary strain on a network or computer.

The disadvantages of TCP/IP include the following:

is complicated to set up and manage;

transport layer does not guarantee delivery of packets;

is not easy to replace protocols in TCP/IP;

does not clearly separate the concepts of services, interfaces and protocols, so it is not suitable for
describing new technologies in new networks; and

is especially vulnerable to a synchronization attack, which is a type of denial-of-service attack in which a
bad actor uses TCP/IP.

How are TCP/IP and IP different?

There are numerous differences between TCP/IP and IP. For example, IP is a low-level internet protocol
that facilitates data communications over the internet. Its purpose is to deliver packets of data that
consist of a header, which contains routing information, such as source and destination of the data, and
the data payload itself.

IP is limited by the amount of data that it can send. The header of a single IP packet is between 20 and
24 bytes long, while the maximum size of the whole packet, including both the header and the data, is
65,535 bytes. This means that longer streams of data must be broken into multiple data packets that are
independently sent and then reorganized into the correct order after they arrive.
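That split-and-reorder idea can be sketched in a few lines; this is a simplified illustration with an invented 8-byte fragment size, not the actual IP fragmentation algorithm:

```python
MTU = 8  # toy fragment size in bytes; real links use e.g. 1500

def fragment(data: bytes, mtu: int = MTU):
    # Tag each fragment with its byte offset so the receiver can
    # restore the order even if fragments arrive shuffled.
    return [(i, data[i:i + mtu]) for i in range(0, len(data), mtu)]

def reassemble(fragments):
    # Sort by offset, then concatenate the payloads.
    return b"".join(chunk for _, chunk in sorted(fragments))

original = b"a longer string of data to send"
frags = fragment(original)
restored = reassemble(reversed(frags))   # fragments arrive out of order
print(restored == original)              # True
```

In real IP, the offsets and a "more fragments" flag live in each packet header, and TCP adds the retransmission of any fragment that never arrives.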

Since IP is strictly a data send/receive protocol, there is no built-in checking that verifies whether the
data packets sent were actually received.

In contrast to IP, TCP/IP is a higher-level smart communications protocol that can do more things. TCP/IP
still uses IP as a means of transporting data packets, but it also connects computers, applications,
webpages and web servers. TCP understands holistically the entire streams of data that these assets
require in order to operate, and it makes sure the entire volume of data needed is sent the first time.
TCP also runs checks that ensure the data is delivered.

As it does its work, TCP can also control the size and flow rate of data. It ensures that networks are free
of any congestion that could block the receipt of data.

An example is an application that wants to send a large amount of data over the internet. If the
application only used IP, the data would have to be broken into multiple IP packets. This would require
multiple requests to send and receive data, since IP requests are issued per packet.

With TCP, only a single request to send an entire data stream is needed; TCP handles the rest. Unlike IP,
TCP can detect problems that arise in IP and request retransmission of any data packets that were lost.
TCP can also reorganize packets so they get transmitted in the proper order -- and it can minimize
network congestion. TCP/IP makes data transfers over the internet easier.

TCP/IP model vs. OSI model

TCP/IP and OSI are the most widely used communication networking protocols. The main difference is
that OSI is a conceptual model that is not practically used for communication. Rather, it defines how
applications can communicate over a network. TCP/IP, on the other hand, is widely used to establish
links and network interaction.

The TCP/IP protocols lay out standards on which the internet was created, while the OSI model provides
guidelines on how communication has to be done. Therefore, TCP/IP is a more practical model.

The TCP/IP and OSI models have similarities and differences. The main similarity is in the way they are
constructed as both use layers, although TCP/IP consists of just four layers, while the OSI model consists
of the following seven layers:

Layer 7, the application layer, enables the user -- software or human -- to interact with the application or
network when the user wants to read messages, transfer files or engage in other network-related
activities.

Layer 6, the presentation layer, translates or formats data for the application layer based on the
semantics or syntax that the app accepts.

Layer 5, the session layer, sets up, coordinates and terminates conversations between apps.

Layer 4, the transport layer, handles transferring data across a network and providing error-checking
mechanisms and data flow controls.
Layer 3, the network layer, moves data into and through other networks.

Layer 2, the data link layer, handles problems that occur as a result of bit transmission errors.

Layer 1, the physical layer, transports data using electrical, mechanical or procedural interfaces.

Conclusion: TCP/IP stands for Transmission Control Protocol/Internet Protocol and is a suite of
communication protocols used to interconnect network devices on the internet. TCP/IP is also
used as a communications protocol in a private computer network (an intranet or extranet).
Experiment No 8
Aim: Study and analysis of HYPERLAN Protocol architecture.
Theory: High Performance LAN (HIPERLAN)
Contributed by Torben Rune, Netplan.
HIPERLAN is a European (ETSI) standardization initiative for a High Performance wireless Local
Area Network. Radio waves are used instead of a cable as a transmission medium to connect
stations. Either, the radio transceiver is mounted to the movable station as an add-on and no
base station has to be installed separately, or a base station is needed in addition per room.
The stations may be moved during operation pauses or may even be mobile. The maximum
data rate for the user depends on the distance between the communicating stations. At short
distances (<50 m) with asynchronous transmission, a data rate of 20 Mbit/s is achieved; at
distances of up to 800 m, a data rate of 1 Mbit/s is provided. For connection-oriented services,
e.g. video telephony, at least 64 kbit/s is offered.
HIPERLAN
HIPERLAN is a European family of standards on digital high speed wireless communication in
the 5.15-5.3 GHz and the 17.1-17.3 GHz spectrum developed by ETSI. The committee
responsible for HIPERLAN is RES-10 which has been working on the standard since November
1991.
The standard serves to ensure the possible interoperability of different manufacturers'
wireless communications equipment that operate in this spectrum. The HIPERLAN standard
only describes a common air interface including the physical layer for wireless
communications equipment, while leaving decisions on higher level configurations and
functions open to the equipment manufacturers.
The frequencies allocated to HIPERLAN lie within the 5-5.30 GHz band, which had been
allocated globally to aviation purposes. The aviation industry only used the 5-5.15 GHz
range, making the 5.15-5.30 GHz band available to the HIPERLAN standards.
HIPERLAN is designed to work without any infrastructure. Two stations may exchange data
directly, without any interaction from a wired (or radio-based) infrastructure. The simplest
HIPERLAN thus consists of two stations. Further, if two HIPERLAN stations are not in radio
contact with each other, they may use a third station (i.e. the third station must relay
messages between the two communicating stations).
Products compliant to the HIPERLAN 5 GHz standard shall be possible to implement on a
PCMCIA Type III card. Thus the standard will enable users to truly take computing power on
the road.
The HIPERLAN standard has been developed in parallel with the development of the
SUPERNET standard in the United States.
HIPERLAN requirements
 Short range - 50m
 Low mobility - 1.4m/s
 Networks with and without infrastructure
 Support isochronous traffic
 audio 32 kbit/s, 10 ms latency
 video 2 Mbit/s, 100 ms latency
 Support asynchronous traffic
 data 10 Mbit/s, immediate access
Quality of service
Performance is one of the most important factors when dealing with wireless LANs. In
contrast to other radio-based systems, data traffic on a local area network has a randomized
bursty nature, which may cause serious problems with respect to throughput.
Many factors have to be taken into consideration, when quality of service is to be measured.
Among these are:
 The topography of the landscape in general
 Elevations in the landscape that might cause shadows, where connectivity is unstable
or impossible.
 Environments with many signal-reflection surfaces
 Environments with many signal-absorbing surfaces
 Quality of the wireless equipment
 Placement of the wireless equipment
 Number of stations
 Proximity to installations that generate electronic noise
 and many more
The sheer number of factors to take into consideration means that the physical environment
will always be a factor when trying to assess the usefulness of a wireless technology like
HIPERLAN.
Simulations show that the HIPERLAN MAC can simultaneously support
 25 audio links at 32kbit/s, 10ms delivery
 25 audio links at 16kbit/s, 20ms delivery
 1 video link at 2Mbit/s, 100ms delivery
 Asynch file transfer at 13.4Mbit/s
Benchmarking HIPERLAN in practice
Once a new HIPERLAN installation is implemented, trying to benchmark it can easily become
a mind-boggling task.
Even though a spectrum analyzer can be used for initial evaluation and troubleshooting, the
factors influencing performance are so many and so complex, that initial benchmarking
should be based evenly on perceived performance and registered performance over a longer
period of time.
In contrast to cable based LANs, the testing equipment has to find the communication stream
in the air not on a physical cable and it has to monitor several frequencies at once. On top of
that, the testing equipment itself can interfere with the signals it intends to monitor.
New HIPERLAN standards ahead
A second set of standards has been developed for a new version of HIPERLAN: HIPERLAN2.
The idea of HIPERLAN2 is to be compatible with ATM.
Work is also under way to establish global sharing rules. The WINForum for NII/SUPERNET
in the US aims to support HIPERLAN 1 and HIPERLAN 2. This effort involves interaction
between ETSI RES10, WINForum and the ATM Forum.

Figure 2 - The future of HIPERLAN


HIPERLAN-related projects
HIPERION aims to create and demonstrate a European capability for high-performance radio
networking for portable computers.
One of the reasons for the establishment of HIPERION is in their own words "Portable
computers will play an ever increasingly important role in business, education and leisure
environments in coming years. Communications technologies which permit access to
information and services as well as those which enable collaboration and sharing will be
fundamental technologies for tomorrow's information based economies."
Standard Documents
ETS 300 836-(1-4) (ed.1)
 Radio Equipment and Systems (RES);
High Performance Radio Local Area Network (HIPERLAN);
Type 1 Conformance Testing Specification;
Part 1: Radio Type Approval and Radio Frequency (RF) - Conformance Test
Specification
 Part 2: Protocol Implementation Conformance Statement (PICS) - performance
specification
 Part 3: Test Suite Structure and Test Purposes (TSS&TP) specification
 Part 4: Abstract Test Suite (ATS) specification
 RES10
 sPE: 14-02-1997
ETS 300 652 (Ed.1)
Radio Equipment and Systems (RES); High Performance Radio Local Area Network (HIPERLAN)
Type 1; Functional specification
CEPT Recommendation T/R 22-06 permits the operation of high speed radio local area
networks in the 5,15 to 5,30 GHz and 17,1 to 17,3 GHz frequency bands. These types of radio
networks are referred to as High Performance Radio Local Area Networks (HIPERLANs).
This ETS specifies the technical characteristics of HIPERLAN Type 1 that operates in the 5,15 to
5,3 GHz frequency band and that uses Non-Preemptive Priority Multiple Access (NPMA) as
the channel access method.
HIPERLAN Type 1 is confined to the lowest two layers of the Open Systems Interconnection
(OSI) model: the Physical Layer and the Medium Access Control (MAC) part of the Data Link
Layer.
Experiment No. 9

Aim: Study of different security threats.

Pre-requisite (Software): No software or hardware needed.

Procedure: Cybersecurity threats are acts performed by individuals with harmful intent, whose goal
is to steal data, cause damage to or disrupt computing systems. Common categories of cyber threats
include malware, social engineering, man in the middle (MitM) attacks, denial of service (DoS),
and injection attacks—we describe each of these categories in more detail below.
Cyber threats can originate from a variety of sources, from hostile nation states and terrorist
groups, to individual hackers, to trusted individuals like employees or contractors, who abuse
their privileges to perform malicious acts.

Common Sources of Cyber Threats

Here are several common sources of cyber threats against organizations:

● Nation states—hostile countries can launch cyber attacks against local companies and
institutions, aiming to interfere with communications, cause disorder, and inflict damage.
● Terrorist organizations—terrorists conduct cyber attacks aimed at destroying or abusing
critical infrastructure, threaten national security, disrupt economies, and cause bodily
harm to citizens.
● Criminal groups—organized groups of hackers aim to break into computing systems for
economic benefit. These groups use phishing, spam, spyware and malware for extortion,
theft of private information, and online scams.
● Hackers—individual hackers target organizations using a variety of attack techniques.
They are usually motivated by personal gain, revenge, financial gain, or political activity.
Hackers often develop new threats, to advance their criminal ability and improve their
personal standing in the hacker community.
● Malicious insiders—an employee who has legitimate access to company assets, and abuses
their privileges to steal information or damage computing systems for economic or
personal gain. Insiders may be employees, contractors, suppliers, or partners of the target
organization. They can also be outsiders who have compromised a privileged account and
are impersonating its owner.

Types of Cybersecurity Threats

Malware Attacks

Malware is an abbreviation of “malicious software”, which includes viruses, worms, trojans, spyware,
and ransomware, and is the most common type of cyberattack. Malware infiltrates a system, usually
via a link on an untrusted website or email or an unwanted software download. It deploys on the target
system, collects sensitive data, manipulates and blocks access to network components, and may
destroy data or shut down the system altogether.

Here are some of the main types of malware attacks:


● Viruses—a piece of code injects itself into an application. When the application runs, the
malicious code executes.
● Worms—malware that exploits software vulnerabilities and backdoors to gain access to an
operating system. Once installed in the network, the worm can carry out attacks such as
distributed denial of service (DDoS).
● Trojans—malicious code or software that poses as an innocent program, hiding in apps, games
or email attachments. An unsuspecting user downloads the trojan, allowing it to gain control of
their device.
● Ransomware—a user or organization is denied access to their own systems or data via encryption.
The attacker typically demands a ransom be paid in exchange for a decryption key to restore
access, but there is no guarantee that paying the ransom will actually restore full access or
functionality.
● Cryptojacking—attackers deploy software on a victim’s device, and begin using their computing
resources to generate cryptocurrency, without their knowledge. Affected systems can become
slow and cryptojacking kits can affect system stability.
● Spyware—a malicious actor gains access to an unsuspecting user’s data, including sensitive
information such as passwords and payment details. Spyware can affect desktop browsers,
mobile phones and desktop applications.
● Adware—a user’s browsing activity is tracked to determine behavior patterns and interests,
allowing advertisers to send the user targeted advertising. Adware is related to spyware but does
not involve installing software on the user’s device and is not necessarily used for malicious
purposes, but it can be used without the user’s consent and compromise their privacy.
● Fileless malware—no software is installed on the operating system. Native files like WMI and
PowerShell are edited to enable malicious functions. This stealthy form of attack is difficult
to detect (antivirus can’t identify it), because the compromised files are recognized as
legitimate.
● Rootkits—software is injected into applications, firmware, operating system kernels or
hypervisors, providing remote administrative access to a computer. The attacker can start the
operating system within a compromised environment, gain complete control of the computer and
deliver additional malware.
Social Engineering Attacks

Social engineering involves tricking users into providing an entry point for malware. The victim provides
sensitive information or unwittingly installs malware on their device, because the attacker poses as a
legitimate actor.

Here are some of the main types of social engineering attacks:

● Baiting—the attacker lures a user into a social engineering trap, usually with a promise of
something attractive like a free gift card. The victim provides sensitive information such
as credentials to the attacker.
● Pretexting—similar to baiting, the attacker pressures the target into giving up information under
false pretenses. This typically involves impersonating someone with authority, for example an
IRS or police officer, whose position will compel the victim to comply.
● Phishing—the attacker sends emails pretending to come from a trusted source. Phishing often
involves sending fraudulent emails to as many users as possible, but can also be more targeted.
For example, “spear phishing” personalizes the email to target a specific user, while “whaling”
takes this a step further by targeting high-value individuals such as CEOs.
● Vishing (voice phishing)—the imposter uses the phone to trick the target into disclosing
sensitive data or granting access to the target system. Vishing typically targets older individuals
but can be employed against anyone.
● Smishing (SMS phishing)—the attacker uses text messages as the means of deceiving the victim.
● Piggybacking—an authorized user provides physical access to another individual who “piggybacks”
off the user’s credentials. For example, an employee may grant access to someone posing as a
new employee who misplaced their credential card.
● Tailgating—an unauthorized individual follows an authorized user into a location, for example by
quickly slipping in through a protected door after the authorized user has opened it. This
technique is similar to piggybacking except that the person being tailgated is unaware that they
are being used by another individual.
Supply Chain Attacks

Supply chain attacks are a newer type of threat to software developers and vendors. Their purpose is to
infect legitimate applications and distribute malware via source code, build processes or software
update mechanisms.

Attackers look for non-secure network protocols, server infrastructure, and coding techniques, and
use them to compromise build and update processes, modify source code and hide malicious content.

Supply chain attacks are especially severe because the applications being compromised by attackers are
signed and certified by trusted vendors. In a software supply chain attack, the software vendor is not
aware that its applications or updates are infected with malware. Malicious code runs with the same trust
and privileges as the compromised application.

Types of supply chain attacks include:

● Compromise of build tools or development pipelines


● Compromise of code signing procedures or developer accounts
● Malicious code sent as automated updates to hardware or firmware components
● Malicious code pre-installed on physical devices
Man-in-the-Middle Attack

A Man-in-the-Middle (MitM) attack involves intercepting the communication between two endpoints, such
as a user and an application. The attacker can eavesdrop on the communication, steal sensitive data, and
impersonate each party participating in the communication.

Examples of MitM attacks include:

● Wi-Fi eavesdropping—an attacker sets up a Wi-Fi connection, posing as a legitimate actor, such
as a business, that users may connect to. The fraudulent Wi-Fi allows the attacker to monitor the
activity of connected users and intercept data such as payment card details and login credentials.
● Email hijacking—an attacker spoofs the email address of a legitimate organization, such as a
bank, and uses it to trick users into giving up sensitive information or transferring money to the
attacker. The user follows instructions they think come from the bank but are actually from the
attacker.
● DNS spoofing—a Domain Name Server (DNS) is spoofed, directing a user to a malicious website
posing as a legitimate site. The attacker may divert traffic from the legitimate site or steal the
user’s credentials.
● IP spoofing—an internet protocol (IP) address connects users to a specific website. An attacker
can spoof an IP address to pose as a website and deceive users into thinking they are interacting
with that website.
● HTTPS spoofing—HTTPS is generally considered the more secure version of HTTP, but can also
be used to trick the browser into thinking that a malicious website is safe. The attacker uses
“HTTPS” in the URL to conceal the malicious nature of the website.
Denial-of-Service Attacks

A Denial-of-Service (DoS) attack overloads the target system with a large volume of traffic, hindering the
ability of the system to function normally. An attack involving multiple devices is known as a distributed
denial-of-service (DDoS) attack.

DoS attack techniques include:

● HTTP flood DDoS—the attacker uses HTTP requests that appear legitimate to overwhelm an
application or web server. This technique does not require high bandwidth or malformed packets,
and typically tries to force a target system to allocate as many resources as possible for each
request.
● SYN flood DDoS—initiating a Transmission Control Protocol (TCP) connection sequence involves
sending a SYN request that the host must respond to with a SYN-ACK that acknowledges the
request, and then the requester must respond with an ACK. Attackers can exploit this sequence,
tying up server resources, by sending SYN requests but not responding to the SYN-ACKs from the
host.
● UDP flood DDoS—a remote host is flooded with User Datagram Protocol (UDP) packets sent to
random ports. This technique forces the host to search for applications on the affected ports
and respond with “Destination Unreachable” packets, which uses up the host resources.
● ICMP flood—a barrage of ICMP Echo Request packets overwhelms the target, consuming both
inbound and outgoing bandwidth. The servers may try to respond to each request with an
ICMP Echo Reply packet, but cannot keep up with the rate of requests, so the system slows
down.
● NTP amplification—Network Time Protocol (NTP) servers are accessible to the public and can be
exploited by an attacker to send large volumes of UDP traffic to a targeted server. This is
considered an amplification attack due to the query-to-response ratio of 1:20 to 1:200, which
allows an attacker to exploit open NTP servers to execute high-volume, high-bandwidth DDoS
attacks.
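The three-way handshake that a SYN flood abuses is normally completed by the operating system when a client calls connect(); here is a minimal localhost sketch using Python's standard socket module (addresses and payload are arbitrary):

```python
# By the time connect() returns, the SYN / SYN-ACK / ACK exchange has
# completed and both ends hold an established TCP connection.
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # port 0 = any free port
listener.listen()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())   # three-way handshake happens here

conn, _ = listener.accept()              # server side of the connection
client.sendall(b"hello")
received = conn.recv(1024)
print(received)                          # b'hello'
conn.close(); client.close(); listener.close()
```

A SYN flood starts many of these handshakes but never sends the final ACK, leaving half-open entries that exhaust the listener's backlog.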
Injection Attacks

Injection attacks exploit a variety of vulnerabilities to directly insert malicious input into the code of a
web application. Successful attacks may expose sensitive information, execute a DoS attack or
compromise the entire system.

Here are some of the main vectors for injection attacks:

● SQL injection—an attacker enters an SQL query into an end user input channel, such as a web form
or comment field. A vulnerable application will send the attacker’s data to the database, and will
execute any SQL commands that have been injected into the query. Most web applications use
databases based on Structured Query Language (SQL), making them vulnerable to SQL injection. A
new variant on this attack is NoSQL attacks, targeted against databases that do not use a relational
data structure.
● Code injection—an attacker can inject code into an application if it is vulnerable. The web
server executes the malicious code as if it were part of the application.
● OS command injection—an attacker can exploit a command injection vulnerability to input
commands for the operating system to execute. This allows the attack to exfiltrate OS data or
take over the system.
● LDAP injection—an attacker inputs characters to alter Lightweight Directory Access Protocol
(LDAP) queries. A system is vulnerable if it uses unsanitized LDAP queries. These attacks are very
severe because LDAP servers may store user accounts and credentials for an entire organization.
● XML External Entity (XXE) injection—an attack is carried out using specially-constructed XML
documents. This differs from other attack vectors because it exploits inherent vulnerabilities in
legacy XML parsers rather than unvalidated user inputs. XML documents can be used to traverse
paths, execute code remotely and execute server-side request forgery (SSRF).
● Cross-Site Scripting (XSS)—an attacker inputs a string of text containing malicious JavaScript. The
target’s browser executes the code, enabling the attacker to redirect users to a malicious
website or steal session cookies to hijack a user’s session. An application is vulnerable to
XSS if it doesn’t sanitize user inputs to remove JavaScript code.
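The SQL injection vector described above can be demonstrated with a minimal sketch using Python's built-in sqlite3 module (the table and credentials are hypothetical, purely for illustration):

```python
import sqlite3

# Hypothetical users table, purely for demonstration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious_input = "' OR '1'='1"  # classic injection payload

# VULNERABLE: user input concatenated directly into the SQL string.
# The payload closes the quoted value and appends an always-true clause.
query = f"SELECT * FROM users WHERE name = '{malicious_input}'"
vulnerable_rows = db.execute(query).fetchall()
print("Concatenated query returned:", vulnerable_rows)   # every row leaks

# SAFE: parameterized query -- the driver treats input as data, not SQL.
safe_rows = db.execute(
    "SELECT * FROM users WHERE name = ?", (malicious_input,)
).fetchall()
print("Parameterized query returned:", safe_rows)        # no rows match
```

The concatenated query returns every row in the table, while the parameterized version correctly returns nothing, since no user is literally named `' OR '1'='1`.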

Conclusion: Organizations are under pressure to react quickly to a rapidly increasing number of
cybersecurity threats. Since attackers follow an attack life cycle, organizations have been forced
to adopt a vulnerability management life cycle of their own. The vulnerability management life
cycle is designed to counter the attackers' efforts in the quickest and most effective way. This
chapter has discussed the vulnerability management life cycle in terms of the vulnerability
management strategy. It has gone through the steps of asset inventory creation, the management
of information flow, the assessment of risks, the assessment of vulnerabilities, reporting and
remediation, and ...
Experiment No. 10
Aim: - Implementation of Code Division Multiple Access.
Introduction: - CDMA refers to a multiple access method in which the individual
terminals use spread-spectrum techniques and occupy the entire spectrum whenever
they transmit. This feature makes CDMA different from frequency division multiple
access (FDMA) and from time division multiple access (TDMA). In FDMA each user is
given a small portion of the total available spectrum, and in TDMA each user is
allowed full use of the available spectrum, but only during certain periods in time.

Code Division Multiple Access is a modulation and multiple access scheme based on
spread-spectrum communication technology. It is a well-established technology that was
applied to digital cellular radio and wireless communication systems in the early
1990s. Capacity concerns in major markets and the industry's need for efficient and
economical wireless communication were the most significant drivers for the
development of the CDMA cellular technology.

CDMA is a method in which users share time and frequency allocations, and are
channelized by unique assigned codes. The signals of different users are separated at
the receiver by using a correlator that captures signal energy only from the desired
user or channel. Undesired signals contribute only to noise and interference. Figure 1
illustrates the principle of the CDMA technique.

The development of the CDMA technique dates back to the early 1950s when different
studies of the spread-spectrum technologies were started. The first era in CDMA
history consisted of introducing basic ideas of the CDMA by Claude Shannon and
Robert Pierce in 1949. In 1950 De-Rosa-Rogoff defined the direct-sequence spread-
spectrum method, the processing gain equation, and a noise multiplexing idea. Price
and Green filed the RAKE receiver patent in 1956. In 1961 Manuski defined the near-
far problem crucial for CDMA systems. During the 1970s several military and
navigation applications were developed. [1]

The second CDMA era introduced studies focusing on narrowband systems. In 1978
Cooper and Nettleton suggested a cellular spread-spectrum application. During the
1980s communication company Qualcomm investigated narrowband CDMA
techniques for cellular applications, and the result was that in 1993 the
CDMA IS-95 standard was developed [1]. Compared to third-generation CDMA
systems, IS-95 can be considered a narrowband CDMA system with a 1.2288 Mchip/s
carrier chip rate. Third-generation wideband CDMA systems, such as CDMA IS-2000
and European WCDMA, use higher chip rates than CDMA IS-95.

SPREAD-SPECTRUM TECHNOLOGY: - Originally, spread-spectrum technology was developed
for military and navigation purposes because it has some interesting characteristics that
provide secure means of communication in hostile environments [2]. First of all,
spread-spectrum signals have LPI (Low Probability of Interception) properties and cannot
be easily detected by enemy communication equipment due to their low power spectral
density, which is even lower than that of background noise. Secondly, spread-spectrum
signals have efficient AJ (Anti-Jamming) properties to combat intentional interference
trying to sabotage communication systems [3]. Nowadays, spread-spectrum technology has
also proven to be feasible for commercial applications, especially for mobile communication
systems. It provides an efficient multiple access method for a number of independent users
sharing a common communication channel without external synchronization methods. CDMA is
probably the most interesting multiple access method provided by spread-spectrum technology.

The fundamental idea of spread-spectrum communication is to spread a certain information
bandwidth, Bi, over a wider transmission bandwidth, Bt. The transmission bandwidth must be
wider than the information bandwidth. The rate of the pseudo-random code sequence relative
to the user information rate can be on the order of tens or hundreds for commercial
systems and on the order of thousands for military systems. Spread-spectrum
communication cannot be called an efficient means of utilizing bandwidth, because it
needs a lot of bandwidth to be effective. On the other hand, the wider transmitted bandwidth
gives the transmitted signal such a low power spectral density that it looks like
background noise in the front end of a receiver [3]. Besides LPI and AJ capabilities,
spread-spectrum communication systems offer further advantages such as multiple access,
efficient privacy, and interference rejection.

There are three basic spread-spectrum techniques: direct sequence (DS), frequency
hopping (FH), and time hopping (TH). A variety of hybrid techniques also use different
combinations of these basic techniques. With direct-sequence spreading, the original signal is
multiplied by a known signal of much larger bandwidth. With frequency-hopped spreading, the
center frequency of the transmitted signal is varied in a pseudorandom pattern.
A. Processing Gain
The spreading operation is carried out by combining a bit stream of information with an
independent pseudo-random code sequence through simple multiplication. One of the
main parameters of a spread-spectrum communication system is the processing gain,
Gp. It is the ratio of the transmitted bandwidth, Bt, to the information bandwidth, Bi, as
presented in equation 1. [1]

Gp = Bt / Bi   (1)
Gp is also called the spreading factor. The processing gain or spreading factor determines the
maximum number of simultaneous users or connections allowed in a communication system.
It also determines the level of protection against multipath interference signals and the signal
detection capabilities of a spread-spectrum communication system. In multipath situations
the receiver observes spread-spectrum signals summed with narrowband interference. The
processing gain determines the power ratio of the desired signal and the interference after
de-spreading; higher desired signal power leads to easier detection. It can be seen that low
data rates, such as speech, have a high processing gain compared to high data rates.

DIRECT-SEQUENCE CDMA: - As mentioned, CDMA is a spread-spectrum multiple access method.
Spread spectrum is a transmission method in which the signal occupies a bandwidth in excess
of the minimum necessary to send the information. The spreading of the signal is accomplished
by means of a pseudorandom code that is independent of the transmitted data signal.
Synchronized reception with the same pseudorandom code at the receiver is used for
de-spreading and subsequent data recovery. [3]
A. Principle of DS-CDMA
Figure 2 shows the multiple access capability of a CDMA communication system. Two users
simultaneously send narrowband information signals having the same bandwidth, Bi. Both
narrowband signals are spread with a user-specific, unique code having sufficiently low
cross-correlation with the other user's code [3]. The code makes each user's communications
approximately orthogonal to those of other users. After spreading, the two signals are
transmitted into a radio channel having the same bandwidth, Bt. In the radio channel the two
signals are mixed and exposed to impairments. Spreading de-sensitizes the original
narrowband signal to some potential channel degradation and interference [3].
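The two-user scenario of Figure 2 can be sketched in a few lines of Python. The length-4 Walsh codes and three-bit messages below are illustrative only; real systems use much longer pseudorandom sequences:

```python
# Minimal DS-CDMA sketch: two users share the channel at the same time,
# separated only by orthogonal spreading codes (length-4 Walsh codes here).

code_a = [+1, +1, -1, -1]   # user A's spreading code
code_b = [+1, -1, +1, -1]   # user B's spreading code (orthogonal to A's)

def spread(bits, code):
    """Multiply each data bit (+1/-1) by the whole chip sequence."""
    return [bit * chip for bit in bits for chip in code]

def despread(signal, code):
    """Correlate the received signal with one user's code per bit period."""
    n = len(code)
    bits = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        bits.append(+1 if corr > 0 else -1)
    return bits

bits_a = [+1, -1, +1]
bits_b = [-1, -1, +1]

# Both spread signals are transmitted at once; the channel simply adds them.
channel = [a + b for a, b in zip(spread(bits_a, code_a),
                                 spread(bits_b, code_b))]

print(despread(channel, code_a))  # recovers user A's bits: [1, -1, 1]
print(despread(channel, code_b))  # recovers user B's bits: [-1, -1, 1]
```

Because the codes are orthogonal (their cross-correlation is zero), each correlator captures only the desired user's energy; the other user's signal contributes nothing to the decision, exactly as described above.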

The CDMA channel is nominally 1.23 MHz wide. CDMA networks use a scheme
called soft handoff, which minimizes signal breakup as a handset passes from one cell
to another. The combination of digital and spread spectrum modes supports several
times as many signals per unit of bandwidth as analog modes. CDMA is compatible with
other cellular technologies; this enables nationwide roaming. The original CDMA
standard, also known as cdmaOne, offers a transmission speed of only up to 14.4
Kbps in its single-channel form and up to 115 Kbps in an eight-channel form.
CDMA2000 and Wideband CDMA (W-CDMA) deliver data many times faster.

The CDMA2000 family of standards includes single-carrier Radio Transmission
Technology (1xRTT), Evolution-Data Optimized (EV-DO) Release 0, EV-DO Revision A, and
EV-DO Revision B. People often confuse CDMA2000, which is a family of standards
supported by Verizon and Sprint, with CDMA, which is the physical-layer multiplexing
scheme.

GSM and CDMA are multiple-access technologies that enable numerous data connections and
multiple calls on a single radio channel. CDMA cellular systems use a unique code to encode
every call's data and then transmit all those calls at once. On the other end, the receivers divide
the combined signal into their individual calls before channeling them to the intended recipient.
GSM transforms every call into digital data, transmits it via a shared channel at a specific time
and then puts each call back together at the other end of the line for the intended recipient.

Conclusion:
CDMA refers to a multiple access method in which the individual terminals use spread-
spectrum techniques and occupy the entire spectrum whenever they transmit. This feature
makes CDMA different from frequency division multiple access (FDMA) and from time division
multiple access (TDMA). In FDMA each user is given a small portion of the total available
spectrum, and in TDMA each user is allowed full use of the available spectrum, but only
during certain periods in time.
