1. Aim - Study and analysis of wired & wireless networks
2. Theory –
Network - A computer network is a collection of two or more computer systems that are linked together. A network connection can be established using wireless media or cables. A network consists of various kinds of nodes: servers, networking hardware, personal computers, and other specialized or general-purpose hosts can all be nodes in a computer network.
(a) Wired Network - A wired network refers to any physical medium made up of cables. Copper wire, twisted pair, and fiber-optic cable are some of the cabling options. A wired network employs these cables to link devices such as laptops or desktop PCs to the internet or another network.
Wired connectivity provides high security, with dedicated bandwidth provisioned for each user. It is highly reliable and, unlike wireless connectivity, incurs very low delay.
Advantages of wired networks include dedicated bandwidth rather than bandwidth shared with other users, fewer network-traffic interruptions, and less susceptibility to interference and outages than wireless access points.
Most wired networks use Ethernet cables to transfer data between connected PCs. In a small wired network, a single router may connect all the computers; larger networks often involve multiple routers or switches that connect to each other.
In a wired network, a user does not have to share the medium with other users and thus gets dedicated speeds, while in wireless networks the same connection may be shared by multiple users.
The medium must have properties that ensure a reasonable error performance for a guaranteed distance and rate of data delivery (i.e., speed). It must also support two-way or multiway communication.
Fiber-optic systems use mainly infrared light for data transmission. At each end, fast opto-couplers and diodes are used to translate the signal to electrical levels. Most fiber-based systems can only transmit one way, so a pair of fibers is required to implement a two-way system. Hubs are also required at each junction, as fiber systems can only operate point-to-point. Fiber is used mainly in areas where high speed, security, and/or electrical isolation are important.
Coaxial cable systems were very popular in the early days of networking, mainly because they made use of cheap and commonly available 75 or 50 Ω coaxial cable, nominally intended for video and radio-frequency applications. In a coaxial system, nodes are connected via a single backbone: the cable is laid out as a single line passing through all stations, an arrangement also known as a bus topology. A resistive 75 (or 50) Ω terminator R is placed at each end of the cable to absorb all reflections. On reception, the nodes act as high-impedance signal pickoffs; on transmission, they act as current drives into the (resistive) line. For all practical purposes, this line looks to every connected device like a purely resistive load of R/2 Ω; such a known load resistance allows transmitters to use simple current generators to inject a fixed amount of current into the line. The voltage generated on the line is of the order of half a volt for a typical Ethernet network. This allows receivers to use dynamically adjustable threshold sensors to determine zero crossings, and also to detect the voltage overloads caused when two or more station transmitters attempt to drive the line together, which provides a simple form of collision detection. To avoid ground loops, the coaxial cable shield is grounded at only one point; network adapters must therefore incorporate isolation hardware to float all the power supplies and other circuits directly connected to the cable, adding somewhat to their cost.
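The load arithmetic above can be checked with a short sketch (the 20 mA drive current is an illustrative value, not taken from any Ethernet specification):

```python
# Voltage seen on a doubly terminated coaxial bus when a node injects current.
# The two terminators (R ohms each) appear in parallel, so the line looks
# like a purely resistive load of R/2 to every connected device.

def bus_voltage(injected_current_a: float, terminator_ohms: float) -> float:
    """Voltage produced on the line by a current-source transmitter."""
    effective_load = terminator_ohms / 2  # two terminators in parallel
    return injected_current_a * effective_load

# A 50-ohm coax terminated at both ends looks like 25 ohms; injecting
# ~20 mA yields ~0.5 V, the "half a volt" mentioned above.
print(bus_voltage(0.020, 50))  # 0.5
```

Two stations driving the line together would roughly double this voltage, which is what the overload detection exploits for collision detection.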
Twisted-pair (TP) systems are more recent. The most basic form of TP cable consists of one or more pairs of insulated strands of copper wire twisted around one another. The twisting is necessary to minimize electromagnetic radiation and resist external interference; it also helps to limit interference with adjacent twisted pairs (cross-talk). In theory, the more twists per meter a cable has, the better the isolation. However, this increases the capacitance per meter, which can result in an increased load requirement and greater high-frequency attenuation. There are two main types of twisted-pair cable: unshielded twisted pair (UTP) and shielded twisted pair (STP), which encloses each pair of wires in an aluminum-foil shield for further isolation. The impedance of a typical twisted pair is of the order of 100–150 Ω. Lines are fed in voltage-differential mode: a positive signal level is fed to one wire and a corresponding negative, or inverted, signal to the other, which makes the total radiation field around the pair cancel to zero. TPs can be used in bus topologies, as described before, or in star topologies, where each computer or terminal is connected by a single cable to a central hub. The signal sent from one station is received by the hub and redirected to all the other stations on the star network.
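The differential-mode idea above also explains why twisted pair rejects interference: noise couples equally onto both wires and is cancelled by subtraction at the receiver. A small sketch (signal and noise levels are arbitrary illustrative values):

```python
# Differential signalling on a twisted pair: the same noise couples onto both
# wires (common mode), and the receiver subtracts one wire from the other,
# cancelling the noise while preserving the signal.

def transmit(signal: float) -> tuple[float, float]:
    """Drive the pair: +signal on one wire, -signal (inverted) on the other."""
    return signal, -signal

def receive(wire_a: float, wire_b: float) -> float:
    """Differential receiver: output is half the wire-to-wire difference."""
    return (wire_a - wire_b) / 2

noise = 0.3                      # common-mode interference hits both wires
a, b = transmit(1.0)
recovered = receive(a + noise, b + noise)
print(recovered)  # the common-mode noise cancels; the 1.0 signal survives
```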
Wireless Networks: Traditional networks provide place-to-place communication; wireless networks provide person-to-person communication, which is certainly a desirable feature for people on the move. Wireless networks are computer networks that are not connected by cables of any kind. The use of a wireless network enables enterprises to avoid the costly process of introducing cables into buildings or as a connection between different equipment locations. Access points extend Wi-Fi coverage, so a device can be far from a router but still be connected to the network. When you connect to a Wi-Fi hotspot at a cafe, a hotel, an airport lounge, or another public place, you're connecting to that business's wireless network.
Wireless LANs can be categorized as providing low-mobility, high-speed data communication within a confined region. Coverage range from a wireless terminal is tens to hundreds of feet. There are many different WLAN products offered by different vendors, with data rates ranging from hundreds of kb/s to more than 10 Mb/s. An IEEE standards committee, 802.11, has been attempting to put some order into this topic, but its success has been somewhat limited.
There are two overall network architectures pursued by WLAN designers. One is the centrally coordinated and controlled network, in which base stations exercise overall control over channel access. The other is the self-organizing, distributed-control network, where every terminal has the same function as every other terminal and networks are formed ad hoc by communication exchanges among terminals.
Wide Area Wireless Data Systems: Wide area data systems can be categorized as providing high-mobility, wide-ranging, low-data-rate digital data communication to both vehicles and pedestrians. The earliest and best-known systems are the ARDIS network developed and run by Motorola, and the RAM mobile data network based on Ericsson Mobitex technology. These technologies were designed to make use of standard, two-way voice, land mobile radio channels with 12.5 kHz or 25 kHz channel spacing.
A relatively new technology called Cellular Digital Packet Data (CDPD) is being developed by major cellular carriers and manufacturers. CDPD shares the 30 kHz-spaced 800 MHz voice channels used by the analog FM Advanced Mobile Phone Service (AMPS) systems; its data rate is 19.2 kb/s.
Integration of Wireless Networks: It is highly unlikely that a single wireless network able to meet all mobile computing needs will evolve. It is far more probable that many wireless networks will be available, each of which will work at a different scale, providing service over a variety of geographical coverage areas at various speeds and price levels. Each network will serve a niche, but none will meet all needs. Accordingly, mobile computer users will need to be able to access multiple networks in order to meet their needs. The user would in general like a wireless network infrastructure that can provide seamless roaming: the ability for mobile computers to continue to receive service as they move from the coverage area of one wireless network to another.
Mobile Interneting: Within the Internet society, the middle-aged IP is now being face-lifted to deal with the various demands of growth and new services. Mobile IP was proposed to work with wireless networks. It is based in principle on location registration and packet redirection. A mobile node changing its location must register with a dedicated agent, which then handshakes with the station's home agent. Upon successful registration, the mobile station's current address is bound to its home address. Incoming datagrams are first routed to the mobile station's home network, where they are encapsulated by the home agent and then tunneled to the foreign agent, which delivers them to the mobile host. The problem with Mobile IP is that the routing path traversed by datagrams is suboptimal; in the worst case, "triangle routing" may occur.
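The registration-and-redirection flow can be sketched as a toy model (the node names and path representation are hypothetical, for illustration only):

```python
# Toy sketch of Mobile IP "triangle routing": datagrams for a roaming host
# always pass through its home agent, even when the sender is close to the
# foreign network the host has moved to.

def route_to_mobile(sender: str, home_agent: str, foreign_agent: str) -> list[str]:
    """Path a datagram takes under basic Mobile IP."""
    path = [sender, home_agent]          # first routed to the home network
    path.append(foreign_agent)           # home agent encapsulates and tunnels it
    path.append("mobile_host")           # foreign agent decapsulates, delivers
    return path

# Even if the sender sits right next to the foreign network, traffic still
# makes the detour via the home agent -- the "triangle" of triangle routing.
print(route_to_mobile("correspondent", "home_agent", "foreign_agent"))
# ['correspondent', 'home_agent', 'foreign_agent', 'mobile_host']
```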
Mobile Application Support: We build wireless infrastructure because we want to use it. To use it, we need application interfaces fine-tuned to the nature of mobile computing. For example, mobile real-time multimedia applications (such as video conferencing) require low latency. However, if we use traditional Mobile IP, there will be significant delay caused by registration and packet redirection. New methods need to be created to address such problems.
Seamless Integration of Overlay Networks: No general network management architecture exists for effectively integrating multiple overlay networks. Mobile applications roaming across overlays require network intelligence to determine that the mobile has moved from acceptable coverage in one network to better coverage in another. But a global network management algorithm is still needed to control handoffs across overlays based on current mobile connectivity. Link quality is only one metric that determines handover; priority of access, application needs, and relative cost are equally important. Since overlays may not cooperate with one another to render such decisions, mobile-assisted handoff, in which the mobile host must be an active participant in handoff processing, will be needed.
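A mobile-assisted handover decision weighing several such metrics might look like the following sketch (the networks, metric values, and weights are all hypothetical):

```python
# Sketch of a handover decision that weighs several metrics, not just link
# quality, as the text argues. Weights and values are illustrative only.

def handover_score(link_quality: float, priority: float, cost: float,
                   weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Higher is better; relative cost counts against a network."""
    w_quality, w_priority, w_cost = weights
    return w_quality * link_quality + w_priority * priority - w_cost * cost

# The mobile host scores each overlay network it can currently reach.
overlays = {
    "wlan":     handover_score(link_quality=0.9, priority=0.8, cost=0.1),
    "cellular": handover_score(link_quality=0.6, priority=0.9, cost=0.7),
}
best = max(overlays, key=overlays.get)
print(best)  # the host initiates handover to the best-scoring overlay
```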
Support Services for Mobile Applications: Handover across overlays will change an application's network bandwidth and latency. A new application interface to the network management layers will be designed to allow applications to initiate handovers, to determine changes in their current network capabilities, and to gracefully adapt their communication demands. It will better integrate mobile applications and scalable wire-line processing and storage capabilities through an agent-processing architecture that exploits data-type-specific transmissions to manage communication demands over dynamically varying wireless links.
Load Balancing for Scalable Mobile Processing: Repositioning within future wireless networks will be a common event. Traffic patterns will not be uniform, with high correlation between mobile hosts' locations, their repositioning, and their requests for service. BARWAN is developing network management architectures that build on decentralized algorithms to allocate network and processing resources on demand, avoiding the static and centralized schemes of the past. Furthermore, overlay networks provide an opportunity to share bandwidth and processing across networks; current network load is one reason to initiate internetwork handoff.
Conclusion: A computer network is a collection of two or more computer systems that are linked together. A network connection can be established using wireless media or cables. A network consists of various kinds of nodes: servers, networking hardware, personal computers, and other specialized or general-purpose hosts can all be nodes in a computer network.
Experiment No. 2
Research Directions: Current research directions in wireless networking include the integration of various wireless network services, the mobile Internet, and mobile application support.
Computer networks that are not connected by cables are called wireless networks. They generally use radio waves for communication between the network nodes, and they allow devices to remain connected to the network while roaming around within the coverage area.
Advantages of Wireless Networks
Avoids the cost of introducing cables into buildings or between equipment locations
Allows devices to stay connected while roaming within the network coverage area
Coverage is easily extended with additional access points, as in public Wi-Fi hotspots
Experiment No. 3
Theory: A metallic structure used to transmit and capture a radio electromagnetic signal is known as an antenna. Antennas are available in different sizes and shapes: small antennas were once used to receive television broadcasts, while large antennas are used to capture signals from satellites.
One example is the parabolic antenna used in NASA's SCAN (Space Communications and Navigation) network: a bowl-shaped reflector with a feed element that captures the signal at its focal point. This type of antenna can both transmit and capture electromagnetic signals, and it can be steered horizontally and vertically to do so.
The signal is fed to the antenna by a transmission line and converted into electromagnetic energy radiated into space. In other words, an antenna (or aerial) is an electrical device that converts electrical power into electromagnetic waves, and vice versa. Antennas play an important role in transmitting electromagnetic radiation.
A transmitting antenna receives an electrical signal from a transmission line and converts it into a radio wave. A receiving antenna does exactly the opposite: it captures radio waves from space, converts them into electrical signals, and delivers them to the transmission line. Typical antenna parameters are bandwidth, gain, radiation pattern, polarization, impedance, and beamwidth.
Antennas are used for many reasons, but the main one is that they provide a simple way to transmit a signal where other techniques are not possible.
For example, the pilot of an aircraft often needs to communicate with air traffic control (ATC). This communication is carried out wirelessly, with the antenna as its entry point. There are therefore many conditions and applications in which wireless communication through an antenna is chosen over a cable.
Types of Antenna:
Wire Antenna
Aperture Antenna
Reflector Antenna
Lens Antenna
Microstrip antenna
Array antenna
Wire Antenna: Wire antennas are the most basic type of antenna. They are well known and widely used. To get a better idea of wire antennas, let us first look at transmission lines.
Transmission Lines: The wire or transmission line carries power from one end to the other. If both ends of the transmission line are connected to circuits, information is transmitted or received over the wire between the two circuits. If one end of the wire is left unconnected, the power in it tries to escape; this leads to wireless communication. If the end of the wire is bent, the energy escapes from the transmission line more effectively than before. This purposeful escape is known as radiation.
For radiation to take place effectively, the impedance of the open end of the transmission line should match the impedance of free space. Consider a transmission line of quarter-wavelength size whose far end is kept open and bent to present a high impedance; this acts as a half-wave dipole antenna. The transmission line already has low impedance at one end, and the open end, which has high impedance, matches the impedance of free space to provide better radiation.
Dipole
When energy is radiated through such a bent wire, the end of the transmission line is termed a dipole, or dipole antenna.
The reactance of the input impedance is a function of the radius and length of the dipole: the smaller the radius, the larger the amplitude of the reactance, and it is proportional to the wavelength. Hence the length and radius of the dipole must be taken into consideration. Normally, its impedance is around 72 Ω.
The types of wire antennas include the half-wave dipole, half-wave folded dipole, full-wave dipole, short dipole, and infinitesimal dipole.
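As a worked example of sizing a dipole, the approximate physical length of a half-wave dipole can be computed from the operating frequency (the 0.95 shortening factor used here is a common rule of thumb, not a fixed standard):

```python
# Approximate physical length of a half-wave dipole for a given frequency.
# Practical dipoles are cut a few percent shorter than lambda/2 to become
# resonant; 0.95 is a typical rule-of-thumb factor.

C = 299_792_458  # speed of light in free space, m/s

def half_wave_dipole_length(freq_hz: float, k: float = 0.95) -> float:
    """Length in metres of a half-wave dipole resonant near freq_hz."""
    wavelength = C / freq_hz
    return k * wavelength / 2

# An FM-broadcast-band dipole at 100 MHz comes out at about 1.42 m.
print(round(half_wave_dipole_length(100e6), 2))  # 1.42
```

Lower frequencies give longer wavelengths and hence longer dipoles, which is why the length and radius must be chosen for the intended band.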
Aperture Antenna: An antenna with an aperture at the end is termed an aperture antenna. A waveguide is an example of an aperture antenna: when the edge of a transmission line is terminated with an opening, it radiates energy, and this opening, which is an aperture, makes it an aperture antenna. Types of aperture antenna include:
Horn antenna
Slot antenna
Waveguide antenna
A waveguide is capable of radiating energy when excited at one end and opened at the other end. The radiation from a waveguide is greater than from a two-wire transmission line.
Frequency Range
The operational frequency range of a waveguide is around 300 MHz to 300 GHz. This antenna works in the UHF and EHF frequency ranges. The following image shows a waveguide.
A waveguide with an open terminated end acts as an antenna, but only a small portion of the energy is radiated while a large portion is reflected back at the open circuit. This means the VSWR (voltage standing wave ratio, discussed in the basic-parameters chapter) increases. Diffraction around the waveguide results in poor radiation and a non-directive radiation pattern.
Radiation Pattern
The radiation of a waveguide antenna is poor and the pattern is non-directive, meaning omnidirectional. An omnidirectional pattern has no particular directivity but radiates in all directions; hence it is called a non-directive radiation pattern.
The above figure shows a top-section view of an omnidirectional pattern, also called a non-directional pattern. The two-dimensional view is a figure-of-eight pattern, as we already know.
Advantages
Radiation is Omni-directional
Disadvantages
VSWR increases
Poor radiation
Conclusion: A metallic structure used to transmit and capture a radio electromagnetic signal is known as an antenna. Antennas are available in different sizes and shapes: small antennas were once used to receive television broadcasts, while large antennas capture signals from satellites.
Experiment-4
Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000
member companies in the areas of telecommunication, computing, networking, and consumer
electronics. The IEEE standardized Bluetooth as IEEE 802.15.1, but no longer maintains the standard. The Bluetooth SIG oversees development of the specification, manages the qualification program, and protects the trademarks. A manufacturer must meet Bluetooth SIG standards to market a device as a Bluetooth device. A network of patents applies to the technology; these are licensed to individual qualifying devices.
As of 2009, Bluetooth integrated circuit chips ship approximately 920 million units annually. By 2017,
there were 3.6 billion Bluetooth devices being shipped annually and the shipments were expected to
continue increasing at about 12% a year.
Bluetooth Connections
Wireless communication is now a common phenomenon. Many of us have WiFi internet connections in
our offices and homes. But Bluetooth devices communicate directly with each other, rather than
sending traffic through an in-between device such as a wireless router. This makes life very convenient
and keeps power use extremely low, improving battery life.
Bluetooth devices communicate using low-power radio waves on a frequency band between 2.400 GHz and 2.4835 GHz [source: Bluetooth Special Interest Group (SIG)]. This is one of a handful of bands set aside by international agreement for the use of industrial, scientific and medical (ISM) devices. Many devices that you may already use take advantage of this same radio-frequency band, including baby monitors, garage-door openers and the newest generation of cordless phones. Making sure that Bluetooth devices and other wireless communications technologies don't interfere with one another is essential.
There are two types of Bluetooth technology as of 2020: Bluetooth Low Energy (LE) and Bluetooth
Classic (more formally known as Bluetooth Basic Rate/Enhanced Data Rate, or BR/EDR)
[source: Bluetooth SIG]. Both operate using the same frequency band, but Bluetooth LE is the more
popular option, by far. It needs much less energy to operate and can also be used for broadcast or mesh
networks in addition to allowing communication over point-to-point connections between two devices.
The classic Bluetooth technology can deliver a slightly higher data rate than Bluetooth LE (3 Mb/s compared to either 1 Mb/s or 2 Mb/s) but can only be used for communication directly between two devices using point-to-point connections. Each of the two types of Bluetooth technology has its particular strengths, and manufacturers adopt the version that best fits the needs of their product.
Harald Bluetooth was king of Denmark in the late 900s. He managed to unite Denmark and part of
Norway into a single kingdom, then introduced Christianity into Denmark. He left a large monument, the
Jelling rune stone, in memory of his parents. He was killed in 986 during a battle with his son, Svend
Forkbeard. Choosing this name for the standard indicates how important companies from the Nordic
region (nations including Denmark, Sweden, Norway and Finland) are to the communications industry,
even if it says little about the way the technology works.
Bluetooth BR/EDR devices must always be paired and this procedure results in each of the two devices
trusting the other and being able to exchange data in a secure way, using encryption.
When Bluetooth BR/EDR devices come within range of one another, an electronic conversation takes
place to determine whether they trust each other or not and have data to share. The user doesn't
usually have to press a button or give a command — the electronic conversation happens automatically.
Once the conversation has occurred, the devices — whether they're part of a computer system or a
stereo — form a network.
Bluetooth LE works differently. Devices may also be paired to form a trusted relationship between them
but not all types of product require this. A Bluetooth LE device which wants to be discovered broadcasts
special messages (known as packets) in a process called advertising. Advertising packets contain useful
information about the advertising device. Another suitable device will find the advertising device by
scanning (listening) for advertising packets and selecting those which are from appropriate devices.
Usually scanning only happens when the user triggers it by, say, pressing a button in a smartphone
application. Typically the user is then presented with details of appropriate devices that were discovered
and then selects one to connect to.
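The advertise-and-scan discovery process described above can be sketched as a small simulation (the packet fields, addresses, and service names are made up for illustration and are much simpler than real LE advertising PDUs):

```python
# Simplified sketch of Bluetooth LE discovery: advertisers broadcast packets,
# and a scanner listens for them and keeps only those from appropriate devices.

advertising_packets = [
    {"address": "AA:11", "name": "HR-Monitor", "services": ["heart_rate"]},
    {"address": "BB:22", "name": "Thermostat", "services": ["temperature"]},
    {"address": "CC:33", "name": "HR-Strap",   "services": ["heart_rate"]},
]

def scan(packets: list[dict], wanted_service: str) -> list[str]:
    """Return the addresses of advertisers offering the wanted service."""
    return [p["address"] for p in packets if wanted_service in p["services"]]

# The user taps "find heart-rate sensors"; matching devices are listed and
# the user picks one to connect to.
print(scan(advertising_packets, "heart_rate"))  # ['AA:11', 'CC:33']
```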
Bluetooth peripherals (e.g., an activity tracker and a smartwatch) that are connected to the same central
device (e.g., a smartphone) form a personal-area network (PAN) or piconet that may fill an entire
building or may encompass a distance no more than that between the smartphone in your pocket and
the watch on your wrist. Once a piconet is established, its members hop radio frequencies in unison so
they stay in touch with one another and avoid interfering with other Bluetooth piconets that may be
operating in the same room, or with devices using other wireless technologies such as WiFi. Bluetooth technology even learns which radio channels are working well and which are experiencing interference, so that it can dynamically avoid bad channels and use only the channels that are free from interference. This process, called adaptive frequency hopping, allows Bluetooth devices to work well even in environments where very large numbers of wireless devices are operating.
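The channel-map idea behind adaptive frequency hopping can be sketched as follows (the channel count matches BR/EDR's 79 channels, but the hop-selection scheme here is a simplification of the real hop-selection algorithm):

```python
# Sketch of adaptive frequency hopping: piconet members hop through a shared
# channel map, skipping channels that have been classified as bad.

import random

def build_channel_map(bad_channels: set[int], total: int = 79) -> list[int]:
    """Channels the piconet is still allowed to use."""
    return [ch for ch in range(total) if ch not in bad_channels]

def hop_sequence(channel_map: list[int], hops: int, seed: int = 0) -> list[int]:
    """Pseudo-random hop sequence; a shared seed keeps members in unison."""
    rng = random.Random(seed)
    return [rng.choice(channel_map) for _ in range(hops)]

interference = {1, 6, 11}               # e.g. channels clobbered by Wi-Fi
channels = build_channel_map(interference)
sequence = hop_sequence(channels, hops=10)
assert not interference.intersection(sequence)  # bad channels are avoided
```

Because every member derives the same sequence from the shared map and seed, the piconet stays in touch while sidestepping the interfered channels.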
Bluetooth Range
Although many think of Bluetooth primarily as a short-range technology, it can also be used to connect devices more than a kilometer (3,280 feet) apart [source: Bluetooth SIG]. In fact, many types of product, such as wireless headphones, require the devices' communication range to be very short. But because Bluetooth technology is very flexible and can be configured to the needs of the application, manufacturers can adjust the Bluetooth settings on their devices to achieve the range they need whilst at the same time maximizing battery life and achieving the best quality of signal.
Radio spectrum: Bluetooth technology's frequency band makes it a good choice for wireless
communication.
Physical layer (PHY): This defines some key aspects of how the radio is used to transmit and
receive data such as the data rate, how error detection and correction is performed,
interference protection, and other techniques that influence signal clarity over different ranges.
Receiver sensitivity: The measure of the minimum signal strength at which a receiver can still
receive and correctly decode data.
Transmission power: As you may expect, the higher the transmitted signal strength, the longer
the range that can be achieved. But increasing the transmission power will also deplete your
battery faster.
Antenna gain: A measure of how well the antenna concentrates its radiated power in a
particular direction; higher gain at either end extends the usable range.
Path loss: Several factors may weaken the signal, including distance, humidity, and the medium
through which it travels (such as wood, concrete or metal).
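The interplay of transmission power, antenna gain, receiver sensitivity and path loss can be made concrete with a simple free-space link-budget sketch. Free space is an idealized assumption here; real indoor paths through wood, concrete or metal lose considerably more.

```python
import math

def free_space_path_loss_db(distance_m, freq_hz):
    """Free-space path loss: FSPL(dB) = 20*log10(4*pi*d*f / c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def received_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, distance_m, freq_hz=2.4e9):
    """Link budget: received power = transmit power + antenna gains - path loss."""
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - free_space_path_loss_db(distance_m, freq_hz)

# A 0 dBm transmitter at 10 m on 2.4 GHz loses about 60 dB in free space,
# so the receiver must be sensitive enough to decode a roughly -60 dBm signal.
loss_10m = free_space_path_loss_db(10, 2.4e9)
```

Doubling the distance adds about 6 dB of path loss, which is why raising transmission power or improving receiver sensitivity each extend range, at the cost of battery life in the first case.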
One of the most recent Bluetooth technology updates introduced a technique called forward error
correction (FEC) to improve receiver sensitivity. FEC corrects data errors that are detected at the
receiving end and improves a device's effective range by four or more times without having to use more
transmission power. This means a device can successfully receive data when it is at a much longer range
from the transmitter, where the signal will be much weaker [source: Bluetooth SIG].
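The underlying idea of FEC, trading extra transmitted bits for tolerance to errors, can be illustrated with a toy repetition code. Note this is only a teaching sketch: the actual Bluetooth LE Coded PHY uses a convolutional code, not the repetition scheme shown here.

```python
def fec_encode(bits, r=3):
    """Repetition code: transmit each bit r times."""
    return [b for bit in bits for b in [bit] * r]

def fec_decode(coded, r=3):
    """Majority vote over each group of r received bits."""
    return [1 if sum(coded[i:i + r]) * 2 > r else 0
            for i in range(0, len(coded), r)]

data = [1, 0, 1, 1, 0]
coded = fec_encode(data)
coded[1] ^= 1   # flip one transmitted bit (a channel error)
coded[9] ^= 1   # flip another bit, in a different group
assert fec_decode(coded) == data   # both errors are corrected
```

At long range the weak signal causes bit errors, but as long as errors stay sparse enough, the receiver recovers the original data without any retransmission, which is exactly how FEC stretches effective range without extra transmit power.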
Bluetooth Security
Bluetooth technology includes a number of security measures that can satisfy even the most stringent
security requirements such as those included in the Federal Information Processing Standards (FIPS).
When setting up a new device, users typically go through a process called pairing. Pairing equips each
device with special security keys and causes them to trust each other. A device that requires pairing will
not connect to another device which it has not been paired with.
Those security keys allow Bluetooth technology to protect data and users in a number of ways. For
example, data exchanged between devices can be encrypted so that it cannot be read by other devices.
It can also allow the address which acts as the identity of a device and which is included in wireless data
exchanges to be disguised and changed every few minutes. This protects users from the risk of being
tracked using data transmitted by their personal electronic devices.
If you own Bluetooth-enabled devices, you have experienced this for yourself. For example, if you buy a
cordless mouse, the first time you turn it on, you pair it to the device you plan to use it with. You might
turn the mouse on, then go to the Bluetooth settings on your computer to pair the device once you see
its name in a list of nearby Bluetooth accessories. A computer can handle many Bluetooth connections
at once by design. You may want to use a cordless mouse, keyboard and headphones.
The makers of those accessories, however, are going to limit connections to one at a time. You want
your keyboard to type only on your computer, or your headphones to listen specifically to your phone.
Some allow the user to pair the device with multiple computers, tablets or phones, but they may only be
allowed to connect with one at a time. It all depends on what the manufacturer decided was sensible for
their product.
Some devices require a code for security while being paired with another device. This is an example
of authentication and it ensures that the device you are setting up that trusted relationship with is the
one you think it is, rather than another device somewhere else in the environment. For example, many
cars let you take calls without taking your hands off the steering wheel. The first time you want to use
this facility, you will have to pair your phone and the car's audio system using the car's entertainment
display and your smartphone together. The car gives you a number to type in. Your phone lets you know
a device wants to pair using a numeric code. You enter the code off the entertainment display to
confirm that this is an authorized pairing. After that, you can use the hands-free phone system without
ever needing to pair again.
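The numeric-code confirmation can be sketched as follows. This is a deliberately simplified stand-in: real Bluetooth Secure Simple Pairing derives the displayed code from an ECDH key exchange via the g2() function defined in the Core Specification, not from a bare SHA-256 hash as in this illustration.

```python
import hashlib
import secrets

def numeric_code(shared_key: bytes, nonce_a: bytes, nonce_b: bytes) -> int:
    """Toy stand-in for the pairing confirm function: both devices derive
    the same 6-digit code from the shared secret and exchanged nonces."""
    digest = hashlib.sha256(shared_key + nonce_a + nonce_b).digest()
    return int.from_bytes(digest[:4], "big") % 1_000_000

shared = secrets.token_bytes(16)                  # e.g. from a key exchange
na, nb = secrets.token_bytes(8), secrets.token_bytes(8)

# The car's display and the phone compute the code independently;
# the user confirms they match, which authenticates the pairing.
assert numeric_code(shared, na, nb) == numeric_code(shared, na, nb)
```

The security comes from the fact that an impostor device somewhere else in the environment does not hold the same shared secret, so it cannot produce a matching code.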
The user also has control over a device's visibility to other Bluetooth devices. On a computer or
smartphone, for example, you can also simply switch the device's Bluetooth mode to "nondiscoverable"
or simply disable Bluetooth until you need it again.
Devices also offer a variety of settings that allow the user to limit Bluetooth connections. The device-
level security of "trusting" a Bluetooth device restricts connections to only that specific device. With
service-level security settings, you can also restrict the kinds of activities your device is permitted to
engage in while on a Bluetooth connection.
As with any wireless technology, however, there is always some security risk involved. Hackers have
devised a variety of malicious attacks that use Bluetooth networking. For example, "bluesnarfing" refers
to a hacker gaining unauthorized access to information on a device through Bluetooth; "bluebugging" is
when an attacker takes over your mobile phone and all its functions.
For the average person, Bluetooth doesn't present a grave security risk when used with safety in mind
(e.g., not connecting to unknown Bluetooth devices). For maximum security, while in public and not
using Bluetooth, you can disable it completely.
The specifications were formalized by the Bluetooth Special Interest Group (SIG) and formally
announced on 20 May 1998.[70] Today it has a membership of over 30,000 companies worldwide. [71] It
was established by Ericsson, IBM, Intel, Nokia and Toshiba, and later joined by many other companies.
All versions of the Bluetooth standards support backward compatibility.[72] That lets the latest standard
cover all older versions.
The Bluetooth Core Specification Working Group (CSWG) produces mainly 4 kinds of specifications:
The Bluetooth Core Specification, release cycle is typically a few years in between
Core Specification Addendum (CSA), release cycle can be as tight as a few times per year
Conclusion: Mobile IP is a communication protocol (created by extending the Internet Protocol, IP)
that allows users to move from one network to another while keeping the same IP address. It
ensures that communication continues without the user's sessions or connections being
dropped.
Experiment No. 6
Aim: Study of OPNET tool for modeling and simulation of different cellular standards.
Theory: OPNET is an important simulation tool: computer software used to simulate the behavior
of various communication networks. OPNET is a discrete-event network simulator that provides
fast and scalable simulation. Simulations are written in C or C++ code. OPNET projects can be
used to design access-network transmission and inter-office network communications.
Opnet Project Goal: OPNET is widely used because it offers the following advantages:
Large user community.
Easy to use.
Graphical user interface (GUI) that is easy to learn.
Models entire networks with routers, switches, protocols and servers.
Used by major service providers.
Opnet Tools: OPNET provides the following editors and tools:
Nodal Model Editor.
Process Model Editor.
Source Code Editing Environment.
Simulation Model.
Analysis Configuration Tool.
Network Model Editor.
Application in OPNET: OPNET provides the following built-in application models:
Configure Web (HTTP) Application: To configure a web-browsing application, the application
named HTTP is selected from the built-in model list.
Configure E-Mail Application: OPNET provides preset configurations for e-mail (low, medium and
high load) and an e-mail size attribute that defines the message size in bytes.
Configure FTP Application: OPNET provides low-, medium- and high-load FTP configurations. The
file size attribute defines, in bytes, the size of the FTP file to transfer.
Configure Remote Login Application: This is similar to the e-mail and FTP applications. A remote
login session is characterized by the Inter-Command Time and Terminal Traffic parameters.
Opnet Features:
Provides specification, simulation and network analysis at different levels of complexity.
Designs simple or sophisticated network systems.
Combines finite-state-machine models with analytical models.
Opnet Language:
The simulator core is programmed in C.
Initial configuration is done through the GUI and a set of XML files.
Simulation models are written in C or C++.
Opnet Riverbed Applications: The Riverbed (formerly OPNET) product line supports the following:
Application Management.
HD Analytics.
Monitoring.
Granular Transaction Capture.
Wireless sensor networks generally comprise a large number of sensor nodes deployed in an
area of interest to collect physical or environmental conditions, such as temperature, humidity,
pressure, etc. In wireless sensor networks, performance evaluation is critical to test the
practicability of network architectures and protocol algorithms, and provides guidelines for
performance optimization. Among different candidates, simulation offers a cost-effective way.
Recently, researchers have developed many simulation models on different simulation platforms,
such as OPNET, NS-2, TOSSIM, EmStar, OMNeT++, J-Sim, ATEMU, and Avrora. Compared
with other simulators, OPNET is more suitable to simulate behaviors of networks in the real
world. OPNET Modeler, as a network simulator, provides an industry-leading network
technology development environment [2]. It can be used to design and study network modeling
and simulation in applications, equipment, protocols and network communication and shows
flexibility and intuition in designing practical systems. Recently, Zigbee technology has been
widely adopted to develop wireless sensor network applications [3] by forming a wireless mesh
network with low rate, low power consumption, and secure networking. In the Zigbee protocol
stack, the physical layer and the MAC layer protocols have been defined by the
IEEE 802.15.4 standard [4]. Its network layer, built upon both lower layers, should be
designed to enable mesh networking, support node joining or leaving, assign network
addresses to devices, and perform routing. The Zigbee Alliance is working on providing a
standardized base set of solutions for sensor networks [5]. In this paper, a network layer model
is proposed for mobile sensor networks in order to accomplish all defined functions. The
application layer aims at providing the services for an application program, consisting of the
application support sub-layer, the application framework, and the Zigbee device object. Since this
layer is related to specific applications, and is not the main focus of this paper, the design of the
application layer is omitted here. Simulation of Zigbee sensor networks within the OPNET simulator
has been attracting interest from researchers. There are many research works on simulation
modelling and evaluation of sensor nodes in OPNET [6, 7]. For example, Kucuk et al [6]
presented a detailed implementation methodology for their
proposed positioning algorithm, called M-SSLE. Shrestha et al [7] proposed a
simulation model for new networking nodes equipped with multiple radio technologies.
However, few works in the literature have focused on simulation models for mobile sensor
networks. Device mobility is inevitable and must be accommodated
[8, 9]; the lack of simulation support for mobile Zigbee sensor networks
is a major limitation in this field of research, evaluation and development.
In [10], the adequacy of current provisions for dealing with different mobility cases was
assessed. Simulation results demonstrated that the current
model in OPNET standard libraries is ineffective in dealing with nodal mobility. Since OPNET
Modeler provides a comprehensive simulation environment
for modeling distributed systems and communication networks, many simulation studies for
Zigbee sensor networks were performed in the OPNET simulator
[11, 12, 13, 14, 15, 16]. According to the performance studies using the Zigbee
model within the OPNET Modeler standard libraries (ZMOMSL), this model has several
disadvantages. For example, its address assignment mechanism
may waste address space, the high communication overheads may reduce network
lifetime, and the network joining strategy may result in significant traffic
collisions and jams [17, 18]. Among all these disadvantages, the most critical
issue is that the Zigbee model cannot support the mobility of device nodes.
This motivated us to develop a new simulation model based on the OPNET
simulator for mobile Zigbee sensor networks.
The main contributions of this paper are summarized as follows. 1) We
adopt the OPNET simulation development platform to design a mobile Zigbee
sensor network simulation model compatible with Zigbee protocols, where the
physical layer and the MAC layer defined by IEEE 802.15.4 are employed. 2) We
provide a node-level design of mobile sensor nodes, present a process-level model
of its network layer and the detailed implementation procedure of the key
functions. 3) In order to further decrease the communication overhead of nodes,
an improved AODV routing algorithm is also proposed, which demonstrates
superior capability in supporting node mobility.
The rest of this paper is organized as follows. In section 2, we discuss the
design of the network process model in detail. In section 3, we propose a new
simulation model which enables mobility support for Zigbee devices. Section
4 presents our simulation results and demonstrates an experimental comparison
between our proposed model and ZMOMSL. Section 5 draws conclusions.
2. The design of simulation system model
2.1. Design of node model
As shown in Fig. 1, a Zigbee node model within OPNET Modeler typically incorporates the
physical layer, the MAC layer, the network layer and the
application layer. The physical layer comprises a transmitter module, a receiver module, and a
wireless pipeline model. The wireless pipeline model can be
configured to build a realistic radio environment. In the MAC layer, the Carrier Sense
Multiple Access with Collision Avoidance (CSMA/CA) protocol is used. For
the network layer, the following services are provided: forming a network, nodes
joining and leaving a network, network address assignment, neighbor discovery,
and route discovery and maintenance. The application layer is responsible for producing and
processing sensing data. In the rest of the paper, we will focus on
the design of the network layer model for mobile Zigbee sensor networks.
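The backoff behaviour of the CSMA/CA protocol mentioned above can be sketched with a toy simulation. This is a simplified, unslotted IEEE 802.15.4-style model; the constants and the failure handling are illustrative only, not the standard's exact procedure.

```python
import random

def csma_ca_attempt(p_busy, max_backoffs=4, rng=None):
    """Toy unslotted CSMA/CA: wait a random backoff, sense the channel
    (clear channel assessment), transmit if it is idle, otherwise widen
    the backoff window and retry, up to max_backoffs times."""
    rng = rng or random.Random()
    be, delay = 3, 0                        # initial backoff exponent
    for _ in range(max_backoffs + 1):
        delay += rng.randrange(2 ** be)     # random backoff periods waited
        if rng.random() > p_busy:           # CCA: channel sensed idle?
            return True, delay              # idle -> transmit now
        be = min(be + 1, 5)                 # busy -> back off longer next time
    return False, delay                     # channel access failure

rng = random.Random(1)
results = [csma_ca_attempt(0.3, rng=rng)[0] for _ in range(1000)]
success_rate = sum(results) / len(results)
```

With the channel busy 30% of the time, the chance that all five attempts fail is about 0.3^5, so almost every packet eventually gets channel access, but at the cost of growing random delays, which is the collision-avoidance trade-off.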
2.2. The design of network layer model
Three types of devices are defined in the Zigbee standard framework: coordinator, router, and
end device. Coordinator is responsible for forming a new
network, storing the key parameters of the network and connecting to other85
networks. There is always a single coordinator in a Zigbee network. In Zigbee-
based WSNs, sink node typically plays the role of network coordinator. Router
has the routing capability. Specifically, it could allow other devices to join the
network as its child nodes, and route data packets. End device has no routing capability.
Experiment No. 7
Aim: Study of TCP/IP Protocol.
Theory: TCP/IP stands for Transmission Control Protocol/Internet Protocol and is a suite of
communication protocols used to interconnect network devices on the internet. TCP/IP is also
used as a communications protocol in a private computer network (an intranet or extranet).
The entire IP suite -- a set of rules and procedures -- is commonly referred to as TCP/IP. TCP and IP are
the two main protocols, though others are included in the suite. The TCP/IP protocol suite functions as
an abstraction layer between internet applications and the routing and switching fabric.
TCP/IP specifies how data is exchanged over the internet by providing end-to-end communications that
identify how it should be broken into packets, addressed, transmitted, routed and received at the
destination. TCP/IP requires little central management and is designed to make networks reliable with
the ability to recover automatically from the failure of any device on the network.
The two main protocols in the IP suite serve specific functions. TCP defines how applications can create
channels of communication across a network. It also manages how a message is assembled into smaller
packets before they are then transmitted over the internet and reassembled in the right order at the
destination address.
IP defines how to address and route each packet to make sure it reaches the right destination. Each
gateway computer on the network checks this IP address to determine where to forward the message.
A subnet mask tells a computer, or other network device, what portion of the IP address is used to
represent the network and what part is used to represent hosts, or other computers, on the network.
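For example, Python's standard `ipaddress` module makes the network/host split defined by a subnet mask explicit:

```python
import ipaddress

# An address with a /24 mask (255.255.255.0): the first three octets
# identify the network, the last octet identifies the host.
iface = ipaddress.ip_interface("192.168.1.130/255.255.255.0")

network = iface.network                      # the network portion
host_part = int(iface.ip) & int(iface.hostmask)  # the host portion

assert str(network) == "192.168.1.0/24"
assert host_part == 130
```

Any device can perform the same bitwise AND of an address against the mask to decide whether a destination is on its own network or must be sent to a gateway.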
Network address translation (NAT) is the virtualization of IP addresses. NAT helps improve security and
decrease the number of IP addresses an organization needs.
Hypertext Transfer Protocol (HTTP) handles the communication between a web server and a web
browser.
HTTP Secure handles secure communication between a web server and a web browser.
TCP/IP uses the client-server model of communication in which a user or machine (a client) is provided a
service, like sending a webpage, by another computer (a server) in the network.
Collectively, the TCP/IP suite of protocols is classified as stateless, which means each client request is
considered new because it is unrelated to previous requests. Being stateless frees up network paths so
they can be used continuously.
The transport layer itself, however, is stateful. It transmits a single message, and its connection remains
in place until all the packets in a message have been received and reassembled at the destination.
The TCP/IP model differs slightly from the seven-layer Open Systems Interconnection (OSI) networking
model designed after it. The OSI reference model defines how applications can communicate over a
network.
TCP/IP is highly scalable and, as a routable protocol, can determine the most efficient path through the
network. It is widely used in current internet architecture.
TCP/IP functionality is divided into four layers, each of which includes specific protocols:
The application layer provides applications with standardized data exchange. Its protocols include HTTP,
FTP, Post Office Protocol 3, Simple Mail Transfer Protocol and Simple Network Management Protocol. At
the application layer, the payload is the actual application data.
The transport layer is responsible for maintaining end-to-end communications across the network. TCP
handles communications between hosts and provides flow control, multiplexing and reliability. The
transport protocols include TCP and User Datagram Protocol, which is sometimes used instead of TCP
for special purposes.
The network layer, also called the internet layer, deals with packets and connects independent networks
to transport the packets across network boundaries. The network layer protocols are IP and Internet
Control Message Protocol, which is used for error reporting.
The physical layer, also known as the network interface layer or data link layer, consists of protocols that
operate only on a link -- the network component that interconnects nodes or hosts in the network. The
protocols in this lowest layer include Ethernet for local area networks and Address Resolution Protocol.
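A minimal example of these layers in practice: a TCP echo exchange over the loopback interface using Python's standard `socket` module, with TCP (the transport layer) handling connection setup, ordering and reliability underneath the application's bytes.

```python
import socket
import threading

def echo_server(server_sock):
    """Accept one connection and echo whatever payload arrives."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Server side: bind to loopback, let the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# Client side: open a TCP channel and exchange application data.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello, tcp")
reply = client.recv(1024)
client.close()
server.close()
assert reply == b"hello, tcp"
```

Note how the application never sees packets, addresses or retransmissions; it only reads and writes a byte stream, which is exactly the abstraction the transport layer provides over the network and link layers below it.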
Uses of TCP/IP
TCP/IP can be used to provide remote login over the network, for interactive file transfer, to deliver
email, to deliver webpages over the network and to remotely access a server host's file system. Most
broadly, it is used to represent how information changes form as it travels over a network from the
concrete physical layer to the abstract application layer. It details the basic protocols, or methods of
communication, at each layer as information passes through.
A drawback of the TCP/IP model is that it does not clearly separate the concepts of services,
interfaces and protocols, so it is not well suited to describing new technologies in new networks.
There are numerous differences between TCP/IP and IP. For example, IP is a low-level internet protocol
that facilitates data communications over the internet. Its purpose is to deliver packets of data that
consist of a header, which contains routing information, such as source and destination of the data, and
the data payload itself.
IP is limited in the amount of data that it can send. The maximum size of a single IP packet, which
contains both the header and the data, is 65,535 bytes (the header itself is between 20 and 60
bytes long). This means that longer streams of data must be broken into multiple data packets that
are independently sent and then reorganized into the correct order after they arrive.
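The packet-splitting arithmetic can be illustrated directly. Assuming a typical Ethernet MTU of 1500 bytes and a minimal 20-byte IPv4 header (transport-layer headers are ignored for simplicity):

```python
import math

def packets_needed(payload_bytes, mtu=1500, ip_header=20):
    """Each packet carries at most (mtu - ip_header) bytes of payload,
    so a larger payload must be split across multiple packets."""
    return math.ceil(payload_bytes / (mtu - ip_header))

assert packets_needed(1480) == 1          # fits exactly in one packet
assert packets_needed(1481) == 2          # one byte over -> second packet
assert packets_needed(1_000_000) == 676   # a 1 MB payload
```

This is the bookkeeping that IP alone leaves to the application, and that TCP takes over when a whole data stream is handed to it in a single request.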
Since IP is strictly a data send/receive protocol, there is no built-in checking that verifies whether the
data packets sent were actually received.
In contrast to IP, TCP/IP is a higher-level smart communications protocol that can do more things. TCP/IP
still uses IP as a means of transporting data packets, but it also connects computers, applications,
webpages and web servers. TCP understands holistically the entire streams of data that these assets
require in order to operate, and it makes sure the entire volume of data needed is sent the first time.
TCP also runs checks that ensure the data is delivered.
As it does its work, TCP can also control the size and flow rate of data. It ensures that networks are free
of any congestion that could block the receipt of data.
An example is an application that wants to send a large amount of data over the internet. If the
application only used IP, the data would have to be broken into multiple IP packets. This would require
multiple requests to send and receive data, since IP requests are issued per packet.
With TCP, only a single request to send an entire data stream is needed; TCP handles the rest. Unlike IP,
TCP can detect problems that arise in IP and request retransmission of any data packets that were lost.
TCP can also reorder packets so they are delivered in the proper order -- and it can minimize
network congestion. TCP/IP makes data transfers over the internet easier.
TCP/IP and OSI are the most widely used communication networking protocols. The main difference is
that OSI is a conceptual model that is not practically used for communication. Rather, it defines how
applications can communicate over a network. TCP/IP, on the other hand, is widely used to establish
links and network interaction.
The TCP/IP protocols lay out standards on which the internet was created, while the OSI model provides
guidelines on how communication has to be done. Therefore, TCP/IP is a more practical model.
The TCP/IP and OSI models have similarities and differences. The main similarity is in the way they are
constructed as both use layers, although TCP/IP consists of just four layers, while the OSI model consists
of the following seven layers:
Layer 7, the application layer, enables the user -- software or human -- to interact with the application or
network when the user wants to read messages, transfer files or engage in other network-related
activities.
Layer 6, the presentation layer, translates or formats data for the application layer based on the
semantics or syntax that the app accepts.
Layer 5, the session layer, sets up, coordinates and terminates conversations between apps.
Layer 4, the transport layer, handles transferring data across a network and providing error-checking
mechanisms and data flow controls.
Layer 3, the network layer, moves data into and through other networks.
Layer 2, the data link layer, handles problems that occur as a result of bit transmission errors.
Layer 1, the physical layer, transports data using electrical, mechanical or procedural interfaces.
Conclusion: TCP/IP stands for Transmission Control Protocol/Internet Protocol and is a suite of
communication protocols used to interconnect network devices on the internet. TCP/IP is also
used as a communications protocol in a private computer network (an intranet or extranet).
Experiment No 8
Aim: Study and analysis of HIPERLAN protocol architecture.
Theory: High Performance LAN (HIPERLAN)
Contributed by Torben Rune, Netplan.
HIPERLAN is a European (ETSI) standardization initiative for a High Performance wireless Local
Area Network. Radio waves are used instead of a cable as a transmission medium to connect
stations. Either, the radio transceiver is mounted to the movable station as an add-on and no
base station has to be installed separately, or a base station is needed in addition per room.
The stations may be moved during operation pauses or even become mobile. The maximum data
rate for the user depends on the distance between the communicating stations. With short
distances (<50 m) and asynchronous transmission a data rate of 20 Mbit/s is achieved; with
up to 800 m distance a data rate of 1 Mbit/s is provided. For connection-oriented services,
e.g. video-telephony, at least 64 kbit/s is offered.
HIPERLAN
HIPERLAN is a European family of standards on digital high speed wireless communication in
the 5.15-5.3 GHz and the 17.1-17.3 GHz spectrum developed by ETSI. The committee
responsible for HIPERLAN is RES-10 which has been working on the standard since November
1991.
The standard serves to ensure the possible interoperability of different manufacturers'
wireless communications equipment that operate in this spectrum. The HIPERLAN standard
only describes a common air interface including the physical layer for wireless
communications equipment, while leaving decisions on higher level configurations and
functions open to the equipment manufacturers.
The frequencies allocated to HIPERLAN are part of the 5-5.30 GHz band, which had been
allocated globally for aviation purposes. The aviation industry only used the 5-5.15 GHz
frequencies, thus making the 5.15-5.30 GHz band accessible to the HIPERLAN standards.
HIPERLAN is designed to work without any infrastructure. Two stations may exchange data
directly, without any interaction from a wired (or radio-based) infrastructure. The simplest
HIPERLAN thus consists of two stations. Further, if two HIPERLAN stations are not in radio
contact with each other, they may communicate via a third station, which relays
messages between the two communicating stations.
Products compliant to the HIPERLAN 5 GHz standard shall be possible to implement on a
PCMCIA Type III card. Thus the standard will enable users to truly take computing power on
the road.
The HIPERLAN standard has been developed at the same time as the development of the
SUPER net standard in the United States.
HIPERLAN requirements
Short range - 50m
Low mobility - 1.4m/s
Networks with and without infrastructure
Support isochronous traffic
audio 32 kbit/s, 10 ms latency
video 2 Mbit/s, 100 ms latency
Support asynchronous traffic
data 10Mbps, immediate access
Quality of service
Performance is one of the most important factors when dealing with wireless LANs. In
contrast to other radio-based systems, data traffic on a local area network has a randomized
bursty nature, which may cause serious problems with respect to throughput.
Many factors have to be taken into consideration, when quality of service is to be measured.
Among these are:
The topography of the landscape in general
Elevations in the landscape that might cause shadows, where connectivity is unstable
or impossible.
Environments with many signal-reflection surfaces
Environments with many signal-absorbing surfaces
Quality of the wireless equipment
Placement of the wireless equipment
Number of stations
Proximity to installations that generate electronic noise
and many more
The sheer number of factors to take into consideration means that the physical environment
will always be a factor when trying to assess the usefulness of a wireless technology like
HIPERLAN.
Simulations show that the HIPERLAN MAC can simultaneously support
25 audio links at 32kbit/s, 10ms delivery
25 audio links at 16kbit/s, 20ms delivery
1 video link at 2Mbit/s, 100ms delivery
Asynch file transfer at 13.4Mbit/s
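These figures can be sanity-checked with simple arithmetic. The aggregate offered load comes to 16.6 Mbit/s, which sits below HIPERLAN/1's raw channel rate of roughly 23.5 Mbit/s (the raw-rate figure is quoted from the standard, not from this text):

```python
# Aggregate offered load implied by the simulation figures above.
audio_32 = 25 * 32_000        # 25 audio links at 32 kbit/s
audio_16 = 25 * 16_000        # 25 audio links at 16 kbit/s
video    = 1 * 2_000_000      # one video link at 2 Mbit/s
asynch   = 13_400_000         # asynchronous file transfer at 13.4 Mbit/s

total_bps = audio_32 + audio_16 + video + asynch
assert total_bps == 16_600_000   # 16.6 Mbit/s in total
```

The gap between the aggregate load and the raw channel rate is consumed by MAC overhead, backoff delays and retransmissions, which is why the simultaneous-support figures are meaningful benchmarks rather than trivial bookkeeping.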
Benchmarking HIPERLAN in practice
Once a new HIPERLAN installation is implemented, trying to benchmark it can easily become
a mind-boggling task.
Even though a spectrum analyzer can be used for initial evaluation and troubleshooting, the
factors influencing performance are so many and so complex, that initial benchmarking
should be based evenly on perceived performance and registered performance over a longer
period of time.
In contrast to cable-based LANs, the testing equipment has to find the communication stream
in the air, not on a physical cable, and it has to monitor several frequencies at once. On top of
that, the testing equipment itself can interfere with the signals it intends to monitor.
New HIPERLAN standards ahead
A second set of standards has been constructed for a new version of HIPERLAN - HIPERLAN2.
The idea of HIPERLAN2 is to be compatible with ATM.
Work is also under way to establish global sharing rules. The WINForum for
NII/SUPERNET in the US aims to support HIPERLAN 1 and HIPERLAN 2. This effort involves
interaction between ETSI RES10, the WINForum and the ATM Forum.
Procedure: Cybersecurity threats are acts performed by individuals with harmful intent, whose goal
is to steal data, cause damage to or disrupt computing systems. Common categories of cyber threats
include malware, social engineering, man in the middle (MitM) attacks, denial of service (DoS),
and injection attacks—we describe each of these categories in more detail below.
Cyber threats can originate from a variety of sources, from hostile nation states and terrorist
groups, to individual hackers, to trusted individuals like employees or contractors, who abuse
their privileges to perform malicious acts.
● Nation states—hostile countries can launch cyber attacks against local companies and
institutions, aiming to interfere with communications, cause disorder, and inflict damage.
● Terrorist organizations—terrorists conduct cyber attacks aimed at destroying or abusing
critical infrastructure, threaten national security, disrupt economies, and cause bodily
harm to citizens.
● Criminal groups—organized groups of hackers aim to break into computing systems for
economic benefit. These groups use phishing, spam, spyware and malware for extortion,
theft of private information, and online scams.
● Hackers—individual hackers target organizations using a variety of attack techniques.
They are usually motivated by personal gain, revenge, financial reward, or political activism.
Hackers often develop new threats to advance their criminal ability and improve their
personal standing in the hacker community.
● Malicious insiders—an employee who has legitimate access to company assets, and abuses
their privileges to steal information or damage computing systems for economic or
personal gain. Insiders may be employees, contractors, suppliers, or partners of the target
organization. They can also be outsiders who have compromised a privileged account and
are impersonating its owner.
Types of Cybersecurity Threats
Malware Attacks
Malware is an abbreviation of “malicious software”, which includes viruses, worms, trojans, spyware,
and ransomware, and is the most common type of cyberattack. Malware infiltrates a system, usually
via a link on an untrusted website or email or an unwanted software download. It deploys on the target
system, collects sensitive data, manipulates and blocks access to network components, and may
destroy data or shut down the system altogether.
Social Engineering Attacks
Social engineering involves tricking users into providing an entry point for malware. The victim provides
sensitive information or unwittingly installs malware on their device because the attacker poses as a
legitimate actor.
● Baiting—the attacker lures a user into a social engineering trap, usually with a promise of
something attractive like a free gift card. The victim provides sensitive information such
as credentials to the attacker.
● Pretexting—similar to baiting, the attacker pressures the target into giving up information under
false pretenses. This typically involves impersonating someone with authority, for example an
IRS or police officer, whose position will compel the victim to comply.
● Phishing—the attacker sends emails pretending to come from a trusted source. Phishing often
involves sending fraudulent emails to as many users as possible, but can also be more targeted.
For example, “spear phishing” personalizes the email to target a specific user, while “whaling”
takes this a step further by targeting high-value individuals such as CEOs.
● Vishing (voice phishing)—the imposter uses the phone to trick the target into disclosing
sensitive data or granting access to the target system. Vishing typically targets older individuals
but can be employed against anyone.
● Smishing (SMS phishing)—the attacker uses text messages as the means of deceiving the victim.
● Piggybacking—an authorized user provides physical access to another individual who “piggybacks”
off the user’s credentials. For example, an employee may grant access to someone posing as a
new employee who misplaced their credential card.
● Tailgating—an unauthorized individual follows an authorized user into a location, for example by
quickly slipping in through a protected door after the authorized user has opened it. This
technique is similar to piggybacking except that the person being tailgated is unaware that they
are being used by another individual.
Supply Chain Attacks
Supply chain attacks are a newer type of threat to software developers and vendors. Their purpose is to
infect legitimate applications and distribute malware via source code, build processes, or software
update mechanisms.
Attackers look for insecure network protocols, server infrastructure, and coding techniques, and
use them to compromise build and update processes, modify source code, and hide malicious content.
Supply chain attacks are especially severe because the applications being compromised by attackers are
signed and certified by trusted vendors. In a software supply chain attack, the software vendor is not
aware that its applications or updates are infected with malware. Malicious code runs with the same trust
and privileges as the compromised application.
Man-in-the-Middle (MitM) Attacks
A Man-in-the-Middle (MitM) attack involves intercepting the communication between two endpoints, such
as a user and an application. The attacker can eavesdrop on the communication, steal sensitive data, and
impersonate each party participating in the communication.
● Wi-Fi eavesdropping—an attacker sets up a Wi-Fi connection, posing as a legitimate actor, such
as a business, that users may connect to. The fraudulent Wi-Fi allows the attacker to monitor the
activity of connected users and intercept data such as payment card details and login credentials.
● Email hijacking—an attacker spoofs the email address of a legitimate organization, such as a
bank, and uses it to trick users into giving up sensitive information or transferring money to the
attacker. The user follows instructions they think come from the bank but are actually from the
attacker.
● DNS spoofing—a Domain Name Server (DNS) is spoofed, directing a user to a malicious website
posing as a legitimate site. The attacker may divert traffic from the legitimate site or steal the
user’s credentials.
● IP spoofing—an internet protocol (IP) address connects users to a specific website. An attacker
can spoof an IP address to pose as a website and deceive users into thinking they are interacting
with that website.
● HTTPS spoofing—HTTPS is generally considered the more secure version of HTTP, but can also
be used to trick the browser into thinking that a malicious website is safe. The attacker uses
“HTTPS” in the URL to conceal the malicious nature of the website.
Denial-of-Service (DoS) Attacks
A Denial-of-Service (DoS) attack overloads the target system with a large volume of traffic, hindering its
ability to function normally. An attack involving multiple devices is known as a distributed
denial-of-service (DDoS) attack.
● HTTP flood DDoS—the attacker uses HTTP requests that appear legitimate to overwhelm an
application or web server. This technique does not require high bandwidth or malformed packets,
and typically tries to force a target system to allocate as many resources as possible for each
request.
● SYN flood DDoS—initiating a Transmission Control Protocol (TCP) connection sequence involves
sending a SYN request that the host must respond to with a SYN-ACK that acknowledges the
request, and then the requester must respond with an ACK. Attackers can exploit this sequence,
tying up server resources, by sending SYN requests but not responding to the SYN-ACKs from the
host.
● UDP flood DDoS—a remote host is flooded with User Datagram Protocol (UDP) packets sent to
random ports. This technique forces the host to search for applications on the affected ports
and respond with “Destination Unreachable” packets, which uses up the host resources.
● ICMP flood—a barrage of ICMP Echo Request packets overwhelms the target, consuming both
inbound and outbound bandwidth. The server may try to respond to each request with an
ICMP Echo Reply packet, but cannot keep up with the rate of requests, so the system slows
down.
● NTP amplification—Network Time Protocol (NTP) servers are accessible to the public and can be
exploited by an attacker to send large volumes of UDP traffic to a targeted server. This is
considered an amplification attack due to the query-to-response ratio of 1:20 to 1:200, which
allows an attacker to exploit open NTP servers to execute high-volume, high-bandwidth DDoS
attacks.
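The SYN-flood mechanism described above can be pictured with a toy model. This is not a real TCP stack; the class, backlog size, and client names are illustrative, but the logic mirrors the text: half-open connections occupy backlog slots until the final ACK arrives, and an attacker who never ACKs starves legitimate clients.

```python
# Toy model of SYN-flood resource exhaustion (illustrative only).
class TinyServer:
    def __init__(self, backlog=4):
        self.backlog = backlog
        self.half_open = set()   # connections awaiting the final ACK

    def on_syn(self, client):
        if len(self.half_open) >= self.backlog:
            return "dropped"     # backlog full: new clients are refused
        self.half_open.add(client)
        return "syn-ack sent"    # server holds a slot for this client

    def on_ack(self, client):
        self.half_open.discard(client)  # handshake completes, slot freed
        return "established"

server = TinyServer(backlog=4)
# The attacker sends SYNs but never answers the SYN-ACKs...
for i in range(4):
    server.on_syn(f"attacker-{i}")
# ...so a legitimate client's SYN is now dropped.
print(server.on_syn("legit-user"))  # prints "dropped"
```

Real mitigations (SYN cookies, backlog tuning, timeouts) work precisely by not letting half-open connections hold resources indefinitely.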
Injection Attacks
Injection attacks exploit a variety of vulnerabilities to directly insert malicious input into the code of a
web application. Successful attacks may expose sensitive information, execute a DoS attack or
compromise the entire system.
● SQL injection—an attacker enters an SQL query into an end user input channel, such as a web form
or comment field. A vulnerable application will send the attacker’s data to the database, and will
execute any SQL commands that have been injected into the query. Most web applications use
databases based on Structured Query Language (SQL), making them vulnerable to SQL injection. A
new variant on this attack is NoSQL attacks, targeted against databases that do not use a relational
data structure.
● Code injection—an attacker can inject code into an application if it is vulnerable. The web
server executes the malicious code as if it were part of the application.
● OS command injection—an attacker can exploit a command injection vulnerability to input
commands for the operating system to execute. This allows the attack to exfiltrate OS data or
take over the system.
● LDAP injection—an attacker inputs characters to alter Lightweight Directory Access Protocol
(LDAP) queries. A system is vulnerable if it uses unsanitized LDAP queries. These attacks are very
severe because LDAP servers may store user accounts and credentials for an entire organization.
● XML External Entities (XXE) injection—an attack is carried out using specially constructed XML
documents. This differs from other attack vectors because it exploits inherent vulnerabilities in
legacy XML parsers rather than unvalidated user inputs. XML documents can be used to traverse
paths, execute code remotely and execute server-side request forgery (SSRF).
● Cross-Site Scripting (XSS)—an attacker inputs a string of text containing malicious JavaScript. The
target’s browser executes the code, enabling the attacker to redirect users to a malicious
website or steal session cookies to hijack a user’s session. An application is vulnerable to
XSS if it doesn’t sanitize user inputs to remove JavaScript code.
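The SQL injection bullet above can be demonstrated in a few lines. This is a minimal sketch using Python's built-in sqlite3 module; the table name and data are made up for the demo. It shows why string-built queries are vulnerable and why parameterized queries defeat the attack:

```python
import sqlite3

# Set up a throwaway in-memory database (illustrative data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "a1"), ("bob", "b2")])

malicious = "nobody' OR '1'='1"

# Vulnerable: attacker input is pasted into the SQL string, so the
# injected OR clause is executed and every row is returned.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % malicious).fetchall()

# Safe: with a ? placeholder the input is treated as a literal value,
# so the query matches nothing.
rows2 = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()

print(len(rows), len(rows2))  # prints "2 0"
```

The same principle (never concatenate untrusted input into a query or command) is the core defense against the other injection variants listed above.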
Conclusion: Organizations are finding themselves under the pressure of being forced to react
quickly to the dynamically increasing number of cybersecurity threats. Since the attackers have been
using an attack life cycle, organizations have also been forced to come up with a vulnerability
management life cycle. The vulnerability management life cycle is designed to counter the efforts
made by the attackers in the quickest and most effective way. This chapter has discussed the
vulnerability management life cycle in terms of the vulnerability management strategy. It has gone
through the steps of asset inventory creation, the management of information flow, the assessment of
risks, assessment of vulnerabilities, reporting and remediation, and ...
Experiment No. 10
Aim: - Implementation of Code Division Multiple Access.
Introduction: - CDMA refers to a multiple access method in which the individual
terminals use spread-spectrum techniques and occupy the entire spectrum whenever
they transmit. This feature makes CDMA different from frequency division multiple
access (FDMA) and from time division multiple access (TDMA). In FDMA each user is
given a small portion of the total available spectrum, and in TDMA each user is
allowed full use of the available spectrum, but only during certain periods in time.
Code Division Multiple Access is a modulation and multiple access scheme based on
spread-spectrum communication technology. It is a well-established technology that was
applied to digital cellular radio and wireless communication systems in the early
1990s. Capacity concerns in major markets and the industry's need for efficient and
economical wireless communication were the most significant drivers for the
development of CDMA cellular technology.
CDMA is a method in which users share time and frequency allocations, and are
channelized by unique assigned codes. The signals of different users are separated at
the receiver by using a correlator that captures signal energy only from the desired
user or channel. Undesired signals contribute only to noise and interference. Figure 1
illustrates the principle of the CDMA technique.
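The principle of code separation by a correlator can be shown with a toy direct-sequence example. The 4-chip Walsh codes, bit values, and chip-synchronous equal-power channel below are simplifying assumptions made for illustration:

```python
# Toy DS-CDMA: two users share the channel using orthogonal
# spreading codes; a correlator at the receiver recovers each user.
code_a = [1, 1, 1, 1]      # Walsh code for user A (illustrative)
code_b = [1, -1, 1, -1]    # Walsh code for user B, orthogonal to A

def spread(bits, code):
    # Each data bit (+1/-1) is multiplied by the whole code sequence.
    return [b * c for b in bits for c in code]

def despread(signal, code):
    # Correlate: multiply by the code and sum over each bit period.
    n = len(code)
    out = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        out.append(1 if corr > 0 else -1)
    return out

bits_a = [1, -1, 1]
bits_b = [-1, -1, 1]
# The two transmitted signals simply add in the air.
channel = [x + y for x, y in
           zip(spread(bits_a, code_a), spread(bits_b, code_b))]

print(despread(channel, code_a))  # prints [1, -1, 1]
print(despread(channel, code_b))  # prints [-1, -1, 1]
```

Correlating with user A's code captures only A's energy; user B's chips sum to zero over each bit period, which is exactly the "undesired signals contribute only to noise and interference" behavior described above.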
The development of the CDMA technique dates back to the early 1950s when different
studies of the spread-spectrum technologies were started. The first era in CDMA
history consisted of introducing basic ideas of the CDMA by Claude Shannon and
Robert Pierce in 1949. In 1950 DeRosa and Rogoff defined the direct-sequence spread-spectrum
method, the processing gain equation, and a noise multiplexing idea. Price
and Green filed the RAKE receiver patent in 1956. In 1961 Magnuski identified the near-
far problem, which is crucial for CDMA systems. During the 1970s several military and
navigation applications were developed. [1]
The second CDMA era introduced studies focusing on narrowband systems. In 1978
Cooper and Nettleton suggested a cellular spread-spectrum application. During the
1980s communication company Qualcomm investigated narrowband CDMA
techniques for cellular applications, and the result was that in 1993 the
CDMA IS-95 standard was developed [1]. Compared to third generation CDMA
systems IS-95 can be considered a narrowband CDMA system with 1.2288 Mchip/s
carrier chip rate. Third generation wideband CDMA systems, such as CDMA IS-2000
and European WCDMA use higher chip rates than CDMA IS-95.
There are three basic spread-spectrum techniques: direct sequence (DS), frequency
hopping (FH), and time hopping (TH). A variety of hybrid techniques also use different
combinations of these basic techniques. With direct-sequence spreading, the original signal is
multiplied by a known signal of much larger bandwidth. With frequency-hopped spreading, the
center frequency of the transmitted signal is varied in a pseudorandom pattern.
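The frequency-hopping idea can be sketched in a few lines: transmitter and receiver share a seed (standing in for the hopping code) and therefore derive the same pseudorandom sequence of center frequencies. The channel list and seed are made-up illustrative values:

```python
import random

# Example channel set (GHz); purely illustrative values.
channels = [2.402, 2.426, 2.450, 2.474]

def hop_sequence(seed, n_hops):
    # Both ends seed an identical PRNG, so they derive the same
    # pseudorandom hop pattern without exchanging it over the air.
    rng = random.Random(seed)
    return [rng.choice(channels) for _ in range(n_hops)]

tx_hops = hop_sequence(seed=42, n_hops=8)  # transmitter's pattern
rx_hops = hop_sequence(seed=42, n_hops=8)  # receiver's pattern
assert tx_hops == rx_hops  # both sides stay in step, hop after hop
```

A narrowband eavesdropper or jammer parked on one channel only sees the signal for a fraction of the time, which is the robustness property that motivates FH spreading.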
A. Processing Gain
Combining a bit stream of information with an independent pseudo-random code
sequence by simple multiplication carries out the spreading operation. One of the
main parameters of a spread-spectrum communication system is the processing gain,
Gp. It is the ratio of the transmitted bandwidth, Bt, to the information bandwidth, Bi, as
given in equation 1 [1]:

Gp = Bt / Bi    (1)
Gp is also called the spreading factor. This processing gain or spreading factor determines the
maximum number of simultaneous users or connections allowed in a communication system.
It determines the level of protection against multipath interference signals and signal
detection capabilities of a spread spectrum communication system. In multipath situations
the receiver observes spread spectrum signals summed with narrowband interference. The
processing gain determines the power ratio of the desired signal and interference after de-
spreading. Higher desired signal power leads to easier detection. It can be seen that low data
rates such as speech have high processing gain compared to high data rates.
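As a worked example of equation 1 (values assumed for an IS-95-style narrowband CDMA system): spreading a 9.6 kbit/s speech channel to the 1.2288 Mchip/s chip rate mentioned above gives a processing gain of 128, or roughly 21 dB.

```python
import math

def processing_gain(bt_hz, bi_hz):
    """Processing gain Gp = Bt / Bi (equation 1), linear and in dB."""
    gp = bt_hz / bi_hz
    return gp, 10 * math.log10(gp)

# 9.6 kbit/s speech spread over a 1.2288 Mchip/s carrier (assumed values)
gp, gp_db = processing_gain(1.2288e6, 9600)
print(gp, round(gp_db, 1))  # prints "128.0 21.1"
```

Doubling the user data rate halves Gp, which is why the text notes that low-rate services such as speech enjoy a higher processing gain than high-rate services.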
The CDMA channel is nominally 1.23 MHz wide. CDMA networks use a scheme
called soft handoff, which minimizes signal breakup as a handset passes from one cell
to another. The combination of digital and spread spectrum modes supports several
times as many signals per unit of bandwidth as analog modes. CDMA is compatible with
other cellular technologies; this enables nationwide roaming. The original CDMA
standard, also known as CDMA One, offers a transmission speed of only up to 14.4
kilobits per second in its single channel form and up to 115 Kbps in an eight-channel
form. CDMA2000 and Wideband CDMA (W-CDMA) deliver data many times faster.
GSM and CDMA are multiple-access technologies that enable numerous data connections and
multiple calls on a single radio channel. CDMA cellular systems use a unique code to encode
every call's data and then transmit all those calls at once. On the other end, the receivers divide
the combined signal into their individual calls before channeling them to the intended recipient.
GSM transforms every call into digital data, transmits it via a shared channel at a specific time
and then puts each call back together at the other end of the line for the intended recipient.
Conclusion:
CDMA refers to a multiple access method in which the individual terminals use spread-spectrum
techniques and occupy the entire spectrum whenever they transmit. This feature
makes CDMA different from frequency division multiple access (FDMA) and from time division
multiple access (TDMA). In FDMA each user is given a small portion of the total available
spectrum, and in TDMA each user is allowed full use of the available spectrum, but only
during certain periods in time.