
Localization Methods in WSN


Technological advances in electronics have led to highly efficient, low-power, integrated communication devices and sensors. Sensors can be spread throughout a region to build a network for applications such as environmental observation, habitat monitoring, intrusion detection, military surveillance and so on. Sensor networks have become a very active research topic due to their emerging importance in many personal, home, industrial, agricultural and medical applications.


Recent advances in electronics and wireless communications have led to the development of tiny, low-cost, low-power sensors. Besides these, there are larger, high-bit-rate sensors such as web cams and pressure gauges, which are used in practical sensing applications such as finding free parking spaces. Researchers in neural networks and artificial intelligence are also trying to embed some intelligence into today's sensors. All of these sensor types observe a physical phenomenon such as temperature or humidity and do some processing and filtering on the sensed data. The sensors are spread over a region to build a sensor network, and the sensors in a region cooperate with each other to sense, process, filter and route data. A sensor node usually contains a sensing unit, a processing unit and a communication unit; some sensor nodes also embed a mobility unit and a location-detection unit.

Like traditional computer networks, sensor networks can be analyzed in terms of the seven OSI layers, which remain a natural frame of analysis for any kind of network, although the points of emphasis differ.

For tiny, low-power sensors the most important issue is power consumption. To make such sensor networks useful, power consumption must be addressed: all protocols and applications for sensor networks must take it into account and do their best to minimize it.

Sensor networks are somewhat different from traditional networks because sensor nodes are very prone to failure. As sensor nodes die, the topology of the network changes frequently. Algorithms for sensor networks should therefore be robust and stable, and should keep working in the presence of node failures. When mobility is introduced, maintaining robustness and consistent topology discovery becomes much more difficult. Moreover, a sensor network may pack a huge number of sensors into a small area, and most sensor networks use broadcasting for communication, whereas traditional and ad hoc networks use point-to-point communication. Routing protocols must be designed with these issues in mind as well.

Localization Methods in WSN

Localization in wireless sensor networks is about knowing the location of any

network node at any time. Nodes can be either mobile, which means
that their location can change, or static. The focus of this report is on localizing
mobile nodes.

In this report, it is assumed that sensor nodes only consist of a radio, a

processor, memory and a power supply. All additional hardware that might be
needed for doing localization, such as infrared sensors or ultrasound
transceivers, is considered extra in this context.

Most localization algorithms assume the presence of a few nodes with prior
knowledge of their location: anchor nodes or simply anchors. The position of the
other nodes is determined through interaction with or relative to the anchors.
From here on, nodes of which the position needs to be determined are referred
to as unknown nodes.

Trilateration is a common mathematical technique that is used to compute an

unknown node's location from the combination of distance estimates of the
node to anchors and location information of these anchors. Geometry is applied
to determine the unknown node's coordinates, as shown in Figure 1. A virtual
circle with a radius equal to the distance estimate between the anchor and the
unknown node is drawn around every anchor. The unknown node is then
located at the intersection of all three circles.

Figure 1. A visual representation of the trilateration algorithm.
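The circle-intersection step of trilateration can be sketched in code. The snippet below is a minimal illustration (not part of the original report; all names are our own): subtracting the first anchor's circle equation from the other two yields a 2x2 linear system, which is solved directly.

```python
import math

def trilaterate(anchors, distances):
    """Estimate (x, y) from three anchor positions and distance estimates.

    Subtracting the circle equation of the first anchor from those of the
    other two linearizes the problem into a 2x2 system A p = b, solved
    here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    # Coefficients derived from (x - xi)^2 + (y - yi)^2 = ri^2
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        raise ValueError("anchors are collinear; position is ambiguous")
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

With noisy distance estimates the three circles no longer meet in a single point; in practice the same linear system is then solved in a least-squares sense over more than three anchors.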

Below we focus on the techniques and algorithms that are currently available
for doing localization in wireless sensor networks. For all of them, a short
description is given, followed by an overview of the main characteristics. The
methods presented here form the basis for determining the technique(s) to be
used for the localization mechanisms.

Localization using GPS

Localization systems for WSNs can be based on the Global Positioning System
(GPS), which is a satellite-based localization infrastructure. At any location on
earth, a GPS-receiver can be localized using information of at least four GPS-
satellites. The receiver computes the time-of-flight of the different satellite
signals as the difference between its local time and the time the signals were
sent and converts the times into distance estimates. The receiver also
determines the satellites' locations from their radio signals and an internal
satellite database. From this knowledge, the receiver's position is derived using
trilateration, generally with an accuracy of about ten meters. GPS can easily be used in sensor networks by equipping the sensor nodes with GPS-receivers.
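The time-to-distance conversion described above amounts to scaling the travel time by the speed of light. A minimal sketch (illustrative names; receiver clock bias is ignored here, although real GPS estimates it as a fourth unknown, which is why at least four satellites are needed):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def pseudorange(t_sent, t_received):
    """Convert a satellite signal's travel time into a distance estimate.

    t_sent is the transmission time carried in the satellite signal and
    t_received the receiver's local reception time.
    """
    return SPEED_OF_LIGHT * (t_received - t_sent)
```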

Nevertheless, GPS-based localization in sensor networks has some

disadvantages. The first problem is that a GPS-receiver consumes a lot of
energy, which is known to be a scarce resource on a sensor node. The next
problem relates to radio signal propagation in an indoor environment: walls,
floors and furniture can disturb or even entirely block the satellite signals. It is
often a problem to even detect four satellite signals in an indoor environment.
In any case, bad distance estimates result and therefore localization errors are
large indoors. A final disadvantage is the high price for equipping all nodes in a
network with expensive GPS-receivers.

Localization using Infrared

Localization information in a WSN can also be acquired by equipping the sensor

nodes with infrared sensors. Throughout the environment, anchor nodes
equipped with infrared receivers are installed. Any unknown node sends an
infrared signal at regular intervals. Depending on the sender's location, the
signal is detected by a limited number of (different) anchors. Based on this
knowledge, the sender's position can be roughly estimated. Room-level
granularity is the best accuracy currently obtained with this method.

The infrared-based solution is suitable for both indoor and outdoor use, but
because of the short range of infrared signals, many nodes with receivers are
required. This makes the solution quite expensive for large areas. Another
disadvantage of the method is the inaccuracy caused by multipath effects and

line-of-sight requirements. The first is responsible for false positives: a reflected

signal is received instead of a direct one and a receiver incorrectly assumes the
sender is within line-of-sight. The second one occurs when there is an object
between the sender and the receiver. The sender's signal is not detected, which
results in a false negative. Both problems lead to incorrect conclusions about
the sender's location.

Localization using Sound

Sound signals can also be used for localization purposes in wireless sensor
networks. For that, sensor nodes need to be equipped with sound transceivers.
In general, ultrasound is used: it is less intrusive since it is not audible for
human beings.

The first category of algorithms using sound is based on the time-of-arrival or

round-trip-time of a sound signal between an unknown node and an anchor.
Both methods take a timestamp the moment the sound signal is sent.
Depending on the method, the second timestamp is taken the moment the
signal arrives at the other node or back at the sending one. The timestamps are
used to calculate the sound signal's travel time. The time-of-arrival method
requires the nodes in a network to be synchronized, since it uses timestamps
taken by different nodes. The distance between the node and the anchor is
estimated by multiplying the speed of sound by the travel time. Finally, the
unknown node's location is derived by using for example trilateration.
Localization errors of tens of centimeters can be acquired with this technique.
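Both timestamping schemes reduce to a one-line conversion. The sketch below is illustrative (function names are our own) and assumes the speed of sound in air at roughly room temperature:

```python
SPEED_OF_SOUND = 343.0  # metres per second, air at about 20 degrees C (assumed)

def toa_distance(t_sent, t_arrived):
    """Time-of-arrival: the two timestamps come from different,
    synchronized nodes."""
    return SPEED_OF_SOUND * (t_arrived - t_sent)

def rtt_distance(t_sent, t_back, t_turnaround=0.0):
    """Round-trip time: both timestamps are taken by the sender, so no
    synchronization is needed; the (turnaround-corrected) round trip is
    halved to get the one-way distance."""
    return SPEED_OF_SOUND * (t_back - t_sent - t_turnaround) / 2.0
```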

An alternative approach using sound is to use time-difference-of-arrival

information, where sound signals are combined with radio signals. The principle
of the method is as follows: at regular intervals, anchors simultaneously send a
radio message and a sound signal. Unknown nodes receive the radio message
and somewhat later they detect the sound signal. Based on the knowledge of
the speeds of light and sound, a time-difference of arrival between the two
signals is computed. From that, a distance estimate to the anchor is derived.
The location of the unknown node is then determined using one of the above
mathematical techniques, with an accuracy of centimeters.
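The time-difference-of-arrival computation can be sketched as follows (illustrative names; the constants are assumptions). Because light travels about a million times faster than sound, the radio message effectively marks the emission instant and the distance is very nearly the speed of sound times the measured gap:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second
SPEED_OF_SOUND = 343.0          # metres per second, air at about 20 degrees C (assumed)

def tdoa_distance(delta_t):
    """Distance from the time gap between radio and sound arrivals.

    Solving d / v_sound - d / v_light = delta_t for d. In practice
    d ~= v_sound * delta_t is already an excellent approximation.
    """
    return delta_t / (1.0 / SPEED_OF_SOUND - 1.0 / SPEED_OF_LIGHT)
```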

The main disadvantage of the time-of-arrival method is the need for an

accurate synchronization of the sensor nodes. This has proven to be very hard
in sensor networks, due to energy constraints and inaccurate processor clocks.
Radio messages are needed to synchronize between the different nodes, but
they introduce small errors in the time schemes because of the latency inherent
to radio communication. The small deviations in the time schemes cause errors
in the calculation of the signals' travel time, and small errors in the latter cause
large localization errors. A common disadvantage of both the time-of-arrival and the time-difference-of-arrival methods is that extra hardware is needed.

Ultrasound transceivers are still quite expensive and they increase the form factor of a sensor node by at least a factor of two.

Radio-based localization

Localization in sensor networks can be achieved using knowledge about the

radio signal behaviour and the reception characteristics between two different
sensor nodes. The quality of a radio signal, i.e. its strength at reception time, is
expressed by the received signal strength indicator (RSSI): the higher the RSSI-
value, the better the signal reception. The main advantage of using radio-based
localization techniques is that no additional hardware for the sensor nodes is
required. The main disadvantage of the technique is that the measured signal
strengths are generally unstable and variable over time, which leads to
localization errors.

In this section, two common localization techniques using radio signal strength
information are presented. Afterwards, the proximity idea is discussed, a
technique that takes into account the range of radio communication rather than
its quality. Finally, a technique for analyzing the RSSI behaviour over time is
presented. The technique cannot be used for localization itself, but it can
provide useful mobility information about the node to be located.

Converting Signal Strength to Distance

In theory, the strength of a signal sent out by a radio decays with the distance the signal has travelled according to a known path-loss relation, as shown in Figure 2. In reality, this correlation has proven to be less than perfect, but it still exists.

Figure 2. The mathematical relation between signal strength and travelled distance.


The above relation forms the basis for the first RSSI-based localization
technique. Anchors broadcast their position at regular intervals. Unknown nodes
receive the message and measure the strength of the received signal. This
signal strength is converted to a distance estimate using the path-loss relation shown above. Trilateration then converts the distance estimates between the anchors and the unknown node into coordinates for the latter.
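A minimal sketch of the RSSI-to-distance conversion, assuming the widely used log-distance path-loss model; the reference RSSI at one metre and the path-loss exponent are environment-dependent calibration values, and the numbers below are illustrative assumptions only:

```python
def rssi_to_distance(rssi, rssi_at_1m=-45.0, path_loss_exp=2.5):
    """Invert the log-distance path-loss model
        RSSI(d) = RSSI(1 m) - 10 * n * log10(d)
    to obtain a distance estimate in metres from a measured RSSI in dBm.
    """
    return 10 ** ((rssi_at_1m - rssi) / (10.0 * path_loss_exp))
```

The path-loss exponent n is roughly 2 in free space and higher indoors, which is one reason indoor estimates are less reliable.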

Localization errors for this method range from two to three meters on average,
with indoor errors being larger than outdoor ones. The main reason for the large
errors is that the effective radio-signal propagation properties differ from the
perfect theoretical relation that is assumed in the algorithm. Reflections, fading
and multipath effects largely influence the effective signal propagation. The
distance estimates, which are based on the theoretical relation, are thus
inaccurate and lead to high errors in the calculated locations.

Fingerprinting Signal Strengths

The second method that uses RSSI for localization is called fingerprinting. This
technique is based on the specific behaviour of radio signals in a given
environment, including reflections, fading and so on, rather than on the
theoretical strength-distance relation.

The fingerprinting technique is an anchor-based technique that consists of two

separate phases. During the first phase, called the offline phase, a fingerprint
database of the environment is constructed. A node is put at a number of
predefined points in the deployment area to record the fluctuations in signal
strength at these specific points. At each location, the node sends a number of
messages and all anchors measure the signal strength of the received
messages, or the other way around. The combination of the RSSI-values
measured by the different anchors when the node is at a certain location forms
the fingerprint of this location: a series of RSSI-values that are representative
for that particular location. Per location, a number of fingerprints is stored in a
database, needed by the second phase.

During the next phase, called the online phase, real-time localization is
performed. An unknown node has to be localized in the deployment area. The
unknown node broadcasts a message at regular intervals and the anchors
measure the signal strength upon reception of a message. The measured RSSI-
values are combined into a RSSI-sample. Afterwards, the best matches between
the values in the RSSI-sample and the values stored in the database are
searched for. The resulting matches determine the final position of the unknown
node. Its location could either be the value of the closest match or an average
of a few best matches. The specific algorithm used for matching is not relevant here.
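The matching step can be sketched with a simple k-nearest-neighbour search in RSSI space; this is only one possible choice of matching algorithm, and the data layout below is our own assumption:

```python
import math

def locate(sample, database, k=3):
    """Match an online RSSI sample against an offline fingerprint database.

    `database` maps a location (x, y) to its fingerprint, a tuple of
    per-anchor RSSI values; `sample` is a tuple of the same length. The
    position estimate is the average of the k closest fingerprints'
    locations.
    """
    ranked = sorted(
        database.items(),
        key=lambda item: math.dist(sample, item[1]),  # Euclidean distance in RSSI space
    )
    best = ranked[:k]
    x = sum(loc[0] for loc, _ in best) / len(best)
    y = sum(loc[1] for loc, _ in best) / len(best)
    return x, y
```

Using k = 1 returns the single closest match; a larger k averages a few best matches, as described above.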

The main advantage of using RSSI this way is that the unpredictable RSSI
variations in space are handled, which makes the approach a little more

accurate. Errors using this method are reduced to an average of one to two
meters. The greatest disadvantage of the method is that an offline phase is
required for the system to work. First of all, the offline phase is very time-consuming. Moreover, the fingerprinting database that is created during the
offline phase is location dependent. If one wants to use the same system in
another environment or if radical changes to the current environment are
made, the offline phase has to be repeated.

Proximity-based localization

Proximity-based localization systems are an anchor-based solution to the

localization problem. These systems derive their location data from connectivity
information of the network. Knowledge about whether two devices, i.e. an
unknown node and an anchor, in the network are within communication range
is transformed into an assumption about their mutual distance and location.
The technique is based on the existence of a maximum communication range
for a node sending at a given power. Using a proximity-based algorithm, coarse-
grained localization can be achieved.

The location information can be refined by also measuring the strength of the
radio signals between the nodes that are within range of each other. The signal
strength can be translated into an estimate of the distance between the two
nodes, using for example statistical methods. By combining the location
information of the anchors with the distance estimates, the location of an
unknown node can be roughly determined. This refinement of the above
technique can reduce the errors by 50%.
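A well-known connectivity-based scheme of this kind is the centroid method proposed in [3]. The sketch below shows both the plain centroid of the in-range anchors and an RSSI-weighted refinement; the weighting scheme shown is an illustrative assumption, not the specific method of the report:

```python
def centroid(anchors_in_range):
    """Coarse proximity-based estimate: the centroid of all anchors whose
    radio messages the unknown node receives."""
    n = len(anchors_in_range)
    x = sum(a[0] for a in anchors_in_range) / n
    y = sum(a[1] for a in anchors_in_range) / n
    return x, y

def weighted_centroid(anchors, weights):
    """Refinement: weight each in-range anchor by a quantity derived from
    its RSSI, so that anchors with stronger signals (presumably closer)
    pull the estimate towards them."""
    total = sum(weights)
    x = sum(a[0] * w for a, w in zip(anchors, weights)) / total
    y = sum(a[1] * w for a, w in zip(anchors, weights)) / total
    return x, y
```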

A disadvantage of the method is that its performance will most probably decrease in a network with high message loss rates: when messages are lost, the algorithm can no longer conclude that a specific node is out of the sender's radio range from the fact that the node does not receive the sender's messages.

Analysis of the Radio Signal Strength Behaviour over Time

Performing an analysis of the radio signal strength behaviour during a longer

period of time, i.e. a few seconds, can provide additional information about the
mobility status of a sensor node. Research has pointed out that the
variance of the signal strength is much larger when a node is moving than
when it is static. Knowledge about a person's mobility pattern does not provide
any real location information, but it is useful in combination with other
localization algorithms. In [2], the technique is combined with fingerprinting.
The inference of mobility information as well as location information from the
radio signal is done using a Hidden Markov Model (HMM). The algorithm in [2]

leads to a median localization error of 1.5 meters and tells whether a node is in
motion or not with an accuracy of 87%.
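The variance test underlying this idea can be sketched as follows; the window length and the threshold are environment-dependent calibration values, and the default below is an illustrative assumption only:

```python
from statistics import variance

def is_moving(rssi_window, threshold=4.0):
    """Classify a node as moving when the sample variance of its RSSI
    readings over a short window (a few seconds of samples) exceeds a
    calibrated threshold (in dB^2)."""
    return variance(rssi_window) > threshold
```

This only yields a binary mobility status, which is why, as noted above, it is useful in combination with a localization algorithm rather than on its own.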


[1] N. Bulusu, D. Estrin, L. Girod, and J. Heidemann. Scalable coordination for wireless sensor networks: Self-configuring localization systems. International Symposium on Communication Theory and Applications, 2001.

[2] N. Bulusu, D. Estrin, and J. Heidemann. Tradeoffs in location support systems: The case for quality-expressive location models for applications. In Proc. of the Ubicomp 2001 Workshop on Location Modeling, Oct 2001.

[3] N. Bulusu, J. Heidemann, and D. Estrin. GPS-less low-cost outdoor localization for very small devices. IEEE Personal Communications Magazine, 7(5):28–34, Oct 2000.

[4] N. Bulusu, J. Heidemann, and D. Estrin. Adaptive beacon placement. In Proc. of the 21st International Conference on Distributed Computing Systems (ICDCS-21), pages 489–498, Apr 2001.

[5] N. Bulusu, J. Heidemann, D. Estrin, and T. Tran. Self-configuring localization systems: Design and experimental evaluation. ACM Transactions on Embedded Computing Systems (ACM TECS), 2003.

[6] V. Bychkovskiy, N. Bulusu, D. Estrin, and J. Heidemann. Scalable, ad hoc deployable, RF-based localization. In Proc. of the Grace Hopper Conference on Celebration of Women in Computing, Oct 2002.

[7] J. C. Chen, K. Yao, and R. E. Hudson. Source localization and beamforming. IEEE Signal Processing Magazine, 19(2), Mar 2002.