Energy Optimization of Wireless Visual Sensor Networks with the Consideration of the Desired Target Coverage

Reza Ghazalian, Ali Aghagolzadeh, Senior Member, IEEE, and Seyed Mehdi Hosseini Andargoli, Member, IEEE

Abstract—Wireless visual sensor networks (WVSNs) have recently seen dramatic growth with the development of technology. These networks consist of a number of smart visual sensors (VSes) that collect visual information, i.e., images and video captured in the network area. Optimizing energy consumption and coverage are important and contradictory challenges in WVSNs, since an increase in coverage leads to an increase in energy consumption. Therefore, this paper addresses the optimization of energy consumption while maintaining the image quality defined by the user or operator located at the sink (the quality of experience, QoE) for target tracking applications. The target coverage, as well as the quality of the received image of the target, is considered as the desired QoE. A novel two-dimensional target coverage model is also presented mathematically. This model is described as a function of the inherent VS parameters, the target position, and the visual sensor position. Based on a convex optimization framework, a heuristic approach for VS selection and focal length adjustment is suggested to solve the optimization problem while maintaining high image quality. Simulation results are presented to verify the capability and efficiency of the proposed method in comparison with the optimal method (exhaustive search).

Index Terms—Wireless Visual Sensor Network, Energy Optimization, Visual Sensor Selection, Coverage, Focal Length,
Convex Optimization


1 INTRODUCTION

Recent developments in processing and camera technologies have increased researchers' attention toward WVSNs [1]. In this regard, optimizing energy consumption and coverage are important and contradictory challenges. The more active VSes there are, the higher the coverage: QoE is satisfied when more active VSes provide coverage of the targets from different angles of view. In other words, the probability of satisfying the QoE increases as the number of active VSes increases. At the same time, the more active VSes there are, the higher the energy consumption; sensing the targets with several VSes improves the quality of the sensed coverage but also consumes more energy [2]. Therefore, the tradeoff between energy consumption and coverage must be taken into account. Simply minimizing the energy consumption also minimizes the targets' coverage, which is not suitable for satisfying the desired quality. This is the challenging issue examined in this paper: the key goal is to activate the minimal number of suitable VSes such that the coverage quality constraint is met. In other words, this paper addresses energy optimization while maintaining QoE in the target tracking scenario, where the targets' covered surface is used as the coverage model.

————————————————
• Reza Ghazalian, Ali Aghagolzadeh, and Seyed Mehdi Hosseini Andargoli are with the Department of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran.
E-mail: ghazalian66@gmail.com, {aghagol, smh_andargoli}@nit.ac.ir

1.1 MOTIVATION
In WVSNs, the VSes use batteries as their energy source. These sensors, equipped with transceivers, transfer a huge volume of visual data captured in the network area to the sink [1]. Battery replacement incurs time and financial costs and is impractical in some situations [3]. Therefore, an appropriate energy optimization technique is essential. In the target tracking application, VS selection is one of the practical methods used for energy optimization. The key goals of the WVSN are network area coverage and visual data gathering, which must satisfy the user quality requirement expressed as QoE. However, enhancing QoE requires much more energy. So, this paper investigates WVSN energy optimization with regard to maintaining QoE in the target tracking scenario. Although the sink and the VSes are considered immobile in the simulation section of this paper, the proposed method can also be applied in a scenario in which the sink and the VSes are mobile; no modification of the method is needed, since its main inputs are the distances of each VS from the sink and from the target. Providing these parameters at each moment, the proposed method can be applied in a mobile scenario. It should be noted that these

measurements must be made within each coherence time, during which the calculations remain valid. The coherence time depends on the target's velocity, the delay of setting the command parameters (such as the focal length setting), and the CPU time of the sink's processor.

Regarding QoE, the user is satisfied when the service is delivered successfully. QoE reflects how well the service provided by the network fulfills the user's expectations and extends the concept of quality of service (QoS). In a target tracking scenario, one of the most important QoE factors is the coverage of the target(s) provided by the VSes. In real-world scenarios, especially in monitoring and surveillance, covering the target from different angles of view is vital. However, coverage alone is not a suitable metric for QoE; the quality of the images captured from the target is also a substantial element of QoE. So, the target's coverage together with the quality of the captured images is considered as QoE. In a tracking scenario, the network's user follows the target with qualified coverage and satisfactory quality of the captured images; thus, the target's coverage along with the quality of the received images of the target(s) is an appropriate QoE for target tracking and monitoring. Besides, without QoE constraints, turning off all VSes would be the optimal solution, since the energy consumption of the network is minimized when every VS consumes zero energy. Of course, this is not an acceptable operating point in a practical scenario.

1.2 RELATED WORKS
Soro and Heinzelman [4] have presented two camera selection methods which provide any viewpoint the user desires in the network area. In the first method, a VS is selected when the angle between the view direction chosen by the user and the VS view direction is minimized. In the second method, the covered viewpoint volume is maximized as the criterion for camera selection. To maximize the network area coverage, a camera selection algorithm has been presented by Park et al. [5], in which a lookup table is created for all the possible locations corresponding to all the possible angles for each camera. Based on the coverage area volume, the lookup table constantly determines the rank of each camera, and the camera node with the higher rank is selected; this method is used for dynamic networks. Using a greedy algorithm, Fusco and Gupta [6] have presented an efficient camera node selection method, with k-coverage by the least number of cameras as the criterion for selecting camera nodes and determining their directions. Hoshmand et al. [7] have suggested a VS selection method which maximizes the lifetime of WVSNs by defining a priority assignment function; this function compromises between a sensor's residual energy, its amount of coverage, and the intersection of coverage areas. Hosseini et al. [8], [9] have proposed a visual sensor selection method for the multi-target detection scenario. Sensor nodes are selected such that the target coverage corresponding to them is maximized, and the optimal parameters of the pan-tilt-zoom visual sensors are obtained; this method uses binary integer programming to optimize the sensor selection and the final configuration. Amiri et al. have proposed a new VS selection method with efficient node collaboration and data routing to the sink to maximize the lifetime of WVSNs [10]; the reduction of redundant multiple coverage is the most important attribute of these algorithms. Cai et al. have proposed methods to maximize the lifetime of visual sensor networks [11] which set the sensor directions so as to cover the maximum number of targets in the network area. A coverage model for WVSNs has been suggested by Yen et al. [12]; using sensors that can cover a large number of nodes is essential to implement this iterative technique. Zennat et al. have presented a method to solve the target coverage problem in visual sensor networks which extracts a priority function for target coverage [13]: a target that can be covered with fewer cameras is selected as the top priority for camera coverage, a process that continues until all targets are covered. Amjad et al. have provided a framework for specifying the depth of field (DOF) of VSes in a way that optimizes energy consumption [14]; several factors, such as the choice of sensing range, spatial coverage expansion, adaptive task classification, and the optimal number of vital visual nodes, are considered in this framework. An effective method for VSN lifetime maximization based on quality of service (QoS) has been suggested in [15]; in this method, visual sensors are localized regardless of the quality of the received video. Another work presents an efficient energy optimization method [16] in which a relay is selected to minimize the communication range of the sensors, independent of the visual data quality.

To reach the maximum 3D directional coverage, the coordinates and directions of a number of camera-equipped unmanned aerial vehicles have been optimized by Wang et al. [17]; the 3D directional coverage of the camera and of the object are modeled as a straight rectangular pyramid and a spherical-base cone, respectively. In a similar study, Wang et al. have proposed the optimal placement of unmanned aerial vehicles to maximize the quality of monitoring [18]; their procedure accounts for aggregate effects including global positioning system (GPS) error, orientation bias, and the influence of wind. Wu et al. have proposed SmartPhoto to optimize the number of crowd-sourced photos that meet the coverage criteria [19]. This framework is modeled on data available from the smartphone, such as GPS-based location and phone orientation; with this model, a remote server can select proper photos and trade off the required coverage against the constrained resources, including bandwidth, storage, computational power, and device energy. In a similar study, a photo selection algorithm based on geometric data from the sensors embedded in smartphones has been proposed by Wu et al. [20]; this algorithm determines the most useful captured photos of the target with the minimum joint coverage.
A monitoring drone-based system has also been presented based on the coverage maximization problem; it is claimed that this system can provide better coverage in surveillance scenarios, and the main properties of the target, such as size and orientation, are considered in its coverage model [21].

In our previous work, the energy optimization of WVSNs was addressed while the variance of the image and the DOF were considered as QoS [22], [23]. In addition, in one of our studies, the quality of the image captured from the target was determined based on the entropy of the received image, a metric expressed in terms of the VS parameters [24]. In those studies, the coverage of the target was not taken into account. In this paper, the combination of selecting the VSes and setting their focal lengths is proposed, in order to both satisfy the minimum coverage of the target and obtain the desired DOF while the WVSN energy consumption is minimized.

1.3 CONTRIBUTIONS
Most of the previous works have considered the amount of coverage regardless of the quality of the received data, which is not a suitable criterion. The qualities of the coverage and of the received visual data must be taken into account simultaneously as the QoE defined by the user located at the sink, and this is the approach followed in this paper. Also, the target coverage is mathematically modeled as a function of the VS parameters, such as the focal length, the visibility of the camera lens, the physical dimensions of the imaging sensor, and the VS position with respect to the target position. In fact, optimally setting the VS parameters results in the satisfaction of QoE. Figure 1 shows the problem and scenario on which this study focuses.

Furthermore, the most recent works in this area have rarely considered the energy optimization of the WVSN. So, this paper studies the energy minimization problem of the WVSN considering QoE as elaborated above, which has rarely been examined in the literature. The coverage and the quality of the received image/video are considered as QoE. Satisfying this QoE definitely impacts the energy consumption: to meet the QoE constraints, the number of active VSes should be increased, and increasing the number of active VSes clearly increases the energy consumption. Analyzing the energy optimization problem while keeping the QoE, this paper presents a new algorithm which compromises between energy consumption and QoE satisfaction by selecting suitable VSes and properly setting their parameters. In this method, the two problems of energy minimization based on VS selection and of parameter setting are addressed separately to reduce the computational complexity and to prevent the divergence of the algorithm. A novel technique is also applied in the VS selection problem to reduce the computational complexity: based on a convex optimization framework, the VS selection problem is analyzed and a priority function is extracted from a combination of the KKT (Karush-Kuhn-Tucker) optimality conditions, which facilitates enabling the proper VSes for target tracking. Also, the cost function of the focal length setting problem is turned into a quadratic problem, and the Log-Barrier method is used to solve it.

[Fig. 1: The focused problem: a) topology of the considered network, b) structure of the sink, c) structure of the VS.]

The rest of this paper is organized as follows. In Section 2, the WVSN model is described mathematically. The problem statement is presented in Section 3. Our proposed algorithm for energy minimization according to the quality constraints is presented in Section 4. Section 5 presents the simulation results and the performance evaluation of the proposed algorithm. Finally, conclusions are given in Section 6.

2 CONSIDERED WVSN MODEL
In this section, the mathematical model of the WVSN used in this paper is described. In general, visual sensor networks are classified into homogeneous and heterogeneous
groups [25]. For the WVSN description, we assume that the VSes are homogeneous, i.e., all visual sensors have a similar structure. Each visual sensor consists of a visual unit, a communication unit, a processor unit, and a rotation unit. It is also assumed that the network uses an auxiliary system to estimate the target position. The VSes capture the image (video) of the target under tracking and transmit the obtained visual data to the sink, which is located in the middle of the network area. This auxiliary system could be GPS; in scenarios where a GPS signal is not available, a low-power radar can be used at each VS to measure the range of the target with respect to that VS. These measurements are then sent to the sink, where a localization algorithm determines the target's position. This paper does not concentrate on target localization, and it is assumed that the target's position is available. Before describing the WVSN model, the important notations used in this paper are summarized in Table 1.

2.1 The Visual Unit
The visual unit contains a camera, whose most important parameter, the focal length, is used to model it. To obtain a qualified image of the target, the focal length has to be adjusted to yield the desired field of view (FOV) and DOF [22], [23]. The camera lens focal length is adjusted based on the constraints given in (1-1)-(1-3) [22], where $\Delta$ is fixed by the user:

$$\max\Big\{f_{min},\ \frac{c\,d_i^t}{A+c}\Big\} \;\le\; f_i \;\le\; \min\Big\{f_{max},\ \frac{s_w}{2\tan(l_t/d_i^t)},\ \frac{c\,d_i^t\,DF_n^{th}}{A\,d_i^t - A\,DF_n^{th} - c},\ \frac{c\,d_i^t\,DF_f^{th}}{A\,DF_f^{th} - A\,d_i^t + c}\Big\}, \quad (1\text{-}1)$$

where the lower bound is denoted $F_{min_i}$, the upper bound is denoted $F_{max_i}$, and

$$DF_n^{th} = d_i^t - \Delta, \quad (1\text{-}2)$$
$$DF_f^{th} = d_i^t + \Delta. \quad (1\text{-}3)$$
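To make the role of (1-1)-(1-3) concrete, the following minimal Python sketch evaluates a feasible focal-length interval $[F_{min_i}, F_{max_i}]$ for one VS. The function name and numeric values are illustrative only; in particular, the aperture A is treated here as a fixed constant, whereas Table 2 lists it as 1.6f.

```python
import math

def focal_length_bounds(d_t, f_min, f_max, A, c, s_w, l_t, delta):
    """Feasible focal-length interval [F_min_i, F_max_i] of one VS
    following the structure of (1-1)-(1-3); all lengths in meters."""
    DF_n = d_t - delta                      # near depth-of-field threshold (1-2)
    DF_f = d_t + delta                      # far depth-of-field threshold (1-3)
    F_min_i = max(f_min, c * d_t / (A + c))
    F_max_i = min(
        f_max,
        s_w / (2.0 * math.tan(l_t / d_t)),                  # field-of-view term
        c * d_t * DF_n / (A * d_t - A * DF_n - c),          # near-DOF term
        c * d_t * DF_f / (A * DF_f - A * d_t + c),          # far-DOF term
    )
    return F_min_i, F_max_i

# Illustrative values only (loosely inspired by Table 2)
print(focal_length_bounds(d_t=10.0, f_min=3.5e-3, f_max=91e-3,
                          A=5e-3, c=0.3e-6, s_w=16e-3, l_t=1.5, delta=0.4))
```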
In order to adjust the focal length, the visual unit contains components such as a lens, a voice-coil motor (VCM) actuator, and an actuator driver [26]. The VCM actuator operates with a coil suspended in a magnetic field; a current is passed through the coil to move it in the magnetic field and, by design, the displacement is linear in the current. The amount of current is controlled by a command signal received from the sink together with the actuator driver. Indeed, in our study, one of the setting parameters, the focal length adjustment, is calculated at the sink and sent to the VS node as control data containing the amount of focal length displacement. In an active VS node, the focal length setting parameter is translated linearly into a current by a DAC. Based on the above, the energy of adjusting the focal length with a force $\alpha$ to move the lens by $|f_i - f_i^{init}|$ (meters) is obtained as

$$E_{focus} = \alpha\,|f_i - f_i^{init}|. \quad (2)$$

Figure 2 shows the structure of the visual unit.

[Fig. 2: Structure of the visual unit: the lens, the miniature (VCM) motor/actuator, and the driver circuit, which receives the move-lens command obtained from the sink through the processor unit.]

The energy consumed for video capturing during the active time $T_{active}$ is obtained as [27]:

$$E_{video} = P_{vu}\,T_{active}, \quad (3)$$

where $P_{vu}$ is a function of the A/D clock rate and the number of pixels processed in each clock cycle of the visual sensor module. In this study, it is assumed that $P_{vu}$ is about $2 \times 10^{-2}$ mJ/sec, as given in Table 2; this amount of energy for video capturing is typical for the considered image sensor resolution (160 pixel × 90 pixel) [10].

2.2 The Processor Unit
The processor unit comprises the analog-to-digital (A/D) module, the memory, and the microprocessor responsible for local processing. The video captured from the target is quantized into images (frames) at $R_f$ frames per second, and each image of the target is sent to the A/D. The A/D module has $2^{N_b}$ levels for quantizing the still image; in other words, each pixel value of the image is written to memory with $N_b$ bits. The image sensor plane has an area of $s_W s_H$. Considering the sensor plane and its area, each captured image occupies $s_W s_H N_b$ bits of the processor memory. With this memory consumption and frame rate (the repetition interval of the video quantization), each active VS consumes $s_W s_H N_b R_f T_{active}$ bits of memory to capture the video of the target during the active time. It is assumed that the sink is equipped with its own power supply and that all the processing is done at the sink, so the energy the sink consumes is neglected. The most important energy consumption is that of the VSes, which are battery powered; moreover, the VSes provide the important data about the target.
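The per-frame bookkeeping above is simple enough to state in a few lines of code. The following sketch computes the bits written per active period and the video-capture energy of (3); the helper names are illustrative, and $P_{vu}$ is converted to joules per second here as an assumption.

```python
def per_frame_bits(s_w_px, s_h_px, n_b):
    """Bits occupied by one captured frame (s_w_px x s_h_px pixels, n_b bits/pixel)."""
    return s_w_px * s_h_px * n_b

def capture_cost(s_w_px, s_h_px, n_b, r_f, t_active, p_vu):
    """Memory written (bits) and video-capture energy E_video = P_vu * T_active
    over one active period, as in Sections 2.1-2.2."""
    bits = per_frame_bits(s_w_px, s_h_px, n_b) * r_f * t_active
    e_video = p_vu * t_active
    return bits, e_video

# Example with the resolution and frame rate used in the paper (160x90, 8 bits, 30 fps);
# P_vu = 2e-2 mJ/s is written as 2e-5 J/s.
bits, e_video = capture_cost(160, 90, 8, 30, t_active=1.0, p_vu=2e-5)
print(bits, e_video)
```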
2.3 The Communication Unit
The communication unit consists of the antenna and transceiver modules. The visual data obtained from the target, and the command data sent from the sink to the VS for decision-making purposes, are transmitted by the transceiver module using binary phase shift keying (BPSK) digital communication.
Since BPSK modulation has equal symbol rate and bit rate, the signal transmitted on the communication link between the VS and the sink has a bandwidth approximately equal to $R_s$, i.e., $BW = R_s$. The properties of the transceiver module used in the communication link between the sink and the VS, such as the transmitted power $p_t$, the antenna gains, the communication link range, and the received power $p_r$ at the transceiver module of the sink or the VS, are related according to the free-space path loss law as follows:

$$p_r = \frac{p_t\,G_s\,G_{vs}\,\lambda^2}{(4\pi d_i^s)^2}. \quad (4)$$

Depending on the transceiver noise temperature in receiving mode and the transmitted signal bandwidth, the noise power at the transceiver module is calculated as $K T_e R_s$. Therefore, the signal-to-noise ratio (SNR) at the transceiver module is

$$SNR = \frac{p_t\,G_s\,G_{vs}\,\lambda^2}{(4\pi d_i^s)^2\,K\,T_e\,R_s}. \quad (5)$$

Correct signal detection in receiving mode requires satisfying $SNR_{min}$, which is determined by the transceiver circuit features reported in its datasheet. As a result, the transceiver module must transmit a power of

$$p_t = \frac{SNR_{min}\,(4\pi d_i^s)^2\,K\,T_e\,R_s}{G_s\,G_{vs}\,\lambda^2}. \quad (6)$$

The energy consumption per transferred bit at the rate $R_s$ is thus calculated as below, which is the same as the formula in [28]:

$$E_{tx\text{-}bit} = \frac{p_t}{R_s} = \frac{SNR_{min}\,(4\pi d_i^s)^2\,K\,T_e}{G_s\,G_{vs}\,\lambda^2} = \frac{SNR_{min}\,(4\pi)^2\,K\,T_e}{G_s\,G_{vs}\,\lambda^2}\,(d_i^s)^2. \quad (7)$$

It should be noted that the transmitter electronics energy at both the transmitter and the receiver is neglected; this does not affect the solution of the energy optimization problem, since this term is the same for all VSes. To transfer $s_W s_H N_b R_f$ bits per second at the rate $R_s$ over the communication link between the sink and the VS, the active $VS_i$ consumes $E_{tx\text{-}bit}\, s_W s_H N_b R_f$ joules of energy per second. So, during the active time $T_{active}$ in which the $i$th visual sensor is tracking the target, its communication energy consumption is

$$E_{tx} = \frac{SNR_{min}\,(4\pi d_i^s)^2\,K\,T_e}{G_s\,G_{vs}\,\lambda^2}\,s_W s_H N_b R_f\,T_{active}. \quad (8)$$
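A short numerical sketch of (7)-(8) follows. The function name and the call are illustrative; in particular, $SNR_{min}$ = 30 dB is converted to a linear factor of 1000, and the effective noise temperature is assumed to be 350 K.

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def tx_energy(d_s, snr_min, t_e, g_s, g_vs, lam, s_w_px, s_h_px, n_b, r_f, t_active):
    """Per-bit transmit energy (7) and total communication energy (8)
    for one active VS at distance d_s from the sink."""
    e_tx_bit = snr_min * (4.0 * math.pi * d_s) ** 2 * K_BOLTZMANN * t_e / (g_s * g_vs * lam ** 2)
    e_tx = e_tx_bit * s_w_px * s_h_px * n_b * r_f * t_active
    return e_tx_bit, e_tx

# Illustrative call in the style of Table 2 (SNR_min = 30 dB -> 1000; T_e = 350 K assumed)
print(tx_energy(d_s=30.0, snr_min=1e3, t_e=350.0, g_s=2.0, g_vs=2.0,
                lam=0.125, s_w_px=160, s_h_px=90, n_b=8, r_f=30, t_active=1.0))
```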
2.4 The Rotation Unit
The visual unit of an active VS needs to rotate toward the direction of the target under tracking in order to center the image of the target on the camera image sensor plane; this is carried out by the rotation unit using a step motor. The rotation power is calculated at the sink: based on the target position relative to each VS, the direction of the VS, and the torque of the motor, the rotation energy of each VS is computed. It should be noted that the direction of each VS is sent to the sink frequently to keep this parameter up to date; the VS direction after rotating toward the target is reported to the VS processor via the motor control signal. The processor unit at the sink calculates the required angle $\Delta\theta$ (in radians) for rotating the visual unit toward the target. Given this angle and the torque $\tau$ of the load on the step motor, the rotation unit consumes $E_{RU}$ joules:

$$E_{RU} = \tau\,\Delta\theta. \quad (9)$$
2.5 The Coverage Model
In this subsection, the target coverage seen by a VS is modeled mathematically. For the sake of simplicity, the target's shape is assumed to be a cylinder; if another, more complicated geometric shape were used, the obtained coverage model could be very different for different target shapes. In other words, the cylinder is a general shape which closely encloses other geometric shapes and maintains the generality of the target coverage model for different complicated target shapes. Besides, for human tracking a cylindrical shape is a more suitable model than any other shape, since the human body closely resembles a cylinder.

Initially, the target coverage sensed by one visual sensor is calculated (Figure 3). As can be observed in the figure, the active VS senses a cylindrical sector, shown as the yellow region. To calculate the area of this region, the length and width of the yellow rectangle must first be calculated. The former is the length of the arc $B'B''$, which equals $2 R_t\,\widehat{OA'B'}$. The angle $\widehat{OA'B'}$ is obtained as

$$\widehat{OA'B'} = \frac{\pi}{2} - \min\Big(\tan^{-1}\Big(\frac{s_w}{2f}\Big),\ \sin^{-1}\Big(\frac{R_{th}+R_t}{d^t - R_t}\Big)\Big), \quad (10)$$

where $R_{th}$ is the maximum radius of the cylindrical region around the target on which the camera focuses. Consequently, the arc $B'B''$ is given by

$$|B'B''| = 2R_t\Big(\frac{\pi}{2} - \min\Big(\tan^{-1}\Big(\frac{s_w}{2f}\Big),\ \sin^{-1}\Big(\frac{R_{th}+R_t}{d^t - R_t}\Big)\Big)\Big). \quad (11)$$

The width of the yellow rectangle is

$$2Y = \min\Big(L_t + L_{th},\ \frac{s_H (d^t - R_t)}{f}\Big), \quad (12)$$

where $L_t$ and $L_{th}$ denote the length of the target and the maximum length of the cylindrical region around the target on which the camera focuses, respectively. Overall, based on (11) and (12), the area of the target sensed by the active VS is

$$S_i = 2R_t\Big(\frac{\pi}{2} - \min\Big(\tan^{-1}\Big(\frac{s_w}{2f_i}\Big),\ \sin^{-1}\Big(\frac{R_{th}+R_t}{d^t - R_t}\Big)\Big)\Big)\,\min\Big(L_t + L_{th},\ \frac{s_H (d^t - R_t)}{f_i}\Big), \quad \forall i = 1,2,\dots,N_a, \quad (13)$$

where $S_i$ is the area of the target sensed by $VS_i$. When several VSes are turned on, the surfaces of the target seen by them are accumulated, giving the coverage of the network (target coverage). It should be noted that the individual target coverages might overlap, and the exact sensed target coverage is obtained by deducting the joint coverage.

[Fig. 3: The coverage of the target sensed by a VS.]
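The single-VS coverage of (13) can be evaluated directly. The sketch below is a minimal implementation under the cylindrical-target model; the function name and the numeric values in the example call (in particular the focus-region sizes R_th and L_th) are hypothetical.

```python
import math

def single_vs_coverage(f_i, d_t, R_t, R_th, L_t, L_th, s_w, s_h):
    """Target area sensed by one active VS, following (10)-(13).
    f_i: focal length, d_t: VS-target distance, R_t/L_t: target radius/length,
    R_th/L_th: focus-region radius/length, s_w/s_h: sensor width/height (meters)."""
    angle = (math.pi / 2.0
             - min(math.atan(s_w / (2.0 * f_i)),
                   math.asin((R_th + R_t) / (d_t - R_t))))
    arc = 2.0 * R_t * angle                                   # length of arc B'B'' (11)
    width = min(L_t + L_th, s_h * (d_t - R_t) / f_i)          # 2Y from (12)
    return arc * width                                        # S_i from (13)

# Illustrative values only (target dimensions of Table 2, hypothetical focus region)
print(single_vs_coverage(f_i=10e-3, d_t=10.0, R_t=0.5, R_th=0.5,
                         L_t=1.5, L_th=0.5, s_w=16e-3, s_h=9e-3))
```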
To calculate the joint camera coverage of the target, consider Figure 4.

[Fig. 4: The joint coverage of the target sensed by two VSes.]

TABLE 1: THE IMPORTANT NOTATIONS
s_w: Smart camera sensor width (pixels)
s_H: Smart camera sensor height (pixels)
f_min: Minimum focal length of the camera
f_max: Maximum focal length of the camera
f_i^init: Focal length of the ith VS before being set
f_i: Focal length of the ith VS
DF_n^th: Near depth of field threshold
DF_f^th: Far depth of field threshold
d_i^t: Distance between the target and the ith VS
A: Camera aperture
C: Circle of confusion
Δ: Distance between the posterior and anterior edges of the target
K: Boltzmann constant
SNR_min: Minimum detectable signal to noise ratio at the receiver
λ: Wavelength of the transmitted signal
G_s: Antenna gain of the sink transceiver
T_e: Effective noise temperature
τ: Torque of the load on the step motor
N_b: Number of bits used to represent each pixel value
BW: Transmitted signal bandwidth
p_t: Transmitted power from the transceiver module
p_r: Received power at the transceiver module
R_f: Frame rate of the received video of the target
d_i^s: Distance between the sink and the ith VS
B: Bandwidth of the communication module
G_vs: Antenna gain of the VS transceiver
α: Required force for moving the lens
E_RU: Rotation unit energy consumption
R_s: Symbol rate or the bit rate
l_t: Length of the target
R_t: Radius of the target
P_vu: Video capturing unit power consumption
N_vs: Total number of VSes
N_a: Number of active VSes
E_tx: Communication energy consumption for each VS
T_active: Active time for each VS
E_video: Energy for capturing video
E_focus: Energy for the lens focusing operation

As seen in Figure 4, the area of the joint coverage of the target (the yellow rectangle) is equal to the length of the arc $w_1 w_2$ ($\hat{\beta}_2 \times R_t$) multiplied by the length of the segment $AA'$. According to trigonometric relations, the angle $\hat{\beta}_2$ is calculated as

$$\hat{\beta}_2 = \pi - \theta_{ij} - (\hat{\alpha}_i + \hat{\alpha}_j), \quad (14)$$

where $\theta_{ij}$ is the angle between the connecting vectors $\vec{A}_i$ and $\vec{A}_j$, which connect the target to $VS_i$ and $VS_j$, respectively. The angle $\theta_{ij}$ is calculated as

$$\theta_{ij} = \cos^{-1}\Big(\frac{\vec{A}_i \cdot \vec{A}_j}{|\vec{A}_i|\,|\vec{A}_j|}\Big). \quad (15)$$

In addition, according to Figure 4, the angles $\hat{\alpha}_i$ and $\hat{\alpha}_j$ are given by

$$\hat{\alpha}_i = \sin^{-1}\Big(\frac{R_t}{d_i^t}\Big), \quad (16)$$
$$\hat{\alpha}_j = \sin^{-1}\Big(\frac{R_t}{d_j^t}\Big). \quad (17)$$

The length of the segment $AA'$ is controlled by the distances between the VSes and the target, and also by their focal lengths. Therefore, the length of the segment $AA'$ is

$$|AA'| = s_H\,\min\Big(\frac{d_i^t}{f_i},\ \frac{d_j^t}{f_j}\Big). \quad (18)$$

Based on (14), (16), (17) and (18), the area of the joint coverage of the target sensed by $VS_i$ and $VS_j$ is determined as

$$S_{ij} = R_t\,\hat{\beta}_2\,|AA'| = R_t\Big(\pi - \theta_{ij} - \Big(\sin^{-1}\Big(\frac{R_t}{d_i^t}\Big) + \sin^{-1}\Big(\frac{R_t}{d_j^t}\Big)\Big)\Big)\,s_H\,\min\Big(\frac{d_i^t}{f_i},\ \frac{d_j^t}{f_j}\Big). \quad (19)$$

It is noteworthy that this area equals zero if the angle $\theta_{ij}$ is $\pi$ radians. Therefore, (19) can be modified as

$$S_{ij} = \frac{(\pi - \theta_{ij})^2}{2\pi}\,s_H\,R_t\Big(\pi - \theta_{ij} - \Big(\sin^{-1}\Big(\frac{R_t}{d_j^t}\Big) + \sin^{-1}\Big(\frac{R_t}{d_i^t}\Big)\Big)\Big)\,\min\Big(\frac{d_i^t}{f_i},\ \frac{d_j^t}{f_j}\Big). \quad (20)$$

Consequently, the exact area of the target sensed by the active VSes is calculated as

$$S_{net} = \sum_{i=1}^{N_{vs}} \rho_i S_i - \frac{1}{2}\sum_{i=1}^{N_{vs}}\sum_{\substack{j=1\\ j\neq i}}^{N_{vs}} \rho_i \rho_j S_{ij}, \quad (21)$$

where $\rho_i$ is the selection (activation) index, defined by

$$\rho_i = \begin{cases} 1 & VS_i \text{ is activated} \\ 0 & \text{otherwise.} \end{cases} \quad (22)$$
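The pairwise overlap of (20) and the network coverage of (21) translate directly into code. The sketch below mirrors the printed equations without any extra clamping of negative overlaps; the function names and calling conventions are illustrative, and the single-VS coverages S_i are assumed to be precomputed (e.g., with the earlier single_vs_coverage sketch).

```python
import math

def pairwise_overlap(theta_ij, d_i, d_j, f_i, f_j, R_t, s_h):
    """Joint coverage of two VSes, following the structure of (20)."""
    beta = math.pi - theta_ij - (math.asin(R_t / d_i) + math.asin(R_t / d_j))
    scale = (math.pi - theta_ij) ** 2 / (2.0 * math.pi)
    return scale * s_h * R_t * beta * min(d_i / f_i, d_j / f_j)

def network_coverage(active, S_single, theta, d, f, R_t, s_h):
    """S_net from (21): sum of single coverages minus half the pairwise overlaps.
    active: indices of selected VSes; S_single[i]: S_i from (13); theta[i][j]: angle
    between the target-to-VS vectors of VS i and VS j; d[i], f[i]: distance and focal length."""
    total = sum(S_single[i] for i in active)
    for i in active:
        for j in active:
            if i != j:
                total -= 0.5 * pairwise_overlap(theta[i][j], d[i], d[j],
                                                f[i], f[j], R_t, s_h)
    return total
```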
3. THE PROBLEM STATEMENT
One of the important applications of visual sensor networks is target tracking, capturing videos of the desired quality while the energy consumption is minimized. Hence, in this section, the problem of optimizing energy consumption while considering the desired quality of both the video captured of the target under tracking and the target coverage in WVSNs is formulated.

As presented in Section 2, each visual sensor contains a number of subsystems which consume energy when activated. The energy of an active VS during the time $T_{active}$ is thus calculated as
$$E_n^i = E_{focus}^i + E_{tx}^i + E_{RU}^i + E_{video}^i. \quad (23)$$

Based on (22), (23) is modified as

$$E_i = \rho_i E_n^i. \quad (24)$$

So, the total energy consumption of the WVSN at every moment of target tracking is calculated, based on (8), (9) and (24), as

$$E_{net} = \sum_{i=1}^{N_{vs}} E_i. \quad (25)$$

In fact, (25) is the cost function of the considered problem, which must be minimized. However, as mentioned previously, the goal of target tracking is to take videos which satisfy the desired QoE (defined by the WVSN user). One of the factors contributing to QoE is the quality of the captured video, which is improved by properly setting the focal length. Based on (1-1), the focal length may vary in the desired region, which can be written for each VS as

$$\rho_i F_{min_i} - f_i \le 0, \quad (26)$$
$$f_i \le \frac{F_{max_i}}{\rho_i} \;\rightarrow\; \rho_i f_i - F_{max_i} \le 0. \quad (27)$$

Target coverage is the other QoE factor considered in this paper. If the coverage sensed by a VS or a set of VSes is equal to or higher than the desired coverage defined by the user, the QoE is satisfied. Therefore, the satisfaction of target coverage as QoE is stated mathematically as

$$S_{net} \ge S_{th} \;\Longrightarrow\; S_{th} - S_{net} \le 0, \quad (28)$$

where $S_{th}$ denotes the desired target coverage defined by the user. Consequently, the energy minimization with regard to the mentioned QoE constraints can be written as

$$\text{(P)}\quad \min_{\{\rho_i\},\{f_i\}} E_{net}$$
$$\text{Subject to (26)-(28)},$$
$$\rho_i \in \{0,1\}. \quad (29)$$

The stated problem (P) is a mixed-integer problem, whose optimization imposes a high computational complexity compared to standard convex optimization problems. Thus, a technique is used to change the integer problem into a continuous problem in order to reduce the complexity. In this technique, the integer variable $\rho_i$ is mapped into a continuous variable between zero and one:

$$0 \le \rho_i \le 1. \quad (30)$$

Also, it is assumed that the maximum number of active VSes is limited to $M$:

$$\sum_{i=1}^{N_{vs}} \rho_i \le M. \quad (31)$$

Hence, the energy minimization with respect to the mentioned constraints is modified as

$$\text{(P1)}\quad \min_{\{\rho_i\},\{f_i\}} E_{net}$$
$$\text{Subject to (26), (27), (28), (30), (31)}.$$
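For reference, the objective and constraints of (P1) can be assembled as follows. This is a minimal sketch for binary activation indices; the helper names are hypothetical and the coverage value S_net is assumed to be computed elsewhere (e.g., with the earlier network_coverage sketch).

```python
def vs_energy(e_focus, e_tx, e_ru, e_video):
    """Per-VS energy E_n^i of (23): sum of the four subsystem energies."""
    return e_focus + e_tx + e_ru + e_video

def network_energy(rho, e_n):
    """E_net of (24)-(25) for activation indices rho (0/1) and per-VS energies e_n."""
    return sum(r * e for r, e in zip(rho, e_n))

def feasible(rho, f, F_min, F_max, s_net, s_th, M):
    """Check the constraints of (P1): focal-length bounds (26)-(27),
    coverage (28), and the cap on the number of active sensors (31)."""
    focal_ok = all(r == 0 or (lo <= fi <= hi)
                   for r, fi, lo, hi in zip(rho, f, F_min, F_max))
    return focal_ok and s_net >= s_th and sum(rho) <= M
```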
4. PROPOSED METHOD
In this section, a method for solving problem (P1) is presented. First, to prove the convexity of the problem, we must show that the cost function stated in (23) is a convex function and that the constraints of the problem form a convex region with respect to the optimization variables $\{f_i\}$ and $\{\rho_i\}$ [29]. Firstly, the cost function is indeed convex, since it is a linear function of $\rho_i$ and an absolute-value function of $f_i$. Secondly, the function $h_1 = \rho_i F_{min_i} - f_i$ is defined to establish the convexity of the constraint (26); as $h_1$ is an affine function, it is convex, so the constraint (26) forms a convex region, since the sublevel set of a convex function is convex [29]. The region indicated by the constraint (27) is also convex, since the eigenvalues of the Hessian matrix of $h_2 = \rho_i f_i - F_{max_i}$ are non-negative for $0 \le \rho_i \le 1$ and $0 \le F_{min_i} \le f_i \le F_{max_i} \le 1$. Based on the empirical values of $f_i$ and taking the same approach, constraint (28) can also be shown to be convex.

Given the nature of this problem, the standard Log-Barrier method cannot be used directly [29]. The discrete nature of the VS activation index leads to the divergence of the Log-Barrier method: after each iteration, the focal length and the activation index of each VS are obtained as continuous values; the activation index is then mapped into a discrete value, which causes the focal length value to swing in each iteration of the Log-Barrier method, so the focal length cannot converge toward a suboptimal value. To tackle this issue, the problems of VS selection and focal length adjustment are separated heuristically. In this method, it is assumed that all VSes capable of correctly adjusting their focal lengths (according to the target position) can be candidates for target tracking. Since the coverage is a decreasing function of the focal length, the focal length of the candidates is set to $F_{min}$ to satisfy the coverage constraints. As a result, problem (P1) is reduced to the VS selection problem alone (the focal constraints (26) and (27) are eliminated, and the coverage $S_{net}$, which was a function of the VS focal lengths and selection indices, no longer depends on the focal lengths). So, the VS selection problem is defined as

$$\text{(P2)}\quad \min_{\{\rho_i\}} \sum_{i=1}^{N_{vs}} \rho_i E_i$$
$$\text{Subject to (30)-(31)},$$
$$S_{th} - S_{net}^{new} \le 0, \quad (32)$$

where $S_{net}^{new}$ is defined by

$$S_{net}^{new} = \sum_{i=1}^{N_{vs}} \rho_i S_i^{new} - \frac{1}{2}\sum_{i=1}^{N_{vs}}\sum_{\substack{j=1\\ j\neq i}}^{N_{vs}} \rho_i \rho_j S_{ij}^{new}, \quad (33\text{-}1)$$

$$S_i^{new} = 2R_t\Big(\frac{\pi}{2} - \min\Big(\tan^{-1}\Big(\frac{s_w}{2F_{min_i}}\Big),\ \sin^{-1}\Big(\frac{R_{th}+R_t}{d^t - R_t}\Big)\Big)\Big)\,\min\Big(L_t + L_{th},\ \frac{s_H(d^t - R_t)}{F_{min_i}}\Big), \quad (33\text{-}2)$$

$$S_{ij}^{new} = \frac{(\pi - \theta_{ij})^2}{2\pi}\,s_H R_t\Big(\pi - \theta_{ij} - \Big(\sin^{-1}\Big(\frac{R_t}{d_i^t}\Big) + \sin^{-1}\Big(\frac{R_t}{d_j^t}\Big)\Big)\Big)\,\min\Big(\frac{d_i^t}{F_{min_i}},\ \frac{d_j^t}{F_{min_j}}\Big). \quad (33\text{-}3)$$

With regard to these constraints and the convex cost function, the Lagrange function can be written as [29]

$$L = \sum_{i=1}^{N_{vs}} \rho_i E_i + \upsilon\,(S_{th} - S_{net}^{new}) - \sum_{i=1}^{N_{vs}} \mu_i \rho_i + \sum_{i=1}^{N_{vs}} \delta_i (\rho_i - 1) + \psi\Big(\sum_{i=1}^{N_{vs}} \rho_i - M\Big), \quad (34)$$

where $\upsilon$, $\mu_i$, $\delta_i$ and $\psi$ are the Lagrange multipliers. Applying the KKT conditions results in a complicated system of equations with $N_{vs}$ unknown $\rho_i$ parameters. It should be noted that solving the problem is not aimed at finding the exact values of $\rho_i$, but at determining the priority of the VSes for target tracking. Therefore, by obtaining the priority function of the VS selection, the complex equations (resulting from
the KKT conditions applied to (34) under constraint (32)) are solved. Furthermore, in light of these explanations, the terms $\sum_{i=1}^{N_{vs}} \mu_i \rho_i + \sum_{i=1}^{N_{vs}} \delta_i(\rho_i - 1)$ can be omitted from (34). Considering this omission as well as (33) and the KKT conditions, we can write

$$\frac{\partial L}{\partial \rho_i} = 0 \;\rightarrow\; E_i + \psi + \upsilon\Big(S_i^{new} - \frac{1}{2}\sum_{\substack{l=1\\ l\neq i,j}}^{N_{vs}} \rho_l S_{il}^{new}\Big) = \frac{1}{2}\,\upsilon\,\rho_j S_{ij}^{new}, \quad (35\text{-}1)$$

$$\frac{\partial L}{\partial \rho_j} = 0 \;\rightarrow\; E_j + \psi + \upsilon\Big(S_j^{new} - \frac{1}{2}\sum_{\substack{l=1\\ l\neq i,j}}^{N_{vs}} \rho_l S_{jl}^{new}\Big) = \frac{1}{2}\,\upsilon\,\rho_i S_{ij}^{new}. \quad (35\text{-}2)$$

Regarding (35-1) and (35-2), the ratio $\rho_j/\rho_i$ is calculated as follows ($\psi$ is omitted from (35-1) and (35-2), because it is constant for all the selected VSes and does not change with $i$):

$$\frac{\rho_j}{\rho_i} = \frac{E_i + \upsilon\Big(S_i^{new} - \frac{1}{2}\sum_{\substack{l=1\\ l\neq i,j}}^{N_{vs}} \rho_l S_{il}^{new}\Big)}{E_j + \upsilon\Big(S_j^{new} - \frac{1}{2}\sum_{\substack{l=1\\ l\neq i,j}}^{N_{vs}} \rho_l S_{jl}^{new}\Big)} = \frac{cost(i)}{cost(j)}, \quad (36)$$

where the cost function of the selected VS is defined as

$$cost(i) = E_i + \upsilon\Big(S_i^{new} - \frac{1}{2}\sum_{\substack{l=1\\ l\neq i,j}}^{N_{vs}} \rho_l S_{il}^{new}\Big). \quad (37)$$

Considering the fact that one or more VSes can be involved in providing the desired coverage in target tracking, the cost function of the selected VSes ($\rho_l$ set to one) is modified as

$$cost(\Omega) = \sum_{i\in\Omega} E_i + \upsilon\Big(\sum_{i\in\Omega} S_i^{new} - \frac{1}{2}\sum_{i\in\Omega}\sum_{\substack{j\in\Omega\\ j\neq i}} S_{ij}^{new}\Big), \quad N_a \le M, \quad (38)$$

where $\Omega$ is the set of VSes selected for target tracking. Also, based on (31), $N_a$ must be less than or equal to $M$, i.e., $N_a \le M$. According to (36) and (38), those VSes which provide the desired coverage with minimum energy consumption have a higher priority in target tracking. The parameter $\upsilon$ is the only unknown parameter in the cost function. So, based on (34) and the complementary slackness condition, we have

$$\upsilon\,(S_{th} - S_{net}^{new}) = 0 \;\rightarrow\; \begin{cases} \upsilon = 0 & S_{th} < S_{net}^{new} \\ \upsilon > 0 & S_{th} = S_{net}^{new}. \end{cases} \quad (39)$$

It can be deduced from (39) that $\upsilon$ must be positive, because the coverage constraint is not met if $\upsilon = 0$. The bisection method is used to obtain the optimal value of $\upsilon$, which enforces the coverage condition $S_{net}^{new} \ge S_{th}$. In each iteration of the algorithm (for a fixed $\upsilon$), the cost function is calculated for the VSes and they are sorted from the lowest to the highest cost. Then, the visual sensor with the highest priority is selected and removed from the set of unselected VSes. The process of adding members to the selected set continues until $S_{net}^{new} \ge S_{th}$ or $\sum_{i} \rho_i = M$ is satisfied. Once $S_{net}^{new} \ge S_{th}$ or $\sum_{i} \rho_i = M$ is met, the value of $\upsilon$ is updated according to the coverage level (it is decreased if $S_{net}^{new} \le S_{th}$ and increased otherwise), and the search space is halved. The bisection is performed until the value of $\upsilon$ reaches an accuracy of $\varepsilon_{Bsc}$. The VS selection procedure is summarized in Algorithm 1.

Algorithm 1: VS Selection Algorithm
Candidate set: the sensors satisfying F_min_i ≤ f_i ≤ F_max_i, ∀i = 1, 2, ..., N_vs
Initialization: υ_max = r (a large number), υ_min = 0, ε_Bsc: a small number
WHILE (|υ_max − υ_min| > ε_Bsc)
    υ = (υ_max + υ_min)/2
    Compute cost(Ω);            // priority of each candidate, eq. (37)
    N_a = 1;                    // number of active sensors
    While (N_a ≤ M)             // select the N_a sensors with the highest priority
        Compute S_net;
        IF S_net ≥ S_th
            Break;
        End IF
        N_a = N_a + 1;
    End While
    Compute E_i for all sensors;
    If S_net < S_th
        υ_max = υ;
    Else if S_net > S_th
        υ_min = υ;
    End if
End WHILE
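A compact Python sketch of Algorithm 1 follows. It mirrors the printed priority function (37) and the bisection update on υ; the function name, the interfaces (precomputed per-VS energies, single coverages and pairwise overlaps evaluated at f_i = F_min_i), and the default parameters are all assumptions rather than part of the paper.

```python
def select_vs(energies, s_single, s_pair, s_th, M, upsilon_max=1e6, eps=1e-3):
    """Sketch of Algorithm 1: bisection on the multiplier 'upsilon' with a greedy
    inner loop that adds the lowest-cost sensors (eq. (37)) until the coverage
    S_net (eq. (21) with f_i = F_min_i) reaches s_th or M sensors are active."""
    n = len(energies)

    def coverage(selected):
        total = sum(s_single[i] for i in selected)
        for i in selected:
            for j in selected:
                if i != j:
                    total -= 0.5 * s_pair[i][j]
        return total

    def greedy(upsilon):
        selected = []
        while len(selected) < M:
            remaining = [i for i in range(n) if i not in selected]
            # cost(i) = E_i + upsilon * (S_i^new - 0.5 * sum of overlaps with selected)
            cost = {i: energies[i] + upsilon * (s_single[i]
                    - 0.5 * sum(s_pair[i][l] for l in selected)) for i in remaining}
            selected.append(min(cost, key=cost.get))
            if coverage(selected) >= s_th:
                break
        return selected

    lo, hi = 0.0, upsilon_max
    best = greedy(hi)
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        selected = greedy(mid)
        if coverage(selected) < s_th:
            hi = mid            # shrink upsilon, as in Algorithm 1
        else:
            lo = mid
            best = selected
    return best
```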
After the proper VSes are selected, their appropriate focal lengths must be computed. For this purpose, we consider the cost function of the focal length adjustment as

$$Cost_{FS} = \sum_{i\in\Omega} \alpha\,|f_i - f_i^{init}|. \quad (40)$$

Since the term $|f_i - f_i^{init}|$ is not a differentiable function of $f_i$, it can be turned into the quadratic form $(f_i - f_i^{init})^2$ without changing the minimizer of $Cost_{FS}$. So we have

$$Cost_{FS} = \sum_{i\in\Omega} \alpha\,(f_i - f_i^{init})^2. \quad (41)$$

It is worth noting that the selection index $\rho_i$ is set to one for the selected VSes. Thus, based on (26), (27), (28), and (41), the problem of focal length adjustment is stated as follows:

$$\text{(P3)}\quad \min_{\{f_i\}} Cost_{FS} \quad (42\text{-}1)$$
$$\text{Subject to}$$
$$F_{min_i} - f_i \le 0, \quad \forall i = 1,2,\dots,N_a, \quad (42\text{-}2)$$
$$f_i - F_{max_i} \le 0, \quad \forall i = 1,2,\dots,N_a, \quad (42\text{-}3)$$
$$S_{th} - S_{net} \le 0. \quad (42\text{-}4)$$

Due to the convexity of the cost function $Cost_{FS}$ and of the constraints, the Log-Barrier algorithm can be used to solve this problem [29]. The cost function of the Log-Barrier method is defined as

$$Cost_{FS}^{B} = \sum_{i\in\Omega} \alpha\,(f_i - f_i^{init})^2 - \frac{1}{t}\Big(\sum_{i\in\Omega} \log(f_i - F_{min_i}) + \sum_{i\in\Omega} \log(F_{max_i} - f_i) + \log(S_{net} - S_{th})\Big), \quad t \in \mathbb{R}^{+}, \quad (43)$$

where $\mathbb{R}^{+}$ is the set of positive real numbers and $t$ is the parameter that controls the penalty weight of the constraints in the Barrier method. Based on the Log-Barrier method, the focal length setting procedure is presented as Algorithm 2.
Algorithm 2: Focal Length Setting Algorithm
Initialization:
    t(0) > 0; μ > 1; ε > 0 is the accuracy of the Barrier method
    f_i = (F_max_i − F_min_i) × r + F_min_i, ∀i ∈ Ω, where r ∈ [0, 1] is a uniform random variable
    m: the number of constraints of problem (42)
WHILE (m/t ≥ ε)
    // Newton's method
    Initialization: ε_Newton > 0 is the accuracy of Newton's method
    Compute Δf_i = −(∇²Cost_FS^B)⁻¹ ∇Cost_FS^B and λ² = −∇ᵀCost_FS^B × Δf_i
    While (λ²/2 ≥ ε_Newton)
        Recompute Δf_i and λ² at the current f_i
        // Evaluate Newton's method step size t_N by the backtracking method
        Initialization: t_N = 1, α ∈ (0, 0.5), β ∈ (0, 1)
        while Cost_FS^B(f_i + t_N Δf_i) > Cost_FS^B(f_i) + α t_N ∇ᵀCost_FS^B(f_i) × Δf_i
            t_N = β t_N
        End while
        // Update f_i
        f_i(new) = f_i(old) + t_N Δf_i;  f_i ← f_i(new)
    End Newton While
    t = μ t
End WHILE
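The following is a deliberately simplified sketch of the focal-length setting stage (P3)/(43). It keeps the log-barrier outer loop of Algorithm 2 but replaces the Newton inner loop with SciPy's Nelder-Mead search; it assumes the starting point is strictly feasible and that the caller supplies a coverage function S_net(f). The function and parameter names are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def set_focal_lengths(f_init, F_min, F_max, coverage_fn, s_th, alpha=1.0,
                      t0=1.0, mu=10.0, eps=1e-6):
    """Log-barrier outer loop for (P3)/(43); Nelder-Mead stands in for the
    Newton inner loop of Algorithm 2. coverage_fn(f) must return S_net."""
    f = np.asarray(f_init, dtype=float)
    lo, hi = np.asarray(F_min, float), np.asarray(F_max, float)
    m = 2 * f.size + 1                      # number of constraints in (42)
    t = t0

    def barrier_cost(x):
        slack = np.concatenate([x - lo, hi - x, [coverage_fn(x) - s_th]])
        if np.any(slack <= 0):
            return 1e12                     # outside the feasible interior
        return alpha * np.sum((x - f_init) ** 2) - np.sum(np.log(slack)) / t

    while m / t >= eps:
        f = minimize(barrier_cost, f, method="Nelder-Mead").x
        t *= mu                             # tighten the barrier, as in Algorithm 2
    return f
```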
4.1. The Complexity Analysis of the Proposed Algorithm
By examining (30), we realize that the problem we are dealing with is NP-complete, because the variable $\rho_i$ is discrete. Hence, the exhaustive search (ES) algorithm is presented as the optimum solution of this problem. This algorithm examines all the possible combinations of the VSes and their focal lengths, which amount to $N_{focal} \times \sum_{i=1}^{N_{vs}} \binom{N_{vs}}{i} = N_{focal} \times \sum_{i=1}^{N_{vs}} \frac{N_{vs}!}{(N_{vs}-i)!\,i!}$ states (the interval between the minimum and maximum focal lengths is divided into $N_{focal}$ parts), in order to find the combination of VSes with the least energy consumption for which QoE is satisfied and the optimum focal lengths are obtained. However, to obtain accuracy in the focal length setting for a large number $N_{vs}$ of VSes, its complexity grows exponentially, on the order of $O(N_{vs}!)$. The algorithm proposed in this paper uses a heuristic method to reduce the complexity. First, the VS selection algorithm is executed with a complexity order of $O(N_{vs})$. Then, using the focal length setting algorithm, the focal length of each selected VS is found with a complexity order of $O\big(N_a\sqrt{N_a}\,\log(N_a/(\varepsilon\,t(0)))\big)$. So, the complexity order of the proposed method is $O\big(N_a\sqrt{N_a}\,\log(N_a/(\varepsilon\,t(0)))\big)$, indicating less complexity than the optimum method.
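The state count of the exhaustive search given above is easy to tabulate. The small sketch below follows the formula of Section 4.1; the function name is illustrative.

```python
from math import comb

def es_states(n_vs, n_focal):
    """Number of configurations the exhaustive search must examine:
    N_focal * sum over i of C(N_vs, i), as in Section 4.1."""
    return n_focal * sum(comb(n_vs, i) for i in range(1, n_vs + 1))

# With the simulation setting of Section 5 (4 VSes, N_focal = 10):
print(es_states(4, 10))   # 10 * (4 + 6 + 4 + 1) = 150
```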
5. SIMULATION RESULTS
We consider a WVSN consisting of 4 VS nodes for the numerical analysis, and assume that the VSes are placed on a square field with the sink (fusion center) located in the middle of the network, as shown in Figure 1. To demonstrate the efficiency of the proposed algorithm, its performance should be compared with other methods. Since there is no appropriate benchmark in previous works for evaluating the proposed method, it is compared with the exhaustive search (ES) algorithm, which provides an optimal solution. As stated in the previous section, this method divides the focal length range of each VS into $N_{focal}$ parts, and all the possible VS combinations are checked against their energy consumption and the QoE constraints; eventually, the combination which satisfies the QoE with the lowest energy consumption is selected. In comparing the two algorithms, the length of the network (each side of the network area) is varied from 20 meters to 120 meters. It is presumed that the target enters the network at a random coordinate and moves at a constant speed (2.4 m/sec) in a random direction with uniform distribution over $[-\pi, \pi]$. Moreover, the simulation results are averaged over 50 realizations, in which the energy consumption and the accuracy in achieving the desired QoE are calculated for both algorithms.
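A minimal Monte-Carlo harness mirroring this setup is sketched below. The `run_tracking` callable stands in for the full pipeline (VS selection plus focal-length setting) and must be supplied by the caller; all names, the time step, and the horizon are assumptions for illustration only.

```python
import numpy as np

def simulate(run_tracking, network_size, n_runs=50, speed=2.4, dt=1.0, horizon=8, seed=0):
    """Monte-Carlo harness in the spirit of Section 5: the target enters at a
    random point and moves at constant speed in a uniformly random direction.
    run_tracking(pos) should return (energy, qoe_satisfied) for one time step."""
    rng = np.random.default_rng(seed)
    energies, hits, steps = [], 0, 0
    for _ in range(n_runs):
        pos = rng.uniform(-network_size / 2, network_size / 2, size=2)
        heading = rng.uniform(-np.pi, np.pi)
        vel = speed * np.array([np.cos(heading), np.sin(heading)])
        for _ in range(horizon):
            energy, ok = run_tracking(pos)      # caller-supplied pipeline
            energies.append(energy)
            hits += int(ok)
            steps += 1
            pos = pos + vel * dt
    return np.mean(energies), 100.0 * hits / steps   # average energy, accuracy PA (%)
```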
The values of the parameters used in the simulations are shown in Table 2. The visual sensor node parameters are taken from the IEEE 802.15.4 standard.

TABLE 2: THE VALUES OF THE SIMULATION PARAMETERS
P_vu = 2 × 10⁻² mJ/sec    N_b = 8
f_min = 3.5 mm            A = 1.6f
f_max = 91 mm             Δ = 0.4 m
C = 0.3 μm                SNR_min = 30 dB
R_t = 0.5 m               T_e = 350 C
l_t = 1.5 m               R_f = 30 fps
λ = 0.125 m               G_s = 2
N_F = 10                  τ = 0.004 J/rad
G_VS = 2                  S_th = 2.47 m²
s_w = 160 pixels (16 mm)  s_H = 90 pixels (9 mm)

Figure 5 demonstrates the average energy consumed by the two algorithms at different network lengths. The comparison between the energy consumption of the algorithms must take place when the algorithms achieve the same desired QoE. As shown in Figure 5, the energy consumption of the proposed method is nearly optimal, with only a very subtle difference. This suggests that the proposed method provides a solution close enough to the optimal solution (the ES solution).

[Fig. 5: Comparison between the energy consumption of the two algorithms with identical QoE at different network sizes.]

The percentage of success in achieving the desired QoE (performance accuracy) is shown in Figure 6 for the two algorithms, which exhibit similar accuracies. The performance accuracy is calculated as

$$PA = \frac{n_d}{n_{exc}}, \quad (44)$$

where $n_{exc}$ and $n_d$ represent the number of algorithm executions and the number of times the desired QoE is satisfied, respectively.

[Fig. 6: Comparison of the performance accuracy of the two algorithms with identical energy consumption for different network sizes.]

Besides, the convergence times of the proposed method and of the exhaustive search (ES) method have been compared, as demonstrated in Figure 7. As can be observed, the convergence time of ES is approximately ten times larger than that of the proposed algorithm. Moreover, the convergence time increases with the network size. The main reason is that the focal length setting interval increases with the network size, so the feasible set containing the optimal solution becomes larger. Considering the computational complexity of the proposed method and analyzing the results of Figures 5, 6 and 7, it can be concluded that the proposed algorithm is an efficient choice with high practical capability.

[Fig. 7: Comparison of the convergence time of the two algorithms (ES and the proposed method) with identical energy consumption for the different network sizes.]

To illustrate how the VSes are activated and how their focal lengths are set while the target is tracked, a realistic target tracking scenario is simulated. This scenario supposes that the target passes through the network following the path shown in Figure 8.

[Fig. 8: Example of the path of the target (the network size is 20 m × 20 m).]

As shown in Figure 9 (taking the target trajectory into account), VSes no. 2 and no. 4 are initially activated for target tracking, since the target coverage depends on the orientation of the sensors relative to the target and the focal length constraints must be met as well. The angle between these two VSes relative to the target is 180°, so they can cover a large area of the target. In addition, the focal length constraints are satisfied by these VSes, which leads to the conclusion that they should be enabled for target tracking. VS no. 3 can capture an image of the desired quality, but it cannot provide suitable coverage if activated at the same time as the other sensors. As a result, in this example, VS no. 3 is deactivated and then reactivated in the middle of the path, at the end of which VSes no. 2 and no. 4 are activated. As can be seen, one cannot predict in advance which VS will be activated in each scenario: the VS selection depends on several factors, such as the focal length of each VS, the distance between the VS and the target, the distance between the VS and the fusion center, and so on. Considering Figure 9, we realize that the sensors are activated in the same way when running both algorithms. The focal length settings of the active VSes are shown in Figure 10.

With regard to Figure 10, we find that the focal length adjustment is almost identical for the two algorithms; the slight difference is due to the focal length resolution of the ES method. As can be seen, the focal length gradually decreases to achieve the desired coverage as the target approaches each activated visual sensor.

The instantaneous target coverage obtained by the sensors in the proposed and exhaustive search algorithms is shown in Figure 11. An optimal coverage of the target is accomplished by both algorithms. Figure 11 also shows that the same coverage of the target is obtained by executing either algorithm, which indicates that the proposed method is close to the optimal method.
Fig. 9: Comparison between VS mode change (ON/OFF) when the target is crossing the network following the path shown in Figure 8 (when executing the proposed method and ES algorithm). [Per-sensor ON/OFF timelines for VS1–VS4; y-axis: Sensor Mode (ON/OFF); x-axis: time (sec), 0–8 s; curves: Proposed Method and ES.]

Fig. 11: Comparison of the target coverage when executing the proposed algorithm with the ES algorithm based on the path shown in Figure 8 (optimal coverage S_th = 2.47 m²).
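For orientation, the coverage part of the desired QoE evaluated in Figure 11 can be read as a threshold on the target surface seen by the active sensors. One simplified way of writing it, which ignores the overlap between the footprints of different VSes and is therefore not identical to the paper's two-dimensional coverage model, is

S_{\mathrm{cov}}(\mathcal{A}) \;=\; \sum_{i \in \mathcal{A}} S_i(f_i, \mathbf{p}_i, \mathbf{p}_t) \;\geq\; S_{th},

where \mathcal{A} is the set of active VSes, S_i is the target surface covered by VS i given its focal length f_i, its position \mathbf{p}_i and the target position \mathbf{p}_t, and S_{th} is the coverage threshold (2.47 m² along the simulated path). The symbols \mathcal{A}, S_i, \mathbf{p}_i and \mathbf{p}_t are introduced here only for this illustration.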

Fig. 10: Adjustment of the VS focal length when the target is crossing the network following the path shown in Figure 8 (when executing the ES and the proposed algorithms). [Per-sensor focal-length curves for VS1–VS4; y-axis: Focal Length (mm); x-axis: time (sec), 0–8 s; curves: Proposed Method and ES.]
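According to the conclusions, the focal lengths of the active sensors are tuned with the Log-Barrier method. The sketch below illustrates that idea on a single sensor; the coverage and energy models are simplified placeholders (a pinhole footprint and a cost proportional to the captured footprint), the parameter values are hypothetical, and a crude grid search stands in for the Newton inner iterations normally used.

import math

D = 20.0                  # assumed VS-to-target distance (m)
WS, HS = 4.8, 3.6         # assumed sensor width/height (mm)
S_TH = 2.47               # required covered surface (m^2), as in Figure 11
F_MIN, F_MAX = 4.0, 80.0  # assumed focal length range (mm)

def coverage(f):
    # Footprint area at distance D for focal length f (simple pinhole model).
    return (D * WS / f) * (D * HS / f)

def energy(f):
    # Placeholder cost: wider views capture more data and cost more energy.
    return coverage(f)

def barrier_objective(f, t):
    slack = coverage(f) - S_TH
    if slack <= 0 or f <= F_MIN or f >= F_MAX:
        return math.inf
    return t * energy(f) - math.log(slack) - math.log(f - F_MIN) - math.log(F_MAX - f)

def log_barrier_focal_length(t=1.0, mu=10.0, outer_iters=6, grid=2000):
    # Outer loop of the log-barrier method: minimize the barrier objective,
    # then increase t to tighten the approximation of the constrained problem.
    f_star = F_MIN
    for _ in range(outer_iters):
        candidates = [F_MIN + (F_MAX - F_MIN) * k / grid for k in range(1, grid)]
        f_star = min(candidates, key=lambda f: barrier_objective(f, t))
        t *= mu
    return f_star

f_opt = log_barrier_focal_length()
print(f"focal length ~ {f_opt:.1f} mm, coverage ~ {coverage(f_opt):.2f} m^2")

Under these assumptions the minimizer sits essentially on the coverage boundary, so the optimal focal length shrinks as the distance D decreases; this is consistent with the gradual decrease of the focal lengths seen in Figure 10, although the paper's actual subproblem uses its own coverage and energy models inside the barrier framework.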

6 CONCLUSIONS

In this paper, two important contradictory challenges (energy consumption and coverage quality in WVSNs) have been examined. First, the target coverage model was expressed mathematically based on the VS and target positions and the intrinsic structure of the visual sensors. Then, the energy consumption minimization problem with respect to the QoE constraints (coverage and the quality of the image received from the target) was converted into a convex problem, and an innovative method was presented to solve it. Due to the nature of the problem, the energy minimization and the focal length adjustment problems were addressed separately. In the former, the VS selection priority function was extracted by using a convex optimization framework, and a combination of the highest-priority VSes that satisfy the coverage constraint was selected based on this priority function. The focal length adjustment was then carried out for the active visual sensors using the Log-Barrier method. To evaluate the proposed method, the energy consumption and the accuracy of QoE satisfaction have been compared with those of the optimal method (Exhaustive search), and simulation results attest that the two solutions are very close, while the proposed method has less computational complexity than the optimal one, which allows its practical implementation.

ACKNOWLEDGMENTS

The authors would like to acknowledge the funding support of Babol Noshirvani University of Technology through grant program No. BNUT/389059/98.
Reza Ghazalian received the B.Sc. degree in Electronics Engineering and the M.Sc. and Ph.D. degrees in Communications Engineering from Babol Noshirvani University of Technology, Babol, Iran, in 2009, 2011 and 2017, respectively. He is currently an Assistant Professor with the Department of Electrical and Computer Engineering, Buein Zahra Technical University, Buein Zahra, Iran. His current research interests include wireless visual sensor networks, optimization, SWIPT in 5G, mm-wave communications, and signal processing for communication.

Ali Aghagolzadeh (S’87-M’92-SM’08) received the B.S. degree in Electrical and Electronic Engineering from University of Tabriz, Tabriz, Iran, in 1985. He received the M.S. and the Ph.D. degrees in Electrical Engineering from the Illinois Institute of Technology, Chicago, IL, USA, and Purdue University, West Lafayette, IN, USA, in 1988 and 1991, respectively. He is currently a Professor with the Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran. His research interests include image processing, video coding and compression, information theory, and computer vision. He has supervised numerous master and Ph.D. students and published more than 200 scientific peer-reviewed journal and conference papers.

Seyed Mehdi Hosseini Andargoli received the B.Sc. degree in Electronics Engineering from Shahed University, Tehran, Iran, in 2004, and the M.Sc. and Ph.D. degrees in Telecommunication Systems Engineering from the K. N. Toosi University of Technology, Tehran, in 2009 and 2011, respectively. He is currently an Assistant Professor with the Department of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran. His current research interests include resource allocation of cellular networks, cognitive radio networks, relay networks, and optimization.