
Journal of Network and Computer Applications 123 (2018) 42–56


A cognitive chronometry strategy associated with a revised cloud model to deal with the dishonest recommendations attacks in wireless sensor networks

Farah Khedim a,∗, Nabila Labraoui a, Ado Adamou Abba Ari b,c
a STIC, University of Tlemcen, Algeria
b LaRI, University of Maroua, Cameroon
c LI-PaRAD, Université Paris Saclay, University of Versailles Saint-Quentin-en-Yvelines, France

A R T I C L E  I N F O

Keywords: Trust and reputation systems; Security; WSN; Bad mouthing; Ballot stuffing; Cloud model; ABC

A B S T R A C T

Wireless sensor networks (WSNs) face many security issues. While external attacks can be prevented with traditional cryptographic mechanisms, internal attacks remain difficult to eliminate. Trust and reputation systems have recently been suggested by many researchers as a powerful tool for guaranteeing an effective security mechanism. They enable the detection and the isolation of both faulty and malicious nodes. Nevertheless, these systems are vulnerable to deliberate false or unfair testimonies, especially in the case of dishonest recommendations attacks, i.e. bad-mouthing, ballot-stuffing and collusion attacks. In this paper, we propose a novel bio-inspired trust model for WSNs, namely the Bee-Trust Scheme (BTS), based on the use of both a modified cloud model and a cognitive chronometry parameter. The objective of the scheme is to achieve both a higher detection rate and a lower false positive rate for dishonest recommendations attacks by distinguishing between erroneous recommendations and dishonest ones, a distinction that has thus far been overlooked by most research work. Simulation results demonstrate that the proposed scheme is both effective and lightweight even when the number of dishonest recommenders is large.

1. Introduction

1.1. Background

Over the past few years, wireless sensor networks (WSNs) have proven to be one of the most useful technologies due to the fact that they are potentially low-cost solutions to a variety of real-world challenges (Akyildiz et al., 2002). The continuous development of WSNs has contributed to their extensive application in various industries, including key areas such as the electrical, healthcare, and military industries (Jin et al., 2017). However, as for all technologies, the advantages of WSNs are often diminished by the presence of risk factors and the potential for abuse (Momani and Challa, 2010). The capture of a sensor node will reveal all the security mechanisms as well as all network information to the adversary, which can easily generate so-called insider attacks, bypassing encryption and password security systems (Perrig et al., 2004). As a result, the adversary node may be taken as a normal one in the network, which makes it possible to delete, intercept, or insert wrong information and to successfully pass the authentication process with its neighbors. Once a node is compromised, the integrity and availability of the entire network's applications can be destroyed (Feng et al., 2015). Thus, network security is an absolute necessity in order to guarantee the proper functioning of the whole network. While asymmetric cryptographic protection deals with external attacks, internal attacks remain undetected. This situation leads to the development of effective schemes that can cope with these attacks (Labraoui et al., 2016).
Trust and reputation models have recently been suggested as an effective yet challenging approach to the security of wireless sensor networks (Feng et al., 2011). Trust management is fundamental to identify malicious, selfish and compromised nodes which have been authenticated (Wu and Zheng, 2015). While many secure schemes focus on preventing attackers from entering the network through secure key management, trust management, on the other hand, takes a further step to protect the entire network even if malicious nodes have had access to it. This feature is achieved in order to complement cryptography and to promote

∗ Corresponding author.
E-mail address: farahbouhamed@gmail.com (F. Khedim).
https://doi.org/10.1016/j.jnca.2018.09.001
Received 22 September 2017; Received in revised form 4 June 2018; Accepted 4 September 2018
Available online 6 September 2018
1084-8045/© 2018 Elsevier Ltd. All rights reserved.

a healthy collaboration relationship among participant sensors in WSNs (Lopez et al., 2010).
Currently, the most effective way to defend against internal attacks is a trust management system (Fang et al., 2016). However, the performance of reputation models faces several security issues that have not yet been solved completely. One type of these attacks is the dishonest recommendations attack. A malicious node participating in a reputation system can falsely accuse well-behaving nodes of malicious actions and give them untrustworthy feedback in order to lower or destroy their reputation (bad mouthing attacks) (Labraoui et al., 2016; Vijaya and Selvam). Malicious nodes can also falsely increase their own reputation or give a higher recommendation to the other malicious nodes (ballot-stuffing attacks), compromising the network with the hope of changing the outcome of a reputation vote to their advantage (Chen et al., 2012). Several malicious nodes may also collaborate in order to cause greater harm to the network (collusion attack).
Dealing with dishonest recommendations attacks is a significant but hard optimization problem. Indeed, the hardness of these attacks lies in how to distinguish between potentially benevolent nodes and potentially malicious nodes, especially when dishonest recommenders are in the majority. On the other hand, choosing whom to believe is not an easy task, particularly when no single node has a complete global view of all nodes' reputations and when attacks may occur at any time. Thereby, it is difficult to predict recommendation attacks. Letting dishonest recommenders lure the system slyly may eventually prove catastrophic and, at worst, lead to the collapse of the whole network, besides the disclosure of some vital network information (Khalid et al., 2013).
Different methods have been proposed in the literature for coping with the dishonest recommendations problem. Most of these approaches are based on deviation detection methods that rely on statistical approaches. In these methods, the recommendations are either judged as a function of all the received recommendations, and those that are far away from this majority opinion are treated as dishonest ones, as in the majority similarity measure (MSM); or the recommendations are judged according to a comparison between the opinion of the requester node and the received recommendations, as is the case in the personalized similarity measure (PSM). Unfortunately, deviation detection methods are known to be ineffective if the dishonest recommendations are in the majority or the recommendation deviation is slight, and their dependence on the interaction history makes the detection schemes confused (Chen et al., 2012). Therefore, given the limitations of classical techniques, it becomes essential to find new methods that complement the deviation test techniques in order to reinforce the detection, by trying to look at the problem from a new angle and taking other factors into account.
An increased emphasis is now being given to techniques possessing artificial intelligence (Sen and Mathur, 2016). Computational intelligence (CI), a fast evolving area, is currently attracting a lot of researchers' attention in dealing with many complex problems (Abdou et al., 2011; Xing and Gao, 2014). Swarm Intelligence (SI), one of the most important disciplines in CI, allows the development of intelligent multi-agent systems by taking inspiration from the collective behavior of social insects such as ants, termites, bees, and wasps (Blum and Li, 2008; Zungeru et al., 2012). The collective behavior of these social communities provides efficient metaheuristic tools and algorithms with many desirable and interesting properties applied in WSNs (Ari et al., 2016, 2017), as surveyed in (Xing and Gao, 2014; Blum and Li, 2008). The advantage of these approaches over traditional techniques is their robustness and flexibility (Blum and Li, 2008). Thus, bio-inspired approaches are not only less complex but also very efficient methods (Rathore, 2016).

1.2. Author's contributions

The main concern in this paper is to address the problems mentioned above. We propose in this paper a bio-inspired trust model for WSNs, namely the Bee-Trust Scheme (BTS), to deal with the dishonest recommendations attacks against trust and reputation models for mobile WSNs. The proposed scheme is based on the intelligent behaviors of the bee colony. As far as we know, our model is the first one to apply the bee colony algorithm in order to optimize the detection of dishonest recommendations attacks in WSNs. Moreover, contrary to the existing schemes, our protocol solves the dishonest recommendations problem from new angles. It is based on a revised cloud model theory to filter out the dishonest recommendations, and a loyalty decision is introduced based on a cognitive chronometry parameter to detect the source of the attack. The main contributions of this work are as follows:

• Firstly, we model the reputation management scenario by drawing inspiration from the foraging behavior of the honey bee swarm. Some features of the artificial bee colony (ABC) algorithm are used to take advantage of the collective intelligence implemented in the ABC for finding the best food source. By applying the survival of the fittest principle, the fitness function is modeled as a multi-objective function that uses the weighted sum approach to find the fittest solution.
• Secondly, our protocol tends both to mitigate the influence of dishonest recommendations and to disclose dishonest recommenders. Thereby, two novel concepts are applied: a recommendation deviation based on a revised cloud model theory, providing a robust statistical method for the detection of outlying recommendations, and a cognitive chronometry parameter based on a response speed index, to detect liar recommenders.

1.3. Organization of the paper

The rest of this paper is organized as follows. Section 2 reviews some related work. A thorough analysis of the dishonest recommendations attacks as well as the dishonest recommendation problem is presented in section 3. In section 4, we first give an overview of the foraging behaviors of honeybee swarm intelligence and then we present the main steps of the ABC algorithm. The system model is presented in section 5. In section 6, we describe our proposed Bee-Trust Scheme (BTS) in detail. The computation and memory costs are given in section 7. The performance evaluation is given in section 8. In section 9, we then conclude our work and give some future directions.

2. Related work

In recent years, there has been considerable interest in finding effective ways to mitigate or even eliminate the influence of dishonest recommendations attacks in WSNs. Different trust and reputation model (TRM) proposals have been suggested recently for addressing these attacks. These schemes can be classified into two major categories (Khedim et al., 2015): avoiding dishonest recommendations (Mármol and Pérez, 2011; Boukerch et al., 2007; Chen et al., 2007; Michiardi and Molva, 2002; Buchegger and Le Boudec, 2002), and dealing with dishonest recommendations (Chen et al., 2012; Dellarocas, 2000; Zouridaki et al., 2009; Buchegger and Boudec, 2004; Srinivasan et al., 2006; Hur et al., 2005; Zahariadis et al., 2010; Babu et al., 2014).
In the former category, several schemes only allow avoiding bad recommendations attacks instead of proposing effective mechanisms to detect and remove them from the network, thanks to the use of two principal methods: (1) First-hand information (Boukerch et al., 2007; Chen et al., 2007), where only direct information is considered in the trust model; (2) Positive/negative recommendations (Mármol and Pérez, 2011; Michiardi and Molva, 2002; Buchegger and Le Boudec, 2002), where some proposals believe that bad mouthing/ballot stuffing is completely eliminated in a reputation model where negative/positive experience, respectively, is not taken into account. However, these beliefs do not stand all the time. On the one hand, using only first-hand information requires much more time and more energy to build the reputation. On


the other hand, even in a reputation model where only positive experience is considered, bad mouthing can be carried out, in a way that an attacker reports a relatively low positive recommendation for the victim (Chen et al., 2012). Indeed, the attacker can harm the system's efficiency, as nodes will not be able to exchange their bad/good experience with malicious/good nodes.
Since avoiding dishonest recommendations is not such an effective solution, and since the survival of a WSN is dependent on the cooperative and trusting nature of its nodes (Whitby et al., 2004), coping with dishonest recommendations becomes an absolute necessity. In this section, we focus on protocols that deal with dishonest recommendations attacks. To deal with the dishonest recommendations attacks in WSNs, a number of defense schemes have been proposed; most of them (Chen et al., 2012; Dellarocas, 2000; Zouridaki et al., 2009; Buchegger and Boudec, 2004; Srinivasan et al., 2006; Hur et al., 2005; Zahariadis et al., 2010; Babu et al., 2014) base their detection on deviation test techniques relying on statistical methods. Deviation test techniques can be classified into two main categories (Chen et al., 2012): the majority similarity measure (MSM), e.g., (Dellarocas, 2000; Whitby et al., 2004), and the personalized similarity measure (PSM), e.g., (Buchegger and Boudec, 2004; Srinivasan et al., 2006; Hur et al., 2005).

2.1. Majority similarity measure

In the Majority Similarity Measure (MSM), the overall opinion is calculated over all the received recommendations, and the recommendations that are far away from this majority opinion are treated as dishonest ones. In (Whitby et al., 2004), an iterated filtering algorithm based on the beta distribution using a Z-score test is proposed. The recommendations are judged to be honest or dishonest depending on whether or not they belong to the [q, 1 − q] quantile interval. In (Dellarocas, 2000), a cluster filtering approach is introduced to separate fair and unfair ratings by clustering the members of the nearest neighbor set according to the values of their ratings based on some function. That function will most often be either the average of all the ratings or the value of their most recent rating.
However, two main weaknesses make these schemes not so efficient. On the one hand, to filter out the dishonest recommendations, they assume that there is a perceptible deviation between honest recommendations and dishonest recommendations, which is not always the case, especially with smart attackers. On the other hand, the standard deviation is strongly impacted by outliers, which confuses the detection schemes and generates false positives and false negatives.

2.2. Personalized similarity measure

In the Personalized Similarity Measure (PSM), a comparison is made between the opinion of the requester node and the received recommendations, following a defined threshold, in order to filter out the recommendations that deviate too much. In (Buchegger and Boudec, 2004), the recommender's trustworthiness can be evaluated by performing a recommenders test. In the protocol E-Hermes, the test ensures that the recommendations are accepted only when the recommender trustworthiness value is sufficiently close to the first-hand value computed by the evaluated node. In (Srinivasan et al., 2006), a fully distributed reputation system named RRS that can cope with falsely disseminated information is presented. To detect and avoid false reports, a deviation test is made between the expectation of the distribution for the first-hand information and that for the second-hand information. When the deviation test is positive, the recommender rating is considered incompatible and is not used. In (Hur et al., 2005), a distributed reputation-based beacon trust system (DRBTS) is proposed. In DRBTS, the recommendations are judged to be honest or dishonest depending on the outcome of a deviation test. The test is made between the first-hand information of the transmitter beacon node and the first-hand information of the receiver beacon node, according to a defined threshold value. If the deviation test is negative, the recommendation is disregarded as incompatible information.
The personalized similarity measure holds under the condition that the requesting node is sufficiently confident about its first-hand information and takes it as the reference value for determining the deviation. This case is not entirely realistic. In mobile WSNs, there may be no interaction between two nodes, making it difficult to have consistent direct information. Even if there are interactions, this first-hand information only reflects the experience between the requester node and the evaluated node, which may differ from the received recommendations for at least two reasons. Firstly, the reputation values depend heavily on the time factor; thereby, a reputation based on old observations may differ from one based on recent observations. Secondly, a malicious evaluated node can behave differently with the requester node on one side and a recommender on the other side, as in the conflicting behavior attack (Yu et al., 2012), generating different reputation values between the first-hand and the second-hand information.
Although effective for the detection of dishonest recommendation attacks, deviation-based detection methods have several limitations, as mentioned above. These disadvantages reduce their effectiveness and generate high rates of false positives and false negatives. Incorporating additional factors into these techniques seems to be an ideal solution to improve their efficiency. Using this reasoning principle, one of the best protocols for detecting dishonest recommendation attacks was presented by Chen et al. (2012).
The protocol RecommVerifier uses two novel mechanisms, time verifying and proof verifying, in addition to the traditional deviation method based on MSM. The time verifying module allows checking each recommendation again in order to determine whether it reflects the requesting node's future reputation in the time domain. It was proposed as a correction mechanism to the deviation detection module. More specifically, suppose the requester reputation at time t is DR(t) and a recommender recommendation is r(t). After one period (i.e., at time t + 1), the requester reputation becomes DR(t + 1). We can conclude that r(t) reflects the requester node's future reputation if one of two conditions is satisfied: r(t) has converged to DR(t + 1), or r(t) is on the way to converging to DR(t + 1). However, the operating principle of the time verifying module is very similar to the mechanism used in PSM, and it therefore has the same disadvantages as the latter. Another disadvantage is that the mechanism involves the use of different evaluations of the requester node, DR(t), DR(t + 1) and DR(t + m), in a network where energy is scarce; obtaining these reputation evaluations remains difficult, especially in the case of mobile ad hoc networks where the nodes are in constant mobility. The protocol uses a second mechanism called proof verifying that works at the side of the evaluated node. The idea is that the recommendations received by the requester node are forwarded to the evaluated node, which checks the recommender's honesty by taking the history of exchanges as proof. This mechanism takes effect when the evaluated node acts in the role of requester node. Therefore this mechanism is much more a tool for the selection of the recommenders than a mechanism for the detection of dishonest recommendations attacks.
Unlike the existing schemes, our protocol solves the dishonest recommendation problem from a new angle. First, the protocol is modeled by drawing inspiration from the foraging behavior of the honey bee swarm. Then, a new approach is proposed to calculate the deviation. This approach is based on a revised cloud model theory to detect recommendations which differ from the majority opinion. To improve the performance of this detection method, we take into account an additional criterion, named the cognitive chronometry parameter, to detect the source of the attack, that is to say the liar recommenders. As far as we know, none of the related works considered the possibility that some of the alleged dishonest recommendations are only erroneous reputation values, which causes false positives. In WSNs, erroneous recommendations may be due to node failure, to a bad communication channel, etc. Thereby, the cognitive chronometry parameter based on the response speed index addresses the false positive and false negative problems caused by the deviation test by disclosing the malicious nature of the node and providing proof of the lie, in order to distinguish between dishonest and erroneous recommendations. The detection of liar recommenders in our protocol confirms the existence of the attack. Our protocol remains simple while being effective against dishonest recommendations attacks.
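As an aside, the deviation-test idea underlying the MSM schemes of Section 2.1 can be made concrete with a few lines of code. The following Python sketch is purely illustrative (it is not taken from any of the cited protocols; the threshold k and the function name are our own choices): it treats as suspicious any recommendation that deviates from the mean of the received set by more than k standard deviations.

# Illustrative majority-deviation (MSM-style) filter, not from the cited papers.
# A recommendation is kept only if it lies within k standard deviations of the mean.
def msm_filter(recommendations, k=1.5):
    m = len(recommendations)
    mean = sum(recommendations) / m
    std = (sum((r - mean) ** 2 for r in recommendations) / (m - 1)) ** 0.5
    return [(r, abs(r - mean) <= k * std) for r in recommendations]

# The 0.10 report deviates strongly from the majority opinion and is flagged.
print(msm_filter([0.70, 0.72, 0.69, 0.71, 0.10, 0.73]))

As noted above, a filter of this kind is fragile on its own: the outlier itself inflates the standard deviation, and colluding attackers in the majority shift the mean, which is precisely the weakness addressed in the remainder of the paper.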


3. Dishonest recommendation analysis

Trust, as an integrative component of human society, is an abstract matter that we deal with in our everyday lives (Khedim et al., 2015). A trust and reputation model (TRM) involves the participation of different actors to monitor the changing behaviors of nodes in a network: (a) the requester node, which has to determine by itself which recommenders are the best ones according to well-defined criteria, (b) the recommenders, who provide the recommendations, and (c) the evaluated node concerned by the request. In the TRM, when all the recommenders are honest, the requester node can assess the evaluated node accurately. Thus the reputation model helps the network to avoid bad nodes and improves interactions. Therefore, the recommenders' honesty gives the requester node the confidence to interact with an unknown or not very well-known evaluated node. However, the recommender nodes may be malicious and provide recommendations that deviate from their experience.
Therefore, what are the dishonest recommendations attacks? And what are their influences on the WSN? To answer these questions, we will start by giving a general description of the dishonest recommendations attacks. The dishonest recommendation problem is presented and explained thereafter.

3.1. Dishonest recommendations attacks

To deal with the problem of uncertainty in decision making, trust is used as an elementary criterion for authorizing known, partially known, and unknown nodes to interact with each other (Iltaf et al., 2013). Having recourse to recommenders to seek the recommendation of an unfamiliar evaluated node can lead to erroneous decisions if the recommenders provide recommendations that deviate from their experience.
According to the parts of the trust and reputation model involved in the attack, we define our own categorization of the dishonest recommendations attacks, following the definitions given by Dellarocas (2000):

• Recommenders misbehaving attack, well known as the bad mouthing attack, i.e., recommenders propagate negative reputation information about an honest evaluated node to lessen its trust value. In this attack model, malicious recommenders intentionally give negative recommendation values for neighbor nodes, even if the neighbor nodes are normal ones. Thus, recommendations under a bad mouthing attack cannot reflect the real opinion of the recommender (Han et al., 2014). Fig. 1-A shows the actual reputation value RB for node B maintained in the tables of nodes A and D. In Fig. 1-B, the adversary compromises node C. It launches a bad mouthing attack by assigning a negative reputation value -RB to the good node B to cause confusion between node A and node D. Consequently, nodes A and D will have opposite reputation values -RB and RB for the same node B (Alzaid et al., 2013).
• Recommenders and evaluated node misbehaving attack, well known as the ballot stuffing attack, i.e., recommender nodes collude to propagate a false positive recommendation to elevate the trust values of the malicious evaluated node. In this attack, malicious recommenders intentionally give higher recommendation values to the other malicious nodes, compromising the network with the hope of misguiding and changing the outcome of a reputation vote to their advantage. Fig. 2-A shows that the adversary has succeeded in compromising nodes B and C. These compromised nodes collude to increase their reputations, as shown in Fig. 2-B. Consequently, the reputation calculation for nodes A and D at nodes B and C will be distorted (Alzaid et al., 2013).
• Collusion attacks, i.e., malicious nodes collaborate to give false recommendations about normal nodes while promoting their own reputation. By combining the badmouthing and the ballot-stuffing attacks, the collusion attack causes greater harm to reputation models.

Fig. 1. Badmouthing attack.
Fig. 2. Ballot-stuffing attack.

3.2. Dishonest recommendations problem

Designing an effective protocol to avoid the influence of dishonest recommendations from malicious recommenders is a major need. However, the development of such a protocol is a real challenge which must deal with several problems.
Firstly, a dishonest recommendations detection protocol must be able to distinguish between dishonest recommendations provided by malicious recommenders and erroneous recommendations provided by honest ones (Khedim et al., 2015). While dishonest recommendations aim to distort the recommendation values in order to harm the network, erroneous recommendations have no dishonest intent and are due either to bad communication channels, node failures, packet losses, etc. Erroneous recommendations may also be due to a bad


behavior of the evaluated node, as in the conflicting behavior attack. Confusing dishonest recommendations with erroneous recommendations will cause false positives by judging honest recommenders as dishonest.
Secondly, a dishonest recommendations detection protocol must remain effective even if the attackers are numerous enough or smart enough (Chen et al., 2012; Khedim et al., 2015). In the first case, the number of attackers can be significant; the evaluated node will therefore receive a significant number of dishonest values. If these recommendations are aggregated blindly without filtering out the false recommendations, they can skew the evaluation of an entity's trustworthiness (Iltaf et al., 2013). In the second case, the attackers will introduce a slight deviation in their recommendations to remain undetectable to the deviation mechanism. Therefore, the protocol can no longer distinguish dishonest recommendations from honest ones. Thus false negatives are caused.

4. Swarm intelligence-honeybees

Honey bees are wonderful social insects capable of fascinating feats. These insects are the most important pollinator on earth and have been a source of nutritious and natural diet materials for human consumption since ancient times (Crane et al., 1980). The self-organized and collective behavior of honey bees enables them to accomplish a variety of complex tasks not feasible by a multitude of solitary insects. Foraging behavior is known to play an important role in evolutionary biology. It is not only a major determinant of survival and growth but also an influencing factor in the pollination and dispersal of potential food organisms (Fox et al., 2001). This behavior has always been a central concern for the field of ecology, because understanding what animals eat facilitates the understanding of many ecological problems (Kamil et al., 2012). The foraging behavior of the bee colony (nest site selection, food foraging) is one of the main activities in its life. This behavior has attracted researchers to design optimization algorithms (Xing and Gao, 2014). It remained mysterious until Von Frisch (1974) decoded the language of the bee waggle dance.
In this work, we use this foraging behavior mechanism as an inspiration to take advantage of the collective intelligence of a honey bee Apis mellifera colony.

4.1. Foraging behaviors in honeybees

One of the main behavioral components of social insect societies featuring intelligent decision making in complex and unpredictable environments is foraging behavior (Tereshko and Lee, 2002). The foraging behaviors of honey bees Apis mellifera represent the link between the honey bee colony and the ambient environment. Depending on the collected resources, foraging activity is classified as nectar, water, pollen or resin foraging (Picard-Nizou et al., 1995), to provide the necessary nutrition to the whole colony. The foraging process includes two main modes of behavior: recruitment to nectar sources and abandonment of a source (Tereshko and Lee, 2002). The process starts when the worker bees leave their nest to search for a food source. When a bee finds food (i.e., flowers), it returns to the hive and passes on the information about the nectar source (i.e., distance from the hive, nectar quantity, and nectar quality) through a special movement called the "waggle dance". The other bees in the nest watch the dance to determine the profitability of the food source. Within a period of time, more foraging bees will leave the hive to collect nectar from the selected source.

4.2. Artificial bee colony algorithm

The Artificial Bee Colony (ABC) algorithm, proposed in (Karaboga and Basturk, 2007), is one of the most popular biologically-inspired optimization algorithms. It is a class of swarm intelligence technique that mimics the intelligent foraging behavior (such as exploration, exploitation, recruitment, and abandonment) of the honey bee swarm. Its excellent global optimization ability and ease of implementation have allowed the ABC algorithm to attract attention from scholars, and it has thus been applied to various fields.
Typically, the ABC algorithm consists of three kinds of bees, namely, employed bees, onlookers, and scouts. These three groups are described hereinafter.

4.2.1. The employed bees
These worker bees are responsible for exploiting the nectar sources and sharing information with the other bees waiting in the hive through the celebratory "waggle dance". In the ABC algorithm, one employed bee is assigned to each food source. Therefore, the number of employed bees is equal to the number of food sources around the hive (Karaboga and Basturk, 2007; Ari et al., 2018).

4.2.2. The onlooker bees
These bees wait on the dance area to get information about the food sources. An onlooker makes a decision to choose one food source rather than another depending on the nectar information distributed by the employed bees. The greater the quantity of nectar, the greater the likelihood of selecting the source.

4.2.3. The scout bees
Scouts randomly search the environment surrounding the nest in order to find new food sources. The employed bee whose food source is exhausted becomes a scout.

Accordingly, the search process can be divided into three steps: the employed bees are sent to the food sources and measure their nectar amounts; the onlookers share the information of the employed bees, select food sources and determine the nectar amount; the scout bees are determined and then sent out to discover new food sources.
The main steps of the ABC algorithm are given in Algorithm 1 (Karaboga and Basturk, 2007):

Algorithm 1 Pseudo-code of the ABC algorithm.
Begin
1. Initialize
2. Repeat
3.   a. Place the employed bees on the food sources
4.   b. Place the onlooker bees on the food sources
5.   c. Send the scouts to discover new food sources
6. Until Maximum Cycle Number
End

In ABC, each food source represents a candidate solution, and the nectar amount of the associated food source corresponds to the fitness value of the solution.

5. System model

In this section, we describe the assumptions about the mobile WSN and the threat model. Moreover, we describe the vocabulary used.

5.1. Network model

We assume the WSN is highly dynamic, which may be due to node mobility, dynamic environment factors or nodes' changing behaviors (Chen et al., 2012). Additionally, the network follows a flat architecture. The identity of each node is unique and stable. Also, each node knows its own location (for instance, using a GPS or a localization algorithm). A reputation mechanism is applied in the network to judge the interactions between neighbor nodes such as packet receive, send,


delivery and consistency. The interactions are therefore classified as negative and positive according to the quality of service provided.

5.2. Threat model

In this paper, we assume an attacker model in which a node incorrectly propagates recommendations with dishonest intent. We define a dishonest recommender (bad recommender) as a node which voluntarily reduces the positive reputation of a good node as well as increases the negative reputation of a bad one. Conversely, a node that propagates recommendations correctly is defined as an honest recommender (good recommender). Further, we assume that the number of dishonest recommenders is less than or equal to the number of honest ones.

5.3. Notations

In the following, we describe the main symbols and notations used in the paper. Table 1 gives the considered notations.

Table 1
Notations.
Symbol  Meaning
TRM     Trust and Reputation Model
S       Requesting node
x       requesting node
N       Number of recommenders
M       Number of recommendations
MSM     Majority Similarity Measure
PSM     Personalized Similarity Measure
ABC     Artificial Bee Colony
SBM     Scout Bees Module
EBM     Employed Bees Module
OBM     Onlooker Bees Module
TH      Throughput
RP      Recognition percentage
FP      False positive percentage
FNP     False negative percentage

6. Bee-Trust Scheme

In this section, we present a novel defense scheme able to cope with dishonest recommendation attacks in WSNs, named the Bee-Trust Scheme (BTS). We will start by giving a general description of this scheme. Then the different modules are presented and explained respectively.

6.1. General description

The Bee-Trust Scheme (BTS) is based on a novel architecture to model the reputation management scenario by drawing inspiration from the foraging behavior of the honeybee swarm. The proposed scheme uses some features of the artificial bee colony (ABC) algorithm, exploiting the collective intelligence emerging from honey bees to filter out dishonest ratings, which is often considered a difficult and insoluble problem (Zouridaki et al., 2009). BTS uses the four essential components of the ABC algorithm: the food sources, the employed bees, the onlooker bees and the scout bees, in order to benefit from their advantages regarding the exploitation and the exploration processes. The proposed protocol aims at maximizing the detection of dishonest recommendations by unveiling the liar face of recommenders, in order to avoid any loopholes for such attacks. A multi-objective fitness function is applied using the weighted sum approach. To express the multi-objective fitness function, we take into account two main factors: a recommendation deviation parameter based on a revised cloud model to detect dishonest recommendations, and a response speed index to detect liar recommenders. Liar recommenders are regarded as the source of the attack; they intentionally increase or decrease the reputation of a specified node according to their malicious needs.
The BTS works as follows: when a node wants to evaluate the reputation of a desired node, it requests recommendations from recommenders belonging to its one-hop neighborhood according to the following main steps. Firstly, recommenders (food sources) are chosen among the one-hop neighbors with a high recommendation reputation, which is determined through the use of first-hand information (direct observation) by the Scout Bees Module (SBM); this first-hand information is updated after each interaction.
Secondly, the Employed Bees Module (EBM) sends employed bees to request recommendations and calculates the fitness value of each recommender by using the proposed multi-objective fitness function given in Eq. (19). Based on the obtained fitness information, the Onlooker Bees Module (OBM) chooses a probably profitable food source (recommender). By applying the survival of the fittest principle, recommenders with lower fitness are considered dishonest and are added to a blacklist. This list is held by each node respectively and helps this node when it acts as a requesting node. The blacklisted nodes cannot be chosen as recommenders again. The recommendations of the higher-fitness recommenders are then aggregated in order to constitute the desired node's indirect reputation.
The main components of the proposed BTS compared with the original ABC algorithm and the problem mapping are given in Fig. 3 and Table 2 respectively.

Fig. 3. The main components of the proposed Bee-Trust scheme compared with the original ABC algorithm.
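As a reading aid, the round just described can be outlined in a few lines. The sketch below only illustrates the control flow of Sections 6.2-6.6 under simplifying assumptions; the reply object, the fitness routine and the threshold gamma are placeholders of our own, not the authors' implementation.

# Illustrative outline of one BTS evaluation round (control flow only).
# compute_fitness stands in for Eq. (19); gamma is the OBM threshold of Section 6.5.
def bts_round(neighbors, blacklist, request_reply, compute_fitness, gamma):
    # SBM: candidate recommenders are one-hop neighbors that are not blacklisted
    recommenders = [n for n in neighbors if n not in blacklist]
    # EBM: one employed bee per recommender collects a reply and scores it
    replies = {r: request_reply(r) for r in recommenders}
    fitness = {r: compute_fitness(replies[r]) for r in recommenders}
    total = sum(fitness.values())
    # OBM: low-probability recommenders are treated as dishonest and blacklisted
    for r in recommenders:
        if total > 0 and fitness[r] / total < gamma:
            blacklist.add(r)
    kept = [r for r in recommenders if r not in blacklist]
    # Aggregation: fitness-weighted combination of the surviving recommendations
    weight = sum(fitness[r] for r in kept)
    rep = sum(fitness[r] * replies[r]["rep"] for r in kept) / weight if weight else None
    return rep, blacklist

The individual steps are specified in the subsections that follow.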


Table 2
Correspondence.
ABC                 BTS
Food source         Recommenders
Hive                Requesting node
Employed Bee        Employed Bees Module (EBM)
Onlooker Bee        Onlooker Bees Module (OBM)
Scout Bee           Scout Bees Module (SBM)
Foraging behavior   Requesting recommendations

6.2. Scout Bees Module (SBM)

Here we present our SBM. Unlike the original scout bee in the ABC algorithm, the proposed SBM does not perform a random search to identify potential sources but relies on first-hand information maintained by the requesting node about its one-hop neighbors. In our model, interactions with only direct neighbors allow significantly lower energy consumption, less processing for trust level calculation, and less memory space, since the evaluated node does not keep trust information about every node in the network but only the information of its neighbors (Labraoui et al., 2016). Obviously, the calculation of first-hand information is immune to the dishonest recommendation attack (Chen et al., 2012). Its calculation can be achieved through several classical algorithms. As adopted by Labraoui et al. (2016), we used the direct trust evaluation. In (Labraoui et al., 2016), a risk-aware reputation-based trust management in WSNs is proposed. The direct trust evaluation is based on the concept that trust is hard to acquire and easy to lose. The local rating LR_{i,j}^{t_k} of node i for node j during time unit t_k (n ⩾ k ⩾ 1) is defined by the formula given in Eq. (1).

LR_{i,j}^{t_k} = \frac{S_{i,j}^{t_k}}{S_{i,j}^{t_k} + U_{i,j}^{t_k}} \times \left( 1 - \frac{1}{S_{i,j}^{t_k} + 1} \right)    (1)

where S_{i,j}^{t_k} (U_{i,j}^{t_k}) is the total number of successful (unsuccessful) interactions of node i with j during time unit t_k.
Contrary to the conventional beta reputation, a balancing factor is introduced to ensure that the trust value increases slowly when the number of unsuccessful interactions is considerably high. Since node behaviors can change from time to time, a time factor is used in order to give more importance to recent ratings without forgetting past behaviors. The direct trust value DT_{i,j} is calculated according to the formula given in Eq. (2).

DT_{i,j} = \frac{\lambda}{\lambda + 1} \times \frac{\sum_{t_k=1}^{n-1} LR_{i,j}^{t_k}}{n - 1} + \frac{1}{\lambda + 1} \times LR_{i,j}^{t_n}    (2)

where λ ∈ ]0, 1[ is the decay factor used to ensure that the most recent ratings will carry more weight when computing the direct trust value. As a result, the direct trust value reflects the most recent status of a node's behavior without forgetting any past behavior.

6.3. Employed Bees Module (EBM)

In the Bee-Trust Scheme, the EBM is mainly responsible for requesting recommendations. In a trust and reputation model, when a node S wants to evaluate the reputation of a desired node x, it requests recommendations from the recommenders R_i (n ⩾ i ⩾ 1) selected by the SBM, by sending them a set of employed bees. As in the ABC algorithm, the number of employed bees is equal to the number of recommenders. Therefore, each employed bee is responsible for one recommender. Each recommender R_i must transmit its own data, namely its ID (identifier), its position claim and its history log corresponding to its interactions with the requesting node x, as well as the reputation value assigned by R_i to x. While most of the protocols (Chen et al., 2012; Dellarocas, 2000; Zouridaki et al., 2009; Buchegger and Boudec, 2004; Srinivasan et al., 2006; Hur et al., 2005; Zahariadis et al., 2010; Babu et al., 2014; Whitby et al., 2004) assess the reputations as honest or dishonest, our protocol takes a further step by also considering erroneous recommendations to address the false positive and false negative issues. Thereby, transmitting the logs relating to the transactions between the recommender and the requesting node is one of the proposed solutions. This feature is achieved in order to check the veracity of the information contained in the logs compared to the assigned reputation value, as well as to have tangible proof of the honesty or dishonesty of each recommender.
The main steps of the send reply procedure relative to each recommender are given in Algorithm 2.

Algorithm 2 Send reply procedure.
Begin
1. Data
2. R_i {identifier of recommender i}
3. Pos_{R_i}^t {position of R_i at time t}
4. History_{R_i,x}^t {R_i's logs corresponding to its exchanges with the requesting node x until time t}
5. Rep_{R_i,x} {the reputation assigned by R_i to x}
6. pck.id ⟵ R_i
7. pck.pos ⟵ Pos_{R_i}^t
8. pck.history ⟵ History_{R_i,x}^t
9. pck.rep ⟵ Rep_{R_i,x}
10. Send(pck)
End

6.4. Fitness function derivation

As soon as the employed bees return to the requesting node, the EBM analyzes the information contained in each message to evaluate the fitness of each recommender respectively. Appealing to the survival of the fittest principle, the recommenders with the lowest fitness values are declared as dishonest ones. For this purpose, a multi-objective function is used to maximize the detection of dishonest recommendations as well as liar recommenders. Two optimization parameters are taken into account: a reputation deviation based on a revised cloud model theory and a cognitive chronometry parameter using a response speed index. The first constraint that has to be respected is the minimization of the reputation deviation f^{RD}. Indeed, this allows dismissing the recommendations that are far away from the majority opinion and enhances the accuracy of our protocol. Thereby, a revised cloud model is applied to handle uncertainties in the recommendations domain. The second constraint to be faced is the detection of liar recommenders, f^{lying}. This feature is achieved in order to bring a correction mechanism to eventual false positives or false negatives that may be generated by the reputation deviation. Beyond the detection of dishonest recommendations, f^{lying} allows the detection of liar recommenders, providing an effective way to differentiate between dishonest recommendations and erroneous recommendations.
The mathematical representation of the optimization parameters is given in the following.

Fig. 4. Process of forward cloud generator and backward cloud generator.
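Before detailing these two parameters, note that the first-hand information on which the SBM relies (Section 6.2) is inexpensive to compute. The following Python sketch of Eqs. (1) and (2) is only illustrative; the variable names and example values are ours, and we assume the per-epoch counts of successful and unsuccessful interactions are already available.

# Sketch of Eqs. (1)-(2): per-epoch local rating and time-decayed direct trust.
def local_rating(s, u):
    # Eq. (1): s successful and u unsuccessful interactions during one time unit
    return s / (s + u) * (1 - 1 / (s + 1)) if (s + u) > 0 else 0.0

def direct_trust(ratings, lam=0.5):
    # Eq. (2): 'ratings' holds the local ratings for time units t_1..t_n;
    # the decay factor lam weights the latest rating against the averaged history.
    history, latest = ratings[:-1], ratings[-1]
    past = sum(history) / len(history) if history else 0.0
    return (lam / (lam + 1)) * past + (1 / (lam + 1)) * latest

# Mostly successful interactions let the trust value grow slowly, as intended.
print(direct_trust([local_rating(8, 2), local_rating(9, 1), local_rating(10, 0)]))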


6.4.1. Cloud model
The trust and reputation system allows a node to rate another node both positively and negatively. Ensuring that recommenders will be honest in their assessment of the evaluated node through ratings is not realistic in a WSN. In most cases, there is no guarantee, mainly because of the hostile and unattended environment in which sensors are deployed, as well as the malicious and unpredictable nature of the attackers. Therefore, the detection of dishonest recommendations is considered a difficult and insoluble problem. We present a deviation detection mechanism based on a novel hybrid model integrating randomness and fuzziness for the detection of outlying recommendations. Our filtering algorithm uses a revised cloud model theory to measure the deviation of a received recommendation from a normal recommendations distribution.
The "cloud model" introduced by professor Li in (Deyi et al., ) is a novel cognition model based on the conversion between qualitative concepts and quantitative data. Cloud models prove to be an effective tool for applications in data mining and in resolving uncertainties. The model consists of three digital characteristics, named the expectation Ex, the entropy En and the hyper-entropy He. Ex is the measurement of the certainty that represents the qualitative concept; it is the expected value of the cloud drops. En is the entropy of the attribute, which reflects the ambiguity of Ex; it can also be viewed as the measurement of the uncertainty of qualitative concepts. He is the hyper entropy of the attribute, namely the entropy of the entropy, which is a measure of the uncertainty of the entropy En.
A cloud model can be generated by a cloud generator, which is a basic tool performing the transformation between qualitative concepts and quantitative data. The forward cloud generator (CG) and the backward cloud generator (CG−1) are two of the most basic cloud generator algorithms. The forward cloud generator transforms a qualitative concept with the three numerical characteristics Ex, En and He into a number of cloud drops and their corresponding certainty degrees. On the contrary, the backward cloud generator transforms a number of cloud drops into the three numerical characteristics of the cloud. The forward and backward cloud generators are shown in Fig. 4.
In the traditional dishonest recommendations detection protocols, the number of attackers is supposed to be less than or equal to half of the total number of recommenders. However, the original cloud model cannot be applied when the number of dishonest recommenders is greater than a quarter of the total number of recommenders. In order to balance the constraint on the number of attackers in our mechanism, we revise the hyper entropy formula, such that the number of attackers considered in our mechanism is the same as that of the traditional detection protocols. Our revised cloud model filtering algorithm with the revised hyper entropy formula follows two main steps: (1) normal cloud model feature calculation and (2) certainty degree calculation.

1. Normal cloud model feature calculation: the backward cloud algorithm is applied to compute the three cloud numerical characteristics C = (Ex, En, He) of the set U, where U is assumed to be the set of recommendations sent by the recommenders and received by the requesting node, U = {x1, x2, …, xm}, xi ∈ U is one of the received recommendations, m ⩽ n, and C is a qualitative concept of U. The specific process is as follows:

Step 1. Enter the normal data; calculate the mean \bar{X} and the variance S^2 according to Eqs. (3) and (4).

\bar{X} = \frac{1}{m} \times \sum_{i=1}^{m} x_i    (3)

S^2 = \frac{1}{m-1} \times \sum_{i=1}^{m} \left( \bar{X} - x_i \right)^2    (4)

Step 2. Calculate E_X according to Eq. (5).

E_X = \bar{X}    (5)

Step 3. Calculate E_n according to Eq. (6).

E_n = \frac{1}{m} \times \sqrt{\frac{\pi}{2}} \times \sum_{i=1}^{m} \left| E_X - x_i \right|    (6)

Step 4. Calculate H_e according to Eq. (7).

H_e = \sqrt{\left| S^2 - E_n^2 \right|}    (7)

2. Certainty degree calculation: the forward cloud algorithm is applied. The CG first uses the three cloud numerical characteristics C = (Ex, En, He) to calculate the certainty degree. Secondly, according to the obtained factor value, a deviation factor is calculated to estimate the diversity between each recommendation and the normal cloud model. The process follows these steps:

Step 1. Generate the normal cloud model: a normally distributed random number based on the entropy and the hyper entropy is given in Eq. (8).

E_n' = NormalRandom(E_n, H_e)    (8)

Step 2. Calculate the certainty degree factor corresponding to each recommendation x_i (see Eq. (9)).

\mu_{x_i} = e^{ -\frac{1}{2} \times \left( \frac{x_i - E_x}{E_n'} \right)^2 }    (9)

Step 3. Calculate the deviation factor relative to each recommendation x_i according to the formula given in Eq. (10).

\sigma_{x_i} = 1 - \mu_{x_i}    (10)

Therefore, we define the reputation deviation criterion of the fitness function f^{RD} in Eq. (11).

f^{RD}_{R_i} = \sigma_{x_i}    (11)

where R_i is a recommender and x_i is its recommendation.

6.4.2. Response speed index
Although the cloud model is an effective approach for the detection of outlying recommendations, the fact that it is based on the Majority Similarity Measure (MSM) can lead it into two problematic situations. First, the inability to discriminate the dishonest recommendations from the honest ones when the recommendation set is self-contradictory. In order to appreciate this fact, consider a small set of n = 6 received recommendations with values 0.3, 0.29, 0.3, 0.8, 0.8, and 0.8. Half of the recommendations indicate that the requesting node's reputation value is nearly 0.3, and the remaining half indicate that the reputation value should be 0.8. By applying the revised cloud model, the expectation Ex is 0.5483, the entropy En is 0.3146 and the hyper entropy He is 0.1515. Therefore, the certainty degree factors of the received recommendations are: 𝜇(0.3) = 0.85, 𝜇(0.29) = 0.844 and 𝜇(0.8) = 0.85. The similarity between the values and the absence of other useful information do not allow the model to make a decision regarding the dishonest and honest recommendations. Second, the model can generate false positives and false negatives. In order to appreciate this fact, we consider another small set of n = 6 received recommendations with values 0.2, 0.25, 0.8, 0.8, 0.79, and 0.8. Obviously, the first recommendation (i.e., 0.2) is an erroneous reputation calculation due to bad channel communication between an honest recommender and the requesting node. The second recommendation (i.e., 0.25), on the other hand, is an outlier generated by a dishonest recommender. The expectation Ex is 0.6067, the entropy En is 0.3181 and the hyper entropy He is 0.1162. Therefore, the certainty degree factors of the received recommendations are: 𝜇(0.2) = 0.56, 𝜇(0.25) = 0.64, 𝜇(0.8) = 0.87 and 𝜇(0.79) = 0.89. The failure to discriminate between erroneous and dishonest recommendations leads to unfairly judging good recommenders as dishonest ones.


𝜇(0.79) = 0.89. The failure to discriminate between erroneous and dis- to calculate the response speed index RS IRi for each recommender is
honest recommendations led to unfairly judge good recommenders as given in Eq. (14).
dishonest ones. √
( )2 ( )2
Therefore, a workable dishonest recommendations detection proto- 1
RS IRi = × XRt − XAt + YRt − YAt (14)
col must discriminate between lying and truth-telling, not only statis- RTRi i i

tically (Gregg et al., 2014), but also sensitively. Thus, we introduce The more time it takes the recommender to change the requested
a correction mechanism based on response speed index. Our response data before sending them (logs, reputation) longer is his response time
speed index is inspired by The Timed Antagonistic Response Alethiome- and slower its response. Therefore, with the increase of RTRi , the RS IRi
ter (TARA). The TARA is a computer-based classification task (Gregg, decreases correspondingly. Therefore, we define the lying detection cri-
2007). It is a lie detection technique that diagnoses lying. TARA man- lying
terion of the fitness function fR of the recommender Ri by the formula
ufacturers an artificial situation in which truth tellers are able to com- i
given in Eq. (15).
plete a series of compatible classifications more easily than liars who
are obliged to complete a series of incompatible ones. Obviously, the lying 1
fR = (15)
second case is harder to accomplish than the first. Consequently, dis- i RS IRi
honest respondent’s answers are slower than those honest respondents lying
to reach an equivalent level of accuracy (Gregg, 2007). We notice that fR becomes minimal when the response speed index
i
When a WSN is attacked, malicious nodes employ different strategies to remain undetectable. Sensors may lie about their position and announce a false location to their neighbors (Delaët et al., 2011). Moreover, they may lie about their identity (Jamshidi et al.). In our case, malicious sensors may lie about the assigned reputation value by sending dishonest recommendations to the evaluating node.

In Bee-Trust Scheme, the recommenders must transmit the logs associated with their judgment regarding the reputation value assigned to the evaluated node, as evidence of the transactions that caused their judgments. Whereas truth-telling involves only sending truthful logs associated with the corresponding reputation, lying additionally involves a decision to lie followed by the construction of a falsehood (Gregg, 2007), namely a divergence between the information contained in the log file and the assigned reputation value, which creates incompatibilities both among the logs themselves and between the logs and the reputation. Incompatible tasks are harder and take longer to complete; therefore, slower responses reveal dishonesty (Gregg, 2007). The dishonest recommender's problem is thus quite similar to the situation exploited in TARA.

Recent research now abundantly confirms that, when responding to direct inquiries in a structured manner, people take longer on average to lie than to tell the truth (Gregg et al., 2014). Using the same reasoning, we introduce the response speed as an index of deception to detect dishonest recommenders.

For each recommender, the response speed index can be calculated by the formula given in Eq. (12):

RSI_Ri = D_tr(A, Ri) / RT_Ri    (12)

where RSI_Ri is the response speed index of a standard recommender Ri, D_tr(A, Ri) is the traveled distance between the requesting node A and recommender Ri, and RT_Ri is the approximate response time of node Ri.

The EBM calculates the traveled distance D_tr(A, Ri) between the requesting node A, to which it belongs, and the recommender Ri, having the coordinates (X_A^t, Y_A^t) and (X_Ri^t, Y_Ri^t) at time t respectively. The Euclidean distance expression is derived from the Pythagorean theorem by the formula given in Eq. (13):

D_tr(A, Ri) = √((X_Ri^t − X_A^t)² + (Y_Ri^t − Y_A^t)²)    (13)

To calculate the approximate response time RT_Ri, the propagation delay is considered negligible compared with the transmission time; the transmission delay between nodes is therefore selected as the dominating delay (Lindsay et al., 2001). We assume synchronized clocks between the requesting node and the recommenders Ri, no queuing, and negligible processing and propagation delays (Alrashed, 2017; Abbasy et al., 2011). The approximate response time RT_Ri can then be calculated as the difference between the departure time of the employed bees sent to request recommendations and their arrival time back at the requesting node. Consequently, the lying parameter of a recommender is defined so that it is minimal when its response speed index RSI_Ri is maximal.

Then, we define the multiple-objective function of our optimization problem in such a way that the fitness function becomes maximal by minimizing the reputation deviation and the lying parameter. The most often-employed strategy for transforming a multi-objective problem into a single-objective one is the weighted method. Thus, the single-objective function of each recommender Ri, which includes the reputation deviation, the lying parameter and the weighting parameters α and β, is given as a linear programming problem in Eq. (16):

minimize fit_Ri = α × f_Ri^RD + β × f_Ri^lying    (16)

subject to the constraints given in Eq. (17) and Eq. (18):

α = 1 − β    (17)

0 < β < 1    (18)

The final form of the weighted linear fitness function f_Ri corresponding to each recommender is given, according to the ABC algorithm, by Eq. (19):

f_Ri = 1 / (1 + fit_Ri)    (19)

where f_Ri denotes the fitness value of recommender Ri and fit_Ri is its objective function.

6.5. Onlooker bees module (OBM)

Like the original onlooker bee in the ABC algorithm, the OBM classifies the recommenders depending on the probability value p_Ri associated with each of them, calculated by the formula given in Eq. (20):

p_Ri = f_Ri / Σ_{i=1}^{n} f_Ri    (20)

where f_Ri is the fitness value of the recommender Ri evaluated by the EBM. In the case where p_Ri < γ (γ is a threshold value that should be properly chosen), the recommender is declared an outlier and its recommendation is judged dishonest. Such recommenders are blacklisted, so as to prevent them from being chosen as recommenders another time.
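As a concrete illustration of Eq. (12) and Eq. (13), the short Python sketch below computes the response speed index of one recommender from its coordinates and the departure/arrival timestamps recorded at the requesting node. It is only a sketch under the stated assumptions (synchronized clocks, negligible queuing and processing delays); the function names and the sample values are ours, not part of the BTS specification.

import math

def traveled_distance(pos_a, pos_r):
    # Eq. (13): Euclidean distance between requesting node A and recommender Ri.
    (xa, ya), (xr, yr) = pos_a, pos_r
    return math.sqrt((xr - xa) ** 2 + (yr - ya) ** 2)

def response_speed_index(pos_a, pos_r, departure_time, arrival_time):
    # Eq. (12): traveled distance divided by the approximate response time,
    # taken as the gap between the employed bee's departure and the arrival
    # of the recommendation at the requesting node.
    rt = arrival_time - departure_time
    return traveled_distance(pos_a, pos_r) / rt

# Hypothetical example: a recommender about 219 m away answering in 0.5 s.
rsi = response_speed_index((100.0, 100.0), (250.0, 260.0), departure_time=10.0, arrival_time=10.5)
print(round(rsi, 1))  # about 438.6; a slower (lying) recommender yields a smaller index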
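The next sketch, under the same hedged assumptions, chains the remaining per-recommender computations: the weighted objective of Eq. (16) with the weights α = 0.3 and β = 0.7 used later in Section 8.2.2, the fitness of Eq. (19), the onlooker probability of Eq. (20) with blacklisting of outliers, and the fitness-weighted aggregation of Eq. (21) presented in Section 6.6 below. The reputation-deviation and lying parameters are assumed here to be already normalized to [0, 1], and the threshold γ is an arbitrary toy value; none of these concrete numbers come from the paper's simulations.

def fitness(f_rd, f_lying, alpha=0.3, beta=0.7):
    # Eq. (16) and Eq. (19): weighted single objective turned into a fitness value.
    fit = alpha * f_rd + beta * f_lying   # both inputs assumed normalized to [0, 1]
    return 1.0 / (1.0 + fit)

def onlooker_filter(fitnesses, gamma):
    # Eq. (20): normalize fitness into a selection probability; flag outliers below gamma.
    total = sum(fitnesses.values())
    probs = {rid: f / total for rid, f in fitnesses.items()}
    blacklist = {rid for rid, p in probs.items() if p < gamma}
    return probs, blacklist

def aggregate(fitnesses, recommendations, blacklist):
    # Eq. (21): fitness-weighted mean of the recommendations that survived filtering.
    kept = {rid: f for rid, f in fitnesses.items() if rid not in blacklist}
    total = sum(kept.values())
    return sum(kept[rid] * recommendations[rid] for rid in kept) / total

# Toy run with three recommenders; R3 lies, so its lying parameter is large.
f = {"R1": fitness(0.05, 0.10), "R2": fitness(0.10, 0.05), "R3": fitness(0.20, 0.90)}
probs, bl = onlooker_filter(f, gamma=0.30)
ir = aggregate(f, {"R1": 0.82, "R2": 0.80, "R3": 0.10}, bl)
print(bl, round(ir, 2))   # {'R3'} 0.81: the liar is filtered out before aggregation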

6.6. Aggregating recommendations

After describing the main steps of our protocol, we are now interested in how the final reputation value is obtained through the aggregation of the various recommendations. When a node wants to evaluate the reputation of a desired node, it requests recommendations from recommenders belonging to its one-hop neighborhood that are not in the blacklist. The number of recommenders is fixed by each evaluating node independently, according to the needs of the application. In order to give more weight to the information provided by the fittest recommenders, the indirect reputation IR_x of the node x under evaluation is computed from all received recommendations as given by Eq. (21):

IR_x = (1 / Σ_Ri f_Ri) × Σ_Ri (f_Ri × Rep_Ri,x)    (21)

where f_Ri denotes the fitness value of recommender Ri and Rep_Ri,x is its recommendation.

6.7. Algorithm simulating the BTS

The main steps of the proposed BTS are given in Algorithm 3.

Algorithm 3 BTS algorithm.
Begin
1. Receive-from-recommenders ()
2. /*EBM phase*/
3. Calculate-fitness-function according to Eq. (19)
4. /*OBM phase*/
5. Calculate-prob-pRi according to Eq. (20)
6. Classify-recommenders-Ri depending on pRi
7. /*SBM phase*/
8. Blacklist-lowest-recommender ()
9. Replace-lowest-recommender ()
10. Update-recommendations ()
11. Aggregate-recommendations ()
End

7. Computation and memory costs

In this section, we estimate the cost of the BTS procedure, as described in Algorithm 3. We estimate the communication, storage and time complexity costs.

7.1. Communication

The recommendation distribution includes both sending and receiving procedures. The requesting node has to send n queries and receives m replies (m ⩽ n). We can therefore derive an average cost, per requesting-node process, of O(n + m).

7.2. Storage

The requesting node does not need to store the information sent by the recommenders (i.e., the identifier, the position, the logs and the assigned reputation of each recommender), since these data are processed and analyzed immediately, and all other steps of the algorithm are constant in space. The space complexity is therefore O(1).

7.3. Time

The time complexity of the revised cloud model is determined by the number of cloud droplets generated, since the number of cloud drops is equal to the number m of recommendations, i.e., O(m) time. Obviously, the time complexity of the correction mechanism is O(1). The algorithm can then be executed in O(m) time.

These costs remain negligible since they concern only the requesting node and not all the nodes of the network.

8. Performance evaluation

In this section, a performance evaluation of the proposed algorithm is presented, covering the simulation methodology and the simulation results according to the proposed evaluation metrics.

Table 3
Simulation parameters.
Parameter          Default value
Area               1000 × 1000
Number of nodes    12
Radio range        300

8.1. Simulation methodology

Our proposed scheme is implemented using a discrete model developed in MATLAB. Intensive experiments are conducted in order to evaluate the performance of BTS under various attack scenarios. We consider a simulation scenario consisting of 12 nodes involved in the recommendation-requesting process. Nodes 2 and 6 are assigned to be the requester node and the evaluated node respectively. Nodes 1, 3, 4, 5, 7, 8, 9, 10, 11 and 12 are the selected recommenders. Recommenders are chosen among the one-hop neighbors with a high recommendation reputation. In the simulation scenarios, recommenders exhibit three types of behavior:

• Type I: Good recommenders and good recommendations.
• Type II: Good recommenders and erroneous recommendations.
• Type III: Bad recommenders and bad recommendations.

The simulation included the three attacks: badmouthing, ballot-stuffing and collusion. All experiments were conducted over 3000 rounds, and the attacks were evaluated every 1000 iterations. We performed 10 runs of the aforementioned experiments. The default simulation parameters are summarized in Table 3.

The effect of dishonest recommendations and the performance of the scheme were analyzed via three indicators:

• Recommendation deviation
• Dishonest recommenders
• Erroneous recommendations

The performance of the scheme was examined via four metrics:

1. Throughput (TH): defined as the average rate of successful packets delivered to the BS, measured in data packets per time slot.
2. Recognition percentage (RP): defined as the number of recommendations detected as dishonest compared to the total number of dishonest recommendations.
3. False positive percentage (FPP): defined as the ratio between the number of honest recommendations incorrectly classified as dishonest and the total number of honest recommendations.
4. False negative percentage (FNP): defined as the proportion of dishonest recommendations incorrectly classified as honest ones.
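As a concrete reading of these four definitions, the short Python sketch below counts classification outcomes and derives RP, FPP and FNP. It is an illustration under our own assumptions (recommendations identified by simple ids, ground truth known), not code taken from the BTS implementation.

def detection_metrics(flagged, dishonest, all_recs):
    # flagged: ids the detector classified as dishonest; dishonest: ground-truth liars.
    honest = all_recs - dishonest
    tp = len(flagged & dishonest)          # dishonest recommendations correctly detected
    fp = len(flagged & honest)             # honest recommendations wrongly flagged
    fn = len(dishonest - flagged)          # dishonest recommendations missed
    rp = tp / len(dishonest) if dishonest else 0.0    # recognition percentage
    fpp = fp / len(honest) if honest else 0.0         # false positive percentage
    fnp = fn / len(dishonest) if dishonest else 0.0   # false negative percentage
    return rp, fpp, fnp

# Hypothetical round: 10 recommendations, 4 of them dishonest, 5 flagged by the detector.
all_recs = set(range(10))
dishonest = {0, 1, 2, 3}
flagged = {0, 1, 2, 3, 4}
print(detection_metrics(flagged, dishonest, all_recs))   # -> (1.0, 0.166..., 0.0)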

Fig. 5. Performance of BTS by varying the recommendation deviation parameter.

8.2. Simulation results

We evaluated the proposed BTS protocol and examined its comprehensive performance by conducting a series of intensive experiments in various scenarios. The throughput and the performance indicators defined in Section 8.1, as well as the average reputation metrics, are simulated in order to demonstrate its effectiveness.

8.2.1. Performance of BTS by varying the recommendation deviation parameter

Recommendation deviation is one of the important parameters influencing the detection of dishonest recommendations. The smaller the difference between the reputation calculated by the requesting node and the received recommendation, the more difficult the detection becomes, and conversely. Smart attackers wishing to remain undetectable can introduce a relatively small bias in dishonest recommendations to bypass the detection mechanisms. A dishonest recommendation detection protocol must therefore remain effective regardless of the deviation rate.

In this experiment we examine the performance of our protocol by varying the percentage of recommendation deviation. Firstly, %dishonest recommenders is fixed at its default value of 50%. We compare the case where the protocol BTS is disabled (i.e. without BTS) with the case where it is applied (i.e. with BTS), under the badmouthing and the ballot-stuffing attacks, as shown in Fig. 5a–b. We notice that, whatever the attack, our protocol obtains similar performance, which means that BTS is not sensitive to the attack method.

In each network scenario, the average proportion of the metrics used, as well as their standard deviations, is computed and highlighted in Tables 4 and 5. We can observe from Fig. 5a–b that without BTS, RP is low while the deviation is small; the more the deviation increases, the more RP increases too, since it becomes easier to detect the dishonest recommenders. FPP and FNP are high; the main reason is the presence of 50% of dishonest recommenders in the absence of an effective detection mechanism. Such high percentages of false positives and false negatives, together with the low detection rate, are intolerable for a network.

Moreover, from Fig. 5a–b, the most interesting result is that even in the worst case, where the deviation is at 10% and the protocol BTS is applied, we obtained very good performance, with RP = 100%, FPP = 13% and FNP = 1%. This is due to the fact that our protocol does not rely solely on the cloud model, a method using the MSM approach, which is known to be easily influenced by the recommendation deviation. The use of an additional parameter independent of the recommendations' values makes the protocol insensitive to the recommendation deviation; BTS is therefore effective against smart attackers.

Besides, we also examine the throughput of the whole network. The results obtained by varying %recommendation deviation are shown in Fig. 6. Each line represents the throughput when the parameter %dishonest recommenders is fixed.

Table 4
Badmouthing, 50% recommendation deviation Mean (𝜇%) and standard deviation (𝜎(%)).
10% 20% 30% 40% 50% 60% 70% 80% 90%
𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎
RP with BTS 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00
FP with BTS 0.13 0.02 0.11 0.01 0.1 0.00 0.09 0.01 0.09 0.02 0.08 0.02 0.08 0.01 0.08 0.02 0.08 0.00
FN with BTS 0.02 0.00 0.02 0.00 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00 0.01 0.00
RP without BTS 0.46 0.01 0.48 0.02 0.59 0.03 0.61 0.01 0.63 0.03 0.64 0.04 0.64 0.05 0.65 0.03 0.66 0.02
FP without BTS 0.13 0.02 0.24 0.02 0.33 0.01 0.39 0.02 0.44 0.02 0.44 0.05 0.44 0.02 0.44 0.02 0.44 0.01
FN without BTS 0.45 0.03 0.49 0.01 0.5 0.02 0.52 0.02 0.52 0.07 0.52 0.06 0.52 0.05 0.52 0.04 0.52 0.05

Table 5
Ballot-stuffing, 50% recommendation deviation Mean (𝜇%) and standard deviation (𝜎(%)).
10% 20% 30% 40% 50% 60% 70% 80% 90%
𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎
RP with BTS 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00
FP with BTS 0.19 0.00 0.18 0.01 0.15 0.01 0.13 0.02 0.09 0.00 0.08 0.01 0.07 0.01 0.04 0.01 0.04 0.00
FN with BTS 0.02 0.00 0.02 0.00 0.02 0.00 0.02 0.00 0.02 0.00 0.02 0.00 0.02 0.00 0.02 0.00 0.02 0.00
RP without BTS 0.26 0.01 0.28 0.02 0.25 0.01 0.21 0.03 0.22 0.01 0.24 0.01 0.23 0.01 0.25 0.01 0.25 0.03
FP without BTS 0.19 0.00 0.19 0.00 0.23 0.01 0.29 0.02 0.33 0.05 0.33 0.04 0.35 0.03 0.36 0.02 0.36 0.02
FN without BTS 0.32 0.02 0.32 0.04 0.31 0.02 0.31 0.02 0.31 0.01 0.32 0.02 0.34 0.01 0.34 0.01 0.33 0.03

Even in the absence of dishonest recommenders, the throughput is only 84.40%. The main cause is the presence of bad nodes generating jamming and collisions. We can observe that when the parameters (%recommendation deviation, %dishonest recommenders) are below (20%, 20%), the impact on throughput is small, since in these situations the reputation values of bad nodes are still much lower than those of good nodes. We can also observe that under (90%, 90%) the whole network can only reach 43.68% throughput, although there are only 20% bad nodes in the network. Table 6 shows the means and the standard deviations of the amount of packets received by the BS over the 10 simulation runs.

Fig. 6. Throughput when collusion (successful packets delivered to the base station).

Table 6
Throughput when collusion – Mean (μ%) and standard deviation (σ(%)).

8.2.2. Performance of BTS by varying the number of dishonest recommenders

The number of dishonest recommenders is another important parameter influencing a dishonest recommendation detection protocol. The greater the number of attackers, the greater their influence on the trust and reputation mechanism and the more difficult it becomes to detect them. The presence of several attackers in the system allows them to collaborate and to divert the reputation mechanism to their advantage. A dishonest recommendation detection protocol must remain effective regardless of the number of dishonest recommenders. The experiment methodology is similar to that of the previous experiment: we examine the performance of our protocol by varying the percentage of dishonest recommenders while fixing %recommendation deviation at its default value of 50%. The same metrics as in the previous experiment are therefore used. RP, FPP and FNP are employed to measure, respectively, the detection rate of dishonest recommenders, the proportion of honest recommenders declared dishonest, and the proportion of dishonest recommenders treated as honest ones. The performance of BTS against the dishonest recommenders is shown in Fig. 7.

Fig. 7a–b illustrates the results of RP, FPP and FNP in the two studied cases, with BTS and without BTS, for the badmouthing and the ballot-stuffing attacks. The performance of the BTS protocol remains stable under varied %dishonest recommenders, as shown in Fig. 7a–b: RP ≈ 100%, FPP ≈ 2% and FNP ≈ 1% whatever the %dishonest recommenders. This is because the calculation of the fitness function in BTS does not depend solely on the received recommendations (α = 0.3). Considering the response speed index as a second parameter and assigning it a high weight (β = 0.7) reveals the nature of the recommender and thus detects the attack independently of the number of dishonest recommenders. More details of the values obtained for both attacks over the 10 simulation runs are given in Tables 7 and 8 respectively.
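To make the effect of this weighting concrete, consider a hedged numeric illustration with toy values that are not taken from the simulations. An honest recommender with a small reputation deviation and a fast response, say f_RD = 0.05 and f_lying = 0.05, obtains fit = 0.3 × 0.05 + 0.7 × 0.05 = 0.05 and a fitness of 1/1.05 ≈ 0.95. A colluder that keeps its deviation tiny to evade deviation-based filtering, say f_RD = 0.02, but answers slowly, say f_lying = 0.8, obtains fit = 0.3 × 0.02 + 0.7 × 0.8 = 0.566 and a fitness of only 1/1.566 ≈ 0.64, so its onlooker probability drops below the threshold despite its well-chosen recommendation value.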

Fig. 7. Performance of BTS by varying the number of dishonest recommenders.

Table 7
Badmouthing, 50% dishonest recommenders Mean (𝜇%) and standard deviation (𝜎(%)).
10% 20% 30% 40% 50% 60% 70% 80% 90%
𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎
RP with BTS 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00
FP with BTS 0.12 0.01 0.11 0.02 0.11 0.02 0.11 0.02 0.11 0.03 0.08 0.02 0.06 0.01 0.02 0.02 0.06 0.01
FN with BTS 0.01 0.00 0.02 0.00 0.02 0.00 0.02 0.00 0.01 0.00 0.01 0.00 0.02 0.00 0.01 0.00 0.01 0.00
RP without BTS 0.89 0.04 0.8 0.06 0.77 0.03 0.69 0.03 0.63 0.05 0.56 0.04 0.39 0.03 0.24 0.06 0.13 0.03
FP without BTS 0.31 0.01 0.31 0.01 0.39 0.02 0.41 0.01 0.37 0.03 0.37 0.05 0.35 0.03 0.34 0.01 0.29 0.02
FN without BTS 0.06 0.01 0.31 0.04 0.44 0.06 0.54 0.05 0.63 0.06 0.69 0.03 0.77 0.04 0.8 0.04 0.92 0.07

At last, the throughput metric is examined. As described by Chen et al. (2012), the most harmful attack method, collusion, is carried out by the attackers. A comparison is made between our proposed protocol and four other recommendation schemes: RecommVerifier (Chen et al., 2012), Whitby's filtering scheme (WFS) (Whitby et al., 2004), E-Hermes (Zouridaki et al., 2009) and RFSN (Ganeriwal et al., 2008). As shown in Fig. 8, the throughput obtained by our protocol BTS exceeds that of the other protocols; BTS remains stable under different parameters and converges to the ideal throughput obtained in Fig. 6. Apart from the RecommVerifier protocol, whose throughput is also stable but slightly below ours (80.84%), the throughput obtained by the other defense schemes decreases while the parameter %dishonest recommenders increases. A low throughput value allows the dishonest recommenders to take possession of the network by manipulating the reputation mechanism at their discretion.

8.2.3. Performance of BTS by varying the number of erroneous recommendations

Although often neglected, errors are always present, particularly in networks like WSNs. Taking erroneous recommendations into account is therefore a very important aspect of the detection scheme. In WSNs, erroneous recommendations may be due to communication problems between the nodes, such as bad communication channels, node failures, packet losses, etc. Erroneous recommendations can also result from security problems, as in the case of the conflicting behavior attack. A dishonest recommendation detection protocol that cannot distinguish between erroneous and dishonest recommendations leads to several problems. On the one hand, a high false-positive rate is generated by unfairly judging honest nodes providing erroneous recommendations as dishonest ones.

Table 8
Ballot-stuffing, 50% dishonest recommenders Mean (𝜇%) and standard deviation (𝜎(%)).
10% 20% 30% 40% 50% 60% 70% 80% 90%
𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎 𝜇 𝜎
RP with BTS 0.99 0.00 0.99 0.00 0.99 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00 1.00 0.00
FP with BTS 0.13 0.01 0.13 0.01 0.12 0.01 0.11 0.01 0.11 0.03 0.09 0.03 0.07 0.03 0.04 0.01 0.03 0.00
FN with BTS 0.02 0.00 0.02 0.00 0.01 0.00 0.01 0.00 0.01 0.00 0.02 0.00 0.02 0.00 0.01 0.00 0.01 0.00
RP without BTS 0.96 0.03 0.8 0.03 0.75 0.02 0.71 0.02 0.61 0.05 0.54 0.03 0.43 0.03 0.26 0.05 0.09 0.03
FP without BTS 0.14 0.01 0.29 0.02 0.38 0.03 0.48 0.01 0.51 0.03 0.52 0.06 0.58 0.04 0.59 0.02 0.62 0.02
FN without BTS 0.11 0.02 0.31 0.03 0.44 0.07 0.56 0.04 0.64 0.06 0.70 0.07 0.79 0.07 0.88 0.03 0.97 0.06

On the other hand, honest nodes will be removed from the reputation mechanism, thus leaving more chance for the dishonest nodes to manipulate the nodes' reputation and disrupt the network.

In order to demonstrate the effectiveness of our protocol BTS in distinguishing between erroneous and dishonest recommendations, we realized the following scenario. As shown in Fig. 9, we simulated the presence of both erroneous and dishonest recommendations within the same scenario while varying %erroneous recommendations. The interesting result that can be observed is that the detection rate of erroneous recommendations increases as the parameter %erroneous recommenders increases. In summary, BTS is a good choice under the varied parameters of the simulation.

Fig. 8. Performance of BTS and other defense schemes.

Fig. 9. Performance of BTS by varying the number of erroneous recommendations.

9. Conclusion

We presented in this work a novel bio-inspired scheme for mobile WSNs, based on a modified cloud model and a cognitive chronometry strategy, which is designed to improve the detection of dishonest recommendations attacks. In BTS, the reputation management scenario is modeled by drawing inspiration from the foraging behavior of the honey bee swarm, using some features of the artificial bee colony (ABC) algorithm. The fitness of each recommender is judged according to two important parameters: a revised cloud model and a response speed index. The novelty of our proposal is that it is not only based on traditional statistical methods but also introduces an efficient cognitive parameter which allows detecting lying recommenders, considered as the source of the attack. Furthermore, this response speed index allows the distinction between dishonest and erroneous recommendations, thus decreasing the rate of false positives caused by deviation detection. Besides, our simulation results demonstrate that the proposed scheme can efficiently discriminate between dishonest and erroneous recommendations by separating the dishonest recommenders from the honest ones. The different simulation scenarios proved the effectiveness of our protocol under varied metrics, even when the number of dishonest recommenders is large or the recommendation deviation is slight. As future work, we plan to realize a prototype of our protocol and employ it to secure WSN applications such as routing, clustering or data aggregation.

Acknowledgment

We would like to thank the editor and the anonymous reviewers for their valuable remarks that helped us in improving the content and presentation of the paper.

References

Abbasy, M.B., Barrantes, G., Marín, G., 2011. Time delay performance analysis of sensor allocation strategies on a WSN. In: Proceedings of the 1st International Conference on Wireless Technologies for Humanitarian Relief. ACM, pp. 135–140.
Abdou, W., Henriet, A., Bloch, C., Dhoutaut, D., Charlet, D., Spies, F., 2011. Using an evolutionary algorithm to optimize the broadcasting methods in mobile ad hoc networks. J. Netw. Comput. Appl. 34 (6), 1794–1804.
Akyildiz, I.F., Su, W., Sankarasubramaniam, Y., Cayirci, E., 2002. Wireless sensor networks: a survey. Comput. Network. 38 (4), 393–422.
Alrashed, S., 2017. Reducing power consumption of non-preemptive real-time systems. J. Supercomput. 73 (12), 5402–5413.
Alzaid, H., Alfaraj, M., Ries, S., Jøsang, A., Albabtain, M., Abuhaimed, A., 2013. Reputation-based trust systems for wireless sensor networks: a comprehensive review. In: IFIP International Conference on Trust Management. Springer, pp. 66–82.
Ari, A.A.A., Yenke, B.O., Labraoui, N., Damakoa, I., Gueroui, A., 2016. A power efficient cluster-based routing algorithm for wireless sensor networks: honeybees swarm intelligence based approach. J. Netw. Comput. Appl. 69, 77–97.
Ari, A.A.A., Damakoa, I., Gueroui, A., Titouna, C., Labraoui, N., Kaladzavi, G., Yenké, B.O., 2017. Bacterial foraging optimization scheme for mobile sensing in wireless sensor networks. Int. J. Wireless Inf. Network 59 (3), 254–267.
Ari, A.A.A., Labraoui, N., Yenke, B.O., Gueroui, A., 2018. Clustering algorithm for wireless sensor networks: the honeybee swarms nest-sites selection process based approach. Int. J. Sens. Netw. 27 (1), 1–13.
Babu, S.S., Raha, A., Naskar, M.K., 2014. Trust evaluation based on node's characteristics and neighbouring nodes' recommendations for WSN. Wirel. Sens. Netw. 6 (08), 157.
Blum, C., Li, X., 2008. Swarm intelligence in optimization. In: Swarm Intelligence. Springer, pp. 43–85.
Boukerch, A., Xu, L., El-Khatib, K., 2007. Trust-based security for wireless ad hoc and sensor networks. Comput. Commun. 30 (11), 2413–2427.
Buchegger, S., Boudec, J.Y.L., 2004. A robust reputation system for P2P and mobile ad-hoc networks. In: Proceedings of the 2nd Workshop on the Economics of Peer-to-peer Systems, pp. 1–6.
Buchegger, S., Le Boudec, J.-Y., 2002. Performance analysis of the CONFIDANT protocol. In: Proceedings of the 3rd ACM International Symposium on Mobile Ad Hoc Networking & Computing. ACM, pp. 226–236.
Chen, H., Wu, H., Zhou, X., Gao, C., 2007. Agent-based trust model in wireless sensor networks. In: Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007), vol. 3. IEEE, pp. 119–124.
Chen, S., Zhang, Y., Liu, Q., Feng, J., 2012. Dealing with dishonest recommendation: the trials in reputation management court. Ad Hoc Netw. 10 (8), 1603–1618.
Crane, E., et al., 1980. A Book of Honey. Oxford University Press.
Delaët, S., Mandal, P.S., Rokicki, M.A., Tixeuil, S., 2011. Deterministic secure positioning in wireless sensor networks. Theor. Comput. Sci. 412 (35), 4471–4481.
Dellarocas, C., 2000. Immunizing online reputation reporting systems against unfair ratings and discriminatory behavior. In: Proceedings of the 2nd ACM Conference on Electronic Commerce. ACM, pp. 150–157.
Deyi, L., Haijun, M., Xuemei, S. Membership clouds and membership cloud generators. J. Comput. Res. Dev. 6.
Fang, W., Zhang, C., Shi, Z., Zhao, Q., Shan, L., 2016. BTRES: beta-based trust and reputation evaluation system for wireless sensor networks. J. Netw. Comput. Appl. 59, 88–94.
Feng, R., Xu, X., Zhou, X., Wan, J., 2011. A trust evaluation algorithm for wireless sensor networks based on node behaviors and DS evidence theory. Sensors 11 (2), 1345–1360.
Feng, R., Han, X., Liu, Q., Yu, N., 2015. A credible bayesian-based trust management scheme for wireless sensor networks. Int. J. Distributed Sens. Netw. 11 (11), 678926.
Fox, C.W., Roff, D.A., Fairbairn, D.J., 2001. Evolutionary Ecology: Concepts and Case Studies. Oxford University Press.
Ganeriwal, S., Balzano, L.K., Srivastava, M.B., 2008. Reputation-based framework for high integrity sensor networks. ACM Trans. Sens. Netw. 4 (3), 15.
Gregg, A.P., 2007. When vying reveals lying: the timed antagonistic response alethiometer. Appl. Cognit. Psychol. 21 (5), 621–647.
Gregg, A.P., Mahadevan, N., Edwards, S.E., Klymowsky, J., 2014. Detecting lies about consumer attitudes using the timed antagonistic response alethiometer. Behav. Res. Meth. 46 (3), 758–771.
Han, G., Jiang, J., Shu, L., Niu, J., Chao, H.-C., 2014. Management and applications of trust in wireless sensor networks: a survey. J. Comput. Syst. Sci. 80 (3), 602–617.
Hur, J., Lee, Y., Youn, H., Choi, D., Jin, S., 2005. Trust evaluation model for wireless sensor networks. In: The 7th International Conference on Advanced Communication Technology (ICACT 2005), vol. 1. IEEE, pp. 491–496.
Iltaf, N., Ghafoor, A., Zia, U., 2013. A mechanism for detecting dishonest recommendation in indirect trust computation. EURASIP J. Wirel. Commun. Netw. 2013 (1), 189.
Jamshidi, M., Zangeneh, E., Esnaashari, M., Meybodi, M.R. A lightweight algorithm for detecting mobile sybil nodes in mobile wireless sensor networks. Comput. Electr. Eng.
Jin, X., Liang, J., Tong, W., Lu, L., Li, Z., 2017. Multi-agent trust-based intrusion detection scheme for wireless sensor networks. Comput. Electr. Eng. 59, 262–273.
Kamil, A.C., Krebs, J.R., Pulliam, H.R., 2012. Foraging Behavior. Springer Science & Business Media.
Karaboga, D., Basturk, B., 2007. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J. Global Optim. 39 (3), 459–471.
Khalid, O., Khan, S.U., Madani, S.A., Hayat, K., Khan, M.I., Min-Allah, N., Kolodziej, J., Wang, L., Zeadally, S., Chen, D., 2013. Comparative study of trust and reputation systems for wireless sensor networks. Secur. Commun. Network. 6 (6), 669–688.
Khedim, F., Labraoui, N., Lehsaini, M., 2015. Dishonest recommendation attacks in wireless sensor networks: a survey. In: 12th International Symposium on Programming and Systems (ISPS 2015). IEEE, pp. 1–10.
Labraoui, N., Gueroui, M., Sekhri, L., 2016. A risk-aware reputation-based trust management in wireless sensor networks. Wireless Pers. Commun. 87 (3), 1037–1055.
Lindsay, S., Raghavendra, C.S., Sivalingam, K.M., 2001. Data gathering in sensor networks using the energy delay metric. In: Proceedings of the 15th International Parallel & Distributed Processing Symposium. IEEE Computer Society, p. 188.
Lopez, J., Roman, R., Agudo, I., Fernandez-Gago, C., 2010. Trust management systems for wireless sensor networks: best practices. Comput. Commun. 33 (9), 1086–1093.
Mármol, F.G., Pérez, G.M., 2011. Providing trust in wireless sensor networks using a bio-inspired technique. Telecommun. Syst. 46 (2), 163–180.
Michiardi, P., Molva, R., 2002. Core: a collaborative reputation mechanism to enforce node cooperation in mobile ad hoc networks. In: Advanced Communications and Multimedia Security. Springer, pp. 107–121.
Momani, M., Challa, S., 2010. Survey of trust models in different network domains. Int. J. Adhoc, Sens. Ubiquitous Comput. (IJASUC) 1 (3), 1–19.
Perrig, A., Stankovic, J., Wagner, D., 2004. Security in wireless sensor networks. Commun. ACM 47 (6), 53–57.
Picard-Nizou, A., Pham-Delegue, M., Kerguelen, V., Douault, P., Marilleau, R., Olsen, L., Grison, R., Toppan, A., Masson, C., 1995. Foraging behaviour of honey bees (Apis mellifera L.) on transgenic oilseed rape (Brassica napus L. var. oleifera). Transgenic Res. 4 (4), 270–276.
Rathore, H., 2016. Case study: a review of security challenges, attacks and trust and reputation models in wireless sensor networks. In: Mapping Biological Systems to Network Systems. Springer, pp. 117–175.
Sen, T., Mathur, H.D., 2016. A new approach to solve economic dispatch problem using a hybrid ACO–ABC–HS optimization algorithm. Int. J. Electr. Power Energy Syst. 78, 735–744.
Srinivasan, A., Teitelbaum, J., Wu, J., 2006. DRBTS: distributed reputation-based beacon trust system. In: 2nd IEEE International Symposium on Dependable, Autonomic and Secure Computing. IEEE, pp. 277–283.
Tereshko, V., Lee, T., 2002. How information-mapping patterns determine foraging behaviour of a honey bee colony. Open Syst. Inf. Dynam. 9 (02), 181–193.
Vijaya, K., Selvam, M. Improving resilience and revocation by mitigating bad mouthing attacks in wireless sensor networks. Int. J. Sci. Eng. Res. 4 (4).
Von Frisch, K., 1974. Decoding the language of the bee. Science 185 (4152), 663–668.
Whitby, A., Jøsang, A., Indulska, J., 2004. Filtering out unfair ratings in bayesian reputation systems. In: Proc. 7th Int. Workshop on Trust in Agent Societies, vol. 6, pp. 106–117.
Wu, X., Zheng, Q., 2015. A self-adaptive trust management scheme for wireless sensor networks. Trans. Inst. Meas. Contr. 37 (10), 1197–1206.
Xing, B., Gao, W.-J., 2014. Innovative Computational Intelligence: a Rough Guide to 134 Clever Algorithms. Springer.
Yu, Y., Li, K., Zhou, W., Li, P., 2012. Trust mechanisms in wireless sensor networks: attack analysis and countermeasures. J. Netw. Comput. Appl. 35 (3), 867–880.
Zahariadis, T., Leligou, H., Karkazis, P., Trakadas, P., Papaefstathiou, I., Vangelatos, C., Besson, L., 2010. Design and implementation of a trust-aware routing protocol for large WSNs. Int. J. Netw. Secur. Appl. 2 (3), 52–68.
Zouridaki, C., Mark, B.L., Hejmo, M., Thomas, R.K., 2009. E-Hermes: a robust cooperative trust establishment scheme for mobile ad hoc networks. Ad Hoc Netw. 7 (6), 1156–1168.
Zungeru, A.M., Ang, L.-M., Seng, K.P., 2012. Classical and swarm intelligence based routing protocols for wireless sensor networks: a survey and comparison. J. Netw. Comput. Appl. 35 (5), 1508–1536.

Farah Khedim was born in Tlemcen, Algeria. She received her Master of Science degree in Computer Science from the University of Tlemcen, Algeria. Currently, she is a Ph.D. Candidate in Telecommunication and Computer Networks at the University of Tlemcen. Her current research with the STIC Lab of the University of Tlemcen includes Network Security in Wireless Sensor Networks.

Nabila Labraoui is an Associate Professor in Computer Engineering at the University of Tlemcen, Algeria. She received her Ph.D. in Computer Engineering from the University of Tlemcen, Algeria. Her current research interests include wireless ad hoc sensor networks, network security, localization and trust management for distributed and mobile systems.

Ado Adamou Abba Ari is a Research Associate at the LI-PaRAD Lab of the University of Versailles Saint-Quentin-en-Yvelines, France. He is also a Senior Lecturer and Researcher in Computer Engineering at the University of Maroua, Cameroon. He received his Ph.D. degree in Computer Science in 2016 from Université Paris-Saclay in France. He also received the Master degree of Business Administration (MBA) in 2013, the Master of Science degree in Computer Engineering in 2012 and the Bachelor of Science degree in Mathematics and Computer Science in 2010 from the University of Ngaoundéré, Cameroon. His current research with the Networks Team of the LI-PaRAD Lab at the University of Versailles Saint-Quentin-en-Yvelines is focused on bio-inspired computing, Wireless Networks, 5G and the Cloud Radio Access Network.