
IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 16, NO. 9, SEPTEMBER 2017

Dynamic, Privacy-Preserving Decentralized Reputation Systems
Michael R. Clark, Kyle Stewart, Student Member, IEEE, and Kenneth M. Hopkinson, Senior Member, IEEE

Abstract—Reputation systems provide an important basis for judging whether to interact with others in a networked system. Designing
such systems is especially interesting in decentralized environments such as mobile ad-hoc networks, as there is no trusted authority to
manage reputation feedback information. Such systems also come with significant privacy concerns, further complicating the issue.
Researchers have proposed privacy-preserving decentralized reputation systems (PDRS) which ensure individual reputation
information is not leaked. Instead, aggregate information is exposed. Unfortunately, in existing PDRS, when a party leaves the network,
all of the reputation information they possess about other parties in the network leaves too. This is a significant problem when applying
such systems to the kind of dynamic networks we see in mobile computing. In this article, we introduce dynamic, privacy-preserving
reputation systems (Dyn-PDRS) to solve the problem. We enumerate the features that a reputation system must support in order to be
considered a Dyn-PDRS. Furthermore, we present protocols to enable these features and describe how our protocols are composed to
form a Dyn-PDRS. We present simulations of our ideas to understand how a Dyn-PDRS impacts information availability in the network,
and report on an implementation of our protocols, including timing experiments.

Index Terms—Reputation systems, cryptography, multiparty computation, mobile ad-hoc networks

M.R. Clark is with the Air Force Institute of Technology and Tenet 3, LLC, Wright-Patterson AFB, OH 45433. E-mail: michael.clark@afit.edu.
K. Stewart and K.M. Hopkinson are with the Air Force Institute of Technology, Wright-Patterson AFB, OH 45433. E-mail: {kyle.stewart, kenneth.hopkinson}@afit.edu.
Manuscript received 13 Feb. 2015; revised 10 Aug. 2016; accepted 22 Nov. 2016. Date of publication 5 Dec. 2016; date of current version 2 Aug. 2017.
For information on obtaining reprints of this article, please send e-mail to reprints@ieee.org, and reference the Digital Object Identifier below.
Digital Object Identifier no. 10.1109/TMC.2016.2635645

1 INTRODUCTION

Reputation in another party is a measure of confidence that that party will conform to a certain behavior or perform a certain action, rather than defect. For example, consider mobile ad-hoc networks (MANETs) in which a party's neighbors are used to route messages. A party might build up reputation information on his neighbors by observing whether or not they forward messages he sends through them. When a new party joins the network, however, they will have no reputation information on others in the network. In a reputation system, the new party asks its peers for their reputation information on some other party, and uses this as the basis to determine whether or not to interact with the other party. Often this is done by averaging the responses of the peers.

Many online marketplaces have reputation systems built in. They allow users to provide feedback (or ratings) on products and vendors. The aggregate of this feedback information is displayed to customers in order to help them make choices about what product to purchase or who to purchase it from. These are examples of centralized reputation systems. These reputation systems can function because the market operator (e.g., Amazon or eBay) is at least somewhat trusted by both vendors and consumers. Indeed, it is in the market operator's best interest to provide honest feedback to customers.

In many scenarios, however, such a trusted party does not exist. This includes peer-to-peer systems, MANETs, wireless sensor networks (WSNs), etc. For this reason, decentralized reputation systems (DRS) exist. Example systems can be found in [1], [2], [3], [4], [5], which are applied to many systems, the majority of which deal with mobile computing situations. These systems are more ad-hoc in nature. In these systems, a party p_i, called the querying party, would like to interact with another party p_k, called the target party, but p_i has no reputation information on p_k. Therefore, p_i forms a query set, U, and asks each party in U to provide their reputation information on p_k. p_i then averages these and stores the result. The result can then be used to help p_i know whether or not to interact with p_k.

Recently, researchers have become concerned about privacy issues in DRS. In particular, if privacy of reputation information is not maintained, parties providing reputation information to a query could be subject to retaliation, retribution, or attack. Therefore, it may be in a party's best interest not to provide honest feedback. To alleviate the situation, researchers have proposed a number of privacy-preserving decentralized reputation systems (PDRS). In such systems, instead of providing their reputation information directly to p_i, the parties in U run a protocol which allows them to jointly compute a function of each of their individual reputation values about p_k (typically they compute the sum) and then reveal the result of the computation to p_i. The protocol run by the parties is specifically designed so that they have strong assurances that their reputation information has been kept private. Examples of such systems can be found in [6], [7], [8], [9], [10]. We describe a few of these systems in more detail in Section 2.

All existing PDRS we are aware of fall into the category of static PDRS. By static, we mean that when a party leaves the network, all of the reputation information it holds leaves with
1536-1233 ß 2016 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

Authorized licensed use limited to: University of Pittsburgh. Downloaded on December 18,2020 at 02:13:48 UTC from IEEE Xplore. Restrictions apply.

them. In situations where reputation information is sparse, however, this can diminish the utility and security of the system. This can be seen in the literature in a number of contexts. For example, Burke, in studying (non-privacy-preserving) recommender systems, notes that researchers have turned to complex computations and analyses to deal with the sparse data problem [11]. In privacy-preserving systems, the problem is even more complex. In addition to dealing with reputation accuracy problems due to data sparsity, the security of privacy-preserving systems is often contingent on there being a sufficiently large number of parties in the query set U, some fraction of which must be honest. Let us return to our previous example using MANETs to illustrate the issue. Say that a party p_i would like to interact with a party p_k, both of which are currently active in the network (it would not make sense to try to interact with inactive parties), but, due to the dynamics of the network, there may be only one peer of p_i that has reputation information on p_k. Therefore the set U could only consist of that single party. Since the single party does not want his reputation information on p_k leaked, he would refuse to help p_i. The issue extends to situations beyond the case where there is only a single party in U. Existing PDRS protocols require that some fraction of the set U be honest. In a dynamic network where reputation information is lost when a party leaves the network, this requirement could be hard to satisfy. A well-resourced adversary that has corrupted multiple parties in the system can coordinate an attack where all of the corrupt parties are in the network at the same time. Honest parties, on the other hand, will enter and leave the network as they please. Thus the coordinated attack has the effect of increasing the fraction of corrupt to honest parties currently active in the system. We give a specific example of how an existing system can break down in the context of dynamic networks in Section 2.

To overcome the challenges just presented, we propose a new paradigm for PDRS, namely dynamic, privacy-preserving decentralized reputation systems (Dyn-PDRS). A system which follows the Dyn-PDRS paradigm enables parties to run a delegation protocol when they want to leave the network. In this protocol, they delegate their reputation information to a set, D, of other parties in the network. The delegation is done in such a way that the party's privacy is still maintained. That party is then free to leave the network. When that party appears in a query set U, the parties in D are able to act on its behalf. Furthermore, a Dyn-PDRS provides a redelegation protocol which is run when a party in D wants to leave the network. This allows the parties in D to redelegate to a new set D′. That set D′ can then act on the original party's behalf.

1.1 Adversary Model

We work in the honest-but-curious or semi-honest adversary model [12]. In this model, we assume that corrupt parties execute the protocol as specified, but use any information gleaned during execution to attempt to violate another party's privacy. In particular, this means that all parties who are active in the network will participate in the protocols when required. This is a common assumption in the multiparty computation (MPC) literature. We also assume that the underlying communications system provides guaranteed delivery. Adapting our protocols to operate in networks where infinite delay can occur between the sending and receiving of messages is possible using already established techniques [13]. In a real system, this could be enforced by giving low reputation values to parties who refuse to participate. Our protocols build upon existing work and use well-studied cryptographic primitives (e.g., secret sharing) as building blocks.

We assume that any party can be corrupted and that corrupt parties will collude with one another. If the querying party is corrupt, privacy of honest parties is ensured as long as at least two of the parties in the set U are honest. If the querying party is not corrupt, there need only be one honest party in the set U. For delegation and redelegation, privacy is maintained as long as there is at least one honest party in the corresponding sets. We describe how we achieve these security guarantees in Section 4.

1.2 Contributions

Our contributions (and the outline of the remainder of the article) are as follows. In Section 2, we describe existing PDRS from the literature and illustrate how those systems fail when parties are allowed to leave the network. In Section 3, we give a more formal definition of PDRS and Dyn-PDRS and describe our problem setting. In Section 4, we specify four protocols. The first is necessary for a PDRS and is similar to existing work in the area. The next three are necessary to build our Dyn-PDRS. We also argue the correctness and security of our protocols. In Section 5, we describe tradeoffs in different delegation strategies, or how to choose delegation and redelegation sets. We describe two methods for doing this and describe how privacy guarantees differ in each. In Section 6, we show the results of a number of simulations we have run which illustrate the benefits of a Dyn-PDRS over the traditional PDRS. In Sections 7 and 8, we describe an implementation of our protocols and timing experiments we have conducted using our implementation. Finally, we conclude in Section 9.

2 RELATED WORK

2.1 Reputation Systems

A number of protocols have been proposed to construct PDRS. We describe some of the prominent ones and comment on why the problem of operating in networks where parties are constantly leaving and rejoining the network is a concern. We focus on protocols which are secure in the honest-but-curious model as that is the model in which we work. Similar to the works described below, we focus on an additive PDRS, in which reputation information from the parties in the query set is added together privately, though our ideas for Dyn-PDRS should be adaptable to other functions such as weighted sums.

2.1.1 Pavlov et al.

One of the earliest works in privacy-preserving decentralized reputation systems comes from Pavlov et al. [6]. An important proof coming from this work is that if the querying node is corrupt, there must be at least two honest nodes or privacy cannot be achieved. This is due to the fact that the querying party gets the result of the calculation. Given the result and the inputs of the other dishonest parties, the reputation value


of the sole honest party can be calculated. The authors also present three protocols (of varying strengths and security guarantees) which enable such a system. We focus our analysis here on their second protocol as it is closest to our setting (full-threshold security where corrupt parties are allowed to collude). The querying party begins the protocol by running a witness selection scheme. This results in a set of witnesses who will provide feedback on the target party and, with high probability, will have at least two honest witnesses. The querying party sends a description of the set to all parties in the set. Each witness splits his reputation score on the target party using additive secret sharing and sends one share to each party in the protocol (including the querying party) and keeps one share for himself. Once a party has gathered shares from every other party, he sums them all up and sends the result to the querying party. The querying party then sums all the values he receives to recover the sum of the reputation values. For security and correctness proofs of this protocol, we refer the reader to the original work.

In the case of dynamic networks, the problem with Pavlov's protocol is that, while honest parties which could provide feedback for a particular target party will come and go due to normal churn in the network, dishonest parties will not necessarily follow this pattern, making them more likely to be chosen as witnesses. Pavlov et al. prove their witness selection scheme will result in a witness set with at least two honest witnesses with probability greater than (1 - 1/n)((N - b - 1)/(N - 1)), where n is the number of witnesses, N is the number of possible witnesses (i.e., the number of parties with reputation information on the target), and b is the number of corrupt parties.

For Pavlov's protocol, a dynamic network has the effect of lowering N while b remains constant. We show how this affects the probability of having at least two honest witnesses in Fig. 1. The probability for a hypothetical static network is also shown in the figure for reference. Here we have fixed the fraction of corrupt parties to 0.1 and set the size of the witness set to one-tenth of the original network size. It is clear that the dynamic nature of the network has a significant impact on the security of Pavlov's protocol.

[Fig. 1. Illustration of how an existing reputation system fails in dynamic networks as security is only guaranteed if there is a high probability of at least two honest witnesses.]

2.1.2 Hasan et al.

Hasan et al. [7] propose the k-shares reputation protocol which builds upon the work of Pavlov et al. The benefit of the k-shares protocol is that witnesses are able to maximize and quantify the probability that their reputation information is kept private. In this protocol, the querying agent chooses a set of witnesses (the exact method for this is not specified in the paper). The description of the set of witnesses is sent to each witness. Each witness chooses a subset of the witnesses of size up to k which he considers trustworthy. The witness then shares their reputation information with the subset using additive secret sharing and sends the description of the subset to the querying party. The querying party informs each witness who they will receive shares from. Each witness, upon receiving shares from other witnesses, sums them up and sends the result to the querying party. The querying party sums all of these values to get the sum of the reputation values. We note that a witness may choose not to input his reputation information if he does not trust enough parties in the witness set. Furthermore, since each witness is selecting up to k other witnesses that he trusts, the authors note that this leaks some information about trust relationships (but not specific reputation information). The authors propose solving this by allowing the querying party to add a few untrusted parties to the subset and then selecting the same subset for repeated queries.

Consider the operation of Hasan's protocol in a dynamic network. As the authors do not specify how the set of witnesses is chosen, we will assume it happens in the same manner as in Pavlov's protocol. We saw previously how network availability affects the probability that there are at least two honest witnesses. In Hasan's protocol, this means that honest witnesses will likely refuse to take part in the computation, thus preserving their privacy. We note, however, that the fact that more honest parties are refusing to participate in reputation computations is not a good thing for the system as a whole. Another issue arises when attempting to use Hasan's protocol in a dynamic network. In order to provide high efficiency, the authors require that k << n, where n is the size of the witness set and k is the maximum size of the subsets chosen by each witness. In a dynamic network, it is possible that this inequality cannot be met as the number of available (i.e., currently part of the network) witnesses could be much smaller than in an entirely static network.

2.1.3 Other Protocols

A number of other decentralized, privacy-preserving reputation systems have been proposed in the literature (e.g., [8], [9], [14]). Of the others we have looked at, all have similar issues with regard to dynamic networks. In particular, the fact that reputation information from trustworthy parties may not be available at query time impacts the security of existing reputation systems. This illustrates the importance of availability in decentralized reputation systems in general. The solution to the problem is non-trivial as we desire a reputation system that also preserves the privacy of reputation information used to help the querying party compute a reputation value for the target party.
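To make the witness-availability effect from Section 2.1.1 concrete, the sketch below evaluates a lower bound of the form (1 - 1/n)((N - b - 1)/(N - 1)) — our reading of the garbled bound attributed to Pavlov et al. — under parameters like those used for Fig. 1 (10 percent corrupt parties, witness set one-tenth of the network). The function name and the simple online/offline model are illustrative assumptions, not taken from the paper.

```python
# Sketch: how a lower bound on Pr[at least two honest witnesses]
# degrades when the pool of available witnesses N shrinks while
# the number of corrupt parties b stays fixed (coordinated attack).

def honest_pair_bound(n: int, N: int, b: int) -> float:
    """Lower bound on the probability that a witness set of size n,
    drawn from N candidates of which b are corrupt, contains at
    least two honest witnesses (reconstructed form of the bound)."""
    return (1 - 1 / n) * ((N - b - 1) / (N - 1))

# Static network: 1000 candidate witnesses, 10% corrupt,
# witness set one-tenth of the network size.
N_static, b = 1000, 100
n = N_static // 10
print(round(honest_pair_bound(n, N_static, b), 3))  # ~0.891

# Dynamic network: only a fraction of honest parties are online,
# but all b corrupt parties remain online at the same time.
for online_fraction in (0.8, 0.5, 0.3):
    N_dyn = int((N_static - b) * online_fraction) + b
    print(N_dyn, round(honest_pair_bound(n, N_dyn, b), 3))
```

The bound falls monotonically as the online fraction of honest parties drops, matching the qualitative behavior the text describes for Fig. 1.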


2.2 Related Problems

A number of related problems exist in the literature. One related problem is privacy-preserving profile matching in social networks. In this problem, users hold sets of attributes about themselves and want to find others in the social network with similar attributes, without revealing their attributes. A recently proposed protocol to solve this problem [15] uses some of the same building blocks we use here (i.e., secret sharing and MPC), and therefore could potentially benefit from our delegation work. Privacy-preserving proximity testing, to find out who is near you without revealing your location, can also be accomplished using similar cryptographic constructs [16] and could therefore benefit from our work.

2.3 Secure Multiparty Computation

Secure multiparty computation is a branch of cryptography which deals with the problem of computing functions of private inputs from multiple parties and has been applied to various privacy-related problems (including online auctions [17] and statistical data analysis [18]). As we have seen, MPC is very related to the problem at hand. Indeed, many existing PDRS use constructs similar to those used in MPC protocols (e.g., additive secret sharing). Furthermore, our work in Dyn-PDRS builds upon MPC research. Therefore, we briefly discuss the area of MPC, paying particular attention to those constructs which are important to (Dyn-)PDRS.

The MPC problem and its initial solutions were first proposed in the 1980s. Formally, we have some number of parties, say p_1, ..., p_n, each with a private input x_1, ..., x_n, respectively. The parties would like to compute a function of their inputs, say f(x_1, ..., x_n), without revealing their individual inputs to anyone. Clearly, a solution to the MPC problem implies a solution to the PDRS problem.

There are two main classes of solutions to the MPC problem, the garbled circuit class [19] and the secret sharing class [20], [21]. The garbled circuit class primarily focuses on the case where there are only two parties, while the secret sharing class focuses on two or more parties. Early solutions to the MPC problem were quite inefficient. Since the original proposals, however, many efficiency optimizations have been proposed. It is now feasible to run MPC on large computations in many circumstances [22], [23]. As we saw earlier, many of the existing PDRS protocols use additive secret sharing (which we will define in Section 3). Secret sharing has been around for quite a while and is very useful as it is very efficient. It has been utilized successfully in many recent cryptographic protocols. Therefore, the secret sharing class of MPC solutions is of most interest to our work here.

Recent work proposed a new paradigm for MPC called transferable MPC (T-MPC) [24]. T-MPC differs from MPC in that the set of computation parties is permitted to change over time. The main idea behind this work was to pair secret share redistribution protocols (e.g., [25], [26]) with MPC protocols. We use these ideas to build our protocols for Dyn-PDRS, but applied to additive secret sharing instead of Shamir secret sharing [27], as was done in the original T-MPC work.

3 PROBLEM SETTING AND DEFINITIONS

Our problem area is that of computing reputation in a privacy-preserving manner, in dynamic, decentralized networks. In this section, we define the environment in which we will be working and other important details of our setup. We abstract many details of the network as our focus in this work is on building solid protocols to enable such a reputation system.

Let P be the set of parties which form our network. Parties in P may leave and join the network as they please. We assume that each pair, p_i, p_j ∈ P, is connected by a secure, authenticated channel. Often this is achieved by using a public key infrastructure and is a common assumption in the PDRS literature. Party p_i stores reputation information that it has generated about another party p_j, say v_ij. Let v_ij be between 0 and some global maximum reputation v_max if p_i has reputation information on p_j; otherwise, v_ij = ⊥.

Decentralized reputation systems are useful in the case where p_i needs to interact with some p_k but v_ik = ⊥. In this case, p_i forms a set U ⊆ P and queries parties in U about p_k to help it compute v_ik. For example, if we let v_ik = (1/|U|) Σ_{p_j ∈ U} v_jk, we would have an additive system. Such a system is also privacy-preserving if it fits the following definition.

Definition 1 (Privacy-Preserving Decentralized Reputation System). An (additive) PDRS consists of a decentralized protocol π_add which allows a querying party, p_i, to compute v_ik = (1/|U|) Σ_{p_j ∈ U} v_jk, without any of the v values being leaked to any other party. Here U is the query set and is chosen by p_i.

Definition 2 (Additive Secret Sharing). Fix a finite field F, for example Z_p for some prime p. Given a secret s ∈ F, the additive secret sharing of s consists of the shares s_1, ..., s_n such that s = s_1 + s_2 + ... + s_n and is computed as follows: s_1, s_2, ..., s_{n-1} are chosen at random from F and s_n = s - (s_1 + s_2 + ... + s_{n-1}). Let S_n : F → F^n be the additive secret sharing function which outputs n shares of the input and let S_n(s)[i] be the ith share of s. Given the n shares, one can reconstruct s simply by adding the shares together.

Additive secret sharing, defined above, has been used in a number of general secure multiparty computation protocols as a way to preserve privacy [28], [29], and we use it in our reputation system. It is linear, i.e., given shares of two values, one can compute a share of the sum of those values without inverting the sharing function, or mathematically, S_n(s)[i] + S_n(s′)[i] = S_n(s + s′)[i]. Furthermore, any adversary who does not know all the shares cannot compute the secret. In fact, an adversary with up to n - 1 shares gains no additional information about s. In other words, additive secret sharing is information-theoretically secure. We omit the subscript when it is clear from the context.

Many of the target use cases for decentralized reputation systems (P2P networks, MANETs, vehicular networks, etc.) are dynamic in nature. In other words, parties are often leaving and joining the network. In existing decentralized reputation systems, when a party leaves the network, all the reputation information they possess leaves with them. In other words, that party cannot appear in a set U until it rejoins the network. We saw in Section 2 how this can affect security. Our goal is not only to change this, but to do so in a privacy-preserving manner.

Consider how this might work in a system which does not preserve privacy. Just before party p_i leaves the network, he could digitally sign his reputation information and


send it to another party in the network, say pj . Now pj can more detail. In a general sense, the security of the protocols
act on pi ’s behalf. When pj wants to leave the network, he we present are secure for up to n  1 corrupt parties. How-
could further delegate pi ’s reputation information (along ever, due to the specific computation, summation, n  1 cor-
with his own) to another party, say pk , by using digital sig- rupt parties can learn the remaining honest party’s input by
natures to create a delegation chain. Any other party in P , subtracting their individual inputs from the output. Only
using the delegated information, can verify the chain to the querying party should learn the output so if the query-
know that the responding party is authorized to do so. This ing party is corrupt, we must assume that at least two of the
example illustrates our goal, to enable parties to leave and parties in U are honest [6]. For simplicity we present our
join the network at will, yet keeping their reputation infor- protocols as if the querying party is honest. In the case of a
mation available to other parties in the network, all while dishonest querying party, the only thing that changes is the
still preserving privacy of reputation values. Such a system number of corruptions tolerated, as we just described.
does not currently exist. We formalize this idea with the fol-
lowing definition. 4.1 The PDRS Protocol
Let pi be the querying party, who wants to compute vik for
Definition 3 (Dynamic, Privacy-Preserving Decentral-
some party pk . Let U be the set of witnesses with inputs to
ized Reputation System). A (additive) Dyn-PDRS con-
the computation. Our protocol padd is shown in Protocol 1.
sists of a protocol padd as in Definition 1 and three additional
We note that while not identical to previously proposed
protocols: pdel , pact and pre del . Where pdel allows a party to del-
protocols for PDRS, our protocol is very similar, and, taken
egate the reputation information it holds to a set of parties D
in its own right, should have similar performance.
while still preserving the privacy of that information. The pro-
tocol pact allows a set of parties who have been authorized to act
on another party’s behalf to enter that party’s information into Protocol 1. padd
the protocol padd while still preserving the party’s privacy. 1) pi sends the description of U and the identity pk to each
Finally, the protocol pre del lets a set of parties, say D, re-dele- party in U.
gate reputation information that was delegated to it to another 2) Each pj 2 U computes ðs1 ; . . . ; sjUj Þ ¼ SjUj ðvjk Þ and sends
set of parties, say D0 , in a way which maintains the privacy of one share to each other party in U and keeps one share for
the information. himself.
3) Each pj collects one share from each other party in U. Let
In the next section, we give specific instances of these ðr1 ; . . . ; rjUj Þ be the shares he collects (including his own
protocols and how they are composed to form a Dyn-PDRS. share).
There is some tradeoff to be balanced in delegation. We 4) Each pj then computes tj ¼ r1 þ r2 þ    þ rjUj and sends tj
explore delegation strategies in order to balance the tradeoff to pi .
between information availability and privacy in Section 5. 5) Party pi receives jUj shares, tj from pj 2 U, and sets
1
P
vik ¼ jUj pj 2U tj .
4 PROTOCOLS
In this section, we present the four protocols introduced in Correctness. The correctness of our protocol is guaranteed
Section 3. We begin by presenting padd , the protocol to allow due to the linear naturePof additive secret sharing. Mathe-
1
P
pi to use the set U to compute vik ¼ jUj pj 2U vjk privately.
matically we have that pj 2U Sðvjk Þ, where addition is per-
The summation is computed via a simple multiparty computation built on additive secret sharing. The concept is similar to previous work in decentralized reputation systems and has similar performance characteristics. π_add by itself could be used as the basis of a PDRS. It is important to note that in a Dyn-PDRS, since we allow for delegations, the set U may contain parties which are not currently online, as long as the party has delegated its reputation information. The set U can be generated using methods from prior work, for example, Pavlov's witness selection protocol. We do require that all parties in U have reputation information on the target, p_k. In other words, we require that v_jk ≠ ⊥ where p_j ∈ U.

Correctness. The vector of the values t_j, formed point-wise on the sharing vectors, is equal to S(Σ_{p_j ∈ U} v_jk). These shares are, in essence, what the parties in U send to p_i in the next-to-last step. So,

    v_ik = (1/|U|) Σ_{p_j ∈ U} t_j = (1/|U|) Σ_{p_j ∈ U} v_jk.

Security. The security of our protocol comes from the security guarantees of additive secret sharing. As long as the adversary has not corrupted all of U, all of the individual reputation values v_jk as well as the output value v_ik are kept private.

Next, we present the remaining three protocols, π_del, π_act, and π_re-del, to enable privacy-preserving delegation. Together, these protocols enable a reputation system where parties can leave the network, yet delegate their reputation information in such a way that it can still be used to assist other parties in computing reputation. We abstract away many of the details of the underlying communication system and some fine details about how the protocols interact to keep our discussion here focused on the protocols themselves. In Section 7, we describe our implementation of these protocols in a real system and describe these parts in more detail.

4.2 The Dyn-PDRS Protocols
Say party p_ℓ ∈ P is leaving the network. In order not to lose all the reputation information of p_ℓ, in this section we propose the necessary protocols to allow p_ℓ to delegate its reputation information to a set of parties D ⊆ P. We then show how the parties in D can act on behalf of p_ℓ whenever p_ℓ appears in a query set U. Furthermore, we present a protocol to allow the parties in D to transfer the delegation of p_ℓ's reputation information to a new set D′. This protocol is used when a party in D is leaving the network. D and D′ may be of different sizes, overlap, or be completely independent.
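Both π_add and the delegation protocols that follow rest on additive secret sharing over a small prime field. The sketch below is illustrative only (not the authors' code); the field Z_1021 is borrowed from the implementation described in Section 7, and the function names are our own.

```python
import random

P = 1021  # prime field modulus, matching the implementation in Section 7

def share(v, n):
    """Split v into n additive shares that sum to v mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((v - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod P."""
    return sum(shares) % P

# Point-wise addition of sharing vectors: shares of v1 plus shares of
# v2, added position by position, form a valid sharing of v1 + v2.
v1, v2 = 7, 9
summed = [(a + b) % P for a, b in zip(share(v1, 5), share(v2, 5))]
assert reconstruct(summed) == (v1 + v2) % P
```

An adversary holding any four of the five shares sees only uniformly random field elements; all shares are needed to recover the value, which is what the security arguments above rely on.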
When p_ℓ rejoins the network, the parties in D can simply discard p_ℓ's reputation information. It would be fairly simple, however, to also allow the parties in D to return the reputation information back to p_ℓ. For our purposes here, we will simply let the set D be chosen at random. Thus, (1) gives the probability that the private information is leaked due to this set. In Section 5, we explore other methods for choosing D and the redelegation sets.

Protocol 2 describes π_del. The correctness and security of this protocol come directly from the correctness and security of additive secret sharing, which we discussed in Section 3. As long as the adversary does not control all of the parties in D, p_ℓ's reputation information is kept private.

Protocol 2. π_del
1) p_ℓ chooses a set D ⊆ P.
2) For each p_j ∈ P where v_ℓj ≠ ⊥, p_ℓ computes shares_j = S_|D|(v_ℓj) and sends the identity j and one share to each party in D.
3) p_ℓ digitally signs a message signifying that it has delegated its reputation information to the set D and sends the message and signature to each party in D.

Once the parties in D have received the information sent by p_ℓ in Protocol 2 and verified the digital signature, they are ready to act on his behalf. At some later point in time they will see a query set U that contains p_ℓ when a party, say p_i, initiates Protocol 1. At this point they run π_act, shown in Protocol 3.

Protocol 3. π_act
1) The parties in D notify the parties in U that they are to act on behalf of p_ℓ by sending them the message and digital signature received from p_ℓ.
2) The parties in D select one of themselves to take p_ℓ's place in the set U and notify the parties in U of this choice.
3) The parties in U validate the digital signature and replace p_ℓ in the set U with the party chosen in the previous step. Call this new set U′.
4) The parties in U use the set U′ for sharing when continuing Protocol 1, with the exception of computing r_ℓ (the input shares that would have come from p_ℓ).
5) Each party in D takes its share of shares_k, say s_k, received in Protocol 2, computes (s_1, ..., s_|U′|) = S_|U′|(s_k), and sends one share to each party in U′.
6) Each party in U′ receives |D| shares from the previous step. Call these shares (s′_1, ..., s′_|D|). They then compute r_ℓ = s′_1 + ... + s′_|D|. r_ℓ takes the place of what they would have received from p_ℓ in Step 3 of Protocol 1.

At the end of Protocol 3, the parties in U′ can complete the execution of Protocol 1. Some interesting features of the protocol are that not all of D is required to participate in the execution of Protocol 1 and that the trust value v_ℓk is never revealed, either to the parties in D or to the parties in U′.

Correctness. From Protocol 2, the parties in D hold shares of v_ℓk, say shares_k = (d_1, ..., d_|D|), where v_ℓk = d_1 + ... + d_|D|. These shares are then split into subshares and distributed to the parties in U′. In other words, d_j is split into d′_j1, ..., d′_j|U′|. Notice that the sum of all the subshares over every d_j is still v_ℓk. One subshare of each d_j is sent to one party in U′. Since addition is commutative, it turns out that the sum of all the r_ℓ shares computed in Step 6 of the protocol is still v_ℓk. Thus, the parties in U′ have additive shares of v_ℓk, as needed for the protocol to be correct.

Security. We show that Protocol 3 is secure from the perspective that it does not give the adversary any additional information about v_ℓk. We show this for the worst case, when the adversary controls all parties but one in D and all parties but one in U′. Security in the case that the adversary controls fewer parties is an immediate consequence of worst-case security. Let p_h ∈ D and p′_h ∈ U′ be the honest, uncorrupted parties in each set. Note that p_h and p′_h could be the same party. In the protocol, p_h will create a number of subshares, one of which will be sent to p′_h. Since the adversary will not know that share, due to the security of additive secret sharing, the adversary will also not know the r_ℓ that p′_h computes in Step 6 of the protocol. Without that value, the r_ℓ values computed by the corrupt parties give the adversary no additional information about v_ℓk. This shows that as long as there is at least one uncorrupted party in each of D and U′, the protocol leaks no additional information about the private trust information.

If a party in D leaves, the remaining parties would not be able to act on p_ℓ's behalf. Therefore, before any party in D leaves the network, π_re-del is run, as shown in Protocol 4. We let p_ℓ′ ∈ D be the party that is leaving the network. Furthermore, recall from Protocol 2 that the parties in D hold a number of pairs (j, s_j), where j is the identity of a party and s_j is a share of v_ℓj. How the set D′ is chosen will be explored in Section 5. For now, we simply assume that D′ is a new random set. Our description of π_re-del focuses on the case where only one party has delegated information to the set D. The protocol can easily be adapted to the case where multiple parties have delegated to D by running it once for each party that has left the network and delegated to D.

Protocol 4. π_re-del
1) p_ℓ′ sends a message to all other parties in D that it is leaving the network.
2) The parties in D select a new set D′ which will be responsible for acting on behalf of p_ℓ.
3) Each party in D creates subshares of each s_j it holds and distributes one subshare to each party in D′ along with the identity j.
4) For each j, each party in D′ receives one subshare of s_j from each party in D and stores the sum of these subshares along with j. The sum of these subshares is a new share of v_ℓj.
5) Parties in D also send the message and digital signature they received from p_ℓ to the parties in D′. They also each digitally sign a message stating that they are transferring delegation of p_ℓ's reputation information to D′.

Thus, by doing something similar to what was done in Protocol 3, i.e., creating and distributing subshares, the parties in D are able to transfer all the delegated information they hold for p_ℓ to the set D′ without revealing the values. Given the results of this protocol, simple modifications can be made to Protocol 3 to allow the set U to properly validate that D′ is authorized to act on p_ℓ's behalf.
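The subshare-and-sum step at the heart of Steps 5-6 of Protocol 3 and Steps 3-4 of Protocol 4 can be sketched as follows. This is an illustrative sketch under our own naming, not the authors' code; it shows that resharing the shares transfers the delegated value to a new set without ever reconstructing it.

```python
import random

P = 1021  # prime field modulus (see Section 7)

def share(v, n):
    """Split v into n additive shares that sum to v mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((v - sum(shares)) % P)
    return shares

def redelegate(old_shares, new_size):
    """Each current holder subshares its share; each new holder sums
    one subshare from every current holder (Protocol 4, Steps 3-4)."""
    subshares = [share(s, new_size) for s in old_shares]
    return [sum(col) % P for col in zip(*subshares)]

v = 8                                 # a delegated reputation value
d_shares = share(v, 4)                # held by a delegation set D of size 4
d2_shares = redelegate(d_shares, 6)   # transferred to a new set D' of size 6
assert sum(d2_shares) % P == v        # D' now holds valid shares of v
```

Because each new holder only ever sees one subshare from each old holder, an adversary missing even a single honest party's subshare learns nothing about v, mirroring the security argument above.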
Correctness and security proofs for this protocol follow a logic similar to that used for Protocol 3. Therefore, we omit the exact details.

5 DELEGATION STRATEGIES
Consider a simple delegation strategy in which p_ℓ chooses a random set D, and any time π_re-del is run, a new random set D′ is chosen. With each delegation (or redelegation), there is some chance that the delegated information will leak. This happens when all parties in the delegation set are corrupt. Let |D| be the size of the delegation set, which, for simplicity, we assume to be constant; our protocols in Section 4, however, work for sets of different sizes. Therefore, there are C(c|P|, |D|) sets of size |D| for which all parties in the set are corrupt. (1) gives the probability of choosing a delegation set where all parties are corrupt, or in other words, the probability of a single delegation (or redelegation) resulting in leaking private reputation information, given the delegation strategy we just described:

    prob_leak = C(c·p_a·N, |D|) / C(p_a·N, |D|),    (1)

where C(n, k) denotes the binomial coefficient.

Using the parameters from our first simulation in Section 6 (N = 10,000, p_a = 0.75, c = 0.2, and |D| = 5), we get prob_leak ≈ 0.0003. Therefore, with 1,700 delegations or redelegations in total, the probability that the private reputation information would have leaked is 0.0003(1700) = 0.51. With high churn rates in a network, we can expect a lot of delegations and redelegations and would have to stop delegating at some point in order to guarantee security. Therefore, we need a better delegation strategy.

Here we study two delegation strategies. One provides strong privacy guarantees but could potentially leak some information about who p_ℓ trusts (but not the actual reputation values). The other has weaker privacy guarantees, but does not leak information about who p_ℓ trusts.

5.1 Guaranteed Privacy
We can exploit the fact that p_ℓ has reputation information on other parties in the network to guarantee privacy when choosing how delegation should work, i.e., the initial set D and the delegation chain depth d. We let p_ℓ choose d according to how available he wants his information to be when out of the network. For example, this could be determined based on the churn rate of the network. Once d is set, p_ℓ forms the set D_h of the parties that he trusts the most (based on reputation values he possesses), where the size of D_h is ⌈d/p_a⌉. These parties are known by p_ℓ to be honest and will help provide strong privacy guarantees by forming part of D. p_ℓ also chooses some number of other parties from the network at random, whose trustworthiness is possibly unknown. Call this set D_u. When p_ℓ would like to leave the network, he runs protocol π_del with D = in_network(D_h ∪ D_u), where in_network returns the subset of its argument containing those parties which are currently in the network. At a later point, when a party, say p_ℓ′ ∈ D, wants to leave the network, the parties in D run π_re-del with D′ = D − {p_ℓ′}.

Due to the way we constructed D_h, |in_network(D_h)| ≥ d. Furthermore, there are approximately (1 − c)|D_u| honest parties in D_u. Therefore, at any moment in time, the set D will contain at least d honest parties. Since we limit the delegation chain by d, we are guaranteed that there will always be at least one honest party in the redelegation sets. Therefore, privacy is ensured.

5.2 Probabilistic Privacy
We can make our delegation strategy simpler by relaxing the security guarantees. Let p_ℓ choose a set D_u at random of size ⌈d / ((1 − c) p_a)⌉, where d is the desired delegation chain depth to ensure some level of availability. We set D = in_network(D_u) when p_ℓ runs π_del. When the parties in D run π_re-del, they set D′ = D − {p_ℓ′}. Due to the way D_u is chosen, we expect there to be d honest parties in D at any instant in time, and since we limit the delegation chain depth by d, there will always be at least one honest party in the redelegation sets. In practice, we can make D_u somewhat large in order to have even stronger assurances of privacy.

5.3 Discussion
The first delegation strategy is able to provide better privacy guarantees by exploiting the reputation values that p_ℓ possesses. There are circumstances where this could leak information about who p_ℓ trusts, but not the actual reputation values of p_ℓ. This may or may not be of concern, depending on the application. The second delegation strategy does not have this problem, but is not able to provide as strong privacy guarantees, though we feel this strategy could be very viable in networks where c, the fraction of corrupt parties, is very low. Our description of the second strategy requires knowledge of c, which is a drawback, but conservative estimates of c can likely be computed.

6 SIMULATION
The protocols shown in the previous section can be used to build a dynamic, privacy-preserving decentralized reputation system. We give an example implementation of such a system in Section 7. In this section, we show the utility of increasing availability in decentralized reputation systems through a number of simulations. In order to establish a comparison with previous work, we also simulate the case where no delegation of reputation information occurs. This is what all previous privacy-preserving decentralized reputation systems do: in a network with churn, the reputation information of parties leaves when the parties leave the network. We summarize all the symbols we use in this section in Table 1.

TABLE 1
Table of Symbols for the Simulator

Symbol  Description
N       Number of parties
p_a     In-network probability
c       Fraction of corrupt parties
s       Fraction of information to bootstrap
q       Fraction that query in an iteration
|D|     Size of the delegation set D
d       Bound on delegation chain depth
g       Network churn rate
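As a quick numeric check of (1) in Section 5, the leak probability for the simple random-delegation strategy can be evaluated directly. The variable names in this sketch are our own:

```python
from math import comb

N, p_a, c, D_size = 10_000, 0.75, 0.2, 5
in_net = int(p_a * N)        # parties expected in the network: 7,500
corrupt = int(c * in_net)    # corrupt parties among them: 1,500

# Probability that a uniformly random delegation set is all-corrupt, Eq. (1)
prob_leak = comb(corrupt, D_size) / comb(in_net, D_size)
# prob_leak is roughly 3e-4, matching the value quoted in Section 5
```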
Fig. 2. Information availability for various delegation depths (p_a = 0.75).
Fig. 3. Information availability for various delegation depths (p_a = 0.50).

6.1 The Setup
For our simulation, we let N be the total number of parties in the network and p_a be the probability that a party is in the network during one iteration of the simulation (1 − p_a is the probability that they are not in the network). Let c be the fraction of corrupt parties. We bootstrap the reputation system by giving each party some reputation information on other parties in the network. We let s be the fraction of parties for which a given party holds reputation information. At each iteration of the simulation, some fraction, q, of the parties in the network ask for reputation information on some other in-network party. Also, in each iteration, some fraction, g, of the network leaves or rejoins the network. Parties (both those in the network and those out of the network) will leave the network with probability 1 − p_a or join the network (if they were already gone) with probability p_a. We simulate a static network by setting p_a = 1. As we saw previously, when a party leaves the network, they delegate their reputation information to a delegation set. If someone in that set leaves before the original party returns to the network, a redelegation occurs. We bound the total number of delegations and redelegations, since the depth of the delegation chain affects both efficiency and security. Let d be the bound on the depth of the delegation chain. In other words, if d = 1, when p_ℓ leaves the network, he will delegate his trust information to some set D; when one of the parties in D leaves the network, they do not do any further delegations. With d = 2, p_ℓ would delegate to a set D, who in turn would delegate to a set D′ when a party in D is leaving the network, but the chain would end there. When p_ℓ returns to the network, the delegation chain resets (i.e., delegation would occur again if p_ℓ left again). In effect, existing privacy-preserving decentralized reputation systems have d = 0, i.e., no delegation.

6.2 Varied d
We begin by studying how d affects the level of information availability we can achieve. Fig. 2 shows a simulation with N = 10,000, p_a = 0.75, c = 0.2, s = 0.05, q = 0.05, |D| = 5, g = 0.25, and various values for d. The line for the theoretic limit is achieved by setting p_a = 1. In other words, we cannot surpass the information availability of a fully static network. We compute the fraction of information available by counting the total number of reputation values available in the network (either directly from a party or through delegation) divided by the total possible number of reputation values (N² − N).

We can see in the figure that even with d = 1 there is a significant increase in the amount of available information. Furthermore, with d = 4 the information availability in the simulated Dyn-PDRS is very close to the theoretic upper bound. This plot illustrates how effective simple delegation can be in a reputation system.

6.3 Varied p_a
We now look at how changing p_a affects information availability. In Fig. 3 we repeat our previous simulation but lower p_a to 0.5. There is still a significant advantage in delegating reputation information, but it takes a longer delegation chain to approach the theoretic limit. In essence, the effect of a lower probability of availability of the parties is a slower growth of information availability in the system over time. To combat this in a deployed system, we would need a deeper delegation chain.

6.4 Varied g
Next we run our simulation with various values for g. Recall that g specifies what fraction of the parties might change their network status; this relates to the churn of the network. At each iteration of the simulation, gN of the parties will flip a weighted coin to determine if they should be in the network (either join the network if they were out, or stay in). p_a specifies the probability that the party should stay in or join the network. Other parameters are fixed at N = 10,000, p_a = 0.75, c = 0.2, s = 0.05, q = 0.05, |D| = 5, and d = 4. We run our simulation with the following values for g: 0.25, 0.50, 0.75, 1.00. The results are shown in Fig. 4. Again, we include in the plot the line for the theoretic limit derived from a static network.

We can see from the figure that, surprisingly, the churn rate has little effect on information availability. To understand why this is the case, consider what happens when g is high. Parties are more likely to leave the network, so they will have to delegate their reputation information. They are also, however, more likely to come back quickly, which means the delegation chain limit is less likely to be reached.
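The simulation loop of Section 6.1 can be approximated in miniature. The following toy model is our own, highly simplified sketch (it tracks only a per-party delegation chain depth, not the delegation sets themselves), intended to show the shape of the computation rather than reproduce the paper's results.

```python
import random

def simulate(N=1000, p_a=0.75, g=0.25, d=4, iters=50, seed=1):
    """Toy availability model: a party's information stays available
    while its delegation chain depth has not exceeded the bound d."""
    random.seed(seed)
    in_net = [random.random() < p_a for _ in range(N)]
    depth = [0] * N           # current delegation chain depth per party
    lost = [False] * N        # True once the chain bound was exceeded
    for _ in range(iters):
        for i in random.sample(range(N), int(g * N)):
            stay = random.random() < p_a
            if in_net[i] and not stay:     # party leaves: chain grows
                depth[i] += 1
                if depth[i] > d:
                    lost[i] = True
            elif not in_net[i] and stay:   # party returns: chain resets
                depth[i] = 0
                lost[i] = False
            in_net[i] = stay
    return sum(1 for i in range(N) if in_net[i] or not lost[i]) / N

availability = simulate()
```

With the same random seed, raising the chain bound d can only keep more information available in this model, mirroring the trend visible in Fig. 2.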
Fig. 4. Information availability plot with various churn rates.
Fig. 5. Information availability plot with various delegation set sizes.

Contrast this with the case where g is low. Parties are less likely to leave the network, so they will not have to delegate their information as often. When they do leave, however, they stay out longer. But, since the parties in their delegation (and redelegation) sets are also less likely to leave, the delegation chain will not grow as quickly.

6.5 Varied |D|
We now vary the size of the delegation and redelegation sets to see how that parameter affects information availability. For simplicity, we assume that the sizes of the delegation set and the redelegation sets are the same. Fig. 5 shows the results of this simulation. We can see in the figure that the size of the delegation set indeed has an effect on information availability. The effect is not drastic, but at the same time it is non-negligible. The reason for lower information availability as |D| increases is that there are more parties who can cause redelegations; thus, it is more likely that we will reach the delegation chain depth limit.

6.6 Discussion
From the previous simulations, it is clear that different parameters affect information availability differently. Network churn has little to no effect on information availability, but the in-network probability and the delegation chain depth can both have significant impacts. We can classify how changing each parameter affects information availability by looking at how the slopes and y-intercepts of the lines plotted above change. Increasing d increases the y-intercepts and the slopes of the lines in Fig. 2. Comparing that figure with Fig. 3, we can see that decreasing p_a decreases the y-intercept and the slope. Increasing churn, g, has little to no effect on either the y-intercepts or the slopes of the lines in Fig. 4. Finally, increasing |D| causes only a small decrease in both the y-intercepts and the slopes of the lines in Fig. 5. Therefore, we can see that while the in-network probability has the biggest negative impact on information availability, increasing the delegation chain depth limit can be a viable way to significantly increase information availability.

7 IMPLEMENTATION
In order to better understand the timing characteristics of our protocols, we have implemented the four protocols given in Section 4. Our implementation is in the Python language, and all communications take place using the Python remote object functionality provided by Pyro [30]. For our finite field for additive secret sharing, we use Z_1021. This field is more than sufficient, as we set the maximum reputation value to be 10 and our query set sizes are sufficiently small so as not to cause overflow.

The primary component of our implementation is the Agent. An agent is a party in the network. Each agent begins with some amount of reputation information on other parties in the network; this bootstraps the reputation system. While we did not do this in our experiments, an agent could start with no reputation information. Agents register with the Pyro nameserver to make their availability in the network known. They are then free to communicate with each other. For the purposes of our implementation, query and delegation sets are chosen randomly, and we can set the size of each of these sets programmatically. We use the same sizes across the whole network, though in practice they can differ. For our delegation strategy, we have simply chosen to set the redelegation set equal to the previous delegation set minus the party that is leaving. This sets a natural bound on the delegation chain at the size of the original delegation set minus two, so that there will always be at least two parties in the delegation set. In practice, we would need to be more careful in choosing the delegation set and setting the bound on the delegation chain depth accordingly, as we discussed in the previous section. For the purposes of our timing experiments, this delegation strategy will suffice.

8 EXPERIMENTATION
Using the implementation detailed in the previous section, we have run a number of experiments, collecting timing information on the four protocols given in Section 4. In this section, we plot the results and describe the meaning of the plots. For our experiments, we have set N = 50, p_a = 0.9 when delegation is used, s = 0.5, and g = 0.1 unless otherwise stated. An explanation of these symbols is given in Table 1.
Fig. 6. Average time to execute π_add with varying query set size and 95 percent confidence interval.
Fig. 7. Average time to execute π_add with varying delegation set size and 95 percent confidence interval.
Fig. 8. Average time to execute π_del with varying delegation set size and 95 percent confidence interval.
Fig. 9. Average time to execute π_re-del with varying delegation set size and 95 percent confidence interval.

Our experiments were carried out on ten Amazon EC2 instances, each with 4 GB of RAM and two virtual CPUs at 2.5 GHz.

In Fig. 6, we show the timing information for running π_add with various query set sizes and a fixed delegation set size (|D| = 6). We can see that the time to execute π_add increases as the query set size increases. This plot also reveals a lot about π_act, which is called as a subroutine of π_add when delegation is enabled and a party that appears in the query set has left the network; in that case, the parties in the delegation set act on his behalf. In the plot, we can see the effect of π_act: both the overall time to execute and the slope increase. Even with a query set size of ten, however, π_add, both with and without delegation, is very fast. In Fig. 7, we show the results of a similar experiment, but this time we varied the size of the delegation set and fixed the size of the query set at five. With no delegation, the delegation set size has no effect. Again, we see how the time to execute π_add with delegation increases as the size of the delegation set increases.

In Figs. 8 and 9, we plot the average time (with 95 percent confidence intervals) to run π_del and π_re-del, respectively, with varying delegation set sizes. The query set size has no effect on the running time of these protocols and is fixed at five for these experiments. In Fig. 8, we see that π_del is a very fast protocol whose running time increases linearly as the delegation set size increases. Fig. 9 reveals that π_re-del is the most expensive protocol in our Dyn-PDRS. With smaller delegation set sizes, however, it is still practical. In future work, we plan to study how different delegation strategies could help minimize the number of times π_re-del is run.

All of the previous plots showed the average execution time over the entire experiment. The time to execute π_del and π_re-del can vary greatly depending on how much information needs to be delegated or redelegated. Indeed, our plots show this behavior, especially so for π_re-del. To better understand how the amount of information affects the running time of these protocols, we plot the individual data points collected during an experiment in which we varied s (the fraction of information bootstrapped into the reputation system) and, upon either a delegation or redelegation, counted the number of reputation values being delegated or redelegated, respectively. For this experiment, we fixed the query set size at five and the delegation set size at six. We show the results of this experiment in Fig. 10 for π_del and Fig. 11 for π_re-del.
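The 95 percent confidence intervals shown in Figs. 6-9 can be produced from repeated timing samples. Below is a minimal sketch, assuming the common large-sample normal approximation; the authors' exact method is not specified in the text.

```python
from statistics import mean, stdev

def ci95(samples):
    """Return (mean, half-width) of an approximate 95 percent
    confidence interval using the normal z = 1.96 approximation."""
    m = mean(samples)
    half = 1.96 * stdev(samples) / len(samples) ** 0.5
    return m, half

# e.g., five hypothetical timings (seconds) for one protocol configuration
m, half = ci95([0.012, 0.013, 0.011, 0.012, 0.014])
```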
Fig. 10. Timing as the amount of information increases for π_del.
Fig. 11. Timing as the amount of information increases for π_re-del.

In each plot, we have included the linear least squares regression line. For both protocols, the execution time increases linearly as the amount of information increases, though the slope of the line is much higher for π_re-del.

9 CONCLUSION
We have proposed dynamic, privacy-preserving decentralized reputation systems. The purpose of these reputation systems is to combat a problem in existing privacy-preserving decentralized reputation systems where, when a party leaves the network, all of their reputation information leaves with them. This can be a major problem in reputation systems where reputation information is sparse. We have presented four protocols which compose to form a Dyn-PDRS and argued the correctness and security of our protocols. The basis of our protocols is additive secret sharing and the delegation/redelegation of additive secret shares. Through simulation, we found that by enabling parties to privately delegate/redelegate reputation information, we can significantly increase the amount of information available in the reputation system. We found that the network churn rate had little to no impact on information availability, which leads us to believe our protocols can be very effective, even in networks with very high churn. We explored two interesting delegation strategies for Dyn-PDRS, each with its own strengths and weaknesses.

Through experimentation, using an implementation of our protocols, we found that our protocols are very efficient. There is likely room for improvement with our redelegation protocol, though. We believe that this could be achieved with better delegation strategies (e.g., putting parties in the delegation set who are less likely to leave the network), but different delegation strategies must be balanced with security concerns. Another way we could improve the efficiency of the redelegation protocol would be to better optimize what information is delegated and redelegated. For example, we could delegate only information that is sparse in the network or information that is fresh, i.e., recently acquired. This would alleviate the need to delegate and redelegate all reputation information. We leave this to future work, however.

Reputation systems can be an important technological mechanism to enable interaction between devices that are unwilling to trust others blindly. Our work builds upon existing work in privacy-preserving decentralized reputation systems by adding a dynamic nature to the systems. We feel that being able to support churn in networks that use reputation systems is important, as many of the primary target networks (peer-to-peer and mobile ad-hoc networks) will have high rates of churn.

ACKNOWLEDGMENTS
Disclaimer: The views expressed in this article are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the U.S. Government.

REFERENCES
[1] K. Aberer and Z. Despotovic, "Managing trust in a peer-2-peer information system," in Proc. 10th Int. Conf. Inf. Knowl. Manage., 2001, pp. 310-317.
[2] A. Jøsang and R. Ismail, "The beta reputation system," in Proc. 15th Bled Electron. Commerce Conf., 2002, pp. 41-55.
[3] S. D. Kamvar, M. T. Schlosser, and H. Garcia-Molina, "The EigenTrust algorithm for reputation management in P2P networks," in Proc. 12th Int. Conf. World Wide Web, 2003, pp. 640-651.
[4] S. Buchegger and J.-Y. Le Boudec, "A robust reputation system for peer-to-peer and mobile ad-hoc networks," in Proc. 3rd Workshop Econ. Peer-to-Peer Syst., 2004, pp. 1-6.
[5] S. Ganeriwal, L. K. Balzano, and M. B. Srivastava, "Reputation-based framework for high integrity sensor networks," ACM Trans. Sensor Netw., vol. 4, no. 3, 2008, Art. no. 15.
[6] E. Pavlov, J. S. Rosenschein, and Z. Topol, "Supporting privacy in decentralized additive reputation systems," in Trust Management. Berlin, Germany: Springer, 2004, pp. 108-119.
[7] O. Hasan, L. Brunie, and E. Bertino, "Preserving privacy of feedback providers in decentralized reputation systems," Comput. Secur., vol. 31, no. 7, pp. 816-826, 2012.
[8] T. Dimitriou and A. Michalas, "Multi-party trust computation in decentralized environments," in Proc. 5th Int. Conf. New Technol. Mobility Secur., 2012, pp. 1-5.
[9] S. Dolev, N. Gilboa, and M. Kopeetsky, "Computing multi-party trust privately: In O(n) time units sending one (possibly large) message at a time," in Proc. ACM Symp. Appl. Comput., 2010, pp. 1460-1465.
[10] O. Hasan, L. Brunie, E. Bertino, and N. Shang, "A decentralized privacy preserving reputation protocol for the malicious adversarial model," IEEE Trans. Inf. Forensics Secur., vol. 8, no. 6, pp. 949-962, Jun. 2013.
Michael R. Clark received the BS degree from Brigham Young University, Provo, Utah, in 2008, the MS degree from the University of Utah, Salt Lake City, Utah, in 2010, and the PhD degree from the Air Force Institute of Technology (AFIT), Wright-Patterson AFB, Ohio, in 2015, all in computer science. His research interests include the areas of cryptography, homomorphic encryption, secure multiparty computation, and applying these technologies to design more resilient and secure systems.

Kyle Stewart received the BS degree from the University of Utah, in 2008, the MS degree in computer engineering from the Air Force Institute of Technology (AFIT), in 2010, and the PhD degree in computer engineering from AFIT, in 2015. He then spent two years as an engineer working on the operational test and evaluation of the F-35 Joint Strike Fighter. His research interests include secure computing, virtualization, cloud computing, and test and evaluation frameworks. He is a member of the ACM and a student member of the IEEE.

Kenneth M. Hopkinson received the BS degree from Rensselaer Polytechnic Institute, Troy, New York, in 1997 and the MS and PhD degrees from Cornell University, Ithaca, New York, in 2002 and 2004, respectively, all in computer science. He is a professor of computer science at the Air Force Institute of Technology (AFIT), Wright-Patterson AFB, Ohio. His research interests include the areas of simulation, networking, and distributed systems. He is a senior member of the IEEE.