Abstract—Reputation systems provide an important basis for judging whether to interact with others in a networked system. Designing
such systems is especially interesting in decentralized environments such as mobile ad-hoc networks, as there is no trusted authority to
manage reputation feedback information. Such systems also come with significant privacy concerns, further complicating the issue.
Researchers have proposed privacy-preserving decentralized reputation systems (PDRS) which ensure individual reputation
information is not leaked. Instead, aggregate information is exposed. Unfortunately, in existing PDRS, when a party leaves the network,
all of the reputation information they possess about other parties in the network leaves too. This is a significant problem when applying
such systems to the kind of dynamic networks we see in mobile computing. In this article, we introduce dynamic, privacy-preserving
reputation systems (Dyn-PDRS) to solve the problem. We enumerate the features that a reputation system must support in order to be
considered a Dyn-PDRS. Furthermore, we present protocols to enable these features and describe how our protocols are composed to
form a Dyn-PDRS. We present simulations of our ideas to understand how a Dyn-PDRS impacts information availability in the network,
and report on an implementation of our protocols, including timing experiments.
1 INTRODUCTION
Authorized licensed use limited to: University of Pittsburgh. Downloaded on December 18,2020 at 02:13:48 UTC from IEEE Xplore. Restrictions apply.
CLARK ET AL.: DYNAMIC, PRIVACY-PRESERVING DECENTRALIZED REPUTATION SYSTEMS 2507
them. In situations where reputation information is sparse, however, this can diminish the utility and security of the system. This can be seen in the literature in a number of contexts. For example, Burke, in studying (non-privacy-preserving) recommender systems, notes that researchers have turned to complex computations and analyses to deal with the sparse data problem [11]. In privacy-preserving systems, the problem is even more complex. In addition to dealing with reputation accuracy problems due to data sparsity, the security of privacy-preserving systems is often contingent on there being a sufficiently large number of parties in the query set U, some fraction of which must be honest. Let us return to our previous example using MANETs to illustrate the issue. Say that a party p_i would like to interact with a party p_k, both of which are currently active in the network (it would not make sense to try to interact with inactive parties), but, due to the dynamics of the network, there may be only one peer of p_i that has reputation information on p_k. Therefore the set U could only consist of that single party. Since the single party does not want his reputation information on p_k leaked, he would refuse to help p_i. The issue extends to situations beyond the case where there is only a single party in U. Existing PDRS protocols require that some fraction of the set U be honest. In a dynamic network where reputation information is lost when a party leaves the network, this requirement could be hard to satisfy. A well-resourced adversary that has corrupted multiple parties in the system can coordinate an attack where all of the corrupt parties are in the network at the same time. Honest parties, on the other hand, will enter and leave the network as they please. Thus the coordinated attack has the effect of increasing the fraction of corrupt to honest parties currently active in the system. We give a specific example of how an existing system can break down in the context of dynamic networks in Section 2.

To overcome the challenges just presented, we propose a new paradigm for PDRS, namely dynamic, privacy-preserving decentralized reputation systems (Dyn-PDRS). A system which follows the Dyn-PDRS paradigm enables parties to run a delegation protocol when they want to leave the network. In this protocol, they delegate their reputation information to a set, D, of other parties in the network. The delegation is done in such a way that the party's privacy is still maintained. That party is then free to leave the network. When that party appears in a query set U, the parties in D are able to act on its behalf. Furthermore, a Dyn-PDRS provides a redelegation protocol which is run when a party in D wants to leave the network. This allows the parties in D to redelegate to a new set D′. That set D′ can then act on the original party's behalf.

1.1 Adversary Model
We work in the honest-but-curious or semi-honest adversary model [12]. In this model, we assume that corrupt parties execute the protocol as specified, but use any information gleaned during execution to attempt to violate another party's privacy. In particular, this means that all parties who are active in the network will participate in the protocols when required. This is a common assumption in the multiparty computation (MPC) literature. We also assume that the underlying communications system provides guaranteed delivery. Adapting our protocols to operate in networks where infinite delay can occur between the sending and receiving of messages is possible using already established techniques [13]. In a real system, this could be enforced by giving low reputation values to parties who refuse to participate. Our protocols build upon existing work and use well-studied cryptographic primitives (e.g., secret sharing) as building blocks.

We assume that any party can be corrupted and that corrupt parties will collude with one another. If the querying party is corrupt, privacy of honest parties is ensured as long as at least two of the parties in the set U are honest. If the querying party is not corrupt, there need only be one honest party in the set U. For delegation and redelegation, privacy is maintained as long as there is at least one honest party in the corresponding sets. We describe how we achieve these security guarantees in Section 4.

1.2 Contributions
Our contributions (and the outline of the remainder of the article) are as follows. In Section 2, we describe existing PDRS from the literature and illustrate how those systems fail when parties are allowed to leave the network. In Section 3, we give a more formal definition of PDRS and Dyn-PDRS and describe our problem setting. In Section 4, we specify four protocols. The first is necessary for a PDRS and is similar to existing work in the area. The next three are necessary to build our Dyn-PDRS. We also argue the correctness and security of our protocols. In Section 5, we describe tradeoffs in different delegation strategies, or how to choose delegation and redelegation sets. We describe two methods for doing this and describe how privacy guarantees differ in each. In Section 6, we show the results of a number of simulations we have run which illustrate the benefits of a Dyn-PDRS over the traditional PDRS. In Sections 7 and 8, we describe an implementation of our protocols and timing experiments we have conducted using our implementation. Finally, we conclude in Section 9.

2 RELATED WORK

2.1 Reputation Systems
A number of protocols have been proposed to construct PDRS. We describe some of the prominent ones and comment on why the problem of operating in networks where parties are constantly leaving and rejoining the network is a concern. We focus on protocols which are secure in the honest-but-curious model as that is the model in which we work. Similar to the works described below, we focus on an additive PDRS, in which reputation information from the parties in the query set is added together privately, though our ideas for Dyn-PDRS should be adaptable to other functions such as weighted sums.

2.1.1 Pavlov et al.
One of the earliest works in privacy-preserving decentralized reputation systems comes from Pavlov et al. [6]. An important proof coming from this work is that if the querying node is corrupt, there must be at least two honest nodes or privacy cannot be achieved. This is due to the fact that the querying party gets the result of the calculation. Given the result and the inputs of the other dishonest parties, the reputation value
2508 IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 16, NO. 9, SEPTEMBER 2017
send it to another party in the network, say p_j. Now p_j can act on p_i's behalf. When p_j wants to leave the network, he could further delegate p_i's reputation information (along with his own) to another party, say p_k, by using digital signatures to create a delegation chain. Any other party in P, using the delegated information, can verify the chain to know that the responding party is authorized to do so. This example illustrates our goal: to enable parties to leave and join the network at will, yet keep their reputation information available to other parties in the network, all while still preserving the privacy of reputation values. Such a system does not currently exist. We formalize this idea with the following definition.

Definition 3 (Dynamic, Privacy-Preserving Decentralized Reputation System). An (additive) Dyn-PDRS consists of a protocol π_add as in Definition 1 and three additional protocols: π_del, π_act, and π_redel, where π_del allows a party to delegate the reputation information it holds to a set of parties D while still preserving the privacy of that information. The protocol π_act allows a set of parties who have been authorized to act on another party's behalf to enter that party's information into the protocol π_add while still preserving the party's privacy. Finally, the protocol π_redel lets a set of parties, say D, redelegate reputation information that was delegated to it to another set of parties, say D′, in a way which maintains the privacy of the information.

In the next section, we give specific instances of these protocols and show how they are composed to form a Dyn-PDRS. There is some tradeoff to be balanced in delegation. We explore delegation strategies in order to balance the tradeoff between information availability and privacy in Section 5.

4 PROTOCOLS
In this section, we present the four protocols introduced in Section 3. We begin by presenting π_add, the protocol that allows p_i to use the set U to compute v_ik = (1/|U|) Σ_{p_j ∈ U} v_jk privately. The summation is computed via a simple multiparty computation built on additive secret sharing. The concept is similar to previous work in decentralized reputation systems and has similar performance characteristics. π_add by itself could be used as the basis of a PDRS. It is important to note that in a Dyn-PDRS, since we allow for delegations, the set U may contain parties which are not currently online, as long as the party has delegated its reputation information. The set U can be generated using methods from prior work, for example, Pavlov's witness selection protocol. We do require that all parties in U have reputation information on the target, p_k. In other words, we require that v_jk ≠ ⊥ for all p_j ∈ U.

Next, we present the remaining three protocols, π_del, π_act, and π_redel, to enable privacy-preserving delegation. Together, these protocols enable a reputation system where parties can leave the network, yet delegate their reputation information in such a way that it can still be used to assist other parties in computing reputation. We abstract away many of the details of the underlying communication system and some fine details about how the protocols interact to keep our discussion here focused on the protocols themselves. In Section 7, we describe our implementation of these protocols in a real system and describe these parts in more detail. In a general sense, the protocols we present are secure for up to n − 1 corrupt parties. However, due to the specific computation, summation, n − 1 corrupt parties can learn the remaining honest party's input by subtracting their individual inputs from the output. Only the querying party should learn the output, so if the querying party is corrupt, we must assume that at least two of the parties in U are honest [6]. For simplicity we present our protocols as if the querying party is honest. In the case of a dishonest querying party, the only thing that changes is the number of corruptions tolerated, as we just described.

4.1 The PDRS Protocol
Let p_i be the querying party, who wants to compute v_ik for some party p_k. Let U be the set of witnesses with inputs to the computation. Our protocol π_add is shown in Protocol 1. We note that while not identical to previously proposed protocols for PDRS, our protocol is very similar and, taken in its own right, should have similar performance.

Protocol 1. π_add
1) p_i sends the description of U and the identity p_k to each party in U.
2) Each p_j ∈ U computes (s_1, ..., s_|U|) = S_|U|(v_jk), sends one share to each other party in U, and keeps one share for himself.
3) Each p_j collects one share from each other party in U. Let (r_1, ..., r_|U|) be the shares he collects (including his own share).
4) Each p_j then computes t_j = r_1 + r_2 + ... + r_|U| and sends t_j to p_i.
5) Party p_i receives |U| shares, t_j from each p_j ∈ U, and sets v_ik = (1/|U|) Σ_{p_j ∈ U} t_j.

Correctness. The correctness of our protocol is guaranteed due to the linear nature of additive secret sharing. Mathematically, we have that Σ_{p_j ∈ U} S(v_jk), where addition is performed point-wise on the sharing vectors, is equal to S(Σ_{p_j ∈ U} v_jk). These shares are, in essence, what the parties in U send to p_i in the next-to-last step. So,

v_ik = (1/|U|) Σ_{p_j ∈ U} t_j = (1/|U|) Σ_{p_j ∈ U} v_jk.

Security. The security of our protocol comes from the security guarantees of additive secret sharing. As long as the adversary has not corrupted all of U, all of the individual reputation values v_jk as well as the output value v_ik are kept private.

4.2 The Dyn-PDRS Protocols
Say party p_ℓ ∈ P is leaving the network. In order not to lose all the reputation information of p_ℓ, in this section we propose the necessary protocols to allow p_ℓ to delegate its reputation information to a set of parties D ⊆ P. We then show how the parties in D can act on behalf of p_ℓ whenever p_ℓ appears in a query set U. Furthermore, we present a protocol to allow the parties in D to transfer the delegation of p_ℓ's reputation information to a new set D′. This protocol is used when a party in D is leaving the network.
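Before detailing the delegation protocols, the additive share-and-sum pattern of Protocol 1, which they all build on, can be sketched in Python. This is a minimal single-process sketch, not the article's implementation: the modulus Q, the example values, and the function names are our own assumptions, and network messages are simulated by list indexing.

```python
import secrets

Q = 2**61 - 1  # public modulus for share arithmetic (an assumed parameter)

def share(value, n):
    """S_n(value): additively secret-share `value` into n shares mod Q."""
    parts = [secrets.randbelow(Q) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % Q)
    return parts

def pi_add(witness_values):
    """Protocol 1: witness p_j holds v_jk; the querier learns only the mean."""
    n = len(witness_values)
    # Step 2: each witness shares its value; share j is "sent" to party j.
    all_shares = [share(v, n) for v in witness_values]
    # Steps 3-4: party j sums the shares it collected into t_j.
    t = [sum(all_shares[i][j] for i in range(n)) % Q for j in range(n)]
    # Step 5: the querier p_i adds the t_j and divides by |U|.
    return sum(t) % Q / n

print(pi_add([4, 7, 1]))  # 4.0 -- the mean, with no individual v_jk revealed
```

Each t_j alone is uniformly random, which is why no individual v_jk leaks; only their total, and hence the average, is recoverable by the querier.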
D and D′ may be of different sizes, overlap, or be completely independent. When p_ℓ rejoins the network, the parties in D can simply discard p_ℓ's reputation information. It would be fairly simple, however, to also allow the parties in D to return the reputation information back to p_ℓ. For our purposes here, we will simply let the set D be chosen at random. Thus, (1) gives the probability that the private information is leaked due to this set. In Section 5, we explore other methods for choosing D and the redelegation sets.

Protocol 2 describes π_del. The correctness and security of this protocol come directly from the correctness and security of additive secret sharing, which we discussed in Section 3. As long as the adversary does not control all of the parties in D, p_ℓ's reputation information is kept private.

Protocol 2. π_del
1) p_ℓ chooses a set D ⊆ P.
2) For each p_j ∈ P where v_ℓj ≠ ⊥, p_ℓ computes shares_j = S_|D|(v_ℓj) and sends the identity j and one share to each party in D.
3) p_ℓ digitally signs a message signifying that it has delegated its reputation information to the set D and sends the message and signature to each party in D.

Once the parties in D have received the information sent by p_ℓ in Protocol 2 and verified the digital signature, they are ready to act on his behalf. At some later point in time they will see a query set U that contains p_ℓ when a party, say p_i, initiates Protocol 1. At this point they run π_act, shown in Protocol 3.

Protocol 3. π_act
1) The parties in D notify the parties in U that they are to act on behalf of p_ℓ by sending them the message and digital signature received from p_ℓ.
2) The parties in D select one of them to take p_ℓ's place in the set U and notify the parties in U of this choice.
3) The parties in U validate the digital signature and replace p_ℓ in the set U with the party chosen in the previous step. Call this new set U′.
4) The parties in U use the set U′ for sharing when continuing Protocol 1, with the exception of computing r_ℓ (the input shares that would have come from p_ℓ).
5) Each party in D takes its share of shares_k, say s_k, received in Protocol 2, computes (s_1, ..., s_|U′|) = S_|U′|(s_k), and sends one share to each party in U′.
6) Each party in U′ receives |D| shares from the previous step. Call these shares (s′_1, ..., s′_|D|). They then compute r_ℓ = s′_1 + ... + s′_|D|. r_ℓ takes the place of what they would have received from p_ℓ in Step 3 of Protocol 1.

At the end of Protocol 3, the parties in U′ can complete the execution of Protocol 1. Some interesting features of the protocol are that not all of D is required to participate in the execution of Protocol 1 and that the trust value v_ℓk is never revealed, either to the parties in D or to the parties in U′.

Correctness. From Protocol 2, the parties in D hold shares of v_ℓk, say shares_k = (d_1, ..., d_|D|), where v_ℓk = d_1 + ... + d_|D|. These shares are then split into subshares and distributed to the parties in U′. In other words, d_j is split into d′_j1, ..., d′_j|U′|. Notice that the sum of all the subshares over all the d_j is still v_ℓk. One subshare of each d_j is sent to each party in U′. Since addition is commutative, it turns out that the sum of all the r_ℓ shares computed in Step 6 of the protocol is still v_ℓk. Thus, the parties in U′ have additive shares of v_ℓk, as needed for the protocol to be correct.

Security. We show that Protocol 3 is secure in the sense that it does not give the adversary any additional information about v_ℓk. We show this for the worst case, when the adversary controls all parties but one in D and all parties but one in U′. Security in the case that the adversary controls fewer parties is an immediate consequence of worst-case security. Let p_h ∈ D and p′_h ∈ U′ be the honest, uncorrupted parties in each set. Note that p_h and p′_h could be the same party. In the protocol, p_h will create a number of subshares, one of which will be sent to p′_h. Since the adversary will not know that share, due to the security of additive secret sharing, the adversary will also not know the r_ℓ that p′_h computes in Step 6 of the protocol. Without that value, the r_ℓ values computed by the corrupt parties give the adversary no additional information about v_ℓk. This shows that as long as there is at least one uncorrupted party in each of D and U′, the protocol leaks no additional information about the private trust information.

If a party in D leaves, the remaining parties would not be able to act on p_ℓ's behalf. Therefore, before any party in D leaves the network, π_redel is run, as shown in Protocol 4. We let p_ℓ′ ∈ D be the party that is leaving the network. Furthermore, recall from Protocol 2 that the parties in D hold a number of pairs (j, s_j), where j is the identity of a party and s_j is a share of v_ℓj. How the set D′ is chosen will be explored in Section 5. For now, we simply assume that D′ is a new random set. Our description of π_redel focuses on the case where only one party has delegated information to the set D. The protocol can easily be adapted to the case where multiple parties have delegated to D by running it once for each party that has left the network and delegated to D.

Protocol 4. π_redel
1) p_ℓ′ sends a message to all other parties in D that it is leaving the network.
2) The parties in D select a new set D′ which will be responsible for acting on behalf of p_ℓ.
3) Each party in D creates subshares of each s_j it holds and distributes one subshare to each party in D′, along with the identity j.
4) For each j, each party in D′ receives one subshare of s_j from each party in D and stores the sum of these subshares along with j. The sum of these subshares is a new share of v_ℓj.
5) The parties in D also send the message and digital signature they received from p_ℓ to the parties in D′. They also each digitally sign a message stating that they are transferring delegation of p_ℓ's reputation information to D′.

Thus, by doing something similar to what was done in Protocol 3, i.e., creating and distributing subshares, the parties in D are able to transfer all delegated information they hold for p_ℓ to the set D′ without revealing the values. Given the results of this protocol, simple modifications can be made to Protocol 3 to allow the set U to properly validate
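The subshare-and-sum pattern common to Protocols 2, 3, and 4 can be sketched as follows. This is an illustrative single-process sketch under our own assumptions (modulus Q, function names, example sizes), not the article's implementation; digital signatures and messaging are omitted.

```python
import secrets

Q = 2**61 - 1  # public modulus for share arithmetic (an assumed parameter)

def share(value, n):
    """Additively secret-share `value` into n shares mod Q."""
    parts = [secrets.randbelow(Q) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % Q)
    return parts

def reshare(old_shares, new_size):
    """Subshare each old share and sum column-wise: the core step of
    pi_act (D -> U') and pi_redel (D -> D')."""
    subshares = [share(d, new_size) for d in old_shares]
    # Each receiving party sums the one subshare it got from every old
    # holder; by commutativity the sums are fresh shares of the same secret.
    return [sum(col) % Q for col in zip(*subshares)]

v_lk = 9                        # p_l's private reputation value for p_k
d = share(v_lk, 3)              # pi_del: p_l delegates to |D| = 3 parties
d_new = reshare(d, 4)           # pi_redel: D reshares to a new set D', |D'| = 4
assert sum(d_new) % Q == v_lk   # the delegation still encodes v_lk
r = reshare(d, 5)               # pi_act: D subshares to U' (|U'| = 5); party
assert sum(r) % Q == v_lk       # j in U' holds r_l, and the r_l sum to v_lk
```

Note that the old and new holder sets (sizes 3 and 4 here) need not match, mirroring the observation above that D and D′ may differ in size or overlap, and that no party ever reconstructs v_ℓk itself.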
Fig. 2. Information availability for various delegation depths (p_a = 0.75).
Fig. 3. Information availability for various delegation depths (p_a = 0.50).
Fig. 4. Information availability plot with various churn rates.
Fig. 5. Information availability plot with various delegation set sizes.
Fig. 6. Average time to execute π_add with varying query set size and 95 percent confidence interval.
Fig. 8. Average time to execute π_del with varying delegation set size and 95 percent confidence interval.
Table 1. Our experiments were carried out on ten Amazon EC2 instances, each with 4 GB of RAM and two virtual CPUs at 2.5 GHz.

In Fig. 6, we show the timing information for running π_add with various query set sizes and a fixed delegation set size (|D| = 6). We can see that the time to execute π_add increases as the query set size increases. This plot also reveals a lot about π_act. π_act is called as a subroutine of π_add when delegation is enabled and a party that appears in the query set has left the network, in which case the parties in the delegation set act on his behalf. In the plot, we can see the effect of π_act in both the overall time to execute and the increase in slope. Even with a query set size of ten, however, π_add, both with and without delegation, is very fast. In Fig. 7, we show the results of a similar experiment, but this time we varied the size of the delegation set and fixed the size of the query set at five. With no delegation, the delegation set size has no effect. Again, we see how the time to execute π_add with delegation increases as the size of the delegation set increases.

In Figs. 8 and 9, we plot the average time (with 95 percent confidence interval) to run π_del and π_redel, respectively, with varying delegation set sizes. The query set size has no effect on the running time of these protocols and is fixed at five for these experiments. In Fig. 8, we see that π_del is a very fast protocol and increases linearly as the delegation set size increases. Fig. 9 reveals that π_redel is the most expensive protocol in our Dyn-PDRS. With smaller delegation set sizes, however, it is still practical. In future work, we plan to study how different delegation strategies could help minimize the number of times π_redel is run.

All of the previous plots showed the average execution time over the entire experiment. The time to execute π_del and π_redel can vary greatly depending on how much information needs to be delegated or redelegated. Indeed, our plots show this behavior, especially so for π_redel. To better understand how the amount of information affects the running time of these protocols, we plot the individual data points collected during an experiment in which we varied s (the fraction of information bootstrapped into the reputation system) and, upon either a delegation or redelegation, counted the number of reputation values being delegated or redelegated, respectively. For this experiment, we fixed the query set size at five and the delegation set size at six. We show the results of this
Fig. 7. Average time to execute π_add with varying delegation set size and 95 percent confidence interval.
Fig. 9. Average time to execute π_redel with varying delegation set size and 95 percent confidence interval.
[11] R. Burke, "Hybrid recommender systems: Survey and experiments," User Model. User-Adapted Interaction, vol. 12, no. 4, pp. 331–370, 2002.
[12] O. Goldreich, The Foundations of Cryptography—Volume 2, Basic Applications. Cambridge, U.K.: Cambridge Univ. Press, 2004.
[13] I. Damgård, M. Geisler, M. Krøigaard, and J. B. Nielsen, "Asynchronous multiparty computation: Theory and implementation," in Proc. 12th Int. Conf. Practice Theory Public Key Cryptography, 2009, pp. 160–179.
[14] E. Gudes, N. Gal-Oz, and A. Grubshtein, "Methods for computing trust and reputation while preserving privacy," in Data and Applications Security XXIII. Berlin, Germany: Springer, 2009, pp. 291–298.
[15] M. Li, N. Cao, S. Yu, and W. Lou, "FindU: Privacy-preserving personal profile matching in mobile social networks," in Proc. IEEE INFOCOM, 2011, pp. 2435–2443.
[16] A. Narayanan, N. Thiagarajan, M. Lakhani, M. Hamburg, and D. Boneh, "Location privacy via private proximity testing," in Proc. Annu. Netw. Distrib. Syst. Secur. Symp., 2011, pp. 1–16.
[17] P. Bogetoft et al., "Multiparty computation goes live," Cryptology ePrint Archive, Report 2008/068, 2008. [Online]. Available: http://eprint.iacr.org/
[18] D. Bogdanov, L. Kamm, S. Laur, P. Pruulmann-Vengerfeldt, R. Talviste, and J. Willemson, "Privacy-preserving statistical data analysis on federated databases," in Privacy Technologies and Policy, B. Preneel and D. Ikonomou, Eds. Berlin, Germany: Springer, 2014, pp. 30–55. [Online]. Available: http://dx.doi.org/10.1007/978-3-319-06749-0_3
[19] A. Yao, "How to generate and exchange secrets (extended abstract)," in Proc. 27th Annu. Symp. Found. Comput. Sci., 1986, pp. 162–167.
[20] M. Ben-Or, S. Goldwasser, and A. Wigderson, "Completeness theorems for non-cryptographic fault-tolerant distributed computation (extended abstract)," in Proc. 20th Annu. ACM Symp. Theory Comput., 1988, pp. 1–10.
[21] O. Goldreich, S. Micali, and A. Wigderson, "How to play any mental game," in Proc. 19th Annu. ACM Symp. Theory Comput., 1987, pp. 218–229.
[22] B. Kreuter, A. Shelat, and C.-H. Shen, "Billion-gate secure computation with malicious adversaries," in Proc. USENIX Secur. Symp., 2012, pp. 285–300.
[23] I. Damgård, M. Keller, E. Larraia, V. Pastro, P. Scholl, and N. P. Smart, "Practical covertly secure MPC for dishonest majority—or: Breaking the SPDZ limits," Cryptology ePrint Archive, Report 2012/642, 2012. [Online]. Available: http://eprint.iacr.org/
[24] M. Clark and K. Hopkinson, "Transferable multiparty computation with applications to the smart grid," IEEE Trans. Inf. Forensics Secur., vol. 9, no. 9, pp. 1356–1366, Sep. 2014.
[25] Y. Desmedt and S. Jajodia, "Redistributing secret shares to new access structures and its applications," George Mason Univ., Fairfax, VA, USA, Tech. Rep. ISSE TR-97-01, Jul. 1997.
[26] T. M. Wong, C. Wang, and J. M. Wing, "Verifiable secret redistribution for threshold sharing schemes," Carnegie Mellon Univ., Pittsburgh, PA, USA, Tech. Rep. CMU-CS-02-114, 2002.
[27] A. Shamir, "How to share a secret," Commun. ACM, vol. 22, no. 11, pp. 612–613, 1979.
[28] I. Damgård, V. Pastro, N. Smart, and S. Zakarias, "Multiparty computation from somewhat homomorphic encryption," Cryptology ePrint Archive, Report 2011/535, 2011. [Online]. Available: http://eprint.iacr.org/
[29] J. Launchbury, I. S. Diatchki, T. DuBuisson, and A. Adams-Moran, "Efficient lookup-table protocol in secure multiparty computation," in Proc. 17th ACM SIGPLAN Int. Conf. Functional Program., 2012, pp. 189–200. [Online]. Available: http://doi.acm.org/10.1145/2364527.2364556
[30] I. de Jong, "Pyro—Python remote objects—4.26," 2014. [Online]. Available: https://pythonhosted.org/Pyro4/

Michael R. Clark received the BS degree from Brigham Young University, Provo, Utah, in 2008, the MS degree from the University of Utah, Salt Lake City, Utah, in 2010, and the PhD degree from the Air Force Institute of Technology (AFIT), Wright-Patterson AFB, Ohio, in 2015, all in computer science. His research interests include the areas of cryptography, homomorphic encryption, secure multiparty computation, and applying these technologies to design more resilient and secure systems.

Kyle Stewart received the BS degree from the University of Utah, in 2008, the MS degree in computer engineering from the Air Force Institute of Technology (AFIT), in 2010, and the PhD degree in computer engineering from AFIT, in 2015. He then spent two years as an engineer working on the operational test and evaluation of the F-35 Joint Strike Fighter. His research interests include secure computing, virtualization, cloud computing, and test and evaluation frameworks. He is a member of the ACM and a student member of the IEEE.

Kenneth M. Hopkinson received the BS degree from Rensselaer Polytechnic Institute, Troy, New York, in 1997 and the MS and PhD degrees from Cornell University, Ithaca, New York, in 2002 and 2004, respectively, all in computer science. He is a professor of computer science in the Air Force Institute of Technology (AFIT), Wright-Patterson AFB, Ohio. His research interests include the areas of simulation, networking, and distributed systems. He is a senior member of the IEEE.