Article in IEEE Transactions on Knowledge and Data Engineering · July 2013


Multiparty Access Control for Online Social
Networks: Model and Mechanisms
Hongxin Hu, Member, IEEE, Gail-Joon Ahn, Senior Member, IEEE, and Jan Jorgensen

Abstract—Online social networks (OSNs) have experienced tremendous growth in recent years and have become a de facto portal for hundreds of millions of Internet users. These OSNs offer attractive means for digital social interaction and information sharing, but also raise a number of security and privacy issues. While OSNs allow users to restrict access to shared data, they currently do not provide any mechanism to enforce privacy concerns over data associated with multiple users. To this end, we propose an approach to enable the protection of shared data associated with multiple users in OSNs. We formulate an access control model to capture the essence of multiparty authorization requirements, along with a multiparty policy specification scheme and a policy enforcement mechanism. In addition, we present a logical representation of our access control model that allows us to leverage the features of existing logic solvers to perform various analysis tasks on the model. We also discuss a proof-of-concept prototype of our approach as part of an application in Facebook, and provide a usability study and system evaluation of our method.

Index Terms—Social network, multiparty access control, security model, policy specification and management.

• H. Hu is with the School of Computing, Clemson University, Clemson, SC 29634, USA. E-mail: hongxih@clemson.edu.
• G.-J. Ahn and J. Jorgensen are with the Security Engineering for Future Computing (SEFCOM) Laboratory, Arizona State University, Tempe, AZ 85287, USA. E-mail: {gahn,jan.jorgensen}@asu.edu.

1 INTRODUCTION

Online social networks (OSNs) such as Facebook, Google+, and Twitter are inherently designed to enable people to share personal and public information and make social connections with friends, coworkers, colleagues, family and even with strangers. In recent years, we have seen unprecedented growth in the application of OSNs. For example, Facebook, one of the representative social network sites, claims that it has more than 800 million active users and over 30 billion pieces of content (web links, news stories, blog posts, notes, photo albums, etc.) shared each month [3]. To protect user data, access control has become a central feature of OSNs [2], [4].

A typical OSN provides each user with a virtual space containing profile information, a list of the user's friends, and web pages, such as the wall in Facebook, where users and friends can post content and leave messages. A user profile usually includes information with respect to the user's birthday, gender, interests, education and work history, and contact information. In addition, users can not only upload content into their own or others' spaces but also tag other users who appear in the content. Each tag is an explicit reference that links to a user's space. For the protection of user data, current OSNs indirectly require users to be system and policy administrators for regulating their data: users can restrict data sharing to a specific set of trusted users. OSNs often use user relationships and group membership to distinguish between trusted and untrusted users. For example, in Facebook, users can allow friends, friends of friends, groups or the public to access their data, depending on their personal authorization and privacy requirements.

Although OSNs currently provide simple access control mechanisms allowing users to govern access to information contained in their own spaces, users, unfortunately, have no control over data residing outside their spaces. For instance, if a user posts a comment in a friend's space, s/he cannot specify which users can view the comment. In another case, when a user uploads a photo and tags friends who appear in the photo, the tagged friends cannot restrict who can see this photo, even though they may have different privacy concerns about it. To address such a critical issue, preliminary protection mechanisms have been offered by existing OSNs. For example, Facebook allows tagged users to remove the tags linked to their profiles, or to report violations asking Facebook managers to remove the contents that they do not want to share with the public. However, these simple protection mechanisms suffer from several limitations. On one hand, removing a tag from a photo can only prevent other members from reaching a user's profile by means of the association link; the user's image is still contained in the photo. Since the original access control policies cannot be changed, the user's image continues to be revealed to all authorized users. On the other hand, reporting to OSNs only allows users to either keep or delete the content. Such a binary decision from OSN managers is either too loose or too restrictive, relying on the OSN's administration and requiring several people to report the same content. Hence, it is essential to develop an effective and flexible access control mechanism for OSNs that accommodates the special authorization requirements coming from multiple associated users for managing shared data collaboratively.

In this paper, we pursue a systematic solution to facilitate collaborative management of shared data in OSNs. We begin by examining how the lack of multiparty access
control for data sharing in OSNs can undermine the protection of user data. Some typical data sharing patterns with respect to multiparty authorization in OSNs are also identified. Based on these sharing patterns, a multiparty access control (MPAC) model is formulated to capture the core features of multiparty authorization requirements which have not been accommodated so far by existing access control systems and models for OSNs (e.g., [10], [11], [16], [17], [27]). Our model also contains a multiparty policy specification scheme. Meanwhile, since conflicts are inevitable in multiparty authorization enforcement, a voting mechanism is further provided to deal with authorization and privacy conflicts in our model.

Another compelling feature of our solution is the support of analysis on the multiparty access control model and systems. The correctness of an implementation of an access control model is based on the premise that the access control model is valid. Moreover, while the use of a multiparty access control mechanism can greatly enhance the flexibility for regulating data sharing in OSNs, it may potentially reduce the certainty of system authorization consequences, because authorization and privacy conflicts need to be resolved elegantly. Assessing the implications of access control mechanisms traditionally relies on the security analysis technique, which has been applied in several domains (e.g., operating systems [19], trust management [29], and role-based access control [6], [20]). In our approach, we additionally introduce a method to represent and reason about our model in a logic program. In addition, we provide a prototype implementation of our authorization mechanism in the context of Facebook. Our experimental results demonstrate the feasibility and usability of our approach.

The rest of the paper is organized as follows. In Section 2, we present multiparty authorization requirements and access control patterns for OSNs. We articulate our proposed MPAC model, including multiparty authorization specification and multiparty policy evaluation, in Section 3. Section 4 addresses the logical representation and analysis of multiparty access control. The details of the prototype implementation and experimental results are described in Section 5. Section 6 discusses how to tackle collusion attacks, followed by the related work in Section 7. Section 8 concludes this paper and discusses our future directions.

2 MULTIPARTY ACCESS CONTROL FOR OSNS: REQUIREMENTS AND PATTERNS

In this section we proceed with a comprehensive requirement analysis of multiparty access control in OSNs. Meanwhile, we discuss several typical sharing patterns occurring in OSNs where multiple users may have different authorization requirements for a single resource. We specifically analyze three scenarios—profile sharing, relationship sharing and content sharing—to understand the risks posed by the lack of collaborative control in OSNs. We leverage Facebook as the running example in our discussion since it is currently the most popular and representative social network provider. In the meantime, we reiterate that our discussion could be easily extended to other existing social network platforms, such as Google+ [23].

Fig. 1. Multiparty Access Control Pattern for Profile and Relationship Sharing. (a) A disseminator shares other's profile. (b) A user shares her/his relationships.

Profile sharing: An appealing feature of some OSNs is the support of social applications written by third-party developers that create additional functionalities built on top of users' profiles [1]. To provide meaningful and attractive services, these social applications consume user profile attributes, such as name, birthday, activities, interests, and so on. To make matters more complicated, social applications on current OSN platforms can also consume the profile attributes of a user's friends. In this case, users can select the particular pieces of profile attributes they are willing to share with the applications when their friends use the applications. At the same time, the users who are using the applications may also want to control what information of their friends is available to the applications, since it is possible for the applications to infer their private profile attributes through their friends' profile attributes [33]. This means that when an application accesses the profile attributes of a user's friend, both the user and her friend want to gain control over the profile attributes. If we consider the application an accessor, the user a disseminator, and the user's friend the owner of the shared profile attributes in this scenario, Figure 1(a) demonstrates a profile sharing pattern where a disseminator can share others' profile attributes with an accessor. Both the owner and the disseminator can specify access control policies to restrict the sharing of profile attributes.

Relationship sharing: Another feature of OSNs is that users can share their relationships with other members. Relationships are inherently bidirectional and carry potentially sensitive information that associated users may not want to disclose. Most OSNs provide mechanisms by which users can regulate the display of their friend lists. A user,
however, can only control one direction of a relationship. Let us consider, for example, a scenario where a user Alice specifies a policy to hide her friend list from the public. However, Bob, one of Alice's friends, specifies a weaker policy that makes his friend list visible to anyone. In this case, if OSNs solely enforce one party's policy, the relationship between Alice and Bob can still be learned through Bob's friend list. Figure 1(b) shows a relationship sharing pattern where a user called the owner, who has a relationship with another user called the stakeholder, shares the relationship with an accessor. In this scenario, authorization requirements from both the owner and the stakeholder should be considered; otherwise, the stakeholder's privacy concern may be violated.

Fig. 2. Multiparty Access Control Pattern for Content Sharing. (a) A shared content has multiple stakeholders. (b) A shared content is published by a contributor. (c) A disseminator shares other's content published by a contributor.

Content sharing: OSNs provide built-in mechanisms enabling users to communicate and share contents with other members. OSN users can post statuses and notes, upload photos and videos in their own spaces, tag others in their contents, and share the contents with their friends. On the other hand, users can also post contents in their friends' spaces. The shared contents may be connected with multiple users. Consider an example where a photograph contains three users, Alice, Bob and Carol. If Alice uploads it to her own space and tags both Bob and Carol in the photo, we call Alice the owner of the photo, and Bob and Carol stakeholders of the photo. All of them may specify access control policies to control who can see this photo. Figure 2(a) depicts a content sharing pattern where the owner of a content shares the content with other OSN members, and the content has multiple stakeholders who may also want to be involved in the control of content sharing. In another case, when Alice posts a note stating "I will attend a party on Friday night with @Carol" to Bob's space, we call Alice the contributor of the note, and she may want to control her note. In addition, since Carol is explicitly identified by @-mention (at-mention) in this note, she is considered a stakeholder of the note and may also want to control its exposure. Figure 2(b) shows a content sharing pattern reflecting this scenario, where a contributor publishes a content to other's space and the content may also have multiple stakeholders (e.g., tagged users). All associated users should be allowed to define access control policies for the shared content.

OSNs also enable users to share others' contents. For example, when Alice views a photo in Bob's space and decides to share this photo with her friends, the photo will in turn be posted in her space, and she can specify an access control policy to authorize her friends to see this photo. In this case, Alice is a disseminator of the photo. Since Alice may adopt a weaker control, saying the photo is visible to everyone, the initial access control requirements of this photo should be complied with, preventing possible leakage of sensitive information through the procedure of data dissemination. Figure 2(c) shows a content sharing pattern where the sharing starts with an originator (the owner or a contributor who uploads the content) publishing the content, and then a disseminator views and shares the content. All access control policies defined by associated users should be enforced to regulate access to the content in the disseminator's space. For a more complicated case, the disseminated content may be further re-disseminated by the disseminator's friends, where effective access control mechanisms should be applied in each procedure to regulate sharing behaviors. Especially, regardless of how many steps the content has been re-disseminated, the original access control policies should always be enforced to protect further dissemination of the content.
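The controller roles recurring in these patterns can be summarized as a small data structure. The following is a minimal illustrative sketch (the Python names are ours, not taken from the paper's prototype):

```python
from enum import Enum

class ControllerType(Enum):
    """Controller roles recurring in the sharing patterns."""
    OWNER = "ownerOf"                # the content resides in this user's space
    CONTRIBUTOR = "contributorOf"    # published the content in someone else's space
    STAKEHOLDER = "stakeholderOf"    # tagged or @-mentioned in the content
    DISSEMINATOR = "disseminatorOf"  # re-shares the content from another space

# Alice uploads a photo to her own space and tags Bob and Carol,
# so all three may specify policies for the same photo:
photo_controllers = {
    "Alice": ControllerType.OWNER,
    "Bob": ControllerType.STAKEHOLDER,
    "Carol": ControllerType.STAKEHOLDER,
}
```

A re-share by one of Carol's friends would simply add another entry with the `DISSEMINATOR` role, while the original entries (and their policies) remain attached to the photo.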
3 MULTIPARTY ACCESS CONTROL MODEL FOR OSNS

In this section, we formalize a MultiParty Access Control (MPAC) model for OSNs (Section 3.1), as well as a policy scheme (Section 3.2) and a policy evaluation mechanism (Section 3.3) for the specification and enforcement of MPAC policies in OSNs.

3.1 MPAC Model

An OSN can be represented by a relationship network, a set of user groups and a collection of user data. The relationship network of an OSN is a directed labeled graph, where each node denotes a user and each edge represents a relationship between two users. The label associated with each edge indicates the type of the relationship. Edge direction denotes that the initial node of an edge establishes the relationship and the terminal node of the edge accepts the relationship. The number and types of supported relationships depend on the specific OSN and its purposes. Besides, OSNs include an important feature that allows users to be organized in groups [40], [39] (called circles in Google+ [5]), where each group has a unique name. This feature enables users of an OSN to easily find other users with whom they might share specific interests (e.g., same hobbies), demographic groups (e.g., studying at the same schools), political orientation, and so on. Users can join groups without any approval from other group members. Furthermore, OSNs provide each member a Web space where users can store and manage their personal data, including profile information, friend list and content.

Recently, several access control schemes (e.g., [10], [11], [16], [17]) have been proposed to support fine-grained authorization specifications for OSNs. Unfortunately, these schemes only allow a single controller, the resource owner, to specify access control policies. Indeed, a flexible access control mechanism in a multi-user environment like OSNs should allow multiple controllers, who are associated with the shared data, to specify access control policies. As we identified previously in the sharing patterns (Section 2), in addition to the owner of data, other controllers, including the contributor, stakeholder and disseminator of data, need to regulate access to the shared data as well. We define these controllers as follows:

Definition 1: (Owner). Let d be a data item in the space of a user u in the social network. The user u is called the owner of d.

Definition 2: (Contributor). Let d be a data item published by a user u in someone else's space in the social network. The user u is called the contributor of d.

Definition 3: (Stakeholder). Let d be a data item in the space of a user in the social network. Let T be the set of tagged users associated with d. A user u is called a stakeholder of d, if u ∈ T.

Definition 4: (Disseminator). Let d be a data item shared by a user u from someone else's space to his/her space in the social network. The user u is called a disseminator of d.

We now formally define our MPAC model as follows:
• U = {u_1, ..., u_n} is a set of users of the OSN. Each user has a unique identifier;
• G = {g_1, ..., g_n} is a set of groups to which the users can belong. Each group also has a unique identifier;
• P = {p_1, ..., p_n} is a collection of user profile sets, where p_i = {q_i1, ..., q_im} is the profile of a user i ∈ U. Each profile entry is an <attribute: profile-value> pair, q_ij = <attr_j : pvalue_j>, where attr_j is an attribute identifier and pvalue_j is the attribute value;
• RT is a set of relationship types supported by the OSN. Each user in an OSN may be connected with others by relationships of different types;
• R = {r_1, ..., r_n} is a collection of user relationship sets, where r_i = {s_i1, ..., s_im} is the relationship list of a user i ∈ U. Each relationship entry is a <user: relationship-type> pair, s_ij = <u_j : rt_j>, where u_j ∈ U, rt_j ∈ RT;
• C = {c_1, ..., c_n} is a collection of user content sets, where c_i = {e_i1, ..., e_im} is the set of contents of a user i ∈ U, and e_ij is a content identifier;
• D = {d_1, ..., d_n} is a collection of data sets, where d_i = p_i ∪ r_i ∪ c_i is the set of data of a user i ∈ U;
• CT = {OW, CB, ST, DS} is a set of controller types, indicating ownerOf, contributorOf, stakeholderOf, and disseminatorOf, respectively;
• UU = {UU_rt1, ..., UU_rtn} is a collection of uni-directional binary user-to-user relations, where UU_rti ⊆ U × U specifies the pairs of users having relationship type rt_i ∈ RT;
• UG ⊆ U × G is a set of binary user-to-group membership relations;
• UD = {UD_ct1, ..., UD_ctn} is a collection of binary user-to-data relations, where UD_cti ⊆ U × D specifies the set of <user, data> pairs having controller type ct_i ∈ CT;
• relation_members : U × RT → 2^U, a function mapping each user u ∈ U and relationship type rt ∈ RT to the set of users with whom u has a relationship of type rt: relation_members(u : U, rt : RT) = {u′ ∈ U | (u, u′) ∈ UU_rt};
• ROR_members : U × RT → 2^U, a function mapping each user u ∈ U and relationship type rt ∈ RT to the set of users with whom u has a transitive relation of the relationship rt, denoted as relationships-of-relationships (ROR). For example, if a relationship is friend, then its transitive relation is friends-of-friends (FOF): ROR_members(u : U, rt : RT) = {u′ ∈ U | u′ ∈ relation_members(u, rt) ∨ (∃u′′ ∈ U)[u′′ ∈ ROR_members(u, rt) ∧ u′ ∈ ROR_members(u′′, rt)]};
• controllers : D × CT → 2^U, a function mapping each data item d ∈ D and controller type ct ∈ CT to the set of users who are the controllers of d with type ct: controllers(d : D, ct : CT) = {u ∈ U | (u, d) ∈ UD_ct}; and
• group_members : G → 2^U, a function mapping each
group g ∈ G to the set of users who belong to the group: group_members(g : G) = {u ∈ U | (u, g) ∈ UG}; and
• groups : U → 2^G, where groups(u : U) = {g ∈ G | (u, g) ∈ UG}.

Fig. 3. An Example of Multiparty Social Network Representation.

Figure 3 depicts an example of multiparty social network representation. It describes the relationships of five individuals, Alice (A), Bob (B), Carol (C), Dave (D) and Edward (E), along with their relations with data and their groups of interest. Note that two users may be directly connected by more than one edge labeled with different relationship types in the relationship network. For example, in Figure 3, Alice (A) has a direct relationship of type colleagueOf with Bob (B), whereas Bob (B) has a relationship of friendOf with Alice (A). In addition, two users may have a transitive relationship, such as friends-of-friends (FOF), colleagues-of-colleagues (COC) and classmates-of-classmates (LOL) in this example. Moreover, this example shows that some data items have multiple controllers. For instance, Relationship_A has two controllers: the owner, Alice (A), and a stakeholder, Carol (C). Also, some users may be the controllers of multiple data items. For example, Carol (C) is a stakeholder of Relationship_A as well as the contributor of Content_E. Furthermore, we can notice there are two groups in this example that users can participate in, the "Fashion" group and the "Hiking" group, and some users, such as Bob (B) and Dave (D), may join multiple groups.

3.2 MPAC Policy Specification

To enable collaborative authorization management of data sharing in OSNs, it is essential for multiparty access control policies to be in place to regulate access over shared data, representing authorization requirements from multiple associated users. Our policy specification scheme is built upon the proposed MPAC model.

Accessor Specification: Accessors are a set of users who are granted access to the shared data. Accessors can be represented with a set of user names, a set of relationship names or a set of group names in OSNs. We formally define the accessor specification as follows:

Definition 5: (Accessor Specification). Let ac ∈ U ∪ RT ∪ G be a user u ∈ U, a relationship type rt ∈ RT, or a group g ∈ G. Let at ∈ {UN, RN, GN} be the type of the accessor specification (user name, relationship type, and group name, respectively). The accessor specification is defined as a set, accessors = {a_1, ..., a_n}, where each element is a tuple <ac, at>.

Data Specification: In OSNs, user data is composed of three types of information: user profile, user relationship and user content. To facilitate effective privacy conflict resolution for multiparty access control, we introduce sensitivity levels for data specification, which are assigned by the controllers to the shared data items. A user's judgment of the sensitivity level of the data is not binary (private/public), but multi-dimensional with varying degrees of sensitivity. Formally, the data specification is defined as follows:

Definition 6: (Data Specification). Let dt ∈ D be a data item. Let sl be a sensitivity level, which is a rational number in the range [0,1], assigned to dt. The data specification is defined as a tuple <dt, sl>.

Access Control Policy: To summarize the above-mentioned policy elements, we introduce the definition of a multiparty access control policy as follows:

Definition 7: (MPAC Policy). A multiparty access control policy is a 5-tuple P = <controller, ctype, accessor, data, effect>, where
• controller ∈ U is a user who can regulate the access of data;
• ctype ∈ CT is the type of the controller;
• accessor is a set of users to whom the authorization is granted, represented with an accessor specification defined in Definition 5;
• data is represented with a data specification defined in Definition 6; and
• effect ∈ {permit, deny} is the authorization effect of the policy.

Suppose a controller can leverage five sensitivity levels: 0.00 (none), 0.25 (low), 0.50 (medium), 0.75 (high), and 1.00 (highest) for the shared data. We illustrate several examples of MPAC policies for OSNs as follows. The MPAC policies
(1) "Alice authorizes her friends to view her status identified by status01 with a medium sensitivity level, where Alice is the owner of the status."
(2) "Bob authorizes users who are his colleagues or in the hiking group to view a photo, summer.jpg, that he is tagged in with a high sensitivity level, where Bob is a stakeholder of the photo."
(3) "Carol disallows Dave and Edward to watch a video, play.avi, that she uploads to someone else's space with a highest sensitivity level, where Carol is the contributor of the video."
are expressed as:
(1) p1 = (Alice, OW, {<friendOf, RN>}, <status01, 0.50>, permit)
(2) p2 = (Bob, ST, {<colleagueOf, RN>, <hiking, GN>}, <summer.jpg, 0.75>, permit)
(3) p3 = (Carol, CB, {<Dave, UN>, <Edward, UN>}, <play.avi, 1.00>, deny)
3.3 Multiparty Policy Evaluation

Two steps are performed to evaluate an access request over multiparty access control policies. The first step checks the access request against the policy specified by each controller and yields a decision for the controller. The accessor element in a policy decides whether the policy is applicable to a request. If the user who sends the request belongs to the user set derived from the accessor of a policy, the policy is applicable and the evaluation process returns a response with the decision (either permit or deny) indicated by the effect element in the policy; otherwise, the response yields a deny decision. In the second step, the decisions from all controllers responding to the access request are aggregated to make a final decision for the access request. Figure 4 illustrates the evaluation process of multiparty access control policies.

Fig. 4. Multiparty Policy Evaluation Process.

Since data controllers may generate different decisions (permit and deny) for an access request, conflicts may occur. In order to make an unambiguous decision for each access request, it is essential to adopt a systematic conflict resolution mechanism to resolve those conflicts during multiparty policy evaluation.

The essential reason leading to the conflicts—especially privacy conflicts—is that the multiple controllers of a shared data item often have different privacy concerns over the data item. For example, assume that Alice and Bob are two controllers of a photo, and each of them defines an access control policy stating that only her/his friends can view this photo. Since it is almost impossible for Alice and Bob to have the same set of friends, privacy conflicts may always exist when considering multiparty control over the shared data item.

A naive solution for resolving multiparty privacy conflicts is to only allow the common users of the accessor sets defined by the multiple controllers to access the data item. Unfortunately, this strategy is too restrictive in many cases and may not produce desirable results. Consider an example where four users, Alice, Bob, Carol and Dave, are the controllers of a photo, and each of them allows her/his friends to see the photo. Suppose that Alice, Bob and Carol are close friends and have many common friends, but Dave has no common friends with them and also has a pretty weak privacy concern about the photo. In this case, adopting the naive solution for conflict resolution may mean that no one can access this photo. However, it is reasonable to give the view permission to the common friends of Alice, Bob and Carol.

A strong conflict resolution strategy may provide better privacy protection, but meanwhile it may reduce the social value of data sharing in OSNs. Therefore, it is important to consider the tradeoff between privacy and utility when resolving privacy conflicts. To address this issue, we introduce a simple but flexible voting scheme for resolving multiparty privacy conflicts in OSNs.

3.3.1 A voting scheme for decision making of multiparty control

Majority voting is a popular mechanism for decision making [28]. Inspired by such a decision making mechanism, we propose a voting scheme to achieve effective multiparty conflict resolution for OSNs. A notable feature of the voting mechanism for conflict resolution is that the decision from each controller is able to have an effect on the final decision. Our voting scheme contains two voting mechanisms, decision voting and sensitivity voting.

Decision Voting: A decision voting value ($DV$) derived from the policy evaluation is defined as follows, where $Evaluation(p)$ returns the decision of a policy $p$:

$DV = \begin{cases} 0 & \text{if } Evaluation(p) = Deny \\ 1 & \text{if } Evaluation(p) = Permit \end{cases}$  (1)

Assuming that all controllers are equally important, an aggregated decision value ($DV_{ag}$), with a range of 0.00 to 1.00, from multiple controllers including the owner ($DV_{ow}$), the contributor ($DV_{cb}$) and the stakeholders ($DV_{st}$), is computed with the following equation:

$DV_{ag} = \left(DV_{ow} + DV_{cb} + \sum_{i \in SS} DV_{st}^{i}\right) \times \frac{1}{m}$  (2)

where $SS$ is the stakeholder set of the shared data item, and $m$ is the number of controllers of the shared data item.

Each controller of the shared data item may have (i) a different trust level over the data owner and (ii) a different reputation value in terms of collaborative control. Thus, a generalized decision voting scheme needs to introduce weights on the different controllers, which can be calculated by aggregating trust levels and reputation values [18]. Different weights of controllers are essentially represented by different importance degrees on the aggregated decision. In general, the importance degree of controller $x$ is $\omega_{x}$ divided by the sum of all weights. Suppose that $\omega_{ow}$, $\omega_{cb}$ and $\omega_{st}^{i}$ are the weight values for the owner, the contributor and the stakeholders, respectively, and $n$ is the number of stakeholders of the shared data item. A weighted decision voting scheme is as follows:

$DV_{ag} = \left(\omega_{ow} \times DV_{ow} + \omega_{cb} \times DV_{cb} + \sum_{i=1}^{n} \left(\omega_{st}^{i} \times DV_{st}^{i}\right)\right) \times \frac{1}{\omega_{ow} + \omega_{cb} + \sum_{i=1}^{n} \omega_{st}^{i}}$  (3)

Sensitivity Voting: Each controller assigns a sensitivity level ($SL$) to the shared data item to reflect her/his privacy
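The decision voting computations of equations (1)–(3) can be sketched as follows. This is a minimal illustration under the assumption that the item has an owner, a contributor, and a list of stakeholders; the function names are ours:

```python
def decision_vote(effect: str) -> int:
    """Eq. (1): map a controller's policy decision to a voting value."""
    return 1 if effect == "permit" else 0

def aggregated_decision(dv_ow: int, dv_cb: int, dv_st: list) -> float:
    """Eq. (2): equal-weight aggregation over the m controllers
    (owner + contributor + stakeholders)."""
    m = 2 + len(dv_st)
    return (dv_ow + dv_cb + sum(dv_st)) / m

def weighted_decision(dv_ow, dv_cb, dv_st, w_ow, w_cb, w_st) -> float:
    """Eq. (3): weights derived from trust levels and reputation values."""
    num = w_ow * dv_ow + w_cb * dv_cb + sum(w * dv for w, dv in zip(w_st, dv_st))
    return num / (w_ow + w_cb + sum(w_st))

# Owner and one stakeholder permit, the contributor denies:
dv_ag = aggregated_decision(1, 0, [1])  # 2/3, i.e. two of three controllers permit
```

Setting `w_ow = 1` and all other weights to 0 in `weighted_decision` reduces equation (3) to pure owner control, which is what most current OSNs enforce.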
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING 7

A sensitivity score (SC) (in the range from 0.00 to 1.00) for the data item can then be calculated based on the following equation:

$$SC = \left( SL_{ow} + SL_{cb} + \sum_{i \in SS} SL_{st}^{i} \right) \times \frac{1}{m} \qquad (4)$$

Note that we can also use a generalized sensitivity voting scheme like equation (3) to compute the sensitivity score (SC).

3.3.2 Threshold-based conflict resolution

A basic idea of our approach to threshold-based conflict resolution is that the sensitivity score (SC) can be utilized as a threshold for decision making. Intuitively, if the sensitivity score is higher, the final decision has a high chance of denying access, taking into account the privacy protection of highly sensitive data. Otherwise, the final decision is very likely to allow access, so that the utility of OSN services is not affected. The final decision is made automatically by OSN systems with this threshold-based conflict resolution as follows:

$$Decision = \begin{cases} Permit & \text{if } DV_{ag} > SC \\ Deny & \text{if } DV_{ag} \leq SC \end{cases} \qquad (5)$$

It is worth noticing that our conflict resolution approach has an adaptive feature which reflects changes of policies and sensitivity levels. If any controller changes her/his policy or sensitivity level for the shared data item, the aggregated decision value ($DV_{ag}$) and the sensitivity score (SC) will be recomputed, and the final decision may change accordingly.

3.3.3 Strategy-based conflict resolution with privacy recommendation

The above threshold-based conflict resolution provides a quite fair mechanism for making the final decision when we treat all controllers as equally important. However, in practice, different controllers may have different priorities in influencing the final decision. In particular, the owner of a data item may desire to possess the highest priority in the control of the shared data item. Thus, we further provide a strategy-based conflict resolution mechanism to fulfill specific authorization requirements from the owners of shared data.

In this conflict resolution, the sensitivity score (SC) of a data item is considered as a guideline for the owner of the shared data item in selecting an appropriate strategy for conflict resolution. We introduce the following strategies for the purpose of resolving multiparty privacy conflicts in OSNs.

• Owner-overrides: the owner's decision has the highest priority. This strategy achieves the owner-control mechanism that most OSNs currently utilize for data sharing. Based on the weighted decision voting scheme, we set $\omega_{ow} = 1$, $\omega_{cb} = 0$ and $\omega_{st} = 0$,¹ and the final decision can be made as follows:

$$Decision = \begin{cases} Permit & \text{if } DV_{ag} = 1 \\ Deny & \text{if } DV_{ag} = 0 \end{cases} \qquad (6)$$

• Full-consensus-permit: if any controller denies the access, the final decision is deny. This strategy can achieve the naive conflict resolution that we discussed previously. The final decision can be derived as:

$$Decision = \begin{cases} Permit & \text{if } DV_{ag} = 1 \\ Deny & \text{otherwise} \end{cases} \qquad (7)$$

• Majority-permit: this strategy permits (denies, resp.) a request if the number of controllers who permit (deny, resp.) the request is greater than the number of controllers who deny (permit, resp.) it. The final decision can be made as:

$$Decision = \begin{cases} Permit & \text{if } DV_{ag} \geq 1/2 \\ Deny & \text{if } DV_{ag} < 1/2 \end{cases} \qquad (8)$$

Other majority voting strategies [30], such as strong-majority-permit (permit a request if over 2/3 of the controllers permit it) and super-majority-permit (permit a request if over 3/4 of the controllers permit it), can be easily supported by our voting scheme.

3.3.4 Conflict resolution for dissemination control

A user can share others' contents with her/his friends in OSNs. In this case, the user is a disseminator of the content, and the content will be posted in the disseminator's space and be visible to her/his friends or the public. Since a disseminator may adopt a weaker control over the disseminated content while the content may be much more sensitive from the perspective of its original controllers, the privacy concerns of the original controllers should always be fulfilled, preventing inadvertent disclosure of sensitive contents. In other words, the original access control policies should always be enforced to restrict access to the disseminated content. Thus, the final decision for an access request to the disseminated content is a composition of the decisions aggregated from the original controllers and the decision of the current disseminator. In order to eliminate the risk of possible leakage of sensitive information during data dissemination, we leverage a restrictive conflict resolution strategy, Deny-overrides, to resolve conflicts between the original controllers' decision and the disseminator's decision. In such a context, if either of those decisions is deny, the final decision is deny; otherwise, if both of them are permit, the final decision is permit.

4 LOGICAL REPRESENTATION AND ANALYSIS OF MULTIPARTY ACCESS CONTROL

In this section, we adopt answer set programming (ASP), a recent form of declarative programming [31], to formally represent our model, which essentially provides a formal foundation of our model in terms of an ASP-based representation. Then, we demonstrate how the correctness analysis and authorization analysis [7] of multiparty access control can be carried out based on such a logical representation.

1. In a real system implementation, those weights need to be adjusted by the system.
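As a hedged illustration of equations (4)–(8) above, the following Python sketch computes the sensitivity score and applies the threshold-based resolution plus two of the strategy-based resolutions; all names are ours, and DV_ag is assumed to come from the weighted voting scheme of equation (3):

```python
# Conflict resolution on top of the aggregated decision value DV_ag:
# the sensitivity score SC (Eq. 4) used as a threshold (Eq. 5), and the
# Full-consensus-permit (Eq. 7) and Majority-permit (Eq. 8) strategies.
# Names are illustrative; sensitivity levels lie in [0.00, 1.00].

def sensitivity_score(levels):
    """SC (Eq. 4): the sensitivity levels of all m controllers, averaged."""
    return sum(levels) / len(levels)

def threshold_decision(dv_ag, sc):
    """Eq. 5: permit only when DV_ag strictly exceeds the threshold SC."""
    return "permit" if dv_ag > sc else "deny"

def full_consensus_permit(dv_ag):
    """Eq. 7: permit only if every controller permitted (DV_ag == 1)."""
    return "permit" if dv_ag == 1 else "deny"

def majority_permit(dv_ag):
    """Eq. 8: permit when at least half of the (equally weighted)
    controllers permitted (DV_ag >= 1/2)."""
    return "permit" if dv_ag >= 0.5 else "deny"

sc = sensitivity_score([0.8, 0.2, 0.4, 0.2])  # -> 0.4
print(threshold_decision(0.75, sc))   # permit: 0.75 > 0.4
print(full_consensus_permit(0.75))    # deny: not unanimous
print(majority_permit(0.75))          # permit
```

Note how the adaptive feature described above falls out for free: re-running these functions after any controller edits a vote or a sensitivity level yields the updated final decision.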
4.1 Representing Multiparty Access Control in ASP

We introduce a translation module that turns a multiparty authorization specification into an ASP program. This interprets the formal semantics of a multiparty authorization specification in terms of the Answer Set semantics.²

4.1.1 Logical definition of multiple controllers and transitive relationships

The basic components and relations in our MPAC model can be directly defined with corresponding predicates in ASP. We have defined UD_ct as a set of user-to-data relations with controller type ct ∈ CT. Then, the logical definition of multiple controllers is as follows:

• The owner of a data item can be represented as:
  OW(controller, data) ← UD_OW(controller, data) ∧ U(controller) ∧ D(data).

• The contributor of a data item can be represented as:
  CB(controller, data) ← UD_CB(controller, data) ∧ U(controller) ∧ D(data).

• The stakeholder of a data item can be represented as:
  ST(controller, data) ← UD_ST(controller, data) ∧ U(controller) ∧ D(data).

• The disseminator of a data item can be represented as:
  DS(controller, data) ← UD_DS(controller, data) ∧ U(controller) ∧ D(data).

Our MPAC model supports transitive relationships. For example, suppose David is a friend of Alice, and Edward is a friend of David in a social network. Then, we say Edward is a friend of friends of Alice. The friend relation between the two users Alice and David is represented in ASP as follows:

friendOf(Alice, David).

It is known that transitive closure (e.g., reachability) cannot be expressed in first-order logic [32]; however, it can be easily handled in the stable model semantics. Friends-of-friends can then be represented as the transitive closure of the friend relation in ASP as follows:

friendsOFfriends(U1, U2) ← friendOf(U1, U2).
friendsOFfriends(U1, U3) ← friendsOFfriends(U1, U2), friendsOFfriends(U2, U3).

4.1.2 Logical policy specification

The translation module converts a multiparty authorization policy containing an accessor specification of type at = RN,

(controller, ctype, {< ac1, RN >, . . . , < acn, RN >}, < dt, sl >, effect),

into an ASP rule

decision(controller, effect) ← ∨_{1≤k≤n} ac_k(controller, X) ∧ ctype(controller, data) ∧ U(controller) ∧ U(X) ∧ D(dt).

For instance, p1 in Section 3.2 is translated into an ASP rule as follows:

decision(Alice, permit) ← friendOf(Alice, X) ∧ OW(Alice, photoId) ∧ U(Alice) ∧ U(X) ∧ status(statusId).

The translation module converts a multiparty authorization policy with an accessor specification of type at = UN,

(controller, ctype, {< ac1, UN >, . . . , < acn, UN >}, < dt, sl >, effect),

into a set of ASP rules

decision(controller, effect) ← ∨_{1≤k≤n} U(ac_k) ∧ ctype(controller, data) ∧ U(controller) ∧ D(dt).

For example, p3 in Section 3.2 can be translated into the following ASP rule:

decision(Carol, deny) ← (U(Dave) ∨ U(Edward)) ∧ CB(Carol, videoId) ∧ U(Carol) ∧ video(videoId).

The translation module converts a multiparty authorization policy with an accessor specification of type at = GN,

(controller, ctype, {< ac1, GN >, . . . , < acn, GN >}, < dt, sl >, effect),

into an ASP rule

decision(controller, effect) ← ∨_{1≤k≤n} UG(ac_k, X) ∧ ctype(controller, data) ∧ U(controller) ∧ U(X) ∧ D(dt).

For example, p2 in Section 3.2 specifies the accessor with both a relationship name (RN) and a group name (GN). Thus, this policy can be translated into an ASP rule as:

decision(Bob, permit) ← (colleagueOf(Bob, X) ∨ UG(hiking, X)) ∧ ST(Bob, photoId) ∧ U(Bob) ∧ U(X) ∧ photo(photoId).

4.1.3 Logical representation of the conflict resolution mechanism

Our voting schemes are represented in ASP rules as follows:

decision_voting(C) = 1 ← decision(C, permit).
decision_voting(C) = 0 ← decision(C, deny).
aggregation_weight(K) ← K = sum{weight(C) : controller(C)}.
aggregation_decision(N) ← N = sum{decision_voting(C) × weight(C) : controller(C)}.
aggregation_sensitivity(M) ← M = sum{sensitivity_voting(C) × weight(C) : controller(C)}.

Our threshold-based conflict resolution mechanism is represented as:

decision(controllers, permit) ← N > M ∧ aggregation_decision(N) ∧ aggregation_sensitivity(M).
decision(controllers, deny) ← not decision(controllers, permit).

Our strategy-based conflict resolution mechanism is represented with ASP as well.

2. We identify "∧" with ",", "←" with ":-", and a policy of the form A ← B, C ∨ D as a set of the two policies A ← B, C and A ← B, D.
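The friendsOFfriends rules in Section 4.1.1 are evaluated by an ASP solver as a least fixpoint under the stable model semantics; the same closure can be mirrored imperatively. A minimal Python sketch, with function names of our own choosing:

```python
# Fixpoint computation of the transitive closure of friendOf, mirroring
# the two friendsOFfriends ASP rules. Names are illustrative.

def friends_of_friends(friend_of):
    """friend_of: a set of (u1, u2) pairs. Returns the transitive closure."""
    closure = set(friend_of)
    while True:
        # Join the relation with itself: (a, b) and (b, d) imply (a, d).
        extra = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if extra <= closure:          # fixpoint reached: nothing new derived
            return closure
        closure |= extra

facts = {("Alice", "David"), ("David", "Edward")}
print(("Alice", "Edward") in friends_of_friends(facts))  # True
```

This is exactly the reachability computation that, as noted above, first-order logic alone cannot express but the stable model semantics handles naturally.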
• The conflict resolution strategy Owner-overrides is represented in ASP rules as follows:

weight(controllers) = 1 ← OW(controller, data).
weight(controllers) = 0 ← CB(controller, data).
weight(controllers) = 0 ← ST(controller, data).
decision(controllers, permit) ← N/K == 1 ∧ aggregation_weight(K) ∧ aggregation_decision(N).
decision(controllers, deny) ← not decision(controllers, permit).

• The conflict resolution strategy Full-consensus-permit is represented as:

decision(controllers, permit) ← N/K == 1 ∧ aggregation_weight(K) ∧ aggregation_decision(N).
decision(controllers, deny) ← not decision(controllers, permit).

• The conflict resolution strategy Majority-permit is represented with the following ASP rules:

decision(controllers, permit) ← N/K > 1/2 ∧ aggregation_weight(K) ∧ aggregation_decision(N).
decision(controllers, deny) ← not decision(controllers, permit).

• The conflict resolution strategy Deny-overrides for dissemination control is represented as follows:

decision(deny) ← decision(controllers, deny).
decision(deny) ← decision(disseminator, deny).
decision(permit) ← not decision(deny).

4.2 Reasoning about Multiparty Access Control

One of the promising advantages of a logic-based representation of access control is that formal reasoning about authorization properties can be achieved. Once we represent our multiparty access control model as an ASP program, we can use off-the-shelf ASP solvers to carry out several automated analysis tasks. The problem of verifying an authorization property against our model description can be cast into the problem of checking whether the program

Π ∪ Π_query ∪ Π_config

has no answer sets, where Π is the program corresponding to the model specification, Π_query is the program that encodes the negation of the property to check, and Π_config is the program that can generate arbitrary configurations, for example:

user_attributes(alice, bob, carol, dave, edward).
group_attributes(fashion, hiking).
photo_attributes(photoid).
1{user(X) : user_attributes(X)}.
1{group(X) : group_attributes(X)}.
1{photo(X) : photo_attributes(X)}.

Correctness Analysis is used to prove whether our proposed access control model is valid.

Example 1: If we want to check whether the conflict resolution strategy Full-consensus-permit is correctly defined based on the voting scheme in our model, the input query Π_query can be represented as follows:

decision(controllers, permit) ← ∨_{C⊆CS} decision(C, deny).

If no answer set is found, this implies that the authorization property is verified. Otherwise, an answer set returned by an ASP solver serves as a counterexample that indicates why the description does not entail the authorization property. This helps determine the problems in the model definition.

Authorization Analysis is employed to examine over-sharing (does the current authorization state disclose the data to some undesirable users?) and under-sharing (does the current authorization state prevent some users from accessing data that they are supposed to be allowed to see?). This analysis service should be incorporated into OSN systems to enable users to check the potential authorization impacts derived from collaborative control of shared data.

Example 2: (Checking over-sharing) Alice has defined a policy to disallow her family members to see a photo. Then, she wants to check if any family members can see this photo after applying the conflict resolution mechanism for collaborative authorization management, considering the different privacy preferences of multiple controllers. The input query Π_query can be represented as follows:

check :- decision(permit), familyof(alice, x),
         ow(alice, photoid), user(alice),
         user(x), photo(photoid).
:- not check.

If any answer set is found, it means that there are family members who can see the photo. Thus, from the perspective of Alice's authorization, this photo is over-shared to some users. A possible solution is to adopt a more restrictive conflict resolution strategy or to increase the weight value of Alice.

Example 3: (Checking under-sharing) Bob has defined a policy to authorize his friends to see a photo. He wants to check if any friends cannot see this photo in the current system. The input query Π_query can be specified as follows:

check :- decision(deny), friendof(bob, x),
         ow(alice, photoid), user(bob),
         user(x), photo(photoid).
:- not check.

If an answer set contains check, this means that there are friends who cannot view the photo. Regarding Bob's authorization requirement, this photo is under-shared with his friends.

5 IMPLEMENTATION AND EVALUATION

5.1 Prototype Implementation

We implemented a proof-of-concept Facebook application for the collaborative management of shared data, called MController (http://apps.facebook.com/MController). Our prototype application enables multiple associated users to specify their authorization policies and privacy preferences to co-control a shared data item. It is worth noting that our current implementation was restricted to handling photo sharing in OSNs.
Obviously, our approach can be generalized to deal with other kinds of data sharing, such as videos and comments, in OSNs as long as the stakeholders of shared data are identified with effective methods like tagging or searching.

Figure 5 shows the architecture of MController, which is divided into two major pieces, the Facebook server and the application server. The Facebook server provides an entry point via the Facebook application page, and provides references to photos, friendships, and feed data through API calls. The Facebook server accepts inputs from users, then forwards them to the application server. The application server is responsible for the input processing and collaborative management of shared data. Information related to user data, such as user identifiers, friend lists, user groups, and user contents, is stored in the application database. Users can access the MController application through Facebook, which serves the application in an iFrame. When access requests are made to the decision making portion in the application server, results are returned in the form of access to photos or of proper information about access to photos. In addition, when privacy changes are made, the decision making portion returns change-impact information to the interface to alert the user. Moreover, analysis services in the MController application are provided by implementing an ASP translator, which communicates with an ASP reasoner. Users can leverage the analysis services to perform complicated authorization queries.

Fig. 5. Overall Architecture of MController Application.

MController is developed as a third-party Facebook application, which is hosted in an Apache Tomcat application server supporting PHP and a MySQL database. The MController application is based on the iFrame external application approach. Using the Javascript and PHP SDKs, it accesses users' Facebook data through the Graph API and Facebook Query Language. Once a user installs MController in her/his Facebook space and accepts the necessary permissions, MController can access a user's basic information and contents. In particular, MController can retrieve and list all photos which are owned or uploaded by the user, or in which the user was tagged. Once information is imported, the user accesses MController through its application page on Facebook, where s/he can query access information, set privacy for photos for which s/he is a controller, or view photos s/he is allowed to access.

A core component of MController is the decision making module, which processes access requests and returns responses (either permit or deny) for the requests. Figure 6 depicts a system architecture of the decision making module in MController. To evaluate an access request, the policies of each controller of the targeted content are enforced first to generate a decision for that controller. Then, the decisions of all controllers are aggregated to yield a final decision as the response to the request. Multiparty privacy conflicts are resolved based on the configured conflict resolution mechanism when aggregating the decisions of controllers. If the owner of the content chooses automatic conflict resolution, the aggregated sensitivity value is utilized as a threshold for decision making. Otherwise, multiparty privacy conflicts are resolved by applying the strategy selected by the owner, and the aggregated sensitivity score is considered as a recommendation for strategy selection. Regarding access requests to disseminated content, the final decision is made by combining the disseminator's decision and the original controllers' decision, adopting the corresponding combination strategy discussed previously.

Fig. 6. System Architecture of Decision Making in MController.

A snapshot of the main interface of MController is shown in Figure 7 (a). All photos are loaded into a gallery-style interface. To control photo sharing, the user clicks the "Owned", "Tagged", "Contributed", or "Disseminated" tabs, then selects any photo to define her/his privacy preference by clicking the lock below the gallery. If the user is not the owner of the selected photo, s/he can only edit the privacy setting and sensitivity setting of the photo.³ Otherwise, as shown in Figure 7 (c), if the user is the owner of the photo, s/he has the option of clicking "Show Advanced Controls" to assign weight values to the different types of controllers⁴ and configure the conflict resolution mechanism for the shared photo. By default, the conflict resolution is set to automatic. However, if the owner chooses to set a manual conflict resolution, s/he is informed of the sensitivity score of the shared photo and receives a recommendation for choosing an appropriate conflict resolution strategy. Once a controller saves her/his privacy setting, corresponding feedback is provided to indicate the potential authorization impact of her/his choice.

3. Note that users are able to predefine their privacy preferences, which are then applied by default to photos they need to control.
4. In our future work, we will explore how each controller can be assigned a weight. Also, an automatic mechanism to determine the weights of controllers will be studied, considering the trust and reputation levels of controllers for the collaborative control.
Fig. 7. MController Interface.
The controller can immediately determine how many users can see the photo but should be denied, and how many users cannot see the photo but should be allowed. MController can also display the details of all users who violate the controller's privacy setting (see Figure 7 (d)). The purpose of such feedback information is to guide the controller in evaluating the impact of collaborative authorization. If the controller is not satisfied with the current privacy control, s/he may adjust her/his privacy setting, contact the owner of the photo to ask her/him to change the conflict resolution strategy, or even report a privacy violation to OSN administrators, who can delete the photo. A controller can also perform authorization analysis with advanced queries, as shown in Figure 7 (b). Both over-sharing and under-sharing can be examined by using such an analysis service in MController.

5.2 System Usability and Performance Evaluation

5.2.1 Participants and Procedure

MController is a functional proof-of-concept implementation of collaborative privacy management. To measure the practicality and usability of our mechanism, we conducted a survey study (n=35) to explore the factors surrounding users' desires for privacy and discover how we might improve those implemented in MController. Specifically, we were interested in users' perspectives on the current Facebook privacy system and their desires for more control over photos they do not own. We recruited participants through university mailing lists and through Facebook itself using Facebook's built-in sharing API. Users were given the opportunity to share our application and play with their friends. While this is not a random sampling, recruiting using the natural dissemination features of Facebook arguably gives an accurate profile of the ecosystem.

Participants were first asked to answer some questions about their usage and perception of Facebook's privacy controls, then were invited to watch a video (http://bit.ly/MController) describing the concept behind MController. Users were then instructed to install the application using their Facebook profiles and complete the following actions: set privacy settings for a photo they do not own but are tagged in, set privacy settings for a photo they own, set privacy settings for a photo they contributed, and set privacy settings for a photo they disseminated. As users completed these actions, they answered questions on the usability of the controls in MController. Afterward, they were asked to answer further questions and compare their experience with MController to that in Facebook.

5.2.2 User Study of MController

For evaluation purposes, questions (http://goo.gl/eDkaV) were split into three areas: likeability, simplicity, and control. Likeability is a measure of a user's satisfaction with a system (e.g., "I like the idea of being able to control photos in which I am tagged"). Simplicity is a measure of how intuitive and useful the system is (e.g., "Setting my privacy settings for a photo in MController is Complicated (1) to Simple (5)" with a 5-point scale). Control is a measure of the user's perceived control of their personal data (e.g., "If
TABLE 1
Usability Evaluation for Facebook and MController Privacy Controls.

             |           Facebook            |          MController
Metric       | Average | Upper bound on 95%  | Average | Lower bound on 95%
             |         | confidence interval |         | confidence interval
Likability   |  0.20   |        0.25         |  0.83   |        0.80
Simplicity   |  0.38   |        0.44         |  0.72   |        0.64
Control      |  0.20   |        0.25         |  0.83   |        0.80

Facebook implemented controls like MController's to control photo privacy, my photos would be better protected"). Questions were either True/False or measured on a 5-point Likert scale, and all responses were scaled from 0 to 1 for numerical analysis. In this measurement, a higher number indicates a positive perception or opinion of the system, while a lower number indicates a negative one. To analyze the average user perception of the system, we used a 95% confidence interval for the users' answers. This assumes the population to be mostly normal.

Before Using MController. Prior to using MController, users were asked a few questions about their usage of Facebook to determine the users' perceived usability of the current Facebook privacy controls. Since we were interested in the maximum average perception of Facebook, we looked at the upper bound of the confidence interval. An average user asserts at most 25% positively about the likability and control of Facebook's privacy management mechanism, and at most 44% about Facebook's simplicity, as shown in Table 1. This demonstrates an average negative opinion of the Facebook privacy controls that users currently must use.

After Using MController. Users were then asked to perform a few tasks in MController. Since we were interested in the average minimum opinion of MController, we looked at the lower bound of the confidence interval. An average user asserts at least 80% positively about the likability and control, and at least 67% positively about MController's simplicity, as shown in Table 1. This demonstrates an average positive opinion of the controls and ideas presented to users in MController.

5.2.3 Performance Evaluation

To evaluate the performance of the policy evaluation mechanism in MController, we varied the number of controllers of a shared photo from 1 to 20, and assigned each controller the average number of friends, 130, as claimed by Facebook statistics [3]. Also, we considered two cases for our evaluation. In the first case, each controller allows "friends" to access the shared photo. In the second case, controllers specify "friends of friends" as the accessors instead of "friends". In our experiments, we performed 1,000 independent trials and measured the performance of each trial. Since system performance depends on other processes running at the time of measurement, we had initial discrepancies in our performance numbers. To minimize such an impact, we performed 10 independent runs (a total of 10,000 calculations for each number of controllers).

For both cases, the experimental results showed that the policy evaluation time increases linearly with the number of controllers. With the simplest implementation of our mechanism, where n is the number of controllers of a shared photo, a series of operations essentially takes place n times. There are O(n) MySQL calls and data fetching operations, and O(1) additional operations. Moreover, we observed no significant overhead when running MController in Facebook.

6 DISCUSSIONS

In our multiparty access control system, a group of users could collude with one another so as to manipulate the final access control decision. Consider an attack scenario where a set of malicious users may want to make a shared photo available to a wider audience. Suppose they can access the photo; then they all tag themselves to, or fake their identities in, the photo. In addition, they collude with each other to assign a very low sensitivity level to the photo and to specify policies granting a wider audience access to the photo. With a large number of colluding users, the photo may be disclosed to users who are not expected to gain access. To prevent such an attack scenario from occurring, three conditions need to be satisfied: (1) there is no fake identity in OSNs; (2) all tagged users are real users appearing in the photo; and (3) all controllers of the photo are honest in specifying their privacy preferences.

Regarding the first condition, two typical attacks, Sybil attacks [14] and Identity Clone attacks [8], have been introduced to OSNs, and several effective approaches have recently been proposed to prevent the former [15], [38] and the latter [26], respectively. To guarantee the second condition, an effective tag validation mechanism is needed to verify each tagged user against the photo. In our current system, if any users tag themselves or others in a photo, the photo owner will receive a tag notification. Then, the owner can verify the correctness of the tagged users. As effective automated algorithms (e.g., facial recognition [13]) are being developed to recognize people accurately in contents such as photos, automatic tag validation is feasible. Considering the third condition, our current system provides a function (see Figure 7 (d)) to indicate the potential authorization impact with respect to a controller's privacy preference. Using such a function, the photo owner can examine all users who are granted access by the collaborative authorization but are not explicitly granted access by the owner her/himself. Thus, it enables the owner to discover potential malicious activities in collaborative control. The detection of collusion behaviors in collaborative systems has been addressed by recent work [34], [37]. Our future work would integrate an effective collusion detection technique into MPAC.
To prevent collusion activities, our current prototype has implemented a function for owner control (see Figure 7 (c)), where the photo owner can disable any controller who is suspected to be malicious from participating in collaborative control of the photo. In addition, in our future work we would further investigate how users' reputations, based on their collaboration activities, can be applied to prevent and detect malicious activities.

7 RELATED WORK

Access control for OSNs is still a relatively new research area. Several access control models for OSNs have been introduced (e.g., [10], [11], [16], [17], [27]). Early access control solutions for OSNs introduced trust-based access control inspired by the developments of trust and reputation computation in OSNs. The D-FOAF system [27] is primarily a Friend-of-a-Friend (FOAF) ontology-based distributed identity management system for OSNs, where relationships are associated with a trust level indicating the level of friendship between the users participating in a given relationship. Carminati et al. [10] introduced a conceptually similar but more comprehensive trust-based access control model. This model allows the specification of access rules for online resources, where authorized users are denoted in terms of the relationship type, depth, and trust level between users in OSNs. They further presented a semi-decentralized discretionary access control model and a related enforcement mechanism for controlled sharing of information in OSNs [11]. Fong et al. [17] proposed an access control model that formalizes and generalizes the access control mechanism implemented in Facebook, admitting arbitrary policy vocabularies that are based on theoretical graph properties. Gates [12] described relationship-based access control as one of the new security paradigms that address unique requirements of Web 2.0. Then, Fong [16] recently formulated this paradigm in a Relationship-Based Access Control (ReBAC) model that bases authorization decisions on the relationships between the resource owner and the resource accessor in an OSN. However, none of these existing works could model and analyze access control requirements with respect to collaborative

determine who can access the data, instead of accommodating all stakeholders' privacy preferences. Carminati et al. [9] recently introduced a new class of security policies, called collaborative security policies, that basically enhance topology-based access control with respect to a set of collaborative users. In contrast, our work proposes a formal model to address the multiparty access control issue in OSNs, along with a general policy specification scheme and a simple but flexible conflict resolution mechanism for collaborative management of shared data in OSNs. In particular, our proposed solution can also conduct various analysis tasks on access control mechanisms used in OSNs, which has not been addressed by prior work.

8 CONCLUSION

In this paper, we have proposed a novel solution for collaborative management of shared data in OSNs. A multiparty access control model was formulated, along with a multiparty policy specification scheme and a corresponding policy evaluation mechanism. In addition, we have introduced an approach for representing and reasoning about our proposed model. A proof-of-concept implementation of our solution, called MController, has been discussed as well, followed by the usability study and system evaluation of our method.

As part of future work, we are planning to investigate a more comprehensive privacy conflict resolution approach [22], [25] and analysis services for collaborative management of shared data in OSNs. Also, we would explore more criteria to evaluate the features of our proposed MPAC model. For example, one of our recent works has evaluated the effectiveness of the MPAC conflict resolution approach based on the tradeoff between privacy risk and sharing loss [24]. In addition, users may be involved in the control of a large number of shared photos, and the configuration of privacy preferences may become a time-consuming and tedious task. Therefore, we would study inference-based techniques [36] for automatically configuring privacy preferences in MPAC. Besides, we plan to systematically integrate the notion of trust and reputation into our MPAC model and investigate a comprehensive solution to cope with collusion attacks, providing a robust MPAC service in OSNs.
authorization management of shared data in OSNs.
The need of joint management for data sharing, espe- ACKNOWLEDGMENTS
cially photo sharing, in OSNs has been recognized by the This work was partially supported by the grants from
recent work [9], [21], [24], [35]. Squicciarini et al. [35] National Science Foundation (NSF-IIS-0900970 and NSF-
provided a solution for collective privacy management in CNS-0831360).
OSNs. Their work considered access control policies of R EFERENCES
a content that is co-owned by multiple users in an OSN, [1] Facebook Developers. http://developers.facebook.com/.
such that each co-owner may separately specify her/his [2] Facebook Privacy Policy. http://www.facebook.com/policy.php/.
own privacy preference for the shared content. The Clarke- [3] Facebook Statistics. http://www.facebook.com/press/info.php?
statistics.
Tax mechanism was adopted to enable the collective en- [4] Google+ Privacy Policy. http://http://www.google.com/intl/en/+/
forcement of policies for shared contents. Game theory policy/.
was applied to evaluate the scheme. However, a general [5] The Google+ Project. https://plus.google.com.
[6] G. Ahn and H. Hu. Towards realizing a formal rbac model in real
drawback of their solution is the usability issue, as it systems. In Proceedings of the 12th ACM symposium on Access
could be very hard for ordinary OSN users to comprehend control models and technologies, pages 215–224. ACM, 2007.
the Clarke-Tax mechanism and specify appropriate bid [7] G. Ahn, H. Hu, J. Lee, and Y. Meng. Representing and reasoning
about web access control policies. In Computer Software and
values for auctions. Also, the auction process adopted in Applications Conference (COMPSAC), 2010 IEEE 34th Annual,
their approach indicates that only the winning bids could pages 137–146. IEEE, 2010.
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING 14
[8] L. Bilge, T. Strufe, D. Balzarotti, and E. Kirda. All your contacts are belong to us: automated identity theft attacks on social networks. In Proceedings of the 18th International Conference on World Wide Web, pages 551–560. ACM, 2009.
[9] B. Carminati and E. Ferrari. Collaborative access control in online social networks. In Proceedings of the 7th International Conference on Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom), pages 231–240. IEEE, 2011.
[10] B. Carminati, E. Ferrari, and A. Perego. Rule-based access control for social networks. In On the Move to Meaningful Internet Systems 2006: OTM 2006 Workshops, pages 1734–1744. Springer, 2006.
[11] B. Carminati, E. Ferrari, and A. Perego. Enforcing access control in web-based social networks. ACM Transactions on Information and System Security (TISSEC), 13(1):1–38, 2009.
[12] C. Gates. Access control requirements for Web 2.0 security and privacy. In Proc. of Workshop on Web 2.0 Security & Privacy (W2SP). Citeseer, 2007.
[13] J. Choi, W. De Neve, K. Plataniotis, and Y. Ro. Collaborative face recognition for improved face annotation in personal photo collections shared on online social networks. IEEE Transactions on Multimedia, 13(1):14–28, 2011.
[14] J. Douceur. The Sybil attack. In Peer-to-Peer Systems, pages 251–260, 2002.
[15] P. Fong. Preventing Sybil attacks by privilege attenuation: A design principle for social network systems. In Security and Privacy (SP), 2011 IEEE Symposium on, pages 263–278. IEEE, 2011.
[16] P. Fong. Relationship-based access control: Protection model and policy language. In Proceedings of the First ACM Conference on Data and Application Security and Privacy, pages 191–202. ACM, 2011.
[17] P. Fong, M. Anwar, and Z. Zhao. A privacy preservation model for Facebook-style social network systems. In Proceedings of the 14th European Conference on Research in Computer Security, pages 303–320. Springer-Verlag, 2009.
[18] J. Golbeck. Computing and Applying Trust in Web-Based Social Networks. Ph.D. thesis, University of Maryland, College Park, MD, USA, 2005.
[19] M. Harrison, W. Ruzzo, and J. Ullman. Protection in operating systems. Communications of the ACM, 19(8):461–471, 1976.
[20] H. Hu and G. Ahn. Enabling verification and conformance testing for access control model. In Proceedings of the 13th ACM Symposium on Access Control Models and Technologies, pages 195–204. ACM, 2008.
[21] H. Hu and G. Ahn. Multiparty authorization framework for data sharing in online social networks. In Proceedings of the 25th Annual IFIP WG 11.3 Conference on Data and Applications Security and Privacy, pages 29–43. Springer-Verlag, 2011.
[22] H. Hu, G. Ahn, and K. Kulkarni. Anomaly discovery and resolution in web access control policies. In Proceedings of the 16th ACM Symposium on Access Control Models and Technologies, pages 165–174. ACM, 2011.
[23] H. Hu, G.-J. Ahn, and J. Jorgensen. Enabling collaborative data sharing in Google+. Technical Report ASU-SCIDSE-12-1, April 2012. http://sefcom.asu.edu/mpac/mpac+.pdf.
[24] H. Hu, G.-J. Ahn, and J. Jorgensen. Detecting and resolving privacy conflicts for collaborative data sharing in online social networks. In Proceedings of the 27th Annual Computer Security Applications Conference, ACSAC '11, pages 103–112. ACM, 2011.
[25] H. Hu, G.-J. Ahn, and K. Kulkarni. Detecting and resolving firewall policy anomalies. IEEE Transactions on Dependable and Secure Computing, 9:318–331, 2012.
[26] L. Jin, H. Takabi, and J. Joshi. Towards active detection of identity clone attacks on online social networks. In Proceedings of the First ACM Conference on Data and Application Security and Privacy, pages 27–38. ACM, 2011.
[27] S. Kruk, S. Grzonkowski, A. Gzella, T. Woroniecki, and H. Choi. D-FOAF: Distributed identity management with access rights delegation. In The Semantic Web – ASWC 2006, pages 140–154, 2006.
[28] L. Lam and S. Suen. Application of majority voting to pattern recognition: an analysis of its behavior and performance. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 27(5):553–568, 2002.
[29] N. Li, J. Mitchell, and W. Winsborough. Beyond proof-of-compliance: security analysis in trust management. Journal of the ACM (JACM), 52(3):474–514, 2005.
[30] N. Li, Q. Wang, W. Qardaji, E. Bertino, P. Rao, J. Lobo, and D. Lin. Access control policy combining: theory meets practice. In Proceedings of the 14th ACM Symposium on Access Control Models and Technologies, pages 135–144. ACM, 2009.
[31] V. Lifschitz. What is answer set programming? In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1594–1597. MIT Press, 2008.
[32] V. Marek and M. Truszczyński. Stable models and an alternative logic programming paradigm. In The Logic Programming Paradigm: a 25-Year Perspective, pages 375–398. Springer Verlag, 1999.
[33] A. Mislove, B. Viswanath, K. Gummadi, and P. Druschel. You are who you know: Inferring user profiles in online social networks. In Proceedings of the Third ACM International Conference on Web Search and Data Mining, pages 251–260. ACM, 2010.
[34] B. Qureshi, G. Min, and D. Kouvatsos. Collusion detection and prevention with FIRE+ trust and reputation model. In Computer and Information Technology (CIT), 2010 IEEE 10th International Conference on, pages 2548–2555. IEEE, 2010.
[35] A. Squicciarini, M. Shehab, and F. Paci. Collective privacy management in social networks. In Proceedings of the 18th International Conference on World Wide Web, pages 521–530. ACM, 2009.
[36] A. Squicciarini, S. Sundareswaran, D. Lin, and J. Wede. A3P: adaptive policy prediction for shared images over popular content sharing sites. In Proceedings of the 22nd ACM Conference on Hypertext and Hypermedia, pages 261–270. ACM, 2011.
[37] E. Staab and T. Engel. Collusion detection for grid computing. In Proceedings of the 9th IEEE/ACM International Symposium on Cluster Computing and the Grid, pages 412–419. IEEE Computer Society, 2009.
[38] B. Viswanath, A. Post, K. Gummadi, and A. Mislove. An analysis of social network-based Sybil defenses. In ACM SIGCOMM Computer Communication Review, volume 40, pages 363–374. ACM, 2010.
[39] G. Wondracek, T. Holz, E. Kirda, and C. Kruegel. A practical attack to de-anonymize social network users. In 2010 IEEE Symposium on Security and Privacy, pages 223–238. IEEE, 2010.
[40] E. Zheleva and L. Getoor. To join or not to join: the illusion of privacy in social networks with mixed public and private user profiles. In Proceedings of the 18th International Conference on World Wide Web, pages 531–540. ACM, 2009.

Hongxin Hu is an Assistant Professor in the School of Computing at Clemson University. His current research interests include access control models and mechanisms, security and privacy in social networks, security in cloud and mobile computing, network and system security, and secure software engineering. He received the Ph.D. degree in computer science from Arizona State University, Tempe, AZ, in 2012.

Gail-Joon Ahn is an Associate Professor in the School of Computing, Informatics, and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, and the Director of the Security Engineering for Future Computing Laboratory, Arizona State University. His research has been supported by the U.S. National Science Foundation, National Security Agency, U.S. Department of Defense, U.S. Department of Energy, Bank of America, Hewlett Packard, Microsoft, and the Robert Wood Johnson Foundation. Dr. Ahn is a recipient of the U.S. Department of Energy CAREER Award and the Educator of the Year Award from the Federal Information Systems Security Educators Association. He received the Ph.D. degree in information technology from George Mason University, Fairfax, VA, in 2000.

Jan Jorgensen is a junior pursuing his B.S. in computer science at the Ira A. Fulton Schools of Engineering and Barrett Honors College, Arizona State University. He is an undergraduate research assistant in the Security Engineering for Future Computing Laboratory, Arizona State University. His current research pursuits include security and privacy in social networks.