
Neurocomputing 285 (2018) 94–103

A dynamic trust based two-layer neighbor selection scheme towards online recommender systems

Ziyang Zhang a, Yuhong Liu b,∗, Zhigang Jin a,∗, Rui Zhang a,c

a Department of Communications Engineering, Tianjin University, Tianjin, PR China
b Department of Computer Engineering, Santa Clara University, Santa Clara, USA
c Department of Communications Engineering, Tianjin Sino-German University of Applied Sciences, Tianjin, PR China

Article history: Received 27 June 2017; Revised 15 November 2017; Accepted 30 December 2017; Available online 31 January 2018. Communicated by Prof. Yicong Zhou.

Keywords: Recommender system; Collaborative filtering; Neighbors; Availability; Trust; Temporal information; Forgetting factor

Abstract

Collaborative filtering has become one of the most widely used methods for providing recommendations in various online environments. Its recommendation accuracy relies heavily on the selection of appropriate neighbors for the target user/item. However, existing neighbor selection schemes have some inevitable inadequacies, such as neglecting users' capability of providing trustworthy recommendations and ignoring users' preference changes. Such inadequacies may lower the recommendation accuracy, especially when recommender systems face the data sparseness issue caused by the dramatic increase of users and items. To improve the recommendation accuracy, we propose a novel two-layer neighbor selection scheme that takes users' capability and trustworthiness into account. In particular, the proposed scheme consists of two modules: (1) a capability module that selects the first-layer neighbors based on their capability of providing recommendations, and (2) a trust module that further identifies the second-layer neighbors based on their dynamic trustworthiness on recommendations. The performance of the proposed scheme is validated through experiments on real user datasets. Compared to three existing neighbor selection schemes, the proposed scheme consistently achieves the highest recommendation accuracy across datasets with different degrees of sparseness.

© 2018 Elsevier B.V. All rights reserved.

1. Introduction

The rapid growth of the Web provides huge potentials for individual users to contribute and access diverse information online every day. For example, YouTube users were uploading 300 h of new videos every minute in the year 2014, three times more than one year earlier [2]; 500 million tweets are generated on Twitter every day, bringing around 30% growth in volume every year [3]; and Wikipedia and its sister projects receive over 10 edits per second and more than 800 new articles per day from editors all over the world [2]. In such a context, people are often overwhelmed by the vast amount of information and have to spend much more time and energy looking for their favorite items. The item can be a video clip on YouTube, a piece of news on social media, a post on Wikipedia, or a book on Amazon. Fortunately, online recommender systems are developed, which provide item recommendations to users by recording and analyzing users' behavior data explicitly (e.g. through ratings) or implicitly (e.g. through web browsing history). By utilizing such data, recommender systems can not only help users in finding their desired items in a reasonable time, but also make it more convenient to show items to users [4].

Among different approaches implementing recommender systems, the collaborative filtering (CF) algorithm has been most widely adopted, due to the fact that it does not require domain-related feature extraction, which makes the processing of unstructured data very convenient. Specifically, the CF algorithm can be divided into two categories: (1) the item-similarity-based method, which recommends items that are similar to the items previously liked by the target user, and (2) the user-similarity-based method, which provides recommendations based on the preferences of users who are similar to the target user in previous item preferences. Approaches in both categories calculate similarities based on a user-item matrix that holds the previously observed user preferences on items. The calculated similarities are then used to select the most appropriate neighbors of the target user/item to provide accurate recommendations. Therefore, how to calculate user/item similarities, or one step further, how to select

✩ The preliminary version [1] of this paper has been accepted for publication in the proceedings of the 7th IEEE Annual Computing and Communication Workshop and Conference (IEEE CCWC 2017), Hotel Stratosphere, Las Vegas, USA, 9–11 January 2017.
∗ Corresponding authors.

appropriate neighbors is the key of the CF algorithm that dramatically influences the recommendation accuracy.

The CF algorithm is facing new challenges in the big data era. One of these challenges is the data sparseness issue. As discussed above, recent online recommender systems often contain a vast amount of items, and the number of items is still rapidly increasing. Compared to the total amount of items available in the system, the number of items rated by an individual user becomes very limited, leading to high uncertainty in estimating this user's preferences. Although the total number of users involved in recommender systems is also increasing, how to appropriately select capable and reliable (i.e. trustworthy) neighbors to predict the target user's preferences for accurate recommendations is very challenging, which leads to the so-called data sparseness problem.

Many existing neighbor selection mechanisms, however, have intrinsic inadequacies in handling the data sparseness issue. First, most existing user similarity calculation mechanisms, such as the Pearson correlation coefficient, cosine-based similarity and adjusted cosine-based similarity [5–7], calculate the similarity between a pair of users as a symmetric value, while ignoring these two users' asymmetric capabilities in recommending items to each other. Second, when comparing two neighbors, their total number of commonly rated items with the target user is often ignored, leading to a weird scenario where a neighbor sharing only one commonly rated item with the target user may yield a higher similarity score than the neighbor who shares 100 commonly rated items with the target user. Third, most current neighbor selection schemes do not consider the consistency of users' preferences on different items, not to mention the dynamic changes of users' preferences.

This paper aims to improve the recommendation accuracy of the user-similarity-based CF, which, as discussed above, highly relies on precisely selecting neighbors for the target user, and to resolve the problems caused by sparse data through the optimization of neighbor selection. To achieve this goal, we propose a novel two-layer neighbor selection scheme that selects capable and trustworthy neighbors based on two modules: (1) a capability module that selects the first-layer neighbors by considering the asymmetric capabilities as well as the total number of commonly rated items between a pair of users, and (2) a dynamic trust module that performs the second-layer neighbor selection by considering users' preference consistencies on different items. Experiments on real user datasets verify that the proposed scheme consistently achieves high recommendation accuracy across datasets with different sparseness degrees.

The rest of the paper is organized as follows. Section 2 reviews the related work; Section 3 introduces the proposed scheme; Section 4 describes the experiments and results, followed by the conclusion in Section 5.

2. Related work

2.1. Memory-based and model-based CF algorithms

User-similarity-based CF algorithms can be conducted through either memory-based methods or model-based methods. Memory-based CF methods calculate similarities directly based on the user-item matrix. Users with higher similarities to the target user are identified as the neighbors, whose preferences will then be utilized to predict the preferences of the target user [8]. As a result, the recommendation accuracy of such methods highly relies on precise neighbor selection. Although memory-based CF algorithms have been widely adopted by many recommender systems, they still have some significant limitations when handling data sparseness, cold start and scalability. On the other hand, model-based CF methods model users' behavior patterns based on the user-item matrix. Such patterns will then be used to predict a user's preferences. The most popular model-based CF approaches are based on clustering [9], co-clustering [10,11], matrix factorization [12–15], mixture models [16,17], and transfer learning approaches [18,19]. Compared to memory-based CF methods, model-based CF methods can handle large-scale datasets well and provide faster predictions once the model has been established. However, the modeling process itself is usually time-consuming and often causes information loss, which may lead to a drop in recommendation accuracy.

2.2. User similarity calculations

Due to the wide adoption of memory-based CF, many researchers are attracted to improving its accuracy. Consequently, a number of methods have been proposed. The Pearson correlation coefficient, cosine-based similarity and adjusted cosine-based similarity are the most popular methods to calculate user similarity, which serves as the foundation of selecting appropriate neighbors for recommendations. Some extension methods are proposed to further improve the accuracy of user similarity computations. In [20], the authors propose a method to detect and correct unreliable ratings to ensure the availability of the data set. In [21], a significance-based similarity measure is proposed to compute user similarities based on three types of significances. A new similarity function, proposed in [22], achieves higher recommendation accuracy by (1) assigning different weights to each individual item and (2) selecting different sets of neighbors for each specific user. This scheme, however, also significantly increases computational complexity. A new information entropy-driven user similarity measure model is proposed in [23] to measure the relative difference between ratings, and a Manhattan distance-based model is then developed to address the fat-tail problem by estimating the alternative active user average rating, which improves the accuracy of the similarity computation. In [24], a multi-level collaborative filtering method is proposed to assign a higher similarity score to a pair of users if their Pearson correlation coefficient or the number of commonly rated items exceeds a certain threshold.

These methods, however, calculate user similarities without considering the size of the set of commonly rated items and the users' asymmetric capabilities in recommending items to each other, not to mention their recommendation trustworthiness. More importantly, the data sparseness issue makes such inadequacies even worse, which may then cause a significant drop of recommendation accuracy.

2.3. User trustworthiness evaluations

Another trend is to improve recommendation accuracy by introducing trust values among users in collaborative filtering. For instance, the trust values can be computed based on the transitivity rules for similarities among users [25]. Some researchers propose trust models based on users' social network trust relationships or the propagation effect of online word-of-mouth social networks [26–29]. In [30], the authors propose an innovative Trust-Semantic Fusion (TSF)-based recommendation approach within the CF framework, which incorporates additional information from the users' social trust network and the items' semantic domain knowledge in order to deal with the data sparsity and the user and item cold-start problems. However, these trust models may not be applicable in many recommender systems due to the lack of information about users' social relationships or behavior. Some researchers propose trust models based on subjective logics or belief theory, such as [31] and [32]. In [33], the model calculates direct and indirect trust among users by considering one-hop or multiple-hop distances among items. The authors in [34] propose

to calculate the trust values based on the recommendation accuracy of a user's previous ratings. These models have verified that the selection of trustworthy neighbors will significantly improve recommendation accuracy. Nevertheless, most of them do not consider the problem of data sparsity, and may sometimes introduce high computational complexity.

Recently, there are some emerging studies that improve recommendation accuracy by considering the time factor. Some studies manage to recommend diversified items [35–37]. In [38], the authors propose to calculate item similarities through a time-based correlation degree. The authors in [39] proposed a time function to calculate weighted index decays. These studies have verified an important fact that has been ignored by traditional user similarity calculations: the time factor does play a critical role in influencing recommendation accuracy, since users' preferences may change. This fact inspires us to also consider the impact of time when evaluating the trustworthiness of neighbors.

Different from these studies, we propose to gradually forget users' rating history by introducing time decay factors window by window. More importantly, we move one step further by introducing different time decay factors for forgetting neighbors' "good" and "bad" behaviors. In other words, our proposed scheme remembers neighbors' "bad behaviors" (i.e. inconsistent preferences) for a longer time while forgetting neighbors' "good behaviors" (i.e. consistent preferences) faster, so that the neighbors with more consistent preferences will yield higher trust values and finally stand out.

Inspired by these existing studies, we propose a novel two-layer neighbor selection scheme. The first-layer neighbors are selected by taking into account users' capability in providing recommendations to the target user. The second-layer selection further filters out untrustworthy neighbors who cannot consistently provide reliable recommendations across different items.

3. Proposed scheme

In this section, we discuss the proposed scheme in detail. In particular, we first discuss the limitations of the conventional user similarity calculations through some specific examples. Then we introduce the two key modules of the proposed scheme: the capability evaluation module and the trust evaluation module. At the end, the proposed two-layer neighbor selection strategy is presented.

3.1. Existing user similarity calculation methods

User similarity calculations have been widely adopted in collaborative filtering recommender systems. However, the user similarity score calculated by conventional methods, such as the Pearson correlation coefficient and cosine-based similarity, may not be adequate to accurately reflect users' capability and trustworthiness in providing recommendations to the target user. In this section, we use one of the most popular similarity calculation methods, the Pearson correlation coefficient, as an example to analyze such inadequacies in detail.

Specifically, the Pearson correlation coefficient is calculated as:

sim(u, v) = \frac{\sum_{i \in I_{uv}} (R_{ui} - \bar{R}_u)(R_{vi} - \bar{R}_v)}{\sqrt{\sum_{i \in I_{uv}} (R_{ui} - \bar{R}_u)^2} \sqrt{\sum_{i \in I_{uv}} (R_{vi} - \bar{R}_v)^2}}    (1)

where R_{ui} and R_{vi} are the ratings of user u and user v for item i, respectively; I_{uv} is the set of items rated in common by both users; and \bar{R}_u and \bar{R}_v are the average rating values of user u and user v, respectively. We apply the Pearson correlation coefficient calculation on a sample user-item rating matrix as shown in Table 1, where the rating values are on a 5-star scale and "–" represents that the item is unrated.

Table 1
An example of a user-item rating matrix.

      I1   I2   I3   I4   I5   I6
u1    –    4    5    –    3    –
u2    4    –    –    4    3    5
u3    2    2    4    –    3    4
u4    5    4    4    –    4    –
u5    1    1    5    –    2    –

First, users with sparse rating data may obtain high similarity scores even if they rate a very limited number of items in common with the target user. Specifically, let us assume that the target user is user u1 and we want to figure out which one of user u2 and user u3 is more similar to user u1. By checking the user-item rating matrix directly, we observe that user u3 should be more similar to user u1. The reason is as follows. Although both user u2 and user u3 rate item I5 with exactly the same value as user u1 does, it is the only commonly rated item between user u1 and user u2. User u3, on the other hand, demonstrates more evidence for being similar to user u1 by rating two additional items (i.e. I2 and I3) in common with user u1 with very close values. Hence, the similarity score between user u3 and user u1 should be higher. However, according to Eq. (1), we obtain the similarity scores between user u1 and the other two users u2 and u3 as sim(u1, u2) = 1 and sim(u1, u3) = 0.5, respectively, which leads to the counter-intuitive result that user u2 is more similar to user u1. This is because the Pearson correlation coefficient calculates the similarity between a pair of users mainly based on their rating values on the commonly rated items, while neglecting the total number of these items.

Second, the symmetric similarity scores between a pair of users cannot represent their asymmetric capabilities of providing recommendations to each other. For example, when user u1 is the target user, user u3 can offer ratings on item I1 and item I6 to help predict user u1's preferences on these two items. In the opposite direction, when user u3 is the target user, user u1 is not able to provide any information for predicting user u3's unknown preferences on items, because all the items rated by user u1 are rated by user u3 already. In other words, user u1 and user u3 have different capabilities in providing recommendations for each other. Nevertheless, the Pearson correlation coefficient always obtains the same value for sim(u1, u3) and sim(u3, u1), indicating that user u1 may be selected as a neighbor of user u3 for recommendation decisions, which does not make sense.

Third, users with high similarity scores may not always share consistent preferences with the target user across different items. We demonstrate this by using u1, u4 and u5 as an example. Specifically, assume u1 is the target user, and we want to select one user out of u4 and u5 as the neighbor of u1. By simply checking the ratings, we observe that (1) u4 consistently shares very similar preferences with u1 on all the commonly rated items, and (2) although u5 shares exactly the same preference with u1 on I3, he/she displays completely different preferences from u1 on the other two commonly rated items: I2 and I5. Intuitively, u5 should not be chosen as u1's neighbor, since his/her preference consistency with the target user u1 fluctuates a lot. However, by the Pearson correlation coefficient calculation, sim(u1, u4) = 0 while sim(u1, u5) > 0, indicating that users who do not share consistent preferences across items (e.g. u5) may yield a higher similarity score. In addition, as shown in Eq. (1), the Pearson correlation coefficient calculation ignores the timing when users provide ratings. Therefore, two users who provide exactly the same ratings at different times may be considered equally similar to the target user, whereas the user who

provided ratings a long time ago may already have changed his/her preferences.

As a summary, by using the Pearson correlation coefficient calculation as an example, we demonstrate that the conventional user similarity calculations are not adequate for selecting the most appropriate neighbors to predict the target user's preferences. To address these issues, we propose a scheme which contains two key modules: a capability evaluation module and a trust evaluation module. Details of the proposed scheme are discussed in the following sections.

3.2. Capability evaluation module

In this section, we present the proposed capability evaluation module that addresses the first two inadequacies discussed above by introducing user capability. Specifically, a user v's capability to be used for predicting target user u's preferences (i.e. marked as ava(u, v)) is calculated as in the equation below.

ava(u, v) = \begin{cases} 0, & I_v \subset I_u \text{ or } sim(u, v) < 0 \\ |I_{uv}|/|I_u| \times sim(u, v), & \text{otherwise} \end{cases}    (2)

where |I_u| denotes the total number of items rated by target user u; |I_{uv}| denotes the number of items rated by user u and user v in common; and sim(u, v) denotes the Pearson correlation coefficient between user u and user v.

The first inadequacy is addressed by considering the number of commonly rated items in the user capability calculation. Specifically, we calculate ava(u, v) by multiplying sim(u, v) by |I_{uv}|/|I_u|. As a result, the more items that user v rates in common with the target user, the higher capability he/she will have to be used for providing recommendations for the target user u. On the other hand, users rating very few common items with the target user u will not be able to obtain a very high capability score and hence have lower possibilities to be selected as neighbors of the target user.

Furthermore, by setting zero capability scores for users who cannot provide influential information for recommending items to the target user, we can avoid selecting these users as neighbors, which addresses the second inadequacy. Specifically, in Eq. (2), when I_v ⊂ I_u, which indicates that all the items rated by user v are rated by the target user u already, ava(u, v) is set to 0. In addition, when sim(u, v) < 0, indicating an extremely low similarity between user u and v, ava(u, v) is also set to 0. Please note that although the proposed scheme ignores users with negative Pearson correlation coefficients, it is compatible with the case where the absolute value of the Pearson similarity is used.

By applying the proposed capability evaluation module on the sample user-item rating matrix in Table 1, we obtain the capability scores ava(u1, u3) = 0.5, ava(u1, u2) = 0.333, and ava(u3, u1) = 0, indicating that (1) compared to user u2, user u3 is more capable of recommending items to user u1; and (2) although user u3 could be a capable neighbor for user u1, user u1 is not capable of providing recommendations in the opposite direction. These observations further validate our previous arguments.

As a summary, we construct the capability evaluation module by considering the number of commonly rated items and setting zero capability scores for users who cannot provide helpful information for recommending items to the target user. In this way, we can select neighbors with higher capability, which addresses the first two inadequacies.

3.3. Trust evaluation module

Although the proposed capability evaluation module can address the first two inadequacies, it cannot address the third issue: identifying "trustworthy" users who share consistent preferences with the target user across different items. Therefore, in this section, we propose a novel trust evaluation module, which evaluates users' trustworthiness in providing recommendations to the target user. This trust evaluation module serves as an important basis for our neighbor selection strategy.

3.3.1. The revised Beta trust model

In the first step, we aim to use trust models to evaluate whether a user shares consistent preferences with the target user across different items. There are diverse models to evaluate trust, such as the Beta trust model [40], Bayesian networks [41] and belief theory [42]. In this work, we adopt and revise the Beta trust model [40] to conduct the basic trust evaluation due to its low computational complexity.

In particular, we consider that the rating offset between user u and user v follows a Beta distribution. In the classic Beta trust model, a trustee's behavior is evaluated by a trustor as a binary value (i.e. either "good" or "bad"). In our proposed scheme, we evaluate the trustworthiness of a user v from the target user u's perspective. For each specific item that users u and v rate in common, the rating offset between these two users can serve as one observation of their preference consistency. Depending on whether v's preference on an item is close to or far away from target user u's preference, we consider that user v has displayed either a "good behavior" (i.e. consistent preference) or a "bad behavior" (i.e. inconsistent preference). As a result, for a given user, his/her preference consistency with the target user across different items can be modeled as a random variable that follows a Beta distribution.

More importantly, in the proposed trust module, we further modify the Beta trust model to better fit the multivariate rating values available in recommender systems. Specifically, we consider that a user v's behavior contains both a good portion and a bad portion (i.e. marked as g_i(u, v) and b_i(u, v)), each of which can be quantified as a continuous value in the range of (0, 1). Specifically, g_i(u, v) and b_i(u, v) are computed as follows.

g_i(u, v) = 1 - \frac{|R_{vi} - R_{ui}|}{R_{max} - R_{min}}    (3)

b_i(u, v) = \frac{|R_{vi} - R_{ui}|}{R_{max} - R_{min}}    (4)

where R_{max} and R_{min} represent the maximum and the minimum rating values in a recommender system, respectively. For instance, on a 5-star rating scale, R_{max} is 5 and R_{min} is 1. According to (3) and (4), for each item (i.e. marked as i) rated by user u and user v in common, user v's "good" and "bad" portions of rating behavior will always add up to 1. In addition, user v is believed to conduct a higher portion of "good behavior" if his/her rating value is closer to the rating value of the target user u.

Furthermore, the total amount of good/bad behaviors conducted by user v is calculated as the sum of his/her good/bad behavior values on all the commonly rated items.

G(u, v) = \sum_{i \in I_{uv}} g_i(u, v)    (5)

B(u, v) = \sum_{i \in I_{uv}} b_i(u, v)    (6)

At the end, the trustworthiness of user v from target user u's perspective (i.e. marked as tru(u, v)) is calculated as in (7). We can observe that the more good behaviors a user v conducts, the higher the trust value he/she will obtain.

tru(u, v) = \frac{G(u, v) + 1}{G(u, v) + B(u, v) + 2}    (7)
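The worked similarity and capability numbers reported above for the Table 1 sample can be reproduced with a short sketch of Eqs. (1) and (2). This is a minimal illustration, assuming each user's mean rating is taken over all of his/her own rated items (which matches the values reported in Section 3.1); the function names `pearson_sim` and `ava` are ours, not the paper's.

```python
from math import sqrt

# Table 1 sample matrix: user -> {item: rating}; "-" entries omitted.
ratings = {
    "u1": {"I2": 4, "I3": 5, "I5": 3},
    "u2": {"I1": 4, "I4": 4, "I5": 3, "I6": 5},
    "u3": {"I1": 2, "I2": 2, "I3": 4, "I5": 3, "I6": 4},
    "u4": {"I1": 5, "I2": 4, "I3": 4, "I5": 4},
    "u5": {"I1": 1, "I2": 1, "I3": 5, "I5": 2},
}

def pearson_sim(u, v):
    """Eq. (1): Pearson correlation over the commonly rated items."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    mean_u = sum(ratings[u].values()) / len(ratings[u])
    mean_v = sum(ratings[v].values()) / len(ratings[v])
    num = sum((ratings[u][i] - mean_u) * (ratings[v][i] - mean_v) for i in common)
    den = sqrt(sum((ratings[u][i] - mean_u) ** 2 for i in common)) \
        * sqrt(sum((ratings[v][i] - mean_v) ** 2 for i in common))
    return num / den if den else 0.0

def ava(u, v):
    """Eq. (2): capability of neighbor v for target user u."""
    s = pearson_sim(u, v)
    if set(ratings[v]) <= set(ratings[u]) or s < 0:
        return 0.0  # v offers no new items, or the similarity is negative
    n_common = len(set(ratings[u]) & set(ratings[v]))
    return n_common / len(ratings[u]) * s

print(round(pearson_sim("u1", "u2"), 3))  # 1.0  (single common item I5)
print(round(pearson_sim("u1", "u3"), 3))  # 0.5
print(round(ava("u1", "u2"), 3))          # 0.333
print(round(ava("u1", "u3"), 3))          # 0.5
print(round(ava("u3", "u1"), 3))          # 0.0  (u3 already rated all of u1's items)
```

The last three lines reproduce the capability scores quoted in Section 3.2, showing how the |I_{uv}|/|I_u| factor demotes the single-common-item neighbor u2 despite its perfect similarity.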
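The basic trust evaluation of Eqs. (3)–(7) can likewise be sketched on the Table 1 sample. The resulting values (0.7 for u4 and 0.6 for u5, from u1's perspective) are our own illustrative computations on the sample matrix, not figures reported in the paper; a 5-star scale (R_max = 5, R_min = 1) is assumed.

```python
# A minimal sketch of the revised Beta trust model (Eqs. (3)-(7)).
ratings = {
    "u1": {"I2": 4, "I3": 5, "I5": 3},
    "u4": {"I1": 5, "I2": 4, "I3": 4, "I5": 4},
    "u5": {"I1": 1, "I2": 1, "I3": 5, "I5": 2},
}
R_MAX, R_MIN = 5, 1

def trust(u, v):
    """Eq. (7): trustworthiness of user v from target user u's perspective."""
    common = set(ratings[u]) & set(ratings[v])
    # Eqs. (3)-(6): per-item good/bad portions, summed over the common items.
    good = sum(1 - abs(ratings[v][i] - ratings[u][i]) / (R_MAX - R_MIN)
               for i in common)
    bad = sum(abs(ratings[v][i] - ratings[u][i]) / (R_MAX - R_MIN)
              for i in common)
    return (good + 1) / (good + bad + 2)

print(trust("u1", "u4"))  # 0.7 -- consistently close to u1 on I2, I3 and I5
print(trust("u1", "u5"))  # 0.6 -- the large offsets on I2 and I5 lower the trust
```

Note that Pearson correlation gives sim(u1, u4) = 0 while sim(u1, u5) > 0, yet the trust value ranks u4 above u5, matching the argument in Section 3.1 that u4 is the preference-consistent neighbor.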

3.3.2. Time factor v or user w. Therefore, the timing of Rui is ignored in the above
In the second step, we further evaluate whether a user shares computation.
consistent preferences with the target user by introducing a time Furthermore, as user v may rate multiple common items with
factor, an essential factor that is seldom studied by existing work the target user u in window θ , the total number of good/bad be-
for selecting trustworthy neighbors. Time factor is critical because haviors in window θ is calculated as the sum of his/her good/bad
users’ interests/preferences could change. This naturally leads to behavior values on all the commonly rated items.
two consequences. First, the ratings provided by a user long time 
ago should take lower weights in influencing this user’s trust- gθ (u, v ) = gθi (u, v ) (11)
worthiness as it may no longer be able to accurately reflect this i∈Iuθv
user’s up-to-date preferences. Second, comparing two users v and

w, who share consistent preferences with the target user long time bθ (u, v ) = bθi (u, v ) (12)
ago and more recently, respectively, the latter one should be more i∈Iuθv
trustworthy as he/she could probably provide more accurate infor-
mation for inferring target user’s current preferences. Next, we will discuss the overall amount of “good behaviors” and
To let users’ trust values accurately reflect their preference “bad behaviors” with forgetting factor. To let the more recent be-
changes , we propose to gradually forget users’ previous prefer- haviors take higher weights in the calculation of users’ trustworthi-
ences by introducing time decay factors. Specifically, we need to ness, we use time decay factors to gradually forget a user’s previ-
carefully determine when to forget and how to forget. These two ous behaviors. Specifically, for time window θ , the overall amount
questions will be answered in the following two subsections in of “good behaviors” and “bad behaviors” (i.e. Gθ (u, v) and Bθ (u, v))
details. can be calculated as:

3.3.3. Time window Gθ (u, v ) = Gθ −1 (u, v ) × fg + gθ (u, v ) (13)

In this section, we explain when to forget. There are two possi-
ble ways. First, gradually forget a user’s previous ratings each time
when he/she posts a new rating. Second, divide the entire time Bθ (u, v ) = Bθ −1 (u, v ) × fb + bθ (u, v ) (14)
into multiple time windows and forget ratings in previous win-
dows each time when a new window starts. In the first approach, where fg and fb are two continuous values in the range of (0, 1)
the forgetting timing is determined by each individual user’s rating representing the forgetting factors for “good behaviors” and “bad
timing. This may lead to two problems: (1) forgetting too fast as a behaviors”, respectively.
user’s preference is estimated to change each time when he/she When fg = fb , it means that a user’s “good behaviors” and “bad
provides a new rating; and (2) not fair for all users as their forget- behaviors” will be forgotten at the same speed. In this case, if both
ting timings could be very different. The second approach, how- of these two forgetting factors equal to 1, users’ previous behav-
ever, enables dynamic adjustment of the forgetting speed by allowing flexible time window lengths. In addition, it provides a fair comparison among all users as the forgetting timing is identical. Therefore, we introduce the second approach in the proposed scheme, where non-overlapping windows are adopted.

In particular, given a specific rating provided at time tc, the index of the window that this rating falls in is marked as θ and calculated in Eq. (8),

θ = ⌊(tc − ts)/tw⌋ + 1    (8)

where ts and tw represent the training starting time and the window length, respectively. All the ratings falling into the same time window θ will be assigned the same forgetting factor. As shown in Eq. (8), the key issue is to determine an appropriate window length. In the experiment section, we further discuss the optimal values of the window length in detail.

3.3.4. Forgetting factor

In this section, we further discuss how to forget a user's ratings in window θ. By combining the revised Beta trust model and the time window proposed above, we are able to quantify the good/bad portion of each of user v's rating behaviors in window θ (i.e. marked as gθi(u, v) or bθi(u, v)). Specifically, gθi(u, v) and bθi(u, v) are computed as follows,

gθi(u, v) = 1 − |Rvi − Rui| / (Rmax − Rmin)    (9)

bθi(u, v) = |Rvi − Rui| / (Rmax − Rmin)    (10)

where Rvi represents user v's rating value on item i in window θ, and Rui represents the target user u's rating value on item i. Please note that the target user u's rating timing is identical regardless of which neighbor user's behavior is evaluated.

If the forgetting factor values are equal to 1, the previous behaviors will be remembered forever, which becomes the same as the previously discussed basic trust model. On the other hand, if the forgetting factor values are less than 1, the previous behaviors will be gradually forgotten, and rating behaviors falling into earlier time windows will be forgotten more.

Furthermore, we observe that some candidate neighbors are more consistent in their preferences while some others may fluctuate a bit. The neighbors who share consistent preferences with the target user should be able to reflect the target user's preferences more accurately. Therefore, we propose to design the forgetting factors in a way that prefers consistent neighbors to fluctuating neighbors. In particular, to punish users with fluctuating preferences, we propose to have fg < fb, so that users' inconsistent preferences (i.e. bad behavior in the proposed model) are remembered for a longer time. It also indicates that when a candidate neighbor's trust value drops, it will take him/her a longer time to recover. The optimal values of these two forgetting factors will be discussed in the experiment section in detail. At the end, target user u's trust value on user v at window θ is calculated as in the below equation.

tru(u, v) = (Gθ(u, v) + 1) / (Gθ(u, v) + Bθ(u, v) + 2)    (15)

3.3.5. Summary

As a summary, in this section, we propose a dynamic trust evaluation module, which evaluates a user's trustworthiness as whether he/she shares consistent preferences with the target user across different items. Specifically, we adopt and revise a Beta trust model to calculate users' trust values from the target user's point of view, and consider time factors to capture users' preference changes. Selecting neighbors with high trust values will significantly help the system consistently make high quality recommendations to the target user, which addresses the third inadequacy of existing schemes.
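The quantities above can be sketched in Python. Note that the per-window aggregation of gθi and bθi into Gθ(u, v) and Bθ(u, v) (Eqs. (11)–(14)) is not reproduced in this excerpt, so the exponential discounting by fg and fb used below is a hedged assumption consistent with the surrounding description, not necessarily the paper's exact aggregation formula.

```python
def window_index(tc, ts, tw):
    """Eq. (8): index of the time window a rating provided at time tc falls into."""
    return int((tc - ts) // tw) + 1

def good_bad_portion(r_v, r_u, r_min=1, r_max=5):
    """Eqs. (9)-(10): good/bad evidence from one commonly rated item."""
    b = abs(r_v - r_u) / (r_max - r_min)
    return 1.0 - b, b

def trust(common_ratings, ts, tw, f_g, f_b, now):
    """Beta-style trust of neighbor v from target user u's point of view.

    common_ratings: list of (t, r_u, r_v) tuples for commonly rated items,
    where t is the time of v's rating. Evidence from a window that is
    `age` windows old is discounted by f_g ** age (good) or f_b ** age
    (bad) -- an assumed reconstruction of Eqs. (11)-(14).
    """
    current = window_index(now, ts, tw)
    G = B = 0.0
    for t, r_u, r_v in common_ratings:
        theta = window_index(t, ts, tw)
        g, b = good_bad_portion(r_v, r_u)
        age = current - theta
        G += (f_g ** age) * g
        B += (f_b ** age) * b
    return (G + 1.0) / (G + B + 2.0)   # Eq. (15)
```

With fg = fb = 1 no evidence is discounted, which reduces to the basic Beta trust model mentioned above; with fg < fb, bad evidence decays more slowly, so a trust drop takes longer to recover from.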
Z. Zhang et al. / Neurocomputing 285 (2018) 94–103 99

3.4. Neighbor selection strategy

In this section, we propose a two-layer neighbor selection strategy by integrating the capability evaluation module and the trust evaluation module.

In conventional collaborative filtering algorithms, there are two popular neighbor selection methods. One is to select a fixed number (i.e. K) of neighbors with the highest similarity scores; the other is to select the neighbors with similarity scores higher than a certain threshold. However, both of these two methods have their own limitations. The former can easily include users with small similarity scores among the top K neighbors. The latter has to deal with threshold selection for different environment settings, which is not trivial. An extension of the latter is the scheme proposed in [19], which sets the threshold as the average similarity score of all neighbors. Nevertheless, the range of the threshold values determined by this extension is hard to adjust dynamically. In addition, it also introduces more computational costs.

To address these limitations, we propose a two-layer neighbor selection strategy where the first layer neighbors (i.e. marked as N′(u)) are selected as the top K′ users with the highest availability scores. Assume K is the number of neighbors that we plan to select, and ε ∈ {ε ∈ R | ε ≥ 1}. The number of the first layer neighbors (i.e. K′) is obtained as the maximum integer value not exceeding ε × K, which is calculated as in the below equation.

K′ = ⌊ε × K⌋    (16)

Furthermore, the second layer neighbors, marked as N(u), are further selected to include the top K users with the highest trust values. We display the proposed strategy in Algorithm 1.

Algorithm 1.
Input: the user-item matrix; the target user u; maximum number of neighbors K; parameter ε
Output: the set of user u's neighbors N(u)
  Calculate K′ based on Eq. (16)
  for each user v in the user-item matrix except user u do
    Calculate the commonly rated items between user u and user v, and store them into set Iuv
    Calculate ava(u, v) based on Eqs. (1) and (2)
  end for
  if the number of users who have non-zero ava(u, v) > K′ then
    Insert the K′ users who have the highest ava(u, v) into N′(u)
  else
    Insert all users who have non-zero ava(u, v) into N′(u)
  end if
  for each user v in N′(u) do
    Calculate tru(u, v) based on Eqs. (3)–(15)
  end for
  if the size of N′(u) > K then
    Insert the K users who have the highest tru(u, v) into N(u)
  else
    N(u) = N′(u)
  end if
  return N(u)

By using the proposed two-layer neighbor selection strategy, we aim to select appropriate neighbors with high capability and trustworthiness in predicting the target users' preferences.

4. Experimental results

In this section, in order to validate the effectiveness of the proposed scheme, we conduct several experiments based on a real user dataset. The experiments are performed on a computer with an Intel i5 2.4 GHz CPU, 12 GB of RAM and the Windows 8.1 system. All the comparison methods are implemented in the Python programming language.

4.1. Experiment data set

We conduct experiments using the MovieLens-100k dataset collected by the GroupLens Research Project at the University of Minnesota, which is one of the most popular datasets used by researchers in the field of collaborative filtering. The MovieLens-100k dataset consists of 100,000 ratings from 943 users on 1682 movies. The rating values are integers ranging from 1 to 5. The minimum number of items rated by each user is 20.

In addition, the training and testing data are extracted based on the temporal domain information to examine the effectiveness of the time factor proposed in our scheme. Specifically, we first order all the user ratings as a sequence according to the time when they were provided. Then the first 70% of the ratings in the sequence are used as training data and the remaining 30% are used as testing data. In this way, all the ratings in the training set are provided before any ratings in the testing set, which matches real-world recommendation scenarios. Furthermore, the time of the last rating in the training set represents the training ending time, which will be used later for forgetting purposes.

Such a way of organizing training and testing data also brings one issue: some users, who only provide ratings after the training ending time, may not have any ratings in the training set. As we have no knowledge about these users at the training stage, we simply exclude them from the testing user group. On the other hand, there are also some users who have sufficient rating data in the training set but never provide any ratings after the training ending time. These users are also excluded from the testing user group, as there is no ground truth for us to validate the performance of the recommendation scheme on them. However, they may still be used as trustworthy neighbors by the recommendation algorithm to predict other users' future ratings. Please note that there are 943 users in the original data set. After our preprocessing, there are 687 users in the training data and 126 users in the testing data, which still provides sufficient experiment data.

After the above preprocessing, the extracted training and testing data are used in the later experiments for performance validation. The detailed experiment results and analysis are presented in the following sections.

4.2. Performance evaluation metric

The predicted rating of item i for the target user u is the weighted average of the ratings from user u's selected neighbors. It is calculated as in the below equation,

Pui = R̄u + [ Σv∈N(u) (Rvi − R̄v) × ava(u, v) × tru(u, v) ] / [ Σv∈N(u) |ava(u, v) × tru(u, v)| ]    (17)

where R̄u and R̄v denote the average rating values of users u and v, respectively.

To measure the effectiveness of the proposed method, we use Mean Absolute Error (MAE) and Root Mean Square Error (RMSE), which are widely accepted by the research community. MAE and RMSE are used to compute the deviation between the predicted ratings and the actual ratings in all experiments. Specifically, the MAE and RMSE are calculated as

MAE = (1/N) Σi |ri − pi|    (18)

RMSE = √( (1/N) Σi (ri − pi)² )    (19)
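Algorithm 1 together with Eqs. (16)–(19) can be sketched in Python as below. The availability and trust scores are assumed to be precomputed dictionaries for one fixed target user (Eqs. (1)–(2) and (3)–(15) are not reimplemented here), and `predict` takes hypothetical per-neighbor tuples rather than the full user-item matrix.

```python
import math

def select_neighbors(ava, tru, K, eps=1.25):
    """Algorithm 1 sketch: two-layer neighbor selection for one target user.

    ava, tru: dicts mapping candidate neighbors to availability / trust scores.
    """
    K1 = math.floor(eps * K)                    # Eq. (16): K' = floor(eps * K)
    # First layer N'(u): top-K' candidates by availability (non-zero only).
    nonzero = [v for v, a in ava.items() if a > 0]
    layer1 = sorted(nonzero, key=lambda v: ava[v], reverse=True)[:K1]
    # Second layer N(u): top-K of the first layer by trust.
    return sorted(layer1, key=lambda v: tru[v], reverse=True)[:K]

def predict(u_mean, neighbor_info):
    """Eq. (17): weighted average of mean-centered neighbor ratings.

    neighbor_info: list of (r_vi, v_mean, ava_uv, tru_uv) for v in N(u).
    """
    num = sum((r - m) * a * t for r, m, a, t in neighbor_info)
    den = sum(abs(a * t) for _, _, a, t in neighbor_info)
    return (u_mean + num / den) if den else u_mean

def mae(actual, predicted):                     # Eq. (18)
    return sum(abs(r - p) for r, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):                    # Eq. (19)
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(actual, predicted)) / len(actual))
```

When fewer than K′ candidates have non-zero availability, the slice simply keeps all of them, matching the else branch of Algorithm 1.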
Fig. 1. Impact of parameter ε on MAE.

Fig. 2. Scheme comparisons for different window lengths.

where ri represents the target user's actual rating value on item i and pi is the corresponding predicted rating value. N is the total number of items rated by the target user in the testing set. A lower MAE or RMSE value indicates better performance.

4.3. Module testing

In this section, we would like to discuss the impact of different parameters in the proposed scheme and their optimal values. As the proposed scheme contains multiple interacting parameters, for simplicity reasons, we study the impact of one or two parameters while all other parameters are fixed. Specifically, three experiments are conducted. In the first experiment, we test different ε values, ranging from 1.0 to 2.0, to obtain the optimal one that leads to the minimum MAE for different numbers of neighbors (i.e. K). Please note that in this experiment, by setting fg = fb = 1, we do not forget any users' preferences, which also makes the setting of the time window length irrelevant. In the second and third experiments, we further validate the impact of the forgetting factor by adopting the optimal values of ε and K observed in the first experiment. In particular, the second experiment focuses on comparing the impact of different window lengths by setting fg = fb; we find that when the time window length increases, the optimal forgetting factor drops. Then in the third experiment, we further examine the impact of forgetting users' bad behavior more slowly by making fg ≠ fb. We find that the optimized fg values are always smaller than fb, which validates our proposal that bad behavior should be remembered for a longer time. To generalize the experiments, we relax the constraints in the previous experiments by allowing flexible changes of fb, fg and tw, through which the optimal values of the window length and forgetting factors are obtained. Specific results and analysis of the experiments are as follows.

Fig. 1 shows the performance of the proposed scheme for different ε and K values. In particular, the x-axis and the y-axis represent the ε value and the MAE, respectively.

From Fig. 1, we observe that for a fixed ε value, a larger K value will lead to better performance. This is due to the fact that more neighbors will provide more reliable information for recommendations. In addition, Fig. 1 also shows that, regardless of the K value, the minimum MAE will always be obtained when ε takes a value between 1.1 and 1.3. The reasons are as follows. On the one hand, when ε goes above 1.3, the proposed scheme becomes less effective because too many first layer neighbors will be selected and the last few neighbors may have very low availability scores. On the other hand, when ε goes below 1.1, the proposed scheme will also become less effective, since there will be too few first-layer neighbors, which makes it difficult to select a sufficient number of second layer neighbors with high trust values. Based on these results, we choose the ε value as 1.25 and the K value as 40 in the later experiments.

Fig. 2 shows the performance of the proposed scheme with different window lengths and forgetting factor values, where the x-axis represents the optimal f value that leads to the minimum MAE, and the y-axis represents the MAE. The three different window lengths are chosen as 43,200, 64,800 and 108,000 in system time, respectively. Please note that in this experiment, we assume that the proposed scheme forgets neighbors' good behavior and bad behavior at the same speed by setting fg = fb. Therefore, in the later discussion of this experiment, we will use f to represent the value of the forgetting factor.

In Fig. 2, we observe that when the forgetting factor f is set to 1, which indicates no forgetting at all, the MAE values for all different window lengths are identical at 0.8150. In this case, the scheme becomes the same as the scheme proposed in our previous work [1]. In addition, for each specific window length, when we change the value of f to be smaller than 1, which indicates forgetting to some extent, there is always a better f value that leads to a smaller MAE than that of the case when f = 1. It means that the proposed scheme is able to outperform the scheme in our previous work [1]. Specifically, taking tw = 64,800 as an example, the minimum MAE value of the proposed scheme is 0.8138 when fg = fb = 0.998, which outperforms the scheme in our previous work [1] by decreasing the MAE by 0.1%. This observation supports our argument that gradually forgetting users' previous behaviors will positively improve the recommendation accuracy.

As shown in Fig. 2, the optimal forgetting factors that lead to the minimum MAE are different for different window lengths. Therefore, it would be interesting to further investigate the relationship between the window length and the corresponding optimal forgetting factor values. This may serve as guidance for us to later identify the best combination of window length and forgetting factor value. As both the window length and the two forgetting factors fg and fb can change, identifying the best combination of them is not trivial. Therefore, we conduct a series of experiments and show the results in Fig. 3, Tables 2 and 3.

We first start with the simple case where fg = fb. It indicates that a user's "good behaviors" and "bad behaviors" will be forgotten at the same speed. The results are shown in Fig. 3.
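The parameter search described in this section (varying tw, fg and fb and keeping the combination with the lowest MAE) can be organized as a simple grid search. In the sketch below, `evaluate_mae` is a hypothetical callback that trains and scores the proposed scheme for one parameter combination; it is not part of the paper.

```python
from itertools import product

def tune(evaluate_mae, tw_values, fg_values, fb_values):
    """Return the (tw, fg, fb) combination with the smallest MAE.

    evaluate_mae(tw, fg, fb) is a hypothetical callback that runs the
    proposed scheme on the training data and returns its MAE.
    Combinations with fg > fb are skipped, since bad behavior should be
    remembered at least as long as good behavior (fg <= fb).
    """
    best, best_mae = None, float("inf")
    for tw, fg, fb in product(tw_values, fg_values, fb_values):
        if fg > fb:
            continue
        m = evaluate_mae(tw, fg, fb)
        if m < best_mae:
            best, best_mae = (tw, fg, fb), m
    return best, best_mae
```

Because window length and forgetting factor jointly determine the effective forgetting speed, a coarse grid over both is usually searched before refining around the best cell.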
Fig. 3. Optimal forgetting factors for different window lengths.

Table 2
Scheme comparisons for different window lengths.

tw             43,200    64,800    86,400    108,000    129,600
Optimal fg     0.990     0.986     0.984     0.982      0.976
Minimum MAE    0.8114    0.8115    0.8106    0.8115     0.8115

Table 3
Scheme comparisons for different fb values.

fb      0.992     0.994     0.996     0.998     1
tw      86,400    86,400    86,400    86,400    86,400
fg      0.976     0.978     0.982     0.984     0.986
MAE     0.8117    0.8116    0.8110    0.8108    0.8140

In Fig. 3, the x-axis represents five different window lengths and the y-axis represents the corresponding optimal forgetting factor values. From Fig. 3, we can observe that all the minimum MAEs are achieved when the forgetting factor f is smaller than 1. It validates the effect of introducing the forgetting factor. That is, gradually forgetting users' previous preferences will help improve recommendation accuracy. In addition, an interesting observation made from Fig. 3 is that when the window length increases, the corresponding optimal forgetting factor decreases. Recall that a longer window length indicates slower forgetting, while a smaller forgetting factor value indicates faster forgetting. It indicates that the minimum MAE is achieved at roughly the same forgetting speed, which could be implemented through different combinations of window length and forgetting factor. Therefore, to achieve the minimum MAE, a longer time window should correspond to a smaller forgetting factor, and vice versa.

Next, we study the more complex case where fg ≠ fb. As we believe that neighbors whose preferences are always consistent with the target user should be more trustworthy than neighbors with fluctuating preferences, we design fg < fb in the next two experiments to remember users' inconsistent preferences for a longer time. In particular, we first set a fixed fb value as 0.998, and dynamically adjust the value of fg to reach the minimum MAE for each window length. The optimal fg values and the corresponding minimum MAE for different window lengths are shown in Table 2.

In Table 2, we can observe that for different window lengths, the minimum MAEs that are achieved by setting fg < fb are always smaller than those in Fig. 2, where fg = fb. This observation supports our argument that we should remember users' inconsistent preferences (i.e. bad behavior in the proposed trust model) for a longer time. Furthermore, among the minimum MAEs, the smallest one in Table 2 is 0.8106 when tw = 86,400 and fg = 0.984. In addition, we can see that the longer the window length, the smaller the fg value needs to be to achieve the smallest MAE value.

Please note that we also conduct further experiments to examine the case where fg > fb. As we expected, when fg > fb, the performances are worse than those of the case where fg = fb, indicating that forgetting users' inconsistent preferences faster will not help select more trustworthy neighbors. Such an observation further supports our above design.

In Table 2, we have fixed the value of fb. To make it more general, we repeat the above experiment by changing the fb values from 0.992 to 1. For each specific fb value, we obtain the optimal fg value and tw value, as well as the corresponding MAE. The results are shown in Table 3.

As shown in Table 3, the smallest MAE value is achieved when fb = 0.998, tw = 86,400 and fg = 0.984. We will use this optimal combination in the next section, where the performance of the proposed scheme is compared to that of other existing studies.

4.4. Comparison schemes

In this experiment, we compare the recommendation accuracy of different methods by adjusting the number of selected neighbors. In particular, we set ε = 1.25, fb = 0.998, tw = 86,400 and fg = 0.984 in the proposed scheme. Furthermore, we adopt three different comparison schemes that all use the Pearson correlation coefficient as the similarity measure: (1) the multi-level collaborative filtering (ML-CF) proposed in [24]; (2) a dual neighbor selection strategy based collaborative filtering (DN-CF) proposed in [34]; (3) collaborative filtering using the Pearson correlation coefficient (CF).

Fig. 4. Scheme comparisons for different K values.
Fig. 5. Scheme comparisons on data with different data sparseness.

For these comparison schemes, please refer to Section 2 for a more detailed discussion.

Fig. 4 shows the performances of the different schemes with different K values, where the x-axis represents the K value and the y-axis represents the MAE and the RMSE, respectively. In Fig. 4, we observe that the proposed scheme achieves the best MAE and RMSE for all different K values. Specifically, taking the K value as 40 for example, the proposed scheme outperforms CF, DN-CF and ML-CF by decreasing the MAE by 16.24%, 11.82% and 2.82% and the RMSE by 16.39%, 10.95% and 2.80%, respectively.

At the end, we would like to compare the performance of the different schemes in terms of handling the data sparseness issue. Specifically, we set the K value as 40, and show the performance of the different schemes on various datasets with different degrees of sparseness in Fig. 5. For example, the 0.3 data sparseness degree represents the scenario where only 30% of the original training data is adopted for parameter training. As shown in Fig. 5, we compare the proposed scheme to the three other schemes when 30%, 40%, 50%, 60%, 70%, 80% and 90% of the original training data is used for training, respectively. We observe that the proposed scheme stably outperforms all other schemes when the dataset is sparse to different degrees, demonstrating its high and stable capability in handling the data sparseness issue. In addition, the ML-CF scheme achieves very good performance when more than 50% of the original training data set is used for training. However, when the data sparseness issue becomes more significant (e.g. 0.3 or 0.4), it performs the worst, which indicates a poor capability to handle the data sparseness issue.

As a summary, with appropriate values of ε, tw, fg and fb, the proposed scheme achieves the best recommendation accuracy regardless of the K value and the data sparseness.

5. Conclusion

In this work, a novel two-layer neighbor selection scheme is proposed for collaborative filtering recommender systems, aiming at improving the recommendation accuracy by selecting the most capable and trustworthy neighbors. Specifically, the proposed scheme contains two modules: the capability evaluation module and the trust evaluation module. The capability module selects the first layer neighbors by (1) considering the number of commonly rated items between potential neighbors and the target user, and (2) setting zero capability scores for potential neighbors who cannot provide helpful information for recommending items to the target user. In addition, the trust module further identifies the second layer neighbors who share consistent preferences with the target user across different items. To evaluate the performance of the proposed scheme, experiments are conducted on the MovieLens-100K dataset. The experimental results show that the proposed scheme outperforms the comparison schemes by consistently achieving higher recommendation accuracy across datasets with different degrees of data sparseness.

References

[1] Z.Y. Zhang, Y.H. Liu, Z.G. Jin, R. Zhang, Selecting influential and trustworthy neighbors for collaborative filtering recommender systems, in: Proceedings of the 7th IEEE Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, USA, 2017, pp. 1–7.
[2] M. Zhang, X. Guo, G. Chen, Prediction uncertainty in collaborative filtering: enhancing personalized online product ranking, Decis. Support Syst. 83 (2016) 10–21.
[3] F. Xie, Z. Chen, J. Shang, W. Huang, J. Li, Item similarity learning methods for collaborative filtering recommender systems, in: Proceedings of the IEEE International Conference on Advanced Information Networking and Applications, 2015, pp. 896–903.
[4] J. Lu, D. Wu, M. Mao, W. Wang, G. Zhang, Recommender system application developments, Decis. Support Syst. 74 (C) (2015) 12–32.
[5] H. Yu, J.H. Li, Algorithm to solve the cold-start problem in new item recommendations, Chin. J. Software 26 (6) (2015) 1395–1408.
[6] D. Li, C. Chen, Q. Lv, L. Shang, Y. Zhao, T. Lu, N. Gu, An algorithm for efficient privacy-preserving item-based collaborative filtering, Future Gener. Comput. Syst. 55 (2016) 311–320.
[7] B.K. Patra, R. Launonen, V. Ollikainen, S. Nandi, A new similarity measure using Bhattacharyya coefficient for collaborative filtering in sparse data, Knowl. Based Syst. 82 (2015) 163–177.
[8] S. Jamalzehi, M.B. Menhaj, A new similarity measure based on item proximity and closeness for collaborative filtering recommendation, in: Proceedings of the International Conference on Control, Instrumentation, and Automation, 2015, pp. 445–450.
[9] A. Salah, N. Rogovschi, M. Nadif, A dynamic collaborative filtering system via a weighted clustering approach, Neurocomputing 175 (2016) 206–215.
[10] T.F. George, S. Merugu, A scalable collaborative filtering framework based on co-clustering, (2005) 625–628.
[11] M. Khoshneshin, W.N. Street, Incremental collaborative filtering via evolutionary co-clustering, in: Proceedings of the Conference on Recommender Systems (RecSys), ACM, Barcelona, Spain, 2010, pp. 325–328.
[12] Y. Koren, R. Bell, C. Volinsky, Matrix factorization techniques for recommender systems, Computer 42 (8) (2009) 30–37.
[13] Y. Xu, R. Hao, W. Yin, Z. Su, Parallel matrix factorization for low-rank tensor completion, Inverse Probl. Imaging 9 (2) (2017) 601–624.
[14] R. Mazumder, T. Hastie, R. Tibshirani, Spectral regularization algorithms for learning large incomplete matrices, J. Mach. Learn. Res. 11 (11) (2010) 2287.
[15] T. Hastie, R. Mazumder, R. Zadeh, Matrix completion and low-rank SVD via fast alternating least squares, J. Mach. Learn. Res. 16 (1) (2015) 3367–3402.
[16] B.M. Marlin, R.S. Zemel, S. Roweis, M. Slaney, Collaborative filtering and the missing at random assumption, in: Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2007, pp. 267–275.
[17] Y.D. Kim, S. Choi, Bayesian binomial mixture model for collaborative prediction with non-random missing data, in: Proceedings of the Conference on Recommender Systems, ACM, 2014, pp. 201–208.
[18] B. Li, Q. Yang, X. Xue, Can movies and books collaborate? Cross-domain collaborative filtering for sparsity reduction, in: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Pasadena, California, USA, 2009, pp. 2052–2057.
[19] J. Wang, L. Ke, Feature subspace transfer for collaborative filtering, Neurocomputing 136 (1) (2014) 1–6.
[20] P. Moradi, S. Ahmadian, A reliability-based recommendation method to improve trust-aware recommender systems, Pergamon Press, Inc., 2015.
[21] A. Hernando, F. Ortega, Collaborative filtering based on significances, Inf. Sci. 185 (1) (2012) 1–17.
[22] K. Choi, Y. Suh, A new similarity function for selecting neighbors for each target item in collaborative filtering, Knowl. Based Syst. 37 (1) (2013) 146–153.
[23] W. Wang, G.Z. M, L. Jie, Collaborative filtering with entropy-driven user similarity in recommender systems, Int. J. Intell. Syst. 30 (8) (2015) 854–870.
[24] N. Polatidis, C.K. Georgiadis, A multi-level collaborative filtering method that improves recommendations, Expert Syst. Appl. 48 (2016) 100–110.
[25] M. Papagelis, D. Plexousakis, T. Kutsuras, Alleviating the sparsity problem of collaborative filtering using trust inferences, in: Proceedings of the Third International Conference on Trust Management (iTrust), Paris, France, 2005, pp. 224–239.
[26] S. Deng, L. Huang, G. Xu, X. Wu, Z. Wu, On deep learning for trust-aware recommendations in social networks, IEEE Trans. Neural Netw. Learn. Syst. 28 (5) (2016) 1164–1177.
[27] K.T. Senthilkumar, R. Ponnusamy, Diffusing multi-aspects of local and global social trust for personalizing trust enhanced recommender system, in: Proceedings of the International Conference on Advanced Computing and Communication Systems, 2016, pp. 1–8.
[28] D.H. Alahmadi, X.J. Zeng, Twitter-based recommender system to address cold-start: a genetic algorithm based trust modelling and probabilistic sentiment analysis, (2015) 1045–1052.
[29] R.S. Liu, T.C. Yang, Improving recommendation accuracy by considering electronic word-of-mouth and the effects of its propagation using collective matrix factorization, in: Proceedings of the IEEE DataCom, 2016.
[30] Q. Shambour, J. Lu, A trust-semantic fusion-based recommendation approach for e-business applications, Decis. Support Syst. 54 (1) (2012) 768–780.
[31] G. Pitsilis, L. Marshall, A model of trust derivation from evidence for use in recommendation systems, in: Proceedings of PREP, presented poster, 2004.
[32] G. Pitsilis, L.F. Marshall, Modeling Trust for Recommender Systems using Similarity Metrics, Springer, US, 2008.
[33] X.M. Wang, X.M. Zhang, J.X. Wu, Collaborative filtering recommendation algorithm based on one-jump trust model, J. Commun. 36 (6) (2015) 197–204.
[34] D. Jia, F. Zhang, A collaborative filtering recommendation algorithm based on double neighbor choosing strategy, J. Comput. Res. Dev. 50 (5) (2013).
[35] N. Lathia, S. Hailes, L. Capra, X. Amatriain, Temporal diversity in recommender systems, in: Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 2010, pp. 210–217.
[36] G. Zhao, M.L. Lee, W. Hsu, W. Chen, Increasing temporal diversity with purchase intervals, in: Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2012, pp. 165–174.
[37] J.J. Li, L.M. Sun, W. Jiao, SRL recommendation system model improving session recommendation diversity, J. Northeast. Univ. 34 (5) (2013) 650–653+662.
[38] Y. Xiao, P. Ai, C.H. Hsu, H. Wang, X. Jiao, Time-ordered collaborative filtering for news recommendation, China Commun. 12 (12) (2015) 53–62.
[39] Y. Ding, X. Li, Time weight collaborative filtering, in: Proceedings of the ACM International Conference on Information and Knowledge Management (CIKM), Bremen, Germany, 2005, pp. 485–492.
[40] A. Jøsang, R. Ismail, The beta reputation system, (2002).
[41] A. Whitby, A. Jøsang, J. Indulska, Filtering out unfair ratings in Bayesian reputation systems, in: Proceedings of the International Joint Conference on Autonomous Agent Systems, 2005, pp. 106–117.
[42] B. Yu, M.P. Singh, An evidential model of distributed reputation management, in: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, 2002, pp. 294–301.

Ziyang Zhang received the B.S. degree from Tianjin University, Tianjin, China, in 2015. He is currently an M.S. student at the School of Electrical and Information Engineering, Tianjin University. His research interests include developing trust models, recommendation algorithms and social media.

Yuhong Liu received the B.S. and M.S. degrees from Beijing University of Posts and Telecommunications, Beijing, China, in 2004 and 2007, respectively, and the Ph.D. degree from the University of Rhode Island in 2012. She is an assistant professor at the Department of Computer Engineering, Santa Clara University. She is the recipient of the 2013 University of Rhode Island Graduate School Excellence in Doctoral Research Award. With expertise in trustworthy computing and cyber security, her research interests include developing trust models and applying them to emerging applications, such as online social media, cyber-physical systems and cloud computing. She is the recipient of the best paper awards at the IEEE International Conference on Social Computing 2010 (acceptance rate = 13%) and the 9th International Conference on Ubi-Media Computing (UMEDIA 2016).

Zhigang Jin received his Ph.D. degree in EE from Tianjin University, Tianjin, China, in 1999. He was a visiting professor at Ottawa University, Ottawa, Canada, in 2002. He is currently a professor at Tianjin University, Tianjin, China. His research interests focus on underwater sensor networks, the management and security of computer networks, and social networks.

Rui Zhang received his M.S. degree in Electronic and Information Engineering from the College of Electrical and Information Engineering of Tianjin University. He is currently a Ph.D. student at the School of Electrical and Information Engineering, Tianjin University. He was a lecturer in the Department of Software and Communication at Tianjin Sino-German University of Applied Sciences, Tianjin. His research interests include computer vision, deep learning and social media, as well as computational geometry and artificial intelligence.