
IEEE CONECCT2014 1569825549

A Multi-attributed Hybrid Re-ranking Technique for Diversified Recommendations

Chetan B. Patil
Computer Department
R. C. Patel Institute of Technology
Shirpur, India
chetan.rcpit@gmail.com

Rajnikant B. Wagh
Computer Department
R. C. Patel Institute of Technology
Shirpur, India
raLwagh@rediffmail.com

Abstract- Recommender systems, which are simulations of web personalization, are nowadays widely integrated in various domains for improving the quality of fetched information. The most critical job for any recommender system is to provide more and more utilizable items to its users. Today many knowledge discovery techniques have been invented to dig into huge chunks of user-item information and accurately read users' minds when selecting a list of items. Recent studies have shown the importance of diversified recommendations for increasing item utilization. In short, both accurate and diversified recommendations are key quality factors of user satisfaction. Re-ranking techniques are among the solutions proven best for balancing accuracy and diversity. After using collaborative filtering for standard ranking, we propose the use of various content-based attributes for re-ranking recommendations to introduce aggregate diversity selectively and flexibly. We experimented on the MovieLens dataset and finally propose a multi-attributed hybrid movie's content-based re-ranking technique (MCBRT) for recommendation.

Keywords- web personalization, accuracy, diversity, re-ranking

I. INTRODUCTION

Recommender systems are special forms of information filtering systems. Search engines also perform information filtering, but they require the user to submit search keywords. According to the similarity between the search keywords and the available information, results are retrieved for the user. But if we could eliminate this job of submitting keywords and then waiting for results, the user would be more satisfied. Recommender engines need to find out their users' interest in various information items like movies, books, jobs etc. The user can be a single individual or a group of users. Recommender systems observe their users to collect information about the users' likes and dislikes. The process of creating this basic information, indicating a user's preferences, characteristics and activities, is called user profiling. These user profiles and the content (viz. movies here) database are then taken as input for prediction techniques. The output of the prediction techniques is simply the predicted ratings of these users for their respective unrated movies on some predefined rating scale. For a user, the process of sorting all the movies in descending order of their predicted ratings is called standard ranking. This standard ranking yields a list with the most relevant movies for that particular user at the top. Out of this list only the top-N movies will actually be recommended to the user. Figure 1 shows the general steps followed by conventional recommender systems.

[Figure 1: dataset as input (i/p), prediction and ranking steps, recommendations as output (o/p).]
Figure 1. General overview of conventional recommender systems

For an online experiment, the information about users' interest in movies can be collected explicitly or implicitly. In the explicit way, some sample movies are presented to the users and they are asked to rate those movies on some rating scale (viz. 1 to 5). Users may also be asked to answer yes-or-no questions about their interest in the presented items, or feedback text may be collected from them. In the implicit way, information such as users' browsing history and daily transactions is observed, for which cookies and web log files can be used.

The offline experiment that we are doing, in contrast, uses information which was already collected during some other online experiments and made available for performing the next steps of the recommendation experiment. The collection we have used is the MovieLens dataset, which contains separate information files for all the users and movies and their respective user-movie rating matrix.
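The standard ranking step described above is simply a descending sort of a user's predicted ratings followed by truncation to the top-N. A minimal sketch (the function name and the rating values are illustrative, not from the paper):

```python
def standard_ranking(predictions, n):
    """Sort a user's predicted ratings in descending order and keep the top-N.

    predictions: dict mapping movie id -> predicted rating for one user.
    """
    ranked = sorted(predictions.items(), key=lambda kv: kv[1], reverse=True)
    return [movie_id for movie_id, _ in ranked[:n]]

# Illustrative predicted ratings for one user.
predictions = {23: 4.8, 107: 4.78, 56: 4.62, 78: 4.10, 301: 4.08, 211: 3.74}
print(standard_ranking(predictions, 5))  # -> [23, 107, 56, 78, 301]
```

Only these top-N ids would then actually be shown to the user.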
Based on the prediction approach used, recommender systems are classified into three categories: content-based, collaborative and hybrid. In the first, items having properties similar to those of items the user liked in the past are predicted to be the next most relevant items for that user. In the second, items matching the common tastes of a group of users with similar movie interests are predicted to be the next most relevant. In the third, a combination of both content-based and collaborative filtering is used to give better results. Recommender systems can also be classified according to their algorithmic approach as either memory-based, where past activities of users are directly used for prediction, or model-based, where past user activities are taken as a training set and the learned model is then applied to get the most relevant items.

Neighborhood-based collaborative filtering is the widely used rating prediction technique in which similarities between users or items are calculated and the most similar users or items are referred to as neighbors of each other. When similarities between users are calculated it is called user-based, and when similarities between items are calculated it is called item-based.

Section II is about the facts that motivated us to work on our proposed technique, multi-attributed hybrid MCBRT. Section III is an in-depth literature survey of the efforts in diversity improvement over the last decade. Section IV describes the methodology we have used. Section V presents the experimental setup, showing the input dataset and parameter values. Section VI shows the results we obtained for accuracy and diversity in table and graph format. Section VII concludes the proposed work and highlights the future directions set by our work.

II. MOTIVATION

A. Diversified Recommendations
Quality simply relates to user satisfaction. Because of the uncertainty of users' mental states, the things that satisfy a user may change over a period of time. For example, a user found to be interested in action or horror movies again and again may become interested in romance or comedy movies at other times. This means that sorting out a user's interest once cannot be presumed a permanent solution for his or her lifetime; some change is always expected. On most traditional recommender systems a user u faces the problem of being recommended the same list of items {i1, i2, ..., ik} again and again. This problem arises either because these items are used by various other users of the same system with high frequency, or because the user himself or herself has liked movies of similar taste in the past. Hence some diversification in recommendation becomes essential to accommodate the change in human mentality.

Diversity can be achieved by trying to uncover and recommend highly personalized items for each user; such items often have less data, are inherently more difficult to predict and may thus lead to a decrease in recommendation accuracy. Diversity is said to be individual diversity when diversified recommendations are observed for an individual during that user's access to the system. Diversity is said to be aggregate when the recommendations of all users of the system, taken together, are found to be diversified. Thus aggregate diversity is an indicator of better utilization of resources across all users.

B. Re-ranking Technique
Re-ranking is the process of rearranging the sequence of movies most relevant to the user according to specified sorting criteria, in such a way that user satisfaction improves by avoiding repetitive items. Standard ranking focuses on improving the accuracy of the system but only barely satisfies the user after a period of time. Including some diversified or unexpected movies in the top-N recommendation list may attract the user's attention more than the usual most highly relevant but frequently seen movies. To put such unexpected movies into these top-N lists we need to change the criteria for sorting the top-N list; that is, we need to re-rank it. Though we may turn away from the most accurate recommendations, we can still satisfy users by maintaining diversity in recommendations in an appropriate proportion. Thus we need sorting criteria that eliminate the usual common movies and give a chance to unseen or unexpected movies.

C. Hybrid Recommender System
Combining content-based and collaborative recommendations gives the advantages of both approaches and may conceal the limitations of each. Such hybrid systems are being extensively researched for quality improvements in recommender systems. Instead of using a hybrid just for making accurate predictions, we chose to apply the collaborative approach in the prediction step and the content-based approach in the re-ranking step. This motivated us to build a combination that can yield diversity improvements while also balancing accuracy.

III. RELATED WORK

A. Individual Diversity
B. Smyth and P. McClave [1] proposed some ad-hoc strategies to rank items for inclusion in a recommendation list. According to the authors, maximizing the similarity between the target query and the cases to be retrieved is the general strategy in many domains, but it does not work in some domains.

K. Bradley and B. Smyth [2] designed three new algorithms for improving individual diversity. According to the authors, the diversity problem has always been a limitation of content-based recommendation techniques, and the proposed algorithms form a benchmark on this concern. Of these, the Bounded Greedy Selection algorithm greatly reduced the retrieval cost while causing minimal loss of similarity between the target query and the recommendations.

C. Ziegler et al. [3] developed topic diversification, a new heuristic approach to optimize the balance between accuracy and diversity. It keeps accuracy at a certain level while increasing diversity, specifically for recommendation lists obtained from an item-based collaborative filtering algorithm. The authors also propose intra-list similarity, a new metric well suited to capturing the diversity produced by the proposed algorithm. According to the authors, the effective use of content descriptions along with relevance weights of products has a strong impact when ranking items, and that is where the proposed method differs from existing ones. Their experimental results showed that users preferred the altered, diversified list, even with some loss of accuracy, over the accurate unaltered list.

D. Fleder and K. Hosanagar [4] showed how basic design choices affect the outcome, so that managers can choose recommender designs that are more consistent with their sales goals and consumers' preferences. They found that

recommenders can increase sales, and recommenders that discount popularity appropriately may increase sales more.

M. Zhang and N. Hurley [5] proposed an approach that seeks the best possible subset of items to recommend among all possible subsets. The resulting list's similarity to the target query and the diversity within the list are treated together as a binary optimization problem. A new evaluation metric, item novelty, is proposed: item novelty measures how different an item is from the current item list and depends on the other items already in the user profile. Item novelty brings a certain level of difficulty for recommendations and hence can be used to generate useful test cases. By adjusting the novelty value, the tolerance for accuracy loss is balanced. The authors point out that the probability of recommending novel items is low whenever similarity is the basic selection criterion.

P. Castells et al. [6] noted that though various novelty and diversity metrics are popular in the recommender systems literature, they do not address two important characteristics: item ranking and relevance. The authors concentrated on two ground concepts, namely item similarity and user-item interaction. User-item interaction is modeled on three conditions: choice, discovery and relevance. They tried to cover and generalize the older metrics and recast them in a better form.

S. Vargas [7] showed that intent-oriented information retrieval diversity can be applied to improve recommendation diversity. They formalized the diversity and novelty metrics, and their results showed that the resulting diversification techniques can give the best results. In addition, the proposed metrics are well aware of the ranking and relevance issues pointed out in [6].

A. Said et al. [8] proposed an approach that is completely orthogonal to the standard k-nearest-neighbor (kNN) algorithms followed in traditional recommender systems. The variant they proposed is the k-furthest-neighbors (kFN) algorithm, which uses an augmented inverted similarity measure; the least liked items of the neighborhoods are recommended. Their results on standard datasets showed that diversity is improved with insignificant loss in recommendation accuracy.

B. Aggregate Diversity
T. Zhou et al. [9] developed an approach that combines an accuracy-focused algorithm and a diversity-focused algorithm. According to the authors, such combinations can yield the best results in balancing accuracy and diversity, without relying on any semantic or context-specific information. They used an averaging process in their algorithm that supports diversity enhancement.

G. Adomavicius and Y. Kwon [10] designed several ranking-based techniques that can improve the aggregate diversity of recommendation lists across all users. They conducted experiments on the MovieLens, Netflix and Yahoo! Movies datasets, each operated on by different rating prediction techniques in combination with the seven proposed ranking-based techniques. For the item-popularity ranking approach they formed a parameterized function through which the level of accuracy and diversity to be maintained is controlled. They proposed precision-in-top-N, a metric to measure the accuracy of the recommendation list, and diversity-in-top-N, a metric to measure the aggregate diversity of the recommendation lists. Their analysis shows that the popularity of items can be a good tool for enhancing diversity and hence leads to user satisfaction.

C. Other Related Work
K. Alodhaibi et al. [11] built a recommender algorithm that works for compound products and services instead of just individual items. For this they implemented the CARD framework, which separates the utility space and the diversity space to avoid the tradeoff between similarity and diversity. The algorithm they designed is computationally efficient and outperforms others in terms of diversity.

M. Ge et al. [12] proposed an approach in which the position of items in the recommendation list is given the most importance. The authors note that even when diversity, whether individual or aggregate, is achieved, the user will not necessarily perceive its advantages: a well-diversified list may display the similar items at the top and the diverse items at the bottom, which is not always desirable. This is especially important on small-screen devices, where the user is interested in only the top few recommendations.

B. Wang et al. [13] observed that users' interests are always full of uncertainty, which cannot easily be addressed by a top-N list of recommendations. Instead, the authors proposed a cloud model, which is powerful at handling knowledge uncertainties.

R. B. Wagh and J. B. Patil [14] discussed web personalization techniques and the use of web mining for web personalization. They suggested novel clustering methods for web page recommendations.

IV. METHODOLOGY

Figure 2 shows the architecture of the system we implemented. We used a collaborative method for predicting unknown ratings and the multi-attributed hybrid content-based approach (MCBRT) for re-ranking the most relevant movies found through standard ranking.

[Figure 2: the MovieLens dataset (i/p) feeds the prediction technique; the top-k genres and Genre_categories and the Release_year categories are then found; re-ranking produces the top-K recommendations (o/p).]
Figure 2. System Architecture

A. Use of Neighbourhood-Based Collaborative Filtering (CF) as a Prediction Technique

We have used the user-based variant of neighborhood-based CF to predict the unknown ratings of different users for different movies. We calculated similarities between users using the cosine similarity metric [10]. The similarity of user u with user u' is given as follows:

$$ sim(u, u') = \frac{\sum_{i \in I(u,u')} R(u,i)\, R(u',i)}{\sqrt{\sum_{i \in I(u,u')} R(u,i)^2}\; \sqrt{\sum_{i \in I(u,u')} R(u',i)^2}} $$

where I(u, u') is the set of all movies rated by both u and u', R(u, i) is the known rating of user u for movie i, and R*(u, i) is the predicted rating of user u for movie i.

Based on the similarity values, the first K most similar users are taken as the K-neighbors of user u. Movies rated highly by these K-neighbors are expected to get high prediction values for user u as well, because their taste in movies is the same as user u's. The final prediction value is calculated as follows:

$$ R^{*}(u, i) = \bar{R}(u) + \frac{\sum_{u' \in N(u)} sim(u, u')\,\big( R(u', i) - \bar{R}(u') \big)}{\sum_{u' \in N(u)} \left| sim(u, u') \right|} $$

where $\bar{R}(u)$ is the average rating of user u and N(u) is the set of K-neighbors of user u.

B. Proposed Re-ranking Technique: Multi-attributed Hybrid MCBRT (Movie's Content-Based Re-ranking Technique)

MCBRT alone considers only a single attribute, whereas multi-attributed hybrid MCBRT considers more than one content attribute to achieve higher aggregate diversity.

Movie id:              23    107   56    78    301   211   590
Predicted rating:      4.8   4.78  4.62  4.10  4.08  3.74  3.44
Genre category:        H'    H'    H'    O'    O'    H'    O'
Release_year category: O"    H"    O"    H"    O"    H"    O"
Final category:        H     H     H     H     O     H     O

Figure 3. Logical OR to manipulate final categories

We listed the top-k favorite genres for each user using the input tables in the dataset. Then we categorized all movies obtained after standard ranking into two types of categories. The first type is the Genre_category, which classifies a movie based on its genre attribute: a movie is accepted as a Home category (H') movie if its genre is one of the top-k liked genres of user u, and otherwise as an Other category (O') movie. The second type is the Release_year category, which is determined from the release year attribute of the movie and the age attribute of the user: depending on the age of the movie and the age of the user, the movie is categorized as a Home category (H") or Other category (O") movie. As shown in Figure 3, we then performed a logical OR operation on the Genre_category and Release_year category values to get the final category, Home (H) or Other (O). This process takes care of a movie being completely strange to that user from the perspective of both attributes. We further used those final category values for re-ranking, as shown in Figure 4, so that this complete strangeness of a movie does not decrease accuracy significantly.

Top-5 recommendation for user u

(a)
Movie id:         23    107   56    78    301   211   590   122   700   ...  89    1001  189   34
Predicted rating: 4.8   4.78  4.62  4.10  4.08  3.74  3.44  3.32  3.10  ...  2.8   2.75  2.45  2.1
Category:         H     H     H     O     O     H     O     O     O     ...  O     H     O     O

(b)  [Th = 2.4]
Movie id:         78    301   590   122   700   23    107   56    211   ...  89    1001  189   34
Predicted rating: 4.10  4.08  3.44  3.32  3.10  4.8   4.78  4.62  3.74  ...  2.8   2.75  2.45  2.1
Category:         O     O     O     O     O     H     H     H     H     ...  O     H     O     O

(c)  [Tr = 3.4, Th = 2.4]
Movie id:         78    301   590   23    107   56    211   122   700   ...  89    1001  189   34
Predicted rating: 4.10  4.08  3.44  4.8   4.78  4.62  3.74  3.32  3.10  ...  2.8   2.75  2.45  2.1
Category:         O     O     O     H     H     H     H     O     O     ...  O     H     O     O

(a) Recommendations according to the standard ranking approach
(b) Recommendations according to the proposed technique
(c) Recommendations according to the proposed technique using the ranking threshold

Figure 4. General idea of the proposed re-ranking technique multi-attributed hybrid MCBRT with respect to standard ranking
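The cosine similarity and mean-centered prediction formulas of Section IV-A can be sketched in Python as follows. This is a toy illustration: the rating dictionary, function names and neighborhood size are ours, not the paper's setup, and we assume neighbors' ratings are centered on each neighbor's own average.

```python
import math

def cosine_sim(ratings, u, v):
    """Cosine similarity between users u and v over their co-rated movies."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    num = sum(ratings[u][i] * ratings[v][i] for i in common)
    den = (math.sqrt(sum(ratings[u][i] ** 2 for i in common))
           * math.sqrt(sum(ratings[v][i] ** 2 for i in common)))
    return num / den if den else 0.0

def predict(ratings, u, movie, k):
    """Predict R*(u, movie): u's average rating plus the similarity-weighted,
    mean-centered ratings of the K most similar users who rated the movie."""
    avg = {w: sum(r.values()) / len(r) for w, r in ratings.items()}
    neighbors = sorted(((cosine_sim(ratings, u, v), v) for v in ratings
                        if v != u and movie in ratings[v]), reverse=True)[:k]
    norm = sum(abs(s) for s, _ in neighbors)
    if norm == 0:
        return avg[u]
    return avg[u] + sum(s * (ratings[v][movie] - avg[v])
                        for s, v in neighbors) / norm

# Toy rating matrix: user -> {movie id: rating}.
ratings = {'a': {1: 4, 2: 2}, 'b': {1: 4, 2: 2, 3: 5}, 'c': {1: 1, 2: 5, 3: 1}}
print(round(predict(ratings, 'a', 3, k=2), 2))  # -> 3.32
```

User 'b' rates exactly like 'a' on the co-rated movies (similarity 1.0), so 'b' dominates the prediction and pulls it above 'a''s average of 3.0.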

As shown in Figure 4 (a), the movies for user u are sorted in descending order of the predicted rating values obtained in step 1. The top-5 movies to be recommended all have the highest predicted ratings, but some of them are from the Home category itself. We can take a chance here and replace Home category movies with the next highly rated Other category movies in sequence, as shown in Figure 4 (b): movies 23, 107 and 56 are replaced by the next Other movies 590, 122 and 700 respectively. But it may be too risky for the accuracy level to include movies like 122 and 700, which have comparatively low prediction values, especially where accuracy is vital. For this reason, depending on the situation, we should have the flexibility to decide the required accuracy and diversity levels. This is achieved using the ranking threshold (Tr), as shown in Figure 4 (c), where Tr is set to 3.4: movies 122 and 700 are now out of the top-5 competition and the next Other category movies in sequence are tried instead. As no further movie from the Other category satisfies the ranking threshold, the remaining highly ranked Home category movies fill the empty places of the top-5 window sequentially.

V. DATASET AND EXPERIMENTAL SETUP

a) Input Dataset
We experimented on the MovieLens dataset (100k), which is openly made available by GroupLens Research for research purposes. The basic information of this dataset is shown in Table I. A number of disjoint training and testing sets, split 80%-20%, are already available in the dataset itself. We have verified our results on all of these disjoint sets.

TABLE I
BASIC INFORMATION OF INPUT DATASET

Number of users:    943
Number of movies:   1682
Number of ratings:  100000

b) Parameters
During the whole experiment we used the parameters shown in Table II, with different values, to crosscheck the versatility of our method's performance.

TABLE II
SETTING PARAMETER VALUES

Parameter                 Values
Number of neighbors       20 to 200
Top-N                     1 to 10
Top-M                     1 to 5
Rating threshold (Th)     2.5
Ranking threshold (Tr)    2.5 to 5.0
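The categorization of Figure 3 and the thresholded window-filling of Figure 4 (c) can be sketched together as follows. This is our reading of the procedure: the movie data comes from Figure 4, while the promotion rule (Other-category movies at or above Tr enter the top-N window first, then the highest-rated remaining movies fill the leftover slots) is an illustrative reconstruction.

```python
def final_category(genre_cat, year_cat):
    """Figure 3: logical OR, Home if either per-attribute category is Home."""
    return 'H' if 'H' in (genre_cat, year_cat) else 'O'

def rerank(movies, n, tr):
    """movies: (movie id, predicted rating, final category) tuples already in
    standard (descending rating) order. Other-category movies with rating >= Tr
    are promoted into the top-N window; remaining slots are filled with the
    highest-rated leftover movies, which keep their standard order."""
    promoted = [m for m in movies if m[2] == 'O' and m[1] >= tr][:n]
    rest = [m for m in movies if m not in promoted]
    fill = n - len(promoted)
    return promoted + rest[:fill] + rest[fill:]

print(final_category('O', 'H'))  # -> 'H'

# Standard-ranked list from Figure 4 (a), truncated for brevity.
movies = [(23, 4.8, 'H'), (107, 4.78, 'H'), (56, 4.62, 'H'), (78, 4.10, 'O'),
          (301, 4.08, 'O'), (211, 3.74, 'H'), (590, 3.44, 'O'),
          (122, 3.32, 'O'), (700, 3.10, 'O')]
top5 = [m[0] for m in rerank(movies, n=5, tr=3.4)[:5]]
print(top5)  # -> [78, 301, 590, 23, 107], matching Figure 4 (c)
```

With Tr = 3.4, movies 122 and 700 fail the threshold, so the window is completed by the highest-rated Home movies 23 and 107, exactly as in Figure 4 (c).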

TABLE III
ACCURACY VS. DIVERSITY PERFORMANCE OF MULTI-ATTRIBUTED HYBRID MCBRT AND OTHER RANKING TECHNIQUES FOR 50 NEIGHBORS, TOP-N=5, TOP-K=3 AND TH=2.5

        Multi-attributed    MCBRT           Item            Average         Reverse
        MCBRT                               Popularity      Item Rating     Prediction
Tr      D      A            D      A        D      A        D      A        D      A
2.5     620    0.835        600    0.827    580    0.788    609    0.1lt    500    0.814
3.0     590    0.842        588    0.830    518    0.798    609    0.752    430    0.813
3.5     580    0.855        588    0.832    560    0.813    530    0.789    430    0.827
4.0     516    0.851        510    0.845    550    0.836    510    0.824    344    0.84
4.5     530    0.856        500    0.855    419    0.851    420    0.855    330    0.865
5.0     425    0.868        390    0.861    400    0.855    380    0.854    320    0.851

Standard ranking: Diversity (D) = 320 & Accuracy = 0.876
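The two evaluation metrics from [10] used in Table III can be sketched as follows. This is a simplified reading: precision-in-top-N counts recommended movies whose predicted and actual (held-out) ratings both reach the rating threshold Th; the function names and data are illustrative.

```python
def diversity_in_top_n(recommendations):
    """Aggregate diversity: number of distinct movies recommended to any user.

    recommendations: dict mapping user -> list of top-N recommended movie ids."""
    return len({m for recs in recommendations.values() for m in recs})

def precision_in_top_n(recommendations, predicted, actual, th=2.5):
    """Fraction of recommended movies that are 'truly highly ranked':
    predicted rating >= Th and known (held-out) rating >= Th."""
    hits = total = 0
    for user, recs in recommendations.items():
        for m in recs:
            total += 1
            if predicted[user].get(m, 0.0) >= th and actual[user].get(m, 0.0) >= th:
                hits += 1
    return hits / total if total else 0.0

# Tiny illustrative example with two users.
recs = {'u1': [1, 2], 'u2': [2, 3]}
predicted = {'u1': {1: 4.0, 2: 3.0}, 'u2': {2: 4.0, 3: 2.0}}
actual = {'u1': {1: 5.0, 2: 1.0}, 'u2': {2: 3.0, 3: 5.0}}
print(diversity_in_top_n(recs))                          # -> 3
print(precision_in_top_n(recs, predicted, actual, 2.5))  # -> 0.5
```

In the example, movie 2 recommended to u1 fails on the actual rating and movie 3 recommended to u2 fails on the predicted rating, so two of the four recommendations count as hits.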

[Figure 5: plot of diversity (y-axis, roughly 300 to 700) against accuracy (x-axis, roughly 0.70 to 0.90), with one curve each for MCBRT, multi-attributed MCBRT, Average and Item popularity re-ranking.]

Figure 5. Graph for accuracy vs. diversity performance of all re-ranking techniques for 50 neighbors, top-N=5, top-k=3 and Th=2.5

VI. RESULTS AND PERFORMANCE ANALYSIS

Table III and Figure 5 show the accuracy and diversity values of multi-attributed hybrid MCBRT, MCBRT alone, and other re-ranking techniques: Average re-ranking [10] and Item-popularity re-ranking [10]. We used the same metrics proposed by G. Adomavicius [10] to measure accuracy and diversity. Precision-in-top-N is measured as the ratio of "truly highly ranked" movies among the total number of top-N most relevant movies recommended across all users. Here "truly highly ranked" means the movie was predicted to get a rating at or above the rating threshold (i.e. R*(u, i) >= Th) and in fact really achieved a rating at or above the rating threshold (i.e. R(u, i) >= Th). Diversity-in-top-N is the total number of distinct movies recommended across all users; this gives us the aggregate diversity. The base accuracy, i.e. the accuracy of the standard ranking method, is 0.876 (87.6%), whereas the accuracy of all other ranking techniques is less than this, meaning we face some precision loss in all techniques. The amount of precision loss is inversely proportional to the amount of diversity gain. The standard diversity obtained is 320, and all re-ranking techniques gain higher diversity than the standard one. We can simultaneously observe the effect of the parameter Tr. When Tr is at its minimum (Tr = Th = 2.5), high diversity is gained at the cost of high precision loss. As Tr increases, the diversity gain goes down and the accuracy value increases. This shows that the system is flexible to the required accuracy and diversity levels.

Multi-attributed hybrid MCBRT performs better than all other methods, attaining the maximum diversity gain with acceptable precision loss at all levels of the ranking threshold.

VII. CONCLUSION AND FUTURE SCOPE

We worked on aggregate diversity improvement with a collaborative approach for rating prediction and a content-based approach for re-ranking. This hybrid approach, when applied and tested on the MovieLens dataset, gave significant diversity improvement at the cost of negligible accuracy loss. This was achieved by using multiple attributes: movie genre, release year and user age. A larger number of attributes will increase the diversity but also increase the complexity. In future, researchers should think about better hybrids of collaborative and content filtering, not only for prediction but also for re-ranking purposes.

VIII. REFERENCES

[1] B. Smyth and P. McClave, "Similarity vs. diversity," Proceedings of the 4th International Conference on Case-Based Reasoning, 2001, pp. 348-361.
[2] K. Bradley and B. Smyth, "Improving recommendation diversity," Proceedings of the 12th Irish Conference on Artificial Intelligence and Cognitive Science, 2001.
[3] C.-N. Ziegler, S. McNee, J. Konstan, and G. Lausen, "Improving recommendation lists through topic diversification," Proceedings of the 14th International WWW Conference, 2005, pp. 22-32.
[4] D. Fleder and K. Hosanagar, "Blockbuster culture's next rise or fall: The impact of recommender systems on sales diversity," Proceedings of the 8th ACM Conference on Electronic Commerce, 2007, pp. 192-199.
[5] M. Zhang and N. Hurley, "Avoiding monotony: Improving the diversity of recommendation lists," 2008.
[6] P. Castells, S. Vargas, and J. Wang, "Novelty and diversity metrics for recommender systems: Choice, discovery and relevance," Proceedings of the International Workshop on Diversity in Document Retrieval (DDR), 2011, pp. 29-37.
[7] S. Vargas, "New approaches to diversity and novelty in recommender systems," 4th BCS IRSG Symposium on Future Directions in Information Access, 2012.
[8] A. Said, B. Kille, B. Jain, and S. Albayrak, "Increasing diversity through furthest neighbor-based recommendation," 2012.
[9] T. Zhou, Z. Kuscsik, J.-G. Liu, M. Medo, J. Wakeling, and Y.-C. Zhang, "Solving the apparent diversity-accuracy dilemma of recommender systems," Proceedings of the National Academy of Sciences of the United States of America, 2010, pp. 4511-4515.
[10] G. Adomavicius and Y. Kwon, "Improving aggregate recommendation diversity using ranking-based techniques," IEEE Transactions on Knowledge and Data Engineering, 2012.
[11] K. Alodhaibi, A. Brodsky, and G. Mihaila, "A randomized algorithm for maximizing the diversity of recommendations," Proceedings of the 44th Hawaii International Conference on System Sciences, 2011.
[12] M. Ge, F. Gedikli, and D. Jannach, "Placing high-diversity items in top-n recommendation lists," Proceedings of the International Joint Conference on Artificial Intelligence, 2011.
[13] B. Wang, Z. Tao, and J. Hu, "Improving the diversity of user-based top-n recommendation by cloud model," 2010.
[14] R. B. Wagh and J. B. Patil, "Web personalization and recommender systems: An overview," 18th International Conference on Management of Data, 2012, pp. 114-115.
