Recommendation
Tokyo Web Mining Meetup
1 / 36
About myself
加藤公一 Kimikazu Kato
Twitter: @hamukazu
LinkedIn: http://linkedin.com/in/kimikazukato
2 / 36
About our company
Silver Egg Technology
Established: 1998
3 / 36
Table of Contents
Introduction
Types of recommendation
Evaluation metrics
Algorithms
Conclusion
4 / 36
Caution
This presentation includes:
5 / 36
Recommendation System
Recommender systems or recommendation systems (sometimes
replacing "system" with a synonym such as platform or engine) are a
subclass of information filtering system that seek to predict the
'rating' or 'preference' that a user would give to an item. — Wikipedia
Content-based methods
Method using demographic data
Hybrid
6 / 36
Rating Prediction Problem
user\movie W X Y Z
A 5 4 1 4
B 4
C 2 3
D 1 4 ?
7 / 36
Item Prediction Problem
user\item W X Y Z
A 1 1 1 1
B 1
C 1
D 1 ? 1 ?
8 / 36
Input/Output of the systems
Rating Prediction
Input: set of ratings for user/item pairs
Output: map from user/item pair to predicted rating
Item Prediction
Input: set of user/item pairs as shopping data, integer k
Output: for each user, the top k items he/she is most likely to buy
9 / 36
Evaluation Metrics for Recommendation
Systems
Rating prediction
Root Mean Squared Error (RMSE)
The square root of the mean of the squared errors
Item prediction
Precision
(# of Recommended and Purchased)/(# of Recommended)
Recall
(# of Recommended and Purchased)/(# of Purchased)
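These three metrics can be computed directly; a minimal sketch in Python, where the rating and purchase data are hypothetical examples, not from any real system:

```python
import math

def rmse(predicted, actual):
    """Square root of the mean squared error over the hidden ratings."""
    errors = [(predicted[k] - actual[k]) ** 2 for k in actual]
    return math.sqrt(sum(errors) / len(errors))

def precision_recall(recommended, purchased):
    """Precision and recall for one user's item prediction."""
    hits = len(set(recommended) & set(purchased))
    return hits / len(recommended), hits / len(purchased)

# Hypothetical data: keys are (user, item) pairs.
actual = {("D", "W"): 4, ("B", "X"): 3}
predicted = {("D", "W"): 3.1, ("B", "X"): 2.5}
print(rmse(predicted, actual))

# 1 hit ("X") out of 3 recommended and out of 2 purchased items.
p, r = precision_recall(["W", "X", "Y"], ["X", "Z"])
print(p, r)
```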
10 / 36
RMSE of Rating Prediction
Some user/item pairs are randomly chosen to be hidden.
user\movie W X Y Z
A 5 4 1 4
B 4
C 2 3
D 1 4 ?
If a rating is predicted as 3.1 but the actual value is 4, the squared error is $|3.1 - 4|^2 = 0.81$.
Take the mean of the squared errors over all the hidden entries and then take the square root:
$$\sqrt{\frac{1}{|\mathrm{hidden}|} \sum_{(u,i) \in \mathrm{hidden}} (\mathrm{predicted}_{ui} - \mathrm{actual}_{ui})^2}$$
11 / 36
Precision/Recall of Item Prediction
12 / 36
ROC and AUC
# of recom.   1  2  3  4  5  6  7  8  9  10
# of whites   1  1  1  2  2  3  4  5  5   6
# of blacks   0  1  2  2  3  3  3  3  4   4

Divide the "whites" and "blacks" rows by the total numbers of whites and blacks
respectively, and plot the values in the xy-plane.
13 / 36
This curve is called "ROC curve." The area under this curve is called "AUC."
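The curve and its area can be computed directly from the table above; a sketch assuming the "whites" row counts relevant (purchased) items among the top recommendations and the "blacks" row counts the rest:

```python
# Cumulative counts from the table: whites = relevant, blacks = irrelevant.
whites = [1, 1, 1, 2, 2, 3, 4, 5, 5, 6]
blacks = [0, 1, 2, 2, 3, 3, 3, 3, 4, 4]

tpr = [w / whites[-1] for w in whites]   # y-axis: fraction of whites found
fpr = [b / blacks[-1] for b in blacks]   # x-axis: fraction of blacks found

# Trapezoidal area under the ROC curve, starting from (0, 0).
xs = [0.0] + fpr
ys = [0.0] + tpr
auc = sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2
          for i in range(len(xs) - 1))
print(auc)
```

A perfect ranking would put all whites first (AUC = 1); a random one gives AUC around 0.5.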
14 / 36
Netflix Prize
The Netflix Prize was an open competition for the best collaborative
filtering algorithm to predict user ratings for films, based on previous
ratings without any other information about the users or films, i.e.
without the users or the films being identified except by numbers
assigned for the contest. — Wikipedia
Closed in 2009.
15 / 36
Outline of Winner's Algorithm
Refer to the blog post by E. Chen:
http://blog.echen.me/2011/10/24/winning-the-netflix-prize-a-summary/
Neighborhood Method
Matrix Factorization
Restricted Boltzmann Machines
Regression
Regularization
Ensemble Methods
16 / 36
Notations
Number of users: n
Set of users: U = {1, 2, … , n}
Number of items (movies): m
Set of items (movies): I = {1, 2, … , m}
Input matrix: A (n × m matrix)
17 / 36
Matrix Factorization
Based on the assumption that each item is described by a small number of
latent factors
Each rating is expressed as a linear combination of the latent factors
Achieved good performance in the Netflix Prize
$$A \approx X^T Y$$
18 / 36
$$p(A \mid X, Y, \sigma) = \prod_{A_{ui} \neq 0} \mathcal{N}(A_{ui} \mid X_u^T Y_i, \sigma)$$
19 / 36
According to Bayes' theorem,
$$p(X, Y \mid A, \sigma, \sigma_X, \sigma_Y) \propto p(A \mid X, Y, \sigma)\, p(X \mid \sigma_X)\, p(Y \mid \sigma_Y)$$
Thus,
$$-\log p(X, Y \mid A, \sigma, \sigma_X, \sigma_Y) = \sum_{A_{ui} \neq 0} (A_{ui} - X_u^T Y_i)^2 + \lambda_X \|X\|_{\mathrm{Fro}}^2 + \lambda_Y \|Y\|_{\mathrm{Fro}}^2 + \mathrm{const.}$$
How can this be computed? Use MCMC. See [Salakhutdinov et al., 2008].
Once $X$ and $Y$ are determined, set $\tilde{A} := X^T Y$; the prediction for $A_{ui}$ is given by $\tilde{A}_{ui}$.
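The MAP version of this optimization (minimizing the regularized squared error directly, rather than sampling with MCMC) can be sketched with plain SGD. The toy matrix, rank, and hyperparameters below are illustrative assumptions; rows of `X` and `Y` play the role of the columns $X_u$, $Y_i$ in the slides' notation:

```python
import numpy as np

rng = np.random.default_rng(0)

def factorize(A, k=2, lam=0.1, alpha=0.01, epochs=200):
    """SGD over the observed (nonzero) entries of A, minimizing
    squared error plus Frobenius-norm regularization."""
    n, m = A.shape
    X = 0.1 * rng.standard_normal((n, k))
    Y = 0.1 * rng.standard_normal((m, k))
    observed = [(u, i) for u in range(n) for i in range(m) if A[u, i] != 0]
    for _ in range(epochs):
        for u, i in observed:
            e = A[u, i] - X[u] @ Y[i]          # prediction error
            X[u] += alpha * (e * Y[i] - lam * X[u])
            Y[i] += alpha * (e * X[u] - lam * Y[i])
    return X, Y

# Toy rating matrix (0 = unobserved).
A = np.array([[5, 4, 1, 4],
              [0, 4, 0, 0],
              [2, 0, 3, 0],
              [1, 4, 0, 0]], dtype=float)
X, Y = factorize(A)
A_tilde = X @ Y.T   # the completed matrix; A_tilde[u, i] is the prediction
```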
20 / 36
Difference between Rating and Shopping
Rating
user\movie W X Y Z
A 5 4 1 4
B 4
C 2 3
D 1 4 ?

Shopping (Browsing)
user\item W X Y Z
A 1 1 1 1
B 1
C 1
D 1 ? 1 ?
Consequently, the algorithm effective for the rating matrix is not necessarily
effective for the shopping matrix.
21 / 36
Solutions
Adding a constraint to the optimization problem
Changing the objective function itself
22 / 36
Adding a Constraint
The problem has too many degrees of freedom
A desirable property is that many elements of the product should be zero
Assume that a certain ratio of the zero elements of the input matrix remains
zero after the optimization [Sindhwani et al., 2010]
Experimentally outperforms the "zero-as-negative" method
23 / 36
One-class Matrix Completion
[Sindhwani et al., 2010]
Minimize
$$\sum_{A_{ui} \neq 0} (A_{ui} - X_u^T Y_i)^2 + \lambda_X \|X\|_{\mathrm{Fro}}^2 + \lambda_Y \|Y\|_{\mathrm{Fro}}^2 + \sum_{A_{ui} = 0} \left[ p_{ui} (0 - X_u^T Y_i)^2 + (1 - p_{ui}) (1 - X_u^T Y_i)^2 \right] - T \sum_{A_{ui} = 0} H(p_{ui})$$
where $H$ is the binary entropy, subject to
$$\frac{1}{|\{A_{ui} \mid A_{ui} = 0\}|} \sum_{A_{ui} = 0} p_{ui} = r$$
24 / 36
$$\sum_{A_{ui} \neq 0} (A_{ui} - X_u^T Y_i)^2 + \lambda_X \|X\|_{\mathrm{Fro}}^2 + \lambda_Y \|Y\|_{\mathrm{Fro}}^2 + \sum_{A_{ui} = 0} \left[ p_{ui} (0 - X_u^T Y_i)^2 + (1 - p_{ui}) (1 - X_u^T Y_i)^2 \right] - T \sum_{A_{ui} = 0} H(p_{ui})$$
Intuitive explanation:
$p_{ui}$ means how likely the $(u, i)$-element is zero.
The second term is the error of the estimation weighted by the $p_{ui}$'s.
The third term is the entropy of the distribution.
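To make the weighting concrete, here is a sketch that merely evaluates the error and regularization terms of this objective for given X, Y, and p (the entropy/annealing term of [Sindhwani et al., 2010] is omitted for brevity; all matrices below are toy assumptions):

```python
import numpy as np

def one_class_objective(A, X, Y, p, lam_x=0.1, lam_y=0.1):
    """Squared error on nonzero entries, p-weighted error on zero
    entries, plus Frobenius-norm regularization (entropy term omitted)."""
    P = X @ Y.T                  # predicted scores X_u . Y_i
    nz = A != 0
    z = ~nz
    err_nz = np.sum((A[nz] - P[nz]) ** 2)
    err_z = np.sum(p[z] * (0 - P[z]) ** 2 + (1 - p[z]) * (1 - P[z]) ** 2)
    reg = lam_x * np.sum(X ** 2) + lam_y * np.sum(Y ** 2)
    return err_nz + err_z + reg

# Toy example; the mean of p over the zero entries plays the role of r.
A = np.array([[1.0, 0.0], [0.0, 1.0]])
X = np.eye(2)
Y = np.eye(2)
p = np.array([[0.0, 0.5], [0.5, 0.0]])   # r = 0.5 over the zero entries
print(one_class_objective(A, X, Y, p))
```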
25 / 36
Implicit Sparseness constraint: SLIM (Elastic Net)
In the regression model, adding an L1 term makes the solution sparse:
$$\min_w \left[ \frac{1}{2n} \|Xw - y\|_2^2 + \frac{\lambda(1 - \rho)}{2} \|w\|_2^2 + \lambda \rho |w|_1 \right]$$
A similar idea is used for matrix factorization [Ning et al., 2011]:
Minimize
$$\|A - AW\|_{\mathrm{Fro}}^2 + \frac{\lambda(1 - \rho)}{2} \|W\|_{\mathrm{Fro}}^2 + \lambda \rho |W|_1$$
subject to
$$\operatorname{diag} W = 0$$
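A minimal coordinate-descent sketch of this column-wise elastic-net problem; the soft-thresholding update and the toy matrix are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator for the L1 term."""
    return np.sign(x) * max(abs(x) - t, 0.0)

def slim(A, lam=0.5, rho=0.5, iters=50):
    """Coordinate descent on ||A - AW||_Fro^2 + lam*(1-rho)/2*||W||_Fro^2
    + lam*rho*|W|_1 with diag(W) = 0, solved column by column."""
    n, m = A.shape
    W = np.zeros((m, m))
    col_sq = (A ** 2).sum(axis=0)        # ||a_k||^2 for each column
    for j in range(m):
        w = W[:, j]                      # view into W's j-th column
        r = A[:, j] - A @ w              # current residual
        for _ in range(iters):
            for k in range(m):
                if k == j:               # enforce diag(W) = 0
                    continue
                r += A[:, k] * w[k]      # remove k's contribution
                w[k] = soft(2 * (A[:, k] @ r), lam * rho) / (
                    2 * col_sq[k] + lam * (1 - rho))
                r -= A[:, k] * w[k]
    return W

# Toy binary shopping matrix; recommendation scores are A @ W.
A = np.array([[1, 1, 0, 1],
              [1, 0, 0, 0],
              [0, 0, 1, 0],
              [1, 1, 1, 0]], dtype=float)
W = slim(A)
```

Each coordinate update is the exact minimizer along that coordinate, so the objective never increases.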
26 / 36
Ranking prediction
Another strategy for shopping prediction
A "learn from the order" approach
Predict whether X is more likely to be bought than Y, rather than
predicting the probability for X or Y itself.
27 / 36
Bayesian Personalized Ranking
[Rendle et al., 2009]
Consider a total order $>_u$ for each $u \in U$, and suppose $i >_u j$ ($i, j \in I$).
The objective is to estimate $p(i >_u j)$ for pairs with $A_{ui} = 0$ and $A_{uj} = 0$
(which means $i$ and $j$ are not bought by $u$).
28 / 36
Let
$$D_A = \{(u, i, j) \mid A_{ui} \neq 0,\ A_{uj} = 0\}$$
and define
$$\prod_{u \in U} p({>_u} \mid X, Y) = \prod_{(u,i,j) \in D_A} p(i >_u j \mid X, Y)$$
where we assume
$$p(i >_u j \mid X, Y) = \sigma(X_u^T Y_i - X_u^T Y_j), \qquad \sigma(x) = \frac{1}{1 + e^{-x}}$$
29 / 36
Taking the log of this,
$$\log \left[ \prod_{(u,i,j) \in D_A} p(i >_u j \mid X, Y) \right] - \lambda_X \|X\|_{\mathrm{Fro}}^2 - \lambda_Y \|Y\|_{\mathrm{Fro}}^2 = \sum_{(u,i,j) \in D_A} \log \sigma(X_u^T Y_i - X_u^T Y_j) - \lambda_X \|X\|_{\mathrm{Fro}}^2 - \lambda_Y \|Y\|_{\mathrm{Fro}}^2$$
so the problem becomes
$$\max_{X,Y} \left[ \sum_{(u,i,j) \in D_A} \log \sigma(X_u^T Y_i - X_u^T Y_j) - \lambda_X \|X\|_{\mathrm{Fro}}^2 - \lambda_Y \|Y\|_{\mathrm{Fro}}^2 \right]$$
This means "find a pair of matrices $X$, $Y$ which preserves the order of the
elements of the input matrix for each $u$."
30 / 36
Computation
The function we want to optimize:
$$\sum_{(u,i,j) \in D_A} \log \sigma(X_u^T Y_i - X_u^T Y_j) - \lambda_X \|X\|_{\mathrm{Fro}}^2 - \lambda_Y \|Y\|_{\mathrm{Fro}}^2$$
For a sampled triple $(u, i, j)$, update the parameters $\Theta = (X, Y)$ by gradient ascent:
$$\Theta \leftarrow \Theta + \alpha \frac{\partial}{\partial \Theta} \left( \log \sigma(X_u^T Y_i - X_u^T Y_j) - \lambda_X \|X\|_{\mathrm{Fro}}^2 - \lambda_Y \|Y\|_{\mathrm{Fro}}^2 \right)$$
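A sketch of this stochastic update in Python, sampling triples from D_A; the toy matrix, rank, and hyperparameters are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def bpr(A, k=2, alpha=0.05, lam=0.01, steps=20000):
    """Stochastic gradient ascent on sum log sigma(X_u.Y_i - X_u.Y_j)
    minus regularization, one sampled triple (u, i, j) per step."""
    n, m = A.shape
    X = 0.1 * rng.standard_normal((n, k))
    Y = 0.1 * rng.standard_normal((m, k))
    triples = [(u, i, j)
               for u in range(n)
               for i in range(m) if A[u, i] != 0
               for j in range(m) if A[u, j] == 0]
    for _ in range(steps):
        u, i, j = triples[rng.integers(len(triples))]
        x = X[u] @ (Y[i] - Y[j])
        g = 1.0 / (1.0 + np.exp(x))      # d/dx log sigma(x) = 1 - sigma(x)
        X[u] += alpha * (g * (Y[i] - Y[j]) - lam * X[u])
        Y[i] += alpha * (g * X[u] - lam * Y[i])
        Y[j] += alpha * (-g * X[u] - lam * Y[j])
    return X, Y

# Toy shopping matrix: the score X_u . Y_i should rank bought items
# above unbought ones for each user.
A = np.array([[1, 1, 0, 0],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X, Y = bpr(A)
scores = X @ Y.T
```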
31 / 36
MyMediaLite
http://www.mymedialite.net/
32 / 36
Practical Aspects of the Recommendation
Problem
Computational time
Memory consumption
How many services can be integrated into a single server rack?
Extremely high accuracy achieved on a supercomputer is useless for a real business
33 / 36
Concluding Remarks: What is Important for
Good Prediction?
Theory
Machine learning
Mathematical optimization
Implementation
Algorithms
Computer architecture
Mathematics
Human factors!
Hand tuning of parameters
Domain specific knowledge
34 / 36
References (1/2)
For beginners
Hido et al., データサイエンティスト養成読本 機械学習入門編 (Data Scientist
Training Reader: Introduction to Machine Learning), Gijutsu-Hyohronsha, 2016.
T. Segaran. Programming Collective Intelligence, O'Reilly Media, 2007.
E. Chen. Winning the Netflix Prize: A Summary.
A. Gunawardana and G. Shani. A Survey of Accuracy Evaluation Metrics of
Recommendation Tasks, The Journal of Machine Learning Research,
Volume 10, 2009.
35 / 36
References (2/2)
Papers
Salakhutdinov, Ruslan, and Andriy Mnih. "Bayesian probabilistic matrix
factorization using Markov chain Monte Carlo." Proceedings of the 25th
international conference on Machine learning. ACM, 2008.
Sindhwani, Vikas, et al. "One-class matrix completion with low-density
factorizations." Data Mining (ICDM), 2010 IEEE 10th International
Conference on. IEEE, 2010.
Rendle, Steffen, et al. "BPR: Bayesian personalized ranking from implicit
feedback." Proceedings of the Twenty-Fifth Conference on Uncertainty in
Artificial Intelligence. AUAI Press, 2009.
Zou, Hui, and Trevor Hastie. "Regularization and variable selection via the
elastic net." Journal of the Royal Statistical Society: Series B (Statistical
Methodology) 67.2 (2005): 301-320.
Ning, Xia, and George Karypis. "SLIM: Sparse linear methods for top-n
recommender systems." Data Mining (ICDM), 2011 IEEE 11th
International Conference on. IEEE, 2011.
36 / 36