
CS246: Mining Massive Datasets

Jure Leskovec, Stanford University


http://cs246.stanford.edu
¡ Training movie rating data
§ 100 million ratings, 480,000 users, 17,770 movies
§ 6 years of data: 2000-2005
¡ Test movie rating data
§ Last few ratings of each user (2.8 million ratings)
§ Evaluation criterion: Root Mean Square Error (RMSE):
$\text{RMSE} = \sqrt{\tfrac{1}{|R|}\sum_{(i,x)\in R}\left(\hat{r}_{xi} - r_{xi}\right)^2}$
§ Netflix’s system RMSE: 0.9514
¡ Competition:
§ 2,700+ teams
§ $1 million prize for 10% improvement on Netflix
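For reference, a minimal sketch of this RMSE computation in Python (the `ratings` dict and `predict` callable are illustrative names, not from the slides):

```python
import math

def rmse(ratings, predict):
    """ratings: dict mapping (user, item) -> true rating.
    predict:   function (user, item) -> predicted rating."""
    se = sum((predict(x, i) - r) ** 2 for (x, i), r in ratings.items())
    return math.sqrt(se / len(ratings))

# e.g. a constant predictor scoring the "global average" baseline:
# rmse(test_ratings, lambda x, i: 3.7)
```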
[Figure: the data split. Training Data: 100 million ratings, labels known publicly. Held-Out Data: 3 million ratings, labels known only to Netflix, split into a Quiz Set (1.5m ratings, scores posted on the leaderboard) and a Test Set (1.5m ratings, scores known only to Netflix and used in determining the final winner).]
[Figure: the utility matrix R, 17,770 movies × 480,000 users, sparsely populated with ratings 1–5.]


[Figure: the matrix R (17,770 movies × 480,000 users) split into a Training Data Set (known ratings, e.g. $r_{3,6}$) and a Test Data Set (entries marked “?”); $r_{xi}$ is the true rating of user x on item i and $\hat{r}_{xi}$ is the predicted rating.]

$\text{RMSE} = \sqrt{\tfrac{1}{|R|}\sum_{(i,x)\in R}\left(\hat{r}_{xi} - r_{xi}\right)^2}$
¡ The winner of the Netflix Challenge
¡ Multi-scale modeling of the data:
Combine top-level, “regional” modeling of the data with a refined, local view:
§ Global: Overall deviations of users/movies
§ Factorization: Addressing “regional” effects
§ Collaborative filtering: Extract local patterns

[Diagram: three levels — Global effects → Factorization → CF/NN]


¡ Global:
§ Mean movie rating: 3.7 stars
§ The Sixth Sense is 0.5 stars above avg.
§ Joe rates 0.2 stars below avg.
⇒ Baseline estimation:
Joe will rate The Sixth Sense 4 stars
¡ Local neighborhood (CF/NN):
§ Joe didn’t like related movie Signs
§ ⇒ Final estimate:
Joe will rate The Sixth Sense 3.8 stars

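A minimal sketch of computing the global-effects quantities from a set of known ratings (the `ratings` dict is an illustrative name):

```python
from collections import defaultdict

def baselines(ratings):
    """ratings: dict (user, item) -> rating.
    Returns the overall mean mu plus per-user and per-movie deviations."""
    mu = sum(ratings.values()) / len(ratings)
    user_r, item_r = defaultdict(list), defaultdict(list)
    for (x, i), r in ratings.items():
        user_r[x].append(r)
        item_r[i].append(r)
    b_user = {x: sum(rs) / len(rs) - mu for x, rs in user_r.items()}
    b_item = {i: sum(rs) / len(rs) - mu for i, rs in item_r.items()}
    return mu, b_user, b_item

# Baseline estimate b_xi = mu + b_x + b_i;
# with mu = 3.7, b_Joe = -0.2, b_SixthSense = +0.5 this gives 4.0, as on the slide.
```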


¡ Earliest and most popular collaborative
filtering method
¡ Derive unknown ratings from those of “similar”
movies (item-item variant)
¡ Define similarity measure sij of items i and j
¡ Select k-nearest neighbors, compute the rating
§ N(i; x): items most similar to i that were rated by x

$\hat{r}_{xi} = \frac{\sum_{j\in N(i;x)} s_{ij}\, r_{xj}}{\sum_{j\in N(i;x)} s_{ij}}$

$s_{ij}$ … similarity of items i and j
$r_{xj}$ … rating of user x on item j
$N(i;x)$ … set of items similar to item i that were rated by x
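A minimal sketch of this item-item prediction; `sim(i, j)` is a placeholder similarity function (e.g. Pearson or cosine on item columns), not defined on the slides:

```python
def predict_knn(x, i, ratings, sim, k=20):
    """Item-item k-NN: weighted average of user x's ratings on the k items
    most similar to i. ratings: dict (user, item) -> rating."""
    rated = [j for (u, j) in ratings if u == x and j != i]   # items x has rated
    neighbors = sorted(rated, key=lambda j: sim(i, j), reverse=True)[:k]  # N(i; x)
    den = sum(sim(i, j) for j in neighbors)
    if den == 0:
        return None   # no usable neighbors
    return sum(sim(i, j) * ratings[(x, j)] for j in neighbors) / den
```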
¡ In practice we get better estimates if we
model deviations:

$\hat{r}_{xi} = b_{xi} + \frac{\sum_{j\in N(i;x)} s_{ij}\,(r_{xj} - b_{xj})}{\sum_{j\in N(i;x)} s_{ij}}$

baseline estimate for $r_{xi}$: $b_{xi} = \mu + b_x + b_i$
§ μ = overall mean rating
§ $b_x$ = rating deviation of user x = (avg. rating of user x) – μ
§ $b_i$ = (avg. rating of movie i) – μ

Problems/Issues with CF:
1) Similarity measures are “arbitrary”
2) Pairwise similarities neglect interdependencies among users
3) Taking a weighted average can be restricting
Solution: Instead of $s_{ij}$ use weights $w_{ij}$ that we estimate directly from data
¡ Use a weighted sum rather than weighted avg.:
$\hat{r}_{xi} = b_{xi} + \sum_{j\in N(i;x)} w_{ij}\,(r_{xj} - b_{xj})$

¡ A few notes:
§ $N(i;x)$ … set of movies rated by user x that are similar to movie i
§ $w_{ij}$ is the interpolation weight (some real number)
§ We allow $\sum_{j\in N(i;x)} w_{ij} \neq 1$
§ $w_{ij}$ models the interaction between pairs of movies (it does not depend on user x)
¡ $\hat{r}_{xi} = b_{xi} + \sum_{j\in N(i;x)} w_{ij}\,(r_{xj} - b_{xj})$
¡ How to set $w_{ij}$?
§ Remember, the error metric is $\sqrt{\tfrac{1}{|R|}\sum_{(i,x)\in R}(\hat{r}_{xi} - r_{xi})^2}$,
or equivalently SSE: $\sum_{(i,x)\in R}(\hat{r}_{xi} - r_{xi})^2$
§ Find wij that minimize SSE on training data!
§ Models relationships between item i and its neighbors j
§ wij can be learned/estimated based on x and
all other users that rated i
Why is this a good idea?
¡ Goal: Make good recommendations
§ Quantify goodness using RMSE:
lower RMSE ⇒ better recommendations
§ Want to make good recommendations on items that the user has not yet seen. Can’t really do this!
§ Let’s build a system such that it works well on known (user, item) ratings
§ And hope the system will also predict the unknown ratings well


¡ Idea: Let’s set values w such that they work well
on known (user, item) ratings
¡ How to find such values w?
¡ Idea: Define an objective function
and solve the optimization problem

¡ Find wij that minimize SSE on training data!


$J(w) = \sum_{(x,i)\in R} \Big( \underbrace{b_{xi} + \sum_{j\in N(i;x)} w_{ij}\,(r_{xj} - b_{xj})}_{\text{predicted rating}} - \underbrace{r_{xi}}_{\text{true rating}} \Big)^2$
¡ Think of w as a vector of numbers
¡ A simple way to minimize a function $f(x)$:
§ Compute the gradient $\nabla f$
§ Start at some point $y$ and evaluate $\nabla f(y)$
§ Make a step in the direction opposite the gradient:
$y \leftarrow y - \nabla f(y)$
§ Repeat until converged

[Figure: a curve $f$ with a point $y$ stepping downhill against the gradient toward the minimum.]
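A minimal sketch of this loop, with an explicit step size η (which the next slide calls the learning rate):

```python
def gradient_descent(grad_f, y, eta=0.1, tol=1e-6, max_iter=10_000):
    """Repeatedly step against the gradient until the step becomes tiny."""
    for _ in range(max_iter):
        step = eta * grad_f(y)
        y = y - step
        if abs(step) < tol:   # converged
            return y
    return y

# Minimize f(y) = (y - 3)^2, whose gradient is 2(y - 3):
print(gradient_descent(lambda y: 2 * (y - 3), y=0.0))   # -> approximately 3.0
```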
¡ We have the optimization problem
$J(w) = \sum_{(x,i)\in R}\Big(\big[b_{xi} + \sum_{j\in N(i;x)} w_{ij}\,(r_{xj} - b_{xj})\big] - r_{xi}\Big)^2$
— now what?
¡ Gradient descent:
§ Iterate until convergence: $w \leftarrow w - \eta\,\nabla_w J$, where η is the learning rate
§ and $\nabla_w J$ is the gradient (derivative evaluated on data):
$\frac{\partial J(w)}{\partial w_{ij}} = 2\sum_{(x,i)\in R}\Big(\big[b_{xi} + \sum_{k\in N(i;x)} w_{ik}\,(r_{xk} - b_{xk})\big] - r_{xi}\Big)\,(r_{xj} - b_{xj})$
for $j \in \{N(i;x), \forall i, \forall x\}$; else $\frac{\partial J(w)}{\partial w_{ij}} = 0$
§ Note: we fix movie i, go over all $r_{xi}$, and for every movie $j \in N(i;x)$ compute $\frac{\partial J(w)}{\partial w_{ij}}$

while |w_new − w_old| > ε:
  w_old = w_new
  w_new = w_old − η·∇_{w_old} J
¡ So far: $\hat{r}_{xi} = b_{xi} + \sum_{j\in N(i;x)} w_{ij}\,(r_{xj} - b_{xj})$
§ Weights $w_{ij}$ derived based on their role; no use of an arbitrary similarity measure ($w_{ij} \neq s_{ij}$)
§ Explicitly account for interrelationships among the neighboring movies
¡ Next: Latent factor model
§ Extract “regional” correlations

[Diagram: three levels — Global effects → Factorization → CF/NN]


Global average: 1.1296

User average: 1.0651


Movie average: 1.0533

Netflix: 0.9514

Basic Collaborative filtering: 0.94


CF+Biases+learned weights: 0.91

Grand Prize: 0.8563



[Figure: movies placed in a 2-D space with axes “geared towards females ↔ geared towards males” and “serious ↔ funny”: The Color Purple, Amadeus, Braveheart, Sense and Sensibility, Lethal Weapon, Ocean’s 11, The Lion King, The Princess Diaries, Independence Day, Dumb and Dumber.]
SVD: $A = U \Sigma V^{\mathsf T}$
¡ “SVD” on Netflix data: $R \approx Q \cdot P^{\mathsf T}$

[Figure: the items × users rating matrix R approximated as the product of a “thin” items × factors matrix Q and a factors × users matrix $P^{\mathsf T}$, here with 3 factors.]

¡ For now let’s assume we can approximate the rating matrix R as a product of “thin” $Q \cdot P^{\mathsf T}$
§ R has missing entries, but let’s ignore that for now!
§ Basically, we will want the reconstruction error to be small on known ratings, and we don’t care about the values on the missing ones
¡ How to estimate the missing rating of user x for item i?
$\hat{r}_{xi} = q_i \cdot p_x = \sum_f q_{if}\, p_{xf}$
§ $q_i$ = row i of Q
§ $p_x$ = column x of $P^{\mathsf T}$

[Figure: the missing entry “?” of R is estimated as the dot product of row $q_i$ of Q and column $p_x$ of $P^{\mathsf T}$ over the f factors; in the example the estimate comes out to 2.4.]
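Continuing the SVD sketch above, the prediction itself is just a dot product:

```python
import numpy as np

def predict(Q, Pt, i, x):
    """r_hat_xi = q_i . p_x: row i of Q dotted with column x of P^T."""
    return float(Q[i, :] @ Pt[:, x])

# With Q, Pt from the SVD sketch above: predict(Q, Pt, i=1, x=0)
```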
[Figure: the same movie map, with the two latent dimensions now labeled Factor 1 (geared towards females ↔ males) and Factor 2 (serious ↔ funny).]
¡ Remember SVD:
§ A: input data matrix (m × n)
§ U: left singular vectors
§ V: right singular vectors
§ Σ: singular values

$A \approx U \Sigma V^{\mathsf T}$

¡ So in our case:
“SVD” on Netflix data: $R \approx Q \cdot P^{\mathsf T}$, with $A = R$, $Q = U$, $P^{\mathsf T} = \Sigma V^{\mathsf T}$
$\hat{r}_{xi} = q_i \cdot p_x$
¡ We already know that SVD gives minimum
reconstruction error (Sum of Squared Errors):
$\min_{U,\Sigma,V} \sum_{(i,j)\in A} \big(A_{ij} - [U\Sigma V^{\mathsf T}]_{ij}\big)^2$

¡ Note two things:
§ SSE and RMSE are monotonically related:
$\text{RMSE} = \tfrac{1}{c}\sqrt{\text{SSE}}$ — great news: SVD is minimizing RMSE!
§ Complication: the sum in the SVD error term is over all entries (no rating is interpreted as zero rating). But our R has missing entries!
[Figure: R ≈ Q · Pᵀ again — the items × users matrix factored into an items × factors Q and a factors × users Pᵀ.]

¡ SVD isn’t defined when entries are missing!
¡ Use specialized methods to find P, Q:
§ $\min_{P,Q} \sum_{(i,x)\in R} \big(r_{xi} - q_i \cdot p_x\big)^2$, with $\hat{r}_{xi} = q_i \cdot p_x$
§ Note:
§ We don’t require the columns of P, Q to be orthogonal/unit length
§ P, Q map users/movies to a latent space
§ The most popular model among Netflix contestants
¡ Our goal is to find P and Q such that:
$\min_{P,Q} \sum_{(i,x)\in R} \big(r_{xi} - q_i \cdot p_x\big)^2$

[Figure: R ≈ Q · Pᵀ, as before.]


¡ Want to minimize SSE for unseen test data
¡ Idea: Minimize SSE on training data
§ Want large k (# of factors) to capture all the signals
§ But, SSE on test data begins to rise for k > 2
¡ This is a classical example of overfitting:
§ With too much freedom (too many free parameters) the model starts fitting noise
§ That is, it fits the training data too well and thus does not generalize well to unseen test data


¡ To solve overfitting we introduce regularization:
§ Allow a rich model where there is sufficient data
§ Shrink aggressively where data are scarce

$\min_{P,Q} \underbrace{\sum_{\text{training}} (r_{xi} - q_i \cdot p_x)^2}_{\text{“error”}} \;+\; \underbrace{\Big[\lambda_1 \sum_x \lVert p_x \rVert^2 + \lambda_2 \sum_i \lVert q_i \rVert^2\Big]}_{\text{“length”}}$

λ₁, λ₂ … user-set regularization parameters

Note: We do not care about the “raw” value of the objective function, but we do care about the P, Q that achieve the minimum of the objective
[Figure, repeated over four slides: the movie map in the latent factor space (Factor 1: geared towards females ↔ males; Factor 2: serious ↔ funny) under the regularized objective
$\min_{P,Q} \sum_{\text{training}} (r_{xi} - q_i \cdot p_x)^2 + \lambda\Big[\sum_x \lVert p_x \rVert^2 + \sum_i \lVert q_i \rVert^2\Big]$
= min over factors of “error” + λ·“length”.]
¡ Want to find matrices P and Q:
$\min_{P,Q} \sum_{\text{training}} (r_{xi} - q_i \cdot p_x)^2 + \Big[\lambda_1 \sum_x \lVert p_x \rVert^2 + \lambda_2 \sum_i \lVert q_i \rVert^2\Big]$
¡ Gradient descent:
§ Initialize P and Q (using SVD, pretend missing ratings are 0)
§ Do gradient descent:
§ P ← P − η·∇P
§ Q ← Q − η·∇Q
§ where ∇Q is the gradient/derivative of matrix Q: $\nabla Q = [\nabla q_{if}]$ and
$\nabla q_{if} = \sum_{x,i} -2\big(r_{xi} - q_i \cdot p_x\big)\,p_{xf} + 2\lambda_2 q_{if}$
§ Here $q_{if}$ is entry f of row $q_i$ of matrix Q
§ How to compute the gradient of a matrix? Compute the gradient of every element independently!
§ Observation: Computing gradients is slow!
¡ Gradient Descent (GD) vs. Stochastic GD (SGD)
§ Observation: $\nabla Q = [\nabla q_{if}]$ where
$\nabla q_{if} = \sum_{x,i} -2\big(r_{xi} - q_i \cdot p_x\big)\,p_{xf} + 2\lambda q_{if} = \sum_{x,i} \nabla Q(r_{xi})$
§ Here $q_{if}$ is entry f of row $q_i$ of matrix Q
§ Idea: Instead of evaluating the gradient over all ratings, evaluate it for each individual rating and make a step
¡ GD: $Q \leftarrow Q - \eta \sum_{x,i} \nabla Q(r_{xi})$
¡ SGD: $Q \leftarrow Q - \mu\, \nabla Q(r_{xi})$
§ Faster convergence!
§ Needs more steps, but each step is computed much faster
¡ Convergence of GD vs. SGD

[Plot: value of the objective function vs. iteration/step for GD and SGD.]

§ GD improves the value of the objective function at every step.
§ SGD improves the value, but in a “noisy” way.
§ GD takes fewer steps to converge, but each step takes much longer to compute.
§ In practice, SGD is much faster!
¡ Stochastic gradient descent:
§ Initialize P and Q (using SVD, pretend missing ratings are 0)
§ Then iterate over the ratings (multiple times if necessary) and update the factors.
For each $r_{xi}$:
§ $\varepsilon_{xi} = 2(r_{xi} - q_i \cdot p_x)$ (derivative of the “error”)
§ $q_i \leftarrow q_i + \mu_1\big(\varepsilon_{xi}\, p_x - 2\lambda_2\, q_i\big)$ (update equation)
§ $p_x \leftarrow p_x + \mu_2\big(\varepsilon_{xi}\, q_i - 2\lambda_1\, p_x\big)$ (update equation)
μ … learning rate
¡ 2 for loops:
§ For until convergence:
§ For each $r_{xi}$:
§ Compute the gradient, do a “step” as above
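A minimal sketch of these updates; for brevity it uses random initialization and a single learning rate/regularizer instead of the SVD initialization and separate μ₁, μ₂, λ₁, λ₂ on the slide:

```python
import numpy as np

def sgd_factorize(ratings, n_items, n_users, f=10, mu=0.01, lam=0.1, epochs=20):
    """SGD sketch for min sum (r_xi - q_i.p_x)^2 (+ regularization).
    ratings: list of (i, x, r) triples."""
    rng = np.random.default_rng(0)
    Q = rng.normal(scale=0.1, size=(n_items, f))   # item factors q_i
    P = rng.normal(scale=0.1, size=(n_users, f))   # user factors p_x
    for _ in range(epochs):
        for i, x, r in ratings:
            eps = 2 * (r - Q[i] @ P[x])                   # derivative of the "error"
            qi_old = Q[i].copy()                          # use old q_i in p_x update
            Q[i] += mu * (eps * P[x] - 2 * lam * Q[i])    # q_i update
            P[x] += mu * (eps * qi_old - 2 * lam * P[x])  # p_x update
    return Q, P

# Toy usage: two items, two users, three known ratings
Q, P = sgd_factorize([(0, 0, 5.0), (0, 1, 3.0), (1, 0, 1.0)], n_items=2, n_users=2)
print(Q[1] @ P[1])   # predicted (unobserved) rating of user 1 on item 1
```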
Koren, Bell, Volinsky, IEEE Computer, 2009
$\hat{r}_{xi} = \mu + b_x + b_i + q_i \cdot p_x$ — user bias, movie bias, and user–movie interaction

Baseline predictor ($\mu + b_x + b_i$):
§ Separates users and movies
§ Benefits from insights into users’ behavior
§ Among the main practical contributions of the competition

User–movie interaction ($q_i \cdot p_x$):
§ Characterizes the matching between users and movies
§ Attracts most research in the field
§ Benefits from algorithmic and mathematical innovations

¡ μ = overall mean rating
¡ $b_x$ = bias of user x
¡ $b_i$ = bias of movie i
¡ We have expectations on the rating by user x of movie i, even without estimating x’s attitude towards movies like i
§ Rating scale of user x
§ Values of other ratings the user gave recently (day-specific mood, anchoring, multi-user accounts)
§ (Recent) popularity of movie i
§ Selection bias; related to the number of ratings the user gave on the same day (“frequency”)


$r_{xi} = \mu + b_x + b_i + q_i \cdot p_x$
(overall mean rating + bias for user x + bias for movie i + user–movie interaction)

¡ Example:
§ Mean rating: µ = 3.7
§ You are a critical reviewer: your ratings are 1 star
lower than the mean: bx = -1
§ Star Wars gets a mean rating 0.5 higher than the average movie: bi = +0.5
§ Predicted rating for you on Star Wars:
= 3.7 - 1 + 0.5 = 3.2
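A two-line check of that arithmetic (the interaction term $q_i \cdot p_x$ is taken as 0 here):

```python
mu, b_x, b_i = 3.7, -1.0, 0.5   # mean, your bias, Star Wars' bias
print(mu + b_x + b_i)            # 3.2 — the baseline part of mu + b_x + b_i + q_i.p_x
```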
¡ Solve:
$\min_{Q,P}\; \underbrace{\sum_{(x,i)\in R} \big(r_{xi} - (\mu + b_x + b_i + q_i \cdot p_x)\big)^2}_{\text{goodness of fit}} \;+\; \underbrace{\Big(\lambda_1 \sum_i \lVert q_i \rVert^2 + \lambda_2 \sum_x \lVert p_x \rVert^2 + \lambda_3 \sum_x b_x^2 + \lambda_4 \sum_i b_i^2\Big)}_{\text{regularization}}$

λ is selected via grid search on a validation set

¡ Stochastic gradient descent to find the parameters
§ Note: Both biases $b_x$, $b_i$ as well as interactions $q_i$, $p_x$ are treated as parameters (we estimate them)
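A sketch extending the earlier SGD loop with the bias terms; a single learning rate and regularizer stand in for the separate μ’s and λ₁…λ₄ that the slides select by grid search:

```python
import numpy as np

def sgd_with_biases(ratings, n_items, n_users, f=10, lr=0.005, lam=0.05, epochs=20):
    """SGD sketch for min sum (r_xi - (mu + b_x + b_i + q_i.p_x))^2 + reg.
    ratings: list of (i, x, r) triples."""
    mu = np.mean([r for _, _, r in ratings])       # overall mean rating
    b_item, b_user = np.zeros(n_items), np.zeros(n_users)
    rng = np.random.default_rng(0)
    Q = rng.normal(scale=0.1, size=(n_items, f))
    P = rng.normal(scale=0.1, size=(n_users, f))
    for _ in range(epochs):
        for i, x, r in ratings:
            err = r - (mu + b_user[x] + b_item[i] + Q[i] @ P[x])
            b_user[x] += lr * (err - lam * b_user[x])    # update user bias
            b_item[i] += lr * (err - lam * b_item[i])    # update movie bias
            qi_old = Q[i].copy()
            Q[i] += lr * (err * P[x] - lam * Q[i])       # update item factors
            P[x] += lr * (err * qi_old - lam * P[x])     # update user factors
    return mu, b_user, b_item, Q, P
```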
[Plot: RMSE (0.885–0.92) vs. millions of parameters (1–1000) for three models: CF (no time bias), Basic Latent Factors, and Latent Factors w/ Biases.]


Global average: 1.1296

User average: 1.0651


Movie average: 1.0533

Netflix: 0.9514

Basic Collaborative filtering: 0.94


Collaborative filtering++: 0.91
Latent factors: 0.90

Latent factors+Biases: 0.89

Grand Prize: 0.8563



¡ Sudden rise in the
average movie rating
(early 2004)
§ Improvements in the Netflix GUI
§ Meaning of rating changed
¡ Movie age
§ Users prefer new movies, without any obvious reason
§ Older movies are just inherently better than newer ones
Y. Koren, Collaborative filtering with
temporal dynamics, KDD ’09
¡ Original model:
$r_{xi} = \mu + b_x + b_i + q_i \cdot p_x$
¡ Add time dependence to the biases:
$r_{xi} = \mu + b_x(t) + b_i(t) + q_i \cdot p_x$
§ Make parameters $b_x$ and $b_i$ depend on time
§ (1) Parameterize the time-dependence by linear trends, or (2) by bins, where each bin corresponds to 10 consecutive weeks
¡ Add temporal dependence to the factors
§ $p_x(t)$ … user preference vector on day t
Y. Koren, Collaborative filtering with temporal dynamics, KDD ’09
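A sketch of the binned variant; modeling $b_i(t)$ as a static bias plus a per-bin correction is one common parameterization (an assumption here, not spelled out on the slide):

```python
from collections import defaultdict

def time_bin(t_days, bin_weeks=10):
    """Map a timestamp (days since the start of the data) to a bias bin,
    one bin per 10 consecutive weeks as on the slide."""
    return t_days // (bin_weeks * 7)

b_i = defaultdict(float)        # static movie biases (learned by SGD)
b_i_bin = defaultdict(float)    # per-(movie, bin) corrections (learned by SGD)

def movie_bias(i, t_days):
    """b_i(t) = b_i + b_{i,Bin(t)}."""
    return b_i[i] + b_i_bin[(i, time_bin(t_days))]
```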
[Plot: RMSE (0.875–0.92) vs. millions of parameters (1–10,000), with successive improvements: CF (no time bias), Basic Latent Factors, CF (time bias), Latent Factors w/ Biases, + Linear time factors, + Per-day user biases, + CF.]


Global average: 1.1296
User average: 1.0651
Movie average: 1.0533
Netflix: 0.9514
Basic Collaborative filtering: 0.94
Collaborative filtering++: 0.91
Latent factors: 0.90
Latent factors+Biases: 0.89
Latent factors+Biases+Time: 0.876
Grand Prize: 0.8563

Still no prize! ☹ Getting desperate. Try a “kitchen sink” approach!


June 26th submission triggers 30-day “last call”
¡ Ensemble team formed
§ Group of other teams on leaderboard forms a new team
§ Relies on combining their models
§ Quickly also get a qualifying score over 10%

¡ BellKor
§ Continue to get small improvements in their scores
§ Realize they are in direct competition with team Ensemble

¡ Strategy
§ Both teams carefully monitoring the leaderboard
§ Only sure way to check for improvement is to submit a set
of predictions
§ This alerts the other team of your latest score
¡ Submissions limited to 1 a day
§ Only 1 final submission could be made in the last 24h
¡ 24 hours before deadline…
§ BellKor team member in Austria notices (by chance) that
Ensemble posts a score that is slightly better than BellKor’s
¡ Frantic last 24 hours for both teams
§ Much computer time on final optimization
§ Carefully calibrated to end about an hour before deadline
¡ Final submissions
§ BellKor submits a little early (on purpose), 40 mins before
deadline
§ Ensemble submits their final entry 20 mins later
§ ….and everyone waits….
What’s the moral of
the story?

Submit early! ☺
¡ Some slides and plots borrowed from
Yehuda Koren, Robert Bell and Padhraic
Smyth
¡ Further reading:
§ Y. Koren, Collaborative filtering with temporal
dynamics, KDD ’09
§ http://www2.research.att.com/~volinsky/netflix/bpc.html
§ http://www.the-ensemble.com/

