SS 2021
Outline
Learner Modelling
Recommender Systems
Learning Objectives
You
• know what learner modelling is
• know what relevant learner characteristics are
• know different methods for modelling learners
• know what recommender systems are
• know different types of recommender systems and their advantages/disadvantages
• know how to evaluate recommender systems
Learner Modelling
Learner Modelling (Chrysafiadi et al., 2013)
Learners’ Characteristics
Methods (Chrysafiadi et al., 2013)
• Overlay model
• Stereotypes
• Perturbation
• Cognitive theories
• Fuzzy student modeling
• Bayesian networks
• …
Overlay model
Assumption:
• Learner may have (incomplete) knowledge of the domain
• Learner's set of knowledge is a subset of an expert's set of knowledge
[Figure: the learner's knowledge as a subset of the expert's knowledge]
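A minimal sketch of the overlay assumption in Python (the domain concepts are hypothetical): the learner model is a subset of the expert's concept set, and the set difference is what remains to be learned.

# Hypothetical domain concepts known by the expert and by the learner
expert_knowledge = {"variables", "loops", "functions", "recursion"}
learner_knowledge = {"variables", "loops"}

assert learner_knowledge <= expert_knowledge  # overlay assumption: subset
still_to_learn = expert_knowledge - learner_knowledge
print(sorted(still_to_learn))  # ['functions', 'recursion']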
Stereotypes (Kay, 2000)
Idea:
• Define shared characteristics
• Assign characteristics to learners
Advantages:
• “no” cold-start problem
• utilizes group characteristics instead of data on individual users
Disadvantages:
• inflexible
• must be maintained and updated manually
• requires that the users can be classified into stereotypes
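A minimal sketch of the idea in Python (the stereotypes, their default characteristics, and the trigger attribute are hypothetical): a new learner is classified into a stereotype and inherits its defaults, which is why there is "no" cold-start problem.

# Hypothetical stereotypes with shared default characteristics
STEREOTYPES = {
    "novice":       {"pace": "slow",   "hints": True,  "examples": "many"},
    "intermediate": {"pace": "medium", "hints": True,  "examples": "some"},
    "expert":       {"pace": "fast",   "hints": False, "examples": "few"},
}

def assign_stereotype(prior_courses: int) -> dict:
    """Classify a learner into a stereotype based on one trigger attribute."""
    if prior_courses == 0:
        return STEREOTYPES["novice"]
    if prior_courses < 3:
        return STEREOTYPES["intermediate"]
    return STEREOTYPES["expert"]

print(assign_stereotype(1))  # {'pace': 'medium', 'hints': True, 'examples': 'some'}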
Perturbation
• Extends the overlay model: the learner's knowledge may also contain misconceptions ("buggy" knowledge) that lie outside the expert's knowledge
[Figure: the learner's knowledge overlapping, but not fully contained in, the expert's knowledge]
Cognitive Theories
Idea:
• Use cognitive theories to model the learner's processes of thinking and understanding
→ simulate the learner's reasoning

Bayesian Networks
• directed acyclic graph modelling variables (nodes) and their probabilistic dependencies
Types:
• expert-centric models
• efficiency-centric models
• data-centric models
Image source: Conati et al., 2013
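A minimal sketch of the inference such a network supports, reduced to a single skill node with one observed answer (all probabilities are hypothetical): Bayes' rule updates the estimate that the learner knows the skill.

# Hypothetical CPTs for a two-node network: Knows -> CorrectAnswer
p_knows = 0.5                 # prior: P(Knows = true)
p_correct_given_knows = 0.9   # 1 - slip probability
p_correct_given_not = 0.2     # guess probability

# Observe a correct answer and apply Bayes' rule:
# P(Knows | Correct) = P(Correct | Knows) * P(Knows) / P(Correct)
p_correct = (p_correct_given_knows * p_knows
             + p_correct_given_not * (1 - p_knows))
p_knows_given_correct = p_correct_given_knows * p_knows / p_correct
print(round(p_knows_given_correct, 2))  # 0.82: belief in mastery increased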
Recommender Systems
Definition
Recommender System
• tackles an information filtering problem: select, from a large set of items, those relevant for a specific user
Tasks
“Good”
• with respect to the learning task or learning goal
• with respect to prior knowledge
• with respect to the current situation (location, time, noise level, …)
Items recommended (Erdt et al., 2015)
Data used for calculation of recommendations
Ratings/transactions can be
• explicit
• implicit
Ratings/transactions used
• Viewed learning items
• Rated learning items
• Tagged learning items
• Successfully processed learning items – additional knowledge about the user can be used for calculating recommendations
• Unsuccessfully processed learning items
• Communication history – communication with others may indicate shared interests and common learning items
• …
Quality Measures for Recommender Systems in a Learning Scenario
Relevance:
• “Relevance refers to the ability of a RS (Recommender System) to provide items that fit the user’s preferences” (Epifania, 2012).
Novelty:
• “Novelty (or discovery) is the extent to which users receive new and interesting recommendations” (Pu, 2011).
Diversity:
• “Diversity measures the diversity level of items in the recommendation list” (Pu, 2011).
Recommender Systems
• Neighborhood-based
• Model-based (e.g. LSA, SVM)
Technologies used (Erdt et al., 2015)
Content-based filtering
[Figure: worked example]
3. Prediction
• T2: rated with 4
• T3 is similar to T2 → prediction for T3: 4
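A minimal sketch of the prediction step in Python (the content feature vectors are hypothetical): item similarity is computed on content features, here via cosine similarity, and the rating of the similar, already-rated item T2 carries over to T3.

from math import sqrt

def cosine(a, b):
    """Cosine similarity of two content feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Hypothetical content features (e.g., topic weights) of two learning items
features = {"T2": [1.0, 0.8, 0.0], "T3": [0.9, 0.7, 0.1]}
rating_t2 = 4  # the learner rated T2 with 4

if cosine(features["T2"], features["T3"]) > 0.8:  # T3 is similar to T2 ...
    print("prediction for T3:", rating_t2)        # ... so predict T2's rating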
Knowledge-based filtering
• frequently, the user is involved in the filtering process → interactive process
• two types:
  • constraint-based
  • case-based
• automatic matching of learner model and artifact model, based on the knowledge stored there
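A minimal sketch of the constraint-based variant in Python (the learner model, artifact models, and constraints are all hypothetical): artifacts whose metadata satisfy the constraints derived from the learner model are returned.

# Hypothetical learner model and artifact (item) models
learner = {"level": "beginner", "language": "en", "max_duration": 30}
artifacts = [
    {"id": "A1", "level": "beginner", "language": "en", "duration": 20},
    {"id": "A2", "level": "advanced", "language": "en", "duration": 15},
    {"id": "A3", "level": "beginner", "language": "de", "duration": 25},
]

def matches(artifact, learner_model):
    """Constraint-based matching of learner model and artifact model."""
    return (artifact["level"] == learner_model["level"]
            and artifact["language"] == learner_model["language"]
            and artifact["duration"] <= learner_model["max_duration"])

print([a["id"] for a in artifacts if matches(a, learner)])  # ['A1']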
Collaborative filters
• domain independent
Collaborative filters: Similarity measure
e.g., based on the Euclidean distance $d(x, y) = \sqrt{\sum_i (x_i - y_i)^2}$:

$s(i, j) = \frac{1}{1 + d(i, j)}$

Example: distance Rose & Seymour

$d(\mathrm{Rose}, \mathrm{Seymour}) = \sqrt{(3.0 - 2.5)^2 + (3.5 - 3.5)^2 + (3.0 - 1.5)^2 + (3.5 - 5.0)^2 + \dots} \approx 2.4$

$s(\mathrm{Rose}, \mathrm{Seymour}) \approx 0.3$
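A minimal sketch of this similarity in Python, using the four co-rated items from the example (the slide's distance of ≈2.4 includes further items, indicated by the dots):

from math import sqrt

def euclidean_similarity(x, y):
    """s = 1 / (1 + d), with d the Euclidean distance over co-rated items."""
    d = sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))
    return 1 / (1 + d)

rose    = [3.0, 3.5, 3.0, 3.5]  # ratings of the co-rated items
seymour = [2.5, 3.5, 1.5, 5.0]
print(round(euclidean_similarity(rose, seymour), 2))  # ~0.31, i.e. roughly 0.3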
Collaborative filters: Similarity measures
Alternatively, e.g. Pearson correlation (how well the two users' ratings line up):
[Figure: two example user pairs with correlations 0.4 and 0.75]
Calculation:

$r(x, y) = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}$
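A minimal sketch of the Pearson correlation in Python, on the same two rating vectors as above:

def pearson(x, y):
    """Pearson correlation: how well two users' ratings line up linearly."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

print(round(pearson([3.0, 3.5, 3.0, 3.5], [2.5, 3.5, 1.5, 5.0]), 2))  # ≈ 0.87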
Collaborative filters: Simple algorithms
Based on the matrix and the similarity function, the most similar other users
can be easily determined for a user
Analogous: for an item, the most similar other items can be easily
determined
Simple approach for item recommendation:
• Search most similar user
• Show me the top recommendations that I don't know yet
Disadvantages:
• It could be that in principle well matching items are not found, if the
found "best matching user" has not rated them
• Possibly the "best matching user" has rated an item completely against
the general trend
• Cold Start Problem (general issue for collaborative filters)
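A minimal sketch of the simple approach in Python (the rating data are hypothetical): find the most similar user, then return their best-rated items that the target user does not know yet.

ratings = {  # hypothetical user -> {item: rating}
    "alice": {"T1": 5, "T2": 4, "T3": 1},
    "bob":   {"T1": 5, "T2": 4, "T4": 5, "T5": 2},
    "carol": {"T1": 1, "T3": 5, "T5": 4},
}

def similarity(u, v):
    """Euclidean-based similarity over co-rated items (as defined above)."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    d = sum((ratings[u][i] - ratings[v][i]) ** 2 for i in common) ** 0.5
    return 1 / (1 + d)

def recommend(user):
    # Search the most similar other user ...
    best = max((v for v in ratings if v != user), key=lambda v: similarity(user, v))
    # ... and recommend their top-rated items the target user does not know yet
    unseen = [(r, i) for i, r in ratings[best].items() if i not in ratings[user]]
    return [i for r, i in sorted(unseen, reverse=True)]

print(recommend("alice"))  # ['T4', 'T5']: bob is most similar, T4 ranked first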
Collaborative filtering: product/user recommendation
Item-based collaborative filtering
Basic idea:
• First calculate the n items most similar to each item (this can be done in advance/offline, e.g. regularly overnight)
• Recommendations for a user:
  • determine their top-rated items
  • determine a weighted average over the items most similar to them (see the sketch below)
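A minimal sketch in Python (the similarity values and ratings are hypothetical): the item-item similarity table is the precomputed offline part; the online part is the weighted average described above.

# Offline phase: precomputed item-item similarities (hypothetical values)
item_sims = {
    "T1": [("T4", 0.9), ("T5", 0.4)],
    "T2": [("T4", 0.7), ("T6", 0.6)],
}

def recommend(user_ratings, top_n=2):
    """Similarity-weighted average of the user's ratings over similar items."""
    scores, weights = {}, {}
    for item, rating in user_ratings.items():
        for similar_item, sim in item_sims.get(item, []):
            if similar_item in user_ratings:
                continue  # already known to the user
            scores[similar_item] = scores.get(similar_item, 0) + sim * rating
            weights[similar_item] = weights.get(similar_item, 0) + sim
    predictions = {i: scores[i] / weights[i] for i in scores}
    return sorted(predictions.items(), key=lambda kv: -kv[1])[:top_n]

print(recommend({"T1": 5, "T2": 4}))  # [('T5', 5.0), ('T4', 4.5625)]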
Example: item-based collaborative filtering
Procedure
• does not need to look at all data for a concrete recommendation
• copes well with a sparse matrix
• has (manageable) additional effort for matrix of similar items
Matrix factorization in recommender systems
Observations:
• The user × item matrix is often (too) large
• Presumably there are latent factors of items that are important for recommending to users
• Knowledge of these factors is interesting for the platform
Approach: decomposition of the matrix into
• users × factors and
• items × factors
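A minimal sketch of such a decomposition in Python, learned by stochastic gradient descent in the spirit of Koren et al. (2009); the ratings, factor count, and hyperparameters are assumptions.

import random

random.seed(0)
ratings = [("u1", "T1", 5), ("u1", "T2", 3), ("u2", "T1", 4), ("u2", "T3", 1)]
k, lr, reg, epochs = 2, 0.01, 0.05, 200  # latent factors, learning rate, regularization

P = {u: [random.random() for _ in range(k)] for u in {"u1", "u2"}}        # users x factors
Q = {i: [random.random() for _ in range(k)] for i in {"T1", "T2", "T3"}}  # items x factors

for _ in range(epochs):
    for u, i, r in ratings:
        pred = sum(pu * qi for pu, qi in zip(P[u], Q[i]))
        err = r - pred
        for f in range(k):  # regularized gradient step on both factor vectors
            P[u][f] += lr * (err * Q[i][f] - reg * P[u][f])
            Q[i][f] += lr * (err * P[u][f] - reg * Q[i][f])

# Predicted rating of user u1 for the unseen item T3:
print(sum(pu * qi for pu, qi in zip(P["u1"], Q["T3"])))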
Matrix factorization in recommender systems
(Koren et al., 2009)
Generic issues
Cold Start
• new user
• new item
• Boosting
• new community/system
Evaluation
Recommender Evaluation
(Aggarwal, 2016; Herlocker, 2004)
Recommender Evaluation: Common offline approach
Procedure
• Hide some items “used” by the user (ground truth)
• Recommend items & rank items
• Compare with ground truth – hidden items “used” by the user
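A minimal sketch of this procedure in Python, with a toy popularity recommender standing in for the algorithms above (the data and the recommender are hypothetical):

# Hypothetical interaction data: user -> list of items "used"
user_items = {
    "alice": ["T1", "T2", "T3", "T4"],
    "bob":   ["T2", "T3", "T5", "T6"],
}

def recommend_popular(user, visible, k=3):
    """Toy stand-in recommender: most popular items the user has not seen.
    (Note: popularity is counted over all data here, including hidden items.)"""
    counts = {}
    for items in user_items.values():
        for it in items:
            counts[it] = counts.get(it, 0) + 1
    candidates = [it for it in counts if it not in visible]
    return sorted(candidates, key=lambda it: -counts[it])[:k]

hits = total = 0
for user, items in user_items.items():
    hidden, visible = set(items[:2]), items[2:]  # hide some "used" items (ground truth)
    recs = recommend_popular(user, visible)      # recommend & rank
    hits += len(hidden & set(recs))              # compare with ground truth
    total += len(hidden)
print(hits / total)  # fraction of hidden items recovered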
Hit Rate/Recall and Precision
[Figure: recommended list (Item 98, Item 32, Item 152, Item 56, Item 74, Item 59) vs. items used in reality (Item 32, Item 74); hits: Item 32 and Item 74]

Hit rate/Recall:

$\mathrm{Recall}(k) = \frac{|\text{relevant items retrieved}|}{|\text{relevant items}|}$

Precision:

$\mathrm{Precision}(k) = \frac{|\text{relevant items retrieved}|}{|\text{items retrieved}|}$
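A minimal sketch of both measures in Python, applied to the example lists above:

def recall_at_k(recommended, relevant, k):
    """Share of the relevant items that appear in the top-k recommendations."""
    retrieved = set(recommended[:k])
    return len(retrieved & relevant) / len(relevant)

def precision_at_k(recommended, relevant, k):
    """Share of the top-k recommendations that are relevant."""
    retrieved = set(recommended[:k])
    return len(retrieved & relevant) / k

recommended = [98, 32, 152, 56, 74, 59]  # ranked recommendation list
relevant = {32, 74}                      # items used in reality (ground truth)
print(recall_at_k(recommended, relevant, 6))            # 1.0  (both hits retrieved)
print(round(precision_at_k(recommended, relevant, 6), 2))  # 0.33 (2 of 6 are hits)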
Position matters
[Figure: the same recommended list with ranks shown; the hits Item 32 and Item 74 appear at positions 2 and 5]
Only first k positions matter
[Figure: the same example; the recommendation list is cut off after the first k positions, and only hits within the top k count]
Utility-based measure
Idea:
• each item has a utility for each user
• the utility correlates with the ground-truth rating and with the position in the recommendation list
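A minimal sketch of one such measure in Python, in the style of discounted cumulative gain; the logarithmic position discount and the ground-truth ratings are assumptions, as the slide only states that utility correlates with rating and position.

from math import log2

def utility(recommended, true_ratings):
    """Sum of ground-truth ratings, discounted by list position (DCG-style)."""
    return sum(true_ratings.get(item, 0) / log2(pos + 1)
               for pos, item in enumerate(recommended, start=1))

recommended = [98, 32, 152, 56, 74, 59]
true_ratings = {32: 4, 74: 5}  # hypothetical ground-truth ratings of the hits
print(round(utility(recommended, true_ratings), 2))  # ≈ 4.46; higher-ranked hits count more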
Selecting a metric
Wrap up
Wrap up
Next Lecture
Prof. Dr. Sven Strickroth
Ludwig-Maximilians-Universität München
Institut für Informatik
Lehr- und Forschungseinheit für
Programmier- und Modellierungssprachen
Oettingenstraße 67
80538 München
Phone: +49-89-2180-9300
sven.strickroth@ifi.lmu.de
References & Further Reading