Multiverse Recommendation: N-dimensional Tensor
Factorization for Context-aware Collaborative
Filtering
Alexandros Karatzoglou¹, Xavier Amatriain¹, Linas Baltrunas², Nuria Oliver²
¹ Telefonica Research, Barcelona, Spain
² Free University of Bolzano, Bolzano, Italy
November 4, 2010
Context in Recommender Systems
Context is an important factor to consider in personalized recommendation.
Current State of the Art in Context-aware Recommendation
Pre-filtering techniques
Post-filtering techniques
Contextual modeling
The approach presented here fits in the contextual modeling category.
Collaborative Filtering problem setting
Typical data sizes, e.g. the Netflix data: n = 5 × 10^5 users, m = 17 × 10^3 movies.
Standard Matrix Factorization
Find U ∈ R^{n×d} and M ∈ R^{d×m} so that F = UM
minimize_{U,M} L(F, Y) + λ Ω(U, M)
Figure: the users × movies rating matrix factorized into a user factor matrix U and a movie factor matrix M.
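A toy-sized numpy sketch of this objective; the sizes, λ, random data, and the squared error loss choice are illustrative assumptions (real data, e.g. Netflix, is far larger):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 6, 5, 3
U = rng.normal(size=(n, d))          # user factors, U in R^{n x d}
M = rng.normal(size=(d, m))          # movie factors, M in R^{d x m}
Y = rng.integers(1, 6, size=(n, m)).astype(float)  # observed ratings

F = U @ M                            # predictions: F = UM

# Regularized objective L(F, Y) + lambda * Omega(U, M), here with
# squared error loss and Frobenius-norm regularization.
lam = 0.1
loss = 0.5 * np.sum((F - Y) ** 2)
omega = np.sum(U ** 2) + np.sum(M ** 2)
objective = loss + lam * omega
```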
Multiverse Recommendation: Tensors for Context Aware Collaborative Filtering
Figure: the users × movies rating matrix extended with a context dimension into a rating tensor.
Tensors for Context Aware Collaborative Filtering
Figure: the rating tensor decomposed into factor matrices U (users), M (movies), C (context) and a core tensor S.
Fijk = S ×U Ui∗ ×M Mj∗ ×C Ck∗
R[U, M, C, S] := L(F, Y) + Ω[U, M, C] + Ω[S]
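The per-entry prediction above can be sketched with np.einsum; the sizes and factor ranks here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, c = 4, 5, 3            # users, movies, context conditions
dU, dM, dC = 2, 2, 2         # factor ranks (illustrative choices)
U = rng.normal(size=(n, dU))
M = rng.normal(size=(m, dM))
C = rng.normal(size=(c, dC))
S = rng.normal(size=(dU, dM, dC))   # core tensor

# F_ijk = S x_U U_i* x_M M_j* x_C C_k*  for all i, j, k at once:
F = np.einsum('abc,ia,jb,kc->ijk', S, U, M, C)

# Single-entry version, matching the per-rating formula on the slide:
i, j, k = 0, 1, 2
f_ijk = np.einsum('abc,a,b,c->', S, U[i], M[j], C[k])
assert np.isclose(F[i, j, k], f_ijk)
```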
Regularization
Ω[U, M, C] = λM ‖M‖²F + λU ‖U‖²F + λC ‖C‖²F
Ω[S] := λS ‖S‖²F (1)
Squared Error Loss Function
Many implementations of MF use a simple squared error regression loss function
l(f, y) = (1/2)(f − y)²
thus the loss over all users and items is:
L(F, Y) = Σ_{i=1}^n Σ_{j=1}^m l(fij, yij)
Note that this loss provides an estimate of the conditional mean.
Absolute Error Loss Function
Alternatively one can use the absolute error loss function
l(f, y) = |f − y|
thus the loss over all users and items is:
L(F, Y) = Σ_{i=1}^n Σ_{j=1}^m l(fij, yij)
which provides an estimate of the conditional median.
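The mean/median distinction on these two slides can be checked numerically; a small sketch, with toy ratings and a grid search as illustrative assumptions:

```python
import numpy as np

# For fixed observed ratings y, the constant prediction f minimizing the
# total squared error is the mean; the one minimizing the total absolute
# error is the median.
y = np.array([1.0, 2.0, 5.0])

sq_loss = lambda f: 0.5 * np.sum((f - y) ** 2)
abs_loss = lambda f: np.sum(np.abs(f - y))

grid = np.linspace(0, 6, 601)
best_sq = grid[np.argmin([sq_loss(f) for f in grid])]
best_abs = grid[np.argmin([abs_loss(f) for f in grid])]

print(best_sq)   # close to np.mean(y)
print(best_abs)  # close to np.median(y)
```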
Optimization - Stochastic Gradient Descent for TF
The partial gradients with respect to U, M, C and S can then be written as:
∂Ui∗ l(Fijk, Yijk) = ∂Fijk l(Fijk, Yijk) S ×M Mj∗ ×C Ck∗
∂Mj∗ l(Fijk, Yijk) = ∂Fijk l(Fijk, Yijk) S ×U Ui∗ ×C Ck∗
∂Ck∗ l(Fijk, Yijk) = ∂Fijk l(Fijk, Yijk) S ×U Ui∗ ×M Mj∗
∂S l(Fijk, Yijk) = ∂Fijk l(Fijk, Yijk) Ui∗ ⊗ Mj∗ ⊗ Ck∗
We then iteratively update the parameter matrices and tensors using the following update rules:
U^{t+1}_{i∗} = U^t_{i∗} − η ∂U L − η λU Ui∗
M^{t+1}_{j∗} = M^t_{j∗} − η ∂M L − η λM Mj∗
C^{t+1}_{k∗} = C^t_{k∗} − η ∂C L − η λC Ck∗
S^{t+1} = S^t − η ∂S l(Fijk, Yijk) − η λS S
where η is the learning rate.
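The update rules above can be sketched for a single observed rating with squared error loss; the sizes, η, λ, and the rating value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, c, d = 4, 5, 3, 2
U = rng.normal(scale=0.5, size=(n, d))
M = rng.normal(scale=0.5, size=(m, d))
C = rng.normal(scale=0.5, size=(c, d))
S = rng.normal(scale=0.5, size=(d, d, d))   # core tensor

eta, lam = 0.01, 0.01        # learning rate and regularization (illustrative)
i, j, k, y = 1, 2, 0, 4.0    # one observed rating Y_ijk

def predict(U, M, C, S, i, j, k):
    # F_ijk = S x_U U_i* x_M M_j* x_C C_k*
    return np.einsum('abc,a,b,c->', S, U[i], M[j], C[k])

f0 = predict(U, M, C, S, i, j, k)
for _ in range(300):
    f = predict(U, M, C, S, i, j, k)
    g = f - y                                           # dl/dF, squared error
    gU = g * np.einsum('abc,b,c->a', S, M[j], C[k])     # gradient w.r.t. U_i*
    gM = g * np.einsum('abc,a,c->b', S, U[i], C[k])     # gradient w.r.t. M_j*
    gC = g * np.einsum('abc,a,b->c', S, U[i], M[j])     # gradient w.r.t. C_k*
    gS = g * np.einsum('a,b,c->abc', U[i], M[j], C[k])  # outer product, w.r.t. S
    U[i] -= eta * gU + eta * lam * U[i]
    M[j] -= eta * gM + eta * lam * M[j]
    C[k] -= eta * gC + eta * lam * C[k]
    S    -= eta * gS + eta * lam * S

f1 = predict(U, M, C, S, i, j, k)   # prediction moves toward y
```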
Optimization - Stochastic Gradient Descent for TF
Figure: illustration (over several slides) of SGD sweeping over observed entries of the rating tensor, updating U, M, C and the core tensor S.
Experimental evaluation
We evaluate our model on contextual rating data by computing the Mean Absolute Error (MAE), using 5-fold cross validation, defined as follows:
MAE = (1/K) Σ_{ijk}^{n,m,c} Dijk |Yijk − Fijk|
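A sketch of this MAE computation over a held-out fold, where D is the indicator of test entries and K their count; the random data stands in for real predictions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, c = 4, 5, 3
Y = rng.integers(1, 6, size=(n, m, c)).astype(float)   # true ratings
F = Y + rng.normal(scale=0.5, size=Y.shape)            # hypothetical predictions
D = rng.random(size=Y.shape) < 0.3                     # indicator of test entries

K = D.sum()                                            # number of test ratings
mae = np.sum(D * np.abs(Y - F)) / K
print(mae)
```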
Data
Data set   Users   Movies   Context Dim.   Ratings   Scale
Yahoo!      7642    11915   2              221K      1-5
Adom.         84      192   5              1464      1-13
Food         212       20   2              6360      1-5
Table: Data set statistics
Context Aware Methods
Pre-filtering based approach (G. Adomavicius et al.), which computes recommendations using only the ratings made in the same context as the target one.
Item splitting method (L. Baltrunas, F. Ricci), which identifies items that have significant differences in their ratings under different context situations.
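The pre-filtering idea can be sketched in a few lines; the data layout and context labels here are hypothetical:

```python
# Keep only the training ratings made in the same context as the target one.
ratings = [
    # (user, item, context, rating)
    (0, 1, 'weekend', 4.0),
    (0, 2, 'weekday', 2.0),
    (1, 1, 'weekend', 5.0),
    (1, 2, 'weekend', 3.0),
]

def prefilter(ratings, target_context):
    """Return (user, item, rating) triples observed in the target context."""
    return [(u, i, r) for (u, i, ctx, r) in ratings if ctx == target_context]

train = prefilter(ratings, 'weekend')
print(train)   # [(0, 1, 4.0), (1, 1, 5.0), (1, 2, 3.0)]
```

A standard 2D recommender is then trained on the filtered triples, one model per target context.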
Results: Context vs. No Context
Figure: Comparison of matrix (no context) and tensor (context) factorization, by MAE, on (a) the Adom data and (b) the Food data.
Yahoo Artificial Data
Figure: Comparison of context-aware methods (No Context, Reduction, Item-Split, Tensor Factorization) on the Yahoo! artificial data, one MAE panel per α = 0.1, 0.5, 0.9.
Yahoo Artificial Data
Figure: MAE of No Context, Reduction, Item-Split and Tensor Factorization as a function of the probability of contextual influence.
Tensor Factorization
Figure: Comparison of context-aware methods (Reduction, Item-Split, Tensor Factorization), by MAE, on the Adom data.
Tensor Factorization
Figure: Comparison of context-aware methods (No context, Reduction, Tensor Factorization), by MAE, on the Food data.
Conclusions
Tensor Factorization methods seem to be promising for CARS
Many different TF methods exist
Future work: extend to implicit taste data
Tensor representation of context data seems promising
Thank You!