5. Sources of Recommendation Knowledge
Transactional & Behavioural Data
Clicks, purchases, likes, ratings, and actions such as saving to a wish-list.
Content & Metadata
Features and tags, both structured and unstructured.
Experiential Data
User-generated opinions, grounded in real subjective experiences rather than objective catalog metadata.
9. ... focus on extracting only the features and sentiment from a collection
of reviews for a product ...
(e, f, s, h, t)
... to produce aggregate product descriptions from e1,…,en.
11. The Anatomy of an Opinion
The Fuji X100 is a great camera. It looks beautiful and takes great quality images.
I have found the battery life to be superb during normal use. I only seem to charge
after well over 1000 shots. The build quality is excellent and it is a joy to hold.
The camera is not without its quirks however and it does take some getting used to.
The auto focus can be slow to catch, for example. So it's not so good for action shots
but it does take great portraits and its night shooting is excellent.
[Figure: overview of the review-mining pipeline. Product reviews R1, …, Rk feed a Feature Extraction stage (bi-gram analysis of ANs & NNs, unigram analysis of Ns, then validation and filtering) and a Sentiment Mining stage (IdentifySentimentWords, ExtractOpinionPatterns, SentimentAssignment), producing feature-sentiment pairs (F1,S1) … (Fn,Sn); these are filtered and summarised (FilterFeatures, SummariseSentiment) to generate cases. (Dong et al., ICCBR 2013)]
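A rough code sketch may help make the pipeline stages concrete. The tokenisation, toy sentiment lexicon, and window-based pairing below are illustrative assumptions, not the method of Dong et al., which uses proper bi-gram/unigram part-of-speech analysis and opinion patterns.

```python
# Illustrative sketch of the review-mining pipeline above (not the Dong et al.
# implementation): candidate feature words are collected from reviews, paired
# with nearby sentiment words, and the feature-sentiment pairs are aggregated
# into a per-product summary.
from collections import Counter

# Toy sentiment lexicon; a real system would use a full lexicon plus
# opinion patterns to assign polarity in context.
SENTIMENT = {"great": +1, "excellent": +1, "superb": +1, "beautiful": +1,
             "slow": -1, "poor": -1, "quirks": -1}

def extract_pairs(review, window=3):
    """Yield (feature, polarity) pairs: a candidate feature word paired
    with the polarity of any sentiment word within `window` tokens."""
    tokens = [t.strip(".,!?").lower() for t in review.split()]
    for i, tok in enumerate(tokens):
        if tok not in SENTIMENT:
            continue
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            cand = tokens[j]
            # crude noun filter standing in for unigram/bigram POS analysis
            if cand not in SENTIMENT and len(cand) > 3:
                yield cand, SENTIMENT[tok]

def summarise(reviews, min_mentions=2):
    """Aggregate feature-sentiment pairs across reviews into a case:
    feature -> (mentions, fraction of positive mentions)."""
    pos, total = Counter(), Counter()
    for r in reviews:
        for feature, polarity in extract_pairs(r):
            total[feature] += 1
            if polarity > 0:
                pos[feature] += 1
    return {f: (n, pos[f] / n) for f, n in total.items() if n >= min_mentions}

reviews = ["The battery life is superb and the build quality is excellent.",
           "Autofocus is slow but image quality is great."]
print(summarise(reviews, min_mentions=1))
```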
15. Opinionated Explanations
Opinionated Explanation Interfaces
Incorporating explanations into recommendation interfaces.
From Basic to Compelling Explanations
Filtering and specialising explanations.
Explanation-Based Ranking
Using explanation strength to rank recommendations.
18. Hotel/Item Descriptions
Each hotel hi is associated with a set of reviews, R(hi) = {r1,…,rn}.
The opinion-mining process extracts features f1,…,fm for hi from R(hi).
19. Item Descriptions
Each feature fj has an importance score, imp(fj, hi), and a sentiment score, s(fj, hi)…
Note: pos(fj, hi) = the number of mentions of fj in the reviews of hi labeled as positive.
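The slide does not give the formulas, so the following is only a plausible sketch: sentiment as the positive share of a feature's mentions, and importance as its relative mention frequency; both definitions and the example numbers are assumptions.

```python
# Hedged sketch of the per-feature scores (assumed definitions, not quoted
# from the slides).
def sentiment_score(pos, neg):
    """s(fj, hi): fraction of sentiment-labeled mentions of fj that are positive."""
    return pos / (pos + neg) if (pos + neg) else 0.0

def importance_score(mentions, total_mentions):
    """imp(fj, hi): how often fj is mentioned relative to all features of hi."""
    return mentions / total_mentions if total_mentions else 0.0

# e.g. "Free Breakfast": 12 mentions for a hotel (9 positive, 3 negative),
# out of 200 feature mentions for that hotel overall.
print(sentiment_score(9, 3), importance_score(12, 200))  # 0.75 0.06
```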
22. From Basic to Compelling Explanations
Not all of the features in an explanation make for strong
reasons to select or reject a hotel.
In the previous example, "Free Breakfast" is a pro, but it is only
better than 10% of the alternatives.
We can use better/worse scores to help identify compelling
features and filter out weaker, less compelling features.
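One way this filtering might look in code; the 0.7 pro cut-off, the 0.5 compellingness threshold, and the better-score definition (fraction of alternatives beaten on sentiment) are assumptions for illustration, not the published method.

```python
# Hedged sketch: a pro is kept only if its better-than-alternatives score
# clears a threshold (cons would use a worse score symmetrically).
def better_score(feature, hotel, alternatives, sentiment):
    """Fraction of the alternatives that `hotel` beats on this feature's sentiment."""
    others = [a for a in alternatives if a != hotel]
    return sum(sentiment[a][feature] < sentiment[hotel][feature] for a in others) / len(others)

def compelling_pros(hotel, alternatives, sentiment, threshold=0.5):
    pros = [f for f, s in sentiment[hotel].items() if s >= 0.7]  # assumed pro cut-off
    return [f for f in pros
            if better_score(f, hotel, alternatives, sentiment) >= threshold]

sentiment = {"h1": {"Free Breakfast": 0.90, "Location": 0.80},
             "h2": {"Free Breakfast": 0.95, "Location": 0.40},
             "h3": {"Free Breakfast": 0.92, "Location": 0.50}}
print(compelling_pros("h1", list(sentiment), sentiment))  # ['Location']
```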
26. But the link between
recommendations and explanations
remains tenuous …
27. Explanation-Based Ranking
But what if explanations played a more intimate role in
recommendation / ranking?
The idea is to rank recommendations based on the
strength of their corresponding explanation.
In other words, a recommendation should be preferred if it
can be explained in a compelling way to the user.
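A minimal sketch of the idea, assuming explanation strength is the sum of the better scores of a hotel's compelling pros minus the worse scores of its compelling cons; the actual strength function is not specified on this slide.

```python
# Hedged sketch of explanation-based ranking: score each candidate by the
# strength of its explanation and sort, strongest first.
def explanation_strength(explanation):
    """explanation: list of (feature, kind, score), kind in {'pro', 'con'}."""
    return sum(score if kind == "pro" else -score for _, kind, score in explanation)

def rank_by_explanation(candidates):
    """candidates: {hotel_id: explanation}; returns ids, strongest explanation first."""
    return sorted(candidates, key=lambda h: explanation_strength(candidates[h]), reverse=True)

candidates = {"h1": [("Location", "pro", 1.0), ("WiFi", "con", 0.2)],
              "h2": [("Free Breakfast", "pro", 0.1), ("Noise", "con", 0.8)]}
print(rank_by_explanation(candidates))  # ['h1', 'h2']
```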
30. Evaluation
5,179 TripAdvisor users, each with at least 4 reviews; 224,760 hotel
reviews for 2,298 hotels.
For each user uT, take their reviewed (booked) hotel hB plus the top
TripAdvisor alternatives, giving a session uT : {hB, h1, …, h9}.
Generate explanations for each of the 10 "recommended" hotels and
rank them by explanation strength.
31. Baseline Comparisons
As a baseline we can also rank the recommended hotels
based on TA’s average review ratings.
Compare the number of pros/cons in explanations by recommendation
rank; a higher-ranked recommendation should have more pros and
fewer cons.
Compare position of booked hotel in baseline-ranking vs.
explanation-ranked recommendations.
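A small sketch of this comparison, with made-up scores: compute the booked hotel's position under the baseline (average-rating) ranking and under the explanation-strength ranking.

```python
# Hedged sketch of the evaluation comparison; all numbers are illustrative.
def rank_of(hotel, scores):
    """1-based position of `hotel` when candidates are sorted by score, best first."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(hotel) + 1

baseline = {"hB": 4.0, "h1": 4.5, "h2": 3.9, "h3": 4.2}      # avg. TripAdvisor ratings
explanation = {"hB": 0.8, "h1": 0.3, "h2": 0.5, "h3": -0.1}  # explanation strengths
print(rank_of("hB", baseline), rank_of("hB", explanation))   # 3 1
```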
34. Results Summary
Ranking recommendations by explanation strength prioritises
better hotels (more pros and fewer cons).
Ranking by TA review score produces less compelling
recommendations.
On average, the booked hotel occurred at rank position 5
→ it was not necessarily the best choice for the user at the time?
35. Conclusions
User reviews can be a rich source of recommendation
knowledge.
Novel approach to explanation based on features mined
from user-generated reviews.
Creating personalised and compelling explanations.
Using explanations to guide recommendation ranking.