7. CONTEXT
• Personal Learning Environment:
  • customizable
  • re-use, creation & mashup of tools and resources
  • enables users to access content in different contexts
24. PAGE RANK
[Diagram: web sites connected by hyperlinks]
Rank of node i: r(i) = Σ_{j → i} r(j) / outdegree(j)
• A node is important if and only if many other important nodes point to it
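As a concrete illustration (not the speaker's implementation), a minimal Python power-iteration sketch of this rank definition on a toy hyperlink graph; the damping factor and uniform teleport are the usual PageRank assumptions, added here for numerical stability:

```python
# A minimal sketch of the PageRank idea on this slide: a node's rank is
# the sum of the ranks of the nodes pointing to it, each divided by that
# node's out-degree. Damping/teleport are standard additions, not from
# the talk.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each node to the list of nodes it points to."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for src, targets in links.items():
            if targets:  # distribute src's rank over its out-links
                share = damping * rank[src] / len(targets)
                for dst in targets:
                    new_rank[dst] += share
            else:  # dangling node: spread its rank uniformly
                for node in nodes:
                    new_rank[node] += damping * rank[src] / n
        rank = new_rank
    return rank

# Toy web graph: pages pointing to each other via hyperlinks.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(web))  # "C" ends up most important: many pages point to it
```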
25. OUR CASE
[Graph diagram: users Sten, Sandy, and Erik connected to resources R1–R5 through relations such as saved/shared, liked, disliked, and friend connections]
26. NOW FOR MULTI-DIRECTIONAL, PERSONALIZED & CONTEXTUAL RANKING
A node is important to a particular set of nodes (representing the target user and the context) if and only if many important nodes connected to this root set, via important relation types, point to it.
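A minimal sketch of what such a ranking could look like in Python: restarts teleport to the root set instead of to all nodes, and edges carry per-relation-type weights. The relation names echo the "OUR CASE" graph; the weight values and the exact propagation rule are illustrative assumptions, not the algorithm from the talk.

```python
# Personalized, contextual PageRank sketch: random-walk restarts jump back
# to a "root set" (the target user plus context nodes), and edges are
# weighted by relation type. The weights below are invented for
# illustration.

RELATION_WEIGHTS = {"liked": 1.0, "saved/shared": 0.8, "friend": 0.5, "disliked": 0.1}

def personalized_rank(edges, root_set, damping=0.85, iterations=50):
    """edges: list of (source, relation, target) triples."""
    nodes = {n for s, _, t in edges for n in (s, t)}
    # Total outgoing weight per node, for normalizing edge weights.
    out_weight = {n: 0.0 for n in nodes}
    for s, rel, _ in edges:
        out_weight[s] += RELATION_WEIGHTS[rel]
    rank = {n: (1.0 / len(root_set) if n in root_set else 0.0) for n in nodes}
    for _ in range(iterations):
        # Teleport mass goes only to the root set; rank reaching sink
        # nodes (resources with no out-edges) is simply dropped here,
        # which is good enough for an illustration.
        new_rank = {n: ((1.0 - damping) / len(root_set) if n in root_set else 0.0)
                    for n in nodes}
        for s, rel, t in edges:
            if out_weight[s] > 0:
                new_rank[t] += damping * rank[s] * RELATION_WEIGHTS[rel] / out_weight[s]
        rank = new_rank
    return rank

# Toy version of the "OUR CASE" graph: users relate to resources R1-R5.
edges = [("Sten", "saved/shared", "R1"), ("Sten", "liked", "R2"),
         ("Sandy", "disliked", "R2"), ("Sandy", "liked", "R3"),
         ("Sten", "friend", "Sandy"), ("Erik", "liked", "R4")]
print(personalized_rank(edges, root_set={"Sten"}))
```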
27. EVALUATION
• 15 PhD students at K.U. Leuven and EPFL.
• What?
• usability
• user satisfaction
• usefulness
28. FIRST PHASE
• current media search tools: Google & YouTube
• understanding of recommendations: only 6/15 realized they were derived from like/dislike actions
36. SECOND PHASE
• only 14 participants (one fewer)
• open questions
• usefulness of recommendations: 11/14 positive
• user satisfaction: System Usability Scale (SUS) & Microsoft Desirability Toolkit
• SUS score: 66.25%
• 2 groups:
  • K.U. Leuven: high (75%)
  • EPFL + one K.U. Leuven: low (50%)
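For readers unfamiliar with SUS, this is how a score like the 66.25% above is derived from the ten questionnaire items; the ratings in the example are invented:

```python
# SUS scoring: 10 statements rated 1-5; odd-numbered (positive) items
# contribute (rating - 1), even-numbered (negative) items contribute
# (5 - rating); the sum is scaled by 2.5 to a 0-100 range.

def sus_score(ratings):
    """ratings: list of 10 responses (1-5), in questionnaire order."""
    assert len(ratings) == 10
    total = 0
    for i, r in enumerate(ratings, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Made-up responses from one participant.
print(sus_score([4, 2, 4, 2, 3, 2, 4, 3, 4, 2]))  # -> 70.0
```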
38. WHY THE DIFFERENT SUS?
• 1st phase was conducted by 2 interviewers
41. WHY THE DIFFERENT SUS?
• issues:
  • distraction caused by unrelated widgets' UI updates
  • layout too dense
  • height of widgets too small
  • K.U. Leuven students had prior experience with iGoogle
  • not evaluating the widget but the whole experience...
55. RECOMMENDATIONS EVALUATION
• compare recommendations to the participants' favourite tool: Google
• 2 groups with different queries
• Precision = (# relevant items returned) / (# total items returned)
• Precision at N = (# relevant items returned in top-N list) / (# total items returned)
• Google: more relevant results
  • Google: avg. precision at 10 = 65%, with less variation in results
  • widget: avg. precision at 10 = 50%
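A small sketch of the precision-at-N measure used above, with invented result lists for illustration:

```python
# Precision at N: the fraction of the top-N returned items that are
# relevant. With a full top-10 list, 5 relevant items give 50%, matching
# the widget's figure above (the item ids here are made up).

def precision_at_n(returned, relevant, n=10):
    """returned: ranked result list; relevant: set of relevant item ids."""
    top_n = returned[:n]
    return len([item for item in top_n if item in relevant]) / len(top_n)

results = ["r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10"]
relevant = {"r1", "r2", "r4", "r6", "r7"}
print(precision_at_n(results, relevant, n=10))  # -> 0.5, i.e. 50%
```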
64. FUTURE WORK
• evaluation in a larger-scale real-world situation (university + business)
• evaluate user satisfaction of the widget and not the container
• evaluate the recommendations further (based on use)
• make recommendations transparent