AI for Leaders
The Power of Why
Explanation Interfaces for Decision Support
Systems
Emily Sullivan
About me
- 2016 Ph.D. in Philosophy
- Philosophy of Science
- Epistemology
- Post-Doc at TU Delft since 2017
- Web Information Systems
2
Research area & Domains
3
Research area & Domains
Trusted Data Analytics
4
Research area & Domains
Trusted Data Analytics
Epsilon Lab
Human-Understandable Explanations
5
EPSILON LAB – EXPLANATIONS
Post-docs
Oana Inel
Emily Sullivan
Mesut Kaya (research
engineer DDS)
PhDs
Shabnam Najafian
Yucheng Jin (KU
Leuven)
Tim Draws
Twitter (TBA)
AITech (TBA)
NL4XAI (TBA)
Nava Tintarev
6
Research area & Domains
Music News
Travel Education
7
Outline
- Introducing decision support systems
- Introducing the value of explanations and
explanation interfaces
- Explanation Research
- Explanation Pipeline
- Research Challenges
- FD Media Case
- Current and Future Epsilon Directions
- Addressing new challenges
- Entering new domains and mediums
8
What is a Decision Support System?
What is a Decision Support System?
A computational system that assists with decision-making
10
What is a Decision Support System?
A computational system that assists with decision-making
- Expert systems
11
What is a Decision Support System?
A computational system that assists with decision-making
- Expert systems
- Recommender systems
12
Expert
systems
• Use-case data is collected
• The expert system finds
patterns in the data
• The expert system
determines a classification
(e.g. inspect vs. not inspect)
13
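To make this concrete, here is a minimal sketch of a rule-based "inspect vs. not inspect" classifier; the feature names and thresholds are hypothetical and not taken from any system discussed here.

```python
# Minimal sketch of a rule-based "inspect vs. not inspect" classifier.
# All feature names and thresholds are hypothetical illustrations.

def classify_case(case: dict) -> str:
    """Return 'inspect' or 'not inspect' based on simple hand-crafted rules."""
    # Rule 1: cases with many past violations are always inspected.
    if case.get("past_violations", 0) >= 3:
        return "inspect"
    # Rule 2: recently inspected, low-risk cases are skipped.
    if case.get("months_since_last_inspection", 99) < 12 and case.get("risk_score", 0.0) < 0.3:
        return "not inspect"
    # Default: inspect only when the risk score crosses a threshold.
    return "inspect" if case.get("risk_score", 0.0) >= 0.5 else "not inspect"

print(classify_case({"past_violations": 1, "risk_score": 0.7}))  # -> inspect
```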
Recommender
systems
• User identifies one or more
objects as being of interest
• The recommender system
suggests other objects that
are similar (infers liking)
• Ranks the options, filters
out lower ranking options
14
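A minimal sketch of the "suggest similar items, rank, and filter" idea, assuming items are represented as toy feature vectors (all data hypothetical):

```python
# Minimal sketch of "suggest similar items, rank, and filter".
# Items are represented as feature vectors; everything here is hypothetical toy data.
import numpy as np

item_vectors = {
    "item_a": np.array([1.0, 0.0, 1.0]),
    "item_b": np.array([0.9, 0.1, 0.8]),
    "item_c": np.array([0.0, 1.0, 0.0]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def recommend(liked_item: str, k: int = 2, min_score: float = 0.5):
    """Rank other items by similarity to the liked item and filter out low-ranking ones."""
    scores = {
        name: cosine(item_vectors[liked_item], vec)
        for name, vec in item_vectors.items() if name != liked_item
    }
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(name, s) for name, s in ranked[:k] if s >= min_score]

print(recommend("item_a"))  # item_b scores high; item_c is filtered out
```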
15
Why Explanation?
Why Explanation?
INSPECT
DON’T INSPECT
17
Why Explanation?
User profile Recommendations
18
Why Explanation?
Case: Job Match
Job Match gives job
recommendations to job-seekers
based on their CV.
Companies seeking quality workers
get candidate recommendations
based on CV evaluation.
19
Why Explanation?
Case: Job Match
- What is the stakeholder seeing,
thinking, feeling, saying?
- What does the stakeholder lose if
no explanation is given?
- What does the stakeholder lose if
the explanation is bad?
- What is the stakeholder willing to
act on to remedy the situation?
20
Scenario 1: Job – Seeker
Janna has 15 years of experience in HR. She is
looking for a new job where she can use her
experience to make an impact.
Janna gets a list of potential jobs from Job
Match. The jobs seem interesting, but they are
not the challenge she is looking for.
(NO Explanation)
21
Scenario 1: Job – Seeker
Janna has 15 years of experience in HR. She is
looking for a new job where she can use her
experience to make an impact.
Janna gets a list of potential jobs from Job
Match. The jobs seem interesting, but they
are not the challenge she is looking for.
(NO Explanation)
22
- What is the stakeholder seeing,
thinking, feeling, saying?
- What does the stakeholder lose if no
explanation is given?
- What does the stakeholder lose if the
explanation is bad?
- What is the stakeholder willing to act
on to remedy the situation?
Scenario 1: Job – Seeker
Janna has 15 years of experience in HR. She is
looking for a new job where she can use her
experience to make an impact.
Job Match introduces a new explanation
feature. The jobs are said to be
recommended because she has < 5 years of
work experience. (Bad Explanation)
23
- What is the stakeholder seeing,
thinking, feeling, saying?
- What does the stakeholder lose if no
explanation is given?
- What does the stakeholder lose if the
explanation is bad?
- What is the stakeholder willing to act
on to remedy the situation?
Scenario 2: Company Seeking Worker
NLCompany is looking for ambitious workers
with at least 2 years of experience in data
science.
Job Match gives them a list of job candidates
that they should consider interviewing. The
candidates seem like good fits.
(NO Explanation)
NLCompany.nl
24
Scenario 2: Company Seeking Worker
25
NLCompany.nl
- What is the stakeholder seeing, thinking,
feeling, saying?
- What does the stakeholder lose if no
explanation is given?
- What does the stakeholder lose if the
explanation is bad?
- What is the stakeholder willing to act on
to remedy the situation?
NLCompany is looking for ambitious
workers with at least 2 years of
experience in data science.
Job Match gives them a list of job
candidates that they should consider
interviewing. The candidates seem like
good fits. (NO Explanation)
Scenario 2: Company Seeking Worker
26
NLCompany.nl
- What is the stakeholder seeing, thinking,
feeling, saying?
- What does the stakeholder lose if no
explanation is given?
- What does the stakeholder lose if the
explanation is bad?
- What is the stakeholder willing to act on
to remedy the situation?
NLCompany is looking for ambitious
workers with at least 2 years of
experience in data science.
Job Match introduces a new explanation
feature. With the next recommended job
candidates, the explanation says the
candidates are recommended because
NLCompany has hired male data scientists in
the past and these candidates are male.
(BAD explanation)
Scenario 3: Job Match Company
Job Match is a start-up that spent a lot
of resources developing their job
match recommender system.
After several months, Job Match hasn’t
seen the level of user engagement with
the recommendations as they expected.
(NO explanation)
JOB MATCH
27
- What is the stakeholder seeing,
thinking, feeling, saying?
- What does the stakeholder lose if no
explanation is given?
- What does the stakeholder lose if the
explanation is bad?
- What is the stakeholder willing to act
on to remedy the situation?
Scenario 3: Job Match Company
Job Match is a start-up that spent a lot of
resources developing their job match
recommender system.
28
After several months, Job Match hasn’t
seen the level of user engagement with
the recommendations as they expected.
(NO explanation)
- What is the stakeholder seeing,
thinking, feeling, saying?
- What does the stakeholder lose if no
explanation is given?
- What does the stakeholder lose if the
explanation is bad?
- What is the stakeholder willing to act
on to remedy the situation?
Scenario 3: Job Match Company
Job Match is a start-up that spent a lot of
resources developing their job match
recommender system.
29
Job Match introduces a basic explanation
feature. After it is implemented, several
users begin to complain and cancel their
service. (BAD explanation)
NLCompany.nl
Why Explanation?
30
- Lack of understanding
- Frustration
No Explanation
No Explanation
- Unengaged users
- Lost revenue
No Explanation
- Lack of understanding
- Prevents discussion about desired
employee characteristics
NLCompany.nl
Why Explanation?
31
- Lack of understanding
- Frustration
No Explanation
No Explanation
No Explanation
BAD Explanation BAD Explanation
BAD Explanation
- Unengaged users
- Lost revenue
- Wrong information, no way to update
- Lost confidence
- Goes against company values
- Lost confidence
- Possible legal issues
- Bias
- Points to underlying problems with
the system
- Lost user confidence
- Lost revenue
- Possible legal issues
- Lack of understanding
- Prevents discussion about desired
employee characteristics
Why Explanation?
Explanations can
serve to establish
several aims.
Tintarev and Masthoff (2007)
32
Why Explanation?
These aims can
conflict
Tintarev and Masthoff (2007)
33
Why Explanation?
To link the mental models
of both systems and
people, our work develops
ways to supply users with
a level of transparency
and control that is
meaningful and useful to
them.
Tintarev and Masthoff (2007)
34
Explanations in
Recommender
Systems
Unfortunately, this movie
belongs to at least one genre
you do not want to see:
Action & Adventure. It also
belongs to the genre(s):
Comedy. This movie stars Jo
Marr and Robert Redford.
Tintarev and Masthoff 2012
35
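The explanation above reads like a template filled with item metadata. A hypothetical sketch of that template idea (not the generator used in the cited study):

```python
# Hypothetical template-based explanation, in the spirit of the movie example above.
def explain_movie(disliked_genres: list, other_genres: list, stars: list) -> str:
    parts = []
    if disliked_genres:
        parts.append(
            "Unfortunately, this movie belongs to at least one genre you do not want to see: "
            + ", ".join(disliked_genres) + "."
        )
    if other_genres:
        parts.append("It also belongs to the genre(s): " + ", ".join(other_genres) + ".")
    if stars:
        parts.append("This movie stars " + " and ".join(stars) + ".")
    return " ".join(parts)

print(explain_movie(["Action & Adventure"], ["Comedy"], ["Jo Marr", "Robert Redford"]))
```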
Explanation Research
(in Epsilon Lab)
Explanation Research
- Methods for generating and interpreting rich meta-data for explanation.
37
Explanation Research
- Methods for generating and interpreting rich meta-data for explanation.
What features are important for selecting
articles that represent diverse viewpoints?
38
Explanation Research
- Develop theoretical frameworks for generating better
explanations (as both text and interactive explanation
interfaces).
39
Explanation Research
- Develop theoretical frameworks for generating better
explanations (as both text and interactive explanation
interfaces).
When to explain
What to explain
Adapting to context: e.g., surprising content, risk,
complexity;
Adapting to users: e.g., group dynamics, personal
characteristics of a user.
40
Explanation Research Pipeline
Problem Space
Explanation
Framework
Selecting and
interpreting
meta-data / item
features
Explanation
generation
Interface design
Evaluation
User testing
41
Explanation Research Challenges
Challenges
Some Technical Challenges
- Sparse data (gaps in the data)
- Messy data; unstructured data; poor ontology
- Low model confidence
43
Challenges
Some Technical Challenges
- Sparse data (gaps in the data)
- Messy data; unstructured data; poor ontology
- Low model confidence
44
Explain low model
confidence to aid
decision support
Challenges
Often the challenges are not technical but cognitive.
45
Challenges
Often the challenges are not technical but cognitive.
- Competing interests among stakeholders
46
Challenges
Often the challenges are not technical but cognitive.
- Competing interests among stakeholders
- Competing functions of explanations
47
Challenges
Often the challenges are not technical but cognitive.
- Competing interests among stakeholders
- Competing functions of explanations
- Different expertise / capacities of users
48
Challenges
Often the challenges are not technical but cognitive.
- Competing interests among stakeholders
- Competing functions of explanations
- Different expertise / capacities of users
- Ethical challenges (e.g. bias, fairness, ‘nudging’)
49
Challenges
• Ethical challenges (e.g. bias, fairness, ‘nudging’)
50
Challenges
Often the challenges are not technical but cognitive.
- Competing interests among stakeholders
- Competing functions of explanations
- Different expertise / capacities of users
- Ethical challenges (e.g. bias, fairness, ‘nudging’)
51
Reading News
with a Purpose
Explaining user-profiles
for self-actualisation
E. Sullivan, D. Bountouridis, J. Harambam, S. Najafian, F.
Loecherbach, M. Makhortykh, D. Kelen, D. Wilkinson, D. Grauss, N.
Tintarev (2019)
53
Challenge: Competing interests
• Users want personalized content
• Increased engagement, satisfaction
• Users care about explainability & transparency
• Why is an item recommended?
• What personal data is stored?
• How do recommenders work?
Problem
Space
Challenge: Competing interests
• Journalists value journalistic
independence.
• Journalists want their articles to be
read.
• Journalists value their editorial
choices.
– What is important for users to read
Problem
Space
54
Challenge: Competing interests
• FD wants increased readership.
• FD values broadness, diversity,
autonomy, objectivity, and
controllability.
• FD wants to match users with their
needs.
Problem
Space
55
Challenge: Competing interests
• AI developers want data for their
recommender system.
• AI developers want to build systems
using exciting AI technology.
• AI developers value novel solutions.
Problem
Space
56
Challenge: Competing interests
• Transparency and satisfaction for users.
• Increased readership.
• Not threatening journalistic independence.
• Continued data collection.
Problem
Space
57
58
Context: the recommendation pipeline
[Diagram] User-data (e.g. reading history, clicked items) → Rec engine (e.g. User-KNN) →
Inferred data (e.g. user-to-user similarities) → Recommendations (recommended items)
Problem
Space
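A minimal sketch of this pipeline with a User-KNN step, using toy click data; the data and neighbourhood size are hypothetical:

```python
# Minimal sketch of the pipeline above with a User-KNN step; toy data, hypothetical names.
import numpy as np

# User data: which articles each user clicked (rows = users, columns = articles).
clicks = np.array([
    [1, 1, 0, 0],   # user 0
    [1, 0, 1, 0],   # user 1
    [0, 1, 0, 1],   # user 2
], dtype=float)

# Inferred data: user-to-user cosine similarities.
norms = np.linalg.norm(clicks, axis=1, keepdims=True)
user_sim = (clicks / norms) @ (clicks / norms).T

def recommend_for(user: int, k: int = 1) -> list:
    """Recommend unclicked articles that the k most similar users clicked."""
    sims = user_sim[user].copy()
    sims[user] = -1.0                      # exclude the user themselves
    neighbours = np.argsort(sims)[::-1][:k]
    scores = clicks[neighbours].sum(axis=0) * (clicks[user] == 0)
    return list(np.argsort(scores)[::-1][: int((scores > 0).sum())])

print(recommend_for(0))  # articles read by user 0's nearest neighbour but not by user 0
```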
59
Motivation
• Focus on explaining the user profile
• Helps evaluate the appropriateness of recs. (Bonhard, Sasse, 2006)
• Relevant for the news domain (Graus et al., 2018)
• Facilitate self-actualization (Knijnenburg et al., 2016)
• Users understand why data is collected
[Diagram] User-data (e.g. reading history, clicked items) → Inferred data (e.g. user-to-user similarities)
Problem
Space
60
Motivation
• Questions emerging when explaining the user profile
• What is the purpose of the explanations?
• What user-goal do they serve?
• What type of user-control and visualization should they include?
• How do we weigh the stakeholder competing interests?
[Diagram] User-data (e.g. reading history, clicked items) → Inferred data (e.g. user-to-user similarities)
Problem
Space
Problem Space
Explanation
Framework
Selecting and
interpreting
meta-data / item
features
Explanation
generation
Interface design
Evaluation
User testing
61
Framework
Explanation
Framework
62
Framework: Self-actualisation
• Few studies have tried to connect transparency and
explanation with certain personal or societal values and
goals
• It is goal-directed and allows for user-control to achieve those
goals
• The user has direct control over which goal they want to
explore, and the recommendations that result from the
chosen goal
Explanation
Framework
63
Framework: Self-actualisation
Broadness, diversity, autonomy, objectivity,
match with the user needs, controllability
FD Values
I want to be an
expert
I want to stay
informed
I want to broaden
my horizon
I want to discover
the unexplored
Explanation
Framework
64
Problem Space
Explanation
Framework
Selecting and
interpreting
meta-data / item
features
Explanation
generation
Interface design
Evaluation
User testing
65
Data
• One month of real reading-behavior data from users of fd.nl,
the website of the Dutch newspaper Het Financieele Dagblad
• 100 user profiles
• 50 average reading activity
• 50 highly active
• 1600 articles
• Metadata
• Topic tags for articles (e.g. politics, economy)
• Word2vec features
66
Selecting and
interpreting
meta-data /
item features
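One common way to use word2vec features (assumed here, not specified on the slide) is to average word vectors into an article-level vector and compare articles by cosine similarity. A tiny sketch with hypothetical embeddings:

```python
# Sketch: turning word2vec features into article-level vectors and comparing them.
# The word vectors below are tiny hypothetical stand-ins for real embeddings.
import numpy as np

word_vectors = {
    "economy":  np.array([0.9, 0.1, 0.0]),
    "politics": np.array([0.7, 0.3, 0.1]),
    "football": np.array([0.0, 0.1, 0.9]),
}

def article_vector(tokens):
    """Average the word vectors of the tokens we have embeddings for."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

a = article_vector(["economy", "politics"])
b = article_vector(["football"])
print(round(cosine(a, b), 2))  # low similarity between a finance and a sports article
```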
Data Challenges: Clean Data
- We want to know the topics of articles
67
Selecting and
interpreting
meta-data /
item features
Data Challenges: Clean Data
- We want to know the topics of articles
- Each article had a list of “tags” (e.g. politics)
- The tags were not systematic
- The tag ontology changed over time
68
Explanation
generation
Interface
design
Selecting and
interpreting
meta-data /
item features
Item selection challenges
What features do we use to explain / recommend?
Broaden my Horizon
Discover the Unexplored
Selecting and
interpreting
meta-data /
item features
69
Item selection challenges
What features do we use to explain / recommend?
Compare user to others “like them”
Compare user to all users
Compare user to others working in the same industry
Compare user to their past selves
Compare user to publication history
Selecting and
interpreting
meta-data /
item features
70
Problem Space
Explanation
Framework
Selecting and
interpreting
meta-data / item
features
Explanation
generation
Interface design
Evaluation
User testing
71
Data
• One month of real reading-behavior data from users of fd.nl,
the website of the Dutch newspaper Het Financieele Dagblad
• 100 user profiles
• 50 average reading activity
• 50 highly active
• 4 maximally separated profiles for word clouds
• 1600 articles
• Metadata
• Topic tags for articles (e.g. politics, economy)
• Word2vec features
Explanation
generation
Interface
design
72
73
User study: Visualisation
[Visualisation of user-profile topics with Familiarity and Similarity scores]
Explanation
generation
Interface
design
User study: Topic scoring
• Familiarity
• User’s familiarity score with a topic: the ratio of
articles read by a user on a topic, over the total
number of articles published on that topic.
• Similarity
• The similarity between topics in the user profile
Explanation
generation
Interface
design
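A minimal sketch of the familiarity score as defined above, with hypothetical article counts:

```python
# Sketch of the familiarity score defined above; the counts are hypothetical.
from collections import Counter

published_per_topic = Counter({"politics": 200, "economy": 150, "tech": 50})
read_by_user = Counter({"politics": 20, "tech": 25})

def familiarity(user_reads: Counter, published: Counter) -> dict:
    """Ratio of articles the user read on a topic over all articles published on it."""
    return {
        topic: user_reads.get(topic, 0) / total
        for topic, total in published.items() if total > 0
    }

print(familiarity(read_by_user, published_per_topic))
# {'politics': 0.1, 'economy': 0.0, 'tech': 0.5} -> the user is most familiar with 'tech'
```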
75
User study: Goals
Your goal is to Broaden your Horizons. There may
be topics you do not normally read about but that you may actually find
interesting. Exploring this helps to build a broad perspective on the
issues that matter to you.
Your goal is to Discover the Unexplored. There
may be topics that you haven’t explored before
that may actually become new interests.
Exploring new topics can promote creativity and
objectivity.
Evaluation
User Study
76
User study: Visualisation
[Visualisation of user-profile topics with Familiarity and Similarity scores]
Explanation
generation
Interface
design
77
User study: Visualisation
• Why this visualisation?
• Allows for two goals in one visualisation
• Broaden horizon: we expect users to choose topics that have high(er) similarity (and
somewhat unfamiliar)
• Discover the unexplored: we expect users to click topics that have low similarity (and
unfamiliar)
78
User study: Design
Objective: "Which three topics do you want to explore for [goal]? Please select three."
Flow: pick a persona from four data-driven profiles (Persona 1–4) →
randomly assign goal order → explain the goals (Broaden Horizons, Discover the Unexplored) →
for each goal (Goal A, then Goal B): show visualisation → questionnaire
User study: Hypotheses
H1: Goal Framework.
Providing the user with different goals will influence which
topics they wish to read about next.
Evaluation
User Study
79
User study: Hypotheses
H2: Broaden Horizons.
We expect users to choose topics that have high similarity (H2a),
and high familiarity (H2b), compared to their non-selected topics.
The goal of broadening horizons is to make small steps outside
of current reading behavior.
Evaluation
User Study
80
User study: Hypotheses
H3: Discover the Unexplored.
We expect users to choose topics that have low similarity (H3a),
and low familiarity (H3b), compared to their non-selected topics.
The goal of discovering the unexplored urges users to explore
topics that are largely outside of their current interests.
Evaluation
User Study
81
82
Results
Participants
• 58 Amazon Mechanical Turk workers
• Time: January 25, 2019
• Location: U.S.A.
• Quality: Qualification as ‘master’ (reliable worker)
• Age: 3% 18-24; 41% 25-34; 34% 35-44; 11% 45-54; 11% 55 or older.
• Gender: 54% male; 43% female; 3% other
• Education: 2% less than high school; 16% high school or equivalent; 29% some
college, no degree; 41% bachelor's degree; 11% graduate degree
Evaluation
User Study
83
Results - summary
Broaden: more familiar topics selected (than with Discover the Unexplored).
Providing users with different goals influences their reading intentions; Discover could lead to
more new topics being read.
Discover: less familiar topics among those selected (compared to unselected).
This goal can encourage readers to explore topics beyond what they normally read, though the
new topics may still be related to topics they already read.
Both: higher variance of similarity for selected topics compared to unselected.
Either goal can encourage readers to explore completely unrelated topics; this needs more study.
Evaluation
User Study
Cognitive challenge
Evaluation
User Study
84
Challenges
The challenges are often not technical but cognitive.
- Competing interests among stakeholders
- Competing functions of explanations
- Different expertise / capacities of users
- Ethical challenges (e.g. bias, fairness, ‘nudging’)
85
Other Research Challenges and Solutions
with on-going research projects
How to recommend items to groups when no best option (for all) exists?
PhD: Shabnam Najafian (Start Feb 2018)
Competing Values / Preferences
87
Competing Values / Preferences
How do we aggregate
preferences?
- Average rating?
- Mixture of each person’s
songs?
88
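Two common aggregation strategies, sketched with hypothetical group ratings; neither is claimed to be the strategy adopted in this project:

```python
# Sketch of two common aggregation strategies for group recommendation;
# the ratings are hypothetical, and neither strategy is claimed to be the project's choice.
ratings = {            # item -> each group member's rating
    "song_a": [5, 4, 1],
    "song_b": [3, 3, 3],
    "song_c": [4, 2, 3],
}

def average(rs):       # "average rating" strategy
    return sum(rs) / len(rs)

def least_misery(rs):  # pick something everyone can at least tolerate
    return min(rs)

for strategy in (average, least_misery):
    best = max(ratings, key=lambda item: strategy(ratings[item]))
    print(strategy.__name__, "->", best)
# average -> song_a; least_misery -> song_b (nobody strongly dislikes it)
```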
• Divergent preferences
• Everyone shares a
screen
• How much should we
say?
Competing Values / Preferences
89
Explanations helping to inform
Reflective Assessment of Online Videos
People are currently exposed to a growing
amount of controversial, polarized online
video content.
Natural Language Explanations in addition to
the video can foster reflective assessment in
viewers and help them make informed
decisions with respect to the videos.
Riot police clash with Catalan independence voters in Spain
Oana Inel
Reflective Assessment of Online Videos
Riot police clash with Catalan independence
voters in Spain
Use natural language explanations to provide information
regarding the source of the video
Reflective Assessment of Online Videos
Riot police clash with Catalan independence
voters in Spain
Use natural language explanations to provide information
regarding the controversial topics mentioned in the video
Reflective Assessment of Online Videos
Riot police clash with Catalan independence
voters in Spain
Use natural language explanations to provide information
regarding the evoked emotions in users’ comments
Reflective Assessment of Online Videos
Riot police clash with Catalan independence
voters in Spain
Use natural language explanations to provide information
regarding the evoked sentiments in video content and users’
comments
FairView: Explaining Video Summaries
Video summaries can provide a quick look
into long and dense video material
But there are many ways to summarize the
same video, which:
● could amplify or diminish a specific aspect
or perspective in the original video
● introduce a bias
● potentially lead to misinformation
Original
Video
Automatically
generated
video
summaries
FairView: Explaining Video Summaries
How can we make the video summarization
process more transparent?
Use visual explanations to show which key
concepts (in video summary subtitles and video
summary stream) are covered by the video
summary
green concepts - covered by the summary
size of the concepts - the larger the concept, the more prominent it is
FairView: Explaining Video Summaries
And how can we also measure the
representativeness of a video summary?
Use visual explanations to show the proportion
of key concepts covered by the video
summary
green slice - percentage of concepts covered by the summary
purple slice - percentage of concepts not covered by the summary
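A minimal sketch of the coverage measure behind the green and purple slices, with hypothetical concept sets:

```python
# Sketch of the coverage measure behind the green/purple slices; concept sets are hypothetical.
video_concepts = {"protest", "police", "ballot", "barricade", "crowd"}
summary_concepts = {"police", "crowd", "barricade"}

covered = video_concepts & summary_concepts
coverage_pct = 100 * len(covered) / len(video_concepts)

print(f"covered (green slice): {coverage_pct:.0f}%")             # 60%
print(f"not covered (purple slice): {100 - coverage_pct:.0f}%")  # 40%
```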
FairView: Explaining Video Summaries
Are the key concepts of the video well
represented in the video summary?
Use visual explanations to show how well the
key concepts of a video are represented in
video summaries
green concepts - covered by the summary
purple concepts - not covered by the summary
size of the concepts - the larger the concept, the more prominent it is in the video
100
101
102
AI for Leaders
The Power of Why
Explanation Interfaces for Decision Support
Systems
Emily Sullivan