TCAV is a method for interpreting machine learning models by quantitatively measuring how important user-chosen concepts are to a model's predictions, even if those concepts were not part of the model's training data or input features. It learns concept activation vectors (CAVs) that represent concepts and uses directional derivatives along the CAVs to measure the model's sensitivity to each concept. TCAV was shown to recover ground truth in sanity-check experiments, to uncover geographical biases in widely used models, and to compare the diagnostic concepts domain experts use for diabetic retinopathy against those the model relies on, helping ensure that our values and knowledge are properly aligned and reflected in the models we use.
Interpretability beyond feature attribution: Testing with Concept Activation Vectors (TCAV)
1. Interpretability beyond feature attribution:
Testing with Concept Activation Vectors
TCAV
Been Kim
Presenting work with a lot of awesome people inside and outside of Google:
Martin Wattenberg, Julius Adebayo, Justin Gilmer, Carrie Cai, James Wexler,
Fernanda Viégas, Ian Goodfellow, Moritz Hardt, Rory Sayres
2. My goal
interpretability
To use machine learning responsibly
we need to ensure that
1. our values are aligned
2. our knowledge is reflected
for everyone.
[Diagram: Machine Learning Models ↔ Human; image: http://blogs.teradata.com/]
13. Agenda
1. Revisiting existing methods:
Saliency maps
2. Making explanations using the way humans think:
Testing with concept activation vectors (TCAV)
Post-training explanations
14. Agenda
Sanity Checks for Saliency Maps
Joint work with Adebayo, Gilmer, Goodfellow, Hardt, [NIPS 18]
1. Revisiting existing methods:
Saliency maps
2. Making explanations using the way humans think:
Testing with concept activation vectors (TCAV)
Post-training explanations
16. One of the most popular interpretability methods for images:
Saliency maps
SmoothGrad [Smilkov, Thorat, K., Viégas, Wattenberg ’17]
Integrated gradient [Sundararajan, Taly, Yan ’17]
picture credit: @sayres
“Caaaaan do! We’ve got saliency maps to measure the importance of each pixel!”
A saliency map is the gradient of a class logit with respect to each input pixel: $\partial f_k(x) / \partial x_{i,j}$.
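To make that gradient concrete, here is a minimal sketch in PyTorch (not the talk’s own code); `model` and `image` are hypothetical stand-ins for a trained classifier and a preprocessed input of shape (1, 3, H, W).

```python
import torch

def saliency_map(model, image, class_idx):
    """Vanilla gradient saliency: |d logit_k / d pixel| for every pixel."""
    model.eval()
    image = image.clone().requires_grad_(True)
    logits = model(image)
    logits[0, class_idx].backward()           # gradient of one logit w.r.t. pixels
    # Reduce over color channels: importance of pixel (i, j).
    return image.grad.abs().max(dim=1).values.squeeze(0)

def smoothgrad(model, image, class_idx, n_samples=25, sigma=0.15):
    """SmoothGrad: average saliency over Gaussian-noised copies of the input."""
    maps = [saliency_map(model, image + sigma * torch.randn_like(image), class_idx)
            for _ in range(n_samples)]
    return torch.stack(maps).mean(dim=0)
```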
18. One of the most popular interpretability methods for images:
Saliency maps
Sanity check:
If I change the model (M) a lot, will a human perceive that the explanation (E) has changed a lot?
SmoothGrad [Smilkov, Thorat, K., Viégas, Wattenberg ’17]
Integrated gradient [Sundararajan, Taly, Yan ’17]
widely used for images · local understanding of the NN · humans’ subjective judgement
picture credit: @sayres
19. Some confusing behaviors of saliency maps.
Saliency map
Sanity Checks for Saliency Maps
Joint work with Adebayo, Gilmer, Goodfellow, Hardt, [NIPS 18]
21. Some confusing behaviors of saliency maps.
Saliency map
Randomized weights!
Network now makes garbage prediction.
!!!!!???!?
Sanity Checks for Saliency Maps
Joint work with Adebayo, Gilmer, Goodfellow, Hardt, [NIPS 18]
22. Some saliency maps look similar
when we randomize the network
(= making the network completely useless)
[Figure: saliency maps before vs. after randomization, for Guided Backprop and Integrated Gradient]
Sanity Checks for Saliency Maps
Joint work with Adebayo, Gilmer, Goodfellow, Hardt, [NIPS 18]
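A minimal sketch of this randomization test, reusing the `saliency_map` helper above. The top-down module ordering and the use of Spearman rank correlation as the similarity measure are illustrative choices, not the paper’s exact protocol.

```python
import copy
import torch
from scipy.stats import spearmanr

def cascading_randomization(model, image, class_idx):
    """Re-randomize weights module by module and see if the map changes."""
    reference = saliency_map(model, image, class_idx).flatten().numpy()
    randomized = copy.deepcopy(model)
    # Iterate roughly from the output layer backwards (CPU model assumed).
    for name, module in reversed(list(randomized.named_modules())):
        if getattr(module, "weight", None) is not None:
            torch.nn.init.normal_(module.weight)   # destroy learned weights
            degraded = saliency_map(randomized, image, class_idx).flatten().numpy()
            rho, _ = spearmanr(reference, degraded)
            # A sane method should see rho drop sharply once weights are garbage.
            print(f"randomized through {name}: rank correlation = {rho:.2f}")
```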
23. What can we learn from this?
• Potential human confirmation bias: just because an explanation “makes sense” to humans doesn’t mean it reflects evidence for the prediction.
• Our discovery is consistent with other findings. [Nie, Zhang, Patel ’18] [Ulyanov, Vedaldi, Lempitsky ’18]
• Some of these methods have been shown to be useful for humans. Why? More studies are needed.
Sanity Checks for Saliency Maps
Joint work with Adebayo, Gilmer, Goodfellow, Hardt, [NIPS 18]
26. Agenda
TCAV [ICML’18]
Joint work with Wattenberg, Gilmer, Cai, Wexler, Viégas, Sayres
1. Revisiting existing methods:
Saliency maps
2. Making explanations using the way humans think:
Testing with concept activation vectors (TCAV)
Post-training explanations
31. prediction:
Cash machine
https://pair-code.github.io/saliency/
SmoothGrad [Smilkov, Thorat, K., Viégas, Wattenberg ’17]
What we really want to ask…
Oh no! I can’t express these concepts
as pixels!!
They weren’t my input features either!
Were there more pixels on the cash
machine than on the person?
Which concept mattered more?
Is this true for all other cash
machine predictions?
Did the ‘human’ concept matter?
Did the ‘wheels’ concept matter?
32. prediction:
Cash machine
https://pair-code.github.io/saliency/
SmoothGrad [Smilkov, Thorat, K., Viégas, Wattenberg ’17]
What we really want to ask…
Wouldn’t it be great if we could quantitatively measure how important any of these user-chosen concepts are?
33. Goal of TCAV:
Testing with Concept Activation Vectors
Quantitative explanation: how much a concept (e.g., gender, race) mattered for a prediction in a trained model
…even if the concept was not part of the training.
ICML 2018
37. Goal of TCAV:
Testing with Concept Activation Vectors
Was the gender concept important to this doctor image classifier?
[Diagram: a trained machine learning model (e.g., a neural network) scores images for “Doctor-ness”; TCAV score for the Doctor class: “women” vs. “not women”]
TCAV provides the quantitative importance of a concept if and only if your network learned about it.
image credits: vactruth.com, healthcommunitiesproviderservices
38. Goal of TCAV:
Testing with Concept Activation Vectors
Was the striped concept important to this zebra image classifier?
[Diagram: a trained machine learning model (e.g., a neural network) scores images for “zebra-ness”; TCAV score for the Zebra class: “striped” vs. “not striped”]
TCAV provides the quantitative importance of a concept if and only if your network learned about it.
39. TCAV:
Testing with Concept Activation Vectors
Was the striped concept important to this zebra image classifier?
1. Learning CAVs: how do we define concepts?
40. Defining a concept activation vector (CAV)
Inputs: examples of the concept, random images, and a trained network under investigation (including its internal tensors).
41. !41
Inputs:
Train a linear classifier to
separate activations.
CAV ( ) is the vector
orthogonal to the decision
boundary.
[Smilkov ’17, Bolukbasi ’16 , Schmidt ’15]
Defining concept activation vector (CAV)
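A minimal sketch of this step using scikit-learn; the specific solver is an illustrative choice, and `concept_acts` / `random_acts` are assumed to be layer-l activations you have already collected for concept examples and random counterexamples.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def learn_cav(concept_acts, random_acts):
    """concept_acts, random_acts: (n_examples, n_units) arrays of layer-l activations."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]                  # normal vector of the decision boundary
    return cav / np.linalg.norm(cav)    # unit-length CAV
```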
42. TCAV:
Testing with Concept Activation Vectors
Was the striped concept important to this zebra image classifier?
1. Learning CAVs  2. Getting the TCAV score: how are the CAVs useful for explanations?
43. TCAV core idea:
Take the directional derivative along the CAV to measure prediction sensitivity.
With $f_l(x)$ the layer-$l$ activations and $h_{l,k}$ the map from those activations to the class-$k$ logit, the conceptual sensitivity of input $x$ to concept $C$ is the directional derivative $S_{C,k,l}(x) = \nabla h_{l,k}(f_l(x)) \cdot v_C^l$.
The TCAV score is the fraction of class-$k$ inputs with positive sensitivity: $\mathrm{TCAV}_{C,k,l} = |\{x \in X_k : S_{C,k,l}(x) > 0\}| / |X_k|$.
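A minimal sketch of the score itself; `grads` is assumed to hold, one row per class-k input, the gradient of the class-k logit with respect to the layer-l activations (i.e., $\nabla h_{l,k}(f_l(x))$).

```python
import numpy as np

def tcav_score(grads, cav):
    """Fraction of class-k inputs whose logit increases along the CAV direction."""
    sensitivities = grads @ cav          # S_{C,k,l}(x) for every input x
    return float(np.mean(sensitivities > 0))
```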
46. TCAV:
Testing with Concept Activation Vectors
Was the striped concept important to this zebra image classifier?
1. Learning CAVs  2. Getting the TCAV score  3. CAV validation (qualitative and quantitative)
51. Quantitative validation:
Guarding against spurious CAVs
Train many CAVs for the class (e.g., Zebra), each against a different random set, and check that the distribution of their TCAV scores is statistically different from the scores of random CAVs, using a t-test.
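A minimal sketch of this guard, assuming the helpers above: compare TCAV scores from independently trained concept CAVs against scores from random CAVs with a two-sided t-test (scipy’s `ttest_ind`). The score lists in the usage note are hypothetical.

```python
from scipy.stats import ttest_ind

def cav_is_significant(concept_scores, random_scores, alpha=0.05):
    """Each argument: TCAV scores from independently trained CAVs (different random sets)."""
    _, p_value = ttest_ind(concept_scores, random_scores)
    return p_value < alpha    # reject CAVs indistinguishable from random

# Illustrative usage with hypothetical inputs:
# concept_scores = [tcav_score(grads, learn_cav(acts, rnd)) for rnd in random_sets]
# random_scores  = [tcav_score(grads, learn_cav(rnd_a, rnd_b)) for rnd_a, rnd_b in random_pairs]
```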
52. Recap TCAV:
Testing with Concept Activation Vectors
1. Learning CAVs  2. Getting the TCAV score  3. CAV validation (qualitative and quantitative)
TCAV provides the quantitative importance of a concept if and only if your network learned about it,
even if your training data wasn’t tagged with the concept,
and even if your input features did not include the concept.
53. Results
1. Sanity check experiment
2. Biases in Inception V3 and GoogleNet
3. Domain expert confirmation from Diabetic Retinopathy
59. Sanity check experiment setup
Each training example is an image plus a potentially noisy caption, so models can use either the image concept or the caption concept for classification.
Ground truth: test accuracy on images with no caption = importance of the image concept.
[Plot: importance of the image concept vs. caption noise level in the training set]
62. Can saliency maps communicate the same information?
[Figure: example saliency maps with their ground truth (image concept vs. image with caption)]
63. Human subject experiment:
Can saliency maps communicate the same information?
• 50 turkers were asked to judge the importance of the image concept vs. the caption concept given saliency maps,
• asked to indicate their confidence,
• and shown 3 classes (cab, zebra, cucumber) × 2 saliency maps for one model.
64. Human subject experiment:
Can saliency maps communicate the same information?
• Random chance: 50%
• Human performance with saliency maps: 52%
• Humans can’t agree: more than 50% of cases show no significant consensus.
• Humans are very confident even when they are wrong.
66. Results
1. Sanity check experiment
2. Biases in Inception V3 and GoogleNet
3. Domain expert confirmation from Diabetic Retinopathy
68. TCAV in
Two widely used image prediction models
Geographical bias!
(image: http://www.abc.net.au)
70. TCAV in
Two widely used image prediction models
Geographical bias? TCAV gives quantitative confirmation of previously qualitative findings. [Stock & Cisse, 2017]
Goal of interpretability: to use machine learning responsibly, we need to ensure that 1. our values are aligned and 2. our knowledge is reflected.
71. Results
1. Sanity check experiment
2. Biases in Inception V3 and GoogleNet
3. Domain expert confirmation from Diabetic Retinopathy
72. Diabetic Retinopathy
• A treatable but sight-threatening condition
• An accurate model for predicting DR exists (85%) [Krause et al., 2017]
Concepts the ML model uses vs. diagnostic concepts human doctors use
73. Collect human doctors’ knowledge
For each prediction class (DR level 4, DR level 1), doctors label which concepts belong to that level and which do not (concepts include PRP, PRH/VH, NV/FP, VB, MA, HMA).
75. PRP PRH/VH NV/FP VB
Green: domain expert’s label on concepts belong to the level
Red: domain expert’s label on concepts does not belong to the level
Prediction
class
DR level 4
Prediction
accuracy
High
Example
TCAV scores TCAV shows the
model is consistent
with doctor’s
knowledge when
model is accurate
TCAV shows the
model is inconsistent
with doctor’s
knowledge for classes
when model is less
accurate
DR level 1 Med
TCAV for Diabetic Retinopathy
!75
MA HMA
76. TCAV for Diabetic Retinopathy
Green: domain expert’s label for concepts that belong to the level; red: concepts that do not belong to the level.
Prediction class: DR level 4. Prediction accuracy: high. Example concepts: PRP, PRH/VH, NV/FP, VB. TCAV shows the model is consistent with doctors’ knowledge when the model is accurate.
Prediction class: DR level 1. Prediction accuracy: low. Example concepts: MA, HMA. TCAV shows the model is inconsistent with doctors’ knowledge for classes where the model is less accurate; level 1 was often confused with level 2.
Goal of interpretability: to use machine learning responsibly, we need to ensure that 1. our values are aligned and 2. our knowledge is reflected.
77. Summary:
Testing with Concept Activation Vectors
Our values · Our knowledge
Example: the stripes concept (score: 0.9) was important to the zebra class for this trained network; for DR, concepts such as PRP, PRH/VH, NV/FP, VB.
TCAV provides the quantitative importance of a concept if and only if your network learned about it.
Joint work with Wattenberg, Gilmer, Cai, Wexler, Viégas, Sayres
ICML 2018