Explainability and Bias in
ML/AI Models
Naveen Sundar Govindarajulu
August 9, 2019
Visit RealityEngines.AI and sign up.
Why now?
Life-impacting ML & AI models.
COMPAS
From: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Non-recidivating black people were twice as likely to be labelled high risk as non-recidivating white people.
Why Explainability?
• More use of ML/AI models by laypersons.
• Laypersons need explanations.
• Developers also need quick explanations to debug models faster.
• There may be a legal need for explanations:
• If you deny someone a loan, you may need to explain the reason for the denial.
Explainability
Explainability using Interpretable Models
Example decision tree:
Prior offenses <= 0?
YES → Low Risk
NO → Armed offense?
YES → High Risk
NO → Med Risk
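An interpretable model like this can be trained and printed directly; below is a minimal sketch with scikit-learn, where the two feature names and the toy data are assumptions for illustration.

```python
# Minimal interpretable-model sketch (toy data and feature names assumed).
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [prior_offenses, armed_offense (0 or 1)]
X = [[0, 0], [0, 1], [2, 0], [3, 1], [1, 1], [0, 0]]
y = ["Low", "Low", "Med", "High", "High", "Low"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
# The learned rules are directly readable, like the tree above:
print(export_text(tree, feature_names=["prior_offenses", "armed_offense"]))
```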
Explainability vs Performance
Tradeoff
• Some machine learning models are more explainable than others.
(Chart: performance vs. explainability. Deep learning models are high-performance but hard to explain; linear models and decision trees are more explainable but often less performant.)
Explainability Method:
Feature Attribution
(Diagram: a classifier maps input features to an output; an explainer assigns "weights", i.e. importances, to those features.)
What Features?
Interpretable Features
• We need interpretable features.
• Raw feature spaces (e.g. word embeddings) are difficult for laypersons to understand.
• Humans are good at understanding the presence or absence of components.
Interpretable Instance
• E.g.:
• For text:
• Convert to a binary vector indicating presence or absence of words.
• For images:
• Convert to a binary vector indicating presence or absence of pixels or contiguous regions.
Method 1: LIME
From: https://github.com/marcotcr/lime
Local Interpretable Model-agnostic Explanations
Ribeiro, M.T., Singh, S. and Guestrin, C., 2016, August. Why Should I Trust
You?: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd
ACM SIGKDD International Conference on Knowledge Discovery and Data
Mining (pp. 1135-1144). ACM.
Method 1: LIME
Any classifier can be explained. LIME perturbs the instance into binary vectors, e.g.
1 1 0 1 1 0 1 0 0 1 0
0 0 0 1 0 1 1 1 1 0 1
indicating which interpretable components are present, queries the classifier on them, and fits a local linear model with weights
-2.1 1.1 -0.5 2.2 -1.2 -1.5 1 -3 0.8 5.6 1.5
Enforcing sparsity keeps only a few, e.g. -2.1, 2.2, -3, 5.6. The weights of the linear classifier then give us feature importances.
Example:
Text Sentiment Classification
"The movie is not bad"
Per-word weights for "This movie is not bad": this = 0, movie = 0, is = 0, not = 2.3, bad = -1.5.
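A minimal sketch of producing such an explanation with the lime package (the toy training data, class names, and model are assumptions; LIME itself only needs a function that maps raw texts to class probabilities):

```python
# LIME text-explanation sketch (toy data and model are assumed).
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["a great movie", "the movie is not bad",
               "a terrible film", "this film is bad"]
train_labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

# Any black-box classifier works; LIME only calls predict_proba.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance("The movie is not bad",
                                 model.predict_proba, num_features=5)
print(exp.as_list())  # (word, weight) pairs, analogous to the slide above
```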
LIME with Images
Explanation for "Cat"
From: https://github.com/marcotcr/lime
Ribeiro, M.T., Singh, S. and Guestrin, C., 2016, August. Why Should I Trust You?: Explaining the Predictions of Any Classifier. In
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144). ACM.
Explanations for Multi-Label Classifiers
Ribeiro, M.T., Singh, S. and Guestrin, C., 2016, August. Why Should I Trust You?: Explaining the Predictions of Any Classifier. In
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144). ACM.
Using LIME for Debugging (E.g. 1)
https://github.com/marcotcr/lime
Using LIME for Debugging (E.g. 2)
https://github.com/marcotcr/lime
Using LIME for Debugging (E.g. 2)
Method 2: SHAP
Unifies many different feature attribution methods and has some desirable properties:
1. LIME
2. Integrated Gradients
3. Shapley values
4. DeepLift
Lundberg, S.M. and Lee, S.I., 2017. A unified approach to interpreting model predictions. In
Advances in Neural Information Processing Systems (pp. 4765-4774).
Method 2: SHAP
• Derives from game-theoretic foundations.
• Shapley values are used in game theory to assign values to players in cooperative games.
What are Shapley values?
• Suppose there is a set S of N players participating in a game, with the payoff for any subset T ⊆ S of players given by a function v(T).
• Shapley values provide one fair way of dividing up the total payoff v(S) among the N players.
Shapley Value
The Shapley value for player i:
\[
\phi_i(v) \;=\; \sum_{T \subseteq S \setminus \{i\}} \frac{|T|!\,(|S|-|T|-1)!}{|S|!}\,\bigl(v(T \cup \{i\}) - v(T)\bigr)
\]
Here v(T ∪ {i}) is the payoff for the group including player i, and v(T) is the payoff for a group without player i.
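For small games the formula can be evaluated exactly by enumerating coalitions; here is a brute-force sketch, where the payoff table is made up for illustration:

```python
# Brute-force Shapley values for a tiny cooperative game (payoffs assumed).
from itertools import combinations
from math import factorial

players = ["A", "B", "C"]
payoffs = {(): 0, ("A",): 10, ("B",): 20, ("C",): 30,
           ("A", "B"): 40, ("A", "C"): 50, ("B", "C"): 60,
           ("A", "B", "C"): 90}

def v(coalition):
    return payoffs[tuple(sorted(coalition))]

def shapley(i):
    n = len(players)
    others = [p for p in players if p != i]
    total = 0.0
    for r in range(len(others) + 1):
        for T in combinations(others, r):
            weight = factorial(len(T)) * factorial(n - len(T) - 1) / factorial(n)
            total += weight * (v(T + (i,)) - v(T))  # marginal contribution of i
    return total

for p in players:
    print(p, shapley(p))  # the three values sum to v(S) = 90 (efficiency)
```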
SHAP Explanations
• Players are features.
• Payoff is the model's real-valued prediction.
SHAP Implementation
(https://github.com/slundberg/shap)
Different kinds of explainers:
1. TreeExplainer: fast and exact SHAP values for tree ensembles.
2. KernelExplainer: approximate explainer for black-box estimators.
3. DeepExplainer: high-speed approximate explainer for deep learning models.
4. ExpectedGradients: SHAP-based extension of integrated gradients.
XGBoost on UCI Income Dataset
Output is probability of income
over 50k
(SHAP force plot: features f87, f23, f3, f34, f41 push the output away from the base value.)
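A minimal sketch of reproducing this kind of plot with the shap package (assuming shap and xgboost are installed; shap bundles a loader for the UCI Adult income data):

```python
# SHAP force-plot sketch for XGBoost on the UCI income data.
import shap
import xgboost

X, y = shap.datasets.adult()                 # UCI Adult income dataset
model = xgboost.XGBClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)        # fast, exact for tree ensembles
shap_values = explainer.shap_values(X)

# Show how each feature pushes one prediction away from the base value
# (in a notebook, call shap.initjs() first to render the plot).
shap.force_plot(explainer.expected_value, shap_values[0, :], X.iloc[0, :])
```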
Note: SHAP values are model-dependent.
(The same instance can receive different attributions under Model 1 and Model 2.)
Is This Form of Explainability
Enough?
• Explainability does not provide us with recourse.
• Recourse: information needed to change a specific prediction to a desired value.
• "If you had paid your credit card balance in full for the last three months, you would have got that loan."
Issues with SHAP and LIME
SHAP and LIME values are highly variable for instances that are very similar, for non-linear models.
On the Robustness of Interpretability Methods
https://arxiv.org/abs/1806.08049
Issues with SHAP and LIME
SHAP and LIME values don't provide insight into how the model will behave on new instances.
High-Precision Model-Agnostic Explanations
https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16982
Take-home message
• Explainability is possible and need not come at the cost of performance.
• Explainability is not enough:
• Recourse, etc., is also needed.
Bias
Fairness and Bias in Machine
Learning
1. Bias in this context is unfairness (more or less).
2. Note we are not talking about standard statistical bias in machine learning (the bias in the bias vs. variance tradeoff).
3. For completeness, this is one definition of statistical bias in machine learning:
• Bias = expected value of the model - true value
Deļ¬nitions of Fairness or Bias
1. Many, many, many definitions exist.
2. Application-dependent. No one definition is better.
3. See the "21 Definitions of Fairness" tutorial by Arvind Narayanan, ACM FAT* 2018.
• Key point: dozens of definitions exist (and not just 21).
Setting
1. Classifier C with binary output d in {+, -} and a real-valued score s.
• Instances or data points are generally humans.
• The + class is desired and the - class is not desired.
2. Input X, with one or more sensitive/protected attributes G (e.g. gender) that are part of the input. E.g. possible values of G = {m, f}.
3. A set of instances sharing a common sensitive attribute is privileged (receives more + labels); the other is unprivileged (receives fewer + labels).
4. True output Y.
1. Fairness through Unawareness
• Simple idea: do not consider any sensitive attributes when building the model.
• Advantage: some support in the law (disparate treatment)?
• Disadvantage: other attributes may be correlated with sensitive attributes (such as job history, geographical location, etc.).
2. Statistical Parity Difference
• Different groups should have the same proportion (or probability) of positive and negative labels. Ideally the statistical parity difference, P(d = + | G = unprivileged) - P(d = + | G = privileged), should be close to zero (see the sketch after this list).
• Advantages: legal support in the form of a rule known as the four-fifths rule. May remove historical bias.
• Disadvantages:
• Trivial classifiers, such as classifiers which randomly assign the same proportion of labels across different groups, satisfy this definition.
• A perfect classifier (Y = d) may not be allowed if ground-truth rates of labels are different across groups.
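The metric itself is a one-liner; a minimal sketch, where the toy decisions and group labels are made up for illustration:

```python
# Statistical parity difference sketch (toy decisions and groups assumed).
import numpy as np

d = np.array([1, 0, 1, 1, 0, 1, 0, 0])                  # decisions: 1 = "+", 0 = "-"
G = np.array(["f", "f", "f", "f", "m", "m", "m", "m"])  # sensitive attribute

spd = d[G == "f"].mean() - d[G == "m"].mean()  # P(+ | unpriv) - P(+ | priv)
print(spd)  # ideally close to zero
```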
3. Equal Opportunity
Difference
• Different groups should have the same true positive rate. Ideally the equal opportunity difference, TPR(unprivileged) - TPR(privileged), should be close to zero.
• Advantages:
• A perfect classifier is allowed.
• Disadvantages:
• May perpetuate historical biases.
• E.g. a hiring application with 100 privileged and 100 unprivileged applicants, but 40 qualified among the privileged and 4 among the unprivileged.
• By hiring 20 of the privileged and 2 of the unprivileged (a true positive rate of 0.5 for both) you will satisfy this; see the sketch after this list.
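A quick check of the hiring numbers above shows why: equal true positive rates can coexist with very unequal outcomes.

```python
# Hiring-example sketch using the slide's numbers.
qualified = {"privileged": 40, "unprivileged": 4}
hired = {"privileged": 20, "unprivileged": 2}

for g in qualified:
    print(g, "TPR =", hired[g] / qualified[g])  # 0.5 for both groups,
# so the equal opportunity difference is zero despite 20 vs. 2 hires.
```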
4. False Negative Error
Balance
• If the application is punitive in nature, different groups should have the same false negative rates.
• Example:
• The proportion of black defendants who don't recidivate yet receive high risk scores
should be the same as
• the proportion of white defendants who don't recidivate yet receive high risk scores.
5. Test Fairness
• Scores should have the same meaning across different groups.
Impossibility Results
• Core of the debate in COMPAS.
• ProPublica: false negative rates should be the same across different groups.
• Northpointe: scores should have the same meaning across groups (test fairness).
• Result: if prevalence rates (ground-truth proportions of labels across different groups) are different, and test fairness is satisfied, then false negative rates will differ across groups.
Chouldechova, A., 2017. Fair prediction with disparate impact: A study of bias in recidivism
prediction instruments. Big data, 5(2), pp.153-163.
Tools for Measuring Bias
https://github.com/IBM/AIF360
AI Fairness 360 (AIF 360):
Measuring Bias
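A minimal AIF360 sketch for measuring bias (assumes the aif360 package and its bundled COMPAS loader, whose raw data may need downloading per AIF360's instructions; the race encoding, with 1 = Caucasian as privileged, follows the AIF360 examples):

```python
# AIF360 bias-measurement sketch (dataset loader and encodings assumed).
from aif360.datasets import CompasDataset
from aif360.metrics import BinaryLabelDatasetMetric

dataset = CompasDataset()
privileged = [{"race": 1}]    # Caucasian, per the dataset's encoding
unprivileged = [{"race": 0}]

metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print(metric.statistical_parity_difference())
print(metric.disparate_impact())
```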
Mitigation: Removing Bias
• Mitigation can happen in three different places:
• Before the model is built, in the training data.
• In the model.
• After the model is built, with the predictions.
(COMPAS baseline: accuracy = 66%.)
Before the model is built
• Reweighing (roughly, at a high level):
• Increase weights for some:
• unprivileged with positive labels,
• privileged with negative labels.
• Decrease weights for some:
• unprivileged with negative labels,
• privileged with positive labels.
A sketch with AIF360's Reweighing preprocessor follows the results below.
COMPAS with reweighing: accuracy = 66% before and 66% after.
AI Fairness 360 Toolkit https://aif360.mybluemix.net
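A minimal sketch of the reweighing step, reusing the dataset and group definitions from the measurement sketch above:

```python
# Reweighing sketch with AIF360 (reuses `dataset`, `privileged`,
# `unprivileged` from the measurement sketch above).
from aif360.algorithms.preprocessing import Reweighing

rw = Reweighing(unprivileged_groups=unprivileged,
                privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)
# instance_weights are raised for unprivileged-positive and
# privileged-negative examples, and lowered for the opposite combinations.
print(dataset_transf.instance_weights[:10])
```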
In the model
Zhang, B.H., Lemoine, B. and Mitchell, M., 2018, December. Mitigating
unwanted biases with adversarial learning. In Proceedings of the 2018
AAAI/ACM Conference on AI, Ethics, and Society (pp. 335-340). ACM.
COMPAS with adversarial de-biasing: accuracy = 67%, vs. 66% for the baseline.
AI Fairness 360 Toolkit https://aif360.mybluemix.net
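A sketch of in-processing mitigation with AIF360's AdversarialDebiasing (it uses a TensorFlow 1-style session; dataset and group definitions as in the sketches above):

```python
# Adversarial de-biasing sketch with AIF360 (TF1-style session assumed).
from aif360.algorithms.inprocessing import AdversarialDebiasing
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

sess = tf.Session()
adv = AdversarialDebiasing(unprivileged_groups=unprivileged,
                           privileged_groups=privileged,
                           scope_name="debias", debias=True, sess=sess)
adv.fit(dataset)                    # trains predictor and adversary jointly
dataset_debiased = adv.predict(dataset)
```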
After the model is built
• Reject option classification:
• Assume the classifier outputs a probability score.
• If the classifier score is within a small band around 0.5:
• if unprivileged, then predict positive;
• if privileged, then predict negative.
(Chart: probability of the + label vs. probability of the - label for the unprivileged group, from 0 to 1 on each axis.)
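A minimal sketch of the decision rule itself (the width of the band is an assumption for illustration):

```python
# Reject option classification sketch (band width assumed).
def reject_option_predict(score, privileged, band=0.05):
    """score: the classifier's probability of the + label."""
    if abs(score - 0.5) < band:            # inside the uncertainty band
        return "-" if privileged else "+"  # flip in favor of the unprivileged
    return "+" if score >= 0.5 else "-"

print(reject_option_predict(0.52, privileged=True))    # "-"
print(reject_option_predict(0.48, privileged=False))   # "+"
print(reject_option_predict(0.80, privileged=True))    # "+"
```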
COMPAS with reject option: accuracy = 65%, vs. 66% for the baseline.
AI Fairness 360 Toolkit https://aif360.mybluemix.net
Tools
https://github.com/IBM/AIF360
AI Fairness 360 (AIF 360):
Mitigating Bias
Take-home message
• Many forms of fairness and bias exist; most of them are incompatible with each other.
• Bias can be decreased with algorithms (usually with some loss in performance).
Thank you
Extras
Choosing Definitions
From: https://dsapp.uchicago.edu/projects/aequitas/