Deciphering AI - Unlocking the Black Box of AIML with State-of-the-Art Technology By Dipyaman Sanyal Faculty and Academic Head at Jigsaw Academy

Most organizations understand the predictive power and the potential gains from AIML, but AI and ML are still a black box technology for them. While deep learning and neural networks can provide excellent inputs to businesses, leaders are challenged to use them because of the complete blind faith required to ‘trust’ AI. In this talk we will use the latest technological developments from researchers, the US defense department, and the industry to unbox the black box and provide businesses a clear understanding of the policy levers that they can pull, why, and by how much, to make effective decisions.

  1. Deciphering AI
     Dipyaman Sanyal
     Program Director, Post Graduate Program in Data Science and Machine Learning
     Jigsaw Academy – University of Chicago
  2. The “Black Box” of AI/ML
     • It doesn’t take a data scientist to work out that the machine and deep learning algorithms built into automation and artificial intelligence systems lack transparency
     • One of the challenges of using artificial intelligence solutions in the enterprise is that the technology operates in what is commonly referred to as a black box
  3. The $15 trillion question
     • PwC tells us that AI is a $15 trillion opportunity by 2030
     • But they also tell us, “67% of the business leaders taking part in PwC’s 2017 Global CEO Survey believe that AI and automation will impact negatively on stakeholder trust levels in their industry in the next five years.”
  4. Do we Care if we WIN?
     • Algorithms with Nested Non-Linear Structures – in short, inexplicable
     • Not just for the lay person: also for the ones who built the algorithm
     • αGo: the Go algorithm which beat the best human on the planet 23 – 7
     • αGo 2.0: beat αGo 100 – 0
     • We could not care less about how it works, as long as we win!
  5. What about Medical Decisions?
     • Rational decision making is impossible in most medical emergencies
     • Doctors have to take immediate decisions to save lives
     • How do doctors actually take complex decisions?
     • Neural Networks? Deep Learning? SVM?
  6. Man vs. Machine
     • We see that doctors are not optimizing
     • None of us optimize
     • If the doctor fails – the justification is clear
     • But what if AI fails?
  7. Success vs. Culpability
     • If all cars in the world are driven by AI
     • αGo 2.0 only trained for 24 hours to beat αGo!
     • Can we give AI free rein over the roads of the world for 24 hours without considering the human costs?
     • Even if the accident rates of self-driven cars become lower than those of human drivers – when that fatal accident occurs, who will be blamed?
  8. Why Explainable AI (XAI)?
     • Can government policy be dictated by AI?
     • More importantly: should government policy be dictated by AI?
     • Our experience with skilling
  9. The “Black Box” of AI/ML
     • We need answers, not just predictions
     • Success is not the only marker
     • We cannot trust the machines. Or can we?
     • What about when you fly?
  10. Accuracy – Explainability Trade-Off?
  11. Explainable AI Is About People
     • Explainable AI begins with people.
     • AI engineers can work with subject matter experts and learn about their domains, studying their work from an algorithm/process/detective perspective.
     • What the engineers learn is encoded into a knowledge base that enables the cognitive AI to verify its recommendations and explain its reasoning in a way that humans can understand.
     • For humans to trust AI, systems must not lock all of their secrets inside a black box. XAI provides that explanation.
     • But XAI is not Data Viz!
  12. People are not Rational
  13. Who is Spending on XAI?
     • Defense Advanced Research Projects Agency (DARPA)
     • A research agency of the US Department of Defense
     • Why do the CIA and the US Army want to explain AI models?
  14. Interpretability vs. Explainability
     • In the midst of what The Economist termed the “techlash”, this lack of transparency has only (ironically) become more visible
     • It’s in this context that the concepts of explainability and interpretability have taken on new urgency
     • It’s likely that they are only going to become more important as discussions around the ethics of artificial intelligence continue
  15. Interpretability vs. Explainability
     • Interpretability is the extent to which you are able to predict what is going to happen, given a change in input or algorithmic parameters. It’s being able to look at an algorithm and go, “yep, I can see what’s happening here.”
     • Interpretability is about being able to discern the mechanics without necessarily knowing why.
     • Explainability, meanwhile, is the extent to which the internal mechanics of a machine or deep learning system can be explained in human terms.
     • Explainability is being able to quite literally explain what is happening.
  16. Definitions are Challenging
     • Depending on who requires the explanation, explainability can mean different things to different people.
     • Generally speaking, however, if the stakes are high, then more explainability is required.
     • Explanations can be very detailed, showing the individual pieces of data and decision points used to derive the answer.
     • Explainability could also refer to a system that writes summary reports for the end user.
  17. The Tree Approach
     • In most cases, the easiest way for humans to visualize decision processes is by the use of decision trees, with the top of the tree containing the least amount of information and the bottom containing the most.
     • The top-down approach is for end users who are not interested in the nitty-gritty details.
     • The bottom-up approach is useful to the engineers who must diagnose and fix the problem.
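A minimal sketch of this idea, assuming scikit-learn is available; the dataset and tree depth are illustrative choices, not from the talk. A shallow tree printed as text gives exactly this top-down/bottom-up view:

```python
# A shallow decision tree whose text dump gives a human-readable,
# top-down view of the decision process (illustrative sketch).
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Limit the depth so the top of the tree stays small and readable
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text prints the tree rule by rule: the first splits give the
# high-level, top-down view; the leaves carry the detailed bottom
print(export_text(tree, feature_names=list(X.columns)))
```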
  18. Sensitivity Analysis
     • Quantifies the importance of each input variable
     • The feature the output is most sensitive to = the most important feature
     • We do not care about the form of f(x) itself…we care about how a change in each input affects the prediction
     • “A heatmap computed with sensitivity analysis indicates which pixels need to be changed to make the image look (from the AI system’s perspective) more/less like the predicted class.”
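A minimal, hand-rolled sketch of this idea for tabular data (the model, dataset, and perturbation size are assumptions, not from the talk): perturb one feature at a time and measure how much the prediction moves.

```python
# Perturbation-based sensitivity: nudge each feature and see how much
# the model's prediction changes (illustrative sketch, not a library API).
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

baseline = model.predict(X)
sensitivity = []
for j in range(X.shape[1]):
    X_pert = X.copy()
    X_pert[:, j] += 0.1 * X[:, j].std()       # small nudge to feature j
    delta = model.predict(X_pert) - baseline  # change in the prediction
    sensitivity.append(np.mean(np.abs(delta)))

# Larger value = output more sensitive to that feature = more important
ranking = np.argsort(sensitivity)[::-1]
print(ranking, np.round(sensitivity, 4))
```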
  19. Layer-wise Relevance Propagation
     • Used often to understand feed-forward Neural Nets (NN), Long Short-Term Memory (LSTM) and other similar blocks
     • “Redistributes the prediction f(x) backwards using local redistribution rules until it assigns a relevance score Ri to each input variable.”
     • The relevance score Ri measures how much each input variable contributed to the prediction
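A minimal NumPy sketch of the LRP epsilon rule on a tiny, randomly weighted two-layer ReLU network; the weights, shapes, and input are placeholders for illustration, not a trained model:

```python
# Minimal LRP (epsilon rule) for a tiny 2-layer ReLU network in NumPy.
# Weights are random placeholders; real use would load a trained net.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 6)), np.zeros(6)   # input -> hidden
W2, b2 = rng.normal(size=(6, 1)), np.zeros(1)   # hidden -> output
x = rng.normal(size=4)                          # one input example

# Forward pass, keeping every layer's activations
a0 = x
z1 = a0 @ W1 + b1
a1 = np.maximum(z1, 0)          # ReLU
z2 = a1 @ W2 + b2
f_x = z2                        # network output f(x)

def lrp_epsilon(a, W, z, R, eps=1e-6):
    """Redistribute relevance R from a layer's outputs to its inputs."""
    z_stab = z + eps * np.sign(z + 1e-12)   # stabilise small denominators
    s = R / z_stab                          # relevance per unit of output
    return a * (W @ s)                      # each input's share

# Start with all relevance at the output and propagate it backwards
R2 = f_x
R1 = lrp_epsilon(a1, W2, z2, R2)
R0 = lrp_epsilon(a0, W1, z1, R1)
print("input relevance scores Ri:", R0, "sum ~ f(x):", R0.sum(), f_x)
```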
  20. A Simple Image Classification Example
  21. Use Case: XAI in Finance
     • The financial sector holds huge datasets, and the complex mathematical models used take a lot of time, effort, domain knowledge, skill, and brain capacity to be adequately understood by a human
     • Financial markets are full of noise and complexities that are rarely comprehensible by a human
     • In regulated industries like finance, an explanation request is often simply a demand from regulators in the best interest of customers and investors
  22. Use Case: XAI in Finance
     • On one hand, regulators should be exploring black-box simulation, statistical testing and reinforcement learning techniques to validate that what models and machines are doing is in line with customers’ and investors’ interests and not dangerous for the markets
     • On the other hand, regulators are right to challenge the industry to take a more responsible approach while using AI in their products and services, and to be mindful of AI ethics and current limitations
     • The European General Data Protection Regulation (GDPR) contains what has been labelled a “right to an explanation”, and states that important decisions significantly affecting people cannot be based solely on a machine decision
  23. Improving Explainability
     Algorithmic generalization
     • When most machine learning engineering amounts to applying algorithms in a very specific way to reach a desired outcome, the model itself can feel like a secondary element: simply a means to an end.
     • However, by shifting this attitude to consider the overall health of the algorithm, and the data on which it is running, you can begin to set a solid foundation for improved interpretability
  24. Control and Understand
     Deeper control and understanding
     • If an organisation relies on AI for any part of its operations, it is imperative to understand how that part works, and why the technology makes decisions, suggestions, and predictions.
     • This plays a big role in integrating AI into the enterprise, achieving an effective collaboration of humans and machines, and providing a better customer experience
     • Example: a credit scoring model trained on a dataset that includes people’s postcodes and does not explicitly include any race information. Because postcodes can act as a proxy for race, the model can still produce racially biased decisions, and only by understanding the model can this be detected.
  25. Easiest Way: Feature Importance
     Feature importance
     • Looking closely at the way the various features of your algorithm have been set is a practical way to actually engage with a diverse range of questions, from business alignment to ethics.
     • Debate and discussion over how each feature should be set might be a little time-consuming, but having that tacit awareness that different features have been set in a certain way is nevertheless an important step in moving towards interpretability and explainability.
  26. Example: running a Random Forest Classifier on peer-to-peer lending data to classify loan applications.
     • Observe the feature ranking: does it make sense?
     (Feature ranking table shown on the slide)
  27. Plot of feature importance of the forest
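A minimal sketch of how such a ranking and plot are typically produced with scikit-learn; the peer-to-peer lending data from the slide is not reproduced here, so a synthetic stand-in dataset is used:

```python
# Random-forest feature importances with a ranking and bar plot
# (synthetic stand-in for the peer-to-peer lending data on the slide).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=8,
                           n_informative=4, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Rank features by impurity-based importance
order = np.argsort(forest.feature_importances_)[::-1]
for i in order:
    print(f"{feature_names[i]:>10}: {forest.feature_importances_[i]:.3f}")

# Plot of feature importance of the forest
plt.bar(range(len(order)), forest.feature_importances_[order])
plt.xticks(range(len(order)), [feature_names[i] for i in order], rotation=45)
plt.ylabel("importance")
plt.tight_layout()
plt.show()
```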
  28. LIME
     LIME: Local Interpretable Model-Agnostic Explanations
     • LIME is a method developed by researchers to gain greater transparency on what’s happening inside an algorithm.
     • The researchers explain that LIME can explain “the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.”
     • What this means: LIME builds an approximation of the model by testing what happens when certain aspects of the input are changed. Essentially, it’s about trying to recreate the output from the same input through a process of experimentation.
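A minimal sketch using the `lime` package directly (rather than through Skater, which the next slide shows); the model and dataset here are illustrative stand-ins:

```python
# LIME on a tabular classifier: fit a simple local surrogate around one
# prediction (illustrative sketch; dataset and model are stand-ins).
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Perturb one test instance and learn an interpretable model around it
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(exp.as_list())   # (feature condition, weight) pairs for this prediction
```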
  29. LIME in Action
     • The following is a sample of LIME in action in the Skater framework, explaining why the model predicted that a person will earn more than $50K
  30. DeepLIFT
     DeepLIFT (Deep Learning Important FeaTures)
     • DeepLIFT is a useful model in the particularly tricky area of deep learning.
     • It works through a form of backpropagation: it takes the output, then attempts to pull it apart by ‘reading’ the various neurons that have gone into developing that original output.
     • Essentially, it’s a way of digging back into the feature selection inside of the algorithm (as the name indicates)
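One accessible way to try DeepLIFT-style attributions is through SHAP's `DeepExplainer`, which its authors describe as building on DeepLIFT. A minimal sketch, assuming a small Keras model; the architecture, training setup, and data are placeholder assumptions, not from the talk:

```python
# DeepLIFT-style attributions for a small Keras net via shap.DeepExplainer
# (sketch; the model, data, and layer sizes are placeholder assumptions).
import numpy as np
import shap
import tensorflow as tf
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
X = X.astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(X.shape[1],)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5, verbose=0)

# A background sample defines the reference the attributions are measured against
background = X[np.random.choice(X.shape[0], 100, replace=False)]
explainer = shap.DeepExplainer(model, background)

# Per-feature contributions for a few test rows
shap_values = explainer.shap_values(X[:5])
print(np.array(shap_values).shape)
```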
  31. Back to Cooperative Game Theory: The Shapley Value
     • SHAP (SHapley Additive exPlanations)
     • The Shapley value is the average marginal contribution of a feature value over all possible coalitions. Coalitions are basically combinations of features, which are used to estimate the Shapley value of a specific feature.
     • It is a unified approach to explain the output of any machine learning model. SHAP connects game theory with local explanations, uniting several previous methods and representing the only possible consistent and locally accurate additive feature attribution method based on expectations
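A minimal sketch with the `shap` package on a tree ensemble; the dataset and model are illustrative stand-ins:

```python
# SHAP values for a tree ensemble: per-feature, per-prediction contributions
# (illustrative sketch; the dataset and model are stand-ins).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree models
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global summary: which features contribute most, and in which direction
shap.summary_plot(shap_values, X)
```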
  32. Visualizing Beyond Feature Importance: Partial Dependence Plots
     • Marginal impact of each feature on the model’s prediction, with “other things remaining constant”
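A minimal sketch using scikit-learn's partial dependence tooling; the dataset, model, and chosen features are illustrative assumptions:

```python
# Partial dependence: average model prediction as one feature varies,
# holding the others at their observed values (illustrative sketch).
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# One curve per chosen feature: the "other things constant" marginal effect
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "bp", "s5"])
plt.tight_layout()
plt.show()
```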
  33. Can AI X AI?
     • Packages in Python:
       • LIME
       • SHAP
       • Of course, VARIMP functions
       • The Skater framework
     • So, potentially, maybe!
     • But we will need to know how explanations should be made for people
     • And incorporate our heuristics and biases about learning
  34. Training the Next Gen Data Scientists to Remove Opacity
     • The central problem with both explainability and interpretability is the addition of steps in the development process.
     • From one perspective, this looks like you’re trying to tackle complexity with even greater complexity.
     • If we’re to get really serious about interpretability and explainability, there needs to be a broader cultural change in the way in which data science and engineering are done, and how people believe they should be done.
     • We are incorporating this in our programs! It is essential for junior data scientists as well…not just data visualization or communication, but explanation
