Explainability for Learning to Rank models
Anna Ruggero, R&D Software Engineer
Ilaria Petreti, IR/ML Engineer
30th March 2021
‣ R&D Search Software Engineer
‣ Master’s Degree in Computer Science Engineering
‣ Big Data, Information Retrieval
‣ Organist, Music lover
Who We Are
Anna Ruggero
‣ Information Retrieval/Machine Learning
Engineer
‣ Master in Data Science
‣ Passionate about Data Mining and Machine Learning technologies
‣ Sports Lover
Who We Are
Ilaria Petreti
Overview
What is Explainability
Explainability Methods
SHAP Library
Case Study - Tree SHAP
Warnings
“XAI (eXplainable Artificial Intelligence) refers to methods and techniques in the application
of artificial intelligence technology such that the results of the solution can be understood by
humans”
XAI - What is Explainability?
Explainability - Why is it important?
Explainability - Accuracy Trade-Off
[Chart: Predictive Performance vs Explainability, comparing Neural Networks with a Forest of Regression Trees]
Explainability - Why is it important?
Why and Who needs Explanations?
• Model debugging
• Increase detection of bias
• Verifying accuracy
• Building trust in the model’s output
• Building social acceptance
• Increase transparency
• Satisfying regulatory requirements
• Verifying model safety
MODEL BUILDERS
END USERS
PUBLIC STAKEHOLDERS
Explainability - Why is it important in Information Retrieval?
Understand your Learning To Rank model:
‣ Why is a search result at a certain position?
‣ How does the model calculate the score?
‣ How is a feature affecting the ranking?
‣ How are the feature values affecting the ranking?
‣ Has the model learned any weird behaviour?
Explainability - Methods
Taxonomy of Explainability Methods: Post-Hoc Explainability
GLOBAL
Explain the overall model
‣ PDP: Partial Dependence Plots
‣ ALE: Accumulated Local Effect
‣ ICE: Individual Conditional Expectation
‣ Feature Importance and Permutation Importance (through ELI5)
‣ ….
LOCAL
Explain a single prediction
‣ Anchors
‣ CEM: Contrastive Explanations Method
‣ LIME
‣ SHAP
‣ DeepLIFT
‣ ….
Explainability - Libraries
References of some popular Python libraries for Explainability:
Python Library Type Links
ELI5 Model Agnostic https://eli5.readthedocs.io/en/latest/overview.html
LIME Model Agnostic https://github.com/marcotcr/lime
SHAP Model Agnostic + Specific Explainers https://github.com/slundberg/shap
Anchors Model Agnostic https://github.com/marcotcr/anchor
DeepLIFT Neural Network https://github.com/kundajelab/deeplift
Explainability - Libraries
Other Open Source Libraries/Tools:
Library Methods Links
SHAPASH (Python) LIME & SHAP https://github.com/MAIF/shapash
AI Explainability 360 (Python) CEM, LIME & SHAP https://github.com/Trusted-AI/AIX360
InterpretML (Python) PDP, LIME & SHAP https://github.com/interpretml/interpret
ALIBI (Python) ALE, SHAP, Anchors, etc. https://github.com/SeldonIO/alibi
SKATER (Python) LIME (for Local Expl.) https://github.com/oracle/Skater
TrustyAI (Java) PDP & personalised LIME https://blog.kie.org/2020/10/an-introduction-to-trustyai-explainability-capabilities.html
What-If (TensorFlow Interface) https://pair-code.github.io/what-if-tool/
Explainability - LIME vs SHAP
LIME (Local Interpretable Model-Agnostic Explanations) vs SHAP (SHapley Additive exPlanations)

LIME | SHAP
Python Library (pip install lime) | Python Library (pip install shap)
Local Explainability | Local and Global Explainability
Model Agnostic | KernelExplainer (model agnostic) + optimised SHAP explainers
Importance Scores | Importance Scores
Input Perturbation | Particular case of Input Perturbation → Shapley Values
Fast | Computationally Expensive (especially for large datasets)
Not optimised for all model types (e.g. XGBoost) | Not optimised for all model types (e.g. k-NN)
Not designed to work with one-hot encoded data | Handles categorical data
Interfaces for different types of data (Tabular, Text and Image) | Handles all types of data

Type of Explanation Family: the importance scores are meant to communicate the relative contribution made by each input feature to a given prediction.

How the scores are computed:
‣ LIME: for a given observation, it generates local perturbations of the features and captures their impact using a linear model; the weights of the linear model are used as feature importance scores.
‣ SHAP: the Shapley value is computed by examining all the possible perturbations obtained by including or excluding the other features.
SHAP
https://github.com/slundberg/shap
❑ Game Theory Approach
❑ Explain the Output of any Machine Learning Model
❑ Inspects Model Prediction
SHapley Additive exPlanations
https://towardsdatascience.com/shap-explained-the-way-i-wish-someone-explained-it-to-me-ab81cc69ef30
SHAP - Theory
❑ In Game Theory we have two components:
❑ Game
❑ Players
❑ Applying the game theory approach to explain a machine learning model, these
two elements become:
❑ Game → Output of the model for one observation
❑ Players → Features
SHAP - Theory
What the Shapley value does is quantify the
contribution that each player brings to the
game.
What SHAP does is quantify the
contribution that each feature brings to the
prediction made by the model.
SHAP - Theory
What is the impact of each feature on the predictions?
❑ Suppose we want to predict the income of a person.
❑ Suppose we have 3 features in our model: age, gender and job.
❑ We have to consider all the possible combinations of f features
(with f going from 0 to 3).
SHAP - Theory
❑ The combinations of features can
be seen as a tree
❑ Node = 1 specific combination of
features
❑ Edge = marginal contribution that
each feature gives to the model.
SHAP - Theory
❑ Possible combinations = 2^n (the power set), as sketched in the snippet below
❑ In our example, possible combinations = 2^3 = 8
❑ SHAP trains a distinct model on each of these combinations, therefore it
considers 2^n models.
❑ Too expensive: SHAP actually implements a variation of this naive approach.
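A quick way to see the count above is to enumerate the power set of the three example features; this minimal Python sketch (feature names taken from the slides) prints the 2^3 = 8 combinations:

from itertools import chain, combinations

features = ["age", "gender", "job"]
# power set: all subsets of every size, from the empty set up to the full set
power_set = list(chain.from_iterable(
    combinations(features, size) for size in range(len(features) + 1)))
print(len(power_set))  # 8
print(power_set)       # from () up to ('age', 'gender', 'job')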
SHAP - Theory
❑ Suppose we have trained all the 8 models
❑ Suppose we consider one observation called x0
❑ Let’s see what each model
predicts for the same
observation x0
SHAP - Theory
❑ To get the overall contribution
of Age in the model,
we have to consider the
marginal contribution of Age
in all the models where Age is
present.
SHAP - Theory
We have to consider all the edges
connecting two nodes such that:
❑ the upper one does not
contain Age, and
❑ the bottom one contains Age.
SHAP - Theory
❑ The main assumptions are:
❑ The sum of the weights on each row of the tree should be equal.
w1 = w2 + w3 = w4
❑ Each weight inside one row of the tree should be equal.
w2 = w3
❑ In our example:
❑ w1 = w4 = 1/3
❑ w2 = w3 = 1/6
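As a check of the numbers above, the general edge weight (derived on the following slides and consistent with the Lundberg and Lee formula) can be written in LaTeX as:

w(S) = \frac{|S|!\,(|F|-|S|-1)!}{|F|!},\qquad
w_1 = \frac{0!\,2!}{3!} = \frac{1}{3},\quad
w_2 = w_3 = \frac{1!\,1!}{3!} = \frac{1}{6},\quad
w_4 = \frac{2!\,0!}{3!} = \frac{1}{3}

where S is the coalition of features the edge starts from and F is the full feature set; with |F| = 3 it reproduces the values listed above.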
SHAP - Theory
❑ In our example, the formula yields:
❑ SHAP_Age(x₀) = -11.33k $
❑ SHAP_Gender(x₀) = -2.33k $
❑ SHAP_Job(x₀) = +46.66k $
❑ Summing them up gives +33k $.
❑ This is exactly the difference between the output of the full model (83k $) and
the output of the dummy model with no features (50k $). A brute-force sketch of this computation follows.
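As an illustration of the computation described above, here is a minimal brute-force Shapley value sketch in Python. The intermediate model outputs below are hypothetical placeholders (the slides only report the full-model output of 83k $ and the no-feature output of 50k $), so the individual values printed will differ from the slide numbers; their sum, however, always equals 83k $ - 50k $ = 33k $.

from itertools import combinations
from math import factorial

def shapley_value(feature, features, predict):
    # Average the marginal contribution of `feature` over all coalitions of the
    # remaining features; `predict` maps a frozenset of features to the output
    # of the model trained on exactly those features.
    others = [f for f in features if f != feature]
    n = len(features)
    value = 0.0
    for size in range(len(others) + 1):
        for subset in combinations(others, size):
            s = frozenset(subset)
            weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            value += weight * (predict(s | {feature}) - predict(s))
    return value

# Hypothetical outputs (in k$) of the 2^3 models for the observation x0.
outputs = {
    frozenset(): 50.0,
    frozenset({"age"}): 40.0,
    frozenset({"gender"}): 48.0,
    frozenset({"job"}): 90.0,
    frozenset({"age", "gender"}): 39.0,
    frozenset({"age", "job"}): 82.0,
    frozenset({"gender", "job"}): 88.0,
    frozenset({"age", "gender", "job"}): 83.0,
}

features = ["age", "gender", "job"]
contributions = {f: shapley_value(f, features, outputs.__getitem__) for f in features}
print(contributions)
print(sum(contributions.values()))  # always 83.0 - 50.0 = 33.0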
SHAP - Theory
Generalising:
❑ The weight of an edge is the reciprocal of the total number of edges in the same
“row”.
Or, equivalently, the weight of a marginal contribution to an f-feature-model is the
reciprocal of the number of possible marginal contributions to all the
f-feature-models.
❑ Each f-feature-model has f marginal contributions (one per feature), so it is
enough to count the number of possible f-feature-models and to multiply it by f.
SHAP - Theory
❑ We have built the formula for calculating the SHAP value of Age in a
3-feature-model.
❑ Generalising to any feature and any F, we obtain the formula reported in the
article by Lundberg and Lee (reproduced below):
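For reference, this is the Shapley value formula from Lundberg and Lee (2017), where F is the set of all features and f_S denotes the model trained on the feature subset S:

\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F| - |S| - 1)!}{|F|!} \left[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \right]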
Case Study - TREE SHAP
❑ Let’s imagine we have a book e-commerce
❑ Every book is characterised by several features:
❑ Sales in the last week and total sales
❑ Number of reviews and average of these reviews
❑ Genre
❑ Price
❑ Author
❑ …
❑ Train an LTR model using the LambdaMART algorithm (a training sketch is shown below)
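A minimal sketch of how such a LambdaMART-style LTR model can be trained with XGBoost (the library implied by the shap.TreeExplainer(xgb_model) snippet later on), assuming the book features have already been collected into a CSV file. The file name, column names and hyper-parameters are illustrative, and the rows are assumed to be sorted by query id:

import pandas as pd
import xgboost as xgb

raw_data = pd.read_csv("book_training_set.csv")  # hypothetical file
feature_columns = [c for c in raw_data.columns
                   if c not in ("query_id", "relevance_label")]
# keep only the feature columns, as assumed by the SHAP snippets later on
training_data_set = raw_data[feature_columns]

# LambdaMART in XGBoost: gradient-boosted trees with a ranking objective
xgb_model = xgb.XGBRanker(objective="rank:ndcg", n_estimators=200, max_depth=6)
xgb_model.fit(
    training_data_set,
    raw_data["relevance_label"],
    # group sizes: number of interactions per query, in row order
    group=raw_data.groupby("query_id", sort=False).size().to_list(),
)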
SHAP Plots
❑ Y-axis: most important features
for the model
❑ X-axis: average SHAP value
(impact on model score)
Summary Plot
SHAP Plots
Summary Plot
❑ Y-axis: most important features
for the model
❑ X-axis: SHAP value (impact on
model score)
❑ Feature value in color
❑ Each point is a prediction result
SHAP Plots
❑ Single model prediction (lowest relevance)
❑ Model output: -7.01
❑ Impact of each feature
Force Plot
SHAP Plots
❑ Single model prediction (highest relevance)
❑ Model output: -3.25
❑ Impact of each feature
Force Plot
SHAP Plots
Force Plot
SHAP Plots
❑ Show interaction between 2
features
❑ Each point is a prediction
❑ X-axis: first feature value
❑ Y-axis: impact on the score
❑ Color is a second feature
Dependence Plot
https://slundberg.github.io/shap/notebooks/plots/dependence_plot.html
SHAP Plots
Decision Plot
❑ How the prediction changes during
the decision process
❑ Y-axis: feature names
❑ X-axis: output of the model
❑ Each row shows the impact of each
feature
❑ Each vertical line is a prediction
❑ Feature value between brackets ()
SHAP - Python Code
Tree Explainer explainer = shap.TreeExplainer(xgb_model)
SHAP values shap_values = explainer.shap_values(training_data_set)
https://github.com/slundberg/shap
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.html
import shap
import matplotlib.pyplot as plt
To save Plots
plt.savefig(images_path + '/summary_plot.png', bbox_inches='tight')
plt.close()
Summary Plot shap.summary_plot(shap_values, training_data_set)
Summary Plot with Bars shap.summary_plot(shap_values, training_data_set, plot_type="bar")
Decision Plot
(total AND single observation)
shap.decision_plot(explainer.expected_value, shap_values, training_data_set,
feature_names=training_data_set.columns.tolist(), ignore_warnings=True)
AND
shap.decision_plot(explainer.expected_value, shap_values[0], training_data_set.iloc[0],
feature_names=training_data_set.columns.tolist(), ignore_warnings=True)
Force Plot
(total AND single observation)
html_img = shap.force_plot(explainer.expected_value, shap_values, training_data_set)
AND
shap.force_plot(explainer.expected_value, shap_values[0], training_data_set.iloc[0], matplotlib=True)
Dependence Plot
if 'is_genre_fantasy' in training_data_set.columns:
shap.dependence_plot(feature_to_analyze, shap_values, training_data_set, interaction_index='is_genre_fantasy')
SHAP - Python Code
https://github.com/slundberg/shap
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.html
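A minimal end-to-end sketch tying the snippets above together, assuming the xgb_model and training_data_set from the case study; the output file names are illustrative. Passing show=False (an addition to the snippets above) keeps the current matplotlib figure open so that plt.savefig can capture it before plt.close():

import shap
import matplotlib.pyplot as plt

explainer = shap.TreeExplainer(xgb_model)
shap_values = explainer.shap_values(training_data_set)

# global view: summary plot saved to disk
shap.summary_plot(shap_values, training_data_set, show=False)
plt.savefig("summary_plot.png", bbox_inches="tight")
plt.close()

# local view: decision plot for the first observation
shap.decision_plot(explainer.expected_value, shap_values[0],
                   training_data_set.iloc[0],
                   feature_names=training_data_set.columns.tolist(),
                   show=False)
plt.savefig("decision_plot_single.png", bbox_inches="tight")
plt.close()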
SHAP - Warnings
❑ Pay attention to data preprocessing
❑ SHAP doesn’t consider queries:
extract all the interactions belonging to the same query and then execute the plot (see the sketch after this list)
❑ The output (score) of the model is NOT the Relevance Label:
it expresses the relative relevance between products
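A sketch of the per-query filtering suggested above, assuming the shap_values and training_data_set from the previous code and a query id column kept aside before training; the file name, column name and query id are hypothetical:

import pandas as pd

# one query id per interaction, aligned with the rows of training_data_set
query_ids = pd.read_csv("book_training_set.csv")["query_id"].to_numpy()

query_mask = query_ids == 42  # 42 is an illustrative query id
shap.summary_plot(shap_values[query_mask], training_data_set[query_mask])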
Conclusion
❑ Explainability is useful to understand the model behaviour
❑ There are several methods for explainability
❑ SHAP is a very powerful library that provides several tools
❑ SHAP’s plots allow us to give local and global explainability to the model
❑ Pay attention to data preprocessing, queries and relevance during plot
interpretation
Future Works
‣ Integration of the SHAP Library in Apache Solr
‣ Exploration of other Explainability libraries/methods that are
more specific to ranking algorithms
Our Blog Posts about Explainability:
‣ https://sease.io/2020/07/explaining-learning-to-rank-models-with-tree-shap.html
‣ https://sease.io/2021/02/a-learning-to-rank-project-on-a-daily-song-ranking-problem-part-2.html
Thank You!
Keep an eye on our Blog page, as more is coming!