shapper is on CRAN – an R wrapper over the SHAP explainer for black-box models

Written by: Alicja Gosiewska

In applied machine learning it is often claimed that we must choose between interpretability and accuracy. However, in the field of Interpretable Machine Learning there are more and more new ideas for explaining black-box models. One of the best-known methods for local explanations is SHapley Additive exPlanations (SHAP).

The SHAP method is used to calculate the influence of variables on a particular prediction. It is based on Shapley values, a technique borrowed from game theory. SHAP was introduced by Scott M. Lundberg and Su-In Lee in the NIPS paper A Unified Approach to Interpreting Model Predictions. It was originally implemented in the Python library shap.

The R package shapper is a port of the Python library shap. In this post we show the functionalities of shapper. The examples are based on the titanic_train data set and a classification task.

While shapper is a port of the Python library shap, there are also pure R implementations of the SHAP method, e.g. iml or shapleyR.

Installation

shapper wraps a Python library, so its installation requires a bit more effort than installation of an ordinary R package.

Install the R package shapper

First of all we need to install shapper. This may be the stable release from CRAN or the development version from GitHub.
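Both paths in code (the GitHub repository name matches the one given at the end of this post):

```r
# stable release from CRAN
install.packages("shapper")

# development version from GitHub
# install.packages("devtools")
devtools::install_github("ModelOriented/shapper")
```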

Install the Python library shap

Before you run shapper, make sure that you have installed Python.

The Python library shap is required to use shapper. shap can be installed either from Python or from R. To install it through R, you can use the function install_shap() from the shapper package.
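From R this is a one-liner:

```r
library("shapper")
# installs the Python shap library (via reticulate)
install_shap()
```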

If you experience any problems related to the installation of Python libraries or the evaluation of Python code, see the reticulate documentation. shapper accesses Python via reticulate, so the solution to the problem is likely to be found there ;-).

Would you survive sinking of the RMS Titanic?

The example usage is presented on the titanic_train dataset from the R package titanic. We will predict the Survived status. The other variables used by the model are: Pclass, Sex, Age, SibSp, Parch, Fare and Embarked.

Let’s build a model

Let's see what our chances are according to a random forest model.
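A sketch of the model-building step; the preprocessing below (selecting columns, converting characters to factors, dropping missing values) and the object names are assumptions for illustration, not necessarily identical to the original analysis:

```r
library("titanic")
library("randomForest")

# keep only the variables listed above
titanic_small <- titanic_train[, c("Survived", "Pclass", "Sex", "Age",
                                   "SibSp", "Parch", "Fare", "Embarked")]
titanic_small$Survived <- factor(titanic_small$Survived)
titanic_small$Sex      <- factor(titanic_small$Sex)
titanic_small$Embarked <- factor(titanic_small$Embarked)
titanic_small <- na.omit(titanic_small)

set.seed(123)
model_rf <- randomForest(Survived ~ ., data = titanic_small)
```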

Prediction to be explained

Let's assume that we want to explain the prediction for a particular observation (male, 8 years old, traveling 1st class, embarked at C, without parents and siblings).

The model prediction for this observation is 0.558 for survival.
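The observation described above can be written as a one-row data frame. The Fare value below is a placeholder for illustration, and model_rf is an assumed name for the random forest built earlier; the exact prediction will depend on the fitted model:

```r
new_passenger <- data.frame(
  Pclass   = 1,
  Sex      = factor("male", levels = c("female", "male")),
  Age      = 8,
  SibSp    = 0,
  Parch    = 0,
  Fare     = 72,
  Embarked = factor("C", levels = c("", "C", "Q", "S"))
)
predict(model_rf, new_passenger, type = "prob")
```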

Here shapper starts

To use the shap() function (an alias for individual_variable_effect()) we need four elements:

  • a model,
  • a data set,
  • a function that calculates scores (a predict function),
  • an instance (or instances) to be explained.

The shap() function can be used directly with these four arguments, but for simplicity we use here the DALEX package with its preimplemented predict functions.

The explainer is an object that wraps up a model and its meta-data. The meta-data consists of, at least, the data set used to fit the model and the observations to explain.

And now it's enough to generate SHAP attributions with the explainer for the RF model.
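A sketch of both steps, assuming the model is stored in model_rf, the training data in titanic_small and the observation in new_passenger (names used for illustration):

```r
library("DALEX")
library("shapper")

# explainer: model + the data used to fit it
explainer_rf <- explain(model_rf,
                        data = titanic_small[, -1],
                        y = as.numeric(as.character(titanic_small$Survived)))

# SHAP attributions for the selected observation
ive_rf <- shap(explainer_rf, new_observation = new_passenger)
```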

Plotting results

To generate a plot of Shapley values you can simply pass an object of class individual_variable_effect to the plot() function. Since we are interested in the class Survived = 1, we may add an additional parameter that filters out all other classes.
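For example, with the attributions stored in an object called ive_rf (the `_ylevel_` column name used for filtering is an assumption – inspect head(ive_rf) to confirm how classes are labeled):

```r
# all classes
plot(ive_rf)

# only attributions for the class Survived = 1
plot(ive_rf[ive_rf$`_ylevel_` == "1", ])
```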

Labels on the y-axis show values of the variables for this particular observation. Black arrows show predictions of the model, in this case the probability of each status. Other arrows show the effect of each variable on this prediction. Effects may be positive or negative and they sum up to the value of the prediction.

On this plot we can see that the model predicts that the passenger will survive. Chances are higher due to young age and 1st class; only Sex = male decreases the chances of survival for this observation.

More models

It is useful to contrast predictions of two models. Here we will show how to use shapper for such contrastive explanations.

We will compare randomForest with the svm implemented in the e1071 package.

This model predicts a 32.5% chance of survival.

The Shapley values plot may be modified. To show more than one model you can pass additional individual_variable_effect objects.

To see only the attributions, use the option show_predicted = FALSE.
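A sketch of the comparison, assuming the objects from the previous sections (model names and the need for probability = TRUE in svm() are assumptions; check the shapper documentation for the exact predict function required for e1071 models):

```r
library("e1071")

model_svm <- svm(Survived ~ ., data = titanic_small, probability = TRUE)
explainer_svm <- explain(model_svm,
                         data = titanic_small[, -1],
                         y = as.numeric(as.character(titanic_small$Survived)))
ive_svm <- shap(explainer_svm, new_observation = new_passenger)

# both models on one plot
plot(ive_rf, ive_svm)

# attributions only, without model predictions
plot(ive_rf, ive_svm, show_predicted = FALSE)
```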

More

Documentation and more examples are available at https://modeloriented.github.io/shapper/. The stable version of the package is on CRAN, the development version is on GitHub (https://github.com/ModelOriented/shapper). Shapper is a part of the DALEX universe.

Break Down: model explanations with interactions and DALEX in the BayArea

The breakDown package explains predictions from black-box models, such as random forests, xgboost, SVMs or neural networks (it works for lm and glm as well). As a result you get a decomposition of the model prediction that can be attributed to particular variables.

Version 0.3 has a new function break_down. It identifies pairwise interactions of variables, so if the model is not additive, then instead of seeing effects of single variables you will see effects for interactions.
The function is easy to use. See an example below.
HR is an artificial dataset. The break_down function correctly identifies the interaction between gender and age. Find more examples in the documentation.
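A sketch of the intended usage on the HR data; the exact signature of break_down() in version 0.3 is an assumption, so check ?break_down:

```r
library("breakDown")

# HR_data ships with the breakDown package
model <- glm(left ~ ., data = HR_data, family = "binomial")

# decompose a single prediction, allowing pairwise interactions
bd <- break_down(model, new_observation = HR_data[1, ])
plot(bd)
```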

The figure below shows that a single prediction was decomposed into 4 parts. One of them is related to the interaction between age and gender.

BreakDown is a part of the DALEXverse – a collection of tools for visualisation, exploration and explanation of complex machine learning models.

Till the end of September I am visiting UC Davis and UC Berkeley. Happy to talk about DALEX explainers, XAI and related stuff.
So, if you want to talk about interpretability of complex ML models, just let me know.

Yes, it’s part of the DALEX invasion 😉
Thanks to the H2020 project RENOIR.

Ceteris Paribus v0.3 is on CRAN

The Ceteris Paribus package is a part of the DALEX family of model explainers. Version 0.3 has just reached CRAN. It is equipped with new functions for very flexible visual exploration of black-box models. Its grammar generalizes Partial Dependence Plots, Individual Conditional Expectations and Wangkardu Plots, and gives a lot of flexibility in model comparisons, group comparisons and so on.

See a 100 sec introduction to the ceterisParibus package on YouTube.

Here you will find a one-page cheat sheet with selected use cases.

Here is a longer introduction with notation and some theory.

Here there is a vignette with examples for regression (housing prices).

And here for multiclass classification (HR models).

It’s a work in progress. Feel free to contribute!

Local Goodness-of-Fit Plots / Wangkardu Explanations – a new DALEX companion

The next DALEX workshop will take place in 4 days at UseR. In the meantime I am working on a new explainer for a single observation.
Something like a diagnostic plot for a single observation. Something that extends Ceteris Paribus Plots. Something similar to Individual Conditional Expectation (ICE) Plots. An experimental version is implemented in the ceterisParibus package.
 
Intro

For a single observation, Ceteris Paribus Plots (What-If plots) show how the predictions of a model change along a single variable. But they do not tell whether the model is well fitted around this observation.

Here is an idea how to fix this:
(1) Take N points from the validation dataset that are closest to the selected observation (the Gower distance is used by default).
(2) Plot N Ceteris Paribus Plots for these points.
(3) Since we know the true y for these points, we can also plot the model residuals at these points.
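The three steps can be sketched with the ceterisParibus package; select_neighbours() is a function from the package, while the object names (validation_data, observation, explainer_rf) and the remaining arguments are assumptions for illustration:

```r
library("ceterisParibus")

# (1) pick the N = 18 observations closest to the one of interest
#     (Gower distance by default)
neighbours <- select_neighbours(validation_data, observation, n = 18)

# (2) Ceteris Paribus profiles for the observation and its neighbours
cp_obs <- ceteris_paribus(explainer_rf, observation)
cp_nei <- ceteris_paribus(explainer_rf, neighbours, y = neighbours$m2.price)

# (3) plot the profiles together with residuals of the neighbours
plot(cp_obs, cp_nei, show_residuals = TRUE)
```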
 
Examples

Here we have an example for a random forest model. The validation dataset has 9000 observations. We use N=18 observations closest to the observation of interest to show the model stability and the local goodness-of-fit.



The empty circle in the middle stands for the observation of interest. We may read its surface (OX axis, around 85 square meters) and the model prediction (OY axis, around 3260 EUR).
The thick line stands for Ceteris Paribus Plot for the observation of interest.
Grey points stand for the 18 closest observations from the validation dataset, while grey lines are their Ceteris Paribus Plots.
Red and blue lines stand for residuals of these neighbours. The corresponding true values of y are marked with red and blue circles.

Red and blue intervals are short and symmetric, so one may say that the model is well fitted around the observation of interest.

modelDown: a website generator for your predictive models

I love the pkgdown package. With a single line of code you can create a complete website with examples, vignettes and documentation for your package. Brilliant!

So what about a website generator for predictive models?
Imagine that you can take a set of predictive models (generated with caret, mlr, glm, xgboost or randomForest – anything) and automagically generate a website with exploration/documentation for these models. A documentation with archivist hooks to the models, with tables and graphs for model performance explainers, conditional model response explainers or explainers for particular predictions.

During the summer semester three students from the Warsaw University of Technology (Kamil Romaszko, Magda Tatarynowicz, Mateusz Urbański) developed the modelDown package for R as a team project assignment. You can find the package here. Visit an example website created with this package for four example models (instructions). And read more about this package at its pkgdown website or below.

BTW: If you want to learn more about model explainers, please come to our DALEX workshops at the WhyR? 2018 conference in Wroclaw or the UseR! 2018 conference in Brisbane.

Getting started with modelDown
by Kamil Romaszko, Magda Tatarynowicz, Mateusz Urbański

Introduction

Did you ever want to have one place where you can find information explaining your model? Or maybe you were missing a tool that can show differences between multiple models for the same dataset? Well, here comes the modelDown package. Using the DALEX package, it creates one HTML page with plots and information related to the model(s) you want to analyze.

If you want to see an example website generated with modelDown, check out this link (along with the script that was used to create the HTML). Read on to see how to use the package for your own models and what features it provides.

The examples presented here were generated for the dataset HR_data from the breakDown package (available on CRAN). The dataset contains various information about employees (for example their satisfaction with work or their salary). The information we predict is whether they left the company.

Installation
First things first – how can you use this package? Install it from GitHub:
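A minimal sketch, assuming the GitHub repository MI2DataLab/modelDown and a DALEX explainer built on the HR_data example used above:

```r
# install.packages("devtools")
devtools::install_github("MI2DataLab/modelDown")

library("modelDown")
library("DALEX")
library("breakDown")

model <- glm(left ~ ., data = HR_data, family = "binomial")
explainer <- explain(model, data = HR_data, y = HR_data$left)

# generates a static HTML website documenting the model
modelDown(explainer)
```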


Not only LIME

I've heard about a number of consulting companies that decided to use a simple linear model instead of a black-box model with higher performance, because "the client wants to understand the factors that drive the prediction".
And usually the discussion goes as follows: "We have tried LIME for our black-box model, it is great, but it is not working in our case." "Have you tried other explainers?" "What other explainers?"

So here you have a map of different visual explanations for black-box models. Choose one in (on average) less than three simple steps.

These are available in the DALEX package. Feel free to propose other visual explainers that should be added to this map (and the package).

Ceteris Paribus Plots – a new DALEX companion

If you like magical incantations in Data Science, please welcome the Ceteris Paribus Plots. Otherwise feel free to call them What-If Plots.

Ceteris Paribus (Latin for "all else unchanged") Plots explain complex Machine Learning models around a single observation. They supplement tools like breakDown, Shapley values, LIME or LIVE. In addition to feature importance/feature attribution, we can now see how the model response changes along a specific variable, keeping all other variables unchanged.

How do cancer risk scores change with age? How do credit scores change with salary? How do insurance costs change with age?

Well, use the ceterisParibus package to generate plots like the one below.
Here we have an explanation for a random forest model that predicts apartment prices. The presented profiles are prepared for a single observation marked with dashed lines (a 130 m2 apartment on the 3rd floor). From these profiles one can read how the model response is linked with particular variables.

Instead of the original values on the OX scale one can plot quantiles. This way one can put all variables in a single plot.

And once all variables are in the same scale, one can compare two or more models.

Yes, they are model agnostic and will work for any model!
Yes, they can be interactive (see plot_interactive function or examples below)!
And yes, you can use them with other DALEX explainers!
More examples with R code.

ML models: What they can’t learn?

What I love in conferences are the people that come after your talk and say: it would be cool to add XYZ to your package/method/theorem.

After the eRum (great conference by the way) I was lucky to hear from Tal Galili: It would be cool to use DALEX for teaching, to show how different ML models are learning relations.

Cool idea. So let's see what can and what cannot be learned by the most popular ML models. Here we will compare random forests against linear models against SVMs.
Find the full example here. We simulate variables from the uniform U[0,1] distribution and calculate y from the following equation

In all the figures below we compare PDP model responses against the true relation between the variable x and the target variable y (pink color). All these plots were created with the DALEX package.

For x1 we can check how different models deal with a quadratic relation. The linear model fails without prior feature engineering, the random forest is guessing the shape, but the best fit is found by SVMs.

With sine-like oscillations the story is different. SVMs are not that flexible, while the random forest gets much closer.

It turns out that monotonic relations are not easy for these models. The random forest is close, but even here we cannot guarantee monotonicity.

The linear model is the best one when it comes to a truly linear relation. But the other models are not that far behind.

The abs(x) function is not an easy case for any of these models.

Find the R codes here.

Of course the behavior of all these models depends on the number of observations, the noise-to-signal ratio, correlations among variables and interactions.
Yet it may be educational to use PDP curves to see how different models learn relations – what they can grasp easily and what they cannot.

DALEX @ eRum 2018

DALEX invasion has started with the workshop and talk @ eRum 2018.

Find workshop materials at DALEX: Descriptive mAchine Learning EXplanations. Tools for exploration, validation and explanation of complex machine learning models (thanks Mateusz Staniak for having the second part of the workshop).

And my presentation Show me your model 2.0! (thanks go to the whole MI2DataLab for contributions and to Malgorzata Pawlak for great stickers).

DALEX Stories – Warsaw apartments

This Monday we had a machine learning oriented meeting of Warsaw R Users. Prof. Bernd Bischl from LMU gave an excellent overview of the mlr package (machine learning in R), then I introduced DALEX (Descriptive mAchine Learning EXplanations) and Mateusz Staniak introduced the live and breakDown packages.

The meeting pushed me to write down a gentle introduction to the DALEX package. You may find it here https://pbiecek.github.io/DALEX_docs/.
The introduction is written around a story based on Warsaw apartments data.

The story goes like that:
We have two models fitted to the apartments data: a linear model and a randomForest model. It happens that both models have exactly the same root mean square error calculated on a validation dataset.
So an interesting question arises: which model should we choose?

The analysis of variable importance for both models reveals that the variable construction.year is important for the randomForest model but is completely neglected by the linear model.
New observation: something is going on with construction.year.

The analysis of model responses reveals that the relation between construction.year and the price per square meter is nonlinear.
At this point it looks like the random forest is the better model, since it captures a relation which the linear model does not see.

But (there is always a but) when you audit residuals from the random forest model, it turns out that it heavily underpredicts prices of expensive apartments.
This is a very bad property for a pricing model. This may result in missed opportunities for larger profits.
New observation: do not use this rf model for predictions.

So, what to do?
DALEX shows that despite equal root mean square errors the two models are very different and capture different parts of the signal.
As we increase our understanding of the signal, we are able to design a better model. And here we go.
This new linear model has a much lower root mean square of residuals, as it is built on the strengths of both initial models.

All plots were created with DALEX explainers. Find R code and more details here.