DALEX has a new skin! Learn how it was designed at gdansk2019.satRdays

DALEX is an R package for visual explanation, exploration, diagnostics and debugging of predictive ML models (aka XAI, eXplainable Artificial Intelligence). It has a bunch of visual explainers for different aspects of predictive models. Some of them are useful during model development, others for fine-tuning, model diagnostics or model explanation.

Recently Hanna Dyrcz designed a beautiful new theme for these explainers. It is implemented in the DALEX::theme_drwhy() function.
Find some teaser plots below. A nice Interpretable Machine Learning story for the Titanic data is presented here.
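
If you want to try the new theme on your own plots, here is a minimal sketch. It assumes only that theme_drwhy() behaves like a regular ggplot2 theme; the dataset and plot below are arbitrary toy examples.

    library(DALEX)
    library(ggplot2)

    # theme_drwhy() should work as an ordinary ggplot2 theme,
    # so it can be added to any plot; mtcars is used only for illustration
    ggplot(mtcars, aes(x = wt, y = mpg)) +
      geom_point() +
      theme_drwhy()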

Hanna is a very talented designer, so I'm super happy that at the next satRdays @ gdansk2019 we will have a joint talk, "Machine Learning meets Design. Design meets Machine Learning".

New plots are available in the GitHub version of DALEX 0.2.8 (please star the repo if you like or use it; this helps to attract new developers). It will get to CRAN soon (I hope).

Instance level explainers, like Break Down or SHAP

Instance level profiles, like Ceteris Paribus or Partial Dependency

Global explainers, like Variable Importance Plots

See you at satRdays!

shapper is on CRAN, it's an R wrapper over the SHAP explainer for black-box models

Written by: Alicja Gosiewska

In applied machine learning there is an opinion that we need to choose between interpretability and accuracy. However, in the field of Interpretable Machine Learning there are more and more new ideas for explaining black-box models. One of the best known methods for local explanations is SHapley Additive exPlanations (SHAP).

The SHAP method is used to calculate the influence of variables on a particular observation. It is based on Shapley values, a technique borrowed from game theory. SHAP was introduced by Scott M. Lundberg and Su-In Lee in the NIPS paper A Unified Approach to Interpreting Model Predictions. Originally it was implemented in the Python library shap.

The R package shapper is a port of the Python library shap. In this post we show the functionalities of shapper. The examples use the titanic_train data set and a classification task.

While shapper is a port of the Python library shap, there are also pure R implementations of the SHAP method, e.g. iml or shapleyR.

Installation

shapper wraps up a Python library, so its installation requires a bit more effort than the installation of an ordinary R package.

Install the R package shapper

First of all we need to install shapper, either the stable release from CRAN or the development version from GitHub.
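
Both routes are sketched below; the GitHub repository name is taken from the link at the end of this post.

    # stable release from CRAN
    install.packages("shapper")

    # development version from GitHub
    devtools::install_github("ModelOriented/shapper")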

Install the Python library shap

Before you run shapper, make sure that you have Python installed.

The Python library shap is required to use shapper. It can be installed either from Python or from R. To install it through R, you can use the install_shap() function from the shapper package.
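
For example:

    library(shapper)
    install_shap()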

If you experience any problems related to the installation of Python libraries or the evaluation of Python code, see the reticulate documentation. shapper accesses Python through reticulate, so the solution to the problem is likely to be found there ;-).

Would you survive the sinking of the RMS Titanic?

The example usage is presented on the titanic_train dataset from the R package titanic. We will predict the Survived status. The other variables used by the model are: Pclass, Sex, Age, SibSp, Parch, Fare and Embarked.

Let’s build a model

Let's see what our chances are, as assessed by a random forest model.
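
A rough sketch of such a model is given below; the exact preprocessing used in the original post may differ (here we simply drop rows with missing values).

    library(titanic)
    library(randomForest)

    # keep only the variables listed above and drop rows with missing values
    # (a simplification; the original post may have prepared the data differently)
    titanic_small <- titanic_train[, c("Survived", "Pclass", "Sex", "Age",
                                       "SibSp", "Parch", "Fare", "Embarked")]
    titanic_small <- na.omit(titanic_small)
    titanic_small$Survived <- factor(titanic_small$Survived)
    titanic_small$Sex      <- factor(titanic_small$Sex)
    titanic_small$Embarked <- factor(titanic_small$Embarked)

    set.seed(123)
    model_rf <- randomForest(Survived ~ ., data = titanic_small)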

Prediction to be explained

Let's assume that we want to explain the prediction for a particular observation: a male, 8 years old, travelling in the 1st class, embarked at C, without parents and siblings.
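
A sketch of such an observation is below; the Fare value is an assumption (it is not given in the text), and the factor levels are taken from the toy data prepared above.

    # the passenger described above; Fare = 72 is an assumed value
    new_passenger <- data.frame(
      Pclass   = 1,
      Sex      = factor("male", levels = levels(titanic_small$Sex)),
      Age      = 8,
      SibSp    = 0,
      Parch    = 0,
      Fare     = 72,
      Embarked = factor("C", levels = levels(titanic_small$Embarked))
    )
    predict(model_rf, new_passenger, type = "prob")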

The model prediction for this observation is 0.558 for survival.

Here shapper starts

To use the shap() function (an alias for individual_variable_effect()) we need four elements:

  • a model,
  • a data set,
  • a function that calculates scores (a predict function),
  • an instance (or instances) to be explained.

The shap() function can be used directly with these four arguments, but for simplicity we use here the DALEX package with its preimplemented predict functions.

An explainer is an object that wraps up a model and meta-data. The meta-data consists of, at least, the data set used to fit the model and the observations to explain.
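
A sketch of creating such an explainer (argument names follow the DALEX documentation and the object names are mine; details may differ between versions):

    library(DALEX)

    exp_rf <- explain(model_rf,
                      data  = titanic_small[, -1],
                      y     = as.numeric(as.character(titanic_small$Survived)),
                      label = "randomForest")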

And now it's enough to generate SHAP attributions with the explainer for the RF model.
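
For example, assuming the shap() alias accepts an explainer and a new_observation argument, as in the package documentation:

    library(shapper)

    ive_rf <- shap(exp_rf, new_observation = new_passenger)
    ive_rf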

Plotting results

To generate a plot of Shapley values you can simply pass an object of class individual_variable_effect to the plot() function. Since we are interested in the class Survived = 1, we may add an additional parameter that filters only the selected classes.
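
A minimal call is sketched below; the name of the class-filtering parameter is not reproduced here, as it may differ between package versions.

    plot(ive_rf)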

Labels on the y-axis show the values of the variables for this particular observation. Black arrows show the predictions of the model, in this case the probabilities of each status. The other arrows show the effect of each variable on this prediction. Effects may be positive or negative and they sum up to the value of the prediction.

In this plot we can see that the model predicts that the passenger will survive. Chances are higher due to young age and 1st class; only Sex = male decreases the chances of survival for this observation.

More models

It is useful to contrast the predictions of two models. Here we will show how to use shapper for such contrastive explanations.

We will compare randomForest with svm implemented in the e1071 package.

This model predicts a 32.5% chance of survival.

The Shapley values plot may be modified. To show more than one model, you can pass more individual_variable_plot objects.
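
A sketch of the full comparison, under the same assumptions as above; the way probabilities are extracted from e1071::svm here is my assumption and a custom predict function may need adjusting in practice.

    library(e1071)

    model_svm <- svm(Survived ~ ., data = titanic_small, probability = TRUE)

    # a simple predict function returning the probability of survival
    p_fun <- function(model, newdata) {
      attr(predict(model, newdata, probability = TRUE), "probabilities")[, "1"]
    }

    exp_svm <- explain(model_svm,
                       data  = titanic_small[, -1],
                       y     = as.numeric(as.character(titanic_small$Survived)),
                       predict_function = p_fun,
                       label = "svm")

    ive_svm <- shap(exp_svm, new_observation = new_passenger)

    # passing several objects contrasts the models on one plot
    plot(ive_rf, ive_svm)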

To see only the attributions, use the option show_predcited = FALSE.

More

Documentation and more examples are available at https://modeloriented.github.io/shapper/. The stable version of the package is on CRAN, the development version is on GitHub (https://github.com/ModelOriented/shapper). Shapper is a part of the DALEX universe.

x-mas tRees with gganimate, ggplot, plotly and friends

For the last homework before Christmas I asked my students from DataVisTechniques to create a "Christmas style" data visualization in R or Python (based on simulated data).

Libraries like rbokeh, ggiraph, vegalite, shiny+ggplot2 or plotly were popular last year. This year there are also some nice submissions that use gganimate.

Find source codes here. Plots created last year are here.
And here are homeworks from this year.

Trees created with gganimate (and gifski)

Trees created with ggplot2 (and sometimes shiny)

Trees created with plotly


Trees created with python


Trees created with rbokeh


Trees created with vegalite


Trees created with ggiraph

Data, movies and ggplot2

Yet another boring barplot?
No!
I asked my students from MiNI WUT to visualize some data about their favorite movies or series.
The results are pretty awesome.
Believe it or not, the charts in these posters were created with ggplot2 (most of them)!

Star Wars

A fan of StaR WaRs? Find out which color is the most popular for lightsabers!
Yes, these lightsabers were created with ggplot2.
Would you guess which characters are overweight?
Find the R code and the data on GitHub.

Harry Pixel

Take frames from the Harry Potter movies, use k-means to extract the dominant colors of each frame, calculate the derivative of the color changes and here you are.
The R code and the poster are here.
(Steep derivatives in color space are a nice proxy for dynamic scenes.)

Social Network for Super Heroes

Have you ever wondered what the distribution of superpowers among the Avengers looks like?
Check out this poster or dive into the data.

Pardon my French, but…

Scrape transcripts from over 100k movies, find out how many curse words appear in them, and plot the statistics.
Here are the sources and the poster.
(Bonus Question 1: how are curse words related to the Obama/Trump presidency?
Bonus Question 2: is the number of hard curse words increasing or not?)

Rick and Morty

Interested in the demography of characters from Rick and Morty?
Here is the R code and the poster.
(Tricky question: what is happening with season 3?)

Twin Peaks

Transcripts from Twin Peaks are full of references to coffee and donuts.
Except for the episode in which Laura's murderer is revealed (oops, spoiler alert).
Check this out yourself with these scripts.

The Lion King

Which Disney movie is the most popular?
It wasn’t hard to guess.

Box Office

5D scatterplots?
Here you go.

Next time I will ask my students to visualize data about R packages…
Or maybe you have some other ideas?

Ceteris Paribus Plots – a new DALEX companion

If you like magical incantations in Data Science, please welcome the Ceteris Paribus Plots. Otherwise feel free to call them What-If Plots.

Ceteris Paribus (Latin for "all else unchanged") Plots explain complex Machine Learning models around a single observation. They supplement tools like breakDown, Shapley values, LIME or LIVE. In addition to feature importance/feature attribution, we can now see how the model response changes along a specific variable, keeping all other variables unchanged.

How do cancer risk scores change with age? How do credit scores change with salary? How do insurance costs change with age?

Well, use the ceterisParibus package to generate plots like the one below.
Here we have an explanation for a random forest model that predicts apartment prices. The presented profiles are prepared for a single observation marked with dashed lines (a 130m2 apartment on the 3rd floor). From these profiles one can read how the model response is linked with particular variables.
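
A rough sketch of how such a plot can be generated. The function and argument names follow the ceterisParibus package documentation and may differ between versions; the observation used here is an arbitrary apartment, for illustration only.

    library(DALEX)
    library(ceterisParibus)
    library(randomForest)

    # the apartments data is shipped with DALEX
    model_rf <- randomForest(m2.price ~ ., data = apartments)
    explainer_rf <- explain(model_rf,
                            data = apartments,
                            y = apartments$m2.price)

    # a single observation to explain
    my_apartment <- apartments[1, ]

    cp_rf <- ceteris_paribus(explainer_rf, my_apartment)
    plot(cp_rf)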

Instead of the original values on the OX scale one can plot quantiles. This way one can put all variables in a single plot.

And once all variables are in the same scale, one can compare two or more models.

Yes, they are model agnostic and will work for any model!
Yes, they can be interactive (see plot_interactive function or examples below)!
And yes, you can use them with other DALEX explainers!
More examples with R code.

ML models: What they can’t learn?

What I love about conferences are the people who come up after your talk and say: It would be cool to add XYZ to your package/method/theorem.

After eRum (a great conference, by the way) I was lucky to hear from Tal Galili: It would be cool to use DALEX for teaching, to show how different ML models learn relations.

Cool idea. So let's see what can and what cannot be learned by the most popular ML models. Here we will compare random forests against linear models and against SVMs.
Find the full example here. We simulate variables from the uniform U[0,1] distribution and calculate y from the following equation

In all the figures below we compare PDP model responses against the true relation between the variable x and the target variable y (pink color). All these plots were created with the DALEX package.
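
A sketch of this kind of comparison is below. The simulated relation is an illustration only (the exact equation from the post is not reproduced), and the single_variable() call follows early DALEX releases; newer versions expose the same idea through variable_response() / model_profile().

    library(DALEX)
    library(randomForest)

    # toy simulation: a quadratic effect of x1 and a sine-like effect of x2
    set.seed(1)
    n  <- 1000
    df <- data.frame(x1 = runif(n), x2 = runif(n))
    df$y <- df$x1^2 + sin(10 * df$x2) + rnorm(n, sd = 0.1)

    model_rf <- randomForest(y ~ x1 + x2, data = df)
    model_lm <- lm(y ~ x1 + x2, data = df)

    exp_rf <- explain(model_rf, data = df[, c("x1", "x2")], y = df$y, label = "rf")
    exp_lm <- explain(model_lm, data = df[, c("x1", "x2")], y = df$y, label = "lm")

    # PDP-style responses for x1, drawn for both models on one plot
    pdp_rf <- single_variable(exp_rf, variable = "x1", type = "pdp")
    pdp_lm <- single_variable(exp_lm, variable = "x1", type = "pdp")
    plot(pdp_rf, pdp_lm)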

For x1 we can check how different models deal with a quadratic relation. The linear model fails without prior feature engineering, the random forest guesses the shape, but the best fit is found by SVMs.

With sine-like oscillations the story is different. SVMs are not that flexible, while the random forest gets much closer.

It turns out that monotonic relations are not easy for these models. The random forest is close, but even here we cannot guarantee monotonicity.

The linear model is the best one when it comes to a truly linear relation. But the other models are not that far off.

The abs(x) relation is not an easy case for any of these models.

Find the R code here.

Of course the behavior of all these models depends on the number of observations, the signal-to-noise ratio, correlations among variables and interactions.
Yet it may be educational to use PDP curves to see how different models learn relations: what they can grasp easily and what they cannot.

DALEX @ eRum 2018

The DALEX invasion has started with a workshop and a talk @ eRum 2018.

Find the workshop materials at DALEX: Descriptive mAchine Learning EXplanations. Tools for exploration, validation and explanation of complex machine learning models (thanks to Mateusz Staniak for running the second part of the workshop).

And my presentation Show me your model 2.0! (thanks go to the whole MI2DataLab for contributions and to Malgorzata Pawlak for great stickers).

DALEX: which variables are really important? Ask your black box model!

Third post from the short series about black-box explainers implemented in the DALEX package. Learn more about DALEX at SER (Warsaw, April 2018), eRum (Budapest, May 2018), WhyR (Wroclaw, June 2018) or UseR (Brisbane, July 2018).

Two weeks ago I wrote about single variable conditional responses and last week I wrote about decompositions of a single prediction.

Sometimes we would like to know the general structure of a model, or at least to know which variables are the most influential. There are many different approaches to this problem proposed in the literature. A nice, simple, model agnostic approach is described in this article (Fisher, Rudin, and Dominici 2018): to see how important variable X is, permute its values and measure the drop in model accuracy.
This procedure is implemented in the DALEX package in the variable_dropout() function. There are some tweaks (for large datasets you do not need to permute all rows, while for small datasets you could consider some oversampling), but the idea is the same.
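
A rough sketch of the call is below, using the apartments data shipped with DALEX rather than the HR dataset from the post; in newer DALEX releases the same functionality is exposed as variable_importance() / model_parts().

    library(DALEX)
    library(randomForest)

    model_rf <- randomForest(m2.price ~ ., data = apartments)
    explainer_rf <- explain(model_rf,
                            data  = apartments,
                            y     = apartments$m2.price,
                            label = "randomForest")

    # permutation-based importance: drop in loss after permuting each variable
    vd_rf <- variable_dropout(explainer_rf)
    plot(vd_rf)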

In the figure below you will find variable importances for three models created on the HR dataset. It is easy to spot that the randomForest model performs best and that satisfaction_level is the most important variable in all three models.


There are two things that I like about this explainer.

1) Variable effects for a single model are interesting, but the ability to compare effects across many models is even more interesting. In DALEX you can simply contrast/compare explainers across different models.

2) There is no reason to start variable importance plots at the point 0, since the initial model performance differs between models. It is much more informative to present both the initial model performance and the drop in performance resulting from the dropout of a variable.

If you want to learn more about the DALEX package and variable importance, consult the following vignette or the DALEX website.


How fractals helped my students to master package development in R

Last semester I taught R programming at MIMUW. My lectures are project oriented, and the second project was related to package development. The idea was straightforward: each team of students was to create a package that produces IFS fractals (based on iterated function systems). Each package had to have two generic functions, create() and plot(), documentation and a vignette. Fractals had to be implemented with the use of S3 or S4 classes.
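
For readers who have not played with IFS before, here is a minimal sketch of the idea (not taken from any of the students' packages): the "chaos game" version of an IFS with three contractions, which draws a Sierpinski gasket.

    # chaos game for the Sierpinski gasket: repeatedly jump halfway
    # towards a randomly chosen vertex of a triangle
    sierpinski <- function(n = 50000) {
      vertices <- matrix(c(0, 0,
                           1, 0,
                           0.5, 1), ncol = 2, byrow = TRUE)
      pts <- matrix(NA_real_, nrow = n, ncol = 2)
      p <- runif(2)
      for (i in seq_len(n)) {
        v <- vertices[sample(3, 1), ]
        p <- (p + v) / 2      # each contraction halves the distance to a vertex
        pts[i, ] <- p
      }
      pts
    }

    plot(sierpinski(), pch = ".", asp = 1, axes = FALSE, xlab = "", ylab = "")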

I have students with different backgrounds: mostly statistics, but some come from physics, psychology or biology. I was a bit unsure how they would deal with concepts like iterated contractions.
In the end the results exceeded my expectations.

Guess what happens with students' engagement when their packages start producing nice plots. Their need/hunger for more leads to beautiful things.

This team got interested in nonlinear transformations. They managed to create an Apollonian Gasket generator and much more. See their vignette and package here.


This team got interested in probabilistic mixtures of two fractals. They developed a Shiny app that mixes two sets of contractions with given mixture proportions. Here is a mixture of a Sierpinski gasket and a tree. Find their vignette here.


This team got interested in random fractals. They developed a fractal generator that draws the parameters of each contraction at random. As a result they get beautiful random shapes like these. Here is their vignette.


And these two teams got interested in different ways of fractal colouring. Vignettes of Team 1 and Team 2


It turns out that fractals are very addictive!
Use them with care 😉

DALEX: how would you explain this prediction?

Last week I wrote about single variable explainers implemented in the DALEX package. They are useful for plotting the relation between a model output and a single variable.

But sometimes we are more focused on a single model prediction. If our model predicts a possible drug response for a patient, we really need to know which factors drive the model prediction for that particular patient. For linear models it is relatively easy, as the structure of the model is additive. In 2017 we developed the breakDown package for lm/glm models.

But how to explain/decompose/approximate predictions for any black-box model?
There are several approaches. Probably the best known is LIME, with great examples for image and text data. There is an R port, lime, developed by Thomas Pedersen. In collaboration with Mateusz Staniak we developed the live package, a similar approach that is easy to use with models created by the mlr package.
Another technique that can be used here are Shapley values, which use attribution theory/game theory to attribute the effects of different variables to a single prediction.

Recently we have developed yet another approach (paper under preparation, implemented in breakDown version 0.4) that works in a model agnostic way (here you can check how to use it for caret models). You can play with it via the single_prediction() function in the DALEX package.
Such a decomposition is useful for many reasons mentioned in the papers listed above (deeper understanding, validation, trust, etc.).
And, what is really special about the DALEX package, you can compare the contributions of different models on the same scale.

Let's train three models (a glm, a gradient boosting model and a random forest model) to predict the quality of wine. These models are very different in structure. In the figure below, for a single wine, we compare the predictions calculated by these models. For this single wine, the most influential variable in all models is the alcohol concentration, as the wine has a much higher concentration than average. Then pH and sulphates take the second and third positions in all three models. It looks like the models agree to some extent, even though their structures are very different.
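
A sketch of this kind of comparison is below. For brevity only two models are shown, the wine data is assumed to come from the breakDown package, the chosen observation is an arbitrary row, and the single_prediction() arguments follow early DALEX releases.

    library(DALEX)
    library(randomForest)
    library(breakDown)   # assumed source of the `wine` dataset

    model_rf <- randomForest(quality ~ ., data = wine)
    model_lm <- lm(quality ~ ., data = wine)

    exp_rf <- explain(model_rf, data = wine, y = wine$quality, label = "randomForest")
    exp_lm <- explain(model_lm, data = wine, y = wine$quality, label = "lm")

    # a single wine to explain
    new_wine <- wine[1, ]

    sp_rf <- single_prediction(exp_rf, observation = new_wine)
    sp_lm <- single_prediction(exp_lm, observation = new_wine)

    # contributions of both models are shown on the same scale
    plot(sp_rf, sp_lm)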


If you want to learn more about the DALEX package and decompositions of model predictions, please consult the following cheatsheet or the DALEX website.

If you want to learn more about explainers in general, join our DALEX Invasion!
Find our DALEX workshops at SER (Warsaw, April 2018), ERUM (Budapest, May 2018), WhyR (Wroclaw, June 2018) or UseR (Brisbane, July 2018).
