Ceteris Paribus Plots – a new DALEX companion

If you like magical incantations in Data Science, please welcome the Ceteris Paribus Plots. Otherwise feel free to call them What-If Plots.

Ceteris Paribus (latin for all else unchanged) Plots explain complex Machine Learning models around a single observation. They supplement tools like breakDown, Shapley values, LIME or LIVE. In addition to feature importance/feature attribution, now we can see how the model response changes along a specific variable, keeping all other variables unchanged.

How do cancer risk scores change with age? How do credit scores change with salary? How do insurance costs change with age?

Well, use the ceterisParibus package to generate plots like the one below.
Here we have an explanation for a random forest model that predicts apartment prices. The presented profiles are prepared for a single observation, marked with dashed lines (a 130 m² apartment on the 3rd floor). From these profiles one can read how the model response is linked with particular variables.
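Below is a minimal sketch of how such profiles might be produced. It assumes the ceterisParibus interface with a ceteris_paribus() function that takes a DALEX explainer and a single observation; exact argument names may differ between package versions, and the chosen apartment is just an illustrative row.

library("DALEX")
library("randomForest")
library("ceterisParibus")

# fit a model on the apartments data shipped with DALEX
model_rf <- randomForest(m2.price ~ ., data = apartments)

# wrap the model in a DALEX explainer
explainer_rf <- explain(model_rf, data = apartmentsTest[, -1], y = apartmentsTest$m2.price)

# pick one apartment and compute its ceteris paribus profiles
new_apartment <- apartmentsTest[1, ]   # stands in for the 130 m2 apartment from the figure
profile_rf <- ceteris_paribus(explainer_rf, new_apartment)
plot(profile_rf)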

Instead of the original values on the OX axis one can plot quantiles. This way one can put all variables in a single plot.

And once all variables are on the same scale, one can compare two or more models.

Yes, they are model agnostic and will work for any model!
Yes, they can be interactive (see the plot_interactive() function or the examples below)!
And yes, you can use them with other DALEX explainers!
More examples with R code.

ML models: what can't they learn?

What I love about conferences are the people who come up after your talk and say: It would be cool to add XYZ to your package/method/theorem.

After eRum (a great conference, by the way) I was lucky to hear this from Tal Galili: It would be cool to use DALEX for teaching, to show how different ML models learn relations.

Cool idea. So let's see what can and what cannot be learned by the most popular ML models. Here we will compare random forests, linear models and SVMs.
Find the full example here. We simulate variables from the uniform U[0,1] distribution and calculate y from the following equation

In all figures below we compare PDP model responses against the true relation between the variable x and the target variable y (in pink). All these plots are created with the DALEX package.
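For readers who want to reproduce this kind of comparison, here is a rough sketch. The simulated relation below (a simple quadratic in x1) is only an illustration, not the exact equation used in the post; e1071::svm stands in for "SVMs"; and it assumes the DALEX interface of that time, where single_variable() computed PDP responses for an explainer.

library("DALEX")
library("randomForest")
library("e1071")

# illustrative data: uniform predictors and a quadratic signal in x1
set.seed(1313)
n  <- 1000
x1 <- runif(n); x2 <- runif(n)
y  <- x1^2 + rnorm(n, 0, 0.1)
df <- data.frame(y, x1, x2)

model_lm  <- lm(y ~ ., data = df)
model_rf  <- randomForest(y ~ ., data = df)
model_svm <- svm(y ~ ., data = df)

# one explainer per model, then PDP responses for x1 plotted together
exp_lm  <- explain(model_lm,  data = df[, -1], y = df$y, label = "lm")
exp_rf  <- explain(model_rf,  data = df[, -1], y = df$y, label = "randomForest")
exp_svm <- explain(model_svm, data = df[, -1], y = df$y, label = "svm")

plot(single_variable(exp_lm,  variable = "x1", type = "pdp"),
     single_variable(exp_rf,  variable = "x1", type = "pdp"),
     single_variable(exp_svm, variable = "x1", type = "pdp"))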

For x1 we can check how different models deal with a quadratic relation. The linear model fails without prior feature engineering, the random forest roughly guesses the shape, but the best fit is found by the SVM.

With sine-like oscillations the story is different. The SVM is not that flexible, while the random forest gets much closer.

It turns out that monotonic relations are not easy for these models. The random forest is close, but even here we cannot guarantee monotonicity.

The linear model is the best one when it comes to a truly linear relation. But the other models are not that far off.

The abs(x) relation is not an easy case for any of these models.

Find the R codes here.

Of course the behavior of all these models depends on the number of observations, the signal-to-noise ratio, correlations among variables and interactions.
Yet it may be educational to use PDP curves to see how different models learn relations: what they can grasp easily and what they cannot.

DALEX @ eRum 2018

The DALEX invasion has started with a workshop and a talk @ eRum 2018.

Find the workshop materials at DALEX: Descriptive mAchine Learning EXplanations. Tools for exploration, validation and explanation of complex machine learning models (thanks to Mateusz Staniak for running the second part of the workshop).

And here is my presentation Show me your model 2.0! (thanks go to the whole MI2DataLab for contributions and to Malgorzata Pawlak for great stickers).

DALEX: which variables are really important? Ask your black box model!

This is the third post in a short series about black-box explainers implemented in the DALEX package. Learn more about DALEX at SER (Warsaw, April 2018), eRum (Budapest, May 2018), WhyR (Wroclaw, June 2018) or UseR (Brisbane, July 2018).

Two weeks ago I wrote about single variable conditional responses and last week I wrote about decompositions of a single prediction.

Sometimes we would like to understand the general structure of a model, or at least know which variables are the most influential. There are many different approaches to this problem proposed in the literature. A nice, simple and model agnostic approach is described in this article (Fisher, Rudin, and Dominici 2018): to see how important variable X is, let's permute its values and measure the drop in model accuracy.
This procedure is implemented in the variable_dropout() function in the DALEX package. There are some tweaks (for large datasets you do not need to permute all rows, while for small datasets you could consider some oversampling), but the idea is the same.
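A minimal sketch of how this could be called, assuming the variable_dropout() interface with default settings; the apartments data shipped with DALEX stands in here for the HR dataset used in the figure below.

library("DALEX")
library("randomForest")

model_rf <- randomForest(m2.price ~ ., data = apartments)
model_lm <- lm(m2.price ~ ., data = apartments)

exp_rf <- explain(model_rf, data = apartmentsTest[, -1], y = apartmentsTest$m2.price, label = "randomForest")
exp_lm <- explain(model_lm, data = apartmentsTest[, -1], y = apartmentsTest$m2.price, label = "lm")

# permutation-based variable importance, one object per model
vd_rf <- variable_dropout(exp_rf)
vd_lm <- variable_dropout(exp_lm)

# plotting both objects together puts the models side by side
plot(vd_rf, vd_lm)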

In the figure below you will find variable importances for three models created with the HR dataset. It is easy to spot that the randomForest model is the best one and that satisfaction_level is the most important variable in all three models.

[Figure: variable importance plots for three models trained on the HR dataset]

There are two things that I like about this explainer.

1) Variable effects for a single model are interesting, but the ability to compare effects across many models is even more interesting. In DALEX you can simply contrast/compare explainers built for different models.

2) There is no reason to start variable importance plots at 0, since the initial model performance differs between models. It is much more informative to present both the initial model performance and the drop in performance resulting from the dropout of a variable.

If you want to learn more about the DALEX package and variable importance, consult the following vignette or the DALEX website.


How fractals helped my students to master package development in R

Last semester I taught R programming at MIMUW. My lectures are project-oriented; the second project was related to package development. The idea was straightforward: each team of students was to create a package that produces IFS fractals (based on iterated function systems). Each package had to have two generic functions, create() and plot(), documentation and a vignette. Fractals were to be implemented with the use of S3 or S4 classes.
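To give a flavour of the task, here is a very rough sketch of such an interface. The function names and the chaos-game implementation are purely illustrative and not taken from any of the students' packages.

# an S3 class "ifs" with a create() constructor and a plot() method
create <- function(contractions, n = 50000) {
  # the chaos game: start at the origin and repeatedly apply
  # a randomly chosen affine contraction, collecting the visited points
  points <- matrix(0, nrow = n, ncol = 2)
  p <- c(0, 0)
  for (i in seq_len(n)) {
    f <- contractions[[sample(length(contractions), 1)]]
    p <- f(p)
    points[i, ] <- p
  }
  structure(list(points = points), class = "ifs")
}

plot.ifs <- function(x, ...) {
  plot(x$points, pch = ".", asp = 1, axes = FALSE, xlab = "", ylab = "", ...)
}

# Sierpinski gasket: three contractions, each halving the distance to one vertex
sierpinski <- list(
  function(p) p / 2,
  function(p) p / 2 + c(0.5, 0),
  function(p) p / 2 + c(0.25, 0.5)
)
plot(create(sierpinski))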

I have students with different backgrounds: mostly statistics, but some come from physics, psychology or biology. I was a bit unsure how they would deal with concepts like iterated contractions.
In the end, the results exceeded my expectations.

Guess what happens to students' engagement when their packages start producing nice plots. Their hunger for more leads to beautiful things.

This team got interested in nonlinear transformations. They managed to create an Apollonian gasket generator and much more. See their vignette and package here.


This team got interested in probabilistic mixtures of two fractals. They developed a Shiny app that mixes two sets of contractions with given mixture proportions. Here is a mixture of a Sierpinski gasket and a tree. Find their vignette here.


This team got interested in random fractals. They developed a fractal generator that draws the parameters of each contraction at random. As a result they get beautiful random shapes like these. Here is their vignette.


And these two teams got interested in different ways of colouring fractals. See the vignettes of Team 1 and Team 2.


In the end, it turns out that fractals are very addictive!
Use them with care 😉

DALEX: how would you explain this prediction?

Last week I wrote about single variable explainers implemented in the DALEX package. They are useful for plotting the relation between a model output and a single variable.

But sometimes we are more focused on a single model prediction. If our model predicts a possible drug response for a patient, we really need to know which factors drive the model prediction for that particular patient. For linear models it is relatively easy, as the structure of the model is additive. In 2017 we developed the breakDown package for lm/glm models.

But how can we explain/decompose/approximate predictions for any black box model?
There are several approaches. The best known is probably LIME, with great examples for image and text data. There is an R port, lime, developed by Thomas Pedersen. In collaboration with Mateusz Staniak we developed the live package, which takes a similar approach and is easy to use with models created by the mlr package.
Another technique that can be used here is Shapley values, which use game theory to attribute the effects of different variables to a single prediction.

Recently we have developed yet another approach (paper under preparation, implemented in breakDown version 0.4) that works in a model agnostic way (here you can check how to use it for caret models). You can play with it via the single_prediction() function in the DALEX package.
Such a decomposition is useful for many reasons mentioned in the papers listed above (deeper understanding, validation, trust, etc.).
And, what is really special about the DALEX package, you can compare the contributions of different models on the same scale.
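A minimal sketch of such a comparison, assuming the single_prediction() interface from the DALEX version of that time and the wine dataset shipped with breakDown; a linear model and a random forest stand in here for the three models discussed below.

library("DALEX")
library("breakDown")       # assumed source of the wine dataset
library("randomForest")

model_lm <- lm(quality ~ ., data = wine)
model_rf <- randomForest(quality ~ ., data = wine)

exp_lm <- explain(model_lm, data = wine[, -which(names(wine) == "quality")], label = "lm")
exp_rf <- explain(model_rf, data = wine[, -which(names(wine) == "quality")], label = "randomForest")

# decompose the prediction for a single wine and compare both models on one plot
new_wine <- wine[1, ]
sp_lm <- single_prediction(exp_lm, observation = new_wine)
sp_rf <- single_prediction(exp_rf, observation = new_wine)
plot(sp_lm, sp_rf)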

Let's train three models (a glm, a gradient boosting model and a random forest) to predict the quality of wine. These models are very different in structure. In the figure below, for a single wine, we compare the predictions calculated by these models. For this single wine, the most influential variable in all models is the alcohol concentration, as this wine has a much higher concentration than average. Then pH and sulphates take the second and third positions in all three models. It looks like the models agree to some extent, even though their structures are very different.

[Figure: single-prediction decomposition plots for one wine, compared across the three models]

If you want to learn more about the DALEX package and decompositions of model predictions, please consult the following cheatsheet or the DALEX website.

If you want to learn more about explainers in general, join our DALEX Invasion!
Find our DALEX workshops at SER (Warsaw, April 2018), eRum (Budapest, May 2018), WhyR (Wroclaw, June 2018) or UseR (Brisbane, July 2018).


DALEX: understand a black box model – conditional responses for a single variable

Black-box models, like random forests or gradient boosting models, are commonly used in predictive modelling due to their flexibility and high accuracy. The problem is that it is hard to understand how a single variable affects the model predictions.

As a remedy one can use excellent tools like the pdp package (Brandon Greenwell, pdp: An R Package for Constructing Partial Dependence Plots, The R Journal, 2017) or the ALEPlot package (Dan Apley, Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models, 2016).
Or, one can now use the DALEX package to not only plot a conditional model response but also superimpose responses from different models, to better understand the differences between them.
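A minimal sketch of how two such responses might be superimposed, assuming the single_variable() interface available in DALEX at the time (argument names may differ in later releases):

library("DALEX")
library("randomForest")

model_lm <- lm(m2.price ~ ., data = apartments)
model_rf <- randomForest(m2.price ~ ., data = apartments)

exp_lm <- explain(model_lm, data = apartmentsTest[, -1], y = apartmentsTest$m2.price, label = "lm")
exp_rf <- explain(model_rf, data = apartmentsTest[, -1], y = apartmentsTest$m2.price, label = "randomForest")

# conditional response for one variable, both models on a single plot
sv_lm <- single_variable(exp_lm, variable = "construction.year", type = "pdp")
sv_rf <- single_variable(exp_rf, variable = "construction.year", type = "pdp")
plot(sv_lm, sv_rf)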

[Figure: conditional responses for a single variable, superimposed for several models]

Consult the following vignette to learn more about the DALEX package and explainers for a single variable.


Or, if you want to learn more about explainers, join our DALEX Invasion!
Find our DALEX workshops at SER (Warsaw, April 2018), eRum (Budapest, May 2018), WhyR (Wroclaw, June 2018) or UseR (Brisbane, July 2018).

chRistmas tRees

Year after year, in the last classes before Christmas, I ask my students to create a Christmas tree in R.
The classes are about techniques of data visualisation and usually, at this point, we are discussing interactive graphics and tools like rbokeh, ggiraph, vegalite, googleVis, D3, rCharts or plotly. I like this exercise because with most tools it is easy to create a barchart, but how good must the tool and the craftsman be to handle a Christmas tree?
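For scale, even a plain static tree takes a few lines. Here is a hypothetical minimal sketch with ggplot2, with polygon coordinates chosen arbitrarily; the students' interactive submissions go far beyond this.

library("ggplot2")

# three stacked triangles for the crown and a rectangle for the trunk
crown <- data.frame(
  x = c(-3, 3, 0,  -2.5, 2.5, 0,  -2, 2, 0),
  y = c( 0, 0, 3,   1.5, 1.5, 4,   3, 3, 5),
  layer = rep(c("a", "b", "c"), each = 3)
)
trunk <- data.frame(x = c(-0.4, 0.4, 0.4, -0.4), y = c(0, 0, -1, -1))

ggplot() +
  geom_polygon(data = trunk, aes(x, y), fill = "#8B5A2B") +
  geom_polygon(data = crown, aes(x, y, group = layer), fill = "darkgreen") +
  theme_void()
# plotly::ggplotly() could be used to make a ggplot2 version interactive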

Here is what they did this year (having around 1 hour to finish the task). Knitr scripts.

Update: I am still getting new submissions, feel free to submit yours as well.

[Gallery: screenshots of the students' Christmas trees]

archivist: Boost the reproducibility of your research

A few days ago the Journal of Statistical Software published our article (written in collaboration with Marcin Kosiński): archivist: An R Package for Managing, Recording and Restoring Data Analysis Results.

Why should you care? Let’s see.

Starter

Would you like to retrieve the ggplot2 object behind the plot on the right?
Just call the following line in your R console.

archivist::aread('pbiecek/Eseje/arepo/65e430c4180e97a704249a56be4a7b88')

Want to check the versions of the packages that were loaded when the plot was created?
Just call

archivist::asession('pbiecek/Eseje/arepo/65e430c4180e97a704249a56be4a7b88')

Wishful Thinking?

When people talk about reproducibility, they usually focus on tools like packrat, MRAN, docker or RSuite. These are great tools that help manage the execution environment in which analyses are performed. The common belief is that if one is able to replicate the execution environment, then the same R source code will produce the same results.

And it's true in most cases, maybe even more than 99% of cases. Except that there are things in the environment that are hard to control or easy to miss: external system libraries, dedicated hardware or user input. No matter what you copy, you will never know whether it was enough to recreate exactly the same results in the future. So you can hope that the results will be replicated, but do not bet too much on it.
Even if some result does pop up eventually, how can you check that it is the same result as before?

Literate programming is not enough

There are other great tools like knitr, Sweave, Jupyter and others. Their advantage is that results are rendered as tables or plots directly in your report. This gives you a chance to verify whether the results obtained now and some time ago are identical.
But what about more complicated results, like a random forest with 100k trees created with 100k variables, or some deep neural network? It will be hard to verify by eye that the results are identical.

So, what can I do?

The safest solution would be to store copies of every object ever created during the data analysis. All forks, wrong paths, everything, along with detailed information about which functions and parameters were used to generate each result. Something like the ultimate TimeMachine or GitHub for R objects.

With such detailed information, every analysis would be auditable and replicable.
Right now the full tracking of all created objects is not possible without deep changes in the R interpreter.
The archivist package is a lightweight version of such a solution.

What can you do with archivist?

Use the saveToRepo() function to store selected R objects in an archivist repository.
Use the addHooksToPrint() function to automatically keep copies of every plot, model or dataset created in a knitr report.
Use the aread() function to share your results with others or with your future self. It's the easiest way to access objects created by a remote shiny application.
Use the asearch() function to browse objects that fit specified search criteria, like class, date of creation, used variables, etc.
Use asession() to access session info with detailed information about the versions of packages available when the object was created.
Use ahistory() to trace how a given object was created. (A minimal save-and-restore sketch follows below.)
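The save-and-restore workflow might look roughly like this, assuming the archivist 2.x interface described in the JSS article, i.e. createLocalRepo(), saveToRepo() and loadFromLocalRepo() with default settings; the archived plot is just an example object.

library("archivist")
library("ggplot2")

# set up a local repository and archive an object (here: a simple plot)
createLocalRepo(repoDir = "arepo")
p <- ggplot(iris, aes(Sepal.Length, Sepal.Width)) + geom_point()
hash <- saveToRepo(p, repoDir = "arepo")

# later (or on another machine with access to the repository)
# restore the very same object by its md5 hash
p_restored <- loadFromLocalRepo(hash, repoDir = "arepo", value = TRUE)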

Lots of functions! Do you have a cheatsheet?

Yes! It's here.
If that's not enough, find more details in the JSS article.

Explain! Explain! Explain!


Predictive modeling is fun. With random forest, xgboost, lightgbm and other flexible models…
Problems start when someone asks how the predictions are calculated.
Well, some black boxes are hard to explain.
And this is why we need good explainers.

In June, Aleksandra Paluszynska defended her master's thesis Structure mining and knowledge extraction from random forest. Find the corresponding package randomForestExplainer and its vignette here.

In September, David Foster published a very interesting package, xgboostExplainer. Try it to extract useful information from an xgboost model and create waterfall plots that explain variable contributions to predictions. Read more about this package here.

In October, Albert Cheng published lightgbmExplainer, a package with waterfall plots implemented for lightGBM models. Its usage is very similar to the xgboostExplainer package.

Waterfall plots that explain single predictions are great. They are also useful for linear models. So if you are working with lm() or glm(), try the brand new breakDown package (hmm, maybe it should be named glmExplainer). It creates graphical explanations for predictions and has such a nice cheatsheet:

[Figure: breakDown cheatsheet]

Install the package from https://pbiecek.github.io/breakDown/.
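To get a first feel for the package, a minimal sketch with a linear model might look like this, assuming breakDown's broken() function as the entry point (argument names may differ between versions):

library("breakDown")

# a simple linear model and one observation to explain
model <- lm(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width, data = iris)
new_obs <- iris[1, ]

# decompose the prediction for this observation into per-variable contributions
explanation <- broken(model, new_observation = new_obs)
plot(explanation)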

Thanks to RStudio for the cheatsheet’s template.