How to design a model visualisation @ Gdansk satRdays


I had an amazing weekend in Gdansk thanks to the satRday conference organized by Olgun Aydin, Ania Rybinska and Michal Maj.

Together with Hanna Piotrowska we gave a talk, "Machine learning meets design. Design meets machine learning". Hanna redesigned the DALEX visualisations (DALEX is a set of tools for visual explanation of predictive ML models). During the talk she explained what was changed and why.

See for example the metamorphosis of the Break Down explainer. How many differences can you spot?

Every change (axis, reading order, spacing, colors, descriptions, background, annotations) serves some purpose.

Find our presentation on SlideShare.

List of satRday talks (machine learning was quite popular).

Hanna's design is implemented in ggplot2 thanks to Tomasz Mikołajczyk and in D3 thanks to Hubert Baniecki! Find more examples of how to use the new plots here.

Make it explainable!

Most people make the mistake of thinking design is what it looks like… People think it’s this veneer — that the designers are handed this box and told, 'Make it look good!' That’s not what we think design is. It’s not just what it looks like and feels like. Design is how it works.

Steve Jobs, The New York Times, 2003.

The same goes for interpretable machine learning.
Recently I have been talking a lot about interpretability and explainability. And sometimes I get the impression that techniques like SHAP, Break Down, LIME or SAFE are treated like magical incantations that convert complex predictive models into "something interpretable".

But interpretability/explainability is not a binary feature that you either have or not. It's a process. The goal is to increase our understanding of the model's behavior. Try different techniques to broaden your knowledge about the model or its predictions.
Maybe you will never explain 100%, but you will understand more.

XAI/IML (eXplainable Artificial Intelligence/Interpretable Machine Learning) techniques can be used not only for post-hoc explainability, but also for model maintenance, debugging or in the early phases of modeling (as in the CRISP-DM process). Visual tools like PDP/ALE/Ceteris Paribus will change the way we approach modeling and the way we interact with models, whether as model developers, model auditors or users.

Together with Tomasz Burzykowski from UHasselt we are working on a book about the methodology for visual exploration, explanation and debugging of predictive models.

Find the early version here https://pbiecek.github.io/PM_VEE/.

There are lots of R snippets that show how to use DALEX (and sometimes other packages like shapper, ingredients, iml, iBreakDown, condvis, localModel, pdp) to better understand some aspects of your predictive model.

It's a work in progress, still in an early and rough phase (even though we started a year ago).
Feel free to comment on it or suggest improvements. The easiest way to do this is to add a new issue.

Code snippets work thanks to archivist hooks. I think it's the first book that uses archivist hooks for a blended experience: you can read about a model online and, with just one line of code, download that object into your R console.
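For illustration, here is a minimal sketch of such a hook (the hash below is a made-up placeholder; the real hashes are printed next to the models in the book):

```r
# the archivist package retrieves R objects stored in remote repositories
# install.packages("archivist")
library("archivist")

# one line like this pulls the model discussed in a chapter straight into
# your R session ("pbiecek/models/abc123" is a hypothetical placeholder;
# use the hash printed in the book)
model <- aread("pbiecek/models/abc123")
```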

The first chapters show how to use Ceteris Paribus Profiles / Individual Conditional Expectations to perform what-if/sensitivity analysis of a model.
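To give a flavour of those chapters, here is a minimal what-if sketch with the ingredients package (the apartments data shipped with DALEX is used here just for brevity; the book itself works mostly with the Titanic data):

```r
library("DALEX")
library("ingredients")
library("randomForest")

# a simple regression model for apartment prices (data shipped with DALEX)
model_apartments <- randomForest(m2.price ~ construction.year + surface +
                                   floor + no.rooms + district,
                                 data = apartments)

# wrap the model into a DALEX explainer
explainer_apartments <- explain(model_apartments,
                                data = apartmentsTest,
                                y = apartmentsTest$m2.price)

# Ceteris Paribus / ICE profiles: how the prediction for one apartment
# changes along selected variables, everything else held constant
cp <- ceteris_paribus(explainer_apartments, apartments[1, ])
plot(cp, variables = c("surface", "construction.year"))
```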

DALEX for keras and parsnip

DALEX is a set of tools for explanation, exploration and debugging of predictive models. The nice thing about it is that it can be easily connected to different model factories.

Recently Michal Maj wrote a nice vignette on how to use DALEX with models created in keras (an open-source neural-network library in Python with an R interface created by RStudio). Find the vignette here.
Michal compared a keras model against a deep learning model from the h2o package, so you can check which model won on the Titanic dataset.

The next nice vignette was created by Szymon Maksymiuk. In it Szymon shows how to use DALEX with parsnip models (parsnip is part of the tidymodels ecosystem, created by Max Kuhn and Davis Vaughan). Models like boost_tree, mlp and svm_rbf compete on the Titanic data.

These two new vignettes add to our collection of examples on how to use DALEX with mlr, caret, h2o and other model factories.
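As a taste of how such a connection looks, here is a minimal hedged sketch of wrapping a parsnip model in a DALEX explainer (a simple regression on the apartments data is used instead of the Titanic classification from the vignette, just to show the pattern; the custom predict function is needed because parsnip returns tibbles):

```r
library("DALEX")
library("parsnip")

# fit a linear regression through the parsnip interface
model_lm <- linear_reg()
model_lm <- set_engine(model_lm, "lm")
model_lm <- fit(model_lm,
                m2.price ~ construction.year + surface + floor + no.rooms,
                data = apartments)

# parsnip's predict() returns a tibble, so the predict function
# passed to DALEX unwraps the .pred column
explainer_lm <- explain(model_lm,
                        data = apartmentsTest,
                        y = apartmentsTest$m2.price,
                        predict_function = function(m, d) predict(m, d)$.pred,
                        label = "parsnip linear_reg")

# from here on any DALEX explainer works, e.g. model performance
model_performance(explainer_lm)
```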

iBreakDown: faster, prettier and more precise explanations for predictive models (with interactions)

LIME and SHAP are two very popular methods for instance-level explanations of machine learning models (XAI).
They work nicely for image and text inputs, but share a similar weakness in the case of tabular data: the explanations are additive while complex models (sometimes) are not. iBreakDown addresses this problem.

iBreakDown is a successor of the breakDown package. Yesterday it arrived on CRAN. Its key new features are listed below (a short usage sketch follows the list):

– It identifies and shows feature interactions (if there are local interactions in the model).
– It is much faster. For additive explanations the complexity is O(p) instead of O(p^2).
– The plotD3 function creates an interactive D3-based break-down plot (thanks to r2d3).
– iBreakDown has a new design, created by Hanna Dyrcz. We will give a talk about it, "Machine learning meets design. Design meets machine learning", at satRdays. Try the new theme_drwhy() theme!
– It shows explanation-level uncertainty – how good are the explanations?
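A minimal usage sketch (assuming the titanic_imputed data shipped with recent DALEX versions; column names are taken from that dataset):

```r
library("DALEX")
library("iBreakDown")
library("randomForest")

# a classification model for the Titanic data shipped with DALEX
titanic_small <- titanic_imputed[, c("gender", "age", "class", "fare", "survived")]
titanic_small$survived <- factor(titanic_small$survived)
model_rf <- randomForest(survived ~ ., data = titanic_small)

explainer_rf <- explain(model_rf,
                        data = titanic_small[, 1:4],
                        y = titanic_imputed$survived)

passenger <- titanic_small[1, 1:4]

# additive attributions (fast, O(p)) and their uncertainty
bd <- break_down(explainer_rf, passenger)
plot(bd)
plot(break_down_uncertainty(explainer_rf, passenger))

# attributions that include local interactions (more expensive, O(p^2))
plot(local_interactions(explainer_rf, passenger))

# interactive D3-based version of the break-down plot (thanks to r2d3)
plotD3(bd)
```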

The methodology behind this package is described in the paper iBreakDown: Uncertainty of Model Explanations for Non-additive Predictive Models.

A nice titanic-powered use-case is described in the titanic vignette.

An example of the D3 interactive explainer is here.

Some intuitions are introduced in the book Visual Exploration, Explanation and Debugging (working version, still in progress).

iBreakDown is a part of the DrWhy.AI family of explainers consistent with DALEX.

Let us know if you like it. Feel free to create a pull request with new features, open an issue with a new idea, or star the GitHub repository if you like this package.

The bank will have to explain… or, on explainable predictive models

What are explainable predictive models?

Interpretable Machine Learning (IML) and eXplainable Artificial Intelligence (XAI) form a relatively new, and recently very rapidly growing, branch of machine learning.

In short, the point is to build models for which a human can understand where the model's decisions come from. Complex models such as random forests or deep neural networks are fine, as long as we are able to explain in some way what influenced a particular decision of the model.

What for?

In recent years machine learning has often been practiced "Kaggle style". The only criterion for evaluating a model was its performance on some fixed test set. Framed this way, the task often turns into a pointless squeeze for the last 0.00001% of accuracy on the test set.

Such over-tuned models usually fail spectacularly when they collide with reality. In my talks I like to mention examples such as Google Flu, Watson for Oncology, Amazon's CV screening, COMPAS and recidivism, or the examples from the book "Weapons of Math Destruction". But the list is much longer.

Why is this so important?

In February the Panoptykon foundation wrote "Koniec z „czarną skrzynką" przy udzielaniu kredytów" (No more "black box" in granting loans). Last Thursday (March 21) Bankier published an interesting article, "Bank będzie musiał wyjaśnić, dlaczego odmówił kredytu" (The bank will have to explain why it refused a loan), which describes some consequences of the bill passed by the Senate.

An example quote:
"The act also introduces, among other things, a provision requiring banks to present the client with an explanation of which personal data influenced the final assessment of creditworthiness. This obligation will apply both to situations in which the decision was made in a fully automated process, based on so-called algorithms, and to situations in which a human was also involved in making the decision."

So it looks like we will soon encounter explainable machine learning at bank counters whenever credit decisions are made.

Not only banks

It turns out that explainability was discussed that Thursday not only in the Senate. That day I happened to be at a very interesting conference, the Polish Business Analytics Summit, where Dr Andrey Sharapov talked about how Lidl uses XAI and IML techniques to better support decision making.

Building a model is easy; showing the model's results to the business so that they know how to make better decisions based on them – that is the challenge for XAI. Andrey Sharapov runs an interesting group on LinkedIn where he posts materials about explainable machine learning. You can also find many resources on this list.

The picture below shows an example of using the Break Down technique (made in the MI2 Data Lab!!!) to support decisions about marketing campaigns.

Warsaw for the third time

It is hard to believe the coincidence, but on the very same day (yes, I am still writing about March 21), at the Warsaw R Enthusiasts meetup (Spotkania Entuzjastów R), Professor Marko Robnik discussed various permutation-based explainability techniques.

He focused on the EXPLAIN and IME techniques, but LIME and SHAP also came up, and some slides featured our DALEX and live (although we would probably now recommend Mateusz Staniak's newer solution, the localModel package).

By the way, the meetup was recorded, so it should be available on YouTube soon.

Where can I learn more?

Explainable machine learning is a research topic for a large part of the MI2DataLab team. We are developing DrWhy.AI, a platform for automated analysis, exploration and explanation of predictive models.

Soon I will write more about materials and events where you can learn about interesting applications of explainable machine learning in finance, personalized medicine and other interesting areas.

DALEX has a new skin! Learn how it was designed at gdansk2019.satRdays

DALEX is an R package for visual explanation, exploration, diagnostics and debugging of predictive ML models (aka XAI – eXplainable Artificial Intelligence). It has a bunch of visual explainers for different aspects of predictive models. Some of them are useful during model development, some for fine-tuning, model diagnostics or model explanations.

Recently Hanna Dyrcz designed a beautiful new theme for these explainers. It's implemented in the DALEX::theme_drwhy() function.
Find some teaser plots below. A nice Interpretable Machine Learning story for the Titanic data is presented here.

Hanna is a very talented designer, so I'm super happy that at the next satRdays @ gdansk2019 we will give a joint talk, "Machine Learning meets Design. Design meets Machine Learning".

The new plots are available in the GitHub version of DALEX 0.2.8 (please star the repository if you like it or use it; this helps to attract new developers). It will get to CRAN soon (I hope).
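A hedged sketch of how the theme can be used (the GitHub path below is the one used at the time of writing and may have moved since; theme_drwhy() behaves like an ordinary ggplot2 theme):

```r
# development version of DALEX with the new theme
# devtools::install_github("pbiecek/DALEX")
library("DALEX")
library("ggplot2")

# theme_drwhy() is a regular ggplot2 theme, so it can be added
# to DALEX plots as well as to any other ggplot2 chart
ggplot(apartments, aes(x = surface, y = m2.price)) +
  geom_point() +
  theme_drwhy()
```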

Instance level explainers, like Break Down or SHAP

Instance level profiles, like Ceteris Paribus or Partial Dependency

Global explainers, like Variable Importance Plots

See you at satRdays!

shapper is on CRAN, it's an R wrapper over the SHAP explainer for black-box models

Written by: Alicja Gosiewska

In applied machine learning there are opinions that we need to choose between interpretability and accuracy. However, in the field of Interpretable Machine Learning there are more and more new ideas for explaining black-box models. One of the best known methods for local explanations is SHapley Additive exPlanations (SHAP).

The SHAP method is used to calculate the influence of each variable on a particular observation. It is based on Shapley values, a technique borrowed from game theory. SHAP was introduced by Scott M. Lundberg and Su-In Lee in the NIPS paper A Unified Approach to Interpreting Model Predictions. Originally it was implemented in the Python library shap.

The R package shapper is a port of the Python library shap. In this post we show the functionalities of shapper. The examples use the titanic_train data set for classification.

While shapper is a port of the Python library shap, there are also pure R implementations of the SHAP method, e.g. iml or shapleyR.

Installation

shapper wraps the Python library, therefore installation requires a bit more effort than the installation of an ordinary R package.

Install the R package shapper

First of all we need to install shapper. This may be the stable release from CRAN

or the development version from GitHub. Both options are sketched below.
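For example:

```r
# stable release from CRAN
install.packages("shapper")

# or the development version from GitHub
# devtools::install_github("ModelOriented/shapper")
```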

Install the Python library shap

Before you run shapper, make sure that you have installed Python.

The Python library shap is required to use shapper. It can be installed either from Python or from R. To install it through R, you can use the install_shap() function from the shapper package.
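For example:

```r
# installs the Python shap library (through reticulate)
shapper::install_shap()

# alternatively, from a shell: pip install shap
```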

If you experience any problems related to the installation of Python libraries or the evaluation of Python code, see the reticulate documentation. shapper accesses Python through reticulate, so the solution to your problem is likely to be found there ;-).

Would you survive the sinking of the RMS Titanic?

The example usage is presented on the titanic_train dataset from the R package titanic. We will predict the Survived status. The other variables used by the model are: Pclass, Sex, Age, SibSp, Parch, Fare and Embarked.

Let’s build a model

Let's see what our chances are, as assessed by a random forest model.
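A sketch of the data preparation and of the model (the exact preprocessing below is an assumption; the original vignette may have handled missing values differently):

```r
library("titanic")
library("randomForest")

# keep the response and the predictors listed above, drop incomplete rows
titanic_df <- titanic_train[, c("Survived", "Pclass", "Sex", "Age",
                                "SibSp", "Parch", "Fare", "Embarked")]
titanic_df <- na.omit(titanic_df)
titanic_df$Survived <- factor(titanic_df$Survived)
titanic_df$Sex      <- factor(titanic_df$Sex)
titanic_df$Embarked <- factor(titanic_df$Embarked)

set.seed(123)
model_rf <- randomForest(Survived ~ ., data = titanic_df)
```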

Prediction to be explained

Let's assume that we want to explain the prediction for a particular observation: a male, 8 years old, traveling 1st class, embarked at C, without parents or siblings on board.

The model prediction for this observation is 0.558 for survival.
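A sketch of this observation (the Fare value is an assumption, as it is not stated in the text):

```r
# an 8-year-old boy travelling 1st class, embarked at C,
# with no parents or siblings on board
new_passenger <- data.frame(
  Pclass   = 1,
  Sex      = factor("male", levels = levels(titanic_df$Sex)),
  Age      = 8,
  SibSp    = 0,
  Parch    = 0,
  Fare     = 72,          # assumed value, not given in the text
  Embarked = factor("C", levels = levels(titanic_df$Embarked))
)

# predicted probability of survival; the exact value depends on the seed
# and preprocessing (0.558 in the original post)
predict(model_rf, new_passenger, type = "prob")
```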

Here shapper starts

To use the shap() function (an alias for individual_variable_effect()) we need four elements:

  • a model,
  • a data set,
  • a function that calculates scores (a predict function),
  • an instance (or instances) to be explained.

The shap() function can be used directly with these four arguments, but for simplicity here we use the DALEX package with its pre-implemented predict functions.

An explainer is an object that wraps a model and its meta-data. The meta-data consist of, at least, the data set used to fit the model and the observations to explain.
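A sketch of such an explainer for the random forest model built above:

```r
library("DALEX")

# an explainer = model + data + true labels
# (the default DALEX predict function for randomForest is used)
exp_rf <- explain(model_rf,
                  data = titanic_df[, -1],
                  y = as.numeric(as.character(titanic_df$Survived)))
```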

And now it's enough to generate SHAP attributions with the explainer for the random forest model.
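For example:

```r
library("shapper")

# SHAP attributions for the new passenger, computed via the Python shap library
ive_rf <- shap(exp_rf, new_observation = new_passenger)
ive_rf
```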

Plotting results

To generate a plot of Shapley values you can simply pass an object of class individual_variable_effect to the plot() function. Since we are interested in the class Survived = 1, we may add an additional parameter that filters only the selected classes.
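For example:

```r
# plot of Shapley attributions for the explained observation
plot(ive_rf)
```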

Labels on the y-axis show the values of the variables for this particular observation. Black arrows show the predictions of the model, in this case the probabilities of each status. The other arrows show the effect of each variable on this prediction. Effects may be positive or negative and they sum up to the value of the prediction.

On this plot we can see that the model predicts that the passenger will survive. Chances are higher due to the young age and the 1st class; only Sex = male decreases the chances of survival for this observation.

More models

It is useful to contrast the predictions of two models. Here we will show how to use shapper for such contrastive explanations.

We will compare randomForest with the svm implemented in the e1071 package.
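A hedged sketch (the explicit predict function extracts class probabilities from e1071's svm output):

```r
library("e1071")

# an SVM with probability estimates, fitted on the same data
model_svm <- svm(Survived ~ ., data = titanic_df, probability = TRUE)

exp_svm <- explain(model_svm,
                   data = titanic_df[, -1],
                   y = as.numeric(as.character(titanic_df$Survived)),
                   predict_function = function(m, d)
                     attr(predict(m, d, probability = TRUE), "probabilities")[, "1"],
                   label = "svm")

ive_svm <- shap(exp_svm, new_observation = new_passenger)
```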

This model predicts a 32.5% chance of survival.

The Shapley values plot may be modified. To show more than one model, you can pass additional individual_variable_effect objects to plot().

To see only the attributions, use the option show_predcited = FALSE.
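A sketch of such a comparison (passing several objects to plot() is the pattern described above):

```r
# both models on a single Shapley plot, for a contrastive explanation
plot(ive_rf, ive_svm)
```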

More

Documentation and more examples are available at https://modeloriented.github.io/shapper/. The stable version of the package is on CRAN and the development version is on GitHub (https://github.com/ModelOriented/shapper). shapper is a part of the DALEX universe.

Break Down: model explanations with interactions and DALEX in the BayArea

The breakDown package explains predictions from black-box models, such as random forests, xgboost, svm or neural networks (it works for lm and glm as well). As a result you get a decomposition of the model prediction that can be attributed to particular variables.

Version 0.3 has a new function, break_down. It identifies pairwise interactions between variables. So if the model is not additive, then instead of seeing the effects of single variables you will see the effects of interactions.
It's easy to use this function. See an example below.
HR is an artificial dataset. The break_down function correctly identifies the interaction between gender and age. Find more examples in the documentation.
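A hedged sketch of such an analysis; the interface below follows the successor package iBreakDown (argument names in breakDown 0.3 may differ slightly), and the HR data shipped with DALEX is assumed:

```r
library("DALEX")
library("iBreakDown")
library("randomForest")

# HR is an artificial dataset shipped with DALEX; status is the outcome
model_hr <- randomForest(status ~ gender + age + hours + evaluation + salary,
                         data = HR)
explainer_hr <- explain(model_hr,
                        data = HR[, colnames(HR) != "status"],
                        predict_function = function(m, d)
                          predict(m, d, type = "prob")[, 1],  # prob. of the first status class
                        label = "randomForest for HR")

# with interactions = TRUE a joint gender:age contribution can be reported
# instead of two separate additive effects
bd_hr <- break_down(explainer_hr, HR[1, colnames(HR) != "status"],
                    interactions = TRUE)
plot(bd_hr)
```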

The figure below shows that a single prediction was decomposed into 4 parts. One of them is related to the interaction between age and gender.

breakDown is a part of the DALEXverse – a collection of tools for visualisation, exploration and explanation of complex machine learning models.

Until the end of September I am visiting UC Davis and UC Berkeley, and I am happy to talk about DALEX explainers, XAI and related stuff.
So, if you want to talk about the interpretability of complex ML models, just let me know.

Yes, it’s part of the DALEX invasion 😉
Thanks to the H2020 project RENOIR.

Ceteris Paribus v0.3 is on CRAN

The ceterisParibus package is a part of the DALEX family of model explainers. Version 0.3 has just reached CRAN. It's equipped with new functions for very flexible visual exploration of black-box models. Its grammar generalizes Partial Dependence Plots, Individual Conditional Expectations and Wangkardu Plots, and gives a lot of flexibility in model comparisons, group comparisons and so on.
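A minimal sketch of the basic workflow (assuming the apartments data shipped with DALEX; see the links below for the full grammar):

```r
library("DALEX")
library("ceterisParibus")
library("randomForest")

model_rf <- randomForest(m2.price ~ construction.year + surface + floor +
                           no.rooms + district,
                         data = apartments)
explainer_rf <- explain(model_rf,
                        data = apartmentsTest,
                        y = apartmentsTest$m2.price)

# Ceteris Paribus profiles for a single apartment
cp_rf <- ceteris_paribus(explainer_rf, apartments[1, ])
plot(cp_rf)
```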

See a 100-second introduction to the ceterisParibus package on YouTube.

Here you will find a one-pager cheat-sheet with selected use cases.

Here is a longer introduction with notation and some theory.

Here is a vignette with examples for regression (housing prices).

And here for multiclass classification (HR models).

It’s a work in progress. Feel free to contribute!

Local Goodness-of-Fit Plots / Wangkardu Explanations – a new DALEX companion

The next DALEX workshop will take place in 4 days at useR!. In the meantime I am working on a new explainer for a single observation.
Something like a diagnostic plot for a single observation. Something that extends Ceteris Paribus Plots. Something similar to Individual Conditional Expectation (ICE) plots. An experimental version is implemented in the ceterisParibus package.
 
Intro

For a single observation, Ceteris Paribus Plots (what-if plots) show how the model's predictions change along a single variable. But they do not tell us whether the model is well fitted around this observation.

Here is an idea how to fix this (a minimal sketch follows the list):
(1) Take N points from the validation dataset that are closest to the selected observation (Gower distance is used by default).
(2) Plot N Ceteris Paribus Plots for these points.
(3) Since we know the true y for these points, we can also plot the model residuals at these points.
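A hedged sketch of this idea with the experimental ceterisParibus functions (function and argument names such as select_neighbours() and show_residuals follow the package documentation as I recall it; treat them as assumptions and check the package help):

```r
library("DALEX")
library("ceterisParibus")
library("randomForest")

model_rf <- randomForest(m2.price ~ construction.year + surface + floor +
                           no.rooms + district,
                         data = apartments)
explainer_rf <- explain(model_rf,
                        data = apartmentsTest,
                        y = apartmentsTest$m2.price)

# the observation of interest
apartment <- apartmentsTest[1, ]

# (1) pick the N validation points closest to the observation of interest
neighbours <- select_neighbours(apartmentsTest, apartment, n = 18)

# (2) Ceteris Paribus profiles for these neighbours,
#     keeping their true y so that residuals can be shown
cp_neighbours <- ceteris_paribus(explainer_rf, neighbours,
                                 y = neighbours$m2.price)

# (3) profiles plus residuals around the observation of interest
plot(cp_neighbours, selected_variables = "surface",
     show_profiles = TRUE, show_observations = TRUE,
     show_residuals = TRUE, color_residuals = "red")
```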
 
Examples

Here we have an example for a random forest model. The validation dataset has 9000 observations. We use N=18 observations closest to the observation of interest to show the model stability and the local goodness-of-fit.



The empty circle in the middle stands for the observation of interest. We may read its surface (OX axis, around 85 square meters) and the model prediction (OY axis, around 3260 EUR).
The thick line stands for the Ceteris Paribus Plot of the observation of interest.
Grey points stand for the 18 closest observations from the validation dataset, while grey lines are their Ceteris Paribus Plots.
Red and blue lines stand for residuals of these neighbours. The corresponding true values of y are marked with red and blue circles.

The red and blue intervals are short and symmetric, so one may say that the model is well fitted around the observation of interest.