Ceteris Paribus v0.3 is on CRAN

The Ceteris Paribus package is part of the DALEX family of model explainers. Version 0.3 has just reached CRAN. It is equipped with new functions for very flexible visual exploration of black-box models. Its grammar generalizes Partial Dependence Plots, Individual Conditional Expectations and Wangkardu Plots, and gives a lot of flexibility in comparisons of models, groups, and so on.

See a 100-second introduction to the ceterisParibus package on YouTube.

Here you will find a one-pager cheat-sheet with selected use cases.

Here is a longer introduction with notation and some theory.

Here is a vignette with examples for regression (housing prices).

And here for multiclass classification (HR models).

It’s a work in progress. Feel free to contribute!

No worries! Afterthoughts from UseR 2018

This year the UseR conference took place in Brisbane, Australia. UseR is my favorite conference and this one was my 11th (counting from Dortmund 2008).
Every UseR is unique. Every UseR is great. But my feeling is that European UseRs are (on average) more about math, statistics and methodology, while US UseRs are more about big data, data science, technology and tools.

So, how was the one in Australia? Was it more similar to Europe or US?

IMHO – neither of them.
This one was (for me) about welcoming new users, being open to a diverse community, being open to change, and caring about R culture. Traces of these values were present in most keynotes.

Talking about keynotes: all of them were great, but "Teaching R to New Users" given by Roger Peng was outstanding. I will use the video or the essay as MUST READ material for students in my R programming classes.

The venue, talks and atmosphere were great as well (thanks to the organizing crew led by Di Cook). Lots of people (including myself) spent time around the hex wall looking for their favorite packages (here you will read more about it). There was an engaging team exercise during the conference dinner (how much does your table know about R?). The poster session was handled on TV screens, so some posters were interactive (Miles McBain had a poster related to R and Virtual Reality – cool).

Last but not least, there was a great mixture of contributed talks and workshops. Everyone could find something for themselves, and all too often it was hard to choose between a few tempting options (fortunately, talks were recorded).
Here I would like to mention three talks I found inspiring.

"The Minard Paradox" given by Paul Murrell was refreshing.
One may think that nowadays we are so good at data visualization, with all these shiny tools and interactive widgets. Yet Paul showed how hard it is to reproduce great works like Minard's Map even in cutting-edge software (i.e. R). God is in the detail. Watch Paul's talk here.

"Data Preprocessing using Recipes" given by Max Kuhn touched an important, yet often neglected, truth: columns in the source data are not necessarily the final features. Between 'read the data' and 'fit the model' there is an important process of feature engineering. This process needs to be reproducible and needs to be based on a well-planned grammar. The recipes package helps here. Find the recipes talk here (the tutorial is also recorded).
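As a rough sketch of that grammar (a minimal example assuming the classic recipes API; the iris data is only a stand-in for a real dataset):

```r
library(recipes)

# A recipe declares the preprocessing steps, separately from fitting any model
rec <- recipe(Sepal.Length ~ ., data = iris) %>%
  step_center(all_numeric(), -all_outcomes()) %>%  # center numeric predictors
  step_scale(all_numeric(), -all_outcomes()) %>%   # scale numeric predictors
  step_dummy(all_nominal())                        # dummy-encode factor columns

# prep() estimates the required statistics on the training data,
# bake() applies exactly the same transformations to any new data
prepped  <- prep(rec, training = iris)
features <- bake(prepped, new_data = iris)
```

The same prepped recipe can then be baked on a test set, which is what makes the preprocessing reproducible.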

"Glue strings to data in R" given by James Hester showed a package that does only one thing (glue strings) but does it extremely well. I had not expected 20 minutes of absorbing talk focused only on gluing strings. Yet, this is my third favourite. Watch it here.
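For the flavour of it, gluing in glue looks roughly like this (the values below are made up):

```r
library(glue)

# glue() interpolates any R expression wrapped in {} into the string
speaker <- "James Hester"
minutes <- 20
glue("{speaker} talked for {minutes} minutes, i.e. {minutes * 60} seconds.")
```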

David Smith shared his highlights here. You will find quite a collection of links there.

Videos of the recorded talks, keynotes and tutorials are on the R Consortium YouTube channel.

Local Goodness-of-Fit Plots / Wangkardu Explanations – a new DALEX companion

The next DALEX workshop will take place in 4 days at UseR. In the meantime I am working on a new explainer for a single observation.
Something like a diagnostic plot for a single observation. Something that extends Ceteris Paribus Plots. Something similar to Individual Conditional Expectation (ICE) Plots. An experimental version is implemented in the ceterisParibus package.

For a single observation, Ceteris Paribus Plots (What-If plots) show how a model's predictions change along a single variable. But they do not tell us whether the model is well fitted around this observation.

Here is an idea how to fix this:
(1) Take the N points from the validation dataset that are closest to a selected observation (the Gower distance is used by default).
(2) Plot N Ceteris Paribus Plots for these points.
(3) Since we know the true y for these points, we can also plot the model residuals at these points.
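Step (1) can be sketched with the gower package (the ceterisParibus package wraps this for you; here mtcars is just a stand-in validation dataset):

```r
library(gower)

validation <- mtcars    # stand-in for a real validation dataset
obs <- validation[1, ]  # the observation of interest

# gower_topn() returns, for each row of obs, the indices of its n
# nearest neighbours in the validation data under the Gower distance
nn <- gower_topn(x = obs, y = validation, n = 5)
neighbours <- validation[nn$index[, 1], ]
```

Ceteris Paribus profiles and residuals are then computed for these `neighbours` rows.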

Here we have an example for a random forest model. The validation dataset has 9000 observations. We use N=18 observations closest to the observation of interest to show the model stability and the local goodness-of-fit.


The empty circle in the middle stands for the observation of interest. We can read off its surface (x-axis, around 85 square meters) and the model prediction (y-axis, around 3260 EUR).
The thick line is the Ceteris Paribus Plot for the observation of interest.
Grey points stand for the 18 closest observations from the validation dataset, while grey lines are their Ceteris Paribus Plots.
Red and blue lines stand for the residuals of these neighbours. The corresponding true values of y are marked with red and blue circles.

The red and blue intervals are short and symmetric, so one may say that the model is well fitted around the observation of interest.

modelDown: a website generator for your predictive models

I love the pkgdown package. With a single line of code you can create a complete website with examples, vignettes and documentation for your package. Brilliant!

So what about a website generator for predictive models?
Imagine that you can take a set of predictive models (generated with caret, mlr, glm, xgboost or randomForest – anything) and automagically generate a website with exploration/documentation for these models. A documentation with archivist hooks to the models, with tables and graphs for model performance explainers, conditional model response explainers or explainers for particular predictions.

During the summer semester three students from the Warsaw University of Technology (Kamil Romaszko, Magda Tatarynowicz, Mateusz Urbański) developed the modelDown package for R as a team project assignment. You can find the package here. Visit an example website created with this package for four example models (instructions). And read more about this package at its pkgdown website or below.

BTW: If you want to learn more about model explainers, please come to our DALEX workshops at WhyR? 2018 conference in Wroclaw or UseR! 2018 conference in Brisbane.

Getting started with modelDown
by Kamil Romaszko, Magda Tatarynowicz, Mateusz Urbański


Did you ever want to have one place where you can find information explaining your model? Or maybe you were missing a tool that can show differences between multiple models for the same dataset? Well, here comes the modelDown package. Using the DALEX package, it creates one HTML page with plots and information related to the model(s) you want to analyze.

If you want to see an example website generated with modelDown, check out this link (along with the script that was used to create the HTML). Read on to see how to use the package for your own models and what features it provides.

The examples presented here were generated for the HR_data dataset from the breakDown package (available on CRAN). The dataset contains various information about employees (for example, their satisfaction with work or their salary). What we predict is whether they left the company.

First things first – how can you use this package? Install it from GitHub:
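A sketch of the installation step (the GitHub repository path below is my assumption – check the package page for the canonical one):

```r
# install the development version from GitHub
# install.packages("devtools")
devtools::install_github("MI2DataLab/modelDown")
library(modelDown)
```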


From an academic teacher's diary: Challenge-Based Learning

Challenge-Based Learning is a teaching technique based on confronting participants (students, pupils) with an interesting, contemporary, real-world problem to solve.
To solve such a problem, participants have to do literature research, understand the problem, and design a solution – preferably a working one. This technique is used more and more often in secondary and primary schools open to new forms of teaching. I recently talked with the creator of koderek (computer-science and new-technology activities for kids) about STEM education for children and teenagers. The Challenge-Based Learning thread came up constantly.

And what can this look like at a university?
For some time now (oh my, has it really been 10 years?) I have been testing various educational techniques in my classes. This time I tried out an idea modeled on Challenge-Based Learning. Below I describe the idea itself along with my observations from running the classes.

Designed to fail

How to show in class the communication challenges that arise when many people build a common solution?
In the summer semester I taught Advanced Programming and Data Analysis in R at MiNI PW. As the second project, the students built an intelligent assistant: an R package that helps a data analyst by carrying out the harder/more tedious tasks (once it learns how to load the data, it will not keep asking the analyst for the same parameters over and over).

Each of the 14 students (the luxury of working with small groups) was assigned one feature to implement. Together, these features were supposed to combine into one package – one assistant supporting the analyst's work.
Still, each student takes care of the one feature assigned to them: load the data, preprocess the data, build a predictive model, generate a report, save a chart, restore a session, and so on.
The project grade depends partly on that one feature and partly on how consistent the solution is with the rest of the package.
So although everyone takes care of their own part, it pays off for everyone that the whole thing works.
And, as we know, the whole is more than the sum of its parts.
The assistant is called Hugo. If you want to get to know him better, have a look at https://github.com/hugo4r/hugo.


Not only LIME

I've heard about a number of consulting companies that decided to use a simple linear model instead of a black-box model with higher performance, because "the client wants to understand the factors that drive the prediction".
And usually the discussion goes as follows: "We have tried LIME for our black-box model; it is great, but it is not working in our case." "Have you tried other explainers?" "What other explainers?"

So here you have a map of different visual explanations for black-box models. Choose one in (on average) less than three simple steps.

These are available in the DALEX package. Feel free to propose other visual explainers that should be added to this map (and the package).

Ceteris Paribus Plots – a new DALEX companion

If you like magical incantations in Data Science, please welcome the Ceteris Paribus Plots. Otherwise feel free to call them What-If Plots.

Ceteris Paribus (Latin for 'all else unchanged') Plots explain complex Machine Learning models around a single observation. They supplement tools like breakDown, Shapley values, LIME or LIVE. In addition to feature importance/feature attribution, we can now see how the model response changes along a specific variable while all other variables are kept unchanged.

How do cancer risk scores change with age? How do credit scores change with salary? How do insurance costs change with age?

Well, use the ceterisParibus package to generate plots like the one below.
Here we have an explanation for a random forest model that predicts apartment prices. The presented profiles are prepared for a single observation, marked with dashed lines (a 130 m2 apartment on the 3rd floor). From these profiles one can read how the model response is linked with particular variables.

Instead of the original values on the x-axis, one can plot quantiles. This way one can put all variables in a single plot.

And once all variables are on the same scale, one can compare two or more models.

Yes, they are model agnostic and will work for any model!
Yes, they can be interactive (see plot_interactive function or examples below)!
And yes, you can use them with other DALEX explainers!
More examples with R code.

RODO + DALEX: a few words about my talk at DSS

Next Friday (June 8) the Data Science Summit conference will take place at the MiNI PW faculty.
In room 107, between 10:50 and 11:20, I am giving a talk, Explain! How to build explainable ML/AI models and what does this have to do with RODO?, to which I warmly invite you.

I plan to talk about a topic that draws me in more and more: explainable AI (XAI). How does it relate to RODO (the Polish name for GDPR), and what is behind the rumours about a "right to explanation"?

It will be a technical talk (sorry, no pictures of cats or dogs – maybe some pictures of robots). I will show how to construct and use breakDown plots (and explain why they are better than LIME or Shapley values); there will also be a word about our team's latest result, the What-If plots.

If you are interested in the topic but not planning to attend the conference, I invite you to read the DALEX documentation.

Btw: a hackathon, "Conquer urban data", organized by Dr Marcin Luckner, is planned at the DSS conference. The hackathon uses data from the Warsaw API. It is worth checking out.

ML models: what can't they learn?

What I love about conferences are the people that come up after your talk and say: it would be cool to add XYZ to your package/method/theorem.

After eRum (a great conference, by the way) I was lucky to hear from Tal Galili: it would be cool to use DALEX for teaching, to show how different ML models learn relations.

Cool idea. So let's see what can and what cannot be learned by the most popular ML models. Here we will compare random forests, linear models and SVMs.
Find the full example here. We simulate variables from the uniform U[0,1] distribution and calculate y from the following equation

In all the figures below we compare PDP model responses against the true relation between the variable x and the target variable y (in pink). All these plots are created with the DALEX package.
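Under the hood, a partial dependence curve is just the model's predictions averaged over the data while one variable is varied. A minimal sketch with the pdp package (which DALEX builds on), simulating the quadratic case; dataset and model below are illustrative:

```r
library(pdp)

# simulate x ~ U[0,1] and a quadratic relation y = x^2
set.seed(1313)
simulated <- data.frame(x = runif(500))
simulated$y <- simulated$x^2

model_lm <- lm(y ~ x, data = simulated)

# partial() varies x over a grid and averages the model predictions
pd <- partial(model_lm, pred.var = "x", train = simulated)
plot(pd$x, pd$yhat, type = "l")  # the PDP curve for x
```

For the linear model this curve is a straight line, which is exactly why it fails on the quadratic relation without feature engineering.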

For x1 we can check how different models deal with a quadratic relation. The linear model fails without prior feature engineering, the random forest guesses the shape, but the best fit is found by the SVM.

With sine-like oscillations the story is different. SVMs are not that flexible, while the random forest gets much closer.

It turns out that monotonic relations are not easy for these models. The random forest is close, but even here we cannot guarantee monotonicity.

The linear model is the best one when it comes to a truly linear relation. But the other models are not that far behind.

The abs(x) function is not an easy case for any of these models.

Find the R codes here.

Of course, the behavior of all these models depends on the number of observations, the noise-to-signal ratio, the correlation among variables, and interactions.
Yet it may be educational to use PDP curves to see how different models learn relations – what they can grasp easily and what they cannot.

DALEX @ eRum 2018

The DALEX invasion has started with a workshop and talk @ eRum 2018.

Find the workshop materials at DALEX: Descriptive mAchine Learning EXplanations. Tools for exploration, validation and explanation of complex machine learning models (thanks to Mateusz Staniak for running the second part of the workshop).

And my presentation Show me your model 2.0! (thanks go to the whole MI2DataLab for contributions and to Malgorzata Pawlak for great stickers).