modelDown is now on CRAN!


The modelDown package turns classification or regression models into static HTML websites.
With one command you can convert one or more models into a website with visual and tabular model summaries: model performance, feature importance, single-feature response profiles and basic model audits.

modelDown uses DALEX explainers, so it is model agnostic (feel free to combine a random forest with a glm), easy to extend and easy to parameterise.
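A minimal sketch of this workflow, assuming the titanic_imputed dataset shipped with DALEX (the models and labels here are only illustrative, not the ones from the example website linked below):

library(DALEX)
library(modelDown)
library(randomForest)

# two different model families for the same task
model_rf  <- randomForest(factor(survived) ~ ., data = titanic_imputed)
model_glm <- glm(survived ~ ., data = titanic_imputed, family = "binomial")

# wrap both models in model-agnostic DALEX explainers
explainer_rf  <- explain(model_rf,  data = titanic_imputed[, -8],
                         y = titanic_imputed$survived, label = "rf")
explainer_glm <- explain(model_glm, data = titanic_imputed[, -8],
                         y = titanic_imputed$survived, label = "glm")

# one command: generate a static HTML website summarising both models
modelDown(explainer_rf, explainer_glm)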

Here you can browse an example website automatically created for 4 classification models (random forest, gradient boosting, support vector machines, k-nearest neighbours). The R code behind this example is here.

Fun facts:

– archivist hooks are generated for every documented object, so you can easily extract R objects from the HTML website. Try

archivist::aread("MI2DataLab/modelDown_example/docs/repository/574defd6a96ecf7e5a4026699971b1d7")

– session info is automatically recorded, so you can check the versions of packages available at model development (https://github.com/MI2DataLab/modelDown_example/blob/master/docs/session_info/session_info.txt)

– This package was initially created by Magda Tatarynowicz, Kamil Romaszko and Mateusz Urbański from Warsaw University of Technology as a student project.

MI2 @ Data Science Summit (x5) – just a week away


In just a week, the Data Science Summit conference will take place at the MiNI faculty of Warsaw University of Technology.

It is hard to believe that this is only the third edition. Year after year it grows at a dizzying pace, attracting interesting speakers and participants from Poland and abroad. Today it is one of the largest Data Science conferences in the region.

The DSS programme committee had quite a task choosing, from over 160 submissions, the ones that will captivate the conference participants (and a record number of them is expected). The submitted topics are very interesting and diverse (full programme). I am particularly pleased by the broad representation of colleagues from MI2 DataLab at this conference.
You can find us at these presentations:

In the NLP block, 11:00–11:30, Barbara Rychalska and Anna Wróblewska will talk about WildNLP, a framework for analysing the sensitivity of NLP models to deliberate attacks and random perturbations (more about the project in this repo).

In the Computer Vision block, 11:40–12:10, Anna Wróblewska and students from the Team Project course will talk about ChaTa (Charts and Tables), a fantastic project that supports automatic extraction and analysis of charts and tables in reports.

On the Main Stage, 14:30–15:00, Przemyslaw Biecek (that is, me 😉) will talk about explainable machine learning. It is a super-hot topic in the AI/ML world. Our flagship project DrWhy.AI will of course make an appearance, along with plenty of interesting news from the IML/XAI world.

In the Future of Data Science: Healthcare block, 15:50–16:20, Adam Dobrakowski will talk about the results of an ongoing project on the segmentation of medical visits. How can AI support our healthcare system? Come and see!

In the Customer Analytics block, 14:30–15:00, Marcin Kosiński (our alumnus, now at Gradient) will talk about segmentation with NMF.

Between the talks you can find our DataLab in room 44 of the MiNI building (the same building as the talks). Drop by to chat about the projects mentioned above and other ongoing ones (XAI, AutoML, AutoEDA, IML, NLP, AI in medicine and more). If you do not know how to strike up a conversation, you can always start with "I heard you have great coffee…". We will not say no!

Btw, we are looking for a PhD student to join the team, so maybe…

Matematyka w komiksie, komiks w matematyce – only a week left for your submissions!


Until June 12 you can submit inventive comics about mathematics, computer science or data analysis to the "Matematyka w komiksie, komiks w matematyce" ("Mathematics in comics, comics in mathematics") competition.

You can submit comics ranging in length from a single panel or a single strip up to one A4 page.
The best comics will make it onto the cover of Delta and/or receive material prizes.

Does mathematics excite you?
Have an idea for how to show it in a comic?
Submit your proposal to this competition.

More information on the competition page: https://dpm.mini.pw.edu.pl/node/710.

xaibot – conversations with predictive models!


If you could talk to a predictive machine learning model, what would you ask for?

Try it! Michał Kuźba is developing a mind-blowing project – an XAI chatbot: a dialogue-based system that helps to explore and understand predictive models through natural-language conversations (type, speak or phone the model 😉).

For example, imagine that you have a random forest model that predicts survival on the Titanic data. With xai-bot you can chat about your chances of survival, the variables that influence survival, the options you have to increase your odds – or just chat about life models.


The chatbot is based on Google's Dialogflow infrastructure. It communicates with DALEX explainers written in R through a plumber REST API.
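The general idea, as a minimal sketch (the endpoint and model below are hypothetical, not Michał's actual service): a plumber API wraps a DALEX explainer so that a Dialogflow agent can query it over HTTP.

# plumber.R – a hypothetical endpoint exposing a DALEX explainer
library(DALEX)
library(randomForest)

model <- randomForest(factor(survived) ~ ., data = titanic_imputed)
explainer <- explain(model, data = titanic_imputed[, -8],
                     y = titanic_imputed$survived, label = "rf")

#* Predicted survival probability for a passenger of a given age
#* @param age passenger age
#* @get /predict
function(age = 25) {
  passenger <- titanic_imputed[1, ]   # template observation
  passenger$age <- as.numeric(age)
  predict(explainer, passenger)
}

# run with: plumber::plumb("plumber.R")$run(port = 8000)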

Find the chatbot here: https://kmichael08.github.io.

The project is under development, but the bot is already pretty smart.

So, have fun!

How to design a model visualisation @ Gdansk satRdays


I had an amazing weekend in Gdansk thanks to the satRday conference organized by Olgun Aydin, Ania Rybinska and Michal Maj.

Together with Hanna Piotrowska we gave a talk, "Machine learning meets design. Design meets machine learning". Hanna redesigned the DALEX visualisations (DALEX is a set of tools for visual explanation of predictive ML models). During the talk she explained what was changed and why.

See for example the metamorphosis of the Break Down explainer. How many differences can you spot?

Every change (axis, reading order, spacing, colors, descriptions, background, annotations) serves some purpose.

Find our presentation at slideshare.

List of satRday talks (machine learning was quite popular).

Hanna's design is implemented in ggplot2 thanks to Tomasz Mikołajczyk and in D3 thanks to Hubert Baniecki! Find more examples of how to use the new plots here.

Make it explainable!

Most people make the mistake of thinking design is what it looks like… People think it's this veneer — that the designers are handed this box and told, 'Make it look good!' That's not what we think design is. It's not just what it looks like and feels like. Design is how it works.

Steve Jobs, The New York Times, 2003.

The same goes for interpretable machine learning.
Recently I have been talking a lot about interpretability and explainability. And sometimes I get the impression that techniques like SHAP, Break Down, LIME or SAFE are treated like magical incantations that convert complex predictive models into "something interpretable".

But interpretability/explainability is not a binary feature that you either have or do not have. It is a process. The goal is to increase our understanding of the model's behavior. Try different techniques to broaden your knowledge about the model or about its predictions.
Maybe you will never explain 100%, but you will understand more.

XAI/IML (eXplainable Artificial Intelligence / Interpretable Machine Learning) techniques can be used not only for post-hoc explainability, but also for model maintenance, debugging, or in the early phases of the modeling process. Visual tools like PDP/ALE/Ceteris Paribus will change how we approach modeling and how we interact with models – as model developers, model auditors or users.

Together with Tomasz Burzykowski from UHasselt we are working on a book about the methodology of visual exploration, explanation and debugging of predictive models.

Find the early version here https://pbiecek.github.io/PM_VEE/.

There are lots of R snippets that show how to use DALEX (and sometimes other packages like shapper, ingredients, iml, iBreakDown, condvis, localModel, pdp) to better understand various aspects of a predictive model.

It's a work in progress, still in an early, rough phase (despite the fact that we started a year ago).
Feel free to comment on it or suggest improvements. The easiest way to do this is to add a new issue.

Code snippets work fully thanks to archivist hooks. I think it is the first book that uses archivist hooks for a blended experience. You can read about a model online and, with just one line of code, download that object into your R console.

The first chapters show how to use Ceteris Paribus Profiles / Individual Conditional Expectations to perform what-if/sensitivity analysis of a model.
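For example, a minimal sketch with the ingredients package and DALEX's titanic_imputed data (illustrative, not a snippet lifted from the book):

library(DALEX)
library(ingredients)
library(randomForest)

model <- randomForest(factor(survived) ~ ., data = titanic_imputed)
explainer <- explain(model, data = titanic_imputed[, -8],
                     y = titanic_imputed$survived)

# what-if analysis for a single passenger:
# how would the prediction change if age were different?
passenger <- titanic_imputed[1, ]
cp <- ceteris_paribus(explainer, passenger, variables = "age")
plot(cp)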

DALEX for keras and parsnip

DALEX is a set of tools for explanation, exploration and debugging of predictive models. The nice thing about it is that it can be easily connected to different model factories.

Recently Michal Maj wrote a nice vignette on how to use DALEX with models created in keras (an open-source neural network library for Python, with an R interface created by RStudio). Find the vignette here.
Michal compares a keras model against a deep learning model from the h2o package, so you can check which model wins on the Titanic dataset.

Another nice vignette was created by Szymon Maksymiuk. In this vignette Szymon shows how to use DALEX with parsnip models (parsnip is part of the tidymodels ecosystem, created by Max Kuhn and Davis Vaughan). Models like boost_tree, mlp and svm_rbf compete on the Titanic data. The general pattern is sketched below.
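The pattern is the same for any model factory: fit a model, then wrap it in an explainer with a suitable predict function. A minimal sketch for parsnip (illustrative, not the code from Szymon's vignette; the same idea applies to keras models):

library(parsnip)
library(DALEX)

# a parsnip gradient boosting model on the Titanic data shipped with DALEX
model <- boost_tree() %>%
  set_engine("xgboost") %>%
  set_mode("classification") %>%
  fit(factor(survived) ~ ., data = titanic_imputed)

# the predict function extracts the probability of survival
explainer <- explain(
  model,
  data = titanic_imputed[, -8],
  y = titanic_imputed$survived,
  predict_function = function(m, x) predict(m, x, type = "prob")$.pred_1,
  label = "parsnip boost_tree"
)

model_performance(explainer)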

These two new vignettes join our collection of examples showing how to use DALEX with mlr, caret, h2o and other model factories.

Explore the landscape of R packages for automated data exploration

Do you spend a lot of time on data exploration? If yes, then you will like today’s post about AutoEDA written by Mateusz Staniak.

If you ever dreamt of automating the first, laborious part of data analysis – when you get to know the variables, print descriptive statistics, and draw lots of histograms and scatter plots – you are not the only one. It turns out that a lot of R developers and users have thought of the same thing. There are over a dozen R packages for automated Exploratory Data Analysis, and interest in them is growing quickly. Just look at this plot of the number of downloads from the official CRAN repository.

Replicate this plot with a few lines of R.
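For example, a minimal sketch using the cranlogs package (the selected packages and date range are illustrative, not the exact ones from the plot):

library(cranlogs)
library(dplyr)
library(ggplot2)

# a few of the autoEDA packages discussed in the paper
pkgs <- c("DataExplorer", "dataMaid", "visdat", "summarytools")

cran_downloads(packages = pkgs, from = "2014-01-01", to = "2019-04-01") %>%
  group_by(package) %>%
  arrange(date) %>%
  mutate(total = cumsum(count)) %>%          # cumulative downloads per package
  ggplot(aes(date, total, color = package)) +
  geom_line() +
  labs(x = NULL, y = "cumulative downloads")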

New tools arrive each year with a variety of functionalities: creating summary tables, initial visualization of a dataset, finding invalid values, univariate exploration (descriptive and visual) and searching for bivariate relationships.

We compiled a list of R packages dedicated to automated EDA, describing twelve packages: their capabilities, their strengths and possible extensions. You can read our review paper on arXiv: https://arxiv.org/abs/1904.02101.

Spoiler alert: currently, automated simply means fast. The packages that we describe can perform typical data analysis tasks – like drawing a bar plot for each categorical feature, creating a table of summary statistics, or plotting correlations – with a single command. While this speeds up the work significantly, it can be problematic for high-dimensional data, and it does not take advantage of AI tools for actual automation. There is a lot of potential for intelligent data exploration (or model exploration) tools.

A more extensive list of software (including Python libraries and web applications) and papers is available on Mateusz's GitHub. Researchers can follow our autoEDA project on ResearchGate.

Who thinks a year ahead sows grain (…) and who thinks many, many years ahead educates the young

The teachers' strike begins today. I am rooting for the teachers wholeheartedly – as a parent of school-age children, as an academic teacher, and as an enthusiast of educating children and teenagers. I owe a great deal to my own teachers, and fate has brought me into contact with many wonderfully passionate ones.

In a knowledge-based economy, education is the key issue. And there is no good education without positive selection, which only good working conditions can ensure – good both in terms of salaries and in terms of stable curricula, opportunities for growth and proper school equipment.
That is why I support the striking teachers.

Przemysław Biecek

Btw: the chart below, from the KPRM Twitter account, has a Lie Factor exceeding 350%. Still, it is worth increasing the number of mathematics lessons in schools.

iBreakDown: faster, prettier and more precise explanations for predictive models (with interactions)

LIME and SHAP are two very popular methods for instance-level explanations of machine learning models (XAI).
They work nicely for image and text inputs, but share a similar weakness in the case of tabular data: explanations are additive, while complex models (sometimes) are not. iBreakDown addresses this problem.

iBreakDown is a successor of the breakDown package. Yesterday it arrived on CRAN. Key new features are:

– It identifies and shows feature interactions (if there are local interactions in the model).
– It is much faster. For additive explanations the complexity is O(p) instead of O(p^2).
– The plotD3 function creates an interactive D3-based break-down plot (thanks to r2d3); see the sketch after this list.
– iBreakDown has a new design, created by Hanna Dyrcz. We will give a talk about it, "Machine learning meets design. Design meets machine learning", at satRdays. Try the new theme_drwhy() theme!
– It shows explanation-level uncertainty – how good are the explanations?
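A minimal sketch of these features in action, assuming DALEX's titanic_imputed data and a randomForest model (illustrative, not taken from the vignette):

library(DALEX)
library(iBreakDown)
library(randomForest)

model <- randomForest(factor(survived) ~ ., data = titanic_imputed)
explainer <- explain(model, data = titanic_imputed[, -8],
                     y = titanic_imputed$survived)

passenger <- titanic_imputed[1, ]

# local attributions; interactions = TRUE looks for local interactions
bd <- break_down(explainer, passenger, interactions = TRUE)
plot(bd)    # static ggplot2 version
plotD3(bd)  # interactive D3 version (r2d3)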

The methodology behind this package is described in "iBreakDown: Uncertainty of Model Explanations for Non-additive Predictive Models".

A nice Titanic-powered use case is described in the titanic vignette.

An example of the D3 interactive explainer is here.

Some intuition is introduced in Visual Exploration, Explanation and Debugging (a working version, still in progress).

iBreakDown is part of the DrWhy.AI family of explainers, consistent with DALEX.

Let us know if you like it. Feel free to create a pull request with new features, open an issue with a new idea, or star the GitHub repository if you like this package.