From the diary of an academic teacher: Challenge-Based Learning


Challenge-Based Learning is a teaching technique based on confronting participants (students, pupils) with an interesting, contemporary, real-world problem to solve.
To solve such a problem, participants have to do literature research, understand the problem, and design a solution, preferably a working one. This technique is increasingly used in primary and secondary schools that are open to new forms of teaching. I recently talked with the founder of Koderek (IT and new-technology activities for children) about STEM education for children and teenagers. The Challenge-Based Learning thread kept coming up.

And what could this look like at a university?
For some time now (oh my, has it really been 10 years?) I have been testing various educational techniques in my classes. This time I tried out an idea modeled on Challenge-Based Learning. Below I describe the idea itself, along with my observations after running the classes.

Designed to fail

How can one show in class the communication challenges that arise when many people build a shared solution?
In the summer semester I taught Advanced Programming and Data Analysis in R at MiNI PW. As their second project, the students built an intelligent assistant: an R package that supports a data analyst by taking over the harder or more tedious tasks (once it learns how to load the data, it will not keep asking the analyst for the same parameters over and over).

Each of the 14 students (the luxury of working with small groups) was assigned one feature to implement. Together, these features should combine into a single package: one assistant supporting the analyst's work.
Still, each student takes care of one assigned feature: load the data, preprocess the data, build a predictive model, generate a report, save a plot, restore a session, and so on.
The project grade depends partly on that one feature and partly on how consistent the solution is with the rest of the package.
So although everyone takes care of their own part, it pays off for everyone to keep the whole thing working.
And, as we know, the whole is more than the sum of its parts.
The assistant is called Hugo. If you would like to get to know him better, have a look at https://github.com/hugo4r/hugo.


Not only LIME

I've heard about a number of consulting companies that decided to use a simple linear model instead of a better-performing black-box model, because "the client wants to understand the factors that drive the prediction".
And usually the discussion goes as follows: "We have tried LIME for our black-box model; it is great, but it does not work in our case." "Have you tried other explainers?" "What other explainers?"

So here you have a map of different visual explanations for black-box models. Choose one in, on average, fewer than three simple steps.

These are available in the DALEX package. Feel free to propose other visual explainers that should be added to this map (and the package).
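To give a flavour of how these explainers are called, here is a minimal sketch on the apartments data shipped with DALEX. Function names follow the 2018-era DALEX API; some of them have been renamed in current releases.

# A minimal sketch: fit a model, wrap it in an explainer, try several
# visual explainers (2018-era DALEX function names).
library(DALEX)
library(randomForest)

model_rf <- randomForest(m2.price ~ ., data = apartments)
exp_rf <- explain(model_rf,
                  data = apartments[, -1],   # explanatory variables only
                  y = apartments$m2.price,
                  label = "randomForest")

# global explainers: variable importance and a single-variable response
plot(variable_dropout(exp_rf))
plot(single_variable(exp_rf, variable = "surface", type = "pdp"))

# local explainer: decomposition of a single prediction (breakDown-style)
plot(single_prediction(exp_rf, observation = apartments[1, ]))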

Ceteris Paribus Plots – a new DALEX companion

If you like magical incantations in Data Science, please welcome the Ceteris Paribus Plots. Otherwise feel free to call them What-If Plots.

Ceteris Paribus (Latin for "all else unchanged") Plots explain complex machine learning models around a single observation. They supplement tools like breakDown, Shapley values, LIME, or LIVE. In addition to feature importance/feature attribution, we can now see how the model response changes along a specific variable while all other variables are kept unchanged.

How do cancer risk scores change with age? How do credit scores change with salary? How do insurance costs change with age?

Well, use the ceterisParibus package to generate plots like the one below.
Here we have an explanation for a random forest model that predicts apartment prices. The presented profiles are prepared for a single observation, marked with dashed lines (a 130 m2 apartment on the 3rd floor). From these profiles one can read how the model response is linked with particular variables.
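To reproduce a plot like this one, a minimal sketch could look as follows. It assumes the apartments data shipped with DALEX; argument names follow the CRAN ceterisParibus package and may differ between versions.

library(DALEX)
library(ceterisParibus)
library(randomForest)

# random forest model for the price per square meter
model_rf <- randomForest(m2.price ~ construction.year + surface + floor +
                           no.rooms + district, data = apartments)
exp_rf <- explain(model_rf, data = apartments[, -1], y = apartments$m2.price)

# ceteris paribus / what-if profiles for a single apartment:
# each panel varies one variable while all the others stay fixed
my_apartment <- apartments[1, ]
cp_rf <- ceteris_paribus(exp_rf, my_apartment)
plot(cp_rf)

# interactive version (the plot_interactive() function mentioned below)
# plot_interactive(cp_rf)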

Instead of the original values on the OX axis, one can plot quantiles. This way one can put all the variables in a single plot.

And once all variables are on the same scale, one can compare two or more models.

Yes, they are model agnostic and will work for any model!
Yes, they can be interactive (see the plot_interactive function or the examples below)!
And yes, you can use them with other DALEX explainers!
More examples with R code.

RODO (GDPR) + DALEX: a few words about my talk at DSS


Next Friday (June 8) the Data Science Summit conference takes place at the MiNI PW faculty.
My talk, Wyjaśnij! Jak budować wyjaśnialne modele ML / AI i jak to się ma do RODO? (Explain! How to build explainable ML/AI models, and what does this have to do with GDPR?), takes place in room 107 between 10:50 and 11:20. You are warmly invited.

I plan to talk about a topic that draws me in more and more, namely explainable AI (XAI). How does it relate to GDPR, and what is behind the rumours about a "right to explanation"?

It will be a technical talk (sorry, no pictures of cats or dogs; maybe some pictures of robots). I will show how to construct and use breakDown plots (and explain why they are better than LIME or Shapley values). I will also talk about our team's latest result, the What-If plots.

If you are interested in the topic but do not plan to attend the conference, I invite you to read the DALEX documentation.

Btw: a hackathon, "Conquer urban data", organized by Dr Marcin Luckner, is planned at the DSS conference. It will use data from Warsaw's API. Worth checking out.

ML models: what can't they learn?

What I love about conferences are the people who come up after your talk and say: It would be cool to add XYZ to your package/method/theorem.

After eRum (a great conference, by the way) I was lucky to hear from Tal Galili: It would be cool to use DALEX for teaching, to show how different ML models learn relations.

Cool idea. So let's see what can and what cannot be learned by the most popular ML models. Here we will compare random forests, linear models, and SVMs.
Find the full example here. We simulate variables from the uniform U[0,1] distribution and calculate y from an equation that combines quadratic, sinusoidal, monotonic, linear, and absolute-value components (the exact formula is in the linked example).
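If you would like to replay this kind of comparison, here is a minimal sketch. The formula for y below is a simplified stand-in (just the quadratic part), not the equation from the full example; function names follow the 2018-era DALEX API.

library(DALEX)
library(randomForest)
library(e1071)

set.seed(1313)
n  <- 1000
x1 <- runif(n)
# simplified stand-in signal: a quadratic relation plus noise
y  <- x1^2 + rnorm(n, sd = 0.1)
df <- data.frame(y, x1)

model_lm  <- lm(y ~ x1, data = df)
model_rf  <- randomForest(y ~ x1, data = df)
model_svm <- svm(y ~ x1, data = df)

exp_lm  <- explain(model_lm,  data = df["x1"], y = df$y, label = "lm")
exp_rf  <- explain(model_rf,  data = df["x1"], y = df$y, label = "rf")
exp_svm <- explain(model_svm, data = df["x1"], y = df$y, label = "svm")

# PDP responses for x1, overlaid for all three models
plot(single_variable(exp_lm,  variable = "x1", type = "pdp"),
     single_variable(exp_rf,  variable = "x1", type = "pdp"),
     single_variable(exp_svm, variable = "x1", type = "pdp"))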

In all the figures below we compare the PDP model responses against the true relation between the variable x and the target variable y (in pink). All these plots were created with the DALEX package.

For x1 we can check how different models deal with a quadratic relation. The linear model fails without prior feature engineering, the random forest roughly guesses the shape, but the best fit is found by the SVM.

With sine-like oscillations the story is different. The SVM is not that flexible, while the random forest gets much closer.

It turns out that monotonic relations are not easy for these models either. The random forest is close, but even here we cannot guarantee monotonicity.

The linear model is the best one when it comes to a truly linear relation. But the other models are not that far behind.

The abs(x) function is not an easy case for any of these models.

Find the R code here.

Of course, the behavior of all these models depends on the number of observations, the noise-to-signal ratio, the correlations among variables, and interactions.
Yet it may be educational to use PDP curves to see how different models learn relations: what they can grasp easily and what they cannot.

DALEX @ eRum 2018

The DALEX invasion has started, with a workshop and a talk @ eRum 2018.

Find the workshop materials at DALEX: Descriptive mAchine Learning EXplanations. Tools for exploration, validation and explanation of complex machine learning models (thanks to Mateusz Staniak for running the second part of the workshop).

And my presentation Show me your model 2.0! (thanks go to the whole MI2DataLab for contributions and to Malgorzata Pawlak for the great stickers).

DALEX Stories – Warsaw apartments

This Monday we had a machine-learning-oriented meeting of Warsaw R Users. Prof. Bernd Bischl from LMU gave an excellent overview of the mlr package (machine learning in R), then I introduced DALEX (Descriptive mAchine Learning EXplanations), and Mateusz Staniak introduced the live and breakDown packages.

The meeting pushed me to write down a gentle introduction to the DALEX package. You may find it here: https://pbiecek.github.io/DALEX_docs/.
The introduction is written around a story based on Warsaw apartments data.

The story goes like this:
We have two models fitted to the apartments data: a linear model and a randomForest model. It happens that both models have exactly the same root mean square error calculated on a validation dataset.
So an interesting question arises: which model should we choose?
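A minimal sketch of this setup, assuming the apartments and apartmentsTest datasets shipped with DALEX (function names follow the 2018-era API):

library(DALEX)
library(randomForest)

model_lm <- lm(m2.price ~ ., data = apartments)
model_rf <- randomForest(m2.price ~ ., data = apartments)

# explainers evaluated on a validation dataset
exp_lm <- explain(model_lm, data = apartmentsTest[, -1],
                  y = apartmentsTest$m2.price, label = "lm")
exp_rf <- explain(model_rf, data = apartmentsTest[, -1],
                  y = apartmentsTest$m2.price, label = "rf")

# residual-based performance summaries; the RMSE of both models is
# very close here, which is exactly the puzzle described above
plot(model_performance(exp_lm), model_performance(exp_rf))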

The analysis of variable importance for both models reveals that the variable construction.year is important for the randomForest model but is completely neglected by the linear model.
New observation: something is going on with construction.year.
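Continuing the sketch above, the variable importance comparison is one call per explainer (variable_dropout() in the 2018-era API, model_parts() in current releases):

# permutation-based importance: drop in RMSE after permuting each variable
vd_lm <- variable_dropout(exp_lm)
vd_rf <- variable_dropout(exp_rf)
plot(vd_lm, vd_rf)   # construction.year matters for rf, not for lm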

The analysis of model responses reveals that the relation between construction.year and the price per square meter is nonlinear.
At this point it looks like the random forest is the better model, since it captures a relation that the linear model does not see.
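The model responses along construction.year can be sketched the same way, again continuing the sketch above (single_variable() then, model_profile() now):

# conditional (PDP) responses of both models to construction.year
sv_lm <- single_variable(exp_lm, variable = "construction.year", type = "pdp")
sv_rf <- single_variable(exp_rf, variable = "construction.year", type = "pdp")
plot(sv_lm, sv_rf)   # the rf response is nonlinear, the lm one is flat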

But (there is always a but) when you audit the residuals of the random forest model, it turns out that it heavily under-predicts the prices of expensive apartments.
This is a very bad property for a pricing model, as it may result in missed opportunities for larger profits.
New observation: do not use this rf model for predictions.
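The audit itself can be as simple as plotting residuals against predictions, continuing the sketch above:

# residuals of the random forest on the validation data; systematic
# under-prediction shows up as large positive residuals for high prices
pred_rf  <- predict(model_rf, apartmentsTest)
resid_rf <- apartmentsTest$m2.price - pred_rf
plot(pred_rf, resid_rf, xlab = "predicted m2.price", ylab = "residual")
abline(h = 0, lty = 2)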

So, what to do?
DALEX shows that, despite the equal root mean square errors, the two models are very different and capture different parts of the signal.
As we increase our understanding of the signal, we are able to design a better model. And here we go:
the new linear model has a much lower root mean square error of residuals, as it is built on the strengths of both initial models.

All plots were created with DALEX explainers. Find R code and more details here.

DALEX: which variables are really important? Ask your black box model!

The third post in a short series about black-box explainers implemented in the DALEX package. Learn more about DALEX at SER (Warsaw, April 2018), eRum (Budapest, May 2018), WhyR (Wroclaw, June 2018), or UseR (Brisbane, July 2018).

Two weeks ago I wrote about single variable conditional responses and last week I wrote about decompositions of a single prediction.

Sometimes we would like to know the general structure of a model, or at least to know which variables are the most influential. There are many different approaches to this problem in the literature. A nice, simple, and model-agnostic approach is described in this article (Fisher, Rudin, and Dominici 2018). To see how important variable X is, let's permute its values and measure the drop in model accuracy.
This procedure is implemented in the DALEX package in the variable_dropout() function. There are some tweaks (for large datasets you do not need to permute all rows, while for small datasets you could consider some oversampling), but the idea is the same.
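A minimal sketch of this explainer, assuming the HR_data set that shipped with 2018-era DALEX (in current releases see model_parts()):

library(DALEX)
library(randomForest)

# predict who leaves the company; `left` is 0/1, treated here as numeric
model_rf <- randomForest(left ~ ., data = HR_data, ntree = 100)
exp_rf <- explain(model_rf,
                  data = HR_data[, colnames(HR_data) != "left"],
                  y = HR_data$left, label = "randomForest")

# permute each variable in turn and measure the drop in model performance
vd_rf <- variable_dropout(exp_rf)
plot(vd_rf)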

In the figure below you will find variable importances for three models created with the HR dataset. It is easy to spot that the randomForest model performs best and that satisfaction_level is the most important variable in all three models.

[Figure: variable importance for three models fitted to the HR dataset]

There are two things that I like about this explainer.

1) Variable effects for a single model are interesting, but the ability to compare effects across many models is even more interesting. In DALEX you can simply contrast/compare explainers across different models.

2) There is no reason to start variable importance plots at the point 0, since the initial model performance differs between models. It is much more informative to present both the initial model performance and the drop in performance resulting from the dropout of a variable.

If you want to learn more about the DALEX package and variable importance, consult the following vignette or the DALEX website.


Results of the Data Science Masters contest for the best thesis in DS and ML

[Figure: word cloud generated from the titles and abstracts of the submitted theses]
Today is Pi Day, a good day to announce the results of the first edition of the Data Science Masters contest for the best master's thesis.

Out of the 72 submitted theses, 3 had to be selected for awards. Their topics varied widely (the word cloud above was generated from the titles and abstracts). Theses were submitted from all over Poland (statistics by university are below). At the gala you will be able to learn about the contest procedure and the awarded theses; their authors have been asked to give short presentations of a dozen or so minutes.

On behalf of the jury and the organizers (MiNI PW and Nethone), I warmly invite you to room 107 today at 16:15. After the gala, over refreshments, you will have a chance to talk with the award winners. More information here.

How fractals helped my students to master package development in R

Last semester I taught R programming at MIMUW. My lectures are project-oriented; the second project was related to package development. The idea was straightforward: each team of students was to create a package that produces IFS fractals (based on iterated function systems). Each package was to have two generic functions, create() and plot(), along with documentation and a vignette. The fractals were to be implemented with S3 or S4 classes.
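For readers who have not played with IFS before, here is a minimal S3 sketch of what such a package skeleton could look like (all names here are illustrative, not taken from the students' packages):

# create() builds a fractal via the chaos game; plot() draws it
create <- function(contractions, ...) UseMethod("create")

create.default <- function(contractions, n = 20000, ...) {
  point <- c(0, 0)
  points <- matrix(NA_real_, nrow = n, ncol = 2)
  for (i in seq_len(n)) {
    # pick one contraction at random and apply it to the current point
    f <- contractions[[sample(length(contractions), 1)]]
    point <- f(point)
    points[i, ] <- point
  }
  structure(list(points = points), class = "ifs_fractal")
}

plot.ifs_fractal <- function(x, ...) {
  plot(x$points, pch = ".", asp = 1, axes = FALSE, xlab = "", ylab = "", ...)
}

# Sierpinski gasket: three maps, each halving the distance to one vertex
vertices <- list(c(0, 0), c(1, 0), c(0.5, 1))
sierpinski <- lapply(vertices, function(v) function(p) (p + v) / 2)

plot(create(sierpinski))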

My students have different backgrounds, mostly statistics, but some come from physics, psychology, or biology. I was a bit unsure how they would deal with concepts like iterated contractions.
In the end, the results exceeded my expectations.

Guess what happens to students' engagement when their packages start producing nice plots. Their need/hunger for more leads to beautiful things.

This team got interested in nonlinear transformations. They managed to create an Apollonian Gasket generator and much more. See their vignette and package here.

[Figure: fractals generated by this team's package]

This team got interested in probabilistic mixtures of two fractals. They developed a Shiny app that mixes two sets of contractions with given mixture proportions. Below is a mixture of a Sierpinski gasket and a tree. Find their vignette here.

[Figure: a mixture of a Sierpinski gasket and a tree]

This team got interested in random fractals. They developed a fractal generator that randomly draws the parameters of each contraction. As a result they get beautiful random shapes like these. Here is their vignette.

[Figures: randomly generated fractal shapes]

And these two teams got interested in different ways of colouring fractals. See the vignettes of Team 1 and Team 2.

[Figures: examples of fractal colouring]

In the end, it turns out that fractals are very addictive!
Use them with care 😉