Data, movies and ggplot2

Yet another boring barplot?
No!
I asked my students at MiNI WUT to visualize some data about their favorite movies or series.
The results are pretty awesome.
Believe it or not, most of the charts in these posters were created with ggplot2!

Star Wars

A fan of Star Wars? Find out which color is the most popular for lightsabers!
Yes, these lightsabers were created with ggplot2.
Can you guess which characters are overweight?
Find the R code and the data on GitHub.

Harry Pixel

Take frames from the Harry Potter movies, use k-means to extract the dominant colors of each frame, calculate the derivative of the color changes, and there you have it.
The R code and the poster are here.
(Steep derivatives in color space are a nice proxy for dynamic scenes.)
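The dominant-color step can be sketched in a few lines of R. This is an illustrative toy, not the students' actual code: it assumes a frame has already been flattened into an n x 3 matrix of RGB values in [0, 1] (in practice you would read real frames with a package such as magick) and it uses base R's kmeans.

```r
# Toy sketch: dominant color of one frame via k-means.
# Assumes the frame is already an n x 3 matrix of RGB values in [0, 1];
# reading real movie frames is left out of this sketch.
set.seed(42)
frame <- rbind(
  matrix(runif(300, 0.7, 0.9), ncol = 3),  # 100 bright pixels
  matrix(runif(60,  0.0, 0.2), ncol = 3)   # 20 dark pixels
)
km <- kmeans(frame, centers = 2)
# the center of the largest cluster is the dominant color
dominant <- km$centers[which.max(table(km$cluster)), ]
dominant
```

Repeating this per frame gives a sequence of colors; differences between consecutive dominant colors approximate the derivative mentioned above.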

Social Network for Super Heroes

Have you ever wondered what the distribution of superpowers looks like among the Avengers?
Check out this poster or dive into the data.

Pardon my French, but…

Scrape transcripts from over 100k movies, count how many curse words appear in them, and plot the statistics.
Here are sources and the poster.
(Bonus question 1: how are curse words related to the Obama/Trump presidencies?
Bonus question 2: is the number of hard curse words increasing or not?)
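The counting step can be sketched with base R string functions. A minimal sketch, assuming a tiny hypothetical word list; the real posters worked with full transcripts:

```r
# Count how many times words from a list appear in a transcript.
count_words <- function(text, words) {
  # lowercase, then split on anything that is not a letter or apostrophe
  tokens <- unlist(strsplit(tolower(text), "[^a-z']+"))
  sum(tokens %in% tolower(words))
}

count_words("Damn, that damn scene again!", c("damn"))  # 2
```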

Rick and Morty

Interested in the demographics of the characters from Rick and Morty?
Here is the R code and the poster.
(Tricky question: what is happening with season 3?)

Twin Peaks

Transcripts from Twin Peaks are full of references to coffee and donuts.
Except for the episode in which Laura's murderer is revealed (oops, spoiler alert).
Check this out for yourself with these scripts.

The Lion King

Which Disney movie is the most popular?
It wasn’t hard to guess.

Box Office

5D scatterplots?
Here you go.

Next time I will ask my students to visualize data about R packages…
Or maybe you have some other ideas?

ML models: what can't they learn?

What I love about conferences are the people who come up after your talk and say: it would be cool to add XYZ to your package/method/theorem.

After eRum (a great conference, by the way) I was lucky to hear from Tal Galili: it would be cool to use DALEX for teaching, to show how different ML models learn relations.

Cool idea! So let's see what can and what cannot be learned by the most popular ML models. Here we will compare random forests, linear models and SVMs.
Find the full example here. We simulate variables from the uniform U[0,1] distribution and calculate y from the following equation:

In all the figures below we compare partial dependence plot (PDP) model responses against the true relation between the variable x and the target variable y (in pink). All these plots were created with the DALEX package.

For x1 we can check how different models deal with a quadratic relation. The linear model fails without prior feature engineering, the random forest roughly guesses the shape, but the best fit is found by the SVM.
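A partial dependence curve like the ones above can be computed by hand: fix the variable of interest at each value of a grid, average the model's predictions over the data, and plot the averages against the grid. Below is a minimal sketch; the quadratic relation y = x1^2 + noise is an illustrative stand-in for the equation used in the full example, and the hand-rolled pdp() helper is a simplified version of what DALEX does for you.

```r
# Hand-rolled partial dependence for one variable.
set.seed(1)
n  <- 500
df <- data.frame(x1 = runif(n), x2 = runif(n))
df$y <- df$x1^2 + rnorm(n, sd = 0.05)  # illustrative quadratic relation

fit_plain <- lm(y ~ x1 + x2, data = df)           # no feature engineering
fit_quad  <- lm(y ~ poly(x1, 2) + x2, data = df)  # with feature engineering

pdp <- function(model, data, variable, grid = seq(0, 1, length.out = 21)) {
  sapply(grid, function(v) {
    data[[variable]] <- v                 # fix the variable at v...
    mean(predict(model, newdata = data))  # ...and average the predictions
  })
}

grid <- seq(0, 1, length.out = 21)
pd_plain <- pdp(fit_plain, df, "x1")  # a straight line, misses the curvature
pd_quad  <- pdp(fit_quad,  df, "x1")  # tracks the true x1^2 closely
```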

With sine-like oscillations the story is different. SVMs are not that flexible, while the random forest gets much closer.

It turns out that monotonic relations are not easy for these models. The random forest comes close, but even here we cannot guarantee monotonicity.

The linear model is the best one when it comes to a truly linear relation, but the other models are not far behind.

The abs(x) case is not easy for any of these models.

Find the R code here.

Of course, the behavior of all these models depends on the number of observations, the noise-to-signal ratio, correlations among variables, and interactions.
Yet it may be educational to use PDP curves to see how different models learn relations: what they can grasp easily and what they cannot.