Top interactive visualizations of movie scripts

One of the greatest pleasures for an academic teacher is to be surprised by an extraordinary student project or homework, something that greatly exceeds expectations. I’ve reoriented my courses to make such surprises frequent.
The second project in my Data Visualisation classes was related to interactive graphics. The task was to create an interactive graphic/app/tool that summarizes scripts from a selected movie or series.

Here are the top 6 visualizations created by my students, mostly with R.

Harry Potter

Which duelling spells were cast most often in which book from the Harry Potter series?
Find out here.

[Screenshot: Harry Potter visualization]

Avengers

Which emotions are the most common for your favorite Avenger?
Find out here.

[Screenshot: Avengers visualization]

Game of Thrones

Which Starks are mentioned frequently in Season 6?
Find out here.

[Screenshot: Game of Thrones visualization]

The Lion King

What did Simba feel during The Lion King?
Find out here.

[Screenshot: The Lion King visualization]

Pulp Fiction

What is the f**k factor for different characters in Pulp Fiction?
Find out here.

[Screenshot: Pulp Fiction visualization]

Léon: The Professional

Which character is the angriest in Léon: The Professional?
Find out here.

[Screenshot: Léon visualization]

chRistmas tRees

Every year, in the last classes before Christmas, I ask my students to create a Christmas tree in R.
The classes are about techniques of data visualisation and usually, at this point, we are discussing interactive graphics and tools like rbokeh, ggiraph, vegalite, googleVis, D3, rCharts or plotly. I like this exercise because with most tools it is easy to create a barchart, but how good must the tool (and the craftsman) be to handle a Christmas tree?
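For flavour, here is a minimal static sketch of the task in ggplot2; the shapes, coordinates and colours below are invented for illustration, and the interactive submissions build a similar layer with plotly, ggiraph or friends.

library(ggplot2)

# Three stacked triangles for the tree, a rectangle for the trunk, a few baubles
tree <- data.frame(
  x     = c(-3, 3, 0,  -2.5, 2.5, 0,  -2, 2, 0),
  y     = c( 0, 0, 3,   2,   2,   5,   4, 4, 6.5),
  layer = rep(c("bottom", "middle", "top"), each = 3)
)
trunk   <- data.frame(x = c(-0.5, 0.5, 0.5, -0.5), y = c(-1, -1, 0, 0))
baubles <- data.frame(x = c(-1.5, 1, -0.5, 0.8), y = c(0.7, 1.5, 3, 4.3))

ggplot() +
  geom_polygon(data = trunk, aes(x, y), fill = "#8B5A2B") +
  geom_polygon(data = tree, aes(x, y, group = layer), fill = "darkgreen") +
  geom_point(data = baubles, aes(x, y), colour = "red", size = 4) +
  coord_equal() +
  theme_void()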

Here is what they did this year (they had around 1 hour to finish the task). Knitr scripts.

Update: I am still getting new submissions, feel free to submit yours as well.

[Gallery: screenshots of the students' Christmas trees]

archivist: Boost the reproducibility of your research

A few days ago the Journal of Statistical Software published our article (written in collaboration with Marcin Kosiński) archivist: An R Package for Managing, Recording and Restoring Data Analysis Results.

Why should you care? Let’s see.

Starter

[Image: example ggplot2 plot]
Would you like to retrieve the ggplot2 object behind the plot above?
Just call the following line in your R console.

archivist::aread('pbiecek/Eseje/arepo/65e430c4180e97a704249a56be4a7b88')

Want to check the versions of the packages loaded when the plot was created?
Just call

archivist::asession('pbiecek/Eseje/arepo/65e430c4180e97a704249a56be4a7b88')

Wishful Thinking?

When people talk about reproducibility, they usually focus on tools like packrat, MRAN, docker or RSuite. These are great tools that help to manage the execution environment in which analyses are performed. The common belief is that if one is able to replicate the execution environment, then the same R source code will produce the same results.

And it’s true in most cases, maybe even in more than 99% of cases. Except that there are things in the environment that are hard to control or easy to miss, like external system libraries, dedicated hardware or user input. No matter what you copy, you will never know whether it was enough to recreate exactly the same results in the future. So you can hope that the results will be replicated, but do not bet too much on it.
Even if some result does pop up eventually, how can you check whether it is the same result as before?

Literate programming is not enough

There are other great tools like knitr, Sweave, Jupyter and others. Their advantage is that results are rendered as tables or plots in your report, which gives you a chance to verify whether results obtained now and some time ago are identical.
But what about more complicated results, like a random forest with 100k trees built on 100k variables, or a deep neural network? It will be hard to verify by eye that the results are identical.

So, what can I do?

The safest solution would be to store copies of every object ever created during the data analysis. All forks, wrong paths, everything. Along with detailed information about which functions, with which parameters, were used to generate each result. Something like the ultimate Time Machine or GitHub for R objects.

With such detailed information, every analysis would be auditable and replicable.
Right now, full tracking of all created objects is not possible without deep changes in the R interpreter.
The archivist package is a lightweight version of such a solution.

What can you do with archivist?

Use the saveToRepo() function to store selected R objects in an archivist repository.
Use the addHooksToPrint() function to automatically keep copies of every plot, model or dataset created in a knitr report.
Use the aread() function to share your results with others or with your future self. It’s the easiest way to access objects created by a remote shiny application.
Use the asearch() function to browse objects that fit specified search criteria, like class, date of creation, variables used, etc.
Use asession() to access session info with detailed information about the versions of packages available when the object was created.
Use ahistory() to trace how a given object was created.
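As a rough sketch of a typical local workflow (function names follow the package documentation, but exact arguments and defaults may differ between versions; the repository path is an example):

library(archivist)

# Set up a local repository in a directory of your choice
repo <- "~/arepo"
createLocalRepo(repoDir = repo)

# Store an object together with its metadata; saveToRepo() returns an md5 hash
model <- lm(Sepal.Length ~ Petal.Length, data = iris)
hash  <- saveToRepo(model, repoDir = repo)

# Later, or on another machine with access to the repository:
model2 <- loadFromLocalRepo(hash, repoDir = repo, value = TRUE)

# ahistory() is most informative for objects created with archivist's %a% operator,
# which records the chain of function calls
ahistory(md5hash = hash, repoDir = repo)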

Lots of functions. Is there a cheatsheet?

Yes! It’s here.
If that’s not enough, you will find more details in the JSS article.

Explain! Explain! Explain!


Predictive modeling is fun. With random forests, xgboost, lightgbm and other flexible models…
Problems start when someone asks how the predictions are calculated.
Well, some black boxes are hard to explain.
And this is why we need good explainers.

In June, Aleksandra Paluszynska defended her master’s thesis Structure mining and knowledge extraction from random forest. Find the corresponding randomForestExplainer package and its vignette here.

In September, David Foster published a very interesting package, xgboostExplainer. Try it to extract useful information from an xgboost model and create waterfall plots that explain variable contributions to predictions. Read more about this package here.

In October, Albert Cheng published lightgbmExplainer, a package with waterfall plots implemented for LightGBM models. Its usage is very similar to that of xgboostExplainer.

Waterfall plots that explain single predictions are great. They are also useful for linear models. So if you are working with lm() or glm(), try the brand new breakDown package (hmm, maybe it should be named glmExplainer). It creates graphical explanations for predictions and has such a nice cheatsheet:

[Image: breakDown cheatsheet]

Install the package from https://pbiecek.github.io/breakDown/.

Thanks to RStudio for the cheatsheet’s template.
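As a minimal sketch of how it works with lm() on the built-in iris data (function and argument names as documented in the package at the time of writing):

library(breakDown)

# Fit a simple linear model
model <- lm(Sepal.Length ~ Petal.Length + Petal.Width + Species, data = iris)

# broken() decomposes the prediction for a single observation
# into per-variable contributions
explanation <- broken(model, new_observation = iris[1, ])
explanation

# plot() draws the corresponding waterfall plot (a ggplot2 object)
plot(explanation)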

intsvy: PISA for research and PISA for teaching

The Programme for International Student Assessment (PISA) is a worldwide study of 15-year-old school pupils’ scholastic performance in mathematics, science and reading. Every three years more than 500 000 pupils from 60+ countries are surveyed, along with their parents and school representatives. The study yields more than 1000 variables concerning the performance, attitudes and context of the pupils that can be cross-analyzed. A lot of data.

The OECD prepared manuals and tools for SAS and SPSS that show how to use and analyze this data. What about R? Just a few days ago the Journal of Statistical Software published the article "intsvy: An R Package for Analyzing International Large-Scale Assessment Data". It describes the intsvy package and gives instructions on how to download, analyze and visualize data from various international assessments with R. The package was developed by Daniel Caro and me. Daniel prepared various video tutorials on how to use the package; you may find them here: http://users.ox.ac.uk/~educ0279/.

PISA is intended not only for researchers. It is also a great data set for teachers, who may use it as an endless source of ideas for student projects. In this post I am going to describe one such project that I implemented in my R programming classes.

I usually plan two or three projects every semester. The objective of my projects is to show what is possible with R. They are not meant to verify knowledge or to practice a particular data analysis technique. This year the first project for the R programming class was designed around the experience that "with R you can create an automated report that summarizes various subsets of data in one-page summaries".
PISA is a great data source for this. Students were asked to write a markdown file that generates a report in the form of a one-page summary for every country. To do this well you need to master loops, knitr, dplyr and friends (we focus mostly on the tidyverse). Students had a lot of freedom in trying out different things and approaches and finding out what works and how.
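The core mechanism is simple. Here is a hedged sketch: the file name country_summary.Rmd, its cnt parameter and the pisa data frame are hypothetical placeholders, not the students' actual code.

library(rmarkdown)

# One parameterised Rmd file, rendered once per country
countries <- unique(pisa$CNT)

for (cnt in countries) {
  render(
    input       = "country_summary.Rmd",
    params      = list(cnt = cnt),
    output_file = paste0("summary_", cnt, ".pdf")
  )
}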

The project finished just a week ago and the results are amazing.
Here you will find a beamer presentation with one-page summaries, a smart table of contents on every page, and archivist links that allow you to extract each ggplot2 plot and its data directly from the report (click to access the full report or the R code).

[Report preview: FR]

Here you will find one-pagers on the link between taking extra math classes and students’ performance, for boys and girls separately (click to access the full report or the R code).

[Report preview: ZKJ]

And here is a presentation with lots of radar plots (click to access the full report or the R code).

[Report preview: GMS]

Find all projects here: https://github.com/pbiecek/ProgramowanieWizualizacja2017/tree/master/Projekt_1.

And if you would like to use PISA data with your students, or if you need any help, just let me know.

DIY – cheat sheets

I recently found that, in addition to the great list of cheatsheets designed by RStudio, one can also download a template for new cheatsheets from the RStudio Cheat Sheets webpage.
With this template you can design your own cheatsheet and submit it to the collection of Contributed Cheatsheets (Garrett Grolemund will help to improve the submission if needed).

Working on a new cheatsheet is pretty enjoyable. You need to squeeze selected important material onto a quite small surface, like one or two pages.
Lots of fun.
I did it for the eurostat and survminer packages (cheatsheets below).
And I would love to see one for caret.

Is it a job offer for a Data Scientist?

TL;DR
Konrad Więcko and Krzysztof Słomczyński (with a tiny bit of help from my side) have created a system that traces which skills are currently in demand in job offers for data scientists in Poland: which skills, how frequently they appear, and how the demand changes over time.

The full description of how this was done: static, +shiny.

Here: The shiny application for browsing skill sets.

Here: The R package that allows you to access the live data.

Full version
A data science track (MSc level) will very soon be offered at the MiNI department of the Warsaw University of Technology, and we (i.e. the program committee) are spending a lot of time setting up the program. How cool would it be to take into account the current (and forecasted) demand for data science skills on the market? The project carried out by Konrad Więcko and Krzysztof Słomczyński deals exactly with this problem.

So, the data is scraped from pracuj.pl, one of the most popular websites with job offers in Poland.
Only offers relevant to data scientists were used for further analyses.
How were these offers identified? In Poland the job title ‘Data Scientist’ is (still) not that common (except on LinkedIn). There are different translations, and also a lot of different job titles that may be of interest to a data scientist.
So, Konrad and Krzysztof developed a machine learning algorithm that scores how well a given offer matches a ‘data scientist profile’ (whatever that means), based on the content of the job offer description. Then, based on these predictions, various statistics can be calculated: the number of offers, locations, demand for particular skills, and so on.
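Their write-up (linked below) has the details. As a very rough illustration of the general idea, and not their actual pipeline, one could score offer descriptions with a bag-of-words model and regularised logistic regression; the offers data frame with description and is_ds columns is hypothetical.

library(text2vec)
library(glmnet)

# Build a document-term matrix from the offer descriptions
it    <- itoken(offers$description, preprocessor = tolower, tokenizer = word_tokenizer)
vocab <- create_vocabulary(it)
dtm   <- create_dtm(it, vocab_vectorizer(vocab))

# Regularised logistic regression, trained on a manually labelled sample,
# then used to score how "data-science-like" every offer is
fit <- cv.glmnet(dtm, offers$is_ds, family = "binomial")
offers$score <- predict(fit, newx = dtm, s = "lambda.min", type = "response")[, 1]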

Here: Description of the dataset used for the model training.

Here: Description of the machine learning part of the project.

Trends for selected skill sets.

How to weigh a dog with a ruler? (looking for translators)


We are working on a series of comic books that introduce statistical thinking and could be used as activity booklets in primary schools. The stories are built around the adventures of two siblings: Beta (a skilled mathematician) and Bit (a data hacker).

What is the connection between these comic books and R? All plots are created with ggplot2.

The first story (How to weigh a dog with a ruler?) has been translated into English, Polish and Czech. If you would like to help us translate this story into your native language, just write to me (przemyslaw.biecek at gmail) or create an issue on GitHub. It’s just 8 pages long; translations are available under the Creative Commons BY-ND licence.

Click images below to get the comic book:
In English: [cover image]

In Polish: [cover image]

In Czech: [cover image]

The main point of the first story is to find the relation between the height and weight of different animals and then assess the weight of a T-Rex based only on the length of its skeleton, a method called Regression by Eye.

[Illustration: the height vs. weight relation from the comic book]
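A hedged sketch of the same idea in R, using made-up toy numbers rather than the data from the comic book:

library(ggplot2)

# Toy data, invented for illustration: body length (m) and weight (kg) of a few animals
animals <- data.frame(
  length = c(0.3, 1.0, 1.8, 3.0, 5.5),
  weight = c(4, 60, 250, 1500, 5000)
)

# On the log-log scale the relation is roughly linear
fit <- lm(log(weight) ~ log(length), data = animals)

# "Weigh" a T-Rex from the length of its skeleton (about 12 m)
exp(predict(fit, newdata = data.frame(length = 12)))

ggplot(animals, aes(length, weight)) +
  geom_point() +
  geom_smooth(method = "lm") +
  scale_x_log10() +
  scale_y_log10()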

PISA 2015 – how to read/process/plot the data with R

Yesterday the OECD published results and data from the PISA 2015 study (Programme for International Student Assessment). It’s a very cool study: over 500 000 pupils (15-year-olds) are examined every 3 years. The raw data is publicly available and one can easily access detailed information about pupils’ academic performance, along with detailed survey data from students, parents and school officials (~2 000 variables). Lots of stories to be found.

You can download the dataset in SPSS format from this webpage. Then use the foreign package to read the sav files and the intsvy package to calculate aggregates/averages/tables/regression models (for 2015 data you should use the GitHub version of the package).

Below you will find a short example of how to read the data, calculate weighted averages per gender and country, and plot these results with ggplot2. Here you will find other use cases for the intsvy package.
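A minimal sketch of such an analysis follows. The file name and the variable names (W_FSTUWT for the final student weight, ST004D01T for gender, PV1MATH for the first plausible value in maths) follow the PISA 2015 codebook; treat them as assumptions and check them against the downloaded files. Note that intsvy does this properly, averaging over all plausible values and using replicate weights for standard errors; the snippet below is only a quick approximation.

library(foreign)
library(dplyr)
library(ggplot2)

# Read the student questionnaire file downloaded from the OECD webpage
stud <- read.spss("CY6_MS_CMB_STU_QQQ.sav", to.data.frame = TRUE)

# Weighted average of the first plausible value in maths, per country and gender
averages <- stud %>%
  group_by(CNT, ST004D01T) %>%
  summarise(math = weighted.mean(PV1MATH, w = W_FSTUWT, na.rm = TRUE)) %>%
  ungroup()

# Dot plot of country averages, split by gender
ggplot(averages, aes(x = math, y = reorder(CNT, math), colour = ST004D01T)) +
  geom_point() +
  labs(x = "Average maths performance (PV1MATH)", y = "", colour = "Gender")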

[Plot: weighted averages of PISA 2015 results by gender and country]

ggmail + forecast = how many emails will I get tomorrow?


During eRum 2016, Adam Zagdański gave a very good tutorial on time series modeling. Among other things, I learned that the forecast package (created by Rob Hyndman) has cool new plots based on the ggplot2 package.

Let’s use it to play with mailbox statistics for my Gmail account!

1. Get the data

Follow this link to download the data from your Gmail account as a single mbox file.
It may be large (15 GB in my case), but for the next steps it’s enough to keep only the headers; grep + cat will do the job.
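The original post continues with the modeling part. Here is a hedged sketch of what the next steps could look like, assuming the Date: headers have already been extracted into a plain text file; the file name and the parsing format are assumptions, not the original code.

library(forecast)
library(ggplot2)

# dates.txt contains only the extracted header lines,
# e.g. "Date: Tue, 6 Dec 2016 10:23:45 +0100"
headers <- readLines("dates.txt")

# Parsing %a/%b requires an English locale; the trailing time zone offset is ignored
dates <- as.Date(strptime(sub("^Date:\\s*", "", headers),
                          format = "%a, %d %b %Y %H:%M:%S", tz = "UTC"))

# Emails per day (days with no email are simply skipped in this sketch)
daily  <- as.data.frame(table(dates))
emails <- ts(daily$Freq, frequency = 7)   # weekly seasonality

# Fit a model and draw the ggplot2-based forecast plot
fit <- auto.arima(emails)
autoplot(forecast(fit, h = 14))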
