Program of the European R users meeting [only 7 days to go]

The European R users meeting [eRum] is going to start in just 7 days.

We expect over 250 participants, 10 invited talks, 47 regular talks, 13 lightning talks and 12 posters. To handle that much content, we have scheduled 18 sessions [+ workshops].

You can find the conference program here or here. In the second sheet you will find a detailed list of talks and sessions.

As you can see, the conference is full of very interesting content. So get prepared, and see you in Poznań!

MinechaRts #1 (Minecraft + R + Edgar Anderson’s Iris Data)

How can you use R to draw 3D scatterplots in Minecraft? Let's see.

Minecraft is a game about placing blocks and going on adventures (source). Blocks are usually placed by players, but there are add-ons that allow you to add, modify and remove blocks through an external API.
This feature is used in educational materials that show how to use Minecraft to learn Python (or how to use Python to modify Minecraft worlds; see this book for example). You need to master loops to build a pyramid or a cube, and you need to touch some math to build an igloo or a fractal. You can find a lot of cool examples by googling 'minecraft python programming'.

So, Python + Minecraft is great, but how can you do similar things in R?
You need to do just three things:

  1. Install the Spigot Minecraft Server along with all required dependencies. Detailed instructions on how to do this are here.
  2. Create a socket connection to the Minecraft Server on port 4711. In R it's just a single line (see the sketch after this list).
  3. Send building instructions through this connection. For example, the command shown in the sketch below will create an 11x11x11 cube made of TNT blocks (id=46 is TNT, see the full list here) placed between coordinates (0,70,0) and (10,80,10). You can add and remove blocks, move players, spawn entities and so on. See a short overview of the server API.
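A minimal sketch of steps 2 and 3 (it assumes a local Spigot server with the RaspberryJuice plugin listening on port 4711; the commands follow the Minecraft Pi API syntax):

# step 2: open a socket connection to the Minecraft server
con <- socketConnection(host = "localhost", port = 4711)

# step 3: send a building instruction as plain text;
# world.setBlocks(x1,y1,z1, x2,y2,z2, blockId) fills a cuboid, and block id 46 is TNT
writeLines("world.setBlocks(0,70,0,10,80,10,46)", con)

close(con)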

The R code below creates a connection to the Minecraft server, builds a flat grassland around the spawning point, and plots a 3D scatterplot with 150 blocks (surprise, surprise: block coordinates correspond to the Sepal.Length, Sepal.Width and Petal.Length variables from the iris dataset).
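A sketch of such a script (the block ids, the scaling of the iris variables and the exact coordinates are illustrative assumptions, not necessarily the original choices):

con <- socketConnection(host = "localhost", port = 4711)

# flatten the area around the spawning point into grassland (block id 2 = grass)
writeLines("world.setBlocks(0,60,0,90,60,90,2)", con)

# one block per flower: coordinates scaled from three iris variables (block id 35 = wool)
xs <- as.integer(round(10 * iris$Sepal.Length))
ys <- as.integer(round(10 * iris$Sepal.Width)) + 60
zs <- as.integer(round(10 * iris$Petal.Length))
for (i in seq_along(xs)) {
  writeLines(sprintf("world.setBlock(%d,%d,%d,35)", xs[i], ys[i], zs[i]), con)
}

close(con)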

If you do not like scatterplots, try bar charts ;-)

A link that can tell more than dozens of lines of R code – what’s new in archivist?

Can you spot the difference between this plot:

And this one:

You are right! The latter has an embedded piece of R code.
What for?

It's a call to the function aread from archivist – a package that manages external copies of R objects. This piece of code was added by the function addHooksToPrint(), which enriches knitr reports with links to all objects of a given class, e.g. ggplot.

You can copy this hook to your R session and automagically recreate the plot in your local session.
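The hook itself is a one-liner of this form (the repository name and the md5 hash below are placeholders, not the actual ones embedded in the post):

archivist::aread("pbiecek/graphGallery/md5hashOfThePlot")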

But that's not all.
Actually, the story is just beginning here.

Don't you think this plot is badly annotated? It is not clear what is being presented. Something about terrorism, but for which year? Are these results for all countries, or is there some filtering? What is on the axes? Why did the author skip all this important information? Why didn't he include the full R code that explains how the plot was created?

Actually, with this single link you can get answers to all of these questions.

First, let’s download the plot and extract the data out of it.
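For example (again with a placeholder address; a ggplot object carries the data it was built from, so extracting it is straightforward):

library(archivist)
pl <- aread("pbiecek/graphGallery/md5hashOfThePlot")  # recreate the plot
plotData <- pl$data                                   # the data frame behind the plot
head(plotData)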

This data object is also in the repository so I can download it with the aread function.
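For example (placeholder hash again):

dat <- aread("pbiecek/graphGallery/md5hashOfTheData")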

But here is the coolest part.
Having an object, one can (in some cases) examine the history of this object, i.e. check how it was created. Here is how to do this:
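A sketch of the call, assuming the remote-repository form with a user/repo/hash address (the hash is a placeholder):

ahistory(md5hash = "pbiecek/graphGallery/md5hashOfTheData")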

Now you can see what operations were used to create the data behind this plot. It's clear how the aggregation was done, what the filtering condition was, and so on.
You also have hashes for all objects created along the way, so you can download the partial results. This history is recorded with the %a% operator, which works in a similar fashion to %>%.
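For illustration, such a pipeline might look like this (a sketch; it assumes a local repository has been created and set as the default, and the filtering and aggregation steps are made up):

library(archivist)
library(dplyr)
createLocalRepo("arepo")
setLocalRepo("arepo")

# every intermediate result is archived together with the call that produced it
result <- iris %a%
  filter(Sepal.Length > 6) %a%
  summarise(meanPetal = mean(Petal.Length))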

We have the plot, and now we know what is being presented, so let's change some annotations.
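Since the recreated object is an ordinary ggplot, annotations can be changed in the usual way (the labels below are made up, just to show the mechanics):

library(ggplot2)
pl +
  ggtitle("Number of terrorist attacks per year") +
  xlab("Year") +
  ylab("Number of attacks")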

The ahistory() function for remote repositories was introduced to archivist in version 2.1 (on CRAN since yesterday). Another new feature is support for repositories in Shiny applications. Now you can enrich your app with links to copies of the R objects generated by Shiny.
You can find more information about these and other features in the useR! 2016 presentation about archivist (html, video).
Or look for Marcin Kosiński's talk during the European R users meeting in Poznań.

The data presented here is just a small fraction of the data from the National Consortium for the Study of Terrorism and Responses to Terrorism (START) (2016), retrieved from http://www.start.umd.edu/gtd.

Shiny + archivist = reproducible interactive exploration


Shiny is a great tool for interactive exploration (and not only for that). But, due to its architecture, all generated objects and results are stored in a separate R process, so you cannot access them easily from your R console.

In some cases you may wish to retrieve a model or a plot that you have just generated. Or maybe you just wish to store all R objects (plots, data sets, models) that have ever been generated by your Shiny application. Or maybe you would like to do some further tuning or validation of a selected model or plot. Or maybe you wish to collect and compare all lm() models ever generated by your app? Or maybe you would like to have R code that will recover a given R object in the future.

So, how can you do this?
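A minimal sketch of one possible approach, archiving every plot generated inside the server function with archivist (all names here are illustrative, and this is not necessarily the exact mechanism described in the full post):

library(shiny)
library(archivist)
library(ggplot2)

if (!dir.exists("arepo")) createLocalRepo("arepo")   # repository for app-generated objects

ui <- fluidPage(plotOutput("plot"))

server <- function(input, output) {
  output$plot <- renderPlot({
    p <- ggplot(iris, aes(Sepal.Length, Sepal.Width)) + geom_point()
    saveToLocalRepo(p, repoDir = "arepo")   # keep a copy of every generated plot
    p
  })
}

shinyApp(ui, server)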

Read more: Shiny + archivist = reproducible interactive exploration

eRum 2016 — last days of call for papers

Only 6 days left until the end of the call for papers for eRum 2016! Register and submit your talk proposal at www.erum.ue.poznan.pl.

The European R users meeting will be a great place to learn and share ideas about R. Moreover, we have already confirmed the following invited talks:

  • Browse Till You Die: Scalable Hierarchical Bayesian Modeling of cookie deletion — Jakub Glinka, GfK Data Lab,
  • Design of Experiments in R — Ulrike Grömping, Beuth University of Applied Sciences Berlin,
  • Genie: A new, fast, and outlier-resistant hierarchical clustering algorithm and its R interface — Marek Gagolewski, Systems Research Institute, Polish Academy of Sciences,
  • Addressing the Gender Gap in the R Project — Heather Turner, University of Warwick,
  • Heteroscedastic Discriminant Analysis and its integration into “mlR” package for uniform machine learning — Katarzyna Stąpor, Institute of Computer Science, Silesian Technical University,
  • How to use R to hack the publicly available data about skills of 2M+ worldwide students? — Przemysław Biecek, University of Warsaw,
  • A survey of tools for Bayesian data analysis in R — Rasmus Bååth, Lund University,
  • Geo-located point data: measurement of agglomeration and concentration — Katarzyna Kopczewska, University of Warsaw.

The European R users meeting is an international conference that aims at integrating users of the R language. eRum 2016 will be a good chance to exchange experiences, broaden knowledge of R and collaborate. One can participate in eRum 2016:
(1) with a regular oral presentation,
(2) with a lightning talk,
(3) with a poster presentation,
(4) or by attending without a presentation or poster.

Due to the space available at the conference venue, the organizers have set the participant limit at 250 (only 97 spots left!).

Frequency analysis challenge – a console-based game for R/Python

Six months ago we introduced 'The Proton' – a console-based R game with six data-wrangling puzzles. Around 15-30 minutes of fun with data. The game is on CRAN in the BetaBit package.

And just a few days ago we added a second game – frequon(). Eight puzzles related to the frequency analysis of encoded messages.

It’s much harder than proton.
Expect around two hours of playing with ciphers.
Try it yourself. To get the R version, just type

install.packages("BetaBit")
library("BetaBit")
frequon()

You can also try the experimental Python version.

pip install --upgrade https://github.com/BetaAndBit/BetaBitPython/archive/master.tar.gz

If you like these games and are going to attend useR! 2016 (June, Stanford, USA) or eRum 2016 (October, Poznań, Poland), feel free to ping me (Przemyslaw.Biecek).

All your models belong to us: how to combine the archivist package and the trace() function

Let’s see how to collect all linear regression models that you will ever create in R.

It's easy with the trace() function – a really powerful, yet not very popular, function that allows you to inject any R code at any point in the body of any function.
It is useful for debugging and has other interesting applications.
Below I show how to use this function to store a copy of every linear model that is created with lm(). In the same way you may store copies of plots, other models, data frames, anything.

To store a persistent copy of an object, one can simply use the save() function. But we are going to use the archivist package instead. It stores objects in a repository and gives you some nice features, like searching within the repository, sharing the repository with other users, checking the session info for a particular object, or restoring packages to versions consistent with a selected object.

To use archivist with the trace() function you just need to call two lines. The first one creates an empty repo, and the second executes saveToLocalRepo() at the end of each call to the lm() function.
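A sketch of those two calls (it relies on the fact that lm() builds its result in a local variable called z, which the exit hook then archives; the repository name is arbitrary):

library(archivist)
createLocalRepo("allModels")
setLocalRepo("allModels")

# run saveToLocalRepo() on the fitted model (the local variable z) every time lm() exits
trace(lm, exit = quote(archivist::saveToLocalRepo(z, repoDir = "allModels")))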

Now, at the end of every lm() call, the fitted model will be stored in the repository.
Let’s see this in action.
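For example:

# fit a few models; each of them is silently copied to the repository on exit
lm(Sepal.Length ~ Petal.Length, data = iris)
lm(Sepal.Length ~ Petal.Length + Species, data = iris)
lm(mpg ~ wt + cyl, data = mtcars)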

All models are stored as rda files in a disk-based repository.
You can load them back into R with the asearch() function.
Let's get all lm objects, apply the AIC function to each of them, and sort them by AIC.
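A sketch of that step (asearch() searches the default local repository set above):

models <- asearch("class:lm")   # a list with all archived lm objects
sort(sapply(models, AIC))       # AIC of each model, smallest (best) first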

The aread() function will download the selected model.
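For example (the hash is a placeholder for the md5 hash of the chosen model):

model <- aread("md5hashOfTheSelectedModel")
summary(model)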

Now you can just create model after model, and if needed they can all be restored.

Read more about archivist here: http://pbiecek.github.io/archivist/.

Call for Papers: eRum 2016 (European R users meeting)


The European R users meeting (eRum) is an international conference that aims at integrating users of the R language. eRum 2016 will be held on October 13 and 14, 2016, in Poznań, Poland, at the Poznań University of Economics and Business. We have already confirmed the following invited speakers: Rasmus Bååth, Romain François, Ulrike Grömping, Matthias Templ, Heather Turner, Przemysław Biecek, Marek Gągolewski, Jakub Glinka, Katarzyna Kopczewska and Katarzyna Stąpor.

We would like to bring together participants from around the world. It will be a good chance to exchange experiences, broaden knowledge of R and collaborate. The conference will cover topics including:

• Bayesian Statistics,
• Bioinformatics,
• Economics, Finance and Insurance,
• High Performance Computing,
• Reproducible Research,
• Industrial Applications,
• Statistical Learning with Big Data,
• Spatial Statistics,
• Teaching,
• Visualization & Graphics,
• and many more.

We invite you to participate in eRum 2016:
(1) with a regular oral presentation,
(2) with a lightning talk,
(3) with a poster presentation,
(4) or without a presentation or poster.

Due to limited space at the conference venue, the organizers have set the limit for the number of participants at 250. Persons giving regular talks, lightning talks or posters will be considered first; those attending without a presentation or poster will be handled on a first-come, first-served basis.

Please make your submission online at http://erum.ue.poznan.pl/#register. The submission deadline is June 15, 2016. Submitters will be notified of acceptance via email by July 1, 2016. Additional details will be announced via the eRum conference website.

European R users meeting / meeting of R heroes / Poznań 12-14.10.2016


The European R users meeting (eRum 2016) will take place between October 12th and 14th.

We have already confirmed great invited speakers such as Rasmus Bååth, Romain François, Ulrike Grömping, Matthias Templ, and Heather Turner, as well as a strong representation from Poland: Przemysław Biecek (omg, it's me!), Marek Gągolewski, Jakub Glinka, Katarzyna Kopczewska, and Katarzyna Stąpor. We are planning a meeting of more than 200 useRs from all across Europe working in different areas of industry, academia, and government.

On behalf of the organising committee, chaired by Maciej Beręsewicz, we want to invite you to be a part of this historic meeting by proposing a workshop, submitting a regular or lightning talk, presenting a poster, or just attending the activities we are preparing for the meeting.

You will find more details about the registration process on the website www.erum.ue.poznan.pl.

If you have any questions, do not hesitate to ask at erum@konf.ue.poznan.pl.

See you in Poznań.


Why should you back up your R objects?

There is a saying that there are two groups of people: those who already do backups and those who will. So, how is this linked with reproducible research and R?

If your work is to analyze data, then you often face the need to restore, recreate, or update results that you generated some time ago.
You may think, "I have knitr reports for everything!" That's great! It will save you a lot of trouble. But to have a 100% guarantee of exactly the same results, you need exactly the same environment and the same versions of packages.

Do you know how many R packages have been updated during the last 12 months?

I took the list of the top 20 R packages from here, scraped the dates of their current and older CRAN releases from here, and generated a plot with the dates of submissions to CRAN, sorted by the date of the last submission.
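The scraping step for a single package might look roughly like this (a sketch using rvest; it assumes the CRAN archive page for a package is a plain directory listing with a 'Last modified' column, and the package name is just an example):

library(rvest)

archive_url <- "https://cran.r-project.org/src/contrib/Archive/ggplot2/"
listing <- read_html(archive_url) %>%
  html_nodes("table") %>%
  html_table() %>%
  .[[1]]

# dates of the archived (older) releases; the current release is listed on the package page
release_dates <- as.Date(listing[["Last modified"]])
release_dates <- release_dates[!is.na(release_dates)]
range(release_dates)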

Read more: Why should you back up your R objects?