Hello stranger, and welcome! 👋😊

I'm Rasmus Bååth, data scientist, engineering manager, father, husband, tinkerer,
tweaker, coffee brewer, tea steeper, and, occasionally, publisher of stuff I find
interesting down below👇

This January I played the most intriguing computer game I’ve played in ages:
The Return of the Obra Dinn. Besides being a masterpiece of murder-mystery storytelling, it also has a strikingly unique art style, as it only uses black and white pixels. To pull this off, Obra Dinn makes use of *image dithering*: arranging pixels of low color resolution to emulate the color shades in between. Since the game was over all too quickly, I thought I would instead explore how basic image dithering can be implemented in R. If old-school graphics pique your interest, read on! There will be some grainy-looking ggplot charts at the end.

(*The image above is copyright Lucas Pope and is the title screen of
The Return of the Obra Dinn*)
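To give a taste of the idea before the R version: here is a minimal sketch of *ordered dithering*, one of the simpler dithering variants (not necessarily the one Obra Dinn uses), written in Python with an assumed 2×2 Bayer threshold matrix.

```python
import numpy as np

# A 2x2 Bayer matrix gives neighboring pixels different brightness
# thresholds, so a flat gray area turns into a pattern of black and
# white pixels whose density matches the original shade.
bayer_2x2 = np.array([[0, 2],
                      [3, 1]]) / 4.0  # thresholds in [0, 1)

def ordered_dither(gray):
    """Map a grayscale image (values in [0, 1]) to pure 0/1 pixels."""
    h, w = gray.shape
    # Tile the threshold matrix over the whole image, then compare
    thresholds = np.tile(bayer_2x2, (h // 2 + 1, w // 2 + 1))[:h, :w]
    return (gray > thresholds).astype(int)

# A horizontal gradient: dithering renders it as increasingly dense white pixels
gradient = np.linspace(0, 1, 8).reshape(1, -1).repeat(4, axis=0)
print(ordered_dither(gradient))
```

The dark end of the gradient comes out all black, the bright end all white, and the shades in between become mixed patterns, which is the whole trick.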

The Beta-Binomial model is the “hello world” of Bayesian statistics. That is, it’s the first model you get to run, often before you even know what you are doing. There are many reasons for this:

- It only has one parameter, the underlying proportion of success, so it’s easy to visualize and reason about.
- It’s easy to come up with a scenario where it can be used, for example: “What is the proportion of patients that will be cured by this drug?”
- The model can be computed analytically (no need for any messy MCMC).
- It’s relatively easy to come up with an informative prior for the underlying proportion.
- Most importantly: It’s fun to see some results before diving into the theory! 😁
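To make the "computed analytically" point concrete, here is a minimal sketch in Python (the course itself uses R), assuming a uniform Beta(1, 1) prior and the two successes and four failures from the course example:

```python
import numpy as np

# With a uniform Beta(1, 1) prior, 2 successes, and 4 failures, the
# posterior over the underlying proportion of success is simply
# Beta(1 + 2, 1 + 4) = Beta(3, 5) -- no MCMC required.
successes, failures = 2, 4
a, b = 1 + successes, 1 + failures   # posterior Beta parameters

# Evaluate the posterior density on a grid of candidate proportions
p = np.linspace(0, 1, 1001)
density = p ** (a - 1) * (1 - p) ** (b - 1)
density /= density.sum()             # normalize the grid approximation

print(a / (a + b))                   # analytic posterior mean: 3/8 = 0.375
print((p * density).sum())           # grid approximation of the same mean
```

The grid version is redundant here, since the Beta posterior is exact, but it is the same density you would plot to visualize the model.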

That’s why I also introduced the Beta-Binomial model as the first model in my DataCamp course
Fundamentals of Bayesian Data Analysis in R, and quite a lot of people have asked me for the code I used to visualize it. Scroll to the bottom of this post if that’s what you want; otherwise, here is how I visualized the Beta-Binomial in my course, given two successes and four failures:

So, after having held workshops introducing Bayes for a couple of years now, I finally pulled myself together and completed my DataCamp course:
Fundamentals of Bayesian Data Analysis in R! 😁

The Stan project for statistical computation has a great collection of
curated case studies that anybody can contribute to (maybe even me, I was thinking). But I don’t have time to worry about that right now because I’m on vacation, on the yearly visit to my old family home in the north of Sweden.

What I *do* worry about is that my son will be stung by a bumblebee. His name is Torsten, he’s almost two years old, and he loves running around barefoot on the big lawn. Which has its fair share of bumblebees. Maybe I should put shoes on him so he won’t step on one, but what are the chances, really?

Well, what *are* the chances? I guess if I only had

- Data on the bumblebee density of the lawn.
- Data on the size of Torsten’s feet and how many steps he takes when running around.
- A reasonable Bayesian model, maybe implemented in Stan.

I could figure that out. “How hard can it be?”, I thought. And so I made an attempt.
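A back-of-the-envelope version of the calculation looks something like this (all numbers are made up for illustration; the proper Bayesian version, with real data and Stan, comes later). Assuming bumblebees sit randomly on the lawn and each barefoot step is an independent chance of landing on one:

```python
# Hypothetical inputs -- none of these are measured values
bees_per_m2 = 0.05    # assumed average bumblebee density on the lawn
foot_area_m2 = 0.01   # assumed area of a toddler-sized footprint
n_steps = 1000        # assumed number of steps during a day of running

# Chance that a single step lands on a bee: density times footprint area
p_step = bees_per_m2 * foot_area_m2

# Chance of at least one bee-step over the whole day
p_sting = 1 - (1 - p_step) ** n_steps
print(round(p_sting, 3))
```

The real model has to account for the uncertainty in all three inputs, which is exactly what the Bayesian machinery is for.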

## Getting the data

To get some data on bumblebee density, I marked out a 1 m² square on a representative part of the lawn. Now and then during the course of the day, I counted how many bumblebees sat in the square.

This is the last video of a three-part introduction to Bayesian data analysis aimed at *you* who aren’t necessarily that well-versed in probability theory but who do know a little bit of programming. If you haven’t watched the other parts yet, I really recommend you do that first:
Part 1 &
Part 2.

This third video covers the *how?* of Bayesian data analysis: how to do it efficiently and how to do it in practice. But *covers* is a big word; *briefly introduces* is more appropriate. Along the way I will *briefly introduce* Markov chain Monte Carlo, parameter spaces, and the computational framework
Stan:

This is video two of a three-part introduction to Bayesian data analysis aimed at *you* who aren’t necessarily that well-versed in probability theory but who do know a little bit of programming. If you haven’t watched part one yet, I really recommend you do that first;
here it is. This second video covers the *why?* of Bayesian data analysis: Why (and when) use it instead of some other method of analyzing data?

This is video one of a three-part introduction to Bayesian data analysis aimed at *you* who aren’t necessarily that well-versed in probability theory but who do know a little bit of programming. I gave a version of this tutorial at the UseR 2015 conference, but I didn’t get around to doing a screencast of it. Until now, that is! I should warn you that this tutorial is quite hand-wavy (but it’s also pretty short), and if you want a more rigorous video tutorial I can really recommend
Richard McElreath’s YouTube lectures.

This first video covers the *what?* of Bayesian data analysis, with parts two and three covering the *why?* and the *how?*. I expect to be able to record parts two and three over the next couple of weeks but, for now, here is part one:

Over the last two years I’ve occasionally been giving a very basic tutorial on Bayesian statistics using R and
Stan. At the end of the tutorial I hand out an exercise for those who want to flex their newly acquired skills. I call this exercise *Bayesian computation with Stan and Farmer Jöns* and it’s pretty cool! Now, it’s not cool because of *me*, but because the expressiveness of Stan allowed me to write a small number of data-analytic questions that quickly take you from running a simple binomial model up to running a linear regression. Throughout the exercise you work with the same model code, and each question just requires you to make a *minimal* change to it, yet you will cover most models taught in a basic statistics course! Well, briefly at least… :) If you want to try out this exercise yourself, or use it for some other purpose, you can find it here:

Beginners Exercise: Bayesian computation with Stan and Farmer Jöns (R-markdown source)

Solutions to Bayesian computation with Stan and Farmer Jöns (R-markdown source)

My friend and colleague Christophe Carvenius also helped me translate this exercise into Python:

Python Beginners Exercise: Bayesian computation with Stan and Farmer Jöns

Python Solutions to Bayesian computation with Stan and Farmer Jöns

Now, this exercise would surely have been better if I’d used real data, but unfortunately I couldn’t find enough datasets related to cows… Finally, here is a depiction of Farmer Jöns and his two lazy siblings by the great master Hokusai.

I just found a fun food-themed dataset that I’d never heard about and that I thought I’d share. It’s from a project called
*What’s on the menu*, where the New York Public Library has crowdsourced a digitization of their collection of historical restaurant menus. The collection stretches all the way back to the 19th century and well into the 1990s, and the home page states that there are “1,332,271 dishes transcribed from 17,545 menus”. Here is one of those menus, from a turn-of-the-(old)-century Chinese-American restaurant:

The data is freely available in csv format (yay!), and here I’ll just show how to get the data into R and use it to plot the popularity of some foods over time.
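The shape of that analysis can be sketched like this (in Python rather than R, and with made-up dish names and years standing in for the real csv files, so treat it as a template rather than the actual code):

```python
from collections import Counter

# With the real data you would read records out of the csv files (e.g. via
# csv.DictReader); the dishes and years below are invented placeholders.
# Each record: (dish name, year of the menu it appeared on)
appearances = [
    ("Consommé", 1901), ("Consommé", 1905),
    ("Pizza", 1955), ("Pizza", 1962), ("Pizza", 1968),
]

# Popularity over time: count menu appearances per dish and decade
per_decade = Counter((name, (year // 10) * 10) for name, year in appearances)
for (name, decade), n in sorted(per_decade.items()):
    print(f"{name}, {decade}s: {n} appearance(s)")
```

From counts like these, a line per dish over decades gives the popularity-over-time plot.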

As I’m in
the industry now, I figured I needed some business cards, and since it seems the 90s never left us and Japanese monsters are hip again, I decided to make them Pokémon themed.

I think they turned out pretty well, and here I’m just going to give some pointers on how I did them.