As the normal distribution is the default choice when modeling continuous data (though not necessarily the best choice), the Poisson distribution is the default when modeling counts of events. Indeed, when all you know is the number of events during a certain period it is hard to think of any other distribution, whether you are modeling the number of deaths in the Prussian army due to horse kicks or the number of goals scored in a football game. Like the t.test in R there is also a poisson.test that takes one or two samples of counts and spits out a p-value. But what if you have some counts, but don't significantly feel like testing a null hypothesis? Stay tuned!
Bayesian First Aid is an attempt at implementing reasonable Bayesian alternatives to the classical hypothesis tests in R. For the rationale behind Bayesian First Aid see the original announcement. The development of Bayesian First Aid can be followed on GitHub. Bayesian First Aid is a work in progress and I’m grateful for any suggestion on how to improve it!
The original poisson.test function that comes with R is rather limited, and that makes it fairly simple to construct the Bayesian alternative. However, at first sight poisson.test may look more limited than it actually is. The one-sample version just takes one count of events $x$ and the number of periods $T$ during which the events were counted. If your ice cream truck sold 14 ice creams during one day you would call the function like poisson.test(x = 14, T = 1). This seems limited: what if you have several counts, say from selling ice cream during a whole week? The trick here is that you can add up both the counts and the number of time periods and this will be perfectly fine. The code below will still give you an estimate of the underlying rate of ice cream sales per day:
ice_cream_sales = c(14, 16, 9, 18, 10, 6, 13)
poisson.test(x = sum(ice_cream_sales), T = length(ice_cream_sales))
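This works because a sum of independent Poisson counts is itself Poisson distributed. A quick simulation (my own sanity check, not part of poisson.test) illustrates it:

```r
# The sum of 7 independent Poisson(12) day counts behaves like a single
# Poisson(7 * 12) count: mean and variance should both be close to 84.
set.seed(42)
weekly <- replicate(10000, sum(rpois(7, lambda = 12)))
mean(weekly)  # close to 7 * 12 = 84
var(weekly)   # also close to 84, the Poisson mean-variance signature
```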
Note that this only works if the counts are well modeled by the same Poisson distribution. If the ice cream sales are much higher on the weekends, adding up the counts might not be a good idea. poisson.test is also limited in that it can only handle two counts; you can compare the performance of your ice cream truck with just one competitor's, no more. As the Bayesian alternative accepts the same input as poisson.test it inherits some of its limitations (but it can easily be extended, read on!). The model for the Bayesian First Aid alternative to the one-sample poisson.test is:

$$x \sim \text{Poisson}(\lambda \cdot T)$$
$$\lambda \sim \text{Gamma}(0.5, 0.00001)$$
Here $x$ is again the count of events, $T$ is the number of periods, and $\lambda$ is the parameter of interest, the underlying rate at which the events occur. In the two sample case the one sample model is just separately fitted to each sample.
As $x$ is assumed to be Poisson distributed, all that is required to turn this into a fully Bayesian model is a prior on $\lambda$. In the literature there are two common recommendations for an objective prior on the rate of a Poisson distribution. The first one is $p(\lambda) \propto 1 / \lambda$, which is the same as $p(\log(\lambda)) \propto \text{const}$ and is proposed, for example, by Villegas (1977). While it can be argued that this prior is as non-informative as possible, it is problematic in that it results in an improper posterior when the number of events is zero ($x = 0$). I feel that seeing zero events should tell the model something and, at least, not cause it to blow up. The second proposal is Jeffreys prior $p(\lambda) \propto 1 / \sqrt{\lambda}$ (recommended, for example, by the great BUGS Book), which has a slight positive bias compared to the former prior but handles counts of zero just fine. The difference between these two priors is very small and only matters when you have very few counts. Therefore the Bayesian First Aid alternative to poisson.test uses Jeffreys prior.
So if the model uses Jeffreys prior, what is the $\lambda \sim \text{Gamma}(0.5, 0.00001)$ doing in the model definition? Well, the computational framework underlying Bayesian First Aid is JAGS, and in JAGS you build your model using probability distributions. The Jeffreys prior is not a proper probability distribution, but it turns out that it can be reasonably well approximated by $\text{Gamma}(0.5, \epsilon)$ with $\epsilon \rightarrow 0$ (in the same way as $p(\lambda) \propto 1 / \lambda$ can be approximated by $\text{Gamma}(\epsilon, \epsilon)$ with $\epsilon \rightarrow 0$).
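A quick numerical sanity check of my own: over any realistic range of $\lambda$, the $\text{Gamma}(0.5, 0.00001)$ density is almost exactly proportional to $1 / \sqrt{\lambda}$:

```r
# If Gamma(0.5, 0.00001) approximates the Jeffreys prior p(lambda) ∝ 1/sqrt(lambda),
# the ratio of the two densities should be almost constant across lambda.
lambda <- c(0.1, 1, 10, 100)
ratio <- dgamma(lambda, shape = 0.5, rate = 0.00001) / (1 / sqrt(lambda))
max(ratio) / min(ratio)  # very close to 1, so the shapes match
```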
The bayes.poisson.test Function

The bayes.poisson.test function accepts the same arguments as the original poisson.test function: you can give it one or two counts of events. If you just ran poisson.test(x = 14, T = 1), prepending bayes. runs the Bayesian First Aid alternative and prints out a summary of the model result (like bayes.poisson.test(x = 14, T = 1)). By saving the output, for example like fit <- bayes.poisson.test(x = 14, T = 1), you can inspect it further using plot(fit), summary(fit) and diagnostics(fit).
To demonstrate the use of bayes.poisson.test I will use data from Boice and Monson (1977) on the number of women diagnosed with breast cancer in one group of 1,047 tuberculosis patients who had received on average 102 X-ray exams and one group of 717 tuberculosis patients whose treatment had not required a large number of X-ray exams. Here is the full data set:
Here WY stands for woman-years (as if woman-years would be different from man-years, or person-years…). While the data is from a relatively old article, we are going to replicate a more recent reanalysis of it from the article Testing the Ratio of Two Poisson Rates by Gu et al. (2008). They tested the alternative hypothesis that the rate of breast cancer per person-year would be 1.5 times greater in the X-rayed group than in the control group. They tested it like this:
no_cancer_cases <- c(41, 15)
# person-millennia rather than person-years to get the estimated rate
# on a more interpretable scale.
person_millennia <- c(28.011, 19.025)
poisson.test(no_cancer_cases, person_millennia, r = 1.5, alternative = "greater")
##
## Comparison of Poisson rates
##
## data: no_cancer_cases time base: person_millennia
## count1 = 41, expected count1 = 41.61, p-value = 0.291
## alternative hypothesis: true rate ratio is greater than 1.5
## 95 percent confidence interval:
## 1.098 Inf
## sample estimates:
## rate ratio
## 1.856
and concluded that “There is not enough evidence that the incidence rate of breast cancer in the X-ray fluoroscopy group is 1.5 times to the incidence rate of breast cancer in control group”. It is oh-so-easy to interpret this as saying that there is no evidence that the incidence rate is more than 1.5 times higher, but this is wrong, and the Bayesian First Aid alternative makes this clear:
library(BayesianFirstAid)
bayes.poisson.test(no_cancer_cases, person_millennia, r = 1.5, alternative = "greater")
## Warning: The argument 'alternative' is ignored by bayes.poisson.test
##
## Bayesian First Aid poisson test - two sample
##
## number of events: 41 and 15, time periods: 28.011 and 19.025
##
## Estimates [95% credible interval]
## Group 1 rate: 1.5 [1.1, 1.9]
## Group 2 rate: 0.80 [0.43, 1.2]
## Rate ratio (Group 1 rate / Group 2 rate):
## 1.8 [1.1, 3.4]
##
## The event rate of group 1 is more than 1.5 times that of group 2 by a probability
## of 0.754 and less than 1.5 times that of group 2 by a probability of 0.246 .
The warning here is nothing to worry about; there is no need to specify which alternative is tested and bayes.poisson.test just tells you that. So sure, the evidence is far from conclusive, but given the data and the model there is a 75% probability that the incidence rate is more than 1.5 times higher in the X-rayed group. That is, rather than just saying that there is not enough evidence, we have quantified how much evidence there is, and the evidence actually slightly favors the alternative hypothesis. This is also easily seen in the default plot of bayes.poisson.test:
plot( bayes.poisson.test(no_cancer_cases, person_millennia, r = 1.5) )
Back to the ice cream truck: say that you sold 14 ice creams in one day while your competitors Karl and Anna sold 22 and 7 ice creams, respectively. How would you estimate and compare the underlying rates of ice cream sales of these three trucks when bayes.poisson.test only accepts counts from two groups? When you want to go off the beaten path, the model.code function is your friend: it takes the result of a Bayesian First Aid method and returns R and JAGS code that replicates the analysis you just ran. In this case, start by running the model with two counts and then print out the model code:
fit <- bayes.poisson.test(x = c(14, 22), T = c(1, 1))
model.code(fit)
### Model code for the Bayesian First Aid two sample Poisson test ###
require(rjags)
# Setting up the data
x <- c(14, 22)
t <- c(1, 1)
# The model string written in the JAGS language
model_string <- "model {
  for(group_i in 1:2) {
    x[group_i] ~ dpois(lambda[group_i] * t[group_i])
    lambda[group_i] ~ dgamma(0.5, 0.00001)
    x_pred[group_i] ~ dpois(lambda[group_i] * t[group_i])
  }
  rate_diff <- lambda[1] - lambda[2]
  rate_ratio <- lambda[1] / lambda[2]
}"
# Running the model
model <- jags.model(textConnection(model_string), data = list(x = x, t = t), n.chains = 3)
samples <- coda.samples(model, c("lambda", "x_pred", "rate_diff", "rate_ratio"), n.iter=5000)
# Inspecting the posterior
plot(samples)
summary(samples)
Just copy-n-paste this code directly into an R script and make the following changes:
x <- c(14, 22)
→ x <- c(14, 22, 7)
t <- c(1, 1)
→ t <- c(1, 1, 1)
for(group_i in 1:2) {
→ for(group_i in 1:3) {
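For completeness, here is what the full script looks like with the three changes applied (identical to the generated code above, only extended to three groups):

```r
require(rjags)

# Setting up the data: now three trucks (yours, Karl's and Anna's)
x <- c(14, 22, 7)
t <- c(1, 1, 1)

# The model string written in the JAGS language
model_string <- "model {
  for(group_i in 1:3) {
    x[group_i] ~ dpois(lambda[group_i] * t[group_i])
    lambda[group_i] ~ dgamma(0.5, 0.00001)
    x_pred[group_i] ~ dpois(lambda[group_i] * t[group_i])
  }
  rate_diff <- lambda[1] - lambda[2]
  rate_ratio <- lambda[1] / lambda[2]
}"

# Running the model
model <- jags.model(textConnection(model_string), data = list(x = x, t = t), n.chains = 3)
samples <- coda.samples(model, c("lambda", "x_pred", "rate_diff", "rate_ratio"), n.iter = 5000)

# Inspecting the posterior
plot(samples)
summary(samples)
```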
And that’s it! Now we can run the model script and take a look at the estimated rates of ice cream sales for the three trucks.
plot(samples)
If you want to compare many groups you should perhaps consider using a hierarchical Poisson model. (Pro tip: John K. Kruschke’s Doing Bayesian Data Analysis has a great chapter on hierarchical Poisson models.)
Boice, J. D., & Monson, R. R. (1977). Breast cancer in women after repeated fluoroscopic examinations of the chest. Journal of the National Cancer Institute, 59(3), 823-832. link to article (unfortunately behind a paywall)
Gu, K., Ng, H. K. T., Tang, M. L., & Schucany, W. R. (2008). Testing the ratio of two poisson rates. Biometrical Journal, 50(2), 283-298. doi: 10.1002/bimj.200710403, pdf
Villegas, C. (1977). On the representation of ignorance. Journal of the American Statistical Association, 72(359), 651-654. doi: 10.2307/2286233
Lunn, D., Jackson, C., Best, N., Thomas, A., & Spiegelhalter, D. (2012). The BUGS book: a practical introduction to Bayesian analysis. CRC Press. pdf of chapter 5 on Prior distributions
Inspired by events that took place at UseR 2014 last month I decided to implement an app that estimates one's blood alcohol concentration (BAC). Today I present to you drinkR, implemented using R and Shiny, RStudio's framework for building web apps using R. So, say that I had a good dinner, drinking a couple of glasses of wine, followed by an evening at a divey karaoke bar, drinking a couple of red needles and a couple of beers. By entering my sex, height and weight and the times when I drank the drinks into the drinkR app I end up with this estimated BAC curve:
(Now I might be totally off about what drinks I had and when, but Romain Francois, Karl Broman, Sandy Griffith, Karthik Ram and Hilary Parker can probably fill in the details.) If you want to estimate your current BAC (or a friend's…) then head over to the drinkR app hosted at ShinyApps.io. If you want to know how the app estimates BAC, read on below. The code for drinkR is available on GitHub; any suggestion on how it can be improved is greatly appreciated.
drinkR estimates the BAC according to the formulas given in The estimation of blood alcohol concentration by Posey and Mozayani (2007). I was also helped by reading through Computer simulation analysis of blood alcohol, and the Widmark factor (explained below) is calculated according to The calculation of blood ethanol concentrations in males and females. Unfortunately all these articles are behind paywalls; that is how most publicly funded research works these days…
The BAC estimates you get out of drinkR will only be as good as the formulas in Posey and Mozayani (2007). I don't know how good they are, and I don't know how well they'll fit you. Estimating BAC is of course a prediction problem, and what you would really want is data, so that you could build a predictive model and get an idea of how well it predicts BAC. Unfortunately I haven't found any such data, so the Posey and Mozayani formulas are as good as I can do.
Estimating the BAC (according to Posey and Mozayani, 2007) after you have drunk, say, a beer requires “simulating” three processes:
Alcohol absorption. Just because you drank a beer doesn't mean it goes directly into your bloodstream; it first has to be absorbed by your digestive system, and this takes some time.
Alcohol distribution. Your BAC depends on how much of you the absorbed alcohol will be “diluted” by. This depends on, among other things, your weight, height and sex.
Alcohol elimination. How drunk you get (and how soon you will be sober again) depends on how fast your body eliminates the absorbed alcohol.
Alcohol absorption can be approximated by assuming it is first order, that is, assuming there is an alcohol halflife: the time it takes for half of a drink to be absorbed. When measured, this halflife tends to be between 6 and 18 minutes, depending on how much you have recently eaten. If you haven't eaten for a while your halflife might be closer to 6 minutes, while if you just had a big döner kebab it might be closer to 18 minutes.
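As a sketch (my own illustration, not necessarily drinkR's actual code), first-order absorption looks like this:

```r
# Fraction of a drink absorbed t minutes after drinking it, assuming
# first-order absorption with the given halflife (in minutes).
absorbed_fraction <- function(t_min, halflife_min = 12) {
  1 - 0.5^(t_min / halflife_min)
}
absorbed_fraction(c(6, 12, 24, 60), halflife_min = 12)
# after one halflife half the drink is absorbed; after an hour essentially all of it
```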
Alcohol distribution depends on the amount of water that the alcohol in your body will be diluted in. It can be estimated by the following equation:
$$ C = {A \over rW}$$
where $C$ is the alcohol concentration, $A$ is the mass of the alcohol, $W$ is your body weight and $r$ is the Widmark factor. This factor can be seen as an adjustment that is necessary because your whole body is not made of water, thus the alcohol is not “diluted by” your whole weight. There are many different formulas for estimating $r$ and drinkR uses the one given by Seidl et al. (2000) which estimates $r$ dependent on sex, height and weight:
$$r_{\text{female}} = 0.31 - 0.0064 \times \text{weight in kg} + 0.0045 \times \text{height in cm}$$
$$r_{\text{male}} = 0.32 - 0.0048 \times \text{weight in kg} + 0.0046 \times \text{height in cm}$$
These linear equations can give really strange values for $r$, for example if you weigh a lot. Therefore I also bound $r$ to the limits found by Seidl et al. (2000): 0.44 to 0.80 for women and 0.60 to 0.87 for men.
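In R, the two equations plus the bounds come out as something like this (the function name and interface are my own):

```r
# A direct transcription of the Seidl et al. (2000) equations above,
# with r clamped to the empirical limits for each sex.
widmark_r <- function(sex, weight_kg, height_cm) {
  if (sex == "female") {
    r <- 0.31 - 0.0064 * weight_kg + 0.0045 * height_cm
    limits <- c(0.44, 0.80)
  } else {
    r <- 0.32 - 0.0048 * weight_kg + 0.0046 * height_cm
    limits <- c(0.60, 0.87)
  }
  min(max(r, limits[1]), limits[2])
}
widmark_r("female", 60, 170)   # 0.691
widmark_r("female", 200, 150)  # the raw equation goes negative; clamped to 0.44
```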
Finally, alcohol elimination can be reasonably approximated by a constant elimination rate of the BAC. This rate can vary from around 0.009 % per hour to 0.035 % per hour with 0.018 % per hour being a reasonable average.
drinkR puts these three processes together and estimates your BAC over time given a number of drinks with time stamps. Assuming that you are also interested in how drunk you are right now, drinkR shows an estimate of your current BAC by fetching your computer's local time (see this stackoverflow question for how this is done). The estimate given by drinkR might be very misleading, so don't use it for any serious purposes! To get a sense of the uncertainty in the BAC estimate, play around with the parameters (especially the alcohol elimination rate) and see how much your BAC curve changes.
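To make the pipeline concrete, here is a stripped-down, single-drink toy version of my own (not drinkR's actual code; units are simplified to per mille, that is, grams of alcohol per kg of Widmark-adjusted body weight, so an elimination rate of 0.018% per hour becomes 0.18 per mille per hour):

```r
# Toy single-drink BAC curve: first-order absorption, Widmark dilution,
# constant elimination. Time in hours, result in per mille (g / kg).
bac_curve <- function(t_h, alcohol_g = 14, weight_kg = 70, r = 0.7,
                      halflife_h = 0.2, elim_per_h = 0.18) {
  absorbed_g <- alcohol_g * (1 - 0.5^(t_h / halflife_h))  # absorption
  conc <- absorbed_g / (r * weight_kg)                    # Widmark: C = A / (r W)
  pmax(conc - elim_per_h * t_h, 0)                        # elimination, floored at 0
}
bac_curve(c(0, 0.5, 1, 2, 3))  # rises, peaks, then falls back to zero
```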
If you want to see how different levels of BAC could affect you see the Progressive effects of alcohol chart over at Wikipedia and if you want to try out drinkR live I would recommend one of my favorite drinks: Absinthe mixed with Orange soda (say Fanta orange). It’s better than you think it is! :)
Posey, D., & Mozayani, A. (2007). The estimation of blood alcohol concentration. Forensic Science, Medicine, and Pathology, 3(1), 33-39. Link (Unfortunately behind paywall)
Rockerbie, D. W., & Rockerbie, R. A. (1995). Computer simulation analysis of blood alcohol. Journal of clinical forensic medicine, 2(3), 137-141. Link (Unfortunately behind paywall)
Seidl, S., Jensen, U., & Alt, A. (2000). The calculation of blood ethanol concentrations in males and females. International journal of legal medicine, 114(1-2), 71-77. Link (Unfortunately behind paywall)
This year’s UseR! conference was held at the University of California in Los Angeles. Despite the great weather and a nearby beach, most of the conference was spent in front of projector screens in 18 °C (64 °F) rooms, because there were so many interesting presentations and tutorials going on. I was lucky to present my R package Bayesian First Aid and the slides can be found here:
There was so much great stuff going on at UseR! and here follows a random sample:
John Chambers on Interfaces, Efficiency and Big Data. One of the creators of S (the predecessor of R) talked about the history of R and exciting new developments such as Rcpp11. He was also kind enough to sign my copy of S: An Interactive Environment for Data Analysis and Graphics, the original S book from 1984 :)
Yihui Xie, the Knitr Ninja. Yihui held the most amazing presentation about how to be a knitr ninja, using only an R script and sound effects. The “anime sword” sound effect used by Yihui is now available in the development version of beepr and can be played by running beep("sword").
Romain François held both a tutorial and a presentation on the Rcpp11 package, a most convenient way of connecting R and C++.
Dirk Eddelbuettel held a keynote on the topic of R, C++ and Rcpp, another convenient way of connecting R and C++. Do we see a theme here? He also talked about Docker, which I had never heard of before: it provides sort-of lightweight virtual machines that can be easily built and distributed (this is my interpretation, which might be a bit off).
RStudio was otherwise running the show with great presentations: Winston Chang on ggvis, Joe Cheng on Shiny, J.J. Allaire and Kevin Ushey on Packrat - A Dependency Management System for R, Jeff Allen on The Next Generation of R Markdown and, of course, Hadley Wickham on dplyr: a grammar of data manipulation.
Dieter De Mesmaeker presented a poster on Rdocumentation.org a really nice web-interface to the documentation of R.
All in all, a great conference! I’m already looking forward to next year’s UseR! conference, which will be held at Aalborg University, not too far from where I live (at least compared to LA).
Even though I said it would never happen, my silly package with the sole purpose of playing notification sounds is now on CRAN. Big thanks to the CRAN maintainers for their patience! For instant gratification, run the following in R to install beepr and make R produce a notification sound:
install.packages("beepr")
library(beepr)
beep()
This package was previously called pingr and included a ping() function. By request from the CRAN maintainers it has been renamed, in order not to be confused with the Unix tool ping. Consequently it is now called beepr and includes a beep() function instead. Other things that have changed since the original announcement: it is now possible to play a custom wav file by running beep("path/to/my_sound.wav"), and a facsimile of the Facebook notification sound has been added, which can be played by running beep("facebook") (thanks to Romain Francois for the suggestion!).
For fun I made a little animation of the actual “ping” sound that plays when you run beep(), using the audio package and the animation package. Sure, the function is now called beep, but I still like the original sound :)
Here is the code:
library(audio)
library(animation)
# You would have to change this path to point to a valid wav-file
w <- load.wave("inst/sounds/microwave_ping_mono.wav")
w <- w[1000:7000] # Trim both the start and the end of the ping sound
plot_frame <- function(sample_i) {
  old_par <- par(mar = rep(0.1, 4))
  plot(w[seq(1, sample_i)], type = "l", xaxt = "n", yaxt = "n",
       ylim = c(-0.3, 0.3), col = "darkblue")
  text(x = 3400, y = 0.2, labels = "beepr (former pingr)", cex = 1.5)
  text(x = 3900, y = -0.2, labels = "- now on CRAN!", cex = 1.5)
  par(old_par)
}
saveGIF(interval = 0.1, ani.width = 200, ani.height = 100, expr = {
  # The animation
  for(sample_i in seq(1, length(w), length.out = 40)) {
    plot_frame(sample_i)
  }
  # Just repeating the last image a couple of times...
  for(i in 1:15) {
    plot_frame(length(w))
  }
})