---
title: "Example 1: Initiation to Bayesian models"
output: 
  github_document:
    toc: true
    fig_width: 10.08
    fig_height: 6
  rmarkdown::html_vignette:
    toc: true
    fig_width: 10.08
    fig_height: 6
tags: [r, bayesian, posterior, test]
vignette: >
  \usepackage[utf8]{inputenc}
  %\VignetteIndexEntry{Example 1: Initiation to Bayesian models}
  %\VignetteEngine{knitr::rmarkdown}
editor_options: 
  chunk_output_type: console
bibliography: bibliography.bib
---

This vignette can be referred to by citing the package:

- Makowski, D., Ben-Shachar M. S. \& Lüdecke, D. (2019). *Understand and Describe Bayesian Models and Posterior Distributions using bayestestR*. Available from https://github.com/easystats/bayestestR. DOI: [10.5281/zenodo.2556486](https://zenodo.org/record/2556486).

---

```{r message=FALSE, warning=FALSE, include=FALSE}
library(knitr)
options(knitr.kable.NA = '')
knitr::opts_chunk$set(comment=">")
options(digits=2)

set.seed(333)
```

Now that you've read the [**Get started**](https://easystats.github.io/bayestestR/articles/bayestestR.html) section, let's dive into the **subtleties of Bayesian modelling using R**.

## Loading the packages

Once you've [installed](https://easystats.github.io/bayestestR/articles/bayestestR.html#bayestestr-installation) the necessary packages, we can load `rstanarm` (to fit the models), `bayestestR` (to compute useful indices) and `insight` (to access the parameters).

```{r message=FALSE, warning=FALSE}
library(rstanarm)
library(bayestestR)
library(insight)
```

## Simple linear model (*aka* a regression)

We will begin with a simple regression to test the relationship between **Petal.Length** (our predictor, or *independent*, variable) and **Sepal.Length** (our response, or *dependent*, variable) from the [`iris`](https://en.wikipedia.org/wiki/Iris_flower_data_set) dataset, included by default in R.


### Fitting the model


Let's start by fitting the **frequentist** version of the model, just to have some **reference**:

```{r message=FALSE, warning=FALSE, eval=TRUE}
model <- lm(Sepal.Length ~ Petal.Length, data=iris)
summary(model)
```


In this model, the linear relationship between **Petal.Length** and **Sepal.Length** is **positive and significant** (beta = 0.41, t(148) = 21.6, p < .001). This means that for every increase of 1 in **Petal.Length** (the predictor), **Sepal.Length** (the response) increases by **0.41**. This can be visualized with the `ggplot2` package, by mapping the predictor values to the `x` axis and the response values to the `y` axis:

```{r message=FALSE, warning=FALSE, eval=TRUE}
library(ggplot2)  # Load the package

# The ggplot function takes the data as argument, and then the variables 
# related to aesthetic features such as the x and y axes.
ggplot(iris, aes(x=Petal.Length, y=Sepal.Length)) +
  geom_point() +  # This adds the points
  geom_smooth(method="lm") # This adds a regression line
```


Let's fit a **Bayesian version** of the model:

```{r message=FALSE, warning=FALSE, eval=FALSE}
model <- stan_glm(Sepal.Length ~ Petal.Length, data=iris)
```
```{r echo=FALSE, message=FALSE, warning=FALSE, comment=NA, results='hide'}
library(rstanarm)
set.seed(333)

model <- stan_glm(Sepal.Length ~ Petal.Length, data=iris)
```

You can see the sampling algorithm being run. 

### Extracting the posterior

Once it is done, let us extract the parameters (*i.e.*, the coefficients) of the model.

```{r message=FALSE, warning=FALSE, eval=FALSE}
posteriors <- insight::get_parameters(model)

head(posteriors)  # Show the first 6 rows
```
```{r message=FALSE, warning=FALSE, echo=FALSE}
posteriors <- insight::get_parameters(model)

head(posteriors)  # Show the first 6 rows
```

As we can see, the parameters take the form of a dataframe with two columns, corresponding to the **intercept** and the effect of **Petal.Length**. These columns contain the **posterior distributions** of these two parameters, *i.e.*, a set of different plausible values for each.
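
We can quickly confirm this structure (a simple sanity check using base R functions):

```{r message=FALSE, warning=FALSE}
class(posteriors)  # A plain dataframe
colnames(posteriors)  # The names of the two parameters
```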

#### About posterior draws

Let's look at the size of the posteriors.

```{r message=FALSE, warning=FALSE}
nrow(posteriors)  # Size (number of rows)
```

> **Why is the size 4000, and not more or less?**

First of all, these observations (the rows) are usually referred to as **posterior draws**. The underlying idea is that the Bayesian sampling algorithm (such as **Markov Chain Monte Carlo - MCMC**) will *draw* from the hidden true posterior distribution. Thus, it is through these posterior draws that we can estimate the underlying true posterior distribution. **The more draws you have, the better your estimation of the posterior distribution**. But it also takes longer to compute.
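
For instance, as a quick illustration of this point (not evaluated here; the subsample size of 100 is an arbitrary choice), an estimate based on only a handful of draws is noisier than one based on all of them:

```{r message=FALSE, warning=FALSE, eval=FALSE}
# Median estimated from a random subsample of only 100 draws (noisier)
median(sample(posteriors$Petal.Length, 100))

# Median estimated from all the draws (more stable)
median(posteriors$Petal.Length)
```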

If we look at the documentation (`?sampling`) for the rstanarm `"sampling"` algorithm used by default in the model above, we can see several parameters that influence the number of posterior draws. By default, there are **4** `chains` (you can see them as distinct sampling runs), each creating **2000** `iter` (draws). However, only half of these iterations are kept, as the other half is used for the `warmup` (the convergence phase of the algorithm). The total is thus **`4 chains * (2000 iterations - 1000 warmup) = 4000`** posterior draws. We can change that, for instance:

```{r message=FALSE, warning=FALSE, eval=FALSE}
model <- stan_glm(Sepal.Length ~ Petal.Length, data=iris, chains = 2, iter = 1000, warmup = 250)
 
nrow(insight::get_parameters(model))  # Size (number of rows)
```
```{r echo=FALSE, message=FALSE, warning=FALSE, comment=NA}
junk <- capture.output(model <- stan_glm(Sepal.Length ~ Petal.Length, data=iris, chains = 2, iter = 1000, warmup = 250))
nrow(insight::get_parameters(model))  # Size (number of rows)
```

In this case, we indeed have **`2 chains * (1000 iterations - 250 warm-up) = 1500`** posterior draws. Let's keep our first model with the default setup.


#### Visualizing the posterior distribution


Now that we've understood where these values come from, let's look at them. We will start by visualizing the distribution of our parameter of interest, the effect of `Petal.Length`.


```{r message=FALSE, warning=FALSE}
ggplot(posteriors, aes(x = Petal.Length)) +
  geom_density(fill = "orange")
```

This distribution represents the probability (on the y axis) of different effect values (on the x axis). The central values are more probable than the extreme ones. As you can see, this distribution ranges from about **0.35 to 0.50**, with the bulk of it lying around **0.41**.
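
We can back up this verbal description with a few numbers (the quantile probabilities below are arbitrary choices):

```{r message=FALSE, warning=FALSE}
# A few quantiles of the posterior draws: the bulk of the
# distribution indeed lies around 0.41
quantile(posteriors$Petal.Length, probs = c(0.05, 0.5, 0.95))
```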

> **Congrats! You've just described your posterior distribution.**

And this is at the heart of Bayesian analysis. We don't need *p*-values, *t*-values or degrees of freedom: **everything is there, under our nose**, within this **posterior distribution**.

As you can see, our description is consistent with the values of the frequentist regression (which had a beta of **0.41**). This is reassuring! And indeed, in most cases, a **Bayesian analysis does not drastically change the results** or their interpretation. Rather, it makes the results more interpretable, intuitive and easy to understand and describe.

We can now go ahead and **precisely characterize** this posterior distribution.

### Describing the Posterior

Unfortunately, it is often not practical to report whole posterior distributions as graphs. We need a **concise way to summarize them**. We recommend describing the posterior distribution with **3 elements**: a **point-estimate**, a one-value summary (similar to the *beta* in frequentist regressions); a **credible interval**, representing the associated uncertainty; and some **indices of significance**, giving information about the importance of the effect.


#### Point-estimate

**What single value can best represent my posterior distribution?**

Centrality indices, such as the *mean*, the *median* or the *mode*, are usually used as point-estimates. What's the difference between them? Let's check the **mean** first:

```{r message=FALSE, warning=FALSE}
mean(posteriors$Petal.Length)
```

This is close to the frequentist beta. But, as we know, the mean is quite sensitive to outliers and extreme values. Maybe the **median** would be more robust?

```{r message=FALSE, warning=FALSE}
median(posteriors$Petal.Length)
```

Well, this is **close to the mean**. Maybe we could take the **mode**, the *peak* of the posterior distribution? In the Bayesian framework, this value is called the **Maximum A Posteriori (MAP)**. Let's see:

```{r message=FALSE, warning=FALSE}
map_estimate(posteriors$Petal.Length)
```

**They are all very close!** Let's visualize these values on the posterior distribution:

```{r message=FALSE, warning=FALSE}
ggplot(posteriors, aes(x = Petal.Length)) +
  geom_density(fill = "orange") +
  # The mean in blue
  geom_vline(xintercept=mean(posteriors$Petal.Length), color="blue", size=1) +
  # The median in red
  geom_vline(xintercept=median(posteriors$Petal.Length), color="red", size=1) +
  # The MAP in purple
  geom_vline(xintercept=map_estimate(posteriors$Petal.Length), color="purple", size=1)
```

Well, all these values give very similar results. Thus, **we will choose the median**, as this value has a direct meaning from a probabilistic perspective: **there is a 50\% chance that the true effect is higher and a 50\% chance that it is lower** (as it *cuts* the distribution into two equal parts).
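
We can verify this property directly (a one-line sanity check): the proportion of posterior draws above the median should be very close to 50\%.

```{r message=FALSE, warning=FALSE}
# Proportion of draws above the median (should be about 0.5)
mean(posteriors$Petal.Length > median(posteriors$Petal.Length))
```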


#### Uncertainty

Now that we have a point-estimate, we have to **describe the uncertainty**. We could compute the range:

```{r message=FALSE, warning=FALSE}
range(posteriors$Petal.Length)
```

But does it make sense to include all these extreme values? Probably not. Thus, we will compute a [**credible interval**](https://easystats.github.io/bayestestR/articles/credible_interval.html). Long story short, it's kind of similar to a frequentist **confidence interval**, but it is easier to interpret and easier to compute. *And it makes more sense*.

We will compute this **credible interval** based on the [Highest Density Interval (HDI)](https://easystats.github.io/bayestestR/articles/credible_interval.html#different-types-of-cis). It will give us the range containing the 89\% most probable effect values. **Note that we will use 89\% CIs instead of 95\%** CIs (as in the frequentist framework), as the 89\% level gives more [stable results](https://easystats.github.io/bayestestR/articles/credible_interval.html#why-is-the-default-89) [@kruschke2014doing] and reminds us about the arbitrariness of such conventions [@mcelreath2018statistical].

```{r message=FALSE, warning=FALSE}
hdi(posteriors$Petal.Length, ci=0.89)
```

Nice, so we can conclude that **the effect has an 89\% chance of falling within the `[0.38, 0.44]` range**. We have just computed the two most important pieces of information for describing our effects.

#### Effect significance

However, in many scientific fields, simply describing the effect is not enough. Readers also want to know whether this effect ***means*** something. Is it important? Is it different from 0? In other words, they want to assess the ***significance*** of the effect. How can we do this?

Well, in this particular case, the posterior is quite eloquent: **all possible effect values (*i.e.*, the whole posterior distribution) are positive and greater than 0.35, which is already a lot**.

But still, we want some objective decision criterion to say whether **the effect is significant or not**. We could proceed as in the frequentist framework and check whether the **Credible Interval** contains 0. Here it does not, which means that our **effect is significant**.
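
For the record, this check is easy to write explicitly. Below is a small sketch (not evaluated here); it assumes that the output of `hdi()` contains `CI_low` and `CI_high` columns, as in recent versions of `bayestestR`:

```{r message=FALSE, warning=FALSE, eval=FALSE}
ci <- hdi(posteriors$Petal.Length, ci = 0.89)

# TRUE if the whole interval lies on one side of 0,
# i.e., if the credible interval does not contain 0
ci$CI_low > 0 | ci$CI_high < 0
```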

But this index is not very fine-grained, is it? **Can we do better? Yes.**


## A linear model with a categorical predictor

Let's take an interest in the **weight of chickens**, depending on 2 **feed types**. We will start by selecting, from the `chickwts` dataset (also available in base R), the 2 feed types that are of interest to us (*because reasons*): **meat meals** and **sunflowers**.

### Data preparation and model fitting

```{r message=FALSE, warning=FALSE, eval=TRUE}
library(dplyr)

# We keep only rows for which feed is meatmeal or sunflower
data <- chickwts %>% 
  filter(feed %in% c("meatmeal", "sunflower"))
```


Let's run another Bayesian regression to predict the **weight** with the **2 types of feed**.

```{r message=FALSE, warning=FALSE, eval=FALSE}
model <- stan_glm(weight ~ feed, data=data)
```
```{r echo=FALSE, message=FALSE, warning=FALSE, comment=NA, results='hide'}
model <- stan_glm(weight ~ feed, data=data)
```

### Posterior description


```{r message=FALSE, warning=FALSE, eval=TRUE}
posteriors <- insight::get_parameters(model)

ggplot(posteriors, aes(x=feedsunflower)) +
  geom_density(fill = "red")
```

This represents the **posterior distribution of the difference between meat meals and sunflowers**. It seems that the difference is rather **positive**... Eating sunflowers makes you fatter (if you're a chicken). **By how much?** Let us compute the **median** and the **CI**:

```{r message=FALSE, warning=FALSE, eval=TRUE}
median(posteriors$feedsunflower)
hdi(posteriors$feedsunflower)
```

It makes you fatter by around `51` (the median). However, the uncertainty is quite high: **there is an 89\% chance that the difference between the two feed types lies between `7.77` and `87.66`**.

> **Is this effect different from 0?**

### ROPE Percentage

Testing whether this distribution is different from 0 doesn't make sense, as 0 is a single value (*and the probability that any continuous distribution is exactly equal to a single value is null, so the posterior is, strictly speaking, always "different" from 0*).
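
This is easy to see on the draws themselves (a trivial check):

```{r message=FALSE, warning=FALSE}
# Proportion of posterior draws exactly equal to 0: essentially
# always 0 for a continuous parameter
mean(posteriors$feedsunflower == 0)
```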

However, one way to assess **significance** could be to define an area around 0 that we will consider as *equivalent* to zero (*i.e.*, as no, or a negligible, effect). This is called the [**Region of Practical Equivalence (ROPE)**](https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html), and is one way of testing the significance of parameters.

**How can we define this region?**


> ***Driing driiiing***

-- ***The easystats team speaking. How can we help?***

-- ***I am Prof Howard. An expert in chicks... I mean chickens. Just calling to let you know that based on my expert knowledge, an effect between -20 and 20 is negligible. Bye.***


Well, that's convenient. Now we know that we can define the ROPE as the `[-20, 20]` range. We can now compute the **proportion of the 89\% most probable values (the 89\% CI) that fall within this range**, *i.e.*, that can be considered as null.

```{r message=FALSE, warning=FALSE, eval=TRUE}
rope(posteriors$feedsunflower, range = c(-20, 20), ci=0.89)
```


**7.75\% of the 89\% CI can be considered as null**. Is that a lot? Based on our [**guidelines**](https://easystats.github.io/bayestestR/articles/guidelines.html), yes, it is too much. **Based on this particular definition of ROPE**, we conclude that this effect is not significant (the probability of being negligible is too high).

Although, to be honest, I have **some doubts about this Prof Howard**. I don't really trust **his definition of ROPE**. Is there a more **objective** way of defining it?

**Yes.** One common practice, for instance, is to use a **tenth (`1/10 = 0.1`) of the standard deviation (SD)** of the response variable, which can be considered as a "negligible" effect size.

```{r message=FALSE, warning=FALSE, eval=TRUE}
rope_value <- 0.1 * sd(data$weight)
rope_range <- c(-rope_value, rope_value)
rope_range
```

Let's redefine our ROPE as the region within the `[-6.2, 6.2]` range. **Note that this can be directly obtained by the `rope_range` function :)**

```{r message=FALSE, warning=FALSE, eval=TRUE}
rope_value <- rope_range(model)
rope_value
```

Let's recompute the **percentage in ROPE**:

```{r message=FALSE, warning=FALSE, eval=TRUE}
rope(posteriors$feedsunflower, range = rope_range, ci=0.89)
```

With this more reasonable definition of ROPE, we observe that the 89\% CI of the posterior distribution of the effect does **not** overlap with the ROPE at all. Thus, we can conclude that **the effect is significant** (in the sense of being *important* enough to be noted).


### Probability of Direction (pd)

Maybe we are not interested in whether the effect is not-negligible. Maybe **we just want to know if this effect is positive or negative**. In this case, we can simply compute the proportion of the posterior that is positive, no matter the "size" of the effect. 

```{r message=FALSE, warning=FALSE, eval=FALSE}
n_positive <- posteriors %>% 
  filter(feedsunflower > 0) %>% # Keep only the positive values
  nrow() # Count them
n_positive / nrow(posteriors) * 100
```
```{r echo=FALSE, message=FALSE, warning=FALSE}
n_positive <- posteriors %>% 
  filter(feedsunflower > 0) %>% # Keep only the positive values
  nrow() # Count them
format(n_positive / nrow(posteriors) * 100, nsmall = 2)
```


We can conclude that **the effect is positive with a probability of 97.82\%**. We call this index the [**Probability of Direction (pd)**](https://easystats.github.io/bayestestR/articles/probability_of_direction.html). It can, in fact, be computed more easily with the following:

```{r message=FALSE, warning=FALSE, eval=TRUE}
p_direction(posteriors$feedsunflower)
```

Interestingly, it so happens that **this index is usually highly correlated with the frequentist *p*-value**. We can even roughly infer the corresponding *p*-value with a simple transformation:

```{r message=FALSE, warning=FALSE, eval=TRUE}
pd <- 97.82
onesided_p <- 1 - pd / 100  
twosided_p <- onesided_p * 2
twosided_p
```

If we ran our model in the frequentist framework, we should approximately observe an effect with a *p*-value of `r round(twosided_p, digits=3)`. **Is that true?**




### Comparison to frequentist


```{r message=FALSE, warning=FALSE, eval=TRUE}
lm(weight ~ feed, data=data) %>% 
  summary()
```

The frequentist model tells us that the difference is **positive and significant** (beta = 52, p = 0.04). 

**Although we arrived at a similar conclusion, the Bayesian framework allowed us to develop a more profound and intuitive understanding of our effect, and of the uncertainty of its estimation.**


## All with one function

And yet, I agree, it was a bit **tedious** to extract and compute all the indices. **But what if I told you that we can do all of this, and more, with only one function?**

> **Behold, `describe_posterior`!**

This function computes all of the aforementioned indices, and can be run directly on the model:

```{r message=FALSE, warning=FALSE, eval=TRUE}
describe_posterior(model)
```

**Tada!** There we have it! The **median**, the **CI**, the **pd** and the **ROPE percentage**!

Understanding and describing posterior distributions is just one aspect of Bayesian modelling... **Are you ready for more?** [**Click here**](https://easystats.github.io/bayestestR/articles/example2_GLM.html) to see the next example.