---
title: "Reporting Guidelines"
output:
  github_document:
    toc: true
    fig_width: 10.08
    fig_height: 6
  rmarkdown::html_vignette:
    toc: true
    fig_width: 10.08
    fig_height: 6
tags: [r, bayesian, posterior, test]
vignette: >
  %\VignetteIndexEntry{Reporting Guidelines}
  \usepackage[utf8]{inputenc}
  %\VignetteEngine{knitr::rmarkdown}
editor_options:
  chunk_output_type: console
bibliography: bibliography.bib
csl: apa.csl
---
```{r message=FALSE, warning=FALSE, include=FALSE}
library(knitr)
options(knitr.kable.NA = '')
knitr::opts_chunk$set(comment=">")
options(digits=2)
```
These guidelines can be referred to by citing the package:
- Makowski, D., Ben-Shachar M. S. \& Lüdecke, D. (2019). *Understand and Describe Bayesian Models and Posterior Distributions using bayestestR*. Available from https://github.com/easystats/bayestestR. DOI: [10.5281/zenodo.2556486](https://zenodo.org/record/2556486).
---
# Reporting Guidelines
## How to describe and report the parameters of a model
A Bayesian analysis returns a posterior distribution for each parameter (or *effect*). To minimally describe these distributions, we recommend reporting a point-estimate of [centrality](https://en.wikipedia.org/wiki/Central_tendency) as well as information characterizing the estimation uncertainty (the [dispersion](https://en.wikipedia.org/wiki/Statistical_dispersion)). Additionally, one can also report indices of effect existence and/or significance.
Based on the previous [**comparison of point-estimates**](https://easystats.github.io/bayestestR/articles/indicesEstimationComparison.html) and [**indices of effect existence**](https://easystats.github.io/bayestestR/articles/indicesExistenceComparison.html), we can draw the following recommendations.
#### **Centrality**
We suggest reporting the [**median**](https://easystats.github.io/bayestestR/reference/point_estimate.html) as an index of centrality, as it is more robust than the [mean](https://easystats.github.io/bayestestR/reference/point_estimate.html) or the [MAP estimate](https://easystats.github.io/bayestestR/reference/map_estimate.html). However, in the case of a severely skewed posterior distribution, the MAP estimate can be a good alternative.
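These point-estimates can be computed directly with `bayestestR`. Below is a minimal sketch using a simulated posterior (an assumption for illustration; in practice, the draws would come from your fitted model):

```{r}
library(bayestestR)

# Simulated posterior draws (illustrative only)
set.seed(123)
posterior <- rnorm(4000, mean = 0.35, sd = 0.15)

point_estimate(posterior, centrality = "median")  # recommended default
point_estimate(posterior, centrality = "MAP")     # alternative for skewed posteriors
```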
#### **Uncertainty**
The [**89\% Credible Interval (CI)**](https://easystats.github.io/bayestestR/articles/credible_interval.html) appears to be a reasonable range to characterize the uncertainty related to the estimation, being more stable than higher thresholds (such as 90\% and 95\%). We also recommend computing the CI based on the [HDI](https://easystats.github.io/bayestestR/reference/hdi.html) rather than on [quantiles](https://easystats.github.io/bayestestR/reference/ci.html), favouring probable over central values.
Note that a CI based on quantiles (an equal-tailed interval) might be more appropriate in the case of transformations (for instance, when transforming log-odds to probabilities). Otherwise, intervals that originally do not cover the null might cover it after transformation (see [here](https://easystats.github.io/bayestestR/articles/credible_interval.html#different-types-of-cis)).
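Both types of interval are available in `bayestestR`. A minimal sketch, using a simulated (skewed) posterior for illustration:

```{r}
library(bayestestR)

# A skewed posterior, simulated for illustration
set.seed(123)
posterior <- rexp(4000, rate = 2)

hdi(posterior, ci = 0.89)  # highest density interval (recommended in general)
eti(posterior, ci = 0.89)  # equal-tailed interval (preferable after transformations)
```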
#### **Existence**
The Bayesian framework makes it possible to neatly delineate and quantify different aspects of hypothesis testing, such as effect *existence* and *significance*. The most straightforward index to describe effect existence is the [**Probability of Direction (pd)**](https://easystats.github.io/bayestestR/articles/probability_of_direction.html), representing the certainty associated with the most probable direction (positive or negative) of the effect. This index is easy to understand, simple to interpret, straightforward to compute, robust to model characteristics, and independent of the scale of the data.
Moreover, it is strongly correlated with the frequentist ***p*-value**, and can thus be used to draw parallels and give some reference to readers unfamiliar with Bayesian statistics. A **two-sided *p*-value** of `.1`, `.05`, `.01` and `.001` would correspond approximately to a ***pd*** of 95\%, 97.5\%, 99.5\% and 99.95\%, respectively. Thus, for convenience, we suggest the following reference values as interpretation helpers:

- *pd* **\<= 95\%** ~ *p* \> .1: uncertain
- *pd* **\> 95\%** ~ *p* \< .1: possibly existing
- *pd* **\> 97\%**: likely existing
- *pd* **\> 99\%**: probably existing
- *pd* **\> 99.9\%**: certainly existing
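The *pd* can be computed with `p_direction()`, and its approximate correspondence with the frequentist *p*-value obtained with `pd_to_p()`. A minimal sketch on a simulated posterior (illustrative only):

```{r}
library(bayestestR)

# Simulated posterior draws (illustrative only)
set.seed(123)
posterior <- rnorm(4000, mean = 0.4, sd = 0.2)

pd <- p_direction(posterior)
pd

# Approximate correspondence with the two-sided frequentist p-value
pd_to_p(as.numeric(pd), direction = "two-sided")
```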
#### **Significance**
The percentage in **ROPE** is an index of **significance** (in its primary meaning), informing us whether a parameter is related, or not, to a non-negligible change (in terms of magnitude) in the outcome. We suggest reporting the **percentage of the full posterior distribution** (the *full* ROPE) inside the ROPE, rather than the percentage of a given CI, as it appears to be more sensitive (especially to delineate highly significant effects). Rather than using it as a binary, all-or-nothing decision criterion, as suggested by the original [equivalence test](https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html#equivalence-test), we recommend using the percentage as a continuous index of significance. However, based on [simulation data](https://easystats.github.io/bayestestR/articles/indicesExistenceComparison.html), we suggest the following reference values as interpretation helpers:

- **\> 99\%** in ROPE: negligible (we can accept the null hypothesis)
- **\> 97.5\%** in ROPE: probably negligible
- **\<= 97.5\%** \& **\>= 2.5\%** in ROPE: undecided significance
- **\< 2.5\%** in ROPE: probably significant
- **\< 1\%** in ROPE: significant (we can reject the null hypothesis)
*Note that extra caution is required, as the interpretation of this index depends heavily on other parameters such as the sample size and the ROPE range (see [here](https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html#sensitivity-to-parameters-scale))*.
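The percentage of the full posterior in the ROPE can be obtained with `rope()` by setting `ci = 1`. A minimal sketch on a simulated posterior (illustrative only):

```{r}
library(bayestestR)

# Simulated posterior draws (illustrative only)
set.seed(123)
posterior <- rnorm(4000, mean = 0.4, sd = 0.2)

# Percentage of the full posterior (ci = 1) inside a ROPE of [-0.1, 0.1]
rope(posterior, range = c(-0.1, 0.1), ci = 1)
```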
#### **Template Sentence**
Based on these suggestions, a template sentence for minimal reporting of a parameter based on its posterior distribution could be:
- "the effect of *X* has a probability of ***pd*** of being *negative* (Median = ***median***, 89\% CI [***HDI<sub>low</sub>***, ***HDI<sub>high</sub>***]) and can be considered as *significant* (***ROPE***\% in ROPE)."
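All the quantities in this template sentence can be obtained in one call with `describe_posterior()`. A minimal sketch on a simulated posterior (illustrative only):

```{r}
library(bayestestR)

# Simulated posterior draws (illustrative only)
set.seed(123)
posterior <- rnorm(4000, mean = 0.4, sd = 0.2)

describe_posterior(
  posterior,
  centrality = "median",       # point-estimate
  ci = 0.89, ci_method = "hdi", # uncertainty
  test = c("p_direction", "rope"), # existence and significance
  rope_ci = 1                  # full posterior in ROPE
)
```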
## How to compare different models
Although it can also be used to assess effect existence and significance, the **Bayes factor (BF)** is a versatile index that can be used to directly compare different models (or data generation processes). The [Bayes factor](https://easystats.github.io/bayestestR/articles/bayes_factors.html) is a ratio, informing us by how much more (or less) likely the observed data are under two compared models - usually a model with an effect vs. a model *without* the effect. Depending on the specification of the null model (whether it is a point-estimate (e.g., **0**) or an interval), the Bayes factor can be used in the context of both effect existence and significance.
In general, a Bayes factor greater than 1 gives evidence in favour of one of the models, and a Bayes factor smaller than 1 gives evidence in favour of the other. Several rules of thumb exist to help with interpretation (see [here](https://easystats.github.io/report/articles/interpret_metrics.html#bayes-factor-bf)), with **\> 3** being a common threshold for categorizing non-anecdotal evidence.
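For model parameters, the Bayes factor against a point null can be computed with `bayesfactor_parameters()` via the Savage-Dickey density ratio, which requires draws from the prior as well as the posterior. A minimal sketch using simulated draws (an assumption; they would normally be extracted from a fitted model):

```{r}
library(bayestestR)

# Simulated prior and posterior draws (illustrative only)
set.seed(123)
prior <- rnorm(10000, mean = 0, sd = 1)
posterior <- rnorm(10000, mean = 1, sd = 0.5)

# Savage-Dickey density ratio against a point null at 0
bayesfactor_parameters(posterior = posterior, prior = prior, null = 0)
```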
#### **Template Sentence**
When reporting Bayes factors (BF), one can use the following sentence:
- "There is *moderate evidence* in favour of an *absence* of effect of *x* (BF = *BF*)."
*Note: If you have any advice or opinions on these guidelines, we encourage you to let us know by opening a [discussion thread](https://github.com/easystats/bayestestR/issues) or making a pull request.*