% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/rope.R
\name{rope}
\alias{rope}
\title{Region of Practical Equivalence (ROPE)}
rope(x, ...)

\method{rope}{default}(x, ...)

\method{rope}{numeric}(x, range = "default", ci = 0.89,
  verbose = TRUE, ...)

\method{rope}{data.frame}(x, range = "default", ci = 0.89,
  verbose = TRUE, ...)

\method{rope}{BFBayesFactor}(x, range = "default", ci = 0.89,
  verbose = TRUE, ...)

\method{rope}{stanreg}(x, range = "default", ci = 0.89,
  effects = c("fixed", "random", "all"), parameters = NULL,
  verbose = TRUE, ...)

\method{rope}{brmsfit}(x, range = "default", ci = 0.89,
  effects = c("fixed", "random", "all"), component = c("conditional",
  "zi", "zero_inflated", "all"), parameters = NULL, verbose = TRUE,
  ...)
\item{x}{Vector representing a posterior distribution. Can also be a \code{stanreg} or \code{brmsfit} model.}

\item{...}{Currently not used.}

\item{range}{ROPE's lower and upper bounds. Should be a vector of length two (e.g., \code{c(-0.1, 0.1)}) or \code{"default"}. If \code{"default"}, the range is set to \code{c(-0.1, 0.1)} if the input is a vector, and based on \code{\link[=rope_range]{rope_range()}} if a Bayesian model is provided.}

\item{ci}{The Credible Interval (CI) probability, corresponding to the proportion of HDI, to use for the percentage in ROPE.}

\item{verbose}{Toggle off warnings.}

\item{effects}{Should results for fixed effects, random effects or both be returned?
Only applies to mixed models. May be abbreviated.}

\item{parameters}{Regular expression pattern that describes the parameters that
should be returned. Meta-parameters (like \code{lp__} or \code{prior_}) are
filtered by default, so only parameters that typically appear in the
\code{summary()} are returned. Use \code{parameters} to select specific parameters
for the output.}

\item{component}{Should results for all parameters, parameters for the conditional model
or the zero-inflated part of the model be returned? May be abbreviated. Only
applies to \pkg{brms}-models.}
Compute the proportion (in percentage) of the HDI (defaults to the 89\% HDI) of a posterior distribution that lies within a region of practical equivalence.
Statistically, the probability of a posterior distribution being
  different from 0 does not make much sense (the probability of a single-value
  null hypothesis in a continuous distribution is 0). Therefore, the idea
  underlying the ROPE is to let the user define an area around the null value
  enclosing values that are \emph{equivalent to the null} value for practical
  purposes (\cite{Kruschke 2010, 2011, 2014}).
  \cr \cr
  Kruschke (2018) suggests that such a null region could be set, by default,
  to the \code{-0.1} to \code{0.1} range of a standardized parameter (a
  negligible effect size according to Cohen, 1988). This can be generalized:
  for instance, for linear models, the ROPE could be set as \code{0 +/- 0.1 * sd(y)}.
  This ROPE range can be automatically computed for models using the
  \link{rope_range} function.
  \cr \cr
  Kruschke (2010, 2011, 2014) suggests using the proportion of the 95\%
  (or 89\%, considered more stable) \link[=hdi]{HDI} that falls within the
  ROPE as an index for "null-hypothesis" testing (as understood under the
  Bayesian framework, see \code{\link[=equivalence_test]{equivalence_test()}}).
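  The index described above can be sketched directly on a vector of
  posterior draws. The snippet below is a minimal illustration that uses
  quantile-based interval bounds as a simple stand-in for a true HDI (an
  assumption made for brevity; \code{rope()} itself uses the HDI):

```r
# Sketch: share of an 89% interval that falls inside a ROPE of c(-0.1, 0.1).
# Quantile bounds are used here as a simple stand-in for the HDI.
set.seed(123)
posterior <- rnorm(1000, 0, 0.1)
ci <- 0.89
bounds <- quantile(posterior, probs = c((1 - ci) / 2, 1 - (1 - ci) / 2))
in_ci   <- posterior >= bounds[1] & posterior <= bounds[2]
in_rope <- posterior >= -0.1 & posterior <= 0.1
# Proportion of draws inside the interval that also lie inside the ROPE
rope_percentage <- sum(in_ci & in_rope) / sum(in_ci)
rope_percentage
```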
  \cr \cr
  \strong{Sensitivity to the parameter's scale}
  \cr \cr
  It is important to consider the unit (i.e., the scale) of the predictors
  when using an index based on the ROPE, as the correct interpretation of the
  ROPE as a region of practical equivalence to zero depends on the scale of
  the predictors. Indeed, the percentage in ROPE depends on the unit of its
  parameter: as the ROPE represents a fixed portion of the response's scale,
  its proximity to a coefficient depends on the scale of the coefficient
  itself.
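  To make this scale dependence concrete, here is a small sketch (the values
  are purely illustrative): the same effect expressed per metre versus per
  kilometre yields very different percentages inside a fixed
  \code{c(-0.1, 0.1)} ROPE:

```r
# The identical effect on two scales: a slope of ~0.05 per metre is ~50 per
# kilometre, so only the former sits inside a fixed ROPE of c(-0.1, 0.1).
set.seed(123)
posterior_m  <- rnorm(1000, 0.05, 0.02)  # slope per metre (illustrative)
posterior_km <- posterior_m * 1000       # same slope, per kilometre
mean(posterior_m  >= -0.1 & posterior_m  <= 0.1)  # almost all draws in ROPE
mean(posterior_km >= -0.1 & posterior_km <= 0.1)  # essentially none
```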
  \cr \cr
  \strong{Multicollinearity: Non-independent covariates}
  \cr \cr
  When parameters show strong correlations, i.e. when covariates are not
  independent, the joint parameter distributions may shift towards or
  away from the ROPE. Collinearity invalidates ROPE and hypothesis
  testing based on univariate marginals, as the probabilities are conditional
  on independence. Most problematic are parameters that only have partial
  overlap with the ROPE region. In case of collinearity, the (joint) distributions
  of these parameters may show an increased or decreased percentage in ROPE,
  which means that inferences based on \code{rope()} are inappropriate
  (\cite{Kruschke 2014, 340f}).
  \cr \cr
  \code{rope()} performs a simple check for pairwise correlations between
  parameters, but as there can be collinearity between more than two variables,
  a first step to check the assumptions of this hypothesis testing is to look
  at different pair plots. An even more sophisticated check is the projection
  predictive variable selection (\cite{Piironen and Vehtari 2017}).
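  The pairwise check mentioned above can be reproduced by hand on the matrix
  of posterior draws. In the sketch below the draws are simulated and the
  parameter names are hypothetical:

```r
# Simulated, strongly correlated draws for two coefficients: a high pairwise
# correlation signals that their marginal ROPE percentages should not be
# interpreted independently.
set.seed(123)
z <- rnorm(1000)
draws <- data.frame(
  b_wt   =  z + rnorm(1000, 0, 0.3),
  b_gear = -z + rnorm(1000, 0, 0.3)
)
cor(draws$b_wt, draws$b_gear)  # strongly negative
# pairs(draws)  # pair plots as a visual check for more than two parameters
```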

library(bayestestR)

rope(x = rnorm(1000, 0, 0.01), range = c(-0.1, 0.1))
rope(x = rnorm(1000, 0, 1), range = c(-0.1, 0.1))
rope(x = rnorm(1000, 1, 0.01), range = c(-0.1, 0.1))
rope(x = rnorm(1000, 1, 1), ci = c(.90, .95))

library(rstanarm)
model <- stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200)
rope(model, ci = c(.90, .95))

model <- brms::brm(mpg ~ wt + cyl, data = mtcars)
rope(model, ci = c(.90, .95))

bf <- BayesFactor::ttestBF(x = rnorm(100, 1, 1))
rope(bf, ci = c(.90, .95))

\item Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Routledge.
\item Kruschke, J. K. (2010). What to believe: Bayesian methods for data analysis. Trends in cognitive sciences, 14(7), 293-300. \doi{10.1016/j.tics.2010.05.001}.
\item Kruschke, J. K. (2011). Bayesian assessment of null values via parameter estimation and model comparison. Perspectives on Psychological Science, 6(3), 299-312. \doi{10.1177/1745691611406925}.
\item Kruschke, J. K. (2014). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan. Academic Press.
\item Kruschke, J. K. (2018). Rejecting or accepting parameter values in Bayesian estimation. Advances in Methods and Practices in Psychological Science, 1(2), 270-280. \doi{10.1177/2515245918771304}.
\item Piironen, J., & Vehtari, A. (2017). Comparison of Bayesian predictive methods for model selection. Statistics and Computing, 27(3), 711–735. \doi{10.1007/s11222-016-9649-y}.