https://github.com/cran/multivariance
Revision 2ffcc6849222a41d7684515fd823120c29cfa0e2 authored by Björn Böttcher on 06 February 2019, 12:30:06 UTC, committed by cran-robot on 06 February 2019, 12:30:06 UTC
1 parent e0320c9
version 2.0.0
multivariance-functions.R
# Package description ####
#' multivariance: Measuring Multivariate Dependence Using Distance Multivariance
# Multivariance: detecting and measuring multivariate dependence
#'
#' The multivariance package provides basic functions to calculate distance multivariance and related quantities. To test independence use \code{\link{multivariance.test}}; it provides (via its arguments) an interface to all the tests based on distance (m-/total-)multivariance. The package also offers several other functions related to distance multivariance, e.g. the detection and visualization of dependence structures via \code{\link{dependence.structure}}. See below for details on the full content of the package.
#'
# It also includes a function to perform a dependence analysis.
#'
#' Distance multivariance is a multivariate dependence measure, which can be used to detect dependencies between an arbitrary number of random vectors each of which can have a distinct dimension. The necessary functions are implemented in this package, and examples are given. For the theoretic background we refer to the papers [1,2,3,4]. Paper [3] includes a summary of the first two. It is the recommended starting point for users with an applied interest. Paper [4] is concerned with new (faster) p-value estimates for the independence tests.
#'
#' The current code is tested and its speed is improved in comparison to the former releases. Certainly there is still room for improvement and development. Questions, comments and remarks are welcome: \email{bjoern.boettcher@@tu-dresden.de}
#'
# Users on Windows machines might get a considerable speed up using MRO instead of the standard R release - since this is particularly faster for large matrix operations.
#'
#' For information on the latest changes and/or updates to the package use \code{news(package="multivariance")}.
#'
#' To cite this package use the standard citation for R packages, i.e., the output of \code{citation("multivariance")}.
#'
#'
#'
#' @section Multivariance:
#'
#' \code{\link{multivariance}} computes the distance multivariance
#'
#' \code{\link{total.multivariance}} computes the total distance multivariance
#'
#' \code{\link{m.multivariance}} computes the m-multivariance (introduced in [3])
#'
#' It might be convenient to compute these simultaneously using \code{\link{multivariances.all}}.
#'
#' @section Functions to use and interpret multivariance:
#'
#' \code{\link{rejection.level}} computes a (conservative) rejection level for a given significance level. This can be used for a conservative interpretation of distance multivariance. The counterpart is \code{\link{multivariance.pvalue}}, which computes a conservative p-value for a given distance multivariance. Both methods are distribution-free.
#'
#' \code{\link{resample.rejection.level}} and \code{\link{resample.pvalue}} are the distribution dependent versions of the above. They are approximately sharp, but computationally more expensive. Any resampling is done by \code{\link{resample.multivariance}}.
#'
#' Using the methods developed in [4], approximate p-value estimates are provided by \code{\link{pearson.pvalue}}. This method is much faster than the resampling method, but still experimental. Use the resampling method for reliable results.
#'
#' \code{\link{multivariance.test}} and \code{\link{independence.test}} provide the corresponding tests of independence. The former produces output as is common for tests in R; the latter is a simple significance test which, for a given significance level, returns just \code{TRUE} or \code{FALSE}.
#'
#' \code{\link{cdm}} and \code{\link{cdms}} compute the centered distance matrix and matrices, respectively. These can be used to speed up repeated computations of distance multivariance.
#'
#' In [4] various methods to estimate the moments of the test statistic under H0 were developed; these are (implicitly) implemented in this package only for the moments used in \code{\link{pearson.pvalue}}. Further and explicit functions can be added upon request. Please feel free to contact the author.
#'
#' For bigger projects it might be convenient to estimate the computation time of multivariance via \code{\link{multivariance.timing}}.
#'
#' @section Dependence structures:
#'
#' \code{\link{dependence.structure}} performs the dependence structure detection algorithm as described in [3].
#'
#' \code{\link{find.cluster}} is the basic building block of \code{\link{dependence.structure}}. It is recommended to use \code{\link{dependence.structure}}.
#'
#' @section Examples:
#'
#' \code{\link{coins}} and \code{\link{tetrahedron}} generate samples of pairwise independent random variables, with dependence of higher order.
#'
#' \code{\link{dep_struct_iterated_13_100}}, \code{\link{dep_struct_ring_15_100}}, \code{\link{dep_struct_several_26_100}} and \code{\link{dep_struct_star_9_100}} are example data sets for the dependence structure detection. These might also serve as benchmark examples.
#'
#' @references
#' [1] B. Böttcher, M. Keller-Ressel, R.L. Schilling, Detecting independence of random vectors: generalized distance covariance and Gaussian covariance. Modern Stochastics: Theory and Applications 2018, Vol. 5, No. 3, 353-383. \url{https://www.vmsta.org/journal/VMSTA/article/127/info}
#'
#' [2] B. Böttcher, M. Keller-Ressel, R.L. Schilling, Distance multivariance: New dependence measures for random vectors. Accepted for publication in Annals of Statistics. \url{https://arxiv.org/abs/1711.07775}
#'
#' [3] B. Böttcher, Dependence and Dependence Structures: Estimation and Visualization Using Distance Multivariance. Preprint. \url{https://arxiv.org/abs/1712.06532}
#'
#' [4] G. Berschneider, B. Böttcher, On complex Gaussian random fields, Gaussian quadratic forms and sample distance multivariance. Preprint. \url{https://arxiv.org/abs/1808.07280}
#'
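#' @examples
#' # a quick-start sketch: an independence test for the columns of a sample matrix
#' multivariance.test(coins(100))
#'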
#' @docType package
#' @name multivariance-package
NULL
# speed up things with Rcpp - fastdist is a fast euclidean distance matrix computation
#' @useDynLib multivariance
NULL
#' @importFrom Rcpp sourceCpp
NULL
################# Multivariance ###########
#' rejection level for the test statistic
#'
#' Under independence the probability for the normalized and Nscaled multivariance to be above this level is less than \code{alpha}. The same holds for the normalized, Nscaled and Escaled total multivariance and m-multivariance.
#'
#' @param alpha level of significance
#' @details
#' This is based on a distribution-free approach. The value might be very conservative. This is the counterpart to \code{\link{multivariance.pvalue}}. For a less conservative approach see \code{\link{resample.rejection.level}}.
#'
#' The estimate is only valid for \code{alpha} smaller than 0.215.
#'
#' @examples
#' rejection.level(0.05) #the rejection level, for comparison with the following values
#' total.multivariance(matrix(rnorm(100*3),ncol = 3)) #independent sample
#' total.multivariance(coins(100)) #dependent sample which is 2-independent
#'
#' # and the p values are (to compare with alpha)
#' multivariance.pvalue(total.multivariance(matrix(rnorm(100*3),ncol = 3))) #independent sample
#' multivariance.pvalue(total.multivariance(coins(100))) #dependent sample which is 2-independent
#'
#' \dontrun{
#' # visualization of the rejection level
#' curve(rejection.level(x),xlim = c(0.001,0.215),xlab = "alpha")
#' }
#'
#' @export
rejection.level = function(alpha) {
if (any(alpha > 0.215)) warning("alpha too large. Only valid for alpha smaller than 0.215!")
return((stats::qnorm(1-alpha/2)^2))
# identical with qchisq(1-alpha,1)
}
#' transform multivariance to p-value
#'
#' Computes a conservative p-value for the hypothesis of independence for a given multivariance/total multivariance.
#'
#' @param x value of a normalized and Nscaled \code{\link{multivariance}}
#'
#' @details
#' This is based on a distribution-free approach. The p-value is conservative, i.e. the actual p-value might be much smaller. This is the counterpart to \code{\link{rejection.level}}. For a less conservative approach see \code{\link{resample.pvalue}}.
#'
#' p-values larger than 0.215 might be incorrect, since the distribution-free estimate on which the computation is based only holds up to 0.215.
#'
#' @references
#' For the theoretic background see the references given on the main help page of this package: \link{multivariance-package}.
#'
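#' @examples
#' # a minimal sketch: conservative p-value for an independent sample
#' # (under independence the p-value is usually not small)
#' multivariance.pvalue(multivariance(matrix(rnorm(100*3),ncol = 3)))
#'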
#' @export
multivariance.pvalue = function(x) {
if (any(x < 0,na.rm = TRUE)) print(paste("Negative multivariance = ",x[which(x<0)]))
2-2*stats::pnorm(sqrt(x))
}
#' centered distance matrix
#'
#' computes the centered distance matrix
#'
#' @param x matrix, each row of the matrix is treated as one sample
#' @param normalize logical, indicates if the matrix should be normalized
#' @param psi if it is \code{NULL}, the euclidean distance will be used. In the case of \code{isotropic = TRUE}: a real valued negative definite function of one variable (accepting vectors as arguments; returning a vector of the same length). In the case of \code{isotropic = FALSE}: a real valued function of two variables (or vectors) to compute the distance of two samples based on a continuous negative definite function.
#' @param isotropic logical, indicates whether \code{psi} should be applied to the Euclidean distance matrix, i.e., whether an isotropic distance should be used.
#' @param p numeric, if it is a value between 1 and 2 then the Minkowski distance with parameter p is used.
#' @param external.dm.fun function, which computes the distance matrix.
#'
#' @details
#' The centered distance matrices are required for the computation of (total / m-) multivariance.
#'
#' If \code{normalize = TRUE} then the value of multivariance is comparable and meaningful. It can be compared to the \code{\link{rejection.level}} or its p-value \code{\link{multivariance.pvalue}} can be computed.
#'
#' More details: If \code{normalize = TRUE} the matrix is scaled such that the multivariance based on it, times the sample size, has in the limit - in the case of independence - the distribution of an L^2 norm of a Gaussian process with known expectation.
#'
#' By default the Euclidean distance is used. The parameters \code{psi}, \code{p}, \code{isotropic} and \code{external.dm.fun} can be used to select a different distance. In particular, \code{external.dm.fun} can be used to provide any function which calculates a distance matrix for the rows of a given matrix.
#'
#' @references
#' For the theoretic background see the references given on the main help page of this package: \link{multivariance-package}.
#'
#' @examples
#' x = coins(100)
#' cdm(x) # fast euclidean distances
#' cdm(x,psi = function(x,y) sqrt(sum((x-y)^2))) # this is identical to the previous (but slower)
#'
#' # the function cdm does the following three lines in a faster way
#' N = nrow(x)
#' C = diag(N) - matrix(1/N,nrow = N,ncol = N)
#' A = - C %*% as.matrix(stats::dist(x,method="euclidean")) %*% C
#' all(abs(A- cdm(x,normalize = FALSE)) < 10^(-12))
#'
#' @export
cdm = function(x, normalize = TRUE, psi = NULL, p = NULL, isotropic = FALSE, external.dm.fun = NULL) {
if (is.null(psi) & is.null(p) & is.null(external.dm.fun)) {
#dm = dist.to.matrix(stats::dist(x,method="euclidean"))
#DEVELOPING NOTE: here as.matrix was slow, dist.to.matrix is faster. Instead one could just use the vector....
# even faster for the euclidean case is fastdist defined via Rcpp
#dm = fastdist(as.matrix(x))
return(fastEuclideanCdm(as.matrix(x),normalize))
} else {
if (!is.null(p)) {
if ((p<1) || (p>2)) warning("p is not in [1,2]")
dm = dist.to.matrix(stats::dist(x,method="minkowski", p = p))
} else { # case: psi is given
if (!is.null(external.dm.fun)) {
dm = external.dm.fun(as.matrix(x))
} else {
if (isotropic) {
#dm = psi(dist.to.matrix(stats::dist(x,method="euclidean")))
dm = psi(fastdist(as.matrix(x)))
} else {
x = as.matrix(x)
n = nrow(x)
d = ncol(x)
dm = matrix(apply(cbind(x[rep(1:n,n),],x[rep(1:n,each = n),]), #create all combinations
1, # apply to each row
function(y) psi(y[1:d], y[(d+1):(2*d)])),nrow = n)
# DEVELOPING NOTE: could double the speed if only the upper triangular matrix is computed, using the idea of dist.to.matrix
}
}
}
}
return(doubleCenterSymMat(dm,normalize))
#alternative (even slower) implementations:
# colm = colMeans(dm)
# m = mean(colm) # equals mean(dm)
#
# if (m == 0) warning("It seems that one variable is constant. Constants are always independent.\n")
# if (normalize && (m != 0)) {
# return((-dm + outer(colm,colm, FUN ="+") - m)/ m)
# } else {
# return(-dm + outer(colm,colm, FUN ="+") - m)
# }
#alternative (even slower) implementations:
#cdm1 = sweep(dm,1,colm)
#cdm2 = -sweep(cdm1,2,rowMeans(dm)) - m
#cdm2 = -(x - rep(colm, ncol(dm)) - rep(rowMeans(dm),each = ncol(dm))) - m # for quadratic matrix
}
#' computes the centered distance matrices
#' @param x matrix, each row is a sample
#' @param vec vector which indicates which columns are treated as one sample
#' @param membership deprecated. Now use \code{vec}.
#' @param ... these are passed to \code{\link{cdm}}
#'
#' @return It returns a list of centered distance matrices.
#'
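#' @examples
#' # a sketch of typical usage: one centered distance matrix per variable,
#' # here columns 1 and 2 are treated as one bivariate sample
#' x = matrix(rnorm(100*3),ncol = 3)
#' str(cdms(x,vec = c(1,1,2)))
#'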
#' @export
cdms = function(x,vec = 1:ncol(x),membership = NULL,...) {
if (!is.null(membership)) {
vec = membership
warning("Use 'vec' instead of 'membership' as argument to 'cdms'. 'membership' is depreciated.")
}
if (anyNA(vec)) vec = 1:ncol(x)
n = max(vec)
# N = nrow(x)
# array.cdm = array(,dim = c(N,N,n))
# for (i in 1:n) array.cdm[,,i] = cdm(x[,(vec == i)],...)
# return(array.cdm)
return(lapply(1:n, function(i) cdm(x[,(vec == i)],...)))
}
#' double centering of a matrix
#'
#' @keywords internal
double.center = function(dm,normalize = FALSE) {
if (is.list(dm)) { # double center a list of matrices
return(lapply(dm, function(l) double.center(l,normalize) ))
# n = lenght(tm)
# array.cdm = array(,dim=dim(dm))
# for (i in 1:n) array.cdm[,,i] = double.center(dm[,,i],normalize)
# return(array.cdm)
} else { # double center a matrix
colm = colMeans(dm)
m = mean(colm) # equals mean(dm)
if (m == 0) warning("It seems that one variable is constant. Constants are always independent.\n")
if (normalize && (m != 0)) {
return((-dm + outer(colm,colm, FUN ="+") - m)/ m)
} else {
return(-dm + outer(colm,colm, FUN ="+") - m)
}
}
}
#' distance multivariance
#'
#' Computes the distance multivariance, either for given data or a given list of centered distance matrices.
#'
#' @param x either a data matrix or a list of centered distance matrices
#' @param vec if x is a matrix, then this indicates which columns are treated together as one sample; if x is a list, these are the indexes for which the multivariance is calculated. The default is all columns and all indexes, respectively.
#' @param Nscale if \code{TRUE} the multivariance is scaled up by the sample size (and thus it is exactly as required for the test of independence)
#' @param squared if \code{FALSE} it returns the actual multivariance, otherwise the squared multivariance (less computation)
#' @param ... these are passed to \code{\link{cdms}} (which is only invoked if \code{x} is a matrix)
#' @param correlation deprecated, please use the function \code{\link{multicorrelation}} instead.
#'
#' @details
#'
#' If \code{x} is a matrix and \code{vec} is not given, then each column is treated as a separate sample. Otherwise \code{vec} has to have as many elements as \code{x} has columns, with values starting from 1 up to the number of 'variables', e.g. if \code{x} is an \code{N} by 5 matrix and \code{vec = c(1,2,1,3,1)} then the multivariance of the 1-dimensional variables represented by columns 2 and 4 and the 3-dimensional variable represented by columns 1, 3 and 5 is computed.
#'
#' By default it computes the normalized Nscaled squared multivariance; for a multivariance without normalization the argument \code{normalize = FALSE} has to be passed to \code{cdms}.
#'
#'
#' \code{correlation = TRUE} yields values between 0 and 1. These can be interpreted similarly to classical correlations, see also \code{\link{multicorrelation}}.
#'
#' As a rough guide to interpret the value of distance multivariance note:
#' \itemize{
#' \item If the random variables are not (n-1)-independent, large values indicate dependence, but small values are meaningless. Thus in this case use \code{\link{total.multivariance}}.
#' \item If the random variables are (n-1)-independent and \code{Nscale = TRUE}, values close to 1 and smaller indicate independence, larger values indicate dependence. In fact, in the case of independence the test statistic is a Gaussian quadratic form with expectation 1 and samples of it can be generated by \code{\link{resample.multivariance}}.
#' \item If the random variables are (n-1)-independent and \code{Nscale = FALSE}, small values (close to 0) indicate independence, larger values indicate dependence.
#' }
#'
#' Finally note that, due to numerical (in)precision, the value of multivariance might become negative. In these cases it is set to 0. A warning is issued if the value is negative and further than the usual tolerance (as used by \code{\link[base]{all.equal}}) away from 0.
#'
#' @references
#' For the theoretic background see the references given on the main help page of this package: \link{multivariance-package}.
#'
#' @examples
#' multivariance(matrix(rnorm(100*3),ncol = 3)) #independent sample
#' multivariance(coins(100)) #dependent sample which is 2-independent
#'
#' x = matrix(rnorm(100*2),ncol = 2)
#' x = cbind(x,x[,2])
#' multivariance(x) #dependent sample which is not 2-independent (thus small values are meaningless!)
#' multivariance(x[,1:2]) #these are independent
#' multivariance(x[,2:3]) #these are dependent
#'
#' multivariance(x[,2:3],correlation = TRUE)
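#'
#' # illustrative grouping: treat columns 1 and 2 as one bivariate variable
#' multivariance(x,vec = c(1,1,2))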
#'
#' @export
multivariance = function(x,vec = NA,Nscale = TRUE,correlation = FALSE, squared = TRUE, ...) {
if (correlation) warning("The option 'correlation' is deprecated. Please use the function 'multicorrelation' instead.")
if (!is.list(x)) { # if the input is a matrix, the distance matrices are computed
if (is.array(x) & (length(dim(x))>2)) stop("Please provide a list instead of an array. Changed since version 2.0.0.")
if (anyNA(vec)) vec = 1:ncol(x)
x = cdms(x,vec,...)
vec = 1:max(vec)
}
if (anyNA(vec)) vec = 1:length(x)
if (anyNA(x)) stop("provided x contains NA")
#if (length(vec) > dim(x)[2]) warning("More data columns than rows.")
#Aprod = x[,,vec[1]]
#for (i in 2:length(vec)) Aprod = Aprod * x[,,vec[i]]
#result = mean(Aprod)
result = mean(Reduce("*", x[vec]))
if (Nscale && !correlation) result = result *nrow(x[[1]])
if (correlation) {
n = length(vec)
norm.n = function(x) mean(abs(x)^n)^(1/n)
#Anorm = mean(abs(x[,,vec[1]]^n))^(1/n)
#for (i in 2:length(vec)) Anorm = Anorm * mean(abs(x[,,vec[i]]^n))^(1/n)
#result = result / Anorm
result = result / Reduce("*",lapply(x[vec], norm.n))
}
# DEVELOPING NOTE: The following is much slower .... It might be faster, if we store the matrices only as vectors with the upper triangular as elements.
#ut = upper.tri(Aprod)
#diat = diag(x[vec[1],,])
#test = x[vec[1],,][ut]
#for (i in 2:length(vec)) {
# diat = diat * diag(x[vec[i],,])
# test = test * x[vec[i],,][ut]
#}
#erg = sum(diat,2*test)/ncol(x)^2
if (result < 0) {
if (!isTRUE(all.equal(result,0))) warning(paste("Value of multivariance was negative (",result,"). This is usually due to numerical (in)precision. It was set to 0."))
result = 0
}
if (squared | (Nscale && !correlation)) { return(result)
} else { return(sqrt(result))}
# return(mean(apply(x[vec,,],c(2,3),prod))) #this vector version is also much much slower!!!
# DEVELOPING NOTE: mean is slow due to error correction. sum()/length() is faster, but not as precise.
}
#' total distance multivariance
#'
#' computes the total distance multivariance
#'
#' @inheritParams multivariance
#' @param lambda a scaling parameter >0. Each k-tuple multivariance gets weight \code{lambda^(n-k)}.
#' @param Escale if \code{TRUE} then it is scaled by the number of multivariances which are theoretically summed up (in the case of independence this yields, for normalized distance matrices, an estimator with expectation 1)
#'
#' @details
#' Total distance multivariance is per definition the scaled sum of certain distance multivariances, and it characterizes dependence.
#'
#' As a rough guide to interpret the value of total distance multivariance note:
#' \itemize{
#' \item Large values indicate dependence.
#' \item For \code{Nscale = TRUE} values close to 1 and smaller indicate independence, larger values indicate dependence. In fact, in the case of independence the test statistic is a Gaussian quadratic form with expectation 1 and samples of it can be generated by \code{\link{resample.multivariance}}.
#' \item For \code{Nscale = FALSE} small values (close to 0) indicate independence, larger values indicate dependence.
#' }
#'
#' Finally note that, due to numerical (in)precision, the value of total multivariance might become negative. In these cases it is set to 0. A warning is issued if the value is negative and further than the usual tolerance (as used by \code{\link[base]{all.equal}}) away from 0.
#'
#' @references
#' For the theoretic background see the references given on the main help page of this package: \link{multivariance-package}.
#'
#' @examples
#' x = matrix(rnorm(100*3),ncol = 3)
#' total.multivariance(x) #for an independent sample
#' # the value coincides with
#' (multivariance(x[,c(1,2)],Nscale = TRUE) + multivariance(x[,c(1,3)],Nscale = TRUE)+
#' multivariance(x[,c(2,3)],Nscale = TRUE) + multivariance(x,Nscale = TRUE))/4
#'
#' total.multivariance(coins(100)) #value for a dependent sample which is 2-independent
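#'
#' # an illustrative (non-default) choice of the weight parameter:
#' total.multivariance(x,lambda = 2)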
#'
#' @export
total.multivariance = function(x,vec = NA,lambda = 1, Nscale = TRUE,Escale = TRUE,squared = TRUE,...) {
if (!is.list(x)) { # if the input is a matrix, the distance matrices are computed
if (is.array(x) & (length(dim(x))>2)) stop("Please provide a list instead of an array. Changed since version 2.0.0.")
if (anyNA(vec)) vec = 1:ncol(x)
x = cdms(x,vec,...)
vec = 1:max(vec)
}
if (anyNA(vec)) vec = 1:length(x)
if (anyNA(x)) stop("provided x contains NA")
n = length(vec)
result = mean(Reduce("*", lapply(x[vec],function(y) lambda + y))) - lambda^n
#Aprod = lambda + x[,,vec[1]]
#for (i in 2:length(vec)) Aprod = Aprod * (lambda + x[,,vec[i]])
#result = mean(Aprod)-lambda^(length(vec))
if (result < 0) {
if (!isTRUE(all.equal(result,0))) warning(paste("Value of total multivariance was negative (",result,"). This is usually due to numerical (in)precision. It was set to 0."))
result = 0
}
if (Nscale) result = result * nrow(x[[1]])
if (Escale) result = result/((1+lambda)^n - n*lambda^(n-1) - lambda^n)
if (squared | Nscale) { return(result)
} else { return(sqrt(result))}
}
#' m distance multivariance
#'
#' Computes m distance multivariance.
#'
#' @details
#'
#' m-distance multivariance is per definition the scaled sum of certain distance multivariances, and it characterizes m-dependence.
#'
#' As a rough guide to interpret the value of m distance multivariance note:
#' \itemize{
#' \item Large values indicate dependence.
#' \item If the random variables are (m-1)-independent and \code{Nscale = TRUE}, values close to 1 and smaller indicate m-independence, larger values indicate dependence. In fact, in the case of independence the test statistic is a Gaussian quadratic form with expectation 1 and samples of it can be generated by \code{\link{resample.multivariance}}.
#' \item If the random variables are (m-1)-independent and \code{Nscale = FALSE}, small values (close to 0) indicate m-independence, larger values indicate dependence.
#' }
#'
#' Since random variables are always 1-independent, the case \code{m=2} characterizes pairwise independence.
#'
#' Finally note that, due to numerical (in)precision, the value of m-multivariance might become negative. In these cases it is set to 0. A warning is issued if the value is negative and further than the usual tolerance (as used by \code{\link[base]{all.equal}}) away from 0.
#'
#' @references
#' For the theoretic background see the reference [3] given on the main help page of this package: \link{multivariance-package}.
#'
#' @inheritParams multivariance
#' @param m \code{2} or \code{3}; the m-multivariance of this order will be computed.
#' @param Escale if \code{TRUE} then it is scaled by the number of multivariances which are theoretically summed up (in the case of independence this yields, for normalized distance matrices, an estimator with expectation 1)
#'
#'
#'
#' @examples
#' x = matrix(rnorm(3*30),ncol = 3)
#'
#' # the following values are identical
#' m.multivariance(x,m =2)
#' 1/choose(3,2)*(multivariance(x[,c(1,2)]) +
#' multivariance(x[,c(1,3)]) +
#' multivariance(x[,c(2,3)]))
#'
#' # the following values are identical
#' m.multivariance(x,m=3)
#' multivariance(x)
#'
#' # the following values are identical
#' 1/4*(3*(m.multivariance(x,m=2)) + m.multivariance(x,m=3))
#' total.multivariance(x, Nscale = TRUE)
#' 1/4*(multivariance(x[,c(1,2)], Nscale = TRUE) +
#' multivariance(x[,c(1,3)], Nscale = TRUE) +
#' multivariance(x[,c(2,3)], Nscale = TRUE) + multivariance(x, Nscale = TRUE))
#'
#' @export
m.multivariance = function(x, vec= NA, m = 2, Nscale = TRUE, Escale = TRUE, squared = TRUE,...) {
if (!is.list(x)) { # if the input is a matrix, the distance matrices are computed
if (is.array(x) & (length(dim(x))>2)) stop("Please provide a list instead of an array. Changed since version 2.0.0.")
if (anyNA(vec)) vec = 1:ncol(x)
x = cdms(x,vec,...)
vec = 1:max(vec)
}
if (anyNA(vec)) vec = 1:length(x)
if (anyNA(x)) stop("provided x contains NA")
n = length(vec)
xvec = x[vec]
vec = 1:n
#k = 2
if (m == 2) {
# Asum = x[,,vec[1]]
# A2sum = Asum^2 #x[vec[1],,]^2
# for (i in 2:length(vec)) {
# tempFactor = x[,,vec[i]]
# Asum = Asum + tempFactor
# A2sum = A2sum + tempFactor^2
# }
Asum = Reduce("+", xvec)
A2sum = Reduce("+", lapply(xvec,function(y) y^2))
result = mean(Asum^2 - A2sum)/2
}
#k more general
#k = 3
if (m == 3) {
if (n < 3) {
result = NA
} else {
# Asum = x[,,vec[1]]
# A2sum = Asum^2 #x[vec[1],,]^2
# A3sum = A2sum * Asum #x[vec[1],,]^3
# for (i in 2:length(vec)) {
# tempFactor = x[,,vec[i]]
# Asum = Asum + tempFactor
# summand = tempFactor^2
# A2sum = A2sum + summand #x[vec[i],,]^2
# A3sum = A3sum + summand *tempFactor #x[vec[i],,]^3
# }
Asum = Reduce("+", xvec)
A2sum = Reduce("+", lapply(xvec,function(y) y^2))
A3sum = Reduce("+", lapply(xvec,function(y) y^2*y))
result = mean(Asum^2*Asum- 3* Asum *A2sum + 2 * A3sum)/ 6
# result = mean(Asum^3 - choose(3,2)* Asum *A2sum + 2 * A3sum)/ factorial(3)
}
}
if (m > 3) {
warning("m > 3, not implemented.")
result = NA
}
if (is.na(result)) {
return(result)
} else {
if (result < 0) {
if (!isTRUE(all.equal(result,0))) warning(paste("Value of m-multivariance was negative (",result,"). This is usually due to numerical (in)precision. It was set to 0."))
result = 0
}
if (Nscale) result = result *nrow(Asum)
if (Escale) result = result/(choose(n,m))
if (squared | Nscale) { return(result)
} else { return(sqrt(result))}
}
}
#' simultaneous computation of multivariance and total-/2-/3-multivariance
#'
#' Computes simultaneously multivariance, total multivariance, 2-multivariance and 3-multivariance.
#'
#' @inheritParams multivariance
#'
#' @seealso \code{\link{multivariance}}, \code{\link{total.multivariance}}, \code{\link{m.multivariance}}
#'
#' @details
#' The computation is faster than the separate computations.
#'
#' @return Returns a vector with multivariance, total.multivariance, 2-multivariance and 3-multivariance
#'
#' @examples
#' x = coins(100,k = 3)
#' multivariances.all(x)
#' # yields the same as:
#' multivariance(x)
#' total.multivariance(x)
#' m.multivariance(x,m=2)
#' m.multivariance(x,m=3)
#'
#'
#' @export
#'
multivariances.all = function(x, vec= NA, Nscale = TRUE, squared = TRUE,...) {
if (!is.list(x)) { # if the input is a matrix, the distance matrices are computed
if (is.array(x) & (length(dim(x))>2)) stop("Please provide a list instead of an array. Changed since version 2.0.0.")
if (anyNA(vec)) vec = 1:ncol(x)
x = cdms(x,vec,...)
vec = 1:max(vec)
}
if (anyNA(vec)) vec = 1:length(x) # x is a list of centered distance matrices at this point
if (anyNA(x)) stop("provided x contains NA")
n = length(vec)
xvec = x[vec]
vec = 1:n
# Asum = x[,,vec[1]]
# A2sum = Asum^2 # = x[vec[1],,]^2
# A3sum = A2sum * Asum # = x[vec[1],,]^3
# Aprod = Asum
# Aplusprod = 1 + Asum # = x[vec[1],,]
# for (i in 2:n) {
# tempFactor = x[,,vec[i]]
# Asum = Asum + tempFactor
# Aprod = Aprod * tempFactor
# Aplusprod = Aplusprod * (1 + tempFactor)
# summand = tempFactor^2
# A2sum = A2sum + summand #x[vec[i],,]^2
# A3sum = A3sum + summand *tempFactor #x[vec[i],,]^3
# }
#
Aprod = Reduce("*", xvec)
Aplusprod = (Reduce("*", lapply(xvec,function(y) 1 + y)))
Asum = Reduce("+", xvec)
A2sum = Reduce("+", lapply(xvec,function(y) y^2))
A3sum = Reduce("+", lapply(xvec,function(y) y^2*y))
m = mean(Aprod)
mt = (mean(Aplusprod)-1)/(2^n - n - 1)
m2 = mean(Asum^2 - A2sum)/(n *(n-1))
if (n > 2) {
m3 = mean(Asum^2*Asum - 3* Asum *A2sum + 2 * A3sum)/ (n *(n-1)*(n-2))
} else {
m3 = NA
}
result = c(multi = m,total = mt, m.multi.2 = m2,m.multi.3 = m3)
neg.res = result<0
if (any(neg.res,na.rm = TRUE)) {
if (!isTRUE(all.equal(result[neg.res],0,check.names = FALSE))) {
warning(paste0("Value of ",c("","total-","2-","3-")[neg.res],"multivariance was negative (",result[neg.res],"). This is usually due to numerical (in)precision. It was set to 0."))
}
result[neg.res] = 0
}
if (Nscale) result = result *nrow(Asum)
if (squared | Nscale) { return(result)
} else { return(sqrt(result))}
}
#' distance multicorrelation
#'
#' Computes various types of sample distance multicorrelation as defined in [3].
#'
#' @inheritParams multivariance
#' @param type one of \code{"m.multi.2","multi","m.multi.3","(lower bound) total"}
#' @param multicorrelation.type one of \code{"normalized","unnormalized"}
#'
#' @details The unnormalized and normalized versions coincide if an even number of variables is considered (in particular always for 2-multivariance). They usually differ if an odd number of variables is considered (always for 3-multivariance). If all variables are related by similarity transforms the \code{"unnormalized"} multicorrelations are 1.
#'
#' For \code{"m.multi.2"} the empirical 2-multicorrelation is computed. Which is a dependence measure for pairwise dependence, i.e. the 3-multicorrelation is 0 if and only if all variables are pairwise independent.
#'
#' For total multicorrelation there is currently only a feasible empirical estimator for a lower bound (\code{"(lower bound) total"}). But it still characterizes dependence in the sense that the population version of this bound is 0 if and only if the variables are independent.
#'
#' A value of 0 of multicorrelation \code{"multi"} or 3-multicorrelation \code{"m.multi.3"} does not characterize independence.
#' @return
#'
#' Value of the multicorrelation.
#'
#' @references
#' For the theoretic background see the references given on the main help page of this package: \link{multivariance-package}.
#' @examples
#' y = rnorm(100)
#' x = cbind(y,y*2,(y-2)/3,y+1,y*5)
#'
#' # compute all types of correlations for x:
#' for (ty in c("(lower bound) total","m.multi.2","m.multi.3","multi"))
#' for (mty in c("normalized","unnormalized"))
#' print(paste(format(multicorrelation(x,type=ty,multicorrelation.type = mty)
#' ,digits=3,nsmall = 3,width = 7),mty,ty,"correlation"))
#' @export
multicorrelation = function(x, vec = 1:ncol(x), type = "m.multi.2", multicorrelation.type = "unnormalized", squared = TRUE, ...) {
if (!is.list(x)) { # if the input is a matrix, the distance matrices are computed
if (is.array(x) & (length(dim(x))>2)) stop("Please provide a list instead of an array. Changed since version 2.0.0.")
if (anyNA(vec)) vec = 1:ncol(x)
x = cdms(x,vec,normalize = FALSE,...)
vec = 1:max(vec)
}
if (anyNA(vec)) vec = 1:length(x)
if (anyNA(x)) stop("provided x contains NA")
n = max(vec)
if (type == "(lower bound) total") type = "total"
if (type == "m.multi.2") n = 2
if (type == "m.multi.3") n = 3
#list.cdms = cdms(x,normalize = FALSE)
switch(multicorrelation.type,
normalized = {
list.norms = lapply(x, function(x) (mean(abs(x)^n))^(1/n))
},
unnormalized = {
list.norms = lapply(x, function(x) (mean(x^n))^(1/n))
},
{stop(paste("unkown multicorrelation.type:",type))}
)
list.cdms = lapply(1:length(x), function(i) x[[i]]/list.norms[[i]])
switch(type,
multi = {
return(c( multicorrelation = multivariance(list.cdms,Nscale = FALSE,squared = squared)))
},
total = {
return( c(total.multicorrelation.lower.bound = total.multivariance(list.cdms,Nscale = FALSE,squared = squared)))
},
m.multi.2 = {
return(c(multicorrelation.2 = m.multivariance(list.cdms,Nscale = FALSE,squared = squared)))
},
m.multi.3 = {
return(c(multicorrelation.3 = m.multivariance(list.cdms,m = 3,Nscale = FALSE,squared = squared)))
},
{stop(paste("unkown type:",type))}
)
}
#' test for independence
#'
#' This computes a test of independence for the columns of a sample matrix (required for the resampling test) or for given centered distance matrices (only possible for the distribution-free test).
#'
#' For a test with p-value output (as standard for tests in R) see \code{\link{multivariance.test}}.
#'
#' @inheritParams multivariance
#' @param alpha significance level
#' @param type one of \code{"pearson_approx","distribution_free","resample"}
#' @param verbose logical, if TRUE meaningful text output is generated.
#'
#' @return Returns \code{TRUE} if the hypothesis of independence is NOT rejected, otherwise \code{FALSE}.
#' @details The \code{"pearson_approx"} and \code{"resample"} are approximately sharp. The latter is based on a resampling approach and thus much slower. The \code{"distribution_free"} test might be very conservative.
#' The centered distance matrices can be prepared by \code{\link{cdms}}. But note that for the test based on Pearson's approximation and for the resampling test, the data matrix has to be given.
#'
#' @references
#' For the theoretic background see the references given on the main help page of this package: \link{multivariance-package}.
#'
#' @examples
#' independence.test(coins(100)) #dependent sample which is 2-independent
#' independence.test(coins(100),type = "resample") #dependent sample which is 2-independent
#'
#' independence.test(coins(100)[,2:3]) # independent sample
#' independence.test(coins(100)[,2:3],type = "resample") # independent sample
#'
#' independence.test(coins(10),type = "resample") #dependent sample which is 2-independent
#' independence.test(coins(10)[,2:3],type = "resample") #dependent sample which is 2-independent
#'
#' @export
independence.test = function(x,vec = 1:ncol(x),alpha = 0.05,type = "distribution_free",verbose = TRUE,...) {
tm = total.multivariance(x,vec,...)
switch(type,
distribution_free = {
R = rejection.level(alpha)
outtext = paste("\nDistribution free test (classical): The value of the test statistic is",tm,"and values above",R,"are rejected.\n")
result = tm>R
},
resample = {
p.value = resample.pvalue(tm,x=x,vec=vec,times = 300,type="total",...)
outtext = paste("\nResampling test: The value of the test statistic is",tm,"and (by resampling) its p-value is",p.value,"\n")
result = p.value<alpha
},
pearson_approx = {
p.value = pearson.pvalue(x=x,vec=vec,type="total",...)
outtext = paste("\nTest using Pearson's approximation: The value of the test statistic is",tm,"and its p-value is",p.value,"\n")
result = p.value<alpha
}
)
if (verbose) {
cat(outtext)
if (result) {cat("The hypothesis of independence is rejected.\n")
}
else {cat("The hypothesis of independence is NOT rejected.\n")
}
}
invisible(result)
}
#' independence tests based on (total-/2-/3-) multivariance
#'
#' This performs the (specified by \code{type} and \code{p.value.type}) independence test for the columns of a sample matrix.
#'
#' @inheritParams multivariance
#' @param type one of \code{"independence","pairwise independence","multi","total","m.multi.2","m.multi.3"}
#' @param p.value.type one of \code{"pearson_approx","distribution_free","resample"}
#' @param verbose logical, if TRUE meaningful text output is generated.
#'
#' @return A list with class "\code{htest}" containing the following components:
#' \describe{
#' \item{\code{statistic}}{the value of the test statistic,}
#' \item{\code{p.value}}{the p-value of the test statistic,}
#' \item{\code{method}}{a character string indicating the type of test performed,}
#' \item{\code{data.name}}{a character string giving the name(s) of the data.}
#' }
#' @details The types \code{"independence"} and \code{"total"} are identical: an independence test is performed.
#'
#' Also the types \code{"pairwise independence"} and \code{"m.multi.2"} are identical: a test of pairwise independence is performed.
#'
#' The type \code{"m.multi.3"}, performs a test for 3-independence, assuming pairwise independence. The type \code{"multi"} performs a test for n-independence, assuming (n-1)-independence.
#'
#' There are several ways (determined by \code{p.value.type}) to estimate the p-value: The \code{"pearson_approx"} and \code{"resample"} methods are approximately sharp. The latter is based on a resampling approach and thus much slower. The \code{"distribution_free"} test might be very conservative; its p-value estimates are only valid for p-values lower than 0.215, values above should be interpreted as "values larger than 0.215".
#'
#' All tests are performed using the standard euclidean distance. Other distances can be supplied via the \code{...}, see \code{\link{cdm}} for the accepted arguments.
#'
#' @references
#' For the theoretic background see the references given on the main help page of this package: \link{multivariance-package}.
#'
#' @examples
#' # an independence test
#' multivariance.test(dep_struct_several_26_100) # conservative
#' multivariance.test(dep_struct_several_26_100,p.value.type = "resample") #sharp but slow
#' multivariance.test(dep_struct_several_26_100,p.value.type = "pearson_approx") #
#'
#' # as an example, all tests for one data set:
#' coins100 = coins(100)
#' for (ty in c("total","m.multi.2","m.multi.3","multi"))
#' for (pvt in c("distribution_free","resample","pearson_approx"))
#' print(multivariance.test(coins100,type=ty,p.value.type = pvt))
#'
#'
#' @export
multivariance.test = function(x,vec = 1:ncol(x),type = "total",p.value.type = "distribution_free",verbose = TRUE,...) {
data.name = deparse(substitute(x))
if (type == "independence") type = "total"
if (type == "pairwise independence") type = "m.multi.2"
switch(type,
multi = {
method = "multivariance: test of n-independence (assuming (n-1)-independence)"
fun = function() multivariance(x,vec = vec,...)
statistic = c(multivariance = NA)
},
total = {
method = "total multivariance: test of independence"
fun = function() total.multivariance(x,vec = vec,...)
statistic = c(total.multivariance = NA)
},
m.multi.2 = {
method = "2-multivariance: test of pairwise independence"
fun = function() m.multivariance(x,vec = vec,...)
statistic = c(m.multivariance = NA)
},
m.multi.3 = {
method = "3-multivariance: test of 3-independence (assuming pairwise independence)"
fun = function() m.multivariance(x,vec = vec,m = 3,...)
statistic = c(m.multivariance = NA)
},
{stop(paste("unkown type:",type))}
)
#method = "total multivariance: test of independence"
#m = total.multivariance(x,vec,...)
#statistic = c(total.multivariance = m)
#statistic = NULL
p.value = NULL
#method = NULL
parameter = NULL
alternative = NULL
switch(p.value.type,
distribution_free = {
method = paste0(method, "; distribution-free p-value (conservative for p-values in (0,0.215))")
statistic[1] = fun()
p.value = multivariance.pvalue(statistic)
},
resample = {
method = paste0(method, "; resampling test")
res = resample.multivariance(x=x,vec=vec,type=type,...)
statistic[1] = res$original
p.value = res$p.value
},
pearson_approx = {
method = paste0(method, "; p-value via Pearson's approximation (approx. sharp)")
statistic[1] = fun()
p.value = pearson.pvalue(x=x,vec=vec,type=type,...)
},
{stop(paste("unkown p.value.type:",p.value.type))}
)
result = list(statistic = statistic, p.value = p.value, method = method, data.name = data.name) #, parameter = parameter, alternative = alternative
class(result) = "htest"
return(result)
}
################# Example data ##########
#' dependence example: tetrahedron sampling
#'
#' This function creates samples of a tetrahedral die with faces colored r, g, b and rgb. Each sample indicates whether the colors r, g and b are contained on the bottom face of the thrown die.
#'
#' @param N number of samples
#' @return It returns the samples of the events r, g and b as rows of a \code{N} by 3 matrix (the first column corresponds to r, the second to g,...). TRUE indicates that this color is on the bottom face of the die. The columns are dependent but 2-independent.
#' @examples
#' tetrahedron(10)
#'
#' @references
#' For the theoretic background see the reference [3] given on the main help page of this package: \link{multivariance-package}.
#'
#' @export
tetrahedron = function(N = 1000) {
# rolls the tetrahedron with sides r,g,b,rgb
side = sample.int(4,N,replace=TRUE)
x = (side == 1)|(side == 4)
y = (side == 2)|(side == 4)
z = (side == 3)|(side == 4)
return(unname(cbind(x,y,z)))
}
#' dependence example: k-independent coin sampling
#'
#' This function creates samples which are dependent but k-independent.
#' @param N number of samples
#' @param k each k-tuple will be independent
#' @return It returns the samples as rows of an \code{N} by \code{k+1} matrix. The columns are dependent but k-independent.
#'
#' @references
#' For the theoretic background see the reference [3] given on the main help page of this package: \link{multivariance-package}.
#'
#' @examples
#' coins(200,4)
#'
#' @export
coins = function(N = 1000, k = 2) {
c = matrix(sample.int(2,k*N,replace = TRUE)-1,ncol = k)
d = (rowSums(as.matrix(c[,1:(k-1)])) %% 2) == c[,k]
return(unname(cbind(c,d)))
}
# resample ######
#' resample the columns of a matrix
#' @param x matrix
#' @param vec vector, indicates which columns belong together
#' @param replace logical, sampling with or without replacement
#'
#' @return Returns a matrix with the same dimensions as \code{x}. The columns are resampled from the original columns. The resampling is done with replacement (\code{replace = TRUE}) or without (\code{replace = FALSE}). Columns which belong together (indicated by \code{vec}) are resampled identically, i.e., within each row the values of these columns are kept together.
#'
#' @examples
#' sample.cols(matrix(1:15,nrow = 5),vec = c(1,1,2))
#'
#' @export
sample.cols = function(x,vec = 1:ncol(x),replace = TRUE) {
if (anyNA(vec)) vec = 1:ncol(x)
N = nrow(x)
xnew = x
for (i in 1:max(vec)) {
xnew[,vec == i] = x[sample.int(N,replace = replace),vec == i]
}
return(xnew)
}
#' resamples centered distance matrices
#' @param list.cdm a list of centered distance matrices
#' @param replace logical, sampling with or without replacement
#'
#' @return Returns a list of centered distance matrices; each matrix corresponds to the resampled columns of the corresponding sample, using resampling with replacement (bootstrap) or without replacement (permutations). The first matrix is kept unchanged.
#'
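#' @examples
#' # a sketch of typical usage: permute the centered distance matrices of a sample
#' list.cdm = cdms(matrix(rnorm(100*2),ncol = 2))
#' str(sample.cdms(list.cdm))
#'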
#' @export
sample.cdms = function(list.cdm, replace = FALSE) {
N = nrow(list.cdm[[1]])
# n = dim(array.cdm)[3]
#
# for (i in 2:n) { # for i == 1 the resampling is not necessary.
# neworder = sample.int(N)
# array.cdm[,,i] = array.cdm[neworder,neworder,i]
# }
# return(array.cdm)
#
#return(lapply(list.cdm, function(y) {neworder = sample.int(N, replace = replace) y[neworder,neworder]})) # resamples also the first
return(
lapply(1:length(list.cdm),
function(i) {
if (i > 1) {
neworder = sample.int(N, replace = replace)
return(list.cdm[[i]][neworder,neworder])
} else { # no resampling for the first
return(list.cdm[[i]])
}
})
)
}
#' resampling (total /m-) multivariance
#'
#' The distribution of the test statistic under the hypothesis of independence is required for the independence tests. This function generates approximate samples of this distribution either by sampling without replacement (permutations) or by sampling with replacement (bootstrap).
#'
#' @details
#' The resampling is done by sampling from the original data either without replacement (\code{"permutation"}) or with replacement (\code{"bootstrap"}).
#'
#' For convenience, the actual (total/m-)multivariance and its p-value are also computed.
#'
#' @param x matrix, the rows should be iid samples
#' @param vec vector, which indicates which columns of \code{x} are treated together as one sample
#' @param times integer, number of samples to generate
#' @param type one of \code{"multi","total","m.multi.2","m.multi.3","all"}
#' @param resample.type one of \code{"permutation", "bootstrap"}. The samples are generated without replacement (permutations) or with replacement (bootstrap).
#' @param ... is passed to \code{\link{cdms}, \link{multivariance}, \link{total.multivariance}, \link{m.multivariance}}, respectively.
#'
#' @return A list with elements
#' \describe{
#' \item{\code{resampled}}{the (total/m-)multivariances of the resampled data,}
#' \item{\code{original}}{the (total/m-)multivariance of the original data,}
#' \item{\code{p.value}}{the p-value of the original data, computed using the resampled data}
#' }
#'
#' @references
#' For the theoretic background see the reference [3] given on the main help page of this package: \link{multivariance-package}.
#'
#' @examples
#' re.m = resample.multivariance(matrix(rnorm(30*2),nrow = 30),
#' type= "multi",times = 300)$resampled
#' curve(ecdf(re.m)(x), xlim = c(0,4),main = "empirical distribution of the test statistic under H_0")
#' @export
resample.multivariance = function(x,vec = 1:ncol(x),times = 300,type = "multi",resample.type = "permutation",...) {
replace = FALSE
if (resample.type == "bootstrap") {
replace = TRUE
resample.type = "permutation"
}
switch(resample.type,
#distinct.permutation = {resample = function() matrix(x[derangements.without.fixpoint(N,n, distinctcols,vec)],ncol = n)},
permutation = {
dots <- list(...) # we filter the argument lambda which might be used for total.multivariance, we keep everything else to preserve other error messages.
#argnames <- names(formals(cdm))
#list.cdm = do.call('cdms', c(list(x = x, vec = vec), dots[names(dots) %in% argnames]))
list.cdm = do.call('cdms', c(list(x = x, vec = vec), dots[!(names(dots) %in% "lambda")]))
# list.cdm = cdms(x,vec,...) #!! TODO: note there is the argument lambda for total.multivariance this should be excluded here!
vec = 1:max(vec) # all distance matrices shall be used.
resample = function() sample.cdms(list.cdm, replace = replace)}, # resampling of the centered distance matrices - this is faster than permutation.orig
# bootstrap = {
# list.cdm = cdms(x,vec,...)
# vec = 1:max(vec)
# resample = function() sample.cdms(list.cdm,replace = TRUE)},
permutation.orig = {resample = function() sample.cols(x,vec,replace =FALSE)}, # resampling of the original data
{stop(paste("unkown resample.type:",resample.type))}
)
switch(type,
multi = {fun = function (x) multivariance(x,vec = vec,...)},
total = {fun = function (x) total.multivariance(x,vec = vec,...)},
m.multi.2 = {fun = function (x) m.multivariance(x,vec = vec,...)},
m.multi.3 = {fun = function (x) m.multivariance(x,vec = vec,m = 3,...)},
all = {fun = function (x) multivariances.all(x,vec = vec,...)}#doAll = TRUE}
,
{stop(paste("unkown type:",type))}
)
results = matrix(,nrow = times, ncol = 1 + (type == "all")*3)
for (i in 1:times) {# we use a for loop instead of replicate to prevent trouble with '...'
results[i,] = fun(resample())
}
multi = fun(x)
p.value = rowSums(t(results) >= multi)/times
names(p.value) = names(multi)
invisible(list(resampled = results,
original = multi,
p.value = p.value))
}
#' rejection level via resampling
#'
#' Uses the resample method to sample from the test statistic under the hypothesis of independence. The (1-alpha) quantile of these samples is returned.
#'
#' @param alpha numeric, the significance value
#' @param ... passed to \code{\link{resample.multivariance}}. Required is the data matrix \code{x}.
#'
#'@references
#' For the theoretic background see the reference [3] given on the main help page of this package: \link{multivariance-package}.
#'
#' @examples
#' resample.rejection.level(0.05,matrix(rnorm(30*2),nrow = 30))
#' resample.rejection.level(0.05,matrix(rnorm(30*3),nrow = 30),vec = c(1,1,2))
#'
#' @export
resample.rejection.level = function(alpha = 0.05,...) {
samples = resample.multivariance(...)$resampled
stats::quantile(samples,probs= 1-alpha)
}
#' p-value via resampling
#'
#' Use a resampling method to generate samples of the test statistic under the hypothesis of independence. Based on these, the p-value of a given value of the test statistic is computed.
#'
#' @return It returns 1 minus the value of the empirical distribution function of the resampling samples evaluated at the given value.
#' @param value numeric, the value of (total-/m-)multivariance for which the p-value shall be computed
#' @param ... passed to \code{\link{resample.multivariance}}. Required is the data matrix \code{x}.
#'
#' @details This function is useful if a p-value of a test statistic shall be computed based on the resampling values of the test statistic of a different sample. For the p-value based on the same sample \code{\link{resample.multivariance}(...)$p.value} is sufficient.
#'
#' @references
#' For the theoretic background see the reference [3] given on the main help page of this package: \link{multivariance-package}.
#'
#' @examples
#' x = coins(100)
#' resample.pvalue(multivariance(x),x=x,times = 300)
#' resample.pvalue(multivariances.all(x),x=x,times = 300,type = "all")
#'
#' @export
resample.pvalue = function(value,...){
samples = resample.multivariance(...)$resampled
#sum(samples >= value) / length(samples)
#slower: 1-stats::ecdf(samples)(value)
result = rowSums(t(samples) >= value)/ncol(t(samples))
names(result) = names(value)
return(result)
}
##### Moments ####
#' given the distance matrix the unbiased estimate for mu3 is computed
#' @keywords internal
mu3.unbiased = function(B,b2ob = sum(tcrossprod(B)*B)) {
#B2 = tcrossprod(B) # B%*%B ; note that the matrices are symmetric; this seems to be a faster implementation.
#B3 = tcrossprod(B,B2) # B2%*%B
cb = colSums(B)
cbob = colSums(B^2)
b = sum(cb) #sum(B)
b2 = sum(cb^2) #sum(B2)
b3 = cb%*%B%*%cb #sum(B3)
bob = sum(cbob) #sum(B^2)
bobob = sum(B^2*B)
lboblb = sum(cbob * cb) #sum(tcrossprod(B,B^2))
csbo3 = sum(cb^2*cb)
N = ncol(B)
facN = function(k) 1/(prod(N:(N-k)))
f = facN(3)*(b3-b2ob-2*lboblb+bobob)
ei = facN(2)*b2ob
y = facN(4)*(-4*b3 - 4*bobob - 2*csbo3 + 2*b2ob + 10*lboblb + b*b2 - b*bob)
u = facN(5)*(-48*lboblb - 8*b2ob + 16*bobob + 16*csbo3 + 24*b3 - 12*b*b2 + 6*b*bob + b^2*b)
mu3 = - ei + 3*f - 3*y + u
return(mu3)
}
#' given the sample of a single variable the centered distance matrix, mu and bcd are computed
#' The normalization should be postponed to the moment calculation.
#'
#' NOTE: speedup might be possible by incorporating mu3 and some matrix-norm-identities
#' @keywords internal
cdm.mu.bcd = function(x, normalize = FALSE, psi = NULL, p = NULL, isotropic = FALSE, unbiased.moments = TRUE, external.dm.fun = NULL) {
if (normalize) stop("normalized not implemented")
##### lines copied from cmd
if (is.null(psi) & is.null(p) & is.null(external.dm.fun)) {
#dm = dist.to.matrix(stats::dist(x,method="euclidean"))
#DEVELOPING NOTE: here as.matrix was slow, dist.to.matrix is faster. Instead one could just use the vector....
# even faster for the euclidean case is fastdist defined via Rcpp
dm = fastdist(as.matrix(x))
} else {
if (!is.null(p)) {
if ((p<1) || (p>2)) warning("p is not in [1,2]")
dm = dist.to.matrix(stats::dist(x,method="minkowski", p = p))
} else { # case: psi is given
if (!is.null(external.dm.fun)) {
dm = external.dm.fun(as.matrix(x))
} else {
if (isotropic) {
#dm = psi(dist.to.matrix(stats::dist(x,method="euclidean")))
dm = psi(fastdist(as.matrix(x)))
} else {
x = as.matrix(x)
n = nrow(x)
d = ncol(x)
dm = matrix(apply(cbind(x[rep(1:n,n),],x[rep(1:n,each = n),]), #create all combinations
1, # apply to each row
function(y) psi(y[1:d], y[(d+1):(2*d)])),nrow = n)
# DEVELOPING NOTE: could double the speed if only the upper triangular matrix is computed, using the idea of dist.to.matrix
}
}
}
}
### end copy of cdm
colm = colMeans(dm)
m = mean(colm) # equals mean(dm)
# if (unbiased & normalize) warning("Unbiased normalized cdm, m2, m3 not implemented. \n")
if (m == 0) warning("It seems that one variable is constant. Constants are always independent.\n")
cdm = (-dm + outer(colm,colm, FUN ="+") - m)
###### for mu (biased)
B = dm
B2 = tcrossprod(B) #mymm(B,B) # B%*%B note that the matrices are symmetric; this seems to be a faster implementation.
#B3 = tcrossprod(B,B2) #mymm(B,B2) #B2%*%B
N = nrow(B)
cb = colSums(B)
mB = 1/N^2 * sum(cb) #mean(B)
mB2 = 1/N^2 * sum(cb^2) #mean(B2)
mB3 = 1/N^2 * cb%*%B%*%cb #mean(B3)
mBsq = mean(B^2)
mB2oB = mean(B2*B)
m2 = mBsq-2/N*mB2+mB^2
m3 = -1/N*mB2oB + 3/N^2*mB3-3/N*mB2*mB + mB^2*mB
mu = c(mB,m2,m3)
###### for bcd (and mu unbiased)
if (unbiased.moments) {
# bcd = c(N^2/(N*(N-1))*mBsq,N^2/(N*(N-1)*(N-2)) * (mB2-mBsq),N^2/(N*(N-1)*(N-2)*(N-3))*(N^2*mB^2+2*mBsq-4*mB2)) #b,c,d
bcd = c(N/((N-1))*mBsq, N/((N-1)*(N-2)) * (mB2-mBsq), N/((N-1)*(N-2)*(N-3))*(N^2*mB^2+2*mBsq-4*mB2)) #b,c,d
# unbiased mB, m2
mB = N/(N-1)*mB #!! since we are in the case without normalization
m2 = sum(bcd * c(1,-2,1))
m3 = mu3.unbiased(B,b2ob = mB2oB*N^2)
mu = c(mB,m2,m3)
} else {
bcd = c(mBsq,1/N*mB2,mB^2) #b,c,d
}
return(list(cdm = cdm, mu = mu, bcd = bcd, mean = m))
}
#' computes the centered distance matrices, mus and bcds
#' @param x matrix, each row is a sample
#' @param vec vector which indicates which columns are treated as one sample
#' @param membership deprecated. Now use \code{vec}.
#' @param ... these are passed to \code{\link{cdm}}
#'
#' @return It returns a list with the centered distance matrices (as a list, one matrix for each component 1 to max(vec)), the moment matrices \code{mu} and \code{bcd}, and the vector of means.
#'
#' @keywords internal
cdms.mu.bcd = function(x,vec = 1:ncol(x),membership = NULL,...) {
if (!is.null(membership)) {
vec = membership
warning("Use 'vec' instead of 'membership' as argument to 'cdms'. 'membership' is depreciated.")
}
if (anyNA(vec)) vec = 1:ncol(x)
n = max(vec)
N = nrow(x)
#array.cdm = array(,dim = c(N,N,n))
list.cdm = list()
mu = matrix(ncol = n,nrow = 3)
bcd = matrix(ncol = n,nrow = 3)
m = numeric(n)
for (i in 1:n) {
res = cdm.mu.bcd(x[,(vec == i)],...)
#array.cdm[,,i] = res$cdm
if(res$mean ==0) {
list.cdm[[i]] = res$cdm
} else {
list.cdm[[i]] = res$cdm/res$mean
}
mu[,i] = res$mu
bcd[,i] = res$bcd
m[i] = res$mean
}
return(list(list.cdm = list.cdm, mu = mu, bcd = bcd, mean = m))
}
coef.7.cases = structure(c(0, 0, 0, 0, 0, 0, 0, 0, -6, -2, 8, -1, -4, 4, 1,
0, 11, 2, -12, 1, 4, -6, 0, -1, -6, 0, 4, 0, 0, 2, 0, 2, 1, 0, 0, 0, 0, 0, 0, 0), .Dim = c(8L, 5L))
coef.array = structure(c(0, 0, 0, 0, 0, 6, -24, 18, -1, 2, -4, 3, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, -24, 18, -1, -2, 12, -9, 0,
-2, 4, -2, 0, 1, -2, 1, 0, 0, 0, 0, 0, 6, -24, 18, -1, 0, 4,
-3, 0, -1, 2, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, -24, 18, -1,
-2, 12, -9, 2, 0, 0, -2, -1, 0, 0, 1, 0, 0, 0, 0, 0, 6, -24,
18, -1, -4, 20, -15, 1, 0, -4, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0,
6, -24, 18, -1, 0, 4, -3, 1, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0,
0, 6, -24, 18, -1, -10, 44, -33, 2, 4, -24, 18, -1, 0, 4, -3), .Dim = c(4L,
5L, 7L))
#' Computes the explicit coefficients for the finite sample variance for a sample of size N
#' @keywords internal
N.coefficients = function(N) {
freq.o.c = as.vector(coef.7.cases %*% N^(0:4))
case.coef = t(apply(coef.array,3,function(x) x %*%(N^{0:4})))
dimnames(case.coef)[[2]]=c("a","b","c","d")
# names(freq.o.c)=c("dd","bb","cc")
return(list(freq.o.c = freq.o.c, case.coef = case.coef))
}
#' functions which are required for the calculation of the finite sample expectation and variance for m-multivariance and total multivariance
#' @keywords internal
d2 = function(a,b) sum(a)*sum(b)-sum(a*b)
d3 = function(a,b,c) sum(c)*d2(a,b) - d2(a*c,b) - d2(a,b*c)
d4 = function(a,b,c,d) sum(d)*d3(a,b,c)- d3(a*d,b,c)- d3(a,b*d,c)- d3(a,b,c*d)
G2 = function(a,b,c) {
d2(c,c)/2 + d3(a,b,c) + d4(a,a,b,b)/4
}
d5 = function(a,b,c,d,e) sum(e)*d4(a,b,c,d) - d4(e*a,b,c,d) - d4(a,e*b,c,d) -d4(a,b,e*c,d) -d4(a,b,c,e*d)
d6 = function(a,b,c,d,e,f) sum(f)*d5(a,b,c,d,e) - d5(f*a,b,c,d,e) - d5(a,f*b,c,d,e) -d5(a,b,f*c,d,e) -d5(a,b,c,f*d,e) - d5(a,b,c,d,f*e)
G3 = function(a,b,c) {
d3(c,c,c)/6 + d4(a,b,c,c)/2 + d5(a,a,b,b,c)/4 + d6(a,a,a,b,b,b)/36
}
Gt = function(a,b,c) {
return(
prod(a+b+c+1) - prod(b+1) *(1 + sum((a+c)/(b+1))) - prod(a+1) *(1 + sum((b+c)/(a+1)))
+ 1 + sum(a)*sum(b) - sum(a*b) + sum(a+b+c)
)
}
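# The d-functions compute sums over pairwise distinct indices, e.g.
# d2(a,b) = sum_{i != j} a_i b_j, and d3 to d6 extend this to distinct
# triples, quadruples, etc. A quick numerical check (illustrative values):
# a = 1:3; b = c(2, 0, 1)
# d2(a, b) == sum(outer(a, b)) - sum(a * b)  # TRUE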
#' This is the function GC, which is required for the computation of the finite sample variance of m- and total multivariance
#' @keywords internal
sums.of.products = function(a,b,c, type = "multi") {
switch(type,
multi = { temp = prod(c)},
m.multi.2 = { temp = G2(a,b,c)},
m.multi.3 = { temp = G3(a,b,c)},
total = { temp = Gt(a,b,c)},
{stop(paste("unkown type:",type))}
)
return(temp)
}
#' Computes the moments required by Pearson's approximation
#' @keywords internal
moments.for.pearson = function(N,bcd, mu, mmean, type = "multi") {
switch(type,
multi = {fun = function (x) prod(x)},
total = {fun = function (x) prod(x+1)-sum(x)-1},
m.multi.2 = {fun = function (x) (sum(x)^2-sum(x^2))/2},
m.multi.3 = {fun = function (x) (sum(x)^3-3*sum(x)*sum(x^2)+2*sum(x^2*x))/6},
{stop(paste("unkown type:",type))}
)
limit.variance = 2*fun(mu[2,]/mmean^2) # variance
limit.skewness = 8*fun(mu[3,]/mmean^3)/limit.variance^(3/2) #skewness
n = dim(mu)[2]
switch(type,
multi = {scalevec = 1},
total = {scalevec = (2^n-n-1)^2},
m.multi.2 = {scalevec = (choose(n,2))^2},
m.multi.3 = {scalevec = (choose(n,3))^2},
)
res = N.coefficients(N)
muvec = mu[1,]
bcdsums = (res$case.coef[,2:4]/N^4)%*%bcd[1:3,] #bbi+cci+ddi
### normalized:
sumh2 = (res$freq.o.c[c(2,3,1)]/N^4)%*%bcd[1:3,] #(C(N,2)bi+C(N,3)ci+C(N,1)di)/N^4
bcdsums.normalized = t(apply(bcdsums,1,function(x) x/sumh2))
one = rep(1,n)
E.no = (fun(one) + (N-1)*fun((-1/(N-1))*one))/sqrt(scalevec) # finite sample expectation
s.no = numeric(7)
for (i in 1:3) s.no[i] = res$freq.o.c[i]/N^2*sums.of.products((-1/(N-1))*one,(-1/(N-1))*one,bcdsums.normalized[i,],type = type)
for (i in c(4,7)) s.no[i] = res$freq.o.c[i]/N^2*sums.of.products(one,one,bcdsums.normalized[i,],type = type)
for (i in c(5,6)) s.no[i] = res$freq.o.c[i]/N^2*sums.of.products(one,(-1/(N-1))*one,bcdsums.normalized[i,],type = type)
E2.no = sum(s.no/scalevec)
Esq.no.biased = (E.no)^2 # squared finite sample expectation (biased estimate; differs from Esq.no computed below)
r.no = numeric(7)
for (i in 1:3) r.no[i] = res$freq.o.c[i]/N^2*sums.of.products((-1/(N-1))*one,(-1/(N-1))*one,(-1/(N-1))^2*one,type = type)
for (i in c(4,7)) r.no[i] = res$freq.o.c[i]/N^2*sums.of.products(one,one,one,type = type)
for (i in c(5,6)) r.no[i] = res$freq.o.c[i]/N^2*sums.of.products(one,(-1/(N-1))*one,(-1/(N-1))*one,type = type)
Esq.no = sum(r.no/scalevec)
Var.no = E2.no-Esq.no # finite sample variance
return(c(E.no,Var.no,limit.skewness))
}
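# Sketch of how these moments are obtained from data (this mirrors the call
# in 'pearson.pvalue' below; illustrative data):
# x = matrix(rnorm(300), ncol = 3)
# cmb = cdms.mu.bcd(x)
# moments.for.pearson(nrow(x), cmb$bcd, cmb$mu, cmb$mean, type = "total")
# # returns: finite sample mean, finite sample variance, limit skewness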
#' approximate distribution of Gaussian quadratic form
#'
#' Approximation of the value of the distribution function of a Gaussian quadratic form based on its first three moments.
#'
#' @param x value at which the distribution function is to be evaluated
#' @param moment vector with the mean, variance and skewness of the quadratic form
#' @param lower.tail logical, indicating whether the lower or the upper tail of the distribution function should be calculated
#'
#' @details This is Pearson's approximation for Gaussian quadratic forms as stated in Equation (4.64) in [4]
#'
#' @references
#' For the theoretic background see the reference [4] given on the main help page of this package: \link{multivariance-package}.
#'
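#' @examples
#' # an illustrative call: a Gaussian quadratic form with (hypothetical)
#' # mean 1, variance 0.5 and skewness 1.5; upper tail probability at 2
#' pearson.qf(2, moment = c(1, 0.5, 1.5), lower.tail = FALSE)
#'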
#' @export
pearson.qf = function(x,moment, lower.tail = TRUE) {
m = moment[1]
v = max(moment[2],0) #! since estimates of v might be negative.
s = moment[3]
a = sqrt(8)/s
nu = a^2
if (is.na(v)) return(NA)
if (v == 0) {
return(x > m)
} else {
stats::pchisq(sqrt(2)*a*(x-m)/sqrt(as.vector(v))+nu,df = nu, lower.tail= lower.tail)
}
}
#' fast p-value approximation
#'
#' Computes the p-value of a sample using Pearson's approximation of Gaussian quadratic forms with the estimators developed by Berschneider and Böttcher in [4].
#'
#' @param x matrix, the rows should be iid samples
#' @param vec vector, which indicates which columns of \code{x} are treated together as one sample
#' @param type one of \code{"multi","total","m.multi.2","m.multi.3"}
#' @param ... these are passed to \code{\link{cdms}}
#'
#' @details This is the method recommended in [4], i.e., using Pearson's quadratic form estimate with the unbiased finite sample estimators for the mean and variance of normalized multivariance together with the unbiased estimator for the limit skewness.
#'
#' @references
#' For the theoretic background see the reference [4] given on the main help page of this package: \link{multivariance-package}.
#'
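#' @examples
#' # an illustrative call on independent data - under repeated sampling the
#' # p-value should be approximately uniformly distributed
#' pearson.pvalue(matrix(rnorm(300), ncol = 3))
#'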
#' @export
pearson.pvalue = function(x,vec = 1:ncol(x), type = "multi",...) {
dots <- list(...) # we filter the argument lambda which might be used for total.multivariance, we keep everything else to preserve other error messages.
cmb = do.call('cdms.mu.bcd', c(list(x = x, vec = vec), dots[!(names(dots) %in% "lambda")]))
# cmb = cdms.mu.bcd(x,vec, psi = psi, p = p, isotropic = isotropic,...)
moms = moments.for.pearson(nrow(x),cmb$bcd, cmb$mu, cmb$mean, type = type)
# normalization already in cdms.mu.bcd
# normalizing.factor = rep(cmb$mean, each = nrow(x)^2)
# normalizing.factor[normalizing.factor == 0] = 1 # prevent division by 0. if 0 then cdm == 0 anyway
## list.cdm = alply(cmb$array.cdm/normalizing.factor,3)
switch( type,
multi = {m = multivariance(cmb$list.cdm)},
total = {m = total.multivariance(cmb$list.cdm)},
m.multi.2 = {m = m.multivariance(cmb$list.cdm)},
m.multi.3 = {m = m.multivariance(cmb$list.cdm,m = 3)},
{stop(paste("unkown type:",type))}
)
return(pearson.qf(m,moms,lower.tail = FALSE))
}
##### Dependence structure ####
# * Data ####
#' example dataset for \code{\link{dependence.structure}}
#'
#' It was generated by \preformatted{
#' set.seed(1348879148)
#' N = 100
#' dep_struct_several_26_100 = cbind(coins(N,2),tetrahedron(N),coins(N,4),
#' tetrahedron(N),tetrahedron(N),coins(N,3),coins(N,3),rnorm(N))
#' save(dep_struct_several_26_100,file ="dep_struct_several_26_100.rda")
#'}
#'
#' To avoid irritation, note that the seed is just a simple integer hash value of the variable name.
#'
#' @format \code{matrix} 26 variables (columns), 100 independent samples (rows)
#'
"dep_struct_several_26_100"
#' example dataset for \code{\link{dependence.structure}}
#'
#' It was generated by \preformatted{
#' set.seed(222454572)
#' N = 100
#' y = coins(N,2)
#' dep_struct_star_9_100 = cbind(y,y,y)
#' save(dep_struct_star_9_100,file ="dep_struct_star_9_100.rda")
#'}
#'
#' To avoid irritation, note that the seed is just a simple integer hash value of the variable name.
#'
#' @format \code{matrix} 9 variables (columns), 100 independent samples (rows)
#'
"dep_struct_star_9_100"
#' example dataset for \code{\link{dependence.structure}}
#'
#' It was generated by \preformatted{
#' set.seed(532333356)
#' N = 100
#' x = matrix(sample.int(2,10*N,replace = TRUE)-1,ncol = 10)
#' for (i in c(2,5,9)) x = cbind(x,(rowSums(as.matrix(x[,1:(i-1)])) %% 2) == x[,i])
#' dep_struct_iterated_13_100 = x
#' save(dep_struct_iterated_13_100,file ="dep_struct_iterated_13_100.rda")
#'}
#'
#' To avoid irritation, note that the seed is just a simple integer hash value of the variable name.
#'
#' @format \code{matrix} 13 variables (columns), 100 independent samples (rows)
#'
"dep_struct_iterated_13_100"
#' example dataset for \code{\link{dependence.structure}}
#'
#' It was generated by \preformatted{
#' set.seed(436646700)
#' N = 100
#' n= 15
#' x=matrix(sample.int(2,N*n,replace = TRUE)-1,nrow =N)
#' x[,4] = rowSums(x[,1:3]) %% 2
#' x[,7] = rowSums(x[,4:6]) %% 2
#' x[,10] = rowSums(x[,7:9]) %% 2
#' x[,13] = rowSums(x[,10:12]) %% 2
#' x[,15] = rowSums(x[,c(13,14,1)]) %% 2
#' dep_struct_ring_15_100 = x
#' save(dep_struct_ring_15_100,file ="dep_struct_ring_15_100.rda")
#'}
#'
#' To avoid irritation, note that the seed is just a simple integer hash value of the variable name.
#'
#' @format \code{matrix} 15 variables (columns), 100 independent samples (rows)
#'
"dep_struct_ring_15_100"
# * detection ####
#' determines the dependence structure
#'
#' Determines the dependence structure as described in [3].
#'
#' @param x matrix, each row of the matrix is treated as one sample
#' @param vec vector, it indicates which columns are initially treated together as one sample
#' @param verbose boolean, if \code{TRUE} details are printed during the detection and whenever a cluster is newly detected the (so far) detected dependence structure is plotted.
#' @param detection.aim \code{=NULL} or a list of vectors which indicate the expected detection, see below for more details
#' @param ... these are passed to \code{\link{find.cluster}}
#'
#' @details
#' Performs the detection of the dependence structure as described in [3].
#'
#' If \code{fixed.rejection.level} is not provided, the significance level \code{alpha} is used to determine which multivariances are significant using the distribution-free rejection level. By default the Holm method is used to adjust the p-values for multiple testing.
#'
#' The resulting graph can be simplified (pairwise dependence can be represented by edges instead of vertices) using \code{\link{clean.graph}}.
#'
#' Advanced:
#' The argument \code{detection.aim} can be used to check whether an expected dependence structure was detected. This might be useful for simulation studies to determine the empirical power of the detection algorithm. To this end \code{detection.aim} is set to a list of vectors which indicate the expected detected dependence structures (one for each run of \code{\link{find.cluster}}). The first element of each vector is the \code{k} for which k-tuples are detected (for this aim the detection stops without success if no k-tuple is found), and the remaining elements indicate to which clusters all present vertices belong after the detection, e.g. \code{c(3,2,2,1,2,1,1,2,1)} expects that 3-tuples are detected and that the graph has 8 vertices (including those representing the detected 3-dependencies), where the order of the 2's and 1's indicates which vertices belong to which cluster. If \code{detection.aim} is provided, the vector representing the actual detection is printed, so the output can be copy-pasted to successively fix the expected detection aims.
#'
#' Note that a failed detection might trigger the warning:
#' \preformatted{
#' run$mem == detection.aim[[k]][-1] :
#' longer object length is not a multiple of shorter object length
#' }
#'
#'
#'
#' @return returns a list with elements:
#' \describe{
#' \item{\code{multivariances}}{calculated multivariances,}
#' \item{\code{cdms}}{calculated centered distance matrices,}
#' \item{\code{graph}}{graph representing the dependence structure.}
#' \item{\code{detected}}{boolean, this is only included if a \code{detection.aim} is given.}
#' }
#'
#' @references
#' For the theoretic background see the reference [3] given on the main help page of this package: \link{multivariance-package}.
#'
#' @example inst/examples/dependence-structures.R
#' @export
#'
dependence.structure = function(x, vec = 1:ncol(x), verbose = TRUE, detection.aim = NULL, ...) {
list.cdm = cdms(x,vec = vec) # creates the distance matrices
all.multivariances = numeric(0) # vector which will contain all distance multivariances which are calculated
mem = as.numeric(1:max(vec))
#its length is the number of vertices, its content is the number of the corresponding cluster for the current iteration!!!
# has to be numeric, since otherwise 'identical' fails to end the loop (in the case of
# no detected clusters in the first run)
cluster.to.vertex = 1:max(mem) # cluster to vertex relation - gets renewed each iteration (since the names of the clusters change)
vertex.to.cdm = 1:max(mem) # vertex to A (the centered distance matrices) relation - gets appended each iteration
previous.n.o.cdms = rep(0,max(mem)) # number of As in the previous iteration
n = max(mem) # number of clusters
g = igraph::graph.empty(,directed=FALSE)
g = igraph::add.vertices(g,n,label = sapply(1:max(mem),function(r) paste(colnames(x,do.NULL = FALSE,prefix = "")[vec == r],collapse = ",")),shape = "circle")
# Loop through the tuples
detected = TRUE
k = 1
while (detected) {
if (!is.null(detection.aim)) {
run = find.cluster(x,vec,list.cdm,mem,cluster.to.vertex,vertex.to.cdm,previous.n.o.cdms,all.multivariances,g,kvec = 2:detection.aim[[k]][1], verbose = verbose, ...)
if (verbose) {
cat("last detected structure (in detection.aim format): ")
dput(c(run$k,run$mem))
}
success = all(run$mem == detection.aim[[k]][-1])
k = k+1
} else {
run = find.cluster(x,vec,list.cdm,mem,cluster.to.vertex,vertex.to.cdm,previous.n.o.cdms,all.multivariances,g,verbose = verbose,...)
}
detected = run$detected
list.cdm = run$list.cdm
mem = run$mem
cluster.to.vertex = run$cluster.to.vertex
vertex.to.cdm = run$vertex.to.cdm
previous.n.o.cdms = run$previous.n.o.cdms
all.multivariances = run$all.multivariances
g = run$g
if (!is.null(detection.aim)) if (!success) break
}
if (!is.null(detection.aim)) {
return(invisible(list(cdms = run$list.cdm, multivariances = run$all.multivariances, graph = run$g,detected = success)))
} else {
return(invisible(list(cdms = run$list.cdm, multivariances = run$all.multivariances, graph = run$g)))
}
}
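# Sketch: using 'detection.aim' to estimate the empirical power of the
# detection. The aim vector below is purely illustrative and will in general
# not match the data; copy the printed "last detected structure" of a verbose
# run to obtain the correct one:
# res = dependence.structure(dep_struct_star_9_100, verbose = FALSE,
#   detection.aim = list(c(2, rep(1, 9))))
# res$detected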
#' cluster detection
#'
#' Performs the dependence structure detection algorithm until a cluster is found. This function is the basic building block of \code{\link{dependence.structure}}. Advanced users might use it directly.
#'
#' @param x matrix with the samples
#'
#' @param vec vector, it indicates which columns are initially treated together as one sample
#' @param list.cdm list of centered distance matrices
#' @param mem numeric vector, its length is the number of vertices, its content is the number of the corresponding cluster for the current iteration, i.e., vertex \code{i} belongs to cluster \code{mem[i]}
#' @param cluster.to.vertex vector, contains the cluster to vertex relations, i.e., \code{cluster.to.vertex[i]} is the index of the vertex which represents cluster \code{i}
#' @param vertex.to.cdm vector, contains the vertex to centered distance matrix relations, i.e., \code{vertex.to.cdm[i]} is the index centered distance matrix in \code{list.cdm} which corresponds to vertex \code{i}
#' @param previous.n.o.cdms vector, number of centered distance matrices in the previous iteration (it is used to ensure that previously checked tuples are not checked again)
#' @param all.multivariances vector, which contains all distance multivariances which have been calculated so far. Only used to finally return all distance multivariances which have been calculated.
#' @param g dependence structure graph
#' @param alpha numeric, significance level used for the (distribution-free) tests
#' @param fixed.rejection.level vector, if not \code{NA} the \code{fixed.rejection.level[k]} is used for the k-tuples, instead of a level derived from the significance level \code{alpha}
#' @param p.adjust.method name of the method used to adjust the p-values for multiple testing, see \code{\link[stats]{p.adjust}} for all possible options.
#' @param verbose boolean, if \code{TRUE} details during the detection are printed and whenever a cluster is newly detected the (so far) detected dependence structure is plotted.
#' @param kvec vector, k-tuples are only checked for each k in \code{kvec}, i.e., for \code{kvec = 2:4} only 2-, 3- and 4-tuples would be checked and then the algorithm stops.
#'
#' @details
#' For further details see \code{\link{dependence.structure}}.
#'
find.cluster = function(x,
vec = 1:ncol(x), # which columns should be treated as one sample
list.cdm = cdms(x,vec = vec), # creates the distance matrices
mem = as.numeric(1:max(vec)),
#its length is the number of vertices, its content is the number of the corresponding cluster for the current iteration!!!
# has to be numeric, since otherwise 'identical' fails to end the loop (in the case of
# no detected clusters in the first run)
cluster.to.vertex = 1:max(mem), # cluster to vertex relation - gets renewed each iteration (since the names of the clusters change)
vertex.to.cdm = 1:max(mem), # vertex to A (the centered distance matrices) relation - gets appended each iteration
previous.n.o.cdms = rep(0,max(mem)), # number of As in the previous iteration; used to speed up the detection.
all.multivariances = numeric(0), # vector which will contain all distance multivariances which are calculated
g = igraph::add.vertices(igraph::graph.empty(,directed=FALSE),max(mem),label = sapply(1:max(mem),function(r) paste(colnames(x,do.NULL = FALSE,prefix = "")[vec == r],collapse = ",")),shape = "circle"), #the graph
fixed.rejection.level = NA, alpha=0.05,p.adjust.method = "holm", verbose = TRUE, kvec = 2:max(mem)) {
explore = FALSE # undocumented option, which would provide some more graphs during the detection
if (verbose) graphics::plot(g)
n = max(mem) # number of clusters
nV = length(igraph::V(g)) #length(mem) # number of vertices at the start of the iteration
n.o.cdm = length(list.cdm) # number of As at the start of the iteration
cluster.to.cdm = vertex.to.cdm[cluster.to.vertex] #cluster to A relation - gets renewed each iteration
# Each cluster is represented by its 'largest' vertex
cdm.to.vertex = NA # A to vertex relation
for (i in 1:n.o.cdm) cdm.to.vertex[i] = which(vertex.to.cdm == i)
for (k in 2:min(max(kvec),max(mem))) { # look at the k-tuples of the n variables.
tuples = utils::combn(n,k) # all k-tuples of 1,..,n
tuples = matrix(cluster.to.cdm[tuples],ncol = k,byrow = TRUE)
#transform tuples into the A indexes
tuples = tuples[apply(tuples,1,max) > previous.n.o.cdms[k],]
# to speed up, we only consider those with new A
if (length(tuples) == k) dim(tuples) = c(1,k)
# make sure that it is a matrix
multivariances = apply(tuples,1,function(x) multivariance(list.cdm,x,Nscale = TRUE)) #calculates all distance multivariances
all.multivariances = c(all.multivariances,multivariances)
# print(multivariances)
if (explore) {
graphics::plot(multivariances,main = paste(k,"tuple multivariances"))
graphics::abline(h = c( rejection.level(alpha/choose(n,k)), rejection.level(alpha)), col = c("red","green"))
readline("continue with [enter]")
}
for (i in which(((anyNA(fixed.rejection.level) & (stats::p.adjust(multivariance.pvalue(multivariances),method = p.adjust.method) < alpha))) | (multivariances > fixed.rejection.level[k]) )) {
# for each tuple, with adjusted p value less than the significance level (or multivariance less than a prescribed fixed.rejection.level, if given) we add a vertex and the edges
new = length(igraph::V(g))+1
g = igraph::add.vertices(g,1,label = signif(multivariances[i],4), shape = "none", level=k)
g = igraph::add_edges(g, as.vector(t(cbind(new,cdm.to.vertex[tuples[i,]]))), weight= NA, color = k)
# }
}
if (verbose) cat(paste(k,"-tuples: max. multivariance: ",format(max(multivariances),digits=3,nsmall = 3,width = 7),"; min. p-value: ",multivariance.pvalue(max(multivariances)),"\n",sep =""))
#readline(paste("level",k,", press [Enter] for next level"))
previous.n.o.cdms[k] = n.o.cdm
if ((anyNA(fixed.rejection.level) && (stats::p.adjust(multivariance.pvalue(max(multivariances)),method = p.adjust.method,n = length(multivariances)) < alpha)) || (!anyNA(fixed.rejection.level) && (max(multivariances) > fixed.rejection.level[k]))) {
#if a cluster was found exit the loop
break
}
} # end loop k
previous.n.o.cdms[(k+1):length(previous.n.o.cdms)] = 0 # ensures that for all higher tuples all combinations are checked. (Otherwise those tuples which were checked previously are skipped)
if (verbose) graphics::plot(g)
#print(paste("mem: ",paste(mem,collapse = " "),"clust",paste(igraph::clusters(g)$membership,collapse = " ")))
if (identical(mem, igraph::clusters(g)$membership) || (igraph::clusters(g)$no == 1) ) {
detected = FALSE
if (verbose) {
if (igraph::clusters(g)$no == 1) {cat("All vertices are in one cluster.\n")} else { cat("No new cluster detected.\n")}
} # will end recursion if the membership did not change or there is only one cluster
} else {
if (verbose) cat("New cluster detected. Not all vertices are in one cluster.\n")
detected = TRUE
}
mem = igraph::clusters(g)$membership
cluster.to.vertex = NA
for (i in 1:max(mem)) cluster.to.vertex[i] = max(which(mem == i))
# cluster to vertex relation. Each cluster is represented by its 'largest' vertex
previous.n.o.cdms[2] = n.o.cdm
for (i in cluster.to.vertex[cluster.to.vertex > nV]) {
# for every vertex which represents a new cluster the distance matrix representing the cluster is calculated.
# the vertex i gets the cdm with index vertex.to.cdm[i]
n.o.cdm = n.o.cdm + 1
vertex.to.cdm[i] = n.o.cdm
# print(which(mem[1:ncol(x)] == which(cluster.to.vertex== i)))
# array.cdm = abind::abind(array.cdm, cdm(x[,which(mem[1:ncol(x)] == which(cluster.to.vertex== i))]),along=1)
#array.cdm = abind::abind(array.cdm, cdm(x[,which(vec %in% which(mem[1:ncol(x)] == which(cluster.to.vertex== i)))]),along=3)
list.cdm[[length(list.cdm)+1]] = cdm(x[,which(vec %in% which(mem[1:ncol(x)] == which(cluster.to.vertex== i)))])
# which(cluster.to.vertex== i) is the number of the cluster represented by vertex i
# which(mem ...) gives the "original" vertices which belong to the same cluster as vertex i
# which(vec ...) finally gives the columns of x which belong to the same cluster as vertex i
} # at the end of this for loop, n.o.cdm contains the new number of As
invisible(list(detected = detected,list.cdm = list.cdm,mem = mem,cluster.to.vertex = cluster.to.vertex,vertex.to.cdm = vertex.to.cdm,previous.n.o.cdms = previous.n.o.cdms,all.multivariances = all.multivariances,g = g, k = k))
}
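# Advanced usage sketch: one pass of the cluster detection with default
# arguments (illustrative; normally invoked via 'dependence.structure'):
# run = find.cluster(dep_struct_iterated_13_100, verbose = FALSE)
# run$detected  # TRUE if this pass found a new cluster
# run$g         # the (partially) built dependence structure graph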
#' cleanup dependence structure graph
#'
#' Given a dependence structure graph: each vertex which represents the multivariance of exactly two vertices is replaced by an edge (between these two vertices) labeled with the label of the removed vertex.
#'
#' @param g graph, created by \code{\link{dependence.structure}}
#' @return graph
#'
#' @examples
#'
#' N = 200
#' y = coins(N,2)
#' x = cbind(y,y,y)
#' ds = dependence.structure(x)
#' plot(clean.graph(ds$graph))
#' @export
clean.graph = function(g) {
# g = ds$graph
vert = which(igraph::V(g)$level == 2) # might still have more neighbors!!!
only.two.neighbors = logical(igraph::vcount(g))
for (i in vert) {
only.two.neighbors[i] = length(igraph::neighbors(g,igraph::V(g)[i]))==2
if (only.two.neighbors[i]) g = igraph::add_edges(g, igraph::neighbors(g,i), weight= NA, label = igraph::V(g)[i]$label, color = 2)
}
return(igraph::delete.vertices(g,only.two.neighbors))
}
# Utility functions ####
#' transforms a distance matrix to a matrix
#'
#' Converts a distance object generated via \code{dist} to a full matrix; it does the same as \code{as.matrix}, only slightly faster.
#'
#' @param ds a distance matrix object, e.g. generated by \code{\link{dist}}
#'
#' @keywords internal
dist.to.matrix = function(ds) {
N=attr(ds,"Size")
m = matrix(0, nrow = N, ncol = N)
m[outer(1:N,1:N,">")] = ds
m+t(m)
}
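# Quick check against base R (values agree, only the dimnames differ):
# d = dist(matrix(rnorm(20), ncol = 2))
# all.equal(dist.to.matrix(d), unname(as.matrix(d)))  # TRUE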
#' estimate of the computation time
#'
#' Estimates the computation time. The estimate is relatively rough. First run with \code{determine.parameters = TRUE} (which takes a while). Then use the computed parameters to determine the computation time and/or the sample size.
#'
#' @param determine.parameters if \code{TRUE} then the parameters for the current computer are determined. This might take a while (3 loops to N=1000).
#' @param N number of samples. If \code{NULL} and \code{sectime} is given, then \code{N} is computed.
#' @param n number of variables
#' @param sectime desired computation time in seconds. If \code{NULL} then the required computation time is computed.
#' @param coef.cdm computation time parameter for the centered distance matrices
#' @param coef.prod computation time parameter for matrix products
#' @param coef.sum computation time parameter for matrix sums
#'
#' @details When detecting the parameters, the median of the computation times is used.
#'
#' @examples
#' Ns = (1:100)*10
#' ns = 1:100
#' fulltime = outer(Ns,ns,FUN = function(N,n) multivariance.timing(N,n))
#' contour(Ns,ns,fulltime,xlab = "N",ylab = "n",
#' main = "computation time of multivariance in secs",
#' sub = "using default parameters -
#' use 'determine.parameters = TRUE' to compute machine specific values")
#'
#' # Run to determine the parameters of your system:
#' # multivariance.timing(determine.parameters = TRUE)
#'
#' @export
multivariance.timing = function(N=NULL,n,sectime = NULL,coef.cdm=15.2,coef.prod=2.1,coef.sum = 1.05,determine.parameters = FALSE) {
if (!determine.parameters) {
const = coef.cdm*n+coef.prod*(n-1)+coef.sum
if (!is.null(N) && is.null(sectime)) {
nanotime = const * N^2
return(nanotime/10^9)
}
if (is.null(N) && !is.null(sectime)) {
N = sqrt(sectime*10^9/const)
return(ceiling(N))
}
} else {
op = graphics::par(mfrow = c(2,2))
Ns = (1:100)*10
Nssq = Ns^2
times = numeric(length(Ns))
for (i in 1:length(Ns)) {
N = Ns[i]
x = stats::rnorm(N)
mb = microbenchmark::microbenchmark(cdm(x)) # timing of cdm
print(N)
print(mb)
times[i] = stats::median(mb$time)
}
res = stats::lm(times ~ Nssq -1)
graphics::plot(Ns^2,times,main = paste("cdm of NxN - ",version$version.string),sub = res$coefficients)
graphics::abline(res)
coef.cdm = res$coefficients
Ns = (1:100)*10
prodtimes = numeric(length(Ns))
mat = matrix(stats::rnorm(max(Ns)^2),nrow = max(Ns))
for (i in 1:length(Ns)) {
N = Ns[i]
x = mat[1:N,1:N]
mb = microbenchmark::microbenchmark(x*x) #timing of matrix products
print(N)
print(mb)
prodtimes[i] = stats::median(mb$time)
}
res = stats::lm(prodtimes ~ Nssq -1)
graphics::plot(Ns^2,prodtimes,main = paste("products of NxN - ",version$version.string),sub = res$coefficients)
graphics::abline(res)
coef.prod = res$coefficients
Ns = (1:100)*10
sumtimes = numeric(length(Ns))
for (i in 1:length(Ns)) {
N = Ns[i]
x = mat[1:N,1:N]
mb = microbenchmark::microbenchmark(sum(x)) # timing of matrix sums
print(N)
print(mb)
sumtimes[i] = stats::median(mb$time)
}
res = stats::lm(sumtimes ~ Nssq -1)
graphics::plot(Ns^2,sumtimes,main = paste("sum of NxN elem - ",version$version.string),sub = res$coefficients)
graphics::abline(res)
coef.sum = res$coefficients
ns = 1:100
fulltime = outer(Ns,ns,FUN = function(N,n) multivariance.timing(N,n,sectime = NULL, coef.cdm,coef.prod,coef.sum))
graphics::contour(Ns,ns,fulltime,xlab = "N",ylab = "n", main = "computation time of multivariance in secs")
graphics::par(op)
print( "\nThe computed parameter are:")
return(c( coef.cdm = coef.cdm,coef.prod = coef.prod,coef.sum = coef.sum))
}
}
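# Sketch: inverting the timing formula - the approximate maximal sample size N
# such that multivariance of n = 5 variables takes about one second (with the
# default machine parameters):
# multivariance.timing(n = 5, sectime = 1)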