\name{neuralnet}
\alias{neuralnet}
\alias{print.nn}
\title{Training of neural networks}
\description{
    \code{neuralnet} is used to train neural networks using resilient
    backpropagation with (Riedmiller, 1994) or without weight
    backtracking (Riedmiller and Braun, 1993) or the modified globally
    convergent version by Anastasiadis et al. (2005). The function
    allows flexible settings through the custom choice of error and
    activation function. Furthermore, the calculation of generalized
    weights (Intrator O. and Intrator N., 1993) is implemented.
}
\details{
The globally convergent algorithm is based on resilient backpropagation without weight backtracking and additionally modifies one learning rate, either the learning rate associated with the smallest absolute gradient (sag) or the smallest learning rate (slr) itself. The learning rates in the grprop algorithm are limited to the boundaries defined in \code{learningrate.limit}.
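
For example, the 'sag' variant can be requested as follows (an illustrative sketch; \code{trainingdata}, \code{Y}, \code{X1} and \code{X2} are hypothetical, and the learning rate limits are arbitrary example values):

\preformatted{
nn.grprop <- neuralnet(Y ~ X1 + X2, trainingdata, hidden = 2,
                       algorithm = "sag",
                       learningrate.limit = c(0.001, 0.1))
}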
}
\usage{
neuralnet(formula, data, hidden = 1, threshold = 0.01, stepmax = 1e+05,
          rep = 1, startweights = NULL, learningrate.limit = NULL,
          learningrate.factor = list(minus = 0.5, plus = 1.2),
          lifesign = "none", lifesign.step = 1000, algorithm = "rprop+",
          err.fct = "sse", act.fct = "logistic", linear.output = TRUE,
          exclude = NULL, constant.weights = NULL, likelihood = FALSE)
}
\arguments{
  \item{formula}{ a symbolic description of the model to be fitted. }
  \item{data}{ a data frame in which the variables specified in \code{formula} will be found.  }
  \item{hidden}{ a vector of integers specifying the number of hidden neurons (vertices) in each layer (see the examples for a network with more than one hidden layer). }
  \item{threshold}{ a numeric vector specifying the threshold of the minimal error, used as stopping criterion. }
  \item{stepmax}{ the maximum number of steps for the training of the neural network. Reaching this maximum stops the neural network's training process. }
  \item{rep}{ the number of repetitions for the neural network's training, for every threshold. }
  \item{startweights}{ a vector containing starting values for the weights. If given, the weights will not be randomly initialized. }
  \item{learningrate.limit}{ a vector or a list containing the lowest and highest limit for the learning rate. }
  \item{learningrate.factor}{ a vector or a list containing the multiplication factors for the upper and lower learning rate. }
  \item{lifesign}{ a string specifying how much the function will print during the calculation of the neural network: 'none', 'minimal' or 'full'. }
  \item{lifesign.step}{ an integer specifying the stepsize to print the minimal threshold in full lifesign mode.  }
  \item{algorithm}{ a string containing the algorithm type used to calculate the neural network. The following types are possible: 'rprop+', 'rprop-', 'sag', or 'slr'. 'rprop+' and 'rprop-' refer to resilient backpropagation with and without weight backtracking, while 'sag' and 'slr' induce the usage of the modified globally convergent algorithm (grprop). See Details for more information.}
  \item{err.fct}{ a differentiable function that is used for the calculation of the error. Alternatively, the strings 'sse' and 'ce', which stand for the sum of squared errors and the cross-entropy, can be used.}
  \item{act.fct}{ a differentiable function that is used for smoothing the result of the cross product of the covariates or neurons and the weights. Additionally, the strings 'logistic' and 'tanh' are possible for the logistic function and the hyperbolic tangent. }
  \item{linear.output}{ logical. If \code{act.fct} should not be applied to the output neurons, set \code{linear.output} to TRUE, otherwise to FALSE. }
  \item{exclude}{ a vector or a matrix specifying the weights that are excluded from the calculation. If given as a vector, the exact positions of the weights must be known. A matrix with n rows and 3 columns will exclude n weights, where the first column stands for the layer, the second column for the input neuron and the third column for the output neuron of the weight. }
  \item{constant.weights}{ a vector specifying the values of the weights that are excluded from the training process and treated as fixed. }
  \item{likelihood}{ logical. If the error function is equal to the negative log-likelihood function, the information criteria AIC and BIC will be calculated. Furthermore, the usage of \code{confidence.interval} is meaningful. }
}
\value{
  \code{neuralnet} returns an object of class \code{nn}.
  An object of class \code{nn} is a list containing at most the following components:

  \item{ call }{ the matched call. }
  \item{ response }{ the response variables extracted from the \code{data} argument. }
  \item{ covariate }{ the covariates extracted from the \code{data} argument. }
  \item{ model.list }{ a list containing the covariates and the response variables extracted from the \code{formula} argument. }
  \item{ err.fct }{ the error function. }
  \item{ act.fct }{ the activation function. }
  \item{ data }{ the \code{data} argument. }
  \item{ net.result }{ a list containing the overall result of the neural network for every repetition.}
  \item{ weights }{ a list containing the fitted weights of the neural network for every repetition. }
  \item{ generalized.weights }{ a list containing the generalized weights of the neural network for every repetition. }
  \item{ result.matrix }{ a matrix containing the threshold, reached threshold, steps, error, AIC and BIC (if computed) and weights for every repetition. Each column represents one repetition. }
  \item{ startweights }{ a list containing the startweights of the neural network for every repetition. }
}
\references{ 
    Riedmiller M. (1994) 
    \emph{Rprop - Description and Implementation Details.}
    Technical Report. University of Karlsruhe.

    Riedmiller M. and Braun H. (1993) 
    \emph{A direct adaptive method for faster backpropagation learning: The RPROP algorithm.}
    Proceedings of the IEEE International Conference on Neural Networks (ICNN), pages 586-591.
    San Francisco.

    Anastasiadis A. et al. (2005) 
    \emph{New globally convergent training scheme based on the resilient propagation algorithm.} 
    Neurocomputing 64, pages 253-270. 

    Intrator O. and Intrator N. (1993)
    \emph{Using Neural Nets for Interpretation of Nonlinear Models.}
    Proceedings of the Statistical Computing Section, pages 244-249.
    San Francisco: American Statistical Society (eds).
}
\author{ Stefan Fritsch \email{fritsch@bips.uni-bremen.de} }

\seealso{
 \code{\link{plot.nn}} for plotting of the neural network.

 \code{\link{gwplot}} for plotting of the generalized weights.

 \code{\link{compute}} for computation of a given neural network for a new covariate vector.

 \code{\link{confidence.interval}} for calculation of confidence intervals of the weights.

 \code{\link{prediction}} for a summary of the output of the neural networks.
}
\examples{
# Train a network on the logical AND and OR of three binary inputs
# (no hidden layer, cross-entropy error, 10 repetitions)
AND <- c(rep(0,7),1)
OR <- c(0,rep(1,7))
binary.data <- data.frame(expand.grid(c(0,1), c(0,1), c(0,1)), AND, OR)
print(net <- neuralnet(AND+OR~Var1+Var2+Var3, binary.data, hidden=0,
                       rep=10, err.fct="ce", linear.output=FALSE))
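
# A network with more than one hidden layer is specified via a vector
# of hidden neurons; hidden=c(2,2) is an arbitrary illustrative choice
net2 <- neuralnet(AND+OR~Var1+Var2+Var3, binary.data, hidden=c(2,2),
                  err.fct="ce", linear.output=FALSE)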

# Model case-control status in the infert data set; likelihood=TRUE also
# reports AIC and BIC since 'ce' equals the negative log-likelihood here
data(infert, package="datasets")
print(net.infert <- neuralnet(case~parity+induced+spontaneous, infert,
      err.fct="ce", linear.output=FALSE, likelihood=TRUE))
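
# Apply the fitted network to new covariate values with compute();
# the columns correspond to parity, induced and spontaneous
# (illustrative values)
new.obs <- matrix(c(1, 0, 0,
                    2, 1, 0), ncol=3, byrow=TRUE)
compute(net.infert, new.obs)$net.result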
}
\keyword{ neural }