% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/TuneControl.R
\name{TuneControl}
\alias{TuneControl}
\title{Control object for tuning}
\arguments{
\item{same.resampling.instance}{(\code{logical(1)})\cr
Should the same resampling instance be used for all evaluations to reduce variance?
Default is \code{TRUE}.}

\item{impute.val}{(\link{numeric})\cr
If something goes wrong during optimization (e.g. the learner crashes),
this value is fed back to the tuner, so the tuning algorithm does not abort.
It is not stored in the optimization path; an NA and a corresponding error message are
logged instead.
Note that this value is internally multiplied by -1 for maximization measures, so even
when maximizing you should enter a large positive value here.
Default is the worst obtainable value of the performance measure you optimize for if
you aggregate by mean value, or \code{Inf} otherwise.
For multi-criteria optimization pass a vector of imputation values, one for each of your
measures, in the same order as your measures.}

\item{start}{(\link{list})\cr
Named list of initial parameter values.}

\item{tune.threshold}{(\code{logical(1)})\cr
Should the threshold be tuned for the measure at hand, after each hyperparameter evaluation,
via \link{tuneThreshold}?
Only works for classification if the predict type is \dQuote{prob}.
Default is \code{FALSE}.}

\item{tune.threshold.args}{(\link{list})\cr
Further arguments for threshold tuning that are passed down to \link{tuneThreshold}.
Default is none.}

\item{log.fun}{(\code{function} | \code{character(1)})\cr
Function used for logging. If set to \dQuote{default} (the default), the evaluated design points, the resulting
performances, and the runtime will be reported.
If set to \dQuote{memory}, the memory usage for each evaluation will also be displayed, with a small increase
in run time.
Otherwise a function with arguments \code{learner}, \code{resampling}, \code{measures},
\code{par.set}, \code{control}, \code{opt.path}, \code{dob}, \code{x}, \code{y}, \code{remove.nas},
\code{stage} and \code{prev.stage} is expected; a sketch of such a function is given in the examples.
The built-in functions display the performance measures and the time needed for evaluation;
the \dQuote{memory} variant additionally shows the currently used memory and the maximum
memory ever used before (both taken from \link{gc}).
See the implementation for details.}

\item{final.dw.perc}{(\code{numeric(1)})\cr
If a Learner wrapped by a \link{makeDownsampleWrapper} is used, you can define the value of
\code{dw.perc} which is used to train the Learner with the final parameter setting found by
the tuning.
Default is \code{NULL}, which does not change anything.}

\item{...}{(any)\cr
Further control parameters passed to the \code{control} arguments of
\link[cmaes:cma_es]{cmaes::cma_es} or \link[GenSA:GenSA]{GenSA::GenSA}, as well as
towards the \code{tunerConfig} argument of \link[irace:irace]{irace::irace}.}
}
\description{
General tune control object.
}
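\details{
The examples give a minimal, hypothetical sketch of how the arguments documented above are
typically supplied through one of the \code{makeTuneControl*} constructors (here
\link{makeTuneControlRandom}); the learner, task, search space and custom logging function
are placeholders chosen purely for illustration.
}
\examples{
\dontrun{
library(mlr)

# Small illustrative search space for the cost parameter of a classification SVM.
ps = makeParamSet(
  makeNumericParam("C", lower = -5, upper = 5, trafo = function(x) 2^x)
)

# Random search control: reuse the same resampling instance for all evaluations,
# tune the decision threshold after each evaluation, and feed back the worst
# obtainable mmce (= 1) to the tuner if a learner crashes.
ctrl = makeTuneControlRandom(
  maxit = 10L,
  same.resampling.instance = TRUE,
  tune.threshold = TRUE,
  impute.val = 1
)

# Threshold tuning requires predict.type = "prob".
lrn = makeLearner("classif.ksvm", predict.type = "prob")
rdesc = makeResampleDesc("CV", iters = 3L)

res = tuneParams(lrn, task = sonar.task, resampling = rdesc,
  par.set = ps, control = ctrl, measures = mmce)
print(res)

# Sketch of a custom log.fun with the required signature; here it is assumed that
# x holds the current parameter values and y the resulting performance values.
myLogFun = function(learner, resampling, measures, par.set, control, opt.path,
  dob, x, y, remove.nas, stage, prev.stage) {
  message("design point: ", paste(unlist(x), collapse = ", "),
    " -> performance: ", paste(y, collapse = ", "))
}
ctrl2 = makeTuneControlRandom(maxit = 5L, log.fun = myLogFun)
}
}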
\seealso{
Other tune: 
\code{\link{getNestedTuneResultsOptPathDf}()},
\code{\link{getNestedTuneResultsX}()},
\code{\link{getResamplingIndices}()},
\code{\link{getTuneResult}()},
\code{\link{makeModelMultiplexerParamSet}()},
\code{\link{makeModelMultiplexer}()},
\code{\link{makeTuneControlCMAES}()},
\code{\link{makeTuneControlDesign}()},
\code{\link{makeTuneControlGenSA}()},
\code{\link{makeTuneControlGrid}()},
\code{\link{makeTuneControlIrace}()},
\code{\link{makeTuneControlMBO}()},
\code{\link{makeTuneControlRandom}()},
\code{\link{makeTuneWrapper}()},
\code{\link{tuneParams}()},
\code{\link{tuneThreshold}()}
}
\concept{tune}