**Packages that use EvaluationException**

| Package | Description |
|---|---|
| de.jstacs.algorithms.optimization | Provides classes for different types of algorithms that are not directly linked to the modelling components of Jstacs: algorithms on graphs, algorithms for numerical optimization, and a basic alignment algorithm. |
| de.jstacs.classifiers.differentiableSequenceScoreBased | Provides the classes for Classifiers that are based on SequenceScores. |
| de.jstacs.classifiers.differentiableSequenceScoreBased.gendismix | Provides an implementation of a classifier that allows the parameters of a set of DifferentiableStatisticalModels to be trained by a unified generative-discriminative learning principle. |
| de.jstacs.classifiers.differentiableSequenceScoreBased.logPrior | Provides a general definition of a parameter log-prior and a number of implementations of Laplace and Gaussian priors. |
**Uses of EvaluationException in de.jstacs.algorithms.optimization**

Methods in de.jstacs.algorithms.optimization that throw EvaluationException:
`static double[] Optimizer.brentsMethod(OneDimensionalFunction f, double a, double x, double b, double tol)`
Approximates a minimum (not necessarily the global one) in the interval [a,b].

`static double[] Optimizer.brentsMethod(OneDimensionalFunction f, double a, double x, double fx, double b, double tol)`
Approximates a minimum (not necessarily the global one) in the interval [a,b].
`static int Optimizer.conjugateGradientsFR(DifferentiableFunction f, double[] currentValues, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out, Time t)`
The conjugate gradient algorithm by Fletcher and Reeves.

`static int Optimizer.conjugateGradientsPR(DifferentiableFunction f, double[] currentValues, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out, Time t)`
The conjugate gradient algorithm by Polak and Ribière.

`static int Optimizer.conjugateGradientsPRP(DifferentiableFunction f, double[] currentValues, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out, Time t)`
The conjugate gradient algorithm by Polak and Ribière in the "Polak-Ribière-Positive" variant.
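The three conjugate gradient variants above differ mainly in how the direction-update coefficient β is chosen. As a self-contained sketch of the idea (illustrative code, not the Jstacs implementation; it uses a fixed step size on a toy quadratic where a real optimizer would use a line search):

```java
import java.util.Arrays;

public class PolakRibiereCG {
    // Toy objective f(x) = sum_i (x_i - i)^2 with analytic gradient 2*(x_i - i).
    static double[] grad(double[] x) {
        double[] g = new double[x.length];
        for (int i = 0; i < x.length; i++) g[i] = 2 * (x[i] - i);
        return g;
    }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    public static double[] minimize(double[] x, int maxIter) {
        double[] g = grad(x);
        double[] d = new double[x.length];
        for (int i = 0; i < d.length; i++) d[i] = -g[i]; // start with steepest descent
        for (int iter = 0; iter < maxIter && dot(g, g) > 1e-20; iter++) {
            double alpha = 0.25; // fixed step for this toy problem; real code uses a line search
            for (int i = 0; i < x.length; i++) x[i] += alpha * d[i];
            double[] gNew = grad(x);
            // Polak-Ribiere coefficient beta = gNew^T (gNew - gOld) / (gOld^T gOld);
            // clipping at zero gives the "positive" variant, which restarts with steepest descent.
            double beta = Math.max(0, (dot(gNew, gNew) - dot(gNew, g)) / dot(g, g));
            for (int i = 0; i < x.length; i++) d[i] = -gNew[i] + beta * d[i];
            g = gNew;
        }
        return x;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(minimize(new double[3], 60)));
    }
}
```

The Fletcher-Reeves variant would instead use β = ‖gNew‖² / ‖gOld‖²; all variants reduce to steepest descent whenever β is zero.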
`double OneDimensionalSubFunction.evaluateFunction(double x)`

`abstract double OneDimensionalFunction.evaluateFunction(double x)`
Evaluates the function at position x.

`double NegativeOneDimensionalFunction.evaluateFunction(double x)`

`double OneDimensionalFunction.evaluateFunction(double[] x)`

`double NegativeOneDimensionalFunction.evaluateFunction(double[] x)`

`double NegativeFunction.evaluateFunction(double[] x)`

`double NegativeDifferentiableFunction.evaluateFunction(double[] x)`

`double Function.evaluateFunction(double[] x)`
Evaluates the function at a certain vector (in the mathematical sense) x.

`double[] NumericalDifferentiableFunction.evaluateGradientOfFunction(double[] x)`
Evaluates the gradient of a function at a certain vector (in the mathematical sense) x numerically.

`double[] NegativeDifferentiableFunction.evaluateGradientOfFunction(double[] x)`

`abstract double[] DifferentiableFunction.evaluateGradientOfFunction(double[] x)`
Evaluates the gradient of a function at a certain vector (in the mathematical sense) x, i.e., ∇f(x) = (∂f/∂x_1, ..., ∂f/∂x_n).
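A numerical gradient, as provided by NumericalDifferentiableFunction, is typically approximated by finite differences. A minimal sketch (illustrative, not the Jstacs implementation) using central differences:

```java
public class NumericalGradient {
    interface MultivariateFunction { double evaluate(double[] x); }

    // Central differences: df/dx_i ≈ (f(x + h*e_i) - f(x - h*e_i)) / (2h).
    static double[] gradient(MultivariateFunction f, double[] x, double h) {
        double[] g = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            double saved = x[i];
            x[i] = saved + h;
            double fPlus = f.evaluate(x);
            x[i] = saved - h;
            double fMinus = f.evaluate(x);
            x[i] = saved; // restore the coordinate before moving on
            g[i] = (fPlus - fMinus) / (2 * h);
        }
        return g;
    }

    public static void main(String[] args) {
        MultivariateFunction f = x -> x[0] * x[0] + 3 * x[1]; // exact gradient: (2*x0, 3)
        double[] g = gradient(f, new double[] {2, 5}, 1e-6);
        System.out.printf("%.4f %.4f%n", g[0], g[1]);
    }
}
```

Central differences have O(h²) error, compared to O(h) for the one-sided quotient, at the cost of two function evaluations per coordinate.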
`static double[] Optimizer.findBracket(OneDimensionalFunction f, double lower, double startDistance)`
This method returns a bracket containing a minimum.

`static double[] Optimizer.findBracket(OneDimensionalFunction f, double lower, double fLower, double startDistance)`
This method returns a bracket containing a minimum.
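Bracketing a minimum means finding an interval on which the function demonstrably turns upward again. One common approach, sketched here with illustrative names (not the Jstacs implementation), expands the step geometrically from the start point:

```java
public class BracketSearch {
    interface OneDimFunction { double at(double x); }

    // Walks from `lower` with geometrically growing steps until the function
    // value rises again, so the returned interval contains a (local) minimum.
    static double[] findBracket(OneDimFunction f, double lower, double startDistance) {
        double a = lower, b = lower + startDistance;
        double fa = f.at(a), fb = f.at(b);
        if (fb > fa) return new double[] {a, b}; // first step already goes uphill
        double step = startDistance;
        double c = b + (step *= 2);
        double fc = f.at(c);
        while (fc <= fb) { // keep doubling until the function turns upward
            a = b;
            b = c;
            fb = fc;
            c = b + (step *= 2);
            fc = f.at(c);
        }
        return new double[] {a, c};
    }

    public static void main(String[] args) {
        double[] bracket = findBracket(x -> (x - 5) * (x - 5), 0, 1);
        System.out.println(bracket[0] + " .. " + bracket[1]);
    }
}
```

Such a bracket is the precondition for interval-shrinking methods like goldenRatio and brentsMethod below.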
`double[] OneDimensionalFunction.findMin(double lower, double fLower, double eps, double startDistance)`
This method returns a minimum x and the value f(x), starting the search at lower.

`protected double[] DifferentiableFunction.findOneDimensionalMin(double[] x, double[] d, double alpha_0, double fAlpha_0, double linEps, double startDistance)`
This method is used to find an approximate minimum of a one-dimensional subfunction.
`static double[] Optimizer.goldenRatio(OneDimensionalFunction f, double lower, double upper, double eps)`
Approximates a minimum (not necessarily the global one) in the interval [lower,upper].

`static double[] Optimizer.goldenRatio(OneDimensionalFunction f, double lower, double p1, double fP1, double upper, double eps)`
Approximates a minimum (not necessarily the global one) in the interval [lower,upper].
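Golden-section search keeps two interior probe points whose spacing follows the golden ratio, so each shrinking step reuses one of the two previous evaluations and costs only one new function call. A minimal sketch (illustrative, not the Jstacs implementation):

```java
public class GoldenSection {
    interface OneDimFunction { double at(double x); }

    static final double R = (Math.sqrt(5) - 1) / 2; // golden ratio conjugate, about 0.618

    // Shrinks [lower, upper] around a minimum by a factor of R per iteration.
    static double minimize(OneDimFunction f, double lower, double upper, double eps) {
        double x1 = upper - R * (upper - lower);
        double x2 = lower + R * (upper - lower);
        double f1 = f.at(x1), f2 = f.at(x2);
        while (upper - lower > eps) {
            if (f1 < f2) { // a minimum lies in [lower, x2]
                upper = x2;
                x2 = x1; f2 = f1; // the old left probe becomes the new right probe
                x1 = upper - R * (upper - lower);
                f1 = f.at(x1);
            } else {       // a minimum lies in [x1, upper]
                lower = x1;
                x1 = x2; f1 = f2; // the old right probe becomes the new left probe
                x2 = lower + R * (upper - lower);
                f2 = f.at(x2);
            }
        }
        return (lower + upper) / 2;
    }

    public static void main(String[] args) {
        System.out.println(minimize(Math::cos, 2, 4, 1e-8)); // cos has its minimum at pi in [2,4]
    }
}
```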
`static int Optimizer.limitedMemoryBFGS(DifferentiableFunction f, double[] currentValues, byte m, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out, Time t)`
The Broyden-Fletcher-Goldfarb-Shanno version of limited-memory quasi-Newton methods.

`static int Optimizer.optimize(byte algorithm, DifferentiableFunction f, double[] currentValues, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out)`
This method provides access to all implemented optimization algorithms through a single entry point.

`static int Optimizer.optimize(byte algorithm, DifferentiableFunction f, double[] currentValues, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out, Time t)`
This method provides access to all implemented optimization algorithms through a single entry point.
`static int Optimizer.quasiNewtonBFGS(DifferentiableFunction f, double[] currentValues, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out, Time t)`
The Broyden-Fletcher-Goldfarb-Shanno version of the quasi-Newton method.

`static int Optimizer.quasiNewtonDFP(DifferentiableFunction f, double[] currentValues, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out, Time t)`
The Davidon-Fletcher-Powell version of the quasi-Newton method.

`static int Optimizer.steepestDescent(DifferentiableFunction f, double[] currentValues, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out, Time t)`
The steepest descent method.
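Steepest descent repeatedly steps along the negative gradient, with the step length chosen by a line search. A self-contained sketch (illustrative, not the Jstacs implementation) using backtracking with an Armijo-style sufficient-decrease test:

```java
public class SteepestDescentSketch {
    // Toy objective f(x, y) = (x - 1)^2 + 2*(y + 2)^2 with gradient (2(x-1), 4(y+2)).
    static double f(double[] p) {
        return (p[0] - 1) * (p[0] - 1) + 2 * (p[1] + 2) * (p[1] + 2);
    }

    static double[] grad(double[] p) {
        return new double[] {2 * (p[0] - 1), 4 * (p[1] + 2)};
    }

    static double[] step(double[] p, double[] g, double t) {
        return new double[] {p[0] - t * g[0], p[1] - t * g[1]};
    }

    public static double[] minimize(double[] p, int iters) {
        for (int k = 0; k < iters; k++) {
            double[] g = grad(p);
            double gNormSq = g[0] * g[0] + g[1] * g[1];
            if (gNormSq < 1e-18) break; // gradient (numerically) zero: done
            double t = 1.0;
            // Backtracking: halve t until the step yields a sufficient decrease.
            while (f(step(p, g, t)) > f(p) - 0.5 * t * gNormSq) t /= 2;
            p = step(p, g, t);
        }
        return p;
    }

    public static void main(String[] args) {
        double[] p = minimize(new double[] {0, 0}, 100);
        System.out.println(p[0] + " " + p[1]);
    }
}
```

The quasi-Newton and conjugate gradient methods above replace the raw negative gradient with a better search direction but keep the same outer step/line-search structure.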
**Uses of EvaluationException in de.jstacs.classifiers.differentiableSequenceScoreBased**

Methods in de.jstacs.classifiers.differentiableSequenceScoreBased that throw EvaluationException:

`double AbstractMultiThreadedOptimizableFunction.evaluateFunction(double[] x)`

`protected abstract void AbstractMultiThreadedOptimizableFunction.evaluateFunction(int index, int startClass, int startSeq, int endClass, int endSeq)`
This method evaluates the function for a part of the data.

`double[] AbstractMultiThreadedOptimizableFunction.evaluateGradientOfFunction(double[] x)`

`protected abstract double AbstractMultiThreadedOptimizableFunction.joinFunction()`
This method joins the partial results that have been computed using AbstractMultiThreadedOptimizableFunction.evaluateFunction(int, int, int, int, int).

`protected abstract double[] AbstractMultiThreadedOptimizableFunction.joinGradients()`
This method joins the gradients of each part that have been computed using AbstractMultiThreadedOptimizableFunction.evaluateGradientOfFunction(int, int, int, int, int).
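In the split/evaluate/join pattern above, each thread evaluates the function on its own part of the data, and the partial results are then joined into one value. A self-contained sketch of that pattern (illustrative, not the Jstacs implementation):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelFunctionValue {
    // Splits the data into one contiguous slice per thread, lets each thread
    // compute its partial sum, and then "joins" the partial results.
    public static double evaluate(double[] data, int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        double[] partial = new double[threads]; // one slot per thread, so no locking is needed
        CountDownLatch done = new CountDownLatch(threads);
        int chunk = data.length / threads;
        for (int t = 0; t < threads; t++) {
            final int index = t;
            final int start = t * chunk;
            final int end = (t == threads - 1) ? data.length : start + chunk;
            pool.execute(() -> {
                double s = 0;
                for (int i = start; i < end; i++) s += Math.log(data[i]);
                partial[index] = s; // each thread writes only its own slot
                done.countDown();
            });
        }
        done.await(); // wait for all partial evaluations
        pool.shutdown();
        double total = 0; // the join step: sum the partial results
        for (double p : partial) total += p;
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        double[] data = new double[1000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.printf("%.4f%n", evaluate(data, 4));
    }
}
```

The CountDownLatch establishes the happens-before relationship that makes the per-thread writes to `partial` visible to the joining thread.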
**Uses of EvaluationException in de.jstacs.classifiers.differentiableSequenceScoreBased.gendismix**

Methods in de.jstacs.classifiers.differentiableSequenceScoreBased.gendismix that throw EvaluationException:

`protected void OneDataSetLogGenDisMixFunction.evaluateFunction(int index, int startClass, int startSeq, int endClass, int endSeq)`

`protected void LogGenDisMixFunction.evaluateFunction(int index, int startClass, int startSeq, int endClass, int endSeq)`

`protected double LogGenDisMixFunction.joinFunction()`

`protected double[] LogGenDisMixFunction.joinGradients()`
**Uses of EvaluationException in de.jstacs.classifiers.differentiableSequenceScoreBased.logPrior**

Methods in de.jstacs.classifiers.differentiableSequenceScoreBased.logPrior that throw EvaluationException:

`abstract void LogPrior.addGradientFor(double[] params, double[] vector)`
Adds the gradient of the log-prior at the current parameters to a given vector.

`void CompositeLogPrior.addGradientFor(double[] params, double[] grad)`

`double SeparateLaplaceLogPrior.evaluateFunction(double[] x)`

`double SeparateGaussianLogPrior.evaluateFunction(double[] x)`

`double CompositeLogPrior.evaluateFunction(double[] x)`

`double[] LogPrior.evaluateGradientOfFunction(double[] params)`
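A Gaussian log-prior contributes a quadratic penalty to the objective, and addGradientFor accumulates its derivative into an existing gradient vector. A minimal sketch under the assumption of a single prior variance shared by all parameters (illustrative, not the Jstacs implementation; additive constants are dropped):

```java
public class GaussianLogPriorSketch {
    final double sigma2; // prior variance, assumed shared by all parameters in this sketch

    GaussianLogPriorSketch(double sigma2) { this.sigma2 = sigma2; }

    // Log-density of N(0, sigma^2) per parameter, up to an additive constant:
    // log p(params) = -sum_i params_i^2 / (2 * sigma^2) + const
    double evaluateFunction(double[] params) {
        double s = 0;
        for (double p : params) s += p * p;
        return -s / (2 * sigma2);
    }

    // Adds the partial derivatives -params_i / sigma^2 onto an existing gradient
    // vector, so the prior term accumulates into the overall objective's gradient.
    void addGradientFor(double[] params, double[] vector) {
        for (int i = 0; i < params.length; i++) vector[i] -= params[i] / sigma2;
    }

    public static void main(String[] args) {
        GaussianLogPriorSketch prior = new GaussianLogPriorSketch(1.0);
        double[] grad = new double[] {0.5, 0.5};
        prior.addGradientFor(new double[] {1, 2}, grad);
        System.out.println(prior.evaluateFunction(new double[] {1, 2}) + " " + grad[0] + " " + grad[1]);
    }
}
```

Accumulating into a caller-supplied vector (rather than returning a fresh array) lets the prior's gradient be added cheaply on top of the data term's gradient, which is the role addGradientFor plays in the objective functions above.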