Package | Description
---|---
`de.jstacs.algorithms.optimization` | Provides classes for different types of algorithms that are not directly linked to the modelling components of Jstacs: algorithms on graphs, algorithms for numerical optimization, and a basic alignment algorithm.
`de.jstacs.classifiers.differentiableSequenceScoreBased` | Provides the classes for classifiers that are based on `SequenceScore`s. It includes a sub-package for discriminative objective functions, namely conditional likelihood and supervised posterior, and a separate sub-package for the parameter priors that can be used for the supervised posterior.
`de.jstacs.classifiers.differentiableSequenceScoreBased.gendismix` | Provides an implementation of a classifier that allows training the parameters of a set of `DifferentiableStatisticalModel`s by a unified generative-discriminative learning principle.
`de.jstacs.classifiers.differentiableSequenceScoreBased.logPrior` | Provides a general definition of a parameter log-prior and a number of implementations of Laplace and Gaussian priors.
`de.jstacs.sequenceScores.statisticalModels.trainable.discrete.inhomogeneous` | This package contains various inhomogeneous models.
Modifier and Type | Method and Description
---|---
`static int` | `Optimizer.conjugateGradientsFR(DifferentiableFunction f, double[] currentValues, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out, Time t)` The conjugate gradient algorithm by Fletcher and Reeves.
`static int` | `Optimizer.conjugateGradientsPR(DifferentiableFunction f, double[] currentValues, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out, Time t)` The conjugate gradient algorithm by Polak and Ribière.
`static int` | `Optimizer.conjugateGradientsPRP(DifferentiableFunction f, double[] currentValues, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out, Time t)` The variant of the conjugate gradient algorithm by Polak and Ribière known as "Polak-Ribière-Positive".
`double` | `OneDimensionalFunction.evaluateFunction(double[] x)`
`double` | `NumericalDifferentiableFunction.evaluateFunction(double[] x)`
`double` | `NegativeOneDimensionalFunction.evaluateFunction(double[] x)`
`double` | `NegativeFunction.evaluateFunction(double[] x)`
`double` | `NegativeDifferentiableFunction.evaluateFunction(double[] x)`
`double` | `Function.evaluateFunction(double[] x)` Evaluates the function at a certain vector (in the mathematical sense) `x`.
`double[]` | `NumericalDifferentiableFunction.evaluateGradientOfFunction(double[] x)` Evaluates the gradient of the function at a certain vector (in the mathematical sense) `x` numerically.
`double[]` | `NegativeDifferentiableFunction.evaluateGradientOfFunction(double[] x)`
`abstract double[]` | `DifferentiableFunction.evaluateGradientOfFunction(double[] x)` Evaluates the gradient of the function at a certain vector (in the mathematical sense) `x`, i.e., ∇f(x) = (∂f/∂x_1, …, ∂f/∂x_n).
`double[]` | `DifferentiableFunction.findOneDimensionalMin(double[] x, double[] d, double alpha_0, double fAlpha_0, double linEps, double startDistance)` This method is used to find an approximation of the minimum of a one-dimensional subfunction.
`static int` | `Optimizer.limitedMemoryBFGS(DifferentiableFunction f, double[] currentValues, byte m, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out, Time t)` The Broyden-Fletcher-Goldfarb-Shanno version of limited-memory quasi-Newton methods.
`static int` | `Optimizer.optimize(byte algorithm, DifferentiableFunction f, double[] currentValues, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out)` This method enables you to use all implemented optimization algorithms through a single method.
`static int` | `Optimizer.optimize(byte algorithm, DifferentiableFunction f, double[] currentValues, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out, Time t)` This method enables you to use all implemented optimization algorithms through a single method.
`static int` | `Optimizer.quasiNewtonBFGS(DifferentiableFunction f, double[] currentValues, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out, Time t)` The Broyden-Fletcher-Goldfarb-Shanno version of the quasi-Newton method.
`static int` | `Optimizer.quasiNewtonDFP(DifferentiableFunction f, double[] currentValues, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out, Time t)` The Davidon-Fletcher-Powell version of the quasi-Newton method.
`void` | `OneDimensionalSubFunction.set(double[] current, double[] d)` Sets the current values and direction.
`static int` | `Optimizer.steepestDescent(DifferentiableFunction f, double[] currentValues, TerminationCondition terminationMode, double linEps, StartDistanceForecaster startDistance, OutputStream out, Time t)` The steepest descent algorithm.
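The contract behind this table is small: a `Function` exposes `evaluateFunction(double[] x)`, a `DifferentiableFunction` additionally exposes `evaluateGradientOfFunction(double[] x)` (which `NumericalDifferentiableFunction` approximates by finite differences), and the `Optimizer` methods iterate using those two calls. The following self-contained sketch mirrors that pattern without depending on Jstacs: the class name, the quadratic test function, and the fixed-step loop are all invented for illustration, and the crude fixed-step descent only stands in for the line-search-based `Optimizer.steepestDescent`.

```java
// Hypothetical, self-contained sketch mirroring the Jstacs Function /
// DifferentiableFunction contract; it does NOT use the Jstacs library.
public class SteepestDescentSketch {

    /** Analogue of Function.evaluateFunction: f(x) = (x0 - 1)^2 + 2 * (x1 + 3)^2. */
    static double evaluateFunction(double[] x) {
        return (x[0] - 1) * (x[0] - 1) + 2 * (x[1] + 3) * (x[1] + 3);
    }

    /** Analogue of NumericalDifferentiableFunction: central finite differences. */
    static double[] evaluateGradientOfFunction(double[] x) {
        double eps = 1e-6;
        double[] g = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            double old = x[i];
            x[i] = old + eps;
            double fPlus = evaluateFunction(x);
            x[i] = old - eps;
            double fMinus = evaluateFunction(x);
            x[i] = old; // restore the coordinate
            g[i] = (fPlus - fMinus) / (2 * eps);
        }
        return g;
    }

    /** Crude fixed-step steepest descent; Optimizer.steepestDescent uses a line search instead. */
    static double[] minimize(double[] x, double stepSize, int iterations) {
        for (int it = 0; it < iterations; it++) {
            double[] g = evaluateGradientOfFunction(x);
            for (int i = 0; i < x.length; i++) {
                x[i] -= stepSize * g[i];
            }
        }
        return x;
    }

    public static void main(String[] args) {
        double[] x = minimize(new double[] {0.0, 0.0}, 0.1, 200);
        // converges toward the minimum at (1, -3)
        System.out.printf("minimum near (%.4f, %.4f)%n", x[0], x[1]);
    }
}
```

Note that the real API minimizes, which is why the `Negative*` wrapper classes above exist: maximizing a likelihood is done by minimizing its negation.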
Modifier and Type | Method and Description
---|---
`double` | `AbstractMultiThreadedOptimizableFunction.evaluateFunction(double[] x)`
`double[]` | `AbstractMultiThreadedOptimizableFunction.evaluateGradientOfFunction(double[] x)`
`protected abstract double` | `AbstractMultiThreadedOptimizableFunction.joinFunction()` This method joins the partial results that have been computed using `AbstractMultiThreadedOptimizableFunction.evaluateFunction(int, int, int, int, int)`.
`abstract void` | `OptimizableFunction.setParams(double[] current)` Sets the current values as parameters.
`void` | `AbstractMultiThreadedOptimizableFunction.setParams(double[] params)`
`protected void` | `DiffSSBasedOptimizableFunction.setParams(int index)`
`protected abstract void` | `AbstractMultiThreadedOptimizableFunction.setParams(int index)` This method sets the parameters for thread `index`.
`protected void` | `DiffSSBasedOptimizableFunction.setThreadIndependentParameters()`
`protected abstract void` | `AbstractMultiThreadedOptimizableFunction.setThreadIndependentParameters()` This method allows setting thread-independent parameters.
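The split/join pattern suggested by these signatures (each thread evaluates the function on its own slice of the data; `joinFunction()` combines the partial values) can be sketched with plain `java.util.concurrent` primitives. This is a hypothetical stand-in, not Jstacs code: the class name, the per-element function, and the chunking scheme are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of the split/join idea behind
// AbstractMultiThreadedOptimizableFunction: worker threads compute partial
// values on slices of the data; the "join" step sums them up.
public class MultiThreadedSumSketch {

    /** Partial value on data[start..end): stands in for evaluateFunction(int, int, ...). */
    static double partialValue(double[] data, int start, int end) {
        double s = 0;
        for (int i = start; i < end; i++) {
            s += Math.log1p(data[i] * data[i]);
        }
        return s;
    }

    /** Full evaluation: split into chunks, evaluate in parallel, join by summing. */
    static double evaluateFunction(double[] data, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<Double>> parts = new ArrayList<>();
            int chunk = (data.length + threads - 1) / threads;
            for (int t = 0; t < threads; t++) {
                int start = t * chunk;
                int end = Math.min(data.length, start + chunk);
                parts.add(pool.submit(() -> partialValue(data, start, end)));
            }
            double joined = 0; // analogue of joinFunction()
            for (Future<Double> f : parts) {
                joined += f.get();
            }
            return joined;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        double[] data = new double[1000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i * 0.01;
        }
        double multi = evaluateFunction(data, 4);
        double single = partialValue(data, 0, data.length);
        // the joined partial sums agree with the sequential sum (up to rounding)
        System.out.println(Math.abs(multi - single) < 1e-8);
    }
}
```

This also makes the role of `setParams(int index)` plausible: before such a parallel evaluation, each worker needs a consistent copy of the current parameter vector, while `setThreadIndependentParameters()` handles the state shared across all threads.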
Modifier and Type | Method and Description
---|---
`protected double` | `LogGenDisMixFunction.joinFunction()`
Modifier and Type | Method and Description
---|---
`double` | `SeparateLaplaceLogPrior.evaluateFunction(double[] x)`
`double` | `SeparateGaussianLogPrior.evaluateFunction(double[] x)`
`double` | `CompositeLogPrior.evaluateFunction(double[] x)`
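As a rough illustration of what a Gaussian log-prior on parameters evaluates, the sketch below sums independent Gaussian log-densities over a parameter vector. It is an assumption-laden stand-in, not the `SeparateGaussianLogPrior` implementation: in Jstacs the means, variances, and any normalization come from the prior's configuration, which is not shown in this table.

```java
// Hypothetical sketch: log of an i.i.d. Gaussian prior over parameters,
// log p(x) = sum_i [ -0.5 * log(2 * pi * var) - (x_i - mean)^2 / (2 * var) ].
// The mean/variance handling is illustrative, not Jstacs's actual configuration.
public class GaussianLogPriorSketch {

    static double evaluateFunction(double[] x, double mean, double variance) {
        double logNorm = -0.5 * Math.log(2 * Math.PI * variance);
        double logPrior = 0;
        for (double xi : x) {
            double d = xi - mean;
            logPrior += logNorm - d * d / (2 * variance);
        }
        return logPrior;
    }

    public static void main(String[] args) {
        // log-density of three parameters under a standard normal prior
        System.out.println(evaluateFunction(new double[] {0.5, -0.2, 1.0}, 0.0, 1.0));
    }
}
```

A Laplace prior would replace the squared term with an absolute difference scaled by the prior's scale parameter, which is what makes it sparsity-inducing compared to the Gaussian.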
Modifier and Type | Method and Description
---|---
`double` | `MEMTools.DualFunction.evaluateFunction(double[] x)`
`double[]` | `MEMTools.DualFunction.evaluateGradientOfFunction(double[] x)`