SHOGUN
v3.0.0
All classes and functions are contained in the shogun namespace. More...
Classes
class | DynArray |
Template Dynamic array class that creates an array that can be used like a list or an array. More... | |
class | Parallel |
Class Parallel provides helper functions for multithreading. More... | |
struct | TParameter |
parameter struct More... | |
class | Parameter |
Parameter class. More... | |
class | SGParamInfo |
Class that holds information about a certain parameter of a CSGObject. Contains name, type, etc. This is used for mapping types that have changed in different versions of shogun. Instances of this class may be compared to each other. Ordering is based on name; equality is based on all attributes. More... | |
class | ParameterMapElement |
Class to hold instances of a parameter map. Each element contains a key and a set of values, each of which is of type SGParamInfo. May be compared to each other based on their keys. More... | |
class | ParameterMap |
Implements a map of ParameterMapElement instances; maps one key to a set of values. More... | |
class | CSGObject |
Class SGObject is the base class of all shogun objects. More... | |
class | Version |
Class Version provides version information. More... | |
class | CAveragedPerceptron |
Class AveragedPerceptron implements the standard linear (online) averaged perceptron algorithm, a simple extension of the perceptron. More... | |
class | CFeatureBlockLogisticRegression |
class FeatureBlockLogisticRegression, a linear binary logistic loss classifier for problems with complex feature relations. Currently two feature relations are supported - feature group (done via CIndexBlockGroup) and feature tree (done via CIndexTree). Handling of feature relations is done via L1/Lq (for groups) and L1/L2 (for trees) regularization. More... | |
class | CGaussianProcessBinaryClassification |
Class GaussianProcessBinaryClassification implements binary classification based on Gaussian Processes. More... | |
class | CLDA |
Class LDA implements regularized Linear Discriminant Analysis. More... | |
class | CLPBoost |
Class LPBoost trains a linear classifier called Linear Programming Machine, i.e. a SVM using a \(\ell_1\) norm regularizer. More... | |
class | CLPM |
Class LPM trains a linear classifier called Linear Programming Machine, i.e. a SVM using a \(\ell_1\) norm regularizer. More... | |
class | CMKL |
Multiple Kernel Learning. More... | |
class | CMKLClassification |
Multiple Kernel Learning for two-class-classification. More... | |
class | CMKLMulticlass |
MKLMulticlass is a class for L1-norm Multiclass MKL. More... | |
class | MKLMulticlassGLPK |
MKLMulticlassGLPK is a helper class for MKLMulticlass. More... | |
class | MKLMulticlassGradient |
MKLMulticlassGradient is a helper class for MKLMulticlass. More... | |
class | MKLMulticlassOptimizationBase |
MKLMulticlassOptimizationBase is a helper class for MKLMulticlass. More... | |
class | CMKLOneClass |
Multiple Kernel Learning for one-class-classification. More... | |
class | CNearestCentroid |
Class NearestCentroid, an implementation of Nearest Shrunk Centroid classifier. More... | |
class | CPerceptron |
Class Perceptron implements the standard linear (online) perceptron. More... | |
class | CPluginEstimate |
class PluginEstimate More... | |
class | CCPLEXSVM |
CplexSVM a SVM solver implementation based on cplex (unfinished). More... | |
class | CGNPPLib |
class GNPPLib, a Library of solvers for Generalized Nearest Point Problem (GNPP). More... | |
class | CGNPPSVM |
class GNPPSVM More... | |
class | CGPBTSVM |
class GPBTSVM More... | |
class | CLibLinear |
class to implement LibLinear More... | |
class | CLibSVM |
LibSVM. More... | |
class | CLibSVMOneClass |
class LibSVMOneClass More... | |
class | CMPDSVM |
class MPDSVM More... | |
class | CNewtonSVM |
NewtonSVM. In this implementation, a linear SVM is trained in its primal form using Newton-like iterations. The implementation is ported from Olivier Chapelle's fast Newton-based SVM solver, which can be found at http://mloss.org/software/view/30/. For further information on this implementation, refer to this paper: http://www.kyb.mpg.de/publications/attachments/neco_%5B0%5D.pdf. More... | |
class | COnlineLibLinear |
Class implementing a purely online version of LibLinear, using the L2R_L1LOSS_SVC_DUAL solver only. More... | |
class | COnlineSVMSGD |
class OnlineSVMSGD More... | |
class | CQPBSVMLib |
class QPBSVMLib More... | |
class | CSGDQN |
class SGDQN More... | |
class | CSVM |
A generic Support Vector Machine Interface. More... | |
class | CSVMLight |
class SVMlight More... | |
class | CSVMLightOneClass |
Trains a one-class C-SVM. More... | |
class | CSVMLin |
class SVMLin More... | |
class | CSVMOcas |
class SVMOcas More... | |
class | CSVMSGD |
class SVMSGD More... | |
class | CWDSVMOcas |
class WDSVMOcas More... | |
class | CVwCacheReader |
Base class from which all cache readers for VW should be derived. More... | |
class | CVwCacheWriter |
CVwCacheWriter is the base class for all VW cache creating classes. More... | |
class | CVwNativeCacheReader |
Class CVwNativeCacheReader reads from a cache exactly as that which has been produced by VW's default cache format. More... | |
class | CVwNativeCacheWriter |
Class CVwNativeCacheWriter writes a cache exactly as that which would be produced by VW's default cache format. More... | |
class | CVwAdaptiveLearner |
VwAdaptiveLearner uses an adaptive subgradient technique to update weights. More... | |
class | CVwNonAdaptiveLearner |
VwNonAdaptiveLearner uses a standard gradient descent weight update rule. More... | |
class | CVowpalWabbit |
Class CVowpalWabbit is the implementation of the online learning algorithm used in Vowpal Wabbit. More... | |
class | VwFeature |
One feature in VW. More... | |
class | VwExample |
Example class for VW. More... | |
class | VwLabel |
Class VwLabel holds a label object used by VW. More... | |
class | CVwEnvironment |
Class CVwEnvironment is the environment used by VW. More... | |
class | CVwLearner |
Base class for all VW learners. More... | |
class | CVwParser |
CVwParser is the object which provides the functions to parse examples from buffered input. More... | |
class | CVwRegressor |
Regressor used by VW. More... | |
class | CGMM |
Gaussian Mixture Model interface. More... | |
class | CHierarchical |
Agglomerative hierarchical single linkage clustering. More... | |
class | CKMeans |
KMeans clustering; partitions the data into k (a priori specified) clusters. More... | |
class | CConverter |
class Converter used to convert data More... | |
class | CDiffusionMaps |
class DiffusionMaps used to preprocess given data using Diffusion Maps dimensionality reduction technique as described in More... | |
class | CEmbeddingConverter |
class EmbeddingConverter (part of the Efficient Dimensionality Reduction Toolkit) used to construct embeddings of features, e.g. construct dense numeric embedding of string features More... | |
class | CFactorAnalysis |
class | CHashedDocConverter |
This class can be used to convert a document collection contained in a CStringFeatures<char> object, where each document is stored as a single vector, into a hashed Bag-of-Words representation. As in the standard Bag-of-Words representation, this class considers each document as a collection of tokens, which are then hashed into a new feature space of a specified dimension. The class is very flexible: the user can specify the tokenizer used for each document, whether the results should be normalized with regard to the square root of the document size, and whether to combine different tokens. The latter implements a k-skip n-grams approach, meaning that up to n tokens can be combined while skipping up to k. E.g. for the tokens ["a", "b", "c", "d"], with n_grams = 2 and skips = 2, one would get the following combinations: ["a", "ab", "ac" (skipped 1), "ad" (skipped 2), "b", "bc", "bd" (skipped 1), "c", "cd", "d"]. More... | |
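The k-skip n-grams combination described above can be sketched in a few lines of Python. This is a hypothetical stand-alone illustration for the n_grams = 2 case only, not SHOGUN's C++ implementation; the function name `k_skip_n_grams` is made up, and the real converter additionally hashes each combination into a fixed-dimension feature space.

```python
def k_skip_n_grams(tokens, skips=2):
    """Unigrams plus 2-token combinations, skipping up to `skips` tokens.

    Illustrates the k-skip n-grams scheme for n_grams = 2 only.
    """
    out = []
    for i, tok in enumerate(tokens):
        out.append(tok)  # the token on its own
        for s in range(skips + 1):  # skip 0..k intermediate tokens
            j = i + 1 + s
            if j < len(tokens):
                out.append(tok + tokens[j])
    return out

print(k_skip_n_grams(["a", "b", "c", "d"]))
# → ['a', 'ab', 'ac', 'ad', 'b', 'bc', 'bd', 'c', 'cd', 'd']
```

The output reproduces exactly the combination list quoted in the description above.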
class | CHessianLocallyLinearEmbedding |
class HessianLocallyLinearEmbedding used to preprocess data using Hessian Locally Linear Embedding algorithm as described in More... | |
class | CFastICA |
class FastICA More... | |
class | CFFSep |
class FFSep More... | |
class | CICAConverter |
class ICAConverter Base class for ICA algorithms More... | |
class | CJade |
class Jade More... | |
class | CJediSep |
class JediSep More... | |
class | CSOBI |
class SOBI More... | |
class | CUWedgeSep |
class UWedgeSep More... | |
class | CIsomap |
class Isomap used to embed data using Isomap algorithm as described in More... | |
class | CKernelLocallyLinearEmbedding |
class KernelLocallyLinearEmbedding used to construct embeddings of data using kernel formulation of Locally Linear Embedding algorithm as described in More... | |
class | CLaplacianEigenmaps |
class LaplacianEigenmaps used to construct embeddings of data using Laplacian Eigenmaps algorithm as described in: More... | |
class | CLinearLocalTangentSpaceAlignment |
class LinearLocalTangentSpaceAlignment converter used to construct embeddings as described in: More... | |
class | CLocalityPreservingProjections |
class LocalityPreservingProjections used to compute embeddings of data using Locality Preserving Projections method as described in More... | |
class | CLocallyLinearEmbedding |
class LocallyLinearEmbedding used to embed data using Locally Linear Embedding algorithm described in More... | |
class | CLocalTangentSpaceAlignment |
class LocalTangentSpaceAlignment used to embed data using Local Tangent Space Alignment (LTSA) algorithm as described in: More... | |
class | CManifoldSculpting |
class | CMultidimensionalScaling |
class MultidimensionalScaling is used to perform multidimensional scaling (capable of landmark approximation if requested). More... | |
class | CNeighborhoodPreservingEmbedding |
NeighborhoodPreservingEmbedding converter used to construct embeddings as described in: More... | |
class | CStochasticProximityEmbedding |
class StochasticProximityEmbedding used to construct embeddings of data using the Stochastic Proximity algorithm. More... | |
class | CTDistributedStochasticNeighborEmbedding |
class | CAttenuatedEuclideanDistance |
class AttenuatedEuclideanDistance More... | |
class | CBrayCurtisDistance |
class Bray-Curtis distance More... | |
class | CCanberraMetric |
class CanberraMetric More... | |
class | CCanberraWordDistance |
class CanberraWordDistance More... | |
class | CChebyshewMetric |
class ChebyshewMetric More... | |
class | CChiSquareDistance |
class ChiSquareDistance More... | |
class | CCosineDistance |
class CosineDistance More... | |
class | CCustomDistance |
The Custom Distance allows for custom user provided distance matrices. More... | |
class | CCustomMahalanobisDistance |
Class CustomMahalanobisDistance used to compute the distance between feature vectors \( \vec{x_i} \) and \( \vec{x_j} \) as \( (\vec{x_i} - \vec{x_j})^T \mathbf{M} (\vec{x_i} - \vec{x_j}) \), given the matrix \( \mathbf{M} \) which will be referred to as Mahalanobis matrix. More... | |
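The quadratic form above is straightforward to evaluate; here is a minimal NumPy sketch (an illustration only, not SHOGUN's API; the function name is hypothetical):

```python
import numpy as np

def custom_mahalanobis(x_i, x_j, M):
    # d(x_i, x_j) = (x_i - x_j)^T M (x_i - x_j), with M the Mahalanobis matrix
    d = np.asarray(x_i, dtype=float) - np.asarray(x_j, dtype=float)
    return float(d @ np.asarray(M) @ d)

# With M = I the distance reduces to the squared Euclidean distance:
print(custom_mahalanobis([1.0, 2.0], [4.0, 6.0], np.eye(2)))  # → 25.0
```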
class | CDenseDistance |
template class DenseDistance More... | |
class | CDistance |
Class Distance, a base class for all the distances used in the Shogun toolbox. More... | |
class | CEuclideanDistance |
class EuclideanDistance More... | |
class | CGeodesicMetric |
class GeodesicMetric More... | |
class | CHammingWordDistance |
class HammingWordDistance More... | |
class | CJensenMetric |
class JensenMetric More... | |
class | CKernelDistance |
The Kernel distance takes a distance as input. More... | |
class | CMahalanobisDistance |
class MahalanobisDistance More... | |
class | CManhattanMetric |
class ManhattanMetric More... | |
class | CManhattanWordDistance |
class ManhattanWordDistance More... | |
class | CMinkowskiMetric |
class MinkowskiMetric More... | |
class | CRealDistance |
class RealDistance More... | |
class | CSparseDistance |
template class SparseDistance More... | |
class | CSparseEuclideanDistance |
class SparseEuclideanDistance More... | |
class | CStringDistance |
template class StringDistance More... | |
class | CTanimotoDistance |
class TanimotoDistance (Tanimoto coefficient) More... | |
class | CGaussianDistribution |
Dense version of the well-known Gaussian probability distribution, defined as
\[ \mathcal{N}_x(\mu,\Sigma)= \frac{1}{\sqrt{|2\pi\Sigma|}} \exp\left(-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right) \]. More... | |
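The density above is usually evaluated in the log domain for numerical stability; a minimal NumPy sketch of that computation (illustrative only, not CGaussianDistribution's interface):

```python
import numpy as np

def gaussian_log_pdf(x, mu, Sigma):
    """log N_x(mu, Sigma) = -0.5 * (log|2*pi*Sigma| + (x-mu)^T Sigma^{-1} (x-mu))."""
    d = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    _, logdet = np.linalg.slogdet(2.0 * np.pi * np.asarray(Sigma, dtype=float))
    return -0.5 * (logdet + d @ np.linalg.solve(Sigma, d))

# 1-D standard normal at x = 0: density 1/sqrt(2*pi) ~ 0.3989
print(np.exp(gaussian_log_pdf([0.0], [0.0], [[1.0]])))
```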
class | CProbabilityDistribution |
A base class for representing n-dimensional probability distribution over the real numbers (64bit) for which various statistics can be computed and which can be sampled. More... | |
class | CDistribution |
Base class Distribution from which all methods implementing a distribution are derived. More... | |
class | CGaussian |
Gaussian distribution interface. More... | |
class | CGHMM |
class GHMM - this class is non-functional and was meant to implement a Generalized Hidden Markov Model (aka semi-hidden Markov model). More... | |
class | CHistogram |
Class Histogram computes a histogram over all 16bit unsigned integers in the features. More... | |
class | Model |
class Model More... | |
class | CHMM |
Hidden Markov Model. More... | |
class | CLinearHMM |
The class LinearHMM is for learning Higher Order Markov chains. More... | |
class | CPositionalPWM |
Positional PWM. More... | |
class | CCombinationRule |
CombinationRule abstract class The CombinationRule defines an interface to how to combine the classification or regression outputs of an ensemble of Machines. More... | |
class | CMajorityVote |
CMajorityVote is a CWeightedMajorityVote combiner, where each Machine's weight in the ensemble is 1.0. More... | |
class | CMeanRule |
CMeanRule simply averages the outputs of the Machines in the ensemble. More... | |
class | CWeightedMajorityVote |
Weighted Majority Vote implementation. More... | |
class | CBinaryClassEvaluation |
The class TwoClassEvaluation, a base class used to evaluate binary classification labels. More... | |
class | CClusteringAccuracy |
clustering accuracy More... | |
class | CClusteringEvaluation |
The base class used to evaluate clustering. More... | |
class | CClusteringMutualInformation |
clustering (normalized) mutual information More... | |
class | CContingencyTableEvaluation |
The class ContingencyTableEvaluation, a base class used to evaluate 2-class classification with TP, FP, TN, FN rates. More... | |
class | CAccuracyMeasure |
class AccuracyMeasure used to measure accuracy of 2-class classifier. More... | |
class | CErrorRateMeasure |
class ErrorRateMeasure used to measure error rate of 2-class classifier. More... | |
class | CBALMeasure |
class BALMeasure used to measure balanced error of 2-class classifier. More... | |
class | CWRACCMeasure |
class WRACCMeasure used to measure weighted relative accuracy of 2-class classifier. More... | |
class | CF1Measure |
class F1Measure used to measure F1 score of 2-class classifier. More... | |
class | CCrossCorrelationMeasure |
class CrossCorrelationMeasure used to measure cross correlation coefficient of 2-class classifier. More... | |
class | CRecallMeasure |
class RecallMeasure used to measure recall of 2-class classifier. More... | |
class | CPrecisionMeasure |
class PrecisionMeasure used to measure precision of 2-class classifier. More... | |
class | CSpecificityMeasure |
class SpecificityMeasure used to measure specificity of 2-class classifier. More... | |
class | CCrossValidationResult |
type to encapsulate the results of an evaluation run. May contain a confidence interval (if conf_int_alpha != 0). m_conf_int_alpha is the probability of an error, i.e. that the value does not lie in the confidence interval. More... | |
class | CCrossValidation |
base class for cross-validation evaluation. Given a learning machine, a splitting strategy, an evaluation criterion, features and corresponding labels, this provides an interface for cross-validation. Results may be retrieved using the evaluate method. A number of repetitions may be specified for obtaining more accurate results. The arithmetic mean of different runs is returned along with confidence intervals, if a p-value is specified. Default number of runs is one; confidence interval computation is disabled. More... | |
class | CCrossValidationMKLStorage |
Class for storing MKL weights in every fold of cross-validation. More... | |
class | CCrossValidationMulticlassStorage |
Class for storing multiclass evaluation information in every fold of cross-validation. More... | |
class | CCrossValidationOutput |
Class for managing individual folds in cross-validation. More... | |
class | CCrossValidationPrintOutput |
Class for outputting cross-validation intermediate results to the standard output. Simply prints all messages it gets. More... | |
class | CCrossValidationSplitting |
Implementation of normal cross-validation based on CSplittingStrategy. Produces subset index sets of equal size (at most one difference). More... | |
class | CDifferentiableFunction |
An abstract class that describes a differentiable function used for GradientEvaluation. More... | |
class | CEvaluation |
Class Evaluation, a base class for other classes used to evaluate labels, e.g. accuracy of classification or mean squared error of regression. More... | |
class | CEvaluationResult |
Abstract class that contains the result generated by the MachineEvaluation class. More... | |
class | CGradientCriterion |
Simple class which specifies the direction of gradient search. More... | |
class | CGradientEvaluation |
Class evaluates a machine using its associated differentiable function for the function value and its gradient with respect to parameters. More... | |
class | CGradientResult |
Container class that returns results from GradientEvaluation. It contains the function value as well as its gradient. More... | |
class | CMachineEvaluation |
Machine Evaluation is an abstract class that evaluates a machine according to some criterion. More... | |
class | CMeanAbsoluteError |
Class MeanAbsoluteError used to compute an error of regression model. More... | |
class | CMeanSquaredError |
Class MeanSquaredError used to compute an error of regression model. More... | |
class | CMeanSquaredLogError |
Class CMeanSquaredLogError used to compute an error of regression model. More... | |
class | CMulticlassAccuracy |
The class MulticlassAccuracy used to compute accuracy of multiclass classification. More... | |
class | CMulticlassOVREvaluation |
The class MulticlassOVREvaluation used to compute evaluation parameters of multiclass classification via binary OvR decomposition and given binary evaluation technique. More... | |
class | CPRCEvaluation |
Class PRCEvaluation used to evaluate PRC (Precision Recall Curve) and an area under PRC curve (auPRC). More... | |
class | CROCEvaluation |
Class ROCEvaluation used to evaluate ROC (Receiver Operating Characteristic) and the area under the ROC curve (auROC). More... | |
class | CSplittingStrategy |
Abstract base class for all splitting types. Takes a CLabels instance and generates a desired number of subsets which are being accessed by their indices via the method generate_subset_indices(...). More... | |
class | CStratifiedCrossValidationSplitting |
Implementation of stratified cross-validation based on CSplittingStrategy. Produces subset index sets of equal size (at most one difference) in which the label ratio is equal (at most one difference) to the label ratio of the specified labels. Do not use for regression, since it may be impossible to distribute the labels nicely in that case. More... | |
class | CStructuredAccuracy |
class CStructuredAccuracy used to compute accuracy of structured classification More... | |
class | CAlphabet |
The class Alphabet implements an alphabet and alphabet utility functions. More... | |
class | CAttributeFeatures |
Implements attributed features, i.e. in the simplest case a number of (attribute, value) pairs. More... | |
class | CBinnedDotFeatures |
The class BinnedDotFeatures contains a 0-1 conversion of features into bins. More... | |
class | CCombinedDotFeatures |
Features that allow stacking of a number of DotFeatures. More... | |
class | CCombinedFeatures |
The class CombinedFeatures is used to combine a number of feature objects into a single CombinedFeatures object. More... | |
class | CDataGenerator |
Class that is able to generate various data samples, which may be used for examples in SHOGUN. More... | |
class | CDenseFeatures |
The class DenseFeatures implements dense feature matrices. More... | |
class | CDenseSubsetFeatures |
class | CDotFeatures |
Features that support dot products among other operations. More... | |
class | CDummyFeatures |
The class DummyFeatures implements features that only know the number of feature objects (but don't actually contain any). More... | |
class | CExplicitSpecFeatures |
Features that compute the Spectrum Kernel feature space explicitly. More... | |
class | CFactorGraphFeatures |
CFactorGraphFeatures maintains an array of factor graphs, each graph is a sample, i.e. an instance of structured input. More... | |
class | CFeatures |
The class Features is the base class of all feature objects. More... | |
class | CFKFeatures |
The class FKFeatures implements Fisher kernel features obtained from two Hidden Markov models. More... | |
class | CHashedDenseFeatures |
This class is identical to the CDenseFeatures class except that it hashes each dimension to a new feature space. More... | |
class | CHashedDocDotFeatures |
This class can be used to provide on-the-fly vectorization of a document collection. As in the standard Bag-of-Words representation, this class considers each document as a collection of tokens, which are then hashed into a new feature space of a specified dimension. The class is very flexible: the user can specify the tokenizer used for each document, whether the results should be normalized with regard to the square root of the document size, and whether to combine different tokens. The latter implements a k-skip n-grams approach, meaning that up to n tokens can be combined while skipping up to k. E.g. for the tokens ["a", "b", "c", "d"], with n_grams = 2 and skips = 2, one would get the following combinations: ["a", "ab", "ac" (skipped 1), "ad" (skipped 2), "b", "bc", "bd" (skipped 1), "c", "cd", "d"]. More... | |
class | CHashedSparseFeatures |
This class is identical to the CDenseFeatures class except that it hashes each dimension to a new feature space. More... | |
class | CHashedWDFeatures |
Features that compute the Weighted Degree Kernel feature space explicitly. More... | |
class | CHashedWDFeaturesTransposed |
Features that compute the Weighted Degree Kernel feature space explicitly. More... | |
class | CImplicitWeightedSpecFeatures |
Features that compute the Weighted Spectrum Kernel feature space explicitly. More... | |
class | CLatentFeatures |
Latent Features class. The class is for representing features for latent learning, e.g. LatentSVM. It's basically a very generic way of storing features of any (user-defined) form based on CData. More... | |
class | CLBPPyrDotFeatures |
Implements Local Binary Patterns with Scale Pyramids as dot features for a set of images. Expects the images to be loaded in a CDenseFeatures object. More... | |
class | CMatrixFeatures |
Class CMatrixFeatures used to represent data whose feature vectors are better represented with matrices rather than with unidimensional arrays or vectors. Optionally, it can be required that all feature vectors have the same number of features. Set the attribute num_features different from zero to use this restriction. Allow feature vectors with a different number of features by setting num_features equal to zero (default behaviour). More... | |
class | CPolyFeatures |
implement DotFeatures for the polynomial kernel More... | |
class | CRandomFourierDotFeatures |
This class implements random Fourier features for the DotFeatures framework. Upon object creation it computes the random coefficients w and b needed for this method; then, every time a vector is required, it is computed via the formula z(x) = sqrt(2/D) * cos(w'*x + b), where D is the number of samples used. More... | |
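The formula above is easy to reproduce in NumPy. The sketch below is illustrative only, not SHOGUN's actual interface; it assumes w is drawn from a Gaussian matched to a Gaussian kernel of width sigma and b is uniform on [0, 2*pi], which is the standard random-Fourier-features setup.

```python
import numpy as np

def random_fourier_features(X, D, sigma=1.0, seed=0):
    """Map each row x of X to z(x) = sqrt(2/D) * cos(w'x + b).

    w ~ N(0, sigma^-2 I) and b ~ U[0, 2*pi] are drawn once at setup;
    z(x)'z(y) then approximates the Gaussian kernel
    exp(-||x - y||^2 / (2 sigma^2)).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0.0, 1.0 / sigma, size=(d, D))  # one column per sample w
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(5, 3))
Z = random_fourier_features(X, D=2000)
print(Z.shape)  # → (5, 2000)
```

The Gram matrix Z @ Z.T then approximates the exact Gaussian kernel matrix, with accuracy improving as D grows.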
class | CRandomKitchenSinksDotFeatures |
class that implements the Random Kitchen Sinks for the DotFeatures as mentioned in http://books.nips.cc/papers/files/nips21/NIPS2008_0885.pdf. More... | |
class | CRealFileFeatures |
The class RealFileFeatures implements a dense double-precision floating point matrix from a file. More... | |
class | CSNPFeatures |
Features that compute the Weighted Degree Kernel feature space explicitly. More... | |
class | CSparseFeatures |
Template class SparseFeatures implements sparse matrices. More... | |
class | CSparsePolyFeatures |
implement DotFeatures for the polynomial kernel More... | |
class | CGaussianBlobsDataGenerator |
class | CMeanShiftDataGenerator |
class | CStreamingDenseFeatures |
This class implements streaming features with dense feature vectors. More... | |
class | CStreamingDotFeatures |
Streaming features that support dot products among other operations. More... | |
class | CStreamingFeatures |
Streaming features are features which are used for online algorithms. More... | |
class | CStreamingHashedDenseFeatures |
This class acts as an alternative to the CStreamingDenseFeatures class and their difference is that the current example in this class is hashed into a smaller dimension dim. More... | |
class | CStreamingHashedDocDotFeatures |
This class implements streaming features for a document collection. As in the standard Bag-of-Words representation, this class considers each document as a collection of tokens, which are then hashed into a new feature space of a specified dimension. The class is very flexible: the user can specify the tokenizer used for each document, whether the results should be normalized with regard to the square root of the document size, and whether to combine different tokens. The latter implements a k-skip n-grams approach, meaning that up to n tokens can be combined while skipping up to k. E.g. for the tokens ["a", "b", "c", "d"], with n_grams = 2 and skips = 2, one would get the following combinations: ["a", "ab", "ac" (skipped 1), "ad" (skipped 2), "b", "bc", "bd" (skipped 1), "c", "cd", "d"]. More... | |
class | CStreamingHashedSparseFeatures |
This class acts as an alternative to the CStreamingSparseFeatures class and their difference is that the current example in this class is hashed into a smaller dimension dim. More... | |
class | CStreamingSparseFeatures |
This class implements streaming features with sparse feature vectors. The vector is represented as an SGSparseVector<T>. Each entry is of type SGSparseVectorEntry<T> with members `feat_index` and `entry`. More... | |
class | CStreamingStringFeatures |
This class implements streaming features as strings. More... | |
class | CStreamingVwFeatures |
This class implements streaming features for use with VW. More... | |
class | CStringFeatures |
Template class StringFeatures implements a list of strings. More... | |
class | CStringFileFeatures |
File based string features. More... | |
class | CSubset |
Wrapper class for an index subset which is used by SubsetStack. More... | |
class | CSubsetStack |
class to add subset support to another class. A CSubsetStack instance should be added, and wrapper methods to all interfaces should be added. More... | |
class | CTOPFeatures |
The class TOPFeatures implements TOP kernel features obtained from two Hidden Markov models. More... | |
class | CWDFeatures |
Features that compute the Weighted Degree Kernel feature space explicitly. More... | |
class | CBinaryFile |
A Binary file access class. More... | |
class | CBinaryStream |
memory mapped emulation via binary streams (files) More... | |
class | CCSVFile |
Class CSVFile used to read data from comma-separated values (CSV) files. See http://en.wikipedia.org/wiki/Comma-separated_values. More... | |
class | CFile |
A File access base class. More... | |
class | CIOBuffer |
An I/O buffer class. More... | |
class | CLibSVMFile |
Reads sparse real-valued features in SVMlight format, e.g. "-1 1:10.0 2:100.2 1000:1.3", where -1 is the (optional) label and each dim:value pair gives one entry (dim 1 - value 10.0, dim 2 - value 100.2, dim 1000 - value 1.3). More... | |
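A line in this format is simple to parse; the following Python sketch (a hypothetical helper for illustration, not SHOGUN's reader) shows the layout:

```python
def parse_svmlight_line(line):
    """Parse e.g. '-1 1:10.0 2:100.2 1000:1.3' into (label, {dim: value}).

    The leading label is optional; every other field is a 'dim:value' pair.
    """
    parts = line.split()
    label = None
    if parts and ":" not in parts[0]:
        label = float(parts[0])
        parts = parts[1:]
    feats = {int(dim): float(val) for dim, val in (p.split(":") for p in parts)}
    return label, feats

print(parse_svmlight_line("-1 1:10.0 2:100.2 1000:1.3"))
# → (-1.0, {1: 10.0, 2: 100.2, 1000: 1.3})
```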
class | CLineReader |
Class for buffered reading from an ASCII file. More... | |
class | CMemoryMappedFile |
memory mapped file More... | |
class | CParser |
Class for reading from a string. More... | |
class | CSerializableAsciiFile |
serializable ascii file More... | |
class | SerializableAsciiReader00 |
Serializable ascii reader. More... | |
class | CSerializableFile |
serializable file More... | |
struct | substring |
struct Substring, specified by start position and end position. More... | |
class | SGIO |
Class SGIO, used to do input output operations throughout shogun. More... | |
class | CSimpleFile |
Template class SimpleFile to read and write from files. More... | |
class | CStreamingAsciiFile |
Class StreamingAsciiFile to read vector-by-vector from ASCII files. More... | |
class | CStreamingFile |
A Streaming File access class. More... | |
class | CStreamingFileFromDenseFeatures |
Class CStreamingFileFromDenseFeatures is a derived class of CStreamingFile which creates an input source for the online framework from a CDenseFeatures object. More... | |
class | CStreamingFileFromFeatures |
Class StreamingFileFromFeatures to read vector-by-vector from a CFeatures object. More... | |
class | CStreamingFileFromSparseFeatures |
Class CStreamingFileFromSparseFeatures is derived from CStreamingFile and provides an input source for the online framework. It uses an existing CSparseFeatures object to generate online examples. More... | |
class | CStreamingFileFromStringFeatures |
Class CStreamingFileFromStringFeatures is derived from CStreamingFile and provides an input source for the online framework from a CStringFeatures object. More... | |
class | CStreamingVwCacheFile |
Class StreamingVwCacheFile to read vector-by-vector from VW cache files. More... | |
class | CStreamingVwFile |
Class StreamingVwFile to read vector-by-vector from Vowpal Wabbit data files. It reads the example and label into one object of VwExample type. More... | |
class | CANOVAKernel |
ANOVA (ANalysis Of VAriances) kernel. More... | |
class | CAUCKernel |
The AUC kernel can be used to maximize the area under the receiver operator characteristic curve (AUC) instead of margin in SVM training. More... | |
class | CBesselKernel |
the class Bessel kernel More... | |
class | CCauchyKernel |
Cauchy kernel. More... | |
class | CChi2Kernel |
The Chi2 kernel operating on realvalued vectors computes the chi-squared distance between sets of histograms. More... | |
class | CCircularKernel |
Circular kernel. More... | |
class | CCombinedKernel |
The Combined kernel is used to combine a number of kernels into a single CombinedKernel object by linear combination. More... | |
class | CConstKernel |
The Constant Kernel returns a constant for all elements. More... | |
class | CCustomKernel |
The Custom Kernel allows for custom user provided kernel matrices. More... | |
class | CDiagKernel |
The Diagonal Kernel returns a constant for the diagonal and zero otherwise. More... | |
class | CDistanceKernel |
The Distance kernel takes a distance as input. More... | |
class | CDotKernel |
Template class DotKernel is the base class for kernels working on DotFeatures. More... | |
class | CExponentialKernel |
The Exponential Kernel, closely related to the Gaussian Kernel computed on CDotFeatures. More... | |
class | CGaussianARDKernel |
Gaussian Kernel with Automatic Relevance Detection. More... | |
class | CGaussianKernel |
The well known Gaussian kernel (swiss army knife for SVMs) computed on CDotFeatures. More... | |
class | CGaussianShiftKernel |
An experimental kernel inspired by the WeightedDegreePositionStringKernel and the Gaussian kernel. More... | |
class | CGaussianShortRealKernel |
The well known Gaussian kernel (swiss army knife for SVMs) on dense short-real valued features. More... | |
class | CHistogramIntersectionKernel |
The HistogramIntersection kernel operating on real-valued vectors computes the histogram intersection distance between sets of histograms. Note: the current implementation assumes positive values for the histograms, and input vectors should sum to 1. More... | |
class | CInverseMultiQuadricKernel |
InverseMultiQuadricKernel. More... | |
class | CJensenShannonKernel |
The Jensen-Shannon kernel operating on real-valued vectors computes the Jensen-Shannon distance between the features. Often used in computer vision. More... | |
struct | K_THREAD_PARAM |
class | CKernel |
The Kernel base class. More... | |
class | CLinearARDKernel |
Linear Kernel with Automatic Relevance Detection. More... | |
class | CLinearKernel |
Computes the standard linear kernel on CDotFeatures. More... | |
class | CLogKernel |
Log kernel. More... | |
class | CMultiquadricKernel |
MultiquadricKernel. More... | |
class | CAvgDiagKernelNormalizer |
Normalize the kernel by either a constant or the average value of the diagonal elements (depending on argument c of the constructor). More... | |
class | CDiceKernelNormalizer |
DiceKernelNormalizer performs kernel normalization inspired by the Dice coefficient (see http://en.wikipedia.org/wiki/Dice's_coefficient) More... | |
class | CFirstElementKernelNormalizer |
Normalize the kernel by a constant obtained from the first element of the kernel matrix, i.e. \( c=k({\bf x},{\bf x})\). More... | |
class | CIdentityKernelNormalizer |
Identity Kernel Normalization, i.e. no normalization is applied. More... | |
class | CKernelNormalizer |
The class Kernel Normalizer defines a function to post-process kernel values. More... | |
class | CRidgeKernelNormalizer |
Normalize the kernel by adding a constant term to its diagonal. This aids kernels to become positive definite (even though they are not - often caused by numerical problems). More... | |
class | CScatterKernelNormalizer |
the scatter kernel normalizer More... | |
class | CSqrtDiagKernelNormalizer |
SqrtDiagKernelNormalizer divides by the Square Root of the product of the diagonal elements. More... | |
class | CTanimotoKernelNormalizer |
TanimotoKernelNormalizer performs kernel normalization inspired by the Tanimoto coefficient (see http://en.wikipedia.org/wiki/Jaccard_index ) More... | |
class | CVarianceKernelNormalizer |
VarianceKernelNormalizer divides by the "variance". More... | |
class | CZeroMeanCenterKernelNormalizer |
ZeroMeanCenterKernelNormalizer centers the kernel in feature space. More... | |
class | CPolyKernel |
Computes the standard polynomial kernel on CDotFeatures. More... | |
class | CPowerKernel |
Power kernel. More... | |
class | CProductKernel |
The Product kernel is used to combine a number of kernels into a single ProductKernel object by element multiplication. More... | |
class | CPyramidChi2 |
Pyramid Kernel over Chi2 matched histograms. More... | |
class | CRationalQuadraticKernel |
Rational Quadratic kernel. More... | |
class | CSigmoidKernel |
The standard Sigmoid kernel computed on dense real valued features. More... | |
class | CSparseKernel |
Template class SparseKernel, is the base class of kernels working on sparse features. More... | |
class | CSphericalKernel |
Spherical kernel. More... | |
class | CSplineKernel |
Computes the Spline Kernel function which is the cubic polynomial. More... | |
class | CCommUlongStringKernel |
The CommUlongString kernel may be used to compute the spectrum kernel from strings that have been mapped into unsigned 64bit integers. More... | |
class | CCommWordStringKernel |
The CommWordString kernel may be used to compute the spectrum kernel from strings that have been mapped into unsigned 16bit integers. More... | |
class | CDistantSegmentsKernel |
The distant segments kernel is a string kernel, which counts the number of substrings, so-called segments, at a certain distance from each other. More... | |
class | CFixedDegreeStringKernel |
The FixedDegree String kernel takes as input two strings of same size and counts the number of matches of length d. More... | |
class | CGaussianMatchStringKernel |
The class GaussianMatchStringKernel computes a variant of the Gaussian kernel on strings of same length. More... | |
class | CHistogramWordStringKernel |
The HistogramWordString kernel computes the TOP kernel on inhomogeneous Markov chains. More... | |
class | CLinearStringKernel |
Computes the standard linear kernel on dense char valued features. More... | |
class | CLocalAlignmentStringKernel |
The LocalAlignmentString kernel compares two sequences through all possible local alignments between the two sequences. More... | |
class | CLocalityImprovedStringKernel |
The LocalityImprovedString kernel is inspired by the polynomial kernel. By comparing neighboring characters it puts emphasis on local features. More... | |
class | CMatchWordStringKernel |
The class MatchWordStringKernel computes a variant of the polynomial kernel on strings of same length converted to a word alphabet. More... | |
class | COligoStringKernel |
This class offers access to the Oligo Kernel introduced by Meinicke et al. in 2004. More... | |
class | CPolyMatchStringKernel |
The class PolyMatchStringKernel computes a variant of the polynomial kernel on strings of same length. More... | |
class | CPolyMatchWordStringKernel |
The class PolyMatchWordStringKernel computes a variant of the polynomial kernel on word-features. More... | |
class | CRegulatoryModulesStringKernel |
The Regulatory Modules kernel, based on the WD kernel, as published in Schultheiss et al., Bioinformatics (2009) on regulatory sequences. More... | |
class | CSalzbergWordStringKernel |
The SalzbergWordString kernel implements the Salzberg kernel. More... | |
class | CSimpleLocalityImprovedStringKernel |
SimpleLocalityImprovedString kernel, a "simplified" and better performing version of the Locality improved kernel. More... | |
class | CSNPStringKernel |
The class SNPStringKernel computes a variant of the polynomial kernel on strings of same length. More... | |
struct | SSKFeatures |
SSKFeatures. More... | |
class | CSparseSpatialSampleStringKernel |
Sparse Spatial Sample String Kernel by Pavel Kuksa <pkuksa@cs.rutgers.edu> and Vladimir Pavlovic <vladimir@cs.rutgers.edu>. More... | |
class | CSpectrumMismatchRBFKernel |
spectrum mismatch rbf kernel More... | |
class | CSpectrumRBFKernel |
spectrum rbf kernel More... | |
class | CStringKernel |
Template class StringKernel, is the base class of all String Kernels. More... | |
class | CWeightedCommWordStringKernel |
The WeightedCommWordString kernel may be used to compute the weighted spectrum kernel (i.e. a spectrum kernel for 1 to K-mers, where each k-mer length is weighted by some coefficient \(\beta_k\)) from strings that have been mapped into unsigned 16bit integers. More... | |
class | CWeightedDegreePositionStringKernel |
The Weighted Degree Position String kernel (Weighted Degree kernel with shifts). More... | |
class | CWeightedDegreeStringKernel |
The Weighted Degree String kernel. More... | |
class | CTensorProductPairKernel |
Computes the Tensor Product Pair Kernel (TPPK). More... | |
class | CTStudentKernel |
Generalized T-Student kernel. More... | |
class | CWaveKernel |
Wave kernel. More... | |
class | CWaveletKernel |
the class WaveletKernel More... | |
class | CWeightedDegreeRBFKernel |
weighted degree RBF kernel More... | |
class | CBinaryLabels |
Binary Labels for binary classification. More... | |
class | CDenseLabels |
Dense integer or floating point labels. More... | |
class | CFactorGraphObservation |
Class CFactorGraphObservation is used as the structured output. More... | |
class | CFactorGraphLabels |
Class FactorGraphLabels used e.g. in the application of Structured Output (SO) learning with the FactorGraphModel. Each of the labels is represented by a graph. Each label is of type CFactorGraphObservation and all of them are stored in a CDynamicObjectArray. More... | |
class | CLabels |
The class Labels models labels, i.e. class assignments of objects. More... | |
class | CLabelsFactory |
The helper class to specialize base class instances of labels. More... | |
class | CLatentLabels |
Abstract class for latent labels. As latent labels always depend on the given application, this class only defines the API that the user has to implement for latent labels. More... | |
class | CMulticlassLabels |
Multiclass Labels for multi-class classification. More... | |
class | CMulticlassMultipleOutputLabels |
Multiclass Labels for multi-class classification with multiple labels. More... | |
class | CRegressionLabels |
Real Labels are real-valued labels. More... | |
class | CStructuredLabels |
Base class of the labels used in Structured Output (SO) problems. More... | |
class | CLatentModel |
Abstract class CLatentModel It represents the application specific model and contains most of the application dependent logic to solve latent variable based problems. More... | |
class | CLatentSOSVM |
class Latent Structured Output SVM, a structured output based machine for classification problems with latent variables. More... | |
class | CLatentSVM |
LatentSVM class Latent SVM implementation based on [1]. For optimization this implementation uses SVMOcas. More... | |
class | CBitString |
a string class embedding a string in a compact bit representation More... | |
class | CCache |
Template class Cache implements a simple cache. More... | |
class | CCircularBuffer |
Implementation of a circular buffer. The buffer has the logical structure of a queue (FIFO), but the queue is cyclic: like a tape whose ends are joined, backed by a contiguous block of physical memory. So if you push a big block of data, it may end up split between the end and the beginning of the buffer's memory. More... | |
class | CCompressor |
Compression library for compressing and decompressing buffers using one of the standard compression algorithms: More... | |
class | CJobResultAggregator |
Abstract base class that provides an interface for computing an aggregation of the job results of independent computation jobs as they are submitted and also for finalizing the aggregation. More... | |
class | CStoreScalarAggregator |
Template class that aggregates scalar job results in each submit_result call, finalize then transforms current aggregation into a CScalarResult. More... | |
class | CStoreVectorAggregator |
Abstract template class that aggregates vector job results in each submit_result call, finalize is abstract. More... | |
class | CIndependentComputationEngine |
Abstract base class for solving multiple independent instances of CIndependentJob. It has one method, submit_job, which may add the job to an internal queue and may block if there is no space left in the queue. A submitted job is not necessarily finished immediately; wait_for_all waits until all jobs are completed and must be called to guarantee that all jobs are finished. More... | |
class | CSerialComputationEngine |
Class that computes multiple independent instances of computation jobs sequentially. More... | |
class | CIndependentJob |
Abstract base for general computation jobs to be registered in CIndependentComputationEngine. compute method produces a job result and submits it to the internal JobResultAggregator. Each set of jobs that form a result will share the same job result aggregator. More... | |
class | CJobResult |
Base class that stores the result of an independent job. More... | |
class | CScalarResult |
Base class that stores the result of an independent job when the result is a scalar. More... | |
class | CVectorResult |
Base class that stores the result of an independent job when the result is a vector. More... | |
class | CData |
dummy data holder More... | |
struct | TSGDataType |
Datatypes that shogun supports. More... | |
class | CDelimiterTokenizer |
The class CDelimiterTokenizer is used to tokenize a SGVector<char> into tokens using custom chars as delimiters. One can set the delimiters to use by setting to 1 the appropriate index of the public field delimiters. E.g. to set the character ':' as a delimiter, one should do: tokenizer->delimiters[':'] = 1;. More... | |
class | CDynamicArray |
Template Dynamic array class that creates an array that can be used like a list or an array. More... | |
class | CDynamicObjectArray |
Dynamic array class for CSGObject pointers that creates an array that can be used like a list or an array. More... | |
class | CDynInt |
integer type of dynamic size More... | |
class | CGCArray |
Template class GCArray implements a garbage collecting static array. More... | |
class | CHash |
Collection of Hashing Functions. More... | |
class | CIndexBlock |
class IndexBlock used to represent contiguous indices of one group (e.g. block of related features) More... | |
class | CIndexBlockGroup |
class IndexBlockGroup used to represent group-based feature relation. More... | |
class | CIndexBlockRelation |
class IndexBlockRelation More... | |
class | CIndexBlockTree |
class IndexBlockTree used to represent tree guided feature relation. More... | |
class | CIndirectObject |
an array class that accesses elements indirectly via an index array. More... | |
class | v_array |
Class v_array taken directly from JL's implementation. More... | |
class | CJLCoverTreePoint |
Class Point to use with John Langford's CoverTree. This class must have some associated functions defined (distance, parse_points and print, see below) so it can be used with the CoverTree implementation. More... | |
class | CListElement |
Class ListElement, defines how an element of the list looks like. More... | |
class | CList |
Class List implements a doubly connected list for low-level-objects. More... | |
class | CLock |
Class Lock used for synchronization in concurrent programs. More... | |
class | CMap |
the class CMap, a map based on the hash-table. w: http://en.wikipedia.org/wiki/Hash_table More... | |
class | CNGramTokenizer |
The class CNGramTokenizer is used to tokenize a SGVector<char> into n-grams. More... | |
class | RefCount |
class | CSet |
the class CSet, a set based on the hash-table. w: http://en.wikipedia.org/wiki/Hash_table More... | |
class | SGMatrix |
shogun matrix More... | |
class | SGMatrixList |
shogun matrix list More... | |
class | SGNDArray |
shogun n-dimensional array More... | |
class | SGReferencedData |
shogun reference count managed data More... | |
class | SGSparseMatrix |
template class SGSparseMatrix More... | |
struct | SGSparseVectorEntry |
template class SGSparseVectorEntry More... | |
class | SGSparseVector |
template class SGSparseVector The assumption is that the stored SGSparseVectorEntry<T>* vector is ordered by SGSparseVectorEntry.feat_index in non-decreasing order. This has to be assured by the user of the class. More... | |
class | SGString |
shogun string More... | |
class | SGStringList |
template class SGStringList More... | |
struct | IndexSorter |
class | SGVector |
shogun vector More... | |
class | ShogunException |
Class ShogunException defines an exception which is thrown whenever an error inside of shogun occurs. More... | |
class | CSignal |
Class Signal implements signal handling to e.g. allow ctrl+c to cancel a long running process. More... | |
class | CStructuredData |
Base class of the components of StructuredLabels. More... | |
class | CTime |
Class Time that implements a stopwatch based on either cpu time or wall clock time. More... | |
class | CTokenizer |
The class CTokenizer acts as a base class in order to implement tokenizers. Sub-classes must implement the methods has_next(), next_token_idx() and get_copy(). More... | |
class | CTrie |
Template class Trie implements a suffix trie, i.e. a tree in which all suffixes up to a certain length are stored. More... | |
class | CHingeLoss |
CHingeLoss implements the hinge loss function. More... | |
class | CLogLoss |
CLogLoss implements the logarithmic loss function. More... | |
class | CLogLossMargin |
Class CLogLossMargin implements a margin-based log-likelihood loss function. More... | |
class | CLossFunction |
Class CLossFunction is the base class of all loss functions. More... | |
class | CSmoothHingeLoss |
CSmoothHingeLoss implements the smooth hinge loss function. More... | |
class | CSquaredHingeLoss |
Class CSquaredHingeLoss implements a squared hinge loss function. More... | |
class | CSquaredLoss |
CSquaredLoss implements the squared loss function. More... | |
class | CBaggingMachine |
: Bagging algorithm i.e. bootstrap aggregating More... | |
class | CBaseMulticlassMachine |
class | CDistanceMachine |
A generic DistanceMachine interface. More... | |
class | CGaussianProcessMachine |
A base class for Gaussian Processes. More... | |
class | CEPInferenceMethod |
Class of the Expectation Propagation (EP) posterior approximation inference method. More... | |
class | CExactInferenceMethod |
The Gaussian exact form inference method class. More... | |
class | CFITCInferenceMethod |
The Fully Independent Conditional Training inference method class. More... | |
class | CGaussianLikelihood |
Class that models Gaussian likelihood. More... | |
class | CInferenceMethod |
The Inference Method base class. More... | |
class | CLaplacianInferenceMethod |
The Laplace approximation inference method class. More... | |
class | CLikelihoodModel |
The Likelihood model base class. More... | |
class | CLogitLikelihood |
Class that models Logit likelihood. More... | |
class | CMeanFunction |
An abstract class of the mean function. More... | |
class | CProbitLikelihood |
Class that models Probit likelihood. More... | |
class | CStudentsTLikelihood |
Class that models a Student's-t likelihood. More... | |
class | CZeroMean |
The zero mean function class. More... | |
class | CKernelMachine |
A generic KernelMachine interface. More... | |
class | CKernelMulticlassMachine |
generic kernel multiclass More... | |
class | CKernelStructuredOutputMachine |
class | CLinearLatentMachine |
Abstract implementation of a Linear Machine with latent variable. This is the base implementation of all linear machines with latent variables. More... | |
class | CLinearMachine |
Class LinearMachine is a generic interface for all kinds of linear machines like classifiers. More... | |
class | CLinearMulticlassMachine |
generic linear multiclass machine More... | |
class | CLinearStructuredOutputMachine |
class | CMachine |
A generic learning machine interface. More... | |
class | CMulticlassMachine |
experimental abstract generic multiclass machine class More... | |
class | CNativeMulticlassMachine |
experimental abstract native multiclass machine class More... | |
class | COnlineLinearMachine |
Class OnlineLinearMachine is a generic interface for linear machines like classifiers which work through online algorithms. More... | |
class | CStructuredOutputMachine |
class | CApproxJointDiagonalizer |
Class ApproxJointDiagonalizer defines an Approximate Joint Diagonalizer (AJD) interface. More... | |
class | CFFDiag |
Class FFDiag. More... | |
class | CJADiag |
Class JADiag. More... | |
class | CJADiagOrth |
Class JADiagOrth. More... | |
class | CJediDiag |
Class Jedi. More... | |
class | CQDiag |
Class QDiag. More... | |
class | CUWedge |
Class UWedge. More... | |
class | CCplex |
Class CCplex to encapsulate access to the commercial cplex general purpose optimizer. More... | |
class | EigenSparseUtil |
This class contains some utilities for Eigen3 Sparse Matrix integration with shogun. Currently it provides a method for converting SGSparseMatrix to Eigen3 SparseMatrix. More... | |
class | CFunction |
Class of a function of one variable. More... | |
class | CIntegration |
Class that contains certain methods related to numerical integration. More... | |
class | CJacobiEllipticFunctions |
Class that contains methods for computing Jacobi elliptic functions related to complex analysis. These functions are the inverse of the elliptic integral of the first kind, i.e.
\[ u(k,m)=\int_{0}^{k}\frac{dt}{\sqrt{(1-t^{2})(1-m^{2}t^{2})}} =\int_{0}^{\varphi}\frac{d\theta}{\sqrt{1-m^{2}\sin^{2}\theta}} \] where \(k=\sin\varphi\), \(t=\sin\theta\) and the parameter \(m\), \(0\le m \le 1\), is called the modulus. The three main Jacobi elliptic functions are defined as \(sn(u,m)=k=\sin\varphi\), \(cn(u,m)=\cos\varphi=\sqrt{1-sn(u,m)^{2}}\) and \(dn(u,m)=\sqrt{1-m^{2}sn(u,m)^{2}}\). For \(k=1\), i.e. \(\varphi=\frac{\pi}{2}\), \(u(1,m)=K(m)\) is known as the complete elliptic integral of the first kind. Similarly, \(u(1,m')=K'(m')\), where \(m'=\sqrt{1-m^{2}}\), is called the complementary complete elliptic integral of the first kind. Jacobi functions are doubly periodic with quarter periods \(K\) and \(K'\). More... | |
class | CDirectEigenSolver |
Class that computes eigenvalues of a real valued, self-adjoint dense matrix linear operator using Eigen3. More... | |
class | CEigenSolver |
Abstract base class that provides an abstract compute method for computing eigenvalues of a real valued, self-adjoint linear operator. It also provides method for getting min and max eigenvalues. More... | |
class | CLanczosEigenSolver |
Class that computes eigenvalues of a real valued, self-adjoint linear operator using Lanczos algorithm. More... | |
class | CDenseMatrixOperator |
Class that represents a dense-matrix linear operator. It computes matrix-vector product \(Ax\) in its apply method, \(A\in\mathbb{C}^{m\times n},A:\mathbb{C}^{n}\rightarrow \mathbb{C}^{m}\) being the matrix operator and \(x\in\mathbb{C}^{n}\) being the vector. The result is a vector \(y\in\mathbb{C}^{m}\). More... | |
class | CLinearOperator |
Abstract template base class that represents a linear operator, e.g. a matrix. More... | |
class | CMatrixOperator |
Abstract base class that represents a matrix linear operator. It provides an interface to computes matrix-vector product \(Ax\) in its apply method, \(A\in\mathbb{C}^{m\times n},A:\mathbb{C}^{n} \rightarrow \mathbb{C}^{m}\) being the matrix operator and \(x\in \mathbb{C}^{n}\) being the vector. The result is a vector \(y\in \mathbb{C}^{m}\). More... | |
struct | SparsityStructure |
Struct that represents the sparsity structure of the Sparse Matrix in CRS. Implementation has been adapted from the Krylstat (https://github.com/Froskekongen/KRYLSTAT) library (c) Erlend Aune <erlenda@math.ntnu.no> under GPL2+. More... | |
class | CSparseMatrixOperator |
Class that represents a sparse-matrix linear operator. It computes matrix-vector product \(Ax\) in its apply method, \(A\in\mathbb{C}^{m\times n},A:\mathbb{C}^{n}\rightarrow \mathbb{C}^{m}\) being the matrix operator and \(x\in\mathbb{C}^{n}\) being the vector. The result is a vector \(y\in\mathbb{C}^{m}\). More... | |
class | CCGMShiftedFamilySolver |
Class that uses the conjugate gradient method for solving a shifted linear system family where the linear operator is real valued and symmetric positive definite, the vector is real valued, but the shifts are complex. More... | |
class | CConjugateGradientSolver |
Class that uses the conjugate gradient method for solving a linear system involving a real valued linear operator and vector. Useful for large sparse systems involving sparse symmetric and positive-definite matrices. More... | |
class | CConjugateOrthogonalCGSolver |
Class that uses the conjugate orthogonal conjugate gradient method for solving a linear system involving a complex valued linear operator and vector. Useful for large sparse systems involving sparse symmetric matrices that are not Hermitian. More... | |
class | CDirectLinearSolverComplex |
Class that provides a solve method for complex dense-matrix linear systems. More... | |
class | CDirectSparseLinearSolver |
Class that provides a solve method for real sparse-matrix linear systems using LLT. More... | |
class | CIterativeLinearSolver |
Abstract template base for all iterative linear solvers such as conjugate gradient (CG) solvers. Provides an interface for setting the iteration limit and the relative/absolute tolerance. The solve method is abstract. More... | |
class | CIterativeShiftedLinearFamilySolver |
Abstract template base for CG-based solvers for the solution of shifted linear systems of the form \((A+\sigma)x=b\) for several values of \(\sigma\) simultaneously, using only as many matrix-vector operations as the solution of a single system requires. This class adds another interface to the basic iterative linear solver that takes the shifts, \(\sigma\), and also weights, \(\alpha\), and returns the summation \(\sum_{i} \alpha_{i}x_{i}\), where \(x_{i}\) is the solution of the system \((A+\sigma_{i})x_{i}=b\). More... | |
struct | _IterInfo |
struct that contains current state of the iteration for iterative linear solvers More... | |
class | IterativeSolverIterator |
Template class that is used as an iterator for an iterative linear solver. In the solving phase, each solver initializes the iteration with a maximum iteration limit and relative/absolute tolerances. It then calls begin with the residual vector and continues until end returns true, i.e. either it has converged or the iteration count reached the maximum limit. More... | |
class | CLinearSolver |
Abstract template base class that provides an abstract solve method for linear systems, that takes a linear operator \(A\), a vector \(b\), solves the system \(Ax=b\) and returns the vector \(x\). More... | |
class | CIndividualJobResultAggregator |
Class that aggregates vector job results in each submit_result call of jobs generated from rational approximation of linear operator function times a vector. finalize extracts the imaginary part of that aggregation, applies the linear operator to the aggregation, performs a dot product with the sample vector, multiplies with the constant multiplier (see CRationalApproximation) and stores the result as CScalarResult. More... | |
class | CDenseExactLogJob |
Class that represents the job of applying the log of a CDenseMatrixOperator on a real vector. More... | |
class | CRationalApproximationCGMJob |
Implementation of independent jobs that solves one whole family of shifted systems in rational approximation of linear operator function times a vector using CG-M linear solver. compute calls submit_results of the aggregator with CScalarResult (see CRationalApproximation) More... | |
class | CRationalApproximationIndividualJob |
Implementation of independent job that solves one of the family of shifted systems in rational approximation of linear operator function times a vector using a direct linear solver. The shift is moved inside the operator. compute calls submit_results of the aggregator with CVectorResult which is the solution vector for that shift multiplied by complex weight (See CRationalApproximation) More... | |
class | CLogDetEstimator |
Class to create unbiased estimators of \(log(\left|C\right|)= trace(log(C))\). For each estimate, it samples trace vectors (one by one) and calls submit_jobs of COperatorFunction, stores the resulting job result aggregator instances, and calls wait_for_all of CIndependentComputationEngine to ensure that the job result aggregators are all up to date. Then it simply computes running averages over the estimates. More... | |
class | CDenseMatrixExactLog |
Class that generates jobs for computing logarithm of a dense matrix linear operator. More... | |
class | CLogRationalApproximationCGM |
Implementation of the rational approximation of an operator-function times vector, where the operator function is the log of a linear operator. The complex systems generated from the shifts of the rational approximation of the operator-log times vector expression are solved all at once with a shifted linear-family solver by the computation engine. generate_jobs generates one job per sample. More... | |
class | CLogRationalApproximationIndividual |
Implementation of the rational approximation of an operator-function times vector, where the operator function is the log of a dense matrix. The complex systems generated from the shifts of the rational approximation of the operator-log times vector expression are solved individually with a complex linear solver by the computation engine. generate_jobs generates num_shifts jobs per trace sample. More... | |
class | COperatorFunction |
Abstract template base class for computing \(s^{T} f(C) s\) for a linear operator C and a vector s. submit_jobs method creates a bunch of jobs needed to solve for this particular \(s\) and attaches one unique job aggregator to each of them, then submits them all to the computation engine. More... | |
class | CRationalApproximation |
Abstract base class of the rational approximation of a function of a linear operator (A) times vector (v) using Cauchy's integral formula -
\[f(\text{A})\text{v}=\oint_{\Gamma}f(z)(z\text{I}-\text{A})^{-1} \text{v}dz\] Computes the eigenvalues of the linear operator and uses Jacobi elliptic functions and conformal maps [2] as a quadrature rule for discretizing the contour integral, and computes the complex shifts, weights and constant multiplier of the rational approximation of the above expression as \[f(\text{A})\text{v}\approx \eta\text{A}\Im\left(\sum_{l=1}^{N}\alpha_{l} (\text{A}-\sigma_{l}\text{I})^{-1}\text{v}\right)\] where \(\alpha_{l},\sigma_{l}\in\mathbb{C}\) are respectively the weights and shifts of the linear systems generated from the rational approximation, and \(\eta\in\mathbb{R}\) is the constant multiplier, equal to \(\frac{-8K(\lambda_{m}\lambda_{M})^{\frac{1}{4}}}{k\pi N}\). More... | |
class | CNormalSampler |
Class that provides a sample method for Gaussian samples. More... | |
class | CTraceSampler |
Abstract template base class that provides an interface for sampling the trace of a linear operator using an abstract sample method. More... | |
class | CLoss |
Class which collects generic mathematical functions. More... | |
class | CMath |
Class which collects generic mathematical functions. More... | |
class | Munkres |
Munkres. More... | |
class | CRandom |
Pseudo random number generator. More... | |
class | CSparseInverseCovariance |
used to estimate inverse covariance matrix using graphical lasso More... | |
class | CStatistics |
Class that contains certain functions related to statistics, such as probability/cumulative distribution functions, different statistics, etc. More... | |
class | CLMNN |
Class LMNN that implements the distance metric learning technique Large Margin Nearest Neighbour (LMNN) described in. More... | |
class | CLMNNStatistics |
Class LMNNStatistics used to give access to intermediate results obtained training LMNN. More... | |
class | CGradientModelSelection |
Model selection class which searches for the best model by a gradient-search. More... | |
class | CGridSearchModelSelection |
Model selection class which searches for the best model by a grid- search. See CModelSelection for details. More... | |
class | CModelSelection |
Abstract base class for model selection. More... | |
class | CModelSelectionParameters |
Class to select parameters and their ranges for model selection. The structure is organized as a tree with different kinds of nodes, depending on the values of its member variables of name and CSGObject. More... | |
class | CParameterCombination |
Class that holds ONE combination of parameters for a learning machine. The structure is organized as a tree. Every node may hold a name or an instance of a Parameter class. Nodes may have children. The nodes are organized in such way, that every parameter of a model for model selection has one node and sub-parameters are stored in sub-nodes. Using a tree of this class, parameters of models may easily be set. There are these types of nodes: More... | |
class | CRandomSearchModelSelection |
Model selection class which searches for the best model by a random search. See CModelSelection for details. More... | |
class | CECOCAEDDecoder |
class | CECOCDecoder |
class | CECOCDiscriminantEncoder |
class | CECOCEDDecoder |
class | CECOCEncoder |
ECOCEncoder produces an ECOC codebook. More... | |
class | CECOCForestEncoder |
class | CECOCHDDecoder |
class | CECOCIHDDecoder |
class | CECOCLLBDecoder |
class | CECOCOVOEncoder |
class | CECOCOVREncoder |
class | CECOCRandomDenseEncoder |
class | CECOCRandomSparseEncoder |
class | CECOCSimpleDecoder |
class | CECOCStrategy |
class | CECOCUtil |
class | CGaussianNaiveBayes |
Class GaussianNaiveBayes, a Gaussian Naive Bayes classifier. More... | |
class | CGMNPLib |
class GMNPLib Library of solvers for Generalized Minimal Norm Problem (GMNP). More... | |
class | CGMNPSVM |
Class GMNPSVM implements a one vs. rest MultiClass SVM. More... | |
class | CKNN |
Class KNN, an implementation of the standard k-nearest neighbor classifier. More... | |
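The k-nearest-neighbour idea behind CKNN can be sketched in a few lines. This is a hedged stand-alone illustration, not Shogun's CKNN API; `knn_classify` and the 1-D features are invented for the example: find the k training points closest to the query and take a majority vote over their labels.

```cpp
// Illustrative k-NN on 1-D points (not Shogun's CKNN interface).
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

int knn_classify(const std::vector<double>& train_x,
                 const std::vector<int>& train_y,
                 double query, std::size_t k)
{
    // Pair each training point with its distance to the query.
    std::vector<std::pair<double, int>> dist;
    for (std::size_t i = 0; i < train_x.size(); ++i)
        dist.push_back({std::fabs(train_x[i] - query), train_y[i]});

    // Keep the k smallest distances at the front.
    std::partial_sort(dist.begin(), dist.begin() + k, dist.end());

    // Majority vote among the k neighbours.
    std::map<int, int> votes;
    for (std::size_t i = 0; i < k; ++i)
        ++votes[dist[i].second];
    return std::max_element(votes.begin(), votes.end(),
        [](const std::pair<const int, int>& a,
           const std::pair<const int, int>& b)
        { return a.second < b.second; })->first;
}
```

Shogun's implementation additionally supports arbitrary CDistance instances and cover trees for speed; the voting logic is the same.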
class | CLaRank |
the LaRank multiclass SVM machine More... | |
class | CMCLDA |
Class MCLDA implements multiclass Linear Discriminant Analysis. More... | |
class | CMulticlassLibLinear |
multiclass LibLinear wrapper. Uses the Crammer-Singer formulation and the gradient descent optimization algorithm implemented in the LibLinear library. Regularized bias support is added by stacking a bias 'feature' onto the hyperplanes' normal vectors. More... | |
class | CMulticlassLibSVM |
class LibSVMMultiClass. Does one vs one classification. More... | |
class | CMulticlassLogisticRegression |
multiclass logistic regression More... | |
class | CMulticlassOCAS |
multiclass OCAS wrapper More... | |
class | CMulticlassOneVsOneStrategy |
multiclass one vs one strategy used to train generic multiclass machines for K-class problems by building a voting-based ensemble of K*(K-1) binary classifiers. Multiclass probabilistic outputs can be obtained by using the heuristics described in [1] More... | |
class | CMulticlassOneVsRestStrategy |
multiclass one vs rest strategy used to train generic multiclass machines for K-class problems by building an ensemble of K binary classifiers More... | |
class | CMulticlassStrategy |
class MulticlassStrategy used to construct generic multiclass classifiers with ensembles of binary classifiers More... | |
class | CMulticlassSVM |
class MultiClassSVM More... | |
class | CMulticlassTreeGuidedLogisticRegression |
multiclass tree guided logistic regression More... | |
class | CQDA |
Class QDA implements Quadratic Discriminant Analysis. More... | |
class | CRejectionStrategy |
base rejection strategy class More... | |
class | CThresholdRejectionStrategy |
threshold based rejection strategy More... | |
class | CDixonQTestRejectionStrategy |
simplified version of Dixon's Q test outlier-based rejection strategy. Statistic values are taken from http://www.vias.org/tmdatanaleng/cc_outlier_tests_dixon.html More... | |
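The statistic behind this strategy is easy to state: Dixon's Q is the gap between a suspect extreme value and its nearest neighbour, divided by the range of the data; the result is compared against tabulated critical values. A minimal sketch for a single suspected outlier (illustrative only, not Shogun's CDixonQTestRejectionStrategy API):

```cpp
// Dixon's Q statistic for a sorted sample: Q = gap / range,
// testing whichever end value sits furthest from its neighbour.
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

double dixon_q(std::vector<double> v) // by value: sorted locally
{
    std::sort(v.begin(), v.end());
    double range = v.back() - v.front();
    double gap_low  = v[1] - v[0];                       // low-end suspect
    double gap_high = v[v.size() - 1] - v[v.size() - 2]; // high-end suspect
    return std::max(gap_low, gap_high) / range;
}
```

Whether a computed Q warrants rejection depends on tabulated critical values for the sample size and confidence level, which this sketch deliberately leaves out.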
class | CScatterSVM |
ScatterSVM - Multiclass SVM. More... | |
class | CShareBoost |
class | ShareBoostOptimizer |
class | CBalancedConditionalProbabilityTree |
class | CConditionalProbabilityTree |
struct | ConditionalProbabilityTreeNodeData |
struct to store data of node of conditional probability tree More... | |
class | CRandomConditionalProbabilityTree |
class | CRelaxedTree |
struct | RelaxedTreeNodeData |
class | RelaxedTreeUtil |
class | CTreeMachine |
class TreeMachine, a base class for tree based multiclass classifiers More... | |
class | CTreeMachineNode |
struct | VwConditionalProbabilityTreeNodeData |
class | CVwConditionalProbabilityTree |
struct | tag_callback_data |
struct | tag_iteration_data |
struct | lbfgs_parameter_t |
class | CDecompressString |
Preprocessor that decompresses compressed strings. More... | |
class | CDensePreprocessor |
Template class DensePreprocessor, base class for preprocessors (cf. CPreprocessor) that apply to CDenseFeatures (i.e. rectangular dense matrices) More... | |
class | CDimensionReductionPreprocessor |
the class DimensionReductionPreprocessor, a base class for preprocessors used to lower the dimensionality of given simple features (dense matrices). More... | |
class | CHomogeneousKernelMap |
Preprocessor HomogeneousKernelMap performs homogeneous kernel maps as described in. More... | |
class | CKernelPCA |
Preprocessor KernelPCA performs kernel principal component analysis. More... | |
class | CLogPlusOne |
Preprocessor LogPlusOne does what the name says, it adds one to a dense real valued vector and takes the logarithm of each component of it. More... | |
class | CNormOne |
Preprocessor NormOne, normalizes vectors to have norm 1. More... | |
class | CPCA |
Preprocessor PCACut performs principal component analysis on the input vectors and keeps only the n eigenvectors with eigenvalues above a certain threshold. More... | |
class | CPNorm |
Preprocessor PNorm, normalizes vectors to have p-norm. More... | |
class | CPreprocessor |
Class Preprocessor defines a preprocessor interface. More... | |
class | CPruneVarSubMean |
Preprocessor PruneVarSubMean will subtract the mean and remove features that have zero variance. More... | |
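The effect of PruneVarSubMean can be sketched directly. This is a stand-alone illustration (function name and data layout are invented, not Shogun's preprocessor API): compute each feature's mean and variance over the examples, subtract the mean, and drop features whose variance is zero.

```cpp
// Illustrative centre-and-prune preprocessing: X[i] is the i-th example.
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

std::vector<std::vector<double>> prune_var_sub_mean(
    const std::vector<std::vector<double>>& X)
{
    std::size_t n = X.size(), d = X[0].size();
    std::vector<double> mean(d, 0.0), var(d, 0.0);
    for (const auto& x : X)
        for (std::size_t j = 0; j < d; ++j)
            mean[j] += x[j] / n;
    for (const auto& x : X)
        for (std::size_t j = 0; j < d; ++j)
            var[j] += (x[j] - mean[j]) * (x[j] - mean[j]) / n;

    std::vector<std::vector<double>> out(n);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < d; ++j)
            if (var[j] > 0.0)                        // prune constant features
                out[i].push_back(X[i][j] - mean[j]); // subtract the mean
    return out;
}
```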
class | CRandomFourierGaussPreproc |
Preprocessor CRandomFourierGaussPreproc implements Random Fourier Features for the Gauss kernel à la Ali Rahimi and Ben Recht (NIPS 2007). After preprocessing, using the features in a linear kernel approximates a Gaussian kernel. More... | |
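The construction can be sketched for 1-D inputs (a hedged illustration; the function name and parameters are invented, not Shogun's preprocessor interface): draw frequencies w_i ~ N(0, 1/sigma^2) and phases b_i ~ U[0, 2*pi), map x to z_i(x) = sqrt(2/D)*cos(w_i*x + b_i), and then the linear inner product z(x).z(y) approximates the Gaussian kernel exp(-(x-y)^2 / (2*sigma^2)), with error shrinking as D grows.

```cpp
// Random Fourier Feature estimate of a Gaussian kernel value (1-D sketch).
#include <cassert>
#include <cmath>
#include <cstddef>
#include <random>

double rff_kernel_estimate(double x, double y, double sigma,
                           std::size_t D, unsigned seed)
{
    const double two_pi = 2.0 * std::acos(-1.0);
    std::mt19937 gen(seed);
    std::normal_distribution<double> normal(0.0, 1.0 / sigma);
    std::uniform_real_distribution<double> uniform(0.0, two_pi);

    double dot = 0.0;
    for (std::size_t i = 0; i < D; ++i)
    {
        double w = normal(gen);
        double b = uniform(gen);
        // Term i is z_i(x) * z_i(y) with z_i(x) = sqrt(2/D) * cos(w*x + b).
        dot += (2.0 / D) * std::cos(w * x + b) * std::cos(w * y + b);
    }
    return dot; // approximates exp(-(x - y)^2 / (2 * sigma^2))
}
```

The multivariate case is identical except that w is a Gaussian vector and w*x becomes a dot product.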
class | CRescaleFeatures |
Preprocessor RescaleFeatures rescales the range of features to make the features independent of each other, aiming to scale the range to [0, 1] or [-1, 1]. More... | |
class | CSortUlongString |
Preprocessor SortUlongString, sorts the individual strings in ascending order. More... | |
class | CSortWordString |
Preprocessor SortWordString, sorts the individual strings in ascending order. More... | |
class | CSparsePreprocessor |
Template class SparsePreprocessor, base class for preprocessors (cf. CPreprocessor) that apply to CSparseFeatures. More... | |
class | CStringPreprocessor |
Template class StringPreprocessor, base class for preprocessors (cf. CPreprocessor) that apply to CStringFeatures (i.e. strings of variable length). More... | |
class | CSumOne |
Preprocessor SumOne, normalizes vectors to have sum 1. More... | |
class | CGaussianProcessRegression |
Class GaussianProcessRegression implements regression based on Gaussian Processes. More... | |
class | CKernelRidgeRegression |
Class KernelRidgeRegression implements Kernel Ridge Regression - a regularized least-squares method for classification and regression. More... | |
class | CLeastAngleRegression |
Class for Least Angle Regression, can be used to solve LASSO. More... | |
class | CLeastSquaresRegression |
class to perform Least Squares Regression More... | |
class | CLinearRidgeRegression |
Class LinearRidgeRegression implements Ridge Regression - a regularized least-squares method for classification and regression. More... | |
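Ridge regression has the closed form w = (X^T X + tau*I)^{-1} X^T y; for a single feature the matrix inverse collapses to a scalar division, which makes the regularizer's shrinkage effect easy to see. An illustrative sketch (not the CLinearRidgeRegression API; `ridge_1d` is invented for this example):

```cpp
// One-feature ridge regression: w = (sum x_i*y_i) / (sum x_i^2 + tau).
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

double ridge_1d(const std::vector<double>& x, const std::vector<double>& y,
                double tau)
{
    double xtx = 0.0, xty = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i)
    {
        xtx += x[i] * x[i];
        xty += x[i] * y[i];
    }
    return xty / (xtx + tau); // tau = 0 gives ordinary least squares
}
```

Increasing tau shrinks w towards zero, trading bias for variance.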
class | CLibLinearRegression |
LibLinear for regression. More... | |
class | CLibSVR |
Class LibSVR, performs support vector regression using LibSVM. More... | |
class | CMKLRegression |
Multiple Kernel Learning for regression. More... | |
class | CSVRLight |
Class SVRLight, performs support vector regression using SVMLight. More... | |
class | CHSIC |
This class implements the Hilbert Schmidt Independence Criterion based independence test as described in [1]. More... | |
class | CKernelIndependenceTestStatistic |
Independence test base class. Provides an interface for performing an independence test. Given samples \(Z=\{(x_i,y_i)\}_{i=1}^m\) from the joint distribution \(\textbf{P}_{xy}\), does the joint distribution factorize as \(\textbf{P}_{xy}=\textbf{P}_x\textbf{P}_y\)? The null-hypothesis says yes, i.e. independence; the alternative hypothesis says no. More... | |
class | CKernelMeanMatching |
Kernel Mean Matching. More... | |
class | CKernelTwoSampleTestStatistic |
Two sample test base class. Provides an interface for performing a two-sample test, i.e. Given samples from two distributions \(p\) and \(q\), the null-hypothesis is: \(H_0: p=q\), the alternative hypothesis: \(H_1: p\neq q\). More... | |
class | CLinearTimeMMD |
This class implements the linear time Maximum Mean Statistic as described in [1]. This statistic is in particular suitable for streaming data. Therefore, only streaming features may be passed. To process other feature types, construct streaming features from these (see constructor documentations). A blocksize has to be specified that determines how many examples are processed at once. This should be set as large as available memory allows to ensure faster computations. More... | |
class | CMMDKernelSelection |
Base class for kernel selection for MMD-based two-sample test statistic implementations (e.g. MMD). Provides abstract methods for selecting kernels and computing criteria or kernel weights for the implemented method. In order to implement new methods for kernel selection, simply write a new implementation of this class. More... | |
class | CMMDKernelSelectionComb |
Base class for kernel selection of combined kernels. Given an MMD instance whose underlying kernel is a combined one, this class provides an interface to select weights of this combined kernel. More... | |
class | CMMDKernelSelectionCombMaxL2 |
Implementation of maximum MMD kernel selection for combined kernel. This class selects a combination of baseline kernels that maximises the MMD for a combined kernel based on an L2-regularization approach. This boils down to solving the convex program
\[ \min_\beta \{\beta^T \beta \quad \text{s.t.}\quad \beta^T \eta=1, \beta\succeq 0\}, \] where \(\eta\) is a vector whose elements are the MMDs of the baseline kernels. More... | |
class | CMMDKernelSelectionCombOpt |
Implementation of optimal kernel selection for combined kernel. This class selects a combination of baseline kernels that maximises the ratio of the MMD and its standard deviation for a combined kernel. This boils down to solving the convex program
\[ \min_\beta \{\beta^T (Q+\lambda_m) \beta \quad \text{s.t.}\quad \beta^T \eta=1, \beta\succeq 0\}, \] where \(\eta\) is a vector whose elements are the MMDs of the baseline kernels and \(Q\) is a linear time estimate of the covariance of \(\eta\). More... | |
class | CMMDKernelSelectionMax |
Kernel selection class that selects the single kernel that maximises the MMD statistic. Works for CQuadraticTimeMMD and CLinearTimeMMD. This leads to a heuristic that is better than the standard median heuristic for Gaussian kernels. However, it comes with no guarantees. More... | |
class | CMMDKernelSelectionMedian |
Implements MMD kernel selection for a number of Gaussian baseline kernels via selecting the one with a bandwidth parameter that is closest to the median of all pairwise distances in the underlying data. Therefore, it only works for data to which a GaussianKernel can be applied, which are grouped under the class CDotFeatures in SHOGUN. More... | |
class | CMMDKernelSelectionOpt |
Implements optimal kernel selection for single kernels. Given a number of baseline kernels, this method selects the one that minimizes the type II error for a given type I error for a two-sample test. This only works for the CLinearTimeMMD statistic. More... | |
class | CQuadraticTimeMMD |
This class implements the quadratic time Maximum Mean Statistic as described in [1]. The MMD is the distance of two probability distributions \(p\) and \(q\) in an RKHS
\[ \text{MMD}[\mathcal{F},p,q]^2=\textbf{E}_{x,x'}\left[ k(x,x')\right]- 2\textbf{E}_{x,y}\left[ k(x,y)\right] +\textbf{E}_{y,y'}\left[ k(y,y')\right]=||\mu_p - \mu_q||^2_\mathcal{F} \]. More... | |
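Replacing the three expectations above with sample means gives the biased quadratic-time estimate. A stand-alone 1-D sketch with a Gaussian kernel (illustrative only, not CQuadraticTimeMMD's interface; `mmd2_biased` and `gauss_k` are invented names):

```cpp
// Biased quadratic-time MMD^2 estimate for 1-D samples:
// mean k(x,x') - 2*mean k(x,y) + mean k(y,y').
#include <cassert>
#include <cmath>
#include <vector>

double gauss_k(double a, double b, double sigma)
{
    return std::exp(-(a - b) * (a - b) / (2.0 * sigma * sigma));
}

double mmd2_biased(const std::vector<double>& x,
                   const std::vector<double>& y, double sigma)
{
    double kxx = 0.0, kyy = 0.0, kxy = 0.0;
    for (double a : x) for (double b : x) kxx += gauss_k(a, b, sigma);
    for (double a : y) for (double b : y) kyy += gauss_k(a, b, sigma);
    for (double a : x) for (double b : y) kxy += gauss_k(a, b, sigma);
    double m = x.size(), n = y.size();
    return kxx / (m * m) - 2.0 * kxy / (m * n) + kyy / (n * n);
}
```

Identical samples give an estimate of exactly zero; well-separated samples give an estimate near two with this kernel, since the cross term vanishes.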
class | CTestStatistic |
Test statistic base class. Provides an interface for statistical tests via three methods: compute_statistic(), compute_p_value() and compute_threshold(). The second computes a p-value for the statistic computed by the first method. The p-value represents the position of the statistic in the null-distribution, i.e. the distribution of the statistic population given the null-hypothesis is true. (1-position = p-value). The third method, compute_threshold(), computes a threshold for a given test level which is needed to reject the null-hypothesis. More... | |
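The p-value convention described above is simple to state in code: given samples from the null distribution, the p-value of an observed statistic is the fraction of null samples at least as large as it. A minimal sketch (an illustrative helper, not part of CTestStatistic):

```cpp
// p-value as the upper-tail position of the statistic in null samples.
#include <cassert>
#include <cstddef>
#include <vector>

double p_value(double statistic, const std::vector<double>& null_samples)
{
    std::size_t count = 0;
    for (double s : null_samples)
        if (s >= statistic)
            ++count;
    return static_cast<double>(count) / null_samples.size();
}
```

Rejecting the null-hypothesis at test level alpha then amounts to checking p_value(...) < alpha, which is equivalent to comparing the statistic against the threshold computed by compute_threshold().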
class | CTwoDistributionsTestStatistic |
Provides an interface for performing statistical tests on two sets of samples from two distributions. Instances of these tests are the classical two-sample test and the independence test. This class may be used as base class for both. More... | |
struct | BmrmStatistics |
class | CCCSOSVM |
CCSOSVM. More... | |
class | CDisjointSet |
Class CDisjointSet, a data structure for linking graph nodes. It makes it easy to identify connected graphs, acyclic graphs, roots of forests, etc. Please refer to http://en.wikipedia.org/wiki/Disjoint-set_data_structure. More... | |
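The underlying technique is the classic union-find structure. A minimal stand-alone sketch with path compression (Shogun's CDisjointSet has its own interface; this version omits union-by-rank for brevity):

```cpp
// Union-find: each element points to a parent; roots identify sets.
#include <cassert>
#include <vector>

struct DisjointSet
{
    std::vector<int> parent;

    explicit DisjointSet(int n) : parent(n)
    {
        for (int i = 0; i < n; ++i)
            parent[i] = i; // initially every node is its own root
    }

    int find(int i)
    {
        if (parent[i] != i)
            parent[i] = find(parent[i]); // path compression
        return parent[i];
    }

    void unite(int a, int b) { parent[find(a)] = find(b); }

    bool connected(int a, int b) { return find(a) == find(b); }
};
```

With path compression (and union-by-rank), operations run in near-constant amortized time, which is what makes connected-component queries on graphs cheap.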
class | CDualLibQPBMSOSVM |
Class DualLibQPBMSOSVM that uses Bundle Methods for Regularized Risk Minimization algorithms for structured output (SO) problems [1] presented in [2]. More... | |
class | CDynProg |
Dynamic Programming Class. More... | |
class | CFactorDataSource |
Class CFactorDataSource Source for factor data. In some cases, the same data can be shared by many factors. More... | |
class | CFactor |
Class CFactor A factor is defined on a clique in the factor graph. Each factor can have its own data, either dense, sparse or shared data. Note that currently this class is table factor oriented. More... | |
class | CFactorGraph |
Class CFactorGraph a factor graph is a structured input in general. More... | |
class | CFactorGraphModel |
CFactorGraphModel defines a model in terms of CFactorGraph and CMAPInference, where parameters are associated with factor types in the model. A mapping vector records the locations of local factor parameters in the global parameter vector. More... | |
class | CFactorType |
Class CFactorType defines the way of factor parameterization. More... | |
class | CTableFactorType |
Class CTableFactorType stores assignments of variables and energies in a table or a multi-array. More... | |
class | CHMSVMModel |
Class CHMSVMModel that represents the application specific model and contains the application dependent logic to solve Hidden Markov Support Vector Machines (HM-SVM) type of problems within a generic SO framework. More... | |
class | CIntronList |
class IntronList More... | |
struct | bmrm_ll |
struct | ICP_stats |
struct | line_search_res |
class | CMAPInference |
Class CMAPInference performs MAP inference on a factor graph. Briefly, given a factor graph model, with features \(\bold{x}\), the prediction is obtained by \( {\arg\max} _{\bold{y}} P(\bold{Y} = \bold{y} | \bold{x}; \bold{w}) \). More... | |
class | CMAPInferImpl |
Class CMAPInferImpl abstract class of MAP inference implementation. More... | |
class | CMulticlassModel |
Class CMulticlassModel that represents the application specific model and contains the application dependent logic to solve multiclass classification within a generic SO framework. More... | |
struct | CRealNumber |
Class CRealNumber to be used in the application of Structured Output (SO) learning to multiclass classification. Even though it is likely that it does not make sense to consider real numbers as structured data, it has been made in this way because the basic type to use in structured labels needs to inherit from CStructuredData. More... | |
class | CMulticlassSOLabels |
Class CMulticlassSOLabels to be used in the application of Structured Output (SO) learning to multiclass classification. Each of the labels is represented by a real number and it is required that the values of the labels are in the set {0, 1, ..., num_classes-1}. Each label is of type CRealNumber and all of them are stored in a CDynamicObjectArray. More... | |
class | CPlif |
class Plif More... | |
class | CPlifArray |
class PlifArray More... | |
class | CPlifBase |
class PlifBase More... | |
class | CPlifMatrix |
store plif arrays for all transitions in the model More... | |
class | CSegmentLoss |
class SegmentLoss More... | |
class | CSequence |
Class CSequence to be used in the application of Structured Output (SO) learning to Hidden Markov Support Vector Machines (HM-SVM). More... | |
class | CSequenceLabels |
Class CSequenceLabels used e.g. in the application of Structured Output (SO) learning to Hidden Markov Support Vector Machines (HM-SVM). Each of the labels is represented by a sequence of integers. Each label is of type CSequence and all of them are stored in a CDynamicObjectArray. More... | |
class | CStateModel |
class CStateModel base, abstract class for the internal state representation used in the CHMSVMModel. More... | |
struct | TMultipleCPinfo |
struct | CResultSet |
class | CStructuredModel |
Class CStructuredModel that represents the application specific model and contains most of the application dependent logic to solve structured output (SO) problems. The idea of this class is to be instantiated giving pointers to the functions that are dependent on the application, i.e. the combined feature representation \(\Psi(\bold{x},\bold{y})\) and the argmax function \( {\arg\max} _{\bold{y} \neq \bold{y}_i} \left \langle { \bold{w}, \Psi(\bold{x}_i,\bold{y}) } \right \rangle \). See: MulticlassModel.h and .cpp for an example of these functions implemented. More... | |
class | CTwoStateModel |
class CTwoStateModel class for the internal two-state representation used in the CHMSVMModel. More... | |
class | CDomainAdaptationMulticlassLibLinear |
domain adaptation multiclass LibLinear wrapper Source domain is assumed to b More... | |
class | CDomainAdaptationSVM |
class DomainAdaptationSVM More... | |
class | CDomainAdaptationSVMLinear |
class DomainAdaptationSVMLinear More... | |
class | MappedSparseMatrix |
mapped sparse matrix for representing graph relations of tasks More... | |
class | CLibLinearMTL |
class to implement LibLinear More... | |
class | CMultitaskClusteredLogisticRegression |
class MultitaskClusteredLogisticRegression, a classifier for multitask problems. Supports only task group relations. Based on a solver ported from the MALSAR library. Assumes tasks in a group are related with a clustered structure. More... | |
class | CMultitaskKernelMaskNormalizer |
The MultitaskKernel allows Multitask Learning via a modified kernel function. More... | |
class | CMultitaskKernelMaskPairNormalizer |
The MultitaskKernel allows Multitask Learning via a modified kernel function. More... | |
class | CMultitaskKernelMklNormalizer |
Base-class for parameterized Kernel Normalizers. More... | |
class | CMultitaskKernelNormalizer |
The MultitaskKernel allows Multitask Learning via a modified kernel function. More... | |
class | CMultitaskKernelPlifNormalizer |
The MultitaskKernel allows learning a piece-wise linear function (PLIF) via MKL. More... | |
class | CNode |
A CNode is an element of a CTaxonomy, which is used to describe hierarchical structure between tasks. More... | |
class | CTaxonomy |
CTaxonomy is used to describe hierarchical structure between tasks. More... | |
class | CMultitaskKernelTreeNormalizer |
The MultitaskKernel allows Multitask Learning via a modified kernel function based on taxonomy. More... | |
class | CMultitaskL12LogisticRegression |
class MultitaskL12LogisticRegression, a classifier for multitask problems. Supports only task group relations. Based on solver ported from the MALSAR library. More... | |
class | CMultitaskLeastSquaresRegression |
class Multitask Least Squares Regression, a machine to solve regression problems with a few tasks related via group or tree. Based on L1/Lq regression for groups and L1/L2 for trees. More... | |
class | CMultitaskLinearMachine |
class MultitaskLinearMachine, a base class for linear multitask classifiers More... | |
class | CMultitaskLogisticRegression |
class Multitask Logistic Regression used to solve classification problems with a few tasks related via group or tree. Based on L1/Lq regression for groups and L1/L2 for trees. More... | |
class | CMultitaskROCEvaluation |
Class MultitaskROCEvaluation used to evaluate ROC (Receiver Operating Characteristic) and the area under the ROC curve (auROC) of each task separately. More... | |
class | CMultitaskTraceLogisticRegression |
class MultitaskTraceLogisticRegression, a classifier for multitask problems. Supports only task group relations. Based on solver ported from the MALSAR library. More... | |
class | CTask |
class Task used to represent tasks in multitask learning. Essentially it represents a set of feature vector indices. More... | |
class | CTaskGroup |
class TaskGroup used to represent a group of tasks. Tasks in a group do not overlap. More... | |
class | CTaskRelation |
used to represent tasks in multitask learning More... | |
class | CTaskTree |
class TaskTree used to represent a tree of tasks. The tree is constructed via a task with subtasks (and subtasks of subtasks, etc.) passed to the TaskTree. More... | |
class | CGUIClassifier |
UI classifier. More... | |
class | CGUIConverter |
UI converter. More... | |
class | CGUIDistance |
UI distance. More... | |
class | CGUIFeatures |
UI features. More... | |
class | CGUIHMM |
UI HMM (Hidden Markov Model) More... | |
class | CGUIKernel |
UI kernel. More... | |
class | CGUILabels |
UI labels. More... | |
class | CGUIMath |
UI math. More... | |
class | CGUIPluginEstimate |
UI estimate. More... | |
class | CGUIPreprocessor |
UI preprocessor. More... | |
class | CGUIStructure |
UI structure. More... | |
class | CGUITime |
UI time. More... |
Typedefs | |
typedef uint32_t(* | hash_func_t )(substring, uint32_t) |
Hash function typedef, takes a substring and seed as parameters. | |
typedef uint32_t | vw_size_t |
vw_size_t typedef to work across platforms | |
typedef int32_t(CVwParser::* | parse_func )(CIOBuffer *, VwExample *&) |
Parse function typedef. Takes an IOBuffer and VwExample as arguments. | |
typedef float64_t | KERNELCACHE_ELEM |
typedef int64_t | KERNELCACHE_IDX |
typedef struct shogun::_IterInfo | IterInfo |
struct that contains current state of the iteration for iterative linear solvers | |
typedef struct tag_callback_data | callback_data_t |
typedef struct tag_iteration_data | iteration_data_t |
typedef int32_t(* | line_search_proc )(int32_t n, float64_t *x, float64_t *f, float64_t *g, float64_t *s, float64_t *stp, const float64_t *xp, const float64_t *gp, float64_t *wa, callback_data_t *cd, const lbfgs_parameter_t *param) |
typedef float64_t(* | lbfgs_evaluate_t )(void *instance, const float64_t *x, float64_t *g, const int n, const float64_t step) |
typedef int(* | lbfgs_progress_t )(void *instance, const float64_t *x, const float64_t *g, const float64_t fx, const float64_t xnorm, const float64_t gnorm, const float64_t step, int n, int k, int ls) |
HMM specific types | |
typedef float64_t | T_ALPHA_BETA_TABLE |
type for alpha/beta caching table | |
typedef uint8_t | T_STATES |
typedef T_STATES * | P_STATES |
convenience typedefs | |
typedef CDynInt< uint64_t, 3 > | uint192_t |
192 bit integer constructed out of 3 64bit uint64_t's | |
typedef CDynInt< uint64_t, 4 > | uint256_t |
256 bit integer constructed out of 4 64bit uint64_t's | |
typedef CDynInt< uint64_t, 8 > | uint512_t |
512 bit integer constructed out of 8 64bit uint64_t's | |
typedef CDynInt< uint64_t, 16 > | uint1024_t |
1024 bit integer constructed out of 16 64bit uint64_t's |
Functions | |
CSGObject * | new_sgserializable (const char *sgserializable_name, EPrimitiveType generic) |
void | init_shogun (void(*print_message)(FILE *target, const char *str), void(*print_warning)(FILE *target, const char *str), void(*print_error)(FILE *target, const char *str), void(*cancel_computations)(bool &delayed, bool &immediately)) |
void | sg_global_print_default (FILE *target, const char *str) |
void | init_shogun_with_defaults () |
void | exit_shogun () |
void | set_global_io (SGIO *io) |
SGIO * | get_global_io () |
void | set_global_parallel (Parallel *parallel) |
Parallel * | get_global_parallel () |
void | set_global_version (Version *version) |
Version * | get_global_version () |
void | set_global_math (CMath *math) |
CMath * | get_global_math () |
void | set_global_rand (CRandom *rand) |
CRandom * | get_global_rand () |
float32_t | sd_offset_add (float32_t *weights, vw_size_t mask, VwFeature *begin, VwFeature *end, vw_size_t offset) |
float32_t | sd_offset_truncadd (float32_t *weights, vw_size_t mask, VwFeature *begin, VwFeature *end, vw_size_t offset, float32_t gravity) |
float32_t | one_pf_quad_predict (float32_t *weights, VwFeature &f, v_array< VwFeature > &cross_features, vw_size_t mask) |
float32_t | one_pf_quad_predict_trunc (float32_t *weights, VwFeature &f, v_array< VwFeature > &cross_features, vw_size_t mask, float32_t gravity) |
float32_t | real_weight (float32_t w, float32_t gravity) |
void * | sqdist_thread_func (void *P) |
template<class T > | |
void | push (v_array< T > &v, const T &new_ele) |
template<class T > | |
void | alloc (v_array< T > &v, int length) |
template<class T > | |
v_array< T > | pop (v_array< v_array< T > > &stack) |
float | distance (CJLCoverTreePoint p1, CJLCoverTreePoint p2, float64_t upper_bound) |
v_array< CJLCoverTreePoint > | parse_points (CDistance *distance, EFeaturesContainer fc) |
void | print (CJLCoverTreePoint &p) |
static const double * | get_col (uint32_t j) |
malsar_result_t | malsar_clustered (CDotFeatures *features, double *y, double rho1, double rho2, const malsar_options &options) |
malsar_result_t | malsar_joint_feature_learning (CDotFeatures *features, double *y, double rho1, double rho2, const malsar_options &options) |
malsar_result_t | malsar_low_rank (CDotFeatures *features, double *y, double rho, const malsar_options &options) |
void * | sg_malloc (size_t size) |
void * | sg_calloc (size_t num, size_t size) |
void | sg_free (void *ptr) |
void * | sg_realloc (void *ptr, size_t size) |
slep_result_t | slep_mc_plain_lr (CDotFeatures *features, CMulticlassLabels *labels, float64_t z, const slep_options &options) |
slep_result_t | slep_mc_tree_lr (CDotFeatures *features, CMulticlassLabels *labels, float64_t z, const slep_options &options) |
double | compute_regularizer (double *w, double lambda, double lambda2, int n_vecs, int n_feats, int n_blocks, const slep_options &options) |
double | compute_lambda (double *ATx, double z, CDotFeatures *features, double *y, int n_vecs, int n_feats, int n_blocks, const slep_options &options) |
void | projection (double *w, double *v, int n_feats, int n_blocks, double lambda, double lambda2, double L, double *z, double *z0, const slep_options &options) |
double | search_point_gradient_and_objective (CDotFeatures *features, double *ATx, double *As, double *sc, double *y, int n_vecs, int n_feats, int n_tasks, double *g, double *gc, const slep_options &options) |
slep_result_t | slep_solver (CDotFeatures *features, double *y, double z, const slep_options &options) |
void | wrap_dsyev (char jobz, char uplo, int n, double *a, int lda, double *w, int *info) |
void | wrap_dgesvd (char jobu, char jobvt, int m, int n, double *a, int lda, double *sing, double *u, int ldu, double *vt, int ldvt, int *info) |
void | wrap_dgeqrf (int m, int n, double *a, int lda, double *tau, int *info) |
void | wrap_dorgqr (int m, int n, int k, double *a, int lda, double *tau, int *info) |
void | wrap_dsyevr (char jobz, char uplo, int n, double *a, int lda, int il, int iu, double *eigenvalues, double *eigenvectors, int *info) |
void | wrap_dsygvx (int itype, char jobz, char uplo, int n, double *a, int lda, double *b, int ldb, int il, int iu, double *eigenvalues, double *eigenvectors, int *info) |
void | wrap_dstemr (char jobz, char range, int n, double *diag, double *subdiag, double vl, double vu, int il, int iu, int *m, double *w, double *z__, int ldz, int nzc, int *isuppz, int tryrac, int *info) |
template<class T > | |
SGVector< T > | create_range_array (T min, T max, ERangeType type, T step, T type_base) |
static larank_kcache_t * | larank_kcache_create (CKernel *kernelfunc) |
static void | xtruncate (larank_kcache_t *self, int32_t k, int32_t nlen) |
static void | xpurge (larank_kcache_t *self) |
static void | larank_kcache_set_maximum_size (larank_kcache_t *self, int64_t entries) |
static void | larank_kcache_destroy (larank_kcache_t *self) |
static void | xminsize (larank_kcache_t *self, int32_t n) |
static int32_t * | larank_kcache_r2i (larank_kcache_t *self, int32_t n) |
static void | xextend (larank_kcache_t *self, int32_t k, int32_t nlen) |
static void | xswap (larank_kcache_t *self, int32_t i1, int32_t i2, int32_t r1, int32_t r2) |
static void | larank_kcache_swap_rr (larank_kcache_t *self, int32_t r1, int32_t r2) |
static void | larank_kcache_swap_ri (larank_kcache_t *self, int32_t r1, int32_t i2) |
static float64_t | xquery (larank_kcache_t *self, int32_t i, int32_t j) |
static float64_t | larank_kcache_query (larank_kcache_t *self, int32_t i, int32_t j) |
static void | larank_kcache_set_buddy (larank_kcache_t *self, larank_kcache_t *buddy) |
static float32_t * | larank_kcache_query_row (larank_kcache_t *self, int32_t i, int32_t len) |
static int32_t | line_search_backtracking (int32_t n, float64_t *x, float64_t *f, float64_t *g, float64_t *s, float64_t *stp, const float64_t *xp, const float64_t *gp, float64_t *wa, callback_data_t *cd, const lbfgs_parameter_t *param) |
static int32_t | line_search_backtracking_owlqn (int32_t n, float64_t *x, float64_t *f, float64_t *g, float64_t *s, float64_t *stp, const float64_t *xp, const float64_t *gp, float64_t *wp, callback_data_t *cd, const lbfgs_parameter_t *param) |
static int32_t | line_search_morethuente (int32_t n, float64_t *x, float64_t *f, float64_t *g, float64_t *s, float64_t *stp, const float64_t *xp, const float64_t *gp, float64_t *wa, callback_data_t *cd, const lbfgs_parameter_t *param) |
static int32_t | update_trial_interval (float64_t *x, float64_t *fx, float64_t *dx, float64_t *y, float64_t *fy, float64_t *dy, float64_t *t, float64_t *ft, float64_t *dt, const float64_t tmin, const float64_t tmax, int32_t *brackt) |
static float64_t | owlqn_x1norm (const float64_t *x, const int32_t start, const int32_t n) |
static void | owlqn_pseudo_gradient (float64_t *pg, const float64_t *x, const float64_t *g, const int32_t n, const float64_t c, const int32_t start, const int32_t end) |
static void | owlqn_project (float64_t *d, const float64_t *sign, const int32_t start, const int32_t end) |
void | lbfgs_parameter_init (lbfgs_parameter_t *param) |
int32_t | lbfgs (int32_t n, float64_t *x, float64_t *ptr_fx, lbfgs_evaluate_t proc_evaluate, lbfgs_progress_t proc_progress, void *instance, lbfgs_parameter_t *_param) |
int | lbfgs (int n, float64_t *x, float64_t *ptr_fx, lbfgs_evaluate_t proc_evaluate, lbfgs_progress_t proc_progress, void *instance, lbfgs_parameter_t *param) |
void | add_cutting_plane (bmrm_ll **tail, bool *map, float64_t *A, uint32_t free_idx, float64_t *cp_data, uint32_t dim) |
void | remove_cutting_plane (bmrm_ll **head, bmrm_ll **tail, bool *map, float64_t *icp) |
void | clean_icp (ICP_stats *icp_stats, BmrmStatistics &bmrm, bmrm_ll **head, bmrm_ll **tail, float64_t *&Hmat, float64_t *&diag_H, float64_t *&beta, bool *&map, uint32_t cleanAfter, float64_t *&b, uint32_t *&I, uint32_t cp_models) |
static const float64_t * | get_col (uint32_t i) |
BmrmStatistics | svm_bmrm_solver (CDualLibQPBMSOSVM *machine, float64_t *W, float64_t TolRel, float64_t TolAbs, float64_t _lambda, uint32_t _BufSize, bool cleanICP, uint32_t cleanAfter, float64_t K, uint32_t Tmax, bool verbose) |
float64_t * | get_cutting_plane (bmrm_ll *ptr) |
uint32_t | find_free_idx (bool *map, uint32_t size) |
static const float64_t * | get_col (uint32_t i) |
static line_search_res | zoom (CDualLibQPBMSOSVM *machine, float64_t lambda, float64_t a_lo, float64_t a_hi, float64_t initial_fval, SGVector< float64_t > &initial_solution, SGVector< float64_t > &search_dir, float64_t wolfe_c1, float64_t wolfe_c2, float64_t init_lgrad, float64_t f_lo, float64_t g_lo, float64_t f_hi, float64_t g_hi) |
std::vector< line_search_res > | line_search_with_strong_wolfe (CDualLibQPBMSOSVM *machine, float64_t lambda, float64_t initial_val, SGVector< float64_t > &initial_solution, SGVector< float64_t > &initial_grad, SGVector< float64_t > &search_dir, float64_t astart, float64_t amax=1.1, float64_t wolfe_c1=1E-4, float64_t wolfe_c2=0.9, float64_t max_iter=5) |
void | update_H (BmrmStatistics &ncbm, bmrm_ll *head, bmrm_ll *tail, SGMatrix< float64_t > &H, SGVector< float64_t > &diag_H, float64_t lambda, uint32_t maxCP, int32_t w_dim) |
BmrmStatistics | svm_ncbm_solver (CDualLibQPBMSOSVM *machine, float64_t *w, float64_t TolRel, float64_t TolAbs, float64_t _lambda, uint32_t _BufSize, bool cleanICP, uint32_t cleanAfter, bool is_convex, bool line_search, bool verbose) |
static const float64_t * | get_col (uint32_t i) |
BmrmStatistics | svm_p3bm_solver (CDualLibQPBMSOSVM *machine, float64_t *W, float64_t TolRel, float64_t TolAbs, float64_t _lambda, uint32_t _BufSize, bool cleanICP, uint32_t cleanAfter, float64_t K, uint32_t Tmax, uint32_t cp_models, bool verbose) |
static const float64_t * | get_col (uint32_t i) |
BmrmStatistics | svm_ppbm_solver (CDualLibQPBMSOSVM *machine, float64_t *W, float64_t TolRel, float64_t TolAbs, float64_t _lambda, uint32_t _BufSize, bool cleanICP, uint32_t cleanAfter, float64_t K, uint32_t Tmax, bool verbose) |
Variables | |
Parallel * | sg_parallel = NULL |
SGIO * | sg_io = NULL |
Version * | sg_version = NULL |
CMath * | sg_math = NULL |
CRandom * | sg_rand = NULL |
void(* | sg_print_message )(FILE *target, const char *str) = NULL |
function called to print normal messages | |
void(* | sg_print_warning )(FILE *target, const char *str) = NULL |
function called to print warning messages | |
void(* | sg_print_error )(FILE *target, const char *str) = NULL |
function called to print error messages | |
void(* | sg_cancel_computations )(bool &delayed, bool &immediately) = NULL |
function called to cancel things | |
const int32_t | quadratic_constant = 27942141 |
Constant used while hashing/accessing quadratic features. | |
const int32_t | constant_hash = 11650396 |
Constant used to access the constant feature. | |
const uint32_t | hash_base = 97562527 |
Seed for hash. | |
const int32_t | LOGSUM_TBL = 10000 |
static double * | H_diag_matrix |
static int | H_diag_matrix_ld |
static const float64_t | Q_test_statistic_values [10][8] |
static const lbfgs_parameter_t | _defparam |
static const uint32_t | QPSolverMaxIter = 0xFFFFFFFF |
static const float64_t | epsilon = 0.0 |
static float64_t * | H |
static uint32_t | BufSize |
static float64_t * | HMatrix |
static uint32_t | maxCPs |
static const float64_t | epsilon = 0.0 |
static const uint32_t | QPSolverMaxIter = 0xFFFFFFFF |
static const float64_t | epsilon = 0.0 |
static float64_t * | H |
static float64_t * | H2 |
static uint32_t | BufSize |
static const uint32_t | QPSolverMaxIter = 0xFFFFFFFF |
static const float64_t | epsilon = 0.0 |
static float64_t * | H |
static float64_t * | H2 |
static uint32_t | BufSize |
All classes and functions are contained in the shogun namespace
typedef struct tag_callback_data callback_data_t |
typedef uint32_t(* hash_func_t)(substring, uint32_t) |
Hash function typedef, takes a substring and seed as parameters.
Definition at line 21 of file vw_constants.h.
typedef struct tag_iteration_data iteration_data_t |
typedef struct shogun::_IterInfo IterInfo |
struct that contains current state of the iteration for iterative linear solvers
typedef float64_t KERNELCACHE_ELEM |
typedef int64_t KERNELCACHE_IDX |
typedef int32_t(* line_search_proc)(int32_t n, float64_t *x, float64_t *f, float64_t *g, float64_t *s, float64_t *stp, const float64_t *xp, const float64_t *gp, float64_t *wa, callback_data_t *cd, const lbfgs_parameter_t *param) |
Parse function typedef. Takes an IOBuffer and VwExample as arguments.
Definition at line 20 of file StreamingVwFile.h.
typedef float64_t T_ALPHA_BETA_TABLE |
typedef uint8_t T_STATES |
typedef CDynInt<uint64_t,16> uint1024_t |
typedef uint32_t vw_size_t |
vw_size_t typedef to work across platforms
Definition at line 24 of file vw_constants.h.
enum BaumWelchViterbiType |
enum E_COMPRESSION_TYPE |
compression type
Definition at line 21 of file Compressor.h.
enum E_PROB_TYPE |
enum E_VW_PARSER_TYPE |
The type of input to parse.
Definition at line 28 of file VwParser.h.
enum EAlphabet |
Alphabet of charfeatures/observations.
Definition at line 20 of file Alphabet.h.
type of measure
Definition at line 25 of file ContingencyTableEvaluation.h.
enum ECovType |
Covariance type
Definition at line 29 of file Gaussian.h.
enum EDirectSolverType |
solver type for direct solvers
Definition at line 22 of file DirectLinearSolverComplex.h.
enum EDistanceType |
type of distance
Definition at line 31 of file Distance.h.
enum EEvaluationDirection |
enum used to define whether an evaluation measure is to be minimized or maximized
Definition at line 24 of file Evaluation.h.
Type of evaluation result. Currently this includes Cross Validation and Gradient Evaluation
Definition at line 21 of file EvaluationResult.h.
enum EFeatureClass |
shogun feature class
Definition at line 35 of file FeatureTypes.h.
enum EFeatureProperty |
shogun feature properties
Definition at line 60 of file FeatureTypes.h.
enum EFeaturesContainer |
Type used to indicate where to find (either lhs or rhs) the coordinate information of this point in the associated CDistance object
Definition at line 107 of file JLCoverTreePoint.h.
enum EFeatureType |
shogun feature type
F_UNKNOWN | |
F_BOOL | |
F_CHAR | |
F_BYTE | |
F_SHORT | |
F_WORD | |
F_INT | |
F_UINT | |
F_LONG | |
F_ULONG | |
F_SHORTREAL | |
F_DREAL | |
F_LONGREAL | |
F_ANY |
Definition at line 16 of file FeatureTypes.h.
gradient availability
Definition at line 97 of file SGObject.h.
enum EInferenceType |
inference type
Definition at line 32 of file InferenceMethod.h.
enum EKernelProperty |
enum EKernelType |
kernel type
enum ELikelihoodModelType |
type of likelihood model
Definition at line 23 of file LikelihoodModel.h.
enum ELossType |
shogun loss type
L_HINGELOSS | |
L_SMOOTHHINGELOSS | |
L_SQUAREDHINGELOSS | |
L_SQUAREDLOSS | |
L_LOGLOSS | |
L_LOGLOSSMARGIN |
Definition at line 27 of file LossFunction.h.
enum EMachineType |
classifier type
enum EMAPInferType |
the following inference methods are acceptable: Tree Max Product, Loopy Max Product, LP Relaxation, Sequential Tree Reweighted Max Product (TRW-S), Iterated Conditional Mode (ICM), Naive Mean Field, Structured Mean Field.
TREE_MAX_PROD | |
LOOPY_MAX_PROD | |
LP_RELAXATION | |
TRWS_MAX_PROD | |
ITER_COND_MODE | |
NAIVE_MEAN_FIELD | |
STRUCT_MEAN_FIELD |
Definition at line 28 of file MAPInference.h.
enum EMessageLocation |
enum EMessageType |
The io libs output [DEBUG] etc. in front of every message. 'Higher' message types filter the output depending on the log level, i.e. at level MSG_CRITICAL all messages from MSG_CRITICAL to MSG_EMERGENCY are printed.
model selection availability
Definition at line 91 of file SGObject.h.
enum EMSParamType |
value type of a model selection parameter node
MSPT_NONE |
no type |
MSPT_FLOAT64 | |
MSPT_INT32 | |
MSPT_FLOAT64_VECTOR | |
MSPT_INT32_VECTOR | |
MSPT_FLOAT64_SGVECTOR | |
MSPT_INT32_SGVECTOR |
Definition at line 29 of file ModelSelectionParameters.h.
enum ENormalizerType |
enum for the different methods to approximate the null-distribution
Definition at line 25 of file TestStatistic.h.
enum EOperatorFunction |
linear operator function types
Definition at line 23 of file OperatorFunction.h.
enum EOptimizationType |
enum EPCAMode |
enum EPreprocessorType |
enumeration of possible preprocessor types used by Shogun UI
Note to developer: any new preprocessor should be added here.
Definition at line 30 of file Preprocessor.h.
enum EProbHeuristicType |
multiclass prob output heuristics in [1] OVA_NORM: simple normalization of probabilities, eq.(6) OVA_SOFTMAX: normalizing using the softmax function, eq.(7) OVO_PRICE: proposed by Price et al., see method 1 in [1] OVO_HASTIE: proposed by Hastie et al., see method 2 [9] in [1] OVO_HAMAMURA: proposed by Hamamura et al., see eq.(14) in [1]
[1] J. Milgram, M. Cheriet, R. Sabourin, "One Against One" or "One Against All": Which One is Better for Handwriting Recognition with SVMs?
Definition at line 34 of file MulticlassStrategy.h.
enum EProblemType |
enum EQPType |
enum EQuadraticMMDType |
Enum to select which statistic type of quadratic time MMD should be computed
Definition at line 23 of file QuadraticTimeMMD.h.
enum ERangeType |
type of range
Definition at line 23 of file ModelSelectionParameters.h.
enum ERegressionType |
type of regressor
Definition at line 17 of file Regression.h.
enum ESolver |
Training method selection
BMRM |
Standard BMRM algorithm. |
PPBMRM |
Proximal Point BMRM (BMRM with prox-term) |
P3BMRM |
Proximal Point P-BMRM (multiple cutting plane models) |
NCBM |
Definition at line 24 of file DualLibQPBMSOSVM.h.
enum ESolverType |
enum ESPEStrategy |
Stochastic Proximity Embedding (SPE) strategy
Definition at line 23 of file StochasticProximityEmbedding.h.
enum EStateModelType |
state model type
Definition at line 18 of file StateModelTypes.h.
enum EStatisticType |
enum for different statistic types
Definition at line 19 of file TestStatistic.h.
enum EStructRiskType |
The structured empirical risk types, corresponding to different training objectives [1].
[1] T. Joachims, T. Finley, Chun-Nam Yu, Cutting-Plane Training of Structural SVMs, Machine Learning Journal, 2009.
N_SLACK_MARGIN_RESCALING | |
N_SLACK_SLACK_RESCALING | |
ONE_SLACK_MARGIN_RESCALING | |
ONE_SLACK_SLACK_RESCALING | |
CUSTOMIZED_RISK |
Definition at line 29 of file StructuredOutputMachine.h.
enum EStructuredDataType |
structured data type
Definition at line 17 of file StructuredDataTypes.h.
enum ETrainingType |
which training method to use for KRR
Definition at line 26 of file KernelRidgeRegression.h.
enum ETransformType |
enum EVwCacheType |
Enum EVwCacheType specifies the type of cache used, either C_NATIVE or C_PROTOBUF.
Definition at line 29 of file VwCacheReader.h.
enum EWDKernType |
WD kernel type
E_WD | |
E_EXTERNAL | |
E_BLOCK_CONST | |
E_BLOCK_LINEAR | |
E_BLOCK_SQPOLY | |
E_BLOCK_CUBICPOLY | |
E_BLOCK_EXP | |
E_BLOCK_LOG |
Definition at line 25 of file WeightedDegreeStringKernel.h.
Type of spectral windowing function.
HomogeneousKernelMapWindowUniform |
uniform window |
HomogeneousKernelMapWindowRectangular |
rectangular window |
Definition at line 30 of file HomogeneousKernelMap.h.
Type of kernel.
HomogeneousKernelIntersection |
intersection kernel |
HomogeneousKernelChi2 |
Chi2 kernel |
HomogeneousKernelJS |
Jensen-Shannon kernel |
Definition at line 23 of file HomogeneousKernelMap.h.
enum KernelName |
Names of kernels that can currently be approximated
GAUSSIAN |
the approximate Gaussian kernel expects one parameter to be specified: the kernel width |
NOT_SPECIFIED |
not specified |
Definition at line 24 of file RandomFourierDotFeatures.h.
liblinear regression solver type
Definition at line 22 of file LibLinearRegression.h.
liblinear solver type
Definition at line 25 of file LibLinear.h.
enum SCATTER_TYPE |
scatter svm variant
NO_BIAS_LIBSVM |
no bias w/ libsvm |
NO_BIAS_SVMLIGHT |
no bias w/ svmlight |
TEST_RULE1 |
training with bias using test rule 1 |
TEST_RULE2 |
training with bias using test rule 2 |
Definition at line 25 of file ScatterSVM.h.
void add_cutting_plane | ( | bmrm_ll ** | tail, |
bool * | map, | ||
float64_t * | A, | ||
uint32_t | free_idx, | ||
float64_t * | cp_data, | ||
uint32_t | dim | ||
) |
Add cutting plane
tail | Pointer to the last CP entry |
map | Pointer to map storing info about CP physical memory |
A | CP physical memory |
free_idx | Index to physical memory where the CP data will be stored |
cp_data | CP data |
dim | Dimension of CP data |
Definition at line 26 of file libbmrm.cpp.
void shogun::alloc | ( | v_array< T > & | v, |
int | length | ||
) |
Used to modify the capacity of the vector
v | vector |
length | the new length of the vector |
Definition at line 78 of file JLCoverTreePoint.h.
void clean_icp | ( | ICP_stats * | icp_stats, |
BmrmStatistics & | bmrm, | ||
bmrm_ll ** | head, | ||
bmrm_ll ** | tail, | ||
float64_t *& | H, | ||
float64_t *& | diag_H, | ||
float64_t *& | beta, | ||
bool *& | map, | ||
uint32_t | cleanAfter, | ||
float64_t *& | b, | ||
uint32_t *& | I, | ||
uint32_t | cp_models = 0 |
||
) |
Clean up inactive cutting planes
Definition at line 88 of file libbmrm.cpp.
double shogun::compute_lambda | ( | double * | ATx, |
double | z, | ||
CDotFeatures * | features, | ||
double * | y, | ||
int | n_vecs, | ||
int | n_feats, | ||
int | n_blocks, | ||
const slep_options & | options | ||
) |
Definition at line 106 of file slep_solver.cpp.
double shogun::compute_regularizer | ( | double * | w, |
double | lambda, | ||
double | lambda2, | ||
int | n_vecs, | ||
int | n_feats, | ||
int | n_blocks, | ||
const slep_options & | options | ||
) |
Definition at line 23 of file slep_solver.cpp.
SGVector<T> shogun::create_range_array | ( | T | min, |
T | max, | ||
ERangeType | type, | ||
T | step, | ||
T | type_base | ||
) |
Creates an array of values specified by the parameters. A minimum, a maximum, a step interval, and an ERangeType (see above) are specified and used to fill an array with concrete values. For some range types, a base is required. All values are given by void pointers to them (type conversion is done via the m_value_type variable).
min | minimum of desired range. Requires min<max |
max | maximum of desired range. Requires min<max |
type | the way the values are created, see ERangeType |
step | increment interval for the values |
type_base | base for EXP or LOG ranges |
Definition at line 207 of file ModelSelectionParameters.h.
float shogun::distance | ( | CJLCoverTreePoint | p1, |
CJLCoverTreePoint | p2, | ||
float64_t | upper_bound | ||
) |
Functions declared outside the class definition to respect the JLCoverTree structure.
Calls m_distance->distance() with the proper index order, depending on the feature containers in m_distance, for each of the points.
Definition at line 138 of file JLCoverTreePoint.h.
void exit_shogun | ( | ) |
uint32_t shogun::find_free_idx | ( | bool * | map, |
uint32_t | size | ||
) |
float64_t* shogun::get_cutting_plane | ( | bmrm_ll * | ptr | ) |
SGIO * get_global_io | ( | ) |
CMath * get_global_math | ( | ) |
Parallel * get_global_parallel | ( | ) |
CRandom * get_global_rand | ( | ) |
Version * get_global_version | ( | ) |
void init_shogun | ( | void(*)(FILE *target, const char *str) | print_message = NULL , |
void(*)(FILE *target, const char *str) | print_warning = NULL , |
||
void(*)(FILE *target, const char *str) | print_error = NULL , |
||
void(*)(bool &delayed, bool &immediately) | cancel_computations = NULL |
||
) |
This function must be called before libshogun is used. By default, shogun does not produce any output messages (neither debug nor error, apart from exceptions). This function allows one to specify custom output callback functions and a callback function to check for exceptions:
print_message | function pointer to print a message |
print_warning | function pointer to print a warning message |
print_error | function pointer to print an error message (this will be printed before shogun throws an exception) |
cancel_computations | function pointer to check for exception |
void init_shogun_with_defaults | ( | ) |
std::vector<line_search_res> shogun::line_search_with_strong_wolfe | ( | CDualLibQPBMSOSVM * | machine, |
float64_t | lambda, | ||
float64_t | initial_val, | ||
SGVector< float64_t > & | initial_solution, | ||
SGVector< float64_t > & | initial_grad, | ||
SGVector< float64_t > & | search_dir, | ||
float64_t | astart, | ||
float64_t | amax = 1.1 , |
||
float64_t | wolfe_c1 = 1E-4 , |
||
float64_t | wolfe_c2 = 0.9 , |
||
float64_t | max_iter = 5 |
||
) |
Definition at line 159 of file libncbm.cpp.
malsar_result_t malsar_clustered | ( | CDotFeatures * | features, |
double * | y, | ||
double | rho1, | ||
double | rho2, | ||
const malsar_options & | options | ||
) |
Routine for learning a linear multitask logistic regression model using the clustered multitask learning algorithm.
Definition at line 33 of file malsar_clustered.cpp.
malsar_result_t malsar_joint_feature_learning | ( | CDotFeatures * | features, |
double * | y, | ||
double | rho1, | ||
double | rho2, | ||
const malsar_options & | options | ||
) |
Routine for learning a linear multitask logistic regression model using the joint feature learning algorithm.
Definition at line 24 of file malsar_joint_feature_learning.cpp.
malsar_result_t malsar_low_rank | ( | CDotFeatures * | features, |
double * | y, | ||
double | rho, | ||
const malsar_options & | options | ||
) |
Routine for learning a linear multitask logistic regression model using the low-rank multitask algorithm.
Definition at line 22 of file malsar_low_rank.cpp.
CSGObject * new_sgserializable | ( | const char * | sgserializable_name, |
EPrimitiveType | generic | ||
) |
new shogun serializable
sgserializable_name | |
generic |
Definition at line 2062 of file class_list.cpp.
float32_t one_pf_quad_predict | ( | float32_t * | weights, |
VwFeature & | f, | ||
v_array< VwFeature > & | cross_features, | ||
vw_size_t | mask | ||
) |
Get the prediction contribution from one feature.
weights | weights |
f | feature |
cross_features | paired features |
mask | mask |
Definition at line 40 of file vw_math.cpp.
float32_t one_pf_quad_predict_trunc | ( | float32_t * | weights, |
VwFeature & | f, | ||
v_array< VwFeature > & | cross_features, | ||
vw_size_t | mask, | ||
float32_t | gravity | ||
) |
Get the prediction contribution from one feature.
Weights are taken as truncated weights.
weights | weights |
f | feature |
cross_features | paired features |
mask | mask |
gravity | weight threshold value |
Definition at line 48 of file vw_math.cpp.
v_array< CJLCoverTreePoint > shogun::parse_points | ( | CDistance * | distance, |
EFeaturesContainer | fc | ||
) |
Fills up a v_array of CJLCoverTreePoint objects
Definition at line 183 of file JLCoverTreePoint.h.
v_array<T> shogun::pop | ( | v_array< v_array< T > > & | stack | ) |
Returns the vector previous to the pointed one in the stack of vectors and decrements the stack index. No memory is freed here. If there are no vectors stored in the stack, a new empty vector is created and returned.
stack | of vectors |
Definition at line 94 of file JLCoverTreePoint.h.
void shogun::print | ( | CJLCoverTreePoint & | p | ) |
Print the information of the CoverTree point
Definition at line 207 of file JLCoverTreePoint.h.
void shogun::projection | ( | double * | w, |
double * | v, | ||
int | n_feats, | ||
int | n_blocks, | ||
double | lambda, | ||
double | lambda2, | ||
double | L, | ||
double * | z, | ||
double * | z0, | ||
const slep_options & | options | ||
) |
Definition at line 277 of file slep_solver.cpp.
void shogun::push | ( | v_array< T > & | v, |
const T & | new_ele | ||
) |
Insert a new element at the end of the vector
v | vector |
new_ele | element to insert |
Definition at line 61 of file JLCoverTreePoint.h.
void remove_cutting_plane | ( | bmrm_ll ** | head, |
bmrm_ll ** | tail, | ||
bool * | map, | ||
float64_t * | icp | ||
) |
Remove cutting plane at given index
head | Pointer to the first CP entry |
tail | Pointer to the last CP entry |
map | Pointer to map storing info about CP physical memory |
icp | Pointer to inactive CP that should be removed |
Definition at line 55 of file libbmrm.cpp.
float32_t sd_offset_add | ( | float32_t * | weights, |
vw_size_t | mask, | ||
VwFeature * | begin, | ||
VwFeature * | end, | ||
vw_size_t | offset | ||
) |
Dot product of feature vector with the weight vector with an offset added to the feature indices.
weights | weight vector |
mask | mask |
begin | first feature of the vector |
end | last feature of the vector |
offset | index offset |
Definition at line 20 of file vw_math.cpp.
float32_t sd_offset_truncadd | ( | float32_t * | weights, |
vw_size_t | mask, | ||
VwFeature * | begin, | ||
VwFeature * | end, | ||
vw_size_t | offset, | ||
float32_t | gravity | ||
) |
Dot product of feature vector with the weight vector with an offset added to the feature indices.
Weights are taken as the truncated weights.
weights | weights |
mask | mask |
begin | first feature of the vector |
end | last feature of the vector |
offset | index offset |
gravity | weight threshold value |
Definition at line 28 of file vw_math.cpp.
double shogun::search_point_gradient_and_objective | ( | CDotFeatures * | features, |
double * | ATx, | ||
double * | As, | ||
double * | sc, | ||
double * | y, | ||
int | n_vecs, | ||
int | n_feats, | ||
int | n_tasks, | ||
double * | g, | ||
double * | gc, | ||
const slep_options & | options | ||
) |
Definition at line 313 of file slep_solver.cpp.
void set_global_io | ( | SGIO * | io | ) |
void set_global_math | ( | CMath * | math | ) |
void set_global_parallel | ( | Parallel * | parallel | ) |
void set_global_rand | ( | CRandom * | rand | ) |
void set_global_version | ( | Version * | version | ) |
void* shogun::sg_calloc | ( | size_t | num, |
size_t | size | ||
) |
Definition at line 226 of file memory.cpp.
void shogun::sg_free | ( | void * | ptr | ) |
Definition at line 262 of file memory.cpp.
void shogun::sg_global_print_default | ( | FILE * | target, |
const char * | str | ||
) |
void* shogun::sg_malloc | ( | size_t | size | ) |
Definition at line 193 of file memory.cpp.
void* shogun::sg_realloc | ( | void * | ptr, |
size_t | size | ||
) |
Definition at line 278 of file memory.cpp.
slep_result_t slep_mc_plain_lr | ( | CDotFeatures * | features, |
CMulticlassLabels * | labels, | ||
float64_t | z, | ||
const slep_options & | options | ||
) |
Accelerated projected gradient solver for the plain multiclass logistic regression problem.
features | features to be used |
labels | labels to be used |
z | regularization ratio |
options | options of solver |
Definition at line 27 of file slep_mc_plain_lr.cpp.
slep_result_t slep_mc_tree_lr | ( | CDotFeatures * | features, |
CMulticlassLabels * | labels, | ||
float64_t | z, | ||
const slep_options & | options | ||
) |
Accelerated projected gradient solver for the multiclass logistic regression problem with feature tree regularization.
features | features to be used |
labels | labels to be used |
z | regularization ratio |
options | options of solver |
Definition at line 29 of file slep_mc_tree_lr.cpp.
slep_result_t slep_solver | ( | CDotFeatures * | features, |
double * | y, | ||
double | z, | ||
const slep_options & | options | ||
) |
Learning optimization task solver ported from the SLEP (Sparse LEarning Package) library.
Based on accelerated projected gradient method.
Supports two types of losses: logistic and least squares.
Supports multitask problems (task group [MULTITASK_GROUP] and task tree [MULTITASK_TREE] relations), problems with feature relations (feature group [FEATURE_GROUP] and feature tree [FEATURE_TREE]), basic regularized problems [PLAIN] and fused formulation.
Definition at line 402 of file slep_solver.cpp.
void* shogun::sqdist_thread_func | ( | void * | P | ) |
Definition at line 144 of file KMeans.cpp.
BmrmStatistics svm_bmrm_solver | ( | CDualLibQPBMSOSVM * | machine, |
float64_t * | W, | ||
float64_t | TolRel, | ||
float64_t | TolAbs, | ||
float64_t | _lambda, | ||
uint32_t | _BufSize, | ||
bool | cleanICP, | ||
uint32_t | cleanAfter, | ||
float64_t | K, | ||
uint32_t | Tmax, | ||
bool | verbose | ||
) |
Standard BMRM Solver for Structured Output Learning
machine | Pointer to the BMRM machine |
W | Weight vector |
TolRel | Relative tolerance |
TolAbs | Absolute tolerance |
_lambda | Regularization constant |
_BufSize | Size of the CP buffer (i.e. maximal number of iterations) |
cleanICP | Flag that enables/disables inactive cutting plane removal feature |
cleanAfter | Number of iterations a cutting plane must remain inactive before it is removed |
K | Parameter K |
Tmax | Parameter Tmax |
verbose | Flag that enables/disables screen output |
Definition at line 179 of file libbmrm.cpp.
BmrmStatistics svm_ncbm_solver | ( | CDualLibQPBMSOSVM * | machine, |
float64_t * | w, | ||
float64_t | TolRel, | ||
float64_t | TolAbs, | ||
float64_t | _lambda, | ||
uint32_t | _BufSize, | ||
bool | cleanICP, | ||
uint32_t | cleanAfter, | ||
bool | is_convex = false , |
||
bool | line_search = true , |
||
bool | verbose = false |
||
) |
NCBM (non-convex bundle method) solver. Solves any unconstrained minimization problem of the form: min lambda/2 ||w||^2 + R(w), where R(w) is a risk function of any kind.
Definition at line 308 of file libncbm.cpp.
BmrmStatistics svm_p3bm_solver | ( | CDualLibQPBMSOSVM * | machine, |
float64_t * | W, | ||
float64_t | TolRel, | ||
float64_t | TolAbs, | ||
float64_t | _lambda, | ||
uint32_t | _BufSize, | ||
bool | cleanICP, | ||
uint32_t | cleanAfter, | ||
float64_t | K, | ||
uint32_t | Tmax, | ||
uint32_t | cp_models, | ||
bool | verbose | ||
) |
Proximal Point P-BMRM (multiple cutting plane models) Solver for Structured Output Learning
machine | Pointer to the BMRM machine |
W | Weight vector |
TolRel | Relative tolerance |
TolAbs | Absolute tolerance |
_lambda | Regularization constant |
_BufSize | Size of the CP buffer (i.e. maximal number of iterations) |
cleanICP | Flag that enables/disables inactive cutting plane removal feature |
cleanAfter | Number of iterations a cutting plane must remain inactive before it is removed |
K | Parameter K |
Tmax | Parameter Tmax |
cp_models | Count of cutting plane models to be used |
verbose | Flag that enables/disables screen output |
Definition at line 34 of file libp3bm.cpp.
BmrmStatistics svm_ppbm_solver | ( | CDualLibQPBMSOSVM * | machine, |
float64_t * | W, | ||
float64_t | TolRel, | ||
float64_t | TolAbs, | ||
float64_t | _lambda, | ||
uint32_t | _BufSize, | ||
bool | cleanICP, | ||
uint32_t | cleanAfter, | ||
float64_t | K, | ||
uint32_t | Tmax, | ||
bool | verbose | ||
) |
Proximal Point BMRM Solver for Structured Output Learning
machine | Pointer to the BMRM machine |
W | Weight vector |
TolRel | Relative tolerance |
TolAbs | Absolute tolerance |
_lambda | Regularization constant |
_BufSize | Size of the CP buffer (i.e. maximal number of iterations) |
cleanICP | Flag that enables/disables inactive cutting plane removal feature |
cleanAfter | Number of iterations a cutting plane must remain inactive before it is removed |
K | Parameter K |
Tmax | Parameter Tmax |
verbose | Flag that enables/disables screen output |
Definition at line 34 of file libppbm.cpp.
void shogun::update_H | ( | BmrmStatistics & | ncbm, |
bmrm_ll * | head, | ||
bmrm_ll * | tail, | ||
SGMatrix< float64_t > & | H, | ||
SGVector< float64_t > & | diag_H, | ||
float64_t | lambda, | ||
uint32_t | maxCP, | ||
int32_t | w_dim | ||
) |
Definition at line 274 of file libncbm.cpp.
static int32_t update_trial_interval (float64_t *x, float64_t *fx, float64_t *dx, float64_t *y, float64_t *fy, float64_t *dy, float64_t *t, float64_t *ft, float64_t *dt, const float64_t tmin, const float64_t tmax, int32_t *brackt)
Update a safeguarded trial value and interval for line search.
The parameter x represents the step with the least function value. The parameter t represents the current step. This function assumes that the derivative at the point x, in the direction of the step, is negative. If brackt is set to true, the minimizer has been bracketed in an interval of uncertainty with endpoints between x and y.
x | The pointer to the value of one endpoint. |
fx | The pointer to the value of f(x). |
dx | The pointer to the value of f'(x). |
y | The pointer to the value of another endpoint. |
fy | The pointer to the value of f(y). |
dy | The pointer to the value of f'(y). |
t | The pointer to the value of the trial value, t. |
ft | The pointer to the value of f(t). |
dt | The pointer to the value of f'(t). |
tmin | The minimum value for the trial value, t. |
tmax | The maximum value for the trial value, t. |
brackt | The pointer to the predicate if the trial value is bracketed. |
int32_t | Status value. Zero indicates a normal termination. |
void shogun::wrap_dgeqrf | ( | int | m, |
int | n, | ||
double * | a, | ||
int | lda, | ||
double * | tau, | ||
int * | info | ||
) |
Definition at line 271 of file lapack.cpp.
void shogun::wrap_dgesvd | ( | char | jobu, |
char | jobvt, | ||
int | m, | ||
int | n, | ||
double * | a, | ||
int | lda, | ||
double * | sing, | ||
double * | u, | ||
int | ldu, | ||
double * | vt, | ||
int | ldvt, | ||
int * | info | ||
) |
Definition at line 253 of file lapack.cpp.
void shogun::wrap_dorgqr | ( | int | m, |
int | n, | ||
int | k, | ||
double * | a, | ||
int | lda, | ||
double * | tau, | ||
int * | info | ||
) |
Definition at line 289 of file lapack.cpp.
void shogun::wrap_dstemr | ( | char | jobz, |
char | range, | ||
int | n, | ||
double * | diag, | ||
double * | subdiag, | ||
double | vl, | ||
double | vu, | ||
int | il, | ||
int | iu, | ||
int * | m, | ||
double * | w, | ||
double * | z__, | ||
int | ldz, | ||
int | nzc, | ||
int * | isuppz, | ||
int | tryrac, | ||
int * | info | ||
) |
Definition at line 373 of file lapack.cpp.
void shogun::wrap_dsyev | ( | char | jobz, |
char | uplo, | ||
int | n, | ||
double * | a, | ||
int | lda, | ||
double * | w, | ||
int * | info | ||
) |
Definition at line 235 of file lapack.cpp.
void shogun::wrap_dsyevr | ( | char | jobz, |
char | uplo, | ||
int | n, | ||
double * | a, | ||
int | lda, | ||
int | il, | ||
int | iu, | ||
double * | eigenvalues, | ||
double * | eigenvectors, | ||
int * | info | ||
) |
Definition at line 307 of file lapack.cpp.
void shogun::wrap_dsygvx | ( | int | itype, |
char | jobz, | ||
char | uplo, | ||
int | n, | ||
double * | a, | ||
int | lda, | ||
double * | b, | ||
int | ldb, | ||
int | il, | ||
int | iu, | ||
double * | eigenvalues, | ||
double * | eigenvectors, | ||
int * | info | ||
) |
Definition at line 342 of file lapack.cpp.
const int32_t constant_hash = 11650396 |
Constant used to access the constant feature.
Definition at line 30 of file vw_constants.h.
const uint32_t hash_base = 97562527 |
Seed for hash.
Definition at line 33 of file vw_constants.h.
const int32_t LOGSUM_TBL = 10000 |
span of the logsum table
Definition at line 21 of file LocalAlignmentStringKernel.h.
const int32_t quadratic_constant = 27942141 |
Constant used while hashing/accessing quadratic features.
Definition at line 27 of file vw_constants.h.
void(* sg_cancel_computations)(bool &delayed, bool &immediately) = NULL |
void(* sg_print_error)(FILE *target, const char *str) = NULL |
void(* sg_print_message)(FILE *target, const char *str) = NULL |
void(* sg_print_warning)(FILE *target, const char *str) = NULL |