SHOGUN
4.2.0
►NEigen | |
CLDLT | |
CMap | |
CMatrix | |
CStride | |
►Nshogun | All classes and functions are contained in the shogun namespace |
►Nlinalg | |
►Nimplementation | |
►Nspecial_purpose | |
Ccross_entropy | |
Ccross_entropy< Backend::EIGEN3, Matrix > | |
Clogistic | |
Clogistic< Backend::EIGEN3, Matrix > | |
Cmultiply_by_logistic_derivative | |
Cmultiply_by_logistic_derivative< Backend::EIGEN3, Matrix > | |
Cmultiply_by_rectified_linear_derivative | |
Cmultiply_by_rectified_linear_derivative< Backend::EIGEN3, Matrix > | |
Crectified_linear | |
Crectified_linear< Backend::EIGEN3, Matrix > | |
Csoftmax | |
Csoftmax< Backend::EIGEN3, Matrix > | |
Csquared_error | |
Csquared_error< Backend::EIGEN3, Matrix > | |
Cadd | Generic class which is specialized for different backends to perform addition |
Cadd< Backend::EIGEN3, Matrix > | Partial specialization of add for the Eigen3 backend |
Capply | Generic class which is specialized for different backends to perform apply |
Capply< Backend::EIGEN3, Matrix, Vector > | Partial specialization of apply for the Eigen3 backend |
Ccholesky | Generic class which is specialized for different backends to compute the Cholesky decomposition of a dense matrix |
Ccholesky< Backend::EIGEN3, Matrix > | Partial specialization of cholesky for the Eigen3 backend |
Ccolwise_sum | Generic class colwise_sum which provides a static compute method. This class is specialized for different types of matrices and backend, providing a means to deal with various matrices directly without having to convert |
Ccolwise_sum< Backend::EIGEN3, Matrix > | Specialization of generic colwise_sum which works with SGMatrix and uses Eigen3 as backend for computing sum |
Cconvolve | |
Cconvolve< Backend::EIGEN3, Matrix > | |
Cdot | Generic class dot which provides a static compute method. This class is specialized for different types of vectors and backend, providing a means to deal with various vectors directly without having to convert |
Cdot< Backend::EIGEN3, Vector > | Specialization of generic dot for the Eigen3 backend |
Celementwise_product | |
Celementwise_product< Backend::EIGEN3, Matrix > | |
Celementwise_square | Generic class elementwise_square which provides a static compute method. This class is specialized for different types of matrices and backend, providing a means to deal with various matrices directly without having to convert |
Celementwise_square< Backend::EIGEN3, Matrix > | Partial specialization of generic elementwise_square for the Eigen3 backend |
Celementwise_unary_operation | Template struct elementwise_unary_operation. This struct is specialized for computing element-wise operations for both matrices and vectors of CPU (SGMatrix/SGVector) or GPU (CGPUMatrix/CGPUVector) |
Celementwise_unary_operation< Backend::EIGEN3, Operand, ReturnType, UnaryOp > | Specialization for elementwise_unary_operation with EIGEN3 backend. The operand types MUST be of CPU types (SGMatrix/SGVector) |
Celementwise_unary_operation< Backend::NATIVE, Operand, ReturnType, UnaryOp > | Specialization for elementwise_unary_operation with NATIVE backend. The operand types MUST be of CPU types (SGMatrix/SGVector) |
Cint2float | Generic class int2float which converts different types of integer into float64 type |
Cint2float< int32_t > | Specialization of generic class int2float which converts int32 into float64 |
Cint2float< int64_t > | Specialization of generic class int2float which converts int64 into float64 |
Cmatrix_product | |
Cmatrix_product< Backend::EIGEN3, Matrix > | |
Cmax | Generic class which is specialized for different backends to perform the max operation |
Cmax< Backend::EIGEN3, Matrix > | Specialization of max for the Eigen3 backend |
Cmean | Generic class mean which provides a static compute method |
Cmean< Backend::EIGEN3, Matrix > | Specialization of generic mean which works with SGVector and SGMatrix and uses Eigen3 as backend for computing mean |
Crange_fill | Generic class which is specialized for different backends to perform the Range fill operation |
Crange_fill< Backend::EIGEN3, Matrix > | Partial specialization of range_fill for the Eigen3 backend |
Crowwise_mean | Generic class rowwise_mean which provides a static compute method |
Crowwise_mean< Backend::EIGEN3, Matrix > | Specialization of generic rowwise_mean which works with SGMatrix and uses Eigen3 as backend for computing rowwise mean |
Crowwise_sum | Generic class rowwise_sum which provides a static compute method. This class is specialized for different types of matrices and backend, providing a means to deal with various matrices directly without having to convert |
Crowwise_sum< Backend::EIGEN3, Matrix > | Specialization of generic rowwise_sum which works with SGMatrix and uses Eigen3 as backend for computing sum |
Cscale | |
Cscale< Backend::EIGEN3, Matrix > | |
Cset_rows_const | |
Cset_rows_const< Backend::EIGEN3, Matrix, Vector > | |
Csum | Generic class sum which provides a static compute method. This class is specialized for different types of matrices and backend, providing a means to deal with various matrices directly without having to convert |
Csum< Backend::EIGEN3, Matrix > | Specialization of generic sum which works with SGMatrix and uses Eigen3 as backend for computing sum |
Csum_symmetric | Generic class sum symmetric which provides a static compute method. This class is specialized for different types of matrices and backend, providing a means to deal with various matrices directly without having to convert |
Csum_symmetric< Backend::EIGEN3, Matrix > | Specialization of generic sum symmetric which works with SGMatrix and uses Eigen3 as backend for computing sum |
Cvector_sum | Generic class vector_sum which provides a static compute method. This class is specialized for different types of vectors and backend, providing a means to deal with various vectors directly without having to convert |
Cvector_sum< Backend::EIGEN3, Vector > | Specialization of generic vector_sum for the Eigen3 backend |
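Most of the implementation entries listed above follow one backend-dispatch pattern: a generic template only declares a static compute method, and partial specializations per backend provide the implementation. A minimal self-contained sketch of that pattern (ToyMatrix is a hypothetical stand-in for SGMatrix, not a Shogun type):

    #include <vector>

    enum class Backend { EIGEN3, NATIVE };

    // Toy column-major matrix standing in for SGMatrix (hypothetical).
    template <class T>
    struct ToyMatrix
    {
        typedef T Scalar;
        int num_rows, num_cols;
        std::vector<T> data;
        T operator()(int i, int j) const { return data[j * num_rows + i]; }
    };

    // Generic template: declares the interface only.
    template <Backend B, class Matrix>
    struct sum_impl;

    // Partial specialization for one backend; the add/scale/dot/...
    // entries above all follow this same dispatch shape.
    template <class Matrix>
    struct sum_impl<Backend::NATIVE, Matrix>
    {
        typedef typename Matrix::Scalar T;
        static T compute(const Matrix& m)
        {
            T result = T(0);
            for (int j = 0; j < m.num_cols; ++j)
                for (int i = 0; i < m.num_rows; ++i)
                    result += m(i, j);
            return result;
        }
    };

    int main()
    {
        ToyMatrix<double> m{2, 2, {1.0, 2.0, 3.0, 4.0}};
        return sum_impl<Backend::NATIVE, ToyMatrix<double>>::compute(m) == 10.0 ? 0 : 1;
    }

Dispatching on a Backend template parameter keeps the user-facing linalg functions backend-agnostic while each specialization works with its native matrix representation.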
►Nocl | |
CParameter | Struct Parameter for wrapping up parameters to custom OpenCL operation strings. Supports string type, C-style string type and all basic types of parameters |
►Noperations | |
Cocl_operation | Class ocl_operation for element-wise unary OpenCL operations for GPU-types (CGPUMatrix/CGPUVector) |
Csin | |
Csin< complex128_t > | |
►Nutil | |
Callocate_result | Template struct allocate_result for allocating objects of return type for element-wise operations. This generic version takes care of the vector types supported by Shogun (SGVector and CGPUVector) |
Callocate_result< SGMatrix< T >, SGMatrix< ST > > | Specialization for allocate_result when return type is SGMatrix. Works with different scalar types as well. T defines the scalar type for the operand, whereas ST is the scalar type for the result of the element-wise operation |
CBlock | Generic class Block which wraps a matrix class and contains block specific information, providing a uniform way to deal with matrix blocks for all supported backend matrices |
C_IterInfo | Struct that contains current state of the iteration for iterative linear solvers |
CAdaDeltaUpdater | The class implements the AdaDelta method. \[ \begin{array}{l} g_\theta=(1-\lambda){(\frac{ \partial f(\cdot) }{\partial \theta })}^2+\lambda g_\theta\\ d_\theta=\alpha\frac{\sqrt{s_\theta+\epsilon}}{\sqrt{g_\theta+\epsilon}}\frac{ \partial f(\cdot) }{\partial \theta }\\ s_\theta=(1-\lambda){(d_\theta)}^2+\lambda s_\theta \end{array} \] |
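A direct scalar transcription of the AdaDelta update displayed above; the variable names follow the formula, not the CAdaDeltaUpdater API:

    #include <cmath>

    // Running accumulators: g for squared gradients, s for squared updates.
    struct AdaDeltaState { double g = 0.0, s = 0.0; };

    // One AdaDelta step for a single parameter, transcribing the formula:
    // returns the update d_theta to apply to the parameter.
    double adadelta_step(AdaDeltaState& st, double grad,
                         double lambda, double alpha, double eps)
    {
        st.g = (1.0 - lambda) * grad * grad + lambda * st.g;
        double d = alpha * std::sqrt(st.s + eps) / std::sqrt(st.g + eps) * grad;
        st.s = (1.0 - lambda) * d * d + lambda * st.s;
        return d;
    }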
CAdaGradUpdater | The class implements the AdaGrad method |
CAdamUpdater | The class implements the Adam method |
CAdaptMomentumCorrection | This implements the adaptive momentum correction method |
►CAny | Allows storing objects of arbitrary types by using a BaseAnyPolicy and provides a type agnostic API. See its usage in CSGObject::Self, CSGObject::set(), CSGObject::get() and CSGObject::has(). |
CEmpty | |
CBaseAnyPolicy | An interface for a policy to store a value. Value can be any data like primitive data-types, shogun objects, etc. Policy defines how to handle this data. It works with a provided memory region and is able to set value, clear it and return the type-name as string |
CBaseTag | Base class for all tags. This class stores the name but not the type information for a shogun object. It can be used as an identifier for a shogun object where type information is not known. One application of this can be found in CSGObject::set_param_with_btag() |
CC45TreeNodeData | Structure to store data of a node of C4.5 tree. This can be used as a template type in TreeMachineNode class. Ex: C4.5 algorithm uses nodes of type CTreeMachineNode<C45TreeNodeData> |
CCAbsoluteDeviationLoss | CAbsoluteDeviationLoss implements the absolute deviation loss function. \(L(y_i,f(x_i)) = |y_i-f(x_i)|\) |
CCAccuracyMeasure | Class AccuracyMeasure used to measure accuracy of 2-class classifier |
CCAlphabet | The class Alphabet implements an alphabet and alphabet utility functions |
CCANOVAKernel | ANOVA (ANalysis Of VAriances) kernel |
CCApproxJointDiagonalizer | Class ApproxJointDiagonalizer defines an Approximate Joint Diagonalizer (AJD) interface |
CCARTreeNodeData | Structure to store data of a node of CART. This can be used as a template type in TreeMachineNode class. CART algorithm uses nodes of type CTreeMachineNode<CARTreeNodeData> |
CCAttenuatedEuclideanDistance | Class AttenuatedEuclideanDistance |
CCAttributeFeatures | Implements attributed features, that is in the simplest case a number of (attribute, value) pairs |
CCAUCKernel | The AUC kernel can be used to maximize the area under the receiver operator characteristic curve (AUC) instead of margin in SVM training |
CCAutoencoder | Represents a single layer neural autoencoder |
CCAveragedPerceptron | Class Averaged Perceptron implements the standard linear (online) algorithm. Averaged perceptron is a simple extension of the Perceptron |
CCAvgDiagKernelNormalizer | Normalize the kernel by either a constant or the average value of the diagonal elements (depending on argument c of the constructor) |
CCBaggingMachine | Bagging algorithm, i.e. bootstrap aggregating |
CCBAHSIC | Class CBAHSIC, that extends CKernelDependenceMaximization and uses HSIC [1] to compute dependence measures for feature selection using a backward elimination approach as described in [1]. This class serves as a convenience class that initializes the CDependenceMaximization::m_estimator with an instance of CHSIC and allows only the shogun::BACKWARD_ELIMINATION algorithm, which is set internally. Therefore, trying to use other algorithms via set_algorithm() will not work. Please see the class documentation of CHSIC and [2] for more details on the mathematical description of HSIC |
CCBalancedConditionalProbabilityTree | |
CCBallTree | This class implements Ball tree. The ball tree is constructed using the top-down approach. cf. ftp://ftp.icsi.berkeley.edu/pub/techreports/1989/tr-89-063.pdf |
CCBALMeasure | Class BALMeasure used to measure balanced error of 2-class classifier |
CCBaseMulticlassMachine | |
CCBesselKernel | Class Bessel kernel |
CCBinaryClassEvaluation | The class TwoClassEvaluation, a base class used to evaluate binary classification labels |
CCBinaryFile | A Binary file access class |
CCBinaryLabels | Binary Labels for binary classification |
CCBinaryStream | Memory mapped emulation via binary streams (files) |
CCBinaryTreeMachineNode | The node of the tree structure forming a TreeMachine The node contains pointer to its parent and pointers to its 2 children: left child and right child. The node also contains data which can be of any type and has to be specified using template specifier |
CCBinnedDotFeatures | The class BinnedDotFeatures contains a 0-1 conversion of features into bins |
CCBitString | String class embedding a string in a compact bit representation |
CCBrayCurtisDistance | Class Bray-Curtis distance |
CCC45ClassifierTree | Class C45ClassifierTree implements the C4.5 algorithm for decision tree learning. The algorithm steps are briefly explained below: |
CCCache | Template class Cache implements a simple cache |
CCCanberraMetric | Class CanberraMetric |
CCCanberraWordDistance | Class CanberraWordDistance |
CCCARTree | This class implements the Classification And Regression Trees algorithm by Breiman et al. for decision tree learning. A CART tree is a binary decision tree that is constructed by splitting a node into two child nodes repeatedly, beginning with the root node that contains the whole dataset. TREE GROWING PROCESS: During the tree growing process, we recursively split a node into left child and right child so that the resulting nodes are "purest". We do this until any of the stopping criteria is met. To find the best split, we scan through all possible splits in all predictive attributes. The best split is one that maximises some splitting criterion. For classification tasks, i.e. when the dependent attribute is categorical, the Gini index is used. For regression tasks, i.e. when the dependent variable is continuous, least squares deviation is used. The algorithm uses two stopping criteria: the node becomes completely "pure", i.e. all its members have an identical dependent variable, or all of them have identical predictive attributes (independent variables). |
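For illustration, the Gini index mentioned above measures node impurity from class counts, and CART selects the split with the largest impurity decrease. A hand-rolled sketch, not the CCARTree internals:

    #include <vector>

    // Gini impurity of a node: 1 - sum_k p_k^2, where p_k is the
    // fraction of the node's samples that belong to class k.
    double gini(const std::vector<int>& class_counts)
    {
        int n = 0;
        for (int c : class_counts)
            n += c;
        if (n == 0)
            return 0.0;
        double sum_sq = 0.0;
        for (int c : class_counts)
        {
            double p = static_cast<double>(c) / n;
            sum_sq += p * p;
        }
        return 1.0 - sum_sq; // 0 for a "pure" node
    }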
CCCauchyKernel | Cauchy kernel |
CCCCSOSVM | CCSOSVM |
CCCGMShiftedFamilySolver | Class that uses conjugate gradient method for solving a shifted linear system family where the linear operator is real valued and symmetric positive definite, the vector is real valued, but the shifts are complex |
CCCHAIDTree | This class implements the CHAID algorithm proposed by Kass (1980) for decision tree learning. CHAID consists of three steps: merging, splitting and stopping. A tree is grown by repeatedly using these three steps on each node starting from the root node. CHAID accepts nominal or ordinal categorical predictors only. If predictors are continuous, they have to be transformed into ordinal predictors before tree growing. CONVERTING CONTINUOUS PREDICTORS TO ORDINAL: Continuous predictors are converted to ordinal by binning. The number of bins (K) has to be supplied by the user. Given K, a predictor is split in such a way that all the bins get the same number (more or less) of distinct predictor values. The maximum feature value in each bin is used as a breakpoint. MERGING: During the merging step, allowable pairs of categories of a predictor are evaluated for similarity. If the similarity of a pair is above a threshold, the categories constituting the pair are merged into a single category. The process is repeated until there is no pair left having high similarity between its categories. Similarity between categories is evaluated using the p_value. SPLITTING: The splitting step selects which predictor is to be used to best split the node. Selection is accomplished by comparing the adjusted p_value associated with each predictor. The predictor that has the smallest adjusted p_value is chosen for splitting the node. STOPPING: The tree growing process stops if any of the following conditions is satisfied: |
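The binning step described above amounts to equal-frequency binning; a short sketch (illustrative only, assumes at least K values, and bins raw values rather than distinct ones):

    #include <algorithm>
    #include <vector>

    // Split a continuous predictor into K ordinal bins of (roughly)
    // equal frequency; the max value of each bin is its breakpoint.
    std::vector<double> breakpoints(std::vector<double> values, int K)
    {
        std::sort(values.begin(), values.end());
        std::vector<double> breaks;
        const std::size_t n = values.size();
        for (int k = 1; k <= K; ++k)
            breaks.push_back(values[k * n / K - 1]); // max value in bin k
        return breaks;
    }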
CCChebyshewMetric | Class ChebyshewMetric |
CCChi2Kernel | The Chi2 kernel operating on realvalued vectors computes the chi-squared distance between sets of histograms |
CCChiSquareDistance | Class ChiSquareDistance |
CCCircularBuffer | Implementation of a circular buffer. The buffer has the logical structure of a queue (FIFO), but the queue is cyclic: like a tape whose ends are connected, except that instead of a tape there is a block of physical memory. So, if you push a big block of data, it may end up located both at the end and at the beginning of the buffer's memory |
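A minimal sketch of the wrap-around push described above (not the CCircularBuffer API):

    #include <cstring>
    #include <vector>

    // Ring buffer: a pushed block may land partly at the end and
    // partly at the beginning of the underlying memory.
    struct Ring
    {
        std::vector<char> mem;
        std::size_t head = 0; // next write position
        explicit Ring(std::size_t n) : mem(n) {}
        void push(const char* src, std::size_t len) // assumes len <= mem.size()
        {
            std::size_t tail_room = mem.size() - head;
            std::size_t first = len < tail_room ? len : tail_room;
            std::memcpy(&mem[head], src, first);
            std::memcpy(&mem[0], src + first, len - first); // wrapped part
            head = (head + len) % mem.size();
        }
    };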
CCCircularKernel | Circular kernel |
CCClusteringAccuracy | Clustering accuracy |
CCClusteringEvaluation | The base class used to evaluate clustering |
CCClusteringMutualInformation | Clustering (normalized) mutual information |
CCCombinationRule | CombinationRule abstract class The CombinationRule defines an interface to how to combine the classification or regression outputs of an ensemble of Machines |
CCCombinedDotFeatures | Features that allow stacking of a number of DotFeatures |
CCCombinedFeatures | The class CombinedFeatures is used to combine a number of feature objects into a single CombinedFeatures object |
CCCombinedKernel | The Combined kernel is used to combine a number of kernels into a single CombinedKernel object by linear combination |
CCCommUlongStringKernel | The CommUlongString kernel may be used to compute the spectrum kernel from strings that have been mapped into unsigned 64bit integers |
CCCommWordStringKernel | The CommWordString kernel may be used to compute the spectrum kernel from strings that have been mapped into unsigned 16bit integers |
CCCompressor | Compression library for compressing and decompressing buffers using one of the standard compression algorithms: |
CCConditionalProbabilityTree | |
CCConjugateGradientSolver | Class that uses the conjugate gradient method for solving a linear system involving a real valued linear operator and vector. Useful for large sparse systems involving sparse symmetric and positive-definite matrices |
CCConjugateOrthogonalCGSolver | Class that uses the conjugate orthogonal conjugate gradient method for solving a linear system involving a complex valued linear operator and vector. Useful for large sparse systems involving sparse symmetric matrices that are not Hermitian |
CCConstKernel | The Constant Kernel returns a constant for all elements |
CCConstMean | The Const mean function class |
CCContingencyTableEvaluation | The class ContingencyTableEvaluation, a base class used to evaluate 2-class classification with TP, FP, TN, FN rates |
CCConverter | Class Converter used to convert data |
CCConvolutionalFeatureMap | Handles convolution and gradient calculation for a single feature map in a convolutional neural network |
CCCosineDistance | Class CosineDistance |
CCCplex | Class CCplex to encapsulate access to the commercial cplex general purpose optimizer |
CCCPLEXSVM | CplexSVM a SVM solver implementation based on cplex (unfinished) |
CCCrossCorrelationMeasure | Class CrossCorrelationMeasure used to measure cross correlation coefficient of 2-class classifier |
CCCrossValidation | Base class for cross-validation evaluation. Given a learning machine, a splitting strategy, an evaluation criterion, features and corresponding labels, this provides an interface for cross-validation. Results may be retrieved using the evaluate method. A number of repetitions may be specified for obtaining more accurate results. The arithmetic mean and standard deviation of different runs are returned. The default number of runs is one |
CCCrossValidationMKLStorage | Class for storing MKL weights in every fold of cross-validation |
CCCrossValidationMulticlassStorage | Class for storing multiclass evaluation information in every fold of cross-validation |
CCCrossValidationOutput | Class for managing individual folds in cross-validation |
CCCrossValidationPrintOutput | Class for outputting cross-validation intermediate results to the standard output. Simply prints all messages it gets |
CCCrossValidationResult | Type to encapsulate the results of an evaluation run |
CCCrossValidationSplitting | Implementation of normal cross-validation on the base of CSplittingStrategy. Produces subset index sets of equal size (at most one difference) |
CCCSVFile | Class CSVFile used to read data from comma-separated values (CSV) files. See http://en.wikipedia.org/wiki/Comma-separated_values |
CCCustomDistance | The Custom Distance allows for custom user provided distance matrices |
CCCustomKernel | The Custom Kernel allows for custom user provided kernel matrices |
CCCustomMahalanobisDistance | Class CustomMahalanobisDistance used to compute the distance between feature vectors \( \vec{x_i} \) and \( \vec{x_j} \) as \( (\vec{x_i} - \vec{x_j})^T \mathbf{M} (\vec{x_i} - \vec{x_j}) \), given the matrix \( \mathbf{M} \) which will be referred to as Mahalanobis matrix |
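The quadratic form above can be evaluated with two plain loops; a sketch using standard containers rather than Shogun types:

    #include <cstddef>
    #include <vector>

    // Computes (x_i - x_j)^T M (x_i - x_j) for a D x D matrix M.
    double custom_mahalanobis(const std::vector<double>& xi,
                              const std::vector<double>& xj,
                              const std::vector<std::vector<double>>& M)
    {
        const std::size_t D = xi.size();
        double result = 0.0;
        for (std::size_t a = 0; a < D; ++a)
            for (std::size_t b = 0; b < D; ++b)
                result += (xi[a] - xj[a]) * M[a][b] * (xi[b] - xj[b]);
        return result;
    }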
CCData | Dummy data holder |
CCDataGenerator | Class that is able to generate various data samples, which may be used for examples in SHOGUN |
CCDecompressString | Preprocessor that decompresses compressed strings |
CCDeepAutoencoder | Represents a multi-layer autoencoder |
CCDeepBeliefNetwork | A Deep Belief Network |
CCDelimiterTokenizer | The class CDelimiterTokenizer is used to tokenize an SGVector<char> into tokens using custom chars as delimiters. One can choose the delimiters to use by setting the appropriate index of the public field delimiters to 1. E.g. to set the character ':' as a delimiter, one should do: tokenizer->delimiters[':'] = 1; |
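A hedged usage sketch following the pattern above; the method names come from the tokenizer interface, but treat the exact signatures as assumptions:

    #include <shogun/base/init.h>
    #include <shogun/lib/DelimiterTokenizer.h>
    #include <shogun/lib/SGVector.h>
    using namespace shogun;

    int main()
    {
        init_shogun_with_defaults();
        char text[] = "a:b:c";
        SGVector<char> t(text, 5, false); // don't let SGVector free stack memory
        CDelimiterTokenizer* tokenizer = new CDelimiterTokenizer();
        SG_REF(tokenizer);
        tokenizer->delimiters[':'] = 1;   // as in the description above
        tokenizer->set_text(t);
        index_t start = 0;
        while (tokenizer->has_next())
        {
            index_t end = tokenizer->next_token_idx(start);
            // the current token is t[start, end)
        }
        SG_UNREF(tokenizer);
        exit_shogun();
        return 0;
    }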
CCDenseDistance | Template class DenseDistance |
CCDenseExactLogJob | Class that represents the job of applying the log of a CDenseMatrixOperator on a real vector |
CCDenseFeatures | The class DenseFeatures implements dense feature matrices |
CCDenseLabels | Dense integer or floating point labels |
CCDenseMatrixExactLog | Class that generates jobs for computing logarithm of a dense matrix linear operator |
CCDenseMatrixOperator | Class that represents a dense-matrix linear operator. It computes matrix-vector product \(Ax\) in its apply method, \(A\in\mathbb{C}^{m\times n},A:\mathbb{C}^{n}\rightarrow \mathbb{C}^{m}\) being the matrix operator and \(x\in\mathbb{C}^{n}\) being the vector. The result is a vector \(y\in\mathbb{C}^{m}\) |
CCDensePreprocessor | Template class DensePreprocessor, base class for preprocessors (cf. CPreprocessor) that apply to CDenseFeatures (i.e. rectangular dense matrices) |
CCDenseSubSamplesFeatures | |
CCDenseSubsetFeatures | |
CCDependenceMaximization | Class CDependenceMaximization, base class for all feature selection preprocessors which select a subset of features that shows maximum dependence between the features and the labels. This is done via an implementation of CIndependenceTest, m_estimator inside compute_measures() (see class documentation of CFeatureSelection), which performs a statistical test for a given feature \(\mathbf{X}_i\) from the set of features \(\mathbf{X}\), and the labels \(\mathbf{Y}\). The test checks \[ \textbf{H}_0 : P\left(\mathbf{X}\setminus \mathbf{X}_i, \mathbf{Y}\right) =P\left(\mathbf{X}\setminus \mathbf{X}_i\right)P\left(\mathbf{Y}\right) \] The test statistic is then used as a measure which signifies the independence between the rest of the features and the labels - the higher the value of the test statistic, the greater the dependence between the rest of the features and the class labels, and therefore the less significant the current feature becomes. Therefore, the highest scoring features are removed. The removal policy thus can only be shogun::N_LARGEST and shogun::PERCENTILE_LARGEST and it can be set via a set_policy() call. The remove_feats() method handles the removal of features based on the specified policy |
CCDiagKernel | The Diagonal Kernel returns a constant for the diagonal and zero otherwise |
CCDiceKernelNormalizer | DiceKernelNormalizer performs kernel normalization inspired by the Dice coefficient (see http://en.wikipedia.org/wiki/Dice's_coefficient) |
CCDifferentiableFunction | An abstract class that describes a differentiable function used for GradientEvaluation |
CCDiffusionMaps | Class DiffusionMaps used to preprocess given data using Diffusion Maps dimensionality reduction technique as described in |
CCDimensionReductionPreprocessor | Class DimensionReductionPreprocessor, a base class for preprocessors used to lower the dimensionality of given simple features (dense matrices) |
CCDirectEigenSolver | Class that computes eigenvalues of a real valued, self-adjoint dense matrix linear operator using Eigen3 |
CCDirectLinearSolverComplex | Class that provides a solve method for complex dense-matrix linear systems |
CCDirectSparseLinearSolver | Class that provides a solve method for real sparse-matrix linear systems using LLT |
CCDiscreteDistribution | This is the base interface class for all discrete distributions |
CCDisjointSet | Class CDisjointSet, a data structure for linking graph nodes. It makes it easy to identify connected graphs, acyclic graphs, roots of forests, etc. Please refer to http://en.wikipedia.org/wiki/Disjoint-set_data_structure |
CCDistance | Class Distance, a base class for all the distances used in the Shogun toolbox |
CCDistanceKernel | The Distance kernel takes a distance as input |
CCDistanceMachine | A generic DistanceMachine interface |
CCDistantSegmentsKernel | The distant segments kernel is a string kernel, which counts the number of substrings, so-called segments, at a certain distance from each other |
CCDistribution | Base class Distribution from which all methods implementing a distribution are derived |
CCDixonQTestRejectionStrategy | Simplified version of Dixon's Q test outlier based rejection strategy. Statistic values are taken from http://www.vias.org/tmdatanaleng/cc_outlier_tests_dixon.html |
CCDomainAdaptationMulticlassLibLinear | Domain adaptation multiclass LibLinear wrapper. Source domain is assumed to be |
CCDomainAdaptationSVM | Class DomainAdaptationSVM |
CCDomainAdaptationSVMLinear | Class DomainAdaptationSVMLinear |
CCDotFeatures | Features that support dot products among other operations |
CCDotKernel | Template class DotKernel is the base class for kernels working on DotFeatures |
CCDualVariationalGaussianLikelihood | Class that models dual variational likelihood |
CCDummyFeatures | The class DummyFeatures implements features that only know the number of feature objects (but don't actually contain any) |
CCDynamicArray | Template Dynamic array class that creates an array that can be used like a list or an array |
CCDynamicObjectArray | Dynamic array class for CSGObject pointers that creates an array that can be used like a list or an array |
CCDynInt | Integer type of dynamic size |
CCDynProg | Dynamic Programming Class |
CCECOCAEDDecoder | |
CCECOCDecoder | |
CCECOCDiscriminantEncoder | |
CCECOCEDDecoder | |
CCECOCEncoder | ECOCEncoder produce an ECOC codebook |
CCECOCForestEncoder | |
CCECOCHDDecoder | |
CCECOCIHDDecoder | |
CCECOCLLBDecoder | |
CCECOCOVOEncoder | |
CCECOCOVREncoder | |
CCECOCRandomDenseEncoder | |
CCECOCRandomSparseEncoder | |
CCECOCSimpleDecoder | |
CCECOCStrategy | |
CCECOCUtil | |
CCEigenSolver | Abstract base class that provides an abstract compute method for computing eigenvalues of a real valued, self-adjoint linear operator. It also provides method for getting min and max eigenvalues |
CCEMBase | This is the base class for Expectation Maximization (EM). EM for various purposes can be derived from this base class. This is a template class having a template member called data which can be used to store all parameters used and results calculated by the expectation and maximization steps of EM |
CCEmbeddingConverter | Class EmbeddingConverter (part of the Efficient Dimensionality Reduction Toolkit) used to construct embeddings of features, e.g. construct dense numeric embedding of string features |
CCEMMixtureModel | This is the implementation of EM specialized for Mixture models |
CCEPInferenceMethod | Class of the Expectation Propagation (EP) posterior approximation inference method |
CCErrorRateMeasure | Class ErrorRateMeasure used to measure error rate of 2-class classifier |
CCEuclideanDistance | Class EuclideanDistance |
CCEvaluation | Class Evaluation, a base class for other classes used to evaluate labels, e.g. accuracy of classification or mean squared error of regression |
CCEvaluationResult | Abstract class that contains the result generated by the MachineEvaluation class |
CCExactInferenceMethod | The Gaussian exact form inference method class |
CCExplicitSpecFeatures | Features that compute the Spectrum Kernel feature space explicitly |
CCExponentialARDKernel | Exponential Kernel with Automatic Relevance Detection computed on CDotFeatures |
CCExponentialKernel | The Exponential Kernel, closely related to the Gaussian Kernel computed on CDotFeatures |
CCExponentialLoss | CExponentialLoss implements the exponential loss function. \(L(y_i,f(x_i)) = \exp(-y_i f(x_i))\) |
CCF1Measure | Class F1Measure used to measure F1 score of 2-class classifier |
CCFactor | Class CFactor. A factor is defined on a clique in the factor graph. Each factor can have its own data, either dense, sparse or shared data. Note that currently this class is table factor oriented |
CCFactorAnalysis | The Factor Analysis class is used to embed data using Factor Analysis algorithm |
CCFactorDataSource | Class CFactorDataSource, a source for factor data. In some cases, the same data can be shared by many factors |
CCFactorGraph | Class CFactorGraph. A factor graph is a structured input in general |
CCFactorGraphDataGenerator | Class CFactorGraphDataGenerator creates factor graph data for multiple unit tests |
CCFactorGraphFeatures | CFactorGraphFeatures maintains an array of factor graphs, each graph is a sample, i.e. an instance of structured input |
CCFactorGraphLabels | Class FactorGraphLabels used e.g. in the application of Structured Output (SO) learning with the FactorGraphModel. Each of the labels is represented by a graph. Each label is of type CFactorGraphObservation and all of them are stored in a CDynamicObjectArray |
CCFactorGraphModel | CFactorGraphModel defines a model in terms of CFactorGraph and CMAPInference, where parameters are associated with factor types in the model. There is a mapping vector that records the locations of local factor parameters in the global parameter vector |
CCFactorGraphObservation | Class CFactorGraphObservation is used as the structured output |
CCFactorType | Class CFactorType defines the way of factor parameterization |
CCFastICA | Class FastICA |
CCFeatures | The class Features is the base class of all feature objects |
CCFeatureSelection | Template class CFeatureSelection, base class for all feature selection preprocessors which select a subset of features (dimensions in the feature matrix) to achieve a specified number of dimensions, m_target_dim from a given set of features. This class showcases all feature selection algorithms via a generic interface. Supported algorithms are specified by the enum EFeatureSelectionAlgorithm which can be set via set_algorithm() call. Supported wrapper algorithms are |
CCFFDiag | Class FFDiag |
CCFFSep | Class FFSep |
CCFile | A File access base class |
CCFirstElementKernelNormalizer | Normalize the kernel by a constant obtained from the first element of the kernel matrix, i.e. \( c=k({\bf x},{\bf x})\) |
CCFisherLDA | Preprocessor FisherLDA attempts to model the difference between the classes of data by performing linear discriminant analysis on input feature vectors/matrices. When the init method in FisherLDA is called with a proper feature matrix X (say N vectors and D feature dimensions) supplied via the apply_to_feature_matrix or apply_to_feature_vector methods, this creates a transformation whose outputs are the reduced T-dimensional & class-specific distribution (where T <= number of unique classes - 1). The transformation matrix is essentially a DxT matrix, the columns of which correspond to the specified number of eigenvectors which maximize the ratio of the between-class matrix to the within-class matrix |
CCFITCInferenceMethod | The Fully Independent Conditional Training inference method class |
CCFixedDegreeStringKernel | The FixedDegree String kernel takes as input two strings of same size and counts the number of matches of length d |
CCFKFeatures | The class FKFeatures implements Fisher kernel features obtained from two Hidden Markov models |
CCFunction | Class of a function of one variable |
CCFWSOSVM | Class CFWSOSVM solves SOSVM using Frank-Wolfe algorithm [1] |
CCGaussian | Gaussian distribution interface |
CCGaussianARDKernel | Gaussian Kernel with Automatic Relevance Detection computed on CDotFeatures |
CCGaussianARDSparseKernel | Gaussian Kernel with Automatic Relevance Detection with supporting Sparse inference |
CCGaussianBlobsDataGenerator | |
CCGaussianCompactKernel | The compact version as given in Bart Hamers' thesis Kernel Models for Large Scale Applications (Eq. 4.10) is computed as |
CCGaussianDistribution | Dense version of the well-known Gaussian probability distribution, defined as \[ \mathcal{N}_x(\mu,\Sigma)= \frac{1}{\sqrt{|2\pi\Sigma|}} \exp\left(-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right) \] |
CCGaussianKernel | The well known Gaussian kernel (swiss army knife for SVMs) computed on CDotFeatures |
CCGaussianLikelihood | Class that models Gaussian likelihood |
CCGaussianMatchStringKernel | The class GaussianMatchStringKernel computes a variant of the Gaussian kernel on strings of same length |
CCGaussianNaiveBayes | Class GaussianNaiveBayes, a Gaussian Naive Bayes classifier |
CCGaussianProcessClassification | Class GaussianProcessClassification implements binary and multiclass classification based on Gaussian Processes |
CCGaussianProcessMachine | A base class for Gaussian Processes |
CCGaussianProcessRegression | Class GaussianProcessRegression implements regression based on Gaussian Processes |
CCGaussianShiftKernel | An experimental kernel inspired by the WeightedDegreePositionStringKernel and the Gaussian kernel |
CCGaussianShortRealKernel | The well known Gaussian kernel (swiss army knife for SVMs) on dense short-real valued features |
CCGCArray | Template class GCArray implements a garbage collecting static array |
►CCGEMPLP | |
CParameter | |
CCGeodesicMetric | Class GeodesicMetric |
CCGMM | Gaussian Mixture Model interface |
CCGMNPLib | Class GMNPLib Library of solvers for Generalized Minimal Norm Problem (GMNP) |
CCGMNPSVM | Class GMNPSVM implements a one vs. rest MultiClass SVM |
CCGNPPLib | Class GNPPLib, a Library of solvers for Generalized Nearest Point Problem (GNPP) |
CCGNPPSVM | Class GNPPSVM |
CCGradientCriterion | Simple class which specifies the direction of gradient search |
CCGradientEvaluation | Class evaluates a machine using its associated differentiable function for the function value and its gradient with respect to parameters |
CCGradientResult | Container class that returns results from GradientEvaluation. It contains the function value as well as its gradient |
CCGraphCut | |
CCGridSearchModelSelection | Model selection class which searches for the best model by a grid-search. See CModelSelection for details |
CCGUIClassifier | UI classifier |
CCGUIConverter | UI converter |
CCGUIDistance | UI distance |
CCGUIFeatures | UI features |
CCGUIHMM | UI HMM (Hidden Markov Model) |
CCGUIKernel | UI kernel |
CCGUILabels | UI labels |
CCGUIMath | UI math |
CCGUIPluginEstimate | UI estimate |
CCGUIPreprocessor | UI preprocessor |
CCGUIStructure | UI structure |
CCGUITime | UI time |
CCHAIDTreeNodeData | Structure to store data of a node of CHAID. This can be used as a template type in TreeMachineNode class. CHAID algorithm uses nodes of type CTreeMachineNode<CHAIDTreeNodeData> |
CCHammingWordDistance | Class HammingWordDistance |
CCHash | Collection of Hashing Functions |
CCHashedDenseFeatures | This class is identical to the CDenseFeatures class except that it hashes each dimension to a new feature space |
CCHashedDocConverter | This class can be used to convert a document collection contained in a CStringFeatures<char> object where each document is stored as a single vector into a hashed Bag-of-Words representation. Like in the standard Bag-of-Words representation, this class considers each document as a collection of tokens, which are then hashed into a new feature space of a specified dimension. This class is very flexible and allows the user to specify the tokenizer used to tokenize each document, to specify whether the results should be normalized with regards to the sqrt of the document size, as well as to specify whether different tokens should be combined. The latter implements a k-skip n-grams approach, meaning that you can combine up to n tokens, while skipping up to k. E.g. for the tokens ["a", "b", "c", "d"], with n_grams = 2 and skips = 2, one would get the following combinations : ["a", "ab", "ac" (skipped 1), "ad" (skipped 2), "b", "bc", "bd" (skipped 1), "c", "cd", "d"] |
CCHashedDocDotFeatures | This class can be used to provide on-the-fly vectorization of a document collection. Like in the standard Bag-of-Words representation, this class considers each document as a collection of tokens, which are then hashed into a new feature space of a specified dimension. This class is very flexible and allows the user to specify the tokenizer used to tokenize each document, to specify whether the results should be normalized with regards to the sqrt of the document size, as well as to specify whether different tokens should be combined. The latter implements a k-skip n-grams approach, meaning that you can combine up to n tokens, while skipping up to k. E.g. for the tokens ["a", "b", "c", "d"], with n_grams = 2 and skips = 2, one would get the following combinations : ["a", "ab", "ac" (skipped 1), "ad" (skipped 2), "b", "bc", "bd" (skipped 1), "c", "cd", "d"] |
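The k-skip n-grams combination rule described in the two entries above can be reproduced for the n_grams = 2, skips = 2 case with a short sketch (combination step only, without the hashing):

    #include <iostream>
    #include <string>
    #include <vector>

    // Every token alone, plus every pair whose gap is at most `skips`.
    std::vector<std::string> skip_bigrams(const std::vector<std::string>& tok,
                                          std::size_t skips)
    {
        std::vector<std::string> out;
        for (std::size_t i = 0; i < tok.size(); ++i)
        {
            out.push_back(tok[i]);
            for (std::size_t s = 0; s <= skips && i + 1 + s < tok.size(); ++s)
                out.push_back(tok[i] + tok[i + 1 + s]);
        }
        return out;
    }

    int main()
    {
        for (const auto& c : skip_bigrams({"a", "b", "c", "d"}, 2))
            std::cout << c << ' '; // a ab ac ad b bc bd c cd d
    }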
CCHashedMultilabelModel | Class CHashedMultilabelModel represents application specific model and contains application dependent logic for solving multilabel classification with feature hashing within a generic SO framework. We hash the feature of each class with a separate seed and put them in the same feature space (exploded feature space) |
CCHashedSparseFeatures | This class is identical to the CDenseFeatures class except that it hashes each dimension to a new feature space |
CCHashedWDFeatures | Features that compute the Weighted Degree Kernel feature space explicitly |
CCHashedWDFeaturesTransposed | Features that compute the Weighted Degree Kernel feature space explicitly |
CCHessianLocallyLinearEmbedding | Class HessianLocallyLinearEmbedding used to preprocess data using Hessian Locally Linear Embedding algorithm as described in |
CCHierarchical | Agglomerative hierarchical single linkage clustering |
CCHierarchicalMultilabelModel | Class CHierarchicalMultilabelModel represents application specific model and contains application dependent logic for solving hierarchical multilabel classification[1] within a generic SO framework |
CCHingeLoss | CHingeLoss implements the hinge loss function |
CCHistogram | Class Histogram computes a histogram over all 16bit unsigned integers in the features |
CCHistogramIntersectionKernel | The HistogramIntersection kernel operating on realvalued vectors computes the histogram intersection distance between sets of histograms. Note: the current implementation assumes positive values for the histograms, and input vectors should sum to 1 |
CCHistogramWordStringKernel | The HistogramWordString computes the TOP kernel on inhomogeneous Markov Chains |
CCHMM | Hidden Markov Model |
CCHMSVMModel | Class CHMSVMModel that represents the application specific model and contains the application dependent logic to solve Hidden Markov Support Vector Machines (HM-SVM) type of problems within a generic SO framework |
CCHomogeneousKernelMap | Preprocessor HomogeneousKernelMap performs homogeneous kernel maps as described in |
CCHSIC | This class implements the Hilbert-Schmidt Independence Criterion based independence test as described in [1] |
CCHuberLoss | CHuberLoss implements the Huber loss function. It behaves like SquaredLoss function at values below Huber delta and like absolute deviation at values greater than the delta |
CCHypothesisTest | Hypothesis test base class. Provides an interface for statistical hypothesis testing via three methods: compute_statistic(), compute_p_value() and compute_threshold(). The second computes a p-value for the statistic computed by the first method. The p-value represents the position of the statistic in the null-distribution, i.e. the distribution of the statistic population given the null-hypothesis is true. (1-position = p-value). The third method, compute_threshold(), computes a threshold for a given test level which is needed to reject the null-hypothesis |
CCICAConverter | Class ICAConverter Base class for ICA algorithms |
CCID3ClassifierTree | Class ID3ClassifierTree, implements classifier tree for discrete feature values using the ID3 algorithm. The training algorithm implemented is as follows : |
CCIdentityKernelNormalizer | Identity Kernel Normalization, i.e. no normalization is applied |
CCImplicitWeightedSpecFeatures | Features that compute the Weighted Spectrum Kernel feature space explicitly |
CCIndependenceTest | Provides an interface for performing the independence test. Given samples \(Z=\{(x_i,y_i)\}_{i=1}^m\) from the joint distribution \(\textbf{P}_{xy}\), does the joint distribution factorize as \(\textbf{P}_{xy}=\textbf{P}_x\textbf{P}_y\), i.e. product of the marginals? The null-hypothesis says yes, i.e. no dependence, the alternative hypothesis says no |
CCIndependentComputationEngine | Abstract base class for solving multiple independent instances of CIndependentJob. It has one method, submit_job, which may add the job to an internal queue and might block if there is not yet space in the queue. After jobs are submitted, they might not be finished immediately; wait_for_all waits until all jobs are completed and must be called to guarantee that all jobs are finished |
CCIndependentJob | Abstract base for general computation jobs to be registered in CIndependentComputationEngine. compute method produces a job result and submits it to the internal JobResultAggregator. Each set of jobs that form a result will share the same job result aggregator |
CCIndexBlock | Class IndexBlock used to represent contiguous indices of one group (e.g. block of related features) |
CCIndexBlockGroup | Class IndexBlockGroup used to represent group-based feature relation |
CCIndexBlockRelation | Class IndexBlockRelation |
CCIndexBlockTree | Class IndexBlockTree used to represent tree guided feature relation |
CCIndexFeatures | The class IndexFeatures implements features that contain the index of the features. These features are used in CCustomKernel::init to select a subset of the kernel matrix. Initialize a CIndexFeatures object with row_idx and another with col_idx, pass them to CCustomKernel::init(row_idx, col_idx), then CCustomKernel::get_kernel_matrix() will return the sub kernel matrix specified by row_idx and col_idx |
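A hedged sketch of the flow just described, assuming an already initialized CCustomKernel* kernel; signatures follow the documented interface but should be treated as assumptions:

    #include <shogun/features/IndexFeatures.h>
    #include <shogun/kernel/CustomKernel.h>
    using namespace shogun;

    // Returns the sub kernel matrix selected by the given row/column indices.
    SGMatrix<float64_t> sub_kernel_matrix(CCustomKernel* kernel,
                                          SGVector<index_t> rows,
                                          SGVector<index_t> cols)
    {
        CIndexFeatures* row_idx = new CIndexFeatures(rows);
        CIndexFeatures* col_idx = new CIndexFeatures(cols);
        kernel->init(row_idx, col_idx);     // subset rows and columns
        return kernel->get_kernel_matrix(); // the selected sub matrix
    }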
CCIndirectObject | Array class that accesses elements indirectly via an index array |
CCIndividualJobResultAggregator | Class that aggregates vector job results in each submit_result call of jobs generated from rational approximation of linear operator function times a vector. finalize extracts the imaginary part of that aggregation, applies the linear operator to the aggregation, performs a dot product with the sample vector, multiplies with the constant multiplier (see CRationalApproximation) and stores the result as CScalarResult |
CCInference | The Inference Method base class |
CCIntegration | Class that contains certain methods related to numerical integration |
CCIntronList | Class IntronList |
CCInverseMultiQuadricKernel | InverseMultiQuadricKernel |
CCIOBuffer | An I/O buffer class |
CCIsomap | The Isomap class is used to embed data using Isomap algorithm as described in: |
CCIterativeLinearSolver | Abstract template base for all iterative linear solvers such as conjugate gradient (CG) solvers. Provides an interface for setting the iteration limit and the relative/absolute tolerance. The solve method is abstract |
CCIterativeShiftedLinearFamilySolver | Abstract template base for CG based solvers to the solution of shifted linear systems of the form \((A+\sigma)x=b\) for several values of \(\sigma\) simultaneously, using only as many matrix-vector operations as the solution of a single system requires. This class adds another interface to the basic iterative linear solver that takes the shifts, \(\sigma\), and also weights, \(\alpha\), and returns the summation \(\sum_{i} \alpha_{i}x_{i}\), where \(x_{i}\) is the solution of the system \((A+\sigma_{i})x_{i}=b\) |
CCJacobiEllipticFunctions | Class that contains methods for computing Jacobi elliptic functions related to complex analysis. These functions are the inverse of the elliptic integral of the first kind, i.e. \[ u(k,m)=\int_{0}^{k}\frac{dt}{\sqrt{(1-t^{2})(1-m^{2}t^{2})}} =\int_{0}^{\varphi}\frac{d\theta}{\sqrt{(1-m^{2}sin^{2}\theta)}} \] where \(k=sin\varphi\), \(t=sin\theta\) and the parameter \(m, 0\le m \le 1\) is called the modulus. The three main Jacobi elliptic functions are defined as \(sn(u,m)=k=sin\varphi\), \(cn(u,m)=cos\varphi=\sqrt{1-sn(u,m)^{2}}\) and \(dn(u,m)=\sqrt{1-m^{2}sn(u,m)^{2}}\). For \(k=1\), i.e. \(\varphi=\frac{\pi}{2}\), \(u(1,m)=K(m)\) is known as the complete elliptic integral of the first kind. Similarly, \(u(1,m')=K'(m')\), \(m'=\sqrt{1-m^{2}}\) is called the complementary complete elliptic integral of the first kind. Jacobi functions are doubly periodic with quarter periods \(K\) and \(K'\) |
CCJade | Class Jade |
CCJADiag | Class JADiag |
CCJADiagOrth | Class JADiagOrth |
CCJediDiag | Class Jedi |
CCJediSep | Class JediSep |
CCJensenMetric | Class JensenMetric |
CCJensenShannonKernel | The Jensen-Shannon kernel operating on real-valued vectors computes the Jensen-Shannon distance between the features. Often used in computer vision |
CCJLCoverTreePoint | Class Point to use with John Langford's CoverTree. This class must have some associated functions defined (distance, parse_points and print, see below) so it can be used with the CoverTree implementation |
CCJobResult | Base class that stores the result of an independent job |
CCJobResultAggregator | Abstract base class that provides an interface for computing an aggregation of the job results of independent computation jobs as they are submitted and also for finalizing the aggregation |
CCKDTree | This class implements KD-Tree. cf. http://www.autonlab.org/autonweb/14665/version/2/part/5/data/moore-tutorial.pdf |
CCKernel | The Kernel base class |
CCKernelDensity | This class implements the kernel density estimation technique. Kernel density estimation is a non-parametric way to estimate an unknown pdf. The pdf at a query point given finite training samples is calculated using the following formula: \(pdf(x')= \frac{1}{nh} \sum_{i=1}^n K(\frac{||x'-x_i||}{h})\). K() in the above formula is called the kernel function and is controlled by the parameter h, called the kernel bandwidth. Presently, this class supports only the Gaussian kernel, which can be used with either Euclidean distance or Manhattan distance. This class makes use of 2 tree structures, KD-tree and ball tree, for fast calculation. KD-trees are faster than ball trees at lower dimensions. In case of high dimensional data, ball tree tends to outperform KD-tree. By default, the tree used is the ball tree |
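A direct transcription of the pdf formula above with a Gaussian kernel and Euclidean distance, omitting the KD-/ball-tree speedups (the kernel is left unnormalised, as in the generic formula):

    #include <cmath>
    #include <vector>

    // pdf(x') = 1/(n*h) * sum_i K(||x' - x_i|| / h), with K(u) = exp(-u^2/2).
    double kde(const std::vector<std::vector<double>>& X,
               const std::vector<double>& query, double h)
    {
        double sum = 0.0;
        for (const auto& xi : X)
        {
            double dist2 = 0.0;
            for (std::size_t d = 0; d < xi.size(); ++d)
                dist2 += (query[d] - xi[d]) * (query[d] - xi[d]);
            double u = std::sqrt(dist2) / h;
            sum += std::exp(-0.5 * u * u);
        }
        return sum / (static_cast<double>(X.size()) * h);
    }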
CCKernelDependenceMaximization | Class CKernelDependenceMaximization, that uses an implementation of CKernelIndependenceTest to compute dependence measures for feature selection. Different kernels are used for labels and data. For the sake of computational convenience, the precompute() method is overridden to precompute the kernel for labels and save as an instance of CCustomKernel |
CCKernelDistance | The Kernel distance takes a distance as input |
CCKernelIndependenceTest | Kernel independence test base class. Provides an interface for performing an independence test. Given samples \(Z=\{(x_i,y_i)\}_{i=1}^m\) from the joint distribution \(\textbf{P}_{xy}\), does the joint distribution factorize as \(\textbf{P}_{xy}=\textbf{P}_x\textbf{P}_y\), i.e. product of the marginals? |
CCKernelLocallyLinearEmbedding | Class KernelLocallyLinearEmbedding used to construct embeddings of data using kernel formulation of Locally Linear Embedding algorithm as described in |
CCKernelMachine | A generic KernelMachine interface |
CCKernelMulticlassMachine | Generic kernel multiclass |
CCKernelNormalizer | The class Kernel Normalizer defines a function to post-process kernel values |
CCKernelPCA | Preprocessor KernelPCA performs kernel principal component analysis |
CCKernelRidgeRegression | Class KernelRidgeRegression implements Kernel Ridge Regression - a regularized least square method for classification and regression |
CCKernelSelection | Base class for kernel selection for kernel two-sample test statistic implementations (e.g. MMD). Provides abstract methods for selecting kernels and computing criteria or kernel weights for the implemented method. In order to implement new methods for kernel selection, simply write a new implementation of this class |
CCKernelStructuredOutputMachine | |
CCKernelTwoSampleTest | Kernel two sample test base class. Provides an interface for performing a two-sample test using a kernel, i.e. Given samples from two distributions \(p\) and \(q\), the null-hypothesis is: \(H_0: p=q\), the alternative hypothesis: \(H_1: p\neq q\) |
CCKLCholeskyInferenceMethod | The KL approximation inference method class |
CCKLCovarianceInferenceMethod | The KL approximation inference method class |
CCKLDiagonalInferenceMethod | The KL approximation inference method class |
CCKLDualInferenceMethod | The dual KL approximation inference method class |
CCKLDualInferenceMethodMinimizer | Build-in minimizer for KLDualInference |
CCKLInference | The KL approximation inference method class |
CCKLLowerTriangularInference | The KL approximation inference method class |
CCKMeans | KMeans clustering, partitions the data into k (a-priori specified) clusters |
CCKMeansBase | |
CCKMeansMiniBatch | |
CCKNN | Class KNN, an implementation of the standard k-nearest neighbor classifier |
CCKNNHeap | This class implements a specialized version of max heap structure. This heap specializes in storing the least k values seen so far along with the indices (or ids) of the entities with which the values are associated. On calling the push method, it is automatically checked if the new value supplied is among the least k distances seen so far. Also, in case the heap is already full, the max among the stored values is automatically thrown out as the new value finds its proper place in the heap |
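The keep-the-k-smallest behaviour can be sketched with std::priority_queue (a max heap), mirroring the description above (not the CKNNHeap implementation):

    #include <queue>
    #include <utility>

    // Keeps the k smallest (distance, id) pairs seen so far; the largest
    // stored distance sits on top and is evicted by any smaller newcomer.
    struct KBest
    {
        std::size_t k;
        std::priority_queue<std::pair<double, int>> heap; // max on top
        explicit KBest(std::size_t k_) : k(k_) {}
        void push(double dist, int id)
        {
            if (heap.size() < k)
                heap.push({dist, id});
            else if (dist < heap.top().first)
            {
                heap.pop();
                heap.push({dist, id});
            }
        }
    };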
CCKRRNystrom | Class KRRNystrom implements the Nyström method for kernel ridge regression, using a low-rank approximation to the kernel matrix |
CCLabels | The class Labels models labels, i.e. class assignments of objects |
CCLabelsFactory | The helper class to specialize base class instances of labels |
CCLanczosEigenSolver | Class that computes eigenvalues of a real valued, self-adjoint linear operator using Lanczos algorithm |
CCLaplaceInference | The Laplace approximation inference method base class |
CCLaplacianEigenmaps | Class LaplacianEigenmaps used to construct embeddings of data using Laplacian Eigenmaps algorithm as described in: |
CCLaRank | LaRank multiclass SVM machine This implementation uses LaRank algorithm from Bordes, Antoine, et al., 2007. "Solving multiclass support vector machines with LaRank." |
CCLatentFeatures | Latent Features class. The class is for representing features for latent learning, e.g. LatentSVM. It's basically a very generic way of storing features of any (user-defined) form based on CData |
CCLatentLabels | Abstract class for latent labels. As latent labels always depend on the given application, this class only defines the API that the user has to implement for latent labels |
CCLatentModel | Abstract class CLatentModel. It represents the application specific model and contains most of the application dependent logic to solve latent variable based problems |
CCLBFGSMinimizer | The class wraps Shogun's C-style LBFGS minimizer |
CCLBPPyrDotFeatures | Implements Local Binary Patterns with Scale Pyramids as dot features for a set of images. Expects the images to be loaded in a CDenseFeatures object |
CCLDA | Class LDA implements regularized Linear Discriminant Analysis |
CCLeastAngleRegression | Class for Least Angle Regression, can be used to solve LASSO |
CCLeastSquaresRegression | Class to perform Least Squares Regression |
CCLibLinear | This class provides an interface to the LibLinear library for large-scale linear learning focusing on SVM [1]. This is the classification interface. For regression, see CLibLinearRegression. There is also an online version, see COnlineLibLinear |
CCLibLinearMTL | Class to implement LibLinear |
CCLibLinearRegression | This class provides an interface to the LibLinear library for large-scale linear learning focusing on SVM [1]. This is the regression interface. For classification, see CLibLinear |
CCLibSVM | LibSVM |
CCLibSVMFile | Read sparse real valued features in svm light format, e.g. -1 1:10.0 2:100.2 1000:1.3, where -1 is the (optional) label and dimension 1 has value 10.0, dimension 2 has value 100.2 and dimension 1000 has value 1.3 |
CCLibSVMOneClass | Class LibSVMOneClass |
CCLibSVR | Class LibSVR, performs support vector regression using LibSVM |
CCLikelihoodModel | The Likelihood model base class |
CCLinearHMM | The class LinearHMM is for learning Higher Order Markov chains |
CCLinearKernel | Computes the standard linear kernel on CDotFeatures |
CCLinearLatentMachine | Abstract implementation of Linear Machine with latent variable. This is the base implementation of all linear machines with latent variable |
CCLinearLocalTangentSpaceAlignment | Class LinearLocalTangentSpaceAlignment converter used to construct embeddings as described in: |
CCLinearMachine | Class LinearMachine is a generic interface for all kinds of linear machines like classifiers |
CCLinearMulticlassMachine | Generic linear multiclass machine |
CCLinearOperator | Abstract template base class that represents a linear operator, e.g. a matrix |
CCLinearRidgeRegression | Class LinearRidgeRegression implements Ridge Regression - a regularized least square method for classification and regression |
CCLinearSolver | Abstract template base class that provides an abstract solve method for linear systems, that takes a linear operator \(A\), a vector \(b\), solves the system \(Ax=b\) and returns the vector \(x\) |
CCLinearStringKernel | Computes the standard linear kernel on dense char valued features |
CCLinearStructuredOutputMachine | |
CCLinearTimeMMD | This class implements the linear time Maximum Mean Statistic as described in [1] for streaming data (see CStreamingMMD for description) |
CCLineReader | Class for buffered reading from an ASCII file |
CCList | Class List implements a doubly connected list for low-level-objects |
CCListElement | Class ListElement, defines what an element of the list looks like |
CCLMNN | Class LMNN that implements the distance metric learning technique Large Margin Nearest Neighbour (LMNN) described in |
CCLMNNStatistics | Class LMNNStatistics used to give access to intermediate results obtained training LMNN |
CCLocalAlignmentStringKernel | The LocalAlignmentString kernel compares two sequences through all possible local alignments between the two sequences |
CCLocalityImprovedStringKernel | The LocalityImprovedString kernel is inspired by the polynomial kernel. By comparing neighboring characters it puts emphasis on local features |
CCLocalityPreservingProjections | Class LocalityPreservingProjections used to compute embeddings of data using Locality Preserving Projections method as described in |
CCLocallyLinearEmbedding | Class LocallyLinearEmbedding used to embed data using Locally Linear Embedding algorithm described in |
CCLocalTangentSpaceAlignment | Class LocalTangentSpaceAlignment used to embed data using Local Tangent Space Alignment (LTSA) algorithm as described in: |
CCLock | Class Lock used for synchronization in concurrent programs |
CCLogDetEstimator | Class to create unbiased estimators of \(log(\left|C\right|)= trace(log(C))\). For each estimate, it samples trace vectors (one by one) and calls submit_jobs of COperatorFunction, stores the resulting job result aggregator instances, calls wait_for_all of CIndependentComputationEngine to ensure that the job result aggregators are all up to date. Then simply computes running averages over the estimates |
CCLogitDVGLikelihood | Class that models dual variational logit likelihood |
CCLogitLikelihood | Class that models Logit likelihood |
CCLogitVGLikelihood | Class that models Logit likelihood and uses numerical integration to approximate the following variational expectation of log likelihood \[ \sum_{i=1}^n E_{q(f_i|{\mu}_i,{\sigma}^2_i)}[\log P(y_i|f_i)] \] |
CCLogitVGPiecewiseBoundLikelihood | Class that models Logit likelihood and uses a variational piecewise bound to approximate the following variational expectation of log likelihood \[ \sum_{i=1}^n E_{q(f_i|{\mu}_i,{\sigma}^2_i)}[\log P(y_i|f_i)] \] where \[ p(y_i|f_i) = \frac{\exp(y_i f_i)}{1+\exp(f_i)}, y_i \in \{0,1\} \] |
CCLogKernel | Log kernel |
CCLogLoss | CLogLoss implements the logarithmic loss function |
CCLogLossMargin | Class CLogLossMargin implements a margin-based log-likelihood loss function |
CCLogPlusOne | Preprocessor LogPlusOne does what the name says: it adds one to a dense real-valued vector and takes the logarithm of each component |
CCLogRationalApproximationCGM | Implementation of the rational approximation of an operator-function times vector, where the operator function is the log of a linear operator. The complex systems generated from the shifts of the rational approximation of the operator-log times vector expression are solved at once with a shifted linear-family solver by the computation engine. generate_jobs generates one job per sample |
CCLogRationalApproximationIndividual | Implementation of the rational approximation of an operator-function times vector, where the operator function is the log of a dense matrix. Each complex system generated from the shifts of the rational approximation of the operator-log times vector expression is solved individually with a complex linear solver by the computation engine. generate_jobs generates num_shifts jobs per trace sample |
CCLOOCrossValidationSplitting | Implementation of Leave-one-out cross-validation on the base of CCrossValidationSplitting. Produces subset index sets consisting of one element, one for each label |
CCLoss | Class which collects generic mathematical functions |
CCLossFunction | Class CLossFunction is the base class of all loss functions |
CCLPBoost | Class LPBoost trains a linear classifier called Linear Programming Machine, i.e. an SVM using an \(\ell_1\)-norm regularizer |
CCLPM | Class LPM trains a linear classifier called Linear Programming Machine, i.e. an SVM using an \(\ell_1\)-norm regularizer |
CCMachine | A generic learning machine interface |
CCMachineEvaluation | Machine Evaluation is an abstract class that evaluates a machine according to some criterion |
CCMahalanobisDistance | Class MahalanobisDistance |
CCMajorityVote | CMajorityVote is a CWeightedMajorityVote combiner, where each Machine's weight in the ensemble is 1.0 |
CCManhattanMetric | Class ManhattanMetric |
CCManhattanWordDistance | Class ManhattanWordDistance |
CCManifoldSculpting | Class CManifoldSculpting used to embed data using manifold sculpting embedding algorithm |
CCMap | Class CMap, a map based on a hash table. See http://en.wikipedia.org/wiki/Hash_table |
CCMAPInference | Class CMAPInference performs MAP inference on a factor graph. Briefly, given a factor graph model, with features \(\bold{x}\), the prediction is obtained by \( {\arg\max} _{\bold{y}} P(\bold{Y} = \bold{y} | \bold{x}; \bold{w}) \) |
CCMAPInferImpl | Class CMAPInferImpl abstract class of MAP inference implementation |
CCMatchWordStringKernel | The class MatchWordStringKernel computes a variant of the polynomial kernel on strings of the same length converted to a word alphabet |
►CCMath | Class which collects generic mathematical functions |
CIndexSorter | |
CCMatrixFeatures | Class CMatrixFeatures used to represent data whose feature vectors are better represented with matrices than with unidimensional arrays or vectors. Optionally, it can be required that all feature vectors have the same number of features: set the attribute num_features to a value different from zero to enable this restriction, or leave num_features at zero (default behaviour) to allow feature vectors with differing numbers of features |
CCMatrixOperations | The helper class is used for Laplace and KL methods |
CCMatrixOperator | Abstract base class that represents a matrix linear operator. It provides an interface to compute the matrix-vector product \(Ax\) in its apply method, \(A\in\mathbb{C}^{m\times n},A:\mathbb{C}^{n} \rightarrow \mathbb{C}^{m}\) being the matrix operator and \(x\in \mathbb{C}^{n}\) being the vector. The result is a vector \(y\in \mathbb{C}^{m}\) |
CCMCLDA | Class MCLDA implements multiclass Linear Discriminant Analysis |
CCMeanAbsoluteError | Class MeanAbsoluteError used to compute the error of a regression model |
CCMeanFunction | An abstract class of the mean function |
CCMeanRule | CMeanRule simply averages the outputs of the Machines in the ensemble |
CCMeanShiftDataGenerator | |
CCMeanSquaredError | Class MeanSquaredError used to compute the error of a regression model |
CCMeanSquaredLogError | Class CMeanSquaredLogError used to compute the error of a regression model |
CCMemoryMappedFile | Memory mapped file |
CCMinkowskiMetric | Class MinkowskiMetric |
CCMixtureModel | This is the generic class for mixture models. The final distribution is a mixture of various simple distributions supplied by the user |
►CCMKL | Multiple Kernel Learning |
CSelf | |
CCMKLClassification | Multiple Kernel Learning for two-class-classification |
CCMKLMulticlass | MKLMulticlass is a class for L1-norm Multiclass MKL |
CCMKLOneClass | Multiple Kernel Learning for one-class-classification |
CCMKLRegression | Multiple Kernel Learning for regression |
CCMMDKernelSelection | Base class for kernel selection for MMD-based two-sample test statistic implementations. Provides abstract methods for selecting kernels and computing criteria or kernel weights for the implemented method. In order to implement new methods for kernel selection, simply write a new implementation of this class |
CCMMDKernelSelectionMax | Kernel selection class that selects the single kernel that maximises the MMD statistic. Works for CQuadraticTimeMMD and CLinearTimeMMD. This leads to a heuristic that is better than the standard median heuristic for Gaussian kernels. However, it comes with no guarantees |
CCMMDKernelSelectionMedian | Implements MMD kernel selection for a number of Gaussian baseline kernels via selecting the one with a bandwidth parameter that is closest to the median of all pairwise distances in the underlying data. Therefore, it only works for data to which a GaussianKernel can be applied, which are grouped under the class CDotFeatures in SHOGUN |
CCMMDKernelSelectionOpt | Implements optimal kernel selection for single kernels. Given a number of baseline kernels, this method selects the one that minimizes the type II error for a given type I error for a two-sample test. This only works for the CLinearTimeMMD statistic |
CCModelSelection | Abstract base class for model selection |
CCModelSelectionParameters | Class to select parameters and their ranges for model selection. The structure is organized as a tree with different kinds of nodes, depending on the values of its name and CSGObject member variables |
CCMPDSVM | Class MPDSVM |
CCMulticlassAccuracy | The class MulticlassAccuracy used to compute accuracy of multiclass classification |
CCMulticlassLabels | Multiclass Labels for multi-class classification |
CCMulticlassLibLinear | Multiclass LibLinear wrapper. Uses the Crammer-Singer formulation and the gradient descent optimization algorithm implemented in the LibLinear library. Regularized bias support is added by stacking a bias 'feature' onto the hyperplanes' normal vectors |
CCMulticlassLibSVM | Class LibSVMMultiClass. Does one-vs-one classification |
CCMulticlassMachine | Experimental abstract generic multiclass machine class |
CCMulticlassModel | Class CMulticlassModel that represents the application specific model and contains the application dependent logic to solve multiclass classification within a generic SO framework |
CCMulticlassOneVsOneStrategy | Multiclass one-vs-one strategy used to train generic multiclass machines for K-class problems by building a voting-based ensemble of K*(K-1)/2 binary classifiers; multiclass probabilistic outputs can be obtained by using the heuristics described in [1] (see the voting sketch below) |
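The voting step of a one-vs-one ensemble reduces to counting pairwise wins. A hypothetical sketch, where decide(a, b) stands in for querying the trained (a, b) binary machine; this is not Shogun's CMulticlassOneVsOneStrategy interface:

```cpp
// Pairwise-voting sketch for one-vs-one multiclass prediction.
// decide(a, b) returns true if the (a, b) binary machine votes for class a;
// it is a hypothetical callback, not a Shogun API.
#include <functional>
#include <vector>

int ovo_predict(int num_classes,
                const std::function<bool(int, int)>& decide) {
    std::vector<int> votes(num_classes, 0);
    for (int a = 0; a < num_classes; ++a)
        for (int b = a + 1; b < num_classes; ++b)  // K*(K-1)/2 machines
            ++votes[decide(a, b) ? a : b];
    int best = 0;
    for (int c = 1; c < num_classes; ++c)
        if (votes[c] > votes[best]) best = c;  // class with most votes wins
    return best;
}
```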
CCMulticlassOneVsRestStrategy | Multiclass one-vs-rest strategy used to train generic multiclass machines for K-class problems by building an ensemble of K binary classifiers |
CCMulticlassOVREvaluation | The class MulticlassOVREvaluation used to compute evaluation parameters of multiclass classification via binary OvR decomposition and given binary evaluation technique |
CCMulticlassSOLabels | Class CMulticlassSOLabels to be used in the application of Structured Output (SO) learning to multiclass classification. Each of the labels is represented by a real number and it is required that the values of the labels are in the set {0, 1, ..., num_classes-1}. Each label is of type CRealNumber and all of them are stored in a CDynamicObjectArray |
CCMulticlassStrategy | Class MulticlassStrategy used to construct generic multiclass classifiers with ensembles of binary classifiers |
CCMulticlassSVM | Class MultiClassSVM |
CCMultidimensionalScaling | Class MultidimensionalScaling is used to perform multidimensional scaling (capable of landmark approximation if requested) |
CCMultilabelAccuracy | Class CMultilabelAccuracy used to compute accuracy of multilabel classification |
CCMultilabelCLRModel | Class MultilabelCLRModel represents application specific model and contains application dependent logic for solving multi-label classification using Calibrated Label Ranking (CLR) [1] method within a generic SO framework |
CCMultilabelLabels | Multilabel Labels for multi-label classification |
CCMultilabelModel | Class CMultilabelModel represents application specific model and contains application dependent logic for solving multilabel classification within a generic SO framework |
CCMultilabelSOLabels | Class CMultilabelSOLabels used in the application of Structured Output (SO) learning to Multilabel Classification. Labels are subsets of {0, 1, ..., num_classes-1}. Each label is of type CSparseMultilabel and all of them are stored in a CDynamicObjectArray |
CCMultiLaplaceInferenceMethod | The Laplace approximation inference method class for multiclass classification |
CCMultiquadricKernel | MultiquadricKernel |
CCMultitaskKernelMaskNormalizer | The MultitaskKernel allows Multitask Learning via a modified kernel function |
CCMultitaskKernelMaskPairNormalizer | The MultitaskKernel allows Multitask Learning via a modified kernel function |
CCMultitaskKernelMklNormalizer | Base-class for parameterized Kernel Normalizers |
CCMultitaskKernelNormalizer | The MultitaskKernel allows Multitask Learning via a modified kernel function |
CCMultitaskKernelPlifNormalizer | The MultitaskKernel allows learning a piece-wise linear function (PLIF) via MKL |
CCMultitaskKernelTreeNormalizer | The MultitaskKernel allows Multitask Learning via a modified kernel function based on taxonomy |
CCMultitaskROCEvaluation | Class MultitaskROCEvaluation used to evaluate the ROC (Receiver Operating Characteristic) and the area under the ROC curve (auROC) of each task separately |
CCNativeMulticlassMachine | Experimental abstract native multiclass machine class |
CCNbodyTree | This class implements a generalized tree for N-body problems like k-NN, kernel density estimation, and two-point correlation |
CCNearestCentroid | Class NearestCentroid, an implementation of the Nearest Shrunken Centroid classifier |
CCNeighborhoodPreservingEmbedding | NeighborhoodPreservingEmbedding converter used to construct embeddings as described in: |
CCNeuralConvolutionalLayer | Main component in convolutional neural networks |
CCNeuralInputLayer | Represents an input layer. The layer can be either connected to all the input features that a network receives (default) or connected to just a small part of those features |
CCNeuralLayer | Base class for neural network layers |
CCNeuralLayers | A class to construct neural layers |
CCNeuralLeakyRectifiedLinearLayer | Neural layer with leaky rectified linear neurons |
CCNeuralLinearLayer | Neural layer with linear neurons, with an identity activation function. Can be used as a hidden layer or an output layer |
CCNeuralLogisticLayer | Neural layer with linear neurons, with a logistic activation function. Can be used as a hidden layer or an output layer |
CCNeuralNetwork | A generic multi-layer neural network |
CCNeuralRectifiedLinearLayer | Neural layer with rectified linear neurons |
CCNeuralSoftmaxLayer | Neural layer with linear neurons, with a softmax activation function. Can only be used as an output layer; the cross-entropy error measure is used |
CCNewtonSVM | NewtonSVM: in this implementation a linear SVM is trained in its primal form using Newton-like iterations. The implementation is ported from Olivier Chapelle's fast Newton-based SVM solver, which can be found at http://mloss.org/software/view/30/. For further information on this implementation of SVM refer to the paper: http://www.kyb.mpg.de/publications/attachments/neco_%5B0%5D.pdf |
CCNGramTokenizer | The class CNGramTokenizer is used to tokenize a SGVector<char> into n-grams |
CCNOCCO | This class implements the NOrmalized Cross Covariance Operator (NOCCO) based independence test as described in [1] |
CCNode | A CNode is an element of a CTaxonomy, which is used to describe hierarchical structure between tasks |
CCNormalSampler | Class that provides a sample method for Gaussian samples |
CCNormOne | Preprocessor NormOne, normalizes vectors to have norm 1 |
CCNumericalVGLikelihood | Class that models likelihood and uses numerical integration to approximate the following variational expectation of log likelihood \[ \sum_{i=1}^{n}E_{q(f_i|\mu_i,\sigma^2_i)}[\log P(y_i|f_i)] \] |
CCOligoStringKernel | This class offers access to the Oligo Kernel introduced by Meinicke et al. in 2004 |
CConditionalProbabilityTreeNodeData | Struct to store data of a node of a conditional probability tree |
CCOnlineLibLinear | Class implementing a purely online version of CLibLinear, using the L2R_L1LOSS_SVC_DUAL solver only |
CCOnlineLinearMachine | Class OnlineLinearMachine is a generic interface for linear machines like classifiers which work through online algorithms |
CCOnlineSVMSGD | Class OnlineSVMSGD |
CConstLearningRate | This implements a constant learning rate for descent-based minimizers |
CCOperatorFunction | Abstract template base class for computing \(s^{T} f(C) s\) for a linear operator C and a vector s. The submit_jobs method creates the jobs needed to solve for a particular \(s\), attaches one unique job aggregator to each of them, and then submits them all to the computation engine |
CCParameterCombination | Class that holds ONE combination of parameters for a learning machine. The structure is organized as a tree. Every node may hold a name or an instance of a Parameter class. Nodes may have children. The nodes are organized in such way, that every parameter of a model for model selection has one node and sub-parameters are stored in sub-nodes. Using a tree of this class, parameters of models may easily be set. There are these types of nodes: |
CCParser | Class for reading from a string |
CCPCA | Preprocessor PCA performs principal component analysis on input feature vectors/matrices. When the init method of PCA is called with a proper feature matrix X (of, say, N vectors and feature dimension D), a transformation matrix is computed and stored internally. This transformation matrix is then used to transform all D-dimensional feature vectors or feature matrices (with D feature dimensions) supplied via the apply_to_feature_matrix or apply_to_feature_vector methods. The transformation outputs the T-dimensional approximation of all these input vectors and matrices (where T<=min(D,N)). The transformation matrix is essentially a DxT matrix, the columns of which correspond to the eigenvectors of the covariance matrix (XX') having the top T eigenvalues |
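As a rough illustration of the eigen-structure PCA extracts, the sketch below centers the data and recovers the leading eigenvector of the covariance XX' by power iteration; Shogun's CPCA computes the full DxT transformation matrix via a proper eigen-decomposition, so this is a toy under stated assumptions, not the class's method:

```cpp
// Leading principal axis via power iteration on the covariance XX'.
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

Vec first_principal_axis(std::vector<Vec> X /* N vectors of dimension D */,
                         int iters = 200) {
    std::size_t N = X.size(), D = X[0].size();
    Vec mean(D, 0.0);
    for (const Vec& x : X)
        for (std::size_t d = 0; d < D; ++d) mean[d] += x[d] / N;
    for (Vec& x : X)                       // center the data
        for (std::size_t d = 0; d < D; ++d) x[d] -= mean[d];

    Vec v(D, 1.0 / std::sqrt((double)D));  // arbitrary unit start vector
    for (int it = 0; it < iters; ++it) {
        Vec w(D, 0.0);
        for (const Vec& x : X) {           // w = (XX') v = sum_i x_i (x_i . v)
            double proj = 0.0;
            for (std::size_t d = 0; d < D; ++d) proj += x[d] * v[d];
            for (std::size_t d = 0; d < D; ++d) w[d] += proj * x[d];
        }
        double norm = 0.0;
        for (double c : w) norm += c * c;
        norm = std::sqrt(norm);
        for (std::size_t d = 0; d < D; ++d) v[d] = w[d] / norm;
    }
    return v;  // unit vector along the top-eigenvalue direction
}
```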
CCPerceptron | Class Perceptron implements the standard linear (online) perceptron |
CCPeriodicKernel | The periodic kernel as described in The Kernel Cookbook by David Duvenaud: http://people.seas.harvard.edu/~dduvenaud/cookbook/ |
CCPlif | Class Plif |
CCPlifArray | Class PlifArray |
CCPlifBase | Class PlifBase |
CCPlifMatrix | Store plif arrays for all transitions in the model |
CCPluginEstimate | Class PluginEstimate |
CCPNorm | Preprocessor PNorm, normalizes vectors to have p-norm |
CCPolyFeatures | Implements DotFeatures for the polynomial kernel |
CCPolyKernel | Computes the standard polynomial kernel on CDotFeatures |
CCPolyMatchStringKernel | The class PolyMatchStringKernel computes a variant of the polynomial kernel on strings of the same length |
CCPolyMatchWordStringKernel | The class PolyMatchWordStringKernel computes a variant of the polynomial kernel on word-features |
CCPositionalPWM | Positional PWM |
CCPowerKernel | Power kernel |
CCPRCEvaluation | Class PRCEvaluation used to evaluate the PRC (Precision Recall Curve) and the area under the PRC curve (auPRC) |
CCPrecisionMeasure | Class PrecisionMeasure used to measure the precision of a 2-class classifier |
CCPreprocessor | Class Preprocessor defines a preprocessor interface |
CCProbabilityDistribution | A base class for representing an n-dimensional probability distribution over the real numbers (64bit), for which various statistics can be computed and which can be sampled |
CCProbitLikelihood | Class that models Probit likelihood |
CCProbitVGLikelihood | Class that models Probit likelihood and uses numerical integration to approximate the following variational expectation of log likelihood \[ \sum_{i=1}^{n}E_{q(f_i|\mu_i,\sigma^2_i)}[\log P(y_i|f_i)] \] |
CCProductKernel | The Product kernel is used to combine a number of kernels into a single ProductKernel object by element multiplication |
CCProtobufFile | Class for working with binary files in protobuf format |
CCPruneVarSubMean | Preprocessor PruneVarSubMean subtracts the mean and removes features that have zero variance |
CCPyramidChi2 | Pyramid Kernel over Chi2 matched histograms |
CCQDA | Class QDA implements Quadratic Discriminant Analysis |
CCQDiag | Class QDiag |
CCQuadraticTimeMMD | This class implements the quadratic time Maximum Mean Discrepancy statistic as described in [1]. The MMD is the distance of two probability distributions \(p\) and \(q\) in an RKHS, which we denote by \[ \hat{\eta_k}=\text{MMD}[\mathcal{F},p,q]^2=\textbf{E}_{x,x'} \left[ k(x,x')\right]-2\textbf{E}_{x,y}\left[ k(x,y)\right] +\textbf{E}_{y,y'}\left[ k(y,y')\right]=||\mu_p - \mu_q||^2_\mathcal{F} \] |
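The population expression above suggests the straightforward biased quadratic-time estimate: replace each expectation by an average over the samples. A self-contained sketch with a Gaussian kernel (illustration only; the actual class also offers unbiased estimates and null-distribution approximations):

```cpp
// Biased quadratic-time estimate of MMD^2 with a Gaussian kernel,
// mirroring the population expression above. Not Shogun's class.
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

static double gauss_k(const Vec& a, const Vec& b, double width) {
    double d2 = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        d2 += (a[i] - b[i]) * (a[i] - b[i]);
    return std::exp(-d2 / width);
}

double mmd2_biased(const std::vector<Vec>& X, const std::vector<Vec>& Y,
                   double width) {
    double kxx = 0.0, kyy = 0.0, kxy = 0.0;
    for (const Vec& a : X) for (const Vec& b : X) kxx += gauss_k(a, b, width);
    for (const Vec& a : Y) for (const Vec& b : Y) kyy += gauss_k(a, b, width);
    for (const Vec& a : X) for (const Vec& b : Y) kxy += gauss_k(a, b, width);
    double m = (double)X.size(), n = (double)Y.size();
    // E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')], each as a sample average
    return kxx / (m * m) - 2.0 * kxy / (m * n) + kyy / (n * n);
}
```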
CCRandom | Pseudo-random number generator |
CCRandomCARTree | This class implements the randomized CART algorithm used in the tree-growing process of candidate trees in the Random Forests algorithm. The tree-growing process differs from the original CART algorithm in the input attributes considered for each node split. In randomized CART, a fixed number of attributes are randomly chosen from all available attributes while deciding the best split, unlike the original CART where all available attributes are considered |
CCRandomConditionalProbabilityTree | |
CCRandomForest | This class implements the Random Forests algorithm, in which a number of randomized CART trees (see class CRandomCARTree) are trained using the supplied training data. The number of trees to be trained is a user-controlled parameter (called the number of bags). Test feature vectors are classified/regressed by combining the outputs of all these trained candidate trees using a combination rule (see class CCombinationRule). A facility for calculating the out-of-bag error is also provided to help determine the appropriate number of bags; the evaluation criterion for this out-of-bag error is specified by the user (see class CEvaluation) |
CCRandomFourierDotFeatures | This class implements random Fourier features for the DotFeatures framework. Upon object creation it computes the random coefficients, namely w and b, that the method needs; every time a vector is required it is then computed according to the formula z(x) = sqrt(2/D) * cos(w'*x + b), where D is the number of samples used |
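A minimal sketch of that feature map, sampling w and b once and then evaluating z(x) = sqrt(2/D) * cos(w'x + b). The sampling choice w ~ N(0, I/sigma^2), b ~ U[0, 2*pi] is the usual one for a Gaussian kernel of width sigma and is an assumption here, as is every name (this is not the CRandomFourierDotFeatures interface):

```cpp
// Random Fourier feature map z(x) = sqrt(2/D) * cos(w'x + b)
// in the spirit of Rahimi & Recht; coefficients sampled once.
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

struct RandomFourierMap {
    std::vector<std::vector<double>> w;  // D weight vectors of dimension dim
    std::vector<double> b;               // D random phases

    RandomFourierMap(int dim, int D, double sigma, unsigned seed = 42)
        : w(D, std::vector<double>(dim)), b(D) {
        const double two_pi = 6.283185307179586;
        std::mt19937 gen(seed);
        std::normal_distribution<double> gauss(0.0, 1.0 / sigma);
        std::uniform_real_distribution<double> unif(0.0, two_pi);
        for (auto& row : w) for (double& c : row) c = gauss(gen);
        for (double& phase : b) phase = unif(gen);
    }

    std::vector<double> map(const std::vector<double>& x) const {
        std::vector<double> z(w.size());
        for (std::size_t i = 0; i < w.size(); ++i) {
            double dot = b[i];
            for (std::size_t d = 0; d < x.size(); ++d) dot += w[i][d] * x[d];
            z[i] = std::sqrt(2.0 / w.size()) * std::cos(dot);
        }
        return z;  // z(x)'z(y) approximates the Gaussian kernel k(x, y)
    }
};
```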
CCRandomFourierGaussPreproc | Preprocessor CRandomFourierGaussPreproc implements Random Fourier Features for the Gauss kernel, a la Ali Rahimi and Ben Recht (NIPS 2007). After preprocessing, using the features in a linear kernel approximates a Gaussian kernel |
CCRandomKitchenSinksDotFeatures | Class that implements the Random Kitchen Sinks (RKS) for the DotFeatures as mentioned in http://books.nips.cc/papers/files/nips21/NIPS2008_0885.pdf |
CCRandomSearchModelSelection | Model selection class which searches for the best model by a random search. See CModelSelection for details |
CCRationalApproximation | Abstract base class of the rational approximation of a function of a linear operator (A) times a vector (v) using Cauchy's integral formula - \[f(\text{A})\text{v}=\oint_{\Gamma}f(z)(z\text{I}-\text{A})^{-1} \text{v}dz\] Computes the eigenvalues of the linear operator and uses Jacobi elliptic functions and conformal maps [2] for the quadrature rule for discretizing the contour integral, and computes the complex shifts, weights and constant multiplier of the rational approximation of the above expression as \[f(\text{A})\text{v}\approx \eta\,\text{A}\,\Im\left(-\sum_{l=1}^{N}\alpha_{l} (\text{A}-\sigma_{l}\text{I})^{-1}\text{v}\right)\] where \(\alpha_{l},\sigma_{l}\in\mathbb{C}\) are respectively the weights and shifts of the linear systems generated from the rational approximation, and \(\eta\in\mathbb{R}\) is the constant multiplier, equal to \(\frac{-8K(\lambda_{m}\lambda_{M})^{\frac{1}{4}}}{k\pi N}\) |
CCRationalApproximationCGMJob | Implementation of independent jobs that solves one whole family of shifted systems in rational approximation of linear operator function times a vector using CG-M linear solver. compute calls submit_results of the aggregator with CScalarResult (see CRationalApproximation) |
CCRationalApproximationIndividualJob | Implementation of independent job that solves one of the family of shifted systems in rational approximation of linear operator function times a vector using a direct linear solver. The shift is moved inside the operator. compute calls submit_results of the aggregator with CVectorResult which is the solution vector for that shift multiplied by complex weight (See CRationalApproximation) |
CCRationalQuadraticKernel | Rational Quadratic kernel |
CCRBM | A Restricted Boltzmann Machine |
CCRealDistance | Class RealDistance |
CCRealFileFeatures | The class RealFileFeatures implements a dense double-precision floating point matrix from a file |
CCRealNumber | Class CRealNumber to be used in the application of Structured Output (SO) learning to multiclass classification. Even though it is likely that it does not make sense to consider real numbers as structured data, it has been made in this way because the basic type to use in structured labels needs to inherit from CStructuredData |
CCRecallMeasure | Class RecallMeasure used to measure the recall of a 2-class classifier |
CCRegressionLabels | Real Labels are real-valued labels |
CCRegulatoryModulesStringKernel | The Regulatory Modules kernel, based on the WD kernel, as published in Schultheiss et al., Bioinformatics (2009), on regulatory sequences |
CCRejectionStrategy | Base rejection strategy class |
CCRelaxedTree | |
CCRescaleFeatures | Preprocessor RescaleFeatures rescales the range of features to make the features independent of each other, aiming to scale the range to [0, 1] or [-1, 1] |
CCResultSet | |
CCRidgeKernelNormalizer | Normalize the kernel by adding a constant term to its diagonal. This helps kernels to become positive definite when they are not (often caused by numerical problems) |
CCROCEvaluation | Class ROCEvaluation used to evaluate the ROC (Receiver Operating Characteristic) and the area under the ROC curve (auROC) |
CCSalzbergWordStringKernel | The SalzbergWordString kernel implements the Salzberg kernel |
CCScalarResult | Base class that stores the result of an independent job when the result is a scalar |
CCScatterKernelNormalizer | Scatter kernel normalizer |
CCScatterSVM | ScatterSVM - Multiclass SVM |
CCSegmentLoss | Class SegmentLoss |
CCSequence | Class CSequence to be used in the application of Structured Output (SO) learning to Hidden Markov Support Vector Machines (HM-SVM) |
CCSequenceLabels | Class CSequenceLabels used e.g. in the application of Structured Output (SO) learning to Hidden Markov Support Vector Machines (HM-SVM). Each of the labels is represented by a sequence of integers. Each label is of type CSequence and all of them are stored in a CDynamicObjectArray |
CCSerialComputationEngine | Class that computes multiple independent instances of computation jobs sequentially |
CCSerializableAsciiFile | Serializable ASCII file |
►CCSerializableFile | Serializable file |
CTSerializableReader | Serializable reader |
CCSet | Class CSet, a set based on a hash table. See http://en.wikipedia.org/wiki/Hash_table |
CCSGDQN | Class SGDQN |
►CCSGObject | Class SGObject is the base class of all shogun objects |
CSelf | |
CCShareBoost | |
CCShiftInvariantKernel | Base class for the family of kernel functions that depend only on the difference of the inputs, i.e. whose value does not change if both inputs are shifted by the same amount. More precisely, \[ k(\mathbf{x}, \mathbf{x'}) = k(\mathbf{x-x'}) \] The Gaussian (RBF) kernel, for example, is a shift-invariant kernel |
CCSigmoidKernel | The standard Sigmoid kernel computed on dense real valued features |
CCSignal | Class Signal implements signal handling to e.g. allow ctrl+c to cancel a long running process |
CCSimpleFile | Template class SimpleFile to read and write from files |
CCSimpleLocalityImprovedStringKernel | SimpleLocalityImprovedString kernel, a ``simplified'' and better-performing version of the Locality Improved kernel |
CCSingleFITCInference | The Fully Independent Conditional Training inference base class for Laplace and regression for 1-D labels (1D regression and binary classification) |
CCSingleFITCLaplaceInferenceMethod | The FITC approximation inference method class for regression and binary classification. Note that the number of inducing points (m) is usually far less than the number of input points (n). (The time complexity is computed under the assumption m < n) |
CCSingleFITCLaplaceNewtonOptimizer | The built-in minimizer for SingleFITCLaplaceInference |
CCSingleLaplaceInferenceMethod | The SingleLaplace approximation inference method class for regression and binary classification |
CCSingleLaplaceNewtonOptimizer | The built-in minimizer for SingleLaplaceInference |
CCSingleSparseInference | The sparse inference base class for classification and regression for 1-D labels (1D regression and binary classification) |
CCSmoothHingeLoss | CSmoothHingeLoss implements the smooth hinge loss function |
CCSNPFeatures | Features that compute the Weighted Degree Kernel feature space explicitly |
CCSNPStringKernel | The class SNPStringKernel computes a variant of the polynomial kernel on strings of the same length |
CCSOBI | Class SOBI |
CCSoftMaxLikelihood | Class that models Soft-Max likelihood |
CCSortUlongString | Preprocessor SortUlongString, sorts the individual strings in ascending order |
CCSortWordString | Preprocessor SortWordString, sorts the individual strings in ascending order |
CCSOSVMHelper | Class CSOSVMHelper contains helper functions to compute primal objectives, dual objectives, average training losses, duality gaps etc. These values will be recorded to check convergence. This class is inspired by the matlab implementation of the block coordinate Frank-Wolfe SOSVM solver [1] |
CCSparseDistance | Template class SparseDistance |
CCSparseEuclideanDistance | Class SparseEuclideanDistance |
CCSparseFeatures | Template class SparseFeatures implements sparse matrices |
CCSparseInference | The Fully Independent Conditional Training inference base class |
CCSparseKernel | Template class SparseKernel, is the base class of kernels working on sparse features |
CCSparseMatrixOperator | Class that represents a sparse-matrix linear operator. It computes matrix-vector product \(Ax\) in its apply method, \(A\in\mathbb{C}^{m\times n},A:\mathbb{C}^{n}\rightarrow \mathbb{C}^{m}\) being the matrix operator and \(x\in\mathbb{C}^{n}\) being the vector. The result is a vector \(y\in\mathbb{C}^{m}\) |
CCSparseMultilabel | Class CSparseMultilabel to be used in the application of Structured Output (SO) learning to Multilabel classification |
CCSparsePolyFeatures | Implements DotFeatures for the polynomial kernel |
CCSparsePreprocessor | Template class SparsePreprocessor, base class for preprocessors (cf. CPreprocessor) that apply to CSparseFeatures |
CCSparseSpatialSampleStringKernel | Sparse Spatial Sample String Kernel by Pavel Kuksa <pkuksa@cs.rutgers.edu> and Vladimir Pavlovic <vladimir@cs.rutgers.edu> |
CCSpecificityMeasure | Class SpecificityMeasure used to measure the specificity of a 2-class classifier |
CCSpectrumMismatchRBFKernel | Spectrum mismatch RBF kernel |
CCSpectrumRBFKernel | Spectrum RBF kernel |
CCSphericalKernel | Spherical kernel |
CCSplineKernel | Computes the Spline Kernel function, which is a cubic polynomial |
CCSplittingStrategy | Abstract base class for all splitting types. Takes a CLabels instance and generates a desired number of subsets which are being accessed by their indices via the method generate_subset_indices(...) |
CCSqrtDiagKernelNormalizer | SqrtDiagKernelNormalizer divides by the Square Root of the product of the diagonal elements |
CCSquaredHingeLoss | Class CSquaredHingeLoss implements a squared hinge loss function |
CCSquaredLoss | CSquaredLoss implements the squared loss function |
CCStateModel | Class CStateModel, abstract base class for the internal state representation used in CHMSVMModel |
►CCStatistics | Class that contains certain functions related to statistics, such as probability/cumulative distribution functions, different statistics, etc |
CSigmoidParamters | |
CCStochasticGBMachine | This class implements the stochastic gradient boosting algorithm for ensemble learning invented by Jerome H. Friedman. It works with a variety of loss functions like squared loss, exponential loss, Huber loss, etc., which can be accessed through Shogun's CLossFunction interface (cf. http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLossFunction.html). Additionally, it can create an ensemble of any regressor class derived from the CMachine class (cf. http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMachine.html). For one-dimensional optimization, this class uses the backtracking line search accessed via Shogun's L-BFGS class. A concise description of the implemented algorithm can be found at http://en.wikipedia.org/wiki/Gradient_boosting#Algorithm |
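The core loop of gradient boosting is small even though the class around it is generic. Below is a sketch for squared loss, where each stage fits an abstract weak learner to the residuals of a random subsample and adds it with a learning rate; the fit callback and all names are hypothetical, not Shogun's CStochasticGBMachine API:

```cpp
// Skeleton of the stochastic gradient boosting loop for squared loss:
// the negative gradient of squared loss is just the residual y - pred.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <numeric>
#include <random>
#include <vector>

using Model = std::function<double(double)>;

Model boost_squared_loss(const std::vector<double>& x,
                         const std::vector<double>& y,
                         // fit(xs, residuals) returns a weak regressor
                         const std::function<Model(const std::vector<double>&,
                                                   const std::vector<double>&)>& fit,
                         int stages = 100, double rate = 0.1,
                         double subset_frac = 0.5, unsigned seed = 1) {
    std::mt19937 gen(seed);
    std::vector<Model> ensemble;
    std::vector<double> pred(x.size(), 0.0);
    std::vector<std::size_t> idx(x.size());
    std::iota(idx.begin(), idx.end(), 0);
    for (int s = 0; s < stages; ++s) {
        std::shuffle(idx.begin(), idx.end(), gen);  // the "stochastic" part
        std::size_t m = (std::size_t)(subset_frac * x.size());
        std::vector<double> xs(m), rs(m);
        for (std::size_t i = 0; i < m; ++i) {
            xs[i] = x[idx[i]];
            rs[i] = y[idx[i]] - pred[idx[i]];  // negative gradient
        }
        Model weak = fit(xs, rs);              // stage s weak learner
        for (std::size_t i = 0; i < x.size(); ++i)
            pred[i] += rate * weak(x[i]);      // shrunken additive update
        ensemble.push_back(weak);
    }
    return [ensemble, rate](double v) {
        double out = 0.0;
        for (const Model& w : ensemble) out += rate * w(v);
        return out;
    };
}
```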
CCStochasticProximityEmbedding | Class StochasticProximityEmbedding used to construct embeddings of data using the Stochastic Proximity algorithm |
CCStochasticSOSVM | Class CStochasticSOSVM solves SOSVM using stochastic subgradient descent on the SVM primal problem [1], which is equivalent to SGD or Pegasos [2]. This class is inspired by the matlab SGD implementation in [3] |
CCStoreScalarAggregator | Template class that aggregates scalar job results in each submit_result call; finalize then transforms the current aggregation into a CScalarResult |
CCStoreVectorAggregator | Abstract template class that aggregates vector job results in each submit_result call, finalize is abstract |
CCStratifiedCrossValidationSplitting | Implementation of stratified cross-validation on the base of CSplittingStrategy. Produces subset index sets of equal size (at most one difference) in which the label ratio is equal (at most one difference) to the label ratio of the specified labels. Do not use for regression, since it may be impossible to distribute the labels nicely in that case |
CCStreamingAsciiFile | Class StreamingAsciiFile to read vector-by-vector from ASCII files |
CCStreamingDenseFeatures | This class implements streaming features with dense feature vectors |
CCStreamingDotFeatures | Streaming features that support dot products among other operations |
CCStreamingFeatures | Streaming features are features which are used for online algorithms |
CCStreamingFile | A Streaming File access class |
CCStreamingFileFromDenseFeatures | Class CStreamingFileFromDenseFeatures is a derived class of CStreamingFile which creates an input source for the online framework from a CDenseFeatures object |
CCStreamingFileFromFeatures | Class StreamingFileFromFeatures to read vector-by-vector from a CFeatures object |
CCStreamingFileFromSparseFeatures | Class CStreamingFileFromSparseFeatures is derived from CStreamingFile and provides an input source for the online framework. It uses an existing CSparseFeatures object to generate online examples |
CCStreamingFileFromStringFeatures | Class CStreamingFileFromStringFeatures is derived from CStreamingFile and provides an input source for the online framework from a CStringFeatures object |
CCStreamingHashedDenseFeatures | This class acts as an alternative to the CStreamingDenseFeatures class; the difference is that the current example in this class is hashed into a smaller dimension dim |
CCStreamingHashedDocDotFeatures | This class implements streaming features for a document collection. As in the standard Bag-of-Words representation, this class considers each document as a collection of tokens, which are then hashed into a new feature space of a specified dimension. The class is very flexible and allows the user to specify the tokenizer used to tokenize each document, whether the results should be normalized with regard to the sqrt of the document size, and whether different tokens should be combined. The latter implements a k-skip n-grams approach, meaning that up to n tokens can be combined while skipping up to k. E.g. for the tokens ["a", "b", "c", "d"], with n_grams = 2 and skips = 2, one would get the following combinations: ["a", "ab", "ac" (skipped 1), "ad" (skipped 2), "b", "bc", "bd" (skipped 1), "c", "cd", "d"] (see the sketch below) |
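The k-skip bigram rule in that example is easy to state in code. A sketch covering the n_grams = 2 case from the entry (general n would recurse); this illustrates the token combination rule only, not the hashing class itself:

```cpp
// k-skip bigram sketch reproducing the example above (n_grams = 2, skips = 2):
// every token alone, plus every pair whose second token lies at most
// `skips` positions past the adjacent one.
#include <cstddef>
#include <string>
#include <vector>

std::vector<std::string> skip_bigrams(const std::vector<std::string>& toks,
                                      int skips) {
    std::vector<std::string> out;
    for (std::size_t i = 0; i < toks.size(); ++i) {
        out.push_back(toks[i]);  // the unigram itself
        for (std::size_t j = i + 1;
             j < toks.size() && j <= i + 1 + (std::size_t)skips; ++j)
            out.push_back(toks[i] + toks[j]);  // pair, skipping j - i - 1
    }
    return out;
}
// skip_bigrams({"a","b","c","d"}, 2) yields
// a, ab, ac, ad, b, bc, bd, c, cd, d -- matching the list above.
```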
CCStreamingHashedSparseFeatures | This class acts as an alternative to the CStreamingSparseFeatures class; the difference is that the current example in this class is hashed into a smaller dimension dim |
CCStreamingMMD | Abstract base class that provides an interface for performing kernel two-sample test on streaming data using Maximum Mean Discrepancy (MMD) as the test statistic. The MMD is the distance of two probability distributions \(p\) and \(q\) in a RKHS (see [1] for formal description) |
CCStreamingSparseFeatures | This class implements streaming features with sparse feature vectors. The vector is represented as an SGSparseVector<T>. Each entry is of type SGSparseVectorEntry<T> with members `feat_index' and `entry' |
CCStreamingStringFeatures | This class implements streaming features as strings |
CCStreamingVwCacheFile | Class StreamingVwCacheFile to read vector-by-vector from VW cache files |
CCStreamingVwFeatures | This class implements streaming features for use with VW |
CCStreamingVwFile | Class StreamingVwFile to read vector-by-vector from Vowpal Wabbit data files. It reads the example and label into one object of VwExample type |
CCStringDistance | Template class StringDistance |
CCStringFeatures | Template class StringFeatures implements a list of strings |
CCStringFileFeatures | File based string features |
CCStringKernel | Template class StringKernel, is the base class of all String Kernels |
CCStringMap | The class is a customized map for the optimization framework |
CCStringPreprocessor | Template class StringPreprocessor, base class for preprocessors (cf. CPreprocessor) that apply to CStringFeatures (i.e. strings of variable length) |
CCStructuredAccuracy | Class CStructuredAccuracy used to compute accuracy of structured classification |
CCStructuredData | Base class of the components of StructuredLabels |
CCStructuredLabels | Base class of the labels used in Structured Output (SO) problems |
CCStructuredModel | Class CStructuredModel that represents the application specific model and contains most of the application dependent logic to solve structured output (SO) problems. The idea of this class is to be instantiated giving pointers to the functions that are dependent on the application, i.e. the combined feature representation \(\Psi(\bold{x},\bold{y})\) and the argmax function \( {\arg\max} _{\bold{y} \neq \bold{y}_i} \left \langle { \bold{w}, \Psi(\bold{x}_i,\bold{y}) } \right \rangle \). See: MulticlassModel.h and .cpp for an example of these functions implemented |
CCStructuredOutputMachine | |
CCStudentsTLikelihood | Class that models a Student's-t likelihood |
CCStudentsTVGLikelihood | Class that models Student's t likelihood and uses numerical integration to approximate the following variational expectation of log likelihood \[ \sum_{i=1}^{n}E_{q(f_i|\mu_i,\sigma^2_i)}[\log P(y_i|f_i)] \] |
CCSubsequenceStringKernel | Class SubsequenceStringKernel that implements the String Subsequence Kernel (SSK) discussed by Lodhi et al. [1]. A subsequence is any ordered sequence of \(n\) characters occurring in the text, though not necessarily contiguous. More formally, string \(u\) is a subsequence of string \(s\), iff there exist indices \(\mathbf{i}=(i_{1},\dots,i_{|u|})\), with \(1\le i_{1} < \cdots < i_{|u|} \le |s|\), such that \(u_{j}=s_{i_{j}}\) for \(j=1,\dots,|u|\), written as \(u=s[\mathbf{i}]\). The feature mapping \(\phi\) in this scenario is given by \[ \phi_{u}(s)=\sum_{\mathbf{i}:u=s[\mathbf{i}]}\lambda^{l(\mathbf{i})} \] for some \(\lambda\le 1\), where \(l(\mathbf{i})\) is the length of the subsequence in \(s\), given by \(i_{|u|}-i_{1}+1\). The kernel here is an inner product in the feature space generated by all subsequences of length \(n\). \[ K_{n}(s,t)=\sum_{u\in\Sigma^{n}}\langle \phi_{u}(s), \phi_{u}(t)\rangle = \sum_{u\in\Sigma^{n}}\sum_{\mathbf{i}:u=s[\mathbf{i}]} \sum_{\mathbf{j}:u=t[\mathbf{j}]}\lambda^{l(\mathbf{i})+l(\mathbf{j})} \] Since the subsequences are weighted by the exponentially decaying factor \(\lambda\) of their full length in the text, more weight is given to those occurrences that are nearly contiguous. A direct computation is infeasible since the dimension of the feature space grows exponentially with \(n\); the paper describes an efficient computation approach using a dynamic programming technique |
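The dynamic-programming computation referred to above is compact enough to sketch directly. The following follows the standard recursion of Lodhi et al. with an auxiliary prefix kernel K'; it illustrates the technique and is not Shogun's CSubsequenceStringKernel code:

```cpp
// Dynamic-programming evaluation of the subsequence string kernel K_n(s, t).
#include <cstddef>
#include <string>
#include <vector>

double ssk(const std::string& s, const std::string& t, int n, double lam) {
    std::size_t S = s.size(), T = t.size();
    // Kp[l][i][j]: prefix kernel K'_l over the prefixes s[0..i), t[0..j)
    std::vector<std::vector<std::vector<double>>> Kp(
        n, std::vector<std::vector<double>>(S + 1,
                                            std::vector<double>(T + 1, 0.0)));
    for (std::size_t i = 0; i <= S; ++i)
        for (std::size_t j = 0; j <= T; ++j) Kp[0][i][j] = 1.0;  // K'_0 = 1

    for (int l = 1; l < n; ++l) {
        // Kpp accumulates the matching-character sums for level l
        std::vector<std::vector<double>> Kpp(S + 1,
                                             std::vector<double>(T + 1, 0.0));
        for (std::size_t i = 1; i <= S; ++i)
            for (std::size_t j = 1; j <= T; ++j) {
                Kpp[i][j] = lam * Kpp[i][j - 1];
                if (s[i - 1] == t[j - 1])
                    Kpp[i][j] += lam * lam * Kp[l - 1][i - 1][j - 1];
                Kp[l][i][j] = lam * Kp[l][i - 1][j] + Kpp[i][j];
            }
    }
    double k = 0.0;  // final accumulation for subsequences of full length n
    for (std::size_t i = 1; i <= S; ++i)
        for (std::size_t j = 1; j <= T; ++j)
            if (s[i - 1] == t[j - 1]) k += lam * lam * Kp[n - 1][i - 1][j - 1];
    return k;
}
```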
CCSubset | Wrapper class for an index subset which is used by SubsetStack |
CCSubsetStack | Class to add subset support to another class. A CSubsetStack instance should be added, along with wrapper methods for all interfaces |
CCSumOne | Preprocessor SumOne, normalizes vectors to have sum 1 |
CCSVM | A generic Support Vector Machine Interface |
CCSVMLight | Class SVMlight |
CCSVMLightOneClass | Trains a one-class C SVM |
CCSVMLin | Class SVMLin |
CCSVMSGD | Class SVMSGD |
CCSVRLight | Class SVRLight, performs support vector regression using SVMLight |
CCTableFactorType | Class CTableFactorType stores assignments of variables and energies in a table (multi-array) |
CCTanimotoDistance | Class TanimotoDistance, based on the Tanimoto coefficient |
CCTanimotoKernelNormalizer | TanimotoKernelNormalizer performs kernel normalization inspired by the Tanimoto coefficient (see http://en.wikipedia.org/wiki/Jaccard_index ) |
CCTask | Class Task used to represent tasks in multitask learning. Essentially it represents a set of feature vector indices |
CCTaskGroup | Class TaskGroup used to represent a group of tasks. Tasks in a group do not overlap |
CCTaskRelation | Used to represent relations between tasks in multitask learning |
CCTaskTree | Class TaskTree used to represent a tree of tasks. The tree is constructed from a task with subtasks (and subtasks of subtasks, etc.) passed to the TaskTree |
CCTaxonomy | CTaxonomy is used to describe hierarchical structure between tasks |
CCTDistributedStochasticNeighborEmbedding | Class CTDistributedStochasticNeighborEmbedding used to embed data using t-distributed stochastic neighbor embedding algorithm: http://jmlr.csail.mit.edu/papers/volume9/vandermaaten08a/vandermaaten08a.pdf |
CCTensorProductPairKernel | Computes the Tensor Product Pair Kernel (TPPK) |
CCThresholdRejectionStrategy | Threshold based rejection strategy |
CCTime | Class Time that implements a stopwatch based on either cpu time or wall clock time |
CCTokenizer | The class CTokenizer acts as a base class in order to implement tokenizers. Sub-classes must implement the methods has_next(), next_token_idx() and get_copy() |
CCTOPFeatures | The class TOPFeatures implements TOP kernel features obtained from two Hidden Markov models |
CCTraceSampler | Abstract template base class that provides an interface for sampling the trace of a linear operator using an abstract sample method |
CCTreeMachine | Class TreeMachine, a base class for tree-based multiclass classifiers. This class is derived from CBaseMulticlassMachine and stores the root node (of class type CTreeMachineNode) of the tree structure |
CCTreeMachineNode | The node of the tree structure forming a TreeMachine. The node contains a pointer to its parent and a vector of pointers to its children. A node of this class can have only one parent but any number of children. The node also contains data, which can be of any type and is specified via a template parameter |
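The node shape described in that entry amounts to a small templated structure. A hypothetical sketch (Data must be default-constructible here), not the CTreeMachineNode declaration:

```cpp
// Minimal shape of a tree-machine node: one parent, any number of
// children, and a template-typed data payload.
#include <memory>
#include <vector>

template <typename Data>
struct TreeNode {
    TreeNode* parent = nullptr;                       // single parent
    std::vector<std::unique_ptr<TreeNode>> children;  // arbitrary arity
    Data data;                                        // node payload

    TreeNode* add_child(Data d) {
        children.push_back(std::make_unique<TreeNode>());
        children.back()->parent = this;
        children.back()->data = std::move(d);
        return children.back().get();
    }
};
```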
CCTrie | Template class Trie implements a suffix trie, i.e. a tree in which all suffixes up to a certain length are stored |
CCTStudentKernel | Generalized T-Student kernel |
CCTwoSampleTest | Provides an interface for performing the classical two-sample test, i.e. given samples from two distributions \(p\) and \(q\), the null hypothesis is \(H_0: p=q\) and the alternative hypothesis is \(H_1: p\neq q\) |
CCTwoStateModel | Class CTwoStateModel for the internal two-state representation used in CHMSVMModel |
CCUAIFile | Class UAIFile used to read data from UAI files. See http://graphmod.ics.uci.edu/uai08/FileFormat for more details |
CCUWedge | Class UWedge |
CCUWedgeSep | Class UWedgeSep |
CCVarDTCInferenceMethod | The inference method class based on the Titsias' variational bound. For more details, see Titsias, Michalis K. "Variational learning of inducing variables in sparse Gaussian processes." International Conference on Artificial Intelligence and Statistics. 2009 |
CCVarianceKernelNormalizer | VarianceKernelNormalizer divides by the ``variance'' |
CCVariationalGaussianLikelihood | The variational Gaussian Likelihood base class. The variational distribution is Gaussian |
CCVariationalLikelihood | The Variational Likelihood base class |
CCVectorResult | Base class that stores the result of an independent job when the result is a vector |
CCVowpalWabbit | Class CVowpalWabbit is the implementation of the online learning algorithm used in Vowpal Wabbit |
CCVwAdaptiveLearner | VwAdaptiveLearner uses an adaptive subgradient technique to update weights |
CCVwCacheReader | Base class from which all cache readers for VW should be derived |
CCVwCacheWriter | CVwCacheWriter is the base class for all VW cache creating classes |
CCVwConditionalProbabilityTree | |
CCVwEnvironment | Class CVwEnvironment is the environment used by VW |
CCVwLearner | Base class for all VW learners |
CCVwNativeCacheReader | Class CVwNativeCacheReader reads from a cache exactly as that which has been produced by VW's default cache format |
CCVwNativeCacheWriter | Class CVwNativeCacheWriter writes a cache exactly as that which would be produced by VW's default cache format |
CCVwNonAdaptiveLearner | VwNonAdaptiveLearner uses a standard gradient descent weight update rule |
CCVwParser | CVwParser is the object which provides the functions to parse examples from buffered input |
CCVwRegressor | Regressor used by VW |
CCWaveKernel | Wave kernel |
CCWaveletKernel | Class WaveletKernel |
CCWDFeatures | Features that compute the Weighted Degree Kernel feature space explicitly |
CCWeightedCommWordStringKernel | The WeightedCommWordString kernel may be used to compute the weighted spectrum kernel (i.e. a spectrum kernel for 1 to K-mers, where each k-mer length is weighted by some coefficient \(\beta_k\)) from strings that have been mapped into unsigned 16bit integers |
CCWeightedDegreePositionStringKernel | The Weighted Degree Position String kernel (Weighted Degree kernel with shifts) |
CCWeightedDegreeRBFKernel | Weighted degree RBF kernel |
CCWeightedDegreeStringKernel | The Weighted Degree String kernel |
CCWeightedMajorityVote | Weighted Majority Vote implementation |
CCWRACCMeasure | Class WRACCMeasure used to measure the weighted relative accuracy of a 2-class classifier |
CCWrappedBasic | Simple wrapper class that allows storing any Shogun basic parameter (e.g. float64_t, int64_t, char) in a CSGObject, and therefore makes it serializable. Using a template argument that is not a Shogun parameter will cause a compile error when trying to register the passed value as a parameter in the constructors |
CCWrappedObjectArray | Specialization of CDynamicObjectArray that adds methods to append wrapped elements in order to make them serializable. Objects are wrapped through the classes CWrappedBasic, CWrappedSGVector, CWrappedSGMatrix |
CCWrappedSGMatrix | Simple wrapper class that allows storing any Shogun SGMatrix<T> in a CSGObject, and therefore makes it serializable. Using a template argument that is not a Shogun parameter will cause a compile error when trying to register the passed value as a parameter in the constructors |
CCWrappedSGVector | Simple wrapper class that allows storing any Shogun SGVector<T> in a CSGObject, and therefore makes it serializable. Using a template argument that is not a Shogun parameter will cause a compile error when trying to register the passed value as a parameter in the constructors |
CCZeroMean | The zero mean function class |
CCZeroMeanCenterKernelNormalizer | ZeroMeanCenterKernelNormalizer centers the kernel in feature space |
CDescendCorrection | This is a base class for descent-based correction methods |
CDescendUpdater | This is a base class for descent updates |
CDescendUpdaterWithCorrection | This is a base class for descent updates with descent-based correction |
CDynArray | Template Dynamic array class that creates an array that can be used like a list or an array |
CEigenSparseUtil | This class contains some utilities for Eigen3 Sparse Matrix integration with shogun. Currently it provides a method for converting SGSparseMatrix to Eigen3 SparseMatrix |
CElasticNetPenalty | This is the base class for ElasticNet penalty/regularization within the FirstOrderMinimizer framework |
CFirstOrderBoundConstraintsCostFunction | The first order cost function base class with bound constraints |
CFirstOrderCostFunction | The first order cost function base class |
CFirstOrderMinimizer | The first order minimizer base class |
CFirstOrderSAGCostFunction | The stochastic cost function base class for stochastic average gradient minimizers |
CFirstOrderStochasticCostFunction | The first order stochastic cost function base class |
CFirstOrderStochasticMinimizer | The base class for stochastic first-order gradient-based minimizers |
CGCEdge | Graph cuts edge |
CGCNode | Graph cuts node |
CGCNodePtr | Graph cuts node pointer |
CGradientDescendUpdater | The class implements the gradient descent method |
Cid3TreeNodeData | Structure to store data of a node of an ID3 tree. This can be used as a template type in the TreeMachineNode class. E.g. the ID3 algorithm uses nodes of type CTreeMachineNode<id3TreeNodeData> |
CInverseScalingLearningRate | This implements the inverse scaling learning rate |
CIterativeSolverIterator | Template class that is used as an iterator for an iterative linear solver. In the iteration of the solving phase, each solver initializes the iteration with a maximum iteration limit and relative/absolute tolerances. It then calls begin with the residual vector and continues until end returns true, i.e. either it has converged or the iteration count reached the maximum limit |
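The begin/end protocol described above can be summarized in a few lines. A sketch under the stated assumptions (a maximum iteration limit plus relative/absolute tolerances on the residual norm); all names are illustrative, not the CIterativeSolverIterator interface:

```cpp
// Iteration bookkeeping for an iterative linear solver: begin() records
// the initial residual norm, end() reports convergence or iteration cap.
#include <cmath>
#include <cstddef>
#include <vector>

class SolverIterator {
public:
    SolverIterator(std::size_t max_iters, double rel_tol, double abs_tol)
        : m_max(max_iters), m_rel(rel_tol), m_abs(abs_tol) {}

    void begin(const std::vector<double>& residual) {
        m_iter = 0;
        m_initial_norm = norm(residual);
    }
    bool end(const std::vector<double>& residual) {
        double r = norm(residual);
        // converged (absolute or relative), or ran out of iterations
        return r <= m_abs || r <= m_rel * m_initial_norm || ++m_iter >= m_max;
    }

private:
    static double norm(const std::vector<double>& v) {
        double s = 0.0;
        for (double c : v) s += c * c;
        return std::sqrt(s);
    }
    std::size_t m_max, m_iter = 0;
    double m_rel, m_abs, m_initial_norm = 0.0;
};
```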
CK_THREAD_PARAM | |
CL1Penalty | This is the base class for L1 penalty/regularization within the FirstOrderMinimizer framework |
CL1PenaltyForTG | This is the base class for L1 penalty/regularization within the FirstOrderMinimizer framework |
CL2Penalty | The class implements L2 penalty/regularization within the FirstOrderMinimizer framework |
Clbfgs_parameter_t | |
CLearningRate | The base class about learning rate for descent-based minimizers |
CMappedSparseMatrix | Mapped sparse matrix for representing graph relations of tasks |
CMappingFunction | The base mapping function for mirror descent |
CMaybe | Holder that represents an object that can be either present or absent. Quite similar to std::optional (standardized in C++17), but provides a way to pass the reason of absence (e.g. "incorrect parameter") |
CMinimizer | The minimizer base class |
CMixModelData | This structure is used for storing data required for using the generic Expectation Maximization (EM) implemented by the template class CEMBase for mixture models like the Gaussian mixture model, multinomial mixture model, etc. The EM specialized for mixture models is implemented by the class CEMMixtureModel, which uses this MixModelData structure |
CMKLMulticlassGLPK | MKLMulticlassGLPK is a helper class for MKLMulticlass |
CMKLMulticlassGradient | MKLMulticlassGradient is a helper class for MKLMulticlass |
CMKLMulticlassOptimizationBase | MKLMulticlassOptimizationBase is a helper class for MKLMulticlass |
CModel | Class Model |
CMomentumCorrection | This is a base class for momentum correction methods |
CMunkres | Munkres |
CNbodyTreeNodeData | Structure to store data of a node of an N-body tree. This can be used as a template type in the TreeMachineNode class. The N-body tree building algorithm uses nodes of type CBinaryTreeMachineNode<NbodyTreeNodeData> |
CNesterovMomentumCorrection | This implements Nesterov's Accelerated Gradient (NAG) correction |
CNothing | |
CParallel | Class Parallel provides helper functions for multithreading |
CParameter | Parameter class |
CPenalty | The base class for penalty/regularization used in minimization |
CPNormMappingFunction | This implements the P-norm mapping/projection function |
CPointerValueAnyPolicy | This is one concrete implementation of policy that uses void pointers to store values |
CProximalPenalty | The base class for sparse penalty/regularization used in minimization |
CRefCount | |
CRelaxedTreeNodeData | |
CRelaxedTreeUtil | |
CRmsPropUpdater | The class implements the RmsProp method |
CSerializableAsciiReader00 | Serializable ASCII reader |
CSGDMinimizer | The class implements the stochastic gradient descent (SGD) minimizer |
CSGIO | Class SGIO, used to do input/output operations throughout shogun |
CSGMatrix | Shogun matrix |
CSGMatrixList | Shogun matrix list |
CSGNDArray | Shogun n-dimensional array |
CSGReferencedData | Shogun reference count managed data |
CSGSparseMatrix | Template class SGSparseMatrix |
CSGSparseVector | Template class SGSparseVector. The assumption is that the stored SGSparseVectorEntry<T>* vector is ordered by SGSparseVectorEntry.feat_index in non-decreasing order. This has to be assured by the user of the class |
CSGSparseVectorEntry | Template class SGSparseVectorEntry |
CSGString | Shogun string |
CSGStringList | Template class SGStringList |
CSGVector | Shogun vector |
CShareBoostOptimizer | |
CShogunException | Class ShogunException defines an exception which is thrown whenever an error occurs inside shogun |
CSMDMinimizer | The class implements the stochastic mirror descent (SMD) minimizer |
CSMIDASMinimizer | The class implements the Stochastic MIrror Descent mAde Sparse (SMIDAS) minimizer |
CSparsePenalty | The base class for sparse penalty/regularization used in minimization |
CSparsityStructure | Struct that represents the sparsity structure of the Sparse Matrix in CRS. The implementation has been adapted from the Krylstat (https://github.com/Froskekongen/KRYLSTAT) library (c) Erlend Aune <erlenda@math.ntnu.no> under GPL2+ |
CSSKFeatures | SSKFeatures |
CStandardMomentumCorrection | This implements the plain momentum correction |
Csubstring | Struct Substring, specified by start position and end position |
CSVRGMinimizer | The class implements the stochastic variance reduced gradient (SVRG) minimizer |
CTag | Acts as an identifier for a shogun object. It contains type information and the name of the object. Generally used by CSGObject::set() and CSGObject::get() to address parameters of a class |
Ctag_callback_data | |
Ctag_iteration_data | |
CTMultipleCPinfo | |
CTParameter | Parameter struct |
CTSGDataType | Datatypes that shogun supports |
CUnique | |
Cv_array | Class v_array taken directly from JL's implementation |
CVersion | Class Version provides version information |
CVwConditionalProbabilityTreeNodeData | |
CVwExample | Example class for VW |
CVwFeature | One feature in VW |
CVwLabel | Class VwLabel holds a label object used by VW |
►Nstd | |
Chash< shogun::BaseTag > | |
Cblock_tree_node_t | |
CCSyntaxHighLight | Syntax highlight |
CCTron | Class Tron |
Cd_node | |
CD_THREAD_PARAM | |
Cds_node | |
CEntryComparator | |
Cnode | |
CShogunFeatureVectorCallback | |
CShogunLoggerImplementation | |
Ctask_tree_node_t | |
Ctree_node_t |