block_tree_node_t | |
bmrm_ll | |
bmrm_return_value_T | |
CAccuracyMeasure | Class AccuracyMeasure used to measure accuracy of 2-class classifier |
CAlphabet | The class Alphabet implements an alphabet and alphabet utility functions |
CANOVAKernel | ANOVA (ANalysis Of VAriances) kernel |
CAsciiFile | An ASCII file access class
CAttenuatedEuclideanDistance | Class AttenuatedEuclideanDistance |
CAttributeFeatures | Implements attributed features, that is in the simplest case a number of (attribute, value) pairs |
CAUCKernel | The AUC kernel can be used to maximize the area under the receiver operator characteristic curve (AUC) instead of margin in SVM training |
CAveragedPerceptron | Class AveragedPerceptron implements the standard linear (online) perceptron algorithm. The averaged perceptron is a simple extension of the perceptron
CAvgDiagKernelNormalizer | Normalize the kernel by either a constant or the average value of the diagonal elements (depending on argument c of the constructor) |
CBalancedConditionalProbabilityTree | |
CBALMeasure | Class BALMeasure used to measure balanced error of 2-class classifier |
CBaseMulticlassMachine | |
CBesselKernel | Class Bessel kernel |
CBinaryClassEvaluation | The class TwoClassEvaluation, a base class used to evaluate binary classification labels |
CBinaryFile | A Binary file access class |
CBinaryLabels | Binary Labels for binary classification |
CBinaryStream< T > | Memory mapped emulation via binary streams (files) |
CBinnedDotFeatures | The class BinnedDotFeatures contains a 0-1 conversion of features into bins |
CBitString | String class embedding a string in a compact bit representation |
CBrayCurtisDistance | Class Bray-Curtis distance |
CCache< T > | Template class Cache implements a simple cache |
CCanberraMetric | Class CanberraMetric |
CCanberraWordDistance | Class CanberraWordDistance |
CCauchyKernel | Cauchy kernel |
CChebyshewMetric | Class ChebyshewMetric |
CChi2Kernel | The Chi2 kernel operating on realvalued vectors computes the chi-squared distance between sets of histograms |
CChiSquareDistance | Class ChiSquareDistance |
CCircularKernel | Circular kernel |
CClusteringAccuracy | Clustering accuracy |
CClusteringEvaluation | The base class used to evaluate clustering |
CClusteringMutualInformation | Clustering (normalized) mutual information |
CCombinedDotFeatures | Features that allow stacking of a number of DotFeatures |
CCombinedFeatures | The class CombinedFeatures is used to combine a number of feature objects into a single CombinedFeatures object
CCombinedKernel | The Combined kernel is used to combine a number of kernels into a single CombinedKernel object by linear combination |
CCommUlongStringKernel | The CommUlongString kernel may be used to compute the spectrum kernel from strings that have been mapped into unsigned 64bit integers |
CCommWordStringKernel | The CommWordString kernel may be used to compute the spectrum kernel from strings that have been mapped into unsigned 16bit integers |
CCompressor | Compression library for compressing and decompressing buffers using one of the standard compression algorithms, LZO, GZIP, BZIP2 or LZMA |
CConditionalProbabilityTree | |
CConjugateIndex | Conjugate index classifier. Described in: |
CConstKernel | The Constant Kernel returns a constant for all elements |
CContingencyTableEvaluation | The class ContingencyTableEvaluation a base class used to evaluate 2-class classification with TP, FP, TN, FN rates |
CConverter | Class Converter used to convert data |
CCosineDistance | Class CosineDistance |
CCplex | Class CCplex to encapsulate access to the commercial CPLEX general-purpose optimizer
CCPLEXSVM | CplexSVM, an SVM solver implementation based on CPLEX (unfinished)
CCrossCorrelationMeasure | Class CrossCorrelationMeasure used to measure cross correlation coefficient of 2-class classifier |
CCrossValidation | Base class for cross-validation evaluation. Given a learning machine, a splitting strategy, an evaluation criterion, features and corresponding labels, this provides an interface for cross-validation. Results may be retrieved using the evaluate method. A number of repetitions may be specified for obtaining more accurate results. The arithmetic mean of different runs is returned along with confidence intervals, if a p-value is specified. Default number of runs is one; confidence interval computation is disabled
CCrossValidationMKLStorage | Class for storing MKL weights in every fold of cross-validation |
CCrossValidationMulticlassStorage | Class for storing multiclass evaluation information in every fold of cross-validation |
CCrossValidationOutput | Class for managing individual folds in cross-validation |
CCrossValidationPrintOutput | Class for outputting cross-validation intermediate results to the standard output. Simply prints all messages it gets |
CCrossValidationResult | Type to encapsulate the results of an evaluation run. May contain confidence interval (if conf_int_alpha!=0). m_conf_int_alpha is the probability for an error, i.e. the value does not lie in the confidence interval |
CCrossValidationSplitting | Implementation of normal cross-validation based on CSplittingStrategy. Produces subset index sets of equal size (differing by at most one)
CCustomDistance | The Custom Distance allows for custom user provided distance matrices |
CCustomKernel | The Custom Kernel allows for custom user provided kernel matrices |
CData | Dummy data holder |
CDataGenerator | Class that is able to generate various data samples, which may be used for examples in SHOGUN |
CDecompressString< ST > | Preprocessor that decompresses compressed strings |
CDenseDistance< ST > | Template class DenseDistance |
CDenseFeatures< ST > | The class DenseFeatures implements dense feature matrices |
CDenseLabels | Dense integer or floating point labels |
CDensePreprocessor< ST > | Template class DensePreprocessor, base class for preprocessors (cf. CPreprocessor) that apply to CDenseFeatures (i.e. rectangular dense matrices) |
CDenseSubsetFeatures< ST > | |
CDiagKernel | The Diagonal Kernel returns a constant for the diagonal and zero otherwise |
CDiceKernelNormalizer | DiceKernelNormalizer performs kernel normalization inspired by the Dice coefficient (see http://en.wikipedia.org/wiki/Dice's_coefficient) |
CDifferentiableFunction | DifferentiableFunction |
CDiffusionMaps | Class DiffusionMaps (part of the Efficient Dimensionality Reduction Toolkit) used to preprocess given data using Diffusion Maps dimensionality reduction technique as described in |
CDimensionReductionPreprocessor | Class DimensionReductionPreprocessor, a base class for preprocessors used to lower the dimensionality of given simple features (dense matrices) |
CDistance | Class Distance, a base class for all the distances used in the Shogun toolbox |
CDistanceKernel | The Distance kernel takes a distance as input |
CDistanceMachine | A generic DistanceMachine interface |
CDistantSegmentsKernel | The distant segments kernel is a string kernel, which counts the number of substrings, so-called segments, at a certain distance from each other |
CDistribution | Base class Distribution from which all methods implementing a distribution are derived |
CDixonQTestRejectionStrategy | Simplified version of Dixon's Q test outlier based rejection strategy. Statistic values are taken from http://www.vias.org/tmdatanaleng/cc_outlier_tests_dixon.html |
CDomainAdaptationMulticlassLibLinear | Domain adaptation multiclass LibLinear wrapper. The source domain is assumed to be
CDomainAdaptationSVM | Class DomainAdaptationSVM |
CDomainAdaptationSVMLinear | Class DomainAdaptationSVMLinear |
CDotFeatures | Features that support dot products among other operations |
CDotKernel | Template class DotKernel is the base class for kernels working on DotFeatures |
CDualLibQPBMSOSVM | Class DualLibQPBMSOSVM that uses Bundle Methods for Regularized Risk Minimization algorithms for structured output (SO) problems [1] presented in [2] |
CDummyFeatures | The class DummyFeatures implements features that only know the number of feature objects (but don't actually contain any) |
CDynamicArray< T > | Template Dynamic array class that creates an array that can be used like a list or an array |
CDynamicObjectArray | Dynamic array class for CSGObject pointers that creates an array that can be used like a list or an array |
CDynInt< T, sz > | Integer type of dynamic size |
CDynProg | Dynamic Programming Class |
CECOCAEDDecoder | |
CECOCDecoder | |
CECOCDiscriminantEncoder | |
CECOCEDDecoder | |
CECOCEncoder | ECOCEncoder produce an ECOC codebook |
CECOCForestEncoder | |
CECOCHDDecoder | |
CECOCIHDDecoder | |
CECOCLLBDecoder | |
CECOCOVOEncoder | |
CECOCOVREncoder | |
CECOCRandomDenseEncoder | |
CECOCRandomSparseEncoder | |
CECOCSimpleDecoder | |
CECOCStrategy | |
CECOCUtil | |
CEmbeddingConverter | Class EmbeddingConverter (part of the Efficient Dimensionality Reduction Toolkit) used to construct embeddings of features, e.g. construct dense numeric embedding of string features |
CErrorRateMeasure | Class ErrorRateMeasure used to measure error rate of 2-class classifier |
CEuclideanDistance | Class EuclideanDistance |
CEvaluation | Class Evaluation, a base class for other classes used to evaluate labels, e.g. accuracy of classification or mean squared error of regression |
CEvaluationResult | EvaluationResult is the abstract class that contains the result generated by the MachineEvaluation class |
CExactInferenceMethod | The Gaussian Exact Form Inference Method |
CExplicitSpecFeatures | Features that compute the Spectrum Kernel feature space explicitly |
CExponentialKernel | The Exponential Kernel, closely related to the Gaussian Kernel computed on CDotFeatures |
CF1Measure | Class F1Measure used to measure F1 score of 2-class classifier |
CFeatureBlockLogisticRegression | Class FeatureBlockLogisticRegression, a linear binary logistic loss classifier for problems with complex feature relations. Currently two feature relations are supported - feature group (done via CIndexBlockGroup) and feature tree (done via CIndexTree). Handling of feature relations is done via L1/Lq (for groups) and L1/L2 (for trees) regularization |
CFeatures | The class Features is the base class of all feature objects |
CFile | A File access base class |
CFirstElementKernelNormalizer | Normalize the kernel by a constant obtained from the first element of the kernel matrix, i.e. |
CFITCInferenceMethod | The Fully Independent Training Conditional (FITC) inference method
CFixedDegreeStringKernel | The FixedDegree String kernel takes as input two strings of same size and counts the number of matches of length d |
CFKFeatures | The class FKFeatures implements Fisher kernel features obtained from two Hidden Markov models
CGaussian | Gaussian distribution interface |
CGaussianARDKernel | Gaussian Kernel with Automatic Relevance Detection |
CGaussianKernel | The well known Gaussian kernel (swiss army knife for SVMs) computed on CDotFeatures |
CGaussianLikelihood | This is the class that models a Gaussian Likelihood |
CGaussianMatchStringKernel | The class GaussianMatchStringKernel computes a variant of the Gaussian kernel on strings of same length |
CGaussianNaiveBayes | Class GaussianNaiveBayes, a Gaussian Naive Bayes classifier |
CGaussianProcessRegression | Class GaussianProcessRegression implements Gaussian Process Regression. Instead of a distribution over weights, the GP specifies a distribution over functions
CGaussianShiftKernel | An experimental kernel inspired by the WeightedDegreePositionStringKernel and the Gaussian kernel |
CGaussianShortRealKernel | The well known Gaussian kernel (swiss army knife for SVMs) on dense short-real valued features |
CGCArray< T > | Template class GCArray implements a garbage collecting static array |
CGeodesicMetric | Class GeodesicMetric |
CGHMM | Class GHMM - this class is non-functional and was meant to implement a Generalized Hidden Markov Model (aka Semi Hidden Markov Model)
CGMM | Gaussian Mixture Model interface |
CGMNPLib | Class GMNPLib Library of solvers for Generalized Minimal Norm Problem (GMNP) |
CGMNPSVM | Class GMNPSVM implements a one vs. rest MultiClass SVM |
CGNPPLib | Class GNPPLib, a Library of solvers for Generalized Nearest Point Problem (GNPP) |
CGNPPSVM | Class GNPPSVM |
CGPBTSVM | Class GPBTSVM |
CGradientCriterion | CGradientCriterion Simple class which specifies the direction of gradient search. Does not provide any label evaluation measure, however |
CGradientEvaluation | GradientEvaluation evaluates a machine using its associated differentiable function for the function value and its gradient with respect to parameters |
CGradientModelSelection | Model selection class which searches for the best model by a gradient-search
CGradientResult | GradientResult is a container class that returns results from GradientEvaluation. It contains the function value as well as its gradient |
CGridSearchModelSelection | Model selection class which searches for the best model by a grid-search. See CModelSelection for details
CGUIClassifier | UI classifier |
CGUIConverter | UI converter |
CGUIDistance | UI distance |
CGUIFeatures | UI features |
CGUIHMM | UI HMM (Hidden Markov Model) |
CGUIKernel | UI kernel |
CGUILabels | UI labels |
CGUIMath | UI math |
CGUIPluginEstimate | UI estimate |
CGUIPreprocessor | UI preprocessor |
CGUIStructure | UI structure |
CGUITime | UI time |
CHammingWordDistance | Class HammingWordDistance |
CHash | Collection of Hashing Functions |
CHashedWDFeatures | Features that compute the Weighted Degree Kernel feature space explicitly
CHashedWDFeaturesTransposed | Features that compute the Weighted Degree Kernel feature space explicitly
CHessianLocallyLinearEmbedding | Class HessianLocallyLinearEmbedding (part of the Efficient Dimensionality Reduction Toolkit) used to preprocess data using Hessian Locally Linear Embedding algorithm as described in |
CHierarchical | Agglomerative hierarchical single linkage clustering |
CHingeLoss | CHingeLoss implements the hinge loss function |
CHistogram | Class Histogram computes a histogram over all 16bit unsigned integers in the features |
CHistogramIntersectionKernel | The HistogramIntersection kernel operating on realvalued vectors computes the histogram intersection distance between sets of histograms. Note: the current implementation assumes positive values for the histograms, and input vectors should sum to 1 |
CHistogramWordStringKernel | The HistogramWordString kernel computes the TOP kernel on inhomogeneous Markov chains
CHMM | Hidden Markov Model |
CHMSVMLabels | Class CHMSVMLabels to be used in the application of Structured Output (SO) learning to Hidden Markov Support Vector Machines (HM-SVM). Each of the labels is represented by a sequence of integers. Each label is of type CSequence and all of them are stored in a CDynamicObjectArray |
CHMSVMModel | Class CHMSVMModel that represents the application specific model and contains the application dependent logic to solve Hidden Markov Support Vector Machines (HM-SVM) type of problems within a generic SO framework |
CHomogeneousKernelMap | Preprocessor HomogeneousKernelMap performs homogeneous kernel maps as described in |
CHSIC | This class implements the Hilbert-Schmidt Independence Criterion based independence test as described in [1]
CIdentityKernelNormalizer | Identity Kernel Normalization, i.e. no normalization is applied |
CImplicitWeightedSpecFeatures | Features that compute the Weighted Spectrum Kernel feature space explicitly |
CIndexBlock | Class IndexBlock used to represent contiguous indices of one group (e.g. block of related features) |
CIndexBlockGroup | Class IndexBlockGroup used to represent group-based feature relation |
CIndexBlockRelation | Class IndexBlockRelation |
CIndexBlockTree | Class IndexBlockTree used to represent tree guided feature relation |
CIndirectObject< T, P > | Array class that accesses elements indirectly via an index array |
CInferenceMethod | The Inference Method base class |
CInputParser< T > | Class CInputParser is a templated class used to maintain the reading/parsing/providing of examples |
CIntronList | Class IntronList |
CInverseMultiQuadricKernel | InverseMultiQuadricKernel |
CIOBuffer | An I/O buffer class |
CIsomap | Class Isomap (part of the Efficient Dimensionality Reduction Toolkit) used to embed data using the Isomap algorithm as described in
CJensenMetric | Class JensenMetric |
CJensenShannonKernel | The Jensen-Shannon kernel operating on real-valued vectors computes the Jensen-Shannon distance between the features. Often used in computer vision |
CJLCoverTreePoint | Class Point to use with John Langford's CoverTree. This class must have some associated functions defined (distance, parse_points and print, see below) so it can be used with the CoverTree implementation
CKernel | The Kernel base class |
CKernelDistance | The Kernel distance takes a distance as input |
CKernelIndependenceTestStatistic | Independence test base class. Provides an interface for performing an independence test. Given samples from the joint distribution p(x,y), does the joint distribution factorize as p(x)p(y)? The null hypothesis says yes, i.e. independence; the alternative hypothesis says no
CKernelLocallyLinearEmbedding | Class KernelLocallyLinearEmbedding (part of the Efficient Dimensionality Reduction Toolkit) used to construct embeddings of data using kernel formulation of Locally Linear Embedding algorithm as described in |
CKernelLocalTangentSpaceAlignment | Class KernelLocalTangentSpaceAlignment (part of the Efficient Dimensionality Reduction Toolkit) used to embed data using the kernel extension of the Local Tangent Space Alignment (LTSA) algorithm
CKernelMachine | A generic KernelMachine interface |
CKernelMeanMatching | Kernel Mean Matching |
CKernelMulticlassMachine | Generic kernel multiclass |
CKernelNormalizer | The class Kernel Normalizer defines a function to post-process kernel values |
CKernelPCA | Preprocessor KernelPCA performs kernel principal component analysis |
CKernelRidgeRegression | Class KernelRidgeRegression implements Kernel Ridge Regression - a regularized least square method for classification and regression |
CKernelStructuredOutputMachine | |
CKernelTwoSampleTestStatistic | Two-sample test base class. Provides an interface for performing a two-sample test, i.e. given samples from two distributions p and q, the null hypothesis is H0: p = q, the alternative hypothesis is H1: p != q
CKMeans | KMeans clustering, partitions the data into k (a-priori specified) clusters |
CKNN | Class KNN, an implementation of the standard k-nearest neighbor classifier
CLabels | The class Labels models labels, i.e. class assignments of objects |
CLaplacianEigenmaps | Class LaplacianEigenmaps (part of the Efficient Dimensionality Reduction Toolkit) used to construct embeddings of data using Laplacian Eigenmaps algorithm as described in: |
CLaplacianInferenceMethod | The Laplace Approximation Inference Method |
CLaRank | LaRank multiclass SVM machine |
CLatentFeatures | Latent Features class. The class is for representing features for latent learning, e.g. LatentSVM. It's basically a very generic way of storing features of any (user-defined) form based on CData
CLatentLabels | Abstract class for latent labels. As latent labels always depend on the given application, this class only defines the API that the user has to implement for latent labels
CLatentModel | Abstract class CLatentModel It represents the application specific model and contains most of the application dependent logic to solve latent variable based problems |
CLatentSOSVM | Class Latent Structured Output SVM, a structured output based machine for classification problems with latent variables
CLatentSVM | LatentSVM class Latent SVM implementation based on [1]. For optimization this implementation uses SVMOcas |
CLBPPyrDotFeatures | Implements DotFeatures for Local Binary Pattern (LBP) pyramid features
CLDA | Class LDA implements regularized Linear Discriminant Analysis |
CLeastAngleRegression | Class for Least Angle Regression, can be used to solve LASSO |
CLeastSquaresRegression | Class to perform Least Squares Regression |
CLibLinear | Class to implement LibLinear |
CLibLinearMTL | Class to implement LibLinear with multitask learning (MTL)
CLibLinearRegression | LibLinear for regression |
CLibSVM | LibSVM |
CLibSVMOneClass | Class LibSVMOneClass |
CLibSVR | Class LibSVR, performs support vector regression using LibSVM |
CLikelihoodModel | The Likelihood Model base class |
CLinearARDKernel | Linear Kernel with Automatic Relevance Detection |
CLinearHMM | The class LinearHMM is for learning Higher Order Markov chains |
CLinearKernel | Computes the standard linear kernel on CDotFeatures |
CLinearLatentMachine | Abstract implementation of a linear machine with latent variable. This is the base implementation of all linear machines with latent variables
CLinearLocalTangentSpaceAlignment | Class LinearLocalTangentSpaceAlignment (part of the Efficient Dimensionality Reduction Toolkit) converter used to construct embeddings as described in: |
CLinearMachine | Class LinearMachine is a generic interface for all kinds of linear machines like classifiers |
CLinearMulticlassMachine | Generic linear multiclass machine |
CLinearRidgeRegression | Class LinearRidgeRegression implements Ridge Regression - a regularized least square method for classification and regression |
CLinearStringKernel | Computes the standard linear kernel on dense char valued features |
CLinearStructuredOutputMachine | |
CLinearTimeMMD | This class implements the linear time Maximum Mean Discrepancy (MMD) statistic as described in [1]. This statistic is in particular suitable for streaming data. Therefore, only streaming features may be passed. To process other feature types, construct streaming features from these (see constructor documentation). A block size has to be specified that determines how many examples are processed at once. This should be set as large as available memory allows to ensure faster computations
CList | Class List implements a doubly connected list for low-level-objects |
CListElement | Class ListElement, defines what an element of the list looks like
CLocalAlignmentStringKernel | The LocalAlignmentString kernel compares two sequences through all possible local alignments between the two sequences |
CLocalityImprovedStringKernel | The LocalityImprovedString kernel is inspired by the polynomial kernel. By comparing neighboring characters it puts emphasis on local features
CLocalityPreservingProjections | Class LocalityPreservingProjections (part of the Efficient Dimensionality Reduction Toolkit) used to compute embeddings of data using Locality Preserving Projections method as described in |
CLocallyLinearEmbedding | Class LocallyLinearEmbedding (part of the Efficient Dimensionality Reduction Toolkit) used to embed data using Locally Linear Embedding algorithm described in |
CLocalTangentSpaceAlignment | Class LocalTangentSpaceAlignment (part of the Efficient Dimensionality Reduction Toolkit) used to embed data using Local Tangent Space Alignment (LTSA) algorithm as described in: |
CLogKernel | Log kernel |
CLogLoss | CLogLoss implements the logarithmic loss function |
CLogLossMargin | Class CLogLossMargin implements a margin-based log-likelihood loss function |
CLogPlusOne | Preprocessor LogPlusOne does what the name says, it adds one to a dense real valued vector and takes the logarithm of each component of it |
CLoss | Class which collects generic mathematical functions |
CLossFunction | Class CLossFunction is the base class of all loss functions |
CLPBoost | Class LPBoost trains a linear classifier called Linear Programming Machine, i.e. an SVM using an L1-norm regularizer
CLPM | Class LPM trains a linear classifier called Linear Programming Machine, i.e. an SVM using an L1-norm regularizer
CMachine | A generic learning machine interface |
CMachineEvaluation | Machine Evaluation is an abstract class that evaluates a machine according to some criterion |
CMahalanobisDistance | Class MahalanobisDistance |
CManhattanMetric | Class ManhattanMetric |
CManhattanWordDistance | Class ManhattanWordDistance |
CMap< K, T > | Class CMap, a map based on a hash table (see http://en.wikipedia.org/wiki/Hash_table)
CMatchWordStringKernel | The class MatchWordStringKernel computes a variant of the polynomial kernel on strings of same length converted to a word alphabet |
CMath | Class which collects generic mathematical functions |
CMatrixFeatures< ST > | Class CMatrixFeatures used to represent data whose feature vectors are better represented with matrices rather than with unidimensional arrays or vectors. Optionally, it can be required that all feature vectors have the same number of features. Set the attribute num_features different from zero to use this restriction. Allow feature vectors with different numbers of features by setting num_features equal to zero (default behaviour)
CMeanAbsoluteError | Class MeanAbsoluteError used to compute an error of regression model |
CMeanFunction | Mean Function base class |
CMeanShiftDataGenerator< T > | Class to generate dense features data via the streaming features interface. The core is a pair of methods to a) set the data model and parameters, and b) generate a data vector using these model parameters. Both methods are automatically called when calling get_next_example(). This allows treating generated data as a stream via the standard streaming features interface
CMeanSquaredError | Class MeanSquaredError used to compute an error of regression model |
CMeanSquaredLogError | Class CMeanSquaredLogError used to compute an error of regression model |
CMemoryMappedFile< T > | Memory mapped file |
CMinkowskiMetric | Class MinkowskiMetric |
CMKL | Multiple Kernel Learning |
CMKLClassification | Multiple Kernel Learning for two-class-classification |
CMKLMulticlass | MKLMulticlass is a class for L1-norm multiclass MKL |
CMKLOneClass | Multiple Kernel Learning for one-class-classification |
CMKLRegression | Multiple Kernel Learning for regression |
CModelSelection | Abstract base class for model selection. Takes a parameter tree which specifies parameters for model selection, and a cross-validation instance and searches for the best combination of parameters in the abstract method select_model(), which has to be implemented in concrete sub-classes |
CModelSelectionParameters | Class to select parameters and their ranges for model selection. The structure is organized as a tree with different kinds of nodes, depending on the values of its member variables of name and CSGObject |
CMPDSVM | Class MPDSVM |
CMulticlassAccuracy | The class MulticlassAccuracy used to compute accuracy of multiclass classification |
CMulticlassLabels | Multiclass Labels for multi-class classification |
CMulticlassLibLinear | Multiclass LibLinear wrapper. Uses the Crammer-Singer formulation and the gradient descent optimization algorithm implemented in the LibLinear library. Regularized bias support is added by stacking a bias 'feature' onto the hyperplanes' normal vectors
CMulticlassLibSVM | Class MulticlassLibSVM. Does one-vs-one classification
CMulticlassLogisticRegression | Multiclass logistic regression |
CMulticlassMachine | Experimental abstract generic multiclass machine class |
CMulticlassModel | Class CMulticlassModel that represents the application specific model and contains the application dependent logic to solve multiclass classification within a generic SO framework |
CMulticlassMultipleOutputLabels | Multiclass Labels for multi-class classification with multiple labels |
CMulticlassOCAS | Multiclass OCAS wrapper |
CMulticlassOneVsOneStrategy | Multiclass one-vs-one strategy used to train generic multiclass machines for K-class problems by building a voting-based ensemble of K*(K-1)/2 binary classifiers
CMulticlassOneVsRestStrategy | Multiclass one-vs-rest strategy used to train generic multiclass machines for K-class problems by building an ensemble of K binary classifiers
CMulticlassOVREvaluation | The class MulticlassOVREvaluation used to compute evaluation parameters of multiclass classification via binary OvR decomposition and given binary evaluation technique |
CMulticlassSOLabels | Class CMulticlassSOLabels to be used in the application of Structured Output (SO) learning to multiclass classification. Each of the labels is represented by a real number and it is required that the values of the labels are in the set {0, 1, ..., num_classes-1}. Each label is of type CRealNumber and all of them are stored in a CDynamicObjectArray |
CMulticlassStrategy | Class MulticlassStrategy used to construct generic multiclass classifiers with ensembles of binary classifiers |
CMulticlassSVM | Class MultiClassSVM |
CMulticlassTreeGuidedLogisticRegression | Multiclass tree guided logistic regression |
CMultidimensionalScaling | Class MultidimensionalScaling (part of the Efficient Dimensionality Reduction Toolkit) used to perform multidimensional scaling (capable of landmark approximation if requested)
CMultiquadricKernel | MultiquadricKernel |
CMultitaskClusteredLogisticRegression | Class MultitaskClusteredLogisticRegression, a classifier for multitask problems. Supports only task group relations. Based on a solver ported from the MALSAR library. Assumes tasks in a group are related with a clustered structure
CMultitaskCompositeMachine | Class MultitaskCompositeMachine used to solve multitask binary classification problems with separate training of given binary classifier on each task |
CMultitaskKernelMaskNormalizer | The MultitaskKernel allows Multitask Learning via a modified kernel function |
CMultitaskKernelMaskPairNormalizer | The MultitaskKernel allows Multitask Learning via a modified kernel function |
CMultitaskKernelMklNormalizer | Base-class for parameterized Kernel Normalizers |
CMultitaskKernelNormalizer | The MultitaskKernel allows Multitask Learning via a modified kernel function |
CMultitaskKernelPlifNormalizer | The MultitaskKernel allows learning a piece-wise linear function (PLIF) via MKL |
CMultitaskKernelTreeNormalizer | The MultitaskKernel allows Multitask Learning via a modified kernel function based on taxonomy |
CMultitaskL12LogisticRegression | Class MultitaskL12LogisticRegression, a classifier for multitask problems. Supports only task group relations. Based on solver ported from the MALSAR library |
CMultitaskLeastSquaresRegression | Class Multitask Least Squares Regression, a machine to solve regression problems with a few tasks related via group or tree. Based on L1/Lq regression for groups and L1/L2 for trees |
CMultitaskLinearMachine | Class MultitaskLinearMachine, a base class for linear multitask classifiers |
CMultitaskLogisticRegression | Class Multitask Logistic Regression used to solve classification problems with a few tasks related via group or tree. Based on L1/Lq regression for groups and L1/L2 for trees |
CMultitaskROCEvaluation | Class MultitaskROCEvaluation used to evaluate the ROC (Receiver Operating Characteristic) and the area under the ROC curve (auROC) of each task separately
CMultitaskTraceLogisticRegression | Class MultitaskTraceLogisticRegression, a classifier for multitask problems. Supports only task group relations. Based on solver ported from the MALSAR library |
CNativeMulticlassMachine | Experimental abstract native multiclass machine class |
CNearestCentroid | Class NearestCentroid, an implementation of the Nearest Shrunken Centroid classifier
CNeighborhoodPreservingEmbedding | NeighborhoodPreservingEmbedding (part of the Efficient Dimensionality Reduction Toolkit) converter used to construct embeddings as described in: |
CNewtonSVM | NewtonSVM. In this implementation, a linear SVM is trained in its primal form using Newton-like iterations. The implementation is ported from Olivier Chapelle's fast Newton-based SVM solver, which can be found at http://mloss.org/software/view/30/. For further information on this implementation of SVM, refer to this paper: http://www.kyb.mpg.de/publications/attachments/neco_%5B0%5D.pdf
CNode | A CNode is an element of a CTaxonomy, which is used to describe hierarchical structure between tasks |
CNormOne | Preprocessor NormOne, normalizes vectors to have norm 1 |
COligoStringKernel | This class offers access to the Oligo Kernel introduced by Meinicke et al. in 2004 |
ConditionalProbabilityTreeNodeData | Struct to store data of node of conditional probability tree |
COnlineLibLinear | Class implementing a purely online version of LibLinear, using the L2R_L1LOSS_SVC_DUAL solver only |
COnlineLinearMachine | Class OnlineLinearMachine is a generic interface for linear machines like classifiers which work through online algorithms |
COnlineSVMSGD | Class OnlineSVMSGD |
CoverTree< Point > | |
CParameterCombination | Class that holds ONE combination of parameters for a learning machine. The structure is organized as a tree. Every node may hold a name or an instance of a Parameter class. Nodes may have children. The nodes are organized in such a way that every parameter of a model for model selection has one node, and sub-parameters are stored in sub-nodes. Using a tree of this class, parameters of models may easily be set. Several types of nodes exist
CParseBuffer< T > | Class CParseBuffer implements a ring of examples of a defined size. The ring stores objects of the Example type |
CPCA | Preprocessor PCACut performs principal component analysis on the input vectors and keeps only the n eigenvectors with eigenvalues above a certain threshold
CPerceptron | Class Perceptron implements the standard linear (online) perceptron |
CPlif | Class Plif |
CPlifArray | Class PlifArray |
CPlifBase | Class PlifBase |
CPlifMatrix | Store plif arrays for all transitions in the model |
CPluginEstimate | Class PluginEstimate |
CPNorm | Preprocessor PNorm, normalizes vectors to have unit p-norm
CPolyFeatures | Implement DotFeatures for the polynomial kernel |
CPolyKernel | Computes the standard polynomial kernel on CDotFeatures |
CPolyMatchStringKernel | The class PolyMatchStringKernel computes a variant of the polynomial kernel on strings of the same length
CPolyMatchWordStringKernel | The class PolyMatchWordStringKernel computes a variant of the polynomial kernel on word-features |
CPositionalPWM | Positional PWM |
CPowerKernel | Power kernel |
CPRCEvaluation | Class PRCEvaluation used to evaluate PRC (Precision Recall Curve) and an area under PRC curve (auPRC) |
CPrecisionMeasure | Class PrecisionMeasure used to measure precision of 2-class classifier |
CPreprocessor | Class Preprocessor defines a preprocessor interface |
CProductKernel | The Product kernel is used to combine a number of kernels into a single ProductKernel object by element multiplication |
CPruneVarSubMean | Preprocessor PruneVarSubMean will subtract the mean and remove features that have zero variance
CPyramidChi2 | Pyramid Kernel over Chi2 matched histograms |
CQDA | Class QDA implements Quadratic Discriminant Analysis |
CQPBSVMLib | Class QPBSVMLib |
CQuadraticTimeMMD | This class implements the quadratic time Maximum Mean Discrepancy (MMD) statistic as described in [1]. The MMD is the distance between two probability distributions in a RKHS
CRandomConditionalProbabilityTree | |
CRandomFourierGaussPreproc | Preprocessor CRandomFourierGaussPreproc implements Random Fourier Features for the Gaussian kernel à la Ali Rahimi and Ben Recht (NIPS 2007). After preprocessing, using the features in a linear kernel approximates a Gaussian kernel
CRandomSearchModelSelection | Model selection class which searches for the best model by a random search. See CModelSelection for details |
CRationalQuadraticKernel | Rational Quadratic kernel |
CRealDistance | Class RealDistance |
CRealFileFeatures | The class RealFileFeatures implements a dense double-precision floating point matrix from a file |
CRealNumber | Class CRealNumber to be used in the application of Structured Output (SO) learning to multiclass classification. Even though it is likely that it does not make sense to consider real numbers as structured data, it has been made in this way because the basic type to use in structured labels needs to inherit from CStructuredData |
CRecallMeasure | Class RecallMeasure used to measure recall of 2-class classifier |
CRegressionLabels | Real-valued labels for regression problems
CRegulatoryModulesStringKernel | The Regulatory Modules kernel, based on the WD kernel, as published in Schultheiss et al., Bioinformatics (2009), on regulatory sequences
CRejectionStrategy | Base rejection strategy class |
CRelaxedTree | |
CResultSet | |
CRidgeKernelNormalizer | Normalize the kernel by adding a constant term to its diagonal. This helps kernels become positive definite when they are not, which is often caused by numerical problems
CROCEvaluation | Class ROCEvaluation used to evaluate ROC (Receiver Operating Characteristic) curves and the area under the ROC curve (auROC)
CSalzbergWordStringKernel | The SalzbergWordString kernel implements the Salzberg kernel |
CScatterKernelNormalizer | Scatter kernel normalizer |
CScatterSVM | ScatterSVM - Multiclass SVM |
CSegmentLoss | Class SegmentLoss
CSequence | Class CSequence to be used in the application of Structured Output (SO) learning to Hidden Markov Support Vector Machines (HM-SVM) |
CSerializableAsciiFile | Serializable ascii file |
CSerializableFile | Serializable file |
CSet< T > | Class CSet, a set based on a hash table. See http://en.wikipedia.org/wiki/Hash_table
CSGDQN | Class SGDQN |
CSGObject | Class SGObject is the base class of all shogun objects |
CShareBoost | |
CSigmoidKernel | The standard Sigmoid kernel computed on dense real valued features |
CSignal | Class Signal implements signal handling to e.g. allow ctrl+c to cancel a long running process |
CSimpleFile< T > | Template class SimpleFile to read and write from files |
CSimpleLocalityImprovedStringKernel | The SimpleLocalityImprovedString kernel is a "simplified" and better performing version of the Locality Improved kernel
CSmoothHingeLoss | CSmoothHingeLoss implements the smooth hinge loss function |
CSNPFeatures | Features that compute the Weighted Degree Kernel feature space explicitly
CSNPStringKernel | The class SNPStringKernel computes a variant of the polynomial kernel on strings of the same length
CSortUlongString | Preprocessor SortUlongString, sorts the individual strings in ascending order
CSortWordString | Preprocessor SortWordString, sorts the individual strings in ascending order
CSparseDistance< ST > | Template class SparseDistance |
CSparseEuclideanDistance | Class SparseEuclideanDistance
CSparseFeatures< ST > | Template class SparseFeatures implements sparse matrices |
CSparseInverseCovariance | Used to estimate inverse covariance matrix using graphical lasso |
CSparseKernel< ST > | Template class SparseKernel, is the base class of kernels working on sparse features |
CSparsePolyFeatures | Implement DotFeatures for the polynomial kernel |
CSparsePreprocessor< ST > | Template class SparsePreprocessor, base class for preprocessors (cf. CPreprocessor) that apply to CSparseFeatures |
CSparseSpatialSampleStringKernel | Sparse Spatial Sample String Kernel by Pavel Kuksa <pkuksa@cs.rutgers.edu> and Vladimir Pavlovic <vladimir@cs.rutgers.edu> |
CSpecificityMeasure | Class SpecificityMeasure used to measure specificity of 2-class classifier |
CSpectrumMismatchRBFKernel | Spectrum mismatch rbf kernel |
CSpectrumRBFKernel | Spectrum rbf kernel |
CSphericalKernel | Spherical kernel |
CSplineKernel | Computes the Spline Kernel function which is the cubic polynomial |
CSplittingStrategy | Abstract base class for all splitting types. Takes a CLabels instance and generates a desired number of subsets which are being accessed by their indices via the method generate_subset_indices(...) |
CSqrtDiagKernelNormalizer | SqrtDiagKernelNormalizer divides by the Square Root of the product of the diagonal elements |
CSquaredHingeLoss | Class CSquaredHingeLoss implements a squared hinge loss function |
CSquaredLoss | CSquaredLoss implements the squared loss function |
CStateModel | Class CStateModel base, abstract class for the internal state representation used in the CHMSVMModel |
CStatistics | Class that contains certain functions related to statistics, such as probability/cumulative distribution functions, different statistics, etc |
CStochasticProximityEmbedding | Class StochasticProximityEmbedding (part of the Efficient Dimensionality Reduction Toolkit) used to construct embeddings of data using the Stochastic Proximity algorithm |
CStratifiedCrossValidationSplitting | Implementation of stratified cross-validation on the base of CSplittingStrategy. Produces subset index sets of equal size (at most one difference) in which the label ratio is equal (at most one difference) to the label ratio of the specified labels. Do not use for regression since it may be impossible to distribute the labels nicely in that case
CStreamingAsciiFile | Class StreamingAsciiFile to read vector-by-vector from ASCII files |
CStreamingDenseFeatures< T > | This class implements streaming features with dense feature vectors |
CStreamingDotFeatures | Streaming features that support dot products among other operations |
CStreamingFeatures | Streaming features are features which are used for online algorithms |
CStreamingFile | A Streaming File access class |
CStreamingFileFromDenseFeatures< T > | Class CStreamingFileFromDenseFeatures is a derived class of CStreamingFile which creates an input source for the online framework from a CDenseFeatures object |
CStreamingFileFromFeatures | Class StreamingFileFromFeatures to read vector-by-vector from a CFeatures object |
CStreamingFileFromSparseFeatures< T > | Class CStreamingFileFromSparseFeatures is derived from CStreamingFile and provides an input source for the online framework. It uses an existing CSparseFeatures object to generate online examples |
CStreamingFileFromStringFeatures< T > | Class CStreamingFileFromStringFeatures is derived from CStreamingFile and provides an input source for the online framework from a CStringFeatures object |
CStreamingSparseFeatures< T > | This class implements streaming features with sparse feature vectors. The vector is represented as an SGSparseVector<T>. Each entry is of type SGSparseVectorEntry<T> with members `feat_index' and `entry' |
CStreamingStringFeatures< T > | This class implements streaming features as strings |
CStreamingVwCacheFile | Class StreamingVwCacheFile to read vector-by-vector from VW cache files |
CStreamingVwFeatures | This class implements streaming features for use with VW |
CStreamingVwFile | Class StreamingVwFile to read vector-by-vector from Vowpal Wabbit data files. It reads the example and label into one object of VwExample type |
CStringDistance< ST > | Template class StringDistance |
CStringFeatures< ST > | Template class StringFeatures implements a list of strings |
CStringFileFeatures< ST > | File based string features |
CStringKernel< ST > | Template class StringKernel, is the base class of all String Kernels |
CStringPreprocessor< ST > | Template class StringPreprocessor, base class for preprocessors (cf. CPreprocessor) that apply to CStringFeatures (i.e. strings of variable length) |
CStructuredAccuracy | Class CStructuredAccuracy used to compute accuracy of structured classification |
CStructuredData | Base class of the components of StructuredLabels |
CStructuredLabels | Base class of the labels used in Structured Output (SO) problems |
CStructuredModel | Class CStructuredModel that represents the application specific model and contains most of the application dependent logic to solve structured output (SO) problems. The idea of this class is to be instantiated giving pointers to the functions that are dependent on the application, i.e. the combined feature representation and the argmax function. See MulticlassModel.h and .cpp for an example of these functions implemented
CStructuredOutputMachine | |
CStudentsTLikelihood | This class implements a likelihood model based on the Student's t distribution. Its parameters include the degrees of freedom as well as a sigma scale parameter
CSubGradientLPM | Class SubGradientLPM trains a linear classifier called Linear Programming Machine, i.e. an SVM using an l1-norm regularizer
CSubGradientSVM | Class SubGradientSVM |
CSubset | Wrapper class for an index subset which is used by SubsetStack |
CSubsetStack | Class to add subset support to another class. A CSubsetStack instance should be added and wrapper methods to all interfaces should be added
CSumOne | Preprocessor SumOne, normalizes vectors to have sum 1 |
CSVM | A generic Support Vector Machine Interface |
CSVMLight | Class SVMlight |
CSVMLightOneClass | Trains a one-class C-SVM
CSVMLin | Class SVMLin |
CSVMOcas | Class SVMOcas |
CSVMSGD | Class SVMSGD |
CSVRLight | Class SVRLight, performs support vector regression using SVMLight |
CSyntaxHighLight | Syntax highlighting
CTanimotoDistance | Class TanimotoDistance, a distance based on the Tanimoto coefficient
CTanimotoKernelNormalizer | TanimotoKernelNormalizer performs kernel normalization inspired by the Tanimoto coefficient (see http://en.wikipedia.org/wiki/Jaccard_index ) |
CTask | Class Task used to represent tasks in multitask learning. Essentially it represents a set of feature vector indices
CTaskGroup | Class TaskGroup used to represent a group of tasks. Tasks in a group do not overlap
CTaskRelation | Used to represent relations between tasks in multitask learning
CTaskTree | Class TaskTree used to represent a tree of tasks. The tree is constructed from a task with subtasks (and subtasks of subtasks, etc.) passed to the TaskTree
CTaxonomy | CTaxonomy is used to describe hierarchical structure between tasks |
CTensorProductPairKernel | Computes the Tensor Product Pair Kernel (TPPK) |
CTestStatistic | Test statistic base class. Provides an interface for statistical tests via three methods: compute_statistic(), compute_p_value() and compute_threshold(). The second computes a p-value for the statistic computed by the first method. The p-value represents the position of the statistic in the null-distribution, i.e. the distribution of the statistic population given the null-hypothesis is true. (1-position = p-value). The third method, compute_threshold(), computes a threshold for a given test level which is needed to reject the null-hypothesis |
CThresholdRejectionStrategy | Threshold based rejection strategy |
CTime | Class Time that implements a stopwatch based on either cpu time or wall clock time |
CTOPFeatures | The class TOPFeatures implements TOP kernel features obtained from two Hidden Markov models |
CTreeMachine< T > | Class TreeMachine, a base class for tree based multiclass classifiers |
CTreeMachineNode< T > | |
CTrie< Trie > | Template class Trie implements a suffix trie, i.e. a tree in which all suffixes up to a certain length are stored |
CTron | Class Tron |
CTStudentKernel | Generalized T-Student kernel |
CTwoDistributionsTestStatistic | Provides an interface for performing statistical tests on two sets of samples from two distributions. Instances of these tests are the classical two-sample test and the independence test. This class may be used as base class for both |
CTwoStateModel | Class CTwoStateModel class for the internal two-state representation used in the CHMSVMModel |
CVarianceKernelNormalizer | VarianceKernelNormalizer divides by the "variance"
CVowpalWabbit | Class CVowpalWabbit is the implementation of the online learning algorithm used in Vowpal Wabbit |
CVwAdaptiveLearner | VwAdaptiveLearner uses an adaptive subgradient technique to update weights |
CVwCacheReader | Base class from which all cache readers for VW should be derived |
CVwCacheWriter | CVwCacheWriter is the base class for all VW cache creating classes |
CVwConditionalProbabilityTree | |
CVwEnvironment | Class CVwEnvironment is the environment used by VW |
CVwLearner | Base class for all VW learners |
CVwNativeCacheReader | Class CVwNativeCacheReader reads from a cache exactly as that which has been produced by VW's default cache format |
CVwNativeCacheWriter | Class CVwNativeCacheWriter writes a cache exactly as that which would be produced by VW's default cache format |
CVwNonAdaptiveLearner | VwNonAdaptiveLearner uses a standard gradient descent weight update rule |
CVwParser | CVwParser is the object which provides the functions to parse examples from buffered input |
CVwRegressor | Regressor used by VW |
CWaveKernel | Wave kernel |
CWaveletKernel | Class WaveletKernel |
CWDFeatures | Features that compute the Weighted Degree Kernel feature space explicitly
CWDSVMOcas | Class WDSVMOcas |
CWeightedCommWordStringKernel | The WeightedCommWordString kernel may be used to compute the weighted spectrum kernel (i.e. a spectrum kernel for 1 to K-mers, where each k-mer length is weighted by some coefficient) from strings that have been mapped into unsigned 16bit integers
CWeightedDegreePositionStringKernel | The Weighted Degree Position String kernel (Weighted Degree kernel with shifts) |
CWeightedDegreeRBFKernel | Weighted degree RBF kernel |
CWeightedDegreeStringKernel | The Weighted Degree String kernel |
CWRACCMeasure | Class WRACCMeasure used to measure weighted relative accuracy of 2-class classifier |
CZeroMean | Zero Mean Function |
CZeroMeanCenterKernelNormalizer | ZeroMeanCenterKernelNormalizer centers the kernel in feature space |
d_node< P > | |
ds_node< P > | |
DynArray< T > | Template Dynamic array class that creates an array that can be used like a list or an array |
EntryComparator | |
Example< T > | Class Example is the container type for the vector+label combination |
func_wrapper | |
SGVector< T >::IndexSorter | |
lbfgs_parameter_t | |
MappedSparseMatrix | Mapped sparse matrix for representing graph relations of tasks |
MKLMulticlassGLPK | MKLMulticlassGLPK is a helper class for MKLMulticlass |
MKLMulticlassGradient | MKLMulticlassGradient is a helper class for MKLMulticlass |
MKLMulticlassOptimizationBase | MKLMulticlassOptimizationBase is a helper class for MKLMulticlass |
mocas_data | |
Model | Class Model |
Munkres | Munkres |
CGradientModelSelection::nlopt_package | Struct used for nlopt callback function |
node< P > | |
Parallel | Class Parallel provides helper functions for multithreading |
Parameter | Parameter class |
ParameterMap | Implements a map of ParameterMapElement instances Maps one key to a set of values |
ParameterMapElement | Class to hold instances of a parameter map. Each element contains a key and a set of values, each of which is of type SGParamInfo. May be compared to each other based on their keys
Psi_line | |
refcount_t | |
RelaxedTreeNodeData | |
RelaxedTreeUtil | |
SerializableAsciiReader00 | Serializable ascii reader |
SGIO | Class SGIO, used to do input output operations throughout shogun |
SGMatrix< T > | Shogun matrix |
SGMatrixList< T > | Shogun matrix list |
SGNDArray< T > | Shogun n-dimensional array |
SGParamInfo | Class that holds information about a certain parameter of a CSGObject. Contains name, type, etc. This is used for mapping types that have changed between different versions of shogun. Instances of this class may be compared to each other. Ordering is based on name; equality is based on all attributes
SGReferencedData | Shogun reference count managed data |
SGSparseMatrix< T > | Template class SGSparseMatrix |
SGSparseVector< T > | Template class SGSparseVector. The assumption is that the stored SGSparseVectorEntry<T>* vector is ordered by SGSparseVectorEntry.feat_index in non-decreasing order. This has to be assured by the user of the class
SGSparseVectorEntry< T > | Template class SGSparseVectorEntry |
SGString< T > | Shogun string |
SGStringList< T > | Template class SGStringList |
SGVector< T > | Shogun vector |
ShareBoostOptimizer | |
ShogunException | Class ShogunException defines an exception which is thrown whenever an error inside of shogun occurs |
SPE_COVERTREE_POINT | |
SSKFeatures | SSKFeatures |
substring | Struct Substring, specified by start position and end position |
tag_callback_data | |
tag_iteration_data | |
task_tree_node_t | |
TMultipleCPinfo | |
TParameter | Parameter struct |
tree_node_t | |
CSerializableFile::TSerializableReader | Serializable reader |
TSGDataType | Datatypes that shogun supports |
v_array< T > | Class v_array taken directly from JL's implementation |
Version | Class Version provides version information |
VwConditionalProbabilityTreeNodeData | |
VwExample | Example class for VW |
VwFeature | One feature in VW |
VwLabel | Class VwLabel holds a label object used by VW |