# Linear Support Vector Machine

A linear Support Vector Machine is a binary classifier that finds a hyperplane maximizing the margin between the two classes. The objective to be minimized is:

$\min_{\bf w} \frac{1}{2}{\bf w}^\top{\bf w} + C\sum_{i=1}^{N}\xi({\bf w};{\bf x}_i, y_i)$

where $${\bf w}$$ is the vector of weights, $${\bf x}_i$$ is the $$i$$-th feature vector, $$y_i$$ is the corresponding label, $$C>0$$ is a penalty parameter, $$N$$ is the number of training samples, and $$\xi$$ is the hinge loss function, $$\xi({\bf w};{\bf x}_i, y_i)=\max(0,\, 1-y_i{\bf w}^\top{\bf x}_i)$$.

The solution takes the following form:

$\mathbf{w} = \sum_i \alpha_i y_i \mathbf{x}_i$

The coefficients $$\alpha_i$$ in this solution are sparse: only the support vectors contribute to $$\mathbf{w}$$.

See [FCH+08] and Chapter 6 in [CST00] for a detailed introduction.
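The primal objective above can be evaluated directly. Here is a minimal NumPy sketch (outside Shogun, with made-up toy data), assuming `X` is an $$N \times d$$ feature matrix and `y` a vector of $$\pm 1$$ labels:

```python
import numpy as np

def primal_objective(w, X, y, C):
    # hinge loss per sample: xi(w; x_i, y_i) = max(0, 1 - y_i * w^T x_i)
    hinge = np.maximum(0.0, 1.0 - y * (X @ w))
    # regularizer 0.5 * w^T w plus C times the summed hinge losses
    return 0.5 * w @ w + C * hinge.sum()

# toy data: two samples in two dimensions
X = np.array([[1.0, 2.0], [-1.0, -1.0]])
y = np.array([1.0, -1.0])

# with w = 0 every margin is violated by exactly 1, so the objective is C * N
print(primal_objective(np.zeros(2), X, y, C=1.0))  # -> 2.0
```

Minimizing this function over $${\bf w}$$ is exactly what the solver below does.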

## Example

Imagine we have files with training and test data. We create CDenseFeatures (here 64-bit floats, aka RealFeatures) and CBinaryLabels as

Python:

```python
features_train = RealFeatures(f_feats_train)
features_test = RealFeatures(f_feats_test)
labels_train = BinaryLabels(f_labels_train)
labels_test = BinaryLabels(f_labels_test)
```

Octave:

```octave
features_train = RealFeatures(f_feats_train);
features_test = RealFeatures(f_feats_test);
labels_train = BinaryLabels(f_labels_train);
labels_test = BinaryLabels(f_labels_test);
```

Java:

```java
RealFeatures features_train = new RealFeatures(f_feats_train);
RealFeatures features_test = new RealFeatures(f_feats_test);
BinaryLabels labels_train = new BinaryLabels(f_labels_train);
BinaryLabels labels_test = new BinaryLabels(f_labels_test);
```

Ruby:

```ruby
features_train = Shogun::RealFeatures.new f_feats_train
features_test = Shogun::RealFeatures.new f_feats_test
labels_train = Shogun::BinaryLabels.new f_labels_train
labels_test = Shogun::BinaryLabels.new f_labels_test
```

R:

```r
features_train <- RealFeatures(f_feats_train)
features_test <- RealFeatures(f_feats_test)
labels_train <- BinaryLabels(f_labels_train)
labels_test <- BinaryLabels(f_labels_test)
```

Lua:

```lua
features_train = shogun.RealFeatures(f_feats_train)
features_test = shogun.RealFeatures(f_feats_test)
labels_train = shogun.BinaryLabels(f_labels_train)
labels_test = shogun.BinaryLabels(f_labels_test)
```

C#:

```csharp
RealFeatures features_train = new RealFeatures(f_feats_train);
RealFeatures features_test = new RealFeatures(f_feats_test);
BinaryLabels labels_train = new BinaryLabels(f_labels_train);
BinaryLabels labels_test = new BinaryLabels(f_labels_test);
```

C++:

```cpp
auto features_train = some<CDenseFeatures<float64_t>>(f_feats_train);
auto features_test = some<CDenseFeatures<float64_t>>(f_feats_test);
auto labels_train = some<CBinaryLabels>(f_labels_train);
auto labels_test = some<CBinaryLabels>(f_labels_test);
```


In order to run CLibLinear, we need to initialize a few parameters, such as $$C$$ and epsilon, the residual convergence tolerance of the solver.

Python:

```python
C = 1.0
epsilon = 0.001
```

Octave:

```octave
C = 1.0;
epsilon = 0.001;
```

Java:

```java
double C = 1.0;
double epsilon = 0.001;
```

Ruby:

```ruby
C = 1.0
epsilon = 0.001
```

R:

```r
C <- 1.0
epsilon <- 0.001
```

Lua:

```lua
C = 1.0
epsilon = 0.001
```

C#:

```csharp
double C = 1.0;
double epsilon = 0.001;
```

C++:

```cpp
auto C = 1.0;
auto epsilon = 0.001;
```


We create an instance of the CLibLinear classifier by passing it the regularization coefficient, features and labels. Here we set the solver type to L2-regularized classification; CLibLinear offers many other solver types to choose from.

Python:

```python
svm = LibLinear(C, features_train, labels_train)
svm.set_liblinear_solver_type(L2R_L2LOSS_SVC)
svm.set_epsilon(epsilon)
```

Octave:

```octave
svm = LibLinear(C, features_train, labels_train);
svm.set_liblinear_solver_type(L2R_L2LOSS_SVC);
svm.set_epsilon(epsilon);
```

Java:

```java
LibLinear svm = new LibLinear(C, features_train, labels_train);
svm.set_liblinear_solver_type(LIBLINEAR_SOLVER_TYPE.L2R_L2LOSS_SVC);
svm.set_epsilon(epsilon);
```

Ruby:

```ruby
svm = Shogun::LibLinear.new C, features_train, labels_train
svm.set_liblinear_solver_type Shogun::L2R_L2LOSS_SVC
svm.set_epsilon epsilon
```

R:

```r
svm <- LibLinear(C, features_train, labels_train)
svm$set_liblinear_solver_type("L2R_L2LOSS_SVC")
svm$set_epsilon(epsilon)
```

Lua:

```lua
svm = shogun.LibLinear(C, features_train, labels_train)
svm:set_liblinear_solver_type(shogun.L2R_L2LOSS_SVC)
svm:set_epsilon(epsilon)
```

C#:

```csharp
LibLinear svm = new LibLinear(C, features_train, labels_train);
svm.set_liblinear_solver_type(LIBLINEAR_SOLVER_TYPE.L2R_L2LOSS_SVC);
svm.set_epsilon(epsilon);
```

C++:

```cpp
auto svm = some<CLibLinear>(C, features_train, labels_train);
svm->set_liblinear_solver_type(LIBLINEAR_SOLVER_TYPE::L2R_L2LOSS_SVC);
svm->set_epsilon(epsilon);
```


Then we train the classifier and apply it to the test data, which here gives CBinaryLabels.

Python:

```python
svm.train()
labels_predict = svm.apply_binary(features_test)
```

Octave:

```octave
svm.train();
labels_predict = svm.apply_binary(features_test);
```

Java:

```java
svm.train();
BinaryLabels labels_predict = svm.apply_binary(features_test);
```

Ruby:

```ruby
svm.train
labels_predict = svm.apply_binary features_test
```

R:

```r
svm$train()
labels_predict <- svm$apply_binary(features_test)
```

Lua:

```lua
svm:train()
labels_predict = svm:apply_binary(features_test)
```

C#:

```csharp
svm.train();
BinaryLabels labels_predict = svm.apply_binary(features_test);
```

C++:

```cpp
svm->train();
auto labels_predict = svm->apply_binary(features_test);
```


We can extract the learned weight vector $${\bf w}$$ and bias $$b$$.

Python:

```python
w = svm.get_w()
b = svm.get_bias()
```

Octave:

```octave
w = svm.get_w();
b = svm.get_bias();
```

Java:

```java
DoubleMatrix w = svm.get_w();
double b = svm.get_bias();
```

Ruby:

```ruby
w = svm.get_w
b = svm.get_bias
```

R:

```r
w <- svm$get_w()
b <- svm$get_bias()
```

Lua:

```lua
w = svm:get_w()
b = svm:get_bias()
```

C#:

```csharp
double[] w = svm.get_w();
double b = svm.get_bias();
```

C++:

```cpp
auto w = svm->get_w();
auto b = svm->get_bias();
```
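The extracted parameters define the decision function directly: a linear SVM predicts $$\mathrm{sign}({\bf w}^\top{\bf x} + b)$$. A minimal NumPy sketch (outside Shogun; the weights, bias and test points below are made-up stand-ins for the values a trained model would return):

```python
import numpy as np

# hypothetical values standing in for the results of get_w() and get_bias()
w = np.array([0.5, -0.25])
b = 0.1

def predict(X):
    # decision function: sign(w^T x + b), yielding binary labels -1/+1
    return np.sign(X @ w + b)

X_test = np.array([[2.0, 1.0], [-2.0, 1.0]])
print(predict(X_test))  # -> [ 1. -1.]
```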


We can evaluate test performance via, e.g., CAccuracyMeasure.

Python:

```python
eval = AccuracyMeasure()
accuracy = eval.evaluate(labels_predict, labels_test)
```

Octave:

```octave
eval = AccuracyMeasure();
accuracy = eval.evaluate(labels_predict, labels_test);
```

Java:

```java
AccuracyMeasure eval = new AccuracyMeasure();
double accuracy = eval.evaluate(labels_predict, labels_test);
```

Ruby:

```ruby
eval = Shogun::AccuracyMeasure.new
accuracy = eval.evaluate labels_predict, labels_test
```

R:

```r
eval <- AccuracyMeasure()
accuracy <- eval$evaluate(labels_predict, labels_test)
```

Lua:

```lua
eval = shogun.AccuracyMeasure()
accuracy = eval:evaluate(labels_predict, labels_test)
```

C#:

```csharp
AccuracyMeasure eval = new AccuracyMeasure();
double accuracy = eval.evaluate(labels_predict, labels_test);
```

C++:

```cpp
auto eval = some<CAccuracyMeasure>();
auto accuracy = eval->evaluate(labels_predict, labels_test);
```
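Accuracy here is simply the fraction of predicted labels that match the true labels. A minimal NumPy sketch (outside Shogun, with made-up toy labels standing in for `labels_predict` and `labels_test`):

```python
import numpy as np

# hypothetical predicted vs. true binary labels
predicted = np.array([1, -1, 1, 1])
true_labels = np.array([1, -1, -1, 1])

# accuracy: mean of elementwise agreement
accuracy = np.mean(predicted == true_labels)
print(accuracy)  # -> 0.75
```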


## References

- Wikipedia: Support vector machine
- Wikipedia: Lagrange multiplier
- LibLinear website

[CST00] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, 2000.

[FCH+08] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A Library for Large Linear Classification. Journal of Machine Learning Research, 9:1871–1874, 2008.