# Linear Time MMD

The linear time MMD implements a nonparametric statistical hypothesis test to reject the null hypothesis that two distributions $$p$$ and $$q$$, each observed only via $$n$$ samples, are the same, i.e. $$H_0: p=q$$.

The (unbiased) statistic, for samples $$x_i \sim p$$ and $$y_i \sim q$$, is given by

$\frac{2}{n}\sum_{i=1}^{n/2} k(x_{2i},x_{2i+1}) + k(y_{2i}, y_{2i+1}) - k(x_{2i}, y_{2i+1}) - k(x_{2i+1}, y_{2i}).$

See [GBR+12] for a detailed introduction.
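For intuition, the estimator can be written down in a few lines of NumPy. The following is an independent sketch (not Shogun API; all names are our own), assuming Shogun's Gaussian kernel convention $$k(x,y)=\exp(-\Vert x-y\Vert^2/\text{width})$$:

```python
import numpy as np

def gaussian_kernel(a, b, width):
    # Gaussian kernel, Shogun-style width: k(a, b) = exp(-||a - b||^2 / width)
    return np.exp(-np.sum((a - b) ** 2, axis=-1) / width)

def linear_time_mmd(x, y, width):
    # Average h(z_{2i}, z_{2i+1}) over consecutive, non-overlapping sample
    # pairs; every kernel value is used once, hence linear time and O(1) memory.
    n = min(len(x), len(y)) // 2 * 2
    x0, x1 = x[0:n:2], x[1:n:2]
    y0, y1 = y[0:n:2], y[1:n:2]
    h = (gaussian_kernel(x0, x1, width) + gaussian_kernel(y0, y1, width)
         - gaussian_kernel(x0, y1, width) - gaussian_kernel(x1, y0, width))
    return float(np.mean(h))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(1000, 1))  # samples from p
y = rng.normal(1.0, 1.0, size=(1000, 1))  # samples from q, shifted mean
print(linear_time_mmd(x, y, width=1.0))   # positive when p and q differ
```

The statistic concentrates around zero when both sample sets come from the same distribution, which is what the hypothesis test below exploits.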

## Example

Imagine we have samples from $$p$$ and $$q$$. As the linear time MMD is a streaming statistic, we need to pass it CStreamingFeatures. Here, we use synthetic data generators, but it is possible to construct CStreamingFeatures from (large) files. We create an instance of CLinearTimeMMD, passing it data and the kernel to use,

mmd = LinearTimeMMD()
kernel = GaussianKernel(10, 1)
mmd.set_kernel(kernel)
mmd.set_p(features_p)
mmd.set_q(features_q)
mmd.set_num_samples_p(1000)
mmd.set_num_samples_q(1000)
alpha = 0.05

mmd = LinearTimeMMD();
kernel = GaussianKernel(10, 1);
mmd.set_kernel(kernel);
mmd.set_p(features_p);
mmd.set_q(features_q);
mmd.set_num_samples_p(1000);
mmd.set_num_samples_q(1000);
alpha = 0.05;

LinearTimeMMD mmd = new LinearTimeMMD();
GaussianKernel kernel = new GaussianKernel(10, 1);
mmd.set_kernel(kernel);
mmd.set_p(features_p);
mmd.set_q(features_q);
mmd.set_num_samples_p(1000);
mmd.set_num_samples_q(1000);
double alpha = 0.05;

mmd = Shogun::LinearTimeMMD.new
kernel = Shogun::GaussianKernel.new 10, 1
mmd.set_kernel kernel
mmd.set_p features_p
mmd.set_q features_q
mmd.set_num_samples_p 1000
mmd.set_num_samples_q 1000
alpha = 0.05

mmd <- LinearTimeMMD()
kernel <- GaussianKernel(10, 1)
mmd$set_kernel(kernel)
mmd$set_p(features_p)
mmd$set_q(features_q)
mmd$set_num_samples_p(1000)
mmd$set_num_samples_q(1000)
alpha <- 0.05

mmd = shogun.LinearTimeMMD()
kernel = shogun.GaussianKernel(10, 1)
mmd:set_kernel(kernel)
mmd:set_p(features_p)
mmd:set_q(features_q)
mmd:set_num_samples_p(1000)
mmd:set_num_samples_q(1000)
alpha = 0.05

LinearTimeMMD mmd = new LinearTimeMMD();
GaussianKernel kernel = new GaussianKernel(10, 1);
mmd.set_kernel(kernel);
mmd.set_p(features_p);
mmd.set_q(features_q);
mmd.set_num_samples_p(1000);
mmd.set_num_samples_q(1000);
double alpha = 0.05;

auto mmd = some<CLinearTimeMMD>();
auto kernel = some<CGaussianKernel>(10, 1);
mmd->set_kernel(kernel);
mmd->set_p(features_p);
mmd->set_q(features_q);
mmd->set_num_samples_p(1000);
mmd->set_num_samples_q(1000);
auto alpha = 0.05;

An important parameter for controlling the efficiency of the linear time MMD is the block size, i.e. the number of samples that is processed at once. As a guideline, set it as large as memory allows.

Computing the statistic is done as

statistic = mmd.compute_statistic()

We can perform the hypothesis test by computing a test threshold for a given $$\alpha$$, or by directly computing a p-value.

## Kernel learning

There are various options to learn a kernel. All options allow learning a single kernel among a number of provided baseline kernels. Furthermore, some of these criteria can be used to learn the coefficients of a convex combination of baseline kernels.

We specify the desired baseline kernels to consider. Note that the kernel above is not considered in the selection.
kernel1 = GaussianKernel(10, 0.1)
kernel2 = GaussianKernel(10, 1)
kernel3 = GaussianKernel(10, 10)
mmd.add_kernel(kernel1)
mmd.add_kernel(kernel2)
mmd.add_kernel(kernel3)

kernel1 = GaussianKernel(10, 0.1);
kernel2 = GaussianKernel(10, 1);
kernel3 = GaussianKernel(10, 10);
mmd.add_kernel(kernel1);
mmd.add_kernel(kernel2);
mmd.add_kernel(kernel3);

GaussianKernel kernel1 = new GaussianKernel(10, 0.1);
GaussianKernel kernel2 = new GaussianKernel(10, 1);
GaussianKernel kernel3 = new GaussianKernel(10, 10);
mmd.add_kernel(kernel1);
mmd.add_kernel(kernel2);
mmd.add_kernel(kernel3);

kernel1 = Shogun::GaussianKernel.new 10, 0.1
kernel2 = Shogun::GaussianKernel.new 10, 1
kernel3 = Shogun::GaussianKernel.new 10, 10
mmd.add_kernel kernel1
mmd.add_kernel kernel2
mmd.add_kernel kernel3

kernel1 <- GaussianKernel(10, 0.1)
kernel2 <- GaussianKernel(10, 1)
kernel3 <- GaussianKernel(10, 10)
mmd$add_kernel(kernel1)
mmd$add_kernel(kernel2)
mmd$add_kernel(kernel3)

kernel1 = shogun.GaussianKernel(10, 0.1)
kernel2 = shogun.GaussianKernel(10, 1)
kernel3 = shogun.GaussianKernel(10, 10)
mmd:add_kernel(kernel1)
mmd:add_kernel(kernel2)
mmd:add_kernel(kernel3)

GaussianKernel kernel1 = new GaussianKernel(10, 0.1);
GaussianKernel kernel2 = new GaussianKernel(10, 1);
GaussianKernel kernel3 = new GaussianKernel(10, 10);
mmd.add_kernel(kernel1);
mmd.add_kernel(kernel2);
mmd.add_kernel(kernel3);

auto kernel1 = some<CGaussianKernel>(10, 0.1);
auto kernel2 = some<CGaussianKernel>(10, 1);
auto kernel3 = some<CGaussianKernel>(10, 10);
mmd->add_kernel(kernel1);
mmd->add_kernel(kernel2);
mmd->add_kernel(kernel3);


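The idea behind selecting among such baseline widths can be sketched independently of Shogun (helper names here are our own): for each candidate width, compute the per-pair terms $$h$$ of the linear time statistic and pick the width maximizing mean(h) / std(h), a common proxy for the asymptotic power of the test.

```python
import numpy as np

def h_values(x, y, width):
    # Per-pair h terms of the linear time MMD with a Gaussian kernel
    k = lambda a, b: np.exp(-np.sum((a - b) ** 2, axis=-1) / width)
    n = min(len(x), len(y)) // 2 * 2
    x0, x1, y0, y1 = x[0:n:2], x[1:n:2], y[0:n:2], y[1:n:2]
    return k(x0, x1) + k(y0, y1) - k(x0, y1) - k(x1, y0)

def select_width(x, y, widths, eps=1e-8):
    # Maximize mean(h) / std(h), a proxy for asymptotic test power
    scores = [np.mean(h) / (np.std(h) + eps)
              for h in (h_values(x, y, w) for w in widths)]
    return widths[int(np.argmax(scores))]

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=(2000, 1))
y = rng.normal(1.0, 1.0, size=(2000, 1))
print(select_width(x, y, [0.1, 1.0, 10.0]))
```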
IMPORTANT: when learning the kernel for statistical testing, this needs to be done on different data than is used for performing the actual test. One way to accomplish this is to manually provide a different set of features for testing. It is also possible to automatically split the provided data by specifying the ratio between train and test data, by enabling the train-test mode.

mmd.set_train_test_mode(True)
mmd.set_train_test_ratio(1)

mmd.set_train_test_mode(true);
mmd.set_train_test_ratio(1);

mmd.set_train_test_mode(true);
mmd.set_train_test_ratio(1);

mmd.set_train_test_mode true
mmd.set_train_test_ratio 1

mmd$set_train_test_mode(TRUE)
mmd$set_train_test_ratio(1)

mmd:set_train_test_mode(true)
mmd:set_train_test_ratio(1)

mmd.set_train_test_mode(true);
mmd.set_train_test_ratio(1);

mmd->set_train_test_mode(true);
mmd->set_train_test_ratio(1);


A ratio of 1 means the data is split in half: the kernel is learned on the first half, and subsequent tests are performed on the second half.
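The split induced by the ratio can be illustrated with a tiny helper (hypothetical, not Shogun API): with $$n$$ samples and ratio $$r$$, a fraction $$r/(r+1)$$ of the data is used for kernel learning and the rest for testing.

```python
def train_test_sizes(n, ratio):
    # ratio = train size / test size; ratio 1 gives an even split
    n_train = int(n * ratio / (ratio + 1))
    return n_train, n - n_train

print(train_test_sizes(1000, 1))  # -> (500, 500)
```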

We learn the kernel and extract the result.
Note that the kernel of the mmd itself is replaced.

If all kernels have the same type, we can convert the result into that type, for example to extract its parameters.

mmd.set_kernel_selection_strategy(KSM_MAXIMIZE_POWER)
mmd.select_kernel()
learnt_kernel_single = GaussianKernel.obtain_from_generic(mmd.get_kernel())
width = learnt_kernel_single.get_width()

mmd.set_kernel_selection_strategy(KSM_MAXIMIZE_POWER);
mmd.select_kernel();
learnt_kernel_single = GaussianKernel.obtain_from_generic(mmd.get_kernel());
width = learnt_kernel_single.get_width();

mmd.set_kernel_selection_strategy(EKernelSelectionMethod.KSM_MAXIMIZE_POWER);
mmd.select_kernel();
GaussianKernel learnt_kernel_single = GaussianKernel.obtain_from_generic(mmd.get_kernel());
double width = learnt_kernel_single.get_width();

mmd.set_kernel_selection_strategy Shogun::KSM_MAXIMIZE_POWER
mmd.select_kernel
learnt_kernel_single = Shogun::GaussianKernel.obtain_from_generic mmd.get_kernel
width = learnt_kernel_single.get_width

mmd$set_kernel_selection_strategy("KSM_MAXIMIZE_POWER")
mmd$select_kernel()
learnt_kernel_single <- GaussianKernel$obtain_from_generic(mmd$get_kernel())
width <- learnt_kernel_single$get_width()

mmd:set_kernel_selection_strategy(shogun.KSM_MAXIMIZE_POWER)
mmd:select_kernel()
learnt_kernel_single = GaussianKernel:obtain_from_generic(mmd:get_kernel())
width = learnt_kernel_single:get_width()

mmd.set_kernel_selection_strategy(EKernelSelectionMethod.KSM_MAXIMIZE_POWER);
mmd.select_kernel();
GaussianKernel learnt_kernel_single = GaussianKernel.obtain_from_generic(mmd.get_kernel());
double width = learnt_kernel_single.get_width();

mmd->set_kernel_selection_strategy(EKernelSelectionMethod::KSM_MAXIMIZE_POWER);
mmd->select_kernel();
auto learnt_kernel_single = CGaussianKernel::obtain_from_generic(mmd->get_kernel());
auto width = learnt_kernel_single->get_width();

Note that in order to extract particular kernel parameters, we need to cast the kernel to its actual type.

Similarly, a convex combination of kernels, in the form of CCombinedKernel, can be learned and extracted as

mmd.set_kernel_selection_strategy(KSM_MAXIMIZE_POWER, True)
mmd.select_kernel()
learnt_kernel_combined = CombinedKernel.obtain_from_generic(mmd.get_kernel())
weights = learnt_kernel_combined.get_subkernel_weights()

mmd.set_kernel_selection_strategy(KSM_MAXIMIZE_POWER, true);
mmd.select_kernel();
learnt_kernel_combined = CombinedKernel.obtain_from_generic(mmd.get_kernel());
weights = learnt_kernel_combined.get_subkernel_weights();

mmd.set_kernel_selection_strategy(EKernelSelectionMethod.KSM_MAXIMIZE_POWER, true);
mmd.select_kernel();
CombinedKernel learnt_kernel_combined = CombinedKernel.obtain_from_generic(mmd.get_kernel());
DoubleMatrix weights = learnt_kernel_combined.get_subkernel_weights();

mmd.set_kernel_selection_strategy Shogun::KSM_MAXIMIZE_POWER, true
mmd.select_kernel
learnt_kernel_combined = Shogun::CombinedKernel.obtain_from_generic mmd.get_kernel
weights = learnt_kernel_combined.get_subkernel_weights

mmd$set_kernel_selection_strategy("KSM_MAXIMIZE_POWER", TRUE)
mmd$select_kernel()
learnt_kernel_combined <- CombinedKernel$obtain_from_generic(mmd$get_kernel())
weights <- learnt_kernel_combined$get_subkernel_weights()

mmd:set_kernel_selection_strategy(shogun.KSM_MAXIMIZE_POWER, true)
mmd:select_kernel()
learnt_kernel_combined = CombinedKernel:obtain_from_generic(mmd:get_kernel())
weights = learnt_kernel_combined:get_subkernel_weights()

mmd.set_kernel_selection_strategy(EKernelSelectionMethod.KSM_MAXIMIZE_POWER, true);
mmd.select_kernel();
CombinedKernel learnt_kernel_combined = CombinedKernel.obtain_from_generic(mmd.get_kernel());
double[] weights = learnt_kernel_combined.get_subkernel_weights();

mmd->set_kernel_selection_strategy(EKernelSelectionMethod::KSM_MAXIMIZE_POWER, true);
mmd->select_kernel();
auto learnt_kernel_combined = CCombinedKernel::obtain_from_generic(mmd->get_kernel());
auto weights = learnt_kernel_combined->get_subkernel_weights();
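What the learnt combination represents can be sketched independently of Shogun (hypothetical names): the combined kernel evaluates to the weighted sum of its baseline kernels, with non-negative weights.

```python
import numpy as np

def combined_kernel_value(a, b, widths, weights):
    # Convex combination: sum_j w_j * k_j(a, b) with non-negative weights w_j
    vals = [np.exp(-np.sum((a - b) ** 2) / w) for w in widths]
    return float(np.dot(weights, vals))

widths = [0.1, 1.0, 10.0]   # baseline Gaussian kernel widths
weights = [0.2, 0.5, 0.3]   # e.g. learnt subkernel weights, summing to 1
a, b = np.array([0.0]), np.array([0.0])
print(combined_kernel_value(a, b, widths, weights))  # -> 1.0 at zero distance
```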


We can perform the test with the last learnt kernel. Since we enabled the train-test mode, this is automatically done on the held-out test data.

threshold = mmd.compute_threshold(alpha)
p_value = mmd.compute_p_value(statistic)

threshold = mmd.compute_threshold(alpha);
p_value = mmd.compute_p_value(statistic);

double threshold = mmd.compute_threshold(alpha);
double p_value = mmd.compute_p_value(statistic);

threshold = mmd.compute_threshold alpha
p_value = mmd.compute_p_value statistic

threshold <- mmd$compute_threshold(alpha)
p_value <- mmd$compute_p_value(statistic)

threshold = mmd:compute_threshold(alpha)
p_value = mmd:compute_p_value(statistic)

double threshold = mmd.compute_threshold(alpha);
double p_value = mmd.compute_p_value(statistic);

auto threshold = mmd->compute_threshold(alpha);
auto p_value = mmd->compute_p_value(statistic);
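These calls are cheap because, under $$H_0$$, the linear time statistic is asymptotically Gaussian, so the threshold is a normal quantile and the p-value a normal tail probability. A sketch of that logic (our own names; Shogun's internal estimate of the null standard deviation is not reproduced here, we simply pass one in):

```python
import math

def gaussian_null_test(statistic, null_std, alpha):
    # Under H0 the statistic is approximately N(0, null_std^2): the p-value
    # is the upper Gaussian tail, the threshold the matching quantile.
    p_value = 0.5 * math.erfc(statistic / (null_std * math.sqrt(2)))
    lo, hi = 0.0, 10.0  # bisect for the z with upper tail mass alpha
    for _ in range(80):
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > alpha:
            lo = mid
        else:
            hi = mid
    return null_std * (lo + hi) / 2, p_value

threshold, p_value = gaussian_null_test(statistic=0.05, null_std=0.02, alpha=0.05)
print(threshold, p_value)
```

H0 is rejected when the statistic exceeds the threshold, or equivalently when the p-value falls below $$\alpha$$.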


## References

 [GBR+12] A. Gretton, K.M. Borgwardt, M.J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723–773, 2012.