Quadratic Time MMD

The quadratic time MMD implements a nonparametric statistical hypothesis test to reject the null hypothesis that two distributions \(p\) and \(q\), observed only via \(n\) and \(m\) samples respectively, are the same, i.e. \(H_0:p=q\).

The (biased) test statistic is given by

\[\widehat{\mathrm{MMD}}_b^2 = \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n k(x_i,x_j) + \frac{1}{m^2}\sum_{i=1}^m\sum_{j=1}^m k(y_i,y_j) - \frac{2}{nm}\sum_{i=1}^n\sum_{j=1}^m k(x_i,y_j),\]

where \(x_i\sim p\) and \(y_j\sim q\).

See [GBR+12] for a detailed introduction.
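
For concreteness, here is a minimal NumPy sketch of this estimator, independent of the Shogun API; the Gaussian kernel form, the bandwidth sigma, and the toy data are illustrative assumptions.

import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # pairwise k(a, b) = exp(-||a - b||^2 / (2 * sigma^2))
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq_dists / (2 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    # 1/n^2 sum k(x_i,x_j) + 1/m^2 sum k(y_i,y_j) - 2/(nm) sum k(x_i,y_j)
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))  # n samples from p
Y = rng.normal(0.5, 1.0, size=(120, 2))  # m samples from q
print(mmd2_biased(X, Y))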

Example

Imagine we have samples from \(p\) and \(q\), given in the form of CDenseFeatures (here 64-bit floats, aka RealFeatures).

features_p = RealFeatures(f_features_p)
features_q = RealFeatures(f_features_q)
features_p = RealFeatures(f_features_p);
features_q = RealFeatures(f_features_q);
RealFeatures features_p = new RealFeatures(f_features_p);
RealFeatures features_q = new RealFeatures(f_features_q);
features_p = Modshogun::RealFeatures.new f_features_p
features_q = Modshogun::RealFeatures.new f_features_q
features_p <- RealFeatures(f_features_p)
features_q <- RealFeatures(f_features_q)
features_p = modshogun.RealFeatures(f_features_p)
features_q = modshogun.RealFeatures(f_features_q)
RealFeatures features_p = new RealFeatures(f_features_p);
RealFeatures features_q = new RealFeatures(f_features_q);
auto features_p = some<CDenseFeatures<float64_t>>(f_features_p);
auto features_q = some<CDenseFeatures<float64_t>>(f_features_q);

We create an instance of CQuadraticTimeMMD, passing it the data, and set the kernel.

mmd = QuadraticTimeMMD(features_p, features_q)
kernel = GaussianKernel(10, 1)
mmd.set_kernel(kernel)
alpha = 0.05
mmd = QuadraticTimeMMD(features_p, features_q);
kernel = GaussianKernel(10, 1);
mmd.set_kernel(kernel);
alpha = 0.05;
QuadraticTimeMMD mmd = new QuadraticTimeMMD(features_p, features_q);
GaussianKernel kernel = new GaussianKernel(10, 1);
mmd.set_kernel(kernel);
double alpha = 0.05;
mmd = Modshogun::QuadraticTimeMMD.new features_p, features_q
kernel = Modshogun::GaussianKernel.new 10, 1
mmd.set_kernel kernel
alpha = 0.05
mmd <- QuadraticTimeMMD(features_p, features_q)
kernel <- GaussianKernel(10, 1)
mmd$set_kernel(kernel)
alpha <- 0.05
mmd = modshogun.QuadraticTimeMMD(features_p, features_q)
kernel = modshogun.GaussianKernel(10, 1)
mmd:set_kernel(kernel)
alpha = 0.05
QuadraticTimeMMD mmd = new QuadraticTimeMMD(features_p, features_q);
GaussianKernel kernel = new GaussianKernel(10, 1);
mmd.set_kernel(kernel);
double alpha = 0.05;
auto mmd = some<CQuadraticTimeMMD>(features_p, features_q);
auto kernel = some<CGaussianKernel>(10, 1);
mmd->set_kernel(kernel);
auto alpha = 0.05;

There are multiple ways to compute the test statistic; see CQuadraticTimeMMD for details. The biased statistic is computed as

mmd.set_statistic_type(ST_BIASED_FULL)
statistic = mmd.compute_statistic()
mmd.set_statistic_type(ST_BIASED_FULL);
statistic = mmd.compute_statistic();
mmd.set_statistic_type(EStatisticType.ST_BIASED_FULL);
double statistic = mmd.compute_statistic();
mmd.set_statistic_type Modshogun::ST_BIASED_FULL
statistic = mmd.compute_statistic 
mmd$set_statistic_type("ST_BIASED_FULL")
statistic <- mmd$compute_statistic()
mmd:set_statistic_type(modshogun.ST_BIASED_FULL)
statistic = mmd:compute_statistic()
mmd.set_statistic_type(EStatisticType.ST_BIASED_FULL);
double statistic = mmd.compute_statistic();
mmd->set_statistic_type(EStatisticType::ST_BIASED_FULL);
auto statistic = mmd->compute_statistic();

There are multiple ways to perform the actual hypothesis test; see CQuadraticTimeMMD for details. The permutation version simulates from \(H_0\) by repeatedly permuting the pooled samples from \(p\) and \(q\). We can perform the test by computing a test threshold for a given \(\alpha\), or by directly computing a p-value.

mmd.set_null_approximation_method(NAM_PERMUTATION)
mmd.set_num_null_samples(200)
threshold = mmd.compute_threshold(alpha)
p_value = mmd.compute_p_value(statistic)
mmd.set_null_approximation_method(NAM_PERMUTATION);
mmd.set_num_null_samples(200);
threshold = mmd.compute_threshold(alpha);
p_value = mmd.compute_p_value(statistic);
mmd.set_null_approximation_method(ENullApproximationMethod.NAM_PERMUTATION);
mmd.set_num_null_samples(200);
double threshold = mmd.compute_threshold(alpha);
double p_value = mmd.compute_p_value(statistic);
mmd.set_null_approximation_method Modshogun::NAM_PERMUTATION
mmd.set_num_null_samples 200
threshold = mmd.compute_threshold alpha
p_value = mmd.compute_p_value statistic
mmd$set_null_approximation_method("NAM_PERMUTATION")
mmd$set_num_null_samples(200)
threshold <- mmd$compute_threshold(alpha)
p_value <- mmd$compute_p_value(statistic)
mmd:set_null_approximation_method(modshogun.NAM_PERMUTATION)
mmd:set_num_null_samples(200)
threshold = mmd:compute_threshold(alpha)
p_value = mmd:compute_p_value(statistic)
mmd.set_null_approximation_method(ENullApproximationMethod.NAM_PERMUTATION);
mmd.set_num_null_samples(200);
double threshold = mmd.compute_threshold(alpha);
double p_value = mmd.compute_p_value(statistic);
mmd->set_null_approximation_method(ENullApproximationMethod::NAM_PERMUTATION);
mmd->set_num_null_samples(200);
auto threshold = mmd->compute_threshold(alpha);
auto p_value = mmd->compute_p_value(statistic);
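
For intuition, the following NumPy sketch mimics the permutation approach under the same assumptions as the sketch above (it reuses mmd2_biased and is not the Shogun implementation): the samples are pooled, repeatedly re-split at random to simulate from \(H_0\), and the observed statistic is compared against the resulting null distribution.

import numpy as np

def permutation_test(X, Y, statistic, num_null_samples=200, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    pooled = np.vstack([X, Y])
    observed = statistic(X, Y)
    null_samples = np.empty(num_null_samples)
    for i in range(num_null_samples):
        perm = rng.permutation(len(pooled))  # random re-assignment of samples
        null_samples[i] = statistic(pooled[perm[:n]], pooled[perm[n:]])
    threshold = np.quantile(null_samples, 1 - alpha)  # reject H_0 if observed > threshold
    p_value = np.mean(null_samples >= observed)       # fraction of null at least as extreme
    return threshold, p_value

threshold, p_value = permutation_test(X, Y, mmd2_biased)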

Multiple kernels

It is possible to perform all operations (computing statistics, performing the test, etc.) for multiple kernels at once, via the CMultiKernelQuadraticTimeMMD interface.

mk = mmd.multikernel()
mk.add_kernel(kernel1)
mk.add_kernel(kernel2)
mk.add_kernel(kernel3)

mk_statistic = mk.compute_statistic()
mk_p_value = mk.compute_p_value()
mk = mmd.multikernel();
mk.add_kernel(kernel1);
mk.add_kernel(kernel2);
mk.add_kernel(kernel3);

mk_statistic = mk.compute_statistic();
mk_p_value = mk.compute_p_value();
MultiKernelQuadraticTimeMMD mk = mmd.multikernel();
mk.add_kernel(kernel1);
mk.add_kernel(kernel2);
mk.add_kernel(kernel3);

DoubleMatrix mk_statistic = mk.compute_statistic();
DoubleMatrix mk_p_value = mk.compute_p_value();
mk = mmd.multikernel 
mk.add_kernel kernel1
mk.add_kernel kernel2
mk.add_kernel kernel3

mk_statistic = mk.compute_statistic 
mk_p_value = mk.compute_p_value 
mk <- mmd$multikernel()
mk$add_kernel(kernel1)
mk$add_kernel(kernel2)
mk$add_kernel(kernel3)

mk_statistic <- mk$compute_statistic()
mk_p_value <- mk$compute_p_value()
mk = mmd:multikernel()
mk:add_kernel(kernel1)
mk:add_kernel(kernel2)
mk:add_kernel(kernel3)

mk_statistic = mk:compute_statistic()
mk_p_value = mk:compute_p_value()
MultiKernelQuadraticTimeMMD mk = mmd.multikernel();
mk.add_kernel(kernel1);
mk.add_kernel(kernel2);
mk.add_kernel(kernel3);

double[] mk_statistic = mk.compute_statistic();
double[] mk_p_value = mk.compute_p_value();
auto mk = mmd->multikernel();
mk->add_kernel(kernel1);
mk->add_kernel(kernel2);
mk->add_kernel(kernel3);

auto mk_statistic = mk->compute_statistic();
auto mk_p_value = mk->compute_p_value();

Note that the results are now vectors with one entry per kernel. Also note that the kernels for the single-kernel and multi-kernel interfaces are kept separately.
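
A rough NumPy analogue of this interface (again an illustrative sketch, not the Shogun implementation) computes one biased statistic per candidate Gaussian width, reusing the pooled pairwise squared distances across all widths:

import numpy as np

def mmd2_biased_per_width(X, Y, sigmas):
    Z = np.vstack([X, Y])
    sq = np.sum(Z**2, 1)
    D = sq[:, None] + sq[None, :] - 2 * Z @ Z.T  # pooled squared distances
    n = len(X)
    stats = []
    for sigma in sigmas:
        K = np.exp(-D / (2 * sigma**2))
        Kxx, Kyy, Kxy = K[:n, :n], K[n:, n:], K[:n, n:]
        stats.append(Kxx.mean() + Kyy.mean() - 2 * Kxy.mean())
    return np.array(stats)  # one entry per kernel, matching the note above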

Kernel learning

There are various options to learn a kernel. All options allow learning a single kernel among a number of provided baseline kernels. Furthermore, some of these criteria can be used to learn the coefficients of a convex combination of baseline kernels.

We specify the desired baseline kernels to consider. Note that the kernel set above is not considered in the selection.

kernel1 = GaussianKernel(10, 0.1)
kernel2 = GaussianKernel(10, 1)
kernel3 = GaussianKernel(10, 10)
mmd.add_kernel(kernel1)
mmd.add_kernel(kernel2)
mmd.add_kernel(kernel3)
kernel1 = GaussianKernel(10, 0.1);
kernel2 = GaussianKernel(10, 1);
kernel3 = GaussianKernel(10, 10);
mmd.add_kernel(kernel1);
mmd.add_kernel(kernel2);
mmd.add_kernel(kernel3);
GaussianKernel kernel1 = new GaussianKernel(10, 0.1);
GaussianKernel kernel2 = new GaussianKernel(10, 1);
GaussianKernel kernel3 = new GaussianKernel(10, 10);
mmd.add_kernel(kernel1);
mmd.add_kernel(kernel2);
mmd.add_kernel(kernel3);
kernel1 = Modshogun::GaussianKernel.new 10, 0.1
kernel2 = Modshogun::GaussianKernel.new 10, 1
kernel3 = Modshogun::GaussianKernel.new 10, 10
mmd.add_kernel kernel1
mmd.add_kernel kernel2
mmd.add_kernel kernel3
kernel1 <- GaussianKernel(10, 0.1)
kernel2 <- GaussianKernel(10, 1)
kernel3 <- GaussianKernel(10, 10)
mmd$add_kernel(kernel1)
mmd$add_kernel(kernel2)
mmd$add_kernel(kernel3)
kernel1 = modshogun.GaussianKernel(10, 0.1)
kernel2 = modshogun.GaussianKernel(10, 1)
kernel3 = modshogun.GaussianKernel(10, 10)
mmd:add_kernel(kernel1)
mmd:add_kernel(kernel2)
mmd:add_kernel(kernel3)
GaussianKernel kernel1 = new GaussianKernel(10, 0.1);
GaussianKernel kernel2 = new GaussianKernel(10, 1);
GaussianKernel kernel3 = new GaussianKernel(10, 10);
mmd.add_kernel(kernel1);
mmd.add_kernel(kernel2);
mmd.add_kernel(kernel3);
auto kernel1 = some<CGaussianKernel>(10, 0.1);
auto kernel2 = some<CGaussianKernel>(10, 1);
auto kernel3 = some<CGaussianKernel>(10, 10);
mmd->add_kernel(kernel1);
mmd->add_kernel(kernel2);
mmd->add_kernel(kernel3);

IMPORTANT: when learning the kernel for statistical testing, this needs to be done on different data than is used for the actual test. One way to accomplish this is to manually provide a different set of features for testing. It is also possible to automatically split the provided data by specifying the ratio between train and test data, by enabling the train-test mode.

mmd.set_train_test_mode(True)
mmd.set_train_test_ratio(1)
mmd.set_train_test_mode(true);
mmd.set_train_test_ratio(1);
mmd.set_train_test_mode(true);
mmd.set_train_test_ratio(1);
mmd.set_train_test_mode true
mmd.set_train_test_ratio 1
mmd$set_train_test_mode(TRUE)
mmd$set_train_test_ratio(1)
mmd:set_train_test_mode(True)
mmd:set_train_test_ratio(1)
mmd.set_train_test_mode(true);
mmd.set_train_test_ratio(1);
mmd->set_train_test_mode(true);
mmd->set_train_test_ratio(1);

A ratio of 1 means the data is split in half: the kernel is learned on the first half, and subsequent tests are performed on the second half.
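
As an illustration of the arithmetic (not the Shogun implementation), a train-test ratio \(r\) assigns a fraction \(r/(r+1)\) of the samples to training:

num_samples = 200
ratio = 1  # train:test ratio, as set above
num_train = int(num_samples * ratio / (ratio + 1))  # 100 samples to learn the kernel
num_test = num_samples - num_train                  # 100 samples for the actual test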

We learn the kernel and extract the result. Note that the kernel of the MMD instance itself is replaced. If all kernels have the same type, we can convert the result into that type, for example to extract its parameters.

num_runs = 1
num_folds = 3
mmd.set_kernel_selection_strategy(KSM_CROSS_VALIDATION, num_runs, num_folds, alpha)
mmd.select_kernel()
learnt_kernel_single = GaussianKernel.obtain_from_generic(mmd.get_kernel())
width = learnt_kernel_single.get_width()
num_runs = 1;
num_folds = 3;
mmd.set_kernel_selection_strategy(KSM_CROSS_VALIDATION, num_runs, num_folds, alpha);
mmd.select_kernel();
learnt_kernel_single = GaussianKernel.obtain_from_generic(mmd.get_kernel());
width = learnt_kernel_single.get_width();
int num_runs = 1;
int num_folds = 3;
mmd.set_kernel_selection_strategy(EKernelSelectionMethod.KSM_CROSS_VALIDATION, num_runs, num_folds, alpha);
mmd.select_kernel();
GaussianKernel learnt_kernel_single = GaussianKernel.obtain_from_generic(mmd.get_kernel());
double width = learnt_kernel_single.get_width();
num_runs = 1
num_folds = 3
mmd.set_kernel_selection_strategy Modshogun::KSM_CROSS_VALIDATION, num_runs, num_folds, alpha
mmd.select_kernel 
learnt_kernel_single = Modshogun::GaussianKernel.obtain_from_generic mmd.get_kernel 
width = learnt_kernel_single.get_width 
num_runs <- 1
num_folds <- 3
mmd$set_kernel_selection_strategy("KSM_CROSS_VALIDATION", num_runs, num_folds, alpha)
mmd$select_kernel()
learnt_kernel_single <- GaussianKernel$obtain_from_generic(mmd$get_kernel())
width <- learnt_kernel_single$get_width()
num_runs = 1
num_folds = 3
mmd:set_kernel_selection_strategy(modshogun.KSM_CROSS_VALIDATION, num_runs, num_folds, alpha)
mmd:select_kernel()
learnt_kernel_single = GaussianKernel:obtain_from_generic(mmd:get_kernel())
width = learnt_kernel_single:get_width()
int num_runs = 1;
int num_folds = 3;
mmd.set_kernel_selection_strategy(EKernelSelectionMethod.KSM_CROSS_VALIDATION, num_runs, num_folds, alpha);
mmd.select_kernel();
GaussianKernel learnt_kernel_single = GaussianKernel.obtain_from_generic(mmd.get_kernel());
double width = learnt_kernel_single.get_width();
auto num_runs = 1;
auto num_folds = 3;
mmd->set_kernel_selection_strategy(EKernelSelectionMethod::KSM_CROSS_VALIDATION, num_runs, num_folds, alpha);
mmd->select_kernel();
auto learnt_kernel_single = CGaussianKernel::obtain_from_generic(mmd->get_kernel());
auto width = learnt_kernel_single->get_width();

Note that in order to extract particular kernel parameters, we need to cast the kernel to its actual type.
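
For intuition, here is a hedged sketch of the simplest selection criterion: pick the baseline width whose biased statistic on the training data is largest. It reuses mmd2_biased from the first sketch and mirrors the spirit of maximize-MMD selection for a single kernel, not Shogun's actual implementation.

def select_width_max_mmd(X_train, Y_train, widths):
    # evaluate each candidate width on training data and keep the maximizer
    return max(widths, key=lambda sigma: mmd2_biased(X_train, Y_train, sigma))

best_width = select_width_max_mmd(X[:50], Y[:60], [0.1, 1.0, 10.0])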

Similarly, a convex combination of kernels, in the form of a CCombinedKernel, can be learned and extracted as

mmd.set_kernel_selection_strategy(KSM_MAXIMIZE_MMD, True)
mmd.select_kernel()
learnt_kernel_combined = CombinedKernel.obtain_from_generic(mmd.get_kernel())
weights = learnt_kernel_combined.get_subkernel_weights()
mmd.set_kernel_selection_strategy(KSM_MAXIMIZE_MMD, true);
mmd.select_kernel();
learnt_kernel_combined = CombinedKernel.obtain_from_generic(mmd.get_kernel());
weights = learnt_kernel_combined.get_subkernel_weights();
mmd.set_kernel_selection_strategy(EKernelSelectionMethod.KSM_MAXIMIZE_MMD, true);
mmd.select_kernel();
CombinedKernel learnt_kernel_combined = CombinedKernel.obtain_from_generic(mmd.get_kernel());
DoubleMatrix weights = learnt_kernel_combined.get_subkernel_weights();
mmd.set_kernel_selection_strategy Modshogun::KSM_MAXIMIZE_MMD, true
mmd.select_kernel 
learnt_kernel_combined = Modshogun::CombinedKernel.obtain_from_generic mmd.get_kernel 
weights = learnt_kernel_combined.get_subkernel_weights 
mmd$set_kernel_selection_strategy("KSM_MAXIMIZE_MMD", TRUE)
mmd$select_kernel()
learnt_kernel_combined <- CombinedKernel$obtain_from_generic(mmd$get_kernel())
weights <- learnt_kernel_combined$get_subkernel_weights()
mmd:set_kernel_selection_strategy(modshogun.KSM_MAXIMIZE_MMD, True)
mmd:select_kernel()
learnt_kernel_combined = CombinedKernel:obtain_from_generic(mmd:get_kernel())
weights = learnt_kernel_combined:get_subkernel_weights()
mmd.set_kernel_selection_strategy(EKernelSelectionMethod.KSM_MAXIMIZE_MMD, true);
mmd.select_kernel();
CombinedKernel learnt_kernel_combined = CombinedKernel.obtain_from_generic(mmd.get_kernel());
double[] weights = learnt_kernel_combined.get_subkernel_weights();
mmd->set_kernel_selection_strategy(EKernelSelectionMethod::KSM_MAXIMIZE_MMD, true);
mmd->select_kernel();
auto learnt_kernel_combined = CCombinedKernel::obtain_from_generic(mmd->get_kernel());
auto weights = learnt_kernel_combined->get_subkernel_weights();
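
Because the biased statistic is linear in the kernel, the statistic under a convex combination \(k=\sum_i w_i k_i\) is just the weighted sum of the per-kernel statistics. A small sketch using the helpers from the sketches above (the weights here are hypothetical, not learned):

import numpy as np

weights = np.array([0.2, 0.5, 0.3])  # hypothetical convex weights, sum to 1
per_kernel = mmd2_biased_per_width(X, Y, [0.1, 1.0, 10.0])
combined_stat = float(weights @ per_kernel)  # statistic under the combined kernel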

We can perform the test with the last learnt kernel. Since we enabled the train-test mode, this is automatically done on the held-out test data.

statistic_optimized = mmd.compute_statistic()
p_value_optimized = mmd.compute_p_value(statistic_optimized)
statistic_optimized = mmd.compute_statistic();
p_value_optimized = mmd.compute_p_value(statistic_optimized);
double statistic_optimized = mmd.compute_statistic();
double p_value_optimized = mmd.compute_p_value(statistic_optimized);
statistic_optimized = mmd.compute_statistic 
p_value_optimized = mmd.compute_p_value statistic_optimized
statistic_optimized <- mmd$compute_statistic()
p_value_optimized <- mmd$compute_p_value(statistic_optimized)
statistic_optimized = mmd:compute_statistic()
p_value_optimized = mmd:compute_p_value(statistic_optimized)
double statistic_optimized = mmd.compute_statistic();
double p_value_optimized = mmd.compute_p_value(statistic_optimized);
auto statistic_optimized = mmd->compute_statistic();
auto p_value_optimized = mmd->compute_p_value(statistic_optimized);

References

[GBR+12] A. Gretton, K.M. Borgwardt, M.J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723–773, 2012.

Wikipedia: Statistical_hypothesis_testing